+
+## Contacts (Maintainers)
+
+* Liang-Chieh Chen, github: [aquariusjay](https://github.com/aquariusjay)
+* YuKun Zhu, github: [yknzhu](https://github.com/YknZhu)
+* George Papandreou, github: [gpapan](https://github.com/gpapan)
+* Hui Hui, github: [huihui-personal](https://github.com/huihui-personal)
+
+## Table of Contents
+
+Demo:
+
+* Colab notebook for off-the-shelf inference.
+
+Running:
+
+* Installation.
+* Running DeepLab on PASCAL VOC 2012 semantic segmentation dataset.
+* Running DeepLab on Cityscapes semantic segmentation dataset.
+* Running DeepLab on ADE20K semantic segmentation dataset.
+
+Models:
+
+* Checkpoints and frozen inference graphs.
+
+Misc:
+
+* Please check the FAQ if you have questions before reporting an issue.
+
+## Getting Help
+
+To get help with issues you may encounter while using the DeepLab TensorFlow
+implementation, create a new question on
+[StackOverflow](https://stackoverflow.com/) with the tag "tensorflow".
+
+Please report bugs (i.e., broken code, not usage questions) to the
+tensorflow/models GitHub [issue
+tracker](https://github.com/tensorflow/models/issues), prefixing the issue name
+with "deeplab".
+
+## Change Logs
+
+### October 1, 2018
+
+Released MobileNet-v2 (depth multiplier = 0.5) COCO-pretrained checkpoints on
+PASCAL VOC 2012, and an Xception-65 COCO-pretrained checkpoint (i.e., not
+trained on PASCAL).
+
+
+### September 5, 2018
+
+Released Cityscapes pretrained checkpoints trained with the best-found dense
+prediction cell.
+
+
+### May 26, 2018
+
+Updated ADE20K pretrained checkpoint.
+
+
+### May 18, 2018
+1. Added builders for ResNet-v1 and Xception model variants.
+1. Added ADE20K support, including colormap and pretrained Xception_65 checkpoint.
+1. Fixed a bug when using a non-default depth_multiplier for MobileNet-v2.
+
+
+### March 22, 2018
+
+Released checkpoints using MobileNet-V2 as the network backbone, pretrained on
+PASCAL VOC 2012 and Cityscapes.
+
+
+### March 5, 2018
+
+First release of DeepLab in TensorFlow, including the deeper Xception network
+backbone. Included checkpoints that have been pretrained on PASCAL VOC 2012
+and Cityscapes.
+
+## References
+
+1. **Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs**
+ Liang-Chieh Chen+, George Papandreou+, Iasonas Kokkinos, Kevin Murphy, Alan L. Yuille (+ equal
+ contribution).
+ [[link]](https://arxiv.org/abs/1412.7062). In ICLR, 2015.
+
+2. **DeepLab: Semantic Image Segmentation with Deep Convolutional Nets,**
+ **Atrous Convolution, and Fully Connected CRFs**
+   Liang-Chieh Chen+, George Papandreou+, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille (+ equal
+ contribution).
+ [[link]](http://arxiv.org/abs/1606.00915). TPAMI 2017.
+
+3. **Rethinking Atrous Convolution for Semantic Image Segmentation**
+ Liang-Chieh Chen, George Papandreou, Florian Schroff, Hartwig Adam.
+ [[link]](http://arxiv.org/abs/1706.05587). arXiv: 1706.05587, 2017.
+
+4. **Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation**
+ Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, Hartwig Adam.
+ [[link]](https://arxiv.org/abs/1802.02611). In ECCV, 2018.
+
+5. **ParseNet: Looking Wider to See Better**
+ Wei Liu, Andrew Rabinovich, Alexander C Berg
+ [[link]](https://arxiv.org/abs/1506.04579). arXiv:1506.04579, 2015.
+
+6. **Pyramid Scene Parsing Network**
+ Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, Jiaya Jia
+ [[link]](https://arxiv.org/abs/1612.01105). In CVPR, 2017.
+
+7. **Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift**
+ Sergey Ioffe, Christian Szegedy
+ [[link]](https://arxiv.org/abs/1502.03167). In ICML, 2015.
+
+8. **MobileNetV2: Inverted Residuals and Linear Bottlenecks**
+ Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen
+ [[link]](https://arxiv.org/abs/1801.04381). In CVPR, 2018.
+
+9. **Xception: Deep Learning with Depthwise Separable Convolutions**
+ François Chollet
+ [[link]](https://arxiv.org/abs/1610.02357). In CVPR, 2017.
+
+10. **Deformable Convolutional Networks -- COCO Detection and Segmentation Challenge 2017 Entry**
+ Haozhi Qi, Zheng Zhang, Bin Xiao, Han Hu, Bowen Cheng, Yichen Wei, Jifeng Dai
+ [[link]](http://presentations.cocodataset.org/COCO17-Detect-MSRA.pdf). ICCV COCO Challenge
+ Workshop, 2017.
+
+11. **Tensorflow: Large-Scale Machine Learning on Heterogeneous Distributed Systems**
+ M. Abadi, A. Agarwal, et al.
+ [[link]](https://arxiv.org/abs/1603.04467). arXiv:1603.04467, 2016.
+
+12. **The Pascal Visual Object Classes Challenge: A Retrospective**
+    Mark Everingham, S. M. Ali Eslami, Luc Van Gool, Christopher K. I. Williams, John
+    Winn, and Andrew Zisserman.
+ [[link]](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/). IJCV, 2014.
+
+13. **The Cityscapes Dataset for Semantic Urban Scene Understanding**
+ Cordts, Marius, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, Bernt Schiele.
+ [[link]](https://www.cityscapes-dataset.com/). In CVPR, 2016.
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/common.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/common.py"
new file mode 100644
index 00000000..83b73f07
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/common.py"
@@ -0,0 +1,173 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Provides flags that are common to scripts.
+
+Common flags from train/eval/vis/export_model.py are collected in this script.
+"""
+import collections
+import copy
+import json
+
+import tensorflow as tf
+
+flags = tf.app.flags
+
+# Flags for input preprocessing.
+
+flags.DEFINE_integer('min_resize_value', None,
+ 'Desired size of the smaller image side.')
+
+flags.DEFINE_integer('max_resize_value', None,
+ 'Maximum allowed size of the larger image side.')
+
+flags.DEFINE_integer('resize_factor', None,
+                     'Resized dimensions are multiples of factor plus one.')
+
+# Model dependent flags.
+
+flags.DEFINE_integer('logits_kernel_size', 1,
+ 'The kernel size for the convolutional kernel that '
+ 'generates logits.')
+
+# When using 'mobilenet_v2', we set atrous_rates = decoder_output_stride = None.
+# When using 'xception_65' or 'resnet_v1' model variants, we set
+# atrous_rates = [6, 12, 18] (output stride 16) and decoder_output_stride = 4.
+# See core/feature_extractor.py for supported model variants.
+flags.DEFINE_string('model_variant', 'mobilenet_v2', 'DeepLab model variant.')
+
+flags.DEFINE_multi_float('image_pyramid', None,
+ 'Input scales for multi-scale feature extraction.')
+
+flags.DEFINE_boolean('add_image_level_feature', True,
+ 'Add image level feature.')
+
+flags.DEFINE_multi_integer(
+ 'image_pooling_crop_size', None,
+ 'Image pooling crop size [height, width] used in the ASPP module. When '
+    'value is None, the model performs image pooling with "crop_size". This '
+    'flag is useful when one wants to use different image pooling sizes.')
+
+flags.DEFINE_boolean('aspp_with_batch_norm', True,
+ 'Use batch norm parameters for ASPP or not.')
+
+flags.DEFINE_boolean('aspp_with_separable_conv', True,
+ 'Use separable convolution for ASPP or not.')
+
+# Defaults to None. Set multi_grid = [1, 2, 4] when using provided
+# 'resnet_v1_{50,101}_beta' checkpoints.
+flags.DEFINE_multi_integer('multi_grid', None,
+ 'Employ a hierarchy of atrous rates for ResNet.')
+
+flags.DEFINE_float('depth_multiplier', 1.0,
+ 'Multiplier for the depth (number of channels) for all '
+ 'convolution ops used in MobileNet.')
+
+# For `xception_65`, use decoder_output_stride = 4. For `mobilenet_v2`, use
+# decoder_output_stride = None.
+flags.DEFINE_integer('decoder_output_stride', None,
+ 'The ratio of input to output spatial resolution when '
+ 'employing decoder to refine segmentation results.')
+
+flags.DEFINE_boolean('decoder_use_separable_conv', True,
+ 'Employ separable convolution for decoder or not.')
+
+flags.DEFINE_enum('merge_method', 'max', ['max', 'avg'],
+ 'Scheme to merge multi scale features.')
+
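+# The JSON file is expected to contain a list of per-branch dicts, e.g. (a
+# sketch mirroring core/dense_prediction_cell_branch5_top1_cityscapes.json):
+#   [{"op": "conv", "kernel": 3, "rate": [1, 6], "input": -1}, ...]
+# where "input" = -1 denotes the cell's input features and a non-negative
+# value indexes a previously built branch.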
+flags.DEFINE_string(
+ 'dense_prediction_cell_json',
+ '',
+ 'A JSON file that specifies the dense prediction cell.')
+
+FLAGS = flags.FLAGS
+
+# Constants
+
+# Perform semantic segmentation predictions.
+OUTPUT_TYPE = 'semantic'
+
+# Semantic segmentation item names.
+LABELS_CLASS = 'labels_class'
+IMAGE = 'image'
+HEIGHT = 'height'
+WIDTH = 'width'
+IMAGE_NAME = 'image_name'
+LABEL = 'label'
+ORIGINAL_IMAGE = 'original_image'
+
+# Test set name.
+TEST_SET = 'test'
+
+
+class ModelOptions(
+ collections.namedtuple('ModelOptions', [
+ 'outputs_to_num_classes',
+ 'crop_size',
+ 'atrous_rates',
+ 'output_stride',
+ 'merge_method',
+ 'add_image_level_feature',
+ 'image_pooling_crop_size',
+ 'aspp_with_batch_norm',
+ 'aspp_with_separable_conv',
+ 'multi_grid',
+ 'decoder_output_stride',
+ 'decoder_use_separable_conv',
+ 'logits_kernel_size',
+ 'model_variant',
+ 'depth_multiplier',
+ 'dense_prediction_cell_config',
+ ])):
+ """Immutable class to hold model options."""
+
+ __slots__ = ()
+
+ def __new__(cls,
+ outputs_to_num_classes,
+ crop_size=None,
+ atrous_rates=None,
+ output_stride=8):
+ """Constructor to set default values.
+
+ Args:
+ outputs_to_num_classes: A dictionary from output type to the number of
+ classes. For example, for the task of semantic segmentation with 21
+ semantic classes, we would have outputs_to_num_classes['semantic'] = 21.
+ crop_size: A tuple [crop_height, crop_width].
+ atrous_rates: A list of atrous convolution rates for ASPP.
+ output_stride: The ratio of input to output spatial resolution.
+
+ Returns:
+ A new ModelOptions instance.
+ """
+ dense_prediction_cell_config = None
+ if FLAGS.dense_prediction_cell_json:
+ with tf.gfile.Open(FLAGS.dense_prediction_cell_json, 'r') as f:
+ dense_prediction_cell_config = json.load(f)
+
+ return super(ModelOptions, cls).__new__(
+ cls, outputs_to_num_classes, crop_size, atrous_rates, output_stride,
+ FLAGS.merge_method, FLAGS.add_image_level_feature,
+ FLAGS.image_pooling_crop_size, FLAGS.aspp_with_batch_norm,
+ FLAGS.aspp_with_separable_conv, FLAGS.multi_grid,
+ FLAGS.decoder_output_stride, FLAGS.decoder_use_separable_conv,
+ FLAGS.logits_kernel_size, FLAGS.model_variant, FLAGS.depth_multiplier,
+ dense_prediction_cell_config)
+
+ def __deepcopy__(self, memo):
+ return ModelOptions(copy.deepcopy(self.outputs_to_num_classes),
+ self.crop_size,
+ self.atrous_rates,
+ self.output_stride)
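+
+
+# A hedged usage sketch (the 21-class / 513x513 numbers below are illustrative
+# PASCAL-style values, mirroring common_test.py and the comments above):
+#   model_options = ModelOptions(
+#       outputs_to_num_classes={OUTPUT_TYPE: 21},
+#       crop_size=[513, 513],
+#       atrous_rates=[6, 12, 18],
+#       output_stride=16)
+# The remaining fields (merge_method, decoder_output_stride, ...) are filled
+# in from the command-line flags defined at the top of this file.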
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/common_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/common_test.py"
new file mode 100644
index 00000000..45b64e50
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/common_test.py"
@@ -0,0 +1,52 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for common.py."""
+import copy
+
+import tensorflow as tf
+
+from deeplab import common
+
+
+class CommonTest(tf.test.TestCase):
+
+ def testOutputsToNumClasses(self):
+ num_classes = 21
+ model_options = common.ModelOptions(
+ outputs_to_num_classes={common.OUTPUT_TYPE: num_classes})
+ self.assertEqual(model_options.outputs_to_num_classes[common.OUTPUT_TYPE],
+ num_classes)
+
+ def testDeepcopy(self):
+ num_classes = 21
+ model_options = common.ModelOptions(
+ outputs_to_num_classes={common.OUTPUT_TYPE: num_classes})
+ model_options_new = copy.deepcopy(model_options)
+ self.assertEqual((model_options_new.
+ outputs_to_num_classes[common.OUTPUT_TYPE]),
+ num_classes)
+
+ num_classes_new = 22
+ model_options_new.outputs_to_num_classes[common.OUTPUT_TYPE] = (
+ num_classes_new)
+ self.assertEqual(model_options.outputs_to_num_classes[common.OUTPUT_TYPE],
+ num_classes)
+ self.assertEqual((model_options_new.
+ outputs_to_num_classes[common.OUTPUT_TYPE]),
+ num_classes_new)
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/dense_prediction_cell.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/dense_prediction_cell.py"
new file mode 100644
index 00000000..c1498586
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/dense_prediction_cell.py"
@@ -0,0 +1,288 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Dense Prediction Cell class that can be evolved in semantic segmentation.
+
+DensePredictionCell is used as a `layer` in semantic segmentation whose
+architecture is determined by the `config`, a dictionary specifying
+the architecture.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from deeplab.core import utils
+
+slim = tf.contrib.slim
+
+# Local constants.
+_META_ARCHITECTURE_SCOPE = 'meta_architecture'
+_CONCAT_PROJECTION_SCOPE = 'concat_projection'
+_OP = 'op'
+_CONV = 'conv'
+_PYRAMID_POOLING = 'pyramid_pooling'
+_KERNEL = 'kernel'
+_RATE = 'rate'
+_GRID_SIZE = 'grid_size'
+_TARGET_SIZE = 'target_size'
+_INPUT = 'input'
+
+
+def dense_prediction_cell_hparams():
+ """DensePredictionCell HParams.
+
+ Returns:
+ A dictionary of hyper-parameters used for dense prediction cell with keys:
+ - reduction_size: Integer, the number of output filters for each operation
+ inside the cell.
+ - dropout_on_concat_features: Boolean, apply dropout on the concatenated
+ features or not.
+ - dropout_on_projection_features: Boolean, apply dropout on the projection
+ features or not.
+    - dropout_keep_prob: Float, when `dropout_on_concat_features` or
+      `dropout_on_projection_features` is True, the `keep_prob` value used
+ in the dropout operation.
+ - concat_channels: Integer, the concatenated features will be
+ channel-reduced to `concat_channels` channels.
+ - conv_rate_multiplier: Integer, used to multiply the convolution rates.
+      This is useful when the output_stride is changed from 16 to 8, in which
+      case the convolution rates need to be doubled correspondingly.
+ """
+ return {
+ 'reduction_size': 256,
+ 'dropout_on_concat_features': True,
+ 'dropout_on_projection_features': False,
+ 'dropout_keep_prob': 0.9,
+ 'concat_channels': 256,
+ 'conv_rate_multiplier': 1,
+ }
+
+
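+# Example (a sketch following dense_prediction_cell_test.py): hyper-parameters
+# can be partially overridden at construction time, e.g.
+#   cell = DensePredictionCell(config, hparams={'conv_rate_multiplier': 2})
+# which doubles the convolution rates, useful when features are extracted at
+# output_stride = 8 instead of 16.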
+class DensePredictionCell(object):
+ """DensePredictionCell class used as a 'layer' in semantic segmentation."""
+
+ def __init__(self, config, hparams=None):
+ """Initializes the dense prediction cell.
+
+ Args:
+ config: A dictionary storing the architecture of a dense prediction cell.
+ hparams: A dictionary of hyper-parameters, provided by users. This
+ dictionary will be used to update the default dictionary returned by
+ dense_prediction_cell_hparams().
+
+ Raises:
+ ValueError: If `conv_rate_multiplier` has value < 1.
+ """
+ self.hparams = dense_prediction_cell_hparams()
+ if hparams is not None:
+ self.hparams.update(hparams)
+ self.config = config
+
+ # Check values in hparams are valid or not.
+ if self.hparams['conv_rate_multiplier'] < 1:
+ raise ValueError('conv_rate_multiplier cannot have value < 1.')
+
+ def _get_pyramid_pooling_arguments(
+ self, crop_size, output_stride, image_grid, image_pooling_crop_size=None):
+ """Gets arguments for pyramid pooling.
+
+ Args:
+ crop_size: A list of two integers, [crop_height, crop_width] specifying
+ whole patch crop size.
+ output_stride: Integer, output stride value for extracted features.
+ image_grid: A list of two integers, [image_grid_height, image_grid_width],
+ specifying the grid size of how the pyramid pooling will be performed.
+ image_pooling_crop_size: A list of two integers, [crop_height, crop_width]
+ specifying the crop size for image pooling operations. Note that we
+ decouple whole patch crop_size and image_pooling_crop_size as one could
+ perform the image_pooling with different crop sizes.
+
+ Returns:
+ A list of (resize_value, pooled_kernel)
+ """
+ resize_height = utils.scale_dimension(crop_size[0], 1. / output_stride)
+ resize_width = utils.scale_dimension(crop_size[1], 1. / output_stride)
+ # If image_pooling_crop_size is not specified, use crop_size.
+ if image_pooling_crop_size is None:
+ image_pooling_crop_size = crop_size
+ pooled_height = utils.scale_dimension(
+ image_pooling_crop_size[0], 1. / (output_stride * image_grid[0]))
+ pooled_width = utils.scale_dimension(
+ image_pooling_crop_size[1], 1. / (output_stride * image_grid[1]))
+ return ([resize_height, resize_width], [pooled_height, pooled_width])
+
+ def _parse_operation(self, config, crop_size, output_stride,
+ image_pooling_crop_size=None):
+ """Parses one operation.
+
+ When 'operation' is 'pyramid_pooling', we compute the required
+ hyper-parameters and save in config.
+
+ Args:
+ config: A dictionary storing required hyper-parameters for one
+ operation.
+ crop_size: A list of two integers, [crop_height, crop_width] specifying
+ whole patch crop size.
+ output_stride: Integer, output stride value for extracted features.
+ image_pooling_crop_size: A list of two integers, [crop_height, crop_width]
+ specifying the crop size for image pooling operations. Note that we
+ decouple whole patch crop_size and image_pooling_crop_size as one could
+ perform the image_pooling with different crop sizes.
+
+ Returns:
+      A dictionary storing the related information for the operation.
+ """
+ if config[_OP] == _PYRAMID_POOLING:
+ (config[_TARGET_SIZE],
+ config[_KERNEL]) = self._get_pyramid_pooling_arguments(
+ crop_size=crop_size,
+ output_stride=output_stride,
+ image_grid=config[_GRID_SIZE],
+ image_pooling_crop_size=image_pooling_crop_size)
+
+ return config
+
+ def build_cell(self,
+ features,
+ output_stride=16,
+ crop_size=None,
+ image_pooling_crop_size=None,
+ weight_decay=0.00004,
+ reuse=None,
+ is_training=False,
+ fine_tune_batch_norm=False,
+ scope=None):
+ """Builds the dense prediction cell based on the config.
+
+ Args:
+ features: Input feature map of size [batch, height, width, channels].
+ output_stride: Int, output stride at which the features were extracted.
+ crop_size: A list [crop_height, crop_width], determining the input
+ features resolution.
+ image_pooling_crop_size: A list of two integers, [crop_height, crop_width]
+ specifying the crop size for image pooling operations. Note that we
+ decouple whole patch crop_size and image_pooling_crop_size as one could
+ perform the image_pooling with different crop sizes.
+ weight_decay: Float, the weight decay for model variables.
+ reuse: Reuse the model variables or not.
+ is_training: Boolean, is training or not.
+ fine_tune_batch_norm: Boolean, fine-tuning batch norm parameters or not.
+ scope: Optional string, specifying the variable scope.
+
+ Returns:
+ Features after passing through the constructed dense prediction cell with
+ shape = [batch, height, width, channels] where channels are determined
+ by `reduction_size` returned by dense_prediction_cell_hparams().
+
+ Raises:
+      ValueError: If a convolution has kernel size other than 1x1 or 3x3, or
+        the operation is not recognized.
+ """
+ batch_norm_params = {
+ 'is_training': is_training and fine_tune_batch_norm,
+ 'decay': 0.9997,
+ 'epsilon': 1e-5,
+ 'scale': True,
+ }
+ hparams = self.hparams
+ with slim.arg_scope(
+ [slim.conv2d, slim.separable_conv2d],
+ weights_regularizer=slim.l2_regularizer(weight_decay),
+ activation_fn=tf.nn.relu,
+ normalizer_fn=slim.batch_norm,
+ padding='SAME',
+ stride=1,
+ reuse=reuse):
+ with slim.arg_scope([slim.batch_norm], **batch_norm_params):
+ with tf.variable_scope(scope, _META_ARCHITECTURE_SCOPE, [features]):
+ depth = hparams['reduction_size']
+ branch_logits = []
+ for i, current_config in enumerate(self.config):
+ scope = 'branch%d' % i
+ current_config = self._parse_operation(
+ config=current_config,
+ crop_size=crop_size,
+ output_stride=output_stride,
+ image_pooling_crop_size=image_pooling_crop_size)
+ tf.logging.info(current_config)
+ if current_config[_INPUT] < 0:
+ operation_input = features
+ else:
+ operation_input = branch_logits[current_config[_INPUT]]
+ if current_config[_OP] == _CONV:
+ if current_config[_KERNEL] == [1, 1] or current_config[
+ _KERNEL] == 1:
+ branch_logits.append(
+ slim.conv2d(operation_input, depth, 1, scope=scope))
+ else:
+ conv_rate = [r * hparams['conv_rate_multiplier']
+ for r in current_config[_RATE]]
+ branch_logits.append(
+ utils.split_separable_conv2d(
+ operation_input,
+ filters=depth,
+ kernel_size=current_config[_KERNEL],
+ rate=conv_rate,
+ weight_decay=weight_decay,
+ scope=scope))
+ elif current_config[_OP] == _PYRAMID_POOLING:
+ pooled_features = slim.avg_pool2d(
+ operation_input,
+ kernel_size=current_config[_KERNEL],
+ stride=[1, 1],
+ padding='VALID')
+ pooled_features = slim.conv2d(
+ pooled_features,
+ depth,
+ 1,
+ scope=scope)
+ pooled_features = tf.image.resize_bilinear(
+ pooled_features,
+ current_config[_TARGET_SIZE],
+ align_corners=True)
+ # Set shape for resize_height/resize_width if they are not Tensor.
+ resize_height = current_config[_TARGET_SIZE][0]
+ resize_width = current_config[_TARGET_SIZE][1]
+ if isinstance(resize_height, tf.Tensor):
+ resize_height = None
+ if isinstance(resize_width, tf.Tensor):
+ resize_width = None
+ pooled_features.set_shape(
+ [None, resize_height, resize_width, depth])
+ branch_logits.append(pooled_features)
+ else:
+ raise ValueError('Unrecognized operation.')
+ # Merge branch logits.
+ concat_logits = tf.concat(branch_logits, 3)
+ if self.hparams['dropout_on_concat_features']:
+ concat_logits = slim.dropout(
+ concat_logits,
+ keep_prob=self.hparams['dropout_keep_prob'],
+ is_training=is_training,
+ scope=_CONCAT_PROJECTION_SCOPE + '_dropout')
+ concat_logits = slim.conv2d(concat_logits,
+ self.hparams['concat_channels'],
+ 1,
+ scope=_CONCAT_PROJECTION_SCOPE)
+ if self.hparams['dropout_on_projection_features']:
+ concat_logits = slim.dropout(
+ concat_logits,
+ keep_prob=self.hparams['dropout_keep_prob'],
+ is_training=is_training,
+ scope=_CONCAT_PROJECTION_SCOPE + '_dropout')
+ return concat_logits
\ No newline at end of file
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/dense_prediction_cell_branch5_top1_cityscapes.json" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/dense_prediction_cell_branch5_top1_cityscapes.json"
new file mode 100644
index 00000000..12b093d0
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/dense_prediction_cell_branch5_top1_cityscapes.json"
@@ -0,0 +1 @@
+[{"kernel": 3, "rate": [1, 6], "op": "conv", "input": -1}, {"kernel": 3, "rate": [18, 15], "op": "conv", "input": 0}, {"kernel": 3, "rate": [6, 3], "op": "conv", "input": 1}, {"kernel": 3, "rate": [1, 1], "op": "conv", "input": 0}, {"kernel": 3, "rate": [6, 21], "op": "conv", "input": 0}]
\ No newline at end of file
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/dense_prediction_cell_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/dense_prediction_cell_test.py"
new file mode 100644
index 00000000..9ae84ae8
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/dense_prediction_cell_test.py"
@@ -0,0 +1,135 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for dense_prediction_cell."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from deeplab.core import dense_prediction_cell
+
+
+class DensePredictionCellTest(tf.test.TestCase):
+
+ def setUp(self):
+ self.segmentation_layer = dense_prediction_cell.DensePredictionCell(
+ config=[
+ {
+ dense_prediction_cell._INPUT: -1,
+ dense_prediction_cell._OP: dense_prediction_cell._CONV,
+ dense_prediction_cell._KERNEL: 1,
+ },
+ {
+ dense_prediction_cell._INPUT: 0,
+ dense_prediction_cell._OP: dense_prediction_cell._CONV,
+ dense_prediction_cell._KERNEL: 3,
+ dense_prediction_cell._RATE: [1, 3],
+ },
+ {
+ dense_prediction_cell._INPUT: 1,
+ dense_prediction_cell._OP: (
+ dense_prediction_cell._PYRAMID_POOLING),
+ dense_prediction_cell._GRID_SIZE: [1, 2],
+ },
+ ],
+ hparams={'conv_rate_multiplier': 2})
+
+ def testPyramidPoolingArguments(self):
+ features_size, pooled_kernel = (
+ self.segmentation_layer._get_pyramid_pooling_arguments(
+ crop_size=[513, 513],
+ output_stride=16,
+ image_grid=[4, 4]))
+ self.assertListEqual(features_size, [33, 33])
+ self.assertListEqual(pooled_kernel, [9, 9])
+
+ def testPyramidPoolingArgumentsWithImageGrid1x1(self):
+ features_size, pooled_kernel = (
+ self.segmentation_layer._get_pyramid_pooling_arguments(
+ crop_size=[257, 257],
+ output_stride=16,
+ image_grid=[1, 1]))
+ self.assertListEqual(features_size, [17, 17])
+ self.assertListEqual(pooled_kernel, [17, 17])
+
+ def testParseOperationStringWithConv1x1(self):
+ operation = self.segmentation_layer._parse_operation(
+ config={
+ dense_prediction_cell._OP: dense_prediction_cell._CONV,
+ dense_prediction_cell._KERNEL: [1, 1],
+ },
+ crop_size=[513, 513], output_stride=16)
+ self.assertEqual(operation[dense_prediction_cell._OP],
+ dense_prediction_cell._CONV)
+ self.assertListEqual(operation[dense_prediction_cell._KERNEL], [1, 1])
+
+ def testParseOperationStringWithConv3x3(self):
+ operation = self.segmentation_layer._parse_operation(
+ config={
+ dense_prediction_cell._OP: dense_prediction_cell._CONV,
+ dense_prediction_cell._KERNEL: [3, 3],
+ dense_prediction_cell._RATE: [9, 6],
+ },
+ crop_size=[513, 513], output_stride=16)
+ self.assertEqual(operation[dense_prediction_cell._OP],
+ dense_prediction_cell._CONV)
+ self.assertListEqual(operation[dense_prediction_cell._KERNEL], [3, 3])
+ self.assertEqual(operation[dense_prediction_cell._RATE], [9, 6])
+
+ def testParseOperationStringWithPyramidPooling2x2(self):
+ operation = self.segmentation_layer._parse_operation(
+ config={
+ dense_prediction_cell._OP: dense_prediction_cell._PYRAMID_POOLING,
+ dense_prediction_cell._GRID_SIZE: [2, 2],
+ },
+ crop_size=[513, 513],
+ output_stride=16)
+ self.assertEqual(operation[dense_prediction_cell._OP],
+ dense_prediction_cell._PYRAMID_POOLING)
+ # The feature maps of size [33, 33] should be covered by 2x2 kernels with
+ # size [17, 17].
+ self.assertListEqual(
+ operation[dense_prediction_cell._TARGET_SIZE], [33, 33])
+ self.assertListEqual(operation[dense_prediction_cell._KERNEL], [17, 17])
+
+ def testBuildCell(self):
+ with self.test_session(graph=tf.Graph()) as sess:
+ features = tf.random_normal([2, 33, 33, 5])
+ concat_logits = self.segmentation_layer.build_cell(
+ features,
+ output_stride=8,
+ crop_size=[257, 257])
+ sess.run(tf.global_variables_initializer())
+ concat_logits = sess.run(concat_logits)
+ self.assertTrue(concat_logits.any())
+
+ def testBuildCellWithImagePoolingCropSize(self):
+ with self.test_session(graph=tf.Graph()) as sess:
+ features = tf.random_normal([2, 33, 33, 5])
+ concat_logits = self.segmentation_layer.build_cell(
+ features,
+ output_stride=8,
+ crop_size=[257, 257],
+ image_pooling_crop_size=[129, 129])
+ sess.run(tf.global_variables_initializer())
+ concat_logits = sess.run(concat_logits)
+ self.assertTrue(concat_logits.any())
+
+
+if __name__ == '__main__':
+ tf.test.main()
\ No newline at end of file
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/feature_extractor.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/feature_extractor.py"
new file mode 100644
index 00000000..da89dfe9
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/feature_extractor.py"
@@ -0,0 +1,330 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Extracts features for different models."""
+import functools
+import tensorflow as tf
+
+from deeplab.core import resnet_v1_beta
+from deeplab.core import xception
+from tensorflow.contrib.slim.nets import resnet_utils
+from nets.mobilenet import mobilenet_v2
+
+
+slim = tf.contrib.slim
+
+# Default end point for MobileNetv2.
+_MOBILENET_V2_FINAL_ENDPOINT = 'layer_18'
+
+
+def _mobilenet_v2(net,
+ depth_multiplier,
+ output_stride,
+ reuse=None,
+ scope=None,
+ final_endpoint=None):
+ """Auxiliary function to add support for 'reuse' to mobilenet_v2.
+
+ Args:
+ net: Input tensor of shape [batch_size, height, width, channels].
+ depth_multiplier: Float multiplier for the depth (number of channels)
+ for all convolution ops. The value must be greater than zero. Typical
+ usage will be to set this value in (0, 1) to reduce the number of
+ parameters or computation cost of the model.
+ output_stride: An integer that specifies the requested ratio of input to
+ output spatial resolution. If not None, then we invoke atrous convolution
+ if necessary to prevent the network from reducing the spatial resolution
+ of the activation maps. Allowed values are 8 (accurate fully convolutional
+ mode), 16 (fast fully convolutional mode), 32 (classification mode).
+ reuse: Reuse model variables.
+ scope: Optional variable scope.
+ final_endpoint: The endpoint to construct the network up to.
+
+ Returns:
+ Features extracted by MobileNetv2.
+ """
+ with tf.variable_scope(
+ scope, 'MobilenetV2', [net], reuse=reuse) as scope:
+ return mobilenet_v2.mobilenet_base(
+ net,
+ conv_defs=mobilenet_v2.V2_DEF,
+ depth_multiplier=depth_multiplier,
+ min_depth=8 if depth_multiplier == 1.0 else 1,
+ divisible_by=8 if depth_multiplier == 1.0 else 1,
+ final_endpoint=final_endpoint or _MOBILENET_V2_FINAL_ENDPOINT,
+ output_stride=output_stride,
+ scope=scope)
+
+
+# A map from network name to network function.
+networks_map = {
+ 'mobilenet_v2': _mobilenet_v2,
+ 'resnet_v1_50': resnet_v1_beta.resnet_v1_50,
+ 'resnet_v1_50_beta': resnet_v1_beta.resnet_v1_50_beta,
+ 'resnet_v1_101': resnet_v1_beta.resnet_v1_101,
+ 'resnet_v1_101_beta': resnet_v1_beta.resnet_v1_101_beta,
+ 'xception_41': xception.xception_41,
+ 'xception_65': xception.xception_65,
+ 'xception_71': xception.xception_71,
+}
+
+# A map from network name to network arg scope.
+arg_scopes_map = {
+ 'mobilenet_v2': mobilenet_v2.training_scope,
+ 'resnet_v1_50': resnet_utils.resnet_arg_scope,
+ 'resnet_v1_50_beta': resnet_utils.resnet_arg_scope,
+ 'resnet_v1_101': resnet_utils.resnet_arg_scope,
+ 'resnet_v1_101_beta': resnet_utils.resnet_arg_scope,
+ 'xception_41': xception.xception_arg_scope,
+ 'xception_65': xception.xception_arg_scope,
+ 'xception_71': xception.xception_arg_scope,
+}
+
+# Names for end point features.
+DECODER_END_POINTS = 'decoder_end_points'
+
+# A dictionary from network name to a map of end point features.
+networks_to_feature_maps = {
+ 'mobilenet_v2': {
+ DECODER_END_POINTS: ['layer_4/depthwise_output'],
+ },
+ 'resnet_v1_50': {
+ DECODER_END_POINTS: ['block1/unit_2/bottleneck_v1/conv3'],
+ },
+ 'resnet_v1_50_beta': {
+ DECODER_END_POINTS: ['block1/unit_2/bottleneck_v1/conv3'],
+ },
+ 'resnet_v1_101': {
+ DECODER_END_POINTS: ['block1/unit_2/bottleneck_v1/conv3'],
+ },
+ 'resnet_v1_101_beta': {
+ DECODER_END_POINTS: ['block1/unit_2/bottleneck_v1/conv3'],
+ },
+ 'xception_41': {
+ DECODER_END_POINTS: [
+ 'entry_flow/block2/unit_1/xception_module/'
+ 'separable_conv2_pointwise',
+ ],
+ },
+ 'xception_65': {
+ DECODER_END_POINTS: [
+ 'entry_flow/block2/unit_1/xception_module/'
+ 'separable_conv2_pointwise',
+ ],
+ },
+ 'xception_71': {
+ DECODER_END_POINTS: [
+ 'entry_flow/block3/unit_1/xception_module/'
+ 'separable_conv2_pointwise',
+ ],
+ },
+}
+
+# A map from feature extractor name to the network name scope used in the
+# ImageNet pretrained versions of these models.
+name_scope = {
+ 'mobilenet_v2': 'MobilenetV2',
+ 'resnet_v1_50': 'resnet_v1_50',
+ 'resnet_v1_50_beta': 'resnet_v1_50',
+ 'resnet_v1_101': 'resnet_v1_101',
+ 'resnet_v1_101_beta': 'resnet_v1_101',
+ 'xception_41': 'xception_41',
+ 'xception_65': 'xception_65',
+ 'xception_71': 'xception_71',
+}
+
+# Mean pixel value.
+_MEAN_RGB = [123.15, 115.90, 103.06]
+
+
+def _preprocess_subtract_imagenet_mean(inputs):
+ """Subtract Imagenet mean RGB value."""
+ mean_rgb = tf.reshape(_MEAN_RGB, [1, 1, 1, 3])
+ return inputs - mean_rgb
+
+
+def _preprocess_zero_mean_unit_range(inputs):
+ """Map image values from [0, 255] to [-1, 1]."""
+ return (2.0 / 255.0) * tf.to_float(inputs) - 1.0
+
+
+_PREPROCESS_FN = {
+ 'mobilenet_v2': _preprocess_zero_mean_unit_range,
+ 'resnet_v1_50': _preprocess_subtract_imagenet_mean,
+ 'resnet_v1_50_beta': _preprocess_zero_mean_unit_range,
+ 'resnet_v1_101': _preprocess_subtract_imagenet_mean,
+ 'resnet_v1_101_beta': _preprocess_zero_mean_unit_range,
+ 'xception_41': _preprocess_zero_mean_unit_range,
+ 'xception_65': _preprocess_zero_mean_unit_range,
+ 'xception_71': _preprocess_zero_mean_unit_range,
+}
+
+
+def mean_pixel(model_variant=None):
+ """Gets mean pixel value.
+
+  This function returns a different mean pixel value, depending on the input
+  model_variant, which adopts a different preprocessing function. We currently
+ handle the following preprocessing functions:
+ (1) _preprocess_subtract_imagenet_mean. We simply return mean pixel value.
+ (2) _preprocess_zero_mean_unit_range. We return [127.5, 127.5, 127.5].
+ The return values are used in a way that the padded regions after
+ pre-processing will contain value 0.
+
+ Args:
+ model_variant: Model variant (string) for feature extraction. For
+ backwards compatibility, model_variant=None returns _MEAN_RGB.
+
+ Returns:
+ Mean pixel value.
+ """
+ if model_variant in ['resnet_v1_50',
+ 'resnet_v1_101'] or model_variant is None:
+ return _MEAN_RGB
+ else:
+ return [127.5, 127.5, 127.5]
+
+
+def extract_features(images,
+ output_stride=8,
+ multi_grid=None,
+ depth_multiplier=1.0,
+ final_endpoint=None,
+ model_variant=None,
+ weight_decay=0.0001,
+ reuse=None,
+ is_training=False,
+ fine_tune_batch_norm=False,
+ regularize_depthwise=False,
+ preprocess_images=True,
+ num_classes=None,
+ global_pool=False):
+ """Extracts features by the particular model_variant.
+
+ Args:
+ images: A tensor of size [batch, height, width, channels].
+ output_stride: The ratio of input to output spatial resolution.
+ multi_grid: Employ a hierarchy of different atrous rates within network.
+ depth_multiplier: Float multiplier for the depth (number of channels)
+ for all convolution ops used in MobileNet.
+ final_endpoint: The MobileNet endpoint to construct the network up to.
+ model_variant: Model variant for feature extraction.
+ weight_decay: The weight decay for model variables.
+ reuse: Reuse the model variables or not.
+ is_training: Is training or not.
+ fine_tune_batch_norm: Fine-tune the batch norm parameters or not.
+ regularize_depthwise: Whether or not apply L2-norm regularization on the
+ depthwise convolution weights.
+ preprocess_images: Performs preprocessing on images or not. Defaults to
+ True. Set to False if preprocessing will be done by other functions. We
+      support two types of preprocessing: (1) Mean pixel subtraction and (2)
+ Pixel values normalization to be [-1, 1].
+ num_classes: Number of classes for image classification task. Defaults
+ to None for dense prediction tasks.
+ global_pool: Global pooling for image classification task. Defaults to
+ False, since dense prediction tasks do not use this.
+
+ Returns:
+ features: A tensor of size [batch, feature_height, feature_width,
+ feature_channels], where feature_height/feature_width are determined
+ by the images height/width and output_stride.
+ end_points: A dictionary from components of the network to the corresponding
+ activation.
+
+ Raises:
+ ValueError: Unrecognized model variant.
+ """
+ if 'resnet' in model_variant:
+ arg_scope = arg_scopes_map[model_variant](
+ weight_decay=weight_decay,
+ batch_norm_decay=0.95,
+ batch_norm_epsilon=1e-5,
+ batch_norm_scale=True)
+ features, end_points = get_network(
+ model_variant, preprocess_images, arg_scope)(
+ inputs=images,
+ num_classes=num_classes,
+ is_training=(is_training and fine_tune_batch_norm),
+ global_pool=global_pool,
+ output_stride=output_stride,
+ multi_grid=multi_grid,
+ reuse=reuse,
+ scope=name_scope[model_variant])
+ elif 'xception' in model_variant:
+ arg_scope = arg_scopes_map[model_variant](
+ weight_decay=weight_decay,
+ batch_norm_decay=0.9997,
+ batch_norm_epsilon=1e-3,
+ batch_norm_scale=True,
+ regularize_depthwise=regularize_depthwise)
+ features, end_points = get_network(
+ model_variant, preprocess_images, arg_scope)(
+ inputs=images,
+ num_classes=num_classes,
+ is_training=(is_training and fine_tune_batch_norm),
+ global_pool=global_pool,
+ output_stride=output_stride,
+ regularize_depthwise=regularize_depthwise,
+ multi_grid=multi_grid,
+ reuse=reuse,
+ scope=name_scope[model_variant])
+ elif 'mobilenet' in model_variant:
+ arg_scope = arg_scopes_map[model_variant](
+ is_training=(is_training and fine_tune_batch_norm),
+ weight_decay=weight_decay)
+ features, end_points = get_network(
+ model_variant, preprocess_images, arg_scope)(
+ inputs=images,
+ depth_multiplier=depth_multiplier,
+ output_stride=output_stride,
+ reuse=reuse,
+ scope=name_scope[model_variant],
+ final_endpoint=final_endpoint)
+ else:
+ raise ValueError('Unknown model variant %s.' % model_variant)
+
+ return features, end_points
+
+
+def get_network(network_name, preprocess_images, arg_scope=None):
+ """Gets the network.
+
+ Args:
+ network_name: Network name.
+ preprocess_images: Preprocesses the images or not.
+ arg_scope: Optional, arg_scope to build the network. If not provided the
+ default arg_scope of the network would be used.
+
+ Returns:
+ A network function that is used to extract features.
+
+ Raises:
+ ValueError: network is not supported.
+ """
+ if network_name not in networks_map:
+ raise ValueError('Unsupported network %s.' % network_name)
+ arg_scope = arg_scope or arg_scopes_map[network_name]()
+ def _identity_function(inputs):
+ return inputs
+ if preprocess_images:
+ preprocess_function = _PREPROCESS_FN[network_name]
+ else:
+ preprocess_function = _identity_function
+ func = networks_map[network_name]
+ @functools.wraps(func)
+ def network_fn(inputs, *args, **kwargs):
+ with slim.arg_scope(arg_scope):
+ return func(preprocess_function(inputs), *args, **kwargs)
+ return network_fn
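+
+
+# A hedged usage sketch (parameter values are illustrative, not prescribed
+# defaults):
+#   features, end_points = extract_features(
+#       images,
+#       output_stride=16,
+#       model_variant='xception_65',
+#       is_training=True,
+#       fine_tune_batch_norm=True)
+# `features` is the backbone output at 1/16 of the input resolution, and
+# `end_points` maps layer names to activations (see `networks_to_feature_maps`
+# for the intermediate features the decoder consumes).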
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/preprocess_utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/preprocess_utils.py"
new file mode 100644
index 00000000..f6026505
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/preprocess_utils.py"
@@ -0,0 +1,445 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Utility functions related to preprocessing inputs."""
+import tensorflow as tf
+
+
+def flip_dim(tensor_list, prob=0.5, dim=1):
+ """Randomly flips a dimension of the given tensor.
+
+ The decision to randomly flip the `Tensors` is made together. In other words,
+  all or none of the images passed in are flipped.
+
+  Note that tf.random_flip_left_right and tf.random_flip_up_down aren't used so
+ that we can control for the probability as well as ensure the same decision
+ is applied across the images.
+
+ Args:
+ tensor_list: A list of `Tensors` with the same number of dimensions.
+ prob: The probability of a left-right flip.
+    dim: The dimension to flip, 0, 1, ...
+
+ Returns:
+ outputs: A list of the possibly flipped `Tensors` as well as an indicator
+ `Tensor` at the end whose value is `True` if the inputs were flipped and
+ `False` otherwise.
+
+ Raises:
+ ValueError: If dim is negative or greater than the dimension of a `Tensor`.
+ """
+ random_value = tf.random_uniform([])
+
+ def flip():
+ flipped = []
+ for tensor in tensor_list:
+ if dim < 0 or dim >= len(tensor.get_shape().as_list()):
+ raise ValueError('dim must represent a valid dimension.')
+ flipped.append(tf.reverse_v2(tensor, [dim]))
+ return flipped
+
+ is_flipped = tf.less_equal(random_value, prob)
+ outputs = tf.cond(is_flipped, flip, lambda: tensor_list)
+ if not isinstance(outputs, (list, tuple)):
+ outputs = [outputs]
+ outputs.append(is_flipped)
+
+ return outputs
+
+
+def pad_to_bounding_box(image, offset_height, offset_width, target_height,
+ target_width, pad_value):
+ """Pads the given image with the given pad_value.
+
+ Works like tf.image.pad_to_bounding_box, except it can pad the image
+ with any given arbitrary pad value and also handle images whose sizes are not
+ known during graph construction.
+
+ Args:
+ image: 3-D tensor with shape [height, width, channels]
+    offset_height: Number of rows of padding to add on top.
+    offset_width: Number of columns of padding to add on the left.
+ target_height: Height of output image.
+ target_width: Width of output image.
+ pad_value: Value to pad the image tensor with.
+
+ Returns:
+ 3-D tensor of shape [target_height, target_width, channels].
+
+ Raises:
+ ValueError: If the shape of image is incompatible with the offset_* or
+ target_* arguments.
+ """
+ image_rank = tf.rank(image)
+ image_rank_assert = tf.Assert(
+ tf.equal(image_rank, 3),
+ ['Wrong image tensor rank [Expected] [Actual]',
+ 3, image_rank])
+ with tf.control_dependencies([image_rank_assert]):
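+    # Subtract pad_value up front so that tf.pad's zero padding, combined with
+    # the final `+ pad_value` below, leaves the padded regions equal to
+    # pad_value while restoring the original pixel values.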
+ image -= pad_value
+ image_shape = tf.shape(image)
+ height, width = image_shape[0], image_shape[1]
+ target_width_assert = tf.Assert(
+ tf.greater_equal(
+ target_width, width),
+ ['target_width must be >= width'])
+ target_height_assert = tf.Assert(
+ tf.greater_equal(target_height, height),
+ ['target_height must be >= height'])
+ with tf.control_dependencies([target_width_assert]):
+ after_padding_width = target_width - offset_width - width
+ with tf.control_dependencies([target_height_assert]):
+ after_padding_height = target_height - offset_height - height
+ offset_assert = tf.Assert(
+ tf.logical_and(
+ tf.greater_equal(after_padding_width, 0),
+ tf.greater_equal(after_padding_height, 0)),
+ ['target size not possible with the given target offsets'])
+
+ height_params = tf.stack([offset_height, after_padding_height])
+ width_params = tf.stack([offset_width, after_padding_width])
+ channel_params = tf.stack([0, 0])
+ with tf.control_dependencies([offset_assert]):
+ paddings = tf.stack([height_params, width_params, channel_params])
+ padded = tf.pad(image, paddings)
+ return padded + pad_value
+
+
+def _crop(image, offset_height, offset_width, crop_height, crop_width):
+ """Crops the given image using the provided offsets and sizes.
+
+ Note that the method doesn't assume we know the input image size but it does
+ assume we know the input image rank.
+
+ Args:
+ image: an image of shape [height, width, channels].
+ offset_height: a scalar tensor indicating the height offset.
+ offset_width: a scalar tensor indicating the width offset.
+ crop_height: the height of the cropped image.
+ crop_width: the width of the cropped image.
+
+ Returns:
+ The cropped (and resized) image.
+
+ Raises:
+ ValueError: if `image` doesn't have rank of 3.
+ InvalidArgumentError: if the rank is not 3 or if the image dimensions are
+ less than the crop size.
+ """
+ original_shape = tf.shape(image)
+
+ if len(image.get_shape().as_list()) != 3:
+ raise ValueError('input must have rank of 3')
+ original_channels = image.get_shape().as_list()[2]
+
+ rank_assertion = tf.Assert(
+ tf.equal(tf.rank(image), 3),
+ ['Rank of image must be equal to 3.'])
+ with tf.control_dependencies([rank_assertion]):
+ cropped_shape = tf.stack([crop_height, crop_width, original_shape[2]])
+
+ size_assertion = tf.Assert(
+ tf.logical_and(
+ tf.greater_equal(original_shape[0], crop_height),
+ tf.greater_equal(original_shape[1], crop_width)),
+ ['Crop size greater than the image size.'])
+
+ offsets = tf.to_int32(tf.stack([offset_height, offset_width, 0]))
+
+ # Use tf.slice instead of crop_to_bounding box as it accepts tensors to
+ # define the crop size.
+ with tf.control_dependencies([size_assertion]):
+ image = tf.slice(image, offsets, cropped_shape)
+ image = tf.reshape(image, cropped_shape)
+ image.set_shape([crop_height, crop_width, original_channels])
+ return image
+
+
+def random_crop(image_list, crop_height, crop_width):
+ """Crops the given list of images.
+
+ The function applies the same crop to each image in the list. This can be
+ effectively applied when there are multiple image inputs of the same
+ dimension such as:
+
+ image, depths, normals = random_crop([image, depths, normals], 120, 150)
+
+ Args:
+ image_list: a list of image tensors of the same dimension but possibly
+ varying channel.
+ crop_height: the new height.
+ crop_width: the new width.
+
+ Returns:
+ the image_list with cropped images.
+
+ Raises:
+ ValueError: if there are multiple image inputs provided with different size
+ or the images are smaller than the crop dimensions.
+ """
+ if not image_list:
+ raise ValueError('Empty image_list.')
+
+ # Compute the rank assertions.
+ rank_assertions = []
+ for i in range(len(image_list)):
+ image_rank = tf.rank(image_list[i])
+ rank_assert = tf.Assert(
+ tf.equal(image_rank, 3),
+ ['Wrong rank for tensor %s [expected] [actual]',
+ image_list[i].name, 3, image_rank])
+ rank_assertions.append(rank_assert)
+
+ with tf.control_dependencies([rank_assertions[0]]):
+ image_shape = tf.shape(image_list[0])
+ image_height = image_shape[0]
+ image_width = image_shape[1]
+ crop_size_assert = tf.Assert(
+ tf.logical_and(
+ tf.greater_equal(image_height, crop_height),
+ tf.greater_equal(image_width, crop_width)),
+ ['Crop size greater than the image size.'])
+
+ asserts = [rank_assertions[0], crop_size_assert]
+
+ for i in range(1, len(image_list)):
+ image = image_list[i]
+ asserts.append(rank_assertions[i])
+ with tf.control_dependencies([rank_assertions[i]]):
+ shape = tf.shape(image)
+ height = shape[0]
+ width = shape[1]
+
+ height_assert = tf.Assert(
+ tf.equal(height, image_height),
+ ['Wrong height for tensor %s [expected][actual]',
+ image.name, height, image_height])
+ width_assert = tf.Assert(
+ tf.equal(width, image_width),
+ ['Wrong width for tensor %s [expected][actual]',
+ image.name, width, image_width])
+ asserts.extend([height_assert, width_assert])
+
+ # Create a random bounding box.
+ #
+ # Use tf.random_uniform and not numpy.random.rand as doing the former would
+ # generate random numbers at graph eval time, unlike the latter which
+ # generates random numbers at graph definition time.
+ with tf.control_dependencies(asserts):
+ max_offset_height = tf.reshape(image_height - crop_height + 1, [])
+ max_offset_width = tf.reshape(image_width - crop_width + 1, [])
+ offset_height = tf.random_uniform(
+ [], maxval=max_offset_height, dtype=tf.int32)
+ offset_width = tf.random_uniform(
+ [], maxval=max_offset_width, dtype=tf.int32)
+
+ return [_crop(image, offset_height, offset_width,
+ crop_height, crop_width) for image in image_list]
+
+
+def get_random_scale(min_scale_factor, max_scale_factor, step_size):
+ """Gets a random scale value.
+
+ Args:
+ min_scale_factor: Minimum scale value.
+ max_scale_factor: Maximum scale value.
+ step_size: The step size from minimum to maximum value.
+
+ Returns:
+ A random scale value selected between minimum and maximum value.
+
+ Raises:
+ ValueError: min_scale_factor has unexpected value.
+ """
+ if min_scale_factor < 0 or min_scale_factor > max_scale_factor:
+ raise ValueError('Unexpected value of min_scale_factor.')
+
+ if min_scale_factor == max_scale_factor:
+ return tf.to_float(min_scale_factor)
+
+ # When step_size = 0, we sample the value uniformly from [min, max).
+ if step_size == 0:
+ return tf.random_uniform([1],
+ minval=min_scale_factor,
+ maxval=max_scale_factor)
+
+ # When step_size != 0, we randomly select one discrete value from [min, max].
+ num_steps = int((max_scale_factor - min_scale_factor) / step_size + 1)
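+  # For example (hypothetical values): min=0.5, max=2.0, step_size=0.25 gives
+  # num_steps = 7 candidate scales {0.5, 0.75, ..., 2.0}, one of which is
+  # picked uniformly at random below.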
+ scale_factors = tf.lin_space(min_scale_factor, max_scale_factor, num_steps)
+ shuffled_scale_factors = tf.random_shuffle(scale_factors)
+ return shuffled_scale_factors[0]
+
+
+def randomly_scale_image_and_label(image, label=None, scale=1.0):
+ """Randomly scales image and label.
+
+ Args:
+ image: Image with shape [height, width, 3].
+ label: Label with shape [height, width, 1].
+ scale: The value to scale image and label.
+
+ Returns:
+ Scaled image and label.
+ """
+ # No random scaling if scale == 1.
+ if scale == 1.0:
+ return image, label
+ image_shape = tf.shape(image)
+ new_dim = tf.to_int32(tf.to_float([image_shape[0], image_shape[1]]) * scale)
+
+ # Need squeeze and expand_dims because image interpolation takes
+ # 4D tensors as input.
+ image = tf.squeeze(tf.image.resize_bilinear(
+ tf.expand_dims(image, 0),
+ new_dim,
+ align_corners=True), [0])
+ if label is not None:
+ label = tf.squeeze(tf.image.resize_nearest_neighbor(
+ tf.expand_dims(label, 0),
+ new_dim,
+ align_corners=True), [0])
+
+ return image, label
+
+
+def resolve_shape(tensor, rank=None, scope=None):
+ """Fully resolves the shape of a Tensor.
+
+ Use as much as possible the shape components already known during graph
+ creation and resolve the remaining ones during runtime.
+
+ Args:
+ tensor: Input tensor whose shape we query.
+ rank: The rank of the tensor, provided that we know it.
+ scope: Optional name scope.
+
+ Returns:
+ shape: The full shape of the tensor.
+ """
+ with tf.name_scope(scope, 'resolve_shape', [tensor]):
+ if rank is not None:
+ shape = tensor.get_shape().with_rank(rank).as_list()
+ else:
+ shape = tensor.get_shape().as_list()
+
+ if None in shape:
+ shape_dynamic = tf.shape(tensor)
+ for i in range(len(shape)):
+ if shape[i] is None:
+ shape[i] = shape_dynamic[i]
+
+ return shape
+
+
+def resize_to_range(image,
+ label=None,
+ min_size=None,
+ max_size=None,
+ factor=None,
+ align_corners=True,
+ label_layout_is_chw=False,
+ scope=None,
+ method=tf.image.ResizeMethod.BILINEAR):
+ """Resizes image or label so their sides are within the provided range.
+
+ The output size can be described by two cases:
+ 1. If the image can be rescaled so its minimum size is equal to min_size
+ without the other side exceeding max_size, then do so.
+ 2. Otherwise, resize so the largest side is equal to max_size.
+
+ An integer in `range(factor)` is added to the computed sides so that the
+ final dimensions are multiples of `factor` plus one.
+
+ Args:
+ image: A 3D tensor of shape [height, width, channels].
+ label: (optional) A 3D tensor of shape [height, width, channels] (default)
+ or [channels, height, width] when label_layout_is_chw = True.
+ min_size: (scalar) desired size of the smaller image side.
+ max_size: (scalar) maximum allowed size of the larger image side. Note
+ that the output dimension is no larger than max_size and may be slightly
+ smaller than min_size when factor is not None.
+ factor: Make output size multiple of factor plus one.
+ align_corners: If True, exactly align all 4 corners of input and output.
+ label_layout_is_chw: If true, the label has shape [channel, height, width].
+      We support this case because for some instance segmentation datasets,
+      the instance segmentation is saved as [num_instances, height, width].
+ scope: Optional name scope.
+ method: Image resize method. Defaults to tf.image.ResizeMethod.BILINEAR.
+
+ Returns:
+ A 3-D tensor of shape [new_height, new_width, channels], where the image
+ has been resized (with the specified method) so that
+ min(new_height, new_width) == ceil(min_size) or
+ max(new_height, new_width) == ceil(max_size).
+
+ Raises:
+ ValueError: If the image is not a 3D tensor.
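+
+  For example (sizes mirror the unit tests in preprocess_utils_test.py):
+    image = tf.zeros([60, 40, 3])
+    # Without a factor, the smaller side is scaled up to min_size:
+    resized = resize_to_range(image, min_size=50, max_size=100)[0]
+    # resized has shape [75, 50, 3].
+    # With factor=8, both sides become multiples of 8 plus one:
+    resized = resize_to_range(image, min_size=50, max_size=98, factor=8)[0]
+    # resized has shape [81, 57, 3].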
+ """
+ with tf.name_scope(scope, 'resize_to_range', [image]):
+ new_tensor_list = []
+ min_size = tf.to_float(min_size)
+ if max_size is not None:
+ max_size = tf.to_float(max_size)
+ # Modify the max_size to be a multiple of factor plus 1 and make sure the
+ # max dimension after resizing is no larger than max_size.
+ if factor is not None:
+ max_size = (max_size + (factor - (max_size - 1) % factor) % factor
+ - factor)
+
+ [orig_height, orig_width, _] = resolve_shape(image, rank=3)
+ orig_height = tf.to_float(orig_height)
+ orig_width = tf.to_float(orig_width)
+ orig_min_size = tf.minimum(orig_height, orig_width)
+
+ # Calculate the larger of the possible sizes
+ large_scale_factor = min_size / orig_min_size
+ large_height = tf.to_int32(tf.ceil(orig_height * large_scale_factor))
+ large_width = tf.to_int32(tf.ceil(orig_width * large_scale_factor))
+ large_size = tf.stack([large_height, large_width])
+
+ new_size = large_size
+ if max_size is not None:
+ # Calculate the smaller of the possible sizes, use that if the larger
+ # is too big.
+ orig_max_size = tf.maximum(orig_height, orig_width)
+ small_scale_factor = max_size / orig_max_size
+ small_height = tf.to_int32(tf.ceil(orig_height * small_scale_factor))
+ small_width = tf.to_int32(tf.ceil(orig_width * small_scale_factor))
+ small_size = tf.stack([small_height, small_width])
+ new_size = tf.cond(
+ tf.to_float(tf.reduce_max(large_size)) > max_size,
+ lambda: small_size,
+ lambda: large_size)
+ # Ensure that both output sides are multiples of factor plus one.
+ if factor is not None:
+ new_size += (factor - (new_size - 1) % factor) % factor
+ new_tensor_list.append(tf.image.resize_images(
+ image, new_size, method=method, align_corners=align_corners))
+ if label is not None:
+ if label_layout_is_chw:
+ # Input label has shape [channel, height, width].
+ resized_label = tf.expand_dims(label, 3)
+ resized_label = tf.image.resize_nearest_neighbor(
+ resized_label, new_size, align_corners=align_corners)
+ resized_label = tf.squeeze(resized_label, 3)
+ else:
+ # Input label has shape [height, width, channel].
+ resized_label = tf.image.resize_images(
+ label, new_size, method=tf.image.ResizeMethod.NEAREST_NEIGHBOR,
+ align_corners=align_corners)
+ new_tensor_list.append(resized_label)
+ else:
+ new_tensor_list.append(None)
+ return new_tensor_list
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/preprocess_utils_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/preprocess_utils_test.py"
new file mode 100644
index 00000000..bca14d4b
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/preprocess_utils_test.py"
@@ -0,0 +1,432 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for preprocess_utils."""
+import numpy as np
+import tensorflow as tf
+
+from tensorflow.python.framework import errors
+from deeplab.core import preprocess_utils
+
+
+class PreprocessUtilsTest(tf.test.TestCase):
+
+ def testNoFlipWhenProbIsZero(self):
+ numpy_image = np.dstack([[[5., 6.],
+ [9., 0.]],
+ [[4., 3.],
+ [3., 5.]]])
+ image = tf.convert_to_tensor(numpy_image)
+
+ with self.test_session():
+ actual, is_flipped = preprocess_utils.flip_dim([image], prob=0, dim=0)
+ self.assertAllEqual(numpy_image, actual.eval())
+ self.assertAllEqual(False, is_flipped.eval())
+ actual, is_flipped = preprocess_utils.flip_dim([image], prob=0, dim=1)
+ self.assertAllEqual(numpy_image, actual.eval())
+ self.assertAllEqual(False, is_flipped.eval())
+ actual, is_flipped = preprocess_utils.flip_dim([image], prob=0, dim=2)
+ self.assertAllEqual(numpy_image, actual.eval())
+ self.assertAllEqual(False, is_flipped.eval())
+
+ def testFlipWhenProbIsOne(self):
+ numpy_image = np.dstack([[[5., 6.],
+ [9., 0.]],
+ [[4., 3.],
+ [3., 5.]]])
+ dim0_flipped = np.dstack([[[9., 0.],
+ [5., 6.]],
+ [[3., 5.],
+ [4., 3.]]])
+ dim1_flipped = np.dstack([[[6., 5.],
+ [0., 9.]],
+ [[3., 4.],
+ [5., 3.]]])
+ dim2_flipped = np.dstack([[[4., 3.],
+ [3., 5.]],
+ [[5., 6.],
+ [9., 0.]]])
+ image = tf.convert_to_tensor(numpy_image)
+
+ with self.test_session():
+ actual, is_flipped = preprocess_utils.flip_dim([image], prob=1, dim=0)
+ self.assertAllEqual(dim0_flipped, actual.eval())
+ self.assertAllEqual(True, is_flipped.eval())
+ actual, is_flipped = preprocess_utils.flip_dim([image], prob=1, dim=1)
+ self.assertAllEqual(dim1_flipped, actual.eval())
+ self.assertAllEqual(True, is_flipped.eval())
+ actual, is_flipped = preprocess_utils.flip_dim([image], prob=1, dim=2)
+ self.assertAllEqual(dim2_flipped, actual.eval())
+ self.assertAllEqual(True, is_flipped.eval())
+
+ def testFlipMultipleImagesConsistentlyWhenProbIsOne(self):
+ numpy_image = np.dstack([[[5., 6.],
+ [9., 0.]],
+ [[4., 3.],
+ [3., 5.]]])
+ numpy_label = np.dstack([[[0., 1.],
+ [2., 3.]]])
+ image_dim1_flipped = np.dstack([[[6., 5.],
+ [0., 9.]],
+ [[3., 4.],
+ [5., 3.]]])
+ label_dim1_flipped = np.dstack([[[1., 0.],
+ [3., 2.]]])
+ image = tf.convert_to_tensor(numpy_image)
+ label = tf.convert_to_tensor(numpy_label)
+
+ with self.test_session() as sess:
+ image, label, is_flipped = preprocess_utils.flip_dim(
+ [image, label], prob=1, dim=1)
+ actual_image, actual_label = sess.run([image, label])
+ self.assertAllEqual(image_dim1_flipped, actual_image)
+ self.assertAllEqual(label_dim1_flipped, actual_label)
+ self.assertEqual(True, is_flipped.eval())
+
+ def testReturnRandomFlipsOnMultipleEvals(self):
+ numpy_image = np.dstack([[[5., 6.],
+ [9., 0.]],
+ [[4., 3.],
+ [3., 5.]]])
+ dim1_flipped = np.dstack([[[6., 5.],
+ [0., 9.]],
+ [[3., 4.],
+ [5., 3.]]])
+ image = tf.convert_to_tensor(numpy_image)
+ tf.set_random_seed(53)
+
+ with self.test_session() as sess:
+ actual, is_flipped = preprocess_utils.flip_dim(
+ [image], prob=0.5, dim=1)
+ actual_image, actual_is_flipped = sess.run([actual, is_flipped])
+ self.assertAllEqual(numpy_image, actual_image)
+ self.assertEqual(False, actual_is_flipped)
+ actual_image, actual_is_flipped = sess.run([actual, is_flipped])
+ self.assertAllEqual(dim1_flipped, actual_image)
+ self.assertEqual(True, actual_is_flipped)
+
+ def testReturnCorrectCropOfSingleImage(self):
+ np.random.seed(0)
+
+ height, width = 10, 20
+ image = np.random.randint(0, 256, size=(height, width, 3))
+
+ crop_height, crop_width = 2, 4
+
+ image_placeholder = tf.placeholder(tf.int32, shape=(None, None, 3))
+ [cropped] = preprocess_utils.random_crop([image_placeholder],
+ crop_height,
+ crop_width)
+
+ with self.test_session():
+ cropped_image = cropped.eval(feed_dict={image_placeholder: image})
+
+ # Ensure we can find the cropped image in the original:
+ is_found = False
+ for x in range(0, width - crop_width + 1):
+ for y in range(0, height - crop_height + 1):
+ if np.isclose(image[y:y+crop_height, x:x+crop_width, :],
+ cropped_image).all():
+ is_found = True
+ break
+
+ self.assertTrue(is_found)
+
+ def testRandomCropMaintainsNumberOfChannels(self):
+ np.random.seed(0)
+
+ crop_height, crop_width = 10, 20
+ image = np.random.randint(0, 256, size=(100, 200, 3))
+
+ tf.set_random_seed(37)
+ image_placeholder = tf.placeholder(tf.int32, shape=(None, None, 3))
+ [cropped] = preprocess_utils.random_crop(
+ [image_placeholder], crop_height, crop_width)
+
+ with self.test_session():
+ cropped_image = cropped.eval(feed_dict={image_placeholder: image})
+ self.assertTupleEqual(cropped_image.shape, (crop_height, crop_width, 3))
+
+ def testReturnDifferentCropAreasOnTwoEvals(self):
+ tf.set_random_seed(0)
+
+ crop_height, crop_width = 2, 3
+ image = np.random.randint(0, 256, size=(100, 200, 3))
+ image_placeholder = tf.placeholder(tf.int32, shape=(None, None, 3))
+ [cropped] = preprocess_utils.random_crop(
+ [image_placeholder], crop_height, crop_width)
+
+ with self.test_session():
+ crop0 = cropped.eval(feed_dict={image_placeholder: image})
+ crop1 = cropped.eval(feed_dict={image_placeholder: image})
+ self.assertFalse(np.isclose(crop0, crop1).all())
+
+  def testReturnConsistentCropsOfImagesInTheList(self):
+ tf.set_random_seed(0)
+
+ height, width = 10, 20
+ crop_height, crop_width = 2, 3
+ labels = np.linspace(0, height * width-1, height * width)
+ labels = labels.reshape((height, width, 1))
+ image = np.tile(labels, (1, 1, 3))
+
+ image_placeholder = tf.placeholder(tf.int32, shape=(None, None, 3))
+ label_placeholder = tf.placeholder(tf.int32, shape=(None, None, 1))
+ [cropped_image, cropped_label] = preprocess_utils.random_crop(
+ [image_placeholder, label_placeholder], crop_height, crop_width)
+
+ with self.test_session() as sess:
+ cropped_image, cropped_labels = sess.run([cropped_image, cropped_label],
+ feed_dict={
+ image_placeholder: image,
+ label_placeholder: labels})
+ for i in range(3):
+ self.assertAllEqual(cropped_image[:, :, i], cropped_labels.squeeze())
+
+ def testDieOnRandomCropWhenImagesWithDifferentWidth(self):
+ crop_height, crop_width = 2, 3
+ image1 = tf.placeholder(tf.float32, name='image1', shape=(None, None, 3))
+ image2 = tf.placeholder(tf.float32, name='image2', shape=(None, None, 1))
+ cropped = preprocess_utils.random_crop(
+ [image1, image2], crop_height, crop_width)
+
+ with self.test_session() as sess:
+ with self.assertRaises(errors.InvalidArgumentError):
+ sess.run(cropped, feed_dict={image1: np.random.rand(4, 5, 3),
+ image2: np.random.rand(4, 6, 1)})
+
+ def testDieOnRandomCropWhenImagesWithDifferentHeight(self):
+ crop_height, crop_width = 2, 3
+ image1 = tf.placeholder(tf.float32, name='image1', shape=(None, None, 3))
+ image2 = tf.placeholder(tf.float32, name='image2', shape=(None, None, 1))
+ cropped = preprocess_utils.random_crop(
+ [image1, image2], crop_height, crop_width)
+
+ with self.test_session() as sess:
+ with self.assertRaisesWithPredicateMatch(
+ errors.InvalidArgumentError,
+ 'Wrong height for tensor'):
+ sess.run(cropped, feed_dict={image1: np.random.rand(4, 5, 3),
+ image2: np.random.rand(3, 5, 1)})
+
+ def testDieOnRandomCropWhenCropSizeIsGreaterThanImage(self):
+ crop_height, crop_width = 5, 9
+ image1 = tf.placeholder(tf.float32, name='image1', shape=(None, None, 3))
+ image2 = tf.placeholder(tf.float32, name='image2', shape=(None, None, 1))
+ cropped = preprocess_utils.random_crop(
+ [image1, image2], crop_height, crop_width)
+
+ with self.test_session() as sess:
+ with self.assertRaisesWithPredicateMatch(
+ errors.InvalidArgumentError,
+ 'Crop size greater than the image size.'):
+ sess.run(cropped, feed_dict={image1: np.random.rand(4, 5, 3),
+ image2: np.random.rand(4, 5, 1)})
+
+ def testReturnPaddedImageWithNonZeroPadValue(self):
+ for dtype in [np.int32, np.int64, np.float32, np.float64]:
+ image = np.dstack([[[5, 6],
+ [9, 0]],
+ [[4, 3],
+ [3, 5]]]).astype(dtype)
+ expected_image = np.dstack([[[255, 255, 255, 255, 255],
+ [255, 255, 255, 255, 255],
+ [255, 5, 6, 255, 255],
+ [255, 9, 0, 255, 255],
+ [255, 255, 255, 255, 255]],
+ [[255, 255, 255, 255, 255],
+ [255, 255, 255, 255, 255],
+ [255, 4, 3, 255, 255],
+ [255, 3, 5, 255, 255],
+ [255, 255, 255, 255, 255]]]).astype(dtype)
+
+ with self.test_session():
+ image_placeholder = tf.placeholder(tf.float32)
+ padded_image = preprocess_utils.pad_to_bounding_box(
+ image_placeholder, 2, 1, 5, 5, 255)
+ self.assertAllClose(padded_image.eval(
+ feed_dict={image_placeholder: image}), expected_image)
+
+ def testReturnOriginalImageWhenTargetSizeIsEqualToImageSize(self):
+ image = np.dstack([[[5, 6],
+ [9, 0]],
+ [[4, 3],
+ [3, 5]]])
+
+ with self.test_session():
+ image_placeholder = tf.placeholder(tf.float32)
+ padded_image = preprocess_utils.pad_to_bounding_box(
+ image_placeholder, 0, 0, 2, 2, 255)
+ self.assertAllClose(padded_image.eval(
+ feed_dict={image_placeholder: image}), image)
+
+ def testDieOnTargetSizeGreaterThanImageSize(self):
+ image = np.dstack([[[5, 6],
+ [9, 0]],
+ [[4, 3],
+ [3, 5]]])
+ with self.test_session():
+ image_placeholder = tf.placeholder(tf.float32)
+ padded_image = preprocess_utils.pad_to_bounding_box(
+ image_placeholder, 0, 0, 2, 1, 255)
+ with self.assertRaisesWithPredicateMatch(
+ errors.InvalidArgumentError,
+ 'target_width must be >= width'):
+ padded_image.eval(feed_dict={image_placeholder: image})
+ padded_image = preprocess_utils.pad_to_bounding_box(
+ image_placeholder, 0, 0, 1, 2, 255)
+ with self.assertRaisesWithPredicateMatch(
+ errors.InvalidArgumentError,
+ 'target_height must be >= height'):
+ padded_image.eval(feed_dict={image_placeholder: image})
+
+ def testDieIfTargetSizeNotPossibleWithGivenOffset(self):
+ image = np.dstack([[[5, 6],
+ [9, 0]],
+ [[4, 3],
+ [3, 5]]])
+ with self.test_session():
+ image_placeholder = tf.placeholder(tf.float32)
+ padded_image = preprocess_utils.pad_to_bounding_box(
+ image_placeholder, 3, 0, 4, 4, 255)
+ with self.assertRaisesWithPredicateMatch(
+ errors.InvalidArgumentError,
+ 'target size not possible with the given target offsets'):
+ padded_image.eval(feed_dict={image_placeholder: image})
+
+ def testDieIfImageTensorRankIsNotThree(self):
+ image = np.vstack([[5, 6],
+ [9, 0]])
+ with self.test_session():
+ image_placeholder = tf.placeholder(tf.float32)
+ padded_image = preprocess_utils.pad_to_bounding_box(
+ image_placeholder, 0, 0, 2, 2, 255)
+ with self.assertRaisesWithPredicateMatch(
+ errors.InvalidArgumentError,
+ 'Wrong image tensor rank'):
+ padded_image.eval(feed_dict={image_placeholder: image})
+
+ def testResizeTensorsToRange(self):
+ test_shapes = [[60, 40],
+ [15, 30],
+ [15, 50]]
+ min_size = 50
+ max_size = 100
+ factor = None
+ expected_shape_list = [(75, 50, 3),
+ (50, 100, 3),
+ (30, 100, 3)]
+ for i, test_shape in enumerate(test_shapes):
+ image = tf.random_normal([test_shape[0], test_shape[1], 3])
+ new_tensor_list = preprocess_utils.resize_to_range(
+ image=image,
+ label=None,
+ min_size=min_size,
+ max_size=max_size,
+ factor=factor,
+ align_corners=True)
+ with self.test_session() as session:
+ resized_image = session.run(new_tensor_list[0])
+ self.assertEqual(resized_image.shape, expected_shape_list[i])
+
+ def testResizeTensorsToRangeWithFactor(self):
+ test_shapes = [[60, 40],
+ [15, 30],
+ [15, 50]]
+ min_size = 50
+ max_size = 98
+ factor = 8
+ expected_image_shape_list = [(81, 57, 3),
+ (49, 97, 3),
+ (33, 97, 3)]
+ expected_label_shape_list = [(81, 57, 1),
+ (49, 97, 1),
+ (33, 97, 1)]
+ for i, test_shape in enumerate(test_shapes):
+ image = tf.random_normal([test_shape[0], test_shape[1], 3])
+ label = tf.random_normal([test_shape[0], test_shape[1], 1])
+ new_tensor_list = preprocess_utils.resize_to_range(
+ image=image,
+ label=label,
+ min_size=min_size,
+ max_size=max_size,
+ factor=factor,
+ align_corners=True)
+ with self.test_session() as session:
+ new_tensor_list = session.run(new_tensor_list)
+ self.assertEqual(new_tensor_list[0].shape, expected_image_shape_list[i])
+ self.assertEqual(new_tensor_list[1].shape, expected_label_shape_list[i])
+
+ def testResizeTensorsToRangeWithFactorAndLabelShapeCHW(self):
+ test_shapes = [[60, 40],
+ [15, 30],
+ [15, 50]]
+ min_size = 50
+ max_size = 98
+ factor = 8
+ expected_image_shape_list = [(81, 57, 3),
+ (49, 97, 3),
+ (33, 97, 3)]
+ expected_label_shape_list = [(5, 81, 57),
+ (5, 49, 97),
+ (5, 33, 97)]
+ for i, test_shape in enumerate(test_shapes):
+ image = tf.random_normal([test_shape[0], test_shape[1], 3])
+ label = tf.random_normal([5, test_shape[0], test_shape[1]])
+ new_tensor_list = preprocess_utils.resize_to_range(
+ image=image,
+ label=label,
+ min_size=min_size,
+ max_size=max_size,
+ factor=factor,
+ align_corners=True,
+ label_layout_is_chw=True)
+ with self.test_session() as session:
+ new_tensor_list = session.run(new_tensor_list)
+ self.assertEqual(new_tensor_list[0].shape, expected_image_shape_list[i])
+ self.assertEqual(new_tensor_list[1].shape, expected_label_shape_list[i])
+
+ def testResizeTensorsToRangeWithSimilarMinMaxSizes(self):
+ test_shapes = [[60, 40],
+ [15, 30],
+ [15, 50]]
+ # Values set so that one of the side = 97.
+ min_size = 96
+ max_size = 98
+ factor = 8
+ expected_image_shape_list = [(97, 65, 3),
+ (49, 97, 3),
+ (33, 97, 3)]
+ expected_label_shape_list = [(97, 65, 1),
+ (49, 97, 1),
+ (33, 97, 1)]
+ for i, test_shape in enumerate(test_shapes):
+ image = tf.random_normal([test_shape[0], test_shape[1], 3])
+ label = tf.random_normal([test_shape[0], test_shape[1], 1])
+ new_tensor_list = preprocess_utils.resize_to_range(
+ image=image,
+ label=label,
+ min_size=min_size,
+ max_size=max_size,
+ factor=factor,
+ align_corners=True)
+ with self.test_session() as session:
+ new_tensor_list = session.run(new_tensor_list)
+ self.assertEqual(new_tensor_list[0].shape, expected_image_shape_list[i])
+ self.assertEqual(new_tensor_list[1].shape, expected_label_shape_list[i])
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/resnet_v1_beta.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/resnet_v1_beta.py"
new file mode 100644
index 00000000..91b72e43
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/resnet_v1_beta.py"
@@ -0,0 +1,517 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Resnet v1 model variants.
+
+Code branched out from slim/nets/resnet_v1.py; please refer to it for
+more details.
+
+The original ResNet-v1 models were proposed by:
+[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
+ Deep Residual Learning for Image Recognition. arXiv:1512.03385
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import functools
+import tensorflow as tf
+
+from tensorflow.contrib.slim.nets import resnet_utils
+
+slim = tf.contrib.slim
+
+_DEFAULT_MULTI_GRID = [1, 1, 1]
+
+
+@slim.add_arg_scope
+def bottleneck(inputs,
+ depth,
+ depth_bottleneck,
+ stride,
+ unit_rate=1,
+ rate=1,
+ outputs_collections=None,
+ scope=None):
+ """Bottleneck residual unit variant with BN after convolutions.
+
+ This is the original residual unit proposed in [1]. See Fig. 1(a) of [2] for
+  its definition. Note that we use the bottleneck variant here, which has an
+  extra bottleneck layer.
+
+ When putting together two consecutive ResNet blocks that use this unit, one
+ should use stride = 2 in the last unit of the first block.
+
+ Args:
+ inputs: A tensor of size [batch, height, width, channels].
+ depth: The depth of the ResNet unit output.
+ depth_bottleneck: The depth of the bottleneck layers.
+ stride: The ResNet unit's stride. Determines the amount of downsampling of
+ the units output compared to its input.
+ unit_rate: An integer, unit rate for atrous convolution.
+ rate: An integer, rate for atrous convolution.
+ outputs_collections: Collection to add the ResNet unit output.
+ scope: Optional variable_scope.
+
+ Returns:
+ The ResNet unit's output.
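+
+  For example, resnet_v1_beta_block() below instantiates this unit with
+  arguments such as {'depth': 256, 'depth_bottleneck': 64, 'stride': 1,
+  'unit_rate': 1}, i.e. a 1x1/3x3/1x1 bottleneck whose output has 256
+  channels.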
+ """
+ with tf.variable_scope(scope, 'bottleneck_v1', [inputs]) as sc:
+ depth_in = slim.utils.last_dimension(inputs.get_shape(), min_rank=4)
+ if depth == depth_in:
+ shortcut = resnet_utils.subsample(inputs, stride, 'shortcut')
+ else:
+ shortcut = slim.conv2d(
+ inputs,
+ depth,
+ [1, 1],
+ stride=stride,
+ activation_fn=None,
+ scope='shortcut')
+
+ residual = slim.conv2d(inputs, depth_bottleneck, [1, 1], stride=1,
+ scope='conv1')
+ residual = resnet_utils.conv2d_same(residual, depth_bottleneck, 3, stride,
+ rate=rate*unit_rate, scope='conv2')
+ residual = slim.conv2d(residual, depth, [1, 1], stride=1,
+ activation_fn=None, scope='conv3')
+ output = tf.nn.relu(shortcut + residual)
+
+ return slim.utils.collect_named_outputs(outputs_collections,
+ sc.name,
+ output)
+
+
+def root_block_fn_for_beta_variant(net):
+ """Gets root_block_fn for beta variant.
+
+ ResNet-v1 beta variant modifies the first original 7x7 convolution to three
+ 3x3 convolutions.
+
+ Args:
+ net: A tensor of size [batch, height, width, channels], input to the model.
+
+ Returns:
+ A tensor after three 3x3 convolutions.
+ """
+ net = resnet_utils.conv2d_same(net, 64, 3, stride=2, scope='conv1_1')
+ net = resnet_utils.conv2d_same(net, 64, 3, stride=1, scope='conv1_2')
+ net = resnet_utils.conv2d_same(net, 128, 3, stride=1, scope='conv1_3')
+
+ return net
+
+
+def resnet_v1_beta(inputs,
+ blocks,
+ num_classes=None,
+ is_training=None,
+ global_pool=True,
+ output_stride=None,
+ root_block_fn=None,
+ reuse=None,
+ scope=None):
+ """Generator for v1 ResNet models (beta variant).
+
+ This function generates a family of modified ResNet v1 models. In particular,
+ the first original 7x7 convolution is replaced with three 3x3 convolutions.
+ See the resnet_v1_*() methods for specific model instantiations, obtained by
+ selecting different block instantiations that produce ResNets of various
+ depths.
+
+  The code is modified from slim/nets/resnet_v1.py; please refer to it for
+  more details.
+
+ Args:
+ inputs: A tensor of size [batch, height_in, width_in, channels].
+ blocks: A list of length equal to the number of ResNet blocks. Each element
+ is a resnet_utils.Block object describing the units in the block.
+ num_classes: Number of predicted classes for classification tasks. If None
+ we return the features before the logit layer.
+ is_training: Enable/disable is_training for batch normalization.
+ global_pool: If True, we perform global average pooling before computing the
+ logits. Set to True for image classification, False for dense prediction.
+ output_stride: If None, then the output will be computed at the nominal
+ network stride. If output_stride is not None, it specifies the requested
+ ratio of input to output spatial resolution.
+ root_block_fn: The function consisting of convolution operations applied to
+ the root input. If root_block_fn is None, use the original setting of
+      ResNet-v1, which is simply a single 7x7 convolution with stride=2.
+ reuse: whether or not the network and its variables should be reused. To be
+ able to reuse 'scope' must be given.
+ scope: Optional variable_scope.
+
+ Returns:
+ net: A rank-4 tensor of size [batch, height_out, width_out, channels_out].
+ If global_pool is False, then height_out and width_out are reduced by a
+ factor of output_stride compared to the respective height_in and width_in,
+ else both height_out and width_out equal one. If num_classes is None, then
+ net is the output of the last ResNet block, potentially after global
+ average pooling. If num_classes is not None, net contains the pre-softmax
+ activations.
+ end_points: A dictionary from components of the network to the corresponding
+ activation.
+
+ Raises:
+ ValueError: If the target output_stride is not valid.
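+
+  A minimal usage sketch (illustrative; see resnet_v1_50() below for the
+  canonical block configuration):
+    blocks = [
+        resnet_v1_beta_block('block1', base_depth=64, num_units=3, stride=2),
+    ]
+    net, end_points = resnet_v1_beta(
+        inputs, blocks, global_pool=False, output_stride=None)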
+ """
+ if root_block_fn is None:
+ root_block_fn = functools.partial(resnet_utils.conv2d_same,
+ num_outputs=64,
+ kernel_size=7,
+ stride=2,
+ scope='conv1')
+ with tf.variable_scope(scope, 'resnet_v1', [inputs], reuse=reuse) as sc:
+ end_points_collection = sc.original_name_scope + '_end_points'
+ with slim.arg_scope([slim.conv2d, bottleneck,
+ resnet_utils.stack_blocks_dense],
+ outputs_collections=end_points_collection):
+ if is_training is not None:
+ arg_scope = slim.arg_scope([slim.batch_norm], is_training=is_training)
+ else:
+ arg_scope = slim.arg_scope([])
+ with arg_scope:
+ net = inputs
+ if output_stride is not None:
+ if output_stride % 4 != 0:
+ raise ValueError('The output_stride needs to be a multiple of 4.')
+          output_stride //= 4  # Use integer division to keep an int.
+ net = root_block_fn(net)
+ net = slim.max_pool2d(net, 3, stride=2, padding='SAME', scope='pool1')
+ net = resnet_utils.stack_blocks_dense(net, blocks, output_stride)
+
+ if global_pool:
+ # Global average pooling.
+ net = tf.reduce_mean(net, [1, 2], name='pool5', keepdims=True)
+ if num_classes is not None:
+ net = slim.conv2d(net, num_classes, [1, 1], activation_fn=None,
+ normalizer_fn=None, scope='logits')
+ # Convert end_points_collection into a dictionary of end_points.
+ end_points = slim.utils.convert_collection_to_dict(
+ end_points_collection)
+ if num_classes is not None:
+ end_points['predictions'] = slim.softmax(net, scope='predictions')
+ return net, end_points
+
+
+def resnet_v1_beta_block(scope, base_depth, num_units, stride):
+ """Helper function for creating a resnet_v1 beta variant bottleneck block.
+
+ Args:
+ scope: The scope of the block.
+ base_depth: The depth of the bottleneck layer for each unit.
+ num_units: The number of units in the block.
+ stride: The stride of the block, implemented as a stride in the last unit.
+ All other units have stride=1.
+
+ Returns:
+ A resnet_v1 bottleneck block.
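+
+  For example, resnet_v1_beta_block('block1', base_depth=64, num_units=3,
+  stride=2) creates three bottleneck units with depth 256 and
+  depth_bottleneck 64, where only the last unit uses stride 2.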
+ """
+ return resnet_utils.Block(scope, bottleneck, [{
+ 'depth': base_depth * 4,
+ 'depth_bottleneck': base_depth,
+ 'stride': 1,
+ 'unit_rate': 1
+ }] * (num_units - 1) + [{
+ 'depth': base_depth * 4,
+ 'depth_bottleneck': base_depth,
+ 'stride': stride,
+ 'unit_rate': 1
+ }])
+
+
+def resnet_v1_50(inputs,
+ num_classes=None,
+ is_training=None,
+ global_pool=False,
+ output_stride=None,
+ multi_grid=None,
+ reuse=None,
+ scope='resnet_v1_50'):
+ """Resnet v1 50.
+
+ Args:
+ inputs: A tensor of size [batch, height_in, width_in, channels].
+ num_classes: Number of predicted classes for classification tasks. If None
+ we return the features before the logit layer.
+ is_training: Enable/disable is_training for batch normalization.
+ global_pool: If True, we perform global average pooling before computing the
+ logits. Set to True for image classification, False for dense prediction.
+ output_stride: If None, then the output will be computed at the nominal
+ network stride. If output_stride is not None, it specifies the requested
+ ratio of input to output spatial resolution.
+ multi_grid: Employ a hierarchy of different atrous rates within network.
+ reuse: whether or not the network and its variables should be reused. To be
+ able to reuse 'scope' must be given.
+ scope: Optional variable_scope.
+
+ Returns:
+ net: A rank-4 tensor of size [batch, height_out, width_out, channels_out].
+ If global_pool is False, then height_out and width_out are reduced by a
+ factor of output_stride compared to the respective height_in and width_in,
+ else both height_out and width_out equal one. If num_classes is None, then
+ net is the output of the last ResNet block, potentially after global
+ average pooling. If num_classes is not None, net contains the pre-softmax
+ activations.
+ end_points: A dictionary from components of the network to the corresponding
+ activation.
+
+ Raises:
+ ValueError: if multi_grid is not None and does not have length = 3.
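+
+  A usage sketch (illustrative; output_stride=16 with multi_grid=[1, 2, 4]
+  is a common dense-prediction setting):
+    images = tf.placeholder(tf.float32, [None, 513, 513, 3])
+    net, end_points = resnet_v1_50(
+        images, output_stride=16, multi_grid=[1, 2, 4])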
+ """
+ if multi_grid is None:
+ multi_grid = _DEFAULT_MULTI_GRID
+ else:
+ if len(multi_grid) != 3:
+ raise ValueError('Expect multi_grid to have length 3.')
+
+ blocks = [
+ resnet_v1_beta_block(
+ 'block1', base_depth=64, num_units=3, stride=2),
+ resnet_v1_beta_block(
+ 'block2', base_depth=128, num_units=4, stride=2),
+ resnet_v1_beta_block(
+ 'block3', base_depth=256, num_units=6, stride=2),
+ resnet_utils.Block('block4', bottleneck, [
+ {'depth': 2048,
+ 'depth_bottleneck': 512,
+ 'stride': 1,
+ 'unit_rate': rate} for rate in multi_grid]),
+ ]
+ return resnet_v1_beta(
+ inputs,
+ blocks=blocks,
+ num_classes=num_classes,
+ is_training=is_training,
+ global_pool=global_pool,
+ output_stride=output_stride,
+ reuse=reuse,
+ scope=scope)
+
+
+def resnet_v1_50_beta(inputs,
+ num_classes=None,
+ is_training=None,
+ global_pool=False,
+ output_stride=None,
+ multi_grid=None,
+ reuse=None,
+ scope='resnet_v1_50'):
+ """Resnet v1 50 beta variant.
+
+ This variant modifies the first convolution layer of ResNet-v1-50. In
+ particular, it changes the original one 7x7 convolution to three 3x3
+ convolutions.
+
+ Args:
+ inputs: A tensor of size [batch, height_in, width_in, channels].
+ num_classes: Number of predicted classes for classification tasks. If None
+ we return the features before the logit layer.
+ is_training: Enable/disable is_training for batch normalization.
+ global_pool: If True, we perform global average pooling before computing the
+ logits. Set to True for image classification, False for dense prediction.
+ output_stride: If None, then the output will be computed at the nominal
+ network stride. If output_stride is not None, it specifies the requested
+ ratio of input to output spatial resolution.
+ multi_grid: Employ a hierarchy of different atrous rates within network.
+ reuse: whether or not the network and its variables should be reused. To be
+ able to reuse 'scope' must be given.
+ scope: Optional variable_scope.
+
+ Returns:
+ net: A rank-4 tensor of size [batch, height_out, width_out, channels_out].
+ If global_pool is False, then height_out and width_out are reduced by a
+ factor of output_stride compared to the respective height_in and width_in,
+ else both height_out and width_out equal one. If num_classes is None, then
+ net is the output of the last ResNet block, potentially after global
+ average pooling. If num_classes is not None, net contains the pre-softmax
+ activations.
+ end_points: A dictionary from components of the network to the corresponding
+ activation.
+
+ Raises:
+ ValueError: if multi_grid is not None and does not have length = 3.
+ """
+ if multi_grid is None:
+ multi_grid = _DEFAULT_MULTI_GRID
+ else:
+ if len(multi_grid) != 3:
+ raise ValueError('Expect multi_grid to have length 3.')
+
+ blocks = [
+ resnet_v1_beta_block(
+ 'block1', base_depth=64, num_units=3, stride=2),
+ resnet_v1_beta_block(
+ 'block2', base_depth=128, num_units=4, stride=2),
+ resnet_v1_beta_block(
+ 'block3', base_depth=256, num_units=6, stride=2),
+ resnet_utils.Block('block4', bottleneck, [
+ {'depth': 2048,
+ 'depth_bottleneck': 512,
+ 'stride': 1,
+ 'unit_rate': rate} for rate in multi_grid]),
+ ]
+ return resnet_v1_beta(
+ inputs,
+ blocks=blocks,
+ num_classes=num_classes,
+ is_training=is_training,
+ global_pool=global_pool,
+ output_stride=output_stride,
+ root_block_fn=functools.partial(root_block_fn_for_beta_variant),
+ reuse=reuse,
+ scope=scope)
+
+
+def resnet_v1_101(inputs,
+ num_classes=None,
+ is_training=None,
+ global_pool=False,
+ output_stride=None,
+ multi_grid=None,
+ reuse=None,
+ scope='resnet_v1_101'):
+ """Resnet v1 101.
+
+ Args:
+ inputs: A tensor of size [batch, height_in, width_in, channels].
+ num_classes: Number of predicted classes for classification tasks. If None
+ we return the features before the logit layer.
+ is_training: Enable/disable is_training for batch normalization.
+ global_pool: If True, we perform global average pooling before computing the
+ logits. Set to True for image classification, False for dense prediction.
+ output_stride: If None, then the output will be computed at the nominal
+ network stride. If output_stride is not None, it specifies the requested
+ ratio of input to output spatial resolution.
+ multi_grid: Employ a hierarchy of different atrous rates within network.
+ reuse: whether or not the network and its variables should be reused. To be
+ able to reuse 'scope' must be given.
+ scope: Optional variable_scope.
+
+ Returns:
+ net: A rank-4 tensor of size [batch, height_out, width_out, channels_out].
+ If global_pool is False, then height_out and width_out are reduced by a
+ factor of output_stride compared to the respective height_in and width_in,
+ else both height_out and width_out equal one. If num_classes is None, then
+ net is the output of the last ResNet block, potentially after global
+ average pooling. If num_classes is not None, net contains the pre-softmax
+ activations.
+ end_points: A dictionary from components of the network to the corresponding
+ activation.
+
+ Raises:
+ ValueError: if multi_grid is not None and does not have length = 3.
+ """
+ if multi_grid is None:
+ multi_grid = _DEFAULT_MULTI_GRID
+ else:
+ if len(multi_grid) != 3:
+ raise ValueError('Expect multi_grid to have length 3.')
+
+ blocks = [
+ resnet_v1_beta_block(
+ 'block1', base_depth=64, num_units=3, stride=2),
+ resnet_v1_beta_block(
+ 'block2', base_depth=128, num_units=4, stride=2),
+ resnet_v1_beta_block(
+ 'block3', base_depth=256, num_units=23, stride=2),
+ resnet_utils.Block('block4', bottleneck, [
+ {'depth': 2048,
+ 'depth_bottleneck': 512,
+ 'stride': 1,
+ 'unit_rate': rate} for rate in multi_grid]),
+ ]
+ return resnet_v1_beta(
+ inputs,
+ blocks=blocks,
+ num_classes=num_classes,
+ is_training=is_training,
+ global_pool=global_pool,
+ output_stride=output_stride,
+ reuse=reuse,
+ scope=scope)
+
+
+def resnet_v1_101_beta(inputs,
+ num_classes=None,
+ is_training=None,
+ global_pool=False,
+ output_stride=None,
+ multi_grid=None,
+ reuse=None,
+ scope='resnet_v1_101'):
+ """Resnet v1 101 beta variant.
+
+ This variant modifies the first convolution layer of ResNet-v1-101. In
+ particular, it changes the original one 7x7 convolution to three 3x3
+ convolutions.
+
+ Args:
+ inputs: A tensor of size [batch, height_in, width_in, channels].
+ num_classes: Number of predicted classes for classification tasks. If None
+ we return the features before the logit layer.
+ is_training: Enable/disable is_training for batch normalization.
+ global_pool: If True, we perform global average pooling before computing the
+ logits. Set to True for image classification, False for dense prediction.
+ output_stride: If None, then the output will be computed at the nominal
+ network stride. If output_stride is not None, it specifies the requested
+ ratio of input to output spatial resolution.
+ multi_grid: Employ a hierarchy of different atrous rates within network.
+ reuse: whether or not the network and its variables should be reused. To be
+ able to reuse 'scope' must be given.
+ scope: Optional variable_scope.
+
+ Returns:
+ net: A rank-4 tensor of size [batch, height_out, width_out, channels_out].
+ If global_pool is False, then height_out and width_out are reduced by a
+ factor of output_stride compared to the respective height_in and width_in,
+ else both height_out and width_out equal one. If num_classes is None, then
+ net is the output of the last ResNet block, potentially after global
+ average pooling. If num_classes is not None, net contains the pre-softmax
+ activations.
+ end_points: A dictionary from components of the network to the corresponding
+ activation.
+
+ Raises:
+ ValueError: if multi_grid is not None and does not have length = 3.
+ """
+ if multi_grid is None:
+ multi_grid = _DEFAULT_MULTI_GRID
+ else:
+ if len(multi_grid) != 3:
+ raise ValueError('Expect multi_grid to have length 3.')
+
+ blocks = [
+ resnet_v1_beta_block(
+ 'block1', base_depth=64, num_units=3, stride=2),
+ resnet_v1_beta_block(
+ 'block2', base_depth=128, num_units=4, stride=2),
+ resnet_v1_beta_block(
+ 'block3', base_depth=256, num_units=23, stride=2),
+ resnet_utils.Block('block4', bottleneck, [
+ {'depth': 2048,
+ 'depth_bottleneck': 512,
+ 'stride': 1,
+ 'unit_rate': rate} for rate in multi_grid]),
+ ]
+ return resnet_v1_beta(
+ inputs,
+ blocks=blocks,
+ num_classes=num_classes,
+ is_training=is_training,
+ global_pool=global_pool,
+ output_stride=output_stride,
+ root_block_fn=functools.partial(root_block_fn_for_beta_variant),
+ reuse=reuse,
+ scope=scope)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/resnet_v1_beta_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/resnet_v1_beta_test.py"
new file mode 100644
index 00000000..49ea3eeb
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/resnet_v1_beta_test.py"
@@ -0,0 +1,274 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for resnet_v1_beta module."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import functools
+
+import numpy as np
+import tensorflow as tf
+
+from deeplab.core import resnet_v1_beta
+from tensorflow.contrib.slim.nets import resnet_utils
+
+slim = tf.contrib.slim
+
+
+def create_test_input(batch, height, width, channels):
+ """Create test input tensor."""
+ if None in [batch, height, width, channels]:
+ return tf.placeholder(tf.float32, (batch, height, width, channels))
+ else:
+ return tf.to_float(
+ np.tile(
+ np.reshape(
+ np.reshape(np.arange(height), [height, 1]) +
+ np.reshape(np.arange(width), [1, width]),
+ [1, height, width, 1]),
+ [batch, 1, 1, channels]))
+
+
+class ResnetCompleteNetworkTest(tf.test.TestCase):
+ """Tests with complete small ResNet v1 networks."""
+
+ def _resnet_small(self,
+ inputs,
+ num_classes=None,
+ is_training=True,
+ global_pool=True,
+ output_stride=None,
+ multi_grid=None,
+ reuse=None,
+ scope='resnet_v1_small'):
+ """A shallow and thin ResNet v1 for faster tests."""
+ if multi_grid is None:
+ multi_grid = [1, 1, 1]
+ else:
+ if len(multi_grid) != 3:
+ raise ValueError('Expect multi_grid to have length 3.')
+
+ block = resnet_v1_beta.resnet_v1_beta_block
+ blocks = [
+ block('block1', base_depth=1, num_units=3, stride=2),
+ block('block2', base_depth=2, num_units=3, stride=2),
+ block('block3', base_depth=4, num_units=3, stride=2),
+ resnet_utils.Block('block4', resnet_v1_beta.bottleneck, [
+ {'depth': 32,
+ 'depth_bottleneck': 8,
+ 'stride': 1,
+ 'unit_rate': rate} for rate in multi_grid])]
+
+ return resnet_v1_beta.resnet_v1_beta(
+ inputs,
+ blocks,
+ num_classes=num_classes,
+ is_training=is_training,
+ global_pool=global_pool,
+ output_stride=output_stride,
+ root_block_fn=functools.partial(
+ resnet_v1_beta.root_block_fn_for_beta_variant),
+ reuse=reuse,
+ scope=scope)
+
+ def testClassificationEndPoints(self):
+ global_pool = True
+ num_classes = 10
+ inputs = create_test_input(2, 224, 224, 3)
+ with slim.arg_scope(resnet_utils.resnet_arg_scope()):
+ logits, end_points = self._resnet_small(inputs,
+ num_classes,
+ global_pool=global_pool,
+ scope='resnet')
+
+ self.assertTrue(logits.op.name.startswith('resnet/logits'))
+ self.assertListEqual(logits.get_shape().as_list(), [2, 1, 1, num_classes])
+ self.assertTrue('predictions' in end_points)
+ self.assertListEqual(end_points['predictions'].get_shape().as_list(),
+ [2, 1, 1, num_classes])
+
+ def testClassificationEndPointsWithMultigrid(self):
+ global_pool = True
+ num_classes = 10
+ inputs = create_test_input(2, 224, 224, 3)
+ multi_grid = [1, 2, 4]
+ with slim.arg_scope(resnet_utils.resnet_arg_scope()):
+ logits, end_points = self._resnet_small(inputs,
+ num_classes,
+ global_pool=global_pool,
+ multi_grid=multi_grid,
+ scope='resnet')
+
+ self.assertTrue(logits.op.name.startswith('resnet/logits'))
+ self.assertListEqual(logits.get_shape().as_list(), [2, 1, 1, num_classes])
+ self.assertTrue('predictions' in end_points)
+ self.assertListEqual(end_points['predictions'].get_shape().as_list(),
+ [2, 1, 1, num_classes])
+
+ def testClassificationShapes(self):
+ global_pool = True
+ num_classes = 10
+ inputs = create_test_input(2, 224, 224, 3)
+ with slim.arg_scope(resnet_utils.resnet_arg_scope()):
+ _, end_points = self._resnet_small(inputs,
+ num_classes,
+ global_pool=global_pool,
+ scope='resnet')
+ endpoint_to_shape = {
+ 'resnet/conv1_1': [2, 112, 112, 64],
+ 'resnet/conv1_2': [2, 112, 112, 64],
+ 'resnet/conv1_3': [2, 112, 112, 128],
+ 'resnet/block1': [2, 28, 28, 4],
+ 'resnet/block2': [2, 14, 14, 8],
+ 'resnet/block3': [2, 7, 7, 16],
+ 'resnet/block4': [2, 7, 7, 32]}
+      for endpoint, shape in endpoint_to_shape.items():
+ self.assertListEqual(end_points[endpoint].get_shape().as_list(), shape)
+
+ def testFullyConvolutionalEndpointShapes(self):
+ global_pool = False
+ num_classes = 10
+ inputs = create_test_input(2, 321, 321, 3)
+ with slim.arg_scope(resnet_utils.resnet_arg_scope()):
+ _, end_points = self._resnet_small(inputs,
+ num_classes,
+ global_pool=global_pool,
+ scope='resnet')
+ endpoint_to_shape = {
+ 'resnet/conv1_1': [2, 161, 161, 64],
+ 'resnet/conv1_2': [2, 161, 161, 64],
+ 'resnet/conv1_3': [2, 161, 161, 128],
+ 'resnet/block1': [2, 41, 41, 4],
+ 'resnet/block2': [2, 21, 21, 8],
+ 'resnet/block3': [2, 11, 11, 16],
+ 'resnet/block4': [2, 11, 11, 32]}
+      for endpoint, shape in endpoint_to_shape.items():
+ self.assertListEqual(end_points[endpoint].get_shape().as_list(), shape)
+
+ def testAtrousFullyConvolutionalEndpointShapes(self):
+ global_pool = False
+ num_classes = 10
+ output_stride = 8
+ inputs = create_test_input(2, 321, 321, 3)
+ with slim.arg_scope(resnet_utils.resnet_arg_scope()):
+ _, end_points = self._resnet_small(inputs,
+ num_classes,
+ global_pool=global_pool,
+ output_stride=output_stride,
+ scope='resnet')
+ endpoint_to_shape = {
+ 'resnet/conv1_1': [2, 161, 161, 64],
+ 'resnet/conv1_2': [2, 161, 161, 64],
+ 'resnet/conv1_3': [2, 161, 161, 128],
+ 'resnet/block1': [2, 41, 41, 4],
+ 'resnet/block2': [2, 41, 41, 8],
+ 'resnet/block3': [2, 41, 41, 16],
+ 'resnet/block4': [2, 41, 41, 32]}
+      for endpoint, shape in endpoint_to_shape.items():
+ self.assertListEqual(end_points[endpoint].get_shape().as_list(), shape)
+
+ def testAtrousFullyConvolutionalValues(self):
+ """Verify dense feature extraction with atrous convolution."""
+ nominal_stride = 32
+ for output_stride in [4, 8, 16, 32, None]:
+ with slim.arg_scope(resnet_utils.resnet_arg_scope()):
+ with tf.Graph().as_default():
+ with self.test_session() as sess:
+ tf.set_random_seed(0)
+ inputs = create_test_input(2, 81, 81, 3)
+ # Dense feature extraction followed by subsampling.
+ output, _ = self._resnet_small(inputs,
+ None,
+ is_training=False,
+ global_pool=False,
+ output_stride=output_stride)
+ if output_stride is None:
+ factor = 1
+ else:
+ factor = nominal_stride // output_stride
+ output = resnet_utils.subsample(output, factor)
+ # Make the two networks use the same weights.
+ tf.get_variable_scope().reuse_variables()
+ # Feature extraction at the nominal network rate.
+ expected, _ = self._resnet_small(inputs,
+ None,
+ is_training=False,
+ global_pool=False)
+ sess.run(tf.global_variables_initializer())
+ self.assertAllClose(output.eval(), expected.eval(),
+ atol=1e-4, rtol=1e-4)
+
+ def testUnknownBatchSize(self):
+ batch = 2
+ height, width = 65, 65
+ global_pool = True
+ num_classes = 10
+ inputs = create_test_input(None, height, width, 3)
+ with slim.arg_scope(resnet_utils.resnet_arg_scope()):
+ logits, _ = self._resnet_small(inputs,
+ num_classes,
+ global_pool=global_pool,
+ scope='resnet')
+ self.assertTrue(logits.op.name.startswith('resnet/logits'))
+ self.assertListEqual(logits.get_shape().as_list(),
+ [None, 1, 1, num_classes])
+ images = create_test_input(batch, height, width, 3)
+ with self.test_session() as sess:
+ sess.run(tf.global_variables_initializer())
+ output = sess.run(logits, {inputs: images.eval()})
+      self.assertEqual(output.shape, (batch, 1, 1, num_classes))
+
+ def testFullyConvolutionalUnknownHeightWidth(self):
+ batch = 2
+ height, width = 65, 65
+ global_pool = False
+ inputs = create_test_input(batch, None, None, 3)
+ with slim.arg_scope(resnet_utils.resnet_arg_scope()):
+ output, _ = self._resnet_small(inputs,
+ None,
+ global_pool=global_pool)
+ self.assertListEqual(output.get_shape().as_list(),
+ [batch, None, None, 32])
+ images = create_test_input(batch, height, width, 3)
+ with self.test_session() as sess:
+ sess.run(tf.global_variables_initializer())
+ output = sess.run(output, {inputs: images.eval()})
+      self.assertEqual(output.shape, (batch, 3, 3, 32))
+
+ def testAtrousFullyConvolutionalUnknownHeightWidth(self):
+ batch = 2
+ height, width = 65, 65
+ global_pool = False
+ output_stride = 8
+ inputs = create_test_input(batch, None, None, 3)
+ with slim.arg_scope(resnet_utils.resnet_arg_scope()):
+ output, _ = self._resnet_small(inputs,
+ None,
+ global_pool=global_pool,
+ output_stride=output_stride)
+ self.assertListEqual(output.get_shape().as_list(),
+ [batch, None, None, 32])
+ images = create_test_input(batch, height, width, 3)
+ with self.test_session() as sess:
+ sess.run(tf.global_variables_initializer())
+ output = sess.run(output, {inputs: images.eval()})
+      self.assertEqual(output.shape, (batch, 9, 9, 32))
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/utils.py"
new file mode 100644
index 00000000..60a87430
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/utils.py"
@@ -0,0 +1,84 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""This script contains utility functions."""
+import tensorflow as tf
+
+slim = tf.contrib.slim
+
+
+def scale_dimension(dim, scale):
+ """Scales the input dimension.
+
+ Args:
+ dim: Input dimension (a scalar or a scalar Tensor).
+ scale: The amount of scaling applied to the input.
+
+ Returns:
+ Scaled dimension.
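+
+  For example, scale_dimension(321, 0.5) returns int((321 - 1) * 0.5 + 1),
+  i.e. 161, and scale_dimension(321, 0.75) returns 241 (see utils_test.py).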
+ """
+ if isinstance(dim, tf.Tensor):
+ return tf.cast((tf.to_float(dim) - 1.0) * scale + 1.0, dtype=tf.int32)
+ else:
+ return int((float(dim) - 1.0) * scale + 1.0)
+
+
+def split_separable_conv2d(inputs,
+ filters,
+ kernel_size=3,
+ rate=1,
+ weight_decay=0.00004,
+ depthwise_weights_initializer_stddev=0.33,
+ pointwise_weights_initializer_stddev=0.06,
+ scope=None):
+ """Splits a separable conv2d into depthwise and pointwise conv2d.
+
+ This operation differs from `tf.layers.separable_conv2d` as this operation
+ applies activation function between depthwise and pointwise conv2d.
+
+ Args:
+ inputs: Input tensor with shape [batch, height, width, channels].
+ filters: Number of filters in the 1x1 pointwise convolution.
+    kernel_size: A list of length 2: [kernel_height, kernel_width] of the
+      filters. Can be an int if both values are the same.
+ rate: Atrous convolution rate for the depthwise convolution.
+ weight_decay: The weight decay to use for regularizing the model.
+ depthwise_weights_initializer_stddev: The standard deviation of the
+ truncated normal weight initializer for depthwise convolution.
+ pointwise_weights_initializer_stddev: The standard deviation of the
+ truncated normal weight initializer for pointwise convolution.
+ scope: Optional scope for the operation.
+
+ Returns:
+ Computed features after split separable conv2d.
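+
+  A usage sketch (parameter values are illustrative):
+    features = split_separable_conv2d(
+        inputs, filters=256, rate=6, scope='aspp_branch')
+    # A 3x3 depthwise convolution with atrous rate 6, followed by a 1x1
+    # pointwise convolution producing [batch, height, width, 256] features,
+    # with the activation function applied between the two.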
+ """
+ outputs = slim.separable_conv2d(
+ inputs,
+ None,
+ kernel_size=kernel_size,
+ depth_multiplier=1,
+ rate=rate,
+ weights_initializer=tf.truncated_normal_initializer(
+ stddev=depthwise_weights_initializer_stddev),
+ weights_regularizer=None,
+ scope=scope + '_depthwise')
+ return slim.conv2d(
+ outputs,
+ filters,
+ 1,
+ weights_initializer=tf.truncated_normal_initializer(
+ stddev=pointwise_weights_initializer_stddev),
+ weights_regularizer=slim.l2_regularizer(weight_decay),
+ scope=scope + '_pointwise')
\ No newline at end of file
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/utils_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/utils_test.py"
new file mode 100644
index 00000000..83843282
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/utils_test.py"
@@ -0,0 +1,32 @@
+
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for utils.py."""
+
+import tensorflow as tf
+
+from deeplab.core import utils
+
+
+class UtilsTest(tf.test.TestCase):
+
+ def testScaleDimensionOutput(self):
+ self.assertEqual(161, utils.scale_dimension(321, 0.5))
+ self.assertEqual(193, utils.scale_dimension(321, 0.6))
+ self.assertEqual(241, utils.scale_dimension(321, 0.75))
+
+
+if __name__ == '__main__':
+ tf.test.main()
\ No newline at end of file
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/xception.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/xception.py"
new file mode 100644
index 00000000..6fd306cb
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/xception.py"
@@ -0,0 +1,761 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+r"""Xception model.
+
+"Xception: Deep Learning with Depthwise Separable Convolutions"
+Fran{\c{c}}ois Chollet
+https://arxiv.org/abs/1610.02357
+
+We implement the modified version by Jifeng Dai et al. for their COCO 2017
+detection challenge submission, where the model is made deeper and has aligned
+features for dense prediction tasks. See their slides for details:
+
+"Deformable Convolutional Networks -- COCO Detection and Segmentation Challenge
+2017 Entry"
+Haozhi Qi, Zheng Zhang, Bin Xiao, Han Hu, Bowen Cheng, Yichen Wei and Jifeng Dai
+ICCV 2017 COCO Challenge workshop
+http://presentations.cocodataset.org/COCO17-Detect-MSRA.pdf
+
+We made a few more changes on top of MSRA's modifications:
+1. Fully convolutional: All the max-pooling layers are replaced with separable
+ conv2d with stride = 2. This allows us to use atrous convolution to extract
+ feature maps at any resolution.
+
+2. We support adding ReLU and BatchNorm after depthwise convolution, motivated
+ by the design of MobileNetv1.
+
+"MobileNets: Efficient Convolutional Neural Networks for Mobile Vision
+Applications"
+Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang,
+Tobias Weyand, Marco Andreetto, Hartwig Adam
+https://arxiv.org/abs/1704.04861
+"""
+import collections
+import tensorflow as tf
+
+from tensorflow.contrib.slim.nets import resnet_utils
+
+slim = tf.contrib.slim
+
+
+_DEFAULT_MULTI_GRID = [1, 1, 1]
+
+
+class Block(collections.namedtuple('Block', ['scope', 'unit_fn', 'args'])):
+ """A named tuple describing an Xception block.
+
+ Its parts are:
+ scope: The scope of the block.
+ unit_fn: The Xception unit function which takes as input a tensor and
+ returns another tensor with the output of the Xception unit.
+ args: A list of length equal to the number of units in the block. The list
+ contains one dictionary for each unit in the block to serve as argument to
+ unit_fn.
+ """
+
+
+def fixed_padding(inputs, kernel_size, rate=1):
+ """Pads the input along the spatial dimensions independently of input size.
+
+ Args:
+ inputs: A tensor of size [batch, height_in, width_in, channels].
+ kernel_size: The kernel to be used in the conv2d or max_pool2d operation.
+ Should be a positive integer.
+ rate: An integer, rate for atrous convolution.
+
+ Returns:
+ output: A tensor of size [batch, height_out, width_out, channels] with the
+ input, either intact (if kernel_size == 1) or padded (if kernel_size > 1).
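+
+  For example, with kernel_size=3 and rate=2 the effective kernel size is
+  3 + (3 - 1) * (2 - 1) = 5, so two rows/columns of zeros are added on each
+  spatial side (pad_beg = pad_end = 2).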
+ """
+ kernel_size_effective = kernel_size + (kernel_size - 1) * (rate - 1)
+ pad_total = kernel_size_effective - 1
+ pad_beg = pad_total // 2
+ pad_end = pad_total - pad_beg
+ padded_inputs = tf.pad(inputs, [[0, 0], [pad_beg, pad_end],
+ [pad_beg, pad_end], [0, 0]])
+ return padded_inputs
+
+
+@slim.add_arg_scope
+def separable_conv2d_same(inputs,
+ num_outputs,
+ kernel_size,
+ depth_multiplier,
+ stride,
+ rate=1,
+ use_explicit_padding=True,
+ regularize_depthwise=False,
+ scope=None,
+ **kwargs):
+ """Strided 2-D separable convolution with 'SAME' padding.
+
+ If stride > 1 and use_explicit_padding is True, then we do explicit zero-
+ padding, followed by conv2d with 'VALID' padding.
+
+ Note that
+
+ net = separable_conv2d_same(inputs, num_outputs, 3,
+ depth_multiplier=1, stride=stride)
+
+ is equivalent to
+
+ net = slim.separable_conv2d(inputs, num_outputs, 3,
+ depth_multiplier=1, stride=1, padding='SAME')
+ net = resnet_utils.subsample(net, factor=stride)
+
+ whereas
+
+ net = slim.separable_conv2d(inputs, num_outputs, 3, stride=stride,
+ depth_multiplier=1, padding='SAME')
+
+ is different when the input's height or width is even, which is why we add the
+ current function.
+
+  Consequently, if the input feature map has even height or width and
+  stride > 1, setting `use_explicit_padding=False` will result in feature
+  misalignment by one pixel along the corresponding dimension.
+
+ Args:
+ inputs: A 4-D tensor of size [batch, height_in, width_in, channels].
+ num_outputs: An integer, the number of output filters.
+ kernel_size: An int with the kernel_size of the filters.
+ depth_multiplier: The number of depthwise convolution output channels for
+ each input channel. The total number of depthwise convolution output
+ channels will be equal to `num_filters_in * depth_multiplier`.
+ stride: An integer, the output stride.
+ rate: An integer, rate for atrous convolution.
+    use_explicit_padding: If True, use explicit padding to make the model fully
+      compatible with the open source version, otherwise use the native
+      TensorFlow 'SAME' padding.
+    regularize_depthwise: Whether or not to apply L2-norm regularization on the
+      depthwise convolution weights.
+    scope: Scope.
+    **kwargs: Additional keyword arguments to pass to slim.conv2d.
+
+ Returns:
+ output: A 4-D tensor of size [batch, height_out, width_out, channels] with
+ the convolution output.
+ """
+ def _separable_conv2d(padding):
+ """Wrapper for separable conv2d."""
+ return slim.separable_conv2d(inputs,
+ num_outputs,
+ kernel_size,
+ depth_multiplier=depth_multiplier,
+ stride=stride,
+ rate=rate,
+ padding=padding,
+ scope=scope,
+ **kwargs)
+ def _split_separable_conv2d(padding):
+ """Splits separable conv2d into depthwise and pointwise conv2d."""
+ outputs = slim.separable_conv2d(inputs,
+ None,
+ kernel_size,
+ depth_multiplier=depth_multiplier,
+ stride=stride,
+ rate=rate,
+ padding=padding,
+ scope=scope + '_depthwise',
+ **kwargs)
+ return slim.conv2d(outputs,
+ num_outputs,
+ 1,
+ scope=scope + '_pointwise',
+ **kwargs)
+ if stride == 1 or not use_explicit_padding:
+ if regularize_depthwise:
+ outputs = _separable_conv2d(padding='SAME')
+ else:
+ outputs = _split_separable_conv2d(padding='SAME')
+ else:
+ inputs = fixed_padding(inputs, kernel_size, rate)
+ if regularize_depthwise:
+ outputs = _separable_conv2d(padding='VALID')
+ else:
+ outputs = _split_separable_conv2d(padding='VALID')
+ return outputs
+
+
+@slim.add_arg_scope
+def xception_module(inputs,
+ depth_list,
+ skip_connection_type,
+ stride,
+ unit_rate_list=None,
+ rate=1,
+ activation_fn_in_separable_conv=False,
+ regularize_depthwise=False,
+ outputs_collections=None,
+ scope=None):
+ """An Xception module.
+
+  The output of one Xception module is equal to the sum of `residual` and
+  `shortcut`, where `residual` is the feature computed by three separable
+  convolutions. The `shortcut` is the feature computed by 1x1 convolution with
+  or without striding. In some cases, the `shortcut` path could be a simple
+  identity function or none (i.e., no shortcut).
+
+  Note that we replace the max pooling operations in the Xception module with
+  another separable convolution with striding, since atrous rate is not properly
+  supported in the current TensorFlow max-pooling implementation.
+
+ Args:
+ inputs: A tensor of size [batch, height, width, channels].
+ depth_list: A list of three integers specifying the depth values of one
+ Xception module.
+ skip_connection_type: Skip connection type for the residual path. Only
+ supports 'conv', 'sum', or 'none'.
+    stride: The block unit's stride. Determines the amount of downsampling of
+      the unit's output compared to its input.
+    unit_rate_list: A list of three integers, determining the unit rate for
+      each separable convolution in the xception module.
+    rate: An integer, rate for atrous convolution.
+    activation_fn_in_separable_conv: Whether to include the activation function
+      in the separable convolution.
+    regularize_depthwise: Whether or not to apply L2-norm regularization on the
+      depthwise convolution weights.
+ outputs_collections: Collection to add the Xception unit output.
+ scope: Optional variable_scope.
+
+ Returns:
+ The Xception module's output.
+
+ Raises:
+ ValueError: If depth_list and unit_rate_list do not contain three elements,
+ or if stride != 1 for the third separable convolution operation in the
+ residual path, or unsupported skip connection type.
+ """
+ if len(depth_list) != 3:
+ raise ValueError('Expect three elements in depth_list.')
+ if unit_rate_list:
+ if len(unit_rate_list) != 3:
+ raise ValueError('Expect three elements in unit_rate_list.')
+
+ with tf.variable_scope(scope, 'xception_module', [inputs]) as sc:
+ residual = inputs
+
+ def _separable_conv(features, depth, kernel_size, depth_multiplier,
+ regularize_depthwise, rate, stride, scope):
+ if activation_fn_in_separable_conv:
+ activation_fn = tf.nn.relu
+ else:
+ activation_fn = None
+ features = tf.nn.relu(features)
+ return separable_conv2d_same(features,
+ depth,
+ kernel_size,
+ depth_multiplier=depth_multiplier,
+ stride=stride,
+ rate=rate,
+ activation_fn=activation_fn,
+ regularize_depthwise=regularize_depthwise,
+ scope=scope)
+ for i in range(3):
+ residual = _separable_conv(residual,
+ depth_list[i],
+ kernel_size=3,
+ depth_multiplier=1,
+ regularize_depthwise=regularize_depthwise,
+ rate=rate*unit_rate_list[i],
+ stride=stride if i == 2 else 1,
+ scope='separable_conv' + str(i+1))
+ if skip_connection_type == 'conv':
+ shortcut = slim.conv2d(inputs,
+ depth_list[-1],
+ [1, 1],
+ stride=stride,
+ activation_fn=None,
+ scope='shortcut')
+ outputs = residual + shortcut
+ elif skip_connection_type == 'sum':
+ outputs = residual + inputs
+ elif skip_connection_type == 'none':
+ outputs = residual
+ else:
+ raise ValueError('Unsupported skip connection type.')
+
+ return slim.utils.collect_named_outputs(outputs_collections,
+ sc.name,
+ outputs)
+
+
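+# Illustrative call sketch (argument values assumed for the example): a strided
+# module with a 1x1 convolutional shortcut, as used in the entry flow.
+#
+#   net = xception_module(net,
+#                         depth_list=[128, 128, 128],
+#                         skip_connection_type='conv',
+#                         stride=2,
+#                         unit_rate_list=[1, 1, 1],
+#                         rate=1,
+#                         scope='entry_flow/block1/unit_1/xception_module')
+
+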
+@slim.add_arg_scope
+def stack_blocks_dense(net,
+ blocks,
+ output_stride=None,
+ outputs_collections=None):
+ """Stacks Xception blocks and controls output feature density.
+
+  First, this function creates scopes for the Xception blocks in the form of
+  'block_name/unit_1', 'block_name/unit_2', etc.
+
+ Second, this function allows the user to explicitly control the output
+ stride, which is the ratio of the input to output spatial resolution. This
+ is useful for dense prediction tasks such as semantic segmentation or
+ object detection.
+
+ Control of the output feature density is implemented by atrous convolution.
+
+ Args:
+ net: A tensor of size [batch, height, width, channels].
+ blocks: A list of length equal to the number of Xception blocks. Each
+ element is an Xception Block object describing the units in the block.
+ output_stride: If None, then the output will be computed at the nominal
+ network stride. If output_stride is not None, it specifies the requested
+ ratio of input to output spatial resolution, which needs to be equal to
+ the product of unit strides from the start up to some level of Xception.
+ For example, if the Xception employs units with strides 1, 2, 1, 3, 4, 1,
+ then valid values for the output_stride are 1, 2, 6, 24 or None (which
+ is equivalent to output_stride=24).
+ outputs_collections: Collection to add the Xception block outputs.
+
+ Returns:
+ net: Output tensor with stride equal to the specified output_stride.
+
+ Raises:
+ ValueError: If the target output_stride is not valid.
+ """
+ # The current_stride variable keeps track of the effective stride of the
+ # activations. This allows us to invoke atrous convolution whenever applying
+ # the next residual unit would result in the activations having stride larger
+ # than the target output_stride.
+ current_stride = 1
+
+ # The atrous convolution rate parameter.
+ rate = 1
+
+ for block in blocks:
+ with tf.variable_scope(block.scope, 'block', [net]) as sc:
+ for i, unit in enumerate(block.args):
+ if output_stride is not None and current_stride > output_stride:
+ raise ValueError('The target output_stride cannot be reached.')
+ with tf.variable_scope('unit_%d' % (i + 1), values=[net]):
+ # If we have reached the target output_stride, then we need to employ
+ # atrous convolution with stride=1 and multiply the atrous rate by the
+ # current unit's stride for use in subsequent layers.
+ if output_stride is not None and current_stride == output_stride:
+ net = block.unit_fn(net, rate=rate, **dict(unit, stride=1))
+ rate *= unit.get('stride', 1)
+ else:
+ net = block.unit_fn(net, rate=1, **unit)
+ current_stride *= unit.get('stride', 1)
+
+ # Collect activations at the block's end before performing subsampling.
+ net = slim.utils.collect_named_outputs(outputs_collections, sc.name, net)
+
+ if output_stride is not None and current_stride != output_stride:
+ raise ValueError('The target output_stride cannot be reached.')
+
+ return net
+
+
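+# Illustrative sketch of the stride/rate bookkeeping above (unit strides
+# assumed for the example): with unit strides [2, 2, 2, 1] and output_stride=4,
+# the first two units run with their native stride, so current_stride becomes
+# 2 and then 4. Once current_stride == output_stride, each remaining unit runs
+# with stride=1 at the current atrous rate and then multiplies the rate by the
+# stride it skipped, so the fourth unit (and any later layer) uses rate=2.
+# Note that xception() below already divides the requested output_stride by 2
+# to account for the stride-2 root convolution before calling this function.
+
+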
+def xception(inputs,
+ blocks,
+ num_classes=None,
+ is_training=True,
+ global_pool=True,
+ keep_prob=0.5,
+ output_stride=None,
+ reuse=None,
+ scope=None):
+ """Generator for Xception models.
+
+  This function generates a family of Xception models. See the xception_*()
+  methods for specific model instantiations, obtained by selecting different
+  block instantiations that produce Xception models of various depths.
+
+ Args:
+ inputs: A tensor of size [batch, height_in, width_in, channels]. Must be
+ floating point. If a pretrained checkpoint is used, pixel values should be
+ the same as during training (see go/slim-classification-models for
+ specifics).
+ blocks: A list of length equal to the number of Xception blocks. Each
+ element is an Xception Block object describing the units in the block.
+ num_classes: Number of predicted classes for classification tasks.
+ If 0 or None, we return the features before the logit layer.
+    is_training: Whether batch_norm layers are in training mode.
+    global_pool: If True, we perform global average pooling before computing the
+      logits. Set to True for image classification, False for dense prediction.
+    keep_prob: Keep probability used in the pre-logits dropout layer.
+    output_stride: If None, then the output will be computed at the nominal
+      network stride. If output_stride is not None, it specifies the requested
+      ratio of input to output spatial resolution.
+    reuse: Whether or not the network and its variables should be reused. To be
+      able to reuse, 'scope' must be given.
+ scope: Optional variable_scope.
+
+ Returns:
+ net: A rank-4 tensor of size [batch, height_out, width_out, channels_out].
+ If global_pool is False, then height_out and width_out are reduced by a
+ factor of output_stride compared to the respective height_in and width_in,
+ else both height_out and width_out equal one. If num_classes is 0 or None,
+ then net is the output of the last Xception block, potentially after
+ global average pooling. If num_classes is a non-zero integer, net contains
+ the pre-softmax activations.
+ end_points: A dictionary from components of the network to the corresponding
+ activation.
+
+ Raises:
+ ValueError: If the target output_stride is not valid.
+ """
+ with tf.variable_scope(
+ scope, 'xception', [inputs], reuse=reuse) as sc:
+ end_points_collection = sc.original_name_scope + 'end_points'
+ with slim.arg_scope([slim.conv2d,
+ slim.separable_conv2d,
+ xception_module,
+ stack_blocks_dense],
+ outputs_collections=end_points_collection):
+ with slim.arg_scope([slim.batch_norm], is_training=is_training):
+ net = inputs
+ if output_stride is not None:
+ if output_stride % 2 != 0:
+ raise ValueError('The output_stride needs to be a multiple of 2.')
+ output_stride /= 2
+ # Root block function operated on inputs.
+ net = resnet_utils.conv2d_same(net, 32, 3, stride=2,
+ scope='entry_flow/conv1_1')
+ net = resnet_utils.conv2d_same(net, 64, 3, stride=1,
+ scope='entry_flow/conv1_2')
+
+ # Extract features for entry_flow, middle_flow, and exit_flow.
+ net = stack_blocks_dense(net, blocks, output_stride)
+
+ # Convert end_points_collection into a dictionary of end_points.
+ end_points = slim.utils.convert_collection_to_dict(
+ end_points_collection, clear_collection=True)
+
+ if global_pool:
+ # Global average pooling.
+ net = tf.reduce_mean(net, [1, 2], name='global_pool', keepdims=True)
+ end_points['global_pool'] = net
+ if num_classes:
+ net = slim.dropout(net, keep_prob=keep_prob, is_training=is_training,
+ scope='prelogits_dropout')
+ net = slim.conv2d(net, num_classes, [1, 1], activation_fn=None,
+ normalizer_fn=None, scope='logits')
+ end_points[sc.name + '/logits'] = net
+ end_points['predictions'] = slim.softmax(net, scope='predictions')
+ return net, end_points
+
+
+def xception_block(scope,
+ depth_list,
+ skip_connection_type,
+ activation_fn_in_separable_conv,
+ regularize_depthwise,
+ num_units,
+ stride,
+ unit_rate_list=None):
+ """Helper function for creating a Xception block.
+
+ Args:
+ scope: The scope of the block.
+ depth_list: The depth of the bottleneck layer for each unit.
+ skip_connection_type: Skip connection type for the residual path. Only
+ supports 'conv', 'sum', or 'none'.
+    activation_fn_in_separable_conv: Whether to include the activation function
+      in the separable convolution.
+    regularize_depthwise: Whether or not to apply L2-norm regularization on the
+      depthwise convolution weights.
+ num_units: The number of units in the block.
+ stride: The stride of the block, implemented as a stride in the last unit.
+ All other units have stride=1.
+ unit_rate_list: A list of three integers, determining the unit rate in the
+ corresponding xception block.
+
+ Returns:
+ An Xception block.
+ """
+ if unit_rate_list is None:
+ unit_rate_list = _DEFAULT_MULTI_GRID
+ return Block(scope, xception_module, [{
+ 'depth_list': depth_list,
+ 'skip_connection_type': skip_connection_type,
+ 'activation_fn_in_separable_conv': activation_fn_in_separable_conv,
+ 'regularize_depthwise': regularize_depthwise,
+ 'stride': stride,
+ 'unit_rate_list': unit_rate_list,
+ }] * num_units)
+
+
+def xception_41(inputs,
+ num_classes=None,
+ is_training=True,
+ global_pool=True,
+ keep_prob=0.5,
+ output_stride=None,
+ regularize_depthwise=False,
+ multi_grid=None,
+ reuse=None,
+ scope='xception_41'):
+ """Xception-41 model."""
+ blocks = [
+ xception_block('entry_flow/block1',
+ depth_list=[128, 128, 128],
+ skip_connection_type='conv',
+ activation_fn_in_separable_conv=False,
+ regularize_depthwise=regularize_depthwise,
+ num_units=1,
+ stride=2),
+ xception_block('entry_flow/block2',
+ depth_list=[256, 256, 256],
+ skip_connection_type='conv',
+ activation_fn_in_separable_conv=False,
+ regularize_depthwise=regularize_depthwise,
+ num_units=1,
+ stride=2),
+ xception_block('entry_flow/block3',
+ depth_list=[728, 728, 728],
+ skip_connection_type='conv',
+ activation_fn_in_separable_conv=False,
+ regularize_depthwise=regularize_depthwise,
+ num_units=1,
+ stride=2),
+ xception_block('middle_flow/block1',
+ depth_list=[728, 728, 728],
+ skip_connection_type='sum',
+ activation_fn_in_separable_conv=False,
+ regularize_depthwise=regularize_depthwise,
+ num_units=8,
+ stride=1),
+ xception_block('exit_flow/block1',
+ depth_list=[728, 1024, 1024],
+ skip_connection_type='conv',
+ activation_fn_in_separable_conv=False,
+ regularize_depthwise=regularize_depthwise,
+ num_units=1,
+ stride=2),
+ xception_block('exit_flow/block2',
+ depth_list=[1536, 1536, 2048],
+ skip_connection_type='none',
+ activation_fn_in_separable_conv=True,
+ regularize_depthwise=regularize_depthwise,
+ num_units=1,
+ stride=1,
+ unit_rate_list=multi_grid),
+ ]
+ return xception(inputs,
+ blocks=blocks,
+ num_classes=num_classes,
+ is_training=is_training,
+ global_pool=global_pool,
+ keep_prob=keep_prob,
+ output_stride=output_stride,
+ reuse=reuse,
+ scope=scope)
+
+
+def xception_65(inputs,
+ num_classes=None,
+ is_training=True,
+ global_pool=True,
+ keep_prob=0.5,
+ output_stride=None,
+ regularize_depthwise=False,
+ multi_grid=None,
+ reuse=None,
+ scope='xception_65'):
+ """Xception-65 model."""
+ blocks = [
+ xception_block('entry_flow/block1',
+ depth_list=[128, 128, 128],
+ skip_connection_type='conv',
+ activation_fn_in_separable_conv=False,
+ regularize_depthwise=regularize_depthwise,
+ num_units=1,
+ stride=2),
+ xception_block('entry_flow/block2',
+ depth_list=[256, 256, 256],
+ skip_connection_type='conv',
+ activation_fn_in_separable_conv=False,
+ regularize_depthwise=regularize_depthwise,
+ num_units=1,
+ stride=2),
+ xception_block('entry_flow/block3',
+ depth_list=[728, 728, 728],
+ skip_connection_type='conv',
+ activation_fn_in_separable_conv=False,
+ regularize_depthwise=regularize_depthwise,
+ num_units=1,
+ stride=2),
+ xception_block('middle_flow/block1',
+ depth_list=[728, 728, 728],
+ skip_connection_type='sum',
+ activation_fn_in_separable_conv=False,
+ regularize_depthwise=regularize_depthwise,
+ num_units=16,
+ stride=1),
+ xception_block('exit_flow/block1',
+ depth_list=[728, 1024, 1024],
+ skip_connection_type='conv',
+ activation_fn_in_separable_conv=False,
+ regularize_depthwise=regularize_depthwise,
+ num_units=1,
+ stride=2),
+ xception_block('exit_flow/block2',
+ depth_list=[1536, 1536, 2048],
+ skip_connection_type='none',
+ activation_fn_in_separable_conv=True,
+ regularize_depthwise=regularize_depthwise,
+ num_units=1,
+ stride=1,
+ unit_rate_list=multi_grid),
+ ]
+ return xception(inputs,
+ blocks=blocks,
+ num_classes=num_classes,
+ is_training=is_training,
+ global_pool=global_pool,
+ keep_prob=keep_prob,
+ output_stride=output_stride,
+ reuse=reuse,
+ scope=scope)
+
+
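+# Illustrative usage sketch (input shape assumed for the example): building
+# Xception-65 as a dense-prediction backbone with output_stride=16.
+#
+#   inputs = tf.placeholder(tf.float32, [1, 513, 513, 3])
+#   with slim.arg_scope(xception_arg_scope()):
+#     net, end_points = xception_65(inputs,
+#                                   num_classes=None,
+#                                   is_training=False,
+#                                   global_pool=False,
+#                                   output_stride=16)
+#   # net is the 2048-channel pre-logits feature map of spatial size 33x33
+#   # (513 reduced by a factor of 16, rounded up).
+
+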
+def xception_71(inputs,
+ num_classes=None,
+ is_training=True,
+ global_pool=True,
+ keep_prob=0.5,
+ output_stride=None,
+ regularize_depthwise=False,
+ multi_grid=None,
+ reuse=None,
+ scope='xception_71'):
+ """Xception-71 model."""
+ blocks = [
+ xception_block('entry_flow/block1',
+ depth_list=[128, 128, 128],
+ skip_connection_type='conv',
+ activation_fn_in_separable_conv=False,
+ regularize_depthwise=regularize_depthwise,
+ num_units=1,
+ stride=2),
+ xception_block('entry_flow/block2',
+ depth_list=[256, 256, 256],
+ skip_connection_type='conv',
+ activation_fn_in_separable_conv=False,
+ regularize_depthwise=regularize_depthwise,
+ num_units=1,
+ stride=1),
+ xception_block('entry_flow/block3',
+ depth_list=[256, 256, 256],
+ skip_connection_type='conv',
+ activation_fn_in_separable_conv=False,
+ regularize_depthwise=regularize_depthwise,
+ num_units=1,
+ stride=2),
+ xception_block('entry_flow/block4',
+ depth_list=[728, 728, 728],
+ skip_connection_type='conv',
+ activation_fn_in_separable_conv=False,
+ regularize_depthwise=regularize_depthwise,
+ num_units=1,
+ stride=1),
+ xception_block('entry_flow/block5',
+ depth_list=[728, 728, 728],
+ skip_connection_type='conv',
+ activation_fn_in_separable_conv=False,
+ regularize_depthwise=regularize_depthwise,
+ num_units=1,
+ stride=2),
+ xception_block('middle_flow/block1',
+ depth_list=[728, 728, 728],
+ skip_connection_type='sum',
+ activation_fn_in_separable_conv=False,
+ regularize_depthwise=regularize_depthwise,
+ num_units=16,
+ stride=1),
+ xception_block('exit_flow/block1',
+ depth_list=[728, 1024, 1024],
+ skip_connection_type='conv',
+ activation_fn_in_separable_conv=False,
+ regularize_depthwise=regularize_depthwise,
+ num_units=1,
+ stride=2),
+ xception_block('exit_flow/block2',
+ depth_list=[1536, 1536, 2048],
+ skip_connection_type='none',
+ activation_fn_in_separable_conv=True,
+ regularize_depthwise=regularize_depthwise,
+ num_units=1,
+ stride=1,
+ unit_rate_list=multi_grid),
+ ]
+ return xception(inputs,
+ blocks=blocks,
+ num_classes=num_classes,
+ is_training=is_training,
+ global_pool=global_pool,
+ keep_prob=keep_prob,
+ output_stride=output_stride,
+ reuse=reuse,
+ scope=scope)
+
+
+def xception_arg_scope(weight_decay=0.00004,
+ batch_norm_decay=0.9997,
+ batch_norm_epsilon=0.001,
+ batch_norm_scale=True,
+ weights_initializer_stddev=0.09,
+ activation_fn=tf.nn.relu,
+ regularize_depthwise=False,
+ use_batch_norm=True):
+ """Defines the default Xception arg scope.
+
+ Args:
+ weight_decay: The weight decay to use for regularizing the model.
+ batch_norm_decay: The moving average decay when estimating layer activation
+ statistics in batch normalization.
+ batch_norm_epsilon: Small constant to prevent division by zero when
+ normalizing activations by their variance in batch normalization.
+ batch_norm_scale: If True, uses an explicit `gamma` multiplier to scale the
+ activations in the batch normalization layer.
+    weights_initializer_stddev: The standard deviation of the truncated normal
+      weight initializer.
+ activation_fn: The activation function in Xception.
+    regularize_depthwise: Whether or not to apply L2-norm regularization on the
+      depthwise convolution weights.
+ use_batch_norm: Whether or not to use batch normalization.
+
+ Returns:
+ An `arg_scope` to use for the Xception models.
+ """
+ batch_norm_params = {
+ 'decay': batch_norm_decay,
+ 'epsilon': batch_norm_epsilon,
+ 'scale': batch_norm_scale,
+ }
+ if regularize_depthwise:
+ depthwise_regularizer = slim.l2_regularizer(weight_decay)
+ else:
+ depthwise_regularizer = None
+ with slim.arg_scope(
+ [slim.conv2d, slim.separable_conv2d],
+ weights_initializer=tf.truncated_normal_initializer(
+ stddev=weights_initializer_stddev),
+ activation_fn=activation_fn,
+ normalizer_fn=slim.batch_norm if use_batch_norm else None):
+ with slim.arg_scope([slim.batch_norm], **batch_norm_params):
+ with slim.arg_scope(
+ [slim.conv2d],
+ weights_regularizer=slim.l2_regularizer(weight_decay)):
+ with slim.arg_scope(
+ [slim.separable_conv2d],
+ weights_regularizer=depthwise_regularizer) as arg_sc:
+ return arg_sc
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/xception_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/xception_test.py"
new file mode 100644
index 00000000..4ef25ba5
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/core/xception_test.py"
@@ -0,0 +1,467 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for xception.py."""
+import numpy as np
+import six
+import tensorflow as tf
+
+from deeplab.core import xception
+from tensorflow.contrib.slim.nets import resnet_utils
+
+slim = tf.contrib.slim
+
+
+def create_test_input(batch, height, width, channels):
+ """Create test input tensor."""
+ if None in [batch, height, width, channels]:
+ return tf.placeholder(tf.float32, (batch, height, width, channels))
+ else:
+ return tf.to_float(
+ np.tile(
+ np.reshape(
+ np.reshape(np.arange(height), [height, 1]) +
+ np.reshape(np.arange(width), [1, width]),
+ [1, height, width, 1]),
+ [batch, 1, 1, channels]))
+
+
+class UtilityFunctionTest(tf.test.TestCase):
+
+ def testSeparableConv2DSameWithInputEvenSize(self):
+ n, n2 = 4, 2
+
+ # Input image.
+ x = create_test_input(1, n, n, 1)
+
+ # Convolution kernel.
+ dw = create_test_input(1, 3, 3, 1)
+ dw = tf.reshape(dw, [3, 3, 1, 1])
+
+ tf.get_variable('Conv/depthwise_weights', initializer=dw)
+ tf.get_variable('Conv/pointwise_weights',
+ initializer=tf.ones([1, 1, 1, 1]))
+ tf.get_variable('Conv/biases', initializer=tf.zeros([1]))
+ tf.get_variable_scope().reuse_variables()
+
+ y1 = slim.separable_conv2d(x, 1, [3, 3], depth_multiplier=1,
+ stride=1, scope='Conv')
+ y1_expected = tf.to_float([[14, 28, 43, 26],
+ [28, 48, 66, 37],
+ [43, 66, 84, 46],
+ [26, 37, 46, 22]])
+ y1_expected = tf.reshape(y1_expected, [1, n, n, 1])
+
+ y2 = resnet_utils.subsample(y1, 2)
+ y2_expected = tf.to_float([[14, 43],
+ [43, 84]])
+ y2_expected = tf.reshape(y2_expected, [1, n2, n2, 1])
+
+ y3 = xception.separable_conv2d_same(x, 1, 3, depth_multiplier=1,
+ regularize_depthwise=True,
+ stride=2, scope='Conv')
+ y3_expected = y2_expected
+
+ y4 = slim.separable_conv2d(x, 1, [3, 3], depth_multiplier=1,
+ stride=2, scope='Conv')
+ y4_expected = tf.to_float([[48, 37],
+ [37, 22]])
+ y4_expected = tf.reshape(y4_expected, [1, n2, n2, 1])
+
+ with self.test_session() as sess:
+ sess.run(tf.global_variables_initializer())
+ self.assertAllClose(y1.eval(), y1_expected.eval())
+ self.assertAllClose(y2.eval(), y2_expected.eval())
+ self.assertAllClose(y3.eval(), y3_expected.eval())
+ self.assertAllClose(y4.eval(), y4_expected.eval())
+
+ def testSeparableConv2DSameWithInputOddSize(self):
+ n, n2 = 5, 3
+
+ # Input image.
+ x = create_test_input(1, n, n, 1)
+
+ # Convolution kernel.
+ dw = create_test_input(1, 3, 3, 1)
+ dw = tf.reshape(dw, [3, 3, 1, 1])
+
+ tf.get_variable('Conv/depthwise_weights', initializer=dw)
+ tf.get_variable('Conv/pointwise_weights',
+ initializer=tf.ones([1, 1, 1, 1]))
+ tf.get_variable('Conv/biases', initializer=tf.zeros([1]))
+ tf.get_variable_scope().reuse_variables()
+
+ y1 = slim.separable_conv2d(x, 1, [3, 3], depth_multiplier=1,
+ stride=1, scope='Conv')
+ y1_expected = tf.to_float([[14, 28, 43, 58, 34],
+ [28, 48, 66, 84, 46],
+ [43, 66, 84, 102, 55],
+ [58, 84, 102, 120, 64],
+ [34, 46, 55, 64, 30]])
+ y1_expected = tf.reshape(y1_expected, [1, n, n, 1])
+
+ y2 = resnet_utils.subsample(y1, 2)
+ y2_expected = tf.to_float([[14, 43, 34],
+ [43, 84, 55],
+ [34, 55, 30]])
+ y2_expected = tf.reshape(y2_expected, [1, n2, n2, 1])
+
+ y3 = xception.separable_conv2d_same(x, 1, 3, depth_multiplier=1,
+ regularize_depthwise=True,
+ stride=2, scope='Conv')
+ y3_expected = y2_expected
+
+ y4 = slim.separable_conv2d(x, 1, [3, 3], depth_multiplier=1,
+ stride=2, scope='Conv')
+ y4_expected = y2_expected
+
+ with self.test_session() as sess:
+ sess.run(tf.global_variables_initializer())
+ self.assertAllClose(y1.eval(), y1_expected.eval())
+ self.assertAllClose(y2.eval(), y2_expected.eval())
+ self.assertAllClose(y3.eval(), y3_expected.eval())
+ self.assertAllClose(y4.eval(), y4_expected.eval())
+
+
+class XceptionNetworkTest(tf.test.TestCase):
+ """Tests with small Xception network."""
+
+ def _xception_small(self,
+ inputs,
+ num_classes=None,
+ is_training=True,
+ global_pool=True,
+ output_stride=None,
+ regularize_depthwise=True,
+ reuse=None,
+ scope='xception_small'):
+ """A shallow and thin Xception for faster tests."""
+ block = xception.xception_block
+ blocks = [
+ block('entry_flow/block1',
+ depth_list=[1, 1, 1],
+ skip_connection_type='conv',
+ activation_fn_in_separable_conv=False,
+ regularize_depthwise=regularize_depthwise,
+ num_units=1,
+ stride=2),
+ block('entry_flow/block2',
+ depth_list=[2, 2, 2],
+ skip_connection_type='conv',
+ activation_fn_in_separable_conv=False,
+ regularize_depthwise=regularize_depthwise,
+ num_units=1,
+ stride=2),
+ block('entry_flow/block3',
+ depth_list=[4, 4, 4],
+ skip_connection_type='conv',
+ activation_fn_in_separable_conv=False,
+ regularize_depthwise=regularize_depthwise,
+ num_units=1,
+ stride=1),
+ block('entry_flow/block4',
+ depth_list=[4, 4, 4],
+ skip_connection_type='conv',
+ activation_fn_in_separable_conv=False,
+ regularize_depthwise=regularize_depthwise,
+ num_units=1,
+ stride=2),
+ block('middle_flow/block1',
+ depth_list=[4, 4, 4],
+ skip_connection_type='sum',
+ activation_fn_in_separable_conv=False,
+ regularize_depthwise=regularize_depthwise,
+ num_units=2,
+ stride=1),
+ block('exit_flow/block1',
+ depth_list=[8, 8, 8],
+ skip_connection_type='conv',
+ activation_fn_in_separable_conv=False,
+ regularize_depthwise=regularize_depthwise,
+ num_units=1,
+ stride=2),
+ block('exit_flow/block2',
+ depth_list=[16, 16, 16],
+ skip_connection_type='none',
+ activation_fn_in_separable_conv=True,
+ regularize_depthwise=regularize_depthwise,
+ num_units=1,
+ stride=1),
+ ]
+ return xception.xception(inputs,
+ blocks=blocks,
+ num_classes=num_classes,
+ is_training=is_training,
+ global_pool=global_pool,
+ output_stride=output_stride,
+ reuse=reuse,
+ scope=scope)
+
+ def testClassificationEndPoints(self):
+ global_pool = True
+ num_classes = 10
+ inputs = create_test_input(2, 224, 224, 3)
+ with slim.arg_scope(xception.xception_arg_scope()):
+ logits, end_points = self._xception_small(
+ inputs,
+ num_classes=num_classes,
+ global_pool=global_pool,
+ scope='xception')
+ self.assertTrue(
+ logits.op.name.startswith('xception/logits'))
+ self.assertListEqual(logits.get_shape().as_list(), [2, 1, 1, num_classes])
+ self.assertTrue('predictions' in end_points)
+ self.assertListEqual(end_points['predictions'].get_shape().as_list(),
+ [2, 1, 1, num_classes])
+ self.assertTrue('global_pool' in end_points)
+ self.assertListEqual(end_points['global_pool'].get_shape().as_list(),
+ [2, 1, 1, 16])
+
+ def testEndpointNames(self):
+ global_pool = True
+ num_classes = 10
+ inputs = create_test_input(2, 224, 224, 3)
+ with slim.arg_scope(xception.xception_arg_scope()):
+ _, end_points = self._xception_small(
+ inputs,
+ num_classes=num_classes,
+ global_pool=global_pool,
+ scope='xception')
+ expected = [
+ 'xception/entry_flow/conv1_1',
+ 'xception/entry_flow/conv1_2',
+ 'xception/entry_flow/block1/unit_1/xception_module/separable_conv1',
+ 'xception/entry_flow/block1/unit_1/xception_module/separable_conv2',
+ 'xception/entry_flow/block1/unit_1/xception_module/separable_conv3',
+ 'xception/entry_flow/block1/unit_1/xception_module/shortcut',
+ 'xception/entry_flow/block1/unit_1/xception_module',
+ 'xception/entry_flow/block1',
+ 'xception/entry_flow/block2/unit_1/xception_module/separable_conv1',
+ 'xception/entry_flow/block2/unit_1/xception_module/separable_conv2',
+ 'xception/entry_flow/block2/unit_1/xception_module/separable_conv3',
+ 'xception/entry_flow/block2/unit_1/xception_module/shortcut',
+ 'xception/entry_flow/block2/unit_1/xception_module',
+ 'xception/entry_flow/block2',
+ 'xception/entry_flow/block3/unit_1/xception_module/separable_conv1',
+ 'xception/entry_flow/block3/unit_1/xception_module/separable_conv2',
+ 'xception/entry_flow/block3/unit_1/xception_module/separable_conv3',
+ 'xception/entry_flow/block3/unit_1/xception_module/shortcut',
+ 'xception/entry_flow/block3/unit_1/xception_module',
+ 'xception/entry_flow/block3',
+ 'xception/entry_flow/block4/unit_1/xception_module/separable_conv1',
+ 'xception/entry_flow/block4/unit_1/xception_module/separable_conv2',
+ 'xception/entry_flow/block4/unit_1/xception_module/separable_conv3',
+ 'xception/entry_flow/block4/unit_1/xception_module/shortcut',
+ 'xception/entry_flow/block4/unit_1/xception_module',
+ 'xception/entry_flow/block4',
+ 'xception/middle_flow/block1/unit_1/xception_module/separable_conv1',
+ 'xception/middle_flow/block1/unit_1/xception_module/separable_conv2',
+ 'xception/middle_flow/block1/unit_1/xception_module/separable_conv3',
+ 'xception/middle_flow/block1/unit_1/xception_module',
+ 'xception/middle_flow/block1/unit_2/xception_module/separable_conv1',
+ 'xception/middle_flow/block1/unit_2/xception_module/separable_conv2',
+ 'xception/middle_flow/block1/unit_2/xception_module/separable_conv3',
+ 'xception/middle_flow/block1/unit_2/xception_module',
+ 'xception/middle_flow/block1',
+ 'xception/exit_flow/block1/unit_1/xception_module/separable_conv1',
+ 'xception/exit_flow/block1/unit_1/xception_module/separable_conv2',
+ 'xception/exit_flow/block1/unit_1/xception_module/separable_conv3',
+ 'xception/exit_flow/block1/unit_1/xception_module/shortcut',
+ 'xception/exit_flow/block1/unit_1/xception_module',
+ 'xception/exit_flow/block1',
+ 'xception/exit_flow/block2/unit_1/xception_module/separable_conv1',
+ 'xception/exit_flow/block2/unit_1/xception_module/separable_conv2',
+ 'xception/exit_flow/block2/unit_1/xception_module/separable_conv3',
+ 'xception/exit_flow/block2/unit_1/xception_module',
+ 'xception/exit_flow/block2',
+ 'global_pool',
+ 'xception/logits',
+ 'predictions',
+ ]
+ self.assertItemsEqual(end_points.keys(), expected)
+
+ def testClassificationShapes(self):
+ global_pool = True
+ num_classes = 10
+ inputs = create_test_input(2, 224, 224, 3)
+ with slim.arg_scope(xception.xception_arg_scope()):
+ _, end_points = self._xception_small(
+ inputs,
+ num_classes,
+ global_pool=global_pool,
+ scope='xception')
+ endpoint_to_shape = {
+ 'xception/entry_flow/conv1_1': [2, 112, 112, 32],
+ 'xception/entry_flow/block1': [2, 56, 56, 1],
+ 'xception/entry_flow/block2': [2, 28, 28, 2],
+ 'xception/entry_flow/block4': [2, 14, 14, 4],
+ 'xception/middle_flow/block1': [2, 14, 14, 4],
+ 'xception/exit_flow/block1': [2, 7, 7, 8],
+ 'xception/exit_flow/block2': [2, 7, 7, 16]}
+ for endpoint, shape in six.iteritems(endpoint_to_shape):
+ self.assertListEqual(end_points[endpoint].get_shape().as_list(), shape)
+
+ def testFullyConvolutionalEndpointShapes(self):
+ global_pool = False
+ num_classes = 10
+ inputs = create_test_input(2, 321, 321, 3)
+ with slim.arg_scope(xception.xception_arg_scope()):
+ _, end_points = self._xception_small(
+ inputs,
+ num_classes,
+ global_pool=global_pool,
+ scope='xception')
+ endpoint_to_shape = {
+ 'xception/entry_flow/conv1_1': [2, 161, 161, 32],
+ 'xception/entry_flow/block1': [2, 81, 81, 1],
+ 'xception/entry_flow/block2': [2, 41, 41, 2],
+ 'xception/entry_flow/block4': [2, 21, 21, 4],
+ 'xception/middle_flow/block1': [2, 21, 21, 4],
+ 'xception/exit_flow/block1': [2, 11, 11, 8],
+ 'xception/exit_flow/block2': [2, 11, 11, 16]}
+ for endpoint, shape in six.iteritems(endpoint_to_shape):
+ self.assertListEqual(end_points[endpoint].get_shape().as_list(), shape)
+
+ def testAtrousFullyConvolutionalEndpointShapes(self):
+ global_pool = False
+ num_classes = 10
+ output_stride = 8
+ inputs = create_test_input(2, 321, 321, 3)
+ with slim.arg_scope(xception.xception_arg_scope()):
+ _, end_points = self._xception_small(
+ inputs,
+ num_classes,
+ global_pool=global_pool,
+ output_stride=output_stride,
+ scope='xception')
+ endpoint_to_shape = {
+ 'xception/entry_flow/block1': [2, 81, 81, 1],
+ 'xception/entry_flow/block2': [2, 41, 41, 2],
+ 'xception/entry_flow/block4': [2, 41, 41, 4],
+ 'xception/middle_flow/block1': [2, 41, 41, 4],
+ 'xception/exit_flow/block1': [2, 41, 41, 8],
+ 'xception/exit_flow/block2': [2, 41, 41, 16]}
+ for endpoint, shape in six.iteritems(endpoint_to_shape):
+ self.assertListEqual(end_points[endpoint].get_shape().as_list(), shape)
+
+ def testAtrousFullyConvolutionalValues(self):
+ """Verify dense feature extraction with atrous convolution."""
+ nominal_stride = 32
+ for output_stride in [4, 8, 16, 32, None]:
+ with slim.arg_scope(xception.xception_arg_scope()):
+ with tf.Graph().as_default():
+ with self.test_session() as sess:
+ tf.set_random_seed(0)
+ inputs = create_test_input(2, 96, 97, 3)
+ # Dense feature extraction followed by subsampling.
+ output, _ = self._xception_small(
+ inputs,
+ None,
+ is_training=False,
+ global_pool=False,
+ output_stride=output_stride)
+ if output_stride is None:
+ factor = 1
+ else:
+ factor = nominal_stride // output_stride
+ output = resnet_utils.subsample(output, factor)
+ # Make the two networks use the same weights.
+ tf.get_variable_scope().reuse_variables()
+ # Feature extraction at the nominal network rate.
+ expected, _ = self._xception_small(
+ inputs,
+ None,
+ is_training=False,
+ global_pool=False)
+ sess.run(tf.global_variables_initializer())
+ self.assertAllClose(output.eval(), expected.eval(),
+ atol=1e-5, rtol=1e-5)
+
+ def testUnknownBatchSize(self):
+ batch = 2
+ height, width = 65, 65
+ global_pool = True
+ num_classes = 10
+ inputs = create_test_input(None, height, width, 3)
+ with slim.arg_scope(xception.xception_arg_scope()):
+ logits, _ = self._xception_small(
+ inputs,
+ num_classes,
+ global_pool=global_pool,
+ scope='xception')
+ self.assertTrue(logits.op.name.startswith('xception/logits'))
+ self.assertListEqual(logits.get_shape().as_list(),
+ [None, 1, 1, num_classes])
+ images = create_test_input(batch, height, width, 3)
+ with self.test_session() as sess:
+ sess.run(tf.global_variables_initializer())
+ output = sess.run(logits, {inputs: images.eval()})
+ self.assertEquals(output.shape, (batch, 1, 1, num_classes))
+
+ def testFullyConvolutionalUnknownHeightWidth(self):
+ batch = 2
+ height, width = 65, 65
+ global_pool = False
+ inputs = create_test_input(batch, None, None, 3)
+ with slim.arg_scope(xception.xception_arg_scope()):
+ output, _ = self._xception_small(
+ inputs,
+ None,
+ global_pool=global_pool)
+ self.assertListEqual(output.get_shape().as_list(),
+ [batch, None, None, 16])
+ images = create_test_input(batch, height, width, 3)
+ with self.test_session() as sess:
+ sess.run(tf.global_variables_initializer())
+ output = sess.run(output, {inputs: images.eval()})
+ self.assertEquals(output.shape, (batch, 3, 3, 16))
+
+ def testAtrousFullyConvolutionalUnknownHeightWidth(self):
+ batch = 2
+ height, width = 65, 65
+ global_pool = False
+ output_stride = 8
+ inputs = create_test_input(batch, None, None, 3)
+ with slim.arg_scope(xception.xception_arg_scope()):
+ output, _ = self._xception_small(
+ inputs,
+ None,
+ global_pool=global_pool,
+ output_stride=output_stride)
+ self.assertListEqual(output.get_shape().as_list(),
+ [batch, None, None, 16])
+ images = create_test_input(batch, height, width, 3)
+ with self.test_session() as sess:
+ sess.run(tf.global_variables_initializer())
+ output = sess.run(output, {inputs: images.eval()})
+ self.assertEquals(output.shape, (batch, 9, 9, 16))
+
+ def testEndpointsReuse(self):
+ inputs = create_test_input(2, 32, 32, 3)
+ with slim.arg_scope(xception.xception_arg_scope()):
+ _, end_points0 = xception.xception_65(
+ inputs,
+ num_classes=10,
+ reuse=False)
+ with slim.arg_scope(xception.xception_arg_scope()):
+ _, end_points1 = xception.xception_65(
+ inputs,
+ num_classes=10,
+ reuse=True)
+ self.assertItemsEqual(end_points0.keys(), end_points1.keys())
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/build_ade20k_data.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/build_ade20k_data.py"
new file mode 100644
index 00000000..b637fb43
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/build_ade20k_data.py"
@@ -0,0 +1,118 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Converts ADE20K data to TFRecord file format with Example protos."""
+
+import math
+import os
+import random
+import sys
+import build_data
+import tensorflow as tf
+
+FLAGS = tf.app.flags.FLAGS
+
+tf.app.flags.DEFINE_string(
+ 'train_image_folder',
+ './ADE20K/ADEChallengeData2016/images/training',
+    'Folder containing training images')
+tf.app.flags.DEFINE_string(
+ 'train_image_label_folder',
+ './ADE20K/ADEChallengeData2016/annotations/training',
+    'Folder containing annotations for training images')
+
+tf.app.flags.DEFINE_string(
+ 'val_image_folder',
+ './ADE20K/ADEChallengeData2016/images/validation',
+ 'Folder containing validation images')
+
+tf.app.flags.DEFINE_string(
+ 'val_image_label_folder',
+ './ADE20K/ADEChallengeData2016/annotations/validation',
+ 'Folder containing annotations for validation')
+
+tf.app.flags.DEFINE_string(
+ 'output_dir', './ADE20K/tfrecord',
+ 'Path to save converted tfrecord of Tensorflow example')
+
+_NUM_SHARDS = 4
+
+
+def _convert_dataset(dataset_split, dataset_dir, dataset_label_dir):
+  """Converts the ADE20K dataset into TFRecord format.
+
+  Args:
+    dataset_split: Dataset split (e.g., train, val).
+    dataset_dir: Directory in which the dataset is located.
+    dataset_label_dir: Directory in which the annotations are located.
+
+ Raises:
+ RuntimeError: If loaded image and label have different shape.
+ """
+
+ img_names = tf.gfile.Glob(os.path.join(dataset_dir, '*.jpg'))
+ random.shuffle(img_names)
+ seg_names = []
+ for f in img_names:
+ # get the filename without the extension
+ basename = os.path.basename(f).split('.')[0]
+    # Get its corresponding annotation file (basename + '.png').
+ seg = os.path.join(dataset_label_dir, basename+'.png')
+ seg_names.append(seg)
+
+ num_images = len(img_names)
+ num_per_shard = int(math.ceil(num_images / float(_NUM_SHARDS)))
+
+ image_reader = build_data.ImageReader('jpeg', channels=3)
+ label_reader = build_data.ImageReader('png', channels=1)
+
+ for shard_id in range(_NUM_SHARDS):
+ output_filename = os.path.join(
+ FLAGS.output_dir,
+ '%s-%05d-of-%05d.tfrecord' % (dataset_split, shard_id, _NUM_SHARDS))
+ with tf.python_io.TFRecordWriter(output_filename) as tfrecord_writer:
+ start_idx = shard_id * num_per_shard
+ end_idx = min((shard_id + 1) * num_per_shard, num_images)
+ for i in range(start_idx, end_idx):
+ sys.stdout.write('\r>> Converting image %d/%d shard %d' % (
+ i + 1, num_images, shard_id))
+ sys.stdout.flush()
+ # Read the image.
+ image_filename = img_names[i]
+        image_data = tf.gfile.FastGFile(image_filename, 'rb').read()
+ height, width = image_reader.read_image_dims(image_data)
+ # Read the semantic segmentation annotation.
+ seg_filename = seg_names[i]
+        seg_data = tf.gfile.FastGFile(seg_filename, 'rb').read()
+ seg_height, seg_width = label_reader.read_image_dims(seg_data)
+ if height != seg_height or width != seg_width:
+ raise RuntimeError('Shape mismatched between image and label.')
+ # Convert to tf example.
+ example = build_data.image_seg_to_tfexample(
+ image_data, img_names[i], height, width, seg_data)
+ tfrecord_writer.write(example.SerializeToString())
+ sys.stdout.write('\n')
+ sys.stdout.flush()
+
+
+def main(unused_argv):
+ tf.gfile.MakeDirs(FLAGS.output_dir)
+ _convert_dataset(
+ 'train', FLAGS.train_image_folder, FLAGS.train_image_label_folder)
+ _convert_dataset('val', FLAGS.val_image_folder, FLAGS.val_image_label_folder)
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/build_cityscapes_data.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/build_cityscapes_data.py"
new file mode 100644
index 00000000..df7fec65
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/build_cityscapes_data.py"
@@ -0,0 +1,183 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Converts Cityscapes data to TFRecord file format with Example protos.
+
+The Cityscapes dataset is expected to have the following directory structure:
+
+ + cityscapes
+     - build_cityscapes_data.py (current working directory).
+ - build_data.py
+ + cityscapesscripts
+ + annotation
+ + evaluation
+ + helpers
+ + preparation
+ + viewer
+ + gtFine
+ + train
+ + val
+ + test
+ + leftImg8bit
+ + train
+ + val
+ + test
+ + tfrecord
+
+This script converts data into sharded data files and saves them in the
+tfrecord folder.
+
+Note that before running this script, the users should (1) register on the
+Cityscapes dataset website at https://www.cityscapes-dataset.com to
+download the dataset, and (2) run the Cityscapes script
+`preparation/createTrainIdLabelImgs.py` to generate the training ground truth.
+
+Also note that the TensorFlow model will be trained with `TrainId` instead
+of `EvalId` used on the evaluation server. Thus, the users need to convert
+the predicted labels to `EvalId` for evaluation on the server. See vis.py
+for more details.
+
+The Example proto contains the following fields:
+
+ image/encoded: encoded image content.
+ image/filename: image filename.
+ image/format: image file format.
+ image/height: image height.
+ image/width: image width.
+ image/channels: image channels.
+ image/segmentation/class/encoded: encoded semantic segmentation content.
+ image/segmentation/class/format: semantic segmentation file format.
+"""
+import glob
+import math
+import os.path
+import re
+import sys
+import build_data
+import tensorflow as tf
+
+FLAGS = tf.app.flags.FLAGS
+
+tf.app.flags.DEFINE_string('cityscapes_root',
+ './cityscapes',
+ 'Cityscapes dataset root folder.')
+
+tf.app.flags.DEFINE_string(
+ 'output_dir',
+ './tfrecord',
+ 'Path to save converted SSTable of TensorFlow examples.')
+
+
+_NUM_SHARDS = 10
+
+# A map from data type to folder name that saves the data.
+_FOLDERS_MAP = {
+ 'image': 'leftImg8bit',
+ 'label': 'gtFine',
+}
+
+# A map from data type to filename postfix.
+_POSTFIX_MAP = {
+ 'image': '_leftImg8bit',
+ 'label': '_gtFine_labelTrainIds',
+}
+
+# A map from data type to data format.
+_DATA_FORMAT_MAP = {
+ 'image': 'png',
+ 'label': 'png',
+}
+
+# Image file pattern.
+_IMAGE_FILENAME_RE = re.compile('(.+)' + _POSTFIX_MAP['image'])
+
+
+def _get_files(data, dataset_split):
+ """Gets files for the specified data type and dataset split.
+
+ Args:
+ data: String, desired data ('image' or 'label').
+ dataset_split: String, dataset split ('train', 'val', 'test')
+
+ Returns:
+    A sorted list of file names, or None when getting labels for the
+    test set.
+ """
+ if data == 'label' and dataset_split == 'test':
+ return None
+ pattern = '*%s.%s' % (_POSTFIX_MAP[data], _DATA_FORMAT_MAP[data])
+ search_files = os.path.join(
+ FLAGS.cityscapes_root, _FOLDERS_MAP[data], dataset_split, '*', pattern)
+ filenames = glob.glob(search_files)
+ return sorted(filenames)
+
+
+def _convert_dataset(dataset_split):
+ """Converts the specified dataset split to TFRecord format.
+
+ Args:
+ dataset_split: The dataset split (e.g., train, val).
+
+ Raises:
+ RuntimeError: If loaded image and label have different shape, or if the
+ image file with specified postfix could not be found.
+ """
+ image_files = _get_files('image', dataset_split)
+ label_files = _get_files('label', dataset_split)
+
+ num_images = len(image_files)
+ num_per_shard = int(math.ceil(num_images / float(_NUM_SHARDS)))
+
+ image_reader = build_data.ImageReader('png', channels=3)
+ label_reader = build_data.ImageReader('png', channels=1)
+
+ for shard_id in range(_NUM_SHARDS):
+ shard_filename = '%s-%05d-of-%05d.tfrecord' % (
+ dataset_split, shard_id, _NUM_SHARDS)
+ output_filename = os.path.join(FLAGS.output_dir, shard_filename)
+ with tf.python_io.TFRecordWriter(output_filename) as tfrecord_writer:
+ start_idx = shard_id * num_per_shard
+ end_idx = min((shard_id + 1) * num_per_shard, num_images)
+ for i in range(start_idx, end_idx):
+ sys.stdout.write('\r>> Converting image %d/%d shard %d' % (
+ i + 1, num_images, shard_id))
+ sys.stdout.flush()
+ # Read the image.
+ image_data = tf.gfile.FastGFile(image_files[i], 'rb').read()
+ height, width = image_reader.read_image_dims(image_data)
+ # Read the semantic segmentation annotation.
+ seg_data = tf.gfile.FastGFile(label_files[i], 'rb').read()
+ seg_height, seg_width = label_reader.read_image_dims(seg_data)
+ if height != seg_height or width != seg_width:
+ raise RuntimeError('Shape mismatched between image and label.')
+ # Convert to tf example.
+ re_match = _IMAGE_FILENAME_RE.search(image_files[i])
+ if re_match is None:
+ raise RuntimeError('Invalid image filename: ' + image_files[i])
+ filename = os.path.basename(re_match.group(1))
+ example = build_data.image_seg_to_tfexample(
+ image_data, filename, height, width, seg_data)
+ tfrecord_writer.write(example.SerializeToString())
+ sys.stdout.write('\n')
+ sys.stdout.flush()
+
+
+def main(unused_argv):
+ # Only support converting 'train' and 'val' sets for now.
+ for dataset_split in ['train', 'val']:
+ _convert_dataset(dataset_split)
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/build_data.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/build_data.py"
new file mode 100644
index 00000000..45628674
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/build_data.py"
@@ -0,0 +1,161 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Contains common utility functions and classes for building dataset.
+
+This script contains utility functions and classes to converts dataset to
+TFRecord file format with Example protos.
+
+The Example proto contains the following fields:
+
+ image/encoded: encoded image content.
+ image/filename: image filename.
+ image/format: image file format.
+ image/height: image height.
+ image/width: image width.
+ image/channels: image channels.
+ image/segmentation/class/encoded: encoded semantic segmentation content.
+ image/segmentation/class/format: semantic segmentation file format.
+"""
+import collections
+import six
+import tensorflow as tf
+
+FLAGS = tf.app.flags.FLAGS
+
+tf.app.flags.DEFINE_enum('image_format', 'png', ['jpg', 'jpeg', 'png'],
+ 'Image format.')
+
+tf.app.flags.DEFINE_enum('label_format', 'png', ['png'],
+ 'Segmentation label format.')
+
+# A map from image format to expected data format.
+_IMAGE_FORMAT_MAP = {
+ 'jpg': 'jpeg',
+ 'jpeg': 'jpeg',
+ 'png': 'png',
+}
+
+
+class ImageReader(object):
+ """Helper class that provides TensorFlow image coding utilities."""
+
+ def __init__(self, image_format='jpeg', channels=3):
+ """Class constructor.
+
+ Args:
+ image_format: Image format. Only 'jpeg', 'jpg', or 'png' are supported.
+ channels: Image channels.
+ """
+ with tf.Graph().as_default():
+ self._decode_data = tf.placeholder(dtype=tf.string)
+ self._image_format = image_format
+ self._session = tf.Session()
+ if self._image_format in ('jpeg', 'jpg'):
+ self._decode = tf.image.decode_jpeg(self._decode_data,
+ channels=channels)
+ elif self._image_format == 'png':
+ self._decode = tf.image.decode_png(self._decode_data,
+ channels=channels)
+
+ def read_image_dims(self, image_data):
+ """Reads the image dimensions.
+
+ Args:
+ image_data: string of image data.
+
+ Returns:
+ image_height and image_width.
+ """
+ image = self.decode_image(image_data)
+ return image.shape[:2]
+
+ def decode_image(self, image_data):
+ """Decodes the image data string.
+
+ Args:
+ image_data: string of image data.
+
+ Returns:
+ Decoded image data.
+
+ Raises:
+ ValueError: Value of image channels not supported.
+ """
+ image = self._session.run(self._decode,
+ feed_dict={self._decode_data: image_data})
+ if len(image.shape) != 3 or image.shape[2] not in (1, 3):
+      raise ValueError('The image channels are not supported.')
+
+ return image
+
+
+def _int64_list_feature(values):
+ """Returns a TF-Feature of int64_list.
+
+ Args:
+ values: A scalar or list of values.
+
+ Returns:
+ A TF-Feature.
+ """
+ if not isinstance(values, collections.Iterable):
+ values = [values]
+
+ return tf.train.Feature(int64_list=tf.train.Int64List(value=values))
+
+
+def _bytes_list_feature(values):
+ """Returns a TF-Feature of bytes.
+
+ Args:
+ values: A string.
+
+ Returns:
+ A TF-Feature.
+ """
+ def norm2bytes(value):
+ return value.encode() if isinstance(value, str) and six.PY3 else value
+
+ return tf.train.Feature(
+ bytes_list=tf.train.BytesList(value=[norm2bytes(values)]))
+
+
+def image_seg_to_tfexample(image_data, filename, height, width, seg_data):
+ """Converts one image/segmentation pair to tf example.
+
+ Args:
+ image_data: string of image data.
+ filename: image filename.
+ height: image height.
+ width: image width.
+ seg_data: string of semantic segmentation data.
+
+ Returns:
+ tf example of one image/segmentation pair.
+ """
+ return tf.train.Example(features=tf.train.Features(feature={
+ 'image/encoded': _bytes_list_feature(image_data),
+ 'image/filename': _bytes_list_feature(filename),
+ 'image/format': _bytes_list_feature(
+ _IMAGE_FORMAT_MAP[FLAGS.image_format]),
+ 'image/height': _int64_list_feature(height),
+ 'image/width': _int64_list_feature(width),
+ 'image/channels': _int64_list_feature(3),
+ 'image/segmentation/class/encoded': (
+ _bytes_list_feature(seg_data)),
+ 'image/segmentation/class/format': _bytes_list_feature(
+ FLAGS.label_format),
+ }))
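+
+
+# Illustrative usage sketch (file names assumed for the example): pairing an
+# ImageReader with image_seg_to_tfexample to write a single record.
+#
+#   image_data = tf.gfile.FastGFile('2007_000027.jpg', 'rb').read()
+#   seg_data = tf.gfile.FastGFile('2007_000027.png', 'rb').read()
+#   image_reader = ImageReader('jpeg', channels=3)
+#   height, width = image_reader.read_image_dims(image_data)
+#   example = image_seg_to_tfexample(
+#       image_data, '2007_000027', height, width, seg_data)
+#   with tf.python_io.TFRecordWriter('train-00000-of-00004.tfrecord') as writer:
+#     writer.write(example.SerializeToString())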
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/build_voc2012_data.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/build_voc2012_data.py"
new file mode 100644
index 00000000..94ad3191
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/build_voc2012_data.py"
@@ -0,0 +1,141 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Converts PASCAL VOC 2012 data to TFRecord file format with Example protos.
+
+PASCAL VOC 2012 dataset is expected to have the following directory structure:
+
+ + pascal_voc_seg
+ - build_data.py
+ - build_voc2012_data.py (current working directory).
+ + VOCdevkit
+ + VOC2012
+ + JPEGImages
+ + SegmentationClass
+ + ImageSets
+ + Segmentation
+ + tfrecord
+
+Image folder:
+ ./VOCdevkit/VOC2012/JPEGImages
+
+Semantic segmentation annotations:
+ ./VOCdevkit/VOC2012/SegmentationClass
+
+List folder:
+ ./VOCdevkit/VOC2012/ImageSets/Segmentation
+
+This script converts the data into sharded data files and saves them to the
+tfrecord folder.
+
+The Example proto contains the following fields:
+
+ image/encoded: encoded image content.
+ image/filename: image filename.
+ image/format: image file format.
+ image/height: image height.
+ image/width: image width.
+ image/channels: image channels.
+ image/segmentation/class/encoded: encoded semantic segmentation content.
+ image/segmentation/class/format: semantic segmentation file format.
+"""
+import math
+import os.path
+import sys
+import build_data
+import tensorflow as tf
+
+FLAGS = tf.app.flags.FLAGS
+
+tf.app.flags.DEFINE_string('image_folder',
+ './VOCdevkit/VOC2012/JPEGImages',
+ 'Folder containing images.')
+
+tf.app.flags.DEFINE_string(
+ 'semantic_segmentation_folder',
+ './VOCdevkit/VOC2012/SegmentationClassRaw',
+ 'Folder containing semantic segmentation annotations.')
+
+tf.app.flags.DEFINE_string(
+ 'list_folder',
+ './VOCdevkit/VOC2012/ImageSets/Segmentation',
+ 'Folder containing lists for training and validation')
+
+tf.app.flags.DEFINE_string(
+ 'output_dir',
+ './tfrecord',
+ 'Path to save converted SSTable of TensorFlow examples.')
+
+
+_NUM_SHARDS = 4
+
+
+def _convert_dataset(dataset_split):
+ """Converts the specified dataset split to TFRecord format.
+
+ Args:
+ dataset_split: The dataset split (e.g., train, test).
+
+ Raises:
+ RuntimeError: If loaded image and label have different shape.
+ """
+ dataset = os.path.basename(dataset_split)[:-4]
+ sys.stdout.write('Processing ' + dataset)
+ filenames = [x.strip('\n') for x in open(dataset_split, 'r')]
+ num_images = len(filenames)
+ num_per_shard = int(math.ceil(num_images / float(_NUM_SHARDS)))
+
+ image_reader = build_data.ImageReader('jpeg', channels=3)
+ label_reader = build_data.ImageReader('png', channels=1)
+
+ for shard_id in range(_NUM_SHARDS):
+ output_filename = os.path.join(
+ FLAGS.output_dir,
+ '%s-%05d-of-%05d.tfrecord' % (dataset, shard_id, _NUM_SHARDS))
+ with tf.python_io.TFRecordWriter(output_filename) as tfrecord_writer:
+ start_idx = shard_id * num_per_shard
+ end_idx = min((shard_id + 1) * num_per_shard, num_images)
+ for i in range(start_idx, end_idx):
+ sys.stdout.write('\r>> Converting image %d/%d shard %d' % (
+ i + 1, len(filenames), shard_id))
+ sys.stdout.flush()
+ # Read the image.
+ image_filename = os.path.join(
+ FLAGS.image_folder, filenames[i] + '.' + FLAGS.image_format)
+ image_data = tf.gfile.FastGFile(image_filename, 'rb').read()
+ height, width = image_reader.read_image_dims(image_data)
+ # Read the semantic segmentation annotation.
+ seg_filename = os.path.join(
+ FLAGS.semantic_segmentation_folder,
+ filenames[i] + '.' + FLAGS.label_format)
+ seg_data = tf.gfile.FastGFile(seg_filename, 'rb').read()
+ seg_height, seg_width = label_reader.read_image_dims(seg_data)
+ if height != seg_height or width != seg_width:
+ raise RuntimeError('Shape mismatched between image and label.')
+ # Convert to tf example.
+ example = build_data.image_seg_to_tfexample(
+ image_data, filenames[i], height, width, seg_data)
+ tfrecord_writer.write(example.SerializeToString())
+ sys.stdout.write('\n')
+ sys.stdout.flush()
+
+
+def main(unused_argv):
+ dataset_splits = tf.gfile.Glob(os.path.join(FLAGS.list_folder, '*.txt'))
+ for dataset_split in dataset_splits:
+ _convert_dataset(dataset_split)
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/convert_cityscapes.sh" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/convert_cityscapes.sh"
new file mode 100644
index 00000000..9945e121
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/convert_cityscapes.sh"
@@ -0,0 +1,58 @@
+#!/bin/bash
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+#
+# Script to preprocess the Cityscapes dataset. Note (1) the users should
+# register the Cityscapes dataset website at
+# https://www.cityscapes-dataset.com/downloads/ to download the dataset,
+# and (2) the users should download the utility scripts provided by
+# Cityscapes at https://github.com/mcordts/cityscapesScripts.
+#
+# Usage:
+#   bash ./convert_cityscapes.sh
+#
+# The folder structure is assumed to be:
+# + datasets
+# - build_cityscapes_data.py
+# - convert_cityscapes.sh
+# + cityscapes
+# + cityscapesscripts (downloaded scripts)
+# + gtFine
+# + leftImg8bit
+#
+
+# Exit immediately if a command exits with a non-zero status.
+set -e
+
+CURRENT_DIR=$(pwd)
+WORK_DIR="."
+
+# Root path for Cityscapes dataset.
+CITYSCAPES_ROOT="${WORK_DIR}/cityscapes"
+
+# Create training labels.
+python "${CITYSCAPES_ROOT}/cityscapesscripts/preparation/createTrainIdLabelImgs.py"
+
+# Build TFRecords of the dataset.
+# First, create output directory for storing TFRecords.
+OUTPUT_DIR="${CITYSCAPES_ROOT}/tfrecord"
+mkdir -p "${OUTPUT_DIR}"
+
+BUILD_SCRIPT="${CURRENT_DIR}/build_cityscapes_data.py"
+
+echo "Converting Cityscapes dataset..."
+python "${BUILD_SCRIPT}" \
+ --cityscapes_root="${CITYSCAPES_ROOT}" \
+  --output_dir="${OUTPUT_DIR}"
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/download_and_convert_ade20k.sh" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/download_and_convert_ade20k.sh"
new file mode 100644
index 00000000..aa684b41
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/download_and_convert_ade20k.sh"
@@ -0,0 +1,80 @@
+#!/bin/bash
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+#
+# Script to download and preprocess the ADE20K dataset.
+#
+# Usage:
+# bash ./download_and_convert_ade20k.sh
+#
+# The folder structure is assumed to be:
+# + datasets
+# - build_data.py
+# - build_ade20k_data.py
+# - download_and_convert_ade20k.sh
+# + ADE20K
+# + tfrecord
+# + ADEChallengeData2016
+# + annotations
+# + training
+# + validation
+# + images
+# + training
+# + validation
+
+# Exit immediately if a command exits with a non-zero status.
+set -e
+
+CURRENT_DIR=$(pwd)
+WORK_DIR="./ADE20K"
+mkdir -p "${WORK_DIR}"
+cd "${WORK_DIR}"
+
+# Helper function to download and unpack ADE20K dataset.
+download_and_uncompress() {
+ local BASE_URL=${1}
+ local FILENAME=${2}
+
+ if [ ! -f "${FILENAME}" ]; then
+ echo "Downloading ${FILENAME} to ${WORK_DIR}"
+ wget -nd -c "${BASE_URL}/${FILENAME}"
+ fi
+ echo "Uncompressing ${FILENAME}"
+ unzip "${FILENAME}"
+}
+
+# Download the images.
+BASE_URL="http://data.csail.mit.edu/places/ADEchallenge"
+FILENAME="ADEChallengeData2016.zip"
+
+download_and_uncompress "${BASE_URL}" "${FILENAME}"
+
+cd "${CURRENT_DIR}"
+
+# Root path for ADE20K dataset.
+ADE20K_ROOT="${WORK_DIR}/ADEChallengeData2016"
+
+# Build TFRecords of the dataset.
+# First, create output directory for storing TFRecords.
+OUTPUT_DIR="${WORK_DIR}/tfrecord"
+mkdir -p "${OUTPUT_DIR}"
+
+echo "Converting ADE20K dataset..."
+python ./build_ade20k_data.py \
+ --train_image_folder="${ADE20K_ROOT}/images/training/" \
+ --train_image_label_folder="${ADE20K_ROOT}/annotations/training/" \
+ --val_image_folder="${ADE20K_ROOT}/images/validation/" \
+ --val_image_label_folder="${ADE20K_ROOT}/annotations/validation/" \
+ --output_dir="${OUTPUT_DIR}"
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/download_and_convert_voc2012.sh" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/download_and_convert_voc2012.sh"
new file mode 100644
index 00000000..30eec7f7
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/download_and_convert_voc2012.sh"
@@ -0,0 +1,90 @@
+#!/bin/bash
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+#
+# Script to download and preprocess the PASCAL VOC 2012 dataset.
+#
+# Usage:
+# bash ./download_and_convert_voc2012.sh
+#
+# The folder structure is assumed to be:
+# + datasets
+# - build_data.py
+# - build_voc2012_data.py
+# - download_and_convert_voc2012.sh
+# - remove_gt_colormap.py
+# + pascal_voc_seg
+# + VOCdevkit
+# + VOC2012
+# + JPEGImages
+# + SegmentationClass
+#
+
+# Exit immediately if a command exits with a non-zero status.
+set -e
+
+CURRENT_DIR=$(pwd)
+WORK_DIR="./pascal_voc_seg"
+mkdir -p "${WORK_DIR}"
+cd "${WORK_DIR}"
+
+# Helper function to download and unpack VOC 2012 dataset.
+download_and_uncompress() {
+ local BASE_URL=${1}
+ local FILENAME=${2}
+
+ if [ ! -f "${FILENAME}" ]; then
+ echo "Downloading ${FILENAME} to ${WORK_DIR}"
+ wget -nd -c "${BASE_URL}/${FILENAME}"
+ fi
+ echo "Uncompressing ${FILENAME}"
+ tar -xf "${FILENAME}"
+}
+
+# Download the images.
+BASE_URL="http://host.robots.ox.ac.uk/pascal/VOC/voc2012/"
+FILENAME="VOCtrainval_11-May-2012.tar"
+
+download_and_uncompress "${BASE_URL}" "${FILENAME}"
+
+cd "${CURRENT_DIR}"
+
+# Root path for PASCAL VOC 2012 dataset.
+PASCAL_ROOT="${WORK_DIR}/VOCdevkit/VOC2012"
+
+# Remove the colormap in the ground truth annotations.
+SEG_FOLDER="${PASCAL_ROOT}/SegmentationClass"
+SEMANTIC_SEG_FOLDER="${PASCAL_ROOT}/SegmentationClassRaw"
+
+echo "Removing the color map in ground truth annotations..."
+python ./remove_gt_colormap.py \
+ --original_gt_folder="${SEG_FOLDER}" \
+ --output_dir="${SEMANTIC_SEG_FOLDER}"
+
+# Build TFRecords of the dataset.
+# First, create output directory for storing TFRecords.
+OUTPUT_DIR="${WORK_DIR}/tfrecord"
+mkdir -p "${OUTPUT_DIR}"
+
+IMAGE_FOLDER="${PASCAL_ROOT}/JPEGImages"
+LIST_FOLDER="${PASCAL_ROOT}/ImageSets/Segmentation"
+
+echo "Converting PASCAL VOC 2012 dataset..."
+python ./build_voc2012_data.py \
+ --image_folder="${IMAGE_FOLDER}" \
+ --semantic_segmentation_folder="${SEMANTIC_SEG_FOLDER}" \
+ --list_folder="${LIST_FOLDER}" \
+ --image_format="jpg" \
+ --output_dir="${OUTPUT_DIR}"
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/remove_gt_colormap.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/remove_gt_colormap.py"
new file mode 100644
index 00000000..7bd68b02
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/remove_gt_colormap.py"
@@ -0,0 +1,83 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Removes the color map from segmentation annotations.
+
+Removes the color map from the ground truth segmentation annotations and saves
+the results to output_dir.
+"""
+import glob
+import os.path
+import numpy as np
+
+from PIL import Image
+
+import tensorflow as tf
+
+FLAGS = tf.app.flags.FLAGS
+
+tf.app.flags.DEFINE_string('original_gt_folder',
+ './VOCdevkit/VOC2012/SegmentationClass',
+ 'Original ground truth annotations.')
+
+tf.app.flags.DEFINE_string('segmentation_format', 'png', 'Segmentation format.')
+
+tf.app.flags.DEFINE_string('output_dir',
+ './VOCdevkit/VOC2012/SegmentationClassRaw',
+ 'folder to save modified ground truth annotations.')
+
+
+def _remove_colormap(filename):
+ """Removes the color map from the annotation.
+
+ Args:
+ filename: Ground truth annotation filename.
+
+ Returns:
+ Annotation without color map.
+ """
+ return np.array(Image.open(filename))
+
+
+def _save_annotation(annotation, filename):
+ """Saves the annotation as png file.
+
+ Args:
+ annotation: Segmentation annotation.
+ filename: Output filename.
+ """
+ pil_image = Image.fromarray(annotation.astype(dtype=np.uint8))
+ with tf.gfile.Open(filename, mode='w') as f:
+ pil_image.save(f, 'PNG')
+
+
+def main(unused_argv):
+  # Create the output directory if it does not exist.
+ if not tf.gfile.IsDirectory(FLAGS.output_dir):
+ tf.gfile.MakeDirs(FLAGS.output_dir)
+
+ annotations = glob.glob(os.path.join(FLAGS.original_gt_folder,
+ '*.' + FLAGS.segmentation_format))
+ for annotation in annotations:
+ raw_annotation = _remove_colormap(annotation)
+ filename = os.path.splitext(os.path.basename(annotation))[0]
+ _save_annotation(raw_annotation,
+ os.path.join(
+ FLAGS.output_dir,
+ filename + '.' + FLAGS.segmentation_format))
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/segmentation_dataset.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/segmentation_dataset.py"
new file mode 100644
index 00000000..65c06041
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/datasets/segmentation_dataset.py"
@@ -0,0 +1,198 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Provides data from semantic segmentation datasets.
+
+The SegmentationDataset class provides both images and annotations (semantic
+segmentation and/or instance segmentation) for TensorFlow. Currently, we
+support the following datasets:
+
+1. PASCAL VOC 2012 (http://host.robots.ox.ac.uk/pascal/VOC/voc2012/).
+
+The PASCAL VOC 2012 semantic segmentation dataset annotates 20 foreground
+objects (e.g., bike, person, and so on) and leaves all other semantic classes
+as one background class. The dataset contains 1464, 1449, and 1456 annotated
+images for the training, validation, and test sets, respectively.
+
+2. Cityscapes dataset (https://www.cityscapes-dataset.com)
+
+The Cityscapes dataset contains 19 semantic labels (such as road, person, car,
+and so on) for urban street scenes.
+
+3. ADE20K dataset (http://groups.csail.mit.edu/vision/datasets/ADE20K)
+
+The ADE20K dataset contains 150 semantic labels covering both urban street
+scenes and indoor scenes.
+
+References:
+ M. Everingham, S. M. A. Eslami, L. V. Gool, C. K. I. Williams, J. Winn,
+ and A. Zisserman, The pascal visual object classes challenge a retrospective.
+ IJCV, 2014.
+
+ M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson,
+ U. Franke, S. Roth, and B. Schiele, "The cityscapes dataset for semantic urban
+ scene understanding," In Proc. of CVPR, 2016.
+
+ B. Zhou, H. Zhao, X. Puig, S. Fidler, A. Barriuso, A. Torralba, "Scene Parsing
+ through ADE20K dataset", In Proc. of CVPR, 2017.
+"""
+import collections
+import os.path
+import tensorflow as tf
+
+slim = tf.contrib.slim
+
+dataset = slim.dataset
+
+tfexample_decoder = slim.tfexample_decoder
+
+
+_ITEMS_TO_DESCRIPTIONS = {
+ 'image': 'A color image of varying height and width.',
+    'labels_class': ('A semantic segmentation label whose size matches image. '
+                     'Its values range from 0 (background) to num_classes.'),
+}
+
+# Named tuple to describe the dataset properties.
+DatasetDescriptor = collections.namedtuple(
+ 'DatasetDescriptor',
+ ['splits_to_sizes', # Splits of the dataset into training, val, and test.
+ 'num_classes', # Number of semantic classes, including the background
+ # class (if exists). For example, there are 20
+ # foreground classes + 1 background class in the PASCAL
+ # VOC 2012 dataset. Thus, we set num_classes=21.
+ 'ignore_label', # Ignore label value.
+ ]
+)
+
+_CITYSCAPES_INFORMATION = DatasetDescriptor(
+ splits_to_sizes={
+ 'train': 2975,
+ 'val': 500,
+ },
+ num_classes=19,
+ ignore_label=255,
+)
+
+_PASCAL_VOC_SEG_INFORMATION = DatasetDescriptor(
+ splits_to_sizes={
+ 'train': 1464,
+ 'train_aug': 10582,
+ 'trainval': 2913,
+ 'val': 1449,
+ },
+ num_classes=21,
+ ignore_label=255,
+)
+
+# These numbers (i.e., the sizes of the 'train'/'val' splits) have to be hard
+# coded. You are required to figure them out for your own training/testing
+# examples.
+_ADE20K_INFORMATION = DatasetDescriptor(
+ splits_to_sizes={
+ 'train': 20210, # num of samples in images/training
+ 'val': 2000, # num of samples in images/validation
+ },
+ num_classes=151,
+ ignore_label=0,
+)
+
+
+_DATASETS_INFORMATION = {
+ 'cityscapes': _CITYSCAPES_INFORMATION,
+ 'pascal_voc_seg': _PASCAL_VOC_SEG_INFORMATION,
+ 'ade20k': _ADE20K_INFORMATION,
+}
+
+# Default file pattern of TFRecord of TensorFlow Example.
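+# For example, the 'train' split matches shards such as
+# 'train-00000-of-00004.tfrecord' written by the build_*_data.py scripts.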
+_FILE_PATTERN = '%s-*'
+
+
+def get_cityscapes_dataset_name():
+ return 'cityscapes'
+
+
+def get_dataset(dataset_name, split_name, dataset_dir):
+ """Gets an instance of slim Dataset.
+
+ Args:
+ dataset_name: Dataset name.
+ split_name: A train/val Split name.
+ dataset_dir: The directory of the dataset sources.
+
+ Returns:
+ An instance of slim Dataset.
+
+ Raises:
+ ValueError: if the dataset_name or split_name is not recognized.
+ """
+ if dataset_name not in _DATASETS_INFORMATION:
+ raise ValueError('The specified dataset is not supported yet.')
+
+ splits_to_sizes = _DATASETS_INFORMATION[dataset_name].splits_to_sizes
+
+ if split_name not in splits_to_sizes:
+ raise ValueError('data split name %s not recognized' % split_name)
+
+ # Prepare the variables for different datasets.
+ num_classes = _DATASETS_INFORMATION[dataset_name].num_classes
+ ignore_label = _DATASETS_INFORMATION[dataset_name].ignore_label
+
+ file_pattern = _FILE_PATTERN
+ file_pattern = os.path.join(dataset_dir, file_pattern % split_name)
+
+ # Specify how the TF-Examples are decoded.
+ keys_to_features = {
+ 'image/encoded': tf.FixedLenFeature(
+ (), tf.string, default_value=''),
+ 'image/filename': tf.FixedLenFeature(
+ (), tf.string, default_value=''),
+ 'image/format': tf.FixedLenFeature(
+ (), tf.string, default_value='jpeg'),
+ 'image/height': tf.FixedLenFeature(
+ (), tf.int64, default_value=0),
+ 'image/width': tf.FixedLenFeature(
+ (), tf.int64, default_value=0),
+ 'image/segmentation/class/encoded': tf.FixedLenFeature(
+ (), tf.string, default_value=''),
+ 'image/segmentation/class/format': tf.FixedLenFeature(
+ (), tf.string, default_value='png'),
+ }
+ items_to_handlers = {
+ 'image': tfexample_decoder.Image(
+ image_key='image/encoded',
+ format_key='image/format',
+ channels=3),
+ 'image_name': tfexample_decoder.Tensor('image/filename'),
+ 'height': tfexample_decoder.Tensor('image/height'),
+ 'width': tfexample_decoder.Tensor('image/width'),
+ 'labels_class': tfexample_decoder.Image(
+ image_key='image/segmentation/class/encoded',
+ format_key='image/segmentation/class/format',
+ channels=1),
+ }
+
+ decoder = tfexample_decoder.TFExampleDecoder(
+ keys_to_features, items_to_handlers)
+
+ return dataset.Dataset(
+ data_sources=file_pattern,
+ reader=tf.TFRecordReader,
+ decoder=decoder,
+ num_samples=splits_to_sizes[split_name],
+ items_to_descriptions=_ITEMS_TO_DESCRIPTIONS,
+ ignore_label=ignore_label,
+ num_classes=num_classes,
+ name=dataset_name,
+ multi_label=True)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/deeplab_demo.ipynb" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/deeplab_demo.ipynb"
new file mode 100644
index 00000000..84728fe1
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/deeplab_demo.ipynb"
@@ -0,0 +1,348 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "colab_type": "text",
+ "id": "KFPcBuVFw61h"
+ },
+ "source": [
+ "# DeepLab Demo\n",
+ "\n",
+ "This demo will demostrate the steps to run deeplab semantic segmentation model on sample input images."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 0,
+ "metadata": {
+ "cellView": "code",
+ "colab": {
+ "autoexec": {
+ "startup": false,
+ "wait_interval": 0
+ }
+ },
+ "colab_type": "code",
+ "id": "kAbdmRmvq0Je"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Imports\n",
+ "\n",
+ "import os\n",
+ "from io import BytesIO\n",
+ "import tarfile\n",
+ "import tempfile\n",
+ "from six.moves import urllib\n",
+ "\n",
+ "from matplotlib import gridspec\n",
+ "from matplotlib import pyplot as plt\n",
+ "import numpy as np\n",
+ "from PIL import Image\n",
+ "\n",
+ "import tensorflow as tf"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 0,
+ "metadata": {
+ "cellView": "code",
+ "colab": {
+ "autoexec": {
+ "startup": false,
+ "wait_interval": 0
+ }
+ },
+ "colab_type": "code",
+ "id": "vN0kU6NJ1Ye5"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Helper methods\n",
+ "\n",
+ "\n",
+ "class DeepLabModel(object):\n",
+ " \"\"\"Class to load deeplab model and run inference.\"\"\"\n",
+ "\n",
+ " INPUT_TENSOR_NAME = 'ImageTensor:0'\n",
+ " OUTPUT_TENSOR_NAME = 'SemanticPredictions:0'\n",
+ " INPUT_SIZE = 513\n",
+ " FROZEN_GRAPH_NAME = 'frozen_inference_graph'\n",
+ "\n",
+ " def __init__(self, tarball_path):\n",
+ " \"\"\"Creates and loads pretrained deeplab model.\"\"\"\n",
+ " self.graph = tf.Graph()\n",
+ "\n",
+ " graph_def = None\n",
+ " # Extract frozen graph from tar archive.\n",
+ " tar_file = tarfile.open(tarball_path)\n",
+ " for tar_info in tar_file.getmembers():\n",
+ " if self.FROZEN_GRAPH_NAME in os.path.basename(tar_info.name):\n",
+ " file_handle = tar_file.extractfile(tar_info)\n",
+ " graph_def = tf.GraphDef.FromString(file_handle.read())\n",
+ " break\n",
+ "\n",
+ " tar_file.close()\n",
+ "\n",
+ " if graph_def is None:\n",
+ " raise RuntimeError('Cannot find inference graph in tar archive.')\n",
+ "\n",
+ " with self.graph.as_default():\n",
+ " tf.import_graph_def(graph_def, name='')\n",
+ "\n",
+ " self.sess = tf.Session(graph=self.graph)\n",
+ "\n",
+ " def run(self, image):\n",
+ " \"\"\"Runs inference on a single image.\n",
+ "\n",
+ " Args:\n",
+ " image: A PIL.Image object, raw input image.\n",
+ "\n",
+ " Returns:\n",
+ " resized_image: RGB image resized from original input image.\n",
+ " seg_map: Segmentation map of `resized_image`.\n",
+ " \"\"\"\n",
+ " width, height = image.size\n",
+ " resize_ratio = 1.0 * self.INPUT_SIZE / max(width, height)\n",
+ " target_size = (int(resize_ratio * width), int(resize_ratio * height))\n",
+ " resized_image = image.convert('RGB').resize(target_size, Image.ANTIALIAS)\n",
+ " batch_seg_map = self.sess.run(\n",
+ " self.OUTPUT_TENSOR_NAME,\n",
+ " feed_dict={self.INPUT_TENSOR_NAME: [np.asarray(resized_image)]})\n",
+ " seg_map = batch_seg_map[0]\n",
+ " return resized_image, seg_map\n",
+ "\n",
+ "\n",
+ "def create_pascal_label_colormap():\n",
+ " \"\"\"Creates a label colormap used in PASCAL VOC segmentation benchmark.\n",
+ "\n",
+ " Returns:\n",
+ " A Colormap for visualizing segmentation results.\n",
+ " \"\"\"\n",
+ " colormap = np.zeros((256, 3), dtype=int)\n",
+ " ind = np.arange(256, dtype=int)\n",
+ "\n",
+ " for shift in reversed(range(8)):\n",
+ " for channel in range(3):\n",
+ " colormap[:, channel] |= ((ind \u003e\u003e channel) \u0026 1) \u003c\u003c shift\n",
+ " ind \u003e\u003e= 3\n",
+ "\n",
+ " return colormap\n",
+ "\n",
+ "\n",
+ "def label_to_color_image(label):\n",
+ " \"\"\"Adds color defined by the dataset colormap to the label.\n",
+ "\n",
+ " Args:\n",
+ " label: A 2D array with integer type, storing the segmentation label.\n",
+ "\n",
+ " Returns:\n",
+ " result: A 2D array with floating type. The element of the array\n",
+ " is the color indexed by the corresponding element in the input label\n",
+ " to the PASCAL color map.\n",
+ "\n",
+ " Raises:\n",
+ " ValueError: If label is not of rank 2 or its value is larger than color\n",
+ " map maximum entry.\n",
+ " \"\"\"\n",
+ " if label.ndim != 2:\n",
+ " raise ValueError('Expect 2-D input label')\n",
+ "\n",
+ " colormap = create_pascal_label_colormap()\n",
+ "\n",
+ " if np.max(label) \u003e= len(colormap):\n",
+ " raise ValueError('label value too large.')\n",
+ "\n",
+ " return colormap[label]\n",
+ "\n",
+ "\n",
+ "def vis_segmentation(image, seg_map):\n",
+ " \"\"\"Visualizes input image, segmentation map and overlay view.\"\"\"\n",
+ " plt.figure(figsize=(15, 5))\n",
+ " grid_spec = gridspec.GridSpec(1, 4, width_ratios=[6, 6, 6, 1])\n",
+ "\n",
+ " plt.subplot(grid_spec[0])\n",
+ " plt.imshow(image)\n",
+ " plt.axis('off')\n",
+ " plt.title('input image')\n",
+ "\n",
+ " plt.subplot(grid_spec[1])\n",
+ " seg_image = label_to_color_image(seg_map).astype(np.uint8)\n",
+ " plt.imshow(seg_image)\n",
+ " plt.axis('off')\n",
+ " plt.title('segmentation map')\n",
+ "\n",
+ " plt.subplot(grid_spec[2])\n",
+ " plt.imshow(image)\n",
+ " plt.imshow(seg_image, alpha=0.7)\n",
+ " plt.axis('off')\n",
+ " plt.title('segmentation overlay')\n",
+ "\n",
+ " unique_labels = np.unique(seg_map)\n",
+ " ax = plt.subplot(grid_spec[3])\n",
+ " plt.imshow(\n",
+ " FULL_COLOR_MAP[unique_labels].astype(np.uint8), interpolation='nearest')\n",
+ " ax.yaxis.tick_right()\n",
+ " plt.yticks(range(len(unique_labels)), LABEL_NAMES[unique_labels])\n",
+ " plt.xticks([], [])\n",
+ " ax.tick_params(width=0.0)\n",
+ " plt.grid('off')\n",
+ " plt.show()\n",
+ "\n",
+ "\n",
+ "LABEL_NAMES = np.asarray([\n",
+ " 'background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus',\n",
+ " 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike',\n",
+ " 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tv'\n",
+ "])\n",
+ "\n",
+ "FULL_LABEL_MAP = np.arange(len(LABEL_NAMES)).reshape(len(LABEL_NAMES), 1)\n",
+ "FULL_COLOR_MAP = label_to_color_image(FULL_LABEL_MAP)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 0,
+ "metadata": {
+ "colab": {
+ "autoexec": {
+ "startup": false,
+ "wait_interval": 0
+ }
+ },
+ "colab_type": "code",
+ "id": "c4oXKmnjw6i_"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Select and download models {display-mode: \"form\"}\n",
+ "\n",
+ "MODEL_NAME = 'mobilenetv2_coco_voctrainaug' # @param ['mobilenetv2_coco_voctrainaug', 'mobilenetv2_coco_voctrainval', 'xception_coco_voctrainaug', 'xception_coco_voctrainval']\n",
+ "\n",
+ "_DOWNLOAD_URL_PREFIX = 'http://download.tensorflow.org/models/'\n",
+ "_MODEL_URLS = {\n",
+ " 'mobilenetv2_coco_voctrainaug':\n",
+ " 'deeplabv3_mnv2_pascal_train_aug_2018_01_29.tar.gz',\n",
+ " 'mobilenetv2_coco_voctrainval':\n",
+ " 'deeplabv3_mnv2_pascal_trainval_2018_01_29.tar.gz',\n",
+ " 'xception_coco_voctrainaug':\n",
+ " 'deeplabv3_pascal_train_aug_2018_01_04.tar.gz',\n",
+ " 'xception_coco_voctrainval':\n",
+ " 'deeplabv3_pascal_trainval_2018_01_04.tar.gz',\n",
+ "}\n",
+ "_TARBALL_NAME = 'deeplab_model.tar.gz'\n",
+ "\n",
+ "model_dir = tempfile.mkdtemp()\n",
+ "tf.gfile.MakeDirs(model_dir)\n",
+ "\n",
+ "download_path = os.path.join(model_dir, _TARBALL_NAME)\n",
+ "print('downloading model, this might take a while...')\n",
+ "urllib.request.urlretrieve(_DOWNLOAD_URL_PREFIX + _MODEL_URLS[MODEL_NAME],\n",
+ " download_path)\n",
+ "print('download completed! loading DeepLab model...')\n",
+ "\n",
+ "MODEL = DeepLabModel(download_path)\n",
+ "print('model loaded successfully!')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "colab_type": "text",
+ "id": "SZst78N-4OKO"
+ },
+ "source": [
+ "## Run on sample images\n",
+ "\n",
+ "Select one of sample images (leave `IMAGE_URL` empty) or feed any internet image\n",
+ "url for inference.\n",
+ "\n",
+ "Note that we are using single scale inference in the demo for fast computation,\n",
+ "so the results may slightly differ from the visualizations in\n",
+ "[README](https://github.com/tensorflow/models/blob/master/research/deeplab/README.md),\n",
+ "which uses multi-scale and left-right flipped inputs."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 0,
+ "metadata": {
+ "colab": {
+ "autoexec": {
+ "startup": false,
+ "wait_interval": 0
+ }
+ },
+ "colab_type": "code",
+ "id": "edGukUHXyymr"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Run on sample images {display-mode: \"form\"}\n",
+ "\n",
+ "SAMPLE_IMAGE = 'image1' # @param ['image1', 'image2', 'image3']\n",
+ "IMAGE_URL = '' #@param {type:\"string\"}\n",
+ "\n",
+ "_SAMPLE_URL = ('https://github.com/tensorflow/models/blob/master/research/'\n",
+ " 'deeplab/g3doc/img/%s.jpg?raw=true')\n",
+ "\n",
+ "\n",
+ "def run_visualization(url):\n",
+ " \"\"\"Inferences DeepLab model and visualizes result.\"\"\"\n",
+ " try:\n",
+ " f = urllib.request.urlopen(url)\n",
+ " jpeg_str = f.read()\n",
+ " original_im = Image.open(BytesIO(jpeg_str))\n",
+ " except IOError:\n",
+ " print('Cannot retrieve image. Please check url: ' + url)\n",
+ " return\n",
+ "\n",
+ " print('running deeplab on image %s...' % url)\n",
+ " resized_im, seg_map = MODEL.run(original_im)\n",
+ "\n",
+ " vis_segmentation(resized_im, seg_map)\n",
+ "\n",
+ "\n",
+ "image_url = IMAGE_URL or _SAMPLE_URL % SAMPLE_IMAGE\n",
+ "run_visualization(image_url)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 0,
+ "metadata": {
+ "colab": {
+ "autoexec": {
+ "startup": false,
+ "wait_interval": 0
+ }
+ },
+ "colab_type": "code",
+ "id": "7XrFNGsxzSIB"
+ },
+ "outputs": [],
+ "source": [
+ ""
+ ]
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "collapsed_sections": [],
+ "default_view": {},
+ "name": "DeepLab Demo.ipynb",
+ "provenance": [],
+ "version": "0.3.2",
+ "views": {}
+ },
+ "kernelspec": {
+ "display_name": "Python 2",
+ "name": "python2"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/eval.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/eval.py"
new file mode 100644
index 00000000..600bb5f7
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/eval.py"
@@ -0,0 +1,176 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Evaluation script for the DeepLab model.
+
+See model.py for more details and usage.
+"""
+
+import math
+import six
+import tensorflow as tf
+from deeplab import common
+from deeplab import model
+from deeplab.datasets import segmentation_dataset
+from deeplab.utils import input_generator
+
+slim = tf.contrib.slim
+
+flags = tf.app.flags
+
+FLAGS = flags.FLAGS
+
+flags.DEFINE_string('master', '', 'BNS name of the tensorflow server')
+
+# Settings for log directories.
+
+flags.DEFINE_string('eval_logdir', None, 'Where to write the event logs.')
+
+flags.DEFINE_string('checkpoint_dir', None, 'Directory of model checkpoints.')
+
+# Settings for evaluating the model.
+
+flags.DEFINE_integer('eval_batch_size', 1,
+ 'The number of images in each batch during evaluation.')
+
+flags.DEFINE_multi_integer('eval_crop_size', [513, 513],
+ 'Image crop size [height, width] for evaluation.')
+
+flags.DEFINE_integer('eval_interval_secs', 60 * 5,
+ 'How often (in seconds) to run evaluation.')
+
+# For `xception_65`, use atrous_rates = [12, 24, 36] if output_stride = 8, or
+# rates = [6, 12, 18] if output_stride = 16. For `mobilenet_v2`, use None. Note
+# one could use different atrous_rates/output_stride during training/evaluation.
+flags.DEFINE_multi_integer('atrous_rates', None,
+ 'Atrous rates for atrous spatial pyramid pooling.')
+
+flags.DEFINE_integer('output_stride', 16,
+ 'The ratio of input to output spatial resolution.')
+
+# Change to [0.5, 0.75, 1.0, 1.25, 1.5, 1.75] for multi-scale test.
+flags.DEFINE_multi_float('eval_scales', [1.0],
+ 'The scales to resize images for evaluation.')
+
+# Change to True for adding flipped images during test.
+flags.DEFINE_bool('add_flipped_images', False,
+ 'Add flipped images for evaluation or not.')
+
+# Dataset settings.
+
+flags.DEFINE_string('dataset', 'pascal_voc_seg',
+ 'Name of the segmentation dataset.')
+
+flags.DEFINE_string('eval_split', 'val',
+                    'Which split of the dataset is used for evaluation.')
+
+flags.DEFINE_string('dataset_dir', None, 'Where the dataset resides.')
+
+flags.DEFINE_integer('max_number_of_evaluations', 0,
+ 'Maximum number of eval iterations. Will loop '
+ 'indefinitely upon nonpositive values.')
+
+
+def main(unused_argv):
+ tf.logging.set_verbosity(tf.logging.INFO)
+ # Get dataset-dependent information.
+ dataset = segmentation_dataset.get_dataset(
+ FLAGS.dataset, FLAGS.eval_split, dataset_dir=FLAGS.dataset_dir)
+
+ tf.gfile.MakeDirs(FLAGS.eval_logdir)
+ tf.logging.info('Evaluating on %s set', FLAGS.eval_split)
+
+ with tf.Graph().as_default():
+ samples = input_generator.get(
+ dataset,
+ FLAGS.eval_crop_size,
+ FLAGS.eval_batch_size,
+ min_resize_value=FLAGS.min_resize_value,
+ max_resize_value=FLAGS.max_resize_value,
+ resize_factor=FLAGS.resize_factor,
+ dataset_split=FLAGS.eval_split,
+ is_training=False,
+ model_variant=FLAGS.model_variant)
+
+ model_options = common.ModelOptions(
+ outputs_to_num_classes={common.OUTPUT_TYPE: dataset.num_classes},
+ crop_size=FLAGS.eval_crop_size,
+ atrous_rates=FLAGS.atrous_rates,
+ output_stride=FLAGS.output_stride)
+
+ if tuple(FLAGS.eval_scales) == (1.0,):
+ tf.logging.info('Performing single-scale test.')
+ predictions = model.predict_labels(samples[common.IMAGE], model_options,
+ image_pyramid=FLAGS.image_pyramid)
+ else:
+ tf.logging.info('Performing multi-scale test.')
+ predictions = model.predict_labels_multi_scale(
+ samples[common.IMAGE],
+ model_options=model_options,
+ eval_scales=FLAGS.eval_scales,
+ add_flipped_images=FLAGS.add_flipped_images)
+ predictions = predictions[common.OUTPUT_TYPE]
+ predictions = tf.reshape(predictions, shape=[-1])
+ labels = tf.reshape(samples[common.LABEL], shape=[-1])
+ weights = tf.to_float(tf.not_equal(labels, dataset.ignore_label))
+
+ # Set ignore_label regions to label 0, because metrics.mean_iou requires
+ # range of labels = [0, dataset.num_classes). Note the ignore_label regions
+ # are not evaluated since the corresponding regions contain weights = 0.
+ labels = tf.where(
+ tf.equal(labels, dataset.ignore_label), tf.zeros_like(labels), labels)
+
+ predictions_tag = 'miou'
+ for eval_scale in FLAGS.eval_scales:
+ predictions_tag += '_' + str(eval_scale)
+ if FLAGS.add_flipped_images:
+ predictions_tag += '_flipped'
+
+ # Define the evaluation metric.
+ metric_map = {}
+ metric_map[predictions_tag] = tf.metrics.mean_iou(
+ predictions, labels, dataset.num_classes, weights=weights)
+
+ metrics_to_values, metrics_to_updates = (
+ tf.contrib.metrics.aggregate_metric_map(metric_map))
+
+ for metric_name, metric_value in six.iteritems(metrics_to_values):
+ slim.summaries.add_scalar_summary(
+ metric_value, metric_name, print_summary=True)
+
+ num_batches = int(
+ math.ceil(dataset.num_samples / float(FLAGS.eval_batch_size)))
+
+ tf.logging.info('Eval num images %d', dataset.num_samples)
+ tf.logging.info('Eval batch size %d and num batch %d',
+ FLAGS.eval_batch_size, num_batches)
+
+ num_eval_iters = None
+ if FLAGS.max_number_of_evaluations > 0:
+ num_eval_iters = FLAGS.max_number_of_evaluations
+ slim.evaluation.evaluation_loop(
+ master=FLAGS.master,
+ checkpoint_dir=FLAGS.checkpoint_dir,
+ logdir=FLAGS.eval_logdir,
+ num_evals=num_batches,
+ eval_op=list(metrics_to_updates.values()),
+ max_number_of_evaluations=num_eval_iters,
+ eval_interval_secs=FLAGS.eval_interval_secs)
+
+
+if __name__ == '__main__':
+ flags.mark_flag_as_required('checkpoint_dir')
+ flags.mark_flag_as_required('eval_logdir')
+ flags.mark_flag_as_required('dataset_dir')
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/export_model.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/export_model.py"
new file mode 100644
index 00000000..c2ad6a61
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/export_model.py"
@@ -0,0 +1,166 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Exports trained model to TensorFlow frozen graph."""
+
+import os
+import tensorflow as tf
+
+from tensorflow.python.tools import freeze_graph
+from deeplab import common
+from deeplab import input_preprocess
+from deeplab import model
+
+slim = tf.contrib.slim
+flags = tf.app.flags
+
+FLAGS = flags.FLAGS
+
+flags.DEFINE_string('checkpoint_path', None, 'Checkpoint path')
+
+flags.DEFINE_string('export_path', None,
+                    'Path to the output TensorFlow frozen graph.')
+
+flags.DEFINE_integer('num_classes', 21, 'Number of classes.')
+
+flags.DEFINE_multi_integer('crop_size', [513, 513],
+ 'Crop size [height, width].')
+
+# For `xception_65`, use atrous_rates = [12, 24, 36] if output_stride = 8, or
+# rates = [6, 12, 18] if output_stride = 16. For `mobilenet_v2`, use None. Note
+# one could use different atrous_rates/output_stride during training/evaluation.
+flags.DEFINE_multi_integer('atrous_rates', None,
+ 'Atrous rates for atrous spatial pyramid pooling.')
+
+flags.DEFINE_integer('output_stride', 8,
+ 'The ratio of input to output spatial resolution.')
+
+# Change to [0.5, 0.75, 1.0, 1.25, 1.5, 1.75] for multi-scale inference.
+flags.DEFINE_multi_float('inference_scales', [1.0],
+ 'The scales to resize images for inference.')
+
+flags.DEFINE_bool('add_flipped_images', False,
+ 'Add flipped images during inference or not.')
+
+# Input name of the exported model.
+_INPUT_NAME = 'ImageTensor'
+
+# Output name of the exported model.
+_OUTPUT_NAME = 'SemanticPredictions'
+
+
+def _create_input_tensors():
+ """Creates and prepares input tensors for DeepLab model.
+
+ This method creates a 4-D uint8 image tensor 'ImageTensor' with shape
+ [1, None, None, 3]. The actual input tensor name to use during inference is
+ 'ImageTensor:0'.
+
+ Returns:
+ image: Preprocessed 4-D float32 tensor with shape [1, crop_height,
+ crop_width, 3].
+ original_image_size: Original image shape tensor [height, width].
+ resized_image_size: Resized image shape tensor [height, width].
+ """
+ # input_preprocess takes 4-D image tensor as input.
+ input_image = tf.placeholder(tf.uint8, [1, None, None, 3], name=_INPUT_NAME)
+ original_image_size = tf.shape(input_image)[1:3]
+
+ # Squeeze the dimension in axis=0 since `preprocess_image_and_label` assumes
+ # image to be 3-D.
+ image = tf.squeeze(input_image, axis=0)
+ resized_image, image, _ = input_preprocess.preprocess_image_and_label(
+ image,
+ label=None,
+ crop_height=FLAGS.crop_size[0],
+ crop_width=FLAGS.crop_size[1],
+ min_resize_value=FLAGS.min_resize_value,
+ max_resize_value=FLAGS.max_resize_value,
+ resize_factor=FLAGS.resize_factor,
+ is_training=False,
+ model_variant=FLAGS.model_variant)
+ resized_image_size = tf.shape(resized_image)[:2]
+
+ # Expand the dimension in axis=0, since the following operations assume the
+ # image to be 4-D.
+ image = tf.expand_dims(image, 0)
+
+ return image, original_image_size, resized_image_size
+
+
+def main(unused_argv):
+ tf.logging.set_verbosity(tf.logging.INFO)
+ tf.logging.info('Prepare to export model to: %s', FLAGS.export_path)
+
+ with tf.Graph().as_default():
+ image, image_size, resized_image_size = _create_input_tensors()
+
+ model_options = common.ModelOptions(
+ outputs_to_num_classes={common.OUTPUT_TYPE: FLAGS.num_classes},
+ crop_size=FLAGS.crop_size,
+ atrous_rates=FLAGS.atrous_rates,
+ output_stride=FLAGS.output_stride)
+
+ if tuple(FLAGS.inference_scales) == (1.0,):
+ tf.logging.info('Exported model performs single-scale inference.')
+ predictions = model.predict_labels(
+ image,
+ model_options=model_options,
+ image_pyramid=FLAGS.image_pyramid)
+ else:
+ tf.logging.info('Exported model performs multi-scale inference.')
+ predictions = model.predict_labels_multi_scale(
+ image,
+ model_options=model_options,
+ eval_scales=FLAGS.inference_scales,
+ add_flipped_images=FLAGS.add_flipped_images)
+
+ predictions = tf.cast(predictions[common.OUTPUT_TYPE], tf.float32)
+ # Crop the valid regions from the predictions.
+ semantic_predictions = tf.slice(
+ predictions,
+ [0, 0, 0],
+ [1, resized_image_size[0], resized_image_size[1]])
+ # Resize back the prediction to the original image size.
+ def _resize_label(label, label_size):
+ # Expand dimension of label to [1, height, width, 1] for resize operation.
+ label = tf.expand_dims(label, 3)
+ resized_label = tf.image.resize_images(
+ label,
+ label_size,
+ method=tf.image.ResizeMethod.NEAREST_NEIGHBOR,
+ align_corners=True)
+ return tf.cast(tf.squeeze(resized_label, 3), tf.int32)
+ semantic_predictions = _resize_label(semantic_predictions, image_size)
+ semantic_predictions = tf.identity(semantic_predictions, name=_OUTPUT_NAME)
+
+ saver = tf.train.Saver(tf.model_variables())
+
+ tf.gfile.MakeDirs(os.path.dirname(FLAGS.export_path))
+ freeze_graph.freeze_graph_with_def_protos(
+ tf.get_default_graph().as_graph_def(add_shapes=True),
+ saver.as_saver_def(),
+ FLAGS.checkpoint_path,
+ _OUTPUT_NAME,
+ restore_op_name=None,
+ filename_tensor_name=None,
+ output_graph=FLAGS.export_path,
+ clear_devices=True,
+ initializer_nodes=None)
+
+
+if __name__ == '__main__':
+ flags.mark_flag_as_required('checkpoint_path')
+ flags.mark_flag_as_required('export_path')
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/ade20k.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/ade20k.md"
new file mode 100644
index 00000000..d7d8a382
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/ade20k.md"
@@ -0,0 +1,108 @@
+# Running DeepLab on ADE20K Semantic Segmentation Dataset
+
+This page walks through the steps required to run DeepLab on the ADE20K dataset
+on a local machine.
+
+## Download dataset and convert to TFRecord
+
+We have prepared the script (under the folder `datasets`) to download and
+convert the ADE20K semantic segmentation dataset to TFRecord format.
+
+```bash
+# From the tensorflow/models/research/deeplab/datasets directory.
+bash download_and_convert_ade20k.sh
+```
+
+The converted dataset will be saved at ./deeplab/datasets/ADE20K/tfrecord.
+
+## Recommended Directory Structure for Training and Evaluation
+
+```
++ datasets
+ - build_data.py
+ - build_ade20k_data.py
+ - download_and_convert_ade20k.sh
+ + ADE20K
+ + tfrecord
+ + exp
+ + train_on_train_set
+ + train
+ + eval
+ + vis
+ + ADEChallengeData2016
+ + annotations
+ + training
+ + validation
+ + images
+ + training
+ + validation
+```
+
+where the folder `train_on_train_set` stores the train/eval/vis events and
+results (when training DeepLab on the ADE20K train set).
+
+## Running the train/eval/vis jobs
+
+A local training job using `xception_65` can be run with the following command:
+
+```bash
+# From tensorflow/models/research/
+python deeplab/train.py \
+ --logtostderr \
+ --training_number_of_steps=150000 \
+ --train_split="train" \
+ --model_variant="xception_65" \
+ --atrous_rates=6 \
+ --atrous_rates=12 \
+ --atrous_rates=18 \
+ --output_stride=16 \
+ --decoder_output_stride=4 \
+ --train_crop_size=513 \
+ --train_crop_size=513 \
+ --train_batch_size=4 \
+ --min_resize_value=513 \
+ --max_resize_value=513 \
+ --resize_factor=16 \
+ --dataset="ade20k" \
+ --tf_initial_checkpoint=${PATH_TO_INITIAL_CHECKPOINT} \
+ --train_logdir=${PATH_TO_TRAIN_DIR}\
+ --dataset_dir=${PATH_TO_DATASET}
+```
+
+where ${PATH\_TO\_INITIAL\_CHECKPOINT} is the path to the initial checkpoint,
+${PATH\_TO\_TRAIN\_DIR} is the directory to which training checkpoints and
+events will be written (it is recommended to set it to the
+`train_on_train_set/train` directory above), and ${PATH\_TO\_DATASET} is the
+directory in which the ADE20K dataset resides (the `tfrecord` directory above).
+
+**Note that for train.py:**
+
+1. In order to fine-tune the BN layers, one needs to use a large batch size
+   (> 12) and set fine_tune_batch_norm = True. Here, we simply use a small
+   batch size during training for the purpose of demonstration. If the users
+   have limited GPU memory at hand, please fine-tune from our provided
+   checkpoints, whose batch norm parameters have already been trained, and use
+   a smaller learning rate with fine_tune_batch_norm = False.
+
+2. Users should fine-tune `min_resize_value` and `max_resize_value` to get
+   better results. Note that `resize_factor` has to be equal to `output_stride`.
+
+3. The users should change atrous_rates from [6, 12, 18] to [12, 24, 36] if
+   setting output_stride=8 (see the flag sketch after this list).
+
+4. The users could skip the flag `decoder_output_stride` if they do not want
+   to use the decoder structure.
+
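+As a sketch of notes 2 and 3 above, a hypothetical run with `output_stride=8`
+would change only the corresponding flags of the training command:
+
+```bash
+# Only the flags that change are shown; all other flags stay as above.
+  --atrous_rates=12 \
+  --atrous_rates=24 \
+  --atrous_rates=36 \
+  --output_stride=8 \
+  --resize_factor=8 \
+```
+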
+## Running Tensorboard
+
+Progress for training and evaluation jobs can be inspected using Tensorboard. If
+using the recommended directory structure, Tensorboard can be run using the
+following command:
+
+```bash
+tensorboard --logdir=${PATH_TO_LOG_DIRECTORY}
+```
+
+where `${PATH_TO_LOG_DIRECTORY}` points to the directory that contains the
+train directory (e.g., the folder `train_on_train_set` in the above example).
+Please note it may take Tensorboard a couple of minutes to populate with data.
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/cityscapes.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/cityscapes.md"
new file mode 100644
index 00000000..8e897031
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/cityscapes.md"
@@ -0,0 +1,160 @@
+# Running DeepLab on Cityscapes Semantic Segmentation Dataset
+
+This page walks through the steps required to run DeepLab on Cityscapes on a
+local machine.
+
+## Download dataset and convert to TFRecord
+
+We have prepared the script (under the folder `datasets`) to convert the
+Cityscapes dataset to TFRecord format. The users are required to download the
+dataset beforehand by registering on the [website](https://www.cityscapes-dataset.com/).
+
+```bash
+# From the tensorflow/models/research/deeplab/datasets directory.
+sh convert_cityscapes.sh
+```
+
+The converted dataset will be saved at ./deeplab/datasets/cityscapes/tfrecord.
+
+## Recommended Directory Structure for Training and Evaluation
+
+```
++ datasets
+ + cityscapes
+ + leftImg8bit
+ + gtFine
+ + tfrecord
+ + exp
+ + train_on_train_set
+ + train
+ + eval
+ + vis
+```
+
+where the folder `train_on_train_set` stores the train/eval/vis events and
+results (when training DeepLab on the Cityscapes train set).
+
+## Running the train/eval/vis jobs
+
+A local training job using `xception_65` can be run with the following command:
+
+```bash
+# From tensorflow/models/research/
+python deeplab/train.py \
+ --logtostderr \
+ --training_number_of_steps=90000 \
+ --train_split="train" \
+ --model_variant="xception_65" \
+ --atrous_rates=6 \
+ --atrous_rates=12 \
+ --atrous_rates=18 \
+ --output_stride=16 \
+ --decoder_output_stride=4 \
+ --train_crop_size=769 \
+ --train_crop_size=769 \
+ --train_batch_size=1 \
+ --dataset="cityscapes" \
+ --tf_initial_checkpoint=${PATH_TO_INITIAL_CHECKPOINT} \
+ --train_logdir=${PATH_TO_TRAIN_DIR} \
+ --dataset_dir=${PATH_TO_DATASET}
+```
+
+where ${PATH_TO_INITIAL_CHECKPOINT} is the path to the initial checkpoint
+(usually an ImageNet pretrained checkpoint), ${PATH_TO_TRAIN_DIR} is the
+directory in which training checkpoints and events will be written to, and
+${PATH_TO_DATASET} is the directory in which the Cityscapes dataset resides.
+
+**Note that for {train,eval,vis}.py**:
+
+1. In order to reproduce our results, one needs to use a large batch size (> 8)
+   and set fine_tune_batch_norm = True. Here, we simply use a small batch size
+   during training for the purpose of demonstration. If the users have limited
+   GPU memory at hand, please fine-tune from our provided checkpoints, whose
+   batch norm parameters have already been trained, and use a smaller learning
+   rate with fine_tune_batch_norm = False.
+
+2. The users should change atrous_rates from [6, 12, 18] to [12, 24, 36] if
+ setting output_stride=8.
+
+3. The users could skip the flag `decoder_output_stride` if they do not want
+   to use the decoder structure.
+
+4. Change and add the following flags in order to use the provided dense prediction cell.
+
+```bash
+--model_variant="xception_71"
+--dense_prediction_cell_json="deeplab/core/dense_prediction_cell_branch5_top1_cityscapes.json"
+
+```
+
+A local evaluation job using `xception_65` can be run with the following
+command:
+
+```bash
+# From tensorflow/models/research/
+python deeplab/eval.py \
+ --logtostderr \
+ --eval_split="val" \
+ --model_variant="xception_65" \
+ --atrous_rates=6 \
+ --atrous_rates=12 \
+ --atrous_rates=18 \
+ --output_stride=16 \
+ --decoder_output_stride=4 \
+ --eval_crop_size=1025 \
+ --eval_crop_size=2049 \
+ --dataset="cityscapes" \
+ --checkpoint_dir=${PATH_TO_CHECKPOINT} \
+ --eval_logdir=${PATH_TO_EVAL_DIR} \
+ --dataset_dir=${PATH_TO_DATASET}
+```
+
+where ${PATH_TO_CHECKPOINT} is the path to the trained checkpoint (i.e., the
+path to train_logdir), ${PATH_TO_EVAL_DIR} is the directory to which evaluation
+events will be written, and ${PATH_TO_DATASET} is the directory in which the
+Cityscapes dataset resides.
+
+A local visualization job using `xception_65` can be run with the following
+command:
+
+```bash
+# From tensorflow/models/research/
+python deeplab/vis.py \
+ --logtostderr \
+ --vis_split="val" \
+ --model_variant="xception_65" \
+ --atrous_rates=6 \
+ --atrous_rates=12 \
+ --atrous_rates=18 \
+ --output_stride=16 \
+ --decoder_output_stride=4 \
+ --vis_crop_size=1025 \
+ --vis_crop_size=2049 \
+ --dataset="cityscapes" \
+ --colormap_type="cityscapes" \
+ --checkpoint_dir=${PATH_TO_CHECKPOINT} \
+ --vis_logdir=${PATH_TO_VIS_DIR} \
+ --dataset_dir=${PATH_TO_DATASET}
+```
+
+where ${PATH_TO_CHECKPOINT} is the path to the trained checkpoint (i.e., the
+path to train_logdir), ${PATH_TO_VIS_DIR} is the directory to which the
+visualization results will be written, and ${PATH_TO_DATASET} is the directory
+in which the Cityscapes dataset resides. Note that if the users would like to
+save the segmentation results for the evaluation server, set
+also_save_raw_predictions = True (see the sketch below).
+
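+As a sketch of the note above, this amounts to appending a single flag to the
+`vis.py` command (all other flags unchanged):
+
+```bash
+  --also_save_raw_predictions=true
+```
+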
+## Running Tensorboard
+
+Progress for training and evaluation jobs can be inspected using Tensorboard. If
+using the recommended directory structure, Tensorboard can be run using the
+following command:
+
+```bash
+tensorboard --logdir=${PATH_TO_LOG_DIRECTORY}
+```
+
+where `${PATH_TO_LOG_DIRECTORY}` points to the directory that contains the
+train, eval, and vis directories (e.g., the folder `train_on_train_set` in the
+above example). Please note it may take Tensorboard a couple of minutes to
+populate with data.
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/export_model.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/export_model.md"
new file mode 100644
index 00000000..c41649e6
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/export_model.md"
@@ -0,0 +1,23 @@
+# Export trained deeplab model to frozen inference graph
+
+After model training finishes, you could export it to a frozen TensorFlow
+inference graph proto. Your trained model checkpoint usually includes the
+following files:
+
+* model.ckpt-${CHECKPOINT_NUMBER}.data-00000-of-00001
+* model.ckpt-${CHECKPOINT_NUMBER}.index
+* model.ckpt-${CHECKPOINT_NUMBER}.meta
+
+After you have identified a candidate checkpoint to export, you can run the
+following command line to export it to a frozen graph:
+
+```bash
+# From tensorflow/models/research/
+# Assume all checkpoint files share the same path prefix `${CHECKPOINT_PATH}`.
+python deeplab/export_model.py \
+ --checkpoint_path=${CHECKPOINT_PATH} \
+ --export_path=${OUTPUT_DIR}/frozen_inference_graph.pb
+```
+
+Please also add the other model-specific flags that you used for training, such
+as `model_variant`, `add_image_level_feature`, etc.
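+
+For example, for an `xception_65` model trained with the flags used elsewhere in
+this documentation, the export command might look like the following sketch
+(adjust the flags and `num_classes` to match your own training configuration):
+
+```bash
+# From tensorflow/models/research/
+python deeplab/export_model.py \
+ --logtostderr \
+ --checkpoint_path=${CHECKPOINT_PATH} \
+ --export_path=${OUTPUT_DIR}/frozen_inference_graph.pb \
+ --model_variant="xception_65" \
+ --atrous_rates=6 \
+ --atrous_rates=12 \
+ --atrous_rates=18 \
+ --output_stride=16 \
+ --decoder_output_stride=4 \
+ --num_classes=21 \
+ --crop_size=513 \
+ --crop_size=513 \
+ --inference_scales=1.0
+```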
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/faq.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/faq.md"
new file mode 100644
index 00000000..c8162497
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/faq.md"
@@ -0,0 +1,82 @@
+# FAQ
+___
+Q1: What if I want to use other network backbones, such as ResNet [1], instead of only the provided ones (e.g., Xception)?
+
+A: The users could modify the provided core/feature_extractor.py to support more network backbones.
+___
+Q2: What if I want to train the model on other datasets?
+
+A: The users could modify the provided dataset/build_{cityscapes,voc2012}_data.py and dataset/segmentation_dataset.py to build their own dataset.
+___
+Q3: Where can I download the PASCAL VOC augmented training set?
+
+A: The PASCAL VOC augmented training set is provided by Bharath Hariharan et al. [2] Please refer to their [website](http://home.bharathh.info/pubs/codes/SBD/download.html) for details and consider citing their paper if using the dataset.
+___
+Q4: Why does the implementation not include DenseCRF [3]?
+
+A: We have not tried this. The interested users could take a look at Philipp Krähenbühl's [website](http://graphics.stanford.edu/projects/densecrf/) and [paper](https://arxiv.org/abs/1210.5644) for details.
+___
+Q5: What if I want to train the model and fine-tune the batch normalization parameters?
+
+A: If you only have limited resources at hand, we would suggest that you simply
+fine-tune from our provided checkpoints, whose batch-norm parameters have already
+been trained (i.e., train with a smaller learning rate, set
+`fine_tune_batch_norm = false`, and employ more training iterations since the
+learning rate is small). If you really would like to train the batch-norm
+parameters by yourself, we would suggest the following (see the sketch after
+this list):
+
+1. Set `output_stride = 16` or maybe even `32` (remember to change the flag
+`atrous_rates` accordingly, e.g., `atrous_rates = [3, 6, 9]` for
+`output_stride = 32`).
+
+2. Use as many GPUs as possible (change the flag `num_clones` in train.py) and
+set `train_batch_size` as large as possible.
+
+3. Adjust the `train_crop_size` in train.py. Maybe set it to be smaller, e.g.,
+513x513 (or even 321x321), so that you could use a larger batch size.
+
+4. Use a smaller network backbone, such as MobileNet-v2.
+
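+Putting these suggestions together, a training sketch might look like the
+following (hypothetical values; adapt `num_clones` and `train_batch_size` to the
+GPUs you actually have):
+
+```bash
+# From tensorflow/models/research/
+# Sketch: training with batch-norm fine-tuning enabled, per the tips above.
+python deeplab/train.py \
+ --logtostderr \
+ --train_split="train" \
+ --model_variant="xception_65" \
+ --atrous_rates=6 \
+ --atrous_rates=12 \
+ --atrous_rates=18 \
+ --output_stride=16 \
+ --decoder_output_stride=4 \
+ --train_crop_size=513 \
+ --train_crop_size=513 \
+ --num_clones=4 \
+ --train_batch_size=16 \
+ --fine_tune_batch_norm=true \
+ --training_number_of_steps=30000 \
+ --dataset="pascal_voc_seg" \
+ --tf_initial_checkpoint=${PATH_TO_INITIAL_CHECKPOINT} \
+ --train_logdir=${PATH_TO_TRAIN_DIR} \
+ --dataset_dir=${PATH_TO_DATASET}
+```
+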
+___
+Q6: How can I train the model asynchronously?
+
+A: In train.py, the users could set `num_replicas` (the number of machines used for training) and `num_ps_tasks` (we usually set `num_ps_tasks` = `num_replicas` / 2). See slim.deployment.model_deploy for more details.
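+
+As a sketch (hypothetical replica counts; the usual training flags still apply,
+and each worker additionally needs its own task/master settings, so please check
+the flag definitions in train.py and slim.deployment.model_deploy):
+
+```bash
+# From tensorflow/models/research/
+# Sketch: asynchronous training over 4 replicas with 2 parameter servers.
+python deeplab/train.py \
+ --logtostderr \
+ --num_replicas=4 \
+ --num_ps_tasks=2 \
+ --train_logdir=${PATH_TO_TRAIN_DIR} \
+ --dataset_dir=${PATH_TO_DATASET}
+```
+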
+___
+Q7: I could not reproduce the performance even with the provided checkpoints.
+
+A: Please try running
+
+```bash
+# Run the simple test with Xception_65 as network backbone.
+sh local_test.sh
+```
+
+or
+
+```bash
+# Run the simple test with MobileNet-v2 as network backbone.
+sh local_test_mobilenetv2.sh
+```
+
+First, make sure you can reproduce the results with our provided settings.
+After that, start making your changes one at a time, which helps isolate the problem.
+___
+Q8: What value of `eval_crop_size` should I use?
+
+A: Our model uses whole-image inference, meaning that we need to set `eval_crop_size` equal to `output_stride` * k + 1, where k is an integer chosen so that the resulting `eval_crop_size` is slightly larger than the largest
+image dimension in the dataset. For example, we have `eval_crop_size` = 513x513 for the PASCAL dataset, whose largest image dimension is 512 (with `output_stride` = 16, k = 32 gives 16 * 32 + 1 = 513). Similarly, we set `eval_crop_size` = 1025x2049 for Cityscapes images, whose
+dimensions are all equal to 1024x2048.
+___
+
+## References
+
+1. **Deep Residual Learning for Image Recognition**
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
+ [[link]](https://arxiv.org/abs/1512.03385), In CVPR, 2016.
+
+2. **Semantic Contours from Inverse Detectors**
+ Bharath Hariharan, Pablo Arbelaez, Lubomir Bourdev, Subhransu Maji, Jitendra Malik
+ [[link]](http://home.bharathh.info/pubs/codes/SBD/download.html), In ICCV, 2011.
+
+3. **Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials**
+ Philipp Krähenbühl, Vladlen Koltun
+ [[link]](http://graphics.stanford.edu/projects/densecrf/), In NIPS, 2011.
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/img/image1.jpg" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/img/image1.jpg"
new file mode 100644
index 00000000..939b6f9c
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/img/image1.jpg" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/img/image2.jpg" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/img/image2.jpg"
new file mode 100644
index 00000000..5ec1b8ac
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/img/image2.jpg" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/img/image3.jpg" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/img/image3.jpg"
new file mode 100644
index 00000000..d788e3dc
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/img/image3.jpg" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/img/image_info.txt" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/img/image_info.txt"
new file mode 100644
index 00000000..583d113e
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/img/image_info.txt"
@@ -0,0 +1,13 @@
+Image provenance:
+
+image1.jpg: Philippe Put,
+ https://www.flickr.com/photos/34547181@N00/14499172124
+
+image2.jpg: Peretz Partensky
+ https://www.flickr.com/photos/ifl/3926001309
+
+image3.jpg: Peter Harrison
+ https://www.flickr.com/photos/devcentre/392585679
+
+
+vis[1-3].png: Showing original image together with DeepLab segmentation map.
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/img/vis1.png" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/img/vis1.png"
new file mode 100644
index 00000000..41b8ecd8
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/img/vis1.png" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/img/vis2.png" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/img/vis2.png"
new file mode 100644
index 00000000..7fa7a4ca
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/img/vis2.png" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/img/vis3.png" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/img/vis3.png"
new file mode 100644
index 00000000..813b6340
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/img/vis3.png" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/installation.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/installation.md"
new file mode 100644
index 00000000..481015e9
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/installation.md"
@@ -0,0 +1,66 @@
+# Installation
+
+## Dependencies
+
+DeepLab depends on the following libraries:
+
+* Numpy
+* Pillow 1.0
+* tf Slim (which is included in the "tensorflow/models/research/" checkout)
+* Jupyter notebook
+* Matplotlib
+* Tensorflow
+
+For detailed steps to install Tensorflow, follow the [Tensorflow installation
+instructions](https://www.tensorflow.org/install/). A typical user can install
+Tensorflow using one of the following commands:
+
+```bash
+# For CPU
+pip install tensorflow
+# For GPU
+pip install tensorflow-gpu
+```
+
+The remaining libraries can be installed on Ubuntu 14.04 via apt-get and pip:
+
+```bash
+sudo apt-get install python-pil python-numpy
+sudo pip install jupyter
+sudo pip install matplotlib
+```
+
+## Add Libraries to PYTHONPATH
+
+When running locally, the tensorflow/models/research/ and slim directories
+should be appended to PYTHONPATH. This can be done by running the following from
+tensorflow/models/research/:
+
+```bash
+# From tensorflow/models/research/
+export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
+```
+
+Note: This command needs to be run from every new terminal you start. If you wish
+to avoid running it manually, you can add it as a new line to the end of your
+~/.bashrc file.
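+
+For example, assuming the repository was cloned to `$HOME/models` (a hypothetical
+location; adjust it to your own checkout), the export can be appended to
+~/.bashrc as follows:
+
+```bash
+# Hypothetical checkout location; adjust to where tensorflow/models lives.
+echo 'export PYTHONPATH=$PYTHONPATH:$HOME/models/research:$HOME/models/research/slim' >> ~/.bashrc
+source ~/.bashrc
+```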
+
+# Testing the Installation
+
+You can test whether you have successfully installed the Tensorflow DeepLab code
+by running the following commands:
+
+Quick test by running model_test.py:
+
+```bash
+# From tensorflow/models/research/
+python deeplab/model_test.py
+```
+
+Quick test that runs the whole code on the PASCAL VOC 2012 dataset:
+
+```bash
+# From tensorflow/models/research/deeplab
+sh local_test.sh
+```
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/model_zoo.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/model_zoo.md"
new file mode 100644
index 00000000..12e026dc
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/model_zoo.md"
@@ -0,0 +1,185 @@
+# TensorFlow DeepLab Model Zoo
+
+We provide deeplab models pretrained on several datasets, including (1) PASCAL
+VOC 2012, (2) Cityscapes, and (3) ADE20K, for reproducing our results, as well as
+some checkpoints that are only pretrained on ImageNet for training your own
+models.
+
+## DeepLab models trained on PASCAL VOC 2012
+
+Un-tar'ed directory includes:
+
+* a frozen inference graph (`frozen_inference_graph.pb`). All frozen inference
+ graphs use output stride of 8 and a single eval scale of 1.0. No left-right
+ flips are used, and MobileNet-v2 based models do not include the decoder
+ module.
+
+* a checkpoint (`model.ckpt.data-00000-of-00001`, `model.ckpt.index`)
+
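+For example, a checkpoint can be downloaded and un-tar'ed as follows (using the
+`xception65_coco_voc_trainaug` URL from the table below; the same pattern applies
+to the other checkpoints):
+
+```bash
+# Download and un-tar one of the released checkpoints.
+wget -nd -c "http://download.tensorflow.org/models/deeplabv3_pascal_train_aug_2018_01_04.tar.gz"
+tar -xf "deeplabv3_pascal_train_aug_2018_01_04.tar.gz"
+```
+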
+### Model details
+
+We provide several checkpoints that have been pretrained on the VOC 2012 train_aug
+set or the train_aug + trainval set. In the former case, one could train the model
+with a smaller batch size and frozen batch normalization when limited GPU memory
+is available, since we have already fine-tuned the batch normalization for you.
+In the latter case, one could directly evaluate the checkpoints on the VOC 2012 test
+set or use these checkpoints for the demo. Note that the *MobileNet-v2* based models
+do not employ ASPP and decoder modules, for fast computation.
+
+Checkpoint name | Network backbone | Pretrained dataset | ASPP | Decoder
+--------------------------- | :--------------: | :-----------------: | :---: | :-----:
+mobilenetv2_dm05_coco_voc_trainaug | MobileNet-v2 <br> Depth-Multiplier = 0.5 | MS-COCO <br> VOC 2012 train_aug set| N/A | N/A
+mobilenetv2_dm05_coco_voc_trainval | MobileNet-v2 <br> Depth-Multiplier = 0.5 | MS-COCO <br> VOC 2012 train_aug + trainval sets | N/A | N/A
+mobilenetv2_coco_voc_trainaug | MobileNet-v2 | MS-COCO <br> VOC 2012 train_aug set| N/A | N/A
+mobilenetv2_coco_voc_trainval | MobileNet-v2 | MS-COCO <br> VOC 2012 train_aug + trainval sets | N/A | N/A
+xception65_coco_voc_trainaug | Xception_65 | MS-COCO <br> VOC 2012 train_aug set| [6,12,18] for OS=16 <br> [12,24,36] for OS=8 | OS = 4
+xception65_coco_voc_trainval | Xception_65 | MS-COCO <br> VOC 2012 train_aug + trainval sets | [6,12,18] for OS=16 <br> [12,24,36] for OS=8 | OS = 4
+
+In the table, **OS** denotes output stride.
+
+Checkpoint name | Eval OS | Eval scales | Left-right Flip | Multiply-Adds | Runtime (sec) | PASCAL mIOU | File Size
+------------------------------------------------------------------------------------------------------------------------ | :-------: | :------------------------: | :-------------: | :------------------: | :------------: | :----------------------------: | :-------:
+[mobilenetv2_dm05_coco_voc_trainaug](http://download.tensorflow.org/models/deeplabv3_mnv2_dm05_pascal_trainaug_2018_10_01.tar.gz) | 16 | [1.0] | No | 0.88B | - | 70.19% (val) | 7.6MB
+[mobilenetv2_dm05_coco_voc_trainval](http://download.tensorflow.org/models/deeplabv3_mnv2_dm05_pascal_trainval_2018_10_01.tar.gz) | 8 | [1.0] | No | 2.84B | - | 71.83% (test) | 7.6MB
+[mobilenetv2_coco_voc_trainaug](http://download.tensorflow.org/models/deeplabv3_mnv2_pascal_train_aug_2018_01_29.tar.gz) | 16 <br> 8 | [1.0] <br> [0.5:0.25:1.75] | No <br> Yes | 2.75B <br> 152.59B | 0.1 <br> 26.9 | 75.32% (val) <br> 77.33% (val) | 23MB
+[mobilenetv2_coco_voc_trainval](http://download.tensorflow.org/models/deeplabv3_mnv2_pascal_trainval_2018_01_29.tar.gz) | 8 | [0.5:0.25:1.75] | Yes | 152.59B | 26.9 | 80.25% (**test**) | 23MB
+[xception65_coco_voc_trainaug](http://download.tensorflow.org/models/deeplabv3_pascal_train_aug_2018_01_04.tar.gz) | 16 <br> 8 | [1.0] <br> [0.5:0.25:1.75] | No <br> Yes | 54.17B <br> 3055.35B | 0.7 <br> 223.2 | 82.20% (val) <br> 83.58% (val) | 439MB
+[xception65_coco_voc_trainval](http://download.tensorflow.org/models/deeplabv3_pascal_trainval_2018_01_04.tar.gz) | 8 | [0.5:0.25:1.75] | Yes | 3055.35B | 223.2 | 87.80% (**test**) | 439MB
+
+In the table, we report both computation complexity (in terms of Multiply-Adds
+and CPU Runtime) and segmentation performance (in terms of mIOU) on the PASCAL
+VOC val or test set. The reported runtime is calculated by tfprof on a
+workstation with CPU E5-1650 v3 @ 3.50GHz and 32GB memory. Note that applying
+multi-scale inputs and left-right flips increases the segmentation performance
+but also significantly increases the computation and thus may not be suitable
+for real-time applications.
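+
+As a hedged sketch, multi-scale and left-right flipped evaluation can be enabled
+through eval.py, assuming it exposes `eval_scales` and `add_flipped_images` flags
+matching the arguments of model.predict_labels_multi_scale (shown here for PASCAL
+VOC 2012 with output stride 8):
+
+```bash
+# From tensorflow/models/research/
+# Sketch: multi-scale + flip evaluation (slow, but higher mIOU).
+python deeplab/eval.py \
+ --logtostderr \
+ --eval_split="val" \
+ --model_variant="xception_65" \
+ --atrous_rates=12 \
+ --atrous_rates=24 \
+ --atrous_rates=36 \
+ --output_stride=8 \
+ --decoder_output_stride=4 \
+ --eval_scales=0.5 \
+ --eval_scales=0.75 \
+ --eval_scales=1.0 \
+ --eval_scales=1.25 \
+ --eval_scales=1.5 \
+ --eval_scales=1.75 \
+ --add_flipped_images=true \
+ --eval_crop_size=513 \
+ --eval_crop_size=513 \
+ --dataset="pascal_voc_seg" \
+ --checkpoint_dir=${PATH_TO_CHECKPOINT} \
+ --eval_logdir=${PATH_TO_EVAL_DIR} \
+ --dataset_dir=${PATH_TO_DATASET}
+```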
+
+## DeepLab models trained on Cityscapes
+
+### Model details
+
+We provide several checkpoints that have been pretrained on the Cityscapes
+train_fine set. Note that the *MobileNet-v2* based model has been pretrained on
+the MS-COCO dataset and does not employ ASPP and decoder modules, for fast computation.
+
+Checkpoint name | Network backbone | Pretrained dataset | ASPP | Decoder
+------------------------------------- | :--------------: | :-------------------------------------: | :----------------------------------------------: | :-----:
+mobilenetv2_coco_cityscapes_trainfine | MobileNet-v2 | MS-COCO <br> Cityscapes train_fine set | N/A | N/A
+xception65_cityscapes_trainfine | Xception_65 | ImageNet <br> Cityscapes train_fine set | [6, 12, 18] for OS=16 <br> [12, 24, 36] for OS=8 | OS = 4
+xception71_dpc_cityscapes_trainfine | Xception_71 | ImageNet <br> MS-COCO <br> Cityscapes train_fine set | Dense Prediction Cell | OS = 4
+xception71_dpc_cityscapes_trainval | Xception_71 | ImageNet <br> MS-COCO <br> Cityscapes trainval_fine and coarse set | Dense Prediction Cell | OS = 4
+
+In the table, **OS** denotes output stride.
+
+Checkpoint name | Eval OS | Eval scales | Left-right Flip | Multiply-Adds | Runtime (sec) | Cityscapes mIOU | File Size
+-------------------------------------------------------------------------------------------------------------------------------- | :-------: | :-------------------------: | :-------------: | :-------------------: | :------------: | :----------------------------: | :-------:
+[mobilenetv2_coco_cityscapes_trainfine](http://download.tensorflow.org/models/deeplabv3_mnv2_cityscapes_train_2018_02_05.tar.gz) | 16 <br> 8 | [1.0] <br> [0.75:0.25:1.25] | No <br> Yes | 21.27B <br> 433.24B | 0.8 <br> 51.12 | 70.71% (val) <br> 73.57% (val) | 23MB
+[xception65_cityscapes_trainfine](http://download.tensorflow.org/models/deeplabv3_cityscapes_train_2018_02_06.tar.gz) | 16 <br> 8 | [1.0] <br> [0.75:0.25:1.25] | No <br> Yes | 418.64B <br> 8677.92B | 5.0 <br> 422.8 | 78.79% (val) <br> 80.42% (val) | 439MB
+[xception71_dpc_cityscapes_trainfine](http://download.tensorflow.org/models/deeplab_cityscapes_xception71_trainfine_2018_09_08.tar.gz) | 16 | [1.0] | No | 502.07B | - | 80.31% (val) | 445MB
+[xception71_dpc_cityscapes_trainval](http://download.tensorflow.org/models/deeplab_cityscapes_xception71_trainvalfine_2018_09_08.tar.gz) | 8 | [0.75:0.25:2] | Yes | - | - | 82.66% (**test**) | 446MB
+
+
+
+## DeepLab models trained on ADE20K
+
+### Model details
+
+We provide some checkpoints that have been pretrained on the ADE20K training set.
+Note that the model has only been pretrained on ImageNet, following the
+dataset rule.
+
+Checkpoint name | Network backbone | Pretrained dataset | ASPP | Decoder
+------------------------------------- | :--------------: | :-------------------------------------: | :----------------------------------------------: | :-----:
+xception65_ade20k_train | Xception_65 | ImageNet <br> ADE20K training set | [6, 12, 18] for OS=16 <br> [12, 24, 36] for OS=8 | OS = 4
+
+Checkpoint name | Eval OS | Eval scales | Left-right Flip | mIOU | Pixel-wise Accuracy | File Size
+------------------------------------- | :-------: | :-------------------------: | :-------------: | :-------------------: | :-------------------: | :-------:
+[xception65_ade20k_train](http://download.tensorflow.org/models/deeplabv3_xception_ade20k_train_2018_05_29.tar.gz) | 8 | [0.5:0.25:1.75] | Yes | 45.65% (val) | 82.52% (val) | 439MB
+
+## Checkpoints pretrained on ImageNet
+
+Un-tar'ed directory includes:
+
+* model checkpoint (`model.ckpt.data-00000-of-00001`, `model.ckpt.index`).
+
+### Model details
+
+We also provide some checkpoints that are pretrained on ImageNet and/or COCO (as
+post-fixed in the model name) so that one could use them for training one's own
+models.
+
+* mobilenet_v2: We refer the interested users to the TensorFlow open source
+ [MobileNet-V2](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet)
+ for details.
+
+* xception_{41,65,71}: We adapt the original Xception model to the task of
+ semantic segmentation with the following changes: (1) more layers, (2) all
+ max pooling operations are replaced by strided (atrous) separable
+ convolutions, and (3) extra batch-norm and ReLU after each 3x3 depthwise
+ convolution are added. We provide three Xception model variants with
+ different network depths.
+
+* resnet_v1_{50,101}_beta: We modify the original ResNet-101 [10], similar to
+ PSPNet [11] by replacing the first 7x7 convolution with three 3x3
+ convolutions. See resnet_v1_beta.py for more details.
+
+Model name | File Size
+-------------------------------------------------------------------------------------- | :-------:
+[xception_41_imagenet](http://download.tensorflow.org/models/xception_41_2018_05_09.tar.gz ) | 288MB
+[xception_65_imagenet](http://download.tensorflow.org/models/deeplabv3_xception_2018_01_04.tar.gz) | 447MB
+[xception_65_imagenet_coco](http://download.tensorflow.org/models/xception_65_coco_pretrained_2018_10_02.tar.gz) | 292MB
+[xception_71_imagenet](http://download.tensorflow.org/models/xception_71_2018_05_09.tar.gz ) | 474MB
+[resnet_v1_50_beta_imagenet](http://download.tensorflow.org/models/resnet_v1_50_2018_05_04.tar.gz) | 274MB
+[resnet_v1_101_beta_imagenet](http://download.tensorflow.org/models/resnet_v1_101_2018_05_04.tar.gz) | 477MB
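+
+As a sketch, one of these backbones can be fetched and then used to initialize
+training via `tf_initial_checkpoint` (the extracted directory name below is a
+placeholder; use whatever the tarball actually contains):
+
+```bash
+# Fetch the xception_65 ImageNet-pretrained checkpoint listed above.
+wget -nd -c "http://download.tensorflow.org/models/deeplabv3_xception_2018_01_04.tar.gz"
+tar -xf "deeplabv3_xception_2018_01_04.tar.gz"
+# Then point training at the extracted checkpoint prefix, e.g.:
+#   python deeplab/train.py ... --tf_initial_checkpoint=${PATH_TO_EXTRACTED_DIR}/model.ckpt
+```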
+
+## References
+
+1. **Mobilenets: Efficient convolutional neural networks for mobile vision applications**
+ Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam
+ [[link]](https://arxiv.org/abs/1704.04861). arXiv:1704.04861, 2017.
+
+2. **Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation**
+ Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen
+ [[link]](https://arxiv.org/abs/1801.04381). arXiv:1801.04381, 2018.
+
+3. **Xception: Deep Learning with Depthwise Separable Convolutions**
+ François Chollet
+ [[link]](https://arxiv.org/abs/1610.02357). In the Proc. of CVPR, 2017.
+
+4. **Deformable Convolutional Networks -- COCO Detection and Segmentation Challenge 2017 Entry**
+ Haozhi Qi, Zheng Zhang, Bin Xiao, Han Hu, Bowen Cheng, Yichen Wei, Jifeng Dai
+ [[link]](http://presentations.cocodataset.org/COCO17-Detect-MSRA.pdf). ICCV COCO Challenge
+ Workshop, 2017.
+
+5. **The Pascal Visual Object Classes Challenge: A Retrospective**
+ Mark Everingham, S. M. Ali Eslami, Luc Van Gool, Christopher K. I. Williams, John M. Winn, Andrew Zisserman
+ [[link]](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/). IJCV, 2014.
+
+6. **Semantic Contours from Inverse Detectors**
+ Bharath Hariharan, Pablo Arbelaez, Lubomir Bourdev, Subhransu Maji, Jitendra Malik
+ [[link]](http://home.bharathh.info/pubs/codes/SBD/download.html). In the Proc. of ICCV, 2011.
+
+7. **The Cityscapes Dataset for Semantic Urban Scene Understanding**
+ Cordts, Marius, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, Bernt Schiele.
+ [[link]](https://www.cityscapes-dataset.com/). In the Proc. of CVPR, 2016.
+
+8. **Microsoft COCO: Common Objects in Context**
+ Tsung-Yi Lin, Michael Maire, Serge Belongie, Lubomir Bourdev, Ross Girshick, James Hays, Pietro Perona, Deva Ramanan, C. Lawrence Zitnick, Piotr Dollar
+ [[link]](http://cocodataset.org/). In the Proc. of ECCV, 2014.
+
+9. **ImageNet Large Scale Visual Recognition Challenge**
+ Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, Li Fei-Fei
+ [[link]](http://www.image-net.org/). IJCV, 2015.
+
+10. **Deep Residual Learning for Image Recognition**
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
+ [[link]](https://arxiv.org/abs/1512.03385). CVPR, 2016.
+
+11. **Pyramid Scene Parsing Network**
+ Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, Jiaya Jia
+ [[link]](https://arxiv.org/abs/1612.01105). In CVPR, 2017.
+
+12. **Scene Parsing through ADE20K Dataset**
+ Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, Antonio Torralba
+ [[link]](http://groups.csail.mit.edu/vision/datasets/ADE20K/). In CVPR,
+ 2017.
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/pascal.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/pascal.md"
new file mode 100644
index 00000000..de627182
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/g3doc/pascal.md"
@@ -0,0 +1,164 @@
+# Running DeepLab on PASCAL VOC 2012 Semantic Segmentation Dataset
+
+This page walks through the steps required to run DeepLab on PASCAL VOC 2012 on
+a local machine.
+
+## Download dataset and convert to TFRecord
+
+We have prepared the script (under the folder `datasets`) to download and
+convert the PASCAL VOC 2012 semantic segmentation dataset to TFRecord.
+
+```bash
+# From the tensorflow/models/research/deeplab/datasets directory.
+sh download_and_convert_voc2012.sh
+```
+
+The converted dataset will be saved at
+`./deeplab/datasets/pascal_voc_seg/tfrecord`.
+
+## Recommended Directory Structure for Training and Evaluation
+
+```
++ datasets
+ + pascal_voc_seg
+ + VOCdevkit
+ + VOC2012
+ + JPEGImages
+ + SegmentationClass
+ + tfrecord
+ + exp
+ + train_on_train_set
+ + train
+ + eval
+ + vis
+```
+
+where the folder `train_on_train_set` stores the train/eval/vis events and
+results (when training DeepLab on the PASCAL VOC 2012 train set).
+
+## Running the train/eval/vis jobs
+
+A local training job using `xception_65` can be run with the following command:
+
+```bash
+# From tensorflow/models/research/
+python deeplab/train.py \
+ --logtostderr \
+ --training_number_of_steps=30000 \
+ --train_split="train" \
+ --model_variant="xception_65" \
+ --atrous_rates=6 \
+ --atrous_rates=12 \
+ --atrous_rates=18 \
+ --output_stride=16 \
+ --decoder_output_stride=4 \
+ --train_crop_size=513 \
+ --train_crop_size=513 \
+ --train_batch_size=1 \
+ --dataset="pascal_voc_seg" \
+ --tf_initial_checkpoint=${PATH_TO_INITIAL_CHECKPOINT} \
+ --train_logdir=${PATH_TO_TRAIN_DIR} \
+ --dataset_dir=${PATH_TO_DATASET}
+```
+
+where ${PATH_TO_INITIAL_CHECKPOINT} is the path to the initial checkpoint
+(usually an ImageNet pretrained checkpoint), ${PATH_TO_TRAIN_DIR} is the
+directory in which training checkpoints and events will be written, and
+${PATH_TO_DATASET} is the directory in which the PASCAL VOC 2012 dataset
+resides.
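+
+For instance, with the recommended directory structure above, these variables
+might be set as follows (hypothetical paths, relative to
+tensorflow/models/research/, with the initial checkpoint un-tar'ed into an
+`init_models` folder):
+
+```bash
+# Hypothetical paths following the recommended directory structure above.
+PATH_TO_INITIAL_CHECKPOINT="deeplab/datasets/pascal_voc_seg/init_models/deeplabv3_pascal_train_aug/model.ckpt"
+PATH_TO_TRAIN_DIR="deeplab/datasets/pascal_voc_seg/exp/train_on_train_set/train"
+PATH_TO_DATASET="deeplab/datasets/pascal_voc_seg/tfrecord"
+```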
+
+**Note that for {train,eval,vis}.py:**
+
+1. In order to reproduce our results, one needs to use a large batch size (> 12)
+ and set fine_tune_batch_norm = True. Here, we simply use a small batch size
+ during training for the purpose of demonstration. If the users have limited
+ GPU memory at hand, please fine-tune from our provided checkpoints whose
+ batch norm parameters have been trained, and use a smaller learning rate with
+ fine_tune_batch_norm = False.
+
+2. The users should change atrous_rates from [6, 12, 18] to [12, 24, 36] if
+ setting output_stride=8.
+
+3. Users can skip the flag `decoder_output_stride` if they do not want
+ to use the decoder structure.
+
+A local evaluation job using `xception_65` can be run with the following
+command:
+
+```bash
+# From tensorflow/models/research/
+python deeplab/eval.py \
+ --logtostderr \
+ --eval_split="val" \
+ --model_variant="xception_65" \
+ --atrous_rates=6 \
+ --atrous_rates=12 \
+ --atrous_rates=18 \
+ --output_stride=16 \
+ --decoder_output_stride=4 \
+ --eval_crop_size=513 \
+ --eval_crop_size=513 \
+ --dataset="pascal_voc_seg" \
+ --checkpoint_dir=${PATH_TO_CHECKPOINT} \
+ --eval_logdir=${PATH_TO_EVAL_DIR} \
+ --dataset_dir=${PATH_TO_DATASET}
+```
+
+where ${PATH_TO_CHECKPOINT} is the path to the trained checkpoint (i.e., the
+path to train_logdir), ${PATH_TO_EVAL_DIR} is the directory in which evaluation
+events will be written, and ${PATH_TO_DATASET} is the directory in which the
+PASCAL VOC 2012 dataset resides.
+
+A local visualization job using `xception_65` can be run with the following
+command:
+
+```bash
+# From tensorflow/models/research/
+python deeplab/vis.py \
+ --logtostderr \
+ --vis_split="val" \
+ --model_variant="xception_65" \
+ --atrous_rates=6 \
+ --atrous_rates=12 \
+ --atrous_rates=18 \
+ --output_stride=16 \
+ --decoder_output_stride=4 \
+ --vis_crop_size=513 \
+ --vis_crop_size=513 \
+ --dataset="pascal_voc_seg" \
+ --checkpoint_dir=${PATH_TO_CHECKPOINT} \
+ --vis_logdir=${PATH_TO_VIS_DIR} \
+ --dataset_dir=${PATH_TO_DATASET}
+```
+
+where ${PATH_TO_CHECKPOINT} is the path to the trained checkpoint (i.e., the
+path to train_logdir), ${PATH_TO_VIS_DIR} is the directory in which the
+visualization results will be written, and ${PATH_TO_DATASET} is the directory
+in which the PASCAL VOC 2012 dataset resides. Note that if you would like to save
+the segmentation results for the evaluation server, also set
+also_save_raw_predictions = True.
+
+## Running Tensorboard
+
+Progress for training and evaluation jobs can be inspected using Tensorboard. If
+using the recommended directory structure, Tensorboard can be run using the
+following command:
+
+```bash
+tensorboard --logdir=${PATH_TO_LOG_DIRECTORY}
+```
+
+where `${PATH_TO_LOG_DIRECTORY}` points to the directory that contains the
+train, eval, and vis directories (e.g., the folder `train_on_train_set` in the
+above example). Please note it may take Tensorboard a couple of minutes to populate
+with data.
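+
+For example, with the recommended directory structure above, this could be
+(a hypothetical path, relative to tensorflow/models/research/):
+
+```bash
+tensorboard --logdir=deeplab/datasets/pascal_voc_seg/exp/train_on_train_set
+```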
+
+## Example
+
+We provide a script to run the {train,eval,vis,export_model}.py on the PASCAL VOC
+2012 dataset as an example. See the code in local_test.sh for details.
+
+```bash
+# From tensorflow/models/research/deeplab
+sh local_test.sh
+```
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/input_preprocess.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/input_preprocess.py"
new file mode 100644
index 00000000..135e24cf
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/input_preprocess.py"
@@ -0,0 +1,138 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Prepares the data used for DeepLab training/evaluation."""
+import tensorflow as tf
+from deeplab.core import feature_extractor
+from deeplab.core import preprocess_utils
+
+
+# The probability of flipping the images and labels
+# left-right during training
+_PROB_OF_FLIP = 0.5
+
+
+def preprocess_image_and_label(image,
+ label,
+ crop_height,
+ crop_width,
+ min_resize_value=None,
+ max_resize_value=None,
+ resize_factor=None,
+ min_scale_factor=1.,
+ max_scale_factor=1.,
+ scale_factor_step_size=0,
+ ignore_label=255,
+ is_training=True,
+ model_variant=None):
+ """Preprocesses the image and label.
+
+ Args:
+ image: Input image.
+ label: Ground truth annotation label.
+ crop_height: The height value used to crop the image and label.
+ crop_width: The width value used to crop the image and label.
+ min_resize_value: Desired size of the smaller image side.
+ max_resize_value: Maximum allowed size of the larger image side.
+ resize_factor: Resized dimensions are multiple of factor plus one.
+ min_scale_factor: Minimum scale factor value.
+ max_scale_factor: Maximum scale factor value.
+ scale_factor_step_size: The step size from min scale factor to max scale
+ factor. The input is randomly scaled based on the value of
+ (min_scale_factor, max_scale_factor, scale_factor_step_size).
+ ignore_label: The label value which will be ignored for training and
+ evaluation.
+ is_training: If the preprocessing is used for training or not.
+ model_variant: Model variant (string) for choosing how to mean-subtract the
+ images. See feature_extractor.network_map for supported model variants.
+
+ Returns:
+ original_image: Original image (could be resized).
+ processed_image: Preprocessed image.
+ label: Preprocessed ground truth segmentation label.
+
+ Raises:
+ ValueError: Ground truth label not provided during training.
+ """
+ if is_training and label is None:
+ raise ValueError('During training, label must be provided.')
+ if model_variant is None:
+ tf.logging.warning('Default mean-subtraction is performed. Please specify '
+ 'a model_variant. See feature_extractor.network_map for '
+ 'supported model variants.')
+
+ # Keep reference to original image.
+ original_image = image
+
+ processed_image = tf.cast(image, tf.float32)
+
+ if label is not None:
+ label = tf.cast(label, tf.int32)
+
+ # Resize image and label to the desired range.
+ if min_resize_value is not None or max_resize_value is not None:
+ [processed_image, label] = (
+ preprocess_utils.resize_to_range(
+ image=processed_image,
+ label=label,
+ min_size=min_resize_value,
+ max_size=max_resize_value,
+ factor=resize_factor,
+ align_corners=True))
+ # The `original_image` becomes the resized image.
+ original_image = tf.identity(processed_image)
+
+ # Data augmentation by randomly scaling the inputs.
+ if is_training:
+ scale = preprocess_utils.get_random_scale(
+ min_scale_factor, max_scale_factor, scale_factor_step_size)
+ processed_image, label = preprocess_utils.randomly_scale_image_and_label(
+ processed_image, label, scale)
+ processed_image.set_shape([None, None, 3])
+
+ # Pad image and label to have dimensions >= [crop_height, crop_width]
+ image_shape = tf.shape(processed_image)
+ image_height = image_shape[0]
+ image_width = image_shape[1]
+
+ target_height = image_height + tf.maximum(crop_height - image_height, 0)
+ target_width = image_width + tf.maximum(crop_width - image_width, 0)
+
+ # Pad image with mean pixel value.
+ mean_pixel = tf.reshape(
+ feature_extractor.mean_pixel(model_variant), [1, 1, 3])
+ processed_image = preprocess_utils.pad_to_bounding_box(
+ processed_image, 0, 0, target_height, target_width, mean_pixel)
+
+ if label is not None:
+ label = preprocess_utils.pad_to_bounding_box(
+ label, 0, 0, target_height, target_width, ignore_label)
+
+ # Randomly crop the image and label.
+ if is_training and label is not None:
+ processed_image, label = preprocess_utils.random_crop(
+ [processed_image, label], crop_height, crop_width)
+
+ processed_image.set_shape([crop_height, crop_width, 3])
+
+ if label is not None:
+ label.set_shape([crop_height, crop_width, 1])
+
+ if is_training:
+ # Randomly left-right flip the image and label.
+ processed_image, label, _ = preprocess_utils.flip_dim(
+ [processed_image, label], _PROB_OF_FLIP, dim=1)
+
+ return original_image, processed_image, label
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/local_test.sh" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/local_test.sh"
new file mode 100644
index 00000000..65c827af
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/local_test.sh"
@@ -0,0 +1,150 @@
+#!/bin/bash
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+#
+# This script is used to run local test on PASCAL VOC 2012. Users could also
+# modify from this script for their use case.
+#
+# Usage:
+# # From the tensorflow/models/research/deeplab directory.
+# sh ./local_test.sh
+#
+#
+
+# Exit immediately if a command exits with a non-zero status.
+set -e
+
+# Move one-level up to tensorflow/models/research directory.
+cd ..
+
+# Update PYTHONPATH.
+export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
+
+# Set up the working environment.
+CURRENT_DIR=$(pwd)
+WORK_DIR="${CURRENT_DIR}/deeplab"
+
+# Run model_test first to make sure the PYTHONPATH is correctly set.
+python "${WORK_DIR}"/model_test.py -v
+
+# Go to datasets folder and download PASCAL VOC 2012 segmentation dataset.
+DATASET_DIR="datasets"
+cd "${WORK_DIR}/${DATASET_DIR}"
+sh download_and_convert_voc2012.sh
+
+# Go back to original directory.
+cd "${CURRENT_DIR}"
+
+# Set up the working directories.
+PASCAL_FOLDER="pascal_voc_seg"
+EXP_FOLDER="exp/train_on_trainval_set"
+INIT_FOLDER="${WORK_DIR}/${DATASET_DIR}/${PASCAL_FOLDER}/init_models"
+TRAIN_LOGDIR="${WORK_DIR}/${DATASET_DIR}/${PASCAL_FOLDER}/${EXP_FOLDER}/train"
+EVAL_LOGDIR="${WORK_DIR}/${DATASET_DIR}/${PASCAL_FOLDER}/${EXP_FOLDER}/eval"
+VIS_LOGDIR="${WORK_DIR}/${DATASET_DIR}/${PASCAL_FOLDER}/${EXP_FOLDER}/vis"
+EXPORT_DIR="${WORK_DIR}/${DATASET_DIR}/${PASCAL_FOLDER}/${EXP_FOLDER}/export"
+mkdir -p "${INIT_FOLDER}"
+mkdir -p "${TRAIN_LOGDIR}"
+mkdir -p "${EVAL_LOGDIR}"
+mkdir -p "${VIS_LOGDIR}"
+mkdir -p "${EXPORT_DIR}"
+
+# Copy locally the trained checkpoint as the initial checkpoint.
+TF_INIT_ROOT="http://download.tensorflow.org/models"
+TF_INIT_CKPT="deeplabv3_pascal_train_aug_2018_01_04.tar.gz"
+cd "${INIT_FOLDER}"
+wget -nd -c "${TF_INIT_ROOT}/${TF_INIT_CKPT}"
+tar -xf "${TF_INIT_CKPT}"
+cd "${CURRENT_DIR}"
+
+PASCAL_DATASET="${WORK_DIR}/${DATASET_DIR}/${PASCAL_FOLDER}/tfrecord"
+
+# Train 10 iterations.
+NUM_ITERATIONS=10
+python "${WORK_DIR}"/train.py \
+ --logtostderr \
+ --train_split="trainval" \
+ --model_variant="xception_65" \
+ --atrous_rates=6 \
+ --atrous_rates=12 \
+ --atrous_rates=18 \
+ --output_stride=16 \
+ --decoder_output_stride=4 \
+ --train_crop_size=513 \
+ --train_crop_size=513 \
+ --train_batch_size=4 \
+ --training_number_of_steps="${NUM_ITERATIONS}" \
+ --fine_tune_batch_norm=true \
+ --tf_initial_checkpoint="${INIT_FOLDER}/deeplabv3_pascal_train_aug/model.ckpt" \
+ --train_logdir="${TRAIN_LOGDIR}" \
+ --dataset_dir="${PASCAL_DATASET}"
+
+# Run evaluation. This performs eval over the full val split (1449 images) and
+# will take a while.
+# Using the provided checkpoint, one should expect mIOU=82.20%.
+python "${WORK_DIR}"/eval.py \
+ --logtostderr \
+ --eval_split="val" \
+ --model_variant="xception_65" \
+ --atrous_rates=6 \
+ --atrous_rates=12 \
+ --atrous_rates=18 \
+ --output_stride=16 \
+ --decoder_output_stride=4 \
+ --eval_crop_size=513 \
+ --eval_crop_size=513 \
+ --checkpoint_dir="${TRAIN_LOGDIR}" \
+ --eval_logdir="${EVAL_LOGDIR}" \
+ --dataset_dir="${PASCAL_DATASET}" \
+ --max_number_of_evaluations=1
+
+# Visualize the results.
+python "${WORK_DIR}"/vis.py \
+ --logtostderr \
+ --vis_split="val" \
+ --model_variant="xception_65" \
+ --atrous_rates=6 \
+ --atrous_rates=12 \
+ --atrous_rates=18 \
+ --output_stride=16 \
+ --decoder_output_stride=4 \
+ --vis_crop_size=513 \
+ --vis_crop_size=513 \
+ --checkpoint_dir="${TRAIN_LOGDIR}" \
+ --vis_logdir="${VIS_LOGDIR}" \
+ --dataset_dir="${PASCAL_DATASET}" \
+ --max_number_of_iterations=1
+
+# Export the trained checkpoint.
+CKPT_PATH="${TRAIN_LOGDIR}/model.ckpt-${NUM_ITERATIONS}"
+EXPORT_PATH="${EXPORT_DIR}/frozen_inference_graph.pb"
+
+python "${WORK_DIR}"/export_model.py \
+ --logtostderr \
+ --checkpoint_path="${CKPT_PATH}" \
+ --export_path="${EXPORT_PATH}" \
+ --model_variant="xception_65" \
+ --atrous_rates=6 \
+ --atrous_rates=12 \
+ --atrous_rates=18 \
+ --output_stride=16 \
+ --decoder_output_stride=4 \
+ --num_classes=21 \
+ --crop_size=513 \
+ --crop_size=513 \
+ --inference_scales=1.0
+
+# Run inference with the exported checkpoint.
+# Please refer to the provided deeplab_demo.ipynb for an example.
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/local_test_mobilenetv2.sh" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/local_test_mobilenetv2.sh"
new file mode 100644
index 00000000..1f0bc846
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/local_test_mobilenetv2.sh"
@@ -0,0 +1,132 @@
+#!/bin/bash
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+#
+# This script is used to run local test on PASCAL VOC 2012 using MobileNet-v2.
+# Users could also modify from this script for their use case.
+#
+# Usage:
+# # From the tensorflow/models/research/deeplab directory.
+# sh ./local_test_mobilenetv2.sh
+#
+#
+
+# Exit immediately if a command exits with a non-zero status.
+set -e
+
+# Move one-level up to tensorflow/models/research directory.
+cd ..
+
+# Update PYTHONPATH.
+export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
+
+# Set up the working environment.
+CURRENT_DIR=$(pwd)
+WORK_DIR="${CURRENT_DIR}/deeplab"
+
+# Run model_test first to make sure the PYTHONPATH is correctly set.
+python "${WORK_DIR}"/model_test.py -v
+
+# Go to datasets folder and download PASCAL VOC 2012 segmentation dataset.
+DATASET_DIR="datasets"
+cd "${WORK_DIR}/${DATASET_DIR}"
+sh download_and_convert_voc2012.sh
+
+# Go back to original directory.
+cd "${CURRENT_DIR}"
+
+# Set up the working directories.
+PASCAL_FOLDER="pascal_voc_seg"
+EXP_FOLDER="exp/train_on_trainval_set_mobilenetv2"
+INIT_FOLDER="${WORK_DIR}/${DATASET_DIR}/${PASCAL_FOLDER}/init_models"
+TRAIN_LOGDIR="${WORK_DIR}/${DATASET_DIR}/${PASCAL_FOLDER}/${EXP_FOLDER}/train"
+EVAL_LOGDIR="${WORK_DIR}/${DATASET_DIR}/${PASCAL_FOLDER}/${EXP_FOLDER}/eval"
+VIS_LOGDIR="${WORK_DIR}/${DATASET_DIR}/${PASCAL_FOLDER}/${EXP_FOLDER}/vis"
+EXPORT_DIR="${WORK_DIR}/${DATASET_DIR}/${PASCAL_FOLDER}/${EXP_FOLDER}/export"
+mkdir -p "${INIT_FOLDER}"
+mkdir -p "${TRAIN_LOGDIR}"
+mkdir -p "${EVAL_LOGDIR}"
+mkdir -p "${VIS_LOGDIR}"
+mkdir -p "${EXPORT_DIR}"
+
+# Copy locally the trained checkpoint as the initial checkpoint.
+TF_INIT_ROOT="http://download.tensorflow.org/models"
+CKPT_NAME="deeplabv3_mnv2_pascal_train_aug"
+TF_INIT_CKPT="${CKPT_NAME}_2018_01_29.tar.gz"
+cd "${INIT_FOLDER}"
+wget -nd -c "${TF_INIT_ROOT}/${TF_INIT_CKPT}"
+tar -xf "${TF_INIT_CKPT}"
+cd "${CURRENT_DIR}"
+
+PASCAL_DATASET="${WORK_DIR}/${DATASET_DIR}/${PASCAL_FOLDER}/tfrecord"
+
+# Train 10 iterations.
+NUM_ITERATIONS=10
+python "${WORK_DIR}"/train.py \
+ --logtostderr \
+ --train_split="trainval" \
+ --model_variant="mobilenet_v2" \
+ --output_stride=16 \
+ --train_crop_size=513 \
+ --train_crop_size=513 \
+ --train_batch_size=4 \
+ --training_number_of_steps="${NUM_ITERATIONS}" \
+ --fine_tune_batch_norm=true \
+ --tf_initial_checkpoint="${INIT_FOLDER}/${CKPT_NAME}/model.ckpt-30000" \
+ --train_logdir="${TRAIN_LOGDIR}" \
+ --dataset_dir="${PASCAL_DATASET}"
+
+# Run evaluation. This performs eval over the full val split (1449 images) and
+# will take a while.
+# Using the provided checkpoint, one should expect mIOU=75.34%.
+python "${WORK_DIR}"/eval.py \
+ --logtostderr \
+ --eval_split="val" \
+ --model_variant="mobilenet_v2" \
+ --eval_crop_size=513 \
+ --eval_crop_size=513 \
+ --checkpoint_dir="${TRAIN_LOGDIR}" \
+ --eval_logdir="${EVAL_LOGDIR}" \
+ --dataset_dir="${PASCAL_DATASET}" \
+ --max_number_of_evaluations=1
+
+# Visualize the results.
+python "${WORK_DIR}"/vis.py \
+ --logtostderr \
+ --vis_split="val" \
+ --model_variant="mobilenet_v2" \
+ --vis_crop_size=513 \
+ --vis_crop_size=513 \
+ --checkpoint_dir="${TRAIN_LOGDIR}" \
+ --vis_logdir="${VIS_LOGDIR}" \
+ --dataset_dir="${PASCAL_DATASET}" \
+ --max_number_of_iterations=1
+
+# Export the trained checkpoint.
+CKPT_PATH="${TRAIN_LOGDIR}/model.ckpt-${NUM_ITERATIONS}"
+EXPORT_PATH="${EXPORT_DIR}/frozen_inference_graph.pb"
+
+python "${WORK_DIR}"/export_model.py \
+ --logtostderr \
+ --checkpoint_path="${CKPT_PATH}" \
+ --export_path="${EXPORT_PATH}" \
+ --model_variant="mobilenet_v2" \
+ --num_classes=21 \
+ --crop_size=513 \
+ --crop_size=513 \
+ --inference_scales=1.0
+
+# Run inference with the exported checkpoint.
+# Please refer to the provided deeplab_demo.ipynb for an example.
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/model.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/model.py"
new file mode 100644
index 00000000..8a687395
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/model.py"
@@ -0,0 +1,709 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+r"""Provides DeepLab model definition and helper functions.
+
+DeepLab is a deep learning system for semantic image segmentation with
+the following features:
+
+(1) Atrous convolution to explicitly control the resolution at which
+feature responses are computed within Deep Convolutional Neural Networks.
+
+(2) Atrous spatial pyramid pooling (ASPP) to robustly segment objects at
+multiple scales with filters at multiple sampling rates and effective
+fields-of-views.
+
+(3) ASPP module augmented with image-level feature and batch normalization.
+
+(4) A simple yet effective decoder module to recover the object boundaries.
+
+See the following papers for more details:
+
+"Encoder-Decoder with Atrous Separable Convolution for Semantic Image
+Segmentation"
+Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, Hartwig Adam.
+(https://arxiv.org/abs/1802.02611)
+
+"Rethinking Atrous Convolution for Semantic Image Segmentation,"
+Liang-Chieh Chen, George Papandreou, Florian Schroff, Hartwig Adam
+(https://arxiv.org/abs/1706.05587)
+
+"DeepLab: Semantic Image Segmentation with Deep Convolutional Nets,
+Atrous Convolution, and Fully Connected CRFs",
+Liang-Chieh Chen*, George Papandreou*, Iasonas Kokkinos, Kevin Murphy,
+Alan L Yuille (* equal contribution)
+(https://arxiv.org/abs/1606.00915)
+
+"Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected
+CRFs"
+Liang-Chieh Chen*, George Papandreou*, Iasonas Kokkinos, Kevin Murphy,
+Alan L. Yuille (* equal contribution)
+(https://arxiv.org/abs/1412.7062)
+"""
+import tensorflow as tf
+from deeplab.core import dense_prediction_cell
+from deeplab.core import feature_extractor
+from deeplab.core import utils
+
+
+slim = tf.contrib.slim
+
+LOGITS_SCOPE_NAME = 'logits'
+MERGED_LOGITS_SCOPE = 'merged_logits'
+IMAGE_POOLING_SCOPE = 'image_pooling'
+ASPP_SCOPE = 'aspp'
+CONCAT_PROJECTION_SCOPE = 'concat_projection'
+DECODER_SCOPE = 'decoder'
+META_ARCHITECTURE_SCOPE = 'meta_architecture'
+
+scale_dimension = utils.scale_dimension
+split_separable_conv2d = utils.split_separable_conv2d
+
+def get_extra_layer_scopes(last_layers_contain_logits_only=False):
+ """Gets the scopes for extra layers.
+
+ Args:
+ last_layers_contain_logits_only: Boolean, True if only consider logits as
+ the last layer (i.e., exclude ASPP module, decoder module and so on)
+
+ Returns:
+ A list of scopes for extra layers.
+ """
+ if last_layers_contain_logits_only:
+ return [LOGITS_SCOPE_NAME]
+ else:
+ return [
+ LOGITS_SCOPE_NAME,
+ IMAGE_POOLING_SCOPE,
+ ASPP_SCOPE,
+ CONCAT_PROJECTION_SCOPE,
+ DECODER_SCOPE,
+ META_ARCHITECTURE_SCOPE,
+ ]
+
+
+def predict_labels_multi_scale(images,
+ model_options,
+ eval_scales=(1.0,),
+ add_flipped_images=False):
+ """Predicts segmentation labels.
+
+ Args:
+ images: A tensor of size [batch, height, width, channels].
+ model_options: A ModelOptions instance to configure models.
+ eval_scales: The scales to resize images for evaluation.
+ add_flipped_images: Add flipped images for evaluation or not.
+
+ Returns:
+ A dictionary with keys specifying the output_type (e.g., semantic
+ prediction) and values storing Tensors representing predictions (argmax
+ over channels). Each prediction has size [batch, height, width].
+ """
+ outputs_to_predictions = {
+ output: []
+ for output in model_options.outputs_to_num_classes
+ }
+
+ for i, image_scale in enumerate(eval_scales):
+ with tf.variable_scope(tf.get_variable_scope(), reuse=True if i else None):
+ outputs_to_scales_to_logits = multi_scale_logits(
+ images,
+ model_options=model_options,
+ image_pyramid=[image_scale],
+ is_training=False,
+ fine_tune_batch_norm=False)
+
+ if add_flipped_images:
+ with tf.variable_scope(tf.get_variable_scope(), reuse=True):
+ outputs_to_scales_to_logits_reversed = multi_scale_logits(
+ tf.reverse_v2(images, [2]),
+ model_options=model_options,
+ image_pyramid=[image_scale],
+ is_training=False,
+ fine_tune_batch_norm=False)
+
+ for output in sorted(outputs_to_scales_to_logits):
+ scales_to_logits = outputs_to_scales_to_logits[output]
+ logits = tf.image.resize_bilinear(
+ scales_to_logits[MERGED_LOGITS_SCOPE],
+ tf.shape(images)[1:3],
+ align_corners=True)
+ outputs_to_predictions[output].append(
+ tf.expand_dims(tf.nn.softmax(logits), 4))
+
+ if add_flipped_images:
+ scales_to_logits_reversed = (
+ outputs_to_scales_to_logits_reversed[output])
+ logits_reversed = tf.image.resize_bilinear(
+ tf.reverse_v2(scales_to_logits_reversed[MERGED_LOGITS_SCOPE], [2]),
+ tf.shape(images)[1:3],
+ align_corners=True)
+ outputs_to_predictions[output].append(
+ tf.expand_dims(tf.nn.softmax(logits_reversed), 4))
+
+ for output in sorted(outputs_to_predictions):
+ predictions = outputs_to_predictions[output]
+ # Compute average prediction across different scales and flipped images.
+ predictions = tf.reduce_mean(tf.concat(predictions, 4), axis=4)
+ outputs_to_predictions[output] = tf.argmax(predictions, 3)
+
+ return outputs_to_predictions
+
+
+def predict_labels(images, model_options, image_pyramid=None):
+ """Predicts segmentation labels.
+
+ Args:
+ images: A tensor of size [batch, height, width, channels].
+ model_options: A ModelOptions instance to configure models.
+ image_pyramid: Input image scales for multi-scale feature extraction.
+
+ Returns:
+ A dictionary with keys specifying the output_type (e.g., semantic
+ prediction) and values storing Tensors representing predictions (argmax
+ over channels). Each prediction has size [batch, height, width].
+ """
+ outputs_to_scales_to_logits = multi_scale_logits(
+ images,
+ model_options=model_options,
+ image_pyramid=image_pyramid,
+ is_training=False,
+ fine_tune_batch_norm=False)
+
+ predictions = {}
+ for output in sorted(outputs_to_scales_to_logits):
+ scales_to_logits = outputs_to_scales_to_logits[output]
+ logits = tf.image.resize_bilinear(
+ scales_to_logits[MERGED_LOGITS_SCOPE],
+ tf.shape(images)[1:3],
+ align_corners=True)
+ predictions[output] = tf.argmax(logits, 3)
+
+ return predictions
+
+
+def _resize_bilinear(images, size, output_dtype=tf.float32):
+ """Returns resized images as output_type.
+
+ Args:
+ images: A tensor of size [batch, height_in, width_in, channels].
+ size: A 1-D int32 Tensor of 2 elements: new_height, new_width. The new size
+ for the images.
+ output_dtype: The destination type.
+ Returns:
+ A tensor of size [batch, height_out, width_out, channels] as a dtype of
+ output_dtype.
+ """
+ images = tf.image.resize_bilinear(images, size, align_corners=True)
+ return tf.cast(images, dtype=output_dtype)
+
+
+def multi_scale_logits(images,
+ model_options,
+ image_pyramid,
+ weight_decay=0.0001,
+ is_training=False,
+ fine_tune_batch_norm=False):
+ """Gets the logits for multi-scale inputs.
+
+ The returned logits are all downsampled (due to max-pooling layers)
+ for both training and evaluation.
+
+ Args:
+ images: A tensor of size [batch, height, width, channels].
+ model_options: A ModelOptions instance to configure models.
+ image_pyramid: Input image scales for multi-scale feature extraction.
+ weight_decay: The weight decay for model variables.
+ is_training: Is training or not.
+ fine_tune_batch_norm: Fine-tune the batch norm parameters or not.
+
+ Returns:
+ outputs_to_scales_to_logits: A map of maps from output_type (e.g.,
+ semantic prediction) to a dictionary of multi-scale logits names to
+ logits. For each output_type, the dictionary has keys which
+ correspond to the scales and values which correspond to the logits.
+ For example, if `scales` equals [1.0, 1.5], then the keys would
+ include 'merged_logits', 'logits_1.00' and 'logits_1.50'.
+
+ Raises:
+ ValueError: If model_options doesn't specify crop_size and its
+ add_image_level_feature = True, since add_image_level_feature requires
+ crop_size information.
+ """
+ # Setup default values.
+ if not image_pyramid:
+ image_pyramid = [1.0]
+ crop_height = (
+ model_options.crop_size[0]
+ if model_options.crop_size else tf.shape(images)[1])
+ crop_width = (
+ model_options.crop_size[1]
+ if model_options.crop_size else tf.shape(images)[2])
+
+ # Compute the height, width for the output logits.
+ logits_output_stride = (
+ model_options.decoder_output_stride or model_options.output_stride)
+
+ logits_height = scale_dimension(
+ crop_height,
+ max(1.0, max(image_pyramid)) / logits_output_stride)
+ logits_width = scale_dimension(
+ crop_width,
+ max(1.0, max(image_pyramid)) / logits_output_stride)
+
+ # Compute the logits for each scale in the image pyramid.
+ outputs_to_scales_to_logits = {
+ k: {}
+ for k in model_options.outputs_to_num_classes
+ }
+
+ for image_scale in image_pyramid:
+ if image_scale != 1.0:
+ scaled_height = scale_dimension(crop_height, image_scale)
+ scaled_width = scale_dimension(crop_width, image_scale)
+ scaled_crop_size = [scaled_height, scaled_width]
+ scaled_images = tf.image.resize_bilinear(
+ images, scaled_crop_size, align_corners=True)
+ if model_options.crop_size:
+ scaled_images.set_shape([None, scaled_height, scaled_width, 3])
+ else:
+ scaled_crop_size = model_options.crop_size
+ scaled_images = images
+
+ updated_options = model_options._replace(crop_size=scaled_crop_size)
+ outputs_to_logits = _get_logits(
+ scaled_images,
+ updated_options,
+ weight_decay=weight_decay,
+ reuse=tf.AUTO_REUSE,
+ is_training=is_training,
+ fine_tune_batch_norm=fine_tune_batch_norm)
+
+ # Resize the logits to have the same dimension before merging.
+ for output in sorted(outputs_to_logits):
+ outputs_to_logits[output] = tf.image.resize_bilinear(
+ outputs_to_logits[output], [logits_height, logits_width],
+ align_corners=True)
+
+ # Return when only one input scale.
+ if len(image_pyramid) == 1:
+ for output in sorted(model_options.outputs_to_num_classes):
+ outputs_to_scales_to_logits[output][
+ MERGED_LOGITS_SCOPE] = outputs_to_logits[output]
+ return outputs_to_scales_to_logits
+
+ # Save logits to the output map.
+ for output in sorted(model_options.outputs_to_num_classes):
+ outputs_to_scales_to_logits[output][
+ 'logits_%.2f' % image_scale] = outputs_to_logits[output]
+
+ # Merge the logits from all the multi-scale inputs.
+ for output in sorted(model_options.outputs_to_num_classes):
+ # Concatenate the multi-scale logits for each output type.
+ all_logits = [
+ tf.expand_dims(logits, axis=4)
+ for logits in outputs_to_scales_to_logits[output].values()
+ ]
+ all_logits = tf.concat(all_logits, 4)
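+    # Example (illustrative): with image_pyramid = [0.5, 1.0], per-scale
+    # logits values 2.0 and 4.0 at a given position merge to 4.0 when
+    # merge_method == 'max' and to 3.0 otherwise (elementwise mean).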
+ merge_fn = (
+ tf.reduce_max
+ if model_options.merge_method == 'max' else tf.reduce_mean)
+ outputs_to_scales_to_logits[output][MERGED_LOGITS_SCOPE] = merge_fn(
+ all_logits, axis=4)
+
+ return outputs_to_scales_to_logits
+
+
+def extract_features(images,
+ model_options,
+ weight_decay=0.0001,
+ reuse=None,
+ is_training=False,
+ fine_tune_batch_norm=False):
+ """Extracts features by the particular model_variant.
+
+ Args:
+ images: A tensor of size [batch, height, width, channels].
+ model_options: A ModelOptions instance to configure models.
+ weight_decay: The weight decay for model variables.
+ reuse: Reuse the model variables or not.
+ is_training: Is training or not.
+ fine_tune_batch_norm: Fine-tune the batch norm parameters or not.
+
+ Returns:
+ concat_logits: A tensor of size [batch, feature_height, feature_width,
+ feature_channels], where feature_height/feature_width are determined by
+ the images height/width and output_stride.
+ end_points: A dictionary from components of the network to the corresponding
+ activation.
+ """
+ features, end_points = feature_extractor.extract_features(
+ images,
+ output_stride=model_options.output_stride,
+ multi_grid=model_options.multi_grid,
+ model_variant=model_options.model_variant,
+ depth_multiplier=model_options.depth_multiplier,
+ weight_decay=weight_decay,
+ reuse=reuse,
+ is_training=is_training,
+ fine_tune_batch_norm=fine_tune_batch_norm)
+
+ if not model_options.aspp_with_batch_norm:
+ return features, end_points
+ else:
+ if model_options.dense_prediction_cell_config is not None:
+ tf.logging.info('Using dense prediction cell config.')
+ dense_prediction_layer = dense_prediction_cell.DensePredictionCell(
+ config=model_options.dense_prediction_cell_config,
+ hparams={
+ 'conv_rate_multiplier': 16 // model_options.output_stride,
+ })
+ concat_logits = dense_prediction_layer.build_cell(
+ features,
+ output_stride=model_options.output_stride,
+ crop_size=model_options.crop_size,
+ image_pooling_crop_size=model_options.image_pooling_crop_size,
+ weight_decay=weight_decay,
+ reuse=reuse,
+ is_training=is_training,
+ fine_tune_batch_norm=fine_tune_batch_norm)
+ return concat_logits, end_points
+ else:
+      # The following code employs the DeepLabv3 ASPP module. Note that we
+      # could express the ASPP module as one particular dense prediction
+      # cell architecture. We do not do so, but keep the following code for
+      # backward compatibility.
+ batch_norm_params = {
+ 'is_training': is_training and fine_tune_batch_norm,
+ 'decay': 0.9997,
+ 'epsilon': 1e-5,
+ 'scale': True,
+ }
+
+ with slim.arg_scope(
+ [slim.conv2d, slim.separable_conv2d],
+ weights_regularizer=slim.l2_regularizer(weight_decay),
+ activation_fn=tf.nn.relu,
+ normalizer_fn=slim.batch_norm,
+ padding='SAME',
+ stride=1,
+ reuse=reuse):
+ with slim.arg_scope([slim.batch_norm], **batch_norm_params):
+ depth = 256
+ branch_logits = []
+
+ if model_options.add_image_level_feature:
+ if model_options.crop_size is not None:
+ image_pooling_crop_size = model_options.image_pooling_crop_size
+ # If image_pooling_crop_size is not specified, use crop_size.
+ if image_pooling_crop_size is None:
+ image_pooling_crop_size = model_options.crop_size
+ pool_height = scale_dimension(
+ image_pooling_crop_size[0],
+ 1. / model_options.output_stride)
+ pool_width = scale_dimension(
+ image_pooling_crop_size[1],
+ 1. / model_options.output_stride)
+ image_feature = slim.avg_pool2d(
+ features, [pool_height, pool_width], [1, 1], padding='VALID')
+ resize_height = scale_dimension(
+ model_options.crop_size[0],
+ 1. / model_options.output_stride)
+ resize_width = scale_dimension(
+ model_options.crop_size[1],
+ 1. / model_options.output_stride)
+ else:
+ # If crop_size is None, we simply do global pooling.
+ pool_height = tf.shape(features)[1]
+ pool_width = tf.shape(features)[2]
+ image_feature = tf.reduce_mean(
+ features, axis=[1, 2], keepdims=True)
+ resize_height = pool_height
+ resize_width = pool_width
+ image_feature = slim.conv2d(
+ image_feature, depth, 1, scope=IMAGE_POOLING_SCOPE)
+ image_feature = _resize_bilinear(
+ image_feature,
+ [resize_height, resize_width],
+ image_feature.dtype)
+ # Set shape for resize_height/resize_width if they are not Tensor.
+ if isinstance(resize_height, tf.Tensor):
+ resize_height = None
+ if isinstance(resize_width, tf.Tensor):
+ resize_width = None
+ image_feature.set_shape([None, resize_height, resize_width, depth])
+ branch_logits.append(image_feature)
+
+ # Employ a 1x1 convolution.
+ branch_logits.append(slim.conv2d(features, depth, 1,
+ scope=ASPP_SCOPE + str(0)))
+
+ if model_options.atrous_rates:
+ # Employ 3x3 convolutions with different atrous rates.
+ for i, rate in enumerate(model_options.atrous_rates, 1):
+ scope = ASPP_SCOPE + str(i)
+ if model_options.aspp_with_separable_conv:
+ aspp_features = split_separable_conv2d(
+ features,
+ filters=depth,
+ rate=rate,
+ weight_decay=weight_decay,
+ scope=scope)
+ else:
+ aspp_features = slim.conv2d(
+ features, depth, 3, rate=rate, scope=scope)
+ branch_logits.append(aspp_features)
+
+ # Merge branch logits.
+ concat_logits = tf.concat(branch_logits, 3)
+ concat_logits = slim.conv2d(
+ concat_logits, depth, 1, scope=CONCAT_PROJECTION_SCOPE)
+ concat_logits = slim.dropout(
+ concat_logits,
+ keep_prob=0.9,
+ is_training=is_training,
+ scope=CONCAT_PROJECTION_SCOPE + '_dropout')
+
+ return concat_logits, end_points
+
+
+def _get_logits(images,
+ model_options,
+ weight_decay=0.0001,
+ reuse=None,
+ is_training=False,
+ fine_tune_batch_norm=False):
+ """Gets the logits by atrous/image spatial pyramid pooling.
+
+ Args:
+ images: A tensor of size [batch, height, width, channels].
+ model_options: A ModelOptions instance to configure models.
+ weight_decay: The weight decay for model variables.
+ reuse: Reuse the model variables or not.
+ is_training: Is training or not.
+ fine_tune_batch_norm: Fine-tune the batch norm parameters or not.
+
+ Returns:
+ outputs_to_logits: A map from output_type to logits.
+ """
+ features, end_points = extract_features(
+ images,
+ model_options,
+ weight_decay=weight_decay,
+ reuse=reuse,
+ is_training=is_training,
+ fine_tune_batch_norm=fine_tune_batch_norm)
+
+ if model_options.decoder_output_stride is not None:
+ if model_options.crop_size is None:
+ height = tf.shape(images)[1]
+ width = tf.shape(images)[2]
+ else:
+ height, width = model_options.crop_size
+ decoder_height = scale_dimension(height,
+ 1.0 / model_options.decoder_output_stride)
+ decoder_width = scale_dimension(width,
+ 1.0 / model_options.decoder_output_stride)
+ features = refine_by_decoder(
+ features,
+ end_points,
+ decoder_height=decoder_height,
+ decoder_width=decoder_width,
+ decoder_use_separable_conv=model_options.decoder_use_separable_conv,
+ model_variant=model_options.model_variant,
+ weight_decay=weight_decay,
+ reuse=reuse,
+ is_training=is_training,
+ fine_tune_batch_norm=fine_tune_batch_norm)
+
+ outputs_to_logits = {}
+ for output in sorted(model_options.outputs_to_num_classes):
+ outputs_to_logits[output] = get_branch_logits(
+ features,
+ model_options.outputs_to_num_classes[output],
+ model_options.atrous_rates,
+ aspp_with_batch_norm=model_options.aspp_with_batch_norm,
+ kernel_size=model_options.logits_kernel_size,
+ weight_decay=weight_decay,
+ reuse=reuse,
+ scope_suffix=output)
+
+ return outputs_to_logits
+
+
+def refine_by_decoder(features,
+ end_points,
+ decoder_height,
+ decoder_width,
+ decoder_use_separable_conv=False,
+ model_variant=None,
+ weight_decay=0.0001,
+ reuse=None,
+ is_training=False,
+ fine_tune_batch_norm=False):
+ """Adds the decoder to obtain sharper segmentation results.
+
+ Args:
+ features: A tensor of size [batch, features_height, features_width,
+ features_channels].
+ end_points: A dictionary from components of the network to the corresponding
+ activation.
+ decoder_height: The height of decoder feature maps.
+ decoder_width: The width of decoder feature maps.
+ decoder_use_separable_conv: Employ separable convolution for decoder or not.
+ model_variant: Model variant for feature extraction.
+ weight_decay: The weight decay for model variables.
+ reuse: Reuse the model variables or not.
+ is_training: Is training or not.
+ fine_tune_batch_norm: Fine-tune the batch norm parameters or not.
+
+ Returns:
+ Decoder output with size [batch, decoder_height, decoder_width,
+ decoder_channels].
+ """
+ batch_norm_params = {
+ 'is_training': is_training and fine_tune_batch_norm,
+ 'decay': 0.9997,
+ 'epsilon': 1e-5,
+ 'scale': True,
+ }
+
+ with slim.arg_scope(
+ [slim.conv2d, slim.separable_conv2d],
+ weights_regularizer=slim.l2_regularizer(weight_decay),
+ activation_fn=tf.nn.relu,
+ normalizer_fn=slim.batch_norm,
+ padding='SAME',
+ stride=1,
+ reuse=reuse):
+ with slim.arg_scope([slim.batch_norm], **batch_norm_params):
+ with tf.variable_scope(DECODER_SCOPE, DECODER_SCOPE, [features]):
+ feature_list = feature_extractor.networks_to_feature_maps[
+ model_variant][feature_extractor.DECODER_END_POINTS]
+ if feature_list is None:
+          tf.logging.info('No decoder end points found.')
+ return features
+ else:
+ decoder_features = features
+ for i, name in enumerate(feature_list):
+ decoder_features_list = [decoder_features]
+
+ # MobileNet variants use different naming convention.
+ if 'mobilenet' in model_variant:
+ feature_name = name
+ else:
+ feature_name = '{}/{}'.format(
+ feature_extractor.name_scope[model_variant], name)
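+          # Project the selected low-level encoder feature to 48 channels
+          # (as in DeepLabv3+) before concatenating it with decoder_features.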
+ decoder_features_list.append(
+ slim.conv2d(
+ end_points[feature_name],
+ 48,
+ 1,
+ scope='feature_projection' + str(i)))
+ # Resize to decoder_height/decoder_width.
+ for j, feature in enumerate(decoder_features_list):
+ decoder_features_list[j] = tf.image.resize_bilinear(
+ feature, [decoder_height, decoder_width], align_corners=True)
+ h = (None if isinstance(decoder_height, tf.Tensor)
+ else decoder_height)
+ w = (None if isinstance(decoder_width, tf.Tensor)
+ else decoder_width)
+ decoder_features_list[j].set_shape([None, h, w, None])
+ decoder_depth = 256
+ if decoder_use_separable_conv:
+ decoder_features = split_separable_conv2d(
+ tf.concat(decoder_features_list, 3),
+ filters=decoder_depth,
+ rate=1,
+ weight_decay=weight_decay,
+ scope='decoder_conv0')
+ decoder_features = split_separable_conv2d(
+ decoder_features,
+ filters=decoder_depth,
+ rate=1,
+ weight_decay=weight_decay,
+ scope='decoder_conv1')
+ else:
+ num_convs = 2
+ decoder_features = slim.repeat(
+ tf.concat(decoder_features_list, 3),
+ num_convs,
+ slim.conv2d,
+ decoder_depth,
+ 3,
+ scope='decoder_conv' + str(i))
+ return decoder_features
+
+
+def get_branch_logits(features,
+ num_classes,
+ atrous_rates=None,
+ aspp_with_batch_norm=False,
+ kernel_size=1,
+ weight_decay=0.0001,
+ reuse=None,
+ scope_suffix=''):
+ """Gets the logits from each model's branch.
+
+ The underlying model is branched out in the last layer when atrous
+ spatial pyramid pooling is employed, and all branches are sum-merged
+ to form the final logits.
+
+ Args:
+ features: A float tensor of shape [batch, height, width, channels].
+ num_classes: Number of classes to predict.
+ atrous_rates: A list of atrous convolution rates for last layer.
+ aspp_with_batch_norm: Use batch normalization layers for ASPP.
+ kernel_size: Kernel size for convolution.
+ weight_decay: Weight decay for the model variables.
+ reuse: Reuse model variables or not.
+ scope_suffix: Scope suffix for the model variables.
+
+ Returns:
+ Merged logits with shape [batch, height, width, num_classes].
+
+ Raises:
+ ValueError: Upon invalid input kernel_size value.
+ """
+ # When using batch normalization with ASPP, ASPP has been applied before
+ # in extract_features, and thus we simply apply 1x1 convolution here.
+ if aspp_with_batch_norm or atrous_rates is None:
+ if kernel_size != 1:
+ raise ValueError('Kernel size must be 1 when atrous_rates is None or '
+                     'using aspp_with_batch_norm. Got %d.' % kernel_size)
+ atrous_rates = [1]
+
+ with slim.arg_scope(
+ [slim.conv2d],
+ weights_regularizer=slim.l2_regularizer(weight_decay),
+ weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
+ reuse=reuse):
+ with tf.variable_scope(LOGITS_SCOPE_NAME, LOGITS_SCOPE_NAME, [features]):
+ branch_logits = []
+ for i, rate in enumerate(atrous_rates):
+ scope = scope_suffix
+ if i:
+ scope += '_%d' % i
+
+ branch_logits.append(
+ slim.conv2d(
+ features,
+ num_classes,
+ kernel_size=kernel_size,
+ rate=rate,
+ activation_fn=None,
+ normalizer_fn=None,
+ scope=scope))
+
+ return tf.add_n(branch_logits)
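+
+
+# Example of branch merging in get_branch_logits (illustrative): with
+# atrous_rates = [6, 12, 18], aspp_with_batch_norm = False and scope_suffix =
+# 'semantic', three 3x3 atrous convolution branches are created under the
+# scopes 'semantic', 'semantic_1' and 'semantic_2', and their outputs are
+# summed with tf.add_n to form the final logits.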
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/model_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/model_test.py"
new file mode 100644
index 00000000..50471c27
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/model_test.py"
@@ -0,0 +1,146 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for DeepLab model and some helper functions."""
+
+import tensorflow as tf
+
+from deeplab import common
+from deeplab import model
+
+
+class DeeplabModelTest(tf.test.TestCase):
+
+ def testWrongDeepLabVariant(self):
+ model_options = common.ModelOptions([])._replace(
+ model_variant='no_such_variant')
+ with self.assertRaises(ValueError):
+ model._get_logits(images=[], model_options=model_options)
+
+ def testBuildDeepLabv2(self):
+ batch_size = 2
+ crop_size = [41, 41]
+
+ # Test with two image_pyramids.
+ image_pyramids = [[1], [0.5, 1]]
+
+ # Test two model variants.
+ model_variants = ['xception_65', 'mobilenet_v2']
+
+ # Test with two output_types.
+ outputs_to_num_classes = {'semantic': 3,
+ 'direction': 2}
+
+ expected_endpoints = [['merged_logits'],
+ ['merged_logits',
+ 'logits_0.50',
+ 'logits_1.00']]
+ expected_num_logits = [1, 3]
+
+ for model_variant in model_variants:
+ model_options = common.ModelOptions(outputs_to_num_classes)._replace(
+ add_image_level_feature=False,
+ aspp_with_batch_norm=False,
+ aspp_with_separable_conv=False,
+ model_variant=model_variant)
+
+ for i, image_pyramid in enumerate(image_pyramids):
+ g = tf.Graph()
+ with g.as_default():
+ with self.test_session(graph=g):
+ inputs = tf.random_uniform(
+ (batch_size, crop_size[0], crop_size[1], 3))
+ outputs_to_scales_to_logits = model.multi_scale_logits(
+ inputs, model_options, image_pyramid=image_pyramid)
+
+ # Check computed results for each output type.
+ for output in outputs_to_num_classes:
+ scales_to_logits = outputs_to_scales_to_logits[output]
+ self.assertListEqual(sorted(scales_to_logits.keys()),
+ sorted(expected_endpoints[i]))
+
+            # Expected number of logits = len(image_pyramid) + 1, since the
+            # last entry holds the logits merged from all the scales.
+ self.assertEqual(len(scales_to_logits), expected_num_logits[i])
+
+ def testForwardpassDeepLabv3plus(self):
+ crop_size = [33, 33]
+ outputs_to_num_classes = {'semantic': 3}
+
+ model_options = common.ModelOptions(
+ outputs_to_num_classes,
+ crop_size,
+ output_stride=16
+ )._replace(
+ add_image_level_feature=True,
+ aspp_with_batch_norm=True,
+ logits_kernel_size=1,
+ model_variant='mobilenet_v2') # Employ MobileNetv2 for fast test.
+
+ g = tf.Graph()
+ with g.as_default():
+ with self.test_session(graph=g) as sess:
+ inputs = tf.random_uniform(
+ (1, crop_size[0], crop_size[1], 3))
+ outputs_to_scales_to_logits = model.multi_scale_logits(
+ inputs,
+ model_options,
+ image_pyramid=[1.0])
+
+ sess.run(tf.global_variables_initializer())
+ outputs_to_scales_to_logits = sess.run(outputs_to_scales_to_logits)
+
+ # Check computed results for each output type.
+ for output in outputs_to_num_classes:
+ scales_to_logits = outputs_to_scales_to_logits[output]
+ # Expect only one output.
+ self.assertEqual(len(scales_to_logits), 1)
+ for logits in scales_to_logits.values():
+ self.assertTrue(logits.any())
+
+ def testBuildDeepLabWithDensePredictionCell(self):
+ batch_size = 1
+ crop_size = [33, 33]
+ outputs_to_num_classes = {'semantic': 2}
+ expected_endpoints = ['merged_logits']
+ dense_prediction_cell_config = [
+ {'kernel': 3, 'rate': [1, 6], 'op': 'conv', 'input': -1},
+ {'kernel': 3, 'rate': [18, 15], 'op': 'conv', 'input': 0},
+ ]
+ model_options = common.ModelOptions(
+ outputs_to_num_classes,
+ crop_size,
+ output_stride=16)._replace(
+ aspp_with_batch_norm=True,
+ model_variant='mobilenet_v2',
+ dense_prediction_cell_config=dense_prediction_cell_config)
+ g = tf.Graph()
+ with g.as_default():
+ with self.test_session(graph=g):
+ inputs = tf.random_uniform(
+ (batch_size, crop_size[0], crop_size[1], 3))
+ outputs_to_scales_to_model_results = model.multi_scale_logits(
+ inputs,
+ model_options,
+ image_pyramid=[1.0])
+ for output in outputs_to_num_classes:
+ scales_to_model_results = outputs_to_scales_to_model_results[output]
+ self.assertListEqual(scales_to_model_results.keys(),
+ expected_endpoints)
+ self.assertEqual(len(scales_to_model_results), 1)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/train.py"
new file mode 100644
index 00000000..bf362922
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/train.py"
@@ -0,0 +1,394 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Training script for the DeepLab model.
+
+See model.py for more details and usage.
+"""
+
+import six
+import tensorflow as tf
+from deeplab import common
+from deeplab import model
+from deeplab.datasets import segmentation_dataset
+from deeplab.utils import input_generator
+from deeplab.utils import train_utils
+from deployment import model_deploy
+
+slim = tf.contrib.slim
+
+prefetch_queue = slim.prefetch_queue
+
+flags = tf.app.flags
+
+FLAGS = flags.FLAGS
+
+# Settings for multi-GPUs/multi-replicas training.
+
+flags.DEFINE_integer('num_clones', 1, 'Number of clones to deploy.')
+
+flags.DEFINE_boolean('clone_on_cpu', False, 'Use CPUs to deploy clones.')
+
+flags.DEFINE_integer('num_replicas', 1, 'Number of worker replicas.')
+
+flags.DEFINE_integer('startup_delay_steps', 15,
+ 'Number of training steps between replicas startup.')
+
+flags.DEFINE_integer('num_ps_tasks', 0,
+ 'The number of parameter servers. If the value is 0, then '
+ 'the parameters are handled locally by the worker.')
+
+flags.DEFINE_string('master', '', 'BNS name of the tensorflow server')
+
+flags.DEFINE_integer('task', 0, 'The task ID.')
+
+# Settings for logging.
+
+flags.DEFINE_string('train_logdir', None,
+ 'Where the checkpoint and logs are stored.')
+
+flags.DEFINE_integer('log_steps', 10,
+ 'Display logging information at every log_steps.')
+
+flags.DEFINE_integer('save_interval_secs', 1200,
+ 'How often, in seconds, we save the model to disk.')
+
+flags.DEFINE_integer('save_summaries_secs', 600,
+ 'How often, in seconds, we compute the summaries.')
+
+flags.DEFINE_boolean('save_summaries_images', False,
+ 'Save sample inputs, labels, and semantic predictions as '
+ 'images to summary.')
+
+# Settings for training strategy.
+
+flags.DEFINE_enum('learning_policy', 'poly', ['poly', 'step'],
+ 'Learning rate policy for training.')
+
+# Use 0.007 when training on PASCAL augmented training set, train_aug. When
+# fine-tuning on PASCAL trainval set, use learning rate=0.0001.
+flags.DEFINE_float('base_learning_rate', .0001,
+ 'The base learning rate for model training.')
+
+flags.DEFINE_float('learning_rate_decay_factor', 0.1,
+ 'The rate to decay the base learning rate.')
+
+flags.DEFINE_integer('learning_rate_decay_step', 2000,
+ 'Decay the base learning rate at a fixed step.')
+
+flags.DEFINE_float('learning_power', 0.9,
+ 'The power value used in the poly learning policy.')
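+# For reference, the 'poly' policy decays the learning rate roughly as
+# base_learning_rate * (1 - step / training_number_of_steps)**learning_power.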
+
+flags.DEFINE_integer('training_number_of_steps', 30000,
+ 'The number of steps used for training')
+
+flags.DEFINE_float('momentum', 0.9, 'The momentum value to use')
+
+# When fine_tune_batch_norm=True, use a batch size of at least 12 (a batch
+# size of 16 or more is better). Otherwise, one could use a smaller batch
+# size and set fine_tune_batch_norm=False.
+flags.DEFINE_integer('train_batch_size', 8,
+ 'The number of images in each batch during training.')
+
+# For weight_decay, use 0.00004 for MobileNet-V2 or Xception model variants.
+# Use 0.0001 for ResNet model variants.
+flags.DEFINE_float('weight_decay', 0.00004,
+ 'The value of the weight decay for training.')
+
+flags.DEFINE_multi_integer('train_crop_size', [513, 513],
+ 'Image crop size [height, width] during training.')
+
+flags.DEFINE_float('last_layer_gradient_multiplier', 1.0,
+ 'The gradient multiplier for last layers, which is used to '
+ 'boost the gradient of last layers if the value > 1.')
+
+flags.DEFINE_boolean('upsample_logits', True,
+ 'Upsample logits during training.')
+
+# Settings for fine-tuning the network.
+
+flags.DEFINE_string('tf_initial_checkpoint', None,
+ 'The initial checkpoint in tensorflow format.')
+
+# Set to False if one does not want to re-use the trained classifier weights.
+flags.DEFINE_boolean('initialize_last_layer', True,
+ 'Initialize the last layer.')
+
+flags.DEFINE_boolean('last_layers_contain_logits_only', False,
+                     'Whether to only consider the logits as the last layers.')
+
+flags.DEFINE_integer('slow_start_step', 0,
+ 'Training model with small learning rate for few steps.')
+
+flags.DEFINE_float('slow_start_learning_rate', 1e-4,
+ 'Learning rate employed during slow start.')
+
+# Set to True if one wants to fine-tune the batch norm parameters in DeepLabv3.
+# Set to False and use small batch size to save GPU memory.
+flags.DEFINE_boolean('fine_tune_batch_norm', True,
+ 'Fine tune the batch norm parameters or not.')
+
+flags.DEFINE_float('min_scale_factor', 0.5,
+                   'Minimum scale factor for data augmentation.')
+
+flags.DEFINE_float('max_scale_factor', 2.,
+ 'Maximum scale factor for data augmentation.')
+
+flags.DEFINE_float('scale_factor_step_size', 0.25,
+ 'Scale factor step size for data augmentation.')
+
+# For `xception_65`, use atrous_rates = [12, 24, 36] if output_stride = 8, or
+# rates = [6, 12, 18] if output_stride = 16. For `mobilenet_v2`, use None. Note
+# one could use different atrous_rates/output_stride during training/evaluation.
+flags.DEFINE_multi_integer('atrous_rates', None,
+ 'Atrous rates for atrous spatial pyramid pooling.')
+
+flags.DEFINE_integer('output_stride', 16,
+ 'The ratio of input to output spatial resolution.')
+
+# Dataset settings.
+flags.DEFINE_string('dataset', 'pascal_voc_seg',
+ 'Name of the segmentation dataset.')
+
+flags.DEFINE_string('train_split', 'train',
+                    'Which split of the dataset to use for training.')
+
+flags.DEFINE_string('dataset_dir', None, 'Where the dataset resides.')
+
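+# A typical single-GPU invocation might look like the following (illustrative
+# values; model_variant is defined in common.py, and the checkpoint/dataset
+# paths are placeholders):
+#
+#   python deeplab/train.py \
+#     --logtostderr \
+#     --training_number_of_steps=30000 \
+#     --train_split="train" \
+#     --model_variant="xception_65" \
+#     --atrous_rates=6 --atrous_rates=12 --atrous_rates=18 \
+#     --output_stride=16 \
+#     --train_crop_size=513 --train_crop_size=513 \
+#     --train_batch_size=4 \
+#     --fine_tune_batch_norm=false \
+#     --dataset="pascal_voc_seg" \
+#     --tf_initial_checkpoint=${INIT_CHECKPOINT} \
+#     --train_logdir=${TRAIN_LOGDIR} \
+#     --dataset_dir=${DATASET_DIR}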
+
+def _build_deeplab(inputs_queue, outputs_to_num_classes, ignore_label):
+ """Builds a clone of DeepLab.
+
+ Args:
+ inputs_queue: A prefetch queue for images and labels.
+ outputs_to_num_classes: A map from output type to the number of classes.
+ For example, for the task of semantic segmentation with 21 semantic
+ classes, we would have outputs_to_num_classes['semantic'] = 21.
+ ignore_label: Ignore label.
+
+ Returns:
+ A map of maps from output_type (e.g., semantic prediction) to a
+ dictionary of multi-scale logits names to logits. For each output_type,
+ the dictionary has keys which correspond to the scales and values which
+ correspond to the logits. For example, if `scales` equals [1.0, 1.5],
+ then the keys would include 'merged_logits', 'logits_1.00' and
+ 'logits_1.50'.
+ """
+ samples = inputs_queue.dequeue()
+
+ # Add name to input and label nodes so we can add to summary.
+ samples[common.IMAGE] = tf.identity(
+ samples[common.IMAGE], name=common.IMAGE)
+ samples[common.LABEL] = tf.identity(
+ samples[common.LABEL], name=common.LABEL)
+
+ model_options = common.ModelOptions(
+ outputs_to_num_classes=outputs_to_num_classes,
+ crop_size=FLAGS.train_crop_size,
+ atrous_rates=FLAGS.atrous_rates,
+ output_stride=FLAGS.output_stride)
+ outputs_to_scales_to_logits = model.multi_scale_logits(
+ samples[common.IMAGE],
+ model_options=model_options,
+ image_pyramid=FLAGS.image_pyramid,
+ weight_decay=FLAGS.weight_decay,
+ is_training=True,
+ fine_tune_batch_norm=FLAGS.fine_tune_batch_norm)
+
+ # Add name to graph node so we can add to summary.
+ output_type_dict = outputs_to_scales_to_logits[common.OUTPUT_TYPE]
+ output_type_dict[model.MERGED_LOGITS_SCOPE] = tf.identity(
+ output_type_dict[model.MERGED_LOGITS_SCOPE],
+ name=common.OUTPUT_TYPE)
+
+ for output, num_classes in six.iteritems(outputs_to_num_classes):
+ train_utils.add_softmax_cross_entropy_loss_for_each_scale(
+ outputs_to_scales_to_logits[output],
+ samples[common.LABEL],
+ num_classes,
+ ignore_label,
+ loss_weight=1.0,
+ upsample_logits=FLAGS.upsample_logits,
+ scope=output)
+
+ return outputs_to_scales_to_logits
+
+
+def main(unused_argv):
+ tf.logging.set_verbosity(tf.logging.INFO)
+ # Set up deployment (i.e., multi-GPUs and/or multi-replicas).
+ config = model_deploy.DeploymentConfig(
+ num_clones=FLAGS.num_clones,
+ clone_on_cpu=FLAGS.clone_on_cpu,
+ replica_id=FLAGS.task,
+ num_replicas=FLAGS.num_replicas,
+ num_ps_tasks=FLAGS.num_ps_tasks)
+
+ # Split the batch across GPUs.
+ assert FLAGS.train_batch_size % config.num_clones == 0, (
+      'Training batch size not divisible by number of clones (GPUs).')
+
+ clone_batch_size = FLAGS.train_batch_size // config.num_clones
+
+ # Get dataset-dependent information.
+ dataset = segmentation_dataset.get_dataset(
+ FLAGS.dataset, FLAGS.train_split, dataset_dir=FLAGS.dataset_dir)
+
+ tf.gfile.MakeDirs(FLAGS.train_logdir)
+ tf.logging.info('Training on %s set', FLAGS.train_split)
+
+ with tf.Graph().as_default() as graph:
+ with tf.device(config.inputs_device()):
+ samples = input_generator.get(
+ dataset,
+ FLAGS.train_crop_size,
+ clone_batch_size,
+ min_resize_value=FLAGS.min_resize_value,
+ max_resize_value=FLAGS.max_resize_value,
+ resize_factor=FLAGS.resize_factor,
+ min_scale_factor=FLAGS.min_scale_factor,
+ max_scale_factor=FLAGS.max_scale_factor,
+ scale_factor_step_size=FLAGS.scale_factor_step_size,
+ dataset_split=FLAGS.train_split,
+ is_training=True,
+ model_variant=FLAGS.model_variant)
+ inputs_queue = prefetch_queue.prefetch_queue(
+ samples, capacity=128 * config.num_clones)
+
+ # Create the global step on the device storing the variables.
+ with tf.device(config.variables_device()):
+ global_step = tf.train.get_or_create_global_step()
+
+ # Define the model and create clones.
+ model_fn = _build_deeplab
+ model_args = (inputs_queue, {
+ common.OUTPUT_TYPE: dataset.num_classes
+ }, dataset.ignore_label)
+ clones = model_deploy.create_clones(config, model_fn, args=model_args)
+
+ # Gather update_ops from the first clone. These contain, for example,
+ # the updates for the batch_norm variables created by model_fn.
+ first_clone_scope = config.clone_scope(0)
+ update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS, first_clone_scope)
+
+ # Gather initial summaries.
+ summaries = set(tf.get_collection(tf.GraphKeys.SUMMARIES))
+
+ # Add summaries for model variables.
+ for model_var in slim.get_model_variables():
+ summaries.add(tf.summary.histogram(model_var.op.name, model_var))
+
+ # Add summaries for images, labels, semantic predictions
+ if FLAGS.save_summaries_images:
+ summary_image = graph.get_tensor_by_name(
+ ('%s/%s:0' % (first_clone_scope, common.IMAGE)).strip('/'))
+ summaries.add(
+ tf.summary.image('samples/%s' % common.IMAGE, summary_image))
+
+ first_clone_label = graph.get_tensor_by_name(
+ ('%s/%s:0' % (first_clone_scope, common.LABEL)).strip('/'))
+ # Scale up summary image pixel values for better visualization.
+ pixel_scaling = max(1, 255 // dataset.num_classes)
+ summary_label = tf.cast(first_clone_label * pixel_scaling, tf.uint8)
+ summaries.add(
+ tf.summary.image('samples/%s' % common.LABEL, summary_label))
+
+ first_clone_output = graph.get_tensor_by_name(
+ ('%s/%s:0' % (first_clone_scope, common.OUTPUT_TYPE)).strip('/'))
+ predictions = tf.expand_dims(tf.argmax(first_clone_output, 3), -1)
+
+ summary_predictions = tf.cast(predictions * pixel_scaling, tf.uint8)
+ summaries.add(
+ tf.summary.image(
+ 'samples/%s' % common.OUTPUT_TYPE, summary_predictions))
+
+ # Add summaries for losses.
+ for loss in tf.get_collection(tf.GraphKeys.LOSSES, first_clone_scope):
+ summaries.add(tf.summary.scalar('losses/%s' % loss.op.name, loss))
+
+ # Build the optimizer based on the device specification.
+ with tf.device(config.optimizer_device()):
+ learning_rate = train_utils.get_model_learning_rate(
+ FLAGS.learning_policy, FLAGS.base_learning_rate,
+ FLAGS.learning_rate_decay_step, FLAGS.learning_rate_decay_factor,
+ FLAGS.training_number_of_steps, FLAGS.learning_power,
+ FLAGS.slow_start_step, FLAGS.slow_start_learning_rate)
+ optimizer = tf.train.MomentumOptimizer(learning_rate, FLAGS.momentum)
+ summaries.add(tf.summary.scalar('learning_rate', learning_rate))
+
+ startup_delay_steps = FLAGS.task * FLAGS.startup_delay_steps
+ for variable in slim.get_model_variables():
+ summaries.add(tf.summary.histogram(variable.op.name, variable))
+
+ with tf.device(config.variables_device()):
+ total_loss, grads_and_vars = model_deploy.optimize_clones(
+ clones, optimizer)
+ total_loss = tf.check_numerics(total_loss, 'Loss is inf or nan.')
+ summaries.add(tf.summary.scalar('total_loss', total_loss))
+
+ # Modify the gradients for biases and last layer variables.
+ last_layers = model.get_extra_layer_scopes(
+ FLAGS.last_layers_contain_logits_only)
+ grad_mult = train_utils.get_model_gradient_multipliers(
+ last_layers, FLAGS.last_layer_gradient_multiplier)
+ if grad_mult:
+ grads_and_vars = slim.learning.multiply_gradients(
+ grads_and_vars, grad_mult)
+
+ # Create gradient update op.
+ grad_updates = optimizer.apply_gradients(
+ grads_and_vars, global_step=global_step)
+ update_ops.append(grad_updates)
+ update_op = tf.group(*update_ops)
+ with tf.control_dependencies([update_op]):
+ train_tensor = tf.identity(total_loss, name='train_op')
+
+ # Add the summaries from the first clone. These contain the summaries
+ # created by model_fn and either optimize_clones() or _gather_clone_loss().
+ summaries |= set(
+ tf.get_collection(tf.GraphKeys.SUMMARIES, first_clone_scope))
+
+ # Merge all summaries together.
+ summary_op = tf.summary.merge(list(summaries))
+
+  # Soft placement allows placing ops without a GPU implementation on the CPU.
+ session_config = tf.ConfigProto(
+ allow_soft_placement=True, log_device_placement=False)
+
+ # Start the training.
+ slim.learning.train(
+ train_tensor,
+ logdir=FLAGS.train_logdir,
+ log_every_n_steps=FLAGS.log_steps,
+ master=FLAGS.master,
+ number_of_steps=FLAGS.training_number_of_steps,
+ is_chief=(FLAGS.task == 0),
+ session_config=session_config,
+ startup_delay_steps=startup_delay_steps,
+ init_fn=train_utils.get_model_init_fn(
+ FLAGS.train_logdir,
+ FLAGS.tf_initial_checkpoint,
+ FLAGS.initialize_last_layer,
+ last_layers,
+ ignore_missing_vars=True),
+ summary_op=summary_op,
+ save_summaries_secs=FLAGS.save_summaries_secs,
+ save_interval_secs=FLAGS.save_interval_secs)
+
+
+if __name__ == '__main__':
+ flags.mark_flag_as_required('train_logdir')
+ flags.mark_flag_as_required('tf_initial_checkpoint')
+ flags.mark_flag_as_required('dataset_dir')
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/utils/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/utils/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/utils/get_dataset_colormap.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/utils/get_dataset_colormap.py"
new file mode 100644
index 00000000..eff9b195
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/utils/get_dataset_colormap.py"
@@ -0,0 +1,405 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Visualizes the segmentation results via specified color map.
+
+Visualizes the semantic segmentation results by the color map
+defined by the different datasets. Supported colormaps are:
+
+* ADE20K (http://groups.csail.mit.edu/vision/datasets/ADE20K/).
+
+* Cityscapes dataset (https://www.cityscapes-dataset.com).
+
+* Mapillary Vistas (https://research.mapillary.com).
+
+* PASCAL VOC 2012 (http://host.robots.ox.ac.uk/pascal/VOC/).
+"""
+
+import numpy as np
+
+# Dataset names.
+_ADE20K = 'ade20k'
+_CITYSCAPES = 'cityscapes'
+_MAPILLARY_VISTAS = 'mapillary_vistas'
+_PASCAL = 'pascal'
+
+# Max number of entries in the colormap for each dataset.
+_DATASET_MAX_ENTRIES = {
+ _ADE20K: 151,
+ _CITYSCAPES: 19,
+ _MAPILLARY_VISTAS: 66,
+ _PASCAL: 256,
+}
+
+
+def create_ade20k_label_colormap():
+ """Creates a label colormap used in ADE20K segmentation benchmark.
+
+ Returns:
+ A colormap for visualizing segmentation results.
+ """
+ return np.asarray([
+ [0, 0, 0],
+ [120, 120, 120],
+ [180, 120, 120],
+ [6, 230, 230],
+ [80, 50, 50],
+ [4, 200, 3],
+ [120, 120, 80],
+ [140, 140, 140],
+ [204, 5, 255],
+ [230, 230, 230],
+ [4, 250, 7],
+ [224, 5, 255],
+ [235, 255, 7],
+ [150, 5, 61],
+ [120, 120, 70],
+ [8, 255, 51],
+ [255, 6, 82],
+ [143, 255, 140],
+ [204, 255, 4],
+ [255, 51, 7],
+ [204, 70, 3],
+ [0, 102, 200],
+ [61, 230, 250],
+ [255, 6, 51],
+ [11, 102, 255],
+ [255, 7, 71],
+ [255, 9, 224],
+ [9, 7, 230],
+ [220, 220, 220],
+ [255, 9, 92],
+ [112, 9, 255],
+ [8, 255, 214],
+ [7, 255, 224],
+ [255, 184, 6],
+ [10, 255, 71],
+ [255, 41, 10],
+ [7, 255, 255],
+ [224, 255, 8],
+ [102, 8, 255],
+ [255, 61, 6],
+ [255, 194, 7],
+ [255, 122, 8],
+ [0, 255, 20],
+ [255, 8, 41],
+ [255, 5, 153],
+ [6, 51, 255],
+ [235, 12, 255],
+ [160, 150, 20],
+ [0, 163, 255],
+ [140, 140, 140],
+ [250, 10, 15],
+ [20, 255, 0],
+ [31, 255, 0],
+ [255, 31, 0],
+ [255, 224, 0],
+ [153, 255, 0],
+ [0, 0, 255],
+ [255, 71, 0],
+ [0, 235, 255],
+ [0, 173, 255],
+ [31, 0, 255],
+ [11, 200, 200],
+ [255, 82, 0],
+ [0, 255, 245],
+ [0, 61, 255],
+ [0, 255, 112],
+ [0, 255, 133],
+ [255, 0, 0],
+ [255, 163, 0],
+ [255, 102, 0],
+ [194, 255, 0],
+ [0, 143, 255],
+ [51, 255, 0],
+ [0, 82, 255],
+ [0, 255, 41],
+ [0, 255, 173],
+ [10, 0, 255],
+ [173, 255, 0],
+ [0, 255, 153],
+ [255, 92, 0],
+ [255, 0, 255],
+ [255, 0, 245],
+ [255, 0, 102],
+ [255, 173, 0],
+ [255, 0, 20],
+ [255, 184, 184],
+ [0, 31, 255],
+ [0, 255, 61],
+ [0, 71, 255],
+ [255, 0, 204],
+ [0, 255, 194],
+ [0, 255, 82],
+ [0, 10, 255],
+ [0, 112, 255],
+ [51, 0, 255],
+ [0, 194, 255],
+ [0, 122, 255],
+ [0, 255, 163],
+ [255, 153, 0],
+ [0, 255, 10],
+ [255, 112, 0],
+ [143, 255, 0],
+ [82, 0, 255],
+ [163, 255, 0],
+ [255, 235, 0],
+ [8, 184, 170],
+ [133, 0, 255],
+ [0, 255, 92],
+ [184, 0, 255],
+ [255, 0, 31],
+ [0, 184, 255],
+ [0, 214, 255],
+ [255, 0, 112],
+ [92, 255, 0],
+ [0, 224, 255],
+ [112, 224, 255],
+ [70, 184, 160],
+ [163, 0, 255],
+ [153, 0, 255],
+ [71, 255, 0],
+ [255, 0, 163],
+ [255, 204, 0],
+ [255, 0, 143],
+ [0, 255, 235],
+ [133, 255, 0],
+ [255, 0, 235],
+ [245, 0, 255],
+ [255, 0, 122],
+ [255, 245, 0],
+ [10, 190, 212],
+ [214, 255, 0],
+ [0, 204, 255],
+ [20, 0, 255],
+ [255, 255, 0],
+ [0, 153, 255],
+ [0, 41, 255],
+ [0, 255, 204],
+ [41, 0, 255],
+ [41, 255, 0],
+ [173, 0, 255],
+ [0, 245, 255],
+ [71, 0, 255],
+ [122, 0, 255],
+ [0, 255, 184],
+ [0, 92, 255],
+ [184, 255, 0],
+ [0, 133, 255],
+ [255, 214, 0],
+ [25, 194, 194],
+ [102, 255, 0],
+ [92, 0, 255],
+ ])
+
+
+def create_cityscapes_label_colormap():
+ """Creates a label colormap used in CITYSCAPES segmentation benchmark.
+
+ Returns:
+ A colormap for visualizing segmentation results.
+ """
+ return np.asarray([
+ [128, 64, 128],
+ [244, 35, 232],
+ [70, 70, 70],
+ [102, 102, 156],
+ [190, 153, 153],
+ [153, 153, 153],
+ [250, 170, 30],
+ [220, 220, 0],
+ [107, 142, 35],
+ [152, 251, 152],
+ [70, 130, 180],
+ [220, 20, 60],
+ [255, 0, 0],
+ [0, 0, 142],
+ [0, 0, 70],
+ [0, 60, 100],
+ [0, 80, 100],
+ [0, 0, 230],
+ [119, 11, 32],
+ ])
+
+
+def create_mapillary_vistas_label_colormap():
+ """Creates a label colormap used in Mapillary Vistas segmentation benchmark.
+
+ Returns:
+ A colormap for visualizing segmentation results.
+ """
+ return np.asarray([
+ [165, 42, 42],
+ [0, 192, 0],
+ [196, 196, 196],
+ [190, 153, 153],
+ [180, 165, 180],
+ [102, 102, 156],
+ [102, 102, 156],
+ [128, 64, 255],
+ [140, 140, 200],
+ [170, 170, 170],
+ [250, 170, 160],
+ [96, 96, 96],
+ [230, 150, 140],
+ [128, 64, 128],
+ [110, 110, 110],
+ [244, 35, 232],
+ [150, 100, 100],
+ [70, 70, 70],
+ [150, 120, 90],
+ [220, 20, 60],
+ [255, 0, 0],
+ [255, 0, 0],
+ [255, 0, 0],
+ [200, 128, 128],
+ [255, 255, 255],
+ [64, 170, 64],
+ [128, 64, 64],
+ [70, 130, 180],
+ [255, 255, 255],
+ [152, 251, 152],
+ [107, 142, 35],
+ [0, 170, 30],
+ [255, 255, 128],
+ [250, 0, 30],
+ [0, 0, 0],
+ [220, 220, 220],
+ [170, 170, 170],
+ [222, 40, 40],
+ [100, 170, 30],
+ [40, 40, 40],
+ [33, 33, 33],
+ [170, 170, 170],
+ [0, 0, 142],
+ [170, 170, 170],
+ [210, 170, 100],
+ [153, 153, 153],
+ [128, 128, 128],
+ [0, 0, 142],
+ [250, 170, 30],
+ [192, 192, 192],
+ [220, 220, 0],
+ [180, 165, 180],
+ [119, 11, 32],
+ [0, 0, 142],
+ [0, 60, 100],
+ [0, 0, 142],
+ [0, 0, 90],
+ [0, 0, 230],
+ [0, 80, 100],
+ [128, 64, 64],
+ [0, 0, 110],
+ [0, 0, 70],
+ [0, 0, 192],
+ [32, 32, 32],
+ [0, 0, 0],
+ [0, 0, 0],
+ ])
+
+
+def create_pascal_label_colormap():
+ """Creates a label colormap used in PASCAL VOC segmentation benchmark.
+
+ Returns:
+ A colormap for visualizing segmentation results.
+ """
+ colormap = np.zeros((_DATASET_MAX_ENTRIES[_PASCAL], 3), dtype=int)
+ ind = np.arange(_DATASET_MAX_ENTRIES[_PASCAL], dtype=int)
+
+ for shift in reversed(range(8)):
+ for channel in range(3):
+ colormap[:, channel] |= bit_get(ind, channel) << shift
+ ind >>= 3
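+  # For example, label 1 (binary 001) maps to [128, 0, 0], label 2 (binary
+  # 010) to [0, 128, 0], and label 5 (binary 101) to [128, 0, 128]: the three
+  # lowest bits of the label index are written into the R, G, B channels at
+  # bit position 7 on the first iteration of the loop above.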
+
+ return colormap
+
+
+def get_ade20k_name():
+ return _ADE20K
+
+
+def get_cityscapes_name():
+ return _CITYSCAPES
+
+
+def get_mapillary_vistas_name():
+ return _MAPILLARY_VISTAS
+
+
+def get_pascal_name():
+ return _PASCAL
+
+
+def bit_get(val, idx):
+ """Gets the bit value.
+
+ Args:
+ val: Input value, int or numpy int array.
+ idx: Which bit of the input val.
+
+ Returns:
+ The "idx"-th bit of input val.
+ """
+ return (val >> idx) & 1
+
+
+def create_label_colormap(dataset=_PASCAL):
+ """Creates a label colormap for the specified dataset.
+
+ Args:
+ dataset: The colormap used in the dataset.
+
+ Returns:
+ A numpy array of the dataset colormap.
+
+ Raises:
+ ValueError: If the dataset is not supported.
+ """
+ if dataset == _ADE20K:
+ return create_ade20k_label_colormap()
+ elif dataset == _CITYSCAPES:
+ return create_cityscapes_label_colormap()
+ elif dataset == _MAPILLARY_VISTAS:
+ return create_mapillary_vistas_label_colormap()
+ elif dataset == _PASCAL:
+ return create_pascal_label_colormap()
+ else:
+ raise ValueError('Unsupported dataset.')
+
+
+def label_to_color_image(label, dataset=_PASCAL):
+ """Adds color defined by the dataset colormap to the label.
+
+ Args:
+ label: A 2D array with integer type, storing the segmentation label.
+ dataset: The colormap used in the dataset.
+
+ Returns:
+    result: A 3D array of shape [height, width, 3] with integer type, where
+      each element is the color from the dataset color map indexed by the
+      corresponding element in the input label.
+
+ Raises:
+ ValueError: If label is not of rank 2 or its value is larger than color
+ map maximum entry.
+ """
+ if label.ndim != 2:
+ raise ValueError('Expect 2-D input label')
+
+ if np.max(label) >= _DATASET_MAX_ENTRIES[dataset]:
+ raise ValueError('label value too large.')
+
+ colormap = create_label_colormap(dataset)
+ return colormap[label]
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/utils/get_dataset_colormap_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/utils/get_dataset_colormap_test.py"
new file mode 100644
index 00000000..676beb11
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/utils/get_dataset_colormap_test.py"
@@ -0,0 +1,96 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for get_dataset_colormap.py."""
+
+import numpy as np
+import tensorflow as tf
+
+from deeplab.utils import get_dataset_colormap
+
+
+class VisualizationUtilTest(tf.test.TestCase):
+
+ def testBitGet(self):
+ """Test that if the returned bit value is correct."""
+ self.assertEqual(1, get_dataset_colormap.bit_get(9, 0))
+ self.assertEqual(0, get_dataset_colormap.bit_get(9, 1))
+ self.assertEqual(0, get_dataset_colormap.bit_get(9, 2))
+ self.assertEqual(1, get_dataset_colormap.bit_get(9, 3))
+
+ def testPASCALLabelColorMapValue(self):
+ """Test the getd color map value."""
+ colormap = get_dataset_colormap.create_pascal_label_colormap()
+
+ # Only test a few sampled entries in the color map.
+ self.assertTrue(np.array_equal([128., 0., 128.], colormap[5, :]))
+ self.assertTrue(np.array_equal([128., 192., 128.], colormap[23, :]))
+ self.assertTrue(np.array_equal([128., 0., 192.], colormap[37, :]))
+ self.assertTrue(np.array_equal([224., 192., 192.], colormap[127, :]))
+ self.assertTrue(np.array_equal([192., 160., 192.], colormap[175, :]))
+
+ def testLabelToPASCALColorImage(self):
+ """Test the value of the converted label value."""
+ label = np.array([[0, 16, 16], [52, 7, 52]])
+ expected_result = np.array([
+ [[0, 0, 0], [0, 64, 0], [0, 64, 0]],
+ [[0, 64, 192], [128, 128, 128], [0, 64, 192]]
+ ])
+ colored_label = get_dataset_colormap.label_to_color_image(
+ label, get_dataset_colormap.get_pascal_name())
+ self.assertTrue(np.array_equal(expected_result, colored_label))
+
+ def testUnExpectedLabelValueForLabelToPASCALColorImage(self):
+ """Raise ValueError when input value exceeds range."""
+ label = np.array([[120], [300]])
+ with self.assertRaises(ValueError):
+ get_dataset_colormap.label_to_color_image(
+ label, get_dataset_colormap.get_pascal_name())
+
+ def testUnExpectedLabelDimensionForLabelToPASCALColorImage(self):
+ """Raise ValueError if input dimension is not correct."""
+ label = np.array([120])
+ with self.assertRaises(ValueError):
+ get_dataset_colormap.label_to_color_image(
+ label, get_dataset_colormap.get_pascal_name())
+
+ def testGetColormapForUnsupportedDataset(self):
+ with self.assertRaises(ValueError):
+ get_dataset_colormap.create_label_colormap('unsupported_dataset')
+
+ def testUnExpectedLabelDimensionForLabelToADE20KColorImage(self):
+ label = np.array([250])
+ with self.assertRaises(ValueError):
+ get_dataset_colormap.label_to_color_image(
+ label, get_dataset_colormap.get_ade20k_name())
+
+ def testFirstColorInADE20KColorMap(self):
+ label = np.array([[1, 3], [10, 20]])
+ expected_result = np.array([
+ [[120, 120, 120], [6, 230, 230]],
+ [[4, 250, 7], [204, 70, 3]]
+ ])
+ colored_label = get_dataset_colormap.label_to_color_image(
+ label, get_dataset_colormap.get_ade20k_name())
+ self.assertTrue(np.array_equal(colored_label, expected_result))
+
+ def testMapillaryVistasColorMapValue(self):
+ colormap = get_dataset_colormap.create_mapillary_vistas_label_colormap()
+ self.assertTrue(np.array_equal([190, 153, 153], colormap[3, :]))
+ self.assertTrue(np.array_equal([102, 102, 156], colormap[6, :]))
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/utils/input_generator.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/utils/input_generator.py"
new file mode 100644
index 00000000..4d4d09a5
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/utils/input_generator.py"
@@ -0,0 +1,168 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Wrapper for providing semantic segmentation data."""
+
+import tensorflow as tf
+from deeplab import common
+from deeplab import input_preprocess
+
+slim = tf.contrib.slim
+
+dataset_data_provider = slim.dataset_data_provider
+
+
+def _get_data(data_provider, dataset_split):
+ """Gets data from data provider.
+
+ Args:
+ data_provider: An object of slim.data_provider.
+ dataset_split: Dataset split.
+
+ Returns:
+ image: Image Tensor.
+ label: Label Tensor storing segmentation annotations.
+ image_name: Image name.
+ height: Image height.
+ width: Image width.
+
+ Raises:
+ ValueError: Failed to find label.
+ """
+ if common.LABELS_CLASS not in data_provider.list_items():
+ raise ValueError('Failed to find labels.')
+
+ image, height, width = data_provider.get(
+ [common.IMAGE, common.HEIGHT, common.WIDTH])
+
+ # Some datasets do not contain image_name.
+ if common.IMAGE_NAME in data_provider.list_items():
+ image_name, = data_provider.get([common.IMAGE_NAME])
+ else:
+ image_name = tf.constant('')
+
+ label = None
+ if dataset_split != common.TEST_SET:
+ label, = data_provider.get([common.LABELS_CLASS])
+
+ return image, label, image_name, height, width
+
+
+def get(dataset,
+ crop_size,
+ batch_size,
+ min_resize_value=None,
+ max_resize_value=None,
+ resize_factor=None,
+ min_scale_factor=1.,
+ max_scale_factor=1.,
+ scale_factor_step_size=0,
+ num_readers=1,
+ num_threads=1,
+ dataset_split=None,
+ is_training=True,
+ model_variant=None):
+ """Gets the dataset split for semantic segmentation.
+
+  This function gets the dataset split for semantic segmentation. In
+  particular, it is a wrapper of (1) dataset_data_provider, which returns the
+  raw dataset split, (2) input_preprocess, which preprocesses the raw data,
+  and (3) the TensorFlow operation of batching the preprocessed data. The
+  output can then be used directly for training, evaluation or visualization.
+
+ Args:
+ dataset: An instance of slim Dataset.
+ crop_size: Image crop size [height, width].
+ batch_size: Batch size.
+ min_resize_value: Desired size of the smaller image side.
+ max_resize_value: Maximum allowed size of the larger image side.
+ resize_factor: Resized dimensions are multiple of factor plus one.
+ min_scale_factor: Minimum scale factor value.
+ max_scale_factor: Maximum scale factor value.
+ scale_factor_step_size: The step size from min scale factor to max scale
+ factor. The input is randomly scaled based on the value of
+ (min_scale_factor, max_scale_factor, scale_factor_step_size).
+ num_readers: Number of readers for data provider.
+ num_threads: Number of threads for batching data.
+ dataset_split: Dataset split.
+ is_training: Is training or not.
+ model_variant: Model variant (string) for choosing how to mean-subtract the
+ images. See feature_extractor.network_map for supported model variants.
+
+ Returns:
+ A dictionary of batched Tensors for semantic segmentation.
+
+ Raises:
+ ValueError: dataset_split is None, failed to find labels, or label shape
+ is not valid.
+ """
+ if dataset_split is None:
+ raise ValueError('Unknown dataset split.')
+ if model_variant is None:
+ tf.logging.warning('Please specify a model_variant. See '
+ 'feature_extractor.network_map for supported model '
+ 'variants.')
+
+ data_provider = dataset_data_provider.DatasetDataProvider(
+ dataset,
+ num_readers=num_readers,
+ num_epochs=None if is_training else 1,
+ shuffle=is_training)
+ image, label, image_name, height, width = _get_data(data_provider,
+ dataset_split)
+ if label is not None:
+ if label.shape.ndims == 2:
+ label = tf.expand_dims(label, 2)
+ elif label.shape.ndims == 3 and label.shape.dims[2] == 1:
+ pass
+ else:
+ raise ValueError('Input label shape must be [height, width], or '
+ '[height, width, 1].')
+
+ label.set_shape([None, None, 1])
+ original_image, image, label = input_preprocess.preprocess_image_and_label(
+ image,
+ label,
+ crop_height=crop_size[0],
+ crop_width=crop_size[1],
+ min_resize_value=min_resize_value,
+ max_resize_value=max_resize_value,
+ resize_factor=resize_factor,
+ min_scale_factor=min_scale_factor,
+ max_scale_factor=max_scale_factor,
+ scale_factor_step_size=scale_factor_step_size,
+ ignore_label=dataset.ignore_label,
+ is_training=is_training,
+ model_variant=model_variant)
+ sample = {
+ common.IMAGE: image,
+ common.IMAGE_NAME: image_name,
+ common.HEIGHT: height,
+ common.WIDTH: width
+ }
+ if label is not None:
+ sample[common.LABEL] = label
+
+ if not is_training:
+ # Original image is only used during visualization.
+    sample[common.ORIGINAL_IMAGE] = original_image
+ num_threads = 1
+
+ return tf.train.batch(
+ sample,
+ batch_size=batch_size,
+ num_threads=num_threads,
+ capacity=32 * batch_size,
+ allow_smaller_final_batch=not is_training,
+ dynamic_pad=True)
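+
+
+# Example usage (illustrative; mirrors the call in train.py):
+#   samples = get(dataset, crop_size=[513, 513], batch_size=8,
+#                 min_scale_factor=0.5, max_scale_factor=2.0,
+#                 scale_factor_step_size=0.25, dataset_split='train',
+#                 is_training=True, model_variant='xception_65')
+#   images, labels = samples[common.IMAGE], samples[common.LABEL]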
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/utils/save_annotation.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/utils/save_annotation.py"
new file mode 100644
index 00000000..9f3c7e78
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/utils/save_annotation.py"
@@ -0,0 +1,52 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Saves an annotation as one png image.
+
+This script saves an annotation as one png image, and has the option to add
+colormap to the png image for better visualization.
+"""
+
+import numpy as np
+import PIL.Image as img
+import tensorflow as tf
+
+from deeplab.utils import get_dataset_colormap
+
+
+def save_annotation(label,
+ save_dir,
+ filename,
+ add_colormap=True,
+ colormap_type=get_dataset_colormap.get_pascal_name()):
+ """Saves the given label to image on disk.
+
+ Args:
+ label: The numpy array to be saved. The data will be converted
+ to uint8 and saved as png image.
+ save_dir: The directory to which the results will be saved.
+ filename: The image filename.
+ add_colormap: Add color map to the label or not.
+ colormap_type: Colormap type for visualization.
+ """
+ # Add colormap for visualizing the prediction.
+ if add_colormap:
+ colored_label = get_dataset_colormap.label_to_color_image(
+ label, colormap_type)
+ else:
+ colored_label = label
+
+ pil_image = img.fromarray(colored_label.astype(dtype=np.uint8))
+ with tf.gfile.Open('%s/%s.png' % (save_dir, filename), mode='w') as f:
+ pil_image.save(f, 'PNG')
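+
+
+# Example usage (illustrative; the output directory is a placeholder):
+#   save_annotation(seg_map, '/tmp/vis', 'image_001', add_colormap=True,
+#                   colormap_type=get_dataset_colormap.get_pascal_name())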
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/utils/train_utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/utils/train_utils.py"
new file mode 100644
index 00000000..4eeffb1e
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/utils/train_utils.py"
@@ -0,0 +1,211 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Utility functions for training."""
+
+import six
+
+import tensorflow as tf
+from deeplab.core import preprocess_utils
+
+slim = tf.contrib.slim
+
+
+def add_softmax_cross_entropy_loss_for_each_scale(scales_to_logits,
+ labels,
+ num_classes,
+ ignore_label,
+ loss_weight=1.0,
+ upsample_logits=True,
+ scope=None):
+ """Adds softmax cross entropy loss for logits of each scale.
+
+ Args:
+ scales_to_logits: A map from logits names for different scales to logits.
+ The logits have shape [batch, logits_height, logits_width, num_classes].
+ labels: Groundtruth labels with shape [batch, image_height, image_width, 1].
+ num_classes: Integer, number of target classes.
+ ignore_label: Integer, label to ignore.
+ loss_weight: Float, loss weight.
+ upsample_logits: Boolean, upsample logits or not.
+ scope: String, the scope for the loss.
+
+ Raises:
+ ValueError: Label or logits is None.
+ """
+ if labels is None:
+ raise ValueError('No label for softmax cross entropy loss.')
+
+ for scale, logits in six.iteritems(scales_to_logits):
+ loss_scope = None
+ if scope:
+ loss_scope = '%s_%s' % (scope, scale)
+
+ if upsample_logits:
+ # Label is not downsampled, and instead we upsample logits.
+ logits = tf.image.resize_bilinear(
+ logits,
+ preprocess_utils.resolve_shape(labels, 4)[1:3],
+ align_corners=True)
+ scaled_labels = labels
+ else:
+ # Label is downsampled to the same size as logits.
+ scaled_labels = tf.image.resize_nearest_neighbor(
+ labels,
+ preprocess_utils.resolve_shape(logits, 4)[1:3],
+ align_corners=True)
+
+ scaled_labels = tf.reshape(scaled_labels, shape=[-1])
+ not_ignore_mask = tf.to_float(tf.not_equal(scaled_labels,
+ ignore_label)) * loss_weight
+ one_hot_labels = slim.one_hot_encoding(
+ scaled_labels, num_classes, on_value=1.0, off_value=0.0)
+ tf.losses.softmax_cross_entropy(
+ one_hot_labels,
+ tf.reshape(logits, shape=[-1, num_classes]),
+ weights=not_ignore_mask,
+ scope=loss_scope)
+
+
+def get_model_init_fn(train_logdir,
+ tf_initial_checkpoint,
+ initialize_last_layer,
+ last_layers,
+ ignore_missing_vars=False):
+ """Gets the function initializing model variables from a checkpoint.
+
+ Args:
+ train_logdir: Log directory for training.
+ tf_initial_checkpoint: TensorFlow checkpoint for initialization.
+ initialize_last_layer: Initialize last layer or not.
+ last_layers: Last layers of the model.
+ ignore_missing_vars: Ignore missing variables in the checkpoint.
+
+ Returns:
+ Initialization function.
+ """
+ if tf_initial_checkpoint is None:
+ tf.logging.info('Not initializing the model from a checkpoint.')
+ return None
+
+ if tf.train.latest_checkpoint(train_logdir):
+ tf.logging.info('Ignoring initialization; other checkpoint exists')
+ return None
+
+ tf.logging.info('Initializing model from path: %s', tf_initial_checkpoint)
+
+ # Variables that will not be restored.
+ exclude_list = ['global_step']
+ if not initialize_last_layer:
+ exclude_list.extend(last_layers)
+
+ variables_to_restore = slim.get_variables_to_restore(exclude=exclude_list)
+
+ if variables_to_restore:
+ return slim.assign_from_checkpoint_fn(
+ tf_initial_checkpoint,
+ variables_to_restore,
+ ignore_missing_vars=ignore_missing_vars)
+ return None
+
+
+def get_model_gradient_multipliers(last_layers, last_layer_gradient_multiplier):
+ """Gets the gradient multipliers.
+
+  The gradient multipliers will adjust the learning rates for model
+  variables. For the task of semantic segmentation, the models are
+  usually fine-tuned from models trained on the task of image
+  classification. To fine-tune the models, we usually set a larger (e.g.,
+  10 times larger) learning rate for the parameters of the last layers.
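+  For example, with last_layer_gradient_multiplier = 10, a last-layer weight
+  variable receives multiplier 10, a last-layer bias receives 20, and any
+  other bias receives 2.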
+
+ Args:
+ last_layers: Scopes of last layers.
+ last_layer_gradient_multiplier: The gradient multiplier for last layers.
+
+ Returns:
+ The gradient multiplier map with variables as key, and multipliers as value.
+ """
+ gradient_multipliers = {}
+
+ for var in slim.get_model_variables():
+ # Double the learning rate for biases.
+ if 'biases' in var.op.name:
+ gradient_multipliers[var.op.name] = 2.
+
+ # Use larger learning rate for last layer variables.
+ for layer in last_layers:
+ if layer in var.op.name and 'biases' in var.op.name:
+ gradient_multipliers[var.op.name] = 2 * last_layer_gradient_multiplier
+ break
+ elif layer in var.op.name:
+ gradient_multipliers[var.op.name] = last_layer_gradient_multiplier
+ break
+
+ return gradient_multipliers
+
+
+def get_model_learning_rate(
+ learning_policy, base_learning_rate, learning_rate_decay_step,
+ learning_rate_decay_factor, training_number_of_steps, learning_power,
+ slow_start_step, slow_start_learning_rate):
+ """Gets model's learning rate.
+
+  Computes the model's learning rate for different learning policies.
+ Right now, only "step" and "poly" are supported.
+ (1) The learning policy for "step" is computed as follows:
+ current_learning_rate = base_learning_rate *
+ learning_rate_decay_factor ^ (global_step / learning_rate_decay_step)
+ See tf.train.exponential_decay for details.
+ (2) The learning policy for "poly" is computed as follows:
+ current_learning_rate = base_learning_rate *
+ (1 - global_step / training_number_of_steps) ^ learning_power
+
+ Args:
+ learning_policy: Learning rate policy for training.
+ base_learning_rate: The base learning rate for model training.
+ learning_rate_decay_step: Decay the base learning rate at a fixed step.
+ learning_rate_decay_factor: The rate to decay the base learning rate.
+ training_number_of_steps: Number of steps for training.
+ learning_power: Power used for 'poly' learning policy.
+    slow_start_step: Number of initial training steps during which the small
+      slow-start learning rate is employed.
+ slow_start_learning_rate: The learning rate employed during slow start.
+
+ Returns:
+ Learning rate for the specified learning policy.
+
+ Raises:
+ ValueError: If learning policy is not recognized.
+ """
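+  # Example with illustrative numbers: under the 'poly' policy with
+  # base_learning_rate=0.007, training_number_of_steps=30000 and
+  # learning_power=0.9, the learning rate at global_step=15000 is
+  # 0.007 * (1 - 15000 / 30000) ** 0.9 ~= 3.75e-3 (before the slow-start
+  # adjustment below).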
+ global_step = tf.train.get_or_create_global_step()
+ if learning_policy == 'step':
+ learning_rate = tf.train.exponential_decay(
+ base_learning_rate,
+ global_step,
+ learning_rate_decay_step,
+ learning_rate_decay_factor,
+ staircase=True)
+ elif learning_policy == 'poly':
+ learning_rate = tf.train.polynomial_decay(
+ base_learning_rate,
+ global_step,
+ training_number_of_steps,
+ end_learning_rate=0,
+ power=learning_power)
+ else:
+ raise ValueError('Unknown learning policy.')
+
+  # Employ the small slow-start learning rate during the first few steps.
+ return tf.where(global_step < slow_start_step, slow_start_learning_rate,
+ learning_rate)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/vis.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/vis.py"
new file mode 100644
index 00000000..9f75d104
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/deeplab/vis.py"
@@ -0,0 +1,320 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Segmentation results visualization on a given set of images.
+
+See model.py for more details and usage.
+"""
+import math
+import os.path
+import time
+import numpy as np
+import tensorflow as tf
+from deeplab import common
+from deeplab import model
+from deeplab.datasets import segmentation_dataset
+from deeplab.utils import input_generator
+from deeplab.utils import save_annotation
+
+slim = tf.contrib.slim
+
+flags = tf.app.flags
+
+FLAGS = flags.FLAGS
+
+flags.DEFINE_string('master', '', 'BNS name of the tensorflow server')
+
+# Settings for log directories.
+
+flags.DEFINE_string('vis_logdir', None, 'Where to write the event logs.')
+
+flags.DEFINE_string('checkpoint_dir', None, 'Directory of model checkpoints.')
+
+# Settings for visualizing the model.
+
+flags.DEFINE_integer('vis_batch_size', 1,
+ 'The number of images in each batch during evaluation.')
+
+flags.DEFINE_multi_integer('vis_crop_size', [513, 513],
+ 'Crop size [height, width] for visualization.')
+
+flags.DEFINE_integer('eval_interval_secs', 60 * 5,
+ 'How often (in seconds) to run evaluation.')
+
+# For `xception_65`, use atrous_rates = [12, 24, 36] if output_stride = 8, or
+# rates = [6, 12, 18] if output_stride = 16. For `mobilenet_v2`, use None. Note
+# one could use different atrous_rates/output_stride during training/evaluation.
+flags.DEFINE_multi_integer('atrous_rates', None,
+ 'Atrous rates for atrous spatial pyramid pooling.')
+
+flags.DEFINE_integer('output_stride', 16,
+ 'The ratio of input to output spatial resolution.')
+
+# Change to [0.5, 0.75, 1.0, 1.25, 1.5, 1.75] for multi-scale test.
+flags.DEFINE_multi_float('eval_scales', [1.0],
+ 'The scales to resize images for evaluation.')
+
+# Change to True for adding flipped images during test.
+flags.DEFINE_bool('add_flipped_images', False,
+ 'Add flipped images for evaluation or not.')
+
+# Dataset settings.
+
+flags.DEFINE_string('dataset', 'pascal_voc_seg',
+ 'Name of the segmentation dataset.')
+
+flags.DEFINE_string('vis_split', 'val',
+                    'Which split of the dataset is used for visualizing results.')
+
+flags.DEFINE_string('dataset_dir', None, 'Where the dataset resides.')
+
+flags.DEFINE_enum('colormap_type', 'pascal', ['pascal', 'cityscapes'],
+ 'Visualization colormap type.')
+
+flags.DEFINE_boolean('also_save_raw_predictions', False,
+ 'Also save raw predictions.')
+
+flags.DEFINE_integer('max_number_of_iterations', 0,
+ 'Maximum number of visualization iterations. Will loop '
+ 'indefinitely upon nonpositive values.')
+
+# The folder where semantic segmentation predictions are saved.
+_SEMANTIC_PREDICTION_SAVE_FOLDER = 'segmentation_results'
+
+# The folder where raw semantic segmentation predictions are saved.
+_RAW_SEMANTIC_PREDICTION_SAVE_FOLDER = 'raw_segmentation_results'
+
+# The format to save image.
+_IMAGE_FORMAT = '%06d_image'
+
+# The format to save prediction
+_PREDICTION_FORMAT = '%06d_prediction'
+
+# To evaluate Cityscapes results on the evaluation server, the labels used
+# during training should be mapped to the labels for evaluation.
+_CITYSCAPES_TRAIN_ID_TO_EVAL_ID = [7, 8, 11, 12, 13, 17, 19, 20, 21, 22,
+ 23, 24, 25, 26, 27, 28, 31, 32, 33]
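+# For example, with this mapping a predicted train id of 0 is converted to
+# eval id 7, and a train id of 18 is converted to eval id 33.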
+
+
+def _convert_train_id_to_eval_id(prediction, train_id_to_eval_id):
+ """Converts the predicted label for evaluation.
+
+ There are cases where the training labels are not equal to the evaluation
+ labels. This function is used to perform the conversion so that we could
+ evaluate the results on the evaluation server.
+
+ Args:
+ prediction: Semantic segmentation prediction.
+ train_id_to_eval_id: A list mapping from train id to evaluation id.
+
+ Returns:
+ Semantic segmentation prediction whose labels have been changed.
+ """
+ converted_prediction = prediction.copy()
+ for train_id, eval_id in enumerate(train_id_to_eval_id):
+ converted_prediction[prediction == train_id] = eval_id
+
+ return converted_prediction
+
+
+def _process_batch(sess, original_images, semantic_predictions, image_names,
+ image_heights, image_widths, image_id_offset, save_dir,
+ raw_save_dir, train_id_to_eval_id=None):
+ """Evaluates one single batch qualitatively.
+
+ Args:
+ sess: TensorFlow session.
+ original_images: One batch of original images.
+ semantic_predictions: One batch of semantic segmentation predictions.
+ image_names: Image names.
+ image_heights: Image heights.
+ image_widths: Image widths.
+ image_id_offset: Image id offset for indexing images.
+ save_dir: The directory where the predictions will be saved.
+ raw_save_dir: The directory where the raw predictions will be saved.
+ train_id_to_eval_id: A list mapping from train id to eval id.
+ """
+ (original_images,
+ semantic_predictions,
+ image_names,
+ image_heights,
+ image_widths) = sess.run([original_images, semantic_predictions,
+ image_names, image_heights, image_widths])
+
+ num_image = semantic_predictions.shape[0]
+ for i in range(num_image):
+ image_height = np.squeeze(image_heights[i])
+ image_width = np.squeeze(image_widths[i])
+ original_image = np.squeeze(original_images[i])
+ semantic_prediction = np.squeeze(semantic_predictions[i])
+ crop_semantic_prediction = semantic_prediction[:image_height, :image_width]
+
+ # Save image.
+ save_annotation.save_annotation(
+ original_image, save_dir, _IMAGE_FORMAT % (image_id_offset + i),
+ add_colormap=False)
+
+ # Save prediction.
+ save_annotation.save_annotation(
+ crop_semantic_prediction, save_dir,
+ _PREDICTION_FORMAT % (image_id_offset + i), add_colormap=True,
+ colormap_type=FLAGS.colormap_type)
+
+ if FLAGS.also_save_raw_predictions:
+ image_filename = os.path.basename(image_names[i])
+
+ if train_id_to_eval_id is not None:
+ crop_semantic_prediction = _convert_train_id_to_eval_id(
+ crop_semantic_prediction,
+ train_id_to_eval_id)
+ save_annotation.save_annotation(
+ crop_semantic_prediction, raw_save_dir, image_filename,
+ add_colormap=False)
+
+
+def main(unused_argv):
+ tf.logging.set_verbosity(tf.logging.INFO)
+ # Get dataset-dependent information.
+ dataset = segmentation_dataset.get_dataset(
+ FLAGS.dataset, FLAGS.vis_split, dataset_dir=FLAGS.dataset_dir)
+ train_id_to_eval_id = None
+ if dataset.name == segmentation_dataset.get_cityscapes_dataset_name():
+ tf.logging.info('Cityscapes requires converting train_id to eval_id.')
+ train_id_to_eval_id = _CITYSCAPES_TRAIN_ID_TO_EVAL_ID
+
+ # Prepare for visualization.
+ tf.gfile.MakeDirs(FLAGS.vis_logdir)
+ save_dir = os.path.join(FLAGS.vis_logdir, _SEMANTIC_PREDICTION_SAVE_FOLDER)
+ tf.gfile.MakeDirs(save_dir)
+ raw_save_dir = os.path.join(
+ FLAGS.vis_logdir, _RAW_SEMANTIC_PREDICTION_SAVE_FOLDER)
+ tf.gfile.MakeDirs(raw_save_dir)
+
+ tf.logging.info('Visualizing on %s set', FLAGS.vis_split)
+
+ g = tf.Graph()
+ with g.as_default():
+ samples = input_generator.get(dataset,
+ FLAGS.vis_crop_size,
+ FLAGS.vis_batch_size,
+ min_resize_value=FLAGS.min_resize_value,
+ max_resize_value=FLAGS.max_resize_value,
+ resize_factor=FLAGS.resize_factor,
+ dataset_split=FLAGS.vis_split,
+ is_training=False,
+ model_variant=FLAGS.model_variant)
+
+ model_options = common.ModelOptions(
+ outputs_to_num_classes={common.OUTPUT_TYPE: dataset.num_classes},
+ crop_size=FLAGS.vis_crop_size,
+ atrous_rates=FLAGS.atrous_rates,
+ output_stride=FLAGS.output_stride)
+
+ if tuple(FLAGS.eval_scales) == (1.0,):
+ tf.logging.info('Performing single-scale test.')
+ predictions = model.predict_labels(
+ samples[common.IMAGE],
+ model_options=model_options,
+ image_pyramid=FLAGS.image_pyramid)
+ else:
+ tf.logging.info('Performing multi-scale test.')
+ predictions = model.predict_labels_multi_scale(
+ samples[common.IMAGE],
+ model_options=model_options,
+ eval_scales=FLAGS.eval_scales,
+ add_flipped_images=FLAGS.add_flipped_images)
+ predictions = predictions[common.OUTPUT_TYPE]
+
+ if FLAGS.min_resize_value and FLAGS.max_resize_value:
+      # Only batch_size = 1 is supported, since we assume the dimensions of the
+      # original image after tf.squeeze are [height, width, 3].
+ assert FLAGS.vis_batch_size == 1
+
+ # Reverse the resizing and padding operations performed in preprocessing.
+      # First, we slice out the valid region (i.e., remove the padded region),
+      # and then we resize the predictions back.
+ original_image = tf.squeeze(samples[common.ORIGINAL_IMAGE])
+ original_image_shape = tf.shape(original_image)
+ predictions = tf.slice(
+ predictions,
+ [0, 0, 0],
+ [1, original_image_shape[0], original_image_shape[1]])
+ resized_shape = tf.to_int32([tf.squeeze(samples[common.HEIGHT]),
+ tf.squeeze(samples[common.WIDTH])])
+ predictions = tf.squeeze(
+ tf.image.resize_images(tf.expand_dims(predictions, 3),
+ resized_shape,
+ method=tf.image.ResizeMethod.NEAREST_NEIGHBOR,
+ align_corners=True), 3)
+
+ tf.train.get_or_create_global_step()
+ saver = tf.train.Saver(slim.get_variables_to_restore())
+ sv = tf.train.Supervisor(graph=g,
+ logdir=FLAGS.vis_logdir,
+ init_op=tf.global_variables_initializer(),
+ summary_op=None,
+ summary_writer=None,
+ global_step=None,
+ saver=saver)
+ num_batches = int(math.ceil(
+ dataset.num_samples / float(FLAGS.vis_batch_size)))
+ last_checkpoint = None
+
+    # Loop to visualize the results whenever a new checkpoint is created.
+ num_iters = 0
+ while (FLAGS.max_number_of_iterations <= 0 or
+ num_iters < FLAGS.max_number_of_iterations):
+ num_iters += 1
+ last_checkpoint = slim.evaluation.wait_for_new_checkpoint(
+ FLAGS.checkpoint_dir, last_checkpoint)
+ start = time.time()
+ tf.logging.info(
+ 'Starting visualization at ' + time.strftime('%Y-%m-%d-%H:%M:%S',
+ time.gmtime()))
+ tf.logging.info('Visualizing with model %s', last_checkpoint)
+
+ with sv.managed_session(FLAGS.master,
+ start_standard_services=False) as sess:
+ sv.start_queue_runners(sess)
+ sv.saver.restore(sess, last_checkpoint)
+
+ image_id_offset = 0
+ for batch in range(num_batches):
+ tf.logging.info('Visualizing batch %d / %d', batch + 1, num_batches)
+ _process_batch(sess=sess,
+ original_images=samples[common.ORIGINAL_IMAGE],
+ semantic_predictions=predictions,
+ image_names=samples[common.IMAGE_NAME],
+ image_heights=samples[common.HEIGHT],
+ image_widths=samples[common.WIDTH],
+ image_id_offset=image_id_offset,
+ save_dir=save_dir,
+ raw_save_dir=raw_save_dir,
+ train_id_to_eval_id=train_id_to_eval_id)
+ image_id_offset += FLAGS.vis_batch_size
+
+ tf.logging.info(
+ 'Finished visualization at ' + time.strftime('%Y-%m-%d-%H:%M:%S',
+ time.gmtime()))
+ time_to_next_eval = start + FLAGS.eval_interval_secs - time.time()
+ if time_to_next_eval > 0:
+ time.sleep(time_to_next_eval)
+
+
+if __name__ == '__main__':
+ flags.mark_flag_as_required('checkpoint_dir')
+ flags.mark_flag_as_required('vis_logdir')
+ flags.mark_flag_as_required('dataset_dir')
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/.gitignore" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/.gitignore"
new file mode 100644
index 00000000..b61ddd10
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/.gitignore"
@@ -0,0 +1,4 @@
+*pyc
+*~
+*pb2.py
+*pb2.pyc
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/EXTRACTION_MATCHING.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/EXTRACTION_MATCHING.md"
new file mode 100644
index 00000000..399eb730
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/EXTRACTION_MATCHING.md"
@@ -0,0 +1,67 @@
+## Quick start: DELF extraction and matching
+
+To illustrate DELF usage, please download the Oxford buildings dataset. To
+follow these instructions closely, please download the dataset to the
+`tensorflow/models/research/delf/delf/python/examples` directory, as in the
+following commands:
+
+```bash
+# From tensorflow/models/research/delf/delf/python/examples/
+mkdir data && cd data
+wget http://www.robots.ox.ac.uk/~vgg/data/oxbuildings/oxbuild_images.tgz
+mkdir oxford5k_images oxford5k_features
+tar -xvzf oxbuild_images.tgz -C oxford5k_images/
+cd ../
+echo data/oxford5k_images/hertford_000056.jpg >> list_images.txt
+echo data/oxford5k_images/oxford_000317.jpg >> list_images.txt
+```
+
+Also, you will need to download the trained DELF model:
+
+```bash
+# From tensorflow/models/research/delf/delf/python/examples/
+mkdir parameters && cd parameters
+wget http://download.tensorflow.org/models/delf_v1_20171026.tar.gz
+tar -xvzf delf_v1_20171026.tar.gz
+```
+
+### DELF feature extraction
+
+Now that you have everything in place, running this command should extract DELF
+features for the images `hertford_000056.jpg` and `oxford_000317.jpg`:
+
+```bash
+# From tensorflow/models/research/delf/delf/python/examples/
+python extract_features.py \
+ --config_path delf_config_example.pbtxt \
+ --list_images_path list_images.txt \
+ --output_dir data/oxford5k_features
+```
+
+### Image matching using DELF features
+
+After feature extraction, run this command to perform feature matching between
+the images `hertford_000056.jpg` and `oxford_000317.jpg`:
+
+```bash
+python match_images.py \
+ --image_1_path data/oxford5k_images/hertford_000056.jpg \
+ --image_2_path data/oxford5k_images/oxford_000317.jpg \
+ --features_1_path data/oxford5k_features/hertford_000056.delf \
+ --features_2_path data/oxford5k_features/oxford_000317.delf \
+ --output_image matched_images.png
+```
+
+The image `matched_images.png` is generated and should look similar to this one:
+
+
+
+### Troubleshooting
+
+#### `matplotlib`
+
+`matplotlib` may complain with a message such as `no display name and no
+$DISPLAY environment variable`. To fix this, one option is to add the line
+`backend : Agg` to the file `.config/matplotlib/matplotlibrc`. For more on this
+problem, see the discussion
+[here](https://stackoverflow.com/questions/37604289/tkinter-tclerror-no-display-name-and-no-display-environment-variable).
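+
+If editing `matplotlibrc` is not convenient, a minimal sketch of the same fix
+done programmatically (before any plotting code runs) is:
+
+```python
+# Select the non-interactive Agg backend, so no $DISPLAY is needed.
+import matplotlib
+matplotlib.use('Agg')
+
+import matplotlib.pyplot as plt  # Safe to import once the backend is set.
+```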
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/INSTALL_INSTRUCTIONS.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/INSTALL_INSTRUCTIONS.md"
new file mode 100644
index 00000000..0b47f2f8
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/INSTALL_INSTRUCTIONS.md"
@@ -0,0 +1,98 @@
+## DELF installation
+
+### Tensorflow
+
+For detailed steps to install Tensorflow, follow the [Tensorflow installation
+instructions](https://www.tensorflow.org/install/). A typical user can install
+Tensorflow using one of the following commands:
+
+```bash
+# For CPU:
+pip install tensorflow
+# For GPU:
+pip install tensorflow-gpu
+```
+
+### Protobuf
+
+The DELF library uses [protobuf](https://github.com/google/protobuf) (the python
+version) to configure feature extraction and its format. You will need the
+`protoc` compiler, version >= 3.3. The easiest way to get it is to download a
+pre-built binary directly. For Linux, this can be done as follows (see
+[here](https://github.com/google/protobuf/releases) for other platforms):
+
+```bash
+wget https://github.com/google/protobuf/releases/download/v3.3.0/protoc-3.3.0-linux-x86_64.zip
+unzip protoc-3.3.0-linux-x86_64.zip
+PATH_TO_PROTOC=`pwd`
+```
+
+### Python dependencies
+
+Install python library dependencies:
+
+```bash
+sudo pip install matplotlib
+sudo pip install numpy
+sudo pip install scikit-image
+sudo pip install scipy
+```
+
+### `tensorflow/models`
+
+Now, clone `tensorflow/models` and install the required libraries (note that
+the `object_detection` library requires you to add `tensorflow/models/research/`
+to your `PYTHONPATH`, as instructed
+[here](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md)):
+
+```bash
+git clone https://github.com/tensorflow/models
+
+# First, install slim's "nets" package.
+cd models/research/slim/
+sudo pip install -e .
+
+# Second, setup the object_detection module by editing PYTHONPATH.
+cd ..
+# From tensorflow/models/research/
+export PYTHONPATH=$PYTHONPATH:`pwd`
+```
+
+Then, compile DELF's protobufs. Use `PATH_TO_PROTOC` as the directory where you
+downloaded the `protoc` compiler.
+
+```bash
+# From tensorflow/models/research/delf/
+${PATH_TO_PROTOC?}/bin/protoc delf/protos/*.proto --python_out=.
+```
+
+Finally, install the DELF package.
+
+```bash
+# From tensorflow/models/research/delf/
+sudo pip install -e . # Install "delf" package.
+```
+
+At this point, running
+
+```bash
+python -c 'import delf'
+```
+
+should just return without complaints. This indicates that the DELF package is
+loaded successfully.
+
+### Troubleshooting
+
+#### Python version
+
+Installation issues may happen if multiple Python versions are mixed. The
+instructions above assume Python 2.7 is used; if you are using Python 3.X, be
+sure to use `pip3` instead of `pip`, and everything should work.
+
+#### `pip install`
+
+Issues might be observed when using `pip install` with the `-e` option (editable
+mode). You may try simply removing the `-e` from the commands above. Also,
+depending on your machine setup, you might need to run the `pip install`
+commands without `sudo`.
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/README.md"
new file mode 100644
index 00000000..56032a47
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/README.md"
@@ -0,0 +1,103 @@
+# DELF: DEep Local Features
+
+This project presents code for extracting DELF features, which were introduced
+with the paper ["Large-Scale Image Retrieval with Attentive Deep Local
+Features"](https://arxiv.org/abs/1612.06321). A simple application is also
+illustrated, where two images containing the same landmark can be matched to
+each other, to obtain local image correspondences.
+
+DELF is particularly useful for large-scale instance-level image recognition. It
+detects and describes semantic local features which can be geometrically
+verified between images showing the same object instance. The pre-trained model
+released here has been optimized for landmark recognition, so expect it to work
+well in this area. We also provide tensorflow code for building the DELF model,
+which could then be used to train models for other types of objects.
+
+If you make use of this code, please consider citing:
+
+```
+"Large-Scale Image Retrieval with Attentive Deep Local Features",
+Hyeonwoo Noh, Andre Araujo, Jack Sim, Tobias Weyand, Bohyung Han,
+Proc. ICCV'17
+```
+
+## News
+
+- DELF achieved state-of-the-art results in a CVPR'18 image retrieval paper:
+ [Radenovic et al., "Revisiting Oxford and Paris: Large-Scale Image Retrieval
+ Benchmarking"](https://arxiv.org/abs/1803.11285).
+- DELF was featured in
+ [ModelDepot](https://modeldepot.io/mikeshi/delf/overview)
+- DELF is now available in
+ [TF-Hub](https://www.tensorflow.org/hub/modules/google/delf/1)
+
+## Dataset
+
+The Google-Landmarks dataset has been released as part of two Kaggle challenges:
+[Landmark Recognition](https://www.kaggle.com/c/landmark-recognition-challenge)
+and [Landmark Retrieval](https://www.kaggle.com/c/landmark-retrieval-challenge).
+If you make use of the dataset in your research, please consider citing the
+paper mentioned above.
+
+## Installation
+
+To be able to use this code, please follow [these
+instructions](INSTALL_INSTRUCTIONS.md) to properly install the DELF library.
+
+## Quick start: DELF extraction and matching
+
+Please follow [these instructions](EXTRACTION_MATCHING.md). At the end, you
+should obtain a nice figure showing local feature matches, such as the following:
+
+
+
+## Code overview
+
+DELF's code is located under the `delf` directory. There are two directories
+therein, `protos` and `python`.
+
+### `delf/protos`
+
+This directory contains three protobufs:
+
+- `datum.proto`: general-purpose protobuf for serializing float tensors.
+- `feature.proto`: protobuf for serializing DELF features.
+- `delf_config.proto`: protobuf for configuring DELF extraction.
+
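+Since the protos are compiled into python modules during installation, a
+`DelfConfig` can also be built programmatically. A minimal sketch (the path and
+values below are placeholders for illustration; see `delf_config_example.pbtxt`
+for a complete configuration):
+
+```python
+from delf import delf_config_pb2
+
+config = delf_config_pb2.DelfConfig()
+config.model_path = 'parameters/delf_v1_20171026/model'  # Placeholder path.
+config.image_scales.extend([0.25, 0.5, 1.0, 2.0])
+config.delf_local_config.use_pca = True
+config.delf_local_config.max_feature_num = 1000
+```
+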
+### `delf/python`
+
+This directory contains files for several different purposes:
+
+- `datum_io.py`, `feature_io.py` are helper files for reading and writing
+ tensors and features.
+- `delf_v1.py` contains the code to create DELF models.
+- `feature_extractor.py` contains the code to extract features using DELF.
+ This is particularly useful for extracting features over multiple scales,
+ with keypoint selection based on attention scores, and PCA/whitening
+ post-processing.
+
+Besides these, other files in this directory contain tests for different
+modules.
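+
+As an illustration of the I/O helpers above, `datum_io` can round-trip a float
+tensor through the DatumProto format on disk. A minimal sketch (the file path
+is an arbitrary example):
+
+```python
+import numpy as np
+from delf import datum_io
+
+data = np.arange(6, dtype=np.float32).reshape(3, 2)
+datum_io.WriteToFile(data, '/tmp/example.datum')         # Serialize to file.
+restored = datum_io.ReadFromFile('/tmp/example.datum')   # Back to a numpy array.
+assert np.allclose(restored, data)
+```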
+
+The subdirectory `delf/python/examples` contains sample scripts to run DELF
+feature extraction and matching:
+
+- `extract_features.py` enables DELF extraction from a list of images.
+- `match_images.py` supports image matching using DELF features extracted
+ using `extract_features.py`.
+- `delf_config_example.pbtxt` shows an example instantiation of the DelfConfig
+ proto, used for DELF feature extraction.
+
+## Maintainers
+
+André Araujo (@andrefaraujo)
+
+## Release history
+
+### October 26, 2017
+
+Initial release containing DELF-v1 code, including feature extraction and
+matching examples.
+
+**Thanks to contributors**: André Araujo, Hyeonwoo Noh, Youlong Cheng,
+Jack Sim.
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/__init__.py"
new file mode 100644
index 00000000..2c20f4c2
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/__init__.py"
@@ -0,0 +1,29 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Module to extract deep local features."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# pylint: disable=unused-import
+from delf.protos import datum_pb2
+from delf.protos import delf_config_pb2
+from delf.protos import feature_pb2
+from delf.python import datum_io
+from delf.python import delf_v1
+from delf.python import feature_extractor
+from delf.python import feature_io
+# pylint: enable=unused-import
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/protos/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/protos/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/protos/datum.proto" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/protos/datum.proto"
new file mode 100644
index 00000000..66960dfc
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/protos/datum.proto"
@@ -0,0 +1,53 @@
+// Protocol buffer for serializing arbitrary float tensors.
+// Note: Currently only floating point features are supported.
+
+syntax = "proto2";
+
+package delf.protos;
+
+// A DatumProto is a data structure used to serialize a tensor with arbitrary
+// shape. A DatumProto contains an array of floating point values and its shape
+// is represented as a sequence of integer values. Values are stored in
+// row-major order.
+//
+// Example:
+// 3 x 2 array
+//
+// [1.1, 2.2]
+// [3.3, 4.4]
+// [5.5, 6.6]
+//
+// can be represented with the following DatumProto:
+//
+// DatumProto {
+// shape {
+// dim: 3
+// dim: 2
+// }
+// float_list {
+// value: 1.1
+// value: 2.2
+// value: 3.3
+// value: 4.4
+// value: 5.5
+// value: 6.6
+// }
+// }
+
+// DatumShape is the array of dimensions of the tensor.
+message DatumShape {
+ repeated int64 dim = 1 [packed = true];
+}
+
+// FloatList is the container of tensor values. The tensor values are saved as
+// a list of floating point values.
+message FloatList {
+ repeated float value = 1 [packed = true];
+}
+
+message DatumProto {
+ optional DatumShape shape = 1;
+ oneof kind_oneof {
+ FloatList float_list = 2;
+ }
+}
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/protos/delf_config.proto" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/protos/delf_config.proto"
new file mode 100644
index 00000000..40aaaf73
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/protos/delf_config.proto"
@@ -0,0 +1,64 @@
+// Protocol buffer for configuring DELF feature extraction.
+
+syntax = "proto2";
+
+package delf.protos;
+
+message DelfPcaParameters {
+ // Path to PCA mean file.
+ optional string mean_path = 1; // Required.
+
+ // Path to PCA matrix file.
+ optional string projection_matrix_path = 2; // Required.
+
+ // Dimensionality of feature after PCA.
+ optional int32 pca_dim = 3; // Required.
+
+ // If whitening is to be used, this must be set to true.
+ optional bool use_whitening = 4 [default = false];
+
+ // Path to PCA variances file, used for whitening. This is used only if
+ // use_whitening is set to true.
+ optional string pca_variances_path = 5;
+}
+
+message DelfLocalFeatureConfig {
+ // If PCA is to be used, this must be set to true.
+ optional bool use_pca = 1 [default = true];
+
+ // Target layer name for DELF model. This is used to obtain receptive field
+ // parameters used for localizing features with respect to the input image.
+ optional string layer_name = 2 [default = ""];
+
+ // Intersection over union threshold for the non-max suppression (NMS)
+ // operation. If two features overlap by at most this amount, both are kept.
+ // Otherwise, the one with largest attention score is kept. This should be a
+ // number between 0.0 (no region is selected) and 1.0 (all regions are
+ // selected and NMS is not performed).
+ optional float iou_threshold = 3 [default = 1.0];
+
+ // Maximum number of features that will be selected. The features with largest
+ // scores (eg, largest attention score if score_type is "Att") are the
+ // selected ones.
+ optional int32 max_feature_num = 4 [default = 1000];
+
+ // Threshold to be used for feature selection: no feature with score lower
+  // than this number will be selected.
+ optional float score_threshold = 5 [default = 100.0];
+
+ // PCA parameters for DELF local feature. This is used only if use_pca is
+ // true.
+ optional DelfPcaParameters pca_parameters = 6;
+}
+
+message DelfConfig {
+ // Path to DELF model.
+ optional string model_path = 1; // Required.
+
+ // Image scales to be used.
+ repeated float image_scales = 2;
+
+ // Configuration used for DELF local features.
+ optional DelfLocalFeatureConfig delf_local_config = 3;
+
+}
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/protos/feature.proto" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/protos/feature.proto"
new file mode 100644
index 00000000..64c342fe
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/protos/feature.proto"
@@ -0,0 +1,22 @@
+// Protocol buffer for serializing the DELF feature information.
+
+syntax = "proto2";
+
+package delf.protos;
+
+import "delf/protos/datum.proto";
+
+// DelfFeature stores a single DELF feature: its descriptor (a DatumProto)
+// together with the keypoint location (x, y), scale, orientation and strength.
+message DelfFeature {
+ optional DatumProto descriptor = 1;
+ optional float x = 2;
+ optional float y = 3;
+ optional float scale = 4;
+ optional float orientation = 5;
+ optional float strength = 6;
+}
+
+message DelfFeatures {
+ repeated DelfFeature feature = 1;
+}
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/datum_io.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/datum_io.py"
new file mode 100644
index 00000000..5602667a
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/datum_io.py"
@@ -0,0 +1,109 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Python interface for DatumProto.
+
+DatumProto is a protocol buffer used to serialize tensors with arbitrary shape.
+Please refer to datum.proto for details.
+
+Supports reading and writing DatumProto from/to numpy arrays and files.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+import tensorflow as tf
+
+from delf import datum_pb2
+
+
+def ArrayToDatum(arr):
+ """Converts numpy array to DatumProto.
+
+ Args:
+ arr: Numpy array of arbitrary shape.
+
+ Returns:
+ datum: DatumProto object.
+ """
+ datum = datum_pb2.DatumProto()
+ datum.float_list.value.extend(arr.astype(float).flat)
+ datum.shape.dim.extend(arr.shape)
+ return datum
+
+
+def DatumToArray(datum):
+ """Converts data saved in DatumProto to numpy array.
+
+ Args:
+ datum: DatumProto object.
+
+ Returns:
+ Numpy array of arbitrary shape.
+ """
+ return np.array(datum.float_list.value).astype(float).reshape(datum.shape.dim)
+
+
+def SerializeToString(arr):
+ """Converts numpy array to serialized DatumProto.
+
+ Args:
+ arr: Numpy array of arbitrary shape.
+
+ Returns:
+ Serialized DatumProto string.
+ """
+ datum = ArrayToDatum(arr)
+ return datum.SerializeToString()
+
+
+def ParseFromString(string):
+ """Converts serialized DatumProto string to numpy array.
+
+ Args:
+ string: Serialized DatumProto string.
+
+ Returns:
+ Numpy array.
+ """
+ datum = datum_pb2.DatumProto()
+ datum.ParseFromString(string)
+ return DatumToArray(datum)
+
+
+def ReadFromFile(file_path):
+ """Helper function to load data from a DatumProto format in a file.
+
+ Args:
+ file_path: Path to file containing data.
+
+ Returns:
+ data: Numpy array.
+ """
+ with tf.gfile.FastGFile(file_path, 'rb') as f:
+ return ParseFromString(f.read())
+
+
+def WriteToFile(data, file_path):
+ """Helper function to write data to a file in DatumProto format.
+
+ Args:
+ data: Numpy array.
+ file_path: Path to file that will be written.
+ """
+ serialized_data = SerializeToString(data)
+ with tf.gfile.FastGFile(file_path, 'w') as f:
+ f.write(serialized_data)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/datum_io_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/datum_io_test.py"
new file mode 100644
index 00000000..5cee0c3d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/datum_io_test.py"
@@ -0,0 +1,72 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for datum_io, the python interface of DatumProto."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+
+import numpy as np
+import tensorflow as tf
+
+from delf import datum_io
+
+
+class DatumIoTest(tf.test.TestCase):
+
+ def Conversion2dTestWithType(self, dtype):
+ original_data = np.arange(9).reshape(3, 3).astype(dtype)
+ serialized = datum_io.SerializeToString(original_data)
+ retrieved_data = datum_io.ParseFromString(serialized)
+ self.assertTrue(np.array_equal(original_data, retrieved_data))
+
+ def Conversion3dTestWithType(self, dtype):
+ original_data = np.arange(24).reshape(2, 3, 4).astype(dtype)
+ serialized = datum_io.SerializeToString(original_data)
+ retrieved_data = datum_io.ParseFromString(serialized)
+ self.assertTrue(np.array_equal(original_data, retrieved_data))
+
+ def testConversion2dWithType(self):
+ self.Conversion2dTestWithType(np.int8)
+ self.Conversion2dTestWithType(np.int16)
+ self.Conversion2dTestWithType(np.int32)
+ self.Conversion2dTestWithType(np.int64)
+ self.Conversion2dTestWithType(np.float16)
+ self.Conversion2dTestWithType(np.float32)
+ self.Conversion2dTestWithType(np.float64)
+
+ def testConversion3dWithType(self):
+ self.Conversion3dTestWithType(np.int8)
+ self.Conversion3dTestWithType(np.int16)
+ self.Conversion3dTestWithType(np.int32)
+ self.Conversion3dTestWithType(np.int64)
+ self.Conversion3dTestWithType(np.float16)
+ self.Conversion3dTestWithType(np.float32)
+ self.Conversion3dTestWithType(np.float64)
+
+ def testWriteAndReadToFile(self):
+ data = np.array([[[-1.0, 125.0, -2.5], [14.5, 3.5, 0.0]],
+ [[20.0, 0.0, 30.0], [25.5, 36.0, 42.0]]])
+ tmpdir = tf.test.get_temp_dir()
+ filename = os.path.join(tmpdir, 'test.datum')
+ datum_io.WriteToFile(data, filename)
+ data_read = datum_io.ReadFromFile(filename)
+ self.assertAllEqual(data_read, data)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/delf_v1.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/delf_v1.py"
new file mode 100644
index 00000000..7fcf44fe
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/delf_v1.py"
@@ -0,0 +1,398 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""DELF model implementation based on the following paper.
+
+ Large-Scale Image Retrieval with Attentive Deep Local Features
+ https://arxiv.org/abs/1612.06321
+
+Please refer to the README.md file for detailed explanations on using the DELF
+model.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from nets import resnet_v1
+
+slim = tf.contrib.slim
+
+_SUPPORTED_TARGET_LAYER = ['resnet_v1_50/block3', 'resnet_v1_50/block4']
+
+# The variable scope for the attention portion of the model.
+_ATTENTION_VARIABLE_SCOPE = 'attention_block'
+
+# The attention_type determines whether the attention based feature aggregation
+# is performed on the L2-normalized feature map or on the default feature map
+# where L2-normalization is not applied. Note that in both cases, attention
+# functions are built on the un-normalized feature map. This is only relevant
+# for the training stage.
+# Currently supported options are as follows:
+# * use_l2_normalized_feature:
+# The option use_l2_normalized_feature first applies L2-normalization on the
+# feature map and then applies attention based feature aggregation. This
+# option is used for the DELF+FT+Att model in the paper.
+# * use_default_input_feature:
+# The option use_default_input_feature aggregates unnormalized feature map
+# directly.
+_SUPPORTED_ATTENTION_TYPES = [
+ 'use_l2_normalized_feature', 'use_default_input_feature'
+]
+
+# Supported types of non-linearity for the attention score function.
+_SUPPORTED_ATTENTION_NONLINEARITY = ['softplus']
+
+
+class DelfV1(object):
+ """Creates a DELF model.
+
+ Args:
+ target_layer_type: The name of target CNN architecture and its layer.
+
+ Raises:
+ ValueError: If an unknown target_layer_type is provided.
+ """
+
+ def __init__(self, target_layer_type=_SUPPORTED_TARGET_LAYER[0]):
+ tf.logging.info('Creating model %s ', target_layer_type)
+
+ self._target_layer_type = target_layer_type
+ if self._target_layer_type not in _SUPPORTED_TARGET_LAYER:
+ raise ValueError('Unknown model type.')
+
+ @property
+ def target_layer_type(self):
+ return self._target_layer_type
+
+ def _PerformAttention(self,
+ attention_feature_map,
+ feature_map,
+ attention_nonlinear,
+ kernel=1):
+ """Helper function to construct the attention part of the model.
+
+ Computes attention score map and aggregates the input feature map based on
+ the attention score map.
+
+ Args:
+ attention_feature_map: Potentially normalized feature map that will
+ be aggregated with attention score map.
+ feature_map: Unnormalized feature map that will be used to compute
+ attention score map.
+ attention_nonlinear: Type of non-linearity that will be applied to
+ attention value.
+ kernel: Convolutional kernel to use in attention layers (eg: 1, [3, 3]).
+
+ Returns:
+ attention_feat: Aggregated feature vector.
+ attention_prob: Attention score map after the non-linearity.
+ attention_score: Attention score map before the non-linearity.
+
+ Raises:
+ ValueError: If unknown attention non-linearity type is provided.
+ """
+ with tf.variable_scope(
+ 'attention', values=[attention_feature_map, feature_map]):
+ with tf.variable_scope('compute', values=[feature_map]):
+ activation_fn_conv1 = tf.nn.relu
+ feature_map_conv1 = slim.conv2d(
+ feature_map,
+ 512,
+ kernel,
+ rate=1,
+ activation_fn=activation_fn_conv1,
+ scope='conv1')
+
+ attention_score = slim.conv2d(
+ feature_map_conv1,
+ 1,
+ kernel,
+ rate=1,
+ activation_fn=None,
+ normalizer_fn=None,
+ scope='conv2')
+
+ # Set activation of conv2 layer of attention model.
+ with tf.variable_scope(
+ 'merge', values=[attention_feature_map, attention_score]):
+ if attention_nonlinear not in _SUPPORTED_ATTENTION_NONLINEARITY:
+ raise ValueError('Unknown attention non-linearity.')
+ if attention_nonlinear == 'softplus':
+ with tf.variable_scope(
+ 'softplus_attention',
+ values=[attention_feature_map, attention_score]):
+ attention_prob = tf.nn.softplus(attention_score)
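+            # Aggregate the feature map with attention-weighted average pooling
+            # over the spatial dimensions.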
+ attention_feat = tf.reduce_mean(
+ tf.multiply(attention_feature_map, attention_prob), [1, 2])
+ attention_feat = tf.expand_dims(tf.expand_dims(attention_feat, 1), 2)
+ return attention_feat, attention_prob, attention_score
+
+ def _GetAttentionSubnetwork(
+ self,
+ feature_map,
+ end_points,
+ attention_nonlinear=_SUPPORTED_ATTENTION_NONLINEARITY[0],
+ attention_type=_SUPPORTED_ATTENTION_TYPES[0],
+ kernel=1,
+ reuse=False):
+ """Constructs the part of the model performing attention.
+
+ Args:
+ feature_map: A tensor of size [batch, height, width, channels]. Usually it
+ corresponds to the output feature map of a fully-convolutional network.
+ end_points: Set of activations of the network constructed so far.
+ attention_nonlinear: Type of non-linearity on top of the attention
+ function.
+ attention_type: Type of the attention structure.
+ kernel: Convolutional kernel to use in attention layers (eg, [3, 3]).
+ reuse: Whether or not the layer and its variables should be reused.
+
+ Returns:
+ prelogits: A tensor of size [batch, 1, 1, channels].
+ attention_prob: Attention score after the non-linearity.
+ attention_score: Attention score before the non-linearity.
+ end_points: Updated set of activations, for external use.
+ Raises:
+ ValueError: If unknown attention_type is provided.
+ """
+ with tf.variable_scope(
+ _ATTENTION_VARIABLE_SCOPE,
+ values=[feature_map, end_points],
+ reuse=reuse):
+ if attention_type not in _SUPPORTED_ATTENTION_TYPES:
+ raise ValueError('Unknown attention_type.')
+ if attention_type == 'use_l2_normalized_feature':
+ attention_feature_map = tf.nn.l2_normalize(
+ feature_map, 3, name='l2_normalize')
+ elif attention_type == 'use_default_input_feature':
+ attention_feature_map = feature_map
+ end_points['attention_feature_map'] = attention_feature_map
+
+ attention_outputs = self._PerformAttention(
+ attention_feature_map, feature_map, attention_nonlinear, kernel)
+ prelogits, attention_prob, attention_score = attention_outputs
+ end_points['prelogits'] = prelogits
+ end_points['attention_prob'] = attention_prob
+ end_points['attention_score'] = attention_score
+ return prelogits, attention_prob, attention_score, end_points
+
+ def GetResnet50Subnetwork(self,
+ images,
+ is_training=False,
+ global_pool=False,
+ reuse=None):
+ """Constructs resnet_v1_50 part of the DELF model.
+
+ Args:
+ images: A tensor of size [batch, height, width, channels].
+ is_training: Whether or not the model is in training mode.
+ global_pool: If True, perform global average pooling after feature
+ extraction. This may be useful for DELF's descriptor fine-tuning stage.
+ reuse: Whether or not the layer and its variables should be reused.
+
+ Returns:
+ net: A rank-4 tensor of size [batch, height_out, width_out, channels_out].
+ If global_pool is True, height_out = width_out = 1.
+ end_points: A set of activations for external use.
+ """
+ block = resnet_v1.resnet_v1_block
+ blocks = [
+ block('block1', base_depth=64, num_units=3, stride=2),
+ block('block2', base_depth=128, num_units=4, stride=2),
+ block('block3', base_depth=256, num_units=6, stride=2),
+ ]
+ if self._target_layer_type == 'resnet_v1_50/block4':
+ blocks.append(block('block4', base_depth=512, num_units=3, stride=1))
+ net, end_points = resnet_v1.resnet_v1(
+ images,
+ blocks,
+ is_training=is_training,
+ global_pool=global_pool,
+ reuse=reuse,
+ scope='resnet_v1_50')
+ return net, end_points
+
+ def GetAttentionPrelogit(
+ self,
+ images,
+ weight_decay=0.0001,
+ attention_nonlinear=_SUPPORTED_ATTENTION_NONLINEARITY[0],
+ attention_type=_SUPPORTED_ATTENTION_TYPES[0],
+ kernel=1,
+ training_resnet=False,
+ training_attention=False,
+ reuse=False,
+ use_batch_norm=True):
+ """Constructs attention model on resnet_v1_50.
+
+ Args:
+ images: A tensor of size [batch, height, width, channels].
+ weight_decay: The parameters for weight_decay regularizer.
+ attention_nonlinear: Type of non-linearity on top of the attention
+ function.
+ attention_type: Type of the attention structure.
+ kernel: Convolutional kernel to use in attention layers (eg, [3, 3]).
+ training_resnet: Whether or not the Resnet blocks from the model are in
+ training mode.
+ training_attention: Whether or not the attention part of the model is
+ in training mode.
+ reuse: Whether or not the layer and its variables should be reused.
+ use_batch_norm: Whether or not to use batch normalization.
+
+ Returns:
+ prelogits: A tensor of size [batch, 1, 1, channels].
+ attention_prob: Attention score after the non-linearity.
+ attention_score: Attention score before the non-linearity.
+ feature_map: Features extracted from the model, which are not
+ l2-normalized.
+ end_points: Set of activations for external use.
+ """
+ # Construct Resnet50 features.
+ with slim.arg_scope(
+ resnet_v1.resnet_arg_scope(use_batch_norm=use_batch_norm)):
+ _, end_points = self.GetResnet50Subnetwork(
+ images, is_training=training_resnet, reuse=reuse)
+
+ feature_map = end_points[self._target_layer_type]
+
+ # Construct attention subnetwork on top of features.
+ with slim.arg_scope(
+ resnet_v1.resnet_arg_scope(
+ weight_decay=weight_decay, use_batch_norm=use_batch_norm)):
+ with slim.arg_scope([slim.batch_norm], is_training=training_attention):
+ (prelogits, attention_prob, attention_score,
+ end_points) = self._GetAttentionSubnetwork(
+ feature_map,
+ end_points,
+ attention_nonlinear=attention_nonlinear,
+ attention_type=attention_type,
+ kernel=kernel,
+ reuse=reuse)
+
+ return prelogits, attention_prob, attention_score, feature_map, end_points
+
+ def _GetAttentionModel(
+ self,
+ images,
+ num_classes,
+ weight_decay=0.0001,
+ attention_nonlinear=_SUPPORTED_ATTENTION_NONLINEARITY[0],
+ attention_type=_SUPPORTED_ATTENTION_TYPES[0],
+ kernel=1,
+ training_resnet=False,
+ training_attention=False,
+ reuse=False):
+ """Constructs attention model on resnet_v1_50.
+
+ Args:
+ images: A tensor of size [batch, height, width, channels]
+ num_classes: The number of output classes.
+ weight_decay: The parameters for weight_decay regularizer.
+ attention_nonlinear: Type of non-linearity on top of the attention
+ function.
+ attention_type: Type of the attention structure.
+      kernel: Convolutional kernel to use in attention layers (e.g., [3, 3]).
+ training_resnet: Whether or not the Resnet blocks from the model are in
+ training mode.
+ training_attention: Whether or not the attention part of the model is in
+ training mode.
+ reuse: Whether or not the layer and its variables should be reused.
+
+ Returns:
+ logits: A tensor of size [batch, num_classes].
+ attention_prob: Attention score after the non-linearity.
+ attention_score: Attention score before the non-linearity.
+ feature_map: Features extracted from the model, which are not
+ l2-normalized.
+ """
+
+ attention_feat, attention_prob, attention_score, feature_map, _ = (
+ self.GetAttentionPrelogit(
+ images,
+ weight_decay,
+ attention_nonlinear=attention_nonlinear,
+ attention_type=attention_type,
+ kernel=kernel,
+ training_resnet=training_resnet,
+ training_attention=training_attention,
+ reuse=reuse))
+ with slim.arg_scope(
+ resnet_v1.resnet_arg_scope(
+ weight_decay=weight_decay, batch_norm_scale=True)):
+ with slim.arg_scope([slim.batch_norm], is_training=training_attention):
+ with tf.variable_scope(
+ _ATTENTION_VARIABLE_SCOPE, values=[attention_feat], reuse=reuse):
+ logits = slim.conv2d(
+ attention_feat,
+ num_classes, [1, 1],
+ activation_fn=None,
+ normalizer_fn=None,
+ scope='logits')
+ logits = tf.squeeze(logits, [1, 2], name='spatial_squeeze')
+ return logits, attention_prob, attention_score, feature_map
+
+ def AttentionModel(self,
+ images,
+ num_classes,
+ weight_decay=0.0001,
+ attention_nonlinear=_SUPPORTED_ATTENTION_NONLINEARITY[0],
+ attention_type=_SUPPORTED_ATTENTION_TYPES[0],
+ kernel=1,
+ training_resnet=False,
+ training_attention=False,
+ reuse=False):
+ """Constructs attention based classification model for training.
+
+ Args:
+      images: A tensor of size [batch, height, width, channels].
+ num_classes: The number of output classes.
+      weight_decay: The parameter for the weight decay regularizer.
+ attention_nonlinear: Type of non-linearity on top of the attention
+ function.
+ attention_type: Type of the attention structure.
+      kernel: Convolutional kernel to use in attention layers (e.g., [3, 3]).
+ training_resnet: Whether or not the Resnet blocks from the model are in
+ training mode.
+ training_attention: Whether or not the model is in training mode. Note
+ that this function only supports training the attention part of the
+        model, i.e., the feature extraction layers are not trained.
+ reuse: Whether or not the layer and its variables should be reused.
+
+ Returns:
+      logits: A tensor of size [batch, num_classes].
+ attention: Attention score after the non-linearity.
+ feature_map: Features extracted from the model, which are not
+ l2-normalized.
+
+ Raises:
+ ValueError: If unknown target_layer_type is provided.
+ """
+ if 'resnet_v1_50' in self._target_layer_type:
+ net_outputs = self._GetAttentionModel(
+ images,
+ num_classes,
+ weight_decay,
+ attention_nonlinear=attention_nonlinear,
+ attention_type=attention_type,
+ kernel=kernel,
+ training_resnet=training_resnet,
+ training_attention=training_attention,
+ reuse=reuse)
+ logits, attention, _, feature_map = net_outputs
+ else:
+ raise ValueError('Unknown target_layer_type.')
+ return logits, attention, feature_map
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/examples/delf_config_example.pbtxt" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/examples/delf_config_example.pbtxt"
new file mode 100644
index 00000000..7765551c
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/examples/delf_config_example.pbtxt"
@@ -0,0 +1,25 @@
+model_path: "parameters/delf_v1_20171026/model/"
+image_scales: .25
+image_scales: .3536
+image_scales: .5
+image_scales: .7071
+image_scales: 1.0
+image_scales: 1.4142
+image_scales: 2.0
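+# The image_scales above cover 0.25x to 2.0x in half-octave (sqrt(2)) steps.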
+delf_local_config {
+ use_pca: true
+ # Note that for the exported model provided as an example, layer_name and
+ # iou_threshold are hard-coded in the checkpoint. So, the layer_name and
+ # iou_threshold variables here have no effect on the provided
+ # extract_features.py script.
+ layer_name: "resnet_v1_50/block3"
+ iou_threshold: 1.0
+ max_feature_num: 1000
+ score_threshold: 100.0
+ pca_parameters {
+ mean_path: "parameters/delf_v1_20171026/pca/mean.datum"
+ projection_matrix_path: "parameters/delf_v1_20171026/pca/pca_proj_mat.datum"
+ pca_dim: 40
+ use_whitening: false
+ }
+}
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/examples/extract_features.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/examples/extract_features.py"
new file mode 100644
index 00000000..7a32a861
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/examples/extract_features.py"
@@ -0,0 +1,189 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Extracts DELF features from a list of images, saving them to file.
+
+The images must be in JPG format. The program checks if descriptors already
+exist, and skips computation for those.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import argparse
+import os
+import sys
+import time
+
+import tensorflow as tf
+
+from google.protobuf import text_format
+from tensorflow.python.platform import app
+from delf import delf_config_pb2
+from delf import feature_extractor
+from delf import feature_io
+
+cmd_args = None
+
+# Extension of feature files.
+_DELF_EXT = '.delf'
+
+# Interval (in number of images) between progress log reports.
+_STATUS_CHECK_ITERATIONS = 100
+
+
+def _ReadImageList(list_path):
+ """Helper function to read image paths.
+
+ Args:
+ list_path: Path to list of images, one image path per line.
+
+ Returns:
+ image_paths: List of image paths.
+ """
+ with tf.gfile.GFile(list_path, 'r') as f:
+ image_paths = f.readlines()
+ image_paths = [entry.rstrip() for entry in image_paths]
+ return image_paths
+
+
+def main(unused_argv):
+ tf.logging.set_verbosity(tf.logging.INFO)
+
+ # Read list of images.
+ tf.logging.info('Reading list of images...')
+ image_paths = _ReadImageList(cmd_args.list_images_path)
+ num_images = len(image_paths)
+ tf.logging.info('done! Found %d images', num_images)
+
+ # Parse DelfConfig proto.
+ config = delf_config_pb2.DelfConfig()
+ with tf.gfile.FastGFile(cmd_args.config_path, 'r') as f:
+ text_format.Merge(f.read(), config)
+
+ # Create output directory if necessary.
+ if not os.path.exists(cmd_args.output_dir):
+ os.makedirs(cmd_args.output_dir)
+
+ # Tell TensorFlow that the model will be built into the default Graph.
+ with tf.Graph().as_default():
+    # Build a queue that reads and decodes the images one by one.
+ filename_queue = tf.train.string_input_producer(image_paths, shuffle=False)
+ reader = tf.WholeFileReader()
+ _, value = reader.read(filename_queue)
+ image_tf = tf.image.decode_jpeg(value, channels=3)
+
+ with tf.Session() as sess:
+ # Initialize variables.
+ init_op = tf.global_variables_initializer()
+ sess.run(init_op)
+
+ # Loading model that will be used.
+ tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING],
+ config.model_path)
+ graph = tf.get_default_graph()
+ input_image = graph.get_tensor_by_name('input_image:0')
+ input_score_threshold = graph.get_tensor_by_name('input_abs_thres:0')
+ input_image_scales = graph.get_tensor_by_name('input_scales:0')
+ input_max_feature_num = graph.get_tensor_by_name(
+ 'input_max_feature_num:0')
+ boxes = graph.get_tensor_by_name('boxes:0')
+ raw_descriptors = graph.get_tensor_by_name('features:0')
+ feature_scales = graph.get_tensor_by_name('scales:0')
+ attention_with_extra_dim = graph.get_tensor_by_name('scores:0')
+ attention = tf.reshape(attention_with_extra_dim,
+ [tf.shape(attention_with_extra_dim)[0]])
+
+ locations, descriptors = feature_extractor.DelfFeaturePostProcessing(
+ boxes, raw_descriptors, config)
+
+ # Start input enqueue threads.
+ coord = tf.train.Coordinator()
+ threads = tf.train.start_queue_runners(sess=sess, coord=coord)
+ start = time.clock()
+ for i in range(num_images):
+ # Write to log-info once in a while.
+ if i == 0:
+ tf.logging.info('Starting to extract DELF features from images...')
+ elif i % _STATUS_CHECK_ITERATIONS == 0:
+ elapsed = (time.clock() - start)
+ tf.logging.info('Processing image %d out of %d, last %d '
+ 'images took %f seconds', i, num_images,
+ _STATUS_CHECK_ITERATIONS, elapsed)
+ start = time.clock()
+
+        # Get next image.
+ im = sess.run(image_tf)
+
+ # If descriptor already exists, skip its computation.
+ out_desc_filename = os.path.splitext(os.path.basename(
+ image_paths[i]))[0] + _DELF_EXT
+ out_desc_fullpath = os.path.join(cmd_args.output_dir, out_desc_filename)
+ if tf.gfile.Exists(out_desc_fullpath):
+ tf.logging.info('Skipping %s', image_paths[i])
+ continue
+
+ # Extract and save features.
+ (locations_out, descriptors_out, feature_scales_out,
+ attention_out) = sess.run(
+ [locations, descriptors, feature_scales, attention],
+ feed_dict={
+ input_image:
+ im,
+ input_score_threshold:
+ config.delf_local_config.score_threshold,
+ input_image_scales:
+ list(config.image_scales),
+ input_max_feature_num:
+ config.delf_local_config.max_feature_num
+ })
+
+ feature_io.WriteToFile(out_desc_fullpath, locations_out,
+ feature_scales_out, descriptors_out,
+ attention_out)
+
+ # Finalize enqueue threads.
+ coord.request_stop()
+ coord.join(threads)
+
+
+if __name__ == '__main__':
+ parser = argparse.ArgumentParser()
+ parser.register('type', 'bool', lambda v: v.lower() == 'true')
+ parser.add_argument(
+ '--config_path',
+ type=str,
+ default='delf_config_example.pbtxt',
+ help="""
+ Path to DelfConfig proto text file with configuration to be used for DELF
+ extraction.
+ """)
+ parser.add_argument(
+ '--list_images_path',
+ type=str,
+ default='list_images.txt',
+ help="""
+ Path to list of images whose DELF features will be extracted.
+ """)
+ parser.add_argument(
+ '--output_dir',
+ type=str,
+ default='test_features',
+ help="""
+ Directory where DELF features will be written to. Each image's features
+ will be written to a file with same name, and extension replaced by .delf.
+ """)
+ cmd_args, unparsed = parser.parse_known_args()
+ app.run(main=main, argv=[sys.argv[0]] + unparsed)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/examples/match_images.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/examples/match_images.py"
new file mode 100644
index 00000000..6aa8ce37
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/examples/match_images.py"
@@ -0,0 +1,144 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Matches two images using their DELF features.
+
+The matching is done using feature-based nearest-neighbor search, followed by
+geometric verification using RANSAC.
+
+The DELF features can be extracted using the extract_features.py script.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import argparse
+import sys
+
+import matplotlib.image as mpimg
+import matplotlib.pyplot as plt
+import numpy as np
+from scipy.spatial import cKDTree
+from skimage.feature import plot_matches
+from skimage.measure import ransac
+from skimage.transform import AffineTransform
+import tensorflow as tf
+
+from tensorflow.python.platform import app
+from delf import feature_io
+
+cmd_args = None
+
+_DISTANCE_THRESHOLD = 0.8
+
+
+def main(unused_argv):
+ tf.logging.set_verbosity(tf.logging.INFO)
+
+ # Read features.
+ locations_1, _, descriptors_1, _, _ = feature_io.ReadFromFile(
+ cmd_args.features_1_path)
+ num_features_1 = locations_1.shape[0]
+ tf.logging.info("Loaded image 1's %d features" % num_features_1)
+ locations_2, _, descriptors_2, _, _ = feature_io.ReadFromFile(
+ cmd_args.features_2_path)
+ num_features_2 = locations_2.shape[0]
+ tf.logging.info("Loaded image 2's %d features" % num_features_2)
+
+ # Find nearest-neighbor matches using a KD tree.
+ d1_tree = cKDTree(descriptors_1)
+ _, indices = d1_tree.query(
+ descriptors_2, distance_upper_bound=_DISTANCE_THRESHOLD)
+
+ # Select feature locations for putative matches.
+ locations_2_to_use = np.array([
+ locations_2[i,]
+ for i in range(num_features_2)
+ if indices[i] != num_features_1
+ ])
+ locations_1_to_use = np.array([
+ locations_1[indices[i],]
+ for i in range(num_features_2)
+ if indices[i] != num_features_1
+ ])
+
+ # Perform geometric verification using RANSAC.
+ _, inliers = ransac(
+ (locations_1_to_use, locations_2_to_use),
+ AffineTransform,
+ min_samples=3,
+ residual_threshold=20,
+ max_trials=1000)
+
+ tf.logging.info('Found %d inliers' % sum(inliers))
+
+ # Visualize correspondences, and save to file.
+ _, ax = plt.subplots()
+ img_1 = mpimg.imread(cmd_args.image_1_path)
+ img_2 = mpimg.imread(cmd_args.image_2_path)
+ inlier_idxs = np.nonzero(inliers)[0]
+ plot_matches(
+ ax,
+ img_1,
+ img_2,
+ locations_1_to_use,
+ locations_2_to_use,
+ np.column_stack((inlier_idxs, inlier_idxs)),
+ matches_color='b')
+ ax.axis('off')
+ ax.set_title('DELF correspondences')
+ plt.savefig(cmd_args.output_image)
+
+
+if __name__ == '__main__':
+ parser = argparse.ArgumentParser()
+ parser.register('type', 'bool', lambda v: v.lower() == 'true')
+ parser.add_argument(
+ '--image_1_path',
+ type=str,
+ default='test_images/image_1.jpg',
+ help="""
+ Path to test image 1.
+ """)
+ parser.add_argument(
+ '--image_2_path',
+ type=str,
+ default='test_images/image_2.jpg',
+ help="""
+ Path to test image 2.
+ """)
+ parser.add_argument(
+ '--features_1_path',
+ type=str,
+ default='test_features/image_1.delf',
+ help="""
+ Path to DELF features from image 1.
+ """)
+ parser.add_argument(
+ '--features_2_path',
+ type=str,
+ default='test_features/image_2.delf',
+ help="""
+ Path to DELF features from image 2.
+ """)
+ parser.add_argument(
+ '--output_image',
+ type=str,
+ default='test_match.png',
+ help="""
+ Path where an image showing the matches will be saved.
+ """)
+ cmd_args, unparsed = parser.parse_known_args()
+ app.run(main=main, argv=[sys.argv[0]] + unparsed)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/examples/matched_images_example.png" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/examples/matched_images_example.png"
new file mode 100644
index 00000000..2430e003
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/examples/matched_images_example.png" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/feature_extractor.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/feature_extractor.py"
new file mode 100644
index 00000000..6eca9c6c
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/feature_extractor.py"
@@ -0,0 +1,384 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""DELF feature extractor.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from delf import datum_io
+from delf import delf_v1
+from object_detection.core import box_list
+from object_detection.core import box_list_ops
+
+
+def NormalizePixelValues(image,
+ pixel_value_offset=128.0,
+ pixel_value_scale=128.0):
+ """Normalize image pixel values.
+
+ Args:
+ image: a uint8 tensor.
+ pixel_value_offset: a Python float, offset for normalizing pixel values.
+ pixel_value_scale: a Python float, scale for normalizing pixel values.
+
+ Returns:
+ image: a float32 tensor of the same shape as the input image.
+ """
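+  # With the default offset and scale of 128.0, uint8 values in [0, 255] map
+  # to roughly [-1, 1]; e.g. 255 -> (255 - 128) / 128 ~= 0.992.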
+ image = tf.to_float(image)
+ image = tf.div(tf.subtract(image, pixel_value_offset), pixel_value_scale)
+ return image
+
+
+def CalculateReceptiveBoxes(height, width, rf, stride, padding):
+ """Calculate receptive boxes for each feature point.
+
+ Args:
+ height: The height of feature map.
+ width: The width of feature map.
+ rf: The receptive field size.
+ stride: The effective stride between two adjacent feature points.
+ padding: The effective padding size.
+ Returns:
+ rf_boxes: [N, 4] receptive boxes tensor. Here N equals to height x width.
+ Each box is represented by [ymin, xmin, ymax, xmax].
+ """
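+  # Worked example: with rf=291, stride=32, padding=145, the feature point at
+  # grid position (y=0, x=1) gets the receptive box
+  # 32 * [0, 1, 0, 1] + [-145, -145, 145, 145] = [-145, -113, 145, 177].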
+ x, y = tf.meshgrid(tf.range(width), tf.range(height))
+ coordinates = tf.reshape(tf.stack([y, x], axis=2), [-1, 2])
+ # [y,x,y,x]
+ point_boxes = tf.to_float(tf.concat([coordinates, coordinates], 1))
+ bias = [-padding, -padding, -padding + rf - 1, -padding + rf - 1]
+ rf_boxes = stride * point_boxes + bias
+ return rf_boxes
+
+
+def CalculateKeypointCenters(boxes):
+ """Helper function to compute feature centers, from RF boxes.
+
+ Args:
+ boxes: [N, 4] float tensor.
+
+ Returns:
+ centers: [N, 2] float tensor.
+ """
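+  # For example, the box [ymin, xmin, ymax, xmax] = [-10.0, 0.0, 11.0, 21.0]
+  # has center [(ymin + ymax) / 2, (xmin + xmax) / 2] = [0.5, 10.5].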
+ return tf.divide(
+ tf.add(
+ tf.gather(boxes, [0, 1], axis=1), tf.gather(boxes, [2, 3], axis=1)),
+ 2.0)
+
+
+def ExtractKeypointDescriptor(image, layer_name, image_scales, iou,
+ max_feature_num, abs_thres, model_fn):
+ """Extract keypoint descriptor for input image.
+
+ Args:
+    image: An image tensor with shape [h, w, channels].
+ layer_name: The endpoint of feature extraction layer.
+ image_scales: A 1D float tensor which contains the scales.
+ iou: A float scalar denoting the IOU threshold for NMS.
+    max_feature_num: An int tensor denoting the maximum number of feature
+      points to select.
+ abs_thres: A float tensor denoting the score threshold for feature
+ selection.
+ model_fn: Model function. Follows the signature:
+
+ * Args:
+ * `images`: Image tensor which is re-scaled.
+ * `normalized_image`: Whether or not the images are normalized.
+ * `reuse`: Whether or not the layer and its variables should be reused.
+
+ * Returns:
+ * `attention`: Attention score after the non-linearity.
+ * `feature_map`: Feature map obtained from the ResNet model.
+
+ Returns:
+ boxes: [N, 4] float tensor which denotes the selected receptive box. N is
+ the number of final feature points which pass through keypoint selection
+ and NMS steps.
+ feature_scales: [N] float tensor. It is the inverse of the input image
+ scales such that larger image scales correspond to larger image regions,
+ which is compatible with scale-space keypoint detection convention.
+ features: [N, depth] float tensor with feature descriptors.
+ scores: [N, 1] float tensor denoting the attention score.
+
+ Raises:
+ ValueError: If the layer_name is unsupported.
+ """
+ original_image_shape_float = tf.gather(tf.to_float(tf.shape(image)), [0, 1])
+ image_tensor = NormalizePixelValues(image)
+ image_tensor = tf.expand_dims(image_tensor, 0, name='image/expand_dims')
+
+ # Feature depth and receptive field parameters for each network version.
+ if layer_name == 'resnet_v1_50/block3':
+ feature_depth = 1024
+ rf, stride, padding = [291.0, 32.0, 145.0]
+ elif layer_name == 'resnet_v1_50/block4':
+ feature_depth = 2048
+ rf, stride, padding = [483.0, 32.0, 241.0]
+ else:
+ raise ValueError('Unsupported layer_name.')
+
+ def _ProcessSingleScale(scale_index,
+ boxes,
+ features,
+ scales,
+ scores,
+ reuse=True):
+ """Resize the image and run feature extraction and keypoint selection.
+
+ This function will be passed into tf.while_loop() and be called
+ repeatedly. The input boxes are collected from the previous iteration
+ [0: scale_index -1]. We get the current scale by
+ image_scales[scale_index], and run image resizing, feature extraction and
+ keypoint selection. Then we will get a new set of selected_boxes for
+ current scale. In the end, we concat the previous boxes with current
+ selected_boxes as the output.
+
+ Args:
+ scale_index: A valid index in the image_scales.
+ boxes: Box tensor with the shape of [N, 4].
+ features: Feature tensor with the shape of [N, depth].
+ scales: Scale tensor with the shape of [N].
+ scores: Attention score tensor with the shape of [N].
+ reuse: Whether or not the layer and its variables should be reused.
+
+ Returns:
+ scale_index: The next scale index for processing.
+ boxes: Concatenated box tensor with the shape of [K, 4]. K >= N.
+ features: Concatenated feature tensor with the shape of [K, depth].
+ scales: Concatenated scale tensor with the shape of [K].
+ scores: Concatenated attention score tensor with the shape of [K].
+ """
+ scale = tf.gather(image_scales, scale_index)
+ new_image_size = tf.to_int32(tf.round(original_image_shape_float * scale))
+ resized_image = tf.image.resize_bilinear(image_tensor, new_image_size)
+
+ attention, feature_map = model_fn(
+ resized_image, normalized_image=True, reuse=reuse)
+
+ rf_boxes = CalculateReceptiveBoxes(
+ tf.shape(feature_map)[1],
+ tf.shape(feature_map)[2], rf, stride, padding)
+ # Re-project back to the original image space.
+ rf_boxes = tf.divide(rf_boxes, scale)
+ attention = tf.reshape(attention, [-1])
+ feature_map = tf.reshape(feature_map, [-1, feature_depth])
+
+ # Use attention score to select feature vectors.
+ indices = tf.reshape(tf.where(attention >= abs_thres), [-1])
+ selected_boxes = tf.gather(rf_boxes, indices)
+ selected_features = tf.gather(feature_map, indices)
+ selected_scores = tf.gather(attention, indices)
+ selected_scales = tf.ones_like(selected_scores, tf.float32) / scale
+
+ # Concat with the previous result from different scales.
+ boxes = tf.concat([boxes, selected_boxes], 0)
+ features = tf.concat([features, selected_features], 0)
+ scales = tf.concat([scales, selected_scales], 0)
+ scores = tf.concat([scores, selected_scores], 0)
+
+ return scale_index + 1, boxes, features, scales, scores
+
+ output_boxes = tf.zeros([0, 4], dtype=tf.float32)
+ output_features = tf.zeros([0, feature_depth], dtype=tf.float32)
+ output_scales = tf.zeros([0], dtype=tf.float32)
+ output_scores = tf.zeros([0], dtype=tf.float32)
+
+ # Process the first scale separately, the following scales will reuse the
+ # graph variables.
+ (_, output_boxes, output_features, output_scales,
+ output_scores) = _ProcessSingleScale(
+ 0,
+ output_boxes,
+ output_features,
+ output_scales,
+ output_scores,
+ reuse=False)
+ i = tf.constant(1, dtype=tf.int32)
+ num_scales = tf.shape(image_scales)[0]
+ keep_going = lambda j, boxes, features, scales, scores: tf.less(j, num_scales)
+
+ (_, output_boxes, output_features, output_scales,
+ output_scores) = tf.while_loop(
+ cond=keep_going,
+ body=_ProcessSingleScale,
+ loop_vars=[
+ i, output_boxes, output_features, output_scales, output_scores
+ ],
+ shape_invariants=[
+ i.get_shape(),
+ tf.TensorShape([None, 4]),
+ tf.TensorShape([None, feature_depth]),
+ tf.TensorShape([None]),
+ tf.TensorShape([None])
+ ],
+ back_prop=False)
+
+ feature_boxes = box_list.BoxList(output_boxes)
+ feature_boxes.add_field('features', output_features)
+ feature_boxes.add_field('scales', output_scales)
+ feature_boxes.add_field('scores', output_scores)
+
+ nms_max_boxes = tf.minimum(max_feature_num, feature_boxes.num_boxes())
+ final_boxes = box_list_ops.non_max_suppression(feature_boxes, iou,
+ nms_max_boxes)
+
+ return (final_boxes.get(), final_boxes.get_field('scales'),
+ final_boxes.get_field('features'),
+ tf.expand_dims(final_boxes.get_field('scores'), 1))
+
+
+def BuildModel(layer_name, attention_nonlinear, attention_type,
+ attention_kernel_size):
+ """Build the DELF model.
+
+ This function is helpful for constructing the model function which will be fed
+ to ExtractKeypointDescriptor().
+
+ Args:
+    layer_name: The endpoint of the feature extraction layer.
+ attention_nonlinear: Type of the non-linearity for the attention function.
+ Currently, only 'softplus' is supported.
+ attention_type: Type of the attention used. Options are:
+ 'use_l2_normalized_feature' and 'use_default_input_feature'. Note that
+ this is irrelevant during inference time.
+ attention_kernel_size: Size of attention kernel (kernel is square).
+
+ Returns:
+ Attention model function.
+ """
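+  # Minimal sketch (illustrative values, not tuned) of how the returned model
+  # function is used with ExtractKeypointDescriptor():
+  #   model_fn = BuildModel('resnet_v1_50/block3', 'softplus',
+  #                         'use_l2_normalized_feature', 1)
+  #   boxes, scales, features, scores = ExtractKeypointDescriptor(
+  #       image, 'resnet_v1_50/block3', image_scales=tf.constant([1.0]),
+  #       iou=1.0, max_feature_num=1000, abs_thres=1.5, model_fn=model_fn)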
+
+ def _ModelFn(images, normalized_image, reuse):
+ """Attention model to get feature map and attention score map.
+
+ Args:
+ images: Image tensor.
+ normalized_image: Whether or not the images are normalized.
+ reuse: Whether or not the layer and its variables should be reused.
+ Returns:
+ attention: Attention score after the non-linearity.
+ feature_map: Feature map after ResNet convolution.
+ """
+ if normalized_image:
+ image_tensor = images
+ else:
+ image_tensor = NormalizePixelValues(images)
+
+ # Extract features and attention scores.
+ model = delf_v1.DelfV1(layer_name)
+ _, attention, _, feature_map, _ = model.GetAttentionPrelogit(
+ image_tensor,
+ attention_nonlinear=attention_nonlinear,
+ attention_type=attention_type,
+ kernel=[attention_kernel_size, attention_kernel_size],
+ training_resnet=False,
+ training_attention=False,
+ reuse=reuse)
+ return attention, feature_map
+
+ return _ModelFn
+
+
+def ApplyPcaAndWhitening(data,
+ pca_matrix,
+ pca_mean,
+ output_dim,
+ use_whitening=False,
+ pca_variances=None):
+ """Applies PCA/whitening to data.
+
+ Args:
+ data: [N, dim] float tensor containing data which undergoes PCA/whitening.
+ pca_matrix: [dim, dim] float tensor PCA matrix, row-major.
+ pca_mean: [dim] float tensor, mean to subtract before projection.
+ output_dim: Number of dimensions to use in output data, of type int.
+ use_whitening: Whether whitening is to be used.
+ pca_variances: [dim] float tensor containing PCA variances. Only used if
+ use_whitening is True.
+
+ Returns:
+ output: [N, output_dim] float tensor with output of PCA/whitening operation.
+ """
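+  # In effect this computes (data - pca_mean) * pca_matrix[:output_dim].T,
+  # followed by element-wise division by sqrt(pca_variances[:output_dim])
+  # when whitening is enabled.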
+ output = tf.matmul(
+ tf.subtract(data, pca_mean),
+ tf.slice(pca_matrix, [0, 0], [output_dim, -1]),
+ transpose_b=True,
+ name='pca_matmul')
+
+ # Apply whitening if desired.
+ if use_whitening:
+ output = tf.divide(
+ output,
+ tf.sqrt(tf.slice(pca_variances, [0], [output_dim])),
+ name='whitening')
+
+ return output
+
+
+def DelfFeaturePostProcessing(boxes, descriptors, config):
+ """Extract DELF features from input image.
+
+ Args:
+ boxes: [N, 4] float tensor which denotes the selected receptive box. N is
+ the number of final feature points which pass through keypoint selection
+ and NMS steps.
+ descriptors: [N, input_dim] float tensor.
+ config: DelfConfig proto with DELF extraction options.
+
+ Returns:
+ locations: [N, 2] float tensor which denotes the selected keypoint
+ locations.
+ final_descriptors: [N, output_dim] float tensor with DELF descriptors after
+ normalization and (possibly) PCA/whitening.
+ """
+
+ # Get center of descriptor boxes, corresponding to feature locations.
+ locations = CalculateKeypointCenters(boxes)
+
+ # Post-process descriptors: L2-normalize, and if desired apply PCA (followed
+ # by L2-normalization).
+ with tf.variable_scope('postprocess'):
+ final_descriptors = tf.nn.l2_normalize(
+ descriptors, dim=1, name='l2_normalization')
+
+ if config.delf_local_config.use_pca:
+ # Load PCA parameters.
+ pca_mean = tf.constant(
+ datum_io.ReadFromFile(
+ config.delf_local_config.pca_parameters.mean_path),
+ dtype=tf.float32)
+ pca_matrix = tf.constant(
+ datum_io.ReadFromFile(
+ config.delf_local_config.pca_parameters.projection_matrix_path),
+ dtype=tf.float32)
+ pca_dim = config.delf_local_config.pca_parameters.pca_dim
+ pca_variances = None
+ if config.delf_local_config.pca_parameters.use_whitening:
+ pca_variances = tf.constant(
+ datum_io.ReadFromFile(
+ config.delf_local_config.pca_parameters.pca_variances_path),
+ dtype=tf.float32)
+
+ # Apply PCA, and whitening if desired.
+ final_descriptors = ApplyPcaAndWhitening(
+ final_descriptors, pca_matrix, pca_mean, pca_dim,
+ config.delf_local_config.pca_parameters.use_whitening, pca_variances)
+
+ # Re-normalize.
+ final_descriptors = tf.nn.l2_normalize(
+ final_descriptors, dim=1, name='pca_l2_normalization')
+
+ return locations, final_descriptors
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/feature_extractor_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/feature_extractor_test.py"
new file mode 100644
index 00000000..882dc9e1
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/feature_extractor_test.py"
@@ -0,0 +1,133 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for DELF feature extractor."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+import tensorflow as tf
+
+from delf import feature_extractor
+
+
+class FeatureExtractorTest(tf.test.TestCase):
+
+ def testNormalizePixelValues(self):
+ image = tf.constant(
+ [[[3, 255, 0], [34, 12, 5]], [[45, 5, 65], [56, 77, 89]]],
+ dtype=tf.uint8)
+ normalized_image = feature_extractor.NormalizePixelValues(
+ image, pixel_value_offset=5.0, pixel_value_scale=2.0)
+ exp_normalized_image = [[[-1.0, 125.0, -2.5], [14.5, 3.5, 0.0]],
+ [[20.0, 0.0, 30.0], [25.5, 36.0, 42.0]]]
+ with self.test_session() as sess:
+ normalized_image_out = sess.run(normalized_image)
+
+ self.assertAllEqual(normalized_image_out, exp_normalized_image)
+
+ def testCalculateReceptiveBoxes(self):
+ boxes = feature_extractor.CalculateReceptiveBoxes(
+ height=1, width=2, rf=291, stride=32, padding=145)
+ exp_boxes = [[-145., -145., 145., 145.], [-145., -113., 145., 177.]]
+ with self.test_session() as sess:
+ boxes_out = sess.run(boxes)
+
+ self.assertAllEqual(exp_boxes, boxes_out)
+
+ def testCalculateKeypointCenters(self):
+ boxes = [[-10.0, 0.0, 11.0, 21.0], [-2.5, 5.0, 18.5, 26.0],
+ [45.0, -2.5, 66.0, 18.5]]
+ centers = feature_extractor.CalculateKeypointCenters(boxes)
+ with self.test_session() as sess:
+ centers_out = sess.run(centers)
+
+ exp_centers = [[0.5, 10.5], [8.0, 15.5], [55.5, 8.0]]
+
+ self.assertAllEqual(exp_centers, centers_out)
+
+ def testExtractKeypointDescriptor(self):
+ image = tf.constant(
+ [[[0, 255, 255], [128, 64, 196]], [[0, 0, 32], [32, 128, 16]]],
+ dtype=tf.uint8)
+
+ # Arbitrary model function used to test ExtractKeypointDescriptor. The
+ # generated feature_map is a replicated version of the image, concatenated
+ # with zeros to achieve the required dimensionality. The attention is simply
+ # the norm of the input image pixels.
+ def _test_model_fn(image, normalized_image, reuse):
+ del normalized_image, reuse # Unused variables in the test.
+ image_shape = tf.shape(image)
+ attention = tf.squeeze(tf.norm(image, axis=3))
+ feature_map = tf.concat(
+ [
+ tf.tile(image, [1, 1, 1, 341]),
+ tf.zeros([1, image_shape[1], image_shape[2], 1])
+ ],
+ axis=3)
+ return attention, feature_map
+
+ boxes, feature_scales, features, scores = (
+ feature_extractor.ExtractKeypointDescriptor(
+ image,
+ layer_name='resnet_v1_50/block3',
+ image_scales=tf.constant([1.0]),
+ iou=1.0,
+ max_feature_num=10,
+ abs_thres=1.5,
+ model_fn=_test_model_fn))
+
+ exp_boxes = [[-145.0, -145.0, 145.0, 145.0], [-113.0, -145.0, 177.0, 145.0]]
+ exp_feature_scales = [1.0, 1.0]
+ exp_features = np.array(
+ np.concatenate(
+ (np.tile([[-1.0, 127.0 / 128.0, 127.0 / 128.0], [-1.0, -1.0, -0.75]
+ ], [1, 341]), np.zeros([2, 1])),
+ axis=1))
+ exp_scores = [[1.723042], [1.600781]]
+
+ with self.test_session() as sess:
+ boxes_out, feature_scales_out, features_out, scores_out = sess.run(
+ [boxes, feature_scales, features, scores])
+
+ self.assertAllEqual(exp_boxes, boxes_out)
+ self.assertAllEqual(exp_feature_scales, feature_scales_out)
+ self.assertAllClose(exp_features, features_out)
+ self.assertAllClose(exp_scores, scores_out)
+
+ def testPcaWhitening(self):
+ data = tf.constant([[1.0, 2.0, -2.0], [-5.0, 0.0, 3.0], [-1.0, 2.0, 0.0],
+ [0.0, 4.0, -1.0]])
+ pca_matrix = tf.constant([[2.0, 0.0, -1.0], [0.0, 1.0, 1.0],
+ [-1.0, 1.0, 3.0]])
+ pca_mean = tf.constant([1.0, 2.0, 3.0])
+ output_dim = 2
+ use_whitening = True
+ pca_variances = tf.constant([4.0, 1.0])
+
+ output = feature_extractor.ApplyPcaAndWhitening(
+ data, pca_matrix, pca_mean, output_dim, use_whitening, pca_variances)
+
+ exp_output = [[2.5, -5.0], [-6.0, -2.0], [-0.5, -3.0], [1.0, -2.0]]
+
+ with self.test_session() as sess:
+ output_out = sess.run(output)
+
+ self.assertAllEqual(exp_output, output_out)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/feature_io.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/feature_io.py"
new file mode 100644
index 00000000..8f7a02bb
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/feature_io.py"
@@ -0,0 +1,197 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Python interface for DelfFeatures proto.
+
+Supports reading and writing DelfFeatures from/to NumPy arrays and files.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+from six.moves import xrange
+import tensorflow as tf
+
+from delf import feature_pb2
+from delf import datum_io
+
+
+def ArraysToDelfFeatures(locations,
+ scales,
+ descriptors,
+ attention,
+ orientations=None):
+ """Converts DELF features to DelfFeatures proto.
+
+ Args:
+ locations: [N, 2] float array which denotes the selected keypoint
+ locations. N is the number of features.
+ scales: [N] float array with feature scales.
+ descriptors: [N, depth] float array with DELF descriptors.
+ attention: [N] float array with attention scores.
+ orientations: [N] float array with orientations. If None, all orientations
+ are set to zero.
+
+ Returns:
+ delf_features: DelfFeatures object.
+ """
+ num_features = len(attention)
+ assert num_features == locations.shape[0]
+ assert num_features == len(scales)
+ assert num_features == descriptors.shape[0]
+
+ if orientations is None:
+ orientations = np.zeros([num_features], dtype=np.float32)
+ else:
+ assert num_features == len(orientations)
+
+ delf_features = feature_pb2.DelfFeatures()
+ for i in range(num_features):
+ delf_feature = delf_features.feature.add()
+ delf_feature.y = locations[i, 0]
+ delf_feature.x = locations[i, 1]
+ delf_feature.scale = scales[i]
+ delf_feature.orientation = orientations[i]
+ delf_feature.strength = attention[i]
+ delf_feature.descriptor.CopyFrom(datum_io.ArrayToDatum(descriptors[i,]))
+
+ return delf_features
+
+
+def DelfFeaturesToArrays(delf_features):
+ """Converts data saved in DelfFeatures to numpy arrays.
+
+  If there are no features, the function returns five empty arrays.
+
+ Args:
+ delf_features: DelfFeatures object.
+
+ Returns:
+ locations: [N, 2] float array which denotes the selected keypoint
+ locations. N is the number of features.
+ scales: [N] float array with feature scales.
+ descriptors: [N, depth] float array with DELF descriptors.
+ attention: [N] float array with attention scores.
+ orientations: [N] float array with orientations.
+ """
+ num_features = len(delf_features.feature)
+ if num_features == 0:
+    return np.array([]), np.array([]), np.array([]), np.array([]), np.array([])
+
+ # Figure out descriptor dimensionality by parsing first one.
+ descriptor_dim = len(
+ datum_io.DatumToArray(delf_features.feature[0].descriptor))
+ locations = np.zeros([num_features, 2])
+ scales = np.zeros([num_features])
+ descriptors = np.zeros([num_features, descriptor_dim])
+ attention = np.zeros([num_features])
+ orientations = np.zeros([num_features])
+
+ for i in range(num_features):
+ delf_feature = delf_features.feature[i]
+ locations[i, 0] = delf_feature.y
+ locations[i, 1] = delf_feature.x
+ scales[i] = delf_feature.scale
+ descriptors[i,] = datum_io.DatumToArray(delf_feature.descriptor)
+ attention[i] = delf_feature.strength
+ orientations[i] = delf_feature.orientation
+
+ return locations, scales, descriptors, attention, orientations
+
+
+def SerializeToString(locations,
+ scales,
+ descriptors,
+ attention,
+ orientations=None):
+ """Converts numpy arrays to serialized DelfFeatures.
+
+ Args:
+ locations: [N, 2] float array which denotes the selected keypoint
+ locations. N is the number of features.
+ scales: [N] float array with feature scales.
+ descriptors: [N, depth] float array with DELF descriptors.
+ attention: [N] float array with attention scores.
+ orientations: [N] float array with orientations. If None, all orientations
+ are set to zero.
+
+ Returns:
+ Serialized DelfFeatures string.
+ """
+ delf_features = ArraysToDelfFeatures(locations, scales, descriptors,
+ attention, orientations)
+ return delf_features.SerializeToString()
+
+
+def ParseFromString(string):
+ """Converts serialized DelfFeatures string to numpy arrays.
+
+ Args:
+ string: Serialized DelfFeatures string.
+
+ Returns:
+ locations: [N, 2] float array which denotes the selected keypoint
+ locations. N is the number of features.
+ scales: [N] float array with feature scales.
+ descriptors: [N, depth] float array with DELF descriptors.
+ attention: [N] float array with attention scores.
+ orientations: [N] float array with orientations.
+ """
+ delf_features = feature_pb2.DelfFeatures()
+ delf_features.ParseFromString(string)
+ return DelfFeaturesToArrays(delf_features)
+
+
+def ReadFromFile(file_path):
+ """Helper function to load data from a DelfFeatures format in a file.
+
+ Args:
+ file_path: Path to file containing data.
+
+ Returns:
+ locations: [N, 2] float array which denotes the selected keypoint
+ locations. N is the number of features.
+ scales: [N] float array with feature scales.
+ descriptors: [N, depth] float array with DELF descriptors.
+ attention: [N] float array with attention scores.
+ orientations: [N] float array with orientations.
+ """
+ with tf.gfile.FastGFile(file_path, 'rb') as f:
+ return ParseFromString(f.read())
+
+
+def WriteToFile(file_path,
+ locations,
+ scales,
+ descriptors,
+ attention,
+ orientations=None):
+ """Helper function to write data to a file in DelfFeatures format.
+
+ Args:
+ file_path: Path to file that will be written.
+ locations: [N, 2] float array which denotes the selected keypoint
+ locations. N is the number of features.
+ scales: [N] float array with feature scales.
+ descriptors: [N, depth] float array with DELF descriptors.
+ attention: [N] float array with attention scores.
+ orientations: [N] float array with orientations. If None, all orientations
+ are set to zero.
+ """
+ serialized_data = SerializeToString(locations, scales, descriptors, attention,
+ orientations)
+ with tf.gfile.FastGFile(file_path, 'w') as f:
+ f.write(serialized_data)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/feature_io_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/feature_io_test.py"
new file mode 100644
index 00000000..377c6a58
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/delf/python/feature_io_test.py"
@@ -0,0 +1,98 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for feature_io, the python interface of DelfFeatures."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+
+import numpy as np
+import tensorflow as tf
+
+from delf import feature_io
+
+
+def create_data():
+ """Creates data to be used in tests.
+
+ Returns:
+ locations: [N, 2] float array which denotes the selected keypoint
+ locations. N is the number of features.
+ scales: [N] float array with feature scales.
+ descriptors: [N, depth] float array with DELF descriptors.
+ attention: [N] float array with attention scores.
+ orientations: [N] float array with orientations.
+ """
+ locations = np.arange(8, dtype=np.float32).reshape(4, 2)
+ scales = np.arange(4, dtype=np.float32)
+ attention = np.arange(4, dtype=np.float32)
+ orientations = np.arange(4, dtype=np.float32)
+ descriptors = np.zeros([4, 1024])
+ descriptors[0,] = np.arange(1024)
+ descriptors[1,] = np.zeros([1024])
+ descriptors[2,] = np.ones([1024])
+ descriptors[3,] = -np.ones([1024])
+
+ return locations, scales, descriptors, attention, orientations
+
+
+class DelfFeaturesIoTest(tf.test.TestCase):
+
+ def testConversionAndBack(self):
+ locations, scales, descriptors, attention, orientations = create_data()
+
+ serialized = feature_io.SerializeToString(locations, scales, descriptors,
+ attention, orientations)
+ parsed_data = feature_io.ParseFromString(serialized)
+
+ self.assertAllEqual(locations, parsed_data[0])
+ self.assertAllEqual(scales, parsed_data[1])
+ self.assertAllEqual(descriptors, parsed_data[2])
+ self.assertAllEqual(attention, parsed_data[3])
+ self.assertAllEqual(orientations, parsed_data[4])
+
+ def testConversionAndBackNoOrientations(self):
+ locations, scales, descriptors, attention, _ = create_data()
+
+ serialized = feature_io.SerializeToString(locations, scales, descriptors,
+ attention)
+ parsed_data = feature_io.ParseFromString(serialized)
+
+ self.assertAllEqual(locations, parsed_data[0])
+ self.assertAllEqual(scales, parsed_data[1])
+ self.assertAllEqual(descriptors, parsed_data[2])
+ self.assertAllEqual(attention, parsed_data[3])
+ self.assertAllEqual(np.zeros([4]), parsed_data[4])
+
+ def testWriteAndReadToFile(self):
+ locations, scales, descriptors, attention, orientations = create_data()
+
+ tmpdir = tf.test.get_temp_dir()
+ filename = os.path.join(tmpdir, 'test.delf')
+ feature_io.WriteToFile(filename, locations, scales, descriptors, attention,
+ orientations)
+ data_read = feature_io.ReadFromFile(filename)
+
+ self.assertAllEqual(locations, data_read[0])
+ self.assertAllEqual(scales, data_read[1])
+ self.assertAllEqual(descriptors, data_read[2])
+ self.assertAllEqual(attention, data_read[3])
+ self.assertAllEqual(orientations, data_read[4])
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/setup.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/setup.py"
new file mode 100644
index 00000000..61b0ac50
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/delf/setup.py"
@@ -0,0 +1,25 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Setup script for delf."""
+
+from setuptools import setup, find_packages
+
+setup(
+ name='delf',
+ version='0.1',
+ include_package_data=True,
+ packages=find_packages(),
+ description='DELF (DEep Local Features)',
+)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/differential_privacy/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/differential_privacy/README.md"
new file mode 100644
index 00000000..9cd41749
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/differential_privacy/README.md"
@@ -0,0 +1,3 @@
+# Deep Learning with Differential Privacy
+
+All of the content from this directory has moved to the [tensorflow/privacy](https://github.com/tensorflow/privacy) repository, which is dedicated to learning with (differential) privacy.
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/README.md"
new file mode 100644
index 00000000..7fec0bb0
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/README.md"
@@ -0,0 +1,123 @@
+## Introduction
+This is the code used for two domain adaptation papers.
+
+The `domain_separation` directory contains code for the "Domain Separation
+Networks" paper by Bousmalis K., Trigeorgis G., et al. which was presented at
+NIPS 2016. The paper can be found here: https://arxiv.org/abs/1608.06019.
+
+The `pixel_domain_adaptation` directory contains the code used for the
+"Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial
+Networks" paper by Bousmalis K., et al. (presented at CVPR 2017). The paper can
+be found here: https://arxiv.org/abs/1612.05424. PixelDA aims to perform domain
+adaptation by transferring the visual style of the target domain (which has few
+or no labels) to a source domain (which has many labels). This is accomplished
+using a Generative Adversarial Network (GAN).
+
+## Contact
+The domain separation code was open-sourced
+by [Konstantinos Bousmalis](https://github.com/bousmalis)
+(konstantinos@google.com), while the pixel level domain adaptation code was
+open-sourced by [David Dohan](https://github.com/dmrd) (ddohan@google.com).
+
+## Installation
+You will need to have the following installed on your machine before trying out the DSN code.
+
+* Tensorflow: https://www.tensorflow.org/install/
+* Bazel: https://bazel.build/
+
+## Important Note
+We are working to open source the pose estimation dataset. For now, the MNIST to
+MNIST-M dataset is available. Check back here in a few weeks or wait for a
+relevant announcement from [@bousmalis](https://twitter.com/bousmalis).
+
+## Initial setup
+In order to run the MNIST to MNIST-M experiments, you will need to set the
+data directory:
+
+```
+$ export DSN_DATA_DIR=/your/dir
+```
+
+Add models and models/slim to your `$PYTHONPATH` (assumes $PWD is /models):
+
+```
+$ export PYTHONPATH=$PYTHONPATH:$PWD:$PWD/slim
+```
+
+## Getting the datasets
+
+You can fetch the MNIST data by running
+
+```
+$ bazel run slim:download_and_convert_data -- --dataset_dir $DSN_DATA_DIR --dataset_name=mnist
+```
+
+The MNIST-M dataset is available online [here](http://bit.ly/2nrlUAJ). Once it is downloaded and extracted into your data directory, create TFRecord files by running:
+```
+$ bazel run domain_adaptation/datasets:download_and_convert_mnist_m -- --dataset_dir $DSN_DATA_DIR
+```
+
+
+
+# Running PixelDA from MNIST to MNIST-M
+You can run PixelDA training as follows (use TensorBoard to examine the results):
+
+```
+$ bazel run domain_adaptation/pixel_domain_adaptation:pixelda_train -- --dataset_dir $DSN_DATA_DIR --source_dataset mnist --target_dataset mnist_m
+```
+
+And evaluation as:
+```
+$ bazel run domain_adaptation/pixel_domain_adaptation:pixelda_eval -- --dataset_dir $DSN_DATA_DIR --source_dataset mnist --target_dataset mnist_m --target_split_name test
+```
+
+The MNIST-M results in the paper were run with the following hparams flag:
+```
+--hparams arch=resnet,domain_loss_weight=0.135603587834,num_training_examples=16000000,style_transfer_loss_weight=0.0113173311334,task_loss_in_g_weight=0.0100959947002,task_tower=mnist,task_tower_in_g_step=true
+```
+
+### A note on terminology/language of the code:
+
+The components of the network can be grouped into two parts, each optimized
+jointly: the generator component and the discriminator component.
+
+The generator component takes either an image or noise vector and produces an
+output image.
+
+The discriminator component takes the generated images and the target images
+and attempts to discriminate between them.
+
+## Running DSN code for adapting MNIST to MNIST-M
+
+First, build the binaries with Bazel:
+
+```
+$ bazel build -c opt domain_adaptation/domain_separation/...
+```
+
+You can then train with the following command:
+
+```
+$ ./bazel-bin/domain_adaptation/domain_separation/dsn_train \
+ --similarity_loss=dann_loss \
+ --basic_tower=dann_mnist \
+ --source_dataset=mnist \
+ --target_dataset=mnist_m \
+ --learning_rate=0.0117249 \
+ --gamma_weight=0.251175 \
+ --weight_decay=1e-6 \
+ --layers_to_regularize=fc3 \
+ --nouse_separation \
+ --master="" \
+ --dataset_dir=${DSN_DATA_DIR} \
+ -v --use_logging
+```
+
+Evaluation can be invoked with the following command:
+
+```
+$ ./bazel-bin/domain_adaptation/domain_separation/dsn_eval \
+ -v --dataset mnist_m --split test --num_examples=9001 \
+ --dataset_dir=${DSN_DATA_DIR}
+```
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/WORKSPACE" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/WORKSPACE"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/datasets/BUILD" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/datasets/BUILD"
new file mode 100644
index 00000000..067a7937
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/datasets/BUILD"
@@ -0,0 +1,45 @@
+# Domain Adaptation Scenarios Datasets
+
+package(
+ default_visibility = [
+ ":internal",
+ ],
+)
+
+licenses(["notice"]) # Apache 2.0
+
+exports_files(["LICENSE"])
+
+package_group(
+ name = "internal",
+ packages = [
+ "//domain_adaptation/...",
+ ],
+)
+
+py_library(
+ name = "dataset_factory",
+ srcs = ["dataset_factory.py"],
+ deps = [
+ ":mnist_m",
+ "//slim:mnist",
+ ],
+)
+
+py_binary(
+ name = "download_and_convert_mnist_m",
+ srcs = ["download_and_convert_mnist_m.py"],
+ deps = [
+
+ "//slim:dataset_utils",
+ ],
+)
+
+py_binary(
+ name = "mnist_m",
+ srcs = ["mnist_m.py"],
+ deps = [
+
+ "//slim:dataset_utils",
+ ],
+)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/datasets/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/datasets/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/datasets/dataset_factory.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/datasets/dataset_factory.py"
new file mode 100644
index 00000000..4ca1b41c
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/datasets/dataset_factory.py"
@@ -0,0 +1,107 @@
+# Copyright 2017 Google Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""A factory-pattern class which returns image/label pairs."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# Dependency imports
+import tensorflow as tf
+
+from slim.datasets import mnist
+from domain_adaptation.datasets import mnist_m
+
+slim = tf.contrib.slim
+
+
+def get_dataset(dataset_name,
+ split_name,
+ dataset_dir,
+ file_pattern=None,
+ reader=None):
+ """Given a dataset name and a split_name returns a Dataset.
+
+ Args:
+ dataset_name: String, the name of the dataset.
+ split_name: A train/test split name.
+ dataset_dir: The directory where the dataset files are stored.
+ file_pattern: The file pattern to use for matching the dataset source files.
+ reader: The subclass of tf.ReaderBase. If left as `None`, then the default
+ reader defined by each dataset is used.
+
+ Returns:
+ A tf-slim `Dataset` class.
+
+ Raises:
+ ValueError: if `dataset_name` isn't recognized.
+ """
+ dataset_name_to_module = {'mnist': mnist, 'mnist_m': mnist_m}
+ if dataset_name not in dataset_name_to_module:
+    raise ValueError('Unknown dataset name: %s.' % dataset_name)
+
+ return dataset_name_to_module[dataset_name].get_split(split_name, dataset_dir,
+ file_pattern, reader)
+
+
+def provide_batch(dataset_name, split_name, dataset_dir, num_readers,
+ batch_size, num_preprocessing_threads):
+ """Provides a batch of images and corresponding labels.
+
+ Args:
+ dataset_name: String, the name of the dataset.
+ split_name: A train/test split name.
+ dataset_dir: The directory where the dataset files are stored.
+ num_readers: The number of readers used by DatasetDataProvider.
+ batch_size: The size of the batch requested.
+ num_preprocessing_threads: The number of preprocessing threads for
+ tf.train.batch.
+
+ Returns:
+ A batch of
+ images: tensor of [batch_size, height, width, channels].
+ labels: dictionary of labels.
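+
+  Example (illustrative only; the data directory below is hypothetical):
+    images, labels = provide_batch('mnist_m', 'train', '/tmp/dsn_data',
+                                   num_readers=4, batch_size=32,
+                                   num_preprocessing_threads=4)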
+ """
+ dataset = get_dataset(dataset_name, split_name, dataset_dir)
+ provider = slim.dataset_data_provider.DatasetDataProvider(
+ dataset,
+ num_readers=num_readers,
+ common_queue_capacity=20 * batch_size,
+ common_queue_min=10 * batch_size)
+ [image, label] = provider.get(['image', 'label'])
+
+  # Convert images to float32 and rescale from [0, 1] to [-1, 1].
+ image = tf.image.convert_image_dtype(image, tf.float32)
+ image -= 0.5
+ image *= 2
+
+ # Load the data.
+ labels = {}
+ images, labels['classes'] = tf.train.batch(
+ [image, label],
+ batch_size=batch_size,
+ num_threads=num_preprocessing_threads,
+ capacity=5 * batch_size)
+ labels['classes'] = slim.one_hot_encoding(labels['classes'],
+ dataset.num_classes)
+
+ # Convert mnist to RGB and 32x32 so that it can match mnist_m.
+ if dataset_name == 'mnist':
+ images = tf.image.grayscale_to_rgb(images)
+ images = tf.image.resize_images(images, [32, 32])
+ return images, labels
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/datasets/download_and_convert_mnist_m.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/datasets/download_and_convert_mnist_m.py"
new file mode 100644
index 00000000..3b5004d3
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/datasets/download_and_convert_mnist_m.py"
@@ -0,0 +1,237 @@
+# Copyright 2017 Google Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+r"""Downloads and converts MNIST-M data to TFRecords of TF-Example protos.
+
+This module downloads the MNIST-M data, uncompresses it, reads the files
+that make up the MNIST-M data and creates two TFRecord datasets: one for train
+and one for test. Each TFRecord dataset is comprised of a set of TF-Example
+protocol buffers, each of which contains a single image and label.
+
+The script should take about a minute to run.
+
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import random
+import sys
+
+# Dependency imports
+import numpy as np
+from six.moves import urllib
+import tensorflow as tf
+
+from slim.datasets import dataset_utils
+
+tf.app.flags.DEFINE_string(
+ 'dataset_dir', None,
+ 'The directory where the output TFRecords and temporary files are saved.')
+
+FLAGS = tf.app.flags.FLAGS
+
+_IMAGE_SIZE = 32
+_NUM_CHANNELS = 3
+
+# The number of images in the training set.
+_NUM_TRAIN_SAMPLES = 59001
+
+# The number of images to be kept from the training set for the validation set.
+_NUM_VALIDATION = 1000
+
+# The number of images in the test set.
+_NUM_TEST_SAMPLES = 9001
+
+# Seed for repeatability.
+_RANDOM_SEED = 0
+
+# The names of the classes.
+_CLASS_NAMES = [
+ 'zero',
+ 'one',
+ 'two',
+ 'three',
+ 'four',
+ 'five',
+    'six',
+ 'seven',
+ 'eight',
+ 'nine',
+]
+
+
+class ImageReader(object):
+ """Helper class that provides TensorFlow image coding utilities."""
+
+ def __init__(self):
+ # Initializes function that decodes RGB PNG data.
+ self._decode_png_data = tf.placeholder(dtype=tf.string)
+ self._decode_png = tf.image.decode_png(self._decode_png_data, channels=3)
+
+ def read_image_dims(self, sess, image_data):
+ image = self.decode_png(sess, image_data)
+ return image.shape[0], image.shape[1]
+
+ def decode_png(self, sess, image_data):
+ image = sess.run(
+ self._decode_png, feed_dict={self._decode_png_data: image_data})
+ assert len(image.shape) == 3
+ assert image.shape[2] == 3
+ return image
+
+
+def _convert_dataset(split_name, filenames, filename_to_class_id, dataset_dir):
+ """Converts the given filenames to a TFRecord dataset.
+
+ Args:
+    split_name: The name of the split, one of 'train', 'valid' or 'test'.
+ filenames: A list of absolute paths to png images.
+ filename_to_class_id: A dictionary from filenames (strings) to class ids
+ (integers).
+ dataset_dir: The directory where the converted datasets are stored.
+ """
+ print('Converting the {} split.'.format(split_name))
+ # Train and validation splits are both in the train directory.
+ if split_name in ['train', 'valid']:
+ png_directory = os.path.join(dataset_dir, 'mnist_m', 'mnist_m_train')
+ elif split_name == 'test':
+ png_directory = os.path.join(dataset_dir, 'mnist_m', 'mnist_m_test')
+
+ with tf.Graph().as_default():
+ image_reader = ImageReader()
+
+ with tf.Session('') as sess:
+ output_filename = _get_output_filename(dataset_dir, split_name)
+
+ with tf.python_io.TFRecordWriter(output_filename) as tfrecord_writer:
+ for filename in filenames:
+ # Read the filename:
+          image_data = tf.gfile.FastGFile(
+              os.path.join(png_directory, filename), 'rb').read()
+ height, width = image_reader.read_image_dims(sess, image_data)
+
+ class_id = filename_to_class_id[filename]
+ example = dataset_utils.image_to_tfexample(image_data, 'png', height,
+ width, class_id)
+ tfrecord_writer.write(example.SerializeToString())
+
+ sys.stdout.write('\n')
+ sys.stdout.flush()
+
+
+def _extract_labels(label_filename):
+ """Extract the labels into a dict of filenames to int labels.
+
+ Args:
+    label_filename: The filename of the MNIST-M labels.
+
+ Returns:
+ A dictionary of filenames to int labels.
+ """
+ print('Extracting labels from: ', label_filename)
+ label_file = tf.gfile.FastGFile(label_filename, 'r').readlines()
+ label_lines = [line.rstrip('\n').split() for line in label_file]
+ labels = {}
+ for line in label_lines:
+ assert len(line) == 2
+ labels[line[0]] = int(line[1])
+ return labels
+
+
+def _get_output_filename(dataset_dir, split_name):
+ """Creates the output filename.
+
+ Args:
+ dataset_dir: The directory where the temporary files are stored.
+ split_name: The name of the train/test split.
+
+ Returns:
+ An absolute file path.
+ """
+ return '%s/mnist_m_%s.tfrecord' % (dataset_dir, split_name)
+
+
+def _get_filenames(dataset_dir):
+  """Returns a list of image filenames.
+
+  Args:
+    dataset_dir: A directory containing a set of PNG-encoded MNIST-M images.
+
+ Returns:
+ A list of image file paths, relative to `dataset_dir`.
+ """
+ photo_filenames = []
+ for filename in os.listdir(dataset_dir):
+ photo_filenames.append(filename)
+ return photo_filenames
+
+
+def run(dataset_dir):
+ """Runs the download and conversion operation.
+
+ Args:
+ dataset_dir: The dataset directory where the dataset is stored.
+ """
+ if not tf.gfile.Exists(dataset_dir):
+ tf.gfile.MakeDirs(dataset_dir)
+
+ train_filename = _get_output_filename(dataset_dir, 'train')
+ testing_filename = _get_output_filename(dataset_dir, 'test')
+
+ if tf.gfile.Exists(train_filename) and tf.gfile.Exists(testing_filename):
+ print('Dataset files already exist. Exiting without re-creating them.')
+ return
+
+ # TODO(konstantinos): Add download and cleanup functionality
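+  # Until that is added, the raw MNIST-M data is expected to already be
+  # extracted under <dataset_dir>/mnist_m/, with mnist_m_train/ and
+  # mnist_m_test/ image directories plus the mnist_m_train_labels.txt and
+  # mnist_m_test_labels.txt label files.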
+
+ train_validation_filenames = _get_filenames(
+ os.path.join(dataset_dir, 'mnist_m', 'mnist_m_train'))
+ test_filenames = _get_filenames(
+ os.path.join(dataset_dir, 'mnist_m', 'mnist_m_test'))
+
+ # Divide into train and validation:
+ random.seed(_RANDOM_SEED)
+ random.shuffle(train_validation_filenames)
+ train_filenames = train_validation_filenames[_NUM_VALIDATION:]
+ validation_filenames = train_validation_filenames[:_NUM_VALIDATION]
+
+ train_validation_filenames_to_class_ids = _extract_labels(
+ os.path.join(dataset_dir, 'mnist_m', 'mnist_m_train_labels.txt'))
+ test_filenames_to_class_ids = _extract_labels(
+ os.path.join(dataset_dir, 'mnist_m', 'mnist_m_test_labels.txt'))
+
+ # Convert the train, validation, and test sets.
+ _convert_dataset('train', train_filenames,
+ train_validation_filenames_to_class_ids, dataset_dir)
+ _convert_dataset('valid', validation_filenames,
+ train_validation_filenames_to_class_ids, dataset_dir)
+ _convert_dataset('test', test_filenames, test_filenames_to_class_ids,
+ dataset_dir)
+
+ # Finally, write the labels file:
+ labels_to_class_names = dict(zip(range(len(_CLASS_NAMES)), _CLASS_NAMES))
+ dataset_utils.write_label_file(labels_to_class_names, dataset_dir)
+
+ print('\nFinished converting the MNIST-M dataset!')
+
+
+def main(_):
+ assert FLAGS.dataset_dir
+ run(FLAGS.dataset_dir)
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/datasets/mnist_m.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/datasets/mnist_m.py"
new file mode 100644
index 00000000..fab6c443
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/datasets/mnist_m.py"
@@ -0,0 +1,98 @@
+# Copyright 2017 Google Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Provides data for the MNIST-M dataset.
+
+The dataset scripts used to create the dataset can be found at:
+tensorflow_models/research/domain_adaptation/datasets/download_and_convert_mnist_m.py
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+# Dependency imports
+import tensorflow as tf
+
+from slim.datasets import dataset_utils
+
+slim = tf.contrib.slim
+
+_FILE_PATTERN = 'mnist_m_%s.tfrecord'
+
+_SPLITS_TO_SIZES = {'train': 58001, 'valid': 1000, 'test': 9001}
+
+_NUM_CLASSES = 10
+
+_ITEMS_TO_DESCRIPTIONS = {
+    'image': 'A [32 x 32 x 3] RGB image.',
+ 'label': 'A single integer between 0 and 9',
+}
+
+
+def get_split(split_name, dataset_dir, file_pattern=None, reader=None):
+  """Gets a dataset tuple with instructions for reading MNIST-M.
+
+  Args:
+    split_name: A train/valid/test split name.
+    dataset_dir: The base directory of the dataset sources.
+    file_pattern: The file pattern to use for matching the dataset source
+      files. If left as `None`, the default file pattern is used.
+    reader: The subclass of tf.ReaderBase. If left as `None`, then the default
+      reader defined by each dataset is used.
+
+ Returns:
+ A `Dataset` namedtuple.
+
+ Raises:
+ ValueError: if `split_name` is not a valid train/test split.
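+
+  Example (illustrative; the data directory is hypothetical):
+    dataset = get_split('train', '/tmp/dsn_data')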
+ """
+ if split_name not in _SPLITS_TO_SIZES:
+ raise ValueError('split name %s was not recognized.' % split_name)
+
+ if not file_pattern:
+ file_pattern = _FILE_PATTERN
+ file_pattern = os.path.join(dataset_dir, file_pattern % split_name)
+
+ # Allowing None in the signature so that dataset_factory can use the default.
+ if reader is None:
+ reader = tf.TFRecordReader
+
+ keys_to_features = {
+ 'image/encoded':
+ tf.FixedLenFeature((), tf.string, default_value=''),
+ 'image/format':
+ tf.FixedLenFeature((), tf.string, default_value='png'),
+ 'image/class/label':
+ tf.FixedLenFeature(
+ [1], tf.int64, default_value=tf.zeros([1], dtype=tf.int64)),
+ }
+
+ items_to_handlers = {
+ 'image': slim.tfexample_decoder.Image(shape=[32, 32, 3], channels=3),
+ 'label': slim.tfexample_decoder.Tensor('image/class/label', shape=[]),
+ }
+
+ decoder = slim.tfexample_decoder.TFExampleDecoder(
+ keys_to_features, items_to_handlers)
+
+ labels_to_names = None
+ if dataset_utils.has_labels(dataset_dir):
+ labels_to_names = dataset_utils.read_label_file(dataset_dir)
+
+ return slim.dataset.Dataset(
+ data_sources=file_pattern,
+ reader=reader,
+ decoder=decoder,
+ num_samples=_SPLITS_TO_SIZES[split_name],
+ num_classes=_NUM_CLASSES,
+ items_to_descriptions=_ITEMS_TO_DESCRIPTIONS,
+ labels_to_names=labels_to_names)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/BUILD" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/BUILD"
new file mode 100644
index 00000000..14dceda2
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/BUILD"
@@ -0,0 +1,157 @@
+# Domain Separation Networks
+
+package(
+ default_visibility = [
+ ":internal",
+ ],
+)
+
+licenses(["notice"]) # Apache 2.0
+
+exports_files(["LICENSE"])
+
+package_group(
+ name = "internal",
+ packages = [
+ "//domain_adaptation/...",
+ ],
+)
+
+py_library(
+ name = "models",
+ srcs = [
+ "models.py",
+ ],
+ deps = [
+ ":utils",
+ ],
+)
+
+py_library(
+ name = "losses",
+ srcs = [
+ "losses.py",
+ ],
+ deps = [
+ ":grl_op_grads_py",
+ ":grl_op_shapes_py",
+ ":grl_ops",
+ ":utils",
+ ],
+)
+
+py_test(
+ name = "losses_test",
+ srcs = [
+ "losses_test.py",
+ ],
+ deps = [
+ ":losses",
+ ":utils",
+ ],
+)
+
+py_library(
+ name = "dsn",
+ srcs = [
+ "dsn.py",
+ ],
+ deps = [
+ ":grl_op_grads_py",
+ ":grl_op_shapes_py",
+ ":grl_ops",
+ ":losses",
+ ":models",
+ ":utils",
+ ],
+)
+
+py_test(
+ name = "dsn_test",
+ srcs = [
+ "dsn_test.py",
+ ],
+ deps = [
+ ":dsn",
+ ],
+)
+
+py_binary(
+ name = "dsn_train",
+ srcs = [
+ "dsn_train.py",
+ ],
+ deps = [
+ ":dsn",
+ ":models",
+ "//domain_adaptation/datasets:dataset_factory",
+ ],
+)
+
+py_binary(
+ name = "dsn_eval",
+ srcs = [
+ "dsn_eval.py",
+ ],
+ deps = [
+ ":dsn",
+ ":models",
+ "//domain_adaptation/datasets:dataset_factory",
+ ],
+)
+
+py_test(
+ name = "models_test",
+ srcs = [
+ "models_test.py",
+ ],
+ deps = [
+ ":models",
+ "//domain_adaptation/datasets:dataset_factory",
+ ],
+)
+
+py_library(
+ name = "utils",
+ srcs = [
+ "utils.py",
+ ],
+ deps = [
+ ],
+)
+
+py_library(
+ name = "grl_op_grads_py",
+ srcs = [
+ "grl_op_grads.py",
+ ],
+ deps = [
+ ":grl_ops",
+ ],
+)
+
+py_library(
+ name = "grl_op_shapes_py",
+ srcs = [
+ "grl_op_shapes.py",
+ ],
+ deps = [
+ ],
+)
+
+py_library(
+ name = "grl_ops",
+ srcs = ["grl_ops.py"],
+ data = ["_grl_ops.so"],
+)
+
+py_test(
+ name = "grl_ops_test",
+ size = "small",
+ srcs = ["grl_ops_test.py"],
+ deps = [
+ ":grl_op_grads_py",
+ ":grl_op_shapes_py",
+ ":grl_ops",
+ ],
+)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/dsn.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/dsn.py"
new file mode 100644
index 00000000..3018e8a7
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/dsn.py"
@@ -0,0 +1,355 @@
+# Copyright 2016 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Functions to create a DSN model and add the different losses to it.
+
+Specifically, in this file we define the:
+ - Shared Encoding Similarity Loss Module, with:
+ - The MMD Similarity method
+ - The Correlation Similarity method
+ - The Gradient Reversal (Domain-Adversarial) method
+ - Difference Loss Module
+ - Reconstruction Loss Module
+ - Task Loss Module
+"""
+from functools import partial
+
+import tensorflow as tf
+
+import losses
+import models
+import utils
+
+slim = tf.contrib.slim
+
+
+################################################################################
+# HELPER FUNCTIONS
+################################################################################
+def dsn_loss_coefficient(params):
+ """The global_step-dependent weight that specifies when to kick in DSN losses.
+
+ Args:
+ params: A dictionary of parameters. Expecting 'domain_separation_startpoint'
+
+ Returns:
+    A weight that effectively enables or disables the DSN-related losses,
+ i.e. similarity, difference, and reconstruction losses.
+ """
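+  # For example, with params['domain_separation_startpoint'] = 10000, the
+  # returned coefficient is 1e-10 for the first 10000 steps and 1.0 afterwards.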
+ return tf.where(
+ tf.less(slim.get_or_create_global_step(),
+ params['domain_separation_startpoint']), 1e-10, 1.0)
+
+
+################################################################################
+# MODEL CREATION
+################################################################################
+def create_model(source_images, source_labels, domain_selection_mask,
+ target_images, target_labels, similarity_loss, params,
+ basic_tower_name):
+ """Creates a DSN model.
+
+ Args:
+ source_images: images from the source domain, a tensor of size
+ [batch_size, height, width, channels]
+ source_labels: a dictionary with the name, tensor pairs. 'classes' is one-
+ hot for the number of classes.
+ domain_selection_mask: a boolean tensor of size [batch_size, ] which denotes
+ the labeled images that belong to the source domain.
+ target_images: images from the target domain, a tensor of size
+      [batch_size, height, width, channels].
+ target_labels: a dictionary with the name, tensor pairs.
+ similarity_loss: The type of method to use for encouraging
+ the codes from the shared encoder to be similar.
+ params: A dictionary of parameters. Expecting 'weight_decay',
+ 'layers_to_regularize', 'use_separation', 'domain_separation_startpoint',
+ 'alpha_weight', 'beta_weight', 'gamma_weight', 'recon_loss_name',
+ 'decoder_name', 'encoder_name'
+ basic_tower_name: the name of the tower to use for the shared encoder.
+
+ Raises:
+ ValueError: if the arch is not one of the available architectures.
+ """
+ network = getattr(models, basic_tower_name)
+ num_classes = source_labels['classes'].get_shape().as_list()[1]
+
+ # Make sure we are using the appropriate number of classes.
+ network = partial(network, num_classes=num_classes)
+
+ # Add the classification/pose estimation loss to the source domain.
+ source_endpoints = add_task_loss(source_images, source_labels, network,
+ params)
+
+ if similarity_loss == 'none':
+ # No domain adaptation, we can stop here.
+ return
+
+ with tf.variable_scope('towers', reuse=True):
+ target_logits, target_endpoints = network(
+ target_images, weight_decay=params['weight_decay'], prefix='target')
+
+ # Plot target accuracy of the train set.
+ target_accuracy = utils.accuracy(
+ tf.argmax(target_logits, 1), tf.argmax(target_labels['classes'], 1))
+
+ if 'quaternions' in target_labels:
+ target_quaternion_loss = losses.log_quaternion_loss(
+ target_labels['quaternions'], target_endpoints['quaternion_pred'],
+ params)
+ tf.summary.scalar('eval/Target quaternions', target_quaternion_loss)
+
+ tf.summary.scalar('eval/Target accuracy', target_accuracy)
+
+ source_shared = source_endpoints[params['layers_to_regularize']]
+ target_shared = target_endpoints[params['layers_to_regularize']]
+
+ # When using the semisupervised model we include labeled target data in the
+  # source classifier. We do not want to include these target domain samples
+  # when we use the similarity loss.
+ indices = tf.range(0, source_shared.get_shape().as_list()[0])
+ indices = tf.boolean_mask(indices, domain_selection_mask)
+ add_similarity_loss(similarity_loss,
+ tf.gather(source_shared, indices),
+ tf.gather(target_shared, indices), params)
+
+ if params['use_separation']:
+ add_autoencoders(
+ source_images,
+ source_shared,
+ target_images,
+ target_shared,
+ params=params,)
+
+
+def add_similarity_loss(method_name,
+ source_samples,
+ target_samples,
+ params,
+ scope=None):
+ """Adds a loss encouraging the shared encoding from each domain to be similar.
+
+ Args:
+ method_name: the name of the encoding similarity method to use. Valid
+      options include `dann_loss`, `mmd_loss` or `correlation_loss`.
+ source_samples: a tensor of shape [num_samples, num_features].
+ target_samples: a tensor of shape [num_samples, num_features].
+ params: a dictionary of parameters. Expecting 'gamma_weight'.
+ scope: optional name scope for summary tags.
+ Raises:
+ ValueError: if `method_name` is not recognized.
+ """
+ weight = dsn_loss_coefficient(params) * params['gamma_weight']
+ method = getattr(losses, method_name)
+ method(source_samples, target_samples, weight, scope)
+
+
+def add_reconstruction_loss(recon_loss_name, images, recons, weight, domain):
+ """Adds a reconstruction loss.
+
+ Args:
+ recon_loss_name: The name of the reconstruction loss.
+ images: A `Tensor` of size [batch_size, height, width, 3].
+ recons: A `Tensor` whose size matches `images`.
+ weight: A scalar coefficient for the loss.
+ domain: The name of the domain being reconstructed.
+
+ Raises:
+ ValueError: If `recon_loss_name` is not recognized.
+ """
+ if recon_loss_name == 'sum_of_pairwise_squares':
+ loss_fn = tf.contrib.losses.mean_pairwise_squared_error
+ elif recon_loss_name == 'sum_of_squares':
+ loss_fn = tf.contrib.losses.mean_squared_error
+ else:
+ raise ValueError('recon_loss_name value [%s] not recognized.' %
+ recon_loss_name)
+
+ loss = loss_fn(recons, images, weight)
+ assert_op = tf.Assert(tf.is_finite(loss), [loss])
+ with tf.control_dependencies([assert_op]):
+ tf.summary.scalar('losses/%s Recon Loss' % domain, loss)
+
+
+def add_autoencoders(source_data, source_shared, target_data, target_shared,
+ params):
+ """Adds the encoders/decoders for our domain separation model w/ incoherence.
+
+ Args:
+ source_data: images from the source domain, a tensor of size
+ [batch_size, height, width, channels]
+ source_shared: a tensor with first dimension batch_size
+ target_data: images from the target domain, a tensor of size
+ [batch_size, height, width, channels]
+ target_shared: a tensor with first dimension batch_size
+ params: A dictionary of parameters. Expecting 'layers_to_regularize',
+ 'beta_weight', 'alpha_weight', 'recon_loss_name', 'decoder_name',
+ 'encoder_name', 'weight_decay'
+ """
+
+ def normalize_images(images):
+ images -= tf.reduce_min(images)
+ return images / tf.reduce_max(images)
+
+ def concat_operation(shared_repr, private_repr):
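+    # Note: despite the name, the shared and private representations are
+    # combined by element-wise summation rather than concatenation.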
+ return shared_repr + private_repr
+
+ mu = dsn_loss_coefficient(params)
+
+ # The layer to concatenate the networks at.
+ concat_layer = params['layers_to_regularize']
+
+ # The coefficient for modulating the private/shared difference loss.
+ difference_loss_weight = params['beta_weight'] * mu
+
+ # The reconstruction weight.
+ recon_loss_weight = params['alpha_weight'] * mu
+
+ # The reconstruction loss to use.
+ recon_loss_name = params['recon_loss_name']
+
+ # The decoder/encoder to use.
+ decoder_name = params['decoder_name']
+ encoder_name = params['encoder_name']
+
+ _, height, width, _ = source_data.get_shape().as_list()
+ code_size = source_shared.get_shape().as_list()[-1]
+ weight_decay = params['weight_decay']
+
+ encoder_fn = getattr(models, encoder_name)
+  # Private encoders for the source and target domains.
+ with tf.variable_scope('source_encoder'):
+ source_endpoints = encoder_fn(
+ source_data, code_size, weight_decay=weight_decay)
+
+ with tf.variable_scope('target_encoder'):
+ target_endpoints = encoder_fn(
+ target_data, code_size, weight_decay=weight_decay)
+
+ decoder_fn = getattr(models, decoder_name)
+
+ decoder = partial(
+ decoder_fn,
+ height=height,
+ width=width,
+ channels=source_data.get_shape().as_list()[-1],
+ weight_decay=weight_decay)
+
+ # Source Auto-encoding.
+ source_private = source_endpoints[concat_layer]
+ target_private = target_endpoints[concat_layer]
+ with tf.variable_scope('decoder'):
+ source_recons = decoder(concat_operation(source_shared, source_private))
+
+ with tf.variable_scope('decoder', reuse=True):
+ source_private_recons = decoder(
+ concat_operation(tf.zeros_like(source_private), source_private))
+ source_shared_recons = decoder(
+ concat_operation(source_shared, tf.zeros_like(source_shared)))
+
+ with tf.variable_scope('decoder', reuse=True):
+ target_recons = decoder(concat_operation(target_shared, target_private))
+ target_shared_recons = decoder(
+ concat_operation(target_shared, tf.zeros_like(target_shared)))
+ target_private_recons = decoder(
+ concat_operation(tf.zeros_like(target_private), target_private))
+
+ losses.difference_loss(
+ source_private,
+ source_shared,
+ weight=difference_loss_weight,
+ name='Source')
+ losses.difference_loss(
+ target_private,
+ target_shared,
+ weight=difference_loss_weight,
+ name='Target')
+
+ add_reconstruction_loss(recon_loss_name, source_data, source_recons,
+ recon_loss_weight, 'source')
+ add_reconstruction_loss(recon_loss_name, target_data, target_recons,
+ recon_loss_weight, 'target')
+
+ # Add summaries
+ source_reconstructions = tf.concat(
+ axis=2,
+ values=map(normalize_images, [
+ source_data, source_recons, source_shared_recons,
+ source_private_recons
+ ]))
+ target_reconstructions = tf.concat(
+ axis=2,
+ values=map(normalize_images, [
+ target_data, target_recons, target_shared_recons,
+ target_private_recons
+ ]))
+ tf.summary.image(
+ 'Source Images:Recons:RGB',
+ source_reconstructions[:, :, :, :3],
+ max_outputs=10)
+ tf.summary.image(
+ 'Target Images:Recons:RGB',
+ target_reconstructions[:, :, :, :3],
+ max_outputs=10)
+
+ if source_reconstructions.get_shape().as_list()[3] == 4:
+ tf.summary.image(
+ 'Source Images:Recons:Depth',
+ source_reconstructions[:, :, :, 3:4],
+ max_outputs=10)
+ tf.summary.image(
+ 'Target Images:Recons:Depth',
+ target_reconstructions[:, :, :, 3:4],
+ max_outputs=10)
+
+
+def add_task_loss(source_images, source_labels, basic_tower, params):
+ """Adds a classification and/or pose estimation loss to the model.
+
+ Args:
+ source_images: images from the source domain, a tensor of size
+ [batch_size, height, width, channels]
+ source_labels: labels from the source domain, a tensor of size [batch_size].
+ or a tuple of (quaternions, class_labels)
+ basic_tower: a function that creates the single tower of the model.
+ params: A dictionary of parameters. Expecting 'weight_decay', 'pose_weight'.
+ Returns:
+ The source endpoints.
+
+ Raises:
+ RuntimeError: if basic tower does not support pose estimation.
+ """
+ with tf.variable_scope('towers'):
+ source_logits, source_endpoints = basic_tower(
+ source_images, weight_decay=params['weight_decay'], prefix='Source')
+
+ if 'quaternions' in source_labels: # We have pose estimation as well
+ if 'quaternion_pred' not in source_endpoints:
+ raise RuntimeError('Please use a model for estimation e.g. pose_mini')
+
+ loss = losses.log_quaternion_loss(source_labels['quaternions'],
+ source_endpoints['quaternion_pred'],
+ params)
+
+ assert_op = tf.Assert(tf.is_finite(loss), [loss])
+ with tf.control_dependencies([assert_op]):
+ quaternion_loss = loss
+ tf.summary.histogram('log_quaternion_loss_hist', quaternion_loss)
+ slim.losses.add_loss(quaternion_loss * params['pose_weight'])
+ tf.summary.scalar('losses/quaternion_loss', quaternion_loss)
+
+ classification_loss = tf.losses.softmax_cross_entropy(
+ source_labels['classes'], source_logits)
+
+ tf.summary.scalar('losses/classification_loss', classification_loss)
+ return source_endpoints
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/dsn_eval.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/dsn_eval.py"
new file mode 100644
index 00000000..b6cccdfc
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/dsn_eval.py"
@@ -0,0 +1,161 @@
+# Copyright 2016 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+# pylint: disable=line-too-long
+"""Evaluation for Domain Separation Networks (DSNs)."""
+# pylint: enable=line-too-long
+import math
+
+import numpy as np
+from six.moves import xrange
+import tensorflow as tf
+
+from domain_adaptation.datasets import dataset_factory
+from domain_adaptation.domain_separation import losses
+from domain_adaptation.domain_separation import models
+
+slim = tf.contrib.slim
+
+FLAGS = tf.app.flags.FLAGS
+
+tf.app.flags.DEFINE_integer('batch_size', 32,
+ 'The number of images in each batch.')
+
+tf.app.flags.DEFINE_string('master', '',
+ 'BNS name of the TensorFlow master to use.')
+
+tf.app.flags.DEFINE_string('checkpoint_dir', '/tmp/da/',
+ 'Directory where the model was written to.')
+
+tf.app.flags.DEFINE_string(
+ 'eval_dir', '/tmp/da/',
+ 'Directory where we should write the tf summaries to.')
+
+tf.app.flags.DEFINE_string('dataset_dir', None,
+ 'The directory where the dataset files are stored.')
+
+tf.app.flags.DEFINE_string('dataset', 'mnist_m',
+ 'Which dataset to test on: "mnist", "mnist_m".')
+
+tf.app.flags.DEFINE_string('split', 'valid',
+ 'Which portion to test on: "valid", "test".')
+
+tf.app.flags.DEFINE_integer('num_examples', 1000, 'Number of test examples.')
+
+tf.app.flags.DEFINE_string('basic_tower', 'dann_mnist',
+ 'The basic tower building block.')
+
+tf.app.flags.DEFINE_bool('enable_precision_recall', False,
+ 'If True, precision and recall for each class will '
+ 'be added to the metrics.')
+
+tf.app.flags.DEFINE_bool('use_logging', False, 'Debugging messages.')
+
+
+def quaternion_metric(predictions, labels):
+ params = {'batch_size': FLAGS.batch_size, 'use_logging': False}
+ logcost = losses.log_quaternion_loss_batch(predictions, labels, params)
+ return slim.metrics.streaming_mean(logcost)
+
+
+def angle_diff(true_q, pred_q):
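+  # Angular error in degrees between pairs of unit quaternions, computed as
+  # 2 * arccos(|<true_q, pred_q>|) for each row.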
+ angles = 2 * (
+ 180.0 /
+ np.pi) * np.arccos(np.abs(np.sum(np.multiply(pred_q, true_q), axis=1)))
+ return angles
+
+
+def provide_batch_fn():
+ """ The provide_batch function to use. """
+ return dataset_factory.provide_batch
+
+
+def main(_):
+ g = tf.Graph()
+ with g.as_default():
+ # Load the data.
+ images, labels = provide_batch_fn()(
+ FLAGS.dataset, FLAGS.split, FLAGS.dataset_dir, 4, FLAGS.batch_size, 4)
+
+ num_classes = labels['classes'].get_shape().as_list()[1]
+
+ tf.summary.image('eval_images', images, max_outputs=3)
+
+ # Define the model:
+ with tf.variable_scope('towers'):
+ basic_tower = getattr(models, FLAGS.basic_tower)
+ predictions, endpoints = basic_tower(
+ images,
+ num_classes=num_classes,
+ is_training=False,
+ batch_norm_params=None)
+ metric_names_to_values = {}
+
+ # Define the metrics:
+ if 'quaternions' in labels: # Also have to evaluate pose estimation!
+ quaternion_loss = quaternion_metric(labels['quaternions'],
+ endpoints['quaternion_pred'])
+
+ angle_errors, = tf.py_func(
+ angle_diff, [labels['quaternions'], endpoints['quaternion_pred']],
+ [tf.float32])
+
+ metric_names_to_values[
+ 'Angular mean error'] = slim.metrics.streaming_mean(angle_errors)
+ metric_names_to_values['Quaternion Loss'] = quaternion_loss
+
+ accuracy = tf.contrib.metrics.streaming_accuracy(
+ tf.argmax(predictions, 1), tf.argmax(labels['classes'], 1))
+
+ predictions = tf.argmax(predictions, 1)
+ labels = tf.argmax(labels['classes'], 1)
+ metric_names_to_values['Accuracy'] = accuracy
+
+ if FLAGS.enable_precision_recall:
+ for i in xrange(num_classes):
+ index_map = tf.one_hot(i, depth=num_classes)
+ name = 'PR/Precision_{}'.format(i)
+ metric_names_to_values[name] = slim.metrics.streaming_precision(
+ tf.gather(index_map, predictions), tf.gather(index_map, labels))
+ name = 'PR/Recall_{}'.format(i)
+ metric_names_to_values[name] = slim.metrics.streaming_recall(
+ tf.gather(index_map, predictions), tf.gather(index_map, labels))
+
+ names_to_values, names_to_updates = slim.metrics.aggregate_metric_map(
+ metric_names_to_values)
+
+ # Create the summary ops such that they also print out to std output:
+ summary_ops = []
+    for metric_name, metric_value in names_to_values.items():
+ op = tf.summary.scalar(metric_name, metric_value)
+ op = tf.Print(op, [metric_value], metric_name)
+ summary_ops.append(op)
+
+ # This ensures that we make a single pass over all of the data.
+ num_batches = math.ceil(FLAGS.num_examples / float(FLAGS.batch_size))
+
+ # Setup the global step.
+ slim.get_or_create_global_step()
+ slim.evaluation.evaluation_loop(
+ FLAGS.master,
+ checkpoint_dir=FLAGS.checkpoint_dir,
+ logdir=FLAGS.eval_dir,
+ num_evals=num_batches,
+ eval_op=names_to_updates.values(),
+ summary_op=tf.summary.merge(summary_ops))
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/dsn_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/dsn_test.py"
new file mode 100644
index 00000000..3d687398
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/dsn_test.py"
@@ -0,0 +1,157 @@
+# Copyright 2016 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for DSN model assembly functions."""
+
+import numpy as np
+import tensorflow as tf
+
+import dsn
+
+
+class HelperFunctionsTest(tf.test.TestCase):
+
+ def testBasicDomainSeparationStartPoint(self):
+ with self.test_session() as sess:
+ # Test for when global_step < domain_separation_startpoint
+ step = tf.contrib.slim.get_or_create_global_step()
+ sess.run(tf.global_variables_initializer()) # global_step = 0
+ params = {'domain_separation_startpoint': 2}
+ weight = dsn.dsn_loss_coefficient(params)
+ weight_np = sess.run(weight)
+ self.assertAlmostEqual(weight_np, 1e-10)
+
+ step_op = tf.assign_add(step, 1)
+ step_np = sess.run(step_op) # global_step = 1
+ weight = dsn.dsn_loss_coefficient(params)
+ weight_np = sess.run(weight)
+ self.assertAlmostEqual(weight_np, 1e-10)
+
+ # Test for when global_step >= domain_separation_startpoint
+ step_np = sess.run(step_op) # global_step = 2
+ tf.logging.info(step_np)
+ weight = dsn.dsn_loss_coefficient(params)
+ weight_np = sess.run(weight)
+ self.assertAlmostEqual(weight_np, 1.0)
+
+
+class DsnModelAssemblyTest(tf.test.TestCase):
+
+ def _testBuildDefaultModel(self):
+ images = tf.to_float(np.random.rand(32, 28, 28, 1))
+ labels = {}
+ labels['classes'] = tf.one_hot(
+ tf.to_int32(np.random.randint(0, 9, (32))), 10)
+
+ params = {
+ 'use_separation': True,
+ 'layers_to_regularize': 'fc3',
+ 'weight_decay': 0.0,
+ 'ps_tasks': 1,
+ 'domain_separation_startpoint': 1,
+ 'alpha_weight': 1,
+ 'beta_weight': 1,
+ 'gamma_weight': 1,
+ 'recon_loss_name': 'sum_of_squares',
+ 'decoder_name': 'small_decoder',
+ 'encoder_name': 'default_encoder',
+ }
+ return images, labels, params
+
+ def testBuildModelDann(self):
+ images, labels, params = self._testBuildDefaultModel()
+
+ with self.test_session():
+ dsn.create_model(images, labels,
+ tf.cast(tf.ones([32,]), tf.bool), images, labels,
+ 'dann_loss', params, 'dann_mnist')
+ loss_tensors = tf.contrib.losses.get_losses()
+ self.assertEqual(len(loss_tensors), 6)
+
+ def testBuildModelDannSumOfPairwiseSquares(self):
+    images, labels, params = self._testBuildDefaultModel()
+    params['recon_loss_name'] = 'sum_of_pairwise_squares'
+
+ with self.test_session():
+ dsn.create_model(images, labels,
+ tf.cast(tf.ones([32,]), tf.bool), images, labels,
+ 'dann_loss', params, 'dann_mnist')
+ loss_tensors = tf.contrib.losses.get_losses()
+ self.assertEqual(len(loss_tensors), 6)
+
+ def testBuildModelDannMultiPSTasks(self):
+ images, labels, params = self._testBuildDefaultModel()
+ params['ps_tasks'] = 10
+ with self.test_session():
+ dsn.create_model(images, labels,
+ tf.cast(tf.ones([32,]), tf.bool), images, labels,
+ 'dann_loss', params, 'dann_mnist')
+ loss_tensors = tf.contrib.losses.get_losses()
+ self.assertEqual(len(loss_tensors), 6)
+
+ def testBuildModelMmd(self):
+ images, labels, params = self._testBuildDefaultModel()
+
+ with self.test_session():
+ dsn.create_model(images, labels,
+ tf.cast(tf.ones([32,]), tf.bool), images, labels,
+ 'mmd_loss', params, 'dann_mnist')
+ loss_tensors = tf.contrib.losses.get_losses()
+ self.assertEqual(len(loss_tensors), 6)
+
+ def testBuildModelCorr(self):
+ images, labels, params = self._testBuildDefaultModel()
+
+ with self.test_session():
+ dsn.create_model(images, labels,
+ tf.cast(tf.ones([32,]), tf.bool), images, labels,
+ 'correlation_loss', params, 'dann_mnist')
+ loss_tensors = tf.contrib.losses.get_losses()
+ self.assertEqual(len(loss_tensors), 6)
+
+ def testBuildModelNoDomainAdaptation(self):
+ images, labels, params = self._testBuildDefaultModel()
+ params['use_separation'] = False
+ with self.test_session():
+ dsn.create_model(images, labels,
+ tf.cast(tf.ones([32,]), tf.bool), images, labels, 'none',
+ params, 'dann_mnist')
+ loss_tensors = tf.contrib.losses.get_losses()
+ self.assertEqual(len(loss_tensors), 1)
+ self.assertEqual(len(tf.contrib.losses.get_regularization_losses()), 0)
+
+ def testBuildModelNoAdaptationWeightDecay(self):
+ images, labels, params = self._testBuildDefaultModel()
+ params['use_separation'] = False
+ params['weight_decay'] = 1e-5
+ with self.test_session():
+ dsn.create_model(images, labels,
+ tf.cast(tf.ones([32,]), tf.bool), images, labels, 'none',
+ params, 'dann_mnist')
+ loss_tensors = tf.contrib.losses.get_losses()
+ self.assertEqual(len(loss_tensors), 1)
+ self.assertTrue(len(tf.contrib.losses.get_regularization_losses()) >= 1)
+
+ def testBuildModelNoSeparation(self):
+ images, labels, params = self._testBuildDefaultModel()
+ params['use_separation'] = False
+ with self.test_session():
+ dsn.create_model(images, labels,
+ tf.cast(tf.ones([32,]), tf.bool), images, labels,
+ 'dann_loss', params, 'dann_mnist')
+ loss_tensors = tf.contrib.losses.get_losses()
+ self.assertEqual(len(loss_tensors), 2)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/dsn_train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/dsn_train.py"
new file mode 100644
index 00000000..5e364ad3
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/dsn_train.py"
@@ -0,0 +1,278 @@
+# Copyright 2016 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Training for Domain Separation Networks (DSNs)."""
+from __future__ import division
+
+import tensorflow as tf
+
+from domain_adaptation.datasets import dataset_factory
+import dsn
+
+slim = tf.contrib.slim
+FLAGS = tf.app.flags.FLAGS
+
+tf.app.flags.DEFINE_integer('batch_size', 32,
+ 'The number of images in each batch.')
+
+tf.app.flags.DEFINE_string('source_dataset', 'pose_synthetic',
+ 'Source dataset to train on.')
+
+tf.app.flags.DEFINE_string('target_dataset', 'pose_real',
+ 'Target dataset to train on.')
+
+tf.app.flags.DEFINE_string('target_labeled_dataset', 'none',
+                           'Labeled target dataset for the semi-supervised '
+                           'setting, or "none" for unsupervised training.')
+
+tf.app.flags.DEFINE_string('dataset_dir', None,
+ 'The directory where the dataset files are stored.')
+
+tf.app.flags.DEFINE_string('master', '',
+ 'BNS name of the TensorFlow master to use.')
+
+tf.app.flags.DEFINE_string('train_log_dir', '/tmp/da/',
+ 'Directory where to write event logs.')
+
+tf.app.flags.DEFINE_string(
+ 'layers_to_regularize', 'fc3',
+ 'Comma-separated list of layer names to use MMD regularization on.')
+
+tf.app.flags.DEFINE_float('learning_rate', .01, 'The learning rate')
+
+tf.app.flags.DEFINE_float('alpha_weight', 1e-6,
+ 'The coefficient for scaling the reconstruction '
+ 'loss.')
+
+tf.app.flags.DEFINE_float(
+ 'beta_weight', 1e-6,
+ 'The coefficient for scaling the private/shared difference loss.')
+
+tf.app.flags.DEFINE_float(
+ 'gamma_weight', 1e-6,
+ 'The coefficient for scaling the shared encoding similarity loss.')
+
+tf.app.flags.DEFINE_float('pose_weight', 0.125,
+ 'The coefficient for scaling the pose loss.')
+
+tf.app.flags.DEFINE_float(
+ 'weight_decay', 1e-6,
+ 'The coefficient for the L2 regularization applied for all weights.')
+
+tf.app.flags.DEFINE_integer(
+ 'save_summaries_secs', 60,
+ 'The frequency with which summaries are saved, in seconds.')
+
+tf.app.flags.DEFINE_integer(
+ 'save_interval_secs', 60,
+ 'The frequency with which the model is saved, in seconds.')
+
+tf.app.flags.DEFINE_integer(
+ 'max_number_of_steps', None,
+ 'The maximum number of gradient steps. Use None to train indefinitely.')
+
+tf.app.flags.DEFINE_integer(
+ 'domain_separation_startpoint', 1,
+ 'The global step to add the domain separation losses.')
+
+tf.app.flags.DEFINE_integer(
+ 'bipartite_assignment_top_k', 3,
+ 'The number of top-k matches to use in bipartite matching adaptation.')
+
+tf.app.flags.DEFINE_float('decay_rate', 0.95, 'Learning rate decay factor.')
+
+tf.app.flags.DEFINE_integer('decay_steps', 20000, 'Learning rate decay steps.')
+
+tf.app.flags.DEFINE_float('momentum', 0.9, 'The momentum value.')
+
+tf.app.flags.DEFINE_bool('use_separation', False,
+ 'Use our domain separation model.')
+
+tf.app.flags.DEFINE_bool('use_logging', False, 'Debugging messages.')
+
+tf.app.flags.DEFINE_integer(
+ 'ps_tasks', 0,
+ 'The number of parameter servers. If the value is 0, then the parameters '
+ 'are handled locally by the worker.')
+
+tf.app.flags.DEFINE_integer(
+ 'num_readers', 4,
+ 'The number of parallel readers that read data from the dataset.')
+
+tf.app.flags.DEFINE_integer('num_preprocessing_threads', 4,
+ 'The number of threads used to create the batches.')
+
+tf.app.flags.DEFINE_integer(
+ 'task', 0,
+ 'The Task ID. This value is used when training with multiple workers to '
+ 'identify each worker.')
+
+tf.app.flags.DEFINE_string('decoder_name', 'small_decoder',
+ 'The decoder to use.')
+tf.app.flags.DEFINE_string('encoder_name', 'default_encoder',
+ 'The encoder to use.')
+
+################################################################################
+# Flags that control the architecture and losses
+################################################################################
+tf.app.flags.DEFINE_string(
+ 'similarity_loss', 'grl',
+ 'The method to use for encouraging the common encoder codes to be '
+ 'similar, one of "grl", "mmd", "corr".')
+
+tf.app.flags.DEFINE_string('recon_loss_name', 'sum_of_pairwise_squares',
+ 'The name of the reconstruction loss.')
+
+tf.app.flags.DEFINE_string('basic_tower', 'pose_mini',
+ 'The basic tower building block.')
+
+def provide_batch_fn():
+ """ The provide_batch function to use. """
+ return dataset_factory.provide_batch
+
+def main(_):
+ model_params = {
+ 'use_separation': FLAGS.use_separation,
+ 'domain_separation_startpoint': FLAGS.domain_separation_startpoint,
+ 'layers_to_regularize': FLAGS.layers_to_regularize,
+ 'alpha_weight': FLAGS.alpha_weight,
+ 'beta_weight': FLAGS.beta_weight,
+ 'gamma_weight': FLAGS.gamma_weight,
+ 'pose_weight': FLAGS.pose_weight,
+ 'recon_loss_name': FLAGS.recon_loss_name,
+ 'decoder_name': FLAGS.decoder_name,
+ 'encoder_name': FLAGS.encoder_name,
+ 'weight_decay': FLAGS.weight_decay,
+ 'batch_size': FLAGS.batch_size,
+ 'use_logging': FLAGS.use_logging,
+ 'ps_tasks': FLAGS.ps_tasks,
+ 'task': FLAGS.task,
+ }
+ g = tf.Graph()
+ with g.as_default():
+ with tf.device(tf.train.replica_device_setter(FLAGS.ps_tasks)):
+ # Load the data.
+ source_images, source_labels = provide_batch_fn()(
+ FLAGS.source_dataset, 'train', FLAGS.dataset_dir, FLAGS.num_readers,
+ FLAGS.batch_size, FLAGS.num_preprocessing_threads)
+ target_images, target_labels = provide_batch_fn()(
+ FLAGS.target_dataset, 'train', FLAGS.dataset_dir, FLAGS.num_readers,
+ FLAGS.batch_size, FLAGS.num_preprocessing_threads)
+
+ # In the unsupervised case all the samples in the labeled
+ # domain are from the source domain.
+ domain_selection_mask = tf.fill((source_images.get_shape().as_list()[0],),
+ True)
+
+ # When using the semisupervised model we include labeled target data in
+ # the source labelled data.
+ if FLAGS.target_labeled_dataset != 'none':
+ # 1000 is the maximum number of labelled target samples that exists in
+ # the datasets.
+ target_semi_images, target_semi_labels = provide_batch_fn()(
+            FLAGS.target_labeled_dataset, 'train', FLAGS.dataset_dir,
+            FLAGS.num_readers, FLAGS.batch_size,
+            FLAGS.num_preprocessing_threads)
+
+ # Calculate the proportion of source domain samples in the semi-
+ # supervised setting, so that the proportion is set accordingly in the
+ # batches.
+ proportion = float(source_labels['num_train_samples']) / (
+ source_labels['num_train_samples'] +
+ target_semi_labels['num_train_samples'])
+
+ rnd_tensor = tf.random_uniform(
+ (target_semi_images.get_shape().as_list()[0],))
+
+ domain_selection_mask = rnd_tensor < proportion
+ source_images = tf.where(domain_selection_mask, source_images,
+ target_semi_images)
+ source_class_labels = tf.where(domain_selection_mask,
+ source_labels['classes'],
+ target_semi_labels['classes'])
+
+ if 'quaternions' in source_labels:
+ source_pose_labels = tf.where(domain_selection_mask,
+ source_labels['quaternions'],
+ target_semi_labels['quaternions'])
+ (source_images, source_class_labels, source_pose_labels,
+ domain_selection_mask) = tf.train.shuffle_batch(
+ [
+ source_images, source_class_labels, source_pose_labels,
+ domain_selection_mask
+ ],
+ FLAGS.batch_size,
+ 50000,
+ 5000,
+ num_threads=1,
+ enqueue_many=True)
+
+ else:
+ (source_images, source_class_labels,
+ domain_selection_mask) = tf.train.shuffle_batch(
+ [source_images, source_class_labels, domain_selection_mask],
+ FLAGS.batch_size,
+ 50000,
+ 5000,
+ num_threads=1,
+ enqueue_many=True)
+        new_source_labels = {'classes': source_class_labels}
+        if 'quaternions' in source_labels:
+          new_source_labels['quaternions'] = source_pose_labels
+        source_labels = new_source_labels
+
+ slim.get_or_create_global_step()
+ tf.summary.image('source_images', source_images, max_outputs=3)
+ tf.summary.image('target_images', target_images, max_outputs=3)
+
+ dsn.create_model(
+ source_images,
+ source_labels,
+ domain_selection_mask,
+ target_images,
+ target_labels,
+ FLAGS.similarity_loss,
+ model_params,
+ basic_tower_name=FLAGS.basic_tower)
+
+ # Configure the optimization scheme:
+ learning_rate = tf.train.exponential_decay(
+ FLAGS.learning_rate,
+ slim.get_or_create_global_step(),
+ FLAGS.decay_steps,
+ FLAGS.decay_rate,
+ staircase=True,
+ name='learning_rate')
+
+ tf.summary.scalar('learning_rate', learning_rate)
+ tf.summary.scalar('total_loss', tf.losses.get_total_loss())
+
+ opt = tf.train.MomentumOptimizer(learning_rate, FLAGS.momentum)
+ tf.logging.set_verbosity(tf.logging.INFO)
+ # Run training.
+ loss_tensor = slim.learning.create_train_op(
+ slim.losses.get_total_loss(),
+ opt,
+ summarize_gradients=True,
+ colocate_gradients_with_ops=True)
+ slim.learning.train(
+ train_op=loss_tensor,
+ logdir=FLAGS.train_log_dir,
+ master=FLAGS.master,
+ is_chief=FLAGS.task == 0,
+ number_of_steps=FLAGS.max_number_of_steps,
+ save_summaries_secs=FLAGS.save_summaries_secs,
+ save_interval_secs=FLAGS.save_interval_secs)
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/grl_op_grads.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/grl_op_grads.py"
new file mode 100644
index 00000000..fcd85ba2
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/grl_op_grads.py"
@@ -0,0 +1,34 @@
+# Copyright 2016 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Gradients for operators defined in grl_ops.py."""
+import tensorflow as tf
+
+
+@tf.RegisterGradient("GradientReversal")
+def _GradientReversalGrad(_, grad):
+ """The gradients for `gradient_reversal`.
+
+ Args:
+ _: The `gradient_reversal` `Operation` that we are differentiating,
+ which we can use to find the inputs and outputs of the original op.
+ grad: Gradient with respect to the output of the `gradient_reversal` op.
+
+ Returns:
+ Gradient with respect to the input of `gradient_reversal`, which is simply
+ the negative of the input gradient.
+
+ """
+ return tf.negative(grad)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/grl_op_kernels.cc" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/grl_op_kernels.cc"
new file mode 100644
index 00000000..ba30128f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/grl_op_kernels.cc"
@@ -0,0 +1,47 @@
+/* Copyright 2016 The TensorFlow Authors All Rights Reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+==============================================================================*/
+
+// This file contains the implementations of the ops registered in
+// grl_ops.cc.
+
+#include "tensorflow/core/framework/op_kernel.h"
+#include "tensorflow/core/framework/types.pb.h"
+
+namespace tensorflow {
+
+// The gradient reversal op is used in domain adversarial training. It behaves
+// as the identity op during forward propagation, and multiplies the incoming
+// gradient by -1 during backward propagation.
+class GradientReversalOp : public OpKernel {
+ public:
+ explicit GradientReversalOp(OpKernelConstruction* context)
+ : OpKernel(context) {}
+
+ // Gradient reversal op behaves as the identity op during forward
+ // propagation. Compute() function copied from the IdentityOp::Compute()
+ // function here: third_party/tensorflow/core/kernels/identity_op.h.
+ void Compute(OpKernelContext* context) override {
+ if (IsRefType(context->input_dtype(0))) {
+ context->forward_ref_input_to_ref_output(0, 0);
+ } else {
+ context->set_output(0, context->input(0));
+ }
+ }
+};
+
+REGISTER_KERNEL_BUILDER(Name("GradientReversal").Device(DEVICE_CPU),
+ GradientReversalOp);
+
+} // namespace tensorflow
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/grl_op_shapes.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/grl_op_shapes.py"
new file mode 100644
index 00000000..52773c68
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/grl_op_shapes.py"
@@ -0,0 +1,16 @@
+# Copyright 2016 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Shape inference for operators defined in grl_ops.cc."""
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/grl_ops.cc" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/grl_ops.cc"
new file mode 100644
index 00000000..d441c2b4
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/grl_ops.cc"
@@ -0,0 +1,36 @@
+/* Copyright 2016 The TensorFlow Authors All Rights Reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+==============================================================================*/
+
+// Contains custom ops.
+
+#include "tensorflow/core/framework/common_shape_fns.h"
+#include "tensorflow/core/framework/op.h"
+
+namespace tensorflow {
+
+// This custom op is used in adversarial training.
+REGISTER_OP("GradientReversal")
+ .Input("input: float")
+ .Output("output: float")
+ .SetShapeFn(shape_inference::UnchangedShape)
+ .Doc(R"doc(
+This op copies the input to the output during forward propagation, and
+negates the incoming gradient during backward propagation.
+
+input: Tensor.
+output: Tensor, copied from input.
+)doc");
+
+} // namespace tensorflow
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/grl_ops.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/grl_ops.py"
new file mode 100644
index 00000000..50447247
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/grl_ops.py"
@@ -0,0 +1,28 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""GradientReversal op Python library."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os.path
+
+import tensorflow as tf
+
+tf.logging.info(tf.resource_loader.get_data_files_path())
+_grl_ops_module = tf.load_op_library(
+ os.path.join(tf.resource_loader.get_data_files_path(),
+ '_grl_ops.so'))
+gradient_reversal = _grl_ops_module.gradient_reversal
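+
+
+def gradient_reversal_reference(x):
+  """Editor's sketch (assumption, not part of the original library).
+
+  Emulates the GradientReversal op with stock TensorFlow ops, e.g. when the
+  compiled _grl_ops.so kernel is unavailable.  Forward value:
+  stop_gradient(2 * x) - x == x.  Backward: only the -x term contributes a
+  gradient, so the incoming gradient is multiplied by -1, matching the op.
+  """
+  return tf.stop_gradient(2 * x) - x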
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/grl_ops_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/grl_ops_test.py"
new file mode 100644
index 00000000..b431a6c0
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/grl_ops_test.py"
@@ -0,0 +1,73 @@
+# Copyright 2016 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for grl_ops."""
+
+#from models.domain_adaptation.domain_separation import grl_op_grads # pylint: disable=unused-import
+#from models.domain_adaptation.domain_separation import grl_op_shapes # pylint: disable=unused-import
+import tensorflow as tf
+
+import grl_op_grads
+import grl_ops
+
+FLAGS = tf.app.flags.FLAGS
+
+
+class GRLOpsTest(tf.test.TestCase):
+
+ def testGradientReversalOp(self):
+ with tf.Graph().as_default():
+ with self.test_session():
+ # Test that in forward prop, gradient reversal op acts as the
+ # identity operation.
+ examples = tf.constant([5.0, 4.0, 3.0, 2.0, 1.0])
+ output = grl_ops.gradient_reversal(examples)
+ expected_output = examples
+ self.assertAllEqual(output.eval(), expected_output.eval())
+
+ # Test that shape inference works as expected.
+ self.assertAllEqual(output.get_shape(), expected_output.get_shape())
+
+ # Test that in backward prop, gradient reversal op multiplies
+ # gradients by -1.
+ examples = tf.constant([[1.0]])
+ w = tf.get_variable(name='w', shape=[1, 1])
+ b = tf.get_variable(name='b', shape=[1])
+ init_op = tf.global_variables_initializer()
+ init_op.run()
+ features = tf.nn.xw_plus_b(examples, w, b)
+ # Construct two outputs: features layer passes directly to output1, but
+ # features layer passes through a gradient reversal layer before
+ # reaching output2.
+ output1 = features
+ output2 = grl_ops.gradient_reversal(features)
+ gold = tf.constant([1.0])
+ loss1 = gold - output1
+ loss2 = gold - output2
+ opt = tf.train.GradientDescentOptimizer(learning_rate=0.01)
+ grads_and_vars_1 = opt.compute_gradients(loss1,
+ tf.trainable_variables())
+ grads_and_vars_2 = opt.compute_gradients(loss2,
+ tf.trainable_variables())
+ self.assertAllEqual(len(grads_and_vars_1), len(grads_and_vars_2))
+ for i in range(len(grads_and_vars_1)):
+ g1 = grads_and_vars_1[i][0]
+ g2 = grads_and_vars_2[i][0]
+ # Verify that gradients of loss1 are the negative of gradients of
+ # loss2.
+ self.assertAllEqual(tf.negative(g1).eval(), g2.eval())
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/losses.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/losses.py"
new file mode 100644
index 00000000..0d882340
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/losses.py"
@@ -0,0 +1,290 @@
+# Copyright 2016 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Domain Adaptation Loss Functions.
+
+The following domain adaptation loss functions are defined:
+
+- Maximum Mean Discrepancy (MMD).
+ Relevant paper:
+ Gretton, Arthur, et al.,
+ "A kernel two-sample test."
+ The Journal of Machine Learning Research, 2012
+
+- Correlation Loss on a batch.
+- Domain adversarial (DANN) loss.
+- Difference (orthogonality) loss between private and shared representations.
+- Log-quaternion task losses for pose estimation.
+"""
+from functools import partial
+import tensorflow as tf
+
+import grl_op_grads # pylint: disable=unused-import
+import grl_op_shapes # pylint: disable=unused-import
+import grl_ops
+import utils
+slim = tf.contrib.slim
+
+
+################################################################################
+# SIMILARITY LOSS
+################################################################################
+def maximum_mean_discrepancy(x, y, kernel=utils.gaussian_kernel_matrix):
+ r"""Computes the Maximum Mean Discrepancy (MMD) of two samples: x and y.
+
+ Maximum Mean Discrepancy (MMD) is a distance measure between the
+ distributions of x and y, estimated from their samples. Here we use the
+ kernel two-sample estimate based on the empirical means of the two samples.
+
+ MMD^2(P, Q) = || \E{\phi(x)} - \E{\phi(y)} ||^2
+ = \E{ K(x, x) } + \E{ K(y, y) } - 2 \E{ K(x, y) },
+
+ where K(x, y) = <\phi(x), \phi(y)> is the desired kernel function, in this
+ case a radial basis kernel.
+
+ Args:
+ x: a tensor of shape [num_samples, num_features]
+ y: a tensor of shape [num_samples, num_features]
+ kernel: a function which computes the kernel in MMD. Defaults to the
+ GaussianKernelMatrix.
+
+ Returns:
+ a scalar denoting the squared maximum mean discrepancy loss.
+ """
+ with tf.name_scope('MaximumMeanDiscrepancy'):
+ # \E{ K(x, x) } + \E{ K(y, y) } - 2 \E{ K(x, y) }
+ cost = tf.reduce_mean(kernel(x, x))
+ cost += tf.reduce_mean(kernel(y, y))
+ cost -= 2 * tf.reduce_mean(kernel(x, y))
+
+ # We do not allow the loss to become negative.
+ cost = tf.where(cost > 0, cost, 0, name='value')
+ return cost
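+
+
+# Editor's example (sketch, not part of the original file): the helper below
+# only illustrates a typical call; the tensors are random placeholders.
+def _example_maximum_mean_discrepancy():
+  """Builds an MMD value between two small random batches (illustration)."""
+  x = tf.random_normal([8, 16])          # e.g. source features
+  y = tf.random_normal([8, 16]) + 1.0    # shifted target features
+  kernel = partial(utils.gaussian_kernel_matrix, sigmas=tf.constant([1.0]))
+  # A positive scalar that shrinks toward zero as the two distributions match.
+  return maximum_mean_discrepancy(x, y, kernel=kernel)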
+
+
+def mmd_loss(source_samples, target_samples, weight, scope=None):
+ """Adds a similarity loss term, the MMD between two representations.
+
+ This Maximum Mean Discrepancy (MMD) loss is calculated with a number of
+ different Gaussian kernels.
+
+ Args:
+ source_samples: a tensor of shape [num_samples, num_features].
+ target_samples: a tensor of shape [num_samples, num_features].
+ weight: the weight of the MMD loss.
+ scope: optional name scope for summary tags.
+
+ Returns:
+ a scalar tensor representing the MMD loss value.
+ """
+ sigmas = [
+ 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1, 5, 10, 15, 20, 25, 30, 35, 100,
+ 1e3, 1e4, 1e5, 1e6
+ ]
+ gaussian_kernel = partial(
+ utils.gaussian_kernel_matrix, sigmas=tf.constant(sigmas))
+
+ loss_value = maximum_mean_discrepancy(
+ source_samples, target_samples, kernel=gaussian_kernel)
+ loss_value = tf.maximum(1e-4, loss_value) * weight
+ assert_op = tf.Assert(tf.is_finite(loss_value), [loss_value])
+ with tf.control_dependencies([assert_op]):
+ tag = 'MMD Loss'
+ if scope:
+ tag = scope + tag
+ tf.summary.scalar(tag, loss_value)
+ tf.losses.add_loss(loss_value)
+
+ return loss_value
+
+
+def correlation_loss(source_samples, target_samples, weight, scope=None):
+ """Adds a similarity loss term, the correlation between two representations.
+
+ Args:
+ source_samples: a tensor of shape [num_samples, num_features]
+ target_samples: a tensor of shape [num_samples, num_features]
+ weight: a scalar weight for the loss.
+ scope: optional name scope for summary tags.
+
+ Returns:
+ a scalar tensor representing the correlation loss value.
+ """
+ with tf.name_scope('corr_loss'):
+ source_samples -= tf.reduce_mean(source_samples, 0)
+ target_samples -= tf.reduce_mean(target_samples, 0)
+
+ source_samples = tf.nn.l2_normalize(source_samples, 1)
+ target_samples = tf.nn.l2_normalize(target_samples, 1)
+
+ source_cov = tf.matmul(tf.transpose(source_samples), source_samples)
+ target_cov = tf.matmul(tf.transpose(target_samples), target_samples)
+
+ corr_loss = tf.reduce_mean(tf.square(source_cov - target_cov)) * weight
+
+ assert_op = tf.Assert(tf.is_finite(corr_loss), [corr_loss])
+ with tf.control_dependencies([assert_op]):
+ tag = 'Correlation Loss'
+ if scope:
+ tag = scope + tag
+ tf.summary.scalar(tag, corr_loss)
+ tf.losses.add_loss(corr_loss)
+
+ return corr_loss
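+
+# Editor's note (illustration): after mean-centring each feature and
+# L2-normalising each sample, source_cov and target_cov above are
+# [num_features, num_features] Gram matrices, so this term penalises the
+# squared difference between the second-order statistics of the two domains
+# (similar in spirit to CORAL-style correlation alignment).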
+
+
+def dann_loss(source_samples, target_samples, weight, scope=None):
+ """Adds the domain adversarial (DANN) loss.
+
+ Args:
+ source_samples: a tensor of shape [num_samples, num_features].
+ target_samples: a tensor of shape [num_samples, num_features].
+ weight: the weight of the loss.
+ scope: optional name scope for summary tags.
+
+ Returns:
+ a scalar tensor representing the domain adversarial (DANN) loss value.
+ """
+ with tf.variable_scope('dann'):
+ batch_size = tf.shape(source_samples)[0]
+ samples = tf.concat(axis=0, values=[source_samples, target_samples])
+ samples = slim.flatten(samples)
+
+ domain_selection_mask = tf.concat(
+ axis=0, values=[tf.zeros((batch_size, 1)), tf.ones((batch_size, 1))])
+
+ # Perform the gradient reversal and be careful with the shape.
+ grl = grl_ops.gradient_reversal(samples)
+ grl = tf.reshape(grl, (-1, samples.get_shape().as_list()[1]))
+
+ grl = slim.fully_connected(grl, 100, scope='fc1')
+ logits = slim.fully_connected(grl, 1, activation_fn=None, scope='fc2')
+
+ domain_predictions = tf.sigmoid(logits)
+
+ domain_loss = tf.losses.log_loss(
+ domain_selection_mask, domain_predictions, weights=weight)
+
+ domain_accuracy = utils.accuracy(
+ tf.round(domain_predictions), domain_selection_mask)
+
+ assert_op = tf.Assert(tf.is_finite(domain_loss), [domain_loss])
+ with tf.control_dependencies([assert_op]):
+ tag_loss = 'losses/domain_loss'
+ tag_accuracy = 'losses/domain_accuracy'
+ if scope:
+ tag_loss = scope + tag_loss
+ tag_accuracy = scope + tag_accuracy
+
+ tf.summary.scalar(tag_loss, domain_loss)
+ tf.summary.scalar(tag_accuracy, domain_accuracy)
+
+ return domain_loss
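+
+# Editor's example (sketch; the tensors below are placeholders, not names used
+# elsewhere in this file).  The small discriminator inside dann_loss learns to
+# tell the two domains apart, while the gradient reversal layer trains the
+# upstream (shared) encoder to make them indistinguishable:
+#
+#   source_features = tf.random_normal([16, 128])   # e.g. shared-encoder codes
+#   target_features = tf.random_normal([16, 128])
+#   adversarial_loss = dann_loss(source_features, target_features, weight=0.1)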
+
+
+################################################################################
+# DIFFERENCE LOSS
+################################################################################
+def difference_loss(private_samples, shared_samples, weight=1.0, name=''):
+ """Adds the difference loss between the private and shared representations.
+
+ Args:
+ private_samples: a tensor of shape [num_samples, num_features].
+ shared_samples: a tensor of shape [num_samples, num_features].
+ weight: the weight of the incoherence loss.
+ name: the name of the tf summary.
+ """
+ private_samples -= tf.reduce_mean(private_samples, 0)
+ shared_samples -= tf.reduce_mean(shared_samples, 0)
+
+ private_samples = tf.nn.l2_normalize(private_samples, 1)
+ shared_samples = tf.nn.l2_normalize(shared_samples, 1)
+
+ correlation_matrix = tf.matmul(
+ private_samples, shared_samples, transpose_a=True)
+
+ cost = tf.reduce_mean(tf.square(correlation_matrix)) * weight
+ cost = tf.where(cost > 0, cost, 0, name='value')
+
+ tf.summary.scalar('losses/Difference Loss {}'.format(name),
+ cost)
+ assert_op = tf.Assert(tf.is_finite(cost), [cost])
+ with tf.control_dependencies([assert_op]):
+ tf.losses.add_loss(cost)
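+
+# Editor's note (illustration): after mean-centring each feature and
+# L2-normalising each sample, the matrix above is H_private^T H_shared, so the
+# cost is a scaled squared Frobenius norm ||H_private^T H_shared||_F^2 -- a
+# soft orthogonality constraint encouraging the private and shared encoders to
+# capture different factors of the input.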
+
+
+################################################################################
+# TASK LOSS
+################################################################################
+def log_quaternion_loss_batch(predictions, labels, params):
+ """A helper function to compute the error between quaternions.
+
+ Args:
+ predictions: A Tensor of size [batch_size, 4].
+ labels: A Tensor of size [batch_size, 4].
+ params: A dictionary of parameters. Expecting 'use_logging', 'batch_size'.
+
+ Returns:
+ A Tensor of size [batch_size], denoting the error between the quaternions.
+ """
+ use_logging = params['use_logging']
+ assertions = []
+ if use_logging:
+ assertions.append(
+ tf.Assert(
+ tf.reduce_all(
+ tf.less(
+ tf.abs(tf.reduce_sum(tf.square(predictions), [1]) - 1),
+ 1e-4)),
+ ['The l2 norm of each prediction quaternion vector should be 1.']))
+ assertions.append(
+ tf.Assert(
+ tf.reduce_all(
+ tf.less(
+ tf.abs(tf.reduce_sum(tf.square(labels), [1]) - 1), 1e-4)),
+ ['The l2 norm of each label quaternion vector should be 1.']))
+
+ with tf.control_dependencies(assertions):
+ product = tf.multiply(predictions, labels)
+ internal_dot_products = tf.reduce_sum(product, [1])
+
+ if use_logging:
+ internal_dot_products = tf.Print(
+ internal_dot_products,
+ [internal_dot_products, tf.shape(internal_dot_products)],
+ 'internal_dot_products:')
+
+ logcost = tf.log(1e-4 + 1 - tf.abs(internal_dot_products))
+ return logcost
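+
+# Editor's note (worked example): for a perfect prediction the absolute dot
+# product |<q_pred, q_label>| is 1, giving the minimum per-sample cost
+# log(1e-4) ~= -9.21; for orthogonal unit quaternions the dot product is 0 and
+# the cost is log(1 + 1e-4) ~= 0.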
+
+
+def log_quaternion_loss(predictions, labels, params):
+ """A helper function to compute the mean error between batches of quaternions.
+
+ The caller is expected to add the loss to the graph.
+
+ Args:
+ predictions: A Tensor of size [batch_size, 4].
+ labels: A Tensor of size [batch_size, 4].
+ params: A dictionary of parameters. Expecting 'use_logging', 'batch_size'.
+
+ Returns:
+ A Tensor of size 1, denoting the mean error between batches of quaternions.
+ """
+ use_logging = params['use_logging']
+ logcost = log_quaternion_loss_batch(predictions, labels, params)
+ logcost = tf.reduce_sum(logcost, [0])
+ batch_size = params['batch_size']
+ logcost = tf.multiply(logcost, 1.0 / batch_size, name='log_quaternion_loss')
+ if use_logging:
+ logcost = tf.Print(
+ logcost, [logcost], '[logcost]', name='log_quaternion_loss_print')
+ return logcost
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/losses_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/losses_test.py"
new file mode 100644
index 00000000..46e50301
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/losses_test.py"
@@ -0,0 +1,110 @@
+# Copyright 2016 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for DSN losses."""
+from functools import partial
+
+import numpy as np
+import tensorflow as tf
+
+import losses
+import utils
+
+
+def MaximumMeanDiscrepancySlow(x, y, sigmas):
+ num_samples = x.get_shape().as_list()[0]
+
+ def AverageGaussianKernel(x, y, sigmas):
+ result = 0
+ for sigma in sigmas:
+ dist = tf.reduce_sum(tf.square(x - y))
+ result += tf.exp((-1.0 / (2.0 * sigma)) * dist)
+ return result / num_samples**2
+
+ total = 0
+
+ for i in range(num_samples):
+ for j in range(num_samples):
+ total += AverageGaussianKernel(x[i, :], x[j, :], sigmas)
+ total += AverageGaussianKernel(y[i, :], y[j, :], sigmas)
+ total += -2 * AverageGaussianKernel(x[i, :], y[j, :], sigmas)
+
+ return total
+
+
+class LogQuaternionLossTest(tf.test.TestCase):
+
+ def test_log_quaternion_loss_batch(self):
+ with self.test_session():
+ predictions = tf.random_uniform((10, 4), seed=1)
+ predictions = tf.nn.l2_normalize(predictions, 1)
+ labels = tf.random_uniform((10, 4), seed=1)
+ labels = tf.nn.l2_normalize(labels, 1)
+ params = {'batch_size': 10, 'use_logging': False}
+ x = losses.log_quaternion_loss_batch(predictions, labels, params)
+ self.assertTrue(((10,) == tf.shape(x).eval()).all())
+
+
+class MaximumMeanDiscrepancyTest(tf.test.TestCase):
+
+ def test_mmd_name(self):
+ with self.test_session():
+ x = tf.random_uniform((2, 3), seed=1)
+ kernel = partial(utils.gaussian_kernel_matrix, sigmas=tf.constant([1.]))
+ loss = losses.maximum_mean_discrepancy(x, x, kernel)
+
+ self.assertEquals(loss.op.name, 'MaximumMeanDiscrepancy/value')
+
+ def test_mmd_is_zero_when_inputs_are_same(self):
+ with self.test_session():
+ x = tf.random_uniform((2, 3), seed=1)
+ kernel = partial(utils.gaussian_kernel_matrix, sigmas=tf.constant([1.]))
+ self.assertEquals(0, losses.maximum_mean_discrepancy(x, x, kernel).eval())
+
+ def test_fast_mmd_is_similar_to_slow_mmd(self):
+ with self.test_session():
+ x = tf.constant(np.random.normal(size=(2, 3)), tf.float32)
+ y = tf.constant(np.random.rand(2, 3), tf.float32)
+
+ cost_old = MaximumMeanDiscrepancySlow(x, y, [1.]).eval()
+ kernel = partial(utils.gaussian_kernel_matrix, sigmas=tf.constant([1.]))
+ cost_new = losses.maximum_mean_discrepancy(x, y, kernel).eval()
+
+ self.assertAlmostEqual(cost_old, cost_new, delta=1e-5)
+
+ def test_multiple_sigmas(self):
+ with self.test_session():
+ x = tf.constant(np.random.normal(size=(2, 3)), tf.float32)
+ y = tf.constant(np.random.rand(2, 3), tf.float32)
+
+ sigmas = tf.constant([2., 5., 10, 20, 30])
+ kernel = partial(utils.gaussian_kernel_matrix, sigmas=sigmas)
+ cost_old = MaximumMeanDiscrepancySlow(x, y, [2., 5., 10, 20, 30]).eval()
+ cost_new = losses.maximum_mean_discrepancy(x, y, kernel=kernel).eval()
+
+ self.assertAlmostEqual(cost_old, cost_new, delta=1e-5)
+
+ def test_mmd_is_zero_when_distributions_are_same(self):
+
+ with self.test_session():
+ x = tf.random_uniform((1000, 10), seed=1)
+ y = tf.random_uniform((1000, 10), seed=3)
+
+ kernel = partial(utils.gaussian_kernel_matrix, sigmas=tf.constant([100.]))
+ loss = losses.maximum_mean_discrepancy(x, y, kernel=kernel).eval()
+
+ self.assertAlmostEqual(0, loss, delta=1e-4)
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/models.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/models.py"
new file mode 100644
index 00000000..04ccaf82
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/models.py"
@@ -0,0 +1,443 @@
+# Copyright 2016 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Contains different architectures for the different DSN parts.
+
+We define here the modules that can be used in the different parts of the DSN
+model.
+- shared encoders (dsn_cropped_linemod, dann_mnist, dann_svhn, dann_gtsrb)
+- private encoder (default_encoder)
+- decoder (large_decoder, gtsrb_decoder, small_decoder)
+"""
+import tensorflow as tf
+
+#from models.domain_adaptation.domain_separation
+import utils
+
+slim = tf.contrib.slim
+
+
+def default_batch_norm_params(is_training=False):
+ """Returns default batch normalization parameters for DSNs.
+
+ Args:
+ is_training: whether or not the model is training.
+
+ Returns:
+ a dictionary that maps batch norm parameter names (strings) to values.
+ """
+ return {
+ # Decay for the moving averages.
+ 'decay': 0.5,
+ # epsilon to prevent 0s in variance.
+ 'epsilon': 0.001,
+ 'is_training': is_training
+ }
+
+
+################################################################################
+# PRIVATE ENCODERS
+################################################################################
+def default_encoder(images, code_size, batch_norm_params=None,
+ weight_decay=0.0):
+ """Encodes the given images to codes of the given size.
+
+ Args:
+ images: a tensor of size [batch_size, height, width, 1].
+ code_size: the number of hidden units in the code layer of the classifier.
+ batch_norm_params: a dictionary that maps batch norm parameter names to
+ values.
+ weight_decay: the value for the weight decay coefficient.
+
+ Returns:
+ end_points: the code of the input.
+ """
+ end_points = {}
+ with slim.arg_scope(
+ [slim.conv2d, slim.fully_connected],
+ weights_regularizer=slim.l2_regularizer(weight_decay),
+ activation_fn=tf.nn.relu,
+ normalizer_fn=slim.batch_norm,
+ normalizer_params=batch_norm_params):
+ with slim.arg_scope([slim.conv2d], kernel_size=[5, 5], padding='SAME'):
+ net = slim.conv2d(images, 32, scope='conv1')
+ net = slim.max_pool2d(net, [2, 2], 2, scope='pool1')
+ net = slim.conv2d(net, 64, scope='conv2')
+ net = slim.max_pool2d(net, [2, 2], 2, scope='pool2')
+
+ net = slim.flatten(net)
+ end_points['flatten'] = net
+ net = slim.fully_connected(net, code_size, scope='fc1')
+ end_points['fc3'] = net
+ return end_points
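+
+# Editor's example (sketch; the tensors below are placeholders):
+#
+#   images = tf.random_uniform([32, 28, 28, 1])
+#   end_points = default_encoder(images, code_size=64)
+#   private_code = end_points['fc3']   # shape [32, 64]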
+
+
+################################################################################
+# DECODERS
+################################################################################
+def large_decoder(codes,
+ height,
+ width,
+ channels,
+ batch_norm_params=None,
+ weight_decay=0.0):
+ """Decodes the codes to a fixed output size.
+
+ Args:
+ codes: a tensor of size [batch_size, code_size].
+ height: the height of the output images.
+ width: the width of the output images.
+ channels: the number of the output channels.
+ batch_norm_params: a dictionary that maps batch norm parameter names to
+ values.
+ weight_decay: the value for the weight decay coefficient.
+
+ Returns:
+ recons: the reconstruction tensor of shape [batch_size, height, width, channels].
+ """
+ with slim.arg_scope(
+ [slim.conv2d, slim.fully_connected],
+ weights_regularizer=slim.l2_regularizer(weight_decay),
+ activation_fn=tf.nn.relu,
+ normalizer_fn=slim.batch_norm,
+ normalizer_params=batch_norm_params):
+ net = slim.fully_connected(codes, 600, scope='fc1')
+ batch_size = net.get_shape().as_list()[0]
+ net = tf.reshape(net, [batch_size, 10, 10, 6])
+
+ net = slim.conv2d(net, 32, [5, 5], scope='conv1_1')
+
+ net = tf.image.resize_nearest_neighbor(net, (16, 16))
+
+ net = slim.conv2d(net, 32, [5, 5], scope='conv2_1')
+
+ net = tf.image.resize_nearest_neighbor(net, (32, 32))
+
+ net = slim.conv2d(net, 32, [5, 5], scope='conv3_2')
+
+ output_size = [height, width]
+ net = tf.image.resize_nearest_neighbor(net, output_size)
+
+ with slim.arg_scope([slim.conv2d], kernel_size=[3, 3]):
+ net = slim.conv2d(net, channels, activation_fn=None, scope='conv4_1')
+
+ return net
+
+
+def gtsrb_decoder(codes,
+ height,
+ width,
+ channels,
+ batch_norm_params=None,
+ weight_decay=0.0):
+ """Decodes the codes to a fixed output size. This decoder is specific to GTSRB
+
+ Args:
+ codes: a tensor of size [batch_size, 100].
+ height: the height of the output images.
+ width: the width of the output images.
+ channels: the number of the output channels.
+ batch_norm_params: a dictionary that maps batch norm parameter names to
+ values.
+ weight_decay: the value for the weight decay coefficient.
+
+ Returns:
+ recons: the reconstruction tensor of shape [batch_size, height, width, channels].
+
+ Raises:
+ ValueError: When the input code size is not 100.
+ """
+ batch_size, code_size = codes.get_shape().as_list()
+ if code_size != 100:
+ raise ValueError('The code size used as an input to the GTSRB decoder is '
+ 'expected to be 100.')
+
+ with slim.arg_scope(
+ [slim.conv2d, slim.fully_connected],
+ weights_regularizer=slim.l2_regularizer(weight_decay),
+ activation_fn=tf.nn.relu,
+ normalizer_fn=slim.batch_norm,
+ normalizer_params=batch_norm_params):
+ net = codes
+ net = tf.reshape(net, [batch_size, 10, 10, 1])
+ net = slim.conv2d(net, 32, [3, 3], scope='conv1_1')
+
+ # First upsampling 20x20
+ net = tf.image.resize_nearest_neighbor(net, [20, 20])
+
+ net = slim.conv2d(net, 32, [3, 3], scope='conv2_1')
+
+ output_size = [height, width]
+ # Final upsampling 40 x 40
+ net = tf.image.resize_nearest_neighbor(net, output_size)
+
+ with slim.arg_scope([slim.conv2d], kernel_size=[3, 3]):
+ net = slim.conv2d(net, 16, scope='conv3_1')
+ net = slim.conv2d(net, channels, activation_fn=None, scope='conv3_2')
+
+ return net
+
+
+def small_decoder(codes,
+ height,
+ width,
+ channels,
+ batch_norm_params=None,
+ weight_decay=0.0):
+ """Decodes the codes to a fixed output size.
+
+ Args:
+ codes: a tensor of size [batch_size, code_size].
+ height: the height of the output images.
+ width: the width of the output images.
+ channels: the number of the output channels.
+ batch_norm_params: a dictionary that maps batch norm parameter names to
+ values.
+ weight_decay: the value for the weight decay coefficient.
+
+ Returns:
+ recons: the reconstruction tensor of shape [batch_size, height, width, channels].
+ """
+ with slim.arg_scope(
+ [slim.conv2d, slim.fully_connected],
+ weights_regularizer=slim.l2_regularizer(weight_decay),
+ activation_fn=tf.nn.relu,
+ normalizer_fn=slim.batch_norm,
+ normalizer_params=batch_norm_params):
+ net = slim.fully_connected(codes, 300, scope='fc1')
+ batch_size = net.get_shape().as_list()[0]
+ net = tf.reshape(net, [batch_size, 10, 10, 3])
+
+ net = slim.conv2d(net, 16, [3, 3], scope='conv1_1')
+ net = slim.conv2d(net, 16, [3, 3], scope='conv1_2')
+
+ output_size = [height, width]
+ net = tf.image.resize_nearest_neighbor(net, output_size)
+
+ with slim.arg_scope([slim.conv2d], kernel_size=[3, 3]):
+ net = slim.conv2d(net, 16, scope='conv2_1')
+ net = slim.conv2d(net, channels, activation_fn=None, scope='conv2_2')
+
+ return net
+
+
+################################################################################
+# SHARED ENCODERS
+################################################################################
+def dann_mnist(images,
+ weight_decay=0.0,
+ prefix='model',
+ num_classes=10,
+ **kwargs):
+ """Creates a convolution MNIST model.
+
+ Note that this model implements the architecture for MNIST proposed in:
+ Y. Ganin et al., Domain-Adversarial Training of Neural Networks (DANN),
+ JMLR 2015
+
+ Args:
+ images: the MNIST digits, a tensor of size [batch_size, 28, 28, 1].
+ weight_decay: the value for the weight decay coefficient.
+ prefix: name of the model to use when prefixing tags.
+ num_classes: the number of output classes to use.
+ **kwargs: Placeholder for keyword arguments used by other shared encoders.
+
+ Returns:
+ the output logits, a tensor of size [batch_size, num_classes].
+ a dictionary with key/values the layer names and tensors.
+ """
+ end_points = {}
+
+ with slim.arg_scope(
+ [slim.conv2d, slim.fully_connected],
+ weights_regularizer=slim.l2_regularizer(weight_decay),
+ activation_fn=tf.nn.relu,):
+ with slim.arg_scope([slim.conv2d], padding='SAME'):
+ end_points['conv1'] = slim.conv2d(images, 32, [5, 5], scope='conv1')
+ end_points['pool1'] = slim.max_pool2d(
+ end_points['conv1'], [2, 2], 2, scope='pool1')
+ end_points['conv2'] = slim.conv2d(
+ end_points['pool1'], 48, [5, 5], scope='conv2')
+ end_points['pool2'] = slim.max_pool2d(
+ end_points['conv2'], [2, 2], 2, scope='pool2')
+ end_points['fc3'] = slim.fully_connected(
+ slim.flatten(end_points['pool2']), 100, scope='fc3')
+ end_points['fc4'] = slim.fully_connected(
+ slim.flatten(end_points['fc3']), 100, scope='fc4')
+
+ logits = slim.fully_connected(
+ end_points['fc4'], num_classes, activation_fn=None, scope='fc5')
+
+ return logits, end_points
+
+
+def dann_svhn(images,
+ weight_decay=0.0,
+ prefix='model',
+ num_classes=10,
+ **kwargs):
+ """Creates the convolutional SVHN model.
+
+ Note that this model implements the architecture for SVHN proposed in:
+ Y. Ganin et al., Domain-Adversarial Training of Neural Networks (DANN),
+ JMLR 2015
+
+ Args:
+ images: the SVHN digits, a tensor of size [batch_size, 32, 32, 3].
+ weight_decay: the value for the weight decay coefficient.
+ prefix: name of the model to use when prefixing tags.
+ num_classes: the number of output classes to use.
+ **kwargs: Placeholder for keyword arguments used by other shared encoders.
+
+ Returns:
+ the output logits, a tensor of size [batch_size, num_classes].
+ a dictionary with key/values the layer names and tensors.
+ """
+
+ end_points = {}
+
+ with slim.arg_scope(
+ [slim.conv2d, slim.fully_connected],
+ weights_regularizer=slim.l2_regularizer(weight_decay),
+ activation_fn=tf.nn.relu,):
+ with slim.arg_scope([slim.conv2d], padding='SAME'):
+
+ end_points['conv1'] = slim.conv2d(images, 64, [5, 5], scope='conv1')
+ end_points['pool1'] = slim.max_pool2d(
+ end_points['conv1'], [3, 3], 2, scope='pool1')
+ end_points['conv2'] = slim.conv2d(
+ end_points['pool1'], 64, [5, 5], scope='conv2')
+ end_points['pool2'] = slim.max_pool2d(
+ end_points['conv2'], [3, 3], 2, scope='pool2')
+ end_points['conv3'] = slim.conv2d(
+ end_points['pool2'], 128, [5, 5], scope='conv3')
+
+ end_points['fc3'] = slim.fully_connected(
+ slim.flatten(end_points['conv3']), 3072, scope='fc3')
+ end_points['fc4'] = slim.fully_connected(
+ slim.flatten(end_points['fc3']), 2048, scope='fc4')
+
+ logits = slim.fully_connected(
+ end_points['fc4'], num_classes, activation_fn=None, scope='fc5')
+
+ return logits, end_points
+
+
+def dann_gtsrb(images,
+ weight_decay=0.0,
+ prefix='model',
+ num_classes=43,
+ **kwargs):
+ """Creates the convolutional GTSRB model.
+
+ Note that this model implements the architecture for GTSRB proposed in:
+ Y. Ganin et al., Domain-Adversarial Training of Neural Networks (DANN),
+ JMLR 2015
+
+ Args:
+ images: the GTSRB images, a tensor of size [batch_size, 40, 40, 3].
+ weight_decay: the value for the weight decay coefficient.
+ prefix: name of the model to use when prefixing tags.
+ num_classes: the number of output classes to use.
+ **kwargs: Placeholder for keyword arguments used by other shared encoders.
+
+ Returns:
+ the output logits, a tensor of size [batch_size, num_classes].
+ a dictionary with key/values the layer names and tensors.
+ """
+
+ end_points = {}
+
+ with slim.arg_scope(
+ [slim.conv2d, slim.fully_connected],
+ weights_regularizer=slim.l2_regularizer(weight_decay),
+ activation_fn=tf.nn.relu,):
+ with slim.arg_scope([slim.conv2d], padding='SAME'):
+
+ end_points['conv1'] = slim.conv2d(images, 96, [5, 5], scope='conv1')
+ end_points['pool1'] = slim.max_pool2d(
+ end_points['conv1'], [2, 2], 2, scope='pool1')
+ end_points['conv2'] = slim.conv2d(
+ end_points['pool1'], 144, [3, 3], scope='conv2')
+ end_points['pool2'] = slim.max_pool2d(
+ end_points['conv2'], [2, 2], 2, scope='pool2')
+ end_points['conv3'] = slim.conv2d(
+ end_points['pool2'], 256, [5, 5], scope='conv3')
+ end_points['pool3'] = slim.max_pool2d(
+ end_points['conv3'], [2, 2], 2, scope='pool3')
+
+ end_points['fc3'] = slim.fully_connected(
+ slim.flatten(end_points['pool3']), 512, scope='fc3')
+
+ logits = slim.fully_connected(
+ end_points['fc3'], num_classes, activation_fn=None, scope='fc4')
+
+ return logits, end_points
+
+
+def dsn_cropped_linemod(images,
+ weight_decay=0.0,
+ prefix='model',
+ num_classes=11,
+ batch_norm_params=None,
+ is_training=False):
+ """Creates the convolutional pose estimation model for Cropped Linemod.
+
+ Args:
+ images: the Cropped Linemod samples, a tensor of size
+ [batch_size, 64, 64, 4].
+ weight_decay: the value for the weight decay coefficient.
+ prefix: name of the model to use when prefixing tags.
+ num_classes: the number of output classes to use.
+ batch_norm_params: a dictionary that maps batch norm parameter names to
+ values.
+ is_training: specifies whether or not we're currently training the model.
+ This variable will determine the behaviour of the dropout layer.
+
+ Returns:
+ the output logits, a tensor of size [batch_size, num_classes].
+ a dictionary with key/values the layer names and tensors.
+ """
+
+ end_points = {}
+
+ tf.summary.image('{}/input_images'.format(prefix), images)
+ with slim.arg_scope(
+ [slim.conv2d, slim.fully_connected],
+ weights_regularizer=slim.l2_regularizer(weight_decay),
+ activation_fn=tf.nn.relu,
+ normalizer_fn=slim.batch_norm if batch_norm_params else None,
+ normalizer_params=batch_norm_params):
+ with slim.arg_scope([slim.conv2d], padding='SAME'):
+ end_points['conv1'] = slim.conv2d(images, 32, [5, 5], scope='conv1')
+ end_points['pool1'] = slim.max_pool2d(
+ end_points['conv1'], [2, 2], 2, scope='pool1')
+ end_points['conv2'] = slim.conv2d(
+ end_points['pool1'], 64, [5, 5], scope='conv2')
+ end_points['pool2'] = slim.max_pool2d(
+ end_points['conv2'], [2, 2], 2, scope='pool2')
+ net = slim.flatten(end_points['pool2'])
+ end_points['fc3'] = slim.fully_connected(net, 128, scope='fc3')
+ net = slim.dropout(
+ end_points['fc3'], 0.5, is_training=is_training, scope='dropout')
+
+ with tf.variable_scope('quaternion_prediction'):
+ predicted_quaternion = slim.fully_connected(
+ net, 4, activation_fn=tf.nn.tanh)
+ predicted_quaternion = tf.nn.l2_normalize(predicted_quaternion, 1)
+ logits = slim.fully_connected(
+ net, num_classes, activation_fn=None, scope='fc4')
+ end_points['quaternion_pred'] = predicted_quaternion
+
+ return logits, end_points
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/models_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/models_test.py"
new file mode 100644
index 00000000..69d1a272
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/models_test.py"
@@ -0,0 +1,167 @@
+# Copyright 2016 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for DSN components."""
+
+import numpy as np
+import tensorflow as tf
+
+#from models.domain_adaptation.domain_separation
+import models
+
+
+class SharedEncodersTest(tf.test.TestCase):
+
+ def _testSharedEncoder(self,
+ input_shape=[5, 28, 28, 1],
+ model=models.dann_mnist,
+ is_training=True):
+ images = tf.to_float(np.random.rand(*input_shape))
+
+ with self.test_session() as sess:
+ logits, _ = model(images)
+ sess.run(tf.global_variables_initializer())
+ logits_np = sess.run(logits)
+ return logits_np
+
+ def testBuildGRLMnistModel(self):
+ logits = self._testSharedEncoder(model=getattr(models,
+ 'dann_mnist'))
+ self.assertEqual(logits.shape, (5, 10))
+ self.assertTrue(np.any(logits))
+
+ def testBuildGRLSvhnModel(self):
+ logits = self._testSharedEncoder(model=getattr(models,
+ 'dann_svhn'))
+ self.assertEqual(logits.shape, (5, 10))
+ self.assertTrue(np.any(logits))
+
+ def testBuildGRLGtsrbModel(self):
+ logits = self._testSharedEncoder([5, 40, 40, 3],
+ getattr(models, 'dann_gtsrb'))
+ self.assertEqual(logits.shape, (5, 43))
+ self.assertTrue(np.any(logits))
+
+ def testBuildPoseModel(self):
+ logits = self._testSharedEncoder([5, 64, 64, 4],
+ getattr(models, 'dsn_cropped_linemod'))
+ self.assertEqual(logits.shape, (5, 11))
+ self.assertTrue(np.any(logits))
+
+ def testBuildPoseModelWithBatchNorm(self):
+ images = tf.to_float(np.random.rand(10, 64, 64, 4))
+
+ with self.test_session() as sess:
+ logits, _ = getattr(models, 'dsn_cropped_linemod')(
+ images, batch_norm_params=models.default_batch_norm_params(True))
+ sess.run(tf.global_variables_initializer())
+ logits_np = sess.run(logits)
+ self.assertEqual(logits_np.shape, (10, 11))
+ self.assertTrue(np.any(logits_np))
+
+
+class EncoderTest(tf.test.TestCase):
+
+ def _testEncoder(self, batch_norm_params=None, channels=1):
+ images = tf.to_float(np.random.rand(10, 28, 28, channels))
+
+ with self.test_session() as sess:
+ end_points = models.default_encoder(
+ images, 128, batch_norm_params=batch_norm_params)
+ sess.run(tf.global_variables_initializer())
+ private_code = sess.run(end_points['fc3'])
+ self.assertEqual(private_code.shape, (10, 128))
+ self.assertTrue(np.any(private_code))
+ self.assertTrue(np.all(np.isfinite(private_code)))
+
+ def testEncoder(self):
+ self._testEncoder()
+
+ def testEncoderMultiChannel(self):
+ self._testEncoder(None, 4)
+
+ def testEncoderIsTrainingBatchNorm(self):
+ self._testEncoder(models.default_batch_norm_params(True))
+
+ def testEncoderBatchNorm(self):
+ self._testEncoder(models.default_batch_norm_params(False))
+
+
+class DecoderTest(tf.test.TestCase):
+
+ def _testDecoder(self,
+ height=64,
+ width=64,
+ channels=4,
+ batch_norm_params=None,
+ decoder=models.small_decoder):
+ codes = tf.to_float(np.random.rand(32, 100))
+
+ with self.test_session() as sess:
+ output = decoder(
+ codes,
+ height=height,
+ width=width,
+ channels=channels,
+ batch_norm_params=batch_norm_params)
+ sess.run(tf.global_variables_initializer())
+ output_np = sess.run(output)
+ self.assertEqual(output_np.shape, (32, height, width, channels))
+ self.assertTrue(np.any(output_np))
+ self.assertTrue(np.all(np.isfinite(output_np)))
+
+ def testSmallDecoder(self):
+ self._testDecoder(28, 28, 4, None, getattr(models, 'small_decoder'))
+
+ def testSmallDecoderThreeChannels(self):
+ self._testDecoder(28, 28, 3)
+
+ def testSmallDecoderBatchNorm(self):
+ self._testDecoder(28, 28, 4, models.default_batch_norm_params(False))
+
+ def testSmallDecoderIsTrainingBatchNorm(self):
+ self._testDecoder(28, 28, 4, models.default_batch_norm_params(True))
+
+ def testLargeDecoder(self):
+ self._testDecoder(32, 32, 4, None, getattr(models, 'large_decoder'))
+
+ def testLargeDecoderThreeChannels(self):
+ self._testDecoder(32, 32, 3, None, getattr(models, 'large_decoder'))
+
+ def testLargeDecoderBatchNorm(self):
+ self._testDecoder(32, 32, 4,
+ models.default_batch_norm_params(False),
+ getattr(models, 'large_decoder'))
+
+ def testLargeDecoderIsTrainingBatchNorm(self):
+ self._testDecoder(32, 32, 4,
+ models.default_batch_norm_params(True),
+ getattr(models, 'large_decoder'))
+
+ def testGtsrbDecoder(self):
+ self._testDecoder(40, 40, 3, None, getattr(models, 'gtsrb_decoder'))
+
+ def testGtsrbDecoderBatchNorm(self):
+ self._testDecoder(40, 40, 4,
+ models.default_batch_norm_params(False),
+ getattr(models, 'gtsrb_decoder'))
+
+ def testGtsrbDecoderIsTrainingBatchNorm(self):
+ self._testDecoder(40, 40, 4,
+ models.default_batch_norm_params(True),
+ getattr(models, 'gtsrb_decoder'))
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/utils.py"
new file mode 100644
index 00000000..e144ee86
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/domain_separation/utils.py"
@@ -0,0 +1,183 @@
+# Copyright 2016 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Auxiliary functions for domain adaptation related losses.
+"""
+import math
+import tensorflow as tf
+
+
+def create_summaries(end_points, prefix='', max_images=3, use_op_name=False):
+ """Creates a tf summary per endpoint.
+
+ If the endpoint is a 4-dimensional tensor it is displayed as an image (tiling
+ its feature maps if needed), a 3-dimensional tensor is displayed as a
+ single-channel image, and a 2-dimensional tensor gets a histogram summary.
+
+ Args:
+ end_points: a dictionary of name, tf tensor pairs.
+ prefix: an optional string to prefix the summary with.
+ max_images: the maximum number of images to display per summary.
+ use_op_name: Use the op name as opposed to the shorter end_points key.
+ """
+ for layer_name in end_points:
+ if use_op_name:
+ name = end_points[layer_name].op.name
+ else:
+ name = layer_name
+ if len(end_points[layer_name].get_shape().as_list()) == 4:
+ # if it's an actual image do not attempt to reshape it
+ if end_points[layer_name].get_shape().as_list()[-1] == 1 or end_points[
+ layer_name].get_shape().as_list()[-1] == 3:
+ visualization_image = end_points[layer_name]
+ else:
+ visualization_image = reshape_feature_maps(end_points[layer_name])
+ tf.summary.image(
+ '{}/{}'.format(prefix, name),
+ visualization_image,
+ max_outputs=max_images)
+ elif len(end_points[layer_name].get_shape().as_list()) == 3:
+ images = tf.expand_dims(end_points[layer_name], 3)
+ tf.summary.image(
+ '{}/{}'.format(prefix, name),
+ images,
+ max_outputs=max_images)
+ elif len(end_points[layer_name].get_shape().as_list()) == 2:
+ tf.summary.histogram('{}/{}'.format(prefix, name), end_points[layer_name])
+
+
+def reshape_feature_maps(features_tensor):
+ """Reshape activations for tf.summary.image visualization.
+
+ Args:
+ features_tensor: a tensor of activations with a square number of feature
+ maps, e.g. 4, 9, 16.
+ Returns:
+ A composite image with all the feature maps that can be passed as an
+ argument to tf.summary.image.
+ """
+ assert len(features_tensor.get_shape().as_list()) == 4
+ num_filters = features_tensor.get_shape().as_list()[-1]
+ assert num_filters > 0
+ num_filters_sqrt = math.sqrt(num_filters)
+ assert num_filters_sqrt.is_integer(
+ ), 'Number of filters should be a square number but got {}'.format(
+ num_filters)
+ num_filters_sqrt = int(num_filters_sqrt)
+ conv_summary = tf.unstack(features_tensor, axis=3)
+ conv_one_row = tf.concat(axis=2, values=conv_summary[0:num_filters_sqrt])
+ ind = 1
+ conv_final = conv_one_row
+ for ind in range(1, num_filters_sqrt):
+ conv_one_row = tf.concat(axis=2,
+ values=conv_summary[
+ ind * num_filters_sqrt + 0:ind * num_filters_sqrt + num_filters_sqrt])
+ conv_final = tf.concat(
+ axis=1, values=[tf.squeeze(conv_final), tf.squeeze(conv_one_row)])
+ conv_final = tf.expand_dims(conv_final, -1)
+ return conv_final
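+
+# Editor's note (illustration): a [batch, h, w, 16] activation is tiled into a
+# 4 x 4 grid of feature maps, yielding a [batch, 4*h, 4*w, 1] image that
+# tf.summary.image can display directly.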
+
+
+def accuracy(predictions, labels):
+ """Calculates the classificaton accuracy.
+
+ Args:
+ predictions: the predicted values, a tensor whose size matches 'labels'.
+ labels: the ground truth values, a tensor of any size.
+
+ Returns:
+ a tensor whose value on evaluation returns the total accuracy.
+ """
+ return tf.reduce_mean(tf.cast(tf.equal(predictions, labels), tf.float32))
+
+
+def compute_upsample_values(input_tensor, upsample_height, upsample_width):
+ """Compute values for an upsampling op (ops.BatchCropAndResize).
+
+ Args:
+ input_tensor: image tensor with shape [batch, height, width, in_channels]
+ upsample_height: integer
+ upsample_width: integer
+
+ Returns:
+ grid_centers: tensor with shape [batch, 1]
+ crop_sizes: tensor with shape [batch, 1]
+ output_height: integer
+ output_width: integer
+ """
+ batch, input_height, input_width, _ = input_tensor.shape
+
+ height_half = input_height / 2.
+ width_half = input_width / 2.
+ grid_centers = tf.constant(batch * [[height_half, width_half]])
+ crop_sizes = tf.constant(batch * [[input_height, input_width]])
+ output_height = input_height * upsample_height
+ output_width = input_width * upsample_width
+
+ return grid_centers, tf.to_float(crop_sizes), output_height, output_width
+
+
+def compute_pairwise_distances(x, y):
+ """Computes the squared pairwise Euclidean distances between x and y.
+
+ Args:
+ x: a tensor of shape [num_x_samples, num_features]
+ y: a tensor of shape [num_y_samples, num_features]
+
+ Returns:
+ a distance matrix of dimensions [num_x_samples, num_y_samples].
+
+ Raises:
+ ValueError: if the inputs do not match the specified dimensions.
+ """
+
+ if not len(x.get_shape()) == len(y.get_shape()) == 2:
+ raise ValueError('Both inputs should be matrices.')
+
+ if x.get_shape().as_list()[1] != y.get_shape().as_list()[1]:
+ raise ValueError('The number of features should be the same.')
+
+ norm = lambda x: tf.reduce_sum(tf.square(x), 1)
+
+ # By making the `inner' dimensions of the two matrices equal to 1 using
+ # broadcasting, we are essentially subtracting every pair of rows
+ # of x and y.
+ # x will be num_samples x num_features x 1,
+ # and y will be 1 x num_features x num_samples (after broadcasting).
+ # After the subtraction we will get a
+ # num_x_samples x num_features x num_y_samples matrix.
+ # The resulting dist will be of shape num_y_samples x num_x_samples.
+ # and thus we need to transpose it again.
+ return tf.transpose(norm(tf.expand_dims(x, 2) - tf.transpose(y)))
+
+
+def gaussian_kernel_matrix(x, y, sigmas):
+ r"""Computes a Guassian Radial Basis Kernel between the samples of x and y.
+
+ We create a sum of multiple Gaussian kernels, each having a width sigma_i.
+
+ Args:
+ x: a tensor of shape [num_samples, num_features]
+ y: a tensor of shape [num_samples, num_features]
+ sigmas: a tensor of floats which denote the widths of each of the
+ gaussians in the kernel.
+ Returns:
+ A tensor of shape [num_samples{x}, num_samples{y}] with the RBF kernel.
+ """
+ beta = 1. / (2. * (tf.expand_dims(sigmas, 1)))
+
+ dist = compute_pairwise_distances(x, y)
+
+ s = tf.matmul(beta, tf.reshape(dist, (1, -1)))
+
+ return tf.reshape(tf.reduce_sum(tf.exp(-s), 0), tf.shape(dist))
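+
+
+# Editor's example (sketch, not part of the original file): the entry for a
+# pair (x_i, y_j) is sum_k exp(-||x_i - y_j||^2 / (2 * sigma_k)); summing over
+# several bandwidths (as in losses.mmd_loss) makes the resulting MMD estimate
+# less sensitive to any single choice of sigma.
+#
+#   x = tf.random_normal([8, 16])
+#   y = tf.random_normal([8, 16])
+#   kernel_matrix = gaussian_kernel_matrix(x, y, tf.constant([1., 5., 10.]))
+#   # kernel_matrix has shape [8, 8].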
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/BUILD" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/BUILD"
new file mode 100644
index 00000000..2bc8d4a4
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/BUILD"
@@ -0,0 +1,90 @@
+# Description:
+# Contains code for domain-adaptation style transfer.
+
+package(
+ default_visibility = [
+ ":internal",
+ ],
+)
+
+licenses(["notice"]) # Apache 2.0
+
+exports_files(["LICENSE"])
+
+package_group(
+ name = "internal",
+ packages = [
+ "//domain_adaptation/...",
+ ],
+)
+
+py_library(
+ name = "pixelda_preprocess",
+ srcs = ["pixelda_preprocess.py"],
+ deps = [
+
+ ],
+)
+
+py_test(
+ name = "pixelda_preprocess_test",
+ srcs = ["pixelda_preprocess_test.py"],
+ deps = [
+ ":pixelda_preprocess",
+
+ ],
+)
+
+py_library(
+ name = "pixelda_model",
+ srcs = [
+ "pixelda_model.py",
+ "pixelda_task_towers.py",
+ "hparams.py",
+ ],
+ deps = [
+
+ ],
+)
+
+py_library(
+ name = "pixelda_utils",
+ srcs = ["pixelda_utils.py"],
+ deps = [
+
+ ],
+)
+
+py_library(
+ name = "pixelda_losses",
+ srcs = ["pixelda_losses.py"],
+ deps = [
+
+ ],
+)
+
+py_binary(
+ name = "pixelda_train",
+ srcs = ["pixelda_train.py"],
+ deps = [
+ ":pixelda_losses",
+ ":pixelda_model",
+ ":pixelda_preprocess",
+ ":pixelda_utils",
+
+ "//domain_adaptation/datasets:dataset_factory",
+ ],
+)
+
+py_binary(
+ name = "pixelda_eval",
+ srcs = ["pixelda_eval.py"],
+ deps = [
+ ":pixelda_losses",
+ ":pixelda_model",
+ ":pixelda_preprocess",
+ ":pixelda_utils",
+
+ "//domain_adaptation/datasets:dataset_factory",
+ ],
+)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/README.md"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/baselines/BUILD" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/baselines/BUILD"
new file mode 100644
index 00000000..c41a4ffe
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/baselines/BUILD"
@@ -0,0 +1,23 @@
+licenses(["notice"]) # Apache 2.0
+
+py_binary(
+ name = "baseline_train",
+ srcs = ["baseline_train.py"],
+ deps = [
+
+ "//domain_adaptation/datasets:dataset_factory",
+ "//domain_adaptation/pixel_domain_adaptation:pixelda_model",
+ "//domain_adaptation/pixel_domain_adaptation:pixelda_preprocess",
+ ],
+)
+
+py_binary(
+ name = "baseline_eval",
+ srcs = ["baseline_eval.py"],
+ deps = [
+
+ "//domain_adaptation/datasets:dataset_factory",
+ "//domain_adaptation/pixel_domain_adaptation:pixelda_model",
+ "//domain_adaptation/pixel_domain_adaptation:pixelda_preprocess",
+ ],
+)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/baselines/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/baselines/README.md"
new file mode 100644
index 00000000..d61195ad
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/baselines/README.md"
@@ -0,0 +1,60 @@
+The best baselines were obtained with the following configurations (a sketch of the corresponding learning-rate schedule is given at the end of this file):
+
+
+## MNIST => MNIST_M
+
+Accuracy:
+MNIST-Train: 99.9
+MNIST_M-Train: 63.9
+MNIST_M-Valid: 63.9
+MNIST_M-Test: 63.6
+
+Learning Rate = 0.0001
+Weight Decay = 0.0
+Number of Steps: 105,000
+
+## MNIST => USPS
+
+Accuracy:
+MNIST-Train: 100.0
+USPS-Train: 82.8
+USPS-Valid: 82.8
+USPS-Test: 78.9
+
+Learning Rate = 0.0001
+Weight Decay = 0.0
+Number of Steps: 22,000
+
+## MNIST_M => MNIST
+
+Accuracy:
+MNIST_M-Train: 100
+MNIST-Train: 98.5
+MNIST-Valid: 98.5
+MNIST-Test: 98.1
+
+Learning Rate = 0.001
+Weight Decay = 0.0
+Number of Steps: 604,400
+
+## MNIST_M => MNIST_M
+
+Accuracy:
+MNIST_M-Train: 100.0
+MNIST_M-Valid: 96.6
+MNIST_M-Test: 96.4
+
+Learning Rate = 0.001
+Weight Decay = 0.0
+Number of Steps: 139,400
+
+## USPS => USPS
+
+Accuracy:
+USPS-Train: 100.0
+USPS-Valid: 100.0
+USPS-Test: 96.5
+
+Learning Rate = 0.001
+Weight Decay = 0.0
+Number of Steps: 67,000
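+
+The initial learning rates above are decayed with the staircase exponential
+schedule used by `baseline_train.py` (by default a factor of 0.95 every 20,000
+steps). As a minimal sketch, assuming those defaults, the effective rate at a
+given step is:
+
+```python
+def effective_learning_rate(initial_lr, step, decay_steps=20000, decay_factor=0.95):
+    """Staircase exponential decay, mirroring tf.train.exponential_decay."""
+    return initial_lr * decay_factor ** (step // decay_steps)
+
+# For example, the MNIST => MNIST_M baseline above after 105,000 steps:
+print(effective_learning_rate(0.0001, 105000))  # ~7.7e-05
+```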
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/baselines/baseline_eval.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/baselines/baseline_eval.py"
new file mode 100644
index 00000000..6b7ef645
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/baselines/baseline_eval.py"
@@ -0,0 +1,141 @@
+# Copyright 2017 Google Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+r"""Evals the classification/pose baselines."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from functools import partial
+
+import math
+
+# Dependency imports
+
+import tensorflow as tf
+
+from domain_adaptation.datasets import dataset_factory
+from domain_adaptation.pixel_domain_adaptation import pixelda_preprocess
+from domain_adaptation.pixel_domain_adaptation import pixelda_task_towers
+
+flags = tf.app.flags
+FLAGS = flags.FLAGS
+
+slim = tf.contrib.slim
+
+flags.DEFINE_string('master', '', 'BNS name of the tensorflow server')
+
+flags.DEFINE_string(
+ 'checkpoint_dir', None, 'The location of the checkpoint files.')
+
+flags.DEFINE_string(
+ 'eval_dir', None, 'The directory where evaluation logs are written.')
+
+flags.DEFINE_integer('batch_size', 32, 'The number of samples per batch.')
+
+flags.DEFINE_string('dataset_name', None, 'The name of the dataset.')
+
+flags.DEFINE_string('dataset_dir', None,
+ 'The directory where the data is stored.')
+
+flags.DEFINE_string('split_name', None, 'The name of the train/test split.')
+
+flags.DEFINE_integer('eval_interval_secs', 60 * 5,
+ 'How often (in seconds) to run evaluation.')
+
+flags.DEFINE_integer(
+ 'num_readers', 4,
+ 'The number of parallel readers that read data from the dataset.')
+
+def main(unused_argv):
+ tf.logging.set_verbosity(tf.logging.INFO)
+ hparams = tf.contrib.training.HParams()
+ hparams.weight_decay_task_classifier = 0.0
+
+ if FLAGS.dataset_name in ['mnist', 'mnist_m', 'usps']:
+ hparams.task_tower = 'mnist'
+ else:
+ raise ValueError('Unknown dataset %s' % FLAGS.dataset_name)
+
+ if not tf.gfile.Exists(FLAGS.eval_dir):
+ tf.gfile.MakeDirs(FLAGS.eval_dir)
+
+ with tf.Graph().as_default():
+ dataset = dataset_factory.get_dataset(FLAGS.dataset_name, FLAGS.split_name,
+ FLAGS.dataset_dir)
+ num_classes = dataset.num_classes
+ num_samples = dataset.num_samples
+
+ preprocess_fn = partial(pixelda_preprocess.preprocess_classification,
+ is_training=False)
+
+ images, labels = dataset_factory.provide_batch(
+ FLAGS.dataset_name,
+ FLAGS.split_name,
+ dataset_dir=FLAGS.dataset_dir,
+ num_readers=FLAGS.num_readers,
+ batch_size=FLAGS.batch_size,
+ num_preprocessing_threads=FLAGS.num_readers)
+
+ # Define the model (inference mode, since this is the eval job)
+ logits, _ = pixelda_task_towers.add_task_specific_model(
+ images, hparams, num_classes=num_classes, is_training=False)
+
+ #####################
+ # Define the losses #
+ #####################
+ if 'classes' in labels:
+ one_hot_labels = labels['classes']
+ loss = tf.losses.softmax_cross_entropy(
+ onehot_labels=one_hot_labels, logits=logits)
+ tf.summary.scalar('losses/Classification_Loss', loss)
+ else:
+ raise ValueError('Only support classification for now.')
+
+ total_loss = tf.losses.get_total_loss()
+
+ predictions = tf.reshape(tf.argmax(logits, 1), shape=[-1])
+ class_labels = tf.argmax(labels['classes'], 1)
+
+ metrics_to_values, metrics_to_updates = slim.metrics.aggregate_metric_map({
+ 'Mean_Loss':
+ tf.contrib.metrics.streaming_mean(total_loss),
+ 'Accuracy':
+ tf.contrib.metrics.streaming_accuracy(predictions,
+ tf.reshape(
+ class_labels,
+ shape=[-1])),
+ 'Recall_at_5':
+ tf.contrib.metrics.streaming_recall_at_k(logits, class_labels, 5),
+ })
+
+ tf.summary.histogram('outputs/Predictions', predictions)
+ tf.summary.histogram('outputs/Ground_Truth', class_labels)
+
+ for name, value in metrics_to_values.items():
+ tf.summary.scalar(name, value)
+
+ num_batches = int(math.ceil(num_samples / float(FLAGS.batch_size)))
+
+ slim.evaluation.evaluation_loop(
+ master=FLAGS.master,
+ checkpoint_dir=FLAGS.checkpoint_dir,
+ logdir=FLAGS.eval_dir,
+ num_evals=num_batches,
+ eval_op=metrics_to_updates.values(),
+ eval_interval_secs=FLAGS.eval_interval_secs)
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/baselines/baseline_train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/baselines/baseline_train.py"
new file mode 100644
index 00000000..8c92bd81
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/baselines/baseline_train.py"
@@ -0,0 +1,161 @@
+# Copyright 2017 Google Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+r"""Trains the classification/pose baselines."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from functools import partial
+
+# Dependency imports
+
+import tensorflow as tf
+
+from domain_adaptation.datasets import dataset_factory
+from domain_adaptation.pixel_domain_adaptation import pixelda_preprocess
+from domain_adaptation.pixel_domain_adaptation import pixelda_task_towers
+
+flags = tf.app.flags
+FLAGS = flags.FLAGS
+
+slim = tf.contrib.slim
+
+flags.DEFINE_string('master', '', 'BNS name of the tensorflow server')
+
+flags.DEFINE_integer('task', 0, 'The task ID.')
+
+flags.DEFINE_integer('num_ps_tasks', 0,
+ 'The number of parameter servers. If the value is 0, then '
+ 'the parameters are handled locally by the worker.')
+
+flags.DEFINE_integer('batch_size', 32, 'The number of samples per batch.')
+
+flags.DEFINE_string('dataset_name', None, 'The name of the dataset.')
+
+flags.DEFINE_string('dataset_dir', None,
+ 'The directory where the data is stored.')
+
+flags.DEFINE_string('split_name', None, 'The name of the train/test split.')
+
+flags.DEFINE_float('learning_rate', 0.001, 'The initial learning rate.')
+
+flags.DEFINE_integer(
+ 'learning_rate_decay_steps', 20000,
+ 'The frequency, in steps, at which the learning rate is decayed.')
+
+flags.DEFINE_float('learning_rate_decay_factor',
+ 0.95,
+ 'The factor with which the learning rate is decayed.')
+
+flags.DEFINE_float('adam_beta1', 0.5, 'The beta1 value for the AdamOptimizer')
+
+flags.DEFINE_float('weight_decay', 1e-5,
+ 'The L2 coefficient on the model weights.')
+
+flags.DEFINE_string(
+ 'logdir', None, 'The location of the logs and checkpoints.')
+
+flags.DEFINE_integer('save_interval_secs', 600,
+ 'How often, in seconds, we save the model to disk.')
+
+flags.DEFINE_integer('save_summaries_secs', 600,
+ 'How often, in seconds, we compute the summaries.')
+
+flags.DEFINE_integer(
+ 'num_readers', 4,
+ 'The number of parallel readers that read data from the dataset.')
+
+flags.DEFINE_float(
+ 'moving_average_decay', 0.9999,
+ 'The amount of decay to use for moving averages.')
+
+
+def main(unused_argv):
+ tf.logging.set_verbosity(tf.logging.INFO)
+ hparams = tf.contrib.training.HParams()
+ hparams.weight_decay_task_classifier = FLAGS.weight_decay
+
+ if FLAGS.dataset_name in ['mnist', 'mnist_m', 'usps']:
+ hparams.task_tower = 'mnist'
+ else:
+ raise ValueError('Unknown dataset %s' % FLAGS.dataset_name)
+
+ with tf.Graph().as_default():
+ with tf.device(
+ tf.train.replica_device_setter(FLAGS.num_ps_tasks, merge_devices=True)):
+ dataset = dataset_factory.get_dataset(FLAGS.dataset_name,
+ FLAGS.split_name, FLAGS.dataset_dir)
+ num_classes = dataset.num_classes
+
+ preprocess_fn = partial(pixelda_preprocess.preprocess_classification,
+ is_training=True)
+
+ images, labels = dataset_factory.provide_batch(
+ FLAGS.dataset_name,
+ FLAGS.split_name,
+ dataset_dir=FLAGS.dataset_dir,
+ num_readers=FLAGS.num_readers,
+ batch_size=FLAGS.batch_size,
+ num_preprocessing_threads=FLAGS.num_readers)
+ # preprocess_fn=preprocess_fn)
+
+ # Define the model
+ logits, _ = pixelda_task_towers.add_task_specific_model(
+ images, hparams, num_classes=num_classes, is_training=True)
+
+ # Define the losses
+ if 'classes' in labels:
+ one_hot_labels = labels['classes']
+ loss = tf.losses.softmax_cross_entropy(
+ onehot_labels=one_hot_labels, logits=logits)
+ tf.summary.scalar('losses/Classification_Loss', loss)
+ else:
+ raise ValueError('Only support classification for now.')
+
+ total_loss = tf.losses.get_total_loss()
+ tf.summary.scalar('losses/Total_Loss', total_loss)
+
+ # Setup the moving averages
+ moving_average_variables = slim.get_model_variables()
+ variable_averages = tf.train.ExponentialMovingAverage(
+ FLAGS.moving_average_decay, slim.get_or_create_global_step())
+ tf.add_to_collection(
+ tf.GraphKeys.UPDATE_OPS,
+ variable_averages.apply(moving_average_variables))
+
+ # Specify the optimization scheme:
+ learning_rate = tf.train.exponential_decay(
+ FLAGS.learning_rate,
+ slim.get_or_create_global_step(),
+ FLAGS.learning_rate_decay_steps,
+ FLAGS.learning_rate_decay_factor,
+ staircase=True)
+
+ optimizer = tf.train.AdamOptimizer(learning_rate, beta1=FLAGS.adam_beta1)
+
+ train_op = slim.learning.create_train_op(total_loss, optimizer)
+
+ slim.learning.train(
+ train_op,
+ FLAGS.logdir,
+ master=FLAGS.master,
+ is_chief=(FLAGS.task == 0),
+ save_summaries_secs=FLAGS.save_summaries_secs,
+ save_interval_secs=FLAGS.save_interval_secs)
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/hparams.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/hparams.py"
new file mode 100644
index 00000000..ba9539f7
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/hparams.py"
@@ -0,0 +1,201 @@
+# Copyright 2017 Google Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Define model HParams."""
+import tensorflow as tf
+
+
+def create_hparams(hparam_string=None):
+ """Create model hyperparameters. Parse nondefault from given string."""
+ hparams = tf.contrib.training.HParams(
+ # The name of the architecture to use.
+ arch='resnet',
+ lrelu_leakiness=0.2,
+ batch_norm_decay=0.9,
+ weight_decay=1e-5,
+ normal_init_std=0.02,
+ generator_kernel_size=3,
+ discriminator_kernel_size=3,
+
+ # Stop training after this many examples are processed.
+ # If 0, train indefinitely.
+ num_training_examples=0,
+
+ # Apply data augmentation to datasets
+ # Applies only in training job
+ augment_source_images=False,
+ augment_target_images=False,
+
+ # Discriminator
+ # Number of filters in first layer of discriminator
+ num_discriminator_filters=64,
+ discriminator_conv_block_size=1, # How many convs to have at each size
+ discriminator_filter_factor=2.0, # Multiply # filters by this each layer
+ # Add gaussian noise with this stddev to every hidden layer of D
+ discriminator_noise_stddev=0.2, # lmetz: Start seeing results at >= 0.1
+ # If true, add this gaussian noise to input images to D as well
+ discriminator_image_noise=False,
+ discriminator_first_stride=1, # Stride in first conv of discriminator
+ discriminator_do_pooling=False, # If true, replace stride 2 with avg pool
+ discriminator_dropout_keep_prob=0.9, # keep probability for dropout
+
+ # DCGAN Generator
+ # Number of filters in generator decoder last layer (repeatedly halved
+ # from 1st layer)
+ num_decoder_filters=64,
+ # Number of filters in generator encoder 1st layer (repeatedly doubled
+ # after 1st layer)
+ num_encoder_filters=64,
+
+ # This is the shape to which the noise vector is projected (if we're
+ # transferring from noise).
+ # Written this way instead of [4, 4, 64] for hparam-search flexibility.
+ projection_shape_size=4,
+ projection_shape_channels=64,
+
+ # Indicates the method by which we enlarge the spatial representation
+ # of an image. Possible values include:
+ # - resize_conv: Performs a nearest neighbor resize followed by a conv.
+ # - conv2d_transpose: Performs a conv2d_transpose.
+ upsample_method='resize_conv',
+
+ # Visualization
+ summary_steps=500, # Output image summary every N steps
+
+ ###################################
+ # Task Classifier Hyperparameters #
+ ###################################
+
+ # Which task-specific prediction tower to use. Possible choices are:
+ # none: No task tower.
+ # doubling_pose_estimator: classifier + quaternion regressor.
+ # [conv + pool]* + FC
+ # Classifiers used in DSN paper:
+ # gtsrb: Classifier used for GTSRB
+ # svhn: Classifier used for SVHN
+ # mnist: Classifier used for MNIST
+ # pose_mini: Classifier + regressor used for pose_mini
+ task_tower='doubling_pose_estimator',
+ weight_decay_task_classifier=1e-5,
+ source_task_loss_weight=1.0,
+ transferred_task_loss_weight=1.0,
+
+ # Number of private layers in doubling_pose_estimator task tower
+ num_private_layers=2,
+
+ # The weight for the log quaternion loss we use for source and transferred
+ # samples of the cropped_linemod dataset.
+ # In the DSN work, 1/8 of the classifier weight worked well for our log
+ # quaternion loss
+ source_pose_weight=0.125 * 2.0,
+ transferred_pose_weight=0.125 * 1.0,
+
+ # If set to True, the style transfer network also attempts to change its
+ # weights to maximize the performance of the task tower. If set to False,
+ # then the style transfer network only attempts to change its weights to
+ # make the transferred images more likely according to the domain
+ # classifier.
+ task_tower_in_g_step=True,
+ task_loss_in_g_weight=1.0, # Weight of task loss in G
+
+ #########################################
+ # 'simple' generator arch model hparams #
+ #########################################
+ simple_num_conv_layers=1,
+ simple_conv_filters=8,
+
+ #########################
+ # Resnet Hyperparameters #
+ #########################
+ resnet_blocks=6, # Number of resnet blocks
+ resnet_filters=64, # Number of filters per conv in resnet blocks
+ # If true, add original input back to result of convolutions inside the
+ # resnet arch. If false, it turns into a simple stack of conv/relu/BN
+ # layers.
+ resnet_residuals=True,
+
+ #######################################
+ # The residual / interpretable model. #
+ #######################################
+ res_int_blocks=2, # The number of residual blocks.
+ res_int_convs=2, # The number of conv calls inside each block.
+ res_int_filters=64, # The number of filters used by each convolution.
+
+ ####################
+ # Latent variables #
+ ####################
+ # if true, then generate random noise and project to input for generator
+ noise_channel=True,
+ # The number of dimensions in the input noise vector.
+ noise_dims=10,
+
+ # If true, then one hot encode source image class and project as an
+ # additional channel for the input to generator. This gives the generator
+ # access to the class, which may help generation performance.
+ condition_on_source_class=False,
+
+ ########################
+ # Loss Hyperparameters #
+ ########################
+ domain_loss_weight=1.0,
+ style_transfer_loss_weight=1.0,
+
+ ########################################################################
+ # Encourages the transferred images to be similar to the source images #
+ # using a configurable metric. #
+ ########################################################################
+
+ # The weight of the loss function encouraging the source and transferred
+ # images to be similar. If set to 0, then the loss function is not used.
+ transferred_similarity_loss_weight=0.0,
+
+ # The type of loss used to encourage transferred and source image
+ # similarity. Valid values include:
+ # mpse: Mean Pairwise Squared Error
+ # mse: Mean Squared Error
+ # hinged_mse: Computes the mean squared error using squared differences
+ # greater than hparams.transferred_similarity_max_diff
+ # hinged_mae: Computes the mean absolute error using absolute
+ # differences greater than hparams.transferred_similarity_max_diff.
+ transferred_similarity_loss='mpse',
+
+ # The maximum allowable difference between the source and target images.
+ # This value is used, in effect, to produce a hinge loss. Note that the
+ # range of values should be between 0 and 1.
+ transferred_similarity_max_diff=0.4,
+
+ ################################
+ # Optimization Hyperparameters #
+ ################################
+ learning_rate=0.001,
+ batch_size=32,
+ lr_decay_steps=20000,
+ lr_decay_rate=0.95,
+
+ # Recommendation from the DCGAN paper:
+ adam_beta1=0.5,
+ clip_gradient_norm=5.0,
+
+ # The number of times we run the discriminator train_op in a row.
+ discriminator_steps=1,
+
+ # The number of times we run the generator train_op in a row.
+ generator_steps=1)
+
+ if hparam_string:
+ tf.logging.info('Parsing command line hparams: %s', hparam_string)
+ hparams.parse(hparam_string)
+
+ tf.logging.info('Final parsed hparams: %s', hparams.values())
+ return hparams
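+
+# Example (a sketch, not part of this change): non-default values are supplied
+# as a comma-separated string, e.g. via the --hparams flag of pixelda_eval.py,
+# which arrives here as `hparam_string`:
+#
+#   hparams = create_hparams('arch=residual_interpretation,batch_size=64,'
+#                            'transferred_similarity_loss_weight=0.5')
+#   assert hparams.batch_size == 64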
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/pixelda_eval.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/pixelda_eval.py"
new file mode 100644
index 00000000..23824249
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/pixelda_eval.py"
@@ -0,0 +1,298 @@
+# Copyright 2017 Google Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+r"""Evaluates the PIXELDA model.
+
+-- Compile the model for CPU.
+$ bazel build -c opt third_party/tensorflow_models/domain_adaptation/pixel_domain_adaptation:pixelda_eval
+
+-- Compile the model for GPU.
+$ bazel build -c opt --copt=-mavx --config=cuda \
+ third_party/tensorflow_models/domain_adaptation/pixel_domain_adaptation:pixelda_eval
+
+-- Run the evaluation.
+$ ./bazel-bin/third_party/tensorflow_models/domain_adaptation/pixel_domain_adaptation/pixelda_eval \
+ --source_dataset=mnist \
+ --target_dataset=mnist_m \
+ --dataset_dir=/tmp/datasets/ \
+ --alsologtostderr
+
+-- Visualize the results.
+$ tensorboard --port 2222 --logdir=/tmp/pixelda/
+"""
+from functools import partial
+import math
+
+# Dependency imports
+
+import tensorflow as tf
+
+from domain_adaptation.datasets import dataset_factory
+from domain_adaptation.pixel_domain_adaptation import pixelda_model
+from domain_adaptation.pixel_domain_adaptation import pixelda_preprocess
+from domain_adaptation.pixel_domain_adaptation import pixelda_utils
+from domain_adaptation.pixel_domain_adaptation import pixelda_losses
+from domain_adaptation.pixel_domain_adaptation.hparams import create_hparams
+
+slim = tf.contrib.slim
+
+flags = tf.app.flags
+FLAGS = flags.FLAGS
+
+flags.DEFINE_string('master', '', 'BNS name of the TensorFlow master to use.')
+
+flags.DEFINE_string('checkpoint_dir', '/tmp/pixelda/',
+ 'Directory where the model was written to.')
+
+flags.DEFINE_string('eval_dir', '/tmp/pixelda/',
+ 'Directory where the results are saved to.')
+
+flags.DEFINE_integer('eval_interval_secs', 60,
+ 'The frequency, in seconds, with which evaluation is run.')
+
+flags.DEFINE_string('target_split_name', 'test',
+ 'The name of the train/test split.')
+flags.DEFINE_string('source_split_name', 'train', 'Split for source dataset.'
+ ' Defaults to train.')
+
+flags.DEFINE_string('source_dataset', 'mnist',
+ 'The name of the source dataset.')
+
+flags.DEFINE_string('target_dataset', 'mnist_m',
+ 'The name of the target dataset.')
+
+flags.DEFINE_string(
+ 'dataset_dir',
+ '', # None,
+ 'The directory where the datasets can be found.')
+
+flags.DEFINE_integer(
+ 'num_readers', 4,
+ 'The number of parallel readers that read data from the dataset.')
+
+flags.DEFINE_integer('num_preprocessing_threads', 4,
+ 'The number of threads used to create the batches.')
+
+# HParams
+
+flags.DEFINE_string('hparams', '', 'Comma separated hyperparameter values')
+
+
+def run_eval(run_dir, checkpoint_dir, hparams):
+ """Runs the eval loop.
+
+ Args:
+ run_dir: The directory where eval specific logs are placed
+ checkpoint_dir: The directory where the checkpoints are stored
+ hparams: The hyperparameters struct.
+
+ Raises:
+ ValueError: if hparams.arch is not recognized.
+ """
+ for checkpoint_path in slim.evaluation.checkpoints_iterator(
+ checkpoint_dir, FLAGS.eval_interval_secs):
+ with tf.Graph().as_default():
+ #########################
+ # Preprocess the inputs #
+ #########################
+ target_dataset = dataset_factory.get_dataset(
+ FLAGS.target_dataset,
+ split_name=FLAGS.target_split_name,
+ dataset_dir=FLAGS.dataset_dir)
+ target_images, target_labels = dataset_factory.provide_batch(
+ FLAGS.target_dataset, FLAGS.target_split_name, FLAGS.dataset_dir,
+ FLAGS.num_readers, hparams.batch_size,
+ FLAGS.num_preprocessing_threads)
+ num_target_classes = target_dataset.num_classes
+ target_labels['class'] = tf.argmax(target_labels['classes'], 1)
+ del target_labels['classes']
+
+ if hparams.arch not in ['dcgan']:
+ source_dataset = dataset_factory.get_dataset(
+ FLAGS.source_dataset,
+ split_name=FLAGS.source_split_name,
+ dataset_dir=FLAGS.dataset_dir)
+ num_source_classes = source_dataset.num_classes
+ source_images, source_labels = dataset_factory.provide_batch(
+ FLAGS.source_dataset, FLAGS.source_split_name, FLAGS.dataset_dir,
+ FLAGS.num_readers, hparams.batch_size,
+ FLAGS.num_preprocessing_threads)
+ source_labels['class'] = tf.argmax(source_labels['classes'], 1)
+ del source_labels['classes']
+ if num_source_classes != num_target_classes:
+ raise ValueError(
+ 'Input and output datasets must have same number of classes')
+ else:
+ source_images = None
+ source_labels = None
+
+ ####################
+ # Define the model #
+ ####################
+ end_points = pixelda_model.create_model(
+ hparams,
+ target_images,
+ source_images=source_images,
+ source_labels=source_labels,
+ is_training=False,
+ num_classes=num_target_classes)
+
+ #######################
+ # Metrics & Summaries #
+ #######################
+ names_to_values, names_to_updates = create_metrics(end_points,
+ source_labels,
+ target_labels, hparams)
+ pixelda_utils.summarize_model(end_points)
+ pixelda_utils.summarize_transferred_grid(
+ end_points['transferred_images'], source_images, name='Transferred')
+ if 'source_images_recon' in end_points:
+ pixelda_utils.summarize_transferred_grid(
+ end_points['source_images_recon'],
+ source_images,
+ name='Source Reconstruction')
+ pixelda_utils.summarize_images(target_images, 'Target')
+
+ for name, value in names_to_values.items():
+ tf.summary.scalar(name, value)
+
+ # Use the entire split by default
+ num_examples = target_dataset.num_samples
+
+ num_batches = int(math.ceil(num_examples / float(hparams.batch_size)))
+ global_step = slim.get_or_create_global_step()
+
+ result = slim.evaluation.evaluate_once(
+ master=FLAGS.master,
+ checkpoint_path=checkpoint_path,
+ logdir=run_dir,
+ num_evals=num_batches,
+ eval_op=names_to_updates.values(),
+ final_op=names_to_values)
+
+
+def to_degrees(log_quaternion_loss):
+ """Converts a log quaternion distance to an angle.
+
+ Args:
+ log_quaternion_loss: The log quaternion distance between two
+ unit quaternions (or a batch of pairs of quaternions).
+
+ Returns:
+ The angle in degrees of the implied angle-axis representation.
+ """
+ return tf.acos(-(tf.exp(log_quaternion_loss) - 1)) * 2 * 180 / math.pi
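+
+# For intuition (a sketch, not part of this change): log_quaternion_loss_batch
+# in pixelda_losses.py returns log(1e-4 + 1 - |<p, q>|), and for unit
+# quaternions |<p, q>| = |cos(theta / 2)| where theta is the relative rotation
+# angle, so the expression above inverts that mapping back to degrees. The
+# 1e-4 epsilon means even a perfect prediction reports a small residual angle:
+#
+#   loss = log(1e-4 + 1 - 1.0) = log(1e-4)          # identical quaternions
+#   to_degrees(loss) = acos(1 - 1e-4) * 360 / pi ~= 1.62 degrees, not 0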
+
+
+def create_metrics(end_points, source_labels, target_labels, hparams):
+ """Create metrics for the model.
+
+ Args:
+ end_points: A dictionary of end point name to tensor
+ source_labels: Labels for source images. batch_size x 1
+ target_labels: Labels for target images. batch_size x 1
+ hparams: The hyperparameters struct.
+
+ Returns:
+ Tuple of (names_to_values, names_to_updates), dictionaries that map a metric
+ name to its value and update op, respectively
+
+ """
+ ###########################################
+ # Evaluate the Domain Prediction Accuracy #
+ ###########################################
+ batch_size = hparams.batch_size
+ names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
+ ('eval/Domain_Accuracy-Transferred'):
+ tf.contrib.metrics.streaming_accuracy(
+ tf.to_int32(
+ tf.round(tf.sigmoid(end_points[
+ 'transferred_domain_logits']))),
+ tf.zeros(batch_size, dtype=tf.int32)),
+ ('eval/Domain_Accuracy-Target'):
+ tf.contrib.metrics.streaming_accuracy(
+ tf.to_int32(
+ tf.round(tf.sigmoid(end_points['target_domain_logits']))),
+ tf.ones(batch_size, dtype=tf.int32))
+ })
+
+ ################################
+ # Evaluate the task classifier #
+ ################################
+ if 'source_task_logits' in end_points:
+ metric_name = 'eval/Task_Accuracy-Source'
+ names_to_values[metric_name], names_to_updates[
+ metric_name] = tf.contrib.metrics.streaming_accuracy(
+ tf.argmax(end_points['source_task_logits'], 1),
+ source_labels['class'])
+
+ if 'transferred_task_logits' in end_points:
+ metric_name = 'eval/Task_Accuracy-Transferred'
+ names_to_values[metric_name], names_to_updates[
+ metric_name] = tf.contrib.metrics.streaming_accuracy(
+ tf.argmax(end_points['transferred_task_logits'], 1),
+ source_labels['class'])
+
+ if 'target_task_logits' in end_points:
+ metric_name = 'eval/Task_Accuracy-Target'
+ names_to_values[metric_name], names_to_updates[
+ metric_name] = tf.contrib.metrics.streaming_accuracy(
+ tf.argmax(end_points['target_task_logits'], 1),
+ target_labels['class'])
+
+ ##########################################################################
+ # Pose data-specific losses.
+ ##########################################################################
+ if 'quaternion' in source_labels.keys():
+ params = {}
+ params['use_logging'] = False
+ params['batch_size'] = batch_size
+
+ angle_loss_source = to_degrees(
+ pixelda_losses.log_quaternion_loss_batch(end_points[
+ 'source_quaternion'], source_labels['quaternion'], params))
+ angle_loss_transferred = to_degrees(
+ pixelda_losses.log_quaternion_loss_batch(end_points[
+ 'transferred_quaternion'], source_labels['quaternion'], params))
+ angle_loss_target = to_degrees(
+ pixelda_losses.log_quaternion_loss_batch(end_points[
+ 'target_quaternion'], target_labels['quaternion'], params))
+
+ metric_name = 'eval/Angle_Loss-Source'
+ names_to_values[metric_name], names_to_updates[
+ metric_name] = slim.metrics.mean(angle_loss_source)
+
+ metric_name = 'eval/Angle_Loss-Transferred'
+ names_to_values[metric_name], names_to_updates[
+ metric_name] = slim.metrics.mean(angle_loss_transferred)
+
+ metric_name = 'eval/Angle_Loss-Target'
+ names_to_values[metric_name], names_to_updates[
+ metric_name] = slim.metrics.mean(angle_loss_target)
+
+ return names_to_values, names_to_updates
+
+
+def main(_):
+ tf.logging.set_verbosity(tf.logging.INFO)
+ hparams = create_hparams(FLAGS.hparams)
+ run_eval(
+ run_dir=FLAGS.eval_dir,
+ checkpoint_dir=FLAGS.checkpoint_dir,
+ hparams=hparams)
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/pixelda_losses.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/pixelda_losses.py"
new file mode 100644
index 00000000..cf39765d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/pixelda_losses.py"
@@ -0,0 +1,385 @@
+# Copyright 2017 Google Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Defines the various loss functions in use by the PIXELDA model."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# Dependency imports
+
+import tensorflow as tf
+
+slim = tf.contrib.slim
+
+
+def add_domain_classifier_losses(end_points, hparams):
+ """Adds losses related to the domain-classifier.
+
+ Args:
+ end_points: A map of network end point names to `Tensors`.
+ hparams: The hyperparameters struct.
+
+ Returns:
+ loss: A `Tensor` representing the total task-classifier loss.
+ """
+ if hparams.domain_loss_weight == 0:
+ tf.logging.info(
+ 'Domain classifier loss weight is 0, so not creating losses.')
+ return 0
+
+ # The domain prediction loss is minimized with respect to the domain
+ # classifier features only. Its aim is to predict the domain of the images.
+ # Note: 1 = 'real image' label, 0 = 'fake image' label
+ transferred_domain_loss = tf.losses.sigmoid_cross_entropy(
+ multi_class_labels=tf.zeros_like(end_points['transferred_domain_logits']),
+ logits=end_points['transferred_domain_logits'])
+ tf.summary.scalar('Domain_loss_transferred', transferred_domain_loss)
+
+ target_domain_loss = tf.losses.sigmoid_cross_entropy(
+ multi_class_labels=tf.ones_like(end_points['target_domain_logits']),
+ logits=end_points['target_domain_logits'])
+ tf.summary.scalar('Domain_loss_target', target_domain_loss)
+
+ # Compute the total domain loss:
+ total_domain_loss = transferred_domain_loss + target_domain_loss
+ total_domain_loss *= hparams.domain_loss_weight
+ tf.summary.scalar('Domain_loss_total', total_domain_loss)
+
+ return total_domain_loss
+
+def log_quaternion_loss_batch(predictions, labels, params):
+ """A helper function to compute the error between quaternions.
+
+ Args:
+ predictions: A Tensor of size [batch_size, 4].
+ labels: A Tensor of size [batch_size, 4].
+ params: A dictionary of parameters. Expecting 'use_logging', 'batch_size'.
+
+ Returns:
+ A Tensor of size [batch_size], denoting the error between the quaternions.
+ """
+ use_logging = params['use_logging']
+ assertions = []
+ if use_logging:
+ assertions.append(
+ tf.Assert(
+ tf.reduce_all(
+ tf.less(
+ tf.abs(tf.reduce_sum(tf.square(predictions), [1]) - 1),
+ 1e-4)),
+ ['The l2 norm of each prediction quaternion vector should be 1.']))
+ assertions.append(
+ tf.Assert(
+ tf.reduce_all(
+ tf.less(
+ tf.abs(tf.reduce_sum(tf.square(labels), [1]) - 1), 1e-4)),
+ ['The l2 norm of each label quaternion vector should be 1.']))
+
+ with tf.control_dependencies(assertions):
+ product = tf.multiply(predictions, labels)
+ internal_dot_products = tf.reduce_sum(product, [1])
+
+ if use_logging:
+ internal_dot_products = tf.Print(internal_dot_products, [
+ internal_dot_products,
+ tf.shape(internal_dot_products)
+ ], 'internal_dot_products:')
+
+ logcost = tf.log(1e-4 + 1 - tf.abs(internal_dot_products))
+ return logcost
+
+
+def log_quaternion_loss(predictions, labels, params):
+ """A helper function to compute the mean error between batches of quaternions.
+
+ The caller is expected to add the loss to the graph.
+
+ Args:
+ predictions: A Tensor of size [batch_size, 4].
+ labels: A Tensor of size [batch_size, 4].
+ params: A dictionary of parameters. Expecting 'use_logging', 'batch_size'.
+
+ Returns:
+ A Tensor of size 1, denoting the mean error between batches of quaternions.
+ """
+ use_logging = params['use_logging']
+ logcost = log_quaternion_loss_batch(predictions, labels, params)
+ logcost = tf.reduce_sum(logcost, [0])
+ batch_size = params['batch_size']
+ logcost = tf.multiply(logcost, 1.0 / batch_size, name='log_quaternion_loss')
+ if use_logging:
+ logcost = tf.Print(
+ logcost, [logcost], '[logcost]', name='log_quaternion_loss_print')
+ return logcost
+
+def _quaternion_loss(labels, predictions, weight, batch_size, domain,
+ add_summaries):
+ """Creates a Quaternion Loss.
+
+ Args:
+ labels: The true quaternions.
+ predictions: The predicted quaternions.
+ weight: A scalar weight.
+ batch_size: The size of the batches.
+ domain: The name of the domain from which the labels were taken.
+ add_summaries: Whether or not to add summaries for the losses.
+
+ Returns:
+ A `Tensor` representing the loss.
+ """
+ assert domain in ['Source', 'Transferred']
+
+ params = {'use_logging': False, 'batch_size': batch_size}
+ loss = weight * log_quaternion_loss(labels, predictions, params)
+
+ if add_summaries:
+ assert_op = tf.Assert(tf.is_finite(loss), [loss])
+ with tf.control_dependencies([assert_op]):
+ tf.summary.histogram(
+ 'Log_Quaternion_Loss_%s' % domain, loss, collections=['losses'])
+ tf.summary.scalar(
+ 'Task_Quaternion_Loss_%s' % domain, loss, collections=['losses'])
+
+ return loss
+
+
+def _add_task_specific_losses(end_points, source_labels, num_classes, hparams,
+ add_summaries=False):
+ """Adds losses related to the task-classifier.
+
+ Args:
+ end_points: A map of network end point names to `Tensors`.
+ source_labels: A dictionary of output labels to `Tensors`.
+ num_classes: The number of classes used by the classifier.
+ hparams: The hyperparameters struct.
+ add_summaries: Whether or not to add the summaries.
+
+ Returns:
+ loss: A `Tensor` representing the total task-classifier loss.
+ """
+ # TODO(ddohan): Make sure the l2 regularization is added to the loss
+
+ one_hot_labels = slim.one_hot_encoding(source_labels['class'], num_classes)
+ total_loss = 0
+
+ if 'source_task_logits' in end_points:
+ loss = tf.losses.softmax_cross_entropy(
+ onehot_labels=one_hot_labels,
+ logits=end_points['source_task_logits'],
+ weights=hparams.source_task_loss_weight)
+ if add_summaries:
+ tf.summary.scalar('Task_Classifier_Loss_Source', loss)
+ total_loss += loss
+
+ if 'transferred_task_logits' in end_points:
+ loss = tf.losses.softmax_cross_entropy(
+ onehot_labels=one_hot_labels,
+ logits=end_points['transferred_task_logits'],
+ weights=hparams.transferred_task_loss_weight)
+ if add_summaries:
+ tf.summary.scalar('Task_Classifier_Loss_Transferred', loss)
+ total_loss += loss
+
+ #########################
+ # Pose specific losses. #
+ #########################
+ if 'quaternion' in source_labels:
+ total_loss += _quaternion_loss(
+ source_labels['quaternion'],
+ end_points['source_quaternion'],
+ hparams.source_pose_weight,
+ hparams.batch_size,
+ 'Source',
+ add_summaries)
+
+ total_loss += _quaternion_loss(
+ source_labels['quaternion'],
+ end_points['transferred_quaternion'],
+ hparams.transferred_pose_weight,
+ hparams.batch_size,
+ 'Transferred',
+ add_summaries)
+
+ if add_summaries:
+ tf.summary.scalar('Task_Loss_Total', total_loss)
+
+ return total_loss
+
+
+def _transferred_similarity_loss(reconstructions,
+ source_images,
+ weight=1.0,
+ method='mse',
+ max_diff=0.4,
+ name='similarity'):
+ """Computes a loss encouraging similarity between source and transferred.
+
+ Args:
+ reconstructions: A `Tensor` of shape [batch_size, height, width, channels]
+ source_images: A `Tensor` of shape [batch_size, height, width, channels].
+ weight: Multiply the similarity loss by this weight before returning.
+ method: One of:
+ mpse = Mean Pairwise Squared Error
+ mse = Mean Squared Error
+ hinged_mse = Computes the mean squared error using squared differences
+ greater than hparams.transferred_similarity_max_diff
+ hinged_mae = Computes the mean absolute error using absolute
+ differences greater than hparams.transferred_similarity_max_diff.
+ max_diff: Maximum unpenalized difference for hinged losses
+ name: Identifying name to use for creating summaries
+
+
+ Returns:
+ A `Tensor` representing the transferred similarity loss.
+
+ Raises:
+ ValueError: if `method` is not recognized.
+ """
+ if weight == 0:
+ return 0
+
+ source_channels = source_images.shape.as_list()[-1]
+ reconstruction_channels = reconstructions.shape.as_list()[-1]
+
+ # Convert grayscale source to RGB if target is RGB
+ if source_channels == 1 and reconstruction_channels != 1:
+ source_images = tf.tile(source_images, [1, 1, 1, reconstruction_channels])
+ if reconstruction_channels == 1 and source_channels != 1:
+ reconstructions = tf.tile(reconstructions, [1, 1, 1, source_channels])
+
+ if method == 'mpse':
+ reconstruction_similarity_loss_fn = (
+ tf.contrib.losses.mean_pairwise_squared_error)
+ elif method == 'masked_mpse':
+
+ def masked_mpse(predictions, labels, weight):
+ """Masked mpse assuming we have a depth to create a mask from."""
+ assert labels.shape.as_list()[-1] == 4
+ mask = tf.to_float(tf.less(labels[:, :, :, 3:4], 0.99))
+ mask = tf.tile(mask, [1, 1, 1, 4])
+ predictions *= mask
+ labels *= mask
+ tf.summary.image('masked_pred', predictions)
+ tf.summary.image('masked_label', labels)
+ return tf.contrib.losses.mean_pairwise_squared_error(
+ predictions, labels, weight)
+
+ reconstruction_similarity_loss_fn = masked_mpse
+ elif method == 'mse':
+ reconstruction_similarity_loss_fn = tf.contrib.losses.mean_squared_error
+ elif method == 'hinged_mse':
+
+ def hinged_mse(predictions, labels, weight):
+ diffs = tf.square(predictions - labels)
+ diffs = tf.maximum(0.0, diffs - max_diff)
+ return tf.reduce_mean(diffs) * weight
+
+ reconstruction_similarity_loss_fn = hinged_mse
+ elif method == 'hinged_mae':
+
+ def hinged_mae(predictions, labels, weight):
+ diffs = tf.abs(predictions - labels)
+ diffs = tf.maximum(0.0, diffs - max_diff)
+ return tf.reduce_mean(diffs) * weight
+
+ reconstruction_similarity_loss_fn = hinged_mae
+ else:
+ raise ValueError('Unknown reconstruction loss %s' % method)
+
+ reconstruction_similarity_loss = reconstruction_similarity_loss_fn(
+ reconstructions, source_images, weight)
+
+ name = '%s_Similarity_(%s)' % (name, method)
+ tf.summary.scalar(name, reconstruction_similarity_loss)
+ return reconstruction_similarity_loss
+
+
+def g_step_loss(source_images, source_labels, end_points, hparams, num_classes):
+ """Configures the loss function which runs during the g-step.
+
+ Args:
+ source_images: A `Tensor` of shape [batch_size, height, width, channels].
+ source_labels: A dictionary of `Tensors` of shape [batch_size]. Valid keys
+ are 'class' and 'quaternion'.
+ end_points: A map of the network end points.
+ hparams: The hyperparameters struct.
+ num_classes: Number of classes for classifier loss
+
+ Returns:
+ A `Tensor` representing a loss function.
+
+ Raises:
+ ValueError: if hparams.transferred_similarity_loss_weight is non-zero but
+ hparams.transferred_similarity_loss is invalid.
+ """
+ generator_loss = 0
+
+ ################################################################
+ # Adds a loss which encourages the discriminator probabilities #
+ # to be high (near one).
+ ################################################################
+
+ # As per the GAN paper, maximize the log probs, instead of minimizing
+ # log(1-probs). Since we're minimizing, we'll minimize -log(probs) which is
+ # the same thing.
+ style_transfer_loss = tf.losses.sigmoid_cross_entropy(
+ logits=end_points['transferred_domain_logits'],
+ multi_class_labels=tf.ones_like(end_points['transferred_domain_logits']),
+ weights=hparams.style_transfer_loss_weight)
+ tf.summary.scalar('Style_transfer_loss', style_transfer_loss)
+ generator_loss += style_transfer_loss
+
+ # Optimizes the style transfer network to produce transferred images similar
+ # to the source images.
+ generator_loss += _transferred_similarity_loss(
+ end_points['transferred_images'],
+ source_images,
+ weight=hparams.transferred_similarity_loss_weight,
+ method=hparams.transferred_similarity_loss,
+ name='transferred_similarity')
+
+ # Optimizes the style transfer network to maximize classification accuracy.
+ if source_labels is not None and hparams.task_tower_in_g_step:
+ generator_loss += _add_task_specific_losses(
+ end_points, source_labels, num_classes,
+ hparams) * hparams.task_loss_in_g_weight
+
+ return generator_loss
+
+
+def d_step_loss(end_points, source_labels, num_classes, hparams):
+ """Configures the losses during the D-Step.
+
+ Note that during the D-step, the model optimizes both the domain (binary)
+ classifier and the task classifier.
+
+ Args:
+ end_points: A map of the network end points.
+ source_labels: A dictionary of output labels to `Tensors`.
+ num_classes: The number of classes used by the classifier.
+ hparams: The hyperparameters struct.
+
+ Returns:
+ A `Tensor` representing the value of the D-step loss.
+ """
+ domain_classifier_loss = add_domain_classifier_losses(end_points, hparams)
+
+ task_classifier_loss = 0
+ if source_labels is not None:
+ task_classifier_loss = _add_task_specific_losses(
+ end_points, source_labels, num_classes, hparams, add_summaries=True)
+
+ return domain_classifier_loss + task_classifier_loss
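+
+# A minimal sketch (not part of this change) of how the two losses above are
+# meant to be alternated: d_step_loss drives variables under the
+# 'discriminator' scope and g_step_loss drives variables under the 'generator'
+# scope (the scopes used in pixelda_model.py), with
+# hparams.discriminator_steps / hparams.generator_steps controlling how many
+# consecutive updates each side receives.
+#
+#   d_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'discriminator')
+#   g_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'generator')
+#   d_train_op = slim.learning.create_train_op(
+#       d_step_loss(end_points, source_labels, num_classes, hparams),
+#       optimizer, variables_to_train=d_vars)
+#   g_train_op = slim.learning.create_train_op(
+#       g_step_loss(source_images, source_labels, end_points, hparams,
+#                   num_classes),
+#       optimizer, variables_to_train=g_vars)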
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/pixelda_model.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/pixelda_model.py"
new file mode 100644
index 00000000..16b550a6
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/pixelda_model.py"
@@ -0,0 +1,713 @@
+# Copyright 2017 Google Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Contains the Domain Adaptation via Style Transfer (PixelDA) model components.
+
+A number of details in the implementation reference the following work:
+
+- "Unsupervised Representation Learning with Deep Convolutional
+ Generative Adversarial Networks"
+ https://arxiv.org/abs/1511.06434
+
+This paper makes several architecture recommendations:
+1. Use strided convs in the discriminator and fractional-strided convs in the
+ generator.
+2. Use batch norm everywhere.
+3. Remove fully connected layers for deep models.
+4. Use ReLU for all layers in the generator, except tanh on the output.
+5. Use LeakyReLU for everything in the discriminator.
+"""
+import functools
+import math
+
+# Dependency imports
+import numpy as np
+
+import tensorflow as tf
+
+slim = tf.contrib.slim
+
+from domain_adaptation.pixel_domain_adaptation import pixelda_task_towers
+
+
+def create_model(hparams,
+ target_images,
+ source_images=None,
+ source_labels=None,
+ is_training=False,
+ noise=None,
+ num_classes=None):
+ """Create a GAN model.
+
+ Arguments:
+ hparams: HParam object specifying model params
+ target_images: A `Tensor` of size [batch_size, height, width, channels]. It
+ is assumed that the images are [-1, 1] normalized.
+ source_images: A `Tensor` of size [batch_size, height, width, channels]. It
+ is assumed that the images are [-1, 1] normalized.
+ source_labels: A `Tensor` of size [batch_size] of categorical labels between
+ [0, num_classes]
+ is_training: whether model is currently training
+ noise: If None, model generates its own noise. Otherwise use provided.
+ num_classes: Number of classes for classification
+
+ Returns:
+ end_points dict with model outputs
+
+ Raises:
+ ValueError: unknown hparams.arch setting
+ """
+ if num_classes is None and hparams.arch in ['resnet', 'simple']:
+ raise ValueError('Num classes must be provided to create task classifier')
+
+ if target_images.dtype != tf.float32:
+ raise ValueError('target_images must be tf.float32 and [-1, 1] normalized.')
+ if source_images is not None and source_images.dtype != tf.float32:
+ raise ValueError('source_images must be tf.float32 and [-1, 1] normalized.')
+
+ ###########################
+ # Create latent variables #
+ ###########################
+ latent_vars = dict()
+
+ if hparams.noise_channel:
+ noise_shape = [hparams.batch_size, hparams.noise_dims]
+ if noise is not None:
+ assert noise.shape.as_list() == noise_shape
+ tf.logging.info('Using provided noise')
+ else:
+ tf.logging.info('Using random noise')
+ noise = tf.random_uniform(
+ shape=noise_shape,
+ minval=-1,
+ maxval=1,
+ dtype=tf.float32,
+ name='random_noise')
+ latent_vars['noise'] = noise
+
+ ####################
+ # Create generator #
+ ####################
+
+ with slim.arg_scope(
+ [slim.conv2d, slim.conv2d_transpose, slim.fully_connected],
+ normalizer_params=batch_norm_params(is_training,
+ hparams.batch_norm_decay),
+ weights_initializer=tf.random_normal_initializer(
+ stddev=hparams.normal_init_std),
+ weights_regularizer=tf.contrib.layers.l2_regularizer(
+ hparams.weight_decay)):
+ with slim.arg_scope([slim.conv2d], padding='SAME'):
+ if hparams.arch == 'dcgan':
+ end_points = dcgan(
+ target_images, latent_vars, hparams, scope='generator')
+ elif hparams.arch == 'resnet':
+ end_points = resnet_generator(
+ source_images,
+ target_images.shape.as_list()[1:4],
+ hparams=hparams,
+ latent_vars=latent_vars)
+ elif hparams.arch == 'residual_interpretation':
+ end_points = residual_interpretation_generator(
+ source_images, is_training=is_training, hparams=hparams)
+ elif hparams.arch == 'simple':
+ end_points = simple_generator(
+ source_images,
+ target_images,
+ is_training=is_training,
+ hparams=hparams,
+ latent_vars=latent_vars)
+ elif hparams.arch == 'identity':
+ # Pass through unmodified, besides changing # channels
+ # Used to calculate baseline numbers
+ # Also set `generator_steps=0` for baseline
+ if hparams.generator_steps:
+ raise ValueError('Must set generator_steps=0 for identity arch, got %s'
+ % hparams.generator_steps)
+ transferred_images = source_images
+ source_channels = source_images.shape.as_list()[-1]
+ target_channels = target_images.shape.as_list()[-1]
+ if source_channels == 1 and target_channels == 3:
+ transferred_images = tf.tile(source_images, [1, 1, 1, 3])
+ if source_channels == 3 and target_channels == 1:
+ transferred_images = tf.image.rgb_to_grayscale(source_images)
+ end_points = {'transferred_images': transferred_images}
+ else:
+ raise ValueError('Unknown architecture: %s' % hparams.arch)
+
+ #####################
+ # Domain Classifier #
+ #####################
+ if hparams.arch in [
+ 'dcgan', 'resnet', 'residual_interpretation', 'simple', 'identity',
+ ]:
+
+ # Add a discriminator for these architectures
+ end_points['transferred_domain_logits'] = predict_domain(
+ end_points['transferred_images'],
+ hparams,
+ is_training=is_training,
+ reuse=False)
+ end_points['target_domain_logits'] = predict_domain(
+ target_images,
+ hparams,
+ is_training=is_training,
+ reuse=True)
+
+ ###################
+ # Task Classifier #
+ ###################
+ if hparams.task_tower != 'none' and hparams.arch in [
+ 'resnet', 'residual_interpretation', 'simple', 'identity',
+ ]:
+ with tf.variable_scope('discriminator'):
+ with tf.variable_scope('task_tower'):
+ end_points['source_task_logits'], end_points[
+ 'source_quaternion'] = pixelda_task_towers.add_task_specific_model(
+ source_images,
+ hparams,
+ num_classes=num_classes,
+ is_training=is_training,
+ reuse_private=False,
+ private_scope='source_task_classifier',
+ reuse_shared=False)
+ end_points['transferred_task_logits'], end_points[
+ 'transferred_quaternion'] = (
+ pixelda_task_towers.add_task_specific_model(
+ end_points['transferred_images'],
+ hparams,
+ num_classes=num_classes,
+ is_training=is_training,
+ reuse_private=False,
+ private_scope='transferred_task_classifier',
+ reuse_shared=True))
+ end_points['target_task_logits'], end_points[
+ 'target_quaternion'] = pixelda_task_towers.add_task_specific_model(
+ target_images,
+ hparams,
+ num_classes=num_classes,
+ is_training=is_training,
+ reuse_private=True,
+ private_scope='transferred_task_classifier',
+ reuse_shared=True)
+ # Remove any endpoints with None values
+ return dict((k, v) for k, v in end_points.items() if v is not None)
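+
+ # Example usage (a sketch, not from this change), assuming 32x32 source and
+ # target batches already normalized to [-1, 1]:
+ #
+ #   hparams = create_hparams('arch=resnet,batch_size=32')
+ #   end_points = create_model(hparams, target_images,
+ #                             source_images=source_images,
+ #                             source_labels=source_labels,
+ #                             is_training=True, num_classes=10)
+ #   transferred = end_points['transferred_images']
+ #   domain_logits = end_points['transferred_domain_logits']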
+
+
+def batch_norm_params(is_training, batch_norm_decay):
+ return {
+ 'is_training': is_training,
+ # Decay for the moving averages.
+ 'decay': batch_norm_decay,
+ # epsilon to prevent 0s in variance.
+ 'epsilon': 0.001,
+ }
+
+
+def lrelu(x, leakiness=0.2):
+ """Relu, with optional leaky support."""
+ return tf.where(tf.less(x, 0.0), leakiness * x, x, name='leaky_relu')
+
+
+def upsample(net, num_filters, scale=2, method='resize_conv', scope=None):
+ """Performs spatial upsampling of the given features.
+
+ Args:
+ net: A `Tensor` of shape [batch_size, height, width, filters].
+ num_filters: The number of output filters.
+ scale: The scale of the upsampling. Must be an integer greater than or
+ equal to two.
+ method: The method by which the features are upsampled. Valid options
+ include 'resize_conv' and 'conv2d_transpose'.
+ scope: An optional variable scope.
+
+ Returns:
+ A new set of features of shape
+ [batch_size, height*scale, width*scale, num_filters].
+
+ Raises:
+ ValueError: if `method` is not valid or `scale` is less than two.
+ """
+ if scale < 2:
+ raise ValueError('scale must be greater than or equal to two.')
+
+ with tf.variable_scope(scope, 'upsample', [net]):
+ if method == 'resize_conv':
+ net = tf.image.resize_nearest_neighbor(
+ net, [net.shape.as_list()[1] * scale,
+ net.shape.as_list()[2] * scale],
+ align_corners=True,
+ name='resize')
+ return slim.conv2d(net, num_filters, stride=1, scope='conv')
+ elif method == 'conv2d_transpose':
+ return slim.conv2d_transpose(net, num_filters, scope='deconv')
+ else:
+ raise ValueError('Upsample method [%s] was not recognized.' % method)
+
+
+def project_latent_vars(hparams, proj_shape, latent_vars, combine_method='sum'):
+ """Generate noise and project to input volume size.
+
+ Args:
+ hparams: The hyperparameter HParams struct.
+ proj_shape: Shape to project noise (not including batch size).
+ latent_vars: dictionary of `'key': Tensor of shape [batch_size, N]`
+ combine_method: How to combine the projected values.
+ sum = project to volume then sum
+ concat = concatenate along last dimension (i.e. channel)
+
+ Returns:
+ If combine_method=sum, a `Tensor` of size `hparams.projection_shape`
+ If combine_method=concat and there are N latent vars, a `Tensor` of size
+ `hparams.projection_shape`, with the last channel multiplied by N
+
+
+ Raises:
+ ValueError: combine_method is not one of sum/concat
+ """
+ values = []
+ for var in latent_vars:
+ with tf.variable_scope(var):
+ # Project & reshape noise to a HxWxC input
+ projected = slim.fully_connected(
+ latent_vars[var],
+ np.prod(proj_shape),
+ activation_fn=tf.nn.relu,
+ normalizer_fn=slim.batch_norm)
+ values.append(tf.reshape(projected, [hparams.batch_size] + proj_shape))
+
+ if combine_method == 'sum':
+ result = values[0]
+ for value in values[1:]:
+ result += value
+ elif combine_method == 'concat':
+ # Concatenate along last axis
+ result = tf.concat(values, len(proj_shape))
+ else:
+ raise ValueError('Unknown combine_method %s' % combine_method)
+
+ tf.logging.info('Latent variables projected to size %s volume', result.shape)
+
+ return result
+
+
+def resnet_block(net, hparams):
+ """Create a resnet block."""
+ net_in = net
+ net = slim.conv2d(
+ net,
+ hparams.resnet_filters,
+ stride=1,
+ normalizer_fn=slim.batch_norm,
+ activation_fn=tf.nn.relu)
+ net = slim.conv2d(
+ net,
+ hparams.resnet_filters,
+ stride=1,
+ normalizer_fn=slim.batch_norm,
+ activation_fn=None)
+ if hparams.resnet_residuals:
+ net += net_in
+ return net
+
+
+def resnet_stack(images, output_shape, hparams, scope=None):
+ """Create a resnet style transfer block.
+
+ Args:
+ images: [batch-size, height, width, channels] image tensor to feed as input
+ output_shape: output image shape in form [height, width, channels]
+ hparams: hparams objects
+ scope: Variable scope
+
+ Returns:
+ Images after processing with resnet blocks.
+ """
+ end_points = {}
+ if hparams.noise_channel:
+ # separate the noise for visualization
+ end_points['noise'] = images[:, :, :, -1]
+ assert images.shape.as_list()[1:3] == output_shape[0:2]
+
+ with tf.variable_scope(scope, 'resnet_style_transfer', [images]):
+ with slim.arg_scope(
+ [slim.conv2d],
+ normalizer_fn=slim.batch_norm,
+ kernel_size=[hparams.generator_kernel_size] * 2,
+ stride=1):
+ net = slim.conv2d(
+ images,
+ hparams.resnet_filters,
+ normalizer_fn=None,
+ activation_fn=tf.nn.relu)
+ for block in range(hparams.resnet_blocks):
+ net = resnet_block(net, hparams)
+ end_points['resnet_block_{}'.format(block)] = net
+
+ net = slim.conv2d(
+ net,
+ output_shape[-1],
+ kernel_size=[1, 1],
+ normalizer_fn=None,
+ activation_fn=tf.nn.tanh,
+ scope='conv_out')
+ end_points['transferred_images'] = net
+ return net, end_points
+
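+# Illustrative sketch for `resnet_stack` (hypothetical shapes; assumes
+# hparams.noise_channel is False and the resnet_* hparams are set):
+#   images = tf.zeros([8, 32, 32, 3])
+#   out, end_points = resnet_stack(images, output_shape=[32, 32, 3],
+#                                  hparams=hparams)
+#   # `out` keeps the input resolution, [8, 32, 32, 3], squashed to [-1, 1]
+#   # by the final tanh; end_points records each resnet block's activations.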
+
+def predict_domain(images,
+ hparams,
+ is_training=False,
+ reuse=False,
+ scope='discriminator'):
+ """Creates a discriminator for a GAN.
+
+ Args:
+ images: A `Tensor` of size [batch_size, height, width, channels]. It is
+ assumed that the images are centered between -1 and 1.
+ hparams: hparam object with params for discriminator
+ is_training: Specifies whether or not we're training or testing.
+ reuse: Whether to reuse variable scope
+ scope: An optional variable_scope.
+
+ Returns:
+ [batch size, 1] - logit output of discriminator.
+ """
+ with tf.variable_scope(scope, 'discriminator', [images], reuse=reuse):
+ lrelu_partial = functools.partial(lrelu, leakiness=hparams.lrelu_leakiness)
+ with slim.arg_scope(
+ [slim.conv2d],
+ kernel_size=[hparams.discriminator_kernel_size] * 2,
+ activation_fn=lrelu_partial,
+ stride=2,
+ normalizer_fn=slim.batch_norm):
+
+ def add_noise(hidden, scope_num=None):
+ if scope_num:
+ hidden = slim.dropout(
+ hidden,
+ hparams.discriminator_dropout_keep_prob,
+ is_training=is_training,
+ scope='dropout_%s' % scope_num)
+ if hparams.discriminator_noise_stddev == 0:
+ return hidden
+ return hidden + tf.random_normal(
+ hidden.shape.as_list(),
+ mean=0.0,
+ stddev=hparams.discriminator_noise_stddev)
+
+ # As per the recommendation of the DCGAN paper, we don't use batch norm
+ # on the discriminator input (https://arxiv.org/pdf/1511.06434v2.pdf).
+ if hparams.discriminator_image_noise:
+ images = add_noise(images)
+ net = slim.conv2d(
+ images,
+ hparams.num_discriminator_filters,
+ normalizer_fn=None,
+ stride=hparams.discriminator_first_stride,
+ scope='conv1_stride%s' % hparams.discriminator_first_stride)
+ net = add_noise(net, 1)
+
+ block_id = 2
+ # Repeatedly stack
+ # discriminator_conv_block_size-1 conv layers with stride 1
+ # followed by a stride 2 layer
+ # Add (optional) noise at every point
+ while net.shape.as_list()[1] > hparams.projection_shape_size:
+ num_filters = int(hparams.num_discriminator_filters *
+ (hparams.discriminator_filter_factor**(block_id - 1)))
+ for conv_id in range(1, hparams.discriminator_conv_block_size):
+ net = slim.conv2d(
+ net,
+ num_filters,
+ stride=1,
+ scope='conv_%s_%s' % (block_id, conv_id))
+ if hparams.discriminator_do_pooling:
+ net = slim.conv2d(
+ net, num_filters, scope='conv_%s_prepool' % block_id)
+ net = slim.avg_pool2d(
+ net, kernel_size=[2, 2], stride=2, scope='pool_%s' % block_id)
+ else:
+ net = slim.conv2d(
+ net, num_filters, scope='conv_%s_stride2' % block_id)
+ net = add_noise(net, block_id)
+ block_id += 1
+ net = slim.flatten(net)
+ net = slim.fully_connected(
+ net,
+ 1,
+ # Models with BN here generally produce noise
+ normalizer_fn=None,
+ activation_fn=None,
+ scope='fc_logit_out') # Returns logits!
+ return net
+
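+# Illustrative sketch for `predict_domain` (hypothetical shapes): the
+# discriminator halves the spatial resolution until it reaches
+# hparams.projection_shape_size, then flattens to a single logit per image.
+#   images = tf.zeros([8, 32, 32, 3])  # assumed centered in [-1, 1]
+#   logits = predict_domain(images, hparams, is_training=True)
+#   # `logits` has shape [8, 1]; apply tf.sigmoid to get domain probabilities.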
+
+def dcgan_generator(images, output_shape, hparams, scope=None):
+ """Transforms the visual style of the input images.
+
+ Args:
+ images: A `Tensor` of shape [batch_size, height, width, channels].
+ output_shape: A list or tuple of 3 elements: the output height, width and
+ number of channels.
+ hparams: hparams object with generator parameters
+ scope: Scope to place generator inside
+
+ Returns:
+ A `Tensor` of shape [batch_size, height, width, output_channels] which
+ represents the result of style transfer.
+
+ Raises:
+ ValueError: If `output_shape` is not a list or tuple or if it doesn't have
+ three elements or if `output_shape` or `images` aren't square.
+ """
+ if not isinstance(output_shape, (tuple, list)):
+ raise ValueError('output_shape must be a tuple or list.')
+ elif len(output_shape) != 3:
+ raise ValueError('output_shape must have three elements.')
+
+ if output_shape[0] != output_shape[1]:
+ raise ValueError('output_shape must be square')
+ if images.shape.as_list()[1] != images.shape.as_list()[2]:
+ raise ValueError('images height and width must match.')
+
+ outdim = output_shape[0]
+ indim = images.shape.as_list()[1]
+ num_iterations = int(math.ceil(math.log(float(outdim) / float(indim), 2.0)))
+
+ with slim.arg_scope(
+ [slim.conv2d, slim.conv2d_transpose],
+ kernel_size=[hparams.generator_kernel_size] * 2,
+ stride=2):
+ with tf.variable_scope(scope or 'generator'):
+
+ net = images
+
+ # Repeatedly halve the filter count so that the last deconv layer uses
+ # hparams.num_decoder_filters filters.
+ for i in range(num_iterations):
+ num_filters = hparams.num_decoder_filters * 2**(num_iterations - i - 1)
+ net = slim.conv2d_transpose(net, num_filters, scope='deconv_%s' % i)
+
+ # Crop down to desired size (e.g. 32x32 -> 28x28)
+ dif = net.shape.as_list()[1] - outdim
+ low = dif // 2  # Use integer division so the slice indices stay integers.
+ high = net.shape.as_list()[1] - low
+ net = net[:, low:high, low:high, :]
+
+ # No batch norm on generator output
+ net = slim.conv2d(
+ net,
+ output_shape[2],
+ kernel_size=[1, 1],
+ stride=1,
+ normalizer_fn=None,
+ activation_fn=tf.tanh,
+ scope='conv_out')
+ return net
+
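+# Worked example of the upsampling arithmetic above (hypothetical sizes):
+# for a 7x7 input projection and a 28x28 output, num_iterations =
+# ceil(log2(28 / 7)) = 2, so two stride-2 deconvolutions take 7 -> 14 -> 28
+# before the final 1x1 convolution maps to the requested output channels.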
+
+def dcgan(target_images, latent_vars, hparams, scope='dcgan'):
+ """Creates the PixelDA model.
+
+ Args:
+ target_images: A `Tensor` of shape [batch_size, height, width, 3]
+ sampled from the image domain to which we want to transfer.
+ latent_vars: dictionary of 'key': Tensor of shape [batch_size, N]
+ hparams: The hyperparameter map.
+ scope: Surround generator component with this scope
+
+ Returns:
+ A dictionary of model outputs.
+ """
+ proj_shape = [
+ hparams.projection_shape_size, hparams.projection_shape_size,
+ hparams.projection_shape_channels
+ ]
+ source_volume = project_latent_vars(
+ hparams, proj_shape, latent_vars, combine_method='concat')
+
+ ###################################################
+ # Transfer the source images to the target style. #
+ ###################################################
+ with tf.variable_scope(scope, 'generator', [target_images]):
+ transferred_images = dcgan_generator(
+ source_volume,
+ output_shape=target_images.shape.as_list()[1:4],
+ hparams=hparams)
+ assert transferred_images.shape.as_list() == target_images.shape.as_list()
+
+ return {'transferred_images': transferred_images}
+
+
+def resnet_generator(images, output_shape, hparams, latent_vars=None):
+ """Creates a ResNet-based generator.
+
+ Args:
+ images: A `Tensor` of shape [batch_size, height, width, num_channels]
+ sampled from the image domain from which we want to transfer
+ output_shape: A length-3 array indicating the height, width and channels of
+ the output.
+ hparams: The hyperparameter map.
+ latent_vars: dictionary of 'key': Tensor of shape [batch_size, N]
+
+ Returns:
+ A dictionary of model outputs.
+ """
+ with tf.variable_scope('generator'):
+ if latent_vars:
+ noise_channel = project_latent_vars(
+ hparams,
+ proj_shape=images.shape.as_list()[1:3] + [1],
+ latent_vars=latent_vars,
+ combine_method='concat')
+ images = tf.concat([images, noise_channel], 3)
+
+ transferred_images, end_points = resnet_stack(
+ images,
+ output_shape=output_shape,
+ hparams=hparams,
+ scope='resnet_stack')
+ end_points['transferred_images'] = transferred_images
+
+ return end_points
+
+
+def residual_interpretation_block(images, hparams, scope):
+ """Learns a residual image which is added to the incoming image.
+
+ Args:
+ images: A `Tensor` of size [batch_size, height, width, 3]
+ hparams: The hyperparameters struct.
+ scope: The name of the variable op scope.
+
+ Returns:
+ The updated images.
+ """
+ with tf.variable_scope(scope):
+ with slim.arg_scope(
+ [slim.conv2d],
+ normalizer_fn=None,
+ kernel_size=[hparams.generator_kernel_size] * 2):
+
+ net = images
+ for _ in range(hparams.res_int_convs):
+ net = slim.conv2d(
+ net, hparams.res_int_filters, activation_fn=tf.nn.relu)
+ net = slim.conv2d(net, 3, activation_fn=tf.nn.tanh)
+
+ # Add the residual
+ images += net
+
+ # Clip the output
+ images = tf.maximum(images, -1.0)
+ images = tf.minimum(images, 1.0)
+ return images
+
+
+def residual_interpretation_generator(images,
+ is_training,
+ hparams,
+ latent_vars=None):
+ """Creates a generator producing purely residual transformations.
+
+ A residual generator differs from the resnet generator in that each 'block' of
+ the residual generator produces a residual image. Consequently, the 'progress'
+ of the model generation process can be directly observed at inference time,
+ making it easier to diagnose and understand.
+
+ Args:
+ images: A `Tensor` of shape [batch_size, height, width, num_channels]
+ sampled from the image domain from which we want to transfer. It is
+ assumed that the images are centered between -1 and 1.
+ is_training: whether or not the model is training.
+ hparams: The hyperparameter map.
+ latent_vars: dictionary of 'key': Tensor of shape [batch_size, N]
+
+ Returns:
+ A dictionary of model outputs.
+ """
+ end_points = {}
+
+ with tf.variable_scope('generator'):
+ if latent_vars:
+ projected_latent = project_latent_vars(
+ hparams,
+ proj_shape=images.shape.as_list()[1:3] + [images.shape.as_list()[-1]],
+ latent_vars=latent_vars,
+ combine_method='sum')
+ images += projected_latent
+ with tf.variable_scope(None, 'residual_style_transfer', [images]):
+ for i in range(hparams.res_int_blocks):
+ images = residual_interpretation_block(images, hparams,
+ 'residual_%d' % i)
+ end_points['transferred_images_%d' % i] = images
+
+ end_points['transferred_images'] = images
+
+ return end_points
+
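+# Illustrative note: with a hypothetical hparams.res_int_blocks = 3, the
+# generator exposes end_points 'transferred_images_0' .. 'transferred_images_2',
+# each holding the running image after one residual block, clipped to [-1, 1].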
+
+def simple_generator(source_images, target_images, is_training, hparams,
+ latent_vars):
+ """Simple generator architecture (stack of convs) for trying small models."""
+ end_points = {}
+ with tf.variable_scope('generator'):
+ feed_source_images = source_images
+
+ if latent_vars:
+ projected_latent = project_latent_vars(
+ hparams,
+ proj_shape=source_images.shape.as_list()[1:3] + [1],
+ latent_vars=latent_vars,
+ combine_method='concat')
+ feed_source_images = tf.concat([source_images, projected_latent], 3)
+
+ end_points = {}
+
+ ###################################################
+ # Transfer the source images to the target style. #
+ ###################################################
+ with slim.arg_scope(
+ [slim.conv2d],
+ normalizer_fn=slim.batch_norm,
+ stride=1,
+ kernel_size=[hparams.generator_kernel_size] * 2):
+ net = feed_source_images
+
+ # N convolutions
+ for i in range(hparams.simple_num_conv_layers):
+ # Skip batch norm on the first conv layer only.
+ normalizer_fn = None
+ if i != 0:
+ normalizer_fn = slim.batch_norm
+ net = slim.conv2d(
+ net,
+ hparams.simple_conv_filters,
+ normalizer_fn=normalizer_fn,
+ activation_fn=tf.nn.relu)
+
+ # Project back to right # image channels
+ net = slim.conv2d(
+ net,
+ target_images.shape.as_list()[-1],
+ kernel_size=[1, 1],
+ stride=1,
+ normalizer_fn=None,
+ activation_fn=tf.tanh,
+ scope='conv_out')
+
+ transferred_images = net
+ assert transferred_images.shape.as_list() == target_images.shape.as_list()
+ end_points['transferred_images'] = transferred_images
+
+ return end_points
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/pixelda_preprocess.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/pixelda_preprocess.py"
new file mode 100644
index 00000000..747c17b1
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/pixelda_preprocess.py"
@@ -0,0 +1,129 @@
+# Copyright 2017 Google Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Contains functions for preprocessing the inputs."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# Dependency imports
+
+import tensorflow as tf
+
+
+def preprocess_classification(image, labels, is_training=False):
+ """Preprocesses the image and labels for classification purposes.
+
+ Preprocessing includes shifting the images to be 0-centered between -1 and 1.
+ This is not only a popular method of preprocessing (inception) but is also
+ the mechanism used by DSNs.
+
+ Args:
+ image: A `Tensor` of size [height, width, 3].
+ labels: A dictionary of labels.
+ is_training: Whether or not we're training the model.
+
+ Returns:
+ The preprocessed image and labels.
+ """
+ # If the image is uint8, this will scale it to 0-1.
+ image = tf.image.convert_image_dtype(image, tf.float32)
+ image -= 0.5
+ image *= 2
+
+ return image, labels
+
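+# Worked example of the centering above: a uint8 pixel of 255 is scaled to
+# 1.0 by convert_image_dtype and then mapped to (1.0 - 0.5) * 2 = 1.0, while
+# a pixel of 0 maps to (0.0 - 0.5) * 2 = -1.0.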
+
+def preprocess_style_transfer(image,
+ labels,
+ augment=False,
+ size=None,
+ is_training=False):
+ """Preprocesses the image and labels for style transfer purposes.
+
+ Args:
+ image: A `Tensor` of size [height, width, 3].
+ labels: A dictionary of labels.
+ augment: Whether to apply data augmentation to inputs
+ size: The height and width to which images should be resized. If left as
+ `None`, then no resizing is performed
+ is_training: Whether or not we're training the model
+
+ Returns:
+ The preprocessed image and labels. Scaled to [-1, 1]
+ """
+ # If the image is uint8, this will scale it to 0-1.
+ image = tf.image.convert_image_dtype(image, tf.float32)
+ if augment and is_training:
+ image = image_augmentation(image)
+
+ if size:
+ image = resize_image(image, size)
+
+ image -= 0.5
+ image *= 2
+
+ return image, labels
+
+
+def image_augmentation(image):
+ """Performs data augmentation by randomly permuting the inputs.
+
+ Args:
+ image: A float `Tensor` of size [height, width, channels] with values
+ in range[0,1].
+
+ Returns:
+ The mutated image (same shape as the input).
+ """
+ # Apply photometric data augmentation (contrast etc.)
+ num_channels = image.shape.as_list()[-1]
+ if num_channels == 4:
+ # Only augment image part
+ image, depth = image[:, :, 0:3], image[:, :, 3:4]
+ elif num_channels == 1:
+ image = tf.image.grayscale_to_rgb(image)
+ image = tf.image.random_brightness(image, max_delta=0.1)
+ image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
+ image = tf.image.random_hue(image, max_delta=0.032)
+ image = tf.image.random_contrast(image, lower=0.5, upper=1.5)
+ image = tf.clip_by_value(image, 0, 1.0)
+ if num_channels == 4:
+ image = tf.concat([image, depth], 2)
+ elif num_channels == 1:
+ image = tf.image.rgb_to_grayscale(image)
+ return image
+
+
+def resize_image(image, size=None):
+ """Resize image to target size.
+
+ Args:
+ image: A `Tensor` of size [height, width, 3].
+ size: (height, width) to resize image to.
+
+ Returns:
+ resized image
+ """
+ if size is None:
+ raise ValueError('Must specify size')
+
+ if image.shape.as_list()[:2] == list(size):
+ # Don't resize if not necessary
+ return image
+ image = tf.expand_dims(image, 0)
+ image = tf.image.resize_images(image, size)
+ image = tf.squeeze(image, 0)
+ return image
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/pixelda_preprocess_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/pixelda_preprocess_test.py"
new file mode 100644
index 00000000..73f8c7ff
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/pixelda_preprocess_test.py"
@@ -0,0 +1,69 @@
+# Copyright 2017 Google Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Tests for domain_adaptation.pixel_domain_adaptation.pixelda_preprocess."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# Dependency imports
+
+import tensorflow as tf
+
+from domain_adaptation.pixel_domain_adaptation import pixelda_preprocess
+
+
+class PixelDAPreprocessTest(tf.test.TestCase):
+
+ def assert_preprocess_classification_is_centered(self, dtype, is_training):
+ tf.set_random_seed(0)
+
+ if dtype == tf.uint8:
+ image = tf.random_uniform((100, 200, 3), maxval=255, dtype=tf.int64)
+ image = tf.cast(image, tf.uint8)
+ else:
+ image = tf.random_uniform((100, 200, 3), maxval=1.0, dtype=dtype)
+
+ labels = {}
+ image, labels = pixelda_preprocess.preprocess_classification(
+ image, labels, is_training=is_training)
+
+ with self.test_session() as sess:
+ np_image = sess.run(image)
+
+ self.assertTrue(np_image.min() <= -0.95)
+ self.assertTrue(np_image.min() >= -1.0)
+ self.assertTrue(np_image.max() >= 0.95)
+ self.assertTrue(np_image.max() <= 1.0)
+
+ def testPreprocessClassificationZeroCentersUint8DuringTrain(self):
+ self.assert_preprocess_classification_is_centered(
+ tf.uint8, is_training=True)
+
+ def testPreprocessClassificationZeroCentersUint8DuringTest(self):
+ self.assert_preprocess_classification_is_centered(
+ tf.uint8, is_training=False)
+
+ def testPreprocessClassificationZeroCentersFloatDuringTrain(self):
+ self.assert_preprocess_classification_is_centered(
+ tf.float32, is_training=True)
+
+ def testPreprocessClassificationZeroCentersFloatDuringTest(self):
+ self.assert_preprocess_classification_is_centered(
+ tf.float32, is_training=False)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/pixelda_task_towers.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/pixelda_task_towers.py"
new file mode 100644
index 00000000..1cb42e2d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/pixelda_task_towers.py"
@@ -0,0 +1,317 @@
+# Copyright 2017 Google Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Task towers for PixelDA model."""
+import tensorflow as tf
+
+slim = tf.contrib.slim
+
+
+def add_task_specific_model(images,
+ hparams,
+ num_classes=10,
+ is_training=False,
+ reuse_private=False,
+ private_scope=None,
+ reuse_shared=False,
+ shared_scope=None):
+ """Create a classifier for the given images.
+
+ The classifier is composed of a few 'private' layers followed by a few
+ 'shared' layers. This lets us account for different image 'style', while
+ sharing the last few layers as 'content' layers.
+
+ Args:
+ images: A `Tensor` of size [batch_size, height, width, 3].
+ hparams: model hparams
+ num_classes: The number of output classes.
+ is_training: whether model is training
+ reuse_private: Whether or not to reuse the private weights, which are the
+ first few layers in the classifier
+ private_scope: The name of the variable_scope for the private (unshared)
+ components of the classifier.
+ reuse_shared: Whether or not to reuse the shared weights, which are the last
+ few layers in the classifier
+ shared_scope: The name of the variable_scope for the shared components of
+ the classifier.
+
+ Returns:
+ The logits, a `Tensor` of shape [batch_size, num_classes].
+
+ Raises:
+ ValueError: If hparams.task_classifier is an unknown value
+ """
+
+ model = hparams.task_tower
+ # Make sure the classifier name shows up in graph
+ shared_scope = shared_scope or (model + '_shared')
+ kwargs = {
+ 'num_classes': num_classes,
+ 'is_training': is_training,
+ 'reuse_private': reuse_private,
+ 'reuse_shared': reuse_shared,
+ }
+
+ if private_scope:
+ kwargs['private_scope'] = private_scope
+ if shared_scope:
+ kwargs['shared_scope'] = shared_scope
+
+ quaternion_pred = None
+ with slim.arg_scope(
+ [slim.conv2d, slim.fully_connected],
+ activation_fn=tf.nn.relu,
+ weights_regularizer=tf.contrib.layers.l2_regularizer(
+ hparams.weight_decay_task_classifier)):
+ with slim.arg_scope([slim.conv2d], padding='SAME'):
+ if model == 'doubling_pose_estimator':
+ logits, quaternion_pred = doubling_cnn_class_and_quaternion(
+ images, num_private_layers=hparams.num_private_layers, **kwargs)
+ elif model == 'mnist':
+ logits, _ = mnist_classifier(images, **kwargs)
+ elif model == 'svhn':
+ logits, _ = svhn_classifier(images, **kwargs)
+ elif model == 'gtsrb':
+ logits, _ = gtsrb_classifier(images, **kwargs)
+ elif model == 'pose_mini':
+ logits, quaternion_pred = pose_mini_tower(images, **kwargs)
+ else:
+ raise ValueError('Unknown task classifier %s' % model)
+
+ return logits, quaternion_pred
+
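+# Illustrative usage sketch (scope names are hypothetical): the private tower
+# is built separately per image "style" while the shared tower is reused.
+#   src_logits, _ = add_task_specific_model(
+#       source_images, hparams, num_classes=10, is_training=True,
+#       reuse_private=False, private_scope='source_private',
+#       reuse_shared=False, shared_scope='shared')
+#   gen_logits, _ = add_task_specific_model(
+#       transferred_images, hparams, num_classes=10, is_training=True,
+#       reuse_private=False, private_scope='transferred_private',
+#       reuse_shared=True, shared_scope='shared')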
+
+#####################################
+# Classifiers used in the DSN paper #
+#####################################
+
+
+def mnist_classifier(images,
+ is_training=False,
+ num_classes=10,
+ reuse_private=False,
+ private_scope='mnist',
+ reuse_shared=False,
+ shared_scope='task_model'):
+ """Creates the convolutional MNIST model from the gradient reversal paper.
+
+ Note that since the output is a set of 'logits', the values fall in the
+ interval of (-infinity, infinity). Consequently, to convert the outputs to a
+ probability distribution over the characters, one will need to convert them
+ using the softmax function:
+ logits, endpoints = mnist_classifier(images, is_training=False)
+ predictions = tf.nn.softmax(logits)
+
+ Args:
+ images: the MNIST digits, a tensor of size [batch_size, 28, 28, 1].
+ is_training: specifies whether or not we're currently training the model.
+ This variable will determine the behaviour of the dropout layer.
+ num_classes: the number of output classes to use.
+
+ Returns:
+ the output logits, a tensor of size [batch_size, num_classes].
+ a dictionary with key/values the layer names and tensors.
+ """
+
+ net = {}
+
+ with tf.variable_scope(private_scope, reuse=reuse_private):
+ net['conv1'] = slim.conv2d(images, 32, [5, 5], scope='conv1')
+ net['pool1'] = slim.max_pool2d(net['conv1'], [2, 2], 2, scope='pool1')
+
+ with tf.variable_scope(shared_scope, reuse=reuse_shared):
+ net['conv2'] = slim.conv2d(net['pool1'], 48, [5, 5], scope='conv2')
+ net['pool2'] = slim.max_pool2d(net['conv2'], [2, 2], 2, scope='pool2')
+ net['fc3'] = slim.fully_connected(
+ slim.flatten(net['pool2']), 100, scope='fc3')
+ net['fc4'] = slim.fully_connected(
+ slim.flatten(net['fc3']), 100, scope='fc4')
+ logits = slim.fully_connected(
+ net['fc4'], num_classes, activation_fn=None, scope='fc5')
+ return logits, net
+
+
+def svhn_classifier(images,
+ is_training=False,
+ num_classes=10,
+ reuse_private=False,
+ private_scope=None,
+ reuse_shared=False,
+ shared_scope='task_model'):
+ """Creates the convolutional SVHN model from the gradient reversal paper.
+
+ Note that since the output is a set of 'logits', the values fall in the
+ interval of (-infinity, infinity). Consequently, to convert the outputs to a
+ probability distribution over the characters, one will need to convert them
+ using the softmax function:
+ logits, _ = svhn_classifier(images, is_training=False)
+ predictions = tf.nn.softmax(logits)
+
+ Args:
+ images: the SVHN digits, a tensor of size [batch_size, 40, 40, 3].
+ is_training: specifies whether or not we're currently training the model.
+ This variable will determine the behaviour of the dropout layer.
+ num_classes: the number of output classes to use.
+ reuse_private: Whether or not to reuse the private components of the model.
+ private_scope: The name of the private variable scope.
+ reuse_shared: Whether or not to reuse the shared components of the model.
+ shared_scope: The name of the shared variable scope.
+
+ Returns:
+ the output logits, a tensor of size [batch_size, num_classes].
+ a dictionary with key/values the layer names and tensors.
+ """
+
+ net = {}
+
+ with tf.variable_scope(private_scope, reuse=reuse_private):
+ net['conv1'] = slim.conv2d(images, 64, [5, 5], scope='conv1')
+ net['pool1'] = slim.max_pool2d(net['conv1'], [3, 3], 2, scope='pool1')
+
+ with tf.variable_scope(shared_scope, reuse=reuse_shared):
+ net['conv2'] = slim.conv2d(net['pool1'], 64, [5, 5], scope='conv2')
+ net['pool2'] = slim.max_pool2d(net['conv2'], [3, 3], 2, scope='pool2')
+ net['conv3'] = slim.conv2d(net['pool2'], 128, [5, 5], scope='conv3')
+
+ net['fc3'] = slim.fully_connected(
+ slim.flatten(net['conv3']), 3072, scope='fc3')
+ net['fc4'] = slim.fully_connected(
+ slim.flatten(net['fc3']), 2048, scope='fc4')
+
+ logits = slim.fully_connected(
+ net['fc4'], num_classes, activation_fn=None, scope='fc5')
+
+ return logits, net
+
+
+def gtsrb_classifier(images,
+ is_training=False,
+ num_classes=43,
+ reuse_private=False,
+ private_scope='gtsrb',
+ reuse_shared=False,
+ shared_scope='task_model'):
+ """Creates the convolutional GTSRB model from the gradient reversal paper.
+
+ Note that since the output is a set of 'logits', the values fall in the
+ interval of (-infinity, infinity). Consequently, to convert the outputs to a
+ probability distribution over the characters, one will need to convert them
+ using the softmax function:
+ logits, _ = gtsrb_classifier(images, is_training=False)
+ predictions = tf.nn.softmax(logits)
+
+ Args:
+ images: the GTSRB traffic sign images, a tensor of size [batch_size, 40, 40, 3].
+ is_training: specifies whether or not we're currently training the model.
+ This variable will determine the behaviour of the dropout layer.
+ num_classes: the number of output classes to use.
+ reuse_private: Whether or not to reuse the private components of the model.
+ private_scope: The name of the private scope.
+ reuse_shared: Whether or not to reuse the shared components of the model.
+ shared_scope: The name of the shared scope.
+
+ Returns:
+ the output logits, a tensor of size [batch_size, num_classes].
+ a dictionary with key/values the layer names and tensors.
+ """
+
+ net = {}
+
+ with tf.variable_scope(private_scope, reuse=reuse_private):
+ net['conv1'] = slim.conv2d(images, 96, [5, 5], scope='conv1')
+ net['pool1'] = slim.max_pool2d(net['conv1'], [2, 2], 2, scope='pool1')
+ with tf.variable_scope(shared_scope, reuse=reuse_shared):
+ net['conv2'] = slim.conv2d(net['pool1'], 144, [3, 3], scope='conv2')
+ net['pool2'] = slim.max_pool2d(net['conv2'], [2, 2], 2, scope='pool2')
+ net['conv3'] = slim.conv2d(net['pool2'], 256, [5, 5], scope='conv3')
+ net['pool3'] = slim.max_pool2d(net['conv3'], [2, 2], 2, scope='pool3')
+
+ net['fc3'] = slim.fully_connected(
+ slim.flatten(net['pool3']), 512, scope='fc3')
+ logits = slim.fully_connected(
+ net['fc3'], num_classes, activation_fn=None, scope='fc4')
+
+ return logits, net
+
+
+#########################
+# pose_mini task towers #
+#########################
+
+
+def pose_mini_tower(images,
+ num_classes=11,
+ is_training=False,
+ reuse_private=False,
+ private_scope='pose_mini',
+ reuse_shared=False,
+ shared_scope='task_model'):
+ """Task tower for the pose_mini dataset."""
+
+ with tf.variable_scope(private_scope, reuse=reuse_private):
+ net = slim.conv2d(images, 32, [5, 5], scope='conv1')
+ net = slim.max_pool2d(net, [2, 2], stride=2, scope='pool1')
+ with tf.variable_scope(shared_scope, reuse=reuse_shared):
+ net = slim.conv2d(net, 64, [5, 5], scope='conv2')
+ net = slim.max_pool2d(net, [2, 2], stride=2, scope='pool2')
+ net = slim.flatten(net)
+
+ net = slim.fully_connected(net, 128, scope='fc3')
+ net = slim.dropout(net, 0.5, is_training=is_training, scope='dropout')
+ with tf.variable_scope('quaternion_prediction'):
+ quaternion_pred = slim.fully_connected(
+ net, 4, activation_fn=tf.tanh, scope='fc_q')
+ quaternion_pred = tf.nn.l2_normalize(quaternion_pred, 1)
+
+ logits = slim.fully_connected(
+ net, num_classes, activation_fn=None, scope='fc4')
+
+ return logits, quaternion_pred
+
+
+def doubling_cnn_class_and_quaternion(images,
+ num_private_layers=1,
+ num_classes=10,
+ is_training=False,
+ reuse_private=False,
+ private_scope='doubling_cnn',
+ reuse_shared=False,
+ shared_scope='task_model'):
+ """Alternate conv, pool while doubling filter count."""
+ net = images
+ depth = 32
+ layer_id = 1
+
+ with tf.variable_scope(private_scope, reuse=reuse_private):
+ while num_private_layers > 0 and net.shape.as_list()[1] > 5:
+ net = slim.conv2d(net, depth, [3, 3], scope='conv%s' % layer_id)
+ net = slim.max_pool2d(net, [2, 2], stride=2, scope='pool%s' % layer_id)
+ depth *= 2
+ layer_id += 1
+ num_private_layers -= 1
+
+ with tf.variable_scope(shared_scope, reuse=reuse_shared):
+ while net.shape.as_list()[1] > 5:
+ net = slim.conv2d(net, depth, [3, 3], scope='conv%s' % layer_id)
+ net = slim.max_pool2d(net, [2, 2], stride=2, scope='pool%s' % layer_id)
+ depth *= 2
+ layer_id += 1
+
+ net = slim.flatten(net)
+ net = slim.fully_connected(net, 100, scope='fc1')
+ net = slim.dropout(net, 0.5, is_training=is_training, scope='dropout')
+ quaternion_pred = slim.fully_connected(
+ net, 4, activation_fn=tf.tanh, scope='fc_q')
+ quaternion_pred = tf.nn.l2_normalize(quaternion_pred, 1)
+
+ logits = slim.fully_connected(
+ net, num_classes, activation_fn=None, scope='fc_logits')
+
+ return logits, quaternion_pred
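+
+
+# Illustrative shape walk-through for the doubling CNN (hypothetical 32x32
+# input, num_private_layers=1): 32 -> 16 (conv1/pool1, 32 filters) -> 8
+# (conv2, 64 filters) -> 4 (conv3, 128 filters; the loop exits once the
+# spatial size drops to 5 or below), then flatten -> fc1(100) -> class logits
+# and a unit-norm quaternion head.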
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/pixelda_train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/pixelda_train.py"
new file mode 100644
index 00000000..4ca072cc
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/pixelda_train.py"
@@ -0,0 +1,409 @@
+# Copyright 2017 Google Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+r"""Trains the PixelDA model."""
+
+from functools import partial
+import os
+
+# Dependency imports
+
+import tensorflow as tf
+
+from domain_adaptation.datasets import dataset_factory
+from domain_adaptation.pixel_domain_adaptation import pixelda_losses
+from domain_adaptation.pixel_domain_adaptation import pixelda_model
+from domain_adaptation.pixel_domain_adaptation import pixelda_preprocess
+from domain_adaptation.pixel_domain_adaptation import pixelda_utils
+from domain_adaptation.pixel_domain_adaptation.hparams import create_hparams
+
+slim = tf.contrib.slim
+
+flags = tf.app.flags
+FLAGS = flags.FLAGS
+
+flags.DEFINE_string('master', '', 'BNS name of the TensorFlow master to use.')
+
+flags.DEFINE_integer(
+ 'ps_tasks', 0,
+ 'The number of parameter servers. If the value is 0, then the parameters '
+ 'are handled locally by the worker.')
+
+flags.DEFINE_integer(
+ 'task', 0,
+ 'The Task ID. This value is used when training with multiple workers to '
+ 'identify each worker.')
+
+flags.DEFINE_string('train_log_dir', '/tmp/pixelda/',
+ 'Directory where to write event logs.')
+
+flags.DEFINE_integer(
+ 'save_summaries_steps', 500,
+ 'The frequency with which summaries are saved, in steps.')
+
+flags.DEFINE_integer('save_interval_secs', 300,
+ 'The frequency with which the model is saved, in seconds.')
+
+flags.DEFINE_boolean('summarize_gradients', False,
+ 'Whether to summarize model gradients')
+
+flags.DEFINE_integer(
+ 'print_loss_steps', 100,
+ 'The frequency with which the losses are printed, in steps.')
+
+flags.DEFINE_string('source_dataset', 'mnist', 'The name of the source dataset.'
+ ' If hparams="arch=dcgan", this flag is ignored.')
+
+flags.DEFINE_string('target_dataset', 'mnist_m',
+ 'The name of the target dataset.')
+
+flags.DEFINE_string('source_split_name', 'train',
+ 'Name of the train split for the source.')
+
+flags.DEFINE_string('target_split_name', 'train',
+ 'Name of the train split for the target.')
+
+flags.DEFINE_string('dataset_dir', '',
+ 'The directory where the datasets can be found.')
+
+flags.DEFINE_integer(
+ 'num_readers', 4,
+ 'The number of parallel readers that read data from the dataset.')
+
+flags.DEFINE_integer('num_preprocessing_threads', 4,
+ 'The number of threads used to create the batches.')
+
+# HParams
+
+flags.DEFINE_string('hparams', '', 'Comma separated hyperparameter values')
+
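+# Example invocation (hyperparameter names are the ones referenced elsewhere
+# in this file; the values shown are illustrative only):
+#   python pixelda_train.py --dataset_dir=/tmp/data \
+#     --hparams="arch=resnet,batch_size=32,discriminator_steps=1"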
+
+def _get_vars_and_update_ops(hparams, scope):
+ """Returns the variables and update ops for a particular variable scope.
+
+ Args:
+ hparams: The hyperparameters struct.
+ scope: The variable scope.
+
+ Returns:
+ A tuple consisting of trainable variables and update ops.
+ """
+ is_trainable = lambda x: x in tf.trainable_variables()
+ var_list = list(filter(is_trainable, slim.get_model_variables(scope)))
+ global_step = slim.get_or_create_global_step()
+
+ update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS, scope)
+
+ tf.logging.info('All variables for scope: %s',
+ slim.get_model_variables(scope))
+ tf.logging.info('Trainable variables for scope: %s', var_list)
+
+ return var_list, update_ops
+
+
+def _train(discriminator_train_op,
+ generator_train_op,
+ logdir,
+ master='',
+ is_chief=True,
+ scaffold=None,
+ hooks=None,
+ chief_only_hooks=None,
+ save_checkpoint_secs=600,
+ save_summaries_steps=100,
+ hparams=None):
+ """Runs the training loop.
+
+ Args:
+ discriminator_train_op: A `Tensor` that, when executed, will apply the
+ gradients and return the loss value for the discriminator.
+ generator_train_op: A `Tensor` that, when executed, will apply the
+ gradients and return the loss value for the generator.
+ logdir: The directory where the graph and checkpoints are saved.
+ master: The URL of the master.
+ is_chief: Specifies whether or not the training is being run by the primary
+ replica during replica training.
+ scaffold: An tf.train.Scaffold instance.
+ hooks: List of `tf.train.SessionRunHook` callbacks which are run inside the
+ training loop.
+ chief_only_hooks: List of `tf.train.SessionRunHook` instances which are run
+ inside the training loop for the chief trainer only.
+ save_checkpoint_secs: The frequency, in seconds, that a checkpoint is saved
+ using a default checkpoint saver. If `save_checkpoint_secs` is set to
+ `None`, then the default checkpoint saver isn't used.
+ save_summaries_steps: The frequency, in number of global steps, that the
+ summaries are written to disk using a default summary saver. If
+ `save_summaries_steps` is set to `None`, then the default summary saver
+ isn't used.
+ hparams: The hparams struct.
+
+ Returns:
+ the value of the loss function after training.
+
+ Raises:
+ ValueError: if `logdir` is `None` and either `save_checkpoint_secs` or
+ `save_summaries_steps` is not `None`.
+ """
+ global_step = slim.get_or_create_global_step()
+
+ scaffold = scaffold or tf.train.Scaffold()
+
+ hooks = hooks or []
+
+ if is_chief:
+ session_creator = tf.train.ChiefSessionCreator(
+ scaffold=scaffold, checkpoint_dir=logdir, master=master)
+
+ if chief_only_hooks:
+ hooks.extend(chief_only_hooks)
+ hooks.append(tf.train.StepCounterHook(output_dir=logdir))
+
+ if save_summaries_steps:
+ if logdir is None:
+ raise ValueError(
+ 'logdir cannot be None when save_summaries_steps is set.')
+ hooks.append(
+ tf.train.SummarySaverHook(
+ scaffold=scaffold,
+ save_steps=save_summaries_steps,
+ output_dir=logdir))
+
+ if save_checkpoint_secs:
+ if logdir is None:
+ raise ValueError(
+ 'logdir cannot be None when save_checkpoint_secs is set.')
+ hooks.append(
+ tf.train.CheckpointSaverHook(
+ logdir, save_secs=save_checkpoint_secs, scaffold=scaffold))
+ else:
+ session_creator = tf.train.WorkerSessionCreator(
+ scaffold=scaffold, master=master)
+
+ with tf.train.MonitoredSession(
+ session_creator=session_creator, hooks=hooks) as session:
+ loss = None
+ while not session.should_stop():
+ # Run the domain classifier op X times.
+ for _ in range(hparams.discriminator_steps):
+ if session.should_stop():
+ return loss
+ loss, np_global_step = session.run(
+ [discriminator_train_op, global_step])
+ if np_global_step % FLAGS.print_loss_steps == 0:
+ tf.logging.info('Step %d: Discriminator Loss = %.2f', np_global_step,
+ loss)
+
+ # Run the generator op X times.
+ for _ in range(hparams.generator_steps):
+ if session.should_stop():
+ return loss
+ loss, np_global_step = session.run([generator_train_op, global_step])
+ if np_global_step % FLAGS.print_loss_steps == 0:
+ tf.logging.info('Step %d: Generator Loss = %.2f', np_global_step,
+ loss)
+ return loss
+
+
+def run_training(run_dir, checkpoint_dir, hparams):
+ """Runs the training loop.
+
+ Args:
+ run_dir: The directory where training specific logs are placed
+ checkpoint_dir: The directory where the checkpoints and log files are
+ stored.
+ hparams: The hyperparameters struct.
+
+ Raises:
+ ValueError: if hparams.arch is not recognized.
+ """
+ for path in [run_dir, checkpoint_dir]:
+ if not tf.gfile.Exists(path):
+ tf.gfile.MakeDirs(path)
+
+ # Serialize hparams to log dir
+ hparams_filename = os.path.join(checkpoint_dir, 'hparams.json')
+ with tf.gfile.FastGFile(hparams_filename, 'w') as f:
+ f.write(hparams.to_json())
+
+ with tf.Graph().as_default():
+ with tf.device(tf.train.replica_device_setter(FLAGS.ps_tasks)):
+ global_step = slim.get_or_create_global_step()
+
+ #########################
+ # Preprocess the inputs #
+ #########################
+ target_dataset = dataset_factory.get_dataset(
+ FLAGS.target_dataset,
+ split_name='train',
+ dataset_dir=FLAGS.dataset_dir)
+ target_images, _ = dataset_factory.provide_batch(
+ FLAGS.target_dataset, 'train', FLAGS.dataset_dir, FLAGS.num_readers,
+ hparams.batch_size, FLAGS.num_preprocessing_threads)
+ num_target_classes = target_dataset.num_classes
+
+ if hparams.arch not in ['dcgan']:
+ source_dataset = dataset_factory.get_dataset(
+ FLAGS.source_dataset,
+ split_name='train',
+ dataset_dir=FLAGS.dataset_dir)
+ num_source_classes = source_dataset.num_classes
+ source_images, source_labels = dataset_factory.provide_batch(
+ FLAGS.source_dataset, 'train', FLAGS.dataset_dir, FLAGS.num_readers,
+ hparams.batch_size, FLAGS.num_preprocessing_threads)
+ # The data provider emits one-hot labels, but we expect integer class labels.
+ source_labels['class'] = tf.argmax(source_labels['classes'], 1)
+ del source_labels['classes']
+ if num_source_classes != num_target_classes:
+ raise ValueError(
+ 'Source and target datasets must have the same number of classes. '
+ 'Got %d and %d.' % (num_source_classes, num_target_classes))
+ else:
+ source_images = None
+ source_labels = None
+
+ ####################
+ # Define the model #
+ ####################
+ end_points = pixelda_model.create_model(
+ hparams,
+ target_images,
+ source_images=source_images,
+ source_labels=source_labels,
+ is_training=True,
+ num_classes=num_target_classes)
+
+ #################################
+ # Get the variables to optimize #
+ #################################
+ generator_vars, generator_update_ops = _get_vars_and_update_ops(
+ hparams, 'generator')
+ discriminator_vars, discriminator_update_ops = _get_vars_and_update_ops(
+ hparams, 'discriminator')
+
+ ########################
+ # Configure the losses #
+ ########################
+ generator_loss = pixelda_losses.g_step_loss(
+ source_images,
+ source_labels,
+ end_points,
+ hparams,
+ num_classes=num_target_classes)
+ discriminator_loss = pixelda_losses.d_step_loss(
+ end_points, source_labels, num_target_classes, hparams)
+
+ ###########################
+ # Create the training ops #
+ ###########################
+ learning_rate = hparams.learning_rate
+ if hparams.lr_decay_steps:
+ learning_rate = tf.train.exponential_decay(
+ learning_rate,
+ slim.get_or_create_global_step(),
+ decay_steps=hparams.lr_decay_steps,
+ decay_rate=hparams.lr_decay_rate,
+ staircase=True)
+ tf.summary.scalar('Learning_rate', learning_rate)
+
+
+ if hparams.discriminator_steps == 0:
+ discriminator_train_op = tf.no_op()
+ else:
+ discriminator_optimizer = tf.train.AdamOptimizer(
+ learning_rate, beta1=hparams.adam_beta1)
+
+ discriminator_train_op = slim.learning.create_train_op(
+ discriminator_loss,
+ discriminator_optimizer,
+ update_ops=discriminator_update_ops,
+ variables_to_train=discriminator_vars,
+ clip_gradient_norm=hparams.clip_gradient_norm,
+ summarize_gradients=FLAGS.summarize_gradients)
+
+ if hparams.generator_steps == 0:
+ generator_train_op = tf.no_op()
+ else:
+ generator_optimizer = tf.train.AdamOptimizer(
+ learning_rate, beta1=hparams.adam_beta1)
+ generator_train_op = slim.learning.create_train_op(
+ generator_loss,
+ generator_optimizer,
+ update_ops=generator_update_ops,
+ variables_to_train=generator_vars,
+ clip_gradient_norm=hparams.clip_gradient_norm,
+ summarize_gradients=FLAGS.summarize_gradients)
+
+ #############
+ # Summaries #
+ #############
+ pixelda_utils.summarize_model(end_points)
+ pixelda_utils.summarize_transferred_grid(
+ end_points['transferred_images'], source_images, name='Transferred')
+ if 'source_images_recon' in end_points:
+ pixelda_utils.summarize_transferred_grid(
+ end_points['source_images_recon'],
+ source_images,
+ name='Source Reconstruction')
+ pixelda_utils.summaries_color_distributions(end_points['transferred_images'],
+ 'Transferred')
+ pixelda_utils.summaries_color_distributions(target_images, 'Target')
+
+ if source_images is not None:
+ pixelda_utils.summarize_transferred(source_images,
+ end_points['transferred_images'])
+ pixelda_utils.summaries_color_distributions(source_images, 'Source')
+ pixelda_utils.summaries_color_distributions(
+ tf.abs(source_images - end_points['transferred_images']),
+ 'Abs(Source_minus_Transferred)')
+
+ number_of_steps = None
+ if hparams.num_training_examples:
+ # Want to control by amount of data seen, not # steps
+ number_of_steps = hparams.num_training_examples // hparams.batch_size
+
+ hooks = [tf.train.StepCounterHook(),]
+
+ chief_only_hooks = [
+ tf.train.CheckpointSaverHook(
+ saver=tf.train.Saver(),
+ checkpoint_dir=run_dir,
+ save_secs=FLAGS.save_interval_secs)
+ ]
+
+ if number_of_steps:
+ hooks.append(tf.train.StopAtStepHook(last_step=number_of_steps))
+
+ _train(
+ discriminator_train_op,
+ generator_train_op,
+ logdir=run_dir,
+ master=FLAGS.master,
+ is_chief=FLAGS.task == 0,
+ hooks=hooks,
+ chief_only_hooks=chief_only_hooks,
+ save_checkpoint_secs=None,
+ save_summaries_steps=FLAGS.save_summaries_steps,
+ hparams=hparams)
+
+
+def main(_):
+ tf.logging.set_verbosity(tf.logging.INFO)
+ hparams = create_hparams(FLAGS.hparams)
+ run_training(
+ run_dir=FLAGS.train_log_dir,
+ checkpoint_dir=FLAGS.train_log_dir,
+ hparams=hparams)
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/pixelda_utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/pixelda_utils.py"
new file mode 100644
index 00000000..28e8006f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/domain_adaptation/pixel_domain_adaptation/pixelda_utils.py"
@@ -0,0 +1,195 @@
+# Copyright 2017 Google Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Utilities for PixelDA model."""
+import math
+
+# Dependency imports
+
+import tensorflow as tf
+
+slim = tf.contrib.slim
+
+flags = tf.app.flags
+FLAGS = flags.FLAGS
+
+
+def remove_depth(images):
+ """Takes a batch of images and remove depth channel if present."""
+ if images.shape.as_list()[-1] == 4:
+ return images[:, :, :, 0:3]
+ return images
+
+
+def image_grid(images, max_grid_size=4):
+ """Given images and N, return first N^2 images as an NxN image grid.
+
+ Args:
+ images: a `Tensor` of size [batch_size, height, width, channels]
+ max_grid_size: Maximum image grid height/width
+
+ Returns:
+ Single image batch, of dim [1, h*n, w*n, c]
+ """
+ images = remove_depth(images)
+ batch_size = images.shape.as_list()[0]
+ grid_size = min(int(math.sqrt(batch_size)), max_grid_size)
+ assert images.shape.as_list()[0] >= grid_size * grid_size
+
+ # If we have a depth channel
+ if images.shape.as_list()[-1] == 4:
+ # Slice the depth channel off before trimming the images to RGB.
+ depth = tf.image.grayscale_to_rgb(images[:grid_size * grid_size, :, :, 3:4])
+ images = images[:grid_size * grid_size, :, :, 0:3]
+
+ images = tf.reshape(images, [-1, images.shape.as_list()[2], 3])
+ split = tf.split(images, grid_size, 0)
+ depth = tf.reshape(depth, [-1, images.shape.as_list()[2], 3])
+ depth_split = tf.split(depth, grid_size, 0)
+ grid = tf.concat(split + depth_split, 1)
+ return tf.expand_dims(grid, 0)
+ else:
+ images = images[:grid_size * grid_size, :, :, :]
+ images = tf.reshape(
+ images, [-1, images.shape.as_list()[2],
+ images.shape.as_list()[3]])
+ split = tf.split(images, grid_size, 0)
+ grid = tf.concat(split, 1)
+ return tf.expand_dims(grid, 0)
+
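+# Illustrative sketch: for a hypothetical batch of shape [16, 28, 28, 3] and
+# max_grid_size=4, image_grid lays the first 16 images out in a 4x4 grid and
+# returns a single [1, 112, 112, 3] tensor.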
+
+def source_and_output_image_grid(output_images,
+ source_images=None,
+ max_grid_size=4):
+ """Create NxN image grid for output, concatenate source grid if given.
+
+ Makes grid out of output_images and, if provided, source_images, and
+ concatenates them.
+
+ Args:
+ output_images: [batch_size, h, w, c] tensor of images
+ source_images: optional[batch_size, h, w, c] tensor of images
+ max_grid_size: Image grid height/width
+
+ Returns:
+ Single image batch, of dim [1, h*n, w*n, c].
+ """
+ output_grid = image_grid(output_images, max_grid_size=max_grid_size)
+ if source_images is not None:
+ source_grid = image_grid(source_images, max_grid_size=max_grid_size)
+ # Make sure they have the same # of channels before concat
+ # Assumes either 1 or 3 channels
+ if output_grid.shape.as_list()[-1] != source_grid.shape.as_list()[-1]:
+ if output_grid.shape.as_list()[-1] == 1:
+ output_grid = tf.tile(output_grid, [1, 1, 1, 3])
+ if source_grid.shape.as_list()[-1] == 1:
+ source_grid = tf.tile(source_grid, [1, 1, 1, 3])
+ output_grid = tf.concat([output_grid, source_grid], 1)
+ return output_grid
+
+
+def summarize_model(end_points):
+ """Summarizes the given model via its end_points.
+
+ Args:
+ end_points: A dictionary of end_point names to `Tensor`.
+ """
+ tf.summary.histogram('domain_logits_transferred',
+ tf.sigmoid(end_points['transferred_domain_logits']))
+
+ tf.summary.histogram('domain_logits_target',
+ tf.sigmoid(end_points['target_domain_logits']))
+
+
+def summarize_transferred_grid(transferred_images,
+ source_images=None,
+ name='Transferred'):
+ """Produces a visual grid summarization of the image transferrence.
+
+ Args:
+ transferred_images: A `Tensor` of size [batch_size, height, width, c].
+ source_images: A `Tensor` of size [batch_size, height, width, c].
+ name: Name to use in summary name
+ """
+ if source_images is not None:
+ grid = source_and_output_image_grid(transferred_images, source_images)
+ else:
+ grid = image_grid(transferred_images)
+ tf.summary.image('%s_Images_Grid' % name, grid, max_outputs=1)
+
+
+def summarize_transferred(source_images,
+ transferred_images,
+ max_images=20,
+ name='Transferred'):
+ """Produces a visual summary of the image transferrence.
+
+ This summary displays the source image, transferred image, and a grayscale
+ difference image which highlights the differences between input and output.
+
+ Args:
+ source_images: A `Tensor` of size [batch_size, height, width, channels].
+ transferred_images: A `Tensor` of size [batch_size, height, width, channels]
+ max_images: The number of images to show.
+ name: Name to use in summary name
+
+ Raises:
+ ValueError: If number of channels in source and target are incompatible
+ """
+ source_channels = source_images.shape.as_list()[-1]
+ transferred_channels = transferred_images.shape.as_list()[-1]
+ if source_channels < transferred_channels:
+ if source_channels != 1:
+ raise ValueError(
+ 'Source must be 1 channel or same # of channels as target')
+ source_images = tf.tile(source_images, [1, 1, 1, transferred_channels])
+ if transferred_channels < source_channels:
+ if transferred_channels != 1:
+ raise ValueError(
+ 'Target must be 1 channel or same # of channels as source')
+ transferred_images = tf.tile(transferred_images, [1, 1, 1, source_channels])
+ diffs = tf.abs(source_images - transferred_images)
+ diffs = tf.reduce_max(diffs, reduction_indices=[3], keep_dims=True)
+ diffs = tf.tile(diffs, [1, 1, 1, max(source_channels, transferred_channels)])
+
+ transition_images = tf.concat([
+ source_images,
+ transferred_images,
+ diffs,
+ ], 2)
+
+ tf.summary.image(
+ '%s_difference' % name, transition_images, max_outputs=max_images)
+
+
+def summaries_color_distributions(images, name):
+ """Produces a histogram of the color distributions of the images.
+
+ Args:
+ images: A `Tensor` of size [batch_size, height, width, 3].
+ name: The name of the images being summarized.
+ """
+ tf.summary.histogram('color_values/%s' % name, images)
+
+
+def summarize_images(images, name):
+ """Produces a visual summary of the given images.
+
+ Args:
+ images: A `Tensor` of size [batch_size, height, width, 3].
+ name: The name of the images being summarized.
+ """
+ grid = image_grid(images)
+ tf.summary.image('%s_Images' % name, grid, max_outputs=1)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/README.md"
new file mode 100644
index 00000000..6ea4918e
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/README.md"
@@ -0,0 +1,62 @@
+Code for performing Hierarchical RL based on the following publications:
+
+"Data-Efficient Hierarchical Reinforcement Learning" by
+Ofir Nachum, Shixiang (Shane) Gu, Honglak Lee, and Sergey Levine
+(https://arxiv.org/abs/1805.08296).
+
+"Near-Optimal Representation Learning for Hierarchical Reinforcement Learning"
+by Ofir Nachum, Shixiang (Shane) Gu, Honglak Lee, and Sergey Levine
+(https://arxiv.org/abs/1810.01257).
+
+
+Requirements:
+* TensorFlow (see http://www.tensorflow.org for how to install/upgrade)
+* Gin Config (see https://github.com/google/gin-config)
+* Tensorflow Agents (see https://github.com/tensorflow/agents)
+* OpenAI Gym (see http://gym.openai.com/docs, be sure to install MuJoCo as well)
+* NumPy (see http://www.numpy.org/)
+
+
+Quick Start:
+
+Run a training job based on the original HIRO paper on Ant Maze:
+
+```
+python scripts/local_train.py test1 hiro_orig ant_maze base_uvf suite
+```
+
+Run a continuous evaluation job for that experiment:
+
+```
+python scripts/local_eval.py test1 hiro_orig ant_maze base_uvf suite
+```
+
+To run the same experiment with online representation learning (the
+"Near-Optimal" paper), change `hiro_orig` to `hiro_repr`.
+You can also run with `hiro_xy` to run the same experiment with HIRO on only the
+xy coordinates of the agent.
+
+To run on other environments, change `ant_maze` to something else; e.g.,
+`ant_push_multi`, `ant_fall_multi`, etc. See `context/configs/*` for other options.
+
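+For example, a representation-learning run on the push environment would look
+like this (illustrative; see `context/configs/*` for the exact config names):
+
+```
+python scripts/local_train.py test2 hiro_repr ant_push_multi base_uvf suite
+```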
+
+Basic Code Guide:
+
+The code for training resides in train.py. The code trains a lower-level policy
+(a UVF agent in the code) and a higher-level policy (a MetaAgent in the code)
+concurrently. The higher-level policy communicates goals to the lower-level
+policy. In the code, this is called a context. Not only does the lower-level
+policy act with respect to a context (a higher-level specified goal), but the
+higher-level policy also acts with respect to an environment-specified context
+(corresponding to the navigation target location associated with the task).
+Therefore, in `context/configs/*` you will find both specifications for task setup
+as well as goal configurations. Most remaining hyperparameters used for
+training/evaluation may be found in `configs/*`.
+
+NOTE: Not all the code corresponding to the "Near-Optimal" paper is included.
+Namely, changes to low-level policy training proposed in the paper (discounting
+and auxiliary rewards) are not implemented here. Performance should not change
+significantly.
+
+
+Maintained by Ofir Nachum (ofirnachum).
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/agent.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/agent.py"
new file mode 100644
index 00000000..cb02b51f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/agent.py"
@@ -0,0 +1,774 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""A UVF agent.
+"""
+
+import tensorflow as tf
+import gin.tf
+from agents import ddpg_agent
+# pylint: disable=unused-import
+import cond_fn
+from utils import utils as uvf_utils
+from context import gin_imports
+# pylint: enable=unused-import
+slim = tf.contrib.slim
+
+
+@gin.configurable
+class UvfAgentCore(object):
+ """Defines basic functions for UVF agent. Must be inherited with an RL agent.
+
+ Used as lower-level agent.
+ """
+
+ def __init__(self,
+ observation_spec,
+ action_spec,
+ tf_env,
+ tf_context,
+ step_cond_fn=cond_fn.env_transition,
+ reset_episode_cond_fn=cond_fn.env_restart,
+ reset_env_cond_fn=cond_fn.false_fn,
+ metrics=None,
+ **base_agent_kwargs):
+ """Constructs a UVF agent.
+
+ Args:
+ observation_spec: A TensorSpec defining the observations.
+ action_spec: A BoundedTensorSpec defining the actions.
+ tf_env: A Tensorflow environment object.
+ tf_context: A Context class.
+ step_cond_fn: A function indicating whether to increment the num of steps.
+ reset_episode_cond_fn: A function indicating whether to restart the
+ episode, resampling the context.
+ reset_env_cond_fn: A function indicating whether to perform a manual reset
+ of the environment.
+ metrics: A list of functions that evaluate metrics of the agent.
+ **base_agent_kwargs: A dictionary of parameters for base RL Agent.
+ Raises:
+ ValueError: If 'dqda_clipping' is < 0.
+ """
+ self._step_cond_fn = step_cond_fn
+ self._reset_episode_cond_fn = reset_episode_cond_fn
+ self._reset_env_cond_fn = reset_env_cond_fn
+ self.metrics = metrics
+
+ # expose tf_context methods
+ self.tf_context = tf_context(tf_env=tf_env)
+ self.set_replay = self.tf_context.set_replay
+ self.sample_contexts = self.tf_context.sample_contexts
+ self.compute_rewards = self.tf_context.compute_rewards
+ self.gamma_index = self.tf_context.gamma_index
+ self.context_specs = self.tf_context.context_specs
+ self.context_as_action_specs = self.tf_context.context_as_action_specs
+ self.init_context_vars = self.tf_context.create_vars
+
+ self.env_observation_spec = observation_spec[0]
+ merged_observation_spec = (uvf_utils.merge_specs(
+ (self.env_observation_spec,) + self.context_specs),)
+ self._context_vars = dict()
+ self._action_vars = dict()
+
+ self.BASE_AGENT_CLASS.__init__(
+ self,
+ observation_spec=merged_observation_spec,
+ action_spec=action_spec,
+ **base_agent_kwargs
+ )
+
+ def set_meta_agent(self, agent=None):
+ self._meta_agent = agent
+
+ @property
+ def meta_agent(self):
+ return self._meta_agent
+
+ def actor_loss(self, states, actions, rewards, discounts,
+ next_states):
+ """Returns the next action for the state.
+
+ Args:
+ state: A [num_state_dims] tensor representing a state.
+ context: A list of [num_context_dims] tensor representing a context.
+ Returns:
+ A [num_action_dims] tensor representing the action.
+ """
+ return self.BASE_AGENT_CLASS.actor_loss(self, states)
+
+ def action(self, state, context=None):
+ """Returns the next action for the state.
+
+ Args:
+ state: A [num_state_dims] tensor representing a state.
+ context: A list of [num_context_dims] tensor representing a context.
+ Returns:
+ A [num_action_dims] tensor representing the action.
+ """
+ merged_state = self.merged_state(state, context)
+ return self.BASE_AGENT_CLASS.action(self, merged_state)
+
+ def actions(self, state, context=None):
+ """Returns the next action for the state.
+
+ Args:
+ state: A [-1, num_state_dims] tensor representing a state.
+ context: A list of [-1, num_context_dims] tensor representing a context.
+ Returns:
+ A [-1, num_action_dims] tensor representing the action.
+ """
+ merged_states = self.merged_states(state, context)
+ return self.BASE_AGENT_CLASS.actor_net(self, merged_states)
+
+ def log_probs(self, states, actions, state_reprs, contexts=None):
+ assert contexts is not None
+ batch_dims = [tf.shape(states)[0], tf.shape(states)[1]]
+ contexts = self.tf_context.context_multi_transition_fn(
+ contexts, states=tf.to_float(state_reprs))
+
+ flat_states = tf.reshape(states,
+ [batch_dims[0] * batch_dims[1], states.shape[-1]])
+ flat_contexts = [tf.reshape(tf.cast(context, states.dtype),
+ [batch_dims[0] * batch_dims[1], context.shape[-1]])
+ for context in contexts]
+ flat_pred_actions = self.actions(flat_states, flat_contexts)
+ pred_actions = tf.reshape(flat_pred_actions,
+ batch_dims + [flat_pred_actions.shape[-1]])
+
+ error = tf.square(actions - pred_actions)
+ spec_range = (self._action_spec.maximum - self._action_spec.minimum) / 2
+ normalized_error = error / tf.constant(spec_range) ** 2
+ return -normalized_error
+
+ @gin.configurable('uvf_add_noise_fn')
+ def add_noise_fn(self, action_fn, stddev=1.0, debug=False,
+ clip=True, global_step=None):
+ """Returns the action_fn with additive Gaussian noise.
+
+ Args:
+ action_fn: A callable(`state`, `context`) which returns a
+ [num_action_dims] tensor representing a action.
+ stddev: stddev for the Ornstein-Uhlenbeck noise.
+ debug: Print debug messages.
+ Returns:
+ A [num_action_dims] action tensor.
+ """
+ if global_step is not None:
+ stddev *= tf.maximum( # Decay exploration during training.
+ tf.train.exponential_decay(1.0, global_step, 1e6, 0.8), 0.5)
+ def noisy_action_fn(state, context=None):
+ """Noisy action fn."""
+ action = action_fn(state, context)
+ if debug:
+ action = uvf_utils.tf_print(
+ action, [action],
+ message='[add_noise_fn] pre-noise action',
+ first_n=100)
+ noise_dist = tf.distributions.Normal(tf.zeros_like(action),
+ tf.ones_like(action) * stddev)
+ noise = noise_dist.sample()
+ action += noise
+ if debug:
+ action = uvf_utils.tf_print(
+ action, [action],
+ message='[add_noise_fn] post-noise action',
+ first_n=100)
+ if clip:
+ action = uvf_utils.clip_to_spec(action, self._action_spec)
+ return action
+ return noisy_action_fn
+
+ def merged_state(self, state, context=None):
+ """Returns the merged state from the environment state and contexts.
+
+ Args:
+ state: A [num_state_dims] tensor representing a state.
+ context: A list of [num_context_dims] tensor representing a context.
+ If None, use the internal context.
+ Returns:
+ A [num_merged_state_dims] tensor representing the merged state.
+ """
+ if context is None:
+ context = list(self.context_vars)
+ state = tf.concat([state,] + context, axis=-1)
+ self._validate_states(self._batch_state(state))
+ return state
+
+ def merged_states(self, states, contexts=None):
+ """Returns the batch merged state from the batch env state and contexts.
+
+ Args:
+ states: A [batch_size, num_state_dims] tensor representing a batch
+ of states.
+ contexts: A list of [batch_size, num_context_dims] tensor
+ representing a batch of contexts. If None,
+ use the internal context.
+ Returns:
+ A [batch_size, num_merged_state_dims] tensor representing the batch
+ of merged states.
+ """
+ if contexts is None:
+ contexts = [tf.tile(tf.expand_dims(context, axis=0),
+ (tf.shape(states)[0], 1)) for
+ context in self.context_vars]
+ states = tf.concat([states,] + contexts, axis=-1)
+ self._validate_states(states)
+ return states
+
+ def unmerged_states(self, merged_states):
+ """Returns the batch state and contexts from the batch merged state.
+
+ Args:
+ merged_states: A [batch_size, num_merged_state_dims] tensor
+ representing a batch of merged states.
+ Returns:
+ A [batch_size, num_state_dims] tensor and a list of
+ [batch_size, num_context_dims] tensors representing the batch state
+ and contexts respectively.
+ """
+ self._validate_states(merged_states)
+ num_state_dims = self.env_observation_spec.shape.as_list()[0]
+ num_context_dims_list = [c.shape.as_list()[0] for c in self.context_specs]
+ states = merged_states[:, :num_state_dims]
+ contexts = []
+ i = num_state_dims
+ for num_context_dims in num_context_dims_list:
+ contexts.append(merged_states[:, i: i+num_context_dims])
+ i += num_context_dims
+ return states, contexts
+
+ def sample_random_actions(self, batch_size=1):
+ """Return random actions.
+
+ Args:
+ batch_size: Batch size.
+ Returns:
+ A [batch_size, num_action_dims] tensor representing the batch of actions.
+ """
+ actions = tf.concat(
+ [
+ tf.random_uniform(
+ shape=(batch_size, 1),
+ minval=self._action_spec.minimum[i],
+ maxval=self._action_spec.maximum[i])
+ for i in range(self._action_spec.shape[0].value)
+ ],
+ axis=1)
+ return actions
+
+ def clip_actions(self, actions):
+ """Clip actions to spec.
+
+ Args:
+ actions: A [batch_size, num_action_dims] tensor representing
+ the batch of actions.
+ Returns:
+ A [batch_size, num_action_dims] tensor representing the batch
+ of clipped actions.
+ """
+ actions = tf.concat(
+ [
+ tf.clip_by_value(
+ actions[:, i:i+1],
+ self._action_spec.minimum[i],
+ self._action_spec.maximum[i])
+ for i in range(self._action_spec.shape[0].value)
+ ],
+ axis=1)
+ return actions
+
+ def mix_contexts(self, contexts, insert_contexts, indices):
+ """Mix two contexts based on indices.
+
+ Args:
+ contexts: A list of [batch_size, num_context_dims] tensor representing
+ the batch of contexts.
+ insert_contexts: A list of [batch_size, num_context_dims] tensor
+ representing the batch of contexts to be inserted.
+ indices: A list of a list of integers denoting indices to replace.
+ Returns:
+ A list of resulting contexts.
+ """
+ if indices is None: indices = [[]] * len(contexts)
+ assert len(contexts) == len(indices)
+ assert all([spec.shape.ndims == 1 for spec in self.context_specs])
+ mix_contexts = []
+ for contexts_, insert_contexts_, indices_, spec in zip(
+ contexts, insert_contexts, indices, self.context_specs):
+ mix_contexts.append(
+ tf.concat(
+ [
+ insert_contexts_[:, i:i + 1] if i in indices_ else
+ contexts_[:, i:i + 1] for i in range(spec.shape.as_list()[0])
+ ],
+ axis=1))
+ return mix_contexts
+
+ def begin_episode_ops(self, mode, action_fn=None, state=None):
+ """Returns ops that reset agent at beginning of episodes.
+
+    Args:
+      mode: a string representing the mode=[train, explore, eval].
+      action_fn: A (state, context) -> action callable passed to the context
+        reset.
+      state: A [num_state_dims] tensor representing the current state.
+    Returns:
+      A list of ops.
+ """
+ all_ops = []
+ for _, action_var in sorted(self._action_vars.items()):
+ sample_action = self.sample_random_actions(1)[0]
+ all_ops.append(tf.assign(action_var, sample_action))
+ all_ops += self.tf_context.reset(mode=mode, agent=self._meta_agent,
+ action_fn=action_fn, state=state)
+ return all_ops
+
+ def cond_begin_episode_op(self, cond, input_vars, mode, meta_action_fn):
+ """Returns op that resets agent at beginning of episodes.
+
+    A new episode is begun if the cond op evaluates to `False`.
+
+    Args:
+      cond: a Boolean tensor variable.
+      input_vars: A list of tensor variables.
+      mode: a string representing the mode=[train, explore, eval].
+      meta_action_fn: A (state, context) -> action callable for the meta-agent.
+    Returns:
+ Conditional begin op.
+ """
+ (state, action, reward, next_state,
+ state_repr, next_state_repr) = input_vars
+ def continue_fn():
+ """Continue op fn."""
+ items = [state, action, reward, next_state,
+ state_repr, next_state_repr] + list(self.context_vars)
+ batch_items = [tf.expand_dims(item, 0) for item in items]
+ (states, actions, rewards, next_states,
+ state_reprs, next_state_reprs) = batch_items[:6]
+ context_reward = self.compute_rewards(
+ mode, state_reprs, actions, rewards, next_state_reprs,
+ batch_items[6:])[0][0]
+ context_reward = tf.cast(context_reward, dtype=reward.dtype)
+ if self.meta_agent is not None:
+ meta_action = tf.concat(self.context_vars, -1)
+ items = [state, meta_action, reward, next_state,
+ state_repr, next_state_repr] + list(self.meta_agent.context_vars)
+ batch_items = [tf.expand_dims(item, 0) for item in items]
+ (states, meta_actions, rewards, next_states,
+ state_reprs, next_state_reprs) = batch_items[:6]
+ meta_reward = self.meta_agent.compute_rewards(
+ mode, states, meta_actions, rewards,
+ next_states, batch_items[6:])[0][0]
+ meta_reward = tf.cast(meta_reward, dtype=reward.dtype)
+ else:
+ meta_reward = tf.constant(0, dtype=reward.dtype)
+
+ with tf.control_dependencies([context_reward, meta_reward]):
+ step_ops = self.tf_context.step(mode=mode, agent=self._meta_agent,
+ state=state,
+ next_state=next_state,
+ state_repr=state_repr,
+ next_state_repr=next_state_repr,
+ action_fn=meta_action_fn)
+ with tf.control_dependencies(step_ops):
+ context_reward, meta_reward = map(tf.identity, [context_reward, meta_reward])
+ return context_reward, meta_reward
+ def begin_episode_fn():
+ """Begin op fn."""
+ begin_ops = self.begin_episode_ops(mode=mode, action_fn=meta_action_fn, state=state)
+ with tf.control_dependencies(begin_ops):
+ return tf.zeros_like(reward), tf.zeros_like(reward)
+ with tf.control_dependencies(input_vars):
+ cond_begin_episode_op = tf.cond(cond, continue_fn, begin_episode_fn)
+ return cond_begin_episode_op
+
+ def get_env_base_wrapper(self, env_base, **begin_kwargs):
+ """Create a wrapper around env_base, with agent-specific begin/end_episode.
+
+ Args:
+ env_base: A python environment base.
+ **begin_kwargs: Keyword args for begin_episode_ops.
+ Returns:
+ An object with begin_episode() and end_episode().
+ """
+ begin_ops = self.begin_episode_ops(**begin_kwargs)
+ return uvf_utils.get_contextual_env_base(env_base, begin_ops)
+
+ def init_action_vars(self, name, i=None):
+ """Create and return a tensorflow Variable holding an action.
+
+ Args:
+ name: Name of the variables.
+ i: Integer id.
+ Returns:
+ A [num_action_dims] tensor.
+ """
+ if i is not None:
+ name += '_%d' % i
+ assert name not in self._action_vars, ('Conflict! %s is already '
+ 'initialized.') % name
+ self._action_vars[name] = tf.Variable(
+ self.sample_random_actions(1)[0], name='%s_action' % (name))
+ self._validate_actions(tf.expand_dims(self._action_vars[name], 0))
+ return self._action_vars[name]
+
+ @gin.configurable('uvf_critic_function')
+ def critic_function(self, critic_vals, states, critic_fn=None):
+ """Computes q values based on outputs from the critic net.
+
+ Args:
+ critic_vals: A tf.float32 [batch_size, ...] tensor representing outputs
+ from the critic net.
+ states: A [batch_size, num_state_dims] tensor representing a batch
+ of states.
+ critic_fn: A callable that process outputs from critic_net and
+ outputs a [batch_size] tensor representing q values.
+ Returns:
+ A tf.float32 [batch_size] tensor representing q values.
+ """
+ if critic_fn is not None:
+ env_states, contexts = self.unmerged_states(states)
+ critic_vals = critic_fn(critic_vals, env_states, contexts)
+ critic_vals.shape.assert_has_rank(1)
+ return critic_vals
+
+ def get_action_vars(self, key):
+ return self._action_vars[key]
+
+ def get_context_vars(self, key):
+ return self.tf_context.context_vars[key]
+
+ def step_cond_fn(self, *args):
+ return self._step_cond_fn(self, *args)
+
+ def reset_episode_cond_fn(self, *args):
+ return self._reset_episode_cond_fn(self, *args)
+
+ def reset_env_cond_fn(self, *args):
+ return self._reset_env_cond_fn(self, *args)
+
+ @property
+ def context_vars(self):
+ return self.tf_context.vars
+
+
+@gin.configurable
+class MetaAgentCore(UvfAgentCore):
+ """Defines basic functions for UVF Meta-agent. Must be inherited with an RL agent.
+
+ Used as higher-level agent.
+ """
+
+ def __init__(self,
+ observation_spec,
+ action_spec,
+ tf_env,
+ tf_context,
+ sub_context,
+ step_cond_fn=cond_fn.env_transition,
+ reset_episode_cond_fn=cond_fn.env_restart,
+ reset_env_cond_fn=cond_fn.false_fn,
+ metrics=None,
+ actions_reg=0.,
+ k=2,
+ **base_agent_kwargs):
+ """Constructs a Meta agent.
+
+ Args:
+ observation_spec: A TensorSpec defining the observations.
+ action_spec: A BoundedTensorSpec defining the actions.
+ tf_env: A Tensorflow environment object.
+ tf_context: A Context class.
+ step_cond_fn: A function indicating whether to increment the num of steps.
+ reset_episode_cond_fn: A function indicating whether to restart the
+ episode, resampling the context.
+ reset_env_cond_fn: A function indicating whether to perform a manual reset
+ of the environment.
+ metrics: A list of functions that evaluate metrics of the agent.
+ **base_agent_kwargs: A dictionary of parameters for base RL Agent.
+ Raises:
+ ValueError: If 'dqda_clipping' is < 0.
+ """
+ self._step_cond_fn = step_cond_fn
+ self._reset_episode_cond_fn = reset_episode_cond_fn
+ self._reset_env_cond_fn = reset_env_cond_fn
+ self.metrics = metrics
+ self._actions_reg = actions_reg
+ self._k = k
+
+ # expose tf_context methods
+ self.tf_context = tf_context(tf_env=tf_env)
+ self.sub_context = sub_context(tf_env=tf_env)
+ self.set_replay = self.tf_context.set_replay
+ self.sample_contexts = self.tf_context.sample_contexts
+ self.compute_rewards = self.tf_context.compute_rewards
+ self.gamma_index = self.tf_context.gamma_index
+ self.context_specs = self.tf_context.context_specs
+ self.context_as_action_specs = self.tf_context.context_as_action_specs
+ self.sub_context_as_action_specs = self.sub_context.context_as_action_specs
+ self.init_context_vars = self.tf_context.create_vars
+
+ self.env_observation_spec = observation_spec[0]
+ merged_observation_spec = (uvf_utils.merge_specs(
+ (self.env_observation_spec,) + self.context_specs),)
+ self._context_vars = dict()
+ self._action_vars = dict()
+
+ assert len(self.context_as_action_specs) == 1
+ self.BASE_AGENT_CLASS.__init__(
+ self,
+ observation_spec=merged_observation_spec,
+ action_spec=self.sub_context_as_action_specs,
+ **base_agent_kwargs
+ )
+
+ @gin.configurable('meta_add_noise_fn')
+ def add_noise_fn(self, action_fn, stddev=1.0, debug=False,
+ global_step=None):
+ noisy_action_fn = super(MetaAgentCore, self).add_noise_fn(
+ action_fn, stddev,
+ clip=True, global_step=global_step)
+ return noisy_action_fn
+
+ def actor_loss(self, states, actions, rewards, discounts,
+ next_states):
+ """Returns the next action for the state.
+
+ Args:
+ state: A [num_state_dims] tensor representing a state.
+ context: A list of [num_context_dims] tensor representing a context.
+ Returns:
+ A [num_action_dims] tensor representing the action.
+ """
+ actions = self.actor_net(states, stop_gradients=False)
+ regularizer = self._actions_reg * tf.reduce_mean(
+ tf.reduce_sum(tf.abs(actions[:, self._k:]), -1), 0)
+ loss = self.BASE_AGENT_CLASS.actor_loss(self, states)
+ return regularizer + loss
+
+
+@gin.configurable
+class UvfAgent(UvfAgentCore, ddpg_agent.TD3Agent):
+ """A DDPG agent with UVF.
+ """
+ BASE_AGENT_CLASS = ddpg_agent.TD3Agent
+ ACTION_TYPE = 'continuous'
+
+ def __init__(self, *args, **kwargs):
+ UvfAgentCore.__init__(self, *args, **kwargs)
+
+
+@gin.configurable
+class MetaAgent(MetaAgentCore, ddpg_agent.TD3Agent):
+ """A DDPG meta-agent.
+ """
+ BASE_AGENT_CLASS = ddpg_agent.TD3Agent
+ ACTION_TYPE = 'continuous'
+
+ def __init__(self, *args, **kwargs):
+ MetaAgentCore.__init__(self, *args, **kwargs)
+
+
+@gin.configurable()
+def state_preprocess_net(
+ states,
+ num_output_dims=2,
+ states_hidden_layers=(100,),
+ normalizer_fn=None,
+ activation_fn=tf.nn.relu,
+ zero_time=True,
+ images=False):
+ """Creates a simple feed forward net for embedding states.
+ """
+ with slim.arg_scope(
+ [slim.fully_connected],
+ activation_fn=activation_fn,
+ normalizer_fn=normalizer_fn,
+ weights_initializer=slim.variance_scaling_initializer(
+ factor=1.0/3.0, mode='FAN_IN', uniform=True)):
+
+ states_shape = tf.shape(states)
+ states_dtype = states.dtype
+ states = tf.to_float(states)
+ if images: # Zero-out x-y
+ states *= tf.constant([0.] * 2 + [1.] * (states.shape[-1] - 2), dtype=states.dtype)
+ if zero_time:
+ states *= tf.constant([1.] * (states.shape[-1] - 1) + [0.], dtype=states.dtype)
+ orig_states = states
+ embed = states
+ if states_hidden_layers:
+ embed = slim.stack(embed, slim.fully_connected, states_hidden_layers,
+ scope='states')
+
+ with slim.arg_scope([slim.fully_connected],
+ weights_regularizer=None,
+ weights_initializer=tf.random_uniform_initializer(
+ minval=-0.003, maxval=0.003)):
+ embed = slim.fully_connected(embed, num_output_dims,
+ activation_fn=None,
+ normalizer_fn=None,
+ scope='value')
+
+ output = embed
+ output = tf.cast(output, states_dtype)
+ return output
+
+
+@gin.configurable()
+def action_embed_net(
+ actions,
+ states=None,
+ num_output_dims=2,
+ hidden_layers=(400, 300),
+ normalizer_fn=None,
+ activation_fn=tf.nn.relu,
+ zero_time=True,
+ images=False):
+ """Creates a simple feed forward net for embedding actions.
+ """
+ with slim.arg_scope(
+ [slim.fully_connected],
+ activation_fn=activation_fn,
+ normalizer_fn=normalizer_fn,
+ weights_initializer=slim.variance_scaling_initializer(
+ factor=1.0/3.0, mode='FAN_IN', uniform=True)):
+
+ actions = tf.to_float(actions)
+ if states is not None:
+ if images: # Zero-out x-y
+ states *= tf.constant([0.] * 2 + [1.] * (states.shape[-1] - 2), dtype=states.dtype)
+ if zero_time:
+ states *= tf.constant([1.] * (states.shape[-1] - 1) + [0.], dtype=states.dtype)
+ actions = tf.concat([actions, tf.to_float(states)], -1)
+
+ embed = actions
+ if hidden_layers:
+ embed = slim.stack(embed, slim.fully_connected, hidden_layers,
+ scope='hidden')
+
+ with slim.arg_scope([slim.fully_connected],
+ weights_regularizer=None,
+ weights_initializer=tf.random_uniform_initializer(
+ minval=-0.003, maxval=0.003)):
+ embed = slim.fully_connected(embed, num_output_dims,
+ activation_fn=None,
+ normalizer_fn=None,
+ scope='value')
+ if num_output_dims == 1:
+ return embed[:, 0, ...]
+ else:
+ return embed
+
+
+def huber(x, kappa=0.1):
+  """Huber penalty rescaled by kappa: quadratic for |x| <= kappa, linear beyond."""
+  return (0.5 * tf.square(x) * tf.to_float(tf.abs(x) <= kappa) +
+          kappa * (tf.abs(x) - 0.5 * kappa) * tf.to_float(tf.abs(x) > kappa)
+          ) / kappa
+
+
+@gin.configurable()
+class StatePreprocess(object):
+ STATE_PREPROCESS_NET_SCOPE = 'state_process_net'
+ ACTION_EMBED_NET_SCOPE = 'action_embed_net'
+
+ def __init__(self, trainable=False,
+ state_preprocess_net=lambda states: states,
+ action_embed_net=lambda actions, *args, **kwargs: actions,
+ ndims=None):
+ self.trainable = trainable
+ self._scope = tf.get_variable_scope().name
+ self._ndims = ndims
+ self._state_preprocess_net = tf.make_template(
+ self.STATE_PREPROCESS_NET_SCOPE, state_preprocess_net,
+ create_scope_now_=True)
+ self._action_embed_net = tf.make_template(
+ self.ACTION_EMBED_NET_SCOPE, action_embed_net,
+ create_scope_now_=True)
+
+ def __call__(self, states):
+ batched = states.get_shape().ndims != 1
+ if not batched:
+ states = tf.expand_dims(states, 0)
+ embedded = self._state_preprocess_net(states)
+ if self._ndims is not None:
+ embedded = embedded[..., :self._ndims]
+ if not batched:
+ return embedded[0]
+ return embedded
+
+ def loss(self, states, next_states, low_actions, low_states):
+ batch_size = tf.shape(states)[0]
+ d = int(low_states.shape[1])
+ # Sample indices into meta-transition to train on.
+ probs = 0.99 ** tf.range(d, dtype=tf.float32)
+ probs *= tf.constant([1.0] * (d - 1) + [1.0 / (1 - 0.99)],
+ dtype=tf.float32)
+ probs /= tf.reduce_sum(probs)
+ index_dist = tf.distributions.Categorical(probs=probs, dtype=tf.int64)
+ indices = index_dist.sample(batch_size)
+ batch_size = tf.cast(batch_size, tf.int64)
+ next_indices = tf.concat(
+ [tf.range(batch_size, dtype=tf.int64)[:, None],
+ (1 + indices[:, None]) % d], -1)
+ new_next_states = tf.where(indices < d - 1,
+ tf.gather_nd(low_states, next_indices),
+ next_states)
+ next_states = new_next_states
+
+ embed1 = tf.to_float(self._state_preprocess_net(states))
+ embed2 = tf.to_float(self._state_preprocess_net(next_states))
+ action_embed = self._action_embed_net(
+ tf.layers.flatten(low_actions), states=states)
+
+ tau = 2.0
+ fn = lambda z: tau * tf.reduce_sum(huber(z), -1)
+ all_embed = tf.get_variable('all_embed', [1024, int(embed1.shape[-1])],
+ initializer=tf.zeros_initializer())
+ upd = all_embed.assign(tf.concat([all_embed[batch_size:], embed2], 0))
+ with tf.control_dependencies([upd]):
+ close = 1 * tf.reduce_mean(fn(embed1 + action_embed - embed2))
+ prior_log_probs = tf.reduce_logsumexp(
+ -fn((embed1 + action_embed)[:, None, :] - all_embed[None, :, :]),
+ axis=-1) - tf.log(tf.to_float(all_embed.shape[0]))
+ far = tf.reduce_mean(tf.exp(-fn((embed1 + action_embed)[1:] - embed2[:-1])
+ - tf.stop_gradient(prior_log_probs[1:])))
+ repr_log_probs = tf.stop_gradient(
+ -fn(embed1 + action_embed - embed2) - prior_log_probs) / tau
+ return close + far, repr_log_probs, indices
+
+ def get_trainable_vars(self):
+ return (
+ slim.get_trainable_variables(
+ uvf_utils.join_scope(self._scope, self.STATE_PREPROCESS_NET_SCOPE)) +
+ slim.get_trainable_variables(
+ uvf_utils.join_scope(self._scope, self.ACTION_EMBED_NET_SCOPE)))
+
+
+@gin.configurable()
+class InverseDynamics(object):
+ INVERSE_DYNAMICS_NET_SCOPE = 'inverse_dynamics'
+
+ def __init__(self, spec):
+ self._spec = spec
+
+ def sample(self, states, next_states, num_samples, orig_goals, sc=0.5):
+ goal_dim = orig_goals.shape[-1]
+ spec_range = (self._spec.maximum - self._spec.minimum) / 2 * tf.ones([goal_dim])
+ loc = tf.cast(next_states - states, tf.float32)[:, :goal_dim]
+ scale = sc * tf.tile(tf.reshape(spec_range, [1, goal_dim]),
+ [tf.shape(states)[0], 1])
+ dist = tf.distributions.Normal(loc, scale)
+ if num_samples == 1:
+ return dist.sample()
+ samples = tf.concat([dist.sample(num_samples - 2),
+ tf.expand_dims(loc, 0),
+ tf.expand_dims(orig_goals, 0)], 0)
+ return uvf_utils.clip_to_spec(samples, self._spec)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/agents/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/agents/__init__.py"
new file mode 100644
index 00000000..8b137891
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/agents/__init__.py"
@@ -0,0 +1 @@
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/agents/circular_buffer.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/agents/circular_buffer.py"
new file mode 100644
index 00000000..17b6304b
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/agents/circular_buffer.py"
@@ -0,0 +1,289 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""A circular buffer where each element is a list of tensors.
+
+Each element of the buffer is a list of tensors. An example use case is a replay
+buffer in reinforcement learning, where each element is a list of tensors
+representing the state, action, reward etc.
+
+New elements are added sequentially, and once the buffer is full, we
+start overwriting them in a circular fashion. Reading does not remove any
+elements, only adding new elements does.
+"""
+
+import collections
+import numpy as np
+import tensorflow as tf
+
+import gin.tf
+
+
+@gin.configurable
+class CircularBuffer(object):
+ """A circular buffer where each element is a list of tensors."""
+
+ def __init__(self, buffer_size=1000, scope='replay_buffer'):
+ """Circular buffer of list of tensors.
+
+ Args:
+ buffer_size: (integer) maximum number of tensor lists the buffer can hold.
+ scope: (string) variable scope for creating the variables.
+ """
+ self._buffer_size = np.int64(buffer_size)
+ self._scope = scope
+ self._tensors = collections.OrderedDict()
+ with tf.variable_scope(self._scope):
+ self._num_adds = tf.Variable(0, dtype=tf.int64, name='num_adds')
+ self._num_adds_cs = tf.contrib.framework.CriticalSection(name='num_adds')
+
+ @property
+ def buffer_size(self):
+ return self._buffer_size
+
+ @property
+ def scope(self):
+ return self._scope
+
+ @property
+ def num_adds(self):
+ return self._num_adds
+
+ def _create_variables(self, tensors):
+ with tf.variable_scope(self._scope):
+ for name in tensors.keys():
+ tensor = tensors[name]
+ self._tensors[name] = tf.get_variable(
+ name='BufferVariable_' + name,
+ shape=[self._buffer_size] + tensor.get_shape().as_list(),
+ dtype=tensor.dtype,
+ trainable=False)
+
+ def _validate(self, tensors):
+ """Validate shapes of tensors."""
+ if len(tensors) != len(self._tensors):
+ raise ValueError('Expected tensors to have %d elements. Received %d '
+ 'instead.' % (len(self._tensors), len(tensors)))
+ if self._tensors.keys() != tensors.keys():
+      raise ValueError('The keys of tensors should always be the same. '
+                       'Received %s instead of %s.' %
+                       (tensors.keys(), self._tensors.keys()))
+ for name, tensor in tensors.items():
+ if tensor.get_shape().as_list() != self._tensors[
+ name].get_shape().as_list()[1:]:
+ raise ValueError('Tensor %s has incorrect shape.' % name)
+ if not tensor.dtype.is_compatible_with(self._tensors[name].dtype):
+ raise ValueError(
+ 'Tensor %s has incorrect data type. Expected %s, received %s' %
+ (name, self._tensors[name].read_value().dtype, tensor.dtype))
+
+ def add(self, tensors):
+ """Adds an element (list/tuple/dict of tensors) to the buffer.
+
+ Args:
+ tensors: (list/tuple/dict of tensors) to be added to the buffer.
+ Returns:
+ An add operation that adds the input `tensors` to the buffer. Similar to
+ an enqueue_op.
+ Raises:
+ ValueError: If the shapes and data types of input `tensors' are not the
+ same across calls to the add function.
+ """
+ return self.maybe_add(tensors, True)
+
+ def maybe_add(self, tensors, condition):
+ """Adds an element (tensors) to the buffer based on the condition..
+
+ Args:
+ tensors: (list/tuple of tensors) to be added to the buffer.
+ condition: A boolean Tensor controlling whether the tensors would be added
+ to the buffer or not.
+ Returns:
+ An add operation that adds the input `tensors` to the buffer. Similar to
+      a maybe_enqueue_op.
+ Raises:
+ ValueError: If the shapes and data types of input `tensors' are not the
+ same across calls to the add function.
+ """
+ if not isinstance(tensors, dict):
+ names = [str(i) for i in range(len(tensors))]
+ tensors = collections.OrderedDict(zip(names, tensors))
+ if not isinstance(tensors, collections.OrderedDict):
+ tensors = collections.OrderedDict(
+ sorted(tensors.items(), key=lambda t: t[0]))
+ if not self._tensors:
+ self._create_variables(tensors)
+ else:
+ self._validate(tensors)
+
+ #@tf.critical_section(self._position_mutex)
+ def _increment_num_adds():
+ # Adding 0 to the num_adds variable is a trick to read the value of the
+ # variable and return a read-only tensor. Doing this in a critical
+ # section allows us to capture a snapshot of the variable that will
+ # not be affected by other threads updating num_adds.
+ return self._num_adds.assign_add(1) + 0
+ def _add():
+ num_adds_inc = self._num_adds_cs.execute(_increment_num_adds)
+ current_pos = tf.mod(num_adds_inc - 1, self._buffer_size)
+ update_ops = []
+ for name in self._tensors.keys():
+ update_ops.append(
+ tf.scatter_update(self._tensors[name], current_pos, tensors[name]))
+ return tf.group(*update_ops)
+
+ return tf.contrib.framework.smart_cond(condition, _add, tf.no_op)
+
+ def get_random_batch(self, batch_size, keys=None, num_steps=1):
+ """Samples a batch of tensors from the buffer with replacement.
+
+ Args:
+ batch_size: (integer) number of elements to sample.
+ keys: List of keys of tensors to retrieve. If None retrieve all.
+ num_steps: (integer) length of trajectories to return. If > 1 will return
+ a list of lists, where each internal list represents a trajectory of
+ length num_steps.
+ Returns:
+ A list of tensors, where each element in the list is a batch sampled from
+ one of the tensors in the buffer.
+ Raises:
+ ValueError: If get_random_batch is called before calling the add function.
+ tf.errors.InvalidArgumentError: If this operation is executed before any
+ items are added to the buffer.
+ """
+ if not self._tensors:
+ raise ValueError('The add function must be called before get_random_batch.')
+ if keys is None:
+ keys = self._tensors.keys()
+
+ latest_start_index = self.get_num_adds() - num_steps + 1
+ empty_buffer_assert = tf.Assert(
+ tf.greater(latest_start_index, 0),
+ ['Not enough elements have been added to the buffer.'])
+ with tf.control_dependencies([empty_buffer_assert]):
+ max_index = tf.minimum(self._buffer_size, latest_start_index)
+ indices = tf.random_uniform(
+ [batch_size],
+ minval=0,
+ maxval=max_index,
+ dtype=tf.int64)
+ if num_steps == 1:
+ return self.gather(indices, keys)
+ else:
+ return self.gather_nstep(num_steps, indices, keys)
+
+ def gather(self, indices, keys=None):
+ """Returns elements at the specified indices from the buffer.
+
+ Args:
+ indices: (list of integers or rank 1 int Tensor) indices in the buffer to
+ retrieve elements from.
+ keys: List of keys of tensors to retrieve. If None retrieve all.
+ Returns:
+ A list of tensors, where each element in the list is obtained by indexing
+ one of the tensors in the buffer.
+ Raises:
+ ValueError: If gather is called before calling the add function.
+ tf.errors.InvalidArgumentError: If indices are bigger than the number of
+ items in the buffer.
+ """
+ if not self._tensors:
+ raise ValueError('The add function must be called before calling gather.')
+ if keys is None:
+ keys = self._tensors.keys()
+ with tf.name_scope('Gather'):
+ index_bound_assert = tf.Assert(
+ tf.less(
+ tf.to_int64(tf.reduce_max(indices)),
+ tf.minimum(self.get_num_adds(), self._buffer_size)),
+ ['Index out of bounds.'])
+ with tf.control_dependencies([index_bound_assert]):
+ indices = tf.convert_to_tensor(indices)
+
+ batch = []
+ for key in keys:
+ batch.append(tf.gather(self._tensors[key], indices, name=key))
+ return batch
+
+ def gather_nstep(self, num_steps, indices, keys=None):
+ """Returns elements at the specified indices from the buffer.
+
+ Args:
+ num_steps: (integer) length of trajectories to return.
+ indices: (list of rank num_steps int Tensor) indices in the buffer to
+ retrieve elements from for multiple trajectories. Each Tensor in the
+ list represents the indices for a trajectory.
+ keys: List of keys of tensors to retrieve. If None retrieve all.
+ Returns:
+ A list of list-of-tensors, where each element in the list is obtained by
+ indexing one of the tensors in the buffer.
+ Raises:
+ ValueError: If gather is called before calling the add function.
+ tf.errors.InvalidArgumentError: If indices are bigger than the number of
+ items in the buffer.
+ """
+ if not self._tensors:
+ raise ValueError('The add function must be called before calling gather.')
+ if keys is None:
+ keys = self._tensors.keys()
+ with tf.name_scope('Gather'):
+ index_bound_assert = tf.Assert(
+ tf.less_equal(
+ tf.to_int64(tf.reduce_max(indices) + num_steps),
+ self.get_num_adds()),
+ ['Trajectory indices go out of bounds.'])
+ with tf.control_dependencies([index_bound_assert]):
+ indices = tf.map_fn(
+ lambda x: tf.mod(tf.range(x, x + num_steps), self._buffer_size),
+ indices,
+ dtype=tf.int64)
+
+ batch = []
+ for key in keys:
+
+ def SampleTrajectories(trajectory_indices, key=key,
+ num_steps=num_steps):
+ trajectory_indices.set_shape([num_steps])
+ return tf.gather(self._tensors[key], trajectory_indices, name=key)
+
+ batch.append(tf.map_fn(SampleTrajectories, indices,
+ dtype=self._tensors[key].dtype))
+ return batch
+
+ def get_position(self):
+ """Returns the position at which the last element was added.
+
+ Returns:
+ An int tensor representing the index at which the last element was added
+ to the buffer or -1 if no elements were added.
+ """
+ return tf.cond(self.get_num_adds() < 1,
+ lambda: self.get_num_adds() - 1,
+ lambda: tf.mod(self.get_num_adds() - 1, self._buffer_size))
+
+ def get_num_adds(self):
+ """Returns the number of additions to the buffer.
+
+ Returns:
+ An int tensor representing the number of elements that were added.
+ """
+ def num_adds():
+ return self._num_adds.value()
+
+ return self._num_adds_cs.execute(num_adds)
+
+ def get_num_tensors(self):
+ """Returns the number of tensors (slots) in the buffer."""
+ return len(self._tensors)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/agents/ddpg_agent.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/agents/ddpg_agent.py"
new file mode 100644
index 00000000..904eb650
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/agents/ddpg_agent.py"
@@ -0,0 +1,739 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""A DDPG/NAF agent.
+
+Implements the Deep Deterministic Policy Gradient (DDPG) algorithm from
+"Continuous control with deep reinforcement learning" - Lillicrap et al.
+https://arxiv.org/abs/1509.02971, and the Normalized Advantage Functions (NAF)
+algorithm from "Continuous Deep Q-Learning with Model-based Acceleration" -
+Gu et al. https://arxiv.org/pdf/1603.00748.
+"""
+
+import tensorflow as tf
+slim = tf.contrib.slim
+import gin.tf
+from utils import utils
+from agents import ddpg_networks as networks
+
+
+@gin.configurable
+class DdpgAgent(object):
+ """An RL agent that learns using the DDPG algorithm.
+
+ Example usage:
+
+ def critic_net(states, actions):
+ ...
+ def actor_net(states, num_action_dims):
+ ...
+
+ Given a tensorflow environment tf_env,
+ (of type learning.deepmind.rl.environments.tensorflow.python.tfpyenvironment)
+
+ obs_spec = tf_env.observation_spec()
+ action_spec = tf_env.action_spec()
+
+ ddpg_agent = agent.DdpgAgent(obs_spec,
+ action_spec,
+ actor_net=actor_net,
+ critic_net=critic_net)
+
+ we can perform actions on the environment as follows:
+
+ state = tf_env.observations()[0]
+ action = ddpg_agent.actor_net(tf.expand_dims(state, 0))[0, :]
+ transition_type, reward, discount = tf_env.step([action])
+
+ Train:
+
+ critic_loss = ddpg_agent.critic_loss(states, actions, rewards, discounts,
+ next_states)
+ actor_loss = ddpg_agent.actor_loss(states)
+
+ critic_train_op = slim.learning.create_train_op(
+ critic_loss,
+ critic_optimizer,
+ variables_to_train=ddpg_agent.get_trainable_critic_vars(),
+ )
+
+ actor_train_op = slim.learning.create_train_op(
+ actor_loss,
+ actor_optimizer,
+ variables_to_train=ddpg_agent.get_trainable_actor_vars(),
+ )
+ """
+
+ ACTOR_NET_SCOPE = 'actor_net'
+ CRITIC_NET_SCOPE = 'critic_net'
+ TARGET_ACTOR_NET_SCOPE = 'target_actor_net'
+ TARGET_CRITIC_NET_SCOPE = 'target_critic_net'
+
+ def __init__(self,
+ observation_spec,
+ action_spec,
+ actor_net=networks.actor_net,
+ critic_net=networks.critic_net,
+ td_errors_loss=tf.losses.huber_loss,
+ dqda_clipping=0.,
+ actions_regularizer=0.,
+ target_q_clipping=None,
+ residual_phi=0.0,
+ debug_summaries=False):
+ """Constructs a DDPG agent.
+
+ Args:
+ observation_spec: A TensorSpec defining the observations.
+ action_spec: A BoundedTensorSpec defining the actions.
+ actor_net: A callable that creates the actor network. Must take the
+ following arguments: states, num_actions. Please see networks.actor_net
+ for an example.
+ critic_net: A callable that creates the critic network. Must take the
+ following arguments: states, actions. Please see networks.critic_net
+ for an example.
+ td_errors_loss: A callable defining the loss function for the critic
+ td error.
+ dqda_clipping: (float) clips the gradient dqda element-wise between
+ [-dqda_clipping, dqda_clipping]. Does not perform clipping if
+ dqda_clipping == 0.
+ actions_regularizer: A scalar, when positive penalizes the norm of the
+ actions. This can prevent saturation of actions for the actor_loss.
+ target_q_clipping: (tuple of floats) clips target q values within
+ (low, high) values when computing the critic loss.
+ residual_phi: (float) [0.0, 1.0] Residual algorithm parameter that
+ interpolates between Q-learning and residual gradient algorithm.
+ http://www.leemon.com/papers/1995b.pdf
+ debug_summaries: If True, add summaries to help debug behavior.
+ Raises:
+ ValueError: If 'dqda_clipping' is < 0.
+ """
+ self._observation_spec = observation_spec[0]
+ self._action_spec = action_spec[0]
+ self._state_shape = tf.TensorShape([None]).concatenate(
+ self._observation_spec.shape)
+ self._action_shape = tf.TensorShape([None]).concatenate(
+ self._action_spec.shape)
+ self._num_action_dims = self._action_spec.shape.num_elements()
+
+ self._scope = tf.get_variable_scope().name
+ self._actor_net = tf.make_template(
+ self.ACTOR_NET_SCOPE, actor_net, create_scope_now_=True)
+ self._critic_net = tf.make_template(
+ self.CRITIC_NET_SCOPE, critic_net, create_scope_now_=True)
+ self._target_actor_net = tf.make_template(
+ self.TARGET_ACTOR_NET_SCOPE, actor_net, create_scope_now_=True)
+ self._target_critic_net = tf.make_template(
+ self.TARGET_CRITIC_NET_SCOPE, critic_net, create_scope_now_=True)
+ self._td_errors_loss = td_errors_loss
+ if dqda_clipping < 0:
+ raise ValueError('dqda_clipping must be >= 0.')
+ self._dqda_clipping = dqda_clipping
+ self._actions_regularizer = actions_regularizer
+ self._target_q_clipping = target_q_clipping
+ self._residual_phi = residual_phi
+ self._debug_summaries = debug_summaries
+
+ def _batch_state(self, state):
+ """Convert state to a batched state.
+
+ Args:
+      state: A [num_state_dims] state tensor, or a list/tuple whose first
+        element is such a tensor.
+ Returns:
+ A tensor [1, num_state_dims]
+ """
+ if isinstance(state, (tuple, list)):
+ state = state[0]
+ if state.get_shape().ndims == 1:
+ state = tf.expand_dims(state, 0)
+ return state
+
+ def action(self, state):
+ """Returns the next action for the state.
+
+ Args:
+ state: A [num_state_dims] tensor representing a state.
+ Returns:
+ A [num_action_dims] tensor representing the action.
+ """
+ return self.actor_net(self._batch_state(state), stop_gradients=True)[0, :]
+
+ @gin.configurable('ddpg_sample_action')
+ def sample_action(self, state, stddev=1.0):
+ """Returns the action for the state with additive noise.
+
+ Args:
+ state: A [num_state_dims] tensor representing a state.
+      stddev: Standard deviation of the additive Gaussian noise.
+ Returns:
+ A [num_action_dims] action tensor.
+ """
+ agent_action = self.action(state)
+ agent_action += tf.random_normal(tf.shape(agent_action)) * stddev
+ return utils.clip_to_spec(agent_action, self._action_spec)
+
+ def actor_net(self, states, stop_gradients=False):
+ """Returns the output of the actor network.
+
+ Args:
+ states: A [batch_size, num_state_dims] tensor representing a batch
+ of states.
+      stop_gradients: (boolean) if true, gradients cannot be propagated through
+ this operation.
+ Returns:
+ A [batch_size, num_action_dims] tensor of actions.
+ Raises:
+ ValueError: If `states` does not have the expected dimensions.
+ """
+ self._validate_states(states)
+ actions = self._actor_net(states, self._action_spec)
+ if stop_gradients:
+ actions = tf.stop_gradient(actions)
+ return actions
+
+ def critic_net(self, states, actions, for_critic_loss=False):
+ """Returns the output of the critic network.
+
+ Args:
+ states: A [batch_size, num_state_dims] tensor representing a batch
+ of states.
+ actions: A [batch_size, num_action_dims] tensor representing a batch
+ of actions.
+ Returns:
+ q values: A [batch_size] tensor of q values.
+ Raises:
+ ValueError: If `states` or `actions' do not have the expected dimensions.
+ """
+ self._validate_states(states)
+ self._validate_actions(actions)
+ return self._critic_net(states, actions,
+ for_critic_loss=for_critic_loss)
+
+ def target_actor_net(self, states):
+ """Returns the output of the target actor network.
+
+ The target network is used to compute stable targets for training.
+
+ Args:
+ states: A [batch_size, num_state_dims] tensor representing a batch
+ of states.
+ Returns:
+ A [batch_size, num_action_dims] tensor of actions.
+ Raises:
+ ValueError: If `states` does not have the expected dimensions.
+ """
+ self._validate_states(states)
+ actions = self._target_actor_net(states, self._action_spec)
+ return tf.stop_gradient(actions)
+
+ def target_critic_net(self, states, actions, for_critic_loss=False):
+ """Returns the output of the target critic network.
+
+ The target network is used to compute stable targets for training.
+
+ Args:
+ states: A [batch_size, num_state_dims] tensor representing a batch
+ of states.
+ actions: A [batch_size, num_action_dims] tensor representing a batch
+ of actions.
+ Returns:
+ q values: A [batch_size] tensor of q values.
+ Raises:
+ ValueError: If `states` or `actions' do not have the expected dimensions.
+ """
+ self._validate_states(states)
+ self._validate_actions(actions)
+ return tf.stop_gradient(
+ self._target_critic_net(states, actions,
+ for_critic_loss=for_critic_loss))
+
+ def value_net(self, states, for_critic_loss=False):
+ """Returns the output of the critic evaluated with the actor.
+
+ Args:
+ states: A [batch_size, num_state_dims] tensor representing a batch
+ of states.
+ Returns:
+ q values: A [batch_size] tensor of q values.
+ """
+ actions = self.actor_net(states)
+ return self.critic_net(states, actions,
+ for_critic_loss=for_critic_loss)
+
+ def target_value_net(self, states, for_critic_loss=False):
+ """Returns the output of the target critic evaluated with the target actor.
+
+ Args:
+ states: A [batch_size, num_state_dims] tensor representing a batch
+ of states.
+ Returns:
+ q values: A [batch_size] tensor of q values.
+ """
+ target_actions = self.target_actor_net(states)
+ return self.target_critic_net(states, target_actions,
+ for_critic_loss=for_critic_loss)
+
+ def critic_loss(self, states, actions, rewards, discounts,
+ next_states):
+ """Computes a loss for training the critic network.
+
+    The loss is the td_errors_loss (a Huber loss by default) between the Q value
+    predictions of the critic and one-step TD targets.
+
+ Args:
+ states: A [batch_size, num_state_dims] tensor representing a batch
+ of states.
+ actions: A [batch_size, num_action_dims] tensor representing a batch
+ of actions.
+ rewards: A [batch_size, ...] tensor representing a batch of rewards,
+ broadcastable to the critic net output.
+ discounts: A [batch_size, ...] tensor representing a batch of discounts,
+ broadcastable to the critic net output.
+ next_states: A [batch_size, num_state_dims] tensor representing a batch
+ of next states.
+ Returns:
+ A rank-0 tensor representing the critic loss.
+ Raises:
+ ValueError: If any of the inputs do not have the expected dimensions, or
+ if their batch_sizes do not match.
+ """
+ self._validate_states(states)
+ self._validate_actions(actions)
+ self._validate_states(next_states)
+
+ target_q_values = self.target_value_net(next_states, for_critic_loss=True)
+ td_targets = target_q_values * discounts + rewards
+ if self._target_q_clipping is not None:
+ td_targets = tf.clip_by_value(td_targets, self._target_q_clipping[0],
+ self._target_q_clipping[1])
+ q_values = self.critic_net(states, actions, for_critic_loss=True)
+ td_errors = td_targets - q_values
+ if self._debug_summaries:
+ gen_debug_td_error_summaries(
+ target_q_values, q_values, td_targets, td_errors)
+
+ loss = self._td_errors_loss(td_targets, q_values)
+
+ if self._residual_phi > 0.0: # compute residual gradient loss
+ residual_q_values = self.value_net(next_states, for_critic_loss=True)
+ residual_td_targets = residual_q_values * discounts + rewards
+ if self._target_q_clipping is not None:
+ residual_td_targets = tf.clip_by_value(residual_td_targets,
+ self._target_q_clipping[0],
+ self._target_q_clipping[1])
+ residual_td_errors = residual_td_targets - q_values
+ residual_loss = self._td_errors_loss(
+ residual_td_targets, residual_q_values)
+ loss = (loss * (1.0 - self._residual_phi) +
+ residual_loss * self._residual_phi)
+ return loss
+
+ def actor_loss(self, states):
+ """Computes a loss for training the actor network.
+
+    Note that the output does not represent an actual loss. It is called a loss
+    only in the sense that its gradient w.r.t. the actor network weights is the
+    correct gradient for training the actor network,
+    i.e. dloss/dweights = (dq/da)*(da/dweights),
+    which is the gradient used in Algorithm 1 of Lillicrap et al.
+
+ Args:
+ states: A [batch_size, num_state_dims] tensor representing a batch
+ of states.
+ Returns:
+ A rank-0 tensor representing the actor loss.
+ Raises:
+ ValueError: If `states` does not have the expected dimensions.
+ """
+ self._validate_states(states)
+ actions = self.actor_net(states, stop_gradients=False)
+ critic_values = self.critic_net(states, actions)
+ q_values = self.critic_function(critic_values, states)
+ dqda = tf.gradients([q_values], [actions])[0]
+ dqda_unclipped = dqda
+ if self._dqda_clipping > 0:
+ dqda = tf.clip_by_value(dqda, -self._dqda_clipping, self._dqda_clipping)
+
+ actions_norm = tf.norm(actions)
+ if self._debug_summaries:
+ with tf.name_scope('dqda'):
+ tf.summary.scalar('actions_norm', actions_norm)
+ tf.summary.histogram('dqda', dqda)
+ tf.summary.histogram('dqda_unclipped', dqda_unclipped)
+ tf.summary.histogram('actions', actions)
+ for a in range(self._num_action_dims):
+ tf.summary.histogram('dqda_unclipped_%d' % a, dqda_unclipped[:, a])
+ tf.summary.histogram('dqda_%d' % a, dqda[:, a])
+
+ actions_norm *= self._actions_regularizer
+ return slim.losses.mean_squared_error(tf.stop_gradient(dqda + actions),
+ actions,
+ scope='actor_loss') + actions_norm
+
+ @gin.configurable('ddpg_critic_function')
+ def critic_function(self, critic_values, states, weights=None):
+ """Computes q values based on critic_net outputs, states, and weights.
+
+ Args:
+ critic_values: A tf.float32 [batch_size, ...] tensor representing outputs
+ from the critic net.
+ states: A [batch_size, num_state_dims] tensor representing a batch
+ of states.
+ weights: A list or Numpy array or tensor with a shape broadcastable to
+ `critic_values`.
+ Returns:
+ A tf.float32 [batch_size] tensor representing q values.
+ """
+ del states # unused args
+ if weights is not None:
+ weights = tf.convert_to_tensor(weights, dtype=critic_values.dtype)
+ critic_values *= weights
+ if critic_values.shape.ndims > 1:
+ critic_values = tf.reduce_sum(critic_values,
+ range(1, critic_values.shape.ndims))
+ critic_values.shape.assert_has_rank(1)
+ return critic_values
+
+ @gin.configurable('ddpg_update_targets')
+ def update_targets(self, tau=1.0):
+ """Performs a soft update of the target network parameters.
+
+ For each weight w_s in the actor/critic networks, and its corresponding
+ weight w_t in the target actor/critic networks, a soft update is:
+    w_t = (1 - tau) * w_t + tau * w_s
+
+ Args:
+ tau: A float scalar in [0, 1]
+ Returns:
+ An operation that performs a soft update of the target network parameters.
+ Raises:
+ ValueError: If `tau` is not in [0, 1].
+ """
+ if tau < 0 or tau > 1:
+ raise ValueError('Input `tau` should be in [0, 1].')
+ update_actor = utils.soft_variables_update(
+ slim.get_trainable_variables(
+ utils.join_scope(self._scope, self.ACTOR_NET_SCOPE)),
+ slim.get_trainable_variables(
+ utils.join_scope(self._scope, self.TARGET_ACTOR_NET_SCOPE)),
+ tau)
+ update_critic = utils.soft_variables_update(
+ slim.get_trainable_variables(
+ utils.join_scope(self._scope, self.CRITIC_NET_SCOPE)),
+ slim.get_trainable_variables(
+ utils.join_scope(self._scope, self.TARGET_CRITIC_NET_SCOPE)),
+ tau)
+ return tf.group(update_actor, update_critic, name='update_targets')
+
+ def get_trainable_critic_vars(self):
+ """Returns a list of trainable variables in the critic network.
+
+ Returns:
+ A list of trainable variables in the critic network.
+ """
+ return slim.get_trainable_variables(
+ utils.join_scope(self._scope, self.CRITIC_NET_SCOPE))
+
+ def get_trainable_actor_vars(self):
+ """Returns a list of trainable variables in the actor network.
+
+ Returns:
+ A list of trainable variables in the actor network.
+ """
+ return slim.get_trainable_variables(
+ utils.join_scope(self._scope, self.ACTOR_NET_SCOPE))
+
+ def get_critic_vars(self):
+ """Returns a list of all variables in the critic network.
+
+ Returns:
+      A list of all variables in the critic network.
+ """
+ return slim.get_model_variables(
+ utils.join_scope(self._scope, self.CRITIC_NET_SCOPE))
+
+ def get_actor_vars(self):
+ """Returns a list of all variables in the actor network.
+
+ Returns:
+      A list of all variables in the actor network.
+ """
+ return slim.get_model_variables(
+ utils.join_scope(self._scope, self.ACTOR_NET_SCOPE))
+
+ def _validate_states(self, states):
+ """Raises a value error if `states` does not have the expected shape.
+
+ Args:
+ states: A tensor.
+ Raises:
+ ValueError: If states.shape or states.dtype are not compatible with
+ observation_spec.
+ """
+ states.shape.assert_is_compatible_with(self._state_shape)
+ if not states.dtype.is_compatible_with(self._observation_spec.dtype):
+ raise ValueError('states.dtype={} is not compatible with'
+ ' observation_spec.dtype={}'.format(
+ states.dtype, self._observation_spec.dtype))
+
+ def _validate_actions(self, actions):
+ """Raises a value error if `actions` does not have the expected shape.
+
+ Args:
+ actions: A tensor.
+ Raises:
+ ValueError: If actions.shape or actions.dtype are not compatible with
+ action_spec.
+ """
+ actions.shape.assert_is_compatible_with(self._action_shape)
+ if not actions.dtype.is_compatible_with(self._action_spec.dtype):
+ raise ValueError('actions.dtype={} is not compatible with'
+ ' action_spec.dtype={}'.format(
+ actions.dtype, self._action_spec.dtype))
+
+
+@gin.configurable
+class TD3Agent(DdpgAgent):
+ """An RL agent that learns using the TD3 algorithm."""
+
+ ACTOR_NET_SCOPE = 'actor_net'
+ CRITIC_NET_SCOPE = 'critic_net'
+ CRITIC_NET2_SCOPE = 'critic_net2'
+ TARGET_ACTOR_NET_SCOPE = 'target_actor_net'
+ TARGET_CRITIC_NET_SCOPE = 'target_critic_net'
+ TARGET_CRITIC_NET2_SCOPE = 'target_critic_net2'
+
+ def __init__(self,
+ observation_spec,
+ action_spec,
+ actor_net=networks.actor_net,
+ critic_net=networks.critic_net,
+ td_errors_loss=tf.losses.huber_loss,
+ dqda_clipping=0.,
+ actions_regularizer=0.,
+ target_q_clipping=None,
+ residual_phi=0.0,
+ debug_summaries=False):
+ """Constructs a TD3 agent.
+
+ Args:
+ observation_spec: A TensorSpec defining the observations.
+ action_spec: A BoundedTensorSpec defining the actions.
+ actor_net: A callable that creates the actor network. Must take the
+ following arguments: states, num_actions. Please see networks.actor_net
+ for an example.
+ critic_net: A callable that creates the critic network. Must take the
+ following arguments: states, actions. Please see networks.critic_net
+ for an example.
+ td_errors_loss: A callable defining the loss function for the critic
+ td error.
+ dqda_clipping: (float) clips the gradient dqda element-wise between
+ [-dqda_clipping, dqda_clipping]. Does not perform clipping if
+ dqda_clipping == 0.
+ actions_regularizer: A scalar, when positive penalizes the norm of the
+ actions. This can prevent saturation of actions for the actor_loss.
+ target_q_clipping: (tuple of floats) clips target q values within
+ (low, high) values when computing the critic loss.
+ residual_phi: (float) [0.0, 1.0] Residual algorithm parameter that
+ interpolates between Q-learning and residual gradient algorithm.
+ http://www.leemon.com/papers/1995b.pdf
+ debug_summaries: If True, add summaries to help debug behavior.
+ Raises:
+ ValueError: If 'dqda_clipping' is < 0.
+ """
+ self._observation_spec = observation_spec[0]
+ self._action_spec = action_spec[0]
+ self._state_shape = tf.TensorShape([None]).concatenate(
+ self._observation_spec.shape)
+ self._action_shape = tf.TensorShape([None]).concatenate(
+ self._action_spec.shape)
+ self._num_action_dims = self._action_spec.shape.num_elements()
+
+ self._scope = tf.get_variable_scope().name
+ self._actor_net = tf.make_template(
+ self.ACTOR_NET_SCOPE, actor_net, create_scope_now_=True)
+ self._critic_net = tf.make_template(
+ self.CRITIC_NET_SCOPE, critic_net, create_scope_now_=True)
+ self._critic_net2 = tf.make_template(
+ self.CRITIC_NET2_SCOPE, critic_net, create_scope_now_=True)
+ self._target_actor_net = tf.make_template(
+ self.TARGET_ACTOR_NET_SCOPE, actor_net, create_scope_now_=True)
+ self._target_critic_net = tf.make_template(
+ self.TARGET_CRITIC_NET_SCOPE, critic_net, create_scope_now_=True)
+ self._target_critic_net2 = tf.make_template(
+ self.TARGET_CRITIC_NET2_SCOPE, critic_net, create_scope_now_=True)
+ self._td_errors_loss = td_errors_loss
+ if dqda_clipping < 0:
+ raise ValueError('dqda_clipping must be >= 0.')
+ self._dqda_clipping = dqda_clipping
+ self._actions_regularizer = actions_regularizer
+ self._target_q_clipping = target_q_clipping
+ self._residual_phi = residual_phi
+ self._debug_summaries = debug_summaries
+
+ def get_trainable_critic_vars(self):
+ """Returns a list of trainable variables in the critic network.
+ NOTE: This gets the vars of both critic networks.
+
+ Returns:
+ A list of trainable variables in the critic network.
+ """
+ return (
+ slim.get_trainable_variables(
+ utils.join_scope(self._scope, self.CRITIC_NET_SCOPE)))
+
+ def critic_net(self, states, actions, for_critic_loss=False):
+ """Returns the output of the critic network.
+
+ Args:
+ states: A [batch_size, num_state_dims] tensor representing a batch
+ of states.
+ actions: A [batch_size, num_action_dims] tensor representing a batch
+ of actions.
+ Returns:
+ q values: A [batch_size] tensor of q values.
+ Raises:
+      ValueError: If `states` or `actions` do not have the expected dimensions.
+ """
+    self._validate_states(states)
+    self._validate_actions(actions)
+    values1 = self._critic_net(states, actions,
+                               for_critic_loss=for_critic_loss)
+ values2 = self._critic_net2(states, actions,
+ for_critic_loss=for_critic_loss)
+ if for_critic_loss:
+ return values1, values2
+ return values1
+
+ def target_critic_net(self, states, actions, for_critic_loss=False):
+ """Returns the output of the target critic network.
+
+ The target network is used to compute stable targets for training.
+
+ Args:
+ states: A [batch_size, num_state_dims] tensor representing a batch
+ of states.
+ actions: A [batch_size, num_action_dims] tensor representing a batch
+ of actions.
+ Returns:
+ q values: A [batch_size] tensor of q values.
+ Raises:
+      ValueError: If `states` or `actions` do not have the expected dimensions.
+ """
+ self._validate_states(states)
+ self._validate_actions(actions)
+ values1 = tf.stop_gradient(
+ self._target_critic_net(states, actions,
+ for_critic_loss=for_critic_loss))
+ values2 = tf.stop_gradient(
+ self._target_critic_net2(states, actions,
+ for_critic_loss=for_critic_loss))
+ if for_critic_loss:
+ return values1, values2
+ return values1
+
+ def value_net(self, states, for_critic_loss=False):
+ """Returns the output of the critic evaluated with the actor.
+
+ Args:
+ states: A [batch_size, num_state_dims] tensor representing a batch
+ of states.
+ Returns:
+ q values: A [batch_size] tensor of q values.
+ """
+ actions = self.actor_net(states)
+ return self.critic_net(states, actions,
+ for_critic_loss=for_critic_loss)
+
+ def target_value_net(self, states, for_critic_loss=False):
+ """Returns the output of the target critic evaluated with the target actor.
+
+ Args:
+ states: A [batch_size, num_state_dims] tensor representing a batch
+ of states.
+ Returns:
+ q values: A [batch_size] tensor of q values.
+ """
+ target_actions = self.target_actor_net(states)
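+    # Target policy smoothing (TD3): perturb the target action with clipped
+    # Gaussian noise before evaluating the target critics.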
+ noise = tf.clip_by_value(
+ tf.random_normal(tf.shape(target_actions), stddev=0.2), -0.5, 0.5)
+ values1, values2 = self.target_critic_net(
+ states, target_actions + noise,
+ for_critic_loss=for_critic_loss)
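+    # Clipped double-Q learning (TD3): take the element-wise minimum of the
+    # two target critics to reduce overestimation bias.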
+ values = tf.minimum(values1, values2)
+ return values, values
+
+ @gin.configurable('td3_update_targets')
+ def update_targets(self, tau=1.0):
+ """Performs a soft update of the target network parameters.
+
+ For each weight w_s in the actor/critic networks, and its corresponding
+ weight w_t in the target actor/critic networks, a soft update is:
+      w_t = (1 - tau) * w_t + tau * w_s
+
+ Args:
+ tau: A float scalar in [0, 1]
+ Returns:
+ An operation that performs a soft update of the target network parameters.
+ Raises:
+ ValueError: If `tau` is not in [0, 1].
+ """
+ if tau < 0 or tau > 1:
+ raise ValueError('Input `tau` should be in [0, 1].')
+ update_actor = utils.soft_variables_update(
+ slim.get_trainable_variables(
+ utils.join_scope(self._scope, self.ACTOR_NET_SCOPE)),
+ slim.get_trainable_variables(
+ utils.join_scope(self._scope, self.TARGET_ACTOR_NET_SCOPE)),
+ tau)
+ # NOTE: This updates both critic networks.
+ update_critic = utils.soft_variables_update(
+ slim.get_trainable_variables(
+ utils.join_scope(self._scope, self.CRITIC_NET_SCOPE)),
+ slim.get_trainable_variables(
+ utils.join_scope(self._scope, self.TARGET_CRITIC_NET_SCOPE)),
+ tau)
+ return tf.group(update_actor, update_critic, name='update_targets')
+
+
+def gen_debug_td_error_summaries(
+ target_q_values, q_values, td_targets, td_errors):
+ """Generates debug summaries for critic given a set of batch samples.
+
+ Args:
+    target_q_values: predicted Q values for the next step.
+    q_values: current predicted Q values from the critic network.
+    td_targets: discounted target_q_values plus the current reward.
+    td_errors: the difference between td_targets and q_values.
+ """
+ with tf.name_scope('td_errors'):
+ tf.summary.histogram('td_targets', td_targets)
+ tf.summary.histogram('q_values', q_values)
+ tf.summary.histogram('target_q_values', target_q_values)
+ tf.summary.histogram('td_errors', td_errors)
+ with tf.name_scope('td_targets'):
+ tf.summary.scalar('mean', tf.reduce_mean(td_targets))
+ tf.summary.scalar('max', tf.reduce_max(td_targets))
+ tf.summary.scalar('min', tf.reduce_min(td_targets))
+ with tf.name_scope('q_values'):
+ tf.summary.scalar('mean', tf.reduce_mean(q_values))
+ tf.summary.scalar('max', tf.reduce_max(q_values))
+ tf.summary.scalar('min', tf.reduce_min(q_values))
+ with tf.name_scope('target_q_values'):
+ tf.summary.scalar('mean', tf.reduce_mean(target_q_values))
+ tf.summary.scalar('max', tf.reduce_max(target_q_values))
+ tf.summary.scalar('min', tf.reduce_min(target_q_values))
+ with tf.name_scope('td_errors'):
+ tf.summary.scalar('mean', tf.reduce_mean(td_errors))
+ tf.summary.scalar('max', tf.reduce_max(td_errors))
+ tf.summary.scalar('min', tf.reduce_min(td_errors))
+ tf.summary.scalar('mean_abs', tf.reduce_mean(tf.abs(td_errors)))
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/agents/ddpg_networks.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/agents/ddpg_networks.py"
new file mode 100644
index 00000000..63074dfb
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/agents/ddpg_networks.py"
@@ -0,0 +1,150 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Sample actor(policy) and critic(q) networks to use with DDPG/NAF agents.
+
+The DDPG networks are defined in "Section 7: Experiment Details" of
+"Continuous control with deep reinforcement learning" - Lilicrap et al.
+https://arxiv.org/abs/1509.02971
+
+The NAF critic network is based on "Section 4" of "Continuous deep Q-learning
+with model-based acceleration" - Gu et al. https://arxiv.org/pdf/1603.00748.
+"""
+
+import gin.tf
+import tensorflow as tf
+
+slim = tf.contrib.slim
+
+
+@gin.configurable('ddpg_critic_net')
+def critic_net(states, actions,
+ for_critic_loss=False,
+ num_reward_dims=1,
+ states_hidden_layers=(400,),
+ actions_hidden_layers=None,
+ joint_hidden_layers=(300,),
+ weight_decay=0.0001,
+ normalizer_fn=None,
+ activation_fn=tf.nn.relu,
+ zero_obs=False,
+ images=False):
+ """Creates a critic that returns q values for the given states and actions.
+
+ Args:
+ states: (castable to tf.float32) a [batch_size, num_state_dims] tensor
+ representing a batch of states.
+ actions: (castable to tf.float32) a [batch_size, num_action_dims] tensor
+ representing a batch of actions.
+    for_critic_loss: If True and num_reward_dims > 1, the raw per-dimension
+      q values are returned instead of the weighted scalar value.
+    num_reward_dims: Number of reward dimensions.
+    states_hidden_layers: tuple of hidden layers units for states.
+    actions_hidden_layers: tuple of hidden layers units for actions.
+    joint_hidden_layers: tuple of hidden layers units after joining states
+      and actions using tf.concat().
+    weight_decay: Weight decay for l2 weights regularizer.
+    normalizer_fn: Normalizer function, e.g. slim.layer_norm.
+    activation_fn: Activation function, e.g. tf.nn.relu, slim.leaky_relu, ...
+    zero_obs: If True, zero out the first two state dimensions.
+    images: If True, also zero out the first two state dimensions.
+ Returns:
+ A tf.float32 [batch_size] tensor of q values, or a tf.float32
+ [batch_size, num_reward_dims] tensor of vector q values if
+ num_reward_dims > 1.
+ """
+ with slim.arg_scope(
+ [slim.fully_connected],
+ activation_fn=activation_fn,
+ normalizer_fn=normalizer_fn,
+ weights_regularizer=slim.l2_regularizer(weight_decay),
+ weights_initializer=slim.variance_scaling_initializer(
+ factor=1.0/3.0, mode='FAN_IN', uniform=True)):
+
+    orig_states = tf.to_float(states)
+    # TD3-style critic input: concatenate the actions onto the states so the
+    # state tower sees the full (state, action) pair.
+    states = tf.concat([tf.to_float(states), tf.to_float(actions)], -1)
+    if images or zero_obs:
+      # Zero out the first two state dimensions (the x, y position). Hacky.
+      states *= tf.constant([0.0] * 2 + [1.0] * (states.shape[1] - 2))
+ actions = tf.to_float(actions)
+ if states_hidden_layers:
+ states = slim.stack(states, slim.fully_connected, states_hidden_layers,
+ scope='states')
+ if actions_hidden_layers:
+ actions = slim.stack(actions, slim.fully_connected, actions_hidden_layers,
+ scope='actions')
+ joint = tf.concat([states, actions], 1)
+ if joint_hidden_layers:
+ joint = slim.stack(joint, slim.fully_connected, joint_hidden_layers,
+ scope='joint')
+ with slim.arg_scope([slim.fully_connected],
+ weights_regularizer=None,
+ weights_initializer=tf.random_uniform_initializer(
+ minval=-0.003, maxval=0.003)):
+ value = slim.fully_connected(joint, num_reward_dims,
+ activation_fn=None,
+ normalizer_fn=None,
+ scope='q_value')
+ if num_reward_dims == 1:
+ value = tf.reshape(value, [-1])
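+  # For vector-valued rewards, callers outside the critic loss receive a
+  # scalar value: each reward dimension is weighted by the absolute values of
+  # the last num_reward_dims entries of the original state.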
+ if not for_critic_loss and num_reward_dims > 1:
+ value = tf.reduce_sum(
+ value * tf.abs(orig_states[:, -num_reward_dims:]), -1)
+ return value
+
+
+@gin.configurable('ddpg_actor_net')
+def actor_net(states, action_spec,
+ hidden_layers=(400, 300),
+ normalizer_fn=None,
+ activation_fn=tf.nn.relu,
+ zero_obs=False,
+ images=False):
+ """Creates an actor that returns actions for the given states.
+
+ Args:
+ states: (castable to tf.float32) a [batch_size, num_state_dims] tensor
+ representing a batch of states.
+ action_spec: (BoundedTensorSpec) A tensor spec indicating the shape
+ and range of actions.
+ hidden_layers: tuple of hidden layers units.
+    normalizer_fn: Normalizer function, e.g. slim.layer_norm.
+    activation_fn: Activation function, e.g. tf.nn.relu, slim.leaky_relu, ...
+    zero_obs: If True, zero out the first two state dimensions (x, y position).
+    images: If True, also zero out the first two state dimensions.
+ Returns:
+ A tf.float32 [batch_size, num_action_dims] tensor of actions.
+ """
+
+ with slim.arg_scope(
+ [slim.fully_connected],
+ activation_fn=activation_fn,
+ normalizer_fn=normalizer_fn,
+ weights_initializer=slim.variance_scaling_initializer(
+ factor=1.0/3.0, mode='FAN_IN', uniform=True)):
+
+ states = tf.to_float(states)
+ orig_states = states
+ if images or zero_obs: # Zero-out x, y position. Hacky.
+ states *= tf.constant([0.0] * 2 + [1.0] * (states.shape[1] - 2))
+ if hidden_layers:
+ states = slim.stack(states, slim.fully_connected, hidden_layers,
+ scope='states')
+ with slim.arg_scope([slim.fully_connected],
+ weights_initializer=tf.random_uniform_initializer(
+ minval=-0.003, maxval=0.003)):
+ actions = slim.fully_connected(states,
+ action_spec.shape.num_elements(),
+ scope='actions',
+ normalizer_fn=None,
+ activation_fn=tf.nn.tanh)
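+  # Rescale the tanh output from [-1, 1] to the range defined by
+  # [action_spec.minimum, action_spec.maximum].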
+ action_means = (action_spec.maximum + action_spec.minimum) / 2.0
+ action_magnitudes = (action_spec.maximum - action_spec.minimum) / 2.0
+ actions = action_means + action_magnitudes * actions
+
+ return actions
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/cond_fn.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/cond_fn.py"
new file mode 100644
index 00000000..cd1a276e
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/cond_fn.py"
@@ -0,0 +1,244 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Defines many boolean functions indicating when to step and reset.
+"""
+
+import tensorflow as tf
+import gin.tf
+
+
+@gin.configurable
+def env_transition(agent, state, action, transition_type, environment_steps,
+ num_episodes):
+ """True if the transition_type is TRANSITION or FINAL_TRANSITION.
+
+ Args:
+ agent: RL agent.
+ state: A [num_state_dims] tensor representing a state.
+ action: Action performed.
+ transition_type: Type of transition after action
+ environment_steps: Number of steps performed by environment.
+ num_episodes: Number of episodes.
+ Returns:
+ cond: Returns an op that evaluates to true if the transition type is
+ not RESTARTING
+ """
+ del agent, state, action, num_episodes, environment_steps
+ cond = tf.logical_not(transition_type)
+ return cond
+
+
+@gin.configurable
+def env_restart(agent, state, action, transition_type, environment_steps,
+ num_episodes):
+ """True if the transition_type is RESTARTING.
+
+ Args:
+ agent: RL agent.
+ state: A [num_state_dims] tensor representing a state.
+ action: Action performed.
+ transition_type: Type of transition after action
+ environment_steps: Number of steps performed by environment.
+ num_episodes: Number of episodes.
+ Returns:
+ cond: Returns an op that evaluates to true if the transition type equals
+ RESTARTING.
+ """
+ del agent, state, action, num_episodes, environment_steps
+ cond = tf.identity(transition_type)
+ return cond
+
+
+@gin.configurable
+def every_n_steps(agent,
+ state,
+ action,
+ transition_type,
+ environment_steps,
+ num_episodes,
+ n=150):
+ """True once every n steps.
+
+ Args:
+ agent: RL agent.
+ state: A [num_state_dims] tensor representing a state.
+ action: Action performed.
+ transition_type: Type of transition after action
+ environment_steps: Number of steps performed by environment.
+ num_episodes: Number of episodes.
+ n: Return true once every n steps.
+ Returns:
+ cond: Returns an op that evaluates to true if environment_steps
+ equals 0 mod n. We increment the step before checking this condition, so
+ we do not need to add one to environment_steps.
+ """
+ del agent, state, action, transition_type, num_episodes
+ cond = tf.equal(tf.mod(environment_steps, n), 0)
+ return cond
+
+
+@gin.configurable
+def every_n_episodes(agent,
+ state,
+ action,
+ transition_type,
+ environment_steps,
+ num_episodes,
+ n=2,
+ steps_per_episode=None):
+ """True once every n episodes.
+
+ Specifically, evaluates to True on the 0th step of every nth episode.
+ Unlike environment_steps, num_episodes starts at 0, so we do want to add
+ one to ensure it does not reset on the first call.
+
+ Args:
+ agent: RL agent.
+ state: A [num_state_dims] tensor representing a state.
+ action: Action performed.
+ transition_type: Type of transition after action
+ environment_steps: Number of steps performed by environment.
+ num_episodes: Number of episodes.
+ n: Return true once every n episodes.
+ steps_per_episode: How many steps per episode. Needed to determine when a
+ new episode starts.
+ Returns:
+    cond: Returns an op that evaluates to true on the last step of the episode
+      if num_episodes + 1 equals 0 mod n, or if the ant has fallen.
+ """
+ assert steps_per_episode is not None
+ del agent, action, transition_type
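+  # Treat the ant as fallen if its torso height (state[2]) leaves [0.2, 1.0];
+  # a fallen ant also triggers the reset at the episode boundary.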
+ ant_fell = tf.logical_or(state[2] < 0.2, state[2] > 1.0)
+ cond = tf.logical_and(
+ tf.logical_or(
+ ant_fell,
+ tf.equal(tf.mod(num_episodes + 1, n), 0)),
+ tf.equal(tf.mod(environment_steps, steps_per_episode), 0))
+ return cond
+
+
+@gin.configurable
+def failed_reset_after_n_episodes(agent,
+ state,
+ action,
+ transition_type,
+ environment_steps,
+ num_episodes,
+ steps_per_episode=None,
+ reset_state=None,
+ max_dist=1.0,
+ epsilon=1e-10):
+ """Every n episodes, returns True if the reset agent fails to return.
+
+ Specifically, evaluates to True if the distance between the state and the
+ reset state is greater than max_dist at the end of the episode.
+
+ Args:
+ agent: RL agent.
+ state: A [num_state_dims] tensor representing a state.
+ action: Action performed.
+ transition_type: Type of transition after action
+ environment_steps: Number of steps performed by environment.
+ num_episodes: Number of episodes.
+ steps_per_episode: How many steps per episode. Needed to determine when a
+ new episode starts.
+ reset_state: State to which the reset controller should return.
+ max_dist: Agent is considered to have successfully reset if its distance
+ from the reset_state is less than max_dist.
+ epsilon: small offset to ensure non-negative/zero distance.
+ Returns:
+    cond: Returns an op that evaluates to true if, at the end of an episode,
+      the distance between the state and reset_state is greater than max_dist.
+ """
+ assert steps_per_episode is not None
+ assert reset_state is not None
+  del agent, action, transition_type, num_episodes
+ dist = tf.sqrt(
+ tf.reduce_sum(tf.squared_difference(state, reset_state)) + epsilon)
+ cond = tf.logical_and(
+ tf.greater(dist, tf.constant(max_dist)),
+ tf.equal(tf.mod(environment_steps, steps_per_episode), 0))
+ return cond
+
+
+@gin.configurable
+def q_too_small(agent,
+ state,
+ action,
+ transition_type,
+ environment_steps,
+ num_episodes,
+ q_min=0.5):
+ """True of q is too small.
+
+ Args:
+ agent: RL agent.
+ state: A [num_state_dims] tensor representing a state.
+ action: Action performed.
+ transition_type: Type of transition after action
+ environment_steps: Number of steps performed by environment.
+ num_episodes: Number of episodes.
+    q_min: Threshold; the condition is true if the q value is less than q_min.
+ Returns:
+ cond: Returns an op that evaluates to true if qval is less than q_min.
+ """
+ del transition_type, environment_steps, num_episodes
+  # Reset-agent state: the current state with its last entry zeroed out.
+  state_for_reset_agent = tf.concat(
+      [state[:-1], tf.zeros([1], dtype=state.dtype)], axis=0)
+ qval = agent.BASE_AGENT_CLASS.critic_net(
+ tf.expand_dims(state_for_reset_agent, 0), tf.expand_dims(action, 0))[0, :]
+ cond = tf.greater(tf.constant(q_min), qval)
+ return cond
+
+
+@gin.configurable
+def true_fn(agent, state, action, transition_type, environment_steps,
+ num_episodes):
+ """Returns an op that evaluates to true.
+
+ Args:
+ agent: RL agent.
+ state: A [num_state_dims] tensor representing a state.
+ action: Action performed.
+ transition_type: Type of transition after action
+ environment_steps: Number of steps performed by environment.
+ num_episodes: Number of episodes.
+ Returns:
+ cond: op that always evaluates to True.
+ """
+ del agent, state, action, transition_type, environment_steps, num_episodes
+ cond = tf.constant(True, dtype=tf.bool)
+ return cond
+
+
+@gin.configurable
+def false_fn(agent, state, action, transition_type, environment_steps,
+ num_episodes):
+ """Returns an op that evaluates to false.
+
+ Args:
+ agent: RL agent.
+ state: A [num_state_dims] tensor representing a state.
+ action: Action performed.
+ transition_type: Type of transition after action
+ environment_steps: Number of steps performed by environment.
+ num_episodes: Number of episodes.
+ Returns:
+ cond: op that always evaluates to False.
+ """
+ del agent, state, action, transition_type, environment_steps, num_episodes
+ cond = tf.constant(False, dtype=tf.bool)
+ return cond
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/configs/base_uvf.gin" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/configs/base_uvf.gin"
new file mode 100644
index 00000000..2f3f47b6
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/configs/base_uvf.gin"
@@ -0,0 +1,68 @@
+#-*-Python-*-
+import gin.tf.external_configurables
+
+create_maze_env.top_down_view = %IMAGES
+## Create the agent
+AGENT_CLASS = @UvfAgent
+UvfAgent.tf_context = %CONTEXT
+UvfAgent.actor_net = @agent/ddpg_actor_net
+UvfAgent.critic_net = @agent/ddpg_critic_net
+UvfAgent.dqda_clipping = 0.0
+UvfAgent.td_errors_loss = @tf.losses.huber_loss
+UvfAgent.target_q_clipping = %TARGET_Q_CLIPPING
+
+# Create meta agent
+META_CLASS = @MetaAgent
+MetaAgent.tf_context = %META_CONTEXT
+MetaAgent.sub_context = %CONTEXT
+MetaAgent.actor_net = @meta/ddpg_actor_net
+MetaAgent.critic_net = @meta/ddpg_critic_net
+MetaAgent.dqda_clipping = 0.0
+MetaAgent.td_errors_loss = @tf.losses.huber_loss
+MetaAgent.target_q_clipping = %TARGET_Q_CLIPPING
+
+# Create state preprocess
+STATE_PREPROCESS_CLASS = @StatePreprocess
+StatePreprocess.ndims = %SUBGOAL_DIM
+state_preprocess_net.states_hidden_layers = (100, 100)
+state_preprocess_net.num_output_dims = %SUBGOAL_DIM
+state_preprocess_net.images = %IMAGES
+action_embed_net.num_output_dims = %SUBGOAL_DIM
+INVERSE_DYNAMICS_CLASS = @InverseDynamics
+
+# actor_net
+ACTOR_HIDDEN_SIZE_1 = 300
+ACTOR_HIDDEN_SIZE_2 = 300
+agent/ddpg_actor_net.hidden_layers = (%ACTOR_HIDDEN_SIZE_1, %ACTOR_HIDDEN_SIZE_2)
+agent/ddpg_actor_net.activation_fn = @tf.nn.relu
+agent/ddpg_actor_net.zero_obs = %ZERO_OBS
+agent/ddpg_actor_net.images = %IMAGES
+meta/ddpg_actor_net.hidden_layers = (%ACTOR_HIDDEN_SIZE_1, %ACTOR_HIDDEN_SIZE_2)
+meta/ddpg_actor_net.activation_fn = @tf.nn.relu
+meta/ddpg_actor_net.zero_obs = False
+meta/ddpg_actor_net.images = %IMAGES
+# critic_net
+CRITIC_HIDDEN_SIZE_1 = 300
+CRITIC_HIDDEN_SIZE_2 = 300
+agent/ddpg_critic_net.states_hidden_layers = (%CRITIC_HIDDEN_SIZE_1,)
+agent/ddpg_critic_net.actions_hidden_layers = None
+agent/ddpg_critic_net.joint_hidden_layers = (%CRITIC_HIDDEN_SIZE_2,)
+agent/ddpg_critic_net.weight_decay = 0.0
+agent/ddpg_critic_net.activation_fn = @tf.nn.relu
+agent/ddpg_critic_net.zero_obs = %ZERO_OBS
+agent/ddpg_critic_net.images = %IMAGES
+meta/ddpg_critic_net.states_hidden_layers = (%CRITIC_HIDDEN_SIZE_1,)
+meta/ddpg_critic_net.actions_hidden_layers = None
+meta/ddpg_critic_net.joint_hidden_layers = (%CRITIC_HIDDEN_SIZE_2,)
+meta/ddpg_critic_net.weight_decay = 0.0
+meta/ddpg_critic_net.activation_fn = @tf.nn.relu
+meta/ddpg_critic_net.zero_obs = False
+meta/ddpg_critic_net.images = %IMAGES
+
+tf.losses.huber_loss.delta = 1.0
+# Sample action
+uvf_add_noise_fn.stddev = 1.0
+meta_add_noise_fn.stddev = %META_EXPLORE_NOISE
+# Update targets
+ddpg_update_targets.tau = 0.001
+td3_update_targets.tau = 0.005
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/configs/eval_uvf.gin" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/configs/eval_uvf.gin"
new file mode 100644
index 00000000..7a58241e
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/configs/eval_uvf.gin"
@@ -0,0 +1,14 @@
+#-*-Python-*-
+# Config eval
+evaluate.environment = @create_maze_env()
+evaluate.agent_class = %AGENT_CLASS
+evaluate.meta_agent_class = %META_CLASS
+evaluate.state_preprocess_class = %STATE_PREPROCESS_CLASS
+evaluate.num_episodes_eval = 50
+evaluate.num_episodes_videos = 1
+evaluate.gamma = 1.0
+evaluate.eval_interval_secs = 1
+evaluate.generate_videos = False
+evaluate.generate_summaries = True
+evaluate.eval_modes = %EVAL_MODES
+evaluate.max_steps_per_episode = %RESET_EPISODE_PERIOD
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/configs/train_uvf.gin" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/configs/train_uvf.gin"
new file mode 100644
index 00000000..8b02d7a6
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/configs/train_uvf.gin"
@@ -0,0 +1,52 @@
+#-*-Python-*-
+# Create replay_buffer
+agent/CircularBuffer.buffer_size = 200000
+meta/CircularBuffer.buffer_size = 200000
+agent/CircularBuffer.scope = "agent"
+meta/CircularBuffer.scope = "meta"
+
+# Config train
+train_uvf.environment = @create_maze_env()
+train_uvf.agent_class = %AGENT_CLASS
+train_uvf.meta_agent_class = %META_CLASS
+train_uvf.state_preprocess_class = %STATE_PREPROCESS_CLASS
+train_uvf.inverse_dynamics_class = %INVERSE_DYNAMICS_CLASS
+train_uvf.replay_buffer = @agent/CircularBuffer()
+train_uvf.meta_replay_buffer = @meta/CircularBuffer()
+train_uvf.critic_optimizer = @critic/AdamOptimizer()
+train_uvf.actor_optimizer = @actor/AdamOptimizer()
+train_uvf.meta_critic_optimizer = @meta_critic/AdamOptimizer()
+train_uvf.meta_actor_optimizer = @meta_actor/AdamOptimizer()
+train_uvf.repr_optimizer = @repr/AdamOptimizer()
+train_uvf.num_episodes_train = 25000
+train_uvf.batch_size = 100
+train_uvf.initial_episodes = 5
+train_uvf.gamma = 0.99
+train_uvf.meta_gamma = 0.99
+train_uvf.reward_scale_factor = 1.0
+train_uvf.target_update_period = 2
+train_uvf.num_updates_per_observation = 1
+train_uvf.num_collect_per_update = 1
+train_uvf.num_collect_per_meta_update = 10
+train_uvf.debug_summaries = False
+train_uvf.log_every_n_steps = 1000
+train_uvf.save_policy_every_n_steps = 100000
+
+# Config Optimizers
+critic/AdamOptimizer.learning_rate = 0.001
+critic/AdamOptimizer.beta1 = 0.9
+critic/AdamOptimizer.beta2 = 0.999
+actor/AdamOptimizer.learning_rate = 0.0001
+actor/AdamOptimizer.beta1 = 0.9
+actor/AdamOptimizer.beta2 = 0.999
+
+meta_critic/AdamOptimizer.learning_rate = 0.001
+meta_critic/AdamOptimizer.beta1 = 0.9
+meta_critic/AdamOptimizer.beta2 = 0.999
+meta_actor/AdamOptimizer.learning_rate = 0.0001
+meta_actor/AdamOptimizer.beta1 = 0.9
+meta_actor/AdamOptimizer.beta2 = 0.999
+
+repr/AdamOptimizer.learning_rate = 0.0001
+repr/AdamOptimizer.beta1 = 0.9
+repr/AdamOptimizer.beta2 = 0.999
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/__init__.py"
new file mode 100644
index 00000000..8b137891
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/__init__.py"
@@ -0,0 +1 @@
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_block.gin" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_block.gin"
new file mode 100644
index 00000000..d5bd4f01
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_block.gin"
@@ -0,0 +1,67 @@
+#-*-Python-*-
+create_maze_env.env_name = "AntBlock"
+ZERO_OBS = False
+context_range = (%CONTEXT_RANGE_MIN, %CONTEXT_RANGE_MAX)
+meta_context_range = ((-4, -4), (20, 20))
+
+RESET_EPISODE_PERIOD = 500
+RESET_ENV_PERIOD = 1
+# End episode every N steps
+UvfAgent.reset_episode_cond_fn = @every_n_steps
+every_n_steps.n = %RESET_EPISODE_PERIOD
+train_uvf.max_steps_per_episode = %RESET_EPISODE_PERIOD
+# Do a manual reset every N episodes
+UvfAgent.reset_env_cond_fn = @every_n_episodes
+every_n_episodes.n = %RESET_ENV_PERIOD
+every_n_episodes.steps_per_episode = %RESET_EPISODE_PERIOD
+
+## Config defaults
+EVAL_MODES = ["eval1", "eval2", "eval3"]
+
+## Config agent
+CONTEXT = @agent/Context
+META_CONTEXT = @meta/Context
+
+## Config agent context
+agent/Context.context_ranges = [%context_range]
+agent/Context.context_shapes = [%SUBGOAL_DIM]
+agent/Context.meta_action_every_n = 10
+agent/Context.samplers = {
+ "train": [@train/DirectionSampler],
+ "explore": [@train/DirectionSampler],
+}
+
+agent/Context.context_transition_fn = @relative_context_transition_fn
+agent/Context.context_multi_transition_fn = @relative_context_multi_transition_fn
+
+agent/Context.reward_fn = @uvf/negative_distance
+
+## Config meta context
+meta/Context.context_ranges = [%meta_context_range]
+meta/Context.context_shapes = [2]
+meta/Context.samplers = {
+ "train": [@train/RandomSampler],
+ "explore": [@train/RandomSampler],
+ "eval1": [@eval1/ConstantSampler],
+ "eval2": [@eval2/ConstantSampler],
+ "eval3": [@eval3/ConstantSampler],
+}
+meta/Context.reward_fn = @task/negative_distance
+
+## Config rewards
+task/negative_distance.state_indices = [3, 4]
+task/negative_distance.relative_context = False
+task/negative_distance.diff = False
+task/negative_distance.offset = 0.0
+
+## Config samplers
+train/RandomSampler.context_range = %meta_context_range
+train/DirectionSampler.context_range = %context_range
+train/DirectionSampler.k = %SUBGOAL_DIM
+relative_context_transition_fn.k = %SUBGOAL_DIM
+relative_context_multi_transition_fn.k = %SUBGOAL_DIM
+MetaAgent.k = %SUBGOAL_DIM
+
+eval1/ConstantSampler.value = [16, 0]
+eval2/ConstantSampler.value = [16, 16]
+eval3/ConstantSampler.value = [0, 16]
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_block_maze.gin" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_block_maze.gin"
new file mode 100644
index 00000000..cebf775b
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_block_maze.gin"
@@ -0,0 +1,67 @@
+#-*-Python-*-
+create_maze_env.env_name = "AntBlockMaze"
+ZERO_OBS = False
+context_range = (%CONTEXT_RANGE_MIN, %CONTEXT_RANGE_MAX)
+meta_context_range = ((-4, -4), (12, 20))
+
+RESET_EPISODE_PERIOD = 500
+RESET_ENV_PERIOD = 1
+# End episode every N steps
+UvfAgent.reset_episode_cond_fn = @every_n_steps
+every_n_steps.n = %RESET_EPISODE_PERIOD
+train_uvf.max_steps_per_episode = %RESET_EPISODE_PERIOD
+# Do a manual reset every N episodes
+UvfAgent.reset_env_cond_fn = @every_n_episodes
+every_n_episodes.n = %RESET_ENV_PERIOD
+every_n_episodes.steps_per_episode = %RESET_EPISODE_PERIOD
+
+## Config defaults
+EVAL_MODES = ["eval1", "eval2", "eval3"]
+
+## Config agent
+CONTEXT = @agent/Context
+META_CONTEXT = @meta/Context
+
+## Config agent context
+agent/Context.context_ranges = [%context_range]
+agent/Context.context_shapes = [%SUBGOAL_DIM]
+agent/Context.meta_action_every_n = 10
+agent/Context.samplers = {
+ "train": [@train/DirectionSampler],
+ "explore": [@train/DirectionSampler],
+}
+
+agent/Context.context_transition_fn = @relative_context_transition_fn
+agent/Context.context_multi_transition_fn = @relative_context_multi_transition_fn
+
+agent/Context.reward_fn = @uvf/negative_distance
+
+## Config meta context
+meta/Context.context_ranges = [%meta_context_range]
+meta/Context.context_shapes = [2]
+meta/Context.samplers = {
+ "train": [@train/RandomSampler],
+ "explore": [@train/RandomSampler],
+ "eval1": [@eval1/ConstantSampler],
+ "eval2": [@eval2/ConstantSampler],
+ "eval3": [@eval3/ConstantSampler],
+}
+meta/Context.reward_fn = @task/negative_distance
+
+## Config rewards
+task/negative_distance.state_indices = [3, 4]
+task/negative_distance.relative_context = False
+task/negative_distance.diff = False
+task/negative_distance.offset = 0.0
+
+## Config samplers
+train/RandomSampler.context_range = %meta_context_range
+train/DirectionSampler.context_range = %context_range
+train/DirectionSampler.k = %SUBGOAL_DIM
+relative_context_transition_fn.k = %SUBGOAL_DIM
+relative_context_multi_transition_fn.k = %SUBGOAL_DIM
+MetaAgent.k = %SUBGOAL_DIM
+
+eval1/ConstantSampler.value = [8, 0]
+eval2/ConstantSampler.value = [8, 16]
+eval3/ConstantSampler.value = [0, 16]
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_fall_multi.gin" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_fall_multi.gin"
new file mode 100644
index 00000000..eb89ad0c
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_fall_multi.gin"
@@ -0,0 +1,62 @@
+#-*-Python-*-
+create_maze_env.env_name = "AntFall"
+context_range = (%CONTEXT_RANGE_MIN, %CONTEXT_RANGE_MAX)
+meta_context_range = ((-4, -4, 0), (12, 28, 5))
+
+RESET_EPISODE_PERIOD = 500
+RESET_ENV_PERIOD = 1
+# End episode every N steps
+UvfAgent.reset_episode_cond_fn = @every_n_steps
+every_n_steps.n = %RESET_EPISODE_PERIOD
+train_uvf.max_steps_per_episode = %RESET_EPISODE_PERIOD
+# Do a manual reset every N episodes
+UvfAgent.reset_env_cond_fn = @every_n_episodes
+every_n_episodes.n = %RESET_ENV_PERIOD
+every_n_episodes.steps_per_episode = %RESET_EPISODE_PERIOD
+
+## Config defaults
+EVAL_MODES = ["eval1"]
+
+## Config agent
+CONTEXT = @agent/Context
+META_CONTEXT = @meta/Context
+
+## Config agent context
+agent/Context.context_ranges = [%context_range]
+agent/Context.context_shapes = [%SUBGOAL_DIM]
+agent/Context.meta_action_every_n = 10
+agent/Context.samplers = {
+ "train": [@train/DirectionSampler],
+ "explore": [@train/DirectionSampler],
+}
+
+agent/Context.context_transition_fn = @relative_context_transition_fn
+agent/Context.context_multi_transition_fn = @relative_context_multi_transition_fn
+
+agent/Context.reward_fn = @uvf/negative_distance
+
+## Config meta context
+meta/Context.context_ranges = [%meta_context_range]
+meta/Context.context_shapes = [3]
+meta/Context.samplers = {
+ "train": [@train/RandomSampler],
+ "explore": [@train/RandomSampler],
+ "eval1": [@eval1/ConstantSampler],
+}
+meta/Context.reward_fn = @task/negative_distance
+
+## Config rewards
+task/negative_distance.state_indices = [0, 1, 2]
+task/negative_distance.relative_context = False
+task/negative_distance.diff = False
+task/negative_distance.offset = 0.0
+
+## Config samplers
+train/RandomSampler.context_range = %meta_context_range
+train/DirectionSampler.context_range = %context_range
+train/DirectionSampler.k = %SUBGOAL_DIM
+relative_context_transition_fn.k = %SUBGOAL_DIM
+relative_context_multi_transition_fn.k = %SUBGOAL_DIM
+MetaAgent.k = %SUBGOAL_DIM
+
+eval1/ConstantSampler.value = [0, 27, 4.5]
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_fall_multi_img.gin" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_fall_multi_img.gin"
new file mode 100644
index 00000000..b54fb7c9
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_fall_multi_img.gin"
@@ -0,0 +1,68 @@
+#-*-Python-*-
+create_maze_env.env_name = "AntFall"
+IMAGES = True
+
+context_range = (%CONTEXT_RANGE_MIN, %CONTEXT_RANGE_MAX)
+meta_context_range = ((-4, -4, 0), (12, 28, 5))
+
+RESET_EPISODE_PERIOD = 500
+RESET_ENV_PERIOD = 1
+# End episode every N steps
+UvfAgent.reset_episode_cond_fn = @every_n_steps
+every_n_steps.n = %RESET_EPISODE_PERIOD
+train_uvf.max_steps_per_episode = %RESET_EPISODE_PERIOD
+# Do a manual reset every N episodes
+UvfAgent.reset_env_cond_fn = @every_n_episodes
+every_n_episodes.n = %RESET_ENV_PERIOD
+every_n_episodes.steps_per_episode = %RESET_EPISODE_PERIOD
+
+## Config defaults
+EVAL_MODES = ["eval1"]
+
+## Config agent
+CONTEXT = @agent/Context
+META_CONTEXT = @meta/Context
+
+## Config agent context
+agent/Context.context_ranges = [%context_range]
+agent/Context.context_shapes = [%SUBGOAL_DIM]
+agent/Context.meta_action_every_n = 10
+agent/Context.samplers = {
+ "train": [@train/DirectionSampler],
+ "explore": [@train/DirectionSampler],
+}
+
+agent/Context.context_transition_fn = @relative_context_transition_fn
+agent/Context.context_multi_transition_fn = @relative_context_multi_transition_fn
+
+agent/Context.reward_fn = @uvf/negative_distance
+
+## Config meta context
+meta/Context.context_ranges = [%meta_context_range]
+meta/Context.context_shapes = [3]
+meta/Context.samplers = {
+ "train": [@train/RandomSampler],
+ "explore": [@train/RandomSampler],
+ "eval1": [@eval1/ConstantSampler],
+}
+meta/Context.context_transition_fn = @task/relative_context_transition_fn
+meta/Context.context_multi_transition_fn = @task/relative_context_multi_transition_fn
+meta/Context.reward_fn = @task/negative_distance
+
+## Config rewards
+task/negative_distance.state_indices = [0, 1, 2]
+task/negative_distance.relative_context = True
+task/negative_distance.diff = False
+task/negative_distance.offset = 0.0
+
+## Config samplers
+train/RandomSampler.context_range = %meta_context_range
+train/DirectionSampler.context_range = %context_range
+train/DirectionSampler.k = %SUBGOAL_DIM
+relative_context_transition_fn.k = %SUBGOAL_DIM
+relative_context_multi_transition_fn.k = %SUBGOAL_DIM
+task/relative_context_transition_fn.k = 3
+task/relative_context_multi_transition_fn.k = 3
+MetaAgent.k = %SUBGOAL_DIM
+
+eval1/ConstantSampler.value = [0, 27, 0]
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_fall_single.gin" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_fall_single.gin"
new file mode 100644
index 00000000..56bbc070
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_fall_single.gin"
@@ -0,0 +1,62 @@
+#-*-Python-*-
+create_maze_env.env_name = "AntFall"
+context_range = (%CONTEXT_RANGE_MIN, %CONTEXT_RANGE_MAX)
+meta_context_range = ((-4, -4, 0), (12, 28, 5))
+
+RESET_EPISODE_PERIOD = 500
+RESET_ENV_PERIOD = 1
+# End episode every N steps
+UvfAgent.reset_episode_cond_fn = @every_n_steps
+every_n_steps.n = %RESET_EPISODE_PERIOD
+train_uvf.max_steps_per_episode = %RESET_EPISODE_PERIOD
+# Do a manual reset every N episodes
+UvfAgent.reset_env_cond_fn = @every_n_episodes
+every_n_episodes.n = %RESET_ENV_PERIOD
+every_n_episodes.steps_per_episode = %RESET_EPISODE_PERIOD
+
+## Config defaults
+EVAL_MODES = ["eval1"]
+
+## Config agent
+CONTEXT = @agent/Context
+META_CONTEXT = @meta/Context
+
+## Config agent context
+agent/Context.context_ranges = [%context_range]
+agent/Context.context_shapes = [%SUBGOAL_DIM]
+agent/Context.meta_action_every_n = 10
+agent/Context.samplers = {
+ "train": [@train/DirectionSampler],
+ "explore": [@train/DirectionSampler],
+}
+
+agent/Context.context_transition_fn = @relative_context_transition_fn
+agent/Context.context_multi_transition_fn = @relative_context_multi_transition_fn
+
+agent/Context.reward_fn = @uvf/negative_distance
+
+## Config meta context
+meta/Context.context_ranges = [%meta_context_range]
+meta/Context.context_shapes = [3]
+meta/Context.samplers = {
+ "train": [@eval1/ConstantSampler],
+ "explore": [@eval1/ConstantSampler],
+ "eval1": [@eval1/ConstantSampler],
+}
+meta/Context.reward_fn = @task/negative_distance
+
+## Config rewards
+task/negative_distance.state_indices = [0, 1, 2]
+task/negative_distance.relative_context = False
+task/negative_distance.diff = False
+task/negative_distance.offset = 0.0
+
+## Config samplers
+train/RandomSampler.context_range = %meta_context_range
+train/DirectionSampler.context_range = %context_range
+train/DirectionSampler.k = %SUBGOAL_DIM
+relative_context_transition_fn.k = %SUBGOAL_DIM
+relative_context_multi_transition_fn.k = %SUBGOAL_DIM
+MetaAgent.k = %SUBGOAL_DIM
+
+eval1/ConstantSampler.value = [0, 27, 4.5]
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_maze.gin" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_maze.gin"
new file mode 100644
index 00000000..3a0b73e3
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_maze.gin"
@@ -0,0 +1,66 @@
+#-*-Python-*-
+create_maze_env.env_name = "AntMaze"
+context_range = (%CONTEXT_RANGE_MIN, %CONTEXT_RANGE_MAX)
+meta_context_range = ((-4, -4), (20, 20))
+
+RESET_EPISODE_PERIOD = 500
+RESET_ENV_PERIOD = 1
+# End episode every N steps
+UvfAgent.reset_episode_cond_fn = @every_n_steps
+every_n_steps.n = %RESET_EPISODE_PERIOD
+train_uvf.max_steps_per_episode = %RESET_EPISODE_PERIOD
+# Do a manual reset every N episodes
+UvfAgent.reset_env_cond_fn = @every_n_episodes
+every_n_episodes.n = %RESET_ENV_PERIOD
+every_n_episodes.steps_per_episode = %RESET_EPISODE_PERIOD
+
+## Config defaults
+EVAL_MODES = ["eval1", "eval2", "eval3"]
+
+## Config agent
+CONTEXT = @agent/Context
+META_CONTEXT = @meta/Context
+
+## Config agent context
+agent/Context.context_ranges = [%context_range]
+agent/Context.context_shapes = [%SUBGOAL_DIM]
+agent/Context.meta_action_every_n = 10
+agent/Context.samplers = {
+ "train": [@train/DirectionSampler],
+ "explore": [@train/DirectionSampler],
+}
+
+agent/Context.context_transition_fn = @relative_context_transition_fn
+agent/Context.context_multi_transition_fn = @relative_context_multi_transition_fn
+
+agent/Context.reward_fn = @uvf/negative_distance
+
+## Config meta context
+meta/Context.context_ranges = [%meta_context_range]
+meta/Context.context_shapes = [2]
+meta/Context.samplers = {
+ "train": [@train/RandomSampler],
+ "explore": [@train/RandomSampler],
+ "eval1": [@eval1/ConstantSampler],
+ "eval2": [@eval2/ConstantSampler],
+ "eval3": [@eval3/ConstantSampler],
+}
+meta/Context.reward_fn = @task/negative_distance
+
+## Config rewards
+task/negative_distance.state_indices = [0, 1]
+task/negative_distance.relative_context = False
+task/negative_distance.diff = False
+task/negative_distance.offset = 0.0
+
+## Config samplers
+train/RandomSampler.context_range = %meta_context_range
+train/DirectionSampler.context_range = %context_range
+train/DirectionSampler.k = %SUBGOAL_DIM
+relative_context_transition_fn.k = %SUBGOAL_DIM
+relative_context_multi_transition_fn.k = %SUBGOAL_DIM
+MetaAgent.k = %SUBGOAL_DIM
+
+eval1/ConstantSampler.value = [16, 0]
+eval2/ConstantSampler.value = [16, 16]
+eval3/ConstantSampler.value = [0, 16]
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_maze_img.gin" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_maze_img.gin"
new file mode 100644
index 00000000..ceed65a0
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_maze_img.gin"
@@ -0,0 +1,72 @@
+#-*-Python-*-
+create_maze_env.env_name = "AntMaze"
+IMAGES = True
+
+context_range = (%CONTEXT_RANGE_MIN, %CONTEXT_RANGE_MAX)
+meta_context_range = ((-4, -4), (20, 20))
+
+RESET_EPISODE_PERIOD = 500
+RESET_ENV_PERIOD = 1
+# End episode every N steps
+UvfAgent.reset_episode_cond_fn = @every_n_steps
+every_n_steps.n = %RESET_EPISODE_PERIOD
+train_uvf.max_steps_per_episode = %RESET_EPISODE_PERIOD
+# Do a manual reset every N episodes
+UvfAgent.reset_env_cond_fn = @every_n_episodes
+every_n_episodes.n = %RESET_ENV_PERIOD
+every_n_episodes.steps_per_episode = %RESET_EPISODE_PERIOD
+
+## Config defaults
+EVAL_MODES = ["eval1", "eval2", "eval3"]
+
+## Config agent
+CONTEXT = @agent/Context
+META_CONTEXT = @meta/Context
+
+## Config agent context
+agent/Context.context_ranges = [%context_range]
+agent/Context.context_shapes = [%SUBGOAL_DIM]
+agent/Context.meta_action_every_n = 10
+agent/Context.samplers = {
+ "train": [@train/DirectionSampler],
+ "explore": [@train/DirectionSampler],
+}
+
+agent/Context.context_transition_fn = @relative_context_transition_fn
+agent/Context.context_multi_transition_fn = @relative_context_multi_transition_fn
+
+agent/Context.reward_fn = @uvf/negative_distance
+
+## Config meta context
+meta/Context.context_ranges = [%meta_context_range]
+meta/Context.context_shapes = [2]
+meta/Context.samplers = {
+ "train": [@train/RandomSampler],
+ "explore": [@train/RandomSampler],
+ "eval1": [@eval1/ConstantSampler],
+ "eval2": [@eval2/ConstantSampler],
+ "eval3": [@eval3/ConstantSampler],
+}
+meta/Context.context_transition_fn = @task/relative_context_transition_fn
+meta/Context.context_multi_transition_fn = @task/relative_context_multi_transition_fn
+meta/Context.reward_fn = @task/negative_distance
+
+## Config rewards
+task/negative_distance.state_indices = [0, 1]
+task/negative_distance.relative_context = True
+task/negative_distance.diff = False
+task/negative_distance.offset = 0.0
+
+## Config samplers
+train/RandomSampler.context_range = %meta_context_range
+train/DirectionSampler.context_range = %context_range
+train/DirectionSampler.k = %SUBGOAL_DIM
+relative_context_transition_fn.k = %SUBGOAL_DIM
+relative_context_multi_transition_fn.k = %SUBGOAL_DIM
+task/relative_context_transition_fn.k = 2
+task/relative_context_multi_transition_fn.k = 2
+MetaAgent.k = %SUBGOAL_DIM
+
+eval1/ConstantSampler.value = [16, 0]
+eval2/ConstantSampler.value = [16, 16]
+eval3/ConstantSampler.value = [0, 16]
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_push_multi.gin" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_push_multi.gin"
new file mode 100644
index 00000000..db9b4ed7
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_push_multi.gin"
@@ -0,0 +1,62 @@
+#-*-Python-*-
+create_maze_env.env_name = "AntPush"
+context_range = (%CONTEXT_RANGE_MIN, %CONTEXT_RANGE_MAX)
+meta_context_range = ((-16, -4), (16, 20))
+
+RESET_EPISODE_PERIOD = 500
+RESET_ENV_PERIOD = 1
+# End episode every N steps
+UvfAgent.reset_episode_cond_fn = @every_n_steps
+every_n_steps.n = %RESET_EPISODE_PERIOD
+train_uvf.max_steps_per_episode = %RESET_EPISODE_PERIOD
+# Do a manual reset every N episodes
+UvfAgent.reset_env_cond_fn = @every_n_episodes
+every_n_episodes.n = %RESET_ENV_PERIOD
+every_n_episodes.steps_per_episode = %RESET_EPISODE_PERIOD
+
+## Config defaults
+EVAL_MODES = ["eval2"]
+
+## Config agent
+CONTEXT = @agent/Context
+META_CONTEXT = @meta/Context
+
+## Config agent context
+agent/Context.context_ranges = [%context_range]
+agent/Context.context_shapes = [%SUBGOAL_DIM]
+agent/Context.meta_action_every_n = 10
+agent/Context.samplers = {
+ "train": [@train/DirectionSampler],
+ "explore": [@train/DirectionSampler],
+}
+
+agent/Context.context_transition_fn = @relative_context_transition_fn
+agent/Context.context_multi_transition_fn = @relative_context_multi_transition_fn
+
+agent/Context.reward_fn = @uvf/negative_distance
+
+## Config meta context
+meta/Context.context_ranges = [%meta_context_range]
+meta/Context.context_shapes = [2]
+meta/Context.samplers = {
+ "train": [@train/RandomSampler],
+ "explore": [@train/RandomSampler],
+ "eval2": [@eval2/ConstantSampler],
+}
+meta/Context.reward_fn = @task/negative_distance
+
+## Config rewards
+task/negative_distance.state_indices = [0, 1]
+task/negative_distance.relative_context = False
+task/negative_distance.diff = False
+task/negative_distance.offset = 0.0
+
+## Config samplers
+train/RandomSampler.context_range = %meta_context_range
+train/DirectionSampler.context_range = %context_range
+train/DirectionSampler.k = %SUBGOAL_DIM
+relative_context_transition_fn.k = %SUBGOAL_DIM
+relative_context_multi_transition_fn.k = %SUBGOAL_DIM
+MetaAgent.k = %SUBGOAL_DIM
+
+eval2/ConstantSampler.value = [0, 19]
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_push_multi_img.gin" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_push_multi_img.gin"
new file mode 100644
index 00000000..abdc4340
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_push_multi_img.gin"
@@ -0,0 +1,68 @@
+#-*-Python-*-
+create_maze_env.env_name = "AntPush"
+IMAGES = True
+
+context_range = (%CONTEXT_RANGE_MIN, %CONTEXT_RANGE_MAX)
+meta_context_range = ((-16, -4), (16, 20))
+
+RESET_EPISODE_PERIOD = 500
+RESET_ENV_PERIOD = 1
+# End episode every N steps
+UvfAgent.reset_episode_cond_fn = @every_n_steps
+every_n_steps.n = %RESET_EPISODE_PERIOD
+train_uvf.max_steps_per_episode = %RESET_EPISODE_PERIOD
+# Do a manual reset every N episodes
+UvfAgent.reset_env_cond_fn = @every_n_episodes
+every_n_episodes.n = %RESET_ENV_PERIOD
+every_n_episodes.steps_per_episode = %RESET_EPISODE_PERIOD
+
+## Config defaults
+EVAL_MODES = ["eval2"]
+
+## Config agent
+CONTEXT = @agent/Context
+META_CONTEXT = @meta/Context
+
+## Config agent context
+agent/Context.context_ranges = [%context_range]
+agent/Context.context_shapes = [%SUBGOAL_DIM]
+agent/Context.meta_action_every_n = 10
+agent/Context.samplers = {
+ "train": [@train/DirectionSampler],
+ "explore": [@train/DirectionSampler],
+}
+
+agent/Context.context_transition_fn = @relative_context_transition_fn
+agent/Context.context_multi_transition_fn = @relative_context_multi_transition_fn
+
+agent/Context.reward_fn = @uvf/negative_distance
+
+## Config meta context
+meta/Context.context_ranges = [%meta_context_range]
+meta/Context.context_shapes = [2]
+meta/Context.samplers = {
+ "train": [@train/RandomSampler],
+ "explore": [@train/RandomSampler],
+ "eval2": [@eval2/ConstantSampler],
+}
+meta/Context.context_transition_fn = @task/relative_context_transition_fn
+meta/Context.context_multi_transition_fn = @task/relative_context_multi_transition_fn
+meta/Context.reward_fn = @task/negative_distance
+
+## Config rewards
+task/negative_distance.state_indices = [0, 1]
+task/negative_distance.relative_context = True
+task/negative_distance.diff = False
+task/negative_distance.offset = 0.0
+
+## Config samplers
+train/RandomSampler.context_range = %meta_context_range
+train/DirectionSampler.context_range = %context_range
+train/DirectionSampler.k = %SUBGOAL_DIM
+relative_context_transition_fn.k = %SUBGOAL_DIM
+relative_context_multi_transition_fn.k = %SUBGOAL_DIM
+task/relative_context_transition_fn.k = 2
+task/relative_context_multi_transition_fn.k = 2
+MetaAgent.k = %SUBGOAL_DIM
+
+eval2/ConstantSampler.value = [0, 19]
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_push_single.gin" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_push_single.gin"
new file mode 100644
index 00000000..e85c5dfb
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/ant_push_single.gin"
@@ -0,0 +1,62 @@
+#-*-Python-*-
+create_maze_env.env_name = "AntPush"
+context_range = (%CONTEXT_RANGE_MIN, %CONTEXT_RANGE_MAX)
+meta_context_range = ((-16, -4), (16, 20))
+
+RESET_EPISODE_PERIOD = 500
+RESET_ENV_PERIOD = 1
+# End episode every N steps
+UvfAgent.reset_episode_cond_fn = @every_n_steps
+every_n_steps.n = %RESET_EPISODE_PERIOD
+train_uvf.max_steps_per_episode = %RESET_EPISODE_PERIOD
+# Do a manual reset every N episodes
+UvfAgent.reset_env_cond_fn = @every_n_episodes
+every_n_episodes.n = %RESET_ENV_PERIOD
+every_n_episodes.steps_per_episode = %RESET_EPISODE_PERIOD
+
+## Config defaults
+EVAL_MODES = ["eval2"]
+
+## Config agent
+CONTEXT = @agent/Context
+META_CONTEXT = @meta/Context
+
+## Config agent context
+agent/Context.context_ranges = [%context_range]
+agent/Context.context_shapes = [%SUBGOAL_DIM]
+agent/Context.meta_action_every_n = 10
+agent/Context.samplers = {
+ "train": [@train/DirectionSampler],
+ "explore": [@train/DirectionSampler],
+}
+
+agent/Context.context_transition_fn = @relative_context_transition_fn
+agent/Context.context_multi_transition_fn = @relative_context_multi_transition_fn
+
+agent/Context.reward_fn = @uvf/negative_distance
+
+## Config meta context
+meta/Context.context_ranges = [%meta_context_range]
+meta/Context.context_shapes = [2]
+meta/Context.samplers = {
+ "train": [@eval2/ConstantSampler],
+ "explore": [@eval2/ConstantSampler],
+ "eval2": [@eval2/ConstantSampler],
+}
+meta/Context.reward_fn = @task/negative_distance
+
+## Config rewards
+task/negative_distance.state_indices = [0, 1]
+task/negative_distance.relative_context = False
+task/negative_distance.diff = False
+task/negative_distance.offset = 0.0
+
+## Config samplers
+train/RandomSampler.context_range = %meta_context_range
+train/DirectionSampler.context_range = %context_range
+train/DirectionSampler.k = %SUBGOAL_DIM
+relative_context_transition_fn.k = %SUBGOAL_DIM
+relative_context_multi_transition_fn.k = %SUBGOAL_DIM
+MetaAgent.k = %SUBGOAL_DIM
+
+eval2/ConstantSampler.value = [0, 19]
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/default.gin" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/default.gin"
new file mode 100644
index 00000000..65f91e52
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/default.gin"
@@ -0,0 +1,12 @@
+#-*-Python-*-
+ENV_CONTEXT = None
+EVAL_MODES = ["eval"]
+TARGET_Q_CLIPPING = None
+RESET_EPISODE_PERIOD = None
+ZERO_OBS = False
+CONTEXT_RANGE_MIN = -10
+CONTEXT_RANGE_MAX = 10
+SUBGOAL_DIM = 2
+
+uvf/negative_distance.summarize = False
+uvf/negative_distance.relative_context = True
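
default.gin only defines the shared macros and the low-level reward defaults; a task config such as ant_push_single.gin is layered on top of it when the gin files are parsed. A minimal sketch of that composition, assuming hypothetical file paths and an illustrative override (not part of this patch):

import gin

gin.parse_config_files_and_bindings(
    config_files=[
        "context/configs/default.gin",          # shared macros: SUBGOAL_DIM, ranges, ...
        "context/configs/ant_push_single.gin",  # task-specific bindings
    ],
    bindings=["RESET_EPISODE_PERIOD = 1000"],   # hypothetical command-line override
    skip_unknown=True)  # tolerate configurables that have not been imported yet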
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/hiro_orig.gin" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/hiro_orig.gin"
new file mode 100644
index 00000000..e39ba96b
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/hiro_orig.gin"
@@ -0,0 +1,14 @@
+#-*-Python-*-
+ENV_CONTEXT = None
+EVAL_MODES = ["eval"]
+TARGET_Q_CLIPPING = None
+RESET_EPISODE_PERIOD = None
+ZERO_OBS = True
+IMAGES = False
+CONTEXT_RANGE_MIN = (-10, -10, -0.5, -1, -1, -1, -1, -0.5, -0.3, -0.5, -0.3, -0.5, -0.3, -0.5, -0.3)
+CONTEXT_RANGE_MAX = ( 10, 10, 0.5, 1, 1, 1, 1, 0.5, 0.3, 0.5, 0.3, 0.5, 0.3, 0.5, 0.3)
+SUBGOAL_DIM = 15
+META_EXPLORE_NOISE = 1.0
+
+uvf/negative_distance.summarize = False
+uvf/negative_distance.relative_context = True
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/hiro_repr.gin" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/hiro_repr.gin"
new file mode 100644
index 00000000..a0a8057b
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/hiro_repr.gin"
@@ -0,0 +1,18 @@
+#-*-Python-*-
+ENV_CONTEXT = None
+EVAL_MODES = ["eval"]
+TARGET_Q_CLIPPING = None
+RESET_EPISODE_PERIOD = None
+ZERO_OBS = False
+IMAGES = False
+CONTEXT_RANGE_MIN = -10
+CONTEXT_RANGE_MAX = 10
+SUBGOAL_DIM = 2
+META_EXPLORE_NOISE = 5.0
+
+StatePreprocess.trainable = True
+StatePreprocess.state_preprocess_net = @state_preprocess_net
+StatePreprocess.action_embed_net = @action_embed_net
+
+uvf/negative_distance.summarize = False
+uvf/negative_distance.relative_context = True
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/hiro_xy.gin" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/hiro_xy.gin"
new file mode 100644
index 00000000..f35026c9
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/hiro_xy.gin"
@@ -0,0 +1,14 @@
+#-*-Python-*-
+ENV_CONTEXT = None
+EVAL_MODES = ["eval"]
+TARGET_Q_CLIPPING = None
+RESET_EPISODE_PERIOD = None
+ZERO_OBS = False
+IMAGES = False
+CONTEXT_RANGE_MIN = -10
+CONTEXT_RANGE_MAX = 10
+SUBGOAL_DIM = 2
+META_EXPLORE_NOISE = 1.0
+
+uvf/negative_distance.summarize = False
+uvf/negative_distance.relative_context = True
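
hiro_xy.gin keeps a scalar ±10 bound on a 2-dimensional (x, y) subgoal, whereas hiro_orig.gin bounds all 15 subgoal dimensions individually. Both forms end up applied element-wise (see Context.get_clip_fns in context.py later in this diff); a small NumPy sketch with hypothetical subgoal proposals:

import numpy as np

# hiro_xy.gin: scalar bounds broadcast over the 2-D subgoal.
clipped_xy = np.clip(np.array([12.3, -4.1]), -10, 10)

# hiro_orig.gin: per-dimension bounds over the 15-D subgoal.
lo = np.array([-10, -10, -0.5, -1, -1, -1, -1, -0.5,
               -0.3, -0.5, -0.3, -0.5, -0.3, -0.5, -0.3])
hi = -lo                                      # the max bounds mirror the min bounds
proposal = np.random.uniform(2 * lo, 2 * hi)  # hypothetical out-of-range proposal
clipped_15 = np.clip(proposal, lo, hi)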
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/point_maze.gin" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/point_maze.gin"
new file mode 100644
index 00000000..0ea67d2d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/configs/point_maze.gin"
@@ -0,0 +1,73 @@
+#-*-Python-*-
+# NOTE: For best training, low-level exploration (uvf_add_noise_fn.stddev)
+# should be reduced to around 0.1.
+create_maze_env.env_name = "PointMaze"
+context_range_min = -10
+context_range_max = 10
+context_range = (%context_range_min, %context_range_max)
+meta_context_range = ((-2, -2), (10, 10))
+
+RESET_EPISODE_PERIOD = 500
+RESET_ENV_PERIOD = 1
+# End episode every N steps
+UvfAgent.reset_episode_cond_fn = @every_n_steps
+every_n_steps.n = %RESET_EPISODE_PERIOD
+train_uvf.max_steps_per_episode = %RESET_EPISODE_PERIOD
+# Do a manual reset every N episodes
+UvfAgent.reset_env_cond_fn = @every_n_episodes
+every_n_episodes.n = %RESET_ENV_PERIOD
+every_n_episodes.steps_per_episode = %RESET_EPISODE_PERIOD
+
+## Config defaults
+EVAL_MODES = ["eval1", "eval2", "eval3"]
+
+## Config agent
+CONTEXT = @agent/Context
+META_CONTEXT = @meta/Context
+
+## Config agent context
+agent/Context.context_ranges = [%context_range]
+agent/Context.context_shapes = [%SUBGOAL_DIM]
+agent/Context.meta_action_every_n = 10
+agent/Context.samplers = {
+ "train": [@train/DirectionSampler],
+ "explore": [@train/DirectionSampler],
+ "eval1": [@uvf_eval1/ConstantSampler],
+ "eval2": [@uvf_eval2/ConstantSampler],
+ "eval3": [@uvf_eval3/ConstantSampler],
+}
+
+agent/Context.context_transition_fn = @relative_context_transition_fn
+agent/Context.context_multi_transition_fn = @relative_context_multi_transition_fn
+
+agent/Context.reward_fn = @uvf/negative_distance
+
+## Config meta context
+meta/Context.context_ranges = [%meta_context_range]
+meta/Context.context_shapes = [2]
+meta/Context.samplers = {
+ "train": [@train/RandomSampler],
+ "explore": [@train/RandomSampler],
+ "eval1": [@eval1/ConstantSampler],
+ "eval2": [@eval2/ConstantSampler],
+ "eval3": [@eval3/ConstantSampler],
+}
+meta/Context.reward_fn = @task/negative_distance
+
+## Config rewards
+task/negative_distance.state_indices = [0, 1]
+task/negative_distance.relative_context = False
+task/negative_distance.diff = False
+task/negative_distance.offset = 0.0
+
+## Config samplers
+train/RandomSampler.context_range = %meta_context_range
+train/DirectionSampler.context_range = %context_range
+train/DirectionSampler.k = %SUBGOAL_DIM
+relative_context_transition_fn.k = %SUBGOAL_DIM
+relative_context_multi_transition_fn.k = %SUBGOAL_DIM
+MetaAgent.k = %SUBGOAL_DIM
+
+eval1/ConstantSampler.value = [8, 0]
+eval2/ConstantSampler.value = [8, 8]
+eval3/ConstantSampler.value = [0, 8]
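
During training the meta goal is drawn by train/RandomSampler inside meta_context_range, while the three eval modes pin it to the maze corners listed above. A rough NumPy sketch of that sampling, assuming RandomSampler draws uniformly within its context_range (the sampler classes appear in context/samplers.py later in this diff):

import numpy as np

low, high = np.array([-2.0, -2.0]), np.array([10.0, 10.0])  # meta_context_range
train_goal = np.random.uniform(low, high)                   # assumed uniform draw
eval_goals = {"eval1": [8, 0], "eval2": [8, 8], "eval3": [0, 8]}  # ConstantSampler values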
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/context.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/context.py"
new file mode 100644
index 00000000..76be00b4
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/context.py"
@@ -0,0 +1,467 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Context for Universal Value Function agents.
+
+A context specifies a list of contextual variables, each with
+  its own sampling and reward computation methods.
+
+Examples of contextual variables include
+ goal states, reward combination vectors, etc.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+import numpy as np
+import tensorflow as tf
+from tf_agents import specs
+import gin.tf
+from utils import utils as uvf_utils
+
+
+@gin.configurable
+class Context(object):
+ """Base context."""
+ VAR_NAME = 'action'
+
+ def __init__(self,
+ tf_env,
+ context_ranges=None,
+ context_shapes=None,
+ state_indices=None,
+ variable_indices=None,
+ gamma_index=None,
+ settable_context=False,
+ timers=None,
+ samplers=None,
+ reward_weights=None,
+ reward_fn=None,
+ random_sampler_mode='random',
+ normalizers=None,
+ context_transition_fn=None,
+ context_multi_transition_fn=None,
+ meta_action_every_n=None):
+ self._tf_env = tf_env
+ self.variable_indices = variable_indices
+ self.gamma_index = gamma_index
+ self._settable_context = settable_context
+ self.timers = timers
+ self._context_transition_fn = context_transition_fn
+ self._context_multi_transition_fn = context_multi_transition_fn
+ self._random_sampler_mode = random_sampler_mode
+
+ # assign specs
+ self._obs_spec = self._tf_env.observation_spec()
+ self._context_shapes = tuple([
+ shape if shape is not None else self._obs_spec.shape
+ for shape in context_shapes
+ ])
+ self.context_specs = tuple([
+ specs.TensorSpec(dtype=self._obs_spec.dtype, shape=shape)
+ for shape in self._context_shapes
+ ])
+ if context_ranges is not None:
+ self.context_ranges = context_ranges
+ else:
+ self.context_ranges = [None] * len(self._context_shapes)
+
+ self.context_as_action_specs = tuple([
+ specs.BoundedTensorSpec(
+ shape=shape,
+ dtype=(tf.float32 if self._obs_spec.dtype in
+ [tf.float32, tf.float64] else self._obs_spec.dtype),
+ minimum=context_range[0],
+ maximum=context_range[-1])
+ for shape, context_range in zip(self._context_shapes, self.context_ranges)
+ ])
+
+ if state_indices is not None:
+ self.state_indices = state_indices
+ else:
+ self.state_indices = [None] * len(self._context_shapes)
+ if self.variable_indices is not None and self.n != len(
+ self.variable_indices):
+ raise ValueError(
+ 'variable_indices (%s) must have the same length as contexts (%s).' %
+ (self.variable_indices, self.context_specs))
+ assert self.n == len(self.context_ranges)
+ assert self.n == len(self.state_indices)
+
+ # assign reward/sampler fns
+ self._sampler_fns = dict()
+ self._samplers = dict()
+ self._reward_fns = dict()
+
+ # assign reward fns
+ self._add_custom_reward_fns()
+ reward_weights = reward_weights or None
+ self._reward_fn = self._make_reward_fn(reward_fn, reward_weights)
+
+ # assign samplers
+ self._add_custom_sampler_fns()
+ for mode, sampler_fns in samplers.items():
+ self._make_sampler_fn(sampler_fns, mode)
+
+ # create normalizers
+ if normalizers is None:
+ self._normalizers = [None] * len(self.context_specs)
+ else:
+ self._normalizers = [
+ normalizer(tf.zeros(shape=spec.shape, dtype=spec.dtype))
+ if normalizer is not None else None
+ for normalizer, spec in zip(normalizers, self.context_specs)
+ ]
+ assert self.n == len(self._normalizers)
+
+ self.meta_action_every_n = meta_action_every_n
+
+ # create vars
+ self.context_vars = {}
+ self.timer_vars = {}
+ self.create_vars(self.VAR_NAME)
+ self.t = tf.Variable(
+ tf.zeros(shape=(), dtype=tf.int32), name='num_timer_steps')
+
+ def _add_custom_reward_fns(self):
+ pass
+
+ def _add_custom_sampler_fns(self):
+ pass
+
+ def sample_random_contexts(self, batch_size):
+ """Sample random batch contexts."""
+ assert self._random_sampler_mode is not None
+ return self.sample_contexts(self._random_sampler_mode, batch_size)[0]
+
+ def sample_contexts(self, mode, batch_size, state=None, next_state=None,
+ **kwargs):
+ """Sample a batch of contexts.
+
+ Args:
+ mode: A string representing the mode [`train`, `explore`, `eval`].
+      batch_size: Batch size.
+      state: Optional state used by samplers that condition on the state.
+      next_state: Optional next state used by samplers that condition on it.
+      **kwargs: Additional keyword arguments forwarded to the samplers.
+ Returns:
+ Two lists of [batch_size, num_context_dims] contexts.
+ """
+ contexts, next_contexts = self._sampler_fns[mode](
+ batch_size, state=state, next_state=next_state,
+ **kwargs)
+ self._validate_contexts(contexts)
+ self._validate_contexts(next_contexts)
+ return contexts, next_contexts
+
+ def compute_rewards(self, mode, states, actions, rewards, next_states,
+ contexts):
+ """Compute context-based rewards.
+
+ Args:
+ mode: A string representing the mode ['uvf', 'task'].
+ states: A [batch_size, num_state_dims] tensor.
+ actions: A [batch_size, num_action_dims] tensor.
+ rewards: A [batch_size] tensor representing unmodified rewards.
+ next_states: A [batch_size, num_state_dims] tensor.
+ contexts: A list of [batch_size, num_context_dims] tensors.
+ Returns:
+ A [batch_size] tensor representing rewards.
+ """
+ return self._reward_fn(states, actions, rewards, next_states,
+ contexts)
+
+ def _make_reward_fn(self, reward_fns_list, reward_weights):
+ """Returns a fn that computes rewards.
+
+ Args:
+ reward_fns_list: A fn or a list of reward fns.
+ reward_weights: A list of reward weights.
+ """
+ if not isinstance(reward_fns_list, (list, tuple)):
+ reward_fns_list = [reward_fns_list]
+ if reward_weights is None:
+ reward_weights = [1.0] * len(reward_fns_list)
+ assert len(reward_fns_list) == len(reward_weights)
+
+ reward_fns_list = [
+ self._custom_reward_fns[fn] if isinstance(fn, (str,)) else fn
+ for fn in reward_fns_list
+ ]
+
+ def reward_fn(*args, **kwargs):
+ """Returns rewards, discounts."""
+ reward_tuples = [
+ reward_fn(*args, **kwargs) for reward_fn in reward_fns_list
+ ]
+ rewards_list = [reward_tuple[0] for reward_tuple in reward_tuples]
+ discounts_list = [reward_tuple[1] for reward_tuple in reward_tuples]
+ ndims = max([r.shape.ndims for r in rewards_list])
+ if ndims > 1: # expand reward shapes to allow broadcasting
+ for i in range(len(rewards_list)):
+          # Pad lower-rank rewards/discounts up to `ndims` so they broadcast.
+          for _ in range(ndims - rewards_list[i].shape.ndims):
+            rewards_list[i] = tf.expand_dims(rewards_list[i], axis=-1)
+          for _ in range(ndims - discounts_list[i].shape.ndims):
+            discounts_list[i] = tf.expand_dims(discounts_list[i], axis=-1)
+ rewards = tf.add_n(
+ [r * tf.to_float(w) for r, w in zip(rewards_list, reward_weights)])
+ discounts = discounts_list[0]
+ for d in discounts_list[1:]:
+ discounts *= d
+
+ return rewards, discounts
+
+ return reward_fn
+
+ def _make_sampler_fn(self, sampler_cls_list, mode):
+ """Returns a fn that samples a list of context vars.
+
+ Args:
+ sampler_cls_list: A list of sampler classes.
+ mode: A string representing the operating mode.
+ """
+ if not isinstance(sampler_cls_list, (list, tuple)):
+ sampler_cls_list = [sampler_cls_list]
+
+ self._samplers[mode] = []
+ sampler_fns = []
+ for spec, sampler in zip(self.context_specs, sampler_cls_list):
+ if isinstance(sampler, (str,)):
+ sampler_fn = self._custom_sampler_fns[sampler]
+ else:
+ sampler_fn = sampler(context_spec=spec)
+ self._samplers[mode].append(sampler_fn)
+ sampler_fns.append(sampler_fn)
+
+ def batch_sampler_fn(batch_size, state=None, next_state=None, **kwargs):
+ """Sampler fn."""
+ contexts_tuples = [
+ sampler(batch_size, state=state, next_state=next_state, **kwargs)
+ for sampler in sampler_fns]
+ contexts = [c[0] for c in contexts_tuples]
+ next_contexts = [c[1] for c in contexts_tuples]
+ contexts = [
+ normalizer.update_apply(c) if normalizer is not None else c
+ for normalizer, c in zip(self._normalizers, contexts)
+ ]
+ next_contexts = [
+ normalizer.apply(c) if normalizer is not None else c
+ for normalizer, c in zip(self._normalizers, next_contexts)
+ ]
+ return contexts, next_contexts
+
+ self._sampler_fns[mode] = batch_sampler_fn
+
+ def set_env_context_op(self, context, disable_unnormalizer=False):
+ """Returns a TensorFlow op that sets the environment context.
+
+ Args:
+ context: A list of context Tensor variables.
+ disable_unnormalizer: Disable unnormalization.
+ Returns:
+ A TensorFlow op that sets the environment context.
+ """
+ ret_val = np.array(1.0, dtype=np.float32)
+ if not self._settable_context:
+ return tf.identity(ret_val)
+
+ if not disable_unnormalizer:
+ context = [
+ normalizer.unapply(tf.expand_dims(c, 0))[0]
+ if normalizer is not None else c
+ for normalizer, c in zip(self._normalizers, context)
+ ]
+
+ def set_context_func(*env_context_values):
+ tf.logging.info('[set_env_context_op] Setting gym environment context.')
+ # pylint: disable=protected-access
+ self.gym_env.set_context(*env_context_values)
+ return ret_val
+ # pylint: enable=protected-access
+
+ with tf.name_scope('set_env_context'):
+ set_op = tf.py_func(set_context_func, context, tf.float32,
+ name='set_env_context_py_func')
+ set_op.set_shape([])
+ return set_op
+
+ def set_replay(self, replay):
+ """Set replay buffer for samplers.
+
+ Args:
+ replay: A replay buffer.
+ """
+ for _, samplers in self._samplers.items():
+ for sampler in samplers:
+ sampler.set_replay(replay)
+
+ def get_clip_fns(self):
+ """Returns a list of clip fns for contexts.
+
+ Returns:
+ A list of fns that clip context tensors.
+ """
+ clip_fns = []
+ for context_range in self.context_ranges:
+ def clip_fn(var_, range_=context_range):
+ """Clip a tensor."""
+ if range_ is None:
+ clipped_var = tf.identity(var_)
+ elif isinstance(range_[0], (int, long, float, list, np.ndarray)):
+ clipped_var = tf.clip_by_value(
+ var_,
+ range_[0],
+ range_[1],)
+ else: raise NotImplementedError(range_)
+ return clipped_var
+ clip_fns.append(clip_fn)
+ return clip_fns
+
+ def _validate_contexts(self, contexts):
+ """Validate if contexts have right specs.
+
+ Args:
+ contexts: A list of [batch_size, num_context_dim] tensors.
+ Raises:
+ ValueError: If shape or dtype mismatches that of spec.
+ """
+ for i, (context, spec) in enumerate(zip(contexts, self.context_specs)):
+ if context[0].shape != spec.shape:
+ raise ValueError('contexts[%d] has invalid shape %s wrt spec shape %s' %
+ (i, context[0].shape, spec.shape))
+ if context.dtype != spec.dtype:
+ raise ValueError('contexts[%d] has invalid dtype %s wrt spec dtype %s' %
+ (i, context.dtype, spec.dtype))
+
+ def context_multi_transition_fn(self, contexts, **kwargs):
+ """Returns multiple future contexts starting from a batch."""
+ assert self._context_multi_transition_fn
+ return self._context_multi_transition_fn(contexts, None, None, **kwargs)
+
+ def step(self, mode, agent=None, action_fn=None, **kwargs):
+ """Returns [next_contexts..., next_timer] list of ops.
+
+ Args:
+      mode: a string representing the mode=[train, explore, eval].
+      agent: an optional higher-level agent whose context is stepped first
+        and used to update this context.
+      action_fn: an optional function used to sample a new meta-action
+        (low-level context) every `meta_action_every_n` steps.
+      **kwargs: kwargs for context_transition_fn.
+ Returns:
+ a list of ops that set the context.
+ """
+ if agent is None:
+ ops = []
+ if self._context_transition_fn is not None:
+ def sampler_fn():
+ samples = self.sample_contexts(mode, 1)[0]
+ return [s[0] for s in samples]
+ values = self._context_transition_fn(self.vars, self.t, sampler_fn, **kwargs)
+ ops += [tf.assign(var, value) for var, value in zip(self.vars, values)]
+ ops.append(tf.assign_add(self.t, 1)) # increment timer
+ return ops
+ else:
+ ops = agent.tf_context.step(mode, **kwargs)
+ state = kwargs['state']
+ next_state = kwargs['next_state']
+ state_repr = kwargs['state_repr']
+ next_state_repr = kwargs['next_state_repr']
+ with tf.control_dependencies(ops): # Step high level context before computing low level one.
+ # Get the context transition function output.
+ values = self._context_transition_fn(self.vars, self.t, None,
+ state=state_repr,
+ next_state=next_state_repr)
+ # Select a new goal every C steps, otherwise use context transition.
+ low_level_context = [
+ tf.cond(tf.equal(self.t % self.meta_action_every_n, 0),
+ lambda: tf.cast(action_fn(next_state, context=None), tf.float32),
+ lambda: values)]
+ ops = [tf.assign(var, value)
+ for var, value in zip(self.vars, low_level_context)]
+ with tf.control_dependencies(ops):
+ return [tf.assign_add(self.t, 1)] # increment timer
+ return ops
+
+ def reset(self, mode, agent=None, action_fn=None, state=None):
+ """Returns ops that reset the context.
+
+ Args:
+      mode: a string representing the mode=[train, explore, eval].
+      agent: an optional higher-level agent; if given, its context is reset
+        and this context is zeroed out rather than sampled.
+      action_fn: unused here; kept for interface compatibility.
+      state: unused here; kept for interface compatibility.
+ Returns:
+ a list of ops that reset the context.
+ """
+ if agent is None:
+ values = self.sample_contexts(mode=mode, batch_size=1)[0]
+ if values is None:
+ return []
+ values = [value[0] for value in values]
+ values[0] = uvf_utils.tf_print(
+ values[0],
+ values,
+ message='context:reset, mode=%s' % mode,
+ first_n=10,
+ name='context:reset:%s' % mode)
+ all_ops = []
+ for _, context_vars in sorted(self.context_vars.items()):
+ ops = [tf.assign(var, value) for var, value in zip(context_vars, values)]
+ all_ops += ops
+ all_ops.append(self.set_env_context_op(values))
+ all_ops.append(tf.assign(self.t, 0)) # reset timer
+ return all_ops
+ else:
+ ops = agent.tf_context.reset(mode)
+ # NOTE: The code is currently written in such a way that the higher level
+ # policy does not provide a low-level context until the second
+      # observation. Instead, we just zero out the low-level contexts.
+ for key, context_vars in sorted(self.context_vars.items()):
+ ops += [tf.assign(var, tf.zeros_like(var)) for var, meta_var in
+ zip(context_vars, agent.tf_context.context_vars[key])]
+
+ ops.append(tf.assign(self.t, 0)) # reset timer
+ return ops
+
+ def create_vars(self, name, agent=None):
+ """Create tf variables for contexts.
+
+ Args:
+      name: Name of the variables.
+      agent: an optional agent for which meta variables are also created.
+    Returns:
+      A tuple of (context variables, meta variables created for `agent`).
+ """
+ if agent is not None:
+ meta_vars = agent.create_vars(name)
+ else:
+ meta_vars = {}
+ assert name not in self.context_vars, ('Conflict! %s is already '
+ 'initialized.') % name
+ self.context_vars[name] = tuple([
+ tf.Variable(
+ tf.zeros(shape=spec.shape, dtype=spec.dtype),
+ name='%s_context_%d' % (name, i))
+ for i, spec in enumerate(self.context_specs)
+ ])
+ return self.context_vars[name], meta_vars
+
+ @property
+ def n(self):
+ return len(self.context_specs)
+
+ @property
+ def vars(self):
+ return self.context_vars[self.VAR_NAME]
+
+ # pylint: disable=protected-access
+ @property
+ def gym_env(self):
+ return self._tf_env.pyenv._gym_env
+
+ @property
+ def tf_env(self):
+ return self._tf_env
+ # pylint: enable=protected-access
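
For reference, the closure built by Context._make_reward_fn above reduces to a weighted sum of the per-function rewards and a product of their discounts. A NumPy sketch of that combination with two hypothetical reward terms:

import numpy as np

rewards_list   = [np.array([-1.2, -0.3]), np.array([0.5, 0.5])]  # per-fn rewards
discounts_list = [np.array([1.0, 0.0]),   np.array([1.0, 1.0])]  # per-fn discounts
reward_weights = [1.0, 0.1]

rewards = sum(r * w for r, w in zip(rewards_list, reward_weights))  # tf.add_n equivalent
discounts = discounts_list[0]
for d in discounts_list[1:]:
    discounts = discounts * d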
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/context_transition_functions.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/context_transition_functions.py"
new file mode 100644
index 00000000..70326deb
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/context_transition_functions.py"
@@ -0,0 +1,123 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Context functions.
+
+Given the current contexts, timer and context sampler, returns new contexts
+ after an environment step. This can be used to define a high-level policy
+ that controls contexts as its actions.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+import gin.tf
+import utils as uvf_utils
+
+
+@gin.configurable
+def periodic_context_fn(contexts, timer, sampler_fn, period=1):
+ """Periodically samples contexts.
+
+ Args:
+ contexts: a list of [num_context_dims] tensor variables representing
+ current contexts.
+ timer: a scalar integer tensor variable holding the current time step.
+ sampler_fn: a sampler function that samples a list of [num_context_dims]
+ tensors.
+ period: (integer) period of update.
+ Returns:
+ a list of [num_context_dims] tensors.
+ """
+ contexts = list(contexts[:]) # create copy
+  return tf.cond(tf.equal(tf.mod(timer, period), 0), sampler_fn, lambda: contexts)
+
+
+@gin.configurable
+def timer_context_fn(contexts,
+ timer,
+ sampler_fn,
+ period=1,
+ timer_index=-1,
+ debug=False):
+ """Samples contexts based on timer in contexts.
+
+ Args:
+ contexts: a list of [num_context_dims] tensor variables representing
+ current contexts.
+ timer: a scalar integer tensor variable holding the current time step.
+ sampler_fn: a sampler function that samples a list of [num_context_dims]
+ tensors.
+ period: (integer) period of update; actual period = `period` + 1.
+    timer_index: (integer) Index of the context list entry that holds the timer.
+ debug: (boolean) Print debug messages.
+ Returns:
+ a list of [num_context_dims] tensors.
+ """
+ contexts = list(contexts[:]) # create copy
+ cond = tf.equal(contexts[timer_index][0], 0)
+ def reset():
+ """Sample context and reset the timer."""
+ new_contexts = sampler_fn()
+ new_contexts[timer_index] = tf.zeros_like(
+ contexts[timer_index]) + period
+ return new_contexts
+ def update():
+ """Decrement the timer."""
+ contexts[timer_index] -= 1
+ return contexts
+ values = tf.cond(cond, reset, update)
+ if debug:
+ values[0] = uvf_utils.tf_print(
+ values[0],
+ values + [timer],
+ 'timer_context_fn',
+ first_n=200,
+ name='timer_context_fn:contexts')
+ return values
+
+
+@gin.configurable
+def relative_context_transition_fn(
+ contexts, timer, sampler_fn,
+ k=2, state=None, next_state=None,
+ **kwargs):
+ """Contexts updated to be relative to next state.
+ """
+ contexts = list(contexts[:]) # create copy
+ assert len(contexts) == 1
+ new_contexts = [
+ tf.concat(
+ [contexts[0][:k] + state[:k] - next_state[:k],
+ contexts[0][k:]], -1)]
+ return new_contexts
+
+
+@gin.configurable
+def relative_context_multi_transition_fn(
+ contexts, timer, sampler_fn,
+ k=2, states=None,
+ **kwargs):
+ """Given contexts at first state and sequence of states, derives sequence of all contexts.
+ """
+ contexts = list(contexts[:]) # create copy
+ assert len(contexts) == 1
+ contexts = [
+ tf.concat(
+ [tf.expand_dims(contexts[0][:, :k] + states[:, 0, :k], 1) - states[:, :, :k],
+ contexts[0][:, None, k:] * tf.ones_like(states[:, :, :1])], -1)]
+ return contexts
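
The relative transition above is the HIRO-style goal update: after each environment step the subgoal is shifted by the agent's displacement so that it keeps pointing at the same absolute target. A NumPy sketch of relative_context_transition_fn restricted to the first k dimensions, with hypothetical values:

import numpy as np

k = 2
goal       = np.array([3.0, 1.0])        # current relative subgoal g_t
state      = np.array([0.0, 0.0, 0.7])   # s_t (only the first k dims matter)
next_state = np.array([0.5, 0.2, 0.6])   # s_{t+1}

new_goal = goal[:k] + state[:k] - next_state[:k]  # g_{t+1}
# The absolute target s_t[:k] + g_t is preserved:
assert np.allclose(state[:k] + goal[:k], next_state[:k] + new_goal)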
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/gin_imports.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/gin_imports.py"
new file mode 100644
index 00000000..94512cef
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/gin_imports.py"
@@ -0,0 +1,25 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Import gin configurable modules.
+"""
+
+# pylint: disable=unused-import
+from context import context
+from context import context_transition_functions
+from context import gin_utils
+from context import rewards_functions
+from context import samplers
+# pylint: enable=unused-import
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/gin_utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/gin_utils.py"
new file mode 100644
index 00000000..ab7c1b2d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/gin_utils.py"
@@ -0,0 +1,45 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Gin configurable utility functions.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+import gin.tf
+
+
+@gin.configurable
+def gin_sparse_array(size, values, indices, fill_value=0):
+ arr = np.zeros(size)
+ arr.fill(fill_value)
+ arr[indices] = values
+ return arr
+
+
+@gin.configurable
+def gin_sum(values):
+ result = values[0]
+ for value in values[1:]:
+ result += value
+ return result
+
+
+@gin.configurable
+def gin_range(n):
+ return range(n)
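
For reference, a quick usage example of the helpers above (assuming the efficient-hrl package layout, so `context.gin_utils` is importable):

from context import gin_utils

arr = gin_utils.gin_sparse_array(size=6, values=[1.0, -1.0], indices=[0, 3])
# arr -> array([ 1.,  0.,  0., -1.,  0.,  0.])
total = gin_utils.gin_sum([1, 2, 3])   # -> 6
dims = list(gin_utils.gin_range(3))    # -> [0, 1, 2]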
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/rewards_functions.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/rewards_functions.py"
new file mode 100644
index 00000000..ab560a7f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/rewards_functions.py"
@@ -0,0 +1,741 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Reward shaping functions used by Contexts.
+
+  Each reward function should take the following inputs and return new
+  rewards and discounts:
+
+ new_rewards, discounts = reward_fn(states, actions, rewards,
+ next_states, contexts)
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+import gin.tf
+
+
+def summarize_stats(stats):
+ """Summarize a dictionary of variables.
+
+ Args:
+ stats: a dictionary of {name: tensor} to compute stats over.
+ """
+ for name, stat in stats.items():
+ mean = tf.reduce_mean(stat)
+ tf.summary.scalar('mean_%s' % name, mean)
+ tf.summary.scalar('max_%s' % name, tf.reduce_max(stat))
+ tf.summary.scalar('min_%s' % name, tf.reduce_min(stat))
+ std = tf.sqrt(tf.reduce_mean(tf.square(stat)) - tf.square(mean) + 1e-10)
+ tf.summary.scalar('std_%s' % name, std)
+ tf.summary.histogram(name, stat)
+
+
+def index_states(states, indices):
+ """Return indexed states.
+
+ Args:
+ states: A [batch_size, num_state_dims] Tensor representing a batch
+ of states.
+ indices: (a list of Numpy integer array) Indices of states dimensions
+ to be mapped.
+ Returns:
+ A [batch_size, num_indices] Tensor representing the batch of indexed states.
+ """
+ if indices is None:
+ return states
+ indices = tf.constant(indices, dtype=tf.int32)
+ return tf.gather(states, indices=indices, axis=1)
+
+
+def record_tensor(tensor, indices, stats, name='states'):
+ """Record specified tensor dimensions into stats.
+
+ Args:
+ tensor: A [batch_size, num_dims] Tensor.
+ indices: (a list of integers) Indices of dimensions to record.
+ stats: A dictionary holding stats.
+ name: (string) Name of tensor.
+ """
+ if indices is None:
+ indices = range(tensor.shape.as_list()[1])
+ for index in indices:
+ stats['%s_%02d' % (name, index)] = tensor[:, index]
+
+
+@gin.configurable
+def potential_rewards(states,
+ actions,
+ rewards,
+ next_states,
+ contexts,
+ gamma=1.0,
+ reward_fn=None):
+ """Return the potential-based rewards.
+
+ Args:
+ states: A [batch_size, num_state_dims] Tensor representing a batch
+ of states.
+ actions: A [batch_size, num_action_dims] Tensor representing a batch
+ of actions.
+ rewards: A [batch_size] Tensor representing a batch of rewards.
+ next_states: A [batch_size, num_state_dims] Tensor representing a batch
+ of next states.
+ contexts: A list of [batch_size, num_context_dims] Tensor representing
+ a batch of contexts.
+ gamma: Reward discount.
+ reward_fn: A reward function.
+ Returns:
+ A new tf.float32 [batch_size] rewards Tensor, and
+ tf.float32 [batch_size] discounts tensor.
+ """
+ del actions # unused args
+ gamma = tf.to_float(gamma)
+ rewards_tp1, discounts = reward_fn(None, None, rewards, next_states, contexts)
+ rewards, _ = reward_fn(None, None, rewards, states, contexts)
+ return -rewards + gamma * rewards_tp1, discounts
+
+
+@gin.configurable
+def timed_rewards(states,
+ actions,
+ rewards,
+ next_states,
+ contexts,
+ reward_fn=None,
+ dense=False,
+ timer_index=-1):
+ """Return the timed rewards.
+
+ Args:
+ states: A [batch_size, num_state_dims] Tensor representing a batch
+ of states.
+ actions: A [batch_size, num_action_dims] Tensor representing a batch
+ of actions.
+ rewards: A [batch_size] Tensor representing a batch of rewards.
+ next_states: A [batch_size, num_state_dims] Tensor representing a batch
+ of next states.
+ contexts: A list of [batch_size, num_context_dims] Tensor representing
+ a batch of contexts.
+ reward_fn: A reward function.
+ dense: (boolean) Provide dense rewards or sparse rewards at time = 0.
+ timer_index: (integer) The context list index that specifies timer.
+ Returns:
+ A new tf.float32 [batch_size] rewards Tensor, and
+ tf.float32 [batch_size] discounts tensor.
+ """
+ assert contexts[timer_index].get_shape().as_list()[1] == 1
+ timers = contexts[timer_index][:, 0]
+ rewards, discounts = reward_fn(states, actions, rewards, next_states,
+ contexts)
+ terminates = tf.to_float(timers <= 0) # if terminate set 1, else set 0
+ for _ in range(rewards.shape.ndims - 1):
+ terminates = tf.expand_dims(terminates, axis=-1)
+ if not dense:
+ rewards *= terminates # if terminate, return rewards, else return 0
+ discounts *= (tf.to_float(1.0) - terminates)
+ return rewards, discounts
+
+
+@gin.configurable
+def reset_rewards(states,
+ actions,
+ rewards,
+ next_states,
+ contexts,
+ reset_index=0,
+ reset_state=None,
+ reset_reward_function=None,
+ include_forward_rewards=True,
+ include_reset_rewards=True):
+ """Returns the rewards for a forward/reset agent.
+
+ Args:
+ states: A [batch_size, num_state_dims] Tensor representing a batch
+ of states.
+ actions: A [batch_size, num_action_dims] Tensor representing a batch
+ of actions.
+ rewards: A [batch_size] Tensor representing a batch of rewards.
+ next_states: A [batch_size, num_state_dims] Tensor representing a batch
+ of next states.
+ contexts: A list of [batch_size, num_context_dims] Tensor representing
+ a batch of contexts.
+ reset_index: (integer) The context list index that specifies reset.
+ reset_state: Reset state.
+ reset_reward_function: Reward function for reset step.
+ include_forward_rewards: Include the rewards from the forward pass.
+ include_reset_rewards: Include the rewards from the reset pass.
+
+ Returns:
+ A new tf.float32 [batch_size] rewards Tensor, and
+ tf.float32 [batch_size] discounts tensor.
+ """
+ reset_state = tf.constant(
+ reset_state, dtype=next_states.dtype, shape=next_states.shape)
+ reset_states = tf.expand_dims(reset_state, 0)
+
+ def true_fn():
+ if include_reset_rewards:
+ return reset_reward_function(states, actions, rewards, next_states,
+ [reset_states] + contexts[1:])
+ else:
+ return tf.zeros_like(rewards), tf.ones_like(rewards)
+
+ def false_fn():
+ if include_forward_rewards:
+ return plain_rewards(states, actions, rewards, next_states, contexts)
+ else:
+ return tf.zeros_like(rewards), tf.ones_like(rewards)
+
+ rewards, discounts = tf.cond(
+ tf.cast(contexts[reset_index][0, 0], dtype=tf.bool), true_fn, false_fn)
+ return rewards, discounts
+
+
+@gin.configurable
+def tanh_similarity(states,
+ actions,
+ rewards,
+ next_states,
+ contexts,
+ mse_scale=1.0,
+ state_scales=1.0,
+ goal_scales=1.0,
+ summarize=False):
+ """Returns the similarity between next_states and contexts using tanh and mse.
+
+ Args:
+ states: A [batch_size, num_state_dims] Tensor representing a batch
+ of states.
+ actions: A [batch_size, num_action_dims] Tensor representing a batch
+ of actions.
+ rewards: A [batch_size] Tensor representing a batch of rewards.
+ next_states: A [batch_size, num_state_dims] Tensor representing a batch
+ of next states.
+ contexts: A list of [batch_size, num_context_dims] Tensor representing
+ a batch of contexts.
+ mse_scale: A float, to scale mse before tanh.
+ state_scales: multiplicative scale for (next) states. A scalar or 1D tensor,
+ must be broadcastable to number of state dimensions.
+ goal_scales: multiplicative scale for contexts. A scalar or 1D tensor,
+ must be broadcastable to number of goal dimensions.
+ summarize: (boolean) enable summary ops.
+
+
+ Returns:
+ A new tf.float32 [batch_size] rewards Tensor, and
+ tf.float32 [batch_size] discounts tensor.
+ """
+ del states, actions, rewards # Unused
+ mse = tf.reduce_mean(tf.squared_difference(next_states * state_scales,
+ contexts[0] * goal_scales), -1)
+ tanh = tf.tanh(mse_scale * mse)
+ if summarize:
+ with tf.name_scope('RewardFn/'):
+ tf.summary.scalar('mean_mse', tf.reduce_mean(mse))
+ tf.summary.histogram('mse', mse)
+ tf.summary.scalar('mean_tanh', tf.reduce_mean(tanh))
+ tf.summary.histogram('tanh', tanh)
+ rewards = tf.to_float(1 - tanh)
+ return rewards, tf.ones_like(rewards)
+
+
+@gin.configurable
+def negative_mse(states,
+ actions,
+ rewards,
+ next_states,
+ contexts,
+ state_scales=1.0,
+ goal_scales=1.0,
+ summarize=False):
+ """Returns the negative mean square error between next_states and contexts.
+
+ Args:
+ states: A [batch_size, num_state_dims] Tensor representing a batch
+ of states.
+ actions: A [batch_size, num_action_dims] Tensor representing a batch
+ of actions.
+ rewards: A [batch_size] Tensor representing a batch of rewards.
+ next_states: A [batch_size, num_state_dims] Tensor representing a batch
+ of next states.
+ contexts: A list of [batch_size, num_context_dims] Tensor representing
+ a batch of contexts.
+ state_scales: multiplicative scale for (next) states. A scalar or 1D tensor,
+ must be broadcastable to number of state dimensions.
+ goal_scales: multiplicative scale for contexts. A scalar or 1D tensor,
+ must be broadcastable to number of goal dimensions.
+ summarize: (boolean) enable summary ops.
+
+ Returns:
+ A new tf.float32 [batch_size] rewards Tensor, and
+ tf.float32 [batch_size] discounts tensor.
+ """
+ del states, actions, rewards # Unused
+ mse = tf.reduce_mean(tf.squared_difference(next_states * state_scales,
+ contexts[0] * goal_scales), -1)
+ if summarize:
+ with tf.name_scope('RewardFn/'):
+ tf.summary.scalar('mean_mse', tf.reduce_mean(mse))
+ tf.summary.histogram('mse', mse)
+ rewards = tf.to_float(-mse)
+ return rewards, tf.ones_like(rewards)
+
+
+@gin.configurable
+def negative_distance(states,
+ actions,
+ rewards,
+ next_states,
+ contexts,
+ state_scales=1.0,
+ goal_scales=1.0,
+ reward_scales=1.0,
+ weight_index=None,
+ weight_vector=None,
+ summarize=False,
+ termination_epsilon=1e-4,
+ state_indices=None,
+ goal_indices=None,
+ vectorize=False,
+ relative_context=False,
+ diff=False,
+ norm='L2',
+ epsilon=1e-10,
+ bonus_epsilon=0., #5.,
+ offset=0.0):
+ """Returns the negative euclidean distance between next_states and contexts.
+
+ Args:
+ states: A [batch_size, num_state_dims] Tensor representing a batch
+ of states.
+ actions: A [batch_size, num_action_dims] Tensor representing a batch
+ of actions.
+ rewards: A [batch_size] Tensor representing a batch of rewards.
+ next_states: A [batch_size, num_state_dims] Tensor representing a batch
+ of next states.
+ contexts: A list of [batch_size, num_context_dims] Tensor representing
+ a batch of contexts.
+ state_scales: multiplicative scale for (next) states. A scalar or 1D tensor,
+ must be broadcastable to number of state dimensions.
+ goal_scales: multiplicative scale for goals. A scalar or 1D tensor,
+ must be broadcastable to number of goal dimensions.
+ reward_scales: multiplicative scale for rewards. A scalar or 1D tensor,
+ must be broadcastable to number of reward dimensions.
+ weight_index: (integer) The context list index that specifies weight.
+ weight_vector: (a number or a list or Numpy array) The weighting vector,
+ broadcastable to `next_states`.
+ summarize: (boolean) enable summary ops.
+ termination_epsilon: terminate if dist is less than this quantity.
+ state_indices: (a list of integers) list of state indices to select.
+ goal_indices: (a list of integers) list of goal indices to select.
+    vectorize: Return a vectorized (per-dimension) form instead of a scalar.
+    relative_context: (boolean) treat the goal as relative to `states`.
+    diff: (boolean) return the decrease in distance (old dist minus new dist)
+      rather than the negative distance itself.
+    norm: L1 or L2.
+    epsilon: small offset to ensure non-negative/zero distance.
+    bonus_epsilon: if positive, add a bonus of 1 when the distance falls
+      below this threshold.
+    offset: constant added to the reward.
+
+ Returns:
+ A new tf.float32 [batch_size] rewards Tensor, and
+ tf.float32 [batch_size] discounts tensor.
+ """
+ del actions, rewards # Unused
+ stats = {}
+ record_tensor(next_states, state_indices, stats, 'next_states')
+ states = index_states(states, state_indices)
+ next_states = index_states(next_states, state_indices)
+ goals = index_states(contexts[0], goal_indices)
+ if relative_context:
+ goals = states + goals
+ sq_dists = tf.squared_difference(next_states * state_scales,
+ goals * goal_scales)
+ old_sq_dists = tf.squared_difference(states * state_scales,
+ goals * goal_scales)
+ record_tensor(sq_dists, None, stats, 'sq_dists')
+ if weight_vector is not None:
+ sq_dists *= tf.convert_to_tensor(weight_vector, dtype=next_states.dtype)
+ old_sq_dists *= tf.convert_to_tensor(weight_vector, dtype=next_states.dtype)
+ if weight_index is not None:
+ #sq_dists *= contexts[weight_index]
+ weights = tf.abs(index_states(contexts[0], weight_index))
+ #weights /= tf.reduce_sum(weights, -1, keepdims=True)
+ sq_dists *= weights
+ old_sq_dists *= weights
+ if norm == 'L1':
+ dist = tf.sqrt(sq_dists + epsilon)
+ old_dist = tf.sqrt(old_sq_dists + epsilon)
+ if not vectorize:
+ dist = tf.reduce_sum(dist, -1)
+ old_dist = tf.reduce_sum(old_dist, -1)
+ elif norm == 'L2':
+ if vectorize:
+ dist = sq_dists
+ old_dist = old_sq_dists
+ else:
+ dist = tf.reduce_sum(sq_dists, -1)
+ old_dist = tf.reduce_sum(old_sq_dists, -1)
+ dist = tf.sqrt(dist + epsilon) # tf.gradients fails when tf.sqrt(-0.0)
+ old_dist = tf.sqrt(old_dist + epsilon) # tf.gradients fails when tf.sqrt(-0.0)
+ else:
+ raise NotImplementedError(norm)
+ discounts = dist > termination_epsilon
+ if summarize:
+ with tf.name_scope('RewardFn/'):
+ tf.summary.scalar('mean_dist', tf.reduce_mean(dist))
+ tf.summary.histogram('dist', dist)
+ summarize_stats(stats)
+ bonus = tf.to_float(dist < bonus_epsilon)
+ dist *= reward_scales
+ old_dist *= reward_scales
+ if diff:
+ return bonus + offset + tf.to_float(old_dist - dist), tf.to_float(discounts)
+ return bonus + offset + tf.to_float(-dist), tf.to_float(discounts)
+
+
+@gin.configurable
+def cosine_similarity(states,
+ actions,
+ rewards,
+ next_states,
+ contexts,
+ state_scales=1.0,
+ goal_scales=1.0,
+ reward_scales=1.0,
+ normalize_states=True,
+ normalize_goals=True,
+ weight_index=None,
+ weight_vector=None,
+ summarize=False,
+ state_indices=None,
+ goal_indices=None,
+ offset=0.0):
+ """Returns the cosine similarity between next_states - states and contexts.
+
+ Args:
+ states: A [batch_size, num_state_dims] Tensor representing a batch
+ of states.
+ actions: A [batch_size, num_action_dims] Tensor representing a batch
+ of actions.
+ rewards: A [batch_size] Tensor representing a batch of rewards.
+ next_states: A [batch_size, num_state_dims] Tensor representing a batch
+ of next states.
+ contexts: A list of [batch_size, num_context_dims] Tensor representing
+ a batch of contexts.
+ state_scales: multiplicative scale for (next) states. A scalar or 1D tensor,
+ must be broadcastable to number of state dimensions.
+ goal_scales: multiplicative scale for goals. A scalar or 1D tensor,
+ must be broadcastable to number of goal dimensions.
+ reward_scales: multiplicative scale for rewards. A scalar or 1D tensor,
+ must be broadcastable to number of reward dimensions.
+ weight_index: (integer) The context list index that specifies weight.
+ weight_vector: (a number or a list or Numpy array) The weighting vector,
+ broadcastable to `next_states`.
+    summarize: (boolean) enable summary ops.
+    state_indices: (a list of integers) list of state indices to select.
+    goal_indices: (a list of integers) list of goal indices to select.
+    normalize_states: (boolean) l2-normalize the state displacement vector.
+    normalize_goals: (boolean) l2-normalize the goal vector.
+    offset: constant added to the similarity reward.
+
+ Returns:
+ A new tf.float32 [batch_size] rewards Tensor, and
+ tf.float32 [batch_size] discounts tensor.
+ """
+ del actions, rewards # Unused
+ stats = {}
+ record_tensor(next_states, state_indices, stats, 'next_states')
+ states = index_states(states, state_indices)
+ next_states = index_states(next_states, state_indices)
+ goals = index_states(contexts[0], goal_indices)
+
+ if weight_vector is not None:
+ goals *= tf.convert_to_tensor(weight_vector, dtype=next_states.dtype)
+ if weight_index is not None:
+ weights = tf.abs(index_states(contexts[0], weight_index))
+ goals *= weights
+
+ direction_vec = next_states - states
+ if normalize_states:
+ direction_vec = tf.nn.l2_normalize(direction_vec, -1)
+ goal_vec = goals
+ if normalize_goals:
+ goal_vec = tf.nn.l2_normalize(goal_vec, -1)
+
+ similarity = tf.reduce_sum(goal_vec * direction_vec, -1)
+ discounts = tf.ones_like(similarity)
+ return offset + tf.to_float(similarity), tf.to_float(discounts)
+
+
+@gin.configurable
+def diff_distance(states,
+ actions,
+ rewards,
+ next_states,
+ contexts,
+ state_scales=1.0,
+ goal_scales=1.0,
+ reward_scales=1.0,
+ weight_index=None,
+ weight_vector=None,
+ summarize=False,
+ termination_epsilon=1e-4,
+ state_indices=None,
+ goal_indices=None,
+ norm='L2',
+ epsilon=1e-10):
+ """Returns the difference in euclidean distance between states/next_states and contexts.
+
+ Args:
+ states: A [batch_size, num_state_dims] Tensor representing a batch
+ of states.
+ actions: A [batch_size, num_action_dims] Tensor representing a batch
+ of actions.
+ rewards: A [batch_size] Tensor representing a batch of rewards.
+ next_states: A [batch_size, num_state_dims] Tensor representing a batch
+ of next states.
+ contexts: A list of [batch_size, num_context_dims] Tensor representing
+ a batch of contexts.
+ state_scales: multiplicative scale for (next) states. A scalar or 1D tensor,
+ must be broadcastable to number of state dimensions.
+ goal_scales: multiplicative scale for goals. A scalar or 1D tensor,
+ must be broadcastable to number of goal dimensions.
+ reward_scales: multiplicative scale for rewards. A scalar or 1D tensor,
+ must be broadcastable to number of reward dimensions.
+ weight_index: (integer) The context list index that specifies weight.
+ weight_vector: (a number or a list or Numpy array) The weighting vector,
+ broadcastable to `next_states`.
+ summarize: (boolean) enable summary ops.
+ termination_epsilon: terminate if dist is less than this quantity.
+ state_indices: (a list of integers) list of state indices to select.
+ goal_indices: (a list of integers) list of goal indices to select.
+ norm: L1 or L2.
+ epsilon: small offset to ensure non-negative/zero distance.
+
+ Returns:
+ A new tf.float32 [batch_size] rewards Tensor, and
+ tf.float32 [batch_size] discounts tensor.
+ """
+ del actions, rewards # Unused
+ stats = {}
+ record_tensor(next_states, state_indices, stats, 'next_states')
+ next_states = index_states(next_states, state_indices)
+ states = index_states(states, state_indices)
+ goals = index_states(contexts[0], goal_indices)
+ next_sq_dists = tf.squared_difference(next_states * state_scales,
+ goals * goal_scales)
+ sq_dists = tf.squared_difference(states * state_scales,
+ goals * goal_scales)
+ record_tensor(sq_dists, None, stats, 'sq_dists')
+ if weight_vector is not None:
+ next_sq_dists *= tf.convert_to_tensor(weight_vector, dtype=next_states.dtype)
+ sq_dists *= tf.convert_to_tensor(weight_vector, dtype=next_states.dtype)
+ if weight_index is not None:
+ next_sq_dists *= contexts[weight_index]
+ sq_dists *= contexts[weight_index]
+ if norm == 'L1':
+ next_dist = tf.sqrt(next_sq_dists + epsilon)
+ dist = tf.sqrt(sq_dists + epsilon)
+ next_dist = tf.reduce_sum(next_dist, -1)
+ dist = tf.reduce_sum(dist, -1)
+ elif norm == 'L2':
+ next_dist = tf.reduce_sum(next_sq_dists, -1)
+ next_dist = tf.sqrt(next_dist + epsilon) # tf.gradients fails when tf.sqrt(-0.0)
+ dist = tf.reduce_sum(sq_dists, -1)
+ dist = tf.sqrt(dist + epsilon) # tf.gradients fails when tf.sqrt(-0.0)
+ else:
+ raise NotImplementedError(norm)
+ discounts = next_dist > termination_epsilon
+ if summarize:
+ with tf.name_scope('RewardFn/'):
+ tf.summary.scalar('mean_dist', tf.reduce_mean(dist))
+ tf.summary.histogram('dist', dist)
+ summarize_stats(stats)
+ diff = dist - next_dist
+ diff *= reward_scales
+ return tf.to_float(diff), tf.to_float(discounts)
+
+
+@gin.configurable
+def binary_indicator(states,
+ actions,
+ rewards,
+ next_states,
+ contexts,
+ termination_epsilon=1e-4,
+ offset=0,
+ epsilon=1e-10,
+ state_indices=None,
+ summarize=False):
+ """Returns 0/1 by checking if next_states and contexts overlap.
+
+ Args:
+ states: A [batch_size, num_state_dims] Tensor representing a batch
+ of states.
+ actions: A [batch_size, num_action_dims] Tensor representing a batch
+ of actions.
+ rewards: A [batch_size] Tensor representing a batch of rewards.
+ next_states: A [batch_size, num_state_dims] Tensor representing a batch
+ of next states.
+ contexts: A list of [batch_size, num_context_dims] Tensor representing
+ a batch of contexts.
+ termination_epsilon: terminate if dist is less than this quantity.
+ offset: Offset the rewards.
+    epsilon: small offset to ensure non-negative/zero distance.
+    state_indices: (a list of integers) list of state indices to select.
+    summarize: (boolean) enable summary ops (unused here).
+
+ Returns:
+ A new tf.float32 [batch_size] rewards Tensor, and
+ tf.float32 [batch_size] discounts tensor.
+ """
+ del states, actions # unused args
+ next_states = index_states(next_states, state_indices)
+ dist = tf.reduce_sum(tf.squared_difference(next_states, contexts[0]), -1)
+ dist = tf.sqrt(dist + epsilon)
+ discounts = dist > termination_epsilon
+ rewards = tf.logical_not(discounts)
+ rewards = tf.to_float(rewards) + offset
+ return tf.to_float(rewards), tf.ones_like(tf.to_float(discounts)) #tf.to_float(discounts)
+
+
+@gin.configurable
+def plain_rewards(states, actions, rewards, next_states, contexts):
+ """Returns the given rewards.
+
+ Args:
+ states: A [batch_size, num_state_dims] Tensor representing a batch
+ of states.
+ actions: A [batch_size, num_action_dims] Tensor representing a batch
+ of actions.
+ rewards: A [batch_size] Tensor representing a batch of rewards.
+ next_states: A [batch_size, num_state_dims] Tensor representing a batch
+ of next states.
+ contexts: A list of [batch_size, num_context_dims] Tensor representing
+ a batch of contexts.
+
+ Returns:
+ A new tf.float32 [batch_size] rewards Tensor, and
+ tf.float32 [batch_size] discounts tensor.
+ """
+ del states, actions, next_states, contexts # Unused
+ return rewards, tf.ones_like(rewards)
+
+
+@gin.configurable
+def ctrl_rewards(states,
+ actions,
+ rewards,
+ next_states,
+ contexts,
+ reward_scales=1.0):
+ """Returns the negative control cost.
+
+ Args:
+ states: A [batch_size, num_state_dims] Tensor representing a batch
+ of states.
+ actions: A [batch_size, num_action_dims] Tensor representing a batch
+ of actions.
+ rewards: A [batch_size] Tensor representing a batch of rewards.
+ next_states: A [batch_size, num_state_dims] Tensor representing a batch
+ of next states.
+ contexts: A list of [batch_size, num_context_dims] Tensor representing
+ a batch of contexts.
+ reward_scales: multiplicative scale for rewards. A scalar or 1D tensor,
+ must be broadcastable to number of reward dimensions.
+
+ Returns:
+ A new tf.float32 [batch_size] rewards Tensor, and
+ tf.float32 [batch_size] discounts tensor.
+ """
+ del states, rewards, contexts # Unused
+ if actions is None:
+ rewards = tf.to_float(tf.zeros(shape=next_states.shape[:1]))
+ else:
+ rewards = -tf.reduce_sum(tf.square(actions), axis=1)
+ rewards *= reward_scales
+ rewards = tf.to_float(rewards)
+ return rewards, tf.ones_like(rewards)
+
+
+@gin.configurable
+def diff_rewards(
+ states,
+ actions,
+ rewards,
+ next_states,
+ contexts,
+ state_indices=None,
+ goal_index=0,):
+ """Returns (next_states - goals) as a batched vector reward."""
+ del states, rewards, actions # Unused
+ if state_indices is not None:
+ next_states = index_states(next_states, state_indices)
+ rewards = tf.to_float(next_states - contexts[goal_index])
+ return rewards, tf.ones_like(rewards)
+
+
+@gin.configurable
+def state_rewards(states,
+ actions,
+ rewards,
+ next_states,
+ contexts,
+ weight_index=None,
+ state_indices=None,
+ weight_vector=1.0,
+ offset_vector=0.0,
+ summarize=False):
+ """Returns the rewards that are linear mapping of next_states.
+
+ Args:
+ states: A [batch_size, num_state_dims] Tensor representing a batch
+ of states.
+ actions: A [batch_size, num_action_dims] Tensor representing a batch
+ of actions.
+ rewards: A [batch_size] Tensor representing a batch of rewards.
+ next_states: A [batch_size, num_state_dims] Tensor representing a batch
+ of next states.
+ contexts: A list of [batch_size, num_context_dims] Tensor representing
+ a batch of contexts.
+ weight_index: (integer) Index of contexts lists that specify weighting.
+ state_indices: (a list of Numpy integer array) Indices of states dimensions
+ to be mapped.
+ weight_vector: (a number or a list or Numpy array) The weighting vector,
+ broadcastable to `next_states`.
+    offset_vector: (a number or a list or Numpy array) The offset vector.
+ summarize: (boolean) enable summary ops.
+
+ Returns:
+ A new tf.float32 [batch_size] rewards Tensor, and
+ tf.float32 [batch_size] discounts tensor.
+ """
+ del states, actions, rewards # unused args
+ stats = {}
+ record_tensor(next_states, state_indices, stats)
+ next_states = index_states(next_states, state_indices)
+ weight = tf.constant(
+ weight_vector, dtype=next_states.dtype, shape=next_states[0].shape)
+ weights = tf.expand_dims(weight, 0)
+ offset = tf.constant(
+ offset_vector, dtype=next_states.dtype, shape=next_states[0].shape)
+ offsets = tf.expand_dims(offset, 0)
+ if weight_index is not None:
+ weights *= contexts[weight_index]
+ rewards = tf.to_float(tf.reduce_sum(weights * (next_states+offsets), axis=1))
+ if summarize:
+ with tf.name_scope('RewardFn/'):
+ summarize_stats(stats)
+ return rewards, tf.ones_like(rewards)
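
As a concrete illustration of the directional reward defined by cosine_similarity above: with the default flags and no weighting it reduces to the inner product of the normalized state displacement and the normalized goal. A NumPy sketch with hypothetical values:

import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

state      = np.array([0.0, 0.0])
next_state = np.array([1.0, 1.0])   # the agent moved diagonally
goal       = np.array([2.0, 0.0])   # the desired direction is along +x

reward = np.dot(unit(next_state - state), unit(goal))  # cos(45 degrees) ~ 0.707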
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/samplers.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/samplers.py"
new file mode 100644
index 00000000..15a22df5
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/context/samplers.py"
@@ -0,0 +1,445 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Samplers for Contexts.
+
+ Each sampler class should define __call__(batch_size).
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+import tensorflow as tf
+slim = tf.contrib.slim
+import gin.tf
+
+
+@gin.configurable
+class BaseSampler(object):
+ """Base sampler."""
+
+ def __init__(self, context_spec, context_range=None, k=2, scope='sampler'):
+ """Construct a base sampler.
+
+    Args:
+      context_spec: A context spec.
+      context_range: A tuple of (minval, maxval), where minval and maxval are
+        floats or Numpy arrays with the same shape as the context.
+      k: Number of leading context dimensions used by subclasses that build
+        relative (directional) goals.
+      scope: A string denoting scope.
+ """
+ self._context_spec = context_spec
+ self._context_range = context_range
+ self._k = k
+ self._scope = scope
+
+ def __call__(self, batch_size, **kwargs):
+ raise NotImplementedError
+
+ def set_replay(self, replay=None):
+ pass
+
+ def _validate_contexts(self, contexts):
+ """Validate if contexts have right spec.
+
+ Args:
+ contexts: A [batch_size, num_contexts_dim] tensor.
+ Raises:
+ ValueError: If shape or dtype mismatches that of spec.
+ """
+ if contexts[0].shape != self._context_spec.shape:
+ raise ValueError('contexts has invalid shape %s wrt spec shape %s' %
+ (contexts[0].shape, self._context_spec.shape))
+ if contexts.dtype != self._context_spec.dtype:
+ raise ValueError('contexts has invalid dtype %s wrt spec dtype %s' %
+ (contexts.dtype, self._context_spec.dtype))
+
+
+@gin.configurable
+class ZeroSampler(BaseSampler):
+ """Zero sampler."""
+
+ def __call__(self, batch_size, **kwargs):
+ """Sample a batch of context.
+
+ Args:
+ batch_size: Batch size.
+ Returns:
+ Two [batch_size, num_context_dims] tensors.
+ """
+ contexts = tf.zeros(
+ dtype=self._context_spec.dtype,
+ shape=[
+ batch_size,
+ ] + self._context_spec.shape.as_list())
+ return contexts, contexts
+
+
+@gin.configurable
+class BinarySampler(BaseSampler):
+ """Binary sampler."""
+
+ def __init__(self, probs=0.5, *args, **kwargs):
+ """Constructor."""
+ super(BinarySampler, self).__init__(*args, **kwargs)
+ self._probs = probs
+
+ def __call__(self, batch_size, **kwargs):
+ """Sample a batch of context."""
+ spec = self._context_spec
+ contexts = tf.random_uniform(
+ shape=[
+ batch_size,
+ ] + spec.shape.as_list(), dtype=tf.float32)
+ contexts = tf.cast(tf.greater(contexts, self._probs), dtype=spec.dtype)
+ return contexts, contexts
+
+
+@gin.configurable
+class RandomSampler(BaseSampler):
+ """Random sampler."""
+
+ def __call__(self, batch_size, **kwargs):
+ """Sample a batch of context.
+
+ Args:
+ batch_size: Batch size.
+ Returns:
+ Two [batch_size, num_context_dims] tensors.
+ """
+ spec = self._context_spec
+ context_range = self._context_range
+ if isinstance(context_range[0], (int, float)):
+ contexts = tf.random_uniform(
+ shape=[
+ batch_size,
+ ] + spec.shape.as_list(),
+ minval=context_range[0],
+ maxval=context_range[1],
+ dtype=spec.dtype)
+ elif isinstance(context_range[0], (list, tuple, np.ndarray)):
+ assert len(spec.shape.as_list()) == 1
+ assert spec.shape.as_list()[0] == len(context_range[0])
+ assert spec.shape.as_list()[0] == len(context_range[1])
+ contexts = tf.concat(
+ [
+ tf.random_uniform(
+ shape=[
+ batch_size, 1,
+ ] + spec.shape.as_list()[1:],
+ minval=context_range[0][i],
+ maxval=context_range[1][i],
+ dtype=spec.dtype) for i in range(spec.shape.as_list()[0])
+ ],
+ axis=1)
+ else: raise NotImplementedError(context_range)
+ self._validate_contexts(contexts)
+ state, next_state = kwargs['state'], kwargs['next_state']
+ if state is not None and next_state is not None:
+ pass
+ #contexts = tf.concat(
+ # [tf.random_normal(tf.shape(state[:, :self._k]), dtype=tf.float64) +
+ # tf.random_shuffle(state[:, :self._k]),
+ # contexts[:, self._k:]], 1)
+
+ return contexts, contexts
+
+
+@gin.configurable
+class ScheduledSampler(BaseSampler):
+ """Scheduled sampler."""
+
+ def __init__(self,
+ scope='default',
+ values=None,
+ scheduler='cycle',
+ scheduler_params=None,
+ *args, **kwargs):
+ """Construct sampler.
+
+ Args:
+ scope: Scope name.
+ values: A list of numbers or [num_context_dim] Numpy arrays
+ representing the values to cycle.
+ scheduler: scheduler type.
+ scheduler_params: scheduler parameters.
+ *args: arguments.
+ **kwargs: keyword arguments.
+ """
+ super(ScheduledSampler, self).__init__(*args, **kwargs)
+ self._scope = scope
+ self._values = values
+ self._scheduler = scheduler
+ self._scheduler_params = scheduler_params or {}
+ assert self._values is not None and len(
+ self._values), 'must provide non-empty values.'
+ self._n = len(self._values)
+ # TODO(shanegu): move variable creation outside. resolve tf.cond problem.
+ self._count = 0
+ self._i = tf.Variable(
+ tf.zeros(shape=(), dtype=tf.int32),
+ name='%s-scheduled_sampler_%d' % (self._scope, self._count))
+ self._values = tf.constant(self._values, dtype=self._context_spec.dtype)
+
+ def __call__(self, batch_size, **kwargs):
+ """Sample a batch of context.
+
+ Args:
+ batch_size: Batch size.
+ Returns:
+ Two [batch_size, num_context_dims] tensors.
+ """
+ spec = self._context_spec
+ next_op = self._next(self._i)
+ with tf.control_dependencies([next_op]):
+ value = self._values[self._i]
+ if value.get_shape().as_list():
+ values = tf.tile(
+ tf.expand_dims(value, 0), (batch_size,) + (1,) * spec.shape.ndims)
+ else:
+ values = value + tf.zeros(
+ shape=[
+ batch_size,
+ ] + spec.shape.as_list(), dtype=spec.dtype)
+ self._validate_contexts(values)
+ self._count += 1
+ return values, values
+
+ def _next(self, i):
+ """Return op that increments pointer to next value.
+
+ Args:
+ i: A tensorflow integer variable.
+ Returns:
+ Op that increments pointer.
+ """
+ if self._scheduler == 'cycle':
+ inc = ('inc' in self._scheduler_params and
+ self._scheduler_params['inc']) or 1
+ return tf.assign(i, tf.mod(i+inc, self._n))
+ else:
+ raise NotImplementedError(self._scheduler)
+
+
+@gin.configurable
+class ReplaySampler(BaseSampler):
+ """Replay sampler."""
+
+ def __init__(self,
+ prefetch_queue_capacity=2,
+ override_indices=None,
+ state_indices=None,
+ *args,
+ **kwargs):
+ """Construct sampler.
+
+ Args:
+ prefetch_queue_capacity: Capacity for prefetch queue.
+ override_indices: Override indices.
+ state_indices: Select certain indices from state dimension.
+ *args: arguments.
+ **kwargs: keyword arguments.
+ """
+ super(ReplaySampler, self).__init__(*args, **kwargs)
+ self._prefetch_queue_capacity = prefetch_queue_capacity
+ self._override_indices = override_indices
+ self._state_indices = state_indices
+
+ def set_replay(self, replay):
+ """Set replay.
+
+ Args:
+ replay: A replay buffer.
+ """
+ self._replay = replay
+
+ def __call__(self, batch_size, **kwargs):
+ """Sample a batch of context.
+
+ Args:
+ batch_size: Batch size.
+ Returns:
+ Two [batch_size, num_context_dims] tensors.
+ """
+ batch = self._replay.GetRandomBatch(batch_size)
+ next_states = batch[4]
+ if self._prefetch_queue_capacity > 0:
+ batch_queue = slim.prefetch_queue.prefetch_queue(
+ [next_states],
+ capacity=self._prefetch_queue_capacity,
+ name='%s/batch_context_queue' % self._scope)
+ next_states = batch_queue.dequeue()
+ if self._override_indices is not None:
+ assert self._context_range is not None and isinstance(
+ self._context_range[0], (int, long, float))
+ next_states = tf.concat(
+ [
+ tf.random_uniform(
+ shape=next_states[:, :1].shape,
+ minval=self._context_range[0],
+ maxval=self._context_range[1],
+ dtype=next_states.dtype)
+ if i in self._override_indices else next_states[:, i:i + 1]
+ for i in range(self._context_spec.shape.as_list()[0])
+ ],
+ axis=1)
+ if self._state_indices is not None:
+ next_states = tf.concat(
+ [
+ next_states[:, i:i + 1]
+ for i in range(self._context_spec.shape.as_list()[0])
+ ],
+ axis=1)
+ self._validate_contexts(next_states)
+ return next_states, next_states
+
+
+@gin.configurable
+class TimeSampler(BaseSampler):
+ """Time Sampler."""
+
+ def __init__(self, minval=0, maxval=1, timestep=-1, *args, **kwargs):
+ """Construct sampler.
+
+ Args:
+ minval: Min value integer.
+ maxval: Max value integer.
+ timestep: Time step between states and next_states.
+ *args: arguments.
+ **kwargs: keyword arguments.
+ """
+ super(TimeSampler, self).__init__(*args, **kwargs)
+ assert self._context_spec.shape.as_list() == [1]
+ self._minval = minval
+ self._maxval = maxval
+ self._timestep = timestep
+
+ def __call__(self, batch_size, **kwargs):
+ """Sample a batch of context.
+
+ Args:
+ batch_size: Batch size.
+ Returns:
+ Two [batch_size, num_context_dims] tensors.
+ """
+ if self._maxval == self._minval:
+ contexts = tf.constant(
+ self._maxval, shape=[batch_size, 1], dtype=tf.int32)
+ else:
+ contexts = tf.random_uniform(
+ shape=[batch_size, 1],
+ dtype=tf.int32,
+ maxval=self._maxval,
+ minval=self._minval)
+ next_contexts = tf.maximum(contexts + self._timestep, 0)
+
+ return tf.cast(
+ contexts, dtype=self._context_spec.dtype), tf.cast(
+ next_contexts, dtype=self._context_spec.dtype)
+
+
+@gin.configurable
+class ConstantSampler(BaseSampler):
+ """Constant sampler."""
+
+ def __init__(self, value=None, *args, **kwargs):
+ """Construct sampler.
+
+ Args:
+ value: A list or Numpy array for values of the constant.
+ *args: arguments.
+ **kwargs: keyword arguments.
+ """
+ super(ConstantSampler, self).__init__(*args, **kwargs)
+ self._value = value
+
+ def __call__(self, batch_size, **kwargs):
+ """Sample a batch of context.
+
+ Args:
+ batch_size: Batch size.
+ Returns:
+ Two [batch_size, num_context_dims] tensors.
+ """
+ spec = self._context_spec
+ value_ = tf.constant(self._value, shape=spec.shape, dtype=spec.dtype)
+ values = tf.tile(
+ tf.expand_dims(value_, 0), (batch_size,) + (1,) * spec.shape.ndims)
+ self._validate_contexts(values)
+ return values, values
+
+
+@gin.configurable
+class DirectionSampler(RandomSampler):
+ """Direction sampler."""
+
+ def __call__(self, batch_size, **kwargs):
+ """Sample a batch of context.
+
+ Args:
+ batch_size: Batch size.
+ Returns:
+ Two [batch_size, num_context_dims] tensors.
+ """
+ spec = self._context_spec
+ context_range = self._context_range
+ if isinstance(context_range[0], (int, float)):
+ contexts = tf.random_uniform(
+ shape=[
+ batch_size,
+ ] + spec.shape.as_list(),
+ minval=context_range[0],
+ maxval=context_range[1],
+ dtype=spec.dtype)
+ elif isinstance(context_range[0], (list, tuple, np.ndarray)):
+ assert len(spec.shape.as_list()) == 1
+ assert spec.shape.as_list()[0] == len(context_range[0])
+ assert spec.shape.as_list()[0] == len(context_range[1])
+ contexts = tf.concat(
+ [
+ tf.random_uniform(
+ shape=[
+ batch_size, 1,
+ ] + spec.shape.as_list()[1:],
+ minval=context_range[0][i],
+ maxval=context_range[1][i],
+ dtype=spec.dtype) for i in range(spec.shape.as_list()[0])
+ ],
+ axis=1)
+ else: raise NotImplementedError(context_range)
+ self._validate_contexts(contexts)
+ if 'sampler_fn' in kwargs:
+ other_contexts = kwargs['sampler_fn']()
+ else:
+ other_contexts = contexts
+ state, next_state = kwargs['state'], kwargs['next_state']
+ if state is not None and next_state is not None:
+      my_context_range = (
+          np.array(context_range[1]) - np.array(context_range[0])
+      ) / 2 * np.ones(spec.shape.as_list())
+ contexts = tf.concat(
+ [0.1 * my_context_range[:self._k] *
+ tf.random_normal(tf.shape(state[:, :self._k]), dtype=state.dtype) +
+ tf.random_shuffle(state[:, :self._k]) - state[:, :self._k],
+ other_contexts[:, self._k:]], 1)
+ #contexts = tf.Print(contexts,
+ # [contexts, tf.reduce_max(contexts, 0),
+ # tf.reduce_min(state, 0), tf.reduce_max(state, 0)], 'contexts', summarize=15)
+      next_contexts = tf.concat(  # Goal transition h(s, g, s') = s + g - s'.
+          [state[:, :self._k] + contexts[:, :self._k] - next_state[:, :self._k],
+           other_contexts[:, self._k:]], 1)
+      next_contexts = contexts  # Overrides the line above: keep the sampled goal fixed (cosine variant).
+ else:
+ next_contexts = contexts
+ return tf.stop_gradient(contexts), tf.stop_gradient(next_contexts)
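+
+# Example (illustrative only, not part of the original file): a minimal sketch
+# of sampling a batch of contexts with RandomSampler; the spec and range values
+# are hypothetical, assuming a TF version that provides tf.TensorSpec.
+#
+#   spec = tf.TensorSpec(shape=[2], dtype=tf.float32)   # assumed context spec
+#   sampler = RandomSampler(context_spec=spec, context_range=(-4.0, 4.0))
+#   contexts, next_contexts = sampler(batch_size=64, state=None, next_state=None)
+#   # Both tensors have shape [64, 2], drawn uniformly from [-4, 4].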
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/__init__.py"
new file mode 100644
index 00000000..8b137891
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/__init__.py"
@@ -0,0 +1 @@
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/ant.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/ant.py"
new file mode 100644
index 00000000..bea9b209
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/ant.py"
@@ -0,0 +1,134 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Wrapper for creating the ant environment in gym_mujoco."""
+
+import math
+import numpy as np
+from gym import utils
+from gym.envs.mujoco import mujoco_env
+
+
+def q_inv(a):
+ return [a[0], -a[1], -a[2], -a[3]]
+
+
+def q_mult(a, b):  # multiply two quaternions
+ w = a[0] * b[0] - a[1] * b[1] - a[2] * b[2] - a[3] * b[3]
+ i = a[0] * b[1] + a[1] * b[0] + a[2] * b[3] - a[3] * b[2]
+ j = a[0] * b[2] - a[1] * b[3] + a[2] * b[0] + a[3] * b[1]
+ k = a[0] * b[3] + a[1] * b[2] - a[2] * b[1] + a[3] * b[0]
+ return [w, i, j, k]
+
+
+class AntEnv(mujoco_env.MujocoEnv, utils.EzPickle):
+ FILE = "ant.xml"
+ ORI_IND = 3
+
+ def __init__(self, file_path=None, expose_all_qpos=True,
+ expose_body_coms=None, expose_body_comvels=None):
+ self._expose_all_qpos = expose_all_qpos
+ self._expose_body_coms = expose_body_coms
+ self._expose_body_comvels = expose_body_comvels
+ self._body_com_indices = {}
+ self._body_comvel_indices = {}
+
+ mujoco_env.MujocoEnv.__init__(self, file_path, 5)
+ utils.EzPickle.__init__(self)
+
+ @property
+ def physics(self):
+ return self.model
+
+ def _step(self, a):
+ return self.step(a)
+
+ def step(self, a):
+ xposbefore = self.get_body_com("torso")[0]
+ self.do_simulation(a, self.frame_skip)
+ xposafter = self.get_body_com("torso")[0]
+ forward_reward = (xposafter - xposbefore) / self.dt
+ ctrl_cost = .5 * np.square(a).sum()
+ survive_reward = 1.0
+ reward = forward_reward - ctrl_cost + survive_reward
+ state = self.state_vector()
+ done = False
+ ob = self._get_obs()
+ return ob, reward, done, dict(
+ reward_forward=forward_reward,
+ reward_ctrl=-ctrl_cost,
+ reward_survive=survive_reward)
+
+ def _get_obs(self):
+ # No cfrc observation
+ if self._expose_all_qpos:
+ obs = np.concatenate([
+ self.physics.data.qpos.flat[:15], # Ensures only ant obs.
+ self.physics.data.qvel.flat[:14],
+ ])
+ else:
+ obs = np.concatenate([
+ self.physics.data.qpos.flat[2:15],
+ self.physics.data.qvel.flat[:14],
+ ])
+
+ if self._expose_body_coms is not None:
+ for name in self._expose_body_coms:
+ com = self.get_body_com(name)
+ if name not in self._body_com_indices:
+ indices = range(len(obs), len(obs) + len(com))
+ self._body_com_indices[name] = indices
+ obs = np.concatenate([obs, com])
+
+ if self._expose_body_comvels is not None:
+ for name in self._expose_body_comvels:
+ comvel = self.get_body_comvel(name)
+ if name not in self._body_comvel_indices:
+ indices = range(len(obs), len(obs) + len(comvel))
+ self._body_comvel_indices[name] = indices
+ obs = np.concatenate([obs, comvel])
+ return obs
+
+ def reset_model(self):
+ qpos = self.init_qpos + self.np_random.uniform(
+ size=self.model.nq, low=-.1, high=.1)
+ qvel = self.init_qvel + self.np_random.randn(self.model.nv) * .1
+
+ # Set everything other than ant to original position and 0 velocity.
+ qpos[15:] = self.init_qpos[15:]
+ qvel[14:] = 0.
+ self.set_state(qpos, qvel)
+ return self._get_obs()
+
+ def viewer_setup(self):
+ self.viewer.cam.distance = self.model.stat.extent * 0.5
+
+ def get_ori(self):
+ ori = [0, 1, 0, 0]
+ rot = self.model.data.qpos[self.__class__.ORI_IND:self.__class__.ORI_IND + 4] # take the quaternion
+ ori = q_mult(q_mult(rot, ori), q_inv(rot))[1:3] # project onto x-y plane
+ ori = math.atan2(ori[1], ori[0])
+ return ori
+
+ def set_xy(self, xy):
+ qpos = np.copy(self.physics.data.qpos)
+ qpos[0] = xy[0]
+ qpos[1] = xy[1]
+
+ qvel = self.physics.data.qvel
+ self.set_state(qpos, qvel)
+
+ def get_xy(self):
+ return self.physics.data.qpos[:2]
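+
+# Example (illustrative only, not part of the original file): get_ori() rotates
+# the unit x-axis by the torso quaternion (q * v * q^-1) and reads off the yaw
+# angle; a plain-Python sketch with a made-up quaternion:
+#
+#   rot = [0.924, 0.0, 0.0, 0.383]                        # ~45 deg about z
+#   x, y = q_mult(q_mult(rot, [0, 1, 0, 0]), q_inv(rot))[1:3]
+#   yaw = math.atan2(y, x)                                # ~0.785 rad (pi/4)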
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/ant_maze_env.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/ant_maze_env.py"
new file mode 100644
index 00000000..69a10663
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/ant_maze_env.py"
@@ -0,0 +1,21 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+from environments.maze_env import MazeEnv
+from environments.ant import AntEnv
+
+
+class AntMazeEnv(MazeEnv):
+ MODEL_CLASS = AntEnv
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/assets/ant.xml" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/assets/ant.xml"
new file mode 100644
index 00000000..5a49d7f5
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/assets/ant.xml"
@@ -0,0 +1,81 @@
+<!-- MuJoCo model definition for the ant robot (original XML markup not preserved in this copy) -->
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/create_maze_env.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/create_maze_env.py"
new file mode 100644
index 00000000..f6dc4f42
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/create_maze_env.py"
@@ -0,0 +1,97 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+from environments.ant_maze_env import AntMazeEnv
+from environments.point_maze_env import PointMazeEnv
+
+import tensorflow as tf
+import gin.tf
+from tf_agents.environments import gym_wrapper
+from tf_agents.environments import tf_py_environment
+
+
+@gin.configurable
+def create_maze_env(env_name=None, top_down_view=False):
+ n_bins = 0
+ manual_collision = False
+ if env_name.startswith('Ego'):
+ n_bins = 8
+ env_name = env_name[3:]
+ if env_name.startswith('Ant'):
+ cls = AntMazeEnv
+ env_name = env_name[3:]
+ maze_size_scaling = 8
+ elif env_name.startswith('Point'):
+ cls = PointMazeEnv
+ manual_collision = True
+ env_name = env_name[5:]
+ maze_size_scaling = 4
+ else:
+ assert False, 'unknown env %s' % env_name
+
+ maze_id = None
+ observe_blocks = False
+ put_spin_near_agent = False
+ if env_name == 'Maze':
+ maze_id = 'Maze'
+ elif env_name == 'Push':
+ maze_id = 'Push'
+ elif env_name == 'Fall':
+ maze_id = 'Fall'
+ elif env_name == 'Block':
+ maze_id = 'Block'
+ put_spin_near_agent = True
+ observe_blocks = True
+ elif env_name == 'BlockMaze':
+ maze_id = 'BlockMaze'
+ put_spin_near_agent = True
+ observe_blocks = True
+ else:
+ raise ValueError('Unknown maze environment %s' % env_name)
+
+ gym_mujoco_kwargs = {
+ 'maze_id': maze_id,
+ 'n_bins': n_bins,
+ 'observe_blocks': observe_blocks,
+ 'put_spin_near_agent': put_spin_near_agent,
+ 'top_down_view': top_down_view,
+ 'manual_collision': manual_collision,
+ 'maze_size_scaling': maze_size_scaling
+ }
+ gym_env = cls(**gym_mujoco_kwargs)
+ gym_env.reset()
+ wrapped_env = gym_wrapper.GymWrapper(gym_env)
+ return wrapped_env
+
+
+class TFPyEnvironment(tf_py_environment.TFPyEnvironment):
+
+ def __init__(self, *args, **kwargs):
+ super(TFPyEnvironment, self).__init__(*args, **kwargs)
+
+ def start_collect(self):
+ pass
+
+ def current_obs(self):
+ time_step = self.current_time_step()
+ return time_step.observation[0] # For some reason, there is an extra dim.
+
+ def step(self, actions):
+ actions = tf.expand_dims(actions, 0)
+ next_step = super(TFPyEnvironment, self).step(actions)
+ return next_step.is_last()[0], next_step.reward[0], next_step.discount[0]
+
+ def reset(self):
+ return super(TFPyEnvironment, self).reset()
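+
+# Example (illustrative only, not part of the original file): a minimal sketch
+# of building a maze environment and wrapping it for TensorFlow; 'AntMaze' is
+# one of the names accepted above ('Ant' + 'Maze'), and `action` is a
+# hypothetical tensor matching the environment's action spec.
+#
+#   py_env = create_maze_env(env_name='AntMaze')
+#   tf_env = TFPyEnvironment(py_env)
+#   obs = tf_env.current_obs()
+#   done, reward, discount = tf_env.step(action)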
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/maze_env.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/maze_env.py"
new file mode 100644
index 00000000..74b80121
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/maze_env.py"
@@ -0,0 +1,499 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Adapted from rllab maze_env.py."""
+
+import os
+import random
+import tempfile
+import xml.etree.ElementTree as ET
+import math
+import numpy as np
+import gym
+
+from environments import maze_env_utils
+
+# Directory that contains mujoco xml files.
+MODEL_DIR = 'environments/assets'
+
+
+class MazeEnv(gym.Env):
+ MODEL_CLASS = None
+
+ MAZE_HEIGHT = None
+ MAZE_SIZE_SCALING = None
+
+ def __init__(
+ self,
+ maze_id=None,
+ maze_height=0.5,
+ maze_size_scaling=8,
+ n_bins=0,
+ sensor_range=3.,
+ sensor_span=2 * math.pi,
+ observe_blocks=False,
+ put_spin_near_agent=False,
+ top_down_view=False,
+ manual_collision=False,
+ *args,
+ **kwargs):
+ self._maze_id = maze_id
+
+ model_cls = self.__class__.MODEL_CLASS
+ if model_cls is None:
+ raise "MODEL_CLASS unspecified!"
+ xml_path = os.path.join(MODEL_DIR, model_cls.FILE)
+ tree = ET.parse(xml_path)
+ worldbody = tree.find(".//worldbody")
+
+ self.MAZE_HEIGHT = height = maze_height
+ self.MAZE_SIZE_SCALING = size_scaling = maze_size_scaling
+ self._n_bins = n_bins
+ self._sensor_range = sensor_range * size_scaling
+ self._sensor_span = sensor_span
+ self._observe_blocks = observe_blocks
+ self._put_spin_near_agent = put_spin_near_agent
+ self._top_down_view = top_down_view
+ self._manual_collision = manual_collision
+
+ self.MAZE_STRUCTURE = structure = maze_env_utils.construct_maze(maze_id=self._maze_id)
+ self.elevated = any(-1 in row for row in structure) # Elevate the maze to allow for falling.
+ self.blocks = any(
+ any(maze_env_utils.can_move(r) for r in row)
+ for row in structure) # Are there any movable blocks?
+
+ torso_x, torso_y = self._find_robot()
+ self._init_torso_x = torso_x
+ self._init_torso_y = torso_y
+ self._init_positions = [
+ (x - torso_x, y - torso_y)
+ for x, y in self._find_all_robots()]
+
+ self._xy_to_rowcol = lambda x, y: (2 + (y + size_scaling / 2) / size_scaling,
+ 2 + (x + size_scaling / 2) / size_scaling)
+ self._view = np.zeros([5, 5, 3]) # walls (immovable), chasms (fall), movable blocks
+
+ height_offset = 0.
+ if self.elevated:
+ # Increase initial z-pos of ant.
+ height_offset = height * size_scaling
+ torso = tree.find(".//body[@name='torso']")
+ torso.set('pos', '0 0 %.2f' % (0.75 + height_offset))
+ if self.blocks:
+ # If there are movable blocks, change simulation settings to perform
+ # better contact detection.
+ default = tree.find(".//default")
+ default.find('.//geom').set('solimp', '.995 .995 .01')
+
+ self.movable_blocks = []
+ for i in range(len(structure)):
+ for j in range(len(structure[0])):
+ struct = structure[i][j]
+ if struct == 'r' and self._put_spin_near_agent:
+ struct = maze_env_utils.Move.SpinXY
+ if self.elevated and struct not in [-1]:
+ # Create elevated platform.
+ ET.SubElement(
+ worldbody, "geom",
+ name="elevated_%d_%d" % (i, j),
+ pos="%f %f %f" % (j * size_scaling - torso_x,
+ i * size_scaling - torso_y,
+ height / 2 * size_scaling),
+ size="%f %f %f" % (0.5 * size_scaling,
+ 0.5 * size_scaling,
+ height / 2 * size_scaling),
+ type="box",
+ material="",
+ contype="1",
+ conaffinity="1",
+ rgba="0.9 0.9 0.9 1",
+ )
+ if struct == 1: # Unmovable block.
+ # Offset all coordinates so that robot starts at the origin.
+ ET.SubElement(
+ worldbody, "geom",
+ name="block_%d_%d" % (i, j),
+ pos="%f %f %f" % (j * size_scaling - torso_x,
+ i * size_scaling - torso_y,
+ height_offset +
+ height / 2 * size_scaling),
+ size="%f %f %f" % (0.5 * size_scaling,
+ 0.5 * size_scaling,
+ height / 2 * size_scaling),
+ type="box",
+ material="",
+ contype="1",
+ conaffinity="1",
+ rgba="0.4 0.4 0.4 1",
+ )
+ elif maze_env_utils.can_move(struct): # Movable block.
+ # The "falling" blocks are shrunk slightly and increased in mass to
+          # ensure that they can fall easily through a gap in the platform blocks.
+ name = "movable_%d_%d" % (i, j)
+ self.movable_blocks.append((name, struct))
+ falling = maze_env_utils.can_move_z(struct)
+ spinning = maze_env_utils.can_spin(struct)
+ x_offset = 0.25 * size_scaling if spinning else 0.0
+ y_offset = 0.0
+ shrink = 0.1 if spinning else 0.99 if falling else 1.0
+ height_shrink = 0.1 if spinning else 1.0
+ movable_body = ET.SubElement(
+ worldbody, "body",
+ name=name,
+ pos="%f %f %f" % (j * size_scaling - torso_x + x_offset,
+ i * size_scaling - torso_y + y_offset,
+ height_offset +
+ height / 2 * size_scaling * height_shrink),
+ )
+ ET.SubElement(
+ movable_body, "geom",
+ name="block_%d_%d" % (i, j),
+ pos="0 0 0",
+ size="%f %f %f" % (0.5 * size_scaling * shrink,
+ 0.5 * size_scaling * shrink,
+ height / 2 * size_scaling * height_shrink),
+ type="box",
+ material="",
+ mass="0.001" if falling else "0.0002",
+ contype="1",
+ conaffinity="1",
+ rgba="0.9 0.1 0.1 1"
+ )
+ if maze_env_utils.can_move_x(struct):
+ ET.SubElement(
+ movable_body, "joint",
+ armature="0",
+ axis="1 0 0",
+ damping="0.0",
+ limited="true" if falling else "false",
+ range="%f %f" % (-size_scaling, size_scaling),
+ margin="0.01",
+ name="movable_x_%d_%d" % (i, j),
+ pos="0 0 0",
+ type="slide"
+ )
+ if maze_env_utils.can_move_y(struct):
+ ET.SubElement(
+ movable_body, "joint",
+ armature="0",
+ axis="0 1 0",
+ damping="0.0",
+ limited="true" if falling else "false",
+ range="%f %f" % (-size_scaling, size_scaling),
+ margin="0.01",
+ name="movable_y_%d_%d" % (i, j),
+ pos="0 0 0",
+ type="slide"
+ )
+ if maze_env_utils.can_move_z(struct):
+ ET.SubElement(
+ movable_body, "joint",
+ armature="0",
+ axis="0 0 1",
+ damping="0.0",
+ limited="true",
+ range="%f 0" % (-height_offset),
+ margin="0.01",
+ name="movable_z_%d_%d" % (i, j),
+ pos="0 0 0",
+ type="slide"
+ )
+ if maze_env_utils.can_spin(struct):
+ ET.SubElement(
+ movable_body, "joint",
+ armature="0",
+ axis="0 0 1",
+ damping="0.0",
+ limited="false",
+ name="spinable_%d_%d" % (i, j),
+ pos="0 0 0",
+ type="ball"
+ )
+
+ torso = tree.find(".//body[@name='torso']")
+ geoms = torso.findall(".//geom")
+ for geom in geoms:
+ if 'name' not in geom.attrib:
+ raise Exception("Every geom of the torso must have a name "
+ "defined")
+
+ _, file_path = tempfile.mkstemp(text=True)
+ tree.write(file_path)
+
+ self.wrapped_env = model_cls(*args, file_path=file_path, **kwargs)
+
+ def get_ori(self):
+ return self.wrapped_env.get_ori()
+
+ def get_top_down_view(self):
+ self._view = np.zeros_like(self._view)
+
+ def valid(row, col):
+ return self._view.shape[0] > row >= 0 and self._view.shape[1] > col >= 0
+
+ def update_view(x, y, d, row=None, col=None):
+ if row is None or col is None:
+ x = x - self._robot_x
+ y = y - self._robot_y
+ th = self._robot_ori
+
+ row, col = self._xy_to_rowcol(x, y)
+ update_view(x, y, d, row=row, col=col)
+ return
+
+ row, row_frac, col, col_frac = int(row), row % 1, int(col), col % 1
+ if row_frac < 0:
+ row_frac += 1
+ if col_frac < 0:
+ col_frac += 1
+
+ if valid(row, col):
+ self._view[row, col, d] += (
+ (min(1., row_frac + 0.5) - max(0., row_frac - 0.5)) *
+ (min(1., col_frac + 0.5) - max(0., col_frac - 0.5)))
+ if valid(row - 1, col):
+ self._view[row - 1, col, d] += (
+ (max(0., 0.5 - row_frac)) *
+ (min(1., col_frac + 0.5) - max(0., col_frac - 0.5)))
+ if valid(row + 1, col):
+ self._view[row + 1, col, d] += (
+ (max(0., row_frac - 0.5)) *
+ (min(1., col_frac + 0.5) - max(0., col_frac - 0.5)))
+ if valid(row, col - 1):
+ self._view[row, col - 1, d] += (
+ (min(1., row_frac + 0.5) - max(0., row_frac - 0.5)) *
+ (max(0., 0.5 - col_frac)))
+ if valid(row, col + 1):
+ self._view[row, col + 1, d] += (
+ (min(1., row_frac + 0.5) - max(0., row_frac - 0.5)) *
+ (max(0., col_frac - 0.5)))
+ if valid(row - 1, col - 1):
+ self._view[row - 1, col - 1, d] += (
+ (max(0., 0.5 - row_frac)) * max(0., 0.5 - col_frac))
+ if valid(row - 1, col + 1):
+ self._view[row - 1, col + 1, d] += (
+ (max(0., 0.5 - row_frac)) * max(0., col_frac - 0.5))
+ if valid(row + 1, col + 1):
+ self._view[row + 1, col + 1, d] += (
+ (max(0., row_frac - 0.5)) * max(0., col_frac - 0.5))
+ if valid(row + 1, col - 1):
+ self._view[row + 1, col - 1, d] += (
+ (max(0., row_frac - 0.5)) * max(0., 0.5 - col_frac))
+
+ # Draw ant.
+ robot_x, robot_y = self.wrapped_env.get_body_com("torso")[:2]
+ self._robot_x = robot_x
+ self._robot_y = robot_y
+ self._robot_ori = self.get_ori()
+
+ structure = self.MAZE_STRUCTURE
+ size_scaling = self.MAZE_SIZE_SCALING
+ height = self.MAZE_HEIGHT
+
+ # Draw immovable blocks and chasms.
+ for i in range(len(structure)):
+ for j in range(len(structure[0])):
+ if structure[i][j] == 1: # Wall.
+ update_view(j * size_scaling - self._init_torso_x,
+ i * size_scaling - self._init_torso_y,
+ 0)
+ if structure[i][j] == -1: # Chasm.
+ update_view(j * size_scaling - self._init_torso_x,
+ i * size_scaling - self._init_torso_y,
+ 1)
+
+ # Draw movable blocks.
+ for block_name, block_type in self.movable_blocks:
+ block_x, block_y = self.wrapped_env.get_body_com(block_name)[:2]
+ update_view(block_x, block_y, 2)
+
+ return self._view
+
+ def get_range_sensor_obs(self):
+ """Returns egocentric range sensor observations of maze."""
+ robot_x, robot_y, robot_z = self.wrapped_env.get_body_com("torso")[:3]
+ ori = self.get_ori()
+
+ structure = self.MAZE_STRUCTURE
+ size_scaling = self.MAZE_SIZE_SCALING
+ height = self.MAZE_HEIGHT
+
+ segments = []
+ # Get line segments (corresponding to outer boundary) of each immovable
+ # block or drop-off.
+ for i in range(len(structure)):
+ for j in range(len(structure[0])):
+ if structure[i][j] in [1, -1]: # There's a wall or drop-off.
+ cx = j * size_scaling - self._init_torso_x
+ cy = i * size_scaling - self._init_torso_y
+ x1 = cx - 0.5 * size_scaling
+ x2 = cx + 0.5 * size_scaling
+ y1 = cy - 0.5 * size_scaling
+ y2 = cy + 0.5 * size_scaling
+ struct_segments = [
+ ((x1, y1), (x2, y1)),
+ ((x2, y1), (x2, y2)),
+ ((x2, y2), (x1, y2)),
+ ((x1, y2), (x1, y1)),
+ ]
+ for seg in struct_segments:
+ segments.append(dict(
+ segment=seg,
+ type=structure[i][j],
+ ))
+ # Get line segments (corresponding to outer boundary) of each movable
+ # block within the agent's z-view.
+ for block_name, block_type in self.movable_blocks:
+ block_x, block_y, block_z = self.wrapped_env.get_body_com(block_name)[:3]
+ if (block_z + height * size_scaling / 2 >= robot_z and
+ robot_z >= block_z - height * size_scaling / 2): # Block in view.
+ x1 = block_x - 0.5 * size_scaling
+ x2 = block_x + 0.5 * size_scaling
+ y1 = block_y - 0.5 * size_scaling
+ y2 = block_y + 0.5 * size_scaling
+ struct_segments = [
+ ((x1, y1), (x2, y1)),
+ ((x2, y1), (x2, y2)),
+ ((x2, y2), (x1, y2)),
+ ((x1, y2), (x1, y1)),
+ ]
+ for seg in struct_segments:
+ segments.append(dict(
+ segment=seg,
+ type=block_type,
+ ))
+
+ sensor_readings = np.zeros((self._n_bins, 3)) # 3 for wall, drop-off, block
+ for ray_idx in range(self._n_bins):
+ ray_ori = (ori - self._sensor_span * 0.5 +
+ (2 * ray_idx + 1.0) / (2 * self._n_bins) * self._sensor_span)
+ ray_segments = []
+ # Get all segments that intersect with ray.
+ for seg in segments:
+ p = maze_env_utils.ray_segment_intersect(
+ ray=((robot_x, robot_y), ray_ori),
+ segment=seg["segment"])
+ if p is not None:
+ ray_segments.append(dict(
+ segment=seg["segment"],
+ type=seg["type"],
+ ray_ori=ray_ori,
+ distance=maze_env_utils.point_distance(p, (robot_x, robot_y)),
+ ))
+ if len(ray_segments) > 0:
+ # Find out which segment is intersected first.
+ first_seg = sorted(ray_segments, key=lambda x: x["distance"])[0]
+ seg_type = first_seg["type"]
+ idx = (0 if seg_type == 1 else # Wall.
+ 1 if seg_type == -1 else # Drop-off.
+ 2 if maze_env_utils.can_move(seg_type) else # Block.
+ None)
+ if first_seg["distance"] <= self._sensor_range:
+ sensor_readings[ray_idx][idx] = (self._sensor_range - first_seg["distance"]) / self._sensor_range
+
+ return sensor_readings
+
+ def _get_obs(self):
+ wrapped_obs = self.wrapped_env._get_obs()
+ if self._top_down_view:
+ view = [self.get_top_down_view().flat]
+ else:
+ view = []
+
+ if self._observe_blocks:
+ additional_obs = []
+ for block_name, block_type in self.movable_blocks:
+ additional_obs.append(self.wrapped_env.get_body_com(block_name))
+ wrapped_obs = np.concatenate([wrapped_obs[:3]] + additional_obs +
+ [wrapped_obs[3:]])
+
+ range_sensor_obs = self.get_range_sensor_obs()
+ return np.concatenate([wrapped_obs,
+ range_sensor_obs.flat] +
+ view + [[self.t * 0.001]])
+
+ def reset(self):
+ self.t = 0
+ self.trajectory = []
+ self.wrapped_env.reset()
+ if len(self._init_positions) > 1:
+ xy = random.choice(self._init_positions)
+ self.wrapped_env.set_xy(xy)
+ return self._get_obs()
+
+ @property
+ def viewer(self):
+ return self.wrapped_env.viewer
+
+ def render(self, *args, **kwargs):
+ return self.wrapped_env.render(*args, **kwargs)
+
+ @property
+ def observation_space(self):
+ shape = self._get_obs().shape
+ high = np.inf * np.ones(shape)
+ low = -high
+ return gym.spaces.Box(low, high)
+
+ @property
+ def action_space(self):
+ return self.wrapped_env.action_space
+
+ def _find_robot(self):
+ structure = self.MAZE_STRUCTURE
+ size_scaling = self.MAZE_SIZE_SCALING
+ for i in range(len(structure)):
+ for j in range(len(structure[0])):
+ if structure[i][j] == 'r':
+ return j * size_scaling, i * size_scaling
+ assert False, 'No robot in maze specification.'
+
+ def _find_all_robots(self):
+ structure = self.MAZE_STRUCTURE
+ size_scaling = self.MAZE_SIZE_SCALING
+ coords = []
+ for i in range(len(structure)):
+ for j in range(len(structure[0])):
+ if structure[i][j] == 'r':
+ coords.append((j * size_scaling, i * size_scaling))
+ return coords
+
+ def _is_in_collision(self, pos):
+ x, y = pos
+ structure = self.MAZE_STRUCTURE
+ size_scaling = self.MAZE_SIZE_SCALING
+ for i in range(len(structure)):
+ for j in range(len(structure[0])):
+ if structure[i][j] == 1:
+ minx = j * size_scaling - size_scaling * 0.5 - self._init_torso_x
+ maxx = j * size_scaling + size_scaling * 0.5 - self._init_torso_x
+ miny = i * size_scaling - size_scaling * 0.5 - self._init_torso_y
+ maxy = i * size_scaling + size_scaling * 0.5 - self._init_torso_y
+ if minx <= x <= maxx and miny <= y <= maxy:
+ return True
+ return False
+
+ def step(self, action):
+ self.t += 1
+ if self._manual_collision:
+ old_pos = self.wrapped_env.get_xy()
+ inner_next_obs, inner_reward, done, info = self.wrapped_env.step(action)
+ new_pos = self.wrapped_env.get_xy()
+ if self._is_in_collision(new_pos):
+ self.wrapped_env.set_xy(old_pos)
+ else:
+ inner_next_obs, inner_reward, done, info = self.wrapped_env.step(action)
+ next_obs = self._get_obs()
+ done = False
+ return next_obs, inner_reward, done, info
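+
+# Example (illustrative only, not part of the original file): stepping a maze
+# environment directly through the gym interface; `env` is assumed to be an
+# AntMazeEnv or PointMazeEnv constructed as in create_maze_env.py.
+#
+#   obs = env.reset()
+#   obs, reward, done, info = env.step(env.action_space.sample())
+#   # With manual_collision=True, a step that would enter a wall block is
+#   # undone by restoring the previous (x, y) position.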
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/maze_env_utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/maze_env_utils.py"
new file mode 100644
index 00000000..4f52509b
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/maze_env_utils.py"
@@ -0,0 +1,164 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Adapted from rllab maze_env_utils.py."""
+import numpy as np
+import math
+
+
+class Move(object):
+ X = 11
+ Y = 12
+ Z = 13
+ XY = 14
+ XZ = 15
+ YZ = 16
+ XYZ = 17
+ SpinXY = 18
+
+
+def can_move_x(movable):
+ return movable in [Move.X, Move.XY, Move.XZ, Move.XYZ,
+ Move.SpinXY]
+
+
+def can_move_y(movable):
+ return movable in [Move.Y, Move.XY, Move.YZ, Move.XYZ,
+ Move.SpinXY]
+
+
+def can_move_z(movable):
+ return movable in [Move.Z, Move.XZ, Move.YZ, Move.XYZ]
+
+
+def can_spin(movable):
+ return movable in [Move.SpinXY]
+
+
+def can_move(movable):
+ return can_move_x(movable) or can_move_y(movable) or can_move_z(movable)
+
+
+def construct_maze(maze_id='Maze'):
+ if maze_id == 'Maze':
+ structure = [
+ [1, 1, 1, 1, 1],
+ [1, 'r', 0, 0, 1],
+ [1, 1, 1, 0, 1],
+ [1, 0, 0, 0, 1],
+ [1, 1, 1, 1, 1],
+ ]
+ elif maze_id == 'Push':
+ structure = [
+ [1, 1, 1, 1, 1],
+ [1, 0, 'r', 1, 1],
+ [1, 0, Move.XY, 0, 1],
+ [1, 1, 0, 1, 1],
+ [1, 1, 1, 1, 1],
+ ]
+ elif maze_id == 'Fall':
+ structure = [
+ [1, 1, 1, 1],
+ [1, 'r', 0, 1],
+ [1, 0, Move.YZ, 1],
+ [1, -1, -1, 1],
+ [1, 0, 0, 1],
+ [1, 1, 1, 1],
+ ]
+ elif maze_id == 'Block':
+ O = 'r'
+ structure = [
+ [1, 1, 1, 1, 1],
+ [1, O, 0, 0, 1],
+ [1, 0, 0, 0, 1],
+ [1, 0, 0, 0, 1],
+ [1, 1, 1, 1, 1],
+ ]
+ elif maze_id == 'BlockMaze':
+ O = 'r'
+ structure = [
+ [1, 1, 1, 1],
+ [1, O, 0, 1],
+ [1, 1, 0, 1],
+ [1, 0, 0, 1],
+ [1, 1, 1, 1],
+ ]
+ else:
+ raise NotImplementedError('The provided MazeId %s is not recognized' % maze_id)
+
+ return structure
+
+
+def line_intersect(pt1, pt2, ptA, ptB):
+ """
+ Taken from https://www.cs.hmc.edu/ACM/lectures/intersections.html
+
+ this returns the intersection of Line(pt1,pt2) and Line(ptA,ptB)
+ """
+
+ DET_TOLERANCE = 0.00000001
+
+ # the first line is pt1 + r*(pt2-pt1)
+ # in component form:
+ x1, y1 = pt1
+ x2, y2 = pt2
+ dx1 = x2 - x1
+ dy1 = y2 - y1
+
+ # the second line is ptA + s*(ptB-ptA)
+ x, y = ptA
+ xB, yB = ptB
+ dx = xB - x
+ dy = yB - y
+
+ DET = (-dx1 * dy + dy1 * dx)
+
+ if math.fabs(DET) < DET_TOLERANCE: return (0, 0, 0, 0, 0)
+
+ # now, the determinant should be OK
+ DETinv = 1.0 / DET
+
+ # find the scalar amount along the "self" segment
+ r = DETinv * (-dy * (x - x1) + dx * (y - y1))
+
+ # find the scalar amount along the input line
+ s = DETinv * (-dy1 * (x - x1) + dx1 * (y - y1))
+
+ # return the average of the two descriptions
+ xi = (x1 + r * dx1 + x + s * dx) / 2.0
+ yi = (y1 + r * dy1 + y + s * dy) / 2.0
+ return (xi, yi, 1, r, s)
+
+
+def ray_segment_intersect(ray, segment):
+ """
+ Check if the ray originated from (x, y) with direction theta intersects the line segment (x1, y1) -- (x2, y2),
+ and return the intersection point if there is one
+ """
+ (x, y), theta = ray
+ # (x1, y1), (x2, y2) = segment
+ pt1 = (x, y)
+ len = 1
+ pt2 = (x + len * math.cos(theta), y + len * math.sin(theta))
+ xo, yo, valid, r, s = line_intersect(pt1, pt2, *segment)
+ if valid and r >= 0 and 0 <= s <= 1:
+ return (xo, yo)
+ return None
+
+
+def point_distance(p1, p2):
+ x1, y1 = p1
+ x2, y2 = p2
+ return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
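+
+# Example (illustrative only, not part of the original file): a ray cast from
+# the origin along +x (theta = 0) against a vertical wall segment at x = 2:
+#
+#   hit = ray_segment_intersect(ray=((0.0, 0.0), 0.0),
+#                               segment=((2.0, -1.0), (2.0, 1.0)))
+#   # hit == (2.0, 0.0), and point_distance(hit, (0.0, 0.0)) == 2.0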
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/point.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/point.py"
new file mode 100644
index 00000000..1a753235
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/point.py"
@@ -0,0 +1,90 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Wrapper for creating the ant environment in gym_mujoco."""
+
+import math
+import numpy as np
+from gym import utils
+from gym.envs.mujoco import mujoco_env
+
+
+class PointEnv(mujoco_env.MujocoEnv, utils.EzPickle):
+ FILE = "point.xml"
+ ORI_IND = 2
+
+ def __init__(self, file_path=None, expose_all_qpos=True):
+ self._expose_all_qpos = expose_all_qpos
+
+ mujoco_env.MujocoEnv.__init__(self, file_path, 1)
+ utils.EzPickle.__init__(self)
+
+ @property
+ def physics(self):
+ return self.model
+
+ def _step(self, a):
+ return self.step(a)
+
+ def step(self, action):
+ action[0] = 0.2 * action[0]
+ qpos = np.copy(self.physics.data.qpos)
+ qpos[2] += action[1]
+ ori = qpos[2]
+ # compute increment in each direction
+ dx = math.cos(ori) * action[0]
+ dy = math.sin(ori) * action[0]
+ # ensure that the robot is within reasonable range
+ qpos[0] = np.clip(qpos[0] + dx, -100, 100)
+ qpos[1] = np.clip(qpos[1] + dy, -100, 100)
+ qvel = self.physics.data.qvel
+ self.set_state(qpos, qvel)
+ for _ in range(0, self.frame_skip):
+ self.physics.step()
+ next_obs = self._get_obs()
+ reward = 0
+ done = False
+ info = {}
+ return next_obs, reward, done, info
+
+ def _get_obs(self):
+ if self._expose_all_qpos:
+ return np.concatenate([
+ self.physics.data.qpos.flat[:3], # Only point-relevant coords.
+ self.physics.data.qvel.flat[:3]])
+ return np.concatenate([
+ self.physics.data.qpos.flat[2:3],
+ self.physics.data.qvel.flat[:3]])
+
+ def reset_model(self):
+ qpos = self.init_qpos + self.np_random.uniform(
+ size=self.physics.model.nq, low=-.1, high=.1)
+ qvel = self.init_qvel + self.np_random.randn(self.physics.model.nv) * .1
+
+ # Set everything other than point to original position and 0 velocity.
+ qpos[3:] = self.init_qpos[3:]
+ qvel[3:] = 0.
+ self.set_state(qpos, qvel)
+ return self._get_obs()
+
+ def get_ori(self):
+ return self.model.data.qpos[self.__class__.ORI_IND]
+
+ def set_xy(self, xy):
+ qpos = np.copy(self.physics.data.qpos)
+ qpos[0] = xy[0]
+ qpos[1] = xy[1]
+
+ qvel = self.physics.data.qvel
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/point_maze_env.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/point_maze_env.py"
new file mode 100644
index 00000000..8d6b8194
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/environments/point_maze_env.py"
@@ -0,0 +1,21 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+from environments.maze_env import MazeEnv
+from environments.point import PointEnv
+
+
+class PointMazeEnv(MazeEnv):
+ MODEL_CLASS = PointEnv
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/eval.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/eval.py"
new file mode 100644
index 00000000..4f5a4b20
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/eval.py"
@@ -0,0 +1,460 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+r"""Script for evaluating a UVF agent.
+
+To run locally: See run_eval.py
+
+To run on borg: See train_eval.borg
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import tensorflow as tf
+slim = tf.contrib.slim
+import gin.tf
+# pylint: disable=unused-import
+import agent
+import train
+from utils import utils as uvf_utils
+from utils import eval_utils
+from environments import create_maze_env
+# pylint: enable=unused-import
+
+flags = tf.app.flags
+
+flags.DEFINE_string('eval_dir', None,
+ 'Directory for writing logs/summaries during eval.')
+flags.DEFINE_string('checkpoint_dir', None,
+ 'Directory containing checkpoints to eval.')
+FLAGS = flags.FLAGS
+
+
+def get_evaluate_checkpoint_fn(master, output_dir, eval_step_fns,
+ model_rollout_fn, gamma, max_steps_per_episode,
+ num_episodes_eval, num_episodes_videos,
+ tuner_hook, generate_videos,
+ generate_summaries, video_settings):
+ """Returns a function that evaluates a given checkpoint.
+
+ Args:
+ master: BNS name of the TensorFlow master
+ output_dir: The output directory to which the metric summaries are written.
+    eval_step_fns: A dictionary of functions that return a list of
+ [state, action, reward, discount, transition_type] tensors,
+ indexed by summary tag name.
+ model_rollout_fn: Model rollout fn.
+ gamma: Discount factor for the reward.
+ max_steps_per_episode: Maximum steps to run each episode for.
+ num_episodes_eval: Number of episodes to evaluate and average reward over.
+ num_episodes_videos: Number of episodes to record for video.
+ tuner_hook: A callable(average reward, global step) that updates a Vizier
+ tuner trial.
+ generate_videos: Whether to generate videos of the agent in action.
+ generate_summaries: Whether to generate summaries.
+ video_settings: Settings for generating videos of the agent.
+
+ Returns:
+ A function that evaluates a checkpoint.
+ """
+ sess = tf.Session(master, graph=tf.get_default_graph())
+ sess.run(tf.global_variables_initializer())
+ sess.run(tf.local_variables_initializer())
+ summary_writer = tf.summary.FileWriter(output_dir)
+
+ def evaluate_checkpoint(checkpoint_path):
+ """Performs a one-time evaluation of the given checkpoint.
+
+ Args:
+ checkpoint_path: Checkpoint to evaluate.
+ Returns:
+ True if the evaluation process should stop
+ """
+ restore_fn = tf.contrib.framework.assign_from_checkpoint_fn(
+ checkpoint_path,
+ uvf_utils.get_all_vars(),
+ ignore_missing_vars=True,
+ reshape_variables=False)
+ assert restore_fn is not None, 'cannot restore %s' % checkpoint_path
+ restore_fn(sess)
+ global_step = sess.run(slim.get_global_step())
+ should_stop = False
+ max_reward = -1e10
+ max_meta_reward = -1e10
+
+ for eval_tag, (eval_step, env_base,) in sorted(eval_step_fns.items()):
+ if hasattr(env_base, 'set_sess'):
+ env_base.set_sess(sess) # set session
+
+ if generate_summaries:
+ tf.logging.info(
+ '[%s] Computing average reward over %d episodes at global step %d.',
+ eval_tag, num_episodes_eval, global_step)
+ (average_reward, last_reward,
+ average_meta_reward, last_meta_reward, average_success,
+ states, actions) = eval_utils.compute_average_reward(
+ sess, env_base, eval_step, gamma, max_steps_per_episode,
+ num_episodes_eval)
+ tf.logging.info('[%s] Average reward = %f', eval_tag, average_reward)
+ tf.logging.info('[%s] Last reward = %f', eval_tag, last_reward)
+ tf.logging.info('[%s] Average meta reward = %f', eval_tag, average_meta_reward)
+ tf.logging.info('[%s] Last meta reward = %f', eval_tag, last_meta_reward)
+ tf.logging.info('[%s] Average success = %f', eval_tag, average_success)
+ if model_rollout_fn is not None:
+ preds, model_losses = eval_utils.compute_model_loss(
+ sess, model_rollout_fn, states, actions)
+ for i, (pred, state, model_loss) in enumerate(
+ zip(preds, states, model_losses)):
+ tf.logging.info('[%s] Model rollout step %d: loss=%f', eval_tag, i,
+ model_loss)
+ tf.logging.info('[%s] Model rollout step %d: pred=%s', eval_tag, i,
+ str(pred.tolist()))
+ tf.logging.info('[%s] Model rollout step %d: state=%s', eval_tag, i,
+ str(state.tolist()))
+
+ # Report the eval stats to the tuner.
+ if average_reward > max_reward:
+ max_reward = average_reward
+ if average_meta_reward > max_meta_reward:
+ max_meta_reward = average_meta_reward
+
+ for (tag, value) in [('Reward/average_%s_reward', average_reward),
+ ('Reward/last_%s_reward', last_reward),
+ ('Reward/average_%s_meta_reward', average_meta_reward),
+ ('Reward/last_%s_meta_reward', last_meta_reward),
+ ('Reward/average_%s_success', average_success)]:
+ summary_str = tf.Summary(value=[
+ tf.Summary.Value(
+ tag=tag % eval_tag,
+ simple_value=value)
+ ])
+ summary_writer.add_summary(summary_str, global_step)
+ summary_writer.flush()
+
+ if generate_videos or should_stop:
+ # Do a manual reset before generating the video to see the initial
+ # pose of the robot, towards which the reset controller is moving.
+ if hasattr(env_base, '_gym_env'):
+ tf.logging.info('Resetting before recording video')
+ if hasattr(env_base._gym_env, 'reset_model'):
+ env_base._gym_env.reset_model() # pylint: disable=protected-access
+ else:
+ env_base._gym_env.wrapped_env.reset_model()
+ video_filename = os.path.join(output_dir, 'videos',
+ '%s_step_%d.mp4' % (eval_tag,
+ global_step))
+ eval_utils.capture_video(sess, eval_step, env_base,
+ max_steps_per_episode * num_episodes_videos,
+ video_filename, video_settings,
+ reset_every=max_steps_per_episode)
+
+ should_stop = should_stop or (generate_summaries and tuner_hook and
+ tuner_hook(max_reward, global_step))
+ return bool(should_stop)
+
+ return evaluate_checkpoint
+
+
+def get_model_rollout(uvf_agent, tf_env):
+ """Model rollout function."""
+ state_spec = tf_env.observation_spec()[0]
+ action_spec = tf_env.action_spec()[0]
+ state_ph = tf.placeholder(dtype=state_spec.dtype, shape=state_spec.shape)
+ action_ph = tf.placeholder(dtype=action_spec.dtype, shape=action_spec.shape)
+
+ merged_state = uvf_agent.merged_state(state_ph)
+ diff_value = uvf_agent.critic_net(tf.expand_dims(merged_state, 0),
+ tf.expand_dims(action_ph, 0))[0]
+ diff_value = tf.cast(diff_value, dtype=state_ph.dtype)
+ state_ph.shape.assert_is_compatible_with(diff_value.shape)
+ next_state = state_ph + diff_value
+
+ def model_rollout_fn(sess, state, action):
+ return sess.run(next_state, feed_dict={state_ph: state, action_ph: action})
+
+ return model_rollout_fn
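+
+# Example (illustrative only, not part of the original file): the returned
+# model_rollout_fn predicts the next state for a single (state, action) pair
+# by adding the critic's output to the current state; `sess`, `state`, and
+# `action` are placeholders for objects built elsewhere.
+#
+#   model_rollout_fn = get_model_rollout(uvf_agent, tf_env)
+#   predicted_next_state = model_rollout_fn(sess, state, action)  # numpy in, numpy out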
+
+
+def get_eval_step(uvf_agent,
+ state_preprocess,
+ tf_env,
+ action_fn,
+ meta_action_fn,
+ environment_steps,
+ num_episodes,
+ mode='eval'):
+ """Get one-step policy/env stepping ops.
+
+ Args:
+ uvf_agent: A UVF agent.
+ tf_env: A TFEnvironment.
+ action_fn: A function to produce actions given current state.
+ meta_action_fn: A function to produce meta actions given current state.
+ environment_steps: A variable to count the number of steps in the tf_env.
+ num_episodes: A variable to count the number of episodes.
+    mode: A string, one of 'train', 'explore', or 'eval'.
+
+ Returns:
+    A step_fn that, given a TensorFlow session, executes one environment step
+    and returns the resulting [next_state, action, transition_type, reward,
+    meta_reward, discount, context, state_repr] values.
+ """
+
+ tf_env.start_collect()
+ state = tf_env.current_obs()
+ action = action_fn(state, context=None)
+ state_repr = state_preprocess(state)
+
+ action_spec = tf_env.action_spec()
+ action_ph = tf.placeholder(dtype=action_spec.dtype, shape=action_spec.shape)
+ with tf.control_dependencies([state]):
+ transition_type, reward, discount = tf_env.step(action_ph)
+
+ def increment_step():
+ return environment_steps.assign_add(1)
+
+ def increment_episode():
+ return num_episodes.assign_add(1)
+
+ def no_op_int():
+ return tf.constant(0, dtype=tf.int64)
+
+ step_cond = uvf_agent.step_cond_fn(state, action,
+ transition_type,
+ environment_steps, num_episodes)
+ reset_episode_cond = uvf_agent.reset_episode_cond_fn(
+ state, action,
+ transition_type, environment_steps, num_episodes)
+ reset_env_cond = uvf_agent.reset_env_cond_fn(state, action,
+ transition_type,
+ environment_steps, num_episodes)
+
+ increment_step_op = tf.cond(step_cond, increment_step, no_op_int)
+ with tf.control_dependencies([increment_step_op]):
+ increment_episode_op = tf.cond(reset_episode_cond, increment_episode,
+ no_op_int)
+
+ with tf.control_dependencies([reward, discount]):
+ next_state = tf_env.current_obs()
+ next_state_repr = state_preprocess(next_state)
+
+ with tf.control_dependencies([increment_episode_op]):
+ post_reward, post_meta_reward = uvf_agent.cond_begin_episode_op(
+ tf.logical_not(reset_episode_cond),
+ [state, action_ph, reward, next_state,
+ state_repr, next_state_repr],
+ mode=mode, meta_action_fn=meta_action_fn)
+
+ # Important: do manual reset after getting the final reward from the
+ # unreset environment.
+ with tf.control_dependencies([post_reward, post_meta_reward]):
+ cond_reset_op = tf.cond(reset_env_cond,
+ tf_env.reset,
+ tf_env.current_time_step)
+
+ # Add a dummy control dependency to force the reset_op to run
+ with tf.control_dependencies(cond_reset_op):
+    post_reward, post_meta_reward = map(
+        tf.identity, [post_reward, post_meta_reward])
+
+  eval_step = [next_state, action_ph, transition_type, post_reward,
+               post_meta_reward, discount, uvf_agent.context_vars, state_repr]
+
+ if callable(action):
+ def step_fn(sess):
+ action_value = action(sess)
+ return sess.run(eval_step, feed_dict={action_ph: action_value})
+ else:
+ action = uvf_utils.clip_to_spec(action, action_spec)
+ def step_fn(sess):
+ action_value = sess.run(action)
+ return sess.run(eval_step, feed_dict={action_ph: action_value})
+
+ return step_fn
+
+
+@gin.configurable
+def evaluate(checkpoint_dir,
+ eval_dir,
+ environment=None,
+ num_bin_actions=3,
+ agent_class=None,
+ meta_agent_class=None,
+ state_preprocess_class=None,
+ gamma=1.0,
+ num_episodes_eval=10,
+ eval_interval_secs=60,
+ max_number_of_evaluations=None,
+ checkpoint_timeout=None,
+ timeout_fn=None,
+ tuner_hook=None,
+ generate_videos=False,
+ generate_summaries=True,
+ num_episodes_videos=5,
+ video_settings=None,
+ eval_modes=('eval',),
+ eval_model_rollout=False,
+ policy_save_dir='policy',
+ checkpoint_range=None,
+ checkpoint_path=None,
+ max_steps_per_episode=None,
+ evaluate_nohrl=False):
+ """Loads and repeatedly evaluates a checkpointed model at a set interval.
+
+ Args:
+ checkpoint_dir: The directory where the checkpoints reside.
+ eval_dir: Directory to save the evaluation summary results.
+ environment: A BaseEnvironment to evaluate.
+ num_bin_actions: Number of bins for discretizing continuous actions.
+ agent_class: An RL agent class.
+    meta_agent_class: A Meta agent class.
+    state_preprocess_class: A state preprocessing class.
+    gamma: Discount factor for the reward.
+ num_episodes_eval: Number of episodes to evaluate and average reward over.
+ eval_interval_secs: The number of seconds between each evaluation run.
+ max_number_of_evaluations: The max number of evaluations. If None the
+ evaluation continues indefinitely.
+ checkpoint_timeout: The maximum amount of time to wait between checkpoints.
+ If left as `None`, then the process will wait indefinitely.
+ timeout_fn: Optional function to call after a timeout.
+ tuner_hook: A callable that takes the average reward and global step and
+ updates a Vizier tuner trial.
+ generate_videos: Whether to generate videos of the agent in action.
+ generate_summaries: Whether to generate summaries.
+ num_episodes_videos: Number of episodes to evaluate for generating videos.
+ video_settings: Settings for generating videos of the agent.
+ eval_modes: A tuple of eval modes.
+ eval_model_rollout: Evaluate model rollout.
+ policy_save_dir: Optional sub-directory where the policies are
+ saved.
+ checkpoint_range: Optional. If provided, evaluate all checkpoints in
+ the range.
+    checkpoint_path: Optional sub-directory specifying which checkpoint to
+      evaluate. If None, will evaluate the most recent checkpoint.
+    max_steps_per_episode: Maximum number of environment steps per episode.
+    evaluate_nohrl: Whether to additionally evaluate the low-level policy
+      without the meta agent (the 'nohrl' eval tag).
+  """
+ tf_env = create_maze_env.TFPyEnvironment(environment)
+ observation_spec = [tf_env.observation_spec()]
+ action_spec = [tf_env.action_spec()]
+
+  assert max_steps_per_episode, 'max_steps_per_episode needs to be set'
+
+ if agent_class.ACTION_TYPE == 'discrete':
+ assert False
+ else:
+ assert agent_class.ACTION_TYPE == 'continuous'
+
+ if meta_agent_class is not None:
+ assert agent_class.ACTION_TYPE == meta_agent_class.ACTION_TYPE
+ with tf.variable_scope('meta_agent'):
+ meta_agent = meta_agent_class(
+ observation_spec,
+ action_spec,
+ tf_env,
+ )
+ else:
+ meta_agent = None
+
+ with tf.variable_scope('uvf_agent'):
+ uvf_agent = agent_class(
+ observation_spec,
+ action_spec,
+ tf_env,
+ )
+ uvf_agent.set_meta_agent(agent=meta_agent)
+
+ with tf.variable_scope('state_preprocess'):
+ state_preprocess = state_preprocess_class()
+
+ # run both actor and critic once to ensure networks are initialized
+ # and gin configs will be saved
+ # pylint: disable=protected-access
+ temp_states = tf.expand_dims(
+ tf.zeros(
+ dtype=uvf_agent._observation_spec.dtype,
+ shape=uvf_agent._observation_spec.shape), 0)
+ # pylint: enable=protected-access
+ temp_actions = uvf_agent.actor_net(temp_states)
+ uvf_agent.critic_net(temp_states, temp_actions)
+
+ # create eval_step_fns for each action function
+ eval_step_fns = dict()
+ meta_agent = uvf_agent.meta_agent
+ for meta in [True] + [False] * evaluate_nohrl:
+ meta_tag = 'hrl' if meta else 'nohrl'
+ uvf_agent.set_meta_agent(meta_agent if meta else None)
+ for mode in eval_modes:
+ # wrap environment
+ wrapped_environment = uvf_agent.get_env_base_wrapper(
+ environment, mode=mode)
+ action_wrapper = lambda agent_: agent_.action
+ action_fn = action_wrapper(uvf_agent)
+ meta_action_fn = action_wrapper(meta_agent)
+ eval_step_fns['%s_%s' % (mode, meta_tag)] = (get_eval_step(
+ uvf_agent=uvf_agent,
+ state_preprocess=state_preprocess,
+ tf_env=tf_env,
+ action_fn=action_fn,
+ meta_action_fn=meta_action_fn,
+ environment_steps=tf.Variable(
+ 0, dtype=tf.int64, name='environment_steps'),
+ num_episodes=tf.Variable(0, dtype=tf.int64, name='num_episodes'),
+ mode=mode), wrapped_environment,)
+
+ model_rollout_fn = None
+ if eval_model_rollout:
+ model_rollout_fn = get_model_rollout(uvf_agent, tf_env)
+
+ tf.train.get_or_create_global_step()
+
+ if policy_save_dir:
+ checkpoint_dir = os.path.join(checkpoint_dir, policy_save_dir)
+
+ tf.logging.info('Evaluating policies at %s', checkpoint_dir)
+ tf.logging.info('Running episodes for max %d steps', max_steps_per_episode)
+
+ evaluate_checkpoint_fn = get_evaluate_checkpoint_fn(
+ '', eval_dir, eval_step_fns, model_rollout_fn, gamma,
+ max_steps_per_episode, num_episodes_eval, num_episodes_videos, tuner_hook,
+ generate_videos, generate_summaries, video_settings)
+
+ if checkpoint_path is not None:
+ checkpoint_path = os.path.join(checkpoint_dir, checkpoint_path)
+ evaluate_checkpoint_fn(checkpoint_path)
+ elif checkpoint_range is not None:
+ model_files = tf.gfile.Glob(
+ os.path.join(checkpoint_dir, 'model.ckpt-*.index'))
+ tf.logging.info('Found %s policies at %s', len(model_files), checkpoint_dir)
+ model_files = {
+ int(f.split('model.ckpt-', 1)[1].split('.', 1)[0]):
+ os.path.splitext(f)[0]
+ for f in model_files
+ }
+ model_files = {
+ k: v
+ for k, v in model_files.items()
+ if k >= checkpoint_range[0] and k <= checkpoint_range[1]
+ }
+ tf.logging.info('Evaluating %d policies at %s',
+ len(model_files), checkpoint_dir)
+ for _, checkpoint_path in sorted(model_files.items()):
+ evaluate_checkpoint_fn(checkpoint_path)
+ else:
+ eval_utils.evaluate_checkpoint_repeatedly(
+ checkpoint_dir,
+ evaluate_checkpoint_fn,
+ eval_interval_secs=eval_interval_secs,
+ max_number_of_evaluations=max_number_of_evaluations,
+ checkpoint_timeout=checkpoint_timeout,
+ timeout_fn=timeout_fn)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/run_env.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/run_env.py"
new file mode 100644
index 00000000..87fad542
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/run_env.py"
@@ -0,0 +1,129 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Random policy on an environment."""
+
+import tensorflow as tf
+import numpy as np
+import random
+
+from environments import create_maze_env
+
+app = tf.app
+flags = tf.flags
+logging = tf.logging
+
+FLAGS = flags.FLAGS
+
+flags.DEFINE_string('env', 'AntMaze', 'environment name: AntMaze, AntPush, or AntFall')
+flags.DEFINE_integer('episode_length', 500, 'episode length')
+flags.DEFINE_integer('num_episodes', 50, 'number of episodes')
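+# Example invocation (flag values are illustrative):
+#   python run_env.py --env=AntMaze --episode_length=500 --num_episodes=50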
+
+
+def get_goal_sample_fn(env_name):
+ if env_name == 'AntMaze':
+    # NOTE: When evaluating (i.e., for the metrics shown in the paper), we use
+    # the commented-out goal sampling function below. The uncommented one is
+    # only used for training.
+ #return lambda: np.array([0., 16.])
+ return lambda: np.random.uniform((-4, -4), (20, 20))
+ elif env_name == 'AntPush':
+ return lambda: np.array([0., 19.])
+ elif env_name == 'AntFall':
+ return lambda: np.array([0., 27., 4.5])
+ else:
+ assert False, 'Unknown env'
+
+
+def get_reward_fn(env_name):
+ if env_name == 'AntMaze':
+ return lambda obs, goal: -np.sum(np.square(obs[:2] - goal)) ** 0.5
+ elif env_name == 'AntPush':
+ return lambda obs, goal: -np.sum(np.square(obs[:2] - goal)) ** 0.5
+ elif env_name == 'AntFall':
+ return lambda obs, goal: -np.sum(np.square(obs[:3] - goal)) ** 0.5
+ else:
+ assert False, 'Unknown env'
+
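+# For example, with obs[:2] == [3., 4.] and goal == [0., 0.], the AntMaze
+# reward is -(3 ** 2 + 4 ** 2) ** 0.5 == -5.0, i.e. the negative Euclidean
+# distance to the goal; success_fn below counts a final reward above this
+# threshold as a success.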
+
+def success_fn(last_reward):
+ return last_reward > -5.0
+
+
+class EnvWithGoal(object):
+
+ def __init__(self, base_env, env_name):
+ self.base_env = base_env
+ self.goal_sample_fn = get_goal_sample_fn(env_name)
+ self.reward_fn = get_reward_fn(env_name)
+ self.goal = None
+
+ def reset(self):
+ obs = self.base_env.reset()
+ self.goal = self.goal_sample_fn()
+ return np.concatenate([obs, self.goal])
+
+ def step(self, a):
+ obs, _, done, info = self.base_env.step(a)
+ reward = self.reward_fn(obs, self.goal)
+ return np.concatenate([obs, self.goal]), reward, done, info
+
+ @property
+ def action_space(self):
+ return self.base_env.action_space
+
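+# Minimal usage sketch (illustrative; `.sample()` on the underlying gym action
+# space is an assumption about the wrapped environment):
+#   env = EnvWithGoal(create_maze_env.create_maze_env('AntMaze').gym, 'AntMaze')
+#   obs = env.reset()  # observation with the sampled goal appended
+#   obs, reward, done, info = env.step(env.action_space.sample())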
+
+def run_environment(env_name, episode_length, num_episodes):
+ env = EnvWithGoal(
+ create_maze_env.create_maze_env(env_name).gym,
+ env_name)
+
+ def action_fn(obs):
+ action_space = env.action_space
+ action_space_mean = (action_space.low + action_space.high) / 2.0
+ action_space_magn = (action_space.high - action_space.low) / 2.0
+ random_action = (action_space_mean +
+ action_space_magn *
+ np.random.uniform(low=-1.0, high=1.0,
+ size=action_space.shape))
+ return random_action
+
+ rewards = []
+ successes = []
+ for ep in range(num_episodes):
+ rewards.append(0.0)
+ successes.append(False)
+ obs = env.reset()
+ for _ in range(episode_length):
+ obs, reward, done, _ = env.step(action_fn(obs))
+ rewards[-1] += reward
+ successes[-1] = success_fn(reward)
+ if done:
+ break
+ logging.info('Episode %d reward: %.2f, Success: %d', ep + 1, rewards[-1], successes[-1])
+
+ logging.info('Average Reward over %d episodes: %.2f',
+ num_episodes, np.mean(rewards))
+ logging.info('Average Success over %d episodes: %.2f',
+ num_episodes, np.mean(successes))
+
+
+def main(unused_argv):
+ logging.set_verbosity(logging.INFO)
+ run_environment(FLAGS.env, FLAGS.episode_length, FLAGS.num_episodes)
+
+
+if __name__ == '__main__':
+ app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/run_eval.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/run_eval.py"
new file mode 100644
index 00000000..12f12369
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/run_eval.py"
@@ -0,0 +1,51 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+r"""Script for evaluating a UVF agent.
+
+To run locally: See scripts/local_eval.py
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+import gin.tf
+# pylint: disable=unused-import
+import eval as eval_
+# pylint: enable=unused-import
+
+flags = tf.app.flags
+FLAGS = flags.FLAGS
+
+
+def main(_):
+ tf.logging.set_verbosity(tf.logging.INFO)
+ assert FLAGS.checkpoint_dir, "Flag 'checkpoint_dir' must be set."
+ assert FLAGS.eval_dir, "Flag 'eval_dir' must be set."
+
+ if FLAGS.config_file:
+ for config_file in FLAGS.config_file:
+ gin.parse_config_file(config_file)
+ if FLAGS.params:
+ gin.parse_config(FLAGS.params)
+
+ eval_.evaluate(FLAGS.checkpoint_dir, FLAGS.eval_dir)
+
+
+if __name__ == "__main__":
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/run_train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/run_train.py"
new file mode 100644
index 00000000..1d459d60
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/run_train.py"
@@ -0,0 +1,49 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+r"""Script for training an RL agent using the UVF algorithm.
+
+To run locally: See scripts/local_train.py
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+import gin.tf
+# pylint: disable=unused-import
+import train
+# pylint: enable=unused-import
+
+flags = tf.app.flags
+FLAGS = flags.FLAGS
+
+
+def main(_):
+ tf.logging.set_verbosity(tf.logging.INFO)
+ if FLAGS.config_file:
+ for config_file in FLAGS.config_file:
+ gin.parse_config_file(config_file)
+ if FLAGS.params:
+ gin.parse_config(FLAGS.params)
+
+ assert FLAGS.train_dir, "Flag 'train_dir' must be set."
+ return train.train_uvf(FLAGS.train_dir)
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/scripts/local_eval.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/scripts/local_eval.py"
new file mode 100644
index 00000000..89ef745a
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/scripts/local_eval.py"
@@ -0,0 +1,76 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Script to run run_eval.py locally.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+import os
+from subprocess import call
+import sys
+
+CONFIGS_PATH = 'configs'
+CONTEXT_CONFIGS_PATH = 'context/configs'
+
+def main():
+ bb = './'
+ base_num_args = 6
+ if len(sys.argv) < base_num_args:
+ print(
+ "usage: python %s "
+ " [params...]"
+ % sys.argv[0])
+ sys.exit(0)
+ exp = sys.argv[1]
+ context_setting = sys.argv[2]
+ context = sys.argv[3]
+ agent = sys.argv[4]
+ assert sys.argv[5] in ["suite"], "args[5] must be `suite'"
+ suite = ""
+ binary = "python {bb}/run_eval{suite}.py ".format(bb=bb, suite=suite)
+
+ h = os.environ["HOME"]
+ ucp = CONFIGS_PATH
+ ccp = CONTEXT_CONFIGS_PATH
+ extra = ''
+ command_str = ("{binary} "
+ "--logtostderr "
+ "--checkpoint_dir={h}/tmp/{context_setting}/{context}/{agent}/{exp}/train "
+ "--eval_dir={h}/tmp/{context_setting}/{context}/{agent}/{exp}/eval "
+ "--config_file={ucp}/{agent}.gin "
+ "--config_file={ucp}/eval_{extra}uvf.gin "
+ "--config_file={ccp}/{context_setting}.gin "
+ "--config_file={ccp}/{context}.gin ").format(
+ h=h,
+ ucp=ucp,
+ ccp=ccp,
+ context_setting=context_setting,
+ context=context,
+ agent=agent,
+ extra=extra,
+ suite=suite,
+ exp=exp,
+ binary=binary)
+ for extra_arg in sys.argv[base_num_args:]:
+ command_str += "--params='%s' " % extra_arg
+
+ print(command_str)
+ call(command_str, shell=True)
+
+
+if __name__ == "__main__":
+ main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/scripts/local_train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/scripts/local_train.py"
new file mode 100644
index 00000000..986bb614
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/scripts/local_train.py"
@@ -0,0 +1,76 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Script to run run_train.py locally.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+import os
+import random
+from subprocess import call
+import sys
+
+CONFIGS_PATH = './configs'
+CONTEXT_CONFIGS_PATH = './context/configs'
+
+def main():
+ bb = './'
+ base_num_args = 6
+ if len(sys.argv) < base_num_args:
+ print(
+ "usage: python %s "
+ " [params...]"
+ % sys.argv[0])
+ sys.exit(0)
+ exp = sys.argv[1] # Name for experiment, e.g. 'test001'
+ context_setting = sys.argv[2] # Context setting, e.g. 'hiro_orig'
+ context = sys.argv[3] # Environment-specific context, e.g. 'ant_maze'
+ agent = sys.argv[4] # Agent settings, e.g. 'base_uvf'
+ assert sys.argv[5] in ["suite"], "args[5] must be `suite'"
+ suite = ""
+ binary = "python {bb}/run_train{suite}.py ".format(bb=bb, suite=suite)
+
+ h = os.environ["HOME"]
+ ucp = CONFIGS_PATH
+ ccp = CONTEXT_CONFIGS_PATH
+ extra = ''
+ port = random.randint(2000, 8000)
+ command_str = ("{binary} "
+ "--train_dir={h}/tmp/{context_setting}/{context}/{agent}/{exp}/train "
+ "--config_file={ucp}/{agent}.gin "
+ "--config_file={ucp}/train_{extra}uvf.gin "
+ "--config_file={ccp}/{context_setting}.gin "
+ "--config_file={ccp}/{context}.gin "
+ "--summarize_gradients=False "
+ "--save_interval_secs=60 "
+ "--save_summaries_secs=1 "
+ "--master=local "
+ "--alsologtostderr ").format(h=h, ucp=ucp,
+ context_setting=context_setting,
+ context=context, ccp=ccp,
+ suite=suite, agent=agent, extra=extra,
+ exp=exp, binary=binary,
+ port=port)
+ for extra_arg in sys.argv[base_num_args:]:
+ command_str += "--params='%s' " % extra_arg
+
+ print(command_str)
+ call(command_str, shell=True)
+
+
+if __name__ == "__main__":
+ main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/train.py"
new file mode 100644
index 00000000..a40e81db
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/train.py"
@@ -0,0 +1,670 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+r"""Script for training an RL agent using the UVF algorithm.
+
+To run locally: See run_train.py
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import time
+import tensorflow as tf
+slim = tf.contrib.slim
+
+import gin.tf
+# pylint: disable=unused-import
+import train_utils
+import agent as agent_
+from agents import circular_buffer
+from utils import utils as uvf_utils
+from environments import create_maze_env
+# pylint: enable=unused-import
+
+
+flags = tf.app.flags
+
+FLAGS = flags.FLAGS
+flags.DEFINE_string('goal_sample_strategy', 'sample',
+ 'None, sample, FuN')
+
+LOAD_PATH = None
+
+
+def collect_experience(tf_env, agent, meta_agent, state_preprocess,
+ replay_buffer, meta_replay_buffer,
+ action_fn, meta_action_fn,
+ environment_steps, num_episodes, num_resets,
+ episode_rewards, episode_meta_rewards,
+ store_context,
+ disable_agent_reset):
+ """Collect experience in a tf_env into a replay_buffer using action_fn.
+
+ Args:
+ tf_env: A TFEnvironment.
+ agent: A UVF agent.
+    meta_agent: A Meta Agent.
+    state_preprocess: A state preprocessing network applied to raw states.
+    replay_buffer: A Replay buffer to collect experience in.
+ meta_replay_buffer: A Replay buffer to collect meta agent experience in.
+ action_fn: A function to produce actions given current state.
+ meta_action_fn: A function to produce meta actions given current state.
+ environment_steps: A variable to count the number of steps in the tf_env.
+ num_episodes: A variable to count the number of episodes.
+    num_resets: A variable to count the number of resets.
+    episode_rewards: A variable holding the rewards of recent episodes.
+    episode_meta_rewards: A variable holding the meta rewards of recent
+      episodes.
+    store_context: A boolean indicating whether to store the agent contexts in
+      the replay buffers.
+    disable_agent_reset: A boolean that disables the agent from resetting.
+
+ Returns:
+    A collect_experience_op that executes one environment step and stores the
+    resulting transitions into the replay buffers.
+ """
+ tf_env.start_collect()
+ state = tf_env.current_obs()
+ state_repr = state_preprocess(state)
+ action = action_fn(state, context=None)
+
+ with tf.control_dependencies([state]):
+ transition_type, reward, discount = tf_env.step(action)
+
+ def increment_step():
+ return environment_steps.assign_add(1)
+
+ def increment_episode():
+ return num_episodes.assign_add(1)
+
+ def increment_reset():
+ return num_resets.assign_add(1)
+
+ def update_episode_rewards(context_reward, meta_reward, reset):
+ new_episode_rewards = tf.concat(
+ [episode_rewards[:1] + context_reward, episode_rewards[1:]], 0)
+ new_episode_meta_rewards = tf.concat(
+ [episode_meta_rewards[:1] + meta_reward,
+ episode_meta_rewards[1:]], 0)
+ return tf.group(
+ episode_rewards.assign(
+ tf.cond(reset,
+ lambda: tf.concat([[0.], episode_rewards[:-1]], 0),
+ lambda: new_episode_rewards)),
+ episode_meta_rewards.assign(
+ tf.cond(reset,
+ lambda: tf.concat([[0.], episode_meta_rewards[:-1]], 0),
+ lambda: new_episode_meta_rewards)))
+
+ def no_op_int():
+ return tf.constant(0, dtype=tf.int64)
+
+ step_cond = agent.step_cond_fn(state, action,
+ transition_type,
+ environment_steps, num_episodes)
+ reset_episode_cond = agent.reset_episode_cond_fn(
+ state, action,
+ transition_type, environment_steps, num_episodes)
+ reset_env_cond = agent.reset_env_cond_fn(state, action,
+ transition_type,
+ environment_steps, num_episodes)
+
+ increment_step_op = tf.cond(step_cond, increment_step, no_op_int)
+ increment_episode_op = tf.cond(reset_episode_cond, increment_episode,
+ no_op_int)
+ increment_reset_op = tf.cond(reset_env_cond, increment_reset, no_op_int)
+ increment_op = tf.group(increment_step_op, increment_episode_op,
+ increment_reset_op)
+
+ with tf.control_dependencies([increment_op, reward, discount]):
+ next_state = tf_env.current_obs()
+ next_state_repr = state_preprocess(next_state)
+ next_reset_episode_cond = tf.logical_or(
+ agent.reset_episode_cond_fn(
+ state, action,
+ transition_type, environment_steps, num_episodes),
+ tf.equal(discount, 0.0))
+
+ if store_context:
+ context = [tf.identity(var) + tf.zeros_like(var) for var in agent.context_vars]
+ meta_context = [tf.identity(var) + tf.zeros_like(var) for var in meta_agent.context_vars]
+ else:
+ context = []
+ meta_context = []
+ with tf.control_dependencies([next_state] + context + meta_context):
+ if disable_agent_reset:
+ collect_experience_ops = [tf.no_op()] # don't reset agent
+ else:
+ collect_experience_ops = agent.cond_begin_episode_op(
+ tf.logical_not(reset_episode_cond),
+ [state, action, reward, next_state,
+ state_repr, next_state_repr],
+ mode='explore', meta_action_fn=meta_action_fn)
+ context_reward, meta_reward = collect_experience_ops
+ collect_experience_ops = list(collect_experience_ops)
+ collect_experience_ops.append(
+ update_episode_rewards(tf.reduce_sum(context_reward), meta_reward,
+ reset_episode_cond))
+
+ meta_action_every_n = agent.tf_context.meta_action_every_n
+ with tf.control_dependencies(collect_experience_ops):
+ transition = [state, action, reward, discount, next_state]
+
+ meta_action = tf.to_float(
+ tf.concat(context, -1)) # Meta agent action is low-level context
+
+ meta_end = tf.logical_and( # End of meta-transition.
+ tf.equal(agent.tf_context.t % meta_action_every_n, 1),
+ agent.tf_context.t > 1)
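+  # The variables below accumulate the low-level states and actions observed
+  # within one meta-action period, so that a complete meta-transition can be
+  # written to the meta replay buffer when the period ends.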
+ with tf.variable_scope(tf.get_variable_scope(), reuse=tf.AUTO_REUSE):
+ states_var = tf.get_variable('states_var',
+ [meta_action_every_n, state.shape[-1]],
+ state.dtype)
+ actions_var = tf.get_variable('actions_var',
+ [meta_action_every_n, action.shape[-1]],
+ action.dtype)
+ state_var = tf.get_variable('state_var', state.shape, state.dtype)
+ reward_var = tf.get_variable('reward_var', reward.shape, reward.dtype)
+ meta_action_var = tf.get_variable('meta_action_var',
+ meta_action.shape, meta_action.dtype)
+ meta_context_var = [
+ tf.get_variable('meta_context_var%d' % idx,
+ meta_context[idx].shape, meta_context[idx].dtype)
+ for idx in range(len(meta_context))]
+
+ actions_var_upd = tf.scatter_update(
+ actions_var, (agent.tf_context.t - 2) % meta_action_every_n, action)
+ with tf.control_dependencies([actions_var_upd]):
+ actions = tf.identity(actions_var) + tf.zeros_like(actions_var)
+ meta_reward = tf.identity(meta_reward) + tf.zeros_like(meta_reward)
+ meta_reward = tf.reshape(meta_reward, reward.shape)
+
+ reward = 0.1 * meta_reward
+ meta_transition = [state_var, meta_action_var,
+ reward_var + reward,
+ discount * (1 - tf.to_float(next_reset_episode_cond)),
+ next_state]
+ meta_transition.extend([states_var, actions])
+ if store_context: # store current and next context into replay
+ transition += context + list(agent.context_vars)
+ meta_transition += meta_context_var + list(meta_agent.context_vars)
+
+ meta_step_cond = tf.squeeze(tf.logical_and(step_cond, tf.logical_or(next_reset_episode_cond, meta_end)))
+
+ collect_experience_op = tf.group(
+ replay_buffer.maybe_add(transition, step_cond),
+ meta_replay_buffer.maybe_add(meta_transition, meta_step_cond),
+ )
+
+ with tf.control_dependencies([collect_experience_op]):
+ collect_experience_op = tf.cond(reset_env_cond,
+ tf_env.reset,
+ tf_env.current_time_step)
+
+ meta_period = tf.equal(agent.tf_context.t % meta_action_every_n, 1)
+ states_var_upd = tf.scatter_update(
+ states_var, (agent.tf_context.t - 1) % meta_action_every_n,
+ next_state)
+ state_var_upd = tf.assign(
+ state_var,
+ tf.cond(meta_period, lambda: next_state, lambda: state_var))
+ reward_var_upd = tf.assign(
+ reward_var,
+ tf.cond(meta_period,
+ lambda: tf.zeros_like(reward_var),
+ lambda: reward_var + reward))
+ meta_action = tf.to_float(tf.concat(agent.context_vars, -1))
+ meta_action_var_upd = tf.assign(
+ meta_action_var,
+ tf.cond(meta_period, lambda: meta_action, lambda: meta_action_var))
+ meta_context_var_upd = [
+ tf.assign(
+ meta_context_var[idx],
+ tf.cond(meta_period,
+ lambda: meta_agent.context_vars[idx],
+ lambda: meta_context_var[idx]))
+ for idx in range(len(meta_context))]
+
+ return tf.group(
+ collect_experience_op,
+ states_var_upd,
+ state_var_upd,
+ reward_var_upd,
+ meta_action_var_upd,
+ *meta_context_var_upd)
+
+
+def sample_best_meta_actions(state_reprs, next_state_reprs, prev_meta_actions,
+ low_states, low_actions, low_state_reprs,
+ inverse_dynamics, uvf_agent, k=10):
+ """Return meta-actions which approximately maximize low-level log-probs."""
+ sampled_actions = inverse_dynamics.sample(state_reprs, next_state_reprs, k, prev_meta_actions)
+ sampled_actions = tf.stop_gradient(sampled_actions)
+ sampled_log_probs = tf.reshape(uvf_agent.log_probs(
+ tf.tile(low_states, [k, 1, 1]),
+ tf.tile(low_actions, [k, 1, 1]),
+ tf.tile(low_state_reprs, [k, 1, 1]),
+ [tf.reshape(sampled_actions, [-1, sampled_actions.shape[-1]])]),
+ [k, low_states.shape[0],
+ low_states.shape[1], -1])
+ fitness = tf.reduce_sum(sampled_log_probs, [2, 3])
+ best_actions = tf.argmax(fitness, 0)
+ actions = tf.gather_nd(
+ sampled_actions,
+ tf.stack([best_actions,
+ tf.range(prev_meta_actions.shape[0], dtype=tf.int64)], -1))
+ return actions
+
+
+@gin.configurable
+def train_uvf(train_dir,
+ environment=None,
+ num_bin_actions=3,
+ agent_class=None,
+ meta_agent_class=None,
+ state_preprocess_class=None,
+ inverse_dynamics_class=None,
+ exp_action_wrapper=None,
+ replay_buffer=None,
+ meta_replay_buffer=None,
+ replay_num_steps=1,
+ meta_replay_num_steps=1,
+ critic_optimizer=None,
+ actor_optimizer=None,
+ meta_critic_optimizer=None,
+ meta_actor_optimizer=None,
+ repr_optimizer=None,
+ relabel_contexts=False,
+ meta_relabel_contexts=False,
+ batch_size=64,
+ repeat_size=0,
+ num_episodes_train=2000,
+ initial_episodes=2,
+ initial_steps=None,
+ num_updates_per_observation=1,
+ num_collect_per_update=1,
+ num_collect_per_meta_update=1,
+ gamma=1.0,
+ meta_gamma=1.0,
+ reward_scale_factor=1.0,
+ target_update_period=1,
+ should_stop_early=None,
+ clip_gradient_norm=0.0,
+ summarize_gradients=False,
+ debug_summaries=False,
+ log_every_n_steps=100,
+ prefetch_queue_capacity=2,
+ policy_save_dir='policy',
+ save_policy_every_n_steps=1000,
+ save_policy_interval_secs=0,
+ replay_context_ratio=0.0,
+ next_state_as_context_ratio=0.0,
+ state_index=0,
+ zero_timer_ratio=0.0,
+ timer_index=-1,
+ debug=False,
+ max_policies_to_save=None,
+ max_steps_per_episode=None,
+ load_path=LOAD_PATH):
+ """Train an agent."""
+ tf_env = create_maze_env.TFPyEnvironment(environment)
+ observation_spec = [tf_env.observation_spec()]
+ action_spec = [tf_env.action_spec()]
+
+ max_steps_per_episode = max_steps_per_episode or tf_env.pyenv.max_episode_steps
+
+  assert max_steps_per_episode, 'max_steps_per_episode needs to be set'
+
+ if initial_steps is None:
+ initial_steps = initial_episodes * max_steps_per_episode
+
+ if agent_class.ACTION_TYPE == 'discrete':
+ assert False
+ else:
+ assert agent_class.ACTION_TYPE == 'continuous'
+
+ assert agent_class.ACTION_TYPE == meta_agent_class.ACTION_TYPE
+ with tf.variable_scope('meta_agent'):
+ meta_agent = meta_agent_class(
+ observation_spec,
+ action_spec,
+ tf_env,
+ debug_summaries=debug_summaries)
+ meta_agent.set_replay(replay=meta_replay_buffer)
+
+ with tf.variable_scope('uvf_agent'):
+ uvf_agent = agent_class(
+ observation_spec,
+ action_spec,
+ tf_env,
+ debug_summaries=debug_summaries)
+ uvf_agent.set_meta_agent(agent=meta_agent)
+ uvf_agent.set_replay(replay=replay_buffer)
+
+ with tf.variable_scope('state_preprocess'):
+ state_preprocess = state_preprocess_class()
+
+ with tf.variable_scope('inverse_dynamics'):
+ inverse_dynamics = inverse_dynamics_class(
+ meta_agent.sub_context_as_action_specs[0])
+
+ # Create counter variables
+ global_step = tf.contrib.framework.get_or_create_global_step()
+ num_episodes = tf.Variable(0, dtype=tf.int64, name='num_episodes')
+ num_resets = tf.Variable(0, dtype=tf.int64, name='num_resets')
+ num_updates = tf.Variable(0, dtype=tf.int64, name='num_updates')
+ num_meta_updates = tf.Variable(0, dtype=tf.int64, name='num_meta_updates')
+ episode_rewards = tf.Variable([0.] * 100, name='episode_rewards')
+ episode_meta_rewards = tf.Variable([0.] * 100, name='episode_meta_rewards')
+
+ # Create counter variables summaries
+ train_utils.create_counter_summaries([
+ ('environment_steps', global_step),
+ ('num_episodes', num_episodes),
+ ('num_resets', num_resets),
+ ('num_updates', num_updates),
+ ('num_meta_updates', num_meta_updates),
+ ('replay_buffer_adds', replay_buffer.get_num_adds()),
+ ('meta_replay_buffer_adds', meta_replay_buffer.get_num_adds()),
+ ])
+
+ tf.summary.scalar('avg_episode_rewards',
+ tf.reduce_mean(episode_rewards[1:]))
+ tf.summary.scalar('avg_episode_meta_rewards',
+ tf.reduce_mean(episode_meta_rewards[1:]))
+ tf.summary.histogram('episode_rewards', episode_rewards[1:])
+ tf.summary.histogram('episode_meta_rewards', episode_meta_rewards[1:])
+
+ # Create init ops
+ action_fn = uvf_agent.action
+ action_fn = uvf_agent.add_noise_fn(action_fn, global_step=None)
+ meta_action_fn = meta_agent.action
+ meta_action_fn = meta_agent.add_noise_fn(meta_action_fn, global_step=None)
+ meta_actions_fn = meta_agent.actions
+ meta_actions_fn = meta_agent.add_noise_fn(meta_actions_fn, global_step=None)
+ init_collect_experience_op = collect_experience(
+ tf_env,
+ uvf_agent,
+ meta_agent,
+ state_preprocess,
+ replay_buffer,
+ meta_replay_buffer,
+ action_fn,
+ meta_action_fn,
+ environment_steps=global_step,
+ num_episodes=num_episodes,
+ num_resets=num_resets,
+ episode_rewards=episode_rewards,
+ episode_meta_rewards=episode_meta_rewards,
+ store_context=True,
+ disable_agent_reset=False,
+ )
+
+ # Create train ops
+ collect_experience_op = collect_experience(
+ tf_env,
+ uvf_agent,
+ meta_agent,
+ state_preprocess,
+ replay_buffer,
+ meta_replay_buffer,
+ action_fn,
+ meta_action_fn,
+ environment_steps=global_step,
+ num_episodes=num_episodes,
+ num_resets=num_resets,
+ episode_rewards=episode_rewards,
+ episode_meta_rewards=episode_meta_rewards,
+ store_context=True,
+ disable_agent_reset=False,
+ )
+
+ train_op_list = []
+ repr_train_op = tf.constant(0.0)
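+  # The loop below builds two training pipelines from the same pattern: one for
+  # the meta (high-level) agent using the meta replay buffer, and one for the
+  # low-level UVF agent using the regular replay buffer.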
+ for mode in ['meta', 'nometa']:
+ if mode == 'meta':
+ agent = meta_agent
+ buff = meta_replay_buffer
+ critic_opt = meta_critic_optimizer
+ actor_opt = meta_actor_optimizer
+ relabel = meta_relabel_contexts
+ num_steps = meta_replay_num_steps
+      my_gamma = meta_gamma
+ n_updates = num_meta_updates
+ else:
+ agent = uvf_agent
+ buff = replay_buffer
+ critic_opt = critic_optimizer
+ actor_opt = actor_optimizer
+ relabel = relabel_contexts
+ num_steps = replay_num_steps
+ my_gamma = gamma
+ n_updates = num_updates
+
+ with tf.name_scope(mode):
+ batch = buff.get_random_batch(batch_size, num_steps=num_steps)
+ states, actions, rewards, discounts, next_states = batch[:5]
+ with tf.name_scope('Reward'):
+ tf.summary.scalar('average_step_reward', tf.reduce_mean(rewards))
+ rewards *= reward_scale_factor
+ batch_queue = slim.prefetch_queue.prefetch_queue(
+ [states, actions, rewards, discounts, next_states] + batch[5:],
+ capacity=prefetch_queue_capacity,
+ name='batch_queue')
+
+ batch_dequeue = batch_queue.dequeue()
+ if repeat_size > 0:
+ batch_dequeue = [
+ tf.tile(batch, (repeat_size+1,) + (1,) * (batch.shape.ndims - 1))
+ for batch in batch_dequeue
+ ]
+ batch_size *= (repeat_size + 1)
+ states, actions, rewards, discounts, next_states = batch_dequeue[:5]
+ if mode == 'meta':
+ low_states = batch_dequeue[5]
+ low_actions = batch_dequeue[6]
+ low_state_reprs = state_preprocess(low_states)
+ state_reprs = state_preprocess(states)
+ next_state_reprs = state_preprocess(next_states)
+
+ if mode == 'meta': # Re-label meta-action
+ prev_actions = actions
+ if FLAGS.goal_sample_strategy == 'None':
+ pass
+ elif FLAGS.goal_sample_strategy == 'FuN':
+ actions = inverse_dynamics.sample(state_reprs, next_state_reprs, 1, prev_actions, sc=0.1)
+ actions = tf.stop_gradient(actions)
+ elif FLAGS.goal_sample_strategy == 'sample':
+ actions = sample_best_meta_actions(state_reprs, next_state_reprs, prev_actions,
+ low_states, low_actions, low_state_reprs,
+ inverse_dynamics, uvf_agent, k=10)
+ else:
+ assert False
+
+ if state_preprocess.trainable and mode == 'meta':
+ # Representation learning is based on meta-transitions, but is trained
+ # along with low-level policy updates.
+ repr_loss, _, _ = state_preprocess.loss(states, next_states, low_actions, low_states)
+ repr_train_op = slim.learning.create_train_op(
+ repr_loss,
+ repr_optimizer,
+ global_step=None,
+ update_ops=None,
+ summarize_gradients=summarize_gradients,
+ clip_gradient_norm=clip_gradient_norm,
+ variables_to_train=state_preprocess.get_trainable_vars(),)
+
+ # Get contexts for training
+ contexts, next_contexts = agent.sample_contexts(
+ mode='train', batch_size=batch_size,
+ state=states, next_state=next_states,
+ )
+      # If not re-labeling, use the contexts stored in the replay buffer
+      # (otherwise keep the freshly sampled ones, in the style of TDM or HER).
+      if not relabel:
+ contexts, next_contexts = (
+ batch_dequeue[-2*len(contexts):-1*len(contexts)],
+ batch_dequeue[-1*len(contexts):])
+
+ merged_states = agent.merged_states(states, contexts)
+ merged_next_states = agent.merged_states(next_states, next_contexts)
+ if mode == 'nometa':
+ context_rewards, context_discounts = agent.compute_rewards(
+ 'train', state_reprs, actions, rewards, next_state_reprs, contexts)
+ elif mode == 'meta': # Meta-agent uses sum of rewards, not context-specific rewards.
+ _, context_discounts = agent.compute_rewards(
+ 'train', states, actions, rewards, next_states, contexts)
+ context_rewards = rewards
+
+ if agent.gamma_index is not None:
+ context_discounts *= tf.cast(
+ tf.reshape(contexts[agent.gamma_index], (-1,)),
+ dtype=context_discounts.dtype)
+        else:
+          context_discounts *= my_gamma
+
+ critic_loss = agent.critic_loss(merged_states, actions,
+ context_rewards, context_discounts,
+ merged_next_states)
+
+ critic_loss = tf.reduce_mean(critic_loss)
+
+ actor_loss = agent.actor_loss(merged_states, actions,
+ context_rewards, context_discounts,
+ merged_next_states)
+ actor_loss *= tf.to_float( # Only update actor every N steps.
+ tf.equal(n_updates % target_update_period, 0))
+
+ critic_train_op = slim.learning.create_train_op(
+ critic_loss,
+ critic_opt,
+ global_step=n_updates,
+ update_ops=None,
+ summarize_gradients=summarize_gradients,
+ clip_gradient_norm=clip_gradient_norm,
+ variables_to_train=agent.get_trainable_critic_vars(),)
+ critic_train_op = uvf_utils.tf_print(
+ critic_train_op, [critic_train_op],
+ message='critic_loss',
+ print_freq=1000,
+ name='critic_loss')
+ train_op_list.append(critic_train_op)
+ if actor_loss is not None:
+ actor_train_op = slim.learning.create_train_op(
+ actor_loss,
+ actor_opt,
+ global_step=None,
+ update_ops=None,
+ summarize_gradients=summarize_gradients,
+ clip_gradient_norm=clip_gradient_norm,
+ variables_to_train=agent.get_trainable_actor_vars(),)
+ actor_train_op = uvf_utils.tf_print(
+ actor_train_op, [actor_train_op],
+ message='actor_loss',
+ print_freq=1000,
+ name='actor_loss')
+ train_op_list.append(actor_train_op)
+
+ assert len(train_op_list) == 4
+ # Update targets should happen after the networks have been updated.
+ with tf.control_dependencies(train_op_list[2:]):
+ update_targets_op = uvf_utils.periodically(
+ uvf_agent.update_targets, target_update_period, 'update_targets')
+ if meta_agent is not None:
+ with tf.control_dependencies(train_op_list[:2]):
+ update_meta_targets_op = uvf_utils.periodically(
+ meta_agent.update_targets, target_update_period, 'update_targets')
+
+ assert_op = tf.Assert( # Hack to get training to stop.
+ tf.less_equal(global_step, 200 + num_episodes_train * max_steps_per_episode),
+ [global_step])
+ with tf.control_dependencies([update_targets_op, assert_op]):
+ train_op = tf.add_n(train_op_list[2:], name='post_update_targets')
+ # Representation training steps on every low-level policy training step.
+ train_op += repr_train_op
+ with tf.control_dependencies([update_meta_targets_op, assert_op]):
+ meta_train_op = tf.add_n(train_op_list[:2],
+ name='post_update_meta_targets')
+
+ if debug_summaries:
+    train_utils.gen_debug_batch_summaries(batch)
+ slim.summaries.add_histogram_summaries(
+ uvf_agent.get_trainable_critic_vars(), 'critic_vars')
+ slim.summaries.add_histogram_summaries(
+ uvf_agent.get_trainable_actor_vars(), 'actor_vars')
+
+ train_ops = train_utils.TrainOps(train_op, meta_train_op,
+ collect_experience_op)
+
+ policy_save_path = os.path.join(train_dir, policy_save_dir, 'model.ckpt')
+ policy_vars = uvf_agent.get_actor_vars() + meta_agent.get_actor_vars() + [
+ global_step, num_episodes, num_resets
+ ] + list(uvf_agent.context_vars) + list(meta_agent.context_vars) + state_preprocess.get_trainable_vars()
+ # add critic vars, since some test evaluation depends on them
+ policy_vars += uvf_agent.get_trainable_critic_vars() + meta_agent.get_trainable_critic_vars()
+ policy_saver = tf.train.Saver(
+ policy_vars, max_to_keep=max_policies_to_save, sharded=False)
+
+ lowlevel_vars = (uvf_agent.get_actor_vars() +
+ uvf_agent.get_trainable_critic_vars() +
+ state_preprocess.get_trainable_vars())
+ lowlevel_saver = tf.train.Saver(lowlevel_vars)
+
+ def policy_save_fn(sess):
+ policy_saver.save(
+ sess, policy_save_path, global_step=global_step, write_meta_graph=False)
+ if save_policy_interval_secs > 0:
+ tf.logging.info(
+ 'Wait %d secs after save policy.' % save_policy_interval_secs)
+ time.sleep(save_policy_interval_secs)
+
+ train_step_fn = train_utils.TrainStep(
+ max_number_of_steps=num_episodes_train * max_steps_per_episode + 100,
+ num_updates_per_observation=num_updates_per_observation,
+ num_collect_per_update=num_collect_per_update,
+ num_collect_per_meta_update=num_collect_per_meta_update,
+ log_every_n_steps=log_every_n_steps,
+ policy_save_fn=policy_save_fn,
+ save_policy_every_n_steps=save_policy_every_n_steps,
+ should_stop_early=should_stop_early).train_step
+
+ local_init_op = tf.local_variables_initializer()
+ init_targets_op = tf.group(uvf_agent.update_targets(1.0),
+ meta_agent.update_targets(1.0))
+
+ def initialize_training_fn(sess):
+ """Initialize training function."""
+ sess.run(local_init_op)
+ sess.run(init_targets_op)
+ if load_path:
+ tf.logging.info('Restoring low-level from %s' % load_path)
+ lowlevel_saver.restore(sess, load_path)
+ global_step_value = sess.run(global_step)
+ assert global_step_value == 0, 'Global step should be zero.'
+ collect_experience_call = sess.make_callable(
+ init_collect_experience_op)
+
+ for _ in range(initial_steps):
+ collect_experience_call()
+
+ train_saver = tf.train.Saver(max_to_keep=2, sharded=True)
+ tf.logging.info('train dir: %s', train_dir)
+ return slim.learning.train(
+ train_ops,
+ train_dir,
+ train_step_fn=train_step_fn,
+ save_interval_secs=FLAGS.save_interval_secs,
+ saver=train_saver,
+ log_every_n_steps=0,
+ global_step=global_step,
+ master="",
+ is_chief=(FLAGS.task == 0),
+ save_summaries_secs=FLAGS.save_summaries_secs,
+ init_fn=initialize_training_fn)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/train_utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/train_utils.py"
new file mode 100644
index 00000000..ae23ef9f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/train_utils.py"
@@ -0,0 +1,175 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+r""""""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from collections import namedtuple
+import os
+import time
+
+import tensorflow as tf
+
+import gin.tf
+
+flags = tf.app.flags
+
+
+flags.DEFINE_multi_string('config_file', None,
+ 'List of paths to the config files.')
+flags.DEFINE_multi_string('params', None,
+ 'Newline separated list of Gin parameter bindings.')
+
+flags.DEFINE_string('train_dir', None,
+ 'Directory for writing logs/summaries during training.')
+flags.DEFINE_string('master', 'local',
+ 'BNS name of the TensorFlow master to use.')
+flags.DEFINE_integer('task', 0, 'task id')
+flags.DEFINE_integer('save_interval_secs', 300, 'The frequency at which '
+ 'checkpoints are saved, in seconds.')
+flags.DEFINE_integer('save_summaries_secs', 30, 'The frequency at which '
+ 'summaries are saved, in seconds.')
+flags.DEFINE_boolean('summarize_gradients', False,
+ 'Whether to generate gradient summaries.')
+
+FLAGS = flags.FLAGS
+
+TrainOps = namedtuple('TrainOps',
+ ['train_op', 'meta_train_op', 'collect_experience_op'])
+
+
+class TrainStep(object):
+ """Handles training step."""
+
+ def __init__(self,
+ max_number_of_steps=0,
+ num_updates_per_observation=1,
+ num_collect_per_update=1,
+ num_collect_per_meta_update=1,
+ log_every_n_steps=1,
+ policy_save_fn=None,
+ save_policy_every_n_steps=0,
+ should_stop_early=None):
+ """Returns a function that is executed at each step of slim training.
+
+ Args:
+ max_number_of_steps: Optional maximum number of train steps to take.
+      num_updates_per_observation: Number of updates per observation.
+      num_collect_per_update: Number of experience-collection steps per
+        low-level policy update.
+      num_collect_per_meta_update: Number of experience-collection steps per
+        meta-policy update.
+      log_every_n_steps: The frequency, in terms of global steps, at which the
+        loss and global step are logged.
+ policy_save_fn: A tf.Saver().save function to save the policy.
+ save_policy_every_n_steps: How frequently to save the policy.
+ should_stop_early: Optional hook to report whether training should stop.
+ Raises:
+ ValueError: If policy_save_fn is not provided when
+ save_policy_every_n_steps > 0.
+ """
+ if save_policy_every_n_steps and policy_save_fn is None:
+ raise ValueError(
+ 'policy_save_fn is required when save_policy_every_n_steps > 0')
+ self.max_number_of_steps = max_number_of_steps
+ self.num_updates_per_observation = num_updates_per_observation
+ self.num_collect_per_update = num_collect_per_update
+ self.num_collect_per_meta_update = num_collect_per_meta_update
+ self.log_every_n_steps = log_every_n_steps
+ self.policy_save_fn = policy_save_fn
+ self.save_policy_every_n_steps = save_policy_every_n_steps
+ self.should_stop_early = should_stop_early
+ self.last_global_step_val = 0
+ self.train_op_fn = None
+ self.collect_and_train_fn = None
+ tf.logging.info('Training for %d max_number_of_steps',
+ self.max_number_of_steps)
+
+ def train_step(self, sess, train_ops, global_step, _):
+ """This function will be called at each step of training.
+
+ This represents one step of the DDPG algorithm and can include:
+ 1. collect a transition
+ 2. update the target network
+ 3. train the actor
+ 4. train the critic
+
+ Args:
+ sess: A Tensorflow session.
+      train_ops: A TrainOps tuple of train ops to run.
+ global_step: The global step.
+
+ Returns:
+      A scalar total loss and a boolean indicating whether training should
+      stop.
+ """
+ start_time = time.time()
+ if self.train_op_fn is None:
+ self.train_op_fn = sess.make_callable([train_ops.train_op, global_step])
+ self.meta_train_op_fn = sess.make_callable([train_ops.meta_train_op, global_step])
+ self.collect_fn = sess.make_callable([train_ops.collect_experience_op, global_step])
+ self.collect_and_train_fn = sess.make_callable(
+ [train_ops.train_op, global_step, train_ops.collect_experience_op])
+ self.collect_and_meta_train_fn = sess.make_callable(
+ [train_ops.meta_train_op, global_step, train_ops.collect_experience_op])
+ for _ in range(self.num_collect_per_update - 1):
+ self.collect_fn()
+ for _ in range(self.num_updates_per_observation - 1):
+ self.train_op_fn()
+
+ total_loss, global_step_val, _ = self.collect_and_train_fn()
+ if (global_step_val // self.num_collect_per_meta_update !=
+ self.last_global_step_val // self.num_collect_per_meta_update):
+ self.meta_train_op_fn()
+
+ time_elapsed = time.time() - start_time
+ should_stop = False
+ if self.max_number_of_steps:
+ should_stop = global_step_val >= self.max_number_of_steps
+ if global_step_val != self.last_global_step_val:
+ if (self.save_policy_every_n_steps and
+ global_step_val // self.save_policy_every_n_steps !=
+ self.last_global_step_val // self.save_policy_every_n_steps):
+ self.policy_save_fn(sess)
+
+ if (self.log_every_n_steps and
+ global_step_val % self.log_every_n_steps == 0):
+ tf.logging.info(
+ 'global step %d: loss = %.4f (%.3f sec/step) (%d steps/sec)',
+ global_step_val, total_loss, time_elapsed, 1 / time_elapsed)
+
+ self.last_global_step_val = global_step_val
+ stop_early = bool(self.should_stop_early and self.should_stop_early())
+ return total_loss, should_stop or stop_early
+
+
+def create_counter_summaries(counters):
+ """Add named summaries to counters, a list of tuples (name, counter)."""
+ if counters:
+ with tf.name_scope('Counters/'):
+ for name, counter in counters:
+ tf.summary.scalar(name, counter)
+
+
+def gen_debug_batch_summaries(batch):
+ """Generates summaries for the sampled replay batch."""
+ states, actions, rewards, _, next_states = batch
+ with tf.name_scope('batch'):
+ for s in range(states.get_shape()[-1]):
+ tf.summary.histogram('states_%d' % s, states[:, s])
+ for s in range(states.get_shape()[-1]):
+ tf.summary.histogram('next_states_%d' % s, next_states[:, s])
+ for a in range(actions.get_shape()[-1]):
+ tf.summary.histogram('actions_%d' % a, actions[:, a])
+ tf.summary.histogram('rewards', rewards)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/utils/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/utils/__init__.py"
new file mode 100644
index 00000000..8b137891
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/utils/__init__.py"
@@ -0,0 +1 @@
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/utils/eval_utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/utils/eval_utils.py"
new file mode 100644
index 00000000..c88efc80
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/utils/eval_utils.py"
@@ -0,0 +1,151 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Evaluation utility functions.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import time
+from collections import namedtuple
+
+import numpy as np
+import tensorflow as tf
+
+import gin.tf
+
+logging = tf.logging
+
+
+@gin.configurable
+def evaluate_checkpoint_repeatedly(checkpoint_dir,
+ evaluate_checkpoint_fn,
+ eval_interval_secs=600,
+ max_number_of_evaluations=None,
+ checkpoint_timeout=None,
+ timeout_fn=None):
+ """Evaluates a checkpointed model at a set interval."""
+ if max_number_of_evaluations is not None and max_number_of_evaluations <= 0:
+ raise ValueError(
+ '`max_number_of_evaluations` must be either None or a positive number.')
+
+ number_of_evaluations = 0
+ for checkpoint_path in tf.contrib.training.checkpoints_iterator(
+ checkpoint_dir,
+ min_interval_secs=eval_interval_secs,
+ timeout=checkpoint_timeout,
+ timeout_fn=timeout_fn):
+ retries = 3
+ for _ in range(retries):
+ try:
+ should_stop = evaluate_checkpoint_fn(checkpoint_path)
+ break
+ except tf.errors.DataLossError as e:
+ logging.warn(
+ 'Encountered a DataLossError while evaluating a checkpoint. This '
+ 'can happen when reading a checkpoint before it is fully written. '
+ 'Retrying...'
+ )
+ time.sleep(2.0)
+
+
+def compute_model_loss(sess, model_rollout_fn, states, actions):
+ """Computes model loss."""
+ preds, losses = [], []
+ preds.append(states[0])
+ losses.append(0)
+ for state, action in zip(states[1:], actions[1:]):
+ pred = model_rollout_fn(sess, preds[-1], action)
+ loss = np.sqrt(np.sum((state - pred) ** 2))
+ preds.append(pred)
+ losses.append(loss)
+ return preds, losses
+
+
+def compute_average_reward(sess, env_base, step_fn, gamma, num_steps,
+ num_episodes):
+ """Computes the discounted reward for a given number of steps.
+
+ Args:
+ sess: The tensorflow session.
+ env_base: A python environment.
+ step_fn: A function that takes in `sess` and returns a list of
+ [state, action, reward, discount, transition_type] values.
+ gamma: discounting factor to apply to the reward.
+ num_steps: number of steps to compute the reward over.
+ num_episodes: number of episodes to average the reward over.
+  Returns:
+    average_reward: average discounted reward over the episodes.
+    average_last_reward: average reward received at the final step.
+    average_meta_reward: average discounted meta-reward over the episodes.
+    average_last_meta_reward: average meta-reward received at the final step.
+    average_success: fraction of episodes counted as successful.
+    states: states visited during the last episode.
+    actions: actions taken during the last episode.
+  """
+ average_reward = 0
+ average_last_reward = 0
+ average_meta_reward = 0
+ average_last_meta_reward = 0
+ average_success = 0.
+ states, actions = None, None
+ for i in range(num_episodes):
+ env_base.end_episode()
+ env_base.begin_episode()
+ (reward, last_reward, meta_reward, last_meta_reward,
+ states, actions) = compute_reward(
+ sess, step_fn, gamma, num_steps)
+ s_reward = last_meta_reward # Navigation
+ success = (s_reward > -5.0) # When using diff=False
+ logging.info('Episode = %d, reward = %s, meta_reward = %f, '
+ 'last_reward = %s, last meta_reward = %f, success = %s',
+ i, reward, meta_reward, last_reward, last_meta_reward,
+ success)
+ average_reward += reward
+ average_last_reward += last_reward
+ average_meta_reward += meta_reward
+ average_last_meta_reward += last_meta_reward
+ average_success += success
+ average_reward /= num_episodes
+ average_last_reward /= num_episodes
+ average_meta_reward /= num_episodes
+ average_last_meta_reward /= num_episodes
+ average_success /= num_episodes
+ return (average_reward, average_last_reward,
+ average_meta_reward, average_last_meta_reward,
+ average_success,
+ states, actions)
+
+
+def compute_reward(sess, step_fn, gamma, num_steps):
+ """Computes the discounted reward for a given number of steps.
+
+ Args:
+ sess: The tensorflow session.
+ step_fn: A function that takes in `sess` and returns a list of
+ [state, action, reward, discount, transition_type] values.
+ gamma: discounting factor to apply to the reward.
+ num_steps: number of steps to compute the reward over.
+  Returns:
+    total_reward: cumulative discounted reward.
+    last_reward: reward received at the final step.
+    total_meta_reward: cumulative discounted meta-reward.
+    last_meta_reward: meta-reward received at the final step.
+    states: list of states visited.
+    actions: list of actions taken.
+  """
+
+ total_reward = 0
+ total_meta_reward = 0
+ gamma_step = 1
+ states = []
+ actions = []
+ for _ in range(num_steps):
+ state, action, transition_type, reward, meta_reward, discount, _, _ = step_fn(sess)
+ total_reward += reward * gamma_step * discount
+ total_meta_reward += meta_reward * gamma_step * discount
+ gamma_step *= gamma
+ states.append(state)
+ actions.append(action)
+ return (total_reward, reward, total_meta_reward, meta_reward,
+ states, actions)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/utils/utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/utils/utils.py"
new file mode 100644
index 00000000..e188316c
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/efficient-hrl/utils/utils.py"
@@ -0,0 +1,318 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""TensorFlow utility functions.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from copy import deepcopy
+import tensorflow as tf
+from tf_agents import specs
+from tf_agents.utils import common
+
+_tf_print_counts = dict()
+_tf_print_running_sums = dict()
+_tf_print_running_counts = dict()
+_tf_print_ids = 0
+
+
+def get_contextual_env_base(env_base, begin_ops=None, end_ops=None):
+ """Wrap env_base with additional tf ops."""
+ # pylint: disable=protected-access
+ def init(self_, env_base):
+ self_._env_base = env_base
+ attribute_list = ["_render_mode", "_gym_env"]
+ for attribute in attribute_list:
+ if hasattr(env_base, attribute):
+ setattr(self_, attribute, getattr(env_base, attribute))
+ if hasattr(env_base, "physics"):
+ self_._physics = env_base.physics
+ elif hasattr(env_base, "gym"):
+ class Physics(object):
+ def render(self, *args, **kwargs):
+ return env_base.gym.render("rgb_array")
+ physics = Physics()
+ self_._physics = physics
+ self_.physics = physics
+ def set_sess(self_, sess):
+ self_._sess = sess
+ if hasattr(self_._env_base, "set_sess"):
+ self_._env_base.set_sess(sess)
+ def begin_episode(self_):
+ self_._env_base.reset()
+ if begin_ops is not None:
+ self_._sess.run(begin_ops)
+ def end_episode(self_):
+ self_._env_base.reset()
+ if end_ops is not None:
+ self_._sess.run(end_ops)
+ return type("ContextualEnvBase", (env_base.__class__,), dict(
+ __init__=init,
+ set_sess=set_sess,
+ begin_episode=begin_episode,
+ end_episode=end_episode,
+ ))(env_base)
+ # pylint: enable=protected-access
+
+
+def merge_specs(specs_):
+ """Merge TensorSpecs.
+
+ Args:
+ specs_: List of TensorSpecs to be merged.
+ Returns:
+ a TensorSpec: a merged TensorSpec.
+ """
+ shape = specs_[0].shape
+ dtype = specs_[0].dtype
+ name = specs_[0].name
+ for spec in specs_[1:]:
+ assert shape[1:] == spec.shape[1:], "incompatible shapes: %s, %s" % (
+ shape, spec.shape)
+ assert dtype == spec.dtype, "incompatible dtypes: %s, %s" % (
+ dtype, spec.dtype)
+ shape = merge_shapes((shape, spec.shape), axis=0)
+ return specs.TensorSpec(
+ shape=shape,
+ dtype=dtype,
+ name=name,
+ )
+
+
+def merge_shapes(shapes, axis=0):
+ """Merge TensorShapes.
+
+ Args:
+ shapes: List of TensorShapes to be merged.
+    axis: optional, the axis along which to merge the shapes.
+ Returns:
+ a TensorShape: a merged TensorShape.
+ """
+ assert len(shapes) > 1
+ dims = deepcopy(shapes[0].dims)
+ for shape in shapes[1:]:
+ assert shapes[0].ndims == shape.ndims
+ dims[axis] += shape.dims[axis]
+ return tf.TensorShape(dims=dims)
+
+
+def get_all_vars(ignore_scopes=None):
+ """Get all tf variables in scope.
+
+ Args:
+ ignore_scopes: A list of scope names to ignore.
+ Returns:
+ A list of all tf variables in scope.
+ """
+ all_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
+ all_vars = [var for var in all_vars if ignore_scopes is None or not
+ any(var.name.startswith(scope) for scope in ignore_scopes)]
+ return all_vars
+
+
+def clip(tensor, range_=None):
+ """Return a tf op which clips tensor according to range_.
+
+ Args:
+ tensor: A Tensor to be clipped.
+ range_: None, or a tuple representing (minval, maxval)
+ Returns:
+ A clipped Tensor.
+ """
+ if range_ is None:
+ return tf.identity(tensor)
+ elif isinstance(range_, (tuple, list)):
+ assert len(range_) == 2
+ return tf.clip_by_value(tensor, range_[0], range_[1])
+  else:
+    raise NotImplementedError("Unacceptable range input: %r" % range_)
+
+
+def clip_to_bounds(value, minimum, maximum):
+ """Clips value to be between minimum and maximum.
+
+ Args:
+ value: (tensor) value to be clipped.
+ minimum: (numpy float array) minimum value to clip to.
+ maximum: (numpy float array) maximum value to clip to.
+ Returns:
+ clipped_value: (tensor) `value` clipped to between `minimum` and `maximum`.
+ """
+ value = tf.minimum(value, maximum)
+ return tf.maximum(value, minimum)
+
+
+clip_to_spec = common.clip_to_spec
+def _clip_to_spec(value, spec):
+ """Clips value to a given bounded tensor spec.
+
+ Args:
+ value: (tensor) value to be clipped.
+ spec: (BoundedTensorSpec) spec containing min. and max. values for clipping.
+ Returns:
+ clipped_value: (tensor) `value` clipped to be compatible with `spec`.
+ """
+ return clip_to_bounds(value, spec.minimum, spec.maximum)
+
+
+join_scope = common.join_scope
+def _join_scope(parent_scope, child_scope):
+ """Joins a parent and child scope using `/`, checking for empty/none.
+
+ Args:
+ parent_scope: (string) parent/prefix scope.
+ child_scope: (string) child/suffix scope.
+ Returns:
+ joined scope: (string) parent and child scopes joined by /.
+ """
+ if not parent_scope:
+ return child_scope
+ if not child_scope:
+ return parent_scope
+ return '/'.join([parent_scope, child_scope])
+
+
+def assign_vars(vars_, values):
+ """Returns the update ops for assigning a list of vars.
+
+ Args:
+ vars_: A list of variables.
+ values: A list of tensors representing new values.
+ Returns:
+ A list of update ops for the variables.
+ """
+ return [var.assign(value) for var, value in zip(vars_, values)]
+
+
+def identity_vars(vars_):
+ """Return the identity ops for a list of tensors.
+
+ Args:
+ vars_: A list of tensors.
+ Returns:
+ A list of identity ops.
+ """
+ return [tf.identity(var) for var in vars_]
+
+
+def tile(var, batch_size=1):
+ """Return tiled tensor.
+
+ Args:
+ var: A tensor representing the state.
+ batch_size: Batch size.
+ Returns:
+ A tensor with shape [batch_size,] + var.shape.
+ """
+ batch_var = tf.tile(
+ tf.expand_dims(var, 0),
+ (batch_size,) + (1,) * var.get_shape().ndims)
+ return batch_var
+
+
+def batch_list(vars_list):
+ """Batch a list of variables.
+
+ Args:
+ vars_list: A list of tensor variables.
+ Returns:
+ A list of tensor variables with additional first dimension.
+ """
+ return [tf.expand_dims(var, 0) for var in vars_list]
+
+
+def tf_print(op,
+ tensors,
+ message="",
+ first_n=-1,
+ name=None,
+ sub_messages=None,
+ print_freq=-1,
+ include_count=True):
+ """tf.Print, but to stdout."""
+ # TODO(shanegu): `name` is deprecated. Remove from the rest of codes.
+ global _tf_print_ids
+ _tf_print_ids += 1
+ name = _tf_print_ids
+ _tf_print_counts[name] = 0
+ if print_freq > 0:
+ _tf_print_running_sums[name] = [0 for _ in tensors]
+ _tf_print_running_counts[name] = 0
+ def print_message(*xs):
+ """print message fn."""
+ _tf_print_counts[name] += 1
+ if print_freq > 0:
+ for i, x in enumerate(xs):
+ _tf_print_running_sums[name][i] += x
+ _tf_print_running_counts[name] += 1
+ if (print_freq <= 0 or _tf_print_running_counts[name] >= print_freq) and (
+ first_n < 0 or _tf_print_counts[name] <= first_n):
+ for i, x in enumerate(xs):
+ if print_freq > 0:
+ del x
+ x = _tf_print_running_sums[name][i]/_tf_print_running_counts[name]
+ if sub_messages is None:
+ sub_message = str(i)
+ else:
+ sub_message = sub_messages[i]
+ log_message = "%s, %s" % (message, sub_message)
+ if include_count:
+ log_message += ", count=%d" % _tf_print_counts[name]
+ tf.logging.info("[%s]: %s" % (log_message, x))
+ if print_freq > 0:
+ for i, x in enumerate(xs):
+ _tf_print_running_sums[name][i] = 0
+ _tf_print_running_counts[name] = 0
+ return xs[0]
+
+ print_op = tf.py_func(print_message, tensors, tensors[0].dtype)
+ with tf.control_dependencies([print_op]):
+ op = tf.identity(op)
+ return op
+
+
+periodically = common.periodically
+def _periodically(body, period, name='periodically'):
+ """Periodically performs a tensorflow op."""
+ if period is None or period == 0:
+ return tf.no_op()
+
+ if period < 0:
+ raise ValueError("period cannot be less than 0.")
+
+ if period == 1:
+ return body()
+
+ with tf.variable_scope(None, default_name=name):
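+    # The counter is initialized to `period`, so the body runs on the first
+    # update and then once every `period` updates thereafter.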
+ counter = tf.get_variable(
+ "counter",
+ shape=[],
+ dtype=tf.int64,
+ trainable=False,
+ initializer=tf.constant_initializer(period, dtype=tf.int64))
+
+ def _wrapped_body():
+ with tf.control_dependencies([body()]):
+ return counter.assign(1)
+
+ update = tf.cond(
+ tf.equal(counter, period), _wrapped_body,
+ lambda: counter.assign_add(1))
+
+ return update
+
+soft_variables_update = common.soft_variables_update
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/.gitattributes" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/.gitattributes"
new file mode 100644
index 00000000..f706c042
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/.gitattributes"
@@ -0,0 +1,2 @@
+*.pkl binary
+*.tfrecord binary
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/.gitignore" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/.gitignore"
new file mode 100644
index 00000000..af2f5375
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/.gitignore"
@@ -0,0 +1,104 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+.hypothesis/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+.static_storage/
+.media/
+local_settings.py
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# pyenv
+.python-version
+
+# celery beat schedule file
+celerybeat-schedule
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/README.md"
new file mode 100644
index 00000000..e6363f0d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/README.md"
@@ -0,0 +1,211 @@
+# Filtering Variational Objectives
+
+This folder contains a TensorFlow implementation of the algorithms from
+
+Chris J. Maddison\*, Dieterich Lawson\*, George Tucker\*, Nicolas Heess, Mohammad Norouzi, Andriy Mnih, Arnaud Doucet, and Yee Whye Teh. "Filtering Variational Objectives." NIPS 2017.
+
+[https://arxiv.org/abs/1705.09279](https://arxiv.org/abs/1705.09279)
+
+This code implements 3 different bounds for training sequential latent variable models: the evidence lower bound (ELBO), the importance weighted auto-encoder bound (IWAE), and our bound, the filtering variational objective (FIVO).
+
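+Very roughly (this is a schematic summary, not the notation used in the paper), each bound is the expected log of an unbiased estimator of the marginal likelihood p(x). Writing w for an importance weight and K for the number of samples/particles:
+
+```
+ELBO = E[ log w ]                            (single sample)
+IWAE = E[ log (1/K) sum_k w_k ]              (average over K samples)
+FIVO = E[ log prod_t (1/K) sum_k w_{t,k} ]   (per-timestep averages from sequential
+                                              Monte Carlo, with resampling between steps)
+```
+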
+Additionally it contains several sequential latent variable model implementations:
+
+* Variational recurrent neural network (VRNN)
+* Stochastic recurrent neural network (SRNN)
+* Gaussian hidden Markov model with linear conditionals (GHMM)
+
+The VRNN and SRNN can be trained for sequence modeling of pianoroll and speech data. The GHMM is trainable on a synthetic dataset, useful as a simple example of an analytically tractable model.
+
+#### Directory Structure
+The important parts of the code are organized as follows.
+
+```
+run_fivo.py # main script, contains flag definitions
+fivo
+├─smc.py # a sequential Monte Carlo implementation
+├─bounds.py # code for computing each bound, uses smc.py
+├─runners.py # code for VRNN and SRNN training and evaluation
+├─ghmm_runners.py # code for GHMM training and evaluation
+├─data
+| ├─datasets.py # readers for pianoroll and speech datasets
+| ├─calculate_pianoroll_mean.py # preprocesses the pianoroll datasets
+| └─create_timit_dataset.py # preprocesses the TIMIT dataset
+└─models
+ ├─base.py # base classes used in other models
+ ├─vrnn.py # VRNN implementation
+ ├─srnn.py # SRNN implementation
+ └─ghmm.py # Gaussian hidden Markov model (GHMM) implementation
+bin
+├─run_train.sh # an example script that runs training
+├─run_eval.sh # an example script that runs evaluation
+├─run_sample.sh # an example script that runs sampling
+├─run_tests.sh # a script that runs all tests
+└─download_pianorolls.sh # a script that downloads pianoroll files
+```
+
+### Pianorolls
+
+Requirements before we start:
+
+* TensorFlow (see [tensorflow.org](http://tensorflow.org) for how to install)
+* [scipy](https://www.scipy.org/)
+* [sonnet](https://github.com/deepmind/sonnet)
+
+
+#### Download the Data
+
+The pianoroll datasets are encoded as pickled sparse arrays and are available at [http://www-etud.iro.umontreal.ca/~boulanni/icml2012](http://www-etud.iro.umontreal.ca/~boulanni/icml2012). You can use the script `bin/download_pianorolls.sh` to download the files into a directory of your choosing.
+```
+export PIANOROLL_DIR=~/pianorolls
+mkdir $PIANOROLL_DIR
+sh bin/download_pianorolls.sh $PIANOROLL_DIR
+```
+
+#### Preprocess the Data
+
+The script `calculate_pianoroll_mean.py` loads a pianoroll pickle file, calculates the mean, updates the pickle file to include the mean under the key `train_mean`, and writes the file back to disk in-place. You should do this for all pianoroll datasets you wish to train on.
+
+```
+python data/calculate_pianoroll_mean.py --in_file=$PIANOROLL_DIR/piano-midi.de.pkl
+python data/calculate_pianoroll_mean.py --in_file=$PIANOROLL_DIR/nottingham.pkl
+python data/calculate_pianoroll_mean.py --in_file=$PIANOROLL_DIR/musedata.pkl
+python data/calculate_pianoroll_mean.py --in_file=$PIANOROLL_DIR/jsb.pkl
+```
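+
+Conceptually, the preprocessing amounts to something like the following sketch (hypothetical code, not the actual script, and it assumes each split is a list of dense [time, 88] arrays rather than the sparse encoding the real pickles use):
+
+```
+import pickle
+import numpy as np
+
+with open("piano-midi.de.pkl", "rb") as f:
+    data = pickle.load(f)  # dict with 'train', 'valid', and 'test' splits
+
+# Concatenate the training sequences along time and average each pitch dimension.
+data["train_mean"] = np.concatenate(data["train"], axis=0).mean(axis=0)
+
+with open("piano-midi.de.pkl", "wb") as f:
+    pickle.dump(data, f)  # written back in place, as the script does
+```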
+
+#### Training
+
+Now we can train a model. Here is the command for a standard training run, taken from `bin/run_train.sh`:
+```
+python run_fivo.py \
+ --mode=train \
+ --logdir=/tmp/fivo \
+ --model=vrnn \
+ --bound=fivo \
+ --summarize_every=100 \
+ --batch_size=4 \
+ --num_samples=4 \
+ --learning_rate=0.0001 \
+ --dataset_path="$PIANOROLL_DIR/jsb.pkl" \
+ --dataset_type="pianoroll"
+```
+
+You should see output that looks something like this (with extra logging cruft):
+
+```
+Saving checkpoints for 0 into /tmp/fivo/model.ckpt.
+Step 1, fivo bound per timestep: -11.322491
+global_step/sec: 7.49971
+Step 101, fivo bound per timestep: -11.399275
+global_step/sec: 8.04498
+Step 201, fivo bound per timestep: -11.174991
+global_step/sec: 8.03989
+Step 301, fivo bound per timestep: -11.073008
+```
+#### Evaluation
+
+You can also evaluate saved checkpoints. The `eval` mode loads a model checkpoint, tests its performance on all items in a dataset, and reports the log-likelihood averaged over the dataset. For example, here is a command, taken from `bin/run_eval.sh`, that will evaluate a JSB model on the test set:
+
+```
+python run_fivo.py \
+ --mode=eval \
+ --split=test \
+ --alsologtostderr \
+ --logdir=/tmp/fivo \
+ --model=vrnn \
+ --batch_size=4 \
+ --num_samples=4 \
+ --dataset_path="$PIANOROLL_DIR/jsb.pkl" \
+ --dataset_type="pianoroll"
+```
+
+You should see output like this:
+```
+Restoring parameters from /tmp/fivo/model.ckpt-0
+Model restored from step 0, evaluating.
+test elbo ll/t: -12.198834, iwae ll/t: -11.981187 fivo ll/t: -11.579776
+test elbo ll/seq: -748.564789, iwae ll/seq: -735.209206 fivo ll/seq: -710.577141
+```
+The evaluation script prints log-likelihood in both nats per timestep (ll/t) and nats per sequence (ll/seq) for all three bounds.
+
+#### Sampling
+
+You can also sample from trained models. The `sample` mode loads a model checkpoint, conditions the model on a prefix of a randomly chosen datapoint, samples a sequence of outputs from the conditioned model, and writes out the samples and prefix to a `.npz` file in `logdir`. For example, here is a command that samples from a model trained on JSB, taken from `bin/run_sample.sh`:
+```
+python run_fivo.py \
+ --mode=sample \
+ --alsologtostderr \
+ --logdir="/tmp/fivo" \
+ --model=vrnn \
+ --bound=fivo \
+ --batch_size=4 \
+ --num_samples=4 \
+ --split=test \
+ --dataset_path="$PIANOROLL_DIR/jsb.pkl" \
+ --dataset_type="pianoroll" \
+ --prefix_length=25 \
+ --sample_length=50
+```
+
+Here `num_samples` denotes the number of samples used when conditioning the model as well as the number of trajectories to sample for each prefix.
+
+You should see very little output.
+```
+Restoring parameters from /tmp/fivo/model.ckpt-0
+Running local_init_op.
+Done running local_init_op.
+```
+
+Loading the samples with `np.load` confirms that we conditioned the model on 4
+prefixes of length 25 and sampled 4 sequences of length 50 for each prefix.
+```
+>>> import numpy as np
+>>> x = np.load("/tmp/fivo/samples.npz")
+>>> x[()]['prefixes'].shape
+(25, 4, 88)
+>>> x[()]['samples'].shape
+(50, 4, 4, 88)
+```
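+
+One quick way to eyeball a sampled pianoroll (illustrative only; assumes matplotlib is installed and uses the fact that the last axis indexes the 88 pitches):
+```
+>>> import matplotlib.pyplot as plt
+>>> plt.imshow(x[()]['samples'][:, 0, 0, :].T, aspect='auto', origin='lower')
+>>> plt.show()
+```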
+
+### Training on TIMIT
+
+The TIMIT speech dataset is available at the [Linguistic Data Consortium website](https://catalog.ldc.upenn.edu/LDC93S1), but is unfortunately not free. These instructions will proceed assuming you have downloaded the TIMIT archive and extracted it into the directory `$RAW_TIMIT_DIR`.
+
+#### Preprocess TIMIT
+
+We preprocess TIMIT (as described in our paper) and write it out to a series of TFRecord files. To prepare the TIMIT dataset use the script `create_timit_dataset.py`
+```
+export TIMIT_DIR=~/timit_dataset
+mkdir $TIMIT_DIR
+python data/create_timit_dataset.py \
+ --raw_timit_dir=$RAW_TIMIT_DIR \
+ --out_dir=$TIMIT_DIR
+```
+You should see this exact output:
+```
+4389 train / 231 valid / 1680 test
+train mean: 0.006060 train std: 548.136169
+```
+
+#### Training on TIMIT
+This is very similar to training on pianoroll datasets, with just a few flags switched.
+```
+python run_fivo.py \
+ --mode=train \
+ --logdir=/tmp/fivo \
+ --model=vrnn \
+ --bound=fivo \
+ --summarize_every=100 \
+ --batch_size=4 \
+ --num_samples=4 \
+ --learning_rate=0.0001 \
+ --dataset_path="$TIMIT_DIR/train" \
+ --dataset_type="speech"
+```
+Evaluation and sampling are similar.
+
+### Tests
+This codebase comes with a number of tests to verify correctness, runnable via `bin/run_tests.sh`. The tests are also useful to look at for examples of how to use the code.
+
+### Contact
+
+This codebase is maintained by Dieterich Lawson, reachable via email at dieterichl@google.com. For questions and issues please open an issue on the tensorflow/models issues tracker and assign it to @dieterichlawson.
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/bin/download_pianorolls.sh" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/bin/download_pianorolls.sh"
new file mode 100644
index 00000000..ef7050b4
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/bin/download_pianorolls.sh"
@@ -0,0 +1,30 @@
+#!/bin/bash
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+# A script to download the pianoroll datasets.
+# Accepts one argument, the directory to put the files in
+
+if [ -z "$1" ]
+ then
+ echo "Error, must provide a directory to download the files to."
+ exit
+fi
+
+echo "Downloading datasets into $1"
+curl -s "http://www-etud.iro.umontreal.ca/~boulanni/Piano-midi.de.pickle" > $1/piano-midi.de.pkl
+curl -s "http://www-etud.iro.umontreal.ca/~boulanni/Nottingham.pickle" > $1/nottingham.pkl
+curl -s "http://www-etud.iro.umontreal.ca/~boulanni/MuseData.pickle" > $1/musedata.pkl
+curl -s "http://www-etud.iro.umontreal.ca/~boulanni/JSB%20Chorales.pickle" > $1/jsb.pkl
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/bin/run_eval.sh" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/bin/run_eval.sh"
new file mode 100644
index 00000000..b30bcedc
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/bin/run_eval.sh"
@@ -0,0 +1,29 @@
+#!/bin/bash
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+# An example of running evaluation.
+
+PIANOROLL_DIR=$HOME/pianorolls
+
+python run_fivo.py \
+ --mode=eval \
+ --logdir=/tmp/fivo \
+ --model=vrnn \
+ --batch_size=4 \
+ --num_samples=4 \
+ --split=test \
+ --dataset_path="$PIANOROLL_DIR/jsb.pkl" \
+ --dataset_type="pianoroll"
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/bin/run_sample.sh" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/bin/run_sample.sh"
new file mode 100644
index 00000000..e0c82a0c
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/bin/run_sample.sh"
@@ -0,0 +1,33 @@
+#!/bin/bash
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+# An example of sampling from the model.
+
+PIANOROLL_DIR=$HOME/pianorolls
+
+python run_fivo.py \
+ --mode=sample \
+ --alsologtostderr \
+ --logdir="/tmp/fivo" \
+ --model=vrnn \
+ --bound=fivo \
+ --batch_size=4 \
+ --num_samples=4 \
+ --split=test \
+ --dataset_path="$PIANOROLL_DIR/jsb.pkl" \
+ --dataset_type="pianoroll" \
+ --prefix_length=25 \
+ --sample_length=50
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/bin/run_tests.sh" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/bin/run_tests.sh"
new file mode 100644
index 00000000..2ea58f01
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/bin/run_tests.sh"
@@ -0,0 +1,25 @@
+#!/bin/bash
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+python -m fivo.smc_test && \
+python -m fivo.bounds_test && \
+python -m fivo.nested_utils_test && \
+python -m fivo.data.datasets_test && \
+python -m fivo.models.ghmm_test && \
+python -m fivo.models.vrnn_test && \
+python -m fivo.models.srnn_test && \
+python -m fivo.ghmm_runners_test && \
+python -m fivo.runners_test
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/bin/run_train.sh" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/bin/run_train.sh"
new file mode 100644
index 00000000..a8459597
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/bin/run_train.sh"
@@ -0,0 +1,31 @@
+#!/bin/bash
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+# An example of running training.
+
+PIANOROLL_DIR=$HOME/pianorolls
+
+python run_fivo.py \
+ --mode=train \
+ --logdir=/tmp/fivo \
+ --model=vrnn \
+ --bound=fivo \
+ --summarize_every=100 \
+ --batch_size=4 \
+ --num_samples=4 \
+ --learning_rate=0.0001 \
+ --dataset_path="$PIANOROLL_DIR/jsb.pkl" \
+ --dataset_type="pianoroll"
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/experimental/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/experimental/README.md"
new file mode 100644
index 00000000..649de0ba
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/experimental/README.md"
@@ -0,0 +1 @@
+An experimental codebase for running simple examples.
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/experimental/bounds.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/experimental/bounds.py"
new file mode 100644
index 00000000..afc970c5
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/experimental/bounds.py"
@@ -0,0 +1,673 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from collections import namedtuple
+
+import tensorflow as tf
+import summary_utils as summ
+
+Loss = namedtuple("Loss", "name loss vars")
+Loss.__new__.__defaults__ = (tf.GraphKeys.TRAINABLE_VARIABLES,)
+
+
+def iwae(model, observation, num_timesteps, num_samples=1,
+ summarize=False):
+ """Compute the IWAE evidence lower bound.
+
+ Args:
+ model: A callable that computes one timestep of the model.
+ observation: A shape [batch_size*num_samples, state_size] Tensor
+ containing z_n, the observation for each sequence in the batch.
+ num_timesteps: The number of timesteps in each sequence, an integer.
+    num_samples: The number of samples to use to compute the IWAE bound.
+    summarize: Whether to add summaries of the per-timestep weight entropy.
+  Returns:
+    log_p_hat: The IWAE estimator of the lower bound on the log marginal.
+    losses: A list of Loss tuples that can be minimized to optimize the bound.
+    maintain_ema_op: A no-op included for compatibility with FIVO.
+    states: The sequence of states sampled.
+    log_weights: The accumulated log weights at each timestep.
+  """
+ # Initialization
+ num_instances = tf.shape(observation)[0]
+ batch_size = tf.cast(num_instances / num_samples, tf.int32)
+ states = [model.zero_state(num_instances)]
+ log_weights = []
+ log_weight_acc = tf.zeros([num_samples, batch_size], dtype=observation.dtype)
+
+ for t in xrange(num_timesteps):
+ # run the model for one timestep
+ (zt, log_q_zt, log_p_zt, log_p_x_given_z, _) = model(
+ states[-1], observation, t)
+ # update accumulators
+ states.append(zt)
+ log_weight = log_p_zt + log_p_x_given_z - log_q_zt
+ log_weight_acc += tf.reshape(log_weight, [num_samples, batch_size])
+ if summarize:
+ weight_dist = tf.contrib.distributions.Categorical(
+ logits=tf.transpose(log_weight_acc, perm=[1, 0]),
+ allow_nan_stats=False)
+ weight_entropy = weight_dist.entropy()
+ weight_entropy = tf.reduce_mean(weight_entropy)
+ tf.summary.scalar("weight_entropy/%d" % t, weight_entropy)
+ log_weights.append(log_weight_acc)
+ # Compute the lower bound on the log evidence.
+ log_p_hat = (tf.reduce_logsumexp(log_weight_acc, axis=0) -
+ tf.log(tf.cast(num_samples, observation.dtype))) / num_timesteps
+ loss = -tf.reduce_mean(log_p_hat)
+ losses = [Loss("log_p_hat", loss)]
+
+ # we clip off the initial state before returning.
+ # there are no emas for iwae, so we return a noop for that
+ return log_p_hat, losses, tf.no_op(), states[1:], log_weights
+
+
+def multinomial_resampling(log_weights, states, n, b):
+ """Resample states with multinomial resampling.
+
+ Args:
+ log_weights: A (n x b) Tensor representing a batch of b logits for n-ary
+ Categorical distribution.
+    states: A list of (b*n x d) Tensors whose rows will be resampled within each
+      batch element's group of n rows.
+    n: The number of samples (particles) per batch element.
+    b: The batch size.
+
+  Returns:
+    resampled_states: A list of (b*n x d) Tensors resampled via multinomial sampling.
+ log_probs: A (n x b) Tensor of the log probabilities of the ancestry decisions.
+ resampling_parameters: The Tensor of parameters of the resampling distribution.
+ ancestors: An (n x b) Tensor of integral indices representing the ancestry decisions.
+ resampling_dist: The distribution object for resampling.
+ """
+ log_weights = tf.convert_to_tensor(log_weights)
+ states = [tf.convert_to_tensor(state) for state in states]
+
+ resampling_parameters = tf.transpose(log_weights, perm=[1,0])
+ resampling_dist = tf.contrib.distributions.Categorical(logits=resampling_parameters)
+ ancestors = tf.stop_gradient(
+ resampling_dist.sample(sample_shape=n))
+ log_probs = resampling_dist.log_prob(ancestors)
+
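+  # The flat (b*n x d) state Tensors are assumed to be laid out so that particle
+  # k of batch element j occupies row k*b + j; the sampled ancestor indices are
+  # mapped to those flat rows below.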
+ offset = tf.expand_dims(tf.range(b), 0)
+ ancestor_inds = tf.reshape(ancestors * b + offset, [-1])
+
+ resampled_states = []
+ for state in states:
+ resampled_states.append(tf.gather(state, ancestor_inds))
+ return resampled_states, log_probs, resampling_parameters, ancestors, resampling_dist
+
+def stratified_resampling(log_weights, states, n, b):
+ """Resample states with straitified resampling.
+
+ Args:
+ log_weights: A (n x b) Tensor representing a batch of b logits for n-ary
+ Categorical distribution.
+ states: A list of (b*n x d) Tensors that will be resample in from the groups
+ of every n-th row.
+
+ Returns:
+ resampled_states: A list of (b*n x d) Tensors resampled via stratified sampling.
+ log_probs: A (n x b) Tensor of the log probabilities of the ancestry decisions.
+ resampling_parameters: The Tensor of parameters of the resampling distribution.
+ ancestors: An (n x b) Tensor of integral indices representing the ancestry decisions.
+ resampling_dist: The distribution object for resampling.
+ """
+ log_weights = tf.convert_to_tensor(log_weights)
+ states = [tf.convert_to_tensor(state) for state in states]
+
+ log_weights = tf.transpose(log_weights, perm=[1,0])
+
+ probs = tf.nn.softmax(
+ tf.tile(tf.expand_dims(log_weights, axis=1),
+ [1, n, 1])
+ )
+
+ cdfs = tf.concat([tf.zeros((b,n,1), dtype=probs.dtype), tf.cumsum(probs, axis=2)], 2)
+
+ bins = tf.range(n, dtype=probs.dtype) / n
+ bins = tf.tile(tf.reshape(bins, [1,-1,1]), [b,1,n+1])
+
+ strat_cdfs = tf.minimum(tf.maximum((cdfs - bins) * n, 0.0), 1.0)
+ resampling_parameters = strat_cdfs[:,:,1:] - strat_cdfs[:,:,:-1]
+
+ resampling_dist = tf.contrib.distributions.Categorical(
+ probs = resampling_parameters,
+ allow_nan_stats=False)
+
+ ancestors = tf.stop_gradient(
+ resampling_dist.sample())
+ log_probs = resampling_dist.log_prob(ancestors)
+
+ ancestors = tf.transpose(ancestors, perm=[1,0])
+ log_probs = tf.transpose(log_probs, perm=[1,0])
+
+ offset = tf.expand_dims(tf.range(b), 0)
+ ancestor_inds = tf.reshape(ancestors * b + offset, [-1])
+
+ resampled_states = []
+ for state in states:
+ resampled_states.append(tf.gather(state, ancestor_inds))
+
+ return resampled_states, log_probs, resampling_parameters, ancestors, resampling_dist
+
+def systematic_resampling(log_weights, states, n, b):
+ """Resample states with systematic resampling.
+
+ Args:
+ log_weights: A (n x b) Tensor representing a batch of b logits for n-ary
+ Categorical distribution.
+    states: A list of (b*n x d) Tensors whose rows will be resampled within each
+      batch element's group of n rows.
+    n: The number of samples (particles) per batch element.
+    b: The batch size.
+
+  Returns:
+    resampled_states: A list of (b*n x d) Tensors resampled via systematic sampling.
+ log_probs: A (n x b) Tensor of the log probabilities of the ancestry decisions.
+ resampling_parameters: The Tensor of parameters of the resampling distribution.
+ ancestors: An (n x b) Tensor of integral indices representing the ancestry decisions.
+ resampling_dist: The distribution object for resampling.
+ """
+
+ log_weights = tf.convert_to_tensor(log_weights)
+ states = [tf.convert_to_tensor(state) for state in states]
+
+ log_weights = tf.transpose(log_weights, perm=[1,0])
+
+ probs = tf.nn.softmax(
+ tf.tile(tf.expand_dims(log_weights, axis=1),
+ [1, n, 1])
+ )
+
+ cdfs = tf.concat([tf.zeros((b,n,1), dtype=probs.dtype), tf.cumsum(probs, axis=2)], 2)
+
+ bins = tf.range(n, dtype=probs.dtype) / n
+ bins = tf.tile(tf.reshape(bins, [1,-1,1]), [b,1,n+1])
+
+ strat_cdfs = tf.minimum(tf.maximum((cdfs - bins) * n, 0.0), 1.0)
+ resampling_parameters = strat_cdfs[:,:,1:] - strat_cdfs[:,:,:-1]
+
+ resampling_dist = tf.contrib.distributions.Categorical(
+ probs=resampling_parameters,
+ allow_nan_stats=True)
+
+ U = tf.random_uniform((b, 1, 1), dtype=probs.dtype)
+
+ ancestors = tf.stop_gradient(tf.reduce_sum(tf.to_float(U > strat_cdfs[:,:,1:]), axis=-1))
+ log_probs = resampling_dist.log_prob(ancestors)
+
+ ancestors = tf.transpose(ancestors, perm=[1,0])
+ log_probs = tf.transpose(log_probs, perm=[1,0])
+
+ offset = tf.expand_dims(tf.range(b, dtype=probs.dtype), 0)
+  # `ancestors` is a float Tensor here, so compute the flat index in float and
+  # cast back to int for tf.gather.
+  ancestor_inds = tf.cast(
+      tf.reshape(ancestors * tf.cast(b, probs.dtype) + offset, [-1]), tf.int32)
+
+ resampled_states = []
+ for state in states:
+ resampled_states.append(tf.gather(state, ancestor_inds))
+
+ return resampled_states, log_probs, resampling_parameters, ancestors, resampling_dist
+
+
+def log_blend(inputs, weights):
+ """Blends state in the log space.
+
+ Args:
+ inputs: A set of scalar states, one for each particle in each particle filter.
+ Should be [num_samples, batch_size].
+ weights: A set of weights used to blend the state. Each set of weights
+ should be of dimension [num_samples] (one weight for each previous particle).
+ There should be one set of weights for each new particle in each particle filter.
+ Thus the shape should be [num_samples, batch_size, num_samples] where
+ the first axis indexes new particle and the last axis indexes old particles.
+ Returns:
+ blended: The blended states, a tensor of shape [num_samples, batch_size].
+ """
+ raw_max = tf.reduce_max(inputs, axis=0, keepdims=True)
+ my_max = tf.stop_gradient(
+ tf.where(tf.is_finite(raw_max), raw_max, tf.zeros_like(raw_max))
+ )
+  # Blend in log space: blended[i, j] = log(sum_k weights[i, j, k] * exp(inputs[k, j])),
+  # subtracting the per-column max first for numerical stability.
+ blended = tf.log(tf.einsum("ijk,kj->ij", weights, tf.exp(inputs - raw_max))) + my_max
+ return blended
+
+
+def relaxed_resampling(log_weights, states, num_samples, batch_size,
+ log_r_x=None, blend_type="log", temperature=0.5,
+ straight_through=False):
+ """Resample states with relaxed resampling.
+
+ Args:
+ log_weights: A (n x b) Tensor representing a batch of b logits for n-ary
+ Categorical distribution.
+    states: A list of (b*n x d) Tensors whose rows will be resampled within each
+      batch element's group of n rows.
+    num_samples: The number of samples (particles) per batch element, n.
+    batch_size: The batch size, b.
+    log_r_x: An optional (b*n) Tensor of log r values to blend during resampling.
+    blend_type: Either "log" or "linear"; how log_r_x is blended across particles.
+    temperature: A positive temperature for the relaxed one-hot distribution.
+    straight_through: If True, use hard ancestry choices on the forward pass and
+      the relaxed choices for gradients.
+
+  Returns:
+    resampled_states: A list of (b*n x d) Tensors resampled via relaxed resampling.
+    log_probs: A (n x b) Tensor of the log probabilities of the ancestry decisions.
+    log_r_x: The blended log r values, or the value passed in if no blending occurred.
+ resampling_parameters: The Tensor of parameters of the resampling distribution.
+ ancestors: An (n x b x n) Tensor of relaxed one hot representations of the ancestry decisions.
+ resampling_dist: The distribution object for resampling.
+ """
+ assert blend_type in ["log", "linear"], "Blend type must be 'log' or 'linear'."
+ log_weights = tf.convert_to_tensor(log_weights)
+ states = [tf.convert_to_tensor(state) for state in states]
+ state_dim = states[0].get_shape().as_list()[-1]
+ # weights are num_samples by batch_size, so we transpose to get a
+ # set of batch_size distributions over [0,num_samples).
+ resampling_parameters = tf.transpose(log_weights, perm=[1, 0])
+ resampling_dist = tf.contrib.distributions.RelaxedOneHotCategorical(
+ temperature,
+ logits=resampling_parameters)
+
+ # sample num_samples samples from the distribution, resulting in a
+ # [num_samples, batch_size, num_samples] Tensor that represents a set of
+ # [num_samples, batch_size] blending weights. The dimensions represent
+ # [sample index, batch index, blending weight index]
+ ancestors = resampling_dist.sample(sample_shape=num_samples)
+ if straight_through:
+ # Forward pass discrete choices, backwards pass soft choices
+ hard_ancestor_indices = tf.argmax(ancestors, axis=-1)
+ hard_ancestors = tf.one_hot(hard_ancestor_indices, num_samples,
+ dtype=ancestors.dtype)
+ ancestors = tf.stop_gradient(hard_ancestors - ancestors) + ancestors
+ log_probs = resampling_dist.log_prob(ancestors)
+ if log_r_x is not None and blend_type == "log":
+ log_r_x = tf.reshape(log_r_x, [num_samples, batch_size])
+ log_r_x = log_blend(log_r_x, ancestors)
+ log_r_x = tf.reshape(log_r_x, [num_samples*batch_size])
+ elif log_r_x is not None and blend_type == "linear":
+ # If blend type is linear just add log_r to the states that will be blended
+ # linearly.
+ states.append(log_r_x)
+
+ # transpose the 'indices' to be [batch_index, blending weight index, sample index]
+ ancestor_inds = tf.transpose(ancestors, perm=[1, 2, 0])
+ resampled_states = []
+ for state in states:
+ # state is currently [num_samples * batch_size, state_dim] so we reshape
+ # to [num_samples, batch_size, state_dim] and then transpose to
+ # [batch_size, state_size, num_samples]
+ state = tf.transpose(tf.reshape(state, [num_samples, batch_size, -1]), perm=[1, 2, 0])
+ # state is now (batch_size, state_size, num_samples)
+ # and ancestor is (batch index, blending weight index, sample index)
+ # multiplying these gives a matrix of size [batch_size, state_size, num_samples]
+ next_state = tf.matmul(state, ancestor_inds)
+ # transpose the state to be [num_samples, batch_size, state_size]
+ # and then reshape it to match the state format.
+ next_state = tf.reshape(tf.transpose(next_state, perm=[2,0,1]), [num_samples*batch_size, state_dim])
+ resampled_states.append(next_state)
+
+ new_dist = tf.contrib.distributions.Categorical(
+ logits=resampling_parameters)
+
+ if log_r_x is not None and blend_type == "linear":
+ # If blend type is linear pop off log_r that we added to the states.
+ log_r_x = tf.squeeze(resampled_states[-1])
+ resampled_states = resampled_states[:-1]
+ return resampled_states, log_probs, log_r_x, resampling_parameters, ancestors, new_dist
+
+
+def fivo(model,
+ observation,
+ num_timesteps,
+ resampling_schedule,
+ num_samples=1,
+ use_resampling_grads=True,
+ resampling_type="multinomial",
+ resampling_temperature=0.5,
+ aux=True,
+ summarize=False):
+ """Compute the FIVO evidence lower bound.
+
+ Args:
+ model: A callable that computes one timestep of the model.
+ observation: A shape [batch_size*num_samples, state_size] Tensor
+ containing z_n, the observation for each sequence in the batch.
+ num_timesteps: The number of timesteps in each sequence, an integer.
+ resampling_schedule: A list of booleans of length num_timesteps, contains
+ True if a resampling should occur on a specific timestep.
+    num_samples: The number of samples to use to compute the FIVO bound.
+    use_resampling_grads: Whether or not to include the resampling gradients
+      in the loss.
+    resampling_type: The type of resampling, one of "multinomial", "stratified",
+      "relaxed-logblend", "relaxed-linearblend", "relaxed-stateblend", or
+      "systematic".
+    resampling_temperature: A positive temperature only used for relaxed
+      resampling.
+    aux: If true, compute the FIVO-AUX bound.
+    summarize: Whether to add summaries of the learning signal.
+  Returns:
+    log_p_hat: The FIVO estimator of the lower bound on the log marginal.
+    losses: A list of Loss tuples that can be minimized to optimize the bound.
+    maintain_ema_op: An op to update the baseline ema used for the resampling
+      gradients.
+    states: The sequence of states sampled.
+    log_weights: The accumulated log weights at each timestep.
+  """
+ # Initialization
+ num_instances = tf.cast(tf.shape(observation)[0], tf.int32)
+ batch_size = tf.cast(num_instances / num_samples, tf.int32)
+ states = [model.zero_state(num_instances)]
+ prev_state = states[0]
+ log_weight_acc = tf.zeros(shape=[num_samples, batch_size], dtype=observation.dtype)
+ prev_log_r_zt = tf.zeros([num_instances], dtype=observation.dtype)
+ log_weights = []
+ log_weights_all = []
+ log_p_hats = []
+ resampling_log_probs = []
+ for t in xrange(num_timesteps):
+ # run the model for one timestep
+ (zt, log_q_zt, log_p_zt, log_p_x_given_z, log_r_zt) = model(
+ prev_state, observation, t)
+ # update accumulators
+ states.append(zt)
+ log_weight = log_p_zt + log_p_x_given_z - log_q_zt
+ if aux:
+ if t == num_timesteps - 1:
+ log_weight -= prev_log_r_zt
+ else:
+ log_weight += log_r_zt - prev_log_r_zt
+ prev_log_r_zt = log_r_zt
+ log_weight_acc += tf.reshape(log_weight, [num_samples, batch_size])
+ log_weights_all.append(log_weight_acc)
+ if resampling_schedule[t]:
+
+ # These objects will be resampled
+ to_resample = [states[-1]]
+ if aux and "relaxed" not in resampling_type:
+ to_resample.append(prev_log_r_zt)
+
+ # do the resampling
+ if resampling_type == "multinomial":
+ (resampled,
+ resampling_log_prob,
+ _, _, _) = multinomial_resampling(log_weight_acc,
+ to_resample,
+ num_samples,
+ batch_size)
+ elif resampling_type == "stratified":
+ (resampled,
+ resampling_log_prob,
+ _, _, _) = stratified_resampling(log_weight_acc,
+ to_resample,
+ num_samples,
+ batch_size)
+ elif resampling_type == "systematic":
+ (resampled,
+ resampling_log_prob,
+ _, _, _) = systematic_resampling(log_weight_acc,
+ to_resample,
+ num_samples,
+ batch_size)
+ elif "relaxed" in resampling_type:
+ if aux:
+ if resampling_type == "relaxed-logblend":
+ (resampled,
+ resampling_log_prob,
+ prev_log_r_zt,
+ _, _, _) = relaxed_resampling(log_weight_acc,
+ to_resample,
+ num_samples,
+ batch_size,
+ temperature=resampling_temperature,
+ log_r_x=prev_log_r_zt,
+ blend_type="log")
+ elif resampling_type == "relaxed-linearblend":
+ (resampled,
+ resampling_log_prob,
+ prev_log_r_zt,
+ _, _, _) = relaxed_resampling(log_weight_acc,
+ to_resample,
+ num_samples,
+ batch_size,
+ temperature=resampling_temperature,
+ log_r_x=prev_log_r_zt,
+ blend_type="linear")
+ elif resampling_type == "relaxed-stateblend":
+ (resampled,
+ resampling_log_prob,
+ _, _, _, _) = relaxed_resampling(log_weight_acc,
+ to_resample,
+ num_samples,
+ batch_size,
+ temperature=resampling_temperature)
+ # Calculate prev_log_r_zt from the post-resampling state
+ prev_r_zt = model.r.r_xn(resampled[0], t)
+ prev_log_r_zt = tf.reduce_sum(
+ prev_r_zt.log_prob(observation), axis=[1])
+ elif resampling_type == "relaxed-stateblend-st":
+ (resampled,
+ resampling_log_prob,
+ _, _, _, _) = relaxed_resampling(log_weight_acc,
+ to_resample,
+ num_samples,
+ batch_size,
+ temperature=resampling_temperature,
+ straight_through=True)
+ # Calculate prev_log_r_zt from the post-resampling state
+ prev_r_zt = model.r.r_xn(resampled[0], t)
+ prev_log_r_zt = tf.reduce_sum(
+ prev_r_zt.log_prob(observation), axis=[1])
+ else:
+ (resampled,
+ resampling_log_prob,
+ _, _, _, _) = relaxed_resampling(log_weight_acc,
+ to_resample,
+ num_samples,
+ batch_size,
+ temperature=resampling_temperature)
+ #if summarize:
+ # resampling_entropy = resampling_dist.entropy()
+ # resampling_entropy = tf.reduce_mean(resampling_entropy)
+ # tf.summary.scalar("weight_entropy/%d" % t, resampling_entropy)
+
+ resampling_log_probs.append(tf.reduce_sum(resampling_log_prob, axis=0))
+ prev_state = resampled[0]
+ if aux and "relaxed" not in resampling_type:
+ # Squeeze out the extra dim potentially added by resampling.
+ # prev_log_r_zt should always be [num_instances]
+ prev_log_r_zt = tf.squeeze(resampled[1])
+ # Update the log p hat estimate, taking a log sum exp over the sample
+ # dimension. The appended tensor is [batch_size].
+ log_p_hats.append(
+ tf.reduce_logsumexp(log_weight_acc, axis=0) - tf.log(
+ tf.cast(num_samples, dtype=observation.dtype)))
+ # reset the weights
+ log_weights.append(log_weight_acc)
+ log_weight_acc = tf.zeros_like(log_weight_acc)
+ else:
+ prev_state = states[-1]
+ # Compute the final weight update. If we just resampled this will be zero.
+ final_update = (tf.reduce_logsumexp(log_weight_acc, axis=0) -
+ tf.log(tf.cast(num_samples, dtype=observation.dtype)))
+ # If we ever resampled, then sum up the previous log p hat terms
+ if len(log_p_hats) > 0:
+ log_p_hat = tf.reduce_sum(log_p_hats, axis=0) + final_update
+ else: # otherwise, log_p_hat only comes from the final update
+ log_p_hat = final_update
+
+ if use_resampling_grads and any(resampling_schedule):
+ # compute the rewards
+ # cumsum([a, b, c]) => [a, a+b, a+b+c]
+ # learning signal at timestep t is
+ # [sum from i=t+1 to T of log_p_hat_i for t=1:T]
+ # so we will compute (sum from i=1 to T of log_p_hat_i)
+ # and at timestep t will subtract off (sum from i=1 to t of log_p_hat_i)
+ # rewards is a [num_resampling_events, batch_size] Tensor
+ rewards = tf.stop_gradient(
+ tf.expand_dims(log_p_hat, 0) - tf.cumsum(log_p_hats, axis=0))
+ batch_avg_rewards = tf.reduce_mean(rewards, axis=1)
+ # compute ema baseline.
+ # centered_rewards is [num_resampling_events, batch_size]
+ baseline_ema = tf.train.ExponentialMovingAverage(decay=0.94)
+ maintain_baseline_op = baseline_ema.apply([batch_avg_rewards])
+ baseline = tf.expand_dims(baseline_ema.average(batch_avg_rewards), 1)
+ centered_rewards = rewards - baseline
+ if summarize:
+ summ.summarize_learning_signal(rewards, "rewards")
+ summ.summarize_learning_signal(centered_rewards, "centered_rewards")
+ # compute the loss tensor.
+ resampling_grads = tf.reduce_sum(
+ tf.stop_gradient(centered_rewards) * resampling_log_probs, axis=0)
+ losses = [Loss("log_p_hat", -tf.reduce_mean(log_p_hat)/num_timesteps),
+ Loss("resampling_grads", -tf.reduce_mean(resampling_grads)/num_timesteps)]
+ else:
+ losses = [Loss("log_p_hat", -tf.reduce_mean(log_p_hat)/num_timesteps)]
+ maintain_baseline_op = tf.no_op()
+
+ log_p_hat /= num_timesteps
+ # we clip off the initial state before returning.
+ return log_p_hat, losses, maintain_baseline_op, states[1:], log_weights_all
+
+
+def fivo_aux_td(
+ model,
+ observation,
+ num_timesteps,
+ resampling_schedule,
+ num_samples=1,
+ summarize=False):
+ """Compute the FIVO_AUX evidence lower bound."""
+ # Initialization
+ num_instances = tf.cast(tf.shape(observation)[0], tf.int32)
+ batch_size = tf.cast(num_instances / num_samples, tf.int32)
+ states = [model.zero_state(num_instances)]
+ prev_state = states[0]
+ log_weight_acc = tf.zeros(shape=[num_samples, batch_size], dtype=observation.dtype)
+ prev_log_r = tf.zeros([num_instances], dtype=observation.dtype)
+ # must be pre-resampling
+ log_rs = []
+ # must be post-resampling
+ r_tilde_params = [model.r_tilde.r_zt(states[0], observation, 0)]
+ log_r_tildes = []
+ log_p_xs = []
+ # contains the weight at each timestep before resampling only on resampling timesteps
+ log_weights = []
+ # contains weight at each timestep before resampling
+ log_weights_all = []
+ log_p_hats = []
+ for t in xrange(num_timesteps):
+ # run the model for one timestep
+ # zt is state, [num_instances, state_dim]
+ # log_q_zt, log_p_x_given_z is [num_instances]
+ # r_tilde_mu, r_tilde_sigma is [num_instances, state_dim]
+ # p_ztplus1 is a normal distribution on [num_instances, state_dim]
+ (zt, log_q_zt, log_p_zt, log_p_x_given_z,
+ r_tilde_mu, r_tilde_sigma_sq, p_ztplus1) = model(prev_state, observation, t)
+
+ # Compute the log weight without log r.
+ log_weight = log_p_zt + log_p_x_given_z - log_q_zt
+
+ # Compute log r.
+ if t == num_timesteps - 1:
+ log_r = tf.zeros_like(prev_log_r)
+ else:
+ p_mu = p_ztplus1.mean()
+ p_sigma_sq = p_ztplus1.variance()
+ log_r = (tf.log(r_tilde_sigma_sq) -
+ tf.log(r_tilde_sigma_sq + p_sigma_sq) -
+ tf.square(r_tilde_mu - p_mu)/(r_tilde_sigma_sq + p_sigma_sq))
+ log_r = 0.5*tf.reduce_sum(log_r, axis=-1)
+
+ #log_weight += tf.stop_gradient(log_r - prev_log_r)
+ log_weight += log_r - prev_log_r
+ log_weight_acc += tf.reshape(log_weight, [num_samples, batch_size])
+
+ # Update accumulators
+ states.append(zt)
+ log_weights_all.append(log_weight_acc)
+ log_p_xs.append(log_p_x_given_z)
+ log_rs.append(log_r)
+
+ # Compute log_r_tilde as [num_instances] Tensor.
+ prev_r_tilde_mu, prev_r_tilde_sigma_sq = r_tilde_params[-1]
+ prev_log_r_tilde = -0.5*tf.reduce_sum(
+ tf.square(zt - prev_r_tilde_mu)/prev_r_tilde_sigma_sq, axis=-1)
+ #tf.square(tf.stop_gradient(zt) - r_tilde_mu)/r_tilde_sigma_sq, axis=-1)
+ #tf.square(zt - r_tilde_mu)/r_tilde_sigma_sq, axis=-1)
+ log_r_tildes.append(prev_log_r_tilde)
+
+ # optionally resample
+ if resampling_schedule[t]:
+ # These objects will be resampled
+ if t < num_timesteps - 1:
+ to_resample = [zt, log_r, r_tilde_mu, r_tilde_sigma_sq]
+ else:
+ to_resample = [zt, log_r]
+ (resampled,
+ _, _, _, _) = multinomial_resampling(log_weight_acc,
+ to_resample,
+ num_samples,
+ batch_size)
+ prev_state = resampled[0]
+ # Squeeze out the extra dim potentially added by resampling.
+ # prev_log_r and log_r_tilde should always be [num_instances]
+ prev_log_r = tf.squeeze(resampled[1])
+ if t < num_timesteps - 1:
+ r_tilde_params.append((resampled[2], resampled[3]))
+ # Update the log p hat estimate, taking a log sum exp over the sample
+ # dimension. The appended tensor is [batch_size].
+ log_p_hats.append(
+ tf.reduce_logsumexp(log_weight_acc, axis=0) - tf.log(
+ tf.cast(num_samples, dtype=observation.dtype)))
+ # reset the weights
+ log_weights.append(log_weight_acc)
+ log_weight_acc = tf.zeros_like(log_weight_acc)
+ else:
+ prev_state = zt
+ prev_log_r = log_r
+ if t < num_timesteps - 1:
+ r_tilde_params.append((r_tilde_mu, r_tilde_sigma_sq))
+
+ # Compute the final weight update. If we just resampled this will be zero.
+ final_update = (tf.reduce_logsumexp(log_weight_acc, axis=0) -
+ tf.log(tf.cast(num_samples, dtype=observation.dtype)))
+ # If we ever resampled, then sum up the previous log p hat terms
+ if len(log_p_hats) > 0:
+ log_p_hat = tf.reduce_sum(log_p_hats, axis=0) + final_update
+ else: # otherwise, log_p_hat only comes from the final update
+ log_p_hat = final_update
+
+ # Compute the bellman loss.
+ # Will remove the first timestep as it is not used.
+ # log p(x_t|z_t) is in row t-1.
+ log_p_x = tf.reshape(tf.stack(log_p_xs),
+ [num_timesteps, num_samples, batch_size])
+ # log r_t is contained in row t-1.
+ # The last row is zeros because at the final timestep T (num_timesteps) r is 1, so log r is 0.
+ log_r = tf.reshape(tf.stack(log_rs),
+ [num_timesteps, num_samples, batch_size])
+ # [num_timesteps, num_instances]. log r_tilde_t is in row t-1.
+ log_r_tilde = tf.reshape(tf.stack(log_r_tildes),
+ [num_timesteps, num_samples, batch_size])
+ log_lambda = tf.reduce_mean(log_r_tilde - log_p_x - log_r, axis=1,
+ keepdims=True)
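+ # Bellman-style regression: log r_tilde_t is fit to the stop-gradiented
+ # target log_lambda + log p(x_t|z_t) + log r_{t+1}, where log_lambda is a
+ # per-timestep normalizer shared across the sample dimension.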
+ bellman_sos = tf.reduce_mean(tf.square(
+ log_r_tilde - tf.stop_gradient(log_lambda + log_p_x + log_r)), axis=[0, 1])
+ bellman_loss = tf.reduce_mean(bellman_sos)/num_timesteps
+ tf.summary.scalar("bellman_loss", bellman_loss)
+
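+ # Partition the trainable variables: r_tilde parameters (in R_TILDE_VARS)
+ # are updated only by the Bellman loss, while all remaining variables are
+ # updated by the log_p_hat objective via the LOG_P_HAT_VARS collection.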
+ if len(tf.get_collection("LOG_P_HAT_VARS")) == 0:
+ log_p_hat_collection = list(set(tf.trainable_variables()) -
+ set(tf.get_collection("R_TILDE_VARS")))
+ for v in log_p_hat_collection:
+ tf.add_to_collection("LOG_P_HAT_VARS", v)
+
+ log_p_hat /= num_timesteps
+ losses = [Loss("log_p_hat", -tf.reduce_mean(log_p_hat), "LOG_P_HAT_VARS"),
+ Loss("bellman_loss", bellman_loss, "R_TILDE_VARS")]
+
+ return log_p_hat, losses, tf.no_op(), states[1:], log_weights_all
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/experimental/data.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/experimental/data.py"
new file mode 100644
index 00000000..0842f212
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/experimental/data.py"
@@ -0,0 +1,192 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Datasets."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+from six.moves import xrange  # pylint: disable=redefined-builtin
+import tensorflow as tf
+
+import models
+
+
+def make_long_chain_dataset(
+ state_size=1,
+ num_obs=5,
+ steps_per_obs=3,
+ variance=1.,
+ observation_variance=1.,
+ batch_size=4,
+ num_samples=1,
+ observation_type=models.STANDARD_OBSERVATION,
+ transition_type=models.STANDARD_TRANSITION,
+ fixed_observation=None,
+ dtype="float32"):
+ """Creates a long chain data generating process.
+
+ Creates a tf.data.Dataset that provides batches of data from a long
+ chain.
+
+ Args:
+ state_size: The dimension of the state space of the process.
+ num_obs: The number of observations in the chain.
+ steps_per_obs: The number of steps between each observation.
+ variance: The variance of the normal distributions used at each timestep.
+ observation_variance: The variance of the observation noise.
+ batch_size: The number of trajectories to include in each batch.
+ num_samples: The number of replicas of each trajectory to include in each
+ batch.
+ observation_type: One of models.STANDARD_OBSERVATION, SQUARED_OBSERVATION,
+ or ABS_OBSERVATION, controlling how observations depend on the state.
+ transition_type: One of models.STANDARD_TRANSITION or ROUND_TRANSITION.
+ fixed_observation: If not None, all observations are fixed to this value.
+ dtype: The datatype of the states and observations.
+ Returns:
+ dataset: A tf.data.Dataset that can be iterated over.
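+
+ Example (illustrative sketch, assuming TF 1.x and that this module is
+ imported as `data`):
+
+ dataset = data.make_long_chain_dataset(
+ state_size=1, num_obs=5, steps_per_obs=3, batch_size=4, num_samples=2)
+ states, observations = dataset.make_one_shot_iterator().get_next()
+ # states: [batch_size*num_samples, num_timesteps+1, state_size]
+ # observations: [batch_size*num_samples, num_obs, state_size]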
+ """
+ num_timesteps = num_obs * steps_per_obs
+ def data_generator():
+ """An infinite generator of latents and observations from the model."""
+ while True:
+ states = []
+ observations = []
+ # z0 ~ Normal(0, sqrt(variance)).
+ states.append(
+ np.random.normal(size=[state_size],
+ scale=np.sqrt(variance)).astype(dtype))
+ # start at 1 because we've already generated z0
+ # go to num_timesteps+1 because we want to include the num_timesteps-th step
+ for t in xrange(1, num_timesteps+1):
+ if transition_type == models.ROUND_TRANSITION:
+ loc = np.round(states[-1])
+ elif transition_type == models.STANDARD_TRANSITION:
+ loc = states[-1]
+ new_state = np.random.normal(size=[state_size],
+ loc=loc,
+ scale=np.sqrt(variance))
+ states.append(new_state.astype(dtype))
+ if t % steps_per_obs == 0:
+ if fixed_observation is None:
+ if observation_type == models.SQUARED_OBSERVATION:
+ loc = np.square(states[-1])
+ elif observation_type == models.ABS_OBSERVATION:
+ loc = np.abs(states[-1])
+ elif observation_type == models.STANDARD_OBSERVATION:
+ loc = states[-1]
+ new_obs = np.random.normal(size=[state_size],
+ loc=loc,
+ scale=np.sqrt(observation_variance)).astype(dtype)
+ else:
+ new_obs = (np.ones([state_size]) * fixed_observation).astype(dtype)
+
+ observations.append(new_obs)
+ yield states, observations
+
+ dataset = tf.data.Dataset.from_generator(
+ data_generator,
+ output_types=(tf.as_dtype(dtype), tf.as_dtype(dtype)),
+ output_shapes=([num_timesteps+1, state_size], [num_obs, state_size]))
+ dataset = dataset.repeat().batch(batch_size)
+
+ def tile_batch(state, observation):
+ state = tf.tile(state, [num_samples, 1, 1])
+ observation = tf.tile(observation, [num_samples, 1, 1])
+ return state, observation
+
+ dataset = dataset.map(tile_batch, num_parallel_calls=12).prefetch(1024)
+ return dataset
+
+
+def make_dataset(bs=None,
+ state_size=1,
+ num_timesteps=10,
+ variance=1.,
+ prior_type="unimodal",
+ bimodal_prior_weight=0.5,
+ bimodal_prior_mean=1,
+ transition_type=models.STANDARD_TRANSITION,
+ fixed_observation=None,
+ batch_size=4,
+ num_samples=1,
+ dtype='float32'):
+ """Creates a data generating process.
+
+ Creates a tf.data.Dataset that provides batches of data.
+
+ Args:
+ bs: The parameters of the data generating process. If None, new bs are
+ randomly generated.
+ state_size: The dimension of the state space of the process.
+ num_timesteps: The length of the state sequences in the process.
+ variance: The variance of the normal distributions used at each timestep.
+ prior_type: The prior over z_0, one of "unimodal", "bimodal", or "nonlinear".
+ bimodal_prior_weight: Mixing weight of the prior modes when prior_type is
+ "bimodal".
+ bimodal_prior_mean: The (absolute) mean of each prior mode.
+ transition_type: One of models.STANDARD_TRANSITION or ROUND_TRANSITION.
+ fixed_observation: If not None, the observation is fixed to this value.
+ batch_size: The number of trajectories to include in each batch.
+ num_samples: The number of replicas of each trajectory in each batch.
+ dtype: The datatype of the states and observations.
+ Returns:
+ bs: The true bs used to generate the data
+ dataset: A tf.data.Dataset that can be iterated over.
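+
+ Example (illustrative sketch, assuming TF 1.x and that this module is
+ imported as `data`):
+
+ bs, dataset = data.make_dataset(num_timesteps=10, batch_size=4)
+ states, observation = dataset.make_one_shot_iterator().get_next()
+ # states: [batch_size*num_samples, num_timesteps, state_size]
+ # observation: [batch_size*num_samples, state_size]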
+ """
+
+ if bs is None:
+ bs = [np.random.uniform(size=[state_size]).astype(dtype) for _ in xrange(num_timesteps)]
+ tf.logging.info("data generating processs bs: %s",
+ np.array(bs).reshape(num_timesteps))
+
+
+ def data_generator():
+ """An infinite generator of latents and observations from the model."""
+ while True:
+ states = []
+ if prior_type == "unimodal" or prior_type == "nonlinear":
+ # Prior is Normal(0, sqrt(variance)).
+ states.append(np.random.normal(size=[state_size], scale=np.sqrt(variance)).astype(dtype))
+ elif prior_type == "bimodal":
+ if np.random.uniform() > bimodal_prior_weight:
+ loc = bimodal_prior_mean
+ else:
+ loc = - bimodal_prior_mean
+ states.append(np.random.normal(size=[state_size],
+ loc=loc,
+ scale=np.sqrt(variance)
+ ).astype(dtype))
+
+ for t in xrange(num_timesteps):
+ if transition_type == models.ROUND_TRANSITION:
+ loc = np.round(states[-1])
+ elif transition_type == models.STANDARD_TRANSITION:
+ loc = states[-1]
+ loc += bs[t]
+ new_state = np.random.normal(size=[state_size],
+ loc=loc,
+ scale=np.sqrt(variance)).astype(dtype)
+ states.append(new_state)
+
+ if fixed_observation is None:
+ observation = states[-1]
+ else:
+ observation = np.ones_like(states[-1]) * fixed_observation
+ yield np.array(states[:-1]), observation
+
+ dataset = tf.data.Dataset.from_generator(
+ data_generator,
+ output_types=(tf.as_dtype(dtype), tf.as_dtype(dtype)),
+ output_shapes=([num_timesteps, state_size], [state_size]))
+ dataset = dataset.repeat().batch(batch_size)
+
+ def tile_batch(state, observation):
+ state = tf.tile(state, [num_samples, 1, 1])
+ observation = tf.tile(observation, [num_samples, 1])
+ return state, observation
+
+ dataset = dataset.map(tile_batch, num_parallel_calls=12).prefetch(1024)
+ return np.array(bs), dataset
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/experimental/models.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/experimental/models.py"
new file mode 100644
index 00000000..62801ca1
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/experimental/models.py"
@@ -0,0 +1,1227 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Model."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import functools
+import math
+
+import numpy as np
+from six.moves import xrange  # pylint: disable=redefined-builtin
+import sonnet as snt
+import tensorflow as tf
+
+SQUARED_OBSERVATION = "squared"
+ABS_OBSERVATION = "abs"
+STANDARD_OBSERVATION = "standard"
+OBSERVATION_TYPES = [SQUARED_OBSERVATION, ABS_OBSERVATION, STANDARD_OBSERVATION]
+
+ROUND_TRANSITION = "round"
+STANDARD_TRANSITION = "standard"
+TRANSITION_TYPES = [ROUND_TRANSITION, STANDARD_TRANSITION]
+
+
+class Q(object):
+
+ def __init__(self,
+ state_size,
+ num_timesteps,
+ sigma_min=1e-5,
+ dtype=tf.float32,
+ random_seed=None,
+ init_mu0_to_zero=False,
+ graph_collection_name="Q_VARS"):
+ self.sigma_min = sigma_min
+ self.dtype = dtype
+ self.graph_collection_name = graph_collection_name
+ initializers = []
+ for t in xrange(num_timesteps):
+ if t == 0 and init_mu0_to_zero:
+ initializers.append(
+ {"w": tf.zeros_initializer, "b": tf.zeros_initializer})
+ else:
+ initializers.append(
+ {"w": tf.random_uniform_initializer(seed=random_seed),
+ "b": tf.zeros_initializer})
+
+ def custom_getter(getter, *args, **kwargs):
+ out = getter(*args, **kwargs)
+ ref = tf.get_collection_ref(self.graph_collection_name)
+ if out not in ref:
+ ref.append(out)
+ return out
+
+ self.mus = [
+ snt.Linear(output_size=state_size,
+ initializers=initializers[t],
+ name="q_mu_%d" % t,
+ custom_getter=custom_getter
+ )
+ for t in xrange(num_timesteps)
+ ]
+ self.sigmas = [
+ tf.get_variable(
+ shape=[state_size],
+ dtype=self.dtype,
+ name="q_sigma_%d" % (t + 1),
+ collections=[tf.GraphKeys.GLOBAL_VARIABLES, graph_collection_name],
+ initializer=tf.random_uniform_initializer(seed=random_seed))
+ for t in xrange(num_timesteps)
+ ]
+
+ def q_zt(self, observation, prev_state, t):
+ batch_size = tf.shape(prev_state)[0]
+ q_mu = self.mus[t](tf.concat([observation, prev_state], axis=1))
+ q_sigma = tf.maximum(tf.nn.softplus(self.sigmas[t]), self.sigma_min)
+ q_sigma = tf.tile(q_sigma[tf.newaxis, :], [batch_size, 1])
+ q_zt = tf.contrib.distributions.Normal(loc=q_mu, scale=tf.sqrt(q_sigma))
+ return q_zt
+
+ def summarize_weights(self):
+ for t, sigma in enumerate(self.sigmas):
+ tf.summary.scalar("q_sigma/%d" % t, sigma[0])
+ for t, f in enumerate(self.mus):
+ tf.summary.scalar("q_mu/b_%d" % t, f.b[0])
+ tf.summary.scalar("q_mu/w_obs_%d" % t, f.w[0,0])
+ if t != 0:
+ tf.summary.scalar("q_mu/w_prev_state_%d" % t, f.w[1,0])
+
+
+class PreviousStateQ(Q):
+
+ def q_zt(self, unused_observation, prev_state, t):
+ batch_size = tf.shape(prev_state)[0]
+ q_mu = self.mus[t](prev_state)
+ q_sigma = tf.maximum(tf.nn.softplus(self.sigmas[t]), self.sigma_min)
+ q_sigma = tf.tile(q_sigma[tf.newaxis, :], [batch_size, 1])
+ q_zt = tf.contrib.distributions.Normal(loc=q_mu, scale=tf.sqrt(q_sigma))
+ return q_zt
+
+ def summarize_weights(self):
+ for t, sigma in enumerate(self.sigmas):
+ tf.summary.scalar("q_sigma/%d" % t, sigma[0])
+ for t, f in enumerate(self.mus):
+ tf.summary.scalar("q_mu/b_%d" % t, f.b[0])
+ tf.summary.scalar("q_mu/w_prev_state_%d" % t, f.w[0,0])
+
+
+class ObservationQ(Q):
+
+ def q_zt(self, observation, prev_state, t):
+ batch_size = tf.shape(prev_state)[0]
+ q_mu = self.mus[t](observation)
+ q_sigma = tf.maximum(tf.nn.softplus(self.sigmas[t]), self.sigma_min)
+ q_sigma = tf.tile(q_sigma[tf.newaxis, :], [batch_size, 1])
+ q_zt = tf.contrib.distributions.Normal(loc=q_mu, scale=tf.sqrt(q_sigma))
+ return q_zt
+
+ def summarize_weights(self):
+ for t, sigma in enumerate(self.sigmas):
+ tf.summary.scalar("q_sigma/%d" % t, sigma[0])
+ for t, f in enumerate(self.mus):
+ tf.summary.scalar("q_mu/b_%d" % t, f.b[0])
+ tf.summary.scalar("q_mu/w_obs_%d" % t, f.w[0,0])
+
+
+class SimpleMeanQ(object):
+
+ def __init__(self,
+ state_size,
+ num_timesteps,
+ sigma_min=1e-5,
+ dtype=tf.float32,
+ random_seed=None,
+ init_mu0_to_zero=False,
+ graph_collection_name="Q_VARS"):
+ self.sigma_min = sigma_min
+ self.dtype = dtype
+ self.graph_collection_name = graph_collection_name
+ initializers = []
+ for t in xrange(num_timesteps):
+ if t == 0 and init_mu0_to_zero:
+ initializers.append(tf.zeros_initializer)
+ else:
+ initializers.append(tf.random_uniform_initializer(seed=random_seed))
+
+ self.mus = [
+ tf.get_variable(
+ shape=[state_size],
+ dtype=self.dtype,
+ name="q_mu_%d" % (t + 1),
+ collections=[tf.GraphKeys.GLOBAL_VARIABLES, graph_collection_name],
+ initializer=initializers[t])
+ for t in xrange(num_timesteps)
+ ]
+ self.sigmas = [
+ tf.get_variable(
+ shape=[state_size],
+ dtype=self.dtype,
+ name="q_sigma_%d" % (t + 1),
+ collections=[tf.GraphKeys.GLOBAL_VARIABLES, graph_collection_name],
+ initializer=tf.random_uniform_initializer(seed=random_seed))
+ for t in xrange(num_timesteps)
+ ]
+
+ def q_zt(self, unused_observation, prev_state, t):
+ batch_size = tf.shape(prev_state)[0]
+ q_mu = tf.tile(self.mus[t][tf.newaxis, :], [batch_size, 1])
+ q_sigma = tf.maximum(tf.nn.softplus(self.sigmas[t]), self.sigma_min)
+ q_sigma = tf.tile(q_sigma[tf.newaxis, :], [batch_size, 1])
+ q_zt = tf.contrib.distributions.Normal(loc=q_mu, scale=tf.sqrt(q_sigma))
+ return q_zt
+
+ def summarize_weights(self):
+ for t, sigma in enumerate(self.sigmas):
+ tf.summary.scalar("q_sigma/%d" % t, sigma[0])
+ for t, f in enumerate(self.mus):
+ tf.summary.scalar("q_mu/%d" % t, f[0])
+
+
+class R(object):
+
+ def __init__(self,
+ state_size,
+ num_timesteps,
+ sigma_min=1e-5,
+ dtype=tf.float32,
+ sigma_init=1.,
+ random_seed=None,
+ graph_collection_name="R_VARS"):
+ self.dtype = dtype
+ self.sigma_min = sigma_min
+ initializers = {"w": tf.truncated_normal_initializer(seed=random_seed),
+ "b": tf.zeros_initializer}
+ self.graph_collection_name = graph_collection_name
+
+ def custom_getter(getter, *args, **kwargs):
+ out = getter(*args, **kwargs)
+ ref = tf.get_collection_ref(self.graph_collection_name)
+ if out not in ref:
+ ref.append(out)
+ return out
+
+ self.mus = [
+ snt.Linear(output_size=state_size,
+ initializers=initializers,
+ name="r_mu_%d" % t,
+ custom_getter=custom_getter)
+ for t in xrange(num_timesteps)
+ ]
+
+ self.sigmas = [
+ tf.get_variable(
+ shape=[state_size],
+ dtype=self.dtype,
+ name="r_sigma_%d" % (t + 1),
+ collections=[tf.GraphKeys.GLOBAL_VARIABLES, graph_collection_name],
+ #initializer=tf.random_uniform_initializer(seed=random_seed, maxval=100))
+ initializer=tf.constant_initializer(sigma_init))
+ for t in xrange(num_timesteps)
+ ]
+
+ def r_xn(self, z_t, t):
+ batch_size = tf.shape(z_t)[0]
+ r_mu = self.mus[t](z_t)
+ r_sigma = tf.maximum(tf.nn.softplus(self.sigmas[t]), self.sigma_min)
+ r_sigma = tf.tile(r_sigma[tf.newaxis, :], [batch_size, 1])
+ return tf.contrib.distributions.Normal(
+ loc=r_mu, scale=tf.sqrt(r_sigma))
+
+ def summarize_weights(self):
+ for t in range(len(self.mus) - 1):
+ tf.summary.scalar("r_mu/%d" % t, self.mus[t][0])
+ tf.summary.scalar("r_sigma/%d" % t, self.sigmas[t][0])
+
+
+class P(object):
+
+ def __init__(self,
+ state_size,
+ num_timesteps,
+ sigma_min=1e-5,
+ variance=1.0,
+ dtype=tf.float32,
+ random_seed=None,
+ trainable=True,
+ init_bs_to_zero=False,
+ graph_collection_name="P_VARS"):
+ self.state_size = state_size
+ self.num_timesteps = num_timesteps
+ self.sigma_min = sigma_min
+ self.dtype = dtype
+ self.variance = variance
+ self.graph_collection_name = graph_collection_name
+ if init_bs_to_zero:
+ initializers = [tf.zeros_initializer for _ in xrange(num_timesteps)]
+ else:
+ initializers = [tf.random_uniform_initializer(seed=random_seed) for _ in xrange(num_timesteps)]
+
+ self.bs = [
+ tf.get_variable(
+ shape=[state_size],
+ dtype=self.dtype,
+ name="p_b_%d" % (t + 1),
+ initializer=initializers[t],
+ collections=[tf.GraphKeys.GLOBAL_VARIABLES, graph_collection_name],
+ trainable=trainable) for t in xrange(num_timesteps)
+ ]
+ self.Bs = tf.cumsum(self.bs, reverse=True, axis=0)
+
+ def posterior(self, observation, prev_state, t):
+ """Computes the true posterior p(z_t|z_{t-1}, z_n)."""
+ # bs[0] is really b_1
+ # Bs[i] is sum from k=i+1^n b_k
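+ # With Gaussian increments, conditioning the chain on both z_{t-1} and the
+ # endpoint z_n gives another Gaussian (a bridge-like posterior): for t > 0
+ # the mean is a weighted average of the forward prediction z_{t-1} + b_t
+ # (weight n-t) and the endpoint pulled back by the remaining drift,
+ # z_n - Bs[t] (weight 1), and the variance shrinks by (n-t)/(n-t+1).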
+ mu = observation - self.Bs[t]
+ if t > 0:
+ mu += (prev_state + self.bs[t - 1]) * float(self.num_timesteps - t)
+ mu /= float(self.num_timesteps - t + 1)
+ sigma = tf.ones_like(mu) * self.variance * (
+ float(self.num_timesteps - t) / float(self.num_timesteps - t + 1))
+ return tf.contrib.distributions.Normal(loc=mu, scale=tf.sqrt(sigma))
+
+ def lookahead(self, state, t):
+ """Computes the true lookahead distribution p(z_n|z_t)."""
+ mu = state + self.Bs[t]
+ sigma = tf.ones_like(state) * self.variance * float(self.num_timesteps - t)
+ return tf.contrib.distributions.Normal(loc=mu, scale=tf.sqrt(sigma))
+
+ def likelihood(self, observation):
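+ """Computes the marginal likelihood p(z_n) of the observation.
+
+ For this linear-Gaussian chain, z_n is the sum of T+1 independent Gaussian
+ increments, so p(z_n) = N(sum_t b_t, (T+1) * variance).
+ """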
+ batch_size = tf.shape(observation)[0]
+ mu = tf.tile(tf.reduce_sum(self.bs, axis=0)[tf.newaxis, :], [batch_size, 1])
+ sigma = tf.ones_like(mu) * self.variance * (self.num_timesteps + 1)
+ dist = tf.contrib.distributions.Normal(loc=mu, scale=tf.sqrt(sigma))
+ # Average over the batch and take the sum over the state size
+ return tf.reduce_mean(tf.reduce_sum(dist.log_prob(observation), axis=1))
+
+ def p_zt(self, prev_state, t):
+ """Computes the model p(z_t| z_{t-1})."""
+ batch_size = tf.shape(prev_state)[0]
+ if t > 0:
+ z_mu_p = prev_state + self.bs[t - 1]
+ else: # p(z_0) is Normal(0,1)
+ z_mu_p = tf.zeros([batch_size, self.state_size], dtype=self.dtype)
+ p_zt = tf.contrib.distributions.Normal(
+ loc=z_mu_p, scale=tf.sqrt(tf.ones_like(z_mu_p) * self.variance))
+ return p_zt
+
+ def generative(self, unused_observation, z_nm1):
+ """Computes the model's generative distribution p(z_n| z_{n-1})."""
+ generative_p_mu = z_nm1 + self.bs[-1]
+ return tf.contrib.distributions.Normal(
+ loc=generative_p_mu, scale=tf.sqrt(tf.ones_like(generative_p_mu) * self.variance))
+
+
+class ShortChainNonlinearP(object):
+
+ def __init__(self,
+ state_size,
+ num_timesteps,
+ sigma_min=1e-5,
+ variance=1.0,
+ observation_variance=1.0,
+ transition_type=STANDARD_TRANSITION,
+ transition_dist=tf.contrib.distributions.Normal,
+ dtype=tf.float32,
+ random_seed=None):
+ self.state_size = state_size
+ self.num_timesteps = num_timesteps
+ self.sigma_min = sigma_min
+ self.dtype = dtype
+ self.variance = variance
+ self.observation_variance = observation_variance
+ self.transition_type = transition_type
+ self.transition_dist = transition_dist
+
+ def p_zt(self, prev_state, t):
+ """Computes the model p(z_t| z_{t-1})."""
+ batch_size = tf.shape(prev_state)[0]
+ if t > 0:
+ if self.transition_type == ROUND_TRANSITION:
+ loc = tf.round(prev_state)
+ tf.logging.info("p(z_%d | z_%d) ~ N(round(z_%d), %0.1f)" % (t, t-1, t-1, self.variance))
+ elif self.transition_type == STANDARD_TRANSITION:
+ loc = prev_state
+ tf.logging.info("p(z_%d | z_%d) ~ N(z_%d, %0.1f)" % (t, t-1, t-1, self.variance))
+ else: # p(z_0) is Normal(0,1)
+ loc = tf.zeros([batch_size, self.state_size], dtype=self.dtype)
+ tf.logging.info("p(z_0) ~ N(0,%0.1f)" % self.variance)
+
+ p_zt = self.transition_dist(
+ loc=loc,
+ scale=tf.sqrt(tf.ones_like(loc) * self.variance))
+ return p_zt
+
+ def generative(self, unused_obs, z_ni):
+ """Computes the model's generative distribution p(x_i| z_{ni})."""
+ if self.transition_type == ROUND_TRANSITION:
+ loc = tf.round(z_ni)
+ elif self.transition_type == STANDARD_TRANSITION:
+ loc = z_ni
+ generative_sigma_sq = tf.ones_like(loc) * self.observation_variance
+ return self.transition_dist(
+ loc=loc, scale=tf.sqrt(generative_sigma_sq))
+
+
+class BimodalPriorP(object):
+
+ def __init__(self,
+ state_size,
+ num_timesteps,
+ mixing_coeff=0.5,
+ prior_mode_mean=1,
+ sigma_min=1e-5,
+ variance=1.0,
+ dtype=tf.float32,
+ random_seed=None,
+ trainable=True,
+ init_bs_to_zero=False,
+ graph_collection_name="P_VARS"):
+ self.state_size = state_size
+ self.num_timesteps = num_timesteps
+ self.sigma_min = sigma_min
+ self.dtype = dtype
+ self.variance = variance
+ self.mixing_coeff = mixing_coeff
+ self.prior_mode_mean = prior_mode_mean
+
+ if init_bs_to_zero:
+ initializers = [tf.zeros_initializer for _ in xrange(num_timesteps)]
+ else:
+ initializers = [tf.random_uniform_initializer(seed=random_seed) for _ in xrange(num_timesteps)]
+
+ self.bs = [
+ tf.get_variable(
+ shape=[state_size],
+ dtype=self.dtype,
+ name="b_%d" % (t + 1),
+ initializer=initializers[t],
+ collections=[tf.GraphKeys.GLOBAL_VARIABLES, graph_collection_name],
+ trainable=trainable) for t in xrange(num_timesteps)
+ ]
+ self.Bs = tf.cumsum(self.bs, reverse=True, axis=0)
+
+ def posterior(self, observation, prev_state, t):
+ """Computes the true posterior p(z_t|z_{t-1}, z_n)."""
+ # NOTE: This is currently wrong for the bimodal prior, but fixing it would
+ # require refactoring summarize_q, since the KL is not defined for a mixture.
+ # bs[0] is really b_1
+ # Bs[i] is sum from k=i+1^n b_k
+ mu = observation - self.Bs[t]
+ if t > 0:
+ mu += (prev_state + self.bs[t - 1]) * float(self.num_timesteps - t)
+ mu /= float(self.num_timesteps - t + 1)
+ sigma = tf.ones_like(mu) * self.variance * (
+ float(self.num_timesteps - t) / float(self.num_timesteps - t + 1))
+ return tf.contrib.distributions.Normal(loc=mu, scale=tf.sqrt(sigma))
+
+ def lookahead(self, state, t):
+ """Computes the true lookahead distribution p(z_n|z_t)."""
+ mu = state + self.Bs[t]
+ sigma = tf.ones_like(state) * self.variance * float(self.num_timesteps - t)
+ return tf.contrib.distributions.Normal(loc=mu, scale=tf.sqrt(sigma))
+
+ def likelihood(self, observation):
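+ """Computes the marginal likelihood p(z_n) under the bimodal prior.
+
+ Marginalizing the two-mode prior over z_0 gives a two-component Gaussian
+ mixture over z_n with means +/- prior_mode_mean + sum_t b_t, variance
+ (T+1) * variance, and weights (mixing_coeff, 1 - mixing_coeff).
+ """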
+ batch_size = tf.shape(observation)[0]
+ sum_of_bs = tf.tile(tf.reduce_sum(self.bs, axis=0)[tf.newaxis, :], [batch_size, 1])
+ sigma = tf.ones_like(sum_of_bs) * self.variance * (self.num_timesteps + 1)
+ mu_pos = (tf.ones([batch_size, self.state_size], dtype=self.dtype) * self.prior_mode_mean) + sum_of_bs
+ mu_neg = (tf.ones([batch_size, self.state_size], dtype=self.dtype) * -self.prior_mode_mean) + sum_of_bs
+ zn_pos = tf.contrib.distributions.Normal(
+ loc=mu_pos,
+ scale=tf.sqrt(sigma))
+ zn_neg = tf.contrib.distributions.Normal(
+ loc=mu_neg,
+ scale=tf.sqrt(sigma))
+ mode_probs = tf.convert_to_tensor([self.mixing_coeff, 1-self.mixing_coeff], dtype=tf.float64)
+ mode_probs = tf.tile(mode_probs[tf.newaxis, tf.newaxis, :], [batch_size, 1, 1])
+ mode_selection_dist = tf.contrib.distributions.Categorical(probs=mode_probs)
+ zn_dist = tf.contrib.distributions.Mixture(
+ cat=mode_selection_dist,
+ components=[zn_pos, zn_neg],
+ validate_args=True)
+ # Average over the batch and take the sum over the state size
+ return tf.reduce_mean(tf.reduce_sum(zn_dist.log_prob(observation), axis=1))
+
+ def p_zt(self, prev_state, t):
+ """Computes the model p(z_t| z_{t-1})."""
+ batch_size = tf.shape(prev_state)[0]
+ if t > 0:
+ z_mu_p = prev_state + self.bs[t - 1]
+ p_zt = tf.contrib.distributions.Normal(
+ loc=z_mu_p, scale=tf.sqrt(tf.ones_like(z_mu_p) * self.variance))
+ return p_zt
+ else: # p(z_0) is mixture of two Normals
+ mu_pos = tf.ones([batch_size, self.state_size], dtype=self.dtype) * self.prior_mode_mean
+ mu_neg = tf.ones([batch_size, self.state_size], dtype=self.dtype) * -self.prior_mode_mean
+ z0_pos = tf.contrib.distributions.Normal(
+ loc=mu_pos,
+ scale=tf.sqrt(tf.ones_like(mu_pos) * self.variance))
+ z0_neg = tf.contrib.distributions.Normal(
+ loc=mu_neg,
+ scale=tf.sqrt(tf.ones_like(mu_neg) * self.variance))
+ mode_probs = tf.convert_to_tensor([self.mixing_coeff, 1-self.mixing_coeff], dtype=tf.float64)
+ mode_probs = tf.tile(mode_probs[tf.newaxis, tf.newaxis, :], [batch_size, 1, 1])
+ mode_selection_dist = tf.contrib.distributions.Categorical(probs=mode_probs)
+ z0_dist = tf.contrib.distributions.Mixture(
+ cat=mode_selection_dist,
+ components=[z0_pos, z0_neg],
+ validate_args=False)
+ return z0_dist
+
+ def generative(self, unused_observation, z_nm1):
+ """Computes the model's generative distribution p(z_n| z_{n-1})."""
+ generative_p_mu = z_nm1 + self.bs[-1]
+ return tf.contrib.distributions.Normal(
+ loc=generative_p_mu, scale=tf.sqrt(tf.ones_like(generative_p_mu) * self.variance))
+
+
+
+class Model(object):
+
+ def __init__(self,
+ p,
+ q,
+ r,
+ state_size,
+ num_timesteps,
+ dtype=tf.float32):
+ self.p = p
+ self.q = q
+ self.r = r
+ self.state_size = state_size
+ self.num_timesteps = num_timesteps
+ self.dtype = dtype
+
+ def zero_state(self, batch_size):
+ return tf.zeros([batch_size, self.state_size], dtype=self.dtype)
+
+ def __call__(self, prev_state, observation, t):
+ # Compute the q distribution over z, q(z_t|z_n, z_{t-1}).
+ q_zt = self.q.q_zt(observation, prev_state, t)
+ # Compute the p distribution over z, p(z_t|z_{t-1}).
+ p_zt = self.p.p_zt(prev_state, t)
+ # sample from q
+ zt = q_zt.sample()
+ r_xn = self.r.r_xn(zt, t)
+ # Calculate the logprobs and sum over the state size.
+ log_q_zt = tf.reduce_sum(q_zt.log_prob(zt), axis=1)
+ log_p_zt = tf.reduce_sum(p_zt.log_prob(zt), axis=1)
+ log_r_xn = tf.reduce_sum(r_xn.log_prob(observation), axis=1)
+ # If we're at the last timestep, also calc the logprob of the observation.
+ if t == self.num_timesteps - 1:
+ generative_dist = self.p.generative(observation, zt)
+ log_p_x_given_z = tf.reduce_sum(generative_dist.log_prob(observation), axis=1)
+ else:
+ log_p_x_given_z = tf.zeros_like(log_q_zt)
+ return (zt, log_q_zt, log_p_zt, log_p_x_given_z, log_r_xn)
+
+ @staticmethod
+ def create(state_size,
+ num_timesteps,
+ sigma_min=1e-5,
+ r_sigma_init=1,
+ variance=1.0,
+ mixing_coeff=0.5,
+ prior_mode_mean=1.0,
+ dtype=tf.float32,
+ random_seed=None,
+ train_p=True,
+ p_type="unimodal",
+ q_type="normal",
+ observation_variance=1.0,
+ transition_type=STANDARD_TRANSITION,
+ use_bs=True):
+ if p_type == "unimodal":
+ p = P(state_size,
+ num_timesteps,
+ sigma_min=sigma_min,
+ variance=variance,
+ dtype=dtype,
+ random_seed=random_seed,
+ trainable=train_p,
+ init_bs_to_zero=not use_bs)
+ elif p_type == "bimodal":
+ p = BimodalPriorP(
+ state_size,
+ num_timesteps,
+ mixing_coeff=mixing_coeff,
+ prior_mode_mean=prior_mode_mean,
+ sigma_min=sigma_min,
+ variance=variance,
+ dtype=dtype,
+ random_seed=random_seed,
+ trainable=train_p,
+ init_bs_to_zero=not use_bs)
+ elif "nonlinear" in p_type:
+ if "cauchy" in p_type:
+ trans_dist = tf.contrib.distributions.Cauchy
+ else:
+ trans_dist = tf.contrib.distributions.Normal
+ p = ShortChainNonlinearP(
+ state_size,
+ num_timesteps,
+ sigma_min=sigma_min,
+ variance=variance,
+ observation_variance=observation_variance,
+ transition_type=transition_type,
+ transition_dist=trans_dist,
+ dtype=dtype,
+ random_seed=random_seed
+ )
+
+ if q_type == "normal":
+ q_class = Q
+ elif q_type == "simple_mean":
+ q_class = SimpleMeanQ
+ elif q_type == "prev_state":
+ q_class = PreviousStateQ
+ elif q_type == "observation":
+ q_class = ObservationQ
+
+ q = q_class(state_size,
+ num_timesteps,
+ sigma_min=sigma_min,
+ dtype=dtype,
+ random_seed=random_seed,
+ init_mu0_to_zero=not use_bs)
+ r = R(state_size,
+ num_timesteps,
+ sigma_min=sigma_min,
+ sigma_init=r_sigma_init,
+ dtype=dtype,
+ random_seed=random_seed)
+ model = Model(p, q, r, state_size, num_timesteps, dtype=dtype)
+ return model
+
+
+class BackwardsModel(object):
+
+ def __init__(self,
+ state_size,
+ num_timesteps,
+ sigma_min=1e-5,
+ dtype=tf.float32):
+ self.state_size = state_size
+ self.num_timesteps = num_timesteps
+ self.sigma_min = sigma_min
+ self.dtype = dtype
+ self.bs = [
+ tf.get_variable(
+ shape=[state_size],
+ dtype=self.dtype,
+ name="b_%d" % (t + 1),
+ initializer=tf.zeros_initializer) for t in xrange(num_timesteps)
+ ]
+ self.Bs = tf.cumsum(self.bs, reverse=True, axis=0)
+ self.q_mus = [
+ snt.Linear(output_size=state_size) for _ in xrange(num_timesteps)
+ ]
+ self.q_sigmas = [
+ tf.get_variable(
+ shape=[state_size],
+ dtype=self.dtype,
+ name="q_sigma_%d" % (t + 1),
+ initializer=tf.zeros_initializer) for t in xrange(num_timesteps)
+ ]
+ self.r_mus = [
+ tf.get_variable(
+ shape=[state_size],
+ dtype=self.dtype,
+ name="r_mu_%d" % (t + 1),
+ initializer=tf.zeros_initializer) for t in xrange(num_timesteps)
+ ]
+ self.r_sigmas = [
+ tf.get_variable(
+ shape=[state_size],
+ dtype=self.dtype,
+ name="r_sigma_%d" % (t + 1),
+ initializer=tf.zeros_initializer) for t in xrange(num_timesteps)
+ ]
+
+ def zero_state(self, batch_size):
+ return tf.zeros([batch_size, self.state_size], dtype=self.dtype)
+
+ def posterior(self, unused_observation, prev_state, unused_t):
+ # TODO(dieterichl): Correct this.
+ return tf.contrib.distributions.Normal(
+ loc=tf.zeros_like(prev_state), scale=tf.zeros_like(prev_state))
+
+ def lookahead(self, state, unused_t):
+ # TODO(dieterichl): Correct this.
+ return tf.contrib.distributions.Normal(
+ loc=tf.zeros_like(state), scale=tf.zeros_like(state))
+
+ def q_zt(self, observation, next_state, t):
+ """Computes the variational posterior q(z_{t}|z_{t+1}, z_n)."""
+ t_backwards = self.num_timesteps - t - 1
+ batch_size = tf.shape(next_state)[0]
+ q_mu = self.q_mus[t_backwards](tf.concat([observation, next_state], axis=1))
+ q_sigma = tf.maximum(
+ tf.nn.softplus(self.q_sigmas[t_backwards]), self.sigma_min)
+ q_sigma = tf.tile(q_sigma[tf.newaxis, :], [batch_size, 1])
+ q_zt = tf.contrib.distributions.Normal(loc=q_mu, scale=tf.sqrt(q_sigma))
+ return q_zt
+
+ def p_zt(self, prev_state, t):
+ """Computes the model p(z_{t+1}| z_{t})."""
+ t_backwards = self.num_timesteps - t - 1
+ z_mu_p = prev_state + self.bs[t_backwards]
+ p_zt = tf.contrib.distributions.Normal(
+ loc=z_mu_p, scale=tf.ones_like(z_mu_p))
+ return p_zt
+
+ def generative(self, unused_observation, z_nm1):
+ """Computes the model's generative distribution p(z_n| z_{n-1})."""
+ generative_p_mu = z_nm1 + self.bs[-1]
+ return tf.contrib.distributions.Normal(
+ loc=generative_p_mu, scale=tf.ones_like(generative_p_mu))
+
+ def r(self, z_t, t):
+ t_backwards = self.num_timesteps - t - 1
+ batch_size = tf.shape(z_t)[0]
+ r_mu = tf.tile(self.r_mus[t_backwards][tf.newaxis, :], [batch_size, 1])
+ r_sigma = tf.maximum(
+ tf.nn.softplus(self.r_sigmas[t_backwards]), self.sigma_min)
+ r_sigma = tf.tile(r_sigma[tf.newaxis, :], [batch_size, 1])
+ return tf.contrib.distributions.Normal(loc=r_mu, scale=tf.sqrt(r_sigma))
+
+ def likelihood(self, observation):
+ batch_size = tf.shape(observation)[0]
+ mu = tf.tile(tf.reduce_sum(self.bs, axis=0)[tf.newaxis, :], [batch_size, 1])
+ sigma = tf.ones_like(mu) * (self.num_timesteps + 1)
+ dist = tf.contrib.distributions.Normal(loc=mu, scale=tf.sqrt(sigma))
+ # Average over the batch and take the sum over the state size
+ return tf.reduce_mean(tf.reduce_sum(dist.log_prob(observation), axis=1))
+
+ def __call__(self, next_state, observation, t):
+ # next state = z_{t+1}
+ # Compute the q distribution over z, q(z_{t}|z_n, z_{t+1}).
+ q_zt = self.q_zt(observation, next_state, t)
+ # sample from q
+ zt = q_zt.sample()
+ # Compute the p distribution over z, p(z_{t+1}|z_{t}).
+ p_zt = self.p_zt(zt, t)
+ # Compute log p(z_{t+1} | z_t)
+ if t == 0:
+ log_p_zt = p_zt.log_prob(observation)
+ else:
+ log_p_zt = p_zt.log_prob(next_state)
+
+ # Compute r prior over zt
+ r_zt = self.r(zt, t)
+ log_r_zt = r_zt.log_prob(zt)
+ # Compute proposal density at zt
+ log_q_zt = q_zt.log_prob(zt)
+ # If we're at the last timestep, also calc the logprob of the observation.
+
+ if t == self.num_timesteps - 1:
+ p_z0_dist = tf.contrib.distributions.Normal(
+ loc=tf.zeros_like(zt), scale=tf.ones_like(zt))
+ z0_log_prob = p_z0_dist.log_prob(zt)
+ else:
+ z0_log_prob = tf.zeros_like(log_q_zt)
+ return (zt, log_q_zt, log_p_zt, z0_log_prob, log_r_zt)
+
+
+class LongChainP(object):
+
+ def __init__(self,
+ state_size,
+ num_obs,
+ steps_per_obs,
+ sigma_min=1e-5,
+ variance=1.0,
+ observation_variance=1.0,
+ observation_type=STANDARD_OBSERVATION,
+ transition_type=STANDARD_TRANSITION,
+ dtype=tf.float32,
+ random_seed=None):
+ self.state_size = state_size
+ self.steps_per_obs = steps_per_obs
+ self.num_obs = num_obs
+ self.num_timesteps = steps_per_obs*num_obs + 1
+ self.sigma_min = sigma_min
+ self.dtype = dtype
+ self.variance = variance
+ self.observation_variance = observation_variance
+ self.observation_type = observation_type
+ self.transition_type = transition_type
+
+ def likelihood(self, observations):
+ """Computes the model's true likelihood of the observations.
+
+ Args:
+ observations: A [batch_size, m, state_size] Tensor representing each of
+ the m observations.
+ Returns:
+ logprob: The true likelihood of the observations given the model.
+ """
+ raise ValueError("Likelihood is not defined for long-chain models")
+ # batch_size = tf.shape(observations)[0]
+ # mu = tf.zeros([batch_size, self.state_size, self.num_obs], dtype=self.dtype)
+ # sigma = np.fromfunction(
+ # lambda i, j: 1 + self.steps_per_obs*np.minimum(i+1, j+1),
+ # [self.num_obs, self.num_obs])
+ # sigma += np.eye(self.num_obs)
+ # sigma = tf.convert_to_tensor(sigma * self.variance, dtype=self.dtype)
+ # sigma = tf.tile(sigma[tf.newaxis, tf.newaxis, ...],
+ # [batch_size, self.state_size, 1, 1])
+ # dist = tf.contrib.distributions.MultivariateNormalFullCovariance(
+ # loc=mu,
+ # covariance_matrix=sigma)
+ # Average over the batch and take the sum over the state size
+ #return tf.reduce_mean(tf.reduce_sum(dist.log_prob(observations), axis=1))
+
+ def p_zt(self, prev_state, t):
+ """Computes the model p(z_t| z_{t-1})."""
+ batch_size = tf.shape(prev_state)[0]
+ if t > 0:
+ if self.transition_type == ROUND_TRANSITION:
+ loc = tf.round(prev_state)
+ tf.logging.info("p(z_%d | z_%d) ~ N(round(z_%d), %0.1f)" % (t, t-1, t-1, self.variance))
+ elif self.transition_type == STANDARD_TRANSITION:
+ loc = prev_state
+ tf.logging.info("p(z_%d | z_%d) ~ N(z_%d, %0.1f)" % (t, t-1, t-1, self.variance))
+ else: # p(z_0) is Normal(0,1)
+ loc = tf.zeros([batch_size, self.state_size], dtype=self.dtype)
+ tf.logging.info("p(z_0) ~ N(0,%0.1f)" % self.variance)
+
+ p_zt = tf.contrib.distributions.Normal(
+ loc=loc,
+ scale=tf.sqrt(tf.ones_like(loc) * self.variance))
+ return p_zt
+
+ def generative(self, z_ni, t):
+ """Computes the model's generative distribution p(x_i| z_{ni})."""
+ if self.observation_type == SQUARED_OBSERVATION:
+ generative_mu = tf.square(z_ni)
+ tf.logging.info("p(x_%d | z_%d) ~ N(z_%d^2, %0.1f)" % (t, t, t, self.variance))
+ elif self.observation_type == ABS_OBSERVATION:
+ generative_mu = tf.abs(z_ni)
+ tf.logging.info("p(x_%d | z_%d) ~ N(|z_%d|, %0.1f)" % (t, t, t, self.variance))
+ elif self.observation_type == STANDARD_OBSERVATION:
+ generative_mu = z_ni
+ tf.logging.info("p(x_%d | z_%d) ~ N(z_%d, %0.1f)" % (t, t, t, self.variance))
+ generative_sigma_sq = tf.ones_like(generative_mu) * self.observation_variance
+ return tf.contrib.distributions.Normal(
+ loc=generative_mu, scale=tf.sqrt(generative_sigma_sq))
+
+
+class LongChainQ(object):
+
+ def __init__(self,
+ state_size,
+ num_obs,
+ steps_per_obs,
+ sigma_min=1e-5,
+ dtype=tf.float32,
+ random_seed=None):
+ self.state_size = state_size
+ self.sigma_min = sigma_min
+ self.dtype = dtype
+ self.steps_per_obs = steps_per_obs
+ self.num_obs = num_obs
+ self.num_timesteps = num_obs * steps_per_obs + 1
+
+ initializers = {
+ "w": tf.random_uniform_initializer(seed=random_seed),
+ "b": tf.zeros_initializer
+ }
+ self.mus = [
+ snt.Linear(output_size=state_size, initializers=initializers)
+ for t in xrange(self.num_timesteps)
+ ]
+ self.sigmas = [
+ tf.get_variable(
+ shape=[state_size],
+ dtype=self.dtype,
+ name="q_sigma_%d" % (t + 1),
+ initializer=tf.random_uniform_initializer(seed=random_seed))
+ for t in xrange(self.num_timesteps)
+ ]
+
+ def first_relevant_obs_index(self, t):
+ return int(max((t-1)/self.steps_per_obs, 0))
+
+ def q_zt(self, observations, prev_state, t):
+ """Computes a distribution over z_t.
+
+ Args:
+ observations: a [batch_size, num_observations, state_size] Tensor.
+ prev_state: a [batch_size, state_size] Tensor.
+ t: The current timestep, an int Tensor.
+ """
+ # filter out unneeded past obs
+ first_relevant_obs_index = int(math.floor(max(t-1, 0) / self.steps_per_obs))
+ num_relevant_observations = self.num_obs - first_relevant_obs_index
+ observations = observations[:,first_relevant_obs_index:,:]
+ batch_size = tf.shape(prev_state)[0]
+ # concatenate the prev state and observations along the second axis (that is
+ # not the batch or state size axis, and then flatten it to
+ # [batch_size, (num_relevant_observations + 1) * state_size] to feed it into
+ # the linear layer.
+ q_input = tf.concat([observations, prev_state[:,tf.newaxis, :]], axis=1)
+ q_input = tf.reshape(q_input,
+ [batch_size, (num_relevant_observations + 1) * self.state_size])
+ q_mu = self.mus[t](q_input)
+ q_sigma = tf.maximum(tf.nn.softplus(self.sigmas[t]), self.sigma_min)
+ q_sigma = tf.tile(q_sigma[tf.newaxis, :], [batch_size, 1])
+ q_zt = tf.contrib.distributions.Normal(loc=q_mu, scale=tf.sqrt(q_sigma))
+ tf.logging.info(
+ "q(z_{t} | z_{tm1}, x_{obsf}:{obst}) ~ N(Linear([z_{tm1},x_{obsf}:{obst}]), sigma_{t})".format(
+ **{"t": t,
+ "tm1": t-1,
+ "obsf": (first_relevant_obs_index+1)*self.steps_per_obs,
+ "obst":self.steps_per_obs*self.num_obs}))
+ return q_zt
+
+ def summarize_weights(self):
+ pass
+
+class LongChainR(object):
+
+ def __init__(self,
+ state_size,
+ num_obs,
+ steps_per_obs,
+ sigma_min=1e-5,
+ dtype=tf.float32,
+ random_seed=None):
+ self.state_size = state_size
+ self.dtype = dtype
+ self.sigma_min = sigma_min
+ self.steps_per_obs = steps_per_obs
+ self.num_obs = num_obs
+ self.num_timesteps = num_obs*steps_per_obs + 1
+ self.sigmas = [
+ tf.get_variable(
+ shape=[self.num_future_obs(t)],
+ dtype=self.dtype,
+ name="r_sigma_%d" % (t + 1),
+ #initializer=tf.random_uniform_initializer(seed=random_seed, maxval=100))
+ initializer=tf.constant_initializer(1.0))
+ for t in range(self.num_timesteps)
+ ]
+
+ def first_future_obs_index(self, t):
+ return int(math.floor(t / self.steps_per_obs))
+
+ def num_future_obs(self, t):
+ return int(self.num_obs - self.first_future_obs_index(t))
+
+ def r_xn(self, z_t, t):
+ """Computes a distribution over the future observations given current latent
+ state.
+
+ The indexing in these messages is 1 indexed and inclusive. This is
+ consistent with the latex documents.
+
+ Args:
+ z_t: [batch_size, state_size] Tensor
+ t: Current timestep
+ """
+ tf.logging.info(
+ "r(x_{start}:{end} | z_{t}) ~ N(z_{t}, sigma_{t})".format(
+ **{"t": t,
+ "start": (self.first_future_obs_index(t)+1)*self.steps_per_obs,
+ "end": self.num_timesteps-1}))
+ batch_size = tf.shape(z_t)[0]
+ # the mean for all future observations is the same.
+ # this tiling results in a [batch_size, num_future_obs, state_size] Tensor
+ r_mu = tf.tile(z_t[:,tf.newaxis,:], [1, self.num_future_obs(t), 1])
+ # compute the variance
+ r_sigma = tf.maximum(tf.nn.softplus(self.sigmas[t]), self.sigma_min)
+ # the variance is the same across all state dimensions, so we only need one
+ # sigma per future observation; tile it to [batch_size, num_future_obs, state_size].
+ r_sigma = tf.tile(r_sigma[tf.newaxis, :, tf.newaxis], [batch_size, 1, self.state_size])
+ return tf.contrib.distributions.Normal(
+ loc=r_mu, scale=tf.sqrt(r_sigma))
+
+ def summarize_weights(self):
+ pass
+
+
+class LongChainModel(object):
+
+ def __init__(self,
+ p,
+ q,
+ r,
+ state_size,
+ num_obs,
+ steps_per_obs,
+ dtype=tf.float32,
+ disable_r=False):
+ self.p = p
+ self.q = q
+ self.r = r
+ self.disable_r = disable_r
+ self.state_size = state_size
+ self.num_obs = num_obs
+ self.steps_per_obs = steps_per_obs
+ self.num_timesteps = steps_per_obs*num_obs + 1
+ self.dtype = dtype
+
+ def zero_state(self, batch_size):
+ return tf.zeros([batch_size, self.state_size], dtype=self.dtype)
+
+ def next_obs_ind(self, t):
+ return int(math.floor(max(t-1,0)/self.steps_per_obs))
+
+ def __call__(self, prev_state, observations, t):
+ """Computes the importance weight for the model system.
+
+ Args:
+ prev_state: [batch_size, state_size] Tensor
+ observations: [batch_size, num_observations, state_size] Tensor
+ """
+ # Compute the q distribution over z, q(z_t|z_n, z_{t-1}).
+ q_zt = self.q.q_zt(observations, prev_state, t)
+ # Compute the p distribution over z, p(z_t|z_{t-1}).
+ p_zt = self.p.p_zt(prev_state, t)
+ # sample from q and evaluate the logprobs, summing over the state size
+ zt = q_zt.sample()
+ log_q_zt = tf.reduce_sum(q_zt.log_prob(zt), axis=1)
+ log_p_zt = tf.reduce_sum(p_zt.log_prob(zt), axis=1)
+ if not self.disable_r and t < self.num_timesteps-1:
+ # score the remaining observations using r
+ r_xn = self.r.r_xn(zt, t)
+ log_r_xn = r_xn.log_prob(observations[:, self.next_obs_ind(t+1):, :])
+ # sum over state size and observation, leaving the batch index
+ log_r_xn = tf.reduce_sum(log_r_xn, axis=[1,2])
+ else:
+ log_r_xn = tf.zeros_like(log_p_zt)
+ if t != 0 and t % self.steps_per_obs == 0:
+ generative_dist = self.p.generative(zt, t)
+ log_p_x_given_z = generative_dist.log_prob(observations[:,self.next_obs_ind(t),:])
+ log_p_x_given_z = tf.reduce_sum(log_p_x_given_z, axis=1)
+ else:
+ log_p_x_given_z = tf.zeros_like(log_q_zt)
+ return (zt, log_q_zt, log_p_zt, log_p_x_given_z, log_r_xn)
+
+ @staticmethod
+ def create(state_size,
+ num_obs,
+ steps_per_obs,
+ sigma_min=1e-5,
+ variance=1.0,
+ observation_variance=1.0,
+ observation_type=STANDARD_OBSERVATION,
+ transition_type=STANDARD_TRANSITION,
+ dtype=tf.float32,
+ random_seed=None,
+ disable_r=False):
+ p = LongChainP(
+ state_size,
+ num_obs,
+ steps_per_obs,
+ sigma_min=sigma_min,
+ variance=variance,
+ observation_variance=observation_variance,
+ observation_type=observation_type,
+ transition_type=transition_type,
+ dtype=dtype,
+ random_seed=random_seed)
+ q = LongChainQ(
+ state_size,
+ num_obs,
+ steps_per_obs,
+ sigma_min=sigma_min,
+ dtype=dtype,
+ random_seed=random_seed)
+ r = LongChainR(
+ state_size,
+ num_obs,
+ steps_per_obs,
+ sigma_min=sigma_min,
+ dtype=dtype,
+ random_seed=random_seed)
+ model = LongChainModel(
+ p, q, r, state_size, num_obs, steps_per_obs,
+ dtype=dtype,
+ disable_r=disable_r)
+ return model
+
+
+class RTilde(object):
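+ """Learned per-timestep approximation r_tilde used by the FIVO-AUX TD bound.
+
+ Each timestep has a linear layer that maps the current state and observation
+ to the mean and (softplus) variance of an unnormalized Gaussian (peak value
+ 1) scoring the next latent state; its log-density is used as log r_tilde in
+ fivo_aux_td.
+ """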
+
+ def __init__(self,
+ state_size,
+ num_timesteps,
+ sigma_min=1e-5,
+ dtype=tf.float32,
+ random_seed=None,
+ graph_collection_name="R_TILDE_VARS"):
+ self.dtype = dtype
+ self.sigma_min = sigma_min
+ initializers = {"w": tf.truncated_normal_initializer(seed=random_seed),
+ "b": tf.zeros_initializer}
+ self.graph_collection_name = graph_collection_name
+
+ def custom_getter(getter, *args, **kwargs):
+ out = getter(*args, **kwargs)
+ ref = tf.get_collection_ref(self.graph_collection_name)
+ if out not in ref:
+ ref.append(out)
+ return out
+
+ self.fns = [
+ snt.Linear(output_size=2*state_size,
+ initializers=initializers,
+ name="r_tilde_%d" % t,
+ custom_getter=custom_getter)
+ for t in xrange(num_timesteps)
+ ]
+
+ def r_zt(self, z_t, observation, t):
+ #out = self.fns[t](tf.stop_gradient(tf.concat([z_t, observation], axis=1)))
+ out = self.fns[t](tf.concat([z_t, observation], axis=1))
+ mu, raw_sigma_sq = tf.split(out, 2, axis=1)
+ sigma_sq = tf.maximum(tf.nn.softplus(raw_sigma_sq), self.sigma_min)
+ return mu, sigma_sq
+
+class TDModel(object):
+
+ def __init__(self,
+ p,
+ q,
+ r_tilde,
+ state_size,
+ num_timesteps,
+ dtype=tf.float32,
+ disable_r=False):
+ self.p = p
+ self.q = q
+ self.r_tilde = r_tilde
+ self.disable_r = disable_r
+ self.state_size = state_size
+ self.num_timesteps = num_timesteps
+ self.dtype = dtype
+
+ def zero_state(self, batch_size):
+ return tf.zeros([batch_size, self.state_size], dtype=self.dtype)
+
+ def __call__(self, prev_state, observation, t):
+ """Computes the importance weight for the model system.
+
+ Args:
+ prev_state: [batch_size, state_size] Tensor
+ observation: [batch_size, state_size] Tensor
+ t: The current timestep, a Python int.
+ """
+ # Compute the q distribution over z, q(z_t|z_n, z_{t-1}).
+ q_zt = self.q.q_zt(observation, prev_state, t)
+ # Compute the p distribution over z, p(z_t|z_{t-1}).
+ p_zt = self.p.p_zt(prev_state, t)
+ # sample from q and evaluate the logprobs, summing over the state size
+ zt = q_zt.sample()
+ # If it isn't the last timestep, compute the distribution over the next z.
+ if t < self.num_timesteps - 1:
+ p_ztplus1 = self.p.p_zt(zt, t+1)
+ else:
+ p_ztplus1 = None
+ log_q_zt = tf.reduce_sum(q_zt.log_prob(zt), axis=1)
+ log_p_zt = tf.reduce_sum(p_zt.log_prob(zt), axis=1)
+
+ if not self.disable_r and t < self.num_timesteps-1:
+ # compute the parameters of r_tilde for the next timestep, evaluated at zt
+ r_tilde_mu, r_tilde_sigma_sq = self.r_tilde.r_zt(zt, observation, t+1)
+ else:
+ r_tilde_mu = None
+ r_tilde_sigma_sq = None
+ if t == self.num_timesteps - 1:
+ generative_dist = self.p.generative(observation, zt)
+ log_p_x_given_z = tf.reduce_sum(generative_dist.log_prob(observation), axis=1)
+ else:
+ log_p_x_given_z = tf.zeros_like(log_q_zt)
+ return (zt, log_q_zt, log_p_zt, log_p_x_given_z,
+ r_tilde_mu, r_tilde_sigma_sq, p_ztplus1)
+
+ @staticmethod
+ def create(state_size,
+ num_timesteps,
+ sigma_min=1e-5,
+ variance=1.0,
+ dtype=tf.float32,
+ random_seed=None,
+ train_p=True,
+ p_type="unimodal",
+ q_type="normal",
+ mixing_coeff=0.5,
+ prior_mode_mean=1.0,
+ observation_variance=1.0,
+ transition_type=STANDARD_TRANSITION,
+ use_bs=True):
+ if p_type == "unimodal":
+ p = P(state_size,
+ num_timesteps,
+ sigma_min=sigma_min,
+ variance=variance,
+ dtype=dtype,
+ random_seed=random_seed,
+ trainable=train_p,
+ init_bs_to_zero=not use_bs)
+ elif p_type == "bimodal":
+ p = BimodalPriorP(
+ state_size,
+ num_timesteps,
+ mixing_coeff=mixing_coeff,
+ prior_mode_mean=prior_mode_mean,
+ sigma_min=sigma_min,
+ variance=variance,
+ dtype=dtype,
+ random_seed=random_seed,
+ trainable=train_p,
+ init_bs_to_zero=not use_bs)
+ elif "nonlinear" in p_type:
+ if "cauchy" in p_type:
+ trans_dist = tf.contrib.distributions.Cauchy
+ else:
+ trans_dist = tf.contrib.distributions.Normal
+
+ p = ShortChainNonlinearP(
+ state_size,
+ num_timesteps,
+ sigma_min=sigma_min,
+ variance=variance,
+ observation_variance=observation_variance,
+ transition_type=transition_type,
+ transition_dist=trans_dist,
+ dtype=dtype,
+ random_seed=random_seed
+ )
+
+ if q_type == "normal":
+ q_class = Q
+ elif q_type == "simple_mean":
+ q_class = SimpleMeanQ
+ elif q_type == "prev_state":
+ q_class = PreviousStateQ
+ elif q_type == "observation":
+ q_class = ObservationQ
+
+ q = q_class(state_size,
+ num_timesteps,
+ sigma_min=sigma_min,
+ dtype=dtype,
+ random_seed=random_seed,
+ init_mu0_to_zero=not use_bs)
+ r_tilde = RTilde(
+ state_size,
+ num_timesteps,
+ sigma_min=sigma_min,
+ dtype=dtype,
+ random_seed=random_seed)
+ model = TDModel(p, q, r_tilde, state_size, num_timesteps, dtype=dtype)
+ return model
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/experimental/run.sh" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/experimental/run.sh"
new file mode 100644
index 00000000..c650f636
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/experimental/run.sh"
@@ -0,0 +1,54 @@
+#!/bin/bash
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+
+model="forward"
+T=5
+num_obs=1
+var=0.1
+n=4
+lr=0.0001
+bound="fivo-aux"
+q_type="normal"
+resampling_method="multinomial"
+rgrad="true"
+p_type="unimodal"
+use_bs=false
+
+LOGDIR=/tmp/fivo/model-$model-$bound-$resampling_method-resampling-rgrad-$rgrad-T-$T-var-$var-n-$n-lr-$lr-q-$q_type-p-$p_type
+
+python train.py \
+ --logdir=$LOGDIR \
+ --model=$model \
+ --bound=$bound \
+ --q_type=$q_type \
+ --p_type=$p_type \
+ --variance=$var \
+ --use_resampling_grads=$rgrad \
+ --resampling=always \
+ --resampling_method=$resampling_method \
+ --batch_size=4 \
+ --num_samples=$n \
+ --num_timesteps=$T \
+ --num_eval_samples=256 \
+ --summarize_every=100 \
+ --learning_rate=$lr \
+ --decay_steps=1000000 \
+ --max_steps=1000000000 \
+ --random_seed=1234 \
+ --train_p=false \
+ --use_bs=$use_bs \
+ --alsologtostderr
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/experimental/summary_utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/experimental/summary_utils.py"
new file mode 100644
index 00000000..04e4aeea
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/experimental/summary_utils.py"
@@ -0,0 +1,332 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Utils for plotting and summarizing.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import matplotlib.gridspec as gridspec
+import matplotlib.pyplot as plt
+import numpy as np
+import scipy.stats
+
+import tensorflow as tf
+
+import models
+
+
+def summarize_ess(weights, only_last_timestep=False):
+ """Plots the effective sample size.
+
+ Args:
+ weights: List of length num_timesteps Tensors of shape
+ [num_samples, batch_size]
+ only_last_timestep: If True, only summarize the ESS at the final timestep.
+ """
+ num_timesteps = len(weights)
+ batch_size = tf.cast(tf.shape(weights[0])[1], dtype=tf.float64)
+ for i in range(num_timesteps):
+ if only_last_timestep and i < num_timesteps-1: continue
+
+ w = tf.nn.softmax(weights[i], dim=0)
+ centered_weights = w - tf.reduce_mean(w, axis=0, keepdims=True)
+ variance = tf.reduce_sum(tf.square(centered_weights))/(batch_size-1)
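+ # Effective sample size of the normalized weights: ESS = 1 / sum_i w_i^2,
+ # computed here with the sum of squared weights averaged over the batch
+ # before taking the reciprocal.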
+ ess = 1./tf.reduce_mean(tf.reduce_sum(tf.square(w), axis=0))
+ tf.summary.scalar("ess/%d" % i, ess)
+ tf.summary.scalar("ese/%d" % i, ess / batch_size)
+ tf.summary.scalar("weight_variance/%d" % i, variance)
+
+
+def summarize_particles(states, weights, observation, model):
+ """Plots particle locations and weights.
+
+ Args:
+ states: List of length num_timesteps Tensors of shape
+ [batch_size*num_particles, state_size].
+ weights: List of length num_timesteps Tensors of shape [num_samples,
+ batch_size]
+ observation: Tensor of shape [batch_size*num_samples, state_size]
+ model: The model object; its proposal q and prior parameters are used for
+ plotting.
+ """
+ num_timesteps = len(weights)
+ num_samples, batch_size = weights[0].get_shape().as_list()
+ # get q0 information for plotting
+ q0_dist = model.q.q_zt(observation, tf.zeros_like(states[0]), 0)
+ q0_loc = q0_dist.loc[0:batch_size, 0]
+ q0_scale = q0_dist.scale[0:batch_size, 0]
+ # get posterior information for plotting
+ post = (model.p.mixing_coeff, model.p.prior_mode_mean, model.p.variance,
+ tf.reduce_sum(model.p.bs), model.p.num_timesteps)
+
+ # Reshape states and weights to be [time, num_samples, batch_size]
+ states = tf.stack(states)
+ weights = tf.stack(weights)
+ # normalize the weights over the sample dimension
+ weights = tf.nn.softmax(weights, dim=1)
+ states = tf.reshape(states, tf.shape(weights))
+
+ ess = 1./tf.reduce_sum(tf.square(weights), axis=1)
+
+ def _plot_states(states_batch, weights_batch, observation_batch, ess_batch, q0, post):
+ """
+ states: [time, num_samples, batch_size]
+ weights [time, num_samples, batch_size]
+ observation: [batch_size, 1]
+ q0: ([batch_size], [batch_size])
+ post: ...
+ """
+ num_timesteps, _, batch_size = states_batch.shape
+ plots = []
+ for i in range(batch_size):
+ states = states_batch[:,:,i]
+ weights = weights_batch[:,:,i]
+ observation = observation_batch[i]
+ ess = ess_batch[:,i]
+ q0_loc = q0[0][i]
+ q0_scale = q0[1][i]
+
+ fig = plt.figure(figsize=(7, (num_timesteps + 1) * 2))
+ # Each timestep gets two plots -- a bar plot and a histogram of state locs.
+ # The bar plot will be bar_rows rows tall.
+ # The histogram will be 1 row tall.
+ # There is also 1 extra plot at the top showing the posterior and q.
+ bar_rows = 8
+ num_rows = (num_timesteps + 1) * (bar_rows + 1)
+ gs = gridspec.GridSpec(num_rows, 1)
+
+ # Figure out how wide to make the plot
+ prior_lims = (post[1] * -2, post[1] * 2)
+ q_lims = (scipy.stats.norm.ppf(0.01, loc=q0_loc, scale=q0_scale),
+ scipy.stats.norm.ppf(0.99, loc=q0_loc, scale=q0_scale))
+ state_width = states.max() - states.min()
+ state_lims = (states.min() - state_width * 0.15,
+ states.max() + state_width * 0.15)
+
+ lims = (min(prior_lims[0], q_lims[0], state_lims[0]),
+ max(prior_lims[1], q_lims[1], state_lims[1]))
+ # plot the posterior
+ z0 = np.arange(lims[0], lims[1], 0.1)
+ alpha, pos_mu, sigma_sq, B, T = post
+ neg_mu = -pos_mu
+ scale = np.sqrt((T + 1) * sigma_sq)
+ p_zn = (
+ alpha * scipy.stats.norm.pdf(
+ observation, loc=pos_mu + B, scale=scale) + (1 - alpha) *
+ scipy.stats.norm.pdf(observation, loc=neg_mu + B, scale=scale))
+ p_z0 = (
+ alpha * scipy.stats.norm.pdf(z0, loc=pos_mu, scale=np.sqrt(sigma_sq))
+ + (1 - alpha) * scipy.stats.norm.pdf(
+ z0, loc=neg_mu, scale=np.sqrt(sigma_sq)))
+ p_zn_given_z0 = scipy.stats.norm.pdf(
+ observation, loc=z0 + B, scale=np.sqrt(T * sigma_sq))
+ post_z0 = (p_z0 * p_zn_given_z0) / p_zn
+ # plot q
+ q_z0 = scipy.stats.norm.pdf(z0, loc=q0_loc, scale=q0_scale)
+ ax = plt.subplot(gs[0:bar_rows, :])
+ ax.plot(z0, q_z0, color="blue")
+ ax.plot(z0, post_z0, color="green")
+ ax.plot(z0, p_z0, color="red")
+ ax.legend(("q", "posterior", "prior"), loc="best", prop={"size": 10})
+
+ ax.set_xticks([])
+ ax.set_xlim(*lims)
+
+ # plot the states
+ for t in range(num_timesteps):
+ start = (t + 1) * (bar_rows + 1)
+ ax1 = plt.subplot(gs[start:start + bar_rows, :])
+ ax2 = plt.subplot(gs[start + bar_rows:start + bar_rows + 1, :])
+ # plot the states barplot
+ # ax1.hist(
+ # states[t, :],
+ # weights=weights[t, :],
+ # bins=50,
+ # edgecolor="none",
+ # alpha=0.2)
+ ax1.bar(states[t, :], weights[t, :], width=0.02, alpha=0.2, edgecolor="none")
+ ax1.set_ylabel("t=%d" % t)
+ ax1.set_xticks([])
+ ax1.grid(True, which="both")
+ ax1.set_xlim(*lims)
+ # plot the observation
+ ax1.axvline(x=observation, color="red", linestyle="dashed")
+ # add the ESS
+ ax1.text(0.1, 0.9, "ESS: %0.2f" % ess[t],
+ ha='center', va='center', transform=ax1.transAxes)
+
+ # plot the state location histogram
+ ax2.hist2d(
+ states[t, :], np.zeros_like(states[t, :]), bins=[50, 1], cmap="Greys")
+ ax2.grid(False)
+ ax2.set_yticks([])
+ ax2.set_xlim(*lims)
+ if t != num_timesteps - 1:
+ ax2.set_xticks([])
+
+ fig.canvas.draw()
+ p = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep="")
+ plots.append(p.reshape(fig.canvas.get_width_height()[::-1] + (3,)))
+ plt.close(fig)
+ return np.stack(plots)
+
+ plots = tf.py_func(_plot_states,
+ [states, weights, observation, ess, (q0_loc, q0_scale), post],
+ [tf.uint8])[0]
+ tf.summary.image("states", plots, 5, collections=["infrequent_summaries"])
+
+
+def plot_weights(weights, resampled=None):
+ """Plots the weights and effective sample size from an SMC rollout.
+
+ Args:
+ weights: [num_timesteps, num_samples, batch_size] importance weights
+ resampled: [num_timesteps] 0/1 indicating if resampling occurred.
+ """
+ weights = tf.convert_to_tensor(weights)
+
+ def _make_plots(weights, resampled):
+ num_timesteps, num_samples, batch_size = weights.shape
+ plots = []
+ for i in range(batch_size):
+ fig, axes = plt.subplots(nrows=1, sharex=True, figsize=(8, 4))
+ axes.stackplot(np.arange(num_timesteps), np.transpose(weights[:, :, i]))
+ axes.set_title("Weights")
+ axes.set_xlabel("Steps")
+ axes.set_ylim([0, 1])
+ axes.set_xlim([0, num_timesteps - 1])
+ for j in np.where(resampled > 0)[0]:
+ axes.axvline(x=j, color="red", linestyle="dashed", ymin=0.0, ymax=1.0)
+ fig.canvas.draw()
+ data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep="")
+ data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
+ plots.append(data)
+ plt.close(fig)
+ return np.stack(plots, axis=0)
+
+ if resampled is None:
+ num_timesteps, _, batch_size = weights.get_shape().as_list()
+ resampled = tf.zeros([num_timesteps], dtype=tf.float32)
+ plots = tf.py_func(_make_plots,
+ [tf.nn.softmax(weights, dim=1),
+ tf.to_float(resampled)], [tf.uint8])[0]
+ batch_size = weights.get_shape().as_list()[-1]
+ tf.summary.image(
+ "weights", plots, batch_size, collections=["infrequent_summaries"])
+
+
+def summarize_weights(weights, num_timesteps, num_samples):
+ # weights is [num_timesteps, num_samples, batch_size]
+ weights = tf.convert_to_tensor(weights)
+ mean = tf.reduce_mean(weights, axis=1, keepdims=True)
+ squared_diff = tf.square(weights - mean)
+ variances = tf.reduce_sum(squared_diff, axis=1) / (num_samples - 1)
+ # average the variance over the batch
+ variances = tf.reduce_mean(variances, axis=1)
+ avg_magnitude = tf.reduce_mean(tf.abs(weights), axis=[1, 2])
+ for t in xrange(num_timesteps):
+ tf.summary.scalar("weights/variance_%d" % t, variances[t])
+ tf.summary.scalar("weights/magnitude_%d" % t, avg_magnitude[t])
+ tf.summary.histogram("weights/step_%d" % t, weights[t])
+
+
+def summarize_learning_signal(rewards, tag):
+ num_resampling_events, _ = rewards.get_shape().as_list()
+ mean = tf.reduce_mean(rewards, axis=1)
+ avg_magnitude = tf.reduce_mean(tf.abs(rewards), axis=1)
+ reward_square = tf.reduce_mean(tf.square(rewards), axis=1)
+ for t in xrange(num_resampling_events):
+ tf.summary.scalar("%s/mean_%d" % (tag, t), mean[t])
+ tf.summary.scalar("%s/magnitude_%d" % (tag, t), avg_magnitude[t])
+ tf.summary.scalar("%s/squared_%d" % (tag, t), reward_square[t])
+ tf.summary.histogram("%s/step_%d" % (tag, t), rewards[t])
+
+
+def summarize_qs(model, observation, states):
+ model.q.summarize_weights()
+ if hasattr(model.p, "posterior") and callable(getattr(model.p, "posterior")):
+ states = [tf.zeros_like(states[0])] + states[:-1]
+ for t, prev_state in enumerate(states):
+ p = model.p.posterior(observation, prev_state, t)
+ q = model.q.q_zt(observation, prev_state, t)
+ kl = tf.reduce_mean(tf.contrib.distributions.kl_divergence(p, q))
+ tf.summary.scalar("kl_q/%d" % t, tf.reduce_mean(kl))
+ mean_diff = q.loc - p.loc
+ mean_abs_err = tf.abs(mean_diff)
+ mean_rel_err = tf.abs(mean_diff / p.loc)
+ tf.summary.scalar("q_mean_convergence/absolute_error_%d" % t,
+ tf.reduce_mean(mean_abs_err))
+ tf.summary.scalar("q_mean_convergence/relative_error_%d" % t,
+ tf.reduce_mean(mean_rel_err))
+ sigma_diff = tf.square(q.scale) - tf.square(p.scale)
+ sigma_abs_err = tf.abs(sigma_diff)
+ sigma_rel_err = tf.abs(sigma_diff / tf.square(p.scale))
+ tf.summary.scalar("q_variance_convergence/absolute_error_%d" % t,
+ tf.reduce_mean(sigma_abs_err))
+ tf.summary.scalar("q_variance_convergence/relative_error_%d" % t,
+ tf.reduce_mean(sigma_rel_err))
+
+
+def summarize_rs(model, states):
+ model.r.summarize_weights()
+ for t, state in enumerate(states):
+ true_r = model.p.lookahead(state, t)
+ r = model.r.r_xn(state, t)
+ kl = tf.reduce_mean(tf.contrib.distributions.kl_divergence(true_r, r))
+ tf.summary.scalar("kl_r/%d" % t, tf.reduce_mean(kl))
+ mean_diff = true_r.loc - r.loc
+ mean_abs_err = tf.abs(mean_diff)
+ mean_rel_err = tf.abs(mean_diff / true_r.loc)
+ tf.summary.scalar("r_mean_convergence/absolute_error_%d" % t,
+ tf.reduce_mean(mean_abs_err))
+ tf.summary.scalar("r_mean_convergence/relative_error_%d" % t,
+ tf.reduce_mean(mean_rel_err))
+ sigma_diff = tf.square(r.scale) - tf.square(true_r.scale)
+ sigma_abs_err = tf.abs(sigma_diff)
+ sigma_rel_err = tf.abs(sigma_diff / tf.square(true_r.scale))
+ tf.summary.scalar("r_variance_convergence/absolute_error_%d" % t,
+ tf.reduce_mean(sigma_abs_err))
+ tf.summary.scalar("r_variance_convergence/relative_error_%d" % t,
+ tf.reduce_mean(sigma_rel_err))
+
+
+def summarize_model(model, true_bs, observation, states, bound, summarize_r=True):
+ if hasattr(model.p, "bs"):
+ model_b = tf.reduce_sum(model.p.bs, axis=0)
+ true_b = tf.reduce_sum(true_bs, axis=0)
+ abs_err = tf.abs(model_b - true_b)
+ rel_err = abs_err / true_b
+ tf.summary.scalar("sum_of_bs/data_generating_process", tf.reduce_mean(true_b))
+ tf.summary.scalar("sum_of_bs/model", tf.reduce_mean(model_b))
+ tf.summary.scalar("sum_of_bs/absolute_error", tf.reduce_mean(abs_err))
+ tf.summary.scalar("sum_of_bs/relative_error", tf.reduce_mean(rel_err))
+ #summarize_qs(model, observation, states)
+ #if bound == "fivo-aux" and summarize_r:
+ # summarize_rs(model, states)
+
+
+def summarize_grads(grads, loss_name):
+ grad_ema = tf.train.ExponentialMovingAverage(decay=0.99)
+ vectorized_grads = tf.concat(
+ [tf.reshape(g, [-1]) for g, _ in grads if g is not None], axis=0)
+ new_second_moments = tf.square(vectorized_grads)
+ new_first_moments = vectorized_grads
+ maintain_grad_ema_op = grad_ema.apply([new_first_moments, new_second_moments])
+ first_moments = grad_ema.average(new_first_moments)
+ second_moments = grad_ema.average(new_second_moments)
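+ # The per-parameter gradient variance is estimated from the exponential
+ # moving averages of the moments as Var[g] ~= E[g^2] - (E[g])^2.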
+ variances = second_moments - tf.square(first_moments)
+ tf.summary.scalar("grad_variance/%s" % loss_name, tf.reduce_mean(variances))
+ tf.summary.histogram("grad_variance/%s" % loss_name, variances)
+ tf.summary.histogram("grad_mean/%s" % loss_name, first_moments)
+ return maintain_grad_ema_op
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/experimental/train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/experimental/train.py"
new file mode 100644
index 00000000..8abc9909
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/experimental/train.py"
@@ -0,0 +1,637 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Main script for running fivo"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from collections import defaultdict
+
+import numpy as np
+import tensorflow as tf
+
+import bounds
+import data
+import models
+import summary_utils as summ
+
+tf.logging.set_verbosity(tf.logging.INFO)
+
+tf.app.flags.DEFINE_integer("random_seed", None,
+ "A random seed for the data generating process. Same seed "
+ "-> same data generating process and initialization.")
+tf.app.flags.DEFINE_enum("bound", "fivo", ["iwae", "fivo", "fivo-aux", "fivo-aux-td"],
+ "The bound to optimize.")
+tf.app.flags.DEFINE_enum("model", "forward", ["forward", "long_chain"],
+ "The model to use.")
+tf.app.flags.DEFINE_enum("q_type", "normal",
+ ["normal", "simple_mean", "prev_state", "observation"],
+ "The parameterization to use for q")
+tf.app.flags.DEFINE_enum("p_type", "unimodal", ["unimodal", "bimodal", "nonlinear"],
+ "The type of prior.")
+tf.app.flags.DEFINE_boolean("train_p", True,
+ "If false, do not train the model p.")
+
+tf.app.flags.DEFINE_integer("state_size", 1,
+ "The dimensionality of the state space.")
+tf.app.flags.DEFINE_float("variance", 1.0,
+ "The variance of the data generating process.")
+
+tf.app.flags.DEFINE_boolean("use_bs", True,
+ "If False, initialize all bs to 0.")
+tf.app.flags.DEFINE_float("bimodal_prior_weight", 0.5,
+ "The weight assigned to the positive mode of the prior in "
+ "both the data generating process and p.")
+tf.app.flags.DEFINE_float("bimodal_prior_mean", None,
+ "If supplied, sets the mean of the 2 modes of the prior to "
+ "be 1 and -1 times the supplied value. This is for both the "
+ "data generating process and p.")
+tf.app.flags.DEFINE_float("fixed_observation", None,
+ "If supplied, fix the observation to a constant value in the"
+ " data generating process only.")
+tf.app.flags.DEFINE_float("r_sigma_init", 1.,
+ "Value to initialize variance of r to.")
+tf.app.flags.DEFINE_enum("observation_type",
+ models.STANDARD_OBSERVATION, models.OBSERVATION_TYPES,
+ "The type of observation for the long chain model.")
+tf.app.flags.DEFINE_enum("transition_type",
+ models.STANDARD_TRANSITION, models.TRANSITION_TYPES,
+ "The type of transition for the long chain model.")
+tf.app.flags.DEFINE_float("observation_variance", None,
+ "The variance of the observation. Defaults to 'variance'")
+
+tf.app.flags.DEFINE_integer("num_timesteps", 5,
+ "Number of timesteps in the sequence.")
+tf.app.flags.DEFINE_integer("num_observations", 1,
+ "The number of observations.")
+tf.app.flags.DEFINE_integer("steps_per_observation", 5,
+ "The number of timesteps between each observation.")
+
+tf.app.flags.DEFINE_integer("batch_size", 4,
+ "The number of examples per batch.")
+tf.app.flags.DEFINE_integer("num_samples", 4,
+ "The number particles to use.")
+tf.app.flags.DEFINE_integer("num_eval_samples", 512,
+ "The batch size and # of particles to use for eval.")
+
+tf.app.flags.DEFINE_string("resampling", "always",
+ "How to resample. Accepts 'always','never', or a "
+ "comma-separated list of booleans like 'true,true,false'.")
+tf.app.flags.DEFINE_enum("resampling_method", "multinomial", ["multinomial",
+ "stratified",
+ "systematic",
+ "relaxed-logblend",
+ "relaxed-stateblend",
+ "relaxed-linearblend",
+ "relaxed-stateblend-st",],
+ "Type of resampling method to use.")
+tf.app.flags.DEFINE_boolean("use_resampling_grads", True,
+ "Whether or not to use resampling grads to optimize FIVO."
+ "Disabled automatically if resampling_method=relaxed.")
+tf.app.flags.DEFINE_boolean("disable_r", False,
+ "If false, r is not used for fivo-aux and is set to zeros.")
+
+tf.app.flags.DEFINE_float("learning_rate", 1e-4,
+ "The learning rate to use for ADAM or SGD.")
+tf.app.flags.DEFINE_integer("decay_steps", 25000,
+ "The number of steps before the learning rate is halved.")
+tf.app.flags.DEFINE_integer("max_steps", int(1e6),
+ "The number of steps to run training for.")
+
+tf.app.flags.DEFINE_string("logdir", "/tmp/fivo-aux",
+ "Directory for summaries and checkpoints.")
+
+tf.app.flags.DEFINE_integer("summarize_every", int(1e3),
+ "The number of steps between each evaluation.")
+FLAGS = tf.app.flags.FLAGS
+
+
+def combine_grad_lists(grad_lists):
+ # grads is num_losses by num_variables.
+ # each list could have different variables.
+ # for each variable, sum the grads across all losses.
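+ # For example, if two losses each produce a gradient for the same variable,
+ # those gradients are summed into one (grad, var) pair below; a variable
+ # that receives no gradient from any loss gets grad=None.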
+ grads_dict = defaultdict(list)
+ var_dict = {}
+ for grad_list in grad_lists:
+ for grad, var in grad_list:
+ if grad is not None:
+ grads_dict[var.name].append(grad)
+ var_dict[var.name] = var
+
+ final_grads = []
+ for var_name, var in var_dict.iteritems():
+ grads = grads_dict[var_name]
+ if len(grads) > 0:
+ tf.logging.info("Var %s has combined grads from %s." %
+ (var_name, [g.name for g in grads]))
+ grad = tf.reduce_sum(grads, axis=0)
+ else:
+ tf.logging.info("Var %s has no grads" % var_name)
+ grad = None
+ final_grads.append((grad, var))
+ return final_grads
+
+
+def make_apply_grads_op(losses, global_step, learning_rate, lr_decay_steps):
+ for l in losses:
+ assert isinstance(l, bounds.Loss)
+
+ lr = tf.train.exponential_decay(
+ learning_rate, global_step, lr_decay_steps, 0.5, staircase=False)
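+ # With a decay rate of 0.5 and staircase=False the learning rate is halved
+ # smoothly every lr_decay_steps steps:
+ #   lr = learning_rate * 0.5 ** (global_step / lr_decay_steps).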
+ tf.summary.scalar("learning_rate", lr)
+ opt = tf.train.AdamOptimizer(lr)
+
+ ema_ops = []
+ grads = []
+ for loss_name, loss, loss_var_collection in losses:
+ tf.logging.info("Computing grads of %s w.r.t. vars in collection %s" %
+ (loss_name, loss_var_collection))
+ g = opt.compute_gradients(loss,
+ var_list=tf.get_collection(loss_var_collection))
+ ema_ops.append(summ.summarize_grads(g, loss_name))
+ grads.append(g)
+
+ all_grads = combine_grad_lists(grads)
+ apply_grads_op = opt.apply_gradients(all_grads, global_step=global_step)
+
+ # Update the emas after applying the grads.
+ with tf.control_dependencies([apply_grads_op]):
+ train_op = tf.group(*ema_ops)
+ return train_op
+
+
+def add_check_numerics_ops():
+ check_op = []
+ for op in tf.get_default_graph().get_operations():
+ bad = ["logits/Log", "sample/Reshape", "log_prob/mul",
+ "log_prob/SparseSoftmaxCrossEntropyWithLogits/Reshape",
+ "entropy/Reshape", "entropy/LogSoftmax", "Categorical", "Mean"]
+ if all([x not in op.name for x in bad]):
+ for output in op.outputs:
+ if output.dtype in [tf.float16, tf.float32, tf.float64]:
+ if op._get_control_flow_context() is not None: # pylint: disable=protected-access
+ raise ValueError("`tf.add_check_numerics_ops() is not compatible "
+ "with TensorFlow control flow operations such as "
+ "`tf.cond()` or `tf.while_loop()`.")
+
+ message = op.name + ":" + str(output.value_index)
+ with tf.control_dependencies(check_op):
+ check_op = [tf.check_numerics(output, message=message)]
+ return tf.group(*check_op)
+
+
+def create_long_chain_graph(bound, state_size, num_obs, steps_per_obs,
+ batch_size, num_samples, num_eval_samples,
+ resampling_schedule, use_resampling_grads,
+ learning_rate, lr_decay_steps, dtype="float64"):
+ num_timesteps = num_obs * steps_per_obs + 1
+ # Make the dataset.
+ dataset = data.make_long_chain_dataset(
+ state_size=state_size,
+ num_obs=num_obs,
+ steps_per_obs=steps_per_obs,
+ batch_size=batch_size,
+ num_samples=num_samples,
+ variance=FLAGS.variance,
+ observation_variance=FLAGS.observation_variance,
+ dtype=dtype,
+ observation_type=FLAGS.observation_type,
+ transition_type=FLAGS.transition_type,
+ fixed_observation=FLAGS.fixed_observation)
+ itr = dataset.make_one_shot_iterator()
+ _, observations = itr.get_next()
+ # Make the dataset for eval
+ eval_dataset = data.make_long_chain_dataset(
+ state_size=state_size,
+ num_obs=num_obs,
+ steps_per_obs=steps_per_obs,
+ batch_size=batch_size,
+ num_samples=num_eval_samples,
+ variance=FLAGS.variance,
+ observation_variance=FLAGS.observation_variance,
+ dtype=dtype,
+ observation_type=FLAGS.observation_type,
+ transition_type=FLAGS.transition_type,
+ fixed_observation=FLAGS.fixed_observation)
+ eval_itr = eval_dataset.make_one_shot_iterator()
+ _, eval_observations = eval_itr.get_next()
+
+ # Make the model.
+ model = models.LongChainModel.create(
+ state_size,
+ num_obs,
+ steps_per_obs,
+ observation_type=FLAGS.observation_type,
+ transition_type=FLAGS.transition_type,
+ variance=FLAGS.variance,
+ observation_variance=FLAGS.observation_variance,
+ dtype=tf.as_dtype(dtype),
+ disable_r=FLAGS.disable_r)
+
+ # Compute the bound and loss
+ if bound == "iwae":
+ (_, losses, ema_op, _, _) = bounds.iwae(
+ model,
+ observations,
+ num_timesteps,
+ num_samples=num_samples)
+ (eval_log_p_hat, _, _, _, eval_log_weights) = bounds.iwae(
+ model,
+ eval_observations,
+ num_timesteps,
+ num_samples=num_eval_samples,
+ summarize=False)
+ eval_log_p_hat = tf.reduce_mean(eval_log_p_hat)
+ elif bound == "fivo" or "fivo-aux":
+ (_, losses, ema_op, _, _) = bounds.fivo(
+ model,
+ observations,
+ num_timesteps,
+ resampling_schedule=resampling_schedule,
+ use_resampling_grads=use_resampling_grads,
+ resampling_type=FLAGS.resampling_method,
+ aux=("aux" in bound),
+ num_samples=num_samples)
+ (eval_log_p_hat, _, _, _, eval_log_weights) = bounds.fivo(
+ model,
+ eval_observations,
+ num_timesteps,
+ resampling_schedule=resampling_schedule,
+ use_resampling_grads=False,
+ resampling_type="multinomial",
+ aux=("aux" in bound),
+ num_samples=num_eval_samples,
+ summarize=False)
+ eval_log_p_hat = tf.reduce_mean(eval_log_p_hat)
+
+ summ.summarize_ess(eval_log_weights, only_last_timestep=True)
+
+ tf.summary.scalar("log_p_hat", eval_log_p_hat)
+
+ # Compute and apply grads.
+ global_step = tf.train.get_or_create_global_step()
+
+ apply_grads = make_apply_grads_op(losses,
+ global_step,
+ learning_rate,
+ lr_decay_steps)
+
+ # Update the emas after applying the grads.
+ with tf.control_dependencies([apply_grads]):
+ train_op = tf.group(ema_op)
+
+ # We can't calculate the likelihood for most of these models
+ # so we just return zeros.
+ eval_likelihood = tf.zeros([], dtype=dtype)
+ return global_step, train_op, eval_log_p_hat, eval_likelihood
+
+
+def create_graph(bound, state_size, num_timesteps, batch_size,
+ num_samples, num_eval_samples, resampling_schedule,
+ use_resampling_grads, learning_rate, lr_decay_steps,
+ train_p, dtype='float64'):
+ if FLAGS.use_bs:
+ true_bs = None
+ else:
+ true_bs = [np.zeros([state_size]).astype(dtype) for _ in xrange(num_timesteps)]
+
+ # Make the dataset.
+ true_bs, dataset = data.make_dataset(
+ bs=true_bs,
+ state_size=state_size,
+ num_timesteps=num_timesteps,
+ batch_size=batch_size,
+ num_samples=num_samples,
+ variance=FLAGS.variance,
+ prior_type=FLAGS.p_type,
+ bimodal_prior_weight=FLAGS.bimodal_prior_weight,
+ bimodal_prior_mean=FLAGS.bimodal_prior_mean,
+ transition_type=FLAGS.transition_type,
+ fixed_observation=FLAGS.fixed_observation,
+ dtype=dtype)
+ itr = dataset.make_one_shot_iterator()
+ _, observations = itr.get_next()
+ # Make the dataset for eval
+ _, eval_dataset = data.make_dataset(
+ bs=true_bs,
+ state_size=state_size,
+ num_timesteps=num_timesteps,
+ batch_size=num_eval_samples,
+ num_samples=num_eval_samples,
+ variance=FLAGS.variance,
+ prior_type=FLAGS.p_type,
+ bimodal_prior_weight=FLAGS.bimodal_prior_weight,
+ bimodal_prior_mean=FLAGS.bimodal_prior_mean,
+ transition_type=FLAGS.transition_type,
+ fixed_observation=FLAGS.fixed_observation,
+ dtype=dtype)
+ eval_itr = eval_dataset.make_one_shot_iterator()
+ _, eval_observations = eval_itr.get_next()
+
+ # Make the model.
+ if bound == "fivo-aux-td":
+ model = models.TDModel.create(
+ state_size,
+ num_timesteps,
+ variance=FLAGS.variance,
+ train_p=train_p,
+ p_type=FLAGS.p_type,
+ q_type=FLAGS.q_type,
+ mixing_coeff=FLAGS.bimodal_prior_weight,
+ prior_mode_mean=FLAGS.bimodal_prior_mean,
+ observation_variance=FLAGS.observation_variance,
+ transition_type=FLAGS.transition_type,
+ use_bs=FLAGS.use_bs,
+ dtype=tf.as_dtype(dtype),
+ random_seed=FLAGS.random_seed)
+ else:
+ model = models.Model.create(
+ state_size,
+ num_timesteps,
+ variance=FLAGS.variance,
+ train_p=train_p,
+ p_type=FLAGS.p_type,
+ q_type=FLAGS.q_type,
+ mixing_coeff=FLAGS.bimodal_prior_weight,
+ prior_mode_mean=FLAGS.bimodal_prior_mean,
+ observation_variance=FLAGS.observation_variance,
+ transition_type=FLAGS.transition_type,
+ use_bs=FLAGS.use_bs,
+ r_sigma_init=FLAGS.r_sigma_init,
+ dtype=tf.as_dtype(dtype),
+ random_seed=FLAGS.random_seed)
+
+ # Compute the bound and loss
+ if bound == "iwae":
+ (_, losses, ema_op, _, _) = bounds.iwae(
+ model,
+ observations,
+ num_timesteps,
+ num_samples=num_samples)
+ (eval_log_p_hat, _, _, eval_states, eval_log_weights) = bounds.iwae(
+ model,
+ eval_observations,
+ num_timesteps,
+ num_samples=num_eval_samples,
+ summarize=True)
+
+ eval_log_p_hat = tf.reduce_mean(eval_log_p_hat)
+
+ elif "fivo" in bound:
+ if bound == "fivo-aux-td":
+ (_, losses, ema_op, _, _) = bounds.fivo_aux_td(
+ model,
+ observations,
+ num_timesteps,
+ resampling_schedule=resampling_schedule,
+ num_samples=num_samples)
+ (eval_log_p_hat, _, _, eval_states, eval_log_weights) = bounds.fivo_aux_td(
+ model,
+ eval_observations,
+ num_timesteps,
+ resampling_schedule=resampling_schedule,
+ num_samples=num_eval_samples,
+ summarize=True)
+ else:
+ (_, losses, ema_op, _, _) = bounds.fivo(
+ model,
+ observations,
+ num_timesteps,
+ resampling_schedule=resampling_schedule,
+ use_resampling_grads=use_resampling_grads,
+ resampling_type=FLAGS.resampling_method,
+ aux=("aux" in bound),
+ num_samples=num_samples)
+ (eval_log_p_hat, _, _, eval_states, eval_log_weights) = bounds.fivo(
+ model,
+ eval_observations,
+ num_timesteps,
+ resampling_schedule=resampling_schedule,
+ use_resampling_grads=False,
+ resampling_type="multinomial",
+ aux=("aux" in bound),
+ num_samples=num_eval_samples,
+ summarize=True)
+ eval_log_p_hat = tf.reduce_mean(eval_log_p_hat)
+
+ summ.summarize_ess(eval_log_weights, only_last_timestep=True)
+
+ # if FLAGS.p_type == "bimodal":
+ # # create the observations that showcase the model.
+ # mode_odds_ratio = tf.convert_to_tensor([1., 3., 1./3., 512., 1./512.],
+ # dtype=tf.float64)
+ # mode_odds_ratio = tf.expand_dims(mode_odds_ratio, 1)
+ # k = ((num_timesteps+1) * FLAGS.variance) / (2*FLAGS.bimodal_prior_mean)
+ # explain_obs = tf.reduce_sum(model.p.bs) + tf.log(mode_odds_ratio) * k
+ # explain_obs = tf.tile(explain_obs, [num_eval_samples, 1])
+ # # run the model on the explainable observations
+ # if bound == "iwae":
+ # (_, _, _, explain_states, explain_log_weights) = bounds.iwae(
+ # model,
+ # explain_obs,
+ # num_timesteps,
+ # num_samples=num_eval_samples)
+ # elif bound == "fivo" or "fivo-aux":
+ # (_, _, _, explain_states, explain_log_weights) = bounds.fivo(
+ # model,
+ # explain_obs,
+ # num_timesteps,
+ # resampling_schedule=resampling_schedule,
+ # use_resampling_grads=False,
+ # resampling_type="multinomial",
+ # aux=("aux" in bound),
+ # num_samples=num_eval_samples)
+ # summ.summarize_particles(explain_states,
+ # explain_log_weights,
+ # explain_obs,
+ # model)
+
+ # Calculate the true likelihood.
+ if hasattr(model.p, 'likelihood') and callable(getattr(model.p, 'likelihood')):
+ eval_likelihood = model.p.likelihood(eval_observations) / FLAGS.num_timesteps
+ else:
+ eval_likelihood = tf.zeros_like(eval_log_p_hat)
+
+ tf.summary.scalar("log_p_hat", eval_log_p_hat)
+ tf.summary.scalar("likelihood", eval_likelihood)
+ tf.summary.scalar("bound_gap", eval_likelihood - eval_log_p_hat)
+ summ.summarize_model(model, true_bs, eval_observations, eval_states, bound,
+ summarize_r=not bound == "fivo-aux-td")
+
+ # Compute and apply grads.
+ global_step = tf.train.get_or_create_global_step()
+
+ apply_grads = make_apply_grads_op(losses,
+ global_step,
+ learning_rate,
+ lr_decay_steps)
+
+ # Update the emas after applying the grads.
+ with tf.control_dependencies([apply_grads]):
+ train_op = tf.group(ema_op)
+ #train_op = tf.group(ema_op, add_check_numerics_ops())
+
+ return global_step, train_op, eval_log_p_hat, eval_likelihood
+
+
+def parse_resampling_schedule(schedule, num_timesteps):
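+ """Parses the --resampling flag into a per-timestep list of booleans.
+
+ For example, with num_timesteps=4, "always" yields [True, True, True, False]
+ (no resampling after the final step), "every_2" yields
+ [False, True, False, True], and "true,false,true,false" is parsed literally.
+ """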
+ schedule = schedule.strip().lower()
+ if schedule == "always":
+ return [True] * (num_timesteps - 1) + [False]
+ elif schedule == "never":
+ return [False] * num_timesteps
+ elif "every" in schedule:
+ n = int(schedule.split("_")[1])
+ return [(i+1) % n == 0 for i in xrange(num_timesteps)]
+ else:
+ sched = [x.strip() == "true" for x in schedule.split(",")]
+ assert len(
+ sched
+ ) == num_timesteps, "Wrong number of timesteps in resampling schedule."
+ return sched
+
+
+def create_log_hook(step, eval_log_p_hat, eval_likelihood):
+ def summ_formatter(d):
+ return ("Step {step}, log p_hat: {log_p_hat:.5f} likelihood: {likelihood:.5f}".format(**d))
+ hook = tf.train.LoggingTensorHook(
+ {
+ "step": step,
+ "log_p_hat": eval_log_p_hat,
+ "likelihood": eval_likelihood,
+ },
+ every_n_iter=FLAGS.summarize_every,
+ formatter=summ_formatter)
+ return hook
+
+
+def create_infrequent_summary_hook():
+ infrequent_summary_hook = tf.train.SummarySaverHook(
+ save_steps=10000,
+ output_dir=FLAGS.logdir,
+ summary_op=tf.summary.merge_all(key="infrequent_summaries")
+ )
+ return infrequent_summary_hook
+
+
+def main(unused_argv):
+ if FLAGS.model == "long_chain":
+ resampling_schedule = parse_resampling_schedule(FLAGS.resampling,
+ FLAGS.num_timesteps + 1)
+ else:
+ resampling_schedule = parse_resampling_schedule(FLAGS.resampling,
+ FLAGS.num_timesteps)
+ if FLAGS.random_seed is None:
+ seed = np.random.randint(0, high=10000)
+ else:
+ seed = FLAGS.random_seed
+ tf.logging.info("Using random seed %d", seed)
+
+ if FLAGS.model == "long_chain":
+ assert FLAGS.q_type == "normal", "Q type %s not supported for long chain models" % FLAGS.q_type
+ assert FLAGS.p_type == "unimodal", "Bimodal priors are not supported for long chain models"
+ assert not FLAGS.use_bs, "Bs are not supported with long chain models"
+ assert FLAGS.num_timesteps == FLAGS.num_observations * FLAGS.steps_per_observation, "Num timesteps does not match."
+ assert FLAGS.bound != "fivo-aux-td", "TD Training is not compatible with long chain models."
+
+ if FLAGS.model == "forward":
+ if "nonlinear" not in FLAGS.p_type:
+ assert FLAGS.transition_type == models.STANDARD_TRANSITION, "Non-standard transitions not supported by the forward model."
+ assert FLAGS.observation_type == models.STANDARD_OBSERVATION, "Non-standard observations not supported by the forward model."
+ assert FLAGS.observation_variance is None, "Forward model does not support observation variance."
+ assert FLAGS.num_observations == 1, "Forward model only supports 1 observation."
+
+ if "relaxed" in FLAGS.resampling_method:
+ FLAGS.use_resampling_grads = False
+ assert FLAGS.bound != "fivo-aux-td", "TD Training is not compatible with relaxed resampling."
+
+ if FLAGS.observation_variance is None:
+ FLAGS.observation_variance = FLAGS.variance
+
+ if FLAGS.p_type == "bimodal":
+ assert FLAGS.bimodal_prior_mean is not None, "Must specify prior mean if using bimodal p."
+
+ if FLAGS.p_type == "nonlinear" or FLAGS.p_type == "nonlinear-cauchy":
+ assert not FLAGS.use_bs, "Using bs is not compatible with the nonlinear model."
+
+ g = tf.Graph()
+ with g.as_default():
+ # Set the seeds.
+ tf.set_random_seed(seed)
+ np.random.seed(seed)
+ if FLAGS.model == "long_chain":
+ (global_step, train_op, eval_log_p_hat,
+ eval_likelihood) = create_long_chain_graph(
+ FLAGS.bound,
+ FLAGS.state_size,
+ FLAGS.num_observations,
+ FLAGS.steps_per_observation,
+ FLAGS.batch_size,
+ FLAGS.num_samples,
+ FLAGS.num_eval_samples,
+ resampling_schedule,
+ FLAGS.use_resampling_grads,
+ FLAGS.learning_rate,
+ FLAGS.decay_steps)
+ else:
+ (global_step, train_op,
+ eval_log_p_hat, eval_likelihood) = create_graph(
+ FLAGS.bound,
+ FLAGS.state_size,
+ FLAGS.num_timesteps,
+ FLAGS.batch_size,
+ FLAGS.num_samples,
+ FLAGS.num_eval_samples,
+ resampling_schedule,
+ FLAGS.use_resampling_grads,
+ FLAGS.learning_rate,
+ FLAGS.decay_steps,
+ FLAGS.train_p)
+
+ log_hooks = [create_log_hook(global_step, eval_log_p_hat, eval_likelihood)]
+ if len(tf.get_collection("infrequent_summaries")) > 0:
+ log_hooks.append(create_infrequent_summary_hook())
+
+ tf.logging.info("trainable variables:")
+ tf.logging.info([v.name for v in tf.trainable_variables()])
+ tf.logging.info("p vars:")
+ tf.logging.info([v.name for v in tf.get_collection("P_VARS")])
+ tf.logging.info("q vars:")
+ tf.logging.info([v.name for v in tf.get_collection("Q_VARS")])
+ tf.logging.info("r vars:")
+ tf.logging.info([v.name for v in tf.get_collection("R_VARS")])
+ tf.logging.info("r tilde vars:")
+ tf.logging.info([v.name for v in tf.get_collection("R_TILDE_VARS")])
+
+ with tf.train.MonitoredTrainingSession(
+ master="",
+ is_chief=True,
+ hooks=log_hooks,
+ checkpoint_dir=FLAGS.logdir,
+ save_checkpoint_secs=120,
+ save_summaries_steps=FLAGS.summarize_every,
+ log_step_count_steps=FLAGS.summarize_every) as sess:
+ cur_step = -1
+ while True:
+ if sess.should_stop() or cur_step > FLAGS.max_steps:
+ break
+ # run a step
+ _, cur_step = sess.run([train_op, global_step])
+
+
+if __name__ == "__main__":
+ tf.app.run(main)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/bounds.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/bounds.py"
new file mode 100644
index 00000000..08851903
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/bounds.py"
@@ -0,0 +1,317 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Implementation of objectives for training stochastic latent variable models.
+
+Contains implementations of the Importance Weighted Autoencoder objective (IWAE)
+and the Filtering Variational objective (FIVO).
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import functools
+import tensorflow as tf
+
+from fivo import nested_utils as nested
+from fivo import smc
+
+
+def iwae(model,
+ observations,
+ seq_lengths,
+ num_samples=1,
+ parallel_iterations=30,
+ swap_memory=True):
+ """Computes the IWAE lower bound on the log marginal probability.
+
+ This method accepts a stochastic latent variable model and some observations
+ and computes a stochastic lower bound on the log marginal probability of the
+ observations. The IWAE estimator is defined by averaging multiple importance
+ weights. For more details see "Importance Weighted Autoencoders" by Burda
+ et al. https://arxiv.org/abs/1509.00519.
+
+ When num_samples = 1, this bound becomes the evidence lower bound (ELBO).
+
+ Args:
+ model: A subclass of ELBOTrainableSequenceModel that implements one
+ timestep of the model. See models/vrnn.py for an example.
+ observations: The inputs to the model. A potentially nested list or tuple of
+ Tensors each of shape [max_seq_len, batch_size, ...]. The Tensors must
+ have a rank at least two and have matching shapes in the first two
+ dimensions, which represent time and the batch respectively. The model
+ will be provided with the observations before computing the bound.
+ seq_lengths: A [batch_size] Tensor of ints encoding the length of each
+ sequence in the batch (sequences can be padded to a common length).
+ num_samples: The number of samples to use.
+ parallel_iterations: The number of parallel iterations to use for the
+ internal while loop.
+ swap_memory: Whether GPU-CPU memory swapping should be enabled for the
+ internal while loop.
+
+ Returns:
+ log_p_hat: A Tensor of shape [batch_size] containing IWAE's estimate of the
+ log marginal probability of the observations.
+ log_weights: A Tensor of shape [max_seq_len, batch_size, num_samples]
+ containing the log weights at each timestep. Will not be valid for
+ timesteps past the end of a sequence.
+ """
+ log_p_hat, log_weights, _, final_state = fivo(
+ model,
+ observations,
+ seq_lengths,
+ num_samples=num_samples,
+ resampling_criterion=smc.never_resample_criterion,
+ parallel_iterations=parallel_iterations,
+ swap_memory=swap_memory)
+ return log_p_hat, log_weights, final_state
+
+
+def fivo(model,
+ observations,
+ seq_lengths,
+ num_samples=1,
+ resampling_criterion=smc.ess_criterion,
+ resampling_type='multinomial',
+ relaxed_resampling_temperature=0.5,
+ parallel_iterations=30,
+ swap_memory=True,
+ random_seed=None):
+ """Computes the FIVO lower bound on the log marginal probability.
+
+ This method accepts a stochastic latent variable model and some observations
+ and computes a stochastic lower bound on the log marginal probability of the
+ observations. The lower bound is defined by a particle filter's unbiased
+ estimate of the marginal probability of the observations. For more details see
+ "Filtering Variational Objectives" by Maddison et al.
+ https://arxiv.org/abs/1705.09279.
+
+ When the resampling criterion is "never resample", this bound becomes IWAE.
+
+ Args:
+ model: A subclass of ELBOTrainableSequenceModel that implements one
+ timestep of the model. See models/vrnn.py for an example.
+ observations: The inputs to the model. A potentially nested list or tuple of
+ Tensors each of shape [max_seq_len, batch_size, ...]. The Tensors must
+ have a rank at least two and have matching shapes in the first two
+ dimensions, which represent time and the batch respectively. The model
+ will be provided with the observations before computing the bound.
+ seq_lengths: A [batch_size] Tensor of ints encoding the length of each
+ sequence in the batch (sequences can be padded to a common length).
+ num_samples: The number of particles to use in each particle filter.
+ resampling_criterion: The resampling criterion to use for this particle
+ filter. Must accept the number of samples, the current log weights,
+ and the current timestep and return a boolean Tensor of shape [batch_size]
+ indicating whether each particle filter should resample. See
+ ess_criterion and related functions for examples. When
+ resampling_criterion is never_resample_criterion, resampling_fn is ignored
+ and never called.
+ resampling_type: The type of resampling, one of "multinomial" or "relaxed".
+ relaxed_resampling_temperature: A positive temperature only used for relaxed
+ resampling.
+ parallel_iterations: The number of parallel iterations to use for the
+ internal while loop. Note that values greater than 1 can introduce
+ non-determinism even when random_seed is provided.
+ swap_memory: Whether GPU-CPU memory swapping should be enabled for the
+ internal while loop.
+ random_seed: The random seed to pass to the resampling operations in
+ the particle filter. Mainly useful for testing.
+
+ Returns:
+ log_p_hat: A Tensor of shape [batch_size] containing FIVO's estimate of the
+ log marginal probability of the observations.
+ log_weights: A Tensor of shape [max_seq_len, batch_size, num_samples]
+ containing the log weights at each timestep of the particle filter. Note
+ that on timesteps when a resampling operation is performed the log weights
+ are reset to 0. Will not be valid for timesteps past the end of a
+ sequence.
+ resampled: A Tensor of shape [max_seq_len, batch_size] indicating when the
+ particle filters resampled. Will be 1.0 on timesteps when resampling
+ occurred and 0.0 on timesteps when it did not.
+ """
+ # batch_size is the number of particle filters running in parallel.
+ batch_size = tf.shape(seq_lengths)[0]
+
+ # Each sequence in the batch will be the input data for a different
+ # particle filter. The batch will be laid out as:
+ # particle 1 of particle filter 1
+ # particle 1 of particle filter 2
+ # ...
+ # particle 1 of particle filter batch_size
+ # particle 2 of particle filter 1
+ # ...
+ # particle num_samples of particle filter batch_size
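+ # Concretely, each [max_seq_len, batch_size, ...] observation Tensor is tiled
+ # along its batch dimension to [max_seq_len, num_samples * batch_size, ...],
+ # so the original batch index varies fastest within each particle block.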
+ observations = nested.tile_tensors(observations, [1, num_samples])
+ tiled_seq_lengths = tf.tile(seq_lengths, [num_samples])
+ model.set_observations(observations, tiled_seq_lengths)
+
+ if resampling_type == 'multinomial':
+ resampling_fn = smc.multinomial_resampling
+ elif resampling_type == 'relaxed':
+ resampling_fn = functools.partial(
+ smc.relaxed_resampling, temperature=relaxed_resampling_temperature)
+ resampling_fn = functools.partial(resampling_fn, random_seed=random_seed)
+
+ def transition_fn(prev_state, t):
+ if prev_state is None:
+ return model.zero_state(batch_size * num_samples, tf.float32)
+ return model.propose_and_weight(prev_state, t)
+
+ log_p_hat, log_weights, resampled, final_state, _ = smc.smc(
+ transition_fn,
+ seq_lengths,
+ num_particles=num_samples,
+ resampling_criterion=resampling_criterion,
+ resampling_fn=resampling_fn,
+ parallel_iterations=parallel_iterations,
+ swap_memory=swap_memory)
+
+ return log_p_hat, log_weights, resampled, final_state
+
+
+def fivo_aux_td(
+ model,
+ observations,
+ seq_lengths,
+ num_samples=1,
+ resampling_criterion=smc.ess_criterion,
+ resampling_type='multinomial',
+ relaxed_resampling_temperature=0.5,
+ parallel_iterations=30,
+ swap_memory=True,
+ random_seed=None):
+ """Experimental."""
+ # batch_size is the number of particle filters running in parallel.
+ batch_size = tf.shape(seq_lengths)[0]
+ max_seq_len = tf.reduce_max(seq_lengths)
+
+ # Each sequence in the batch will be the input data for a different
+ # particle filter. The batch will be laid out as:
+ # particle 1 of particle filter 1
+ # particle 1 of particle filter 2
+ # ...
+ # particle 1 of particle filter batch_size
+ # particle 2 of particle filter 1
+ # ...
+ # particle num_samples of particle filter batch_size
+ observations = nested.tile_tensors(observations, [1, num_samples])
+ tiled_seq_lengths = tf.tile(seq_lengths, [num_samples])
+ model.set_observations(observations, tiled_seq_lengths)
+
+ if resampling_type == 'multinomial':
+ resampling_fn = smc.multinomial_resampling
+ elif resampling_type == 'relaxed':
+ resampling_fn = functools.partial(
+ smc.relaxed_resampling, temperature=relaxed_resampling_temperature)
+ resampling_fn = functools.partial(resampling_fn, random_seed=random_seed)
+
+ def transition_fn(prev_state, t):
+ if prev_state is None:
+ model_init_state = model.zero_state(batch_size * num_samples, tf.float32)
+ return (tf.zeros([num_samples*batch_size], dtype=tf.float32),
+ (tf.zeros([num_samples*batch_size, model.latent_size], dtype=tf.float32),
+ tf.zeros([num_samples*batch_size, model.latent_size], dtype=tf.float32)),
+ model_init_state)
+
+ prev_log_r, prev_log_r_tilde, prev_model_state = prev_state
+ (new_model_state, zt, log_q_zt, log_p_zt,
+ log_p_x_given_z, log_r_tilde, p_ztplus1) = model(prev_model_state, t)
+ r_tilde_mu, r_tilde_sigma_sq = log_r_tilde
+ # Compute the weight without r.
+ log_weight = log_p_zt + log_p_x_given_z - log_q_zt
+ # Compute log_r and log_r_tilde.
+ p_mu = tf.stop_gradient(p_ztplus1.mean())
+ p_sigma_sq = tf.stop_gradient(p_ztplus1.variance())
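+ # log_r is the closed-form log of the Gaussian integral
+ #   log int N(z; p_mu, p_sigma_sq) *
+ #           exp(-(z - r_tilde_mu)^2 / (2 * r_tilde_sigma_sq)) dz,
+ # evaluated per latent dimension and summed below.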
+ log_r = (tf.log(r_tilde_sigma_sq) -
+ tf.log(r_tilde_sigma_sq + p_sigma_sq) -
+ tf.square(r_tilde_mu - p_mu)/(r_tilde_sigma_sq + p_sigma_sq))
+ # log_r is [num_samples*batch_size, latent_size]. We sum it along the last
+ # dimension to compute log r.
+ log_r = 0.5*tf.reduce_sum(log_r, axis=-1)
+ # Compute prev log r tilde
+ prev_r_tilde_mu, prev_r_tilde_sigma_sq = prev_log_r_tilde
+ prev_log_r_tilde = -0.5*tf.reduce_sum(
+ tf.square(tf.stop_gradient(zt) - prev_r_tilde_mu)/prev_r_tilde_sigma_sq, axis=-1)
+ # If the sequence is on the last timestep, log_r and log_r_tilde are just zeros.
+ last_timestep = t >= (tiled_seq_lengths - 1)
+ log_r = tf.where(last_timestep,
+ tf.zeros_like(log_r),
+ log_r)
+ prev_log_r_tilde = tf.where(last_timestep,
+ tf.zeros_like(prev_log_r_tilde),
+ prev_log_r_tilde)
+ log_weight += tf.stop_gradient(log_r - prev_log_r)
+ new_state = (log_r, log_r_tilde, new_model_state)
+ loop_fn_args = (log_r, prev_log_r_tilde, log_p_x_given_z, log_r - prev_log_r)
+ return log_weight, new_state, loop_fn_args
+
+ def loop_fn(loop_state, loop_args, unused_model_state, log_weights, resampled, mask, t):
+ if loop_state is None:
+ return (tf.zeros([batch_size], dtype=tf.float32),
+ tf.zeros([batch_size], dtype=tf.float32),
+ tf.zeros([num_samples, batch_size], dtype=tf.float32))
+ log_p_hat_acc, bellman_loss_acc, log_r_diff_acc = loop_state
+ log_r, prev_log_r_tilde, log_p_x_given_z, log_r_diff = loop_args
+ # Compute the log_p_hat update
+ log_p_hat_update = tf.reduce_logsumexp(
+ log_weights, axis=0) - tf.log(tf.to_float(num_samples))
+ # If it is the last timestep, we always add the update.
+ log_p_hat_acc += tf.cond(t >= max_seq_len-1,
+ lambda: log_p_hat_update,
+ lambda: log_p_hat_update * resampled)
+ # Compute the Bellman update.
+ log_r = tf.reshape(log_r, [num_samples, batch_size])
+ prev_log_r_tilde = tf.reshape(prev_log_r_tilde, [num_samples, batch_size])
+ log_p_x_given_z = tf.reshape(log_p_x_given_z, [num_samples, batch_size])
+ mask = tf.reshape(mask, [num_samples, batch_size])
+ # On the first timestep there is no bellman error because there is no
+ # prev_log_r_tilde.
+ mask = tf.cond(tf.equal(t, 0),
+ lambda: tf.zeros_like(mask),
+ lambda: mask)
+ # On the first timestep also fix up prev_log_r_tilde, which will be -inf.
+ prev_log_r_tilde = tf.where(
+ tf.is_inf(prev_log_r_tilde),
+ tf.zeros_like(prev_log_r_tilde),
+ prev_log_r_tilde)
+ # log_lambda has shape [1, batch_size] (mean over the sample dimension).
+ log_lambda = tf.reduce_mean(prev_log_r_tilde - log_p_x_given_z - log_r,
+ axis=0, keepdims=True)
+ bellman_error = mask * tf.square(
+ prev_log_r_tilde -
+ tf.stop_gradient(log_lambda + log_p_x_given_z + log_r)
+ )
+ bellman_loss_acc += tf.reduce_mean(bellman_error, axis=0)
+ # Compute the log_r_diff update
+ log_r_diff_acc += mask * tf.reshape(log_r_diff, [num_samples, batch_size])
+ return (log_p_hat_acc, bellman_loss_acc, log_r_diff_acc)
+
+ log_weights, resampled, accs = smc.smc(
+ transition_fn,
+ seq_lengths,
+ num_particles=num_samples,
+ resampling_criterion=resampling_criterion,
+ resampling_fn=resampling_fn,
+ loop_fn=loop_fn,
+ parallel_iterations=parallel_iterations,
+ swap_memory=swap_memory)
+
+ log_p_hat, bellman_loss, log_r_diff = accs
+ loss_per_seq = [- log_p_hat, bellman_loss]
+ tf.summary.scalar("bellman_loss",
+ tf.reduce_mean(bellman_loss / tf.to_float(seq_lengths)))
+ tf.summary.scalar("log_r_diff",
+ tf.reduce_mean(tf.reduce_mean(log_r_diff, axis=0) / tf.to_float(seq_lengths)))
+ return loss_per_seq, log_p_hat, log_weights, resampled
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/bounds_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/bounds_test.py"
new file mode 100644
index 00000000..c970f74f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/bounds_test.py"
@@ -0,0 +1,183 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for fivo.bounds"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+import tensorflow as tf
+
+from fivo.test_utils import create_vrnn
+from fivo import bounds
+
+
+class BoundsTest(tf.test.TestCase):
+
+ def test_elbo(self):
+ """A golden-value test for the ELBO (the IWAE bound with num_samples=1)."""
+ tf.set_random_seed(1234)
+ with self.test_session() as sess:
+ model, inputs, targets, lengths = create_vrnn(random_seed=1234)
+ outs = bounds.iwae(model, (inputs, targets), lengths, num_samples=1,
+ parallel_iterations=1)
+ sess.run(tf.global_variables_initializer())
+ log_p_hat, _, _ = sess.run(outs)
+ self.assertAllClose([-21.615765, -13.614225], log_p_hat)
+
+ def test_iwae(self):
+ """A golden-value test for the IWAE bound."""
+ tf.set_random_seed(1234)
+ with self.test_session() as sess:
+ model, inputs, targets, lengths = create_vrnn(random_seed=1234)
+ outs = bounds.iwae(model, (inputs, targets), lengths, num_samples=4,
+ parallel_iterations=1)
+ sess.run(tf.global_variables_initializer())
+ log_p_hat, weights, _ = sess.run(outs)
+ self.assertAllClose([-23.301426, -13.64028], log_p_hat)
+ weights_gt = np.array(
+ [[[-3.66708851, -2.07074022, -4.91751671, -5.03293562],
+ [-2.99690723, -3.17782736, -4.50084877, -3.48536515]],
+ [[-6.2539978, -4.37615728, -7.43738699, -7.85044909],
+ [-8.27518654, -6.71545124, -8.96198845, -7.05567837]],
+ [[-9.19093227, -8.01637268, -11.64603615, -10.51128292],
+ [-12.34527206, -11.54284477, -11.8667469, -9.69417381]],
+ [[-12.20609856, -10.47217369, -13.66270638, -13.46115875],
+ [-17.17656708, -16.25190353, -15.28658581, -12.33067703]],
+ [[-16.14766312, -15.57472229, -17.47755432, -17.98189926],
+ [-17.17656708, -16.25190353, -15.28658581, -12.33067703]],
+ [[-20.07182884, -18.43191147, -20.1606636, -21.45263863],
+ [-17.17656708, -16.25190353, -15.28658581, -12.33067703]],
+ [[-24.10270691, -22.20865822, -24.14675522, -25.27248383],
+ [-17.17656708, -16.25190353, -15.28658581, -12.33067703]]])
+ self.assertAllClose(weights_gt, weights)
+
+ def test_fivo(self):
+ """A golden-value test for the FIVO bound."""
+ tf.set_random_seed(1234)
+ with self.test_session() as sess:
+ model, inputs, targets, lengths = create_vrnn(random_seed=1234)
+ outs = bounds.fivo(model, (inputs, targets), lengths, num_samples=4,
+ random_seed=1234, parallel_iterations=1)
+ sess.run(tf.global_variables_initializer())
+ log_p_hat, weights, resampled, _ = sess.run(outs)
+ self.assertAllClose([-22.98902512, -14.21689224], log_p_hat)
+ weights_gt = np.array(
+ [[[-3.66708851, -2.07074022, -4.91751671, -5.03293562],
+ [-2.99690723, -3.17782736, -4.50084877, -3.48536515]],
+ [[-2.67100811, -2.30541706, -2.34178066, -2.81751347],
+ [-8.27518654, -6.71545124, -8.96198845, -7.05567837]],
+ [[-5.65190411, -5.94563246, -6.55041981, -5.4783473],
+ [-12.34527206, -11.54284477, -11.8667469, -9.69417381]],
+ [[-8.71947861, -8.40143299, -8.54593086, -8.42822266],
+ [-4.28782988, -4.50591278, -3.40847206, -2.63650274]],
+ [[-12.7003831, -13.5039815, -12.3569726, -12.9489622],
+ [-4.28782988, -4.50591278, -3.40847206, -2.63650274]],
+ [[-16.4520301, -16.3611698, -15.0314846, -16.4197006],
+ [-4.28782988, -4.50591278, -3.40847206, -2.63650274]],
+ [[-20.7010765, -20.1379165, -19.0020351, -20.2395458],
+ [-4.28782988, -4.50591278, -3.40847206, -2.63650274]]])
+ self.assertAllClose(weights_gt, weights)
+ resampled_gt = np.array(
+ [[1., 0.],
+ [0., 0.],
+ [0., 1.],
+ [0., 0.],
+ [0., 0.],
+ [0., 0.],
+ [0., 0.]])
+ self.assertAllClose(resampled_gt, resampled)
+
+ def test_fivo_relaxed(self):
+ """A golden-value test for the FIVO bound with relaxed sampling."""
+ tf.set_random_seed(1234)
+ with self.test_session() as sess:
+ model, inputs, targets, lengths = create_vrnn(random_seed=1234)
+ outs = bounds.fivo(model, (inputs, targets), lengths, num_samples=4,
+ random_seed=1234, parallel_iterations=1,
+ resampling_type="relaxed")
+ sess.run(tf.global_variables_initializer())
+ log_p_hat, weights, resampled, _ = sess.run(outs)
+ self.assertAllClose([-22.942394, -14.273882], log_p_hat)
+ weights_gt = np.array(
+ [[[-3.66708851, -2.07074118, -4.91751575, -5.03293514],
+ [-2.99690628, -3.17782831, -4.50084877, -3.48536515]],
+ [[-2.84939098, -2.30087185, -2.35649204, -2.48417377],
+ [-8.27518654, -6.71545172, -8.96199131, -7.05567837]],
+ [[-5.92327023, -5.9433074, -6.5826683, -5.04259014],
+ [-12.34527206, -11.54284668, -11.86675072, -9.69417477]],
+ [[-8.95323944, -8.40061855, -8.52760506, -7.99130583],
+ [-4.58102798, -4.56017351, -3.46283388, -2.65550804]],
+ [[-12.87836456, -13.49628639, -12.31680107, -12.74228859],
+ [-4.58102798, -4.56017351, -3.46283388, -2.65550804]],
+ [[-16.78347397, -16.35150909, -14.98797417, -16.35162735],
+ [-4.58102798, -4.56017351, -3.46283388, -2.65550804]],
+ [[-20.81165886, -20.1307621, -18.92229652, -20.17458153],
+ [-4.58102798, -4.56017351, -3.46283388, -2.65550804]]])
+ self.assertAllClose(weights_gt, weights)
+ resampled_gt = np.array(
+ [[1., 0.],
+ [0., 0.],
+ [0., 1.],
+ [0., 0.],
+ [0., 0.],
+ [0., 0.],
+ [0., 0.]])
+ self.assertAllClose(resampled_gt, resampled)
+
+ def test_fivo_aux_relaxed(self):
+ """A golden-value test for the FIVO-AUX bound with relaxed sampling."""
+ tf.set_random_seed(1234)
+ with self.test_session() as sess:
+ model, inputs, targets, lengths = create_vrnn(random_seed=1234,
+ use_tilt=True)
+ outs = bounds.fivo(model, (inputs, targets), lengths, num_samples=4,
+ random_seed=1234, parallel_iterations=1,
+ resampling_type="relaxed")
+ sess.run(tf.global_variables_initializer())
+ log_p_hat, weights, resampled, _ = sess.run(outs)
+ self.assertAllClose([-23.1395, -14.271059], log_p_hat)
+ weights_gt = np.array(
+ [[[-5.19826221, -3.55476403, -5.98663855, -6.08058834],
+ [-6.31685925, -5.70243931, -7.07638931, -6.18138981]],
+ [[-3.97986865, -3.58831525, -3.85753584, -3.5010016],
+ [-11.38203049, -8.66213989, -11.23646641, -10.02024746]],
+ [[-6.62269831, -6.36680222, -6.78096485, -5.80072498],
+ [-3.55419445, -8.11326408, -3.48766923, -3.08593249]],
+ [[-10.56472301, -10.16084099, -9.96741676, -8.5270071],
+ [-6.04880285, -7.80853653, -4.72652149, -3.49711013]],
+ [[-13.36585426, -16.08720398, -13.33416367, -13.1017189],
+ [-0., -0., -0., -0.]],
+ [[-17.54233551, -17.35167503, -16.79163361, -16.51471138],
+ [0., -0., -0., -0.]],
+ [[-19.74024963, -18.69452858, -17.76246452, -18.76182365],
+ [0., -0., -0., -0.]]])
+ self.assertAllClose(weights_gt, weights)
+ resampled_gt = np.array([[1., 0.],
+ [0., 1.],
+ [0., 0.],
+ [0., 1.],
+ [0., 0.],
+ [0., 0.],
+ [0., 0.]])
+ self.assertAllClose(resampled_gt, resampled)
+
+
+if __name__ == "__main__":
+ np.set_printoptions(threshold=np.nan) # Used to easily see the gold values.
+ # Use print(repr(numpy_array)) to print the values.
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/data/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/data/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/data/calculate_pianoroll_mean.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/data/calculate_pianoroll_mean.py"
new file mode 100644
index 00000000..93f712bd
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/data/calculate_pianoroll_mean.py"
@@ -0,0 +1,65 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Script to calculate the mean of a pianoroll dataset.
+
+Given a pianoroll pickle file, this script loads the dataset and
+calculates the mean of the training set. Then it updates the pickle file
+so that the key "train_mean" points to the mean vector.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import pickle
+import numpy as np
+
+import tensorflow as tf
+
+
+from datasets import sparse_pianoroll_to_dense
+
+tf.app.flags.DEFINE_string('in_file', None,
+ 'Filename of the pickled pianoroll dataset to load.')
+tf.app.flags.DEFINE_string('out_file', None,
+ 'Name of the output pickle file. Defaults to in_file, '
+ 'updating the input pickle file.')
+tf.app.flags.mark_flag_as_required('in_file')
+
+FLAGS = tf.app.flags.FLAGS
+
+MIN_NOTE = 21
+MAX_NOTE = 108
+NUM_NOTES = MAX_NOTE - MIN_NOTE + 1
+
+
+def main(unused_argv):
+ if FLAGS.out_file is None:
+ FLAGS.out_file = FLAGS.in_file
+ with tf.gfile.Open(FLAGS.in_file, 'r') as f:
+ pianorolls = pickle.load(f)
+ dense_pianorolls = [sparse_pianoroll_to_dense(p, MIN_NOTE, NUM_NOTES)[0]
+ for p in pianorolls['train']]
+ # Concatenate all elements along the time axis.
+ concatenated = np.concatenate(dense_pianorolls, axis=0)
+ mean = np.mean(concatenated, axis=0)
+ pianorolls['train_mean'] = mean
+ # Write out the whole pickle file, including the train mean.
+ pickle.dump(pianorolls, open(FLAGS.out_file, 'wb'))
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/data/create_timit_dataset.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/data/create_timit_dataset.py"
new file mode 100644
index 00000000..ea1cd3b1
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/data/create_timit_dataset.py"
@@ -0,0 +1,180 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Preprocesses TIMIT from raw wavfiles to create a set of TFRecords.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import glob
+import os
+import random
+import re
+
+import numpy as np
+import tensorflow as tf
+
+tf.app.flags.DEFINE_string("raw_timit_dir", None,
+ "Directory containing TIMIT files.")
+tf.app.flags.DEFINE_string("out_dir", None,
+ "Output directory for TFRecord files.")
+tf.app.flags.DEFINE_float("valid_frac", 0.05,
+ "Fraction of train set to use as valid set. "
+ "Must be between 0.0 and 1.0.")
+
+tf.app.flags.mark_flag_as_required("raw_timit_dir")
+tf.app.flags.mark_flag_as_required("out_dir")
+
+FLAGS = tf.app.flags.FLAGS
+
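+# Example invocation (the paths below are hypothetical; raw_timit_dir should
+# contain the unpacked archive with a top-level TIMIT/ directory):
+#   python create_timit_dataset.py --raw_timit_dir=/tmp/timit_raw \
+#     --out_dir=/tmp/timit --valid_frac=0.05
+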
+NUM_TRAIN_FILES = 4620
+NUM_TEST_FILES = 1680
+SAMPLES_PER_TIMESTEP = 200
+
+# Regexes for reading SPHERE header files.
+SAMPLE_COUNT_REGEX = re.compile(r"sample_count -i (\d+)")
+SAMPLE_MIN_REGEX = re.compile(r"sample_min -i (-?\d+)")
+SAMPLE_MAX_REGEX = re.compile(r"sample_max -i (-?\d+)")
+
+
+def get_filenames(split):
+ """Get all wav filenames from the TIMIT archive."""
+ path = os.path.join(FLAGS.raw_timit_dir, "TIMIT", split, "*", "*", "*.WAV")
+ # Sort the output by name so the order is deterministic.
+ files = sorted(glob.glob(path))
+ return files
+
+
+def load_timit_wav(filename):
+ """Loads a TIMIT wavfile into a numpy array.
+
+ TIMIT wavfiles include a SPHERE header, detailed in the TIMIT docs. The first
+ line is the header type and the second is the length of the header in bytes.
+ After the header, the remaining bytes are actual WAV data.
+
+ The header includes information about the WAV data such as the number of
+ samples and minimum and maximum amplitude. This function asserts that the
+ loaded wav data matches the header.
+
+ Args:
+ filename: The name of the TIMIT wavfile to load.
+ Returns:
+ wav: A numpy array containing the loaded wav data.
+ """
+ wav_file = open(filename, "rb")
+ header_type = wav_file.readline()
+ header_length_str = wav_file.readline()
+ # The header length includes the length of the first two lines.
+ header_remaining_bytes = (int(header_length_str) - len(header_type) -
+ len(header_length_str))
+ header = wav_file.read(header_remaining_bytes)
+ # Read the relevant header fields.
+ sample_count = int(SAMPLE_COUNT_REGEX.search(header).group(1))
+ sample_min = int(SAMPLE_MIN_REGEX.search(header).group(1))
+ sample_max = int(SAMPLE_MAX_REGEX.search(header).group(1))
+ wav = np.fromstring(wav_file.read(), dtype="int16").astype("float32")
+ # Check that the loaded data conforms to the header description.
+ assert len(wav) == sample_count
+ assert wav.min() == sample_min
+ assert wav.max() == sample_max
+ return wav
+
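+# The SPHERE header parsed above is plain text; the fields matched by the
+# regexes look like the following (the values shown are hypothetical):
+#   sample_count -i 63488
+#   sample_min -i -2191
+#   sample_max -i 2790
+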
+
+def preprocess(wavs, block_size, mean, std):
+ """Normalize the wav data and reshape it into chunks."""
+ processed_wavs = []
+ for wav in wavs:
+ wav = (wav - mean) / std
+ wav_length = wav.shape[0]
+ if wav_length % block_size != 0:
+ pad_width = block_size - (wav_length % block_size)
+ wav = np.pad(wav, (0, pad_width), "constant")
+ assert wav.shape[0] % block_size == 0
+ wav = wav.reshape((-1, block_size))
+ processed_wavs.append(wav)
+ return processed_wavs
+
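+# For example (illustrative numbers): with block_size=200, a wav of 450 samples
+# is normalized, padded with 150 zeros, and reshaped to (3, 200).
+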
+
+def create_tfrecord_from_wavs(wavs, output_file):
+  """Writes the processed wavs to disk as a single TFRecord file.
+
+  Each record holds the raw bytes of one [time, samples_per_timestep] float32
+  array, matching the tf.decode_raw parsing used at load time.
+  """
+  with tf.python_io.TFRecordWriter(output_file) as writer:
+    for wav in wavs:
+      writer.write(wav.astype(np.float32).tobytes())
+
+
+def main(unused_argv):
+ train_filenames = get_filenames("TRAIN")
+ test_filenames = get_filenames("TEST")
+
+ num_train_files = len(train_filenames)
+ num_test_files = len(test_filenames)
+ num_valid_files = int(num_train_files * FLAGS.valid_frac)
+ num_train_files -= num_valid_files
+
+ print("%d train / %d valid / %d test" % (
+ num_train_files, num_valid_files, num_test_files))
+
+ random.seed(1234)
+ random.shuffle(train_filenames)
+
+ valid_filenames = train_filenames[:num_valid_files]
+ train_filenames = train_filenames[num_valid_files:]
+
+ # Make sure there is no overlap in the train, test, and valid sets.
+ train_s = set(train_filenames)
+ test_s = set(test_filenames)
+ valid_s = set(valid_filenames)
+ # Disable explicit length testing to make the assertions more readable.
+ # pylint: disable=g-explicit-length-test
+ assert len(train_s & test_s) == 0
+ assert len(train_s & valid_s) == 0
+ assert len(valid_s & test_s) == 0
+ # pylint: enable=g-explicit-length-test
+
+ train_wavs = [load_timit_wav(f) for f in train_filenames]
+ valid_wavs = [load_timit_wav(f) for f in valid_filenames]
+ test_wavs = [load_timit_wav(f) for f in test_filenames]
+ assert len(train_wavs) + len(valid_wavs) == NUM_TRAIN_FILES
+ assert len(test_wavs) == NUM_TEST_FILES
+
+ # Calculate the mean and standard deviation of the train set.
+ train_stacked = np.hstack(train_wavs)
+ train_mean = np.mean(train_stacked)
+ train_std = np.std(train_stacked)
+ print("train mean: %f train std: %f" % (train_mean, train_std))
+
+ # Process all data, normalizing with the train set statistics.
+ processed_train_wavs = preprocess(train_wavs, SAMPLES_PER_TIMESTEP,
+ train_mean, train_std)
+ processed_valid_wavs = preprocess(valid_wavs, SAMPLES_PER_TIMESTEP,
+ train_mean, train_std)
+ processed_test_wavs = preprocess(test_wavs, SAMPLES_PER_TIMESTEP, train_mean,
+ train_std)
+
+ # Write the datasets to disk.
+ create_tfrecord_from_wavs(
+ processed_train_wavs,
+ os.path.join(FLAGS.out_dir, "train"))
+ create_tfrecord_from_wavs(
+ processed_valid_wavs,
+ os.path.join(FLAGS.out_dir, "valid"))
+ create_tfrecord_from_wavs(
+ processed_test_wavs,
+ os.path.join(FLAGS.out_dir, "test"))
+
+
+if __name__ == "__main__":
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/data/datasets.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/data/datasets.py"
new file mode 100644
index 00000000..6d532462
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/data/datasets.py"
@@ -0,0 +1,453 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Code for creating sequence datasets.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import pickle
+
+import numpy as np
+from scipy.sparse import coo_matrix
+import tensorflow as tf
+
+# The default number of threads used to process data in parallel.
+DEFAULT_PARALLELISM = 12
+
+
+def sparse_pianoroll_to_dense(pianoroll, min_note, num_notes):
+ """Converts a sparse pianoroll to a dense numpy array.
+
+ Given a sparse pianoroll, converts it to a dense numpy array of shape
+ [num_timesteps, num_notes] where entry i,j is 1.0 if note j is active on
+ timestep i and 0.0 otherwise.
+
+ Args:
+ pianoroll: A sparse pianoroll object, a list of tuples where the i'th tuple
+ contains the indices of the notes active at timestep i.
+ min_note: The minimum note in the pianoroll, subtracted from all notes so
+ that the minimum note becomes 0.
+ num_notes: The number of possible different note indices, determines the
+ second dimension of the resulting dense array.
+ Returns:
+ dense_pianoroll: A [num_timesteps, num_notes] numpy array of floats.
+ num_timesteps: A python int, the number of timesteps in the pianoroll.
+ """
+ num_timesteps = len(pianoroll)
+ inds = []
+ for time, chord in enumerate(pianoroll):
+ # Re-index the notes to start from min_note.
+ inds.extend((time, note-min_note) for note in chord)
+ shape = [num_timesteps, num_notes]
+ values = [1.] * len(inds)
+ sparse_pianoroll = coo_matrix(
+ (values, ([x[0] for x in inds], [x[1] for x in inds])),
+ shape=shape)
+ return sparse_pianoroll.toarray(), num_timesteps
+
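+# Example (illustrative): sparse_pianoroll_to_dense([(0,), (), (1,)],
+# min_note=0, num_notes=2) returns the dense array
+# [[1., 0.], [0., 0.], [0., 1.]] and num_timesteps = 3.
+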
+
+def create_pianoroll_dataset(path,
+ split,
+ batch_size,
+ num_parallel_calls=DEFAULT_PARALLELISM,
+ shuffle=False,
+ repeat=False,
+ min_note=21,
+ max_note=108):
+ """Creates a pianoroll dataset.
+
+ Args:
+ path: The path of a pickle file containing the dataset to load.
+ split: The split to use, can be train, test, or valid.
+ batch_size: The batch size. If repeat is False then it is not guaranteed
+ that the true batch size will match for all batches since batch_size
+ may not necessarily evenly divide the number of elements.
+ num_parallel_calls: The number of threads to use for parallel processing of
+ the data.
+ shuffle: If true, shuffles the order of the dataset.
+ repeat: If true, repeats the dataset endlessly.
+ min_note: The minimum note number of the dataset. For all pianoroll datasets
+ the minimum note is number 21, and changing this affects the dimension of
+ the data. This is useful mostly for testing.
+ max_note: The maximum note number of the dataset. For all pianoroll datasets
+ the maximum note is number 108, and changing this affects the dimension of
+ the data. This is useful mostly for testing.
+ Returns:
+ inputs: A batch of input sequences represented as a dense Tensor of shape
+ [time, batch_size, data_dimension]. The sequences in inputs are the
+ sequences in targets shifted one timestep into the future, padded with
+ zeros. This tensor is mean-centered, with the mean taken from the pickle
+ file key 'train_mean'.
+ targets: A batch of target sequences represented as a dense Tensor of
+ shape [time, batch_size, data_dimension].
+ lens: An int Tensor of shape [batch_size] representing the lengths of each
+ sequence in the batch.
+ mean: A float Tensor of shape [data_dimension] containing the mean loaded
+ from the pickle file.
+ """
+ # Load the data from disk.
+ num_notes = max_note - min_note + 1
+ with tf.gfile.Open(path, "r") as f:
+ raw_data = pickle.load(f)
+ pianorolls = raw_data[split]
+ mean = raw_data["train_mean"]
+ num_examples = len(pianorolls)
+
+ def pianoroll_generator():
+ for sparse_pianoroll in pianorolls:
+ yield sparse_pianoroll_to_dense(sparse_pianoroll, min_note, num_notes)
+
+ dataset = tf.data.Dataset.from_generator(
+ pianoroll_generator,
+ output_types=(tf.float64, tf.int64),
+ output_shapes=([None, num_notes], []))
+
+ if repeat: dataset = dataset.repeat()
+ if shuffle: dataset = dataset.shuffle(num_examples)
+
+  # Batch sequences together, padding them to a common length in time.
+ dataset = dataset.padded_batch(batch_size,
+ padded_shapes=([None, num_notes], []))
+
+ def process_pianoroll_batch(data, lengths):
+ """Create mean-centered and time-major next-step prediction Tensors."""
+ data = tf.to_float(tf.transpose(data, perm=[1, 0, 2]))
+ lengths = tf.to_int32(lengths)
+ targets = data
+ # Mean center the inputs.
+ inputs = data - tf.constant(mean, dtype=tf.float32,
+ shape=[1, 1, mean.shape[0]])
+ # Shift the inputs one step forward in time. Also remove the last timestep
+ # so that targets and inputs are the same length.
+ inputs = tf.pad(inputs, [[1, 0], [0, 0], [0, 0]], mode="CONSTANT")[:-1]
+ # Mask out unused timesteps.
+ inputs *= tf.expand_dims(tf.transpose(
+ tf.sequence_mask(lengths, dtype=inputs.dtype)), 2)
+ return inputs, targets, lengths
+
+ dataset = dataset.map(process_pianoroll_batch,
+ num_parallel_calls=num_parallel_calls)
+ dataset = dataset.prefetch(num_examples)
+
+ itr = dataset.make_one_shot_iterator()
+ inputs, targets, lengths = itr.get_next()
+ return inputs, targets, lengths, tf.constant(mean, dtype=tf.float32)
+
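+# Example usage of create_pianoroll_dataset (illustrative; the path is
+# hypothetical and the pickle file is assumed to hold the split keys plus the
+# 'train_mean' key added by calculate_pianoroll_mean.py in this directory):
+#   inputs, targets, lens, mean = create_pianoroll_dataset(
+#       "/tmp/jsb.pkl", split="train", batch_size=4, shuffle=True, repeat=True)
+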
+
+def create_human_pose_dataset(
+ path,
+ split,
+ batch_size,
+ num_parallel_calls=DEFAULT_PARALLELISM,
+ shuffle=False,
+    repeat=False):
+ """Creates a human pose dataset.
+
+ Args:
+ path: The path of a pickle file containing the dataset to load.
+ split: The split to use, can be train, test, or valid.
+ batch_size: The batch size. If repeat is False then it is not guaranteed
+ that the true batch size will match for all batches since batch_size
+ may not necessarily evenly divide the number of elements.
+ num_parallel_calls: The number of threads to use for parallel processing of
+ the data.
+ shuffle: If true, shuffles the order of the dataset.
+ repeat: If true, repeats the dataset endlessly.
+ Returns:
+ inputs: A batch of input sequences represented as a dense Tensor of shape
+ [time, batch_size, data_dimension]. The sequences in inputs are the
+ sequences in targets shifted one timestep into the future, padded with
+ zeros. This tensor is mean-centered, with the mean taken from the pickle
+ file key 'train_mean'.
+ targets: A batch of target sequences represented as a dense Tensor of
+ shape [time, batch_size, data_dimension].
+ lens: An int Tensor of shape [batch_size] representing the lengths of each
+ sequence in the batch.
+ mean: A float Tensor of shape [data_dimension] containing the mean loaded
+ from the pickle file.
+ """
+ # Load the data from disk.
+ with tf.gfile.Open(path, "r") as f:
+ raw_data = pickle.load(f)
+
+ mean = raw_data["train_mean"]
+ pose_sequences = raw_data[split]
+ num_examples = len(pose_sequences)
+ num_features = pose_sequences[0].shape[1]
+
+ def pose_generator():
+ """A generator that yields pose data sequences."""
+ # Each timestep has 32 x values followed by 32 y values so is 64
+ # dimensional.
+ for pose_sequence in pose_sequences:
+ yield pose_sequence, pose_sequence.shape[0]
+
+ dataset = tf.data.Dataset.from_generator(
+ pose_generator,
+ output_types=(tf.float64, tf.int64),
+ output_shapes=([None, num_features], []))
+
+ if repeat:
+ dataset = dataset.repeat()
+ if shuffle:
+ dataset = dataset.shuffle(num_examples)
+
+  # Batch sequences together, padding them to a common length in time.
+ dataset = dataset.padded_batch(
+ batch_size, padded_shapes=([None, num_features], []))
+
+ # Post-process each batch, ensuring that it is mean-centered and time-major.
+ def process_pose_data(data, lengths):
+ """Creates Tensors for next step prediction and mean-centers the input."""
+ data = tf.to_float(tf.transpose(data, perm=[1, 0, 2]))
+ lengths = tf.to_int32(lengths)
+ targets = data
+ # Mean center the inputs.
+ inputs = data - tf.constant(
+ mean, dtype=tf.float32, shape=[1, 1, mean.shape[0]])
+ # Shift the inputs one step forward in time. Also remove the last timestep
+ # so that targets and inputs are the same length.
+ inputs = tf.pad(inputs, [[1, 0], [0, 0], [0, 0]], mode="CONSTANT")[:-1]
+ # Mask out unused timesteps.
+ inputs *= tf.expand_dims(
+ tf.transpose(tf.sequence_mask(lengths, dtype=inputs.dtype)), 2)
+ return inputs, targets, lengths
+
+ dataset = dataset.map(
+ process_pose_data,
+ num_parallel_calls=num_parallel_calls)
+ dataset = dataset.prefetch(num_examples)
+
+ itr = dataset.make_one_shot_iterator()
+ inputs, targets, lengths = itr.get_next()
+ return inputs, targets, lengths, tf.constant(mean, dtype=tf.float32)
+
+
+def create_speech_dataset(path,
+ batch_size,
+ samples_per_timestep=200,
+ num_parallel_calls=DEFAULT_PARALLELISM,
+ prefetch_buffer_size=2048,
+ shuffle=False,
+ repeat=False):
+ """Creates a speech dataset.
+
+ Args:
+ path: The path of a possibly sharded TFRecord file containing the data.
+ batch_size: The batch size. If repeat is False then it is not guaranteed
+ that the true batch size will match for all batches since batch_size
+ may not necessarily evenly divide the number of elements.
+ samples_per_timestep: The number of audio samples per timestep. Used to
+ reshape the data into sequences of shape [time, samples_per_timestep].
+ Should not change except for testing -- in all speech datasets 200 is the
+ number of samples per timestep.
+ num_parallel_calls: The number of threads to use for parallel processing of
+ the data.
+ prefetch_buffer_size: The size of the prefetch queues to use after reading
+ and processing the raw data.
+ shuffle: If true, shuffles the order of the dataset.
+ repeat: If true, repeats the dataset endlessly.
+ Returns:
+ inputs: A batch of input sequences represented as a dense Tensor of shape
+ [time, batch_size, samples_per_timestep]. The sequences in inputs are the
+ sequences in targets shifted one timestep into the future, padded with
+ zeros.
+ targets: A batch of target sequences represented as a dense Tensor of
+ shape [time, batch_size, samples_per_timestep].
+ lens: An int Tensor of shape [batch_size] representing the lengths of each
+ sequence in the batch.
+ """
+ filenames = [path]
+
+ def read_speech_example(value):
+ """Parses a single tf.Example from the TFRecord file."""
+ decoded = tf.decode_raw(value, out_type=tf.float32)
+ example = tf.reshape(decoded, [-1, samples_per_timestep])
+ length = tf.shape(example)[0]
+ return example, length
+
+ # Create the dataset from the TFRecord files
+ dataset = tf.data.TFRecordDataset(filenames).map(
+ read_speech_example, num_parallel_calls=num_parallel_calls)
+ dataset = dataset.prefetch(prefetch_buffer_size)
+
+ if repeat: dataset = dataset.repeat()
+ if shuffle: dataset = dataset.shuffle(prefetch_buffer_size)
+
+ dataset = dataset.padded_batch(
+ batch_size, padded_shapes=([None, samples_per_timestep], []))
+
+ def process_speech_batch(data, lengths):
+ """Creates Tensors for next step prediction."""
+ data = tf.transpose(data, perm=[1, 0, 2])
+ lengths = tf.to_int32(lengths)
+ targets = data
+ # Shift the inputs one step forward in time. Also remove the last timestep
+ # so that targets and inputs are the same length.
+ inputs = tf.pad(data, [[1, 0], [0, 0], [0, 0]], mode="CONSTANT")[:-1]
+ # Mask out unused timesteps.
+ inputs *= tf.expand_dims(
+ tf.transpose(tf.sequence_mask(lengths, dtype=inputs.dtype)), 2)
+ return inputs, targets, lengths
+
+ dataset = dataset.map(process_speech_batch,
+ num_parallel_calls=num_parallel_calls)
+ dataset = dataset.prefetch(prefetch_buffer_size)
+
+ itr = dataset.make_one_shot_iterator()
+ inputs, targets, lengths = itr.get_next()
+ return inputs, targets, lengths
+
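+# Example usage of create_speech_dataset (illustrative; the path is
+# hypothetical and should point at a TFRecord file written by
+# create_timit_dataset.py in this directory):
+#   inputs, targets, lens = create_speech_dataset(
+#       "/tmp/timit/train", batch_size=8, shuffle=True, repeat=True)
+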
+
+SQUARED_OBSERVATION = "squared"
+ABS_OBSERVATION = "abs"
+STANDARD_OBSERVATION = "standard"
+OBSERVATION_TYPES = [SQUARED_OBSERVATION, ABS_OBSERVATION, STANDARD_OBSERVATION]
+
+ROUND_TRANSITION = "round"
+STANDARD_TRANSITION = "standard"
+TRANSITION_TYPES = [ROUND_TRANSITION, STANDARD_TRANSITION]
+
+
+def create_chain_graph_dataset(
+ batch_size,
+ num_timesteps,
+ steps_per_observation=None,
+ state_size=1,
+ transition_variance=1.,
+ observation_variance=1.,
+ transition_type=STANDARD_TRANSITION,
+ observation_type=STANDARD_OBSERVATION,
+ fixed_observation=None,
+ prefetch_buffer_size=2048,
+ dtype="float32"):
+ """Creates a toy chain graph dataset.
+
+ Creates a dataset where the data are sampled from a diffusion process. The
+ 'latent' states of the process are sampled as a chain of Normals:
+
+ z0 ~ N(0, transition_variance)
+ z1 ~ N(transition_fn(z0), transition_variance)
+ ...
+
+  where transition_fn either rounds z0 or passes it through unchanged.
+
+ The observations are produced every steps_per_observation timesteps as a
+ function of the latent zs. For example if steps_per_observation is 3 then the
+ first observation will be produced as a function of z3:
+
+ x1 ~ N(observation_fn(z3), observation_variance)
+
+ where observation_fn could square z3, take the absolute value, or pass
+ it through unchanged.
+
+ Only the observations are returned.
+
+ Args:
+ batch_size: The batch size. The number of trajectories to run in parallel.
+ num_timesteps: The length of the chain of latent states (i.e. the
+      number of z's excluding z0).
+ steps_per_observation: The number of latent states between each observation,
+ must evenly divide num_timesteps.
+ state_size: The size of the latent state and observation, must be a
+ python int.
+ transition_variance: The variance of the transition density.
+ observation_variance: The variance of the observation density.
+ transition_type: Must be one of "round" or "standard". "round" means that
+ the transition density is centered at the rounded previous latent state.
+ "standard" centers the transition density at the previous latent state,
+ unchanged.
+ observation_type: Must be one of "squared", "abs" or "standard". "squared"
+ centers the observation density at the squared latent state. "abs"
+      centers the observation density at the absolute value of the current
+ latent state. "standard" centers the observation density at the current
+ latent state.
+ fixed_observation: If not None, fixes all observations to be a constant.
+ Must be a scalar.
+ prefetch_buffer_size: The size of the prefetch queues to use after reading
+ and processing the raw data.
+ dtype: A string convertible to a tensorflow datatype. The datatype used
+ to represent the states and observations.
+ Returns:
+ observations: A batch of observations represented as a dense Tensor of
+ shape [num_observations, batch_size, state_size]. num_observations is
+ num_timesteps/steps_per_observation.
+ lens: An int Tensor of shape [batch_size] representing the lengths of each
+ sequence in the batch. Will contain num_observations as each entry.
+ Raises:
+ ValueError: Raised if steps_per_observation does not evenly divide
+ num_timesteps.
+ """
+ if steps_per_observation is None:
+ steps_per_observation = num_timesteps
+ if num_timesteps % steps_per_observation != 0:
+ raise ValueError("steps_per_observation must evenly divide num_timesteps.")
+ num_observations = int(num_timesteps / steps_per_observation)
+ def data_generator():
+ """An infinite generator of latents and observations from the model."""
+ transition_std = np.sqrt(transition_variance)
+ observation_std = np.sqrt(observation_variance)
+ while True:
+ states = []
+ observations = []
+      # Sample the initial state z0 (note the scale used here is the
+      # observation standard deviation).
+ states.append(
+ np.random.normal(size=[state_size],
+ scale=observation_std).astype(dtype))
+ # Start the range at 1 because we've already generated z0.
+ # The range ends at num_timesteps+1 because we want to include the
+ # num_timesteps-th step.
+      for t in range(1, num_timesteps+1):
+ if transition_type == ROUND_TRANSITION:
+ loc = np.round(states[-1])
+ elif transition_type == STANDARD_TRANSITION:
+ loc = states[-1]
+ z_t = np.random.normal(size=[state_size], loc=loc, scale=transition_std)
+ states.append(z_t.astype(dtype))
+ if t % steps_per_observation == 0:
+ if fixed_observation is None:
+ if observation_type == SQUARED_OBSERVATION:
+ loc = np.square(states[-1])
+ elif observation_type == ABS_OBSERVATION:
+ loc = np.abs(states[-1])
+ elif observation_type == STANDARD_OBSERVATION:
+ loc = states[-1]
+ x_t = np.random.normal(size=[state_size],
+ loc=loc,
+ scale=observation_std).astype(dtype)
+ else:
+ x_t = np.ones([state_size]) * fixed_observation
+
+ observations.append(x_t)
+ yield states, observations
+
+ dataset = tf.data.Dataset.from_generator(
+ data_generator,
+ output_types=(tf.as_dtype(dtype), tf.as_dtype(dtype)),
+ output_shapes=([num_timesteps+1, state_size],
+ [num_observations, state_size])
+ )
+ dataset = dataset.repeat().batch(batch_size)
+ dataset = dataset.prefetch(prefetch_buffer_size)
+ itr = dataset.make_one_shot_iterator()
+ _, observations = itr.get_next()
+ # Transpose observations from [batch, time, state_size] to
+ # [time, batch, state_size].
+ observations = tf.transpose(observations, perm=[1, 0, 2])
+ lengths = tf.ones([batch_size], dtype=tf.int32) * num_observations
+ return observations, lengths
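+
+# Example usage of create_chain_graph_dataset (illustrative): with a batch of 4
+# trajectories, 10 latent steps, and one observation every 5 steps, the
+# returned observations have shape [2, 4, 1] and every length equals 2.
+#   observations, lengths = create_chain_graph_dataset(
+#       batch_size=4, num_timesteps=10, steps_per_observation=5)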
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/data/datasets_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/data/datasets_test.py"
new file mode 100644
index 00000000..e6bbfda6
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/data/datasets_test.py"
@@ -0,0 +1,303 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for fivo.data.datasets."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import pickle
+import os
+
+import numpy as np
+import tensorflow as tf
+
+from fivo.data import datasets
+
+FLAGS = tf.app.flags.FLAGS
+
+
+class DatasetsTest(tf.test.TestCase):
+
+ def test_sparse_pianoroll_to_dense_empty_at_end(self):
+ sparse_pianoroll = [(0, 1), (1, 0), (), (1,), (), ()]
+ dense_pianoroll, num_timesteps = datasets.sparse_pianoroll_to_dense(
+ sparse_pianoroll, min_note=0, num_notes=2)
+ self.assertEqual(num_timesteps, 6)
+ self.assertAllEqual([[1, 1],
+ [1, 1],
+ [0, 0],
+ [0, 1],
+ [0, 0],
+ [0, 0]], dense_pianoroll)
+
+ def test_sparse_pianoroll_to_dense_with_chord(self):
+ sparse_pianoroll = [(0, 1), (1, 0), (), (1,)]
+ dense_pianoroll, num_timesteps = datasets.sparse_pianoroll_to_dense(
+ sparse_pianoroll, min_note=0, num_notes=2)
+ self.assertEqual(num_timesteps, 4)
+ self.assertAllEqual([[1, 1],
+ [1, 1],
+ [0, 0],
+ [0, 1]], dense_pianoroll)
+
+ def test_sparse_pianoroll_to_dense_simple(self):
+ sparse_pianoroll = [(0,), (), (1,)]
+ dense_pianoroll, num_timesteps = datasets.sparse_pianoroll_to_dense(
+ sparse_pianoroll, min_note=0, num_notes=2)
+ self.assertEqual(num_timesteps, 3)
+ self.assertAllEqual([[1, 0],
+ [0, 0],
+ [0, 1]], dense_pianoroll)
+
+ def test_sparse_pianoroll_to_dense_subtracts_min_note(self):
+ sparse_pianoroll = [(4, 5), (5, 4), (), (5,), (), ()]
+ dense_pianoroll, num_timesteps = datasets.sparse_pianoroll_to_dense(
+ sparse_pianoroll, min_note=4, num_notes=2)
+ self.assertEqual(num_timesteps, 6)
+ self.assertAllEqual([[1, 1],
+ [1, 1],
+ [0, 0],
+ [0, 1],
+ [0, 0],
+ [0, 0]], dense_pianoroll)
+
+ def test_sparse_pianoroll_to_dense_uses_num_notes(self):
+ sparse_pianoroll = [(4, 5), (5, 4), (), (5,), (), ()]
+ dense_pianoroll, num_timesteps = datasets.sparse_pianoroll_to_dense(
+ sparse_pianoroll, min_note=4, num_notes=3)
+ self.assertEqual(num_timesteps, 6)
+ self.assertAllEqual([[1, 1, 0],
+ [1, 1, 0],
+ [0, 0, 0],
+ [0, 1, 0],
+ [0, 0, 0],
+ [0, 0, 0]], dense_pianoroll)
+
+ def test_pianoroll_dataset(self):
+ pianoroll_data = [[(0,), (), (1,)],
+ [(0, 1), (1,)],
+ [(1,), (0,), (), (0, 1), (), ()]]
+ pianoroll_mean = np.zeros([3])
+ pianoroll_mean[-1] = 1
+ data = {"train": pianoroll_data, "train_mean": pianoroll_mean}
+ path = os.path.join(tf.test.get_temp_dir(), "test.pkl")
+ pickle.dump(data, open(path, "wb"))
+ with self.test_session() as sess:
+ inputs, targets, lens, mean = datasets.create_pianoroll_dataset(
+ path, "train", 2, num_parallel_calls=1,
+ shuffle=False, repeat=False,
+ min_note=0, max_note=2)
+ i1, t1, l1 = sess.run([inputs, targets, lens])
+ i2, t2, l2 = sess.run([inputs, targets, lens])
+ m = sess.run(mean)
+ # Check the lengths.
+ self.assertAllEqual([3, 2], l1)
+ self.assertAllEqual([6], l2)
+ # Check the mean.
+ self.assertAllEqual(pianoroll_mean, m)
+ # Check the targets. The targets should not be mean-centered and should
+ # be padded with zeros to a common length within a batch.
+ self.assertAllEqual([[1, 0, 0],
+ [0, 0, 0],
+ [0, 1, 0]], t1[:, 0, :])
+ self.assertAllEqual([[1, 1, 0],
+ [0, 1, 0],
+ [0, 0, 0]], t1[:, 1, :])
+ self.assertAllEqual([[0, 1, 0],
+ [1, 0, 0],
+ [0, 0, 0],
+ [1, 1, 0],
+ [0, 0, 0],
+ [0, 0, 0]], t2[:, 0, :])
+ # Check the inputs. Each sequence should start with zeros on the first
+ # timestep. Each sequence should be padded with zeros to a common length
+ # within a batch. The mean should be subtracted from all timesteps except
+ # the first and the padding.
+ self.assertAllEqual([[0, 0, 0],
+ [1, 0, -1],
+ [0, 0, -1]], i1[:, 0, :])
+ self.assertAllEqual([[0, 0, 0],
+ [1, 1, -1],
+ [0, 0, 0]], i1[:, 1, :])
+ self.assertAllEqual([[0, 0, 0],
+ [0, 1, -1],
+ [1, 0, -1],
+ [0, 0, -1],
+ [1, 1, -1],
+ [0, 0, -1]], i2[:, 0, :])
+
+ def test_human_pose_dataset(self):
+ pose_data = [
+ [[0, 0], [2, 2]],
+ [[2, 2]],
+ [[0, 0], [0, 0], [2, 2], [2, 2], [0, 0]],
+ ]
+ pose_data = [np.array(x, dtype=np.float64) for x in pose_data]
+ pose_data_mean = np.array([1, 1], dtype=np.float64)
+ data = {
+ "train": pose_data,
+ "train_mean": pose_data_mean,
+ }
+ path = os.path.join(tf.test.get_temp_dir(), "test_human_pose_dataset.pkl")
+ with open(path, "wb") as out:
+ pickle.dump(data, out)
+ with self.test_session() as sess:
+ inputs, targets, lens, mean = datasets.create_human_pose_dataset(
+ path, "train", 2, num_parallel_calls=1, shuffle=False, repeat=False)
+ i1, t1, l1 = sess.run([inputs, targets, lens])
+ i2, t2, l2 = sess.run([inputs, targets, lens])
+ m = sess.run(mean)
+ # Check the lengths.
+ self.assertAllEqual([2, 1], l1)
+ self.assertAllEqual([5], l2)
+ # Check the mean.
+ self.assertAllEqual(pose_data_mean, m)
+ # Check the targets. The targets should not be mean-centered and should
+ # be padded with zeros to a common length within a batch.
+ self.assertAllEqual([[0, 0], [2, 2]], t1[:, 0, :])
+ self.assertAllEqual([[2, 2], [0, 0]], t1[:, 1, :])
+ self.assertAllEqual([[0, 0], [0, 0], [2, 2], [2, 2], [0, 0]], t2[:, 0, :])
+ # Check the inputs. Each sequence should start with zeros on the first
+ # timestep. Each sequence should be padded with zeros to a common length
+ # within a batch. The mean should be subtracted from all timesteps except
+ # the first and the padding.
+ self.assertAllEqual([[0, 0], [-1, -1]], i1[:, 0, :])
+ self.assertAllEqual([[0, 0], [0, 0]], i1[:, 1, :])
+ self.assertAllEqual([[0, 0], [-1, -1], [-1, -1], [1, 1], [1, 1]],
+ i2[:, 0, :])
+
+ def test_speech_dataset(self):
+ with self.test_session() as sess:
+ path = os.path.join(
+ os.path.dirname(os.path.dirname(os.path.realpath(__file__))),
+ "test_data",
+ "tiny_speech_dataset.tfrecord")
+ inputs, targets, lens = datasets.create_speech_dataset(
+ path, 3, samples_per_timestep=2, num_parallel_calls=1,
+ prefetch_buffer_size=3, shuffle=False, repeat=False)
+ inputs1, targets1, lengths1 = sess.run([inputs, targets, lens])
+ inputs2, targets2, lengths2 = sess.run([inputs, targets, lens])
+ # Check the lengths.
+ self.assertAllEqual([1, 2, 3], lengths1)
+ self.assertAllEqual([4], lengths2)
+ # Check the targets. The targets should be padded with zeros to a common
+ # length within a batch.
+ self.assertAllEqual([[[0., 1.], [0., 1.], [0., 1.]],
+ [[0., 0.], [2., 3.], [2., 3.]],
+ [[0., 0.], [0., 0.], [4., 5.]]],
+ targets1)
+ self.assertAllEqual([[[0., 1.]],
+ [[2., 3.]],
+ [[4., 5.]],
+ [[6., 7.]]],
+ targets2)
+ # Check the inputs. Each sequence should start with zeros on the first
+ # timestep. Each sequence should be padded with zeros to a common length
+ # within a batch.
+ self.assertAllEqual([[[0., 0.], [0., 0.], [0., 0.]],
+ [[0., 0.], [0., 1.], [0., 1.]],
+ [[0., 0.], [0., 0.], [2., 3.]]],
+ inputs1)
+ self.assertAllEqual([[[0., 0.]],
+ [[0., 1.]],
+ [[2., 3.]],
+ [[4., 5.]]],
+ inputs2)
+
+ def test_chain_graph_raises_error_on_wrong_steps_per_observation(self):
+ with self.assertRaises(ValueError):
+ datasets.create_chain_graph_dataset(
+ batch_size=4,
+ num_timesteps=10,
+ steps_per_observation=9)
+
+ def test_chain_graph_single_obs(self):
+ with self.test_session() as sess:
+ np.random.seed(1234)
+ num_observations = 1
+ num_timesteps = 5
+ batch_size = 2
+ state_size = 1
+ observations, lengths = datasets.create_chain_graph_dataset(
+ batch_size=batch_size,
+ num_timesteps=num_timesteps,
+ state_size=state_size)
+ out_observations, out_lengths = sess.run([observations, lengths])
+ self.assertAllEqual([num_observations, num_observations], out_lengths)
+ self.assertAllClose(
+ [[[1.426677], [-1.789461]]],
+ out_observations)
+
+ def test_chain_graph_multiple_obs(self):
+ with self.test_session() as sess:
+ np.random.seed(1234)
+ num_observations = 3
+ num_timesteps = 6
+ batch_size = 2
+ state_size = 1
+ observations, lengths = datasets.create_chain_graph_dataset(
+ batch_size=batch_size,
+ num_timesteps=num_timesteps,
+ steps_per_observation=num_timesteps/num_observations,
+ state_size=state_size)
+ out_observations, out_lengths = sess.run([observations, lengths])
+ self.assertAllEqual([num_observations, num_observations], out_lengths)
+ self.assertAllClose(
+ [[[0.40051451], [1.07405114]],
+ [[1.73932898], [3.16880035]],
+ [[-1.98377144], [2.82669163]]],
+ out_observations)
+
+ def test_chain_graph_state_dims(self):
+ with self.test_session() as sess:
+ np.random.seed(1234)
+ num_observations = 1
+ num_timesteps = 5
+ batch_size = 2
+ state_size = 3
+ observations, lengths = datasets.create_chain_graph_dataset(
+ batch_size=batch_size,
+ num_timesteps=num_timesteps,
+ state_size=state_size)
+ out_observations, out_lengths = sess.run([observations, lengths])
+ self.assertAllEqual([num_observations, num_observations], out_lengths)
+ self.assertAllClose(
+ [[[1.052287, -4.560759, 3.07988],
+ [2.008926, 0.495567, 3.488678]]],
+ out_observations)
+
+ def test_chain_graph_fixed_obs(self):
+ with self.test_session() as sess:
+ np.random.seed(1234)
+ num_observations = 3
+ num_timesteps = 6
+ batch_size = 2
+ state_size = 1
+ observations, lengths = datasets.create_chain_graph_dataset(
+ batch_size=batch_size,
+ num_timesteps=num_timesteps,
+ steps_per_observation=num_timesteps/num_observations,
+ state_size=state_size,
+ fixed_observation=4.)
+ out_observations, out_lengths = sess.run([observations, lengths])
+ self.assertAllEqual([num_observations, num_observations], out_lengths)
+ self.assertAllClose(
+ np.ones([num_observations, batch_size, state_size]) * 4.,
+ out_observations)
+
+if __name__ == "__main__":
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/ghmm_runners.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/ghmm_runners.py"
new file mode 100644
index 00000000..1f1ba6d4
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/ghmm_runners.py"
@@ -0,0 +1,235 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Creates and runs Gaussian HMM-related graphs."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+
+import numpy as np
+import tensorflow as tf
+
+from fivo import smc
+from fivo import bounds
+from fivo.data import datasets
+from fivo.models import ghmm
+
+
+def run_train(config):
+ """Runs training for a Gaussian HMM setup."""
+
+ def create_logging_hook(step, bound_value, likelihood, bound_gap):
+ """Creates a logging hook that prints the bound value periodically."""
+ bound_label = config.bound + "/t"
+ def summary_formatter(log_dict):
+ string = ("Step {step}, %s: {value:.3f}, "
+ "likelihood: {ll:.3f}, gap: {gap:.3e}") % bound_label
+ return string.format(**log_dict)
+ logging_hook = tf.train.LoggingTensorHook(
+ {"step": step, "value": bound_value,
+ "ll": likelihood, "gap": bound_gap},
+ every_n_iter=config.summarize_every,
+ formatter=summary_formatter)
+ return logging_hook
+
+ def create_losses(model, observations, lengths):
+ """Creates the loss to be optimized.
+
+ Args:
+ model: A Trainable GHMM model.
+ observations: A set of observations.
+ lengths: The lengths of each sequence in the observations.
+ Returns:
+ loss: A float Tensor that when differentiated yields the gradients
+ to apply to the model. Should be optimized via gradient descent.
+ bound: A float Tensor containing the value of the bound that is
+ being optimized.
+ true_ll: The true log-likelihood of the data under the model.
+ bound_gap: The gap between the bound and the true log-likelihood.
+ """
+ # Compute lower bounds on the log likelihood.
+ if config.bound == "elbo":
+ ll_per_seq, _, _ = bounds.iwae(
+ model, observations, lengths, num_samples=1,
+ parallel_iterations=config.parallel_iterations
+ )
+ elif config.bound == "iwae":
+ ll_per_seq, _, _ = bounds.iwae(
+ model, observations, lengths, num_samples=config.num_samples,
+ parallel_iterations=config.parallel_iterations
+ )
+ elif config.bound == "fivo":
+ if config.resampling_type == "relaxed":
+ ll_per_seq, _, _, _ = bounds.fivo(
+ model,
+ observations,
+ lengths,
+ num_samples=config.num_samples,
+ resampling_criterion=smc.ess_criterion,
+ resampling_type=config.resampling_type,
+            relaxed_resampling_temperature=(
+                config.relaxed_resampling_temperature),
+ random_seed=config.random_seed,
+ parallel_iterations=config.parallel_iterations)
+ else:
+ ll_per_seq, _, _, _ = bounds.fivo(
+ model, observations, lengths,
+ num_samples=config.num_samples,
+ resampling_criterion=smc.ess_criterion,
+ resampling_type=config.resampling_type,
+ random_seed=config.random_seed,
+ parallel_iterations=config.parallel_iterations
+ )
+ ll_per_t = tf.reduce_mean(ll_per_seq / tf.to_float(lengths))
+ # Compute the data's true likelihood under the model and the bound gap.
+ true_ll_per_seq = model.likelihood(tf.squeeze(observations))
+ true_ll_per_t = tf.reduce_mean(true_ll_per_seq / tf.to_float(lengths))
+ bound_gap = true_ll_per_seq - ll_per_seq
+    bound_gap = tf.reduce_mean(bound_gap / tf.to_float(lengths))
+ tf.summary.scalar("train_ll_bound", ll_per_t)
+ tf.summary.scalar("train_true_ll", true_ll_per_t)
+ tf.summary.scalar("bound_gap", bound_gap)
+ return -ll_per_t, ll_per_t, true_ll_per_t, bound_gap
+
+ def create_graph():
+ """Creates the training graph."""
+ global_step = tf.train.get_or_create_global_step()
+ xs, lengths = datasets.create_chain_graph_dataset(
+ config.batch_size,
+ config.num_timesteps,
+ steps_per_observation=1,
+ state_size=1,
+ transition_variance=config.variance,
+ observation_variance=config.variance)
+ model = ghmm.TrainableGaussianHMM(
+ config.num_timesteps,
+ config.proposal_type,
+ transition_variances=config.variance,
+ emission_variances=config.variance,
+ random_seed=config.random_seed)
+ loss, bound, true_ll, gap = create_losses(model, xs, lengths)
+ opt = tf.train.AdamOptimizer(config.learning_rate)
+ grads = opt.compute_gradients(loss, var_list=tf.trainable_variables())
+ train_op = opt.apply_gradients(grads, global_step=global_step)
+ return bound, true_ll, gap, train_op, global_step
+
+ with tf.Graph().as_default():
+ if config.random_seed:
+ tf.set_random_seed(config.random_seed)
+ np.random.seed(config.random_seed)
+ bound, true_ll, gap, train_op, global_step = create_graph()
+ log_hook = create_logging_hook(global_step, bound, true_ll, gap)
+ with tf.train.MonitoredTrainingSession(
+ master="",
+ hooks=[log_hook],
+ checkpoint_dir=config.logdir,
+ save_checkpoint_secs=120,
+ save_summaries_steps=config.summarize_every,
+ log_step_count_steps=config.summarize_every*20) as sess:
+ cur_step = -1
+ while cur_step <= config.max_steps and not sess.should_stop():
+ _, cur_step = sess.run([train_op, global_step])
+
+
+def run_eval(config):
+ """Evaluates a Gaussian HMM using the given config."""
+
+ def create_bound(model, xs, lengths):
+ """Creates the bound to be evaluated."""
+ if config.bound == "elbo":
+ ll_per_seq, log_weights, _ = bounds.iwae(
+ model, xs, lengths, num_samples=1,
+ parallel_iterations=config.parallel_iterations
+ )
+ elif config.bound == "iwae":
+ ll_per_seq, log_weights, _ = bounds.iwae(
+ model, xs, lengths, num_samples=config.num_samples,
+ parallel_iterations=config.parallel_iterations
+ )
+ elif config.bound == "fivo":
+ ll_per_seq, log_weights, resampled, _ = bounds.fivo(
+ model, xs, lengths,
+ num_samples=config.num_samples,
+ resampling_criterion=smc.ess_criterion,
+ resampling_type=config.resampling_type,
+ random_seed=config.random_seed,
+ parallel_iterations=config.parallel_iterations
+ )
+ # Compute bound scaled by number of timesteps.
+ bound_per_t = ll_per_seq / tf.to_float(lengths)
+ if config.bound == "fivo":
+ return bound_per_t, log_weights, resampled
+ else:
+ return bound_per_t, log_weights
+
+ def create_graph():
+ """Creates the dataset, model, and bound."""
+ xs, lengths = datasets.create_chain_graph_dataset(
+ config.batch_size,
+ config.num_timesteps,
+ steps_per_observation=1,
+ state_size=1,
+ transition_variance=config.variance,
+ observation_variance=config.variance)
+ model = ghmm.TrainableGaussianHMM(
+ config.num_timesteps,
+ config.proposal_type,
+ transition_variances=config.variance,
+ emission_variances=config.variance,
+ random_seed=config.random_seed)
+ true_likelihood = tf.reduce_mean(
+ model.likelihood(tf.squeeze(xs)) / tf.to_float(lengths))
+ outs = [true_likelihood]
+ outs.extend(list(create_bound(model, xs, lengths)))
+ return outs
+
+ with tf.Graph().as_default():
+ if config.random_seed:
+ tf.set_random_seed(config.random_seed)
+ np.random.seed(config.random_seed)
+ graph_outs = create_graph()
+ with tf.train.SingularMonitoredSession(
+ checkpoint_dir=config.logdir) as sess:
+ outs = sess.run(graph_outs)
+ likelihood = outs[0]
+ avg_bound = np.mean(outs[1])
+ std = np.std(outs[1])
+ log_weights = outs[2]
+ log_weight_variances = np.var(log_weights, axis=2)
+ avg_log_weight_variance = np.var(log_weight_variances, axis=1)
+ avg_log_weight = np.mean(log_weights, axis=(1, 2))
+ data = {"mean": avg_bound, "std": std, "log_weights": log_weights,
+ "log_weight_means": avg_log_weight,
+ "log_weight_variances": avg_log_weight_variance}
+ if len(outs) == 4:
+ data["resampled"] = outs[3]
+ data["avg_resampled"] = np.mean(outs[3], axis=1)
+ # Log some useful statistics.
+ tf.logging.info("Evaled bound %s with batch_size: %d, num_samples: %d."
+ % (config.bound, config.batch_size, config.num_samples))
+ tf.logging.info("mean: %f, std: %f" % (avg_bound, std))
+ tf.logging.info("true likelihood: %s" % likelihood)
+ tf.logging.info("avg log weight: %s" % avg_log_weight)
+ tf.logging.info("log weight variance: %s" % avg_log_weight_variance)
+ if len(outs) == 4:
+ tf.logging.info("avg resamples per t: %s" % data["avg_resampled"])
+ if not tf.gfile.Exists(config.logdir):
+ tf.gfile.MakeDirs(config.logdir)
+ with tf.gfile.Open(os.path.join(config.logdir, "out.npz"), "w") as fout:
+ np.save(fout, data)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/ghmm_runners_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/ghmm_runners_test.py"
new file mode 100644
index 00000000..50044ad4
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/ghmm_runners_test.py"
@@ -0,0 +1,106 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for fivo.ghmm_runners."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import numpy as np
+import tensorflow as tf
+
+from fivo import ghmm_runners
+
+
+class GHMMRunnersTest(tf.test.TestCase):
+
+ def default_config(self):
+ class Config(object):
+ pass
+ config = Config()
+ config.model = "ghmm"
+ config.bound = "fivo"
+ config.proposal_type = "prior"
+ config.batch_size = 4
+ config.num_samples = 4
+ config.num_timesteps = 10
+ config.variance = 0.1
+ config.resampling_type = "multinomial"
+ config.random_seed = 1234
+ config.parallel_iterations = 1
+ config.learning_rate = 1e-4
+ config.summarize_every = 1
+ config.max_steps = 1
+ return config
+
+ def test_eval_ghmm_notraining_fivo_prior(self):
+ self.eval_ghmm_notraining("fivo", "prior", -3.063864)
+
+ def test_eval_ghmm_notraining_fivo_true_filtering(self):
+ self.eval_ghmm_notraining("fivo", "true-filtering", -1.1409812)
+
+ def test_eval_ghmm_notraining_fivo_true_smoothing(self):
+ self.eval_ghmm_notraining("fivo", "true-smoothing", -0.85592091)
+
+ def test_eval_ghmm_notraining_iwae_prior(self):
+ self.eval_ghmm_notraining("iwae", "prior", -5.9730167)
+
+ def test_eval_ghmm_notraining_iwae_true_filtering(self):
+ self.eval_ghmm_notraining("iwae", "true-filtering", -1.1485999)
+
+ def test_eval_ghmm_notraining_iwae_true_smoothing(self):
+ self.eval_ghmm_notraining("iwae", "true-smoothing", -0.85592091)
+
+ def eval_ghmm_notraining(self, bound, proposal_type, expected_bound_avg):
+ config = self.default_config()
+ config.proposal_type = proposal_type
+ config.bound = bound
+ config.logdir = os.path.join(
+ tf.test.get_temp_dir(), "test-ghmm-%s-%s" % (proposal_type, bound))
+
+ ghmm_runners.run_eval(config)
+
+ data = np.load(os.path.join(config.logdir, "out.npz")).item()
+ self.assertAlmostEqual(expected_bound_avg, data["mean"], places=3)
+
+ def test_train_ghmm_for_one_step_and_eval_fivo_filtering(self):
+ self.train_ghmm_for_one_step_and_eval("fivo", "filtering", -16.727108)
+
+ def test_train_ghmm_for_one_step_and_eval_fivo_smoothing(self):
+ self.train_ghmm_for_one_step_and_eval("fivo", "smoothing", -19.381277)
+
+ def test_train_ghmm_for_one_step_and_eval_iwae_filtering(self):
+ self.train_ghmm_for_one_step_and_eval("iwae", "filtering", -33.31966)
+
+ def test_train_ghmm_for_one_step_and_eval_iwae_smoothing(self):
+ self.train_ghmm_for_one_step_and_eval("iwae", "smoothing", -46.388447)
+
+ def train_ghmm_for_one_step_and_eval(self, bound, proposal_type, expected_bound_avg):
+ config = self.default_config()
+ config.proposal_type = proposal_type
+ config.bound = bound
+ config.max_steps = 1
+ config.logdir = os.path.join(
+ tf.test.get_temp_dir(), "test-ghmm-training-%s-%s" % (proposal_type, bound))
+ ghmm_runners.run_train(config)
+ ghmm_runners.run_eval(config)
+ data = np.load(os.path.join(config.logdir, "out.npz")).item()
+ self.assertAlmostEqual(expected_bound_avg, data["mean"], places=2)
+
+
+if __name__ == "__main__":
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/models/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/models/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/models/base.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/models/base.py"
new file mode 100644
index 00000000..5ffcb7af
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/models/base.py"
@@ -0,0 +1,342 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Reusable model classes for FIVO."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import sonnet as snt
+import tensorflow as tf
+
+from fivo import nested_utils as nested
+
+tfd = tf.contrib.distributions
+
+
+class ELBOTrainableSequenceModel(object):
+ """An abstract class for ELBO-trainable sequence models to extend.
+
+ Because the ELBO, IWAE, and FIVO bounds all accept the same arguments,
+ any model that is ELBO-trainable is also IWAE- and FIVO-trainable.
+ """
+
+ def zero_state(self, batch_size, dtype):
+ """Returns the initial state of the model as a Tensor or tuple of Tensors.
+
+ Args:
+ batch_size: The batch size.
+ dtype: The datatype to use for the state.
+ """
+ raise NotImplementedError("zero_state not yet implemented.")
+
+ def set_observations(self, observations, seq_lengths):
+ """Sets the observations for the model.
+
+ This method provides the model with all observed variables including both
+ inputs and targets. It will be called before running any computations with
+ the model that require the observations, e.g. training the model or
+ computing bounds, and should be used to run any necessary preprocessing
+ steps.
+
+ Args:
+ observations: A potentially nested set of Tensors containing
+ all observations for the model, both inputs and targets. Typically
+ a set of Tensors with shape [max_seq_len, batch_size, data_size].
+ seq_lengths: A [batch_size] Tensor of ints encoding the length of each
+ sequence in the batch (sequences can be padded to a common length).
+ """
+ self.observations = observations
+ self.max_seq_len = tf.reduce_max(seq_lengths)
+ self.observations_ta = nested.tas_for_tensors(
+ observations, self.max_seq_len, clear_after_read=False)
+ self.seq_lengths = seq_lengths
+
+ def propose_and_weight(self, state, t):
+ """Propogates model state one timestep and computes log weights.
+
+ This method accepts the current state of the model and computes the state
+ for the next timestep as well as the incremental log weight of each
+ element in the batch.
+
+ Args:
+ state: The current state of the model.
+ t: A scalar integer Tensor representing the current timestep.
+ Returns:
+ next_state: The state of the model after one timestep.
+ log_weights: A [batch_size] Tensor containing the incremental log weights.
+ """
+ raise NotImplementedError("propose_and_weight not yet implemented.")
+
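+
+# The class below is an illustrative sketch, not part of the original library:
+# it shows the minimal surface a subclass must implement (zero_state,
+# set_observations, propose_and_weight). The class name and its behaviour are
+# hypothetical; it relies only on the tf and tfd imports above.
+class ExampleFixedNormalModel(ELBOTrainableSequenceModel):
+  """Sketch model scoring each observation under a fixed N(0, 1)."""
+
+  def set_observations(self, observations, seq_lengths):
+    # Keep the raw [max_seq_len, batch_size, data_size] Tensor so this sketch
+    # does not depend on the TensorArray helpers.
+    self.observations = observations
+    self.seq_lengths = seq_lengths
+    self.max_seq_len = tf.reduce_max(seq_lengths)
+
+  def zero_state(self, batch_size, dtype):
+    # No recurrent state is needed; a dummy scalar per batch element suffices.
+    return tf.zeros([batch_size], dtype=dtype)
+
+  def propose_and_weight(self, state, t):
+    obs_t = self.observations[t]  # [batch_size, data_size]
+    log_weights = tf.reduce_sum(
+        tfd.Normal(loc=0., scale=1.).log_prob(obs_t), axis=-1)
+    return state, log_weights
+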
+DEFAULT_INITIALIZERS = {"w": tf.contrib.layers.xavier_initializer(),
+ "b": tf.zeros_initializer()}
+
+
+class ConditionalNormalDistribution(object):
+ """A Normal distribution conditioned on Tensor inputs via a fc network."""
+
+ def __init__(self, size, hidden_layer_sizes, sigma_min=0.0,
+ raw_sigma_bias=0.25, hidden_activation_fn=tf.nn.relu,
+ initializers=None, name="conditional_normal_distribution"):
+ """Creates a conditional Normal distribution.
+
+ Args:
+ size: The dimension of the random variable.
+ hidden_layer_sizes: The sizes of the hidden layers of the fully connected
+ network used to condition the distribution on the inputs.
+ sigma_min: The minimum standard deviation allowed, a scalar.
+ raw_sigma_bias: A scalar that is added to the raw standard deviation
+ output from the fully connected network. Set to 0.25 by default to
+ prevent standard deviations close to 0.
+ hidden_activation_fn: The activation function to use on the hidden layers
+ of the fully connected network.
+      initializers: The variable initializers to use for the fully connected
+ network. The network is implemented using snt.nets.MLP so it must
+ be a dictionary mapping the keys 'w' and 'b' to the initializers for
+ the weights and biases. Defaults to xavier for the weights and zeros
+ for the biases when initializers is None.
+ name: The name of this distribution, used for sonnet scoping.
+ """
+ self.sigma_min = sigma_min
+ self.raw_sigma_bias = raw_sigma_bias
+ self.name = name
+ self.size = size
+ if initializers is None:
+ initializers = DEFAULT_INITIALIZERS
+ self.fcnet = snt.nets.MLP(
+ output_sizes=hidden_layer_sizes + [2*size],
+ activation=hidden_activation_fn,
+ initializers=initializers,
+ activate_final=False,
+ use_bias=True,
+ name=name + "_fcnet")
+
+ def condition(self, tensor_list, **unused_kwargs):
+ """Computes the parameters of a normal distribution based on the inputs."""
+ inputs = tf.concat(tensor_list, axis=1)
+ outs = self.fcnet(inputs)
+ mu, sigma = tf.split(outs, 2, axis=1)
+ sigma = tf.maximum(tf.nn.softplus(sigma + self.raw_sigma_bias),
+ self.sigma_min)
+ return mu, sigma
+
+ def __call__(self, *args, **kwargs):
+ """Creates a normal distribution conditioned on the inputs."""
+ mu, sigma = self.condition(args, **kwargs)
+ return tf.contrib.distributions.Normal(loc=mu, scale=sigma)
+
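+# Example usage of ConditionalNormalDistribution (illustrative): condition a
+# 4-dimensional Normal on two batched inputs through one 16-unit hidden layer.
+#   dist_fn = ConditionalNormalDistribution(size=4, hidden_layer_sizes=[16])
+#   dist = dist_fn(tf.zeros([8, 3]), tf.zeros([8, 5]))
+#   sample = dist.sample()  # shape [8, 4]
+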
+
+class ConditionalBernoulliDistribution(object):
+ """A Bernoulli distribution conditioned on Tensor inputs via a fc net."""
+
+ def __init__(self, size, hidden_layer_sizes, hidden_activation_fn=tf.nn.relu,
+ initializers=None, bias_init=0.0,
+ name="conditional_bernoulli_distribution"):
+ """Creates a conditional Bernoulli distribution.
+
+ Args:
+ size: The dimension of the random variable.
+ hidden_layer_sizes: The sizes of the hidden layers of the fully connected
+ network used to condition the distribution on the inputs.
+ hidden_activation_fn: The activation function to use on the hidden layers
+ of the fully connected network.
+      initializers: The variable initializers to use for the fully connected
+ network. The network is implemented using snt.nets.MLP so it must
+ be a dictionary mapping the keys 'w' and 'b' to the initializers for
+ the weights and biases. Defaults to xavier for the weights and zeros
+ for the biases when initializers is None.
+ bias_init: A scalar or vector Tensor that is added to the output of the
+ fully-connected network that parameterizes the mean of this
+ distribution.
+ name: The name of this distribution, used for sonnet scoping.
+ """
+ self.bias_init = bias_init
+ self.size = size
+ if initializers is None:
+ initializers = DEFAULT_INITIALIZERS
+ self.fcnet = snt.nets.MLP(
+ output_sizes=hidden_layer_sizes + [size],
+ activation=hidden_activation_fn,
+ initializers=initializers,
+ activate_final=False,
+ use_bias=True,
+ name=name + "_fcnet")
+
+ def condition(self, tensor_list):
+ """Computes the p parameter of the Bernoulli distribution."""
+ inputs = tf.concat(tensor_list, axis=1)
+ return self.fcnet(inputs) + self.bias_init
+
+ def __call__(self, *args):
+ p = self.condition(args)
+ return tf.contrib.distributions.Bernoulli(logits=p)
+
+
+class NormalApproximatePosterior(ConditionalNormalDistribution):
+ """A Normally-distributed approx. posterior with res_q parameterization."""
+
+ def __init__(self, size, hidden_layer_sizes, sigma_min=0.0,
+ raw_sigma_bias=0.25, hidden_activation_fn=tf.nn.relu,
+ initializers=None, smoothing=False,
+ name="conditional_normal_distribution"):
+ super(NormalApproximatePosterior, self).__init__(
+ size, hidden_layer_sizes, sigma_min=sigma_min,
+ raw_sigma_bias=raw_sigma_bias,
+ hidden_activation_fn=hidden_activation_fn, initializers=initializers,
+ name=name)
+ self.smoothing = smoothing
+
+ def condition(self, tensor_list, prior_mu, smoothing_tensors=None):
+ """Generates the mean and variance of the normal distribution.
+
+ Args:
+ tensor_list: The list of Tensors to condition on. Will be concatenated and
+ fed through a fully connected network.
+ prior_mu: The mean of the prior distribution associated with this
+ approximate posterior. Will be added to the mean produced by
+ this approximate posterior, in res_q fashion.
+ smoothing_tensors: A list of Tensors. If smoothing is True, these Tensors
+ will be concatenated with the tensors in tensor_list.
+ Returns:
+ mu: The mean of the approximate posterior.
+ sigma: The standard deviation of the approximate posterior.
+ """
+ if self.smoothing:
+ tensor_list.extend(smoothing_tensors)
+ mu, sigma = super(NormalApproximatePosterior, self).condition(tensor_list)
+ return mu + prior_mu, sigma
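+
+# Illustrative sketch of the res_q parameterization (not part of the original
+# file, shown as comments). `h_t`, `x_t`, and `p_zt` are hypothetical
+# conditioning Tensors and a hypothetical prior distribution.
+#
+#   q_fn = NormalApproximatePosterior(size=4, hidden_layer_sizes=[32])
+#   mu, sigma = q_fn.condition([h_t, x_t], prior_mu=p_zt.mean())
+#   # mu is the prior mean plus a residual predicted by the fully connected
+#   # network, so the network only has to learn a correction to the prior.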
+
+
+class NonstationaryLinearDistribution(object):
+ """A set of loc-scale distributions that are linear functions of inputs.
+
+ This class defines a series of location-scale distributions such that
+ the means are learnable linear functions of the inputs and the log variances
+ are learnable constants. The functions and log variances are different across
+ timesteps, allowing the distributions to be nonstationary.
+ """
+
+ def __init__(self,
+ num_timesteps,
+ inputs_per_timestep=None,
+ outputs_per_timestep=None,
+ initializers=None,
+ variance_min=0.0,
+ output_distribution=tfd.Normal,
+ dtype=tf.float32):
+ """Creates a NonstationaryLinearDistribution.
+
+ Args:
+ num_timesteps: The number of timesteps, i.e. the number of distributions.
+ inputs_per_timestep: A list of python ints, the dimension of inputs to the
+ linear function at each timestep. If not provided, the dimension at each
+ timestep is assumed to be 1.
+ outputs_per_timestep: A list of python ints, the dimension of the output
+ distribution at each timestep. If not provided, the dimension at each
+ timestep is assumed to be 1.
+      initializers: A dictionary containing initializers for the variables. The
+ initializer under the key 'w' is used for the weights in the linear
+ function and the initializer under the key 'b' is used for the biases.
+ Defaults to xavier initialization for the weights and zeros for the
+ biases.
+ variance_min: Python float, the minimum variance of each distribution.
+      output_distribution: A location-scale subclass of tfd.Distribution that
+ defines the output distribution, e.g. Normal.
+ dtype: The dtype of the weights and biases.
+ """
+ if not initializers:
+ initializers = DEFAULT_INITIALIZERS
+ if not inputs_per_timestep:
+ inputs_per_timestep = [1] * num_timesteps
+ if not outputs_per_timestep:
+ outputs_per_timestep = [1] * num_timesteps
+ self.num_timesteps = num_timesteps
+ self.variance_min = variance_min
+ self.initializers = initializers
+ self.dtype = dtype
+ self.output_distribution = output_distribution
+
+ def _get_variables_ta(shapes, name, initializer, trainable=True):
+ """Creates a sequence of variables and stores them in a TensorArray."""
+ # Infer shape if all shapes are equal.
+ first_shape = shapes[0]
+ infer_shape = all(shape == first_shape for shape in shapes)
+ ta = tf.TensorArray(
+ dtype=dtype, size=len(shapes), dynamic_size=False,
+ clear_after_read=False, infer_shape=infer_shape)
+ for t, shape in enumerate(shapes):
+ var = tf.get_variable(
+ name % t, shape=shape, initializer=initializer, trainable=trainable)
+ ta = ta.write(t, var)
+ return ta
+
+ bias_shapes = [[num_outputs] for num_outputs in outputs_per_timestep]
+ self.log_variances = _get_variables_ta(
+ bias_shapes, "proposal_log_variance_%d", initializers["b"])
+ self.mean_biases = _get_variables_ta(
+ bias_shapes, "proposal_b_%d", initializers["b"])
+    # Materialize as a list because the shapes are iterated more than once.
+    weight_shapes = list(zip(inputs_per_timestep, outputs_per_timestep))
+ self.mean_weights = _get_variables_ta(
+ weight_shapes, "proposal_w_%d", initializers["w"])
+ self.shapes = tf.TensorArray(
+ dtype=tf.int32, size=num_timesteps,
+ dynamic_size=False, clear_after_read=False).unstack(weight_shapes)
+
+ def __call__(self, t, inputs):
+ """Computes the distribution at timestep t.
+
+ Args:
+ t: Scalar integer Tensor, the current timestep. Must be in
+ [0, num_timesteps).
+      inputs: The inputs to the linear function parameterizing the mean of
+        the current distribution. A Tensor of shape [num_inputs_t, batch_size].
+ Returns:
+ A tfd.Distribution subclass representing the distribution at timestep t.
+ """
+ b = self.mean_biases.read(t)
+ w = self.mean_weights.read(t)
+ shape = self.shapes.read(t)
+ w = tf.reshape(w, shape)
+ b = tf.reshape(b, [shape[1], 1])
+ log_variance = self.log_variances.read(t)
+ scale = tf.sqrt(tf.maximum(tf.exp(log_variance), self.variance_min))
+ loc = tf.matmul(w, inputs, transpose_a=True) + b
+ return self.output_distribution(loc=loc, scale=scale)
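+
+# Illustrative sketch (not part of the original file, shown as comments).
+# A hypothetical 3-timestep proposal where timestep 0 conditions on one input
+# and later timesteps condition on two; `prev_state` and `cur_obs` are
+# hypothetical [batch_size] Tensors.
+#
+#   proposal = NonstationaryLinearDistribution(
+#       num_timesteps=3, inputs_per_timestep=[1, 2, 2])
+#   # Inputs are stacked along axis 0, giving shape [num_inputs_t, batch_size].
+#   inputs_t1 = tf.stack([prev_state, cur_obs], axis=0)
+#   q_z1 = proposal(1, inputs_t1)  # loc-scale distribution over z_1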
+
+
+def encode_all(inputs, encoder):
+ """Encodes a timeseries of inputs with a time independent encoder.
+
+ Args:
+ inputs: A [time, batch, feature_dimensions] tensor.
+ encoder: A network that takes a [batch, features_dimensions] input and
+ encodes the input.
+ Returns:
+ A [time, batch, encoded_feature_dimensions] output tensor.
+ """
+ input_shape = tf.shape(inputs)
+ num_timesteps, batch_size = input_shape[0], input_shape[1]
+ reshaped_inputs = tf.reshape(inputs, [-1, inputs.shape[-1]])
+ inputs_encoded = encoder(reshaped_inputs)
+ inputs_encoded = tf.reshape(inputs_encoded,
+ [num_timesteps, batch_size, encoder.output_size])
+ return inputs_encoded
+
+
+def ta_for_tensor(x, **kwargs):
+ """Creates a TensorArray for the input tensor."""
+ return tf.TensorArray(
+ x.dtype, tf.shape(x)[0], dynamic_size=False, **kwargs).unstack(x)
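+
+
+# Illustrative sketch (not part of the original file, shown as comments): the
+# two helpers above are typically combined to encode a [time, batch, features]
+# sequence once and then read per-timestep slices inside a tf.while_loop.
+# `sequence` and `my_encoder` are hypothetical.
+#
+#   encoded = encode_all(sequence, my_encoder)           # [time, batch, enc_dim]
+#   encoded_ta = ta_for_tensor(encoded, clear_after_read=False)
+#   x_t = encoded_ta.read(t)                             # [batch, enc_dim] slice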
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/models/ghmm.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/models/ghmm.py"
new file mode 100644
index 00000000..07cf6c50
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/models/ghmm.py"
@@ -0,0 +1,483 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""A Gaussian hidden markov model.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from fivo.models import base
+
+tfd = tf.contrib.distributions
+
+
+class GaussianHMM(object):
+ """A hidden markov model with 1-D Gaussian latent space and observations.
+
+ This is a hidden markov model where the state and observations are
+ one-dimensional Gaussians. The mean of each latent state is a linear
+ function of the previous latent state, and the mean of each observation
+ is a linear function of the current latent state.
+
+ The description that follows is 0-indexed instead of 1-indexed to make
+ it easier to reason about the parameters passed to the model.
+
+ The parameters of the model are:
+    T: The number of timesteps, latent states, and observations.
+ vz_t, t=0 to T-1: The variance of the latent state at timestep t.
+ vx_t, t=0 to T-1: The variance of the observation at timestep t.
+ wz_t, t=1 to T-1: The weight that defines the latent transition at t.
+ wx_t, t=0 to T-1: The weight that defines the observation function at t.
+
+ There are T vz_t, vx_t, and wx_t but only T-1 wz_t because there are only
+ T-1 transitions in the model.
+
+ Given these parameters, sampling from the model is defined as
+
+ z_0 ~ N(0, vz_0)
+ x_0 | z_0 ~ N(wx_0 * z_0, vx_0)
+ z_1 | z_0 ~ N(wz_1 * z_0, vz_1)
+ x_1 | z_1 ~ N(wx_1 * z_1, vx_1)
+ ...
+ z_{T-1} | z_{T-2} ~ N(wz_{T-1} * z_{T-2}, vz_{T-1})
+ x_{T-1} | z_{T-1} ~ N(wx_{T-1} * z_{T-1}, vx_{T-1}).
+ """
+
+ def __init__(self,
+ num_timesteps,
+ transition_variances=1.,
+ emission_variances=1.,
+ transition_weights=1.,
+ emission_weights=1.,
+ dtype=tf.float32):
+ """Creates a gaussian hidden markov model.
+
+ Args:
+ num_timesteps: A python int, the number of timesteps in the model.
+ transition_variances: The variance of p(z_t | z_t-1). Can be a scalar,
+ setting all variances to be the same, or a Tensor of shape
+ [num_timesteps].
+ emission_variances: The variance of p(x_t | z_t). Can be a scalar,
+ setting all variances to be the same, or a Tensor of shape
+ [num_timesteps].
+ transition_weights: The weight that defines the linear function that
+ produces the mean of z_t given z_{t-1}. Can be a scalar, setting
+ all weights to be the same, or a Tensor of shape [num_timesteps-1].
+ emission_weights: The weight that defines the linear function that
+ produces the mean of x_t given z_t. Can be a scalar, setting
+ all weights to be the same, or a Tensor of shape [num_timesteps].
+ dtype: The datatype of the state.
+ """
+ self.num_timesteps = num_timesteps
+ self.dtype = dtype
+
+ def _expand_param(param, size):
+ param = tf.convert_to_tensor(param, dtype=self.dtype)
+ if not param.get_shape().as_list():
+ param = tf.tile(param[tf.newaxis], [size])
+
+ return param
+
+ def _ta_for_param(param):
+ size = tf.shape(param)[0]
+ ta = tf.TensorArray(dtype=param.dtype,
+ size=size,
+ dynamic_size=False,
+ clear_after_read=False).unstack(param)
+ return ta
+
+ self.transition_variances = _ta_for_param(
+ _expand_param(transition_variances, num_timesteps))
+ self.transition_weights = _ta_for_param(
+ _expand_param(transition_weights, num_timesteps-1))
+ em_var = _expand_param(emission_variances, num_timesteps)
+ self.emission_variances = _ta_for_param(em_var)
+ em_w = _expand_param(emission_weights, num_timesteps)
+ self.emission_weights = _ta_for_param(em_w)
+ self._compute_covariances(em_w, em_var)
+
+ def _compute_covariances(self, emission_weights, emission_variances):
+ """Compute all covariance matrices.
+
+    Computes the covariance matrix for the latent variables, the observations,
+ and the covariance between the latents and observations.
+
+ Args:
+ emission_weights: A Tensor of shape [num_timesteps] containing
+ the emission distribution weights at each timestep.
+ emission_variances: A Tensor of shape [num_timesteps] containing
+        the emission distribution variances at each timestep.
+ """
+ # Compute the marginal variance of each latent.
+ z_variances = [self.transition_variances.read(0)]
+ for i in range(1, self.num_timesteps):
+ z_variances.append(
+ z_variances[i-1] * tf.square(self.transition_weights.read(i-1)) +
+ self.transition_variances.read(i))
+ # Compute the latent covariance matrix.
+ sigma_z = []
+ for i in range(self.num_timesteps):
+ sigma_z_row = []
+ for j in range(self.num_timesteps):
+ if i == j:
+ sigma_z_row.append(z_variances[i])
+ continue
+ min_ind = min(i, j)
+ max_ind = max(i, j)
+ weight = tf.reduce_prod(
+ self.transition_weights.gather(tf.range(min_ind, max_ind)))
+ sigma_z_row.append(z_variances[min_ind] * weight)
+ sigma_z.append(tf.stack(sigma_z_row))
+ self.sigma_z = tf.stack(sigma_z)
+ # Compute the observation covariance matrix.
+ x_weights_outer = tf.einsum("i,j->ij", emission_weights, emission_weights)
+ self.sigma_x = x_weights_outer * self.sigma_z + tf.diag(emission_variances)
+ # Compute the latent - observation covariance matrix.
+    # The first axis will index latents, the second axis will index observations.
+ self.sigma_zx = emission_weights[tf.newaxis, :] * self.sigma_z
+ self.obs_dist = tfd.MultivariateNormalFullCovariance(
+ loc=tf.zeros([self.num_timesteps], dtype=tf.float32),
+ covariance_matrix=self.sigma_x)
+
+ def transition(self, t, z_prev):
+ """Compute the transition distribution p(z_t | z_t-1).
+
+ Args:
+ t: The current timestep, a scalar integer Tensor. When t=0 z_prev is
+ mostly ignored and the distribution p(z_0) is returned. z_prev is
+ 'mostly' ignored because it is still used to derive batch_size.
+ z_prev: A [batch_size] set of states.
+ Returns:
+ p(z_t | z_t-1) as a univariate normal distribution.
+ """
+ batch_size = tf.shape(z_prev)[0]
+ scale = tf.sqrt(self.transition_variances.read(t))
+ scale = tf.tile(scale[tf.newaxis], [batch_size])
+ loc = tf.cond(tf.greater(t, 0),
+ lambda: self.transition_weights.read(t-1)*z_prev,
+ lambda: tf.zeros_like(scale))
+ return tfd.Normal(loc=loc, scale=scale)
+
+ def emission(self, t, z):
+ """Compute the emission distribution p(x_t | z_t).
+
+ Args:
+ t: The current timestep, a scalar integer Tensor.
+ z: A [batch_size] set of the current states.
+ Returns:
+ p(x_t | z_t) as a univariate normal distribution.
+ """
+ batch_size = tf.shape(z)[0]
+ scale = tf.sqrt(self.emission_variances.read(t))
+ scale = tf.tile(scale[tf.newaxis], [batch_size])
+ loc = self.emission_weights.read(t)*z
+ return tfd.Normal(loc=loc, scale=scale)
+
+ def filtering(self, t, z_prev, x_cur):
+ """Computes the filtering distribution p(z_t | z_{t-1}, x_t).
+
+ Args:
+ t: A python int, the index for z_t. When t is 0, z_prev is ignored,
+ giving p(z_0 | x_0).
+ z_prev: z_{t-1}, the previous z to condition on. A Tensor of shape
+ [batch_size].
+ x_cur: x_t, the current x to condition on. A Tensor of shape [batch_size].
+ Returns:
+ p(z_t | z_{t-1}, x_t) as a univariate normal distribution.
+ """
+ z_prev = tf.convert_to_tensor(z_prev)
+ x_cur = tf.convert_to_tensor(x_cur)
+ batch_size = tf.shape(z_prev)[0]
+ z_var = self.transition_variances.read(t)
+ x_var = self.emission_variances.read(t)
+ x_weight = self.emission_weights.read(t)
+ prev_state_weight = x_var/(tf.square(x_weight)*z_var + x_var)
+ prev_state_weight *= tf.cond(tf.greater(t, 0),
+ lambda: self.transition_weights.read(t-1),
+ lambda: tf.zeros_like(prev_state_weight))
+ cur_obs_weight = (x_weight*z_var)/(tf.square(x_weight)*z_var + x_var)
+ loc = prev_state_weight*z_prev + cur_obs_weight*x_cur
+ scale = tf.sqrt((z_var*x_var)/(tf.square(x_weight)*z_var + x_var))
+ scale = tf.tile(scale[tf.newaxis], [batch_size])
+ return tfd.Normal(loc=loc, scale=scale)
+
+ def smoothing(self, t, z_prev, xs):
+ """Computes the smoothing distribution p(z_t | z_{t-1}, x_{t:num_timesteps).
+
+ Args:
+ t: A python int, the index for z_t. When t is 0, z_prev is ignored,
+ giving p(z_0 | x_{0:num_timesteps-1}).
+ z_prev: z_{t-1}, the previous z to condition on. A Tensor of shape
+ [batch_size].
+ xs: x_{t:num_timesteps}, the future xs to condition on. A Tensor of shape
+ [num_timesteps - t, batch_size].
+ Returns:
+ p(z_t | z_{t-1}, x_{t:num_timesteps}) as a univariate normal distribution.
+ """
+ xs = tf.convert_to_tensor(xs)
+ z_prev = tf.convert_to_tensor(z_prev)
+ batch_size = tf.shape(xs)[1]
+ mess_mean, mess_prec = tf.cond(
+ tf.less(t, self.num_timesteps-1),
+ lambda: tf.unstack(self._compute_backwards_messages(xs[1:]).read(0)),
+ lambda: [tf.zeros([batch_size]), tf.zeros([batch_size])])
+ return self._smoothing_from_message(t, z_prev, xs[0], mess_mean, mess_prec)
+
+ def _smoothing_from_message(self, t, z_prev, x_t, mess_mean, mess_prec):
+ """Computes the smoothing distribution given message incoming to z_t.
+
+ Computes p(z_t | z_{t-1}, x_{t:num_timesteps}) given the message incoming
+ to the node for z_t.
+
+ Args:
+ t: A python int, the index for z_t. When t is 0, z_prev is ignored.
+ z_prev: z_{t-1}, the previous z to condition on. A Tensor of shape
+ [batch_size].
+ x_t: The observation x at timestep t.
+ mess_mean: The mean of the message incoming to z_t, in information form.
+ mess_prec: The precision of the message incoming to z_t.
+ Returns:
+ p(z_t | z_{t-1}, x_{t:num_timesteps}) as a univariate normal distribution.
+ """
+
+ batch_size = tf.shape(x_t)[0]
+ z_var = self.transition_variances.read(t)
+ x_var = self.emission_variances.read(t)
+ w_x = self.emission_weights.read(t)
+
+ def transition_term():
+ return (tf.square(self.transition_weights.read(t))/
+ self.transition_variances.read(t+1))
+
+ prec = 1./z_var + tf.square(w_x)/x_var + mess_prec
+ prec += tf.cond(tf.less(t, self.num_timesteps-1),
+ transition_term, lambda: 0.)
+ mean = x_t*(w_x/x_var) + mess_mean
+ mean += tf.cond(tf.greater(t, 0),
+ lambda: z_prev*(self.transition_weights.read(t-1)/z_var),
+ lambda: 0.)
+ mean = tf.reshape(mean / prec, [batch_size])
+ scale = tf.reshape(tf.sqrt(1./prec), [batch_size])
+ return tfd.Normal(loc=mean, scale=scale)
+
+ def _compute_backwards_messages(self, xs):
+ """Computes the backwards messages used in smoothing."""
+ batch_size = tf.shape(xs)[1]
+ num_xs = tf.shape(xs)[0]
+ until_t = self.num_timesteps - num_xs
+ xs = tf.TensorArray(dtype=xs.dtype,
+ size=num_xs,
+ dynamic_size=False,
+ clear_after_read=True).unstack(xs)
+ messages_ta = tf.TensorArray(dtype=xs.dtype,
+ size=num_xs,
+ dynamic_size=False,
+ clear_after_read=False)
+
+ def compute_message(t, prev_mean, prev_prec, messages_ta):
+ """Computes one step of the backwards messages."""
+ z_var = self.transition_variances.read(t)
+ w_z = self.transition_weights.read(t-1)
+ x_var = self.emission_variances.read(t)
+ w_x = self.emission_weights.read(t)
+ cur_x = xs.read(t - until_t)
+
+ # If it isn't the first message, add the terms from the transition.
+ def transition_term():
+ return (tf.square(self.transition_weights.read(t))/
+ self.transition_variances.read(t+1))
+
+ unary_prec = 1/z_var + tf.square(w_x)/x_var
+ unary_prec += tf.cond(tf.less(t, self.num_timesteps-1),
+ transition_term, lambda: 0.)
+
+ unary_mean = (w_x / x_var) * cur_x
+ pairwise_prec = w_z / z_var
+
+ next_prec = -tf.square(pairwise_prec)/(unary_prec + prev_prec)
+ next_mean = (pairwise_prec * (unary_mean + prev_mean) /
+ (unary_prec + prev_prec))
+ next_prec = tf.reshape(next_prec, [batch_size])
+ next_mean = tf.reshape(next_mean, [batch_size])
+ messages_ta = messages_ta.write(t - until_t,
+ tf.stack([next_mean, next_prec]))
+ return t-1, next_mean, next_prec, messages_ta
+
+ def pred(t, *unused_args):
+ return tf.greater_equal(t, until_t)
+
+ init_prec = tf.zeros([batch_size], dtype=xs.dtype)
+ init_mean = tf.zeros([batch_size], dtype=xs.dtype)
+ t0 = tf.constant(self.num_timesteps - 1, dtype=tf.int32)
+
+ outs = tf.while_loop(pred, compute_message,
+ (t0, init_mean, init_prec, messages_ta))
+ messages = outs[-1]
+ return messages
+
+ def lookahead(self, t, z_prev):
+ """Compute the 'lookahead' distribution, p(x_{t:T} | z_{t-1}).
+
+ Args:
+ t: A scalar Tensor int, the current timestep. Must be at least 1.
+ z_prev: The latent state at time t-1. A Tensor of shape [batch_size].
+ Returns:
+ p(x_{t:T} | z_{t-1}) as a multivariate normal distribution.
+ """
+ z_prev = tf.convert_to_tensor(z_prev)
+ sigma_zx = self.sigma_zx[t-1, t:]
+ z_var = self.sigma_z[t-1, t-1]
+ mean = tf.einsum("i,j->ij", z_prev, sigma_zx) / z_var
+ variance = (self.sigma_x[t:, t:] -
+ tf.einsum("i,j->ij", sigma_zx, sigma_zx) / z_var)
+ return tfd.MultivariateNormalFullCovariance(
+ loc=mean, covariance_matrix=variance)
+
+ def likelihood(self, xs):
+ """Compute the true marginal likelihood of the data.
+
+ Args:
+ xs: The observations, a [num_timesteps, batch_size] float Tensor.
+ Returns:
+      likelihoods: A [batch_size] float Tensor containing the log-likelihood of
+        each sequence of observations in the batch.
+ """
+ return self.obs_dist.log_prob(tf.transpose(xs))
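+
+# Illustrative usage sketch for GaussianHMM (not part of the original file,
+# shown as comments). The constants are made up; `z_prev`, `z1`, and `xs` are
+# hypothetical Tensors of shapes [batch_size], [batch_size], and
+# [num_timesteps, batch_size] respectively.
+#
+#   ghmm = GaussianHMM(3, transition_variances=[1., 2., 3.])
+#   p_z1 = ghmm.transition(1, z_prev)          # p(z_1 | z_0)
+#   p_x1 = ghmm.emission(1, z1)                # p(x_1 | z_1)
+#   post = ghmm.filtering(1, z_prev, xs[1])    # p(z_1 | z_0, x_1)
+#   log_p = ghmm.likelihood(xs)                # [batch_size] log-likelihoods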
+
+
+class TrainableGaussianHMM(GaussianHMM, base.ELBOTrainableSequenceModel):
+ """An interface between importance-sampling training methods and the GHMM."""
+
+ def __init__(self,
+ num_timesteps,
+ proposal_type,
+ transition_variances=1.,
+ emission_variances=1.,
+ transition_weights=1.,
+ emission_weights=1.,
+ random_seed=None,
+ dtype=tf.float32):
+ """Constructs a trainable Gaussian HMM.
+
+ Args:
+ num_timesteps: A python int, the number of timesteps in the model.
+ proposal_type: The type of proposal to use in the importance sampling
+ setup. Could be "filtering", "smoothing", "prior", "true-filtering",
+ or "true-smoothing". If "true-filtering" or "true-smoothing" are
+ selected, then the true filtering or smoothing distributions are used to
+        propose new states. If "filtering" is selected then a
+        distribution with learnable parameters is used. Specifically, at each
+        timestep the proposal is Gaussian with a mean that is a learnable linear
+        function of the previous state and current observation. The log variance
+        is a per-timestep learnable constant. "smoothing" is similar,
+ but the mean is a learnable linear function of the previous state and
+ all future observations. Note that this proposal class includes the true
+ posterior. If "prior" is selected then states are proposed from the
+ model's prior.
+ transition_variances: The variance of p(z_t | z_t-1). Can be a scalar,
+ setting all variances to be the same, or a Tensor of shape
+ [num_timesteps].
+ emission_variances: The variance of p(x_t | z_t). Can be a scalar,
+ setting all variances to be the same, or a Tensor of shape
+ [num_timesteps].
+ transition_weights: The weight that defines the linear function that
+ produces the mean of z_t given z_{t-1}. Can be a scalar, setting
+ all weights to be the same, or a Tensor of shape [num_timesteps-1].
+ emission_weights: The weight that defines the linear function that
+ produces the mean of x_t given z_t. Can be a scalar, setting
+ all weights to be the same, or a Tensor of shape [num_timesteps].
+ random_seed: A seed for the proposal sampling, mainly useful for testing.
+ dtype: The datatype of the state.
+ """
+ super(TrainableGaussianHMM, self).__init__(
+ num_timesteps, transition_variances, emission_variances,
+ transition_weights, emission_weights, dtype=dtype)
+ self.random_seed = random_seed
+ assert proposal_type in ["filtering", "smoothing", "prior",
+ "true-filtering", "true-smoothing"]
+ if proposal_type == "true-filtering":
+ self.proposal = self._filtering_proposal
+ elif proposal_type == "true-smoothing":
+ self.proposal = self._smoothing_proposal
+ elif proposal_type == "prior":
+ self.proposal = self.transition
+ elif proposal_type == "filtering":
+ self._learned_proposal_fn = base.NonstationaryLinearDistribution(
+ num_timesteps, inputs_per_timestep=[1] + [2] * (num_timesteps-1))
+ self.proposal = self._learned_filtering_proposal
+ elif proposal_type == "smoothing":
+ inputs_per_timestep = [num_timesteps] + [num_timesteps - t
+ for t in range(num_timesteps-1)]
+ self._learned_proposal_fn = base.NonstationaryLinearDistribution(
+ num_timesteps, inputs_per_timestep=inputs_per_timestep)
+ self.proposal = self._learned_smoothing_proposal
+
+ def set_observations(self, xs, seq_lengths):
+ """Sets the observations and stores the backwards messages."""
+ # Squeeze out data dimension since everything is 1-d.
+ xs = tf.squeeze(xs)
+ self.batch_size = tf.shape(xs)[1]
+ super(TrainableGaussianHMM, self).set_observations(xs, seq_lengths)
+ self.messages = self._compute_backwards_messages(xs[1:])
+
+ def zero_state(self, batch_size, dtype):
+ return tf.zeros([batch_size], dtype=dtype)
+
+ def propose_and_weight(self, state, t):
+ """Computes the next state and log weights for the GHMM."""
+ state_shape = tf.shape(state)
+ xt = self.observations[t]
+ p_zt = self.transition(t, state)
+ q_zt = self.proposal(t, state)
+ zt = q_zt.sample(seed=self.random_seed)
+ zt = tf.reshape(zt, state_shape)
+ p_xt_given_zt = self.emission(t, zt)
+ log_p_zt = p_zt.log_prob(zt)
+ log_q_zt = q_zt.log_prob(zt)
+ log_p_xt_given_zt = p_xt_given_zt.log_prob(xt)
+ weight = log_p_zt + log_p_xt_given_zt - log_q_zt
+ return weight, zt
+
+ def _filtering_proposal(self, t, state):
+ """Uses the stored observations to compute the filtering distribution."""
+ cur_x = self.observations[t]
+ return self.filtering(t, state, cur_x)
+
+ def _smoothing_proposal(self, t, state):
+ """Uses the stored messages to compute the smoothing distribution."""
+ mess_mean, mess_prec = tf.cond(
+ tf.less(t, self.num_timesteps-1),
+ lambda: tf.unstack(self.messages.read(t)),
+ lambda: [tf.zeros([self.batch_size]), tf.zeros([self.batch_size])])
+ return self._smoothing_from_message(t, state, self.observations[t],
+ mess_mean, mess_prec)
+
+ def _learned_filtering_proposal(self, t, state):
+ cur_x = self.observations[t]
+ inputs = tf.cond(tf.greater(t, 0),
+ lambda: tf.stack([state, cur_x], axis=0),
+ lambda: cur_x[tf.newaxis, :])
+ return self._learned_proposal_fn(t, inputs)
+
+ def _learned_smoothing_proposal(self, t, state):
+ xs = self.observations_ta.gather(tf.range(t, self.num_timesteps))
+ inputs = tf.cond(tf.greater(t, 0),
+ lambda: tf.concat([state[tf.newaxis, :], xs], axis=0),
+ lambda: xs)
+ return self._learned_proposal_fn(t, inputs)
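+
+
+# Illustrative sketch (not part of the original file, shown as comments): one
+# step of importance weighting with the trainable GHMM. `xs` is a hypothetical
+# [num_timesteps, batch_size] observation Tensor and `lengths` the matching
+# sequence lengths.
+#
+#   model = TrainableGaussianHMM(3, "filtering")
+#   model.set_observations(xs, lengths)
+#   state = model.zero_state(batch_size, tf.float32)
+#   log_w0, state = model.propose_and_weight(state, 0)  # samples z_0, weights it
+#   log_w1, state = model.propose_and_weight(state, 1)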
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/models/ghmm_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/models/ghmm_test.py"
new file mode 100644
index 00000000..15a03c0c
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/models/ghmm_test.py"
@@ -0,0 +1,313 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for fivo.models.ghmm"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+import tensorflow as tf
+
+from fivo.models.ghmm import GaussianHMM
+from fivo.models.ghmm import TrainableGaussianHMM
+
+
+class GHMMTest(tf.test.TestCase):
+
+ def test_transition_no_weights(self):
+ with self.test_session() as sess:
+ ghmm = GaussianHMM(3,
+ transition_variances=[1., 2., 3.])
+ prev_z = tf.constant([1., 2.], dtype=tf.float32)
+ z0 = ghmm.transition(0, prev_z)
+ z1 = ghmm.transition(1, prev_z)
+ z2 = ghmm.transition(2, prev_z)
+ outs = sess.run([z0.mean(), z0.variance(),
+ z1.mean(), z1.variance(),
+ z2.mean(), z2.variance()])
+ self.assertAllClose(outs, [[0., 0.], [1., 1.],
+ [1., 2.], [2., 2.],
+ [1., 2.], [3., 3.]])
+
+ def test_transition_with_weights(self):
+ with self.test_session() as sess:
+ ghmm = GaussianHMM(3,
+ transition_variances=[1., 2., 3.],
+ transition_weights=[2., 3.])
+ prev_z = tf.constant([1., 2.], dtype=tf.float32)
+ z0 = ghmm.transition(0, prev_z)
+ z1 = ghmm.transition(1, prev_z)
+ z2 = ghmm.transition(2, prev_z)
+ outs = sess.run([z0.mean(), z0.variance(),
+ z1.mean(), z1.variance(),
+ z2.mean(), z2.variance()])
+ self.assertAllClose(outs, [[0., 0.], [1., 1.],
+ [2., 4.], [2., 2.],
+ [3., 6.], [3., 3.]])
+
+ def test_emission_no_weights(self):
+ with self.test_session() as sess:
+ ghmm = GaussianHMM(3, emission_variances=[1., 2., 3.])
+ z = tf.constant([1., 2.], dtype=tf.float32)
+ x0 = ghmm.emission(0, z)
+ x1 = ghmm.emission(1, z)
+ x2 = ghmm.emission(2, z)
+ outs = sess.run([x0.mean(), x0.variance(),
+ x1.mean(), x1.variance(),
+ x2.mean(), x2.variance()])
+ self.assertAllClose(outs, [[1., 2.], [1., 1.],
+ [1., 2.], [2., 2.],
+ [1., 2.], [3., 3.]])
+
+ def test_emission_with_weights(self):
+ with self.test_session() as sess:
+ ghmm = GaussianHMM(3,
+ emission_variances=[1., 2., 3.],
+ emission_weights=[1., 2., 3.])
+ z = tf.constant([1., 2.], dtype=tf.float32)
+ x0 = ghmm.emission(0, z)
+ x1 = ghmm.emission(1, z)
+ x2 = ghmm.emission(2, z)
+ outs = sess.run([x0.mean(), x0.variance(),
+ x1.mean(), x1.variance(),
+ x2.mean(), x2.variance()])
+ self.assertAllClose(outs, [[1., 2.], [1., 1.],
+ [2., 4.], [2., 2.],
+ [3., 6.], [3., 3.]])
+
+ def test_filtering_no_weights(self):
+ with self.test_session() as sess:
+ ghmm = GaussianHMM(3,
+ transition_variances=[1., 2., 3.],
+ emission_variances=[4., 5., 6.])
+ z_prev = tf.constant([1., 2.], dtype=tf.float32)
+ x_cur = tf.constant([3., 4.], dtype=tf.float32)
+ expected_outs = [[[3./5., 4./5.], [4./5., 4./5.]],
+ [[11./7., 18./7.], [10./7., 10./7.]],
+ [[5./3., 8./3.], [2., 2.]]]
+ f_post_0 = ghmm.filtering(0, z_prev, x_cur)
+ f_post_1 = ghmm.filtering(1, z_prev, x_cur)
+ f_post_2 = ghmm.filtering(2, z_prev, x_cur)
+ outs = sess.run([[f_post_0.mean(), f_post_0.variance()],
+ [f_post_1.mean(), f_post_1.variance()],
+ [f_post_2.mean(), f_post_2.variance()]])
+ self.assertAllClose(expected_outs, outs)
+
+ def test_filtering_with_weights(self):
+ with self.test_session() as sess:
+ ghmm = GaussianHMM(3,
+ transition_variances=[1., 2., 3.],
+ emission_variances=[4., 5., 6.],
+ transition_weights=[7., 8.],
+ emission_weights=[9., 10., 11])
+ z_prev = tf.constant([1., 2.], dtype=tf.float32)
+ x_cur = tf.constant([3., 4.], dtype=tf.float32)
+ expected_outs = [[[27./85., 36./85.], [4./85., 4./85.]],
+ [[95./205., 150./205.], [10./205., 10./205.]],
+ [[147./369., 228./369.], [18./369., 18./369.]]]
+ f_post_0 = ghmm.filtering(0, z_prev, x_cur)
+ f_post_1 = ghmm.filtering(1, z_prev, x_cur)
+ f_post_2 = ghmm.filtering(2, z_prev, x_cur)
+ outs = sess.run([[f_post_0.mean(), f_post_0.variance()],
+ [f_post_1.mean(), f_post_1.variance()],
+ [f_post_2.mean(), f_post_2.variance()]])
+ self.assertAllClose(expected_outs, outs)
+
+ def test_smoothing(self):
+ with self.test_session() as sess:
+ ghmm = GaussianHMM(3,
+ transition_variances=[1., 2., 3.],
+ emission_variances=[4., 5., 6.])
+ z_prev = tf.constant([1., 2.], dtype=tf.float32)
+ xs = tf.constant([[1., 2.],
+ [3., 4.],
+ [5., 6.]], dtype=tf.float32)
+ s_post1 = ghmm.smoothing(0, z_prev, xs)
+ outs = sess.run([s_post1.mean(), s_post1.variance()])
+ expected_outs = [[281./421., 410./421.], [292./421., 292./421.]]
+ self.assertAllClose(expected_outs, outs)
+
+ expected_outs = [[149./73., 222./73.], [90./73., 90./73.]]
+ s_post2 = ghmm.smoothing(1, z_prev, xs[1:])
+ outs = sess.run([s_post2.mean(), s_post2.variance()])
+ self.assertAllClose(expected_outs, outs)
+
+ s_post3 = ghmm.smoothing(2, z_prev, xs[2:])
+ outs = sess.run([s_post3.mean(), s_post3.variance()])
+ expected_outs = [[7./3., 10./3.], [2., 2.]]
+ self.assertAllClose(expected_outs, outs)
+
+ def test_smoothing_with_weights(self):
+ with self.test_session() as sess:
+ x_weight = np.array([4, 5, 6, 7], dtype=np.float32)
+ sigma_x = np.array([5, 6, 7, 8], dtype=np.float32)
+ z_weight = np.array([1, 2, 3], dtype=np.float32)
+ sigma_z = np.array([1, 2, 3, 4], dtype=np.float32)
+ z_prev = np.array([1, 2], dtype=np.float32)
+ batch_size = 2
+ xs = np.array([[1, 2], [3, 4], [5, 6], [7, 8]], dtype=np.float32)
+
+ z_cov, x_cov, z_x_cov = self._compute_covariance_matrices(
+ x_weight, z_weight, sigma_x, sigma_z)
+
+ expected_outs = []
+ # Compute mean and variance for z_0 when we don't condition
+ # on previous zs.
+ sigma_12 = z_x_cov[0, :]
+ sigma_12_22 = np.dot(sigma_12, np.linalg.inv(x_cov))
+ mean = np.dot(sigma_12_22, xs)
+ variance = np.squeeze(z_cov[0, 0] - np.dot(sigma_12_22, sigma_12))
+ expected_outs.append([mean, np.tile(variance, [batch_size])])
+
+ # Compute mean and variance for remaining z_ts.
+      for t in range(1, 4):
+ sigma_12 = np.concatenate([[z_cov[t, t - 1]], z_x_cov[t, t:]])
+ sigma_22 = np.vstack((
+ np.hstack((z_cov[t-1, t-1], z_x_cov[t-1, t:])),
+ np.hstack((np.transpose([z_x_cov[t-1, t:]]), x_cov[t:, t:]))
+ ))
+ sigma_12_22 = np.dot(sigma_12, np.linalg.inv(sigma_22))
+ mean = np.dot(sigma_12_22, np.vstack((z_prev, xs[t:])))
+ variance = np.squeeze(z_cov[t, t] - np.dot(sigma_12_22, sigma_12))
+ expected_outs.append([mean, np.tile(variance, [batch_size])])
+
+ ghmm = GaussianHMM(4,
+ transition_variances=sigma_z,
+ emission_variances=sigma_x,
+ transition_weights=z_weight,
+ emission_weights=x_weight)
+ out_dists = [ghmm.smoothing(t, z_prev, xs[t:]) for t in range(0, 4)]
+ outs = [[d.mean(), d.variance()] for d in out_dists]
+ run_outs = sess.run(outs)
+ self.assertAllClose(expected_outs, run_outs)
+
+ def test_covariance_matrices(self):
+ with self.test_session() as sess:
+ x_weight = np.array([4, 5, 6, 7], dtype=np.float32)
+ sigma_x = np.array([5, 6, 7, 8], dtype=np.float32)
+ z_weight = np.array([1, 2, 3], dtype=np.float32)
+ sigma_z = np.array([1, 2, 3, 4], dtype=np.float32)
+
+ z_cov, x_cov, z_x_cov = self._compute_covariance_matrices(
+ x_weight, z_weight, sigma_x, sigma_z)
+
+ ghmm = GaussianHMM(4,
+ transition_variances=sigma_z,
+ emission_variances=sigma_x,
+ transition_weights=z_weight,
+ emission_weights=x_weight)
+ self.assertAllClose(z_cov, sess.run(ghmm.sigma_z))
+ self.assertAllClose(x_cov, sess.run(ghmm.sigma_x))
+ self.assertAllClose(z_x_cov, sess.run(ghmm.sigma_zx))
+
+ def _compute_covariance_matrices(self, x_weight, z_weight, sigma_x, sigma_z):
+ # Create z covariance matrix from the definitions.
+ z_cov = np.zeros([4, 4])
+ z_cov[0, 0] = sigma_z[0]
+ for i in range(1, 4):
+ z_cov[i, i] = (z_cov[i - 1, i - 1] * np.square(z_weight[i - 1]) +
+ sigma_z[i])
+ for i in range(4):
+ for j in range(4):
+ if i == j: continue
+ min_ind = min(i, j)
+ max_ind = max(i, j)
+ weights = np.prod(z_weight[min_ind:max_ind])
+ z_cov[i, j] = z_cov[min_ind, min_ind] * weights
+ # Compute the x covariance matrix and the z-x covariance matrix.
+ x_weights_outer = np.outer(x_weight, x_weight)
+ x_cov = x_weights_outer * z_cov + np.diag(sigma_x)
+ z_x_cov = x_weight * z_cov
+ return z_cov, x_cov, z_x_cov
+
+ def test_lookahead(self):
+ x_weight = np.array([4, 5, 6, 7], dtype=np.float32)
+ sigma_x = np.array([5, 6, 7, 8], dtype=np.float32)
+ z_weight = np.array([1, 2, 3], dtype=np.float32)
+ sigma_z = np.array([1, 2, 3, 4], dtype=np.float32)
+ z_prev = np.array([1, 2], dtype=np.float32)
+
+ with self.test_session() as sess:
+ z_cov, x_cov, z_x_cov = self._compute_covariance_matrices(
+ x_weight, z_weight, sigma_x, sigma_z)
+
+ expected_outs = []
+ for t in range(1, 4):
+ sigma_12 = z_x_cov[t-1, t:]
+ z_var = z_cov[t-1, t-1]
+ mean = np.outer(z_prev, sigma_12/z_var)
+ variance = x_cov[t:, t:] - np.outer(sigma_12, sigma_12)/ z_var
+ expected_outs.append([mean, variance])
+
+ ghmm = GaussianHMM(4,
+ transition_variances=sigma_z,
+ emission_variances=sigma_x,
+ transition_weights=z_weight,
+ emission_weights=x_weight)
+ out_dists = [ghmm.lookahead(t, z_prev) for t in range(1, 4)]
+ outs = [[d.mean(), d.covariance()] for d in out_dists]
+ run_outs = sess.run(outs)
+ self.assertAllClose(expected_outs, run_outs)
+
+
+class TrainableGHMMTest(tf.test.TestCase):
+
+ def test_filtering_proposal(self):
+ """Check that stashing the xs doesn't change the filtering distributions."""
+ with self.test_session() as sess:
+ ghmm = TrainableGaussianHMM(
+ 3, "filtering",
+ transition_variances=[1., 2., 3.],
+ emission_variances=[4., 5., 6.],
+ transition_weights=[7., 8.],
+ emission_weights=[9., 10., 11])
+ observations = tf.constant([[3., 4.],
+ [3., 4.],
+ [3., 4.]], dtype=tf.float32)
+ ghmm.set_observations(observations, [3, 3])
+ z_prev = tf.constant([1., 2.], dtype=tf.float32)
+
+ proposals = [ghmm._filtering_proposal(t, z_prev) for t in range(3)]
+ dist_params = [[p.mean(), p.variance()] for p in proposals]
+
+ expected_outs = [[[27./85., 36./85.], [4./85., 4./85.]],
+ [[95./205., 150./205.], [10./205., 10./205.]],
+ [[147./369., 228./369.], [18./369., 18./369.]]]
+ self.assertAllClose(expected_outs, sess.run(dist_params))
+
+ def test_smoothing_proposal(self):
+ with self.test_session() as sess:
+ ghmm = TrainableGaussianHMM(
+ 3, "smoothing",
+ transition_variances=[1., 2., 3.],
+ emission_variances=[4., 5., 6.])
+ xs = tf.constant([[1., 2.],
+ [3., 4.],
+ [5., 6.]], dtype=tf.float32)
+ ghmm.set_observations(xs, [3, 3])
+ z_prev = tf.constant([1., 2.], dtype=tf.float32)
+
+ proposals = [ghmm._smoothing_proposal(t, z_prev) for t in range(3)]
+ dist_params = [[p.mean(), p.variance()] for p in proposals]
+
+ expected_outs = [[[281./421., 410./421.], [292./421., 292./421.]],
+ [[149./73., 222./73.], [90./73., 90./73.]],
+ [[7./3., 10./3.], [2., 2.]]]
+ self.assertAllClose(expected_outs, sess.run(dist_params))
+
+if __name__ == "__main__":
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/models/srnn.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/models/srnn.py"
new file mode 100644
index 00000000..cdfb560e
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/models/srnn.py"
@@ -0,0 +1,587 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""SRNN classes."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from collections import namedtuple
+import functools
+
+import sonnet as snt
+import tensorflow as tf
+
+from fivo.models import base
+
+
+SRNNState = namedtuple("SRNNState", "rnn_state latent_encoded")
+
+
+class SRNN(object):
+ """Implementation of a Stochastic Recurrent Neural Network (SRNN).
+
+ Introduced in "Sequential Neural Models with Stochastic Layers"
+ by Fraccaro et al. https://arxiv.org/pdf/1605.07571.pdf.
+
+ The SRNN is a sequence model similar to an RNN that uses stochastic latent
+ variables to improve its representational power. It can be thought of as a
+ sequential analogue to the variational auto-encoder (VAE).
+
+ The SRNN has a deterministic RNN as its backbone, represented by the
+ sequence of RNN hidden states h_t. The latent state is conditioned on
+ the deterministic RNN states and previous latent state. Unlike the VRNN, the
+ the RNN state is not conditioned on the previous latent state. The latent
+ states have a Markov structure and it is assumed that
+ p(z_t | z_{1:t-1}) = p(z_t | z_{t-1}).
+
+ In this implementation of the SRNN the latent state z_t is Gaussian. The
+ model's prior over z_t (also called the transition distribution) is
+ distributed as Normal(mu_t, diag(sigma_t^2)) where mu_t and sigma_t are the
+ mean and standard deviation output from a fully connected network that accepts
+ the rnn hidden state h_t and previous latent state z_{t-1} as input.
+
+ The emission distribution p(x_t|z_t, h_t) is conditioned on the latent state
+ z_t as well as the current RNN hidden state h_t via a fully connected network.
+
+ To increase the modeling power of the SRNN, two additional networks are
+ used to extract features from the data and the latent state. Those networks
+ are called data_encoder and latent_encoder respectively.
+
+ For an example of how to call the SRNN's methods see sample_step.
+
+ There are a few differences between this exposition and the paper. The main
+ goal was to be consistent with the VRNN code. A few components are renamed.
+ The backward RNN for approximating the posterior, g_phi_a in the paper, is the
+ rev_rnn_cell. The forward RNN that conditions the latent distribution, d in
+ the paper, is the rnn_cell. The paper doesn't name the NN's that serve as
+ feature extractors, and we name them here as the data_encoder and
+ latent_encoder.
+ """
+
+ def __init__(self,
+ rnn_cell,
+ data_encoder,
+ latent_encoder,
+ transition,
+ emission,
+ random_seed=None):
+ """Create a SRNN.
+
+ Args:
+ rnn_cell: A subclass of tf.nn.rnn_cell.RNNCell that will form the
+ deterministic backbone of the SRNN. The inputs to the RNN will be the
+ the encoded input of the current timestep, a Tensor of shape
+ [batch_size, encoded_data_size].
+ data_encoder: A callable that accepts a batch of data x_t and
+ 'encodes' it, e.g. runs it through a fully connected network. Must
+ accept as argument the inputs x_t, a Tensor of the shape
+ [batch_size, data_size] and return a Tensor of shape
+ [batch_size, encoded_data_size]. This callable will be called multiple
+ times in the SRNN cell so if scoping is not handled correctly then
+ multiple copies of the variables in this network could be made. It is
+ recommended to use a snt.nets.MLP module, which takes care of this for
+ you.
+ latent_encoder: A callable that accepts a latent state z_t and
+ 'encodes' it, e.g. runs it through a fully connected network. Must
+ accept as argument a Tensor of shape [batch_size, latent_size] and
+ return a Tensor of shape [batch_size, encoded_latent_size].
+ This callable must also have the property 'output_size' defined,
+ returning encoded_latent_size.
+ transition: A callable that implements the transition distribution
+ p(z_t|h_t, z_t-1). Must accept as argument the previous RNN hidden state
+ and previous encoded latent state then return a tf.distributions.Normal
+ distribution conditioned on the input.
+ emission: A callable that implements the emission distribution
+ p(x_t|z_t, h_t). Must accept as arguments the encoded latent state
+ and the RNN hidden state and return a subclass of
+ tf.distributions.Distribution that can be used to evaluate the logprob
+ of the targets.
+ random_seed: The seed for the random ops. Sets the seed for sample_step.
+ """
+ self.random_seed = random_seed
+ self.rnn_cell = rnn_cell
+ self.data_encoder = data_encoder
+ self.latent_encoder = latent_encoder
+ self.encoded_z_size = latent_encoder.output_size
+    self.state_size = self.rnn_cell.state_size
+ self._transition = transition
+ self._emission = emission
+
+ def zero_state(self, batch_size, dtype):
+ """The initial state of the SRNN.
+
+    Contains the initial state of the RNN and the initial encoded latent.
+
+ Args:
+ batch_size: The batch size.
+ dtype: The data type of the SRNN.
+ Returns:
+ zero_state: The initial state of the SRNN.
+ """
+ return SRNNState(
+ rnn_state=self.rnn_cell.zero_state(batch_size, dtype),
+ latent_encoded=tf.zeros(
+ [batch_size, self.latent_encoder.output_size], dtype=dtype))
+
+ def run_rnn(self, prev_rnn_state, inputs):
+ """Runs the deterministic RNN for one step.
+
+ Args:
+ prev_rnn_state: The state of the RNN from the previous timestep.
+ inputs: A Tensor of shape [batch_size, data_size], the current inputs to
+ the model. Most often this is x_{t-1}, the previous token in the
+ observation sequence.
+ Returns:
+ rnn_out: The output of the RNN.
+ rnn_state: The new state of the RNN.
+ """
+ rnn_inputs = self.data_encoder(tf.to_float(inputs))
+ rnn_out, rnn_state = self.rnn_cell(rnn_inputs, prev_rnn_state)
+ return rnn_out, rnn_state
+
+ def transition(self, rnn_out, prev_latent_encoded):
+ """Computes the transition distribution p(z_t|h_t, z_{t-1}).
+
+ Note that p(z_t | h_t, z_{t-1}) = p(z_t| z_{t-1}, x_{1:t-1})
+
+ Args:
+ rnn_out: The output of the rnn for the current timestep.
+ prev_latent_encoded: Float Tensor of shape
+ [batch_size, encoded_latent_size], the previous latent state z_{t-1}
+ run through latent_encoder.
+ Returns:
+ p(z_t | h_t): A normal distribution with event shape
+ [batch_size, latent_size].
+ """
+ return self._transition(rnn_out, prev_latent_encoded)
+
+ def emission(self, latent, rnn_out):
+ """Computes the emission distribution p(x_t | z_t, h_t).
+
+ Note that p(x_t | z_t, h_t) = p(x_t | z_t, x_{1:t-1})
+
+ Args:
+ latent: The stochastic latent state z_t.
+ rnn_out: The output of the rnn for the current timestep.
+ Returns:
+ p(x_t | z_t, h_t): A distribution with event shape
+ [batch_size, data_size].
+ latent_encoded: The latent state encoded with latent_encoder. Should be
+ passed to transition() on the next timestep.
+ """
+ latent_encoded = self.latent_encoder(latent)
+ return self._emission(latent_encoded, rnn_out), latent_encoded
+
+ def sample_step(self, prev_state, inputs, unused_t):
+ """Samples one output from the model.
+
+ Args:
+ prev_state: The previous state of the model, a SRNNState containing the
+ previous rnn state and the previous encoded latent.
+ inputs: A Tensor of shape [batch_size, data_size], the current inputs to
+ the model. Most often this is x_{t-1}, the previous token in the
+ observation sequence.
+ unused_t: The current timestep. Not used currently.
+ Returns:
+ new_state: The next state of the model, a SRNNState.
+ xt: A float Tensor of shape [batch_size, data_size], an output sampled
+ from the emission distribution.
+ """
+ rnn_out, rnn_state = self.run_rnn(prev_state.rnn_state,
+ inputs)
+ p_zt = self.transition(rnn_out, prev_state.latent_encoded)
+ zt = p_zt.sample(seed=self.random_seed)
+ p_xt_given_zt, latent_encoded = self.emission(zt, rnn_out)
+ xt = p_xt_given_zt.sample(seed=self.random_seed)
+ new_state = SRNNState(rnn_state=rnn_state, latent_encoded=latent_encoded)
+ return new_state, tf.to_float(xt)
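+
+# Illustrative sketch (not part of the original file, shown as comments):
+# unrolling sample_step to draw a short sequence from an already constructed
+# SRNN. `srnn`, `x0`, `batch_size`, and `num_steps` are hypothetical.
+#
+#   state = srnn.zero_state(batch_size, tf.float32)
+#   xt = x0
+#   samples = []
+#   for t in range(num_steps):
+#     state, xt = srnn.sample_step(state, xt, t)
+#     samples.append(xt)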
+
+# pylint: disable=invalid-name
+# pylint thinks this is a top-level constant.
+TrainableSRNNState = namedtuple("TrainableSRNNState",
+ SRNNState._fields + ("rnn_out",))
+# pylint: enable=invalid-name
+
+
+class TrainableSRNN(SRNN, base.ELBOTrainableSequenceModel):
+ """A SRNN subclass with proposals and methods for training and evaluation.
+
+ This class adds proposals used for training with importance-sampling based
+ methods such as the ELBO. The model can be configured to propose from one
+ of three proposals: a learned filtering proposal, a learned smoothing
+ proposal, or the prior (i.e. the transition distribution).
+
+ As described in the SRNN paper, the learned filtering proposal is
+ parameterized by a fully connected neural network that accepts as input the
+ current target x_t and the current rnn output h_t. The learned smoothing
+ proposal is also given the hidden state of an RNN run in reverse over the
+ inputs, so as to incorporate information about future observations.
+
+ All learned proposals use the 'res_q' parameterization, meaning that instead
+ of directly producing the mean of z_t, the proposal network predicts the
+ 'residual' from the prior's mean. This is explored more in section 3.3 of
+ https://arxiv.org/pdf/1605.07571.pdf.
+
+ During training, the latent state z_t is sampled from the proposal and the
+ reparameterization trick is used to provide low-variance gradients.
+
+ Note that the SRNN paper refers to the proposals as the approximate posterior,
+ but we match the VRNN convention of referring to it as the encoder.
+ """
+
+ def __init__(self,
+ rnn_cell,
+ data_encoder,
+ latent_encoder,
+ transition,
+ emission,
+ proposal_type,
+ proposal=None,
+ rev_rnn_cell=None,
+ tilt=None,
+ random_seed=None):
+ """Create a trainable RNN.
+
+ Args:
+ rnn_cell: A subclass of tf.nn.rnn_cell.RNNCell that will form the
+ deterministic backbone of the SRNN. The inputs to the RNN will be the
+ the encoded input of the current timestep, a Tensor of shape
+ [batch_size, encoded_data_size].
+ data_encoder: A callable that accepts a batch of data x_t and
+ 'encodes' it, e.g. runs it through a fully connected network. Must
+ accept as argument the inputs x_t, a Tensor of the shape
+ [batch_size, data_size] and return a Tensor of shape
+ [batch_size, encoded_data_size]. This callable will be called multiple
+ times in the SRNN cell so if scoping is not handled correctly then
+ multiple copies of the variables in this network could be made. It is
+ recommended to use a snt.nets.MLP module, which takes care of this for
+ you.
+ latent_encoder: A callable that accepts a latent state z_t and
+ 'encodes' it, e.g. runs it through a fully connected network. Must
+ accept as argument a Tensor of shape [batch_size, latent_size] and
+ return a Tensor of shape [batch_size, encoded_latent_size].
+ This callable must also have the property 'output_size' defined,
+ returning encoded_latent_size.
+ transition: A callable that implements the transition distribution
+ p(z_t|h_t, z_t-1). Must accept as argument the previous RNN hidden state
+ and previous encoded latent state then return a tf.distributions.Normal
+ distribution conditioned on the input.
+ emission: A callable that implements the emission distribution
+ p(x_t|z_t, h_t). Must accept as arguments the encoded latent state
+ and the RNN hidden state and return a subclass of
+ tf.distributions.Distribution that can be used to evaluate the logprob
+ of the targets.
+ proposal_type: A string indicating the type of proposal to use. Can
+ be either "filtering", "smoothing", or "prior". When proposal_type is
+ "filtering" or "smoothing", proposal must be provided. When
+ proposal_type is "smoothing", rev_rnn_cell must also be provided.
+ proposal: A callable that implements the proposal q(z_t| h_t, x_{1:T}).
+ If proposal_type is "filtering" then proposal must accept as arguments
+ the current rnn output, the encoded target of the current timestep,
+ and the mean of the prior. If proposal_type is "smoothing" then
+ in addition to the current rnn output and the mean of the prior
+ proposal must accept as arguments the output of the reverse rnn.
+ proposal should return a tf.distributions.Normal distribution
+ conditioned on its inputs. If proposal_type is "prior" this argument is
+ ignored.
+ rev_rnn_cell: A subclass of tf.nn.rnn_cell.RNNCell that will aggregate
+ forward rnn outputs in the reverse direction. The inputs to the RNN
+ will be the encoded reverse input of the current timestep, a Tensor of
+ shape [batch_size, encoded_data_size].
+ tilt: A callable that implements the log of a positive tilting function
+        (ideally approximating log p(x_{t+1}|z_t, h_t)). Must accept as arguments
+ the encoded latent state and the RNN hidden state and return a subclass
+ of tf.distributions.Distribution that can be used to evaluate the
+        logprob of x_{t+1}. If None, no tilt is used.
+ random_seed: The seed for the random ops. Sets the seed for sample_step
+ and __call__.
+ """
+ super(TrainableSRNN, self).__init__(
+ rnn_cell, data_encoder, latent_encoder,
+ transition, emission, random_seed=random_seed)
+ self.rev_rnn_cell = rev_rnn_cell
+ self._tilt = tilt
+ assert proposal_type in ["filtering", "smoothing", "prior"]
+ self._proposal = proposal
+ self.proposal_type = proposal_type
+ if proposal_type != "prior":
+ assert proposal, "If not proposing from the prior, must provide proposal."
+ if proposal_type == "smoothing":
+ assert rev_rnn_cell, "Must provide rev_rnn_cell for smoothing proposal."
+
+ def zero_state(self, batch_size, dtype):
+ super_state = super(TrainableSRNN, self).zero_state(batch_size, dtype)
+ return TrainableSRNNState(
+ rnn_out=tf.zeros([batch_size, self.rnn_cell.output_size], dtype=dtype),
+ **super_state._asdict())
+
+ def set_observations(self, observations, seq_lengths):
+ """Stores the model's observations.
+
+ Stores the observations (inputs and targets) in TensorArrays and precomputes
+ things for later like the reverse RNN output and encoded targets.
+
+ Args:
+ observations: The observations of the model, a tuple containing two
+ Tensors of shape [max_seq_len, batch_size, data_size]. The Tensors
+ should be the inputs and targets, respectively.
+ seq_lengths: An int Tensor of shape [batch_size] containing the length
+ of each sequence in observations.
+ """
+ inputs, targets = observations
+ self.seq_lengths = seq_lengths
+ self.max_seq_len = tf.reduce_max(seq_lengths)
+ self.targets_ta = base.ta_for_tensor(targets, clear_after_read=False)
+ targets_encoded = base.encode_all(targets, self.data_encoder)
+ self.targets_encoded_ta = base.ta_for_tensor(targets_encoded,
+ clear_after_read=False)
+ inputs_encoded = base.encode_all(inputs, self.data_encoder)
+ rnn_out, _ = tf.nn.dynamic_rnn(self.rnn_cell,
+ inputs_encoded,
+ time_major=True,
+ dtype=tf.float32,
+ scope="forward_rnn")
+ self.rnn_ta = base.ta_for_tensor(rnn_out,
+ clear_after_read=False)
+ if self.rev_rnn_cell:
+ targets_and_rnn_out = tf.concat([rnn_out, targets_encoded], 2)
+ reversed_targets_and_rnn_out = tf.reverse_sequence(
+ targets_and_rnn_out, seq_lengths, seq_axis=0, batch_axis=1)
+ # Compute the reverse rnn over the targets.
+ reverse_rnn_out, _ = tf.nn.dynamic_rnn(self.rev_rnn_cell,
+ reversed_targets_and_rnn_out,
+ time_major=True,
+ dtype=tf.float32,
+ scope="reverse_rnn")
+ reverse_rnn_out = tf.reverse_sequence(reverse_rnn_out, seq_lengths,
+ seq_axis=0, batch_axis=1)
+ self.reverse_rnn_ta = base.ta_for_tensor(reverse_rnn_out,
+ clear_after_read=False)
+
+ def _filtering_proposal(self, rnn_out, prev_latent_encoded, prior, t):
+ """Computes the filtering proposal distribution."""
+ return self._proposal(rnn_out,
+ prev_latent_encoded,
+ self.targets_encoded_ta.read(t),
+ prior_mu=prior.mean())
+
+ def _smoothing_proposal(self, rnn_out, prev_latent_encoded, prior, t):
+ """Computes the smoothing proposal distribution."""
+ return self._proposal(rnn_out,
+ prev_latent_encoded,
+ smoothing_tensors=[self.reverse_rnn_ta.read(t)],
+ prior_mu=prior.mean())
+
+ def proposal(self, rnn_out, prev_latent_encoded, prior, t):
+ """Computes the proposal distribution specified by proposal_type.
+
+ Args:
+ rnn_out: The output of the rnn for the current timestep.
+ prev_latent_encoded: Float Tensor of shape
+ [batch_size, encoded_latent_size], the previous latent state z_{t-1}
+ run through latent_encoder.
+ prior: A tf.distributions.Normal distribution representing the prior
+ over z_t, p(z_t | z_{1:t-1}, x_{1:t-1}). Used for 'res_q'.
+ t: A scalar int Tensor, the current timestep.
+ """
+ if self.proposal_type == "filtering":
+ return self._filtering_proposal(rnn_out, prev_latent_encoded, prior, t)
+ elif self.proposal_type == "smoothing":
+ return self._smoothing_proposal(rnn_out, prev_latent_encoded, prior, t)
+ elif self.proposal_type == "prior":
+ return self.transition(rnn_out, prev_latent_encoded)
+
+ def tilt(self, rnn_out, latent_encoded, targets):
+ r_func = self._tilt(rnn_out, latent_encoded)
+ return tf.reduce_sum(r_func.log_prob(targets), axis=-1)
+
+ def propose_and_weight(self, state, t):
+ """Runs the model and computes importance weights for one timestep.
+
+ Runs the model and computes importance weights, sampling from the proposal
+ instead of the transition/prior.
+
+ Args:
+ state: The previous state of the model, a TrainableSRNNState containing
+ the previous rnn state, the previous rnn outs, and the previous encoded
+ latent.
+ t: A scalar integer Tensor, the current timestep.
+ Returns:
+ weights: A float Tensor of shape [batch_size].
+ new_state: The new state of the model.
+ """
+ targets = self.targets_ta.read(t)
+ rnn_out = self.rnn_ta.read(t)
+ p_zt = self.transition(rnn_out, state.latent_encoded)
+ q_zt = self.proposal(rnn_out, state.latent_encoded, p_zt, t)
+ zt = q_zt.sample(seed=self.random_seed)
+ p_xt_given_zt, latent_encoded = self.emission(zt, rnn_out)
+ log_p_xt_given_zt = tf.reduce_sum(p_xt_given_zt.log_prob(targets), axis=-1)
+ log_p_zt = tf.reduce_sum(p_zt.log_prob(zt), axis=-1)
+ log_q_zt = tf.reduce_sum(q_zt.log_prob(zt), axis=-1)
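+    # Incremental importance weight log alpha_t = log p(z_t | ...) +
+    # log p(x_t | z_t, ...) - log q(z_t | ...), one value per batch element.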
+ weights = log_p_zt + log_p_xt_given_zt - log_q_zt
+ if self._tilt:
+ prev_log_r = tf.cond(
+ tf.greater(t, 0),
+ lambda: self.tilt(state.rnn_out, state.latent_encoded, targets),
+ lambda: 0.) # On the first step, prev_log_r = 0.
+ log_r = tf.cond(
+ tf.less(t + 1, self.max_seq_len),
+ lambda: self.tilt(rnn_out, latent_encoded, self.targets_ta.read(t+1)),
+ lambda: 0.)
+ # On the last step, log_r = 0.
+ log_r *= tf.to_float(t < self.seq_lengths - 1)
+ weights += log_r - prev_log_r
+
+ # This reshape is required because the TensorArray reports different shapes
+ # than the initial state provides (where the first dimension is unknown).
+ # The difference breaks the while_loop. Reshape prevents the error.
+ rnn_out = tf.reshape(rnn_out, tf.shape(state.rnn_out))
+
+ new_state = TrainableSRNNState(rnn_out=rnn_out,
+ rnn_state=state.rnn_state, # unmodified
+ latent_encoded=latent_encoded)
+ return weights, new_state
+
+
+_DEFAULT_INITIALIZERS = {"w": tf.contrib.layers.xavier_initializer(),
+ "b": tf.zeros_initializer()}
+
+
+def create_srnn(
+ data_size,
+ latent_size,
+ emission_class,
+ rnn_hidden_size=None,
+ fcnet_hidden_sizes=None,
+ encoded_data_size=None,
+ encoded_latent_size=None,
+ sigma_min=0.0,
+ raw_sigma_bias=0.25,
+ emission_bias_init=0.0,
+ use_tilt=False,
+ proposal_type="filtering",
+ initializers=None,
+ random_seed=None):
+ """A factory method for creating SRNN cells.
+
+ Args:
+ data_size: The dimension of the vectors that make up the data sequences.
+ latent_size: The size of the stochastic latent state of the SRNN.
+ emission_class: The class of the emission distribution. Can be either
+ ConditionalNormalDistribution or ConditionalBernoulliDistribution.
+ rnn_hidden_size: The hidden state dimension of the RNN that forms the
+ deterministic part of this SRNN. If None, then it defaults
+ to latent_size.
+ fcnet_hidden_sizes: A list of python integers, the size of the hidden
+ layers of the fully connected networks that parameterize the conditional
+ distributions of the SRNN. If None, then it defaults to one hidden
+ layer of size latent_size.
+ encoded_data_size: The size of the output of the data encoding network. If
+ None, defaults to latent_size.
+ encoded_latent_size: The size of the output of the latent state encoding
+ network. If None, defaults to latent_size.
+ sigma_min: The minimum value that the standard deviation of the
+ distribution over the latent state can take.
+ raw_sigma_bias: A scalar that is added to the raw standard deviation
+ output from the neural networks that parameterize the prior and
+ approximate posterior. Useful for preventing standard deviations close
+ to zero.
+    emission_bias_init: A bias added to the raw output of the fully
+      connected network that parameterizes the emission distribution. Useful
+      for initializing the mean of the distribution to a sensible starting
+      point such as the mean of the training data. Only used with Bernoulli
+      generative distributions.
+ use_tilt: If true, create a SRNN with a tilting function.
+ proposal_type: The type of proposal to use. Can be "filtering", "smoothing",
+ or "prior".
+    initializers: The variable initializers to use for the fully connected
+ networks and RNN cell. Must be a dictionary mapping the keys 'w' and 'b'
+ to the initializers for the weights and biases. Defaults to xavier for
+ the weights and zeros for the biases when initializers is None.
+ random_seed: A random seed for the SRNN resampling operations.
+ Returns:
+ model: A TrainableSRNN object.
+ """
+ if rnn_hidden_size is None:
+ rnn_hidden_size = latent_size
+ if fcnet_hidden_sizes is None:
+ fcnet_hidden_sizes = [latent_size]
+ if encoded_data_size is None:
+ encoded_data_size = latent_size
+ if encoded_latent_size is None:
+ encoded_latent_size = latent_size
+ if initializers is None:
+ initializers = _DEFAULT_INITIALIZERS
+ data_encoder = snt.nets.MLP(
+ output_sizes=fcnet_hidden_sizes + [encoded_data_size],
+ initializers=initializers,
+ name="data_encoder")
+ latent_encoder = snt.nets.MLP(
+ output_sizes=fcnet_hidden_sizes + [encoded_latent_size],
+ initializers=initializers,
+ name="latent_encoder")
+ transition = base.ConditionalNormalDistribution(
+ size=latent_size,
+ hidden_layer_sizes=fcnet_hidden_sizes,
+ sigma_min=sigma_min,
+ raw_sigma_bias=raw_sigma_bias,
+ initializers=initializers,
+ name="prior")
+ # Construct the emission distribution.
+ if emission_class == base.ConditionalBernoulliDistribution:
+ # For Bernoulli distributed outputs, we initialize the bias so that the
+ # network generates on average the mean from the training set.
+ emission_dist = functools.partial(base.ConditionalBernoulliDistribution,
+ bias_init=emission_bias_init)
+ else:
+ emission_dist = base.ConditionalNormalDistribution
+ emission = emission_dist(
+ size=data_size,
+ hidden_layer_sizes=fcnet_hidden_sizes,
+ initializers=initializers,
+ name="generative")
+ # Construct the proposal distribution.
+ if proposal_type in ["filtering", "smoothing"]:
+ proposal = base.NormalApproximatePosterior(
+ size=latent_size,
+ hidden_layer_sizes=fcnet_hidden_sizes,
+ sigma_min=sigma_min,
+ raw_sigma_bias=raw_sigma_bias,
+ initializers=initializers,
+ smoothing=(proposal_type == "smoothing"),
+ name="approximate_posterior")
+ else:
+ proposal = None
+
+ if use_tilt:
+ tilt = emission_dist(
+ size=data_size,
+ hidden_layer_sizes=fcnet_hidden_sizes,
+ initializers=initializers,
+ name="tilt")
+ else:
+ tilt = None
+
+ rnn_cell = tf.nn.rnn_cell.LSTMCell(rnn_hidden_size,
+ initializer=initializers["w"])
+ rev_rnn_cell = tf.nn.rnn_cell.LSTMCell(rnn_hidden_size,
+ initializer=initializers["w"])
+ return TrainableSRNN(
+ rnn_cell, data_encoder, latent_encoder, transition,
+ emission, proposal_type, proposal=proposal, rev_rnn_cell=rev_rnn_cell,
+ tilt=tilt, random_seed=random_seed)
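+
+
+# A minimal usage sketch (illustrative only, not part of the original module).
+# 'inputs', 'targets', and 'lengths' are assumed to be [time, batch, data]
+# Tensors and a [batch] int Tensor; the sizes below are placeholders:
+#
+#   model = create_srnn(data_size=88, latent_size=32,
+#                       emission_class=base.ConditionalBernoulliDistribution)
+#   model.set_observations([inputs, targets], lengths)
+#   state = model.zero_state(batch_size=4, dtype=tf.float32)
+#   weights, state = model.propose_and_weight(state, 0)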
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/models/srnn_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/models/srnn_test.py"
new file mode 100644
index 00000000..39e10da1
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/models/srnn_test.py"
@@ -0,0 +1,105 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for fivo.models.srnn."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from fivo.models import base
+from fivo.test_utils import create_srnn
+
+
+class SrnnTest(tf.test.TestCase):
+
+ def test_srnn_normal_emission(self):
+ self.run_srnn(base.ConditionalNormalDistribution, [-5.947752, -1.182961])
+
+ def test_srnn_bernoulli_emission(self):
+ self.run_srnn(base.ConditionalBernoulliDistribution, [-2.566631, -2.479234])
+
+ def run_srnn(self, generative_class, gt_log_alpha):
+ """Tests the SRNN.
+
+ All test values are 'golden values' derived by running the code and copying
+ the output.
+
+ Args:
+ generative_class: The class of the generative distribution to use.
+ gt_log_alpha: The ground-truth value of log alpha.
+ """
+ tf.set_random_seed(1234)
+ with self.test_session() as sess:
+ batch_size = 2
+ model, inputs, targets, _ = create_srnn(generative_class=generative_class,
+ batch_size=batch_size,
+ data_lengths=(1, 1),
+ random_seed=1234)
+ zero_state = model.zero_state(batch_size=batch_size, dtype=tf.float32)
+ model.set_observations([inputs, targets], tf.convert_to_tensor([1, 1]))
+ model_out = model.propose_and_weight(zero_state, 0)
+ sess.run(tf.global_variables_initializer())
+ log_alpha, state = sess.run(model_out)
+ self.assertAllClose(
+ state.latent_encoded,
+ [[0.591787, 1.310583], [-1.523136, 0.953918]])
+ self.assertAllClose(state.rnn_out,
+ [[0.041675, -0.056038, -0.001823, 0.005224],
+ [0.042925, -0.044619, 0.021401, 0.016998]])
+ self.assertAllClose(log_alpha, gt_log_alpha)
+
+ def test_srnn_with_tilt_normal_emission(self):
+ self.run_srnn_with_tilt(base.ConditionalNormalDistribution, [-9.13577, -4.56725])
+
+ def test_srnn_with_tilt_bernoulli_emission(self):
+ self.run_srnn_with_tilt(base.ConditionalBernoulliDistribution, [-4.617461, -5.079248])
+
+ def run_srnn_with_tilt(self, generative_class, gt_log_alpha):
+ """Tests the SRNN with a tilting function.
+
+ All test values are 'golden values' derived by running the code and copying
+ the output.
+
+ Args:
+ generative_class: The class of the generative distribution to use.
+ gt_log_alpha: The ground-truth value of log alpha.
+ """
+ tf.set_random_seed(1234)
+ with self.test_session() as sess:
+ batch_size = 2
+ model, inputs, targets, _ = create_srnn(generative_class=generative_class,
+ batch_size=batch_size,
+ data_lengths=(3, 2),
+ random_seed=1234,
+ use_tilt=True)
+ zero_state = model.zero_state(batch_size=batch_size, dtype=tf.float32)
+ model.set_observations([inputs, targets], tf.convert_to_tensor([3, 2]))
+ model_out = model.propose_and_weight(zero_state, 0)
+ sess.run(tf.global_variables_initializer())
+ log_alpha, state = sess.run(model_out)
+ self.assertAllClose(
+ state.latent_encoded,
+ [[0.591787, 1.310583], [-1.523136, 0.953918]])
+ self.assertAllClose(state.rnn_out,
+ [[0.041675, -0.056038, -0.001823, 0.005224],
+ [0.042925, -0.044619, 0.021401, 0.016998]])
+ self.assertAllClose(log_alpha, gt_log_alpha)
+
+if __name__ == "__main__":
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/models/vrnn.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/models/vrnn.py"
new file mode 100644
index 00000000..4e255208
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/models/vrnn.py"
@@ -0,0 +1,572 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""VRNN classes."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from collections import namedtuple
+import functools
+
+import sonnet as snt
+import tensorflow as tf
+
+from fivo.models import base
+
+
+VRNNState = namedtuple("VRNNState", "rnn_state latent_encoded")
+
+
+class VRNN(object):
+ """Implementation of a Variational Recurrent Neural Network (VRNN).
+
+  Introduced in "A Recurrent Latent Variable Model for Sequential Data"
+ by Chung et al. https://arxiv.org/pdf/1506.02216.pdf.
+
+ The VRNN is a sequence model similar to an RNN that uses stochastic latent
+ variables to improve its representational power. It can be thought of as a
+ sequential analogue to the variational auto-encoder (VAE).
+
+ The VRNN has a deterministic RNN as its backbone, represented by the
+ sequence of RNN hidden states h_t. At each timestep, the RNN hidden state h_t
+ is conditioned on the previous sequence element, x_{t-1}, as well as the
+ latent state from the previous timestep, z_{t-1}.
+
+ In this implementation of the VRNN the latent state z_t is Gaussian. The
+ model's prior over z_t (also called the transition distribution) is
+ distributed as Normal(mu_t, diag(sigma_t^2)) where mu_t and sigma_t are the
+ mean and standard deviation output from a fully connected network that accepts
+ the rnn hidden state h_t as input.
+
+ The emission distribution p(x_t|z_t, h_t) is conditioned on the latent state
+ z_t as well as the current RNN hidden state h_t via a fully connected network.
+
+ To increase the modeling power of the VRNN, two additional networks are
+ used to extract features from the data and the latent state. Those networks
+ are called data_encoder and latent_encoder respectively.
+
+ For an example of how to call the VRNN's methods see sample_step.
+
+ There are a few differences between this exposition and the paper.
+  First, the indexing scheme for h_t is different from the paper's -- what the
+ paper calls h_t we call h_{t+1}. This is the same notation used by Fraccaro
+ et al. to describe the VRNN in the paper linked above. Also, the VRNN paper
+ uses VAE terminology to refer to the different internal networks, so it
+ refers to the emission distribution as the decoder. This implementation also
+ renames the functions phi_x and phi_z in the paper to data_encoder and
+ latent_encoder.
+ """
+
+ def __init__(self,
+ rnn_cell,
+ data_encoder,
+ latent_encoder,
+ transition,
+ emission,
+ random_seed=None):
+ """Create a VRNN.
+
+ Args:
+ rnn_cell: A subclass of tf.nn.rnn_cell.RNNCell that will form the
+ deterministic backbone of the VRNN. The inputs to the RNN will be the
+ encoded latent state of the previous timestep with shape
+ [batch_size, encoded_latent_size] as well as the encoded input of the
+ current timestep, a Tensor of shape [batch_size, encoded_data_size].
+ data_encoder: A callable that accepts a batch of data x_t and
+ 'encodes' it, e.g. runs it through a fully connected network. Must
+ accept as argument the inputs x_t, a Tensor of the shape
+ [batch_size, data_size] and return a Tensor of shape
+ [batch_size, encoded_data_size]. This callable will be called multiple
+ times in the VRNN cell so if scoping is not handled correctly then
+ multiple copies of the variables in this network could be made. It is
+ recommended to use a snt.nets.MLP module, which takes care of this for
+ you.
+ latent_encoder: A callable that accepts a latent state z_t and
+ 'encodes' it, e.g. runs it through a fully connected network. Must
+ accept as argument a Tensor of shape [batch_size, latent_size] and
+ return a Tensor of shape [batch_size, encoded_latent_size].
+ This callable must also have the property 'output_size' defined,
+ returning encoded_latent_size.
+ transition: A callable that implements the transition distribution
+ p(z_t|h_t). Must accept as argument the previous RNN hidden state and
+ return a tf.distributions.Normal distribution conditioned on the input.
+ emission: A callable that implements the emission distribution
+ p(x_t|z_t, h_t). Must accept as arguments the encoded latent state
+ and the RNN hidden state and return a subclass of
+ tf.distributions.Distribution that can be used to evaluate the logprob
+ of the targets.
+ random_seed: The seed for the random ops. Sets the seed for sample_step.
+ """
+ self.random_seed = random_seed
+ self.rnn_cell = rnn_cell
+ self.data_encoder = data_encoder
+ self.latent_encoder = latent_encoder
+ self.encoded_z_size = latent_encoder.output_size
+    self.state_size = self.rnn_cell.state_size
+ self._transition = transition
+ self._emission = emission
+
+ def zero_state(self, batch_size, dtype):
+ """The initial state of the VRNN.
+
+    Contains the initial state of the RNN and the initial encoded latent.
+
+ Args:
+ batch_size: The batch size.
+ dtype: The data type of the VRNN.
+ Returns:
+ zero_state: The initial state of the VRNN.
+ """
+ return VRNNState(
+ rnn_state=self.rnn_cell.zero_state(batch_size, dtype),
+ latent_encoded=tf.zeros(
+ [batch_size, self.latent_encoder.output_size], dtype=dtype))
+
+ def run_rnn(self, prev_rnn_state, prev_latent_encoded, inputs):
+ """Runs the deterministic RNN for one step.
+
+ Args:
+ prev_rnn_state: The state of the RNN from the previous timestep.
+ prev_latent_encoded: Float Tensor of shape
+ [batch_size, encoded_latent_size], the previous latent state z_{t-1}
+ run through latent_encoder.
+ inputs: A Tensor of shape [batch_size, data_size], the current inputs to
+ the model. Most often this is x_{t-1}, the previous token in the
+ observation sequence.
+ Returns:
+ rnn_out: The output of the RNN.
+ rnn_state: The new state of the RNN.
+ """
+ inputs_encoded = self.data_encoder(tf.to_float(inputs))
+ rnn_inputs = tf.concat([inputs_encoded, prev_latent_encoded], axis=1)
+ rnn_out, rnn_state = self.rnn_cell(rnn_inputs, prev_rnn_state)
+ return rnn_out, rnn_state
+
+ def transition(self, rnn_out):
+ """Computes the transition distribution p(z_t|h_t).
+
+ Note that p(z_t | h_t) = p(z_t| z_{1:t-1}, x_{1:t-1})
+
+ Args:
+ rnn_out: The output of the rnn for the current timestep.
+ Returns:
+ p(z_t | h_t): A normal distribution with event shape
+ [batch_size, latent_size].
+ """
+ return self._transition(rnn_out)
+
+ def emission(self, latent, rnn_out):
+ """Computes the emission distribution p(x_t | z_t, h_t).
+
+ Note that p(x_t | z_t, h_t) = p(x_t | z_{1:t}, x_{1:t-1}).
+
+ Args:
+ latent: The stochastic latent state z_t.
+ rnn_out: The output of the rnn for the current timestep.
+ Returns:
+ p(x_t | z_t, h_t): A distribution with event shape
+ [batch_size, data_size].
+ latent_encoded: The latent state encoded with latent_encoder. Should be
+ passed to run_rnn on the next timestep.
+ """
+ latent_encoded = self.latent_encoder(latent)
+ return self._emission(latent_encoded, rnn_out), latent_encoded
+
+ def sample_step(self, prev_state, inputs, unused_t):
+ """Samples one output from the model.
+
+ Args:
+ prev_state: The previous state of the model, a VRNNState containing the
+ previous rnn state and the previous encoded latent.
+ inputs: A Tensor of shape [batch_size, data_size], the current inputs to
+ the model. Most often this is x_{t-1}, the previous token in the
+ observation sequence.
+ unused_t: The current timestep. Not used currently.
+ Returns:
+ new_state: The next state of the model, a VRNNState.
+ xt: A float Tensor of shape [batch_size, data_size], an output sampled
+ from the emission distribution.
+ """
+ rnn_out, rnn_state = self.run_rnn(prev_state.rnn_state,
+ prev_state.latent_encoded,
+ inputs)
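+    # Ancestral sampling: draw z_t from the prior p(z_t | h_t), then draw x_t
+    # from the emission p(x_t | z_t, h_t).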
+ p_zt = self.transition(rnn_out)
+ zt = p_zt.sample(seed=self.random_seed)
+ p_xt_given_zt, latent_encoded = self.emission(zt, rnn_out)
+ xt = p_xt_given_zt.sample(seed=self.random_seed)
+ new_state = VRNNState(rnn_state=rnn_state, latent_encoded=latent_encoded)
+ return new_state, tf.to_float(xt)
+
+# pylint: disable=invalid-name
+# pylint thinks this is a top-level constant.
+TrainableVRNNState = namedtuple("TrainableVRNNState",
+ VRNNState._fields + ("rnn_out",))
+# pylint: enable=invalid-name
+
+
+class TrainableVRNN(VRNN, base.ELBOTrainableSequenceModel):
+ """A VRNN subclass with proposals and methods for training and evaluation.
+
+ This class adds proposals used for training with importance-sampling based
+ methods such as the ELBO. The model can be configured to propose from one
+ of three proposals: a learned filtering proposal, a learned smoothing
+ proposal, or the prior (i.e. the transition distribution).
+
+ As described in the VRNN paper, the learned filtering proposal is
+ parameterized by a fully connected neural network that accepts as input the
+ current target x_t and the current rnn output h_t. The learned smoothing
+ proposal is also given the hidden state of an RNN run in reverse over the
+ inputs, so as to incorporate information about future observations. This
+ smoothing proposal is not described in the VRNN paper.
+
+ All learned proposals use the 'res_q' parameterization, meaning that instead
+ of directly producing the mean of z_t, the proposal network predicts the
+ 'residual' from the prior's mean. This is explored more in section 3.3 of
+ https://arxiv.org/pdf/1605.07571.pdf.
+
+ During training, the latent state z_t is sampled from the proposal and the
+ reparameterization trick is used to provide low-variance gradients.
+
+ Note that the VRNN paper uses VAE terminology to refer to the different
+ internal networks, so the proposal is referred to as the encoder.
+ """
+
+ def __init__(self,
+ rnn_cell,
+ data_encoder,
+ latent_encoder,
+ transition,
+ emission,
+ proposal_type,
+ proposal=None,
+ rev_rnn_cell=None,
+ tilt=None,
+ random_seed=None):
+ """Create a trainable RNN.
+
+ Args:
+ rnn_cell: A subclass of tf.nn.rnn_cell.RNNCell that will form the
+ deterministic backbone of the VRNN. The inputs to the RNN will be the
+ encoded latent state of the previous timestep with shape
+ [batch_size, encoded_latent_size] as well as the encoded input of the
+ current timestep, a Tensor of shape [batch_size, encoded_data_size].
+ data_encoder: A callable that accepts a batch of data x_t and
+ 'encodes' it, e.g. runs it through a fully connected network. Must
+ accept as argument the inputs x_t, a Tensor of the shape
+ [batch_size, data_size] and return a Tensor of shape
+ [batch_size, encoded_data_size]. This callable will be called multiple
+ times in the VRNN cell so if scoping is not handled correctly then
+ multiple copies of the variables in this network could be made. It is
+ recommended to use a snt.nets.MLP module, which takes care of this for
+ you.
+ latent_encoder: A callable that accepts a latent state z_t and
+ 'encodes' it, e.g. runs it through a fully connected network. Must
+ accept as argument a Tensor of shape [batch_size, latent_size] and
+ return a Tensor of shape [batch_size, encoded_latent_size].
+ This callable must also have the property 'output_size' defined,
+ returning encoded_latent_size.
+ transition: A callable that implements the transition distribution
+ p(z_t|h_t). Must accept as argument the previous RNN hidden state and
+ return a tf.distributions.Normal distribution conditioned on the input.
+ emission: A callable that implements the emission distribution
+ p(x_t|z_t, h_t). Must accept as arguments the encoded latent state
+ and the RNN hidden state and return a subclass of
+ tf.distributions.Distribution that can be used to evaluate the logprob
+ of the targets.
+ proposal_type: A string indicating the type of proposal to use. Can
+ be either "filtering", "smoothing", or "prior". When proposal_type is
+ "filtering" or "smoothing", proposal must be provided. When
+ proposal_type is "smoothing", rev_rnn_cell must also be provided.
+ proposal: A callable that implements the proposal q(z_t| h_t, x_{1:T}).
+ If proposal_type is "filtering" then proposal must accept as arguments
+ the current rnn output, the encoded target of the current timestep,
+ and the mean of the prior. If proposal_type is "smoothing" then
+ in addition to the current rnn output and the mean of the prior
+ proposal must accept as arguments the output of the reverse rnn.
+ proposal should return a tf.distributions.Normal distribution
+ conditioned on its inputs. If proposal_type is "prior" this argument is
+ ignored.
+ rev_rnn_cell: A subclass of tf.nn.rnn_cell.RNNCell that will aggregate
+ observation statistics in the reverse direction. The inputs to the RNN
+ will be the encoded reverse input of the current timestep, a Tensor of
+ shape [batch_size, encoded_data_size].
+      tilt: A callable that implements the log of a positive tilting function
+        (ideally approximating log p(x_{t+1}|z_t, h_t)). Must accept as
+        arguments the encoded latent state and the RNN hidden state and return
+        a subclass of tf.distributions.Distribution that can be used to
+        evaluate the logprob of x_{t+1}. May be None, in which case no tilt is
+        used.
+ random_seed: The seed for the random ops. Sets the seed for sample_step
+ and __call__.
+ """
+ super(TrainableVRNN, self).__init__(
+ rnn_cell, data_encoder, latent_encoder,
+ transition, emission, random_seed=random_seed)
+ self.rev_rnn_cell = rev_rnn_cell
+ self._tilt = tilt
+ assert proposal_type in ["filtering", "smoothing", "prior"]
+ self._proposal = proposal
+ self.proposal_type = proposal_type
+ if proposal_type != "prior":
+ assert proposal, "If not proposing from the prior, must provide proposal."
+ if proposal_type == "smoothing":
+ assert rev_rnn_cell, "Must provide rev_rnn_cell for smoothing proposal."
+
+ def zero_state(self, batch_size, dtype):
+ super_state = super(TrainableVRNN, self).zero_state(batch_size, dtype)
+ return TrainableVRNNState(
+ rnn_out=tf.zeros([batch_size, self.rnn_cell.output_size], dtype=dtype),
+ **super_state._asdict())
+
+ def set_observations(self, observations, seq_lengths):
+ """Stores the model's observations.
+
+ Stores the observations (inputs and targets) in TensorArrays and precomputes
+ things for later like the reverse RNN output and encoded targets.
+
+ Args:
+ observations: The observations of the model, a tuple containing two
+ Tensors of shape [max_seq_len, batch_size, data_size]. The Tensors
+ should be the inputs and targets, respectively.
+ seq_lengths: An int Tensor of shape [batch_size] containing the length
+ of each sequence in observations.
+ """
+ inputs, targets = observations
+ self.seq_lengths = seq_lengths
+ self.max_seq_len = tf.reduce_max(seq_lengths)
+ self.inputs_ta = base.ta_for_tensor(inputs, clear_after_read=False)
+ self.targets_ta = base.ta_for_tensor(targets, clear_after_read=False)
+ targets_encoded = base.encode_all(targets, self.data_encoder)
+ self.targets_encoded_ta = base.ta_for_tensor(targets_encoded,
+ clear_after_read=False)
+ if self.rev_rnn_cell:
+ reverse_targets_encoded = tf.reverse_sequence(
+ targets_encoded, seq_lengths, seq_axis=0, batch_axis=1)
+ # Compute the reverse rnn over the targets.
+ reverse_rnn_out, _ = tf.nn.dynamic_rnn(self.rev_rnn_cell,
+ reverse_targets_encoded,
+ time_major=True,
+ dtype=tf.float32)
+ reverse_rnn_out = tf.reverse_sequence(reverse_rnn_out, seq_lengths,
+ seq_axis=0, batch_axis=1)
+ self.reverse_rnn_ta = base.ta_for_tensor(reverse_rnn_out,
+ clear_after_read=False)
+
+ def _filtering_proposal(self, rnn_out, prior, t):
+ """Computes the filtering proposal distribution."""
+ return self._proposal(rnn_out,
+ self.targets_encoded_ta.read(t),
+ prior_mu=prior.mean())
+
+ def _smoothing_proposal(self, rnn_out, prior, t):
+ """Computes the smoothing proposal distribution."""
+ return self._proposal(rnn_out,
+ smoothing_tensors=[self.reverse_rnn_ta.read(t)],
+ prior_mu=prior.mean())
+
+ def proposal(self, rnn_out, prior, t):
+ """Computes the proposal distribution specified by proposal_type.
+
+ Args:
+ rnn_out: The output of the rnn for the current timestep.
+ prior: A tf.distributions.Normal distribution representing the prior
+ over z_t, p(z_t | z_{1:t-1}, x_{1:t-1}). Used for 'res_q'.
+ t: A scalar int Tensor, the current timestep.
+ """
+ if self.proposal_type == "filtering":
+ return self._filtering_proposal(rnn_out, prior, t)
+ elif self.proposal_type == "smoothing":
+ return self._smoothing_proposal(rnn_out, prior, t)
+ elif self.proposal_type == "prior":
+ return self.transition(rnn_out)
+
+ def tilt(self, rnn_out, latent_encoded, targets):
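+    # Evaluates the tilt r(x | z_t, h_t) at the targets and sums the
+    # per-dimension log-probs into a [batch_size] Tensor.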
+ r_func = self._tilt(rnn_out, latent_encoded)
+ return tf.reduce_sum(r_func.log_prob(targets), axis=-1)
+
+ def propose_and_weight(self, state, t):
+ """Runs the model and computes importance weights for one timestep.
+
+ Runs the model and computes importance weights, sampling from the proposal
+ instead of the transition/prior.
+
+ Args:
+ state: The previous state of the model, a TrainableVRNNState containing
+ the previous rnn state, the previous rnn outs, and the previous encoded
+ latent.
+ t: A scalar integer Tensor, the current timestep.
+ Returns:
+ weights: A float Tensor of shape [batch_size].
+ new_state: The new state of the model.
+ """
+ inputs = self.inputs_ta.read(t)
+ targets = self.targets_ta.read(t)
+ rnn_out, next_rnn_state = self.run_rnn(state.rnn_state,
+ state.latent_encoded,
+ inputs)
+ p_zt = self.transition(rnn_out)
+ q_zt = self.proposal(rnn_out, p_zt, t)
+ zt = q_zt.sample(seed=self.random_seed)
+ p_xt_given_zt, latent_encoded = self.emission(zt, rnn_out)
+ log_p_xt_given_zt = tf.reduce_sum(p_xt_given_zt.log_prob(targets), axis=-1)
+ log_p_zt = tf.reduce_sum(p_zt.log_prob(zt), axis=-1)
+ log_q_zt = tf.reduce_sum(q_zt.log_prob(zt), axis=-1)
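+    # Incremental importance weight log alpha_t = log p(z_t | h_t) +
+    # log p(x_t | z_t, h_t) - log q(z_t | ...), one value per batch element.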
+ weights = log_p_zt + log_p_xt_given_zt - log_q_zt
+ if self._tilt:
+ prev_log_r = tf.cond(
+ tf.greater(t, 0),
+ lambda: self.tilt(state.rnn_out, state.latent_encoded, targets),
+ lambda: 0.) # On the first step, prev_log_r = 0.
+ log_r = tf.cond(
+ tf.less(t + 1, self.max_seq_len),
+ lambda: self.tilt(rnn_out, latent_encoded, self.targets_ta.read(t+1)),
+ lambda: 0.)
+ # On the last step, log_r = 0.
+ log_r *= tf.to_float(t < self.seq_lengths - 1)
+ weights += log_r - prev_log_r
+ new_state = TrainableVRNNState(rnn_state=next_rnn_state,
+ rnn_out=rnn_out,
+ latent_encoded=latent_encoded)
+ return weights, new_state
+
+
+_DEFAULT_INITIALIZERS = {"w": tf.contrib.layers.xavier_initializer(),
+ "b": tf.zeros_initializer()}
+
+
+def create_vrnn(
+ data_size,
+ latent_size,
+ emission_class,
+ rnn_hidden_size=None,
+ fcnet_hidden_sizes=None,
+ encoded_data_size=None,
+ encoded_latent_size=None,
+ sigma_min=0.0,
+ raw_sigma_bias=0.25,
+ emission_bias_init=0.0,
+ use_tilt=False,
+ proposal_type="filtering",
+ initializers=None,
+ random_seed=None):
+ """A factory method for creating VRNN cells.
+
+ Args:
+ data_size: The dimension of the vectors that make up the data sequences.
+ latent_size: The size of the stochastic latent state of the VRNN.
+ emission_class: The class of the emission distribution. Can be either
+ ConditionalNormalDistribution or ConditionalBernoulliDistribution.
+ rnn_hidden_size: The hidden state dimension of the RNN that forms the
+ deterministic part of this VRNN. If None, then it defaults
+ to latent_size.
+ fcnet_hidden_sizes: A list of python integers, the size of the hidden
+ layers of the fully connected networks that parameterize the conditional
+ distributions of the VRNN. If None, then it defaults to one hidden
+ layer of size latent_size.
+ encoded_data_size: The size of the output of the data encoding network. If
+ None, defaults to latent_size.
+ encoded_latent_size: The size of the output of the latent state encoding
+ network. If None, defaults to latent_size.
+ sigma_min: The minimum value that the standard deviation of the
+ distribution over the latent state can take.
+ raw_sigma_bias: A scalar that is added to the raw standard deviation
+ output from the neural networks that parameterize the prior and
+ approximate posterior. Useful for preventing standard deviations close
+ to zero.
+    emission_bias_init: A bias added to the raw output of the fully
+      connected network that parameterizes the emission distribution. Useful
+      for initializing the mean of the distribution to a sensible starting
+      point such as the mean of the training data. Only used with Bernoulli
+      generative distributions.
+ use_tilt: If true, create a VRNN with a tilting function.
+ proposal_type: The type of proposal to use. Can be "filtering", "smoothing",
+ or "prior".
+    initializers: The variable initializers to use for the fully connected
+ networks and RNN cell. Must be a dictionary mapping the keys 'w' and 'b'
+ to the initializers for the weights and biases. Defaults to xavier for
+ the weights and zeros for the biases when initializers is None.
+ random_seed: A random seed for the VRNN resampling operations.
+ Returns:
+ model: A TrainableVRNN object.
+ """
+ if rnn_hidden_size is None:
+ rnn_hidden_size = latent_size
+ if fcnet_hidden_sizes is None:
+ fcnet_hidden_sizes = [latent_size]
+ if encoded_data_size is None:
+ encoded_data_size = latent_size
+ if encoded_latent_size is None:
+ encoded_latent_size = latent_size
+ if initializers is None:
+ initializers = _DEFAULT_INITIALIZERS
+ data_encoder = snt.nets.MLP(
+ output_sizes=fcnet_hidden_sizes + [encoded_data_size],
+ initializers=initializers,
+ name="data_encoder")
+ latent_encoder = snt.nets.MLP(
+ output_sizes=fcnet_hidden_sizes + [encoded_latent_size],
+ initializers=initializers,
+ name="latent_encoder")
+ transition = base.ConditionalNormalDistribution(
+ size=latent_size,
+ hidden_layer_sizes=fcnet_hidden_sizes,
+ sigma_min=sigma_min,
+ raw_sigma_bias=raw_sigma_bias,
+ initializers=initializers,
+ name="prior")
+ # Construct the emission distribution.
+ if emission_class == base.ConditionalBernoulliDistribution:
+ # For Bernoulli distributed outputs, we initialize the bias so that the
+ # network generates on average the mean from the training set.
+ emission_dist = functools.partial(base.ConditionalBernoulliDistribution,
+ bias_init=emission_bias_init)
+ else:
+ emission_dist = base.ConditionalNormalDistribution
+ emission = emission_dist(
+ size=data_size,
+ hidden_layer_sizes=fcnet_hidden_sizes,
+ initializers=initializers,
+ name="generative")
+ # Construct the proposal distribution.
+ if proposal_type in ["filtering", "smoothing"]:
+ proposal = base.NormalApproximatePosterior(
+ size=latent_size,
+ hidden_layer_sizes=fcnet_hidden_sizes,
+ sigma_min=sigma_min,
+ raw_sigma_bias=raw_sigma_bias,
+ initializers=initializers,
+ smoothing=(proposal_type == "smoothing"),
+ name="approximate_posterior")
+ else:
+ proposal = None
+
+ if use_tilt:
+ tilt = emission_dist(
+ size=data_size,
+ hidden_layer_sizes=fcnet_hidden_sizes,
+ initializers=initializers,
+ name="tilt")
+ else:
+ tilt = None
+
+ rnn_cell = tf.nn.rnn_cell.LSTMCell(rnn_hidden_size,
+ initializer=initializers["w"])
+ rev_rnn_cell = tf.nn.rnn_cell.LSTMCell(rnn_hidden_size,
+ initializer=initializers["w"])
+ return TrainableVRNN(
+ rnn_cell, data_encoder, latent_encoder, transition,
+ emission, proposal_type, proposal=proposal, rev_rnn_cell=rev_rnn_cell,
+ tilt=tilt, random_seed=random_seed)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/models/vrnn_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/models/vrnn_test.py"
new file mode 100644
index 00000000..2d9bde3d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/models/vrnn_test.py"
@@ -0,0 +1,137 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for fivo.models.vrnn."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+
+import tensorflow as tf
+
+from fivo.models import base
+from fivo.test_utils import create_vrnn
+
+
+class VrnnTest(tf.test.TestCase):
+
+ def test_vrnn_normal_emission(self):
+ self.run_vrnn(base.ConditionalNormalDistribution, [-4.509767, -3.242221])
+
+ def test_vrnn_bernoulli_emission(self):
+    self.run_vrnn(base.ConditionalBernoulliDistribution, [-2.63812733, -2.02216434])
+
+ def run_vrnn(self, generative_class, gt_log_p_x_given_z):
+ """Tests the VRNN.
+
+ All test values are 'golden values' derived by running the code and copying
+ the output.
+
+ Args:
+ generative_class: The class of the generative distribution to use.
+ gt_log_p_x_given_z: The ground-truth value of log p(x|z).
+ """
+ tf.set_random_seed(1234)
+ with self.test_session() as sess:
+ batch_size = 2
+ model, inputs, targets, _ = create_vrnn(generative_class=generative_class,
+ batch_size=batch_size,
+ data_lengths=(1, 1),
+ random_seed=1234)
+ zero_state = model.zero_state(batch_size=batch_size, dtype=tf.float32)
+ model.set_observations([inputs, targets], tf.convert_to_tensor([1, 1]))
+ model_out = model.propose_and_weight(zero_state, 0)
+ sess.run(tf.global_variables_initializer())
+ log_alpha, state = sess.run(model_out)
+ rnn_state, latent_state, rnn_out = state
+ self.assertAllClose(
+ rnn_state.c,
+ [[-0.15014534, 0.0143046, 0.00160489, -0.12899463],
+ [-0.25015137, 0.09377634, -0.05000039, -0.17123522]])
+ self.assertAllClose(
+ rnn_state.h,
+ [[-0.06842659, 0.00760155, 0.00096106, -0.05434214],
+ [-0.1109542, 0.0441804, -0.03121299, -0.07882939]]
+ )
+ self.assertAllClose(
+ latent_state,
+ [[0.025241, 0.122011, 1.066661, 0.316209, -0.25369, 0.108215,
+ -1.501128, -0.440111, -0.40447, -0.156649, 1.206028],
+ [0.066824, 0.519937, 0.610973, 0.977739, -0.121889, -0.223429,
+ -0.32687, -0.578763, -0.56965, 0.751886, 0.681606]]
+ )
+ self.assertAllClose(rnn_out, [[-0.068427, 0.007602, 0.000961, -0.054342],
+ [-0.110954, 0.04418, -0.031213, -0.078829]])
+ gt_log_q_z = [-8.0895052, -6.75819111]
+ gt_log_p_z = [-7.246827, -6.512877]
+ gt_log_alpha = (np.array(gt_log_p_z) +
+ np.array(gt_log_p_x_given_z) -
+ np.array(gt_log_q_z))
+ self.assertAllClose(log_alpha, gt_log_alpha)
+
+ def test_vrnn_with_tilt_normal_emission(self):
+ self.run_vrnn_with_tilt(base.ConditionalNormalDistribution, [-5.198263, -6.31686])
+
+ def test_vrnn_with_tilt_bernoulli_emission(self):
+ self.run_vrnn_with_tilt(base.ConditionalBernoulliDistribution, [-4.66985, -3.802245])
+
+ def run_vrnn_with_tilt(self, generative_class, gt_log_alpha):
+ """Tests the VRNN with a tilting function.
+
+ All test values are 'golden values' derived by running the code and copying
+ the output.
+
+ Args:
+ generative_class: The class of the generative distribution to use.
+ gt_log_alpha: The ground-truth value of log alpha.
+ """
+ tf.set_random_seed(1234)
+ with self.test_session() as sess:
+ batch_size = 2
+ model, inputs, targets, _ = create_vrnn(generative_class=generative_class,
+ batch_size=batch_size,
+ data_lengths=(3, 2),
+ random_seed=1234,
+ use_tilt=True)
+ zero_state = model.zero_state(batch_size=batch_size, dtype=tf.float32)
+ model.set_observations([inputs, targets], tf.convert_to_tensor([3, 2]))
+ model_out = model.propose_and_weight(zero_state, 0)
+ sess.run(tf.global_variables_initializer())
+ log_alpha, state = sess.run(model_out)
+ rnn_state, latent_state, rnn_out = state
+ self.assertAllClose(
+ rnn_state.c,
+ [[-0.15014534, 0.0143046, 0.00160489, -0.12899463],
+ [-0.25015137, 0.09377634, -0.05000039, -0.17123522]])
+ self.assertAllClose(
+ rnn_state.h,
+ [[-0.06842659, 0.00760155, 0.00096106, -0.05434214],
+ [-0.1109542, 0.0441804, -0.03121299, -0.07882939]]
+ )
+ self.assertAllClose(
+ latent_state,
+ [[0.025241, 0.122011, 1.066661, 0.316209, -0.25369, 0.108215,
+ -1.501128, -0.440111, -0.40447, -0.156649, 1.206028],
+ [0.066824, 0.519937, 0.610973, 0.977739, -0.121889, -0.223429,
+ -0.32687, -0.578763, -0.56965, 0.751886, 0.681606]]
+ )
+ self.assertAllClose(rnn_out, [[-0.068427, 0.007602, 0.000961, -0.054342],
+ [-0.110954, 0.04418, -0.031213, -0.078829]])
+ self.assertAllClose(log_alpha, gt_log_alpha)
+
+if __name__ == "__main__":
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/nested_utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/nested_utils.py"
new file mode 100644
index 00000000..ef956a80
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/nested_utils.py"
@@ -0,0 +1,139 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""A set of utils for dealing with nested lists and tuples of Tensors."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from tensorflow.python.util import nest
+
+
+def map_nested(map_fn, nested):
+ """Executes map_fn on every element in a (potentially) nested structure.
+
+ Args:
+ map_fn: A callable to execute on each element in 'nested'.
+ nested: A potentially nested combination of sequence objects. Sequence
+ objects include tuples, lists, namedtuples, and all subclasses of
+ collections.Sequence except strings. See nest.is_sequence for details.
+ For example [1, ('hello', 4.3)] is a nested structure containing elements
+ 1, 'hello', and 4.3.
+ Returns:
+ out_structure: A potentially nested combination of sequence objects with the
+ same structure as the 'nested' input argument. out_structure
+ contains the result of applying map_fn to each element in 'nested'. For
+ example map_nested(lambda x: x+1, [1, (3, 4.3)]) returns [2, (4, 5.3)].
+ """
+  out = [map_fn(x) for x in nest.flatten(nested)]
+ return nest.pack_sequence_as(nested, out)
+
+
+def tile_tensors(tensors, multiples):
+ """Tiles a set of Tensors.
+
+ Args:
+ tensors: A potentially nested tuple or list of Tensors with rank
+ greater than or equal to the length of 'multiples'. The Tensors do not
+ need to have the same rank, but their rank must not be dynamic.
+ multiples: A python list of ints indicating how to tile each Tensor
+ in 'tensors'. Similar to the 'multiples' argument to tf.tile.
+ Returns:
+ tiled_tensors: A potentially nested tuple or list of Tensors with the same
+ structure as the 'tensors' input argument. Contains the result of
+ applying tf.tile to each Tensor in 'tensors'. When the rank of a Tensor
+ in 'tensors' is greater than the length of multiples, multiples is padded
+ at the end with 1s. For example when tiling a 4-dimensional Tensor with
+ multiples [3, 4], multiples would be padded to [3, 4, 1, 1] before tiling.
+ """
+ def tile_fn(x):
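+    # Pad 'multiples' with 1s on the right so its length matches the rank of x.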
+ return tf.tile(x, multiples + [1] * (x.shape.ndims - len(multiples)))
+
+ return map_nested(tile_fn, tensors)
+
+
+def where_tensors(condition, x_tensors, y_tensors):
+  """Performs a tf.where operation on two sets of Tensors.
+
+ Args:
+ condition: The condition tensor to use for the where operation.
+ x_tensors: A potentially nested tuple or list of Tensors.
+ y_tensors: A potentially nested tuple or list of Tensors. Must have the
+ same structure as x_tensors.
+ Returns:
+ whered_tensors: A potentially nested tuple or list of Tensors with the
+ same structure as the 'tensors' input argument. Contains the result of
+ applying tf.where(condition, x, y) on each pair of elements in x_tensors
+ and y_tensors.
+ """
+ flat_x = nest.flatten(x_tensors)
+ flat_y = nest.flatten(y_tensors)
+  result = [tf.where(condition, x, y)
+            for x, y in zip(flat_x, flat_y)]
+
+ return nest.pack_sequence_as(x_tensors, result)
+
+
+def gather_tensors(tensors, indices):
+ """Performs a tf.gather operation on a set of Tensors.
+
+ Args:
+ tensors: A potentially nested tuple or list of Tensors.
+ indices: The indices to use for the gather operation.
+ Returns:
+ gathered_tensors: A potentially nested tuple or list of Tensors with the
+ same structure as the 'tensors' input argument. Contains the result of
+ applying tf.gather(x, indices) on each element x in 'tensors'.
+ """
+ return map_nested(lambda x: tf.gather(x, indices), tensors)
+
+
+def tas_for_tensors(tensors, length, **kwargs):
+ """Unstacks a set of Tensors into TensorArrays.
+
+ Args:
+ tensors: A potentially nested tuple or list of Tensors with length in the
+ first dimension greater than or equal to the 'length' input argument.
+ length: The desired length of the TensorArrays.
+ **kwargs: Keyword args for TensorArray constructor.
+ Returns:
+ tensorarrays: A potentially nested tuple or list of TensorArrays with the
+ same structure as 'tensors'. Contains the result of unstacking each Tensor
+ in 'tensors'.
+ """
+ def map_fn(x):
+ ta = tf.TensorArray(x.dtype, length,
+ name=x.name.split(':')[0] + '_ta', **kwargs)
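+    # Unstack only the first 'length' steps along the leading time dimension.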
+ return ta.unstack(x[:length, :])
+ return map_nested(map_fn, tensors)
+
+
+def read_tas(tas, index):
+ """Performs a read operation on a set of TensorArrays.
+
+ Args:
+ tas: A potentially nested tuple or list of TensorArrays with length greater
+ than 'index'.
+ index: The location to read from.
+ Returns:
+ read_tensors: A potentially nested tuple or list of Tensors with the same
+ structure as the 'tas' input argument. Contains the result of
+ performing a read operation at 'index' on each TensorArray in 'tas'.
+ """
+ return map_nested(lambda ta: ta.read(index), tas)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/nested_utils_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/nested_utils_test.py"
new file mode 100644
index 00000000..87991dd7
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/nested_utils_test.py"
@@ -0,0 +1,125 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for fivo.nested_utils."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import collections
+import tensorflow as tf
+nest = tf.contrib.framework.nest
+
+from fivo import nested_utils
+
+# An example namedtuple for use in the following tests.
+ExampleTuple = collections.namedtuple('ExampleTuple', ['a', 'b'])
+
+
+class NestedUtilsTest(tf.test.TestCase):
+
+ def test_map_nested_works_on_nested_structures(self):
+ """Check that map_nested works with nested structures."""
+ original = [1, (2, 3.2, (4., ExampleTuple(5, 6)))]
+ expected = [2, (3, 4.2, (5., ExampleTuple(6, 7)))]
+ out = nested_utils.map_nested(lambda x: x+1, original)
+ self.assertEqual(expected, out)
+
+ def test_map_nested_works_on_single_objects(self):
+ """Check that map_nested works with raw objects."""
+ original = 1
+ expected = 2
+ out = nested_utils.map_nested(lambda x: x+1, original)
+ self.assertEqual(expected, out)
+
+ def test_map_nested_works_on_flat_lists(self):
+ """Check that map_nested works with a flat list."""
+ original = [1, 2, 3]
+ expected = [2, 3, 4]
+ out = nested_utils.map_nested(lambda x: x+1, original)
+ self.assertEqual(expected, out)
+
+ def test_tile_tensors(self):
+ """Checks that tile_tensors correctly tiles tensors of different ranks."""
+ a = tf.range(20)
+ b = tf.reshape(a, [2, 10])
+ c = tf.reshape(a, [2, 2, 5])
+ a_tiled = tf.tile(a, [3])
+ b_tiled = tf.tile(b, [3, 1])
+ c_tiled = tf.tile(c, [3, 1, 1])
+ tensors = [a, (b, ExampleTuple(c, c))]
+ expected_tensors = [a_tiled, (b_tiled, ExampleTuple(c_tiled, c_tiled))]
+ tiled = nested_utils.tile_tensors(tensors, [3])
+ nest.assert_same_structure(expected_tensors, tiled)
+ with self.test_session() as sess:
+ expected, out = sess.run([expected_tensors, tiled])
+ expected = nest.flatten(expected)
+ out = nest.flatten(out)
+ # Check that the tiling is correct.
+ for x, y in zip(expected, out):
+ self.assertAllClose(x, y)
+
+ def test_gather_tensors(self):
+ a = tf.reshape(tf.range(20), [5, 4])
+ inds = [0, 0, 1, 4]
+ a_gathered = tf.gather(a, inds)
+ tensors = [a, (a, ExampleTuple(a, a))]
+ gt_gathered = [a_gathered, (a_gathered,
+ ExampleTuple(a_gathered, a_gathered))]
+ gathered = nested_utils.gather_tensors(tensors, inds)
+ nest.assert_same_structure(gt_gathered, gathered)
+ with self.test_session() as sess:
+ gt, out = sess.run([gt_gathered, gathered])
+ gt = nest.flatten(gt)
+ out = nest.flatten(out)
+ # Check that the gathering is correct.
+ for x, y in zip(gt, out):
+ self.assertAllClose(x, y)
+
+ def test_tas_for_tensors(self):
+ a = tf.reshape(tf.range(20), [5, 4])
+ tensors = [a, (a, ExampleTuple(a, a))]
+ tas = nested_utils.tas_for_tensors(tensors, 5)
+ nest.assert_same_structure(tensors, tas)
+    # We can't pass TensorArrays to sess.run so instead we turn them back into
+ # tensors to check that they were created correctly.
+ stacked = nested_utils.map_nested(lambda x: x.stack(), tas)
+ with self.test_session() as sess:
+ gt, out = sess.run([tensors, stacked])
+ gt = nest.flatten(gt)
+ out = nest.flatten(out)
+ # Check that the tas were created correctly.
+ for x, y in zip(gt, out):
+ self.assertAllClose(x, y)
+
+ def test_read_tas(self):
+ a = tf.reshape(tf.range(20), [5, 4])
+ a_read = a[3, :]
+ tensors = [a, (a, ExampleTuple(a, a))]
+ gt_read = [a_read, (a_read, ExampleTuple(a_read, a_read))]
+ tas = nested_utils.tas_for_tensors(tensors, 5)
+ tas_read = nested_utils.read_tas(tas, 3)
+ nest.assert_same_structure(tas, tas_read)
+ with self.test_session() as sess:
+ gt, out = sess.run([gt_read, tas_read])
+ gt = nest.flatten(gt)
+ out = nest.flatten(out)
+ # Check that the tas were read correctly.
+ for x, y in zip(gt, out):
+ self.assertAllClose(x, y)
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/runners.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/runners.py"
new file mode 100644
index 00000000..ec6fb91b
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/runners.py"
@@ -0,0 +1,489 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""High-level code for creating and running FIVO-related Tensorflow graphs.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import os
+import time
+
+import numpy as np
+import tensorflow as tf
+
+from fivo import bounds
+from fivo import smc
+
+from fivo.data import datasets
+from fivo.models import base
+from fivo.models import srnn
+from fivo.models import vrnn
+
+
+def create_dataset_and_model(config, split, shuffle, repeat):
+ """Creates the dataset and model for a given config.
+
+ Args:
+ config: A configuration object with config values accessible as properties.
+      Most likely a FLAGS object. This function expects at least the properties
+      batch_size, dataset_path, dataset_type, model, bound, proposal_type, and
+      latent_size to be defined.
+ split: The dataset split to load.
+ shuffle: If true, shuffle the dataset randomly.
+ repeat: If true, repeat the dataset endlessly.
+ Returns:
+ inputs: A batch of input sequences represented as a dense Tensor of shape
+ [time, batch_size, data_dimension].
+ targets: A batch of target sequences represented as a dense Tensor of
+ shape [time, batch_size, data_dimension].
+ lens: An int Tensor of shape [batch_size] representing the lengths of each
+ sequence in the batch.
+    model: A vrnn.TrainableVRNN or srnn.TrainableSRNN model object.
+ Raises:
+ ValueError: if the config is invalid.
+ """
+ sigma_min = 0.0
+ if config.dataset_type == "pianoroll":
+ inputs, targets, lengths, mean = datasets.create_pianoroll_dataset(
+ config.dataset_path, split, config.batch_size, shuffle=shuffle,
+ repeat=repeat)
+ # Convert the mean of the training set to logit space so it can be used to
+ # initialize the bias of the generative distribution.
+ emission_bias_init = -tf.log(
+ 1. / tf.clip_by_value(mean, 0.0001, 0.9999) - 1)
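+    # Note that -log(1/m - 1) = log(m / (1 - m)) = logit(m), so applying a
+    # sigmoid to this bias recovers the (clipped) training-set mean.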
+ emission_distribution_class = base.ConditionalBernoulliDistribution
+ elif config.dataset_type == "speech":
+ inputs, targets, lengths = datasets.create_speech_dataset(
+ config.dataset_path, config.batch_size,
+ samples_per_timestep=config.data_dimension, prefetch_buffer_size=1,
+ shuffle=False, repeat=False)
+ # There is no bias for the generative distribution because the test set
+ # is assumed to be already standardized with the training set statistics.
+ mean = None
+ emission_bias_init = None
+    emission_distribution_class = base.ConditionalNormalDistribution
+  else:
+    raise ValueError("dataset_type flag: %s is unrecognized" %
+                     config.dataset_type)
+  if config.model == "vrnn":
+ model = vrnn.create_vrnn(inputs.get_shape().as_list()[2],
+ config.latent_size,
+ emission_distribution_class,
+ emission_bias_init=emission_bias_init,
+ proposal_type=config.proposal_type,
+ sigma_min=sigma_min,
+ raw_sigma_bias=0.5,
+ use_tilt=(config.bound == "fivo-aux"))
+ elif config.model == "srnn":
+ model = srnn.create_srnn(inputs.get_shape().as_list()[2],
+ config.latent_size,
+ emission_distribution_class,
+ emission_bias_init=emission_bias_init,
+ proposal_type=config.proposal_type,
+ sigma_min=sigma_min,
+ raw_sigma_bias=0.5,
+ use_tilt=(config.bound == "fivo-aux"))
+ else:
+ raise ValueError("model flag: %s is unrecognized" % config.model)
+ return inputs, targets, lengths, model, mean
+
+
+def restore_checkpoint_if_exists(saver, sess, logdir):
+ """Looks for a checkpoint and restores the session from it if found.
+
+ Args:
+ saver: A tf.train.Saver for restoring the session.
+ sess: A TensorFlow session.
+ logdir: The directory to look for checkpoints in.
+ Returns:
+ True if a checkpoint was found and restored, False otherwise.
+ """
+ checkpoint = tf.train.get_checkpoint_state(logdir)
+ if checkpoint:
+ checkpoint_name = os.path.basename(checkpoint.model_checkpoint_path)
+ full_checkpoint_path = os.path.join(logdir, checkpoint_name)
+ saver.restore(sess, full_checkpoint_path)
+ return True
+ return False
+
+
+def wait_for_checkpoint(saver, sess, logdir):
+ """Loops until the session is restored from a checkpoint in logdir.
+
+ Args:
+ saver: A tf.train.Saver for restoring the session.
+ sess: A TensorFlow session.
+ logdir: The directory to look for checkpoints in.
+ """
+ while not restore_checkpoint_if_exists(saver, sess, logdir):
+ tf.logging.info("Checkpoint not found in %s, sleeping for 60 seconds."
+ % logdir)
+ time.sleep(60)
+
+
+def run_train(config, create_dataset_and_model_fn=create_dataset_and_model):
+ """Runs training for a sequential latent variable model.
+
+ Args:
+ config: A configuration object with config values accessible as properties.
+ Most likely a FLAGS object. For a list of expected properties and their
+ meaning see the flags defined in fivo.py.
+ create_dataset_and_model_fn: If present, calls this function to create a
+ dataset and model instead of create_dataset_and_model() above. The
+ signature must be the same.
+ """
+
+ def create_logging_hook(step, bound_value):
+ """Creates a logging hook that prints the bound value periodically."""
+ bound_label = config.bound + " bound"
+ if config.normalize_by_seq_len:
+ bound_label += " per timestep"
+ else:
+ bound_label += " per sequence"
+ def summary_formatter(log_dict):
+ return "Step %d, %s: %f" % (
+ log_dict["step"], bound_label, log_dict["bound_value"])
+ logging_hook = tf.train.LoggingTensorHook(
+ {"step": step, "bound_value": bound_value},
+ every_n_iter=config.summarize_every,
+ formatter=summary_formatter)
+ return logging_hook
+
+ def create_loss():
+ """Creates the loss to be optimized.
+
+ Returns:
+ bound: A float Tensor containing the value of the bound that is
+ being optimized.
+ loss: A float Tensor that when differentiated yields the gradients
+ to apply to the model. Should be optimized via gradient descent.
+ """
+ inputs, targets, lengths, model, _ = create_dataset_and_model_fn(
+ config, split="train", shuffle=True, repeat=True)
+ # Compute lower bounds on the log likelihood.
+ if config.bound == "elbo":
+ ll_per_seq, _, _ = bounds.iwae(
+ model, (inputs, targets), lengths, num_samples=1,
+ parallel_iterations=config.parallel_iterations
+ )
+ elif config.bound == "iwae":
+ ll_per_seq, _, _ = bounds.iwae(
+ model, (inputs, targets), lengths, num_samples=config.num_samples,
+ parallel_iterations=config.parallel_iterations
+ )
+ elif config.bound in ("fivo", "fivo-aux"):
+ if config.resampling_type == "relaxed":
+ ll_per_seq, _, _, _ = bounds.fivo(
+ model, (inputs, targets),
+ lengths,
+ num_samples=config.num_samples,
+ resampling_criterion=smc.ess_criterion,
+ resampling_type=config.resampling_type,
+ random_seed=config.random_seed,
+ relaxed_resampling_temperature=config.
+ relaxed_resampling_temperature,
+ parallel_iterations=config.parallel_iterations
+ )
+ else:
+ ll_per_seq, _, _, _ = bounds.fivo(
+ model, (inputs, targets), lengths, num_samples=config.num_samples,
+ resampling_criterion=smc.ess_criterion,
+ resampling_type=config.resampling_type,
+ random_seed=config.random_seed,
+ parallel_iterations=config.parallel_iterations
+ )
+ # Compute loss scaled by number of timesteps.
+ ll_per_t = tf.reduce_mean(ll_per_seq / tf.to_float(lengths))
+ ll_per_seq = tf.reduce_mean(ll_per_seq)
+
+ tf.summary.scalar("train_ll_per_seq", ll_per_seq)
+ tf.summary.scalar("train_ll_per_t", ll_per_t)
+
+ if config.normalize_by_seq_len:
+ return ll_per_t, -ll_per_t
+ else:
+ return ll_per_seq, -ll_per_seq
+
+ def create_graph():
+ """Creates the training graph."""
+ global_step = tf.train.get_or_create_global_step()
+ bound, loss = create_loss()
+ opt = tf.train.AdamOptimizer(config.learning_rate)
+ grads = opt.compute_gradients(loss, var_list=tf.trainable_variables())
+ train_op = opt.apply_gradients(grads, global_step=global_step)
+ return bound, train_op, global_step
+
+ device = tf.train.replica_device_setter(ps_tasks=config.ps_tasks)
+ with tf.Graph().as_default():
+ if config.random_seed: tf.set_random_seed(config.random_seed)
+ with tf.device(device):
+ bound, train_op, global_step = create_graph()
+ log_hook = create_logging_hook(global_step, bound)
+ start_training = not config.stagger_workers
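+ # When stagger_workers is enabled, worker i idles until the global step
+ # reaches i * 1000 before it starts running training ops (see the loop below).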
+ with tf.train.MonitoredTrainingSession(
+ master=config.master,
+ is_chief=config.task == 0,
+ hooks=[log_hook],
+ checkpoint_dir=config.logdir,
+ save_checkpoint_secs=120,
+ save_summaries_steps=config.summarize_every,
+ log_step_count_steps=config.summarize_every) as sess:
+ cur_step = -1
+ while not sess.should_stop() and cur_step <= config.max_steps:
+ if config.task > 0 and not start_training:
+ cur_step = sess.run(global_step)
+ tf.logging.info("task %d not active yet, sleeping at step %d" %
+ (config.task, cur_step))
+ time.sleep(30)
+ if cur_step >= config.task * 1000:
+ start_training = True
+ else:
+ _, cur_step = sess.run([train_op, global_step])
+
+
+def run_eval(config, create_dataset_and_model_fn=create_dataset_and_model):
+ """Runs evaluation for a sequential latent variable model.
+
+ This method runs only one evaluation over the dataset, writes summaries to
+ disk, and then terminates. It does not loop indefinitely.
+
+ Args:
+ config: A configuration object with config values accessible as properties.
+ Most likely a FLAGS object. For a list of expected properties and their
+ meaning see the flags defined in fivo.py.
+ create_dataset_and_model_fn: If present, calls this function to create a
+ dataset and model instead of create_dataset_and_model() above. The
+ signature must be the same.
+ """
+
+ def create_graph():
+ """Creates the evaluation graph.
+
+ Returns:
+ lower_bounds: A tuple of float Tensors containing the values of the 3
+ evidence lower bounds, summed across the batch.
+ total_batch_length: The total number of timesteps in the batch, summed
+ across batch examples.
+ batch_size: The batch size.
+ global_step: The global step the checkpoint was loaded from.
+ """
+ global_step = tf.train.get_or_create_global_step()
+ inputs, targets, lengths, model, _ = create_dataset_and_model_fn(
+ config, split=config.split, shuffle=False, repeat=False)
+ # Compute lower bounds on the log likelihood.
+ elbo_ll_per_seq, _, _ = bounds.iwae(
+ model, (inputs, targets), lengths, num_samples=1,
+ parallel_iterations=config.parallel_iterations
+ )
+ iwae_ll_per_seq, _, _ = bounds.iwae(
+ model, (inputs, targets), lengths, num_samples=config.num_samples,
+ parallel_iterations=config.parallel_iterations
+ )
+ # The resampling type should only be used for training, so we ignore it.
+ fivo_ll_per_seq, _, _, _ = bounds.fivo(
+ model, (inputs, targets), lengths, num_samples=config.num_samples,
+ resampling_criterion=smc.ess_criterion, random_seed=config.random_seed,
+ parallel_iterations=config.parallel_iterations
+ )
+ elbo_ll = tf.reduce_sum(elbo_ll_per_seq)
+ iwae_ll = tf.reduce_sum(iwae_ll_per_seq)
+ fivo_ll = tf.reduce_sum(fivo_ll_per_seq)
+ batch_size = tf.shape(lengths)[0]
+ total_batch_length = tf.reduce_sum(lengths)
+ return ((elbo_ll, iwae_ll, fivo_ll), total_batch_length, batch_size,
+ global_step)
+
+ def average_bounds_over_dataset(lower_bounds, total_batch_length, batch_size,
+ sess):
+ """Computes the values of the bounds, averaged over the datset.
+
+ Args:
+ lower_bounds: Tuple of float Tensors containing the values of the bounds
+ evaluated on a single batch.
+ total_batch_length: Integer Tensor that represents the total number of
+ timesteps in the current batch.
+ batch_size: Integer Tensor containing the batch size. This can vary if the
+ requested batch_size does not evenly divide the size of the dataset.
+ sess: A TensorFlow Session object.
+ Returns:
+ ll_per_t: A length 3 numpy array of floats containing each bound's average
+ value, normalized by the total number of timesteps in the dataset. Can
+ be interpreted as a lower bound on the average log likelihood per
+ timestep in the dataset.
+ ll_per_seq: A length 3 numpy array of floats containing each bound's
+ average value, normalized by the number of sequences in the dataset.
+ Can be interpreted as a lower bound on the average log likelihood per
+ sequence in the dataset.
+ """
+ total_ll = np.zeros(3, dtype=np.float64)
+ total_n_elems = 0.0
+ total_length = 0.0
+ while True:
+ try:
+ outs = sess.run([lower_bounds, batch_size, total_batch_length])
+ except tf.errors.OutOfRangeError:
+ break
+ total_ll += outs[0]
+ total_n_elems += outs[1]
+ total_length += outs[2]
+ ll_per_t = total_ll / total_length
+ ll_per_seq = total_ll / total_n_elems
+ return ll_per_t, ll_per_seq
+
+ def summarize_lls(lls_per_t, lls_per_seq, summary_writer, step):
+ """Creates log-likelihood lower bound summaries and writes them to disk.
+
+ Args:
+ lls_per_t: An array of 3 python floats containing the values of the
+ evaluated bounds, normalized by the number of timesteps.
+ lls_per_seq: An array of 3 python floats containing the values of the
+ evaluated bounds, normalized by the number of sequences.
+ summary_writer: A tf.summary.FileWriter.
+ step: The current global step.
+ """
+ def scalar_summary(name, value):
+ value = tf.Summary.Value(tag=name, simple_value=value)
+ return tf.Summary(value=[value])
+
+ for i, bound in enumerate(["elbo", "iwae", "fivo"]):
+ per_t_summary = scalar_summary("%s/%s_ll_per_t" % (config.split, bound),
+ lls_per_t[i])
+ per_seq_summary = scalar_summary("%s/%s_ll_per_seq" %
+ (config.split, bound),
+ lls_per_seq[i])
+ summary_writer.add_summary(per_t_summary, global_step=step)
+ summary_writer.add_summary(per_seq_summary, global_step=step)
+ summary_writer.flush()
+
+ with tf.Graph().as_default():
+ if config.random_seed: tf.set_random_seed(config.random_seed)
+ lower_bounds, total_batch_length, batch_size, global_step = create_graph()
+ summary_dir = config.logdir + "/" + config.split
+ summary_writer = tf.summary.FileWriter(
+ summary_dir, flush_secs=15, max_queue=100)
+ saver = tf.train.Saver()
+ with tf.train.SingularMonitoredSession() as sess:
+ wait_for_checkpoint(saver, sess, config.logdir)
+ step = sess.run(global_step)
+ tf.logging.info("Model restored from step %d, evaluating." % step)
+ ll_per_t, ll_per_seq = average_bounds_over_dataset(
+ lower_bounds, total_batch_length, batch_size, sess)
+ summarize_lls(ll_per_t, ll_per_seq, summary_writer, step)
+ tf.logging.info("%s elbo ll/t: %f, iwae ll/t: %f fivo ll/t: %f",
+ config.split, ll_per_t[0], ll_per_t[1], ll_per_t[2])
+ tf.logging.info("%s elbo ll/seq: %f, iwae ll/seq: %f fivo ll/seq: %f",
+ config.split, ll_per_seq[0], ll_per_seq[1], ll_per_seq[2])
+
+
+def run_sample(config, create_dataset_and_model_fn=create_dataset_and_model):
+ """Sample from the model. Only pianorolls and pose datasets are supported."""
+
+ def sample_from_model(model, initial_state, initial_inputs, mean):
+ """Samples a sequence of outputs from the model.
+
+ The mean must be supplied because sampled outputs are centered by subtracting
+ it before being fed back into the model; if it is omitted the results will be
+ incorrect.
+
+ Args:
+ model: A model with sample_step implemented. See models/vrnn.py for an
+ example.
+ initial_state: The initial state of the model.
+ initial_inputs: The initial inputs to feed into the model.
+ mean: The mean of the training set, a Tensor of shape [data_dimension].
+ Returns:
+ samples: A Tensor of shape [sample_length, batch_size, num_samples,
+ data_dimension] containing the samples from the model.
+ """
+ initial_state, initial_output = model.sample_step(initial_state,
+ initial_inputs, 0)
+ output_ta = tf.TensorArray(size=config.sample_length,
+ dtype=tf.float32,
+ dynamic_size=False,
+ clear_after_read=True)
+ output_ta = output_ta.write(0, initial_output)
+ t0 = tf.constant(1, dtype=tf.int32)
+
+ def sample_step(t, state, prev_outputs, output_ta):
+ state, output = model.sample_step(state, prev_outputs, t)
+ output_ta = output_ta.write(t, output)
+ centered_output = output - mean[tf.newaxis, :]
+ return t+1, state, centered_output, output_ta
+
+ def sample_predicate(t, *unused_args):
+ return t < config.sample_length
+
+ _, _, _, output_ta = tf.while_loop(
+ sample_predicate,
+ sample_step,
+ loop_vars=(t0, initial_state, initial_output, output_ta),
+ parallel_iterations=config.parallel_iterations
+ )
+ samples = output_ta.stack()
+ samples = tf.reshape(samples, [config.sample_length, config.batch_size,
+ config.num_samples, config.data_dimension])
+ return samples
+
+ def create_graph():
+ """Creates the graph to sample from the model.
+
+ First, the model is conditioned on a prefix by sampling a batch of data
+ and trimming it to prefix_length. The configured bound is used to do the
+ conditioning. Then the final state from the conditioning is used to sample
+ from the model.
+
+ Returns:
+ samples: A Tensor of shape [sample_length, batch_size,
+ num_samples, data_dimension] representing samples from the model.
+ prefixes: A Tensor of shape [prefix_length, batch_size, data_dimension]
+ representing the prefixes the model was conditioned on.
+ """
+ inputs, targets, lengths, model, mean = create_dataset_and_model_fn(
+ config, split=config.split, shuffle=True, repeat=True)
+ input_prefixes = inputs[:config.prefix_length]
+ target_prefixes = targets[:config.prefix_length]
+ prefix_lengths = tf.ones_like(lengths) * config.prefix_length
+ if config.bound == "elbo":
+ _, _, state = bounds.iwae(
+ model, (input_prefixes, target_prefixes),
+ prefix_lengths, num_samples=1)
+ elif config.bound == "iwae":
+ _, _, state = bounds.iwae(
+ model, (input_prefixes, target_prefixes),
+ prefix_lengths, num_samples=config.num_samples)
+ elif config.bound == "fivo":
+ _, _, _, state = bounds.fivo(
+ model, (input_prefixes, target_prefixes), prefix_lengths,
+ num_samples=config.num_samples,
+ resampling_criterion=smc.ess_criterion,
+ random_seed=config.random_seed)
+ sample_inputs = tf.tile(inputs[config.prefix_length],
+ [config.num_samples, 1])
+ samples = sample_from_model(model, state, sample_inputs, mean)
+ return samples, target_prefixes
+
+ with tf.Graph().as_default():
+ if config.random_seed:
+ tf.set_random_seed(config.random_seed)
+ samples, prefixes = create_graph()
+ if config.sample_out_dir:
+ out_dir = config.sample_out_dir
+ else:
+ out_dir = config.logdir
+ if not tf.gfile.Exists(out_dir):
+ tf.gfile.MakeDirs(out_dir)
+ with tf.train.SingularMonitoredSession(
+ checkpoint_dir=config.logdir) as sess:
+ samples_out, prefixes_out = sess.run([samples, prefixes])
+ with tf.gfile.Open(os.path.join(out_dir, "samples.npz"), "w") as fout:
+ np.save(fout, {"prefixes": prefixes_out, "samples": samples_out})
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/runners_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/runners_test.py"
new file mode 100644
index 00000000..eb050c0a
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/runners_test.py"
@@ -0,0 +1,242 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for fivo.runners"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import numpy as np
+import tensorflow as tf
+
+from fivo import runners
+from fivo.models import base
+from fivo.models import vrnn
+
+FLAGS = tf.app.flags.FLAGS
+
+
+class RunnersTest(tf.test.TestCase):
+
+ def default_config(self):
+ class Config(object):
+ pass
+ config = Config()
+ config.model = "vrnn"
+ config.latent_size = 64
+ config.batch_size = 4
+ config.num_samples = 4
+ config.resampling_type = "multinomial"
+ config.normalize_by_seq_len = True
+ config.learning_rate = 0.0001
+ config.max_steps = int(1e6)
+ config.summarize_every = 50
+ # Master must be "" to prevent state from persisting between sessions.
+ config.master = ""
+ config.task = 0
+ config.ps_tasks = 0
+ config.stagger_workers = True
+ config.random_seed = 1234
+ config.parallel_iterations = 1
+ config.dataset_type = "pianoroll"
+ config.data_dimension = None
+ config.dataset_path = os.path.join(
+ os.path.dirname(os.path.realpath(__file__)),
+ "test_data", "tiny_pianoroll.pkl")
+ config.proposal_type = "filtering"
+ return config
+
+ def run_training_one_step(self, bound, dataset_type, data_dimension,
+ dataset_filename, dir_prefix, resampling_type,
+ model, batch_size=2, num_samples=3,
+ create_dataset_and_model_fn=(runners.create_dataset_and_model)):
+ config = self.default_config()
+ config.model = model
+ config.resampling_type = resampling_type
+ config.relaxed_resampling_temperature = 0.5
+ config.bound = bound
+ config.split = "train"
+ config.dataset_type = dataset_type
+ config.dataset_path = os.path.join(
+ os.path.dirname(os.path.realpath(__file__)),
+ "test_data",
+ dataset_filename)
+ config.max_steps = 1
+ config.batch_size = batch_size
+ config.num_samples = num_samples
+ config.latent_size = 4
+ config.data_dimension = data_dimension
+ config.logdir = os.path.join(tf.test.get_temp_dir(), "%s-%s-%s-%s" %
+ (dir_prefix, bound, dataset_type, model))
+ runners.run_train(config,
+ create_dataset_and_model_fn=create_dataset_and_model_fn)
+ return config
+
+ def dummmy_dataset_and_model_fn(self, *unused_args, **unused_kwargs):
+ # We ignore the arguments in the dummy but must keep the same signature.
+ batch_elements = 5
+ sequence_length = 4
+ data_dimensions = 3
+ dataset = tf.data.Dataset.from_tensors(
+ tf.zeros((sequence_length, batch_elements, data_dimensions),
+ dtype=tf.float32))
+ inputs = dataset.make_one_shot_iterator().get_next()
+ targets = tf.zeros_like(inputs)
+ lengths = tf.constant([sequence_length] * batch_elements)
+ mean = tf.constant((0.0, 0.0, 0.0))
+ model = vrnn.create_vrnn(data_dimensions, 1,
+ base.ConditionalNormalDistribution)
+ return inputs, targets, lengths, model, mean
+
+ def test_training_one_step_fivo_pianoroll_vrnn(self):
+ self.run_training_one_step("fivo", "pianoroll", 88, "tiny_pianoroll.pkl",
+ "test-training", "multinomial", "vrnn")
+
+ def test_training_one_step_iwae_pianoroll_vrnn(self):
+ self.run_training_one_step("iwae", "pianoroll", 88, "tiny_pianoroll.pkl",
+ "test-training", "multinomial", "vrnn")
+
+ def test_training_one_step_elbo_pianoroll_vrnn(self):
+ self.run_training_one_step("elbo", "pianoroll", 88, "tiny_pianoroll.pkl",
+ "test-training", "multinomial", "vrnn")
+
+ def test_training_one_step_fivo_speech_vrnn(self):
+ self.run_training_one_step("fivo", "speech", 2, "tiny_speech_dataset.tfrecord",
+ "test-training", "multinomial", "vrnn")
+
+ def test_training_one_step_iwae_speech_vrnn(self):
+ self.run_training_one_step("iwae", "speech", 2, "tiny_speech_dataset.tfrecord",
+ "test-training", "multinomial", "vrnn")
+
+ def test_training_one_step_elbo_speech_vrnn(self):
+ self.run_training_one_step("elbo", "speech", 2, "tiny_speech_dataset.tfrecord",
+ "test-training", "multinomial", "vrnn")
+
+ def test_training_one_step_fivo_pianoroll_srnn(self):
+ self.run_training_one_step("fivo", "pianoroll", 88, "tiny_pianoroll.pkl",
+ "test-training", "multinomial", "srnn")
+
+ def test_training_one_step_iwae_pianoroll_srnn(self):
+ self.run_training_one_step("iwae", "pianoroll", 88, "tiny_pianoroll.pkl",
+ "test-training", "multinomial", "srnn")
+
+ def test_training_one_step_elbo_pianoroll_srnn(self):
+ self.run_training_one_step("elbo", "pianoroll", 88, "tiny_pianoroll.pkl",
+ "test-training", "multinomial", "srnn")
+
+ def test_training_one_step_fivo_speech_srnn(self):
+ self.run_training_one_step("fivo", "speech", 2, "tiny_speech_dataset.tfrecord",
+ "test-training", "multinomial", "srnn")
+
+ def test_training_one_step_iwae_speech_srnn(self):
+ self.run_training_one_step("iwae", "speech", 2, "tiny_speech_dataset.tfrecord",
+ "test-training", "multinomial", "srnn")
+
+ def test_training_one_step_elbo_speech_srnn(self):
+ self.run_training_one_step("elbo", "speech", 2, "tiny_speech_dataset.tfrecord",
+ "test-training", "multinomial", "srnn")
+
+ def test_training_one_step_fivo_pianoroll_vrnn_relaxed(self):
+ self.run_training_one_step("fivo", "pianoroll", 88, "tiny_pianoroll.pkl",
+ "test-training", "relaxed", "vrnn")
+
+ def test_training_one_step_iwae_pianoroll_vrnn_relaxed(self):
+ self.run_training_one_step("iwae", "pianoroll", 88, "tiny_pianoroll.pkl",
+ "test-training", "relaxed", "vrnn")
+
+ def test_training_one_step_elbo_pianoroll_vrnn_relaxed(self):
+ self.run_training_one_step("elbo", "pianoroll", 88, "tiny_pianoroll.pkl",
+ "test-training", "relaxed", "vrnn")
+
+ def test_training_one_step_fivo_pianoroll_srnn_relaxed(self):
+ self.run_training_one_step("fivo", "pianoroll", 88, "tiny_pianoroll.pkl",
+ "test-training", "relaxed", "srnn")
+
+ def test_training_one_step_iwae_pianoroll_srnn_relaxed(self):
+ self.run_training_one_step("iwae", "pianoroll", 88, "tiny_pianoroll.pkl",
+ "test-training", "relaxed", "srnn")
+
+ def test_training_one_step_elbo_pianoroll_srnn_relaxed(self):
+ self.run_training_one_step("elbo", "pianoroll", 88, "tiny_pianoroll.pkl",
+ "test-training", "relaxed", "srnn")
+
+ def test_eval_vrnn(self):
+ self.run_eval("vrnn")
+
+ def test_eval_srnn(self):
+ self.run_eval("srnn")
+
+ def run_eval(self, model):
+ config = self.run_training_one_step(
+ "fivo", "pianoroll", 88, "tiny_pianoroll.pkl", "test-eval-" + model,
+ "multinomial", model)
+ config.split = "train"
+ runners.run_eval(config)
+
+ def test_sampling_vrnn(self):
+ self.run_sampling("vrnn")
+
+ def test_sampling_srnn(self):
+ self.run_sampling("srnn")
+
+ def run_sampling(self, model):
+ """Test sampling from the model."""
+ config = self.run_training_one_step(
+ "fivo", "pianoroll", 88, "tiny_pianoroll.pkl", "test-sampling", "multinomial",
+ model)
+ config.prefix_length = 3
+ config.sample_length = 6
+ config.split = "train"
+ config.sample_out_dir = None
+
+ runners.run_sample(config)
+ unused_samples = np.load(os.path.join(config.logdir, "samples.npz"))
+
+ def test_training_with_custom_fn(self):
+ self.run_training_one_step(
+ "fivo", "pianoroll", 3, "tiny_pianoroll.pkl",
+ "test-training-custom-fn", "multinomial", "vrnn", batch_size=5,
+ create_dataset_and_model_fn=self.dummmy_dataset_and_model_fn)
+
+ def test_eval_with_custom_fn(self):
+ config = self.run_training_one_step(
+ "fivo", "pianoroll", 1, "tiny_pianoroll.pkl",
+ "test-eval-custom-fn", "multinomial", "vrnn", batch_size=1,
+ create_dataset_and_model_fn=self.dummmy_dataset_and_model_fn)
+ config.split = "train"
+ runners.run_eval(
+ config,
+ create_dataset_and_model_fn=self.dummmy_dataset_and_model_fn)
+
+ def test_sampling_with_custom_fn(self):
+ config = self.run_training_one_step(
+ "fivo", "pianoroll", 3, "tiny_pianoroll.pkl",
+ "test-sample-custom-fn", "multinomial", "vrnn", batch_size=5,
+ create_dataset_and_model_fn=self.dummmy_dataset_and_model_fn)
+ config.prefix_length = 2
+ config.sample_length = 3
+ config.split = "train"
+ config.sample_out_dir = None
+
+ runners.run_sample(
+ config,
+ create_dataset_and_model_fn=self.dummmy_dataset_and_model_fn)
+ unused_samples = np.load(os.path.join(config.logdir, "samples.npz"))
+
+
+if __name__ == "__main__":
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/smc.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/smc.py"
new file mode 100644
index 00000000..25d49690
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/smc.py"
@@ -0,0 +1,338 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Implementation of sequential Monte Carlo algorithms.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+import fivo.nested_utils as nested
+
+
+def ess_criterion(log_weights, unused_t):
+ """A criterion that resamples based on effective sample size."""
+ num_particles = tf.shape(log_weights)[0]
+ # Calculate the effective sample size.
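+ # ESS = (sum_i w_i)^2 / sum_i w_i^2, computed in log space for stability.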
+ ess_num = 2 * tf.reduce_logsumexp(log_weights, axis=0)
+ ess_denom = tf.reduce_logsumexp(2 * log_weights, axis=0)
+ log_ess = ess_num - ess_denom
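+ # Resample when the effective sample size drops to num_particles / 2 or below.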
+ return log_ess <= tf.log(tf.to_float(num_particles) / 2.0)
+
+
+def never_resample_criterion(log_weights, unused_t):
+ """A criterion that never resamples."""
+ batch_size = tf.shape(log_weights)[1]
+ return tf.cast(tf.zeros([batch_size]), tf.bool)
+
+
+def always_resample_criterion(log_weights, unused_t):
+ """A criterion resamples at every timestep."""
+ batch_size = tf.shape(log_weights)[1]
+ return tf.cast(tf.ones([batch_size]), tf.bool)
+
+
+def multinomial_resampling(log_weights, states, num_particles, batch_size,
+ random_seed=None):
+ """Resample states with multinomial resampling.
+
+ Args:
+ log_weights: A [num_particles, batch_size] Tensor representing a batch
+ of batch_size logits for num_particles-ary Categorical distribution.
+ states: A nested list of [batch_size*num_particles, data_size] Tensors that
+ will be resampled; the particles belonging to each filter occupy every
+ batch_size-th row.
+ num_particles: The number of particles/samples.
+ batch_size: The batch size.
+ random_seed: The random seed to pass to the resampling operations in
+ the particle filter. Mainly useful for testing.
+
+ Returns:
+ resampled_states: A nested list of [batch_size*num_particles, data_size]
+ Tensors resampled via multinomial sampling.
+ """
+ # Calculate the ancestor indices via resampling. Because we maintain the
+ # log unnormalized weights, we pass the weights in as logits, allowing
+ # the distribution object to apply a softmax and normalize them.
+ resampling_parameters = tf.transpose(log_weights, perm=[1, 0])
+ resampling_dist = tf.contrib.distributions.Categorical(
+ logits=resampling_parameters)
+ ancestors = tf.stop_gradient(
+ resampling_dist.sample(sample_shape=num_particles, seed=random_seed))
+
+ # Because the batch is flattened, we must modify ancestor_inds to index the
+ # proper samples. The particles in the ith filter are distributed every
+ # batch_size rows in the batch, and offset i rows from the top. So, to
+ # correct the indices we multiply by the batch_size and add the proper offset.
+ # Crucially, when ancestor_inds is flattened the layout of the batch is
+ # maintained.
+ offset = tf.expand_dims(tf.range(batch_size), 0)
+ ancestor_inds = tf.reshape(ancestors * batch_size + offset, [-1])
+
+ resampled_states = nested.gather_tensors(states, ancestor_inds)
+ return resampled_states
+
+
+def _blend_tensor(blending_weights, tensor, num_particles, batch_size):
+ """Blend tensor according to the weights.
+
+ The first dimension of tensor is actually a 2d index compacted to a 1d
+ index and similarly for blended_tensor. So if we index these Tensors
+ by [(i, j), k], then
+
+ blended_tensor[(i, j), k] =
+ sum_l tensor[(l, j), :] * blending_weights[i, j, l].
+
+ Args:
+ blending_weights: [num_particles, batch_size, num_particles] weights where
+ the indices represent [sample index, batch index, blending weight index].
+ tensor: [num_particles * batch_size, state_dim] Tensor to be blended.
+ num_particles: The number of particles/samples.
+ batch_size: The batch size.
+
+ Returns:
+ blended_tensor: [num_particles*batch_size, state_dim] blended Tensor.
+ """
+ # tensor is currently [num_particles * batch_size, state_dim], so we reshape
+ # it to [num_particles, batch_size, state_dim]. Then, transpose it to
+ # [batch_size, state_size, num_particles].
+ tensor = tf.transpose(
+ tf.reshape(tensor, [num_particles, batch_size, -1]), perm=[1, 2, 0])
+ blending_weights = tf.transpose(blending_weights, perm=[1, 2, 0])
+ # blending_weights is [batch index, blending weight index, sample index].
+ # Multiplying these gives a matrix of size [batch_size, state_size,
+ # num_particles].
+ tensor = tf.matmul(tensor, blending_weights)
+ # transpose the tensor to be [num_particles, batch_size, state_size]
+ # and then reshape it to match the original format.
+ tensor = tf.reshape(tf.transpose(tensor, perm=[2, 0, 1]),
+ [num_particles*batch_size, -1])
+ return tensor
+
+
+def relaxed_resampling(log_weights, states, num_particles, batch_size,
+ temperature=0.5, random_seed=None):
+ """Resample states with relaxed resampling.
+
+ Draw soft "ancestors" using the Gumbel-Softmax distribution.
+
+ Args:
+ log_weights: A [num_particles, batch_size] Tensor representing a batch
+ of batch_size logits for num_particles-ary Categorical distribution.
+ states: A nested list of [batch_size * num_particles, d] Tensors that will
+ be resampled; the particles belonging to each filter occupy every
+ batch_size-th row.
+ num_particles: The number of particles/samples.
+ batch_size: The batch size.
+ temperature: The temperature used for the relaxed one hot distribution.
+ random_seed: The random seed to pass to the resampling operations in
+ the particle filter. Mainly useful for testing.
+
+ Returns:
+ resampled_states: A nested list of [batch_size * num_particles, d]
+ Tensors blended via relaxed (Gumbel-Softmax) resampling.
+ """
+ # log_weights are [num_particles, batch_size], so we transpose to get a
+ # set of batch_size distributions over [0, num_particles).
+ resampling_parameters = tf.transpose(log_weights, perm=[1, 0])
+ resampling_dist = tf.contrib.distributions.RelaxedOneHotCategorical(
+ temperature,
+ logits=resampling_parameters)
+
+ # Sample num_particles samples from the distribution, resulting in a
+ # [num_particles, batch_size, num_particles] Tensor that represents a set of
+ # [num_particles, batch_size] blending weights. The dimensions represent
+ # [particle index, batch index, blending weight index].
+ ancestors = resampling_dist.sample(sample_shape=num_particles,
+ seed=random_seed)
+ def map_fn(tensor):
+ return _blend_tensor(ancestors, tensor, num_particles, batch_size)
+
+ resampled_states = nested.map_nested(map_fn, states)
+ return resampled_states
+
+
+def smc(
+ transition_fn,
+ num_steps,
+ num_particles=1,
+ resampling_criterion=ess_criterion,
+ resampling_fn=multinomial_resampling,
+ loop_fn=None,
+ parallel_iterations=30,
+ swap_memory=True):
+ """Run a sequential Monte Carlo (SMC) algorithm.
+
+ This method runs an SMC algorithm that evolves systems of particles
+ using the supplied transition function for the specified number of steps. The
+ particles are optionally resampled using resampling_fn when indicated by
+ resampling_criterion.
+
+ Args:
+ transition_fn: A callable that propagates a batch of particles one step.
+ Must accept as arguments a batch of particle states and the current
+ timestep. Must return the incremental log weights of each particle as a
+ [num_particles*batch_size] float Tensor, the particle states one timestep
+ in the future, and optionally a set of arguments to pass to the loop_fn.
+ If the loop args are not provided, they will be set to None. Before the
+ first timestep transition_fn will be called with the arguments None, -1
+ and should return the initial particle states.
+ num_steps: A [batch_size] Tensor of ints representing the number of steps
+ to run each filter for.
+ num_particles: A scalar int, the number of particles to use in each filter.
+ resampling_criterion: The resampling criterion to use for this particle
+ filter. Must accept the current log weights and timestep and
+ return a boolean Tensor of shape [batch_size] indicating whether each
+ particle filter should resample. See ess_criterion and related functions
+ for examples. When resampling_criterion is never_resample_criterion,
+ resampling_fn is ignored and never called.
+ resampling_fn: A callable that performs the resampling operation. Must
+ accept as arguments the log weights, particle states, num_particles,
+ and batch_size and return the resampled particle states. See
+ multinomial_resampling and relaxed_resampling for examples.
+ loop_fn: A callable that performs operations on the weights and
+ particle states, useful for accumulating and processing state that
+ shouldn't be resampled. At each timestep after (possibly) resampling
+ loop_fn will be called with the previous loop_state, a set of arguments
+ produced by transition_fn called loop_args, the resampled particle states,
+ the current log weights as [num_particles, batch_size] float Tensor, a
+ [batch_size] float Tensor representing whether or not each filter
+ resampled, the current mask indicating which filters are active, and the
+ current timestep. It must return the next loop state. Before the first
+ timestep loop_fn will be called with the arguments None, None, None, None,
+ -1 and must return the initial loop state. The loop state can be a
+ possibly nested structure of Tensors and TensorArrays.
+ parallel_iterations: The number of parallel iterations to use for the
+ internal while loop. Note that values greater than 1 can introduce
+ non-determinism even when resampling is deterministic.
+ swap_memory: Whether GPU-CPU memory swapping should be enabled for the
+ internal while loop.
+
+ Returns:
+ log_z_hat: A Tensor of shape [batch_size] containing an estimate of the log
+ normalizing constant that converts between the unnormalized target
+ distribution (as defined by the weights) and the true target distribution.
+ log_weights: A Tensor of shape [max_num_steps, batch_size, num_particles]
+ containing the log weights at each timestep of the particle filter.
+ Will not be valid for timesteps past the supplied num_steps.
+ resampled: A float Tensor of shape [max_num_steps, batch_size] indicating
+ when the particle filters resampled. Will be 1.0 on timesteps when
+ resampling occurred and 0.0 on timesteps when it did not.
+ final_loop_state: The final state returned by loop_fn. If loop_fn is None
+ then 0 will be returned.
+ """
+ # batch_size represents the number of particle filters running in parallel.
+ batch_size = tf.shape(num_steps)[0]
+ # Create a TensorArray where element t is the [num_particles*batch_size]
+ # sequence mask for timestep t.
+ max_num_steps = tf.reduce_max(num_steps)
+ seq_mask = tf.transpose(
+ tf.sequence_mask(num_steps, maxlen=max_num_steps, dtype=tf.float32),
+ perm=[1, 0])
+ seq_mask = tf.tile(seq_mask, [1, num_particles])
+ mask_ta = tf.TensorArray(seq_mask.dtype,
+ max_num_steps,
+ name='mask_ta')
+ mask_ta = mask_ta.unstack(seq_mask)
+ # Initialize the state.
+ t0 = tf.constant(0, tf.int32)
+ init_particle_state = transition_fn(None, -1)
+
+ def transition(*args):
+ transition_outs = transition_fn(*args)
+ if len(transition_outs) == 2:
+ return transition_outs + (None,)
+ else:
+ return transition_outs
+
+ if loop_fn is None:
+ loop_fn = lambda *args: 0
+
+ init_loop_state = loop_fn(None, None, None, None, None, None, -1)
+ init_states = (init_particle_state, init_loop_state)
+ ta_names = ['log_weights', 'resampled']
+ tas = [tf.TensorArray(tf.float32, max_num_steps, name='%s_ta' % n)
+ for n in ta_names]
+ log_weights_acc = tf.zeros([num_particles, batch_size], dtype=tf.float32)
+ log_z_hat_acc = tf.zeros([batch_size], dtype=tf.float32)
+
+ def while_predicate(t, *unused_args):
+ return t < max_num_steps
+
+ def while_step(t, state, tas, log_weights_acc, log_z_hat_acc):
+ """Implements one timestep of the particle filter."""
+ particle_state, loop_state = state
+ cur_mask = nested.read_tas(mask_ta, t)
+ # Propagate the particles one step.
+ log_alpha, new_particle_state, loop_args = transition(particle_state, t)
+ # Update the current weights with the incremental weights.
+ log_alpha *= cur_mask
+ log_alpha = tf.reshape(log_alpha, [num_particles, batch_size])
+ log_weights_acc += log_alpha
+
+ should_resample = resampling_criterion(log_weights_acc, t)
+
+ if resampling_criterion == never_resample_criterion:
+ resampled = tf.to_float(should_resample)
+ else:
+ # Compute the states as if we did resample.
+ resampled_states = resampling_fn(
+ log_weights_acc,
+ new_particle_state,
+ num_particles,
+ batch_size)
+ # Decide whether or not we should resample; don't resample if we are past
+ # the end of a sequence.
+ should_resample = tf.logical_and(should_resample,
+ cur_mask[:batch_size] > 0.)
+ float_should_resample = tf.to_float(should_resample)
+ new_particle_state = nested.where_tensors(
+ tf.tile(should_resample, [num_particles]),
+ resampled_states,
+ new_particle_state)
+ resampled = float_should_resample
+
+ new_loop_state = loop_fn(loop_state, loop_args, new_particle_state,
+ log_weights_acc, resampled, cur_mask, t)
+ # Update log Z hat.
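+ # The estimate accumulates log(mean weight) at every resampling event
+ # (the weights are reset to zero below) and unconditionally on the final
+ # timestep, so that any remaining unresampled weight mass is counted.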
+ log_z_hat_update = tf.reduce_logsumexp(
+ log_weights_acc, axis=0) - tf.log(tf.to_float(num_particles))
+ # If it is the last timestep, always add the update.
+ log_z_hat_acc += tf.cond(t < max_num_steps - 1,
+ lambda: log_z_hat_update * resampled,
+ lambda: log_z_hat_update)
+ # Update the TensorArrays before we reset the weights so that we capture
+ # the incremental weights and not zeros.
+ ta_updates = [log_weights_acc, resampled]
+ new_tas = [ta.write(t, x) for ta, x in zip(tas, ta_updates)]
+ # For the particle filters that resampled, reset weights to zero.
+ log_weights_acc *= (1. - tf.tile(resampled[tf.newaxis, :],
+ [num_particles, 1]))
+ new_state = (new_particle_state, new_loop_state)
+ return t + 1, new_state, new_tas, log_weights_acc, log_z_hat_acc
+
+ _, final_state, tas, _, log_z_hat = tf.while_loop(
+ while_predicate,
+ while_step,
+ loop_vars=(t0, init_states, tas, log_weights_acc, log_z_hat_acc),
+ parallel_iterations=parallel_iterations,
+ swap_memory=swap_memory)
+
+ log_weights, resampled = [x.stack() for x in tas]
+ log_weights = tf.transpose(log_weights, perm=[0, 2, 1])
+ final_particle_state, final_loop_state = final_state
+ return (log_z_hat, log_weights, resampled,
+ final_particle_state, final_loop_state)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/smc_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/smc_test.py"
new file mode 100644
index 00000000..ae32a62f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/smc_test.py"
@@ -0,0 +1,241 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for fivo.smc."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+import scipy
+import tensorflow as tf
+
+from fivo import smc
+
+lse = scipy.special.logsumexp
+
+
+def _simple_transition_fn(state, unused_t):
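+ # When called with state=None this returns only the initial particle state;
+ # afterwards it returns (incremental log weights, new particle state).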
+ if state is None:
+ return tf.zeros([4], dtype=tf.float32)
+ return tf.constant([5., 4., 1., 0.5]), tf.zeros([4], dtype=tf.float32)
+
+
+def _resample_at_step_criterion(step):
+ """A criterion that resamples once at a specific timestep."""
+ def criterion(log_weights, t):
+ batch_size = tf.shape(log_weights)[1]
+ return tf.fill([batch_size], tf.equal(t, step))
+ return criterion
+
+
+class SMCTest(tf.test.TestCase):
+
+ def test_never_resampling(self):
+ """Test that never_resample_criterion makes smc not resample.
+
+ Also test that the weights and log_z_hat are computed correctly when never
+ resampling.
+ """
+ tf.set_random_seed(1234)
+ with self.test_session() as sess:
+ outs = smc.smc(
+ _simple_transition_fn,
+ num_steps=tf.convert_to_tensor([5, 3]),
+ num_particles=2,
+ resampling_criterion=smc.never_resample_criterion)
+ log_z_hat, weights, resampled = sess.run(outs[0:3])
+ gt_weights = np.array(
+ [[[5, 1], [4, .5]],
+ [[10, 2], [8, 1]],
+ [[15, 3], [12, 1.5]],
+ [[20, 4], [12, 1.5]],
+ [[25, 5], [12, 1.5]]],
+ dtype=np.float32)
+ gt_log_z_hat = np.array(
+ [lse([25, 5]) - np.log(2),
+ lse([12, 1.5]) - np.log(2)],
+ dtype=np.float32)
+ self.assertAllClose(gt_log_z_hat, log_z_hat)
+ self.assertAllClose(gt_weights, weights)
+ self.assertAllEqual(np.zeros_like(resampled), resampled)
+
+ def test_always_resampling(self):
+ """Test always_resample_criterion makes smc always resample.
+
+ Past a sequence end the filter should not resample, however.
+ Also check that weights and log_z_hat estimate are correct.
+ """
+ tf.set_random_seed(1234)
+ with self.test_session() as sess:
+ outs = smc.smc(
+ _simple_transition_fn,
+ num_steps=tf.convert_to_tensor([5, 3]),
+ num_particles=2,
+ resampling_criterion=smc.always_resample_criterion)
+ log_z_hat, weights, resampled = sess.run(outs[0:3])
+ gt_weights = np.array(
+ [[[5, 1], [4, .5]],
+ [[5, 1], [4, .5]],
+ [[5, 1], [4, .5]],
+ [[5, 1], [0., 0.]],
+ [[5, 1], [0., 0.]]],
+ dtype=np.float32)
+ gt_log_z_hat = np.array(
+ [5*lse([5, 1]) - 5*np.log(2),
+ 3*lse([4, .5]) - 3*np.log(2)],
+ dtype=np.float32)
+ gt_resampled = np.array(
+ [[1, 1], [1, 1], [1, 1], [1, 0], [1, 0]],
+ dtype=np.float32)
+ self.assertAllClose(gt_log_z_hat, log_z_hat)
+ self.assertAllClose(gt_weights, weights)
+ self.assertAllEqual(gt_resampled, resampled)
+
+ def test_weights_reset_when_resampling_at_sequence_end(self):
+ """Test that the weights are reset when resampling at the sequence end.
+
+ When resampling happens on the last timestep of a sequence the weights
+ should be set to zero on the next timestep and remain zero afterwards.
+ """
+ tf.set_random_seed(1234)
+ with self.test_session() as sess:
+ outs = smc.smc(
+ _simple_transition_fn,
+ num_steps=tf.convert_to_tensor([5, 3]),
+ num_particles=2,
+ resampling_criterion=_resample_at_step_criterion(2))
+ log_z_hat, weights, resampled = sess.run(outs[0:3])
+ gt_log_z = np.array(
+ [lse([15, 3]) + lse([10, 2]) - 2*np.log(2),
+ lse([12, 1.5]) - np.log(2)],
+ dtype=np.float32)
+ gt_resampled = np.array(
+ [[0, 0], [0, 0], [1, 1], [0, 0], [0, 0]],
+ dtype=np.float32)
+ gt_weights = np.array(
+ [[[5, 1], [4, .5]],
+ [[10, 2], [8, 1]],
+ [[15, 3], [12, 1.5]],
+ [[5, 1], [0, 0]],
+ [[10, 2], [0, 0]]],
+ dtype=np.float32)
+ self.assertAllClose(gt_log_z, log_z_hat)
+ self.assertAllEqual(gt_resampled, resampled)
+ self.assertAllEqual(gt_weights, weights)
+
+ def test_weights_not_updated_past_sequence_end(self):
+ """Test that non-zero weights are not updated past the end of a sequence."""
+ tf.set_random_seed(1234)
+ with self.test_session() as sess:
+ outs = smc.smc(
+ _simple_transition_fn,
+ num_steps=tf.convert_to_tensor([6, 4]),
+ num_particles=2,
+ resampling_criterion=_resample_at_step_criterion(1))
+ log_z_hat, weights, resampled = sess.run(outs[0:3])
+ gt_log_z_hat = np.array(
+ [lse([10, 2]) + lse([20, 4]) - 2*np.log(2),
+ lse([8, 1]) + lse([8, 1]) - 2*np.log(2)],
+ dtype=np.float32)
+ # Ensure that we only resample on the 2nd timestep.
+ gt_resampled = np.array(
+ [[0, 0], [1, 1], [0, 0], [0, 0], [0, 0], [0, 0]],
+ dtype=np.float32)
+ # Ensure that the weights after the end of the sequence don't change.
+ # Ensure that the weights after resampling before the end of the sequence
+ # do change.
+ gt_weights = np.array(
+ [[[5, 1], [4, .5]],
+ [[10, 2], [8, 1]],
+ [[5, 1], [4, .5]],
+ [[10, 2], [8, 1]],
+ [[15, 3], [8, 1]],
+ [[20, 4], [8, 1]]],
+ dtype=np.float32)
+ self.assertAllClose(gt_log_z_hat, log_z_hat)
+ self.assertAllEqual(gt_resampled, resampled)
+ self.assertAllEqual(gt_weights, weights)
+
+ def test_resampling_on_max_num_steps(self):
+ """Test that everything is correct when resampling on step max_num_steps.
+
+ When resampling on step max_num_steps (i.e. the last step of the longest
+ sequence), ensure that there are no off-by-one errors preventing resampling
+ and also that the weights are not updated.
+ """
+ tf.set_random_seed(1234)
+ with self.test_session() as sess:
+ outs = smc.smc(
+ _simple_transition_fn,
+ num_steps=tf.convert_to_tensor([4, 2]),
+ num_particles=2,
+ resampling_criterion=_resample_at_step_criterion(3))
+ log_z_hat, weights, resampled = sess.run(outs[0:3])
+ gt_log_z_hat = np.array(
+ [lse([20, 4]) - np.log(2),
+ lse([8, 1]) - np.log(2)],
+ dtype=np.float32)
+ # Ensure that we only resample on the 3rd timestep and that the second
+ # filter doesn't resample at all because it is only run for 2 steps.
+ gt_resampled = np.array(
+ [[0, 0], [0, 0], [0, 0], [1, 0]],
+ dtype=np.float32)
+ gt_weights = np.array(
+ [[[5, 1], [4, .5]],
+ [[10, 2], [8, 1]],
+ [[15, 3], [8, 1]],
+ [[20, 4], [8, 1]]],
+ dtype=np.float32)
+ self.assertAllClose(gt_log_z_hat, log_z_hat)
+ self.assertAllEqual(gt_resampled, resampled)
+ self.assertAllEqual(gt_weights, weights)
+
+ def test_multinomial_resampling(self):
+ """Test that mulitnomial resampling selects the correct states."""
+ tf.set_random_seed(1234)
+ with self.test_session() as sess:
+ # Setup input.
+ inf = 1000.0 # Very large value in log space.
+ num_samples = 2
+ batch_size = 2
+ log_weights = tf.convert_to_tensor([[inf, 0], [0, inf]])
+ states = tf.convert_to_tensor([1, 2, 3, 4])
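+ # Batch element 0 puts essentially all weight on particle 0 (state 1) and
+ # batch element 1 on particle 1 (state 4), so every resampled particle
+ # should equal 1 or 4 respectively.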
+ # Run test.
+ resampled_states = smc.multinomial_resampling(
+ log_weights, states, num_samples, batch_size, random_seed=0)
+ resampled_states_values = sess.run(resampled_states)
+ self.assertAllEqual(resampled_states_values, [1, 4, 1, 4])
+
+ def test_blend_tensor(self):
+ """Test that relaxed resampling blends the correct states."""
+ tf.set_random_seed(1234)
+ with self.test_session() as sess:
+ # Setup input.
+ num_samples = 2
+ batch_size = 2
+ blending_weights = tf.convert_to_tensor(
+ [[[0.5, 0.5], [0.25, 0.75]], [[0.75, 0.25], [0.5, 0.5]]])
+ states = tf.convert_to_tensor([4., 8., 12., 16.])
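+ # With rows laid out as particle * batch_size + batch, the expected blends
+ # are e.g. 0.5*4 + 0.5*12 = 8 for (particle 0, batch 0) and
+ # 0.25*8 + 0.75*16 = 14 for (particle 0, batch 1), giving [8., 14., 6., 12.].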
+ # Run test.
+ blended_states = smc._blend_tensor(blending_weights, states,
+ num_samples, batch_size)
+ blended_states_values = sess.run(blended_states)
+ self.assertAllClose(blended_states_values[:, 0], [8., 14., 6., 12.])
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/test_data/tiny_pianoroll.pkl" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/test_data/tiny_pianoroll.pkl"
new file mode 100644
index 00000000..c5501c6c
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/test_data/tiny_pianoroll.pkl"
@@ -0,0 +1,10979 @@
+(dp1
+S'train_mean'
+p2
+cnumpy.core.multiarray
+_reconstruct
+p3
+(cnumpy
+ndarray
+p4
+(I0
+tS'b'
+tRp5
+(I1
+(I88
+tcnumpy
+dtype
+p6
+(S'f8'
+I0
+I1
+tRp7
+(I3
+S'<'
+NNNI-1
+I-1
+I0
+tbI00
+S'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x9e0^X\xbez,?\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x9e0^X\xbez\x00\x00\x00\x00\x00\x00\x00\x00\xde\xecw\x04\xe1\xe8\x90?\\\x9c\xe1\x08\xef\x9c@?\\\x9c\xe1\x08\xef\x9cp?Y\xfb\xb4\x11\x0b\x05\x80?&\xc3MBx\xf6}?\xb4+\xe5\xc8#\xea\x94?}\xe6\x9f\xb0\xd6\x8bf?`\xa9\xbfQ\xa9\xec\xac?\x9d\xc4\xac\x06\xe8\xc2\x90?&\xc3MBx\xf6\x9d?\x02\xd8b\xa3\xaco\x97?\xb0\x8a\xb8\xd1?R\x94?\xbeMQ\x11F\x93\xc7?\x1bt\x16\x0b\xf6v\x80?\x85\xf0\xec;\xa0\xa2\xb3?"\xb6o\xf9\xbd\xa6\xb1?,\xf4as_\\\xb6?\xbf\xe7\xa3\x08\xb2b\xbb?\xa7\xa72\xec\x93\x8a\x92?\x07%\xfd\x05\x13\xe2\xd1?\xaaY\xa4\xa0X\xec\xab?\x1f\x81\xf4S\xb0\xc6\xbc?\x92t\x9f\x180\x02\xb6?\xa2\xf4\xea\x80\x99\xe7\xbb?jm=\xe14\xe4\xd4?\xab\xb4\x105N\xda\x8e?F\xfc\xc6,\x7f\x1b\xcb?^\xfe\'\x9d\\S\xc0?\xf30P\x95%;\xc6?4=\x95a\x9fT\xc4?\xfe\x80]\x83\xdd\xfb\x90?R\xf4\xe9\xaa\xe2\xb1\xda?\xe5\xe4\xa9\x1b\x94\xf4\xa7?\xcbH\xf6\xb3J\xed\xbe?v\xa4F\xc2\x0e\\\xb5?\xe3\xc1I\xea\x9c\x1f\xb9?\xca\x8f\x9bf\xbeM\xd1?\\\x9c\xe1\x08\xef\x9cp?\xdeG\xe4\x98\xd6\xd6\xc3?M\x99\x8c\xaf<9\xaf?BJUx\xba\xb9\xb1?\xb2R\xacnA9\xb0?\x83(\xf9\x9e\x9e\xbbg?@pFg\xa2\xc7\xaf?\x80\x87\xcc\xa7\xba#g?+\x99\xf5\xdein\x83?}\xe6\x9f\xb0\xd6\x8bf?b\xde:\xf7\xb6\xccQ?\x83(\xf9\x9e\x9e\xbbW?\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
+tbsS'train'
+p8
+(lp9
+(lp10
+(cnumpy.core.multiarray
+scalar
+p11
+(g6
+(S'i8'
+I0
+I1
+tRp12
+(I3
+S'<'
+NNNI-1
+I-1
+I0
+tbS'<\x00\x00\x00\x00\x00\x00\x00'
+tRp13
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp14
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp15
+g11
+(g12
+S'X\x00\x00\x00\x00\x00\x00\x00'
+tRp16
+tp17
+a(g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp18
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp19
+g11
+(g12
+S'X\x00\x00\x00\x00\x00\x00\x00'
+tRp20
+tp21
+a(g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp22
+g11
+(g12
+S'F\x00\x00\x00\x00\x00\x00\x00'
+tRp23
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp24
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp25
+tp26
+a(g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp27
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp28
+g11
+(g12
+S'V\x00\x00\x00\x00\x00\x00\x00'
+tRp29
+tp30
+a(g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp31
+g11
+(g12
+S'F\x00\x00\x00\x00\x00\x00\x00'
+tRp32
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp33
+g11
+(g12
+S'X\x00\x00\x00\x00\x00\x00\x00'
+tRp34
+tp35
+a(g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp36
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp37
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp38
+g11
+(g12
+S'Y\x00\x00\x00\x00\x00\x00\x00'
+tRp39
+tp40
+a(g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp41
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp42
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp43
+g11
+(g12
+S'Y\x00\x00\x00\x00\x00\x00\x00'
+tRp44
+tp45
+a(g11
+(g12
+S':\x00\x00\x00\x00\x00\x00\x00'
+tRp46
+g11
+(g12
+S'F\x00\x00\x00\x00\x00\x00\x00'
+tRp47
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp48
+g11
+(g12
+S'V\x00\x00\x00\x00\x00\x00\x00'
+tRp49
+tp50
+a(g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp51
+g11
+(g12
+S'F\x00\x00\x00\x00\x00\x00\x00'
+tRp52
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp53
+g11
+(g12
+S'V\x00\x00\x00\x00\x00\x00\x00'
+tRp54
+tp55
+a(g11
+(g12
+S'D\x00\x00\x00\x00\x00\x00\x00'
+tRp56
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp57
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp58
+g11
+(g12
+S'V\x00\x00\x00\x00\x00\x00\x00'
+tRp59
+tp60
+a(g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp61
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp62
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp63
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp64
+tp65
+a(g11
+(g12
+S'B\x00\x00\x00\x00\x00\x00\x00'
+tRp66
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp67
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp68
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp69
+tp70
+a(g11
+[binary payload omitted: Python pickle (protocol 0) stream — nested lists of 3- and 4-element integer-like tuples — not human-readable documentation content]
+tRp1516
+g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp1517
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1518
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1519
+tp1520
+aa(lp1521
+(g11
+(g12
+S'0\x00\x00\x00\x00\x00\x00\x00'
+tRp1522
+g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp1523
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1524
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1525
+tp1526
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp1527
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1528
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1529
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp1530
+tp1531
+a(g11
+(g12
+S'9\x00\x00\x00\x00\x00\x00\x00'
+tRp1532
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1533
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1534
+g11
+(g12
+S'N\x00\x00\x00\x00\x00\x00\x00'
+tRp1535
+tp1536
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp1537
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp1538
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1539
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp1540
+tp1541
+a(g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1542
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp1543
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1544
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp1545
+tp1546
+a(g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp1547
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp1548
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1549
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp1550
+tp1551
+a(g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp1552
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1553
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1554
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp1555
+tp1556
+a(g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp1557
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1558
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1559
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp1560
+tp1561
+a(g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp1562
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1563
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1564
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp1565
+tp1566
+a(g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp1567
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1568
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp1569
+tp1570
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp1571
+g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp1572
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1573
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp1574
+tp1575
+a(g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp1576
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1577
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp1578
+tp1579
+a(g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp1580
+g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp1581
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1582
+g11
+(g12
+S'S\x00\x00\x00\x00\x00\x00\x00'
+tRp1583
+tp1584
+a(g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp1585
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1586
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp1587
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp1588
+tp1589
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp1590
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1591
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp1592
+g11
+(g12
+S'S\x00\x00\x00\x00\x00\x00\x00'
+tRp1593
+tp1594
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp1595
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1596
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp1597
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp1598
+tp1599
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp1600
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1601
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp1602
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp1603
+tp1604
+a(g11
+(g12
+S'0\x00\x00\x00\x00\x00\x00\x00'
+tRp1605
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1606
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp1607
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp1608
+tp1609
+a(g11
+(g12
+S'9\x00\x00\x00\x00\x00\x00\x00'
+tRp1610
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1611
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1612
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp1613
+tp1614
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp1615
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1616
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1617
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp1618
+tp1619
+a(g11
+(g12
+S'5\x00\x00\x00\x00\x00\x00\x00'
+tRp1620
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1621
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1622
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp1623
+tp1624
+a(g11
+(g12
+S'4\x00\x00\x00\x00\x00\x00\x00'
+tRp1625
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1626
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1627
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp1628
+tp1629
+a(g11
+(g12
+S'2\x00\x00\x00\x00\x00\x00\x00'
+tRp1630
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1631
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp1632
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp1633
+tp1634
+a(g11
+(g12
+S'0\x00\x00\x00\x00\x00\x00\x00'
+tRp1635
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1636
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1637
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp1638
+tp1639
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp1640
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1641
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp1642
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1643
+tp1644
+a(g11
+(g12
+S'5\x00\x00\x00\x00\x00\x00\x00'
+tRp1645
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1646
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp1647
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1648
+tp1649
+a(g11
+(g12
+S'4\x00\x00\x00\x00\x00\x00\x00'
+tRp1650
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1651
+g11
+(g12
+S'I\x00\x00\x00\x00\x00\x00\x00'
+tRp1652
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp1653
+tp1654
+a(g11
+(g12
+S'2\x00\x00\x00\x00\x00\x00\x00'
+tRp1655
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1656
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1657
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp1658
+tp1659
+a(g11
+(g12
+S'4\x00\x00\x00\x00\x00\x00\x00'
+tRp1660
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1661
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1662
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp1663
+tp1664
+a(g11
+(g12
+S'5\x00\x00\x00\x00\x00\x00\x00'
+tRp1665
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1666
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp1667
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1668
+tp1669
+a(g11
+(g12
+S'4\x00\x00\x00\x00\x00\x00\x00'
+tRp1670
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1671
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1672
+tp1673
+a(g11
+(g12
+S'5\x00\x00\x00\x00\x00\x00\x00'
+tRp1674
+g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp1675
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1676
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1677
+tp1678
+a(g11
+(g12
+S'2\x00\x00\x00\x00\x00\x00\x00'
+tRp1679
+g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp1680
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp1681
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1682
+tp1683
+a(g11
+(g12
+S'0\x00\x00\x00\x00\x00\x00\x00'
+tRp1684
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1685
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1686
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp1687
+tp1688
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp1689
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1690
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1691
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp1692
+tp1693
+a(g11
+(g12
+S';\x00\x00\x00\x00\x00\x00\x00'
+tRp1694
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1695
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1696
+tp1697
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp1698
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1699
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1700
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp1701
+tp1702
+a(g11
+(g12
+S';\x00\x00\x00\x00\x00\x00\x00'
+tRp1703
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1704
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1705
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp1706
+tp1707
+a(g11
+(g12
+S'9\x00\x00\x00\x00\x00\x00\x00'
+tRp1708
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1709
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1710
+g11
+(g12
+S'N\x00\x00\x00\x00\x00\x00\x00'
+tRp1711
+tp1712
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp1713
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp1714
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1715
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp1716
+tp1717
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp1718
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp1719
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1720
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp1721
+tp1722
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp1723
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp1724
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1725
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp1726
+tp1727
+a(g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1728
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1729
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp1730
+g11
+(g12
+S'S\x00\x00\x00\x00\x00\x00\x00'
+tRp1731
+tp1732
+a(g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp1733
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1734
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp1735
+g11
+(g12
+S'S\x00\x00\x00\x00\x00\x00\x00'
+tRp1736
+tp1737
+a(g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp1738
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1739
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp1740
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp1741
+tp1742
+a(g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp1743
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1744
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp1745
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp1746
+tp1747
+a(g11
+(g12
+S'5\x00\x00\x00\x00\x00\x00\x00'
+tRp1748
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1749
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1750
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp1751
+tp1752
+a(g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp1753
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1754
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp1755
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp1756
+tp1757
+a(g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp1758
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1759
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp1760
+g11
+(g12
+S'S\x00\x00\x00\x00\x00\x00\x00'
+tRp1761
+tp1762
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp1763
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1764
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp1765
+g11
+(g12
+S'S\x00\x00\x00\x00\x00\x00\x00'
+tRp1766
+tp1767
+a(g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp1768
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1769
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp1770
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp1771
+tp1772
+a(g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp1773
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp1774
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp1775
+g11
+(g12
+S'P\x00\x00\x00\x00\x00\x00\x00'
+tRp1776
+tp1777
+a(g11
+(g12
+S'4\x00\x00\x00\x00\x00\x00\x00'
+tRp1778
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp1779
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp1780
+g11
+(g12
+S'P\x00\x00\x00\x00\x00\x00\x00'
+tRp1781
+tp1782
+a(g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp1783
+g11
+(g12
+S'D\x00\x00\x00\x00\x00\x00\x00'
+tRp1784
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp1785
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp1786
+tp1787
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp1788
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1789
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp1790
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp1791
+tp1792
+a(g11
+(g12
+S';\x00\x00\x00\x00\x00\x00\x00'
+tRp1793
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1794
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp1795
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp1796
+tp1797
+a(g11
+(g12
+S'9\x00\x00\x00\x00\x00\x00\x00'
+tRp1798
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1799
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1800
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp1801
+tp1802
+a(g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp1803
+g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp1804
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1805
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp1806
+tp1807
+a(g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp1808
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1809
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1810
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp1811
+tp1812
+a(g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp1813
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1814
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1815
+tp1816
+a(g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp1817
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1818
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1819
+tp1820
+a(g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp1821
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1822
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1823
+tp1824
+a(g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp1825
+g11
+(g12
+S'D\x00\x00\x00\x00\x00\x00\x00'
+tRp1826
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp1827
+tp1828
+a(g11
+(g12
+S'9\x00\x00\x00\x00\x00\x00\x00'
+tRp1829
+g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp1830
+g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp1831
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1832
+tp1833
+a(g11
+(g12
+S'9\x00\x00\x00\x00\x00\x00\x00'
+tRp1834
+g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp1835
+g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp1836
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1837
+tp1838
+a(g11
+(g12
+S'9\x00\x00\x00\x00\x00\x00\x00'
+tRp1839
+g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp1840
+g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp1841
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1842
+tp1843
+a(g11
+(g12
+S'4\x00\x00\x00\x00\x00\x00\x00'
+tRp1844
+g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp1845
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1846
+g11
+(g12
+S'I\x00\x00\x00\x00\x00\x00\x00'
+tRp1847
+tp1848
+a(g11
+(g12
+S'9\x00\x00\x00\x00\x00\x00\x00'
+tRp1849
+g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp1850
+g11
+(g12
+S'I\x00\x00\x00\x00\x00\x00\x00'
+tRp1851
+tp1852
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp1853
+g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp1854
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1855
+g11
+(g12
+S'I\x00\x00\x00\x00\x00\x00\x00'
+tRp1856
+tp1857
+a(g11
+(g12
+S'5\x00\x00\x00\x00\x00\x00\x00'
+tRp1858
+g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp1859
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1860
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1861
+tp1862
+a(g11
+(g12
+S'4\x00\x00\x00\x00\x00\x00\x00'
+tRp1863
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1864
+g11
+(g12
+S'I\x00\x00\x00\x00\x00\x00\x00'
+tRp1865
+tp1866
+a(g11
+(g12
+S'2\x00\x00\x00\x00\x00\x00\x00'
+tRp1867
+g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp1868
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1869
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1870
+tp1871
+a(g11
+(g12
+S'=\x00\x00\x00\x00\x00\x00\x00'
+tRp1872
+g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp1873
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1874
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp1875
+tp1876
+a(g11
+(g12
+S';\x00\x00\x00\x00\x00\x00\x00'
+tRp1877
+g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp1878
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1879
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp1880
+tp1881
+a(g11
+(g12
+S'9\x00\x00\x00\x00\x00\x00\x00'
+tRp1882
+g11
+(g12
+S'=\x00\x00\x00\x00\x00\x00\x00'
+tRp1883
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1884
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp1885
+tp1886
+a(g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp1887
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1888
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp1889
+tp1890
+a(g11
+(g12
+S'9\x00\x00\x00\x00\x00\x00\x00'
+tRp1891
+g11
+(g12
+S'=\x00\x00\x00\x00\x00\x00\x00'
+tRp1892
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1893
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp1894
+tp1895
+a(g11
+(g12
+S'2\x00\x00\x00\x00\x00\x00\x00'
+tRp1896
+g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp1897
+g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp1898
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1899
+tp1900
+a(g11
+(g12
+S';\x00\x00\x00\x00\x00\x00\x00'
+tRp1901
+g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp1902
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1903
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp1904
+tp1905
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp1906
+g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp1907
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp1908
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp1909
+tp1910
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp1911
+g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp1912
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1913
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp1914
+tp1915
+a(g11
+(g12
+S';\x00\x00\x00\x00\x00\x00\x00'
+tRp1916
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1917
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1918
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp1919
+tp1920
+a(g11
+(g12
+S'9\x00\x00\x00\x00\x00\x00\x00'
+tRp1921
+g11
+(g12
+S'B\x00\x00\x00\x00\x00\x00\x00'
+tRp1922
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1923
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp1924
+tp1925
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp1926
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1927
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp1928
+g11
+(g12
+S'S\x00\x00\x00\x00\x00\x00\x00'
+tRp1929
+tp1930
+a(g11
+(g12
+S'9\x00\x00\x00\x00\x00\x00\x00'
+tRp1931
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1932
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1933
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp1934
+tp1935
+a(g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp1936
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1937
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp1938
+g11
+(g12
+S'S\x00\x00\x00\x00\x00\x00\x00'
+tRp1939
+tp1940
+a(g11
+(g12
+S'2\x00\x00\x00\x00\x00\x00\x00'
+tRp1941
+g11
+(g12
+S'B\x00\x00\x00\x00\x00\x00\x00'
+tRp1942
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1943
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp1944
+tp1945
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp1946
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1947
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1948
+g11
+(g12
+S'S\x00\x00\x00\x00\x00\x00\x00'
+tRp1949
+tp1950
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp1951
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1952
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1953
+g11
+(g12
+S'S\x00\x00\x00\x00\x00\x00\x00'
+tRp1954
+tp1955
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp1956
+g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp1957
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp1958
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp1959
+tp1960
+a(g11
+(g12
+S'4\x00\x00\x00\x00\x00\x00\x00'
+tRp1961
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1962
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1963
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp1964
+tp1965
+a(g11
+(g12
+S'4\x00\x00\x00\x00\x00\x00\x00'
+tRp1966
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1967
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1968
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp1969
+tp1970
+a(g11
+(g12
+S'0\x00\x00\x00\x00\x00\x00\x00'
+tRp1971
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1972
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1973
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp1974
+tp1975
+a(g11
+(g12
+S'5\x00\x00\x00\x00\x00\x00\x00'
+tRp1976
+g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp1977
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1978
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp1979
+tp1980
+a(g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp1981
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1982
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp1983
+tp1984
+a(g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp1985
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1986
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp1987
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp1988
+tp1989
+a(g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp1990
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp1991
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1992
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp1993
+tp1994
+a(g11
+(g12
+S';\x00\x00\x00\x00\x00\x00\x00'
+tRp1995
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp1996
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp1997
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp1998
+tp1999
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2000
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2001
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2002
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2003
+tp2004
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp2005
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2006
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2007
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2008
+tp2009
+a(g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2010
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp2011
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2012
+tp2013
+a(g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp2014
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2015
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp2016
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2017
+tp2018
+a(g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2019
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2020
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2021
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2022
+tp2023
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2024
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2025
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2026
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2027
+tp2028
+a(g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2029
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2030
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2031
+tp2032
+a(g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp2033
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2034
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp2035
+tp2036
+a(g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp2037
+g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp2038
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2039
+g11
+(g12
+S'S\x00\x00\x00\x00\x00\x00\x00'
+tRp2040
+tp2041
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2042
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2043
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2044
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2045
+tp2046
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2047
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2048
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2049
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2050
+tp2051
+a(g11
+(g12
+S'5\x00\x00\x00\x00\x00\x00\x00'
+tRp2052
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp2053
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2054
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2055
+tp2056
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp2057
+g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp2058
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp2059
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2060
+tp2061
+a(g11
+(g12
+S'0\x00\x00\x00\x00\x00\x00\x00'
+tRp2062
+g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2063
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2064
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2065
+tp2066
+a(g11
+(g12
+S'0\x00\x00\x00\x00\x00\x00\x00'
+tRp2067
+g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2068
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2069
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2070
+tp2071
+a(g11
+(g12
+S'0\x00\x00\x00\x00\x00\x00\x00'
+tRp2072
+g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2073
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2074
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2075
+tp2076
+aa(lp2077
+(g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2078
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2079
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2080
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2081
+tp2082
+a(g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp2083
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2084
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp2085
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2086
+tp2087
+a(g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2088
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2089
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2090
+g11
+(g12
+S'S\x00\x00\x00\x00\x00\x00\x00'
+tRp2091
+tp2092
+a(g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp2093
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2094
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp2095
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp2096
+tp2097
+a(g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp2098
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2099
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2100
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2101
+tp2102
+a(g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2103
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2104
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2105
+g11
+(g12
+S'X\x00\x00\x00\x00\x00\x00\x00'
+tRp2106
+tp2107
+a(g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2108
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp2109
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2110
+g11
+(g12
+S'V\x00\x00\x00\x00\x00\x00\x00'
+tRp2111
+tp2112
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2113
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2114
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2115
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2116
+tp2117
+a(g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2118
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2119
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2120
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2121
+tp2122
+a(g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp2123
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2124
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp2125
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2126
+tp2127
+a(g11
+(g12
+S'B\x00\x00\x00\x00\x00\x00\x00'
+tRp2128
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2129
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp2130
+g11
+(g12
+S'V\x00\x00\x00\x00\x00\x00\x00'
+tRp2131
+tp2132
+a(g11
+(g12
+S'?\x00\x00\x00\x00\x00\x00\x00'
+tRp2133
+g11
+(g12
+S'N\x00\x00\x00\x00\x00\x00\x00'
+tRp2134
+g11
+(g12
+S'S\x00\x00\x00\x00\x00\x00\x00'
+tRp2135
+tp2136
+a(g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2137
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp2138
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2139
+tp2140
+a(g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp2141
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2142
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2143
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2144
+tp2145
+a(g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp2146
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2147
+g11
+(g12
+S'N\x00\x00\x00\x00\x00\x00\x00'
+tRp2148
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2149
+tp2150
+a(g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2151
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2152
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2153
+g11
+(g12
+S'S\x00\x00\x00\x00\x00\x00\x00'
+tRp2154
+tp2155
+a(g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2156
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2157
+g11
+(g12
+S'P\x00\x00\x00\x00\x00\x00\x00'
+tRp2158
+g11
+(g12
+S'S\x00\x00\x00\x00\x00\x00\x00'
+tRp2159
+tp2160
+a(g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp2161
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2162
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp2163
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2164
+tp2165
+a(g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp2166
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp2167
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp2168
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2169
+tp2170
+a(g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp2171
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp2172
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp2173
+g11
+(g12
+S'V\x00\x00\x00\x00\x00\x00\x00'
+tRp2174
+tp2175
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2176
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2177
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2178
+g11
+(g12
+S'X\x00\x00\x00\x00\x00\x00\x00'
+tRp2179
+tp2180
+a(g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp2181
+g11
+(g12
+S'N\x00\x00\x00\x00\x00\x00\x00'
+tRp2182
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2183
+g11
+(g12
+S'V\x00\x00\x00\x00\x00\x00\x00'
+tRp2184
+tp2185
+a(g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp2186
+g11
+(g12
+S'N\x00\x00\x00\x00\x00\x00\x00'
+tRp2187
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp2188
+g11
+(g12
+S'V\x00\x00\x00\x00\x00\x00\x00'
+tRp2189
+tp2190
+a(g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2191
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2192
+g11
+(g12
+S'S\x00\x00\x00\x00\x00\x00\x00'
+tRp2193
+g11
+(g12
+S'V\x00\x00\x00\x00\x00\x00\x00'
+tRp2194
+tp2195
+a(g11
+(g12
+S'D\x00\x00\x00\x00\x00\x00\x00'
+tRp2196
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2197
+g11
+(g12
+S'S\x00\x00\x00\x00\x00\x00\x00'
+tRp2198
+g11
+(g12
+S'X\x00\x00\x00\x00\x00\x00\x00'
+tRp2199
+tp2200
+a(g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp2201
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2202
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp2203
+g11
+(g12
+S'Y\x00\x00\x00\x00\x00\x00\x00'
+tRp2204
+tp2205
+a(g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2206
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2207
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2208
+g11
+(g12
+S'X\x00\x00\x00\x00\x00\x00\x00'
+tRp2209
+tp2210
+a(g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2211
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2212
+g11
+(g12
+S'S\x00\x00\x00\x00\x00\x00\x00'
+tRp2213
+g11
+(g12
+S'V\x00\x00\x00\x00\x00\x00\x00'
+tRp2214
+tp2215
+a(g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2216
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2217
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2218
+g11
+(g12
+S'X\x00\x00\x00\x00\x00\x00\x00'
+tRp2219
+tp2220
+a(g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2221
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2222
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2223
+g11
+(g12
+S'V\x00\x00\x00\x00\x00\x00\x00'
+tRp2224
+tp2225
+a(g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2226
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2227
+g11
+(g12
+S'S\x00\x00\x00\x00\x00\x00\x00'
+tRp2228
+g11
+(g12
+S'V\x00\x00\x00\x00\x00\x00\x00'
+tRp2229
+tp2230
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2231
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2232
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2233
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2234
+tp2235
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2236
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2237
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2238
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2239
+tp2240
+aa(lp2241
+(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2242
+g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2243
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2244
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2245
+tp2246
+a(g11
+(g12
+S';\x00\x00\x00\x00\x00\x00\x00'
+tRp2247
+g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp2248
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2249
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2250
+tp2251
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2252
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2253
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2254
+tp2255
+a(g11
+(g12
+S'9\x00\x00\x00\x00\x00\x00\x00'
+tRp2256
+g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2257
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp2258
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2259
+tp2260
+a(g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2261
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2262
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp2263
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2264
+tp2265
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2266
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2267
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2268
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp2269
+tp2270
+a(g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp2271
+g11
+(g12
+S'B\x00\x00\x00\x00\x00\x00\x00'
+tRp2272
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2273
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp2274
+tp2275
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp2276
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp2277
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2278
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2279
+tp2280
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2281
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2282
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2283
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2284
+tp2285
+a(g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp2286
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2287
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp2288
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp2289
+tp2290
+a(g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2291
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2292
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp2293
+g11
+(g12
+S'S\x00\x00\x00\x00\x00\x00\x00'
+tRp2294
+tp2295
+a(g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2296
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2297
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2298
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2299
+tp2300
+a(g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2301
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2302
+g11
+(g12
+S'S\x00\x00\x00\x00\x00\x00\x00'
+tRp2303
+tp2304
+a(g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp2305
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2306
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2307
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp2308
+tp2309
+a(g11
+(g12
+S'2\x00\x00\x00\x00\x00\x00\x00'
+tRp2310
+g11
+(g12
+S'B\x00\x00\x00\x00\x00\x00\x00'
+tRp2311
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2312
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp2313
+tp2314
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp2315
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2316
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp2317
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2318
+tp2319
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2320
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2321
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2322
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2323
+tp2324
+a(g11
+(g12
+S'5\x00\x00\x00\x00\x00\x00\x00'
+tRp2325
+g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp2326
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2327
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp2328
+tp2329
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp2330
+g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp2331
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp2332
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2333
+tp2334
+a(g11
+(g12
+S'9\x00\x00\x00\x00\x00\x00\x00'
+tRp2335
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp2336
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2337
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp2338
+tp2339
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2340
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2341
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2342
+tp2343
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp2344
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2345
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2346
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2347
+tp2348
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp2349
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2350
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp2351
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2352
+tp2353
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2354
+g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2355
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2356
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2357
+tp2358
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2359
+g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2360
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2361
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2362
+tp2363
+a(g11
+(g12
+S';\x00\x00\x00\x00\x00\x00\x00'
+tRp2364
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2365
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2366
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2367
+tp2368
+a(g11
+(g12
+S';\x00\x00\x00\x00\x00\x00\x00'
+tRp2369
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2370
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2371
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2372
+tp2373
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2374
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2375
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2376
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2377
+tp2378
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2379
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2380
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2381
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2382
+tp2383
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp2384
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp2385
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2386
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2387
+tp2388
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp2389
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp2390
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2391
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2392
+tp2393
+a(g11
+(g12
+S'9\x00\x00\x00\x00\x00\x00\x00'
+tRp2394
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp2395
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2396
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2397
+tp2398
+a(g11
+(g12
+S'9\x00\x00\x00\x00\x00\x00\x00'
+tRp2399
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp2400
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2401
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2402
+tp2403
+a(g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp2404
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp2405
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2406
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp2407
+tp2408
+a(g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2409
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2410
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2411
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2412
+tp2413
+a(g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp2414
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2415
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp2416
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2417
+tp2418
+a(g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp2419
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2420
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2421
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2422
+tp2423
+a(g11
+(g12
+S';\x00\x00\x00\x00\x00\x00\x00'
+tRp2424
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2425
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2426
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp2427
+tp2428
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2429
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2430
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2431
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2432
+tp2433
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp2434
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2435
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp2436
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2437
+tp2438
+a(g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp2439
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2440
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2441
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2442
+tp2443
+a(g11
+(g12
+S'>\x00\x00\x00\x00\x00\x00\x00'
+tRp2444
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp2445
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2446
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp2447
+tp2448
+a(g11
+(g12
+S'9\x00\x00\x00\x00\x00\x00\x00'
+tRp2449
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2450
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2451
+tp2452
+a(g11
+(g12
+S'5\x00\x00\x00\x00\x00\x00\x00'
+tRp2453
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp2454
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2455
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2456
+tp2457
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp2458
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2459
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp2460
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2461
+tp2462
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2463
+g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2464
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2465
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2466
+tp2467
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2468
+g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2469
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2470
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2471
+tp2472
+a(g11
+(g12
+S'9\x00\x00\x00\x00\x00\x00\x00'
+tRp2473
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp2474
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2475
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2476
+tp2477
+a(g11
+(g12
+S';\x00\x00\x00\x00\x00\x00\x00'
+tRp2478
+g11
+(g12
+S'B\x00\x00\x00\x00\x00\x00\x00'
+tRp2479
+g11
+(g12
+S'K\x00\x00\x00\x00\x00\x00\x00'
+tRp2480
+g11
+(g12
+S'S\x00\x00\x00\x00\x00\x00\x00'
+tRp2481
+tp2482
+a(g11
+(g12
+S'=\x00\x00\x00\x00\x00\x00\x00'
+tRp2483
+g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2484
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2485
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp2486
+tp2487
+a(g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2488
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp2489
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2490
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2491
+tp2492
+a(g11
+(g12
+S'9\x00\x00\x00\x00\x00\x00\x00'
+tRp2493
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp2494
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2495
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp2496
+tp2497
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2498
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2499
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2500
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2501
+tp2502
+a(g11
+(g12
+S'6\x00\x00\x00\x00\x00\x00\x00'
+tRp2503
+g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp2504
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2505
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2506
+tp2507
+a(g11
+(g12
+S'7\x00\x00\x00\x00\x00\x00\x00'
+tRp2508
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2509
+g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp2510
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2511
+tp2512
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2513
+g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2514
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2515
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2516
+tp2517
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2518
+g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2519
+g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2520
+g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2521
+tp2522
+aa(lp2523
+(g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2524
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2525
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2526
+g11
+(g12
+S'[\x00\x00\x00\x00\x00\x00\x00'
+tRp2527
+tp2528
+a(g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp2529
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2530
+g11
+(g12
+S'V\x00\x00\x00\x00\x00\x00\x00'
+tRp2531
+g11
+(g12
+S'[\x00\x00\x00\x00\x00\x00\x00'
+tRp2532
+tp2533
+a(g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2534
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2535
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2536
+g11
+(g12
+S'[\x00\x00\x00\x00\x00\x00\x00'
+tRp2537
+tp2538
+a(g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp2539
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp2540
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2541
+g11
+(g12
+S']\x00\x00\x00\x00\x00\x00\x00'
+tRp2542
+tp2543
+a(g11
+(g12
+S'E\x00\x00\x00\x00\x00\x00\x00'
+tRp2544
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp2545
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2546
+g11
+(g12
+S']\x00\x00\x00\x00\x00\x00\x00'
+tRp2547
+tp2548
+a(g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2549
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2550
+g11
+(g12
+S'X\x00\x00\x00\x00\x00\x00\x00'
+tRp2551
+g11
+(g12
+S'[\x00\x00\x00\x00\x00\x00\x00'
+tRp2552
+tp2553
+a(g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2554
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2555
+g11
+(g12
+S'X\x00\x00\x00\x00\x00\x00\x00'
+tRp2556
+g11
+(g12
+S'[\x00\x00\x00\x00\x00\x00\x00'
+tRp2557
+tp2558
+a(g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2559
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2560
+g11
+(g12
+S'X\x00\x00\x00\x00\x00\x00\x00'
+tRp2561
+g11
+(g12
+S'[\x00\x00\x00\x00\x00\x00\x00'
+tRp2562
+tp2563
+a(g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2564
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2565
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2566
+g11
+(g12
+S'X\x00\x00\x00\x00\x00\x00\x00'
+tRp2567
+tp2568
+a(g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2569
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2570
+g11
+(g12
+S'V\x00\x00\x00\x00\x00\x00\x00'
+tRp2571
+g11
+(g12
+S'Y\x00\x00\x00\x00\x00\x00\x00'
+tRp2572
+tp2573
+a(g11
+(g12
+S'H\x00\x00\x00\x00\x00\x00\x00'
+tRp2574
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2575
+g11
+(g12
+S'X\x00\x00\x00\x00\x00\x00\x00'
+tRp2576
+tp2577
+a(g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2578
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2579
+g11
+(g12
+S'S\x00\x00\x00\x00\x00\x00\x00'
+tRp2580
+g11
+(g12
+S'V\x00\x00\x00\x00\x00\x00\x00'
+tRp2581
+tp2582
+a(g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2583
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2584
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2585
+g11
+(g12
+S'X\x00\x00\x00\x00\x00\x00\x00'
+tRp2586
+tp2587
+a(g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp2588
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp2589
+g11
+(g12
+S'V\x00\x00\x00\x00\x00\x00\x00'
+tRp2590
+tp2591
+a(g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2592
+g11
+(g12
+S'J\x00\x00\x00\x00\x00\x00\x00'
+tRp2593
+g11
+(g12
+S'Q\x00\x00\x00\x00\x00\x00\x00'
+tRp2594
+g11
+(g12
+S'V\x00\x00\x00\x00\x00\x00\x00'
+tRp2595
+tp2596
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2597
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2598
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2599
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2600
+tp2601
+a(g11
+(g12
+S'A\x00\x00\x00\x00\x00\x00\x00'
+tRp2602
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2603
+g11
+(g12
+S'S\x00\x00\x00\x00\x00\x00\x00'
+tRp2604
+g11
+(g12
+S'V\x00\x00\x00\x00\x00\x00\x00'
+tRp2605
+tp2606
+a(g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp2607
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2608
+g11
+(g12
+S'V\x00\x00\x00\x00\x00\x00\x00'
+tRp2609
+tp2610
+a(g11
+(g12
+S'G\x00\x00\x00\x00\x00\x00\x00'
+tRp2611
+g11
+(g12
+S'M\x00\x00\x00\x00\x00\x00\x00'
+tRp2612
+g11
+(g12
+S'V\x00\x00\x00\x00\x00\x00\x00'
+tRp2613
+tp2614
+a(g11
+(g12
+S'<\x00\x00\x00\x00\x00\x00\x00'
+tRp2615
+g11
+(g12
+S'L\x00\x00\x00\x00\x00\x00\x00'
+tRp2616
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2617
+g11
+(g12
+S'X\x00\x00\x00\x00\x00\x00\x00'
+tRp2618
+tp2619
+a(g11
+(g12
+S'@\x00\x00\x00\x00\x00\x00\x00'
+tRp2620
+g11
+(g12
+S'O\x00\x00\x00\x00\x00\x00\x00'
+tRp2621
+g11
+(g12
+S'T\x00\x00\x00\x00\x00\x00\x00'
+tRp2622
+g11
+(g12
+S'X\x00\x00\x00\x00\x00\x00\x00'
+tRp2623
+tp2624
+a(g11
+(g12
+S'C\x00\x00\x00\x00\x00\x00\x00'
+tRp2625
+g11
+[... remaining pickle-serialized contents of this test-data file omitted: the raw protocol-0 pickle stream (thousands of lines of encoded tuples) is not human-readable ...]
\ No newline at end of file
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/test_data/tiny_speech_dataset.tfrecord" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/test_data/tiny_speech_dataset.tfrecord"
new file mode 100644
index 00000000..93fe8791
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/test_data/tiny_speech_dataset.tfrecord" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/test_utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/test_utils.py"
new file mode 100644
index 00000000..48bbd3d4
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/fivo/test_utils.py"
@@ -0,0 +1,144 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Utilities for testing FIVO.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+from fivo.models import base
+from fivo.models import srnn
+from fivo.models import vrnn
+
+
+def create_vrnn(generative_class=base.ConditionalNormalDistribution,
+ batch_size=2, data_size=3, rnn_hidden_size=4,
+ latent_size=5, fcnet_hidden_size=7, encoded_data_size=9,
+ encoded_latent_size=11, num_timesteps=7, data_lengths=(7, 4),
+ use_tilt=False, random_seed=None):
+ """Creates a VRNN and some dummy data to feed it for testing purposes.
+
+ Args:
+ generative_class: The class of the generative distribution.
+ batch_size: The number of elements per batch.
+ data_size: The dimension of the vectors that make up the data sequences.
+ rnn_hidden_size: The hidden state dimension of the RNN that forms the
+ deterministic part of this VRNN.
+ latent_size: The size of the stochastic latent state of the VRNN.
+ fcnet_hidden_size: The size of the hidden layer of the fully connected
+ networks that parameterize the conditional probability distributions
+ of the VRNN.
+ encoded_data_size: The size of the output of the data encoding network.
+ encoded_latent_size: The size of the output of the latent state encoding
+ network.
+ num_timesteps: The maximum number of timesteps in the data.
+ data_lengths: A tuple of size batch_size that contains the desired lengths
+ of each sequence in the dummy data.
+ use_tilt: Use a tilting function.
+ random_seed: A random seed to feed the VRNN, mainly useful for testing
+ purposes.
+
+ Returns:
+ model: A VRNN object.
+ inputs: A Tensor of shape [num_timesteps, batch_size, data_size], the inputs
+ to the model, also known as the observations.
+ targets: A Tensor of shape [num_timesteps, batch_size, data_size], the
+ desired outputs of the model.
+ lengths: A Tensor of shape [batch_size], the lengths of the sequences in the
+ batch.
+ """
+
+ fcnet_hidden_sizes = [fcnet_hidden_size]
+ initializers = {"w": tf.contrib.layers.xavier_initializer(seed=random_seed),
+ "b": tf.zeros_initializer()}
+ model = vrnn.create_vrnn(
+ data_size,
+ latent_size,
+ generative_class,
+ rnn_hidden_size=rnn_hidden_size,
+ fcnet_hidden_sizes=fcnet_hidden_sizes,
+ encoded_data_size=encoded_data_size,
+ encoded_latent_size=encoded_latent_size,
+ use_tilt=use_tilt,
+ initializers=initializers,
+ random_seed=random_seed)
+ inputs = tf.random_uniform([num_timesteps, batch_size, data_size],
+ seed=random_seed, dtype=tf.float32)
+ targets = tf.random_uniform([num_timesteps, batch_size, data_size],
+ seed=random_seed, dtype=tf.float32)
+ lengths = tf.constant(data_lengths, dtype=tf.int32)
+ return model, inputs, targets, lengths
+
+
+def create_srnn(generative_class=base.ConditionalNormalDistribution,
+ batch_size=2, data_size=3, rnn_hidden_size=4,
+ latent_size=5, fcnet_hidden_size=7, encoded_data_size=3,
+ encoded_latent_size=2, num_timesteps=7, data_lengths=(7, 4),
+ use_tilt=False, random_seed=None):
+ """Creates a SRNN and some dummy data to feed it for testing purposes.
+
+ Args:
+ generative_class: The class of the generative distribution.
+ batch_size: The number of elements per batch.
+ data_size: The dimension of the vectors that make up the data sequences.
+ rnn_hidden_size: The hidden state dimension of the RNN that forms the
+ deterministic part of this SRNN.
+ latent_size: The size of the stochastic latent state of the SRNN.
+ fcnet_hidden_size: The size of the hidden layer of the fully connected
+ networks that parameterize the conditional probability distributions
+ of the SRNN.
+ encoded_data_size: The size of the output of the data encoding network.
+ encoded_latent_size: The size of the output of the latent state encoding
+ network.
+ num_timesteps: The maximum number of timesteps in the data.
+ data_lengths: A tuple of size batch_size that contains the desired lengths
+ of each sequence in the dummy data.
+ use_tilt: Use a tilting function.
+ random_seed: A random seed to feed the SRNN, mainly useful for testing
+ purposes.
+
+ Returns:
+    model: An SRNN object.
+ inputs: A Tensor of shape [num_timesteps, batch_size, data_size], the inputs
+ to the model, also known as the observations.
+ targets: A Tensor of shape [num_timesteps, batch_size, data_size], the
+ desired outputs of the model.
+ lengths: A Tensor of shape [batch_size], the lengths of the sequences in the
+ batch.
+ """
+
+ fcnet_hidden_sizes = [fcnet_hidden_size]
+ initializers = {"w": tf.contrib.layers.xavier_initializer(seed=random_seed),
+ "b": tf.zeros_initializer()}
+ model = srnn.create_srnn(
+ data_size,
+ latent_size,
+ generative_class,
+ rnn_hidden_size=rnn_hidden_size,
+ fcnet_hidden_sizes=fcnet_hidden_sizes,
+ encoded_data_size=encoded_data_size,
+ encoded_latent_size=encoded_latent_size,
+ use_tilt=use_tilt,
+ initializers=initializers,
+ random_seed=random_seed)
+ inputs = tf.random_uniform([num_timesteps, batch_size, data_size],
+ seed=random_seed, dtype=tf.float32)
+ targets = tf.random_uniform([num_timesteps, batch_size, data_size],
+ seed=random_seed, dtype=tf.float32)
+ lengths = tf.constant(data_lengths, dtype=tf.int32)
+ return model, inputs, targets, lengths
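+
+# Example (illustrative): build a small VRNN plus matching dummy data for a
+# test, with seeds fixed for determinism.
+#   model, inputs, targets, lengths = create_vrnn(batch_size=2, random_seed=1234)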
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/run_fivo.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/run_fivo.py"
new file mode 100644
index 00000000..1ca07942
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/fivo/run_fivo.py"
@@ -0,0 +1,142 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""A script to run training for sequential latent variable models.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from fivo import ghmm_runners
+from fivo import runners
+
+# Shared flags.
+tf.app.flags.DEFINE_enum("mode", "train",
+ ["train", "eval", "sample"],
+ "The mode of the binary.")
+tf.app.flags.DEFINE_enum("model", "vrnn",
+ ["vrnn", "ghmm", "srnn"],
+ "Model choice.")
+tf.app.flags.DEFINE_integer("latent_size", 64,
+ "The size of the latent state of the model.")
+tf.app.flags.DEFINE_enum("dataset_type", "pianoroll",
+ ["pianoroll", "speech", "pose"],
+ "The type of dataset.")
+tf.app.flags.DEFINE_string("dataset_path", "",
+ "Path to load the dataset from.")
+tf.app.flags.DEFINE_integer("data_dimension", None,
+ "The dimension of each vector in the data sequence. "
+ "Defaults to 88 for pianoroll datasets and 200 for speech "
+ "datasets. Should not need to be changed except for "
+ "testing.")
+tf.app.flags.DEFINE_integer("batch_size", 4,
+ "Batch size.")
+tf.app.flags.DEFINE_integer("num_samples", 4,
+ "The number of samples (or particles) for multisample "
+ "algorithms.")
+tf.app.flags.DEFINE_string("logdir", "/tmp/smc_vi",
+ "The directory to keep checkpoints and summaries in.")
+tf.app.flags.DEFINE_integer("random_seed", None,
+ "A random seed for seeding the TensorFlow graph.")
+tf.app.flags.DEFINE_integer("parallel_iterations", 30,
+ "The number of parallel iterations to use for the while "
+ "loop that computes the bounds.")
+
+# Training flags.
+tf.app.flags.DEFINE_enum("bound", "fivo",
+ ["elbo", "iwae", "fivo", "fivo-aux"],
+ "The bound to optimize.")
+tf.app.flags.DEFINE_boolean("normalize_by_seq_len", True,
+ "If true, normalize the loss by the number of timesteps "
+ "per sequence.")
+tf.app.flags.DEFINE_float("learning_rate", 0.0002,
+ "The learning rate for ADAM.")
+tf.app.flags.DEFINE_integer("max_steps", int(1e9),
+ "The number of gradient update steps to train for.")
+tf.app.flags.DEFINE_integer("summarize_every", 50,
+ "The number of steps between summaries.")
+tf.app.flags.DEFINE_enum("resampling_type", "multinomial",
+ ["multinomial", "relaxed"],
+ "The resampling strategy to use for training.")
+tf.app.flags.DEFINE_float("relaxed_resampling_temperature", 0.5,
+ "The relaxation temperature for relaxed resampling.")
+tf.app.flags.DEFINE_enum("proposal_type", "filtering",
+ ["prior", "filtering", "smoothing",
+ "true-filtering", "true-smoothing"],
+ "The type of proposal to use. true-filtering and true-smoothing "
+ "are only available for the GHMM. The specific implementation "
+ "of each proposal type is left to model-writers.")
+
+# Distributed training flags.
+tf.app.flags.DEFINE_string("master", "",
+ "The BNS name of the TensorFlow master to use.")
+tf.app.flags.DEFINE_integer("task", 0,
+ "Task id of the replica running the training.")
+tf.app.flags.DEFINE_integer("ps_tasks", 0,
+ "Number of tasks in the ps job. If 0 no ps job is used.")
+tf.app.flags.DEFINE_boolean("stagger_workers", True,
+ "If true, bring one worker online every 1000 steps.")
+
+# Evaluation flags.
+tf.app.flags.DEFINE_enum("split", "train",
+ ["train", "test", "valid"],
+ "Split to evaluate the model on.")
+
+# Sampling flags.
+tf.app.flags.DEFINE_integer("sample_length", 50,
+ "The number of timesteps to sample for.")
+tf.app.flags.DEFINE_integer("prefix_length", 25,
+ "The number of timesteps to condition the model on "
+ "before sampling.")
+tf.app.flags.DEFINE_string("sample_out_dir", None,
+ "The directory to write the samples to. "
+ "Defaults to logdir.")
+
+# GHMM flags.
+tf.app.flags.DEFINE_float("variance", 0.1,
+ "The variance of the ghmm.")
+tf.app.flags.DEFINE_integer("num_timesteps", 5,
+                            "The number of timesteps to run the ghmm for.")
+FLAGS = tf.app.flags.FLAGS
+
+PIANOROLL_DEFAULT_DATA_DIMENSION = 88
+SPEECH_DEFAULT_DATA_DIMENSION = 200
+
+
+def main(unused_argv):
+ tf.logging.set_verbosity(tf.logging.INFO)
+ if FLAGS.model in ["vrnn", "srnn"]:
+ if FLAGS.data_dimension is None:
+ if FLAGS.dataset_type == "pianoroll":
+ FLAGS.data_dimension = PIANOROLL_DEFAULT_DATA_DIMENSION
+ elif FLAGS.dataset_type == "speech":
+ FLAGS.data_dimension = SPEECH_DEFAULT_DATA_DIMENSION
+ if FLAGS.mode == "train":
+ runners.run_train(FLAGS)
+ elif FLAGS.mode == "eval":
+ runners.run_eval(FLAGS)
+ elif FLAGS.mode == "sample":
+ runners.run_sample(FLAGS)
+ elif FLAGS.model == "ghmm":
+ if FLAGS.mode == "train":
+ ghmm_runners.run_train(FLAGS)
+ elif FLAGS.mode == "eval":
+ ghmm_runners.run_eval(FLAGS)
+
+if __name__ == "__main__":
+ tf.app.run(main)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/README.md"
new file mode 100644
index 00000000..6e84a31c
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/README.md"
@@ -0,0 +1,118 @@
+# TFGAN Examples
+
+[TFGAN](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/gan) is a lightweight library for training and evaluating Generative
+Adversarial Networks (GANs). GANs have been used in a wide range of tasks
+including [image translation](https://arxiv.org/abs/1703.10593), [superresolution](https://arxiv.org/abs/1609.04802), and [data augmentation](https://arxiv.org/abs/1612.07828). This directory contains fully-working examples
+that demonstrate the ease and flexibility of TFGAN. Each subdirectory contains a
+different working example. The sub-sections below describe each of the problems,
+and include some sample outputs. We've also included a [jupyter notebook](https://github.com/tensorflow/models/tree/master/research/gan/tutorial.ipynb), which
+provides a walkthrough of TFGAN.
+
+## Contacts
+
+Maintainers of TFGAN:
+
+* Joel Shor,
+ github: [joel-shor](https://github.com/joel-shor)
+
+## Table of contents
+
+1. [MNIST](#mnist)
+
+1. [MNIST with GANEstimator](#mnist_estimator)
+
+1. [CIFAR10](#cifar10)
+
+1. [Image compression](#compression)
+
+## MNIST
+
+
+We train a simple generator to produce [MNIST digits](http://yann.lecun.com/exdb/mnist/).
+The unconditional case maps noise to MNIST digits. The conditional case maps
+noise and digit class to MNIST digits. [InfoGAN](https://arxiv.org/abs/1606.03657) learns to produce
+digits of a given class without labels, as well as to control style. The
+network architectures are defined [here](https://github.com/tensorflow/models/tree/master/research/gan/mnist/networks.py).
+
+We use a classifier trained on MNIST digit classification for evaluation.
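+
+For orientation, the training setup in these MNIST examples follows the usual
+TFGAN recipe, sketched below. This is a minimal outline rather than the exact
+example code: `generator_fn`, `discriminator_fn`, `images`, `batch_size`, and
+`noise_dims` stand in for the networks and data pipeline defined in this
+directory.
+
+```python
+import tensorflow as tf
+tfgan = tf.contrib.gan
+
+# Bundle the generator, discriminator, real data, and noise into a GANModel.
+gan_model = tfgan.gan_model(
+    generator_fn=generator_fn,
+    discriminator_fn=discriminator_fn,
+    real_data=images,
+    generator_inputs=tf.random_normal([batch_size, noise_dims]))
+
+# Pick losses and build the alternating generator/discriminator train ops.
+gan_loss = tfgan.gan_loss(
+    gan_model,
+    generator_loss_fn=tfgan.losses.wasserstein_generator_loss,
+    discriminator_loss_fn=tfgan.losses.wasserstein_discriminator_loss)
+train_ops = tfgan.gan_train_ops(
+    gan_model, gan_loss,
+    generator_optimizer=tf.train.AdamOptimizer(1e-4, 0.5),
+    discriminator_optimizer=tf.train.AdamOptimizer(1e-4, 0.5))
+
+# Run the alternating training loop.
+tfgan.gan_train(
+    train_ops, logdir='/tmp/mnist',
+    hooks=[tf.train.StopAtStepHook(num_steps=20000)])
+```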
+
+### Unconditional MNIST
+
+
+### Conditional MNIST
+
+
+### InfoGAN MNIST
+
+
+## MNIST with GANEstimator
+
+
+This setup is exactly the same as in the [unconditional MNIST example](#mnist), but
+uses the `tf.Learn` `GANEstimator`.
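+
+A rough sketch of the estimator variant, again with placeholder
+`generator_fn`/`discriminator_fn` and a placeholder input function:
+
+```python
+import tensorflow as tf
+tfgan = tf.contrib.gan
+
+# GANEstimator wraps the model/loss/train-op plumbing behind the Estimator API.
+gan_estimator = tfgan.estimator.GANEstimator(
+    model_dir='/tmp/mnist_estimator',
+    generator_fn=generator_fn,
+    discriminator_fn=discriminator_fn,
+    generator_loss_fn=tfgan.losses.wasserstein_generator_loss,
+    discriminator_loss_fn=tfgan.losses.wasserstein_discriminator_loss,
+    generator_optimizer=tf.train.AdamOptimizer(1e-4, 0.5),
+    discriminator_optimizer=tf.train.AdamOptimizer(1e-4, 0.5))
+
+# train_input_fn should yield (noise, real_images) batches.
+gan_estimator.train(train_input_fn, max_steps=20000)
+```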
+
+
+
+## CIFAR10
+
+
+We train a [DCGAN generator](https://arxiv.org/abs/1511.06434) to produce [CIFAR10 images](https://www.cs.toronto.edu/~kriz/cifar.html).
+The unconditional case maps noise to CIFAR10 images. The conditional case maps
+noise and image class to CIFAR10 images. The
+network architectures are defined [here](https://github.com/tensorflow/models/tree/master/research/gan/cifar/networks.py).
+
+We use the [Inception Score](https://arxiv.org/abs/1606.03498) to evaluate the images.
+
+### Unconditional CIFAR10
+
+
+### Conditional CIFAR10
+
+
+## Image compression
+
+
+In neural image compression, we attempt to reduce an image to a smaller representation
+such that we can recreate the original image as closely as possible. See [`Full Resolution Image Compression with Recurrent Neural Networks`](https://arxiv.org/abs/1608.05148) for more details on using neural networks
+for image compression.
+
+In this example, we train an encoder to map images to a compressed binary
+representation and a decoder to map the binary representation back to the image.
+We treat both systems together (encoder -> decoder) as the generator.
+
+A typical image compression model trained on an L1 pixel loss will decode into
+blurry images. We use an adversarial loss to force the outputs to be more
+plausible.
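+
+Concretely, the generator objective blends the adversarial loss with a pixel
+reconstruction loss, roughly as in the sketch below (the specific loss
+functions and `weight_factor` here are illustrative, not the exact settings
+used in this example):
+
+```python
+# gan_model treats encoder -> decoder as the generator; the input patches are
+# the "real" data.
+gan_loss = tfgan.gan_loss(
+    gan_model,
+    generator_loss_fn=tfgan.losses.least_squares_generator_loss,
+    discriminator_loss_fn=tfgan.losses.least_squares_discriminator_loss)
+
+# L1 pixel loss between the decoded patches and the originals.
+l1_pixel_loss = tf.reduce_mean(
+    tf.abs(gan_model.real_data - gan_model.generated_data))
+
+# Blend the two so the generator is rewarded for both realism and fidelity.
+gan_loss = tfgan.losses.combine_adversarial_loss(
+    gan_loss, gan_model, l1_pixel_loss, weight_factor=0.01)
+```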
+
+This example also highlights the following infrastructure challenges:
+
+* What to do when you have custom code to keep track of your variables
+
+Some other notes on the problem:
+
+* Since the network is fully convolutional, we train on image patches.
+* Bottleneck layer is floating point during training and binarized during
+ evaluation.
+
+### Results
+
+#### No adversarial loss
+
+
+#### Adversarial loss
+
+
+### Architectures
+
+#### Compression Network
+
+The compression network is a DCGAN discriminator for the encoder and a DCGAN
+generator for the decoder from [`Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks`](https://arxiv.org/abs/1511.06434).
+The binarizer adds uniform noise during training then binarizes during eval, as in
+[`End-to-end Optimized Image Compression`](https://arxiv.org/abs/1611.01704).
+
+#### Discriminator
+
+The discriminator looks at 70x70 patches, as in
+[`Image-to-Image Translation with Conditional Adversarial Networks`](https://arxiv.org/abs/1611.07004).
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/data_provider.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/data_provider.py"
new file mode 100644
index 00000000..329f86b8
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/data_provider.py"
@@ -0,0 +1,94 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Contains code for loading and preprocessing the CIFAR data."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import tensorflow as tf
+
+from slim.datasets import dataset_factory as datasets
+
+slim = tf.contrib.slim
+
+
+def provide_data(batch_size, dataset_dir, dataset_name='cifar10',
+ split_name='train', one_hot=True):
+ """Provides batches of CIFAR data.
+
+ Args:
+ batch_size: The number of images in each batch.
+ dataset_dir: The directory where the CIFAR10 data can be found. If `None`,
+ use default.
+ dataset_name: Name of the dataset.
+ split_name: Should be either 'train' or 'test'.
+ one_hot: Output one hot vector instead of int32 label.
+
+ Returns:
+ images: A `Tensor` of size [batch_size, 32, 32, 3]. Output pixel values are
+ in [-1, 1].
+ labels: Either (1) one_hot_labels if `one_hot` is `True`
+ A `Tensor` of size [batch_size, num_classes], where each row has a
+ single element set to one and the rest set to zeros.
+ Or (2) labels if `one_hot` is `False`
+ A `Tensor` of size [batch_size], holding the labels as integers.
+ num_samples: The number of total samples in the dataset.
+ num_classes: The number of classes in the dataset.
+
+ Raises:
+ ValueError: if the split_name is not either 'train' or 'test'.
+ """
+ dataset = datasets.get_dataset(
+ dataset_name, split_name, dataset_dir=dataset_dir)
+ provider = slim.dataset_data_provider.DatasetDataProvider(
+ dataset,
+ common_queue_capacity=5 * batch_size,
+ common_queue_min=batch_size,
+ shuffle=(split_name == 'train'))
+ [image, label] = provider.get(['image', 'label'])
+
+ # Preprocess the images.
+ image = (tf.to_float(image) - 128.0) / 128.0
+
+ # Creates a QueueRunner for the pre-fetching operation.
+ images, labels = tf.train.batch(
+ [image, label],
+ batch_size=batch_size,
+ num_threads=32,
+ capacity=5 * batch_size)
+
+ labels = tf.reshape(labels, [-1])
+
+ if one_hot:
+ labels = tf.one_hot(labels, dataset.num_classes)
+
+ return images, labels, dataset.num_samples, dataset.num_classes
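+
+# Example (illustrative):
+#   images, one_hot_labels, _, num_classes = provide_data(
+#       batch_size=32, dataset_dir='/tmp/cifar-data')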
+
+
+def float_image_to_uint8(image):
+ """Convert float image in [-1, 1) to [0, 255] uint8.
+
+ Note that `1` gets mapped to `0`, but `1 - epsilon` gets mapped to 255.
+
+ Args:
+ image: An image tensor. Values should be in [-1, 1).
+
+ Returns:
+ Input image cast to uint8 and with integer values in [0, 255].
+ """
+ image = (image * 128.0) + 128.0
+ return tf.cast(image, tf.uint8)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/data_provider_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/data_provider_test.py"
new file mode 100644
index 00000000..c9f18008
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/data_provider_test.py"
@@ -0,0 +1,54 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for data_provider."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+
+from absl import flags
+import numpy as np
+
+import tensorflow as tf
+
+import data_provider
+
+
+class DataProviderTest(tf.test.TestCase):
+
+ def test_cifar10_train_set(self):
+ dataset_dir = os.path.join(
+ flags.FLAGS.test_srcdir,
+ 'google3/third_party/tensorflow_models/gan/cifar/testdata')
+
+ batch_size = 4
+ images, labels, num_samples, num_classes = data_provider.provide_data(
+ batch_size, dataset_dir)
+ self.assertEqual(50000, num_samples)
+ self.assertEqual(10, num_classes)
+ with self.test_session(use_gpu=True) as sess:
+ with tf.contrib.slim.queues.QueueRunners(sess):
+ images_out, labels_out = sess.run([images, labels])
+ self.assertEqual(images_out.shape, (batch_size, 32, 32, 3))
+ expected_label_shape = (batch_size, 10)
+ self.assertEqual(expected_label_shape, labels_out.shape)
+ # Check range.
+ self.assertTrue(np.all(np.abs(images_out) <= 1))
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/eval.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/eval.py"
new file mode 100644
index 00000000..6addd5e1
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/eval.py"
@@ -0,0 +1,170 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Evaluates a TFGAN trained CIFAR model."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from absl import app
+from absl import flags
+import tensorflow as tf
+
+import data_provider
+import networks
+import util
+
+
+FLAGS = flags.FLAGS
+tfgan = tf.contrib.gan
+
+flags.DEFINE_string('master', '', 'Name of the TensorFlow master to use.')
+
+flags.DEFINE_string('checkpoint_dir', '/tmp/cifar10/',
+ 'Directory where the model was written to.')
+
+flags.DEFINE_string('eval_dir', '/tmp/cifar10/',
+ 'Directory where the results are saved to.')
+
+flags.DEFINE_string('dataset_dir', None, 'Location of data.')
+
+flags.DEFINE_integer('num_images_generated', 100,
+ 'Number of images to generate at once.')
+
+flags.DEFINE_integer('num_inception_images', 10,
+ 'The number of images to run through Inception at once.')
+
+flags.DEFINE_boolean('eval_real_images', False,
+ 'If `True`, run Inception network on real images.')
+
+flags.DEFINE_boolean('conditional_eval', False,
+ 'If `True`, set up a conditional GAN.')
+
+flags.DEFINE_boolean('eval_frechet_inception_distance', True,
+ 'If `True`, compute Frechet Inception distance using real '
+ 'images and generated images.')
+
+flags.DEFINE_integer('num_images_per_class', 10,
+ 'When a conditional generator is used, this is the number '
+ 'of images to display per class.')
+
+flags.DEFINE_integer('max_number_of_evaluations', None,
+ 'Number of times to run evaluation. If `None`, run '
+ 'forever.')
+
+flags.DEFINE_boolean('write_to_disk', True, 'If `True`, write images to disk.')
+
+flags.DEFINE_integer(
+ 'inter_op_parallelism_threads', 0,
+ 'Number of threads to use for inter-op parallelism. If left as default value of 0, the system will pick an appropriate number.')
+
+flags.DEFINE_integer(
+ 'intra_op_parallelism_threads', 0,
+ 'Number of threads to use for intra-op parallelism. If left as default value of 0, the system will pick an appropriate number.')
+
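+# Example invocation (paths follow launch_jobs.sh; adjust as needed):
+#   python eval.py --checkpoint_dir=/tmp/cifar-model/unconditional \
+#     --eval_dir=/tmp/cifar-model/eval/unconditional \
+#     --dataset_dir=/tmp/cifar-data
+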
+def main(_, run_eval_loop=True):
+ # Fetch and generate images to run through Inception.
+ with tf.name_scope('inputs'):
+ real_data, num_classes = _get_real_data(
+ FLAGS.num_images_generated, FLAGS.dataset_dir)
+ generated_data = _get_generated_data(
+ FLAGS.num_images_generated, FLAGS.conditional_eval, num_classes)
+
+ # Compute Frechet Inception Distance.
+ if FLAGS.eval_frechet_inception_distance:
+ fid = util.get_frechet_inception_distance(
+ real_data, generated_data, FLAGS.num_images_generated,
+ FLAGS.num_inception_images)
+ tf.summary.scalar('frechet_inception_distance', fid)
+
+ # Compute normal Inception scores.
+ if FLAGS.eval_real_images:
+ inc_score = util.get_inception_scores(
+ real_data, FLAGS.num_images_generated, FLAGS.num_inception_images)
+ else:
+ inc_score = util.get_inception_scores(
+ generated_data, FLAGS.num_images_generated, FLAGS.num_inception_images)
+ tf.summary.scalar('inception_score', inc_score)
+
+  # If conditional, display an image grid of different classes.
+ if FLAGS.conditional_eval and not FLAGS.eval_real_images:
+ reshaped_imgs = util.get_image_grid(
+ generated_data, FLAGS.num_images_generated, num_classes,
+ FLAGS.num_images_per_class)
+ tf.summary.image('generated_data', reshaped_imgs, max_outputs=1)
+
+ # Create ops that write images to disk.
+ image_write_ops = None
+ if FLAGS.conditional_eval and FLAGS.write_to_disk:
+ reshaped_imgs = util.get_image_grid(
+ generated_data, FLAGS.num_images_generated, num_classes,
+ FLAGS.num_images_per_class)
+ uint8_images = data_provider.float_image_to_uint8(reshaped_imgs)
+ image_write_ops = tf.write_file(
+ '%s/%s'% (FLAGS.eval_dir, 'conditional_cifar10.png'),
+ tf.image.encode_png(uint8_images[0]))
+ else:
+ if FLAGS.num_images_generated >= 100 and FLAGS.write_to_disk:
+ reshaped_imgs = tfgan.eval.image_reshaper(
+ generated_data[:100], num_cols=FLAGS.num_images_per_class)
+ uint8_images = data_provider.float_image_to_uint8(reshaped_imgs)
+ image_write_ops = tf.write_file(
+ '%s/%s'% (FLAGS.eval_dir, 'unconditional_cifar10.png'),
+ tf.image.encode_png(uint8_images[0]))
+
+ # For unit testing, use `run_eval_loop=False`.
+ if not run_eval_loop: return
+ sess_config = tf.ConfigProto(
+ inter_op_parallelism_threads=FLAGS.inter_op_parallelism_threads,
+ intra_op_parallelism_threads=FLAGS.intra_op_parallelism_threads)
+ tf.contrib.training.evaluate_repeatedly(
+ FLAGS.checkpoint_dir,
+ master=FLAGS.master,
+ hooks=[tf.contrib.training.SummaryAtEndHook(FLAGS.eval_dir),
+ tf.contrib.training.StopAfterNEvalsHook(1)],
+ eval_ops=image_write_ops,
+ config=sess_config,
+ max_number_of_evaluations=FLAGS.max_number_of_evaluations)
+
+
+def _get_real_data(num_images_generated, dataset_dir):
+ """Get real images."""
+ data, _, _, num_classes = data_provider.provide_data(
+ num_images_generated, dataset_dir)
+ return data, num_classes
+
+
+def _get_generated_data(num_images_generated, conditional_eval, num_classes):
+ """Get generated images."""
+ noise = tf.random_normal([num_images_generated, 64])
+ # If conditional, generate class-specific images.
+ if conditional_eval:
+ conditioning = util.get_generator_conditioning(
+ num_images_generated, num_classes)
+ generator_inputs = (noise, conditioning)
+ generator_fn = networks.conditional_generator
+ else:
+ generator_inputs = noise
+ generator_fn = networks.generator
+ # In order for variables to load, use the same variable scope as in the
+ # train job.
+ with tf.variable_scope('Generator'):
+ data = generator_fn(generator_inputs, is_training=False)
+
+ return data
+
+
+if __name__ == '__main__':
+ app.run(main)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/eval_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/eval_test.py"
new file mode 100644
index 00000000..f8c644ea
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/eval_test.py"
@@ -0,0 +1,51 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for gan.cifar.eval."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+from absl import flags
+from absl.testing import parameterized
+import tensorflow as tf
+import eval # pylint:disable=redefined-builtin
+
+FLAGS = flags.FLAGS
+mock = tf.test.mock
+
+
+class EvalTest(tf.test.TestCase, parameterized.TestCase):
+
+ @parameterized.named_parameters(
+ ('RealData', True, False),
+ ('GeneratedData', False, False),
+ ('GeneratedDataConditional', False, True))
+ def test_build_graph(self, eval_real_images, conditional_eval):
+ FLAGS.eval_real_images = eval_real_images
+ FLAGS.conditional_eval = conditional_eval
+ # Mock `frechet_inception_distance` and `inception_score`, which are
+ # expensive.
+ with mock.patch.object(
+ eval.util, 'get_frechet_inception_distance') as mock_fid:
+ with mock.patch.object(eval.util, 'get_inception_scores') as mock_iscore:
+ mock_fid.return_value = 1.0
+ mock_iscore.return_value = 1.0
+ eval.main(None, run_eval_loop=False)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/launch_jobs.sh" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/launch_jobs.sh"
new file mode 100644
index 00000000..89313da5
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/launch_jobs.sh"
@@ -0,0 +1,121 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+#!/bin/bash
+#
+# This script performs the following operations:
+# 1. Downloads the CIFAR dataset.
+# 2. Trains an unconditional or conditional model on the CIFAR training set.
+# 3. Evaluates the models and writes sample images to disk.
+#
+#
+# With the default batch size and number of steps, train times are:
+#
+# Usage:
+# cd models/research/gan/cifar
+# ./launch_jobs.sh ${gan_type} ${git_repo}
+set -e
+
+# Type of GAN to run. Right now, options are `unconditional` or `conditional`.
+gan_type=$1
+if ! [[ "$gan_type" =~ ^(unconditional|conditional)$ ]]; then
+ echo "'gan_type' must be one of: 'unconditional', 'conditional'."
+ exit 1
+fi
+
+# Location of the git repository.
+git_repo=$2
+if [[ "$git_repo" == "" ]]; then
+ echo "'git_repo' must not be empty."
+ exit 1
+fi
+
+# Base name for where the checkpoint and logs will be saved to.
+TRAIN_DIR=/tmp/cifar-model
+
+# Base name for where the evaluation images will be saved to.
+EVAL_DIR=/tmp/cifar-model/eval
+
+# Where the dataset is saved to.
+DATASET_DIR=/tmp/cifar-data
+
+export PYTHONPATH=$PYTHONPATH:$git_repo:$git_repo/research:$git_repo/research/gan:$git_repo/research/slim
+
+# A helper function for printing pretty output.
+Banner () {
+ local text=$1
+ local green='\033[0;32m'
+ local nc='\033[0m' # No color.
+ echo -e "${green}${text}${nc}"
+}
+
+# Download the dataset.
+python "${git_repo}/research/slim/download_and_convert_data.py" \
+ --dataset_name=cifar10 \
+ --dataset_dir=${DATASET_DIR}
+
+# Run unconditional GAN.
+if [[ "$gan_type" == "unconditional" ]]; then
+ UNCONDITIONAL_TRAIN_DIR="${TRAIN_DIR}/unconditional"
+ UNCONDITIONAL_EVAL_DIR="${EVAL_DIR}/unconditional"
+ NUM_STEPS=10000
+ # Run training.
+ Banner "Starting training unconditional GAN for ${NUM_STEPS} steps..."
+ python "${git_repo}/research/gan/cifar/train.py" \
+ --train_log_dir=${UNCONDITIONAL_TRAIN_DIR} \
+ --dataset_dir=${DATASET_DIR} \
+ --max_number_of_steps=${NUM_STEPS} \
+ --gan_type="unconditional" \
+ --alsologtostderr
+ Banner "Finished training unconditional GAN ${NUM_STEPS} steps."
+
+ # Run evaluation.
+ Banner "Starting evaluation of unconditional GAN..."
+ python "${git_repo}/research/gan/cifar/eval.py" \
+ --checkpoint_dir=${UNCONDITIONAL_TRAIN_DIR} \
+ --eval_dir=${UNCONDITIONAL_EVAL_DIR} \
+ --dataset_dir=${DATASET_DIR} \
+ --eval_real_images=false \
+ --conditional_eval=false \
+ --max_number_of_evaluations=1
+ Banner "Finished unconditional evaluation. See ${UNCONDITIONAL_EVAL_DIR} for output images."
+fi
+
+# Run conditional GAN.
+if [[ "$gan_type" == "conditional" ]]; then
+ CONDITIONAL_TRAIN_DIR="${TRAIN_DIR}/conditional"
+ CONDITIONAL_EVAL_DIR="${EVAL_DIR}/conditional"
+ NUM_STEPS=10000
+ # Run training.
+ Banner "Starting training conditional GAN for ${NUM_STEPS} steps..."
+ python "${git_repo}/research/gan/cifar/train.py" \
+ --train_log_dir=${CONDITIONAL_TRAIN_DIR} \
+ --dataset_dir=${DATASET_DIR} \
+ --max_number_of_steps=${NUM_STEPS} \
+ --gan_type="conditional" \
+ --alsologtostderr
+ Banner "Finished training conditional GAN ${NUM_STEPS} steps."
+
+ # Run evaluation.
+ Banner "Starting evaluation of conditional GAN..."
+ python "${git_repo}/research/gan/cifar/eval.py" \
+ --checkpoint_dir=${CONDITIONAL_TRAIN_DIR} \
+ --eval_dir=${CONDITIONAL_EVAL_DIR} \
+ --dataset_dir=${DATASET_DIR} \
+ --eval_real_images=false \
+ --conditional_eval=true \
+ --max_number_of_evaluations=1
+ Banner "Finished conditional evaluation. See ${CONDITIONAL_EVAL_DIR} for output images."
+fi
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/networks.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/networks.py"
new file mode 100644
index 00000000..f50b1ce9
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/networks.py"
@@ -0,0 +1,130 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Networks for GAN CIFAR example using TFGAN."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from slim.nets import dcgan
+
+tfgan = tf.contrib.gan
+
+
+def _last_conv_layer(end_points):
+ """Returns the last convolutional layer from an endpoints dictionary."""
+ conv_list = [k for k in end_points.keys() if k[:4] == 'conv']
+ conv_list.sort()
+ return end_points[conv_list[-1]]
+
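+# For illustration: the slim dcgan discriminator is assumed to name its
+# endpoints 'conv0', 'conv1', ..., plus 'logits'; keeping only the 'conv*' keys
+# and sorting them makes the last entry the deepest feature map.
+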
+
+def generator(noise, is_training=True):
+ """Generator to produce CIFAR images.
+
+ Args:
+ noise: A 2D Tensor of shape [batch size, noise dim]. Since this example
+ does not use conditioning, this Tensor represents a noise vector of some
+ kind that will be reshaped by the generator into CIFAR examples.
+ is_training: If `True`, batch norm uses batch statistics. If `False`, batch
+ norm uses the exponential moving average collected from population
+ statistics.
+
+ Returns:
+ A single Tensor with a batch of generated CIFAR images.
+ """
+ images, _ = dcgan.generator(noise, is_training=is_training, fused_batch_norm=True)
+
+ # Make sure output lies between [-1, 1].
+ return tf.tanh(images)
+
+
+def conditional_generator(inputs, is_training=True):
+ """Generator to produce CIFAR images.
+
+ Args:
+ inputs: A 2-tuple of Tensors (noise, one_hot_labels) used to condition the
+ generator.
+ is_training: If `True`, batch norm uses batch statistics. If `False`, batch
+ norm uses the exponential moving average collected from population
+ statistics.
+
+ Returns:
+ A single Tensor with a batch of generated CIFAR images.
+ """
+ noise, one_hot_labels = inputs
+ noise = tfgan.features.condition_tensor_from_onehot(noise, one_hot_labels)
+
+ images, _ = dcgan.generator(noise, is_training=is_training, fused_batch_norm=True)
+
+ # Make sure output lies between [-1, 1].
+ return tf.tanh(images)
+
+
+def discriminator(img, unused_conditioning, is_training=True):
+ """Discriminator for CIFAR images.
+
+ Args:
+ img: A Tensor of shape [batch size, width, height, channels], that can be
+ either real or generated. It is the discriminator's goal to distinguish
+ between the two.
+ unused_conditioning: The TFGAN API can help with conditional GANs, which
+ would require extra `condition` information to both the generator and the
+ discriminator. Since this example is not conditional, we do not use this
+ argument.
+ is_training: If `True`, batch norm uses batch statistics. If `False`, batch
+ norm uses the exponential moving average collected from population
+ statistics.
+
+ Returns:
+ A 1D Tensor of shape [batch size] representing the confidence that the
+ images are real. The output can lie in [-inf, inf], with positive values
+ indicating high confidence that the images are real.
+ """
+ logits, _ = dcgan.discriminator(img, is_training=is_training, fused_batch_norm=True)
+ return logits
+
+
+# TODO(joelshor): This discriminator creates variables that aren't used, and
+# causes logging warnings. Improve `dcgan` nets to accept a target end layer,
+# so extraneous variables aren't created.
+def conditional_discriminator(img, conditioning, is_training=True):
+ """Discriminator for CIFAR images.
+
+ Args:
+ img: A Tensor of shape [batch size, width, height, channels], that can be
+ either real or generated. It is the discriminator's goal to distinguish
+ between the two.
+ conditioning: A 2-tuple of Tensors representing (noise, one_hot_labels).
+ is_training: If `True`, batch norm uses batch statistics. If `False`, batch
+ norm uses the exponential moving average collected from population
+ statistics.
+
+ Returns:
+ A 1D Tensor of shape [batch size] representing the confidence that the
+ images are real. The output can lie in [-inf, inf], with positive values
+ indicating high confidence that the images are real.
+ """
+ logits, end_points = dcgan.discriminator(img, is_training=is_training, fused_batch_norm=True)
+
+ # Condition the last convolution layer.
+ _, one_hot_labels = conditioning
+ net = _last_conv_layer(end_points)
+ net = tfgan.features.condition_tensor_from_onehot(
+ tf.contrib.layers.flatten(net), one_hot_labels)
+ logits = tf.contrib.layers.linear(net, 1)
+
+ return logits
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/networks_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/networks_test.py"
new file mode 100644
index 00000000..1893d164
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/networks_test.py"
@@ -0,0 +1,78 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for tfgan.examples.cifar.networks."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import numpy as np
+import tensorflow as tf
+import networks
+
+
+class NetworksTest(tf.test.TestCase):
+
+ def test_generator(self):
+ tf.set_random_seed(1234)
+ batch_size = 100
+ noise = tf.random_normal([batch_size, 64])
+ image = networks.generator(noise)
+ with self.test_session(use_gpu=True) as sess:
+ sess.run(tf.global_variables_initializer())
+ image_np = image.eval()
+
+ self.assertAllEqual([batch_size, 32, 32, 3], image_np.shape)
+ self.assertTrue(np.all(np.abs(image_np) <= 1))
+
+ def test_generator_conditional(self):
+ tf.set_random_seed(1234)
+ batch_size = 100
+ noise = tf.random_normal([batch_size, 64])
+ conditioning = tf.one_hot([0] * batch_size, 10)
+ image = networks.conditional_generator((noise, conditioning))
+ with self.test_session(use_gpu=True) as sess:
+ sess.run(tf.global_variables_initializer())
+ image_np = image.eval()
+
+ self.assertAllEqual([batch_size, 32, 32, 3], image_np.shape)
+ self.assertTrue(np.all(np.abs(image_np) <= 1))
+
+ def test_discriminator(self):
+ batch_size = 5
+ image = tf.random_uniform([batch_size, 32, 32, 3], -1, 1)
+ dis_output = networks.discriminator(image, None)
+ with self.test_session(use_gpu=True) as sess:
+ sess.run(tf.global_variables_initializer())
+ dis_output_np = dis_output.eval()
+
+ self.assertAllEqual([batch_size, 1], dis_output_np.shape)
+
+ def test_discriminator_conditional(self):
+ batch_size = 5
+ image = tf.random_uniform([batch_size, 32, 32, 3], -1, 1)
+ conditioning = (None, tf.one_hot([0] * batch_size, 10))
+ dis_output = networks.conditional_discriminator(image, conditioning)
+ with self.test_session(use_gpu=True) as sess:
+ sess.run(tf.global_variables_initializer())
+ dis_output_np = dis_output.eval()
+
+ self.assertAllEqual([batch_size, 1], dis_output_np.shape)
+
+
+if __name__ == '__main__':
+ tf.test.main()
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/testdata/cifar10_train.tfrecord" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/testdata/cifar10_train.tfrecord"
new file mode 100644
index 00000000..93402244
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/testdata/cifar10_train.tfrecord" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/train.py"
new file mode 100644
index 00000000..f08f7955
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/train.py"
@@ -0,0 +1,190 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Trains a generator on CIFAR data."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+from absl import flags
+from absl import logging
+import tensorflow as tf
+
+import data_provider
+import networks
+
+
+tfgan = tf.contrib.gan
+
+flags.DEFINE_integer('batch_size', 32, 'The number of images in each batch.')
+
+flags.DEFINE_string('master', '', 'Name of the TensorFlow master to use.')
+
+flags.DEFINE_string('train_log_dir', '/tmp/cifar/',
+ 'Directory where to write event logs.')
+
+flags.DEFINE_string('dataset_dir', None, 'Location of data.')
+
+flags.DEFINE_integer('max_number_of_steps', 1000000,
+ 'The maximum number of gradient steps.')
+
+flags.DEFINE_integer(
+ 'ps_tasks', 0,
+ 'The number of parameter servers. If the value is 0, then the parameters '
+ 'are handled locally by the worker.')
+
+flags.DEFINE_integer(
+ 'task', 0,
+ 'The Task ID. This value is used when training with multiple workers to '
+ 'identify each worker.')
+
+flags.DEFINE_boolean(
+ 'conditional', False,
+ 'If `True`, set up a conditional GAN. If False, it is unconditional.')
+
+# Sync replicas flags.
+flags.DEFINE_boolean(
+ 'use_sync_replicas', True,
+ 'If `True`, use sync replicas. Otherwise use async.')
+
+flags.DEFINE_integer(
+ 'worker_replicas', 10,
+ 'The number of gradients to collect before updating params. Only used '
+ 'with sync replicas.')
+
+flags.DEFINE_integer(
+ 'backup_workers', 1,
+ 'Number of workers to be kept as backup in the sync replicas case.')
+
+flags.DEFINE_integer(
+ 'inter_op_parallelism_threads', 0,
+ 'Number of threads to use for inter-op parallelism. If left as default value of 0, the system will pick an appropriate number.')
+
+flags.DEFINE_integer(
+ 'intra_op_parallelism_threads', 0,
+ 'Number of threads to use for intra-op parallelism. If left as default value of 0, the system will pick an appropriate number.')
+
+FLAGS = flags.FLAGS
+
+
+def main(_):
+ if not tf.gfile.Exists(FLAGS.train_log_dir):
+ tf.gfile.MakeDirs(FLAGS.train_log_dir)
+
+ with tf.device(tf.train.replica_device_setter(FLAGS.ps_tasks)):
+ # Force all input processing onto CPU in order to reserve the GPU for
+ # the forward inference and back-propagation.
+ with tf.name_scope('inputs'):
+ with tf.device('/cpu:0'):
+ images, one_hot_labels, _, _ = data_provider.provide_data(
+ FLAGS.batch_size, FLAGS.dataset_dir)
+
+ # Define the GANModel tuple.
+ noise = tf.random_normal([FLAGS.batch_size, 64])
+ if FLAGS.conditional:
+ generator_fn = networks.conditional_generator
+ discriminator_fn = networks.conditional_discriminator
+ generator_inputs = (noise, one_hot_labels)
+ else:
+ generator_fn = networks.generator
+ discriminator_fn = networks.discriminator
+ generator_inputs = noise
+ gan_model = tfgan.gan_model(
+ generator_fn,
+ discriminator_fn,
+ real_data=images,
+ generator_inputs=generator_inputs)
+ tfgan.eval.add_gan_model_image_summaries(gan_model)
+
+ # Get the GANLoss tuple. Use the selected GAN loss functions.
+ # TODO(joelshor): Put this block in `with tf.name_scope('loss'):` when
+ # cl/171610946 goes into the opensource release.
+ gan_loss = tfgan.gan_loss(gan_model,
+ gradient_penalty_weight=1.0,
+ add_summaries=True)
+
+ # Get the GANTrain ops using the custom optimizers and optional
+ # discriminator weight clipping.
+ with tf.name_scope('train'):
+ gen_lr, dis_lr = _learning_rate()
+ gen_opt, dis_opt = _optimizer(gen_lr, dis_lr, FLAGS.use_sync_replicas)
+ train_ops = tfgan.gan_train_ops(
+ gan_model,
+ gan_loss,
+ generator_optimizer=gen_opt,
+ discriminator_optimizer=dis_opt,
+ summarize_gradients=True,
+ colocate_gradients_with_ops=True,
+ aggregation_method=tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N)
+ tf.summary.scalar('generator_lr', gen_lr)
+ tf.summary.scalar('discriminator_lr', dis_lr)
+
+ # Run the alternating training loop. Skip it if no steps should be taken
+ # (used for graph construction tests).
+ sync_hooks = ([gen_opt.make_session_run_hook(FLAGS.task == 0),
+ dis_opt.make_session_run_hook(FLAGS.task == 0)]
+ if FLAGS.use_sync_replicas else [])
+ status_message = tf.string_join(
+ ['Starting train step: ',
+ tf.as_string(tf.train.get_or_create_global_step())],
+ name='status_message')
+ if FLAGS.max_number_of_steps == 0: return
+ sess_config = tf.ConfigProto(
+ inter_op_parallelism_threads=FLAGS.inter_op_parallelism_threads,
+ intra_op_parallelism_threads=FLAGS.intra_op_parallelism_threads)
+ tfgan.gan_train(
+ train_ops,
+ hooks=(
+ [tf.train.StopAtStepHook(num_steps=FLAGS.max_number_of_steps),
+ tf.train.LoggingTensorHook([status_message], every_n_iter=10)] +
+ sync_hooks),
+ logdir=FLAGS.train_log_dir,
+ master=FLAGS.master,
+ is_chief=FLAGS.task == 0,
+ config=sess_config)
+
+
+def _learning_rate():
+ generator_lr = tf.train.exponential_decay(
+ learning_rate=0.0001,
+ global_step=tf.train.get_or_create_global_step(),
+ decay_steps=100000,
+ decay_rate=0.9,
+ staircase=True)
+ discriminator_lr = 0.001
+ return generator_lr, discriminator_lr
+
+
+def _optimizer(gen_lr, dis_lr, use_sync_replicas):
+ """Get an optimizer, that's optionally synchronous."""
+ generator_opt = tf.train.RMSPropOptimizer(gen_lr, decay=.9, momentum=0.1)
+ discriminator_opt = tf.train.RMSPropOptimizer(dis_lr, decay=.95, momentum=0.1)
+
+ def _make_sync(opt):
+ return tf.train.SyncReplicasOptimizer(
+ opt,
+ replicas_to_aggregate=FLAGS.worker_replicas-FLAGS.backup_workers,
+ total_num_replicas=FLAGS.worker_replicas)
+ if use_sync_replicas:
+ generator_opt = _make_sync(generator_opt)
+ discriminator_opt = _make_sync(discriminator_opt)
+
+ return generator_opt, discriminator_opt
+
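+# A minimal sketch of the sync-replicas arithmetic, using the default flag
+# values above: with worker_replicas=10 and backup_workers=1, each
+# SyncReplicasOptimizer aggregates 10 - 1 = 9 gradients per step across 10
+# total replicas, so one straggler can be ignored without stalling training.
+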
+
+if __name__ == '__main__':
+ logging.set_verbosity(logging.INFO)
+ tf.app.run()
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/train_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/train_test.py"
new file mode 100644
index 00000000..6f0fbc3d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/train_test.py"
@@ -0,0 +1,56 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for tfgan.examples.train."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+from absl import flags
+from absl.testing import parameterized
+import numpy as np
+import tensorflow as tf
+import train
+
+FLAGS = flags.FLAGS
+mock = tf.test.mock
+
+
+class TrainTest(tf.test.TestCase, parameterized.TestCase):
+
+ @parameterized.named_parameters(
+ ('Unconditional', False, False),
+ ('Conditional', True, False),
+ ('SyncReplicas', False, True))
+ def test_build_graph(self, conditional, use_sync_replicas):
+ FLAGS.max_number_of_steps = 0
+ FLAGS.conditional = conditional
+ FLAGS.use_sync_replicas = use_sync_replicas
+ FLAGS.batch_size = 16
+
+ # Mock input pipeline.
+ mock_imgs = np.zeros([FLAGS.batch_size, 32, 32, 3], dtype=np.float32)
+ mock_lbls = np.concatenate(
+ (np.ones([FLAGS.batch_size, 1], dtype=np.int32),
+ np.zeros([FLAGS.batch_size, 9], dtype=np.int32)), axis=1)
+ with mock.patch.object(train, 'data_provider') as mock_data_provider:
+ mock_data_provider.provide_data.return_value = (
+ mock_imgs, mock_lbls, None, None)
+ train.main(None)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/util.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/util.py"
new file mode 100644
index 00000000..02f344de
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/util.py"
@@ -0,0 +1,156 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Convenience functions for training and evaluating a TFGAN CIFAR example."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from six.moves import xrange # pylint: disable=redefined-builtin
+import tensorflow as tf
+tfgan = tf.contrib.gan
+
+
+def get_generator_conditioning(batch_size, num_classes):
+ """Generates TFGAN conditioning inputs for evaluation.
+
+ Args:
+ batch_size: A Python integer. The desired batch size.
+ num_classes: A Python integer. The number of classes.
+
+ Returns:
+ A Tensor of one-hot vectors corresponding to an even distribution over
+ classes.
+
+ Raises:
+ ValueError: If `batch_size` isn't evenly divisible by `num_classes`.
+ """
+ if batch_size % num_classes != 0:
+ raise ValueError('`batch_size` %i must be evenly divisible by '
+ '`num_classes` %i.' % (batch_size, num_classes))
+ labels = [lbl for lbl in xrange(num_classes)
+ for _ in xrange(batch_size // num_classes)]
+ return tf.one_hot(tf.constant(labels), num_classes)
+
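+# A minimal worked example (illustrative values): with batch_size=6 and
+# num_classes=3, the label list above is [0, 0, 1, 1, 2, 2], so the returned
+# Tensor is the 6x3 one-hot matrix
+# [[1, 0, 0], [1, 0, 0], [0, 1, 0], [0, 1, 0], [0, 0, 1], [0, 0, 1]].
+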
+
+def get_image_grid(images, batch_size, num_classes, num_images_per_class):
+ """Combines images from each class in a single summary image.
+
+ Args:
+ images: Tensor of images that are arranged by class. The first
+ `batch_size / num_classes` images belong to the first class, the second
+ group belong to the second class, etc. Shape is
+ [batch, width, height, channels].
+ batch_size: Python integer. Batch dimension.
+ num_classes: Number of classes to show.
+ num_images_per_class: Number of image examples per class to show.
+
+ Raises:
+ ValueError: If the batch dimension of `images` is known at graph
+ construction, and it isn't `batch_size`.
+ ValueError: If there aren't enough images to show
+ `num_classes * num_images_per_class` images.
+ ValueError: If `batch_size` isn't divisible by `num_classes`.
+
+ Returns:
+ A single image.
+ """
+ # Validate inputs.
+ images.shape[0:1].assert_is_compatible_with([batch_size])
+ if batch_size < num_classes * num_images_per_class:
+ raise ValueError('Not enough images in batch to show the desired number of '
+ 'images.')
+ if batch_size % num_classes != 0:
+ raise ValueError('`batch_size` must be divisible by `num_classes`.')
+
+ # Only get a certain number of images per class.
+ num_batches = batch_size // num_classes
+ indices = [i * num_batches + j for i in xrange(num_classes)
+ for j in xrange(num_images_per_class)]
+ sampled_images = tf.gather(images, indices)
+ return tfgan.eval.image_reshaper(
+ sampled_images, num_cols=num_images_per_class)
+
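+# A minimal worked example (illustrative values): with batch_size=6,
+# num_classes=3 and num_images_per_class=1, num_batches is 2 and the gathered
+# indices are [0, 2, 4], i.e. the first image from each class-ordered group of
+# two images.
+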
+
+def get_inception_scores(images, batch_size, num_inception_images):
+ """Get Inception score for some images.
+
+ Args:
+ images: Image minibatch. Shape [batch size, width, height, channels]. Values
+ are in [-1, 1].
+ batch_size: Python integer. Batch dimension.
+ num_inception_images: Number of images to run through Inception at once.
+
+ Returns:
+ Inception scores. Tensor shape is [batch size].
+
+ Raises:
+ ValueError: If `batch_size` is incompatible with the first dimension of
+ `images`.
+ ValueError: If `batch_size` isn't divisible by `num_inception_images`.
+ """
+ # Validate inputs.
+ images.shape[0:1].assert_is_compatible_with([batch_size])
+ if batch_size % num_inception_images != 0:
+ raise ValueError(
+ '`batch_size` must be divisible by `num_inception_images`.')
+
+ # Resize images.
+ size = 299
+ resized_images = tf.image.resize_bilinear(images, [size, size])
+
+ # Run images through Inception.
+ num_batches = batch_size // num_inception_images
+ inc_score = tfgan.eval.inception_score(
+ resized_images, num_batches=num_batches)
+
+ return inc_score
+
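+# A minimal worked example (illustrative values): with batch_size=100 and
+# num_inception_images=10, the 100 resized images are fed through Inception in
+# 100 // 10 = 10 batches of 10 images each.
+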
+
+def get_frechet_inception_distance(real_images, generated_images, batch_size,
+ num_inception_images):
+ """Get Frechet Inception Distance between real and generated images.
+
+ Args:
+ real_images: Real images minibatch. Shape [batch size, width, height,
+ channels]. Values are in [-1, 1].
+ generated_images: Generated images minibatch. Shape [batch size, width,
+ height, channels]. Values are in [-1, 1].
+ batch_size: Python integer. Batch dimension.
+ num_inception_images: Number of images to run through Inception at once.
+
+ Returns:
+ Frechet Inception distance. A floating-point scalar.
+
+ Raises:
+ ValueError: If the minibatch size is known at graph construction time, and
+ doesn't match `batch_size`.
+ """
+ # Validate input dimensions.
+ real_images.shape[0:1].assert_is_compatible_with([batch_size])
+ generated_images.shape[0:1].assert_is_compatible_with([batch_size])
+
+ # Resize input images.
+ size = 299
+ resized_real_images = tf.image.resize_bilinear(real_images, [size, size])
+ resized_generated_images = tf.image.resize_bilinear(
+ generated_images, [size, size])
+
+ # Compute Frechet Inception Distance.
+ num_batches = batch_size // num_inception_images
+ fid = tfgan.eval.frechet_inception_distance(
+ resized_real_images, resized_generated_images, num_batches=num_batches)
+
+ return fid
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/util_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/util_test.py"
new file mode 100644
index 00000000..b1847489
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cifar/util_test.py"
@@ -0,0 +1,62 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for gan.cifar.util."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+import util
+
+mock = tf.test.mock
+
+
+class UtilTest(tf.test.TestCase):
+
+ def test_get_generator_conditioning(self):
+ conditioning = util.get_generator_conditioning(12, 4)
+ self.assertEqual([12, 4], conditioning.shape.as_list())
+
+ def test_get_image_grid(self):
+ util.get_image_grid(
+ tf.zeros([6, 28, 28, 1]),
+ batch_size=6,
+ num_classes=3,
+ num_images_per_class=1)
+
+ # Mock `inception_score` which is expensive.
+ @mock.patch.object(util.tfgan.eval, 'inception_score', autospec=True)
+ def test_get_inception_scores(self, mock_inception_score):
+ mock_inception_score.return_value = 1.0
+ util.get_inception_scores(
+ tf.placeholder(tf.float32, shape=[None, 28, 28, 3]),
+ batch_size=100,
+ num_inception_images=10)
+
+ # Mock `frechet_inception_distance` which is expensive.
+ @mock.patch.object(util.tfgan.eval, 'frechet_inception_distance',
+ autospec=True)
+ def test_get_frechet_inception_distance(self, mock_fid):
+ mock_fid.return_value = 1.0
+ util.get_frechet_inception_distance(
+ tf.placeholder(tf.float32, shape=[None, 28, 28, 3]),
+ tf.placeholder(tf.float32, shape=[None, 28, 28, 3]),
+ batch_size=100,
+ num_inception_images=10)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cyclegan/data_provider.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cyclegan/data_provider.py"
new file mode 100644
index 00000000..bed09a3f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cyclegan/data_provider.py"
@@ -0,0 +1,185 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Contains code for loading and preprocessing image data."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import numpy as np
+import tensorflow as tf
+
+
+def normalize_image(image):
+ """Rescale from range [0, 255] to [-1, 1]."""
+ return (tf.to_float(image) - 127.5) / 127.5
+
+
+def undo_normalize_image(normalized_image):
+ """Convert to a numpy array that can be read by PIL."""
+ # Convert from NHWC to HWC.
+ normalized_image = np.squeeze(normalized_image, axis=0)
+ return np.uint8(normalized_image * 127.5 + 127.5)
+
+
+def _sample_patch(image, patch_size):
+ """Crop image to square shape and resize it to `patch_size`.
+
+ Args:
+ image: A 3D `Tensor` of HWC format.
+ patch_size: A Python scalar. The output image size.
+
+ Returns:
+ A 3D `Tensor` of HWC format which has the shape of
+ [patch_size, patch_size, 3].
+ """
+ image_shape = tf.shape(image)
+ height, width = image_shape[0], image_shape[1]
+ target_size = tf.minimum(height, width)
+ image = tf.image.resize_image_with_crop_or_pad(image, target_size,
+ target_size)
+ # Expand to a 4D batch for resizing, then squeeze the batch dimension out.
+ image = tf.expand_dims(image, axis=0)
+ image = tf.image.resize_images(image, [patch_size, patch_size])
+ image = tf.squeeze(image, axis=0)
+ # Force image num_channels = 3
+ image = tf.tile(image, [1, 1, tf.maximum(1, 4 - tf.shape(image)[2])])
+ image = tf.slice(image, [0, 0, 0], [patch_size, patch_size, 3])
+ return image
+
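+# For illustration: the tile/slice pair above forces exactly three channels. A
+# grayscale patch (1 channel) is tiled 3x along the channel axis and sliced to
+# [patch_size, patch_size, 3]; an RGB patch is tiled once (a no-op) and sliced
+# to the same shape, and an RGBA patch has its alpha channel sliced away.
+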
+
+def full_image_to_patch(image, patch_size):
+ image = normalize_image(image)
+ # Sample a patch of fixed size.
+ image_patch = _sample_patch(image, patch_size)
+ image_patch.shape.assert_is_compatible_with([patch_size, patch_size, 3])
+ return image_patch
+
+
+def _provide_custom_dataset(image_file_pattern,
+ batch_size,
+ shuffle=True,
+ num_threads=1,
+ patch_size=128):
+ """Provides batches of custom image data.
+
+ Args:
+ image_file_pattern: A string of glob pattern of image files.
+ batch_size: The number of images in each batch.
+ shuffle: Whether to shuffle the read images. Defaults to True.
+ num_threads: Number of mapping threads. Defaults to 1.
+ patch_size: Size of the patch to extract from the image. Defaults to 128.
+
+ Returns:
+ A tf.data.Dataset with Tensors of shape
+ [batch_size, patch_size, patch_size, 3] representing a batch of images.
+
+ Raises:
+ ValueError: If no files match `image_file_pattern`.
+ """
+ if not tf.gfile.Glob(image_file_pattern):
+ raise ValueError('No file patterns found.')
+ filenames_ds = tf.data.Dataset.list_files(image_file_pattern)
+ bytes_ds = filenames_ds.map(tf.io.read_file, num_parallel_calls=num_threads)
+ images_ds = bytes_ds.map(
+ tf.image.decode_image, num_parallel_calls=num_threads)
+ patches_ds = images_ds.map(
+ lambda img: full_image_to_patch(img, patch_size),
+ num_parallel_calls=num_threads)
+ patches_ds = patches_ds.repeat()
+
+ if shuffle:
+ patches_ds = patches_ds.shuffle(5 * batch_size)
+
+ patches_ds = patches_ds.prefetch(5 * batch_size)
+ patches_ds = patches_ds.batch(batch_size)
+
+ return patches_ds
+
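+# A minimal usage sketch (the glob path here is hypothetical):
+#   ds = _provide_custom_dataset('/tmp/set_x/*.jpg', batch_size=4)
+#   it = ds.make_initializable_iterator()
+#   images = it.get_next()  # after sess.run(it.initializer): [4, 128, 128, 3]
+# Because of repeat(), the dataset cycles through the matched files forever.
+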
+
+def provide_custom_datasets(image_file_patterns,
+ batch_size,
+ shuffle=True,
+ num_threads=1,
+ patch_size=128):
+ """Provides multiple batches of custom image data.
+
+ Args:
+ image_file_patterns: A list of glob patterns of image files.
+ batch_size: The number of images in each batch.
+ shuffle: Whether to shuffle the read images. Defaults to True.
+ num_threads: Number of prefetching threads. Defaults to 1.
+ patch_size: Size of the patch to extract from the image. Defaults to 128.
+
+ Returns:
+ A list of tf.data.Datasets, one for each pattern in `image_file_patterns`.
+ Each dataset yields `Tensor`s of shape
+ [batch_size, patch_size, patch_size, 3] representing a batch of images.
+
+ Raises:
+ ValueError: If image_file_patterns is not a list or tuple.
+ """
+ if not isinstance(image_file_patterns, (list, tuple)):
+ raise ValueError(
+ '`image_file_patterns` should be either list or tuple, but was {}.'.
+ format(type(image_file_patterns)))
+ custom_datasets = []
+ for pattern in image_file_patterns:
+ custom_datasets.append(
+ _provide_custom_dataset(
+ pattern,
+ batch_size=batch_size,
+ shuffle=shuffle,
+ num_threads=num_threads,
+ patch_size=patch_size))
+
+ return custom_datasets
+
+
+def provide_custom_data(image_file_patterns,
+ batch_size,
+ shuffle=True,
+ num_threads=1,
+ patch_size=128):
+ """Provides multiple batches of custom image data.
+
+ Args:
+ image_file_patterns: A list of glob patterns of image files.
+ batch_size: The number of images in each batch.
+ shuffle: Whether to shuffle the read images. Defaults to True.
+ num_threads: Number of prefetching threads. Defaults to 1.
+ patch_size: Size of the patch to extract from the image. Defaults to 128.
+
+ Returns:
+ A list of float `Tensor`s, one for each pattern in `image_file_patterns`.
+ Each `Tensor` has shape [batch_size, patch_size, patch_size, 3] and
+ represents a batch of images. As a side effect, each dataset iterator's
+ initializer is added to the tf.GraphKeys.TABLE_INITIALIZERS collection.
+
+ Raises:
+ ValueError: If image_file_patterns is not a list or tuple.
+ """
+ datasets = provide_custom_datasets(
+ image_file_patterns, batch_size, shuffle, num_threads, patch_size)
+
+ tensors = []
+ for ds in datasets:
+ iterator = ds.make_initializable_iterator()
+ tf.add_to_collection(tf.GraphKeys.TABLE_INITIALIZERS, iterator.initializer)
+ tensors.append(iterator.get_next())
+
+ return tensors
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cyclegan/data_provider_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cyclegan/data_provider_test.py"
new file mode 100644
index 00000000..c4c44b38
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cyclegan/data_provider_test.py"
@@ -0,0 +1,129 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for data_provider."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+
+from absl import flags
+import numpy as np
+
+import tensorflow as tf
+
+import data_provider
+
+mock = tf.test.mock
+
+
+class DataProviderTest(tf.test.TestCase):
+
+ def setUp(self):
+ super(DataProviderTest, self).setUp()
+ self.testdata_dir = os.path.join(
+ flags.FLAGS.test_srcdir,
+ 'google3/third_party/tensorflow_models/gan/cyclegan/testdata')
+
+ def test_normalize_image(self):
+ image = tf.random_uniform(shape=(8, 8, 3), maxval=256, dtype=tf.int32)
+ rescaled_image = data_provider.normalize_image(image)
+ self.assertEqual(tf.float32, rescaled_image.dtype)
+ self.assertListEqual(image.shape.as_list(), rescaled_image.shape.as_list())
+ with self.test_session(use_gpu=True) as sess:
+ rescaled_image_out = sess.run(rescaled_image)
+ self.assertTrue(np.all(np.abs(rescaled_image_out) <= 1.0))
+
+ def test_sample_patch(self):
+ image = tf.zeros(shape=(8, 8, 3))
+ patch1 = data_provider._sample_patch(image, 7)
+ patch2 = data_provider._sample_patch(image, 10)
+ image = tf.zeros(shape=(8, 8, 1))
+ patch3 = data_provider._sample_patch(image, 10)
+ with self.test_session(use_gpu=True) as sess:
+ self.assertTupleEqual((7, 7, 3), sess.run(patch1).shape)
+ self.assertTupleEqual((10, 10, 3), sess.run(patch2).shape)
+ self.assertTupleEqual((10, 10, 3), sess.run(patch3).shape)
+
+ def test_custom_dataset_provider(self):
+ file_pattern = os.path.join(self.testdata_dir, '*.jpg')
+ batch_size = 3
+ patch_size = 8
+ images_ds = data_provider._provide_custom_dataset(
+ file_pattern, batch_size=batch_size, patch_size=patch_size)
+ self.assertListEqual([None, patch_size, patch_size, 3],
+ images_ds.output_shapes.as_list())
+ self.assertEqual(tf.float32, images_ds.output_types)
+
+ iterator = images_ds.make_initializable_iterator()
+ with self.test_session(use_gpu=True) as sess:
+ sess.run(tf.local_variables_initializer())
+ sess.run(iterator.initializer)
+ images_out = sess.run(iterator.get_next())
+ self.assertTupleEqual((batch_size, patch_size, patch_size, 3),
+ images_out.shape)
+ self.assertTrue(np.all(np.abs(images_out) <= 1.0))
+
+ def test_custom_datasets_provider(self):
+ file_pattern = os.path.join(self.testdata_dir, '*.jpg')
+ batch_size = 3
+ patch_size = 8
+ images_ds_list = data_provider.provide_custom_datasets(
+ [file_pattern, file_pattern],
+ batch_size=batch_size,
+ patch_size=patch_size)
+ for images_ds in images_ds_list:
+ self.assertListEqual([None, patch_size, patch_size, 3],
+ images_ds.output_shapes.as_list())
+ self.assertEqual(tf.float32, images_ds.output_types)
+
+ iterators = [x.make_initializable_iterator() for x in images_ds_list]
+ initializers = [x.initializer for x in iterators]
+ img_tensors = [x.get_next() for x in iterators]
+ with self.test_session(use_gpu=True) as sess:
+ sess.run(tf.local_variables_initializer())
+ sess.run(initializers)
+ images_out_list = sess.run(img_tensors)
+ for images_out in images_out_list:
+ self.assertTupleEqual((batch_size, patch_size, patch_size, 3),
+ images_out.shape)
+ self.assertTrue(np.all(np.abs(images_out) <= 1.0))
+
+ def test_custom_data_provider(self):
+ file_pattern = os.path.join(self.testdata_dir, '*.jpg')
+ batch_size = 3
+ patch_size = 8
+ images_list = data_provider.provide_custom_data(
+ [file_pattern, file_pattern],
+ batch_size=batch_size,
+ patch_size=patch_size)
+ for images in images_list:
+ self.assertListEqual([None, patch_size, patch_size, 3],
+ images.shape.as_list())
+ self.assertEqual(tf.float32, images.dtype)
+
+ with self.test_session(use_gpu=True) as sess:
+ sess.run(tf.local_variables_initializer())
+ sess.run(tf.tables_initializer())
+ images_out_list = sess.run(images_list)
+ for images_out in images_out_list:
+ self.assertTupleEqual((batch_size, patch_size, patch_size, 3),
+ images_out.shape)
+ self.assertTrue(np.all(np.abs(images_out) <= 1.0))
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cyclegan/inference_demo.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cyclegan/inference_demo.py"
new file mode 100644
index 00000000..9eaaa111
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cyclegan/inference_demo.py"
@@ -0,0 +1,151 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+r"""Demo that makes inference requests against a running inference server."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+
+
+from absl import app
+from absl import flags
+import numpy as np
+import PIL
+import tensorflow as tf
+
+import data_provider
+import networks
+
+tfgan = tf.contrib.gan
+
+flags.DEFINE_string('checkpoint_path', '',
+ 'CycleGAN checkpoint path created by train.py. '
+ '(e.g. "/mylogdir/model.ckpt-18442")')
+
+flags.DEFINE_string(
+ 'image_set_x_glob', '',
+ 'Optional: Glob path to images of class X to feed through the CycleGAN.')
+
+flags.DEFINE_string(
+ 'image_set_y_glob', '',
+ 'Optional: Glob path to images of class Y to feed through the CycleGAN.')
+
+flags.DEFINE_string(
+ 'generated_x_dir', '/tmp/generated_x/',
+ 'If image_set_y_glob is defined, where to output the generated X '
+ 'images.')
+
+flags.DEFINE_string(
+ 'generated_y_dir', '/tmp/generated_y/',
+ 'If image_set_x_glob is defined, where to output the generated Y '
+ 'images.')
+
+flags.DEFINE_integer('patch_dim', 128,
+ 'The patch size of images that was used in train.py.')
+
+FLAGS = flags.FLAGS
+
+
+def _make_dir_if_not_exists(dir_path):
+ """Make a directory if it does not exist."""
+ if not tf.gfile.Exists(dir_path):
+ tf.gfile.MakeDirs(dir_path)
+
+
+def _file_output_path(dir_path, input_file_path):
+ """Create output path for an individual file."""
+ return os.path.join(dir_path, os.path.basename(input_file_path))
+
+
+def make_inference_graph(model_name, patch_dim):
+ """Build the inference graph for either the X2Y or Y2X GAN.
+
+ Args:
+ model_name: The var scope name 'ModelX2Y' or 'ModelY2X'.
+ patch_dim: An integer size of patches to feed to the generator.
+
+ Returns:
+ Tuple of (input_placeholder, generated_tensor).
+ """
+ input_hwc_pl = tf.placeholder(tf.float32, [None, None, 3])
+
+ # Expand HWC to NHWC
+ images_x = tf.expand_dims(
+ data_provider.full_image_to_patch(input_hwc_pl, patch_dim), 0)
+
+ with tf.variable_scope(model_name):
+ with tf.variable_scope('Generator'):
+ generated = networks.generator(images_x)
+ return input_hwc_pl, generated
+
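+# For illustration: the 'ModelX2Y'/'ModelY2X' scope wrapping a nested
+# 'Generator' scope is meant to match the variable scopes the CycleGAN training
+# graph creates (see inference_demo_test.py), so tf.train.Saver can restore the
+# trained generator weights into this inference graph.
+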
+
+def export(sess, input_pl, output_tensor, input_file_pattern, output_dir):
+ """Exports inference outputs to an output directory.
+
+ Args:
+ sess: tf.Session with variables already loaded.
+ input_pl: tf.Placeholder for input (HWC format).
+ output_tensor: Tensor for generated output images.
+ input_file_pattern: Glob file pattern for input images.
+ output_dir: Output directory.
+ """
+ if output_dir:
+ _make_dir_if_not_exists(output_dir)
+
+ if input_file_pattern:
+ for file_path in tf.gfile.Glob(input_file_pattern):
+ # Grab a single image and run it through inference
+ input_np = np.asarray(PIL.Image.open(file_path))
+ output_np = sess.run(output_tensor, feed_dict={input_pl: input_np})
+ image_np = data_provider.undo_normalize_image(output_np)
+ output_path = _file_output_path(output_dir, file_path)
+ PIL.Image.fromarray(image_np).save(output_path)
+
+
+def _validate_flags():
+ flags.register_validator('checkpoint_path', bool,
+ 'Must provide `checkpoint_path`.')
+ flags.register_validator(
+ 'generated_x_dir',
+ lambda x: False if (FLAGS.image_set_y_glob and not x) else True,
+ 'Must provide `generated_x_dir`.')
+ flags.register_validator(
+ 'generated_y_dir',
+ lambda x: False if (FLAGS.image_set_x_glob and not x) else True,
+ 'Must provide `generated_y_dir`.')
+
+
+def main(_):
+ _validate_flags()
+ images_x_hwc_pl, generated_y = make_inference_graph('ModelX2Y',
+ FLAGS.patch_dim)
+ images_y_hwc_pl, generated_x = make_inference_graph('ModelY2X',
+ FLAGS.patch_dim)
+
+ # Restore all the variables that were saved in the checkpoint.
+ saver = tf.train.Saver()
+ with tf.Session() as sess:
+ saver.restore(sess, FLAGS.checkpoint_path)
+
+ export(sess, images_x_hwc_pl, generated_y, FLAGS.image_set_x_glob,
+ FLAGS.generated_y_dir)
+ export(sess, images_y_hwc_pl, generated_x, FLAGS.image_set_y_glob,
+ FLAGS.generated_x_dir)
+
+
+if __name__ == '__main__':
+ app.run(main)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cyclegan/inference_demo_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cyclegan/inference_demo_test.py"
new file mode 100644
index 00000000..4474693e
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cyclegan/inference_demo_test.py"
@@ -0,0 +1,106 @@
+"""Tests for CycleGAN inference demo."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+from absl import logging
+import numpy as np
+import PIL
+
+import tensorflow as tf
+
+import inference_demo
+import train
+
+FLAGS = tf.flags.FLAGS
+mock = tf.test.mock
+tfgan = tf.contrib.gan
+
+
+def _basenames_from_glob(file_glob):
+ return [os.path.basename(file_path) for file_path in tf.gfile.Glob(file_glob)]
+
+
+class InferenceDemoTest(tf.test.TestCase):
+
+ def setUp(self):
+ self._export_dir = os.path.join(FLAGS.test_tmpdir, 'export')
+ self._ckpt_path = os.path.join(self._export_dir, 'model.ckpt')
+ self._image_glob = os.path.join(
+ FLAGS.test_srcdir,
+ 'google3/third_party/tensorflow_models/gan/cyclegan/testdata', '*.jpg')
+ self._genx_dir = os.path.join(FLAGS.test_tmpdir, 'genx')
+ self._geny_dir = os.path.join(FLAGS.test_tmpdir, 'geny')
+
+ @mock.patch.object(tfgan, 'gan_train', autospec=True)
+ @mock.patch.object(
+ train.data_provider, 'provide_custom_data', autospec=True)
+ def testTrainingAndInferenceGraphsAreCompatible(
+ self, mock_provide_custom_data, unused_mock_gan_train):
+ # Training and inference graphs can get out of sync if changes are made
+ # to one but not the other. This test will keep them in sync.
+
+ # Save the training graph
+ train_sess = tf.Session()
+ FLAGS.image_set_x_file_pattern = '/tmp/x/*.jpg'
+ FLAGS.image_set_y_file_pattern = '/tmp/y/*.jpg'
+ FLAGS.batch_size = 3
+ FLAGS.patch_size = 128
+ FLAGS.generator_lr = 0.02
+ FLAGS.discriminator_lr = 0.3
+ FLAGS.train_log_dir = self._export_dir
+ FLAGS.master = 'master'
+ FLAGS.task = 0
+ FLAGS.cycle_consistency_loss_weight = 2.0
+ FLAGS.max_number_of_steps = 1
+ mock_provide_custom_data.return_value = (
+ tf.zeros([3, 4, 4, 3,]), tf.zeros([3, 4, 4, 3]))
+ train.main(None)
+ init_op = tf.global_variables_initializer()
+ train_sess.run(init_op)
+ train_saver = tf.train.Saver()
+ train_saver.save(train_sess, save_path=self._ckpt_path)
+
+ # Create inference graph
+ tf.reset_default_graph()
+ FLAGS.patch_dim = FLAGS.patch_size
+ logging.info('dir_path: %s', os.listdir(self._export_dir))
+ FLAGS.checkpoint_path = self._ckpt_path
+ FLAGS.image_set_x_glob = self._image_glob
+ FLAGS.image_set_y_glob = self._image_glob
+ FLAGS.generated_x_dir = self._genx_dir
+ FLAGS.generated_y_dir = self._geny_dir
+
+ inference_demo.main(None)
+ logging.info('gen x: %s', os.listdir(self._genx_dir))
+
+ # Check that the image names match
+ self.assertSetEqual(
+ set(_basenames_from_glob(FLAGS.image_set_x_glob)),
+ set(os.listdir(FLAGS.generated_y_dir)))
+ self.assertSetEqual(
+ set(_basenames_from_glob(FLAGS.image_set_y_glob)),
+ set(os.listdir(FLAGS.generated_x_dir)))
+
+ # Check that each image in the directory looks as expected
+ for directory in [FLAGS.generated_x_dir, FLAGS.generated_y_dir]:
+ for base_name in os.listdir(directory):
+ image_path = os.path.join(directory, base_name)
+ self.assertRealisticImage(image_path)
+
+ def assertRealisticImage(self, image_path):
+ logging.info('Testing %s for realism.', image_path)
+ # If the normalization is off or forgotten, then the generated image is
+ # all one pixel value. This tests that different pixel values are achieved.
+ input_np = np.asarray(PIL.Image.open(image_path))
+ self.assertEqual(len(input_np.shape), 3)
+ self.assertGreaterEqual(input_np.shape[0], 50)
+ self.assertGreaterEqual(input_np.shape[1], 50)
+ self.assertGreater(np.mean(input_np), 20)
+ self.assertGreater(np.var(input_np), 100)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cyclegan/testdata/00500.jpg" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cyclegan/testdata/00500.jpg"
new file mode 100644
index 00000000..f465f6b4
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cyclegan/testdata/00500.jpg" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cyclegan/train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cyclegan/train.py"
new file mode 100644
index 00000000..63523cee
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cyclegan/train.py"
@@ -0,0 +1,217 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Trains a CycleGAN model."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+from absl import flags
+import tensorflow as tf
+
+import data_provider
+import networks
+
+tfgan = tf.contrib.gan
+
+
+flags.DEFINE_string('image_set_x_file_pattern', None,
+ 'File pattern of images in image set X')
+
+flags.DEFINE_string('image_set_y_file_pattern', None,
+ 'File pattern of images in image set Y')
+
+flags.DEFINE_integer('batch_size', 1, 'The number of images in each batch.')
+
+flags.DEFINE_integer('patch_size', 64, 'The patch size of images.')
+
+flags.DEFINE_string('master', '', 'Name of the TensorFlow master to use.')
+
+flags.DEFINE_string('train_log_dir', '/tmp/cyclegan/',
+ 'Directory where to write event logs.')
+
+flags.DEFINE_float('generator_lr', 0.0002,
+ 'The generator learning rate.')
+
+flags.DEFINE_float('discriminator_lr', 0.0001,
+ 'The discriminator learning rate.')
+
+flags.DEFINE_integer('max_number_of_steps', 500000,
+ 'The maximum number of gradient steps.')
+
+flags.DEFINE_integer(
+ 'ps_tasks', 0,
+ 'The number of parameter servers. If the value is 0, then the parameters '
+ 'are handled locally by the worker.')
+
+flags.DEFINE_integer(
+ 'task', 0,
+ 'The Task ID. This value is used when training with multiple workers to '
+ 'identify each worker.')
+
+flags.DEFINE_float('cycle_consistency_loss_weight', 10.0,
+ 'The weight of the cycle consistency loss.')
+
+
+FLAGS = flags.FLAGS
+
+
+def _define_model(images_x, images_y):
+ """Defines a CycleGAN model that maps between images_x and images_y.
+
+ Args:
+ images_x: A 4D float `Tensor` of NHWC format. Images in set X.
+ images_y: A 4D float `Tensor` of NHWC format. Images in set Y.
+
+ Returns:
+ A `CycleGANModel` namedtuple.
+ """
+ cyclegan_model = tfgan.cyclegan_model(
+ generator_fn=networks.generator,
+ discriminator_fn=networks.discriminator,
+ data_x=images_x,
+ data_y=images_y)
+
+ # Add summaries for generated images.
+ tfgan.eval.add_cyclegan_image_summaries(cyclegan_model)
+
+ return cyclegan_model
+
+
+def _get_lr(base_lr):
+ """Returns a learning rate `Tensor`.
+
+ Args:
+ base_lr: A scalar float `Tensor` or a Python number. The base learning
+ rate.
+
+ Returns:
+ A scalar float `Tensor` for the learning rate, which equals `base_lr` while
+ the global training step is less than FLAGS.max_number_of_steps / 2 and then
+ decays linearly to zero.
+ """
+ global_step = tf.train.get_or_create_global_step()
+ lr_constant_steps = FLAGS.max_number_of_steps // 2
+
+ def _lr_decay():
+ return tf.train.polynomial_decay(
+ learning_rate=base_lr,
+ global_step=(global_step - lr_constant_steps),
+ decay_steps=(FLAGS.max_number_of_steps - lr_constant_steps),
+ end_learning_rate=0.0)
+
+ return tf.cond(global_step < lr_constant_steps, lambda: base_lr, _lr_decay)
+
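+# Illustrative schedule (not part of the original file): with
+# max_number_of_steps=10 and base_lr=0.01, _get_lr returns 0.01 for global
+# steps 0 through 4 and then decays linearly, e.g. 0.002 at step 9; this is
+# the behavior exercised in train_test.py.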
+
+def _get_optimizer(gen_lr, dis_lr):
+ """Returns generator optimizer and discriminator optimizer.
+
+ Args:
+ gen_lr: A scalar float `Tensor` or a Python number. The Generator learning
+ rate.
+ dis_lr: A scalar float `Tensor` or a Python number. The Discriminator
+ learning rate.
+
+ Returns:
+ A tuple of generator optimizer and discriminator optimizer.
+ """
+ # beta1 follows
+ # https://github.com/junyanz/CycleGAN/blob/master/options.lua
+ gen_opt = tf.train.AdamOptimizer(gen_lr, beta1=0.5, use_locking=True)
+ dis_opt = tf.train.AdamOptimizer(dis_lr, beta1=0.5, use_locking=True)
+ return gen_opt, dis_opt
+
+
+def _define_train_ops(cyclegan_model, cyclegan_loss):
+ """Defines train ops that trains `cyclegan_model` with `cyclegan_loss`.
+
+ Args:
+ cyclegan_model: A `CycleGANModel` namedtuple.
+ cyclegan_loss: A `CycleGANLoss` namedtuple containing all losses for
+ `cyclegan_model`.
+
+ Returns:
+ A `GANTrainOps` namedtuple.
+ """
+ gen_lr = _get_lr(FLAGS.generator_lr)
+ dis_lr = _get_lr(FLAGS.discriminator_lr)
+ gen_opt, dis_opt = _get_optimizer(gen_lr, dis_lr)
+ train_ops = tfgan.gan_train_ops(
+ cyclegan_model,
+ cyclegan_loss,
+ generator_optimizer=gen_opt,
+ discriminator_optimizer=dis_opt,
+ summarize_gradients=True,
+ colocate_gradients_with_ops=True,
+ aggregation_method=tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N)
+
+ tf.summary.scalar('generator_lr', gen_lr)
+ tf.summary.scalar('discriminator_lr', dis_lr)
+ return train_ops
+
+
+def main(_):
+ if not tf.gfile.Exists(FLAGS.train_log_dir):
+ tf.gfile.MakeDirs(FLAGS.train_log_dir)
+
+ with tf.device(tf.train.replica_device_setter(FLAGS.ps_tasks)):
+ with tf.name_scope('inputs'):
+ images_x, images_y = data_provider.provide_custom_data(
+ [FLAGS.image_set_x_file_pattern, FLAGS.image_set_y_file_pattern],
+ batch_size=FLAGS.batch_size,
+ patch_size=FLAGS.patch_size)
+ # Set batch size for summaries.
+ images_x.set_shape([FLAGS.batch_size, None, None, None])
+ images_y.set_shape([FLAGS.batch_size, None, None, None])
+
+ # Define CycleGAN model.
+ cyclegan_model = _define_model(images_x, images_y)
+
+ # Define CycleGAN loss.
+ cyclegan_loss = tfgan.cyclegan_loss(
+ cyclegan_model,
+ cycle_consistency_loss_weight=FLAGS.cycle_consistency_loss_weight,
+ tensor_pool_fn=tfgan.features.tensor_pool)
+
+ # Define CycleGAN train ops.
+ train_ops = _define_train_ops(cyclegan_model, cyclegan_loss)
+
+ # Training
+ train_steps = tfgan.GANTrainSteps(1, 1)
+ status_message = tf.string_join(
+ [
+ 'Starting train step: ',
+ tf.as_string(tf.train.get_or_create_global_step())
+ ],
+ name='status_message')
+ if not FLAGS.max_number_of_steps:
+ return
+ tfgan.gan_train(
+ train_ops,
+ FLAGS.train_log_dir,
+ get_hooks_fn=tfgan.get_sequential_train_hooks(train_steps),
+ hooks=[
+ tf.train.StopAtStepHook(num_steps=FLAGS.max_number_of_steps),
+ tf.train.LoggingTensorHook([status_message], every_n_iter=10)
+ ],
+ master=FLAGS.master,
+ is_chief=FLAGS.task == 0)
+
+
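+# Example invocation (an illustrative sketch, not from the original file; the
+# file patterns and log directory below are hypothetical):
+#   python train.py \
+#     --image_set_x_file_pattern=/tmp/x/*.jpg \
+#     --image_set_y_file_pattern=/tmp/y/*.jpg \
+#     --train_log_dir=/tmp/cyclegan/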
+if __name__ == '__main__':
+ flags.mark_flag_as_required('image_set_x_file_pattern')
+ flags.mark_flag_as_required('image_set_y_file_pattern')
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cyclegan/train_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cyclegan/train_test.py"
new file mode 100644
index 00000000..5d27d377
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/cyclegan/train_test.py"
@@ -0,0 +1,155 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for cyclegan.train."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+from absl import flags
+import numpy as np
+import tensorflow as tf
+
+import train
+
+FLAGS = flags.FLAGS
+mock = tf.test.mock
+tfgan = tf.contrib.gan
+
+
+def _test_generator(input_images):
+ """Simple generator function."""
+ return input_images * tf.get_variable('dummy_g', initializer=2.0)
+
+
+def _test_discriminator(image_batch, unused_conditioning=None):
+ """Simple discriminator function."""
+ return tf.contrib.layers.flatten(
+ image_batch * tf.get_variable('dummy_d', initializer=2.0))
+
+
+train.networks.generator = _test_generator
+train.networks.discriminator = _test_discriminator
+
+
+class TrainTest(tf.test.TestCase):
+
+ @mock.patch.object(tfgan, 'eval', autospec=True)
+ def test_define_model(self, mock_eval):
+ FLAGS.batch_size = 2
+ images_shape = [FLAGS.batch_size, 4, 4, 3]
+ images_x_np = np.zeros(shape=images_shape)
+ images_y_np = np.zeros(shape=images_shape)
+ images_x = tf.constant(images_x_np, dtype=tf.float32)
+ images_y = tf.constant(images_y_np, dtype=tf.float32)
+
+ cyclegan_model = train._define_model(images_x, images_y)
+ self.assertIsInstance(cyclegan_model, tfgan.CycleGANModel)
+ self.assertShapeEqual(images_x_np, cyclegan_model.reconstructed_x)
+ self.assertShapeEqual(images_y_np, cyclegan_model.reconstructed_y)
+
+ @mock.patch.object(train.networks, 'generator', autospec=True)
+ @mock.patch.object(train.networks, 'discriminator', autospec=True)
+ @mock.patch.object(
+ tf.train, 'get_or_create_global_step', autospec=True)
+ def test_get_lr(self, mock_get_or_create_global_step,
+ unused_mock_discriminator, unused_mock_generator):
+ FLAGS.max_number_of_steps = 10
+ base_lr = 0.01
+ with self.test_session(use_gpu=True) as sess:
+ mock_get_or_create_global_step.return_value = tf.constant(2)
+ lr_step2 = sess.run(train._get_lr(base_lr))
+ mock_get_or_create_global_step.return_value = tf.constant(9)
+ lr_step9 = sess.run(train._get_lr(base_lr))
+
+ self.assertAlmostEqual(base_lr, lr_step2)
+ self.assertAlmostEqual(base_lr * 0.2, lr_step9)
+
+ @mock.patch.object(tf.train, 'AdamOptimizer', autospec=True)
+ def test_get_optimizer(self, mock_adam_optimizer):
+ gen_lr, dis_lr = 0.1, 0.01
+ train._get_optimizer(gen_lr=gen_lr, dis_lr=dis_lr)
+ mock_adam_optimizer.assert_has_calls([
+ mock.call(gen_lr, beta1=mock.ANY, use_locking=True),
+ mock.call(dis_lr, beta1=mock.ANY, use_locking=True)
+ ])
+
+ @mock.patch.object(tf.summary, 'scalar', autospec=True)
+ def test_define_train_ops(self, mock_summary_scalar):
+ FLAGS.batch_size = 2
+ FLAGS.generator_lr = 0.1
+ FLAGS.discriminator_lr = 0.01
+
+ images_shape = [FLAGS.batch_size, 4, 4, 3]
+ images_x = tf.zeros(images_shape, dtype=tf.float32)
+ images_y = tf.zeros(images_shape, dtype=tf.float32)
+
+ cyclegan_model = train._define_model(images_x, images_y)
+ cyclegan_loss = tfgan.cyclegan_loss(
+ cyclegan_model, cycle_consistency_loss_weight=10.0)
+ train_ops = train._define_train_ops(cyclegan_model, cyclegan_loss)
+
+ self.assertIsInstance(train_ops, tfgan.GANTrainOps)
+ mock_summary_scalar.assert_has_calls([
+ mock.call('generator_lr', mock.ANY),
+ mock.call('discriminator_lr', mock.ANY)
+ ])
+
+ @mock.patch.object(tf, 'gfile', autospec=True)
+ @mock.patch.object(train, 'data_provider', autospec=True)
+ @mock.patch.object(train, '_define_model', autospec=True)
+ @mock.patch.object(tfgan, 'cyclegan_loss', autospec=True)
+ @mock.patch.object(train, '_define_train_ops', autospec=True)
+ @mock.patch.object(tfgan, 'gan_train', autospec=True)
+ def test_main(self, mock_gan_train, mock_define_train_ops, mock_cyclegan_loss,
+ mock_define_model, mock_data_provider, mock_gfile):
+ FLAGS.image_set_x_file_pattern = '/tmp/x/*.jpg'
+ FLAGS.image_set_y_file_pattern = '/tmp/y/*.jpg'
+ FLAGS.batch_size = 3
+ FLAGS.patch_size = 8
+ FLAGS.generator_lr = 0.02
+ FLAGS.discriminator_lr = 0.3
+ FLAGS.train_log_dir = '/tmp/foo'
+ FLAGS.master = 'master'
+ FLAGS.task = 0
+ FLAGS.cycle_consistency_loss_weight = 2.0
+ FLAGS.max_number_of_steps = 1
+
+ mock_data_provider.provide_custom_data.return_value = (
+ tf.zeros([3, 2, 2, 3], dtype=tf.float32),
+ tf.zeros([3, 2, 2, 3], dtype=tf.float32))
+
+ train.main(None)
+ mock_data_provider.provide_custom_data.assert_called_once_with(
+ ['/tmp/x/*.jpg', '/tmp/y/*.jpg'], batch_size=3, patch_size=8)
+ mock_define_model.assert_called_once_with(mock.ANY, mock.ANY)
+ mock_cyclegan_loss.assert_called_once_with(
+ mock_define_model.return_value,
+ cycle_consistency_loss_weight=2.0,
+ tensor_pool_fn=mock.ANY)
+ mock_define_train_ops.assert_called_once_with(
+ mock_define_model.return_value, mock_cyclegan_loss.return_value)
+ mock_gan_train.assert_called_once_with(
+ mock_define_train_ops.return_value,
+ '/tmp/foo',
+ get_hooks_fn=mock.ANY,
+ hooks=mock.ANY,
+ master='master',
+ is_chief=True)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/g3doc/cifar_conditional_gan.png" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/g3doc/cifar_conditional_gan.png"
new file mode 100644
index 00000000..954c339c
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/g3doc/cifar_conditional_gan.png" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/g3doc/cifar_unconditional_gan.png" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/g3doc/cifar_unconditional_gan.png"
new file mode 100644
index 00000000..481a6e80
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/g3doc/cifar_unconditional_gan.png" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/g3doc/compression_wf0.png" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/g3doc/compression_wf0.png"
new file mode 100644
index 00000000..4b39eda9
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/g3doc/compression_wf0.png" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/g3doc/compression_wf10000.png" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/g3doc/compression_wf10000.png"
new file mode 100644
index 00000000..ed74069f
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/g3doc/compression_wf10000.png" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/g3doc/mnist_conditional_gan.png" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/g3doc/mnist_conditional_gan.png"
new file mode 100644
index 00000000..7f47b3c0
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/g3doc/mnist_conditional_gan.png" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/g3doc/mnist_estimator_unconditional_gan.png" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/g3doc/mnist_estimator_unconditional_gan.png"
new file mode 100644
index 00000000..4aeab789
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/g3doc/mnist_estimator_unconditional_gan.png" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/g3doc/mnist_infogan.png" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/g3doc/mnist_infogan.png"
new file mode 100644
index 00000000..fcc488fb
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/g3doc/mnist_infogan.png" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/g3doc/mnist_unconditional_gan.png" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/g3doc/mnist_unconditional_gan.png"
new file mode 100644
index 00000000..624bcf7c
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/g3doc/mnist_unconditional_gan.png" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/data_provider.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/data_provider.py"
new file mode 100644
index 00000000..da14741f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/data_provider.py"
@@ -0,0 +1,93 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Contains code for loading and preprocessing the compression image data."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import tensorflow as tf
+
+from slim.datasets import dataset_factory as datasets
+
+slim = tf.contrib.slim
+
+
+def provide_data(split_name, batch_size, dataset_dir,
+ dataset_name='imagenet', num_readers=1, num_threads=1,
+ patch_size=128):
+ """Provides batches of image data for compression.
+
+ Args:
+ split_name: Either 'train' or 'validation'.
+ batch_size: The number of images in each batch.
+ dataset_dir: The directory where the data can be found. If `None`, use
+ default.
+ dataset_name: Name of the dataset.
+ num_readers: Number of dataset readers.
+ num_threads: Number of prefetching threads.
+ patch_size: Size of the patch to extract from the image.
+
+ Returns:
+ images: A `Tensor` of size [batch_size, patch_size, patch_size, channels]
+ """
+ randomize = split_name == 'train'
+ dataset = datasets.get_dataset(
+ dataset_name, split_name, dataset_dir=dataset_dir)
+ provider = slim.dataset_data_provider.DatasetDataProvider(
+ dataset,
+ num_readers=num_readers,
+ common_queue_capacity=5 * batch_size,
+ common_queue_min=batch_size,
+ shuffle=randomize)
+ [image] = provider.get(['image'])
+
+ # Crop or pad the image to a patch of fixed size.
+ patch = tf.image.resize_image_with_crop_or_pad(image, patch_size, patch_size)
+ patch.shape.assert_is_compatible_with([patch_size, patch_size, 3])
+
+ # Preprocess the images. Make the range lie in a strictly smaller range than
+ # [-1, 1], so that network outputs aren't forced to the extreme ranges.
+ patch = (tf.to_float(patch) - 128.0) / 142.0
+
+ if randomize:
+ image_batch = tf.train.shuffle_batch(
+ [patch],
+ batch_size=batch_size,
+ num_threads=num_threads,
+ capacity=5 * batch_size,
+ min_after_dequeue=batch_size)
+ else:
+ image_batch = tf.train.batch(
+ [patch],
+ batch_size=batch_size,
+ num_threads=1, # no threads so it's deterministic
+ capacity=5 * batch_size)
+
+ return image_batch
+
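+# Illustrative arithmetic for the preprocessing above: uint8 pixel values 0
+# and 255 map to (0 - 128) / 142 ~= -0.90 and (255 - 128) / 142 ~= 0.89, so
+# batches from provide_data() lie in roughly [-0.9, 0.9), and
+# float_image_to_uint8() below inverts the mapping.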
+
+def float_image_to_uint8(image):
+ """Convert float image in ~[-0.9, 0.9) to [0, 255] uint8.
+
+ Args:
+ image: An image tensor. Values should be in [-0.9, 0.9).
+
+ Returns:
+ Input image cast to uint8 and with integer values in [0, 255].
+ """
+ image = (image * 142.0) + 128.0
+ return tf.cast(image, tf.uint8)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/data_provider_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/data_provider_test.py"
new file mode 100644
index 00000000..1d9e54bc
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/data_provider_test.py"
@@ -0,0 +1,59 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for data_provider."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+
+from absl import flags
+from absl.testing import parameterized
+import numpy as np
+
+import tensorflow as tf
+
+import data_provider
+
+
+class DataProviderTest(tf.test.TestCase, parameterized.TestCase):
+
+ @parameterized.named_parameters(
+ ('train', 'train'),
+ ('validation', 'validation'))
+ def test_data_provider(self, split_name):
+ dataset_dir = os.path.join(
+ flags.FLAGS.test_srcdir,
+ 'google3/third_party/tensorflow_models/gan/image_compression/testdata/')
+
+ batch_size = 3
+ patch_size = 8
+ images = data_provider.provide_data(
+ split_name, batch_size, dataset_dir, patch_size=8)
+ self.assertListEqual([batch_size, patch_size, patch_size, 3],
+ images.shape.as_list())
+
+ with self.test_session(use_gpu=True) as sess:
+ with tf.contrib.slim.queues.QueueRunners(sess):
+ images_out = sess.run(images)
+ self.assertEqual((batch_size, patch_size, patch_size, 3),
+ images_out.shape)
+ # Check range.
+ self.assertTrue(np.all(np.abs(images_out) <= 1.0))
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/eval.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/eval.py"
new file mode 100644
index 00000000..240f9ef5
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/eval.py"
@@ -0,0 +1,102 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Evaluates a TFGAN trained compression model."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+
+from absl import app
+from absl import flags
+import tensorflow as tf
+
+import data_provider
+import networks
+import summaries
+
+FLAGS = flags.FLAGS
+
+flags.DEFINE_string('master', '', 'Name of the TensorFlow master to use.')
+
+flags.DEFINE_string('checkpoint_dir', '/tmp/compression/',
+ 'Directory where the model was written to.')
+
+flags.DEFINE_string('eval_dir', '/tmp/compression/',
+ 'Directory where the results are saved to.')
+
+flags.DEFINE_integer('max_number_of_evaluations', None,
+ 'Number of times to run evaluation. If `None`, run '
+ 'forever.')
+
+flags.DEFINE_string('dataset_dir', None, 'Location of data.')
+
+# Compression-specific flags.
+flags.DEFINE_integer('batch_size', 32, 'The number of images in each batch.')
+
+flags.DEFINE_integer('patch_size', 32, 'The size of the patches to train on.')
+
+flags.DEFINE_integer('bits_per_patch', 1230,
+ 'The number of bits to produce per patch.')
+
+flags.DEFINE_integer('model_depth', 64,
+ 'Number of filters for compression model')
+
+
+def main(_, run_eval_loop=True):
+ with tf.name_scope('inputs'):
+ images = data_provider.provide_data(
+ 'validation', FLAGS.batch_size, dataset_dir=FLAGS.dataset_dir,
+ patch_size=FLAGS.patch_size)
+
+ # In order for variables to load, use the same variable scope as in the
+ # train job.
+ with tf.variable_scope('generator'):
+ reconstructions, _, prebinary = networks.compression_model(
+ images,
+ num_bits=FLAGS.bits_per_patch,
+ depth=FLAGS.model_depth,
+ is_training=False)
+ summaries.add_reconstruction_summaries(images, reconstructions, prebinary)
+
+ # Visualize losses.
+ pixel_loss_per_example = tf.reduce_mean(
+ tf.abs(images - reconstructions), axis=[1, 2, 3])
+ pixel_loss = tf.reduce_mean(pixel_loss_per_example)
+ tf.summary.histogram('pixel_l1_loss_hist', pixel_loss_per_example)
+ tf.summary.scalar('pixel_l1_loss', pixel_loss)
+
+ # Create ops to write images to disk.
+ uint8_images = data_provider.float_image_to_uint8(images)
+ uint8_reconstructions = data_provider.float_image_to_uint8(reconstructions)
+ uint8_reshaped = summaries.stack_images(uint8_images, uint8_reconstructions)
+ image_write_ops = tf.write_file(
+ '%s/%s'% (FLAGS.eval_dir, 'compression.png'),
+ tf.image.encode_png(uint8_reshaped[0]))
+
+ # For unit testing, use `run_eval_loop=False`.
+ if not run_eval_loop: return
+ tf.contrib.training.evaluate_repeatedly(
+ FLAGS.checkpoint_dir,
+ master=FLAGS.master,
+ hooks=[tf.contrib.training.SummaryAtEndHook(FLAGS.eval_dir),
+ tf.contrib.training.StopAfterNEvalsHook(1)],
+ eval_ops=image_write_ops,
+ max_number_of_evaluations=FLAGS.max_number_of_evaluations)
+
+
+if __name__ == '__main__':
+ app.run(main)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/eval_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/eval_test.py"
new file mode 100644
index 00000000..89b3f1b5
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/eval_test.py"
@@ -0,0 +1,32 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for gan.image_compression.eval."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+import eval # pylint:disable=redefined-builtin
+
+
+class EvalTest(tf.test.TestCase):
+
+ def test_build_graph(self):
+ eval.main(None, run_eval_loop=False)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/launch_jobs.sh" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/launch_jobs.sh"
new file mode 100644
index 00000000..e7fa2ec4
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/launch_jobs.sh"
@@ -0,0 +1,85 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+#!/bin/bash
+#
+# This script performs the following operations:
+# 1. Downloads the Imagenet dataset.
+# 2. Trains image compression model on patches from Imagenet.
+# 3. Evaluates the models and writes sample images to disk.
+#
+# Usage:
+# cd models/research/gan/image_compression
+# ./launch_jobs.sh ${weight_factor} ${git_repo}
+set -e
+
+# Weight of the adversarial loss.
+weight_factor=$1
+if [[ "$weight_factor" == "" ]]; then
+ echo "'weight_factor' must not be empty."
+ exit 1
+fi
+
+# Location of the git repository.
+git_repo=$2
+if [[ "$git_repo" == "" ]]; then
+ echo "'git_repo' must not be empty."
+ exit 1
+fi
+
+# Base name for where the checkpoint and logs will be saved to.
+TRAIN_DIR=/tmp/compression-model
+
+# Base name for where the evaluation images will be saved to.
+EVAL_DIR=/tmp/compression-model/eval
+
+# Where the dataset is saved to.
+DATASET_DIR=/tmp/imagenet-data
+
+export PYTHONPATH=$PYTHONPATH:$git_repo:$git_repo/research:$git_repo/research/slim:$git_repo/research/slim/nets
+
+# A helper function for printing pretty output.
+Banner () {
+ local text=$1
+ local green='\033[0;32m'
+ local nc='\033[0m' # No color.
+ echo -e "${green}${text}${nc}"
+}
+
+# Download the dataset. You will be asked for an ImageNet username and password.
+# To get one, register at http://www.image-net.org/.
+bazel build "${git_repo}/research/slim:download_and_convert_imagenet"
+"./bazel-bin/download_and_convert_imagenet" ${DATASET_DIR}
+
+# Run the compression model.
+NUM_STEPS=10000
+MODEL_TRAIN_DIR="${TRAIN_DIR}/wt${weight_factor}"
+Banner "Starting training an image compression model for ${NUM_STEPS} steps..."
+python "${git_repo}/research/gan/image_compression/train.py" \
+ --train_log_dir=${MODEL_TRAIN_DIR} \
+ --dataset_dir=${DATASET_DIR} \
+ --max_number_of_steps=${NUM_STEPS} \
+ --weight_factor=${weight_factor} \
+ --alsologtostderr
+Banner "Finished training image compression model ${NUM_STEPS} steps."
+
+# Run evaluation.
+MODEL_EVAL_DIR="${TRAIN_DIR}/eval/wt${weight_factor}"
+Banner "Starting evaluation of image compression model..."
+python "${git_repo}/research/gan/image_compression/eval.py" \
+ --checkpoint_dir=${MODEL_TRAIN_DIR} \
+ --eval_dir=${MODEL_EVAL_DIR} \
+ --dataset_dir=${DATASET_DIR} \
+ --max_number_of_evaluations=1
+Banner "Finished evaluation. See ${MODEL_EVAL_DIR} for output images."
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/networks.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/networks.py"
new file mode 100644
index 00000000..ff44d84a
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/networks.py"
@@ -0,0 +1,145 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Networks for GAN compression example using TFGAN."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from slim.nets import dcgan
+from slim.nets import pix2pix
+
+
+def _last_conv_layer(end_points):
+ """"Returns the last convolutional layer from an endpoints dictionary."""
+ conv_list = [k if k[:4] == 'conv' else None for k in end_points.keys()]
+ conv_list.sort()
+ return end_points[conv_list[-1]]
+
+
+def _encoder(img_batch, is_training=True, bits=64, depth=64):
+ """Maps images to internal representation.
+
+ Args:
+ img_batch: A batch of images to encode, with shape
+ [batch, height, width, channels].
+ is_training: A python bool. If False, run in evaluation mode.
+ bits: Number of bits per patch.
+ depth: The base number of filters for the encoder network.
+
+ Returns:
+ Real-valued 2D Tensor of size [batch_size, bits].
+ """
+ _, end_points = dcgan.discriminator(
+ img_batch, depth=depth, is_training=is_training, scope='Encoder')
+
+ # TODO(joelshor): Make the DCGAN convolutional layer that converts to logits
+ # not trainable, since it doesn't affect the encoder output.
+
+ # Get the pre-logit layer, which is the last conv.
+ net = _last_conv_layer(end_points)
+
+ # Transform the features to the proper number of bits.
+ with tf.variable_scope('EncoderTransformer'):
+ encoded = tf.contrib.layers.conv2d(net, bits, kernel_size=1, stride=1,
+ padding='VALID', normalizer_fn=None,
+ activation_fn=None)
+ encoded = tf.squeeze(encoded, [1, 2])
+ encoded.shape.assert_has_rank(2)
+
+ # Map encoded to the range [-1, 1].
+ return tf.nn.softsign(encoded)
+
+
+def _binarizer(prebinary_codes, is_training):
+ """Binarize compression logits.
+
+ During training, add noise, as in https://arxiv.org/pdf/1611.01704.pdf. During
+ eval, map [-1, 1] -> {-1, 1}.
+
+ Args:
+ prebinary_codes: Floating-point tensors corresponding to pre-binary codes.
+ Shape is [batch, code_length].
+ is_training: A python bool. If True, add noise. If false, binarize.
+
+ Returns:
+ Binarized codes. Shape is [batch, code_length].
+
+ Raises:
+ ValueError: If the shape of `prebinary_codes` isn't static.
+ """
+ if is_training:
+ # In order to train codes that can be binarized during eval, we add noise as
+ # in https://arxiv.org/pdf/1611.01704.pdf. Another option is to use a
+ # stochastic node, as in https://arxiv.org/abs/1608.05148.
+ noise = tf.random_uniform(
+ prebinary_codes.shape,
+ minval=-1.0,
+ maxval=1.0)
+ return prebinary_codes + noise
+ else:
+ return tf.sign(prebinary_codes)
+
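+# Illustrative behavior (not part of the original file): during training a
+# prebinary value of, say, 0.3 becomes 0.3 plus uniform noise drawn from
+# [-1, 1), a differentiable stand-in for quantization; at eval time tf.sign
+# maps 0.3 to 1.0 and -0.2 to -1.0.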
+
+def _decoder(codes, final_size, is_training, depth=64):
+ """Compression decoder."""
+ decoded_img, _ = dcgan.generator(
+ codes,
+ depth=depth,
+ final_size=final_size,
+ num_outputs=3,
+ is_training=is_training,
+ scope='Decoder')
+
+ # Map output to [-1, 1].
+ # Use softsign instead of tanh, as per empirical results of
+ # http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf.
+ return tf.nn.softsign(decoded_img)
+
+
+def _validate_image_inputs(image_batch):
+ image_batch.shape.assert_has_rank(4)
+ image_batch.shape[1:].assert_is_fully_defined()
+
+
+def compression_model(image_batch, num_bits=64, depth=64, is_training=True):
+ """Image compression model.
+
+ Args:
+ image_batch: A batch of images to compress and reconstruct. Images should
+ be normalized already. Shape is [batch, height, width, channels].
+ num_bits: Desired number of bits per image in the compressed representation.
+ depth: The base number of filters for the encoder and decoder networks.
+ is_training: A python bool. If False, run in evaluation mode.
+
+ Returns:
+ uncompressed images, binary codes, prebinary codes
+ """
+ image_batch = tf.convert_to_tensor(image_batch)
+ _validate_image_inputs(image_batch)
+ final_size = image_batch.shape.as_list()[1]
+
+ prebinary_codes = _encoder(image_batch, is_training, num_bits, depth)
+ binary_codes = _binarizer(prebinary_codes, is_training)
+ uncompressed_imgs = _decoder(binary_codes, final_size, is_training, depth)
+ return uncompressed_imgs, binary_codes, prebinary_codes
+
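+# Shape sketch (illustrative, mirroring networks_test.py): for input images of
+# shape [batch, patch, patch, 3] and num_bits=B, compression_model returns a
+# reconstruction of the same shape together with binary and prebinary codes of
+# shape [batch, B].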
+
+def discriminator(image_batch, unused_conditioning=None, depth=64):
+ """A thin wrapper around the pix2pix discriminator to conform to TFGAN API."""
+ logits, _ = pix2pix.pix2pix_discriminator(
+ image_batch, num_filters=[depth, 2 * depth, 4 * depth, 8 * depth])
+ return tf.layers.flatten(logits)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/networks_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/networks_test.py"
new file mode 100644
index 00000000..222f7e8a
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/networks_test.py"
@@ -0,0 +1,99 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for gan.image_compression.networks."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from six.moves import xrange # pylint: disable=redefined-builtin
+import tensorflow as tf
+import networks
+
+
+class NetworksTest(tf.test.TestCase):
+
+ def test_last_conv_layer(self):
+ x = tf.constant(1.0)
+ y = tf.constant(0.0)
+ end_points = {
+ 'silly': y,
+ 'conv2': y,
+ 'conv4': x,
+ 'logits': y,
+ 'conv-1': y,
+ }
+ self.assertEqual(x, networks._last_conv_layer(end_points))
+
+ def test_generator_run(self):
+ img_batch = tf.zeros([3, 16, 16, 3])
+ model_output = networks.compression_model(img_batch)
+ with self.test_session() as sess:
+ sess.run(tf.global_variables_initializer())
+ sess.run(model_output)
+
+ def test_generator_graph(self):
+ for i, batch_size in zip(xrange(3, 7), xrange(3, 11, 2)):
+ tf.reset_default_graph()
+ patch_size = 2 ** i
+ bits = 2 ** i
+ img = tf.ones([batch_size, patch_size, patch_size, 3])
+ uncompressed, binary_codes, prebinary = networks.compression_model(
+ img, bits)
+
+ self.assertAllEqual([batch_size, patch_size, patch_size, 3],
+ uncompressed.shape.as_list())
+ self.assertEqual([batch_size, bits], binary_codes.shape.as_list())
+ self.assertEqual([batch_size, bits], prebinary.shape.as_list())
+
+ def test_generator_invalid_input(self):
+ wrong_dim_input = tf.zeros([5, 32, 32])
+ with self.assertRaisesRegexp(ValueError, 'Shape .* must have rank 4'):
+ networks.compression_model(wrong_dim_input)
+
+ not_fully_defined = tf.placeholder(tf.float32, [3, None, 32, 3])
+ with self.assertRaisesRegexp(ValueError, 'Shape .* is not fully defined'):
+ networks.compression_model(not_fully_defined)
+
+ def test_discriminator_run(self):
+ img_batch = tf.zeros([3, 70, 70, 3])
+ disc_output = networks.discriminator(img_batch)
+ with self.test_session() as sess:
+ sess.run(tf.global_variables_initializer())
+ sess.run(disc_output)
+
+ def test_discriminator_graph(self):
+ # Check graph construction for a number of image size/depths and batch
+ # sizes.
+ for batch_size, patch_size in zip([3, 6], [70, 128]):
+ tf.reset_default_graph()
+ img = tf.ones([batch_size, patch_size, patch_size, 3])
+ disc_output = networks.discriminator(img)
+
+ self.assertEqual(2, disc_output.shape.ndims)
+ self.assertEqual(batch_size, disc_output.shape[0])
+
+ def test_discriminator_invalid_input(self):
+ wrong_dim_input = tf.zeros([5, 32, 32])
+ with self.assertRaisesRegexp(ValueError, 'Shape must be rank 4'):
+ networks.discriminator(wrong_dim_input)
+
+ not_fully_defined = tf.placeholder(tf.float32, [3, None, 32, 3])
+ with self.assertRaisesRegexp(ValueError, 'Shape .* is not fully defined'):
+ networks.compression_model(not_fully_defined)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/summaries.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/summaries.py"
new file mode 100644
index 00000000..f85a780c
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/summaries.py"
@@ -0,0 +1,44 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Summaries utility file to share between train and eval."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+
+import tensorflow as tf
+
+tfgan = tf.contrib.gan
+
+
+def add_reconstruction_summaries(images, reconstructions, prebinary,
+ num_imgs_to_visualize=8):
+ """Adds image summaries."""
+ reshaped_img = stack_images(images, reconstructions, num_imgs_to_visualize)
+
+ tf.summary.image('real_vs_reconstruction', reshaped_img, max_outputs=1)
+ if prebinary is not None:
+ tf.summary.histogram('prebinary_codes', prebinary)
+
+
+def stack_images(images, reconstructions, num_imgs_to_visualize=8):
+ """Stack and reshape images to see compression effects."""
+ to_reshape = (tf.unstack(images)[:num_imgs_to_visualize] +
+ tf.unstack(reconstructions)[:num_imgs_to_visualize])
+ reshaped_img = tfgan.eval.image_reshaper(
+ to_reshape, num_cols=num_imgs_to_visualize)
+ return reshaped_img
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/testdata/labels.txt" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/testdata/labels.txt"
new file mode 100644
index 00000000..8b137891
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/testdata/labels.txt"
@@ -0,0 +1 @@
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/train.py"
new file mode 100644
index 00000000..70aed9fc
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/train.py"
@@ -0,0 +1,217 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+# pylint:disable=line-too-long
+"""Trains an image compression network with an adversarial loss."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+from absl import flags
+from absl import logging
+import tensorflow as tf
+
+import data_provider
+import networks
+import summaries
+
+
+tfgan = tf.contrib.gan
+FLAGS = flags.FLAGS
+
+flags.DEFINE_integer('batch_size', 32, 'The number of images in each batch.')
+
+flags.DEFINE_integer('patch_size', 32, 'The size of the patches to train on.')
+
+flags.DEFINE_integer('bits_per_patch', 1230,
+ 'The number of bits to produce per patch.')
+
+flags.DEFINE_integer('model_depth', 64,
+ 'Number of filters for compression model')
+
+flags.DEFINE_string('master', '', 'Name of the TensorFlow master to use.')
+
+flags.DEFINE_string('train_log_dir', '/tmp/compression/',
+ 'Directory where to write event logs.')
+
+flags.DEFINE_float('generator_lr', 1e-5,
+ 'The compression model learning rate.')
+
+flags.DEFINE_float('discriminator_lr', 1e-6,
+ 'The discriminator learning rate.')
+
+flags.DEFINE_integer('max_number_of_steps', 2000000,
+ 'The maximum number of gradient steps.')
+
+flags.DEFINE_integer(
+ 'ps_tasks', 0,
+ 'The number of parameter servers. If the value is 0, then the parameters '
+ 'are handled locally by the worker.')
+
+flags.DEFINE_integer(
+ 'task', 0,
+ 'The Task ID. This value is used when training with multiple workers to '
+ 'identify each worker.')
+
+flags.DEFINE_float(
+ 'weight_factor', 10000.0,
+ 'How much to weight the adversarial loss relative to pixel loss.')
+
+flags.DEFINE_string('dataset_dir', None, 'Location of data.')
+
+
+def main(_):
+ if not tf.gfile.Exists(FLAGS.train_log_dir):
+ tf.gfile.MakeDirs(FLAGS.train_log_dir)
+
+ with tf.device(tf.train.replica_device_setter(FLAGS.ps_tasks)):
+ # Put input pipeline on CPU to reserve GPU for training.
+ with tf.name_scope('inputs'), tf.device('/cpu:0'):
+ images = data_provider.provide_data(
+ 'train', FLAGS.batch_size, dataset_dir=FLAGS.dataset_dir,
+ patch_size=FLAGS.patch_size)
+
+ # Manually define a GANModel tuple. This is useful when we have custom
+ # code to track variables. Note that we could replace all of this with a
+ # call to `tfgan.gan_model`, but we don't in order to demonstrate some of
+ # TFGAN's flexibility.
+ with tf.variable_scope('generator') as gen_scope:
+ reconstructions, _, prebinary = networks.compression_model(
+ images,
+ num_bits=FLAGS.bits_per_patch,
+ depth=FLAGS.model_depth)
+ gan_model = _get_gan_model(
+ generator_inputs=images,
+ generated_data=reconstructions,
+ real_data=images,
+ generator_scope=gen_scope)
+ summaries.add_reconstruction_summaries(images, reconstructions, prebinary)
+ tfgan.eval.add_gan_model_summaries(gan_model)
+
+ # Define the GANLoss tuple using standard library functions.
+ with tf.name_scope('loss'):
+ gan_loss = tfgan.gan_loss(
+ gan_model,
+ generator_loss_fn=tfgan.losses.least_squares_generator_loss,
+ discriminator_loss_fn=tfgan.losses.least_squares_discriminator_loss,
+ add_summaries=FLAGS.weight_factor > 0)
+
+ # Define the standard pixel loss.
+ l1_pixel_loss = tf.norm(gan_model.real_data - gan_model.generated_data,
+ ord=1)
+
+ # Modify the loss tuple to include the pixel loss. Add summaries as well.
+ gan_loss = tfgan.losses.combine_adversarial_loss(
+ gan_loss, gan_model, l1_pixel_loss, weight_factor=FLAGS.weight_factor)
+
+ # Get the GANTrain ops using the custom optimizers and optional
+ # discriminator weight clipping.
+ with tf.name_scope('train_ops'):
+ gen_lr, dis_lr = _lr(FLAGS.generator_lr, FLAGS.discriminator_lr)
+ gen_opt, dis_opt = _optimizer(gen_lr, dis_lr)
+ train_ops = tfgan.gan_train_ops(
+ gan_model,
+ gan_loss,
+ generator_optimizer=gen_opt,
+ discriminator_optimizer=dis_opt,
+ summarize_gradients=True,
+ colocate_gradients_with_ops=True,
+ aggregation_method=tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N)
+ tf.summary.scalar('generator_lr', gen_lr)
+ tf.summary.scalar('discriminator_lr', dis_lr)
+
+ # Determine the number of generator vs discriminator steps.
+ train_steps = tfgan.GANTrainSteps(
+ generator_train_steps=1,
+ discriminator_train_steps=int(FLAGS.weight_factor > 0))
+
+ # Run the alternating training loop. Skip it if no steps should be taken
+ # (used for graph construction tests).
+ status_message = tf.string_join(
+ ['Starting train step: ',
+ tf.as_string(tf.train.get_or_create_global_step())],
+ name='status_message')
+ if FLAGS.max_number_of_steps == 0: return
+ tfgan.gan_train(
+ train_ops,
+ FLAGS.train_log_dir,
+ tfgan.get_sequential_train_hooks(train_steps),
+ hooks=[tf.train.StopAtStepHook(num_steps=FLAGS.max_number_of_steps),
+ tf.train.LoggingTensorHook([status_message], every_n_iter=10)],
+ master=FLAGS.master,
+ is_chief=FLAGS.task == 0)
+
+
+def _optimizer(gen_lr, dis_lr):
+ # First is generator optimizer, second is discriminator.
+ adam_kwargs = {
+ 'epsilon': 1e-8,
+ 'beta1': 0.5,
+ }
+ return (tf.train.AdamOptimizer(gen_lr, **adam_kwargs),
+ tf.train.AdamOptimizer(dis_lr, **adam_kwargs))
+
+
+def _lr(gen_lr_base, dis_lr_base):
+ """Return the generator and discriminator learning rates."""
+ gen_lr_kwargs = {
+ 'decay_steps': 60000,
+ 'decay_rate': 0.9,
+ 'staircase': True,
+ }
+ gen_lr = tf.train.exponential_decay(
+ learning_rate=gen_lr_base,
+ global_step=tf.train.get_or_create_global_step(),
+ **gen_lr_kwargs)
+ dis_lr = dis_lr_base
+
+ return gen_lr, dis_lr
+
+
+def _get_gan_model(generator_inputs, generated_data, real_data,
+ generator_scope):
+ """Manually construct and return a GANModel tuple."""
+ generator_vars = tf.contrib.framework.get_trainable_variables(generator_scope)
+
+ discriminator_fn = networks.discriminator
+ with tf.variable_scope('discriminator') as dis_scope:
+ discriminator_gen_outputs = discriminator_fn(generated_data)
+ with tf.variable_scope(dis_scope, reuse=True):
+ discriminator_real_outputs = discriminator_fn(real_data)
+ discriminator_vars = tf.contrib.framework.get_trainable_variables(
+ dis_scope)
+
+ # Manually construct GANModel tuple.
+ gan_model = tfgan.GANModel(
+ generator_inputs=generator_inputs,
+ generated_data=generated_data,
+ generator_variables=generator_vars,
+ generator_scope=generator_scope,
+ generator_fn=None, # not necessary
+ real_data=real_data,
+ discriminator_real_outputs=discriminator_real_outputs,
+ discriminator_gen_outputs=discriminator_gen_outputs,
+ discriminator_variables=discriminator_vars,
+ discriminator_scope=dis_scope,
+ discriminator_fn=discriminator_fn)
+
+ return gan_model
+
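+# Note (an illustrative aside, not from the original file): as the comment in
+# main() points out, the manual construction above could roughly be replaced
+# by a single tfgan.gan_model(generator_fn=..., discriminator_fn=...,
+# real_data=..., generator_inputs=...) call; the tuple is built by hand here
+# to track the generator and discriminator variables explicitly.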
+
+if __name__ == '__main__':
+ logging.set_verbosity(logging.INFO)
+ tf.app.run()
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/train_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/train_test.py"
new file mode 100644
index 00000000..a76f9b06
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/image_compression/train_test.py"
@@ -0,0 +1,56 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for image_compression.train."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+from absl import flags
+from absl.testing import parameterized
+import numpy as np
+import tensorflow as tf
+import train
+
+FLAGS = flags.FLAGS
+mock = tf.test.mock
+
+
+class TrainTest(tf.test.TestCase, parameterized.TestCase):
+
+ @parameterized.named_parameters(
+ ('NoAdversarialLoss', 0.0),
+ ('AdversarialLoss', 1.0))
+ def test_build_graph(self, weight_factor):
+ FLAGS.max_number_of_steps = 0
+ FLAGS.weight_factor = weight_factor
+
+ batch_size = 3
+ patch_size = 16
+
+ FLAGS.batch_size = batch_size
+ FLAGS.patch_size = patch_size
+ mock_imgs = np.zeros([batch_size, patch_size, patch_size, 3],
+ dtype=np.float32)
+
+ with mock.patch.object(train, 'data_provider') as mock_data_provider:
+ mock_data_provider.provide_data.return_value = mock_imgs
+ train.main(None)
+
+
+if __name__ == '__main__':
+ tf.test.main()
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/conditional_eval.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/conditional_eval.py"
new file mode 100644
index 00000000..9fbd640f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/conditional_eval.py"
@@ -0,0 +1,111 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Evaluates a conditional TFGAN trained MNIST model."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+from absl import app
+from absl import flags
+import tensorflow as tf
+
+import data_provider
+import networks
+import util
+
+tfgan = tf.contrib.gan
+
+
+flags.DEFINE_string('checkpoint_dir', '/tmp/mnist/',
+ 'Directory where the model was written to.')
+
+flags.DEFINE_string('eval_dir', '/tmp/mnist/',
+ 'Directory where the results are saved to.')
+
+flags.DEFINE_integer('num_images_per_class', 10,
+ 'Number of images to generate per class.')
+
+flags.DEFINE_integer('noise_dims', 64,
+ 'Dimensions of the generator noise vector')
+
+flags.DEFINE_string('classifier_filename', None,
+ 'Location of the pretrained classifier. If `None`, use '
+ 'default.')
+
+flags.DEFINE_integer('max_number_of_evaluations', None,
+ 'Number of times to run evaluation. If `None`, run '
+ 'forever.')
+
+flags.DEFINE_boolean('write_to_disk', True, 'If `True`, write images to disk.')
+
+FLAGS = flags.FLAGS
+NUM_CLASSES = 10
+
+
+def main(_, run_eval_loop=True):
+ with tf.name_scope('inputs'):
+ noise, one_hot_labels = _get_generator_inputs(
+ FLAGS.num_images_per_class, NUM_CLASSES, FLAGS.noise_dims)
+
+ # Generate images.
+ with tf.variable_scope('Generator'): # Same scope as in train job.
+ images = networks.conditional_generator(
+ (noise, one_hot_labels), is_training=False)
+
+ # Visualize images.
+ reshaped_img = tfgan.eval.image_reshaper(
+ images, num_cols=FLAGS.num_images_per_class)
+ tf.summary.image('generated_images', reshaped_img, max_outputs=1)
+
+ # Calculate evaluation metrics.
+ tf.summary.scalar('MNIST_Classifier_score',
+ util.mnist_score(images, FLAGS.classifier_filename))
+ tf.summary.scalar('MNIST_Cross_entropy',
+ util.mnist_cross_entropy(
+ images, one_hot_labels, FLAGS.classifier_filename))
+
+ # Write images to disk.
+ image_write_ops = None
+ if FLAGS.write_to_disk:
+ image_write_ops = tf.write_file(
+ '%s/%s'% (FLAGS.eval_dir, 'conditional_gan.png'),
+ tf.image.encode_png(data_provider.float_image_to_uint8(
+ reshaped_img[0])))
+
+ # For unit testing, use `run_eval_loop=False`.
+ if not run_eval_loop: return
+ tf.contrib.training.evaluate_repeatedly(
+ FLAGS.checkpoint_dir,
+ hooks=[tf.contrib.training.SummaryAtEndHook(FLAGS.eval_dir),
+ tf.contrib.training.StopAfterNEvalsHook(1)],
+ eval_ops=image_write_ops,
+ max_number_of_evaluations=FLAGS.max_number_of_evaluations)
+
+
+def _get_generator_inputs(num_images_per_class, num_classes, noise_dims):
+ # Since we want a grid of numbers for the conditional generator, manually
+ # construct the desired class labels.
+ num_images_generated = num_images_per_class * num_classes
+ noise = tf.random_normal([num_images_generated, noise_dims])
+ labels = [lbl for lbl in range(num_classes) for _
+ in range(num_images_per_class)]
+ one_hot_labels = tf.one_hot(tf.constant(labels), num_classes)
+ return noise, one_hot_labels
+
+
+if __name__ == '__main__':
+ app.run(main)
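The label grid built by `_get_generator_inputs` repeats each class `num_images_per_class` times, so that after `image_reshaper` lays the batch out with `num_cols=num_images_per_class`, each row of the grid shows a single class. A minimal numpy sketch of that layout, using hypothetical toy sizes rather than the script's defaults:

```python
import numpy as np

# Toy sizes for illustration only: 3 classes, 2 images per class.
num_classes, num_images_per_class = 3, 2

# Same construction as in _get_generator_inputs.
labels = [lbl for lbl in range(num_classes)
          for _ in range(num_images_per_class)]
print(labels)  # [0, 0, 1, 1, 2, 2]

# One-hot encoding, analogous to tf.one_hot(labels, num_classes).
one_hot = np.eye(num_classes, dtype=np.float32)[labels]
print(one_hot.shape)  # (6, 3)
```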
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/conditional_eval_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/conditional_eval_test.py"
new file mode 100644
index 00000000..1bb96a24
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/conditional_eval_test.py"
@@ -0,0 +1,32 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for tfgan.examples.mnist.conditional_eval."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from absl.testing import absltest
+import conditional_eval
+
+
+class ConditionalEvalTest(absltest.TestCase):
+
+ def test_build_graph(self):
+ conditional_eval.main(None, run_eval_loop=False)
+
+
+if __name__ == '__main__':
+ absltest.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/data_provider.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/data_provider.py"
new file mode 100644
index 00000000..afbc3809
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/data_provider.py"
@@ -0,0 +1,85 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Contains code for loading and preprocessing the MNIST data."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+
+import tensorflow as tf
+
+from slim.datasets import dataset_factory as datasets
+
+slim = tf.contrib.slim
+
+
+def provide_data(split_name, batch_size, dataset_dir, num_readers=1,
+ num_threads=1):
+ """Provides batches of MNIST digits.
+
+ Args:
+ split_name: Either 'train' or 'test'.
+ batch_size: The number of images in each batch.
+ dataset_dir: The directory where the MNIST data can be found.
+ num_readers: Number of dataset readers.
+ num_threads: Number of prefetching threads.
+
+ Returns:
+ images: A `Tensor` of size [batch_size, 28, 28, 1]
+ one_hot_labels: A `Tensor` of size [batch_size, mnist.NUM_CLASSES], where
+ each row has a single element set to one and the rest set to zeros.
+ num_samples: The number of total samples in the dataset.
+
+ Raises:
+ ValueError: If `split_name` is not either 'train' or 'test'.
+ """
+ dataset = datasets.get_dataset('mnist', split_name, dataset_dir=dataset_dir)
+ provider = slim.dataset_data_provider.DatasetDataProvider(
+ dataset,
+ num_readers=num_readers,
+ common_queue_capacity=2 * batch_size,
+ common_queue_min=batch_size,
+ shuffle=(split_name == 'train'))
+ [image, label] = provider.get(['image', 'label'])
+
+ # Preprocess the images.
+ image = (tf.to_float(image) - 128.0) / 128.0
+
+ # Creates a QueueRunner for the pre-fetching operation.
+ images, labels = tf.train.batch(
+ [image, label],
+ batch_size=batch_size,
+ num_threads=num_threads,
+ capacity=5 * batch_size)
+
+ one_hot_labels = tf.one_hot(labels, dataset.num_classes)
+ return images, one_hot_labels, dataset.num_samples
+
+
+def float_image_to_uint8(image):
+ """Convert float image in [-1, 1) to [0, 255] uint8.
+
+ Note that `1` gets mapped to `0`, but `1 - epsilon` gets mapped to 255.
+
+ Args:
+ image: An image tensor. Values should be in [-1, 1).
+
+ Returns:
+ Input image cast to uint8 and with integer values in [0, 255].
+ """
+ image = (image * 128.0) + 128.0
+ return tf.cast(image, tf.uint8)
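For intuition, `float_image_to_uint8` is just an affine rescale from [-1, 1) to [0, 256) followed by a cast; a quick numpy check of the mapping (illustrative only, not part of the library code):

```python
import numpy as np

x = np.array([-1.0, 0.0, 1.0 - 1e-3], dtype=np.float32)
y = (x * 128.0 + 128.0).astype(np.uint8)  # truncates toward zero, like tf.cast
print(y)  # [  0 128 255]
```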
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/data_provider_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/data_provider_test.py"
new file mode 100644
index 00000000..463ecfdc
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/data_provider_test.py"
@@ -0,0 +1,49 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for data_provider."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+
+
+from absl import flags
+import tensorflow as tf
+
+import data_provider
+
+
+class DataProviderTest(tf.test.TestCase):
+
+ def test_mnist_data_reading(self):
+ dataset_dir = os.path.join(
+ flags.FLAGS.test_srcdir,
+ 'google3/third_party/tensorflow_models/gan/mnist/testdata')
+
+ batch_size = 5
+ images, labels, num_samples = data_provider.provide_data(
+ 'test', batch_size, dataset_dir)
+ self.assertEqual(num_samples, 10000)
+
+ with self.test_session() as sess:
+ with tf.contrib.slim.queues.QueueRunners(sess):
+ images, labels = sess.run([images, labels])
+ self.assertEqual(images.shape, (batch_size, 28, 28, 1))
+ self.assertEqual(labels.shape, (batch_size, 10))
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/eval.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/eval.py"
new file mode 100644
index 00000000..b7f50f3a
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/eval.py"
@@ -0,0 +1,104 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Evaluates a TFGAN trained MNIST model."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+
+from absl import app
+from absl import flags
+import tensorflow as tf
+
+import data_provider
+import networks
+import util
+
+FLAGS = flags.FLAGS
+tfgan = tf.contrib.gan
+
+
+flags.DEFINE_string('checkpoint_dir', '/tmp/mnist/',
+ 'Directory where the model was written to.')
+
+flags.DEFINE_string('eval_dir', '/tmp/mnist/',
+ 'Directory where the results are saved to.')
+
+flags.DEFINE_string('dataset_dir', None, 'Location of data.')
+
+flags.DEFINE_integer('num_images_generated', 1000,
+ 'Number of images to generate at once.')
+
+flags.DEFINE_boolean('eval_real_images', False,
+ 'If `True`, run the MNIST classifier on real images.')
+
+flags.DEFINE_integer('noise_dims', 64,
+ 'Dimensions of the generator noise vector')
+
+flags.DEFINE_string('classifier_filename', None,
+ 'Location of the pretrained classifier. If `None`, use '
+ 'default.')
+
+flags.DEFINE_integer('max_number_of_evaluations', None,
+ 'Number of times to run evaluation. If `None`, run '
+ 'forever.')
+
+flags.DEFINE_boolean('write_to_disk', True, 'If `True`, write images to disk.')
+
+
+def main(_, run_eval_loop=True):
+ # Fetch real images.
+ with tf.name_scope('inputs'):
+ real_images, _, _ = data_provider.provide_data(
+ 'train', FLAGS.num_images_generated, FLAGS.dataset_dir)
+
+ image_write_ops = None
+ if FLAGS.eval_real_images:
+ tf.summary.scalar('MNIST_Classifier_score',
+ util.mnist_score(real_images, FLAGS.classifier_filename))
+ else:
+ # In order for variables to load, use the same variable scope as in the
+ # train job.
+ with tf.variable_scope('Generator'):
+ images = networks.unconditional_generator(
+ tf.random_normal([FLAGS.num_images_generated, FLAGS.noise_dims]),
+ is_training=False)
+ tf.summary.scalar('MNIST_Frechet_distance',
+ util.mnist_frechet_distance(
+ real_images, images, FLAGS.classifier_filename))
+ tf.summary.scalar('MNIST_Classifier_score',
+ util.mnist_score(images, FLAGS.classifier_filename))
+ if FLAGS.num_images_generated >= 100 and FLAGS.write_to_disk:
+ reshaped_images = tfgan.eval.image_reshaper(
+ images[:100, ...], num_cols=10)
+ uint8_images = data_provider.float_image_to_uint8(reshaped_images)
+ image_write_ops = tf.write_file(
+ '%s/%s'% (FLAGS.eval_dir, 'unconditional_gan.png'),
+ tf.image.encode_png(uint8_images[0]))
+
+ # For unit testing, use `run_eval_loop=False`.
+ if not run_eval_loop: return
+ tf.contrib.training.evaluate_repeatedly(
+ FLAGS.checkpoint_dir,
+ hooks=[tf.contrib.training.SummaryAtEndHook(FLAGS.eval_dir),
+ tf.contrib.training.StopAfterNEvalsHook(1)],
+ eval_ops=image_write_ops,
+ max_number_of_evaluations=FLAGS.max_number_of_evaluations)
+
+
+if __name__ == '__main__':
+ app.run(main)
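`tfgan.eval.image_reshaper` tiles the first 100 generated digits into a single 10x10 grid image before PNG encoding. A rough numpy equivalent of that tiling, shown only to make the shapes concrete (the real TFGAN implementation may differ in details):

```python
import numpy as np

def tile_grid(images, num_cols):
    """Tile [N, H, W, C] images into one [rows*H, num_cols*W, C] grid."""
    n, h, w, c = images.shape
    rows = n // num_cols
    grid = images[:rows * num_cols].reshape(rows, num_cols, h, w, c)
    # Interleave so each grid row contains num_cols images side by side.
    return grid.transpose(0, 2, 1, 3, 4).reshape(rows * h, num_cols * w, c)

batch = np.zeros([100, 28, 28, 1], dtype=np.float32)
print(tile_grid(batch, num_cols=10).shape)  # (280, 280, 1)
```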
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/eval_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/eval_test.py"
new file mode 100644
index 00000000..1a7d4ca8
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/eval_test.py"
@@ -0,0 +1,40 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for tfgan.examples.mnist.eval."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+
+from absl import flags
+from absl.testing import absltest
+from absl.testing import parameterized
+import eval # pylint:disable=redefined-builtin
+
+
+class EvalTest(parameterized.TestCase):
+
+ @parameterized.named_parameters(
+ ('RealData', True),
+ ('GeneratedData', False))
+ def test_build_graph(self, eval_real_images):
+ flags.FLAGS.eval_real_images = eval_real_images
+ eval.main(None, run_eval_loop=False)
+
+
+if __name__ == '__main__':
+ absltest.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/infogan_eval.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/infogan_eval.py"
new file mode 100644
index 00000000..ef2eae01
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/infogan_eval.py"
@@ -0,0 +1,160 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Evaluates an InfoGAN TFGAN trained MNIST model.
+
+The image visualizations, as in https://arxiv.org/abs/1606.03657, show the
+effect of varying a specific latent variable on the image. Each visualization
+focuses on one of the three structured variables. Columns have two of the three
+variables fixed, while the third one is varied. Different rows have different
+random samples from the remaining latents.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+from absl import app
+from absl import flags
+import numpy as np
+
+import tensorflow as tf
+
+import data_provider
+import networks
+import util
+
+tfgan = tf.contrib.gan
+
+
+flags.DEFINE_string('checkpoint_dir', '/tmp/mnist/',
+ 'Directory where the model was written to.')
+
+flags.DEFINE_string('eval_dir', '/tmp/mnist/',
+ 'Directory where the results are saved to.')
+
+flags.DEFINE_integer('noise_samples', 6,
+ 'Number of samples to draw from the continuous structured '
+ 'noise.')
+
+flags.DEFINE_integer('unstructured_noise_dims', 62,
+ 'The number of dimensions of the unstructured noise.')
+
+flags.DEFINE_integer('continuous_noise_dims', 2,
+ 'The number of dimensions of the continuous noise.')
+
+flags.DEFINE_string('classifier_filename', None,
+ 'Location of the pretrained classifier. If `None`, use '
+ 'default.')
+
+flags.DEFINE_integer('max_number_of_evaluations', None,
+ 'Number of times to run evaluation. If `None`, run '
+ 'forever.')
+
+flags.DEFINE_boolean('write_to_disk', True, 'If `True`, write images to disk.')
+
+CAT_SAMPLE_POINTS = np.arange(0, 10)
+CONT_SAMPLE_POINTS = np.linspace(-2.0, 2.0, 10)
+FLAGS = flags.FLAGS
+
+
+def main(_, run_eval_loop=True):
+ with tf.name_scope('inputs'):
+ noise_args = (FLAGS.noise_samples, CAT_SAMPLE_POINTS, CONT_SAMPLE_POINTS,
+ FLAGS.unstructured_noise_dims, FLAGS.continuous_noise_dims)
+ # Use fixed noise vectors to illustrate the effect of each dimension.
+ display_noise1 = util.get_eval_noise_categorical(*noise_args)
+ display_noise2 = util.get_eval_noise_continuous_dim1(*noise_args)
+ display_noise3 = util.get_eval_noise_continuous_dim2(*noise_args)
+ _validate_noises([display_noise1, display_noise2, display_noise3])
+
+ # Visualize the effect of each structured noise dimension on the generated
+ # image.
+ def generator_fn(inputs):
+ return networks.infogan_generator(
+ inputs, len(CAT_SAMPLE_POINTS), is_training=False)
+ with tf.variable_scope('Generator') as genscope: # Same scope as in training.
+ categorical_images = generator_fn(display_noise1)
+ reshaped_categorical_img = tfgan.eval.image_reshaper(
+ categorical_images, num_cols=len(CAT_SAMPLE_POINTS))
+ tf.summary.image('categorical', reshaped_categorical_img, max_outputs=1)
+
+ with tf.variable_scope(genscope, reuse=True):
+ continuous1_images = generator_fn(display_noise2)
+ reshaped_continuous1_img = tfgan.eval.image_reshaper(
+ continuous1_images, num_cols=len(CONT_SAMPLE_POINTS))
+ tf.summary.image('continuous1', reshaped_continuous1_img, max_outputs=1)
+
+ with tf.variable_scope(genscope, reuse=True):
+ continuous2_images = generator_fn(display_noise3)
+ reshaped_continuous2_img = tfgan.eval.image_reshaper(
+ continuous2_images, num_cols=len(CONT_SAMPLE_POINTS))
+ tf.summary.image('continuous2', reshaped_continuous2_img, max_outputs=1)
+
+ # Evaluate image quality.
+ all_images = tf.concat(
+ [categorical_images, continuous1_images, continuous2_images], 0)
+ tf.summary.scalar('MNIST_Classifier_score',
+ util.mnist_score(all_images, FLAGS.classifier_filename))
+
+ # Write images to disk.
+ image_write_ops = []
+ if FLAGS.write_to_disk:
+ image_write_ops.append(_get_write_image_ops(
+ FLAGS.eval_dir, 'categorical_infogan.png', reshaped_categorical_img[0]))
+ image_write_ops.append(_get_write_image_ops(
+ FLAGS.eval_dir, 'continuous1_infogan.png', reshaped_continuous1_img[0]))
+ image_write_ops.append(_get_write_image_ops(
+ FLAGS.eval_dir, 'continuous2_infogan.png', reshaped_continuous2_img[0]))
+
+ # For unit testing, use `run_eval_loop=False`.
+ if not run_eval_loop: return
+ tf.contrib.training.evaluate_repeatedly(
+ FLAGS.checkpoint_dir,
+ hooks=[tf.contrib.training.SummaryAtEndHook(FLAGS.eval_dir),
+ tf.contrib.training.StopAfterNEvalsHook(1)],
+ eval_ops=image_write_ops,
+ max_number_of_evaluations=FLAGS.max_number_of_evaluations)
+
+
+def _validate_noises(noises):
+ """Sanity check on constructed noise tensors.
+
+ Args:
+ noises: List of 3-tuples of noise vectors.
+ """
+ assert isinstance(noises, (list, tuple))
+ for noise_l in noises:
+ assert len(noise_l) == 3
+ assert isinstance(noise_l[0], np.ndarray)
+ batch_dim = noise_l[0].shape[0]
+ for i, noise in enumerate(noise_l):
+ assert isinstance(noise, np.ndarray)
+ # Check that batch dimensions are all the same.
+ assert noise.shape[0] == batch_dim
+
+ # Check that shapes for corresponding noises are the same.
+ assert noise.shape == noises[0][i].shape
+
+
+def _get_write_image_ops(eval_dir, filename, images):
+ """Create Ops that write images to disk."""
+ return tf.write_file(
+ '%s/%s'% (eval_dir, filename),
+ tf.image.encode_png(data_provider.float_image_to_uint8(images)))
+
+
+if __name__ == '__main__':
+ app.run(main)
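In the categorical visualization above, the categorical code sweeps 0-9 along each row (one category per column), while each row reuses a single draw of the unstructured and continuous noise. A small numpy sketch of that layout with toy sizes, mirroring the construction in `util.get_eval_noise_categorical` (illustrative only):

```python
import numpy as np

rows, num_categories, unstructured_dims = 2, 10, 62  # toy sizes

# Category varies across columns and repeats identically for every row.
cat_noise = np.tile(np.arange(num_categories), rows)   # shape (20,)

# Unstructured noise: one draw per row, repeated across that row's columns.
unstructured = np.repeat(np.random.normal(size=[rows, unstructured_dims]),
                         num_categories, axis=0)        # shape (20, 62)
print(cat_noise[:10], unstructured.shape)
```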
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/infogan_eval_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/infogan_eval_test.py"
new file mode 100644
index 00000000..741465fc
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/infogan_eval_test.py"
@@ -0,0 +1,33 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for tfgan.examples.mnist.infogan_eval."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+from absl.testing import absltest
+import infogan_eval
+
+
+class MnistInfoGANEvalTest(absltest.TestCase):
+
+ def test_build_graph(self):
+ infogan_eval.main(None, run_eval_loop=False)
+
+
+if __name__ == '__main__':
+ absltest.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/launch_jobs.sh" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/launch_jobs.sh"
new file mode 100644
index 00000000..8bccb12f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/launch_jobs.sh"
@@ -0,0 +1,157 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+#!/bin/bash
+#
+# This script performs the following operations:
+# 1. Downloads the MNIST dataset.
+# 2. Trains an unconditional, conditional, or InfoGAN model on the MNIST
+# training set.
+# 3. Evaluates the models and writes sample images to disk.
+#
+# These examples are intended to be fast. For better final results, tune
+# hyperparameters or train longer.
+#
+# NOTE: Each training step takes about 0.5 second with a batch size of 32 on
+# CPU. On GPU, it takes ~5 milliseconds.
+#
+# With the default batch size and number of steps, train times are:
+#
+# unconditional: CPU: 800 steps, ~10 min GPU: 800 steps, ~1 min
+# conditional: CPU: 2000 steps, ~20 min GPU: 2000 steps, ~2 min
+# infogan: CPU: 3000 steps, ~20 min GPU: 3000 steps, ~6 min
+#
+# Usage:
+# cd models/research/gan/mnist
+# ./launch_jobs.sh ${gan_type} ${git_repo}
+set -e
+
+# Type of GAN to run. Right now, options are `unconditional`, `conditional`, or
+# `infogan`.
+gan_type=$1
+if ! [[ "$gan_type" =~ ^(unconditional|conditional|infogan) ]]; then
+ echo "'gan_type' must be one of: 'unconditional', 'conditional', 'infogan'."
+ exit
+fi
+
+# Location of the git repository.
+git_repo=$2
+if [[ "$git_repo" == "" ]]; then
+ echo "'git_repo' must not be empty."
+ exit 1
+fi
+
+# Base name for where the checkpoint and logs will be saved to.
+TRAIN_DIR=/tmp/mnist-model
+
+# Base name for where the evaluation images will be saved to.
+EVAL_DIR=/tmp/mnist-model/eval
+
+# Where the dataset is saved to.
+DATASET_DIR=/tmp/mnist-data
+
+# Location of the classifier frozen graph used for evaluation.
+FROZEN_GRAPH="${git_repo}/research/gan/mnist/data/classify_mnist_graph_def.pb"
+
+export PYTHONPATH=$PYTHONPATH:$git_repo:$git_repo/research:$git_repo/research/slim
+
+# A helper function for printing pretty output.
+Banner () {
+ local text=$1
+ local green='\033[0;32m'
+ local nc='\033[0m' # No color.
+ echo -e "${green}${text}${nc}"
+}
+
+# Download the dataset.
+python "${git_repo}/research/slim/download_and_convert_data.py" \
+ --dataset_name=mnist \
+ --dataset_dir=${DATASET_DIR}
+
+# Run unconditional GAN.
+if [[ "$gan_type" == "unconditional" ]]; then
+ UNCONDITIONAL_TRAIN_DIR="${TRAIN_DIR}/unconditional"
+ UNCONDITIONAL_EVAL_DIR="${EVAL_DIR}/unconditional"
+ NUM_STEPS=3000
+ # Run training.
+ Banner "Starting training unconditional GAN for ${NUM_STEPS} steps..."
+ python "${git_repo}/research/gan/mnist/train.py" \
+ --train_log_dir=${UNCONDITIONAL_TRAIN_DIR} \
+ --dataset_dir=${DATASET_DIR} \
+ --max_number_of_steps=${NUM_STEPS} \
+ --gan_type="unconditional" \
+ --alsologtostderr
+ Banner "Finished training unconditional GAN ${NUM_STEPS} steps."
+
+ # Run evaluation.
+ Banner "Starting evaluation of unconditional GAN..."
+ python "${git_repo}/research/gan/mnist/eval.py" \
+ --checkpoint_dir=${UNCONDITIONAL_TRAIN_DIR} \
+ --eval_dir=${UNCONDITIONAL_EVAL_DIR} \
+ --dataset_dir=${DATASET_DIR} \
+ --eval_real_images=false \
+ --classifier_filename=${FROZEN_GRAPH} \
+ --max_number_of_evaluations=1
+ Banner "Finished unconditional evaluation. See ${UNCONDITIONAL_EVAL_DIR} for output images."
+fi
+
+# Run conditional GAN.
+if [[ "$gan_type" == "conditional" ]]; then
+ CONDITIONAL_TRAIN_DIR="${TRAIN_DIR}/conditional"
+ CONDITIONAL_EVAL_DIR="${EVAL_DIR}/conditional"
+ NUM_STEPS=3000
+ # Run training.
+ Banner "Starting training conditional GAN for ${NUM_STEPS} steps..."
+ python "${git_repo}/research/gan/mnist/train.py" \
+ --train_log_dir=${CONDITIONAL_TRAIN_DIR} \
+ --dataset_dir=${DATASET_DIR} \
+ --max_number_of_steps=${NUM_STEPS} \
+ --gan_type="conditional" \
+ --alsologtostderr
+ Banner "Finished training conditional GAN ${NUM_STEPS} steps."
+
+ # Run evaluation.
+ Banner "Starting evaluation of conditional GAN..."
+ python "${git_repo}/research/gan/mnist/conditional_eval.py" \
+ --checkpoint_dir=${CONDITIONAL_TRAIN_DIR} \
+ --eval_dir=${CONDITIONAL_EVAL_DIR} \
+ --classifier_filename=${FROZEN_GRAPH} \
+ --max_number_of_evaluations=1
+ Banner "Finished conditional evaluation. See ${CONDITIONAL_EVAL_DIR} for output images."
+fi
+
+# Run InfoGAN.
+if [[ "$gan_type" == "infogan" ]]; then
+ INFOGAN_TRAIN_DIR="${TRAIN_DIR}/infogan"
+ INFOGAN_EVAL_DIR="${EVAL_DIR}/infogan"
+ NUM_STEPS=3000
+ # Run training.
+ Banner "Starting training infogan GAN for ${NUM_STEPS} steps..."
+ python "${git_repo}/research/gan/mnist/train.py" \
+ --train_log_dir=${INFOGAN_TRAIN_DIR} \
+ --dataset_dir=${DATASET_DIR} \
+ --max_number_of_steps=${NUM_STEPS} \
+ --gan_type="infogan" \
+ --alsologtostderr
+ Banner "Finished training InfoGAN ${NUM_STEPS} steps."
+
+ # Run evaluation.
+ Banner "Starting evaluation of infogan..."
+ python "${git_repo}/research/gan/mnist/infogan_eval.py" \
+ --checkpoint_dir=${INFOGAN_TRAIN_DIR} \
+ --eval_dir=${INFOGAN_EVAL_DIR} \
+ --classifier_filename=${FROZEN_GRAPH} \
+ --max_number_of_evaluations=1
+ Banner "Finished InfoGAN evaluation. See ${INFOGAN_EVAL_DIR} for output images."
+fi
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/networks.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/networks.py"
new file mode 100644
index 00000000..72f78c83
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/networks.py"
@@ -0,0 +1,235 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Networks for MNIST example using TFGAN."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+ds = tf.contrib.distributions
+layers = tf.contrib.layers
+tfgan = tf.contrib.gan
+
+
+def _generator_helper(
+ noise, is_conditional, one_hot_labels, weight_decay, is_training):
+ """Core MNIST generator.
+
+ This function is reused between the different GAN modes (unconditional,
+ conditional, etc).
+
+ Args:
+ noise: A 2D Tensor of shape [batch size, noise dim].
+ is_conditional: Whether to condition on labels.
+ one_hot_labels: Optional labels for conditioning.
+ weight_decay: The value of the l2 weight decay.
+ is_training: If `True`, batch norm uses batch statistics. If `False`, batch
+ norm uses the exponential moving average collected from population
+ statistics.
+
+ Returns:
+ A generated image in the range [-1, 1].
+ """
+ with tf.contrib.framework.arg_scope(
+ [layers.fully_connected, layers.conv2d_transpose],
+ activation_fn=tf.nn.relu, normalizer_fn=layers.batch_norm,
+ weights_regularizer=layers.l2_regularizer(weight_decay)):
+ with tf.contrib.framework.arg_scope(
+ [layers.batch_norm], is_training=is_training):
+ net = layers.fully_connected(noise, 1024)
+ if is_conditional:
+ net = tfgan.features.condition_tensor_from_onehot(net, one_hot_labels)
+ net = layers.fully_connected(net, 7 * 7 * 128)
+ net = tf.reshape(net, [-1, 7, 7, 128])
+ net = layers.conv2d_transpose(net, 64, [4, 4], stride=2)
+ net = layers.conv2d_transpose(net, 32, [4, 4], stride=2)
+ # Make sure that generator output is in the same range as `inputs`
+ # ie [-1, 1].
+ net = layers.conv2d(
+ net, 1, [4, 4], normalizer_fn=None, activation_fn=tf.tanh)
+
+ return net
+
+
+def unconditional_generator(noise, weight_decay=2.5e-5, is_training=True):
+ """Generator to produce unconditional MNIST images.
+
+ Args:
+ noise: A single Tensor representing noise.
+ weight_decay: The value of the l2 weight decay.
+ is_training: If `True`, batch norm uses batch statistics. If `False`, batch
+ norm uses the exponential moving average collected from population
+ statistics.
+
+ Returns:
+ A generated image in the range [-1, 1].
+ """
+ return _generator_helper(noise, False, None, weight_decay, is_training)
+
+
+def conditional_generator(inputs, weight_decay=2.5e-5, is_training=True):
+ """Generator to produce MNIST images conditioned on class.
+
+ Args:
+ inputs: A 2-tuple of Tensors (noise, one_hot_labels).
+ weight_decay: The value of the l2 weight decay.
+ is_training: If `True`, batch norm uses batch statistics. If `False`, batch
+ norm uses the exponential moving average collected from population
+ statistics.
+
+ Returns:
+ A generated image in the range [-1, 1].
+ """
+ noise, one_hot_labels = inputs
+ return _generator_helper(
+ noise, True, one_hot_labels, weight_decay, is_training)
+
+
+def infogan_generator(inputs, categorical_dim, weight_decay=2.5e-5,
+ is_training=True):
+ """InfoGAN generator network on MNIST digits.
+
+ Based on the InfoGAN paper (https://arxiv.org/abs/1606.03657), the authors'
+ code (https://github.com/openai/InfoGAN), and code by pooleb@.
+
+ Args:
+ inputs: A 3-tuple of Tensors (unstructured_noise, categorical structured
+ noise, continuous structured noise). `inputs[0]` and `inputs[2]` must be
+ 2D, and `inputs[1]` must be 1D. All must have the same first dimension.
+ categorical_dim: Dimensions of the incompressible categorical noise.
+ weight_decay: The value of the l2 weight decay.
+ is_training: If `True`, batch norm uses batch statistics. If `False`, batch
+ norm uses the exponential moving average collected from population
+ statistics.
+
+ Returns:
+ A generated image in the range [-1, 1].
+ """
+ unstructured_noise, cat_noise, cont_noise = inputs
+ cat_noise_onehot = tf.one_hot(cat_noise, categorical_dim)
+ all_noise = tf.concat(
+ [unstructured_noise, cat_noise_onehot, cont_noise], axis=1)
+ return _generator_helper(all_noise, False, None, weight_decay, is_training)
+
+
+_leaky_relu = lambda x: tf.nn.leaky_relu(x, alpha=0.01)
+
+
+def _discriminator_helper(img, is_conditional, one_hot_labels, weight_decay):
+ """Core MNIST discriminator.
+
+ This function is reused between the different GAN modes (unconditional,
+ conditional, etc).
+
+ Args:
+ img: Real or generated MNIST digits. Should be in the range [-1, 1].
+ is_conditional: Whether to condition on labels.
+ one_hot_labels: Labels to optionally condition the network on.
+ weight_decay: The L2 weight decay.
+
+ Returns:
+ Final fully connected discriminator layer. [batch_size, 1024].
+ """
+ with tf.contrib.framework.arg_scope(
+ [layers.conv2d, layers.fully_connected],
+ activation_fn=_leaky_relu, normalizer_fn=None,
+ weights_regularizer=layers.l2_regularizer(weight_decay),
+ biases_regularizer=layers.l2_regularizer(weight_decay)):
+ net = layers.conv2d(img, 64, [4, 4], stride=2)
+ net = layers.conv2d(net, 128, [4, 4], stride=2)
+ net = layers.flatten(net)
+ if is_conditional:
+ net = tfgan.features.condition_tensor_from_onehot(net, one_hot_labels)
+ net = layers.fully_connected(net, 1024, normalizer_fn=layers.layer_norm)
+
+ return net
+
+
+def unconditional_discriminator(img, unused_conditioning, weight_decay=2.5e-5):
+ """Discriminator network on unconditional MNIST digits.
+
+ Args:
+ img: Real or generated MNIST digits. Should be in the range [-1, 1].
+ unused_conditioning: The TFGAN API can help with conditional GANs, which
+ would require extra `condition` information to both the generator and the
+ discriminator. Since this example is not conditional, we do not use this
+ argument.
+ weight_decay: The L2 weight decay.
+
+ Returns:
+ Logits for the probability that the image is real.
+ """
+ net = _discriminator_helper(img, False, None, weight_decay)
+ return layers.linear(net, 1)
+
+
+def conditional_discriminator(img, conditioning, weight_decay=2.5e-5):
+ """Conditional discriminator network on MNIST digits.
+
+ Args:
+ img: Real or generated MNIST digits. Should be in the range [-1, 1].
+ conditioning: A 2-tuple of Tensors representing (noise, one_hot_labels).
+ weight_decay: The L2 weight decay.
+
+ Returns:
+ Logits for the probability that the image is real.
+ """
+ _, one_hot_labels = conditioning
+ net = _discriminator_helper(img, True, one_hot_labels, weight_decay)
+ return layers.linear(net, 1)
+
+
+def infogan_discriminator(img, unused_conditioning, weight_decay=2.5e-5,
+ categorical_dim=10, continuous_dim=2):
+ """InfoGAN discriminator network on MNIST digits.
+
+ Based on the InfoGAN paper (https://arxiv.org/abs/1606.03657), the authors'
+ code (https://github.com/openai/InfoGAN), and code by pooleb@.
+
+ Args:
+ img: Real or generated MNIST digits. Should be in the range [-1, 1].
+ unused_conditioning: The TFGAN API can help with conditional GANs, which
+ would require extra `condition` information to both the generator and the
+ discriminator. Since this example is not conditional, we do not use this
+ argument.
+ weight_decay: The L2 weight decay.
+ categorical_dim: Dimensions of the incompressible categorical noise.
+ continuous_dim: Dimensions of the incompressible continuous noise.
+
+ Returns:
+ Logits for the probability that the image is real, and a list of posterior
+ distributions for each of the noise vectors.
+ """
+ net = _discriminator_helper(img, False, None, weight_decay)
+ logits_real = layers.fully_connected(net, 1, activation_fn=None)
+
+ # Recognition network for latent variables has an additional layer
+ with tf.contrib.framework.arg_scope([layers.batch_norm], is_training=False):
+ encoder = layers.fully_connected(net, 128, normalizer_fn=layers.batch_norm,
+ activation_fn=_leaky_relu)
+
+ # Compute logits for each category of categorical latent.
+ logits_cat = layers.fully_connected(
+ encoder, categorical_dim, activation_fn=None)
+ q_cat = ds.Categorical(logits_cat)
+
+ # Compute mean for Gaussian posterior of continuous latents.
+ mu_cont = layers.fully_connected(encoder, continuous_dim, activation_fn=None)
+ sigma_cont = tf.ones_like(mu_cont)
+ q_cont = ds.Normal(loc=mu_cont, scale=sigma_cont)
+
+ return logits_real, [q_cat, q_cont]
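A minimal graph-construction sanity check for these networks, assuming a TF 1.x environment where `tf.contrib` (and therefore this `networks` module) still imports. The output shapes follow from the two stride-2 transposed convolutions upsampling 7x7 feature maps to 28x28:

```python
import tensorflow as tf

import networks  # the module defined above

noise = tf.random_normal([16, 64])
with tf.variable_scope('Generator'):
    fake_images = networks.unconditional_generator(noise, is_training=False)
print(fake_images.shape)  # (16, 28, 28, 1)

with tf.variable_scope('Discriminator'):
    logits = networks.unconditional_discriminator(fake_images, None)
print(logits.shape)  # (16, 1)
```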
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/train.py"
new file mode 100644
index 00000000..b51f2812
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/train.py"
@@ -0,0 +1,151 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Trains a generator on MNIST data."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import functools
+
+
+from absl import flags
+from absl import logging
+import tensorflow as tf
+
+import data_provider
+import networks
+import util
+
+tfgan = tf.contrib.gan
+
+
+flags.DEFINE_integer('batch_size', 32, 'The number of images in each batch.')
+
+flags.DEFINE_string('train_log_dir', '/tmp/mnist/',
+ 'Directory where to write event logs.')
+
+flags.DEFINE_string('dataset_dir', None, 'Location of data.')
+
+flags.DEFINE_integer('max_number_of_steps', 20000,
+ 'The maximum number of gradient steps.')
+
+flags.DEFINE_string(
+ 'gan_type', 'unconditional',
+ 'Either `unconditional`, `conditional`, or `infogan`.')
+
+flags.DEFINE_integer(
+ 'grid_size', 5, 'Grid size for image visualization.')
+
+
+flags.DEFINE_integer(
+ 'noise_dims', 64, 'Dimensions of the generator noise vector.')
+
+FLAGS = flags.FLAGS
+
+
+def _learning_rate(gan_type):
+ # First is generator learning rate, second is discriminator learning rate.
+ return {
+ 'unconditional': (1e-3, 1e-4),
+ 'conditional': (1e-5, 1e-4),
+ 'infogan': (1e-3, 9e-5),
+ }[gan_type]
+
+
+def main(_):
+ if not tf.gfile.Exists(FLAGS.train_log_dir):
+ tf.gfile.MakeDirs(FLAGS.train_log_dir)
+
+ # Force all input processing onto CPU in order to reserve the GPU for
+ # the forward inference and back-propagation.
+ with tf.name_scope('inputs'):
+ with tf.device('/cpu:0'):
+ images, one_hot_labels, _ = data_provider.provide_data(
+ 'train', FLAGS.batch_size, FLAGS.dataset_dir, num_threads=4)
+
+ # Define the GANModel tuple. Optionally, condition the GAN on the label or
+ # use an InfoGAN to learn a latent representation.
+ if FLAGS.gan_type == 'unconditional':
+ gan_model = tfgan.gan_model(
+ generator_fn=networks.unconditional_generator,
+ discriminator_fn=networks.unconditional_discriminator,
+ real_data=images,
+ generator_inputs=tf.random_normal(
+ [FLAGS.batch_size, FLAGS.noise_dims]))
+ elif FLAGS.gan_type == 'conditional':
+ noise = tf.random_normal([FLAGS.batch_size, FLAGS.noise_dims])
+ gan_model = tfgan.gan_model(
+ generator_fn=networks.conditional_generator,
+ discriminator_fn=networks.conditional_discriminator,
+ real_data=images,
+ generator_inputs=(noise, one_hot_labels))
+ elif FLAGS.gan_type == 'infogan':
+ cat_dim, cont_dim = 10, 2
+ generator_fn = functools.partial(
+ networks.infogan_generator, categorical_dim=cat_dim)
+ discriminator_fn = functools.partial(
+ networks.infogan_discriminator, categorical_dim=cat_dim,
+ continuous_dim=cont_dim)
+ unstructured_inputs, structured_inputs = util.get_infogan_noise(
+ FLAGS.batch_size, cat_dim, cont_dim, FLAGS.noise_dims)
+ gan_model = tfgan.infogan_model(
+ generator_fn=generator_fn,
+ discriminator_fn=discriminator_fn,
+ real_data=images,
+ unstructured_generator_inputs=unstructured_inputs,
+ structured_generator_inputs=structured_inputs)
+ tfgan.eval.add_gan_model_image_summaries(gan_model, FLAGS.grid_size)
+
+ # Get the GANLoss tuple. You can pass a custom function, use one of the
+ # already-implemented losses from the losses library, or use the defaults.
+ with tf.name_scope('loss'):
+ mutual_information_penalty_weight = (1.0 if FLAGS.gan_type == 'infogan'
+ else 0.0)
+ gan_loss = tfgan.gan_loss(
+ gan_model,
+ gradient_penalty_weight=1.0,
+ mutual_information_penalty_weight=mutual_information_penalty_weight,
+ add_summaries=True)
+ tfgan.eval.add_regularization_loss_summaries(gan_model)
+
+ # Get the GANTrain ops using custom optimizers.
+ with tf.name_scope('train'):
+ gen_lr, dis_lr = _learning_rate(FLAGS.gan_type)
+ train_ops = tfgan.gan_train_ops(
+ gan_model,
+ gan_loss,
+ generator_optimizer=tf.train.AdamOptimizer(gen_lr, 0.5),
+ discriminator_optimizer=tf.train.AdamOptimizer(dis_lr, 0.5),
+ summarize_gradients=True,
+ aggregation_method=tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N)
+
+ # Run the alternating training loop. Skip it if no steps should be taken
+ # (used for graph construction tests).
+ status_message = tf.string_join(
+ ['Starting train step: ',
+ tf.as_string(tf.train.get_or_create_global_step())],
+ name='status_message')
+ if FLAGS.max_number_of_steps == 0: return
+ tfgan.gan_train(
+ train_ops,
+ hooks=[tf.train.StopAtStepHook(num_steps=FLAGS.max_number_of_steps),
+ tf.train.LoggingTensorHook([status_message], every_n_iter=10)],
+ logdir=FLAGS.train_log_dir,
+ get_hooks_fn=tfgan.get_joint_train_hooks())
+
+if __name__ == '__main__':
+ logging.set_verbosity(logging.INFO)
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/train_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/train_test.py"
new file mode 100644
index 00000000..a75f858b
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/train_test.py"
@@ -0,0 +1,71 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for mnist.train."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+from absl import flags
+from absl.testing import parameterized
+import numpy as np
+
+import tensorflow as tf
+import train
+
+FLAGS = flags.FLAGS
+mock = tf.test.mock
+
+
+class TrainTest(tf.test.TestCase, parameterized.TestCase):
+
+ @mock.patch.object(train, 'data_provider', autospec=True)
+ def test_run_one_train_step(self, mock_data_provider):
+ FLAGS.max_number_of_steps = 1
+ FLAGS.gan_type = 'unconditional'
+ FLAGS.batch_size = 5
+ FLAGS.grid_size = 1
+ tf.set_random_seed(1234)
+
+ # Mock input pipeline.
+ mock_imgs = np.zeros([FLAGS.batch_size, 28, 28, 1], dtype=np.float32)
+ mock_lbls = np.concatenate(
+ (np.ones([FLAGS.batch_size, 1], dtype=np.int32),
+ np.zeros([FLAGS.batch_size, 9], dtype=np.int32)), axis=1)
+ mock_data_provider.provide_data.return_value = (mock_imgs, mock_lbls, None)
+
+ train.main(None)
+
+ @parameterized.named_parameters(
+ ('Unconditional', 'unconditional'),
+ ('Conditional', 'conditional'),
+ ('InfoGAN', 'infogan'))
+ def test_build_graph(self, gan_type):
+ FLAGS.max_number_of_steps = 0
+ FLAGS.gan_type = gan_type
+
+ # Mock input pipeline.
+ mock_imgs = np.zeros([FLAGS.batch_size, 28, 28, 1], dtype=np.float32)
+ mock_lbls = np.concatenate(
+ (np.ones([FLAGS.batch_size, 1], dtype=np.int32),
+ np.zeros([FLAGS.batch_size, 9], dtype=np.int32)), axis=1)
+ with mock.patch.object(train, 'data_provider') as mock_data_provider:
+ mock_data_provider.provide_data.return_value = (
+ mock_imgs, mock_lbls, None)
+ train.main(None)
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/util.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/util.py"
new file mode 100644
index 00000000..0cf6b00e
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/util.py"
@@ -0,0 +1,323 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""A utility for evaluating MNIST generative models.
+
+These functions use a pretrained MNIST classifier with ~99% eval accuracy to
+measure various aspects of the quality of generated MNIST digits.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import numpy as np
+from six.moves import xrange # pylint: disable=redefined-builtin
+import tensorflow as tf
+
+ds = tf.contrib.distributions
+tfgan = tf.contrib.gan
+
+
+__all__ = [
+ 'mnist_score',
+ 'mnist_frechet_distance',
+ 'mnist_cross_entropy',
+ 'get_eval_noise_categorical',
+ 'get_eval_noise_continuous_dim1',
+ 'get_eval_noise_continuous_dim2',
+ 'get_infogan_noise',
+]
+
+
+# Prepend `../`, since paths start from `third_party/tensorflow`.
+MODEL_GRAPH_DEF = '../tensorflow_models/gan/mnist/data/classify_mnist_graph_def.pb'
+INPUT_TENSOR = 'inputs:0'
+OUTPUT_TENSOR = 'logits:0'
+
+
+def mnist_score(images, graph_def_filename=None, input_tensor=INPUT_TENSOR,
+ output_tensor=OUTPUT_TENSOR, num_batches=1):
+ """Get MNIST logits of a fully-trained classifier.
+
+ Args:
+ images: A minibatch tensor of MNIST digits. Shape must be
+ [batch, 28, 28, 1].
+ graph_def_filename: Location of a frozen GraphDef binary file on disk. If
+ `None`, uses a default graph.
+ input_tensor: GraphDef's input tensor name.
+ output_tensor: GraphDef's output tensor name.
+ num_batches: Number of batches to split `images` into in order to run them
+ through the classifier network efficiently.
+
+ Returns:
+ A logits tensor of [batch, 10].
+ """
+ images.shape.assert_is_compatible_with([None, 28, 28, 1])
+
+ graph_def = _graph_def_from_par_or_disk(graph_def_filename)
+ mnist_classifier_fn = lambda x: tfgan.eval.run_image_classifier( # pylint: disable=g-long-lambda
+ x, graph_def, input_tensor, output_tensor)
+
+ score = tfgan.eval.classifier_score(
+ images, mnist_classifier_fn, num_batches)
+ score.shape.assert_is_compatible_with([])
+
+ return score
+
+
+def mnist_frechet_distance(real_images, generated_images,
+ graph_def_filename=None, input_tensor=INPUT_TENSOR,
+ output_tensor=OUTPUT_TENSOR, num_batches=1):
+ """Frechet distance between real and generated images.
+
+ This technique is described in detail in https://arxiv.org/abs/1706.08500.
+ Please see TFGAN for implementation details
+ (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/gan/
+ python/eval/python/classifier_metrics_impl.py).
+
+ Args:
+ real_images: Real images to use to compute Frechet Inception distance.
+ generated_images: Generated images to use to compute Frechet Inception
+ distance.
+ graph_def_filename: Location of a frozen GraphDef binary file on disk. If
+ `None`, uses a default graph.
+ input_tensor: GraphDef's input tensor name.
+ output_tensor: GraphDef's output tensor name.
+ num_batches: Number of batches to split images into in order to
+ efficiently run them through the classifier network.
+
+ Returns:
+ The Frechet distance. A floating-point scalar.
+ """
+ real_images.shape.assert_is_compatible_with([None, 28, 28, 1])
+ generated_images.shape.assert_is_compatible_with([None, 28, 28, 1])
+
+ graph_def = _graph_def_from_par_or_disk(graph_def_filename)
+ mnist_classifier_fn = lambda x: tfgan.eval.run_image_classifier( # pylint: disable=g-long-lambda
+ x, graph_def, input_tensor, output_tensor)
+
+ frechet_distance = tfgan.eval.frechet_classifier_distance(
+ real_images, generated_images, mnist_classifier_fn, num_batches)
+ frechet_distance.shape.assert_is_compatible_with([])
+
+ return frechet_distance
+
+
+def mnist_cross_entropy(images, one_hot_labels, graph_def_filename=None,
+ input_tensor=INPUT_TENSOR, output_tensor=OUTPUT_TENSOR):
+ """Returns the cross entropy loss of the classifier on images.
+
+ Args:
+ images: A minibatch tensor of MNIST digits. Shape must be
+ [batch, 28, 28, 1].
+ one_hot_labels: The one hot label of the examples. Tensor size is
+ [batch, 10].
+ graph_def_filename: Location of a frozen GraphDef binary file on disk. If
+ `None`, uses a default graph embedded in the par file.
+ input_tensor: GraphDef's input tensor name.
+ output_tensor: GraphDef's output tensor name.
+
+ Returns:
+ A scalar Tensor representing the cross entropy of the image minibatch.
+ """
+ graph_def = _graph_def_from_par_or_disk(graph_def_filename)
+
+ logits = tfgan.eval.run_image_classifier(
+ images, graph_def, input_tensor, output_tensor)
+ return tf.losses.softmax_cross_entropy(
+ one_hot_labels, logits, loss_collection=None)
+
+
+# TODO(joelshor): Refactor the `eval_noise` functions to reuse code.
+def get_eval_noise_categorical(
+ noise_samples, categorical_sample_points, continuous_sample_points,
+ unstructured_noise_dims, continuous_noise_dims):
+ """Create noise showing impact of categorical noise in InfoGAN.
+
+ Categorical noise varies across columns and is constant across rows; the other
+ noise varies across rows and is constant across columns.
+
+ Args:
+ noise_samples: Number of non-categorical noise samples to use.
+ categorical_sample_points: Possible categorical noise points to sample.
+ continuous_sample_points: Possible continuous noise points to sample.
+ unstructured_noise_dims: Dimensions of the unstructured noise.
+ continuous_noise_dims: Dimensions of the continuous noise.
+
+ Returns:
+ Unstructured noise, categorical noise, continuous noise numpy arrays. Each
+ should have shape [noise_samples, ?].
+ """
+ rows, cols = noise_samples, len(categorical_sample_points)
+
+ # Take random draws for non-categorical noise, making sure they are constant
+ # across columns.
+ unstructured_noise = []
+ for _ in xrange(rows):
+ cur_sample = np.random.normal(size=[1, unstructured_noise_dims])
+ unstructured_noise.extend([cur_sample] * cols)
+ unstructured_noise = np.concatenate(unstructured_noise)
+
+ continuous_noise = []
+ for _ in xrange(rows):
+ cur_sample = np.random.choice(
+ continuous_sample_points, size=[1, continuous_noise_dims])
+ continuous_noise.extend([cur_sample] * cols)
+ continuous_noise = np.concatenate(continuous_noise)
+
+ # Increase categorical noise from left to right, making sure they are constant
+ # across rows.
+ categorical_noise = np.tile(categorical_sample_points, rows)
+
+ return unstructured_noise, categorical_noise, continuous_noise
+
+
+def get_eval_noise_continuous_dim1(
+ noise_samples, categorical_sample_points, continuous_sample_points,
+ unstructured_noise_dims, continuous_noise_dims): # pylint:disable=unused-argument
+ """Create noise showing impact of first dim continuous noise in InfoGAN.
+
+ The first dimension of continuous noise varies across columns and is constant
+ across rows; the other noise varies across rows and is constant across columns.
+
+ Args:
+ noise_samples: Number of non-categorical noise samples to use.
+ categorical_sample_points: Possible categorical noise points to sample.
+ continuous_sample_points: Possible continuous noise points to sample.
+ unstructured_noise_dims: Dimensions of the unstructured noise.
+ continuous_noise_dims: Dimensions of the continuous noise.
+
+ Returns:
+ Unstructured noise, categorical noise, continuous noise numpy arrays.
+ """
+ rows, cols = noise_samples, len(continuous_sample_points)
+
+ # Take random draws for non-first-dim-continuous noise, making sure they are
+ # constant across columns.
+ unstructured_noise = []
+ for _ in xrange(rows):
+ cur_sample = np.random.normal(size=[1, unstructured_noise_dims])
+ unstructured_noise.extend([cur_sample] * cols)
+ unstructured_noise = np.concatenate(unstructured_noise)
+
+ categorical_noise = []
+ for _ in xrange(rows):
+ cur_sample = np.random.choice(categorical_sample_points)
+ categorical_noise.extend([cur_sample] * cols)
+ categorical_noise = np.array(categorical_noise)
+
+ cont_noise_dim2 = []
+ for _ in xrange(rows):
+ cur_sample = np.random.choice(continuous_sample_points, size=[1, 1])
+ cont_noise_dim2.extend([cur_sample] * cols)
+ cont_noise_dim2 = np.concatenate(cont_noise_dim2)
+
+ # Increase first dimension of continuous noise from left to right, making sure
+ # they are constant across rows.
+ cont_noise_dim1 = np.expand_dims(np.tile(continuous_sample_points, rows), 1)
+
+ continuous_noise = np.concatenate((cont_noise_dim1, cont_noise_dim2), 1)
+
+ return unstructured_noise, categorical_noise, continuous_noise
+
+
+def get_eval_noise_continuous_dim2(
+ noise_samples, categorical_sample_points, continuous_sample_points,
+ unstructured_noise_dims, continuous_noise_dims): # pylint:disable=unused-argument
+ """Create noise showing impact of second dim of continuous noise in InfoGAN.
+
+ Second dimension of continuous noise is constant across columns. Other noise
+ is constant across rows.
+
+ Args:
+ noise_samples: Number of non-categorical noise samples to use.
+ categorical_sample_points: Possible categorical noise points to sample.
+ continuous_sample_points: Possible continuous noise points to sample.
+ unstructured_noise_dims: Dimensions of the unstructured noise.
+ continuous_noise_dims: Dimensions of the continuous noise.
+
+ Returns:
+ Unstructured noise, categorical noise, continuous noise numpy arrays.
+ """
+ rows, cols = noise_samples, len(continuous_sample_points)
+
+ # Take random draws for non-second-dim-continuous noise, making sure they are
+ # constant across columns.
+ unstructured_noise = []
+ for _ in xrange(rows):
+ cur_sample = np.random.normal(size=[1, unstructured_noise_dims])
+ unstructured_noise.extend([cur_sample] * cols)
+ unstructured_noise = np.concatenate(unstructured_noise)
+
+ categorical_noise = []
+ for _ in xrange(rows):
+ cur_sample = np.random.choice(categorical_sample_points)
+ categorical_noise.extend([cur_sample] * cols)
+ categorical_noise = np.array(categorical_noise)
+
+ cont_noise_dim1 = []
+ for _ in xrange(rows):
+ cur_sample = np.random.choice(continuous_sample_points, size=[1, 1])
+ cont_noise_dim1.extend([cur_sample] * cols)
+ cont_noise_dim1 = np.concatenate(cont_noise_dim1)
+
+ # Increase second dimension of continuous noise from left to right, making
+ # sure it is constant across rows.
+ cont_noise_dim2 = np.expand_dims(np.tile(continuous_sample_points, rows), 1)
+
+ continuous_noise = np.concatenate((cont_noise_dim1, cont_noise_dim2), 1)
+
+ return unstructured_noise, categorical_noise, continuous_noise
+
+
+def get_infogan_noise(batch_size, categorical_dim, structured_continuous_dim,
+ total_continuous_noise_dims):
+ """Get unstructured and structured noise for InfoGAN.
+
+ Args:
+ batch_size: The number of noise vectors to generate.
+ categorical_dim: The number of categories in the categorical noise.
+ structured_continuous_dim: The number of dimensions of the uniform
+ continuous noise.
+ total_continuous_noise_dims: The number of continuous noise dimensions. This
+ number includes the structured and unstructured noise.
+
+ Returns:
+ A 2-tuple of structured and unstructured noise. First element is the
+ unstructured noise, and the second is a 2-tuple of
+ (categorical structured noise, continuous structured noise).
+ """
+ # Get unstructured noise.
+ unstructured_noise = tf.random_normal(
+ [batch_size, total_continuous_noise_dims - structured_continuous_dim])
+
+ # Get categorical noise Tensor.
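+ # Zero logits give a uniform distribution over the categorical_dim classes.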
+ categorical_dist = ds.Categorical(logits=tf.zeros([categorical_dim]))
+ categorical_noise = categorical_dist.sample([batch_size])
+
+ # Get continuous noise Tensor.
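+ # Structured continuous codes are drawn uniformly from [-1, 1] in each dimension.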
+ continuous_dist = ds.Uniform(-tf.ones([structured_continuous_dim]),
+ tf.ones([structured_continuous_dim]))
+ continuous_noise = continuous_dist.sample([batch_size])
+
+ return [unstructured_noise], [categorical_noise, continuous_noise]
+
+
+def _graph_def_from_par_or_disk(filename):
+ if filename is None:
+ return tfgan.eval.get_graph_def_from_resource(MODEL_GRAPH_DEF)
+ else:
+ return tfgan.eval.get_graph_def_from_disk(filename)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/util_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/util_test.py"
new file mode 100644
index 00000000..53b851b6
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist/util_test.py"
@@ -0,0 +1,229 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for mnist.util."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+
+import tensorflow as tf
+import util
+
+
+# pylint: disable=line-too-long
+# This is a real digit `5` from MNIST.
+REAL_DIGIT = [[-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -0.9765625, -0.859375, -0.859375, -0.859375, -0.015625, 0.0625, 0.3671875, -0.796875, 0.296875, 0.9921875, 0.9296875, -0.0078125, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -0.765625, -0.71875, -0.265625, 0.203125, 0.328125, 0.9765625, 0.9765625, 0.9765625, 0.9765625, 0.9765625, 0.7578125, 0.34375, 0.9765625, 0.890625, 0.5234375, -0.5, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -0.6171875, 0.859375, 0.9765625, 0.9765625, 0.9765625, 0.9765625, 0.9765625, 0.9765625, 0.9765625, 0.9765625, 0.9609375, -0.2734375, -0.359375, -0.359375, -0.5625, -0.6953125, -1.0, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -0.859375, 0.7109375, 0.9765625, 0.9765625, 0.9765625, 0.9765625, 0.9765625, 0.546875, 0.421875, 0.9296875, 0.8828125, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -0.375, 0.21875, -0.1640625, 0.9765625, 0.9765625, 0.6015625, -0.9140625, -1.0, -0.6640625, 0.203125, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -0.890625, -0.9921875, 0.203125, 0.9765625, -0.296875, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, 0.0859375, 0.9765625, 0.484375, -0.984375, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -0.9140625, 0.484375, 0.9765625, -0.453125, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -0.7265625, 0.8828125, 0.7578125, 0.25, -0.15625, -0.9921875, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -0.3671875, 0.875, 0.9765625, 0.9765625, -0.0703125, -0.8046875, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -0.6484375, 0.453125, 0.9765625, 0.9765625, 0.171875, -0.7890625, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -0.875, -0.2734375, 0.96875, 0.9765625, 0.4609375, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, 0.9453125, 0.9765625, 0.9453125, -0.5, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, 
-1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -0.640625, 0.015625, 0.4296875, 0.9765625, 0.9765625, 0.6171875, -0.984375, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -0.6953125, 0.15625, 0.7890625, 0.9765625, 0.9765625, 0.9765625, 0.953125, 0.421875, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -0.8125, -0.109375, 0.7265625, 0.9765625, 0.9765625, 0.9765625, 0.9765625, 0.5703125, -0.390625, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -0.8203125, -0.484375, 0.6640625, 0.9765625, 0.9765625, 0.9765625, 0.9765625, 0.546875, -0.3671875, -0.984375, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -0.859375, 0.3359375, 0.7109375, 0.9765625, 0.9765625, 0.9765625, 0.9765625, 0.5234375, -0.375, -0.9296875, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -0.5703125, 0.34375, 0.765625, 0.9765625, 0.9765625, 0.9765625, 0.9765625, 0.90625, 0.0390625, -0.9140625, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, 0.0625, 0.9765625, 0.9765625, 0.9765625, 0.65625, 0.0546875, 0.03125, -0.875, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0], [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0]]
+ONE_HOT = [[0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]]
+
+# Uniform noise in [-1, 1].
+FAKE_DIGIT = [[0.778958797454834, 0.8792028427124023, 0.07099628448486328, 0.8518857955932617, -0.3541288375854492, -0.7431280612945557, -0.12607860565185547, 0.17328786849975586, 0.6749839782714844, -0.5402040481567383, 0.9034252166748047, 0.2420203685760498, 0.3455841541290283, 0.1937558650970459, 0.9989571571350098, 0.9039363861083984, -0.955411434173584, 0.6228537559509277, -0.33131909370422363, 0.9653763771057129, 0.864208459854126, -0.05056142807006836, 0.12686634063720703, -0.09225749969482422, 0.49758028984069824, 0.08698725700378418, 0.5533185005187988, 0.20227980613708496], [0.8400616645812988, 0.7409703731536865, -0.6215496063232422, -0.53228759765625, -0.20184636116027832, -0.8568699359893799, -0.8662903308868408, -0.8735041618347168, -0.11022663116455078, -0.8418543338775635, 0.8193502426147461, -0.901512622833252, -0.7680232524871826, 0.6209826469421387, 0.06459426879882812, 0.5341305732727051, -0.4078702926635742, -0.13658642768859863, 0.6602437496185303, 0.848508358001709, -0.23431801795959473, 0.5995683670043945, -0.9807922840118408, 0.2657158374786377, -0.8068397045135498, 0.2438051700592041, -0.2116842269897461, 0.011460304260253906], [0.00040912628173828125, -0.058798789978027344, 0.3239307403564453, 0.5040378570556641, -0.03192305564880371, -0.4816470146179199, -0.14559340476989746, -0.9231269359588623, -0.6602556705474854, -0.2537086009979248, -0.11059761047363281, -0.8174862861633301, 0.6180260181427002, 0.7245023250579834, 0.5007762908935547, -0.1575303077697754, -0.0167086124420166, 0.7173266410827637, 0.1126704216003418, -0.9878268241882324, 0.4538843631744385, -0.4422755241394043, -0.7899672985076904, 0.7349567413330078, -0.4448075294494629, -0.7548923492431641, -0.5739786624908447, 0.30504918098449707], [-0.8488152027130127, 0.43424463272094727, 0.7724254131317139, 0.43314504623413086, 0.7352848052978516, -0.26010799407958984, 0.43951940536499023, -0.7642686367034912, -0.657184362411499, -0.9933960437774658, -0.47258639335632324, 0.10390830039978027, 0.11454653739929199, -0.6156411170959473, -0.23431062698364258, -0.6897118091583252, -0.5721850395202637, -0.3574075698852539, 0.13927006721496582, -0.6530766487121582, 0.32231855392456055, 0.6294634342193604, 0.5507853031158447, -0.4867420196533203, 0.4329197406768799, 0.6168341636657715, -0.8720219135284424, 0.8639121055603027], [0.02407360076904297, -0.11185193061828613, -0.38637852668762207, -0.8244953155517578, -0.648916482925415, 0.44907593727111816, -0.368192195892334, 0.0190126895904541, -0.9450500011444092, 0.41033458709716797, -0.7877917289733887, 0.617938756942749, 0.551692008972168, -0.48288512229919434, 0.019921541213989258, -0.8765170574188232, 0.5651748180389404, 0.850874662399292, -0.5792787075042725, 0.1748213768005371, -0.6905481815338135, -0.521310567855835, 0.062479496002197266, 0.17763280868530273, -0.4628307819366455, 0.8870463371276855, -0.8685822486877441, -0.29169774055480957], [-0.14687561988830566, 0.8801963329315186, 0.11353135108947754, -0.6009430885314941, 0.8818719387054443, -0.8621203899383545, -0.48749589920043945, 0.5224916934967041, 0.7050364017486572, -0.9968757629394531, 0.7235188484191895, 0.662771463394165, 0.588390588760376, -0.9624209403991699, 0.39203453063964844, -0.2210233211517334, 0.9266352653503418, -0.9132544994354248, -0.5175333023071289, 0.7251780033111572, 0.3030557632446289, 0.5743863582611084, 0.14350247383117676, -0.3735086917877197, -0.6299927234649658, 0.5088682174682617, -0.6953752040863037, 0.23583650588989258], [0.25864362716674805, 0.5736973285675049, 
-0.3975222110748291, -0.6369199752807617, 0.5376672744750977, -0.19951462745666504, 0.6843924522399902, 0.6296660900115967, 0.36865997314453125, -0.7243289947509766, 0.5768749713897705, -0.5493001937866211, 0.31238412857055664, 0.21019506454467773, -0.368206262588501, -0.33148622512817383, -0.3421964645385742, -0.15616083145141602, 0.6617193222045898, 0.4400820732116699, 0.7893157005310059, -0.2935945987701416, 0.6241741180419922, -0.26036930084228516, -0.6958446502685547, 0.27047157287597656, -0.9095940589904785, 0.9525108337402344], [-0.5233585834503174, -0.45003342628479004, 0.15099048614501953, 0.6257956027984619, 0.9017877578735352, -0.18155455589294434, -0.20237135887145996, -0.014468908309936523, 0.01797318458557129, -0.5453977584838867, 0.21428155899047852, -0.9678947925567627, 0.5137600898742676, -0.1094369888305664, -0.13572359085083008, -0.1704423427581787, -0.9122319221496582, 0.8274900913238525, -0.11746454238891602, -0.8701446056365967, -0.9545385837554932, -0.6735866069793701, 0.9445557594299316, -0.3842940330505371, -0.6240942478179932, -0.1673595905303955, -0.3959221839904785, -0.05602693557739258], [0.8833198547363281, 0.14288711547851562, -0.9623878002166748, -0.26968836784362793, 0.9689288139343262, -0.3792128562927246, 0.2520296573638916, 0.1477947235107422, -0.24453139305114746, 0.94329833984375, 0.8014910221099854, 0.5443501472473145, 0.8486857414245605, -0.0795745849609375, -0.30250096321105957, -0.909733772277832, 0.8387842178344727, -0.41989898681640625, -0.8364224433898926, 0.04792976379394531, -0.38036274909973145, 0.12747883796691895, -0.5356688499450684, -0.04269552230834961, -0.2070469856262207, -0.6911153793334961, 0.33954668045043945, 0.25260138511657715], [0.3128373622894287, 0.36142778396606445, -0.2512378692626953, -0.6497259140014648, -0.21787405014038086, -0.9972476959228516, -0.026904821395874023, -0.4214746952056885, -0.3354766368865967, -0.6112637519836426, -0.7594058513641357, 0.09093379974365234, -0.7331845760345459, 0.5222046375274658, -0.8997514247894287, -0.3749384880065918, -0.0775001049041748, -0.26296448707580566, 0.404325008392334, -0.25776195526123047, 0.9136955738067627, 0.2623283863067627, -0.6411356925964355, 0.6646602153778076, -0.12833356857299805, -0.4184732437133789, -0.3663449287414551, -0.5468103885650635], [0.3770618438720703, 0.5572817325592041, -0.3657073974609375, -0.5056321620941162, 0.6555137634277344, 0.9557509422302246, -0.6900768280029297, -0.4980638027191162, 0.05510354042053223, -0.610318660736084, 0.9753992557525635, 0.7569930553436279, 0.4011664390563965, 0.8439173698425293, -0.5921270847320557, -0.2775266170501709, -0.061129093170166016, -0.49707984924316406, 0.5820951461791992, 0.008175849914550781, 0.06372833251953125, 0.3061811923980713, -0.5091361999511719, 0.9751057624816895, 0.4571402072906494, 0.6769094467163086, -0.46695923805236816, 0.44080281257629395], [-0.6510493755340576, -0.032715559005737305, 0.3482983112335205, -0.8135421276092529, 0.1506943702697754, 0.5220685005187988, -0.8834004402160645, -0.908900260925293, 0.3211519718170166, 0.896381139755249, -0.9448244571685791, -0.6193962097167969, -0.009401559829711914, 0.38227057456970215, -0.9219558238983154, -0.029483318328857422, -0.3889012336730957, 0.5242419242858887, 0.7338912487030029, -0.8713808059692383, -0.04948568344116211, -0.797940731048584, -0.9933724403381348, -0.262890100479126, -0.7165846824645996, -0.9763388633728027, -0.4105076789855957, 0.5907857418060303], [-0.6091890335083008, 0.7921168804168701, -0.033307790756225586, 
-0.9177074432373047, 0.4553513526916504, 0.5754055976867676, 0.6747269630432129, 0.0015664100646972656, -0.36865878105163574, -0.7999486923217773, 0.993431568145752, -0.7310445308685303, -0.49965715408325195, -0.028263330459594727, -0.20190834999084473, -0.7398116588592529, 0.10513901710510254, 0.22136950492858887, 0.42579007148742676, -0.4703383445739746, -0.8729751110076904, 0.28951215744018555, -0.3110074996948242, 0.9935362339019775, -0.29533815383911133, -0.3384673595428467, -0.07292437553405762, 0.8471579551696777], [-0.04648447036743164, -0.6876633167266846, -0.4921104907989502, -0.1925184726715088, -0.5843420028686523, -0.2852492332458496, 0.1251826286315918, -0.5709969997406006, 0.6744673252105713, 0.17812824249267578, 0.675086259841919, 0.1219022274017334, 0.6496286392211914, 0.05892682075500488, 0.33709263801574707, -0.9087743759155273, -0.4785141944885254, 0.2689247131347656, 0.3600578308105469, -0.6822943687438965, -0.0801839828491211, 0.9234893321990967, 0.5037875175476074, -0.8148105144500732, 0.6820621490478516, -0.8345451354980469, 0.9400079250335693, -0.752265453338623], [0.4599113464355469, 0.0292510986328125, -0.7475054264068604, -0.4074263572692871, 0.8800950050354004, 0.41760849952697754, 0.3225588798522949, 0.8359887599945068, -0.8655965328216553, 0.17258906364440918, 0.752063512802124, -0.48968982696533203, -0.9149389266967773, -0.46224403381347656, -0.26379919052124023, -0.9342350959777832, -0.5444085597991943, -0.4576752185821533, -0.12544608116149902, -0.3810274600982666, -0.5277583599090576, 0.3267025947570801, 0.4762594699859619, 0.09010529518127441, -0.2434844970703125, -0.8013439178466797, -0.23540687561035156, 0.8844473361968994], [-0.8922028541564941, -0.7912418842315674, -0.14429497718811035, -0.3925011157989502, 0.1870102882385254, -0.28865933418273926, 0.11481523513793945, -0.9174098968505859, -0.5264348983764648, -0.7296302318572998, 0.44644737243652344, 0.11140275001525879, 0.08105874061584473, 0.3871574401855469, -0.5030152797698975, -0.9560322761535645, -0.07085466384887695, 0.7949709892272949, 0.37600183486938477, -0.8664641380310059, -0.08428549766540527, 0.9169652462005615, 0.9479010105133057, 0.3576698303222656, 0.6677501201629639, -0.34919166564941406, -0.11635351181030273, -0.05966901779174805], [-0.9716870784759521, -0.7901370525360107, 0.9152195453643799, -0.42171406745910645, 0.20472931861877441, -0.9847748279571533, 0.9935972690582275, -0.5975158214569092, 0.9557573795318604, 0.41234254837036133, -0.26670074462890625, 0.869682788848877, 0.4443938732147217, -0.13968205451965332, -0.7745835781097412, -0.5692052841186523, 0.321591854095459, -0.3146944046020508, -0.3921670913696289, -0.36029601097106934, 0.33528995513916016, 0.029220104217529297, -0.8788590431213379, 0.6883881092071533, -0.8260040283203125, -0.43041515350341797, 0.42810845375061035, 0.40050792694091797], [0.6319584846496582, 0.32369279861450195, -0.6308813095092773, 0.07861590385437012, 0.5387494564056396, -0.902024507522583, -0.5346510410308838, 0.10787153244018555, 0.36048197746276855, 0.7801573276519775, -0.39187049865722656, 0.409869909286499, 0.5972449779510498, -0.7578165531158447, 0.30403685569763184, -0.7885205745697021, 0.01341390609741211, -0.023469209671020508, -0.11588907241821289, -0.941727876663208, -0.09618496894836426, 0.8042230606079102, 0.4683551788330078, -0.6497313976287842, 0.7443571090698242, 0.7150907516479492, -0.5759801864624023, -0.6523513793945312], [0.3150167465209961, -0.1853046417236328, 0.1839759349822998, -0.9234504699707031, -0.666614294052124, 
-0.740748405456543, 0.5700008869171143, 0.4091987609863281, -0.8912513256072998, 0.027419567108154297, 0.07219696044921875, -0.3128829002380371, 0.9465951919555664, 0.8715424537658691, -0.4559173583984375, -0.5862705707550049, -0.6734106540679932, 0.8419013023376465, -0.5523068904876709, -0.5019669532775879, -0.4969966411590576, -0.030994892120361328, 0.7572557926177979, 0.3824281692504883, 0.6366133689880371, -0.13271117210388184, 0.632627010345459, -0.9462625980377197], [-0.3923935890197754, 0.5466675758361816, -0.9815363883972168, -0.12867975234985352, 0.9932572841644287, -0.712486743927002, -0.17107725143432617, -0.41582536697387695, 0.10990118980407715, 0.11867499351501465, -0.7774863243103027, -0.9382553100585938, 0.4290587902069092, -0.1412363052368164, -0.498490571975708, 0.7793624401092529, 0.9015710353851318, -0.5107312202453613, 0.12394595146179199, 0.4988982677459717, -0.5731511116027832, -0.8105146884918213, 0.19758391380310059, 0.4081540107727051, -0.20432400703430176, 0.9924988746643066, -0.16952204704284668, 0.45574188232421875], [-0.023319005966186523, -0.3514130115509033, 0.4652390480041504, 0.7690563201904297, 0.7906622886657715, 0.9922001361846924, -0.6670932769775391, -0.5720524787902832, 0.3491201400756836, -0.18706512451171875, 0.4899415969848633, -0.5637645721435547, 0.8986928462982178, -0.7359066009521484, -0.6342356204986572, 0.6608924865722656, 0.6014213562011719, 0.7906208038330078, 0.19034814834594727, 0.16884255409240723, -0.8803558349609375, 0.4060337543487549, 0.9862797260284424, -0.48758840560913086, 0.3894319534301758, 0.4226539134979248, 0.00042724609375, -0.3623659610748291], [0.7065057754516602, -0.4103825092315674, 0.3343489170074463, -0.9341757297515869, -0.20749878883361816, -0.3589310646057129, -0.9470062255859375, 0.7851138114929199, -0.74678635597229, -0.05102086067199707, 0.16727972030639648, 0.12385678291320801, -0.41132497787475586, 0.5856695175170898, 0.06311273574829102, 0.8238284587860107, 0.2257852554321289, -0.6452250480651855, -0.6373245716094971, -0.1247873306274414, 0.0021851062774658203, -0.5648598670959473, 0.8261771202087402, 0.3713812828063965, -0.7185893058776855, 0.45911359786987305, 0.6722264289855957, -0.31804966926574707], [-0.2952606678009033, -0.12052011489868164, 0.9074821472167969, -0.9850583076477051, 0.6732273101806641, 0.7954013347625732, 0.938248872756958, -0.35915136337280273, -0.8291504383087158, 0.6020793914794922, -0.30096912384033203, 0.6136975288391113, 0.46443629264831543, -0.9057025909423828, 0.43993186950683594, -0.11653375625610352, 0.9514555931091309, 0.7985148429870605, -0.6911783218383789, -0.931948184967041, 0.06829452514648438, 0.711458683013916, 0.17953252792358398, 0.19076848030090332, 0.6339912414550781, -0.38822221755981445, 0.09893226623535156, -0.7663829326629639], [0.2640511989593506, 0.16821074485778809, 0.9845118522644043, 0.13789963722229004, -0.1097862720489502, 0.24889636039733887, 0.12135696411132812, 0.4419589042663574, -0.022361040115356445, 0.3793671131134033, 0.7053709030151367, 0.5814387798309326, 0.24962759017944336, -0.40136122703552246, -0.364804744720459, -0.2518625259399414, 0.25757312774658203, 0.6828348636627197, -0.08237361907958984, -0.07745933532714844, -0.9299273490905762, -0.3066704273223877, 0.22127842903137207, -0.6690163612365723, 0.7477891445159912, 0.5738682746887207, -0.9027633666992188, 0.33333301544189453], [0.44417595863342285, 0.5279359817504883, 0.36589932441711426, -0.049490928649902344, -0.3003063201904297, 0.8447706699371338, 0.8139019012451172, 
-0.4958505630493164, -0.3640005588531494, -0.7731854915618896, 0.4138674736022949, 0.3080577850341797, -0.5615301132202148, -0.4247474670410156, -0.25992918014526367, -0.7983508110046387, -0.4926283359527588, 0.08595061302185059, 0.9205574989318848, 0.6025278568267822, 0.37128734588623047, -0.1856670379638672, 0.9658422470092773, -0.6652145385742188, -0.2217710018157959, 0.8311929702758789, 0.9117603302001953, 0.4660191535949707], [-0.6164810657501221, -0.9509942531585693, 0.30228447914123535, 0.39792585372924805, -0.45194506645202637, -0.3315877914428711, 0.5393261909484863, 0.030223369598388672, 0.6013507843017578, 0.020318031311035156, 0.17352747917175293, -0.25470709800720215, -0.7904071807861328, 0.5383470058441162, -0.43855953216552734, -0.350492000579834, -0.5038156509399414, 0.0511777400970459, 0.9628784656524658, -0.520085334777832, 0.33083176612854004, 0.3698580265045166, 0.9121549129486084, -0.664802074432373, -0.45629024505615234, -0.44208550453186035, 0.6502475738525391, -0.597346305847168], [-0.5405080318450928, 0.2640984058380127, -0.2119138240814209, 0.04504036903381348, -0.10604047775268555, -0.8093295097351074, 0.20915007591247559, 0.08994936943054199, 0.5347979068756104, -0.550915002822876, 0.3008608818054199, 0.2416989803314209, 0.025292158126831055, -0.2790646553039551, 0.5850257873535156, 0.7020430564880371, -0.9527721405029297, 0.8273317813873291, -0.8028199672698975, 0.3686351776123047, -0.9391682147979736, -0.4261181354522705, 0.1674947738647461, 0.27993154525756836, 0.7567532062530518, 0.9245085716247559, 0.9245762825012207, -0.296356201171875], [-0.7747671604156494, 0.7632861137390137, 0.10236740112304688, 0.14044547080993652, 0.9318621158599854, -0.5622165203094482, -0.6605725288391113, -0.8286299705505371, 0.8717818260192871, -0.22177672386169434, -0.6030778884887695, 0.20917797088623047, 0.31551361083984375, 0.7741527557373047, -0.3320643901824951, -0.9014863967895508, 0.44268250465393066, 0.25649309158325195, -0.5621528625488281, -0.6077632904052734, 0.21485304832458496, -0.658627986907959, -0.9116294384002686, -0.294114351272583, 0.0452420711517334, 0.8542745113372803, 0.7148771286010742, 0.3244490623474121]]
+# pylint: enable=line-too-long
+
+
+def real_digit():
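+ # Add batch and channel dimensions: [28, 28] -> [1, 28, 28, 1].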
+ return tf.expand_dims(tf.expand_dims(REAL_DIGIT, 0), -1)
+
+
+def fake_digit():
+ return tf.expand_dims(tf.expand_dims(FAKE_DIGIT, 0), -1)
+
+
+def one_hot_real():
+ return tf.constant(ONE_HOT)
+
+
+def one_hot1():
+ return tf.constant([[1.0] + [0.0] * 9])
+
+
+class MnistScoreTest(tf.test.TestCase):
+
+ def test_any_batch_size(self):
+ inputs = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
+ mscore = util.mnist_score(inputs)
+ for batch_size in [4, 16, 30]:
+ with self.test_session() as sess:
+ sess.run(mscore, feed_dict={inputs: np.zeros([batch_size, 28, 28, 1])})
+
+ def test_deterministic(self):
+ m_score = util.mnist_score(real_digit())
+ with self.test_session():
+ m_score1 = m_score.eval()
+ m_score2 = m_score.eval()
+ self.assertEqual(m_score1, m_score2)
+
+ with self.test_session():
+ m_score3 = m_score.eval()
+ self.assertEqual(m_score1, m_score3)
+
+ def test_single_example_correct(self):
+ real_score = util.mnist_score(real_digit())
+ fake_score = util.mnist_score(fake_digit())
+ with self.test_session():
+ self.assertNear(1.0, real_score.eval(), 1e-6)
+ self.assertNear(1.0, fake_score.eval(), 1e-6)
+
+ def test_minibatch_correct(self):
+ mscore = util.mnist_score(
+ tf.concat([real_digit(), real_digit(), fake_digit()], 0))
+ with self.test_session():
+ self.assertNear(1.612828, mscore.eval(), 1e-6)
+
+ def test_batch_splitting_doesnt_change_value(self):
+ for num_batches in [1, 2, 4, 8]:
+ mscore = util.mnist_score(
+ tf.concat([real_digit()] * 4 + [fake_digit()] * 4, 0),
+ num_batches=num_batches)
+ with self.test_session():
+ self.assertNear(1.649209, mscore.eval(), 1e-6)
+
+
+class MnistFrechetDistanceTest(tf.test.TestCase):
+
+ def test_any_batch_size(self):
+ inputs = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
+ fdistance = util.mnist_frechet_distance(inputs, inputs)
+ for batch_size in [4, 16, 30]:
+ with self.test_session() as sess:
+ sess.run(fdistance,
+ feed_dict={inputs: np.zeros([batch_size, 28, 28, 1])})
+
+ def test_deterministic(self):
+ fdistance = util.mnist_frechet_distance(
+ tf.concat([real_digit()] * 2, 0),
+ tf.concat([fake_digit()] * 2, 0))
+ with self.test_session():
+ fdistance1 = fdistance.eval()
+ fdistance2 = fdistance.eval()
+ self.assertNear(fdistance1, fdistance2, 2e-1)
+
+ with self.test_session():
+ fdistance3 = fdistance.eval()
+ self.assertNear(fdistance1, fdistance3, 2e-1)
+
+ def test_single_example_correct(self):
+ fdistance = util.mnist_frechet_distance(
+ tf.concat([real_digit()] * 2, 0),
+ tf.concat([real_digit()] * 2, 0))
+ with self.test_session():
+ self.assertNear(0.0, fdistance.eval(), 2e-1)
+
+ def test_minibatch_correct(self):
+ fdistance = util.mnist_frechet_distance(
+ tf.concat([real_digit(), real_digit(), fake_digit()], 0),
+ tf.concat([real_digit(), fake_digit(), fake_digit()], 0))
+ with self.test_session():
+ self.assertNear(43.5, fdistance.eval(), 2e-1)
+
+ def test_batch_splitting_doesnt_change_value(self):
+ for num_batches in [1, 2, 4, 8]:
+ fdistance = util.mnist_frechet_distance(
+ tf.concat([real_digit()] * 6 + [fake_digit()] * 2, 0),
+ tf.concat([real_digit()] * 2 + [fake_digit()] * 6, 0),
+ num_batches=num_batches)
+ with self.test_session():
+ self.assertNear(97.8, fdistance.eval(), 2e-1)
+
+
+class MnistCrossEntropyTest(tf.test.TestCase):
+
+ def test_any_batch_size(self):
+ num_classes = 10
+ one_label = np.array([[1] + [0] * (num_classes - 1)])
+ inputs = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
+ one_hot_label = tf.placeholder(tf.int32, shape=[None, num_classes])
+ entropy = util.mnist_cross_entropy(inputs, one_hot_label)
+ for batch_size in [4, 16, 30]:
+ with self.test_session() as sess:
+ sess.run(entropy, feed_dict={
+ inputs: np.zeros([batch_size, 28, 28, 1]),
+ one_hot_label: np.concatenate([one_label] * batch_size)})
+
+ def test_deterministic(self):
+ xent = util.mnist_cross_entropy(real_digit(), one_hot_real())
+ with self.test_session():
+ ent1 = xent.eval()
+ ent2 = xent.eval()
+ self.assertEqual(ent1, ent2)
+
+ with self.test_session():
+ ent3 = xent.eval()
+ self.assertEqual(ent1, ent3)
+
+ def test_single_example_correct(self):
+ # The correct label should have low cross entropy.
+ correct_xent = util.mnist_cross_entropy(real_digit(), one_hot_real())
+ # The incorrect label should have high cross entropy.
+ wrong_xent = util.mnist_cross_entropy(real_digit(), one_hot1())
+ # A random digit should have medium cross entropy for any label.
+ fake_xent1 = util.mnist_cross_entropy(fake_digit(), one_hot_real())
+ fake_xent6 = util.mnist_cross_entropy(fake_digit(), one_hot1())
+ with self.test_session():
+ self.assertNear(0.00996, correct_xent.eval(), 1e-5)
+ self.assertNear(18.63073, wrong_xent.eval(), 1e-5)
+ self.assertNear(2.2, fake_xent1.eval(), 1e-1)
+ self.assertNear(2.2, fake_xent6.eval(), 1e-1)
+
+ def test_minibatch_correct(self):
+ # Reordered minibatches should have the same value.
+ xent1 = util.mnist_cross_entropy(
+ tf.concat([real_digit(), real_digit(), fake_digit()], 0),
+ tf.concat([one_hot_real(), one_hot1(), one_hot1()], 0))
+ xent2 = util.mnist_cross_entropy(
+ tf.concat([real_digit(), fake_digit(), real_digit()], 0),
+ tf.concat([one_hot_real(), one_hot1(), one_hot1()], 0))
+ with self.test_session():
+ self.assertNear(6.972539, xent1.eval(), 1e-5)
+ self.assertNear(xent1.eval(), xent2.eval(), 1e-5)
+
+
+class GetNoiseTest(tf.test.TestCase):
+
+ def test_get_noise_categorical_syntax(self):
+ util.get_eval_noise_categorical(
+ noise_samples=4,
+ categorical_sample_points=np.arange(0, 10),
+ continuous_sample_points=np.linspace(-2.0, 2.0, 10),
+ unstructured_noise_dims=62,
+ continuous_noise_dims=2)
+
+ def test_get_noise_continuous_dim1_syntax(self):
+ util.get_eval_noise_continuous_dim1(
+ noise_samples=4,
+ categorical_sample_points=np.arange(0, 10),
+ continuous_sample_points=np.linspace(-2.0, 2.0, 10),
+ unstructured_noise_dims=62,
+ continuous_noise_dims=2)
+
+ def test_get_noise_continuous_dim2_syntax(self):
+ util.get_eval_noise_continuous_dim2(
+ noise_samples=4,
+ categorical_sample_points=np.arange(0, 10),
+ continuous_sample_points=np.linspace(-2.0, 2.0, 10),
+ unstructured_noise_dims=62,
+ continuous_noise_dims=2)
+
+ def test_get_infogan_syntax(self):
+ util.get_infogan_noise(
+ batch_size=4,
+ categorical_dim=10,
+ structured_continuous_dim=3,
+ total_continuous_noise_dims=62)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist_estimator/launch_jobs.sh" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist_estimator/launch_jobs.sh"
new file mode 100644
index 00000000..fcc5131f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist_estimator/launch_jobs.sh"
@@ -0,0 +1,65 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+#!/bin/bash
+#
+# This script performs the following operations:
+# 1. Downloads the MNIST dataset.
+# 2. Trains an unconditional model on the MNIST training set using a
+# tf.Estimator.
+# 3. Evaluates the models and writes sample images to disk.
+#
+#
+# Usage:
+# cd models/research/gan/mnist_estimator
+# ./launch_jobs.sh ${git_repo}
+set -e
+
+# Location of the git repository.
+git_repo=$1
+if [[ "$git_repo" == "" ]]; then
+ echo "'git_repo' must not be empty."
+ exit 1
+fi
+
+# Base name for where the evaluation images will be saved to.
+EVAL_DIR=/tmp/mnist-estimator
+
+# Where the dataset is saved to.
+DATASET_DIR=/tmp/mnist-data
+
+export PYTHONPATH=$PYTHONPATH:$git_repo:$git_repo/research:$git_repo/research/gan:$git_repo/research/slim
+
+# A helper function for printing pretty output.
+Banner () {
+ local text=$1
+ local green='\033[0;32m'
+ local nc='\033[0m' # No color.
+ echo -e "${green}${text}${nc}"
+}
+
+# Download the dataset.
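+# The slim helper downloads MNIST and converts it to TFRecords under ${DATASET_DIR}.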
+python "${git_repo}/research/slim/download_and_convert_data.py" \
+ --dataset_name=mnist \
+ --dataset_dir=${DATASET_DIR}
+
+# Run unconditional GAN.
+NUM_STEPS=1600
+Banner "Starting training GANEstimator ${NUM_STEPS} steps..."
+python "${git_repo}/research/gan/mnist_estimator/train.py" \
+ --max_number_of_steps=${NUM_STEPS} \
+ --eval_dir=${EVAL_DIR} \
+ --dataset_dir=${DATASET_DIR} \
+ --alsologtostderr
+Banner "Finished training GANEstimator ${NUM_STEPS} steps. See ${EVAL_DIR} for output images."
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist_estimator/train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist_estimator/train.py"
new file mode 100644
index 00000000..d2c8ea34
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist_estimator/train.py"
@@ -0,0 +1,109 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Trains a GANEstimator on MNIST data."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+
+from absl import flags
+import numpy as np
+import scipy.misc
+from six.moves import xrange # pylint: disable=redefined-builtin
+import tensorflow as tf
+
+from mnist import data_provider
+from mnist import networks
+
+tfgan = tf.contrib.gan
+
+flags.DEFINE_integer('batch_size', 32,
+ 'The number of images in each train batch.')
+
+flags.DEFINE_integer('max_number_of_steps', 20000,
+ 'The maximum number of gradient steps.')
+
+flags.DEFINE_integer(
+ 'noise_dims', 64, 'Dimensions of the generator noise vector')
+
+flags.DEFINE_string('dataset_dir', None, 'Location of data.')
+
+flags.DEFINE_string('eval_dir', '/tmp/mnist-estimator/',
+ 'Directory where the results are saved to.')
+
+FLAGS = flags.FLAGS
+
+
+def _get_train_input_fn(batch_size, noise_dims, dataset_dir=None,
+ num_threads=4):
+ def train_input_fn():
+ with tf.device('/cpu:0'):
+ images, _, _ = data_provider.provide_data(
+ 'train', batch_size, dataset_dir, num_threads=num_threads)
+ noise = tf.random_normal([batch_size, noise_dims])
+ return noise, images
+ return train_input_fn
+
+
+def _get_predict_input_fn(batch_size, noise_dims):
+ def predict_input_fn():
+ noise = tf.random_normal([batch_size, noise_dims])
+ return noise
+ return predict_input_fn
+
+
+def _unconditional_generator(noise, mode):
+ """MNIST generator with extra argument for tf.Estimator's `mode`."""
+ is_training = (mode == tf.estimator.ModeKeys.TRAIN)
+ return networks.unconditional_generator(noise, is_training=is_training)
+
+
+def main(_):
+ # Initialize GANEstimator with options and hyperparameters.
+ gan_estimator = tfgan.estimator.GANEstimator(
+ generator_fn=_unconditional_generator,
+ discriminator_fn=networks.unconditional_discriminator,
+ generator_loss_fn=tfgan.losses.wasserstein_generator_loss,
+ discriminator_loss_fn=tfgan.losses.wasserstein_discriminator_loss,
+ generator_optimizer=tf.train.AdamOptimizer(0.001, 0.5),
+ discriminator_optimizer=tf.train.AdamOptimizer(0.0001, 0.5),
+ add_summaries=tfgan.estimator.SummaryType.IMAGES)
+
+ # Train estimator.
+ train_input_fn = _get_train_input_fn(
+ FLAGS.batch_size, FLAGS.noise_dims, FLAGS.dataset_dir)
+ gan_estimator.train(train_input_fn, max_steps=FLAGS.max_number_of_steps)
+
+ # Run inference.
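+ # predict() yields one generated image at a time, so draw 36 samples for a 6x6 grid.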
+ predict_input_fn = _get_predict_input_fn(36, FLAGS.noise_dims)
+ prediction_iterable = gan_estimator.predict(predict_input_fn)
+ predictions = [next(prediction_iterable) for _ in xrange(36)]
+
+ # Nicely tile.
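+ # Each column stacks 6 generated digits vertically; the 6 columns are then
+ # placed side by side to form the grid.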
+ image_rows = [np.concatenate(predictions[i:i+6], axis=0) for i in
+ range(0, 36, 6)]
+ tiled_image = np.concatenate(image_rows, axis=1)
+
+ # Write to disk.
+ if not tf.gfile.Exists(FLAGS.eval_dir):
+ tf.gfile.MakeDirs(FLAGS.eval_dir)
+ scipy.misc.imsave(os.path.join(FLAGS.eval_dir, 'unconditional_gan.png'),
+ np.squeeze(tiled_image, axis=2))
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist_estimator/train_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist_estimator/train_test.py"
new file mode 100644
index 00000000..5bec5db6
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/mnist_estimator/train_test.py"
@@ -0,0 +1,52 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for mnist_estimator.train."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+from absl import flags
+import numpy as np
+
+import tensorflow as tf
+import train
+
+FLAGS = flags.FLAGS
+mock = tf.test.mock
+
+
+class TrainTest(tf.test.TestCase):
+
+ @mock.patch.object(train, 'data_provider', autospec=True)
+ def test_full_flow(self, mock_data_provider):
+ FLAGS.eval_dir = self.get_temp_dir()
+ FLAGS.batch_size = 16
+ FLAGS.max_number_of_steps = 2
+ FLAGS.noise_dims = 3
+
+ # Construct mock inputs.
+ mock_imgs = np.zeros([FLAGS.batch_size, 28, 28, 1], dtype=np.float32)
+ mock_lbls = np.concatenate(
+ (np.ones([FLAGS.batch_size, 1], dtype=np.int32),
+ np.zeros([FLAGS.batch_size, 9], dtype=np.int32)), axis=1)
+ mock_data_provider.provide_data.return_value = (mock_imgs, mock_lbls, None)
+
+ train.main(None)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/pix2pix/launch_jobs.sh" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/pix2pix/launch_jobs.sh"
new file mode 100644
index 00000000..9302b99b
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/pix2pix/launch_jobs.sh"
@@ -0,0 +1,74 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+#!/bin/bash
+#
+# This script performs the following operations:
+# 1. Downloads the Imagenet dataset.
+# 2. Trains a pix2pix image-to-image translation model on patches from Imagenet.
+# 3. Evaluates the models and writes sample images to disk.
+#
+# Usage:
+# cd models/research/gan/pix2pix
+# ./launch_jobs.sh ${weight_factor} ${git_repo}
+set -e
+
+# Weight of the adversarial loss.
+weight_factor=$1
+if [[ "$weight_factor" == "" ]]; then
+ echo "'weight_factor' must not be empty."
+ exit 1
+fi
+
+# Location of the git repository.
+git_repo=$2
+if [[ "$git_repo" == "" ]]; then
+ echo "'git_repo' must not be empty."
+ exit 1
+fi
+
+# Base name for where the checkpoint and logs will be saved to.
+TRAIN_DIR=/tmp/compression-model
+
+# Base name for where the evaluation images will be saved to.
+EVAL_DIR=/tmp/compression-model/eval
+
+# Where the dataset is saved to.
+DATASET_DIR=/tmp/imagenet-data
+
+export PYTHONPATH=$PYTHONPATH:$git_repo:$git_repo/research:$git_repo/research/slim:$git_repo/research/slim/nets
+
+# A helper function for printing pretty output.
+Banner () {
+ local text=$1
+ local green='\033[0;32m'
+ local nc='\033[0m' # No color.
+ echo -e "${green}${text}${nc}"
+}
+
+# Download the dataset.
+bazel build "${git_repo}/research/slim:download_and_convert_imagenet"
+"./bazel-bin/download_and_convert_imagenet" ${DATASET_DIR}
+
+# Run the pix2pix model.
+NUM_STEPS=10000
+MODEL_TRAIN_DIR="${TRAIN_DIR}/wt${weight_factor}"
+Banner "Starting training an image compression model for ${NUM_STEPS} steps..."
+python "${git_repo}/research/gan/image_compression/train.py" \
+ --train_log_dir=${MODEL_TRAIN_DIR} \
+ --dataset_dir=${DATASET_DIR} \
+ --max_number_of_steps=${NUM_STEPS} \
+ --weight_factor=${weight_factor} \
+ --alsologtostderr
+Banner "Finished training pix2pix model ${NUM_STEPS} steps."
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/pix2pix/networks.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/pix2pix/networks.py"
new file mode 100644
index 00000000..3a59d41a
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/pix2pix/networks.py"
@@ -0,0 +1,61 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Networks for GAN Pix2Pix example using TFGAN."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from slim.nets import cyclegan
+from slim.nets import pix2pix
+
+
+def generator(input_images):
+ """Thin wrapper around CycleGAN generator to conform to the TFGAN API.
+
+ Args:
+ input_images: A batch of images to translate. Images should be normalized
+ already. Shape is [batch, height, width, channels].
+
+ Returns:
+ Returns generated image batch.
+
+ Raises:
+ ValueError: If shape of last dimension (channels) is not defined.
+ """
+ input_images.shape.assert_has_rank(4)
+ input_size = input_images.shape.as_list()
+ channels = input_size[-1]
+ if channels is None:
+ raise ValueError(
+ 'Last dimension shape must be known but is None: %s' % input_size)
+ with tf.contrib.framework.arg_scope(cyclegan.cyclegan_arg_scope()):
+ output_images, _ = cyclegan.cyclegan_generator_resnet(input_images,
+ num_outputs=channels)
+ return output_images
+
+
+def discriminator(image_batch, unused_conditioning=None):
+ """A thin wrapper around the Pix2Pix discriminator to conform to TFGAN API."""
+ with tf.contrib.framework.arg_scope(pix2pix.pix2pix_arg_scope()):
+ logits_4d, _ = pix2pix.pix2pix_discriminator(
+ image_batch, num_filters=[64, 128, 256, 512])
+ logits_4d.shape.assert_has_rank(4)
+ # Output of logits is 4D. Reshape to 2D, for TFGAN.
+ logits_2d = tf.contrib.layers.flatten(logits_4d)
+
+ return logits_2d
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/pix2pix/networks_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/pix2pix/networks_test.py"
new file mode 100644
index 00000000..7b94173a
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/pix2pix/networks_test.py"
@@ -0,0 +1,89 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for tfgan.examples.networks.networks."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+import networks
+
+
+class Pix2PixTest(tf.test.TestCase):
+
+ def test_generator_run(self):
+ img_batch = tf.zeros([3, 128, 128, 3])
+ model_output = networks.generator(img_batch)
+ with self.test_session() as sess:
+ sess.run(tf.global_variables_initializer())
+ sess.run(model_output)
+
+ def test_generator_graph(self):
+ for shape in ([4, 32, 32], [3, 128, 128], [2, 80, 400]):
+ tf.reset_default_graph()
+ img = tf.ones(shape + [3])
+ output_imgs = networks.generator(img)
+
+ self.assertAllEqual(shape + [3], output_imgs.shape.as_list())
+
+ def test_generator_graph_unknown_batch_dim(self):
+ img = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])
+ output_imgs = networks.generator(img)
+
+ self.assertAllEqual([None, 32, 32, 3], output_imgs.shape.as_list())
+
+ def test_generator_invalid_input(self):
+ with self.assertRaisesRegexp(ValueError, 'must have rank 4'):
+ networks.generator(tf.zeros([28, 28, 3]))
+
+ def test_generator_run_multi_channel(self):
+ img_batch = tf.zeros([3, 128, 128, 5])
+ model_output = networks.generator(img_batch)
+ with self.test_session() as sess:
+ sess.run(tf.global_variables_initializer())
+ sess.run(model_output)
+
+ def test_generator_invalid_channels(self):
+ with self.assertRaisesRegexp(
+ ValueError, 'Last dimension shape must be known but is None'):
+ img = tf.placeholder(tf.float32, shape=[4, 32, 32, None])
+ networks.generator(img)
+
+ def test_discriminator_run(self):
+ img_batch = tf.zeros([3, 70, 70, 3])
+ disc_output = networks.discriminator(img_batch)
+ with self.test_session() as sess:
+ sess.run(tf.global_variables_initializer())
+ sess.run(disc_output)
+
+ def test_discriminator_graph(self):
+ # Check graph construction for a number of image size/depths and batch
+ # sizes.
+ for batch_size, patch_size in zip([3, 6], [70, 128]):
+ tf.reset_default_graph()
+ img = tf.ones([batch_size, patch_size, patch_size, 3])
+ disc_output = networks.discriminator(img)
+
+ self.assertEqual(2, disc_output.shape.ndims)
+ self.assertEqual(batch_size, disc_output.shape.as_list()[0])
+
+ def test_discriminator_invalid_input(self):
+ with self.assertRaisesRegexp(ValueError, 'Shape must be rank 4'):
+ networks.discriminator(tf.zeros([28, 28, 3]))
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/pix2pix/train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/pix2pix/train.py"
new file mode 100644
index 00000000..2b0d5e36
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/pix2pix/train.py"
@@ -0,0 +1,177 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Trains an image-to-image translation network with an adversarial loss."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+from absl import flags
+import tensorflow as tf
+
+import data_provider
+import networks
+
+tfgan = tf.contrib.gan
+
+
+flags.DEFINE_integer('batch_size', 10, 'The number of images in each batch.')
+
+flags.DEFINE_integer('patch_size', 32, 'The size of the patches to train on.')
+
+flags.DEFINE_string('master', '', 'Name of the TensorFlow master to use.')
+
+flags.DEFINE_string('train_log_dir', '/tmp/pix2pix/',
+ 'Directory where to write event logs.')
+
+flags.DEFINE_float('generator_lr', 0.00001,
+ 'The compression model learning rate.')
+
+flags.DEFINE_float('discriminator_lr', 0.00001,
+ 'The discriminator learning rate.')
+
+flags.DEFINE_integer('max_number_of_steps', 2000000,
+ 'The maximum number of gradient steps.')
+
+flags.DEFINE_integer(
+ 'ps_tasks', 0,
+ 'The number of parameter servers. If the value is 0, then the parameters '
+ 'are handled locally by the worker.')
+
+flags.DEFINE_integer(
+ 'task', 0,
+ 'The Task ID. This value is used when training with multiple workers to '
+ 'identify each worker.')
+
+flags.DEFINE_float(
+ 'weight_factor', 0.0,
+ 'How much to weight the adversarial loss relative to pixel loss.')
+
+flags.DEFINE_string('dataset_dir', None, 'Location of data.')
+
+
+FLAGS = flags.FLAGS
+
+
+def main(_):
+ if not tf.gfile.Exists(FLAGS.train_log_dir):
+ tf.gfile.MakeDirs(FLAGS.train_log_dir)
+
+ with tf.device(tf.train.replica_device_setter(FLAGS.ps_tasks)):
+ # Get real and distorted images.
+ with tf.device('/cpu:0'), tf.name_scope('inputs'):
+ real_images = data_provider.provide_data(
+ 'train', FLAGS.batch_size, dataset_dir=FLAGS.dataset_dir,
+ patch_size=FLAGS.patch_size)
+ distorted_images = _distort_images(
+ real_images, downscale_size=int(FLAGS.patch_size / 2),
+ upscale_size=FLAGS.patch_size)
+
+ # Create a GANModel tuple.
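+ # The distorted (blurred) patches are the generator inputs; the clean patches
+ # are the real data the discriminator sees.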
+ gan_model = tfgan.gan_model(
+ generator_fn=networks.generator,
+ discriminator_fn=networks.discriminator,
+ real_data=real_images,
+ generator_inputs=distorted_images)
+ tfgan.eval.add_image_comparison_summaries(
+ gan_model, num_comparisons=3, display_diffs=True)
+ tfgan.eval.add_gan_model_image_summaries(gan_model, grid_size=3)
+
+ # Define the GANLoss tuple using standard library functions.
+ with tf.name_scope('losses'):
+ gan_loss = tfgan.gan_loss(
+ gan_model,
+ generator_loss_fn=tfgan.losses.least_squares_generator_loss,
+ discriminator_loss_fn=tfgan.losses.least_squares_discriminator_loss)
+
+ # Define the standard L1 pixel loss.
+ l1_pixel_loss = tf.norm(gan_model.real_data - gan_model.generated_data,
+ ord=1) / FLAGS.patch_size ** 2
+
+ # Modify the loss tuple to include the pixel loss. Add summaries as well.
+ gan_loss = tfgan.losses.combine_adversarial_loss(
+ gan_loss, gan_model, l1_pixel_loss,
+ weight_factor=FLAGS.weight_factor)
+
+ with tf.name_scope('train_ops'):
+      # Get the GANTrain ops using the custom optimizers and gradient norm
+      # clipping.
+ gen_lr, dis_lr = _lr(FLAGS.generator_lr, FLAGS.discriminator_lr)
+ gen_opt, dis_opt = _optimizer(gen_lr, dis_lr)
+ train_ops = tfgan.gan_train_ops(
+ gan_model,
+ gan_loss,
+ generator_optimizer=gen_opt,
+ discriminator_optimizer=dis_opt,
+ summarize_gradients=True,
+ colocate_gradients_with_ops=True,
+ aggregation_method=tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N,
+ transform_grads_fn=tf.contrib.training.clip_gradient_norms_fn(1e3))
+ tf.summary.scalar('generator_lr', gen_lr)
+ tf.summary.scalar('discriminator_lr', dis_lr)
+
+ # Use GAN train step function if using adversarial loss, otherwise
+ # only train the generator.
+ train_steps = tfgan.GANTrainSteps(
+ generator_train_steps=1,
+ discriminator_train_steps=int(FLAGS.weight_factor > 0))
+
+ # Run the alternating training loop. Skip it if no steps should be taken
+ # (used for graph construction tests).
+ status_message = tf.string_join(
+ ['Starting train step: ',
+ tf.as_string(tf.train.get_or_create_global_step())],
+ name='status_message')
+ if FLAGS.max_number_of_steps == 0: return
+ tfgan.gan_train(
+ train_ops,
+ FLAGS.train_log_dir,
+ get_hooks_fn=tfgan.get_sequential_train_hooks(train_steps),
+ hooks=[tf.train.StopAtStepHook(num_steps=FLAGS.max_number_of_steps),
+ tf.train.LoggingTensorHook([status_message], every_n_iter=10)],
+ master=FLAGS.master,
+ is_chief=FLAGS.task == 0)
+
+
+def _optimizer(gen_lr, dis_lr):
+ kwargs = {'beta1': 0.5, 'beta2': 0.999}
+ generator_opt = tf.train.AdamOptimizer(gen_lr, **kwargs)
+ discriminator_opt = tf.train.AdamOptimizer(dis_lr, **kwargs)
+ return generator_opt, discriminator_opt
+
+
+def _lr(gen_lr_base, dis_lr_base):
+ """Return the generator and discriminator learning rates."""
+ gen_lr = tf.train.exponential_decay(
+ learning_rate=gen_lr_base,
+ global_step=tf.train.get_or_create_global_step(),
+ decay_steps=100000,
+ decay_rate=0.8,
+ staircase=True,)
+ dis_lr = dis_lr_base
+
+ return gen_lr, dis_lr
+
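+# Note (illustrative, not part of the original file): with the flag defaults
+# above, tf.train.exponential_decay with staircase=True gives
+#   gen_lr = generator_lr * 0.8 ** (global_step // 100000)
+# i.e. 1e-5 for the first 100000 steps, 8e-6 for the next 100000, and so on,
+# while the discriminator learning rate stays constant at dis_lr_base.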
+
+def _distort_images(images, downscale_size, upscale_size):
+ downscaled = tf.image.resize_area(images, [downscale_size] * 2)
+ upscaled = tf.image.resize_area(downscaled, [upscale_size] * 2)
+ return upscaled
+
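+# Example (illustrative sketch): the generator inputs are simply blurred
+# copies of the real patches. With the default patch_size of 32, main()
+# effectively does
+#   distorted = _distort_images(real_images, downscale_size=16,
+#                               upscale_size=32)
+# so the network is trained to restore the detail removed by the 2x
+# box down-scaling.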
+
+if __name__ == '__main__':
+ tf.app.run()
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/pix2pix/train_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/pix2pix/train_test.py"
new file mode 100644
index 00000000..02c6d5b9
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/pix2pix/train_test.py"
@@ -0,0 +1,52 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for pix2pix.train."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+from absl import flags
+from absl.testing import parameterized
+import numpy as np
+import tensorflow as tf
+import train
+
+FLAGS = flags.FLAGS
+mock = tf.test.mock
+
+
+class TrainTest(tf.test.TestCase, parameterized.TestCase):
+
+ @parameterized.named_parameters(
+ ('NoAdversarialLoss', 0.0),
+ ('AdversarialLoss', 1.0))
+ def test_build_graph(self, weight_factor):
+ FLAGS.max_number_of_steps = 0
+ FLAGS.weight_factor = weight_factor
+ FLAGS.batch_size = 9
+ FLAGS.patch_size = 32
+
+ mock_imgs = np.zeros(
+ [FLAGS.batch_size, FLAGS.patch_size, FLAGS.patch_size, 3],
+ dtype=np.float32)
+ with mock.patch.object(train, 'data_provider') as mock_data_provider:
+ mock_data_provider.provide_data.return_value = mock_imgs
+ train.main(None)
+
+if __name__ == '__main__':
+ tf.test.main()
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/data_provider.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/data_provider.py"
new file mode 100644
index 00000000..6e1563bc
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/data_provider.py"
@@ -0,0 +1,189 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Loading and preprocessing image data."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import tensorflow as tf
+
+from slim.datasets import dataset_factory as datasets
+
+
+def normalize_image(image):
+ """Rescales image from range [0, 255] to [-1, 1]."""
+ return (tf.to_float(image) - 127.5) / 127.5
+
+
+def sample_patch(image, patch_height, patch_width, colors):
+ """Crops image to the desired aspect ratio shape and resizes it.
+
+  The crop is taken from the center of the image and matches the
+  patch_height : patch_width aspect ratio; for a square patch and an
+  H x W image this is a centered min(H, W) x min(H, W) square.
+
+ Args:
+ image: A 3D `Tensor` of HWC format.
+ patch_height: A Python integer. The output images height.
+ patch_width: A Python integer. The output images width.
+    colors: Number of output image channels.
+
+ Returns:
+ A 3D `Tensor` of HWC format with shape [patch_height, patch_width, colors].
+ """
+ image_shape = tf.shape(image)
+ h, w = image_shape[0], image_shape[1]
+
+ h_major_target_h = h
+ h_major_target_w = tf.maximum(1, tf.to_int32(
+ (h * patch_width) / patch_height))
+ w_major_target_h = tf.maximum(1, tf.to_int32(
+ (w * patch_height) / patch_width))
+ w_major_target_w = w
+ target_hw = tf.cond(
+ h_major_target_w <= w,
+ lambda: tf.convert_to_tensor([h_major_target_h, h_major_target_w]),
+ lambda: tf.convert_to_tensor([w_major_target_h, w_major_target_w]))
+ # Cut a patch of shape (target_h, target_w).
+ image = tf.image.resize_image_with_crop_or_pad(image, target_hw[0],
+ target_hw[1])
+ # Resize the cropped image to (patch_h, patch_w).
+ image = tf.image.resize_images([image], [patch_height, patch_width])[0]
+ # Force number of channels: repeat the channel dimension enough
+ # number of times and then slice the first `colors` channels.
+ num_repeats = tf.to_int32(tf.ceil(colors / image_shape[2]))
+ image = tf.tile(image, [1, 1, num_repeats])
+ image = tf.slice(image, [0, 0, 0], [-1, -1, colors])
+ image.set_shape([patch_height, patch_width, colors])
+ return image
+
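+# Example (illustrative): for a 100 x 60 HWC image and patch_height =
+# patch_width = 32, a centered 60 x 60 crop is taken and resized to 32 x 32;
+# a single-channel input requested with colors=3 has its channel tiled to
+# three channels, e.g.
+#   patch = sample_patch(image, patch_height=32, patch_width=32, colors=3)
+#   # patch has static shape [32, 32, 3]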
+
+def batch_images(image, patch_height, patch_width, colors, batch_size, shuffle,
+ num_threads):
+ """Creates a batch of images.
+
+ Args:
+ image: A 3D `Tensor` of HWC format.
+ patch_height: A Python integer. The output images height.
+ patch_width: A Python integer. The output images width.
+ colors: Number of channels.
+    batch_size: The number of images in each minibatch.
+ shuffle: Whether to shuffle the read images.
+ num_threads: Number of prefetching threads.
+
+ Returns:
+    A float `Tensor` with shape [batch_size, patch_height, patch_width, colors]
+ representing a batch of images.
+ """
+ image = sample_patch(image, patch_height, patch_width, colors)
+ images = None
+ if shuffle:
+ images = tf.train.shuffle_batch(
+ [image],
+ batch_size=batch_size,
+ num_threads=num_threads,
+ capacity=5 * batch_size,
+ min_after_dequeue=batch_size)
+ else:
+ images = tf.train.batch(
+ [image],
+ batch_size=batch_size,
+ num_threads=1, # no threads so it's deterministic
+ capacity=5 * batch_size)
+ images.set_shape([batch_size, patch_height, patch_width, colors])
+ return images
+
+
+def provide_data(dataset_name='cifar10',
+ split_name='train',
+                 dataset_dir=None,
+ batch_size=32,
+ shuffle=True,
+ num_threads=1,
+ patch_height=32,
+ patch_width=32,
+ colors=3):
+ """Provides a batch of image data from predefined dataset.
+
+ Args:
+ dataset_name: A string of dataset name. Defaults to 'cifar10'.
+ split_name: Either 'train' or 'validation'. Defaults to 'train'.
+ dataset_dir: The directory where the data can be found. If `None`, use
+ default.
+ batch_size: The number of images in each minibatch. Defaults to 32.
+ shuffle: Whether to shuffle the read images. Defaults to True.
+ num_threads: Number of prefetching threads. Defaults to 1.
+ patch_height: A Python integer. The read images height. Defaults to 32.
+ patch_width: A Python integer. The read images width. Defaults to 32.
+ colors: Number of channels. Defaults to 3.
+
+ Returns:
+    A float `Tensor` with shape [batch_size, patch_height, patch_width, colors]
+ representing a batch of images.
+ """
+ dataset = datasets.get_dataset(
+ dataset_name, split_name, dataset_dir=dataset_dir)
+ provider = tf.contrib.slim.dataset_data_provider.DatasetDataProvider(
+ dataset,
+ num_readers=1,
+ common_queue_capacity=5 * batch_size,
+ common_queue_min=batch_size,
+ shuffle=shuffle)
+ return batch_images(
+ image=normalize_image(provider.get(['image'])[0]),
+ patch_height=patch_height,
+ patch_width=patch_width,
+ colors=colors,
+ batch_size=batch_size,
+ shuffle=shuffle,
+ num_threads=num_threads)
+
+
+def provide_data_from_image_files(file_pattern,
+ batch_size=32,
+ shuffle=True,
+ num_threads=1,
+ patch_height=32,
+ patch_width=32,
+ colors=3):
+ """Provides a batch of image data from image files.
+
+ Args:
+ file_pattern: A file pattern (glob), or 1D `Tensor` of file patterns.
+ batch_size: The number of images in each minibatch. Defaults to 32.
+ shuffle: Whether to shuffle the read images. Defaults to True.
+ num_threads: Number of prefetching threads. Defaults to 1.
+ patch_height: A Python integer. The read images height. Defaults to 32.
+ patch_width: A Python integer. The read images width. Defaults to 32.
+ colors: Number of channels. Defaults to 3.
+
+ Returns:
+    A float `Tensor` of shape [batch_size, patch_height, patch_width, colors]
+ representing a batch of images.
+ """
+ filename_queue = tf.train.string_input_producer(
+ tf.train.match_filenames_once(file_pattern),
+ shuffle=shuffle,
+ capacity=5 * batch_size)
+ _, image_bytes = tf.WholeFileReader().read(filename_queue)
+ return batch_images(
+ image=normalize_image(tf.image.decode_image(image_bytes)),
+ patch_height=patch_height,
+ patch_width=patch_width,
+ colors=colors,
+ batch_size=batch_size,
+ shuffle=shuffle,
+ num_threads=num_threads)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/data_provider_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/data_provider_test.py"
new file mode 100644
index 00000000..86dc93a6
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/data_provider_test.py"
@@ -0,0 +1,136 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+
+
+from absl import flags
+import numpy as np
+import tensorflow as tf
+
+import data_provider
+
+
+class DataProviderTest(tf.test.TestCase):
+
+ def setUp(self):
+ super(DataProviderTest, self).setUp()
+ self.testdata_dir = os.path.join(
+ flags.FLAGS.test_srcdir,
+ 'google3/third_party/tensorflow_models/gan/progressive_gan/testdata/')
+
+ def test_normalize_image(self):
+ image_np = np.asarray([0, 255, 210], dtype=np.uint8)
+ normalized_image = data_provider.normalize_image(tf.constant(image_np))
+ self.assertEqual(normalized_image.dtype, tf.float32)
+ self.assertEqual(normalized_image.shape.as_list(), [3])
+ with self.test_session(use_gpu=True) as sess:
+ normalized_image_np = sess.run(normalized_image)
+ self.assertNDArrayNear(normalized_image_np, [-1, 1, 0.6470588235], 1.0e-6)
+
+ def test_sample_patch_large_patch_returns_upscaled_image(self):
+ image_np = np.reshape(np.arange(2 * 2), [2, 2, 1])
+ image = tf.constant(image_np, dtype=tf.float32)
+ image_patch = data_provider.sample_patch(
+ image, patch_height=3, patch_width=3, colors=1)
+ with self.test_session(use_gpu=True) as sess:
+ image_patch_np = sess.run(image_patch)
+ expected_np = np.asarray([[[0.], [0.66666669], [1.]], [[1.33333337], [2.],
+ [2.33333349]],
+ [[2.], [2.66666675], [3.]]])
+ self.assertNDArrayNear(image_patch_np, expected_np, 1.0e-6)
+
+ def test_sample_patch_small_patch_returns_downscaled_image(self):
+ image_np = np.reshape(np.arange(3 * 3), [3, 3, 1])
+ image = tf.constant(image_np, dtype=tf.float32)
+ image_patch = data_provider.sample_patch(
+ image, patch_height=2, patch_width=2, colors=1)
+ with self.test_session(use_gpu=True) as sess:
+ image_patch_np = sess.run(image_patch)
+ expected_np = np.asarray([[[0.], [1.5]], [[4.5], [6.]]])
+ self.assertNDArrayNear(image_patch_np, expected_np, 1.0e-6)
+
+ def test_batch_images(self):
+ image_np = np.reshape(np.arange(3 * 3), [3, 3, 1])
+ image = tf.constant(image_np, dtype=tf.float32)
+ images = data_provider.batch_images(
+ image,
+ patch_height=2,
+ patch_width=2,
+ colors=1,
+ batch_size=2,
+ shuffle=False,
+ num_threads=1)
+ with self.test_session(use_gpu=True) as sess:
+ with tf.contrib.slim.queues.QueueRunners(sess):
+ images_np = sess.run(images)
+ expected_np = np.asarray([[[[0.], [1.5]], [[4.5], [6.]]], [[[0.], [1.5]],
+ [[4.5], [6.]]]])
+ self.assertNDArrayNear(images_np, expected_np, 1.0e-6)
+
+ def test_provide_data(self):
+ images = data_provider.provide_data(
+ 'mnist',
+ 'train',
+ dataset_dir=self.testdata_dir,
+ batch_size=2,
+ shuffle=False,
+ patch_height=3,
+ patch_width=3,
+ colors=1)
+ self.assertEqual(images.shape.as_list(), [2, 3, 3, 1])
+ with self.test_session(use_gpu=True) as sess:
+ with tf.contrib.slim.queues.QueueRunners(sess):
+ images_np = sess.run(images)
+ self.assertEqual(images_np.shape, (2, 3, 3, 1))
+
+ def test_provide_data_from_image_files_a_single_pattern(self):
+ file_pattern = os.path.join(self.testdata_dir, '*.jpg')
+ images = data_provider.provide_data_from_image_files(
+ file_pattern,
+ batch_size=2,
+ shuffle=False,
+ patch_height=3,
+ patch_width=3,
+ colors=1)
+ self.assertEqual(images.shape.as_list(), [2, 3, 3, 1])
+ with self.test_session(use_gpu=True) as sess:
+ sess.run(tf.local_variables_initializer())
+ with tf.contrib.slim.queues.QueueRunners(sess):
+ images_np = sess.run(images)
+ self.assertEqual(images_np.shape, (2, 3, 3, 1))
+
+ def test_provide_data_from_image_files_a_list_of_patterns(self):
+ file_pattern = [os.path.join(self.testdata_dir, '*.jpg')]
+ images = data_provider.provide_data_from_image_files(
+ file_pattern,
+ batch_size=2,
+ shuffle=False,
+ patch_height=3,
+ patch_width=3,
+ colors=1)
+ self.assertEqual(images.shape.as_list(), [2, 3, 3, 1])
+ with self.test_session(use_gpu=True) as sess:
+ sess.run(tf.local_variables_initializer())
+ with tf.contrib.slim.queues.QueueRunners(sess):
+ images_np = sess.run(images)
+ self.assertEqual(images_np.shape, (2, 3, 3, 1))
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/layers.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/layers.py"
new file mode 100644
index 00000000..5bf0d804
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/layers.py"
@@ -0,0 +1,293 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Layers for a progressive GAN model.
+
+This module contains basic building blocks to build a progressive GAN model.
+
+See https://arxiv.org/abs/1710.10196 for details about the model.
+
+See https://github.com/tkarras/progressive_growing_of_gans for the original
+theano implementation.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import numpy as np
+import tensorflow as tf
+
+
+def pixel_norm(images, epsilon=1.0e-8):
+ """Pixel normalization.
+
+ For each pixel a[i,j,k] of image in HWC format, normalize its value to
+ b[i,j,k] = a[i,j,k] / SQRT(SUM_k(a[i,j,k]^2) / C + eps).
+
+ Args:
+ images: A 4D `Tensor` of NHWC format.
+ epsilon: A small positive number to avoid division by zero.
+
+ Returns:
+ A 4D `Tensor` with pixel-wise normalized channels.
+ """
+ return images * tf.rsqrt(
+ tf.reduce_mean(tf.square(images), axis=3, keepdims=True) + epsilon)
+
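+# Worked example (illustrative): a pixel with channel values [3., 4.] (C = 2)
+# has normalizer sqrt((9 + 16) / 2 + epsilon) ~= 3.5355, so it is mapped to
+# approximately [0.8485, 1.1314]; every pixel's channel vector ends up with
+# (roughly) unit root-mean-square magnitude.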
+
+def _get_validated_scale(scale):
+ """Returns the scale guaranteed to be a positive integer."""
+ scale = int(scale)
+ if scale <= 0:
+ raise ValueError('`scale` must be a positive integer.')
+ return scale
+
+
+def downscale(images, scale):
+ """Box downscaling of images.
+
+ Args:
+ images: A 4D `Tensor` in NHWC format.
+ scale: A positive integer scale.
+
+ Returns:
+ A 4D `Tensor` of `images` down scaled by a factor `scale`.
+
+ Raises:
+ ValueError: If `scale` is not a positive integer.
+ """
+ scale = _get_validated_scale(scale)
+ if scale == 1:
+ return images
+ return tf.nn.avg_pool(
+ images,
+ ksize=[1, scale, scale, 1],
+ strides=[1, scale, scale, 1],
+ padding='VALID')
+
+
+def upscale(images, scale):
+ """Box upscaling (also called nearest neighbors) of images.
+
+ Args:
+ images: A 4D `Tensor` in NHWC format.
+ scale: A positive integer scale.
+
+ Returns:
+ A 4D `Tensor` of `images` up scaled by a factor `scale`.
+
+ Raises:
+ ValueError: If `scale` is not a positive integer.
+ """
+ scale = _get_validated_scale(scale)
+ if scale == 1:
+ return images
+ return tf.batch_to_space(
+ tf.tile(images, [scale**2, 1, 1, 1]),
+ crops=[[0, 0], [0, 0]],
+ block_size=scale)
+
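+# Example (illustrative, mirrors layers_test): upscaling replicates each
+# pixel into a scale x scale block (nearest-neighbor upsampling), e.g.
+#   x = tf.constant([[[[1., 2., 3.]]]])  # shape [1, 1, 1, 3]
+#   y = upscale(x, 2)                    # shape [1, 2, 2, 3]
+# where every spatial position of y equals [1., 2., 3.].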
+
+def minibatch_mean_stddev(x):
+ """Computes the standard deviation average.
+
+ This is used by the discriminator as a form of batch discrimination.
+
+ Args:
+ x: A `Tensor` for which to compute the standard deviation average. The first
+ dimension must be batch size.
+
+ Returns:
+    A scalar `Tensor` which is the mean standard deviation of `x` across
+    the batch.
+ """
+ mean, var = tf.nn.moments(x, axes=[0])
+ del mean
+ return tf.reduce_mean(tf.sqrt(var))
+
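+# Worked example (illustrative): for x = [[1.], [3.]] (a batch of two
+# scalars), the per-feature variance across the batch is 1.0, so the result
+# is sqrt(1.0) = 1.0. The discriminator appends this statistic to its
+# features so it can detect a generator collapsing to low-variance samples.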
+
+def scalar_concat(tensor, scalar):
+ """Concatenates a scalar to the last dimension of a tensor.
+
+ Args:
+ tensor: A `Tensor`.
+ scalar: a scalar `Tensor` to concatenate to tensor `tensor`.
+
+ Returns:
+ A `Tensor`. If `tensor` has shape [...,N], the result R has shape
+ [...,N+1] and R[...,N] = scalar.
+
+ Raises:
+ ValueError: If `tensor` is a scalar `Tensor`.
+ """
+ ndims = tensor.shape.ndims
+ if ndims < 1:
+ raise ValueError('`tensor` must have number of dimensions >= 1.')
+ shape = tf.shape(tensor)
+ return tf.concat(
+ [tensor, tf.ones([shape[i] for i in range(ndims - 1)] + [1]) * scalar],
+ axis=ndims - 1)
+
+
+def he_initializer_scale(shape, slope=1.0):
+ """The scale of He neural network initializer.
+
+ Args:
+ shape: A list of ints representing the dimensions of a tensor.
+ slope: A float representing the slope of the ReLu following the layer.
+
+ Returns:
+ A float of he initializer scale.
+ """
+ fan_in = np.prod(shape[:-1])
+ return np.sqrt(2. / ((1. + slope**2) * fan_in))
+
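+# Worked example (illustrative, mirrors layers_test): for a kernel of shape
+# [3, 4, 5, 6] the fan-in is 3 * 4 * 5 = 60, so slope=1.0 gives
+# sqrt(2 / (2 * 60)) ~= 0.1291 and slope=0.0 gives sqrt(2 / 60) ~= 0.1826.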
+
+def _custom_layer_impl(apply_kernel, kernel_shape, bias_shape, activation,
+ he_initializer_slope, use_weight_scaling):
+ """Helper function to implement custom_xxx layer.
+
+ Args:
+ apply_kernel: A function that transforms kernel to output.
+ kernel_shape: An integer tuple or list of the kernel shape.
+ bias_shape: An integer tuple or list of the bias shape.
+ activation: An activation function to be applied. None means no
+ activation.
+ he_initializer_slope: A float slope for the He initializer.
+ use_weight_scaling: Whether to apply weight scaling.
+
+ Returns:
+ A `Tensor` computed as apply_kernel(kernel) + bias where kernel is a
+ `Tensor` variable with shape `kernel_shape`, bias is a `Tensor` variable
+ with shape `bias_shape`.
+ """
+ kernel_scale = he_initializer_scale(kernel_shape, he_initializer_slope)
+ init_scale, post_scale = kernel_scale, 1.0
+ if use_weight_scaling:
+ init_scale, post_scale = post_scale, init_scale
+
+ kernel_initializer = tf.random_normal_initializer(stddev=init_scale)
+
+ bias = tf.get_variable(
+ 'bias', shape=bias_shape, initializer=tf.zeros_initializer())
+
+ output = post_scale * apply_kernel(kernel_shape, kernel_initializer) + bias
+
+ if activation is not None:
+ output = activation(output)
+ return output
+
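+# Note on the weight scaling trick (illustrative): with use_weight_scaling
+# the kernel is initialized from N(0, 1) and the He scale is applied at run
+# time (output = kernel_scale * apply_kernel(...) + bias); without it the He
+# scale goes into the initializer and the runtime multiplier is 1.0. Both
+# give the same initial output distribution, but the runtime variant
+# equalizes effective learning rates across layers (see the paper above).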
+
+def custom_conv2d(x,
+ filters,
+ kernel_size,
+ strides=(1, 1),
+ padding='SAME',
+ activation=None,
+ he_initializer_slope=1.0,
+ use_weight_scaling=True,
+ scope='custom_conv2d',
+ reuse=None):
+ """Custom conv2d layer.
+
+  In comparison with tf.layers.conv2d, this implementation uses the He
+  initializer to initialize the convolutional kernel and the weight scaling
+  trick (if `use_weight_scaling` is True) to equalize learning rates. See
+ https://arxiv.org/abs/1710.10196 for more details.
+
+ Args:
+ x: A `Tensor` of NHWC format.
+ filters: An int of output channels.
+    kernel_size: An integer or an int tuple of [kernel_height, kernel_width].
+ strides: A list of strides.
+ padding: One of "VALID" or "SAME".
+ activation: An activation function to be applied. None means no
+ activation. Defaults to None.
+ he_initializer_slope: A float slope for the He initializer. Defaults to 1.0.
+ use_weight_scaling: Whether to apply weight scaling. Defaults to True.
+ scope: A string or variable scope.
+ reuse: Whether to reuse the weights. Defaults to None.
+
+ Returns:
+ A `Tensor` of NHWC format where the last dimension has size `filters`.
+ """
+ if not isinstance(kernel_size, (list, tuple)):
+ kernel_size = [kernel_size] * 2
+ kernel_size = list(kernel_size)
+
+ def _apply_kernel(kernel_shape, kernel_initializer):
+ return tf.layers.conv2d(
+ x,
+ filters=filters,
+ kernel_size=kernel_shape[0:2],
+ strides=strides,
+ padding=padding,
+ use_bias=False,
+ kernel_initializer=kernel_initializer)
+
+ with tf.variable_scope(scope, reuse=reuse):
+ return _custom_layer_impl(
+ _apply_kernel,
+ kernel_shape=kernel_size + [x.shape.as_list()[3], filters],
+ bias_shape=(filters,),
+ activation=activation,
+ he_initializer_slope=he_initializer_slope,
+ use_weight_scaling=use_weight_scaling)
+
+
+def custom_dense(x,
+ units,
+ activation=None,
+ he_initializer_slope=1.0,
+ use_weight_scaling=True,
+ scope='custom_dense',
+ reuse=None):
+ """Custom dense layer.
+
+  In comparison with tf.layers.dense, this implementation uses the He
+ initializer to initialize weights and the weight scaling trick
+ (if `use_weight_scaling` is True) to equalize learning rates. See
+ https://arxiv.org/abs/1710.10196 for more details.
+
+ Args:
+ x: A `Tensor`.
+ units: An int of the last dimension size of output.
+ activation: An activation function to be applied. None means no
+ activation. Defaults to None.
+ he_initializer_slope: A float slope for the He initializer. Defaults to 1.0.
+ use_weight_scaling: Whether to apply weight scaling. Defaults to True.
+ scope: A string or variable scope.
+ reuse: Whether to reuse the weights. Defaults to None.
+
+ Returns:
+ A `Tensor` where the last dimension has size `units`.
+ """
+ x = tf.contrib.layers.flatten(x)
+
+ def _apply_kernel(kernel_shape, kernel_initializer):
+ return tf.layers.dense(
+ x,
+ kernel_shape[1],
+ use_bias=False,
+ kernel_initializer=kernel_initializer)
+
+ with tf.variable_scope(scope, reuse=reuse):
+ return _custom_layer_impl(
+ _apply_kernel,
+ kernel_shape=(x.shape.as_list()[-1], units),
+ bias_shape=(units,),
+ activation=activation,
+ he_initializer_slope=he_initializer_slope,
+ use_weight_scaling=use_weight_scaling)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/layers_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/layers_test.py"
new file mode 100644
index 00000000..73d8fecf
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/layers_test.py"
@@ -0,0 +1,282 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import numpy as np
+import tensorflow as tf
+
+import layers
+
+mock = tf.test.mock
+
+
+def dummy_apply_kernel(kernel_shape, kernel_initializer):
+ kernel = tf.get_variable(
+ 'kernel', shape=kernel_shape, initializer=kernel_initializer)
+ return tf.reduce_sum(kernel) + 1.5
+
+
+class LayersTest(tf.test.TestCase):
+
+ def test_pixel_norm_4d_images_returns_channel_normalized_images(self):
+ x = tf.constant(
+ [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]],
+ [[[0, 0, 0], [-1, -2, -3]], [[1, -2, 2], [2, 5, 3]]]],
+ dtype=tf.float32)
+ with self.test_session(use_gpu=True) as sess:
+ output_np = sess.run(layers.pixel_norm(x))
+
+ expected_np = [[[[0.46291006, 0.92582011, 1.38873017],
+ [0.78954202, 0.98692751, 1.18431306]],
+ [[0.87047803, 0.99483204, 1.11918604],
+ [0.90659684, 0.99725652, 1.08791625]]],
+ [[[0., 0., 0.], [-0.46291006, -0.92582011, -1.38873017]],
+ [[0.57735026, -1.15470052, 1.15470052],
+ [0.56195146, 1.40487862, 0.84292722]]]]
+ self.assertNDArrayNear(output_np, expected_np, 1.0e-5)
+
+ def test_get_validated_scale_invalid_scale_throws_exception(self):
+ with self.assertRaises(ValueError):
+ layers._get_validated_scale(0)
+
+ def test_get_validated_scale_float_scale_returns_integer(self):
+ self.assertEqual(layers._get_validated_scale(5.5), 5)
+
+ def test_downscale_invalid_scale_throws_exception(self):
+ with self.assertRaises(ValueError):
+ layers.downscale(tf.constant([]), -1)
+
+ def test_downscale_4d_images_returns_downscaled_images(self):
+ x_np = np.array(
+ [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]],
+ [[[0, 0, 0], [-1, -2, -3]], [[1, -2, 2], [2, 5, 3]]]],
+ dtype=np.float32)
+ with self.test_session(use_gpu=True) as sess:
+ x1_np, x2_np = sess.run(
+ [layers.downscale(tf.constant(x_np), n) for n in [1, 2]])
+
+ expected2_np = [[[[5.5, 6.5, 7.5]]], [[[0.5, 0.25, 0.5]]]]
+
+ self.assertNDArrayNear(x1_np, x_np, 1.0e-5)
+ self.assertNDArrayNear(x2_np, expected2_np, 1.0e-5)
+
+ def test_upscale_invalid_scale_throws_exception(self):
+ with self.assertRaises(ValueError):
+      layers.upscale(tf.constant([]), -1)
+
+ def test_upscale_4d_images_returns_upscaled_images(self):
+ x_np = np.array([[[[1, 2, 3]]], [[[4, 5, 6]]]], dtype=np.float32)
+ with self.test_session(use_gpu=True) as sess:
+ x1_np, x2_np = sess.run(
+ [layers.upscale(tf.constant(x_np), n) for n in [1, 2]])
+
+ expected2_np = [[[[1, 2, 3], [1, 2, 3]], [[1, 2, 3], [1, 2, 3]]],
+ [[[4, 5, 6], [4, 5, 6]], [[4, 5, 6], [4, 5, 6]]]]
+
+ self.assertNDArrayNear(x1_np, x_np, 1.0e-5)
+ self.assertNDArrayNear(x2_np, expected2_np, 1.0e-5)
+
+ def test_minibatch_mean_stddev_4d_images_returns_scalar(self):
+ x = tf.constant(
+ [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]],
+ [[[0, 0, 0], [-1, -2, -3]], [[1, -2, 2], [2, 5, 3]]]],
+ dtype=tf.float32)
+ with self.test_session(use_gpu=True) as sess:
+ output_np = sess.run(layers.minibatch_mean_stddev(x))
+
+ self.assertAlmostEqual(output_np, 3.0416667, 5)
+
+ def test_scalar_concat_invalid_input_throws_exception(self):
+ with self.assertRaises(ValueError):
+ layers.scalar_concat(tf.constant(1.2), 2.0)
+
+ def test_scalar_concat_4d_images_and_scalar(self):
+ x = tf.constant(
+ [[[[1, 2], [4, 5]], [[7, 8], [10, 11]]], [[[0, 0], [-1, -2]],
+ [[1, -2], [2, 5]]]],
+ dtype=tf.float32)
+ with self.test_session(use_gpu=True) as sess:
+ output_np = sess.run(layers.scalar_concat(x, 7))
+
+ expected_np = [[[[1, 2, 7], [4, 5, 7]], [[7, 8, 7], [10, 11, 7]]],
+ [[[0, 0, 7], [-1, -2, 7]], [[1, -2, 7], [2, 5, 7]]]]
+
+ self.assertNDArrayNear(output_np, expected_np, 1.0e-5)
+
+ def test_he_initializer_scale_slope_linear(self):
+ self.assertAlmostEqual(
+ layers.he_initializer_scale([3, 4, 5, 6], 1.0), 0.1290994, 5)
+
+ def test_he_initializer_scale_slope_relu(self):
+ self.assertAlmostEqual(
+ layers.he_initializer_scale([3, 4, 5, 6], 0.0), 0.1825742, 5)
+
+ @mock.patch.object(tf, 'random_normal_initializer', autospec=True)
+ @mock.patch.object(tf, 'zeros_initializer', autospec=True)
+ def test_custom_layer_impl_with_weight_scaling(
+ self, mock_zeros_initializer, mock_random_normal_initializer):
+ mock_zeros_initializer.return_value = tf.constant_initializer(1.0)
+ mock_random_normal_initializer.return_value = tf.constant_initializer(3.0)
+ output = layers._custom_layer_impl(
+ apply_kernel=dummy_apply_kernel,
+ kernel_shape=(25, 6),
+ bias_shape=(),
+ activation=lambda x: 2.0 * x,
+ he_initializer_slope=1.0,
+ use_weight_scaling=True)
+ mock_zeros_initializer.assert_called_once_with()
+ mock_random_normal_initializer.assert_called_once_with(stddev=1.0)
+ with self.test_session(use_gpu=True) as sess:
+ sess.run(tf.global_variables_initializer())
+ output_np = sess.run(output)
+
+ self.assertAlmostEqual(output_np, 182.6, 3)
+
+ @mock.patch.object(tf, 'random_normal_initializer', autospec=True)
+ @mock.patch.object(tf, 'zeros_initializer', autospec=True)
+ def test_custom_layer_impl_no_weight_scaling(self, mock_zeros_initializer,
+ mock_random_normal_initializer):
+ mock_zeros_initializer.return_value = tf.constant_initializer(1.0)
+ mock_random_normal_initializer.return_value = tf.constant_initializer(3.0)
+ output = layers._custom_layer_impl(
+ apply_kernel=dummy_apply_kernel,
+ kernel_shape=(25, 6),
+ bias_shape=(),
+ activation=lambda x: 2.0 * x,
+ he_initializer_slope=1.0,
+ use_weight_scaling=False)
+ mock_zeros_initializer.assert_called_once_with()
+ mock_random_normal_initializer.assert_called_once_with(stddev=0.2)
+ with self.test_session(use_gpu=True) as sess:
+ sess.run(tf.global_variables_initializer())
+ output_np = sess.run(output)
+
+ self.assertAlmostEqual(output_np, 905.0, 3)
+
+ @mock.patch.object(tf.layers, 'conv2d', autospec=True)
+ def test_custom_conv2d_passes_conv2d_options(self, mock_conv2d):
+ x = tf.constant(
+ [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]],
+ [[[0, 0, 0], [-1, -2, -3]], [[1, -2, 2], [2, 5, 3]]]],
+ dtype=tf.float32)
+ layers.custom_conv2d(x, 1, 2)
+ mock_conv2d.assert_called_once_with(
+ x,
+ filters=1,
+ kernel_size=[2, 2],
+ strides=(1, 1),
+ padding='SAME',
+ use_bias=False,
+ kernel_initializer=mock.ANY)
+
+ @mock.patch.object(layers, '_custom_layer_impl', autospec=True)
+ def test_custom_conv2d_passes_custom_layer_options(self,
+ mock_custom_layer_impl):
+ x = tf.constant(
+ [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]],
+ [[[0, 0, 0], [-1, -2, -3]], [[1, -2, 2], [2, 5, 3]]]],
+ dtype=tf.float32)
+ layers.custom_conv2d(x, 1, 2)
+ mock_custom_layer_impl.assert_called_once_with(
+ mock.ANY,
+ kernel_shape=[2, 2, 3, 1],
+ bias_shape=(1,),
+ activation=None,
+ he_initializer_slope=1.0,
+ use_weight_scaling=True)
+
+ @mock.patch.object(tf, 'random_normal_initializer', autospec=True)
+ @mock.patch.object(tf, 'zeros_initializer', autospec=True)
+ def test_custom_conv2d_scalar_kernel_size(self, mock_zeros_initializer,
+ mock_random_normal_initializer):
+ mock_zeros_initializer.return_value = tf.constant_initializer(1.0)
+ mock_random_normal_initializer.return_value = tf.constant_initializer(3.0)
+ x = tf.constant(
+ [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]],
+ [[[0, 0, 0], [-1, -2, -3]], [[1, -2, 2], [2, 5, 3]]]],
+ dtype=tf.float32)
+ output = layers.custom_conv2d(x, 1, 2)
+ with self.test_session(use_gpu=True) as sess:
+ sess.run(tf.global_variables_initializer())
+ output_np = sess.run(output)
+
+ expected_np = [[[[68.54998016], [42.56921768]], [[50.36344528],
+ [29.57883835]]],
+ [[[5.33012676], [4.46410179]], [[10.52627945],
+ [9.66025352]]]]
+ self.assertNDArrayNear(output_np, expected_np, 1.0e-5)
+
+ @mock.patch.object(tf, 'random_normal_initializer', autospec=True)
+ @mock.patch.object(tf, 'zeros_initializer', autospec=True)
+ def test_custom_conv2d_list_kernel_size(self, mock_zeros_initializer,
+ mock_random_normal_initializer):
+ mock_zeros_initializer.return_value = tf.constant_initializer(1.0)
+ mock_random_normal_initializer.return_value = tf.constant_initializer(3.0)
+ x = tf.constant(
+ [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]],
+ [[[0, 0, 0], [-1, -2, -3]], [[1, -2, 2], [2, 5, 3]]]],
+ dtype=tf.float32)
+ output = layers.custom_conv2d(x, 1, [2, 3])
+ with self.test_session(use_gpu=True) as sess:
+ sess.run(tf.global_variables_initializer())
+ output_np = sess.run(output)
+
+ expected_np = [[
+ [[56.15432739], [56.15432739]],
+ [[41.30508804], [41.30508804]],
+ ], [[[4.53553391], [4.53553391]], [[8.7781744], [8.7781744]]]]
+ self.assertNDArrayNear(output_np, expected_np, 1.0e-5)
+
+ @mock.patch.object(layers, '_custom_layer_impl', autospec=True)
+ def test_custom_dense_passes_custom_layer_options(self,
+ mock_custom_layer_impl):
+ x = tf.constant(
+ [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]],
+ [[[0, 0, 0], [-1, -2, -3]], [[1, -2, 2], [2, 5, 3]]]],
+ dtype=tf.float32)
+ layers.custom_dense(x, 3)
+ mock_custom_layer_impl.assert_called_once_with(
+ mock.ANY,
+ kernel_shape=(12, 3),
+ bias_shape=(3,),
+ activation=None,
+ he_initializer_slope=1.0,
+ use_weight_scaling=True)
+
+ @mock.patch.object(tf, 'random_normal_initializer', autospec=True)
+ @mock.patch.object(tf, 'zeros_initializer', autospec=True)
+ def test_custom_dense_output_is_correct(self, mock_zeros_initializer,
+ mock_random_normal_initializer):
+ mock_zeros_initializer.return_value = tf.constant_initializer(1.0)
+ mock_random_normal_initializer.return_value = tf.constant_initializer(3.0)
+ x = tf.constant(
+ [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]],
+ [[[0, 0, 0], [-1, -2, -3]], [[1, -2, 2], [2, 5, 3]]]],
+ dtype=tf.float32)
+ output = layers.custom_dense(x, 3)
+ with self.test_session(use_gpu=True) as sess:
+ sess.run(tf.global_variables_initializer())
+ output_np = sess.run(output)
+
+ expected_np = [[68.54998016, 68.54998016, 68.54998016],
+ [5.33012676, 5.33012676, 5.33012676]]
+ self.assertNDArrayNear(output_np, expected_np, 1.0e-5)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/networks.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/networks.py"
new file mode 100644
index 00000000..0561ea65
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/networks.py"
@@ -0,0 +1,412 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Generator and discriminator for a progressive GAN model.
+
+See https://arxiv.org/abs/1710.10196 for details about the model.
+
+See https://github.com/tkarras/progressive_growing_of_gans for the original
+theano implementation.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import math
+
+import tensorflow as tf
+
+import layers
+
+
+class ResolutionSchedule(object):
+ """Image resolution upscaling schedule."""
+
+ def __init__(self, start_resolutions=(4, 4), scale_base=2, num_resolutions=4):
+ """Initializer.
+
+ Args:
+      start_resolutions: A tuple of integers in HxW format for start image
+ resolutions. Defaults to (4, 4).
+ scale_base: An integer of resolution base multiplier. Defaults to 2.
+ num_resolutions: An integer of how many progressive resolutions (including
+ `start_resolutions`). Defaults to 4.
+ """
+ self._start_resolutions = start_resolutions
+ self._scale_base = scale_base
+ self._num_resolutions = num_resolutions
+
+ @property
+ def start_resolutions(self):
+ return tuple(self._start_resolutions)
+
+ @property
+ def scale_base(self):
+ return self._scale_base
+
+ @property
+ def num_resolutions(self):
+ return self._num_resolutions
+
+ @property
+ def final_resolutions(self):
+ """Returns the final resolutions."""
+ return tuple([
+ r * self._scale_base**(self._num_resolutions - 1)
+ for r in self._start_resolutions
+ ])
+
+ def scale_factor(self, block_id):
+ """Returns the scale factor for network block `block_id`."""
+ if block_id < 1 or block_id > self._num_resolutions:
+ raise ValueError('`block_id` must be in [1, {}]'.format(
+ self._num_resolutions))
+ return self._scale_base**(self._num_resolutions - block_id)
+
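+# Example (illustrative): with the defaults start_resolutions=(4, 4),
+# scale_base=2 and num_resolutions=4, final_resolutions is (32, 32) and
+# scale_factor(block_id) is 2 ** (4 - block_id), i.e. 8, 4, 2, 1 for
+# blocks 1 through 4.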
+
+def block_name(block_id):
+ """Returns the scope name for the network block `block_id`."""
+ return 'progressive_gan_block_{}'.format(block_id)
+
+
+def min_total_num_images(stable_stage_num_images, transition_stage_num_images,
+ num_blocks):
+ """Returns the minimum total number of images.
+
+  Computes the minimum total number of images required to train through all
+  `num_blocks` resolution blocks.
+
+ Args:
+ stable_stage_num_images: Number of images in the stable stage.
+ transition_stage_num_images: Number of images in the transition stage.
+ num_blocks: Number of network blocks.
+
+ Returns:
+ An integer of the minimum total number of images.
+ """
+ return (num_blocks * stable_stage_num_images +
+ (num_blocks - 1) * transition_stage_num_images)
+
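+# Worked example (illustrative, mirrors networks_test): with 7 stable images,
+# 8 transition images and 4 blocks, min_total_num_images(7, 8, 4) is
+# 4 * 7 + 3 * 8 = 52.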
+
+def compute_progress(current_image_id, stable_stage_num_images,
+ transition_stage_num_images, num_blocks):
+ """Computes the training progress.
+
+  The training alternates between stable stages and transition stages.
+  The `progress` indicates the training progress, i.e. the training is at
+  - a stable stage p if progress = p
+  - a transition stage between p and p + 1 if progress = p + fraction
+  where p = 0, 1, 2, ...
+
+ Note the max value of progress is `num_blocks` - 1.
+
+ In terms of LOD (of the original implementation):
+ progress = `num_blocks` - 1 - LOD
+
+ Args:
+    current_image_id: A scalar integer `Tensor` of the current image id,
+      counted from 0.
+ stable_stage_num_images: An integer representing the number of images in
+ each stable stage.
+ transition_stage_num_images: An integer representing the number of images in
+ each transition stage.
+ num_blocks: Number of network blocks.
+
+ Returns:
+ A scalar float `Tensor` of the training progress.
+ """
+ # Note when current_image_id >= min_total_num_images - 1 (which means we
+ # are already at the highest resolution), we want to keep progress constant.
+ # Therefore, cap current_image_id here.
+ capped_current_image_id = tf.minimum(
+ current_image_id,
+ min_total_num_images(stable_stage_num_images, transition_stage_num_images,
+ num_blocks) - 1)
+
+ stage_num_images = stable_stage_num_images + transition_stage_num_images
+ progress_integer = tf.floordiv(capped_current_image_id, stage_num_images)
+ progress_fraction = tf.maximum(
+ 0.0,
+ tf.to_float(
+ tf.mod(capped_current_image_id, stage_num_images) -
+ stable_stage_num_images) / tf.to_float(transition_stage_num_images))
+ return tf.to_float(progress_integer) + progress_fraction
+
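+# Worked example (illustrative, mirrors networks_test): with
+# stable_stage_num_images=7, transition_stage_num_images=8 and num_blocks=2,
+# one full stage spans 15 images and ids are capped at 2 * 7 + 1 * 8 - 1 = 21.
+# Image id 8 gives progress (8 - 7) / 8 = 0.125, id 10 gives 0.375, and any
+# id >= 15 gives the maximum progress 1.0.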
+
+def _generator_alpha(block_id, progress):
+ """Returns the block output parameter for the generator network.
+
+ The generator has N blocks with `block_id` = 1,2,...,N. Each block
+ block_id outputs a fake data output(block_id). The generator output is a
+ linear combination of all block outputs, i.e.
+ SUM_block_id(output(block_id) * alpha(block_id, progress)) where
+ alpha(block_id, progress) = _generator_alpha(block_id, progress). Note it
+  guarantees that SUM_block_id(alpha(block_id, progress)) = 1 for any progress.
+
+ With a fixed block_id, the plot of alpha(block_id, progress) against progress
+ is a 'triangle' with its peak at (block_id - 1, 1).
+
+ Args:
+ block_id: An integer of generator block id.
+ progress: A scalar float `Tensor` of training progress.
+
+ Returns:
+ A scalar float `Tensor` of block output parameter.
+ """
+ return tf.maximum(0.0,
+ tf.minimum(progress - (block_id - 2), block_id - progress))
+
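+# Example (illustrative, mirrors networks_test): for block_id=2 the alpha is
+# max(0, min(progress, 2 - progress)): 0 at progress 0, 1 at progress 1,
+# 0.8 at progress 1.2, and 0 again for progress >= 2. The triangle peaks at
+# progress = block_id - 1.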
+
+def _discriminator_alpha(block_id, progress):
+ """Returns the block input parameter for discriminator network.
+
+ The discriminator has N blocks with `block_id` = 1,2,...,N. Each block
+ block_id accepts an
+ - input(block_id) transformed from the real data and
+ - the output of block block_id + 1, i.e. output(block_id + 1)
+ The final input is a linear combination of them,
+ i.e. alpha * input(block_id) + (1 - alpha) * output(block_id + 1)
+ where alpha = _discriminator_alpha(block_id, progress).
+
+  With a fixed block_id, alpha(block_id, progress) stays at 1
+  when progress <= block_id - 1, then decays linearly to 0 when
+ block_id - 1 < progress <= block_id, and finally stays at 0
+ when progress > block_id.
+
+ Args:
+ block_id: An integer of generator block id.
+ progress: A scalar float `Tensor` of training progress.
+
+ Returns:
+ A scalar float `Tensor` of block input parameter.
+ """
+ return tf.clip_by_value(block_id - progress, 0.0, 1.0)
+
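+# Example (illustrative): for block_id=2, alpha = clip(2 - progress, 0, 1):
+# it stays at 1 while progress <= 1, decays linearly (e.g. 0.8 at progress
+# 1.2), and reaches 0 once progress >= 2.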
+
+def blend_images(x, progress, resolution_schedule, num_blocks):
+ """Blends images of different resolutions according to `progress`.
+
+ When training `progress` is at a stable stage for resolution r, returns
+ image `x` downscaled to resolution r and then upscaled to `final_resolutions`,
+ call it x'(r).
+
+ Otherwise when training `progress` is at a transition stage from resolution
+ r to 2r, returns a linear combination of x'(r) and x'(2r).
+
+ Args:
+ x: An image `Tensor` of NHWC format with resolution `final_resolutions`.
+ progress: A scalar float `Tensor` of training progress.
+ resolution_schedule: An object of `ResolutionSchedule`.
+ num_blocks: An integer of number of blocks.
+
+ Returns:
+ An image `Tensor` which is a blend of images of different resolutions.
+ """
+ x_blend = []
+ for block_id in range(1, num_blocks + 1):
+ alpha = _generator_alpha(block_id, progress)
+ scale = resolution_schedule.scale_factor(block_id)
+ x_blend.append(alpha * layers.upscale(layers.downscale(x, scale), scale))
+ return tf.add_n(x_blend)
+
+
+def num_filters(block_id, fmap_base=4096, fmap_decay=1.0, fmap_max=256):
+ """Computes number of filters of block `block_id`."""
+ return int(min(fmap_base / math.pow(2.0, block_id * fmap_decay), fmap_max))
+
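+# Example (illustrative, mirrors networks_test): with fmap_base=4096,
+# fmap_decay=1.0 and fmap_max=256, block 1 gets min(4096 / 2, 256) = 256
+# filters and block 5 gets min(4096 / 32, 256) = 128 filters.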
+
+def generator(z,
+ progress,
+ num_filters_fn,
+ resolution_schedule,
+ num_blocks=None,
+ kernel_size=3,
+ colors=3,
+ to_rgb_activation=None,
+ scope='progressive_gan_generator',
+ reuse=None):
+ """Generator network for the progressive GAN model.
+
+ Args:
+ z: A `Tensor` of latent vector. The first dimension must be batch size.
+ progress: A scalar float `Tensor` of training progress.
+ num_filters_fn: A function that maps `block_id` to # of filters for the
+ block.
+ resolution_schedule: An object of `ResolutionSchedule`.
+ num_blocks: An integer of number of blocks. None means maximum number of
+      blocks, i.e. `resolution_schedule.num_resolutions`. Defaults to None.
+ kernel_size: An integer of convolution kernel size.
+ colors: Number of output color channels. Defaults to 3.
+ to_rgb_activation: Activation function applied when output rgb.
+ scope: A string or variable scope.
+ reuse: Whether to reuse `scope`. Defaults to None which means to inherit
+ the reuse option of the parent scope.
+
+ Returns:
+ A `Tensor` of model output and a dictionary of model end points.
+ """
+ if num_blocks is None:
+ num_blocks = resolution_schedule.num_resolutions
+
+ start_h, start_w = resolution_schedule.start_resolutions
+ final_h, final_w = resolution_schedule.final_resolutions
+
+ def _conv2d(scope, x, kernel_size, filters, padding='SAME'):
+ return layers.custom_conv2d(
+ x=x,
+ filters=filters,
+ kernel_size=kernel_size,
+ padding=padding,
+ activation=lambda x: layers.pixel_norm(tf.nn.leaky_relu(x)),
+ he_initializer_slope=0.0,
+ scope=scope)
+
+ def _to_rgb(x):
+ return layers.custom_conv2d(
+ x=x,
+ filters=colors,
+ kernel_size=1,
+ padding='SAME',
+ activation=to_rgb_activation,
+ scope='to_rgb')
+
+ end_points = {}
+
+ with tf.variable_scope(scope, reuse=reuse):
+ with tf.name_scope('input'):
+ x = tf.contrib.layers.flatten(z)
+ end_points['latent_vector'] = x
+
+ with tf.variable_scope(block_name(1)):
+ x = tf.expand_dims(tf.expand_dims(x, 1), 1)
+ x = layers.pixel_norm(x)
+      # Pad the 1 x 1 image to (2 * start_h - 1) x (2 * start_w - 1)
+ # with zeros for the next conv.
+ x = tf.pad(x, [[0] * 2, [start_h - 1] * 2, [start_w - 1] * 2, [0] * 2])
+ # The output is start_h x start_w x num_filters_fn(1).
+ x = _conv2d('conv0', x, (start_h, start_w), num_filters_fn(1), 'VALID')
+ x = _conv2d('conv1', x, kernel_size, num_filters_fn(1))
+ lods = [x]
+
+ for block_id in range(2, num_blocks + 1):
+ with tf.variable_scope(block_name(block_id)):
+ x = layers.upscale(x, resolution_schedule.scale_base)
+ x = _conv2d('conv0', x, kernel_size, num_filters_fn(block_id))
+ x = _conv2d('conv1', x, kernel_size, num_filters_fn(block_id))
+ lods.append(x)
+
+ outputs = []
+ for block_id in range(1, num_blocks + 1):
+ with tf.variable_scope(block_name(block_id)):
+ lod = _to_rgb(lods[block_id - 1])
+ scale = resolution_schedule.scale_factor(block_id)
+ lod = layers.upscale(lod, scale)
+ end_points['upscaled_rgb_{}'.format(block_id)] = lod
+
+ # alpha_i is used to replace lod_select. Note sum(alpha_i) is
+        # guaranteed to be 1.
+ alpha = _generator_alpha(block_id, progress)
+ end_points['alpha_{}'.format(block_id)] = alpha
+
+ outputs.append(lod * alpha)
+
+ predictions = tf.add_n(outputs)
+ batch_size = z.shape[0].value
+ predictions.set_shape([batch_size, final_h, final_w, colors])
+ end_points['predictions'] = predictions
+
+ return predictions, end_points
+
+
+def discriminator(x,
+ progress,
+ num_filters_fn,
+ resolution_schedule,
+ num_blocks=None,
+ kernel_size=3,
+ scope='progressive_gan_discriminator',
+ reuse=None):
+ """Discriminator network for the progressive GAN model.
+
+ Args:
+    x: A `Tensor` of NHWC format representing images of size
+      `resolution_schedule.final_resolutions`.
+ progress: A scalar float `Tensor` of training progress.
+ num_filters_fn: A function that maps `block_id` to # of filters for the
+ block.
+ resolution_schedule: An object of `ResolutionSchedule`.
+ num_blocks: An integer of number of blocks. None means maximum number of
+      blocks, i.e. `resolution_schedule.num_resolutions`. Defaults to None.
+ kernel_size: An integer of convolution kernel size.
+ scope: A string or variable scope.
+ reuse: Whether to reuse `scope`. Defaults to None which means to inherit
+ the reuse option of the parent scope.
+
+ Returns:
+ A `Tensor` of model output and a dictionary of model end points.
+ """
+ if num_blocks is None:
+ num_blocks = resolution_schedule.num_resolutions
+
+ def _conv2d(scope, x, kernel_size, filters, padding='SAME'):
+ return layers.custom_conv2d(
+ x=x,
+ filters=filters,
+ kernel_size=kernel_size,
+ padding=padding,
+ activation=tf.nn.leaky_relu,
+ he_initializer_slope=0.0,
+ scope=scope)
+
+ def _from_rgb(x, block_id):
+ return _conv2d('from_rgb', x, 1, num_filters_fn(block_id))
+
+ end_points = {}
+
+ with tf.variable_scope(scope, reuse=reuse):
+ x0 = x
+ end_points['rgb'] = x0
+
+ lods = []
+ for block_id in range(num_blocks, 0, -1):
+ with tf.variable_scope(block_name(block_id)):
+ scale = resolution_schedule.scale_factor(block_id)
+ lod = layers.downscale(x0, scale)
+ end_points['downscaled_rgb_{}'.format(block_id)] = lod
+ lod = _from_rgb(lod, block_id)
+ # alpha_i is used to replace lod_select.
+ alpha = _discriminator_alpha(block_id, progress)
+ end_points['alpha_{}'.format(block_id)] = alpha
+ lods.append((lod, alpha))
+
+ lods_iter = iter(lods)
+    x, _ = next(lods_iter)
+ for block_id in range(num_blocks, 1, -1):
+ with tf.variable_scope(block_name(block_id)):
+ x = _conv2d('conv0', x, kernel_size, num_filters_fn(block_id))
+ x = _conv2d('conv1', x, kernel_size, num_filters_fn(block_id - 1))
+ x = layers.downscale(x, resolution_schedule.scale_base)
+        lod, alpha = next(lods_iter)
+ x = alpha * lod + (1.0 - alpha) * x
+
+ with tf.variable_scope(block_name(1)):
+ x = layers.scalar_concat(x, layers.minibatch_mean_stddev(x))
+ x = _conv2d('conv0', x, kernel_size, num_filters_fn(1))
+ x = _conv2d('conv1', x, resolution_schedule.start_resolutions,
+ num_filters_fn(0), 'VALID')
+ end_points['last_conv'] = x
+ logits = layers.custom_dense(x=x, units=1, scope='logits')
+ end_points['logits'] = logits
+
+ return logits, end_points
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/networks_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/networks_test.py"
new file mode 100644
index 00000000..5e0f7374
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/networks_test.py"
@@ -0,0 +1,248 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import numpy as np
+import tensorflow as tf
+
+import layers
+import networks
+
+
+def _get_grad_norm(ys, xs):
+ """Compute 2-norm of dys / dxs."""
+ return tf.sqrt(
+ tf.add_n([tf.reduce_sum(tf.square(g)) for g in tf.gradients(ys, xs)]))
+
+
+def _num_filters_stub(block_id):
+ return networks.num_filters(block_id, 8, 1, 8)
+
+
+class NetworksTest(tf.test.TestCase):
+
+ def test_resolution_schedule_correct(self):
+ rs = networks.ResolutionSchedule(
+ start_resolutions=[5, 3], scale_base=2, num_resolutions=3)
+ self.assertEqual(rs.start_resolutions, (5, 3))
+ self.assertEqual(rs.scale_base, 2)
+ self.assertEqual(rs.num_resolutions, 3)
+ self.assertEqual(rs.final_resolutions, (20, 12))
+ self.assertEqual(rs.scale_factor(1), 4)
+ self.assertEqual(rs.scale_factor(2), 2)
+ self.assertEqual(rs.scale_factor(3), 1)
+ with self.assertRaises(ValueError):
+ rs.scale_factor(0)
+ with self.assertRaises(ValueError):
+ rs.scale_factor(4)
+
+ def test_block_name(self):
+ self.assertEqual(networks.block_name(10), 'progressive_gan_block_10')
+
+ def test_min_total_num_images(self):
+ self.assertEqual(networks.min_total_num_images(7, 8, 4), 52)
+
+ def test_compute_progress(self):
+ current_image_id_ph = tf.placeholder(tf.int32, [])
+ progress = networks.compute_progress(
+ current_image_id_ph,
+ stable_stage_num_images=7,
+ transition_stage_num_images=8,
+ num_blocks=2)
+ with self.test_session(use_gpu=True) as sess:
+ progress_output = [
+ sess.run(progress, feed_dict={current_image_id_ph: current_image_id})
+ for current_image_id in [0, 3, 6, 7, 8, 10, 15, 29, 100]
+ ]
+ self.assertArrayNear(progress_output,
+ [0.0, 0.0, 0.0, 0.0, 0.125, 0.375, 1.0, 1.0, 1.0],
+ 1.0e-6)
+
+ def test_generator_alpha(self):
+ with self.test_session(use_gpu=True) as sess:
+ alpha_fixed_block_id = [
+ sess.run(
+ networks._generator_alpha(2, tf.constant(progress, tf.float32)))
+ for progress in [0, 0.2, 1, 1.2, 2, 2.2, 3]
+ ]
+ alpha_fixed_progress = [
+ sess.run(
+ networks._generator_alpha(block_id, tf.constant(1.2, tf.float32)))
+ for block_id in range(1, 5)
+ ]
+
+ self.assertArrayNear(alpha_fixed_block_id, [0, 0.2, 1, 0.8, 0, 0, 0],
+ 1.0e-6)
+ self.assertArrayNear(alpha_fixed_progress, [0, 0.8, 0.2, 0], 1.0e-6)
+
+ def test_discriminator_alpha(self):
+ with self.test_session(use_gpu=True) as sess:
+ alpha_fixed_block_id = [
+ sess.run(
+ networks._discriminator_alpha(2, tf.constant(
+ progress, tf.float32)))
+ for progress in [0, 0.2, 1, 1.2, 2, 2.2, 3]
+ ]
+ alpha_fixed_progress = [
+ sess.run(
+ networks._discriminator_alpha(block_id,
+ tf.constant(1.2, tf.float32)))
+ for block_id in range(1, 5)
+ ]
+
+ self.assertArrayNear(alpha_fixed_block_id, [1, 1, 1, 0.8, 0, 0, 0], 1.0e-6)
+ self.assertArrayNear(alpha_fixed_progress, [0, 0.8, 1, 1], 1.0e-6)
+
+ def test_blend_images_in_stable_stage(self):
+ x_np = np.random.normal(size=[2, 8, 8, 3])
+ x = tf.constant(x_np, tf.float32)
+ x_blend = networks.blend_images(
+ x,
+ progress=tf.constant(0.0),
+ resolution_schedule=networks.ResolutionSchedule(
+ scale_base=2, num_resolutions=2),
+ num_blocks=2)
+ with self.test_session(use_gpu=True) as sess:
+ x_blend_np = sess.run(x_blend)
+ x_blend_expected_np = sess.run(layers.upscale(layers.downscale(x, 2), 2))
+ self.assertNDArrayNear(x_blend_np, x_blend_expected_np, 1.0e-6)
+
+ def test_blend_images_in_transition_stage(self):
+ x_np = np.random.normal(size=[2, 8, 8, 3])
+ x = tf.constant(x_np, tf.float32)
+ x_blend = networks.blend_images(
+ x,
+ tf.constant(0.2),
+ resolution_schedule=networks.ResolutionSchedule(
+ scale_base=2, num_resolutions=2),
+ num_blocks=2)
+ with self.test_session(use_gpu=True) as sess:
+ x_blend_np = sess.run(x_blend)
+ x_blend_expected_np = 0.8 * sess.run(
+ layers.upscale(layers.downscale(x, 2), 2)) + 0.2 * x_np
+ self.assertNDArrayNear(x_blend_np, x_blend_expected_np, 1.0e-6)
+
+ def test_num_filters(self):
+ self.assertEqual(networks.num_filters(1, 4096, 1, 256), 256)
+ self.assertEqual(networks.num_filters(5, 4096, 1, 256), 128)
+
+ def test_generator_grad_norm_progress(self):
+ stable_stage_num_images = 2
+ transition_stage_num_images = 3
+
+ current_image_id_ph = tf.placeholder(tf.int32, [])
+ progress = networks.compute_progress(
+ current_image_id_ph,
+ stable_stage_num_images,
+ transition_stage_num_images,
+ num_blocks=3)
+ z = tf.random_normal([2, 10], dtype=tf.float32)
+ x, _ = networks.generator(
+ z, progress, _num_filters_stub,
+ networks.ResolutionSchedule(
+ start_resolutions=(4, 4), scale_base=2, num_resolutions=3))
+ fake_loss = tf.reduce_sum(tf.square(x))
+ grad_norms = [
+ _get_grad_norm(
+ fake_loss, tf.trainable_variables('.*/progressive_gan_block_1/.*')),
+ _get_grad_norm(
+ fake_loss, tf.trainable_variables('.*/progressive_gan_block_2/.*')),
+ _get_grad_norm(
+ fake_loss, tf.trainable_variables('.*/progressive_gan_block_3/.*'))
+ ]
+
+ grad_norms_output = None
+ with self.test_session(use_gpu=True) as sess:
+ sess.run(tf.global_variables_initializer())
+ x1_np = sess.run(x, feed_dict={current_image_id_ph: 0.12})
+ x2_np = sess.run(x, feed_dict={current_image_id_ph: 1.8})
+ grad_norms_output = np.array([
+ sess.run(grad_norms, feed_dict={current_image_id_ph: i})
+ for i in range(15) # total num of images
+ ])
+
+ self.assertEqual((2, 16, 16, 3), x1_np.shape)
+ self.assertEqual((2, 16, 16, 3), x2_np.shape)
+ # The gradient of block_1 is always on.
+ self.assertEqual(
+ np.argmax(grad_norms_output[:, 0] > 0), 0,
+ 'gradient norms {} for block 1 are not always on'.format(
+ grad_norms_output[:, 0]))
+ # The gradient of block_2 is on after 1 stable stage.
+ self.assertEqual(
+ np.argmax(grad_norms_output[:, 1] > 0), 3,
+ 'gradient norms {} for block 2 are not on at step 3'.format(
+ grad_norms_output[:, 1]))
+ # The gradient of block_3 is on after 2 stable stage + 1 transition stage.
+ self.assertEqual(
+ np.argmax(grad_norms_output[:, 2] > 0), 8,
+ 'gradient norms {} for block 3 are not on at step 8'.format(
+ grad_norms_output[:, 2]))
+
+ def test_discriminator_grad_norm_progress(self):
+ stable_stage_num_images = 2
+ transition_stage_num_images = 3
+
+ current_image_id_ph = tf.placeholder(tf.int32, [])
+ progress = networks.compute_progress(
+ current_image_id_ph,
+ stable_stage_num_images,
+ transition_stage_num_images,
+ num_blocks=3)
+ x = tf.random_normal([2, 16, 16, 3])
+ logits, _ = networks.discriminator(
+ x, progress, _num_filters_stub,
+ networks.ResolutionSchedule(
+ start_resolutions=(4, 4), scale_base=2, num_resolutions=3))
+ fake_loss = tf.reduce_sum(tf.square(logits))
+ grad_norms = [
+ _get_grad_norm(
+ fake_loss, tf.trainable_variables('.*/progressive_gan_block_1/.*')),
+ _get_grad_norm(
+ fake_loss, tf.trainable_variables('.*/progressive_gan_block_2/.*')),
+ _get_grad_norm(
+ fake_loss, tf.trainable_variables('.*/progressive_gan_block_3/.*'))
+ ]
+
+ grad_norms_output = None
+ with self.test_session(use_gpu=True) as sess:
+ sess.run(tf.global_variables_initializer())
+ grad_norms_output = np.array([
+ sess.run(grad_norms, feed_dict={current_image_id_ph: i})
+ for i in range(15) # total num of images
+ ])
+
+ # The gradient of block_1 is always on.
+ self.assertEqual(
+ np.argmax(grad_norms_output[:, 0] > 0), 0,
+ 'gradient norms {} for block 1 are not always on'.format(
+ grad_norms_output[:, 0]))
+ # The gradient of block_2 is on after 1 stable stage.
+ self.assertEqual(
+ np.argmax(grad_norms_output[:, 1] > 0), 3,
+ 'gradient norms {} for block 2 are not on at step 3'.format(
+ grad_norms_output[:, 1]))
+ # The gradient of block_3 is on after 2 stable stage + 1 transition stage.
+ self.assertEqual(
+ np.argmax(grad_norms_output[:, 2] > 0), 8,
+ 'gradient norms {} for block 3 are not on at step 8'.format(
+ grad_norms_output[:, 2]))
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/testdata/test.jpg" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/testdata/test.jpg"
new file mode 100644
index 00000000..edf90ef6
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/testdata/test.jpg" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/train.py"
new file mode 100644
index 00000000..66f9e1cb
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/train.py"
@@ -0,0 +1,589 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Train a progressive GAN model.
+
+See https://arxiv.org/abs/1710.10196 for details about the model.
+
+See https://github.com/tkarras/progressive_growing_of_gans for the original
+theano implementation.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+
+
+from absl import logging
+import numpy as np
+import tensorflow as tf
+
+import networks
+
+tfgan = tf.contrib.gan
+
+
+def make_train_sub_dir(stage_id, **kwargs):
+ """Returns the log directory for training stage `stage_id`."""
+ return os.path.join(kwargs['train_root_dir'], 'stage_{:05d}'.format(stage_id))
+
+
+def make_resolution_schedule(**kwargs):
+ """Returns an object of `ResolutionSchedule`."""
+ return networks.ResolutionSchedule(
+ start_resolutions=(kwargs['start_height'], kwargs['start_width']),
+ scale_base=kwargs['scale_base'],
+ num_resolutions=kwargs['num_resolutions'])
+
+
+def get_stage_ids(**kwargs):
+ """Returns a list of stage ids.
+
+ Args:
+ **kwargs: A dictionary of
+ 'train_root_dir': A string of root directory of training logs.
+ 'num_resolutions': An integer of number of progressive resolutions.
+ """
+ train_sub_dirs = [
+ sub_dir for sub_dir in tf.gfile.ListDirectory(kwargs['train_root_dir'])
+ if sub_dir.startswith('stage_')
+ ]
+
+ # If this is a fresh start, begin with start_stage_id = 0.
+ # If the model has already been trained for n = len(train_sub_dirs) stages,
+ # resume from the last stage, i.e. start_stage_id = n - 1.
+ start_stage_id = max(0, len(train_sub_dirs) - 1)
+
+ return range(start_stage_id, get_total_num_stages(**kwargs))
+
+
+def get_total_num_stages(**kwargs):
+ """Returns total number of training stages."""
+ return 2 * kwargs['num_resolutions'] - 1
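+
+# Illustrative note (added for clarity, not part of the original file): with
+# num_resolutions=3 the stages alternate [stable, transition, stable,
+# transition, stable], i.e. 2 * 3 - 1 = 5 stages in total.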
+
+
+def get_batch_size(stage_id, **kwargs):
+ """Returns batch size for each stage.
+
+ It is expected that `len(batch_size_schedule) == num_resolutions`. Each stage
+ corresponds to a resolution and hence a batch size. However, if
+ `len(batch_size_schedule) < num_resolutions`, the schedule is padded at the
+ beginning with its first batch size.
+
+ Args:
+ stage_id: An integer of training stage index.
+ **kwargs: A dictionary of
+ 'batch_size_schedule': A list of integers; each element is the batch size
+ for the corresponding training image resolution.
+ 'num_resolutions': An integer of number of progressive resolutions.
+
+ Returns:
+ An integer batch size for the `stage_id`.
+ """
+ batch_size_schedule = kwargs['batch_size_schedule']
+ num_resolutions = kwargs['num_resolutions']
+ if len(batch_size_schedule) < num_resolutions:
+ batch_size_schedule = (
+ [batch_size_schedule[0]] * (num_resolutions - len(batch_size_schedule))
+ + batch_size_schedule)
+
+ return int(batch_size_schedule[(stage_id + 1) // 2])
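+
+# Worked example (illustration only, mirrored from train_test.py): with
+# num_resolutions=5 and batch_size_schedule=[8, 4, 2], the schedule is padded
+# to [8, 8, 8, 4, 2]; stages 0..8 then use batch sizes
+# [8, 8, 8, 8, 8, 4, 4, 2, 2] via index (stage_id + 1) // 2.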
+
+
+def get_stage_info(stage_id, **kwargs):
+ """Returns information for a training stage.
+
+ Args:
+ stage_id: An integer of training stage index.
+ **kwargs: A dictionary of
+ 'num_resolutions': An integer of number of progressive resolutions.
+ 'stable_stage_num_images': An integer of number of training images in
+ the stable stage.
+ 'transition_stage_num_images': An integer of number of training images
+ in the transition stage.
+ 'total_num_images': An integer of total number of training images.
+
+ Returns:
+ A tuple of integers. The first entry is the number of blocks. The second
+ entry is the accumulated total number of training images when stage
+ `stage_id` is finished.
+
+ Raises:
+ ValueError: If `stage_id` is not in [0, total number of stages).
+ """
+ total_num_stages = get_total_num_stages(**kwargs)
+ if not (stage_id >= 0 and stage_id < total_num_stages):
+ raise ValueError(
+ '`stage_id` must be in [0, {0}), but instead was {1}'.format(
+ total_num_stages, stage_id))
+
+ # Even stage_id: stable training stage.
+ # Odd stage_id: transition training stage.
+ num_blocks = (stage_id + 1) // 2 + 1
+ num_images = ((stage_id // 2 + 1) * kwargs['stable_stage_num_images'] + (
+ (stage_id + 1) // 2) * kwargs['transition_stage_num_images'])
+
+ total_num_images = kwargs['total_num_images']
+ if stage_id >= total_num_stages - 1:
+ num_images = total_num_images
+ num_images = min(num_images, total_num_images)
+
+ return num_blocks, num_images
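+
+# Worked example (illustration only, assuming stable_stage_num_images=1000 and
+# transition_stage_num_images=1000): stage_id=3 is a transition stage, so
+# num_blocks = (3 + 1) // 2 + 1 = 3 and
+# num_images = 2 * 1000 + 2 * 1000 = 4000 (further capped by total_num_images).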
+
+
+def make_latent_vectors(num, **kwargs):
+ """Returns a batch of `num` random latent vectors."""
+ return tf.random_normal([num, kwargs['latent_vector_size']], dtype=tf.float32)
+
+
+def make_interpolated_latent_vectors(num_rows, num_columns, **kwargs):
+ """Returns a batch of linearly interpolated latent vectors.
+
+ Given two randomly generated latent vectors za and zb, it generates
+ a row of `num_columns` interpolated latent vectors, i.e.
+ [..., za + (zb - za) * i / (num_columns - 1), ...] where
+ i = 0, 1, ..., `num_columns` - 1.
+
+ This function produces `num_rows` such rows and returns a (flattened)
+ batch of latent vectors with batch size `num_rows * num_columns`.
+
+ Args:
+ num_rows: An integer. Number of rows of interpolated latent vectors.
+ num_columns: An integer. Number of interpolated latent vectors in each row.
+ **kwargs: A dictionary of
+ 'latent_vector_size': An integer of latent vector size.
+
+ Returns:
+ A `Tensor` of shape `[num_rows * num_columns, latent_vector_size]`.
+ """
+ ans = []
+ for _ in range(num_rows):
+ z = tf.random_normal([2, kwargs['latent_vector_size']])
+ r = tf.reshape(
+ tf.to_float(tf.range(num_columns)) / (num_columns - 1), [-1, 1])
+ dz = z[1] - z[0]
+ ans.append(z[0] + tf.stack([dz] * num_columns) * r)
+ return tf.concat(ans, axis=0)
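+
+# Usage sketch (illustration only, the values below are assumptions): with
+# num_rows=2, num_columns=4 and latent_vector_size=128, the returned Tensor
+# has shape [8, 128]; each group of 4 vectors linearly interpolates between
+# two random endpoints z[0] and z[1].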
+
+
+def define_loss(gan_model, **kwargs):
+ """Defines progressive GAN losses.
+
+ The generator and discriminator both use the wasserstein loss. In addition,
+ a small penalty term is added to the discriminator loss to keep the real
+ scores from drifting too far from zero.
+
+ Args:
+ gan_model: A `GANModel` namedtuple.
+ **kwargs: A dictionary of
+ 'gradient_penalty_weight': A float of gradient penalty weight for the
+ wasserstein loss.
+ 'gradient_penalty_target': A float of gradient norm target for the
+ wasserstein loss.
+ 'real_score_penalty_weight': A float of additional penalty weight to keep
+ the real scores from drifting too far from zero.
+
+ Returns:
+ A `GANLoss` namedtuple.
+ """
+ gan_loss = tfgan.gan_loss(
+ gan_model,
+ generator_loss_fn=tfgan.losses.wasserstein_generator_loss,
+ discriminator_loss_fn=tfgan.losses.wasserstein_discriminator_loss,
+ gradient_penalty_weight=kwargs['gradient_penalty_weight'],
+ gradient_penalty_target=kwargs['gradient_penalty_target'],
+ gradient_penalty_epsilon=0.0)
+
+ real_score_penalty = tf.reduce_mean(
+ tf.square(gan_model.discriminator_real_outputs))
+ tf.summary.scalar('real_score_penalty', real_score_penalty)
+
+ return gan_loss._replace(
+ discriminator_loss=(
+ gan_loss.discriminator_loss +
+ kwargs['real_score_penalty_weight'] * real_score_penalty))
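+
+# Sketch of the resulting losses (comment added for clarity, not part of the
+# original file):
+#   generator_loss     = wasserstein_generator_loss
+#   discriminator_loss = wasserstein_discriminator_loss
+#                        + gradient_penalty_weight * gradient_penalty
+#                        + real_score_penalty_weight * mean(D(real)^2)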
+
+
+def define_train_ops(gan_model, gan_loss, **kwargs):
+ """Defines progressive GAN train ops.
+
+ Args:
+ gan_model: A `GANModel` namedtuple.
+ gan_loss: A `GANLoss` namedtuple.
+ **kwargs: A dictionary of
+ 'adam_beta1': A float of Adam optimizer beta1.
+ 'adam_beta2': A float of Adam optimizer beta2.
+ 'generator_learning_rate': A float of generator learning rate.
+ 'discriminator_learning_rate': A float of discriminator learning rate.
+
+ Returns:
+ A tuple of `GANTrainOps` namedtuple and a list variables tracking the state
+ of optimizers.
+ """
+ with tf.variable_scope('progressive_gan_train_ops') as var_scope:
+ beta1, beta2 = kwargs['adam_beta1'], kwargs['adam_beta2']
+ gen_opt = tf.train.AdamOptimizer(kwargs['generator_learning_rate'], beta1,
+ beta2)
+ dis_opt = tf.train.AdamOptimizer(kwargs['discriminator_learning_rate'],
+ beta1, beta2)
+ gan_train_ops = tfgan.gan_train_ops(gan_model, gan_loss, gen_opt, dis_opt)
+ return gan_train_ops, tf.get_collection(
+ tf.GraphKeys.GLOBAL_VARIABLES, scope=var_scope.name)
+
+
+def add_generator_smoothing_ops(generator_ema, gan_model, gan_train_ops):
+ """Adds generator smoothing ops."""
+ with tf.control_dependencies([gan_train_ops.generator_train_op]):
+ new_generator_train_op = generator_ema.apply(gan_model.generator_variables)
+
+ gan_train_ops = gan_train_ops._replace(
+ generator_train_op=new_generator_train_op)
+ generator_vars_to_restore = generator_ema.variables_to_restore(
+ gan_model.generator_variables)
+ return gan_train_ops, generator_vars_to_restore
+
+
+def build_model(stage_id, batch_size, real_images, **kwargs):
+ """Builds progressive GAN model.
+
+ Args:
+ stage_id: An integer of training stage index.
+ batch_size: Number of training images in each minibatch.
+ real_images: A 4D `Tensor` of NHWC format.
+ **kwargs: A dictionary of
+ 'start_height': An integer of start image height.
+ 'start_width': An integer of start image width.
+ 'scale_base': An integer of resolution multiplier.
+ 'num_resolutions': An integer of number of progressive resolutions.
+ 'stable_stage_num_images': An integer of number of training images in
+ the stable stage.
+ 'transition_stage_num_images': An integer of number of training images
+ in the transition stage.
+ 'total_num_images': An integer of total number of training images.
+ 'kernel_size': Convolution kernel size.
+ 'colors': Number of image channels.
+ 'to_rgb_use_tanh_activation': Whether to apply a tanh activation when
+ outputting RGB.
+ 'fmap_base': Base number of filters.
+ 'fmap_decay': Decay of number of filters.
+ 'fmap_max': Max number of filters.
+ 'latent_vector_size': An integer of latent vector size.
+ 'gradient_penalty_weight': A float of gradient penalty weight for the
+ wasserstein loss.
+ 'gradient_penalty_target': A float of gradient norm target for the
+ wasserstein loss.
+ 'real_score_penalty_weight': A float of additional penalty weight to keep
+ the real scores from drifting too far from zero.
+ 'adam_beta1': A float of Adam optimizer beta1.
+ 'adam_beta2': A float of Adam optimizer beta2.
+ 'generator_learning_rate': A float of generator learning rate.
+ 'discriminator_learning_rate': A float of discriminator learning rate.
+
+ Returns:
+ An internal object that wraps all information about the model.
+ """
+ kernel_size = kwargs['kernel_size']
+ colors = kwargs['colors']
+ resolution_schedule = make_resolution_schedule(**kwargs)
+
+ num_blocks, num_images = get_stage_info(stage_id, **kwargs)
+
+ current_image_id = tf.train.get_or_create_global_step()
+ current_image_id_inc_op = current_image_id.assign_add(batch_size)
+ tf.summary.scalar('current_image_id', current_image_id)
+
+ progress = networks.compute_progress(
+ current_image_id, kwargs['stable_stage_num_images'],
+ kwargs['transition_stage_num_images'], num_blocks)
+ tf.summary.scalar('progress', progress)
+
+ real_images = networks.blend_images(
+ real_images, progress, resolution_schedule, num_blocks=num_blocks)
+
+ def _num_filters_fn(block_id):
+ """Computes number of filters of block `block_id`."""
+ return networks.num_filters(block_id, kwargs['fmap_base'],
+ kwargs['fmap_decay'], kwargs['fmap_max'])
+
+ def _generator_fn(z):
+ """Builds generator network."""
+ return networks.generator(
+ z,
+ progress,
+ _num_filters_fn,
+ resolution_schedule,
+ num_blocks=num_blocks,
+ kernel_size=kernel_size,
+ colors=colors,
+ to_rgb_activation=(tf.tanh
+ if kwargs['to_rgb_use_tanh_activation'] else None))
+
+ def _discriminator_fn(x):
+ """Builds discriminator network."""
+ return networks.discriminator(
+ x,
+ progress,
+ _num_filters_fn,
+ resolution_schedule,
+ num_blocks=num_blocks,
+ kernel_size=kernel_size)
+
+ ########## Define model.
+ z = make_latent_vectors(batch_size, **kwargs)
+
+ gan_model = tfgan.gan_model(
+ generator_fn=lambda z: _generator_fn(z)[0],
+ discriminator_fn=lambda x, unused_z: _discriminator_fn(x)[0],
+ real_data=real_images,
+ generator_inputs=z)
+
+ ########## Define loss.
+ gan_loss = define_loss(gan_model, **kwargs)
+
+ ########## Define train ops.
+ gan_train_ops, optimizer_var_list = define_train_ops(gan_model, gan_loss,
+ **kwargs)
+ gan_train_ops = gan_train_ops._replace(
+ global_step_inc_op=current_image_id_inc_op)
+
+ ########## Generator smoothing.
+ generator_ema = tf.train.ExponentialMovingAverage(decay=0.999)
+ gan_train_ops, generator_vars_to_restore = add_generator_smoothing_ops(
+ generator_ema, gan_model, gan_train_ops)
+
+ class Model(object):
+ pass
+
+ model = Model()
+ model.stage_id = stage_id
+ model.batch_size = batch_size
+ model.resolution_schedule = resolution_schedule
+ model.num_images = num_images
+ model.num_blocks = num_blocks
+ model.current_image_id = current_image_id
+ model.progress = progress
+ model.num_filters_fn = _num_filters_fn
+ model.generator_fn = _generator_fn
+ model.discriminator_fn = _discriminator_fn
+ model.gan_model = gan_model
+ model.gan_loss = gan_loss
+ model.gan_train_ops = gan_train_ops
+ model.optimizer_var_list = optimizer_var_list
+ model.generator_ema = generator_ema
+ model.generator_vars_to_restore = generator_vars_to_restore
+ return model
+
+
+def make_var_scope_custom_getter_for_ema(ema):
+ """Makes variable scope custom getter."""
+ def _custom_getter(getter, name, *args, **kwargs):
+ var = getter(name, *args, **kwargs)
+ ema_var = ema.average(var)
+ return ema_var if ema_var else var
+ return _custom_getter
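+
+# Usage sketch (comment added for clarity): pass the returned getter as the
+# `custom_getter` of a variable scope so that variable lookups resolve to
+# their EMA-smoothed copies; see add_model_summaries() below for an example.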
+
+
+def add_model_summaries(model, **kwargs):
+ """Adds model summaries.
+
+ This function adds several useful summaries during training:
+ - fake_images: A grid of fake images based on random latent vectors.
+ - interp_images: A grid of fake images based on interpolated latent vectors.
+ - real_images_blend: A grid of real images.
+ - summaries for `gan_model` losses, variable distributions etc.
+
+ Args:
+ model: A model object holding all information about the progressive GAN
+ model, e.g. the return value of build_model().
+ **kwargs: A dictionary of
+ 'fake_grid_size': The fake image grid size for summaries.
+ 'interp_grid_size': The latent space interpolated image grid size for
+ summaries.
+ 'colors': Number of image channels.
+ 'latent_vector_size': An integer of latent vector size.
+ """
+ fake_grid_size = kwargs['fake_grid_size']
+ interp_grid_size = kwargs['interp_grid_size']
+ colors = kwargs['colors']
+
+ image_shape = list(model.resolution_schedule.final_resolutions)
+
+ fake_batch_size = fake_grid_size**2
+ fake_images_shape = [fake_batch_size] + image_shape + [colors]
+
+ interp_batch_size = interp_grid_size**2
+ interp_images_shape = [interp_batch_size] + image_shape + [colors]
+
+ # When making predictions, use the EMA-smoothed generator variables.
+ with tf.variable_scope(
+ model.gan_model.generator_scope,
+ reuse=True,
+ custom_getter=make_var_scope_custom_getter_for_ema(model.generator_ema)):
+ z_fake = make_latent_vectors(fake_batch_size, **kwargs)
+ fake_images = model.gan_model.generator_fn(z_fake)
+ fake_images.set_shape(fake_images_shape)
+
+ z_interp = make_interpolated_latent_vectors(interp_grid_size,
+ interp_grid_size, **kwargs)
+ interp_images = model.gan_model.generator_fn(z_interp)
+ interp_images.set_shape(interp_images_shape)
+
+ tf.summary.image(
+ 'fake_images',
+ tfgan.eval.eval_utils.image_grid(
+ fake_images,
+ grid_shape=[fake_grid_size] * 2,
+ image_shape=image_shape,
+ num_channels=colors),
+ max_outputs=1)
+
+ tf.summary.image(
+ 'interp_images',
+ tfgan.eval.eval_utils.image_grid(
+ interp_images,
+ grid_shape=[interp_grid_size] * 2,
+ image_shape=image_shape,
+ num_channels=colors),
+ max_outputs=1)
+
+ real_grid_size = int(np.sqrt(model.batch_size))
+ tf.summary.image(
+ 'real_images_blend',
+ tfgan.eval.eval_utils.image_grid(
+ model.gan_model.real_data[:real_grid_size**2],
+ grid_shape=(real_grid_size, real_grid_size),
+ image_shape=image_shape,
+ num_channels=colors),
+ max_outputs=1)
+
+ tfgan.eval.add_gan_model_summaries(model.gan_model)
+
+
+def make_scaffold(stage_id, optimizer_var_list, **kwargs):
+ """Makes a custom scaffold.
+
+ The scaffold
+ - restores variables from the last training stage.
+ - initializes new variables in the new block.
+
+ Args:
+ stage_id: An integer of stage id.
+ optimizer_var_list: A list of optimizer variables.
+ **kwargs: A dictionary of
+ 'train_root_dir': A string of root directory of training logs.
+ 'num_resolutions': An integer of number of progressive resolutions.
+ 'stable_stage_num_images': An integer of number of training images in
+ the stable stage.
+ 'transition_stage_num_images': An integer of number of training images
+ in the transition stage.
+ 'total_num_images': An integer of total number of training images.
+
+ Returns:
+ A `Scaffold` object.
+ """
+ # Holds variables from the previous stage that need to be restored.
+ restore_var_list = []
+ prev_ckpt = None
+ curr_ckpt = tf.train.latest_checkpoint(make_train_sub_dir(stage_id, **kwargs))
+ if stage_id > 0 and curr_ckpt is None:
+ prev_ckpt = tf.train.latest_checkpoint(
+ make_train_sub_dir(stage_id - 1, **kwargs))
+
+ num_blocks, _ = get_stage_info(stage_id, **kwargs)
+ prev_num_blocks, _ = get_stage_info(stage_id - 1, **kwargs)
+
+ # Holds variables created in the new block of the current stage. If the
+ # current stage is a stable stage (except the initial stage), this list
+ # will be empty.
+ new_block_var_list = []
+ for block_id in range(prev_num_blocks + 1, num_blocks + 1):
+ new_block_var_list.extend(
+ tf.get_collection(
+ tf.GraphKeys.GLOBAL_VARIABLES,
+ scope='.*/{}/'.format(networks.block_name(block_id))))
+
+ # Every variable that is 1) not an optimizer variable and 2) not from the
+ # new block needs to be restored.
+ restore_var_list = [
+ var for var in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
+ if var not in set(optimizer_var_list + new_block_var_list)
+ ]
+
+ # Add saver op to graph. This saver is used to restore variables from the
+ # previous stage.
+ saver_for_restore = tf.train.Saver(
+ var_list=restore_var_list, allow_empty=True)
+ # Add the op to graph that initializes all global variables.
+ init_op = tf.global_variables_initializer()
+
+ def _init_fn(unused_scaffold, sess):
+ # First initialize all variables.
+ sess.run(init_op)
+ logging.info('\n'.join([var.name for var in restore_var_list]))
+ # Then overwrite those variables with the values saved in the previous stage.
+ if prev_ckpt is not None:
+ saver_for_restore.restore(sess, prev_ckpt)
+
+ # Use a dummy init_op here as all initialization is done in init_fn.
+ return tf.train.Scaffold(init_op=tf.constant([]), init_fn=_init_fn)
+
+
+def make_status_message(model):
+ """Makes a string `Tensor` of training status."""
+ return tf.string_join(
+ [
+ 'Starting train step: current_image_id: ',
+ tf.as_string(model.current_image_id), ', progress: ',
+ tf.as_string(model.progress), ', num_blocks: {}'.format(
+ model.num_blocks), ', batch_size: {}'.format(model.batch_size)
+ ],
+ name='status_message')
+
+
+def train(model, **kwargs):
+ """Trains progressive GAN for stage `stage_id`.
+
+ Args:
+ model: A model object holding all information about the progressive GAN
+ model, e.g. the return value of build_model().
+ **kwargs: A dictionary of
+ 'train_root_dir': A string of root directory of training logs.
+ 'master': Name of the TensorFlow master to use.
+ 'task': The Task ID. This value is used when training with multiple
+ workers to identify each worker.
+ 'save_summaries_num_images': Save summaries every this many images.
+ Returns:
+ None.
+ """
+ logging.info('stage_id=%d, num_blocks=%d, num_images=%d', model.stage_id,
+ model.num_blocks, model.num_images)
+
+ scaffold = make_scaffold(model.stage_id, model.optimizer_var_list, **kwargs)
+
+ tfgan.gan_train(
+ model.gan_train_ops,
+ logdir=make_train_sub_dir(model.stage_id, **kwargs),
+ get_hooks_fn=tfgan.get_sequential_train_hooks(tfgan.GANTrainSteps(1, 1)),
+ hooks=[
+ tf.train.StopAtStepHook(last_step=model.num_images),
+ tf.train.LoggingTensorHook(
+ [make_status_message(model)], every_n_iter=10)
+ ],
+ master=kwargs['master'],
+ is_chief=(kwargs['task'] == 0),
+ scaffold=scaffold,
+ save_checkpoint_secs=600,
+ save_summaries_steps=(kwargs['save_summaries_num_images']))
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/train_main.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/train_main.py"
new file mode 100644
index 00000000..645d20cf
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/train_main.py"
@@ -0,0 +1,174 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Train a progressive GAN model.
+
+See https://arxiv.org/abs/1710.10196 for details about the model.
+
+See https://github.com/tkarras/progressive_growing_of_gans for the original
+Theano implementation.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import sys
+
+
+from absl import flags
+from absl import logging
+import tensorflow as tf
+
+import data_provider
+import train
+
+tfgan = tf.contrib.gan
+
+flags.DEFINE_string('dataset_name', 'cifar10', 'Dataset name.')
+
+flags.DEFINE_string('dataset_file_pattern', '', 'Dataset file pattern.')
+
+flags.DEFINE_integer('start_height', 4, 'Start image height.')
+
+flags.DEFINE_integer('start_width', 4, 'Start image width.')
+
+flags.DEFINE_integer('scale_base', 2, 'Resolution multiplier.')
+
+flags.DEFINE_integer('num_resolutions', 4, 'Number of progressive resolutions.')
+
+flags.DEFINE_list(
+ 'batch_size_schedule', [8, 8, 4],
+ 'A list of batch sizes for each resolution, if '
+ 'len(batch_size_schedule) < num_resolutions, pad the schedule in the '
+ 'beginning with the first batch size.')
+
+flags.DEFINE_integer('kernel_size', 3, 'Convolution kernel size.')
+
+flags.DEFINE_integer('colors', 3, 'Number of image channels.')
+
+flags.DEFINE_bool('to_rgb_use_tanh_activation', False,
+ 'Whether to apply a tanh activation when outputting RGB.')
+
+flags.DEFINE_integer('stable_stage_num_images', 1000,
+ 'Number of images in the stable stage.')
+
+flags.DEFINE_integer('transition_stage_num_images', 1000,
+ 'Number of images in the transition stage.')
+
+flags.DEFINE_integer('total_num_images', 10000, 'Total number of images.')
+
+flags.DEFINE_integer('save_summaries_num_images', 100,
+ 'Save summaries every this many images.')
+
+flags.DEFINE_integer('latent_vector_size', 128, 'Latent vector size.')
+
+flags.DEFINE_integer('fmap_base', 4096, 'Base number of filters.')
+
+flags.DEFINE_float('fmap_decay', 1.0, 'Decay of number of filters.')
+
+flags.DEFINE_integer('fmap_max', 128, 'Max number of filters.')
+
+flags.DEFINE_float('gradient_penalty_target', 1.0,
+ 'Gradient norm target for wasserstein loss.')
+
+flags.DEFINE_float('gradient_penalty_weight', 10.0,
+ 'Gradient penalty weight for wasserstein loss.')
+
+flags.DEFINE_float('real_score_penalty_weight', 0.001,
+ 'Additional penalty to keep the scores from drifting too '
+ 'far from zero.')
+
+flags.DEFINE_float('generator_learning_rate', 0.001, 'Learning rate.')
+
+flags.DEFINE_float('discriminator_learning_rate', 0.001, 'Learning rate.')
+
+flags.DEFINE_float('adam_beta1', 0.0, 'Adam beta 1.')
+
+flags.DEFINE_float('adam_beta2', 0.99, 'Adam beta 2.')
+
+flags.DEFINE_integer('fake_grid_size', 8, 'The fake image grid size for eval.')
+
+flags.DEFINE_integer('interp_grid_size', 8,
+ 'The interp image grid size for eval.')
+
+flags.DEFINE_string('train_root_dir', '/tmp/progressive_gan/',
+ 'Directory where to write event logs.')
+
+flags.DEFINE_string('master', '', 'Name of the TensorFlow master to use.')
+
+flags.DEFINE_integer(
+ 'ps_tasks', 0,
+ 'The number of parameter servers. If the value is 0, then the parameters '
+ 'are handled locally by the worker.')
+
+flags.DEFINE_integer(
+ 'task', 0,
+ 'The Task ID. This value is used when training with multiple workers to '
+ 'identify each worker.')
+
+FLAGS = flags.FLAGS
+
+
+def _make_config_from_flags():
+ """Makes a config dictionary from commandline flags."""
+ return dict([(flag.name, flag.value)
+ for flag in FLAGS.get_key_flags_for_module(sys.argv[0])])
+
+
+def _provide_real_images(batch_size, **kwargs):
+ """Provides real images."""
+ dataset_name = kwargs.get('dataset_name')
+ dataset_file_pattern = kwargs.get('dataset_file_pattern')
+ colors = kwargs['colors']
+ final_height, final_width = train.make_resolution_schedule(
+ **kwargs).final_resolutions
+ if dataset_name is not None:
+ return data_provider.provide_data(
+ dataset_name=dataset_name,
+ split_name='train',
+ batch_size=batch_size,
+ patch_height=final_height,
+ patch_width=final_width,
+ colors=colors)
+ elif dataset_file_pattern is not None:
+ return data_provider.provide_data_from_image_files(
+ file_pattern=dataset_file_pattern,
+ batch_size=batch_size,
+ patch_height=final_height,
+ patch_width=final_width,
+ colors=colors)
+
+
+def main(_):
+ if not tf.gfile.Exists(FLAGS.train_root_dir):
+ tf.gfile.MakeDirs(FLAGS.train_root_dir)
+
+ config = _make_config_from_flags()
+ logging.info('\n'.join(['{}={}'.format(k, v) for k, v in config.items()]))
+
+ for stage_id in train.get_stage_ids(**config):
+ batch_size = train.get_batch_size(stage_id, **config)
+ tf.reset_default_graph()
+ with tf.device(tf.train.replica_device_setter(FLAGS.ps_tasks)):
+ real_images = None
+ with tf.device('/cpu:0'), tf.name_scope('inputs'):
+ real_images = _provide_real_images(batch_size, **config)
+ model = train.build_model(stage_id, batch_size, real_images, **config)
+ train.add_model_summaries(model, **config)
+ train.train(model, **config)
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/train_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/train_test.py"
new file mode 100644
index 00000000..67f47899
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/progressive_gan/train_test.py"
@@ -0,0 +1,93 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+
+
+from absl import flags
+from absl.testing import absltest
+import tensorflow as tf
+
+import train
+
+
+FLAGS = flags.FLAGS
+
+
+def provide_random_data(batch_size=2, patch_size=4, colors=1, **unused_kwargs):
+ return tf.random_normal([batch_size, patch_size, patch_size, colors])
+
+
+class TrainTest(absltest.TestCase):
+
+ def setUp(self):
+ self._config = {
+ 'start_height': 2,
+ 'start_width': 2,
+ 'scale_base': 2,
+ 'num_resolutions': 2,
+ 'batch_size_schedule': [2],
+ 'colors': 1,
+ 'to_rgb_use_tanh_activation': True,
+ 'kernel_size': 3,
+ 'stable_stage_num_images': 1,
+ 'transition_stage_num_images': 1,
+ 'total_num_images': 3,
+ 'save_summaries_num_images': 2,
+ 'latent_vector_size': 2,
+ 'fmap_base': 8,
+ 'fmap_decay': 1.0,
+ 'fmap_max': 8,
+ 'gradient_penalty_target': 1.0,
+ 'gradient_penalty_weight': 10.0,
+ 'real_score_penalty_weight': 0.001,
+ 'generator_learning_rate': 0.001,
+ 'discriminator_learning_rate': 0.001,
+ 'adam_beta1': 0.0,
+ 'adam_beta2': 0.99,
+ 'fake_grid_size': 2,
+ 'interp_grid_size': 2,
+ 'train_root_dir': os.path.join(FLAGS.test_tmpdir, 'progressive_gan'),
+ 'master': '',
+ 'task': 0
+ }
+
+ def test_train_success(self):
+ train_root_dir = self._config['train_root_dir']
+ if not tf.gfile.Exists(train_root_dir):
+ tf.gfile.MakeDirs(train_root_dir)
+
+ for stage_id in train.get_stage_ids(**self._config):
+ batch_size = train.get_batch_size(stage_id, **self._config)
+ tf.reset_default_graph()
+ real_images = provide_random_data(batch_size=batch_size)
+ model = train.build_model(stage_id, batch_size, real_images,
+ **self._config)
+ train.add_model_summaries(model, **self._config)
+ train.train(model, **self._config)
+
+ def test_get_batch_size(self):
+ config = {'num_resolutions': 5, 'batch_size_schedule': [8, 4, 2]}
+ # batch_size_schedule is expanded to [8, 8, 8, 4, 2]
+ # At stage level it is [8, 8, 8, 8, 8, 4, 4, 2, 2]
+ for i, expected_batch_size in enumerate([8, 8, 8, 8, 8, 4, 4, 2, 2]):
+ self.assertEqual(train.get_batch_size(i, **config), expected_batch_size)
+
+
+if __name__ == '__main__':
+ absltest.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/data_provider.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/data_provider.py"
new file mode 100644
index 00000000..5e136cf9
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/data_provider.py"
@@ -0,0 +1,33 @@
+"""StarGAN data provider."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+import data_provider
+
+
+def provide_data(image_file_patterns, batch_size, patch_size):
+ """Data provider wrapper around the data_provider in gan/cyclegan.
+
+ Args:
+ image_file_patterns: A list of file pattern globs.
+ batch_size: Python int. Batch size.
+ patch_size: Python int. The patch size to extract.
+
+ Returns:
+ List of `Tensor` of shape (N, H, W, C) representing the images.
+ List of `Tensor` of shape (N, num_domains) representing the labels.
+ """
+
+ images = data_provider.provide_custom_data(
+ image_file_patterns,
+ batch_size=batch_size,
+ patch_size=patch_size)
+
+ num_domains = len(images)
+ labels = [tf.one_hot([idx] * batch_size, num_domains) for idx in
+ range(num_domains)]
+
+ return images, labels
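+
+# Usage sketch (illustration only; the file patterns below are hypothetical):
+#   images, labels = provide_data(
+#       ['black_hair/*.jpg', 'blond_hair/*.jpg'], batch_size=8, patch_size=128)
+# returns two image batches of shape (8, 128, 128, 3) and two one-hot label
+# batches of shape (8, 2).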
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/data_provider_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/data_provider_test.py"
new file mode 100644
index 00000000..cdd0927b
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/data_provider_test.py"
@@ -0,0 +1,42 @@
+"""Tests for google3.third_party.tensorflow_models.gan.stargan.data_provider."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+import data_provider
+
+mock = tf.test.mock
+
+
+class DataProviderTest(tf.test.TestCase):
+
+ @mock.patch.object(
+ data_provider.data_provider, 'provide_custom_data', autospec=True)
+ def test_data_provider(self, mock_provide_custom_data):
+
+ batch_size = 2
+ patch_size = 8
+ num_domains = 3
+
+ images_shape = [batch_size, patch_size, patch_size, 3]
+ mock_provide_custom_data.return_value = [
+ tf.zeros(images_shape) for _ in range(num_domains)
+ ]
+
+ images, labels = data_provider.provide_data(
+ image_file_patterns=None, batch_size=batch_size, patch_size=patch_size)
+
+ self.assertEqual(num_domains, len(images))
+ self.assertEqual(num_domains, len(labels))
+ for label in labels:
+ self.assertListEqual([batch_size, num_domains], label.shape.as_list())
+ for image in images:
+ self.assertListEqual(images_shape, image.shape.as_list())
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/layers.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/layers.py"
new file mode 100644
index 00000000..c92430f5
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/layers.py"
@@ -0,0 +1,392 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Layers for a StarGAN model.
+
+This module contains basic layers to build a StarGAN model.
+
+See https://arxiv.org/abs/1711.09020 for details about the model.
+
+See https://github.com/yunjey/StarGAN for the original PyTorch implementation.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+
+import tensorflow as tf
+
+import ops
+
+
+def generator_down_sample(input_net, final_num_outputs=256):
+ """Down-sampling module in Generator.
+
+ Down sampling pathway of the Generator Architecture:
+
+ PyTorch Version:
+ https://github.com/yunjey/StarGAN/blob/fbdb6a6ce2a4a92e1dc034faec765e0dbe4b8164/model.py#L32
+
+ Notes:
+ We require dimension 1 and dimension 2 of the input_net to be fully defined
+ for the correct down sampling.
+
+ Args:
+ input_net: Tensor of shape (batch_size, h, w, c + num_class).
+ final_num_outputs: (int) Number of output filters for the final layer.
+
+ Returns:
+ Tensor of shape (batch_size, h / 4, w / 4, 256).
+
+ Raises:
+ ValueError: If final_num_outputs is not divisible by 4,
+ or input_net does not have a rank of 4,
+ or dimension 1 and dimension 2 of input_net are not defined at graph
+ construction time,
+ or dimension 1 and dimension 2 of input_net are not divisible by 4.
+ """
+
+ if final_num_outputs % 4 != 0:
+ raise ValueError('final_num_outputs needs to be divisible by 4.')
+
+ # Check the rank of input_net.
+ input_net.shape.assert_has_rank(4)
+
+ # Check that dimension 1 is defined and divisible by 4.
+ if input_net.shape[1]:
+ if input_net.shape[1] % 4 != 0:
+ raise ValueError(
+ 'Dimension 1 of the input should be divisible by 4, but is {} '
+ 'instead.'.
+ format(input_net.shape[1]))
+ else:
+ raise ValueError('Dimension 1 of the input should be explicitly defined.')
+
+ # Check that dimension 2 is defined and divisible by 4.
+ if input_net.shape[2]:
+ if input_net.shape[2] % 4 != 0:
+ raise ValueError(
+ 'Dimension 2 of the input should be divisible by 4, but is {} '
+ 'instead.'.
+ format(input_net.shape[2]))
+ else:
+ raise ValueError('Dimension 2 of the input should be explicitly defined.')
+
+ with tf.variable_scope('generator_down_sample'):
+
+ with tf.contrib.framework.arg_scope(
+ [tf.contrib.layers.conv2d],
+ padding='VALID',
+ biases_initializer=None,
+ normalizer_fn=tf.contrib.layers.instance_norm,
+ activation_fn=tf.nn.relu):
+
+ down_sample = ops.pad(input_net, 3)
+ down_sample = tf.contrib.layers.conv2d(
+ inputs=down_sample,
+ num_outputs=final_num_outputs // 4,
+ kernel_size=7,
+ stride=1,
+ scope='conv_0')
+
+ down_sample = ops.pad(down_sample, 1)
+ down_sample = tf.contrib.layers.conv2d(
+ inputs=down_sample,
+ num_outputs=final_num_outputs // 2,
+ kernel_size=4,
+ stride=2,
+ scope='conv_1')
+
+ down_sample = ops.pad(down_sample, 1)
+ output_net = tf.contrib.layers.conv2d(
+ inputs=down_sample,
+ num_outputs=final_num_outputs,
+ kernel_size=4,
+ stride=2,
+ scope='conv_2')
+
+ return output_net
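+
+# Shape example (comment added for clarity, mirrored from layers_test.py): an
+# input of shape (2, 128, 128, 3 + 3), i.e. image channels concatenated with
+# one-hot domain channels, is mapped through conv_0/1/2 to (2, 32, 32, 256),
+# matching the (h / 4, w / 4, 256) contract above.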
+
+
+def _residual_block(input_net,
+ num_outputs,
+ kernel_size,
+ stride=1,
+ padding_size=0,
+ activation_fn=tf.nn.relu,
+ normalizer_fn=None,
+ name='residual_block'):
+ """Residual Block.
+
+ Input Tensor X -> Conv1 -> IN -> ReLU -> Conv2 -> IN + X
+
+ PyTorch Version:
+ https://github.com/yunjey/StarGAN/blob/fbdb6a6ce2a4a92e1dc034faec765e0dbe4b8164/model.py#L7
+
+ Args:
+ input_net: Tensor as input.
+ num_outputs: (int) number of output channels for Convolution.
+ kernel_size: (int) size of the square kernel for Convolution.
+ stride: (int) stride for Convolution. Default to 1.
+ padding_size: (int) padding size for Convolution. Default to 0.
+ activation_fn: Activation function.
+ normalizer_fn: Normalization function.
+ name: Name scope
+
+ Returns:
+ Residual Tensor with the same shape as the input tensor.
+ """
+
+ with tf.variable_scope(name):
+
+ with tf.contrib.framework.arg_scope(
+ [tf.contrib.layers.conv2d],
+ num_outputs=num_outputs,
+ kernel_size=kernel_size,
+ stride=stride,
+ padding='VALID',
+ normalizer_fn=normalizer_fn,
+ activation_fn=None):
+
+ res_block = ops.pad(input_net, padding_size)
+ res_block = tf.contrib.layers.conv2d(inputs=res_block, scope='conv_0')
+
+ res_block = activation_fn(res_block, name='activation_0')
+
+ res_block = ops.pad(res_block, padding_size)
+ res_block = tf.contrib.layers.conv2d(inputs=res_block, scope='conv_1')
+
+ output_net = res_block + input_net
+
+ return output_net
+
+
+def generator_bottleneck(input_net, residual_block_num=6, num_outputs=256):
+ """Bottleneck module in Generator.
+
+ Residual bottleneck pathway in Generator.
+
+ PyTorch Version:
+ https://github.com/yunjey/StarGAN/blob/fbdb6a6ce2a4a92e1dc034faec765e0dbe4b8164/model.py#L40
+
+ Args:
+ input_net: Tensor of shape (batch_size, h / 4, w / 4, 256).
+ residual_block_num: (int) Number of residual blocks. Defaults to 6 per the
+ original implementation.
+ num_outputs: (int) Number of filters in the residual bottleneck. Defaults
+ to 256 per the original implementation.
+
+ Returns:
+ Tensor of shape (batch_size, h / 4, w / 4, 256).
+
+ Raises:
+ ValueError: If the rank of the input tensor is not 4,
+ or the last channel of the input_tensor is not explicitly defined,
+ or the last channel of the input_tensor is not the same as num_outputs.
+ """
+
+ # Check the rank of input_net.
+ input_net.shape.assert_has_rank(4)
+
+ # Check the last (channel) dimension of the input_net.
+ if input_net.shape[-1]:
+ if input_net.shape[-1] != num_outputs:
+ raise ValueError(
+ 'The last dimension of the input_net should be the same as '
+ 'num_outputs: but {} vs. {} instead.'.format(input_net.shape[-1],
+ num_outputs))
+ else:
+ raise ValueError(
+ 'The last dimension of the input_net should be explicitly defined.')
+
+ with tf.variable_scope('generator_bottleneck'):
+
+ bottleneck = input_net
+
+ for i in range(residual_block_num):
+
+ bottleneck = _residual_block(
+ input_net=bottleneck,
+ num_outputs=num_outputs,
+ kernel_size=3,
+ stride=1,
+ padding_size=1,
+ activation_fn=tf.nn.relu,
+ normalizer_fn=tf.contrib.layers.instance_norm,
+ name='residual_block_{}'.format(i))
+
+ return bottleneck
+
+
+def generator_up_sample(input_net, num_outputs):
+ """Up-sampling module in Generator.
+
+ Up sampling path for image generation in the Generator.
+
+ PyTorch Version:
+ https://github.com/yunjey/StarGAN/blob/fbdb6a6ce2a4a92e1dc034faec765e0dbe4b8164/model.py#L44
+
+ Args:
+ input_net: Tensor of shape (batch_size, h / 4, w / 4, 256).
+ num_outputs: (int) Number of channel for the output tensor.
+
+ Returns:
+ Tensor of shape (batch_size, h, w, num_outputs).
+ """
+
+ with tf.variable_scope('generator_up_sample'):
+
+ with tf.contrib.framework.arg_scope(
+ [tf.contrib.layers.conv2d_transpose],
+ kernel_size=4,
+ stride=2,
+ padding='VALID',
+ normalizer_fn=tf.contrib.layers.instance_norm,
+ activation_fn=tf.nn.relu):
+
+ up_sample = tf.contrib.layers.conv2d_transpose(
+ inputs=input_net, num_outputs=128, scope='deconv_0')
+ up_sample = up_sample[:, 1:-1, 1:-1, :]
+
+ up_sample = tf.contrib.layers.conv2d_transpose(
+ inputs=up_sample, num_outputs=64, scope='deconv_1')
+ up_sample = up_sample[:, 1:-1, 1:-1, :]
+
+ output_net = ops.pad(up_sample, 3)
+ output_net = tf.contrib.layers.conv2d(
+ inputs=output_net,
+ num_outputs=num_outputs,
+ kernel_size=7,
+ stride=1,
+ padding='VALID',
+ activation_fn=tf.nn.tanh,
+ normalizer_fn=None,
+ biases_initializer=None,
+ scope='conv_0')
+
+ return output_net
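+
+# Shape example (comment added for clarity, mirrored from layers_test.py): a
+# (2, 32, 32, 256) input passes through two stride-2 transposed convolutions
+# and a final 7x7 convolution, yielding (2, 128, 128, num_outputs).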
+
+
+def discriminator_input_hidden(input_net, hidden_layer=6, init_num_outputs=64):
+ """Input Layer + Hidden Layer in the Discriminator.
+
+ Feature extraction pathway in the Discriminator.
+
+ PyTorch Version:
+ https://github.com/yunjey/StarGAN/blob/fbdb6a6ce2a4a92e1dc034faec765e0dbe4b8164/model.py#L68
+
+ Args:
+ input_net: Tensor of shape (batch_size, h, w, 3) as a batch of images.
+ hidden_layer: (int) Number of hidden layers. Defaults to 6 per the original
+ implementation.
+ init_num_outputs: (int) Number of filters in the first hidden layer. The
+ number of filters doubles after each layer. Defaults to 64 per the
+ original implementation.
+
+ Returns:
+ Tensor of shape (batch_size, h / 64, w / 64, 2048) as features.
+ """
+
+ num_outputs = init_num_outputs
+
+ with tf.variable_scope('discriminator_input_hidden'):
+
+ hidden = input_net
+
+ for i in range(hidden_layer):
+
+ hidden = ops.pad(hidden, 1)
+ hidden = tf.contrib.layers.conv2d(
+ inputs=hidden,
+ num_outputs=num_outputs,
+ kernel_size=4,
+ stride=2,
+ padding='VALID',
+ activation_fn=None,
+ normalizer_fn=None,
+ scope='conv_{}'.format(i))
+ hidden = tf.nn.leaky_relu(hidden, alpha=0.01)
+
+ num_outputs = 2 * num_outputs
+
+ return hidden
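+
+# Shape example (comment added for clarity, mirrored from layers_test.py): a
+# (2, 128, 128, 3) input passed through 6 stride-2 layers with
+# 64, 128, ..., 2048 filters yields a (2, 2, 2, 2048) feature Tensor.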
+
+
+def discriminator_output_source(input_net):
+ """Output Layer for Source in the Discriminator.
+
+ Determines whether the image is real or fake based on the extracted
+ features. Following the original paper's design, the output is not a simple
+ (batch_size,) shape Tensor but rather a (batch_size, h / 64, w / 64, 1)
+ shape Tensor. It is reduced to the correct shape later when everything is
+ pieced together (see network.discriminator).
+
+ PyTorch Version:
+ https://github.com/yunjey/StarGAN/blob/fbdb6a6ce2a4a92e1dc034faec765e0dbe4b8164/model.py#L79
+
+ Args:
+ input_net: Tensor of shape (batch_size, h / 64, w / 64, 2048) as features.
+
+ Returns:
+ Tensor of shape (batch_size, h / 64, w / 64, 1) as the score.
+ """
+
+ with tf.variable_scope('discriminator_output_source'):
+
+ output_src = ops.pad(input_net, 1)
+ output_src = tf.contrib.layers.conv2d(
+ inputs=output_src,
+ num_outputs=1,
+ kernel_size=3,
+ stride=1,
+ padding='VALID',
+ activation_fn=None,
+ normalizer_fn=None,
+ biases_initializer=None,
+ scope='conv')
+
+ return output_src
+
+
+def discriminator_output_class(input_net, class_num):
+ """Output Layer for Domain Classification in the Discriminator.
+
+ The original paper uses a convolution layer whose kernel size equals the
+ height and width of the Tensor. We use an equivalent operation here: we
+ first flatten the Tensor to shape (batch_size, K) and then apply a fully
+ connected layer.
+
+ PyTorch Version:
+ https://github.com/yunjey/StarGAN/blob/fbdb6a6ce2a4a92e1dc034faec765e0dbe4b8164/model.py#L80
+
+ Args:
+ input_net: Tensor of shape (batch_size, h / 64, w / 64, 2048).
+ class_num: Number of output classes to be predicted.
+
+ Returns:
+ Tensor of shape (batch_size, class_num).
+ """
+
+ with tf.variable_scope('discriminator_output_class'):
+
+ output_cls = tf.contrib.layers.flatten(input_net, scope='flatten')
+ output_cls = tf.contrib.layers.fully_connected(
+ inputs=output_cls,
+ num_outputs=class_num,
+ activation_fn=None,
+ normalizer_fn=None,
+ biases_initializer=None,
+ scope='fully_connected')
+
+ return output_cls
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/layers_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/layers_test.py"
new file mode 100644
index 00000000..faaf8297
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/layers_test.py"
@@ -0,0 +1,137 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+import tensorflow as tf
+
+import layers
+
+
+class LayersTest(tf.test.TestCase):
+
+ def test_residual_block(self):
+
+ n = 2
+ h = 32
+ w = h
+ c = 256
+
+ input_tensor = tf.random_uniform((n, h, w, c))
+ output_tensor = layers._residual_block(
+ input_net=input_tensor,
+ num_outputs=c,
+ kernel_size=3,
+ stride=1,
+ padding_size=1)
+
+ with self.test_session() as sess:
+ sess.run(tf.global_variables_initializer())
+ output = sess.run(output_tensor)
+ self.assertTupleEqual((n, h, w, c), output.shape)
+
+ def test_generator_down_sample(self):
+
+ n = 2
+ h = 128
+ w = h
+ c = 3 + 3
+
+ input_tensor = tf.random_uniform((n, h, w, c))
+ output_tensor = layers.generator_down_sample(input_tensor)
+
+ with self.test_session() as sess:
+ sess.run(tf.global_variables_initializer())
+ output = sess.run(output_tensor)
+ self.assertTupleEqual((n, h // 4, w // 4, 256), output.shape)
+
+ def test_generator_bottleneck(self):
+
+ n = 2
+ h = 32
+ w = h
+ c = 256
+
+ input_tensor = tf.random_uniform((n, h, w, c))
+ output_tensor = layers.generator_bottleneck(input_tensor)
+
+ with self.test_session() as sess:
+ sess.run(tf.global_variables_initializer())
+ output = sess.run(output_tensor)
+ self.assertTupleEqual((n, h, w, c), output.shape)
+
+ def test_generator_up_sample(self):
+
+ n = 2
+ h = 32
+ w = h
+ c = 256
+ c_out = 3
+
+ input_tensor = tf.random_uniform((n, h, w, c))
+ output_tensor = layers.generator_up_sample(input_tensor, c_out)
+
+ with self.test_session() as sess:
+ sess.run(tf.global_variables_initializer())
+ output = sess.run(output_tensor)
+ self.assertTupleEqual((n, h * 4, w * 4, c_out), output.shape)
+
+ def test_discriminator_input_hidden(self):
+
+ n = 2
+ h = 128
+ w = 128
+ c = 3
+
+ input_tensor = tf.random_uniform((n, h, w, c))
+ output_tensor = layers.discriminator_input_hidden(input_tensor)
+
+ with self.test_session() as sess:
+ sess.run(tf.global_variables_initializer())
+ output = sess.run(output_tensor)
+ self.assertTupleEqual((n, 2, 2, 2048), output.shape)
+
+ def test_discriminator_output_source(self):
+
+ n = 2
+ h = 2
+ w = 2
+ c = 2048
+
+ input_tensor = tf.random_uniform((n, h, w, c))
+ output_tensor = layers.discriminator_output_source(input_tensor)
+
+ with self.test_session() as sess:
+ sess.run(tf.global_variables_initializer())
+ output = sess.run(output_tensor)
+ self.assertTupleEqual((n, h, w, 1), output.shape)
+
+ def test_discriminator_output_class(self):
+
+ n = 2
+ h = 2
+ w = 2
+ c = 2048
+ num_domain = 3
+
+ input_tensor = tf.random_uniform((n, h, w, c))
+ output_tensor = layers.discriminator_output_class(input_tensor, num_domain)
+
+ with self.test_session() as sess:
+ sess.run(tf.global_variables_initializer())
+ output = sess.run(output_tensor)
+ self.assertTupleEqual((n, num_domain), output.shape)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/network.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/network.py"
new file mode 100644
index 00000000..6d4d0790
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/network.py"
@@ -0,0 +1,102 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Neural network for a StarGAN model.
+
+This module contains the Generator and Discriminator neural networks used to
+build a StarGAN model.
+
+See https://arxiv.org/abs/1711.09020 for details about the model.
+
+See https://github.com/yunjey/StarGAN for the original PyTorch implementation.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+
+import tensorflow as tf
+
+import layers
+import ops
+
+
+def generator(inputs, targets):
+ """Generator module.
+
+ Piece everything together for the Generator.
+
+ PyTorch Version:
+ https://github.com/yunjey/StarGAN/blob/fbdb6a6ce2a4a92e1dc034faec765e0dbe4b8164/model.py#L22
+
+ Args:
+ inputs: Tensor of shape (batch_size, h, w, c) representing the
+ images/information that we want to transform.
+ targets: Tensor of shape (batch_size, num_domains) representing the target
+ domain the generator should transform the image/information to.
+
+ Returns:
+    Tensor of shape (batch_size, h, w, c), the same shape as `inputs`.
+ """
+
+ with tf.variable_scope('generator'):
+
+ input_with_condition = ops.condition_input_with_pixel_padding(
+ inputs, targets)
+
+ down_sample = layers.generator_down_sample(input_with_condition)
+
+ bottleneck = layers.generator_bottleneck(down_sample)
+
+ up_sample = layers.generator_up_sample(bottleneck, inputs.shape[-1])
+
+ return up_sample
+
+
+def discriminator(input_net, class_num):
+ """Discriminator Module.
+
+  Piece everything together and reshape the output source tensor.
+
+ PyTorch Version:
+ https://github.com/yunjey/StarGAN/blob/fbdb6a6ce2a4a92e1dc034faec765e0dbe4b8164/model.py#L63
+
+ Notes:
+    The PyTorch version runs the reduce_mean operation later in its solver:
+ https://github.com/yunjey/StarGAN/blob/fbdb6a6ce2a4a92e1dc034faec765e0dbe4b8164/solver.py#L245
+
+ Args:
+ input_net: Tensor of shape (batch_size, h, w, c) as batch of images.
+    class_num: (int) number of domains to be predicted.
+
+ Returns:
+    output_src: Tensor of shape (batch_size) where each value is a logit
+      representing whether the image is real or fake.
+    output_cls: Tensor of shape (batch_size, class_num) where each value is a
+      logit representing whether the image is in the associated domain.
+ """
+
+ with tf.variable_scope('discriminator'):
+
+ hidden = layers.discriminator_input_hidden(input_net)
+
+ output_src = layers.discriminator_output_source(hidden)
+ output_src = tf.contrib.layers.flatten(output_src)
+ output_src = tf.reduce_mean(output_src, axis=1)
+
+ output_cls = layers.discriminator_output_class(hidden, class_num)
+
+ return output_src, output_cls
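+
+
+# Usage sketch (shapes only, mirroring network_test.py; not executed here):
+# given `images` of shape (2, 128, 128, 3) and one-hot target `labels` of
+# shape (2, 3),
+#   fake = generator(images, labels)                  # -> (2, 128, 128, 3)
+#   src_logits, cls_logits = discriminator(fake, 3)   # -> (2,), (2, 3)
+# `src_logits` holds the real/fake logits and `cls_logits` the per-domain
+# logits described in the docstrings above.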
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/network_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/network_test.py"
new file mode 100644
index 00000000..a89c0156
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/network_test.py"
@@ -0,0 +1,61 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+import tensorflow as tf
+
+import network
+
+
+class NetworkTest(tf.test.TestCase):
+
+ def test_generator(self):
+
+ n = 2
+ h = 128
+ w = h
+ c = 4
+ class_num = 3
+
+ input_tensor = tf.random_uniform((n, h, w, c))
+ target_tensor = tf.random_uniform((n, class_num))
+ output_tensor = network.generator(input_tensor, target_tensor)
+
+ with self.test_session() as sess:
+ sess.run(tf.global_variables_initializer())
+ output = sess.run(output_tensor)
+ self.assertTupleEqual((n, h, w, c), output.shape)
+
+ def test_discriminator(self):
+
+ n = 2
+ h = 128
+ w = h
+ c = 3
+ class_num = 3
+
+ input_tensor = tf.random_uniform((n, h, w, c))
+ output_src_tensor, output_cls_tensor = network.discriminator(
+ input_tensor, class_num)
+
+ with self.test_session() as sess:
+ sess.run(tf.global_variables_initializer())
+ output_src, output_cls = sess.run([output_src_tensor, output_cls_tensor])
+ self.assertEqual(1, len(output_src.shape))
+ self.assertEqual(n, output_src.shape[0])
+ self.assertTupleEqual((n, class_num), output_cls.shape)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/ops.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/ops.py"
new file mode 100644
index 00000000..c2c1797e
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/ops.py"
@@ -0,0 +1,106 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Ops for a StarGAN model.
+
+This module contains basic ops to build a StarGAN model.
+
+See https://arxiv.org/abs/1711.09020 for details about the model.
+
+See https://github.com/yunjey/StarGAN for the original PyTorch implementation.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+
+import tensorflow as tf
+
+
+def _padding_arg(h, w, input_format):
+ """Calculate the padding shape for tf.pad().
+
+ Args:
+ h: (int) padding on the height dim.
+ w: (int) padding on the width dim.
+ input_format: (string) the input format as in 'NHWC' or 'HWC'.
+ Raises:
+ ValueError: If input_format is not 'NHWC' or 'HWC'.
+
+ Returns:
+    A two-dimensional array representing the padding argument.
+ """
+ if input_format == 'NHWC':
+ return [[0, 0], [h, h], [w, w], [0, 0]]
+ elif input_format == 'HWC':
+ return [[h, h], [w, w], [0, 0]]
+ else:
+ raise ValueError('Input Format %s is not supported.' % input_format)
+
+
+def pad(input_net, padding_size):
+  """Pads the tensor with padding_size on both the height and width dims.
+
+ Args:
+ input_net: Tensor in 3D ('HWC') or 4D ('NHWC').
+ padding_size: (int) the size of the padding.
+
+ Notes:
+    Original StarGAN uses zero padding instead of mirror padding.
+
+ Raises:
+ ValueError: If input_net Tensor is not 3D or 4D.
+
+ Returns:
+ Tensor with same rank as input_net but with padding on the height and width
+ dim.
+ """
+ if len(input_net.shape) == 4:
+ return tf.pad(input_net, _padding_arg(padding_size, padding_size, 'NHWC'))
+ elif len(input_net.shape) == 3:
+ return tf.pad(input_net, _padding_arg(padding_size, padding_size, 'HWC'))
+ else:
+    raise ValueError('The input tensor needs to be either 3D or 4D.')
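+
+# Example (mirroring ops_test.py): padding a (2, 128, 64, 3) batch with
+# padding_size=3, i.e.
+#   padded = pad(images, padding_size=3)
+# yields a tensor of shape (2, 134, 70, 3): 3 rows/columns are added on
+# every side of the height and width dimensions.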
+
+
+def condition_input_with_pixel_padding(input_tensor, condition_tensor):
+  """Pad image tensor with condition tensor as additional color channels.
+
+ Args:
+ input_tensor: Tensor of shape (batch_size, h, w, c) representing images.
+ condition_tensor: Tensor of shape (batch_size, num_domains) representing the
+ associated domain for the image in input_tensor.
+
+ Returns:
+ Tensor of shape (batch_size, h, w, c + num_domains) representing the
+ conditioned data.
+
+ Raises:
+ ValueError: If `input_tensor` isn't rank 4.
+ ValueError: If `condition_tensor` isn't rank 2.
+    ValueError: If the batch dimensions (dimension 0) of `input_tensor` and
+      `condition_tensor` do not match.
+ """
+
+ input_tensor.shape.assert_has_rank(4)
+ condition_tensor.shape.assert_has_rank(2)
+ input_tensor.shape[:1].assert_is_compatible_with(condition_tensor.shape[:1])
+ condition_tensor = tf.expand_dims(condition_tensor, axis=1)
+ condition_tensor = tf.expand_dims(condition_tensor, axis=1)
+ condition_tensor = tf.tile(
+ condition_tensor, [1, input_tensor.shape[1], input_tensor.shape[2], 1])
+
+ return tf.concat([input_tensor, condition_tensor], -1)
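+
+
+# Shape sketch (mirroring ops_test.py): with `images` of shape
+# (2, 128, 128, 3) and one-hot `labels` of shape (2, 5),
+#   conditioned = condition_input_with_pixel_padding(images, labels)
+# returns a tensor of shape (2, 128, 128, 8), where every spatial position
+# carries the same 5-value label vector as extra channels.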
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/ops_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/ops_test.py"
new file mode 100644
index 00000000..580a8edb
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/ops_test.py"
@@ -0,0 +1,113 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+import tensorflow as tf
+
+import ops
+
+
+class OpsTest(tf.test.TestCase):
+
+ def test_padding_arg(self):
+
+ pad_h = 2
+ pad_w = 3
+
+ self.assertListEqual([[0, 0], [pad_h, pad_h], [pad_w, pad_w], [0, 0]],
+ ops._padding_arg(pad_h, pad_w, 'NHWC'))
+
+ def test_padding_arg_specify_format(self):
+
+ pad_h = 2
+ pad_w = 3
+
+ self.assertListEqual([[pad_h, pad_h], [pad_w, pad_w], [0, 0]],
+ ops._padding_arg(pad_h, pad_w, 'HWC'))
+
+ def test_padding_arg_invalid_format(self):
+
+ pad_h = 2
+ pad_w = 3
+
+ with self.assertRaises(ValueError):
+ ops._padding_arg(pad_h, pad_w, 'INVALID')
+
+ def test_padding(self):
+
+ n = 2
+ h = 128
+ w = 64
+ c = 3
+ pad = 3
+
+ test_input_tensor = tf.random_uniform((n, h, w, c))
+ test_output_tensor = ops.pad(test_input_tensor, padding_size=pad)
+
+ with self.test_session() as sess:
+ output = sess.run(test_output_tensor)
+ self.assertTupleEqual((n, h + pad * 2, w + pad * 2, c), output.shape)
+
+ def test_padding_with_3D_tensor(self):
+
+ h = 128
+ w = 64
+ c = 3
+ pad = 3
+
+ test_input_tensor = tf.random_uniform((h, w, c))
+ test_output_tensor = ops.pad(test_input_tensor, padding_size=pad)
+
+ with self.test_session() as sess:
+ output = sess.run(test_output_tensor)
+ self.assertTupleEqual((h + pad * 2, w + pad * 2, c), output.shape)
+
+ def test_padding_with_tensor_of_invalid_shape(self):
+
+ n = 2
+ invalid_rank = 1
+ h = 128
+ w = 64
+ c = 3
+ pad = 3
+
+ test_input_tensor = tf.random_uniform((n, invalid_rank, h, w, c))
+
+ with self.assertRaises(ValueError):
+ ops.pad(test_input_tensor, padding_size=pad)
+
+ def test_condition_input_with_pixel_padding(self):
+
+ n = 2
+ h = 128
+ w = h
+ c = 3
+ num_label = 5
+
+ input_tensor = tf.random_uniform((n, h, w, c))
+ label_tensor = tf.random_uniform((n, num_label))
+ output_tensor = ops.condition_input_with_pixel_padding(
+ input_tensor, label_tensor)
+
+ with self.test_session() as sess:
+ labels, outputs = sess.run([label_tensor, output_tensor])
+ self.assertTupleEqual((n, h, w, c + num_label), outputs.shape)
+ for label, output in zip(labels, outputs):
+ for i in range(output.shape[0]):
+ for j in range(output.shape[1]):
+ self.assertListEqual(label.tolist(), output[i, j, c:].tolist())
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/train.py"
new file mode 100644
index 00000000..a959da71
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/train.py"
@@ -0,0 +1,228 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Trains a StarGAN model."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+from absl import flags
+import tensorflow as tf
+
+import data_provider
+import network
+
+# FLAGS for data.
+flags.DEFINE_multi_string(
+ 'image_file_patterns', None,
+    'List of file patterns for different domains of images '
+    '(e.g. [\'black_hair\', \'blond_hair\', \'brown_hair\']).')
+flags.DEFINE_integer('batch_size', 6, 'The number of images in each batch.')
+flags.DEFINE_integer('patch_size', 128, 'The patch size of images.')
+
+flags.DEFINE_string('train_log_dir', '/tmp/stargan/',
+ 'Directory where to write event logs.')
+
+# FLAGS for training hyper-parameters.
+flags.DEFINE_float('generator_lr', 1e-4, 'The generator learning rate.')
+flags.DEFINE_float('discriminator_lr', 1e-4, 'The discriminator learning rate.')
+flags.DEFINE_integer('max_number_of_steps', 1000000,
+ 'The maximum number of gradient steps.')
+flags.DEFINE_float('adam_beta1', 0.5, 'Adam Beta 1 for the Adam optimizer.')
+flags.DEFINE_float('adam_beta2', 0.999, 'Adam Beta 2 for the Adam optimizer.')
+flags.DEFINE_float('gen_disc_step_ratio', 0.2,
+ 'Generator:Discriminator training step ratio.')
+
+# FLAGS for distributed training.
+flags.DEFINE_string('master', '', 'Name of the TensorFlow master to use.')
+flags.DEFINE_integer(
+ 'ps_tasks', 0,
+ 'The number of parameter servers. If the value is 0, then the parameters '
+ 'are handled locally by the worker.')
+flags.DEFINE_integer(
+ 'task', 0,
+ 'The Task ID. This value is used when training with multiple workers to '
+ 'identify each worker.')
+
+FLAGS = flags.FLAGS
+tfgan = tf.contrib.gan
+
+
+def _define_model(images, labels):
+ """Create the StarGAN Model.
+
+ Args:
+ images: `Tensor` or list of `Tensor` of shape (N, H, W, C).
+ labels: `Tensor` or list of `Tensor` of shape (N, num_domains).
+
+ Returns:
+ `StarGANModel` namedtuple.
+ """
+
+ return tfgan.stargan_model(
+ generator_fn=network.generator,
+ discriminator_fn=network.discriminator,
+ input_data=images,
+ input_data_domain_label=labels)
+
+
+def _get_lr(base_lr):
+ """Returns a learning rate `Tensor`.
+
+ Args:
+ base_lr: A scalar float `Tensor` or a Python number. The base learning
+ rate.
+
+ Returns:
+ A scalar float `Tensor` of learning rate which equals `base_lr` when the
+ global training step is less than FLAGS.max_number_of_steps / 2, afterwards
+ it linearly decays to zero.
+ """
+ global_step = tf.train.get_or_create_global_step()
+ lr_constant_steps = FLAGS.max_number_of_steps // 2
+
+ def _lr_decay():
+ return tf.train.polynomial_decay(
+ learning_rate=base_lr,
+ global_step=(global_step - lr_constant_steps),
+ decay_steps=(FLAGS.max_number_of_steps - lr_constant_steps),
+ end_learning_rate=0.0)
+
+ return tf.cond(global_step < lr_constant_steps, lambda: base_lr, _lr_decay)
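+
+# Worked example of the schedule (as exercised in train_test.py): with
+# FLAGS.max_number_of_steps = 10 and base_lr = 0.01, lr_constant_steps is 5,
+# so _get_lr returns 0.01 at global step 2, while at step 9 the linear decay
+# gives 0.01 * (10 - 9) / (10 - 5) = 0.002.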
+
+
+def _get_optimizer(gen_lr, dis_lr):
+ """Returns generator optimizer and discriminator optimizer.
+
+ Args:
+ gen_lr: A scalar float `Tensor` or a Python number. The Generator learning
+ rate.
+ dis_lr: A scalar float `Tensor` or a Python number. The Discriminator
+ learning rate.
+
+ Returns:
+ A tuple of generator optimizer and discriminator optimizer.
+ """
+ gen_opt = tf.train.AdamOptimizer(
+ gen_lr, beta1=FLAGS.adam_beta1, beta2=FLAGS.adam_beta2, use_locking=True)
+ dis_opt = tf.train.AdamOptimizer(
+ dis_lr, beta1=FLAGS.adam_beta1, beta2=FLAGS.adam_beta2, use_locking=True)
+ return gen_opt, dis_opt
+
+
+def _define_train_ops(model, loss):
+ """Defines train ops that trains `stargan_model` with `stargan_loss`.
+
+ Args:
+ model: A `StarGANModel` namedtuple.
+ loss: A `StarGANLoss` namedtuple containing all losses for
+ `stargan_model`.
+
+ Returns:
+ A `GANTrainOps` namedtuple.
+ """
+
+ gen_lr = _get_lr(FLAGS.generator_lr)
+ dis_lr = _get_lr(FLAGS.discriminator_lr)
+ gen_opt, dis_opt = _get_optimizer(gen_lr, dis_lr)
+ train_ops = tfgan.gan_train_ops(
+ model,
+ loss,
+ generator_optimizer=gen_opt,
+ discriminator_optimizer=dis_opt,
+ summarize_gradients=True,
+ colocate_gradients_with_ops=True,
+ aggregation_method=tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N)
+
+ tf.summary.scalar('generator_lr', gen_lr)
+ tf.summary.scalar('discriminator_lr', dis_lr)
+
+ return train_ops
+
+
+def _define_train_step():
+ """Get the training step for generator and discriminator for each GAN step.
+
+ Returns:
+ GANTrainSteps namedtuple representing the training step configuration.
+ """
+
+ if FLAGS.gen_disc_step_ratio <= 1:
+ discriminator_step = int(1 / FLAGS.gen_disc_step_ratio)
+ return tfgan.GANTrainSteps(1, discriminator_step)
+ else:
+ generator_step = int(FLAGS.gen_disc_step_ratio)
+ return tfgan.GANTrainSteps(generator_step, 1)
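+
+# Example (matching train_test.py): gen_disc_step_ratio = 0.5 yields
+# GANTrainSteps(1, 2) (one generator step per two discriminator steps),
+# while a ratio of 3 yields GANTrainSteps(3, 1).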
+
+
+def main(_):
+
+ # Create the log_dir if not exist.
+ if not tf.gfile.Exists(FLAGS.train_log_dir):
+ tf.gfile.MakeDirs(FLAGS.train_log_dir)
+
+ # Shard the model to different parameter servers.
+ with tf.device(tf.train.replica_device_setter(FLAGS.ps_tasks)):
+
+ # Create the input dataset.
+ with tf.name_scope('inputs'):
+ images, labels = data_provider.provide_data(
+ FLAGS.image_file_patterns, FLAGS.batch_size, FLAGS.patch_size)
+
+ # Define the model.
+ with tf.name_scope('model'):
+ model = _define_model(images, labels)
+
+ # Add image summary.
+ tfgan.eval.add_stargan_image_summaries(
+ model,
+ num_images=len(FLAGS.image_file_patterns) * FLAGS.batch_size,
+ display_diffs=True)
+
+ # Define the model loss.
+ loss = tfgan.stargan_loss(model)
+
+ # Define the train ops.
+ with tf.name_scope('train_ops'):
+ train_ops = _define_train_ops(model, loss)
+
+ # Define the train steps.
+ train_steps = _define_train_step()
+
+ # Define a status message.
+ status_message = tf.string_join(
+ [
+ 'Starting train step: ',
+ tf.as_string(tf.train.get_or_create_global_step())
+ ],
+ name='status_message')
+
+ # Train the model.
+ tfgan.gan_train(
+ train_ops,
+ FLAGS.train_log_dir,
+ get_hooks_fn=tfgan.get_sequential_train_hooks(train_steps),
+ hooks=[
+ tf.train.StopAtStepHook(num_steps=FLAGS.max_number_of_steps),
+ tf.train.LoggingTensorHook([status_message], every_n_iter=10)
+ ],
+ master=FLAGS.master,
+ is_chief=FLAGS.task == 0)
+
+
+if __name__ == '__main__':
+ tf.flags.mark_flag_as_required('image_file_patterns')
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/train_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/train_test.py"
new file mode 100644
index 00000000..b35c7deb
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan/train_test.py"
@@ -0,0 +1,141 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for stargan.train."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+from absl import flags
+import numpy as np
+import tensorflow as tf
+
+import train
+
+FLAGS = flags.FLAGS
+mock = tf.test.mock
+tfgan = tf.contrib.gan
+
+
+def _test_generator(input_images, _):
+ """Simple generator function."""
+ return input_images * tf.get_variable('dummy_g', initializer=2.0)
+
+
+def _test_discriminator(inputs, num_domains):
+ """Differentiable dummy discriminator for StarGAN."""
+
+ hidden = tf.contrib.layers.flatten(inputs)
+
+ output_src = tf.reduce_mean(hidden, axis=1)
+
+ output_cls = tf.contrib.layers.fully_connected(
+ inputs=hidden,
+ num_outputs=num_domains,
+ activation_fn=None,
+ normalizer_fn=None,
+ biases_initializer=None)
+ return output_src, output_cls
+
+
+train.network.generator = _test_generator
+train.network.discriminator = _test_discriminator
+
+
+class TrainTest(tf.test.TestCase):
+
+ def test_define_model(self):
+ FLAGS.batch_size = 2
+ images_shape = [FLAGS.batch_size, 4, 4, 3]
+ images_np = np.zeros(shape=images_shape)
+ images = tf.constant(images_np, dtype=tf.float32)
+ labels = tf.one_hot([0] * FLAGS.batch_size, 2)
+
+ model = train._define_model(images, labels)
+ self.assertIsInstance(model, tfgan.StarGANModel)
+ self.assertShapeEqual(images_np, model.generated_data)
+ self.assertShapeEqual(images_np, model.reconstructed_data)
+ self.assertTrue(isinstance(model.discriminator_variables, list))
+ self.assertTrue(isinstance(model.generator_variables, list))
+ self.assertIsInstance(model.discriminator_scope, tf.VariableScope)
+    self.assertIsInstance(model.generator_scope, tf.VariableScope)
+ self.assertTrue(callable(model.discriminator_fn))
+ self.assertTrue(callable(model.generator_fn))
+
+ @mock.patch.object(tf.train, 'get_or_create_global_step', autospec=True)
+ def test_get_lr(self, mock_get_or_create_global_step):
+ FLAGS.max_number_of_steps = 10
+ base_lr = 0.01
+ with self.test_session(use_gpu=True) as sess:
+ mock_get_or_create_global_step.return_value = tf.constant(2)
+ lr_step2 = sess.run(train._get_lr(base_lr))
+ mock_get_or_create_global_step.return_value = tf.constant(9)
+ lr_step9 = sess.run(train._get_lr(base_lr))
+
+ self.assertAlmostEqual(base_lr, lr_step2)
+ self.assertAlmostEqual(base_lr * 0.2, lr_step9)
+
+ @mock.patch.object(tf.summary, 'scalar', autospec=True)
+ def test_define_train_ops(self, mock_summary_scalar):
+
+ FLAGS.batch_size = 2
+ FLAGS.generator_lr = 0.1
+ FLAGS.discriminator_lr = 0.01
+
+ images_shape = [FLAGS.batch_size, 4, 4, 3]
+ images = tf.zeros(images_shape, dtype=tf.float32)
+ labels = tf.one_hot([0] * FLAGS.batch_size, 2)
+
+ model = train._define_model(images, labels)
+ loss = tfgan.stargan_loss(model)
+ train_ops = train._define_train_ops(model, loss)
+
+ self.assertIsInstance(train_ops, tfgan.GANTrainOps)
+ mock_summary_scalar.assert_has_calls([
+ mock.call('generator_lr', mock.ANY),
+ mock.call('discriminator_lr', mock.ANY)
+ ])
+
+ def test_get_train_step(self):
+
+ FLAGS.gen_disc_step_ratio = 0.5
+ train_steps = train._define_train_step()
+ self.assertEqual(1, train_steps.generator_train_steps)
+ self.assertEqual(2, train_steps.discriminator_train_steps)
+
+ FLAGS.gen_disc_step_ratio = 3
+ train_steps = train._define_train_step()
+ self.assertEqual(3, train_steps.generator_train_steps)
+ self.assertEqual(1, train_steps.discriminator_train_steps)
+
+ @mock.patch.object(
+ train.data_provider, 'provide_data', autospec=True)
+ def test_main(self, mock_provide_data):
+ FLAGS.image_file_patterns = ['/tmp/A/*.jpg', '/tmp/B/*.jpg', '/tmp/C/*.jpg']
+ FLAGS.max_number_of_steps = 10
+ FLAGS.batch_size = 2
+ num_domains = 3
+
+ images_shape = [FLAGS.batch_size, FLAGS.patch_size, FLAGS.patch_size, 3]
+ img_list = [tf.zeros(images_shape)] * num_domains
+ lbl_list = [tf.one_hot([0] * FLAGS.batch_size, num_domains)] * num_domains
+ mock_provide_data.return_value = (img_list, lbl_list)
+
+ train.main(None)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan_estimator/data/celeba_test_split_images.npy" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan_estimator/data/celeba_test_split_images.npy"
new file mode 100644
index 00000000..8288ba0e
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan_estimator/data/celeba_test_split_images.npy" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan_estimator/data/celeba_test_split_labels.npy" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan_estimator/data/celeba_test_split_labels.npy"
new file mode 100644
index 00000000..8288ba0e
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan_estimator/data/celeba_test_split_labels.npy" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan_estimator/data_provider.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan_estimator/data_provider.py"
new file mode 100644
index 00000000..53a11238
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan_estimator/data_provider.py"
@@ -0,0 +1,37 @@
+"""StarGAN Estimator data provider."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import numpy as np
+import data_provider
+from google3.pyglib import resources
+
+
+provide_data = data_provider.provide_data
+
+
+def provide_celeba_test_set():
+  """Provides one example image of every class, and the corresponding labels.
+
+ Returns:
+ An `np.array` of shape (num_domains, H, W, C) representing the images.
+ Values are in [-1, 1].
+ An `np.array` of shape (num_domains, num_domains) representing the labels.
+
+ Raises:
+ ValueError: If test data is inconsistent or malformed.
+ """
+ base_dir = 'google3/third_party/tensorflow_models/gan/stargan_estimator/data'
+ images_fn = os.path.join(base_dir, 'celeba_test_split_images.npy')
+ with resources.GetResourceAsFile(images_fn) as f:
+ images_np = np.load(f)
+ labels_fn = os.path.join(base_dir, 'celeba_test_split_labels.npy')
+ with resources.GetResourceAsFile(labels_fn) as f:
+ labels_np = np.load(f)
+ if images_np.shape[0] != labels_np.shape[0]:
+ raise ValueError('Test data is malformed.')
+
+ return images_np, labels_np
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan_estimator/testdata/celeba/black/202598.jpg" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan_estimator/testdata/celeba/black/202598.jpg"
new file mode 100644
index 00000000..4bcdba0f
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan_estimator/testdata/celeba/black/202598.jpg" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan_estimator/testdata/celeba/blond/202599.jpg" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan_estimator/testdata/celeba/blond/202599.jpg"
new file mode 100644
index 00000000..1421bf9f
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan_estimator/testdata/celeba/blond/202599.jpg" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan_estimator/testdata/celeba/brown/202587.jpg" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan_estimator/testdata/celeba/brown/202587.jpg"
new file mode 100644
index 00000000..da8b5ab0
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan_estimator/testdata/celeba/brown/202587.jpg" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan_estimator/train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan_estimator/train.py"
new file mode 100644
index 00000000..51e5c7e7
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan_estimator/train.py"
@@ -0,0 +1,184 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Trains a StarGAN model."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import io
+import os
+
+from absl import flags
+import numpy as np
+import scipy.misc
+import tensorflow as tf
+
+import network
+import data_provider
+
+# FLAGS for data.
+flags.DEFINE_multi_string(
+ 'image_file_patterns', None,
+    'List of file patterns for different domains of images '
+    '(e.g. [\'black_hair\', \'blond_hair\', \'brown_hair\']).')
+flags.DEFINE_integer('batch_size', 6, 'The number of images in each batch.')
+flags.DEFINE_integer('patch_size', 128, 'The patch size of images.')
+
+# Write-to-disk flags.
+flags.DEFINE_string('output_dir', '/tmp/stargan/out/',
+ 'Directory where to write summary image.')
+
+# FLAGS for training hyper-parameters.
+flags.DEFINE_float('generator_lr', 1e-4, 'The generator learning rate.')
+flags.DEFINE_float('discriminator_lr', 1e-4, 'The discriminator learning rate.')
+flags.DEFINE_integer('max_number_of_steps', 1000000,
+ 'The maximum number of gradient steps.')
+flags.DEFINE_integer('steps_per_eval', 1000,
+ 'The number of steps after which we write eval to disk.')
+flags.DEFINE_float('adam_beta1', 0.5, 'Adam Beta 1 for the Adam optimizer.')
+flags.DEFINE_float('adam_beta2', 0.999, 'Adam Beta 2 for the Adam optimizer.')
+flags.DEFINE_float('gen_disc_step_ratio', 0.2,
+ 'Generator:Discriminator training step ratio.')
+
+# FLAGS for distributed training.
+flags.DEFINE_string('master', '', 'Name of the TensorFlow master to use.')
+flags.DEFINE_integer(
+ 'ps_tasks', 0,
+ 'The number of parameter servers. If the value is 0, then the parameters '
+ 'are handled locally by the worker.')
+flags.DEFINE_integer(
+ 'task', 0,
+ 'The Task ID. This value is used when training with multiple workers to '
+ 'identify each worker.')
+
+FLAGS = flags.FLAGS
+tfgan = tf.contrib.gan
+
+
+def _get_optimizer(gen_lr, dis_lr):
+ """Returns generator optimizer and discriminator optimizer.
+
+ Args:
+ gen_lr: A scalar float `Tensor` or a Python number. The Generator learning
+ rate.
+ dis_lr: A scalar float `Tensor` or a Python number. The Discriminator
+ learning rate.
+
+ Returns:
+ A tuple of generator optimizer and discriminator optimizer.
+ """
+ gen_opt = tf.train.AdamOptimizer(
+ gen_lr, beta1=FLAGS.adam_beta1, beta2=FLAGS.adam_beta2, use_locking=True)
+ dis_opt = tf.train.AdamOptimizer(
+ dis_lr, beta1=FLAGS.adam_beta1, beta2=FLAGS.adam_beta2, use_locking=True)
+ return gen_opt, dis_opt
+
+
+def _define_train_step():
+ """Get the training step for generator and discriminator for each GAN step.
+
+ Returns:
+ GANTrainSteps namedtuple representing the training step configuration.
+ """
+
+ if FLAGS.gen_disc_step_ratio <= 1:
+ discriminator_step = int(1 / FLAGS.gen_disc_step_ratio)
+ return tfgan.GANTrainSteps(1, discriminator_step)
+ else:
+ generator_step = int(FLAGS.gen_disc_step_ratio)
+ return tfgan.GANTrainSteps(generator_step, 1)
+
+
+def _get_summary_image(estimator, test_images_np):
+  """Returns a numpy image of the generator's output on the test images."""
+ num_domains = len(test_images_np)
+
+ img_rows = []
+ for img_np in test_images_np:
+ def test_input_fn():
+ dataset_imgs = [img_np] * num_domains # pylint:disable=cell-var-from-loop
+ dataset_lbls = [tf.one_hot([d], num_domains) for d in xrange(num_domains)]
+
+ # Make into a dataset.
+ dataset_imgs = np.stack(dataset_imgs)
+ dataset_imgs = np.expand_dims(dataset_imgs, 1)
+ dataset_lbls = tf.stack(dataset_lbls)
+ unused_tensor = tf.zeros(num_domains)
+ return tf.data.Dataset.from_tensor_slices(
+ ((dataset_imgs, dataset_lbls), unused_tensor))
+
+ prediction_iterable = estimator.predict(test_input_fn)
+ predictions = [prediction_iterable.next() for _ in xrange(num_domains)]
+ transform_row = np.concatenate([img_np] + predictions, 1)
+ img_rows.append(transform_row)
+
+ all_rows = np.concatenate(img_rows, 0)
+  # Normalize [-1, 1] to [0, 1].
+ normalized_summary = (all_rows + 1.0) / 2.0
+ return normalized_summary
+
+
+def _write_to_disk(summary_image, filename):
+ """Write to disk."""
+ buf = io.BytesIO()
+ scipy.misc.imsave(buf, summary_image, format='png')
+ buf.seek(0)
+ with tf.gfile.GFile(filename, 'w') as f:
+ f.write(buf.getvalue())
+
+
+def main(_, override_generator_fn=None, override_discriminator_fn=None):
+ # Create directories if not exist.
+ if not tf.gfile.Exists(FLAGS.output_dir):
+ tf.gfile.MakeDirs(FLAGS.output_dir)
+
+ # Make sure steps integers are consistent.
+ if FLAGS.max_number_of_steps % FLAGS.steps_per_eval != 0:
+ raise ValueError('`max_number_of_steps` must be divisible by '
+ '`steps_per_eval`.')
+
+ # Create optimizers.
+ gen_opt, dis_opt = _get_optimizer(FLAGS.generator_lr, FLAGS.discriminator_lr)
+
+ # Create estimator.
+  # TODO(joelshor): Add optional distribution strategy here.
+ stargan_estimator = tfgan.estimator.StarGANEstimator(
+ generator_fn=override_generator_fn or network.generator,
+ discriminator_fn=override_discriminator_fn or network.discriminator,
+ loss_fn=tfgan.stargan_loss,
+ generator_optimizer=gen_opt,
+ discriminator_optimizer=dis_opt,
+ get_hooks_fn=tfgan.get_sequential_train_hooks(_define_train_step()),
+ add_summaries=tfgan.estimator.SummaryType.IMAGES)
+
+ # Get input function for training and test images.
+ train_input_fn = lambda: data_provider.provide_data( # pylint:disable=g-long-lambda
+ FLAGS.image_file_patterns, FLAGS.batch_size, FLAGS.patch_size)
+ test_images_np, _ = data_provider.provide_celeba_test_set()
+ filename_str = os.path.join(FLAGS.output_dir, 'summary_image_%i.png')
+
+ # Periodically train and write prediction output to disk.
+ cur_step = 0
+ while cur_step < FLAGS.max_number_of_steps:
+ stargan_estimator.train(train_input_fn, steps=FLAGS.steps_per_eval)
+ cur_step += FLAGS.steps_per_eval
+ summary_img = _get_summary_image(stargan_estimator, test_images_np)
+ _write_to_disk(summary_img, filename_str % cur_step)
+
+
+if __name__ == '__main__':
+ tf.flags.mark_flag_as_required('image_file_patterns')
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan_estimator/train_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan_estimator/train_test.py"
new file mode 100644
index 00000000..7bc69191
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/stargan_estimator/train_test.py"
@@ -0,0 +1,73 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for stargan_estimator.train."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+
+from absl import flags
+import tensorflow as tf
+
+import train
+
+FLAGS = flags.FLAGS
+mock = tf.test.mock
+tfgan = tf.contrib.gan
+
+TESTDATA_DIR = 'google3/third_party/tensorflow_models/gan/stargan_estimator/testdata/celeba'
+
+
+def _test_generator(input_images, _):
+ """Simple generator function."""
+ return input_images * tf.get_variable('dummy_g', initializer=2.0)
+
+
+def _test_discriminator(inputs, num_domains):
+ """Differentiable dummy discriminator for StarGAN."""
+
+ hidden = tf.contrib.layers.flatten(inputs)
+
+ output_src = tf.reduce_mean(hidden, axis=1)
+
+ output_cls = tf.contrib.layers.fully_connected(
+ inputs=hidden,
+ num_outputs=num_domains,
+ activation_fn=None,
+ normalizer_fn=None,
+ biases_initializer=None)
+ return output_src, output_cls
+
+
+class TrainTest(tf.test.TestCase):
+
+ def test_main(self):
+
+ FLAGS.image_file_patterns = [
+ os.path.join(FLAGS.test_srcdir, TESTDATA_DIR, 'black/*.jpg'),
+ os.path.join(FLAGS.test_srcdir, TESTDATA_DIR, 'blond/*.jpg'),
+ os.path.join(FLAGS.test_srcdir, TESTDATA_DIR, 'brown/*.jpg'),
+ ]
+ FLAGS.max_number_of_steps = 1
+ FLAGS.steps_per_eval = 1
+ FLAGS.batch_size = 1
+ train.main(None, _test_generator, _test_discriminator)
+
+
+if __name__ == '__main__':
+ tf.test.main()
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/tutorial.ipynb" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/tutorial.ipynb"
new file mode 100644
index 00000000..164b433d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/gan/tutorial.ipynb"
@@ -0,0 +1,2256 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# TFGAN Tutorial\n",
+ "\n",
+ "## Authors: Joel Shor, joel-shor\n",
+ "\n",
+    "### For more complex examples, see [`tensorflow/models/gan`](https://github.com/tensorflow/models/tree/master/research/gan)\n",
+ "\n",
+ "\n",
+ "This notebook will walk you through the basics of using [TFGAN](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/gan) to define, train, and evaluate Generative Adversarial Networks. We describe the library's core features as well as some extra features. This colab assumes a familiarity with TensorFlow's Python API. For more on TensorFlow, please see [TensorFlow tutorials](https://www.tensorflow.org/tutorials/).\n",
+ "\n",
+ "Please note that running on GPU will significantly speed up the training steps, but is not required.\n",
+ "\n",
+ "Last update: 2018-02-10."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Table of Contents\n",
+ "Installation and Setup \n",
+ " Download Data \n",
+ "Unconditional GAN example \n",
+ " Input pipeline \n",
+ " Model \n",
+ " Loss \n",
+ " Train and evaluation \n",
+ "GANEstimator example \n",
+ " Input pipeline \n",
+ " Train \n",
+ " Eval \n",
+ "Conditional GAN example \n",
+ " Input pipeline \n",
+ " Model \n",
+ " Loss \n",
+ " Train and evaluation \n",
+ "InfoGAN example \n",
+ " Input pipeline \n",
+ " Model \n",
+ " Loss \n",
+ " Train and evaluation "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "## Installation and setup\n",
+ "\n",
+ "To make sure that your version of TensorFlow has TFGAN included, run\n",
+ "\n",
+ "```\n",
+ "python -c \"import tensorflow.contrib.gan as tfgan\"\n",
+ "```\n",
+ "\n",
+ "You also need to install the TFGAN models library.\n",
+ "\n",
+ "To check that these two steps work, execute the [`Imports`](#imports) cell. If it complains about unknown modules, restart the notebook after moving to the TFGAN models directory."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Make TFGAN models and TF-Slim models discoverable.\n",
+ "import sys\n",
+ "import os\n",
+ "# This is needed since the notebook is stored in the `tensorflow/models/gan` folder.\n",
+ "sys.path.append('..')\n",
+ "sys.path.append(os.path.join('..', 'slim'))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "### Imports"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from __future__ import absolute_import\n",
+ "from __future__ import division\n",
+ "from __future__ import print_function\n",
+ "\n",
+ "%matplotlib inline\n",
+ "import matplotlib.pyplot as plt\n",
+ "import numpy as np\n",
+ "import time\n",
+ "import functools\n",
+ "from six.moves import xrange # pylint: disable=redefined-builtin\n",
+ "\n",
+ "import tensorflow as tf\n",
+ "\n",
+ "# Main TFGAN library.\n",
+ "tfgan = tf.contrib.gan\n",
+ "\n",
+ "# TFGAN MNIST examples from `tensorflow/models`.\n",
+ "from mnist import data_provider\n",
+ "from mnist import util\n",
+ "\n",
+ "# TF-Slim data provider.\n",
+ "from datasets import download_and_convert_mnist\n",
+ "\n",
+ "# Shortcuts for later.\n",
+ "queues = tf.contrib.slim.queues\n",
+ "layers = tf.contrib.layers\n",
+ "ds = tf.contrib.distributions\n",
+ "framework = tf.contrib.framework"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Common functions\n",
+ "\n",
+ "These functions are used by many examples, so we define them here."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "leaky_relu = lambda net: tf.nn.leaky_relu(net, alpha=0.01)\n",
+ " \n",
+ "\n",
+ "def visualize_training_generator(train_step_num, start_time, data_np):\n",
+ " \"\"\"Visualize generator outputs during training.\n",
+ " \n",
+ " Args:\n",
+ " train_step_num: The training step number. A python integer.\n",
+ " start_time: Time when training started. The output of `time.time()`. A\n",
+ " python float.\n",
+    "    data_np: Data to plot. A numpy array, most likely from an evaluated TensorFlow\n",
+ " tensor.\n",
+ " \"\"\"\n",
+ " print('Training step: %i' % train_step_num)\n",
+ " time_since_start = (time.time() - start_time) / 60.0\n",
+ " print('Time since start: %f m' % time_since_start)\n",
+ " print('Steps per min: %f' % (train_step_num / time_since_start))\n",
+ " plt.axis('off')\n",
+ " plt.imshow(np.squeeze(data_np), cmap='gray')\n",
+ " plt.show()\n",
+ "\n",
+ "def visualize_digits(tensor_to_visualize):\n",
+ " \"\"\"Visualize an image once. Used to visualize generator before training.\n",
+ " \n",
+ " Args:\n",
+ " tensor_to_visualize: An image tensor to visualize. A python Tensor.\n",
+ " \"\"\"\n",
+ " with tf.Session() as sess:\n",
+ " sess.run(tf.global_variables_initializer())\n",
+ " with queues.QueueRunners(sess):\n",
+ " images_np = sess.run(tensor_to_visualize)\n",
+ " plt.axis('off')\n",
+ " plt.imshow(np.squeeze(images_np), cmap='gray')\n",
+ "\n",
+ "def evaluate_tfgan_loss(gan_loss, name=None):\n",
+ " \"\"\"Evaluate GAN losses. Used to check that the graph is correct.\n",
+ " \n",
+ " Args:\n",
+ " gan_loss: A GANLoss tuple.\n",
+ " name: Optional. If present, append to debug output.\n",
+ " \"\"\"\n",
+ " with tf.Session() as sess:\n",
+ " sess.run(tf.global_variables_initializer())\n",
+ " with queues.QueueRunners(sess):\n",
+ " gen_loss_np = sess.run(gan_loss.generator_loss)\n",
+ " dis_loss_np = sess.run(gan_loss.discriminator_loss)\n",
+ " if name:\n",
+ " print('%s generator loss: %f' % (name, gen_loss_np))\n",
+ " print('%s discriminator loss: %f'% (name, dis_loss_np))\n",
+ " else:\n",
+ " print('Generator loss: %f' % gen_loss_np)\n",
+ " print('Discriminator loss: %f'% dis_loss_np)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "### Download data"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Dataset files already exist. Exiting without re-creating them.\n"
+ ]
+ }
+ ],
+ "source": [
+ "MNIST_DATA_DIR = '/tmp/mnist-data'\n",
+ "\n",
+ "if not tf.gfile.Exists(MNIST_DATA_DIR):\n",
+ " tf.gfile.MakeDirs(MNIST_DATA_DIR)\n",
+ "\n",
+ "download_and_convert_mnist.run(MNIST_DATA_DIR)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "# Unconditional GAN Example\n",
+ "\n",
+ "With unconditional GANs, we want a generator network to produce realistic-looking digits. During training, the generator tries to produce realistic-enough digits to 'fool' a discriminator network, while the discriminator tries to distinguish real digits from generated ones. See the paper ['NIPS 2016 Tutorial: Generative Adversarial Networks'](https://arxiv.org/pdf/1701.00160.pdf) by Goodfellow or ['Generative Adversarial Networks'](https://arxiv.org/abs/1406.2661) by Goodfellow et al. for more details.\n",
+ "\n",
+ "The steps to using TFGAN to set up an unconditional GAN, in the simplest case, are as follows:\n",
+ "\n",
+ "1. **Model**: Set up the generator and discriminator graphs with a [`GANModel`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/gan/python/namedtuples.py#L39) tuple. Use [`tfgan.gan_model`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/gan/python/train.py#L64) or create one manually.\n",
+ "1. **Losses**: Set up the generator and discriminator losses with a [`GANLoss`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/gan/python/namedtuples.py#L115) tuple. Use [`tfgan.gan_loss`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/gan/python/train.py#L328) or create one manually.\n",
+ "1. **Train ops**: Set up TensorFlow ops that compute the loss, calculate the gradients, and update the weights with a [`GANTrainOps`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/gan/python/namedtuples.py#L128) tuple. Use [`tfgan.gan_train_ops`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/gan/python/train.py#L476) or create one manually.\n",
+ "1. **Run alternating train loop**: Run the training Ops. This can be done with [`tfgan.gan_train`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/gan/python/train.py#L661), or done manually.\n",
+ "\n",
+ "Each step can be performed by a TFGAN library call, or can be constructed manually for more control."
+ ]
+ },
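+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As a rough sketch (assuming `generator_fn`, `discriminator_fn`, `real_images`, and `noise` are already defined; the sections below construct these pieces explicitly), the four steps map onto four library calls:\n",
+    "\n",
+    "```python\n",
+    "gan_model = tfgan.gan_model(generator_fn, discriminator_fn,\n",
+    "                            real_data=real_images, generator_inputs=noise)\n",
+    "gan_loss = tfgan.gan_loss(gan_model)\n",
+    "train_ops = tfgan.gan_train_ops(gan_model, gan_loss,\n",
+    "                                generator_optimizer=tf.train.AdamOptimizer(1e-4),\n",
+    "                                discriminator_optimizer=tf.train.AdamOptimizer(1e-4))\n",
+    "tfgan.gan_train(train_ops, logdir='/tmp/tfgan_logdir')\n",
+    "```"
+   ]
+  },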
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "## Data input pipeline"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAfkAAACHCAYAAAAV6SDhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsnedznNeVp5/OGZ0TGjkDRCBIgCQYRIqkKInyrsayZXs8\noWZqtrZqq2b/mP2wtbVTW96tqd3ZGc/Ya82MbEqkxASIJEAQAJFzTt0I3Wh0QAMN7AfUew0wiCKN\nRM37VKnsIhhuA+97z73n/M7vgIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyM\njIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyM\njIyMjIyMjIyMjIyMjMzbg+IQ/+2tQ/y3ZWRkZGRk3nZeGcOVB7EKGRkZGRkZmYNHDvIyMjIyMjLf\nU+QgLyMjIyMj8z1FDvIyMjIyMjLfU9SHvYCDQKFQkJGRgd/vp7y8HLPZzObmJj09PYyPj7OyssLG\nxsZhL1NGRkZG5i1Fq9ViMBiw2+04HA48Hg8KhYLV1VXi8TjJZBKAcDjM3Nwcm5ubbG3tv/7830yQ\n9/l8XLp0if/8n/8zeXl5rK+v81/+y3/h17/+NYODg3KQl5GRkZF5Y4xGIz6fj2PHjlFbW8vZs2dR\nKpWMj48zNTXF4uIiAD09PSwsLLC+vi4H+b3A6/WSn5/PqVOnOHv2LG63G71ej0aj4dSpU6RSKdrb\n2+nv72dkZIStra0D+cZ/GyqVCo1Gg1qtxmKx4Pf7MZlMKJVKBgYGCAaDpNPpQ12jjMyrMJvNOBwO\n6uvrKS0txWazEYlEmJycpKWlheHh4QPb6F4XlUqF2WzG5/NRUlJCWVkZNpsNg8Hw3O+dm5ujpaWF\nSCTC+vo6AKlUitXVVVZWVohGo3u2Lo/HQ1ZWFsXFxWRnZ+PxeFCrX7yNr6+vMz09zezsLLOzs8zP\nz7O4uIhOp2NjY4PV1VVSqZS8l/yBGI1GCgsLRWD3+/34/X4CgYC4YFZUVJBIJABwOp2MjY0xNzdH\nOBze9/V9b4O8SqXCYDCQn59PQ0MDFy5coKamBqPRyNbWFkqlksrKSkwmE16vF5VKxdTUFOvr64fy\n0KtUKnQ6HSaTCbPZjNlsxmg04vV6KSkpwWazoVQqUalUbGxssLy8LGcfZI4kSqUStVqN3++nqqqK\nH/7wh5w/fx6v18v8/Dzd3d3E43GCwSCRSOTQnmOVSoVarcZkMqHT6XZ9TavV4vF4qKys5Ny5c1y4\ncAGfz4fFYnnu7xkaGsLlcjE/Py9SstLnGxoaYmRkhHQ6zebm5huvVa1Wo9PpKCoq4syZM5w5c4aq\nqiry8/PRarUoFIrnDktra2v09vYyMDDA4OAg/f39TExMYDQaWV9fJxgMkkwmSaVS4vdHo1HW1tbk\nveU10Gq1+P1+6uvr+fTTTzGZTGi1WvF1n8+36/en02n6+vpoa2tjbW2NtbW1P+jZeBXf2yBvsVio\nrKykrq6O06dPU1JSgtvtRqPRANspfLvdjk6nIzMzk5WVFR49esTKyoo4cR0USqUSi8VCXl4e9fX1\nFBQUEAgEcLvd2O12zGYzarWajY0NLBYLVquVr776ipWVlQNdp4zMd0EKkBcvXuRP/uRPyMvLw+Vy\noVAocDgc1NTUUFZWRn9/P4lE4tACitFoxO12c+rUKXJzc5/7DH6/n9raWvx+Pw6HA51O98Ksg9/v\n54c//OGuzXp1dZW5uTlu3LhBNBpleXn5D9pXLBYLRUVFfPDBB3z00Ue43W5sNhtqtfql2Ue1Wk1+\nfj4ej4fa2lpmZ2cJhULiJr+yskIqlRLf/9HRUe7cucPY2BihUOiN1/pvjUQiQX9/P5WVlaRSqRdm\ne3ZSVlbGf/yP/5F/+qd/YnNzk/HxcWKx2L6t73sR5BUKBWq1Gr1ej8lkIj8/n6KiIioqKigtLaWk\npASfz4dWqyUej4ubvE6nw263Y7fb8Xq96HQ6lMqDaTiQ1utyufD5fOTl5VFaWsrx48fJzs7G7XZj\nMpnQaDSk02m0Wi0qlYpYLEYkEqGjo0OcAvcKpVIp/tPr9dhsNkwm0ysf2mdZXV1lfn6eeDwu0pdH\nEZVKhV6vFyKZzMxMdDqdSH+Gw2GmpqaYnZ0V9bSjhCT0MZvNOJ1OcnNzMZlMwPY7sb6+TiQSYWFh\ngfn5ecLhMPF4fN/XpdPpCAQCVFRUcPLkSdRqNZFIhKdPn6LVavF6vZSWlnLu3DlWVlaYnZ0Vt8mD\npKCggHfeeYe6ujpycnJ2fU2tVmO328nPz0ev13/r32M0GikoKNj1a8lkkpycHDY2NlAqlYyMjDA6\nOsr4+PgbfVa9Xo/f76ewsJDy8nI0Gs0r9yqlUklGRgYZGRnA9o0yFouhUqnY3NwUN3bpYDI+Po7B\nYKCxsZEnT54Qi8UO7f1VKBSoVCry8/PJzc0lIyNj1+14a2uL9fV1VlZWCIVCbG1tkUqlCIVCrK6u\n7um++CqkrMjw8DAdHR2UlZURCARQKpWk02lSqRRarVbsK06nE4vFIkrEwWBQDvKvQgrYbrebnJwc\nfv7zn3PhwgWsVitGoxGdTodKpSKZTLK4uEgqlUKlUuF2u1+YfjsItFotbreb+vp6Ll68SF1dHQUF\nBRgMBtRqNQqFgkgkwuLiImtra0KxWVBQwLFjxwgEAqysrOzpw6xSqdBqtWg0GrxeL8eOHSMnJwev\n1/taf8/o6Cj37t1jdnaWSCSyZ+vba7RaLQ6Hg9raWi5cuMB7772H0+kUgbK7u5vPP/+cmzdvHskg\nbzQa8fv95Ofnc+LECT799FMRrBQKBdFolL6+Ph49esTdu3fp7e09kCCv1+vJzs4WWpJEIsHQ0BD/\n9b/+VywWC5cvX6aqqgqPx0NfX5+4UR40J06c4K//+q93/cwlFAqFCDRvgk6nw+v1cu3aNerq6nj0\n6BG3bt3i17/+9Rt9VqmsYDQaRXr+dZH+vMSzt3+Px0N+fj5qtZr5+XkmJycPLchLe9E777zDp59+\nKjQd0prT6TTRaJSBgQEePnwoAv6DBw8YHR0llUodmNZjc3OTZDLJ0NAQn3/+OWq1Go/Hg1arZW1t\njeXlZWw2G2azGdj+WapUKrKzsykoKKCtrW1f1/dWB3mFQkFmZiY5OTkUFxdTVFREcXExlZWVZGZm\notFoUKlU4sQbiUR49OgR4XAYjUbD+fPnDzTIS7f3qqoqkWUoKiqisLAQv99PRkYGSqWSZDLJ8vIy\nt27dor+/n83NTS5dusS1a9cwmUxYrVYyMjKeqyO+yXqys7Px+Xy4XC7cbjcejwe73Y7T6cTlcmG1\nWp/bAF9FKBSiqKiIW7du0dTURDKZPFLiHqPRKIJ7bW0t5eXlFBcXk5OTg8FgECWdoqIirl+/jsVi\nwePx8Pjx40NPYwYCAfFfQUEBRUVFu
Fwu8R5ItzaFQoFer0ehUGAwGMjMzOTu3bu0tLTse3pQCo5S\ngBwZGaGtrY2RkRE8Hg9LS0uo1Wp8Ph+VlZUsLi6yvLy8b+t5GeFwmJGREUwmE3a7fdfXpNtZX18f\nNpuN3NxczGbzd37npEOC0WhEqVRSWFjIxMQEfr+f9fV1VldXX2utPp+PDz/8kIqKijcK8DvX9DIM\nBgMej4erV6/idrsZGhqis7OT5uZmVldX972sIumSXC4XFRUVnD17lrq6OioqKnA6neKiJv1el8uF\nWq0mIyODzc1NVlZWcDgcNDY2cvfu3QMtA21tbTE7O8u9e/coLy+nvLwch8NBOp0mkUiIAC+hUCgY\nGxujq6trX99FeIuDvEajwWAwUFFRwalTp6irq6O8vJzCwkIR1J+tVa2urtLV1cXMzAxarZaysjKK\ni4sPbM16vR6Px8OFCxe4cuUKVVVV2Gw2kYba2toinU4TDAbp7e3lX/7lX3j8+LG49b///vtotVqM\nRqO48f8haLVaKisrqa2tJTc3l9zc3F2lAoCNjQ2R0kun0y8N1pKIyWAwkEqlqKysZHV1lb6+PhYW\nFg7kBvldUCqVOBwOKisruXbtGu+++y6BQACdTsfa2horKyuk02n0ej0ZGRk0NDRgMpnQ6/WMj4+z\nsLBwKGpwjUaDTqejtLSU+vp6qqqqKC4uJi8vTzzn0vqlTgyNRoPf78dut1NcXCzqsPudHtzJ1tYW\nk5OT9Pf3s7CwgNFoFM+C1WqlqKiI/v5+Ojs7D2Q9O5mbm+Px48eiRr1zzclkkpGREW7dukVWVhb1\n9fVkZ2djNptJJBLo9XosFsuuEteLkEpCPp+PnJwcfD4fS0tLrx3krVYr1dXV+Hw+cbve3NxkfX19\nV9lOeg91Op04kEiHrlcdDtRqNWq1WjxfQ0NDfPnllwwPD++q3e81CoUCpVKJ1WrF7XZTWlrK1atX\n+eM//mMRHGOxGMvLyywsLKBQKNDpdOJC4vf7ReZKrVYTCoVobGw8cK3H0tISS0tL9Pf3MzU1tWtf\nfxFTU1P09/fv+9741gZ5r9dLRUUFP/7xjzl79ix2u128dC97mHf+unSqlX7tTU/Hr4MkgDlz5gzV\n1dVkZGTsCtSpVIpIJMLdu3f51a9+RXd3N9FoFJvNti/rMRgMXL16lWvXrqHX6zEYDOh0OlKplEhP\nB4NB5ubmSCQSrKyssLy8/MJAb7PZ8Pl8HD9+HLfbjdPppKSkhNraWlpaWo5EkFcqlRgMBsrKyvjZ\nz35GdXU1mZmZIhANDw8zPDxMOBwWmZbi4mIcDgfZ2dniVnYYWQmXy0VZWRnXr1/n/PnzeDwe4vE4\nbW1tTExMsLCwgEqloqysjPPnz4u0Lmwf5jIyMrDZbM89cwfFUWyTm56e5s6dO3R0dDynO1lfXxc3\nfZvNRnNzMw0NDRiNRp48eUJNTQ1Xr17FYrGIQ+DLAr1CoRAZOIvFsqu2/F2Zn5/n5s2bnDx5ksLC\nQgCi0Shzc3P09fUxNDQEbLcter1ekZ2C7cuF1WoVZcBXIQXRvLw8CgoKcDgcLC8v79s7rNVqMZvN\nNDQ0cO7cOY4fP05RURFms5l0Os3CwgL37t2jpaWFrq4ulEolLpeL8+fPU19fz/Hjx1Gr1UJncBil\nn510d3dz+/ZtNjY2yM3NFVqfZ5G6pfY79rx1QV4Sy9XU1HD58mVOnz5NWVmZuGnGYjFR89hZT5Nu\nydKt9DA2HY/Hw/HjxykoKMDtdgPbm0kikSAWizE3N0dvby9ff/01Dx48IBaLodVqX9jisxeoVCq8\nXi+BQEC8TNIpVKqlS6KtZDJJNBolHA7vCnJST39JSQkqlYq1tTXRvii1Ah5GUHkRBoOB6upqzp8/\nz5kzZzCZTCwuLtLV1cXg4KAQRkWjUSEGy87OFm2WDocDg8FALBY78OfH4/Fw+vRpTpw4QUFBAYuL\ni/T29tLU1CQyDGq1mqWlJbxeLzk5ObhcLnHLlHwXvu0QfNBIh643CXp7wfr6OtFolNHR0ecyG5ub\nm6LXXafTMTU1xcrKClqtlqdPnxIMBlGpVFRVVZGXl4dWq31hkJf+nvHxcYaGhlhaWhIp59chFArx\n9ddfMzMzQ15eHrCdmQwGg+LZhe1SlMfjYXBwUBwG7HY7RUVFuw4ikgeAJOjdiXTzl1xC8/PzWVxc\nZGlp6bXX/W1IN3ifz0dtbS2XLl2ioaGBvLw81Go1MzMzTE9PMzg4yFdffUVbWxsDAwOYTCby8vKo\nrq4WosFoNMr09LRwMd3PlrRXMTw8jEajIZlMcvr0aU6fPv3CZzwzM5OioiJWVlbe6Jn4rhyN3fc1\nMJvNVFdXc/XqVT755BNsNptIW8XjcfFS6vV6jEajeIA3NjZIpVIkk0nW19fR6/W70vkHsWlLN12H\nwyF+LZFIsLi4yOTkJK2trfzLv/wLg4ODhMNhtra2sFgs+yYQ3NzcFBaLGxsbPHjwgK+//pq2tjZm\nZ2fF70mn0+J79ezLI6nwvV7vcyWEeDzO8vLyoZ+sJaxWKx9//DHXrl0jOzubiYkJHj9+zN/8zd/Q\n2dkpShNqtZrZ2Vl0Oh3nz58Xpi4ejwebzSY6NA4Sj8fDmTNnyM3NZX19nfv373Pr1i1u374t2tCk\ndiqv1yvSnztThuvr66RSqUPdAHei0WhwOBzP1SsPioyMDLKysggGgywsLOz6mvTz3dzcJJFIsLa2\nxu3bt0XXQiKRYG5ujj/7sz/DbreLm/KzbG5uEo1GuXPnDr/73e/o7Ox8I3OcUCjE7du3aWxsFP/O\n5uYmm5ub4rmF3wfOnb8vJyeH8+fPk5GRIfQmBQUFXL16FYfD8a3iQpvNRk1NDdPT04yMjLz2ur8N\nhUKBRqOhvLycv/qrv6K8vByfz0cqlWJycpLOzk6amppobm5mfHycSCRCOp3GZDLh9/upqKggLy8P\npVLJ3NwcbW1t3Lhxg/b29kPt85+cnCQUCtHT00MwGBQammcDfXV1NfPz84yMjOyrJuWtCfKSc1B1\ndTV/9Ed/RH19PU6nE7VazdraGsFgkMePH3P//n3y8/MpLy/n1KlTWK1WNjc36erqorGxkY6ODtGX\nftCMjIzwu9/9joWFBVwuF8FgkFAoJG7LExMT9Pf3E4lExEYsud69qo3nTYjFYnz22We0t7ezubnJ\nxMQEIyMjzM7OvnIjUiqVGI1G8vLyuHDhAqdPn6ayspKMjAyRRuzv73/hLemwkNKQUpvc/Pw8PT09\nzM7O7nKe0mg0YmPf2toSB8T9rEu+Co1GI8SWW1tbonwSiURwOp1YrVYhRPV6vSKDolAoxCGyubmZ\njo6O164Hvy5S1kzK+NjtdnJzcykvLyc/P5+KigrhHpeXl0dmZiZ6vf7AjagWFxfp6+t7aQlKCkJS\nNk2qy8O2Ut3lcnHs2DEcDsdzAV46HPf29tLS0sK9e/fo7e0lFou90WeUvp/f9ca38/dJmUupyw
i2\nyz9dXV2YzWasVqsw1vF6vbtu9xaLhdLSUpqbm197zd+GSqXCarVy4sQJ3n33XcrLy7HZbITDYZqa\nmmhra6Onp4fR0VGmp6eJRqNsbGyIktTFixfJz88nIyMDhULB4OAgjY2NTExMsLq6eqjlIa/XS2Zm\nJh6Ph4KCApxO5wv379XVVcLh8L7vKW9FkJdER2VlZVy4cIHr168TCATEKTYSiTA4OMgXX3zB//pf\n/4uzZ88Si8UoKSlBo9EQj8dFC8vTp09xu91kZWWxtrbG+vr6gaWTh4eHWVpaYn5+noyMDAYHB4XV\nZCwWe2E7nEajeamd5h9KIpHgd7/73Wv9GcmTwGAwEAgEOHXqFJ9++inFxcV4PB5g+yTb3t5Od3c3\n4+PjB9qz+m3sDD5bW1sEg0FGRkaIx+OiPiYJG+12O0ajUWQ7ZmdnhaHJYWwgkouclG6VBmF4vV6K\niopEX25FRQWFhYU4HA6USqW4Fd2/f58HDx7Q1dW172uV2pui0SjxeBybzUZpaSnhcJisrCxKSkow\nGAwoFAqys7PJysoSlrcHaUS1sLDw3A1eQjoQSj4aPp+PTz75hJMnTwLbdWTJ5+LZ7hNJBLm0tERL\nSwv/7//9Pzo6OpiZmdn3z/QiJI+CnajVar7++muhZv/Rj37E5cuXsVgsu0qd0tdft8PmVWi1Wlwu\nF5cuXeLixYsEAgEikQhDQ0P89re/pbGxkbGxMdLptCgfmM1mMjIyOHHiBBcvXiQrK0sEz7GxMZ48\neSI84Q8SaX16vR6z2UxNTQ3Hjh0TZQgps/YswWCQ8fHxfU3Vw1sS5CWR3UcffcSFCxew2+0iRT83\nN0dPTw+/+93vdvUbplIp5ufnGR8fp6enh5s3b9Le3k48HicUCtHW1saZM2coLCx87T7wN0Xq03/4\n8CEajYbV1VVhK/my05zBYCA7O3tXiv+w2GlXWlRUxPvvvy/cBKXWLdhWLX/xxRf09PTsu2Xj67BT\nZCmlXZPJJAqFQtxoJLGdFIwSiQR3794V7YyHkaqH7edZ6id3uVycO3eOvLw8PvzwQ2GHbLVacTqd\nojsikUgwMjLCnTt3+OUvf8nY2NiBrDUajdLZ2Ynf76egoIDs7GzhA5FMJonH4wwPD2M0GkVXx/Hj\nx+nq6mJqaupA1vgqpPbS9957j+rqakpLS8nJyRGtdjsPhTuRTFlGR0e5efMmd+/epa2t7UA8yl+H\ndDotzHDUajXj4+NMT09TXV29S7MRCoW4f//+nj87Usq9qqqKwsJCVCoVT5484fPPP6e1tZX5+Xmx\nb6hUKpxOJzU1NVy5coVTp05RXFy85wePN0UyTqqvr+f69evk5+fj8/nEZeFlGhhJb7DfB9u3Ishn\nZWVx+fJlGhoaKC0tRavVsrS0JOo2ra2tNDU1MTExAWynoScmJmhsbGRhYYHu7m66urqYnZ0Vdayp\nqSmCwSArKyu4XK4D+RxSz+T09PSuX5fUomaz+TlhVGFhIbm5ucIIIhaLsbS09AfbZL4JFouF7Oxs\nqqurqaur491336WoqEjcynZ+Hq1WS3Z2tiiLSJqJcDhMJBI5lMEYUt+y1D9us9koKCgQp2xpGElW\nVhaJREJ4Fdy5c4cHDx6wtLR0aOYgCwsLtLa2YrFYyMjIIDs7m0AgwPr6Ouvr6yiVSmw2m6i5wnY6\ncGhoiKdPn/L06dMDW3sqlWJ2dpa2tjZsNhvHjx8XJj2hUIipqSkUCgV+v198jrq6Oubm5pienj50\nJb6Upvf7/cKoShK7vYp0Os3Y2BjNzc3cunWLzs5O5ubm9nfBb4CU1QKE3bDL5RICws3NTeLxOFNT\nUzx58mTPsxCS8E/yB1lfX2doaIiWlhampqZESUnq2jl27BgNDQ1cuXIFj8cj9FaSPkIaBHQY5TS1\nWo3T6eTYsWN88MEHOJ1OzGbzK30JJMfK/XZZfSuCfE5ODh9++CF+v1+4PU1MTHDjxg1u3rzJ06dP\nSSQSYqLV/Pw8a2trPH78WLR97bxRSqdYqQZ42JuKWq2msrKSwsLC51S6Pp9P1J7S6bQQauy3WONF\neL1erl69ytWrV2loaBDK6Gcf5KysLH74wx8SiUREKmplZYWpqSna29t5+vTpS+ug+4n0Mkqn6+Li\nYrRaLRsbG9hsNrKzs9Hr9SwvL/P3f//33Lt3j66uLqLR6KEb+oyMjPD3f//36PV6UW4ymUy7nt1n\nN4tkMilqmpK+4CCQAkhfXx8zMzPcu3cPp9MJbB88IpEI+fn5nDlzhrNnz+Lz+aivr6e5uVkEmMN+\nJyUHzaqqKgKBwHf+c6lUigcPHvD555/T0tJyKCY/r4N0C718+TIXL14UQSeVSjE3N8fw8DADAwN7\nrqyXBMVarVZ0OYRCIbF3S+Tl5XHu3DmuX78uJgGura0RDoex2+0kk0mmp6eZnJxkbm7uUEqDGo0G\np9OJ1+vF6XR+58BdVlbGqVOnmJ6e3tc5JEc6yEtiI4/HI755MzMzfP755zx58oS+vj6hRN+JVNuT\nBFPP/uCfNck56JYiydlJGkpTVlbG6dOnRevIzvVI1qVGo5Hl5WXu37/PnTt3DmUKncFgICsrC5/P\n9629+xkZGZSXl++6rSeTScLhsBj20dTUxOTk5IFu5pKIbn19nY2NDRwOxy5XOEmkGQ6HCQaDIttz\nFNz6YrEYU1NTfPHFF0xNTeF0OkVbpd/vJy8vT5grAeJA1dzcLEYoHzRSKSqVSonslWTekkwmMZlM\nDAwMkJubS1lZGUVFRfT29jI/P3/oHRlSz7W0l+zMkHwbarWawsJCjh07Rm9vL4lEYt+Fjm+CtAdV\nV1dz7tw5CgsLRfpb0hQMDAzQ09NDKBTa86yhlDUzm81i7K1UvnG73VitVvLy8igpKRHzR5aWlvjn\nf/5nvF4veXl56PV6YrGYGKiTSCQO5V1dW1tjamqKjo4O7ty5Q2VlJbm5ua9sV83IyMDpdH7nZ+tN\nObJBXhJaSBPZ9Ho9KpWK6elp/tt/+2/f6pAVi8VeqejeaYbzqrTKXiG5UUm1U5/Px9mzZ7l8+TJl\nZWW4XC7S6bToPX/WrW96epqWlhZhLnPQG7fUb51KpV56speEeS6XS7T2SH9Ouh1ZrVYxS3l1dfXA\navYbGxsEg0FmZmZIpVLioGU0GkX7mbSxB4NB0cZ4FJDS8nfv3qWpqWmXF8SZM2e4fPmyGOSxsbHB\nwMAA33zzDa2traKMddBIN/oX3WaTySQGg4HOzk4cDocQDGZnZxMOhw89yO8U9BqNRrKysnZ9Xa/X\nv1AxrdFoqKmpIZlM0tHRQSwWO1JBXqFQiG4Bh8PBuXPn+OEPf0hmZibwe03B8vIynZ2ddHd3s7Ky\nsuelHpvNJroqJH2Vx+PhxIkTAOTn53P+/HkhOk6lUrS3t/Pf//t/55133uGDDz4gJydH/IxCodCh\nldLW1tYYGxsTQ2i2trbEuNmdN3rJF0IywNHpdMJga
z85skHeZrNRXl7OJ598wtmzZ0UNZi82Xcn2\nUZrk9LJRjXuF1Ltqs9nw+/1cv36d2tpaUQdzOp2i13Nubg6Hw0EgENilppZezoKCAoqLi0XrxUGK\n2ubm5vjyyy/p7u5+oRBQeoj9fj8lJSVMTU0xMTGBxWKhsLCQhoYGfD4fDQ0NKJVKsrKy+Oyzz4hG\nowcSTJeWlvjbv/1bGhsbyc7Oxmq1kpmZybVr10TNeGZmhs7OTsbHx49UkJeQNkSpX1iyOz19+jRW\nq5VIJMLMzAx37tzh1q1bLCwsHBnh406k4N/W1kZWVhalpaW43W4CgQB9fX2HujbpsDcyMsIvf/lL\nWltbnwvydXV1nDt37oW3NekdqK2tZWFh4dAOWc8izTMoKyujoqKCqqoq6urqKC4uxmKxiENZf38/\njx494s6dO/T29u5L8BweHubhw4eUlpbidDrx+/2888471NTUANvCPLfbTTQaZXBwkBs3bvDo0SMW\nFxfR6XQ4nU42NjaE9uooCDZnZ2e5c+cO8Xiczs5OcTGF7e+9y+XiwoULoux8UBzJIK9QKCgoKKCh\noYELFy5QWFgoUhp7semazWYyMzOFycybTpr6LkgntoyMDOGzf/36dYqLi4nFYqysrDA8PMzy8jKL\ni4sEg0Hdls5aAAAgAElEQVSqqqrEYBgpJatSqTCZTFRVVREOh8WUqJWVlX0/pEiEw2Ha29vp7+9/\nzoFP8nrPz88nFosxPz/PwMAAQ0NDWCwWKioqSKfTFBYW4nK5OHv2LKurq9y7d2/PR+a+jEQiQVtb\nG8PDw2Ioj+SnIAX50dFRWlpamJ2dPXBh43dha2tLtOw4HA5KSkqorKykuLgYvV7PyMgITU1NtLS0\n0NfXRzKZPHIHFdg+rEitXWVlZUSjUVwuF7m5uaKEcpjr3tjYYGFhgebmZkZHR4WmQCIYDJJIJAgE\nAni9Xux2uyi1aTQaMWFyYmKCnp6eQx3bKpGRkUEgEBC+FhUVFQQCAfHZ1tbWiEajtLe38+WXX9LV\n1UUwGNyXtczPz9Pd3c3g4CA2mw2r1Up2djb5+fnAdjZ2cXGRnp4eWlpauHHjBlNTU2KIjsfjIZ1O\nMzs7S3d390tbIQ8SqW10Y2ODkZGRXb4ECoVCtIqqVCrR/idlEjUazb49H0c2yNfX1/P+++/vmpG9\nV7jdbmpraykpKcHv9+9rn7w007mkpIR//+//PT/96U+xWCyEQiHu3r1LT08Pw8PDjI2Niel4P/7x\njyktLRUpbvj9WFQpqzE3NycUsBsbGweyIUqmQy8qb0iHpurqalKpFJ999hnj4+MsLS2hVCrp6+uj\ns7OTjz76iPfeew+Px0Nubi4+n49oNHqggpnV1VXGxsZIJpO4XK5dqeG+vj7u3bu350KjvUSyHC0u\nLubKlSuUlZWJzMrIyAi//vWv6evrI5FIHMlbvEQsFqO3t5f+/n7m5+dxuVyiNeqwgzwgylKRSOS5\nFrL+/n5u3LjBv/t3/47Lly9z6tSpXfuIlAofGxujra2N8fHxQxfhZWdni/R8TU2NyGZKxONxJiYm\nhKfIfs6biMViTE9P09TUxNbWFrW1tcLcDLYPUd988w03btygsbGRpaUl4bdQVFSE3+8XHiNzc3NH\nxnALtm/0oVDoudkok5OTuN1udDodgUAAu91OIBDA7XYzOTm5b22WRzLIAyIVKdU1tra26Ovro7W1\n9Y1rXGq1GrPZTGVlJdevXycvLw+FQsHq6uqeB0vpRF9aWkpNTQ1nzpzh9OnTuN1uQqEQvb293Lt3\nj/7+fubm5ohGozidThoaGoSASvIiHx0dFYcFyfzk448/Jj8/n+7uboaGhggGg6K7QBq7uNeBU2o/\n3Ilka3vs2DGOHz8u1js6OrrLp3tjY4NkMklmZiaZmZlYLBYKCgr4yU9+wj//8z9z9+7dPV3rqz5H\nKpVCq9VitVrFRqdQKIRH/1EQ270MrVZLIBCgqqqK8+fPi5a//v5+njx5wujo6C7XxKOKNIc7GAwy\nNDQk9DcZGRlotdp9Nwl5FZJQc2Nj47l3SRLUtbe34/P5KCsrE06KgJgVII2n3e+664vQarXk5+eT\nl5dHbm4uhYWFYvKm1WoFtssmS0tLNDU10dfXRzAY5MmTJ/uq9obt79/s7Cz3799nYmKCpqamXZnL\nhYUFRkZGhBATEPtFVVUVCoWC+fl5IdI8Ss+6pJ/ZiTRDQtKdSL8mjULfz+fjyAb5Z9na2qKzs5Nv\nvvnmjbyfJRvWrKwsTp48yfvvv49er2dtbY25uTkWFxdZX1/fs4dFmvx16tQpPvjgA959911MJhPR\naJShoSEeP37MgwcPmJmZEUrvyspKfv7zn1NZWYnD4SCRSDA+Ps7XX3+NRqMhEAhQU1NDIBDgvffe\no6Kigv7+fm7fvk1fXx/xeJx0Ok0qlWJoaOhAbseSOPLUqVOcPHmSX/7ylzx8+PA5IUwymRRWtzk5\nOWIs8J/+6Z8yMTFxoEF+59p3qtR3ahwO+xb5MpRKJSaTicLCQmpqaqivryedTjM1NcX9+/dpbm5m\ncXHx0APk6xCJRBgZGRGuiVarFb1ef6Q/g6S+HxkZYWBggGg0KlL20telTo6DbNOVSjmSwLeuro5L\nly5x4cIFnE6nSA1L9fdYLMbk5CR/93d/J8azHoRQcG1tjVAoRCgUeqVlrkqlwmg0Ulpayk9+8hMx\nWGp8fJyZmZkjFeBfhk6nw+v1ioFqsJ0pkrrA9vMzvDVBHrYfjDdJQSoUCqxWK2VlZXz88cdcuHAB\no9HI+vo6IyMj/OY3v+HOnTusrKz8wW1pUoqmqKiIM2fO8OGHH1JbW4taraarq4vm5maePHlCZ2cn\nwWBQ1JikF7GsrAyNRiM27QcPHvDkyRMUCgV2u120k0gbYnl5OW63m9XVVdLpNPF4nLm5OX7xi1+I\ncbH7idvt5uLFi5SUlKBUKolEIoTD4Zf+jJaXl5mcnCQWi6FUKsVN5zCQphKm02lWVlaYnp5mcXHx\nSHgnvAybzSae44aGBgAGBwd5+PChqKPGYrFDHdDxh6DVavF6vcLH/CiztbXF6uqqUJ/vfOaXl5fp\n7e2lo6ODiYmJAxu1LE1MLC8vp7KykvLyckpLS/F4PELZrVQqicViBINBnj59SnNzM729vSwvL4vs\nxVFCo9GIka0KhYJYLMbMzAyPHz+mt7f3SGfdJKqrq7l48eKu1uORkRFhNLSfz8dbEeSlwOnxeMjK\nyvpOykRJ0e5wOMjMzCQnJ4fa2lquXr1KTk4OGxsbDA4O8uDBA7766iv6+/v35OYr+V3X1tbywQcf\nUFtbi81mY2xsjIcPH3Lr1i0mJiZYWVkRvt0lJSVcuXKF0tJS1tfXmZqaYmhoiC+++IKWlhbRT67X\n6xkbG2NgYICysjJqamqoqqrC6/UKwYo0JnY/xYQ7sVgslJWVkZ2djUqlEul4aQOUbmNSu5rNZhOq\n050jUA8S
tVqNVqvFbrcLj/dQKMTTp0+Zmpo6krVsqV0uPz+fkydPUl9fTyAQECnjr776io6ODpHa\nfFvRarW43e5dNslHEbVajdFoFM/7syn5eDzO6Ogok5OTLC0tHVgg0uv1+Hw+6urquHz5Mi6XS0z6\nUyqVrK+vi1twV1cXLS0ttLa2iuf+KGIwGCgvL6eoqAiVSsXU1JRwMZ2cnDywd1Uaoe33+1EoFKys\nrLCysvKtAVp6bysrK2loaBBlEtj223/69ClLS0v72jJ6pIP8zh52pVLJuXPnALh9+/YrrSKllrPa\n2lo++eQTKisryc/Px2q1ihT9b37zG27cuMHQ0NCepah2psjeffddNBoNY2Nj/OY3v+HRo0f09vbi\ndrs5efIktbW1HDt2jLKyMux2uzC7efToEU+ePGFqaorl5WWR9o7H44yNjTE7O0traysdHR3U1dVR\nU1OD3+8H4P79+3zxxRcMDQ3tyed5FdJm53a7cTqd/NEf/RFut5s7d+4wODgo7DBtNhuVlZVUVVVx\n/PhxXC7XgR1EnkUaLHLs2DFqa2vRaDSiFae/v59wOHzkbgcajQaLxUJ9fT2XL1/G4XAQj8dF284X\nX3zxRmWso8Zhj5/9rphMJjIzM/nBD37A1atX8fl8uzJSku7joNP1Go0Gq9VKQUEBx48fF34KSqWS\njY0NotEora2t3L9/n6+++oqZmRnC4fCRGSL1IoxGIw0NDZw4cQK1Wk1vby9fffUVw8PDorvoINBq\ntWRlZfHJJ5+gVCqFXfS3+fpL721FRQUnTpzYNf10cnKS/v7+fRcNHtkgPzIyQmdnp1AjSiNXi4qK\n+Oijj6iqqmJra4tQKMTS0hLxeByn00lubi6AMJQ5ceKEsM5UKpVifOHQ0BCNjY0MDQ3tqdDK4XBw\n4cIFampqsNlspFIp9Ho9gUCA+vp6MfwkJydHjNl0OByMjY3R2trKl19+SU9Pj0jx7axrSxPUpBey\np6eH1dVVhoeHRRpoYGCA/v7+AzPgkIwzUqkUZrNZCJC8Xi9TU1OEQiHxfZGmpQUCAWw2mxj1Ojw8\nfCBrlYx6CgsLuXbtmnAZnJiYoLm5mZaWFqGROGrk5+dz6dIlLl26JGp6AwMD3L17l66urkNXbu8V\nkmB1Pw6Akkd7IBCgrKwMo9Eovra0tERPT48YAgTb6fj19XVx0ZDGo0pCtoKCAtHiK7X97UQSqh5k\n6cfpdHLmzJkXDnCZm5tjYGCAJ0+e0NXVJWa0H7bx0LchiWOlThxAmFVJGqSDorS0lHPnznHhwgXU\narVYj+SqmkgkiMfjWCwWLBYLJpOJ7OxsysrKOH78uNBsLCwsMDY2xvDwMIuLi/82R81ubW3R1dWF\nzWYT1ocWiwWFQoHX6+VnP/sZ8Xiczc1NOjs7GRwcJBgMUl5ezqVLl8RLqVarhcGG5Er01Vdf0dzc\nLGrie60itdvtnD17luLiYmBbKOV0Ojl79qxIuTscDgwGwy7xi9QLevPmzW+tae9EUpe2tLTs6Wd4\nHaS2m52tLT6fj5MnTxKPx0UKUK/XY7VaRWpemrX9T//0T3R3dx/IWiXRWlVVFX/5l3+Jz+cjFotx\n9+5d7t27R0dHx5Hb8KTgUlFRwV/91V+Rk5ODyWRifn6e9vZ2/vEf//HImK08y04RmGRkJc1ZT6fT\nB14SUSqVwozpZz/7mRiNDNu6hn/4h38Q88hhW4gZi8XE59DpdOTn53P58mWqq6spKSkRI4l3kk6n\nD20uhtvtFoOjnmV2dpaOjg6GhoZEX/mzrmyA+BkdhRkCRqNR+MJLg2zi8bjQIB0kFRUVXLlyhZqa\nGvR6Pbm5uSJdv9PnJDMzk6ysLDweDydPnuTKlSuizLy5ucnk5CQ3b95kcHDwQD7HkQ3yc3NzNDc3\no9PpuHLlClevXhUznHNzc0W7mMvl4uTJkySTSaxWKz6fb1eaP5FIMDAwwMDAAO3t7dy9e5eJiQmW\nl5f3Rb2bTqdFLVqqgxmNRnHqg+0e0GAwyNzcHJOTk4yNjdHX18fIyAixWOzQX6zXYXZ2lt/+9rds\nbm6iVCopLS3F4XCIzUOy/tw5ozqVStHX18c333zDo0ePnpvKt1+YzWYuXLjA+fPncblcBINBurq6\nuHPnDl1dXUfuBq9QKISSvrKyUhhoLC8v09zcTHNzMxMTE0c2TS/dms+dO0dBQQGJRILJyUkGBwfp\n7+9ndnb2QNcjlfAsFgter3fXO2kymbDZbKyurornQBJ4SdMLbTYbbrcbn8+Hw+EQrX47kSyFOzs7\n6e/vPxDxq4RSqRTalxeJWQsKCjCZTJw6dYqJiQm6u7tfaNIj3TKl2zJwKIcygNzcXE6cOIHVaiUe\njxMMBg88WykxNDTEN998g81mo6ioCI/Hw0cffURNTY0I8HNzc+Tn55OVlUVGRgYulwu/34/JZGJ9\nfZ1wOExXVxeff/45o6OjB+JxciSDPGynZIaHh0mlUmxtbWGxWPB4PMIGVmoD2Xkal5DqYdKmIqnZ\nu7q66O7u3teHY2Vlhfb2dmB7ep7keiQF/3A4zNDQkBDlDA8PMzIyIgZhHDXB16uIRCJ0dnZiMplQ\nqVQEg0Fyc3NF//mzt4S1tTWWl5dpamqisbGRwcHBAzGyMBgM4hZXW1uLyWSip6dHpC4P6qDxOuh0\nOjweD/X19cIFcWFhgb6+PhobG+no6GBxcfHIHgrtdjsVFRVcunSJmpoaMQJackxUKBREo1F0Ot0L\nb5T7gUqlEk5jO0VQkuPaTiTDFinIv+jWDogMxdbWFolEgu7ubh48eCCGuxwEko+G3W7HZDK9cOiJ\ny+USY7UXFxfFwevZw+3AwAC9vb2iXp9MJgmFQiwsLAg/kf1GygAVFhZy/PhxLBYLwWCQhw8f0tPT\ncyjjeycmJmhra6OoqEgIRKV24EgkwtLSEouLi8LgZucznU6nWVxc5PHjxzQ1NdHe3r5rMup+cmSD\nPGz3Vo+Pj/Pb3/6Wnp4e6urqaGho4J133nmhd7pEIpEQCtJbt27x2Wefsby8zMrKyr4rSCcmJvjF\nL35BWVkZx44dIzMzE4PBQCwWo6+vj6dPnxIOh4lGoyQSCZLJJGtrawfuQ79XSJtbZ2cnU1NT/Ou/\n/isFBQXU1dUJJf1O5ufnGRsbo6WlhdHRURKJxIEEKY/HQ0VFBSdPnqSgoICtrS1xmzlKA0R2YrPZ\nKC4u5r333qOmpob19XXa2tq4desWX3/9NePj40c2wMN2kC8rK8NqtQrf9GPHjnHixAnsdjtOp5OO\njg6cTucuU6KjgsFgEHbHSqXypV0g6XSaZDIpTKiePHnC3bt3mZqaOjAnNqnNLC8v7zuNOrVarVRW\nVr4wJV9dXU0ikRB6m+npae7du8f9+/eF0dJ+o9VqhcPd8ePHxTCjX/ziF/T39+/7v/8iFhcXGR0d\nZWRkRJQM3n//ferq6oQNueRLv3PuCGxnL8fHx/mbv/kbHj58eKB20
0c6yEuOWLOzs0QiEVZXV4WL\nWnl5OdnZ2WJwjaQ0npiYIBwOMzc3x+DgIB0dHQwPD4sa2X6TSCQYGxsT/ep2ux2dTifmHo+Pj4tR\np0dNwf2mbG1tsbKywurqqrB0XFhYEJOYdhIOh8XQjv121YLtzU+v11NXV8eHH35Ifn4+kUiEb775\nhqamJnp7e49culuaTnXu3DmuXLlCVVUVKpWKzs5OmpubaW1tZWZm5khZeb4IaS5DJBIR08TKyso4\nceIENTU1OBwOjh8/Lg5gJpOJubk5RkZG9uUGLN20Q6EQAwMDKJVKPB7PS6dQ7iw3PYuUnt3a2hIe\n69FolNXVVZqamkQZ5aDe8XQ6LbKf9+/fp6SkRLT2Sf/t/IySG9+L0Ov1oqc+IyMDjUZDOp3GYrHw\nq1/96kCCfFZWFu+++y6nTp0iMzMTrVZLOBzel9n235X19XUWFhZ4+PAhJpOJeDwuJuTB9vfNbDYL\nnwf4vVi6tbWVW7duCS3YQXKkg7yENJO6tbWV8fFxQqEQly5d4vLly3i9XrRaLfPz8zx69Iivv/6a\nUCjEzMyMOHEdJFKpYGpq6khMRjooJCXxxsbGkfrsOp1OTH/65JNPUKvVPHz4kP/7f/8vjx8/PrBW\nw9fBaDQSCAT44IMP+NGPfoTRaKSnp0c42vX19R35AA/beo3GxkaWl5dJp9N4vV7ef/997HY7BQUF\nYqyoFGRXV1eFTmJycnLP17O1tUU0GmViYoKWlhbhCqfVar9Vzb/Tzlm6fUmp283NTQYGBvj1r3/N\n7Ozsobn0bWxsMDY2Ji4P58+fp76+HofDgc1mE4NR4PceIs8ebHZaYofDYbRarWgDrK6uprCwkAcP\nHuzrTVqyeC0pKeHP//zPKSoqwm63i6znfth1vw4rKyvcu3dPlBO0Wq3QljgcDuGoajAYgO3YlUwm\nuXnzJv/4j/94KD4Wb0WQ38nq6iodHR0Eg0Hu378vTpyJRIJgMChetHg8fuSU0jIHh6SIloRfxcXF\nKJVKYUDR3t5+4CfqVyGZAx07doxPPvmE2tpalEol09PTtLe3c+fOHeHp8DZkgRKJBAsLC6ytrbG1\ntUUwGOT27dtMTk5y5swZ6urqqK6uxm63i77jpqYmgsHgvmzkW1tbLC0t0dnZSSgUYnJykvn5eerr\n60Va/kUMDw/T1dUlWnVhe0BNV1eXmKa3sLBw6FPmYNtpTzK3uXPnDpmZmfj9frxeL2q1Go1GQ1ZW\nFtnZ2eTk5OwqQczOztLW1kZfXx+Tk5Po9XpSqRSLi4tkZ2eLro79wmQy4fP5uHjxIpcuXRLDyaLR\nKHfv3qWpqenIiGOlC013d7cI8jqdDpPJxGeffSbMnKROhZ6eHmZnZw8lJr11QT6ZTDI5ObkvJ32Z\n7w/SKNbS0lIuXrxIIBAgEonQ1tbG48ePxRS6o4KUPs3Ly+Ps2bO89957mM1mZmdnaW9v58GDB3R0\ndBx545KdPDuoY319nd7eXgYGBlhYWGBhYYGVlRXcbjdKpZLbt2/z8OFDlpaW9mUzl1wYV1dXRb08\nFouRSCREy+uLePr0KQ8fPiQYDIrSztTUlNBEHCVdhPSZxsbGUKvVIsB7PB7RBlhYWEhRURHFxcW7\nymmjo6Pcu3ePnp4eEeQlRXh2djYul2tfR7pKz/+1a9doaGjA5XKJ8ue9e/doa2s7MkFean+emZkR\nhl9HlecLUQfH0XkzZL53mM1m6uvr+eCDD/jpT38qhvZIwpdgMHikhI7SOOI///M/FyY9AwMDPH78\nmH/913+lr6+PUCj01go0d6JQKIRhiMViQaPRoFAoCIfDrKysHFgt22g0CpX9y2rvsO0FEY1GSaVS\nYl2SAcpRCvDPIrUMajQa8T2WdAZS3X1nyn5tbY1IJCKyoNK433Q6Lf6eF7Xc7RXl5eWcPXuW//Af\n/gM1NTVoNBra29u5d+8e//AP/0B3dzfxePxIf88PgVfG8LfuJi8j8yrMZjNZWVmcPn2agoICZmdn\nRcDs6uoiFAodmUAp1aMLCws5efIkmZmZou7X1tZGa2srnZ2dLCwsHJk1/6FIQs2DEF5+G/F4nHg8\n/tb7/b+Mra0t1tbW9iTzcxClCI1Gg8FgwGg0CkHjkydPuH37NmNjY2+FDuUoIgd5me8ddrud4uJi\nzp8/j9ls5vbt2zQ2NvLkyRMhAjtKKJVKMcAimUzS09PDN998Q3d3t1yWkvk3g3RD39zcJBqNMj4+\nzsOHD/nqq69kfdUfgJyul/neIc2JLy8vR6PRMD8/z9zcnBCBHaUgv/MmHwgEUCqVrKysMD8/z/Ly\n8pHt4ZeR2WscDgd+v1/MFVhdXRWzLY7y+OdD5pUxXA7yMjIyMjIybyevjOH77yMpIyMjIyMjcyjI\nQV5GRkZGRuZ7ihzkZWRkZGRkvqfIQV5GRkZGRuZ7ihzkZWRkZGRkvqfIQV5GRkZGRuZ7imyGI3Mg\nWCwWMjMzycrKwm63Mzk5yfT0NLOzs0eqb/2o43A4sFgswp5UmmFtNptZWVkRI1ATiQTxeJxYLHZk\n/L5lDh+VSoXNZiMrK4uysjIUCgWxWIyRkRGCwSCrq6tiFPZholar0ev1GI1GdDrdrq9tbGywtrZG\nNBo9EkOBjjpykJc5EDweDxcvXuTDDz+kqqqKzz//nC+//JLFxUUSicRhL++tITs7m4KCAjQaDW63\nm5ycHC5dukR+fj6Dg4O0trby4MEDZmdnmZ2dZXJy8tA3bJmjg1qtJjc3l+vXr/Of/tN/QqlUMjU1\nxd/93d/xzTffMDExQSQSOfRnRqfT4XQ6yczMxOVy7fpaLBZjaWmJkZEROch/B+QgL7MnmM1mzGYz\nSuXvK0DpdJpUKsXq6ioKhQK1Wo3BYMDpdFJZWcnY2Bj3798/1CCvVqtxOBxUV1fz3nvvodFonnPW\n6u/vp7u7m56eHpaXlw9sbSqVCq1Wi8ViIT8/n8rKSiorK8nLy0Oj0WA0GsnIyCA7Oxur1UpBQQEm\nk4mioiKCwSAzMzMMDQ3R09PD06dPxQ1Ndg57ORaLhYyMDJaXl8VY2ddBpVKRnZ2NRqMhFAoduZHX\n0pAatVqNTqfDaDSiVCr5+OOPKS0tpa+vj8bGRh4/fnxga8rMzKS6uhqTySSm4lksFjweDzk5OXi9\n3l2/PxqNMjk5yZdffklvb++RGzZ11Pg3E+RVKpUIMMlkknA4zMbGhpwq/gOQUsYWi4VAIIDf70el\nUonJVlJKbXBwUMytVqlU6PV6AoEAPp9v16HgMNBqteTl5XHlyhX++q//GoPB8FwQbGxs5NatW6yt\nrTEyMkIymSSVSu35bUehUIiJYRqNBrPZjNVqJRAIUFdXx+XLlyksLMTv96NWq3dNEIPtbInH46Gq\nqopwOMz8/Dyjo6NkZmayubnJ4uIi4XCYRCKxZ+vXaDTYbDaUSiXpdJq1tbXvfLuSJpxtbm4e+iYt\njWHNycmhsLCQyclJYYX8
XT+PVqslIyOD6upqMjIyxFz2wx6AI1knS3bPPp8Pu90unjeHw8G5c+fI\ny8vD7XYzNja2b0FeOmBI/2swGDh27Bg/+MEPcDgcYhqgyWTC7XYTCARwu91sbm6K9zIWizE5OSmm\nAMbjcRKJhHyrfwn/JoK8QqHAZDJx7Ngxfv7znzM8PMxvfvMbFhYWZG/wN0ShUKDX6ykuLubdd9+l\npqaG4uLiXcEnkUgwOzvL//yf/5Pl5WUR0Dc3N8VI0cO+VZpMJs6cOUNtbS0qleqFwaa8vByDwYDJ\nZKKtrY2BgQGmpqYIhUJ7uhYpBe/3+wkEAuTn51NUVERpaSmBQACXy4XRaHxhgH/R58rKysLhcFBQ\nUMC1a9doamri8ePHDA4OMjMzsyezwd1uN9evX8dsNhOJRBgaGvrOQW19fZ1oNCpmuh8mJpOJvLw8\n3nvvPT788EOWlpZobW3lb//2b5mfn/9OhxC/309VVRU/+tGP8Pv9dHZ2cuPGDW7evHkAn+DlSAeY\n06dP884771BaWkpJSclz43UTiQSTk5NEIpF9W4ter8dut2MymURG79SpU7zzzjsYjUZUKhWwnWHT\narXo9Xo2NzeJx+PiUKrRaMjOzubHP/4xTqdTHL4P+zB1VHlrg7xWq0Wn07G5uSnSwi97ERUKBSUl\nJZw7d44zZ86wtbWFVqs99Fvky3A6nbjdbnHSll4Mu90ObAtPkskkg4OD9Pf3H/j6jEYjdrudgoIC\n6urquHbtGiUlJWRlZYmb/NbWFqlUiqmpKW7cuIFCoSA3Nxez2Uw0GuXp06f09PQcyulb2kCKioo4\nfvw4586do7CwUDwPzx487HY7Op2OdDqNwWBgfX2dSCSy50Feq9Xi9/upra3l9OnTZGdnk5WVRXZ2\nNiaTCdieY/5taWSVSoVGo0GtVqPRaMSNqKSkBKPRiNVqRa1Ws7a2tidB3mw2U1lZSW5uLpubm4yN\njX3nv3dtbY2lpSVx4BsZGWFubo5EInFgN3uFQoHVaqWoqIgLFy7w7rvvUldXRzKZZGtri1/96lev\nPFBJGAwG3G43+fn5lJSUkJGRQTQaZXl5mZGREZaWlvb507wYrVaLw+GgrKyMc+fOkZubi8vlQqPR\nPLf+nJwccnJy8Pl8hMNhksnknqxBqVRiMBjIy8vjxIkTIutUWlpKcXExeXl5JJNJVlZWiEQiu8bj\nbmhfgCgAACAASURBVGxssLy8LMoe5eXl5OfnU1xcTCwWY3p6mlQqJQf5l/DWBnnphVpfXyeRSLC8\nvPzSjUGlUnHmzBmuXr2K2WwmlUqxtrZ26CnCl5GXl8fZs2dRqVSYzWZ8Ph9VVVVUVVUBiBnY/+N/\n/I9DCfIOh4Py8nI+/PBDGhoaxE13Z6oeEHVjvV6P1+ulrq4Ot9tNKBTiyy+/5M6dO3u2ibwO0qb3\n8ccf8+Mf/xiXy0VGRsa3HvoMBgO1tbWk02mGh4cZGhral3VlZWVx+vRpPv30U3EQlW43sP2zlzIg\nL8qCGAwGMjIy0Gg0u/4cQFVVFRaLhcXFRWZmZvbk2dFoNLhcLiorKwkEAmxsbLC5uSmeg62trRf+\nf/h9kI9Go4TDYf73//7ffP3118zPzx9YHVulUpGVlcWZM2f44z/+YwoLCzGbzZhMJhwOx3Pfw29j\nc3NTfH6TyURhYSFXrlzBbDbzf/7P/zm0IK/X6/H7/SKASxmhZ/H7/Vy7do2lpSUmJyfp7Ozcs/dT\nrVbjdDo5efIkf/EXf0F+fj5utxuVSoVKpUKtVjMxMUF/fz+9vb27vlfr6+vi8KdQKPjTP/1T8vPz\nhYjwBz/4AaOjo7S2tu7JWr9vvHVB3mg0kpmZycmTJzl//jxdXV10dnbS2dn5wluhJOAoKyvD6/WK\nDXp5eXnXafGwsNlsOJ1OrFYrdrsdl8vFiRMnqKurQ6lUotVqMZlMeDweMjIygO2NXKfT8cEHH6BQ\nKLh58yaDg4P7vlaz2YzX6+XSpUtcvnxZpJENBoNQcisUCoxGIy6Xi/b2dpqbm8W89OzsbNHqdZjp\neqke6HA4yMrKQvf/2zuvp7ayLY1/SiijjECInEQONtiAE263w7i7b9+avqHufZmaqflr5mn+gHm6\nU9137oRut+1xO2EbY5IJBhFEEBJIAgkJlLOE5sG1d4ON3XYbJOw5vyqXqyzAR+KcvfZe61vf4vPB\n4/FoBuIg9tbLX93MHBYsFgscDgeRSASrq6vQ6/WQy+WIRqNwOp2wWq2YmZmBxWJBMpk8cJOq1+vR\n2NiItrY2VFRU7HuNiK1ITfQwcLlc+Otf/4qlpSW0tLRAKpVCJpNBrVZTEdWb2N3dBYfDgVqtRmFh\nIa5cuQIul4ubN29ia2vrUK7vbRCdDllL9Ho9xGIx0uk0VldXsbCwgGg0+s73qMfjwfz8PJxOJ+Lx\nOIRCIYqKilBTUwOlUgkej5cT4SOXy4VIJIJIJIJQKKTlnlevg81mg8/no729HbFYDCKRCC9evDiU\nNlepVIovv/wSly5dolkONpuNUCgEi8WCqakpWCwWrK2tYWNjA+FwmH5vOp1GOBwGm82GSCSCz+ej\nG0ZSinifzVguIWXOVzdZiUSCZrAO+/D5UQZ5g8GACxcu4KuvvgKbzYbD4aDCrleRy+WoqalBZWUl\nJBIJTCYTlpeXEQgEsnbNRNEqEolojzNZZHU6HSoqKlBYWAi9Xk/TUKRN6qDFmMfjQSaTobu7GwUF\nBVhZWclKkFcoFDh58iSuXr2Ka9eugcViIR6PY3NzE5OTk5iZmQGPx4NWq0VdXR2ePXuGR48e4eLF\ni2hubkZBQQHi8TitweZKKENO8kqlkm6c9gbtdDqNUCiEdDpN+3V/KWAdBru7u4hGo1hfX8fIyAga\nGxtRXFyMYDAIk8mEsbExPH36FEaj8Y3lKYPBgHPnzkGlUr0W5DOZDF1EDivQeDwe3Lp1C4uLi1hZ\nWYFarUZRURHKy8shFArf+r0k+Oj1euj1enR2diIYDKK/v/9Qru2XEIvF0Ol0OHnyJE6ePAmFQkH1\nIpOTk5iYmEA4HH7nz2pnZwfpdBoejwfxeBxSqRRKpRIlJSXQaDR0g5sNsa9IJIJYLKan+PLycqjV\nahoQ37ahra2thVQqhcvlgtvthtvt/uBrFolE6OrqwokTJyAQCBAKhRAKhbC5uYmRkRH8+OOPcDgc\ntKX2IFGoQqFASUnJod6/BHKgymQyH3z4I5sPHo8HPp8PgUBAxY9sNhsymQwFBQX7vsfr9WJzcxOB\nQODQ9SkfXZCXSCRobm6GRCLBixcv8PTpU4yPj79RQKdWq9HS0gKVSoVoNIqFhQXY7fasXjOXy4VY\nLEZPTw8uXboEmUxGRS9SqRRyuZy2oIlEIiSTSTgcDhQUFEAikRz4M0kaN5vagoqKCvzjP/4jGhsb\nweFw4PF4MDk5ie+//x5msxk+nw96vR7V1dVIpVIoKirCN998Q0V5eXl5sNvtWFpay
qngsbi4GNeu\nXUNVVdWBrwcCATx48AB+vx8FBQVoampCZWXlkV9XJBLB/Pw8HA4HBgcHUVhYCJlMhng8jq2tLayv\nr1O195sWuVAoBKvVimAw+NprxEQkHA4feplkY2MDAwMDVCy1V0T1JoRCIQoLC3H9+nWUl5fD5/PB\n4/FkrUfbYDDgypUrOHHiBLRaLU0ZT09P4+bNm3j27NkHZ5tEIhEKCwtRX18Ps9mM2dnZrIgMGxsb\ncfbsWbS2tqK0tBRSqRRarRYSiYSe5Nls9oHvTSAQID8/HwqFAlKp9FCyVrFYDEajEZlMBlwuF4FA\nAC6XC1NTUzCbzdjc3KRdK2/aUOj1enz11Veoq6s79GyaQCBAZWUlYrEYVldXP+g0zePxkJ+fj9LS\nUtTV1aGxsZFmzzgcDgoKClBeXr7ve0wmEwYGBjA6OoqlpaUPfDf7+eiCvEgkQnV1NaRSKdbX17G+\nvg6n0/nGr5dKpTRNvLcGmA1YLBakUimKiopQX1+Pvr4+XLp0Cfn5+dTFicfjIS8vDzweDz6fDwsL\nC7DZbNje3kZRURFkMhl4PB51i9tba02lUohEIllrA5TJZGhqakJRURFisRi2t7exsLCAJ0+e0JRe\nIBBAPB4Hi8VCa2srmpqaoNPpIBQKsbOzgxcvXmBgYODQRWu/hFQqhUqlQlVVFbq6utDX14eSkhK6\nUJBFZ2NjA2azGU+ePEEmk0FDQwN0Oh3dEJAd+VGQTCbp6SmTyWBlZQV5eXlIp9O0Fn8QLBYLQqGQ\nXmddXR0Vae7FbrfDaDRiZWUF29vbh3rt4XB4X4r1XSAdBBKJBJlMBk6nEzab7cjr8SSDUFtbi3Pn\nzqGsrAxCoRCpVApra2t49uwZZmZmYLPZPvj/IuJHpVIJuVyetbSySqVCQ0MDbY0jkPuX6JiCwSDi\n8TgkEgny8/PpNQqFQhQXF9NnN5lMftA6E41GMTExAYfDAeDlZnR7exsmk+kXtQocDgf5+fmorq5G\nT08P9Ho9fQZ9Ph/m5uY+WERKNC7vu6EjJbD8/HwIhUIIBAIolUq6sTMYDKirq6Pp+GAwCI1Gg7q6\nOojFYhoHtFothEIhHA4HE+QFAgFKSkrA5XKxuLj4iylfPp8PuVyOvLy8rNfg2Ww2iouL0dvbiz/8\n4Q+oq6uDRqMBm82mN+nev1dXV/Gv//qvWF5eRiQSocYcMpkMV69exW9+85t9C0UoFILL5cpa+xFJ\n95L+5q2tLfr/p9NppFIpOBwOJJNJsFgsnDx5Es3NzeBwOPSBvnv3Lv7rv/4r678LrVaLU6dO4Z/+\n6Z/Q1ta2rw4PAJubm3jw4AHu3r2LkZERxONxVFZWQqlUUsHP3gB/FIF+d3d33wk7FovtE629CTab\nDaVSiYsXL6KnpwcdHR3Q6XSvfd3U1BRu3bpFHfFyTXFxMf75n/8Zra2tyGQyWF9fx/Ly8pGLMUkK\nu6amBg0NDZDJZEin04hGozCbzRgcHMyKJuCoOShwZTIZsNls+Hw+zMzMYGVlBW63G5WVlTAYDGhp\naaGalcrKStTU1EAmkyEWi33QOhMOhzE8PLyvg2V3d/edSnbEy6KhoQH19fVQqVT0NZvNhh9++OGD\nA2M8HofVan3vmrhAIIBarabdRRqNBlVVVaiurkZNTQ0UCgW4XC42NjZgsViwvr4On89H4xgJ8iUl\nJVCpVHjw4MEHvY+D+KiCvEAggEQioYYl71KbUalUaGxshFQqxc7ODlW/ZgMul4umpiZ0d3ejurqa\nim+An2/y1dVVmEwmeL1eGI1GzMzMYGtrC6lUCtvb2yguLkZVVRU0Gg3EYjG4XC7tKJiYmMCNGzey\nVn7Y2dnB8+fP0d7ejsLCQlRUVODs2bNgsViYmpqCyWQCANTU1ODq1aswGAzg8XiIxWJYW1vDgwcP\nsLCw8N4nvsOACHTkcjmtwwM/B0+Hw4GffvoJs7Oz9GQRj8fp/UK+7tW/D5tXF+S3IZVKoVarUV1d\njebmZpw7dw61tbUoKira1wMdj8cRiUSwtLSEubk5agSVK1gsFurq6tDZ2QmtVkszQmNjY0ca5Fks\nFuRyOQwGA65evYpz585BJpMhLy8PW1tbGB4extDQEKxW668uJ+3u7sLtdmNrawtyuZy2Mra0tMBu\nt2NqaupISlUkm5Ofnw+VSoXW1lY0NDQgPz+fbsATiQQ1ApudncW9e/dgt9vpfa7RaOhpncvlori4\nmGZBPzTzk8lkftXvlbyv2tpa1NbW0gNbMpnE9vY2LBYLzGbzB/f2/9rr6+npwdmzZ1FeXg6VSgWJ\nREL1PkqlEpFIBGtra3j69CkmJiawvr4OrVaLYDCI8+fPU8tev9+P1dVV+Hy+D3ofB/FRBXlS7+Ny\nufRmJKerNy2IpD+UBEYi6njb9xwGpA7f1NSEEydOQKPR0IWXnNj8fj8mJibw008/YX19HTabDXa7\nnaYrI5EIioqKUF1dTetqwMtd8dbWFiYmJnD79u1fZb/5a3C73Xj8+DFtXywtLYVSqaROWUT82NHR\ngS+++AJFRUVIJBJwOp2Ym5vDo0ePYLFYsnKtr0Kc+Tgczmu/993dXTidTgwODtKUOKlZ7j15kHsm\n1wY+RAVdXFyM5uZmnDlzBl1dXTAYDPs2MOl0mpZVXC4XFhcXsbq6mlPjmby8PEgkEpw8eRKnT58G\nn8/HysoK7t69SxfBo4LNZkOj0aCtrQ2/+93vUF5eDj6fj1Qqhc3NTZrF+ZB+63Q6jbW1NZjNZhQX\nF9OOjKamJjgcjtcMaA4Dos0pKipCWVkZqqur0dXVhYaGBggEAiSTSap3cLvdsNvtGBsbw+3btxEI\nBCCVSlFXV7cvHc/lclFQUEDTyG8SNh81IpEIWq0WjY2NqKmpgVgsBovFQjAYxNLSEkwmE5xO5wcf\nHEh28n3p6urCn/70J2i1WurbEo1GEY/H4fV6sba2BqPRiJs3b2J4eBjpdBq1tbVQqVRob2+n68nG\nxgYGBwffWnr+tXxUQT4ajSIQCCAcDlNDE1KjTqfTBy6+JPXk8/mwtbVF/aTJRuGoTvVKpRI1NTWo\nrq5GYWEhfUgymQxCoRCWlpZw7949jI2NwWg0IhKJ7HN1IhAl/V6lcjAYxMLCAtbX1xEOh7NWk3c6\nnbh79y41WCkvL4dYLEZZWRmuXLmCxsZGsFgs2hKVyWSwubmJGzduoL+/H2azOatdDXtpaGhAX1/f\na8MuSHtOJBKh9w8J8CQgZUNZ/z6IxWLU19fjwoUL+OKLL1BQUACVSvWamj0QCGBsbAyzs7MwmUyY\nnJzM6v1yEBUVFeju7sa1a9fQ2tqKcDgMo9FIdR1HCZvNhk6nQ3l5OaRSKXg8HrX7tVgstP3tQ0gk\nEhgcHASHw4FOp0NNTQ3kcvmRajmI7uj69es4ceIESkpKoNPpIBAIwOFw4Ha7MTU1hadPn2J0dBTR\naBQ7Ozvwer0QCAQoLCxEd3c32tvbj829
Tp5BkqG6cOECampqaOlveXkZP/zwA54+fYpgMJizzBTJ\nyKZSKUSjUXg8HkxMTGB+fh7b29twOp3Y2NiA3W4Hh8NBe3s7zp07h6tXr6K8vJzqmObn53H79m2s\nra0d+jV+VEGe2GA6HA6q4m5ubkYoFILdbkcoFHpNtEPaGYCf2+92d3fh8Xjg9/uP7BSs0+nQ1dWF\n8vJyyGQycDgcxGIxBINBrKysYHR0FHfv3sXy8vJr9T82mw0ej0c94YuLi5Gfn49MJoNwOIzNzU2q\nws5mG1ooFKLXLpfL0dnZidraWiokKS8v3zcAg0xFe/ToESYnJ7G9vZ2zAENsNF91+YrFYjCZTFhZ\nWaHXRtoAa2tr0dDQsK8GmCt4PB5Nx5aUlODChQu4cOECTp06tU9pTE4GgUAAFosFw8PDeP78ORYX\nF+HxeHI2LIUYInV1deHv/u7vaCvV48ePMTw8jOXl5SPPMHA4HFRVVaG+vh5isZimaJeWljA9PY31\n9fUP3oSmUilYrVYUFhZie3ubijvJMy0UCsHj8Q71uVUoFKiursbp06dx+vRpKJXKfcr5WCyGzc1N\nzMzM4OnTp/u+l8/n0zXyIFGgRCJBdXU1vF5vVoczkYPEqVOncOnSJRgMBvocut1uzM/PY3x8HEtL\nS9TDPhdsbm5ienoafD4fgUAAGxsbmJiYwMLCAra3txEIBBAKhSCVSlFZWYmzZ8/i/PnzaGlpQSaT\noRuwoaEhGI3GIyllflRBHniZql5YWIBOp8OFCxcgEolQUFCAO3fuvNE6kqjcGxoa8A//8A8YGhrC\n48ePsbS0dGRBvqysDH19fSguLqYtKz6fD2azGffu3cPg4CDm5uYOVEyTAF9bW4u2tjbU1dVBpVIh\nnU7D5XJheXkZc3NzWbdxJCmtyclJuFwuuFwuXLx4EefOnaOLF4vFotarg4OD+O6777CysgKPx5PT\nE6TP58PGxgbKysr2/XswGMT9+/fx7NkzuvASd7vPPvsMV69epWWSXELmxjc0NODkyZO4du0aKisr\nX2slIqeK9fV1TE5OYnR0FHNzc9ja2srp569QKHDmzBlcu3YNV65cAZ/Px+zsLL799lvaAnvUCzWH\nw0FjYyPa29tpq6rf78fIyAiePn0Kv99/KJ9ROp2mE/9IppAMc1IoFNja2jpUf3ilUonq6mro9Xoo\nFIr3aqmNxWLweDx48eIFtFotTp8+vS81X1hYiM8//xxer/dIXB7fhEajQU9PD/r6+tDT07NvpvzG\nxgbm5uawubn5Xj4GR8GLFy/oBohoMfa2AhIhH/FjuHLlChoaGpCXlwefz4fl5WX85S9/wejoKPx+\n/5Fklj+6IE8eSjKdq6ioCFeuXEFZWRmWlpawurqKdDoNDocDkUiE06dPAwA1gfD7/XA6nXA4HEda\ny97Z2cHS0hK1xkwmkxgfH8ejR49gNBphNpvfmDotKiqifa5dXV0oKChAOp3GxsYGHjx4gMHBQczM\nzGS9DY1ATCzMZjOt5ZHTCovFwu7uLh0Huby8fGiL54dQWFiIurq6fb4Dbrcbi4uLMJlMcDgc9AET\niURoaWlBc3Mz9XrPBcRlj1zPZ599RpW7paWlND2/t1TlcDiwsLCAkZERjI+PY3FxEV6vN2fGQywW\nC5WVlejo6MC1a9fQ2NiIVCqFsbExPHnyhIpOsyGGJT71CoUCHA6H1kzn5+exsrLyqwV/RNS51zRJ\nqVTuc2IjCvFvvvkG//u//4vHjx9/8PshfddyuRyFhYVUmMtisRAOh+F2uzEyMoIXL17AZDJhdXX1\ntZ9BNu7xeJxmefY+w2traxgfH6etb0cNi8VCXl4edDodTp8+jYqKCuTl5VErZqvVioGBAQwNDWF7\nezvn1uQ2mw1er5e2kL56Ei8oKEBVVRXOnj1L2xl5PB6CwSAGBgbw8OFDTE9PH+km/KML8oFAABMT\nE/Rk09bWhqamJnR0dGBtbQ1zc3NIpVLg8XhQqVQoLS0F8LJWtr29jbm5OczNzR14wx8mDocDz549\ng0AgQCAQQCwWw8OHD/Hf//3ftDf1Vfa2rly8eBFXrlyBwWAAABpU79+/j8ePHyMYDOYscJLT4qu1\nbOJZTgRfgUCAuoDliry8POqt0NzcvM/1zWq1YmJiAmazGTs7O/R98Pl81NXVoaKigi7SmUwGyWQS\nsVjsQO3EUSAQCCCTyaDT6dDb24s//vGPKCgogFQqpZ8x6b8lC/TMzAzu3r2LgYEBzM3NHfk1/hIs\nFgs1NTU4d+4c+vr6IJFI4Ha78eTJE/z000/w+XxU/0C6GI7C0Yxci1AohEgkoiNKnU4nXC4XtUo9\nKGVNgumbHChJ14ZcLqdZn8rKSjpDgGzWysrK8M0332Bzc/NQgjwxXSFumcQqlcViwe/3Y3FxEd9+\n+y0GBwcRCAQODIgkVS8QCCAQCGiaP5FIwOFwYHp6Gvfv38fGxsYHX+8vQe4D0jFy8uRJFBYW0msZ\nHR1Ff38/pqenYTabcy6ABUAdAV+F/M7Ly8tx+fJlXL58GW1tbdQIyOl04v79+/jhhx+wvb19pC3F\nH12QJ+pFkiY5ceIEmpqaUFVVBaFQiIKCAuqqJJVKIRQKkU6n0d/fj4cPH2J4eBhWq/XIr3NrawvP\nnz/H+vo6JBIJ0uk0nE7nW0UiEokE5eXl6O3txeXLl1FUVERrrMSkg4jtcrmDlcvl9OY9c+YMBAIB\n4vE4NcFJpVLIz8+HRqOBVqvFzs5OzhTdNTU1tAZM0tqJRAI+nw+PHj3C3/72N9hsNjp1bC97e+MT\niQRsNhsmJycxMDCQlUWvvr4ep06dwpkzZ9DY2IjCwkLw+XxqJWy1WrG8vIzZ2VmqSvf5fNjc3MxZ\nludVyOlZqVSCy+XSz1gmk9EhJX6/H16vF7FYDKFQCD6fLyuZB6lUioqKCpw/fx4cDgezs7MH1kTl\ncjkKCgpQXV39mnAT+FkBrtVq6evE8WxvqYfL5UIikexLPX8IWq0Wvb29uHTpErq6uqBQKOgmaW1t\nDcPDw7DZbG9NafP5fKjVarS2tsJgMNAgZLfbcevWLTx69AhbW1tZ8bUQi8UoKSnBF198gfPnz6Oy\nshJSqRTRaBQWiwXT09OYmJiAx+M5Fl0ub4NM3Ovt7cVXX30FvV4PNpuNQCCA6elpPHz4EJOTk1nJ\nsn10QT6TySCVSmFjYwNOpxM7OzuwWq2orq6GRqOhD5VQKIRWqwWPx4NSqaQKXpImP2qIN/P79LCr\n1WpcuHABvb29qK2tBZvNRjgcht1ux/j4OIaGhrC5uZmz1Cvxca+rq0NPTw+6u7tRWVlJT8Xr6+uQ\nyWR0VG5paSkMBgOMRmPWgzybzYZYLEZlZSX6+vr21eKJU+Dq6iqmp6f3fR8ZaPSqLWssFsPi4iKM\nRiNWV1ePZNEjGwrigtjY2Ii+vj6cPXuWel2Hw2E4nU6MjY3Rcb1Go/FIVLmHQSaToWpum81GjWfK\
nysqo1ScZxxqNRhEKheD1erGxsQGbzUazYId1LWRDIZVKqXd9d3c3ZDIZSkpK3hjkiRBTo9G89rpA\nIIBKpYJKpTrQaRB4eTjx+XxYXFw8tDYpuVyO9vZ2NDY2Qq/X73uNnBZDodBbh+KQVt+ioiJ6j5EN\nwpMnTzA9PU3nOBw1arUa9fX1OH/+PFpbW8HhcLC5uYn19XWMjY1henoaNpstpx4P74JEIoFer8eZ\nM2foBh146U8/PT2NgYEBPHjwAFarNSvr4kcX5Alkx2qxWOBwODA0NLRvupZKpUJHRwe+/PJLVFRU\nwOPxUH/kXNdx3oRer8ef//xnGAwGWt92Op24ffs2Hjx4gOHh4ZyMZiWQqVqfffYZfv/730On0yEv\nLw+hUAhPnjzBvXv30Nraiq6uLuoo1t3dDYfDkXUHMTK2lQz/ISnaV/+8SnFxMZqamqDRaMDn8+nX\nRCIRWts8qgWP1COJoUlzczNaWlr26Qi2trYwOTmJv/3tbzAajfB4PDm9J36JTCYDi8WCkZERJBIJ\nVFdXo6SkBG1tbejp6aFDQYiugJSC+vv78de//hVzc3OHljVJp9OwWq1YWVlBQ0MDhEIhlEolTp48\niaamJvzmN785cG0gyvM3TTsj6fw3WdaSUs/Kygr+7d/+DaOjo4fyfoj1rEwme+01Mr2SlBje556d\nmprCf/zHf2BxcTGr5Ta9Xo/29nY6RnZ1dRUzMzMYHx/HkydPsLa2lnNtz7ug0+lw6tQp/P73v0dL\nSwt4PB78fj/MZjO+/fZbjI6OYn19PWuHtY82yAOgE4PedKrau+NLpVJvHeyRS7hcLmpra3Hy5Eno\n9fp9i3ooFML8/DzW1tZyOtQFePkQ/va3v8WFCxdoa5DFYsHQ0BD6+/sxPz8PnU5H0/YkxalQKKhL\nVbY+f2LmodPpoFarf3EiGlmo9Xo9DAYDlEol8vLywGKxqBvb6Ogotb48CsiQjJaWFnR2duLUqVPQ\narV0I7Wzs4P+/n709/fDaDTSTetxh8w1cDqd0Gg0UKlUKCwshFqtpoOZiL+7Wq2GVqtFS0sLXC4X\ndnZ2Di3Ip1IpDA4Ogs1mIx6Po7q6mg6mOWi++rtCDId2dnaoY5lUKqXPMpvNxsbGBkwmE4xG46Gd\n5KPRKBwOB2pqal57TS6Xo6SkhM6+IDoHNpsNoVBISxAKhQIVFRWQSqUIhUJU+c/lcumG66iRSCQo\nKChAZ2cnenp6oFKpEAgEYDKZMDIygtHRUdjt9qyZfv1aSIny888/x6VLl1BfX09H4z59+hSPHj3C\nxMQE7HZ7VjObH3WQfxs8Ho8uIqSV4TgGeODltba3t+PUqVMQiURU3ZpIJOD1emG1Wg99oMj7wGKx\nwOfzUVNTgz/96U8oKysDj8eD2+2G0WjEt99+i8XFRcRisX0bLplMRmei5+XlZXWWNofDgUKhgEaj\ngVwu31cHJZ7Ze4M1aVssLy9HfX39Pgtio9GIW7duYWJi4sjaFtlsNqRSKVpaWvDll1/i7//+76k7\nHxlfOj8/jzt37uDu3buIx+MHnmr26ghIb/bbJnYlEgn6WRzF7yaTycDj8dDrJ9dYUFAAjUZDf0da\nrRY1NTXUglqv1+PcuXN49uzZoV1LMpnEs2fP4PF4IBaLsbu7Cz6fDy6XSz+jvZ/T3gwDCZCkXLj3\nsyKnNLPZTMsmJSUltNYvEAiwvr6OxcVFWoI4DCKRCGw22z4rVJINISUEcppPJBL0PeTn59OOZydZ\nqQAAEfxJREFUB6VSSZ8Rv9+P2dlZBIPBfff/UaNUKtHW1obu7m50dnaCw+HAYrFgdnYWk5OTMBqN\nWbmOXwsRL2q1WrS1teH69eu4fPkyWCwWvF4vbDYb7ty5gxs3bmBnZyfrXhWfZJAnU4vq6uogk8mw\nsbEBv9//xhncuYQsxBqNBhqNhi7siUQCQ0NDuHfvHtbW1nLi904QCoW4cOECLl++DLVajby8PEQi\nEQwMDODu3bvUO/pVQRGZ0JSXl5e16VvvAvHu3ltCIIvypUuX0NLSsk8wRaZHHWWqkGwwzpw5g6am\nJhps4vE4HT37448/vnWWPPBSSCWVSsHlcqFWq9HZ2QmdTvdarz/p5SaOeC6XK2tZAVIbJyN0rVYr\nRCIRPfGTUcZHhdPpxP/8z/9gaGgIWq0WDQ0NKCkpobbHBKK+J7oAtVqNRCIBk8m07yQWj8cRCATg\n8/mo70VzczPq6uposLTb7YeeolUqlTh16tQ+vUkkEsH29jaePHmC27dv01kAZFPCZrOhUCjQ2NiI\nr7/+GnK5HAKBABqNBqurqxgZGcHU1BSWlpayZn5TUlKCr7/+Gk1NTeDxeIhEIrBYLHj48GFWRNIf\nikAgQFlZGc6fP4/f/va3qK+vpxvG+fl5/Pu//zvtg8+FnuCTDPJcLhf5+fmoqKgAn8/H6uoqvF4v\n3c0eB8hNkJ+fT602i4qKwOVyEQ6H4XK5MDIygpGREXg8nqxPbSMQEcnZs2dx+vRp5OfnIxKJwGq1\nYmhoCM+fP8fW1haSyST4fP4+v3cyv/y4ba52dnYwNTUFp9NJMz5tbW24cuUKWltb6QS3eDyOUCgE\nt9t9ZDtwHo8HPp8Pg8GA7u5udHR0oKSkhH6GiUQCm5ubWF5ehtFopMOLgJ8H1Ow9fapUKtpOVVRU\nhN7eXhrA9n6d2+2mU98Oezb3uxCLxfZtKjgcDlQqFaLRKHZ3dxEOh4+sREUUzrOzsxAIBFheXkZF\nRQXy8/P3eSKEQiFsbGzA5/MhGo1Cq9UiHo+/JiQlLZkkY8jn81FQUEDXm93dXSr4O8znIC8vD0ql\nkmb/COl0mpZ3OBwOzWjK5XKoVCpUVFTg9OnTaG9vh0QiQSqVgtfrhdlsxuTkJObn57M6pZAcyMgh\nh7yHN2WrjhNcLhcKhQLt7e04c+YMent7weVy6YTQsbEx3L9/H263O2cdRp9ckCepZRI8k8kkZmdn\n4Xa7s5ou/iWIirqkpASdnZ1obW1FaWkp8vLyYLPZMDExgbGxMZhMpgPbu7KFXq/H6dOn0d3dDYPB\nAD6fj8XFRTx58gTj4+OwWCw06LBYLDp1i/Tq2u12bG9v57ztby9utxtDQ0NwOBwQi8Xo6emhU8n2\nnni9Xi9MJhOWl5exsbFxJBstEoyvXbuG69evo7KyEmKxmL5OFuBkMgmNRoNgMEhPi6Wlpejr66MG\nKABQXl6O9vZ2KJVKyGQyiMVi5OXlvdbfTXwcZmdns26PfBAsFosOV+Hz+VhYWMC9e/eOxIRlr1dC\nOp3GixcvMD8/v28ENPBzWYcEb9KbfZB499WTcmFh4YHzBA4TUpMvKyujG1ORSISSkhKcPHkSwWCQ\nlkgMBgNOnDiB5uZm8Pl8iMViOuwlkUjAaDRiaGho3xTGbEGcB2OxGBQKBcRiMaqqqnDlyhX09/dj\namoqq9fzPpDPu6+vj1o1R6NR2Gw23Lx5Ew8fPsxpRxTw
CQZ54KXLUGlpKcRiMe2v3Gt2kmuIKKy5\nuRknTpxAZ2cnKioqqIJ7dHQUw8PDmJ+fz7lbHElnarVaCAQCZDIZOlWOWDgCoPbCBoMBJSUliMfj\nmJ2dRX9/P1wuV04C/N769N5phaT+3tjYCJVKhStXrqCzs5O6E5I+9ImJCdy7dw9TU1MIBAJHkmoj\nc8zJqWVvJgR4mQqsqKhAJpNBYWEhNW4BXgb0jo6OfcFJrVajtLQUIpHorf3YpGc9kUjkLEtEINfc\n2dmJ+vp6JJNJWCwWjI+Pw+PxHMn/SdYCcuo9zJ9LyjuRSATJZPLISg/JZBJer3efIRURkFZVVYHF\nYqG+vh4AaLaQ9GsTnUEikUAgEMDCwgJMJhMCgUDWA9Lm5iYePnwIoVAInU5Hsw+tra3w+/3Uwc/r\n9cLj8UCn00GpVMLr9cLv9yMYDNIpkVwuF0qlkr7PV1lfXz8UXQ2Z/NfT04OLFy/ixIkTUKvVCIfD\nmJqawsjICPr7+2EymXLqrQ98gkGexWKhuLgYlZWVEAgEcLvdNMgfB8gJvrS0FF9//TV6enrQ2NiI\nTCYDk8mEO3fu4MGDBxgbGzsWhg9qtRq1tbWQSCT0BOR2u2GxWBCLxegpkrhUdXR0oLKykk5junPn\nTs4mzwEHB3qZTEbnAdTU1KC3txc6nY5uAuLxOEwmE+7fv4+//OUvRxoEiXe4xWLB8vIyioqKIBaL\n9w1VampqQm1tLSKRCGKxGF2ExWIxFArFO6faU6kU3TBmMpmcjhAlkOf17Nmz6O7uRk1NDex2O1ZX\nV4+FY9/7Qvrht7a2sLOzg0gkAh6PdyTiX+L3cFAQKS4upqf7vex1FNzd3aUW1QsLC/uyctnEZrPh\nxo0bqKqqogOX5HI56urqkEwmoVAo4HK5qPVwQ0MDDAYDlpaWYLVaYbfbsbu7CzabDZFIRJ/pveZL\nhPv37x9KkCcHhc8//xx//vOfIZfLEY1G4XQ60d/fj1u3bmFlZeXA2STZ5pMM8iqVihrhEA/1XArX\n9sJisaj5RHNzM4qLi+nUMKvVisHBQZoWzHWAB35ubyGzkuPxOJLJJFgsFvR6PR2K0dnZid7eXlRX\nVyMajWJjYwNbW1s5HQMJ/DyVbe+fsrIy/OEPf6A96fn5+XSRyGQyCAaDGBoawtTU1JFnUchn+vz5\ncySTSXC5XJw4cQKVlZX7gjdp8yK/B/Jv74PRaMTy8jLi8ThmZmbQ39+fNU/ygyDK84qKCvT09KCg\noAChUAgLCwvvZSJ1HCEjjLe2tuD1erG0tASbzXaop2Ry8j2oT/5NkO6AeDyOcDhMh3WNjo7C6XTm\nJGtILMcXFhYwMTFBDxUVFRVQKBRoamqiA6bW1tZgMBig1+up/4bH46HPL5/PR3FxMerr62kL4N61\ndHl5Gc+fP//gayZlNrVaDalUCjabjcXFRfzwww8YGhrC2trasWlv/aSCPBnmUVZWhtLSUqRSKfh8\nvmNziudyuRAKhdDr9aiqqkJZWRkkEgmdD//8+XMsLCwcG0tS4OVCvLfmC7w83Tc3NwN4ebOrVCp0\ndXWhra0N4XAYq6ureP78OaxWa85udNL+RE6vZBEAXqqSlUrlvq8nXvB+vx/Ly8t48eIFHXZ0lOz1\n0U8mkygrK4NUKqW1SZJyJ77eb4IYrni9XjidTsTj8dc2V2TyIbEJnZ+fz+kGTCQSoaqqCs3NzWht\nbQWXy8Xa2hrGxsaOfLbEURMMBmEymWgq2WQyweVyHer9FI/H4Xa76UhTkUj01rY34ijo9/vhdrvh\ncDjQ39+Px48fY3NzM2c+HGSE+MrKCiYnJ8Hn86HX65GXl4fS0lJUVlYiGo3C7/djZ2cHWq0Wcrkc\ntbW1CAaD9Lp3d3cRi8UgFAqh0Wjg8XioLzz53A9LPEucAoVCITgcDu3r/+mnn7C+vn5sYg7wiQV5\niUSC4uJidHR0wGAwYHt7O6f95a8iFotRWFiIzs5OdHR0QCgUIhwOw+Fw4Pvvv8e9e/cOdQTlYRAM\nBrG1tUVrZUKhEL29vTTIk6E6xI9gfn6etu9YLJacXTdJu5PefTJ8403EYjF4vV5MTk5iaGgIq6ur\nWRl/SiBTv+bm5qBQKCCXy1FZWQmtVvtO359KpRAMBjEyMoL//M//hNvtfq1MQoIBUS7n2hxKo9Hg\n+vXrdCSzyWTCs2fPcOfOnayONT0KNjc3cePGDeq0SDoFDlObsr6+ju+++w6RSAQSiQRlZWVvtNUF\nXrYOLi4uYmVlBUajEaOjo/Q+yaUwjHiC2O12vHjxAmw2G263G2q1Gnq9HhqNho6yVigU4HK54HK5\ndCAQCeCRSIRupsxmM4aHhzE5OYnt7W162Dgs5810Ok3LZ4FAAGazGQsLCzCbzTlT0b+JTyrI5+Xl\nQSaTUfMVp9N5rIK8Xq9Hd3c3zp49i+bmZggEAqytreH58+eYnp6G1WrNulHCL2G32zEyMoJUKoX6\n+nq6kOTn59PFgc1mw2g0YmZmBnNzc5idncXS0lJOSySJRAJWqxWTk5MoLi5Ga2srKioq3vj1ZrMZ\nN2/epIugy+XK6imXCLaWlpao7qGhoQHl5eXUTpik8GOxGBwOBzweDxXhkZTn8+fPMTIyAr/f/5pD\nGMlq5LoMxGaz0dDQgJ6eHurLb7PZ8PjxY+rpnWt3xw8lGo1ibW2NaisikcihZ4VIP/nw8DAEAgH6\n+vpgMBggk8nAYrGotsTlciEej2N1dRVLS0tYX1+H1WqF2WzOeVcF8HNJjYhdnU4n9dIvLS1FYWHh\nO/2cUCiE2dlZOJ1OOmvCYrEgGAwe+vuMRqPY2trC48eP4fF44HA4MDk5eay6iAifVJDn8Xh0pjJZ\nCI9L2oTFYqGiogKXL19GZ2cn9Ho93eEPDg7Sm/G4YbVa0d/fT3uFlUolxGIxEokENjY2EAqFwOPx\n8OOPP+K7777D9vb2sdjJxuNxLC4uUjc3gUCA0tJSWnbY20YFvKxX/8u//EtOg0ssFoPFYoHNZsPQ\n0BCamppQU1MDlUqF06dPQ6fTgc1m04zD7OwsVlZWaNaCLDa5zKAcBFH/k8+ew+Hg7Nmz+Prrr9HR\n0QG3243x8XHcvHkTAwMDOb7awyGVSh35vZRKpZBKpehkNqIv4fF44HK58Pv96O/vx/j4OPx+PxwO\nBxwOB8Lh8LE7TAAvMw1OpxOTk5PU5lin0x04FOggIpEIlpaWqK/IUW5mI5EIIpEIvv/+e/z000+I\nRqPU4+G48UkFeZVKhcbGRgiFQlpvOg6WiGTetFKppGljn8+HpaUlTExMYHl5OacK9Lfh8/mwsrJC\nXeJu3rxJvbBDoRA9yVsslqyMTXxfSE+8QqEAm81GW1sb8vPzEY1GMTY2RhXcU1NTx+baiRhvdXUV\n29vb4PP5mJ6exp07d+gJzeVywev1IhAI0A0LmXV/nGCxWGhoaEBVVRUKCgpo8Dt//jx0Oh2MRiMd\nQPKx1+FzRTQ
ahd1ux3fffYfHjx9Tv/x4PA6z2QyPx4NkMolwOIxIJHLsp7gBLzfpOzs7iMVi7zy7\nIJ1Ow+/3Z7UMRbobiP7nOPJJBXlSk2exWHC5XJiZmYHNZsv1ZdHZwqWlpVCr1eBwOHC73RgdHcXU\n1FTObWvfBtmhbm1tUWONjwliHkM80EOhEJRKJUKhEB4+fEgngm1vbx+bh5RkGNxu97ESYf4aWCwW\nlEolNWNJJBJwOByQy+Vwu90YGBjA0NAQRkdHj0UG6GMkkUhgZ2cHIyMjub6UQ4NkKY7rukjItcfE\nu/BJBXmC3++Hx+OhZhS5RiaToa2tjQoC4/E4TdNPT09TW1iGo8NkMmFjYwN3796lE7b2eo2nUqlj\nmWr72MlkMnC5XHC73dBqtUin09jZ2cHjx4+xurqK6elpuFwuhMPhY29hysDwMfJJBXmn04mhoSEs\nLS0hGAzC6/XmXGQE/KzEBF6e6o1GI4aHh/eJYhiOllAoRL3IGbKLx+PB7Owsbt++jd3dXdjtdths\nNjgcDmxubjL3PwPDEfJJBXky7vG4EQ6HsbKyQvuXR0ZG8ODBg2OdpmdgOAwymQxtZR0bG8v15TAw\n/L8ju6On9pP7I3aWIMK7mpoaVFdXY2Fhgc6IPy51YAYGBgaGj45fjOFMkGdgYGBgYPg4yWUMZ2Bg\nYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBg\nYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBg\nYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBg\nYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGBgYGD4f83/AWvhJbdqo4fI\nAAAAAElFTkSuQmCC\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "tf.reset_default_graph()\n",
+ "\n",
+ "# Define our input pipeline. Pin it to the CPU so that the GPU can be reserved\n",
+ "# for forward and backwards propogation.\n",
+ "batch_size = 32\n",
+ "with tf.device('/cpu:0'):\n",
+ " real_images, _, _ = data_provider.provide_data(\n",
+ " 'train', batch_size, MNIST_DATA_DIR)\n",
+ "\n",
+ "# Sanity check that we're getting images.\n",
+ "check_real_digits = tfgan.eval.image_reshaper(\n",
+ " real_images[:20,...], num_cols=10)\n",
+ "visualize_digits(check_real_digits)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "## Model\n",
+ "\n",
+ "Set up a [GANModel tuple](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/gan/python/namedtuples.py#L39), which defines everything we need to perform GAN training. You can create the tuple with the library functions, or you can manually create one.\n",
+ "\n",
+ "Define the GANModel tuple using the TFGAN library function.\n",
+ "For the simplest case, we need the following:\n",
+ "\n",
+ "1. A generator function that takes input noise and outputs generated MNIST digits\n",
+ "\n",
+ "1. A discriminator function that takes images and outputs a probability of being real or fake\n",
+ "\n",
+ "1. Real images\n",
+ "\n",
+ "1. A noise vector to pass to the generator"
+ ]
+ },
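+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Of the four ingredients above, the real images already come from the input pipeline, and the generator and discriminator functions are defined in the next two cells. The noise input is just a random tensor; the cell below is a minimal sketch of one, where the noise dimensionality of 64 is an arbitrary illustrative choice. The inputs actually used for training are the ones defined in the GANModel Tuple section."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Illustrative sketch only: a batch of noise vectors for the generator.\n",
+ "# The dimensionality (64) is an arbitrary choice for this sketch; the\n",
+ "# GANModel Tuple section below defines the inputs actually used.\n",
+ "example_noise_dims = 64\n",
+ "example_noise = tf.random_normal([batch_size, example_noise_dims])\n",
+ "\n",
+ "# One noise vector per image in the real-image batch.\n",
+ "print(example_noise.shape)  # (32, 64) with the values above"
+ ]
+ },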
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Generator"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def generator_fn(noise, weight_decay=2.5e-5, is_training=True):\n",
+ " \"\"\"Simple generator to produce MNIST images.\n",
+ " \n",
+ " Args:\n",
+ " noise: A single Tensor representing noise.\n",
+ " weight_decay: The value of the l2 weight decay.\n",
+ " is_training: If `True`, batch norm uses batch statistics. If `False`, batch\n",
+ " norm uses the exponential moving average collected from population \n",
+ " statistics.\n",
+ " \n",
+ " Returns:\n",
+ " A generated image in the range [-1, 1].\n",
+ " \"\"\"\n",
+ " with framework.arg_scope(\n",
+ " [layers.fully_connected, layers.conv2d_transpose],\n",
+ " activation_fn=tf.nn.relu, normalizer_fn=layers.batch_norm,\n",
+ " weights_regularizer=layers.l2_regularizer(weight_decay)),\\\n",
+ " framework.arg_scope([layers.batch_norm], is_training=is_training,\n",
+ " zero_debias_moving_mean=True):\n",
+ " net = layers.fully_connected(noise, 1024)\n",
+ " net = layers.fully_connected(net, 7 * 7 * 256)\n",
+ " net = tf.reshape(net, [-1, 7, 7, 256])\n",
+ " net = layers.conv2d_transpose(net, 64, [4, 4], stride=2)\n",
+ " net = layers.conv2d_transpose(net, 32, [4, 4], stride=2)\n",
+ " # Make sure that generator output is in the same range as `inputs`\n",
+ " # ie [-1, 1].\n",
+ " net = layers.conv2d(net, 1, 4, normalizer_fn=None, activation_fn=tf.tanh)\n",
+ "\n",
+ " return net"
+ ]
+ },
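+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a quick sanity check, the sketch below feeds a random noise batch through `generator_fn` and confirms that the static output shape matches the MNIST image shape used throughout this notebook, `[batch_size, 28, 28, 1]`, with values in `[-1, 1]` because of the final `tanh`. The noise dimensionality and the variable scope name are arbitrary choices for this sketch."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Sanity-check sketch: the generator maps a noise batch to 28x28x1 images\n",
+ "# because of the fixed 7x7x256 reshape followed by two stride-2 transposed\n",
+ "# convolutions. is_training=False keeps this check from adding batch-norm\n",
+ "# update ops to the graph.\n",
+ "with tf.variable_scope('generator_shape_check'):\n",
+ "    shape_check_noise = tf.random_normal([batch_size, 64])\n",
+ "    shape_check_images = generator_fn(shape_check_noise, is_training=False)\n",
+ "\n",
+ "print(shape_check_images.shape)  # (32, 28, 28, 1)"
+ ]
+ },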
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Discriminator"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def discriminator_fn(img, unused_conditioning, weight_decay=2.5e-5,\n",
+ " is_training=True):\n",
+ " \"\"\"Discriminator network on MNIST digits.\n",
+ " \n",
+ " Args:\n",
+ " img: Real or generated MNIST digits. Should be in the range [-1, 1].\n",
+ " unused_conditioning: The TFGAN API can help with conditional GANs, which\n",
+ " would require extra `condition` information to both the generator and the\n",
+ " discriminator. Since this example is not conditional, we do not use this\n",
+ " argument.\n",
+ " weight_decay: The L2 weight decay.\n",
+ " is_training: If `True`, batch norm uses batch statistics. If `False`, batch\n",
+ " norm uses the exponential moving average collected from population \n",
+ " statistics.\n",
+ " \n",
+ " Returns:\n",
+ " Logits for the probability that the image is real.\n",
+ " \"\"\"\n",
+ " with framework.arg_scope(\n",
+ " [layers.conv2d, layers.fully_connected],\n",
+ " activation_fn=leaky_relu, normalizer_fn=None,\n",
+ " weights_regularizer=layers.l2_regularizer(weight_decay),\n",
+ " biases_regularizer=layers.l2_regularizer(weight_decay)):\n",
+ " net = layers.conv2d(img, 64, [4, 4], stride=2)\n",
+ " net = layers.conv2d(net, 128, [4, 4], stride=2)\n",
+ " net = layers.flatten(net)\n",
+ " with framework.arg_scope([layers.batch_norm], is_training=is_training):\n",
+ " net = layers.fully_connected(net, 1024, normalizer_fn=layers.batch_norm)\n",
+ " return layers.linear(net, 1)"
+ ]
+ },
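+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "With both networks defined, the sketch below shows how the four ingredients from the Model section plug into `tfgan.gan_model`. It uses separate variable scopes, `is_training=False`, and an arbitrary noise dimensionality of 64 so that it does not interfere with the model built in the next section, which is the version actually used for training."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Illustrative sketch of assembling a GANModel tuple from the pieces above.\n",
+ "# Separate scope names and is_training=False keep this sketch from\n",
+ "# interfering with the real model and train ops built in the next sections.\n",
+ "import functools\n",
+ "\n",
+ "sketch_noise = tf.random_normal([batch_size, 64])\n",
+ "\n",
+ "sketch_gan_model = tfgan.gan_model(\n",
+ "    functools.partial(generator_fn, is_training=False),      # noise -> digits\n",
+ "    functools.partial(discriminator_fn, is_training=False),  # images -> logits\n",
+ "    real_data=real_images,          # real digits from the input pipeline\n",
+ "    generator_inputs=sketch_noise,  # noise fed to the generator\n",
+ "    generator_scope='SketchGenerator',\n",
+ "    discriminator_scope='SketchDiscriminator')\n",
+ "\n",
+ "# The tuple bundles the generated images and both networks' variables/scopes.\n",
+ "print(sketch_gan_model.generated_data.shape)  # (32, 28, 28, 1)"
+ ]
+ },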
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### GANModel Tuple"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAfkAAACHCAYAAAAV6SDhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsvVmMnOd55/urfd/Xruqq3jd2k2yyySYpU6IsW5atyE4s\n20Ewk8EMMDBm7nIxGGAuz7mZq7mfZC7GgJEAiR0pnkR2lEQRSYmUuHezm71vVd1V1bXv+3oudN4X\nNg5w4puBzwHqD/CCUrOrq77ve5/n+S9PwxBDDDHEEEMMMcQQQwwxxBBDDDHEEEMMMcQQQwwxxBBD\nDDHEEEMMMcQQQwwxxBBDDDHEEEMMMcQQQwwxxBBDDDHEEEMMMcQQQwwxxBBDDDHEEEMMMcQQQwwx\nxBBDDDHEEEMMMcQQQwwxxBBDDDHEEEMMMcQQQwwxxBBDDDHEEEMMMcQQQwwxxBD//4Hid/XC/+k/\n/aeB2WzG6XRy8eJFvF4v6XSa8/Nzzs/P6Xa7mM1mFhYWKBaL7O3t4XK5aDabPHr0iKWlJX7wgx+g\nUqkolUpsbm6Sz+dptVqo1WoMBgNOp5NkMsnBwQFarZZut0smk+HWrVt897vf5fPPP2d/f59+v49W\nq0Wn03FyckK73WZubo5yuUw8HufKlSuMj4+j1+uJxWIcHh4SDAZxu93odDry+TypVAqHw4HJZEKj\n0dBut2m328zPz6NQKFhbW0OtVuP1erl48SIWi4VYLMbm5iZbW1tMTU3hdrtRKBSMjIwQCASIxWKc\nnp4SiUSYnJxkaWmJarVKsVgknU7z7Nkztra2ePvtt7l8+TJer5cnT57w5ZdfcunSJaxWK8lkEq1W\ni8ViQafTUalUODo6wuPxEAwGqVar6PV6RkdHSaVSxONxTCYTKpWKTqdDv98HwG63YzKZ0Ov1TE5O\nEggEODw8pFgsotFo2Nra4ujoiKWlJXQ6HfF4nEajgU6n486dOwQCAZrNJrFYjGg0ysnJCV6vl29/\n+9ucnZ2xs7NDKpWi0+lgNpsZDAaoVCrGxsbo9/ucnZ2h0WiwWq1MTU2hVqtJp9PYbDYMBgNnZ2fy\n9aampjAajfzVX/0VnU6H1157jX6/T6/Xw+VyoVKpqNVq3L9/n1gsxo9//GNsNhsbGxvU63UUCgWh\nUIjR0VGCwSClUolEIsHz588xm82srq7S7XYB8Hg8nJ+f8+jRI7rdLlqtFo/HQ6PR4PT0FJPJRL/f\nZ2tri0AgwNtvv41KpaLb7VIoFOQ932q1GAwGGAwGzGYzZrMZvV6PxWLB7/ejUCioVCoEg0EGgwGf\nfvopOp2Oy5cvUy6XKZVK1Go10uk0sViMS5cuEQ6HabVaKBQK1Go1drsdi8WCWq2m2+3SaDS4d+8e\nkUiEsbExlEollUoFv99Pv9/nxYsX+Hw+bt26hVqtRq1Wo1KpiEajbG5uYrVa8fl8zM3N0ev1yGQy\n5HI5UqkUZ2dnLCws8M1vfpNsNks8HmdnZ0de35s3bzI1NQXA5uYmDx8+xGg0EggEWFlZweVyoVar\nSSaTJJNJMpkMPp+PqakpTk9PyeVyDAYDqtUqpVIJs9mMUqmkWq2Sz+cpFosEAgGmp6e5fv062WyW\nnZ0dFAoFGo0Gg8FAIpEgkUjw/vvvMzk5STQaZWtri4ODA27fvs3c3Bxms5lisUgmk6Hb7aLT6XC5\nXLRaLUqlEoVCgWKxSKlUwuFwEAgE8Hg89Pt9YrEYKpUKlUrFy5cv0ev1fOc732EwGJDNZkkmk5TL\nZfn3TCYDgFKpRKlU8s1vfpOvf/3rHB0dsbW1xYsXL7h06RLLy8t8+eWX9Pt9bty4QbvdplQqAdDt\ndmk2m5TLZSqVCkqlkmazST6fZ3R0FL/fz+7uLgaDgRs3brC3t8fp6Sk/+MEPcDqdbG9vk81mabVa\nTExMoNPpKBaLOJ1ObDYbyWSSQqFArVaj0WhQr9epVCrU63UajQYqlQqj0cjExARWq5V+v0+tVqNe\nrzMYDPB6vSwuLpLNZonFYiQSCXnOLS0tsby8zMTEBEdHR/z85z9nZGSEcDiM1WrF5XLh9/s5OTnh\n9PSUXq+H1WolEAjQ7/fp9/soFApOTk54/PgxS0tLzM3NoVQqKZVKxGIxAoEAarWae/fu4XK5ePfd\ndykUCuRyOdrtNg6Hg4mJCXZ2dshkMszNzdHtdjk9PcXj8WCz2Wi32+zt7fHw4UMmJiaYmppifHwc\nt9uN1Wrl8ePHHB8f43A4WFpa4rXXXqPb7VIsFtnZ2eHFixc8fvyY733ve1y7do1Wq8X29jZffPEF\nDoeDYDDI5cuXcblc6HQ6ms0m8XicTz/9FIfDwde+9jUSiQTFYpH/8l/+y79Yw9X/Wyr4b4H/+B//\nIxsbG3z66aesrKzg8/k4OzvDaDRy+fJlBoMBJpOJsbExMpkMjUZD3hC1Wg2TycTExATlcplYLMbd\nu3cxmUxcu3YNh8NBp9NhZ2eHQqFAv98nGo1SLpdRqVQolUocDgcHBwesr69z48YNgsEgFosFg8GA\nXq/nzp07ZLNZtra2CIVCWK1W6vU6gUAAv9+PxWLB5XIRCASIRqOsr6/TbDbp9Xr4fD554IkGolar\nUSwWSSQS8nssLi7y6tUr+fBOTU2RTqfR6/Xy3+h0Or7+9a/LJmVycpJUKsXTp08pFos4HA7GxsaY\nnZ3F6XTy9OlTotEo4XAYnU5Hu92m1+sxGAywWCz0ej2USiUHBwesra2hUCiw2+2kUikKhQLVapV3\n3nmHYDBIPp/nxYsXHB0d8d3vfpdAICAbMPH9dDodJpOJly9fymZEqVSSz+fR6/W4XC75kO7u7rK7\nu8vBwQHLy8vy+6lUKgKBAF988QXtdps7d+6gUChotVqcnZ3hdDploRaFNJfLcXx8jNvtxul00mw2\nGRkZ4ebNmyiVSuLxOPV6nWw2y+7uLjdu3CAUCnH37l22t7dJpVLEYjGMRiM2mw23241SqSQWi1Gv\n15mdncXtdqPVaimVSpycnHB8fIzdbmdqaopwOIzP58NoNKLRaKhUKqRSKUqlEtVqlXa7jV6vp9Fo\nkMlkePXqFY1Gg6997WtcunSJUChEp9OhVqtRKpX44osviEQiTE1N0Wq1OD09xeFwyM8vk8nw5MkT\n+Xlns1kmJye5ceMGH3zwAQcHB7zzzjuUy2WazSajo6NMTU0xGAx49eoV//AP/8CtW7e4ePEiJpMJ\ns9mM0Wjk5cuXZDIZrl27hlKp5OTkRB7cTqeT2dlZVldXefXqFfl8nsnJSVqtFk+fPuXq1auMjY1h\ns9nkoX9wcEAqlcJut9PpdNje3qZWq5HJZEgmk6TTaWq1GsvLy4TDYcxmM/F4nFKpxNjYGHNzc4yO\njqJSqSiXyzx//pyzszMmJycJBoMsLS3h9/s5PDzks88+o1KpoNPpCIfDaDQanj9/Tq1Wo91uy/PE\naDQyPj6ORqMhlUpRLBYpl8skEgmOj
o4olUpUKhUikQh+v5/V1VX0ej0ajYZOp8Pp6Sk7Ozu88847\nhMNhkskkFouFUCjE3t4emUyG/f19eS7Mzs4SCoVYWVnh/v37fPrpp/h8PgKBAJVKhVgsxvHxMR6P\nB51ORzabxefzsbCwIBtrpVLJyMiIfF4LhQL3799nMBgwOjpKu92mXq+zsbGB3+/HbrdzdnaGXq9n\nZWWFzc1NIpEIarVaPj8qlYper8elS5fQarUMBgOuXLnCnTt38Hq9AExOTqLX68nn82i1Wux2O6FQ\nCI1GIxvXYrHIyMiIHKhu3LhBvV7n2bNnaDQafD4fr732Gg6Hg3Q6zeHhIeVyGaVSSa1Wk03lxMQE\n6+vrVKtV9vf3sdlsstlMJpO8fPmSa9eu8Qd/8Adsb29TrVZJp9NsbGwQiUR48803WVhYwOfzkcvl\nqFaruN1u4vE4Dx8+5M033+Tdd9/l1atXKJVKFAoFDoeDZrNJu91GrVYzOjoqh79isUg+n+f8/Jzn\nz5/LM8xgMJDL5bh+/TpXr16lUCjQbrf58ssvsVqt+P1+gsEgWq2Wer2OTqeTTZHZbEan09Hr9SgU\nCjx48IC9vT0ajQZmsxmLxcLx8TGlUgmPx4PJZEKtVpPJZDCZTIRCIRqNBtVqlX6/j06nY3R0FKvV\nSqPR+K1qrep/ZyH/f8OVK1f+D5VKxeXLlwmFQhgMBoxGI/1+X05odrsdAK1Wi9frldOi0+nEarVS\nrVZpNpsUCgXW19dxu92srq7S6/VQqVRMT09TLpd5+vQpN2/e5M033yQYDGI2m8nn83g8Hq5du8aF\nCxdwOp2o1WocDgcKhYIXL15wcHBArVZjZGQEh8NBr9ej2WzSaDRwOp1YLBba7TYqlQqv10s4HCYU\nCuHxeDg4OOD+/ftYLBbZbYbDYUZHRzEajQDyoA2FQqhUKtrttvz/9Xqds7Mz2u02wWAQtVpNrVbj\n4cOH7O3tEQqF8Pv9+Hw+rly5gsPhkIdNPB6XjYLX68Xr9cqbW6PRsLCwwNLSEhcvXiQUCqHX60km\nk0xOTvLWW2/hcDgwGo2MjIyg0+kwm83MzMzg9/sZGRmRn1WpVCKXyxGLxXA6nVy/fl0Wv7m5OZxO\nJyqVCpvNRqfToVKpUKlUUKlUrKysMD09jcPhwGKxoNfr6ff72Gw2OW3q9XqcTidKpZKzszO0Wi1u\nt5t+v49er5eftcFgQKvV0mw22dvbo9VqYTKZ8Hq9XLhwgZmZGXw+H2azWX4W3W6X1dVVvv3tbzM/\nP4/VasXtdsvJy2w2o9FoMBqNmM1mHA6HvD5Xr14lHo+zu7uLVqulUCiwt7eHx+NhdnYWj8cjJ8ti\nsUi1WmVycpL5+Xncbje7u7uyGA0GA9kMiWtRq9XI5XKMj4/LQ1ccTGJimZ2dZXJyEkBOl41Gg6Oj\nI3Z3d7lw4QLz8/OSFen3+/I9iaIg7pFQKMT09DQKhYJisYharcZoNOJ0OhkZGcFms8mpXavVolQq\ncbvd8lqHw2F6vR7ZbJZms4nb7ebtt99mYWFBMi9jY2Po9XrZuIjC7/F4JPM2Pz+P1+ul0+mQSCTY\n39/n6dOnVCoV3njjDbxeL9VqFfhq2gUwmUyYTCZcLpcszDqdDp1Ox/LyMlevXmV8fJxOp0Mul+P0\n9JRqtcro6Chzc3PcvHmTyclJ+v0+qVQKk8mEx+MBoNPpUK/XJUsiJnq1Wi1fQ6fT0Wg02NzcRK/X\n4/V6qVQqnJycsL6+jlKpZHl5mZmZGcLhMB6PB5fLxejoqPxMkskkbrebyclJVCoVFouFsbExAoEA\nJpOJ8/Nzjo6O2N7eZmpqiuXlZUZHR/F6vbTbbSwWi2yqDAaDbNpzuRxjY2P4/X40Gg0KhUIWCaVS\nydbWFp1OB6PRiNVqpdvtkk6nSaVSVCoVxsbGsNvtNBoNDg8P2djYIJPJ0Gq1aLVacnpPJBKcnZ1R\nq9XweDyEw2GcTifdbpd8Po9arZZnbiwW4+DgAKfTidfrZWdnh16vx8rKChcvXsRut3N8fEw6ncZq\ntbKysiJZAZfLhcViIZ/P0+v1uHnzJgqFgs8++wyz2czY2BiNRoNKpYJWq+XmzZuEQiGq1SoGg0Gy\nr6KQWiwWut2uHKZmZmZwOp1UKhV8Ph8zMzPY7XbsdjvT09MUCgW2t7dJp9OyuVxZWWFmZoZSqUSz\n2cRgMEhWZXp6GqvVKpmeQqEgmdWRkRHMZjMKhYJwOMz4+DihUIixsTG8Xi9GoxGlUkmj0cBqtaJW\nq9nZ2cFoNDI2NiYZqZ/85Cf/579Ua39nk/zBwQGTk5PMzMxQLBapVCrYbDYUCgWlUkke5qJr9ng8\n9Ho9+v0+ExMTGAwGDg4OyGazVCoVSUsC1Go12aWdnJzQ6/W4ePEiq6urHBwcEIlEePXqFXfu3GFi\nYoJKpUKj0aDZbKJQKGg0Gjx//px+vy8pz16vB0C/36fdbktKrdlsolarsdls9Ho9eTA0m0263S6x\nWAyHw8GlS5fo9/uUSiXy+TzNZhOLxYLP50On0/H8+XMKhQLT09MolUrq9TrwVYOj1+upVCqcn5+z\nvb2NWq3m2rVrVCoV4vE4tVqNs7MzUqkU/X6fyclJSa+GQiE50ReLRXQ6HbOzs3i9XvR6PalUio2N\nDSlpjI6OUq1WZUcuirRGo6HVauF0OqnX66RSKbLZLLlcjkKhwNzcHPPz87RaLXQ6HR6Ph5cvX7K5\nuUk0GpVTv9lsptlsYjQaUSgUv9H5Tk5OykOh2WzS7/dxOp0Ui0UikYg8RAeDAXa7ndHRUdLpNOl0\nGpPJRKVSYXt7m1KpxMzMDAsLC5hMJllM4atJxWq1AhAKhQiFQqjVarRaLXNzc/LaJZNJut0uRqNR\nNhEOh0M2E7FYjLOzM+x2O61Wi2w2y9zcHHNzc1QqFZLJJJVKBQCNRsPy8jI2m416vS6nv0ajQSgU\nwu12MzExQTgclgyE3W7H5/Phdrup1+v0+33cbje1Wo1+v8/CwgK9Xo9oNIrBYMBms8kDSBRp8T6t\nVismk4lms0mxWGQwGEiGwel04na70Wg0FItFVCoV/X6fwWCA1WqVjFI4HMZoNJJIJPB4PFy5cgWl\nUimlhWq1ilqtxu/3Y7VauXTpkpSyRkZG6HQ6lEolebBubm6ysbHB9PQ0NptNyjzdbpfz83POzs6I\nRCKcn59LmWgwGFAoFICvCrDdbsdoNMqmHpAFTa/Xs7y8zPT0NFqtllarJSd80ei43W4MBgNqtZpy\nuSzPn3w+T7vdptvt0uv16PV62O12jo6OiMVi3Lx5U05nJpNJ0rTivq3X67KJuXbtGrOzsxiNRtrt\nNslkEqVSicvlot1uA6BSqVCr1Wg0GtncBoNBAM7Pz6nVauj1ekmf5/N5lpeXabVaxONxydZVq1Up\ns4liJ6SaTqeDRqORryG+XhR+i8VCp9OR77HdbrOysgLA6ekpz549Y39/H4fDgd1uZzAYoFQqGQ
wG\nkt2zWq3odDoprwGySFmtVhQKBel0mp2dHfx+v2xMPR4PN27ckINMuVzGYDCwurqK1Wolm83KJqXX\n66HT6bBarZjNZnK5HBsbG5ItFJ/v3NwcKpWK8/NzSqUS/X4ftVpNq9Wi3+8zMzNDOp1me3tbNhx2\nu51+v0+n02FsbAyn0ymZxpGREZ49e8bx8TF+vx+TycTc3BwzMzO43e7fOC+q1aocZqrVKrFYDJfL\nhVKpxOl0ykZZnN+ChW21WvJcrVQqkh0LBALYbDZ5/jQaDQaDgTzT/iX8zop8OBzm9PSUTz75BIVC\ngc1mY35+HpPJxGAwIJPJkMlkiMfjaDQaHA4HKpWKK1eusLq6isFgoFqt8sknn7C5uUm32yUajfJ3\nf/d38tATGtH7779PMBik0WjIrlzc0OJACYfDXL58mWg0SiaTkVOPzWYjl8tJPa7f72MwGOQh4PF4\niEQirK+vc3R0RCqVotvtsry8zH/+z/+ZBw8esLm5iVqtJh6P8+rVK6xWq9T4s9ksh4eH8iDe29vD\nbrfLCUjohy9fvuTBgwdcv36dyclJjEYjjx8/5le/+hUajUbSanNzc3z/+9+XBX18fFzqvgqFQlLp\nYir+9Wntiy++4OnTp7z77rtcvHiRer2OwWDA7Xbz4MEDMpkMLpeLVCpFKpXC5XLJSU5Qn2+++Saj\no6MMBgMikQiff/45JpOJ+fl5vvGNb3B+fk4ul+P+/ftUq1WOjo64c+cOKysrnJ+fo1QquXLlCi9f\nvmR3dxebzYZKpUKr1ZJMJhkMBszMzGA0GlGpVMRiMV69eiW1tomJCV6+fMnDhw/54z/+Y4LBIMlk\nEqPRiMlkkgfIa6+9xosXL3j69ClLS0uEw2EAeUA+fvwYm82GTqfj4OCA4+NjlEolrVaLDz74gImJ\nCSYnJ2k0GtRqNVQqFRqNBqVSSblc5vz8nGg0ymAwwO12YzQaaTab8l4Tze2jR49IJBK8/fbb3Lp1\nC6/Xi9lsxuVyyYZIMFYAwWAQg8HAyckJ2WxW6sqFQgGNRkM4HOaNN95gYmKCTqeDQqHg6OiIv/mb\nv2Fqaor5+Xni8Tjn5+d89tlnTE9PMzIyIhvWer3O1taW1MBnZ2elDqnT6aQ34Pj4WNL13W6XbreL\nRqNhenpaNuDCc1Mul0mn06ytrWG321leXsZkMlEsFtne3sZqtWKz2chms+TzeSqVitS6BaX9xRdf\ncPHiRS5cuMDf/u3fEo1GmZ2dJRwO4/f7iUaj1Ot1eYiK50mhUEhpJpPJMDk5idfrxefz8eTJEz79\n9FPef/99lpaWWFxcZH19nfv379PpdOQzIprB09NTWq0W3W6Xs7Mz8vm8LLzf/OY3yeVyVCoVRkdH\nCYfDTE9P0+/3efToEbOzsyQSCf7sz/4MrVZLIBCQdPDMzAy5XI5MJoNOpyMQCNDpdKR2e/HiRRYX\nF1laWuKLL77gpz/9KRaLBZPJJAtmvV7nF7/4BUajkWAwyNnZGV9++SXRaJTx8XGmp6el1+Lu3bto\nNBpu377N0dERa2trkqm4e/cu2WxW+iaKxSIffvghW1tbZLNZFhYW8Pv9vPHGGxQKBRKJBF6vF4VC\ngV6vl9pxsVjEbDYzNTUli79ga9PpNPfu3aNUKnHp0iVGR0fpdDpEIhHJPojGKxKJ8Nlnn1EsFoGv\n2M96vY5SqWRjYwOFQiG/PpPJkM1mOT09ZXt7m1wuh9vtplgsks1mJVPj8/lYWVmhVquxtbXF8vIy\nFouFX/ziF+TzeQaDAVqtVg6ZnU6HaDSK3W7n2rVrcgIXNaBSqWC1WqnVamSzWZ4+fSobeZ1ORy6X\n4+2332Zubo5Wq8Xx8THHx8eSwdzZ2cFut6PRaPj4449ZX19nMBiwtLTEW2+9JRsUIfEJ2aJcLv9W\ntfZ3VuR9Ph/tdpt0Ov3VD6JWk0gkCIVCzMzM0G63abVaUhcVk6BOpyMYDJLJZDg8PESv17O4uCin\nTTEd9no9zGazbBqEHuR2u2m325KmMhgMWCwWzGYzWq1WTvUjIyNymkomk2SzWandejwetFqtfC/i\n53M4HFQqFUmzW61WORlbLBacTiejo6NyYllbW5M3lMvlwmazSa3bbrdLPU7QXeJ1xXQdCASYmJiQ\n9NCv63kWi4XBYEA6nabT6eByuTAajXKaViqV0gQlfq5SqUQ6nSYej0spYXR0FLPZzOHhIbu7u4yO\njkpWQ0gUTqeTTqdDt9uVk6CgEScnJ1EoFKhUKs7Ozuj3+/j9flqtFkqlErvdLil6p9Mp6Wu9Xi9N\nJ4JJER28w+GQD7WgyEQhtdvt0sim0WjkgZ9MJolEItJQJyQBIYtotVqcTid2u52xsTGy2azUv81m\ns5xEer0elUoFl8vF9PS0fP25uTkGgwFHR0ccHR1RLBYxmUwYDAZ0Oh0Oh4NWqyVNSeK+Et4RIfso\nlUopExgMBhQKBc1mE5VKRTAYRKlU0u/3UalUmEwmRkZGpE/BarUyMTHBxMSE1PWFgc3lchEMBnE6\nnSQSCeLxOM1mE51OJ1modrtNs9mUhbVarZJKpeT36ff7PH/+nFKpJK+FXq8nEonQ6/UYHx+XE4b4\nIxikYrEoJabx8XF5cKlUKlnkj46OOD09lffF5OSk/LlEYy5YCr1eL+/DX5+gPR4PHo8Hv98v6Wfx\nHInmXDAWkUiE7e1t3njjDQApEZpMJgqFAkqlkkAgICfC0dFRWq0W+/v78nnP5/MYjUamp6fxeDyk\nUin5nAnmRrCM4tr++vNhsVjQaDQMBgMpKeRyOcrlMplMBoVCgdFolDJTOp0ml8tJtkHos4JZVCgU\ndLtdKVWVy2Xy+TylUkmeg8IzAcjmVJwp8/Pz0owrpIJXr15JiUjcu+Ic0mq1kgERLJ7wqAgTqpBh\nO52ObGJEsRP3pNFolE1rKBTCbrdjMBjY3t5mf3+fUqkkPVCBQECeD0Ka02q18j4fHx+XkpvBYMBg\nMNDpdIjFYlgsFrxeL91uF6VSSTgcpt/vE4/HOT09pd1u4/F45NcB5PN58vm8lP22t7cxGo0sLi5K\nf4fdbpf3ydjYGGazWZ79YvI2GAyMjIxISUbcz8IzZTKZCAQCpNNpzs7OUCqV+P1+DAYD8JXJVww1\nvy5n/0v4nRV5t9stnbu9Xo90Os2TJ08wm81S8xQGIpvNhl6vJ5fLUavV0Gg0HB4e8td//df8+Mc/\n5q233pIXvFKp8ODBA5rNJisrK9JMVSqV0Ov1LCws4HA4ACRtLZzNoisTBrfR0VF8Ph+np6ccHh7K\nA8Dn88mHtlqt4nK5ePPNN+n1ekQiEf7+7/+eVqvFixcvWFxclBdW6IjpdJrNzU1+9atfsbi4yLe/\n/W3Z5ZpMJnQ6HSqVSjo+lUqlNJg8fvyYWq3G/Pw8b7/9NouLi0SjUVKplHTUbmxs4PP5JJ07Pz/P\n5cuX5eHm8/nodrty0rPZbMzOzqJQKIjH4ySTSYrFI
gaDgTfeeIPZ2VlpPlIqlczOzrKwsMDrr7+O\nw+GQjnyhVZZKJbrdLouLi8zNzdFsNolEInzxxRdMT0+zsrIi6SzhSBZGEjHBmUwmxsfHpTNYNF5X\nr17F5XJRqVQ4PDxEq9Vy+fJlZmZmZOFXq9VUq1UmJiakRri3t8fjx49577335DTqdruZnp7m5OQE\nnU4npwqv18ulS5ckDTw5OSmTFufn54RCIW7cuMH8/Dy7u7tygt3Y2ODZs2dEIhFcLhc3btyg2+2i\nUChwu91y2p2ZmSEUCnHx4kWcTifZbFbSg+IzyOVyskkV1PTS0hL7+/tks1lCoZAsEL8+6Qv9PRqN\nUq1W5Xv6/ve/z8TEBEqlknv37tFut7l16xbXr19nbGyMer3Ozs4Op6en3L59G71ez0cffUQul2N7\ne5uxsTGq1SoffPABHo+HP/zDP8RqtdJqteT7vXLlCmtraxSLRWZnZwEoFovEYjHUajXvvfceLpcL\ngLOzMwwGAysrK5ImffbsmdRDhSEsk8mQSCSIRCKS6r9+/TpTU1Nks1mA3/Ar6HQ6QqEQPp+Ply9f\nEo/HmZiYwO12o1arefLkCfl8nnA4TCqVklS+YK3EVCgGiG984xs8ffqUzz77jH/zb/4NCoWC//bf\n/htf+9rtUgAKAAAgAElEQVTXeO+99/inf/onisUii4uLjI6OMjk5ye7uLqVSCY1Gg8vl+g3PwA9/\n+EMpbczPz9Pv91lbW5Oy3NHREcfHx/T7fVZXV6UpzmAwSDf90tIS29vbdDod3nnnHbrdLtlsltXV\nVdlQX79+nUuXLvFP//RPnJ+fU6/XpYQkvDZHR0eoVCpmZ2cZHx/H6/Xy7rvv0ul0aLfbKBQKDg8P\nyWQyvP7667z77rvYbDZ5phaLRRqNhrx+gt0TPhqv18vNmzdRqVTk83mePXuGVqvl1q1bnJ+fUywW\n2draAsDhcFCv12WTY7fb8Xq9Mgmj1WqZnp7m3Xffxev1olarOT4+JhKJUCgUpETo8Xi4ePGiTLAI\nyeXk5ESeH8FgkEePHmE0Gvm93/s9nj59yqtXr2RdGh0dZXZ2lsXFRRqNhkyTCA/FvXv3mJiY4Ic/\n/CH37t0jGo1KX1kmk+GHP/yhZGPW19cplUoolUp6vR5er5eJiQksFgvb29ucn59jt9uleW9iYoLb\nt2/zwQcfEA6HmZqakoY+h8PB9vY2d+/e5Y/+6I+4efPmb1Vrf2dF/qOPPmJkZITp6WnpFBWGoHK5\nzNraGvv7+2g0GiYmJrhw4QIHBwe0Wi2WlpYYDAaSRhQPhihYonP76U9/Kmn4fD5Pt9uV0ZfT01OS\nyaSc5IvFIvF4nP39fXq9HteuXaNQKPDll1/SarVwOByUy2VqtZrU7dPpNHt7e3L69ng8MtITDAYJ\nhUJS4xMTwNbWFsViUU5IU1NT6PV6/vEf/5Fms8k3vvENqtUqkUhE0mYrKytSKvB6vTLi9vjxYz76\n6CMcDoekVjUaDYDUYx0OBzqdThqZSqWS1Jj1ej0+n0/SzBcuXGBubo4XL17IRufo6IjPPvuMRCLB\nxMQE7733HrVajXg8Ti6XIx6P8+GHH0oXtsPhkJ6Ck5MT0uk0i4uLWCwWtFot5+fnVKtVGX9KJpPc\nvn2blZUVGc9RKpVSa+92uxgMBq5evUqv1+Pzzz+XjY9er8fv99Pr9fjFL35BNpul1+vh8XjktLC+\nvs7nn3/OYDAgHA6ztrZGPp/n5s2bHB0dsbm5SaPRQKvVcnx8LDX34+Nj1Go14+PjMrIlqPKzszM+\n/vhj1tbWpFaqVCo5PT3l+fPnknIU3otSqcTDhw/R6XTSxCkaklQqxfj4OC9evODJkycYDAbJVmxs\nbFAsFqU5U6lU8vDhQ46OjhgZGcHlcmG1WimVSigUChYWFqQsINIAL168kDGyWq3G+fm5fFbE9Wg0\nGrJ56Ha7PHnyhGazKZmSSqXC48ePsVqt8iByuVx8/PHHxGIxfD4fxWJROvOFJl4oFDg8PGRsbAyf\nz8f+/r40MpVKJdlcC0OrkD2WlpbQarV8/vnnRCIRms0m4XCYbrfL1taWvK86nQ42mw2Xy8Xp6al0\nTxuNRgaDAXt7e9KF7/f78fv9Mto5NjbGYDCgXC7z+PFjzs7OZDPl8XjkZFur1UgkEmxvb/Ozn/0M\nu90u2ZHDw0NisRixWIz9/X3m5+eZmZmRn6PVamVvb49EIiFZIovFQiKR4ODgQA4IgtIVk7vH46FS\nqdDr9cjlclLv7ff7shgLT8j6+jrBYBCbzcbLly+xWq3cvn1bshZarZZOp8PZ2RlWq1Wago+Pj5ma\nmsJgMKBSqdjb25OJgWg0Kn00qVQKg8FAJBLhn//5nyV9fP/+fdxuNyMjI0SjUTQaDRaLhfPzc7LZ\nrJSAfvazn0lT8draGh6Ph/fee4+7d+8SiUS4ffs2IyMjxGIxstkshUKBzc1NOp0OV65coVAoSM9P\nu93mo48+QqPRyMhmo9FAoVDIyVg434XpVXgCdDodXq+Xo6Mj9vf30Wq1lMtlfvnLX5LP56WO7vf7\npUzxySefoNVq5RmsUqloNpt8//vfR6lU8vz5cx49esTJyQmAHESF9yCTybC3t0ckEsHr9WKz2Zic\nnGRra4tPPvmE+fl5pqam8Hq9nJ2d8cEHH0gfjGB5xPD76tUr2UT96Ec/ot1uc/fu3d+q1v5OjXfN\nZlO6qoXL12azySIojHWxWIxOp8Pe3h5KpZKJiQlJLel0OpLJJE+ePMFoNDI7OyujJ7/61a8oFAoy\naicukqCWk8kkvV4Pv99PJpPh4OCAfD4vJ4ZSqUQ0GmVubk66q6vVKtFolE6nQyqVYn9/X1K2wiXf\n7XbR6/VYrVZ50Gs0GnZ3d/nkk0/kAysornQ6zdbWFpVKhbm5OXmjC/ZAUHUiViNofVFUlpaWpENa\nGJG63a58yOv1utTPEokE/X5fRsfERGuz2RgdHZXO3EajwdTUFI8ePeLJkydYLBaZ+dzb22NjY0Nm\nsu/du4dCoeDSpUvycDo/P2djY4OzszPGxsawWq2MjIxwdHQk9dpqtcrx8TGjo6NMTEzIB6Pb7Uqa\ns9FoYDKZCIfDRKNR9vf3SSQSWK1WlpeXUavVDAYDHj9+zMnJCVarldXVVUKhEIlEgt3dXT7//HPp\nZXjx4gUKhYK5uTkikQh7e3s4nU7a7TaFQgGFQkG5XGZvb086ldPpNIVCgYWFBWmG2dvb4/j4WJpA\nheZbqVTodDqyuAvKe3d3VzrLW60WhUKBjY0N2u02Pp+P4+NjDg8P8Xq9jI+PY7Va2d/f5+TkhMuX\nL2O32yWlKA4OUeDq9Tp6vf43aMBUKkU6nSafz2MymXA4HMRiMfb29kgmk9LRL9ix/f192u02drud\nk5MTisWiLGZit4LT6ZQsRKfT4dmzZxwcHPAHf/AH1Ot1IpEIJpNJFnFxfywsLGCz2djZ2ZHO8Waz\nKen8XC5H
Op2m1WpJmr5er3NwcMCrV69QqVRcvHhR6raRSIRUKiWNTdVqlUwmg1arlTq3iE+Ke6rZ\nbGIymSRtLdIUTqdTRuiy2Syjo6MsLCxIR7egaiuVCo8ePcLv93Pp0iXMZrM0SAn5o1qtYjQapZ4v\nCs3BwQE2m41wOMyVK1dQKBQUCgVSqZTcXyAKnDB4inhVt9uV061ogG02m9SrM5kMKysrLC0tUS6X\npfYt8ukmkwmbzUYmk5EO9YODA3K5nJSojEaj/DxFGml9fZ2xsTGMRqOMJMfjcSmfHB0doVarf8Nb\nIAa08/NzJicnaTabfPnll1IejcViMnLmdrtxOBxMTU3JHRWC3UulUlLqg69klGAwSK/X48mTJwCS\n/VUqlTKxoFQq2d7eplKpoNfrqVartFotzGYz4+PjXLp0iadPnxKLxXjttdeo1WqysRZ/RkZGWFxc\n5NGjRxwdHWG1WqWhNh6P02q1eOutt8hmszx69Eg+84lE4jcajWq1KgchsTdFDIipVIp//ud/xmw2\nMz09jUajIZ/P8+jRI7kXY2lpCYvFQjwelwNAs9nkrbfe4utf/zr/8A//wMbGxm9Va39ny3B+/vOf\nD0SkIBAIEAwGGR8fp9frkc/n5QO8t7cnJz+j0Yjf72dlZQWtVku1WpXLGp4+fYrdbmdxcVHS6IeH\nhzL/LbT8N954A4PBQL1ep1gsyjyz0MmEU1Voy0L3ExGwtbU1Hj9+zMLCAhcuXGBxcZF6vc75+bn8\n9+J76vV6tra20Ov1/N7v/R7lcpnDw0PcbjeDwYB4PE48Hiefz/Paa68RCoWkechsNvOTn/yERCLB\nnTt38Pl8srERtK4wJ6pUKpxOJ1NTU9JJv7u7y+HhIScnJ9IEJLKk4qETsSoR6REdb6VSkc7fdDpN\nNpvF4XCg1+sBpFa8ubnJwcEBsViMK1eucOnSJfb29ohGo6TTaemcf//995mbmwPgww8/5OOPP2Z0\ndJT5+XlWV1dlRv3p06fs7u4SjUa5du0ai4uL7OzsUKvVpJtWLGMRmpjwLjx8+FCmMi5fvozD4eCn\nP/0pCoWC73znO7LwigapVqvx9OlTTk5OuHjxIpcuXeLKlSsyDlQulyXTITR5t9vN+fk5jx8/lhqq\nONy73a6UVGKxmFzmVKvVaLVa6PV6RkZGZBSx2Wzy13/91zQaDWZmZrBarTJuKTTaZDJJvV7/Dckq\nnU4TjUZZW1uT70fIR8+fP5eFRsQ1fT6flJkePnzIq1evqNfrWCwWgsEgFy5cwO/3y7x+sViU7Ik4\nqM/Ozkgmk5IGLhaLrK+vy6bgxo0bWCwWWq0WR0dHtFotFhcX5TMszHn3799Hq9XKphtgZGSE3d1d\nPvvsMyYmJgiFQtIPo9fr+fzzz6nX67zzzjvU63X29/cBaLVaUs7q9XoyNSCo3MFgIB3cwoDXaDT4\n6KOP6PV6/Lt/9+8YDAYy/WIwGBgMBnKRjqBJI5EI0WhUXlODwSDp3FAoxP3794lGo7KYiudQDAVC\nStBqtXLhVqfTodPpyMRAtVqVbvd//Md/pFqtsrq6yszMDMFgkHa7TTweZ21tjbGxMTkNClZicXGR\nCxcuSPlhcnKSs7MzotEoPp8PvV4vBxTR7CSTSR48eMDCwgJXrlyRzJ9oZra3tyXtLfZaXL9+HYfD\nQbFYZHNzU5qCxbMg0hvNZlO61Y1GI5ubmxwfHzM+Po7ZbKbT6TA6OsrIyAgajYZIJMKDBw9kHRAm\nT7vdLj0ozWaTVCrFycmJbILv3bsnzZDvv/8+ly5d4i//8i+p1WpSKnQ4HGxubgIwNjbG8+fPicVi\nLC8v0+l02N3dpVarYTabeeutt7hw4YLcZ9BoNGS802KxUCqV6HQ6+Hw+eZYLRkzsIBFZfWEQbDQa\nNBoN6bHR6XSyERI7XwQTI5axNZtNLl68KNNZgnn99eHx5cuXnJ6e8l//63/9/+4ynMnJSRkPUalU\nNBoN2Sk2Gg0mJiZwOBzkcjmKxSLRaFSaqwS9KjKJYsGFw+GQ26aEYaVSqbCzs0MgEJBUmWgCBHUq\nDkaTyUQ+n6dcLstC2Ol05N9VKhWFQoGTkxN5aPt8PgaDgcyNi5u8XC5TLpdlVl1E5XK5HIA0k4kp\nRFDPwjwkutdcLsfBwQFKpVI+sIL+DYVCXLlyhVqtJo03YpnDq1evODs7o9VqSSpUaNtCI9Rqtezv\n70tHsHhY3W63bLaCwaCkNkU3KpiB3d1disUik5OTaDQa4vE4nU7nN5qFXq9HvV6XEaVAIMD8/Dyl\nUklGW1KpFNVqVRoQM5mMlCBSqRSDwUBmuLvdroxUCZOTy+ViamqKfr9PIBCQTU46nSYQCDA+Pi6z\nv2LaMZlM0iMxOjoqXbjwlS7ncrnk9jBReHK5HI1GQ+a1y+WyNMaJRRpCJ1coFNLzYLVamZ2dxWq1\nSr1TbMwzGAx4vV5pvBRRq3K5zMTEhCweg8GAer3O+Pi4zCDv7u5ydHQkN3V5PB4UCoVclCMObmHu\n9Hg8TE9PS99ELBaTeyNEFE9Ei+Arc6zI7IvIpcvl4uzsjCdPnvDWW2+xurpKOByWz5QwnzabTSkJ\nmUwmOdkIZkO43xuNhpSmbt68id/vZ2NjA4fDwezsLAaDAY1G8xufsYAoMM1mE6fTKY2WrVaLdrst\n6WSx+U2kWkRj4Pf7GRsbkw19s9mUPojz83O5WEgYcEulkmx+8/m8dJSLayIiZcKoq1arpSdEq9VK\n06bYZPjrG/sE+yb2RohtccK57XK5ZJxS0L9i01swGMTr9dJoNGTMOBaL8fLlS1577TXZqAqXuYhn\n6fV66UMSmftYLIbH4+H111+n1WqhUqm4desWy8vLzM3NyZy8iJ3qdDrUajX5fJ7T01NpCLVYLCiV\nSimhic9Q6PnCUCsKZygUIhwO43K55LIYweC5XC5KpRJWqxWDwYDL5ZIUvTjDBaOVy+VkAkJIX8IQ\nLM6fwWCATqdjMBjIzZJOp1Munvn4448luynOPMEEi6hksViUk75gQ1qtljwDDAaD3GYo5OBCoYDf\n78fhcOD1eikUChwdHRGJRBgZGeHChQvSLJhOpymVSsTjcbxeL8FgUBr9tre3ZaT2t8HvrMgLZ7fQ\nsA4PD2XUSbgnNRoNiUSC4+NjmQ+PxWKyUyoUCmQyGc7Oztje3mYwGPD666/LDnlzc5OdnR3y+bzU\n1Le2tjg8POTFixf88R//MYuLixwdHcms9ubmJul0GrvdLjfUiVWn4r+JblNs8BLucIPBQK/XY2Nj\ng8FggMfj4fbt28zPzxMIBEgmk+zt7bG1tUU6nZZ02eTkJMlkEqfTyeuvvy4fjlAoRDKZZGtrS2qK\nImP8P/7H/+Ctt96ShxYgaX5BXet0Oq5du4Zer+f4+JiHDx9SrVaZn5/n+vXrBAIB/uqv/opms8ns\n7KxsVgR70e12ZeF5+fIl3W6X2dlZ/v7v/54PP/yQer0ut5Dl83lOT
wvJbXd3VwBIPNCpGmccJ8FHtJUx45se4UQigfHxcRlbX7hwQTC7DPKpq6vDN7/5TRQWFsLv\n90u2Pd+DzMxMCSzS6/UYGRnBj370IzQ0NODs2bMSrQxAxE/Ef3Lnn0gksL29jd3dXWi1WkxNTSES\niaCvrw8mkwnl5eU4f/48kpKS8MMf/hAulwuDg4P40pe+hJaWFmxtbclFevbsWWRkZODDDz8UNO/w\n8LCQwoLBIKxWK77whS/ICDqRSMj3vrOzgzNnzsg+/MKFCxLw1NDQgD/+4z8WHDR/BgYoKRQKGY2T\ni0FGeywWg8lkwiuvvAKDwQC73S7hI5yGUTCbnJyMnp4eXLx4EWq1GleuXMGVK1eEkFZZWYmSkhJZ\nL/h8Prz99tu4cOECvvWtb8HtdsuUpq+vDzqdDg8ePIDVapUUteLiYrGSlpSUyPfB4ryurk4mHX6/\nHykpKfj6178uQUDUkfCZ393dxU9+8hPs7+/jtddeE21FMBjE4eGhFCr19fV49dVXcfbsWZkcrq+v\no6enB4WFheju7sb169fx7rvv4qtf/SoqKysFVEZPPlcCPT096O/vl5Cuw8ND3LlzB/Pz83jzzTfR\n2dmJnJwcmTqtrq7CZrOJG6O9vV3seDabDT09PaitrRVPuclk+sw5QldKRUWF8A80Gg3+/M//XBgE\nPLuYWhcMBoWT0dXVJSFnr7/+usDMCDHz+Xzi/mJULgNyqBUhnZPo8lgshtXVVQSDQWi1Wrz66qtQ\nq9U4ODjAvXv3kJOTg6997WvimEhOTobP5xPRI+FCJKYeHR1hYWEBOp1O4oJzcnJkL/+7vj7XqFna\nKA4PD+Hz+TA6OopwOCxiKI75CGSgIGVlZQVmsxlzc3PCc1coFLKHpY2HIzAqbw8PD0V4RXEYRStU\n4xYWFiIUCuHRo0fwer1ikQIAl8slYRL5+fkyjqRymglCkUhEui+O/ZlCxe8lGo3i+PhYLF1bW1tC\np1tZWcHExIRwn0mwOz4+FpX91tYWjo6OJOY2NTVV0p6eFgrW1NRAr9cjHo8LAY57TKVSKVMHBqX4\n/X54vV6kpKTAYDDA5XJhdXUVJSUlUCgUcLvdGB8fx+joKAB8Jt89JSUFNTU1yM/Pl0MgPT0dtbW1\nyMrKknjV6elpzM/PY3Z2FvPz85iZmcHCwgK8Xi9SU1MFFsHPmpGnvJQzMjIQCoUwOzsr3S65/4WF\nhbK3zMjIQCAQwOjoKOx2u4SMOJ1OjI+Py86V+1W+/AsLC3C73YKNJb3r6VS5mpoalJeXy3MxNjYm\nOdFklTscDtESUBRI5waFl7QhkmS2v78vyVpZWVmy/5+bm8P9+/cF18xulmpgCrIODw+h0WhgtVqx\nsrIicZ61tbUiqpqZmYHZbJa0PHL5+ftdW1uDw+EQP35bW5tcmvF4XOxGwWAQdrsd4XBYdve0gaal\npWFjYwNDQ0MIBoNClKMQLScnB2VlZaioqJA8imAwCKfTKWFLnBLQzslDdXd3VyYMZrNZIFpE8fL3\nx70/BaN6vR4LCwswm80Ih8OCq+VKgPoH0haZCeB0OsWBwkkh3RUqlQrRaBQjIyNyyZM9zzPA7XZj\nZ2dHRHecdNAaW1paKsptTh9/85vfIBKJQKPRSOY7V4Ek+y0vL+Pk5ESirZ1OJ3Z2diR5jvx42pS3\ntrawvb0tFsuVlRUkJSXBZDKhtbX1f+S/U6+wv7+PsbEx2Gw2wbceHh5Kke5yueD1ekX3QVHn7u4u\n5ufnhRtxcHAAn88njqqpqSnk5eWhtLRUimaVSiUY3aqqKuEajI2NYWFhAcFgULQnT548QTgcFgEl\ndR88l8PhsMBk1tfXhUS6u7sr5xbzCYghJ6UwKysLiUQCo6OjCIVCMBqNaGtrQ0VFhWhrbDabAIKS\nkpLk3N3b25PseLL9AYjFm9O7xcVF2Gw2IXhub2/L88GQNCKxMzIyZNLJKZpSqcR77733v1d4R1oX\nATM+nw+3bt1Ca2srent7cf/+fczNzck4vKmpCVVVVdjb28Pdu3cxPT0Nj8eD3t5eUcerVCrU1NTg\n8PBQBEDFxcXo7OwUuAopdkwOY6XMTokiDhLG6KstKSmRFLpYLCbCF47T+eIfHh6ivb0diUQCTqdT\nolbZdZhMJtjtdgQCAUm5qq+vx49//GOsra2hs7NTPJhf+cpXUFRUJCpkjlOVSqVcvI2NjcjMzEQ4\nHIbNZkN+fj4KCwvhdruhUqnQ29srhzGFcFarFT09Pejs7MTY2BjC4bBMUex2u6wayMvOyMiATqfD\nyckJpqenMTMzI15/KveJqOUosaqqSmIoo9EofD6fCCT52VGRyoO8vLwcTU1NOHv2LFJSUqRoCQQC\n2NnZQVpamozad3Z2sLCwILa5lZUVpKWlobKyEjMzM3j8+LGEUAwNDclLnJycDL/fLyQw7pop4OHh\nzovj/PnzksxVX18vkwUecLu7u1hbW8P9+/cl03txcREej0cwnrQzhcNhrK+vIxwOIzs7W9wTAGQc\nHwwGEYlEJJ9gcnISp6enYgfq7++H0WiEy+VCPB5HbW0tPB6PQG8ISeHu7ujoCI2NjXjppZfkP3/3\n3Xexs7Mj8cutra2SzU5EM7MYTCYTGhoa0NbWBp/Ph9u3b0OhUKC+vh43btyAxWKRSYder8f09DQ2\nNjbQ09ODcDgsAsqioiLRrxAeRP0I39OpqSnJM3967cZ9cVJSEoqLi2E2m2Gz2bC3tweHwwGXywWt\nVitgEcZ9hsNhmagw0dDpdEqIiVarRXd3N/b29mQds7m5KYJICrYyMzPR0tICu92OpaUlaDQaYXEw\nU4NiO46vk5OT0dfXh/Hxcdy7dw/l5eUwmUyC3yXWlcEu1HXE43E4HA787Gc/E00CO+tgMCgrR1oK\nWShwzx0KhaSAKSsrw9WrV2GxWPCzn/1MLh0WthMTE5ImSPHgxsaGgHlIoKQFMicnR1Za4XBYCqun\nGwOSBplqt7q6ioKCAmg0GszPz2NpaQmVlZWw2+04ODhAU1MTWlpaoFKpsL29Lc8LbW1kbgwODiIc\nDqOurk7EwjabTSJ5n46VZmEWiUQAAFarVTzo8XgcGRkZqKurA/Cp3io9PV1SMJ+2S5POyb+XWiS+\nbw6HQxrQpKQkEXwCEEspdQ9cY/FnWlpawvj4OI6Pj9HY2CiANwY3+Xw+sdfq9XooFAq5N7iu4M/+\nu74+t07+jTfe+AfuwQlU0Wg0qKmpEYUkxTTMOOeetaSkRBSHly9flrjL7OxshEIhseLQ3sIxM9Xg\nGo1G0K1U05KV7fV6EY/H8dxzz8FoNErmezgclrFaVlYWmpubP7MfY3WsUqk+gxytr6+HXq8XASH9\nntzf0QLEQkatVsNgMKCzsxOlpaWCi+S+kMEVJpNJ0rIKCgrEzkbPP9GKNpsNh4eH8vdWVVVBq9Wi\ntbVVfgaj0Yji4mLxFBuNRtTX14t4prKyUmwyVHo3Nzejt7cX9f
X10Ol0MnnR6XRwu9145513BFbB\nDoPAFb1eLxYs/p71ej2Ki4sltpJTHu6RicJ9GjxUUFAgVkfCipRKJRwOB7xer4B/WlpacObMGZSU\nlGB8fFzGnFTZU9F7cHAggCbm0nd2dspYjF1rbm4uNjY2sLq6Krt77or39/flQGxubpYOheE6fDYA\nyLPBUaJKpZLiqbGxEfn5+bIOqaiowPPPP4/m5maUlJQIMau8vFwcE/wzPp9PyGK0KTGEIxKJIBgM\nii87Pz9flPXMumdaYkFBgTzH1L1kZWVJfC5BO2fPnkVubq50IvX19XI5lpWV4ZlnnsGFCxfEaqhQ\nKLC4uChrOY59a2pqoNFoMDU1BaVSiY6ODoEMRaNR8Xk/efIE+/v7eP7558Vt097eLrx66kjKy8uF\nee7z+TA1NSWBJkdHR/Lz0N7H95jkQYVCgebmZglz4blDJ1BxcbEkMbrdbtnpctS7tLQk+16KSFm0\nE2ZDl8rJyQlCoRAePnwoyX0MAqqqqsKZM2cEQ8vJAhP9WOiy2BgdHUVubi7q6+sFJkWhcXNzM2Zm\nZrC6ugqdTgeDwSAi1v39fRFrrq+vy+dXV1eHjo4O9PX1wWAw4OjoCC6XC6mpqbJuo62SWiitVivW\naKbcdXZ2Cmmyq6sLX/nKV9DU1CSEUGZTtLW1obW1VTRLZrMZ1dXVOHfunDxrTGyjloewIbJNyFGh\nHZqMC36OxNUSaEb9FFdnBQUFItZVqVRYXV2VhMDl5WXk5eUJe4KaJjpHaMNkkBWDozQaDdLT0wVL\nnp+fL+/Z9va23Fm08FH7k52dLamEjFhnUNNbb731v7eT5w6WXS6rKfosCVgg9vLk5ETU2GT7Xrx4\nUSJiOZp1Op0SXMDul8S8k5MTFBcX4/DwEBMTE9LBE8FJtT997fx319fXBYTDDpeHOQ8gn88Hp9Mp\nFS7HXGR3Hx0dIR6PY3x8HIFAQEJraDWjoIvMeb/fL3aUp4V47GR5EVEhnpKSAo1GI8lKDKkYHx+H\nRqNBWloaampqJDoyGAzC4XDI+G9rawtzc3NYWlqSi4uwC1oB6U3lv0+mOYl2Ho8Hjx8/FhtXVVWV\nfIa0mzAJj3qF9vZ2+Hw+eDwe6bQnJiaky1OpVLKfC4VC2N/fR3V1tUw0qMbnQUY07tHREVZWVnBw\ncICGhgb5PfNgoHAvKSlJxoMulwtHR0dQq9WS6+1wOIQVQCoZYzjz8/MlJY+XHkesCoVCRoGM2AT+\nn9VUPB5HU1MTotEolpaWJNKS3HCFQoG1tTWJT6b4KikpSSyPx8fH4lKpqKiQETAJchS/8VJaXFzE\nzs6O2PG4uyXAiGNNWtQYQLS1tYXe3l7x7fKwo3CJh6RCoZDChP52pm6FQiHRptAGCHzq7SZYiuP5\nzc1N5ObmCuSHwlwqwrOyslBSUoKamhqx0VFIyHVcSkqKcMopjmM8NZ0KRKn6fD6ZKHBqRvHX00UD\nO/X09HRZ/fEzpz2X71U4HMbMzIwE1jARjijevb29z0wBKVxdXFwUVPTT8acKhULEo0Qps5vj6Jr2\nXv47W1tb2N/fl2eUZES+MxR87u/vY3t7G36/X84peskBiOiVQlYWKRTAsvkijY7FZFpaGgoLC6HR\naOSZ52VFUTGnZPv7+4JqNZlMODo6ws7ODra3twWkxIhXviPMESBylvz5jY0NwdYyE4JCwEAggP39\nfbnkAQi4iE4tir4pNmZUtc/nk4AtXtB8VnZ2dkQ0S47C3t6eCE75vzs+Psbq6irMZrOsv4aHhwWn\nvrGxAQBi/y0oKJDskJSUFCmIOc7/fb4+t0u+tLRUfMGVlZVITk7G+Pg4UlNTceXKFTx58gSTk5My\n1k1NTcXVq1ehUCjwwx/+EC0tLfj2t7+N73//+7h//774kPf395GamiqivZWVFZw5c0YiJf/kT/4E\nFosFP/7xj7G3t4fKykr82Z/9GXJzc6VzoR+eFxEfQu7B9/b2RJ1eVlYmyWQrKysSMFBSUgKTySQa\ngUuXLuH999/Hv/3bvwn/nYrRg4MDjI2NIS0tDa+88gomJyfxzjvvCOe7ra1NHkwKo+hRnZ+fl06E\no9q8vDzodDqxNjEyVK1WIxKJYHBwUBS8Ho9HkuP4Qq2urqK1tRVdXV2orq4WVCuzvnlQLS0tyeiK\n3eDbb78tQrOVlRUsLS2JiO327dt4+eWXceXKFQAQdfDh4SF2dnZQWlqK1dVV/Mu//AvKysrQ2NiI\nkpISABBh2c7ODvr7+5GXl4elpSU8fPhQOr+Kigq0tLRgY2NDRoyk0nGX/Nprr8Hv9+N73/seenp6\ncPbsWaECLi0tiUWyvb1dOk5iPQlL2tvbwze/+U00NzcLr3t/f18UxO3t7QgEAvjlL38p0yMeFGtr\na2K9aWpqwsbGBv77v/8bGRkZonre39+Hw+HAjRs34HA48Hd/93fY2NjA9773PfzRH/0RjEYj3nnn\nHWxsbCA9PR3f/e53cfXqVfj9fiiVSvT29uLx48cYHx/HysoKampqcO3aNXzyySdYWVnBl770JVy4\ncAEXL14URPHMzAwmJydlPcCu0Gw24/HjxygsLJTdNfDpbnppaQlWqxUqlQr19fViN6K3OBwOy+9h\nY2MD3d3dkgvw5ptvQqfT4Uc/+hF0Oh0uX74sz8v29jYyMjJw9+5dOaQrKiqEFmk0GiUFrLS0FNeu\nXZO1icViQXJyMtRqtThgCBfq7OyUlRoRtJmZmZicnITL5ZL3fm9vT9ZI8/PzWFtbw8LCgmgUvvjF\nL8LhcGB0dFQ0LXa7HXV1dWItXV5eRiwWE6FUcnIy2tracO7cOSmolpeXkZ6ejp6eHiwvL2N8fFyK\n4lu3buFLX/oSzp8/j5/+9KfIyMjAd7/7XeTk5ECv12NqagppaWno7u6G3W7H/fv3sbGxIaz0eDyO\n0dFRbG9vi4WQAjPutO/duydW3/39fZjNZty9excvvvgirly5IqRNPt+xWEx49pFIBH6/H5ubmzhz\n5oyM57mP5gVKr3p9fT3ef/99HB8fo7e3F2NjY/jJT34i8JimpiYpNrhaZbpmSkqK6LWSkpLQ0tKC\n0tJSLC0tIT8/H1euXEFmZibsdjuWl5cxNTWF4eFhvPrqq2hvb8fR0RGsVqvkJ3i9XlRXVyM5ORlm\ns1katYWFBRwfH6OwsBDPPvssmpubsbu7i6ysLHR1dWFqagqTk5Mwm82IRCJ49OiRND0qlQo2mw2P\nHj1CamoqOjs7cXx8LM9JR0eHsCWWlpYwOjoq0KbNzU2UlJRIQmUoFBKnAz+XhYUFlJSUSODN1tYW\n1tfXf6+79nO75Dk2a2pqEkJTf38/VCoV1tfXUVBQgO7ubmxvb8Nut2NlZQWLi4vQarW4cOGC7Pf6\n+/tFucgHjFV6Tk4OdnZ2cPv2bZw5cwZtbW1iiyosLJR9WCKRkMPk6Uo+FAqJFzg5OVnGzAyuYVXF\nao95wScnJ1CpVELQy
szMRGZmpuSrczQ4Pj4uI/G6ujqxS5SXl+O1116TCt5gMAiIhkznvb090RVk\nZ2eLiJCd3snJifj4nU6njLDolWWoAznOw8PDaGpqEoALWfLp6elITk6WUB12xVSchkIhSXhj2tb2\n9rZ8D6mpqdKx0o+fnJwsI3nu+2lXpCaCKVgUhNGeQ3Y0NQjRaFSgG+Rba7VaoepxXMZCpqqqCqWl\npbh69SpCoRCePHmClpYWKZ6eDgs5ODiA0+nE/Py8WBa5QqqtrZUDgQUPfdy8gHJycsSrTHsh//vD\nw0MMDAwA+NRpQuVsZWWljC5ramrQ1taGkpISKJVKfP3rX0d7ezvy8vLQ09ODuro6KJVKybDPyclB\nOBwWvkFzczP29vbEycCRfWFhISorK+XPsKvRaDQ4d+4c0tLSYLVa4XA4oFQq8dWvfhXxeBwzMzOo\nr68XdjknaiUlJVKgUIjKHWlRURHm5+cF9JKRkYGOjg4YjUYRNrKoKSkpgUajkYlRY2Mjpqen4fP5\noNPphB/BsScLimAwiEQigdzcXHR2dorojBcbrU/p6enireaoG4BM0Cg2o4BLpVLJQazRaGTczlUb\nBaAulwvBYFBWSyxO+cwcHBzIyJi2VBZST1vgsrOzYTAYZBdMJntPT48ISzMyMlBYWAij0YikpCQ0\nNTVJfnlvby86Ojqg1+uF/Hh8fAyVSoWysjLE43EcHh6KAHN+fl6S/+huys/Pl0vUaDQK34IJe6Rd\nUjSakpIiwKqKigpMTU1hdXUVzc3Ngv9mjDenHUTspqWlyTpWo9EIY7+6uloS9nZ3d3F6eipx0BkZ\nGWJ3I+iLaF1qSjQaDd544w25W46OjhCNRiXBjqPwp1NIn478ViqVIvpTq9Vypup0OoG3Wa1WSf1s\nbm4WmFJjY6NMMpuamsQ9srS0hMHBQRQXF4vVrqysDAaDQRDCwWAQfr8fu7u7cLlcUlRXVVXJ9JET\nGU4wfp+vz+2SJyKSuNfU1FScP38eXq8Xs7OzMJlMYoMaH+HoPLMAACAASURBVB8XS040GsUf/uEf\norq6GgqFAi+88AJ6enpgs9lEHUuSUHl5OW7fvo2PP/4Y3/jGN/AHf/AHMhbu7OyU8Rsz1anKZ1gF\nAHmwlUqlKN0pKCPJjKNG8tjpUSYwhUEk3Dnm5+cLf1yv16O3txcmk0mIY5WVlWhraxNluVqtFi8y\nyXqBQABarRb19fUSfvP0ioIHd2NjI46Pj2E2m+Xv0uv1MBqNoqidmprC0tISurq68OUvf1kY1XQV\nsCI9OTlBe3u7jI4DgQAAoLe3F+Xl5bJnJywjLS0NarUaDQ0NiEQiiMfj8nJxF0bsLjG4XAOQ101B\nk1KpFIBPbm4uPB4P7Ha72Ov4n9NLTOvU6ekpLly4IIx2hky88soruH79uoBDng534QXvcrkkwIP2\nqt7eXrz55puinOcKye12o6mpSUI9jo+PBZvLA1+hUMglvrm5iYmJCej1euGOE6Kyv7+P0dFRXLt2\nDT09PYjFYnKw039/7do1AJCdYDgchlKpFEFiW1ubKNH5bOTm5op6l+P8p+1XpJpZrVYsLi5iamoK\nV65cwXe+8x289957WFxclHwCs9kMAPJOEC3KkTBH7OQQ6HQ60YBwKhIOh3Ht2jWYzWaMj4/jwoUL\nsjpTq9UwGo2YmZmR/IeTkxPRxnDP7nQ64ff7BU2q0+lwenqKaDQqGGSdToeMjAzE43HxSZPXz9Eq\nwVBUofPP5uTkSLpcWVkZkpOTsb6+LloVRlJzRRCPxyVxcGdnB1arFW63G88++yxaWlrkrGCBxd8/\ntTVNTU1QKpWorKyUpEHmRVA3RJ58VlYWamtr4XQ6UVVVhRdeeAEmk0mspsz0UCgUOHPmDLxeLzY3\nNyUUhwAmFn5VVVV4/vnnRZtB3z2BZRzlU5wWCATkYgUghNLl5WXB2ZaWloq9rrCwUGiHer0era2t\n6OvrkxhsXqQNDQ2iU2BSZF1dnajog8Gg8CEYRkXNCHkBL774oqwHaLdmkUE3FlewarUaZWVlYu8j\nsY/W4IODA4myJZ56ZGQEKysraGhoQG9vrwSLlZaW4vvf/z7m5+fxta99DY2NjUgkEvj7v/97zMzM\nIC8vD5mZmVCr1aitrUVPTw9MJhNWVlbwq1/9SnRH5FYoFAr5vMxmM1JSUlBUVCSTkt/n63MT3r32\n2mv/sLm5Kdi+RCKB69ev486dO5iZmYHNZpPAA41Gg5KSElitVszPz8sFXFpaKntLVtEnJyfY29tD\nNBqVQ7uqqkp4yL/5zW8QjUbR1tYmqNa1tTUhqlFhzb1McnIyNjY2xCbBETkDaCoqKjA9PY133nlH\nUscePnwoHtund78MdmAn8vSOenR0FCsrKwKeWFhYkDCZvb095OXlob6+XjC8ZrMZ8/PzGBkZwf37\n93Hr1i3cvHkTp6enqKmpwdLSElwul1ykfGjX19cxMDAAt9uNUCgkKWPcNT1+/Fg6xLt378q4LhwO\ny9+VlJSEzMxMbG1tCZvA5/PhyZMnePLkCWZnZ2G1WrG+vg6XyyXe49LSUhwfH8Nut2N4eBgTExNY\nXl7G8vIynE6ndFoDAwOiQ2hsbIRSqcTk5KSwrdkZWK1WNDc34+zZs0hPT4fX68XExIQUfOS2p6en\nY3R0FKOjo2KxIyazp6dH9om8ELlDZy5AMBhELBbDs88+K7GajOIdHx+Xg5MXWHZ2NjY3NzE4OCja\nDK462traYLPZsLGxgZaWFmRmZmJ2dlZU0Zxg6PV6VFVVCeRpaWkJd+7cEcvU02ChDz/8EDdv3hTx\n2+TkpIhNd3d3sbKygg8++ADNzc147rnnZHzJz2ppaUn8xlx7EetaV1eHlpYWBINBmUYxG4Lvj8vl\nkgP/ww8/xK1bt+Tnpk6G1kun04lYLIYHDx7gyZMnshKrqKj4H4hidvelpaUyWQiFQiJ+m5ubkxUa\nBVK0BkYiEdhsNlitVrGeUVNDDzfwKUCJSur8/HxhYiiVSkQiEczMzGBubg5ms1mAORzZms1m4Y5X\nVlYiGAzKQR4KhXDz5k1Eo1HpTPlcETRlsVhkTUA4EJ0Ara2tItK7d+8ePv74Y9y7d084GQaDQayI\nfC9IKXS5XHjw4AHu3r0r7qS0tDQpvnm+EU27vLwsnTC1Aaurq7BarSJM3djYgN1ul6kI15rEN6el\npUmefEFBAba2trCzsyN6K2LGmepHRrtWqxUXCe17RJU7HA7JWl9dXYXD4ZBOvbi4GOvr60LEA4CC\nggJh4TMk7PDwUFwInNwtLCzIRa9UKqHT6ZCTk4Pd3V2Mjo7iZz/7GRYWFuTfp+6CEzibzSYOiubm\nZpSVlQGAvHvMfK+oqADwaaM4OTmJg4MDKfZMJhPOnz+PgoIC4SXwd5OZmQmPxwOz2Szrs3g8DovF\nApVKhXPnzsFqtWJubg7Dw8P/e4V3i4uLMkZhmMPKygpcLhfS09OxuLiI4+NjXL
hwQcg+JEdxd8OE\nOP4f2e2sxCimI8ueH1ZBQQEMv037YbYwg2IASKRofn6+dPvb29vC7yY1jmMjZqiTwsfKl0jbWCwm\nYJGTkxOhxdHDTvgIx3ihUEjgEAzKUKvVqKio+EyqncPhwPT0tIiaCgoKpIOh6tXxW2SlyWQSIZXL\n5ZKXzu/3Iy0tDR0dHZifn4fX6xU1cG5urhRMVVVVIm7i6I2UPe7LvV4vtre3cXx8jKKiIumW2IVV\nVlZiYmICFosFwKdc7Pn5eRgMBnEdKJVKtLa2imiFRD6O/Wg/y8vLk7Cb8vJybG9vY2NjAysrKzJi\nbWtrk9E+YSkAJEnr2rVr6O3thc1mg91uh8PhkESxra0tETCSv82wJL50JHZxFE29BgU67NJIYGP0\nJXeU7e3tQpbjekitVsvBnJmZKZfVkydP8OjRIxwcHCAWi8nB4nK5sLW1JcIyioUodqNf+cGDB7h6\n9Sra2tpkPxmPx7G5uSl54yQu0lPOUbfX64VGoxGICeEw+fn5CAQCWF1dRSKRENvf5uYmamtrZSzJ\nlDqifnkRcwyq1WpljMlo0qOjI+zu7koIjtfrlRVZLBYTmxxXLBTXBoNBEWGRNrixsSH/DjMo+P/J\nBuBzSkpgIBAQZDLDXsxms/AmCD45ODgQBCtxulxJZWZmino+IyNDxFIlJSVCKQwGg5ifn5esjJyc\nHOTl5UGv1yMajcLtdgvb/vDwUCAwvb29SElJEZtxS0sLFhcXsbGxISsCAPJ+8vnyeDzSvTJdzu12\nixtnc3MTwKd6GeDTiavb7RY/P1dRnGocHBzIGHxyclIIcOxEmdDIoCO32y2MeObbU1nOgK3U1FRB\nmCsUClnhhsNhaLVaOWcXFhZE40Jle11dHWKxmPxeGCKkUCjEopaVlSW/U9okqSnY2NjAJ598gpaW\nFqSnp0vxx5UyyXTHx8eyOqPGgSvRpqYmlJeXC66dK+SWlhYYjUaZGnEiS5syBaUMmWKgEdMXGS3N\nyd329vbvddd+bpf80NAQiouLUVdXJwc6w1pKS0sxNTUlKk+bzYZ3330Xvb29uHTpkijy2Umenp7i\n1q1b0Ov16O/vlwuX+2sqylNTU/Hcc89JohPHYRsbGygvL0d/f79Qw+gJZRJaXl4eKisrodPpoFQq\nxRKRmZmJrq4ulJSUyFieuerchbHrUCqV6O/vl7Fya2urABk6OzvlMoxGo3LBlJaWyqUdj8dRVFQk\nYjzg03jYgoIC1NbW4vXXXxc1ekdHBxwOB9566y2cPXsWfX19MobUarXo6enBK6+8gsXFRSQSCbzw\nwgu4fPkyIpGIHPbf/va38fjxY8zMzMgoemtrC4ODg5ibm8O5c+dQV1eHnJwcaLVaVFVVyQHS2dmJ\nUCgkEYsUB1KJ/+abb+Lg4AA/+MEPcOnSJbzyyiuiTK+oqJBJDg/JV155RXCkrISp3I5GoyKOi0Qi\nMBqNaG5ultFxSkqKJI+1t7djdXVVdlsc+VksFnzwwQeizLdarUhPT0ddXR0KCwuh1+uxs7MDADAa\njVhcXMTy8jKysrKg1+tRXl4Oi8WChYUFzMzMSMXNwmBvbw9ms1noi6+99poo8Ds7O0XAxUKEueQe\njwcDAwMYGxsT9ncgEMDFixexu7uLkZER9Pb24tVXX4VGo0FKSgqMRiM2NjZkdZWWliYXYyKRECLg\n0dERbt++Da/Xi66uLulmgE/XALzELBaLoHJHRkZgt9vh9XrR2NgInU6H1dVV5OXlSdHi8XhkRbO+\nvo6joyMUFhbiy1/+sgSPtLa2ora2FmVlZTJ1sFqt8Hg8YqM1GAwAPlU/X758WXbpRH4ajUYolUoo\nlUo8evQI29vbMBqNUiB0dnYCAG7evImDgwPpFBlMQsslO3ir1Yrc3FwUFxdjZGQEbrcbra2taG9v\nR3V1NX7wgx9gYGAAKSkp+OIXv4jXX38dc3NzMq4m9ImTsLq6OmxsbGB7e1twrcvLyzg6OkJ+fr5w\nzP/xH/8R7e3tuHz5sgQPnZ6ewuPxwO1248KFC3j++ecF7LS0tCSuov/4j//ApUuX8Oqrr4rrZHFx\nEQ0NDaJRSUpKkhju/f195OTkiI2zvLxcdDFbW1uyvmpvb8fp6SksFgs++ugj9Pf344UXXsDHH3+M\nnZ0dNDc3S8gPs+qXlpagVqtFc0FIE22nRqNRXB7V1dWorq7G3t4ePB6PWCQ7OjoE60ph3/r6OgKB\nAHJzc2WtyWlgcXEx/vqv/1pY7oQJMWiJ68KCggIAkECn2dlZOZfIvtBoNLDb7UhKSkJDQwMuXrwo\nOPXCwkLJHqAYt7e3V1gNq6urklapUqkQi8XES2+326HT6dDS0iIplunp6XA6nTg6OsJrr72Gzc1N\nTE1NwWAwyMqSehuz2YxYLIZr164hNTUVKysrwkz5fb4+t0ueyVakHpEORTAF7WkWiwWRSAT5+flw\nOBySC0+W+8zMjEAqXC4X9vf3JX7xxo0byMjIgFarxfLyMkKhEDQaDerr68XmxP3V8vKyjGBKS0ul\nag0Gg0hNTZUwgcPDQwFHMGGOBylFQOySAEguMnnjGRkZmJycFAoZX/61tTVsb28jFouJJYrCNgpg\nSPNKT09HdXU1Dg8PJTZXoVAgJSVFMsJ3dnakSyeUghOBy5cvQ6/XS/fNqpoKX+6Z/X4/VldXRfC3\nsbGBxcVFpKamoqWlBS6XC06nU6I/GxoahHbG8fny8rJU3z6fTyyLzH1mZxIMBrG4uCiceaqf2THn\n5+djcXERc3NzcLvdQg3zeDwIh8Po7u4WoSEvVnqoGR9McVMkEoFWq4XVasXh4aH8Z7m5udja2sLI\nyAg2Nzdlf0YXwNDQkHTR/NwLCwvFBrOysiLhLSaTSeh/VO7a7XakpqYiGAxKtC195xwtU8BVWloq\nFyB38UqlEkajEYWFhRgaGkJ2djbOnz+P/f19TE5O4ty5c0hKSpLwGCby0bu9sLCAjz76CH19fRJe\nxL3s6uqqCBiXlpbg9Xol5ZEdE1XYtI4tLy9LilhycjI8Ho84PywWi1h+KLZSqVTyztMHbbfbRQhI\nkiPXGJFIBBMTE2KzopiVzPhEIiFiWwom+ZwfHx/j3r17sp9/epTK9QwASaej9Y3cgNraWgke4fPU\n1dUlYUAajUaCRWKxGG7duiWxyBT00a2TnZ2N+fl52Gw2LC8vSzF3dHQEh8Mh8dSPHz9GRUWFpLCR\n8zEwMIDMzEzU1dWJJodTnUAgAIvFIjx8nlmcfnKFODY2hkgkIr74WCwmeezRaBRra2uwWq0YHh7+\nTBjRyMgIpqamcObMGXnP+f5wUrW1tYVQKCRgIAoByXfnmrK2tlaY+BkZGVCr1WLzS0pKEtue0+mE\nx+PB4uIiIpGIaI2YxMnPc39/H9nZ2ZLl7nA4ZEIci8VQVFSE+vp6KXSoF6B1mOcen2d29x0dHcjO\nzhZBZygUgt/vR2dnpxTSfr8fT548E
YIoEwYpsiwoKEAkEpHpFO8KTnx4jlFISsug2+2GRqORIk2n\n00n63cLCgqwKuS75fb4+t0u+t7cX4+PjGBwcRFdXF0pLSyVvmJd1ZmYmlpaWBGn78ccfw2w2Iz09\nHZ2dnSguLsaDBw8wPz8vI3W73Q6DwQCVSoUbN27g9PRUABDBYFCsCU1NTYhEIojFYkhPT8fS0hJm\nZ2dhMBjQ29uL3t5exGIx2O12JCcnIycnRx4uVv3s+nU6HYqLi8Urzm6Bo36qU4+OjpCdnY379+9j\nf38fzzzzDJRKpRx8ZMZ/4QtfQG9vL37xi19ITjgT4vjz9/f3C16XFsOdnR0cHh7C4/EIm1mn08kL\nTVb15cuXZc/F8ebe3h4GBwfx1ltvCaBieXlZxrJjY2MS2frVr34VV69exT//8z9jdHQUR0dH+M53\nvoPLly8jHA7LyHZ2dhYzMzN49tlnAXyaPJiVlQWDwYCpqSk4nU4R8a2trWFyclKEUlRpr62tyV7Q\n7XZjaGhIOoa8vDwB1DA97Ny5cygsLIRarRbk5vXr1+H3+6FSqVBYWCgTkoWFBQwMDKCzsxOpqalo\nbW3F5uYmpqenxXvLroC0P6fTKapwsr09Hg9GRkbgcrlkXZGdnY3i4mIkEgnpkJgsFggEMDQ0JKS2\nrq4uLCwsYHJyEna7XcRCly5dwqVLl9DX1wej0Yjs7GwRjP7qV7+CyWTC3/7t3+Kf/umfMDg4KIjY\nkZERgbow5a+1tVUEleXl5QgGg/joo4/Q2dmJwsJCTE9PC7XO4XBgamoKdrtdLnO73Y6Kigo0NDSI\nXZIWvurqakl4Y1c+NTUlmgce3lwPra6uSsDMwsKC0Nm4iiCRbXNzEw8fPsTQ0BCAT7vOxsZGdHZ2\nIiMjQ4rqg4MDlJSUICUlBQ8fPsTx8TGUSiXMZrMIL4kHbm9vF58xw1+4j/b7/fLvcyfO/XMikcCl\nS5fQ29uLnJwcDA4O4v79+7ILHxsbQ11dHXp7e7GzsyNKcYKDfv3rX8v3xqARWluNRiPcbjfu3bsn\n7HOLxYLnnnsO/f39uH37Nk5OTvCVr3wFwKcriuHhYezv70shdvfuXSEx0svPcJfl5WV89NFHArBi\n1O3MzIxMRciQJ+0wFovhk08+EdJcKBTCyckJTCaTMAQikYgw6A8PD1FXVycFE4N59Ho9Hjx4gAcP\nHqC7uxu5ubk4OTkBAFlRckVCfLHVasX09DSGh4elw2aGAAO9dnZ2oNPpEI1G8Ytf/EKQtXS81NfX\no7q6Gs3NzeJj5xTJYrGgpaVFSJQUV/M87+vrEwYFkcAWiwV6vV7cFdRmEYFMO7LVahXFPou9lJQU\nbG1tSf7A8vIyfvzjH0vBTFAbnUklJSXo6+tDbW2tNJiRSEQKpb6+PnFF/D5fv18p8P/D161bt05Z\n6WxtbYkVihVNYWEhioqKUFZWJntgohzb2trkgVCr1VCpVEhPT5ddYVNTE7KysrCwsCBABJvNJocB\nOzS32y3EKP67tNjxgZqcnBRyGf8MPbEAJPYyLy8PExMToo43GAyoqKhAcXGxoB+546utrUVeXp5g\nZom4JV/fYDCgtLQU/+f//B/EYjH85V/+JUKhEBYWFiSARKvVAoBUn7wcI5EItra28PHHH+P4+BhX\nr14V7j1DHA4ODkR0QlFgVlYW7HY7FhcXsbW1hZSUFLHaZWRkwOl0Cvf52WefxcWLFwWuQiY3Gcx2\nux03b94UnQA59EdHR5icnBTvPKFBBoMBJSUlckFSeaxQKER1q9Fo8Jvf/AYPHjyAUqlEbW0tLly4\nIGP9Z599VsIcOOpKSkrC3NwcfvnLX0roB62GGRkZmJubw/r6Ogy/zXcvLi7GwMAAFhcX0dbWhtra\nWuj1epm2MNErHA6joaFB1OvMAufPRrtla2srlEqliMh4EDU2Nkq4Du2TwWAQHo8HS0tLgjHt6+vD\nhQsX0Nraio2NDdy8eRMNDQ3Izc3F+Pg4qqur8frrr2NmZgZra2sy1aGHmbnfRHFSHU0NCS+cWCwm\nBWN1dTWsViu8Xq+AfLg6ItnN7/cjGAwKv59FFVcM3G1yzcTPs6SkBBsbG5iamkJXVxeKi4tlr8gp\nHgBB7MZiMdhsNrHPUpBIaBC96BSRUadCIV1BQQG0Wi1KSkpk506xGJMCAcj0wWq1im6ntLRUcM3s\nIgHI6JaaB+ZA8OxJTU2FUqlEUVGR2ASPjo4wMTGB1dVVhEIh4T0QesRnPSkpCUtLS0hNTUVHRwcM\nBgPy8/Oxtrb2P8TAfMY5aTo9PUVqaqoU+fxyuVwinFWpVEKhKykpQW5uLm7fvo2RkRFcu3YNRUVF\n8Pl8KC8vR3FxMex2O/x+vwhgjUajPCubm5uCo+aZwuhklUolkKPLly9jZGQEc3NzQrxLTk6WDr62\ntlZ0R/zMnU4n7HY7bDYbMjIykJubK3x/8uTX19fR398PAMIKUSgU2NvbQ0VFBS5duoTy8nIRQW5s\nbGBpaQmBQACxWAx5eXky/SQlFIC4oog7dzqd0gB+85vfxDPPPAO3243p6WkMDAwIXInBP1arFb29\nvaiurpbUu7y8PJnS8h0YGhqSpq+trU3CtTiFuXjxoiB8Z2ZmREdFhwmzQV588cXfeYd/rj754uJi\nNDU14ebNm9je3kZ9fT3S09NFVFVcXIx4PA6n04mDgwPpRPV6vYyqDAaD7GY5uvN4PCL0OD4+RigU\nQmlpqTzotBQRFcsKlPvOra0t2O12gaqcPXtWAlxSU1OluqIFkPtUKl2bm5tRXV2NiooK6ab0er0c\nSrW1tZKvHIvFZLJAARvtTVqtVmw4XDswCIZITIpO6I+mmIlRlBxV0YoWDoextbUFrVYr+EZ29RkZ\nGTh79qzY5YjNpQjmaR+5zWaTUBKFQgGn04mZmRlkZ2cjEAjg9PRUBHUFBQUy2qPQzGg0oq6uTjLF\nNzc3RZCytbUlGeiM+Y1Go9BoNKirq5N4zmg0ioqKCtTW1oqXHID8W3xZtVot2tra0NDQIHAlwnM4\n5uQYloJD/l0Uw/H3xSKPZMLDw0PBX1IISkWxy+WSz4jYY9rWOPbnaJbPMb+HnJwcWZswEpXOisPD\nQ5hMJhQUFMBut0skKUeVubm5wkFPSUmR6QWJYcvLyzg9PYVGo5HVEvULVNlrNBoRlfHQZwpiPB4X\ngAlV7QRRkdnAwA2v14uGhgbk5+eLvqWqqkoEdBQQBQIBiWxmR310dCRe5ZycHGxtbcHj8SAYDMp4\nN5FISPHCRD/unvmMkRXPn4daCIrIOEGrqqpCZmYmHA6HfParq6sSGUwLKIvsoqIiEetVVlYiFAqJ\nsIyoatLtampqoNVq5TPa29uDTqeTIBRODfx+v6zD0tPTEYlEoNPp5B2jAIsAGuoI+B4TREPPNdeK\nBI5xckE7pVqtRmZmJnQ6HYxGIwwGAzweD+bn5wW3zN+zxWKRs+Dw8FCKJHr4CUDiOUm7IHUTLF
SI\nXnY6nZIpUFFRAY/HIyTHrKws0SMlJycLQY/6qGg0CqPRKA0M3+GTkxNhIsTjcXneubpkzDanM097\n/ymkVigU8rzQSZSXlyfFcUZGBvLz81FWVobCwkIJb/J4PAJyevqOIPM+FApJgVZWViYC74sXL8Lj\n8cDj8Yhjgk4JNi1VVVUAILRS5gT8Xnft/2e39v/Lrz/90z/9B9oYuNukT1qlUkkHxUpqYWFBQihm\nZ2eFqsbAEYvFgnv37uG9997D3Nyc7AcpaLBarWLBA4CGhgZUVVVJJckH4smTJwI/YddMLjQr5IKC\nAhl19/f34/j4GLOzs+jv78e1a9fQ3Nwsu9i5uTkZY+p0OtTV1QlKMx6Po7S0VCAhFKXxgi8uLkZy\ncjJ+9atfIRaLyehrYWEBubm5OD4+FgvgkydPcOPGDdhsNpycnEjlTfvW4eHh/23vTILavtP0/7AI\nEAi0sWhDCBCbwGy2wRt2Ok6cdBJnKtWZpJOZSc9Sc5rDXOYwx66ay1RNTdXUzKRqZrqquyadrnTW\nTpx4SWxs49gYzG52WSAhCRAgARKSWITQ/5B+3r9z6rmlD7+3qqs6nY4D0u/3Xd73eT4PlpaW4PF4\nEAwGxRKSm5sLv9+PTz75RHC7jY2NUKvVuHfvHm7duoX79++jrq4O7e3tqK2txejoKD799FPZwJLJ\nJL799lt88cUXGBwcRDqdxk9+8pPv3eCj0ai0fin0slgsODo6wvXr1/Hxxx9LnOT6+rpsjDyQERfJ\nGd/o6Chu3boFm82Gnp4eABCKGHHJOzs7CAaDmJubE0saQypWVlYELFJQUICVlRXcvn1bbmZ37tzB\n9evXcfv2bcHAMrylv78fo6OjmJyclENfKpX6XkIUf2YutCsrKyguLkZPTw8ePXqEe/fuwel0YnV1\nFb/+9a+Fa0/Pb3Nzs2zCN2/exPLyMiorKxEMBrG+vo5jx45JS1Wj0QgQisQ0wjIikQj29vZEMU4L\nVywWk1kuW9KFhYVySKNSHcD3gn3u3r0LvV6Py5cvi8+bh76bN29Cr9djf38fH3zwgSSl0ddMnvu1\na9dgNBoRi8Xw3//937h27ZpYH4mGdrvdkuyn1+tRU1OD+fl5/PrXv0YikUBFRQVeeeUVlJaWYnNz\nU26Ti4uLEsFMYM3s7Cxu3LiB3/3ud2huboZGo8GHH34o4yOSNZ1Op2wWFosFsVgMX375JVZXV3F0\ndCQBOi0tLcjNzRVhKRXYzOKYnp5GOByGxWKBx+MRHkJWVpboOqxWq+QuHB4eCiuC7+TT7yutqJFI\nRBb52tparKys4F//9V8RjUah1+sxOjoqcdwARODncDjgdDrR0NAAl8slCvZ4PC4pjuTVJxIJ9PX1\n4cqVK5iZmYHP58Pm5iZGRkZw//59PHnyBAcHB3A6nUgkEoLW5nx5ZGRERnQU7w0PD6Ovrw99fX14\n+PAh5ufn5Ubd398vMa5erxcej0d0OoeHhwiHw1hYWMDAwAACgYCgwDm3rq2tRVtbm7yDTU1NiEaj\n+OCDD8QOe+3aNRweHuLy5ctIJBJYW1uTbAwyUQgMSZuUegAAIABJREFUoybr9u3buHPnjhz+9Xq9\nXPQeP34saak7OztykIrFYgIpoy2PyaEkOVKIHQwGodfrUVZWBo1GA4/Hg76+PrS1taG9vV0OBYQi\nlZaWYnh4GOFwGDqdDqOjo7h9+zY8Hs8fr4UuPz9fGMjRaFROaoxOzc/PRyQSweTkJA4ODtDV1SXB\nEqurqygrK0NLSwsGBgawubmJ1tZWCbTgKYv++9zcXOE9EzMaCATgcrnEJufxeDA2Nob8/HzU1tYi\nPz8ffr8f29vbcnOgBYMUOs6LXC4XsrKy5GTMtK/V1VXBRvJmxDYXT5XUIRD+wps/GdXLy8uwWCxy\namxvb0cymRThC/nhJH49ndiXyWQEWkJGOUEj5D7zpkfIzcHBAZaWlgTAwp+noaFBIm8tFotwzPf3\n9/Ho0SMh3/HAxmjdoqIisSjydsrvhMlbJpNJMJD07xNYUlhYKNY/UrhWVlZEqMnTPJXTZHlvb2/D\n6XSKVoFZ3tRWjI2NoaGhATU1NSIEIqGNBz4ClehYUKvVcPw+oGNsbEzGSswBdzgcyMnJkYWQ8cOp\nVAparVY43uXl5cIloL4kJycHwWBQPifqMJilsLOzg9nZWRQVFYlQi5t6JpMRSlxJSYncPKlliMVi\nuH79uoB51Gq1eOg5e+QCVF1dLYI9ft9Ek5J3EIlEMDIyIjZTRnqypc9ZpNFolPZsLBYTqmFzc/P3\n3AS0kPLfT1Eso5kLCwvh9/uh1+vxp3/6pwgEAqLGJumSyXFFRUWC2NXr9bK+lJWVoaenR0STHHER\nVMR3hM+KRqMRy5RarRbdwuHhoRwqGG0MfOeGyMrKkg4ix4fs8iwtLSGVSqGmpkY6TBRxURDLQxA7\nTcQgs819eHgogl7S/6j8V6lUkhFBPcbThzBeoJh2SBfP00rvw8ND2O12tLa2Qq/Xi6+fuom2tjYM\nDw/Ln7eysoJAICCf3dM+b45E6ZhpampCU1OTdIn4+W1tbSE7Oxtzc3OyhrHFPjs7i3Q6LfoWi8Ui\n3TdeBgOBgPAKeHNmnLHf74fX68Xu7i4ymYwc7MvLy+Hz+ZCfn4/q6moROBsMBsTjcczNzYlSnuFO\nZLNsbW1Jt8zpdCKTycghh8I68h6i0agkb/IZIfyrra1N3t+8vDxxKnBfKCwsFCsz0wHz8vJEjMeO\n1v+lfrBNnmKvUCgk6kOeoiwWC9bW1uD1ejE+Po6mpia8+OKLYoOpqqqCxWJBfX093n//ffh8Prz5\n5puorKyUtjkRscxUZ1Z3d3c3/H6/qMiJJZ2fn0dvby9ef/11sd6kUik5gbF9zTlOc3Oz5IzX19fj\nxIkTQqhjGt7y8jKam5vhcDhgNBoxMjKC0dFR2O12mXcz0pbkOr1eLwtwf38/tra28Morr8if8fLL\nL8tnt76+jkQiAeC7B6i1tVX42Lm5uWJNUalUMs8jhY43PXYNysvLsbKyIqKrvLw8dHV1Cd6Rs/+s\nrCw0NDTIbM/tdqO3txdtbW148cUXZeTw4MEDIagtLi6isLAQdrsd9fX1MBgMACBZzMeOHcPZs2cF\nlkK1ayAQEGV+OByG1WpFeXm5CK8Y4kGAR05ODlpaWrC9vY1IJCIY42g0Kq1ehlSMjIxAr9ejsbFR\n5mYtLS3S1aG95vz588jLy5M8Ah4MMpkM5ufnUVtbKx0Cht8Qp1tfXy/RxHw5ifTd3d2VdK3XX38d\nq6urcLvd8rNoNBppi9psNlE+X758WcYOT4vHOJph/CRny06nE5OTk7h69Sreeust6TKRTR8IBLC0\ntISZmRlkMhk8++yz0jqmRiQcDgs1TKvVIhgM4saNG+ju7kZtbS2ysrKg1WrR0dEhGQhM0Oro6MD0\n9LQcEOrr61FeXo4vv/wSs7OzqKysRGVlJdLpNC5cu
IC6ujpxsezu7qKmpgY7OzsYHh5GY2Mj/uIv\n/gL//u//jtXVVRk/ra2tSV5FRUUFKisrZQ3hu9vZ2YnOzk7k5OTA5/NJvoPNZpOZPXPCOQKhtcxk\nMsHhcMiBPBqNwmg0ipqeQivOegmHWVtbg8lkQn19PT755BMkk0m88sorSCaT4kdn4BQDVHgbNBgM\nWFpaQjQaRUtLC7q7u4XgFo/H8fjxYwDAO++8Ix58bhZsgbOoF6C+grdeXlzS6bQcJm02Gzo6OuRg\neffuXdy7dw9nzpzBc889h7GxMWxubiI/P1/0PzxA0PLFi1AqlRJ9SmFhoSjdOe/mQWtubg5XrlyR\nREy73S4MEH6njY2NsNvtaGxshEajkfClsbExvPvuu2hqasLZs2eRSqVgsVhw6dIlfPTRRxgaGsLJ\nkyeRl5eHkZERGe319/dDp9PhmWeekThr3rzHx8fR1dWFxsZGHB4eitia2gyizDki2d7ehtfrhV6v\nR1tbm4wnl5eXMTMzg9HRUbhcLvHOM4304cOHos3gwYUQKiaLAsCHH34It9uN8+fPC/WP44cPPvjg\nD+61P9gm/+jRo+/9tU6ng8PhwPb2Nn7zm99gamoKa2trknT2zTffiDDHZDJhbW0N7777Lqanp1FY\nWIhMJoNHjx7h008/xU9/+lNUVlbi1q1b2NnZEeEY2eL8cn0+H+7fv4+1tTWYzWb81V/9FXw+H3p7\ne3HmzBlYrVacOHFCUqb4omRnZ+Pzzz9HPB7H6dOn4XK5hHkcCoXw29/+FkajEW1tbXC73fD5fDh1\n6hQsFgtefvllaX1euXJFFKjXrl3Dzs6OYDkPDg4kKYxir9LSUjQ3N8smPTMzgxs3boiimbe6RCIB\ni8WCcDiMR48eIZ1Oywk5nU5jbW0N9fX1aGlpwdjYmLzwXCQYm0gg0dbWFr766itkZWXh1VdfldM4\nBZE+nw91dXVQqVQYHx/H1NQUhoeHBev78ssvw+FwSDtMp9OJct9utwP47mbCn31hYUHsbrSOZWdn\n49q1a9je3kZtbS2cTieKiooAALFYTBLkksmk3JzX19fFo8zAkStXrkh7MB6Pw+12yzyfB6vh4WEs\nLy+LlY+inLm5OSwtLWFsbEy6OIFAAEdHR8Ltfvz4sVhzlpeXZeNdWlqS22p+fj7y8vLk1sjnz2Kx\nSAzmN998I4r0J0+eYGVlRXIEKBKNx+OS2jcxMSFkxOrqaty5cwebm5uoqalBY2Mj/u7v/k5sWgaD\nAdvb21hcXJSI4rm5Ofh8Pty9e1eCQCoqKgSiMj09jZWVFUxOTqKhoQGvvPKK4Ep521lcXMTw8DBm\nZ2clLIcz/LKyMoyMjIhfmZoYr9cLm82G9vZ2eY7z8/OFVPZ0umRJSQk2NzfR3d2NUCiEoaEhzM3N\niXOFqGVSyRYXF0VERyvdN998g6GhIfj9fszPz2NqagoGg0H+U1xcjOeff14Swl566SX5WXl7dDqd\niEQi6O3tlfQw8iXsdjuSySS2trZw/fp1QdTW1dVBr9cjEAhgeHgYQ0ND4ql/9tlnMTo6iocPH8Lv\n98sa43A48Nd//dfi7V5eXsbIyAjm5+fR1dWFhoYGYe5nMhmxrlIourW1JZelqakpXLhwQXgDPp8P\n3377LXw+n8CUmpubkZ2djfX1dVGMm81mvP3221heXsa//Mu/IBAIoLS0FHfv3kUsFhOhXW5urnxf\nFPUSdRwIBDA1NYVQKCRaIVr57t27h4mJCUxOToq90ev14vDwED/+8Y9x//59zM7OorGxUWBXXK84\n2yc7PxqNoq+vT+b0ra2tOHnypAgXVSoVpqamkJ2dLYyTmZkZeDwe7OzsQKVSwWg0yrvO72NrawuJ\nRELEuNTJjIyMYGFhAbu7uzh+/DiKiorgdrsltfTbb7/FwcEBTCYTOjo6RDDOFD5qbP7t3/4NFRUV\n+Mu//EtMTk7if/7nf9DU1IT6+no4nU5xMLAjmp+fj5WVFRGD/qH6wTZ5tsQMBoOEw1gsFiwvL0v4\nydOxomxPEUPIYBUKohYWFmRuxTQwRmfSzkEkKqMtJyYm4Ha7kUgkhI7k8/nEQ0uSFb3v4XAYFRUV\nkpkeCoXQ1tYmLxk3JdooGLrwtBiFvHsuSjyR8sVifngmk0F1dTWKiorg9XolCpH2FX4ee3t735v/\n0B7j8XiwsbEhmyc31/39fbjdbuGNj42NIRqNorq6WnKQyaCmmDEnJwerq6vIy8tDOp2Wdu/h4aHc\nNmkZZCrb+vq6tCWpbF9fX5cNkEwC3iqZ2MXv5uDgAABkdsYgkI2NDWn57+zsiMeetEB+phxVMGnQ\nZDIJWpiCTSYJAhCXBoWOjNtlF0SlUmFjYwPLy8tYWlpCZ2cnzGazzGtNJhPm5+cxMjICp9Mp0Zj8\nc1ZWVkSNTbynwWBAKBSShUutVsvNmOEvWq0WxcXFsFgsqKyslM+OwCS20SORCNxut3j++e9JpVIo\nLy+Hw+HAw4cP4fP5pDXI2WZ+fr4E7oTDYRkn8bnlWCWZTApCtbW1VfQPDBHKzs7GxsaGzLoJcGJn\nbWJiAna7Xd5pk8kkzyzFT1lZWSLUoo0zFArJ5sV89aKiIvT398uoharo/Px8bGxsYGlpSZ5VPp9Z\nWVkijGMre3V1Vd637e1t4SykUins7++jqqpKxgEtLS3i715bWxOVPMeNOTk5qKqqksNmIpEQRwXB\nXMwtIGNApVJJR3NxcREA5EZns9lw8uRJxGIx6bg8efIEi4uL6OnpQXl5OdRqtYTdEGLFW/TTHT+m\nli0sLECj0SAUCol2gPHK1GMEg0EEg0FUV1ejrKwMZ86cwddff43R0VFxJy0vL0OlUqGkpAQlJSXi\nz+e4FICsBVtbW8K65+2eItvFxUWJU+ZIkSOrqqoqeDweUe3n5uYKGIfZAkVFRXA6nfLzE01NgZ/R\naJS2NjsYvPQwAZMdTUK9CgsLMTAwgPHxcSwtLYlmYnd393tExKmpKWF/MJqWLoPFxUWMjIzA4XDg\n0qVLqKurk9+ZgtOSkhLo9XrMzc2Jq4tplnSGUCzOKGtSQfkz/V/qB9vkTSaTIDyLioqg0+kEnVle\nXi5tV1pZuEFlMhnJYs/NzYXNZsPW1hZ+85vfoL6+Hv/0T/+EyclJ+Hw+vPTSS3j06BE++eQTXL58\nGT/60Y/EV06ucltbG1wuF8LhMK5evSrtdWZA80a5vb2NsbExHD9+HKdPn0ZnZyei0ShOnz6NyspK\nSa6idaaoqEgEUswF/uabb3DlyhX87d/+LTo7O1FWVoapqSnMzMygu7sbWq0W29vbKC8vh81mEwVr\nbW2tKJUZYjM3NweHw4F/+Id/kKQ7phQdHR3h448/ht/vl6SmoqIi2O12yfPmuIACwOPHj8tDyhv/\n3t6edCj+5m/+Riw4KpVK0KBUnnPjqKurE9GIy+XCqVOn0NDQIDNHBkVwhslEJXYxNBoNWltbBZRT\nV1eHlZUV
9Pb2oqGhAW+++aaEAtGaGA6HRe1cU1MDr9eLx48f4/79+6ioqMCbb74JnU4nfnoqZmkh\nvHbtGqqqqvD6668LFvPUqVPSYuNmlUqlcOLECeHXp1Ip6UgwtMjv94t+gTY+Il+TySQePHiAiooK\nlJWVSRIeLUmBQADHjx9HdXW1cP4zmQy6u7ulJf/5559jcHAQLS0tcDgcEi8cj8fx+eefY3V1FQsL\nC6itrYXNZhOgEHkLbA0z7pTahrfeektm61y09vf3pe3/8ssvo6KiAhsbGwI24nMyNDQEp9MpUdBn\nzpwRcA21I4uLi/jVr34FlUqFF154QcA+Wq0W/f39+OKLLwQ3TTtodnY23n33Xfh8Przxxhtob2+H\n0WiU9LPKykoAkE2NMKQHDx5gbGwMf/Znf4bq6mqMjIyI8K+zsxMOhwMulwvZ2dnijohEIrhz5w7W\n1tZk3EMfc3l5Odra2tDY2CjKfZ1Oh+eff17GS0ajUSBaAMQzf/v2bdy6dQvDw8OIxWJwuVxwuVyI\nRqPCsJ+enoZOp8Obb76JSCSC4uJiSayLRqNiaUylUmhubsbJkyfR0tIiBzSOQlpaWmQkR0EzaZCM\nsP7tb38r40idToeuri4R/m5ubmJiYgKPHj3C4uIizp07h46ODjQ0NIiQjpcIXsLoauCB0+l0oq6u\nTkBbHKvQhkxLLlHRBoMB3d3dOH/+PHy/zw45fvy4WIybmppgsVjETUCxI5kT0WgUjx8/htFohNFo\nFB0QIUIUszkcDlGjM6abM3k+P5WVleLo8nq9Ms5hVgN/X4aUhUIhNDc3A4DoFi5evIjbt29LJ8Tl\ncuHy5csAINQ+qvodDoeEAn377bf453/+Z/T09OCNN94QXUo0GkVXVxccDgfu378v6yTX4f9L/WDq\n+vPnz/+cZC/OYL1eL6LRqIRvcFbMW1YqlUIymRQrgtVqlRnP3t4eGhsbJQAkFouhvr5eoAFOpxM5\nOTnweDzweDwIBAJyug+HwyJ+KSsrk3ZhKpVCTk4ONBoNDg4OMDc3h5KSEthsNhGahMNh0RVsbW2J\nlSydTsv/l/So8fFxRCIRnDhxAgaDQdovnBtvbGzIolJVVYVEIiFJRB6PBwMDA0Kgo5hPp9OJmIqp\ncFlZWZKaVlNTA6PRKMQ+btwmkwl5eXmisD5z5oyIip6O8gS+szvyxkKaFD+7ra0toZaxBQZAhFHF\nxcVCrcrNzUUwGBT1cyQSgclkEkU0uxv0l/I79/l8GB8flxskb+rMPsjJyRFRy8zMjNhw6FfW6XQi\nCqLFxmKxyK2FcalarRY+nw/RaBROpxMqlUrEOyQtkqO/uLiI8fFxhEIhGAwGuFwu8fnzALC+vo6c\nnBwRBO3s7AhNbmVlRbo3VMHv7OxIO54zfwpG2QKdmZkRIA/zDKjgV6vVckMpLy8X3vrOzg6sVqsI\n8rKzs+V94w3K7XZjZWUFsVhM8hWe7kA1NDTIM5udnQ2LxSLzSAKhLBaL3CqB725yq6urkizHZ/Lw\n8FBGPoxa5SF0d3cXAwMDAmU5PDwUyhdviW63G0+ePBEuuuP3KFBubPPz85icnITZbBZVtMFgQDKZ\nxODgIMLhMBoaGmA2m1FaWopAICDgJiq7udYsLi6KtgeAcPGXl5fh9XolQCscDsuaMDExgVAoJCl1\nU1NTcqPnukDORiwWE/RrJpMBAEnxY5eqv78fa2trcvjjbZrBLvF4XHQ8vAQx34NiQKvVitnZWQwN\nDcmhn/RAaoEikQhu3bol/xztjexa7O7uSqeIzAi6N8h54PewubkJn8+H4eFhGI1GNDQ0AICIqxcW\nFjAxMQGNRiPhPQAEWLW5uYnBwUFhetCSOTk5ifn5eXi9XvHwU8AIfIetraioQHFxsYxRGP7DiOyN\njQ1Jo9NoNNjb25M9yOfzIRAISGgScwaomWDG+9HRkbxTTF8kIZFrHWNoGc/N73ZxcRE3btzAzs6O\n/PNLS0twu91wuVxwOBzIZDLyGRL4BAAVFRVCRcxkMvjtb3/7B9X1P9gm39LS8vPi4mLY7XYsLS1h\namoKAwMDYp/S6XQifuEthq1Sv98vmMfNzU2hFNXU1IiYiOpws9mM9vZ2xGIxjI2N4fbt27IR0Pby\nxRdfQKvV4rXXXhOWMhdcepbT6TS8Xq+0ZRoaGqBWq/Hhhx9iYmJCHnifz4fBwUFkMhm5kW9sbOA/\n/uM/hAXAYIgbN25Aq9Xi9OnTuHr1Ku7fvy9tt4qKCuzs7Ahx7d69e/jiiy+EQPXaa69Je624uBjx\neByjo6MCZzk4OBDrR0FBAQCIPYWQDs7stFotWlpa5EZJdvTc3JwsPL29vbIRU83Pdtf6+rqkslGN\nTZzu0tKSiMQKCgowNDSEq1ev4pNPPsHKygra2towODiI3t5emVMPDg7KWGZhYQFzc3MIBAKoqqqC\nzWbD7u6uqMvVajUMBgMKCgowOzuLX/7yl9jb24PD4cAzzzwDs9mMqakplJaWorq6WhYVs9mM8fFx\nzM3N4U/+5E9Etc9DVElJCYLBIHp7eyUjmzfcvb09fPXVV7hx4wZKSkrk9sI5fSKRgNfrxcTEhBwo\n+LvNzs5iZmZGZqu8GScSCaGyEZzR29uL/v7+74FrqLiPxWLS3r979y4mJiZw7tw5OJ1OaXevrq7i\nvffeQyKRwPnz5xGNRoVGNj8/j8HBQTgcDqhUKnz88ceYmJiQ+SPjRDnmKS0tRSKRwJ07d5Cfn4+e\nnh5Z6Mn3J72MuNbBwUFcu3YNTqcTVVVVMh7p6+uTrPRAIACz2SzRuWNjY/jFL34hxMYXX3wRJ0+e\nxMOHDyX4iKjVUCgEm82GCxcuyByatrONjQ3EYjHk5eXhtddeQ1ZWFvr7+/Hll1/C4/HA4XBIV+Oz\nzz7DtWvXvmdTI5BnYmIC4XAYOzs7kgrJlLPbt2/j9OnTMBgM+Prrr6WV+tFHH4moMhAISM4B0dWt\nra149dVXpUXPAxbzNfb39+HxeBCNRpFIJHD79m0kk0k8//zzQlwjOOf999/H/v4+SktLxZ1RVlaG\nQCCA+fl5IUMS7jU/Py/vQEdHhyCvmVJ3/fp16HQ6gSJNTExIFPXy8jIKCwuxtbWFkZERGZ8xitXr\n9UpuSDqdRiAQwN27d6X74Pf7BYl79epVfP3117h48SKMRiMePnyI8vJyuFwuCZi5cuWKjEEikYh0\nZR8+fChWvXQ6jVOnTmFra0tAYxyHcnO+evWq0A+np6exsLAgaZFkd2xsbKCvrw+Dg4OYn59HZ2cn\nLl68iOLiYhmfkZEwPDwsOSjEVT/33HNobm4W0FJVVRWam5sF0MURQk5ODvr7+/Gf//mf8qzNzs4K\nFthisUClUklOx9TUFDweDzY3N+FyuQSQQ9fIp59++se7yb/99ts/r6iogMFgkEAYl8slIBmeLD/5\n5BOZlUUiERweHqKmpgZra2v47LPPRGSl0+kkRpULr8ViEUsH28TJZ
FIgPPwSaKXJy8vD119/Dbfb\njePHj6OwsBBra2sYGhrCwsKCgCFWVlZQUVGB0tLS750adTodsrOzsbW1JbcKClBsNpvczgBIm7a6\nuhpNTU1YWVmBXq/HM888IzNd3s5v3LiBrKwsnDp1SlLRKN7SarUYHR2Fz+dDbW2ttIN4MJmcnJTN\ngNATqsFTqZQI1nQ6neBxOSdcXl7G1tYWNjY2hEPOHGdCGw4ODhAOh2XjpSqUUI2pqSnk5uZKTjRh\nEOfOnUNLS4uMRMxms/Dg7Xa7qMvZeiRKNRKJIBaLSXuW1ETehimY48iEhDLaWtjZoRWJyl1y2ysq\nKgSHCQButxsdHR04e/asqILZxbDZbOju7kZhYSEmJibkdr2wsCCjgcrKSgleKi0tFceBy+XCiy++\niObmZuTl5aGmpgbNzc3CvDebzZIoSK/t0NAQTCYTmpqaJB/earViYGBAgkxKS0tF6MabfGVlJZ57\n7jkMDAygr68PHo8H2dnZkgrm8/nQ2tqKgoICjI6OwmQywW63w2q1ilCKPwtzzg0Gg9DP7t69i2Qy\niaamJsE4k1/OGT7f0d3dXQSDQUGKHh0dCYWQ6XjNzc04f/48Tp06Je174Lt256NHj6DX62G32+U7\nqK6uFj5+QUGBjMw4Q62vr8fU1BSuX7+Oqqoqabvn5+eLgJQBVT09PTh58iTsdrsgl2tra9HR0SFA\nFLoa2tvbcezYMQH71NfXC2Oivr4enZ2d8Pv9ePz4Maqrq3Hq1Cm8+OKLaGhokDEYuxS1tbVy8z84\nOMCJEyeg1WqRyWTQ3t6Os2fPClBrcHAQNptNNm4CwkpLS+VwxzAls9ksSYRqtRpOpxMmk0lcCIw0\nnZiYwObmJrq6unDhwgVRl9tsNtE7HB0dCcdfq9WioqICVqtVgp/Onj2L9vZ2OH5PsDQYDCgrK8OJ\nEydgtVolHY8ZIl1dXWhpaRG1PDc84DuNTEtLi6Q+MoHQ6XTi2WefxbPPPguDwSBRxLTXsitATQ85\nC/z5qK8wGAzys5NRYLfb5XexWq1iP2TnMBgMSqw0x2m5ubmw2+2i3eIF72lLdF5entg2ySzRarWC\nVT937hysVit2d3ela81OzczMDNrb29He3o5UKoVQKASPxwMAKCoqwgcffPDH65Mns50paPSTM5aR\nwpDBwUGkUim56VD4cHBwALfbLa19tt36+/tx6tQpaLVa+P1+edC4APC/E0Wbl5eHkpISxGIxjI+P\n4+HDh8hkMnj11VcFv+h2u2WBoyLbaDSiubkZTU1NkpFOHz19oBqNRm6wTU1N8Pv9CAaDcLvdKC0t\nRTQaFa8krS/0JkciEQGsDA0NyRfNA8Xe3p5YhAYHB6FSqSSi1e/3y7yU6t+DgwOoVCqk02nJ/qZg\n8PDwUH4OQmQoHqIfk3N/zmOXlpZE1EKm/+7urgjEeCBaX18X5ToFWVwYyaJ/OmpUrVbDbrdLSAVn\nbXl5eWL1on6DbVTqB7a2tkRpy83x6OhIKHu5ubkiXCRbnGSpRCIhbXuy3RnyQrY0o2L9fr8cWjQa\nDcLhMHw+HzQajfim2dUAIMI0/m78bmjzCofDMBgMskEzGY1dGH4PCwsLqKmpkUQ4AjhIjMvKysLB\nwYGwFdga1Wg0ojhni5QHEr/fj729PckWJ9UwKytLNAGMDFWr1WhpaUE6ncb09LT8e+fm5pBOp4X3\nTkIaUauRSERcJFqtFvF4HPPz80gkEkK5Y2JdJBLB+fPnUVpaKu/23t4e8vLyRFDKUBmqqDc3N0W0\nl52dLShbsvfHxsYwPz+PYDCIjo4OlJeXSzgL3xGr1SoQEopOd3d3RRxVUlIirVOKA6nvAL4LoiJN\njxkCHL1w7Thx4oTkJMTjcaTTaaE68gAaDAa/JyJUq9Vye+cmOD09LYdLttv5eW9vb4uoj04dhu9Y\nrVbU1tYKTCwej4tVNRAIwG63yyWDjAOOmTimYMwzNySVSgW1Wo3S0lLY7XYRBGcyGcEQE+u8vLws\nlyGz2SzCS8KbePHhmJOOlYWFBYFjcWPmO8TxCufpm5ubiEQiwk0oKCgQ1gKjfEnQ455C3RVpd3Rx\n8OJFgST1TPX19YJ4ZlgRu2MDAwP40Y9+JKhJGytqAAASPElEQVR0rkMc/1I8qtPp4PV6JRmSrhhq\nLJidwYhfq9WKwcFBGQnu7u7+8afQkeI0MzMj7OuCggLxSt68eROLi4uwWq1Qq9WYnJyUuVAikUBR\nURF+9rOfYWBgAP39/ZJoRIAN7XNdXV145ZVXJNSAc/nZ2Vmo1Wro9Xr87ne/kwCThYUF6PV6TE1N\nSTu8tLRUHlSv1ytc5fr6erzzzjtYW1vD4uKitMZKSkrQ0NCA7u5ucQvEYjEsLS2JCnxnZ0esWCaT\nCQMDA4JQpfWEalgupoFAAOFwGCUlJWhtbUVvby8+/PBD5Ofno7GxEcFgENPT07h37x7KyspQVVWF\n7u5uEXxlZ2cjGo1KiE9NTY3AGUwmE+7evYurV6+K2ITQIFIC7XY7zp49i/v37+ODDz7AyZMnoVKp\nRKVLf6fBYIDZbBZ1dyKRwOTkJIaHh/Hcc8/h5MmTuHnzJh49eoTZ2VmYzWZJ/eJLFY1GxZ5SWloq\nhzgqmoPBoPhytVotvvnmGxQUFODSpUtYX1/HyMgILly4gNXVVbz//vu4ePEiuru7xXLZ2dmJhYUF\nDA4OCtCjrKwMly9fRkNDA375y19KfvhXX32F8fFxXL58GUdHR7h16xZ++tOfoqqqCjdv3kQqlUJb\nW5t4ey0WCw4PDzE7OytWpmAwiPz8fImazM/Ph9vtRjAYlPkiZ+7V1dV49dVXxV5nMpmg0WiQyWTg\ndrsBQFTRXq9XxHH7+/uYmprCf/3Xf6GlpUVwt7m5uaIfACC3s76+PrS0tMBgMGBsbAx5eXn42c9+\nJq1Xv98vXa6trS2kUil4PB7RTXCjopDvs88+w4ULF2R2PzIygsnJSfT09AhLPhwOY2JiAh6PR+yT\nZWVlcnDwer0iTFtZWcGZM2eQyWRw8+ZNZGdno7u7Gz6fDyMjIwLZiUaj0vlZWFiASqWCRqPBzs4O\nQqEQ3nvvPWg0GomAXltbw8TEBAYHB3Hjxg0UFBTA5XJBp9Nhc3MT09PTsNvt8uezS/T1118jmUzi\n4sWLWF1dhcfjwUsvvQSz2YxgMIh4PI6dnR14vV45fPh8PqRSKXEHsA4ODvDFF19gcnJS5uYmkwmd\nnZ3Y2dnBRx99hGeeeQYvvPACent74fF4cHh4KGPEp4VdjKZ98OCBPA/t7e2oqalBOBzG8vIyBgcH\n0dPTA51Oh76+PtFxzM3NCQZco9FgYWEBAETlvba2BqPRKBtxXV0d0uk0xsfHUVVVJemGOTk5cLlc\nsq7zphoIBKR9v7Kygvr6erz11ltwu924efMm7HY7YrEYHj9+jJ6eHjzzzDPY2trC8vIyAoEAbty4\ngYmJCXR3d6OiokL0SOyo
MsVtZWUF4XAYAwMDmJmZgd/vR2trq3AWGDK2sLAgFtShoSEMDg5Kx4pa\nBJ1OJ/TFVCoFl8uFF1544Xt6HMK6dDodDg4O8ODBA+EasOs3OjqKtbU1Gb8cHR3hpZdegsfjwf/+\n7/+KZum9997D2bNn8ed//uciwCXgp62tTWKFR0ZGoFKp4HK5RIz5f6kfbJOnHYgzGoJEeDomm54f\n+ObmprTC2FJkS53c9pycHHR1dQlClQsfLUahUEgoTLzNaTQaiUhlO0iv16OiokIiXJm9TKuRwWDA\nkydP5JRcUlKCrq4u1NfXIysrC2tra9JiJgOfLeOOjg5pRzNK0u/3C8CDNyfgO5EPc6f1er3gKyk4\npDinqakJ1dXVAlZwOp0igOEtNplMCubT7/eLT5MPOG99TU1NQnk7PDxERUWFCM7sdrtYrehOIOGL\nGMrCwkIRdQFAdXW1JOE9efJExFzxeBx5eXk4duwYVCqViOeys7Ph8/mQlZUFu90unyOtZDydq9Vq\nIZGVl5ejs7MTxcXFaG9vx/3798X6QjIVscFMCAyFQsjPzxe//dOn7M3NTRHeVFRUYGVlRSJZ2V6m\nVoOisYaGBhH8mUwmaLVabGxsSBuV3ydtL/v7+2LxMpvNgjCmAIiispqaGlmoNBqN/FkOh0OCTbiY\nz8/P4/DwEMeOHRMUJvn4+/v7qK2tlUPY0+EctFTxmSBVj1QyKuVp9STGlRZOo9GIdDotcCa2tUkM\nJD+cCZDHjh0TFC8DbdidYthPdna2ePRVKhWsVivy8/NhsViwt7cnoikS+zimSiQSKC8vh9Vqhcfj\nERcFEwVDoZCQ99bX18UFQjFtdna2sN/JdKcWg6IqiiD5DmZlZYl+CIDc4EOhEDQaDU6ePCmHiNXV\nVSE4Pn78GKFQCO3t7WKbKy0tlXZtYWGhtHpLSkpQVlYmcdMcPzLHgnP6goICyZHn5whARJKxWExG\nVsxcr6qqEvgRrckc47C7w5kyf05afZPJJJLJpIgt9/b2RDy9t7cnokCy3Ok4oe5Kr9dLDgU5/7Oz\ns9jf35duAsd9Wq1W1goKlilspa23qKhIRkNPR8lyTEihNkmD8XhcRN6kcEYiEemuLC4uCkBMr9eL\nkI9ESI7vKisrhVqZm5uLaDSKubk57O/vo7q6Wi49xBbbbDbJMNnY2JDnh8Lyo6Mjcebk5uYiPz8f\nVqtVGAxPe/r/UP1gm3xXVxeSySROnz4tak9yxePxOCorKxEKhRCJROD3+5FIJPCP//iPMJvN+NWv\nfiVzX4PBgOzsbASDQTgcDrz99tt48uQJPB4PfD4fbDabLOKRSEQefKfTCZ1Oh8rKSrzzzjvi32RQ\nB0EvJOXl5OSI0lKv16Ovr0/aT7zh0gc7PT0t80rOS1977TW0tbWhqalJmOkNDQ2YmZnB0NAQnn32\nWRiNRvh8Pvli2cKrq6vD0tIS5ufnJd86Ly8Per0eLpcLly5dQnl5OSYnJ4XlTo/u9PS0zEYJz2A6\n2NOLeTqdxrFjx9Da2gqz2YxEIoHHjx+juLgYzc3NMBgM0gblQaO2tlZQqnyZHj9+LLehnJwcHD9+\nHOXl5bLB8mCkVqvR2toq8a7r6+viCx0dHUVtba3EdjKL3Gg0wmazwWg0iofcarXCZDKhq6tL2m2T\nk5PS7qqvr5escgrq0uk0/H4/ampq0NXVJW3u3d1dRKNRsa2QkzA4OIjl5WX5rNra2kQXwcWH1r1E\nIgG73Y50Oo1kMimWuJaWFrn5hkIhRKNRmEwmoa1VVlYilUrB7XaLQI2z5erqaqTTabS3t2NtbQ1H\nR0dobm4WD/jp06dlrNPQ0IC///u/BwBZ2MLhMJ48eYKenh7JYCCznAt1bW0tlpeX4Xa7JUGPP1M4\nHEZpaSni8Thu3rwJtVoNl8uFb7/9Fk+ePJHRD2FKbEGTwDg4OAi3243q6mpotVq88cYbEhTCbg1H\nNZz3MpwlHo9Do9Hg0qVLsoF0dnZKK39ubg5DQ0Oy2ebl5cHhcODUqVOYn5/H3t4ezp07J9kW/Ky7\nu7uxuroKr9eLS5cuoaurS0YtOzs7cphpbW1Ffn4+MpkMTp06hb29PczNzQkhkYdPAoEIzqFH3W63\nC053d3cXw8PDIh5eWlqCwWDAhQsXJJGSeOWenh5YLBbRC7AzeOvWLfT29sqmTvxyYWGhvBt0CAGQ\nTAOqyClOY+59ZWWlXE7495eWlhAOh2WDa2xsFMHh0dGRQHyo/ucmWFZWhnA4jM3NTZSXl8vfp+WL\n3vB4PC6bnMVika7R7u4uJiYm8ODBAxQWFuLcuXO4ePEiysrK4Pt9aFZpaamM4ywWCwBI658Hj8rK\nSkl4pJCRh/by8nJpnR8cHODg4ACNjY2orKzE4eEhFhcXMTQ0hLNnz8Jms6Gvrw+lpaWor6+XbHli\nyFtbWwXH+9prryEUCqGvrw8FBQUIhUJ48uQJzGYzTpw4gY6ODoE1lZeX4yc/+YkcxI+OjsQazY4E\n37dAICC27tzcXExPT2NsbAzNzc2orKzEL37xiz+41/5gwrsf//jHP+cNk20sWni4aNhsNpw5c0aS\nrurq6gQuQrjA5uamzFsByAtWUlKCp4V9KpUKZrNZoBSE4hwcHEhoByMsOcf2+XyYmZmR+fTS0pKk\n5xFvWFFRgWg0iqmpKSSTSYm2NZvNonrXarWorKwUKxXDKh4+fIhQKISioiK5wTOUhTdCs9ksmzAR\nmHwZSPSiGIhZ2DqdTixfNTU1QtiLx+MIBoNYWlpCcXGxWEB4wFGr1SgqKhKHwtzcnNzu2REgHIXJ\ndyQxbW9vy8GCtieypgk80mq1AnzgbaSoqEjm0jzk7e7uypwxGAyKyIzajbKyMmkdb2xsyPfHgBIC\nb0pKSqQ9zIQyZnhXVFSIk4K3E1oGeSvhd8K2IKl5vDFwESMulB0nLnYcxzA85elses5byRxQqVTY\n3d2VOV9tba2MTDh3ZGIelcHscFAg6XA4UFRUBJ/PJ+ClgYEBhEIhORSVlJSI7oVUOboU+M94PB5M\nT0+LlYwcdnq2yQJn6mFbW5vgfq1WK7KysuS2U1xcjO3tbajVajQ3N4s4lSClnJwcESlRgOlwOLC4\nuIjbt28LU4D4ZgppOfff39+X/523x4ODA2xubkp6F/UOzc3NwnrnmIwsBAACquHGyVsf593sAhmN\nRuzs7ODJkyfY2toS/zTw3cEqEAgITpdCVNpXeQjSaDRoaWnBiRMnUFVVhXg8LqhSOleoOaBbhe/9\n/Py8aHomJycF0MMMA2phAoGAvEdUh7PTxi4XR2F89yKRiCTU8TljUiYTCff29uB2u6HT6eR2yVs7\nY4lNJpP87i0tLWhoaJAUUb6HMzMzotDPzc1FdXW1QLkY083bO7PnQ6GQhCfx4M/5+MLCgrTlefim\nWDcnJ0dslFxbCZZiZ2BpaQnxeFw0ItTK2Gw2sUYODAwIkpipgVwP6f7ioYvdSJvNJ
hAmdu/YGYtE\nIrLJ5+fnY2FhAeFwGGVlZcKqoJYllUrJc5bJZBCNRnHnzp0/XuEdc8KzsrIE1hEKhSR+j7xop9Mp\nSE/iAKkEZjuGIheSvqh8LygokNvZ07nzwWAQyWRSAmO4wdFWRjEEfdTl5eXIyspCIBCQFiMXFbbg\nJicnkU6nZU7FkzZb3wyHoEc9FAphdHRUXAWMiORtgiE0Op1OBC8El2xubmJyclKiGzc2NoS1zjbe\n9vY2MpkMampqkEqlsLCwAJ/PJ+Q8m80mN0GK8igO297exsbGBtbX11FbWwuHwyEBLxSFcCGPx+MS\nTkJSXl5engRBmEwm7O3ticsgGAxieXlZlKnUWLCd+HR7lYcBtVot8ZN8GZLJpIQ2EOfKOM7t7W0A\nEEGW2+0W0Q5Vt4WFhUJD5A2dNx7asZjyR385Vblms1k8t4QCra2tSY45W2rLy8tYX18XJS7bkfwd\n6RHnYeLg4EA6LBzl5OTkwOv1IisrS24xPIRQEMjvpLq6GltbWxgaGpJW8sTEhLTN+WcTskGgCQA5\neCaTSQSDQfh8Pmn1Ps3RZwTs2toadnd3UVBQIETBnJwcEa7x4EKk8NNRnkdHR/K5svuSyWRkhJeX\nl4f19XVMTk7i+PHjgkxljC0A4dvz0MdNmKLVra0tdHV1CSaYvARCm5aXl4XuyHeCn8fT0a/U0NAS\nWlhYKJngq6urAodhjGtJSYnYO2n/5cGWTiDgO6pdZWUliouLsb+/j9XVVdFvUBScTqcRi8Vks+IG\nR0IeBcq8NXPNSafT2N7eRiAQgM1mQ2FhoUBouNZydEQNEbsm9PFbrVbodDoEg0HMz8+LU4iHs52d\nHcljIJwmGAyitrZWnrWcnBxJZGPeOi9CsVgMbrcbubm50hmkWp9zbgrPGBJEvCsZGO3t7WKDXl1d\nxcrKCjo6Or4HfNrY2BB1PEW6R0dHcvnju8p3mE6HxcVFwZaXlZXJLX98fFzeX1oqefgIh8Mi5gP+\n/0gtOztbfnZ2qoxGowgEuf8AEIsyxyP8TvkMMkRrdXVV1jmllFJKKaWUUkoppZRSSimllFJKKaWU\nUkoppZRSSimllFJKKaWUUkoppZRSSimllFJKKaWUUkoppZRSSimllFJKKaWUUkoppZRSSimllFJK\nKaWUUkoppZRSSimllFJKKaWUUkoppZRSSimllFJKKaWUUkoppZRSSimllFJKKaWUUkoppZRSSiml\nlFJKKaWUUkoppZRSSimllFJKKaWUUkoppZRSSimllFJKKaWUUkoppZRSSimllFJKKaWUUkoppZRS\nSimllFJKKaWUUkoppZRSSimllFJKKaWUUkoppZRSSimllFJKKaWUUkoppZRSSimllFJKKaWUUkop\npZRSSimllFJKKaWUUkoppZRSSimllFLqh6n/B0UwE4w6dOEhAAAAAElFTkSuQmCC\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "noise_dims = 64\n",
+ "gan_model = tfgan.gan_model(\n",
+ " generator_fn,\n",
+ " discriminator_fn,\n",
+ " real_data=real_images,\n",
+ " generator_inputs=tf.random_normal([batch_size, noise_dims]))\n",
+ "\n",
+ "# Sanity check that generated images before training are garbage.\n",
+ "check_generated_digits = tfgan.eval.image_reshaper(\n",
+ " gan_model.generated_data[:20,...], num_cols=10)\n",
+ "visualize_digits(check_generated_digits)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "## Losses\n",
+ "\n",
+ "We next set up the GAN model losses.\n",
+ "\n",
+ "Loss functions are currently an active area of research. The [losses library](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/gan/python/losses/python/losses_impl.py) provides some well-known or successful loss functions, such as the [original minimax](https://arxiv.org/abs/1406.2661), [Wasserstein](https://arxiv.org/abs/1701.07875) (by Arjovsky et al), and [improved Wasserstein](https://arxiv.org/abs/1704.00028) (by Gulrajani et al) losses. It is easy to add loss functions to the library as they are developed, and you can also pass in a custom loss function."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "vanilla loss generator loss: -1.304018\n",
+ "vanilla loss discriminator loss: 1.597559\n",
+ "improved wgan loss generator loss: 0.169809\n",
+ "improved wgan loss discriminator loss: -0.012215\n",
+ "custom loss generator loss: -0.538480\n",
+ "custom loss discriminator loss: 0.080095\n"
+ ]
+ }
+ ],
+ "source": [
+ "# We can use the minimax loss from the original paper.\n",
+ "vanilla_gan_loss = tfgan.gan_loss(\n",
+ " gan_model,\n",
+ " generator_loss_fn=tfgan.losses.minimax_generator_loss,\n",
+ " discriminator_loss_fn=tfgan.losses.minimax_discriminator_loss)\n",
+ "\n",
+ "# We can use the Wasserstein loss (https://arxiv.org/abs/1701.07875) with the \n",
+ "# gradient penalty from the improved Wasserstein loss paper \n",
+ "# (https://arxiv.org/abs/1704.00028).\n",
+ "improved_wgan_loss = tfgan.gan_loss(\n",
+ " gan_model,\n",
+ " # We make the loss explicit for demonstration, even though the default is \n",
+ " # Wasserstein loss.\n",
+ " generator_loss_fn=tfgan.losses.wasserstein_generator_loss,\n",
+ " discriminator_loss_fn=tfgan.losses.wasserstein_discriminator_loss,\n",
+ " gradient_penalty_weight=1.0)\n",
+ "\n",
+ "# We can also define custom losses to use with the rest of the TFGAN framework.\n",
+ "def silly_custom_generator_loss(gan_model, add_summaries=False):\n",
+ " return tf.reduce_mean(gan_model.discriminator_gen_outputs)\n",
+ "def silly_custom_discriminator_loss(gan_model, add_summaries=False):\n",
+ " return (tf.reduce_mean(gan_model.discriminator_gen_outputs) -\n",
+ " tf.reduce_mean(gan_model.discriminator_real_outputs))\n",
+ "custom_gan_loss = tfgan.gan_loss(\n",
+ " gan_model,\n",
+ " generator_loss_fn=silly_custom_generator_loss,\n",
+ " discriminator_loss_fn=silly_custom_discriminator_loss)\n",
+ "\n",
+ "# Sanity check that we can evaluate our losses.\n",
+ "for gan_loss, name in [(vanilla_gan_loss, 'vanilla loss'), \n",
+ " (improved_wgan_loss, 'improved wgan loss'), \n",
+ " (custom_gan_loss, 'custom loss')]:\n",
+ " evaluate_tfgan_loss(gan_loss, name)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "## Training and Evaluation\n",
+ "\n",
+ "### Train Ops\n",
+ "In order to train a GAN, we need to train both generator and discriminator networks using some variant of the alternating training paradigm. To do this, we construct a [GANTrainOps tuple](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/gan/python/namedtuples.py#L128) either manually or with a library call. We pass it the optimizers that we want to use, as well as any extra arguments that we'd like passed to slim's `create_train_op` function."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "WARNING:tensorflow:update_ops in create_train_op does not contain all the update_ops in GraphKeys.UPDATE_OPS\n",
+ "WARNING:tensorflow:update_ops in create_train_op does not contain all the update_ops in GraphKeys.UPDATE_OPS\n"
+ ]
+ }
+ ],
+ "source": [
+ "generator_optimizer = tf.train.AdamOptimizer(0.001, beta1=0.5)\n",
+ "discriminator_optimizer = tf.train.AdamOptimizer(0.0001, beta1=0.5)\n",
+ "gan_train_ops = tfgan.gan_train_ops(\n",
+ " gan_model,\n",
+ " improved_wgan_loss,\n",
+ " generator_optimizer,\n",
+ " discriminator_optimizer)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Evaluation\n",
+ "\n",
+ "TFGAN provides some standard methods of evaluating generative models. In this example, we use a pre-trained classifier to calculate what is called the 'Inception Score' from [Improved Techniques for Training GANs](https://arxiv.org/abs/1606.03498) (by Salimans et al), which is a combined score of quality and diversity. We also calculate the 'Frechet Inception distance' from [GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium](https://arxiv.org/abs/1706.08500) (by Heusel et al), which measures how close the generated image distribution is to the real image distribution. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "num_images_to_eval = 500\n",
+ "MNIST_CLASSIFIER_FROZEN_GRAPH = './mnist/data/classify_mnist_graph_def.pb'\n",
+ "\n",
+ "# For variables to load, use the same variable scope as in the train job.\n",
+ "with tf.variable_scope('Generator', reuse=True):\n",
+ " eval_images = gan_model.generator_fn(\n",
+ " tf.random_normal([num_images_to_eval, noise_dims]),\n",
+ " is_training=False)\n",
+ "\n",
+ "# Calculate Inception score.\n",
+ "eval_score = util.mnist_score(eval_images, MNIST_CLASSIFIER_FROZEN_GRAPH)\n",
+ "\n",
+ "# Calculate Frechet Inception distance.\n",
+ "with tf.device('/cpu:0'):\n",
+ " real_images, _, _ = data_provider.provide_data(\n",
+ " 'train', num_images_to_eval, MNIST_DATA_DIR)\n",
+ "frechet_distance = util.mnist_frechet_distance(\n",
+ " real_images, eval_images, MNIST_CLASSIFIER_FROZEN_GRAPH)\n",
+ "\n",
+ "# Reshape eval images for viewing.\n",
+ "generated_data_to_visualize = tfgan.eval.image_reshaper(\n",
+ " eval_images[:20,...], num_cols=10)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Train Steps\n",
+ "\n",
+ "Now we're ready to train. TFGAN handles the alternating training scheme that arises from the GAN minmax game. It also gives you the option of changing the ratio of discriminator updates to generator updates. Most applications (distributed setting, borg, etc) will use the [`gan_train`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/gan/python/train.py#L661) function, but we will use a different TFGAN utility and manually run the train ops so we can introspect more.\n",
+ "\n",
+ "This code block should take about **1 minute** to run on a GPU kernel, and about **8 minutes** on CPU."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Current loss: 0.329137\n",
+ "Current MNIST score: 1.000079\n",
+ "Current Frechet distance: 344.763275\n",
+ "Training step: 0\n",
+ "Time since start: 0.033110 m\n",
+ "Steps per min: 0.000000\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAfkAAACHCAYAAAAV6SDhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsvVePpemRnbu2997nTm/KdldbkjMaYURgIEC3upB+D8+/\n0bV+gjDQSEORmjbVZbKq0uf23ntd5DzBL6kDkQcjoHmA/QKN7q7K3Pv7XhOxYsWKeKXt2I7t2I7t\n2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t\n2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t\n2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t2I7t+P/PcP1cX/y3f/u3m5OT\nE3377bfq9XparVbKZDJar9cajUaKxWLyer3q9XryeDwKh8Pq9XpaLpdKJBKaTqfqdDoqFouKRqOa\nTCZar9eSpHa7LZfLpf39fS0WC7Xbba1WK61WK63Xa6XTaRUKBTUaDY3HY/n9fg2HQ3W7XR0eHioa\njarZbMrr9SoUCumHH35QpVKRy+XSeDzWZDLRr371K+3s7Kjf7ysYDCoej6vT6Wi1WimVSmk0GqnT\n6Sgej8vtdqvX68nr9SoYDGo6ncrv96tQKKjb7apardr7jsdjZbNZFYtF9ft9TSYTrVYrTadTLZdL\nHRwcyOfzqVqtyuVyyePx2LMdHBxoOByq3W4rEoloPB7rw4cPWi6Xcrlc6vf7isVievnypYLBoDwe\nj7LZrNbrte7v7+VyueT1ejWbzRQKhVQsFjUYDNTtdhWNRrVer9XpdJTNZpVOp3V3d6fZbKZoNKrR\naKTpdKrd3V25XC7V63VJktfrld/vVzAYVCQSUb/f12g0UjKZ1HK5VK1WU6lUUiaT0dXVlZbLpbLZ\nrM7Pz/Xx40f5/X7N53N1Oh29fPlST58+1Ww2k9/vVzqdVqPRULfbVTwe12azUb/fVzweVzQa1Xg8\n1nq9lsfj0Xg81mq1UrFYtPUIBAJyu92qVqsKh8M6OTlRrVZTq9WS3++XJM3nc5VKJSWTSTUaDS2X\nS4VCIdVqNU0mE52enmq5XOry8lKJREIul0s//fSThsOhgsGgut2uPB6Pvv32W6VSKc3ncwUCAXm9\nXtuTy+VSs9lMLpdLxWJR4/FYV1dXisfj8vv96vV6CoVCyufzms1mWiwWcrlcWiwWmkwmSqVS8vv9\narVacrvdisfjWi6XcrvdSqfTdgbcbrdWq5X6/b6i0ahKpZI6nY6m06nC4bAqlYpev34tl8ulzWaj\n+Xyu/f19ffnll2o0GprP58rlcvJ4PFoul/J6vVqv1+r3+4pEIsrlcrq9vdVkMlGpVNJsNlOr1VKh\nUFAwGLTz4fV6tVgs7FlXq5Wtz2q10mazUSwWUzKZVLfb1XQ6VSAQ0GQy0WAwUC6Xk9frVb1eVzQa\nVTqd1m9/+1vd3t4qFAppPp9rPp/r1atXth9DoZACgYBubm40nU6VzWY1Go3U7XaVz+fl8XhUrVYV\niUSUTCZt/+7v70uSptOpFouFZrOZJpOJ8vm8isWiarWa+v2+FouFNpuNPB6PdnZ25PF41Gg05Ha7\nzY6t12slEgnNZjNNp1Ol02kNBgP90z/9k60/Z+Pzzz+3Nc5ms9psNqrVakokEkqn02q1WpKkdDqt\n8Xhs+229Xms4HCocDisSiWg6nWq9Xsvn89kZ3dnZMZs3mUw0m80UCAQUi8WUy+XUbDbVarUUi8W0\nXC7VbrcVj8cVi8U0Go3kcrkUjUb19u1bffjwQcFgUJvNRtPpVKenpzo8PNR6vZbf71cikVCn09Fw\nOFQikdB6vVa321UqlVIkElGlUtFms1E6ndZqtZIk5fN5rddr1et1ud1uSdJgMJDP51MqlbI1WC6X\nmk6nmkwmOj4+Vi6X0/X1tSaTifx+v8bjsZbLpVKplBaLhVqtljKZjDwej77//nsNBgOFQiENBgP5\n/X796le/UiQSUbfbVTKZtPPrdrsVjUY1GAzMPk2nU93f3yuRSCgQCKjf78vj8SgSiWiz2UiSgsGg\nZrOZut2uMpmMwuGwrcFsNtNqtZLf71exWNR8PletVlM4HJbb7Va/31c4HFYul1Or1dJ0OlUikVCl\nUtEPP/wgv98vj8ej//Sf/tOf9OHu/2te+//jCIfDKhaLevnypaLRqFwul/b29pRMJtXr9RSPx5XN\nZu1l/9W/+ld2gE9PT5VKpVSpVOyQ39/fS5K+/fZb+f1+dbtdeb1eBQIB+f1+HR8f6/T0VOv1WtFo\nVM+fP1cgENByuVShUFA2m1UoFNLBwYFKpZJqtZp8Pp9++ctf6vDwUNlsVqVSSfl8XplMRkdHR9rd\n3dV8PlcwGNT+/r68Xq+8Xq+eP3+uRCKhWq2mXC6nw8NDSVIqldJnn31mh2JnZ0ehUEj9fl+pVEqZ\nTEbD4VDT6dQc03q91uHhofL5vLxer2KxmAKBgDqdjqLRqF69eqVEIqFoNKoXL14okUio1+vp6OhI\nX3zxhTnQTCajXC6no6Mj/epXvzKwwDvN53PFYjEdHh7a4cnn8woGg1osFjo4ODDQlEql9OTJE81m\nM43HY+3t7SkYDKrf78vr9crn82kwGKhUKumv/uqvDOik02m5XC4Nh0PFYjGl02n5/X5lMhkzxtFo\nVH/1V3+l4+NjRSIR5fN5e8anT5/qiy++UCKRUCQSUblcViwWk8fj0eHhodLptG5vbxUIBHR0dKTN\nZqNwOKxf/vKXSqfTGo1Gts6Xl5dKp9P6+uuvJUmz2UypVErhcFjhcNgARTQaVTweVzAYVK1W03g8\n1tHRkbLZrKLRqA4PD5XJZNTr9VQoFPTVV1+pWCwqlUopl8upUCioVCrp6OhIuVzO1v3k5ESTyURu\nt1tff/21ksmkXC6Xnj17psPDQ61WK+3v7+vly5dyu91KJBL6m7/5GwNU7FMOfyaTMSf/q1/9SvF4\nXKPRSJvNRqvVyozs7u6uQqGQMpmMPv/8c4XDYW02Gx0dHeng4EDRaFSpVErpdFrxeFzlclmvXr0y\nwPHs2TPb96enp/r88881n88NZAI8nj17ZsAtFAopl8up3+8rEAjoiy++UCwW02q1Ujgc1mKx0P39\nvfL5vI6Pj9Xr9bTZbHR8fGzPns1mDbwUCgXt7e3J4/Eok8noiy++ULlcViKRUDKZNJD85MkTnZyc\nKBAIKBwOq1AoWCDw9OlTpdNpdTodxWIxZTIZLRYLFYtF/frXv1Ymk5HX69Xu7q4FGKlUStlsVoPB\nQJvNRplMRqvVSi6XS0+ePFE8HjeAnUwmNRwOFQqFdHJyomAwqGAwqLOzM3m9Xl1dXWl3d1evXr1S\nOp1WJBJROBxWJpPR4eGhvvnmG+3v7
ysQCOjk5ERHR0dar9fKZDJmu9xut8rlslwul2q1mgqFgsrl\nsvr9vkKhkPb399VqtdRqtcyxd7td+f1++Xw+tdttxWIxPX/+XC6XS8vlUrlcTuv1Wq1WS6VSSU+e\nPFEul9Pe3p4ODw/N4f31X/+1nj59qlQqZf+k02kdHR3p+fPnisVij85HJBLRixcvdHR0pNlspkQi\nof39fa1WK/l8Pn322Wdyu926u7tTIpFQMBjUp0+fFI/H9erVKy2XS00mE+VyOQP9JycnOjg40GAw\nUDAYVD6f12KxUCgU0pdffqlyuaxkMqkvvvhCOzs7ury8VCQS0Weffaa9vT3l83nbV4lEwvY2AOzg\n4EBer1fxeFxfffWVgsGgRqORjo6OlMlkdHt7q0wmoxcvXhhAOTw81HA41N3dnXZ3d5XNZlWv17Vc\nLg1opVIp/dt/+2/tXbA7BLc7OztKpVIqFot68eKF3G63JpOJTk5OdHx8bDYplUr9Wb7W83/Zd//Z\n4+zs7DfRaNQQEmh+Op0aIh0MBppOpxahTSYTiwKI6D0ejyaTiXw+n1wul1qtlkVLq9VKo9FIs9lM\n0kNUhuMcDoeGrjebjSFPn89n6HA+n6tSqeji4kKNRkOtVkuz2Uw+n88clySL8GezmTwej9brtaHj\nzWajwWBghnA+n8vtdisSiUiSRQHRaNQ+T5LG47E5zOFwqE6no/F4bJ9JxDMcDrVerxUIBLRYLDQc\nDu1dq9WqKpWKOp2Out2uNpuNgsGgwuGwZrOZzUe/37fobTweG0PQ6/U0HA7ldrvl8XgMeWIERqOR\nGS+3261AIKDpdGq/Iz0gcBCwE726XC7NZjOt12stFguL8iSp1WrZnDt/PxqN2vrPZjPVajXNZjMD\nIuPxWNIDgJQkv99vURRgjHf2+Xz2vERhrVbLHG+/31en0zEmhWjJ7/cb+mefdLvdRyzS9fW1zfls\nNpPb7VYoFDImaTab2XouFgtjqAKBgAaDgbFXHO71em2RzWAwsPkHyI7HY3U6HUmyd6xWqxoOh7bn\nVquVZrOZOahAICBJWi6XttbValW9Xk/j8djmMhaLWQTHXudZ5/O5ut2u5vO5RWiwAoBXovDlcinp\ngdmZTqeaz+fG5EwmE4uAmH8YHJfLpUgkIrfbbc/tcrk0mUy0WCzk8/mMSWm1Wur3+1oul/L7/QqH\nw2YDWCc+f7lcqt/vW5TLHPn9fttLbrdb0+lUq9VKoVDIzuF0OpX0EN1zfomIPR6P/R3r4Xa7bY9h\nV4jYqtWqrq6uzE4BTJPJpDabjdxut7GMBCwwGpz30Whk+4VnCIVCcrlccrlc5lx4d+wGz8I6ud1u\nm5tQKCSfz6fJZGIRMWdhs9mo3W7r4uJCtVpN3W7X7A4Oi33BugeDQa1WKwMdrMtoNDKGCYbT7/fb\n8+IHXK6HgLVWq9nz9ft9i3IlaTgcmq3n3Zi/fr8vv99vUffd3Z1arZY54GAwqEAgoOFwaGeEPbZY\nLOzssb7j8Vgej0c+n898itfrte+MRqOaz+dqNptm/1mn9XptbAmMG0wO9n25XGq5XNqZ8ng8mk6n\nur291d3dne2Bt2/f/j9/ytf+bE7++fPnv4FSwkAOh0NtNhtFo1FNp1ONRiOt12szsiwE1PfOzo4Z\nJDYR1GYoFDIDzlgul1osFlqtVppMJtpsNvJ6vTbRGOL5fG6RdKVSUbvdto0iSaFQSOFwWD6f79H3\n+nw+eb1eTSYTSTJ0NplMLGKaTCaKRqPGSmBEwuGwvF6vbYLpdKpQKCSPx2OHAeDg8XiMVv/jw48h\nxPD3ej0NBgNzUBhgSUbhz2YzRSIRLRYLjUYjhUIhud1ucxxOIx2LxTSZTIyKhAaFkm+325pOp4pG\no/a+GIbFYiG/369IJGKb2OPxaDab2fdCSzabTTtwGK5IJGLfNZ/PdX9/r0AgoEQiYQ6TNMRqtTIK\nv1KpKBAIKB6PazgcPgJVAEKcKO+Bs2M92EfO78cZED1AhbdaLQ2HQwNM7BPSNU5HDt3t8/ksuiJ1\nBVMSDAa1XC5VqVTMEDcaDS0WC8ViMQ0GA43HY2McZrOZARuPx2POApoYxkmSfD6f3G63ms2mOTAc\nFukqokYMGkYRB+Pz+bRardTpdBSJRIyCB+C43W5zpk5HvlqtbB9lMhmj2XHk4/FYBAJOOpxzw2fP\nZjM1Go1HoDQUCsnv9xuAH4/HBooA8hhkSQbyN5uNRqPRo58jXdjtdh85HNImHo/H/huwvVwuzUYA\nXklF+nw+o+2bzaaazaZms5mdh1AopFgsZk6p0WhYigFKH+czHA7l8XjMqbDvsWt+v98cncvlMhA4\nnU7lcrnM3v0xkIdpmUwmZlc2m429T7vdVrPZVL/fN6ADnU6qjDNOqg6wMhwOzRbzfMx5LBYz6h/H\nPx6PlU6ntdlsdH19rWg0qkQiYYEGAR1pRmwmTrjb7dqZImDENnY6Hfl8PgM1MDMEJT6fzwIKj8dj\n87fZbBSPxx/ZLsBiOBy2qB8AwJ6Ix+Nar9d2fgFnLpfLKHhAPQDM5/PJ7/drMBio1Wqp0+nYWf70\n6dOfdPLef4mj/peMaDSqWCymSCRik0auo91uK5lMyu/3W97j+PhY0+nUIgUOZCqV0nA41NXVlXK5\nnL755hs1m00zesvlUvP5XH6/X8vlUsPhUJFIRDs7O+r1embMmNRoNGqHNpPJ2MFarVaPnJPzULhc\nrkdOGgMFLU0+BuDC4sbjcWMAcL48dywWU7vd1nq91t7enur1um5vb+07QK8+n0+9Xk/hcFj7+/ua\nz+caDofK5/NG14LkcTLk/4nIeHeMGQcWag3QslqttFgsFA6HFQgEVKvVtF6vlcvlLOKOxWKSJI/H\no3Q6rWg0qmq1qvl8bhEZuVjyZIVCQfF43FD6ycmJodhMJmPfHYlEFIvFjLWBBvR6vabTIHcHiFos\nFpYD9/l8KpfLmkwmlp9LpVK6u7tTMBjU06dPzeiz1qvVyoAVdDPPzrsvl0s1Gg3L7RIJwDoQ/YfD\nYQM/6B+gkomICoWCpSMwfvP5XKFQSGdnZ8Y27O/vm4MrFovm/JLJpHZ3d/X+/XvV63UDbPP5XIVC\nQW63W5VKxQzOeDzWZrNRqVSSJHU6HaVSKUvZABgymYwCgYC63a7a7bZqtZp2dnaUyWTUbreNDsdo\nwTYFg0Hlcjm53W5dX18rEAgYde/xePTkyROLXFKplNbrtWq1moLBoNLptJbLpTabjUKhkNrttiqV\ninK5nEVy5JLJ4XM+ANz82+Vyab1eG+MUi8XM0DPfy+XSmL3VaqVoNKqjoyN1u11LDZKnTSQSln7h\nM3u9ntrtthl8AABr43K5VCqVzMnm83l7L7/fb2AYzQNUNjl5AIDX69XNzY39HdEl30E0jDaGeSZQ\nwSkOh0Mlk0nF43F1u13TWUjSZrMxtrHRaCidTqtYLJpWKZ/PG2ALhUIWBABWYWZgZcmTsx/j8b
gMKh6PG4Tv/ncQ/ShcSH4lWaVLksBlNj09bWNNQS+5+CHukTAQ\neyCLuUUJCRmfj3PKeWKfu8+M2AnJjcSSz9Nqtax9Rtzl7xPT/X6/7SUKhUAgYPuFWMpnIYmVPhGd\nQYv4nlzsxPCpqakn8lqX8EuRQU8f699+v28ICK1gSbav0+m0pEkRwQUPCksMxgOARJj7zHU5JdEh\nHrFXuAsoLNhHnBuX7Pi/rV/FPHn6wGjPY7GYVSDX19eKxWLa2toyze/q6qpKpZL29vZMOuJ6iyON\nY5PgEsfmh/UL5At00m63TZYWDAbN2pCfS9Xn9/uNaQ3D1yXtocWmd+8GyqmpKcuAXWkXP1uSJR6L\ni4um6UcHTAJDYrO0tGS9dYI/U59APNxkgYMvTS5T+lDAoVy+wWBQz549U7fbNdIYFybWsCANXBpI\nUjCr4R3yMzmoo9FI6XTa4HYCdSgUMgLkwcGB6vW6SaUGg4Fl1SRXr1+/tuokm83q9vZW7XZbL1++\n1Pr6uvU6uVhcY5qtrS1DbKQJZMk+gBsAqxtNN4GevjGBmIO8srJiECmXHf8XyeHt7a0lKfRrl5eX\ntbOzY45ykC7j8bj8/k9TsVx4jorSRQu4wILBoPb39w2G9/l8loTCB8Ech8QukUhYFUVyRaWCZ7jf\n77f+PPtNmlz88XjcIOZarWZnslqtGq+APUgyEIlE9Pz5c0mfzIWQZYVCIXPJ492zz100CqlYo9Ew\nljhxBUY4unA3obm+vrZ++urqqrUdCNBwGGgFuGcQlUswGLRgjckKSftoNFK9XlcymVQqlbKKj+RD\nmlza3W5XHz58sD1CcggiAUeFAoOEt1QqaTwem9zz7u7OJnru7++bF4QkS8aRi0Fe5D1IsrYesQEp\nG3wd/i4FUDKZtPMPMuP3+xWJRJ4YPxGfgKGBnhOJhLLZrF2YkUhER0dHqlQqJv9FB8+0zHg8blJf\n0Fl8KVAc1Wo1hcNhvXr1ytpItPyIcd1uV4eHhxoMBjaplEIDDguSZvwW5ubmjH+Qy+UsztPWYSgZ\nf48khkIJ1QJybt6z3+83aeLFxYUZ/uBJwIjsu7s7a4uCKrl76X9bvxhc/+bNm2+pojiAyOQIEP1+\n37y+yWbIbB8eJpPcYC2SEbVaLesRQUiCtQgET4WNZAt4kSqA6nVmZkbdble1Ws1aAMhcuIg57LCx\np6en1Wq1NDU1ZVp2fiYwDPpyzG4g5nBg+HwEjnK5bKYeJCFkhlQTVDt8JqA7Pnev1zOY76cVHVUq\nAxyQBoEYoFLgd7lVK/AUiQpJFTIWKj8CLZcQn4f+L86BZPkQKWmhDIdDOwA4DfJd6IPBN0BKAzJT\nrVatiuZ9+f1+MzoCziRg3t7emmwFtjvto4eHBzOAodKlqry9vTXWL7AaGnBkTVTEkmwAEBeFK5Uj\nSYIfwD6Zn5/XzMyMuXwR/JCx3d7eGmGT5IO96rqIobdnv7nDNnDgc1EFFx0jCcZJzx3kQtLD83O/\nE5Xnzc2NDWmiKuOd43eBHJP3CJTMRDPaEiTQkB2Bj0GMqDA5oz6fT73eZMATFzIJO/AvShT2mdu6\ncs8onw8kjwTbRSExSyERQ0/OfnGdGIkBfr/fPi+/h8/+8PBg+nJIpLQmKEb4zpwrWh2cUXwOkOiC\nHBJ3QDhpObEfucxJGhkehUqJOOL6YNBuYY8hZXMHQo3HYztTSC6J0+12W61Wy/wLQLBcKaS7F7g8\nab/w3EF/IPQyrx1SMu0PCINcxCQ2+KFAdOa58/wouHBxpKVEQsTn4wyQfIAekUhQvaOzdyt4+DCc\nu9vb2/8TXP+LEe/YrNfX1xZQgMwHg4FZHnLZUvFBEhkOh+ZhT9bOhSjJDvZwOHHwOj4+1unpqW1g\nt+/5448/mpc0ARGyFqYwPp/PJrvRe4MUQhuBSxdrRTIysuyZmRldXV2Z8UI6ndbKyopVUZD7OCBI\n1Wq1mmZmZvT8+XP7s4WFBUmyHiV9QjYi/VV+DkkOzn1k0BATr66uVC6Xnzj3nZ+fG/TGgXChbS5a\nCFmMfqQ/zwHATGdhYcG88pEqSZ8SJuA+rHiBtTDZ4QIiK8ZqlOeHJOrx8VFnZ2dmNXxwcKDRaKRM\nJmMHqV6vm40mwdSFOqvVqnq9nlKplCVbBHyIOQRjzEjq9bp+/PFHQ5Xo+/H8MbWhikc1cXFxYUZO\ntF5QdZRKJQsGBIJQKGQHHeMZeq7FYtH8uiEmEZDy+bz++te/mvkKlSGMb+YhjEYj40vgkidNKnna\nHsDa/B2fz6fDw0NNT0/bWF4klyRwXGRUdVxM7EnIt/j8Q8icm5tTpVLRaDTS9va2/TyGdzCgyO/3\nP7l06I3TSrq/n0z6ury8VD6fN6QN3ggBPZ/PW+U3HA4t0SSJkD5pwflz+Brr6+sWzLPZrAaDgREq\ncb3DfKVWq6lUKlnrp91u2zOFwOpq7ZGAVatVM0+Bk8LMhE6no0wmo0wmY0OmQqGQcYbwqqAvz5kn\nmSOBCwQCKhaLqlardsHOzMzYni0UCqZFh+MCZ4S24WAwMMkXTP/7+3uT5IE29Pt97e/vG6mQCpU/\nc90S6bVD4L27u1M6nba4CAHyr3/9q1l/gxCD4GBvCz+F5JLWrFsc0mKpVquan59XNps1zgbM/lqt\nZhPvYPxD0kQZhrkPRMmzszOFQiGtr69LkrWpIWaCCOBkOTMzY8gE/A+Ikz+3ftFKPplM6tWrVxbY\n0Hqfn59bNshc8Wg0qmKxqHw+b5lPr9ezTGhnZ0ePj4/a3t62nt78/LxKpZK+++47LS4uKpVKWTab\nzWaNxbu2tvbE6QgZFJc4oxTJMH0+nz7//HNtbm5a5ry0tKSjoyNdXFwomUyq3+8rn8+bIcWHDx9M\n10nfP5FIqFaraWdnx7y8XS/n09NTXV1d2eUEG9Tv92tnZ0fhcFivX7+24IC2uFarmfQFNij/fywW\n0xdffGGuflQbjMfN5XKmr3/37p3a7bbev39vVThDN2KxmEHC6LsHg4H1Gt+/f28XGe/i5cuXqlar\n2t/fN607PcCpqSmdnZ3p7u5OuVxO9Xrd7F5xKXv+/Lm2trasz08vfm9vz+BMoPJIJGIez/TOGH4B\nBIk5En1QV56D61goFFIikTA4cTQa6YsvvrCLkOmDOzs7Wl5eVi6XU7lctuSGimRra8vY5/TN9/b2\n1Gg0bFZ4vV43ktHBwYFpbqmOI5GICoWCSUJhm+O695e//EW3t7fmjBUMTmZQ39zc6I9//KO2t7f1\n7t07g57hwdze3tqFtLe3Z2Yh09PTWl9f15dffmkWwExyOzs70/LysuLxuKEDuVzOPl8sFjPULRaL\naWZmRicnJ3p8fNTq6qqRX5eWllQul/WnP/3J/MOpHjOZjHkmpFIpTU9P/AZAmc7OzozbwGAhzufD\nw4NWV1e1srIiaYK6RKNRVSoV3dzcKJPJmNcEEP7Ozo4l5CBVtL0qlYqSyaSWlpaMF7C4uKj
d3V01\nGg1j5N/c3JjrGYNOPv/8c/n9foNu8SVAb00CA6KAimRqasreDV4cyCpJLpC17e/vGy+p0WjY993b\n2zPNPTIw/Ar29vYUDAZtmuPs7Ky2trbsjKIgopB4fHw0Etjq6qp9D9ecK5vNan193f57CJ31et28\nBS4vLw2dOjo6svntVKu0Hf77v/9b4XBYm5ubBlMvLCzo8PBQBwcH5kfA0CzUEn6/X1988YWRl3ln\nh4eHpuCAIApKHAqF9ObNGy0sLNjzIx48PDxoc3PT5gcsLCzo9vZWP/zwg8LhsFKplFlOx+NxnZ6e\n6uLiQrFYzLTztCR3d3fV6/XMMbDT6SibzWo4HOrDhw/a2NjQy5cvdX19bS2bSqViZ+DubjJ0h3by\n7u7ur7eS59JgY5NBwoIlM6cCHY1GNlEJ+AqmKNXAeDyxdgWCp1/bbrclTcg5sEVxhgNSkWTyNWAR\ngo3rQoQWGuiK3js9GtiywLr4DJdKJYNv+e/oOTGUgZ/BOEjY6JDJqMCpEoGqQEUkGRuYvikBHbSD\nlgfVJFUC/x1wEhdMv983uRDM4+Fw4i7IZbu8vGxEF8h3rVbLCC/0eoF0O52O6VXpP+KFDdGEdoOk\nJ+QhYEb0s81mU8Vi0ao1fAH47kjKAoGAQXP03TgofAZ69FTImC9RffI7k8mkybTo25NVk+wQtH/q\npsc+pV0ACxtFAVA5REKcCCElUeXRWkJ+Q9U6GAyMoUvfHXQpGAyaxprPDfzH52s0GhqPx9b3xl2u\n1+vZ7+W5o1/Gb2B5ednQOfr/+BxAVOU7UVXz3Ov1usUF0D3O6N3dnfX/QRJc8i0VEkQrSUbkhNMB\nSQlCIjAxY6jH47EpB5aWlux88LP4O7xfLh3iDO2cdrtt7SoUBVRk/LfA5LQTINWSaPIPiBn/HTER\nKB0vBkx8+DNQHmBr2pTERZBIVAPuGUX2RZuIHjawOXuU1g3nCMKqJHt2JA73959mU4zH4ydul/xv\nzpQ7OAouEJc09wDxDnTGNcyC6R6Px+2OoG8PhwCmOzEJ/gAoJe6L0ifiMA6VnU7H0EoGylA4gOBx\nx5Bw0hrgHMIpuru7MwMq3iMtD2KiizRwTzGKncTh59YvVsn/7ne/+xaiF5enK/chsMEi5DLl4PJg\nXNkMxCggL2QOwM1cGr1ezw60JJNkuPANm/b6+lqnp6cGcUJS47KCwQykT5BCssSmJIvGhQsyHBcr\nfwY86hpY4II0Ozurq6sr61/h8AXxh2fAAIhSqWSfmwNMgIBxTWJDP7JSqVgw4zJ1FQiSLKmiPcAF\nK8lgXrgD/C6ydC5xgkYwGLQJTCRO6H7pI/IMOdRAbwwSwi/+4eHBJHTdbtf+DLSCCwAWLuxiPheE\nSd5jt9u1vdHpdKw6QZPLRT8YDGxwCtp3V4EAioIkiBYKbQJ6h1jkunA2vAkCDBPK+L4EGJJL2NJA\n7UCkyWRSjUZDp6endrYIovRsQTFAH/jcJKTo9XFxa7fbBgPzeQluBM9oNGotF8hCLlubISfB4Cer\nYZQNyLyoeCDWMTyGsa21Ws2qHdQiWIJyYXLBuSxykgISHFdBARLCnkMRMB5/soLlkiV2wQcg4eKs\n5fN5k/w1Gg3Nzc0pnU6rVCopn8+bNJf4BkEVtj1VO0UKZFqInVSTtCxBF0nwiHl8BrcFQ/KBaoci\ngnhAocMlC3JaLBZVLBZtpDcx2+UJuXJEEmv6/yQC+AMsLy/bPq/VamYwNDc3ZxA9sRYCJfeFJNub\nFIpMEYQX4LYG6vW6+Xbw89gjxDSeBUTERqNhKDCExvn5eWsZEVNcgu94PLbkk+QX5RUETlpvg8HA\nXEdB0ygU2cP1et1GGfOzj4+Pf72Od1999dW38/PzBv+RkdP34aXx4pAeQPxy5QRUF0D8BBBIMsD0\nbEIqKXf4AX9ONsyDxYyAC55DiDEOchmX/UpPB5kcmwiJBSgGQRSSHX11fj4wMZ8Xwp2rHGBjQhzh\nM3KQGA6DyxkTl9jIoBD8O7T0EAbpr8OX4FKYnp5+QgiTZAERtjYQJc9d+iSPo6LmvUN4oRVANk8g\nk2RkIoIXpB4ufb47lQWQMfwJ9guLDJ6snQqMy4gEA2Ij+w6Egt4rageeBVU3PxM1B5U3z5D9DvmM\ni4b3T1/TlRS65B0uKC4uOCUEej47iTGJEsQhSGB8bpjvBGjeL+eR/5/nAaTJvydQumeX3ia9UX4X\njGsSSmxOXe0935kE1u2Dc9Y4S1Td7vnF4MdNwN1ZDCyXWEkiCGrERQR5kODP70L3/9OzwZnncoMA\n6z4r2o6cAZJ9YhYkVJ63K4VDnUTBA1riEiO5fPgskiwmkYCBELixjwSJd+hW5rxzHDMpHrj4YKRz\nIbO3IfCC0vFeSeyRR/JsIT/jcU+xQHyFHyPJzjt7XZIVhcQdyIh8VhI8ztvc3JwN9IHYCDeAmEAx\nQ9uWZ0cixZlmf3I3uf/AD3DjD0RR1Dcki7xP/g7oE8nH9PS09vb2fr06eV50IpHQ+fm5OakR5IGG\n2CiJRMI2AVU8lx2BlwPExUKlA+Ma9ywqvHg8bpupUqmoXC4bcQIoB6kIFyIXGQmC65jFRl5aWlKt\nVlM+n3+yWTigkP8IClz4XKQEPfr5JBrdbtfMIQgAvV7PqgEyyeFw4hoHAQ+TGYLC2tqamdEAmyeT\nSUmyw+b3T4ZLoH/mcuIw4ZfO52+1Wk8QAvyxqfrpY3IZB4NBk1gxmwBb1+fPn1tF5CZsDC3BBIc2\nxtTUlI3DvL29tUSGfiLSKH4/h5vACurAgZNkpEQCCBUCCgVJ9i65gLa2tuySd+Hy0WhkwRiJEMGN\nBIC5CFSfIBW0DnAwhFjmsv1JkpCeshfgHPAekLKh1eYz4RswHo9NdkhiBk8D8yWSFRJo/g5wI5cE\naBZnAlY0+43WRyaTeXLpusGZ5y7J4Mt6vW6qlZubGyWTSeVyOUmTYO/O3kaZg40yLRhJ1jLrdDqW\nuKCPp8XAeUQJQBLEvicxlWSFgCRjYJN8DYdD8+Wn99ztdrW+vm7T5mgtoSNfXV1Vo9FQq9VSMpk0\nCSFJLr1izhGwL4kMiRGoDCZhuKihXUdxAcrpPrdAIGDPyNXHLy4uant725JNSQZTx2IxZTIZHR8f\nazQaKZvN2mfCdx63O9Ae1A7sF6ZLkhQ8PDzYuyHhkWTwPOfLNSsiZpCEoMyhZUhFD7I4MzOjZDKp\nwWBg/ItYLKZGo2EM+YODA11dXdl9IsnaDRBjQZ4puojJnEd4SI1GwxJZl+iH/JARvSSWU1NT2tzc\n1MLCgrWU2W8/t36xSv7t27ffcrAjkYiSyaRlLKHQxBMaiA/IBfiIudXYd2IVeHt7q2KxaJuYSoL+\nCaQOLnFIbKenp3YAcbLDhIQZ191u11i8tAW4XDjoBFQkW5DQqLr4/MzmdquPYrFo9pj06eip7ezs\naH5+Xq9evbKecTweV6vVUj6fVyqVMiIRmR72ol
zm1Wr1CZwGgYeAxbQj7BiR9VFJ8ney2axBn5iC\nYB1JlUS2XKlUVKvVrKKC6LK8vGwX9fr6uvX06fHhwkcgJgBStcFihYjDwCHkORCv6FfzWblAQV3o\nMwMLSrLnx2UHax1SJpwR3OA41CgByuWyDaBgn9KXhMgG8sSQnaOjI5PC8eympqbUbDbV6/XMghVy\nz2Aw0PHxsQaDgdbW1ixhBN4rlUpaWVlRJpOxlkEsFjN53+rqqmZmZuwiBjYnqeaZuY6T7kXHz6vX\n62q329ZPxVyHXncwGDRSUrfbNXXJ7u6uWbCSlPT7fRWLRTUaDXunVD4kEnNzc9re3rZLjCSUYUyN\nRkOVSsWqNJI55nlzUdDHJZElRmB6UqvVTJ1DnxSSI5cpSAgXdqFQeIJgcYlhEZ1Op20YDhcEHhRw\nI2q1ml3KFA2uYRJ7oFar2dAretF89vF4bDPo2+22oRxYbNMSoXLvgqBWAAAQN0lEQVSHXb+6umqe\nAOzBfD5v7HWSUmIfFxWtGGB9KlmGyXQ6HUMGsCSG7ApScH19bXbgKysrT2YznJ+f6/T0VOFw2M6f\nyzORZDwsZLbEv0Qioc3NTXNCpVhiKBf22JKemO5kMhlL6qm68W7Au4JElFaH643AxX5ycqJ+v690\nOm1IC7yNUqmkdDptGnmfz2ckzE6no2fPnimZTFphBqeIwTS0sn7VjneuExmmMzyIq6urJ7OSJZkG\nlAcFjETW7WZ8DK/hoqJ/SS8HKJUL+P7+XvPz8zYwwiVIACsTDIHKgBupAqkeqPb4eZCc4BrA3qcP\nyOfn5wEvSbKLggoZCBkeArIPoLfhcGi9WPpfXNS0CahGmHRHn7ndbj8x8XE/lyu3IhMmG3a1pvgT\nYPjjWvaik+UZ0cvkXblcCqoCF7Lif6N3p93gwsEgI7wfSHMEToIUz4PvyGUgTZAM13mL4EFlwL5z\nfx97mc/ttiio+knaqGJ5hnxO9NrwGEAxQBII4pwRoHcQLZ4fZjb07iHSufajJDkku+xzd3+zB+h7\nAuVCCnWrCaoU9r4kQ0SohvmukGElWc8ay9Jut2vPi+/EHoRIR6IBekavGISDgAlfAkhakhHK2Esg\nUnw+EAyCNJcnFybeHewjWhz0+unV8/f53IFAwOasz8/PW9FAr9j9/Tw7qmcuZQoCvjf+IsQN992y\nr91/6LWTkI7HY4sT8H3grpB4cV4xHQqHw4ZY8H251DlHvGOgaPhEnC3OAHsMeNpFOF2vDJAEiHT8\nHPYzsDk/j+dBywvDMWIme9et9IkxyIAxuyKmEy+WlpY0Pz//pM9OxQ5yzD8UdRjisJ9cG3YSRdoq\nxBCSat4nMYUWpvt7/i/rF6vkP/vss28XFxctywF6wrQfKJ5/D6R+c3Ojjb+PJ63VaqbhREu+sbFh\nZgdoJhuNhmKxmE0PCgQC9vMajYZl+tfX10okEgqFQiYNWV9ft40IrDIYDGyqGIQMhii0222T4HQ6\nHdOlnpycKBwO24AVMmSMFnK5nEnqXGJOr9fTixcv5PP5VCwWDcJiKMvq6qoRjtbW1tRut1UoFLS2\ntqaFhQWbVgfTOxKJWGXoMk339/cVCAS0urpql0goFFK5XNb5+bk9ZzYous2Hhwe9ePFC/X5f5XLZ\nZJDIhzY2NmzjptNpXV1d6ezszEbqdjodJRIJpVIpnZ2daTQa6e3bt6ZNpQ8tyfS/rk99q9VSo9HQ\nxsaGotGomSGhDUZXzyWWSqUkTaSFuHORDM3Pz1tQQct7eXlplQjsdZzKGJrz+Phog3pWVlZsPgIt\nDZ9vYhvMuGEusP39fd3d3ZmMFFONer2u77//3kZbUjkuLS2p0Wjo+vra9OKlUskSgbu7Oy0sLOjV\nq1cWZOLxuDqdjr777jutr69rY2NDhUJBwWBQX3zxhUl1gLwZ7ASBKBqN2jhMdNGdTkeHh4fmewC5\nc2FhwUZvgrSUSqUnsPnCwoJ+85vfWPWaTqc1GEymPr548UKrq6sG3XKmWq2WEomEJKlWqxnXAtcy\n3gccDQJzJpMxZjakKVwzaSfd3d3ZeOQPHz4oFArp3bt3dqEzLaxSqVgixntHLgnaMhgM7PtKMl+O\n9fX1J8gf8lAqWuYYAJGHw2Gtra1pMBjY9LHxeKx8Pm+VJp+PM8TQHZz7AoGAwc0+n09v377Vw8OD\nje9GyoYDJBf7/Py8KpWKnfnZ2VmVSiWrsg8ODtTpdLS2tmZkZr7baDQyVLFer5vLIgRQkhzIZPPz\n8yoWixqPx3r58qVGo5E6nY69m1KppFgspvX1dVOlZDIZ4zhghIbHQTgcttG9SIsvLy/NJXN/f99a\noRhG8cwDgcATt0laRVTja2trVv2TBBwdHdlI3aurK41Gk2FiqBvevXunaDRqXhckeDMzM3r27Jmh\nMhBuT05ODOFCYZbNZlUqlVSv122wGO2nubk5vX///tdbycM4lT55AadSKbVaLZ2dnanZbFqmDEHv\n9PT0SaVAgMcsR5KNj+WgA0W6mkNGoF5eXhosTxsA2cvOzo5mZibT1hqNhiUHaD4DgYBN84pEInag\nuLSp0DCsODs7M0SBiVZUAAze4KLEWIW+DLB9rVYzCKdcLltmzcXW70+m7zGdC3kTWeNP4UZpIvm6\nv59MaGq1WnbJ+f0TG9VyuWxBF6YrvTSqRKDGVqtlUDMmGA8PD/Y+YrGY6ZlHo5H1UDEI2tnZMRQF\naRfkJZ45UhW4BTBxqZ4KhYKhOMijQGUeHx/t8MDIRgoYCoWUzWbVbrct47+7m4ynZHgFIyxJ7nq9\nng3BuLi4MKcqGOeoIeiRk0iiv8d0hUqRS5lxyyAz9KcJGNfX1+Yo12w21Ww2rR1C0KtUKqb5bjQa\nZk3aarV0dHRk7Sz6jPw85KaDwWScMZUrLGYuTSxhSQQhb1arVXvnVGPAxu5kuVKpZJX1/f29Ba7p\n6Wn7rCsrKyoWi3bZckaR8n348MGqHp4fsCYJhYvULS4uqlKpaDgcWoVJ+6zf7+vg4MD2zOXlpTHP\nb25u1Gg0TNXCBdbr9cwIChlgtVq11kS5XNbi4qJdDoFAQPV6Xefn5zo6OlKz2TQzH6o2zgeMelon\n09PTduGHw2EbtrS8vPzEwW1qasq04RjuED/cgVCBwGRqJkS2Wq0mn89nSQgSWEm2D66urmzgDJ8L\nWTBQMtUqmv6NjQ07U+xlYvDMzIzy+bwhjBQ/eKMwfQ8/A6Y6AldTBDabTSMpk7g8PDxYC5Tff3R0\npFgsZiZntCaI1/AssJ7+6edj8BSt33w+b66Y9O6ZUOnyt5j8F41GraV1c3OjQqGgTqejZDJpZ4AY\ngmfG9fW1Dg4OjJ9xf39vrSv4Kj+3fP9/XNj/X9bCwsJ4aWlJ6XTaNOgYcnBZApMCqzF9Bx0rfStI\nRPRYsG50+ycQyPAHzmQyT3SdXPZcJPjaExwgfRWLRR0cHCidTtvsZpf1j+YceC0ajUqSGYbw32Cg\nAynp5
OTENgCfHxb09PS0yXJcHa2rSiCzu7m5sV7u9PS04vG4Zbw//PCDut2uDQPx+/1mBMLFBoyF\ncx5QFc+R7JfPRw9dkvWVsazk77LpYcrzzoAGge+BX9vttlX3yMV2d3cNUry9vVU4HFYulzPIzQ3u\nwKkoE/hu9PXpp5dKJevJYWQDdEYCR8DGkAeYlMQQYgwQIHAk++Xw8FDlctmkNoPBQOl02i5L5HvI\neZCh1Wo1qzBcljvPCN4HXANYzOxpAlE8HrfAiyIDORYEOddCGuvkbrerv/3tb1aJYr3L4CHaP8Ph\n0AIxg2Ag7EGQhRAlyRJayKYkZLOzs3ahwvAnsRqNRk/G9mYyGTNPQunAO43H4yoUCtrd3dXW1paR\nT1ES0BqgLYEqgb7/7OystfSAYqVP/hNA+kgGiVMkDqAYvA/iAhJADL6osCHFIZf8+PGjut2umfiA\npvA5XJ09rRqQIdjmxAL63vAM4Crh4Y4yaDgcmhc++4XYyhmlrVYsFk3aSksyGo3aGYXQB59qfX3d\n2hKQRVF+AHEjA+R9cIFNTU3p8vLSLshYLKZEImGJG+3eVqulaDRqKhD2HecKjhVIHAS4WCymaDSq\n3d1dlctlbWxsmHSW2EzLRfrkOYLSi3M1GAye+L7wrpDajsdj42HhL0BSBHoHbM/nJ7kOh8Mm64Yw\nS5G6sLCg//iP//jZO/wXu+S/+eabcTgcNlYv/XL6Omw07E2BI+lr82fIxtC0kv36fD4bDEESQVUP\ni5/e3fT0tB08NgqsSyotJHfYv2LXibSO3jz9NfqDuDJ1u12TYlBF0C+CDESfCx91+mn0RtGII1ui\njwOURdJBr4YDRSAtlUp6eHiwaXmSbMPSX5NkVRtwMwHPfebotzmYGPekUilzv6L3iR8+ciaCHQQ3\nDgifGd4FxKjb21udnp7a4B0+AxasZOT39/eqVCp2eZAkUS3hCgjzmB4X8Fwmk7Hg7wYSGL1U46A4\nzCGgd0yw5V3jsHZ9fW2yRBIQkhL4BK45h8vr4L+jz+6qDfj8SI3cpBcEgT2GFAzDGypvWhXA3HAl\nBoPBk/nbBGaIsPAhQBs4vyS3/AxaCO6lQitLkhHV3O/F52GPw+vodrvmMkgyzvcFFaSlcXFxYfCw\nK7WEB+Nqu+FCoOEGPZJkyQnvCdQFdz404MhVuRCJGxAA3TOKxBE+AGggcPP9/b0WFhZMjeOaKrk8\nDtjmrmyPveP2fJFiElcTiYTJfznfwO4UJZxrFzlzYzCtDjwS7u/vdXl5abA0bdZoNGp9b5cYTKyh\nSCLJY88iKaavTexzJ8TBKcGXA7ImPxc3R1qcFBPcLbOzs4bu0FpFUkuM5f1Jny55CjOXE0QyAVFP\n+jTq+ebmxoZckQRjioaskvfH/eFOn3M5TMw7oJj693//95+9w38xuJ6+MBcCG9LtwfJlIX1xeUGO\nY/O5FaLfP7E2dQMclwLZJzpRMieyRly7cAzjM8G0RjLk9/utL3h/f28kD5cQeHNzo06nY/AQc7mR\noBH8IG2BTkDwiMfjRvBwpTNobDnQVNmDwaeJSjwvzHKoRFdWVqz6kz65grl6bxKe4XD4JKC7lzzv\niGwTWPDq6sqGMZAYgFr4/X7F43E7tASGfn8yTQrCHv094HpMJaLRqBKJhP0MKnKybNoOVJF8R94N\nnzmRSBgPAa/uarVqMDyKDzdJoXp3deygJ1zEksyClqoI6ScXPC5ykowcxM9wGcUEFS4zUCAuiPF4\nbEERb3GgaSpWpHw8V54TiEMwGDSIk8oUFzo0zPRuUYlw0bkVx3A4NFSHS0uSIQYu+e3m5sZIohcX\nF3p4eDAOB9Uhz57fw74MhULmJuea9XAeUJKQmI7HY+t5g/QQJ0hmuORRJ5BQkPQSjPl5eHBcXl4a\nnwMSJSQ4JGbsMZJavhvnXJIhH6enpwqFJkOMUqmU8WEglXEu+dyj0cgqefYQifJgMLAiAmkl+5dW\nB2oWLmxiAh4ZrqkL0ld4H8wSAeJvNptmOEXccSWpIHYQEonrxGuIfSSmbq8dpGM8HluLkwSq3+8b\nTwNHP94jnx3uUDQaNXSAWEkbg7jIdEYqavdOIiaQVPCsJNl3xYyK/0aa+C8ArxMX2cMgcBSQ/C4S\nQJIc3h+8oHq9roODA0sKveUtb3nLW97ylre85S1vectb3vKWt7zlLW95y1ve8pa3vOUtb3nLW97y\nlre85S1vectb3vKWt7zlLW95y1ve8pa3vOUtb3nLW97ylre85S1vectb3vKWt7zlLW95y1ve8pa3\nvOUtb3nLW97ylre85S1vectb3vKWt7zlLW95y1ve8pa3vOUtb3nLW97ylre85S1vectb3vKWt7zl\nLW95y1ve8pa3vOUtb3nLW97ylre85S1vectb3vKWt7zlLW95y1ve8pa3vOUtb3nLW97ylre85S1v\nectb3vKWt7zlLW95y1ve8pa3vOUtb3nLW97ylre85S1vectb3vKWt7zlLW95y1ve8pa3vOUtb3nL\nW97ylre85S1vectb3vKWt7zlLW95y1ve8pa3vOUtb/1y6/8Bp7OYIYX23bIAAAAASUVORK5CYII=\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Current loss: -2.758185\n",
+ "Current MNIST score: 1.797050\n",
+ "Current Frechet distance: 213.724701\n",
+ "Training step: 200\n",
+ "Time since start: 1.144458 m\n",
+ "Steps per min: 174.755270\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAfkAAACHCAYAAAAV6SDhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsnUlvZMl1708OZM6cWQNraLWk7oUtLbzzx3vvyxnwzjBg\nwJBbemp1sQbOTA5JJnN4C+IX/N1Tt7rbC6NkIANIVDGHeyPO+D9DxI1YjdVYjdVYjdVYjdVYjdVY\njdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVY\njdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVY\njdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjdVYjf89o/G1bvzdd98t5/N5zOfzWCwWsVwu\no9FoxGKxiMViEY3G49QeHh7Kb/hOq9WK5XIZi8Ui2u12+bvZbEar1YqHh4fy93w+j9lsFq1Wq1y/\n1WpFu92Ou7u7mM/nsba2FovFIh4eHsr3fK/19fXodrvR7/djPp/HZDKJu7u7eHh4iMViEc1mM5rN\nZkRENJvNWF9fj4eHh7i7uyvriIhy39lsFo1Go9x3NpvF2tpaNBqNeHh4iHa7HWtra2XNzG02m0Wz\n2YzlclnW1Gq1Yj6ff0a/drsdnU4n+v1+9Pv96HQ6cXFxEZPJpHwHWvC7RqNR5mAeREShie/LOtbX\n12O5XMZyuYy1tbVoNpu1PIQ38/k8Op1ORETc39+X783n80LztbW1Mv/FYhG3t7cxm80q8gHfmd98\nPi+8j4iyPvgxm82i3W4XXrVarYiImM1mhXfQst1ux3K5jOl0WmTM12Me8GOxWFT4j7xcX19X5j6f\nzwv9kNF2ux3T6bTQ07LdaDRiOp1Go9EossM8kB3kzzownU7L96Btp9OpfAbfWe/6+nr0er3o9/vR\nbrfj5uamzIvvcT3T7/7+vqLD1gd4wf+bzWaRMe4L39bX16PRaBSZQLYjosgLdGFNXH9tbS263W4M\nBoOYTqdxfX0d0+m0yDK8YQ2dTqfcl7kuFotYW1uL9fX1cm30w2tircvlMu7v72M+n0ez2YzZbFZ0\nGR1YW1uLtbW1sg74hux0Op0YDAbR6/Wi3W7H6elp3NzcFLtoOWA+6FHWUWQWOwb9oOVsNqt8hqw0\nGo24u7sr+svaO51ORf4yzbvdbnnNZrMYj8cVHYXmeX5cj2uiA7Y16Ap05rqmf7vdLjqwvr5e5o6u\nWGaRO+7nuV9eXsb19XXgj6yj0Ai9eXh4KNeBv/CUuWPjmMN0Oi3fn06n0Ww2i+1vNBrR6XSKvMBr\n09myiG3Z2NiIXq8X//Iv//KLPrz9S1/4nxowEqfDglAmCMX7NjDT6bQwH+bZOfM9OzMbw4goCg4I\nQKi4noWT+Zr5Dw8PMZlMKp9zfYwezsOfY9wQBu7HmhisDQdvAeOF4WCt/j2fra+vx93dXTHW8/k8\n7u7uKkKZgRO0hR4YMH5jx8ocvXYUI4M4zxPemsdee6/Xi2azGdfX10WpZrNZ3N/ff0ZvgxuuyfoB\nWzbKBmK8b5rb6CFvGI6IiNvb24px5RpeI3xjfQ8PDxVQaOeB02XezMPvm+aWT98XowSfmQPfn0wm\n5fsGS9npXl9fV8Dk7e1txYm32+1ot9vFwfl6zMHXZt6AFMsENGGdlhE72SyL1skMcqEHvLes275k\nmePe1itA3traWsxms8/oh/3IRhmdsi5yPd9zsVjE+vp6mV/WUWhkvmPnDC6/JIv8xuDi/v4+2u12\n9Hq9Qj/uO5lMSjByc3NT0Q1onPXWa5rNZkV37FQtz5YNvudXBu6WBwMb29f7+/sSICHPDjIIHADS\n1hWug41F1hwgGHQxN+iS5S4DE3TLsoX8wXPmg+zwe3/Gbzqdzmf+4udG65e/8j8zdnZ2/k9EFbU5\nUkKYs+GzEfH7IGZQuA3q+vp6dDqdWFtbqygo38/RmEFDRJT7EtHVKRkRbafTKQjODp37gzaN7EFn\nIDru1e/3Y319PWazWUyn07i/vy/rBbkSCfl6ZAVsGBASaIwQ+XqsL9Pc4IjfRESJ/rrdbkRE5VpE\nJggxRoQMA3PkelY8aG4+oHjQ1lHO+vp6MaiOcKAjkXfOFBDVI3sRUdbjzArzgb+DweAz+jnCtQwZ\njEY8GXxkgjkQZTm6hwYGWDgcAAcyjtGGtgat0ICoAxqZvzaw8MFzt4w58rKjjogi43Zw8JdInfdN\nd2QFGfP7zD07AIN53jNdGKY5IJ2o0nIOf52ZIuobDAYVHQXE2y5YP9rtdgwGg6JXBtDInzMwvKbT\naeFb3fzgDaDd8meZzTrKmpx9GwwGRc6cScIuWj/goecM/zOvob8jbtaLvGCrAIyWWesAc8jZAdYM\njXi/1+sV+eMz38cgiTkgh3au6APZL/sp6NrtdgudvF5kjN84E4YcOoiE5tjMiChy4OvxXQOdk5OT\n/xu/ML5aJB/xuTN1pAARQIkZnbJYOwUYaSHHiTrdV8cMo23uwUBB7eyM7i1k2WAyrNggQ9ZK2qjR\naJTsAO/jGFkv83E0aFQHHZgT8yP9n7MeNmZ2lHmgaJkHKBDzgqY29I5KWLPXY5o7KjdP7JCMrKGt\nlR3eMZecovaa+L6BBs6IiN7fJy3sLABzYw6WMdPcss78oCtr5Ps5k5SvZUMLzwENGHpH8XWRQo4E\n5vN5TKfTsn6DJctzXlsGDE69OtrKhs166KxQHS2Q6Zyx8rUsL6Z5ziygo6yF/xtAeU2AVsAiES42\nwfSE5vDDGTI7S9azXC5Ldor5IBusx1G0AxJnN0w/y6T5zzz5O2fvTAscDwET8/DnBDIGb9Advud5\nZ7tjeSaj4OAKntnOmoder2WlLpNqGpHZZe6miXUUXjiDxL0MJuyn6gBonb55YFscwORsK3OfTqfF\nl/ya8dXT9Y5O65Q3G3CMVMQT+jdIILpjGCkiLI7enKo3Os0OIUfF/O3rgr4ceXJNECVCiDOhZm6F\nwqhMJpN4eHioREfc2xGrUShrysAJZ+xUoFN4jiL5riM01uABX0hpAlQajUYlbQ8PUOLb29vPQION\ntLMtEVXgAv8cPdzf35e/UXB+hxF1ecDzB1AaqFDjY00oN/eCTwaH0B1Aled4d3dX5s66+TviCQQa\nnDnDZRp5DTZOyGl26kR86BIyDE0ycLNRZs0ZyFDCQL4MTDLYY33cA5rboCKPyIplO2c4zEuDxzwP\nwK+zZ46O7XQMzAxsrDu9Xq/wk0j95uampImdiv5SNs32wFksO3/sIjYEvlPPhe+OcFmf14ZD8Hds\nPwArljHuwz1Yv2WR39gOskanmLkP+mJZtxP2tchcMndsvPswvgQUfD/kEJrbQZt+ng9lVjK+6KGv\nz0AubUuQb+aF087lY3/PQQ/ZFGey8
rUzSP0146s5eafrMqLLKD6nhxiOHHCcNnA4d9JPZnZEVJqK\njLItxBZkp5vqIgqnfeqi3rymwWAQw+EwhsNhiRJI2wwGgzg7O4vr6+uIeEqDZmOajXRd1iFHAHlu\nEVExUJ4zvzHtvR7/bR74nuYhtM5RBPTLwpsBnJXUAIHveG42SP4OBoY0qmXPc/cL2eFfKyPXyE7T\n98tzd9Sa6cg1srJnmhug5GY0z8HRjulVx0PLF47Iv3fEajnMTpvIJNPWEaT10tkmSgo4dwNmy4N5\nnq9t0O61WZ5NB/Pe9CWtPRwOY3Nzs0LjRuMxnYtDzut0RjLbENfxs32rA178hr8tFwbIdZEjspkj\nTLJUmV4OfrKc+nt5XeaH9SqDa8teDqbyd3xthuXAIMxlkSwn5pkdJVk+6xrz5/d21P4tNDXYN7gl\n/Z6DF9Ov1WqV4OhLdtY2PYPGXzO+mpN3cwYC6ToGRM1GPTsY15xc042I0lVOqp66p1G006sQMuKp\n1pmNPlEZQmMjbwEDxOTo3fPGcIxGo+j1etFoNGI4HEa/34/hcFh+Ox6PC2CJqBot/va/dYaC9F/u\nDs004/2cHcgOwULva2C4QbGOPnxtR4U2XEbNVnD3NzilmQGE520QyQt+ZvqYR+6m5zcgdzIrNOlk\nhWcujgRcKjC9PI9sYOyssyPL8uSOXKILl4TsBJgT9LPumZ5cwxFQNnzZ0WaZ4F/3TLgPZj6fx3A4\nLPdy5uX6+rqSorVzMu2QSwP93L9CRG/eWOcdDLg3ACC+ubkZ29vbsbOzE8vlspIh2NzcjPv7+7i6\nuoqbm5vPUr7ZgSFXjgINQgGPNvBfAtuZHznCqwO/lgfrHnx2TwNy6CYx3s+ZDubO33n3BLaQUZcF\ncCbCeug550AEmYV31pccuLDzAR7zu9vb29KUaroyMmAweLMfaDQa5fqdTieGw2EBD6wFgEyWgqzt\nfD6P29vbSiBkHbVvQpadsf658dWcfKfTqWwLi3hKy9I8B3Fs2L24xWJRnGK3263U9lwXRoCcDnEd\n284kOxenES0YGBIrqMeXogMbO9DexsZG9Pv9aLVaMRqNSoOP0/V2Jhm510U5vG+jw/9t5BlZaLmO\nacL3rKB8jrBubm4WUOWu9Dw3lMKCy78YBDuNtbW1irzYAGbDlqPO7JAMBMwXN+jZwTjNZhp5/Z5H\ndpjQPEeOWV58Lzt2O1HuYyfPv9DHkVgGEple2ZlgwNAzjHzOKGSd8XqbzWbFke/u7ka32427u7si\n75RYdnZ24v7+Pi4vL6PZbMbd3V0cHx/H3d1dBYSYbp6/wVvO4FE+s2z49878YZhpumR+ZNqGw2F0\nOp1YLBYxGAwqDmKxWMRoNCqG2nw2/XNQAc2xU+ZxXcTs/2dH5L+zo8N51kXEddcxQP1S4EDW0SVT\n6ObaufUkl0msH7nEZV8AnwCu0DBnjrKNycP0sHxCd+TlS5kuAJr1nTUiD/iijY2NePPmTSyXyzg/\nPy9NerZ70Knb7cZ4PI7379/H3d1d3N7eFpCb7WJEVMDMrxlfzcl7nyDDTrTOIFoZWfRgMIiNjY0Y\nDoeV+k12+A8PD3F7e1u2vplIFv4clcDAbNC4vxmRHXpeF8bH0U2v14utra2SsvG+bO+HjHiqCTsb\nwfVzxOCI2YL4JbTvSCkrNL/J1/OgZjkajWIwGESr1Yrr6+vPsiM5+rMRtKMzP7g+Ql7n5LMBzSm7\nOuDC/3HKjhr8u5zl8TUYdQ6vzjBkYJGjtTq5ND3slAC7zLmODnXD98r8NIDLWZ7sbHOKNiIq5Sb0\nZGdnJ0ajUUyn0xgOh7G9vV22NW1vb8fd3V2cnZ1Fo9GI8XhcHD5r9fW/tB4DgKy/0LBO3+E/NXZ2\ntEwmk2g2H7u1edFom8uCOLurq6vaMwXMk3yOQ45os05YDi1jOf1tPtaBWgdJ5mFOH2d9zde0TmWw\nmmXJ13W2ze/j4KnFA0bckOyg6v7+vthc/IC3VLO2rEsGzJZ708X6k7+frw2oMQjHXm9vb8fz58/j\nu+++i4iI09PTklH2i3Mo2u12HB0dxWKxiKurq2i3q9tTGS5nfgnI1I2vWpPHyRGxRjx1cYNWaFiK\niBgMBmVbCpE5jnN7e7tEwz4EY3NzM/b29mIymcTZ2VmMx+PPmom4p+v1KCmC4Bou/3eTCN/nN7kB\nhK5sDuHAQPR6vdje3o61tbW4vr6Ow8PDyuEObN95eHgo6G6xWJRaIE10TvH55X3ipER7vV5RGhAx\n9VNS7CiOmxbn83np/s+o2wrS7Xbj4OAgTk5OSqr6/v6+pO6dunJU48idCNJGkeyH9/yDkMnQMIds\nFK28dsIRUXg2nU7LNrRcx2d+NoCOWEyLPJrNZvT7/ZhOp3F7e1v4Tzmp1WrFeDyOq6ur8hvuZdBF\nUyZ0wMjQKEU2ywc1MW++HxGVvePmBbKNHiBT3W636CgykZ0K69ze3o7d3d1CRxoYR6NRHBwcxMbG\nRnQ6nTg7O6sYuRcvXpT5sG/cUQ/ZENuJzEeAL+lQnPdkMqlEQc6KIRPYkNFoFGtra3FxcfHZ9snl\nchmTySROT0/LgT2s9ebmJu7v70sk32w2y2EylPjgk0G5gwXTvtfrFaeWddTg07JCtJudukEQMsX1\nyGpCb67HVsyc1SICtm6gd1xnbW0tJpNJWYu3fOadMbZz/KbVasXW1la8fPmy2PSrq6u4vLyMs7Oz\n2NraKiXNyWQSHz9+LDJPkIf8Ou2dm0Cho8sJDp6cmWJ9gNNOpxPb29sVHvb7/dje3o4//vGP8f33\n3xcAu7m5GdPpNDqdTvz+97+PjY2NaLVacX5+Xg7i4fCvDx8+xLt37+L09LQCrHhhn8kq/ZrxVSN5\nG2MbYqfwI56UgpSwt5bhOJ3Wz4gQgaUL1oKZ71EXFXiA3LNT9aiLKth/3ev1iqHsdrvRbD52pGI8\nPn36FIvFokTFo9GocmodSl8XTZmWEdVubKe3MEIZqeZ1WMD4Lv9mmvG39/xj5H0fvudsSOZDjmDq\nMiv+3POt+23OsMBDTnfDGEN3FAijCY+QMdMFmeAe/n+OdACv/IZo3MDBwMTRPr/Bcbm2HfHUs4AB\ncureGaQvGYYc+ZnXpqOBTc5k4CjfvHkTW1tbRS9ns1n0er04ODgount5eVlOput0OsX5D4fDGAwG\nZT96NnJZNm07shz9XDYQgz6fz8vJirYlAI0M3O/u7mIymZTdIzgWH6SC7cH5/lI0ndfE77AxXMvg\nJ/OxLoq1DLLuL2VmACPoMLKVm0v9ffTXWdN8b/PEa7ZMo4vYqdFoFC9evIjvvvuu2Mvj4+Mivxsb\nG8XRj8fjuLm5+SzDYDr7vtCiTndd88/ZCOv3+vp6mSNrns1msb29Ha9evYp//Md/jN/97nflNL35\nfB4nJyelTIC94Z7e/eKeJtufugAu2+svja/m5EGq3iYBggKF4wxzra3b7Ranxf5VFBAn2Gw2Kx
Ew\nxtxRaRZA5oFCRFTT9ShAu90u0ZAdEN9HGLx9D+ZiSGAiAuAXCoZA7e3tlYgVsOLDftzcElE1cMyf\n+7GFDcQLDdz4VLemiKeIxulwOz2aCeknAKx5Cx3XM+CwgWH+OR1KxodarfeU5k53rpEdBNckqt3Y\n2IidnZ2Ks2Q+Plvh9va2sg8WI86rblcB98IwQQuuiXxTB/SWKebh61n24IN3W1AeQbdyhsnbyOAl\n9M+GzPpAihn+2jFlENvv92N3dzfevHkT33zzTeU4YbIZ1K2vrq7i/Pw8Tk9Po9frxWKxKHK+ublZ\nopy6My+ckbFBzlEcMsg6sCfQDoc0HA5jNBpFRFSclmvCZKM4oRNHSDaNZlxnX7i3I+2cPnb/RuZT\no9Eo0Sc2hLXlRso6gJj5Y5uyWDweFU0QxLzRN29V9DUBLHbyyAqna/rkOT53kx/rQN6w49Bvf38/\n3r59G99//32sr6+XrCLX2Nraiq2trWLTxuNxoUXOYCEPubRiG012kOOnbRft2MnIDgaD2N/fjzdv\n3hQdnE6ncXBwEH/4wx/i4OAgdnd3yzpPTk5Kc+bh4WGZ76dPn+L8/Dym02mMx+M4Pz8vx+sio/l0\nV5cW/+4b70j/gZ5yRJa7Fm2AUCAU6+7urqThfSwhzTB2bk7/G6HWpWQzUuI6CHJGy/zGL5QctOYT\n+ebzeTlC9PT0NC4vL+P29rZEkig46XNSWdRscG65mccOPqLaJY1ByHVr88GpXKeGyQzk7IudBNEx\na0ZxHIE7A8EwrbOBWi6XxdGCfJmrGxG5j+djR9xoPG55Go1GBXXv7u5W6nz8330dNvRuaKuTkezk\n3aCZm/lubm7K/yeTSWV/so0/zp1T14bDYTx79iwajUbZpx0Rsbm5WQExrnM67W+98sslCMuNI1Eb\nR5dXiMb39vZif3+/6B6p8vl8Hjc3N3F5eRnv37+Pn376KU5OTuLh4aESxeOQqOuzPs/BsmcdZv4A\nfoNeMhmAbPTQGRqfMgZvkFuA2N3dXQFr/X6/RPY3Nzel54f55MY1eJtlx4GBZSzLcdYV66t1E7sD\nn/r9fmxubhaQgRPDSdrxuzcng2bsBHbSegAYrAPbnneWPebEIUwbGxvx7bffxtu3b2NnZ6fY9ouL\ni7J7YTabxdXVVSwWixiPxwXMWRawkxx2YxCebX5uxLZTtQ1E9vb39+PFixfx/Pnz0iPFNktnEm5u\nbuLDhw/xpz/9KY6Pj+Pm5iaur68L/c/Pz4u9h1/eKm2Qx8h+8teMr7qFrs4wM2yMnK4wU4hIHZEg\nMKTjQKqLxaLUxYxEnfr6UnrXLw6R+CUn70EUgBKA/DEq8/k8xuNxqdcSOe3t7VUiA4wLBjOn4+2g\nPX9/LwMCf8e0x1A4E+ESReZVRPXBDxgrQJWzA45qTNscNeDsLC82ZhFPB/SwNl8zN9SBwnd3d+P1\n69dxcHAQW1tbcXp6WnFGzjhgFFFMZ1Asl9nxY1ToHbGT57c2TLmmCN0zOKE7fX9/v9DDxogMBGsm\n+vRZ/Hb2mR+WH/c5ZAOas10c90sTbLP5eFb+eDwu16R++uc//znevXsXl5eXJXWJw6SvgAf8AG4t\nZ84C8fIODmjOcFSOg0eeLUdk52jOtY3wnn1KUuwYoBafo9hs15hHpr3l1AFOBgaWMdvHfA/zB5nf\n2toqtoT7eKcKGYR2u116Gviuyzx1Tpo5GRTW2XSGS4f8PZ1Oi1wfHBzE3t5eCWYuLi7i9PQ0rq6u\notvtxu3tbdze3ha6G1y5NAEd8+4evofNdq+H5cw2yNcdjUYlY0nmGF3ggVTdbjeurq7i3bt38Z//\n+Z9FBx30cBjRzs5O7O7uFp12JsFlbdu2/874qlvockRsFOkUfY7giXzs5C3cfD+nhG2EqRVyXTtq\nDxTAKd2swDaO/hvnjuOYzWZxfn5eicD5bo64mCPd99x7Op3G+fl5XF1dleZBBoba66E04NSm9/mb\ntjlicP0yX7dOEdwshsEEoRJhZkMNT+1QIp7Sf46mmbeNgwU+G04iFNLBrVarRPDffvttdDqdkta9\nvLws8jifPx7MggHPe/9xojwtjLlAjyyTGE4DCM/XtHQd13ttSYfT1wFPKF8h54PBoNKxDl19xC0G\n0MAQOtZFmnVzNkiDx0Tul5eXcX9/H+fn53F4eFh+3+v14vj4OE5OTspT4prNx8ZBO0g3/LEtC5mx\njue0Ns6b5iiuw/zQMwAMa/e57azRD0KizwHn7ogXvbA+EZlR7ydt6zMj+C6H7bgXwCDF83YEmlP0\nEVHkwPoGOAQgIsez2aycv4HsTKfT6Pf7n5Ukms1myXBZZgCj8IQAJPc7ZRkzOHCGk1p7RJT+pMvL\ny1LGJOo13XH0BBgGuLZr3BN/Q8+Q+1UMUKzDBF7w4ezsLJbLZVxdXZVsyPHxcZydnRUbhe4dHh7G\nX//61xLUeYvhbDYr2+4ajcczUgBXdb0Q8Ju+ob/7dD1GJqK6RcjOgpEdTMTnTzvK9TMcLESLqEaI\nGDu/b+edQYHnYdBgZ+/3cpQd8XQOvx08ow5psu98e3s7ms3HPcSXl5fRarVKaojMBffNKSmvw1Gw\nU592ihhrBpmQ7EwzakYZxuNxhQZOd+F03R9go1AXHVjI6+ibI2oG96IL146GksJy+VgGuLi4iJOT\nk7K10ql23ltfXy/pcJeJsiJ6brlL13PnPv6b/+O4qRWTOn54eChbudChu7u7YrBo1uSAGUAK86iT\nZ//t+XtOnrf11XMmRX5+fh6fPn0qhu9vf/tbqXP3er24uroqKVY61Oki5wQ2R8VOJdelpHEUHvDP\n8pTnDf3sLHNUiu5mR4wRJppkfU4JY7xxrOa3S3g4Y4CIATNzrEt/u3zCNfr9fvkNA7nww1SYJxkX\nrg19yaTg/HCiORCrA9jQ23avTj/5DDnf2tqKjY2NWF9fj9PT05KppX+DHQ0GFc42kaXZ29uLVqtV\nImeXCEndYzdsa1ib6RoRFfsG39l5MZlMyql2BreTySS2trbi/v4+3r9/H9fX13Fzc/PZE1ehFzQf\nDodlh4110LbD+vlrx1dz8tfX17FcLktaPaceEZT8kA0cDltWEGgQJ9EZis/Z4xY4iGaEnwnolJu7\nXDEMCLsP3Mid0k6H+RhQRw5Z0BwpUePc3t4u0SW16Z2dnZLWobMXgfA5A84QOCXsdeVMhYWo1WqV\n54vf3t7G5eVlccqgZiL029vbODw8jJubm7K9xBFWRMT29nZsb2+XVC7RMIrn7AuRHQicuToDFFGt\ng1v4u91ujEaj2NjYKDVvwCFZoMlkEp8+fYqPHz+WCNCdrShmr9eLvb296PV6cXl5WdLQRJKO7ByF\n4ZSgkbdW5m5p+EQNlfMTKOWQRYBP7MigifA3v/lNDAaDkukiWoaP0NX6ZseEzJK6RIdarVZl3vze\nDWnI3tHRUdHbjx8/xo8//lh02idP4tjn88dtkACqh
4eHOD8/L01IOBzk11E9p0SyJpp4WZfPnjAY\nAyQxD3SG66Ez9MKQlqXTmyh4PB6XCM+nbmLHdnZ2ypPe0Bnqt4PBoDx/3RkJZImmPveA5DIAa+r3\n+6UmTL0am8AuB5q4eHQzwIXMnhvPer1eAQDIA/N347JlhowMNthnkcB/6OBsAFvl9vf3o9/vx2Kx\niL/85S9xcXFRrs1hSdgAdhsZoKBPL168iEajER8+fKjIBr4DcOV+DK5h8IwdIXvTarWKA0du8UPw\nFbm9v7+P7e3tODk5KfrJOgwYrHt07J+fn39WjrJtm81mpY8Lm/NL46tuoUOAjFyJ+hAQ0LMRMUzK\n0TYpJ8CDa2cYRyNUKw8pNzsMp+d5n8jUKMuoy9GCnaajPdd8ec/gY3NzMw4ODuL169fx4sWL2Nzc\njLu7uyLcrgOxNtPSBgGjgSFz5O/GKa+XDAfI32vINbrsjEHL7OmH5kQwGD12GrClykKfX6B3dhW4\ng9oywP0NFE1nrjeZTOLw8LBEkYA2N2RCF4w/Hes+PwC6O9plMC93yTIXpzMNYB0BQhvXfknBPzw8\nlGdKc2iL+zQwuAwbi4jqwUfWvSyLpL6Rlxz1sR7fB8d1d3dX2WuOcWedOAOizIgozzJ3NsSgLUcw\nZAKyzmJJM8xOAAAgAElEQVTM+T5A3xEcPCQS8zkCyBp6RBqZ8yzY349Tw27QIb6+vl7ZCmgg7Yxj\nv9+vBCARUXEo8MQAHprDN5w1R2RPp9PSlEb5hANWfBYGNsQ9IwBcQCJO2UDe13DWy/LCtezMLCs4\nXOi6t7cXb968ibu7uzg9PS3NjADkHAC458AZz83NzbLm6+vrEgxgc308r4MprmdQy/fQeQN2bP7d\n3V0lYOp0OmVrX7fbLVmFRqNRytPcB/lpt9ulKfLo6ChOT09jPB5XTlO17uXs7K8ZX83Jg7htaFAq\nonsEh9okdS6E0kaH/8MoFJg0FYrvLWxuzMhCmx2fU2kc9uB0sZ18HQCI+HI3pAFEq/V41Ofbt2/j\nzZs38fLly1hfX4+rq6tCB872xqCxzcTXQwGyUrhMALhxajMiSiQLwELRnB40gHEmg8OKnPrudruF\nHygQ62XuGAan0TzvHN1zb4M88y2n2Pxdjo20/OBofC3WxCl+bp7ks5zKthICrJirs0lupjEip7zg\nLZ/ebnl+fh4XFxcVQATYoK8AJ8V38r2dKTMYYph+To8b9CITlunl8rFOiYHFkTAclXB9jnE2MDH4\ndTo9l/iwHe5Gtlz5yX85GIionqBp4I8Rdmc6Tp7olscNu5ZLxoGdAdYf5sR9I562OuIsAAOWD+YN\nDay77fbjltWNjY3Y2NiI0WgUk8mkAECcNScIDofDSjYCGwaNocXDw0Oll8i137otyLbfdvLIHmuz\nXaSBdGtrK/b39+PVq1fx/v37SjMyDdT4AWhDTdqNpvv7+zEcDuPi4iIuLi6KDMJ/rmeQZxvv+dnJ\nQ0f4bvn39ciyPXv2rMgLTt+7XJbLZeHJxcVFRDzuilkul3F4eBhHR0flDAm+TxbYuvvfcfRfdQud\nkRqK4LS104KkxZxOs2F1VG6HBGpFUBDkwWBQqXc5suIe/B3xVKskheuUD/fydfiN02pcL0d8EU97\ngLvdbjx79iy+/fbbePPmTTx//rzQh/TlYDAohggB/BLCtsNHeZlXXb0nIiqK6QyH18HvoCfOCeNm\nxIoBhebsGb26uiopQtMlI3UyNE55uu5o/tDMYoTtLYg48MViUdlWZZAFyGR9m5ubJZVIym48HldO\nIMQYQnPoSd0eWfbLBpP1OmqkCYv6ojMAjsZxvvP5vGR7IqpHAft+ROYYTTtA1hwRlRMHs8zmLJVT\n8LxPFoDIlWgSo2sdQaYwghERw+GwpJINoA0AnEWBr/Da5QdsheUIHcFRbG1txcPDQ1xdXRX7RAbR\nmTAiTTuhuvKbAb+jRdbrEiW/ycdWuxyYgSEnrL169SqeP38e0+m0HKYFXXBuZHwARjmjZH3ODYCe\nPzxz1ge6O5XN9yKeTrxznwN8pKt8OBwWkAAdLc/eukjgRskGoJhBvWnP9Ww34IfLEbYnfJ/38+4B\nByKm3cbGRrx8+TJ2dnbi4eEhXr9+XfiKM//P//zPAgqXy8dSJyUgN1TbVmeZ+rXjqzn5iOqDVLwo\nO14rKpGzh1ENhtoRdsTTlhqMAd9zJ2928jZiNt6eY3beDBt5rys7U2cHSF9tb2/H/v5+xcjf3NyU\nbSQZ2SJkVkDWaCG0Mc+pYkefdcKV54yjXC6XpZaEsrITAMG1YTJPcbzeXeGo0mlAfmdQlZG415V5\nYMPiw5dMJ8CKSxQ4AIz89fV1aaKhJ8RZGoCBdzMgv669+tqOeDDcPqqXJ2Q5wsn6Y9BjeeKZBwaS\nTv05IwMtvE8362OdfhgYMg87mAzGoXXWcXhrJ8bZ5DikLLsGoY5wst3AUZEtyPrgdC9Oim2yBt/L\n5VM3NzwBIFo/DGiQNR9nDY29y4XhTEKmJzw0IHz27Fm8evUqXrx4EcfHx3F6evqZTcr2yvxjILO9\nXq/QjKyZ0+TmtbN6eQ2+P7zCZsALOuo3NzeLkzYAY3cAJQyyguhkt9st58RzoIyPAq6zO9Y/24Yv\n2Tt/7jIx7xPgRETJNtPTQCmI5zggw2Q0oSc8dpM4fMt0td75uz83/i5OvEMxIqrG0o6WtIudlo1W\nxNP5xLznNDQnCfngf77PfW3A2FtP/dPRCLWnTGQ7y1w/9Hf41wo7GAxKbWpvby8iIi4vL2MymcT5\n+Xn8+OOP8f79+xiNRiVV5W0gTjE5teRoDcfhmrzXjLJCH9bgdXA9ot1erxc7Ozvx6tWrODg4iFar\nVSIdzhQgKiJ6zoCIF/V8G2XmRiMSRtppf18P2TE/Dez4HGdODRKnAl2dEry7uytZB86adkbEBow1\nZjrhcJbLZfkcAzYYDIrBB1A8PDwUdG8nb16wdrrxeWwx9T87TWdn3POCE8KgE7lGPB3CVOfkeUU8\nnTKWAV125C4Z0DEPz0gfwxt4ZlBXB+i5LvTEFtAs5zKWDTaOY21tLfb392N/f7+SxXFdm8ZGdrfQ\n/Q9gdIRu5+RnEyBjziBBW3hjwOrnC9hJIjNra2uxs7MTBwcH8ezZs9ja2io7W2wHDDrIWBgQYTd5\nME+z2SynsNnGMMfcROYolhMNsekGEtCczBT0AdS6LIhcuiEY3rpkNBwOy/G3R0dHBXyRCeK+6Dwy\ngA7avmTnadtRJ88RT09BffbsWcWZsxaelcKJd8jSeDyuBFP0PdDTkUF2Ds5sp3/N+Kpb6CKeOoyd\nurLjJWXINgkcAY4fweMJUtTKHB04lUgURtd0jvggJHOw00HAmSdlBEeiRrs5ivfgnr1er0TBz58/\nj729vZhOp/GXv/yldMJ++PAhfvjhhzg8PCy1N7ZJuSMUJ4GRBNRkQ09UkY1LTnc5qsKgOpocDoex\nt7cXe3t7
sbW1Fb1er+JcMNwcVoHhIKLKx/NCbxymdyTQNGPDHhGVuXlXBetEnqgRwxOcuJ8l0Ol0\nitFkW8zp6Wnc3NyUwzh8iBPrzNEMDhNQYSNjeeFkwPl8XhrrKKng8CKikgnI4NWOe3d3NzY3N2N9\nfb10I0MPn8/vZi/LJqAWGYY3dmJ21I6aHXHaKCEHLsVFRGWLnEFlRMTV1VXhCYAA/tsA45RzxgbZ\nYZ6OkNAPdoxA/8XisdN7Pp+Xng34zQ4NbxH09l3AM9sIcZp+IiMA4O7urkT46C5O0Xrn/gDLOfRc\nLh/LdycnJ6VERinCdOCcCNaLjA6Hw9jf3y9RJb0UV1dXlUba3IxK1gmb47o0zgpQ4KgdYMN72GKA\n7Gw2i0+fPsWHDx/KWfREw+vr62U3Fo4fB7+7uxuNRqNkO5FX+xnsBZlR09z8Z222i8zf/CGCf/78\neezv75fGaMD4y5cvC0A+OjqKH374ocKT29vbuLm5Kbsgms3HU/S++eabeP/+fdze3lZ8iH0HOopO\n/Zrx1Zw8wuPGq4inZiaIScqK91BuDC1GgG0poCCfUAUYwHjmPZQRnz+SEZTO/+247OQBADh5oqSc\nXoyophgZo9GodNHv7u7GcDiMk5OT+PTpU1xcXMR8Po+//e1v8eHDhzg5OSkAJaL6pDI7bcAI4MbI\n01Gxnahp4bQ4Sosi+/jHfr9f9rd6uxJRPHuf2fYHTZvNp8M14BNGzRkFb3FxHdlHljJ3FMJpawMB\nHCl8IhIEZNJF//bt2/j9738fDw8PcXh4GB8+fIiLi4tyLGUGkDkjgmw/PDyULIEdEw7NWaq7u7tS\nBuC0LFJ+1gVKR240Y61ra2uxublZtlbiUIhkAX7c0zrFyOUM1prBSk4HW/5ylOi52/ljFCmZ2Qhe\nXl5GRFQabjNwcDaP96G5y3PIFXMn/U4qFdl4eHgoNKI2ymEx3sGCLBjIsTZANcAFQI4DRGZw2AZT\nrIm1ZLpkHmXQ3m63i9Pwesny0FjKvPv9frx9+za2trai1WrFn/70pzg7OyuP/IW+6KJtI/zHxrgv\ngajbtXlnWLkWzcOXl5elo/z9+/elp4AGXjIuBAU8znU0GpVaPtlOntxmMJNr8MgPzhXHb4CKHbKd\nzpG+nTwHCGEfnO07OjqKP//5z5X7ci/KC41GI54/fx6vX7+O29vbODo6qtzLum6f+HcfyYN+bSgR\nqmzAzTCU16hxMBiUp7V5jziI2U6YMkGu7ToizKkd120xYDgylwqc/rEByKlOvj8YDOL58+fx7bff\nxu9+97tyIhj3/PjxYwXpkjng/GMABYLtVH3dfX3CW6vVKopvB+sSBrQH5Lg5ZbFYFPRsB8/55Gdn\nZ3F1dVXpj3BNK/dYwNOIz0/fg8/Mw0AQmeAazgKxVpyu01xEyigZTYOAgZubmzg7OysnWbEn2s7I\nEYPRf3bA7Xa7gC2nCanrA0hI0yOrzBHHZDlzetPHtLL/n7qkewFybRU9cwo8l8PcFWygwrx5vy7d\nyfrtLLgfNAPgAwrNc2SC+5JtceredsIZqYioPLrUqWt0nTLc9vZ2dDqdODw8LGAOWYx43I1xcnJS\n+IOzsPN27ZgDXkiBI+edzuPjSVkrcmRwa1DHmlmfo2noNJ1O4+TkJGazx/3xFxcXlUCIRmGDeGRn\nsViUUhpRqM94QH+crXHm05kkgCZRJhkOgDS2C1rw3vHxcSwWi5IFdPkMABwRlS1/DPTDOwUcUVMa\nQV6cancG05kIZ92QUWyJAwhov7m5Gbu7u3F8fBzNZjM2Nzfj8vIyfvjhh7i/v49Pnz6VPg4HSBFR\ngg78lp+rYh5bpp0xtv35ufFVG+8inowjSu2ozQiW7yJArq1621G73a6clw5qtBPkN/6eDZXnZYNi\nwWDwmdOR+XpZsAAbe3t78e2338Yf//jHePPmTcxms/LQjvv7+5IizoiZlGJOZfO307FfmpMNrxG2\n6z6mudNEXOvm5qbS4AXwwdi4/8HoH0dphWIO+T0blZxKc3SRr8N8kaf8Xbr6MUTer317exvn5+dx\nfHwcx8fHZWuawZPlAx6bzh5ORWO0MbDZmUJvdMB6YDk0oCBSw9hhHN2sh8FzZOZmN8spw9Fz1kN6\nI/Ln5g9zdjkjfxdHRFodIG35tEzioDByBhO5jGGds8zbKdETsba2Vvou3BPCvC4vLyvlOsugs5DN\n5uNWNc42d8MVtgf9dsMk17MeYlecHveYzWalhEA9GoCCvNuWQjeezknJAZ31AWO8T5nT1zSfM9Cy\nLthWQicHVc1ms5zmuL6+XtLU9gPoBWAj4qnLHQBDjwSA37JvObb+Z13ic+uCwaptD0HHaDQq585z\nlkWz2Yyzs7M4PT2N2WxW+qrIwFqnZrNZ9Pv92NjYiMFgEBHVo8RzhrBu7r9mfDUn7y1xDAyckVdW\nqoyqHM1T/7q8vPwsSvA2MrZFYexzNML8nA40wrZyuvmP79gx8VsbpF6vV3n28D//8z9Hp9Mpx4Fy\nkMjFxUWJzLgX0QLrQtgZ3NfKxO9d+vDgb65lZQZ5U1Mn8iU1x9GTTkOSpgeccF8Ohcnb+AzwmA/0\ndOSNEtoQ+sW8WbvX65o2c6IRiQN6NjY2otVqlcjt6OioNPQgT0RdruN55DlBNxtFR8es12UgjBV0\nwSH4HmRviBrJQhDRcOSqjRa0zA/7cfbGoKAuis+0Z+7+DrqM/DnLlXll40+qPf8ugwwydD4xMEdk\n6C+gIMsNn1ErXltbi9FoFJeXl0UmXUe244A37lPxPdgnj54aZBLVXV9fF1BseSctjVyYZqZD5hMp\nbcpj6C/3drkL/RyPx2Vr6Pn5eZydnZVtcPzOJT/m4mClLoPjJlRk17Lha0BbatQ+4AhZpoEXx08q\nHDrkXhnvDsiOui7gsb2zrkRUgx3Pe2NjI37729+WdP3Gxkal5HZ/f19siRs1baPwL7u7u7Gzs1OO\nQaYR8efAse32L42v5uRdH8kRgx1qXpCJz/at7e3tcgTocrms3V6GgXM36HA4LGmS7JgtvO70RGAZ\nWXj9b0ZipAiHw2G8fv06/umf/inevn1bMUw4btKwzNtChvPg/gCRnMJknXSJG437WhHVs5KzY8CQ\n2THxGUYAw2v+sh7X+DNoY96OxEw/3sM44ai9bvPBvPB63IBj54YcDIfDWF9fL30Px8fH8fHjx3KM\nr3mQsyQMPs90oL5qUJAdnudnHcAI+rd2Uu6qpyOfR1o6KgOoeZ6mvXUvZwqyk7XM+O8s+znTkKMi\n8ygDBzv6HL34+FhH3Dhm8zzXtd3PA7BzBmexWJRTJZ01pGmU60Y8RdboLHPA/jiSpFMd+UBG+CyX\n95zON7iy7mEHcAik2133RbZoWLaNAIjf39/H2dlZOXvdNLOsRDxlohxEODuV7UfukWENZJ3gSURU\nQCmlJwMNAwjkn/s7AHQQmO0Ev4NHZGANer2GDAwItJrNZsn8RETp4
WAnDtljQB1gxzzlufTffvtt\nbG5ulhMrvS7kLPc3OHj7pfFVT7xDSG3wEETvEWX4O06NsceVmhBE8olhEU9P/YFgdG+ypcFRC7U0\nanCugVtRbHizMeR3KBsdt1tbW/H69ev4wx/+ELu7u3F1dVWEhnrScrks8yOF6HSx58XwvBBO1g9g\nsPNgZITN//k3A4DsTOfzedlCgjC7xmTFM4/t8Jw1cXoTZ5G3Lfr3nhORMKMu0nLU0e12S1d6o9GI\njx8/xk8//RQ3NzflaWkZvGV6m+aZdvAfA5br4HVRJp+7RMDauAeyjUyNRqPKTgDOrQccmO+ml5vP\nkFnzIwOAL0U8HtlBE524hJYdu3/LvXGYyAXz4Qx8N2DyG5oEI54aSCnbcA3ksd1+PM2QExqJFv2g\nEOSRSBg+cz0f9ELpxw27BqQOFnxGAo4nl3KyPGRwhPHnt05vm57IIfpip0oEfX5+Xnp/fD/bBEoc\nEU+Zzjq9zbYjp+z5/c3NTdzd3ZX5m0+UExzUoNvILfTs9XrlM3ib7QTXcU8GL4OoHNVnO4+TXywW\ncXJyEmdnZ6UvgifP4eQJTDgvJJcrNjc34+XLl/HmzZuIiPj06VMJPlxzt47g5HNA93Pjq6brG42n\n50jDYCtrxFPz22KxKArifYQIKkaU09ROT0+j0WjExsZG7O7uFgdNujji8ThB9oE6debGGtK7IHlH\n2dTrXJPDaDq9x0E3u7u7sbe3V+p1f/rTn2IymcTr16/j4eEhjo+P48OHD+X55i9fvoxOpxNXV1fl\noSgGP9AFUGB0Cu28Z5WIA9TvkoTRqpURYXKXuxWh0+mUB3HAD2jlLm0aaNzUBfJ21Af9SSvDZyu3\nU+/IBNkQN7dwTZSNpiIMA4f4fPfdd9HpdOLDhw9xfn5e9kOznciykA29yzmWbRwUc8e5IHv8xnzE\nmTvjEVGN3Cxj/X4/nj9/XnoikEnW6l4NwK2jWubM2gweWZdBitPyBlnZePnl8gjGETnMzUzw370D\nOzs75aFAdmKWWzvyVqtV4TWf14GjiMcIjG5+DqC6vLws8k7d2iUa6DCfz8tWTMvF8fFxmQMZjOPj\n48r2NKJJtk5ybdaBTUJe7KwiokK7DBxzhoP3oCmZnuFwWJ7CSFRKJOmsBc7bPAVwAuihMyDMD4RC\n/uywsfnYrMViUZrmrB/Iv4E/4JwAZm3t8cFdPKnOfMe+YCfgEXaR7AmBhAE518EfsZ6rq6sSVF5e\nXsbl5WV5OI3BWd4mjI5C32+++SZev34d7969K7tjaBZ1do858B6lgL97J+9GtYjP9wQa0TuSyQ0J\nFhoaUdgmBzDAeWPkrKwZxUVExXhn1BxRfRZ6HnVpTlLCW1tbZYsTxsAo+OjoqGzDiIhS68Gx8Vmu\nqzNHBDqi+hzqbNiZm9+3QTBfGI6a/R7bF/2cZ9PXxtxO3LWmuqwC83Ikarp6PXZKuUaeZQyg0O/3\nY3d3Nw4ODmJ/f784dnohOITGcplp44xAzoI4mqnLijDXHM2bhnX0sZPt9/tlGxHlER9E5DmZtlwn\nZyLMOwygf1sn73k4HYqe2DHkCB5d4zcckMJjdjncx3vQceCAm+x4XVqx/lr+iGz5LTKBM2d/ttP8\nOEs7Bhr3nMZ2dz770I+OjsoBO/RQMOxE62T2Sxk0R5zma9Ydrmmn5qOlLy4uKoGWbYQBP9fMc7BD\ndNTuzKO/k6/LZ9gKf9e2Cn7BQ/hBtMyDg7Lu8FvrV7ZL0D5nQQxoDfRYF2eAANZub2+LPcq6TuaI\nU/7evHkTOzs7cXFxUYI0ztw3rQzurKO/Rh8jvqKT5whJuiudijRhvL8XRtlwsEeUBjo/VYzv+4AW\nM5RUrBubMBIWRpAXUUaj8dQUwrz4PoIDgLDRwxDwcJTt7e04OjqKjx8/FjBC1yblBhrteKiB0/U2\nkF8CLhFPSuQT5UxPR5E4wZzmykAB2rvWh9Fjro4OsnLDi1zP43PfH6XJuwYYjv5yacD/mh/sif/+\n+++j2WwWfl5cXMTHjx/LdbJxML+t7J6PwShr8vYhZMxg0dG0I/rs5C1b3W43dnZ2YmNjI6bTaZyd\nncXJyUmhuYEqPMdoZKdhMMUaut3uZ1Exc7EBM/gharPDpHvdTZg21GQ9eLAHYHh7e7vUM4fDYclM\nsXODCMn9Dnbi+VGt2ATmj9F1XbbX65VTyabT6WenvS0Wi+L8m81medZAv98v212JBj99+lSidm+P\nIvWfO8AbjUbl6YLod64Xm9ZOMdc5fviFPaF0Mp/PyxZdPzHS4LwuS2XnaflBXsimwWOXSpFp1mS9\n5NoENBn0WneRXzJjnz59KvVsom14Dc+wmbZVDEfqOeixTmCvAG6c3IdO4Sf4HTafv3u9Xuzv75ez\nOAaDQfne0dFRvH//vnJ+BPpnO8dcHNj80vhqTh7jkTuNM/KyEnwp2iHVQ4rXtR0aOJxSQin8VDHm\nwnB9yYKPIOUnkiG4zCnXhXw0LkafZ5J77y3bqnieM12wHKfpg3mc+sTo2Ojl9C/G/ktIkO/X9ULU\n1foQ7Ny8wtq8XxU6WKH5l/e4vsGe6Zlrjna0dXV/p7zskKjDv3r1Kt6+fVtpPKIvwnO0kvP+l1B0\nBkZ53qat55QzSV+iUcRjZoiH2HBgDnKCsTMg4V6ObHnPBrWO5z4bwJ99af112RqOd/W+f2cb2CHw\n7NmzePbsWTx//rxkhngQUKPRKB3p0NB9DhhaaMzcHRkyP4wujzblZDqer35/f1+e0e7uexwQeovj\nRPZ9EBT3NUCNqJ6Zbv2wHOfzGMx7887O3TKV18uJoDwymdINh+fkiPtLdsH2mGtbntDBXHo1ODS/\nbFsjqg1mdTLFiw576OfGtmynuCf3McCFfpYZ2486mcF+cEAYWcznz5+XQ8Y4Xz8iyo4Hno/w5s2b\nODg4iJcvX8bV1VU5gOjjx4/x8ePHMh8DwMxT5uJM28+Nr+bkr6+vy4KcNomoNjC5Tu7FZgYhrI7a\nUXrqaigbCsRRp27YcBqPYWNNB7MF40uG1I6W2j9Ij/2tVhLWQQpwsXh8Pvfh4WFxQkbQzhpEVI8I\nRtFIz6F82ajUpYxNP9M6RwkosZtx2FtL45cNQy6z5IggO0Cn71zjM2iwI2fddelGOz1q2a9evYpX\nr17FxcVF/O1vf4uLi4tSEvHWq7p0GcPoOqdTqZ/5QSR27F6/6eH3uZbvt76+Hru7u7G7uxuDwaDs\nFfbWIvgF2q+L9prNpwbXXONDP+rSn8zJ+uoMmQ0yaXC2lHE/uscbjUbpjTg4OCg8ubu7i6Ojo7i/\nvy8ng9EkBr0MIA3qvIXKANg0bDab5cCjZ8+eFZpyDc5WZ2cAmQl3vePAuR9Hw+LokXmnmYn8TN8s\nLxmIm57Z7nGtDNAskzwIhsN56Bly42IGDBk4oAPWZds57Cz/59452rRTZz0RTzaLAC8DaztaSg3U\nr9lRYH4bWNgW2Flnu2GA
kGnA71qtVjkmmAfkbG5uxsPDQ+zs7MR8Pi89SqTxb29vY3NzM549exa/\n/e1vS6aWk0x58ejZiOrx1/YjOVv1a8ZX3UJH1BfxeS01R39On0REabgAnW5ubkZEVM6UJg1IFB3x\ndC62AQb38vYbG14jeQxnFqacSq5DxJy5jEHGETpzgYMcj8flyF+6pvP+WRqwHAmzbq/DyI9MRERU\nhIS1cP2IqqAbkXOIiJ8Gxfe5FgoBvbLCm0eO/jB23uYCfR0dedjxeN2syR3XbFv55ptv4tmzZ9Hv\n9+Py8vKztCn/dw+IU/42RGxTRAaQOT9FDQNIFJUBYZaVHEmYRvB9OBzGwcFBAYAAk9y0h6znGrwj\nbv5PiQiAnJvj7AzyddAR5khqeH9/vzzrnHmNRqO4ubmJVqtVmlJ3dnZiZ2cnhsNh+R6P9eVIapqO\nuB9rc0ks78s2rwgEeGyz18V8SKHyXAOXpUg742DRX8Bc3lNuR+go0rVyN5Axb5pinW7Ojjz/a5lh\nXYAsDm3hGNjT09M4OTkpPOY6Lg9Y5vx/9BR5MXDJD27K8pzXnh2p+WUbN5/PS/8P6XKcO9vraGTM\n4Nwvrp8fWeu5mqbIIWsdDofxm9/8pjze9+DgIAaDQeEZjyxmayZnnsDLZrMZp6encX5+Xs7hoDQD\nr3Mg40AMvfKuk18aX83J1zkU/817mTlWcJSPJqpGo1FqGq5bknYjjexUnqMyOwk30uT52tDntBnz\ntIJHRKlTbmxslAcpeJsThsNd6Dgnp9WsbDYc3MuIvy4yZN51iD1HcrzHXEwbHJsfi4qxcyT8Sw7M\n93fa33Q2As9RtV91xtBGJSLKvvJXr16Vc7uJEIlsbIR8jTpggpHudruVY3dxFrkj2TJlOmc+ZJox\nUPLhcFgaeEgfA7oczTvaM+3q6FM3X8tLnT7ka+FsAX8G4Xt7e4U2JycnJSrf2toqgItUPmeRU0Jx\ntIac4BQt956f+cbcCSx81Ot4PK44kmazWQAF1+CeTslzPZfvsj1hbo5YrROep0EXjtTDGQHLzJdk\npdF43L64sbERe3t7sbu7G/1+Pz5+/FiJWHOWLs/Nf7uprK5mneeWI3OPfB/Tgm54eq4MegCgAD/s\naN21DBZ85DlyaAf/JbsE6KNfant7O16+fBnffvtt7O3tVYA7D+2ilHZ6elqym5PJJC4uLsp5/UdH\nR7U5hS0AACAASURBVHF8fFx8UqYdw7ulsMU/ZyPy+KoPqGk0GqVWhzHKiyB6xcC6cxcED1JtNh+f\n+X1+fl4iX+pRCObt7W3ZyuLaEWkfFIyBoGJcmCfCR3Rix5R/22g0yv74ra2teP/+fTE0NIlgxKgH\nu+YCIPGWN0e9dZEbhsSGCqHgARKgbt8rpwJJVxrho7TUMUmhjcfjcsRmboQjWrByAWrga06lOZXJ\n2tkvn1OVRLCuH3PN7AhI13Poybt37+LHH38sj4R0565/5+txn9w8Zjq6vkpJw5EePDT/GAYEVnpS\nhC9fvozt7e2IiLi4uIizs7OyZx6DwrXhL9e0gXejEOuiR2SxeGxOGwwGlYfz2CkZYE2n03K8NA6b\ntHCz2Yzf/e53sbm5Ge12O969excnJydxd3cXu7u78ebNm2g2H48E/fd///f461//Gh8/fozxeFya\nanO9l3o4zpd1WdZYb3ZipMVpEt3Y2Cgy0W6346effioR2Gg0KucoXF9fx9HRUaV51TLq44hzFgja\n+dz/bOOgMed0oB/oPTpVB9ay4W80GrG5uVkepUtT36dPn+Lk5KTCR+t+xFOZwTbMQMT2GRowL5wy\n+uizSnK63qDGGY9utxt7e3tlDzwHzZD+5iFjV1dXcXR0VI6PzVlEdJQmS/deuUQAL6Ev/ECvOYWu\n0XjM5nS73Xj79m1sbGzEYrEo++VbrVYpP7Xb7fjhhx/iX//1X0sTMgC83W7HX//61/j06dNnoN/6\njiy4r+NLx75/aXzVLXRutMhCm40c38nKg9OnEQLlg1gcuhARhcg+WjVnErJRzdEXaRtHx9m556g+\n4qkBA+FE2Dxfo3zXkS0EduIompUzf98O34bcUTLDimzDgpLaOfn4RQSfeiWIm/mhcKZvjmhsBPM6\n4LN3SNjBWCnromEDI84ocFbn3bt3cXx8/FmN18AolzMc/WZg6Bo2DhZn694SZwwyTy3vTt2RRVks\nFnF5eRl/+9vfymNl3XGeafRzmYIcgTGczcoRvX9nnmDE4Qu1avYSL5dPW4U4BpT1zWazOD4+jj//\n+c/lzIJczvLIRg5dYO7e4WI6I6sAa4Na6tY8SQwdoQTA9lz6TbBDRJdZ98wD5phT5M4Gei11+s86\n8t85w8KcnHG8uroqoMm6hFz6OnV219d3doz3ce45EoUvzhDlDJ9tO2l5yguAOGwl210JEB2YeO3c\n2yDWB8nU0S3Ph3UBNgCux8fH8Ze//KU8A/7HH3+M+/v7WF9fj7Ozs9ja2orFYhH/8R//Ef/2b/9W\nDv4hMO10OuV0zZwxcyDFPeui97/7SN6HLdigshAvPCPxiOoJV64rRjydULVcLotTXSwWFedqNGmD\nzLACmOGkkYiULFDZUCMgEY8pwbOzs2i1WpWmNIwzPQMuATg7kMGE69MojFFg3dxxDs6a2LgbWUc8\nlQwcNRFN8+Qk1goq5nM39Pi+2ZDlaDCDqhw51KXyua+deuZro/FYs9va2opOpxPn5+cl5UdjI/fj\nX9OGe1v2bJgy0GMuuQRB2cgRsWU8y4/XY5BwdXVVHs2JYR2PxyVLBf/gde45yPPNa4aPy+XTA6Mc\nfdqxm0/UvPmNo61G4/Fpf91uN05OTuL8/Lw8UZFTBi8uLuL9+/eV8ollGXohWwzbAMBQNvo4eXSP\nNczn8+L46Jzm70ajUTm+1Ls4cNYASPMKGcjlH4PyupIasuAtsrYHdbyqG5Yr9pGPx+MCZpvNZkVO\nvpR1yHYkO2aDXGfufF145eN7c0c/vGy1np7iR1NbRMTx8XGRsfv7+zg/Py/yiA8wXcwH6E82li2Q\n+bsZ2NgOciZDs/n4vI4ff/wxLi4uyrWOj49LqZUjsieTSRweHsZ//dd/VY5JhgZfCnAyQHMQkW3c\nrxlfzcnf3NxUFMUG1EKPkWo0nqJFBLTVapUGNRzmxcVFvHv3Ls7PzyOiiu7pjM+1s+wgIqJEXjYq\nCLE7jhkGJjkivbu7K3Wwq6urODw8LKfacS0EwPOw43Z0GFE1GNnBM5ccNftkLafNPLhOFn7XFOka\npdufLWjn5+eVLn74m7dhYbizs7AMGOxxjy8Zhp+LDPg90UGr9fjQiPfv3xcenJ+fF3lyN7bvYTp4\nvy2llhy9wH8iEJctLDM5PWewW2d42CrH3mauf319XR5oVLd1jHvl6IX5EOE440JJJ4OzbETzfGez\np6fhUZK6vb0tT3Kz40TPAD+TyaQcC2rjl7ue5/N55XS0iCh1V+xELmMxN8s697cek
XVzIGBQaZtk\nunIfD8sya3Bpge8QDHhrce4ZcsbKepoBl+eMbHBs7Xg8LjrpvgI7PcAQa0EfoA1RuTN08DjT3XTh\nt4723SzMb2yfOE/fB4FxPXZo4Usc8GRQ6/ujY34ok0+ls5N1LwZ8GY/HcXp6WukPcGM0fR2z2dMR\nt3WymO1fBrERT8cwZ/ufv/dz46sea4sTsDKziLrI1AYWIeOAGLaucGqZ9zqb8U7J+l4MR5J2ItmI\nOT0U8XmKiPdY68XFRWE6EYzXbsfHsAJmJxbx+aMRrZQZtDBvz92GwoqNc/UciMxyxDmbzUrjGo1U\n8CnzNws21zfaZzgN6ugnlzHqnGudwpB9IBI4PT0tBoJ0t3mWaZ1pzrxQZhuVvGvBtXmDxzqwYtrk\nNWFQiI6pUfIUL85RyLzN8uF1QlcAi3+DvGAALdd1DoX3iIwXi0VxWtCZ62L4WDdOhExANmo5k5d1\nwlEjuv4luUPX7LAtZ/koUq7Jd+oAfrZf+TPbsBzU4GygHfevy3xlOYT+mTemKR3ePA0NHpjG2Wbl\n+xkkZR2GpjRG2p6bLhmkW/4MSgGvBog0O/p6OHcAoecJbQ2w4IUj48wj/86ZM/SMHifTjflnYBPx\n1E+V123AlvUn89Vzt5/5u4/k6Uj0iT6ZCNlxsmie8NNut+Ps7CzG43Gps3KkpNE2zigLtR13NlxO\nCbomQs2Ijl8bbP+mDqniCJmLjZxRbF0UF/H5tiro4vfyOrytCaXz43XrQIIdAMJFupK1ArZA296D\nbx7aQeTrew6k8xqNRomqvaWIx9SSmvdazUN/Bm8ajUZpnrGjJPoFhbtOaedR5zBZB0bDzsPPVsD5\nODLMgM3GxSDC92A9lE/IYDWbzRJ5+nvZ4OTowDKDAabfhPc5899neWf9iHgqSfh6yAvXpg8motrw\nlI0pL98n8zfrCLKDnDvyzD0/Ttubzlm2vRbzIYONrJO5/wR+m9d2VIBCg1r0Kkd+GUTUOXde2CzK\nbNfX10V3stOBDuhGXpdpZADIQUF8h2wMfGXtpPJx/vDGNs/bHunj4JnsOFjsJvLmY6fr+hyybGNb\nGo1GyeqZXjn6h/YPDw8lmMxBl//NdEP+6rIadaDKMmNZdN8TPM1P/fy58VW30DnSslF1k5eFxekR\nCIWBI4Jx6hThpSZsg8xnWVEwOJQAuC/b9Yhucg0wIkrzH47NaTycAIJoI2PDkI0x6RqMDNdy1OKX\nO+9JX7Lf3PXDnJryehk2rHVREcYC5070hOGy0UGxHWFwf/7NTp2tWAZabkBZLpfF0ORoCcfO99kN\ngeJQciBasPx5u6Bp4S1bBhDImLu6eZFGhCfIMmuEjrlk5DnkLAEGyY1IAKNcb7S8WCa9XmQOecmy\n7SjXDsTzhUaWNfjh9GyWsTp5Nj/hXwYw0ILr+WwC647vCc1NW+uh+4Tgk22G6cfvTD+DPf7O4Ij3\ns476ZD2MeY7g+A2ZF9sMg0K+y31wEH6hU3Y8rM8NlxlEY4Mjougoc7XcM7iPAUCmheUUmlHyASAa\nmMArg6Asq6aL5QVb6j6ETCvrj2mUaWH5yrIEzU1b5MVyz+cu2ZguBmB5rv8rnDzMNzICqUREhXgR\nUbYSYDQiPj9r3sYZwvn8eyMxN4550E2JsNGM4yiHRjk3wPGgCj8i1NG6nVCeo50UDpUDOJg314Bu\n0MhpceiHctn4cQ0ObrFD7na7Zb3QHGPiAzk8d+ZpI26j6LQYPLTzg5asm8iU+UB3wIkPK2LugC9n\nDKA5v+d7ACaiGV7mhUGQZQu6OVK0EgIAkSsACk4+NykypwzSMKS9Xq9kF6wLXAvgBcidz5+eGc71\nkTWu5yOReflQDWhpsIUMuZZNxJ9T3cvl05bL3LeCbGbZN4g2gIb2OL9cGoNO0MZ78qET9PA2L+uC\n06zOYpj38I+5u+S1WCzKDhMDF3Q2A1J4j2OPiEoAwTpns1lFnqyH0MGOPjdyZYCHHCAv6ACHynA9\nl5IMtLwudKDReHqEsEGO69y22w6AnGW1zDoLh211eZDfYOPQdYMAb5u0s7XeIMPwwY17uU+J31km\nLS+2e2SovNPLGU+DVujKb7F5OVuUAQBr+l/h5GEugo9iGg3mqL2uTotSIxh+chQCi4G2kkRUswlc\nDyNvJeQYQ5y4T+lzxyTCMBwOS9c8jLBRsvEz0xD2DDp82IwjY16u8Tn1xwE8dDTb4OTImu/bQEc8\npasdQWaUa+HM0aNPgmO9jto8BztCjHa/36+UApzRYH7eVoaB5HoutTB3lM9lARsIR9LeD+09/h5Z\njpi3H9KCbNmZ+35cx9eD9j7iss6Ys14rf7vdrszdkW1OS/LCAXL2AYYfw47cOKKwwfJcfApijuAA\nCI580FFOeWRYPi0z6CbvQfPRaBSNRqPszfe6+Zt7IS+mN5/n8p5pbxDvQAJD72EaOTNgnYPmgAyD\nCzvaDLqc3XA2xMAEftuuoWeAU8Cz97I7ms00QO8Bsr1er9JU6kAGIGlw76g8R8fwwVvdTEMDZsug\nd/Aw7EB5GaAMBoPyYCF6SAzsuRcgBdpi//MZEwAO3uNoW86Bydlby5DtVLZxfh+bPhqNSsf/L42v\n5uTzwiKeDi+JqKbbIqqCg4MCGTEQiLpmGgTeztHGwiibOfAZ8yRidKqGtbh5huvhRAwgnIazEvsJ\ncawD486cXJ9hTdSqDYI8J06l63a7ZU+mnZ6RJQbEdELhc5rJ9PN9nfJCCeGDnZlT59lxWoGhazZi\n3MvX81ydcmSuKKAzLDbUTu1BB+hv+SFCd3bFBtfZCB49ifOz8TTNGF6jHZtpiyHyfZElDIEzTMi2\nex78G0cLnqvBl2UbPrqL2HO3w7COOHOE4zf97GSyjtVFiI78rKPOTkBDvs+cHH1n+bUTgd6LxaKS\nYXRWwjJh8GZna6Dr1K3lnJKSeQCNfD2ixAwcodF0Oi22AadqmbMNsVNlzQBgl6ey7vtFYOBrZSCO\nfHFP6Mj3DHDhH46f30dEuQbzq1s7f1vGbBc9H2ev0BPsIvyHH3nuzAd6cl1nMAzysTtcj4ZBZ2yd\npTNdDHCw6X/3Tj4fPOKUh1Ee3YyueSJEMNCG1U4+4lEx7u/vPyOekZS/G/H04I5saFqtVkF9EU+d\n0xgCfkvDGScgRUTFQecUvlOoOHnmh2JyTQQv9wWgHMzJ6Tk6ahEqjDtrYGSU7TVGPDVReU4RT02U\nCKwNj+nq4x9zeixndKADDiE32eSBMWL+fNdbw3xfrgHNkSXmYENNhMvamTf8hc4YbZwU0TTXBuBx\nDcupyxl8ZuNuHYBPgCTrD+snhU/Eigy4LonDx8GS1mabqpsFnXJ2pok5sAZHqugevLFsQxfWa5Dk\nbJHX7mtgQ6ArmTNeTrva0RucwUMbYztHdDRHtnWNa9Dbjtg8xOEwV5yUgxVAviNJO2XTxcAqp7T9\ne6eYPb/l8ukpfvSnoB/IGLbFza7WZe5P
ZswnaNpm2m4wP0fn5rvtkUuHOG8HP6TLHx4eiqO2TGRb\n4RNDHWT6vAA7WwdkProYeUW/ANWmB7yezZ62fZuHvgb/t8/xefw5aHXG4NeMr+bkI54MhA2H01oI\nWERUCO5/jYa5BoaL1EoWPr6XldX3NRiA8GxBsRO1MbDhxBAzV/6PIHMPK2V2olyXmhPrQcntcCKe\nHB1zQplwPHaAng9K5j4ChI7voHBWGn5rYGaaca3crWzH4PWi4Dm9z/XNVz6D7l6XjaBpbSTO9Q2y\nzHc7T6+fe2SFx4BgXDAggFkDWO4L/712/5tlwqAYvTH/7OxdE+dveANtnBGwIeGRsK4/5+ZEIj7z\ni/vjGHHyXm8GFzgwA2uvF/rAGzsL91fYWXDyYtZR1uz55IwC8upoG5ozMhBGRnn5LA/kxc9FsE4x\nd97Pe8ktE9lBWCeYT36PuVt+bVuZr/lvucy2Jstpq/X02GCXDS0P+ZV13nrFHAzG62wp//Id2zR0\n3DbOczJPAVZcz3bEgSeONZ8tgA9iTQZWllnrJ7Q1/fmuDyqyPkMXQB8lgF8zvuqxtiDIiM/38dpI\nR0QxKLxnJ+/IoU4gEHpHWgYXjqS9lcMpmfn88cADBILDETxng40MHBCwiM9rsB5Gso7qeLH/H2GC\nfggrtCQ6c/YAZcj7kEkX2gkZPZs/BkVG+jbEpJXsPFgHv2EbXwYH5ker9Xg6oEGZt3Mh8NmZMhyF\nZSCZ/+81WckMslyGsVEzuMTZsw2I4S5+3z/LQKZl3ZqcfuQazWazRDZ+fLK3qNooOEtmOjhzhREj\nc8X8SBkaIPBibhghG36Ma9YJ6p52wJ6Po7c6ObSzMODMXdnW1RwI4FgMkJEH93wgh3UggHtwGma2\nZ9l5OnpDt1335/7OpuSgxDz1+5ZfXswHR5Kdjq/Dv9kROzOB3fC2ZZ/sxn0tc/A/3yeDAGTMZxYg\nT+YD66AHw/rra5rm1h/3EviERWedDPYdqNlfmNcOJLLs2f/4GmSQ8nHr7k/hPlwD2/9L49c9q+5/\nYOzu7v6fiOozwVmAG4lMPJpGXMfn+zxEggdJgMba7XZpgup2u5U9zO4ktnOJqD7cBAbnSDmiGnW6\nPmMggcC7QceC55q0I1gaQzCAGGycG7+zc3bZwqkp14q8dQua8xCaiKgoFnN06pi5u4vcwCXPwdmL\nbrdbmkZyrZJu3wxAIqrbeaxk+f4uhfCEPKfunAWw8eZ61Lroysdgra2tlXPvDSiRQTfHmEdcGyML\nXeGDO3uhUV35iTnTBAS/AK+9Xi/29vZic3OzNPzYYFgWDBwMGvIL+chNTdQyWbONI7TgNwZl1ik7\neZ7dDqi17LHeXq9XeG+DyfUs79CffprslKE5NsL84jPo1m4/PqBna2srNjc3o9vtlrWur6/Hq1ev\n4u3bt/HixYuyQ4V7cX+DbGjO+7YBuT/J8uL14PizLGcdsGxyPUfg2ENHn+avZZ059Hq9Yj+s8xlY\nOHMF6IeudqDM3Y+rBbDy3HVkwvYK28O94ZUfkIS9gt7eNWK6WTbglXUUPcUu2iY6rc91cNguSbJe\nruft2KYBPQIG9nzfoIn7n5yc/N/4hfFVnycPU522ghiOdPg/RGWBMLbX68XW1lal4xCGoKQ4ENft\n7PCdcvTLqU83+ZngOTJzdMR7MBKjkyNHOwfWRcMcDt6AxzSIqKbPMGAoW0aZGen6vk4x8X2jfKcA\nLbRkWoxCjeK5F+sCteIgmDvXNujDgNnwWalMixzlw2On3wysvDYrG+8bWLmvg986eoioZl0ygneW\nh/s5UsigyjzxmqgDYvjYAbK5uVki7JubmxiPx585eWTd/LCcoDPMFZpYRy1/TltmAwQ9WL+Bmod1\n3oALGrRarUqWou43dmDmieeyWCwqa3Mkm0Ex76+vr8dwOIydnZ3SYEampNPpxNu3b+Ply5extrYW\nh4eHJcPHvcxXr9d0gIemH07F6zKYzRGq12gemR/ZluL84ZPpjl6T6eBedqw5c+L7eEeFwR1zZL52\nwM7Wwl/LkOeHLWV+yJifGOe5c586eeFerCvT1/rN3DOorNNfz9066KAu299sr9BRdNDyYrv+c+Or\nbqHDwVuoQYQRnx8PCyFBT0Rqg8EgdnZ2YjQaRbfbLWdkj0aj8vuNjY3o9XoxHo8riuJ0PgJvhbAg\n8G+z+ZTCRZhch0EA3FgTUX0QAY44p6wQYNYGas/Xc0SdFd/RTnZ6TqlFPNWT6by3EbCgZeOBQti4\n21k4lWa6kULmfYMS9zr4Xigg/RBWWOqdrIVhh2GjzXW5j5XPJ8hxfcsCJQZqc76XX143ynh7exvz\n+bzSpJNBQI4i6tKxpjlRcLfbLUf2Qi8i4FwOw5A4+1NnoJBDMgLID87eqWVHbu7jcBSaU9UGg1wP\nmfL1nK6EX3bOXpejdACBtzS5dmv+Zz6yVjIkZAl5cA3Zo93d3Xj58mUcHBxEv9+Pu7u7+Omnn6Ld\nbpeeDQcj3Ns6ZX1CF2azWUmBuxFuMplUwJ/l2jbI68klEIPb+XxeslZOU1t/Iz7flmze2f4A3N0k\n57IRfDWw4Dp8D32YTqflUb/mrx2+ASHv+dhZ27FchnGmEfogF36cdQZsBgLQK6flsa+sNwd+1g3m\nTc0fHmX+OvPl4OnXjK/6PHkb84hqLSMjX6Mnb29AAUejUXkyGkZ7NBoVY8fzfSEoNa8cmfCyY/Nn\ndqL593nkKNjDaN4Ry3K5LCjaSpqdeVY6vuPP8vxQJDsS6E0air+zEa1DjXZq+TtWBCtpHe+dRkbx\n7RD8m5yZMH1spD2/fC87diuijZJ3cHB/9xHYcGS6m/8YDzvVDFqN6v3KRsz3wRm2Wo+HMG1ubpZ9\n+YBPAyjmw5xyJiNnDOoiRpeG8jMgeOWmPsuKQZVBKMY0Z1a4DrtKnCGxjHn+0Ag5z01yWbahC7JA\nJIiDJTsIj1qtVons3759G99//33s7e2VyN62gWs4i+HMjedtIOJymenH9fx7O/gs6+aNZdwlH/bn\n+6AkA3Dshm1unlcGzhmk2455ZKDjgcP2PQ1yfQ3/36AQMG5n6mF5ybpmmjtbVUfXbOuyLbBfyyDF\n13OG0L7H88/2/teMr+bk2RLlmll27HYiRl6uo49Go9jZ2Sk1woinPYXb29uxt7cXOzs7ZXvFZDIp\nz5e34Yfo3ktsRwMad8OdIxSnB23ErZTZ+eTow+nFVqtVHK/TXRmARHz+0BSEhc+cis3I046dkQ0j\n1+Z7ORoDkJiH2Vlh7Hwd7sH7fh69Dd9isaik9Y1qPad8rxzxQW+uy3Y/nDDpbytVbtCMiJJa9LbI\nDEqYPw+1QE6I7jIwsnHMUSb89L0su6PRKHZ3d6Pf75eILxtq0wW58qE0zNmg0/yxLuaoy0Y9OxkP\ny5v/BjTVgWI7DrIKliPPmxdbLqGlMwqWTYYDB8oenG+wtbVVDuiB/5ubm/HmzZv4h3/4h/jDH/4Q\
no9Eo3r17FxFPzVs5aocu9Nc4G2Od52EsXCPbRdMZvjlDaD3MsgKteJTrcDgsB3zxBDloTs1+uXxs\nAqQu7IYx2x+ccs4cGCz6febHXMkuea1eOzJnHckRLfrFvSaTSckg8LlBl+9Txyvuy8s203O3vpj2\nzB0bxn2YS76e/Yi3vDpLnIHtrxlf9QE1GWkxMspmMYPBIAaDQTlhjf2RNLRBTE6QogkD5AoDERA7\naIMIhA6jT3oOgmMocyrG8zdqj/i85u21NZvNUn93BM+jaN3wYbpEVPf2u95j5XGtOyI+i+T9WXZW\neWTHSoRrw8V18nWtTIAynxTmqMSRA+vw9+roygsawPd2u13Od/eeY2jDEcWdTqfQ3AaTv21IbcBy\ndEovgCNE1yhNywwQbARzVOBUOE56NBrF9vZ2Abq3t7dlR4Jfzlj4+Fbfw+nLiKftdt6b7nnW/V1n\ngHN0lIEX9MvNSjnbhJxnWeV9+A596iJ5fgO/XI4ByPFAFF83IkpDFw4Sm3J9fR0fP36Mk5OT8gjd\n3NnPPe2ccaZuSoMHPq3Qw84hg/3s0E175J5MxNbWVulVwv5wiA6lKBqTce6UViOqDW1kSB2wIRue\ns3Xcuur/G/jVZW1sl+xc871ykJblz3Lk0pR1pU5efJ9M5zxXg4lsF32PLMv+3BkMg8H/zviqJ95l\npJkdg8dy+dj5vLGxUek2h3hmuA3G2tpaSUkhOLmBiN97Dk5N2lhnQ8hw5JkjSD63wJnZdG5z/Ox0\nOi3IGsNjhOz5ZuToLWWOXH+NobVjy2urQ9gY0qyMpgH3s4P3XKnjGS0DpBD0nAWoo3s2/M1msxjk\nTqdTHkXseiDAzpFtznBkBJ8jSGjotLgzMz8n515TjjIdGdiQm0fr6+uxvb0d29vbsbW1VTrQT05O\nKr+1vDA/wEuWB1/fGQfP247Zo87J+Hdf+h5GFiDNcLe2v5uvmefjSCtHnV/iCQMnTzTtOQAMaXxc\nLp/q/oeHh3F0dFSeTllnY7JuWQ5yZOi11fE+87eO5tZT7kHjZr/fL7aUDCbAlH+J8tfW1sqjjRk5\n45aDGs/DgYFpkLOfmS/ZsdvGYtsjnmr7tkG2SXXyl4OVOjCQ72nHz8hg1oEAn2cQbx+Sg7R8PZd/\nTIsv6Vrd+KrPk18ul2UrgTskvQ3N6Q2iLiP+6XQa19fXRekmk0lMJpPyvPbt7e345ptvKsrOY0Z9\nnj179k1s7omjxaHhIBqNRuWgHSJ+fz/iqenNSgoqBlU/f/48NjY2IiLi/Pw8xuNx5XeAE4MOlMpP\ngWMuGEunfhGoTqdTGoM8d+51c3NTUTALeU4xRVRRJzwks5IjDrYycpY+ETY9Evx+uXzaaoOjbbVa\n0e/3S1MS/DGNqN82m83Y3t6O3d3d6Ha7cXp6Gufn56X2OxwOC0jhMA9StIPBoNJFDV1dAjCtsyG1\nslIisbzyO5eYnA3Jht41fUdk3W63RGSj0ShevHgR6+vr8f79+1gun/bzww86o/PhJzZA3MsAazQa\nlYOgoAO7WJDln+s5+FI0w//Z5krt25Gft8zmCM1bomwvmE/uHI942raYHUnEUxc4qWfvwllfX4/J\nZFJOvByPx3F+fh57e3sxmUzi8PAwjo+PCx8tL8gZtHD0y2eUccgQdrvdwj/45I5qZ4bMQ+shcoIM\nQS90rt/vV/5GNvf392Nvby8iIq6uruL4+LicEeIsoXmPTrRarWJfeB8AYR2F98i6t6hl/notYLV7\nBQAAIABJREFUZGqx3YvFouivg4NWq1UpsUA/+x+uiXyQ0fOZFpT0mHsGusiYa/eAfOTJZ0DgvwBO\nlJUI9Kzv5n0OEPEHv2Z8NSefo0mPjAYdobqhyPtKSSteX1+X2i6CwsMy2OMZUT3eMkdZTv/URY85\npcl1mB/CZiPi6zv6I+W6sbERg8GgGFKnvZxBQClyDR865WarTHNfrw4h++8c5fv7Rt82ZhFP5wUY\nmSP83oee612ZTjmCtTzkuTsSIlqhZNPtdispWBw5ToOUpY0Exhd5wRlBX2SI+2OAzQPzrS4KRobh\nSV10/HPRBQ8HGQ6H5SEhyL6dWJZxjIj5mHUROuJsPQfWaB7l32YeeT3OvpFp29jYiK2trVhbW4vr\n6+u4ubmJm5ubIkfoN/czvXL0UzcPvkNk6pQ5vDDIyal+wBGRO475w4cPcX5+HoeHh3F5eflZf1Hd\nfHJ2rS6rkTNiGRiZF45MuY/55CxJLj0iD3a47CigRu/Dn+psAH87I5HLJIycWbENcG8BcwWU2Z7x\ntEEaBi8uLkqJpC6TmumT58G662ym7Ytpbj74ftCBXV8OQBqNx3MGXAYhY7S+vl4exnV3d1exm5nW\ndQD158ZXPfEOo8TfTnVYIbylDANL2iwiikG+u7srSJotLr1eLyaTSal38/hR7m8lN1K2YbSxRvhQ\nDH6TBckIDoNg52kj3e/3y1pwRq7pIfQ0wnD9iGrKD+GxoNuh2zDbiLN+R/1uYuQ63Aea48wxmk6t\nA7BcP8IhDQaDz7rXQdTZkRnMYWQ9DwMM1ko6kvtDG8AefwO0IqIYDObAd2m0JDJ07wZGj2s7cgEQ\nICs+wQw5stLmjITLMzY0GejhJNfW1uLdu3fx4cOH0tRq8EP0ZCBnI2hjYsDKWQboqFOj2dhQk2U9\nNo7cm3VCJ3bHbG1tla1oFxcXcXFxEePxuOgPzYSeo3luOeXe6I8zK94bHhElVd1oPJ1yZsDoEh/R\nLmBwbW0t/t//+3/x008/xYcPH+L6+rrQ1rpjXtQBOUefRPCcMoh8ch1edkzww3Q2+Ox0OjEcDgtf\nt7e3o9frxdXVVSUyx/mTUbB9/BIvkXXWRA+R93sj00Sg0MWlMPgFmMbpsWvK24hx8jxBrt1ux+Xl\nZdzc3JTrIu9fClCcDYUGNzc3hbe51FMHoswLO3j8DE2btrcEms1mM4bD4WfPKkDXrq+vK7qaAbnl\n7JfGV3Pyt7e3lVRPRrseCKcjwuVyWWkIiojiBIiODw4O4u3bt/Gb3/wm7u/v4+rqqlLftlChzI4M\n6hw3v8vpbM/bDUtGqWZSu92unGjHHC4uLkqpwdE+Qs61XAqwI/C6IqpNIEbYTtcaxdZFXaY/RpqT\nmUid8SAcCzpGkvmRpgfNuqPVBiDTH157zVnRMIa5pGCHTBqOLm2cAykzlxjskLi2jZ3TyQCUHFXC\nLzeXOvORDZ5HXSTMb5FvUuZEMmdnZ7WlFvgOTTC8zK/OYS+Xy7IzwJEcg1SiDbXnmp0xMsHWLbb5\n7ezslNezZ89KU22z+dhTcX19XercllXmCt1s/GxPcibFTt6Ati7bksEjsttsPp7vf3JyEp8+fYrT\n09PKTpjM1+wAcgSJfrpBtM4eZlvJqItUcYgEEjyCFxCMI6cDHQdHwzGZlPPz80J/y5XlmTmYFwY6\nBqwGsHyX/ir0mPLTaDSKvb29GI1GcXl5Ge12O7a2tgqA
d2mDvek+v8L9V2RprMMGFz5gyxk6dMc2\n0s7fu0IIeDY3N2N7ezv29/eLTSe4JNhkRwPzQTbwUTSmO8uZZTODxS+Nr+bkeepRTt9k5GTUBcGJ\nwk1gDFaj8Vjn3tvbKwdVHBwcxPHxccWotVqtiuGCubkZysro+hYjp1kRnPl8XokMvSYUzXv7G43H\nrT/j8bhEYqDr0WhUmqQADzipHNlFVI+etcOyUXaNFwHOTsrROqfAkd4GoOCwb29vC5ImS8FcmUu/\n3y+/owbH/XLaN6fCbGQYv+Tkfc37+/sSHaH88HoymcTa2lo5CtNRoetrlkfo52yFecDfBlb83tdw\nhJOdZY5UnZHwKY43NzdxeXkZ19fXlWcZ2Ck6K2Re52jEeugnuuVaPcYzgyrfm3th/Cih9Pv90sm9\ntbUVW1tbsbGxEaPRqERBjUYjBoNBkVvSmOaznYqdNPO3ftbpqyNo7EsGK9AJJ4++zufzuL6+jvF4\nHLe3t9FoNEqtFRkzz5En09llHdfpv9RslW1ljlKz3vqgJA4Gi6g+IwK9xeFAa9Y2Ho+LXpu/DiJc\nEqqbl/WRV14Xtnx9fb2UbobDYezu7sZoNCpytL+/X4DWeDyO6+vrEvl3Op1Kjd0A24ALvvKvd3ll\nEGX7YvnIoNhZhs3Nzdjd3Y2tra1ytobBJA4d/pOVIOADGBjwZR39X+HkMRY5goEZRDtG2aT3cTYY\nZ3eLbm9vFyf//Pnz8hz1m5ub8nAXIqDLy8vyHvfi/ygyj6LsdruFQeybRVEtQACAHPlEPJ0JDWOp\nw7P1aTwelyjLHa67u7ul2Yw53t/flxQ+SuYGDwAPwoTDAsH///bOrLmtKzvbCwBFigRJcJYoWpLt\ntjvdTiWdSnKVm/zyVOUmfyApx0PbGpqkOIMEOE/Ad6F6Nh4sHcq+45eqs6pYokCcc/bZew3vGvba\nZnjemfm394UAuRKXuV5YWCi5ddIMGASMKIr+yZMnsbS0VPLHzFu73R4rPLJQOQzI+ctudpG9YIec\nB4OPBTn9fr9EGfzOhMYbjUZ5N7ezRbAc8uR3kDj9FphblIsLgxylYXy+P+POUS2vAwINEGm324Un\nO51ODAaDUkxKxITUjUEs9/G7QzbGXv9soB7yKvi/ow0Ro7Qa4yGaw7w8efIkrq6u4uDgoMiFC/xW\nVlZKbnMwGBSgZiXrd/OPDRAEaEGefU5GrqPxe/JOyCO7GY6OjsouB9bBhsBzaqOOosYwUBTJmjlq\n5e/bSFpuPc6IiNnZ2Zifny88Q/qJeUMmcttX9INbTtu42Cvn+8wxNUW53sOgIPMOxNrbsDMX3rHQ\n7XbHzqMYDAbF4aHj6f39fYnOOe1QZcRZI+tupw4NZqqcUeSy2WwWnT47OxutVqsUgBO1pscDwAtw\nuLy8XJzR4+PjmJubi/Pz87F+/Tk6lYHd5+jRjHz20HOI1SHNiHE0SI6z3++XqsROpxPz8/MxMzNT\ncnygNCpDI0YHDiDoEZ8eSRgx3mv7c+N+yJtxEQnKhH3+KFq+x/ucn58XIYHp5+bmYnZ2tihhBMhV\nmmZa5s3jjXh4eyJj9HUIICmF+fn56HQ6xWvE+CKUER93BIBabSDx8l3xPDs7W8JtKDC+z5pbCTBf\nzLGLerLQ8U7Oo0aMUjk8E4NugQFgMR4QPorTBT+Q7+fz1xlH1firQp3wteff6wGhtNkayBjgZ/iB\nA2xyeM/3hd9R1P6bFYqBrPmqyqsxMCQUCtgB0E5MfDwq08YcgzE1NVV4rN1ux+rqaiwsLBRlnw2G\nx+voDX+Dv7Lxtlfpyme/p+tFUMzUxfjYaQAiuzOsO3h3j5XokL0zoiRcZ2+bdTEIyd4wf+eH1AJG\nZ2ZmpjQCY0dQTlcMBuOnozmMbV5hvp06hNdY/8wrD0VanH8nmuOaCXRHr9eL4XAYp6enBShOTEyU\n6BX8sr6+HoPBIA4ODgrohQz2mEfnwx0FqIqYeNx+L6Kd1snk18/OzuLs7KzMC6COyBRzNTc3F19/\n/XXs7u7G1dVVcSqqnu2x/x561PPkeWkqDKsEw1tSUDjkcOi3Tiit0+kUJDg9PR29Xi8iooTQfLKS\nlSL7Yo34IkaLC7JFCXBPPH0X+LjCHkaZmJiIubm5cpIV1/kIT7zhs7OzGAwGJSRO8YaLie7v70sL\nU6P7m5ubcmIXQmqGgIGZMzwGQENWoM3mx21oGxsbBXmyDe3u7q60E56bm4v9/f3Y3d0t84aioXgL\nDwpPfnp6uoAv0hquZGdO8rxOTU2VPbsPeQwoHj4DRTssOTs7W4SSzww47+8/HrF6enpawMny8nLM\nz8+XGgm8oImJj1vAPNfuAR4RYwcNsRZ4AA6BR0QJ52UDxftwINPU1FQJWQ4Gg1hdXS0AdmdnZ8wA\nuLbAoWRHS/w3FJ0jRPaMrTw9PmTXIIr1BJgMBoM4OjqKk5OTEs3pdDplPs/OzmJmZiZWVlbi5cuX\nsbCwEOfn57G7uzsG8FDMAChv8YJvIkbV4be3t0X+MWiXl5djgDoiPtm6iZeKATo8PCxGHtCHLrNB\nxuO0Z0ikp91ul8ZL7nAHD6BfCGNzH9bK1e7wEGPwLpDV1dVYXFyMZrMZW1tb0e12S1Oo2dnZ8t2z\ns7NS8OcDXryrgfVFzgDQpE/RGUQ6zUM2VOglxtxut0uvB7bI3t7eRrvdjuvr6zg7O4u9vb2SKiQC\nSo+N8/Pzole++uqrEqaHXxhbLrh0rZTnnPoAZNEA2Q6J5xlQ68Jgdomcnp4Wfp2dnS3jY3sm56ls\nbGyUngvwAnwMwENG4YvfQ49aXY8CQ6mBZiM+PS6Rz5l8XhiFh7fJUbMI0MXFxdhBB3j9hHS8JcZo\nmoVEKfB8KiRRSFacVV4QkQYKi5aXl+Pw8LAg5NPT0zg/P49GY7TnnvthHDmWlQITK1vngO3JwpjM\nH4rXCjB7FjAxSsttL20k6EVAyPrp06exv79fQmS8N2vFvKKMUWr0JXDOFlDFuF3RbkWDkoY3EEKU\nBseCMmaMP/dw6N5pBReZcX/+5rQC17u62J4gyt6fDYfDse5h2et0bt/rmT0NeNpGrdVqFQDxkHL1\nZ/YabSgcxXGEwzLqCEvE6BCeRqNRQGan0ynzRBQNWTPYIdxK8ydAGeOli9zBwUFcXV2VlFF+P+bJ\nspoVMmPOvABv48Xm+o+bm5syhtPT0xIRQtne398XANPpdEoYHN7y2phfHCUxvzBnyJHl205Gli0D\nLHpENBqjvhdETpApzyGGejgcFaIaSGd+tNfu6CbfYzzZaXCUFlldWVmJV69exfz8fLRardJU6PDw\nsPQmoE6JuQIsAdhXV1djbW0tIj46hr1e75NOjQacHrOjquhKF8vmaID5zFE11hWnkcJVR22cyiQa\nury8HPf39/Hjjz/G9vZ27O7ujtkCOwL8n/n9PfSoR82iOC2EGBpXA/slc/6VZjLz8/Ml3EOVsRcH\nTwpETrgET9L
NJTCEhGAYr5Wfva6I6v3dKD366z979izW1taKR4zCpqDIDAkIuby8LMw/PT1dPGIM\npL09eyhOHaAsYF6UIAwLw9noT09Px9LSUmmYwbsNh8OxfN3x8XFRYnl7m4tGqPJtNBpFCFgjlIG9\nP4MYwl+skSMlzIFRLVET6h8QDniG381fEaPDUFDiBkN4LhSMorRdrc2/bkmatwraUNrIc73HyJzb\n8N7d3ZXw3+XlZTGq/N+GOa8bP+az/MO4HTmyoXS0J8870ZvZ2dlYXFyMVqtVPNXr6+sC7nLRLHPm\nwiR+Pzg4iPv7++j1emW7lAFJniOnKDx2eJJ55zrAGXPrJkjc//r6Ok5OTuL+/r6ANu6JniIq2Ol0\nioeJAW40Rtvz4OWcsrEsAvhZ85xmMK/liAZyjgzc3NyM1aVgxDxPXM/cYJyQCUfJIK+TgbYBmPPm\nXgf+DihaX1+Pb7/9Nqampkpl/8nJSRwdHRVnjTUmKsd8zc7OxtzcXKyvr8fa2lrc399Hv98fM/Lm\nUdbUc5aLar0lOPNXBgcU2Vk2ut1unJ6eFjnEGLOeXP/06dNYXl6OZ8+exWAwiO+//z4ODw9LyD4/\n13znaPdv0aN3vHM40p6fCeWLJ4UiBjF6a8vV1VV8+PChFDDMzs7G7e1tqdDMioD7e2tds9ksxtCe\nMkrLRREAFQuD81osJuGodrsdETEWPkbBIWT2mlyQZYF0+MjfsUGx4GGY3MoU5JrDwYyZGgfCb+SX\niCb4yFo8I8JWJgR6aWmpeNj0+CZlQSFUxEiBGfw55Jr38FpwXdjlKAvzw9rf3t6WKm4iOYCnPEeE\nzgFezscyPpQ916EMiCa5wt6dvRx98XxhCPJ3bIjgI64xDzBGPrPRyrl4fndYErAIj+CBe22t2ACG\nANoXL17Ezc1NHB4eFp71+2Ac/C+V68PhsBSjWb5Q8PbmfD/Lgw9i8u4AlKznwB6pi51YS/4/MTHq\nypZTEt5JQxqI9XEffAw43nLEeEdMogAP8XjWl1b68OLl5WWcnp6ONf8CnAIK4Wn4wfNmGWD+POfm\nE3vIFxcXZVwG0d49xD0nJydjdnY2NjY24g9/+EP86U9/iuvr69jf3y+60dEQ/iXnTRHn4uJiKVrr\ndDpxeno6FoFlfi0D5mH0mMP39uIte5YVdri8evWqhNnPzs7i+Pj4Ex2Mc0OEhcjs+vp6rK6ullQV\nhcInJyfl/Q2cmHvrhN9Dj5qTN7P6RWBemAf0GTG+txWFTqjk5uYmer1e2buKIs7hRwTDaC4bahja\naNyE0uE9sufM5zADFdF4wihhDKU98YhROoOx29ih5O2VuoraBg0BqZrjDLKYC7piuYr17OyseFMo\nAEKKUPYIHdKjGBIwBlBjflE6nvOcRgDk5S1p8AeKl4hO9phyeMsRB5Q0hs2hPdbSDTYc6mfs9gBz\nyBgFAu/wufnHc+jnWMEzRkcLDPKINtDq0/f1OmUyiPQY8tj8d+9GQOlS/U9RK2DV7wuwbLVapREU\n3ie1JoAMtnDh0TGHOdrhMLDlwOkLKPMCR1LT5yG/N9ej/L0nHh5yPhzdAXhHhmzkbDjt3cHn5nHz\nheU1Rzqz0b2/vy8GxcYMoB0RYy3AMfLeGeR1q+IH6+nsLNmj9/iGw2EpIF1eXo61tbVYXFyMXq9X\nABgGzQWcAEmM+Pz8fDx//jy++eabmJ+fj+FwGNvb23FyclLWKkdBPEd5Pg0Gq7x3riEqOT8/Hysr\nK7G2tlY8b94hRziePn1aTovE4Xv16lUsLS3F1dVVbG1txeHh4ZiBh1dNyGCVTXqIHs3I4x2CzC2E\nDoMOBoPSPQjhYgvc/f192XM9HA6j3+/H7u5u7O/vl4VuNBqlF7kRvAsmqBDHa3GehmK2iNE2o9nZ\n2TED5QmHAWEWWtai7M7Pz6Pf75fcnpVQNrygYACCvc2cjjBSBfAwzyhNh2BtPGD+3KwBgzIcDuP4\n+DhOTk4KKMlhrxxdILRJumJhYaEYXjyVwWDUbQxjiYJxsSHz6jA5z6YYb2FhoUQDVlZWYnl5ueR5\n3YSFe9m7tGfNWCLG93tTQ9BqtUpjETw6+gUcHh7G+fn5GDABNOXwbfYocgg8g00rIow8oWbzi7cc\n+Z7Z+4IYa5YLrwdy5L29GGgbfYf0iSIQLTEwQg6fPHkSKysrpQJ8cXExOp1OtNvtODg4iDdv3sTl\n5WV0u92S44Ty/BG+ZO3MSwb5DhPD36urq6XnP+9JhCTrJT+71WqV7bvu/9BsNuPi4iL29vZKEdv2\n9nb0+/0CbvD+mQ+DhKoIi/nS9SSZT6gBYicDOwJWVlZKY5vl5eVot9sxHA7HdGUGzwYSjmwAREiH\n8X2cJRfhuTjNRYisF/lsQD/AiCJp9BE7FyiOnJycLLn8P//5z3F1dRU7Ozvx7t27ePfu3dhZC9kJ\nsw6wvNlxM087nWHHbX5+Pqanp0vtD7tCqH0imjAcfjwPYGlpKebm5uL169fx3XffxfPnz+Pp06ex\nvb0dOzs7sbm5WVJbrLfH7ZoBA9rfokf15BESn9pkhU/Xn7u7uzg/Py8eMRWKVMT3er2iYHd2duLo\n6Cj6/X45Bxnk2mg0irG6uLgoDEb7TAQ7YlRM5Ap0C5lDqgiG28ryPYx6p9MpgoLx4Z2tKK0ICX2i\nkAjlOYSMwrQHao+AsCCMh2FAqA1sIqLk5BBwAAZeEYKAMkRw7bFSP0BoEA8tYuQ1c38UlpWJPTnm\nkmucfjAP4REy574PY7YHxlrZO2YtAB4GAM7VOVVkBYxA4xES3mVOsndpr9HeJuvKOPL4zCuE+QgL\nct+qiA7js7dlvna0xN9j7A4fe/ue28FGjLrKkQ/Gg/S2IMB7p9OJv/u7v4vvvvuugLDT09M4Ojoq\nkSPWLWJUa2A+N+85v4oRstGZm5srBhnDgnFhTb3+9pbtjPDvyspKbGxsxMbGRtExl5eXpfjw6uoq\nTk9Px7rMsb44MXj95nOMoWsXWCfWAVAP+EInUGc0GAwKcHKInEr0J0+elB1KpEJYc0eZqiIK6Aoq\n9eGXLHfmP97VtS9EM29ubqLb7cbOzk5cXFyUvhpU0VMT0Wq14vnz57GxsRFffvllrK+vx9OnT+P9\n+/fx/fffx/7+ftkmmLtkQtbr/p15roqy5LkAOGG8Ke5uNpuxvLxc1he9PTMzE0tLS/GP//iP8Yc/\n/CFevXoVExMT0e/348OHD7G3t1fW0/qUqIcji6xRTms/RI9u5GFme1IoYZDh7e3HDkyEfBcWFoqi\nvbq6ipOTk4KkQKaE+Nrtdpyfnxfvq9vtlqYz6+vrY1sfyJ0iOEatPknM3gHKwV4bjDAYDEoRCc0y\nIkaCjMKEkbgPn3U6nWLkadwDsEGwI0ah5JxjrxI8wk05jJmNmiuI2afPd+xluDIZgQKBu8c0W7sI\nJ/P8rLDslVDd7jnLAgdo8alUdOqan58vhhbF52cwV/AfPOlCPxSFoxR4
9Cg35/z5O/PgvvesB3Po\neg/PK0aScLhznayVDTkFd6xxLpCjVsEAyfxnr8AV+y50dP2FG+24zoMxMRYOD+GeeLXwI+Dsq6++\nin//93+P+/uPhZw//PBDaRt7dnZWqpEjRgDbYWIDOW9zJQ0AvzWbzdLEiTqDZrNZCgKZe9eV5LBt\n/llZWYnXr1/HF198UaJ1FNUuLCzEwcFBvH//vqQUHYa191h14mE2lJZhy4x1BlEUCo3n5+djeXm5\nODyzs7PFcFC7gqNgZ8vkMHc28vAzxpE5RF7v7+8LHzsSFREFAPV6vbJ7ASOP1+4oKHK3sbERf/nL\nX+Lrr7+OycnJOD09jXfv3sX//M//FFDpHVPwoyM/nmtHq5wuyJE2OyMY+eXl5eh0OqUo9MmTJyVq\nORwOSzdK0gv/9E//VNIL3W43Pnz4EL/++mtsb2+P2UTmF17BThqUWH98jh61uh7jCZNiOFGyOQfC\nBKysrMTp6Wl0u93SnKLb7ZZ9iV6gu7tR60YQKx3cCGmh7F0gw6Sasbmn/+YKR8bt0CChXBtzDuXI\n74wXPTMzE4uLi6U/uT3bHMbjuYSEIz7d6mJPACYByPA9h+D5bDAYjPV7j4jyvvagDCxQOih2DARj\nditQ9qBjcKiataFwTgtl7Pexp2uDNRgMSgMNh4d5Hu/DXHnNmFOuA0jY+0dhTUxMlP4GeBBc4/vg\n8czMzBQA5TWyskYBYxANHp2+sFFtNBpjOXgiKDzXXQYBtcwZ6+7QJB5xxPixnHzXW3wYJzLU7Xbj\n5OSk1AUYCDP+RuNjwSN74Hu9Xtzf38f+/n788MMP8ebNmzg+Pi6FnvCot2WZz6vC2nzebrfLutM+\nlJ0q6Jnb29tSEY2Me6x8hvNBq1hCp6RtJiY+HgAD+IuIwovoIMbtiI6jlxh9olR8L/cy532JQmK8\nCXW3Wq3o9XpjAI45ZJ263e5YJXcG/gY5lnvkEp5grI40otOGw2ExvK5LuLm5ib29vWg0GuXMjnx/\n6xMiiy4o7Pf78ebNm9ja2oqjo6Oy35+x5miP+dy1AnaUzEteH+YU8DE/Px/39x93SE1OTsbCwsLY\nYWis1fn5eSm6RqdcX1/H7u5uvH37Nra2tuLg4CBOTk4KX3iLJI5UjixZX3+OHvWoWSNVmD8vbkQU\noaJt7eLiYjQaH3PzVC9jOHzQAeGw8/PzODw8LKE0Cj58Sh2hRYdTs9fKpFopWmizMY2IkrMhBMj7\nOLSKQPBcijpcNFOVh40YR9kOJzLOh36yF5nrC/jMaYWIUR7cCgzm43fSBE5pwKhsb8lFh3luHV62\nUrehMPhxRXzEKFUyHA5LPYPDrgi0wYm9N+4PAGMNHXYEpJJOIprhqnuvgyNXFli/O6kOcqZEM5iP\n/C+G30ANHpmZmRnL42EQuB9zn8OxnmvLaDYqNqZ8l2iY5wKvzlufPKd4n+fn5/Hhw4f461//Gtvb\n2yWv6lbGWX/k+fM8AsIx8re3t8Wjtgy32+0CtgnfEzlDLigUc88KTozr9/ulyQz7njmEB12wvLxc\nOvY5jMx4nX7I7+GIIuthvjJw5z3wfA8ODkpPC56B8WX/efbe7bF6fv237O3786oolVOK0O3tbYl6\noHuRZVK1nOzn8yXo4DccDuPk5CTevHkTe3t7ZceOjbvtyEO60XyOPNlhYU6RKx8sRrE3AAYQSiqC\ndNP09HQsLi4WXdTv9+P9+/fx888/l14QVTUE1ut22nJk4nP0qFvoXEASMV50ZCNCqG99fb00Tbi7\nu4ulpaXY2tqK4+PjEoKPGPWeZ+sC2xJQzjSnefXqVSnyYdsJ1bwgT+/fNUMY7RFyy4od5coOgIgo\nQtVsNkvHKf5GISJVyhh4qqU5YY9wmBWpw+U8256IUbpzyQi4BTR7XCa/N9cYGHE/vK9ms1k6D05M\nfGye42peI+cMWPgMA+HcqhUfwmRPISLGdiAwPyifqkpnhBvDxLWElc0HzDtGwc06rCCc+zNvVAkw\nEQt6PxBm9ty7gInx8rujDnj/roDHE3QVMbxsEMu48cR5n4jxk9DIQRsUe3thrr/IYOXp06extrYW\nnU4nIiK63W5sbm7Gu3fv4ujoqPAWBsxzC1jKys7K0HPg+heie/ALXtnS0lJppOJ6AAAJBWt+Fh0g\nj4+PS7/4P/3pT/HFF1/EyspKifSsrq6WYluMdTaUAFbzi42sDRF/ywYXz5I1vLm5KYVhPviF6J9T\nODaKVRE6P8+GkR9qeTw/jobyDDso6ILb29vS1hsA5VqawWAQS0tLJULFlujj4+N4+/aGZKnMAAAg\nAElEQVRtHB0djcm/UzkeD7zid3YdSn5PgL53i7gjIOPvdrtj0a3b29ux0wkXFhbi2bNn5ayO4+Pj\n+Omnn+J///d/4/T0dMypZN14D9cF+ef/hCdfZVicE8VYLy0txfr6eszPzxcDeHR0FEdHR8WDQikj\n0BGjkKAbUdD2loKNiFHujgW0AKIY8Lb9OQzM/2FGCw5FJLwPCM+onPsg3HhKHvvZ2VmcnJyMCT3M\ngCGwAYYRyO96y1DeWmWGQSgskDa2EaMOefyN8BmhYZ6BESSnjIK1p5KjBzzT6RrejXGZue0lcE+M\nGEbd9+V3PNqMhq2EMKh00AOMMd8YynxvV0pzT3iSZzoClD168qN4fI68cG8X5dk4wHu+lnf0rgbm\n1BETh+vhHRf4mFeGw1EIlegZa+j1dxQA2WA3BEp7f38//uu//iu2trbil19+ie3t7eLhebwm7pW9\nL/M1csSzAYSsf0QUEL64uBgrKytxcXER+/v7cXx8XCJ8HvPTp0/L2CJirB88dUBra2tla9X8/Hwx\nXFT1c84BJ04ScTB/ed6y92z+N+izDGFwaZhElT1hZ/OsDV726A1Is+cLD1lGGbvTXwZd6ElH0azv\n+Bt1QdZB7MZgHfD08eLtNOYIYQbZzn07wubvZNAL3zhVTI3R2dnZWK0V/AaYBRC8efOmRCB++umn\n+PXXX8vOEeQp8ztjf2h8v0WPvk/eL2X0jZKcmJgoJ8qxZYf88/b29liBBegUAQcJDgaDgg6XlpZK\nwQSVyTwb5iTPn9Gcc9gYSi+qCzgyUrSyQXm4YAlFzHdplDM9PV0YhJwNjIMgsPgIOvdCAdgooJSt\nRFDyGcFmgTcYA2G2WqN+zOThXVzmvvV5Hnn3Ki+hSkE49MaYrMBdDMf6ZMTriEMVIvbz4UU8NHYI\noJB8HrSFrmrOHYXIApo9Me7tojbWlDGRowfUGCySc+d6lBhKFHkDvDabzTEw4dAk44E8XyhkN6iB\nF63Ivb7sdllfX49nz57FxMREbG5uxvv378v5B3t7e2Pb5bJXa3BkfmVtUd6A4NwilO+gS6jR+fLL\nLwso//DhQ+kUF/HRg6OQc2dnZ6wa3rUuePccjsKBWawfle2MzfdnB0xVlNN8CXA1r1Erg9G0wfQW\nYIpis/5yNI65toxVGXrGwnd
brVFXypyvR064v5+HFw8BUm0DAEJra2vFyFPXg7MH/xvEQZY3ZNvj\ncxG1+dygmXsaoKCnvZ44ED6RjsZQbEM+PT2Nn3/+OTY3N8fshec4E+9ngPV76FE9eQTNW8MajUbx\nnMhbt1qtODk5ibW1tWi323F8fFya3XDICQUwrqC+vb2NTqcTa2trJQy6srISERE//vhjaUEJGtze\n3v7EyDWbo/OGua9RnQ+XMDM7hE6kAWVAMVHEeHtFvCOY8Pj4uIRpCEfhtWBIQZKACEK19sgBOwAi\n5slV2fbgbAwwzDA1SgkDQeoDxiWygsJHgHkPPBobCubV3jsAwSgbAOY8os8WuL29LQVcw+GwhNWy\nEmYcAELGcX19XeYPz4EzzXk+c97v9+Po6GgswmGPmTwuEQV+Zy4ApQ7/2yjyTHs+vCe8fH19XdJU\nyI4VlmWB+3l3QxWAcsEkwMR7zg3U7DkyBwZogCn4st1ux4sXL+KPf/xj/P3f/30sLy/H/v5+HB0d\nFZk+Pj4eO82vKsrkaELEqEjUNQYeA+FQvocBMe/Pzc2Vngrs1aeQrNPpxOrqajnJjXA/6zw5OVny\n3tPT06W4lJqQk5OT2N7ejnfv3hVeoNIefmLukHHyvVk+qpT/1NRU6fVBao/19U4NdidRdDk7O1sc\nGkcPmFPPuw0KY4E3Hb1DN9jhQQ69VvZW4escwUBmIqKE6CmYbrVasbu7W5rfcJCZQbcr/w2usxOG\nfsw86yhJRIwBJ9YNucJ4A2TRFTMzM/Hy5ctotVpxfHwcw+Ewjo6OYnNzM7a3t0u/k5xnd3TW6VX4\nt6qz6EP0qOfJw0Q5/OQCC6MbXsr9jJlwd75jUm5vb+P58+fx5ZdfFs+KvaFUc/J/ih94njvswbwo\nZaNfkBvKBUF0LslMFBElH2iFmb1SFDiKmvPkEQ4rNAMk5rbK2/FnvFdWHlmpYqRyWDwiyhotLS0V\nsMWWJ55jcFEFKHgOz0WgHIUxqubdcuiKsZIbx8j6nR1Op4AKo884Isa9IxeHoeDzwUY2lNzHNQfm\ncT63ZwY5wgAI5F5WvrybvWYMjrfqubgL8rzC0+aNXCcB8R28lKqoCvfPipR7ofD+5V/+JV6/fh0R\nEW/fvo2Dg4PSpZLq6Ie8mszPOQrEHPEZYNF5fQwT/DY1NVWq36kzIC14f38/1osfI828W0/c3NyU\nU8cODw/j3bt3cX9/H4eHh/Hhw4c4ODiIu7u7UowL/3hnCDKY5dBep8GVOz067We+hN9JS7LGV1dX\nJUzstFOe9/y7jZFTJOYHrz06Ousgyxx6lXFwHWvGFrSVlZVoNpvR7Xbj/fv3pUAz59Qtc5Dn1Pqg\nCtj4c/7v9BHrTco3t+Elwrm4uBirq6sREXFychK9Xi92dnbib3/7WynW5BpHOA1KmUfPmef9t+jR\njDzVpyBXh1AiRk02CJd2Op24u/vY+MbVrRS8gcDX1tbG8tArKyvx/PnziPhoXLe3t4snTVgQBY4H\nhHJE6aLYrFxyzheFaw+af3NVqQ20DSILjaIGkYNoDQTILUZ8WoVrw2WARC4OwGBlgVDhmRhAYXCy\nAcDTXV1djVbrY+X81NTUWB7eisOCAkomJBwRYwApYgRWGo1GOa/bYXjej/lAGVq520vgGXmLmq+F\nB9njz86OVqsVOzs7Za9uRJTjd/OaW1HAHygHwuz2CLjGKN7hVNbXzXiQG8ZrhWglAG9yH8+zvW97\nfhBA4uLiYswzZk2zoslg0n+H/16/fh3/9m//Fufn57G5uVkaoLx9+/aT4lHf0957lQJ2FIi5ysp/\nOBxt5QLIs5WOQ6PIl7darVhYWCi6BmPfaDTKwS8YfAAh1eK9Xi+2trZKWJ9IoQ0qkaOqGgn0EWtl\nHnE6xREpogDUR9jg4TQNhx/7yxN5+/XXX2N3d7dsXUN2oOxc8Xx7xfwN4GlAZcCeD7wxLxn4mh/R\n/+iJTqcTL1++jKOjo/jll1/i+++/j62trU94nXfxGJjbDGIcOXOkDx4z4LT+RX7RHZnPJicn4+XL\nl7GxsVF6tfR6vfj+++/j4OBgzNFDPi3rHoPlgL9X1RM9RI9m5Al9ZBR5e3tbGgt4Qebn5yNiVODW\naDRKuIkWh3RWA7EiBBMTE9Hr9eLg4KDk2jhoxCEX52RcAAI5z8/iZ/QKZU/NRj5/zyEle6lcT8GM\nPSQbCHs9XBcxjkwRNBSNw8MOlZlxjCaNYhEYGt44D04u2HlQKzEDpEZjlMOtUuAGQvCKjSNpCzxY\nC1vEKCfrohq2llGIhOJzGqLVapVtMs5ls62K1ADP4p2gbGwtxF57K2//rcpY8o4+jMkg1FEK0hBW\noMyTvRuvbdWcwzP53RhTjkoYAGRDPDMzE19//XV888038cUXX8T79+/j/Pw8tra2Ynd3t4S2s5Hx\nmMzblpuqaAi84CiU1wneA9zt7e3Fmzdvypnx1Fu8ePGi1AixrYtcvjst0nSH/c8nJydl3m9uPnZz\nQ78wBsbm93YEJntslmuH4TFQyF0Gj/ANhZT9fj/29/dje3s7er3eWF1P9t4zv1R5j+azqloCR44c\n9awyYl5bIrSrq6vx7bffxuvXr2N6ejq63W788ssvsbOzE/1+vxJcV/EL+jQbR3S/x+n7cA/L2UPf\nAcyura3FP//zP8fGxkZMTEyUE+bIy2cAwXrnqBX/eu6rAPbn6NGMPDmUrJSvr6/j7OysKHBQ9+Li\nYszOzkaz2Sy55Pn5+ZicnIylpaX44x//WJo8YBQxDpeXl6U38O7ubjkK0HkQlINz2owPo8ZnRssW\nvCyQmRlsyLmGv2FccsU24TjASJWR517OZ0eMN/QYDEZFIq58tjHAyBsNQ1b2eBM0poiIMs/8eAud\nQ7YIPUAN45OrWz1vgAcbefNMRIwBDdAu6+f88tTUVGmNjHdzdHQ05pHwbjMzMyU8zx5cH5dq4XOY\nmjUeDEZV7vCPFYmVk/OH3N8CT6qIo5INalg7DBbrZpDhQkOAXfYSDU6Gw2FZRxtGN+nAIBnEVPF/\no9GIubm5+O677+Kbb76JxcXFko/c3NyM/f39sdyzwX2+lz0t+MrfYc4zAPIYWWcDV9ZwdXU1FhYW\notfrRavVimfPnpUQN2vJGOAltupSc7Gzs1N4nLGxnqRH7DxYF8DjBo/Z8+X7gI3hcFiuy84HkTnA\nSETEwcFBbG1txd7eXtGvzoVXGRqvDe+fjarnJ/+Yt20szX82xDx3dnY2Xrx4EX/5y1/i22+/jVar\nFYeHh/HmzZs4PDwsdoR3tcH03LK+roNhXu/u7h4s9GSMTnNVgXHGHfER0G5sbMS//uu/xsuXL+Py\n8jJ++umnODo6Ktso7cHnyJrHbt7Ia/D/vZHPuZqIUVUlDIyhYVEIYZGfWV5eLmF6Wqc2m804PDws\nXtfFxUWcnZ2VbTFnZ2efNB2wQHhC+RcUT5gsIso9vE2Kd8jI21T1mcPS5Hh4tsNOudiJ
az1HhG9h\nHvrHUxxkL9vAoMrL85hdoEOuifPmCVvRJ985u6p7cR++l4sPScEAcIykXUxmclTCwC17EkQgFhcX\ni0dO9fPt7cemFVRAsxZEi/D4HR3JCDtiVAGPl0UUotFolI539riyB2zeY364X6fTiZWVlTKHABWe\n7bAj98iKwVEhK0XmDR7CiBsgWGHmcZt877m5uVhdXY319fWyJ/7o6Ch2d3dLGDfzXjb4VTxlEIpn\ny/gbjcZY90vXEXi94ClC1ldXV6Xg9cmTJ7G5uTmWg6ezIQYN4+ADpDAKrrvh+XmXCGM2YMYQslZV\n4CmDy1arVXiBz9CTCwsLsbKyUorz6FXBuju8b12UfwjJI1tOb6EnGXuuHeCe2VOu+t080G634/nz\n5/HnP/85vv7661IoTfTHYDvzi/mKsfI9H0lNpCanAjNfGgB7vPkZq6urJUxPTQ+FnAaoVWCW+8Db\ned3RBYz/99CjGXkzj5E0n4F2CJ31+/1oNEZHD2bPkIphttZtbW2NGXn2sDpEloXeC+pnZGPo/I4X\nqqoYJVNmbATUChYmsAB5roxcjUwjRsoOI+p7f24t/L5+B8ZECBhDSMibPBxnw+eqaNB7fvcqYbcH\n6rHbc/A8eOx+F98TvmIMjA8+QuCdAqL3dLPZHDNCzIXvZ16pAhqu7M/zwrhzuJzn2cizroyPIi/z\nrOfLnlKVMrGhyXLg93L+r8pIZpDia3keh7Wsrq7G9PR0XFxcxO7ubqlxyGFKE+POleUYzCyD+fkZ\ntNuIMX68an6n/ezExETptEY0iP3tpBkto3jSDl1nQ8lYnB9mbblXNlAP8QZ8RVEdUUfeZ3p6umwZ\nXlxcjLu7j01YaKFaFY20rrOsmf/xfvM7Ze8+ryP0e7zQZrNZmiV99dVX8fr162i327G9vV0OIMtb\ngTM4yfyDw8D/zddV/F0lF1lvVa0rhxZxuI7BpO+bZbCKb/mMZ5tPDHI/R4+6hQ4Gx3CxYHyGIiNU\n9t1338XKykrZwrS7uxv39/elUhU0/eOPP5bKVt8352wzY5oQQq4n7ItgsWXPnc4ccn8ImRrURIy3\nTkVAGa+3m8Ek9ibx7lA4GBYE3u88GIyqjO0dGJGbmfDUaOVJFbGL43gX927n3X2/nCPMc8v/3ft/\nOByO9eL3fmJ3gvNeWj5j/hz5QOnixVxcXMTc3FxpnoTCwCPiBLGdnZ04Ozsba4zCmJhzK3reKRfV\neAudvTTfL3slBgsQxWIoCJ/yxv2ILtmz9H3yTgcMH3NnQwWfeacA62gjkUFXFShmt8yHDx/i/fv3\nsbm5WQoXbRxs1HKRkcPtBoXMn9M+6BcXuMGXTp3k93SBXESUaBhOBLtwMBzI2M3NTdkO7NQJfGC5\nJr/PZ+aXRqNRCgAt29kgoCOXlpbi2bNncXl5WVJ76Ktnz57FixcvYm5uLt6+fRs//PBDfPjwoVR2\nsz7mExsQh9dZE+af5/OZK/odPUM+zSeZbPDg84WFhfjmm2/iH/7hH2JxcTH29/fjP/7jP+Jvf/tb\nZQQ18wpjRz59LDMyjy7j+ojRVsDsPFR58Bl4NhofU8vr6+ulK9/JyUns7OyUk+ac+vXY3VfDY2D+\n+B7jd/+Kz9GjGXkWqcrLcx7t7u6uFMFsbW3F9fV1TExMxOHhYezu7hbBPj09LQJOcZ1RnpVp9uL5\nP2T05/2d/JsRVF5sFuIhhvY1/vHiWoHZ08NzqvK+eT8/114n11jpZ6/T74JigTFd7EMVPeABsMN1\nHpcVFJS9hewR5GhIlYfhNbKH4bygx8O8ARJRqO7TcHl5WaJG7CnngKOcd85rbv7yOlmorex5H29j\n43OHQlEA7DDY398vB4y4CYi3DHIP13eYz7PHksfP/ZgHA9SH+Nn/mq8AiHt7e2Vu3r17FwcHB58o\nqnyfvPbIceaFKjJYNXCwTjBwYWzk013ARyrKIXoABGuV6yicv+U9kHN/bt5gLXLEJ0cv4TnvQWcX\nAO2IkQlSEFT+053N82idkuczjwF5zjINn+d1McBE93zOk280GqWN8IsXL2JtbS0uLy/j4OCgePHu\nBYC+ZQ091way/txr7vXzuD4Xccj8yXvC6zioZ2dn8d///d/lrHjPM/ORnU4+z+R5/C37Yno0I89k\nYNQ8+Tnfh6HnSL6ZmZmyt52F2d3dLcLiwjKHZ6xArOwyY5pB2OJl5dJsNj/Z+11lVPx5lUGzQLFg\ngBYXjrkoK4fEcsiIayDfD2bOoXzmxorEwnN9fR2zs7NlzfDuKeKjMK7q2szUfncbFr9X1VqAZCF7\nHPb+eTd7H/ZMEGZ3JfOhKv1+P25vb6Pb7ZZrrPQZK2PI482GAwXMXOUaFI+dz4zeWSPOYjg+Pi6V\n2pz0xnyz1kQ3fDBPDh17LXgX81OOqFSBr4e8mmwUqOf45Zdfyqlj7BM2APW98j2zTOV5rwIsLmKz\ncSXSlHnef7c854Iy1ohalIgo7bBZu1z053kBzJtfAAQZbOeICDzCe3K4DmfY0ygJ43V9fV22bOG9\nV4Fx5Mv6yGAiAw2T5TunpbLuc+TI8+3vNpvNmJ6ejtXV1bJr6vj4OA4PD2MwGJRCWM+3AbhTnYzf\nMpABC5FUR9IyT1YB+8yj7GK4vr6ObrcbR0dH8f79+/jP//zP2NzcHNNJ1h3Mu50v87TB/+dA7UP0\naEbe/dtNVnauaMSIEHYhdO1roKqtEL4vEwhDGAAYWPgzGJSQHWOr2gZjxvUYqhYHzyFiVGWdDcnE\nxKg7Fv+HYXKlPJ95Ly3jYswPIVTnEjO6JEXw5MmTsba8jAND4FA8/9pzZc6zsmk0GiVkadDhObfS\nzjzDs5xDd6tPGynWDwNMAxbGcH5+HjMzM2U9vBWK9+C9W61Rd6+MyofDYeFzxu5cJutVFRJ1+sVA\nl3knd3xzczMWxWC+uF8Gs9mj5/kZqEREiXCgID1GK9Uq407qBX7o9/vx888/l+OTqSfA+6uSNys6\ne5z+vcoL8nZLV0T73c3jvA9z6RSiUz3mNcbK3zBeGSRQtOhUUjZ4jJ0aANILjsJ4DjxmahqcRvD2\nVWoIJicno9vtxvHx8ViqzQY2g5sqQg8YaPBepAPtpHmunSb0c7JRJWW2urpaIobn5+elyyRpQct/\nla7zPFmvWrcCuEgBelxVc5EBZZabRqNRDs2ZnJwsnVkBUowng2pH/jx+844jyL+1TqZHM/JmhohP\n0XuVkJnZ+bvDF1xfpWw9IQ95Iv5+Ffq0d51Dr/n+v4X8PIacn2F+Go1G8ZKNMnl/G0uus7G3wrJg\nZeVscPLQenirG1uU3DioKlSZQ6Wen4xoGTtjNEgxoHJUw3NnfjJAiBg/D51xAyiGw2EJebs3A7zF\ne2F4bYitIHPkwMqGJiXwC2PzWlSlAbhvxCj8x2EWAAvWxWAhy5b5OvM7z2Z++LFht6dZJTv2SogA\nuFZiOByWOoher1ciceavh8ZpykqVMQ0Gg1L
TkvmlKlwPf1bNtWXL61oVGmXuPme4kNGq5wIS89gN\nGv3eOdoFL9vxyHPodAjbQPN7V+muKl6JGDXyihjfosu7ZV4xIM58ntc2YhTynpubK43NTk9Po9fr\nlXQDwCaH/as8b8+5jbznyLJTxd95njIoMUhrNptxcXERBwcHMTU1VdobW4/kH/OKQbR1nMeUI8e/\nRY9aXQ/S9YsyWfzNRi3i00YAvjbf/6Hn+vesVJhcTzDbLPCgJyZGR71mEJGNYx5fBhB5DFXXWeCz\ncXGIB4WPgo6IMfToIiDm2sxmw5ifx5xnoJPfPRsG7ut3sxF2PtpG2coDnkChZS8Upej3zQDGhssG\nmOs9lxRUejtiRtm8h1E233OYPRseg4Iqb9VAwWDMHrwbAhmQ0BHPhYlVYNNhv8xveODcD5CZ55Ox\nwkMGxLmOACM0OTkZ5+fnpV9DjgZkY2YAgLfMNq2sD/zs4XBYxl5Vk5DfmWdWPddU9ZmjHqyD75vX\nOss4PMuz4VOiRtaLjjjyfTdU8Xa+HIYnh22jy1gdxrZ8VTlOvibPWS64Q0a9hp4bnuE1mJiYKIWv\na2trMRgMYm9vL/b29sq+eHRZlUHORj/rOHSkG0bBVzm1Y8qgMD8DUOt53N/fj5OTk1I74yLFh+Qy\nOz8ej7ey/p8w8igxV8NGRCnsMmNEjB+SkA0PwsH3MpKEITxhRnWZMWBYnuW+92ZuGyLGboHgd+7L\n/UwZWXrRyataMC0MRnk2UPyN6+1V5SNzWQv2AmfBJs9EmsTXOg/uOgLeK78TY/LhIMzpcDgcK1ph\n7DyDecohK69vVt7wUc6nZkVjnrBn5Xdi7ITnud5rwu/mFzw5KyWe6eLTrPSp2uYdc9g48yuKy9vK\nbDj8fL87a3p/f1+MMgqQfgU2LqwtOczs5VKU6W1hBlrD4XDMgDHPlkOPk3fimewmMcC5u7srz/UR\nvE5rGTyZN60/DHAcXbPxZXzWE/CL14q/mSe5jnnxWvs9bZx5pvUifIrs8V6WC68TwM98yrwbwPo6\nOwN57c2n8HnEeBtwy1tOgXA9+t/AzpGku7u70tFuc3OzdLiDV/1Mr0cVsLKeQH/C29aNXMc487ki\n2eYwV3l3EqCcfi1EWhyNMV8yFssKugH+gqfR517zz9GjNsOxcbGCxKgMh8Mxb9kTwQQRLqYxAEyG\n8Nub8GLxN5jeChjhYEFQIBQy8feIUZgPI4URyOPzfkmuM8MxB1aYND1oNptjXixVtRHjW54iRkqD\n+bLS5fQttgNxP75ngWGMdPWyZ+YwI0obAbYnA1ngOerSPMB88bnn3EJmAeaejN1rwb1nZmaKd0Ge\nFoPv4iaDP+YHoeX9IqI8xwDQz+Mz9zMHABms8X1Ajes68nrwWS6+9NyiqJAbK20rT98PYo818sD2\nrogoDWF8/ji8PD09/Umb4eFwGNPT09Fut8s72qgwH4AeF4Ga/21czC/MKWMxiOI9XGzFdx0ZQkZZ\nu6x3LIsuIvP6GrjnpkeO2HjXh2WUOeU7NGUxuERnIaNsnWQ+GZ8BEetmQAwfEWlwtC/zMe9l4Mf8\nOJJnWeaHz+yUMX7rTne6ZJ2QwxzJ4GS3m5ubePfuXWxtbRUjjzw5N08UYDgclsZpEPzG74zJDpDP\n0UBGpqenYzgclpSpt9uxhnbUkGdqcdw4CX2FbDqNw9gdZcnrgTwiB7zPb9GjVteD2kB5GI5GoxHT\n09PRan08htZo2cbF4RV7Xt6rapRohBbxaY6PzxwaYywzMzPRbrej2WyOFZnZKPN8BNQepA2nlYTH\n7ZCM8132zrJHyLU5chERRXH5SEnGAjN5DXhnKySEyR4OCtvj43cMGagTT5j5sodnj4trUd6sOQ0l\n8ESysYLZjXa5F56ZPTQUQ/aSrEythDHSvDeGx0rMIbZGo1HmfH5+fgxEMj5HXJrNZjGyRvWeY8bg\nsXPf29vb4gn6nnzf3po9WBsBezL0Y0fR5u19Tg/Rx8HbsYgCwIMGGuwEcKTCUSr4wF0vUcb2dB0x\n4h7sZec4WOTcgNyGhOdjkA1iyaXyfBwHeNZGnHExj/bCWcucYrDx5+8ciIRx8LwwXuZ4YmJirDtb\n1iOONGYnyAYbHdFqtcbqa+xNW18Z6KEbAHXc03wO3zl9laMrORrXbDaLE/Lhw4cC0nu9XoksYSPQ\ns9gI66EsU454MZ90t4wYpYO81swF/JELZy1nBuc3NzfR7/cjIj5JDzAf9EGwvoef+J51POsJr9BB\n7/fQo+6Tt/FFufOZW5paaUxMTJRz5r1tazAYjEUBfFQnf8OARYzyHw6r8XkO8yD409PTxeOxsWTB\nMcLudZ0FxUDDIUAUEZ95fhAOGATlbhRZBVxAfRybeXFxUfa2c42fg4FDSZqBs4HIHjYKE8Vi1Ilh\ngykR0IiRQnc4DCWDkkVI7FWwNvzwvZxOMYL3HLEDwaEzhwvhF3jOBo53MrL3c4nAtNvt0iDFOygM\nqHhWbmjE+qHQzZOE8Jh/DKHHDqG4UJCsO+9hxWh+sSfK/FjJ23P0/x05cNiUtbZB5HrGxxqytgZp\nyLVl1wYaBdlutz8Js/o6+Bod4oYwAJeZmZniJDx58qRUuwO6OKTGId+sxOEJvK6cJvK4ARf26P2e\nOd8L/9kQs3420OiaiChjQHYYn9Mz2atm7R3dM3hCR+OBXlxcfDJ2vmtPl88tN0RlDKK73W55B/aY\nY/QjYizaBdiyUSXSZEKXwi8zMzNxdXU1lkL0NdaleY7Q4XaqSGNxyFq2MehFA2nbAUeFGA+6yzod\nUFhTTTXVVFNNNdVUU0011VRTTTXVVFNNNdVUU0011VRTTTXVVFNNNdVUU0011chGWk4AAADtSURB\nVFRTTTXVVFNNNdVUU0011VRTTTXVVFNNNdVUU0011VRTTTXVVFNNNdVUU0011VRTTTXVVFNNNdVU\nU0011VRTTTXVVFNNNdVUU0011VRTTTXVVFNNNdVUU0011VRTTTXVVFNNNdVUU0011VRTTTXVVFNN\nNdVUU0011VRTTTXVVFNNNdVUU0011VRTTTXVVFNNNdVUU0011VRTTTXVVFNNNdVUU0011VRTTTXV\nVFNNNdVUU0011VRTTTXVVFNNNdVUU0011VRTTTXVVFNNNdVUU0011VRTTTXVVFNNNdVUU0011VRT\nTTU9Hv0/y8caoZvUtlwAAAAASUVORK5CYII=\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Current loss: -2.340148\n",
+ "Current MNIST score: 5.376083\n",
+ "Current Frechet distance: 64.949997\n",
+ "Training step: 400\n",
+ "Time since start: 2.269744 m\n",
+ "Steps per min: 176.231304\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgo...[base64-encoded PNG image data omitted]...
l5WVVKpW4n0P1aTmi8/CHXB9NybPh0tXhI2l8qVaraXd3t4nAYer+\n/n6Vy2VJUnd3twYHB3VwcKCNjY0odyHmjLV0cXHRJNyLxaImJib06tWr8LCcERymhliPj48jbuV1\nq1itHtsns5X6T2JawH07Ozva2toK78u7qkHs9FLH+KG0iOQqh2B9vPzN/Wq1mnK5XMwTD8mZGuXu\nyADJJWNjY1EVUC6Xo0Pc/v6+lpeXY/xk4iMg2traVCqVdHZ2po2Njab6fxeyMJPDj3hLDi+jfPiu\nx7oxKMhubWtrU7lc1vXr16OaAQuY0sCtra2IDzcajaCdkZERFYtFZbNZ/eUvf4nnMIY0fOAX40Xo\ngOgQU8QCp5QK9AIh2ypuT6b2V199pd/85jcqFAqx/n/605/0z//8z3GCGALEwzZe3okQYRytBCGC\nHQXu76XjY/8uLi60s7OjgYEBTUxMBMx8dHSk7e3taKeLcqIhVCaT0cHBQbTl7enp0fr6upaWlvTD\nDz/o+fPn2t7eDmPAa41dYTEGaAbaRr54fTbjZg9RJuPj47p//77K5XITAnB+fh6JmPl8XgsLC3r8\n+LHm5ubCsCqVStrd3VVfX59qtZomJiYiyRMjBI/OETicG+gAgwIl73lKjsixHpSCXr9+XZ999pn+\n43/8jxoYGAjk041nd5iIc0OXHAYG7ThNsPf7+/tNcgb5TegS5Ifve0vXTCajra0tPX78WOVyWUND\nQyqVSpqZmdH9+/c1NTWlgYEBnZ6eRkjVYXQge8YBrJ7L5XR8fByxeMbHM1OPnt9ZExxF53HPn/H8\nMNb86OhIU1NT4fBRRXR6ehpoDUl2Dx480NTUlMrlciRNU/59cnKiZ8+e6fnz59ra2godko7ZkwVT\n3vu566PC9Q7TSu92sMJaYQO6u7sjHgMDlkolFQqFiN+RCUvMHyL3sgwsM6DloaGh8EJcyEpXZ1a7\n0kR4Q8gQhb+XenMwCXDn8fGxFhYWQlkjnEmU4mdbW1ugB5VKRZubm9rd3W3q2NdKybjgcEPElZN7\nyD4HV8IoJ9p4lkqlSNTBG5WuyrFANuhSxXPIl0C5uEfgXkNKwKlH7HSSGlpujHV1dWl/fz96D+Ax\n8B1avK6srGh3dzcQlQcPHmh6ejri+b7GrTxg/597iFwk37nxQYwepe50jrfqzN3Z2amJiQl99dVX\nkSxYq9W0srKix48f6+XLl9rf3w9oGg9UuuqgB7zLs0BDPCGP+TqUCH+6MehGsK8BPz0Ja3t7O4Qf\nIR/CCO3t7YGcEHPu6OjQxcWFtre39fLlS62vr0eMFRpwqD0VhqnidyHPXnAPPCQSMj/99FN9+eWX\nmp2dVW9vb3h51Wo14NSzszNdu3ZNi4uLWlhY0NbWVlMPiGq1GkjQo0ePAlVi/ChALz2Ezl2pOC/D\nJ/Aj9EfGdrFY1MDAgGZmZjQ9Pa2+vj5lMhmdnJyore2qwx33qVQqqtfrUduNJ4xHm9ZguxHl1Q38\nj5+eSOh74rSC90rCcX9/v/b29rS3t6evvvoqUD9kAIhgNpsNGcI+ss/+v/eVF6cK0XnU5RAyzZ0K\nN2TfvHkTSZfVajVo2vMWyuWyhoeHAxkaGxtTqVQKY4Q5LCws6OnTp1pcXNTOzk5TMqbLwBTxScMu\nP3d91GY4LjSlK4HuAg7lgSeKQkfR43VRq41FTozEma9er0cdOpt67do1jY+Ph4cDU/n4Wm20z8Mt\nwBQqwrMgzLC+vq7Ozk5Vq1XNz883ZeNTM9rT0xMv4m4HBwdaW1uLrGm8qzSZxhWOz8XH5eVwTuCp\nIgVu6uvrCwHisJknNKW5AOl7zJE4O/E2jB5/ts8n/ds9eK9iSGFv8g0GBweb4lg0oDg4ONDS0pJe\nv36t4+Nj7e/va3t7W7du3VJvb2/EUt0L9NhjK0s6VfTMn6QkOsEhVElOQoDyDIfiOjo6VCqVdPv2\nbf32t7/V9evXlcvldHR0pKWlJf3+97+PRk+EhugKmXrcCEfm4c1byF3x+DvKCCWfGmLpxd729fWp\nv79fbW1t0bUOYY3higc6NDQUyWQklp6cnGhzczOa+qB4HCZO1z2lcebIWjJ/DCCUUWdnpwYHB3X3\n7l39/d//vT799FNNTEwEZAvfbm5uRiveer2u1dXV6PNOsxtCaYTnvDWtt7qGLtLcABfuTkvwKnNl\n3wqFgiYnJ9XX16dCoaCJiQkNDAyoXq/r8PAwlMmbN290cHAQ8mhxcVFtbW2R8IiRl5Ydu9OFHMRo\nZLytlA6ffZ9CqtVqsW6Ueh4fH2tqako3btxoQgHZO/jl4OAgDgZCbmAsewkde9xKlqQ07LTDs9yY\ndRpLZT35FVRgtLe36/r165qZmYl8CMKXtVot5luv1/Xs2TP96U9/ioqYNKTi6+V6spWR/VPXR1Py\nbI7U3AiCwbNJJGFRxoEAKZfLGh8f18zMjEZHR4NpPEP72rVrAY0R/zs5OVFXV5fK5bI++eQT9fX1\naWlpSfV6Xdvb202WnC+2C3jpqvGGl/T42CEWLNalpSWdnJxofn6+6bMwV71eD0h/YGBA4+PjAVXR\n5nNtbS1aPiKAPYHHEwSld4/E9ExvJ168yFaCsq+vTxMTE5qZmYmudpIip2B3d1dbW1sBl6NoSF6C\nGbPZbNQZA9vC6ClEyHgxRnyckiLM4OufMm02mw3DDtidnIPFxUU9efJEz54909LSUhgGfoAJXq9n\n8kJfGF6uRFuFIHzcJPSl3QaZO+gSRgXfKZfL+vLLL/Xb3/5Wn3/+uUqlUhikeBRdXV3R0pYyHeKD\nZHTjbWJwSVctZ11wIVBQiClfMif3LqQrL5ls+fHx8Six/NWvfqVisajvv/9em5ubOj4+1vj4uCYm\nJjQ5OampqSlNTk6qq6tLx8fHWltb0+rqqnZ3dyOLHHp+n3BzOsZYBoWDR+jjgPLHyLh+/bo++eQT\n3b17V8PDw5ETQFnUxsaGlpaWwkikEsZheEfCMGwZF4YSmdaUrblShXdaISUu4KkuyuVyGhkZ0Z07\nd8LxIYTz+vXrQP0ID7hj8vbtZffBgYGBcI6oy6a5ldOpX/yN/IbW3SFD7rgx/D5Fe3Fx2a78+vXr\nkXVOVQ4OGbzG/IG5WRMMU2SNJ/W18uRb0S4Gl9O3j9+NdqqH7t2719RXgVI8Xt3d3ZGDIylamm9v\nb2ttbU3z8/NaWVkJyN8RlNSLdwPQHbQPuT7qUbNssvf9TWFwL7NCkGQyl/XqMzMzUW/pWdq+icAo\neLwwHaVgCFjOeU83OA0deMIaY4WwUpit0WgE1E7cnWQ8n49n+XMKFUljCCqscQwCR0E8CY33UUC+\nJu41pszrELR/D6ODGlasZnIKNjc3tbW1FYYUsPTg4GAoeTxWwi29vb2h1DCW3Mjw9WdO/jmv8W4F\nIWcymWAa9uX09FQ7Ozuam5vTs2fP9Je//EVzc3Pa2NiQpKjPRki7QZcaGr72KQKBkeAX
dAAk7bAt\nED05EghP6LZQKOjGjRuamZlRoVDQtWvXYp1I6slkMhoeHlZ7e7uKxaJGR0ej2x05ELQmZq8Jw0hX\nkL4LbNYghSz9gs9cAGKgYuR54lSj0dDq6qr29/c1OzurqakpDQ8PB62DWD1//lwLCwuhoDDCUoXh\n77lAzmazTZ3ZUKj1ej2QKLLSu7q6IgGXUBT7iIGOl8V6YQiBTjhP4ZjUajV1dXUFcoKTQlmh94SA\nZtLEYF9n9gxDnTLb0dHRQB02Nja0tbWl8/NzLS4uanFxMco+GQ95EiBVoGmgEZ7Vnxqsqceb0oIj\nJ6zTTyki7kHYBkSCTHr4Gz4my52wjvM8aG3aM8Sf43/7vFDy8J6P3+fHHPHcx8fHQ96VSqV4uTwG\ntWJcVISRgLq/vx99K6CrdIwulz0c8qHXR1PyWGRe1kJMwusPEaIeByeD++bNm2H9YlEDe6MYJTUR\nDETuxOANDbxe3pWG1NzGEyWClZbGgfgOMRziktlsVjMzM+rq6tL6+rqGh4c1PDwcHiXjcUTCYXVK\nUdxI4mI8eCIpQTg6wlwYq8cqnblRNggyElX29va0sbGhtbU1VSoVSQqoNpvNxhkCfryu3xc4mXHx\n02G2lJi957onVGHU+D0QmjSKQTB89913keSCp5jJXB3JW6vVmurKacaRVoPwDOgFTyAVIOwLcLhn\n1r99+zZCBdB+rVYLxUj5Igbo7u5uZH13dnaqXC7r/v37wTNUGJDvgPHJfVAsbqx6MhbfYW4om/ft\nh683ArCvry/aNNNQBM+zra0t2pYODw9HnDKbzYYyX1tb05///Ge9ePEiYvGtYo+tEBMfB8pXuuL/\ns7OzONeCHuJbW1vK5XKRl+OGI+WhFxcXkSnf3t6ukZERnZ2daXt7O4z1VOHUarXIqkbJE6+lhNFp\nVWqWGx6Dxwjw43OJxff09Gh1dVVzc3Px/fb2ds3NzWlubi549+3bt1EVA81SqkhS5Pb2tra2tlSt\nVt/JRXFoPOVXLuiKqxUSkO6h9z4AYcWRky5RO0I+1KKTFOtycmdnR/Pz802Jfq2e18pY9ZCUy0Xm\n4HuDIcZ+YiyPjo6qUCiEfkBesoeEKkFxGIsn9KXjcyTH7+d0/iHXRy2hk67iN5744BnermgPDw9V\nKpU0NDQU9eQkaUAYeJT+jLa2tihfI1b49u1bbW5uam1tTc+ePdPKykpTtrl7CZLe2RTGlcbiU9gY\nBkPB0wkLNAFBi7dDXA2lS7tSyqM8ox7llFp6raBv6QrWguD4fCtP2JtP0KY2m73s9DU/P6/nz59r\ncXExyj747OTkZMTuMbZqtcvKA0qoiEt5nXur3AJ/35PsWsFXzhAgCdPT0xobG1NXV5e2t7e1u7ur\nJ0+eRC4Elj/WOX3sHTHwBkvQBuNLx+nKkJfHkjOZTHRmxFgiCYm18HAHJXzAljs7O8pkLjPtQYWm\npqaa2r0isLwZTb1ej6YtbohAe5QLoUzgIz6frn16ufAjjs29oGMQh0KhoL29vVBQrBHrTKkcxmz6\nTF/bFE7lRZisVrvqS8A8cBB8jHjZ0JrnlcAHbiCz1vA1Moj5zs7O6tatW3FPvkseDgf2eNKpr7vP\nOaVvlApVO8ggMs4xlkAOvda/XC4Hb9KcxyHo9DAW9xxdRqTKyOUP1/s8fv8ORqGXmCHzoL35+Xkt\nLy839XenEReGoyQdHBxodXU1Egzfh0YwdkdBXdG6YesyBrlJCKdSqWhtbS3QNb7LfqcvZEx3d7cK\nhYJGRkZ048YNVavVpiTwVusKbaTy/H1rm14fTcmfnJxEQoJ7UQgdX2gYGo+K2A0e8tu3b5vaSKI8\nT05OAv7mtDTgOITK8vJyNBKhNKeVkpeae32z4O79pJ4PzAsTcioXghg48/z8PHodz/yfjkjEzg8P\nD7WwsBBHhvpRrikRpGEEv4CM3VN2Je9zwrPv6emJOHpvb68ymcvWjnNzc/ruu+/ivPLj4+NINrp5\n82a07HXGODw8jMoAkg09ocTrnNM5SVdlOozThWCaN4EQm52d1eTkpHp6ejQ/P6+nT5/q1atXWl9f\nb8oDoCvhzMxMHCBBqKharTa11kxj74zfDZJUyfN5Qi7MB8OBOSBwyTnp7OwMCLlWq0XXNyD+trY2\njYyMBCSLp+cVGd3d3drb24v5eqKi06yPE9r1piZOUz8lXOjoNjo6Glnemcxl1cjw8HDELoHNoQ/G\n4gomzQXwkJKvcbofGFB8znss7O3tKZvNBurR1tYW2el4ya5oec/pEzSHhDFQHpTv/fv39Ytf/CKM\nA0mBCjx58kSrq6vvGE5uoGBIpsYstEo4DyWPEjk7O9POzk6ghhiExWJRmUxGExMTmp2djfyN8/Pz\nSArFOE6dmlaOi1+uQF3mpChoqwu+KxQKQSveB+T09FQ//vijnj9/rqOjI/X29mp6elqZTCbaxbK2\nyBdyC5xeUhnufMuaY0Sw960cH9Ag2tC+evUqckmOjo6inA8jIkU+UPQ4QyRrzs/PR2mir2vqSKRo\nwodeH03JS1dlIzAQsVbpXYsdpcPBNPR9J0GH+BqJam/evNHm5qb6+/ujrITDElhMst6p7fSEO2e8\nn7JI8eA8acK/40IKL40EDS9hmZ6e1szMTMyN6gAYm25gftCG1HxUrxtE6TikdyEnxp8yNOvd3d2t\ncrmssbEx1ev1qNmnhW21WtXx8bE6Oi6Pmr19+7YmJiY0ODio3t5e1Wq1aDm8tLSk9fX1qKOXFDHq\nFE1IBQd/A82lUD/zcePJD89pNBrRdQ0vwUM19JZ++PChJiYmAk6rVqtxToALY88XSNGe1NJmzCgM\nEkDd6mduCGVKQTlpcWRkRLlcToeHh9H7gVr/i4sLHR4ean9/v6ktJ1AiRypjwGJwsgZ05UOxeJKX\n067Tlf+P8SMo19fX9ezZs6ZQnN8brwXkjX3s6urS0NBQzDENJxFecNpmjZ3veI7HQYljE8fNZDLq\n7+9XsVjU2NiY7t27p+vXrwf6BOpCLkOj0QhFmMlkmsJV5JWQ/Dg7O6tf/vKXunv3bhi65+fn2tzc\n1OLiopaXl7W7u9vUzKZVEnLqCUqKUCRjpxaeLmsDAwOxRihAHJze3l6NjY1F/X9b22X/CrLCaQIz\nNDSkjY2Nd+RfqnDSn9CE7wX7B1rrF/vpFTqVSkVPnz7VwsKCNjc3tbOzo4WFBa2vr0uSBgcHJUn3\n79+PfAgQwrSXvueVoLidh7ncSXAaSo1IPgudb25u6vHjx1pfX1elUlFXV1cgpegZjp0tFouR70A7\n8bGxMa2vr0fSHmidP5vfoQkQYBKA/+rhej/kgritx8l9U4jrUauNl5DNXtZH7u3tNQluGGtvb0/j\n4+MRj+3r6wvCQwgAxdKlKbXc+Okeb1qag0f6voQPPsP9rl27pkKhECVyXV1dmp2djWYg3jcAQUDy\nWBq7SQ2JVpZrKqjdMPBuSg4r4y2QuYtwI1u
bpg3E32dmZnT79m2Njo6qWCxGExQS3vb395vWgL13\nL9jXLxXeLuBhJlcCGCbE9TgnPp/Px1gRyKAkNKy4fv26Pv/8cz169EhjY2MB55KJf3h4+M66pvv7\nPs/Fu/BhODlt4TWQtDgwMBA9vumgNjIyEgIV75BuiXTMOjg4iNPRBgcHA84cHBwMSBFY3KsTgLQR\nxu/zEFKPmfd8LWq1mra2tvTy5cvgK5S2d5XEuyb7mESw9vb2yFHh4CDpUnlwj9Sjdk8XGiGT3XNo\nyC3hRe8BTj+jrpl5uhdI6ABvfH5+PhIISeArlUp68OCBvvzyS/3yl7/U9PR0xL8PDw/16tUrPX36\nVJubm2E4tFLmTudcjJ+QHoqZuQwNDcXeUu4HukAWPfXaHArjDgfdCXt7ezU6Oqr5+fkm+eM04Io0\nHa/ziBuwbqi7LJKuYu7b29t69uyZFhcXNT8/r/X1de3u7kYfEWTF1NRUyA9kLo4PuVYY0/SkoBMq\nTlNqoP8U4pAiGORq0Bhpe3tbBwcHam9vV7lcDm/dcycGBwcDrUJ34YhMTEzoxo0b4XC2MjL42xU8\nrw+5PpqSz+fzkZnIghBrBG5C4dCv+7PPPtPDhw+bysv6+vp07do1bW1taXl5OaAwFNXFxUVYvTA9\nHkS9Xo8scU82kZo315OQiKsj1Ihvu/J0QpKuGkV4q9hCoaDx8XENDQ2F50tcTFIkYHmylI8/FXRS\n62ZCzMHhYEq36vV60wl0ICu12uWRjsPDw03Hr2LBVqvVIDj68t+9e1c3b96MxLB6/bLhD2U89fpl\n32tgaqxdEBA8U+kKnWBOCDmvI3dExuOK/f39UZ6F4H779m1kzwPT1mo1DQ0N6caNG/qHf/gHPXz4\nUNPT0+rs7FStVou42/b2diQ0YgxBDw7LQauMzw0Pksugd/YAOPzNmzeR0zA9Pa07d+7o008/jWNb\nh4eHlc1moxSwWq3q6dOnWl9fD/pFWQ8PD4cxQbLj8vKyRkZGtLy8HIcgYXARwkFwSldelgualKZT\nBc+LA4J2d3f14sUL/eM//mPw68XFRYRR2tou6+QxqkgQK5fLunPnjlZWVvT8+fO4PwYxCbvQKT/Z\nF2LWeLDwBNAwuTkdHR2anJzU7du3NTIyEoctsQ7sOYfBrK2taWlpSUtLS1pZWYlkSfoYfPLJJ/qH\nf/gH/Zf/8l9CtknS4eGhNjY29Mc//lHfffddlDLiaDA/9+rTNYXPkBU4OOQZzM7O6sGDBzo4ONDu\n7q62t7fDIcjn83FAFKdugp5Ch1QfjYyMaGpqSk+ePGmCh33P078xoqAbh8odDk/zDFzJc5bB8+fP\nQ3F6WRlGSaPR0OjoaDgdLhuo3AH5JO7NiX10TPRmOYzN0Sxf91QX+L5Qlnh0dKS9vT0tLCyE8ga2\nB0EhcZbEPJwLDBbuu7a21hJtdfpgvX3df+76qyuhw1sBysGT7+np0cnJiSqVSlhoKPe5uTltbm5G\nL2nuw/emp6eDUHjO6emplpaWNDc3F7Xara5UmPFemtjmnn0rjx6DYnd3N04ZwionuQol5vFi1so9\ni9S69Oek4009BAwcnwdr413PsLAhZhKOUFR0GPQTxzgHAGOB7zlM7AlHeOSpYeLz5qefVuUVDang\nQdC4ty8pOpoVi0VdXFyeET84OKipqSndv39fExMTTcmcOzs7kW/guRqeb+FrnCaBsb5u9JFoxAlx\nJA7xnYuLi+iIBY3QEpm5g1C9ePFCq6ururi47FdAYygqSFCIKJv0WOI0b8QRqVaKxueT/u7C3Euy\n8DYODg7Ck9zd3dXJyYnK5XJ4PqAPkoKveREr9QS2NObqRi7rScmcpFDatVpNt27d0vDwcNQs12q1\nQEVQfG7Y03d/fn5ei4uL0ZAKRU1vgF//+tf69NNPNTQ01CRjtre3I0E1TQpLvVpPKk0N92w2GwdT\n7e/v6/j4OMIBJFoSX8dzpPMkyWXQMb3WQUsIj0gKnve4tCMbKBd3bBi/OyKU6EFzqcKC3shTIUka\nHiBHxOUyoU039rmYD/TMvHGwoG3nTUchfC9Svk0vHxfzOz09DYVPs7bt7e3w5E9PT1UoFHTz5s2m\nMXM6XYp0cYE8Isccrfqr9+SdoCEyMmK9tpSkOspeFhcXo+1rLpfTwsKCnjx5opOTE21sbGh7e7uJ\noEZHRwNWRlFwWhBnEHuSTitvxTeeMXvCXfq59130Gie2Rv4ARANh04iDntkpjN2K8HzMraxvRxrI\nKMd7RnFS6sUztra2tLe313QeNx5SoVBoOuGKcimYisQkhA2JYF4q5kreqwRceadeDkyMR8IcPYOa\nhExq0+v1usbGxjQwMKBSqaRsNqvx8fE4RnZiYiKSmFBS1JgzF0/McWWOAESQthIWvJ/NXpbQePfF\nbPaqDA+6pBJgYGCgqV6dA4C2trb06tUrra2tBSLQ3d0d60mZKDR3cHDQdD5CaqDwu8cuea/V51Jj\n0gUlhghoTT6fD69sdnZWZ2dnmpub0/379yNeTziBvWX9UCasnyfo8lze96zmi4uLpozlfD4fNEdo\naX19XW1tbTo+PtbOzk7k/Pi837x5o7W1tWhhyyE7wL7ZbFb5fF5TU1P69a9/rRs3boRiodf98vKy\nXr9+rYODg3eS7HxtoRNeIIROV4z36OhIxWIx4riDg4PhDNEQCdlBQ7CVlZUm2JrzBbxB0/n5udbW\n1iIMkSrAFC5O5Q175meG+EEyfAYaQgbQOpjPYRRAw5Li3A8SS1NP99q1a+rr64swBbLt9PS0Zate\nZCJGoRsTvh8/d+GwkPgIUtje3h6HfXV0dER3RKo6+A7IlJdup44OXfXSPKS/eheQfekAACAASURB\nVE+eWm8WJ62vZHL5fD6E88jIiEqlUlh/uVxOq6urmp+f19nZWZSXYSB0d3fHOe6Dg4MhCJaXl/Wv\n//qvWlhYaGqZKamlZceFde+x+9QramX9uZdEMhtQeKPRaApPZLPZyNrlXHBKz7z5RiokXJG7V+n/\nZ44YJwgUOgF6eRCKn2NMKdmRFJ4DMfBisai+vj7V6/XIRqdc6NWrV9rY2AgPDi/PlSeefiukgvcw\ndhB4LhBTCJ2GE4uLiyoUCnrz5k2cZ9/T0xPQGY15OGyHLnIIObLbPZ/AUYf0amWJQxPEgVOjwMsx\n29raIk7qgqxer2t9fV1//vOfo6SSMsDBwUF1d3drZGQkEsiI+1cqFf2v//W/9Lvf/S68pTR/I52H\nI1GpMeP84Aaux5DdEMJrRPERW+3p6QljkwQxjEMEGWgMe0yHs9SgduPQx+x8SbiJFzA8uTp4So6g\nnZ2daXNzUy9evNDr16+beuiDEubzeX322Wf64osvNDg4GIZKtVrV4uKivv32W/3www969uxZxJi9\n2Yz/ZP9b9S2QFGEQ5BDdMwuFQhyQQi4CNH1xcRFlwpyTcXJyEo2J9vb2oqOjpDhE6ujoqCVCQn4J\nho
GvcWoIoLwp2Uz5JKU1v4cjBiCen3/+ub744gs9evQoDibzZ1JR4e22PemVhEp4z1EIpynfj/dd\nTvc+fjzsXC4XCFxbW1scjnZxcaH19XWVy+Wgw3TejhCzn+it1HP/ECNE+is4Tx5Ih41CiXrMCAiv\nWCyqXC4rk7lM3FlZWYnznL1FJHDNzMyMbt26pdnZ2SiNOjs70/Lysr799ts4ItI9k1TJ++UCjs/A\nkPydEj0XVv/IyEgkhV27dk3Hx8fa3d0NJZjJZOLMdTL/UW79/f1RykWyiXs67s24Z+ZQf+phUlXg\npVL+2dXV1WihyalR3qwom81GeSKJP9vb29re3tbq6moTtOmKnX32+tfU4udKlTrr7gLR8yZoftPV\n1RWxSHI4iIvBhAgrQkEkWVHydfPmzWht2mps/p57oanBksbSXIgSRvKzxvEGstls5EL85S9/iaxu\nvE9abM7OzkZr0I6ODq2trenx48f6l3/5F/34449N55C/D21qZcS44OPv96FdjsgwN+8UB29i7KAE\nkAUeY8e78f+50mk1llS5+zwJfQCjY1TQbdDDArVaTZVKJTz4jY2NpsNIrl27pnK5rKmpKf3qV7/S\n/fv31d/fr0zm8pRKTs/79ttv9eLFCy0tLTXxcrrejBsa9rkyP+qpobPDw8Oo0PGqkmKxGPzJIVjz\n8/Oam5sLJU9+0tTUVKAAUrNh5WvnspgXSW4Oa7t8RLZ4uWbKE04T7piABlDCOzU1pa+//lqPHj1q\nkuVOkxiUXiLKfvH8n/J+XeH/lIJ3unfHijGT+4NTCZSPoQtKNzY2FsaG5/ekxnMaFkFuOmr4c9dH\nU/IeuyWr0OOFXvICg5CwNjQ0pLW1NX3//fcRS5MuF55EkrGxMf393/+9vvjiC5VKpWgnytGoz549\ni1imC4dUyTtRsvAu+LzszqFcvweCDW9ramoqOmhtbW3pxx9/VKVSiWMI6TzV19cXyRw9PT0aGRlR\no9GIjFE8aZKpPLaKp05LWQgRKxkv0o+G5JXJZILZUW54grlcTnt7e1pdXVWj0QiGo1aVM8BXV1dV\nrVaVzWZD0dKNCngtk8lEIxHW3gncGdKVOArAvR0n+J2dneg0SHYxcbLj4+NAQ7xxy/7+vl6+fKlM\n5rIGd3R0NLx/EmuIVcKwKW2knjH7wHpLasotgC5Q7iRGsT78fXR01JRYWqlU1GhcHos7Pj6uBw8e\nRN91whO/+93v9D//5//Ud999F22HnYbdS/fL+QB6cY+hFS+kHlkrAy2TycRxoYTlCEs5lIqicS8Q\nZI41xduHd3m+l+PiBaYeJvXIGKjkorS3t2toaCiMvsXFxciGR0jjLff19emzzz7T119/rXv37ml6\nelo9PT2RdU2S3evXr7W3txfQrJfLIf/SPI9WXj6Gk/efPzk50f7+vrq6uvT27VttbW1pZmZG4+Pj\nKpfLcbDO2tqalpeXtbCwEEjn+vq6Go2G7t2713RsL0djg654d0Ri954M7IiDo4XMJ80rYm9S6JwY\nvjsO5Mw8evRIv/rVr/Tll19GeSs5XL73yDJJMVboxRs9OU3zXZePreQJNJ0aJ26QgKD09fVpdHQ0\njlqmqyJjoOLIEQXnH0LV8ObFxUUkBjJueMMRtJ+6PpqSd4vdT99xCBmCooUhSQz0BwaavXv3rubn\n5yP+MTw8rBs3buhv/uZvov4V6/f169daXl5u6nbm0JMLjNRjYVFbeR6uJFt58cCUtPJ0qIZYa6VS\niSMMj4+PgzBph3nnzp0mxcxY9vf349QpepbjJTs87EoI4vfxp4ZNo9GIch88Hg4+OTo6ihItkACH\nk4irkQBWq9UiZubIh2fktlKUfM49+XTcDst6kt/p6WlkHJ+cnGhhYUH1el35fF7Dw8NhPAFR7uzs\nqLu7O+A2lO+tW7e0vr6ujo6OaGbDnH3crvQR4Ah0hDrQHdAeLWcRFqVSKVqfOnwL/XOyWG9vb2RV\n37hxIzyDzc1NbWxs6JtvvtH3338f6IRfrRCTVOm7x41HBE+4gfM+aBO6c+gVOuzo6IiqAUpJoa31\n9fU4lcsNPaoeWFMP3WCYpnTiqA8llGtra9Fvgqx1svCdv/v6+jQ7O6uvv/46KkQ6OztVLBYjWfMX\nv/hFtPC9uLiIxlrfffedfvzxx2gX684La+zr6PKlVeiByw0pp0Hi7ZVKRa9evQqDA/rf29vTzs5O\nGMbt7e3a2dmJZEDuDV1xWqIb3Z6g544X+5HGt/1n6ok6n2P8ZDJXuSS5XE4zMzPRb+D27dvR68Tz\ncd68uTwu+uXLl9HR0dFKZKXrGnfo2AeMDObmRnyKaqQ0LimQwcnJSd24cUOzs7MaHBwM4xzehiZJ\nakYmEzbEiESZY1i688aew08fcn00Je8xVjbC4QiPAaHwgetpMHDr1i19/fXXWl1d1e9+9zudnJyo\nu7s7YpN4NplMJpLeyEp2iDuXy0m6UhJeQyy9m4mJcmHBW0GK6YWix5OT1ASB1WqXjTc4NrStrS3i\nS/X6ZeLYL3/5S5VKpYihIRj39/e1tbWl1dVVvXjxQq9evYr4t7eOdUUEwbigTC9nZoTIy5cvo1yt\nUCjo6OgoQiWe0e1JM0CchUIhnsncUH5cqZDjp++Nj7uVgZLJXJ6lzelr29vbmp+fj1rjoaEhzcxc\nHgVZLBYjm1dSeCko+vb29oDs3759q/X19YDgMKJShcM8ECRetri3txeoFOPhYJxr165pbGxMIyMj\nkTmNIPWT5mq1miYmJnT37l09fPgwWjVvb2/r9evX+ud//mf94Q9/iH7mrsA9pMDP1ENB+AKzkxwo\nXfVUd1pyRdUqRotAIuG0s7MzEr8GBgbU09MTpWZLS0taXFwMpcq9PNHLhZ4r+ZSHnTcZG82QOOhk\nYGBAt27daqrll6Tx8XH19vbq1q1bmpub0+vXr6NT2cOHD6MfRKNxdXjUq1ev9G//9m/685//rOXl\n5VAqKZ26B+r78j4l70ZxetVqNe3s7ERbVxStP8MNHhQ2nQn9hD9CiqCHfg+UPE4JR0RDK8zB5+kw\nudOGGwV4qZJCrpdKJX355Zf6b//tv2l4eDgamfE5jLbDw0OtrKzo8ePHEXrlmfV6PUqWnS5SmoW2\n/Ahid/pawfepI5LNZqNj5ldffRVd8F6+fKnt7e14BuOnax4KvFAoRGdLaJt8lb6+vkii9Bym99FD\nq+ujniePNcZgXUm6l1atVqPrWC6XC4sPD2lkZESPHj2KzEaayhDrdAjQhS5Ztyyqw9ZAgZ6s4cTp\nZRku5FvNE8Nla2tLT58+jcY30pUChPBI3mGu9Juenp7W+Ph4JHMQc+OgDCC5arUa54171jHE45mc\nngfxc1cah8ciJVkNy5N7Uw9er9cjwQllCmxIiIF1QpAzbo9HcW+EEXNp5U3yO3vjdf3VajUgbfoj\n5HK5qNcdHBzUyMhIU4UH8X2vp2WsZPO69e8hB6xyMvVRXBg
SmUwmym5ACvb396PMy+FpOrTBAyMj\nI+Hd7O7u6ne/+52++eYb/eEPf4guYa5spdaog1+u/FEY7nmlnn+K/vhnMIQxblZXVwOlANWgKU53\nd7cWFhZCOeZyORWLxSg3ROACr8M3yBDnTebn+TLE4j0Hg8oFGjxRhdNoNEIAA/lzDPTg4GBktrP/\n+/v7Wl9fjxp6Tv7jWQ6tojDpE4/RBH2myYW+tj91tZI/vjdufAEBY0wix0jgLJVKWl9fDxQJQ4M8\nIPaUNWcdMeTq9XpTTb4rSzdcGo1GODk4CITLKKN2uoOecG4WFha0sLAQaATdJNlzngsSRZdUz2dy\n543PY0jDd+nl8sURCZcxh4eHYQjRQthzP6SrMEWpVFKxWIy9QM69fftW+/v7TXP38MJfvSfvBOjC\n0ONT/B/Yw+PCEBeMMTs7G94GZyJ77BnPC+jElWlav50Kq1YQMpd7yO+7MDIqlYqy2azW19c1OTkZ\nirBYLMbRkeVyOQiBw15u3Lih6elplUolSVeW4PLychzgQBwcQmUtUyZxC/tDSzBcYLrnQZKae3KU\ny6UIzenpqQ4PDwPu9wQZX0u/3qdQEMQ+Pn6m++SGCXuFsYFR19/fr5OTk2guk81mm8IhqQD1OKOP\nPaUX9y6hvRQd4n/AowgxmjNxJkNnZ6eGh4fjCN9G4zIfIpu9PIHr5cuX+uabb/Ttt9/q5cuXgTIw\ntnTN0qsVhO8CjHVgzK0UfHofV3DUjINcoIzZg9PT02gR2mg03qnfZm9BRDKZTCgS1jRF29h7NzoY\nu8dvM5mrxC0EKIYUPQfoRAm8TwZ5pVIJPnz9+rU2NjYi1OY0wLjwXtOTMH3NmUeKirzvSj1+l0fO\nQ/4/1g9ZgTxlvu7I+Dq6t+4hQAwsDEMqFrz01EMW7tgxHjxqem54vLpevzr6d2dnR+vr63r58mWU\nhmI8wUNu3CKj3AD0cfsaYth9qGzkGYT2QBGp9pIUyAhdVwkFeo4DNEGOEGtDGSR/4+T8lE5Kr4/a\nDMc7uXl82ze+0biMB29ubmp+fl5TU1OanZ2NmKF0CbHSCME3G+VEowIO9cjlcpGEhTecGhfSlQHC\n724RQwQ/BetIVxAmBFqr1fT69evIzu3u7tbU1JRqtVpTiKGt7bK3NIc39PT0RNLS1taWfvjhB/3x\nj3+M5EOa1niIg/H5WmI9uzX+U0ZK6kWAcrhnzQlSePcHBwfRvpOkpWq1quXlZe3t7alarbaELJ02\nWDdejj64JdzqHszJ42l8ju8DU15cXEQyzNHRUfQYpwtdo9GIEkYP4zgsmSp1nwvGDLSOYYNiSxUw\nx6JyBGo+n1e5XG5qMQwCkMlcniPw3XffhQe/vLzcpDxaGR+u/H2fUy+f9fJ2ryiT1DN7H71ICqF5\ndnYWfRI4RKjRuDx1jPagrAneNgrIYWE8ZNA2eNENbjcg3TB145dwCW1ioVUUHclgfX19TVUcvH98\nfKznz5/r8ePHevr0qRYXFyMfBoOXvXYlyPx8rVIHgzlB6z91vc+Db/U/SU0VBN7pkx4ThOhQVk5L\nQPYpX3FfeIYSvlwuF+XBPkfG5WEgcq6mp6c1PDwcawiv7u7u6unTp5qbm9Pi4mK070UWEWpj3R0Z\ngedb1cvjCDK3lK8dpeI70pUspOshCBVy8Pj4WJlMJnJPstmsRkdHo9MpzbfIw3Jn1BMzWXvW+6fy\nBFpdH/U8eYgZrwr4OPUQUJA//PCD2traVKlUdO/ePd27dy+YSLrKrpcUENzBwUEcrCIprCcEa0qw\nKdTqi+rwlEMoHqdML+6Bx5LNZrW4uKhyuaxcLqfh4eE4gES6PIQBIwVrD2j4/Pw8mv/88Y9/jGNT\n/WAfugGSyYsV6967zw2Gf5+BknojrAE5ErTaJFGMsjTidkCXh4eHTSfQpR6GewXsodMAlj6WOf9L\nIdoUvWAe/HRYlO9Vq9Wm6gQamQDVViqVUPKZTCYaU/Dye6ceLvTCuDxhUFLTARU0eCIBhxI5jDyn\nJYT0wsJCGHvr6+vRAIhnIrDYv1YKxteolcHlSXNSc1Jh+gzu5WGW7u5uDQwMaOb/lLTevHlT09PT\nKpfLcZ+TkxP19/dH455qtdrUadArQ9gXnuHr64aH04Bnt0O/nHnfaDSiaYo3eoHeiNOipDj8an5+\nXt99951evHjRFN/2NWDcqYzCUJHUVIoGnfv8/t+WTL3v8jFRVurHpCLLUpSNsbnC8cRhpyXkK416\nWEO/j/MHhiRG39TUVPTYdzl9cHCgxcVF/fu//3skQvr98YQJxXF/kCRHcd4npzFGPcyZynXXA9Tw\nc6QzYddM5hLyHx8fjw6E0Hl/f38km4I6eI6R68VWhmqrNf+566MeUEMsxmtQU8gWoXZ4eBjZwsTt\nJicnQ+B5ElujcQnxr6ysaGNjQ5VKRVNTUxFnk5ohV5SKd4JzAvOuc56JmcZ8/L5cbAyMfH5+ruXl\n5ahppa/+6OhoU+kH93JD5/DwUE+ePNE333yj7777Ttvb2xEP57koeb5LNrsrRJ+blyo587mhwzq5\n4L527ZpGR0c1NTWloaGh6F5G0xygppOTk+hA55nGDoe5UnBkx4nYE1cYo0OGjtw4c7rAZ68J16Cs\nsbiJC0rS3t5ePAMvE0O0u7s7whLMx70vN1RZb/Y/9Wjouc1BIcR+8/m8+vv7VSqVogT04uIiKinO\nzs70ww8/6J/+6Z/08uXLOD439by9oQxCjL9TJMcFeQrDQpsgbp7J3ErBe2c0Dj558OCBHj58qHv3\n7sWZE95DgVjrixcvtLe31xQOckPFS+N83aEL/x80AxTsrbI5L4KTzGhC5FC1dBXyQRGenZ3pxYsX\n+v3vfx8yiRPsMHDZC1A5P9+ekivG1irWiofpSVapMZXKmJ8T+r43+XxeExMT0cTKq3F4IQuRRawB\nSrxVMiEhDYwnb1nuCJhfHLJDaHJoaChyVNjrnZ0dzc3N6fvvv4+cAm+jDazN8zo6OnRyctIUrmnl\nnTtywlzRJ66PUsOxvb29qXeLh4fb2tqiT0u5XI42w7VaLQx4lDgoB/f2A6zQO6nz0spI/6nroyp5\nyiXIHvS2qKm1d3FxeRY0iVvr6+v6t3/7t4DVvJEI5VOcA352dqahoSF1dnZGq8mNjY2Ak13hcx82\nDEvQPUkIAi+fkjjPfHQh6nPg1LBarRZC7fz8PBpTMA7pCj7b2dnR69ev9fTpU/3pT3/SixcvAqqC\nQGBehCVMi1ADasO4AllA2HgZYytvmAtPtqenJ2pWWQcE4OHhoXZ2drS9va29vb2Aw9P4HGvjytsJ\n2D0370XN/8ircAjdDROPHaP88vm8uru7JSmSY0gEbDQaWllZecfzqFQqMXbm6aUsMJ57XO6Fwci1\nWi1K8FhL2ozSYtfXF0OpUqkEPT579iy6CC4sLOjVq1fRwhcB7BULPr5UIDNH/9uRHvaDZE2EPIrW\nk6qYDwKWzGA3Yt
17YU+cJnd2drS8vBwJkdCx0wbry1wcGUkFOJfTJzzKeh0cHESbZjK7gV79hDZk\nFKGFtbU1ra2taXNzM1ronp+fB1LluUApVM+4/WQ06MWTipEXnieSCvgPhW2RXyjUwcFBZbPZ4Ffg\n7Ebjsix5YGAgDpbyUtFUPruB7cZCavCmKJfLF1CV8fFxzc7OqlgshtIkfECL2Fwup4ODg+ieSCKy\nG/GMxXnSw8A8P1WSXmKH7HFPOuURvnNychJZ9DisnZ2dun79elO5WyZzdToqxwNXq1W9evVKlUql\nqfkbiCUJy61OSf3QvIGPfp68n4glNccQJb2zYTR5WFhY0O9///tQNn4oh5eecV+SZYCQDw4Omrwq\nlCTM6Eo+JUzGzv0hLldcfMbngWcFtAdB1Ot17e3taWpqSgMDA2GZAhsvLS3pxx9/1OPHj99bE+qJ\nTHjMhEHcA3dFKekdg8VjgKmw53LPu1arxTwuLi6ikQ+dnfb395s8+FZQU8psKGuEh+dfuAJ1ge7G\njF+tDAbW6eDgoKl+uNG4PAnKs287OjqiLwAVAX4ee+oFp/Eyh3jpLujjJ06NIeT3wAgki/zk5ETf\nf/+9nj59qpWVFe3s7LQ8ttRj8hhhrCv7599J0Y40ts3aOD+4h+O84SEdwg3A5CTQeqwUlG5/f1+r\nq6taWlrS7u6ujo6OonLB5YV7tin64CE43ne6xWhiDhxEhJNAfJ6SLXoY8D1CgNS/u5yBxh158PUm\n9AbPIfA9QdINJEch4OUULfxQT85lBH3lQVE4oyGbzQZCBN34erpR6AlgyBXoFo8VGk6RwlaoD0qQ\nFubIUsrk9vf347jqN2/eRFUPirC9vT1ykth3jA0PQbRCqXyvoDeXj+9zdHjv/Pw8DPdqtarNzc3o\n14IDQa4Hjsbu7q4WFxfV09Oj8/NzvXr1SltbWy3XiDG6oeLj+JDroyl5Ol05UbAZQOGt4KmUwFhk\n4uQpdAtheykCgrUVdOMeEZa5K333WBmrl/FIV41+fspz4kAdoKg//OEPmp6e1meffabDw0MtLS0F\nerGwsKC9vb0or8KyduEsKSxoBIgLZCAzIHTG460n0zUHenJm4DtkgFcqFUmKUq4XL17oxYsX2tra\niuM4XUlIzTXk0pXQcg9celdoEzd1r8EVm8PIvi6sFdD8wcFB9LfnmFafPzF62hBjDHi9qtMjz+L7\nbmigcGlH7MrYkRdOsNrY2NDa2pq2traCvgYHB3V4eKi5uTltbW1FbkMKrzIeb37jtOL84WgKe+Dr\n5XD/8fHx/9Pe2ew2kURh9JIQRQGJhdnDm/D+L8DCiyARYYIjE0uJoiRg3LMYnfLJpUGsBo30HWkE\nY9rdVbdu3b+qLo9lDAdTPavk82k67KNhXqxWq9put/Xhw4d69epVvXv3rt6+fVubzWYcgPP+/fvx\njrzP2XfgYhkSPMztoMfGYAMI3BwAIAsyZY4D5t49YLRees73ikLXZ5yd9zHwE8i92ofMaa/tSWcu\nG527plflcJ7r9brW63V9+/atFovFOJALh4qOuV/IxO3xsg9vFvC2iN9T7/bQY8iGXOY448GhPdvt\ndvzULzrNHohpmkaVpr9T7iDDcu7Ok+9XHX722v2yXUR/yMQ5dMgl/ufPn9fFxUUtl8t68+bNaOti\nsajb29u6uLioxWJRJycn47VL/A3Pwc/wzLnE4k/4q4fh4CxodI+07ITnnAJ/twLZMbG2wnUMNMLy\nRKZUwoA6C3BUh7BRHh+cQKDhMopLid0YU7rZ7/dj7wA7LS8vL4dzuLq6GkrXM1Ybb8ujZzE2RkwS\n2tnb3CNJZ3bIgDLy2dlZff36dWwcXC6X9fnz51G27G3of7rt/XOvidFuyl/dWNBmnE+/H33Y7XYj\n4qZcbwOLAee+nCSIw/B1yKobWvcLA0IAQpvsaBkzAjD0BSe+2WxG4NczyD5eNmL8P0any9Z62WXu\nOellkD6effx6GZPSLntHWC558eLF+NGWzWZT6/W6VqtV3dzcPKnC9UCt43Z7U2Vfnqh6mgHzdz/H\nyYb7y72n6bDs48pHt0HWRbedz9EDLzHwDNo0F3ThmPi8y8cy8pggF5Y2Wab4+PHj0LmXL1/W/f19\nffr0aZTGbY/pi+de1xfrOmvj/hU66xnt4odadrt/zxFZLpcjiD4+Pq7r6+vabrfjDSuWFjye/J05\nRkXFVVjLhLEhAfQYcl3P5Pv8wF+wpOJg2jpIJYIqwevXr+vx8XEkQScnJ0Pnqw5nEKAnruDQbuv6\nn/BXf4XO621VPzsBQOF7ibDqMFj9aFqETWbL9/h3G4RpmsYmQK/5VT3d9c3gY7jmJlIvXzF4zvRs\nGMnkWK+9vLwcJSk7VL6H0ekKy2fdUfrf3S9PtLnrfO+qQ8bKczj9jXMLyEC+fPkyymo2BL8y0A4u\nbEi8o9uBWZ9snoyuuPSlEzaBsbyAA+qGgANR2GTGcoMzdp+M5mfRfsu2b1Kzftv50D/aR6a32+2e\nLO/04K0bI5fL+QyD5GutC/47BswOveuI72Nd7HMUY/vs2bORWbGxcL/fj2OO7+7u6vb2tq6vr58E\nL78ysD0oQVdwZCyP9ECRdrnywJ89mOjBDM4NuXjXONUC37/Lwm1lriBP2w9gCQ3d6uuvc8HznEPD\nNp6eno7giuD1/Px8VORIXJy52z7MBR52eDh5nu1r5+TJ93ib5OjoqFar1dhP5UyZw27Y98C8oH9+\n3Qz5u+rSg36ezZIM84GlzF6u97Khv3t6ejra6mooDh/bzbylajRN01heIKlA1iRzTsCQoXWFTYl/\nwl89DAcDxCBYsb3jvZcl3HFn4xhnBsilZq8722hZkb2RCiVB2ESdDGLPlNishwIQIVc9XWJw3+3A\nUDS/U2ql5HveDNINoCNIGzYUwtd2w04fiU55Ftf51RTaenNzM15joUrg3yyn5Ob+8V+X31ywgSx7\nn3u7GV+cpeVadXj3vJffGAM+94l7ZBdMYusLB7JYl2xUHOmT3RBgOCDwHHD2xy56dI9Sog2PDZv1\ngOfyHK9/Mx8wSDZqvX8eNzsYOyKqZZ4rLldbR6sOJ+A9PDyMqpU3tXmDVw/iHFg58PY8t8xp09xm\nK/pIW+2g6dfvgmXPI+TPvgl0xzLyXPQ8cJCErjsw9R4IywTddBXLAdvc9bxxwTvYZMQ+FIo5wj2w\nGQS+2BbPWbJhxpcxw0Z2vWJe4mC9jn91dTUqVcxpTqhzW/0DLXwO1m1k2MfXQRVv2KAvDhD4nqss\nDvKxLx6zHgQ7oNvv9+OAHNtY7JEDlz6nrEskLP8bJ+/yjwXk0jmTqguSwezGwNk633MJcs64dwfR\nS24oRZ90tAFjyo5PshfaboXoUfKPHz9GpuVXnroTnwsU3CdnYJZtzxJsiWwduAAAAahJREFUGCyX\nvv7uAMzOkedzuITLo30zow0DfeVZVTUmq5/F9/jcSt+dfA8WGQ87Jl5hswH
2mmzPlryHwM6b79Ae\n6+zvKjt2WNbFHmj4OZytfnx8/FPFwY7H90NPcTjTNI3TsgjU3F/0smeyGBF01tmQ9cW6xBj0Kpez\nS4KS79+/j2UQ5IzeWEfRF+uyZebgpJcw7VjnKik9yKR9/q7nn+dHn0fIjHt4XluWlkUPKLwBlvnS\nS8g9+HGg1h2+9Y7gnUCcdvJmkW2J9cDVC4+ns1rf29UT7uszDOxs/XocQQaHIeG0CTSpMjDnHEx5\nzjN3LBf3Y255xHPZetrH37bG42u5WydsGwiWj46ORoWWa+knATCvN3Mmx6+WR7qNCyGEEEIIIYQQ\nQgghhBBCCCGEEEIIIYQQQgghhBBCCCGEEEIIIYQQQgghhBBCCCGEEEIIIYQQQgghhBBCCCGEEEII\nIYQQQgghhBBCCCGEEEIIIYQQQgghhBBCCCGEEEIIIYQQQgghhBBCCCGEEEIIIYQQQgghhBBCCCGE\nEEIIIYQQQgghhBBCCCH89/wDV+Vw5XsyxPkAAAAASUVORK5CYII=\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Current loss: -2.127394\n",
+ "Current MNIST score: 6.997816\n",
+ "Current Frechet distance: 33.048138\n",
+ "Training step: 600\n",
+ "Time since start: 3.387114 m\n",
+ "Steps per min: 177.141966\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAfkAAACHCAYAAAAV6SDhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsfddzW2d+9oPeeyUJ9iKq0FSztI4tW+tknV07u5lkZrPJ\nZCaXuctN/pRc5W4zk0wymWw8k3gn68TZdV3bK1mSJYoUewMBEJXovXwX+n6vXhwdAAeFovx9eGY4\nBAngnPe85dcLMMQQQwwxxBBDDDHEEEMMMcQQQwwxxBBDDDHEEEMMMcQQQwwxxBBDDDHEEEMMMcQQ\nQwwxxBBDDDHEEEMMMcQQQwwxxBBDDDHEEEMMMcQQQwwxxBBDDDHEEEMMMcQQQwwxxBBDDDHEEEMM\nMcQQQwwxxBBDDDHEEEMMMcQQQwwxxBBDDDHEEEMMMcR3B7KzurFCoWgAQL1eR6PReO59mezp0Og9\nmUyGRqMBmUwGuVzO3qP3xa4huB/kcjnq9Tr7H11PoVCg0WigVqs1XZMfh1wuh1KphEwmQ71eR61W\nazv2TuN5kVAoFFAoFM/9X+zZ6/V6y+fivwc8vzatPitlLuia/PoCgFqthkwmQ61WQ7VaZePr5tqD\nhFwuh1wuZ+OluRKOQ6FQQKlUss8DQK1WQ6VSYeOmH/5v4TVov/H7/DSeWSaTsR+VSgWNRgODwQCl\nUgkAyOVyKBaLqNfrqFarqFarz+2DbtDtd+hewu/TmIGn86VWq9m8VatVtm/4+ebnvNO9xJ5R6r6n\n33K5vOnedA3+M0qlks017XGiMfx96XnFxjNIEG2g6wvHz0Mul7P93m7cdG4ajUbTmaBrCOeHn2f+\nevwYhZCyNgSVSgWtVts0Ftrb9N12tIa/l9j/e4HY+hJ4uqNSqaBQKJDNZjvycGXPo+kTQuIlRKsJ\nJYbULWjj8YeWJrPd9fjFrlarjGESU5Qy9rMGMW/hweI3p1QG3+r6vbwn9jmeCBOhFjLVbq89SNAc\n8QRLTNCk/UJCSqVSeU6IFBISsXvxnzvN5+XnnfaHwWBAvV5HJpNBpVJhTJPG1WrcpzW+TuOu1+uo\nVCqQy+VQq9VszGKCmJQ9K2T2wv9LuQadq3bCAtEgYqy078WY6mkJeWLjF9LFTvROLpejVqs1fY+n\nszzDFJ4FKTRY6v87vcd/plarMQWwXC6zPS68xiBonBS0W1/heZPKB8+MyffCTPrFIKRfnqiQ9Cl2\nTd4CwEtnrTS/04TYWOl/vFWE/tdubXiJXLjpeAld+J4YkWslsYpJxjQmIbGVoo0J7zsItBJChX/T\nWvOWFCHTlsJAXvR+aTQaKJVKTDPjGXwv+7cVwyT0q/3wr2u1Ghu78HOdtC9eu1apVNDr9WxPk9Wr\nUqkwAYKEiHbP0OnZ+TEIteCXAVK0Yhq78IzSe/z/znJv8yBaRgIVLxCeJVrtTeGcSh3nmTH5s0In\npkyvxSaQZ4RCpil2TblcDo1GA51OB7PZjEajgXK5jFwuh1Kp9BzRPO3Nxbs86Idn8gBaah30ffoO\nMS2FQgGVSsUYAW8dEWrlQssBL92LrQX/HV4LEBubcJz8b/6eg4LUa5EpFng6V5VKhVkD+PUQfod+\nv2jmTr+JWRKTb+eeanWtVhqxmJmV0OlMiu1JsWvwJtdWz9hK4KQ9rtVq4XQ6YTabodPpoFQqUS6X\nkUwmkU6nkc1mUSwWJTF6qejVmnbWICtEqzPN7/OX4dn4fUNWt3bW2RcFMTrZCi89k++kMb5ISGGw\n9L5KpYLBYEA2m2UmNjEmTYTCYrFgZmYG3/ve96BSqZBOp7G6ugq/38/8nKVSiZlzT+v56vU6iymg\ncdPBJKGFCCPPfFppmAqFAkajERaLBU6nE9lsFqlUqknrI/OXSqViwoRarYZKpUI+n3/uucUIMG8R\n6cYfzzPX0yKcna5Xr9eh1WphMBjYvCgUiiYNUMjw+bETg30R4OeeBDe9Xo9SqYRMJtNWMxODFKsZ\nL2QCz6x75A6h9/gzJqadC4VX2tOlUqmj5t5KEJDL5TAYDJiamsIrr7yCCxcuoFarIR6PY3NzEzs7\nOzg4OIBCoUChUEC5XG65VlL2nVAQftHWPjFIWWeCSqWC1WpFPp9HLpcTZZhn/Tw8aC+RO7CTC/ZF\nQGwPdPq8FPx/p8n3CwqeEgZDtYPdbsdrr70Gm82GfD6Pc+fO4fj4mDG6crmMTCaD4+NjbG1tIZlM\nolgsDmzMYkIIvSYNrRMT5CVxYkJ2ux3z8/O4cuUKyuUyTk5OEAgEEAwGEQgE2Pd4awUfFyBmxRDz\ncwt9evz4O42VYij4Q/2iCSgREd7f3Uow5P8mZnXaZkTeBcMzyXK5zAS1QbqZhNeg+aH7AoBGo4Fe\nr0e1WkWlUkGpVGI/wrELLRC88NTKbSQ2Dh5qtRo2mw2zs7O4ePEiLl26hEAggJOTE2QyGWSzWZRK\npYGvzcvA3Am8ANUJFJhMwaYvSjjtB2JC44uAUDAVukzp9aBwZkz+u7AJxFAul1GpVDqa2Om9QqEA\nhUKBixcvYnp6mpn5yQRKG+34+Bhff/01fv7zn+PJkycDZfLAM5OUkMhVq9Wmz0n1nSmVSrhcLly5\ncgU//elPYTAYkE6n8fnnn+PTTz/FwcEBc0nwGhV/H36ehGZr/iC0M7+2An9NjUbD5qBQKLBnPk1i\nStcmJqXX69la09qLBUDSd+v1OjQaDdMUpQqUvYJntCqVCo1GA+l0usm9IEU77wTh98jCpFarYTQa\nWbSzzWaD2+1GJpNBOp1GMplEMplEpVJh3xVafng3Q7lcFr2fVK1ap9PB5XJhfn4e4+PjMBqNiEQi\nWFlZwddff414PI5yudyRDnQzL0QXXgZGzwt+UoQYXph/mczyUvCi55x3bVDsB6098LxiJIZuxnum\n0fXflU3Agye2UjY+LSilmPBpKbwU6fF4sLi4iLfffhuNRgOJRKLpc/3MFWnQwkPb64GkMSkUCmg0\nGphMJpjNZtTrdRwdHWF3dxf5fL5JGOLvIRaY02oM/ewTYiBWq5W5KojpG41G5HI5JJNJRKPRJkY6\nKDQaDWi1WhiNRlSr1abo3XY+XF7AEcYitBPC+O9LZWbAU4FNr9cDAEvx4y0ggxY4hWPQaDSwWq2Y\nmZmBx+OBVquFxWKB2WzG0dERgsEglEolGo0Gc+/w2TK85tjJ5y8VGo0GTqcTCwsLcLvdqFar2Nzc\nxOPHj5FKpVAulwfmhweeZ6pnbToG0ORK6QSlUgmDwdCURfJdQKPRYLSBXC7d0hwpsSM8lEols1RZ\nLBaMjo4ilUphe3ubBXV2ukY3+/w7Ya4nLYMefJCH67RAYyO/NfmHG40GM9GXy2UUCgUUCgVotVro\ndDrMzs7i8PAQwWAQcrkchUIB0WhUVBvuFq2i4oXj7nQfEhLI36zVaqFSqVCv13F4eIj9/X2WU93q\nHp3Am+l7Ae0TtVoNh8MBi8UCk8kEn
U4Hg8EAo9GIbDaLaDSKzc1NpNNp9gxkfSgWi8jlcn3Nu0ql\nglqtZmsuZvoWmuz5Z+AFRT4uoZUFhPcv8r5+PoWSNxOSJqHT6Zgrit5XKpXsbxrbIM8c3UOv18Nu\nt2N6ehqzs7MwmUwwGAzQaDTM8qVUKtm60A9p9Xz9C3rd61jpOyaTCR6PBxMTEzCbzQiHw9jf32cC\nrNAS2e/cENEWprmeBoTMgbeEEJ0FnuWuS3FxUNriyxRr1Qn8uaBYmW7HzruYaA35NDwh6LyZTCb4\nfD5MTExgdnYWe3t7ODw87GoMLz2Tl2pukMlk0Gq1MJvNzFROZuAXbfLvxmRKi3np0iVcvnwZOp2O\naSL7+/sIBAKIxWJ4+PAhVlZWcPXqVVgsFhwcHMDj8eBv/uZvYDAYsLu7i3/5l39BMBh8zh/Z7diJ\n+IlJm92YNckyMDY2Bp/PB6VSiWw2i+PjY8RisecCtfoZc7/X0el0GBkZwfXr13HhwgVoNBomOJnN\nZigUCmxubqLRaGB0dBSFQgHJZBKHh4dYW1vDgwcPmhhdtygWi8hms0xL4COx6RmF4LV38uUTIy4W\ni0yLpOuRH7TRaLAiNvl8HuVymRGdcrnMCsVQAKBGo0Gj0WBaO+0PMncLMyYGTbiVSiVMJhMcDgfc\nbjdj7na7nblYnE4n6vU6dDod09iz2Szzi/NpWzRf/aah0d6enJyEVqtFPp9HOBxGNpsV3QvEnHs1\n3fMMvp8obzGiL3YdvsYDrT3VdCA6WyqVmmJI+KBIsWtWq1VmvZPKfFoFQwoFptO0+jYajefidMQs\nnPwYeC1ap9NBo9E0PQvtSyH4gE6v14tbt25heXkZSqUSuVyO3U+qEiR1j5+pJt9u8WgyRkZGMDU1\nhZmZGUbYKMiLpOtYLCZ5ExAhIK2ND8YaxLh5YqPVajE5OYnp6WkWWR+NRvH5559jdXUVsVgMT548\nwcbGBmKxGBwOBzKZDJaWlrC0tIRz585hbm4OhUIBn332Ge7duyd5jGLgA7mE5h6hj1zKHNhsNtjt\ndiiVSiQSCYRCIaTTaeYPHQS6Eax4qFQqOBwOXLhwAW+++SaWl5cxOzsLAMhkMrBarTCZTNBoNHC7\n3VCpVPB4PKhWq8hmswgEApidnYXP58PKygp2d3d7Ir7kpiGtUyoBJO3VbDbDYrHAZrPB4XAgmUwi\nm81CJpOhUCgglUrBbDbDZDIxf3aj0UAgEEA8Hm+qKGY2m6HRaJDP56FUKmE0GpueiV5TgRBitKfh\nylAoFDAYDJicnMTY2Bi8Xi/Gx8fh9XqZ8FWr1aDVapkWT1kJYkGbQsuPsIKaVJB1YWRkBD6fDxqN\nBolEAgcHB0in0020goi2Wq2GyWRCpVJBoVB4LjBXCnghvBttmGgNmYBVKhUTnoxGIyqVCrRaLex2\nOxPy6Dfdt1wuIxQKoVarwWKxoFAosMBCsmZRFlCrOVWpVLBYLMxSSfMgNl7aiy6XC1NTUzAajYwO\nV6tVpsTR+EqlEvx+P9LpdJOrRjhHNBdA60qUwjkHnp3RboRDeg4SvnU6Hbs3WUuFAj1Br9djdHQU\nFy9exJUrV1AsFpmi0e2ekYIzY/KkOYgNlA6ay+XC7du3cfv2bbz55pvQaDTMN7exsYGvvvoK//Zv\n/4ZEIiHpUBBjIzMzbahuUpXaHUIxE6jb7Ybb7YZSqUQwGMS9e/fw/vvv43e/+x0z4dZqNdy/f5+V\nWVQqlXA4HFhcXMSNGzdw/vx5aLXavpi8TCZ7zpwmDFjqZpPJ5XLo9XpW+jSdTiMQCKBQKPQ8RjH0\nylw0Gg3Onz+Pt99+G3/2Z38Gs9kMuVyObDbLUhsLhQJqtRoWFxeh1WoZAVSpVFheXsYbb7yBSCSC\nv/u7v8PR0VET8ZEKg8EAm83GTM20f3kfstCiwu9Tl8uF8+fPY25uDpOTk/D7/YjH49Dr9UgkEtjf\n38fExAQmJibg9XqRy+Xg9/sBPC1Fm81moVAo2F40mUxIJpNMgKAzwJetpTFQSlQmk+nLBC4EEWOr\n1YqLFy/C5/PBarViYWEBHo+HEUaK8UilUgiFQggEAgiHw6hUKk2uD5o3XisjIYGHVKapVCrh8Xgw\nMjICjUaDZDLJsl54Kwt91mw2w+fzIZfLIRaLsToYUu/J7weyrrQqLCMcKx9jYrPZWE7/3Nwcpqen\nkc1m4XQ6ce3aNfaeTqdjtCCfzyMSieC3v/0twuEwjEYjMpkMkskkTk5OkEgkEIvFkEgk2POL0QkS\n2HQ6HfPLiylPFIU/Pj6OGzdu4Gc/+xlmZmaaBIpoNIpiscgsN9FoFB988AE2NzeZ8EDuTn6/8tYp\n/r12a03zp9FomAAh1NrF1pF3oanVatjtdiZk1+tPKy+Sq49XVOr1OvR6PcbHx5kltFKpwOFwdNwn\nPLo5h2dqrufNXMCzIByDwYA333wTN2/exNLSEmZnZ+F0OplZEgBmZ2ehUqlYFHe1WkUul0Mul4PP\n54PD4YBKpUIkEoHf78fExARGRkZgt9uZj3ZjYwPr6+tYX19HNpuVPHFCxsiDUm9eeeUV3Lx5E8vL\ny1AoFDg8PMQXX3yBDz74AGtra8hkMk1aFBGtarWKra0tyGQyvPHGG7h48SJMJhMsFgssFgszifUy\n3+VymR0cYQpgt+YxYkLkNyUmNkgXSq9aPGnn3//+9/H666/DbDYjl8shEAjg888/RzAYZM8sl8th\nNBpZytiVK1ewtLQEq9UKlUqFsbExvPbaa4jH41hZWWGpj1Il/lwuxzR53lzf7tlobBSMduPGDXi9\nXiiVSmxvbyMcDjMzMjHpk5MTnJycIJlMIhQKIRQKMTcBaRbJZJKZYmlsNCYisrVajc3FycnJc3EE\n/cLhcGBkZARzc3OYn5/HwsICNBoNq6dA85rJZJg769GjRzg+PsbJyQkzhQrT43hrhFjqnFRQjMDI\nyAg8Hg8ajQaCwSBWVlaQTCafWze6n1KpxNjYGMbGxrC9vY1IJCKJ0QghzHaR8l3KArh48SK8Xm9T\nzwOdTgeTyQS73Q6bzcbmmgRE0jgpA8Rut2N8fBwqlQqZTAa7u7u4d+8eK20s9jwklJDQTPcXM7uT\nYDQ6Oopz587BbrfDZDLBZDIx5mg2m1Gr1ZglKZvNQq/XIxQKIZfLsRoFuVyOCSQGgwFqtZoJq91Y\nzIhu0fN1mn9i1tVqFcVikQnR5O4gQV5sDhqNBhMAKHiZ7ssHc0oduxScaXS98LdcLofVaoXP58OP\nfvQj/OhHP4LdbodKpXpuwiwWC9vYuVwOCoUC6XQaiUQCS0tLmJiYgEqlws7ODlZWVrC8vIzFxUVM\nTEzA4/HAYrHgiy++gEajwdHRkWgwTatxixXwoPc0Gg08Hg9u3LiBP/mTP0E2m0U4HEYsFsPHH3+M\nX/3qVy2vXa/XUSqVEAgEkMvlsL29jYsXL8JoNLIodoq+7Ba0uejwix1Uegb63c6VQlYHYvK0Pl
IP\nVzfjlgrKIHC5XFhcXMStW7dw8eJFVKtVHB0d4d69e/j3f/93PHnyhBERMnHWajXkcjm8++67yOfz\n8Pl8GBkZgdfrxfnz55HP56FQKLCxsYFIJIJMJoN8Pt9xTKVSiTGmbirHNRoNmM1mjI+P4+LFizAY\nDAiFQojH49jb24NarWaEQaPRIJvNIpFIMGafSqWaxieTyXBycoJcLsd8rWR+5TVnel2pVJBMJptS\nPfsBaT4jIyO4efMmbty4gbm5OWi1WqTTaZaSls/nodPpkE6nsbOzg0ePHuHhw4dMoGk3Dj4qXQqx\nFgMFRbndbjgcDtRqNcRiMWxtbSGdTjddi+5DrhWPxwOHw4FsNsuIeLfxHL3UI7DZbJifn8etW7fg\n8/mQz+exs7ODnZ0dlEolaDQahMNhlEol5jbc2dnB3bt3WeoiMVebzYbx8XE4nU6kUinIZDKsra0B\nQEvzM/BU2CkWix2fl+gDnb1IJIJGo8HoCAA2TqPRCJVKBbvdDrvdjnQ6jZOTE6ysrMDlciGVSiEa\njSIYDMJkMkGhUGBvb0+yNZEYMcV6dZNKJ3QXaLVa5lKjWgqtajVUKhXm+qGzSHT9NOLMzozJq9Vq\nAM0+Ja1Wi4sXL+L3fu/3cP78+SbfHEm4xEji8TiOjo6gVCqxvLzMTNqVSoUteDqdxuzsLN5++23o\ndDoYjUYmOWq1Wly/fh3FYhFffvllk/+yHcikKxb4R76w0dFR6HQ6xGIxfPTRR3j48CGSySSOjo4k\nzU2j0UA+n8cHH3yAQqGAd999lwXG8EEz3ULM9MSb7IXvtYJGo4HZbIZer2cNWCgVRKvVdj2uQYDK\nkJpMJvzgBz/Ae++9h+npaeTzeezv7+Pzzz/H559/Dr/fj0KhwAhwPp9ngk+tVsNnn32Gra0teL1e\nLC8v4/bt26hWq5icnMRf/uVf4vj4GOvr6/j000/x8OHDjvOl1+thNpsRj8clE3z6DJ0J2tfBYBCH\nh4fY29tjFjCqsaBSqZhGThXYhMSFgvaAZoYuZhYWxmj0y+QpBuDChQv48Y9/DJfLBaVSyUzw8Xgc\nNpsNLpcLPp8P6XQaW1tbiEQiTACXOnf883Q7br1eD5fLBYPBAJVKBeAZA2slZJCZmNba7XY3pdl1\no52RFihVEATAzNeUxbGysoIHDx5gbW2Nuac++OADlipGVs9kMsn848vLyxgdHcXY2BjsdjsUCgWi\n0SgODg5wdHSEdDrd9jlUKhXMZjMTcIQWCRovCZd37tzB3t4ejEYj9Ho9dDpd0zONjo5iaWmJWV21\nWi0SiQQ2NzcxMjKCH/7wh4hGo0in0ygWi0gmkwgGg4hGo6zyptS1J8WtF2GWNPpyuQyr1Ypr165h\ndnYW+/v7+Oyzz3B0dPTcXFSrVWb1AMCsa93GhknFmTF50qA0Gg1cLhfcbjfTwK5cuQKv1wuZTMYW\nkLRhitymQK98Pg+HwwGDwYDx8XHmZywWi9DpdIzgkXnZZDJBr9dDpVLB7XZjZGQEer1etBWrGMin\nBIjnsNMm3t3dRTqdxmeffYYnT54wTUQKSOt+9OgRI4wGgwGvvvoqotEojo+P4ff7JVsf+Ovyv8VM\nSVJAgg5p8nwGBBHGFwkyAY6Pj+PmzZt455138L3vfQ8WiwVHR0dYXV3FvXv38PDhQyQSCcZs+eAe\nwtHREcLhMOx2O9N6Kfr70qVLmJ+fx9TUFBNMt7a2kEqlWo6NDyjqVkqnCmK1Wg2ZTAbBYBCxWKzJ\nbAw8i5YWK7TEo10RILFIZ6lWh04gn+X58+dx5coVLCwsoFgsIhKJYHd3l51pSmWqVCqIxWLY2dnB\nyclJV8SPF0p6GbfBYIDb7YZOp0O1WkUqlUImk2GuDDFBiPYfFfXxeDzIZDKIx+MsSr0bAYVeSwWf\nYpxIJPDtt99iZWUFOzs7zIfdak4UCgUcDgdu376NmZkZjI2NQalUIpPJIBQKwe/3IxaLdbRaUcAq\nmctbMUwSekKhEI6PjwGAzRt9XqFQwOv1IhQKweFwwGazwWQyNcWbkEA4MjIClUqFJ0+eIBqNNtHk\nbuaw1z1OAqVGo4HD4cDs7CxSqRQajQaMRiNzXdDnqNDT/Pw8c79ubm7C7/efWurhmTF5XrO+ceMG\nrly5gsXFRdhsNqjVauh0OuTzeRQKBayvr+Obb77B119/jY2NDSb11Go16PV6zM3NwWKxQCaTYX5+\nnm1snU6HYDCIR48ewW63MymVrAF0DYpIlQKyPIiZBKvVKtLpNHZ3d7GxscGkTCnFDYSo1+uIx+M4\nODjA9vY2xsbG8Fd/9VdIJBJ49OgR3n//fSbkdHNNHt1q8Pz3Go0Gy+2nzUuawiAhZVwkZCwvL+Nv\n//Zv4fP5YDabmYn6/v372NjYwPHxcVtpncZer9dRLBaxvb2NeDyOqakpLC0tYW5uDjMzM5iYmIDT\n6cTU1BT+/u//Hul0uuU1yS9OAZ7daHVEILLZLGKxGILBYFN6Dq+1Cp+hVyJHBKmV2btbyGQyGAwG\nzMzM4Cc/+QmWl5ehVqtxeHiI9fV1rK6usvfpfGazWYRCoSYBoJvn6JXBy2QymEwmjIyMQKvVIpfL\n4eDgoMm/3uqevOVgZGQEpVIJ29vbTW2S242J9/N2q4ESvSyVSohEIrh37x7LBug0FxQ4eOXKFVy/\nfh0WiwWJRAKpVAp+vx9+v59ZJdohm81ib2+PpRl2Y4kgIZjH/v4+gsEgCxSkWhz/t4c6IpEIrly5\ngomJCVitVqytrSGRSCCXy7FKhFLRb1CpQqGAzWaDx+OB2+1uuj/vwiQ35/T0NP7oj/4I4+PjiMfj\n+OSTT/DgwYNTYfDAGTL5yclJFoFuMpmgVqtRLBaRTqehVCqZCW9ra4sRg8PDQ8RisaaDQETvP/7j\nP3D//n243W4mBFDAU6VSwdWrV5kGT8SzUCggnU43lTrtBJ1OB4fDgUQi8VyBGpnsad9w8i/2E2ne\naDSYJrG7uwubzcaCBym956OPPsKvf/3rts0xePDMoBeNgb8OBZsINXmKUh+0b6nVIZTJZDAajbh+\n/ToLUCN3TbVaZcFboVCobVATL20TkycNulAoQKlU4oc//CF0Oh1UKhUmJiYQj8dZ1H6r5xVafroh\nJmq1GgaDgQVETU5OYmNjA0qlsulZeIIyCNA4BxFjIZfLcenSJbzxxhu4dOkSbDYb0uk0gsEgjo6O\nkMlkWNyDxWJBuVxmhUGkMJdW46a/eYjtf/49mUwGj8eDpaUl2Gw2Zq1pFccCPOssGAgEoNFooNVq\nYbVa4XA4MDMzAwAIhUKShdVWrZrFQG6C2dlZ3Lx5E06nE8fHx4ymdbqGTCaDz+fD1atXmRWUal4E\nAgGmxUsRPIgm0Nj71aKplS917SRLCTVuUqvVrLaI3W5/ztzfDSiWpxdNWqlUwul04vbt27h58yYc\nDgcODg6QSqUYf6Dzb7PZ8Oqrr+Ltt99miunx8TESi
QSL9zgNnBmTpyIX9XqdFSdJp9PsoFBu6u9+\n9zvs7OywgyJcBPKrkn+ez3ekQB9qMkERkMTk8/k8UqlUV0yeAlUoKIkHEYN2ml03oGttbW0xocjt\ndmNhYQGvvfYaFAoFtra2EAqFJGs8g0iDou+r1WoWJ0CBgcQEB216akW4KTr+5s2buHbtGuv/Xa1W\nEY1Gsb+/j42NDSSTyY7PxM9NsVhEsVhEo9FANptlmhKVJrbZbBgdHYXZbGYCaismINUVJPZdcouQ\nBuB0OpkbSpizPQgmz5+xfquvUUzB5cuX8dZbb2F2dpYFloZCIZYOZ7FYMD09jXK5jIODAzx58gT7\n+/td5+e3smjQ3500aplMBrfbjaWlJdjt9qbGQq2+Q5kIgUCACWWUhulyuRCLxRAKhSQ/Ax/U2+nZ\nqdbA7Owsrl+//lz2TDuQQDExMYHr16/D6/WyWKN0Oo1IJIJIJMJMz53ohlBAGcTZJ7clH49Fa+L1\nelk5ZIrd6rVoFV23VcZUO5hMJkxOTuKtt97C8vIydDodazaWz+cZk1coFPB4PPjBD36AW7duYWRk\nhD1fOp3uyiLbLc6MyT9+/BjA0wkOhUIsYIsmmCJt4/H4c/mGQvBmH94cW6/XEYvFmA+IgkdoIYvF\nIiOWUiQMoA5cAAAgAElEQVRnMp2GQqHn0jTIxDkIJspfM5VK4dGjR0xYIaKv1Wrx+uuvQyaT4Z/+\n6Z9w7969jpu8H1MmD1oboY9XoVCwmuNSAxmlotW4FQoFTCYTFhcXMT8/D61WC5lMhlgshp///Of4\n8MMPJZem5T/Dv6YiJxTfQYxbr9djbGwMbrcbfr9f9B5Uea6XAimpVArHx8dIpVKsHK/NZoPT6WSE\nQdi8phdCJQbeZN8LKBiTsgMWFhZgMBiQSqUQDocRDoeRSqVYfnI4HEahUMDu7i7u3r2L3d3dvvaq\n8LutXguh1+uZT54yY4jRCb9HRJreOzk5wf7+PrNA7O/vIx6PS34Gul6nMRLUajWsVisr2ZzL5Zgb\nVK1Wt83C4bMdFhcXYTAYmNZcLpeZgEZNkjqdZYqxILfUICFcO7fbjbm5OYyNjcFoNKJUKuH4+BgH\nBweMsXaDXnz4hOXlZfzBH/wBJiYm0Gg8Tbd88OAB7ty5g5OTEwBgdQyo4JPT6YRCoUAqlUIsFkM0\nGmUBkvxzSrGeSBnzmTH5cDjMXh8fHzOi0muwT6tJoZzmZDLZZD6v1Wrw+/3Y2dnpamNQZKSYCWsQ\nDFR4vWKxiOPjY2xubuL+/fssn3l8fBwulwuXL1/Ghx9+KKmZxKDGRsGFhUIBpVKJlRyloCO32y1q\n6egHrcZO5tGRkRFWSyGbzeLo6AhffvklHj582LWPTvjZer2OTCaDBw8ewOVysdoHarWa3bdV5gTf\nN77b+Y/H4zg8PEQikcDIyAisViumpqaQSCRYCWGK2j+tyNxeQIKG2+3G5cuXMTMzA7vdjmKxiHA4\njJWVFZa2SjEyZHV5+PAhC7jrVSvrpAy0g0qlYoG4+Xwefr8fiUSi5XfJjy6Xy5uCgsvlMsLhcFOn\nOingMx7aQSZ7WtyK78kAPG1rPTU1xSrW8ZYMvnCMXq+Hw+GAx+OByWRCJpNBLpdjKZjkgiOhuRMo\nT/40WznzNGZ6epo1nqIe9tQ1sRfeIfa6HciVMzMzg0uXLkGv1yMYDOKbb77Bw4cPcXR0xOonyOVy\n6HQ6lqKo1WqZELi6uop4PM7qRJDVj1ywg5jHM2PyRPiE2uCgNweVwRXmIFYqFTx48ABffvllV+Z1\nIu58oYvTJK50/b29Pfz6179GIBDAlStXWOoaBbpIQS9Bdq3GRNG02WyWmez1ej0mJycxPj6OnZ2d\nvu7Bo924jUYjnE4nq7xHWvzu7i6LCh7E+iQSCfzrv/4rKpUKFhYWoNfrWVVGYlRi4P193aDRaCAc\nDmN3d5cxeafTiVdeeQU6nQ5bW1vY3d2FTCZjOfu8oNwv+LSiXqBQKDA2NoY33niDlQvOZDLY3NzE\nxx9/jFKpBIvFwooOxeNxfPbZZ0wD6mfNOvnexf7Pv0d5y9QZLBKJdLxnvV5vKlRF/uRuU7lajU0I\nYnhWq5U186E5v3btGrLZLOLxONt3ZM6uVCoson55eRkul4sVmKGANmp9qtfrWXdCqWMapCVTCKph\n4PV6Wc8MoSmft2RJcc/w3+vmjArrKeTzedy/fx//8A//gEAggEqlwrJdSAhUq9VszOl0Gnfv3sUn\nn3yCVCoFpVLJAgyBp1a8dopBN+65M+0n/yKYJACWBuXxeJjkF4vFWHBfN4SMtNhBlvmUgnw+j2g0\nCoPBAJPJBKvVCgCIRCKo1+swmUxtm1sMyozLX4sIGRE2hULBaq1TbvdprzFFrPLdoAj9VD4Tolwu\nw+/34/Hjx/jmm29w7tw5GI1GXL16FX6/H19//bVoSmOj0XuEOjXLiUajyOfzGBkZwdjYGLRaLete\nRfEG8Xgcfr8fyWRyIP69ftaMfMVutxuzs7MwGAwoFouIxWKIxWKMiWs0GhSLRRwdHSGbzTalzJ3W\n2Nv51in9k3zbqVSKmdzbgWoAULxPu5z6TuNuN0YCnT+Hw4Fr165hbGwMlUoFSqUSbrcbN27cgNvt\nxhtvvNF0rUKhgHw+zwrkuFwuOJ1O9swymQwulwuNxtM6HXq9vqu6F6eVAgY8fWadTsc6A05MTLAS\nvFtbWwgEAk3nj++42Aq8W1cqbeR7kly6dAkLCwtQKpV48uQJVlZWEAgEkM1m2ZiJEVM7brKWUGzB\n2NgYRkZGmLBBwcJfffUVSwnsd07PjMm3y9kdJCi62ev1YnR0FEqlkpX+pJQLqYssk8lYYQz6e1Da\ncbt7AmAFLFKpFILBIKvalkwmIZPJWF63WAAYP85BMHoxJk8aK0W7Um9pKovazz3bbXQ6LDyDV6vV\nXdU+kAKa6729Pdy5c4cVEFleXsbOzg7MZvNz1iKhpN3t3FcqFZavTCZU8sGq1WqMj4+zpkZUEGd3\ndxeFQqGvdebnu5c9rlKpYLPZ2JmjpjixWIxpKHTNSqWCRCKBnZ2dJvN2v+PuZrzA0+dUqVTMekFd\n1cLh8HORz0SQqSkMNVshIYEqD/aSOiv183TWzp8/D4fDwZi83W7H5cuXce3aNVYplCw82WwW6XSa\n5f4TI6dSt+T6IkFRqC22y3A5TdA4jEYjJiYmMDU1hbGxMVbqllIGicnT+kjZCySES6X/KpUKTqcT\nFy9exI9+9CMsLCxALpdja2sL+/v7TSnZ/D6hanjFYhGZTAZarZZZUyizQa1WIxqN4vHjx1hfX+8o\nXErFS9mgZpCgCEzqOkaV8A4ODpBIJJr6nnda6Eajwcz1JKSQKZYn7oP2y5Nbg7pyUblHSmNzOBzM\nJwW07hrGd57qZ4xk3komk0gmk3A4HCwQ7eLFi0gkElhZWWGaJd+Za9Cg8roUBQ08awpDLo1BrUej\n0UChUMDx
8TFyuRyrsmez2eDz+Vg+PK+RUWSt1DRH4f1yuRweP34Mi8XCKrEpFAoUCgWWqmm32+F0\nOjEyMoI7d+7gV7/6VdO+7gXkRxTmy0shhEajEefPn8f09DTrzkX10uv1OjweD6sPf+nSJQSDQYTD\n4YEWUupWMKGa9RaLBQ6Hg1WCc7vdrA8AXVehUMDpdMLn8+HKlSuYn5+H1+tl7ZY3NjZYfXWqlSEF\n3exTuVyOk5MTfPHFF7BYLJidnW3ytxNz4VMAlUolVCoVrFYruxfRLyp5XC6XWVXH+/fvY3t7W1Jc\nAZmvyVowaOsdxXh873vfw9zcHIxGI+tHsb6+jlgsJtr0SSrzbvdZep+aKr366qt46623cPPmTVbl\nj+pmUFYXKQVE+3w+H2ZmZuDz+eB0OpnVp1QqsZ4CVGWQGgHxfeqB3vnKmbaaPU1QUMrU1BTOnz+P\nqakpliJyfHyMb7/9lqXw8BsCkEbIxEzDpyGwEIGlQ1upVJDNZnFycgKNRgO1Wo2JiQkWzNFoNBhR\nOa04h0ajweoBkOmVNHmn0wm3281SiUjL7sdd0O47pMnz4Lv5DdqlQjEevK+TKilSqWYe/aS20Voe\nHByw+SRBlaqLFQoFzMzMYHx8HKOjo8hms7hz5w4LyOsVQn8l7+dsN5/ksllYWMDY2FiTZkwxDBcv\nXoRGo2HaDQW2vSjXlxAy2bOeE1Q9kwpteb1e1kvCarWyMY+NjWFiYgLXrl3D/Pw87HY7EokE/H4/\ni5UJBoOS1573IUvds1ToidrI8oF1MtnTKnxUZ4HqWVB/ENpPFDCXz+cRCoWwvb2Nzc1NJqhQdpJU\nmjhovzy5UYiWX758mVXli0ajCAQCrGiVsIKl1HHwc9/u2ahQ0tLSEi5duoSJiQkAT2k/MXmdTsdy\n+UkJSqVSmJqawuTkJOx2O6uEZ7PZmDAej8eRSqWYFp/NZgemFJ1p4N1pHmqj0YiLFy/i3XffxZ/+\n6Z8yX1OhUMD29jZ+85vfsG5k3YBqnvOlJLtpbNArKA2DWlkmEgmMjo5ifHwcc3NzTJIn36eY5jWo\ncVJtAmpByUud9B6leLXS4KUKVFLe5/3+pLWQCa6XoLd2oBrdxNDJikMuCaEWwUfX9wIqiPTgwQM8\nefKEEVEySyoUCrz22muss5fP52M555lMpufnrNfrzxV7IrTTeMhyNj09DYfDwQpT1et1jI2NwWw2\nY35+nmmN+Xyedc/rRyjh0cse1+v1mJ2dhcfjYdeg7ImxsTFUq1VcvnwZV69exfXr1+FwOKDX62E0\nGlmLUSrTfXBwAL1ez1pJdzt2KahWq6wRysnJCT7++GOWSUR+dEqRGxkZwejoKNxuN7xeLy5cuACt\nVtsU/JxMJrGysoL3338fwWCQFXORan3io+sH6cZUKBQwGo2Yn59nPnCbzYZqtYp4PI5AIMAKJ/Ua\njd5pv5CZ3uPxYH5+HnNzc3C5XOwcUEGccrkMk8nE6IBCoWA9S2ZnZ1lvj0ajwbK0KJZmY2MDKysr\n2NjYwOHhIXK5XM95/0KcGZPn62yfxrU9Hg/effdd3L59G+Pj46yX+P7+Pvb29ljbQiEB7jSpVGWK\nD3I77YATmexpp6R0Oo1arcaYJ7VIpeIbDoeDaXqAtKpZvYKKzVAeMK1no/G0dWMymWSR/2SF4It0\n0AHpd96SyST8fj8CgQCmpqZgtVpZC0q73Q6Xy4VwODywuTAajZiammLtMIXBccLnIWFQ6NKRCqq+\nVyqVmmrk05xT5y2Px4Pz58+zoLdBmL75KmBSfd31ep21qaX2zXzpVfqbiFw6nWY9AoD+Ykb6sZgU\nCgX4/X5EIhHWy9xoNGJ2dhYAMD4+jsuXL2NhYQEulwuZTAaHh4csu4IKslA/dSld83oF0RzqqU7d\nCamFMFVYk8vlCIVCsFqtLLXu/Pnz8Hq9sNvtzDKYTqfx+PFj3L9/n+X5dytwCRn7oJQeChC8fv06\nLl++DLvdjlKphGAwiK+//hoPHz7sOXVOKui5SqUS4vE4NjY2WNpwo9FgpcYjkQiq1SpTyMiFZrPZ\nMDIyAo1Gw9JxM5kMvv32W6ytreHo6Ij9xONx1sFOqpuhE87UJ38aDF4me1p/emZmBu+99x7OnTsH\n4Gn0aCKRwNraGvb29pBOp1mUfDebg4pEUC/uQeUytoNMJmO+HZVKBZVKhVKphJGRETYGtVoNp9PJ\ngrJa+dEGZUqr1+sshY43d/GMj5ge3xuANG2pG7STRnBycoK9vT1WFVCj0SCdTiObzTLNlvKW+31u\n2luTk5OMyZMmSt3SeFAMBy/gdDsGkvrbIRgM4uDggBEHilHoB7xLSiqDB8AqWIbDYVZlULj21EAq\nk8kgkUggm82KZkf0Om4p4xQim81iY2MDe3t7SKVSrMc5meILhQIWFhaYNW1lZQVra2tMs5udnYXZ\nbAbwLIr9NIVs/tpUqEkM6XQaoVAIKpUKuVwOJpMJ6XQa5XIZKpUKhUKhqalNLwweeBarMChmS0KD\nXq+Hx+PBtWvXWD56OBzG+vo6Pv/8czx8+JCVoD6tbCeio5RtQRkU9P/j42N88cUXiMVi0Gg0sNvt\nrLz26OgoHA4HS7MtFAqIxWLw+/343//9X/Y96kondfzfmRS6QS8IaQw//elP8cd//Mfwer0s5e3w\n8BCPHj3Chx9+2LQxuh0DVT8TNgk5LZAZkIoq0BgqlQoikQgODg5w4cIF1jglm81id3eXmfKoMt+g\nNGeCRqPB0tISlpaWmHmwVCrB7/ez9orEKOgAUo4oMX4pVhApTCWVSuE3v/kNqtUqvv/977Mo9OvX\nr7O64hRk2StoHaiMaalUeq6tq5hARZYivs0nPVc3zLMd8vk8MpkM84dTxbZeQXtFLDpcSEh5DZxq\nJdjtdtY4hKK0qYEJMfXDw0NEo1EUi0VmEaKObWehzQNP14Cim8+dOwebzYbJyUnYbDZkMhnodDr4\n/X7853/+J3Z2dpBIJLC6uorFxUX8+Mc/xsTEBNRqNcxmMyuLe1ZxBjzo7J07dw63b9+G2+1m7phY\nLMaawVDAVy/XF7bj7fe56by53W5MT0/D4/FAo9GwlLn79++zdDWpQaG9gizOFNBMwn04HGZdUqmo\nGrkXTk5O4HA4WL8RrVaLUqmE9fV1PHjwAHfv3sXW1hai0Shzi3Qzfr5jYyecGZMftNQll8vh9Xqx\nsLCAd955B2+++SZ0Oh2i0Sh2d3exurqK+/fv4+7duwgGgz2b0sgnzy9Kp+uQuZov/tMJRDR1Oh0s\nFgsajQaL0JbJnlW7cjqdsNvtjLCPj4/DZrMhn8+ziGD+eoMypVEAj81mY5oamRAbjQZ0Oh0bLzFB\nXrscVNpNo/E0p3dlZYWZS2dmZpj5mopOUKXAXkBE0mw2w2AwAAAzn1P713b1Cei5KTCJPkcxHeT6\n6dXtQ4IECYOUfdEv+H0ixuiF6VVUYphyme12OwAwbdHv9yMejyOfz
7PGHMVikRFqMe1E+L9OZ66f\n4M5arYZgMIj79++zeg/UtEWpVCKVSmFzcxNffvklgsEgizepVqt4/fXX4fF4oNPp4HK54PF4esru\nOA1t1Gw2Y3Z2FpcvX8by8jI0Gg0zee/u7uLx48esw2Gv1gfax4OyXlDsy/z8PK5cucJaylI9hYcP\nH+L4+Jgx3n7mrJP1h/YGrXWhUEAul2MMmmhtpVJhAi31SaFmReVyGZFIBN9++y1++9vf4u7du6zk\ndS9j/05o8kT4BmGyJyZ68+ZN/PVf/zUuXLgAvV6PRqOBtbU1/OIXv8DKygozxbUKKOoEPqBKqi+e\nGITJZEK5XGbm7U7fofQsh8OB6elpVke7WCxCo9FgdHQUt27dwrvvvsuaNdBcqlQqlsPOHwCqXtVt\nH3ox1Go1xOPxpqhWCv6anp7G6Ogo5HI5c2uQ9st/f1DErFarIRaLYXV1FdVqFe+88w7Gxsbgcrkw\nMzOD6elpnJycNJUn7YaYklDldrtZ5gD5Qh88eICNjY2WGgz5ovmCQSSomUwmlr9cLBZbWgQ6QaPR\nsGjpQfmB+TgKMYuDkPHSnnU4HBgfH2fzT7EwyWSS5Y9TMRbqOqfVapmLiTR/XhDiK5IJ06SEz8qP\npRviT7EPu7u7UKvVmJqawujoKFujWq2GJ0+e4P79+4hGo0ilUqySGa0t5ZmPj4/D5/Ox9MFuGd+g\nGf3ExAT+/M//HG+88QZGR0dRrVaxvb2Njz/+GGtra9je3mZCSz/opZObGIOVyZ4WvvF6vbh58yb+\n8A//EB6PhwWT7uzs4NGjR8hms30XTqL7kcVRbOy0j/g9SI3NgGeKH++GNBgMmJ+fx9TUFAwGAyu1\n/dVXX+HRo0cd/e7tGHi3itqZMflBmo4dDgdu3ryJd955B5cvX2b+0pOTE2xubuKrr75qMkf1c19e\nk5dyLdJqKfiPL7AhLJxCPyqVCgaDgfmhZmdnme95Y2MDCoUCr776Kl555RXY7XYEg0Emld+7d4+Z\nsYTWhnZm124gRvT5+uxksqI8UDEJX6oVRAp40yP5pAuFAq5fv450Og2DwcAioKUKZqQVKxQK2O12\nltM9NzeHfD6P7e1t7O3t4cmTJ9ja2mopkZOAQ2vNp94RMdNqtUilUqzeeLcC2NTUFF555RVYrVYE\nAgEEg0HkcrmuriF8fhqr1KBUqpMwPj6O6elp6PV65HI5Fqi0vb2Ng4MDVkyEKq+RJk9CKd1HSPz5\ndr2kWQm7rfHZFL3EPjQaDUYz/uu//gvr6+vQaDRoNBqsP/zOzg6SySRbU/LPJhIJ5PN5KJVKZuEa\nZDGmXkAFc6anp3Hjxg1M/d804nw+j0gkgidPnsDv9/ccn8SD1qSXeRdCJpNhdHQUv//7v48rV65g\ndHQUarUaBwcH+PDDD/H48WNGP/ulH924h3iaRdY3+j7vMiCL8tjYGCwWCyuW89VXXzFXaqexCy1l\n/eD/CZ+8x+PBz372M7z++uvwer0Antb+PTg4YL3oBxUBzzMzqdDr9ax/cLVaxd27d5nmBjzTmojw\nG41GeDwevPfee3j77bcxOTmJSCSC9fV1/PKXv0SlUsGbb76JkZER5PN5PHjwAJ999hk+/vhjtoHE\nQJtyEOClSd6URX45StEhTf60fZMUtZrP55mPrFQqwWQysWBFobtCbC7IF2g2m1mu/czMDBYXF3H9\n+nWYTCZEo1H85je/wRdffIFkMtmyzSwAJuTwmidVSqtUKtBqtcwdU61WmUDUzXydO3cON27cgN1u\nx9bWFgss7RVEuHjhTExj5j9Pgunk5CSmpqYgl8tZgNS9e/ewtbWFbDbLXCmxWIwFZ/KVJ3lCygdp\n8mekWq02BXjx1oV+07dyuRxyuRyCwSDT7sjNIrY25J8NBAIs0NBgMLDMl7OEUqmEx+PB3NwcFhcX\nYbfbUalUcHJygqOjI+zt7SGTyQwkXof2b78gy+fk5CTee+89XLhwAUajEdVqFfv7+/jFL36Bvb29\ngaRbCk3e3ewbPktI+ENCN7lvAGBlZQWffPIJQqFQk0WzHYTj4ONfpI4T+H8gul4ul8NgMLAWfsDT\nhw8EAvjHf/xHfPrppwNj8L1Mskwmg9PpxJtvvonx8XE0Gg1mwuWjMRcXF2GxWJBOp+H1ejE7O4vl\n5WXW5zmbzSIQCLBqa48fP2Z1y+/du4f19XVmAmo1Dr1ez9Lx+p2PQqGATz75BEqlEj6fDxMTE9Dr\n9XC5XKxiGLk0XjQoOJJK7rayvKjVamg0GgDP/OVUyezq1atQKpU4OjpiQY+pVAqHh4e4f/8+1tfX\nWUvRVtoIT0SENQpksqc55RTEybewlWolouAbj8fDcrkpcpePx+gWdF0KlJSS600xIgBYKd5IJILD\nw0NEIhHW5pny6JVKJcrlMra2tpoERSGjB55Zasjn2U7QJoGA5rHXfc7fg/ax2D1J+//mm28wNzeH\nV199lY2hF4GaBINB0EYKjr106RKr61AsFnHnzh188cUXCAQCKBQKbWsiSIVQo+0VVK55bm4O4+Pj\nMBqNTIAgl2WvVioxjZ0EVJlMxuiEVM1euL5UTnt2dhbnz59nY69UKgiFQjg4OOgrLkj4t1Qh8kwD\n7wahVVK1MYfDwSY1HA5jdXUVn376KdbX1weqRXYb/Uu+14WFBSwsLEChUCASiUAulyMajbI+w1ev\nXoXdbkc0GsXY2Bjm5uZgMplYq8dcLodwOIxSqYRMJoOVlRVm+qeKT53GxJd+7Rflchmbm5twOp3Y\n29tjQUoUWUypfP0e+m6lVp4xFItFZqYnpkWFezQaDXw+H7xeLxO8qKnK5OQkbt68iXq9jrW1NSST\nSVZCk/qdU8lKKe4a4Q8REmKexOAUCgVUKhVzCQmfiQflcVMOrs1mY80vqNxtr+CrCErdL+SPVqvV\nKBaLCAQC8Pv9rBsgtSQ2mUyszns8Hm/yc7bSJmk+Wo1FGCfAV13rJ5hReP1WyGQy2NjYwNHRUU+R\n0jx4RaIfukVBu+fPn8fi4iLbV/l8Hmtra1hdXcXJyQmLAxlk6lsnP3Mnt4/NZmPd3XQ6HSvWQ1Xh\nut3bQm2d3y8k3JPVhvZLN/NPn6fSyKOjo6wKKQWdUmvoQSk934nAO5rMbgNkeMhkMpjNZjgcDhZN\nXK/XcffuXfz6179uqmc8CPRiBiTtjAinVqvFhQsXmOnMZDKx6Hi1Wg2v18siMul5ALDgJbvdjnK5\njIcPHyKZTLI0OSnMhop8dCuotLsmRbv6fD74fD4oFArmB9zc3Ozr+t2AXxuylBQKBeh0OhiNRphM\nJhiNRlQqFRgMBni9Xrzzzju4efMmarUazGYzxsbGWJS4xWJhPtZoNIqDgwMWqEQlY6X68XiNkDdD\nUx0BatULADqdTlRb5e9FLoXR0VFcvnwZ4+PjUCgUzNQs9FV3O4+0N8h9IOXzWq0WdrudVfyiKl5r\na2usVzY1rfH5fMjn86zI
D18VsdWcdqIR/PrTeRMr5HMaKJfLiMfjrAIknzbarVAtFFjE4hOkgGrJ\nz83NYWpqillOMpkMotEoEolEk/thEHPU6TpSaD1PlyjttFwuY3d3F4eHh10HlvIaLz+3xHv4wmF8\nAB4/Xv75hM8rFCAou4WyMqLRKL799lvWLbTXeRa6pOi1FJx5dH0/ko1MJsPc3ByWlpZgMBjY4q2t\nreHu3bvIZDIDO+A0oURAKPCiExqNpyleh4eHyOfzKBQK2N/fRywWY1XrqGKWWq1GqVSCXq+HzWbD\n9PQ0lEolDg4O8PXXX2N1dZVpa2S2l2p6pyAvChQahJ+cmPzu7i4uXbrUJLhptdqBNRwhSZsPBBMS\nPt48rlQqmemMorbJLVIoFODxeHD58mXMzMyw8pSUX853IisWi2xNYrEYqyzWjbuDhDx+v/AmYIoo\n5wUAYoj03PR8RJSo5ebCwgKWlpbg8/lYUGepVHquuqBUCOeUL9vc6XvUUyEQCODk5ASHh4c4PDxE\nIpGAXq+Hw+HA/Pw8PB4P63sQiUSaak70A16IokBJqiA2CPdUO5C5O5lM4vj4GC6XCxqNpmn/SUU7\nht6NYE6VOY1GI6ONpVKJZfdQVcRMJsOCdAfl1hQbN8+Y2j0jLxTTZwuFAqsO1+te4e8pvC81+6I6\n8sIzRwJvO9B4qYEZuUeCwSA+/fTTnkqoDwpnxuTVajUqlUpfwRoymQwXLlzAq6++CqPRCODpgdva\n2sLKyspAAkGE96NFlGoKpGItq6urKJfLrKzu8fEx8vk8DAYDM29TkAb5LX/yk5/AaDTio48+wsOH\nD7G5uckC9ro9lNRVS6FQIJVKDcxsT0yeXAnEHIVtV/sBFTkijY+3RvAEgW+QQc+rVquRSqVYlDxV\nwlteXmZFK8gvL5PJYLPZWMpLKpVCOp3G0dER63UupSMX0BwVrlKpnutSSM9RrVaRTqebTH7kftJq\ntdBoNCxPm4QXm83Gsi7Gx8fhcrlQq9WYVYdS6aTsf7F9QGPn+6t3AnXnKxQKLOgulUohl8vBYDDA\narWyuuPhcJiVIiaf8CAsSwSPx4OrV6+yVr1SBfJeQet5cnICv9/Pgjbpt9T4CF7rFAqz/Gsp80R0\nirRhACyTgapjqlQqHB8fs9gVqWNsNYZWmQ3C+BSxIkG8xUoYuU7BxZQi2ytajU+j0bBAWEqLJGWF\nXCJgKyYAACAASURBVI5SmLzwWWu1Go6OjvDxxx8jEAj0PG6CUFCRel7OtJ98o9FoqoLWLSiS8dy5\nc8x3wzOY05Cc+AWXUlym0WggEongo48+YhHgmUyGNXOgIgonJyfMZ04+40gkApVKhWAwiHg8ziTu\nXhrN1Ot1RKNRZv4alG+IOif98z//M7755hsUi0XE43EcHh7i6OhoIPfgfYZ8WpLQDE7ENpPJ4ODg\nAB988AEUCgWrpkb97clkSdYG8teTX55qTJfLZca8yKcm1UxPB50Iltia8VH3/DNlMhlUKhV4PB5Y\nrVb4fD5YLBZotVpWtOTw8BCxWAx37txhLWH9fj+2t7e78usKzcM8ISmXy5KIW71eZ4VUYrEYADBh\nlFrMrq+vI51OM8Hr6OgI4XCYMeFBmtXj8TjW19eRz+dZjMBpaamERqOB7e1t/PKXv8Te3h6KxSIK\nhYJk1xjNe6dKmlKfwWazNaXt1ut1pNNpRKNRhEIhHB4eMgtEN7Sykzul1ZiJaQLPBBixz1YqFYTD\nYYTDYeTzeVbzo9dCO7Q/hQySj9mgIF3ahzxd6RSQyL9H1pwHDx4wa9LBwQGrzdIvhK4bqfNx5nny\nvQbgkdbjcrkwMjICpVLJmpUM0kwvhm4PRTqdxurqquh3qOyuGPb29voaJw9iHvR6UPNTKBRY+tD9\n+/efOzCDglievZCw0H4qFossd5n+z3+PzMnA85oSrzUTUaJuYr3MGzFy4XdbEUQKQisWi6y/OU98\nKMKYhMNMJtNkERhUzXQSZqVci4ghb+Xg55Xa4gaDQSZMUWrhaQjj2WwWoVDouV4Cp41gMMi6QJIp\nXMhgOqEdA+xmnsitGA6HsbW1BZVKhf39fTx69IhlPJDWKrY/uwF//qS83+4+JIzE43HW4ZICX3tl\nlJ2Ej3K53BQHw68BX3ypE+g8Pn78GDKZDOPj40y4HWSHRV5AkYIzY/J8ek4vm4siGalIBgDs7+/j\nf/7nf+D3+0+VyQsnWcq9zsofIzaG0xgLX8P9NLQmnhC1ew6hVtzqc2KfbzQazGTHC5+9BCcJNeRu\nQN+NxWLIZDLY29tjQgfNM9/FUThOoPs6FDzxEM6x1P0tnEsCT4woXbBbk6PUZ+DHXSwWT71RDA/q\nDknNX3K5XMvGMWLoJcCuFWKxGB4/fgy9Xo+VlRWUy2Vsb29ja2sL8Xj8uWDdQZzZThaLbvZRNpvF\nwcEBYrEYgsEgQqGQpGqhUiG239u5QKXcl84cZTupVCrmshqku4jG+9IzeaFZpFtQEFIikUAgEIDF\nYsHh4SHu3r3LTIaDhNBfNmgCdZrgA0hOa8y8BDxIiPmh+jUtdvrOIOZILKq323GQdiwsbCO8Hu8H\n5DWzfsY+qJgN4HlB6rTAu0goCPG02r2KgZhEMpls6ucgxVzPp3ENAoVCAZFIBI8ePYLRaESpVEI4\nHEYkEukoKHcLqcJJN+c2mUxiY2MDcrkcx8fHSKVSp7KOQkbfzz2Ip5HlAQBTGk5r7FJwpj75fqRH\nekC/34/19XVMTk7i8PAQq6urSCaTAw3k4Qkon9P7XYHU4iCD1CQGAZ5RftfmnN8vwOkKHPQ5PnWr\n2/sJo58HyeRfJCjmhG+MxFtVTnsPkfmXCv9ILYpDQb28ibgfUCW+nZ0dJvTxcSGDpI2DLOADgAUx\nrq6uolAoIBqNsnTWQfi2CfwZHdSc8ML5ae43ft474cyYPFUZ6pZ489pKrVbDvXv3kEwm4XA4sLOz\n0+T/6IZQiWlH/Gs6hID4ZuYXVLi4L5p5CsfOR6MLx0Lv8ymN/ZisBgGe4fDacD8Mc1Bj6nRvsbEP\nOnq8FcgESfeXOlZ6LbQIiAWWCs/UaRIx/reU+/DZFfz/hNei67Uzefb7XLz5V3hvIfj55itFitGQ\nbsbXaDSnfklhZt3eh6cvYvcQe26xa/L7tdF4Gjfz6NEjFmNDhad4wVl4rU5jF75PNF0YHNnt2MUg\ndi2p57HV/PCg1FwpODMmL2Q8UiZP6Cctl8tYX1/HwcEBdDpdU8s/KdcCpC+a0BxKIMIqtoF4onta\nEI5FbMy8liA0fwuJS6vNLnzvtCEUPvj7iwXhvShBSuph5fcLPy6xOeVNhoMeq3BMwPNBjK32qJhW\n3+qz/Pq025NSxsvPkVBoaUcAaZ/ze5lnCmK05rSEFmKwUhiqcL/we6XV3u60D+k9Yepmt5C63/nc\ncqHrTup+4J8znU5je3ubFYri6ym0U6jE7tluzGJ7rN33pdCZVrSyGz4ndd9IwZkxe
Qr8aKVhAuKL\nSd8hXwelRNFhphSnTpMp5f1OC82/bkUsTpPp8IcLaA40E84dn57Hp6UolUpmYhSOm2eqpy2s8M/E\nEzzK8+U1HLL+8ASlH6lbCrqV4OlZ6G8hs6HftDa81ickNN3cW8jYeO2WryDGm255S4NM9sy6xl9D\nLDuALx8rPAetNFKp4M96K6sC/77wWfmiJsKAROE9Bgm6t5gw2orG8fudGuJQi1r+s3yQpRhdFKLX\nee/28/QdqqtA9IR3HbVShoSCLv2P4il4V0CnoF7+rIkJ17TPhXMotlbC8QljmqQKWd1AyvV4miy1\nZsCZMXmgOfiu1fut/i92aE9jfPwm5QuvtBvbi2KIYvcTY9I0dr7TnXCz8+sgJqCcprAifB4CHWrK\nYxdjhGcxxk7g5xx4niCIrdkgNchWBUmE9xEyTIKwRbHY3hC+FhtHLwyj1XWE74mNg8YqrEXeavyt\nrjeI8yuFIfBj4s+omGAtPNdi4z4r0JyTYNKqxbVUOi88P60+1wrt9jl/nUajIUpT6PVpndF+QOPq\nRng+07K23Uj6ZzXBdF/avLymJIVgnvbY+M0odlhIM6NSs9TjnTY3TwhPu2CIVPCEnRqTaDQa1Gq1\npsIpPBF4UWlSUkGSNpmPhcVwGo1n/sVWgmqv2gBfLYy3elDFwE7an1A46eS3FmOKg1yPdsFzwuco\nl8usrChZ+7qN5ejnDLRj5u0+T2mber0eAJpKHQufsZVP+izA03C5XM5SCLulJYN8Dv4MtLsPfzbE\n5llMUHkZICYEtsOZ95NvFUgh5g8Um2w+YEwofYkxv36JD12TOprx4xVKsPzn6TXvcxPWYxeOVSo6\naQw0Br7xAl+FjR+DVL/RiwIdWL5Mrtg+6Na0/aLQLtjpNMcq3PO8f1gK0RJqjq0+z5vr2wkPg4IU\njZ7cdsJz9TLs7XZjoHGTm4oEcJVKxQReoYDbKTW2lSB2Gkz1RacsDgJS9uuL2tvd4qXX5Hkm3iqQ\nopVZmd4DntbAJ18QmYzI/0gNWagiGLX9a1eqUMwUxv+P92nzTJ3GLayQRIydoiFJOOB9sUSQBtEk\nQoyI8GMURqsLrSkv0tUgBZ3mgxeYXqYDSBDTEPj3TvveBOG+lHLvTkRNuFdeNBNtx9heFquUVPAC\nitAHzNefp+filSShIsGvAxVRIppI1yJ6RKbfRCLBrAe9jr8Xq8nLglZ7XYwHfddwZkyeCjTkcrnn\nJpBnRHwgD+/7IZjNZlitVlgsFhgMBuj1eqhUKtbulPrMx+NxbGxs4L//+79xfHwsmm/Z7rDQeHQ6\nHcxmM1KpFGvGwWvCQmZDTQ6MRiN0Oh3UanVT61mqj57NZlEoFBjD70Vq5DUq/n/0YzQaWRUm/n3+\n2V/GTaxQKKDX61lFMYIUy81ZQywFUApOi2F2c00+eE4s4EsIsc8NClKFEtJ8efcOjZffL2e1RzoJ\nTRqNBjKZjDFcmUzGAmS1Wi1TBngFQ6vVQqfTsfgVrVbL6IjT6WQuALvdDrvdDq1WC6PRCKvVCq1W\ni1wuh/fffx9PnjzpSROncapUqpaNqV60ACikC60sObwyJpa2yPOal8Ua1C3O1FzPm195iGn1Qv8J\nL50aDAaMjo7C5XLBarXCYDDA6XRiZmYGDocDer0eR0dHrB641CICYotJEf0ktdLm4Dd2K21YKF3z\nQR/UmpC0+1qtxnyLGo0G5XKZ9d/upRMTEUDeBy/leV8GkObCCz9Szcm9YhDMQBgx/F3DoC1KLwL8\nPhcz5b/M60DaMIDnhBPebC+0wPHCDAk4Wq0WSqUSXq8XNpsNOp0OPp8PY2NjMBqNsFgssNls0Gg0\nrH21QqHA2tpaTwVn2lmsXjSEqXFAsx+7G6saLxietYDYK86MyVN7TSHBJgh9IGJaKkVzyuVyOBwO\njI2Nwe12w+l0wuVyweVyQavVQiaTsXzLdhWoxIgCj3q9jlwuh1wu99zY+PEJ/WSVSoX1ftdoNKy/\ncy6XY342rVYLg8HA+k/XajV2EO12O9LpNMLhMBKJRFsm30rTajSe1oOm4BipUcdnDdJkMplMy8JJ\ngxq70FXUz/VpL/Qa0HjW69GJYIsRPyEBPStG32vA3VmD6AvwfEwE8DTtWNhQhgQDqghHXRQtFgv7\ncbvdGB0dxezsLKanp2E2m2EymWAwGFggLvC05ere3l5P/UTq9Xrb772oNaBzx7s4eIFITCloJaCQ\nACWMpXpZIPV8nalPvhWDF/tb7H9kIqKWoYVCAYeHhwgGg9BoNDCbzdDr9dBqtdjb28Pq6ipSqVTP\nPYmFTF3s/0Bzvjr9jyJOyQcPPGu3S+ZojUaDQqEAj8cDt9uNK1euYHJyEnq9HqFQCJubm/jtb3+L\nvb29jiZ9sfFRmc1BtpptBToYwvxmKessvA75D+nQDQoqlYpVXtTr9aw9pNPpRL3+tAZ1KBRCMplE\nJpNhwhcgHivwssUz9ItOwUY8A6K/eS3qrDI2yBUmDL7rFULX3Wk9jxh9EWNGwrHwtICaoWSzWcRi\nMRwfH2N/fx8WiwX379+H1WqF1WrFuXPn8Prrr8PhcEChUECn0zF3Xi819Omsn1WWC69gUfdGsohS\nTFaxWGSxVLxbQopA38vat9L8aX8Kr8mfp07nRqjwtsOZ5skDvR8Yeki1Ws1MU5lMhvVdr1QqTDs2\nGAw4Pj7G0dER63vdyzjFJMB2z0OveRM5bTZ6BgqG0ev1MJvNMBqNmJycxIULF3Dr1i3Mz89DpVLB\n7/fD6/UinU6jVquxtqKlUkkSIW00GixP/rTAPw+tC5/KRAIOpcWJRX0LQcSD3BiDAF3T4XDAbrdD\npVLB4XBgfHwc586dg8/nQ7lcRigUwsbGBoLBICKRCDKZDEs/JNcJbzrltaxuDqEU8IzmRTLNTtoC\nb8HiBTKKDuc7E74I0HgUCkXPwjwxC5VKxfzZvLmcT0GlfU337nfcdP9WGTetBGX6oSj3dDrdFPRL\nzFuhUMBiseDq1avQ6XRYXFyEy+ViZ1Kv1zOrWTfz9aK1Xbof3ZsUPQCw2WwYHx9na0OtjVOpFNLp\nNDKZDKug1w2DlzomjUYDnU4Hk8kE4GmrapqbSqXCgsJLpRITRKvVKuNZSqWSCSTCPQY8UyJfeibf\na8MRIqi0cXU6HbRaLQAglUphf38f8XgcuVyOMTWaVAoK6XUj9iPJ0yHkDy8tEmnyXq8XTqcTly5d\nwtLSEivV63A4WCDhX/zFX+DGjRv45ptvsLGxgYODAxaw1wnFYvG5QBJ+LP0eULlcDrvdDpfLBY/H\nA6/XC6vVing8zhhkLBbDyckJ5HI5isUiTk5Omvqli5mB+fnrFyQYmkwm3LhxA9evX4fT6WTjpYyM\nQqEAn8+HqakpnJycIJFIIBqNIpVKIZvN4ujoCOFwGIVCgTWkEDbq6FazEWPkNAcqlYq5csjFc5ro\n1v+oUCigVqvRaDSgVqvhcDhQLpeRSqVQ
LBZfyJgJQuGxG5CAoNPpMDIygldeeQWlUgnJZBI+nw9y\nuRz7+/uMUSQSiaZS2v0y+n6u0UojJOGa9lexWMTvfvc7BINB/PjHP8Zbb73F+n7wAm4v9xdThk4D\ncrkcOp0OGo0GarUak5OTcDqdSCQS8Hg8uHbtGuMRVqsV+Xwefr8fKysr2NjYQDQaZcyXpzv9KJ3E\nj6amprC4uIhbt25BoVDA7/ezQNBgMAi1Wg23242joyNEIhEmlJ2cnGBqagoWiwWhUIi5eGOxGNLp\nNFtLvv6GFEH2zDX5XkALolQq4fP5cO7cObjdbpTLZeTzeSaxAc+0NovF0lTT+jQgNOuJjZtAubAu\nlwsTExOYmZnB9PQ0JicnMTc3h6mp/9Pelz21mWbnP0Is2ncJJBAgBGJf7PbCTLc73e2uTlVnslWu\ncjO5yv+TPyBVU5WbpDIXU6lKJlMz6W67Pd3t9gIYjFnEKoT2XUgIBOh34Zzjl88SSAKPO/X7niqX\nbdDyft/3vmd9zjn9yOVyODw8RCgUAvCaHdvV1QW32w2Hw4HBwUGsr69jYWEBe3t7lwq2ah3vRE+s\nVki9nmtWq9WwWq24ffs2vF4ve8larZaVI1nTmUwG2WwW4XCYqwtos1Yjx1CusVAovGXV1rtG+ruj\nowNOpxPDw8O4e/cubt68CZVKBbPZjK6uLh6GIa7Bbrfj9PSU+RgHBwfY3NzE7u4uh/KLxSLy+TxX\nSYjVFnRdUoUver/t7e3QarVwOBwcNiUBRuRLYk1TZKRUKnFYNp1O4+Dg4K1DTx3U6L69K6+a9nNX\nVxf/yefzCAaDCIfDSKfT73ySIO1lWstFjYaqgfYxGdRjY2O4ffs2zs7OkMvl0NXVBaVSiVAohIOD\nA5RKpXPPf3NzExsbGzUZ5vWuQaFQXDlqdZHBcHJygkgkglQqhZ6eHtjtdhwdHbGCIqO1WCzW3diG\nqojESF2t778M4nkVZZZWq4XFYoFer4dOp+Nqg8PDQ7jdbuZidXd3Y2pqis+M2WxGsViEy+VCPp/H\n/v4+MpkMt/KmlB2VVl+UhhP3F61NpVLBarXC5XKhp6cHfX19GBoawu3bt6FQKLC/v89efDKZRHt7\nOywWC169eoWtrS2WGUdHRxgfH4fD4UAoFGLHKBwOIxwOIxQKIZvN1mz0UwvvteNdo4eQIArH0dFR\n3Lt3DxqNBkdHR3zzKa9EFpZYA9psKE/0rqptXmljnovWT5tjaGgI9+7dg8vlgtvtxsDAAAwGA5ej\nZLNZbG5uwmq1or+/HxqNBlarFcPDwxy5+Kd/+ickEgkcHBxcmELQarVobW3lskXxMNHzaIasRKHv\nsbEx/P3f/z1mZmbYMz85OUE6nWartLW1FScnJ3j58iXm5+eRSCRQqVR4lkE1qFQq2Gw2AK87gdVr\ncUsjAq2trTAYDBgZGcEvfvEL+Hw+2O12xGIxZDIZqNVqVtTkzedyOdjtdthsNmi1WraeNzc3sb29\njVgshnQ6jWw2i7W1NSYuUT5QtLirCZDW1lZ0dHTAZDLB7XbjZz/7Gfr7+5lwaTabYTAY2GsBXu8v\nmg++s7ODJ0+eYHFxEbu7u0zcovSBRqNhA6VUKjE5qpH7Vw9vAni9D6ampnDr1i10dXUhGo3i5cuX\nTJy8aG9dlegogiJ3jXr0CoUCRqMRXq8Xn3/+Oe7evYuRkREO2dOzlPINSqUSkskk/vVf/xW/+tWv\n+FobvQ5SamJU66qo9gzFqOLu7i7m5uYwNjaGnp4eAG8iUaFQCOl0+sK+IgSKRpZKpUurpqqtT/p7\nutdqtRpqtRqtra3o7e3F9PQ0vF4vXC4XlEolYrEY1tbWODVLTlBvby+fGa1Wi+PjY2g0GiwuLnLn\nT5IjFOoXq51qrZP0iUaj4bJou92ODz74AHfv3sXs7Oy5cumTkxNYrVZkMhmUSiUMDQ0xKZCc0dbW\nVjgcDlgsFkxMTMDlciGbzXI6NhqNYmNjAw8ePGCnqJG98d6U/FU2caVSgdlshs/nw8jICDweD46P\nj2Gz2dDb28teFeU5iHxxXXnSWuu+qEQDeKNMW1paoNPp4HQ6YTab0dLSgq6uLvT19cFiseD4+BjR\naBRbW1tYW1vDixcvoNVq4Xa7MTExgcHBQbhcLlYMMzMziEQiWFxcRLFYrLlmGuRDSkjc0NUmPF0E\nupahoSF88cUX6O3thdPpRG9vL2KxGJaXlzmESQqeFE+5XMbW1hZzJMQcZzXBQOFS2tz17ptK5Q0R\nrL29HSaTCdPT07h9+zYreKVSiYODA+zv7+PBgwfI5XI4OjqCXq+HwWCAwWBgjkGlUuHeBoFAAC9f\nvmRiHnnU0mlZYkWH1ENTKpUckZmcnMTY2BgGBgbYoFCpVPyHDFeKRp2cnECn08Fut8PlcmF2dhbZ\nbJa9E7p+hULBQmJ/f589OLEnQ60Iw2VGrfg9tCbyavV6PVe2bG9vY3t7+1zOVpqOODs748ZWCoWC\n26M2k6YRuQCXRdcIbW1t0Ol0uHnzJu7fv88Kb2FhgSNPVGdOtebkJep0OnR0dOCTTz6BUqnEs2fP\n4Pf7EY/Hzw0DEu+XeK+BN2Qs6c+vquil30XnVq/Xw+FwwOPxwOl0cuohmUxCo9FgdHQUIyMjCIVC\nmJ+fv3S6J7Hrqw3RuUge0nVT5IX2olKpREdHB8bHx+Hz+aBWq9HV1QWv18u9BChsrdVq+ZmbTCZo\nNBpu8EOpov39fSwvL2NhYQHhcPhcTp70g8j9kabMRK6HWq2Gx+PBzMwMp/oGBgZgMpmY+xWNRvm7\nT05OcHBwAABwOp0cqSNPPp1Oo7W1FUajEdlsFj09PVAqlYjH49je3kYymUQ0GkUwGEShUGA5Wu/e\neK9K/iqwWCwYGxuD1+uF0+lEIpGATqfjAyi1tjs6Os6VVbyLvNFFAolCxdSsx2q1wuv1wmazoaWl\nBWazmUv+0uk0dnZ2MDc3h/n5eZ6rbLfbcXBwAIVCwbWuJpMJg4OD8Hq9WF1drWnlVSoV3mhiwx0x\nTC+G8ek91UDhKpvNhlu3buEf/uEf0NnZCQAIBAKYn5/H//zP/6BcLnN3QFKQJycnODo6QjKZ5OiD\nmJOneyWum3K7zVYFEDHHYrFgZGQEo6OjcLlcXCOcSqWwtLSEH3/8kYmZdrsdAwMDGBsb47KkYrGI\n09NTHB4e4uXLl3j27Bmi0ShyuRxHIiqV810PSYCJ95SMD0rXTE9P4/PPP8fNmze5jFIcsQu8rs4o\nFArnSkBbW1thNpthNBoxOjqKtrY2jmSRkiwWi9jY2IDRaITBYOB+DBRdqRVhEO9/PSCvOZPJIJPJ\noKOjAxaLBe3t7bDb7dBqtUxUBN7sQUpn0J5Wq9VQKF43g2k2ty7Wyddj1FOYnsK8H330EZLJJAKB\nAFZWVrC1tYVQKAS3243u7m44nU4uT+vt7WXuydjYGKxWKxP0KpUKe3DVSlbFtUkZ1+9CPokK3ul0\
nYnBwED6fD263G8FgEIlEAvl8HiaTiT1Lv9+P9fX1S3k/9KzEhkh0nReBoi4ajeacYUdVUx6PBzdv\n3oRGo4Fer4fRaGSODKWr6H20p0qlEuLxOBKJBGKxGFKpFFZWVvD48WNOgYq8GfFMVtsvomwkYp3b\n7ca9e/fQ3d0Ni8UChUKBSCSChYUFPH36FCsrK8jlcnzPaKaCx+OBxWKBWq3mSrBsNst7MBqNwuVy\nQa/XY39/HwsLC7xmcf9I7/OF97iuV70DVLPqG4HRaGTriUqhcrkc/H4/ksnkufyKQqHgEjUSNLQh\nr/Mg0SYRhaYo0Lu6uuDxeGA2m+FwONDT08PCt6Ojg0Ndu7u7ePLkCb777jusrq5yh7p8Pg/gtWes\nUqkwMDAAnU73FmGtGmoJD+nP63kmHR0dcDgc+Ku/+it8+umnsNlsiMfj8Pv9+P3vf4+FhQWEQiEO\nPxJTme7/4eEh576rPYdaqRDpGusBvb6jowM6nY6VMBkclPLY3t5GNBrF6ekpWltbOU9G303Rh0gk\ngng8jqWlJWxsbLC1Xq3bG3mnVBopDo4hoa5Wq5m7QL0caH9SyimXy2Fvbw9ra2scLqR7SZEHpVIJ\nl8sFh8MBu92OcDiMSCSCZDKJZDKJVCqFSqUCi8XCyl0MrYr7Qupli3urlgFJf0jYUvSjvb0d/f39\nyGQyLKzIyKO0Eb3W5XLBYDBw5KZSqXDIsqHw5P+mQMQmMZe93uFw4Oc//znsdjuHRpeWlpg0StwY\n8uDpuXZ3d2N0dBQffvgh1Go1Dg8PYbfbMT4+jo6ODgQCAeaeUL76opAwRYzoeq+rooT2nEajwdjY\nGHw+H/r7+9HT0wODwcBd8DQaDaenaB3E9L4IlJOnfH69eXyVSsW8pFQqxV4tEU0zmQzm5+dxdHTE\nEVpKO5EBTcauyWTCxsYGR42i0Sin01KpFFKp1FvkawrXq9Xqc+uuFlEUz0VbWxv0ej3a29uRTCbx\n1VdfYXl5GYFAAOl0Grlc7q3KIZI5dJ8TiQQ7L8QJWF9fx97eHhu6REyWRoP+T+TkmwUJBZPJhP7+\nfuh0OpTLZWQyGfYOyeMCzhPvdDodcrkcC1Kx7vk61kV/k0VOG8hisaC/vx9DQ0Pw+XzQ6/UcDqaS\nIxLa+XwegUAAq6ur2NraQjQaZUOhUqkgm80iHo9jf3+flQPVd5vNZlae1XCZgrzod7RGnU6HwcFB\n9j6Hh4dxenqKlZUVfP311/j222+xs7ODcrl8rrKB+ADUyriaMKgWHhNDwVeJ/tC+0el00Ol0aG9v\nx+HhIYrFIodjqbSRFLBOp+OOYRT2XllZwe7uLoLBINLp9FvGnDTMLe1OJoZMKeR4cHCAZDKJUCjE\nnIl4PH4u6rG3t4etra1zz0JU8mREOhwOWK1WxGIxZuWScUH7rqenh9M3tRQhKQUx3C0aL1IhLqYm\n6DWlUgnZbBZarRZOp5OVAHnwFF0joe7xeOBwOFAoFBCLxdDa2opoNFp3c5Z6BZ/4DIi8Ozk5ienp\naZyenmJ+fh7Pnj3DysrKuT2ayWTeOuc7OzvIZDJMilWpVDAajejs7EQ0GuVUS70cIJHYJe6bZkH7\nXq1Ww+FwoK+vDz/72c8wNDQEk8nExhAZIET2bG1t5T1UTwSNjOd6iZ30/K1WK3p7ezE8PIxgMAjg\nTU+RcrmMUCiEaDSKQqGAXC6HVCrF+1mlUnHUgeri9/f3USgUcHh4yMYtefuirBcNVzoDl80Ortoa\ntwAAIABJREFUEc817dlSqYRQKMS8GDKmqz0HSgUcHBxApVIxeZPWUi6XWQZdxxwTwntT8s0unixG\ns9mM3t5edHR0IJfLcUiN8rv0WuC1ULRarbBarRwKFTfkVW+kVCC2trbyzzs7OzE9PY2//uu/xujo\nKLq7u5krQENzSqUSt64Nh8MIBALY29vj0DF5wmq1Gi6XCxaLBel0Gvl8HkqlEl6vF4lEAnNzcygW\ni1WVvBhGroaL7gFZrhaLBQMDA/jbv/1b/OVf/iXPH4hEIvj+++/xm9/8hpvGiEqarF6Hw4FsNsvh\nVGn+uNa/6QA2S9Ikj7hSqXCpHHnXVNolhkup7Ka3txcTExNQq9VIp9MoFouIRCLY3NxkQ1L0cKXl\nkZXKm5pqMWIh9ivIZrPw+/1M+gOA7e1tPH36FPl8nktryOuupmwpikBKi8LF5BmSkTUwMMDtnynn\nd3x8XNVTps+jKBmdFxJG0tanYrSmra2NDdbd3V0cHR2x90traWtrw8HBAdRqNd+/0dFRuN1uFAoF\n7O3toa2tDYVC4dyshXog5uSl10T3hTxFnU6H2dlZzM7Owufz4cmTJ3j06BH29vaqkkFFo65SqSCd\nTiMUCmF7exttbW1wOp1M+Do4OOB9IlUi0s+lM0H54WrVAY3KKdF7dzgcmJ2dxb1793Dnzh04HA4c\nHh5ylUsmk0E8HueUCQCsrq5ic3OzrpQJpbHqNcQpiuV2u+Hz+eDz+dDS0sLlqWJ6jowQUQaIxhC1\nM29ra0MqlUIkEuFKGIqgXbQuek6XGZP03IhXdXZ2hng8jp2dHQ67X/Z+ci4pxSCVafVyMRqKbNX9\nymtGMyUuAKDRaODxeNDb2wu9Xs8CjXLURqORDydtbsqjUY/7RCKBaDSKeDzOyvI6SlaoFIpY0Eql\nEtPT0/j4448xNTXFa6BcEoUkVSoVjo6OkE6n4ff7sbe3h3w+z6xequcky9flcqGrq4tzrNTIZWBg\ngMuppGFB0XtspGaZvACv14uRkRH2eCwWC0qlEnZ3d/Htt99iaWmJQ1R07VSONDQ0BLfbDbvdjng8\njmAwyDnAYrFYswWpKAyb7aZFB6tYLCKRSGBvbw8DAwNwu92sdCj/ZzabMTAwgKGhIXg8Hvh8Pjgc\nDhwdHbHSEjkE0pxqtbWT4iOuhBiup9x5JBJBpVLB1tYW8vk83yNquEOGghgWF79HhEj0I4+f7pte\nr4fL5YJCoWCDuNY9Jc5Fe3v7OWOiUqmwxyfW7BMJcGxsDOPj4zAYDBzpaG9vh8FgYEOKvNvj42N+\nn9PpRH9/PwwGAwqFArq7u9Hb24tCoYBkMsl57XpAw1zEKAXdd0ohUPMpm83G57JYLCIajWJ3d7dm\npUq179LpdHC73ejp6YHNZkOhUODvAcDzJqRGmgj6mUajQWtrKysnqnqRRrakkPJpdDodrFYrbty4\nAa/XyyQ7MvRo35NsAV730SiVSigWi2hpacHq6iqCwSAbs5fdj0YiKWq1Gp2dnbhz5w4mJibYAaD9\nJu57qVcrkvUUCgUTlcnhIKOYDIPL9k09PCQCRagqlQqMRiNHjy9LKYkRFfqMautqRHnXLcPr/sRr\nBnkyjdaxarVajI6OwuPxQKPRsOIiYUpW9NnZGYf0JyYmoNFo0N7eDoVCgXg8zmVqp6en58L79a6j\n2g0mwU0lGmq1GtPT07hz5w56e3u5jpSawAQCASbCUM5v
aWmJm20Q2/jw8JDLvxwOB5xOJ7q7u9l6\nbW9vh81mQ09PD7a2trCzs1M1bCYq+Xqvk4TY6OgoZmdncfPmTWYe53I5bGxs4KuvvoLf7z9HdhIJ\nPpOTk8yfCAaDXJ4m5rNF0l2te9tIHoogKvl4PI6NjQ0MDAxgenqaS2G0Wi0MBgOXwnz88cfo6emB\nTqcDAPZ2EokEzy0Qw9hSISGNYqhUqnOeJf2OmPoUFi2Xyywwqn1WPUpO9DYo103ngnqYZzIZri+u\nFUYmti+FJIE35Cry8MXQtdlsRn9/P8bGxjA4OMgKhCorKpUKlxVRyRL9bHR0FLdv3+bzTGkUj8eD\nFy9eYGVlpa4WteLeo3SMNKVAglav17OCV6vVHHYNhUKIx+OXygNSNLTHh4aG0P+/Ja6kGAFwB7p6\nQsFk8IvKV5wrT8+gWtRFoXhN7KVwe1dXF3w+H/7mb/4GN2/e5CgmnX/qW0HVEPSsyIilFuG0V8Tv\novVWW8NlZ5ReQ82/ZmZmMDU1hfb2dmxtbXFjM2n0Q/oZYlfFXC6Hg4MD5HI55n00QtgU00yXgWTJ\n8fEx2tracHx8zGRoMsjENdLZpQg05f5Jjvwp8F473lUjK10EhUIBg8GAiYkJDAwMcAewcDiMH374\nAfPz89yogoTC8vIywuEwBgcHMTAwwPlIstrpQTQL8b1iuK2rqwv9/f1wu93MLM7n88hkMgiFQvD7\n/Zibm0M6nWYFQN4mNTahJjB2ux19fX3weDxwu91M3KA2jiRQKCQqzpAW72sjvQHI2yRFKLKzz85e\nD9KIRCIIhULMdBXfS6mK8fFxjIyMoLOzk4U9EbMomiPNyUtB72k2J0+fWyqVsLGxgcHBQRweHrJh\n2NPTw6Fyo9HITWcCgQB2dnawuLjIRLtEIvEWWVD0rqR/izk3sTcEheukeexqIbx6hYGo4Khaw+l0\nwuVyweVywefzQavVIhAIcBldLWFIxodYUiSy3cVIBbVinpqa4jKgYDCIly9fYm5ujvd3a2sre+xU\ndbG3t4eWlhaOuJnNZmg0GlQqr6tByAgTIyiXPWvyksR0EP2OCFtEkkun01hdXcXu7i6Oj4+Z93CR\n5yoSr8bHx3Hz5k309fXBbDajXC4jGo3C7/djbW2NyZz1pAXPzl7PS6AID4BzioPeX01eicTeoaEh\njIyMYGhoCFarlaNMFD1cXFzE5uYm9vf3ObXo9/sRjUY53UeGVq30gvR+UFrtoteLvA2SKSqVio3Q\nrq4u9Pb28jMQr1f8PMrXl0olzrlLQ+CXrVnEyclJXZEK+m7qQreysoKDgwN2vorFIlKpFIDXU0XJ\nSCaQHBOrm/4U+Emw6+sFhdvsdjuMRiMAIJlMwu/3Y35+Hmtra0gmk7zZyMqKxWIcNrTZbNDr9ejr\n60MoFGpYyVezVun/YliNlDOR67LZLEKhEDY2NrCzs4ONjQ1m74qTp0TPllijYm08kbEozEtkrXA4\nzOFysWmHuMbLlOlF1ypa1MSQj0QiiMViLFSJRKhSqWAwGNDb2wuv1wuTyYRK5Q1pMBqNniPDXCRM\npR5ys6B7G4lEsLe3x8TFlpYWOJ1OnJycQKvVorOzE6enp9zsZmlpicuoxHn21dIL1UAeCSlJ0UOR\neupXAQlQk8nEjZXcbjf6+vo4Z3l2doZYLIb9/X0kk8mahlOl8pq/kc/n39rbogFCPBGTyQS73Y6u\nri6OlK2srGB5eZk7wAFgA1GtVqNSqfAeOjw8REtLC6anpzE0NASbzcZGoE6ng8FgQDKZrJuARzn5\nasKeRkWTkQcAh4eHrExzuRynz8T3kmKnEDFNvZyYmMDQ0BAMBgMqlQrzg9bW1hAOh3FwcFB3p79K\npcJnm7y/arJJjCABb1JjZMhNT0+jv78fNpuNB9QA4HLRxcVFJvXa7XZYrVZWluTNi/02LkoPSdd0\nEcRUFckKsbc+9Ywgzgvxe+iPlBUv1qDXY4zUQqPvOTs7QyKRwOLiIqxWK2w2G4aHh6HVahGNRrmR\nFUVaK5UKIpEIlwzTPvtT4b0p+Y6OjreETL0H4eTkhMlr29vbePHiBVZXVxEKhc51tgPebKzd3V0m\naYyMjGB8fBwbGxscwqoXokdGpDjaXOKhJGsTeO0V5fN5PH/+HA8fPsT+/j6XdohhXHEDi97w/fv3\nYbfbcXx8jM3NTRQKBZhMJuzt7WF9fR1WqxWlUgl+vx/ZbLamJ08/ayQ1cXp6yh29SDEUCgUmoaXT\n6XNEJiI5DgwMwOfzweVyoVQqIRaLIRgMYm1tDWtra5z/u8yiFUmN9Yasa+Hs7AwHBwcIhUJYXl6G\n1+vlkcQGg4HDzPl8Hg8fPsTc3BzW1tbODQJqFCTMxOYzUmF0XRa9QqFAd3c3Pv30U26N3NnZyUJl\nfn4ey8vL2NnZQSKRuPCzyCChe15rjVTpotfr0dLSwgzozc1NxGKxtwQzNR6KRCLIZDLcqMTv9yOd\nTuPs7IzD9kqlEgaDgT2ies8pKQ7p86J1lMtl7gUuph8oOmYymbgNr/iZFEFTKpXw+XyYmprC8PAw\nR6kODw8RDAaxtLSE1dVVTgM2YlTTOqmEUiTxVnu9QqFgRn93dze6u7s5mhIKhfD8+XOsr68z61wc\nrETXRA4Q9YKgz63X2xRD9bXOiFTB03hcUQd0dnZibGwM4XAYbW1tSCaTyGazyGazSKfT5+rbgTcR\nzGaVu7gehULRUKQzmUxibm4OP//5zzE8PMxdSBOJBG7cuIGhoSHmZJyenuLrr7/G3Nwch/bJ8bwK\n6nUO3puSr1b7dxnE0CblulZXV/H8+XPEYrGq7E7KT5KV6PP5uAkNzXVvFFIPTOrdK5VK5gNYrVYe\nNEPM0VgsxjmxWgeptbWV+2dTiiEej2N3dxexWAwrKyuIx+MIh8Pc4SmZTFbNSYn540buNwkbCoVR\nr+729nYO6ykUr5uYnJyccP91CslarVYAwObmJoLBICKRCOf5RMZ8Peu6DkVInmg6ncba2ho3T9Lp\ndEyIXF9fx+LiIubm5rCxsYFMJtPwPhVBguOyUOZV0draCr1eD4vFwl6my+XiXOfjx495oFE2m+XG\nOhet5yIhSjlgvV4Ps9mM1tZWZDIZRCIR5i+QESd6blQSSIN+SNFS2o2m/RGznnrDi8b1ZZEfes61\nXiumHsrlMgwGA5xOJ7xeL8sEihyIBD5qYnJ6eorOzk709PSgvb2d99T6+jqePn2KtbU1NmAaVfBi\n7l1qkFdzXihyZjQauWpClA2BQADRaJR7tdPnUWWBz+fDhx9+iEwmg42NDTx9+vStxiv1gPbIRfdc\nvPeUVlldXYXBYMDU1BQ3CaMSP5KRBwcH3G8gFotx/4frOE+NykV67eHhIescgs1mg8fjgc1m4372\n1GPjxo0bsFgsyOfz+PHHH7G5uXmlGQeN4L0peTH0Vu8Npj7ENBe9XC7D7/djcXGRG/dLQdaw1+vF\n7du3MTMzw93
O9Hr9uW5k9aCWN0nGBxEs7HY7BgcHYbfbodPpWCEWi0Vu5lDrQBCTfnx8HDMzMzCb\nzQiHwwgGgxwKTKVS50J7IqQWtTTsLVqAF3EixBAzpT+oWQUxhqldLFnn1OSHiHbHx8fY2NjA6uoq\nUqnUW92b6lHuoiF11dA2eZOBQACDg4MAcM7Y29rawsOHD+H3+7n64Sog5S5ebyM8lHpAhDLyenU6\nHfR6PdRqNTKZDFZWVvC73/2OO5qJnmK1vSxVOLVALHWx/PPly5ec6yVWtjjMgxoQiSWBIieFIjzU\n+4I4KmSsX5bfFvPDl0WI6PdWq5Ub2lC3MYoq0B4uFoswm81M6KNrisfjODg4QDabxdLSEn7/+98j\nHo9zPrtRSLkepOylqTfgfD8HSskcHR1x6jIQCNRMyVAqZHJyEl9++SUymQyePHmC7e3thvd9vVEp\n2v9kNBUKBbx8+RI6nQ4DAwMAXsv4mZkZlu8KxesmMdR5j+T92dkZD3BpFlLnpxFFTw2bqNHOwcEB\nT9+k0lEyhI+OjuB0OuF0Opk38OOPP/JY9GZR73rfK7teZGPXk89xu92YmprC2NgYXC4XE5guqs+k\n9pp3797Fn/3Zn8HpdDKxh3KXu7u7NXu+V1uHGK4XBTblxuiB2u32c01v+vv74fV6mb1ay0MkJvSH\nH36I2dlZKJVKbG9v47vvvsPW1haXFFXrFlerRz8ZH7RB6ffS10kNLyJyAWBhB4C9NJ1Oh+7ublQq\nr7vKeb1e9PX1obu7G+l0Gtvb28jlctzxTIxe1Ou9k9FTLbXRCCi0SUqQxuIaDAaOKmSzWQQCgbfS\nPs18l5jHFRnz4muuy7On0CMNzDg5OcHOzg5evHiBJ0+eIBwOMyuYnjk1XKkl5GqdS/ouGj60urrK\nUQESvCcnJ1w3Tuxy8uSpbFK6b/v7+zE+Po6uri4kEgkkk8lzBiEZDLVKLun/Yki4lvdLlQcajQbd\n3d1cq00sdIvFgkqlwjlVMlioIyIpT+L4kOefy+Xq5g5UA91HaSUGXbv4rOhc9PX1YWJigjkOa2tr\niMViF0YSNBoNz5uw2Wxc8jg2NoZMJtNQOFkk1NWaEimm2xQKBbP2z87OoNVquamXyWTitsEGgwEA\nuITXZrNxCe/S0hL+8Ic/YGdn5y2CZb2g+0jhenJo6nkfyfDHjx9jdXUVp6en7DgCb3gSZNz6fD6M\nj4/jxo0bmJ2dxeHhIf7rv/4Lz58/b2rtjeAn0Qynns3U0tKC4eFh3L59G263GyqViklf0hpFkf3a\n09MDn8+HGzduYHBwkBUWKclm51xXi0LQwRStaxqJ2Nrais7OTvT29iIajbJQEEFCqKenh6MONpsN\nL168wMuXL/Hq1Sse1FIrzC3+TFTktXJ60muQ/o4Or1iGRqH709NTFuQGgwEWiwVer/fcHPnd3V3E\n4/FzPaMbtZrF9TQDEkIajQb9/1vmNTY2Brfbfc47IyGdSqWamlJY67vp+2sp06sqeVF50eCjcrmM\ncDiMFy9ecPiYlJ7oWYu590ZB94saHFF+mwhV5OWLzUzE+mdx/Uqlkoc0GQwGBINB7O3tcc1zoz3s\npZEfqScspv7Iq6SKAzL4yOsSQ+jkhRLfhkp5qXsffWazz7SaoSW9JlKWbW1tPBmRuibSaN/LGtPo\n9XoMDQ3B5XJxa2ziVTTKUbro/9Wuj8Ld4hkrFArcTtfhcMBms8FsNvNeIoPMZrNxSopq44PB4LWw\n1RuJFNJe3t3dZa+d3ieeLapQIseBZgV89tlnWF5extLSUtM6qF6815x8vYqehMD09DQ++ugjmM1m\nbvVJBBqpkqd8071793D//n2MjIwweYdIH+vr63j16lXNNrDV1lEtFEhChBQhEY4cDgfUajVbigaD\ngRWgSqViVr34+W1tbfjggw/wy1/+Eh6PB9FoFH/4wx/w9OlTRKPRS3vu0+9Ej54EGgks6ftFRSoe\nUrG0i2prxRIbajZEM5RJaapUKlQqFezs7OD7779HIBBoevwmcL6bVqOeL10TVVZ89NFHmJ2dxeTk\nJFwuFzccInKgtOf0VaBQKM5NQaToFYCaCr/Z76G0lNFo5IEj2WwW+/v7iMfjHEmg11Enu4t6JtRa\nGykZ6nUgfS2dV2q7Syxo8b1SL57WTqkfihBQ1Yj4nnr2kdgZTVyXeCbovK6uriKfz3N9ezabxeHh\nIfL5PM/BOD095V4bRNTT6/X48ssvuU+92WyGzWbjlEOjEMlu4lrpmkV+A3nxRqMRe3t7WFlZYYfn\nssZBCsXrkboTExNwOp3c6TGXy2FlZaVhUhitUSyLvsyQpfRfPB5nPgDV+VNZJskSvV4Pr9cLn8+H\n4eFhmEwmlu1U6tnofAMRjToetP5arxebBykUCpRKJbx48QLlchmzs7OYnp7mFs46nQ75fL7u3iXN\n4CcxT76eUL1CoeCwztnZGQKBAB4+fMjdwoA3OXvqluXz+TA5OYn+/n5+3+npKVZXV/HDDz9wr+FG\nvbZq66WflctlLv2Q5vtJOFBbUFG5igQam80Gk8nEgogOYSNhQDIspMQdac5Peg3SfB/VsALgfvNU\nYiO24yWFT3ng1dXVc419rkIwuSikfBk6OjpgMBi4nnl2dhaDg4NwOBxsjFB98sbGBveevi7L+jIv\n/rq+gwQjsb/JsyOPic4QKXjamxd5XpfxNWpdi0h+o5bClykdGipD3SLT6TQ2Nzd5QEcjkT+651LZ\nQmsmL4zKUWlEbiQSYYPy+PiYO1OSYU3Nqcg7MxgMGB4ehsPh4Pnl1AirUYj7RPpz+kOpgo6ODj6r\n+Xye+7rXq6zE/SL2GqHW4NQzopG1i1HDevY4rZMiQKVS6Vy3RirPpL+DwSBSqRTa2towOjrK80Bc\nLlfdjWyqrVt8b6Nn86LXS2VssVhEIBDAf/zHf6BQKODzzz+HwWCA2Wy+dMLfVfFe29pS/uayjUEP\ngyzdUqmE7e1tfPPNNwiHw/x+vV6P3t5e3Lx5k1np1P6WWI5ETPmXf/kXrn1uJPdEh+siK45ynwTR\nAymXyygWi0zMEXOpFosFPT09nA+kQ+1wOKDX6xtap1TJN5oeEQUv1TWTYUKNMijMSVPQqENcLBbD\nw4cPsbi4eG2lIvWuWwqtVou+vj7cv38ff/7nf46enh4ea0mG0+npKVKpFBMarwNiCF3MVb4LJS8q\nHequl0qleLiIGM2hXDQpoosUfb0CWwpR0dTjdYvRBSLsZbNZ7O7ucgSo3u8V73utyA+V4NJri8Ui\nkskkdnZ2zgln6ftEkpdSqUShUMDOzg43qero6GhaydPzEQ1xug76vHK5DK1WC51Ox4aKSMCt93s0\nGs05g5CIbeTFNxLZFCMjQHNnVFT49LliJzmFQoFAIIDj42N0d3ejp6cHTqeTx+E2u0/Fc0HyttnP\nqgfJZBK/+c1voFQq8dFHH3ETnUQiUfc9F1HvWt/rFDppPWutBSuVSmg0
Gm7QkcvlEA6HEY/HUSqV\neCNMTEzg7/7u77i3u81mY2uVpnutrKyc8zAbQS0PWPoacbaxKOxisRhevXqFVCrFoTdqs0lRh1u3\nbmFiYgJ9fX1obW1FqVSCy+WC0WhsyEom4S560I2Ey0lxENknlUpxBQMxszs7O9HX14f+/n5YLBac\nnJzA7/fj0aNH+Oabb7C9vX2lAyMqoEYVJB3g3t5efPnll7hz5w4LYvLgjo+P2Uuj6gVin18HyCi9\niF1/Hd9Fe4i6uG1sbCAQCDD5ir6LlAb1ktfpdAgGg29VejSSm7wM9XxGR0cHTCYT97MvFotcH90o\nqYqURT09y2lt1aJMl6377OyMlSONjlar1ZiZmUEikUAikag7Qgm8ifLRzykcr1arYbVaed96PB70\n9/cjGo1iZ2cHr169auhMGwwGfPHFF/jss89w48YNHlTzzTff4OHDh3WnGUQFL6YU6HcipKmcar8T\n91s1h6RSqbCDRPKn2VbX0uugc0jRLdFguk6FTymKTCaDvb09uFwu3L59m1Nrja673mt/b0qe6ksv\nOmgEEkzUGIIegkql4jncZrMZN2/exCeffMKtMKk0iiy0g4MDvHr1imfOvwvPinK/xKgXLdyDgwNE\no1GUSiUOl9ntdrhcLszMzODGjRu4efMmOjs72Vo3GAw83IV67VcqFQ7lVxNkolUs/Xe9XjyxQylk\nRmVN5XKZow79/f0YGRnBwMAALBYLTk9fj5ylWuGrlIeIa2kGRAicmprCvXv3MDQ0BKPRyKRBSiHQ\nvaHw7HVa89XWLnqb9Ayb9ULob61WC6PRyNGeWCx2boKhSIajXLLFYuHZ8tTlTNwf4trflWdDoMFR\nGo0Gh4eHPNWr2QqHRoynZq+NlE4oFOKBUkajEcPDw3j+/Dmn5RqJvokesRiRMBqNcDqdsFgsGB0d\nxcDAAILBINRqNUcjL/tspVIJm82GwcFB3L9/H/fu3UNfXx9aWlqQSqWwsrKC9fX1psPGYkUD3R/p\nXqoWkRN/dtG5Ew1UIjg20nP+srWLaUrp+sjLv6pRXqm8aeR2dnaGnp4eHB4eQqfT1SxnvQ68NyVv\nNBrfYlvXEnj0gLVaLcxmM05OTrhXdjAYhEKhwO3bt/Hhhx/CbrfzRhDZjWdnr7t+ra+vM8Gm2Qd2\nURMRo9GIW7ducZtDCpuLrWaJZDQ4OIi7d+/i5z//Ofr7+5mlTuunpjper5dZsESqqZXbr2UNS0ty\nLgOFfw0GAzo6Ori2uVwuo7OzE0NDQ7h16xampqbQ1dUFjUaDfD6PlZUVvHr16kpEmGpoVPHqdDp8\n9NFHuH//PiYmJqDX69mzplnTVEqnVCq5FSURJa/DAKRnTeFH8edi+qkRop8oMOkzLBYLXC4XzGYz\ns9kpFLi/v3+OV0GTDKn0i4hiFNUQlbs4YvZdwmw2w+fzwWKx4ODgAN9//z1WVlaa+l4xYtFM9Ic+\no97voi5yOp3uXBMUappzGRdFdHKIwwCAI4G0Jrfbjc8//xy9vb0sEyqVCr799ltks9kLlTPxfWZm\nZvDFF19gdnYWfX19TP6lITWNlv6JcpuY/kT8k957eiak6ESjhn52mYLX6XTo6upi502Uqc0Y5rR2\nmvlBClj8TJKzjQ69uQh6vR4ej4erU2jyYCNRq0bW8d6UPJXFEC5aNIU5KKyoVqsxPDyMX/ziF2x9\n3rhxg4fWUCMFKtsqFovY3d3F0tISnj17hnA43JRVJlqb1d5LvcNnZmbQ399/bgoRAPT29uKzzz7D\nyMgIjo6OYDKZMD09jYmJCU4tiCS7crmM3d1dPHr0CMvLy+eGg4gevajYL9rsjQo8MpTK5TLn8Mrl\nMvek93g8cDqd0Ov1OD4+RjKZRCAQQCQSubYStGbWTn3zb926hcnJSRiNRhbI0WgUsVgMsViMJ81R\nhIW6wV2XUqPnBLzd5KRSeTOX+qL6YuBNvlY0GKm7oF6vx/DwMAYGBtDV1QWj0cgto2k6FwkoEman\np28md1FDDunzumoYtF60tLTA7XZzD4uDgwMsLi5yv/VmIJ6Pd4mWlhZ2PMhwotHPNFnyokiNmLoh\nY4/+TcYVdVaLRqPnplFSwy0ywmsZ1ZSy+vjjj3H37l188MEH3LsgEAhgYWEBjx8/5smVjUCUN7VI\n1FLZJEKUpbXuETlpRqORhxhRK14xStXMfqX3iPNC6HmpVCqoVCo2+sWUZbPDstRqNTcKM5vNyOVy\n15JyuAzvTcmL+fDLSqOo8QC1wqTyuMHBQbx8+RLZbBYejwcmk4m9o2KxiHA4zH8ePnw6YLpaAAAS\nRUlEQVSI+fl57O/v82SwZnDR+0jJj4+Po6en561NPDg4CL1ezyMR8/k8e/BUZ6tQvC65KBQKyGaz\nWFhYwK9+9Stsbm7yukVrWEowukig1HPN0tdQ7rqrq4uJdQMDA/B6vXA6nVzylEgksLu7i/39fe5Q\ndh2oRxBIYTab4fV6MT09Da/Xy3yMTCbDgzmKxSKsVitOT08Ri8WwtbWF5eVl7nR2XZByM8TrISEF\nnDcIpNdPSp36LVAVCY2OHR4eRl9fH/M2aOJhJBJBKpXi1AR5DMVikZV8JpNBoVA4p+SlHu27IiOJ\nvIlPP/0UOp2OR8sGg8GmP7eZUkvpui57L63dYrGgq6sLVqsVFosFra2trORp1sNFoO8hJSmNnJyd\nnWF/fx8mkwnr6+vo6uridrp6vR52u53HE0t5H1QdMzk5iX/8x3/E4OAgTCYTSqUSAoEAnj9/jl//\n+tf47//+76YcHvHf5AxIpxRK/0jTQBdFcOk+k+FEExWpjTfwJk1A1RT1QlwHRUdFIh6ltYi7otfr\nsbW1hXK5zKmwRqBQvC7rnp6exujoKDQaDVKpFAqFwjs3SN/7gJqL+rcTaAMtLy+jr68Pn3zyCfcF\nplpDanVLs5hTqRTm5+extbWFcDjMefirNB64LJxHm4SiCaLXfXJywlPltFotbDYbjo+PYbFYmO1M\nBsr+/j5evXqFlZUVHp9LhBMCHZh68+2NhiLJEyTGMBH5zGYz+vr64PP5WKmUy2X88Y9/xL/9279d\nmWwnhTiroB4oFAoWfiqVituoplIp7O/vY21tDa2trZiamoLL5UJbWxvS6TS3Ir3uJji1iEa1BJ/0\nM8RxwzSAhMaHWq1W9PT0wGQyQavVAgAbW1tbWwgGg+fGhVJrZYoSUWWEKLBofaIB8q4gGjBUipnP\n59+K8jX6mbU63tX7fkK194q5WyrvUqlUfA4ODw+5L7zoYdZrYFfzhCn3/+DBA1gsFng8HqhUKng8\nHvzyl7/Ey5cvsb6+ztyKQqEAm82Gnp4ejIyMYHJykqOc0WgUv/vd7/DkyRNsbGxgc3PzSgYR3RMi\nEYtth6WEaul+r+c7qRcA7XuSpQrF6/JB2tONQtzn4jpJ4RaLRY4gkFdvMplQKBTYiWxkf3V0dMDh\ncODOnTsYGBhAoVDAo0eP8Nv
f/hbRaPTcGazXyPzJE+9E4X3ZBZHnurGxgfn5eQwODp4jYXR0dHAe\nqFQqIRKJcL90v9+Pvb09Hm96lXrtWjefNjnVPVIZjTjFi+rdKfxDoVOVSsX3Ip/PY39/H0tLS5if\nn8fz58/h9/uRSqXOdd4S78u7At1LyudS20li03d3d0Oj0TBbdHFxEQ8ePKi7PbAU1e6tmMerZ+OT\nFd7Z2Qmv18t5eDImaY69xWLBwMAAN6IIhUIIhUJvdWJ7VxCNPwDnytlEUEieOBCTk5Po7e2F1Wrl\n0bgk+I6OjpDNZhGNRrG3t4dQKMQldLT/SKFTbbg4BVC8t2Le/12GEqlqhsidhULh3LS+ZlHLwKrn\nfeLfwPkzJoZWDQYDK1IaDEODdkg2SdN1l6Ha60j2pdNpLC8vY2xsjOdvWK1W3Lt3D93d3RgaGuLK\no1wuh87OTvT392N0dBQmkwnFYhHb29vY2NjAf/7nf+LZs2eIx+PXkmcWUw7iH+k1Sf99kXFLf9Pw\nII/Hg66uLq72IYOK+pFQiXSjkBog9O9SqcT9JsiQMxgMLD+oKRgRV6sZpmQIqlQq9PT0YGJiAv39\n/Tg7O8P8/DwePXqEx48fn5sVUO+eJZ1Tj7x6rwNqxNDOZTg7O+MRoT6fD8VikWdk0wOnNpSPHz/G\nxsYGgNdNNShE/64IRFTu4vF44PP5YDAYOM8KvB32pL7NXV1dAMCDGHZ2dvDv//7v2NjYQDQaxdbW\nFmKx2JV6YTcLEpJEdvF4PDzFz+PxcAtMqi8PBoNs5TbzPdVAHlk9e4Q2fUdHB4aGhnDnzh1YrVYu\nSzKZTGhvb4fb7YZareaWmel0GhsbG9je3r42BV8tJCkFCW9R2EmZwnTvR0dHcffuXXzyySfn+h8c\nHR3h8PAQuVwOiUSCUxGZTIaJkqISp3CqmPapZlgR6iGOXQXt7e1wOByc8qFWstd5ThtVsmLdtOiN\nAjiX/+3r68PMzAxGRkYwOjoKlUr1Vl73Oo0katizubmJubk5KJVKuN1u6HQ6jI+PY3h4mGUhVe+Q\nZ721tYWFhQX88MMPeP78OQ8qui4imUgevWiuAKFa2F58TmLYvLOzk0eDi3KH+ltQ3T9Fpeq9HimX\nQBrNorA8lVofHh6ir68Pw8PDsNlsODs7Qzab5aE5wWDwXBifFLzNZkN3dzc++eQTTE1NoVKp4Icf\nfsDc3BxevHiBaDR64Z6v5fyQbKzHsHlvSl6lUgEAMwovejj0u1wuh+3tbTx48ADb29uw2+08k5j6\nN6dSKayvryMej0OlUnHOUTx8zUI8uOLnqFQqmM1mbmRDFrJSqUQkEuEyiVKphGw2y/X95MV0dHTg\n4OAA29vb+OGHH7g5QjqdvnaWer3Q6XTwer0YHBzE4OAgvF4v3G43urq6eLToyckJgsEgvvrqK/j9\n/oYVgnjQawlDunYxFFjrc+jZaLVamEwmbmNM4VPKaVPjFWqCQjXl10m4qwdiyJdIjtSspqOjA8PD\nw/D5fOjt7cXo6CiPHKY6fxLesVgM8Xicp/wRn0MMndZS6NU8eHF973rviVUC9Gyuqhjb29uhVqtR\nKBTqki+0DlF40v2iKEtLSwtUKhUbjU6nk58P9V+gZjkqlepcC9x6h56IEQgxtE3/JyLut99+i1gs\nxrPj6XwCQCQSwcLCApLJJCunWCyG7e1trK+vY3d398pysBbESEe96QkRdK0U2iZDampqCv39/ejs\n7OS0lFgr3wzou6VVR2IalIxi+j/tAZ/Pxzyq0dFRHoFLukalUnF7XlqfXq/nUdsbGxtYWVlhLkU1\n1CMb68V7U/I0Ca7eWlgKoRCxy2azoauri/sWW61WpNNpBAKBqs1HrjtPLApNaljhcDjQ1taGjY0N\nVoJPnz5FMplET08PN/EhTyubzUKn08FoNPKcdbrWi8JZ7xoKhQImkwmTk5OYnJzEyMgIPB4PLBYL\n594oL7W1tYXf/va32Nraavg7xL+l/xY9KVIA4s9rfQ69hxQh9Whva2s710oZeB3lCQaDCIVCVybc\nXZTKqaZM6T5SrwfyhIgxbzQa8fHHH+PmzZvQaDTMMaAQ3cnJCcLhML7++mse5KLRaHBycsKM+lqM\nZ/EeV/s/pZbonr8rUNUMCVIybq5S+0xKwmg0nqtJBupT9PRcaJ+R4lIqlcxkPzs741A9hc2pl8Tp\n6SlPOlSpVNx/oB7Q9wFv+p8Db5RnpVJBIBBAIpHA8+fPYTKZYLfb8Rd/8RdcQbKwsIB//ud/xvLy\nMsLhcNP3sVGQwVqvoV8rgkSlclTPf+vWLfh8Pg7J0wRAeg+lvZott5RGWcW10fMoFApMVi2VSpiZ\nmYHD4cDg4CDu3LmDSqXCY7hjsRjMZjOMRiNaWloQDAaxvLyM9fV1PH36FMvLy4hGo1WjntXkvRgN\najZd+96UfDqdbpgER0KBhJc42YxGPFZrxnAdCl5qZQNvwqvlchmpVAqPHz/G0tLSuRpoGg7i9/u5\nYxOVA1IDlmw2i0KhUHWTXQeqKZpar6PDqlQqefhDW1sb15BTuDeTyeDHH3/Ed999h1Ao1PBAjloK\nRgrRkq5VoiMq+JOTE3z//fdMGrTb7ejr60MymUQ8HkehUOB9VywWkUgkriVUX21dl0WPRDa1yNEg\nbodWq+UBHWJr4VgshgcPHuCPf/wjXr16hYODAxwdHbHgoHK8qxgtlA54V14ffQeFWYE397AZtrT0\ncylXSx52PSFk0QCksy3WTdPa2trakEwmMTc3h2g0CpfLBZfLxYTZhYUF+P1+lnH1OhmiNymCngE9\nU2r5mslkEIlEkE6n8eDBAwCviZdra2sNd1C7CmjdV+W0KJVK6PV6Hifb3d3NBnA8Hkc0GsX+/j6S\nySRSqRRSqRS2trZ4wEuz+5Sefa20Gv2c+s//+te/xtLSEkZHR9Hb24vOzk5uc5xMJpFIJHi8NM1J\nicViSCQSPOmw2vdVc+wuSvf95JU8KbVGFksbnQ5xLpfj9zfaorYZiMpEfBjHx8fIZrNIJBINC9hm\nplU1g3qiAmIurFwuI5lM4uzsjMdAHh8f4/DwEIlEAnt7e/j666+xtLTEwqwZXPbs6T5eVm8svmZ1\ndRWBQACVSgU2mw1DQ0OIRCIIhUKsEEkB0366bkVWz74mhUyEQNrf5IHSVDEKHxN7emtrC48ePcLT\np0+xv7//lkdSzStpdO3Xkd66DEQCFBuyHB4eXjmKJUaAyGCo5zzS/RfD9KICIFJda2srwuEwisUi\ndnZ24HA4ePLf8vIydnZ2OBTb6P2r5ZzQz0n2idjb22voO64LUoKjGDVp9vNaW1u5p34ul8Pe3h7i\n8TiCwSB2dnbg9/sRj8eRTqfPDcu6jgFYtX5HoF4gT548wc7ODtbX18+la8R++xTp9Pv9ePbsGZME\nL7s/tRS9iEZSIsB77l0PNO+tNmIcvAvQ4QfeRBgaNVp+ShDzkWdnZ0ilUlheXmYWMYXw
c7kcAoEA\ntx5Np9PvlJxVr7ISUxwUrgbAKRKKnEjbU77r53VZFIXC6yQcqAJDDPtR97STkxOEQiG8fPkSa2tr\nPPxHqgyq/bvZtV/H51yESqWCdDqN9fV1nJ6e8ozwqyoL2sdin4J6rkM0cMTnRgz3XC4HAKzsKfqi\n1WqZBCl2j2vk3l2k4H/KqJYyawYUkV1YWMDe3h7MZjMUitejwUWOgViPT+ei2f3S6FppL8XjceTz\neayvr3OKiZwGkdR3eHjIhLx611iPrCOOQD14b0r+/6IyFKdEAdcfVr9uiDk98gbrabNJm1NUQCcn\nJ1Cr1SgWi0z0Iuv0T3X99QppkRVOA0/+1LhMuYvKQ8rIPjo6wtHREdbX1zkMSDnXUCiE9fV1JBKJ\nC5s6XeWZXCcr/CJQeiUWi2FpaQnZbBZ+vx+FQqGpz6N1U4RAHEDU6LrEv+nf1dqOUqSAeCri82x0\n7RfNo/gpQcqjoajHdaS8KIJIXSgpclGtraxoVDS735shl9JeoGhyo7gO47mRM/relPxPfSNLQZv5\nT9Eu8zogWtfEJq91CEWFRKkGqvmlYS6pVIrzpDSoRhRI101uFNf2f9EgpP1SLdQmFU7V5p6fnZ1x\nj4S9vT2YTCao1WokEgmEw+GGRrA2CpFT8q69+HK5zIzwra0t7O7uclSjEdB9pTNK3IvrvIZqn0Pn\nRYwQNfN9opL/KUL6PKTVCADOlYQ2ozilBlqjHnCjECM+fypcV9Tj/0QJXb0WlJhrvOzn1ymQpNaq\n6A2/K4V2VYgbSFQmYo9naehS+lrpdZEAOzo6OmcIVCqVc6zfehjM9WxwqaUuTvGTCtB3/eyrfWat\n30n3p7hfgMbG/NL9PDo64oqLRCKBjo4OZvlSWPAyRq70Oy+6JnHt9KcaZ+G6w/iJRAIvXrxAJpNB\nLpfjULn47KXfV03p0JqlBgrt0ctynY2i1v6rZ19edM/fp1ErlSHA+TxxLbkI4FKPvh7ZIL62kftQ\n7RzUc0arEarfFaSyttoaq73nos+qB+9NyVcTFJc9KOmmq3aIr/tBiRbrVeoy/1QQN5Ko5EUBWO3+\nXaQU6NCK7xGVPAll8T2Xra/Wa6UCme73VdjWjUAqxC7bn7U+Q/QSxPtT76GmfDK1z2xra+PhMsQt\nkAon6dqlqHZGanlookEoGnDXFcYX92gul+M8qzh9rdbwpVoKRzQMqn0ffY74s+uQFxfdl3q+o9qZ\nfZ+otpZqzgDdb+meuaoRKI1q1bvmWp9R6/Xi+t81pHv2Kjrrp7RXZMiQIUOGDBkyZMiQIUOGDBky\nZMiQIUOGDBkyZMiQIUOGDBkyZMiQIUOGDBkyZMiQIUOGDBkyZMiQIUOGDBkyZMiQIUOGDBkyZMiQ\nIUOGDBkyZMiQIUOGDBkyZMiQIUOGDBkyZMiQIUOGDBkyZMiQIUOGDBkyZMiQIUOGDBkyZMiQIUOG\nDBkyZMiQIUOGDBkyZMiQIUOGDBkyZMiQIUOGDBkyZMiQIUOGDBkyZMiQIUOGDBkyZMiQIUOGDBky\nZMiQIUOGDBkyZMiQIUOGDBkyZMiQIUOGDBkyZMiQIUOGDBkyZMiQIUOGDBkyZMiQIUOGDBkyZMj4\n/xn/D1agP5mLCnnXAAAAAElFTkSuQmCC\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Current loss: -2.486658\n",
+ "Current MNIST score: 6.905512\n",
+ "Current Frechet distance: 30.577679\n",
+ "Training step: 800\n",
+ "Time since start: 4.515208 m\n",
+ "Steps per min: 177.178974\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAfkAAACHCAYAAAAV6SDhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsvXdvXNl9Pv5Mv9N7ZS8iKYoS1cvuStautnjX64I4Towk\ngAED+SNvIu8h7yEwEMBBNo6d+LvrRVbySlpJlqhCUhTFzhlyeu+c8vtD+RydGQ3JKZekFr95AELU\ncO6955x7zqcXoIsuuuiiiy666KKLLrrooosuuuiiiy666KKLLrrooosuuuiiiy666KKLLrrooosu\nuuiiiy666KKLLrrooosuuuiiiy666KKLLrrooosuuuiiiy666KKLLrrooosuuuiiiy666KKLLrro\noosuuuiiiy666KKLLrrooosuuuiiiy666KKLLrrooovvDyRH9WCNRlMtl8solUqoVCqoVqstXS+R\nSPa8RiaTQaVSwWQywWw2I5/PI5PJIBqNYmdnp63nSaVSKBQKKJVKFItFlEollMvlmu+1et/DgkQi\ngVKphEQiQblcRrlcRqVSeeM7tK78POhz/m/tzlMiebXlZDIZALB3v9v9aNxqtRrlchk7OzvY2dl5\nY+wEug+/P/bbKwcBqVQKqVTKxlS/brSeMpkMcrm85p3w3+HnRKDPG/1tL/DvcL9rab93+r4PG7Sm\nMpmMredue6UTyOVyKJVKWCwWKBQKZLNZpNNpZDKZtu9JYwd2Pxf1e4LfC/znrTyT9mq1WkW5XG55\nT0mlUshkMrbfac2lUimUSiUqlQq7L/1eP96jOKM8Tef3S6lUglQqfWNtd9tLR3E25HI5ZDIZCoXC\nvjxcfhgDagTawO0weLp+r40hk8lgMBhgs9ngcDiwtbWFRCLR9guh8fLEr92xHxVo09JGrV+/3Yg5\nP/f6jd8u6KDvdz9+TMTkm1n3+nkdNmh/8P9vRLCr1SoTOuv/vtu4O5nPfuemfrzfJwYPvF4bItoH\nNXa5XA6tVguLxQKJRIJUKoVSqdTRPflzttdZ3Ov/nTy3nXddfx0vqFarVRSLxTdofTPzOgzQmCqV\nCiQSSY3CSZ/xQlUzwvFhoZ6+7IUjZ/Ltol5ap3sCgEKhwMDAAD7//HOMjIxAr9fjf//3f/HgwQPk\ncrm2D2Ojg9CuVnUUIOmUNnb9uu0GiUQCuVzOricLRqfWl2a17Uqlgp2dHfbMt32deTQiDEQ8eG3h\noOdGz+GF1HorVKOxf5/WmqBQKKBWqwEAhULhQISVsbExXLp0CePj40gmk/jiiy+wsbGxq8DWCpq5\nvpFw3OwzJRIJVCoV9Ho9rFYrdDodotEoYrEYEolEy5YPOrv1TJzfY2+rwMgzy/0sirtdfxQg68nO\nzs6+3z0yJt8JeBNuLpdDsVhkkhcAWK1WTE5O4rPPPsOxY8egUCjg9XqxuLgIubz9KfOSH42D/9v3\nAbxpTiqV7kvoiSloNBooFAqUy2Xk8/l9r2sGpLU08z0yZ4u9zjzzo2eJ/YxGGjwx+KPYQ81aY74v\ne7oeMpmMudSkUmnTwmwrGBoawg9/+ENMTk4iGAxiZWUFhUIBqVSqYwZ/kJBKpdDpdHA6nRgaGoLb\n7YZGo8Hjx4+RyWRattTx3+ctbI2E27d1P30frbIymQwKhQL5fH7f7x65Jt+ONiiVSmE0GtHT0wOv\n14tIJML+LpPJcO7cOXz44YcYGhqCyWRCoVBAoVBAJpNBqVTq+BAepBnwINEusZPL5XA6nRAEAalU\nCrFYrKnNVY92zXT8mou57sRoSfCRy+Us1qLT5zSjEfDv4yD8xvXjIY3r+7p/mwXvAyaINV8SzFwu\nF6anp2GxWKDX6/F3f/d3qFarWFxcPJS1bfcZSqUSk5OTePfdd/Hxxx9DJpNha2sLs7OzbWnxAN7w\nxX/f9tZeLhJeEXkbrBG0/xQKBVQqFVKp1L7XHCmTbxa0yFKpFCqVCmazGRaLBRqNBnK5nE1cLpdD\no9Hg2LFjmJychMlkglQqRaFQQDweRywWa8q8sd9YZDIZYw71wVKHjXY0wd0Cuhp9TxAEmM1mTE9P\nw2AwYGNjA0tLS0gkEu0PukXUW1A6BZm6KCjTYDCgWCwiFAohmUx27FttBkfp3xOTUPFBmXTv+gCy\nZt8dBW/VCyPt+InJssdr8mKACKzBYIDD4YBKpYJMJsPw8DCcTicUCgXz7b6NEAQBZ8+exfXr13H2\n7Fk8f/4c6+vriMVizH/eKngm+X1FI+a+3+9igKxOSqUScrkc2WwWhUKhYVA0z3coALwZHLkmv9/m\n4M2pcrkcBoOBmeAzmUwNERAEARaLBT09PfB4PFAqldjZ2UEqlUIgEEAgEOiYgFPEKEWpFwqFPaO9\nDxK8b7dZkxNP+PZbd6lUCr1ej/7+fly7dg12ux0zMzNIpVJYW1sTeTa7Q0xNlw6LIAjo7+/HxMQE\nRkZGEAwGcevWLZaFISYa7fF6wnHQBPIg3AJ8hgDPnKVSKQRBQLX6KvCqGT81nW8+0pkyKdpl8mSd\nEVNoo/MvCAJUKlVNBoVCoYBGo0E2m0WxWGzr/ge9D9RqNS5duoRLly5Bp9PhyZMn+Ld/+zdsbGy0\nfcaOWrs9KBzGmZTL5TCZTNDr9dDpdNja2kIkEqlZU56xU2yUUqmEQqFo6jlHxuRJ0m/GJ6xUKqHR\naGAymeBwOOB0OpmPVqfTMRPryMgIzp49i8nJSdhsNigUCszPz+PLL7/E8vKyKGZYCuohaas+oOow\n/aqDg4MYGxuD1WpFIpHA/fv3EY/Hd7VWEDGmGIb6w1lvETEajRgfH8epU6fg8XhQLBYRiUSQy+VY\n2tdhCDf16Vztgpi7zWZDX18frly5gtOnT8NoNGJ1dRXb29vI5XKIxWIijv41dvPBH9SeISYHiB/Y\nR8xdo9HA7XZjdHQU4+PjbP+o1WoW5EkMO5/PIxaLIRKJMDM37y7JZDKIxWLw+XwIhUKIxWLIZDIo\nFAotj00qlWJnZ4edeTHmLZFIMDo6ir/927/Fhx9+yPZkNBrFgwcP4PP5oNFoWh5vo+ccxJ64ePEi\nPvzwQ0xNTaFSqeD58+dYXFyEz+dry/0GvGbwfFDvQWi7RqMRIyMjOHfuHO7fv4+ZmRlRnyNW1lAr\nz7t27RrOnTsHj8cDrVYLuVyOJ0+eYHFxkdHZcrkMq9UKg8EAlUqFSCSC1dVVCILw9jP5ZsxotPBy\nuZxFg+p0OqZFl0olKBQKaLVaVKtVDA8P47333sPw8DD0ej0qlQpWVlbw5ZdfYnNzU5RgMWIUxWKx\nxlTf6sHkTZoAWvKTEvE+duwYPvnkE/T29sLn82F7exuVSgXJZPINBsybU/cKYqNxqdVq2Gw29Pb2\nore3F7lcDn6/H16vF9lsFiqVCvl8/lCZfCfPonkZDAb09/fj9OnTuHLlCs6dO8dMu/39/QdiodhL\nk+9UaKFULoVCwWJPCoUCmy8RAjozr
ZyB3dw6vAav0+kwNDSEK1eu4P3334dKpaoxP5Kmu7Ozg2w2\nC5/Ph7W1NfT19cFqtbIsGYlEAq/Xi5cvX+Lhw4colUrIZDLI5XJtrw/RCDGYgUwmg8vlwuXLl/Gr\nX/0KfX19AF6tTTwex9OnT+H3+yEIQs25bhUHyWymp6fxi1/8AoODg4jH43jy5AmWlpZEEWqJ0R8E\nBEHA6Ogorl+/jp/+9KeoVqtYWVlBJpPp2P0KNHY5HSS0Wi1MJhPef/99/PSnP4Xb7Wbnl3hcLBZj\nQeU9PT0wm80olUpYWFiA1+tlefLN4MjN9ft9BwDy+TxKpRKy2Sy2tragUChQrVZZ0QLeRzYwMACN\nRsPMrqFQCMFgsG1JtR68L57M9O34i2nMwCtilMlkmjbxKZVKmEwmHD9+HNeuXYNOp4Ner8f09DTk\ncjk2NzcRj8ffIJBkttxL2qZ1LRaLyOfzWF5eRjAYRDqdRjKZRCwWQ7lchtFobHvuraD+AHYChUIB\nt9uNY8eOYWpqCj09PdDr9cwtQfEdYoNn6HwgT7ug6zUaDaxWKy5cuACPx4OVlRUsLy9jZWWFMVl6\n5wqFgmkGnT6bzgBZfOx2O3Q6HUqlEkwmEzt/UqmU7XHgVdaL0WiEw+GATqeDWq1m8REAEAgEmF+Y\nXGEkgLVCeHlBVizNUq1W49e//jV+9rOfwel0AngtROTzeSQSCRSLxQPbQ2LAYrGgv78fGo0GS0tL\nuHfvHjY3Nzu+L63vQdEBk8mEn//853j//fcxNDSEc+fOYX19HQ8fPkQoFOr4/vX05aAtsmNjY/j8\n88/xgx/8AAMDA5BKpUin04hEIggGg9jZ2cH58+chCAJyuRwEQUAmk8HMzAyLn/heaPLNgpgOHXr6\njAiNIAgwmUwwmUzMBCiVSpHNZuH1erG1tYV4PN6xCY1AxK0+N7QVSCQSaLVa9Pf3I5/PIxQKNU0Y\npFIpTCYTTp06hampKQwPD0Mmk6FSqeDixYsoFouIRqNIp9M11xHho98bgT+sZFql+5AbQKlUwuPx\nwGKxMGGArj2ogyFWABXNK5FIwO/3I5VKoVKpQKVSwWazYXx8HE+ePBFp1LuD97U1C9Kcq9UqBEFA\nX18fBgcHMTw8jLNnz0Kv1yObzSKTyUClUjHGm81mEQgEsL6+3hbz2eud0noGAgG8fPkSpVIJDocD\nDocDLpcLOp0OgiDUPJf8iXSGaL+GQiGEQiEWxFYqlZBOpxEMBlv2b5fLZWZp6zQoTCqVYnx8HFeu\nXMEnn3yC6elpSKVSRKNRBAIBpNNpPH/+nCkSvPXibQIF0RqNRkgkEiQSCczNzcHv93d8b34/i00D\nJBIJ1Go1jh8/jsnJSajVang8HvT392NhYQGRSKRjukCCZiOFRUz3llqtZm7Cjz/+GIODg5BIJFhb\nW8Py8jIWFxcxNzeHeDzO0pbJtZpIJDA/P88sqeVy+fvB5EmCamYhG5k76TADgM1mg9FoZIyXtNDN\nzU2kUilRzDrA62hIOsztaEak4UxOTmJrawt+v7+pNSB/p8vlwvvvv4+TJ09Cr9cDAFwuF959910E\nAgHMzMzUBEIRGjH5ehMVrV8ul0Mul6sh0OTf9Xg8mJiYQCQSQTweB7C3daBTiKHNV6tVFAoFrK+v\nI51OY2trCxaLBceOHYMgCHC73bh+/Tru3bsn4sj3H1OzoH1XqVRgsVjw/vvv47333sPZs2dhMBiw\nvb2NL7/8EiaTCadPn8bU1BTcbjeSySRu3ryJ1dXVtgLY9vrbzs4OwuEwHjx4gJcvX8JmszEL0y9+\n8Qu43W4olcoa61C5XGYBdjs7O4jH45iZmcGdO3dgMpngcrlw8uRJ2O125PN5FAoFJJPJlsZdbwVo\n1wxLwuXHH3+Mf/7nf4ZWq2UM3OfzMT/86uoqtra2UCgUGpZDPWrw1heKI0ilUlheXkY0Gj3q4e0L\nqVQKrVbL4jwEQYBOp4NSqRRF+Kf92IiGicnkLRYLPvjgA3z88cc4e/YsKpUK/H4//vznP+PmzZv4\n9ttvkc/nsbOzg1u3btUUHyNFl3e5NRtQeqQ++Wq1yrSTdiOoSbM5duwYRkZGYLPZIJFIEAwGMTMz\ng5WVFdH8cvQ8WuR2JXYikKlUCuVymZk399NYJBIJdDodPB4PJicn4Xa7a9aRgjdIoNlNgOIzFnYL\nBKv358vlcqbtnj9/HpOTk7Db7VhfX4fP50M6nUYqlYLP50MymWTvlP9pF2Lcg+6Tz+cRjUZRLBaZ\n66FUKqFQKCCXy6FSqbCgQrGCtRqNo9V7eDwenDp1Cj09PRgcHMTp06cxMDAAm82GWCyGaDQKh8PB\nzJlGoxHFYhHPnj1j8ShiCGL8HpFIJMxlRT70dDoNp9PJfO31Fefo/FSrVWSzWQSDQaRSKchkMlag\npbe3F4IgsHO9uLiIly9fIhwOM+tLs0JxuwxeKpVieHgYn3/+OT777DOYTCY232QyiWfPnuHLL79E\nKBRCNBqtGZcYabpiasV8BgTwyv1JlT/FNLEfhJBvMBhgtVqRz+cRDofZM8xmM7RaLZRKZcdnVRAE\nRjspW4rOi1h++lOnTuHy5cv48MMPMTk5CaVSiWw2i3g8joWFBRYb0SiWqtF+b+UsHxmTp5dDk6CJ\ntbrpyJ967NgxDA0NwWw2IxKJwO/3M/OGmFqmGH5VOsDFYpFFjjZT1EAmk6GnpwdjY2M4duwYLBYL\nG1O5XEY2m63Js6xn4CTNSyQSlmdJ2iFVsSOCTd+ldCFKXXz33XcxPT2N0dFRTE9PIxgMYn5+HpFI\nBOFwGLOzs9ja2kIul2OCC2U/7PYeeKEDeHMPiMnkKf6BGEwkEoFGo2GaJglMYu0ZnmDz92v23gqF\nAhaLBadOncIPf/hDnDx5EiMjIzAajdjZ2UEsFsPKygo2NzfhcrkwPDyMixcvIp/Ps8+3t7fbEnR3\nI3D186A9Q1kXZAGiPc5rkvQ5WeHIxG232zEwMIChoSHY7XZYrVb09PRgYmICCwsL+Pbbb7GwsIC1\ntTWWorbXfBrt/WYFA6lUCqvVirNnz+LXv/41RkdHmdUumUzi5cuXePToEb799luk02mUy2UWX0DP\na3cP8YKJWIyehHQaE53Ngw4wEwOCIECtViMej8Pv90OpVEImk8FqtUKv17MA4E7molKpoNPp2P8b\n7e92QcGvZ8+exaefforz58/DarWys5FIJLC2toatra2WSiLX0/e9cGRM3mw2syp0vDkPaJ7Rk2+b\nAu5Iiw8EAtjY2GCamZjSKplNgebLstaPmQqxTExMoFAoQKlUsrrRe10nCAI++eQT/PjHP4bT6WQM\nulqtIhQK4datW5idnUUymXyDCNJBVygUEAQBDocDHo8HTqcTuVwOz549Y9Gq2WwWEokEer0eQ0ND\nmJiYwOnTpzE+Po6+vj6YzWbodDpUq1W4XC4MDg4ywpFIJLC+vo6nT59idXWVVSRMJBLIZDINCR
9p\nfUSE+KYWNHaxgu+A14f44cOHsFgs+Ju/+RsMDAxAoVDAZDKJ8gw+Al0qlTLTGjGLZomS0+nEP/zD\nP+Ddd9/F8ePHYTQaIQgCSqUSZmdn8ac//Qk+nw8ymQw3btzA4OAgi3qnc/HixYsDKdDSaA40v2g0\nimw2yxq48IJOuVyGVCqF2WzG8ePHYbfbEQqF0N/fD6fTCY1GA4lEAqPRCIvFgt7eXkxOTuLu3bv4\n+uuvsbi4iO3t7YbruNc+aUYrk8lk0Ov1+PnPf47PPvuM1duoVqtIp9OYn5/Hb37zG9y7dw/RaJRl\n+BBdoJidXC7HslyaBZ9x027Wzm5zorLUZM0SOzOGxi52NcVkMgmfz4elpSXYbDYMDAzAarWit7cX\nJpMJKpWqY7pAbgwyifNxV6TwAO1p80ajEX19fThz5gxOnToFg8HAFFtKGY1Go2/UfGkGb70mT/4V\nWlQiQq1uPJPJBKfTCavVytqRUunVdDpdExgmBkhj4RumtAqyPoyNjQEAdDodNjY2sLW1tes1JpMJ\nQ0NDrA6ATqeraTRCaTzr6+ssG4FnklKpFGq1mhXxINOoVqtFJBJhzIhSspxOJ06fPo2JiQmMj4+z\naHSNRlOTulGtVmG1WmuimYeHh+FwOHDv3j2USiWWAplKpVhxFGLkFPRC6WClUolJtPXmVjGJR7Va\nxfr6OmZmZvDpp59CpVJBoVDAbrfDbrcjHA6LkpHRqXCi0+lw/vx5XLx4ETabDcViEeFwGI8ePcKd\nO3dw69YtJBIJuN1ufPrppzAYDCwlrVwus7iJwyhlS3EPwWAQDx48gEKhwPnz5xnzIw2XtEoKLlKp\nVCytSBAEpq1JJBLGnDKZDEwmE6xW676EvZHlZz9CTWdEr9ejp6cHFy9exPnz52EwGJjgsrW1hWfP\nnuHevXtYXl5m+0MikTB/PAW3qVQqZDKZlmN2dqsfIib98vv9CAQCoqQU8xBTECeQtru9vY1oNIqB\ngQEYDAa43W64XC4YjUaEw+GOnkGKDfnDaQ585kera0VK1cDAAG7cuIGTJ0/C6XRCLpfX0OxAIIBE\nIsHcWs2iFQXzSIvhUP1dYs7EPFsxrVmtVrjdbgiCUNOtrFQqIZlMIpvNikrcstksQqEQ0ul0275+\nIibDw8Mwm83o6+vD/fv3MT8/v+s1AwMDuHr1KgYHB6HX61ksA5lKk8kk1tfXEQ6HGxYAIQ1Fo9EA\neCU02Gw2bG1tYWNjg200mUwGs9mMixcv4p/+6Z/Q19cHnU5XU22JN6vz7gsyV9psNly+fBmRSAQv\nX75kz87lcsxvSyWGiaDzPtxsNlszd96PKybi8Ti2t7dZ5oVEIoHb7cb4+DjTdtoF77tuNzaB3CpU\nEUsikTBt8l/+5V8wMzPDrC5yuZyZLen3YDCI27dvY25uru14l1a+S+bsxcVFJJNJ7OzsYHx8/I10\nOZ7JU9QxjY8sW7wgmUgk8N1332FhYaHGX7rXmHmL0X6xM7R3Ke5kbGwM/f39sNlsNXn+CwsLePz4\nMfx+f016Kp2/crkMmUwGu90OtVrdVuQ6CTeN/Prt7n9y5VEFwaWlJSwtLYlaCZC3tonpZqAYh1gs\nxoRVStvs6+uD3W7HyspKR89JpVLsvJPiQXtCoVAwgbkVkFJ18uRJ/OpXv2JxKqTFl8tlBINBbG5u\nsp4qreKt1+SJ2eTzeeZHbrZXOEEikWBoaAjHjx9nlaYSiQRWV1extrbGBAaxgieAV4edXko7hFOp\nVOIHP/gBPv74YwwNDbHYBL1ez8rw1msgMpkMExMT+Oyzz9DX18e0ItKIi8Ui0uk0wuEw0ul0w0p8\nAJh/tFKpYHV1FclkEtFotCZwSKlUQqvVwmq1sv4AFMFK60kC2fLyMrLZLPR6PUtjVKlUjIifPHkS\nKpUKOzs7LLDt4cOHrFoVafkKhQJSqZRp+oflK6wXUOoZsxjgBSESypq9N1V5JOtJuVzGt99+i9//\n/vdYXFxEIpFg489ms9je3sb29jZUKhWA11YnsdJHm8HOzg7S6TR8Ph+CwSBL6SPtmy8LDYC5Z9Lp\nNKtVQJ/z1pbbt29jeXkZsVgMsVis6XXk3+Ve71Uul8NoNGJiYgLXrl2ryYXPZDIIh8NMcKl/NpnD\nJyYmMDU1BUEQsLW1haWlpZaK+fAR8Hv59Fvdm0qlEmazmVlPKRZHbK1bjEj3RvSaYjZcLhc7D9ls\nFuFwWJQUOmLgfMMo+j/9tMJDSDCfnJzE1NQULBYL6+AJAOl0GqFQCN999x1u3brFMpRaRbPC1JEx\nefJ5VCoVpFIpJmm2wuClUin6+/tZGlShUEAsFkMgEEAsFmNEkg4MafidlGPtJEpZpVLBarXixo0b\n+Oyzz9DT08NShNRqNVQq1RtVyVQqFYxGI6ampnD9+vUa4kilQovFIjKZDJLJJItDqAeZUkkTSiaT\nWFpaqtGMiMBQZHOlUmEWCyK+VJQoEongzp07iMVicDqdGBgYYFXMKD+aAgSJkFPA1MrKCjQaDRsn\nuWvqS/LW++TFBpVLJgsFf7jFQruavEQiYT51qvJYLBbx3Xff4Q9/+AOi0WjNOMmMH4lEYLPZoFKp\nRHdxNINKpcL2WSgUwtbWFvNXkwBI2jFZZ2gfGwyGmkJXOzs7eP78OW7fvo0HDx6wHOF2xrQfiBFS\nn3hyQRF9CoVC2NnZYWWtlUols3wZDAb09vbi8uXLuHbtGqLRKKrVKtMCW7Xe8GbiegZTv4+aubdG\no0F/fz+MRiOAV65SYvhigNfgeUtfq3uPj2MhgZisWTabDQ6HgykS5XIZgUCgJuK+XewWxFZvqSTs\n9TyKE+vp6cHly5dx/PhxtqeJtsXjcVaM6N69e22Nv5V3d2RMfnNzkx3uQqHQcm1t2gxEOKicJwCM\njIxAqVQiHA4jkUgglUohHo8jGo0iGAwiHo83Fc3eCK0crnr09/fjwoULOHHiBPPPUM9r3mRNG1wq\nlcLtduPGjRusCAdv7qHNx8cyNHr5dHiIaNG//H2AV4IXpXaEQiFsbm4il8shEAiwlr1erxcbGxvw\ner0IhUIolUqsFKPZbGYdAKkmPFknyK1gMpkwNjYGuVzOovo3NzfZfmi0trQeNBexGBcFdKnVamYu\n0+l0sNvtTBvuFO26c+RyOa5fv46/+qu/gsPhQDabZS4OsnzQu6ZSy2azGTabDVartSaN8rBB7+vF\nixf4zW9+gw8++ADnz5+Hy+V6g/GVy2UIggCn0wm1Wg0ALAI8Ho/j/v37uH37dstFceoj1PcjzMTk\nLRYLzGYzS6OUSCRM0KbmVydOnIBKpcL29jZsNhumpqbw6aefYnx8HDabDY8fP0apVGL1NJpNqSPh\npt79RSZkErTJNUIKy357zG634wc/+AEGBweZ9U/MtGIe5DrqNGWTaBZF11PmEL/vWyntuhd4Rl4f\nJAq8bkm9X1loemeDg4O4cOECzp07h76+PqZIkTLj8
/lw584dbG9vdySIv/WafDweZ4R/r/Sq3aBQ\nKFg5VwpCUygU0Ov1GBkZgcvlYgVdiGBEIhFsb2/D5/OxH6rb3Kr/sRWoVCoYDAaMj4/j0qVL6O/v\nh1arBfC6ApjFYoHNZmMaA0W3T0xM4IMPPsDY2FiNb7E++lapVMLtdtcwAz7Xk2/wwjN5fiNTgBxF\nE6dSKWxtbWFxcREKhYK5QjY3N+H3+5kgQkxHEARsbm6yAhv9/f2w2+2MoZPPmwQWCnihNq9kWq7f\nvK0EmbQCKvOpVqvZ80hwbFRM6DAgkUjgdDoxNjaGd955B9PT06xXQqFQgE6ng8vlYhYV4JWwMjU1\nhb6+PphMJqalabVaDA4OYnt7G16v91DnUa1Wsb29jbt372JkZAQTExM1hJTfdyqVilXGq1arLEsj\nEAhgaWkJy8vLzKLUDEhobmRFqX+ffFCqy+ViAbx8/AlV1aQSpJQZ4/f7YbVacfz4cdy4cQMWiwXV\n6quGNZOTk0ilUlhfX4ff72dxPPuBzglvxaJ/+TgGEhya8RWTJk+tt61WK6xWa03lzk7BM0j+/63c\nm84eVS2l/e50OuF0Otn4658r1tjr1xx4nQ6pVCqZUEWKFc+36P0olUoMDg7i5MmTNdYTvmrr8vIy\nHj58iGDH8pVSAAAgAElEQVQw2LEQ1AyOjMmTJNpKbiAPQRBgtVpht9thsViYqdlkMrGNT2X/SGMl\n0/jKygpmZ2fxxRdfYGZmhr24g5JsdTodxsfHcebMGabhEoOk+t+Dg4MYHR1lkrZKpcLw8DAuXLiA\nCxcuoKen5w2/MaFSqcBgMODs2bNQKpVYXl5GIpFgLhC+0lW9f5I3s1F6ndVqhdlshkwmQy6XY75e\nv9/PynfywX28ZE0xC7FYDNPT0zhx4gQrcUpzslqtLFedAl6y2SxWV1dZ7jShEYMXK8ZCq9XCbDaz\npiK0Hw+r8U496D2cOHEC//iP/4jx8fGaZkhErI8fPw6ZTMZ88j/84Q9x7do1DAwMQK/XM6HP4XDg\n2rVryGQy8Pl8bZlPCe2sdSaTwfb2NhNG6u9HGjQfq0DFZsLhMILBIMuSaSXwifYyr+USDWg0F7lc\nDp1Oh97eXlitVnYuyepAxNtsNmN8fByFQqEm3YpazJJWef78eRw7dgxXr17FzMwM7t69i5s3b7I0\nqb1Aa8QX3OItcTQeMgE3Wy+A7qdWqzE5OYnV1VWo1eqW0/z2Al8atl13ptlsZumsiUQCOp0Oo6Oj\nuHTpEiYmJlh0eifP4cHHQdQzed7CSe5FKpZF7kdSpsitQOXKR0dHmbBNAmc+n8f6+jrm5ubw9OnT\nPVOmmx17MzgyJl/vG2/1ZanVatjtdta9h4QFqVTKSh6SKXhpaQnpdBoSiYT5jC9duoRgMIhSqYSl\npaWmDmCrICKm1+tht9shk8mYz5QijkkijEQiCIVCqFQqGB0dxcmTJzE4OIjJyUmYTCYmsNDBJr/O\n6uoqHA4HqtUqRkdHIZfLYTAYMDc3B5/PV2Mqp+t56ROo1eL1ej0G/68mutVqxfDwMAqFAuLxOKuN\nzt+LR6VSYUFRqVQKXq8XT58+xfj4OCYmJjAxMcEifKnTnUKhQE9PDyukIpFIGhL1Rtp9J+9FIpHA\nbDbD7XZDpVKxNSKrC5lGOwFPNJodLwmq4+PjzAqSz+fh9XpZC0pBEJDP51lntIGBAfT09DCiQjEU\nJLgQMxKzKFQzoADNjY0NLC8vw+l0MvdavWBPmls2m0U6nWbFmYhItpJdQdaqepfWbspEpVJBPB7H\n48ePAbwKjKJulsDrLAc6LxQ0XC6XayxbVJuAzg5vrVtbW0MgEEA2m91TgKRx8gIJb76nWBx6frNC\nA9EBOutknaD6FZ2gXnFod49R0K/BYGBplyQ4UXYPHwzHC2JiKGmNlCDgdYodrSVZn/nga16bp3fE\n34sCUgOBAILBIGto1Ol4m8GRavKdEB0KblAoFEilUiw6nPzKFEF+584dfP311wiHwxAEARcvXsTV\nq1dx+fJlhEIhZDIZBAKBXQPWOoUgCCz6vFp9VbTG7XazfF8KFtza2kI4HGaR9D/60Y/g8Xhgt9sh\nCALbdOTHjkQiePLkCe7evYvLly9jaGgIg4OD0Gq10Gg0TPMGatOJ+M3LS4JkNjUajRgaGsLo6Cjs\ndjtMJhMMBgMWFhbw4sULdo9GqFarrChOMBjEixcvoNfrcenSJVQqFfT39zMiToGGcrkcDoeDaVqJ\nROKNlBjeFCjGOyK/oclkgsPhYFpfqVRi/m0xfH2EVvY4+YApGpq0gGAwiPv377M1A16dgYGBATgc\nDpZWSetINQcoLqXeknNQ4+dBgtP29jY2NjZw+vRpJnDwDIEYWLFYZO01q9UqS7GlDI1WnstHwe81\nd2KokUgE0WiU9bCnCpqN1oGEjlwuhydPnuDOnTuYnZ1Fb28vzp49y9JTyXIllUrxzTffQKvV7msl\nInM9fafepUbz4P31u4H36/MMid6BwWBgHc46Ba9dkzDRqimdD2qkQE0S/EhjBl6vicFgYBkZrQRt\n7zX2etcOBbySYkKCVn08E90HAHK5HFKpVE3cA6UxUrBgp1X6Wrn2rW41uxc8Hg+uXr0Kh8OBTCaD\nlZUVaLVa1ud5e3sb//7v/44HDx5gaWkJxWIRDoejhpC7XC54PB5RqiY1Ah0kap7T39+PyclJ9Pb2\nMqK8urqKP//5z6hWq5iYmGB+eIfDwfqEk4ZLLodMJoNoNAq1Wo2xsTE4nU7WgS+dTjM/d30wYyNT\nP0nA5XKZRf8PDQ0x028+n4dOp0N/fz96enoQCAT2Jbj8cxQKBYaGhtDf38/8WyT0kDsgn89DIpEw\nt0D9ge1UQ6iHXC5nQV7EUCgDYH19Hc+ePUMqlToSn3y5XEY6ncb29jaAVxWzNBoNhoeHcePGDWxt\nbSEajeIXv/gF08ocDgerNUFaGnV3W1pawtbWVlt1BsQyhfb392NsbIxFz5MAxRPXnZ0dZoGw2WyM\ngFssFqjV6pba5Nbvl/32Ds9IrVYrTpw4AbPZXEPQK5UKC4wFXsUUrays4Ouvv8bNmzeRSCRgMBiY\n2Z6eyVfCa3bs9b/zLgd+3VoF75aTSCSswmWjYLNWUa3WBg3SZ62cIblcDrlczuKK/H4/E8iJppEC\np1arcenSJVYoh1KH2xk3v66N6CX/N36d6udFa5DNZpFKpZBOp1lhJBKylpeX4fV6RbGsvfWafKcE\nhLROmUzG/CNURIPSdx4+fIhnz54xTcZkMtWk8uh0uprKcWKC/HTHjh3D9PQ0JicnMTY2hqGhIVZz\nGQBrjuLxeJhG1tvbC61Wy3oGUxEFXqKnQDuj0cj852QSisVizNzZaCPy4LVki8WC4eFhOJ1OaLVa\nVKtVpvGOjIxgbW0N8/PzrDY+f49G96eIbzInK5VKti46nY51IQsGg1hbW8PGxgbC4XBDYi7m+yGr\nBVW5A177
bMnSAIBpw/UH+qCYPxGcdDoNr9cLvV4Ps9kMhUIBl8sFtVoNo9GIRCIBu93OzMykIdJ8\nyMSfy+WYJn8UMQa0T3t7ezE0NASdTlcT7EVpf1T5joRCtVrNWmySJSmRSDS97vvt+Ubfp+9YLBaM\nj4/DYDCwv1EtCt7vTn3kKduEAkmr1Spr1qTT6dj5bsXv3YjR7/X7fvOSy+U1NIfy+p1OJ1KpFGNO\n9NOutYdnhI0Y/X6xNES/iJ6HQiHodDoUi0XWZ8LpdDI3qMvlgsvlYvu/k8ZAjRh7/f9bWZf6daB4\nsEgkgkwm07FSya/nfvjeavLkk6bqR1RPHXhlLkkkEkgkEiyIiwi7xWJhDIx+Ws2b34/YSCQSFjPw\n3nvv4YMPPmBNRcgUTPfweDy4fPkyYrEY6xNMLXOpiQzPiFUqFTP9u1wu1s2O+lsvLS3B7/fXRCPv\n5bPiA4sGBgZw9uxZaLVaZDIZFItFZomYnJyE1+tlAUTkT9otypN83gMDA8ztQKY1Ms1GIhGsra3h\n3r17uHv3LrxeLwsmq4eYmjxpMwaDgXUXI+JGJTOpAiCAN0oEdxqQtheISWxubmJgYIBpN1R7gPK3\nqTpgKpVCIBCA3+/H8PAwBEFgcySz71EweOB1wJLb7Wad5XhCH4/H8Ze//AVmsxknTpxgmrJSqWQB\nmFqtFhaLBX6/X5Qyw/uBykfrdDom+GUyGdYpj7RgynIgIYQE8kQiAZ/PB5VKhcHBQVb4pNmCRK0Q\n72YhCAI8Hg/0ej1zBzgcDrzzzjtQqVR48eIFkslk25XXgNfngjTURsGy+zH5eDxeU8O9UnnVVlml\nUmFpaQl2u51VjpNKpaxCqkajQSqVaqnwEI/dGHyrIFpKwjilEFMa8vb2NhNQxIgjeOs1+U4jpImJ\nOhwOFtADvCJu1IWOTFGkKTgcDoyOjsLlcgEA1tfX8eLFi5ZK3+63WSlYZGpqCh999BGuXr2KsbEx\nVvWo3mxnMBhYCU0yset0uppKZyThkp/H7/fj8ePHrIsdHY5isQi/349wOFxDVHghgQe5LSjHXavV\nMsYhCAI0Gg2zfPT09ODdd99FtVpFLBZjtZZJ6l5eXmb1vKmpg06ng8VigcVigdFoZD7mSqXCzGzP\nnz/H4uIivF7vG36s+nXfT7AiNKO5lUolxGIxBINBZhKuVCqw2+0YGxuD2+1GKpVCIpFg5ralpSUm\nhJB2xwuIjdpEtgLaq9FoFA8fPoTL5YLD4WAmP9o/lIGxtbWFFy9eMB8+X2iG5tlJ4adOz2h/fz8u\nXryIkZERaLXaGuGWxiWXy7G1tQW/34/+/n7mPlOr1bBarRgcHMTg4CBWV1fbqgXfKmj/E+0gog3U\nZuuQAMP3Oc/lcohEIshmsyxTgwTZaDR6JJ3fKpUKotEoZmZmYLfbmdvMbDbj1KlT0Ol0GB4eRjQa\nhc/nw8uXLxk9aQVkVid/PKWh8sIx+c4psLZ+Lerz/iUSCZLJJLNcud1ullVAwcA01k7b+3YCoq29\nvb04f/48Tp06BbfbzVrhknuQ6B8FY1N20UHjyJl8u9DpdOjr60NPTw/zX1MAnt/vh9frZRoPVaVy\nuVwYHR2Fw+FAuVzG0tIS5ubmWpYA92I4tJlPnz6NX/3qV7Db7TVtDIFayVEQBNhstpqId4VCwQgK\nRdL7fD54vV4Ui0U8f/4cX3zxBQqFAtRqNTKZDKRSKSwWC4rFIlKp1BuaAy9l03Mot91kMsHlcqFa\nrcLn87HgFyppK5PJYLPZYDabcebMGVbAiMztXq8X/+///T9EIhEkk0kmAJCAQHnQRAxJMPD7/VhY\nWMDm5ibrpbzXuu5ljWiWIfGxDcFgEF6vFyMjI2y97XY7xsfHYbFYWDc1auv6P//zP1hZWWGMIJPJ\nMNcFr33Uj4kf137CilQqRSwWw8OHD9HX1webzQan08msQIVCAfl8Hul0Gk+fPsU333yD8+fPszrr\n9QFP7WryrazpbhgaGsJPfvITtr68P5NSJxUKBTY3N/HkyRNcuXIF1WoVOp2OaccjIyNYXl7GvXv3\nIJVKD5zJp1Ip+Hw+eDwetp4UzEqaPc2DmBll8qRSKUQikZrCJ+FwGCsrKyzYar93IbaVqFp9Fez7\n7bffstoLAFgJX7fbjWQyiXg8jtnZWaYstMPk+WBVUhCq1WpNA6psNsuCnBuZxuvN4/F4HLFYDP39\n/RgeHmZrXSgUEIlEWHGzTks3d7rfZTIZBgcH8fnnn6Ovr48Vd1IqlVCr1ayeCwWG6/V6Vu77oAW/\nI2PyvFmnnUlShDkVUInFYtjY2MDGxgZWV1exvLyMnZ0ddgh7enpYrnk8HmcbmyfSYoA2Kvkj6wvY\nUJARESuS+KkIDKXdlMtl1hTjt7/9LVZWVhCJRFAqlZBIJLC5ucmCgUhzplKndD9eK25UMY4qi9F9\no9EoVlZWGJHTarU4e/YsTp8+zQ6rSqVihzYcDmNxcRG/+93vMD8/zywIRAg3NzdRLpfR398PqVSK\n06dPs0wB6j5G728/4rcX869f+/3uUy6XmQ+PUq3Ix0oWH6oZ0NfXx4jN2NgYCoUCXr58+UYk7m5u\ni/qx7zc+igsIhUL47//+bzx48IB1YiMmR75TKuGsUqlgt9tZQGd9/fN2GKMY58FgMGDw/xoq8elH\nuVwO8/PzePToEW7duoW1tTV2fhcWFvD3f//3GBsbg9FohMvlYv0aDprBA0A0GsWLFy8wPj5eI4xT\nnE+5XIbVakUymcTa2hpCoRATbEno4wsqkdDQqiIhdgyKIAhM0+b99GQeJ0sbuQ1bpcv1dI1vG069\n36kgUKv3pbiGYrHItOFSqYT5+Xm8ePGCxW+0Czov9QXGWrleqVQyRZLmS1o8WT/D4TBj/J24RlrF\nkdaur88zbAWZTAZbW1tIJpMwm81Ip9OsdR+Zk8lcolAoMDExgdHRUbbAgUCA9fEVm3gQUeCLZlDq\nG5+yxzP0SqXCJF/SFKPRKB48eIA//OEP2NzcZDm27RKAeoZYqVQYUyYhQyaTIZPJYHV1FTKZjNXD\n12g0MBgMMBqNLFguEAjg4cOHuH37NqLRaI3AJJFImHa+uLiI4eFhVscZAPOr8amU+5nj9/p7OwSJ\n9x3S9XxuLqXCkIYWi8WQy+WQTqeZP5CPKN4tlqAV0Psvl8usexwJivWpkIRnz57B6XRiamoKdru9\n5n57CR/NjqddaLVaFkRK98pkMgiFQnj58iVmZmZw584dhEIhVKtVbGxsoFQq4dNPP2VZD3wmxmEg\nkUhgY2OjRpPlY3f4evuZTIZlhhgMBng8HgwODsJqtbJyzvF4nHWta2Yt611rYjB7aphC8Rr1z6Ne\nFtSEp9387XprpCAILP6Fos530+J3Ay9E8+VlM5kMq0/QaYneTs8IuYNpr+r1elZFkwTyUCjEsgCo\neuVhCK3AETJ5Cj7gJ9rKi0okElhaWsL09DScTidLvzGZTIxAUEEQo
9GIK1euMB9UNBplFbXIvCZW\nxDQRg1Qqhe3tbSZBk3nJ6/Xi1q1biEQiGBsbg1KpBAAWqJFKpbC2tobZ2VnMzs7i5cuX2NjYYESi\nnTHypmL+M97EzNc6f/HiBctVXVtbw82bNzExMYG+vj5YLBY8evQI9+7dY6kiiURi13Q9yiemFBg6\nsCT80Dvai6jxhK/RYaR7trIeZAWhjAzysRYKBUSjUWg0GiwvL+Orr75CJBJhTZR4y0Mj8+Jua77b\nO6gfF/C6Mxv/nnZbG+AVk1cqlfjkk09q3DLlcpm5ftoxZ3Z6HtRqNWw2W00P7Wg0isXFRayvryMU\nCr2xL6rVKos/4Ku+UZZJM+buTsZNzZf48srkgqNYHqp4RkG8NpsNx44dw/Xr1/H++++z9FgS1KPR\nKPPT7zc2Pl2Wt3R2Mie1Wg232w2DwcDuRwII1bG/e/cu7t+/35EiQedYLpezWBxqubu+vs4YXKvz\nkUqlzNwtkUhYwyCxulby42ln/1BasF6vZ+dNJpPVdFf1+XxYWVnB0tISSxNuJ62V0IoQeKSaPBHa\ndiZLEciZTIaV8NRqtXC5XIzxRCIRKJVKFjFrtVpZF7Tbt29jc3OzJZ9IMxIf1c+n/GZiaMViEclk\nEqFQCJFIhKWLUTpZf38/KwP6/PlzPHr0CCsrK6IV6uEl4kaf83OkgBAiVKFQCLFYDC9evIBGo8Ha\n2hpWVlaaardbKpWwubmJ9fV1JBIJaDQaqNVqNudLly4hmUyyGIpG74LMYaRJ1ZvKaR6tgoqtAKgx\nY5ZKJTx69AjPnj3D/Pw80ul0R24dYryE/Yjcbsx/N2QyGfj9fiwtLcHj8TDztiAIrH7CQVR03A/1\nflrgdWopaXa8G4lv6FIoFNi7oTOs0WiaKgXaiRacSCSwtraGRCLBamqQBslXQdRoNPB4PPj8889Z\n853h4WHY7XZmiYvFYgiFQohGo01px+T/J2bJ17roBNTDwGw2v6FtU9nq5eVlbG5utm36phobVNp1\ndHQUCoUCkUiExV+0256bhBIq0U2uNd49cJTgNfZ6Fx61XV5aWmJuqd3Sm1t9ZrOKzZH65EmDqifY\nzYDM9blcjuVjkxRM6TfJZJKVKaVrNjc3MTs7iz/96U8Ih8NNm3rqCfVuhEShULCa+jabjTU2oDFR\nJ6V8Po+lpSXWGIOCqqLRKGZnZ/H8+XNWSvYwiTOvgZfLZRYkR01nWsXOzg68Xi/W1taQSqVgs9lY\nzfjx8XGoVCqsr6/j0aNHuwYmUYATxS/wZtN2UG8Z4P3W1KL19u3bePr0acs+RB7VarVhXWwxCDcP\nyq2fnZ1FT08PPB4PS7vzeDywWq0IBAINx3eQIO2dmBbFnDidTmY6FgSBWZGo4RSdFYpqJ2FFp9Pt\nyeQbCeGtzpGK3ITDYWSz2Rqmy0Oj0UCj0eCXv/wlew654yhtLhQKMfdhqVTat+ogZaQQzeDpYid7\nnTR56mtAgYEajQaZTAbr6+vw+XyIRqNtV45Tq9XweDwwGAwwm82YmJhAKpXCxsYGU6TanQPFDlEt\nCOovcVCd9FoFCSH1pcMpJmNxcZFZZHkXrhgWiGZwpHnyFFTRjoZEKRSUI2y1WlkwF2l+lAMNgPmZ\nv/rqK9y7dw/xeBy5XK7p5/OSfH2aB82HNBciCnxqVSaTYXXat7e3UalUcPr0aYTDYQQCAdaekZrA\ndOIb22sOnWY1tAqlUsm6kJG1hYQxjUYDl8uFkydPYnV1FfPz86yfAA9iWHzDHaD9/Fb6/vPnz3Hz\n5k0MDw+jp6cHwCsLkd/vZ3nynfr6KCaEXEJUI4AvuyoG4vE4vv76a+j1ekxNTUGr1cLhcODHP/4x\nSqUS6yRIFjTSloGDZfb1e45qVZw7dw4GgwHvvPMOs5RQA57BwUEWXV8sFiGRvOoER4GsezEMfm+0\n69oqFov4y1/+ArfbjStXrjAzdzPXUoncxcVFPH/+HEtLS8z/ut/75ucp1jmtVqtYX1/Hv/7rv6Jc\nLjMrDylZ2WyWCTSdaMVGoxGTk5OwWq0wmUzo7+9n7kmKS2h3n5FAQkLf+vo6njx5smtNjU7Q6v0k\nklfdQs+fP4/Tp0/DYrGw3gaJRIL1nAgGgyzVT6x0v2ZdC0feoKZdiYaKwMzNzcFgMDBJnw/2oMWm\nBjALCwu4efMmlpeXWZpZs4SWGPhe5iZi7NQYhwp9UFpUPp9n5jCn04ljx44hEAhgeXkZRqORRbqT\nACImE9jLp32QoCYqvb29jHBT2U/q7NXX14fR0VF4vV4WI8HPm8q38j7SvVLqmkG1+irQ68mTJ0gm\nk+z9klROvrROQY1/iFmRMMc3IBGjWE02m8Xc3BxGR0exubnJWoueOXOG9RAAXlvQePPpQTH5UqmE\nTCbDGhFRrEE2m4Xb7YbZbGaR0/l8nkV82+32miBU8o9TQxKiHfXg93cna7qzs4MnT55Ao9GgWCzC\naDSy+xKDpPocZDpOJpOIxWKIRCJ4+fIlFhcXMT8/j83NzZa0Y7IqNRuQ2gxCoRD+/Oc/Y2RkBCdO\nnIDT6WSxKKQsNUsLd7Ngkrme6DBlg/DujnbBC8vk/hS7U2Qn6ywIAoaHh1kLcYqBSSQSLOCOVyrF\ncDG0sp5HxuSpFnW7RIYCuv74xz/i22+/ZZXBqHgF37KQcqKpUQYFbLQa5cnXsW60KarVVw1a/H4/\n/vjHP2JmZoYFvABgDTAmJycxOTkJjUaD6elpnDlzBn6/H3Nzc6wDF91vP7TC6A6bwdMzKV+eDj5p\nPNXq67Q8YgSNanzXp7zxft5O/K9kVqX9IAgCjEYjHA4HYzKtzLPRfqC82Ea+Tr41Z6cg7TwQCODp\n06esEBGdBY/Hg1AohFwuxyxclNYlBiNpBHKpUbnmcrkMn8+Hubk59PX1wW63Q61Wsz1BkepkriY3\nEQWuAa/Mwrwvnx93vXm73TmR5cPv9+MPf/hDzZ6kwiaffvopfvaznzGf65MnT7C0tIT19XVks1lW\nHrmVmuqVSgWRSIT1dBDLHE21Nm7fvg2VSoWf/vSnGB0dZalozQYFAtjVkkJKF1+gKxAI4OXLl8hk\nMtBoNG3XlwdeK4VSqRRDQ0M4f/48vvnmm7buVQ+ysrUDUg7ISgaAZS3FYjEkk8kaQZbnI4eF720X\nOmIUoVAI4XCYfU5EQqfTYXNzk21kOnC0odtZ5L0YPI2JNPlgMIhoNIrt7W1WVIZ6qFNLRbvdznx2\nXq+X5ag3exhoYzbL6FoJ1hALCoUCvb296OnpqYm/4Fs1JhIJBAKBfdMZ6UARoek0kpoqx9G9ASAY\nDGJ1dZUxlWaw15pSPAYFcZFmT88X2zefzWYRDAaRy+WYRYuClSjHnqwUB6nFSyQS+P1+3LlzB6dO\nnUJPTw+LhKYCVmazmVneADCtnfza1P51ZmYGXq+3qb7y/Hya3R/8+aEznEgkkEwm33hH5AqUyWRI\nJBLI5XKIRqOslWwkEmHr
Su7IZkGuAsqOIKtBpxYXOm8+nw8zMzMYGhpiLoSVlRUW27Qf6ukNj0Kh\nwOgwdZPkK3F26oPmrQEUwCymO7NdoZAK2/T19cFqtaJUKjEBZ2VlBaurqyzoWOxYnGbxva1dX38v\n/neK5qxn/vRvu766Zsxn9DdiYhTwxPvsl5eXWcOc9fV13L9/n/V1psPXqnbeDIPnxy5mHu5eoC50\ng4ODNZIu+aaq1VdtVFdWVpBIJBoyPvJlk7WG8us7FVh4UzkRwuXlZTx48KCpKG4a214gDZ5aVZJf\nWSKRMEYrJsjCRQWSyAdPWhald3bSjKQZSCQSeL1e/OlPf2Lvq1AowOVy4dKlS0xYA17vQZVKhWQy\nCZ/Px37++Mc/4tGjRywegzd37uffbtb0XP9dYraNQOt7+/Zt3L59e9d77je+3UBpc8TgSaAVY58k\nEgmsrq7i4cOHCIfDqFarePr0KbM+7IX96GehUEAoFGLV6Kh3Bu19sczr9W62owaVlB4aGoLNZkMu\nl8Pm5iZevnyJ58+fsxbdzRT8OigcGZM/SC2iEeoPcSdoRVCo/x5FQq+trTFfHjWwaDV3sp0gESIg\n9Hsji4rY2iW1XeQFDD5fNxaLMW1iLwsJAGb+5rXwThCLxfCb3/wGc3Nz6O3txf3797G1tdV0TelW\nhDEy49EPANEruRmNRoyNjcFkMrHPqAHMixcvWNvZZv2v7ewFYgiRSATPnj1DLBZjTYouXboEp9PJ\nzJv8XqRqjlTG9i9/+UtNOlsrQvZ+32sU19GpZYh/frvX8uefhFqqo9/pcyhd7ptvvmHdN6lQTzMW\nEqIXjZDP5xEIBNh74ptQlUolqFSqts3U8Xgc//Ef/4FMJoP33nsPiUSiKatOs2j3nfExNgsLC9je\n3kYoFEIwGGSl1X0+X8cV+TrF97bV7FE8VywCQAciFAoxk167lo1OGH29f+0gtbp6okoMj9ovxmKx\nXZk2b0URO4AwlUrhm2++wcrKCkZGRrCwsCCqKZAnbDyTp0A8sUHphhS3QO4rKtRBjKRZotMqE+Qt\nRKlUigWhUl2EUqnEIsgphoYYfalUQjAYxP3793Hv3j3cu3evIWPYbQytunA6Dd48CPBMvt7q1imo\nqVI8Hq8JJG6W9uz1HUr/pe9QWe5UKsWElXaRTqdx69YtVCoVCILA4iX4gkVHhWq1imQyiUePHqFS\nqdYLWXQAACAASURBVDC3EtVqaSXe4aBwZEz+/68gIltfZe+wNsFeBPIgxiGXy+F2u1kHKeA1s6MO\nXaFQaE//NO/DJ5MtT/g7GTNVYcvn89je3mYHUyzJm6KliZDyDPYgrFmRSASPHz9Gf38/hoaGALwy\n0y4uLiIej3fkJtvtOl7o4u9PpnWJRMJMuNTMiIRMypogEBMi7X2v57YzVn7M9Wtx1My+3iXI73ux\nx0d7Uaz7UnAs7W/KDiJa14m5ulgsYn19nf1Lzbo6qWEhBkiQJVdFtfq6JDXFvxx03Esz9+4y+SZB\n2i9pC0D7h+OwGftuz+2E4DcDWie+wQQFh1Fby8ePH2N9fX1PNwWlXtHB2S9XuhWQwEHpXmKluACo\nsVoArzXNRhX7xEI6ncbq6irLGiAffDvFQ/Yzdzf6vf56XrOjDok+n49dxwdU0Xq1G6BUL2zsN276\n3lFknTQCL3jQ+A+SSYgtNFCMCZ1Tfi6dPIunGdT3/rCLhO0GUjoopuFtGFM9uky+BVAe9UEzx4MC\nBfUAb6YbHQR432woFKpp2rG8vIyHDx+yLmS7EXbel81rOvXf6dQNQ1YCsVCfs92IkYi97hLJq5LE\nW1tbrHwytUDl6wx0+tzd3CWt3pvXJum+9Hm742omLbFeQHkbznIj18Fu++ZtA7/P683/rbh69gIx\n+mYb/bQCsZS2w0QrLsvvDZMX6zC2Gv1K3+fbEX4f0UjLOei5UPrPy5cvIQgCDAYDKxCxurqKtbU1\neL3ePXtX88SPfvgIZAAda95iEaJ67LbX6uckJqjB0TfffMP6HszNzTGGLxYOgrAdNLGsJ4qN4kSO\nQoDn42QauRDEjEHpBLudj0ZnlD4X0+oGdJ5R0eia7xv4/dIM7TsyJt9pqkmroOpLvF+42Vxh/pDx\nzOX7otXXB6vx4+bNpfQ7MX/+gNYTwmZRLBaxsLCAWCwGuVyOeDzOahuQ/7u+g91uc+CFk06raDW6\nv0QiqSnYwwdFdnLfRnNrxrTcDqgY04MHD7C6usryl8nH3axgWz82nnC/jagf715ry3+HP9dvA2hM\n/DlrZmw8XTrMzKW3RQBpB7wQQnjbaTmhlTU/srejVCqr9ead+kheQLxFFwQBWq0WFosF1eqrXHrK\nTW+GUfPtLvlWnnzA2Nu6QficW6BxtDrwqpoYVRSj/1MxF0oVaaXcLplPqemGRCJh+fF8AN1e68/n\nx5N5tz4CWQzzMz3HYDBAo9Ew82AsFmvL4lFP/Oo1HPqsmfu0Mj96Ll9Klsr0NrtWfG1zcmWIKczu\nR6A6Mdfzfn6evjQ7jv0ETbHG2ujeJGQCqDkb++0XelfUw7zTOvStjrs+U6deQNlv3x+2wkeop4li\nK231sSZizo0rlLQvD3+rzPUH6RtWKpVwOp24cOEC6/TWCrGlzcub7d9Wpr4beEm/Efg5ymQy1hmP\nBJlW50sMIh6Ptz1m3uLAa/xiC1bUyZA6aTVbBWyvce/2/1bG3M6aV6vVPV0gzT6zmf7tbxP4PdGo\nPPJu338b0Mi6UP+3/dCMRUxs1Avb7ex7XpA5TBy0cnaQ8yJa3QyOvOLdXgxHTFAHuM8++wyzs7N4\n8uRJy4y6vlxlu+VxDxs0ZiIgu6XOUJQ5FSrhu+lRoZ6jKGLEm+kPipBptVr09vbi+PHjMBqNiEQi\nHZedfZuYSDvgtTMx58ITP7G1nHqNTOxxA4dTLbJVQZDS1g6byQONA3pbxVGdFZ6efZ/OayvrfORM\nfi+IZcaRyWS4evUqPv74Y5w4cQLhcBh6vZ7Vsm+V0R90/mMzaGVtdtvEuwXiUX3unp4epNNpyOVy\nJJPJjjTbTnFQvj+5XA6TyYQLFy7gr//6r+FyuZDJZDA7O4vV1dW3Jvr6MFGvVR4EU6OOcnzrWzHT\nFxuNtdG7bOf9HuR+4CsAkjC0l7uBd8Xxrq+jrCjaDo7qnB0Ggz9q+vHWVryrl/T5z1pZNIVCAZ1O\nhwsXLuD69etwuVw1Pc1bAR8Qc1QafCNfb7MgibuRCY0+p05PSqUSDoeD+eWpH8BR4aAOiiAIGB8f\nx7Vr1/CTn/wEarUa6+vrMJlMHVXpEgtHSfx2C0zrZDzkCnI4HDCZTFAqlUin04hGo6yWgpgWMp5Z\n8v+n3w9DK28FLQVU/R+TpzghpVIJ4HXudivNcY4S9XvssN7FfkJUpziqWIN6HBkVayYdbTeG1ApM\nJhOGhoYwMjICh8PBGtdQp65W73mUmp1EImFmdBI0mq1axQsmjYLdeMJHfnmNRgMAL
M/6KMFrJ2Ku\nv9lsxi9/+Ut89NFHrJ63QqGARqNh3eIOE/z+4gPIDnvP1WeTNJOzvZfGSX9XqVQwGAy4fv06pqam\noFAo8Pz5c9y9exe5XA6ZTAbJZLLjet97Rc7zn/PugkbPO8hg4Eaor26337Mo9kAqlcJqtUIqlSIe\nj7NyqoeJTmgjH0Nx1FZSsXHUcznSFLr9JGkxTClDQ0P40Y9+hNHRUQDA+vo6Njc3kUgkWOvLVsbc\nyffa8VtRhLparYZOp4PH44FWq0WhUIDf74ff72/ad9xscQ3S6Hd2dmCxWKDX65HNZpHP50VtDNEs\naDxiHxa5XA6dTofh4WH09fVBoVCwSPRcLidqK8u9UG92VSgUEAQBTqcTOp0OyWQS0WgU4XD40Agg\nz/Qos4L+T4LHbpoKr2GSwGQwGGCz2eDxeNDb24tLly5hZGQEEokELpcLJpMJ8Xgc4XAYXq8XoVAI\n0Wj0jTalzdAEWkP6Hv/D0x2aY32mCW8y5+e3l79fLK2tVbpAlkqz2YyJiQlotVpEo1EsLi5ieXm5\n4/E0A95S0iqkUil0Oh2sVisrf5tMJlvKBukU+wU6diLcHTWDB45YkwdqGd5ejL5dnDhxAr/+9a9h\nMBgQiUTw9OlTLC8vt105qZHZr/7vfA43zY/mu1fgWyMQwbLZbBgcHMS7774Ll8uFaDSKu3fvIhqN\ntt3dqRFovMViEYlEAsPDwxgYGEAikWD9s4/C3yfmHAnETIHX2gOlVlJueafP3I9A0H5RKBTsx2Aw\nwGq14sqVKxgYGMDq6iqePn3KWvEexvpXq68yI+RyOQRBYGZ04LWwuFdKlFwuh0ajgU6ng9vtxujo\nKC5duoQzZ85gamqKpVSWSiUMDg7i8uXL8Pl8WFtbw9zcHGZnZzE/P1/TS4Ces5+QKZPJWJMeEkjI\n3y+TydhPsVisqalOggnf0ph//3yXSN6K1klEfCPQnthrnpTyqVKp4Ha7MT4+jqtXr8LtdiOZTOI/\n//M/sba2JlqPh/3QjkJGgorNZsP09DTrZ7GystJ0S+FOwAt8tE8I9dabVhj9YVh8WnnGkTF5QRBq\n+mnz0qAYEpzVasWZM2dw6dIlmM1mlMtlbG9v4+bNm5ifn2/LTM/3wKZ+3Y2iM/nx84RDpVJBpVJB\nrVYzIpTL5VjvZeB1DIHJZMLIyAjcbjc0Gg1MJhNsNhsGBgZQrVaxuLgIhULRtP+S1xZ5hrmbYEWf\nq9Vq2Gw2TE5OIp/PI5fLsX7k1WqVzYuvGcATQDFABA1oHFfQLqanp/Hhhx+it7cXcrkcpVIJc3Nz\n+N///V8Eg8EDj7uQSCTQaDQwGo0YGBhgP3a7nX1mNpsxPT2Nd955B9vb28zXSrX2U6kUFhcXWaMf\namHbSqe53cZGjI7eJS+8NrKsELE0Go2YnJzEiRMnMDU1BbPZDJvNBrfbDafTCb1ez64h95NcLkcu\nl0OpVIJer8fY2BiuXr2KBw8eYHFxkbUrproWe+0vEkxUKhUEQYBGo2FtnfmWzrSHaV7kojIajejp\n6UFPTw88Hg8T/vL5PHMnKJVKaDQaVKtVZmnLZDJIJBKIxWIIh8OsU1orZ2E/4i2VSiEIAvr7+zE+\nPo7p6Wn09fXB7Xajt7cXCoUC0WgUH3/8MZxOJ5aXl1Eul9HX14e5uTk8efLkQDJU9lN+6r9LLaM1\nGg0GBwdx5coVVvY6k8mwH4rXWFpaQjQaZXXixYoP4vc5b6lqtx6HmO7c/fL4m33WkTF5antJJnMx\npR+JRAKr1YobN27g7NmzEAQBm5ubePHiBR48eIC1tbW27sm3xeTNf/y4G42f9/MajUb09/fDZrNB\nEASk02nEYjEW5U8M3W6349y5c+jr64NMJmNakU6nQyQSAQBWb71ZYs4LKQB2ZfTEqKmVo9FoxPj4\nOMrlMiNkRCyp2A0x+Uqlwuql53I5ZnbrBBKJhO0XIsydHCaZTAalUonTp0/jo48+gsfjgVQqxc7O\nDtbW1vDkyRMkEgnRDutuUd1SqRRarRZutxuTk5M4c+YMzpw5A4/HA71eD0EQWEEb+iHmVCgUWPXA\n7777DjMzM1hbW0MgEGBd9dp1N9DYSLDiz+heNSKIyev1ekxMTODGjRv46KOPoNVqoVQqa64hAkrx\nJWTFMJlM6OnpgVKpRKVSgUKhQLlcxvr6OgsA3S+VkzR5YiIWiwWZTIYVYeL3Kj8vYqAmkwkTExM4\nffo0pqenIZVKmYBBVh6dTgeLxYJKpcKYfCKRQCAQgN/vx/r6OhYXF1kfgWbB7+3dLCRarRbj4+N4\n//338cknn8DtdkOr1QIAstks1Go1TCYTTpw4gQcPHqBYLOLYsWOMdkSjUWQyGdFdYK2cSXpHer0e\nLpcLo6OjcDgc0Ov1UKlUzJIYCoWwubkJi8WC7e1t1olua2tLlABN3rXEj413V7XC4Pl/W7VqSCQS\naLVaVsiqWq3WCKS5XI69s0bWo91wZEyeDlx9xTsxIJfLYbVace7cOQwPD6NcLuPWrVv4/e9/X9Pz\nuBXwhKCRT26336nVJgWzabVaXLhwAdPT0zAajcx3SO1WjUYjdnZ2kMvlIJVKsbq6iqWlJajVajid\nToyOjiKVSuHx48fwer0tzYUq9tHa74VcLoft7W0kk0nIZDK43W5YLBacO3cOxWIR+XwekUgEhUKB\nER61Ws1ati4sLGB+fh7Ly8sN16sVNBKw2gUxIY/Hg8nJSRw/fpwJKfl8HiqVChaLhUUqHxR4zdHh\ncMBsNkMQBORyOeTzeZhMJnagyXxMcy+Xy0wD1mq1MJvNeOedd5BIJPDtt9/it7/9LcLhcNtEkLdC\nUXob/7e9alvQGa6v1kbg3SLlchlKpRLFYhGZTAbxeByFQgEWi4Uxrb6+PrhcLszOziKZTDadvlqt\nvioIROb4XC7HTP/0Ga8R1QvvBoMBDocDdru9pgIiCU5kkQPAuuuRpa1cLiOdTiMcDuO//uu/8MUX\nXzS1/+sZCw8ao1KphF6vx+joKMbGxmC1WiEIAusTIZfLYbFYYDQaYbfb4XQ6EY/HEY/Hcf78eZhM\nJnz11VdYWFgQrVVrq0yNmFexWIRUKkWhUIDP54PVaoXFYmGCrU6nY3EHIyMjbH1+97vf4auvvkIo\nFBIlrZfOFO+O4l0xrcyNt3bxZ2A3/kDX0DzJnfX/tfelvW2d6dkXtZCiuFOkFlK7JVmLZcuO7TiZ\nLJNmnGQCTDvTKdIFKFC0aD/Mv+mnoijQDwXSoh/aAm2QmbSdcbaxE9uxJVmydlGiRFGkuIirNot8\nP/i97jykKYmk5DjzvucCDNtaDp/znOfc63Xfd3NzM+LxOObn56Xz5urqqjS5+p1Q8sUvayXhnuNQ\nU1OD69ev491330V/fz+MRiO2t7cxOTmJhw8fVn2wj7PsjrseDxDLWmpqatDZ2YmLFy/CYrGgtrYW\nh4eHsNls2N7exv7+PsLhMFZXV5FMJhEK
heDz+WCxWNDe3g7g6d6FQqGKupqVyqEeJXgYYYnH41hb\nW4PP50NXVxe6u7ths9kAPG2cs7GxgWw2i/r6ethsNpjNZhwcHCASiaC3txfd3d14/Pgx5ubmEAqF\nTs2YVu+lWkOttrYWHR0dePfdd3Hx4kXY7XYR4LFYTPoBnLZnvYpSAptfp0e+srKCvb09rK2tiXDu\n7OxEW1sbrFYrDAaDCESGjmk02u126HQ6UV6BQAD3798vu2XzSWtXn9lJCpYKfH19HY8fP5ZUmVrS\nxcgU/ySTSWxtbeHw8BB2u10UKIlYiUQC6XRaBPpJZ4jvHPeK+5zJZIRbUCrkSUVqNBrR2toKr9cL\nq9UKk8kkxjGjDjSyGLlqaGgQT5vvNA2Xzc1NLC8vIxKJnOg9H5dG4xrr6urgdrulxJUGoMrt4M82\nNTUhlUohGAzC7Xajt7cXJpMJv/3tb3Hnzp2C1NtZoNzr5HJPx0en02kEAgGMj4/D4XCgtbUVjY2N\nyGazQiq2Wq1ob2+H2WwGACwuLmJ6elrGJ1c7PVLVOWolSzXpDHrhdrtd2oB3dHTAbrdLxDYajcLn\n8yGVSkkUy2w2o6OjAy0tLeKYDg0Nwel0IhwOw+VyYWlpSQy4agieL0zJlxIWZzHlra6uDn/6p3+K\nP//zP0djYyMSiQQ2NjawvLwMv99f9fUpOIoVfTkbzdA3BYzFYkFTUxMMBgNSqRRisRgSiYSEih8+\nfIjp6WkJiedyObhcLmxvb8PhcMBoNEoYvNwpT6qwJqnluN85ODhAOp3G/Py8sHd7e3vhdDpRW1sr\n19jb24PJZEJjY6MInO7ubly8eBFvvPEGFhcX8Q//8A+4ffs24vF4VUqHYfrT5uMpIIeHh/E3f/M3\naG5uFgWUTCYRDAal+uJ5N/7h84hGo0in05ibm0N9fT0MBgNMJhPMZjP+8A//ED/60Y/k5a6trUUs\nFkMsFkM6nRZSm5rXPnfuHH7+859je3sby8vLVZP01POtergnXSuXyyGRSGBychLJZBJLS0sIhUKI\nRqPIZDISfbh06RJ6enpgNpsRDAaxtLSE0dFRXLlyRfZhZ2cHS0tLmJqakqqOct83lbvA964UY5v3\nx32qq6uDw+FAV1cXOjs70djYKEqdHrNqpFGgNzU1PTNjwWQy4fXXX0djYyM+/PBDJJNJMTqO2/ej\nvq5WAhiNRphMpoL0G4mGxbDb7cKDyOVyGBsbw8DAAFZXVwu8w7MM3Z8EevMsZY5EInA4HKLsgsEg\nbt++ja6uLgwODsJoNMJisaCmpgYtLS3wer2Ix+OnNsgpE/jcKG8reW/4bjqdTgwODiISiaCxsREf\nfPABRkZG0NTUBL/fj4cPH+LDDz8UImlDQwM8Hg/ee+89XL9+HSMjI3A6nTCbzcjn89ja2oLD4RA5\nXG1K+4V3+yhlmVQryGnJms1mCfeNj4/jX//1X/H48eNTk8HYKEZdayUhKqPRKMKAXgFD30tLS5iY\nmMCjR4+wsrKC7e1tOWw63dPe+y6XC16vF21tbXA4HLDb7Xjy5Ami0eixnfvUXGq5B5h76XA44PV6\npXEJlS1L+HZ3d+H1ehEMBrG9vY2uri64XC6YzWa43W7U19fj5z//ORwOB371q18hHo8LOaxcUMCd\nFIE4CbW1tXA4HGhqaoLNZpPc39raGhYXFzEzM4Pp6WkEAoEzb/yjnmlVcTJNQP5DbW0tUqkUTCaT\nMM19Ph92d3eFeESehNVqRTgcRldXF5qbm6HT6aTMkmSsUChU1b2QB8FQYrlEPhpkiUQCPp8P8Xhc\nOkseHBygpqZGSKfpdBperxcHBwfCTfB4PAX9CWi8VILi80ESVTlpBr6Tc3NzAJ4ST3t6etDX11ey\ntI6KtXgIFH+uo6MDer0eBoMBAwMD+PjjjxGJRI7kS5x0ro1GI2w2G7LZLCKRCJxOp+zpUeFbKjJ1\nzaOjo/jFL36Be/fu4dGjRwgEAvJunrSGUsqmGg+Tz2lnZwexWAxffvklNjY20NDQgEwmg83NTeRy\nOXg8HiQSCTkLLS0tuHjxIiKRiPRUKE5NHcVrKP58tbKi0vUXX4s9WEZHRzE2Nobr16/D6/XCaDRi\nbm4Oa2tr2N/fl+oOpoTa29vh9XrR3NwsRiW5HrlcTrg3xWmmcvFClXzxC3NaS9JkMslGMQ83MTGB\nf/mXf6lYsRRDVTTVGCFkUrtcLqnHVpX8+vo6FhYWsLy8jGg0KqFN7pHT6UR/fz/OnTuH3t5enD9/\nHgAQiURQU1ODcDh85CHgNSqJPuj1ejgcDrS1tcHr9cJgMCCbzUqb2/39fczPz0sYdXl5Gaurq7h0\n6RIGBgbQ1dUFq9WK1tZW3Lx5E3q9HsvLy5ifn0c4HK5YUVdqVJUCBURzc7Pk3Hd2duDz+TA+Po6H\nDx9ieXlZ9vKs8pXFCl79u1SDH0ZJVlZWpHwtEokgEAjIGRoaGpJyP7vdjqamJuj1ejQ0NMBms8Hj\n8cDr9YoQrHS/SbyjAqvEQKZS3d3dRTgcfmY/stksVlZWkM/nYbPZYLfbYbVa0dXVBY/HU8CHoGCv\nhK/DPee+V1J+yfD79PQ0IpGIRNyYEy5GsXJXv85wucPhQHd3N9xuN6anp6vqwcD7ocEQiUQQDAbh\n9XpPbNykGjG8VldXF/7sz/4MnZ2dcDqdmJ6extraGra3t5FKpZ4pWyzmLZBrdFpFD0AqRcbHxzE+\nPi4KzmAwwOv1YnBwUFIT7MQ5PDyMmZkZhEIh7OzsYGdnR2RqqdTkcXuj/ruaUl01GlBbW4srV67g\nvffeQ2trKxoaGpDL5RAOh7G4uIj9/X3o9XqJSPT09MDr9aKpqQmNjY0FxlgqlYLf70cgEEAkEimQ\nSZXs7wutk1cPEHB6Qd7f3493330XPT09SKfTmJ6extLSkpB8zgLFQrvc3yFzlx4Y80skxeTzeXn4\nxb9bX1+PoaEh/OQnP0FPTw+sVit2dnYwOjqKw8NDPHz4EDMzM1hZWZFOV8W5Pa6hnHXTqLh27Ro6\nOztxeHiIpaUl8XLdbjcsFgt+/etfw+fzSdohm83i4cOHGB4exuuvv47R0VH09vbCYDBgeHgYf/mX\nf4l///d/xyeffPJMicpxqPblK4bRaMTY2BhGRkZQV1cnlje9+NnZWcRiMWGxngX44h/VeKnYuwcg\nCnJmZgYbGxs4ODiQ6gw+W5PJhI6ODni9XmnVzDyfXq+H0+lEW1ubMJIpICoNQwJn24GMa8hkMsjl\ncujv70d7e7sIdKPRWMBuZhe8Spjg3AOWrx1FAiy+V0a7dnZ2sLKygsPDQ5w7dw52u12iGqV+h/8u\npfBU5cx68FQqJe988fVOUk7JZBJ+vx8WiwXNzc24du1aATOcv6ueJaDQU6VSMhqNuHLlCjo6OhCN\nRhEKhRAIBPD555/j9u3byGQyEjli+S8N/GQyWfCO0ChUCWzlgmFydR/5vgcCAfh8Ply4cEFaTR8c\
nHMDlcqGjo0O4ServlDKcj0Lxnp8mQnjjxg389Kc/xaVLlyQixZK/YDCIjY0N1NTUwGKxoL6+Hq+8\n8greeustIdqpFUR7e3uYmJjA3//932Ntbe0ZuV4JXqgnr+b8TgOGOdvb2/Hqq6/C7XYjkUjg/v37\nWFhYOLP2jqVylPx6OWvU6/Uwm80wGAzyQPmCsua5mNXJka9erxcjIyNiHOzu7qKxsREtLS0YHBzE\n4eEhIpFIQcVCcXOHStbM2fEM4zFvTSVvs9kwMTEBv99fcPiCwaA0kdHr9XC73TAajXC73RgbG8Pk\n5CTu3buHRCIhYeRyFb26/moMLb1ej9bWVrhcLgBPeQdq8xuydc+SiFR8D8d9nWeLgoq5bOZ33W43\nDg8PhYnvdDrh9XphNpulZ8Lh4SGy2awYlTQyKvXGKWyex6RFRrAAwOPxwOPxSDmmqpRUIlklMkLN\nyavGwUlnhqFjRtKePHmC9vZ29Pf3P/NeliJPqnlj9Wd4PwBO3E/Kg1JGDUPCrBt3Op3o6+vDxYsX\nhdVPo4YeYTAYRC6Xg9vthl6vLzgPOp0ObrcbLpcLBwcHSKVS2NragtlshtVqxcLCgpw/VXEe9X5U\nK8eLlatOp5P7j0ajCAaDyOfzaGhoEAOuOP1QjUzmzzENW21JYX19PdxuNwYHB/HGG2/A5XKhsbFR\n3sVIJCL8AZbF1tfXo6+vD6Ojo2hpaZEW4vF4XHg35GfxzFT7Hr4wJa8eHJVQVU04nDmp5uZm9Pf3\nw2KxYGVlBXfu3JHc2lmBh6JSj4hKprGxUV5kMnltNhsikQj8fj8ODg4KrEtyDGw2G2w2W0EDnVgs\nhng8Dq/Xi/39fTx69AjpdPqZ2nnuMa3lckJYW1tb+PLLLyVUqgpNHtJ0Ov1MKDibzWJ1dRW7u7to\naWnBwMAAWltb5RnRAmeaotw9r9bKJuhp6PV6IQ6qoT0qHp7FswLPSrkvqHqP+/v72N/fF8W/v78P\nt9sNt9stnhhzePzdbDaLcDj8TOOXUtc/DuRd8JlXEgItB/QQGQ7f2tpCY2MjGhoaYDKZ5Iw5nU64\nXK6KeluwpI2efCnFXEqBAk89ZVbgJJNJJBIJuFwuvPLKKxISVz3nw8NDxGIxTE1NwePx4Pz58yXT\nkPl8Htvb25iYmDi29FVVWMelclZXV4Wd/sEHH6ClpUXKL/f392E2m6HT6XDnzh0cHBzgxo0bcDqd\nsse8B7LJa2trYbfbYbPZ0NraildeeQUfffQRpqenkUqlEA6HEQ6HkclkhDyoKlRe5yyiPqoOICtd\nTW/QAQmFQpKrVrsRVvI5wLedDKtV8nQeWO6p9oBJJBJYX19HNpuVUmPuPysk2EOCz5U8nPHx8apS\nbcV44ex69aBUczN1dXXweDx466238MYbb8BoNCIYDGJmZgbr6+tIJpNnvvZK18gHqDYAYUiSQvng\n4AAGg6Hg5a6vr0dnZydef/11XL58WTydra0t3L9/H/Pz82IY0OtTm4VUu17gqTBJp9MlQ7YkUFFR\nqtenZxOPx7G9vV1Qsri7uyvdqmiRl8P0r/YeSoGljGqOcWdnR3KQNLzY6IQWeLWNZU6zbvV3GdHx\neDzo7+8XgiNL6548eYJYLIZQKIRgMAi/34/19XUx+qrtgFecbz6r50BSUTgcRigUwszMDH7v934P\njY2N8nmMZqyvr1e0/0cpyHK8PNUg29nZQTgcxtzcHO7fv4+hoSG0tLQAgHRmu3//PtbW1pDLPRCB\ntgAAIABJREFU5WC32wvY9wT5FfPz8xXXdpdaK8PbiUQCy8vLuHv3LkwmE1paWmC1WlFfX49wOIxY\nLIaVlRWp5+d4X6YlGOEkuZLGCZ+5y+VCa2sr6urqpCcGjYijPPmzNAZpBHZ2dkolQT6fx8LCAr74\n4gssLy9LY59y53cU46izUi4aGhrQ3t6O9957Dy+//LJEnbLZLAKBANbW1uD3+8XwMpvN6OzsxMDA\nAHp7e0UG7u7u4uDgAFNTU/jtb38rLZ5Pq+CBF6zkT7O5hNlsRn9/P/7oj/4Ig4ODqK2txfr6OmZn\nZ0XpVcvWL8Zpw1FqvkrN/8Xjcej1ejQ1NWF3d7eg+9fY2Bh+9rOfYXBwEACQTqfh9/tx584dLC4u\nYnNzE6FQSAbuHGdJV2rllkpz0Ks87rlRCFGhAk8FXSqVQiqVQiaTEc+eP/9dDL3hulRSYz7/tGkK\nZxno9XrY7Xb09/cDANbX17G5uSldBr9rUGiyac7AwAAuXryI7u5uqdSg180BLxzyoir5as4/lW1x\nB7CzAOvOV1dXsbOzg/n5ebz22mtSkw58q+QDgUDVSp7/V/8uF6x19/l8uHfvHpqbm9Ha2lpgyN65\ncwcrKysYGhoqqL5R10KS6tTUlMwfOAnlyJq9vT1sbW1hcnISAHDjxg309fXB6XRK/fn+/j6sVqu0\n+eXMADodKrGR7wMHNLH7Jg11RgpKnadiBc+vVXPuGPU0Go1SQme1WgE8dT6Wl5fxzTffIBAISFni\nac7maX7X7XbjwoULuHnzJkZHR+V6+/v72NzcxMbGBiKRCADAarXCbrejr68Pb7/9Njo7O1FbWys9\nHDKZjJSMRiKRqhu3FeOFe/KnUcA1NTW4cOECfvCDHwgDnEpze3tbms8c1aWuUlTKUC8GrWfWzJOE\n0dXVhe3tbVgsFmQyGUQiEWxvb2NsbAxXr16VemKyfm/fvo3x8XE55PQ0T1Lw1Qq7UiB5hwpavabB\nYEBTU5Mw2XU6nXgVqVQKDQ0NEkatq6tDNpsty7up1ktQvRN2/aPwZetdKn4yoQcGBmC1WnHx4kV8\n9tlnUtJYTRTntKitrYXX68Vrr72GV155BWNjY1KeSOG7s7Mj9eUA5N6qDUFyn7lnZxmOBSDh8MeP\nH6O1tRWjo6OShlANyWrY9apBchr5wt9LJBJYW1tDOp2WaE82m0U0GoXVapVmJgyRq2vN5Z42WmL1\nxkkpKubLVTlz1Pp5rvf29mT+BQcDtbe3o6mpCRcuXEBjYyPcbrc07KHcLeY6cM85DthoNIpXSkP4\npPN0FMG33GfAdbHV85tvvolr167BbDYLCZMM+5WVlbL7hBwH7kc1eOedd/DHf/zH8Hq9BV9nq2M2\nqlpZWUEymZT+C+QqUQbqdDoxyFgxodPpEI/Hq74v4oXXyQPV9fplicXw8DBeeuklNDU1IZ/Pi6eY\nz+fhdrsl/8qcNFtdMkxcTblEtUilUlhfX5f16XQ6aWiyt7cHh8OBZDIp+dSxsTGcP38eDodDQod3\n797FV199heXlZemQV67gPatQKxuD8CCqjX4MBgPcbjf6+vrQ0dEBq9Uqw1/YoaqmpgZWq7Ugd3oS\nVKWjCnH1nkrlXvl/i8Ui5YCsLWb74IODA2kHeuHCBbz00kvo6emR7n70XNbW1oRVrE7JOmndqmCt\n1DihwTQ0NIQ333wTw8PDMhaX0RaVIMdWuWo6pRrQmwK+
NU7PwpNnyLitrU3KLNva2tDb2wuXyyVe\n/GnCqKpXeRahY3WqHa9PYmNfXx/29vbQ2toKh8NR8HuURxsbG1hYWJDU2nHrZiqPRtVRUOdhWCwW\n8RJtNpuUYvEZFjfJUd+lUvtjMBgkN88UEOcPHGfo8d1Ur1nJmWcnRJYIj4yM4Ny5c7DZbEKMI2eG\nMojvwWmdt0rWScJib28vXn/9dTFC1O/X1dWJ/LDb7chkMjJE6Ny5c3A6nTAajQVcIYPBgL6+Puh0\nOiQSCYyPj0t1TPG7XMm5fmFKXj3IxSSVcqDX62G1WjE4OIiRkRE0NDRge3tbZlCbzWZcuHBBctQ6\nnQ7pdBqrq6uIRqMyQKWSUKYqrCtZKxEMBvHgwQP87Gc/k7pKNsghQzMajcq9ORwOWCwW6HQ6BAIB\nPHjwAJ999hnu378vjPdyD/hZCbx8/inLtampCR6PB/X19YjH4xI2I+P32rVrUj7Hka4Wi0XCy83N\nzcJuP6knPYVfsdAvVjpqnp0hSX7d4/Hg4sWLGBsbQ09Pj0y8YmkKyY3vvPMO3nvvPeFH7O/vS4jt\nP/7jPzA/Py8exUndy9R9L1eIqM+IbS/Pnz+P69ev44033oDJZBKjSSVp1tTUSDRE7WdeqQAj2JlO\nNSTOwotn/4WXX34ZV69eRVdXF9ra2uB2u8X44v0z7HlUDvgoqFGm0ygA7h2Z0w6HQ5SnzWaD0WhE\nW1sbgKe5WYbC1fWHw2EphUylUicaSsX58aPWRe6Iy+UqyPG2tbVJ3r3Y6C0GZZm6ZqbRampq0NfX\nJ+HkSCRSQL4r3lN1oFE1fCB68G63Gz/+8Y+l5bTBYEA4HIZer5fOmkyjsC/EaRwvrr2SiIBOp8PI\nyAj++q//GmNjY6KsVbD1sM1mQy6XQ3d3t0RJGhoapFSUa+d7/YMf/ABXrlxBPB6H0WjE1NQU4vG4\nlJyqjosq447D96JOnqjkZdTr9VJzzsEo6+vrePTokZRusZTryZMnMvyCDQgMBgNWVlYQDAaloctJ\nJV2n5RDQcyWD12AwyKAGdgYzmUwwmUyiEDlYY2ZmBnfu3IHP55O8XiUvULXCvhTYR7+rq0uuy5Bu\nd3c3enp64HA4ZJIY16D+TQ5BOZEIVRDxUJ/0HIqv6XA40NHRAZvNhrq6OqlD/fzzz/HkyRN0dnbK\nHHdGfegttbS0iNKYnJzE5OSk5Nr29/ePfdHUFEm50RYaNfTEPB5PQaSKkQSuj/tjt9uxu7uLzc1N\n5PP5gqE2lYJ5ftbsVzLtsBRItLx27Rreeecd9PX1ob29HQ6Ho4BxzKgK+/mzFW4loIJVeT/VrplT\n7Jqbm8W4UufNU7GrBEV+/sHBAR4/foz/+q//wsrKSlnGilr+d9TaeZZ4f4xSsYySaypWfup+7Ozs\nYHt7GwDE2WDengqbxvz58+exvLz8TLOW4jWp8rFS2VRTU4Ompia0trYikUhgenoa+/v7aGhogMvl\ngk73tHugyWTCwMCAjPRdW1s7s/RROeAzZ1rParWWvOdUKoU7d+5gZ2cHTqcT3d3dIkcYrWFairX0\niUQCjx8/RiAQQH19PQKBwDMcruK9Kwcv1JM/TY6YU9lMJpPUiPt8PkxPT+Py5ctwOp3CQM1ms/D7\n/aitrcWNGzcwNDSEjo4O/Pa3v8Xk5KTUYR43QIW5Z6D6PB89EzbDUGsp2WWN/bIp8MgxmJ2dxYMH\nD2RG9VmF3quByWSCy+WC2+1GLve07WJ9fT30er0Ib4aM1RIytTwtGo1KJKWcPgYUImrk5zgBqP7R\n6Z429+nq6pK+//F4HHfv3sWtW7cwOjqK7u5u6ZbI+zEYDDLilyMxW1paROkxD1quki8XFHr8t9Fo\nlDG4ahkjQ7V6vV6arTAyQaVarZdDb4nnU93bSu5Dp9PJHlqtVrz55pv4xS9+IfdIOcBcL42XRCIB\nv98vRNRKWvPyOtVWFDBNwrajTPFQyTNvzvXzM9XP5/mYmprCL3/5S3kux+0f163T6U40qviz7J3P\nSYpUjMUVEfl8Xshd+/v7iMViWF9fBwCZT9/c3CztcXlvVqsVPT09Umdf7LGqn6FWJpRr3KoGks1m\ng8VikRbfkUgENpsNvb29aG9vl3bZQ0NDMJlMePToERYXF0/tyVfyjjJKRNnGDqCcKMpnv7m5iV//\n+tfY2dkR4iDnTKiRy729PYTDYWxtbSEcDuOjjz7C5OQkPB4Ptra2kM1mn4l2cx3feyV/2jKvzs5O\nvPPOO7DZbNL8nwNoeBDT6bSwz/f29tDW1oaOjg5pJTgxMSHKvRLPuFoFyxczHo8jGAzCaDTKoWhs\nbITD4YDb7ZYXjV3kaLXG4/GqS7nOEswz7u/vw2AwSPmQw+GQ3CXDmbyPTCYj9dzM4e/t7UmN63Gg\nV3SSV0ZBo/6fQqS7uxuXL1+G0WiEz+fDnTt3MDs7i3w+j/b2dhnbabPZJBdND4dRilQqJQYDWxOf\ndaMYrpuGQzwex29+8xtMT0+jqalJDKyRkREMDg6ipaVFPLC6ujq0tbXhxo0bePz4MR49eiQRk0rP\nLN8LlWhV6TXoDb7//vu4du0a7HY7hoeH0dDQINdTc6zAt4x+9rJn17BK114t4ZC59pdeeglXrlzB\n5cuXJaVA5r8alVLDp+o9xWIxTE5OYnFxsaIGS3wXyolu0dgIhUKYmpqSc1m8loODA+zt7eH+/fv4\n+uuv4fP5EAqFkEwmUVNTA7fbjb/4i7+Ay+UqCOEzB0+ZUyo3rK6H3qaq2E+6Zz6rXC6H9fV1GWJF\ngnI0GkUul5OKHN5fa2srfvrTn6Kmpgb/9E//VFbq7CzACNf4+Dj+9m//Fh0dHRgcHMSPfvQjuN1u\n7O3twefz4fHjx0gkEtLam6WNagotnU5jYWEBH374Iebm5pBKpRAIBLC9vY1AICCs+6MiUuXe7wtT\n8pUchFKor6+H0WhEJpNBMpmUXCn7YNMS5axql8uFnp4edHd3w+PxwOFwSGlJcajpeYEHOhKJYGNj\nQ4a+1NXVwWq1wmq1Ssesw8ND8dA4a7jYQq/m808Lehn0wmmc2O12maDEGeck5lHgclId++Cryvuk\ndZerbIq/p4Yd2eluc3MT9+/fRyKRgNvtRn9/PwYHB2E2m8XSzuVyIlDoMZGwyVTT8+yMx+tms1mZ\nC8BZ4p2dnTCbzRJVYITDYrHAYrHAZDJhZGQES0tLSCQSVbXppfFZzf0xxNrU1AS324233noL169f\nl7JQFRR4qkfMzyRB0+FwlGXkqWuvFixVfOWVV/D2229jdHRUlB0rF9S9VD0qKv2dnR1sbGzg3r17\n8Pl8ZRuDqnF31D2o+Xg2tuE7pkYWqDgZOWRvjU8++QSLi4uIRCJ48uQJ6uvr0dHRgZ/85CfPNE5i\nNIifUWz0lXM/5f5MPp8X0nHx/er1eszOzsroVqPRCKvVisuXL8Pn88FsNp9q5Cw/pxzwvV9fX0cs\nFoPH40EgEID
ZbEZTUxO2t7dlBsnGxkZB6qQ47bi6uoqvv/4a//M//4O5ubmKy0TLxQtV8urflWJz\ncxN37txBJpOB2WyWQR39/f3SJtPpdMqovitXruDq1avo7e2FzWYT8ozT6ZQH8bwtQR4QNvggOYOG\nCTt9sYZVZaZ3dHTg3LlzktOv5rNPG9ZS8++NjY0y6Yz1twaDQcbocuACrW+TyQSn0yltNPf29iQM\nXM7aqwW77HHqWV1dnczXNplMOHfuHEZHRzE4OPhM720aM4xEHB4+Ha3LssXn4cWXAoU/SaRGoxGx\nWEyG1uRyORiNRpw7dw4WiwW1tbV4+eWX8eTJEywsLGBra6uqz6ymDI3cgFdffRWjo6M4f/68zDAP\nhULI5Z723aeyYkkjw/Y0LpLJJMLhsBhhp2l4UgmcTieGhobw+uuv48aNG0LApHLjGriO4harbKC0\nvLyM+/fvw+/3V7Tmk36WUQ+73Q6XywWr1YrOzk6MjIxILTkA8YQzmQzW19cxPT2Nhw8fYnZ2tmCG\nPNNS9NipKOlxkpeg0+lkANZR6y6Ozp6FTM3n80gkEvjVr34FAGhtbUVra6sQZWlMlhMRPAqVcpYY\n5aJhwTRIPp/HxsaG8L+MRiOMRqOQFRllpvF17949fPLJJ9jc3Kx47b8TSv60aGxslJGaDocDuVwO\nZrMZ3d3d0q3s6tWraG9vRyqVwuDgIHp6etDS0gKj0SgbrvZ7ft6gguzp6cHo6Ciam5thNpvlMKjh\nHNYH19bWysFmC9BqP/ukQ6xamsUjPhlJYNXC2NgYurq6JFdJJc/+3Q6HQ+6HZCUaMcC3pVQnGR6l\nQqPV3DtbCtfV1aGrqws3b95EXV0dmpubJUXClsEMb3KeAMkxiURCXmAK/XKE8lmQHlUFmEql4PP5\nhP1OYhg9Lc43Hx4exo9//GOYTCbcu3evIGdazp6pivc4z7K2thY9PT24dOmSzPru7++XM8u+3I2N\njWJQ83ypOUrep06nE+VFTs3W1pbwP8pZ+3G8jVKgwXHp0iV88MEHGB4ehsViKVDwKreEv0Pw+7u7\nu1hYWMDk5CSWlpakoclZGSdU8Ddv3sSFCxdgs9mE7KqG61WWPol0dXV1wu6msdrf34+XX34ZXq9X\nyI/Fz+Pw8BDNzc0YGhoSonBx5FPlkRQ7cKe9d3b05Bng59HZcDgciMViSKVSVX3WSWe8FHj/5F6s\nrq6KgUdS9eXLl3H16lVcunQJbrdbIoDz8/P4/PPPcevWLWlF/jyN1xem5E/rVdrtdgwMDKC/vx/N\nzc2ibFgnSm8GePpiOBwOGI1GUepsgsL+65WEM6sV2FQ2w8PDBV5CcdiX3g3XbjAY0NLSIm1Mq/n8\nUmHFo36OAo+lIcxT2u12tLa24tVXX8Wbb74p6Q6dToeGhgYpI6JwoXdAi5feBT344vrdo9Zz0pqP\nA63muro6qVgYGBiA2+0WkpPFYpHnT9ZxIpGQDn2MqHBioKrkT0JxmA44fQljNpvF0tKSCJnz58/D\narUik8lI/wKSqf7gD/4A+Xwejx49EsJmuesup9MdPcsLFy7gr/7qrzA8PAyPxyPhXaYK9vf3YbFY\npHSIHnCplrk6nQ4Oh0PKj9LpNGZnZxGPx5FIJMpaPw34cs9NfX09bDYbrl69ij/5kz8pOJtUdGq6\nRjVQ+DMcmTo7O4vx8XGsr69XrXiOAmXZu+++i5s3b4rhqva8BwoHXLHO32KxwOl0wmazYX9/H4FA\nABcuXMCbb76J9vb2AkeD98Qz1t7ejmvXriEajSKdTgshTP284jUUNzyr1PAieAZ2dnZEHqncjdbW\nVoRCIeksV62irwY8Z/F4XNZFI/X69euSquJ6Dw8PMTU1hb/7u7+TkH+1+N7n5FV2fTXIZDIIBAIF\nE8VUtjEPFBUWw947OzuYnZ3FvXv3pCUlQ7HPGxT26mGgAORIUUYamI9neKetrQ1jY2P46quvqvps\nda+P2ndVoFOxk4wWjUaFbUuyHV9y7q/arSyXyyGZTGJ7exvhcBjz8/O4d+8epqamJE9ZrrI8jUHI\nHvwkP/Gl5NQzlgyxnM/n82F1dVXawep0Tzt+6fV6KWdcWVkpOSr0eay/GAwV0qvhAA/2Ku/r65M+\nACaTCV6vVzy9QCBQtlBReSrlNGVpaWnBxYsXpXEJc9PRaLSggRGVZakudmrTHX4uZ6WzC2SlMqPc\nn3c6nXjttdcwODj4TClcKpXC5uYmvvjiCySTSYyOjuLcuXNob28XBcY2zvTk5+fny2ryVIyTFCH3\nhflzVeGpnAXyXdjf/tatW/D7/bDZbBgeHhbPsrGxEel0WipG1EghZUFDQwN6e3ulhv727dv48ssv\nC+rluQ9HRYtOE8lihCSZTCIej0vZZT6fh9frxfvvv49MJgO/31+VHK82lZnP50WG7e3tiUHF/gV0\ngnj+Keu3t7elKdp3gRem5NX2m9UgHo9jZmZGvHan0ykTrNTyI1qYuVwOiUQCc3NzuHPnDr744gv4\n/X5sbW1V1Pv4NAJbNTho9bLRBImDBwcHsFgs0umMh5m1sAx9VouTXjRa78xN8wXgAWVJXygUkgiI\n3W4veLlZ9xkIBOD3++Hz+TAxMYEvv/wS0Wi0ZDONUigOdVcjKBi1SaVSSCQSEoJlyR/HsdIAZIcv\n9vVmq2RWFAQCAUQikYpLus4SLG+jclFbOe/v7wsXhe+G2+1GU1NTRf33VYF93Pr1er108mptbQXw\nbfSETYPoIer1+oK6fr5LVP5qfTgjPqurq/D5fBXPoagk/MrIwfXr13Hu3DlZFz3z5eVlTE5OYmJi\nAjqdDufOnSuo4qEXz4ljfr8foVDozEZcq6CCJ5mvODWhKizyfzY2NrCzsyO9IsbGxtDa2gqdTic9\nN2igsFkViack/bKK4+DgANFoFN98801BozF+diUpoUrAZxEOh9HS0iJ9I4xGI7xer8yar7YvQrXv\nKCPDBwcHIk+am5ullJj8r93dXcTjcfh8PszMzEib7GpRiWHywpQ8lRUHg1SKjY0NKQExmUxobm6G\nyWQqGHpCJjjDbAsLC/jHf/xHTExMSAlYNZ28qg07kYBGTz2ffzp+cmlpCePj40in09jb24PZbEZD\nQwMuXLjwzEAJ4NvGCM/Lq6GQZX6JeXSDwSBeGPB0Fnh7e3tBt6iamhp5GaempjA+Po6JiQn4fD5s\nbW1VTZ46Tbj+8PAQ0WgUa2tr6OjoEM6ASppiREGn08n4X1rh7F1OtnQl3oKalzwrZa+mMHQ6Hba2\nthCLxbC6uop8Po+RkZGCSgE1NF7JPpajpCwWCy5fvoze3l45k/v7+0gmk5KWUY1vcjdUBUkFo/ZU\niMViCAQC+Ld/+zd89tlniMViEgU76exXWilDEu6FCxfQ3t4u68pkMlhbW8Onn36KTz75BH19fbh0\n6RKGh4fR0tLyzLjW9fV13L9/X6ZKVvOOniT41cqWYiZ+cUqOMi+ZTOL1
11+HyWSC2WxGR0cHDg4O\nsLCwgNraWklXsWUzPfvW1laJzDQ0NCCXy8FqtYp8Kp7jrj4/russz3w2m8Xa2hq6urrk2uSnMA9e\nTfvlanLypa7BfRoYGMCNGzcwOjqKtrY2kfNTU1P453/+Z3z11Vdnsi/f+3C90WiU8Gk1LwPZlLOz\ns3A4HLh48aJ4CqpF+eTJE2FHr6+vY35+Hmtra1UbF0D1Vh/LudTuRfRg2B1ubW1NfpbMe3r+FotF\nyEuVeJLVrJ3hPvacZ604AEQiEYRCITidTlkrjSla+isrK1hZWZFRomw3XMneqZ7JaQhsrLNNJBIy\nyIgRFdVA4bAgu90Ok8kkKSGem1AoJIqLZMNKOt6dBagorVarpHNoxLL8kp/Hs2U0GtHd3Y1gMIhg\nMHimpWh811iqypwulQBbkVKxq+1PCeb1eS8MMU9OTmJubg7hcLhkQ5DTrp2fzfInDpnh7x8cHMig\nK7PZjJ6eHvT19aGpqUl6QKhoaGgo6LN+1hEcriuTyeA3v/kNdnd3MTQ0JHMiilvZ6vV6DAwMwOPx\noKWlRc692WwuYKNzvoff75eBKCyDVQl1aqkvuTfquoojP2d5//n8Uy4KnTvVoFCNrWpwVnwZck0G\nBgbw0ksvobm5WboQbm1tYXl5GXNzc9J87bvCC1PyDQ0N0re8Wisql8thaWlJSEdAIYs8l8vJtKhI\nJIK1tTXEYrFTK/hqvUqS6ChAyGQlsY4eJ9vx0rPgIAbWo5vN5rI7xZ0G6stKg4zKnKE9CvHd3V1s\nb28jmUxifX0dMzMz0tWObNxS+1aKLFSKP8BzUmkUhT+XyWSEmc3Qser5sK0n21QCwOLiIiYmJrC4\nuIi5uTmk02kA3w4sUb2X49ZT7VlTeSUMm3LPmQ9mOWVtbS08Ho90JqNHvb+/j8bGRvT392Nubk4M\nsuPWVMkekxdAQ5rVIuxVcZRiVq+tGlssn1tdXcXExIQQmhiKPU0tdCnU1NTA6XSK16rmpJmicjgc\nuHTpEkZGRtDT0yMETqCwY5vT6URvb2/BsJLngUwmg//8z//E9PQ0fvjDH+L999+Xbmpq33uj0Sjk\nYzWc/uTJE2GlM53l8/ng9/sxMTEBh8OBnp4ekVHk2eh0T7sXskqmmFD3vBQ8ACFyxuNxIf1R/pHM\nWU3fimoqMY66Bo1vjoMmd2l3dxcbGxtYXl6WLnbfJV54M5xqcygEFfna2prMUWZHrWw2i5mZGdy7\nd0+8y1Qqdap1n+ZQcCAKc8LA0yqBnv87StblcklPdIbuw+GwWM4GgwFvvfUWcrkcfvnLX0p3qHLW\nrM4KqCQfrv4+O8ANDg7i7bffhsfjgcvlkvRDTU2NVCtQuQNPSU2Hh4fCTGdfeP7hOeDaSnnG1UZ8\neB8LCwu4deuWjH5kG0q10xpbCfPP5uYmFhYWkEqlJOTNPCXLBelhFNdPVwvuNb2ThoYGtLa24vz5\n87h8+bLk/Ww2m+SBabSy7IzKieOMKaiNRiPMZjNSqVTZih4oDAer/6+vr0dTUxOuXr2K4eHhAg+v\nuKKgGHyWfK5qfXZTUxMuXboEm80Gr9eLqakprKysIBKJFHhwJ62/nGdRV1eHsbEx3LhxQ3oM8BnY\nbDYMDQ2hvb0dOzs7cLlccm7U8kIKeBo3AwMD8Pv9WFpaqijiVgnfYH9/H+l0GltbWzICt/haxZ42\n11pMLKypqZH0Q2trq0yhU/uyA0+jp8vLy/D5fM+MnS03laJG5IrXwX+rKQBGeSwWC/r6+vDWW2+h\nq6tL5P7GxgYePnyI7e1tmEymZ87Vce/kWSl4RqwuX76M3//938fY2JhErkiy+/TTT/Hf//3fiEQi\n31l/DeKFKfnd3d2q2m0Wg4S68fFxJJNJIaux+cn09DTu3r0rNc7ftRWlgkqe4VWG751OpxwWsjPJ\nN2DjCraJ9Xq96OjoqKiBTyVh7mLFroKHubu7G5cuXUJjY6OEYtnTO5VKIRwOFxgqer0eZrNZnjcF\ni6rk+VIeFfouVhqVRFPy+TzW19eh0+ngcrkQDoelfS2VPD1lGhm7u7uYmZnB9PS0DC9S23rSa+Lf\nx3n0lQoSngV6Tt3d3bh+/Treeecd1NfXS8+CUv3zaahw3xk+p3HAYU7Fe62urTg1ks9/O9edxpxO\n97Rs0uv1Ynh4GN3d3VJyWUq5k4GczWaloVM+n4fD4ZBeCuoeNjU1SXe8w8NDxGIxJBKJsiI6pc6K\nep+qYtLr9RgcHJS56+r5Z3TH4/Ec+azU2nkKb/bpLy4pO0twpsDW1haSyaRUgvBsqvdXkkJBAAAO\nH0lEQVRM751KmYYYz3o+n5cpaByiQqeC98XOeWtrawgEAtKLvxqo0Q+1MoBGNFtG0yHj/Pienh6M\njIygqalJqmbY7vvw8BANDQ1ifKjXBb414lVuE9cCnI4/oNfrpbrk3XfflT0kz8Hn8+HRo0eYnp4+\ntVNbDV6YkucEr9NaNblcDpubm/joo4+E1MZ8GcOIVJSs1z4Lb6uaa6jThtjXmIeOpRjsBma1WmGz\n2ZDP5xGJRCSsT/YuX+pywANcrlWrvoRqxIUldA6HQ/KSjEhks1nE43EsLCxgZmYGPp9P6rJ57YaG\nhmcGuqgM4VLrO27t5TwHfj+dTmN5eRkffvhhwZjH4pC4xWJBQ0ODPKdoNCqfrXqR9OZUD/So0qFK\noe630WhER0cHOjo6pKMZz4LRaBThDED6PpBlzw5mJEyx4Q+Ft+o1lRI+6nkwGAwwm81wu91SkWAy\nmTA4OChrK06zqPe+t7eHUCiE+fl5PH78GAsLCzg8PMS1a9dk7jbPi9FoRHNzM1paWtDS0iLjXJmX\nP2lPi8+yWtqpGhI0sj0eD7xeb8FsctXQOe4Z8d3NZDKIxWLY2tpCMBiUJjjPS8kDkImD8Xgcu7u7\n2Nvbk+ektgmmgqbMIKkUKDQA+F6y/JJ9Lchr4iAVNp7h2Vffj3K8ePX/aq8Btsa22WxIJBKYn59H\nLve0Q2JHR4cMvtLpdMLdyOfz6OjoQCQSQTAYlEoZOg7cf0ZaaLCrUY3TstwbGhpw/vx5DA4OorW1\nVSJ8VPJ+vx/JZPKFKHjgBSp5ekWnfQl4iMPhsISUmS8iY5c/dxbh1NOA+a/79+/DZrNhcHBQiBn1\n9fVS888XjnkoACIg7969i/n5+YrK/k6CKtBKeT38HBJvGhsbxeKmootEIlhcXMT8/DyWlpYQCoUA\nQIQNCXzFRDXVqzrqJaBS488U/+5Jnh3wredzVCSH905iI88n11ps8TPlAKDAizsteH0KqlQqJZO2\nNjY2pK3thQsX0NnZKV4DhTXLNAGIQcvSrmw2K+SgSjs9qkRWekr09vgOZjIZBINB8bgjkYgMjdrd\n3UU0GsXGxgbW1taEfLS5uQmz2SxyoL6+HhaLBS6XCy6XC4FAQFrz8syXw4FQ5Uox25tf54QzppzU\nHHOxcud7wHJF7kMwGJQ+CiR
X+f1+Iew9T3nDNcXjcayvr0uInZyM3d1dzM/PY3NzU0b2ms1mDA4O\nwm63F/Ab2LK2vr5eDFzObHe5XNje3sbm5iaWlpawtrb2jIN2lKFYas1AIUmO7xH7ULBcju+ewWDA\n4OAgzp07JylBOhXxeBzRaFRaTavsejU6ely07TR6gQ7Zyy+/jIsXLwrhlGfF5/Ph1q1bWF9ff2G6\n54Xm5M8iJ8Lf5SHjMBH1698X5PNPG4R8/vnnSCaTuHnzpuT4RkZG4PF4CjpOcfhObW0t5ufn8emn\nn2Jqagp+v78q8qC616U8FDV3r+4pvW3mxugFAN96AqFQCDMzM5ifn4ff75c8tsFgEK+SCp7XU/fl\nONTW1spnMkRYbICchbGYz+eFXMdrqtdVr682IjnJi6l0bWrlRTQaxe3bt/HgwQM0NjZif38fbrcb\nH3zwAX74wx+ira1N2NxUgixXY+QqFArh8ePHiMfj8hz5R21RetSecC3sBrizsyNtglkuF4vFkM1m\ncffuXSwtLaG+vh4TExP43//9X8m5l+p9Pjs7K1EUltoxNVVfX4+1tTXJxavP4SQlr+ZjjzJkXC4X\nhoaGJGWgGpzFe8B74Nhncku++eYbJJNJdHV1YXNzE3Nzc8L9qVTJV3p+adAxJNzT01PQ3Glrawsf\nf/wxpqamcHBwAKvVCo/Hg5qaGng8HumQyE6gJAaztn57e1vSWvF4HBsbG5idncXq6uozofpS78hR\nUM83DQ06ZdFoVN4Xrqm+vh7nz59HV1eXEDOp4MPhMBYXFxEMBpFKpUTGqAqekSt1UNNRMrBS1NbW\nwuFw4NVXX8XFixfFqeE65+bm8PHHHz93kvRxeGFKXg0XEmelkCs5cKe5fjW/d3h4iHA4jIcPHyIS\niaC1tRXt7e0ysYvho1wuJ41yJicn8eDBA0xPTyMUClU9VrHYYy8OnZHtX1zvyu8DkJclkUgAgNTF\nM2qieoiqN6zmBKvdQ3qPXMd34SmV+jf/Xw6bWDWcitMS5aydZ2Z3d7dgDz/55BOsr69jYmICHR0d\ncDqdEq612+3Y3d1FIpHA1NQUHj16hIWFBcTjcRmUQQPgOO+r+P75THkWGIr84osv8OTJE2xsbIgn\nr9PpRPEf5+Wp36MntrOzI+mUTCbzzHMud9+Og073be9zjulVlQsA6e/OygFyajKZDJaWlrCwsCCe\nvM/nk5A90yXP82yqXTP5dzweRywWQywWw9LSEubm5iQKksvlZMyyz+cT54IRE4fDgb29PTHQWfnU\n0dGBxsZGzM7O4uuvv8bm5mZFCqtc+X7Ue0TO1a1btxAOh9HZ2SkyikZnKpUqqOJRr0N5xedafA5P\n+4ysVquksFQiYSKRwGeffYaHDx/KPvx/58mroPA7q5zFd7WZ1Ty4XO5pu1fW7ZNQ0tzcjN3dXSE2\nMVS7traGhw8fYnJyUjz4asuI1BxVsaKnN8WfUQUrf46NMjgqlwojGo3C7/cjEomItUxFwns+bVSl\n2MsqJoc9L5wkmMpRJsU54mrOjMpAf/LkCSYnJxEKhbC8vIy+vj7x0Gw2m/Tl39rawldffYWlpSVs\nbm6K0cVSyHJquVXhS9Y+/03CE1nei4uLBc2lylXGarRArUB53lBLWtlHgSkd8huotNXa7MPDQ8zN\nzWFqakpy4VQw3I/TkNKAZz1NdT+o2GmkUKlks1nEYjHMzc3h0aNHmJubE3mRz3/b5XFubk6GTbnd\nbrS0tMDj8SCdTmNychL7+/tCsk2n0zCbzZicnMTk5CSi0ehz80qPMgIzmQwePHiAeDyO/v5+mM1m\nGAwGqYAhoVpdl5raO8q5OIszxsiT+k7FYjHMz8/j9u3bmJube+HR5Bem5OmxfheC+qxQKk9XDSgE\nDw8P5ZD6/X4hzDDfTa8hnU4jnU6LUq127SoBifuuegNU8sVhet4nSxXn5uZgtVolVLiysoJgMIjN\nzU3EYrGS+d7TKnh6nmobZDW68H2FqsTO6oxT4YdCISQSCSwuLgqZkGVy+fzTnGw4HJZ8ZbGAO0oR\nF69X9bb52bW1taJU1tfXJR1zVsbcdwFyeMh7WF5exsLCAhYXFxEKhSSvzntTa8Z5FqlE1f2ppn86\noTo8xcQw1Thnp8/29nYcHBxgZmYGer0eoVAI33zzjXAY1PA1jTzyaNiueXNzEysrK5J2y+fzsjcs\nw1xdXZW+F9U0+jlN9JOcK947SaTkgIRCoWdaZfMcHlVBchYgIVWv1wsR8PDwEJ9++im++OILTExM\nYHNz84UR7ogXOqAG+G5f6u8TeAj39vYkLMhDrOZLi4mD1YBCQb0OX2SWs5US+sUKam9vD9FoFNPT\n00in09DpnraaDIfDEtJUw2Zn8Wwp8MjyVYlwah71+wpVQZ6lsuee7O3tIZVKiZHG8ifg27I1KiGg\nPLZ/cVqBa+YfNZWjfv37/iyKkUgksLCwIB0n/X4//H4/1tbWCspteb/qZDruNZtSncWzLY74qP8u\nBt9HVimx5XQ4HMb6+rqkPkpxGViNQc9zb29PjEDVGKQDwNQLUxbF93jSmTrtuaDBGo/HZVqgwWBA\nKpVCNBo9spyv3ChbNeDv7uzsIBgM4quvvsLi4iKy2Szu3buHR48eIRgMPvcxsmWt9UV9sMViyTOk\nqzJmv8/ColhR/q6A5BVVSAOFHdXU3uYqgbHYS1YJLWoOUw25nuXeqNEGrl8NgZfDtn5RUIVIqajV\n8/As+Hc5n3FUBI1n4ahy0+JUyffpfSiHyMu1m0wmaQXMoT/0fo+7NvBtTvwsz1/xvh61x3wnWOrW\n3NwMu90uUwbJjyn1vtTU1DzTIY7vNAApNT48PCxgplMeqNdV94J7rq6RXzsLkEOhzpVIJpMyYbKa\nZ8CoqToHoJL1kNRns9nQ3d0tzW/i8TiSyaTwaJ7Hu6Hs74k6/IUS79Q/3ydBUQrFdeO/K1AVTbEX\nxu/z66pQKUUqK6XU+f3nJezVtT8v5fg8cJSXQKH6PO6h+NrVfEY53k2pcP73AeWm/vh9ViPwd8oV\n9sU8C37tLLz44jx8qT3m+8k6ds54JxfguKiN+n6rQ5lKVbzw+8eVWqpRwuLozlnsiwpGr7a3t1FX\nV1cQpar2rPPvarkyXM/y8rKkRBhhe17zC9Q1l3NeX3hb2++bJ1AKx4XNvs8oJTSOYs0Xew78mgr1\n/8WH63lZq8et6fvA5TjOYyleH/e41PfOCmepaI5b41l6aC/ielTqaq+OStZA5VbcPe20+8+1lbpO\n8dfIDUilUtLatrjVrPpvVRYUp1yAb8m3qtFfTkTku5KLNGzUMtpyFHypM3Fama5GRVkRwut9V/rs\nd00fadCgQYMGDRo0aNCgQYMGDRo0aNCgQYMGDRo0aNCgQYMGDRo0aNCgQYMGDRo0aNCgQYMGDRo0\naNCgQYMGDRo0aNCgQYMGDRo0aNCgQYMGDRo0aNCgQYMGDRo0aNCgQYMGDRo0aNCgQYMG
DRo0aNCg\nQYMGDRo0aNCgQYMGDRo0aNCgQYMGDRo0aNCgQYMGDRo0aNCgQYMGDRo0aNCgQYMGDRo0aNCgQYMG\nDRo0aNCgQYMGDRo0aNCgQYMGDRo0aNCgQYMGDRo0aNCgQYMGDRo0aNCgQYMGDRo0aNCgQYMGDRo0\naNCgQYMGDRo0aNCgQYMGDRo0aNDw/xr+DwR8azBqHOP+AAAAAElFTkSuQmCC\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Current loss: -2.295616\n",
+ "Current MNIST score: 6.899487\n",
+ "Current Frechet distance: 38.447990\n",
+ "Training step: 1000\n",
+ "Time since start: 5.646992 m\n",
+ "Steps per min: 177.085421\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAfkAAACHCAYAAAAV6SDhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsnVlz21ea3n/Y950EQHBfQJGUqMWStVm2bHV77O7qmenJ\npGqmMpWa3OQynyGVj5Cb5GIqVclFp9JV3V2TpT3d47bVktvaRWrhvu8gCIDY9y0XyjmGZMuiSHBz\n8FSpbKlI4ODg/M+7Pe/zQgMNNNBAAw000EADDTTQQAMNNNBAAw000EADDTTQQAMNNNBAAw000EAD\nDTTQQAMNNNBAAw000EADDTTQQAMNNNBAAw000EADDTTQQAMNNNBAAw000EADDTTQQAMNNNBAAw00\n0EADDTTQQAMNNNBAAw000EADDTTQQAMNNNBAAw000EADDTTQQAPHB4pDe2OFolqtVg/r7RvYZygU\nChQKBdVqlXp8z+K1jgsUCgUqlYpKpUKlUjns5bw1FIoXV8Nx2nM4fufkIKFWq9FqtWg0GiqVCplM\nhkql8v/9fh3zs/5GG64+iMV8F47bhjawc9Qa+HrhOJ6X43yBNtb9w4NWq8VkMqFQKCgUCnVzwI87\njuse7HTdh2bkvw/1NhAN7/7goFAoZMSg0+lQq9Xk83lUKhUmk4lCoUA2myWbzVIqlRrfSwMN7DOU\nSiUqlYqhoSH6+/uZmppidXVVRrAN/LBx5Ix8PQ+eiCgFGgZl/6FUKlGr1eh0OhwOB1arlWKxiMlk\noqWlhWw2SzQaZWNjg+3tbXK53LFMZzfQwHGBVqvFbrczODjIhQsXCIVCrK2tHfayGjggHCkjX08D\nL4wNvDDuInXaMCj7BxHFq9VqlEolLpeL3t5eHA4HXV1dDA8PUyqV2Nra4ssvv2RkZITFxUXy+fwP\n0gFrZJAaOApwOp2cP3+egYEBXC4XxWKRdDrduAuPMUQAu5Pv8EgZ+XpBoVCgVCrRarXAi9qo2JBC\noXCsa6VHHbV1Pq/Xy+nTp3E6nbS0tNDa2kogECCdTpPP5ymXyz/o7+EwPturjvJR3l9xURkMBiwW\nCy6XC6VSyebmJslkklwud9hLPPZQKBS4XC4uXbqE3W5nY2ODeDxOPp8/7KU1cEA4kkZ+rxeTYDZr\ntVoUCgXlclkySoVh2ct7NCK016NUKqFUKgHw+XycPXsWvV6PSqUiEolw9+5d/vjHP7KyskI4HKZY\nLP4g9/IwSE3CaNaez6O8tyLz43K56O7u5ty5c2i1Wm7fvs3CwkLDyNcBCoWCpqYmLl++zMzMDA8f\nPiQUClEqlQ57aQ3sAa+Wor8Ph2bkXzWU9TCcOp0Og8FAd3c3HR0duN1udDodCoWCSCTC1tYWa2tr\nhEIhotHori5isbENQ/8NFAoFRqMRs9mM3W7HbDaj1+tpaWlBqVRy9+5dlpaWiMfjLC8vs7KyQiqV\nIpvNHvoeiqxP7UMjzkXtn92+9l6dSUGaAiiXyygUCvR6PT6fjxMnTnD69Gk0Gg3VapVcLkexWJRO\nbSwW449//CNra2tH0mAqlUr0ej39/f382Z/9Gd3d3ZTLZTKZDMVika2trX1779rSksjslcvlH1SW\nT6FQoNVqUalU5PN51tfXmZycJJFI1P19YGcOpTjPGo0Gs9lMd3c31WqVQCBAIpEgk8nI7+GHAPF5\nxTNcLBZ3fcZqjbq4G8rl8ht/71AjeXEJ7rYWr1Ao0Gg06HQ6LBYLdrudpqYmzp49y8DAAC0tLRiN\nRtRqNfPz88zOzqLX66lUKsRisV0/zEqlcl8OYe0+KJVKLBYLarWaQqFAPp8/kik2cYhFSr65uRm7\n3Y7JZMJisRAMBvn66695/Pgx4XD4SKXolUolOp0Ot9uNyWR6icNRKpVkJ0AymSSdTh9odC6Muclk\nwm63o1QqKRQKaDQaHA4Hg4ODXL16lRs3bpBOp0kmk8TjcXK5HKVSCZvNRiQSYXl5mWQyeSRbpkTH\nRU9PD9euXcPlcpFKpQgEAszOzu7Le4rv3Gq1YrPZMJvN0qlQqVQkEgm2t7fl3SQcK6VSSalUOlbG\nR6VS0dTUhNPppFKpEA6HWV5eJpvN1vV9duLMiuyqzWaT97LT6eSdd94BYHp6WgZfiUSCSqWCVqsl\nHo+TSCQkn2onRu0oQJwZu92Ox+NBrVZTLpeJxWIkEgmSyeRbv16tM3UsIvlaQ7mbi0cQ69xuN729\nvVy5coWBgQE6OzsxGo3odDrgxUGv9R7z+TyhUGjXnpRSqUSpVNZV5KU2O1Bbp7x27Rput5vV1VXm\n5+dZWlo6che1Wq3GYrHQ29vL0NAQ2WwWm81GX18fa2tr3L59m4mJCaLR6JEy8PAi89Pe3s7f/d3f\nMTw8jMVikdyNbDZLMBhkcXGRr776iidPnpDP59+q7W8vGQClUonP52NwcJArV65gMpmIx+O43W68\nXi8tLS2YTCZKpRL/5//8H27fvk0qlaJSqaBSqejr68PpdNLU1ERHRweJRIJisXhk0rTCQbfZbFgs\nFrRareTQxGIx0un0vrynVqulo6OD69evMzw8TFdXFzMzM2SzWTo6Orh79y6fffYZGo2GYrFIJBJB\nq9ViNBrZ3t4mk8nUfV37Bb1ez9WrV7l+/TputxuNRiM5SfXGmwy9SqXCbrdz7do1/s2/+TdYLBZ0\nOh1OpxOFQkEikSCfzxOPx5mamqJardLe3s7vfvc7bt68ST6fJ5PJkEqljtQd8jqoVCrMZjNXr17l\nb/7mb1AoFMTjccbGxhgZGeH+/fvf+zm+y4DXluDeJhtw7MRwhBG02Wx4PB5Onz7NmTNnOHfuHD09\nPfh8PpLJJLFYTHrkInoXYhA6ne6tjWWtAa5nF4DVasXhcMg0q1KpxOl00tbWxocffkhbWxvBYJBH\njx6h0WjY2tqSnq7AYR16pVIpDfrZs2c5deoUwWCQVCrF6uoqY2NjPH/+nHA4vKMsxEEpT2k0GvR6\nPSdOnODChQtcvnyZoaEhbDYbmUyGbDZLsVjE6/XS3Nz8Uvq4NtLYb+h0Opqbmzl37hxOp5OtrS1K\npRKlUonV1VWSySSRSIQvvviCBw8ekM/nZcQUCoXweDyYTCYMBgM9PT0EAgFZpjpovMoVEEbebrfj\ncDiw2WwolUqSySSrq6uEw+F9WYfI3LzzzjtcvHiRnp4e2trayOfztLW1YbVaZRmhWq0Si8XQarUY\nDAZisZgsMWUyGZLJJIFAgEgkQjKZlKnYowCFQoFOp6O3t5eOjg7y+fyhaVOI7NPVq1f5yU9+wvXr\n1zEajXKdooyaSCSIRqPEYjE8Hg9DQ0MUCgVcLpfMqmUyGQqFArFYjKdPn+7bOdkLbDYbbrebgYEB\nPvzwQy5evIhCoSCVStHc3IxarSYcDhMKhWSWohbfZ1/Ezwoy+U5w7Iw8vPCSWlpauHDhAj/96U85\nc+YMarUavV5PoVAgEomwurrK4uIia
rUap9NJNpslFotRLBaB3RuUV439XmuuXq+XkydPkkgkyOVy\nKBQKzpw5w5UrVxgaGqKlpQW1Wk1HRwc6nY67d++Sy+Wk0RSR52GQvFQqFW63m0uXLnHx4kVOnDjB\n+vo6d+/e5Ve/+hXBYJBsNruji09Er8C+p+T0ej1ut5uPP/6Yn/70p9jtdsrlMtlslo2NDba2tqT8\np9fr5cKFC5hMJsbGxpienpbRxE5SlLv5XkSaOJ/PUywWsdlsOJ1Ocrkct2/f5uHDhywvLxMOh0km\nk/LnxHuVSiXm5+dZX1/HbrfT0dHBhQsXePDgAfF4/MDPSy3vAb65oMTl39zcTFNTE/F4nFAoxPT0\nNIFAYF/WIXrG29raaG5uxmw2Mzg4CLy4V65du8apU6cwmUySuCvWLiSKq9Uq6+vrzM3N8eWXX/Lw\n4UPm5uZIpVJHhkgq9tflcqHT6VhdXWV7e/tQ7gq9Xk9bWxv/6l/9Kz788MOXMmbFYpGxsTH+03/6\nT8zNzbG1tUW1WuXatWucOHGCq1ev8hd/8Rcye1qpVEgmk4yPj/Pv//2/P5JG3uv18u677/KXf/mX\n+P1+1Go1KpUKl8uF3W6XUf39+/dJJpPf4qbBzuzSDz6St1qtdHV1YbPZAEgkEqyvrxOPx3n8+DHT\n09NEIhGZ+oYXxqNQKBAMBnd9AYsHfa8PizCSdrudzs5O4vE40WiUSCRCMBhkfHwck8lEKBRiYmKC\n58+fMz4+TiQSkd74YSpWCc6A3W7HarWSTCalEXzy5AmhUIhsNnukamiiHjswMMAnn3xCW1sbS0tL\nrK6uEo1GyeVyxONxMpmM7M7Q6/XE43EikQgbGxtEIpEdfffCOOzlnBQKBTKZDPl8nkgkwsTEBE+f\nPuX58+dEo1Gy2SyFQuFbvyechFwuRywWw2w2E41G6e7uxm63MzEx8Z0RRL0hnlWhm1Db3QJIkl2h\nUJDlNEESE874fqxJlJj0er38N+FUbW9vy/Og1Wrx+/24XC4sFgsqlUreG9VqVUb4586dIxKJ8ODB\nA+7cuUMikdixc7ufEJk2p9NJOp1GrVYfinN37tw5Pv74YwYGBtDpdGxvb/P111/z6NEjcrkcS0tL\nPH36VJZDFAoFz58/5x/+4R9kacrv99Pe3o7H45Gfq6enh42NDQKBwKHvNYDRaMRms/GjH/2IDz74\nAIPBwMjICJOTkxSLRYxGI0NDQxSLRbq6upibm0On01EoFOQ9uR/fz5FsoXsTBIGjs7MTg8FAJpNh\nc3OTxcVF5ubmZB24Nn0mHkij0Ug+n991Xb0enrBarcZgMMiIoq2tTV4ikUiEtbU1otGoXONnn31G\nIBAglUrJujC8SIOJOqa4pMT69jP1LcoeLS0teDwe9Ho9kUiEaDTKo0ePmJ2dlSzZt8F+XkAisvF6\nvQwPD3Pjxg2Wl5cZGRnh3r17zM/PSwMufl7I84rvQez9d10or5ZyhBNXKpV27egUCgVyuRyFQoFE\nIsHIyAgTExMsLy/vaK+EERXlh/7+ftRqNalUipWVlQNhMot9qHV4amuLwuiLPRaO+H45h8LREH9K\npRLZbJZUKkUkEmFhYYGpqSlmZ2fRaDS89957tLe309TUhEajeclhUalU9Pf3MzQ0hF6vx2KxsLKy\nwsrKyp6IjvXq3BGOuM1mIxQKSYb3QUF896I9UqVSsb6+TiAQ4De/+Q2//vWvpV7Gq1hYWGBhYQG9\nXk9TUxNXrlzh9OnT9Pf34/V6qVardHR00N7eTjAYPHQjL8S/hoaGeP/99zl//jz37t3j9u3b/O53\nvyOTyWCz2fj444/p6emhqamJpqYmHA7HjgOH3eLYGXmVSiXTrX6/H5PJJKPIZ8+eMTExwdra2rda\nFcTDDC8edNE68zbYa2QmXsNmszE4OMinn36K1+tFp9Oh1+spFousra2xvr4u/1sulwkGg+RyOXkh\ni4fH6/XS3t6OVqslmUzKlGGpVJJRR+1nrMdBUigUtLa20t/fj9/vl4SeVCpFMplka2uLSCRypCJ4\nQKbLfvazn3HixAnm5ua4d+8eDx48kOIrtfsjGPaVSkWma1/HahURi16vR6vVytdQKpWk0+ld70Wp\nVKJYLMoa3ldffcXm5uZbf4cej0caK51Oh8lk4smTJ4yOjkqSW61BqqeDWK1WZVT+6v7qdDo6Ojok\n+1gQHdPp9L4QBKvVquSLPHjwALVazdDQEEtLS0xMTHDnzh0WFxcJBAIkk0mUSiXPnj1Dp9Oh0Wik\nU20wGNDr9RgMBpxOJ93d3bzzzjsYjUb8fj/xeJx4PL6rzyAMce3e77bVV7RuCQLhfpEG3+SUTExM\n8Jvf/Aa73U4ikWB2dpaFhQUZlHwfRPn166+/ZmxsDLPZzLVr1zh58iQul4u2tjaePHlyqKRepVKJ\nwWDgnXfe4d/+23+Lw+FgcnKS//W//hePHz9+iXQ8MjJCNpvl1KlTtLe3UywWefDgAZFIZN/Wd6yM\nvEi99/T00NPTg9vtJpVKEQ6HWVpaYm5ujvn5eamH/uqDolKpsFgsAOTzedLp9Hf+7H5CpVLR9f8k\nXgcGBgAIh8OkUikSiYQkn4RCITY2Nr5lIERbRldXFydPnuTEiROk02mWlpaIRCLSEXE4HABEIpGX\n1OX28jltNhttbW2cPXuWs2fP0tnZiclkku+fy+VIp9O7atHZ7/03GAw0NzfT19eH1WplbGyMyclJ\n5ufnXxs5ihqgwWDAYDBIo1soFCSBU/Sfq9VqNBoNSqXypctrLyUV8X7iNTc2Nna8t0K7QAgSXbhw\nAZvNJp+BYrEoMy77VVapTcvX/l2sT2gpiLMqMhf7RWCrVqsUCgVCoRAjIyOUSiXW1tZYWFhgYmKC\nx48fs7W19VKddHV1VZbnxHcsDLzY33Q6TWdnJ4VCQfJKdrP+2vp/PYyW+H1RetqvSP516xTPz8rK\nCul0Go1GQyKRYHV1dccOkOh0yWazBAIBFIoXCn5NTU3yPurs7CQYDBKPxw/F0Ov1ejo7OxkaGmJ4\neJgnT55w+/ZtRkZGWFlZkZ+1WCxKjQKfz0c2m5W8j/1c97Ez8na7nUuXLnHy5EmMRiObm5tsbGwQ\nDAbZ3t6WmsyvbppSqXxJfCEajRIOh6lUKt+pnf5d0Uw9omCtVsvw8DAnT56UKUKRnl9dXWV2dpZ4\nPC7Tfa/+vlqtpr29nb/8y7/k6tWrnDhxgtHRUeBFiktMfhNkotHRUcLh8J5bkhQKBT6fjz//8z/n\n4sWLnDx5EqvVikqlkrXhsbGxPe3Rfh50m82G1+tFrVYTjUaZnZ1la2uLXC732vcV0VBTUxMul0um\ndROJBB0dHTQ1NUnGu8FgkG0+oo5eqVQolUq7fohFr77I3qjV6h07DaJL40c/+hE3btzg7NmzFAoF\nisUiFouF+fl5GenVI0P1OrzuNQU/QpDfSqUSer0es9m8r5eeYMw/ffqUmZkZFAqFNBDC2Xtd9q
tY\nLFIsFslms8Tjcdn77PV6SSQSRCIRNjc3ZTZtN5+hNgMnnPLdkjfFaxgMBlpaWrBarfuyt69zaEQ2\nLBwOs729/dK/7RYKhUI+h263m/b2di5evMjjx4+/lY07KJhMJs6fP4/f7yeXy3Hr1i1+85vfEAwG\nX/qsarUah8OB3+/ngw8+YHl5mdXV1X0XqjoWRr62ba69vZ2hoSF8Ph/VapV4PE4gEJDs0VcjEjER\nraWlhb6+Pi5fvizVwJaXl1laWmJ+fv5bzsGr9UPYu1Rpd3c3Z86c4erVq7S1tbG4uMjU1BSjo6Pk\ncjkpkvA6hq5Go+HSpUtcu3aN9957j97eXvR6PXNzc4yMjLC5uUk2m30pvSwmve0lMlCr1TQ3N+P3\n+zl58iQdHR1YrVaq1SrJZJLt7W2i0SiFQkEKtwgGbaFQYH19/Vss0oPGmTNn+OSTT2htbZU110gk\n8p1rEiWhrq4uzp49i9/vx+l0MjMzw+zsrIyARRtbuVxGrVZLIyAEacR52W36u1gsyra45uZmzpw5\n81rmuYgC/X4/ly5dwul04vP5GBoawu12E4lEGBsbY25ujnA4zNTUFFtbW7KcUE9n9k0Q7V1Op5Ou\nri7cbjfVapUHDx7wxRdfEI/HX/t7e12j4FZsbW1JxTBB/nvdz7/6/yIrqNfrGRgYYGhoCK1Wi9ls\nxuv1sri4uKfum1rjvheCsojgge/9jPsNwX94G2g0GimYI1oshYDR4OAgfr8fm81GqVTC5/MByCzo\nQZIeBQHw6tWrdHV1SUcvFAq9tN+iE+nTTz/l6tWrGAwGUqkUW1tb+0YyFTg2Rl4I3/T09EiWcDab\nZWtri9XVVcmsFxACOEIFb2BggLNnz/L++++j0+mIx+NMT09jMplkCkVEdeKA1B7MvbDZhXBPf38/\nn376KYODg5RKJYLBIJOTkzx69OiNr6HVanE6nbz//vt88sknDA4OolarCYVCjI6O8vDhQ5LJpDQu\nwWBQrnsvEHvf0tIi997pdFKtVtna2iIYDBIIBKRUrdlslgJFarWadDotW74Oc/LV4OAgP/rRj0gm\nkzx79kz25b76WY1GIxaLhaamJi5dusRf/MVf0Nvbi06nQ6vVkkgkWFtbY2trS7ajiT+v4rt0Fd7m\n+xCRfLVaxeVycfbsWaLRKMFg8CUHQhA5bTYbly9f5u///u/x+XzYbDby+Tybm5s8f/6czz//nLt3\n77K8vEwmkzkUp0s4I06nk/b2djo7O3E6nZRKJe7fv8+XX375vZoK9VCbLJVKMrW7W2Kc1Wqlra2N\nd999l3PnzmG1WjGbzTgcDimlvRuIrM1eAwqNRoPBYECr1cq75nUqa/udLn4TarswxLqtVitNTU10\ndnbS0dFBS0sLbrcbt9uNx+PBbrdL46jRaJifn2dsbIxsNrujWn+90NTUhN/v5+zZs7LNNhQKfev5\nMhgM+Hw+PvroI06ePEm5XJbqig0jz4tI0mg0SiarWq2Wqflnz54xPj4ue5fFJSJqsO+++y7Xr1/H\n6XTicrlkz7yQqzQajbS2tlIul9nY2JBs6L0+ZLUQtbuBgQF6e3sZGRnh2bNnjIyMsLq6+sbfVygU\ndHZ2cv78ed555x06OjpQKBRMT0/z4MEDSbirjcjqtXbx8AldepHyi0Qi/OEPf2B8fJxEIiH7yw0G\nA729vfzVX/2VdMR6enr4+uuvuXPnzp6IaHtFLbFMpKnF96xUKmWm5MqVK/T399PW1obb7SYWizE/\nP89nn33G8+fPJc/hTSnZvWaBRGoznU5jtVrxeDyynSuTyUiD0N7ezqlTp/jJT37C8PCwFD+ZnJzk\nt7/9LVNTU4TDYYLBoFz7YZKUtFotV69e5ZNPPsHj8Uijm0qlvrNMBW83kONN2MszIqLjS5cuSRKn\nz+fDaDQyNTXF+Pg40Wh0V2nj2lT2Xr8fQQg0Go1EIhEePXr0rRny9eYA7BYqlUpmW7u7u7l48SJ9\nfX14PB4sFgsmk0m2O+bzeYxGo2y5TCQSrKyssL6+TiwWOzADL85iZ2cng4ODKJVKZmdn+e1vfyuV\nSWvR1tZGf38/KpVKtumKaYD7vd4jaeRr25BE6tfj8UgmbqlUkop2m5ubRCIRyULWarXodDpcLheD\ng4MMDg7S0tIidYNFG5ogiInUoUqleqm95vvWtNOHQaFQYDKZ6Ojo4MqVK/j9fsrlMs+ePeOrr75i\nZWXljfUYi8VCe3s7ly9f5oMPPmBwcJBqtcq9e/cYGRnh4cOHrK6u7lsqTrDG29vb6ejowGw2k0ql\nmJ2d5f79+zx9+pRCoUAqlSKXy+FyuaQSmMiMiDYYt9st+QEHPYQilUrJuqAwNBqNRhoPMf9gYGCA\nS5cu0dLSQjabZXR0lOXlZaanpxkdHWV9ff2t172XumoqlWJkZITu7m6am5sZHh5Gr9eTTCZlR4Pf\n72dwcJCuri4qlQojIyOEw2FmZmb4p3/6JxYXF4+MHKuIgC9dusT58+cxGo2sra3x5MkT2e98EMZm\nt+9htVo5ceIE165d48MPP8RsNqNWq8nn84TDYRYWFvZUmqrXM9HR0cG7776L2WwmEokwNzcndUNE\nZlHce9VqVao6Hoah9/l8XLhwQZLozp8/T3t7O2azWT6narWaZDLJ5uamFAMTXScrKysyU7Ffsr2v\ng2ipjEajLC4ufkuFT9wvgvgoulmWl5dlULnva9z3d3hL1KY4hbfmdrsZHBzE7XajUqmkoU4kEnK0\nqaiR2e126cW+9957ADx48ED2mQtyjyD4iNdJp9NvZPWKyG+nUCqVUpb0b/7mbyiXy0xNTTE5OcnS\n0tKO0jQtLS381V/9FdevX+f8+fPo9XoePnzIf/yP/5GpqSmi0ei+Dq4RQ0SGhoY4efIkJpOJubk5\nHj58yNTUFCsrKy+ljrPZLI8fP2Z0dFROpnM4HKjVajo7O9FoNAQCgQO/VLa2tpibm5OT8fR6PXq9\nnlKphFqtxm634/V6aW1txWQysbKywv379/n1r38tZ5uL6Hmn2I1j+CrC4TC//OUvee+99/jZz37G\nxx9/zI0bN2Qr2MzMDG63G5vNxoMHD5ifn2d0dFR2aohzfVTQ1tYmz3JXVxfpdJrHjx/zX//rf2Vu\nbu6N+/SmaH4/U8+CfPrXf/3XXL16Fa/Xi0KhIBaLyc6eQCBwKNKxr65zcHCQTz75BKvVyvz8vORf\nCL0Is9mMy+VCqVRKyebDWvfp06f5D//hP2Cz2eTdn8/nCQQCGI1GTCYTZrOZfD4vHVshLxyLxdjc\n3GR7e1sSHg8CYp/EvWKz2eSE09rATThUYgLn4uIi29vb3Lx5k/n5+QNZ65Ez8gK1xl6QY4T+vKjH\niDYcnU6Hw+HA5XLR2dlJT08PNpuN1dVVyb6PRCLkcjl0Oh1dXV0MDAxgNBpl2v5NE452Q6RRKpWy\nlu31ekkmk6hUKinC8X2vJaJLj8fDO++8Q09PDwCff
fYZ//zP/8zExAShUGjfL3DRC+xyuTCZTBQK\nBebn57l37x5bW1svZRCUSiWpVIp0Ok2hUJBSw7UTAoWso/C8D8rQT0xMoNFouHDhAsVikZMnT5LL\n5cjlcmxtbclI7E9/+hPz8/Mkk0mWlpZYW1t7bQr5TRCOz14+nyAujo6OolKpOH/+vBQE0Wg0bG9v\ns7W1xdTUlLzwRGbnKBl3eHGma2cyFAoFZmZmmJyclBHwm/AmJ2s/DbzZbKalpYWBgQHcbre8MwKB\nAF999RXT09O7Piv1glarxWKx4PV6cbvd5HI5IpEIiUSCQqEgI3mtVovVaj3Usa4ajYbm5mY6Oztx\nu91YLJaXRg2L9rJCocDGxgaTk5N8+eWX8m5pbm6W3T3V6gsFwnqXWt+EYDDI0tISp0+fljMRZmZm\nCAQCcg1KpZJcLsfGxgZff/01iUSCxcXFt55Et1sceSMPL5ihQsy/UqlIj0+MjDSbzVitVrq7uxke\nHubUqVPkcjn+23/7b0xNTbG9vS17xbVaLQ6HA6PRiMPhkFmA7zsYtRHZ29QFBcGoubkZnU5HsViU\nA2nsdvtLAjevpimFqp/P55PEr6WlJX75y1/y+9///kBIbEKXwG63y7G3uVyOlZUVxsbGSKVSLxkx\n0dNa23IXVwHzAAAgAElEQVSUzWYl49XhcEilp9qe6IPAzMwM4XAYhUJBd3c358+fR6FQkE6nefTo\nEQsLC6yvr7O8vPwt+dXDRLlcJplMMjk5yfr6Okajke7ubhwOhyyPjI+P8+TJk0Ovt38fhGERLU9W\nq5Xt7W3Gx8eZmZkhGo2+8SzUm2/yNlAqlTgcDrxeL06nE6VSSTweJ5vNsrCwIJ3DnaytNmCo92cx\nGo10dHTgdrsxGAyEw2Ep2VytViW5TafTYTAYZNRZL77D20AYapPJRDgclvLG4hwYDAaKxaLshLp5\n8ya/+MUvUCgUctyyz+eT97nNZnvtffo67NUJj0QisoTX3t7O9evXpUqm6LYRbbSRSIT79++Tz+dl\ngHMQOHJGXmy4+LIEC7Farcp+cJfLhc/nw2Kx4Pf7ZdtYf38/Fy5cwGq1ShELYUzEpS3U4CwWCw6H\nQx6GWlbrd61pNw9lpVJha2uL9fV10uk0KpWK5uZmfvKTn9Db2ysNj0irCqMn2nEGBwfp6+sjFotx\n7949vvjiC6mYdBBtTiK1p9frZe0JXog/iPYVkY4ql8vyUH/X2oRK4enTp3G5XLIr4m3Ws5fPXK1W\nSafTPHnyBLPZzL/4F/8Cg8EgHUeRshTnrl77W6/XEdK2YrZ9Pp/H5XJx9epVGc2LssJRhJgbL0ZB\nKxQKOVZ0dXX1jaWyw3RcRE9/V1cXdrudhw8fYrfbUavVLC0tMTU1JbNqOzmnwtAWCoW6p5e9Xi8f\nf/wxfr9fcmVKpRIulwuFQkGpVJIMdmFkxIyGfD5/oE5ioVBgeXmZ3/72t0xPT+P1eiV/yev1AjA7\nO8vz58959OgRExMTUu9DTO0UI7mFvsD4+DihUEhOa9zvzE+lUpFRutfr5dq1a/j9fjY2NlhfXycY\nDBIKhYhGo0SjUba3tw98GuCRM/LwDVFJpNHFpszPz2OxWHj//ffRaDSy1cnj8aBQvJjo5vF4pI69\n2NBaj0kwqXU6nXQCxIHZyZre1siHQiHW1tYIh8NyFrhQrOvo6CAUCkkZTHEIjEYjLpdL9qMvLS1x\n9+5dPv/8832T/HwdRPpPDPkRfeFC2lOUSsQQl1ch2PlNTU1SoU/IhL7tQd+LoRf90UJ4SEw/KxQK\nBAIBlpaW9i26qgfEwBlRkywUCjgcDsl50Ov1Lw1QOWqwWq28++679Pf3S7nfYDDI3Nwcm5ubR04G\nuRa1RLVUKsW9e/ekzO3i4qKURd5JZCY6esxmM7FY7LXtl7uF1Wrl1KlTUt9dlNPEOGuRfRPnSafT\nYTabpbjPQZYbBBlaSN2K1lutVktHRwfFYlF2IQlCW22WsFAoEI/HsVgstLW1yYg/Ho+j1WplO91+\nolKpkE6nmZmZkZnK5uZmGYgGg0FZQltaWpJBxUGK9hxJIy8gNkG0tW1vb780RKS2Za5afaGRnUwm\nuXPnDrdv32ZhYeGlaVuiXURchLVR9k7Ss2/7pVQqFTn2dnV1FZfLRXt7O5VKBZ/PR19fn/xssViM\n9fV1pqamZDdBLBYjFovx5MkT5ubmpKLTQUAYO71eL41JpVJhc3NT9peLQTX9/f3ysn51j0SHQVdX\nFzdu3KBcLksxmd1Kf+7F0IszEggEsFqtmEwm3G43TqdTGsmjCsGCTiQS8ryWy2XS6bRkdAuBl6ME\nhUJBU1MTP/7xjxkeHpa9+wsLCywtLRGNRg97ia+FKOcplUrpkAcCAWkMReQrslnfJ3wk7iqhviiE\nk+rpmAnOkslkkjK2arUaq9Uq9evz+TwajYZyuSxLnaKUdRgQmTSRDTEYDNIJEgNtxHwFAfEsp9Np\nwuGwHBAjyNpGo1Gqme732sXwKOFkdHd3y4l5IhA1mUxUq9WX7s+DwpE28gLC4IjUeyaTkaxoj8dD\nJBJhcXGR58+fMz8/z6NHj+Tc7++68MShcrlcVKtVrFYrGo1mX9adzWZZW1vj1q1bZLNZkskkq6ur\nhMNhqUJVrb4QO7FarZw8eRKz2YxOp2Nubo7x8XHC4TCrq6uHcnmLBz+dTsuWxWq1Snt7uyTTeTwe\n0uk04+PjLz2IQqvg448/5qOPPqK7u5vbt2/zxRdfyLnRO12DKBW8+qC/zecwGAz09/fT1NTE1NQU\n8GJ4i+BKtLe3s7GxQSwW2/HrHhRaW1sZGhqSBMzV1VUmJyelaqLRaOTixYtsb2+zsLAgU/q7wW4V\n+r4LSqWSixcv8tFHH+H3+9FqtYTDYR48eMCf/vQn2f56GPguMq3BYJCT22w2m5wOKRyqWCxGJpN5\nqWtBRPWiJU2U3cTrCtJeV1cXV69exefzodPp+Kd/+ieePXsmnbZ6fB4xRlepVEq5aa1WK6NK8W/i\nDrTZbHR3d0udiLm5uUNRxhP3u4jqNRoN6XSaRCJBKpX6zoCgUqmwsbHBF198QUdHBzqdTk4LtNvt\nzM/PS/2U/YyaxQCgUqnE9vY2MzMzUmG1s7OT9vZ2WlpaiMViKJXKl9QwdwtxdncSKB0LIy8gvDeR\ngq/VFX/y5Amzs7MEAgE58OV1EL3wra2tOJ1O7ty5s2+DAkqlEoFAgJs3b8qUthCwqZ07ffHiRd57\n7z0+/PBDLBYLqVRKjkIVxJnDgNA3z2QyKJVKYrEYGo1G9rO63W4KhQJPnz59KRIQpMOBgQH+7u/+\njosXL5LL5ZiamuLzzz//3u/nVYh2SvFg7CbVpdFocDgcnDlzBofDwfj4ONlslo6ODgDZI5/P5+tm\n5Ot1nhQKBV1dXfz85z+Xl8T09DTPnj3j1q1b2Gw2Ojo6uHbtmpwAuLq6uqcopl5rV6lUfPDBB/z8\n5z+npaWF
VCrF5uYmd+/e5U9/+tNrJWz3G7WCSGLSoF6vp7m5GZ/PR2dnJ21tbbI2nE6nmZ2dpVAo\nSH2FcrmMUqmUBtJoNKLT6STRtLYV2O12c+3aNf7dv/t3uN1u2Ve9uLj4UjCyVwdLdLSINHKxWESn\n0+Hz+ZiZmZE8AK1Wi1arpampib6+PjkFc3l5+dD65UVmJBAIyOf9TWOHNzY2+Pzzz7ly5QonTpzA\n4/HQ3NwsiZ1ikud+fp5KpSJnVoghOhaLhf7+fq5fv05vb6/UCak18nuBCHx+cEZe1MbEQ1YoFJic\nnOTu3bvMzMywvr4umfTfB9EnajQaZbp8J+1zu7n4hGOyvb3NyMgIi4uLRCIRWcMTB3BychKNRiMP\naaFQkFHDfs7Xfh3E5+3p6eHDDz+U3r5QYIvFYuh0OkKhEL/73e948uSJ/F2j0Yjb7ebDDz/kxz/+\nMe3t7UxOTvKP//iP3Lx58637WcWBriXFve330NvbK/kAwWCQ6elpZmdncTqdnDt3jtbWVv78z/+c\nXC7H4uJiXbz/ehlJk8lES0sLfr+fxcVF5ubmePr0KUtLS/KMCJ7D+fPnGR4e5r//9//OrVu3duUQ\n1aO/H75RqvR6vXI4kKjFR6PR1w6TOiiIcpNw+H/84x9z4sQJOjo6JA9FDAUqFot0dnbKcamCpFbb\nx3316lWuXbsm55t7PB5JWhXtd0JnXbDGxRhSgb3uRbFYJB6Py8mFgmQnRueKkpTP5+P69esMDAzg\ncDhkF9Jh9viL51x0FjmdTjY3N9na2nrt75RKJVKpFDMzM5RKJfx+P0qlkmAwKAPBg747q9Wq7ELa\n2NiQ3Aen04nT6cRsNkthrr28x07LncfKyIvLR5BG4vG4TGkLQtWbpq3VtoVVKhWpHFbrGLxam6pt\nndtN3Uo4Eul0+rWMcqEDXyqVyGQyxGIx4vH4S8NODhoizehyuTAajajVanw+H1tbWyQSCYrFIhsb\nG5IUIy67trY2hoeHuXHjBhcuXCAYDHLnzh3+5//8n7tW56slPr7NXoiL/OTJk5w/f55gMMjq6ipr\na2uk02k5Rra5uZmLFy9y584djEbjvo1ffVuYTCbOnj0r+3BHR0cZHx+X8rqA7IXe3Nykt7eXS5cu\ncevWrTe2hu43nE4nvb29tLe3Sx39UCgkh+MchT0WRLiOjg6uX7/O8PAwLS0t0sER42NFp0mhUJBn\nOJ/PS2Eto9FIb28vH330EfF4HI1Gg8/nk4RIMcAoGo2ysrLCxMQEKysrdZ8hIDKFXq9XRuciZd/W\n1sa5c+fQaDT09vZy7do1bDYb2WyW7e1tQqHQkfg+xPPo9/upVl8MIXtd94Vw/EX0f+rUKQwGA6FQ\n6BBW/w3Edy0ccDETQ7TX1QM7PTfHysiLFrhEIiFr2mtra6yvr+8opS1SZ2KIitAkF+MhBZHv1Rrw\nbuvAr679TbBYLAwODpLL5eRAl4Oedy8g9iASiTA7OytTjna7Hb/fj0ajYWtri8XFRel5a7VahoaG\nuHLlCp988glNTU2k02n+x//4H/zhD39gZWVlVylkMbJ1NwbLZrPR19cnZVT/+Mc/SuUs4UCNjo7S\n1NQkL73m5maCweCRkIJ1u938/d//PWfPnkWlUjEzM8PDhw9fcmar1RfKX7dv38ZisciLXJDBdgNh\n5PYSzff19fEv/+W/pK+vTwqCLC4ucuvWLZaXlw+1p19EW+Vymba2Njmj3GAwyPYrkeETNXcxJvTa\ntWuUy2Xi8Tg2mw2j0SizjFqtlq6uLtl5IvavXC6zubnJgwcP+Od//me++OILSSKu3YO9pOur1Srz\n8/P85//8nxkYGKC/vx+dTofdbsfn8/HBBx/w8ccfS50Ou90ua8iBQIBgMHioRl4QR0X30ZkzZ8hm\ns3IU8OvuDsF9SiQSZDIZzGaz1CY5zMxEpVJBr9fj8XikBK8QCztIHCsjL2ofa2trsg0uEAhINac3\npdx1Oh1erxelUsn8/DwzMzOMjY1J0kTtz9YOL4H97dOtrQ+Wy2U5Zz6TyRyaKItIB83NzXHz5k10\nOh0mkwm73Y7NZqOlpYVQKITZbOby5ctYLBZcLhddXV34/X46OjoYHx/n9u3b3Lt3j+Xl5e+d3b6T\ntbwtFIoX44lPnDiBxWKRmZ/l5WVJPhLDUURroFarlQMwDhuCSGWz2YjH4zx79ozp6emXOkYECoUC\nm5ubhMNhKb4hIs+3RT2cWngxoev06dM4nU5ZIw6Hw6yvr7+WFHuQEOQzMZHyV7/6FW63G6PRKCNg\noQWh0WikUEs8HqdUKqHX62VWUAy/KhaLLCwsyNawaDRKLBaT3UELCwuMjY2xurr6nXu712c9m82+\nJPmq1+tlNkUQwATRuFKpsLq6yvLyspTHPuz2S+E8bW5uSkfQZDK9kcNTLpcpFAqSO+Tz+TCZTIem\n5icg5Nbhm2ClHi3QwgHfCY6VkRcGUBgMrVYrtYKFmM13pdpFus1isdDd3Y1CoeDJkyfcuXOHmZkZ\nmQp6NS3/Xenh/XoIBPEvkUgQCoUkE/YwI51yuczMzAyhUIiu/zf3WxDxrFYrWq2WlpYWrly5IoVO\nTCYT8EKl8ObNm/zDP/wD4XCYbDa75/W8DYTjJJQQi8Uik5OTjI6OsrCw8K3XE46EuNCPSiudiFKm\npqb41a9+9Vp9dyEHmkqlpDiUVqvddTaiHml+UboRRjOZTEpNiMOabf4qKpUK4XBYTmoT6XeRjheO\nqUqlkuqVHo+HZDIpVeOampro7++Xw5nE4KalpSUWFhakiuJBPsubm5sEg0GZzbl06RJ6vR6Xy4VO\np5PnKhgMMjMzQzwe31PWB+pzN5ZKJSlKJcSFatsB37SGXC6HSqWipaVFtq0dpuMiHHXB4diJQM9O\n8YNM18OLL3F1dRWDwYDf78fr9RKNRmVrl8FgeInxKggcpVJJpmRzuRzz8/My+v+uaP0gowy1Wk1b\nW5scISumWb0NA/1N2G3atVp9oRT32WefMTExgc/n48SJE3R2dsoUfnt7Oy6XC7VaTSaTYWZmht//\n/vfcunWL7e3tQ7nQReZGqA5OTU2xvLwsZ7ELGI1GTp8+zYkTJ1CpVCSTSUKhUF3WvFfimsjuCP5J\nOBx+bUlKnPd0Os309DRarZbOzk7y+fyuWtT2unaFQvES21ilUhGJRA5svObbQjh5Qjf91YmUlUpF\njjKNxWJy5vnAwACtra2o1WpWVlbY3Nzk5s2bTExMkEwmpbbFYWXjyuUyi4uL/OIXv2B0dJShoSE6\nOztxOp1oNBomJiYYGxvbU5fDq1nPekC00zkcDhlQfN+ZFEPJenp6cLlcUsa8ng7I20JwMdRqNQsL\nC0xNTdVV62Snz+ixM/KFQoFwOExrays6nY7W1laUSqV80KxWq9xcoVPf1NRENpvFYrEwPDzM1NQU\nwWDwO0eeik2rFdyp/W+9oVarMRgM+Hw+mpub5TjFtbW1I1EThhczn
MfGxlhYWKC5uZlAIMDg4CDl\nchmHw4HFYiGfz7O1tcX29jaPHj3iH//xHyW57bAgtOlFNLW0tPRSXU/wMwYHB+nu7pZnoV7EmL1C\nMI0rlQoajQan0ynryOISE2k7IRssZs0bjUaamppeUvI7aESjUSYmJmTJQahLvknC9rAgOmFe5xSJ\nTpd8Pk9bWxsej4cTJ07Q1NTEysoKi4uLhEIhRkdHWVlZORKfUahuhsNhNjY2WFhYoLe3V6btx8fH\nWV1d3XWmbS+E5NehVl+gpaVFzrpQKpVsb2+/lHGz2Wx4PB7a29vp7e1lcHBQEu++T6r8oFAul8lm\ns0QiEUKh0KGURI6dkReM13w+T6FQoKuri+HhYSkXa7FYJOkIvvGmasduBgIBxsbGiEajr62LHcQX\nIZikLpcLt9uNRqNhYWFBttnVU762Hu1guVxOKjaNjIzIVkQxuMNqtTI3Nyen/x2mjrpIR66vrxOL\nxaTEpbh4RaRvt9vp6emRgj7i74L0uNc17AXCyOt0Opqbm/npT3/K06dPmZyclON6BVlJaACcOXOG\n7u5ugsGgrCO/rQpevdoHV1dX+fLLL7Hb7Zw6dQqr1SqVyA679rtbqFQq2RbY39+P3+9HrVYzMTHB\n9PQ009PT0hAdJVSrL9TWYrEYU1NTWK1WnE6nvGf2Qijej89qNBrx+XxcvXqV4eFhWltbuXPnDl99\n9ZXkzthsNt59911+/vOf4/V6ZcARDoelBO5hG/hEIsHGxgaFQkGKJdXL8fhBputF+imVSrG+vs74\n+Djnzp2jvb2dvr4+OY1IRGJC9z6VShEIBCSTfnR0VKaRD+sQCNGK1tZWenp6ZO/k8vIyCwsLhz6y\n8rsgIt18Pk80Gn1J6WtlZQWDwSBHyB72TG1xVnK5nEy9C8ETwaC32+1yNLHVapUiJrslCNYbwqEN\nBoO4XC6uXLlCa2srw8PDkrMBL+qYJpOJc+fOyXkHoj/4MC86MWc9EAjQ2dkp06c7FfE4alAoFFI2\ntqOjg8HBQYxGI+FwmJWVFTlP/KiN+BUoFApyaE0ikXhJae8onPdaZLNZOSce4Ny5c7S0tDA8PEyl\nUkGtVmMymejr6+PixYtS82RlZYXx8XEePnwox70eJnE5nU7LdQjC40FnF46VkYdvxA+EEEh3d/dL\nw1JEP2qpVJIiNOvr69y8eZN79+6xurpKLBY7FIGZWogUZnd3N2fOnJGf6f79+8Tj8SP30AnUrktc\nGkLy9ihCRI0qlQqtVovT6aSnp4fTp0/j8Xjw+Xy0t7dL7kE4HCYYDJLP5w891SdqxIuLi/T29nL5\n8mWp/S4cKUHWLJfLeL1eWaIolUqHqrEASGJXKBSS50Nc0IfNrN8NhJPY1NRER0cHPT09ZDIZVldX\nmZ+fJxgMHknn/FWIwVypVEoGREeFaCogCJpCFvb69ev86Ec/km2JtRBZu0AgwNOnT7l16xZff/31\nod+j1eqLWRMbGxt4PB7sdrvskz9IJ/fYGXnhIeVyOaLRKDdv3mRtbY3Ozk6am5ux2Wysr6/L+fOp\nVIpUKsXa2hpbW1uk0+lDN/DwQiO7vb1dstJHR0flTOXjGOUcZVSrLwbttLW18dOf/pSTJ0/S1NRE\nKpVie3ubX//61wQCATY3N1lZWSESiRyJy1rMkp+dnZWCRIlEQipo5fN52aZVLBZxOByyNer+/ftM\nT09/a7DHQaJUKsl5DclkUq5NlBCOE1QqFR0dHXKe/OzsLCsrK5JDs76+LrthDvvc7ATiDhXp473c\nOfv5ecfGxtBoNHR1ddHc3IxarX5pkFQkEmFhYYGHDx8yMjLC/Pw8a2trR6aDIxwOMzo6yvDwsCSg\nqlSqA50keuyMPHxDkCmVSoyNjbG8vIzH48HlcmG321laWiIQCBCLxSTJ56g9eGK+dnd3N263W6qW\nHZZu9A8doqVMDAISQyXm5ua4c+cOi4uLMjo+Kk5WbSSv0WhkmSSfz2O328nn80QiEakTYTQaZaQw\nNjZGIBA41M8iCIJiuqJGo5HdDcftjCuVSgwGg3SuhFMoRJUEG/+4fC4x7lSQ047KmX8Va2trlMtl\n7t+/T6lUkqUo4SRubm4yMTHBH//4Rx4/fnzgo7jfBDFwp6mpSa5dq9UeaBBxmDmaunxCQU7SarWy\nXUGQ8uox7efV96rH6ymVSgYHB/nX//pf4/f7MRgM/Jf/8l+4desWsVjsSB3Sw0a9WmBquxhq5T6F\nUtZhp7ZfB+GcGAwGWXcU5Qfh7ArjUjtKuR7Ewb1CRC4ejwer1SrVKsUQnaO216+DaMk1mUxotVoA\n2UNfOxv8uHyeV1GPdknYXw0Rr9eL2WyWhGrxvkIARwhaHbWAzm634/V6eeedd3A6nTx48IDl5WXC\n4XC91vpGG34sI/laiIdsv8ku9axZCc85kUhIUoYQjDmqHvVxhxisI4bPvE4f4ahBpFVFeeo4Qexz\nOByWw09qjeJxgSALZrNZMpmMXP8P5Vk96t9FoVBgZWUFqN90xINCLpeT0sFWq1Uqax4kjr2RP0jU\ny2NVKpWk02mmpqbY2NigUqmwubl5JCPJHxIE4aiB70c9I7Nqtbqnkbe7RT3H/IpsYW3WZD9w3AzY\nYeC47Y8oqQlOiiCGN9j1RxT1GsFZqVSIx+OMj4+j0WiklON+pumP4wVSOyjosImSu8Fhqm3tBbVZ\nq+O29lcJffVyVEQWYj8NvGBdH6c9FyUZ4FiVYOCb6aL7WWoRGR/BDasXR+xt7paGkT8EiHat5eVl\nmdI87nW9/UK9HKuDRu2QIzh+xvK47Tfs31l5Vf1yP/Cqctxx2/sGXo/aO/7Vf98LdnrOjz3x7iBR\njwin9jVqH+j9fqiP46UN30Rmx80BEmQt+Gbtr1NXrAfq+f0eRISzX/i+Z3S3e3QQhrd2z/f7vV59\n372+33F2TPbzXqwdeCaepXqWkf7f6/7wiXcHCfHl7JWEd1yju8PAcd2j2od6vx2s/RAyOa77vh84\nqL04zHthL2f0OJ+V/TbwIqNXb/VJcbfsaC11ecfd4fiejAYa2AFqxxzXDpOp98VynCOpBo4GGmeo\nfhB7KbJ5tSqs+/F2b/qBRiTfQAP7iFcvzcYl2sBRwqtlpeNIcD0qEMPQHA4HDocDu90u1VYrlcqh\n6Z80jPwh4LtqNQ38MHFQKdjjyrlo4HAg7iCj0YherweQevY/lP7/g4RCocBiseDxeGhra8Pr9WKz\n2VheXiYQCBzqbIBDM/KHdSnV43338hq1+sVCw7jeynyve9+GEfh+1Lt17G3qZnvBce1A2A809mBn\nECOM29raaG1tBV7orE9NTR2JuQ3HCeJO9/v9/OxnP6OjowODwcDS0hJra2skk8lDnUx4qJH8QT2Q\ntQQIoTZ3GGkpIaphMpkwm82YzWY5JEUMzdmP/dipoandn//fHnKVSiXHQAJ1HWK033u5X7X+Bn54\nUCgUqNVqjEYjDoeDvr4+
/H4/xWKRhYUFFhYW3lqs5f/3s6dSqTAajbS3t3PhwgU8Hg+ZTIYHDx7I\nWfKHuT8/+HS9MPBiepGInvdiyHb7e0qlEp1Oh9PppKWlhdbWVnkI9lvucCfEmlqSyNus5YfwkKvV\nasxmsxQkOU7yq42Szzc4rH04DsQ1EXHq9XqcTicdHR0MDg5y6tQpotEoqVQKnU63K3ntH8IdsBsI\n2yI06js7OzEajcRiMR48eMD4+Pih78uhp+v343AYDAaamprw+/3Y7XZ5cIvFIiqVirW1NWZnZ8nn\n89/yWmtbH16nT73bNavVakwmE5cvX+bs2bN4vV62traYmpriT3/6E3Nzc/sSzde2cXwfBDnkTQ+4\neD2RkbBYLCQSCTn1r/Z99tpvfVDqawaDAbfbjcFgoFwus7i4uGcJ3EYa/dsQz5YYC63VaonH44RC\nobd2Ll/3+gIH2WsuBgbt1jHcT20C8dp+v5++vj7K5TJarRaLxSKHvni9Xrq6umhtbaVcLu/LnASF\nQoFWq8VkMmEymdDpdHKQUqFQkIGYuKvFXogJgEIW1uFwYDQaCYfDpFIpstmsLHseNMTe/fznP+fa\ntWsYDAYePXrE7du3WVtbOxIy2odm5FUqlYyW6gGFQoFer8dkMuHxeOjt7eXKlSt4PB6MRiPb29tk\nMhkUCgXPnj0jHo+TyWTIZDLkcjkUCoU8RICcUlbPL0mpVKLX6zl9+jQ3btygqamJ7e1t2tvbWV1d\nZW1tjUwms28P+ZuMzZu+D/EaBoMBq9WKz+eTIxQXFhZIp9OUy2V5UdcSDOFoRjnie7fb7XR1dWEy\nmeQo0Xg8XpfX349e+VpN9VrFxHq9NtRHkas2k6bT6TCbzVitVrq7u/F4POj1eoLBoBz1m8lkKJfL\nZDIZUqnUrt6vHmt/G6hUKgwGA6VSSer079bQ78e6lUqlnMl+8eJFotGoNIzV6otJhhaLhaamJnw+\nH9FolFgsVvfzqlKpcDgc9PT04Ha7MZvNxONxUqkUmUwGnU6HWq2W45QFVCoVFosFvV6PWq2mtbUV\nq9XK8vIyW1tbxGIxgsEg29vbB37H6HQ6vF4vN27c4OTJkwCMjo7yhz/8gVAodCS6FQ7NyGu1Wkql\nkiQk7FVxSalU0t3dzbvvvsvw8DB+v5/m5mZ0Op0cCJPJZGS0IAxSPB5nfn4enU4nHYJiscjGxgah\nUFUK4QQAACAASURBVOg7D85u1yqMqMlkwuFwYLFYUCqV5HI5fD4fTqeTfD5fd3brXiIMAfGQarVa\nBgYGOHfuHBcvXsRmsxEMBkmn04yNjb00ZKe2X3S3a9hv8pparcblcuH3+7l8+TJKpZJQKMTz58/r\n8vqvOjr1ek1RV7XZbPKi/D5lvYOGOC/CuDudTnp7e7lw4QKXL1/GarViMBjkdLdkMsnCwgIbGxvE\n43GePXvG3bt3gZ0/b7U/d5BRvF6vl3XY7e3tXWck9mvNGo0Gh8OBVqsll8uxubnJ1tYWqVSKYrGI\n0Wgkl8uRTCZxOp1YLBbpPO5kTTv5GbFPg4OD/O3f/i0dHR04nU7ZtlcsFuU9kUgkqFQqaDQaeYbU\najWVSoV8Po/ZbEar1RKLxYhEIoRCIf73//7f3Lp1a0eZyHrCYrHgcrnQ6XTk83mi0ai0HUchiocj\nUJPf68E2Go0YDAZ0Oh1DQ0PcuHEDv9+Px+NBqVSSz+fJZDLo9Xr0ej1KpVIOCSiXy0QiEbRaLQqF\nAofDgUKhIJVKvUTCqhfK5TLZbJa5uTnGxsbo6uoim82ysbFBsVhEo9F8a8BGPVCvy0Oj0WC1Whka\nGuKjjz6is7OTWCzG+vq6vNz266LdjwhNrVZjsVgYGBjg7NmzDA0NUa1WMZlMuFwuNjc3yeVyu47K\nanuQ69XV4fP5aG5uxmAw4HA4cLvdxGIxYrEYqVSKcDjM5ubmnlLfwjHZS+pZpVLh8/no6+vD4XDQ\n3NxMW1sbp0+f5p133mF7e5t0Oi2fO5/Ph8fjIZFIkMlk6Orqwuv1sri4yMbGBtFo9MCnd+0Eoh7r\n9/uJRCLk83nZhva25LX9LOtUKhW2traYnJwkEAjIzKbT6SQej9Pc3IzP56NUKhEIBJibm6s7J6VS\nqZDJZAgGg/T09NDe3o5Wq6VQKBCNRolEIsRiMXK5nPx5i8UiS2jVahWNRkMkEqFUKtHU1ERPTw/d\n3d1MTU0xNjZGNBqVv38QEF0KOp2OYrHI5uamnDp3FKJ4OEQjX4+xjeKCcLvdWCwWzpw5w9WrV9Fq\ntdLAx2IxNjc3MZlMMiXV3d2N2Wwml8sRiUQwmUwkEgn584lEgnQ6XfdLpVQqkUgk+Pzzz9na2uKD\nDz4gk8nIVgvh+R3FGq5CoUCn0+FyuRgeHubq1ausrq7y+PFjfvGLX7C1tfWdAxjqJQW8H3ui1Wpp\nbm7m8uXLvPvuu3R0dFCpVFCr1fh8PtbW1sjn87t6X0FihJflkPfCTVAqlZw+fZpLly5ht9txu920\ntraSyWSIRqOsrKzIemAikdj1JaNSqV5a69uuWZRATp48yd/+7d/S3t6O3W6nWCxiMBjI5XI8fvyY\n+fl5jEYjLS0tdHZ20tHRQX9/PxqNhvfee4+//uu/5pe//CW///3vefbs2VuNYj6IZ0g8E263m7Nn\nz7K0tEQkEqFYLL51hrK2rl/vdReLRTn1cnp6+qXvNZFIkMvlZPnE5/MxNzeHWq3+zqEqu0WlUiGX\ny/H8+XOWlpZIp9PYbDb0ej2RSISpqSkeP37M5OSkzN4plUra2tpobm4mm82iUqkwm81MTEywvb3N\np59+yoULF/D7/bS3t9PZ2Uk+n9/1M7vbz1Uul9FoNBSLRYLBIIlE4kg5pIdm5Pc6O108FAaDAafT\nSXt7O01NTRQKBRYXF1ldXWVpaYlQKEQikaCzs5Oenh76+/sxGo1YrVZZcy8UCmxubrKyskIymSSR\nSBCPx8lms3X8xN+MrBSRu6jZJJNJ2Uu5HwejXoIsLS0tfPrpp/T19RGNRvnDH/7Al19+KS+2naxh\nN3gdAXK3MBgM2Gw2Wlpa6OjoQKVSMTs7y7Nnz3A4HMCLrIXNZpMR5Nu8f23LZi1Lfy8G3uv10t/f\nz/vvv8/Q0BDLy8ssLCwwMzNDPB6nVCrR3NyM1+tlcHCQtbU1tra2ZO11t9jNmgWDu7m5md7eXqrV\nKqurq4yNjRGJRMhmsywuLrK9vY1Go8FiseBwOPB4PJLxnUqlWFxcZGRkhK2tLarVqjSCO1nbQRh4\nlUqFzWbDbDaTzWZRKpU4nU4ymYxswaz97t+0pv2M4mtlVYUDKurJ/f39OBwOdDodBoPhJZ5HvdeR\ny+UolUp88cUXrK+vo1aryWazhMNhAoEA4XBY/rxCoWBzc1OWUJVKJVqtlnA4TC6X48svv6RYLNLU\n1ES1WpWZ2oNEsViUJdZEIsHs7Gzd+Qx7xaEZ+b0cIEEk0ev1uFwufD4fvb29WCwWW
U8dGRnh8ePH\nbG9vUyqVGB4eluQOn8+HxWIhmUwSCv3f9s7sqfHryuNfgfZdSEISSAiEoIHecS+me9qxE9uxnXJN\nTaamapbKc/6uvCVP4/HUZBIvadtpuqHZN7FJAiGEFrQvaANpHlznWuBeQBICpu6nKjVVnka6ku7v\nLud8z/fsIxKJYHt7Gy6X65WK+5M0ckuoVCosvOp2u1nOSaVSQSgUQiaTQSgU4vDwsKmCqmYcqEhg\notfrEQgE8MMPP2BycrIpB7a3jbdZD017ezvUajV6e3vhcDhgsVhwcHCAra0tbGxswG63w2QyQSgU\nwmQysVBiNps9deSpmaI4Krvs7e3FBx98gOvXr0OpVLLQK4WyJRIJxsbGIJFI0N3dzf4uHo8jk8kg\nn8+f+qBB468X+o4NBgM6OzuxtrbGxEgej4eFW2kNoPmlVCpht9vx5MkTpFIpLC8vs0N6pVJhOXwS\n7dIBplWljrWHN7FYzNYflUqFUqkEhUKBvr4+9v9Pp9MolUpM+X1ynLWpHErtnEeIl/Lete8rlUph\nMBjQ39+P4eFhaDQaJnikjfg8vtNKpYJSqYTp6WlMT0+/NQ0XDodf+d/b29uxtLSEzs5OfPjhh0y5\n32pnuVpVP7UPz2QyLR3D27jwnPxZocVLp9PBbDYzoZ3RaMTBwQFmZmawubnJ1NHFYpEpNv1+Pzo7\nO1EqlSCTybC8vIzV1VW43W6Ew2FWH/qmyU0PYzP6v9f+PUUk1Go1crkcYrEYy1tedL02iWb0ej0c\nDgei0ShWV1eRSCSashjQwk0P6MmNsZkbvEKhgNVqxejoKAwGAwBgdnaWRX0or93T04MHDx7go48+\nwvPnzzE9PY1UKvVWMQ1tBCRSPDo6augmLZFI0NPTg7t37+KDDz7A0tIS5ubmMD8/j3A4zMSkbW1t\nyGazUKlUUCgUGBkZwdjYGPb397G6uoqZmZm3psho7CR4qtelSyqVwm63w2g04ujoCC9fvsTXX3+N\nra0tpNPpn80Z2oRyuRy2t7dZVIs2yaOjIwiFQtjtdly/fh0qlQq5XA5LS0uIRCLIZDKvFFw1s9qA\nflOqECCfi1pNgdlshlarZSW6MzMz2N/fZ1UDxWKRzQXalCilU2tEdZ7CMfos3d3d+PjjjzE2Nobe\n3l6o1WpsbW1hcnISPp/v3Iy5TlLve4jFYnR3d8PpdMLhcGBzc5OJrFuJUCiEWCxm710sFi9NLp64\nUps8LUBqtZqFJjs7O9HR0cFUxj6fD4FAANFoFKVSiYUDjUYj9Ho92tvbkc1mmWDM7/cjEAiwheI0\nY3ibSKZ2or3tIKBQKGA0GnHnzh309/ejo6MD6XQa4XAYsVgMwWAQOzs7LH1wEZu9QCBgSm61Wg2P\nx4PV1VWkUqm6x0Mbrl6vh8FggEajweHhIVKpFEKhENLpNPu8zcqvikQi2Gw29Pf3o6enB8VikaVp\n/H4/CoUCyuUyhEIh7t27h5GRERgMBni93jMJMSm9cLLSoJ7PIBKJYDQamW5keXkZz58/x+7uLjsA\n0utnMhmm9n333Xdx/fp1HBwcQCgUslBoJpNpmmL6dUilUlitVuj1ehweHmJnZwfr6+vIZrOvfcao\nlCuVSv2sdFEqlUKj0bB0RVtbG0KhEPx+P1KpFBPw0ffQrGeEwtpU9qdWq9nBTa/XQ6PRoFQqQalU\noqenByaTCTKZDPF4HADYAe9NG+ZFPM9isRgmkwkPHz7E4OAg1Go1isUi/H4/pqamsLu7e6nCzScx\nGAyw2+149OgRRkdHIZfLkcvlEA6H687H1ztvxGIxi76S+v9Nm/xFlHheuU1eIpHAaDTCaDRCq9Wi\nVCohlUoxIV0qlUI0GkUymURbWxtUKhXMZjPu3buHoaEhZr4Rj8eRzWaZEcNZTl90k3/dGNvb2wGc\nrnTNYDDgzp07+I//+A88ePCAjS8Wi+Hg4ACrq6v44osvsLa2hr29vQu51ZPxjVwuBwCEQiGsrq4i\nnU7X9Xp0WLNYLHjw4AHu3r2L/v5+pNNprK2t4fvvv4fX62Uq2WZ9XqlUiuvXr2NkZARqtRpLS0uY\nn5/H3t4eK0Gj+ly6IVB057SLBx3qKEzbCDSXVCoVIpEIvvjiC0xPTzOjnto5SGMrFAoswkJeBrSA\nr6ysIJfLvfbgSf+tXC43tFlSykCn0+Hw8BD5fJ7Vv9fzHchkMtjtdty8eRMPHjxAKBRiZi3n6UxI\ndqUkCLTb7RAIBFhfX0c+n8f+/j6rIujr60O1WsXOzg5evHiB6elp+Hw+5PP5Y5t87f8tlUrHykzP\nK1xfC0XlOjs7MTw8DLPZDAAIBoNwuVyYnp7G/v5+3a/dirVpcHAQH374If7xH/8RNpsNhUIBm5ub\nmJ+fr+uZa6SyQSaTQaVSAfjRCvtNm3ytuLKVt/0rtcnTl1MoFJDL5diNnER2gUAAXq8XxWIRBoMB\nWq0WNpsNfX19uHnzJrq6upjrXSKRQC6XY8Kqs4zhbaH6Vy2+J6EffGRkBP/0T/+EkZERdHR0HAsJ\n0kItkUiwvr7OQoCtPmmLRCL09/ejv7+f5Rr39vbOXKoiEAjQ2dmJnp4e3Lp1C4ODg7DZbNBoNKhU\nKlhdXT226TYzbCkUCqHRaHD37l0YDAa4XC5sbm4euxELBAKYTCYMDg7CZDJBLBazcCst1qeF5kkj\nJ3e6cT158gQA8PLlS2QymZ9t8LW5YoqOUJSLctd7e3unTv3QuBvVWUgkEmi1WvT09MBut7NoyWkQ\niUSQyWTo7+/H0NAQbt26hY6ODiwvL8PlcmFlZQV+v5+F/5ttBiQSieBwOPDxxx+jo6ODuVUmk0n2\nXqTv6erqQi6XY86V6+vrCAQC7GDzplt87abfigO8VCrF6Ogo7t+/j46ODuTzeQSDQfzv//4vnj59\nyg6I9XIeGz0dGgcGBnDz5k0MDQ1haGiIzakvv/wSMzMzTXGoPOvYjUYja0jztkgfCR2BH/P3mUym\nJeV+V2qTB3DMDUsqlSKZTOLg4AA+nw+xWAy5XA5yuRydnZ0YGhrC8PAwhoaG0N3dzZzvotEojo6O\nmBXrWfLKb3sYTypp3xTSF4vFGBwcxMcffwy5XM5O8rSZyOVyOBwOdHV14fr161heXkY4HG75jZ7y\noXa7HUKhkGkGTvtQ0aIpkUjgdDrx7rvv4tNPP4XNZkM+n0c4HMbm5ibGx8exuLjIakwpVN8M5HI5\nTCYThoeHUS6X4XK54PF4jplWUMnOjRs3WD6ZPAAKhcKZDx2N/j4qlQp2u53l1r/99tufmSXVms6Q\nNW9fXx8sFgvkcjkCgQD29vZY2dLbPsPJw0k9HB0dsUOaVquF0+nE4ODgMS8F+m5Ij0GajFoRXkdH\nB8bGxnDv3j04nU643W788MMPmJubw9bWFjO0ajYCgYD5Qfz7v/87qtUqvF4vUqkUCoUC8vk8BAIB\nlEolurq6oFar4ff78eLFC/z5z38+dV72
pC7hvG/CdAh85513MDo6CoVCAZ/Ph+npaXz55ZeYm5ur\n+2Dd7Dp/qVQKtVoNoVAItVqN27dv4/3338fnn38OrVYLqVSKo6MjbGxs4A9/+ANCoVDd79XIXDca\njbDZbGyTf926TBoMs9kMoVCIZDLJLqyvGksz58GV2+QBsAdIJBIhn88jm80inU5DIpHAYDAwwR05\nKJEXciqVYiHa+fl5+P3+M+dwTnPiPs2DQop66uVMYc1EIoH19XV4PB44nU5YLBbWOEWtVkOhUEAs\nFre0FhT4qfyvUCiwFMdpPifd6EwmE65du4Zf/OIXePToETQaDdbW1vDNN99ga2sLgUAAu7u7P1Ox\nN+szOp1OPHz4EO3t7fD5fPB4PIhGo2zTofxrT08Pbty4AZ1Oh+XlZXz55ZfY2Nioe/FrZPz9/f24\nc+cO5HI50uk0AoHAMatX6pxHHcVsNhuGh4dx69YtyOVyzM7O4m9/+xvm5ubemA9vNtlsFouLi3A6\nnXjy5Am6u7sxODjIejOQwxkA5v6o1+vZZzEYDDAajTCZTOjt7YVYLMbz588xOzuL6elpZuTypt+k\nke9dKBTi5s2bGB0dhVKpRCwWQywWw+bmJnw+HxKJBCwWC0wmE5LJJPb29hAKhbCxsXEujpXNQqlU\nwmQyQa/XQyaToVgsYnJyEn/605+ws7PTsIi4WbS1tWFkZAT/8i//ApPJhI6ODjYfdDodhEIhisUi\n9vf3WXVJrQ3uWak3ciUQCKBQKKDRaNDe3o5isfhGcS4dfjUaDYxGIxKJREtSHBfeoOYskGJZrVZD\np9Oho6OD3exLpRLkcjnkcjkTqSUSCQQCAZRKJVZbOTc3h9XVVVbPeBH2k5QXM5vNTDRCtpJutxtT\nU1OYn5+H1+uFzWZjHt/lchlqtRodHR1MWNgqBSz5cmcymVM/UCKRiJUVUcjV4XCgUqlgeXkZc3Nz\n+Pbbb7G3t4dkMnkuY6eTsdVqxcDAAEKhENbW1hCJRI7dbGmTN5lMsNlsKJfL2N3dxdLS0rmN7W1o\ntVrm/ZBKpZBMJlkeVyQSsduuSqVCR0cHent7YbfbmZHP6uoqJiYmsLOz01KLTaqDX15exvz8PCqV\nCvr7+/Hw4UOk02lmZVut/uguaDAYYDKZWIherVZDpVJBLpcjm81ic3MTz58/x+rqKra3txuuankb\nbW1tMJlMsFgsqFarSCaTCAaDCIfDyGQyUCqVUCgUAIBIJIK9vT14vV7E4/FLp6wGfprbQ0NDePTo\nEQYHByEQCLC8vIypqSnMzc01TdTbyGuQM6LZbMbjx4/xySefwGQyQa1WQyQSMXGmx+NhAuu5ubm6\n9R5A4w20KIpGUdjX+VKQ0Li7uxsGg4GlAvf398+tXJG40E0eONukoDCe1WplalYy16CaZLq1l0ol\neL1ehEIhiEQiFAoFpNNpBINBpty+qAeSToAOhwNKpRLRaBQCgQDhcBgzMzOYm5vD0tISFhcXoVQq\nmcFJf38/0xnUNoNpxUZfLpdxcHCAVCrFdA1va2Yjl8vR3d2Nzz77DA8fPkRPTw9cLhf++7//G1NT\nU2xhrLdU6zRQrtpoNMJgMODvf/87pqamftZOUyD4qWWkTqeD2+1mIe56b8CNOtyROIt6KFBpYW0p\nIJk7KRQKaLVaAEAgEMD8/DwWFhaY4rhe6vkMpVIJkUgEL1++BADcvXsXdrsdfX19zHmSDub0TNNC\nTqYi6XQa8Xgcf//73zE5OcmMqk7TIbFRLQFtitVqlVW6+Hw+FItF5jR4dHTEbpJUzdNKO9WzQHqC\nX/3qV/j9738PiUSCtbU1/Od//idmZmaY6PSicTgcePz4MT766CMMDQ3BYDBAKBRCIBCwtT2ZTOLP\nf/4z/vrXvyISiSASiTT0fDYSHqdDB0U13xTlpXz8+++/zxqT7ezsME3Q0dHRmVwmzzLuKxOup3rU\nnp4eDA0Nwel0oqenB4FAAGazGf39/Uy0Rm5f5IMtkUgQDocRjUZZOOUiN3gqCdLr9dDpdFAoFMjl\nciwkGwwGEYvFWMtHcjSj5gwOh4NN7oODAwDnV5JBDwKVA1WrVdbQhZqJnGwtq1arYbFYMDIygps3\nb+LOnTsQCAT49ttvsbCwgOXlZfj9/jPrIeqBXrtYLCKdTjNjm5PvSXOnUCiwG8L6+npD42s0/BkI\nBOByuaBQKFAul2GxWJh9ptVqhd1uh81mQyaTYU1qQqEQxGIxXC4XfD7fW8Pa5zV2cnacnJxEoVDA\nwcEBbt++zSpjMpkMq26h7o/ZbBbBYBBzc3PY3d1lNf60wZ81z93I2Dc2NqDRaFhYmzYcUtVTcxSy\nwW5WmP48crI9PT345JNP8OTJE2i1Wuzt7WFjYwOLi4sIhUJNTS+c5ZClUChw+/ZtDAwMwGq1ore3\nF319fRgcHIRWq0W1WoXP50MwGMTh4SHC4TA8Hg+WlpYQCASYfaxarWYtc8nv5Cwaq0YolUrHWuOS\n3wFB62dPTw/u3LmD4eFhlnalBmVUjUXjaYbwtZYrdZNXqVTMX7m/vx/Xrl1jDRbIwAMAtre34Xa7\nsbq6ysxwisUicrnchW7wAJhgR6/Xo7OzEwaDATqdjpVfxGIxpvwHftyc8vk8xGIx5HI5Hj58CLVa\njc3NzWNOZuc5XoFAwPy4BQIBM5ORSqUIhUI/89w3mUy4ceMGfvGLX+D+/fs4PDzE9PQ0/vSnP8Hr\n9SIWi7Xs5kDjp4dRKBSylpW1DxSlFkjEOTc3h83NzQvpUU0EAgHmdkciTaVSiWKxyPQaer0e8Xgc\noVDoWHRie3ub2cGelWaFbePxOGs4QiH72sMUAHZAoc+wsrKCv/zlL/B6vUgkEhdywyyXy1hbW2Om\nMRqNBn19faw2n7w19vf3meVxs76zZjq20bweGBjA7373O9jtdpRKJXg8HiwvL7PvuFnvddaNSaFQ\n4L333sMnn3yC+/fvM798irpms1nMzs5icXERhUKBPZcUbTk6OmLNYex2O1QqFSqVyqnr5ZshdqQO\nipRGo7WFfkfSzTidToyOjqKnp4d1RSUDHfr3VD32ujlwcpynHfuVsbWtVquIRqNYW1tDqVRCuVyG\nQqGAQqFAR0cHxGIxuxFYrVZcu3YNw8PDWF9fx9raGgC0LLT9Jih07HQ6cefOHdjtdsjlcmi1WqjV\n6mMqY5qEbW1tcDgc+OUvf4m+vj7EYjFoNBpIJJJz/zx0syEr3kqlguvXr8NqtSKVSh0LuwoEAsTj\ncZRKJeZKGIlE8PTpU7x8+ZL1DG91+Z9CoUBvby/u378Pm80Gi8WCYrGIcDiMbDYLgUAAlUqFrq4u\n1nYzFos1ZPZTS70LydHREVP8W61WjIyMsAgVOdJlMhlEo1FEIhEAP25Q+Xye1cPXA/1dMzacarWK\ng4MDFlETiUTscx0eHmJ3dxcrKytwuVzIZDKIx+PHfAvqpdHFm1T0h4eHMJlMMJvN8Pl8aG9vRygU\
nOraQN3M+N/O1aHMZHh6GwWCAQCDA/v4+pqamMDs729ROaWcVypLaXK/XQy6Xs4tNMBjEy5cvsbm5\nycLxiUSCOSKSPoaU7KRHoZJcnU6H+fl5LC8vt+QyFw6HsbW1hUKhwEyRcrkccrkcs3fW6/UYHh5G\nX18f0uk0QqEQlpeXEY1GWYqKTJOA4w6gpBeg/gOnKc8+yYVt8vVMZpqUpVIJOp0OAwMDsNvtTAgj\nlUohFouh1WrZYh0KhdjtmVzVLrLPL6nkOzs7maik9mRKegHguMOfw+FgZS8kNKFQ8nmHuw8PDxGN\nRpka3Wg0orOzk93QqAQKAJLJJBKJBDsQbG9vY2Jigim8W608JhtkjUaD7u5uGI1GFItFbG5uMuc0\nSgV1dXWhUqkca8XZKI1slBT+02g0sFqtTMvh9/uZaCcSiWB/fx/FYpGJwQqFQtNuhM2YW9QchaJt\npK5Pp9Pwer2YnZ3F5OQk+3cXbeNcKzSlKJrJZEK5XEYkEjkWBbqsUMfIhw8fYmRkBMlkkjU0mpyc\nhNvtrqss9E2c5Tej6NrW1hZkMhmkUikymQz29vYwMTEBt9uNaDT61nQZVSdZLBYMDw/D7/fD7XY3\nNSLyJrLZ7DHzqf7+fpZeJa2JRqNhaw01TyPx76vKKGs7WL7O5vgsUZ8rk5MHfqqRp97IsVjsmBKX\nXKPK5TISiQS8Xi/29vZQKpVgsVggk8nOpYXsaaEbulgshlgsZgeOUqmEdDqNSCQCn8+HeDzOfkS5\nXM4Mfex2O7LZLOvJnE6nW/I5Dg8PEQqFEAwGWdojmUyy5hFGo5EtelKpFDKZDKlUCuFwGG63G8Fg\n8FQ12ucJPSxUq3r79u1jngSdnZ0wm81IpVIs53oZIj/A8S58u7u7mJ2dxdbWFiKRCOLxOMRiMSuP\nIg/7UqlUd6OMZt9Q6bmk2wl1Xtzb28Pq6ir8fv8rvefroVljJlFVOp1mpiXUFIsOAc0a83lAG86n\nn36Knp4eTE1NYXx8HC9fvmQRrIt+HuPxOP74xz9CIpGwmznpjCit+rbfs1wus2icWCxGIpE4Uwqi\nGZVSdOPWaDS4du0a0uk069/Q3t6OXC6HlZUVvHz5klUokecCNUSjm7xYLGaueHR7f12Tq0t/k68H\n+uC0MdICXSqVEA6HUS6XkcvlsLW1he3tbbZ4kI92oVBAe3s7tre3EQ6HW/6Q1hqXFAoFeDweFAoF\n6PV6bG1tYX19nYlJ2traoNPpMDg4iPfeew937tyBSqVi+UBq3tEKKOTq8Xjw5ZdfQiwWsxLFarUK\ntVrNOgMODAxAqVSyjkwej6eh3ubNGr/P58P4+DgqlQp2d3fhcrmQzWZZ6sRgMEAmk2FjY4PpHRoZ\nczNuEpR+yuVyCAQCSKVSmJubw9zcHPOhLxQKrFVrX18fjo6O4PF4Gm7U0YyNnvQPT548wQcffACt\nVot8Po9UKoXJyUnMzMwgEAgwJ0QKkTdaEkU0MvZaj41cLsdSDvl8nvU4uOiIw6tQq9Xo7u7GgwcP\n8OjRI3R3dyMUCuGHH37AwsICK6e8DKV+h4eHzOe/Xmg/yOVyKBaLsFgssFgs8Hg8LTmk6/V6ZrQm\nk8lw7do1ZDIZSCQS1hXV6/UimUwyrwoak1KphFarRaFQQDabZYdy2ufoEtLoPLtSmzzwk6Pc0dER\nisUi86unmyPddHZ3d9mmQxaIAFjnJzpJnWWjbHThI9ERbZIulwuJRAJmsxlLS0tYWVlhoXqhAZSW\nvQAAFCxJREFUUAibzYZ79+7h008/ZS0sd3d34fF46jI3qXf81eqPnu4ejwfhcJiJ2MjZizZ4jUaD\nx48fw+FwQCaTIRAIMIV3K6n9nPTAkN94oVBAOByG1+tlVQJjY2NQKpXwer2IRqPY2trCwcFBwwsE\nbRQ0lrMiEonQ1tbG2q36fD7Mzs5iZWWFLWDVahUymQwSiQR9fX3I5XLY3Nxk30M971tbWlTv2AGw\ndqa//vWv8emnn+Lg4IAJ7L7//ntMTU2hUqlAoVDAZDIhk8kwm+l6FjYa91lKkV4H/Xa0uZPLZjab\nPaambjaNrDHt7e0wGo14+PAhfvvb3+KXv/wl3G43nj17hhcvXiAUCjFBWitMWFoBPd8kQu7t7cXu\n7i4mJiZaErHV6XSwWCysYsrhcCCVSqFSqbBQPgnAK5UK2tramN+9wWCAUqlEOp2GQCBg6yRt8LV5\n+ka4cps81TPv7+9jfHwcExMTLAxIodZEIoFSqQSVSgWRSASTyQSDwQCJRALgxzzKzs4OYrEY+0LP\nMhloATjrBNLpdLDb7ejt7YVCoYDb7WZ9sj0eD2s8Qhvmb37zG3z00Ufo6elBqVSC3+/Hd999h4mJ\niZZunHSoosWt9pRZG1bKZrPwer2QyWQYHR1ljoStujVQWLhWNU8hV/ITp1xrLpeDyWSCSqVCtVpF\nKBTC9PQ0dnZ2ftZUpB5qxwDUt+Fms1lsb2/j66+/BgBEo1GEQiF2uieod7tKpWLtTC9DqqGrqwtj\nY2Ow2WwAgP39fTx79gxfffUVS0uRUHZsbAwSiQTBYBD/9V//hXA43PBN/DQHhdpDwcn8KFW71FYI\nVKvVc7sJU8iZ5u1ZDjpCoRBGoxF3797Fb3/7WzidTiSTSeaxUCgUWB/52tvhebe2PW/IHpyaSFFa\n6GQp23nh8/mwtLSER48eQalUolwus1t8bYpSJBKxVGFnZyc6OzsB/KgzI0fFWr+QZkaJrtwmT8Tj\ncdZV6/DwEBKJBKVSCdlslonVOjs70dXVBYvFAp1OB+Cnmzy5FJ2VWpHcWf6G6sdJwU0iu0gkAolE\nwlzfhEIhOjs7MTAwgEePHmF4eBiJRAIbGxuYn5/HzMwMc/1qJbQYvMq45mR4qa2tDXK5HNVqtaV2\nqsDPN1MaE9lg0r+pFeS1tbUhmUxiY2ODlUVdhpKoQqGA/f19duigMDYtADSvOjo60NXVBYVCgUgk\nwpzxLhqr1Yr333+fNXBZWFjAs2fP8MMPP7ADEP0GZCHrdrvZoeas1H7fZ/3eT84bei2qCKBLRDQa\nZZqH84CiN7Q5nGadoXHevHkTY2NjuH37NrLZLBYWFrC+vg6fz8cqYEgPVKlUWCSzVux4GVMQb4Ki\nRe3t7Uin0+xQ1qrPEAqFsLS0hGfPnqGrqwv5fB6hUAixWAyFQgEajQbXr19nXvy0FykUCmxtbSEU\nCrF2yY1Gn17Hldvk6SR9eHh4zAErl8uxsDGVMrz33nu4e/cubDYbM0pYWFjA2toaEokECoXCmW48\npHY+7S2BoBM6GSAkk0mUy2WIxWJWmpZMJlm519jYGH7zm9/A6XQinU5jenoaX331Fb755puGaofP\ne+JTeqS/v5+FnzKZTMs2+drf5HUildrFXyaTQaPRMB8Fau/b7JKoejd6yjdSKV/t/4CfNB5dXV1w\nOBzMIpmiERcJGYB8+OGHEAqFrGXy3Nwce+ZIJKtSqeB0
OlGtVhEMBhsOJVPkqdHfUalUYmhoCH19\nfVAqldje3sbi4iKi0WhDLoKvg6KUVIHwqt/8VVDFzmeffYZf/epXaG9vx/z8PL766isEAgEEAgEm\n6iLHUFJ7Uy6bdE6XVWvwKgQCARO7KRQK7O/vY3Z2Fi6X69xNtohUKoX5+Xns7OwwUx66aN64cQNO\npxPvv/8+rFYrO4xkMhmEw2GsrKzA4/Gcuyj5Sm7yrwox0UQ3m83Q6/Ww2+1MbRyNRpnL0+TkJLxe\nLzuN15OfPutDQCEk2lRIVUnlgEdHR9BoNCzyQF23KpUKPB4Pvv76a0xNTSEYDF7ah486/5nNZlSr\nVUxPT2N7e/vUjWyaxdu+n2q1ygRhOp0OarUakUgEfr8fBwcH53IgqSe1U/u3NKaTm59cLkdHRwds\nNhvUajWWlpawvLzckHCN3rORcVOkRCqVQqvVwuVyYWpqCltbW8cOqSc3MCpZa2SDPnkbP+vfUHSE\nfPRVKhXrMBaPxxGJRM5tTpOi/2R57Ns+R0dHB5xOJ2w2G+RyOavd9ng8rI8H1WPncjlEIhEIBALm\n1EeVAtTgi9p40zp7Gdccym2r1Wqo1WoEg0Gsrq5ifX2dOeS1Ytz03ZG5jUwmQ7lcZk2AqPTZYDBA\noVDg8PAQHo8HsVgM0WiUuX6eJ5d2k6cbc22OlSbcyR+PwlBmsxk3btxAV1cXTCYTFAoFy9NPTk5i\nYWGh4ZPTWXNYtXWPZHpDD15tSNBgMMBqteLWrVtwOp3Q6XQs8vD1118jFApdyoeNFkW9Xo++vj4Y\nDAZks1l89913LH982SAvb2rusri4yAwtmj3eZkcFCLrFDAwMoKenByKRCOPj41heXm7KotHIwaT2\nRlqtVrG4uIjx8fFXNnARCASsbCqZTCIejzdt/Kf9dycjLXRAkclkrCaeIirnWSlCOgDg9MZdAoEA\nFosFN2/ehE6nQ6FQgN/vZ2FgqtQRi8XIZrPI5XKIRqM/e23yErHZbKz9Mm1gl+kZrj2EabVaZgq2\nubkJl8uFZDLJxIWtXi+LxSL7/TQaDes74nQ62TzK5/NIp9PY2NhgkZTz5tJt8rQhms1mGAwGqNVq\nZDIZBINB5rBWi0AgYIrSmzdvYmhoCHq9HkKhkDnkLSwssHamjbRoPW0YsTbXT7W1arUaWq0Wer0e\nNpsNlUoFiUQClUoFQqEQSqUSRqMRfX19rM78+++/x9OnTxt2iXuduKgZSKVS6PV6fPLJJ/jwww+R\nTqcxOzvLbgmXkdoGJJRDi0ajV0aARGmpoaEh/Ou//iuKxSLcbjf29vaQTqcbfn2aL/Xe4qgHuEQi\nQT6fh8fjYdUNtZB2pqOjA0qlEnNzc1hcXKxbVFqPtSrw88MTOZVpNBqIRCJEIhFsb29je3u77s6V\np+Usc5CiDaOjo/j8889Z45y2tjZ0d3fj9u3b8Pv9CAaDbPN70+uTeJbsYknA9ra/e1NFQO0Bqvbf\n1VZC1M43ep/XXeaEQiHr+2G1WmGz2Ziupjb9etEXomQyia+++grlchlSqZR9jyTw/dvf/oa9vb2W\njOVSbfJ0O6HmJr29vcwadW1tjTWZIRMYkUgEiUQCp9OJDz74ADdu3IDFYkGlUsH+/j5r8DE7O4tk\nMslu8I1umG/Lj5H7G+Udyb+YFJYmk4m10aTwMZ1K1Wo1QqEQ1tfX8fz5cywsLDTsT1+PWPC0r6vX\n6/HOO+/gH/7hH3D//n385S9/Ybak59ldrl7oFqBSqVguPh6PN83CthVQ5zabzYbR0VE8ffoUKysr\nTKTXKCdL6M4KiRpJl5FIJNgtnl5XJpNBp9PBZrOhq6sLMpkMoVAIXq/3Qru50bOiUqmY/0MkEoHX\n60UwGGy6ZuMkpxVftbe3Q6fTweFw4N69exgdHWWOfLTZkXUwpUFO2qKefN9SqcT0QiSMbGQe0N+T\n0LnWxU0sFkMmk6GzsxMymYylpcgshvpkkCcBCSCFQiEcDgf6+/thNpuhUCjYparVqcE3cXBwAJfL\nxTRi1F45GAxifX0dq6urLRvrpdnkaUI4HA58/PHHePDgAQYGBiASiRAIBHDt2jUEg0Fsb29jYWEB\nR0dHrN/29evX8fjxY5jNZlQqFSwuLmJychJ//etf4fP5mM3geeeXaIGg/Bfl1qiuPJvNIp1Oo6Oj\nA1KplPVIbm9vZ1a8iUQCz58/x//8z/9gc3OT+TY3Y1zNVM/Sw9vX14d/+7d/g9PpRCwWw4sXLzA9\nPX0p1N2vQy6Xw2q1QqlUsoWlHn3GRUHlUiqVCqVSCWtra5ienm7KLb4ZkPCNTD4kEgkzwqG5bDab\nMTg4iNu3b2NoaOiYULPexa+ZFRFKpRJKpRJHR0cs39uKg+BpQ/QSiQQOhwP//M//jHv37kGlUrGD\nNaX5JiYmWP34aSpGyEPipOCvkc2IrJjVajXkcjna29txeHgIlUoFq9WKzz77DN3d3SgUCigUCkgm\nk1heXkYqlYJEIkEgEGBeIlT+R42vBIIfvfg3NzfZnLtMVKtVbGxssL4SdJCqLcdsBZdmk6+Fys06\nOzvZ7Vcmk2FgYADDw8NMiUshte7ublitVuTzeWxvb+P58+cYHx+H2+1GOp1u6Rdaq+ylh4Oahvj9\nfkxPTyMQCECv10OlUh3L/ZFpy8TEBFwuF3O/a/b4GoVuw52dnUw1ur29zYRf0Wj00j1wwE/jNplM\nuH//PrRaLXMzu6yphZPQBjQ8PAyVSsVSUc3KZdN7NKPG+PDwEIVCAbdu3YJQKMTW1hbLad++fRuD\ng4OsfwO1h6YDeb00Or/p8Go0GqHX61np5c7OTkvyp28aF4nNNBoNbty4gXfffRcPHjyA2WzGwcEB\nVlZWMDMzgxcvXmBlZYW1oz7ts0jVHGflbd85CYwpoknPIbnEqVQqdqDS6/XQ6/XMIptaQ2u1Wtbc\naGhoCL29vewQGY/HmxKlPQ/ISOkiuTSbPJ2gM5kMdnZ2WIMQqVQKuVyO3t5edvN9/PgxCwmSEU5b\nWxu8Xi9evnyJp0+fYmZmpmUKy1pqHxT6TCQs8nq9CAQCLP/e29sLg8EArVaLdDqNVCqF8fFxbGxs\nvFIc0yjNej16SK1WK0wmE/L5PMbHx/HFF1+wZimX7WEDfroB2Ww2PHnyBLFYDEtLSzg4OEChULiU\nYz4JpXbu3LkDsViMZ8+eMfvmZo6/0U2ecqzlchljY2O4d+8eNjY2mDELmeQkEgkEAgGsrq5id3cX\nqVTqQg+IFIkzGo2sjfX+/v6FC19JPNfR0YG+vj58/vnnePfdd2GxWCAQCBAOh/Hdd9/hm2++wezs\n7IWXUNZSKBTYGigWiyGRSKBUKplIsFqtQi6XsxLid955B3q9HlKplOkCgOOVTeQXUSgUsLu7yy5z\nl3Gjv2guzSYP/PgjhsNhTE5OIpfLYXx8nJ325HI55HI568Veq1RMJBIIBoPY2dlhApnzcPw6TXnW\nq/5bbVVAuVx
mjmvRaBRSqZSVXpRKJYRCIaY5aNb4z0OIcnR0xGo929rasLS0hFgsdmk3eABMUKXV\naqFQKLCzs4NgMNiw5qFVCAQCjIyM4P79+xgYGMD29jZWVlZYQ6Nm0WiIlg66FJGjPt+UQ6VeE2tr\na5iamsLu7i6i0Si8Xu+lSJscHh7C5XIhGAwik8mwjageGrXCJiEgbe59fX3o7e1FqVTCixcvmKHW\n3t4ePB4PAoHApUmV1a591HymWCyyQ3UsFoPf74dCoWA3e7VaDbvdDqvVyvwfLBYLyuUysxaOx+OI\nx+OIxWKYn5/HxMQEwuEw3+Bfw6Xb5KnrUzweh1qthlQqhVKpZJ3mtFotU6enUinEYjGEw2Fsb2+z\nnsSXQV1Zy8ncVrFYRDabRTQaBXA8PHpeYoxmPgAkkonH43C73chms9jb22t5r/iz0t7ezlIk1IbY\n5/NdqlvP6yCR6cjICB4+fAiDwYD19XXs7Owgm8029b0a/Q3phrW/v4+NjQ3I5XIAP9ryUo/2cDiM\nzc1NPH/+HOFwmEVSLnr+0Nz2+XzY3d1lTWkapRGDH4qckU6gvb2ddSHc2dnB7u4ugsHgpbAyPsnr\n8vq5XA6xWAw+n+9YKkKlUsFsNqO7uxs2mw3Dw8Ow2+3sYpRKpVif+VgsBq/XC7fb3VJXzavGpdrk\ngZ8mBVn91bappBIKsVgM4CcjgnK5zEJCl2GhOCvnPebziGgcHR0xt6xYLHYpbmBvgm5EdJNcWVnB\n/Pw8XC5X3S1ZW4lCoWD10ENDQ0ilUkgkEmzTvExUKhXk83k8e/YMbreb1cuf9H/P5/PIZDJs7lyG\n+UOCQaoEaKROvNFbPP0tHahdLhfcbjerCqn932Xc4N8G/eaU2snlcqzh2NbWFiQSCeRyOSQSCQvF\n03pP7VnPw9viqnDag+Ol2+QJ2sAvE/9fOjc1A1p8KAR3Fb6XSqWCdDrNHA83NjYaFnq9jWYs9PQ6\nVFIUiUSwvr4Oj8dzbrqTRjemo6MjRKNR1tu7mZUd50mtcLYRr4Da12vGmKjUk77H1xmDXUXoM1Cf\niVamG67qmn4Wzcyl3eQvI81asP+/0Yrvo9GHkW5le3t72N/fh1AoZDa25zX+k0YfjYRryTpza2sL\n5XIZMzMz2NjYuNQ3ODoIXkVqS8jOynmsE9Rl7TJyVdfF2jTpVRs7cPo18fx78b2eK/Wtnqw1v6gx\nvO5HfdNkpbaLV+EmdZKTD2IjGyWF7On7OM+WrDRfSBnciBOXQCBgZaRdXV1QKpUIh8NIJpNMB9HM\nz0AVK5f5APE66BkFGlu4a29KZ32dejaOi9xsGj1A0xy/LEY0p4We0au4LtIzenh4+NY9nN/kz0Az\naofPg5PWka+asJd17G+jWaftV93MzvPBpgWExt7oQlpbhgngWMvZZn+OZtXJt5rzupm1KqR7ERt9\nM0olr+JcIeq1Qr5oaH3hcDgcDofD4XA4HA6Hw+FwOBwOh8PhcDgcDofD4XA4HA6Hw+FwOBwOh8Ph\ncDgcDofD4XA4HA6Hw+FwOBwOh8PhcDgcDofD4XA4HA6Hw+FwOBwOh8PhcDgcDofD4XA4HA6Hw+Fw\nOBwOh8PhcDgcDofD4XA4HA6Hw+FwOBwOh8PhcDgcDofD4XA4HA6Hw+FwOBwOh8PhcDgcDofD4XA4\nHA6Hw+FwOBwOh8PhcDgcDofD4XA4HA6Hw+FwOBwOh8PhcDgXx/8B4zPH2PoXhR8AAAAASUVORK5C\nYII=\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Current loss: -2.570406\n",
+ "Current MNIST score: 7.240756\n",
+ "Current Frechet distance: 40.736855\n",
+ "Training step: 1200\n",
+ "Time since start: 6.773234 m\n",
+ "Steps per min: 177.167951\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAfkAAACHCAYAAAAV6SDhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsfelzW9d5/oN933cSBEGBmwiSEkVJlKxdthM7jptkkuk0\nnWkz0w/tTKf9A/pf9FO/dPqxnaROGjeLXce2bDnaF1IiKRIkSIAEse/7vvw+6HeOIVkLAQIEad9n\nRuNFJO7Bveeed3ve5wUYMGDAgAEDBgwYMGDAgAEDBgwYMGDAgAEDBgwYMGDAgAEDBgwYMGDAgAED\nBgwYMGDAgAEDBgwYMGDAgAEDBgwYMGDAgAEDBgwYMGDAgAEDBgwYMGDAgAEDBgwYMGDAgAEDBgwY\nMGDAgAEDBgwYMGDAgAEDBgwYMGDAgMHhAauH12708NptgcViodE4dMs+sGCxvt5+L7uvr/qZg/w8\nWCzWM+s7qOtkwGCvIO/oYdzjB/kM2SVea8MZI8+gZ2g2hIf8RXshvgUHCAMGDA42XmvD2fuxCgYM\nngcx8BwO55lo/dsExsAzYMCg1+D2egEMvrsgxv1VRr4XqUA2mw0Oh4NGo4FarcYYawYMGBxaMOl6\nBj1DKzX5/apvs9ls8Hg8KBQK1Go1JJNJ1Ov1tq7JpOsZMGDQZbzWhjORPIOeYTcGsNFo7Iux5HA4\n4PF4sFqtMJvNEAgECIVCePLkCcrlMmq1Wlev3y2wWCwIBAJIpVLk83mUSqW2nRYGDBgcPjBGnsGB\nR7cNEovFAp/Ph0wmw8WLF3Hx4kVkMhncv38fbrcbtVrtUBt5uVwOi8WCQCCAeDyOUqnEGHkGDL4j\nYIz8AQaXywWfz4dQKESlUkGxWESj0UC9Xke9Xu/18r4VIAZ+bGwMV69exZkzZ2Cz2eDz+eBwOFCr\n1dq614RYCPSGgMdisaBUKnH8+HFMT0/DbrfjD3/4A27fvo1qtcrsnz2CzWZDJBLhwoULGBkZgUAg\nQKFQQCqVgsfjQTAYRCwWQz6fR7lc3tfsCYvFAo/HA5/PB4vFQq1Wo2cH49wdHLDZbLBYrK6XAxkj\nfwBBWOdyuRwqlQo6nQ75fB6JRAKVSgWlUgnZbBbVapV5afcINpsNsViM4eFhvPfee7BYLBAIBPD7\n/SiXy4fWIEokElgsFly5cgVnz56F3W7H6uoqHj58CDa79001zWs4DMaHOGxcLhdisRgKhQJGoxE/\n/OEPcebMGUgkEmQyGQSDQSwtLcHhcGBzcxPpdBqFQgGxWAzZbLYrRE4WiwU2mw25XA6JRAKRSASx\nWAyBQAAAKJVKyGQyyOfzyOfzKBQKqFQqHV1Dr3CY2nDJuc7n8yEQCKBQKMDj8ZBIJJDL5VAoFFr6\nrN2CMfItYL+IVFwuFyKRCMPDw7Db7ZienkYmk4Hf70c2m4Xf78fS0hLS6TTK5fKB39wHGVwuF1qt\nFjqdDjKZDKVSCYFAANeuXcP9+/fbTm33+tAZGRnBuXPncObMGZjNZmQyGaRSKaTTaVSr1Z6ti4DP\n54PL5aJaraJarR7oLgbSbQEAWq0W09PTOHfuHM6ePQuLxQKNRgM+n49qtQqz2Yzh4WEEg0Fsbm6i\nWq2iVCrh97//PR49eoRkMtnx0g+bzYZMJsPly5dx8uRJjI+PQyAQoF6vI5/PI5fLIZfLwel0YmVl\nBcvLywiHwwf2frcCYjTJPur1e/c8mo0xl8uFXC6H2WzGyMgI5ubmoFKp8Omnn+LRo0dYW1vb9ee2\n8h0ZI79LNHvypL2qXq935HAiHp5IJIJarcbg4CBsNhuGhoYwPDyM0dFR5HI5BINB5HI5+P1+GI1G\n+P1+hMNhBINBpNPpjq3lRZ9BDrnn/47H40EkEoHL5YLNZqNUKqFUKqFSqRwKghefz4fVasXAwAB4\nPB5SqRTcbjdWVlbg8XgOtPF5EYRCISQSCWZnZ3Hp0iXYbDYUCgWsrKwgGAyiWCz2JDPB4XAgFAox\nNDREjRCbzUalUkE2m0UqlYLT6UQoFNrz/W5ugSTvaTufyWazweVyoVKpYDAYYDQaceTIEdjtdpw8\neRLHjh1DuVxGpVJBLpdDIpFALBajZFGLxUIdmY2NDcRiMeRyua7wO7hcLkwmE44ePYqTJ0/Sd7FS\nqVAnamhoCEeOHIHVaoXT6cT29jbi8TgymUzH17NbsFgsiMViqFQqZDIZZLNZegbV63VwOByw2Wz6\nDOv1Oi1HqFQq9Pf3w2azoVwuI5PJwOfzIRKJIB6Pd+W9bdb2YLFYkEgk4PF4yOVyNDrP5/P0+atU\nKojFYqysrCAcDsNoNGJ8fBynT5/G2bNnIZPJ4PV64fP5WjLyrYAx8q9BsydG6rcikQjVavWZOvle\nPp/NZoPP58NgMOD48eP4wQ9+gHfeeQdcLhc8Hg8CgQDFYhE2mw3FYhH5fB5XrlzB+vo6Hj16hK++\n+grr6+soFot7OkBeVkdmsVjgcrlgsVioVqvPpMgkEglMJhOkUil4PB6i0SgSiQTS6TQqlcqBN5JC\noRBHjx6FzWYDAITDYWxubiIQCCCdTh/otb8IMpkMg4ODOH/+PK5cuQIul4uHDx/iyy+/xM7OTs9K\nPDweD1qtFu+99x7++Z//mR7Y1WoVwWAQLpcL//Ef/9GRCJPL5dJIlrynQGvRDznMSSlnbm4Ob775\nJiYmJiCTySAWi8Hj8RCPxxGLxZBMJrG4uIhHjx6BzWZDp9NhamoKEokEtVoNVqsV0WgUbrcbxWJx\nT9/veRA9h+YzqVaroVAoQCKRQKlUQiQSwWw2Y25uDvF4HPPz8/jNb36DhYWFnhl5cvZpNBpMTU1h\nc3MTbrcbXC4X9XodpVIJfD6fRuqVSgXlchkcDgdSqRRHjx7F1atX8eMf/xilUglerxeffvop7ty5\n05WMCVmzQCCg56HZbIZcLofX64VKpcLExAS8Xi84HA5+/OMfY3p6GiaTCf/6r/+KTz/9FBaLBePj\n4zhx4gSMRiPq9To0Gg3kcnnH10rwrTXypN8ZACVP7TaVQ6KORqNBvWAS/dRqNdqG1IkD02AwYGho\nCGNjYxgdHcXIyAjGx8ehVCrp4cRmsyEQCKjnqFQqUS6XIRAIIJPJYDQa8ejRI1y/fh3xeBzlcnnX\n1ycZBJVKBblcDqFQiEQigUwmg0KhgFqtBg6HA71eD41GA6VSCR6PR6MwrVaLiYkJiMViGu0nEgm4\nXC64XC5sb28jmUzSCLKV59BtEPKU2WyG0WgEh8OB2+3G/fv3EYlEUKlUDsQ6dwNyYFqtVrz//vsY\nHR2FUChEvV5HKBTCrVu3EAwGe/J92Gw2lEolTpw4AZPJhHA4DJ/Ph3q9jrGxMQBAPp8Hj8eDVCrd\nc7Rbq9VoGaud/Ubq2+TPkSNHMDY2BrPZDI1GAzabDYfDgUePHmF9fR2BQADFYhHhcBihUAh8Ph8a\njQaRSIRG8uVyGZFIpCulkkajgVKph
JWVFVSrVSwuLoLD4aBWqyGfz6Ner0MgEEClUsFkMmFmZgbT\n09Pg8/mo1+twuVwdX9PrQMpk7777LiYnJ6HVauH3+xEIBFAqlRAMBrGxsUGfH3GM+Hw+2Gw2VCoV\nTp8+jcnJSSiVStRqNVQqFSgUCnpWdgN8Ph96vR42mw3j4+NQKBTg8/lIp9NQq9UYHh7GwsICwuEw\narUalpeXcfPmTaytrSGXy8Hr9WJoaAj1eh2pVAr5fJ5meLqFb5WR53A44HA4kEgkkEgkkEqllFla\nLpfpH9IS1Wx0gK8PSqFQCLlcjmq1ikKhQOuyhKlKotO9GCsOhwOBQACbzYYLFy7g3LlzmJiYgFar\npYQZch2ysfl8PjWwjUaDvrhjY2MwGo00UotGo7taA6nlmUwmDA8Pw2AwQCaTwefzIRQKIRqNolar\ngc/nY2BggP4RCARoNBpIJBKQy+WYnp6GUqmERCKBTqdDNpvFysoK7t69C7FYjFAoRCP7QqGAYrGI\nYrHY89owWS85vCuVCjweD5aWlpBIJA5F2xzJqnC5XMhkMoyNjeGtt97CwMAAPfxDoRCWl5d3Tbbq\nFJmJvE/knezr60OlUsH9+/fx5MkTGoFms1m4XC6w2Wyo1eo9s9Hr9fozDtpuP4ekjmUyGQwGA9Rq\nNcRiMfr6+qDRaMDj8Whp4caNG/jf//1fPHnyBMFg8JnPEAqFkMlk8Pv9NFDQ6/VdLZWUy2Wsra3B\n6/WCx+NBKBSCzWZTHg+Xy4XRaMTRo0chFotx6tQpvPnmm7h3715PRJvIs37//fdx9uxZAEAqlUIq\nlUImk4HT6cTdu3cpcbFSqVCuEgDI5XKMjo6ir6+PnvtisRgSiQRCobCr65ZKpbBarTh37hyEQiEl\n0qnVahiNRnq++Xw+eL1eLC8vIxqNIpvNUs5PJpMBm81GJpNBOBxGOp3u2poPnZF/2YYkN1+j0eD8\n+fOYnJyETqejzFbiZfv9fiQSCbqZCoUCNTYsFgsKhQJyuRxSqRSFQoEedJVKhbY77NU4kd7l4eFh\nXLlyBT/84Q/R19cHlUoFHo9Hr0MiY1KXIr9LQCJ5sViM0dFRvP3222CxWLQu+Lo1CIVCHD9+HGfO\nnMHZs2eh0WggEAiQzWYRDoexvb0NPp8PqVRKCS7kJctkMlheXsatW7fw2Wef4dSpU3jjjTcAACKR\nCFarFQBgNBoBPD2EcrkcXC4X1tfXsb6+jng8vqf7uFeMjY3h0qVLOHLkCLhcLjweDyKRCGVB7xXd\nPjyb2bparRbHjx/H7OwsNBoNxGIxqtUqdnZ2EAwGWzIuZL/tJVNF0ppkv2SzWXzyyScQCoXgcDg0\nRXz37l0aqQ0MDGB8fBzZbJaWetpBO8qI5F7a7XacP38earUaABAIBFCtVrG2toZisYhcLodr165h\nfX0dXq/3G4dzo9FAuVxGKpVCoVCgnBW5XA6ZTIaNjQ2aCewUiLNEDCLJYDYaDXq+VSoVBAIBSsJb\nX1/HX/3VX6Fer9NsQ7vPmgQdrfw+KSckk0lks1l6jpEIn2Qd/H4/0uk0jEYjLe0UCgXU63WaERoY\nGKDEO51OB61Wu+sOklbf0VKpBI/HA+DpGTg0NASbzQa73Q6pVIpyuQyfz4eFhQVEo1Ekk0lkMhna\nqVOpVBCJROByuWC32yEQCKg96hYOlZEnETrx2CQSCdhsNv2j1WphsVhw+vRpjI+PQ6VSIZvNIhQK\nwePxwOl0IpFIIJFIvDQSJ543ieBJNE0Ogb1ENyTiUiqVGB4exoULF3D27FmMj49DKBSCy+XSzyfk\nEhINkZ5KssZGo0Hbeer1OrRaLcbHx7G0tNTSmrhcLiQSCYxGI8xmMxQKBUqlEuLxOAwGA4RCIaRS\nKU05lkol+Hw+ajxIOp7UywwGA1QqFUQiEcrlMkQiEQwGA8RiMQDQuj1JqfWqh5zNZmNoaAizs7PQ\n6XS0BlgqlWgkedAhEomgUCgwODiI0dFRHD9+HJOTk9QpKxaLiEQiLUvzPs9DaeUZicViGg2TdHGt\nVkMul6OZoWa4XC5wuVwIhUIMDg5Cr9ejr68P9XqdvqftoFUDL5PJMDAwgJmZGZw6dYoevsFgED6f\nD5ubm9jY2EAikcCNGzeQTCZf6uyTbF+5XIZEIoFCoYDJZALwNN37qnXs5fuSLCUpMT5vuLPZLG2j\nUygUePPNNyEQCGAymRCNRpHP59u+dquo1+vI5XJYXl6GwWDAzMwMBAIB7boQCoV0f6fTaWg0GhqU\nkXbidDoNkUiEoaEhGhWHQqE97ZvXgThTXq8XtVoNSqUSfD4fYrEY2WwW29vbWF1dhcvlQiQSQbFY\n/IZ8N3FuEokEACCXy7VUYm0Vh8bIs1gsaDQaWCwWWK1WDA4OYmBgAEKhkHqufX19sNls4HA4lLTG\n4XBQKpWQTqfB5XIRCATg9Xrp4fO80SZtRs0GlRiFvYLU+omB/+lPfwqr1UpTa83rIMzeF12beO4k\nhV+pVMDn8ymTc7eoVCpwu91Qq9X0XiqVStTrdfB4POj1ehop8ng8WndcW1vDp59+ilwuR1O7S0tL\nWF1dpU4DSV1ZLBYAgM1mg8lkgkwmo4cjm83uSUqc3LeBgQEcPXoUIpEI+Xye8go6RRbstgNDskE/\n+MEPcPbsWajVasjlcggEAlSrVeRyOXowtmLkCYO5VbBYLKhUKlgsFioo9PDhQxSLxV3zGwjRjdQs\n96NDg8PhwGAw4M0338T09DSMRiM1hn6/Hw6HAzs7O88Iy+xm35KSmlqtxpEjR2jp60X3tlPiSeS8\neNkebjQayOfzSCaTCIVCkMvlmJycxPz8/L4a+UajgUwmg88//xxCoRATExNQKBTgcDh0r5CUvlgs\nRiqVQjQaRSqVwvr6Ora2tqDT6SihU61WI5PJ4MaNG1hYWOjquULKYMlkEgqFAlarFVKpFEtLS/jv\n//5vPHr0CKFQiJ5zz5eDyVnv9/tp+bKbkzgPjZEHALPZjFOnTuHkyZPQ6/UQi8WU8Z1MJqnBX1tb\nQyKRgFqtRqFQQDgcxs7ODra2tpBIJJDP51+anno+gns+em5X/YzL5eL06dO4evUqzGYzbDYbBgcH\nIZFIUK/X6YNvNoAkeifeXyKRwK1bt6jU6sjICCYnJ6FSqcDlciGVSp8hnbzq5SOOQjQaxePHj5HL\n5bCwsACLxYJKpQKxWIz+/n76nc1mM5LJJG7evIn5+Xns7OxQJi8h6BGHKJlMIh6P03t99OhRSmQs\nFotUyKdX0Ov1mJiYwPj4OE3LEt5Go9EAh8PpOV/gdSDko7GxMdhsNgwMDNAOB+LYRqNR3Lp1i9a/\nd4u9GBmJRAKVSgWVSoVoNEq5F6/6TFJDz2azKJfL0Gg00Gg0EIlEtA2smyCEVqvVir6+PohEIhQK\nBWSzWezs7CAcDiOXy7VUBiBG226343vf+x5GRkawvb390t9vp8TQDCLqpFAoKOnvZUTL5o4enU
6H\nwcFBOByOtq67F5TLZXi9Xnz++efIZDIQCoXUkSKZQ9KmnM/nkc1mkU6nEYvFwOFwMDk5ienpaQwM\nDCAajcLpdNJ6927vY7v3m5QbKpUK4vE4VlZWsLCwgIWFBWrgX2RLOBwOtFot7HY72Gw2AoEAzYJ2\nC4fGyJN2hZmZGVy8eBE8Hg+xWIymb+LxOKrVKthsNu7duwev10sNVjQaRTAYhNvtpgSw3YJsgmaG\nfqvg8/mQy+U4f/48/vEf/xEikQh8Pp+mVHO5HCWL5PP5Z6Jnkh6KxWLY2dnBf/3Xf9E65jvvvAOR\nSISxsTFIpVLIZDKIRKJdR8gkWkqlUlhbW4NarYZGowGLxUJfXx9OnTpFXza73Y5IJILf//73CAaD\nVK7zRZu5XC5TZS3icIhEIpqiS6VSPWGuk5esr68Ply5dwsjICIRCIW1LzOVydA8dZDQf0BMTEzCb\nzVAqlZQQWa1WEY/HsbGxgZs3b2J1dbWlz2+OPJr/ezeQSqU0IiMp69ehWW+CZIKIw7pfymwCgQAG\ng4F2jxDHOhQK0YxCKyBdK5OTk3j33XeRTqexsbHxSpnkvbwPJBthNpthNptpX/bzZxYJOEiWkM/n\nQyKRgMvdf1NQq9UQi8UQj8dx//79Z/6uuWxJOEpk/RqNBna7HbOzs5iengaPx8Pm5iYcDgdisdi+\nCYSRDqNgMIhPPvkEi4uLr9TWIN08BoMBk5OTtHTMGPn/DxaLBbVaDYPBAADY2trC7du3sb29Tcl0\nRIyAtOCQlpVCoYBCobDn1px2Ng7ZlGfPnsXExAQlHpE6pcfjgcfjgcFgAIfDQSgUgkwmg1arBYfD\nwcbGBn79619TIRy3241UKkXFLkgWQCQSQa/X0xoR4RO0AhJJkWg8FArR733v3j0UCgXs7OzQdOXr\nnB4ej0dlNon4CSFC7lVfoB1wOBxoNBpYrVZau04mkygUCjTzQOpjB7l1jsfjQafTYXR0FHNzc5Tc\nSKLPeDyOP/zhD7h27Ro2NzdRKpVavkY73BMWiwWDwQCdTgev1/sM6/xVIGWs06dP49ixYzSztFfd\nh1ZAott6vU4PXFL2e5nT96oaukwmw+joKEwmE0qlEubn5+k71A2IxWJcuXIFdrsdLBYL6XQaq6ur\n3yAwkqxFrVaDw+HAysoK7ty5Q9/1XqCVzEaj0cDw8DDeeOMNmM1mVKtVuFwuPHr0CIuLiy1F8XuB\nQqGAzWaDTCZDLpejIjyvKvXx+XwYjUZKsnY6nVhfX0c2m+0qB+hQGHnizWk0GlonzufziEQiiEaj\nCIVC8Pl8AEANpFwuR6PRgFgshkajQbFYBJfLpUaq28Qq0qev1+spY9dms6FWq1GxlWQyie3tbXg8\nHuh0OnA4HEQiEchkMhpRb2xs4KOPPnqGMU+8cZLKJCQ9wpQlP9MqSIsh8NTgR6PRZ0oGrQ7GkUgk\n9FmQfu1IJIJMJrPvoiwcDgcymQyTk5NU0GRnZ4cOEYnH4wgEAggGgwdawIeQxE6dOoW5uTnYbDbK\nSSHpz/n5eVy/fh337t3bE5O7nXtASHckUnzddyGZldHRUZw4cQKDg4NYXFyk5ZP9IkBWKhWkUilo\ntVpIpVKIxWLodDpYrVYUCgUEg0H6npFMg0gkQigUQiaTeSbSFAgE0Ov1GB8fh0Qigc/nw9LSEtbW\n1tpyuHYDHo+HkZERTE9PI5fL4eHDhy+9//V6HZFIBLdv36aGptda9q8rLRKwWCxYrVacOHECKpUK\nsVgMCwsLWFlZwc7OTtecqOdBMjWEpExKk6/6HkTx0Ww2Ux4QyUZ305k9FEYeeHpT1Wo19Ho9TTH1\n9/dDo9HAYDBQ+Ua1Wg2r1Qq9Xg+BQICBgQEcOXKEbgan0/nMS9ktkBr5qVOnaB+8VqtFKpXCb3/7\nW3zxxRf0gCiVSrRdjhDqyH+TWuXzm4ekQpvVs56XlN0LXtR90OqhT0hYKpUKuVwO9+/fx9bWVk/m\ns5Po9+2338bY2BgSiQSuX7+OO3fu0KixVCpRMmEn0I3uARaLBZ1Oh5/+9Kc4d+4clErlM/XLjY0N\nfPDBB9jc3NxTtqTdtddqNXC5XNhsNiQSCSwuLr70Z0nZYWZmBj//+c8xOTn5jNDUfjmChIy2vb0N\nlUoFtVpNuwPeeOMNlMtlhMNhKpJD2qaMRiOuXbuGtbU1aiRJtmhgYADDw8Oo1WrUwPv9/q4ZU+Jc\nyGQyOoHuRfevWYTF7XbTbpKD6tQ2gwR7ZrMZExMTdJDUrVu3sLGxgUwm0/K50u4+L5fLSCQS8Pv9\ntOT3us8RiUQYHx/H4ODgM5kKIo3cLRwKI08OXcIKFYvFyOVyVKRGLBbDbDZDKpVicHCQRrqk9s1m\nszEwMIBisQiDwYBYLNa1vkSyEW02G06ePIlTp05RBvfy8jIePHiAr776CmtrazSibdUgk8NRrVaj\nr68PQqEQpVKJiip0MhJtt0RBpCeVSiUajQbNDHSqB71VSKVSGI1GDAwMQCaTwel0wuv1Ynt7m+4j\nkpoVi8WUMLYXZ6kbB+fU1BQuXLiAI0eO0GxVvV5HOp3G4uIiHjx4gI2NjT1L8rb7uzKZDFarlfYq\nb29vIxgMIplMPvNzPB4PfX19OH/+PN544w0MDw+Dy+XC6/XC6XTumzofOSsAUK4I4bUIBALodDra\ne221WjE2NoaZmRlK0PP7/bQPWiwWQ6lUUnGlUCiEYrFImeHdMqbEUBGeTHPP+fP7l+wXEiDspSV4\nvyGXy9HX1wej0Qgej4d0Og2fzwe3203Fq1r9Lu1+93w+j0AgALVaDYVCQTO3L8vUCIVCaDQaDA8P\nQ6/X01Zkt9uNXC73jY6WTj6TQ2HkCYLBIJxOJ9UKDgQClF1PNsDIyAh2dnaQSqVoXb7RaGBychIm\nkwk6nQ4SiaRrRp6oe01NTeHHP/4xhoaGIJFIEIvFcOvWLfznf/4nTRG3AyJio9VqMTAwgMHBQQgE\nAqTTabjdbkQika5kKVohYRHioFwuh1qtRr1ep0phpJ1rvyGRSKDRaGgPf6VSofVKEn2KxWLazhiP\nxw9UfZ60VJ48eRLf//73YTQaqWANOXBu3ryJhw8f7nkP7OX7Eufz2LFjYLPZ2NnZoYQk4gCTPXz0\n6FG8//77GBkZgUQiQbFYhN/vx9ra2q7r+Z2AUCikzG7CD8hkMpRRLxAIoFQqMT4+ToWjVCoVqtUq\nJiYm6D7RaDTUCOVyOdy7dw/xeJz2dXfTuW2O0HU6HS1tkrIH2ecAqPHvdpq4UyAEQZPJhJMnT8Jo\nNKJcLiMQCGBnZweBQOCF2c5ugmT+dDodbffj8/nfOC+aS81WqxU2mw1qtRq5XA47OztwuVzUFhAd\nFgAdbR3tmZFvJU1C2r2uXbsGh8MBPp+PbDaLWCxGP6dSqVA5yVwuh0qlQo2NWCzGT37yEyoZ20ov\neatr1+v1OHfuHN5++23MzMyAy+Vie3sbv//973Hz5
k3K/mz3ukQK97333sPFixdputbpdOJXv/oV\n5ufn9/LVOgIejweFQoGhoSFMTU1BLpdTdapuyje+CqR1r1KpQK/X4/Lly6hWq+BwOFRS1Ww2o7+/\nn06N2tzcxPb29oEw8nK5HEajEdPT01RGmM/no9FowOVy4datW7hx4wacTmfPpv81Gg38+c9/BvD0\nPRgaGsIvfvEL+P1+pFIpcDgcmkombXORSASFQgEajQYAEI1GO64I9yqQASnj4+O4fPky5c0sLCxg\nfn4ea2tr2N7eRjgcxs2bN7GxsYEvv/ySzpqw2+04duwYfTf5fD6CwSAWFxexs7ODSCRCu026yfUg\n2ZxMJgMWi4WzZ8/SLFoqlYLP56N1Y7lcjmQyieXlZeqEHGSIxWJMTk7izJkzuHLlClQqFfx+P27f\nvo3Hjx+/cL/sNipuN11Pfoc4GEQDgcvlPvOcib0hQ8eGhobQaDQQi8Uo0Rl4msonGdlOl1AOTSRP\n6o0ul4u2VFQqlWeYry+qIwOgus0jIyP0Zew0mr3NCxcu0GEc0WgUfr8fd+7cwdraWtsvlFgsprK9\n09PTuHjxIq35rays4KuvvsLt27cRCAQ6/M2e4nUbjnishODW398Pi8UCk8mEcrlMpYW7OYjhVSD1\n9kajAbkNY2CSAAAgAElEQVRcTqMC4KmIUqPRgNlsBpvNRjabpZyIXoPcV5PJhHPnzuHo0aPQ6XRg\nsVgoFApIpVKYn5+nJaBYLNaztTYaDbjdbvD5fExNTeHs2bM4duwYjh49imq1SgWqSPfH5uYmfvOb\n3yCdTtMUM5Et3c9UvcViwdGjR+ngkCdPnuDOnTu4f/8+vF4v1RxPp9PY2dnBkydP4HK5sLOzg9On\nT8Nut8NisdDplPF4HMlkEtFolLb2djstTiJ5om44MjKCoaEhyrQnJNNsNgupVIp4PA6dTof5+Xms\nrq4e2LQ9mb9OyjpTU1Nwu91YWlrCw4cPsbm5+Y123FY5NXsx9CQ7SeSbBQIBBAIBpFIpZdFLJBJc\nunSJOoNerxdLS0vw+Xy0g6RZDKnT6JmRb1Wc40WbkET4r0sl1+t1rKysIJPJ0Nnne8HLBCbkcjks\nFgtmZmZgsVhQr9cRjUbh8Xiwvb1NZQzbgUajwcjICGw2G44fP47BwUHw+XyEQiF88MEH+NOf/oRg\nMNjVfsuXgbQfkaE7Op0OR48eRV9fH7hcLnw+H3w+H5LJZNfYxa9DqVRCPp+ntTM2m43h4WGoVCrM\nzMygXq9DLBbjk08+wZ///GcqYdpreVviPI6MjOAv//IvceTIESpb6vf7sbq6is8//xzXr1/vmAO1\nF+W1arUKn8+HDz74ANVqFUNDQ5BKpVTpjfwhKWOn0wmHw4FisUjlhbPZbEe+x+tA7u3ExATsdjsa\njQaWlpZw7do1PHjwANvb25TESu4F0YZYXl6G2+2G0+nEhQsX8LOf/Qx9fX1gsZ7OjgiFQlQspdvG\nkzyvRCKBeDyOSqUCjUYDqVRKxWWsViuNEIlq36VLl/Dv//7vWF9f31OWoRNKfS+D2WzG2bNn8c47\n72BkZASNRgMOhwOff/45lpeXqRZAM5q7kF5lwF/397sBeRdJJlkikcBsNsNut+Mv/uIvMDg4SGV6\nCW/q7t27+Oijj7C5uUmnBBLJ8L3osbwMhyaSB16+iV53QxqNBoLBIAQCAU6cOPENElAnQLTQJyYm\nYDAYaC/q6uoqFhYWKCGuVZDxm7Ozs7QWq9VqUSqV4HA4sLCwgHv37sHn8/W0fkxai/r6+mhqzWq1\ngsPhIJlM0ta5XrXqkOl8f/rTnxCNRmE2m2l6VavVUuJSOByGy+VCMpnsicPUDLlcjv7+ftqhYbPZ\n6BAMQrT73e9+h6WlpT0T7ZrRLEDSDpEpl8vB6XTi448/RiKRoAaHMIlJSnJnZwcrKyvPOKfkmVQq\nla47hCTjdOTIEej1eqRSKTidTjx48ACBQOCl7VhE7axUKmF9fZ1O/+NwONDpdPD5fM8QOvcD1WoV\nW1tbuHfvHng8HiYmJjA4OIhqtYp0Oo1QKEQzbZVKhXYnjY6O4tSpU1hfX9/VYKsXgWRTO1nfVyqV\nGBgYwNWrV3HlyhXYbDY6KOjx48dYXV2lY7V32373IuzVyJPgs1qt0v1K5pP09fXBYrHQzBUpVWaz\n2WfS/ORzulVmO1RGvl00Gg162PT19VFt+k7eUA6Hg+HhYdjtdiiVSnC5XFQqFaytreHJkyfPtLq1\nAhIZnz59Gj/72c8gFAqRSqVoxPHHP/4R4XC4Z3W1ZiIVmS1/8uRJnD59ms69j8fjiEQilCvRC1Sr\nVUQiEXz44YdYXl7G8ePH6VCe6elp6oyEQqGOMbv3sscIWWdqagq/+MUvYLfbIRKJUKvVkMlksLOz\ng/v37+O3v/1tx6NFoo3QbhaDtJx9/vnnuHbtGoxGI+0EyOVySKfTtNWTrJuohwmFQigUCgDoupFX\nKpWw2WywWCx0NCyJznfz3Um/+fr6OhYWFqDX66HRaOD3++H1elva63uNhsvlMtxuNxVDCgaDmJqa\nQqFQQCAQwMrKCp0oWalUMDg4SEemXr58mUrGtrqXSBavk/uPxWJBr9fj7Nmz+MEPfoCLFy+iWCxi\nZWUFDx48wPLyMuXK7NXAdyKaB0CjcaJeR9ZGspssFovu50qlgnw+v6f3tpX98p0w8sDTm0HadmKx\nGIRCYUf7tdlsNp2g1TyEQqfTwWAwwO12t9W3LBKJMDg4SBXwqtUqtre38eGHH+L+/fuIRCI9S4ED\nX28yo9GIEydO4PLly5iYmIBcLgeLxaJjLZ1O575GNi9aJ9GZXl1dRTAYRH9/P8bHx2E0GqFUKulM\ng05esx0QghqZdU/EZRqNBqLRKBwOB373u9/h7t27Xekl7/QY1GQySfkQJLX5oveOOLQqlQpbW1td\nH0VsMBhw/PhxWiaIRCJIp9MtRVSNxtN5DMFgEPF4nO7xdqSRieJeO9kToi2QSCTgcDgoUbBer6NY\nLCKRSNAOjUajAbvdjqGhIRgMBpw/fx5+vx/FYhFut7tl2e9ORqBEcGhwcBDf//73MTg4iEwmg5WV\nFTx69Ahra2uIx+Md4RB0eu3kc7xeL1ZXV+H3+2E0GinRNJfLYWlpCQ6HA6lUat8yhd8ZIw88TSmT\n4RdyuZzOX+4EGo2nU5VI6x45KPv7+zE1NYVSqYRQKIR0Ok37WVksFm3lIk4B6Zfk8/mwWq0YHx/H\nzMwMnXlOJmORfujmwRm9APGEiR7z1NQULBYLGo0GIpEItra2qMLffqvcPY96vU6ni4XDYcRiMZRK\nJXrgEc+bx+PtuUd+LyAaDzabDVNTU5QlnU6n4XQ6cffuXXz11Vd0UFGn0e4UupeByEq/CuTAZbPZ\nUCgUkEgkHbv+665JSgNEUGu3e5QYJKlUSkW6stlsS1P3mj+rXZDrkEEuxOlodjbIzzQaX4/NXlhY\nwOXLlzE8
PIyxsTF4PB46eKoVdPI9YbPZtOw3PT0NlUqFdDpNB8CQ4WOdOEe6QTYkWeNgMPhM8EUG\nRi0tLWFjY2PPrcTkOe4G3ykjT4ZfqFQq6HQ6lMvlloz8q9I6lUoFd+7cgUwmg91uh1gsBpvNxujo\nKCQSCWZnZ+F0OrGysgKXy4VYLAY2m41UKoV4PE7JgOVyGRwOB3K5HH/9139N6/AKhQI8Ho+2DpJe\n+14zYsmBodfrMTIyApVKBYFAgFqthp2dHdy6dQvb29td12duFSTC3NzcxJ07d1Aul+mgH5FIRAUq\negEej0ela9966y3IZDKkUil4PB7cuHEDX375JcLhcFcn5ZFDpBuqfS8DqR3z+fy2Wy1bWa/H48H1\n69eprK5cLm/JuWCxWJBKpRgdHcWPfvQjaLVaxONxxOPxfd/vzUb8Rf/e/DMsFgt+vx//93//B5PJ\nhNHRURiNRhiNRipH3CoxulPgcrlQq9U0o8Pj8SjHY2lpCU6nc9+ka9sFcbzJNEaBQIBAIAC3243H\njx9ja2urIxmE3f7+d8rIE4IE8RbbYdm/7AWo1+vw+XxYXFzErVu3cPz4cZjNZmi1WkpyIVPeNBoN\nEokEFAoFXC4XlpeXaRqtXC6jv7+f1rZtNhudp0z6ob/66iskk8kDIWRBohmFQgGDwUCFZpLJJB1+\nQQY3HDSQmeterxcjIyPQ6/XQ6XSUS9CLcbNEtvbEiRPUaSoUCtja2sKNGzcwPz9PBTR67eB1Gs0R\n9X7o7WcyGXi9XkQiEWg0GoTDYWSz2V0ZOaJdfvLkSZw/fx4DAwPY2dnBn//8Zypf2ytD+SLD3gwW\niwWxWAyj0QiZTEbPQzK6u1cg7c0WiwVGoxGVSgWxWAxOp5MqVPbS+d4t+Hw+RCIRbasj5Yb79+/v\na4sowXfGyBMiESFFELnbToFEhmtra/jkk08APG17UyqVUKlUyOfzkMlkdJxlNpuF2WzGw4cPUSwW\nqdpRuVzG7Owsrly5AqvVikbjqSTs5uYmHj58iA8++ADz8/M9rW83gyg9EXU74nl7PB4sLS1hfn7+\nwM5mJ3X6RCJB1at0Oh0UCgUSicS+RwyExNTf34+rV69SieZMJgOn04nPPvsMa2tr+zYxrBf7iziN\n+6FTUCqVkEwmkUgkEI1G4fV6kclkXlmuIc9IJBJBq9Xi0qVLuHjxIoRCIRwOBz766CN4vd4DY4ie\nN9qkFGgwGDA3N4eBgQGw2WwIhcKu6Ie0AoFAAJVKhbGxMfT19dGpm4uLi9jY2EA0Gj0w9/VVkEql\nUKvVEAqFdPDRwsICbt261ZO23O+MkSeet1qtprXYVm42SUu/Ls2SSCRw+/ZtqlPMZrOhVCoBfN0O\nNz4+jmKxCBaLhRMnTmBgYIDOlweetvZIJBIsLS0hFAohFArB7XZjc3OTDng5CAYeePpikgyFQqFA\ntVqF2+3Gxx9/DIfD0fM6/OvQ3FZEpuQRDfNOfHar353D4cBoNOLcuXMwmUyURxAKheB0OrvS/vki\n7GeanoC0Ho2OjgIAlpaWWvr95vLCbtZOiICEBGW1Wqlcqcfj+caBTISeTCYTJicnMTMzg9nZWQDA\n3bt38fDhQ2xtbbXV589ms8Fmszuuitds5In6Wl9fH06cOIG5uTkYDAZUq1VotVoYDIaezJUnjtPV\nq1fx1ltvwWazUX33paUlXL9+HdFo9ECfI82Ym5vDe++9B6PRCBaLhXK5jGAwCI/H03aX1V7wnTHy\nPB4PIpEIEomEsh1b3TS7USXK5/NwuVyYn5+H2WxGLpeDxWKhSl9yuRwKhQKVSgXpdBoajQYTExOU\nkV+pVODz+Wjkvrq6Cp/PR4lie2m56EoPJpcLhUIBqVRKo/hAIIB79+7B4/Hs2WvtptBGM0jvcz6f\n75mmN8mKEAeRz+cjk8kgFArB7/cjFAod2KxIJ0A4M319fYhEIm19Rqv7pV6v0ymQFosFUqkUCoWC\nqtqFw2EUCgVUq1Valjpy5AhmZ2dx7tw5NBoNWkpZXFxs2xh1U/GMfK5arcbg4CBGRkYwMzMDm80G\noVCIWq1G08udzG7uFmRewLFjx/Duu++Cz+cjmUxidXUVjx49wvLyckd1ILqNsbExzM3NQSqVUrVB\nv9+/p/N7L/hOGHmSoiJDKEhKsNUN3coDWlxcRCgUwqVLlzAzM4OBgQHYbDYMDQ1Rj10oFFKCE5FT\nDQQC+PLLL/HJJ5/A6/VSwYe99FQ29z13epORFxT4etRtOp2G3+/vmU59KyA8DaJYKBAIOt5Ss1uQ\nSFEikdC9mc1msbq6Crfbva8HRK9S9UQdjM/nt/z7hC3fys+T0p1KpcLQ0BDsdjsuXLiAtbU13L9/\nH5999hk8Hg+SySREIhGUSiUdcQ0A9+/fx/379/HgwQOEQqG2DXy3QDoWiMLj5cuXMTo6SgMPLpcL\nDoeDYrGIQqFAz6L9dHJJmp4MkCoUCtjc3MQvf/lLLC0tHRj+0W7AYrEosbtSqcDtduP69evw+Xw9\nc1K+M0ae9LBzuVw6673V1FQrLReZTAaFQgF8Ph+BQAB6vR5XrlyBTqejgwx4PB5qtRqq1Sqtu3/x\nxRe4ffs2VlZWqGZ2J9ANpjQRbJmbm6PqWqSsQIQ19opuvxhksAfRGwdAhxztt+IdSR9Xq1V674rF\nInZ2dhAOh7vS8nOQwOPxUK/X4fF4EA6H2/qMVu9PtVrFxsYGJV0ODw9TrQupVAqVSkWFnMj/I7MD\n1tbWsLCwgMXFRdpn3i66+WyJ8xOJRLCysoJ0Og2XywWRSAS9Xg+5XI61tTVaHuHxeM8EBd3ec2az\nGW+++SYmJiZQq9WwuLiIGzdu4NGjRwiHwz0T0GoVxEESCATgcrnIZrNwuVy4ceNG12aK7AbfGSPf\n398Pq9UKHo9HW5R4PN6uP4Ns9lZEMiqVCp48eYLV1VX64kxNTdFhHDweD5VKBYVCAcFgEA8ePMAv\nf/lL7OzsdLR2QzZfpwkfZHDK22+/TTXVV1ZW8OTJk56NlG0VREAkEAhQwpRSqezJoBciXJLL5ZDJ\nZCAUCpHL5ajQyn7dz/0qkTx/TZI6djgc2NnZ6fo1yTs6Pz9PtfNlMtkzM+RnZ2epsSP1++3tbayv\nr+PJkyd4/Pgx3G73noiwzRmIbtxzcm6trq7C6XRCoVDQefMTExOwWCxYWFiA1+sFAFo6JONpuxlF\ns1gsHDlyBH/7t38Ls9mMbDaLL7/8El988QU8Hs+hKk8Rfg/JhJCOKCJa1St8J4w8m82GyWSCxWIB\ni8VCKpV6pTb1y9CuV0tSwjdu3EA8Hqf1eRK5lMtlepiHQqFD4blyOBwoFAqoVCrw+XwUCgWk02ms\nr69je3v70LycRIkqEAhgcXERW1tbSCaTPVk/MfI3b97Ev/zLv0AoFCKbzWJ+fh6RSGTfjG4vsgVs\nNhszMzOYnJzsqEjV60A6Wra3t/E///M/CAQCCIVC1NhbLBZwOBxal
9/e3savfvUrbGxsIBKJ0Eiz\nExFvt0cEE0clm82iUCiAzWbj8ePH2NzcRCwWQy6Xo0JehADY7b3AYrFQLBbh9/tRrVbp/IBgMHgo\ngoTnQc765nbQXmfgvvVGnkxHMxgMMBqNVDksGo22HC3vxVOv1WpYX1/H+vo6jayJke/G5KFug8Vi\nPUPWIe1Ifr//wPbFvwjkkA+FQnj8+DECgQAtNew3w5ysxel0wu120+wPmUX+bQaLxcLg4CDsdju2\ntrYglUr37dq1Wg2xWAyJRIJmUlQqFZU9JvLMtVoNS0tL+PLLL+H3+zva5bJf+4w4kgSZTGZfrvsi\nkHOQGHcAiEajCIVCSKVSh+o8JCAa9clkEsFg8EBMsvzWG3lCDFOr1VAoFCiVSshkMh2rGbcDkp5r\nPiQO44YmDkqzeEm1Wj00UXwzYrEYHjx4gHK5TGc89+qZEKlVch/3cki8iNR1EPda87CQer2+7wcj\neSdJFMnlciEQCCCRSKiT3mg0aDvjfoyQ/S6AcKRIq9lhdmjJbIB0Oo3NzU3cvn0bDoej5/vkW2/k\niYFPpVJYX19HqVSC2+2m6bdeYT9TOK2yjnf7mc11J5VKhWw22/PxrO2iXC4/o4ndyxezU3vjRWTL\nXh84LwKZF5DNZulI4l7so0bj6ZS8XC5H/9/zE9YO4v07zGg0GgiHw7h9+zbEYjG9/71o5esUwuEw\nVenbL+GqV6FnRr45wujmiyORSKDX6+HxeJBKpZDP57G+vk6jtVbQC0JSJ9BsNDqZgq5Wq4jFYlhc\nXIRWqwUAqt7XKezXPe913azTIPetefBRq7+7X/eDtM1Fo1EqJlMqlXoiyPM8ep1qPQzYy36p1WrY\n2tpCOByGSqWCSCRCOp3eNyPfyT1GgqlAIAAOh0NbL3uNnhl5otXeTVYpSQGRWhuXy6WGqdeHx36h\nOQ3afODv1agRZnIkEsHjx4+h1+shEomQyWTQaDRoBLTX+3xYHatW1dc6cT3yz+aRpe1cv/mz9iOK\nJWSl1dVVhMNhsNlsRKNRcLncZyY67gYHwTFoF4d57e2AnP9kvjrhn+RyuV2XQ54vR7Vy/8i70u57\n8jzIuN/19XXKTcrn8/RazZ+/12fdXNp6HXoayXfyS7/o8wkJIpFIUDIYUZVrJ7I9zC9hs+Ht1OcR\nByqfzyMej0Mmkz3DTO/URj6s6NV+eT5L1q5Ay/O/163vQw767e1tBAIBiMViKvvcDg7je3qY9/le\nQBw8wsPgcDgolUq7yrIetHtGSph+v/+F4mMv25ft7NdWzsae3SUWi0W/VTdSyeTzCLu++bBrN4Nw\nmKPK59GJ70CUtJqHipDpbZ3qFvi23PP9Wn8nymD7fc/Jewp83Q1DoqJ2MxH7mUXpBA7jmgn2ul+a\nM0fdUuZ81bU7fa39eH+a7tVrbXgvXaHDtZNxOCMEBr3DfvFOvm3Yq4N0GA3mYVwzg97i/+8Xxsgz\nYNBLHNZMRK/QzB/ZS62UccgZfEfwWhv+rW+hY8Cgl2AMze7B5XIhl8sxODgIuVwOLpcLh8MBv9/P\n3MfvKDgcDi237oVM+l0GY+QZdASkNs+8hIcfvdKu12g0dIyrVqsFh8NBMplEIBBoizvDoDU085cI\n8a3X77FUKoVMJqMDvchQsUqlglQqhWw2i2Kx2PN1HmT03MgzabXDD0K8Iy1P1Wq144pxh3mfdGPt\nhKj2bRFq4XA4OHPmDK5evYrZ2VkIBAIkk0ncunWrrc97naPSDZLvYb7/wNPBNEqlEsViEfl8nnbJ\n9Op7sVgsDAwMwG63o7+/H2azGQMDA+DxeEilUvjkk08wPz+PjY2NPQubfRue38vQcyPP4HBDIBBA\nJpPRGdukP/QwzYA+rCDdDcDXrUiH8aDi8XiQSqWw2+2Ym5uDzWajwjjtTnfrdjbpee2J5s6dwwYW\niwU+nw+LxYK5uTmEw2G43W6EQiGqe9GLNXE4HKhUKlgsFoyOjmJkZATDw8PgcrmIxWLwer0IBAJw\nu9170v74tmd+eq4deBhfCgZfQywWY3BwEBcvXsTPfvYznDhxAgaDoaOqd8Dh3ifdWjuZXU0mGpKS\nSSewnwcfkZ622Wyw2WwQiUTw+Xz46quvWk7VA7sz8O0y9pv/EK1yLpfb0Xu/32Cz2ZBKpZiensbf\n//3f4yc/+QlOnDgBtVoNDofTk+9FBniJxWIoFAoolUooFArIZDIIhUIIhUKYTCbo9Xq6xr3oahzW\nZ7cbfCcjebIZyDz55gErDFoDj8eDUqnE0NAQJicn8eTJk44b+F7gILPiSZQzMDCAqakpbG5uYmdn\nhw4L6sSa90ulj8vlYnR0FFeuXMHo6CiEQiHq9To8Hg+uX7/ec+1vcq+5XC7V2BeJRNBoNNBqtWg0\nGkgmk9jc3KTqZocJJFp+9913cenSJUilUsRiMTidTqTT6Z4NapLJZDCbzWCxWNja2gIABINB3Llz\nh055czqdcLlc4PP5VFeh3exhpzQ92Gw2RCIRJBIJpFIpqtUqMpkMpFIpxGIxOBwOcrkcYrEYSqXS\nvsxo+M4ZeRaLRR8Cj8dDrVZDPB5njHyb4PP5UKvVdJQvkQ4+iMaxGcTR43K5UCgUEIlEdLQrqc9l\ns1nE43Eq7tNLSCQSKJVKOgJXpVLh2LFjuHTpEm7duoVGo4FYLIZ8Po9KpYJKpdKyJCzBfhl4kUgE\nnU6HmZkZfO9734PJZEIul0M4HMbKygoWFxe7vo6XgZRCmqN1Pp8PqVQKnU6H0dFRjI6OPqO9XiqV\ner5PWgGLxYJEIkFfXx+uXLmC2dlZVCoVBAIBbG5uIpfL7fv3IaUDnU6H8fFxlEol7OzsIJvN0rOa\nSN+m02kqV07OnXawl/1O9odQKKQkQVJ+MpvNKBaL8Hq9UKvVtGMkl8shHo8jmUwimUwikUh01dgf\nWiPfjmBGcwQ0PT1NtdeXlpZaMkzdEjk5jIIYIpEIJpMJLBYLfr8fXq8XkUjkQB92hEXM5XKh0Wjw\nzjvvwG63w2g0QigUgs1mI5PJ4N69e/jNb36DeDyOQqHQk7WSPTExMYEf/ehH8Pl8qNfruHDhAmw2\nGzQaDUwmE0ZGRrC0tIRgMIhMJoNwOEwPj4P4LEgN+P3338eZM2dgtVqRz+exuLiIDz/8EPfu3evZ\n2lgs1jNlkFqthlqtRrNWg4ODOHnyJN544w3U63U8fvwYi4uLyGazPRtf3SrIWWixWDA7Owuj0Yhq\ntQqXy4VAIIBcLtcTTg2Px4PRaMTo6CiOHz+Ozc1NrK2twePxIJvNolAo0OxCc3q+F+OJgaecJLVa\njbGxMczMzGBubg4OhwPpdBozMzPIZrNYXl6mJYVyuQyTyYSjR49SZ/bDDz+E1+vt2pl/6Iw8ib60\nWi36+vogl8vRaDQQCAQQi8WQTqdfGnlxOByIRCLYbDZcunQJW1tbKJfLtC1jN5BKpZDL5XQe/F5S\nWt8GRiefz4dKpUKxWMTm
5iaCwSCy2eyBNCwcDgdisRgymQxKpRImkwlHjhzB+fPnYTabwePxwOPx\nwOfzweVyUalU4HA4sLKyAq/Xuy9rJJGMSCTC6OgodDodQqEQZmZmcO7cOXi9XuRyOfT394PD4SAW\niyESiaBarWJqagpmsxler5dmJsLhMIrFYlvr6DSpjESOSqWSHopvvfUWuFwu5ufn4fP58OTJE3zx\nxRc9S9MT4yeXy6FUKiGRSKjUqkAggEqlglqthtFohNVqpcTHq1evgsfjYWFhoWcp7t2Aw+FAqVRC\nKpVCIBBgenoas7Oz0Gg0SCaTuHfvHra2ttpyVjpxnpF++Gq1StvkgsEgotHoM44HqdmTMgq55+22\nWu5FkrfZ+Tt27BhkMhni8Tj6+/vhcrmQTCZRqVRQrVZRLBbBZrNhsVgwNDQEtVoNl8tFNe+7gZ6P\nmm315pKD+ujRo3j77bcxNjaGRqOBTz/9FAsLC1hfX0cul3th+oOkZkdHR3Hx4kUUi0WsrKy0tAaN\nRoORkRFUq1UkEglai9sLq5N4pcDhG23J5XIhkUiQTCbpy9guI7qbIC+jTqeD1WrFyMgIzp07h+np\nabDZbIRCIXpASyQSTExMQK/X48yZM0ilUvtm5AHQFOrf/M3fYG5uDteuXYNarUZ/fz/EYjFCoRCW\nlpYQj8eRzWZx584dVCoV/NM//RMEAgEePXoEiUQCPp9PU5qtgjjTwLNR0l4OQzabDZ1OB7vdjp//\n/Oe4cOECZDIZPvroI/zbv/0bzQLtdkBJN9AcRJjNZmi1WkgkEggEAromMqeBy+VCIBBgdHQUf/d3\nfwcAWFlZQalU2nNLV7fA4/EwNDQEs9kMmUyG06dP4+TJk1AqldjY2MCnn35Ka+Ct4EVOYTuoVqsI\nh8MQCoWQyWRwOBzY3t7+hp49CbJqtRo4HE7bJUIyJ6HdNZPhSoVCAdlsFplMBgMDAzAajfD5fHj8\n+DE+/vhjFItFukatVouHDx/iH/7hH3D8+HFcunQJ5XL522fk93JYcDgcmEwmnDp1CgKBAB6PBy6X\nCz6fj6ZzXgSpVEqjnVKpBK/XC4/Hg1KptOvrF4tFJJNJ5PN5pNPptohOzcIxBC/7DD6fD7lcTjfS\nQagPA18/B6lUir6+Png8HqytrfWs5eZ1MBgMMJlMUKvV0Gg0YLPZuH//Ph48eIBSqYRUKoVQKIR6\nvdI1GSIAACAASURBVA6pVIrt7W0oFApaiyUaAN3+bs3M7XA4TNvIcrkcdnZ2cPfuXTx+/BjRaJSm\nh4ki3K9//WsYjUbweLxn6sPtRFjNQ5z2OpqYsLctFgtOnjyJ8+fPY3h4GMViEU6nEw8ePIDT6UQm\nk+m5sAmJEA0GA8bHx3HkyBGUSiVaQyUGPJfLIZlMIpvNIhAIYH19HS6Xi9aGn08fd0qudy/g8XiQ\nSCQwGAzQ6/V0fblcDtvb23j48CHC4XBbpalOZXsajacjrMPhMB4/foxYLPbS85zs0dftb+LgCwQC\nOuK8UCjs6ndfh0qlgkwmg+3tbUxMTFAnfGNjA7dv38bCwsI3ssvxeByVSgVbW1sYHh6G3W6nRNNu\nnO+HLl0PPH1oarUa4+Pj2N7ehs/ng9PphM/ne+kDY7PZUCgUOH78OHQ6HXw+H7a2tuD1eluKPAuF\nAsLhMDKZDAqFQktpLeL98/l8AEA+n3/tAxUKhejr60OlUkE6nUYul6NeYa/qUMDXxCmVSgWj0QiX\nywWPx3PgGMbEGRkcHITdbqeEtGQyCYfDgc3NTXpwk8NEIpEgEolgfHwcY2NjtGXnVQ5kp0DUxkql\nEhwOByUYyWQy1Go1/OlPf8Lt27e/0RHCYrHwxz/+EYODg5ienn6mrkwinXbWQf69HZD9LpPJYDKZ\nMDU1hdOnT+PMmTMolUpYX1/HnTt3qHE5KOByuTCZTBgfH8f4+DgCgQA16Ol0GjweD16vFw6HAz6f\nDxsbG3j8+DE8Hg8tCXK5XBQKBXrvCDlMIBCgWq0+M62RoNvlO4lEAr1eD41GA6lUikwmg0wmg52d\nHWxsbMDhcCCTybTNK+jU2uv1OlKpFFKp1Gt/ljjEzU4TIcOJxWKIxWLw+XwIhULKbieOMSEW7uUM\nrdVqyOVy8Pl8dJy5x+PB3bt38cknn7xQkrlQKKBQKMDv9yOdTmNgYICK/HTjTD90Rr5er1NPUyaT\nYXNzE/Pz88hms6/cZAKBAHq9HjMzM8jn8/j444+xublJPbrXgXjhZD49eUlbIetJpVJoNBrodDqU\ny2U4HI7Xes0ymQzj4+OUe+Dz+eDz+Sg5hjgo+x0VkGyKxWKBVquFSCTa1+vvFnw+HzKZDLOzszhz\n5gw+//xzOBwOhEIhpFIp6mg1379SqQS32w29Xg+TyQSTyQStVotQKNR1Al6j0aCHRiaTgUgkQrVa\nBZfLhUgkQjAYfGH2iBjlYDCIYrGI73//+7Db7TTblEgk2lpLu2CxWJDJZDAYDJibm8Po6ChEIhEE\nAgHW19exsLCA5eVlOJ1OBAKBtq/TaZDvPDg4iLGxMfrcU6kUPB4P/H4/qtUqNjY28Nlnn1FDmU6n\n6btoNpvRaDTgcrlQrVZpiVGlUqGvrw/ZbBaRSASJRII6AiS7180xq1arFTMzMxCJRDR7uba2htXV\nVej1ekil0q5ct5vg8XiQyWQolUr0/kskEmi1Wpw9exazs7Po6+uDQCBAuVymTs0XX3wBl8tFDf1e\nyiv1eh35fJ5m2lZXV7G1tfWMk/cikHe2WCyiXC53rVf/0Bl5YtBIhLa9vY2VlRXkcrmX/g75WaFQ\nCIVCgXg8DofDgVgstut0O/EY6/U6isXiM6nb1/2+QCCAUqmERqOBWq0Gl8vddcsE6bPs7++HxWLB\nkSNHqKfo9/upjGwvjDz5TqRt5CCm6UUiEQwGA3Q6HaRSKZLJJLxeL03Nvwi1Wg2pVAq5XA58Ph8G\ngwEDAwNIp9P7wrInLXDZbLal32s0Gsjn88jn85DL5ZicnEQ0GkWtVmvLyLcL4hATp/rMmTMwGAxY\nWVmBz+dDsVjE4uIiNjY26Dt4kMBisaBQKKDVaiGVSpHL5Wg5kDhZpH5KonFyxggEAtTr9W+U5MiB\nXi6XaTbpRdft1vdhs9mwWq04e/Ys8vk8kskkQqEQPcsI4fQwicIQR9Jms1ExqHq9DrVajcHBQZw5\ncwbT09NQqVRIJBJwOByoVqvPyG53wqkiJQa/3///2juz5yav849/te+7LduyZAuveMNgTAATE9Ih\noRDSTidLk0mn7VWn0z+hve8/0OtMO+1MM2m6JQ0h+U1CMMFxAINt8CbZlqzFWmztu2RZ+l0w50RQ\nMPImy/R8bjJD7NevXp33POfZvg++++47JBKJsvbkTCaDeDwOkUhUtrO5HQ6ckSehGA6Hg/X1dTid\nTiwtLW1aXEQOBiSXRk7f5fa1lvZUl+YqybWfhVwux+HDh6HRaCAQCGC1WuF2u8sy9
KFQCGNjY1Ao\nFBgcHERrayv8fj/C4TAymQySyeS+5PhIuJ5U6XK53D3NWW83lCmVSmEymbCxsQGHw4Hl5eWyWvzI\nekmn07TY0m63IxQKbfcjVATyjGQyGcxmMy5cuIB8Po+JiYmKrg/Sqvryyy+jvb0dsVgMo6OjmJ2d\nRTgcpgeZaqgvKYUYRJJqEAgECAQCNLf6+CGv1IiTIiyfzwfge5EtUiCWSqXg8/loSPbxmpy9ehak\nzsBsNmNoaAi5XI5WfWu1Wmi1WuTzeXg8ngNj5EsLOY8fP46Wlhbo9Xqsr69Dr9ejq6uLDrYJh8O4\ne/cu/vrXvyKdTiOZTMLtdiMej1MnaTdYXV3F2NgYWltbUVNTQw9NT3vvYrEY/H4/uFwu7Ujai3f0\nwFXXy2Qy9PT0oLm5mb5UuVxu0+vweDwMDg7izJkzEAqFiMfjWxIgIAb08RxluRGAmpoaDA8PQ6lU\nIhqNPhImehYk50MKkqRSKXQ6HQwGA1wu1769mHw+H/X19aitraXRjb3OWW/H0EulUjQ2NmJ9fR0O\nh4O+2JshFAphMpnQ2tqKuro6qpVNaimqHdKqJpfL6ftRSUgdidlsph789PQ0FhcXsba2tm+aA+VA\nPHLSghiPxxGJRBCPx5+5z5A9gnTblG7apNee/NyTfvdp/2+nGI1GnD17FqdOnYJWq0UikYBKpYLB\nYIBMJoNKpUI4HKbtgNUO8eBPnDiB48ePY2BgAIVCAZFIBFNTU8jlctDr9eBwOHT/dDgcmJmZoZ58\nMpmkUdzdOlylUil4PB4IhUJIpdJHKv9LUSqV0Ov16O7uhsFggNVqpUN29uKgt69GfjsLWqFQ4MiR\nI2hqanqkLWGzv8Pn83HixAm8+OKLVDUpGo1uuvk9fn/khX3S9K/N4PF40Ol0OHnyJHg8Hubn52lu\nv5xrlHoI5HfIIio9Ke5HuJ6EwYmc6l6dREurkreKWCyGXq9HLpeD3+9/5nPncDiQSqXo7u5GT08P\n9Ho9MpnMvml4bxXyrEgl8draGkKhUEXXh0QigdlsRmNjI8RiMe7evYvr16/D6/U+MeK22fe7ky6c\n7ba1krA7h8NBJBJBNBpFJpMpawMuFotPPAyU+67vNhwOB01NTXjrrbfQ3d0NiUSCWCwG4KGxIWI/\n6XSafsZqTLuVwuPxoNVqce7cOQwODkKr1cJqtcJiseDLL7+Ey+WiKdFneem7WexI6gJI0d/6+voj\nBycSfaivr8fx48fR39+Puro6fPrpp7BarXvWdrmvLXTb3bSbmpogEAiwuLiISCSy6cIkBUsNDQ1Q\nKpX49ttvacHdZg/1aRvOVvqFORwO7fdUKBSwWq1Prbjc7Bp8Ph/ZbBbBYJC28E1NTcFut1ek4vtJ\nkLYocmJVqVQwm81IJBKb1kdsh514OWSdCQQCqma3GXw+H1qtFj/4wQ9w9OhR5PN5WK1W3LlzB5FI\nZFv3X0mIkcpms1hbW8PMzAwcDkdF70EgEECj0YDD4cDr9WJlZQWrq6v/lXsvTYOVtiiS3CqR5d3u\n974dSN48mUwiGo0iHA6XrYOxl974dihtc21sbIRSqUQul4PdbsfMzAxsNht8Ph/W1taQSCQQCAS2\npalQabRaLUwmE7RaLbxeL65evQqr1YqlpSX4/X6acy/3ULbbxGIxOqq3dG/m8/lQq9U4fvw4fvGL\nX0Cr1cLj8dDvYa84UH3yEokEWq0WRqMRuVwOY2Nj8Pv9T70Wh8OBwWBAX18fWltbIRaL6aLe7iCP\nrfwOl8tFXV0dDAYDhEIhldDdShEUUZQj6luFQgHxeBw+n48qKe3HpkLa0EKhEGpra6HVatHR0UF1\nvHeb7X5GUhRTV1cHmUwGhUJBFbJKIfrTZrMZx48fx/Hjx1FTU4PFxUXYbDYqslTtkF5o0itPlCAr\nBanVaG5uhk6nQy6Xo8WApS1Ora2tMBgMUCgU4HK5tG84m82iubmZdhgQTYpKrXEScieHi60eMqrF\nwAMP10JHRwf6+vpQW1sLgUCASCRCuxscDgdcLhf8fv9TiwH3C6FQCJlMBuD76AiJULW1taG7uxsi\nkQgrKysYHR2Fw+FAIBDY57t+CBk8Qw6xJP2j1WrR0tKC/v5+9Pb2YmJiAqOjo1Sbf684MIV3pDfe\naDTCYDBgaWkJn376KZxO5xNfLBIaOXbsGH71q1+hvb2dht1Iwd1WX8ithnb4fD7a29vR0dEBAIhG\no7SXslwkEglMJhM6Oztx+PBhCIXCsvrr95pcLofZ2VnU1NSgvr4eOp0OPT09uHXrVlXJ9ZJQpNls\nhk6nw7Vr12Cz2R75/kuHpVy6dAmXL1+mM82dTiecTid8Pl/VqpiVIpFIoNfroVarIRaLkcvlKpqT\n53K5UCgUdBZAJBKh0x4JPB4PFy5cwOuvv07FZjweD/7+978jFArhpz/9KVwuF7744gvMzs5WNIxM\nIj8SiQRKpRLJZBISieQR6VTCbq/z3b6eTCbD5cuXcenSJahUKqyvryMSiWB2dhbj4+Pwer00olkt\n7yvwfc69qamJHgBDoRBVrHzhhRdw5MgROrxmZWVlT43kdiAdYKRQvL6+Hu3t7Th9+jS6u7uRyWRw\n5coVfPTRRzR9slccKCPf3d2NU6dOQafTYXFxkfYAP+ln9Xo9hoaGcP78ebS1tVGBE1JVX6osRkYV\nkmEemx0ayi3UIGFIg8GA+vp62ktbzgtF/pZCoUB9fT3VVff5fLDb7VS0YmNjo+wCxt3eQEi7Fimi\n0mg0aGpqQl1dHRQKRdXo18fjcdhsNpw+fZoORJFIJLhx4wby+TyNlBw+fBgnT57E4OAgDAYDuFwu\nnE4nvvzySywtLVVdm9fTIC1FhUIBHo+Han5XCpFIRHXfORzOf2nnnzhxAhcvXsSLL76I3t5e6HQ6\nrK+vQ6VS4fXXX0cikUBvby+ampqg1Wrx2Wef4fbt2/D5fBWVSyYHOrlcDrVaDY1GQ6NmpfMNSKtl\ntRlKorPf0tJCp/tZLBbcvn0bMzMz8Pv9+zaE5mlwuVxoNBqcO3eOFqVxuVzkcjl4vV5Eo1F6KJme\nnkY8HqdKidX0fpbaFqlUCpVKha6uLnR1dUGlUsFisWBkZATfffddReplDpSRb2trw5EjR6BUKiGR\nSKBWq6nh5PF4tPUFADo6OvDmm2/i6NGj0Gq1tCWKCIuoVCoqdSiVShGPx7G2tvbUWgESdilXkYj8\nvFarhUqlQjKZ3LSqmPTVAqADShobG2E2m9HS0kJDUw6HA1ar9REjX46B3w1d6cevx+Px6HOXy+X0\nQGIwGGgVNWkf2q8NMBaLwWq1wufzQSgU4tVXX4VQKITL5UI2m4VEIoHRaMTp06dx+fJlqFQqCIVC\n5HI5LC8v4/r163t+0n4cskGQ77ZcL5aoOra0tCCVSiEUClW8ml0ikUChUEAqlSKXy2F1dRWFQoGG\n5V966SX87ne/A5fLRT6fRywWo4Vqvb29
9N0ym83o6uqiLa/pdJrKge41pQdYsVhM5x2QjhhykJHJ\nZPB6vfD7/QgEAlVl6NVqNcxmM4xGI+RyOW0j++yzz2C1WhGNRvf9XrlcLsRiMYRCIS0iNplMeOed\ndzA8PAyNRgMul0slyK1WK6anpzE7Owu73Y5MJoNAILCnPeZbpXRvFAgE0Gq1MJvN6O/vR3t7O+Lx\nOCYmJvCvf/3rmd0au8WBMfLFYhEOhwMWiwVNTU3o6+vDr3/9a8zOziIYDEKj0aC2tha1tbVIpVI0\nv0qKrTgcDtRqNXp7e6HVarG+vk4VrTKZDK5du4Z//vOfz7yHciitFo5Go0gkEmhoaKCiMU+Ss5TL\n5RAKhSgWi2hpaUFPTw8MBgOamprQ1tYGAAiHw3C73TRHuZ/V3qWDLrLZLFKpFDgcDo4ePUpzsXNz\nc5iZmaFzzveDTCaD1dVV/N///R8ymQxeeOEFDAwM4Pe//z2ddBWPx8HhcGCxWNDa2oqGhgak02la\nPFPJDYSkDsjsadINQqJPpZSux1KxJ61Wi+XlZTgcDsRisW1vJI/LhZZz72KxmA7HIUWMx44dozn4\ns2fPgsvlIhaLYXZ2Fh988AE8Hg99xiSy9vLLL+MnP/kJjh8/DuChZz07Owuv17un3SREdc5ut8Ni\nscBsNuPw4cP42c9+BqfTidXVVaRSKRgMBhw6dAgSiQRLS0v44IMPaG57u+zmZ3rxxRfx3nvvobOz\nE4VCAYlEghZ5PUsddLtsNVool8upYE1HRwd9D4lTQ/ZtPp8PuVxOpy6Gw2E6I4Psg9WQIix1MkmE\nsL+/HxcuXMChQ4cgFApx+/ZtqhNRqX3lQBl5MgTl1KlTMJlMqKurQ1NTE2KxGLRaLWpra6HX6+nJ\nn4z2c7lciMfjcLvdtBqfbIgqlYqeqJ7lMW1lIZECnlgshkQiAbFYTPWwY7EYLcojrXBKpZKmDYiy\nHQlf1dfXIxAIIBQKIRAIIBKJ7LuHTKIUarWa9gYTVa3Gxkaq0x2LxWj0Yz+m0+XzeSQSCdy/fx/Z\nbBZyuRxDQ0M4e/YsisUiwuEw7ty5A5fLBZfLRUPNdrsdbre7ot6ZSCRCQ0MDDAYDjEYjOBwOMpkM\nvF4vstks3eTIeGMSmVIqldBqtWhoaEBbWxuUSiVsNhuV1qwkJDJGhKPEYjE6OzshFArR0NAAPp+P\nr776CqFQCPfv38cnn3xCZ06Utq/pdDpcunQJRqMR+XweFosFwWBwzyVwyUEqGAwiEAigubkZarUa\njY2N1IhnMhmIRCKYTCa0t7fDaDRienoaxWJxS50zewHpJuru7sbw8DC4XC68Xi9mZ2dhs9lo0fF+\nUip5XFNTQ4uKs9ksEokEHjx4AIfDAZFIBIFAQN8DEo3j8/mPCBaRboz9/DwkhSMQCJDL5ej42cbG\nRnR0dECv1yOfz0Mmk4HP51e0VfFAGXm3201fdqPRSEPhxWKRho7JeEwej4dUKkVnVE9NTcFqtcLj\n8dAZ8nfv3gWfz4fb7UYgECjLyJfjPZMTJpFHjcVi4PP5OHz4MC5fvgyHwwGFQoGLFy/CZDLRaWdE\nvCGRSCAUCmFqagqhUAgikQhzc3O4efMmbRPZah5wN70fshmXPgvSykfCmBKJBF6vF8vLy/RAstWi\nw90kGAxifn4eo6OjNARLNsCRkRE4HA4olUocOnQIOp0OY2NjmJycrGixnUqlwksvvYSTJ0+ir68P\niUQCkUiETuISi8UYHR3F9PQ0VTuMx+Po6elBf38/jh07BqFQiHA4TNX6drL5bWe9CIVCiEQiOghk\nY2MDJpMJarUahUIBN27cwD/+8Q86mrM0bEzemdLiWOLFtba2wmq1VmQzJ948ERVyu924f/8+RkZG\nYLFYqMhJZ2cn+Hw+Dh06hFdeeYXmjsln2Q8kEgkMBgN0Oh3EYjGi0SgWFxfx+eefY3Z2dk88yK2k\nAskeajQaYTab4fP5sLS0RIdvZbPZR0b5ymQyGhlqbW3FyZMnqWIph8NBLpd74lTPSsLn86FUKqnD\nRgbryGQy8Hg8xONxyOVyiMVitLW1YW5urqL3emCMPPBwJOLi4iI+/PBD3Lp1C7W1tfQkLZPJIJVK\naZiHzH8OhUK4e/cu7RcmBWEkZEgEL8oR1eHxeFv6cvL5PJxOJ2ZnZ9HU1AS5XI6BgQG0trZCpVKh\nv78fOp0OEomEerv5fB7RaJR6lqWDK8imvh0Dv5uhfZIzFggE9IUkHrPb7aadEKRnlUyAItK3+wF5\nrvfv30c6nYbNZgPwsONhfHwc6XQadXV1dNAIGSBSKQ9BKpVSOc729naqm65Wq2E0Gqly3fr6OgwG\nAzY2Nuh30NTUBJPJBIPBAA6Hg1gsBqVSia6uLnownp6epsqJz6JU8Gmrm1E+n0c2m0U6nYZKpcLh\nw4chEokQDocxNjaGsbExLC0tPTUNQt4zsVgMuVxOvbjFxcWKDLIhe4fRaITRaAQALC8v4+bNm7DZ\nbAiHwygWi5ibm6PRwPb2dvT09ND2XKfTuS+aClwuFyaTCT/60Y/Q0dGBQCCAe/fu4ebNm3jw4AGt\nj9jtjgDiWJWzh5pMJgwMDECj0aBYLGJychJut5uOTSb7Q6mok1qtpoV4nZ2dUKvVaGtrQzKZxPLy\nMqRSKZxOJ22nroQBJfc3PDyM3t5eGsWZm5ujCqBerxfj4+MIhULo7+9HR0cHbYeu5F54oIw80ar/\ny1/+ArlcDq1Wi1dffRWDg4N0RnJtbS00Gg0dsxmJRPDgwQM4nU4EAoFHFsBW88TlevKEQqEAh8MB\noVAIg8GAnp4emM1mNDc3Q6lUQqVSAQAtyiMjT5eWljA5OYnl5WXU1NRAJBJRBa5cLrfvM7dLW0ME\nAgH4fD6SySQNfcfjcRgMBkQiEaTTaVrguJODxm6c1Ml408XFRXzxxRc0RUPqNQQCAU3zJJPJHdcR\nlFsUSTS4W1pa0NbWRmd9l3oI6XQagUAAXV1d6OzspAOC9Ho9DWnmcjkIhUK0tbXh6NGjSCaT8Hg8\nuHbtGq1NIF4G2QzX19dp2J/cK8krbieKkc1m6Xo2Go2oq6uD2+2G3W7Hxx9/DJvNtmm4mAiGaDQa\nKBQKRKNRrK6uYnx8HHa7vez72O56IemC5uZmOvPAZrPh1q1bVGETAJ2eZzabaXHVwMAA7WbYjpHf\nyRonh6Pm5ma88cYb0Gg08Pl8uHHjBkZGRmCz2fZk33jcyD/r51pbW/Huu+8iHo/TLqEn9beT9ZlO\npyGTydDa2oojR46gra0NnZ2dNHw/OzsLLpeLTCZT0Ughj8eDUCjEhQsX8MYbb8Dn8+Grr77C9PQ0\n1tfXkUwmkUwm4XK58N1339EW3OHhYVpfUG6n1k45UEYe+D4klE6nsba2hmvXrmFqaormoU6fPo2B\ngQEYDAYkk0kEAoFdERsgG+JWvpRisYhsNguHw4H//Oc/GB0dhVqthlQqRX19Pbq7u8Hj8ZBIJGC1\
nWulJO5FIIB6Pg8fjQaPRoFAoIBgM0rbBvRbxKedaZDpeLpeDQqHA3Nwcrl27BovFgmw2C5/PR3P2\nAHas+77b9088GnJdgUBADSqPx6Mh45383XJ+VyqVQqvV4o033sDFixdx6NAhqpkOPNQjICNxk8kk\nEokEcrkcxGIx9RiJUfH7/Th27BjeeecdSKVSyOVyNDU14eLFi+ju7qYiHQBoseTExARmZ2fhcrlo\npTsx+tuBPDe5XE5TbJ988gmuX78On8/3zE24sbERb7/9Ns6dO4dMJoOrV6/i6tWrcDqdFVFjI5s3\nSXeQw9PjXirZDyYmJmg9UE1NDdrb23Hv3r1t/e2drDWSWiBrGAASiQQVvCmnknurbcLknsmhbbPr\n8/l81NbWorm5GW1tbYhEIojFYhCLxZten8fjQalUYmBgAH19fTR3TxRASaS0nKFTuwVp0e7q6oJO\np0MkEsHc3Bzdw0vrYMgzsVqtdFAZh8OBRqOp2FTLA2fkge8X1vr6OhYXF7G4uAgO56EePCmGicVi\nmJmZwcLCAkKh0I7DVCRfuJ3wZTgcRjgcpm0VRHjFZrPRCURkxnmx+L02vclkorOKg8EgfD5fxYeN\nPAmSViCGJ5fLYWVlBVNTU/D7/fTn9Ho96uvrqQjRfle/Ep4U0itteyEywnt9z0SR8cyZM7hw4QLO\nnj0LADQdQ4oVI5EIwuEwgsEgVYETi8WIRCJwuVyIRqMIhULwer3weDyora2FQqGgnSUSiQT19fWo\nq6uDRqMBn89HOp2mQjWkPS2ZTO6o1ZJ4XvF4nKrcEY/twYMHT60RIKF5o9GIwcFBXLp0CU1NTQiF\nQrhz5w5GR0cRCAQqUjBG5kSsra3B6/XSGhOz2Ux7tQn5fB42mw0mkwlnz56lErLEs60kQqEQnZ2d\n6OzshEQigc/noxHMSCSyZx7uVvZFLpeLZDKJpaUlSCQS1NTUoK+vDwBoao90vAiFQigUCjrIpbOz\nEwqFAoFAAG63G06nE3a7HQ8ePIDdbqe1HZXaY3Q6HY4ePYqamhokk0nMzc1hYWHhifogxWIRfr+f\nTrHk8XhoaGigB8m95kAa+SdRLBbpYAAejwev14uPPvoI33777a5Uoe9GaIUI7qyvryOVSlGDWDqj\nHgD1IEnYdHV1FaFQqGoMJXmxs9kswuEwbdci86kVCgWUSiVMJhNaWlowPj5elhe3GXtZWEMOjfF4\nHIVCgXryO/UcN7tn4jUdPXoUv/3tb1FXVweBQIBisUir04laXy6Xg8fjwcLCAiYnJ2Gz2WiUiAia\nkA3y1q1bWFhYoIWRHA6Hqm299tprOHXqFNRqNa0SJwebWCxGjQHZLLfzvFOpFFZXVzE/Pw/gYSGY\nWCyGSCR66mhnIon83nvv4dy5c9Dr9YhEIlhcXKRta1uVb97uWslmswiFQpienoZUKkVPTw8OHTqE\nH//4x7hy5cojkbRCoYBwOIzV1VWamiLTLbezXneyxqVSKS5duoQLFy5ALBZjYmICf/vb3+BwOMp+\n78rpMHra7z2LfD4Pv9+Pr7/+GnNzc7hw4QKGh4fx85//HPPz87hy5QqVMA6Hw9Bqtejt7cXw8DBO\nnDgBjUaDlZUVzM7OYmRkBOPj47RGqVyRsd2iWCxCqVQ+0oJNpiw+zUYQgx6Px2naIhaLVURy+rkx\n8qQtg+jau91uzM3N0b7a3WA3DgrkGsRIbvZ3mpubUVNTg+npaTgcjqoRfAAe3v/q6irsdjvdhi14\nXAAACnpJREFUGEnLn16vh9FohEKhoN5dOp2uigPK0yAbHJn2l8lk9lSsguSeDQYDGhsbUSgU6EFu\neXkZk5OTVNLT4/HA6XTS0CsRbXrSLGxSeVxKIBDA2toakskk7t27B5lMRiMxDoeDekK7MQeBpJZu\n3LgBLpeLl19+GefPn4dEIsHVq1dpSgoAbZUbGhrCD3/4Q5w+fRoGgwGJRAL37t3DlStXYLFYKhq9\nItGTxcVF6HQ6DAwMoLGxEcBDgRk+n08NCtE0kMlkEIvFCAaDj+hYVBIOh0NbuAqFAjUoW41+7PVB\nOhwOI5FIQKvVQiKR4KWXXsLg4CBUKhVSqRRNI5GUZjabxa1bt2hthtvtxvz8PD28lI7wrSSJRALL\ny8tQKBQ0WrtZDU9p5wg5vFeqc+e5MPLEK1Kr1Th06BCCwSDsdjs8Hk/F1cp2Cz6fj87OTtTV1eHf\n//43rQavFvL5PDweDw33ptNp6HQ66HQ6qtRHQsqpVGrHBmSvN00SridRlmw2u6ebB5/PR01NDWQy\nGa3BCIVCiMVi+Prrr/GnP/0JIpEIAKjG+HaJRCKIRCKwWCyP/PteRUfC4TCuXbuG2tpavPbaa3j9\n9dfR1taG+fl5uhaAh0azq6sLr7zyCn75y19SXQm73Y5r167h/fff35eD4cbGBpaXl9HQ0ACJRAKN\nRoN8Pg+1Wg2JREKNONFSNxgMUKlUsNls1MhvlZ12v5B56sQLJkNRSDSQXJ985/t14CZGmXS5tLa2\nYmhoCH19fY/cI9E4+eMf/4gPP/wQbreb6kNUA4FAAOPj47TN72kOWymklimZTMLv91dMw+K5MPI8\nHo9qBKvVaszPz8Nms5X14KsRovql1+uh1WppjrOaIJ48yfeePHmS6sKT8Oz8/DwmJiYqPs98O5QW\n3pGc6k4LoYhIx5OuQ/K+X3/9NaLRKJ1bnsvl4PP5aA0HgD3zZPfqOyEey/j4OP7whz/gzTffRHNz\nM37zm9/A4XDQECVpsSNqZ1arFZOTkxgZGcHExMS+rRmyGZODUVdXF+rq6tDf349YLIbFxUUAD/Oy\nr732Gs6cOUOL3R7Xj9gKm62XZ0F0KPx+PxobG2lNB0n7kVog0se93yNlU6kU7HY73n//fXzyyScQ\ni8X0oK1UKmnLq9VqhdvtppLC1QIptqutrUVjYyNUKhU0Gg2Nsj2OSCSCRCKhcs7Mk98iQqEQ9fX1\ntBLd5XJhcXFx3xfydpFKpbQVUCgUIpFIVN1nKRQKtKdcq9XiyJEj6O3tBY/HQywWg9PphMfjwczM\nzI6iKbupub8ZAoEAKpWKamXvtZogEUqanp6G3W6nB7lqPwyVAwnNEpETs9kMjUaDEydOoLe3lx5m\nSJvg6uoqHA4HHjx4gLt37+Kbb75BKBTa189A8u0TExNQq9UwmUxUbMhgMAAATTW0trYikUggGAzS\n8c9bZaee/MbGBgKBALxeL8LhMAQCAerr67G2tkYHZYnFYloXtN/7SS6XQyAQwM2bNx/5d5LGIpGJ\najLspZBo38rKCuRyOZqbm6mQGenGIJBq/KamJgiFQqyvrzMjv1Xkcjl6enqg1WqxsrICi8VyoI18\nXV0d+vr6IBaLqYBJtY45JaI+zc3NkMlk4HK5WF5exueff47x8XG43e4deaKVMnpCoRAqlQrZbBZ+\nv39XJHg326CIt5hKpWie7nkw8KWkUil4vV58+eWXyGaz
GBoaQmNjI5VpnpmZwccff4ylpSWa1iHF\nVNVAMBjE9evXUV9fj9OnT2NwcJDqD5D3USKRIBKJYGFhAVNTU1R0aKvs9Lvf2NhAOByGx+OBw+GA\nVqvF2bNnabX3+fPnqcT32toaIpFIVa63jY0NqjFQrQYe+D7HHg6HkclkcPLkSdTV1WF1dRWBQICu\nYRLR6+vrw9DQEGQyGVWrZEZ+C8hkMnR1dcFoNNJFEgwGq9YwPguZTIaamho6f/5pVcn7CSlUW1tb\nw7fffovl5WWo1WoIhULY7XaMj4/T3uZq3Eweh1SiE3nenbZrkcKszT472SiqJc+425Di0pmZGaTT\nabhcLtTW1kIulyMajcLpdOL27dvwer1Vl44CQFXLiBTy0aNH0dzcTD3iTCaDubk5LC0tYWFhAfPz\n83RWw3bYSa58Y2MDbrcbPB4P4XAYjY2NkMvl0Gg0yGQydO6F3W6v6OjhrUIOvweBYrGItbU1OBwO\ntLS0wGg04u2336baFTweD7lcDqlUCqdOnYLRaMTCwgLdF5mR3wIymQydnZ20r5y0Fh1UhEIhxGIx\n/H7/jj3hvYRM6/rzn/9M2+akUilSqRScTue+D8Iol9KRrplM5hF5zZ1wEA43e02hUIDT6YTT6cT1\n69cB7G075G5CBi+NjIzgzp07ePfdd3HmzBnodDpwuVykUilcv34dk5OT8Hg8CIVC21435FC4XdbX\n1+FwOODxeDA5OYmBgQH09vZCJBIhlUrh9u3bcLlcWFlZ2fbfYPw3q6urKBaL0Ov1OH/+PN566y2q\nXCoQCBCNRuHxeKDRaBCJRGC1WrGwsFBRx+25MPJcLhcymYzq0FerUSyXSCQCh8NB1baqeUMkbWfx\neByZTIaO0j1o3qlKpUJnZydkMhmCwWDVRU6eJ6p5PT8JoknwzTffwGKxUJW2jY0NeL1eOkp5p4W+\nu9GiS1JAc3Nz8Pl8j8hl71T1k/HfkNqkqakpmM1mvPrqq3RMNI/Ho2Jnq6ursFgssNlsCAQCFR1h\n/VwYefISrq2twWazHdi2OUI0GsXy8jLVZK5mg1kacj6oNRBkVKRKpaKbZDU/c0blIeNuH29DrCZK\n30Wv11uRgT7/6xBNAofDgenpady7dw9KpZJ2C0QiETqNc35+nspHV3J/OfBGnsPhIJVKwWKxIJ1O\nY2JiAi6Xa79va0fEYjEsLy/TKWiVngn+v0g6naY5zXg8jo2NjQMTVmYwGNtjN95x0rEwNjYGu91O\nx56TdjkirpVKpXZUs7FdDryRLxaLiMfjmJqaQjKZhMVieURf+iBCwt8k/Me8yr2lWCwiGAxiYmIC\nIpEI6XS6KgvBGAxGdUJaLsPhMP230pHNpf+tNLs3ZHzr7NonFovFqK2tRS6Xo8Uvz0NOtVI94oyH\n/blisRgymQx8Pr+qZgUw/jdgkSPGNnimDd83T343FzTRRCb5qL16UQ6q0T2o9w1sfu+7uYZIV0ah\nUKCa8TvloG7az8N6eZxq/yyPy84eJJ6H9bIfswZ2+jfLvca+hut3a0FvbGzQlrm9Hg1aycWwm3/r\nIL+IlYBIgJLUyP/6c9rpWt/PzfNxytEsqAYOwj0+TxBtjIP4zHeqkMhgMBgMBoPBYDAYDAaDwWAw\nGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAY\nDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgM\nBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwG\ng8FgMHbO/wNvAHF3v+mEMwAAAABJRU5ErkJggg==\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Current loss: -3.571910\n",
+ "Current MNIST score: 7.097467\n",
+ "Current Frechet distance: 47.269943\n",
+ "Training step: 1400\n",
+ "Time since start: 7.891089 m\n",
+ "Steps per min: 177.415304\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAfkAAACHCAYAAAAV6SDhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsvdlzm2eW3//Bvu8kSILgvq/aKMnW1rLc3d7aM53MdKq6\nb5JJpXKTylX+geQPmLvcTCpJJVOT1GQq6aSTbq9qjSxb+0Zx31eQBEgAxL6D+F349zxDu21LlkgC\ndONTpWq3RAIPXrzvc55zzvecA1WqVKlSpUqVKlWqVKlSpUqVKlWqVKlSpUqVKlWqVKlSpUqVKlWq\nVKlSpUqVKlWqVKlSpUqVKlWqVKlSpUqVKlWqVKlSpUqVKlWqVKlSpUqVKlWqVKlSpUqVKlWqVKlS\npUqVKlWqVKlSpUqVKlWqVKlSpUqVKlWqVKlS5figKON7l8r43lWqVKlSpcJRKL5qokqlqtn4Gs+1\n4cqjWEWVKlWqVKnyfVAoFCgUCpRK5R8Y+yovjrrcC6hSpUqVKofDfuP4TV6wWq1GrVZTKpXY29uj\nUChUhLes1WoxGo2cP3+e3t5eAoEAS0tLTExMkM1mKRaL5V7isaFq5KtUqVLlB4ZCoUClUmE2m9Fo\nNOzt7ZHNZsnn82i1WnQ6HTqdDqPRiF6vJ5/PE4vF2NnZqQhDbzAY8Hg8XLhwgatXr7K4uMiDBw8I\nBoMEg0ESiQSlUqns6zwOlM3IKxSK6hdUpcpL8jwPrcofN2q1GrPZzNDQELW1tZRKJdbX1wkEAni9\nXpqammhpaaGmpgaz2Uw4HGZ8fJxPP/2UWCxGoVAo29oVCgUul4tTp07R0tKCzWajqamJRCJBPB5n\nfHycxcVFMplM1aN/AcrqyVcN/T+g0WhQKBQUCgX29vbKvZwqFYzdbmdkZASn04lCoeDp06csLCxU\nPZsjROSKzWYzJpMJvV5PJpMhmUySTqfJ5XJlXZ9arcZisdDf309/fz8Gg4HV1VU2NjZob2+nsbGR\n2tpanE4nWq2WiYkJVlZWMBqNpNPpshp5gJqaGs6fP09nZye1tbUYjUYMBgMul4uuri6Wlpbw+/2s\nra2xsLDwyus9LFskDuPiftFoNJRKJYrFIsVi8Uj2+qonXwEoFAr0ej1KpZJkMnlom7W44arX/fii\nUCjweDz8q3/1rzh16hRKpZJ/+2//LSsrKxURZv1jQaFQoFaraWhowOv1UltbSzAYZH19na2tLfL5\nfFm/C41Gg9Vqpa+vjx/96Ec0Njayvb3Nzs4Ozc3NGAwGkskkBoOBQqHAnTt32N3dxWg0Eo/HSafT\nZVs7gNvt5uLFi7S2tmK329nb26OlpYWRkRGy2Sy7u7uMjY3xwQcfsLq6SrFYfOnrLQR+B/197Tfw\nKpUKrVaL2Wxmb2+PTCZDJpP5YRv5cm9G5Q53ajQajEYjTU1NdHR00NnZid1uJ5VKsby8zOzsLD6f\nj93d3Ve6gfdT7mteKYhTdWNjIw6Hg2AwSDweJ5fLyRN2pXrFQiCVy+VQqVQ4HA5+/vOfYzAY+O1v\nf4vP56uGMA8RsWF7vV46Ozs5ceIE7e3t2O12FhYWePr0Kel0mkQiwd7eXtnuoVKpRCaT4fHjxyiV\nSk6dOiX/PhQKkcvl2N7eRqFQkMvlWFxcZHt7m2g0WvYoBMDGxgYffvghra2t8iBlNBoplUoYDAbM\nZjMtLS2cP3+eZDLJ6Ogo8/Pz5HK5lzKcB/U9KRQKnE4nHo+HEydOUFtbi16vJxQKkc1mqa+vJxKJ\nMDc3x9raGoFAgHQ6fajP7LE38iqVCqVSKTc/8QWLjdxisWC32+XP7DeYmUyGdDp96BdZoNPp0Ov1\nqFQqjEYjNTU1tLe309vby5kzZ6ivrycej1NbW4tWq0Wj0aBUKgmFQmUPn30fxEao1+vRaDRkMhny\n+XxFfAYRYq2pqWFwcBCv18vKygrBYFDeC8lkknA4TDabLfdyv5FcLsfW1ha7u7s0NDRw5coVzGYz\nY2NjbG9vl90L+yGjUqkwGAy0tbVx6dIlTp06RWtrK3q9HoVCwfr6uvzvcrK3t0cymeTJkyekUimK\nxSJWqxW1Ws3e3h7RaJStrS0UCgV7e3ssLS2xs7MjD7vlpFQqsbGxwccff0xjYyPNzc309fXhcDgo\nlUrU19dTW1uL1WpleHgYi8UCQCgUksb0+77fy6JWq6UN0mg0aLVa6urq6Orq4tq1a3i9XhQKBSsr\nKyQSCTo6OojFYtTW1mI2mwHY3Nw81Gf2WKvrFQoFVqsVnU5HLpeTRrtUKknhyeXLl3n//ffR6XSU\nSiV5Us3n80xNTTE1NcXMzAzJZPLQ19vQ0EBPTw8WiwWXy0VjYyOLi4t8/PHHLC4uYjQaCQQC2O12\nGhoaqKurY21tjY8//phIJHLo6zsIFAqFDEt1dXXh8XiYn59nc3OT3d3dsuoNlEolWq2WoaEh/vE/\n/se0trbicDjY3d0llUqRz+cJhUIsLS3x4Ycfsrq6WpHefCwW486dO9TW1tLb24ter8dut8uN7zgb\n+XJH2J6HTqeTm/jw8DB2u51MJkM0GmVpaYmZmRkikUhZvXhAhrRFSFg4OMlkkmKxSDabJZlM0tDQ\ngNPpZH19ne3t7Zf2hA+aaDTK5OQkq6urjI2Ncf36dTQaDSqVioGBAU6ePMnZs2epra3FZrPJNMno\n6OiRHM5FiN9ut2Oz2aReoL6+nkwmg0ajkWV/4+PjxONxtFotAwMD9PX18dOf/hSXy4XRaOTv//7v\nq0b+m9BoNOj1egYHB2loaGB3d5e1tTUWFxflv4sHUghPFAoF8XhcbvZerxeHw0E0GmVjY+PQLrRa\nrcZkMtHT08OVK1dIJpPk83mSySRbW1vMz88TDodRqVSEw2GamprY29ujqakJq9WKyWQilUqV/YT9\nXSiVSgwGA/39/bS0tOByuWhra8PtdrO4uMjq6io+n4+lpSXW19fLsgHq9Xp6eno4f/48r732GhqN\nhmw2S6lUQq/XU19fj8fjweFwMDY2RjAYlBqJSiKVSjE1NUVvby+7u7uYTCYZvlxYWGB7e7vi1vxN\niGhbfX09LS0tMtqzt7eH0WjEYrGwsbFBIBAgEolURCTIYDDQ1NREW1sbTU1N5PN5gsEgq6urTE9P\ns7q6KkP15aRQKFAsFqVOo1gsSmGgiHju7e1hNpupra0llUrJA0Al3DvZbJZsNkssFkOpVMpIq0ql\nkh57qVTi9OnTdHd309zcTFtbG7Ozs4e6LoVCgUajwePx0NPTg9vtxmAwEIlEKBaLKBQKEokEOzs7\n5PN5wuEwU1NTFItF9Ho98Xgcp9PJe++9x9bWloyuHCbH1sjrdDpcLhdvvvkmp06dYnV1lc8++4z1\n9XX5RahUKnK5HJFIRJ6uAJxOJw0NDTQ1NVFTU8P8/DyJROLQjLxer6ehoYGTJ0/yxhtvcPfuXR49\nesTY2BhbW1vE43EZSSiVSuRyOUKhE
np3aRSAS73Q6bzbajsiDZJO11doMo0Hk8HtxuN65du4bl5eV9zzC8Cnw+H83NzYhGo/j4448xPT29\nY5fTV+HABHkOh4OBgQGcP38eIyMjkMvltHeYnABICm2/KRa/ncpWX18PiUTy3Jt/c4sRGYjR3t4O\nrVaLqqoqGI1G3L9/H4uLi89smyoHiUQCfD4f7e3t4HA41HP5RSkmiUSCxsZGdHR0QK1WQ6fTYX5+\nHk6ns6ROYAQimiICvO7ubnpSfPrnqqurqZFMZ2cnnXMwOzsLg8GARCJRsR4GL4KcRPP5PGpqanD8\n+HHa0VBKyHdLxHdKpRJcLpcOOyHtWwCo0nhlZQVLS0uYm5vD8vIy7YnP5/Nle56JilwqlaK9vZ2W\nk+rr65FKpeDz+bCwsEAthCthnQG+E1s6nU7Mz8+jpqYGvb294PP5VONAsp2zs7N4/PgxCoUCHA4H\nHj58SDdRRAgrEAioH8ZOnlOikYrFYjQDsBfPeWtrKwYHB+kQoOnp6Yo10XoRpIw7NjYGoVAIp9O5\npdOklByYIF9VVYWxsTFcvnwZbW1tWF9fh9Vqhc/no+5Sr/KBbVbs7sVNSXa0zc3NkEqluHfvHjUu\n2fwzmwO8SqXCxMQEzp07R+tlTqcTk5OTuHXrFlwu17454hFV8+HDhyESiTA9PU19CJ5lasLlclFX\nV4fBwUHU1NQgHo/DYDDAaDSWJcAD3w2KIAHu3LlzyOVydBNIviMy6KJYLKKpqQmHDx8Gn8+H1WrF\ngwcPsLS0dCBPDMB3QWtjYwMNDQ04d+4c1tfXyxLkiYMd8XgYGhoCh8PZUjowGAzUhXJ6enrLdLn9\ngFy3UCikQ3JEIhH6+vrg8XgQj8eRTCYRDocr7p7I5/OIRCKYmZmhI2Krq6vB5XKRTCZp2emTTz7B\np59+WpJrIGODyWe1V59RZ2cnnXnhdrsxPz+/7+UzEjNeNLGUdFmQdZ4MAiIDhcxmc9nG4x6IIE/G\nT/b09KC3t5fewHV1dbQlZ6e766fHAuZyObp73c1C4/P58OWXXyKfz+PixYuora1FQ0MDFQ5tbGxA\nLBZDKpVSfcHAwAA6OzuRzWYRDofx+PFjTE1NwWq1wuVy7XtqMJFI4Msvv8TRo0fx61//Grdu3cK1\na9foHOdisUhPbufPn0dHRwe4XC6sVivu379PR4WWC7/fj/n5eTouU6vVgsfjoaurC263m95LiUQC\n6XQatbW1kMlkEAqF1ALZZDLB7/fve911MztRlpM2QHJPmc3mLbPQS4nH48Ff//pXBINBfPjhh1SQ\nCnz7fDidTnzzzTeYmZmhRkP7LWIjmyKHw4Hp6WkMDAygvr4eAGAymXD16lU4HI6KcoDbDLHLjkaj\nWFxchFarRXNzMxU8EgvkUpFMJrG6ukotcPfqM2pubkZPTw/W19fhcDjo5Mv9pFgs0pkeRKtEdD8k\n+0TWRGIxTYalORwOLC8vw2g0lu15PBBBXigUora2FhqNBo2NjTSwVFdXU+ejnQR5jUaD7u5uqqgV\ni8UIh8PweDzwer27+vBjsRgWFhZw+PBhCIVCDAwMoFAoYG1tDYVCgbaUyWQyKJVKdHR0oKOjA5FI\nBGazGel0Gnfv3sWNGzeQzWYrIlWcyWTw6NEjCAQCnD17FsePH6d9zJlMBlwul4471Wq1UKlUdLLe\n4uIiddAqF7FYDLlcDlNTUxAIBBgZGYFCoUBTUxN8Ph9kMhkOHTqEaDSKaDQKPp+PRCIBj8cD+fVW\nWwAAB4lJREFUi8UCk8kEh8NRMerdqqoqSKVSqNVq5PN5pNNpapf59D1P1LukRx0AHYBUrvcTiURw\n79496jUukUho/d3hcMBgMOD27dswGAxlmb++XYrFIrxeL2ZnZ6FWq6FUKhGLxaDX6/Ho0SNEIpGK\nSdM/TaFQgMfjwdraGsxmM0wmExoaGuhmtdQ6GNL7vZ0JgduBnIBra2tRX1+P2dlZ2O32ko0D3ynE\n9IhspiKRCA3yRMxInkVyACoWizAYDDCbzfB6vWV7HwciyCuVSmi1WiiVSupCRaa52Ww2OJ3OHZ0U\n33jjDfz+97+nNxKHw4HZbMaDBw/w+eefY35+/pWvlaT9iKr18uXLuHDhAnw+HwQCAZRKJU3dC4VC\nrK2twW63U9tDj8dDxxJWws0MfJv+DofDMBgM+Oyzz3D+/Hn84Q9/gMvlQi6Xg0qlonXVtbU16HQ6\nLC0tIRKJ7Nsinsvl4HQ6EQgEIBaL6fc8NDQEoVCIfD6PxsZGJBIJ3L9/H3Nzc1hcXITf70cwGKyY\noS5kMens7MTZs2cRi8Vox8Pa2tr3rpPH40EsFmNkZARnz56FRCKh6vZylXzIfTs1NQWLxQKlUgm5\nXA6RSIRQKASHw1F2Ud12CQaDMBqNEIvFsNlsyOVymJmZQTAYrJjNyIsgI1iXl5dhs9lo8C31WrJ5\n3dsLSEcDaX+12WxwOBwVc79kMhmsra1hcHAQ77//PsRiMbhcLjgcDlKpFFKpFBKJBB4+fIjPP/8c\ny8vL1Mis3Gv7gQjyJF2fy+UQCATobvvOnTs0mGz3pMjhcCAWi1FXV7dlQlcwGKRtI7uBpP0sFgv+\n7//+DxcuXEBPTw+am5shEAggFovpsBeyUZmenobRaITdbkcwGKy4lCDpp/X7/XTIDGkFUSqVaGxs\nhNFoxNLSEjweD82I7GcWYmNjA4lEAktLS/jkk0+Qz+fB5XIxPj4OHo8Hi8UCsViMfD6PqakpGI1G\nqn2opDn35H5SqVQYGhoCn89HJpPB6OgoVldXsbKyQi1VST98fX09Dh8+jLq6Omp9SrIb5bzuSCSC\nSCQCsVgMoVAIgUCAVCqFWCy27+n550FqrCsrK3A4HHQEbjk/u91Agu1+aHj28vskY30FAgGSySSe\nPHkCq9VaMUGedI2YzWbodDr09/dDo9GgpqYGer0eT548QTAYxOLiIjweD6LR6L7pqg5EkCc3bigU\ngtVqxcLCAm7cuIGrV6/uaLEgJ/d4PI7V1VU4nU6Ew2FwOBzcvHkT//73v/ckMBWLRTx+/BiRSATN\nzc3o7u5GY2MjVdknk0mkUinYbDbodDrcvHkToVBoy6z7SoSkt9PpNMxmM5qbm9Hf3w+5XA6dTocv\nvvhiz1S1u4XcM3q9Hnq9nlrrkolt9+7dg0wmA5/Ph16vx/r6+n5f8nMhrU0tLS3o7u5GQ0MDgG9t\nR2/cuIFUKoXq6mo0NTWhvb0dXV1dCIfDiEQiCAaDCAaDVGm/HySTyYodpfw0QqEQMpkMHo+Hbo4O\nwgn+dUMkEqGpqQlCoRDRaBQmkwmrq6sVsbYA3wntbt68iZWVFbzzzjs4fvw4ent7cf36dVy/fh1e\nrxfRaBSpVGpfr/VABPn19XWqqpTJZAgGg9QMYafe3hsbG5iensYf//jHLac2j8ezpy07qVQKbrcb\nX3zxBQKBAHX3UqlU+PrrrzE7OwufzweLxULbcg4KRAOx
srICvV6P27dv05nmlfIQPg1pNZqcnATw\nrTiPx+PRTV+lQj5PvV6Pv/zlLxgdHcXIyAiOHDlCvRVUKhX12pdIJMhms1hYWIDBYEA0GoXZbEY8\nHt/xabRcUwIriWg0So2nyBjoH9pnUAnU1dXh1KlT0Gg0dF2pxO8hk8nA5XLh+vXr0Ol0UCgU1Ccl\nlUpVxLp+IIJ8PB5HPB7flrHMyygWi7BYLCWfs01UzVNTU/D7/RgYGEBXVxfq6+tx7do16HQ6RKNR\nJBKJfd/p7ZRYLFYxorSdkE6nYTKZ9vsydkyxWITT6aTCKjL5bGNjAz6fj47tJYN0+Hw+pqenMTc3\nh0QigVAoVHLh1esCCe6VMmL1hwqZqBcMBuH1eit2I57L5RAOh/e9re9FlH6s0/N57Z8g4mRHxmUK\nhULweDwEg0HaS0pqOwzGyyB1d5lMBplMBuDbjQvp9988PS0YDG6ZC35QasqVwA8xg1FptLS04Nix\nY3Swkd1uP5AHizLw0hjOgjyD8RpB2nY2e5ezgMU4aMhkMjQ0NFDxZjndDw8YLMgzGD9kyjHbnMFg\n7BsvjeGlH0vFYPxA2WyZzGAwGPsBC/IMBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAw\nGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAY\nDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgM\nBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAYDMYO+H+j6hkq\nyg8wmgAAAABJRU5ErkJggg==\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Current loss: -4.605023\n",
+ "Current MNIST score: 7.695475\n",
+ "Current Frechet distance: 22.835312\n",
+ "Training step: 1600\n",
+ "Time since start: 9.010625 m\n",
+ "Steps per min: 177.568157\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAfkAAACHCAYAAAAV6SDhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsvWdz3FeW3//pnAMandDIORIMAEFREiVxKGkkj0bjmlBT\ntrfKu/azfeZ34NfgKpdrH61r11XrXe96Zq3/rIbSUoEckZJIgkQicmjEDuiAbnRO/weqewfUSIwA\nGpzpTxVrVEMCfbv7/u4595zvOQdq1KhRo0aNGjVq1KhRo0aNGjVq1KhRo0aNGjVq1KhRo0aNGjVq\n1KhRo0aNGjVq1KhRo0aNGjVq1KhRo0aNGjVq1KhRo0aNGjVq1KhRo0aNGjVq1KhRo0aNGjVq1KhR\no0aNGjVq1KhRo0aNGjVq1KhRo0aNGjVq1KhRo8aLg6KKr12p4ms/EwrFNx9XpfL9S3+Sf1MNFArF\niVtTjcNBoVBgsVjo7e2lVCqxtbVFIpEgk8lUe2k1/kRQKpWoVCrK5TKlUqnay/mjR6lUolAoKJVK\nj7Xh6uNY0J8SJ9WQntR11Xh+FAoFdXV1XLlyhVQqxaeffkqhUKgZ+RrHirjg1Dh6KpXKE5/pNSNf\no8YLTl1dHW1tbfT19REKhdDr9ahUqmovq8afEJVKhVKpdGSXCYVCIf8IA3fSLi4H1wcn52JVM/JP\nyUn54mrUEDidTtrb22ltbaVcLqNSqf7gwKlR46g5CsOrUChQq9WYzWZMJhNms5lKpUI+nyeZTJLN\nZikUChSLxaqmCZRKJUajEZvNJp+7QqGAWq1Gr9eTTCZJpVLkcrljX2fNyD8FtQOzxknE4/HQ3t6O\n2WxGoVBQKBQol8u1/VrjWDlsp1IYeLvdzunTpxkeHubUqVOUSiWi0Si3b99mZWWFSCRCNBolkUgc\n2ms/DUqlEo1GQ1dXF5cuXUKj0VAul9nb26Ouro6Ojg6+/vprxsfHWV1dJZVKHev6aka+xqGgUqlQ\nq9UYjUa0Wi0qlYpcLkc2mwWgVCqRy+VqhucI0Ol06PV6isUiyWSSSCQiP/caNY6Lo3i21Wo1FouF\nU6dO8corr3Dq1CkqlQp7e3vYbDYWFhbw+/1MTExUzcgL8vk88Xgco9GIxWJhYGCAzs5O+vr6cLlc\nuFwuPvroI9bW1kgmk8/1WpVK5Yk1EDUjX+NQECG1hoYG6urq0Ov1RCIRdnd3AchkMhQKhZry9pBR\nKBSUy2VyuRzJZJLd3V02Nzcpl8vVXlpVqaUqjpejypErlUrMZjMDAwMMDAzgdrtRq9U0NDTg8Xjo\n7+9ndnaWeDzO0tLSsX7nByupisUiq6ur7OzsYLPZ6Onp4c/+7M8YHByko6ODjo4O+vr6CIfD7O/v\nP7eRF6/7JNSM/DPwvAeIQqFApVKh1WrxeDw0NjbS0NCAy+Wirq4OpVJJqVQimUySyWSoVCqsra2x\nsLBAJBIhnU4/8+vC4XrcIlQ1MjLC2NgYHo8Ho9FIuVyWr5dOp5mfn+fatWtEIpGa6vuQ0Ov1mM1m\n+vr6GB4exmKxAEd34L5IHMX7VygUWK1WWlpaSKVShMNhMpkMxWLx0F+rWnz7dqhQKFAqlVQqlWNN\nAYl1iP8tFovk83my2aw8f/V6PQ6HA4/Hg9VqRavVylTVcazv2yK7YrFILpcjnU7LiJow5iqVCrfb\nzbvvvkuhUGB9ff3YPs+aka8CwjttbGykv7+foaEhenp6aGtro7GxEaVSST6fl5ukVCoxPj6OXq9n\nYmJCGv6n5SiMvNFoxO1289JLL/GTn/yEuro6VCoVqVQKu92OzWYjkUhw48YNZmdn5UNYbaGMQKFQ\noNFoRM3psR9mz4PJZKKxsZG2tjbcbjfxeJxoNPpCrP1FQ6FQoNPp8Hq9XLhwQd4cNzY2iMfjJ2Iv\nPwtKpVKm2dRqNQqFQj6barUanU6H0Whkb29Pvs/jNvSlUolYLEY4HEaj0Uinqq6uDo1Gg8ViwWg0\notFoKJVKx2LkxecmhK4mkwmDwYBWq8VgMGA2m0mlUsRiMZLJJCqVCr1ez8WLF/H7/fzqV786tsjm\nn5SR/3YJxrPyvLd4tVpNZ2cn//7f/3t6e3vxer3Y7XbMZjNGo5FSqUQ+n5d51mKxiMVioampif39\nfUKhEIVC4anXcRQPZ2NjIz/+8Y959dVXaW1tRalUyodQpVJRKpUwm83U19fjcDjI5XKYTCbC4TCp\nVKrqYWWNRoPH45GOSTabJZfLUSwWq762R6FQKHA6nZw9e5Zyuczt27f55JNPmJqaqhn5I0CpVFJf\nX8/AwAA/+tGPAFhbW+PXv/414+PjJ2IvPy0KhQKj0YjD4eDUqVO43W4UCgWRSESKxhoaGmhra+P6\n9et8/vnnxxa5EGe0iGjev3+fSqVCX18fqVSKYrFIR0cHGo2GXC4HIJ2Uo0apVKLX67FYLJTLZSwW\nCyMjI7S2tuJ0OlGr1ahUKjQaDdFolAcPHmAymTCZTFgslocuP8cR1fyjNfIiJG6z2XC5XGi1WmmA\n4vE44XC4KrdJ0Z2ssbGRs2fP0tnZiclkIpvNkkwmCQaDBAIBdnd36ezsxOv1YrVaMZlM1NfXs7m5\nid1uZ2Njg62tLYLB4BO/9lEoX51OJ8PDwzQ0NJDL5Zifn2d3d5dKpYLRaMRut9PR0YHH4+Hy5csk\nk0lyuRzhcJilpSXu3btHJpM51APy+yIWwskTNxSLxYLb7ebMmTPY7XYymQz5fJ79/X02NzcJBoMy\nvZDP5w9tfc+LuBU0NTUxOjpKPB5nfn6eu3fvsr29Xe3lPTVCtClSWHq9Xt4oheMlvkulUkljY6OM\nGAlBZzAYJJFIHEkURuwXcZY0NjZisVhwOp1MTU3h9/vJ5XInao88CvEMaLVaOjo6GBgY4MKFC/h8\nPgB2d3fZ3d0lk8lIw1mt6FapVCKVSuH3+6WoV6ytUqng8/kwm83odDqUSuWRr0ecIQaDAZfLJS9g\nPp8Pr9eLxWJhf3+faDTK7u6utEFnz56lr68P+P1+P471wh+pkT+Y8+7o6GBsbAyHw4FGo2Fvb4+Z\nmRnu3LnD/v7+Mxn558nJKxQKmUeyWCzyQNve3mZtbY2NjQ2++uorpqam+Hf/7t9x5coVuru7sVgs\n2Gw2/vN//s+8/vrrfPLJJ3z00UeEQqGqPHxio9tsNhwOB+VymZWVFf7mb/5Geq4ul4u2tjbef/99\nBgYG+Iu/+AsZGk+n0/z2t7/lv/7X/0ogEDjUA1I8iOJgOpjfUyqV0mHq7Ozk9OnTvP/++7S3twPf\nPICJRIJr165x8+ZN7t69y87Ozok6wFUqFXa7nfb2di5cuMAHH3zA7OwsyWTyhbtNwjfRFKPRKOuM\nPR6P3CPr6+uEw2H5vjQaDefOneP06dMYDAay2SyRSITr168zPz9/JOWDWq0Wi8WCyWRCq9WSyWSw\n2+3U19fT1NREQ0MDoVDoRO2RR6FUK
qXTMjo6yrvvvsupU6dwuVyUy2WpN7hx4waTk5PcvHmT5eXl\nQ3fGn4RCoUA6nSYWixEIBLDZbPj9fhKJBGazGZvNJi9Kx3UOipp4r9dLJpNBo9FQKBQIBAL4/X6W\nlpZYW1uTFzCTySTLAAuFArlc7lidphfGyAvDLbyfg4IMIfIShn1sbIze3l4sFgttbW10dXXJ2sX9\n/X10Oh2xWIy1tbVjrSkWeb2xsTFef/11nE4n6XSaQCDAtWvXmJiYYHd3l42NDUKhkCy3EDe2kZER\n2cJ0ZGSEVCpFPp9ncXFRqtiPC61WS09PD0NDQ3i9XoLBIOPj46ysrLC5uYlGoyEYDLK5uUkoFKKr\nq4u2tjbGxsY4deoUOp2O4eFh/uIv/oKrV6/y5ZdfHtraDAYDFosFq9UqjUVrayter5dUKiUfUCHk\niUajaDQastms9MaHhoZQq9XU19fz+eefE4vFDm19z4NWq8Xn8/HOO+/w6quvYjabSSQSbG9vy7Dl\ni4BCocDlcnH+/Hm6u7tpamqS+V+z2Qwgo24irCmedbPZLJ2xnZ0d/H4/kUiEYrF46KJStVpNc3Mz\nPT09slXw5uYmiUSCYrGIyWSiqamJ2dnZZxbEHicajYbm5mYGBgYYHR2lv78fn8/H8vIy169fZ2lp\niXQ6LW/PwWCQeDzO3t5e1W7z4jZcV1eHyWSSz7VGo2F/f1/Wyj9LCvNpESkEockoFosoFAri8Thq\ntZpisUg0GiUej8v9kM1m2draYnt7m+bmZtmNsmbkvwOh5BZ12AdbdyqVSgwGA1arlR/+8IdcunQJ\nu92O3W7HarU+pM4UzRQ0Gg1qtZp4PC7zsEeJSqXCaDRy7tw5xsbGsFqtLC8vc/fuXa5du8b4+Djp\ndFqKW7766ismJiaw2WwEg0F0Oh1msxmz2UxXV5dc8/7+/rGLYjQaDR0dHbS3t6NUKllfX+fOnTsE\nAgFZrxqJRFhfX2dqagq3201/fz+VSkVGMdra2vj5z3/O9vY2X3/99aGpwvV6PXV1dTQ3N2M0GikU\nCpw9e5bu7m4ikQiVSgWr1crMzAzb29tMT0+ztrZGIpGgt7eXrq4uvF4vZrMZj8fD6uoq9+7de+51\nHQYOh4O+vj5++MMf0tfXh1arZX9/n93dXQqFQrWX91iEM2632xkYGOD999/n/Pnz9PT0yJC9cORF\n17B0Os3u7q40NouLi6ytrbG1tcXKygrLy8skk8lDf341Gg12u53u7m7Onz/P7OwsiUSChYUFjEYj\nKpUKi8VCS0uLDBef9EiKTqejv7+fK1eu8O6776JUKgmHw9y5c4cbN25w69YtGQo/CdoOlUqFwWCg\nsbGR5uZm6uvrKZVKWCwWeVnb3d0lGAwem46mXC6TSCQeqstfX1//zn8rBL2JRIJkMin3/3F+ti+M\nkReqZ41Gg9frxWazYTKZaGlpweVyodFocLlcNDU10dnZicfjQa3+5u2lUil5S9Dr9Zw7dw6fz8fK\nygpTU1N8/vnnLC8vEwqFnngtz4LJZKKhoUHeFpPJJLdv3+bv/u7vWFlZIZ1Oy4NKeIziBnP16lXm\n5uYYGBhgZGSES5cu0dnZiVKpZHFxkZ2dHWKx2LFoDMQNx2AwEI1G+c1vfsO9e/e4d+8e8Xj8D/59\npVIhHo8zPT2NXq8nGo3yb/7Nv6G5uRmr1SrVsdls9lAO6lKpRKlUQqlU0t3dzdjYGHa7Xaqj19bW\nuHv3Lg8ePGBjY4MHDx6gVqsplUo0NDQwMDDAL3/5S5qamtBoNLKssZoHn4hcXbp0iXfeeQev1ysN\n4sEQ4ElHq9VitVp5//33uXLlCsPDw1L4KNJgYg+Xy+WHHMWJiQmmpqZIJpPs7++TzWZJpVJSiHXY\n2O12zp8/z0svvcTg4KB0KLa3t+nu7mZ4eJimpiYqlQo2m01eFk4qIlV16dIlLl26hMViYWpqii++\n+ILPPvuMubk5qX84CQZepAQbGhq4ePEiPT09ANTX17O3t0cymWRlZYWNjQ3W19eP9ZLztPh8Prq7\nu7FarbJ66rie1xfGyMPvFZdKpRK3201vby99fX34fD4ZWvV6vWi1WultJZNJ0uk0NptN/nE6nVJE\nU19fT6VSob6+nuXlZQKBAHt7e0ey0V0uF4ODg3g8HiqVCpubm8zOzjI5OSl7MB+kXC7LcjNxc4lE\nIrIu3W6309DQIMOXR60sFbluvV6PwWBgd3eXfD7P7u6ubATxfU5GLpcjl8sxMTFBoVCgo6MDvV4v\n84N2u12GXJ8X0dc6FAoRi8Uol8uk02n29/fZ399nbW2N+/fvs7GxQTgcloeDUqkkEomgVqtldUM+\nn5c6gmoeIELN29/fz5kzZ2R0KhaLkUqlTvQBJ4RGWq2Wuro6vF4vTqeTSqXCxsYGa2trpNNpzGYz\ndrtdplWCwSB+v5+VlRXm5+eZnZ1lbm7uWMLGCoWC+vp6XnnlFYaGhqirq6NcLhMKhWR5qFKpxOl0\nUigUsNls6HS6E2vkRZqvs7OTgYEBGhsbyWQyzM3Ncf36debm5giHw9VepkREZb1eL319fXR2dlJX\nV8fW1pYUtq2urrK+vs7m5iaRSOTEljGKKgaz2Sx7oBxXPT+8YEZedPaKxWKYzWZeeuklWbZQqVTQ\naDRSBBGLxVhYWCAUCpFMJnG5XDQ3N9PX14dOpyOfz2OxWOjr68NqtdLf38/09DTXrl3jwYMHRyKi\naW5u5tKlS7jdbvb395mdnWVtbY39/f0n+vliscjKygoNDQ3EYjFZYifKvo7y4BMGXoQoLRYL09PT\nZLNZ8vm8jDg8bg3hcJjZ2VmmpqYwGAxSJe3xeNjf3z+UkhKxJlGjmkgkcDqdaLVaAoEAy8vLzM7O\nytSNWHO5XJYRCuEo7u/vn4h2vFarla6uLhobG3E4HFgsFra2tpidnT3xtfE6nQ6TyYTNZqO1tZXO\nzk78fj9zc3MABINBNjY2aGlpYWhoiLfffpvNzU0+++wzGaXK5XLH2jFR7MnLly/jdDoJBoOk02ni\n8bjcX6Llai6Xw2azodfrq95a9bsQz25LSwvnz5/H6/VSqVQIBAJMTU3x9ddfnyjRoEKhwGaz0dvb\nS09PD319fTgcDvb29picnGRzc5PNzU2WlpaIRqOySuokRrLEmSnOaYVCIS90NSP/HYiDTKVSycb/\nDocDg8FAoVBgf3+fra0totEom5ubTE1NEYvFyOfzGI1GmpubKRaLNDU1yQ5JIoSu1Wqx2+3Mzs4y\nPz9/JMMW3G43Q0ND2O12tre3uXXrFsvLy0/8OgqFQq7ZbDZTKpWIRCIkEgmy2eyRbBqhg/B6vdK4\nRKNRQqGQLC0rl8tPfJMU/0boC6xWK42NjXR2dhIKhWQU5XkQN71SqUQoFGJiYoKWlhbq6upYW1tj\ne3ubTCbzkME4KNp87733cLvdRCIR7t69SyAQqHoI026309fXJ8u3lEolfr+fDz74gNXV1aqt61
GI\nz3RwcFDmrUXuOpFISP2GyLWLnhAqlYpgMMjExASRSIT9/f1j/ey1Wi0vvfQSb731Fl6vl3Q6zcbG\nBtFoVDp8LpeL/v5+7HY74XBYOiEnkYNK+h/+8Ic0NDSg0+mwWq2oVKqHShSrjRDUNTc38+abb9LY\n2IjL5cJisbC7u0sgEGB7e1uWTBYKBenEnEQMBoPsgSKa5wiHpCa8+x7UajVWqxW3201TU5PMb8Ri\nMdbX12XYeHV1lZmZGVKplNwAzc3NOJ1OaWDK5bIU7DkcDuCbvPlhbxixCR0Ohyz3EDf5p6lrViqV\nuFwufD4fNptN1nPHYrEju22qVCrZOlU4VTdv3mRubu6ZQsTi4Beldw6HA7fbjdvtluWEh/E+xO8Q\nIx4VCgX5fJ6dnR2i0egfOER6vR6n08nFixd599130ev13Lt3j6+//pqtra2q5uJVKhVOp5NTp07R\n0NAgKxempqb4+OOPv1fVrVQqHxKyCa3Ccb0Xo9GI0+lkZGSEoaEh2bksEAiQy+XY29tja2uLXC6H\nQqEgFotJoV0ikWBzc/O5SlWf5WfFefDaa6/x5ptvYrPZpG5DpKe0Wi0ul4uenh6sVivlcvmxehLx\nHVTjtmkwGGhqauLs2bNcunRJlgHqdLpjF4E9DlHD39jYyBtvvIHFYpGNZYSqXQxf0mq1Mn0otBkn\nDRGRED330+n0Q+Lq4+CFM/JWq1UKYQwGA7FYjPn5ef75n/+ZhYUF2SghlUrJ24G4jTocDvL5vAzv\n7O3tkc1myWQyzMzMcOvWLaampo7EYIoDV6QULBYLHR0dhMNh9vb2nuh3aLVaLly4wCuvvILJZGJm\nZoYvvviCcDh8ZBtGr9fj8Xh47bXX6O3tJZFIMD09/cyvp9VqMZvNGAwG9Ho9Op2OSCTC4uIiiUTi\n0N+H8Jh3d3fJZrOy9Onbr9PQ0MClS5fo6emRXQdFt6pIJHKoa3oa1Go1dXV19PT0cOnSJVwuF36/\nn//5P/8nn3322R9EJARClex2uzEYDKhUKsLhsBSHHYex6e7u5u233+bUqVM0NjZSLpfZ2Nggl8th\nNptlgypBpVKRJWrPUw6lVCpl7vNpf4fJZMLr9dLV1UVLSwsAU1NT/OM//iNbW1solUq0Wq1sEiN6\nLxx0pr6NyMkqFIrHdsZTKpVSkX1YiHSkz+eTkZRYLMZXX33FxsbGob3OYSB6wItOlCIleTCNdurU\nKQwGAzMzMxSLRRoaGpicnHwuvcZRaW56enr48z//czkid2ZmRkZva0b+OxBhelEOpVarmZub49q1\na1y7do319XV5qzn4IQrRicFgkKI7tVrNzs4O6+vr+P1+xsfHuX37NqFQ6NCVugc7TIkDwmazMTg4\nyMbGBktLS4/9HUKg1t3dTW9vL1qtllAoxMzMzBM7Cc+ybqfTSW9vL0NDQ3R1dbGzsyNvL8+ySR0O\nB+3t7dTV1UlPPBKJyDnLR7HxK5UKyWRSahe+6wB1uVy8+uqrtLW1yf7Y+/v77OzsPLFm4igwmUyc\nOXOGs2fP0tTURDAY5O7du3z66afMzMzIELHogme1Wqmrq5OpkPr6eiwWCwaDQYrclpeXjzQELpzq\njo4O3nrrLdxuNxaLhWKxKMtfDQaDbGASCATY39+nXC7LMtfnWZtKpUKn0z1TtYbNZqO9vV12f7t/\n/z537txhdnZWDkURHdZE6BWQ3dhEFFAYJY1GIx2AfD7/yCihiNoAhxrOtdls0tFSqVRSQHjz5k38\nfv+hvMZhUi6XKRQK0inf39/H4XCg0+no6enB7XZjs9lwu91SOyFEbYcxxvUwEXM9bDYbkUiE+/fv\ns7i4WCuh+z60Wi0Oh4P+/n5ZuvKv//qv/N3f/Z3Mi32XlyyEHM3NzQwODtLW1oZCoSAUCvHll1/y\n0UcfsbW1JVXKh40oOdNqteh0OtRqNXa7nXPnzjE7O/tEv0Mc4k6nE4fDgUKhkOHOo2jCcVCsMzY2\nJluJApjN5mfepM3NzYyMjODxeKRIMhKJsLm5eaTK5Hw+L2+H37V2p9PJ6OioPNxFVYMQ51UDkeL5\n8Y9/zMWLFykWi9y9e5ePPvqInZ2dh3LAosWwaHLS2NiI3W6nUChgNptxuVysr69z//59/s//+T8y\nZHgUiJRaa2srIyMjMiScz+exWq10d3eTTCaZm5tDp9Nx+/ZtFhYWZDrheRG6FZHjfxqcTif9/f1Y\nrVY2Njb427/9W27evCn3jDiDDAaDnDpWqVSw2+2YTCYpgjQYDDKtptPpZKvqxxn5w251KvbQuXPn\naGxslDflnZ0dPv/88xNp5CuVColEgsXFRTY2NggGg5w+fRqn08kPfvADCoUCKpVK9hpRq9X4fD48\nHg9///d//0wO7FEYXdH8zG63U6lUZD+ChYWFmpH/LpRKJcPDw7z22muyfln0eg8Gg49UKyoUCvr7\n+zl//jwulwuVSsX+/j4TExPcvHmTjY0NeZM4qrWLUJN4iIXR1mg0T/Q7fD4fw8PDeL1e4JtGM0KU\ndBQ1wmJ97e3tjIyM4HQ6yeVyrK+vP1P3N3GA+Xw+BgYG5LCaQCBAMBg88hDyo8JjwqERYdJMJkMs\nFiMWi5FOp6smqBIT/trb2ykWi3z00Ud8+umnjI+Pk0gk5M394sWLnDp1SlY9mM1m9vf35Xfl9Xqp\nq6sjHA6ztbV15EIru93Oa6+9xunTp9FqtTKCIur5c7mcLHk9c+aMLHdMJBKHYuRFXv9ZfpfQ54iO\nemtra0SjUdRqtdSmvPfee5w5c4Z8Pi+f5/r6enp7e2loaJAh+YNtd/f29p6oLexhPwOivbNQdIuB\nL9FolFgsdiLHPoso629+8xvC4TCxWIzFxUV5wcnlcmg0GoaGhuju7qa1tZW6ujp8Ph8Gg6Hq5a7w\nTSMlq9WK3W5Ho9GwurrK9PS07JZ4nLwQRl7chEdHR/nBD36Ay+Uik8ng9/ul4vVRqFQqecOxWq1k\ns1l2dna4d+8e4+Pjh94O87teX4TzRJOWfD7/xFOIFAoFra2tvPLKK3g8HgqFAtvb21J/cBTGUafT\n4XK56OrqYnBwEJ1Oh9/v5/79+880BEWELxsbG2VTiHA4zMTEBNvb21WtcRXDUPb390mn0/KQEWVT\n1ZoX7nA4aG5uxuPxEA6H+eCDD7h3755U0/t8Pnp7e/kP/+E/8Pbbb5NKpdje3mZ5eZn19XVZ+9zf\n309zczMLCws8ePDgSHvcK5VK2URGpBf29vZkw5pEIkEsFsPpdGI0Gunr6yMSibC8vEypVJKiqufh\nYNTmaRH7VFRnKBQKTCYTGo0Gt9vN+fPn+cUvfiGbOonZCG63W4aMNzY22N3dlT3Xt7a2nij8Lhp+\nif8+DITgdHd3V14IhAP7famraiJSFqFQiKtXr5LJZKTuRMzDKJVK6PV6Njc32d/fl2JqkZIVfS6q\nidAUCEG33+9namqKUCh0KHv8aXghjLwIefT19dHb2
wvAF198wV//9V8zMTHxyJ8V+Tmn04nb7Uan\n0zEzM8P/+3//Tx4sR+31iVtApVIhGo1iNpvx+/384z/+I3fv3n3kz4r8plDHms1m6dk+7XCaJ50n\nr1Ao8Hq9vPnmmwwODqJWq4lEIkxOTvL//X//HwsLC0/8mgK73U5/fz/d3d3Y7XZSqRTT09P80z/9\nk6yXrhYHh6GIiX9TU1OygqBanDt3jrfeegu9Xs/W1pacbSDWfObMGf7Lf/kv9PT0kEgk+OCDD7h7\n9y4LCwtEo1GSySSVSoWWlhZMJpMsWzvKA1CIFm/cuMH09DRqtfohI18oFCgUCrS0tNDf38+rr77K\n8PAw5XKZDz744NBEjs/6TOdyOeLxOIVCgba2Nv7yL/+SaDRKNptFp9Ph8XhkK9jt7W08Hg86nY7O\nzk729/dJJpOMj48zPz9PPp8nnU5Xree7iE4VCgWi0ah0WHd2dggEAlVzXh+HSCmIqE8+n6dSqcg9\nVKlUpLapra1Nds1MJpNyaqdoX10t9Ho9ra2tuN1uSqUS29vbrKysHGrk5EkjFi+Ekfd4PJw5c4ae\nnh5sNhvhcFiWDz0ulOp0Ounq6qK1tRWbzUapVGJ1dZXPPvuM7e3tYytpqVQq7O/vEwwGyWQybG9v\nMzk5ydbUpbyxAAAgAElEQVTW1vf+jBhLKwZktLS0yE55U1NTsrTraQ39o/69qMX3er1cuHCB1tZW\nOWHu/v37zMzMfGfr2kf9PoPBQFtbG5cvX6a/vx+tVsv8/Dz37t3j/v37RyYcfNL1iTkIBwVSCwsL\nrKysVMXIi8O5ra1NRlHEEBrRxKmnp4fLly9z+fJl/H4/d+/e5eOPP2Z8fJzNzU05yKWvr4/W1lbq\n6+vlwJejPNyFgnx6elqGicW4WGHslEolm5ubAFy+fFlqZG7evHlk63pSUqmUvG05HA5efvllQqGQ\nLL2MRqN89dVXshul6Fext7fH7u4ukUhE1nE/i3E/ODHxMIyUEP0ajUapgVldXWV1dfXEGHmRyhOC\n0XQ6TS6Xk02HxDOYz+elg6rT6WRKRoToTSbTQ/NMqoWI3La2tuLxeCiXy8RiMYLBYFUiDFUz8k9z\nq+zv7+fP//zP6erqIpVKsbS09MRCrd7eXn7xi1/Q398vpxaJdrLH1YJS1PEHAgE2NjawWq2EQiE5\nPe/7UKlUNDQ08MMf/lCO1hT5yzt37uD3+5/qIHgSh0ChUGA2m/F6vXR3d1NfX086nWZ8fJx79+49\n9Wd2cOb8T3/6UxoaGshkMty+fZs7d+5UvRGHcGpEFYHD4SAUCsk2wtWoaxaOh5i4JXQbYi0+n4+/\n/Mu/5I033kCpVPLxxx/zD//wD8zMzBCNRmWY2eVy8Ytf/IKXX36ZxsZGmR8/ys87lUqRyWQeer6F\nQRRUKhX29vaIx+MySiemQ1abvb09/H4/6XRapglDoRC3bt3i7t27bGxsyCjKyy+/LGfZf/zxx6yu\nrhKNRp9rrv1R9OgQw4Dcbje5XI4HDx4wNzd3Ypr3iGhlf38/nZ2dTExMsLGx8UghtFarpaWlBZ/P\nh0qlklEToX+o5pkiOiEKI69QKMhkMs882vz7eNL3+ELc5MW86XK5zPb2Nh9++CF379594rDz+fPn\n5VCDiYkJ5ufnSafTx1enqFbLtp4OhwOz2YzD4aChoYFIJCJLx8QNTswfHhgY4OzZs1y8eJH29nbU\najXr6+s8ePCA7e3tI2n+IMY61tfXY7VapWOxvr4uR/M+KUqlEovFwuXLl3nzzTfx+XxUKhWCwSBz\nc3Osrq5W/aARubOGhgYpksnlcrKJTjUOCzHExWw2o1AoZF5b3FicTqfs2phKpVhZWWFmZkY2kwF4\n8803eeutt3jttddwu91kMpmnzlM/qSN+kG8b9Ee9R1FiJiZLHray/FnY29tjZWWFmzdvEg6HCYVC\nLC0tMTc3J4WMCoWC06dP097ejl6vlzMQxPf0POH5wzTyYkxuR0cHbrcbo9FILBZje3ubnZ2dqt/k\nNRoNTqeTzs5OTp06RWtrKyqV6jtbTn8bs9nMxYsXOXfunPyZTz75pKppCDGjYWRkhJdfflkOX1Iq\nlXR1dTE2Nsbk5CQ7OzsylSb2ylGeM1W9yT/JGxM5pVQqRTQaxe/38+GHHzI/P//InxNNK9xuN93d\n3SgUCtbW1vjd737H7Ozssd7QRF282+2WCtB0Oi1buYohKaIWXjTjeP/997lw4YJsu5vJZFhYWGBy\nclIKZw4bETYTdamirjkYDD5yAM1BRPjNZDLR2NjIO++8wxtvvCFzyysrKywtLbGzs3MiVLBNTU00\nNTXJsqhUKlVVVb0QEYn8thjAYTQaZYtS0f2rXC6zublJKBRCqVTKBkM/+9nP+LM/+zMputra2nqq\n+uGDrUKP4js6qDwWMylOQu9xUZXwySef8OWXXzI3N0c8Hpe5VLG3xbhnMeI5nU4fWrnlYRh6EaHq\n6emhv79fTlLMZDLs7u5Wfd6BiFT19PRw5coVfvrTn5JOp5mdnaVYLD7yEiYuIsKQlstl7t69y4cf\nfvjUOqXnRTReqlQqctLppUuX+MlPfkJjY6MUY4q9ImZhiD72lUpF7v2jWnfVjPyTho7Fv83lcvzu\nd79jenr6iTao3W6XeXxRFrW5ucnExMT3zv49KsrlsuzzDr+v2z9z5ozs3Z1Op7FarZw6dYozZ84w\nPDwsPXCDwUAqlWJnZ4fp6Wk5EvIoEOpoh8Mhh7QczI09yUY0GAzU19dz6dIlLl++zODgIEqlkng8\nzvXr1/nggw9YX1+vuoEXeoHOzk5aWlpQq9XMz89z+/Zt4vF41dYnlN2lUolcLsfu7i7JZFJGtNLp\nNL/61a/Q6/Xkcjmmp6cxGo2yUdTly5cZGxsjHA7zD//wD0xOTsroydOs4Ul52vyxUqmkpaWF7u5u\ntFotS0tL/Mu//MuxP5ffRaXyzQz75eVlWUb3XXlU8d0kk0l5eD9vvlUIzsR/Pw+iU+LY2Bijo6MY\nDAa2t7e5d+8esVis6s+ey+Wir6+Pn/70p5w/f566ujo+++wz/u///b+PPRsaGhro7e3FaDQSjUZl\ni+fFxcVja20rnGCXy4XZbCaVSsm25b29vbhcLvR6vawY+frrr7l69Sqbm5t/IIhdW1uTffi/63We\ney88108/B0+ycBHCjkQiTE9P8+DBA+bn5x/bgUx0xjt37hzt7e1S3bi4uMjKysqxtyk9KO4S/63X\n63E4HFJsUi6XqaurY3BwkLGxMc6ePSvFMqIrmN/vZ3V19VhCUuLGIhrCPG5qkkajwWQy0dzcjM/n\nw+v1cvnyZTm3Op/PEw6HmZmZ4csvvzwR07rEzbetrY2mpibZP0H0KK8WBzu/5XI5lEolHo+HkZER\n9Ho9Wq1W3gYqlQodHR1yGuP58+d5++23USgUzM/Pc/XqVcbHx59KLCl4kmdUr9fjdrvlEI5cLkcq\nlZKfobih
iAoTEZofHh6WDvjy8jI3b94kFAo9y8d16JTLZVnF8F2Ic0lU7ghB22Hc4p83mnGwH8XQ\n0BBnzpyRwka/38+dO3eeqc/FYePz+Thz5gxjY2O43W5WVla4ffs2X3755fe2FRfh8IGBAS5evIhO\np5NVJ4uLi+zu7h6b8yJmcFitVqxWK5VKBafTSXd3t2yCpFKp5POwsrLC9PS0rBgQkVu1Wv0HaSrh\nQIjIwMFOnc9UFnpYb/qoKJfLzM3NScVrJpN5bJhap9NRX18v+zUXi0VmZ2flBj/unI3omy8mEYlp\nRNFoVA5cEBtb5Cnh95u6XC4TiUSYnZ0lEok8V1/vxyEm2+3u7lIoFKSg5XEHmNlspqOjg//4H/+j\n7EQl+tOXy2WSySQ7OzuEw2GSyWTV84Hw+yZFzc3NeL1elEqlHH6k1Wqrti6RnhLefXNzM21tbbz3\n3nty9oIQSIo6XJHqEa2T7969KzuaHWWbT4fDwZUrVxgdHWVwcJBgMMjCwgKffvqpbBFdKpUwm830\n9PTgcDgwGo28/vrrtLe3E4/HWV1dZWVl5US1I30cQjBotVrx+/1P3NTqqBGOx+joKP/23/5b+vv7\nMRgMZDIZ5ufnuXnzZlVnMQhaW1sZHR3F4XCwtrbG//pf/4vbt28/UhgqBlu9/vrr/PSnP2Vvb4+p\nqSm++OKL5xpm9CyItajVajlVTnR6PHg5y+fzxONxWWEi0oDlcpnV1VXUavUfzJ8Qk/icTictLS0E\nAgHC4fAz9+w48UYekB8QPNww4iDC8xHh16GhIXQ6HRsbG8zOzvK73/2OqampqoipCoWCDMkfFNiJ\nRhWir3oikWB1dZX29vaHbkH5fJ719XVu37595PWfos+76D5WLBblWr4LtVqNTqdjeHiYV199lcHB\nQdmyVtzohJZiZmaGxcVFisXiici/igNRPJiiG9hBAVs1EDnJiYkJbDYbL730Ei0tLdhsNgqFgnzQ\nRR91UZ4zOzsr26cuLy/LgU1HWQao0Wior6+ntbVVOtUejwe73S6jNcKBFU1lNBoN2WyWmZkZJicn\nuX37NslksuoizCdBHMDCmTooEKtWCFxEFWw2Gz6fj66uLl599VX6+vowm83E43HW1taYnZ1lbW3t\nSNpgPy1arVZOnYzFYjx48EBWHH0bEZ1wuVycPn0au93O1tYWX3zxBffv32d2dpZwOHys6xfPaLlc\nluN6RQdE8Uf8GzGETIhfxbl+0KZ9m2/3CnieM/OFMPKPUuyKDXBQ3DY2NkZ/fz/ZbJalpSWmp6e5\nd+8efr+/KnXPosNeLBaTXp9SqXxoMlWpVCKRSDA3N0d3d7c8PISeYG1tTeaKj5JyuSzHNorPXRiV\ng/mhg+K6+vp6XnrpJd5//300Go00kpFIhEAgwPLyMnNzc8zMzBy7MOZRqNVq9Hq99LyLxaIsn6tm\nu0/xnX/99dfk83nOnz+Pw+GQY5DFASHyfWLA0tWrV5mammJtbe3Y1vrtEjmbzYbJZKKpqUkaxEwm\nQzwex+/3s7+/T6FQkOLLTz/9lJ2dnSMblXzYiOE6YprbUZRGPeuampqaGBkZ4c0336Szs1M2Ytnc\n3OSrr76ShrTan/NBrZXY66Lh0LcRkU+dTkdzczMXLlygWCxy48YN/v7v/56FhYUj71j6XYiafdHj\nQ6lUksvlZK8N8ZxmMpmH2hof5PvWLJ6pdDotKzZEn4ln4YUw8o/CZDLhcDhoaWmhp6eHvr4+HA4H\n6XRa1q6GQiEikcixzvA9iBDeiUEsojlFZ2cnnZ2d+Hw+OUjE4XBIkZrFYpE362QyeSwtVoUBEepP\nMeyjvr6euro6WQOs0+lob29neHiYl19+me7ubgCuX7/+UHpFeKIWi4U33niDyclJ7t+/fyJaarpc\nLjo7O7HZbGg0GvL5PBsbG0xPT5+I0LEQW05OTlIul+VNeG9vj9/97ncEAgHpTAWDQba3t4893xqJ\nRLh69SoTExM4nU45dlVUZQhxY6lUkiHHQqFAIpEgEokQCoVkR7PD5Kj6lxsMBlpaWnA6nTKVVs0B\nRgqFAo/HQ3t7O6Ojo5w5c4bOzk4pSltcXGR8fJzr16+zsrJyIgy8qOBpamrCZDLh8/n4wQ9+wBdf\nfMHMzMxDa+zv72dwcBCfz0e5XJajn0OhEFtbW1U7079NpVLB5XIxNjaGz+dDo9HIZmYOhwO9Xv9U\nv0s4P0Lg+Tzv8YU28qKbkxCsjY6Ocvr0aba2tuTELb/fX/VbghBSiZC7UqmktbUVr9fL0NCQ7JGu\n0Wjw+XyyO5kQvYkZ6AaDQfblPiqEkReqYb1ej8ViYWBggFgsht/vp1AoSCM/MDDAhQsXyGQyPHjw\ngBs3bjA+Ps7Ozo7MU+n1enp6ejh79qycGlXt3tIAFosFp9MpVbDJZJKtrS38fv+JGNyRy+UIh8N8\n9dVX7OzsSL1GNBrl448/lnPXRZ6+GqRSKWZmZpiZmQF+X7oquo9VKhWZuxTlQ8ViUVabVDPM/SyY\nTCa6urrweDxUKhUikQjhcLhqn7+YVe9yuWhtbZUluqFQiJWVFe7duyc7Sx53z/THIVKXLpdL3tB1\nOh2FQkFeMIaHh+nr60Or1bKyssLs7CwbGxtVb1v7XWi1Wurq6tDpdHJtYkbJ0553Ik17GPqrF9rI\ni3IXhUIhxUkej4fbt28zMTFBJBI5klvC0yKMvKhvr1S+mUvd0tLCK6+8wuDgoBTUiVnVIhyYzWaJ\nRqMYjUZ6e3uZnZ09UgMkNlckEuHBgweyWcw777xDa2srd+/elboGYaw3Nja4c+cOn332mRwaJDan\nyOlvbm5K4y/CWScB4TUnEgnW19cJhUJHNtnvWYjH41y7dk02ixHDdPb29k6kkRT7R4gDAZLJpJzw\nJ9Z71E1Ajur3Wq1WhoaG8Pl85PN5pqenq2ZAD4Z1xe22rq6ObDbLzZs3ZSpEDPE6CftErHlzc5OZ\nmRkpXuzt7cXn8/Huu++SSCTklEIxqfJf/uVfmJ6eJhQKnZj3cpBKpUI8Hmd+fl7qDbLZrOzNsrOz\n88y/93l5oY08/L7RTHd3N83NzRgMBsLhsLyNnQSBF/x+E+Tzea5fv45CoeD111+nsbGRtrY26uvr\nKRQKUpVZLpfRaDRy4pLP52NwcJBYLMb+/v5z5WgeR7lcJhAI8Mknn1AqlRgbG8PpdDIyMoLL5ZKK\n+8nJSfx+P1tbW8zNzTE3N0cqlXoo0iDei3gA0un0I4V8x4lOp5M3zmQyyfLyMpFIpCo5vu+jWCye\nCDX0k3LQiAteBEHdk2Iymeju7sbpdJLNZpmbm+PBgwdVvSWLnHYkEmFtbY2lpSXGx8eZm5uralOn\nR7G8vMz169cxGo00NDSwv7+P3W6nvr5eannGx8flLIDp6Wm2trZORATw+wiHw3z99dd4PB6ampoo\nFAoEg0Hu37//yJLMo+aFNvIKhQK9Xo/T6aSnp4fGxkY5cWlnZ+fEb
YhSqcT+/j43btwgmUzicDhk\nffnB7nJCyCTym2JITS6XkxoDcVM+KmMUCAT47W9/i1KplLOym5qa6OzsBCAWi7G8vMz8/DyTk5OP\nPeREFznBSTCiooGMRqORRv4kNAqpcXIxGo20t7fjcDjIZDIsLi6yuLhYtby8EHfFYjHC4TCZTIb1\n9XVWVlaqOvjpUVQqFZaWlkgkEhiNRtra2iiXy1KflE6nuX//Pn/7t3/L3t7eiXRSvotgMMgXX3zB\nuXPnGB4eJpPJEAqFmJube2xvl6PkhTbyAG63m6amJoxGo8wni7ngJ+G2+F2ImtW//uu/5urVq7hc\nLkwmExaLRfYjj8Vi6PV61OpvviKLxYJWq31ImXqUxkjMV7916xaBQACfz0djYyNNTU3s7u6yurrK\n1NSUzNE/jm+v9SQYUqfTSWtrKxaLhXA4LI18jRrfh1Cyi6EjorSpmvu5UCiwt7cnR/vG4/ET0Wzq\nUZRKJWKxGJ9//rmcQyJKWUWIvtpVC0/L/v4+a2tr/M3f/A2ff/452WyWlZUVOSK3WrzQRl6hUEgj\nbzAYpFebyWSONJz9PIicZSAQIBAIoNfr5fCauro6qWDf3d1Fr9djMBikorehoUHOuj7qQ0X0E19a\nWmJ1dRWLxSJVvNvb26yurspD7kk5CYb9IKK+22w2UywWiUajJ0JwV+NkItTsovpleXlZtiitJqIe\nW7QFPgk6pMchtARP02r5pCOqkm7cuMGtW7dk+XG19T1/FEa+ubkZvV4vh4uclJzvkyBESmKOteig\nVCgUHmqsIAQdyWTy2LUG5XJZKqNFydNJdaKeFIVCgcPhkA6iRqPBbDZXtdNdjZONEGmm02kWFxf5\n7W9/eyJmMMA3z6iIqJ2E9fwpc7Bh1Un4Ll5YIy88ao1GI3sEBwIBbt26xfb29olTHX8f3244cxIR\nKnlR+vTHQjweZ2Njg2w2y9bWFltbW1XNndU4+ezs7PDrX/+aYDDI+Ph4VQVV3+ZFdrr/mDhptueF\nNvJCpCZqyScmJvjf//t/n4imDzVOPmtra3z55Ze0trbKWu/jmmJV48VkZWWF//bf/pts+Vyjxknn\nhTXy8I144969e+zt7XHz5k02Nzdl68yaka/xKCqVCg8ePGBvb08K7x43+KhGDRHVqp0vNV4UqtaR\nRKFQVJ7nQTkYrhf9x3O5nByscpShq6NqmVnjePl2Q57ad1qjRo0XjMfa8KoZeaVSWYGHD9anPWQP\nql1F6P4oveyD0+OqXTbzp0bNsTp+lErlicsv1qhR4yEea8OrFq7/9qH9LAfJwe5atfxYjRqHh3Bm\n4egjHAendtWciho1DhdltV74uJq6HCa1Q6h61D7zP17EUBsxta5GjRqHxwstvKsGwtjUjE6NP2aO\nw6FVqVTo9Xra2tro7e1lbm6Ozc1NUqnUC9XprEaNk0zNyD8DNQNf40+BozbyYhLj6OgoL7/8Mlqt\nlmKxyMbGxh9VP4YaNapJ1Yx8zVDWqHGyOepn1OVy8aMf/YiXX36ZgYEB6VTE43FyudyhVcjURJs1\nnoaDKaM/hn1TNSOvVCprHZqeAJVKhUajeahy4CRuPI1Gg0ajAb4Z/6vX60mn03L2fI0aB1GpVFit\nVvr6+uju7sbj8aDVauXM+cOmZuhr/KlSNSOvUqkAaqVoj0ChUKDRaLBYLHKG+1Edgs+DQqHAYDBg\nsVioVCoYjUZcLhfb29uyz/5JW3ON6qLRaOSoX5PJRKVSYXNzk6WlpUOZIFkz6jWelT+2fVP1m/zz\nNsT5Y/lCxNz4xsZGmpubaWlpob6+HpvNhslkYmVlhcnJSWZmZggEAtVeLiqVirq6Orq7u3nppZew\nWCyo1WrZt0DUWBcKBQKBALOzs3z11VcnItcqmig1NDTgdrtRqVSo1WpUKpX8O5vNRigU4v79+6RS\nqRdmpvWLgEKhwGQy4XA4cDgcmM1mVCoV2WxWOrKHyWGdEd+l/D/4uw/+/WGUCNf4Bo1Gg9PppL29\nnYGBAUwmE0qlklQqxeLiIl999RW5XO5YxJqPKysVf//tvXLw59RqNVqtlnw+fywTRataJ/8sPyP+\niPGssViMTCYjG9SUy+Vjq+89DEQPfq/XS1tbG4ODgwwNDTE4OIjP58Nut6NSqbh//z4mk4lgMFh1\nI28wGLDZbHR1dXHp0iV+8YtfoFarpSo6k8kQj8fx+Xy4XC42Njb45JNPmJubo1gsVs1gqtVqbDYb\nVqsVi8VCX18f7e3taLVaNBqNdFI0Gg0ej4fFxUWi0Sibm5vEYrFD30+P2qfi70R5mXCYxMS8XC5H\npVKRw5leJCdEoVDg8XhobW3F4XCgUCiIx+Ps7e2xv79/YtN4KpUKnU6H1WpFo9E8VN9fKBSkg1ss\nFmXXzVwuRzablXPnq8lB51s0EBODp07qmalWq7FarQwMDHDx4kUuX76M3W5HrVYTDAb55JNPePDg\nAaVS6VgrMg4+nyIqpVQq5QTRQqEgL7EKhQKdTodWq5WXCbVaTTweP5YWyVUz8s+y6UXeV61WMzIy\nwptvvslvfvMbJicnUavV5PN5MpmMTAW8CD2mlUolFouFX/7yl1y+fBm3243FYsFgMFAqlQiHw8Tj\ncTKZDE1NTZjN5qrqGRQKBR0dHYyOjvLKK68wODhIfX09a2trzM/PEwgE2NnZYXt7mzNnzjA6Okpd\nXR2tra10dHTI91SNddtsNt555x3OnTtHR0cHdXV1WK1W1OpvHgMx8jebzaLT6aTj8vnnn3Pv3r1D\nTTuIQ1aMLz34e5VKpVyTXq+nsbGRQqFAMBhkaGgIj8fD6uoqxWIRi8XC2toawWDwxO91+OZ7UKvV\njI2NceXKFex2OxsbG4yPjzM7O0s0Gj2UxlZH4ZDp9Xo6Ozu5fPkyHo8Hs9mMwWCQ0Spx4MdiMfb2\n9kilUqytrbGyssLe3l7VxzPrdDoMBgPlchmtVovNZiMajRKNRh86M08SNpuN3t5efv7znzM6OorH\n40Gn01Eqlchms1itVux2O5lM5lhnTwgHz2Kx4PF4OHv2LDqdjng8jt/vZ2dnh0QiQbFYRKlU4vV6\naWpqwuPxkM1mCQQCFAoFUqnUkUekq2bkn7S/vEKhwGg0YrFY6OjowOPxoNFoOHfuHG+88Qa5XI6G\nhgYp9IrH41itVpRKJdFolHK5jEqlkgao2t70t3E4HHR3dzM6Osro6Cgmk4nV1VUmJiaIRqMkEgnS\n6TTt7e309PQwOjpKOp1mYWGBbDZ7rGsVt5gzZ87w1ltv0dHRQaVS4dNPP2VhYYGVlRXC4TDhcJjd\n3V15M3vnnXfo6enhvffe4/79+8zMzLCxsUEikTi2tbe0tDA8PMyVK1c4c+YMXq+XRCJBIpEgn8+j\nVqsxGAykUinK5TJ2ux2z2YzVamVvb49QKEQ4HD6UdINGo8FqtdLd3S33bV1dHXq9nkQigUajwW63\nk81mUalUtLe3U6lUCIfD
DAwM4Ha7WV9fp1QqYbFY8Pv9BAIBstksGxsbzM7OnjgdhLjlWiwW3G43\nQ0ND9PT0oFarWVtb47PPPsPv9x/7nn4SNBoNJpOJoaEhLly4wJUrV/B4PBiNRvR6PcVikXA4LB2Y\nRCJBMpkknU4zPT2NyWRiZmaGnZ2dI1+r+JwVCgVarRav14vFYkGhUNDY2IjP56NSqaDVarFarezu\n7hKJRNBqtaRSKba3t9na2iIYDB75Wh+FEBx7PB46Ozvp6+ujqalJnt8imtvU1MS5c+e4e/cuyWTy\nyPe8uJmL77q1tZXTp09z8eJF4vE4X375pZxQKCLL8M2MeZVKRU9PDyqViu3tbXK5HHt7ezIqd1RU\n1cg/DhHKdjgcdHV18ZOf/ISRkRH5/7ndbkwmE++88w5ms5lkMkkwGKSlpQW1Ws38/Dzlchm9Xs//\n+B//g3A4fKLU6QqFgpaWFl555RW8Xi8qlYpSqcTNmzf5q7/6K4LBIPl8Hrvdzn/6T/+JH/3oRwBY\nLBZ2dnaO/UA0mUx0dHRw8eJFrly5QiAQ4NNPP+W///f/TigUkiEq8Wd3dxe/3y+jLmfPnuXLL7/k\n2rVr/PrXvz5WI3/u3Dl+9rOfceHCBerr68nn8ywuLjI5OUk6ncZms9HR0UE4HKZUKtHa2kpnZye9\nvb2EQiECgQBff/31Uxv57wqDarVampqa+PnPf47X6yUSiTA8PIzX62V+fh69Xk9HRwehUIhsNktT\nUxN6vZ5CoYDVakWv15PL5WRqIZPJkEwmCYfD/PM//zNLS0sUCoUT1VBGGB2fz8epU6fo7OzE4XCQ\nz+dZXV3lxo0bRCKRai/zOzEYDHi9Xt577z3p3Or1+oda/zY2NkoDICiXy7S2tsoph0dt5IWBV6vV\nMjV18eJFOjs70Wg0jI6OMjIyAiBDxolEglQqhdFoZHNzk88++4wPP/yQUChU1XNSOMJNTU3y885k\nMuzt7cmIrk6no7u7G6VSyd7eHktLS8eyZvEaSqWSoaEh3n77bYaGhrh58ybT09OEw2HpcIg9sb29\njcFgwOFw0NLSQldXF7u7u2xubh55KueFqJN3u92MjIzgcrnIZDLs7OxQLBYfygPr9Xri8TihUAil\nUklraysulwuj0YjD4eBnP/sZbrebW7duEQgE2N/fP8J393hEHqe3t5fXXnsNp9PJxsYGt2/f5l//\n9TyFnA4AACAASURBVF9ZW1uTs81Fbs9oNNLZ2cnq6ioGg+HYwvYiVNnX18cvf/lLTp06RTQa5YMP\nPuDatWvs7OyQyWT+4DvN5/Ps7e0xPj6Ox+NhdHSUpqYm+vv7+fzzz1Gr1UceHqyrq6O5uZnz588z\nPDyMSqXizp07XL16lZWVFYLBoMx119XVUalUcLlcnD59WkaNhAHN5/NP/frftc9FePfjjz+mvr4e\npVIpv1Nh6Mxmswz3uVwuGhoaaGxsxO12Y7VayeVyWK1WGhoa5IGo0Wjo6uri9OnTrK2tVf2gFkan\npaWFtrY22tvb6erqoru7G5fLxcLCAjdv3pQG/jCd1sMIgR5c/0svvSR1MgaDAYVCQS6XIxAIEAwG\niUQiOJ1OmpqasNvtGAwG1Go1FotFRmmEU3AY34kQiSqVSpxOJ3a7nUgkgs1mY3h4WDrafX19uFwu\ncrkcer1epoHEbd9oNMowvkKhYGRkhOnpaTQaTVV0BMJxbW9v5+LFi3R0dNDS0sLe3h7r6+vMzMyg\nUCgwm810dHTgdDrp6+vj7NmzbG9vs7KyciyXB71eT319PT09PTJlWalUiEajsqJIIMYTwzfnUVtb\nG2q1mi+++AKdTodSebTd5V+IjndGoxGfz0c2m2VpaYnJyUmi0Sj5fF56ShqNhng8LkNP4uC22WzU\n1dXx5ptv0tDQQKlU4v79+/j9fnK5XNVyUDqdDo/HQ29vL2fPnqVQKHDv3j1+9atfMTk5SSwWA5Ba\ng3K5jFqtxuPx4PP5MBqNcvLeUaNSqXC73QwPD/PjH/+YQqHA1NQUH374Ibdu3Xrkz2azWe7du0dD\nQwP9/f3YbDba2tqw2+1oNJojLwmsr69nbGyMM2fO0NTUxMLCAp9++il/9Vd/9VANvzjQbTYbfX19\n7Ozs4Ha7USgUbGxsyPDaYZDP5wmFQly7dg2z2YzD4ZACKCHiymazUkxntVrp6OhgaGiIpqYmeQN2\nu9309PTgdDqlmLC9vZ0LFy6QzWaJRCJVLbkU6Z2hoSHGxsY4e/YsTU1N1NXV4ff7uXfvHv/0T//E\n0tJS1Z3ubyPOFJPJRE9PD6+++iodHR1YLBbK5TLxeFxWjSwuLrK+vk5bWxunT5+mp6cHr9eLXq+X\nJbBCqHcYz6tSqcRsNmMymTCZTLS0tOD1etna2sLj8fDOO++QSqXY3d3F6XSiUChIpVIEAgHMZvND\nv0ecjyqVSua/W1tbsdvtMpR8nPz/7Z1Zb1vZlbZfUqRIioM4ShRJiaIGax4sWXZsl+0q2BcVJ52O\nU0AHdZcf0Hf9K/o+QAMNpPsilSqkO2MFnUq3h3LKs6x5FiVKlDjP8zx9F8baoTzLFin5y3mAQiqJ\nJB6S5+y191rvelf1dVCGtlQqYW9vDysrK3jw4AHK5TLUajUuX76Mc+fOsSDv9/sRDofrkranctup\nU6dgNpuZtiaZTL5SBEvfW2trKxQKBdRqNRobG2s+r+FEB3n64EidXalUWN3o+VYbHo+HYrGIXC6H\n//7v/8aDBw/Q3t6OsbExnDlzBh0dHRgfH0dzczPu3buH//u//2N1+uNAoVBgbGwMXV1dEIlE8Pl8\nsNvt2N3dZQEeeHZjkKKaTgFCoZD1F0ej0Zpfq0QiwUcffYTLly9DoVDg1q1b+Oqrr7C3t/fG3y0W\ni9jf32eZiaamJrY4SSSSmtej9Ho9rl69is7OToRCIfz617/GzZs3kc1mX3hd2jAWi0XY7XYkEgm4\nXC5MT08jGAy+00n+VdC9Te151TX06pIHAMRiMVitVvh8PojFYiYKlMlkUKlU+Pzzz/HJJ59AIBCg\ns7MT3//+9+FwOFi56riCfGdnJy5cuIDJyUn09fVBp9MhHo9jbW0NCwsLmJ+fh9PpPBFtlc/T0NAA\njUaD4eFhnD9/HmfOnIFGo0Eul0MymcTNmzfx9ddfw+/3IxqNIp1OQyqV4s6dO/jJT36Cixcvwmw2\nM32FSCQ6smtrbGzE8PAwxsbGMDQ0hEqlglwuh8uXL0OpVEKr1cLn8yGZTGJjY4NlG0QiEZqamti9\nVSqV8OMf/xg/+tGPIBKJ2LrS19eHqakpzM7OvtX6+HyQep/7TSwWY2BgAOPj4+jp6cH+/j6Wl5cx\nPT2N7e1teDweVCoVJBIJBINBpNNpAEB/fz/S6TRmZ2fhcrlqWqoi8fFPf/pTDAwMQCAQIJPJsHa4\nV73/6omplInmTvJ49sGEQiEsLy8jn88jm80imUy+Vjkfi8XgdDrhdDoRDofh8/kwO
DjI0jsWiwUW\niwWlUokpIjOZzJEu4m9CIpGgo6OD7bRDoRA8Hg+i0eiBtCW17VC9vlAo1PU6qf1jaGgIQ0NDEIvF\nCIfD2NnZYeWE10GiJFp0crkc/H5/Tfqhn4fP50OlUqG/vx9qtRo+nw/r6+uw2WwvbBDpHz6fj2w2\ni7m5OfB4PDidTthstpoEIjq1v6n9LZ/PI5/Psw0d1bepNkkiNr1ej0KhULee4VdB92xnZycuXryI\n7u5u5juwvr6O+fl5bG5uwm63IxaLHcimVae0j2NzQgp5k8mEU6dOYWpqCj09PRAIBNja2oLf74fH\n48G3336Lu3fvssWdcLlcuHjxIvL5PEqlEnK5HFKpFMs6HgWVSgX5fB4NDQ1oa2tDPB5HJpOBSqVC\nY2Mj9vf34XK5sL+/j93dXbjdbgSDQZZ+p2BTKpWg1WphNBoxNDQEnU7H6vmUzq8n5BVy+vRpjI2N\nQaVSYW1tDTabDevr63A4HOxzrFQq2NzchMViYR08lCFsbGx86Sb+KKD7o7W1FWNjY9DpdMhms7Ba\nrdjf33/ta+ZyOezt7UGpVDLR41HeF6/ixAd5AEgmk6zm+7bTsTKZDOx2O1wuFx48eACTyYSRkRF8\n+umnyGaz0Gg0TNixtLQEj8dT1+BJwhgSVIVCIQSDwRcWfOqT5vP5KBQKSCQSrEWnHsI7Pp/PBom0\nt7ezBZzMb95EpVJBMplENBpFMplEJBLBwsICnE4nEolEzcoNJNqUSCTQaDRs+MnzXR3V5j30O6lU\nCnfu3GEbv5PSVlQ9ATGXyzHx3cbGBubn5zE5OYn9/X385S9/wc7OTl0WkJchEAjYojs6OgqpVIpg\nMIhvv/0WT548wdLSErLZ7Av9/dWqZTrxvCvv8r6rHSanpqZw4cIFDA8PQyAQwGaz4ebNm3j69OmB\nlrjnX4daNamXOxqNwmazIR6PH1npJJ/PY2lpCcViEWq1GslkEoFAgG0mVldX4XK5EAwGkc1mD5gM\nPf/6T548QaVSwc9+9jPI5XLw+Xy43W5sbGwgkUgc6rre970JhUJoNBqcO3eO6Wei0SjcbjfrgqHX\nSKVSePr0KXQ6Ha5evQqNRsO6sCQSSc3WRkq5K5VKNDc3szLx3bt3MTc398r1jMfjIZlM4smTJ/D7\n/VCpVFhfX0coFKq5x8UHEeSrhQuH/Z1iscj6U6muPTAwgHPnzrEe10uXLmFvbw82mw2zs7Ow2+21\neSNVkCinqamJCacUCgWr0ZBhhUwmQ09PD3Q6HfL5PFZWVjA3N/fCCaiWlMtlJJNJJJNJNDU1YWJi\nAplMBl9++SXm5+dfu2sWCoXo7OyE2WxGuVyG0+nE6uoqotFoTfUEQqEQer2e1dUbGhqg1Wpx48YN\nmM1mdtrJZrOYnJyE0WiERCJhbXR/+tOfYLPZji1QvonqgN/a2srqxZlMBvv7+4jH48d2GpbJZBgd\nHUVXVxd4PB4eP36MxcVFLC0tweFwoFAooKWlBVqtFq2trWhtbYVOpztgSEQbQr/fD7fbzcon1DVw\n1PcO3SNdXV0YHx/HlStXcOrUKYjFYkQiETgcDuzs7GBvb48tzM9/tnK5HAaDAa2trexUHYlEWJA/\nypN8LpeDw+HArVu3mD/I/v4+ZDIZBAIBjEYju6cDgQDm5+cP1IupRU2lUrHPnnQGNpsNgUDgrevx\nR/W+RkdH8cknn8BsNkMgECCRSGBvbw9WqxXJZPLA65TLZaRSKSSTSba+0/tRKBSIRqM1uffFYjEs\nFgvrmiiXywiHw9je3obb7X7lazY0NKChoQGZTAY+nw9+v/+lmpnqjCJlUt73fv8ggvz7UqlUEI/H\nsbu7i0wmg9bWVoyMjECn00Emk6FSqcBqtWJ6ehperxd7e3s1Xxzpi2xoaGDGFFS7I9V5U1MT2tra\nMDw8jLa2NuTzeSwuLmJ2dpadDGoNpZT39vawvb0NPp+Pnp4e1u7l9XrhcrleGgwbGxuhUqkwNDSE\nzs5ORKNR7OzsYH19/dCnhMNCNVWJRIJgMMgyJ9evX8fg4CBWVlawuLiIeDzOOgboJONwOLC9vY1o\nNIpEIoFsNlvXLM9hMRgMrOc+n8/D4/EgnU4fS4Dn8XjMS8FsNiObzeLBgwe4ffs2YrEYM2Lp7e3F\n+Pg4hoaG0N/fj56eHiZUq1QqCAaDcLvd2NzcxNLSEh4+fMjKb+l0+si/D8ogWCwWXLt2DRMTE1Cp\nVKxzJJVKoVgsMsFsKpVCOp1mmwOhUAij0YiBgQGYTCYoFApWatza2qrJ/R4MBhEMBgH8LeNnNBox\nNjYGo9EIrVYLmUwGm82Gzc1NZDIZdjCgZ9NisaC/vx9SqRSBQADT09PY2tpCLBar2/1DbX8TExO4\nfv06DAYD0uk07HY7tre3X3rookNcdatoQ0MD8y6oRamBuhH6+vrQ3d0NuVyOWCyGUCgEl8v10hZQ\nyhKKxWJ2bYVCAeFwGNls9kAbJv08dV7R2ku6pXf9Pv4ugjxRKpWQSCSYy5NCoWBq00QiAYfD8VZ1\n5qMgmUxifX0dFosFvb29zKKUBGnlchljY2OYmprCxYsXYTAYkEqlsLOzA5vNVnPBGkEbpN///vdY\nX1/HyMgIPvnkE4yPj+PGjRuQSqX48ssvWU93NRaLBSMjIxgfHwefz8cf//hHrKysYG9vjwlmakWh\nUIDL5cL9+/eRSqVw5coVnD59GqVSiZ0ex8fHkc1m0dvbC6VSyTZeLS0tuHbtGlpaWuD1erG0tITt\n7e2aXu/7QOYb5NhHmofjgNTRp06dglqtRigUgt/vZ/VH2theuHAB//AP/8Dqk1Kp9IDvt0wmY10k\nbW1t6O3txczMDB4+fAiHw4FwOFyT65fJZGhra0NzczMkEgnEYjE6OzvR0dGB1tZWRKNRNDU1YW5u\nDg8fPoRYLGZtmiSG6+7uRqlUYm1fVqu15ptaCgh+vx+zs7MolUqQSCSYmZmBzWZj3UjAs89Xo9Fg\nfHyctZaKxWJsbm7iD3/4AzY2Nuq6QWxqakJLSwu6u7vR3d0NkUiE5eVl/Nu//RtWVlZe+XuU7SS7\n2HK5zE7Ihzn5kvDtTYGUylATExPo7+8/0F5bbQ1c/XclEgkUCgWamprQ0dGBqakp8Pl87OzssDJK\n9XTR6ta8VCoFq9XKdBTVp/7D2BD/XQV5+iAphU8DAorFIvx+P6xWa90MWpLJJKxWK8bGxlgPa0dH\nBy5dusTSOMPDwxgZGWEKzmAwCI/Hw0Q09YBSgxsbG/D5fHA6nawe3N7ejk8++QTRaBR2ux3BYJCJ\n6wDgzJkz+N73voeGhgbYbDZMT0/D6XQiFovV/LpLpRKi0SisVivrXY1EIjCZTDAYDNDr9Wz6mUql\nAgDWelMoFDA6Ooq2tjYEg0G0tbVBp9NhY2PjQOfDSaF6ASiXy6zl8jhoaGiAWCxGS0sLZDIZO3lT\nZoF0HGazGcPDw8zvO5fLMWEbj8eDWq1m
wVYqlaKlpQWZTAabm5vw+/01ue6mpiZoNBoYDAbI5XKI\nRCKo1WoIhUJIJBJotVrmMmgwGJjWg4J8T08Pc4H0er2Yn5/H2toaAoFAzb+P6hauTCbDMgnLy8tM\nb0Sfv06nw+DgIC5fvszcH51OJzY2NrCyslL3e1ytVmN8fBzd3d1QKpUIBoOw2Wx4/PjxG9cK6jIR\nCoUoFAqIxWKH3lBVzxx4HSKRCEqlEiaTCRqNhgkZyV9fLpezNaSxsRFqtRpGoxFms5nd0yaTCQKB\nAEKhkD0rVIoqlUpQqVRoa2vD4OAgex+JRIJlDHK5HJvPwuPx3iqb+3cV5OmkLBaLWc8qqfX39vaw\nvLyMaDRal+l26XQaOzs7cDgcSCQSaG5uxrlz53DmzJkDLRY0jtPn88HhcCAajbIvut7E43GsrKyw\n6Wz//M//jMnJSQwMDGBnZwebm5tsqEulUsHHH3+M06dP45tvvmEn+HplSihYU8rd7/djZmYGP/jB\nDzA+Ps6CDtXKCoUCHA4HcrkcBAIBurq60N/fj0qlggsXLmBxcRH/+q//emKDPHCwt/u42tIofU3q\n/4aGhhfSkdVT/yqVCtLpNAKBAH71q1/h7t274PP5uHTpEm7cuAG1Wg25XA61Wg2lUgmxWMx81o+S\nxsZGaLVaGAwGGAwGFjjEYjG77qamJgDP1pGPPvoIExMT7DRJDmx8Pp892//1X/+Fp0+f1tWrgII9\n2UyTcJRev7GxEYODg/j4449x/fp1aLVaFAoFrK2tYXl5+Vj89fV6Pa5du4aenh6USiXs7OzAbre/\n1TrX2NjIRI7UfXXYyW60Brzpe5JKpczYiDI1EokERqMRFosFPp+PrX1KpRKTk5OYmprC2bNnWbsf\nxaCxsTEYDAaYzWYsLCwgEAggm82y+48sfNva2tiQrLm5OaYHeT7N/zr+LoI8LX5arZa1G2k0GsTj\ncTgcDtjtdqyvryOfz0MsFkOhULD6Va3q3qVSCel0GsFgEC6XCy0tLUytSe0tANiuL5lMwu12M2/1\n44AmPfn9fpRKJfz+979HIBA4YDVsNptZatNgMCCRSGB9fZ3VBOt57bTgVXtJP3r0CC6XCyqVivWn\nKxQKlEolRCIR5jFtNBqZVzk9yF1dXfB4PPD7/SdqBgIFGKoNP++rUE9o8aENNG1KKYtmMplw5swZ\ntLW1MSfH6elpfP3113j48CG2t7fZRqFSqTDLUADsGahFKYJq7aTSpjYymtpWLpeRyWSQyWRYIKSy\ngkQiYR0wdM/F43Hs7OwgGAzW/Tuo7r54HoFAgPb2dlgsFqhUKvB4PITDYczPz2NlZaXu2hPqRujr\n64NarUYikcB3332HJ0+evFZ1Xq2FEAgECAQCzFTpsM8mBfc3fU90X5O9LrVyA8DIyAjb8FE2qLe3\nF/39/cxW2O/3M2EpAOh0OkilUlbbz2QybAytUChEOByG2+1GIBBgsyneZd35YIM8LSaNjY0HRia+\nrP+Zz+ezdrkzZ85gcHAQarUaVqsVq6urmJ2dhcfjgUajgUqlQi6XY8NLEolETewdqUWIajpyuRxy\nufzAz9BJCHjWEhgOh+vuQPUyaIrS119/zYwnVCoVq2drtVoMDQ0hGAxiaWkJa2trdelYeB3UITA3\nN4e1tTXm+V7deUE/BzzzMbhy5QpzFJNIJOjp6YHD4UAwGDwRQZ7ubZFIxE64UqkUBoMBwLNTDqXv\n6URXr8mMFMBJEU/Xa7FY8P3vfx/t7e3I5XJwuVy4c+cOfv7znx/4/dXVVYRCIVayyuVyiEQicLlc\nNQvyVMKpXkzpZFgsFhEMBhEOhxGLxVAqlSAUCtHV1QWDwcA2BZVKBdlsFrFYDIFA4MQ5+QkEAuj1\nembZHIvFYLfbsbi4WDfvd4ICtUwmY50ATqcT9+7dw8zMzGvT55RZITtel8v1Tm1/wN8OL2+iWCwi\nnU7D7XZDKBSy3+Xz+WhvbwfwrL2xu7ubqe91Oh3TWFEQJ0dLvV4PnU4Ho9GIXC6HTCbDNAW0oV1b\nW4Pf72cj1WlDchhh4YkO8rSI0U66GpoaNjQ0hObmZuzv7zMP6WpEIhFkMhkGBgZw9uxZXLp0CZ2d\nnWyoAaVY+vr6cOHCBfB4PBQKBSSTSczPz+P+/ftwu90H1KZH9SCUy2VWbyTrVBp0kclk0NjYyGxP\nm5qaoNfr0dTUdCwntJddezqdxsbGBr766itcvHgR58+fh06ng0ajgVAohNPpxPz8fF1c+d4GOl2S\nAIwe7urPkv49m81iY2MDX3zxBf7xH/8R4+Pj6Ovrw/7+PlZWVg5sDI4LOrVTRwafz8fY2Bj+5V/+\nBYlEgtVnbTYbFhYW2EjaWprlVLf9iMViNq2N0vMGgwETExPQaDTY29vDz3/+c/z1r3994e/Q893Y\n2IhKpYJIJIJUKlUzkxaqobrdbvzud79DS0sLkskkpqenEQgEkE6n2b1Dp12pVIrPPvsMFy9eZCXA\nYrGInZ0dWK1WpNPpY79HnofEYBKJBJVKBTMzM/jjH/8Ih8NR9zWFAjWZ17hcLmxvb7+xc4jH40Ei\nkTAL3mKxCLfbje3t7ZoKThOJBDY3N/HLX/6SCUVpsuLAwABT0FO5JpFIQC6Xw2Qy4dGjR9jY2GDl\nhZaWFpw/f551VVHcSSQSiEajmJubw/b2NhwOB7vvqjMOlEl6G44tyL9NoKKboFQqvTAAhWp79M+r\nahStra1M9Xr69GmmigyHw6xFIxQKoaOjg/U90gxg8ocn9eVRBVce79kc4t7eXjb6cX19HYFAAHq9\nnp2CKP09NjbGNgM0mOYkTBgrFouIRCLY2NiAXq9njoLUBuV2u7GyslLXaXOvg+r0b0OxWITX68Xj\nx48xNjaG0dFRmEwmGI1GCIXCE5FRoUXG7XZjaWkJlUoFMpkMg4OD7GSfyWSwtbUFvV6PtbU1WK1W\n2O12RCKRmhiGVKe4SYBHtW5qB21vb4dAIIDX68V3332H9fX1F/6OXC5He3s7E5BFIhE2sKcWgTOX\ny8Hn8yESiSCfz0On0yGRSODx48csyD/fz0x+AH19few5zufzsFqtWF9fr3t56k1UHxpkMhkKhQK2\ntrbw8OHDY5sASEJM6nnf399/Zfsn3e8SiQQtLS0YGRlBZ2cn8vk8QqEQ28DWarOSy+UQDAbZWGG6\nfoVCwQYUSSQSuN1uFItFhEIhiEQiaLVaLC4uwm63M5vj1tZWWCwWDA0NQSqVgs/nI5PJsFZvcvh7\n1dp5mPd4rEEeeP3FCoVCqNVq5tpU/fOlUgnZbBYLCwvg8XgvdSbj8XgYGhrCz372M7S3t0Ov16O5\nuRk+nw/b29uYmZnB/Pw8+Hw+lpaWEAwGkUqlkM1mUSwWmdjtqMUofD4fRqMRn332Gdrb2+HxePD0\n6VPMzc2xUxafz8fAwACuXLmCvr4+SKVS6HQ6JgQ67tM89RWLxWI0NzcjEAjgyZMn7MSm1+vh8Xiw\
nublZ8/ahWpHL5RAOhxEKhZBIJJgvOA0bedtxycDRZX+qoWfg3r17cDqdyOfzOHXqFC5dusQ2kFQG\n6unpgd1ux8LCAr7++musrq7WZG4DaUhocqFOp0NHRweKxSKuXr2KqakpSCQSxGIxpoF42bPV2tqK\nyclJtLa2Angm+iQRZS06S4LBIO7cucOeKxr+9CqnOKq9RyKRA8Yr+Xz+gIjtsNTyfpHJZNDr9TAY\nDGhubkYmk0EwGGQjrY8D0g9Eo1FmQf6yDXS1oJMGM505cwYWiwXZbBapVIqls2t9vdX/WSwWEY1G\nsbi4eOBASPcHrZN0GiejJ5/Ph2vXrrGSc6FQYHMdZmZm4HQ631jqedt75NiCPE0ge1PdhX6ODGLo\njZXLZVbbeNmbJYOFlpYWDA8PQyaTsVPP9PQ0bt68iYWFBXi9XtbvSJPG6LRAQpujvHEEAgGGh4eZ\nOlcgEMDpdCISicDj8TBFaWNjI0v3CQQCVuujz+s4A7xKpWK2wJVKhaXjZTIZZDIZS13KZDLodDpE\no9G6qeqPGlr06XsQi8WQyWQveJa/7veJo/7O6Bmw2Wzw+/0HhgENDQ1hcHAQp06dgkgkYqlulUoF\nk8nEXLdeJziiE/nbXjePx4Ner4fFYoFSqURTUxNrRzSbzZicnEQ6ncaXX34Jl8uFtbW1l54g6e9M\nTU1Bq9UikUhgaWkJu7u7NdMUFIvFQ2ecqPxgMplYdicajcLj8bDe6cNSiyBPVtTDw8O4cuUKjEYj\n6yahtqyjylAe5u+QSQwNrOLxeGhra2OCXRqz3NrayvQS9BrUyqZQKFgZjsaO1xMK5ocpE1C2uLGx\nkY2IzuVyTOvkcDiOtNRzrEEewGsfBLphKGA/L5B63YdADmc0i5vadKxWK27fvo1f//rXdZ+XTMNe\nzp49i6tXr6Kvrw8+nw/pdJrVT6t/Vi6Xo7m5GY2NjYhGo/D7/e+VAnyfBaRazWoymdDb2wuZTIZA\nIACn08nmWmu1WtYloNFo0NHRAZfLVTPzklpCiyMpYsvlMvh8/qGn/9Uq60LPgMfjgcfjAQDs7Ozg\n0aNH6Ovrw+TkJH7wgx+wsaEA2NwGtVr92rahaj3M2y6c1UFepVJBJBKhVCphcHAQ5XIZLS0tuHXr\nFv793/8du7u7CAQCL/wN2ti3tbVhdHQUAoEAe3t7mJmZwc7OzolJf1OPc2dnJ7q6ug48o4FAoK6O\nca+Dyo5arRZTU1P49NNPodfrmfo/FAodWYA/7PpCtXWZTAa5XM5q0wMDA+DxeMhkMujv78fIyAhG\nRkbQ3NyMfD4Pl8uFVCqFlpYWiEQiJoirVrufZOjZkkqlbK2kAVRkHHWUHFuQfxvDDnKoe9sWh2r0\nej1u3LiBCxcuIB6P489//jMeP37MJjPVS2VcDdWSBgYG0NfXB7FYjMXFRfznf/4nbDYb+zna4Q4N\nDWFkZARCoRBbW1u4desWvF7vO1/3u/4ebU4MBgOGhoYwMTEBs9mMmzdvMqtgh8OB5eVlnDlzholp\nKBtyEvQD77LBaWpqgslkglarZR7mXq8XwWDwUH3ox7HYezwe1i5I6UCNRsNsnIFn7+9V0xcPE9yr\nfycejyMQCLDn1u/3I5VKIRQKYX9/H7Ozs7DZbK9MRdJ0RtKmPHjwAPfv38f6+vqJ8ihQq9XM2pQy\nPXa7Hffv30cgEKj7M/oyyNqZyn7j4+MskJLF9FFtvt9FlEwC42g0Cp/PB5PJhJ6eHnz++efsUeM5\nuwAAEpRJREFU/hCLxRCLxUyVnkql0NzcjJaWFqjVaqadIT1FrYe9HAXkZUG+CjweD9vb2/jiiy+w\ntbV15K93bEH+bYIs1cSoLYh6yKlH8WUpYB6PB5VKhd7eXly8eBFyuRxPnjzBrVu3cP/+fWbveBz9\nw9Ra1t3dDalUygbi0CQo+rmWlhb09PRgZGQEJpMJqVSK/Wy9BTIU4EkkcvbsWahUKmSzWXg8Hni9\n3gNq7Wo3JzKPoKxNvaieKEc95JQWI6+Bt/kbNFtbo9Egn8/D5/PB6/UeaAt7E9TuUu/7LR6PI5lM\nwul0slphb28venp60NHRgWw2yza7r6rHvss1ZzIZNjSElPEkqPr2229htVpfmaJvampCe3s7zp8/\nj97eXpTLZSwvL+PBgwc164+n16bv6G16panXfGpqCjqdjimj9/b2MD09/V4nsaO8TyjrNjIygvPn\nz8NgMLCUN42sPsoN+GGvndZ3l8uFp0+fQigUYmRkBGNjYyx7tru7C7vdzjJnUqmUDTRqbm6Gw+Fg\nU/eOI13/LtCkQ5FIBB6Ph2KxCJfLhXv37tXEzfHYgvzb3BDU42s2m3HmzBnWizg3N4eNjQ3s7Oy8\n8Hco3TM1NQWNRoO1tTX89re/xfb2NrODPS6DkK6uLly7dg0dHR3w+/348ssv8fDhQ5bRoDTO2NgY\nfvKTn+D06dMQiURsOIzT6ay7Jzm1iHz88ce4cOEC2tvb8T//8z/4zW9+g1AoxMx5zGYzzp8/D61W\nC+BZFsZkMmFiYgLLy8t1a9GpFuhIJBKmZB0aGoJSqcTvfve7N/rQk3mSXq/H9773PWa1abfb4fF4\n6l7meRcoc1Eul9nnYTAYMDY2hnPnzmFlZYW1eR2lXoJOtcVikZXLkskkYrEYGzDzsmul2eiTk5P4\n6U9/is7OThSLRTgcDjaroRZQ0CaLUPIJfxUCgQBSqRRDQ0O4fv06LBYLBAIBG6gyNzd3IkpTJOjq\n7+/HwMAApFLpAY/3pqYmmM1mZul8HJB3xcrKCrxeLxMkazQaZnV8+/Zt/PnPf0YikUBvby9++MMf\nQqlUMi8IGmG8u7tbV2fB94EGkolEIqa1ogmAteh4OdY++Td9IZXKM595UiMODg6is7MTWq0Wg4OD\nLPgFAgFWg9dqtWxE5O3bt7GysoLV1dVDnb6OGvLFbm9vx+DgIAQCAVM6O51O9nMKhQIWiwWTk5MY\nGxuDQqFAIBDAw4cPsbm5yZS+9cRisWBsbAx9fX3I5/O4desWK3tUL4hyuRytra3g8XjIZrMQCASI\nx+NsMEk9T7NKpRJGoxHj4+Nob29Hc3MzyuUyfD7fG6+BvMjPnDmDs2fPYnh4GJlMBi6XC6urq7Db\n7e+Uxq4lL7ONpTHG5MDV3t6O8fFxDAwMQKvVoqGhAYlE4shV1ZFIhLl0GY1G6HQ6mM1mnDp1Ckaj\nkaXzq6+9s7MTw8PDsFgsGBgYgEqlgs1mY8ZFtTqhkbsYDRARCoVIpVKIRqMH7GDJyUyhUKCtrQ2n\nTp3CxMQE2traAIBNy1tbW0MwGDwx7ZW0IZFKpcwwLJ1OM3ExTeE8LqgkRJMeb9++jWg0ykTS+Xwe\nT58+xfLyMnK5HFQqFQQCAdPJJJNJNnOE7GQ/BOjwWp3lKxaLNbMrP9EneboJ/H4/8vk8JiYmoNVq\n0dXVxdSwf/nLX7C2tsYGvPT29oLH42Fvbw+//e1vWWvR
cSIUCtHc3Iy2tjaYzWakUik4nU7Y7Xa2\n66fJUBcvXmTzzTOZDHZ3d3H37l1sbGwcyyZlYGAA169fh1wux/LyMn7xi18gEAi84CxIKlly+xKJ\nRNjf38f6+jqSyWRNDEyehzImGo0Go6Oj+Pzzz5mI5+bNm3j69OkrMyH0uzKZDBaLBZ999hkmJibQ\n3NzMelxXVlZgt9vrLtZ8nurSDvAsWFHrYmNjI9NPUDCnjZrRaIRarX6hr/ioqFQqCAQCsNvtcDgc\n6OrqQldXF7q7u1EsFrG2toZ0On1Az8Dn8zE6Oop/+qd/YunkRCKBu3fv4quvvkI4HK7Zfd/Q0ACJ\nRMKmEtKMCLvdjng8ziaE0amYMiFXr15FV1cXGhoamN/GnTt3sLy8jFQqdSKCDWUF6d6gNuNcLsfa\nhOVyORvLepzXTKZUd+/eZbMLALyQ4SwWi8yymdoXPR7Pe08PrWXb4sug1ldqlT6MD/27cKId7whS\nHv7pT3/C7OwsGhsbYTKZcOrUKYyMjODcuXMAwHZDMzMzWFhYQDgcPhFqS+r3l0gkyOVy2Nvbw+7u\nLmtd4fP5UCqV6OnpwUcffQSFQoGlpSXMzc2x0kS9XeMoharT6WAymbC0tITV1dUXTlUUWAQCAZtW\nRyng1dVVOJ1OFAoFliZ8H/Hf22R+AMBoNGJkZISJjCqVChMGvqpnXyKRQKVS4fTp0zh79iw6OjqY\nK+Lc3Bxu374Nt9tdcz1HdZ8tGT2RwQwN0qF7mv63/v5+9PX1oaurCy0tLVAqlWwQU/XEqrW1Ndhs\nNubzXQvnOzKuefz4MfR6PcbGxlht+MaNG5iamoLf7z/gNUBmOX6/H3a7HfPz81hfX6+5jXOpVEKh\nUEBzczMbc5pIJNDT08NaDIPBINPIdHd3o7OzE+3t7YjFYuzzpH/eRxR71NApUaPRoLW1FTKZDCKR\nCOVymWVTXC4XfD5f3YPcm3hZCZaeT6PRCKlUilQqhfn5eayurh56IM3LqOdnIJfLmWiT2sjpPdbC\nQOmDCPLkub20tISlpSXw+Xz09PQgHo/jxz/+MU6fPo1yucwsR7e3t7GxsfFOwwpqAQVM8qqn0oFc\nLmfpQmoZamtrg9frxezsLB4/fozNzc1j2ayQrahGo0FLSwvr/61OM9F8ZbPZDI1Gg0QigY2NDdZ/\n6/P5EAgEWJA/iofxTVQqFeYZTelqsousLh0Af/tepFIp9Ho9uru7MTo6iq6uLvB4PCYspMXkqIVK\n1dDEwba2NigUCjZwpjrok1CJ3gOJCk+fPs1S8W1tbQfEjslkEtvb21hcXMTMzAxmZ2fhdrtfcJA8\nKiqVChKJBJaXl2EymdDf3w+dTscGwEgkEqacJ5+LXC4Hm80Gq9WKlZUVPH36FOFwuOalKboGGq40\nNDQEHu/ZwBav1wuPxwOXywWTyYTBwUF0dHRAoVAgn89jdXUV9+7dw/r6OpxOJ+smOClQmaGxsRFN\nTU2sJp/P51kmcXV1FT6f77gv9QWevy8pi9Le3s6yU8FgkJVzjkNn9T40NTWxZ4EOPtRRRePPj5IP\nIsg/T7lcZpZ/Q0ND6OnpYVN+SOT1Jv/jelIoFBAOh1kPbXNzM5tQpNVqMTIywnzp3W43vv32W3zz\nzTeIx+PH1oJGozfJApPsJ6sDpFQqxeDgID777DNIJBIEAgEsLS1hY2MDsViMnRTJAe19OMxDnE6n\nEY/H2UYkk8lALBazrgBKk1Gvand3NxOkZbNZuN1uhMNhuFwuzM7Owmq11tQuE3iWSbBYLMwn32w2\no1AooFAosKlmJHTM5XIs1axQKKBSqaBUKiGXy1lalnrgfT4f7t27h//4j/9AJBJBMpmseTYim83C\n4XDg1q1bCAaDuHr1Kjo6OphpTyQSgdPphM/nY1PqKOBTFqgem3MSCNK9ajQa2cz4eDyOWCyGcDjM\nBJw0wtfr9WJpaQnfffcd6yQ4CYeJ56FOpFQqBbFYzExXyIlyeXkZPp/vRF47QevM5OQkRkdH2WmX\nRq9ardb3Xh8P2579PlDpgTJZ1f+dhq0dNR9kkAfAFms6JYrFYuRyORbgT0KanqAaTCgUgsfjQU9P\nD4aHh9lYW71ej0QiwfqI6bRVz5vveWQyGbq7u6HRaMDn82EwGDA6OgqVSnVgrrJOp4Narcbu7i6W\nlpawvb3NVKJ04qfdar3eC82PTyaTzECjt7cXP/rRj5gvNm1aaCqXVquFSqXC2toatra2kMlk4PF4\n2MCM9zUgquZVDo1isRhGo5GlhamHvaGhAfl8ngVomidN6fpK5ZknP6WLSeiZTCbx6NEj/PWvf2VG\nMvX4DsiBzOFwIJPJIJVKQavVwufzIZFIsL75aDSKdDr9SmvbekC2qoVCgVk0A89OWwqFAmq1mn1u\n9IxOT09jfX2dTSM8iadIuifcbjc8Hg8GBwcB/M17JBQKMQ+DkwxNqRscHERPTw/4fD68Xi82Nzfh\n9XqRTCZP5Of/MqhsRjMEaDwx8Ddxdi0swD/YIE+7H3JMokE2gUCgJm0I70Ol8mxwBZ1g+vr60N/f\nj7GxMZbC/+677/Do0SP87//+L0Kh0LHvrmUyGXp6eqBUKlEqlWA2myGVSlEsFmGxWJj40e12Y3Fx\nEfPz87h37x7i8fgLp956ZyJisRhcLhdCoRDkcjmy2SxGRkZw+vRppqSm1Bil4Gnh2Nvbw9zcHOuH\nf99TLz3YFAxeVfsrl8usBYhU0FTWaW5uhkKhgFarZWl7stKkkcjBYBD7+/tIJpMolUoss/Kb3/yG\ntRfVGypN7e7u1v213xYKhrTRqNYwUMmK3NT29/exsLBwoGXrJEMtiPv7++wwVCwWkc1mkUwmkUgk\nTkQnwOsgx77e3l50dHQwfc3q6iri8fgHE+AJctGs7pMnJ02a1njUfLBB3mKx4OzZsxgcHIRUKmXD\nRN5XaVkLisUiUqkUdnZ2UCwWEQgEoFarWWoykUjA6/XC7Xa/16nxKKGhC0KhkD1MIpEISqUSPp8P\nVquVzYn3+/1wuVyIx+N1qbu/CbfbjcePHyOTyaCrq4tNfDIYDCw9LJPJEI1G4Xa7sbCwwHwX3G43\nsw8+ilpf9UjI15Usstks7HY7fvnLX+Kbb75hm1a5XI6zZ8/CYrGgtbUVqVQKkUgE+/v72N/fZ/c7\nTfGia6YJdD6f78QHo+OkXC7D7/djZmYGv/rVr5glr9/vh9/vZ6fFTCaDWCyGYDAIh8PxQQxdKpVK\nCIfDiEQizNSqsbER8XicucMd97P6Juj6SNNDpZKnT5+emBHWh6FcLiMajWJra4uV31wuF/b395mh\nz1FzokfNvu53zWYzPv30U3R1dUEgEDAlbDgcPhFBshoaHev1ehGPx+HxeCAQCBCNRg84952kB44E\nW+VyGaFQCEqlEmq1GlqtFl6vFzabjU1LOmmLBQ3ECYfDsFqt6OrqQigUQmdnJxv+I5VKWbC8f/8+\nS//VQm1OvC7
IFwoFBIPBA25pfD6fmckMDQ3BbDYjGAzC6XTCarVid3eXTUmsLk+9rXsbx7PvJxaL\nYWtrC5VKBSaTCS0tLWzT7XK5WEmhWCyydq8P4bOlDCKpt4VCIYRCISuhfAjvg6yx9/b2IBAIEAqF\nmE6m3sZg7ws9k7FYDNvb22yC6v7+PlZXVxGJRGrjB3Hkf/EtqbaSJN7mhqNapMlkwuXLl6FQKJBO\np7G7u8tEJCf1xiWVN5UTTvKCUSgUEIlEsLq6CpvNxvpthUIhCoUCsy89aQGeKBaLCIfDSKVSTOvQ\n1NTE/n8SqNEM51rO/n4XX2/6+Xg8jrm5Ofh8PhiNRuzs7MDpdLJeZzLQOOxzxPE3qN6+vr6OnZ0d\nNjCkUCiwz5fujQ9p89TQ0AClUgmlUnnA6pkEhx/C+yiXywgEAvjiiy8gEokQiUTYSPCTuna+DhJD\nZjIZ/OIXv4BMJmPeIrV6Px/cSb56R8rj8eDz+bC7u4s7d+5gbW3tRH/ptFicJFHgq6BrpY3Jh0Z1\nrTWZTCISibzQjnbSF2w6iXm9XqRSKfj9fjaI46Rf+4cGOWu+aYb3h0SpVEIsFsPe3h4WFhYgEong\n8/lgs9k+mHp2pfJsiA1lWkg4+yFD61K9ysofXE2ez+dDJBIhn8/D7XazU9of/vAHOByO4748jhMK\nZU0+NEqlEivpnPRMFcfJgoR3s7OzUCgUSKVScDgcWFpaOrIRs/WAPO453o3ae42+6oV5vApw+NQi\nn89HY2MjMy6JRCJsd3pSzG84OI6ak+ZKxnHyofR8c3MzDAYD6+QJh8MfXD2b45W8MYYfW5AH8E6r\nFdXkyaM7m82yNiduAeT4e4TbAHC8CtI+VYsxufvk/yuOM4ZzcHBwcHBwcHBwcHBwcHBwcHBwcHBw\ncHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBw\ncHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBw\ncHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBw\ncHBwcHBwcHBwcHBwcHBwcHBwcHBwcHBwcPxd8/8AMbEVX/K/ppAAAAAASUVORK5CYII=\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "# We have the option to train the discriminator more than one step for every \n",
+ "# step of the generator. In order to do this, we use a `GANTrainSteps` with \n",
+ "# desired values. For this example, we use the default 1 generator train step \n",
+ "# for every discriminator train step.\n",
+ "train_step_fn = tfgan.get_sequential_train_steps()\n",
+ "\n",
+ "global_step = tf.train.get_or_create_global_step()\n",
+ "loss_values, mnist_scores, frechet_distances = [], [], []\n",
+ "\n",
+ "with tf.train.SingularMonitoredSession() as sess:\n",
+ " start_time = time.time()\n",
+ " for i in xrange(1601):\n",
+ " cur_loss, _ = train_step_fn(\n",
+ " sess, gan_train_ops, global_step, train_step_kwargs={})\n",
+ " loss_values.append((i, cur_loss))\n",
+ " if i % 200 == 0:\n",
+ " mnist_score, f_distance, digits_np = sess.run(\n",
+ " [eval_score, frechet_distance, generated_data_to_visualize])\n",
+ " mnist_scores.append((i, mnist_score))\n",
+ " frechet_distances.append((i, f_distance))\n",
+ " print('Current loss: %f' % cur_loss)\n",
+ " print('Current MNIST score: %f' % mnist_scores[-1][1])\n",
+ " print('Current Frechet distance: %f' % frechet_distances[-1][1])\n",
+ " visualize_training_generator(i, start_time, digits_np)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "[]"
+ ]
+ },
+ "execution_count": 13,
+ "metadata": {},
+ "output_type": "execute_result"
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAg8AAAFuCAYAAAAGZfvAAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzt3XmYHFW9//H3hEASSEIggAQMoOAFwiYTJIACSVgEgmyy\nDSBBCCJrZvS6c2VRr4oiAS67mgDBARVQCCCLgbDIZgKIEJFFFiWsIZCdkMzvj2/1b3o6PUvPTHf1\n8n49Tz8zXVXddc50T9enTp1zGiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiRJkiQpn/uAFcBD\nHWxzfbLN5KxlLwPvA8PbecwK4Kys+1OAf+VsswZwNvAssAiYl5RjAlCXbLNJ8lyd3Y5tpxxnd/K4\n77bzuN6UKUNvmAD8vBuPG52UYbfk/nHJ/Y26+PghwDXArt3Yt9o6E/jvtAuh2tA37QKoarUQB5FR\nwMeBf+esXwP4Qta22QYBvwQ+38Fzt3e/DrgV2BL4MfA3YACwD3AFsBXQBLwO7JT1uA2Am4AfALdl\nLX+pnTJk7NTO8tz6Fkvu36K7zgSm98LzTCP+Jm90cftPA8cQr7d65lwiUEpFZ3hQMc0iDtaHARfk\nrPsCsACYm+dx84C9iLPhrhxU6rJ+/xxxNrw3cE/W8juA5cDpRKh4C3gsa/0myc8Xc5Z3ppBti6Gu\n801K+lzvJLc09i3/jiqRPmkXQFVtIXEWf3iedUcAvwc+ylneAtxCXPb4OdFqUYj1k5/53tuXAt8p\n8Pl6agoRYi4jLsfMJsrWB/g28AKwBHgOOC3P448E/kr8LV8BfgKslrPNfsBTwOLkeY7JWb820ery\nRrLNw8DYrPUvE5cZxtP5JYeTgH8Sl4PuAzbOWX9cznOsA/wWmJPs+4ms8o2mtbXj3qzfVwG+Bfw9\n2c8C4rLTmKz9nA08D4wjWpeWJOXKvcy0HvCrpO4fADOAXbLWd/V1yLZJUsdjiPf3QuBV4nJa7sF7\nAvBM8tyvJNtkvzenkP/9kasP8dr/K3mul4Af0XoCmLl8dRZtL2VtTbQGvZ/cbgI+kbV+dLL9fsCD\nxN/7eeCUdmsvSUV0H3Ew+CLx4ZTdh2EwcSD5HPFh+OusdZn7nwDmEy0G2VYA38+6P4W2fR7WJQ4S\nc4kP292Jyxad2YSO+zjkOjvZfhXiAzz7ln0AmQJ8SFxKGQ0ckCy/Alia1GVP4IdEkDoz67EnJfu4\nkmiJOSmpW6Y1JlOG15Jy70UchJcD2yTb9AeeJC7THE9cvvldUqbMwfjTyfpbgR1ZOZxknJbsb1JS\n5p8kdeioz8OdwMyk3rsTr+2K5PdBwMnJ/a8CWySP+RlxQD6V6AtxFHFQf5fW1/JsIlS8BHyZCEN/\nSp5r82SbNYhQ8DIRjPYkDqTzs7bpyuuQa5NkP+8R/TX2Ji53fUTbfiPfIV6LC5Ln/gZxcM5uTZtC\n/vdHru8k9R+f/E2+ASyjtf/PKFrfKzsmy/6LeL88AhwIHErre2HdZJvRyePmAr8g3kOXJMs6C1GS\n1OvuI8JDf+ID7GtZ68YTH+gkP/OFB2g9WJ2Qtb6z8AARSl6gtfPi0qQ8XyEO9vlsQvfCQ77blTnl\nW0H0qcj4L+Kg8o2c5zyXOLisRZxpvgncmLPNROJgvFpWGfbOWr8pbT/4T0zufybnee6j7SWX3BCX\nqy4pzw05yy+l4/CwmLatPXXAebSe/Y/OeTzAVOCMnP0cQmsfGmite3ZrxPBkWVNy/1Ti77xt1jb9\niI60J9L567A2+W2S7OeenOUXEO+1wcCayXNckrPN8cljt0zuT2Hl90c+fwLuyll2CnB01v3c/43r\niKAwMGvZWkToOS+5Pzp5XO7lwZvper8V1SAvW6jYlhBnVdmXLo5k5YNQPv8H3A+cD2xYwD4fBD5F\nnN3+iDhI7gxcThw0+xfwXJ3ZIc/tBznbvEN8iGeMJQ6i02jbYnFrUrZdiQPbukQzc7YLgZHE2WrG\nA1m/Z4LUkOTnHsRBYFbOvqYlZV2zi/XcPCnPH3OW/66Tx91LHIxvIELjesA3gb908JhjgIuISx47\nE4Ekc6kjt1Xk4azf/5P8XCP5uSvRMvG3rG2WAiOAq+ja69CRqTn3bwJWTcq8c/Ict7Ly3x3iDD8j\n9/2Rz3Si9eJ+IhxtQQS36zp4zB7E+31x1v7nE/8fe+Vsm68u6xHvQ2kldphUKfyWOJMZTpyN7UHX\nhzIeT3z4X0Vcl+2qFuKgmjmwDiGapE8hWjJyzwi7a1YXtlmQc39o8vOZPNu2EGeh7yb33+rC8y/O\n+j1zvTtzYjCU6AeyrJ19DSOuhXfW0S5zFv52zvI5nTzuSOK1PoLoOLsCuJu4XPFyO4/ZgTgw7kDU\n7e9EnwLylHNJ1u/56t7R36+z12FYB4+F1rCSkdnXEFo/W29v57mzWxpy3x/5/CzZ7nji0sj5xN+l\nkfZHyQwl/v5H5lmX+3dpry5rdaFsqkGGB5XCn4hLF4cnP18iOs5B50MNXyIOPpOID87O/JY4m84d\n5jmPGGnRQGuTcankHvDmJT/HEGeCudu+Spz1Qeu16Yy1iINqR2fuuft6nqh3vjK9nPzs7HXIjKD4\nWM7yobkb5viA6JD4baI16CCiaf1S8ofBwcT75UlipM7sZPl+RP+ZQsyjdRRNtp2JwPRecr+j16Ej\nuXXP/G3eorVvxlFER87c534z535nWoi/2aVEi8w44HvEZa31yB8O3yOC2vl59p/bUXko8T7JyK6L\ntBIvW6gUlgJ/ID78DwOaC3z8xURT6y+6sO0/iZaNHfKs25C4/vt0gfvvqdwD84zk57pEy0XmtjbR\nxL82cdB8Bzg457FfIs5m+3Vx3/cRLT5v5+xrDK2d7qDzyab+SXTMzB0584U822YMI+a7yBz0nyfO\noO+hdRTN8pzHbEHU/0JagwPAvsnPQj6z7gc+SYw4yOhPHHAnJOuh/dehs2B0YM79Q4mOno8AjxKX\nlj6e89xLgf+lbajpylwdM4i/CcT74mqi9WxNouMprPwaziAC2FNZ+3+C6E9yUBfq8jIr9yeSAFse\nVFzZZ1Q3EMPaltO2F3fuWVe+s7AWokf93/Ksy/Vz4oNxOvHhOoO4VLIt8HUiOEzpwvP0ptw6/Z24\nxnwVcRCZSVxb/hHxYf1Pos5nEXW4hAhfmxP9KS4l//wY+Uwm/t53Ewet14hw9W2iX0Hm4P0eUE/0\nE3mMtpdCMr4F/IboEPp7YjKor3aw7zlJXS4iWhReIkLdvklZoLUVZn+ileIfyc8zk7J9RBzIMge3\n7M5/nZlMHChvAf6HOOhOJALExcTfuqPX4blOnv8woj/JHUTHw1OIVrLFye084vUaTLwPNyBCSQtx\nQM/oSsvDn4m/yRtEP48NiffzfbS+F+YRnYV3T/Z3brLtNGIo6FKio+jBrNyK05SU+ZFk3f6s3Fol\nSUWXPW4fIqi+y8p9BNobqplPI3FAy
e5RPpmVZ4EcRHxoP0U0Ty8metifC6zeznNvQmGjLc5i5bPm\nfPKVD2LUx5nEqJClRBP5/9Ha0THjWCLwLEm2/R6tI0baK0Nur/t1id70mXkeniUOPNmOTNYvou08\nCLkOT8qzmDi7PiIpQ/Zoi+W0nefhKqIFYgnR+vBdWg+YdUSnv0W0tghlAsxCIoBcktQhM/wWYrRF\nV+o+jAgI7xLvhTtpO/qiq69Dtk1onYL8jqTs/yBG8+Q6mQiLS5K6XEPbuUvae3/kqiNe+38Sf/s3\niBCX3Sehifgbzad1aPT2REvV+0Qoe4gIBhmjk7qcTFwKy8zFkdviJUmSemAT4oDblT445W40UZex\nnWwntVFon4exxNnG+0SKvojWYW+XEel6ftZtQtZjxxNnHQuBx2n/OwEkSVKVWJdonss0665PXIM+\nO7n/ONGZK5/RRODYmWgmbCSuP3bUNChJ5WgTqqvlYTm2PKjIMpOv1BE9mP9JdBLqR+vkK/lMJSbo\nyfYs1fHPJ0lSTSn0ssXC5OdrRKvD60TP9e2IDnHnEh15niNmkct0ihrBysPjnqV1/n1JklQhujtU\nc1NiLPR1xJCtXxC96y8kel/XEzMKriCGzg2iNXhkLKLjYVfD6HyGN0mStLI5dD4DbLd1NzwsJQr1\nLaIDZQMx73rG48SMgEcQ4WEhKw+RW4OVp7rNGLbBBhu8/vrrnU33LkmS8vgP8YV4RQkQhYSHXYBf\nEWOkM7PS9SdmUdubaIm4Imv7/kTrAsQ45+xZ3iAuZUwjv2Gvv/46U6dOZcstSz2TcGk1NjYyadKk\ntItRdNazuljP6lIr9YTaqOvs2bM55phjNiRa71MPD08RrQc/IWan24BoVfglESZ+QQzFvJcYhnkG\nMaoCYtKfm4nvHXiI+KrcdZNl7dpyyy2pr68voIiVZ8iQIVVfR7Ce1cZ6VpdaqSfUVl2LqZDwsBDY\nh7gc8SYx9PJaYia/ZcTsZpcSs6e9Qczy9pvksdOJURmXJev/TkxROw9JklRRCu3zMJuVv60w48rk\n1p7r6Pi75yVJUgXwWzUlSVJBVul8k1QMA04aNeoktt+++kdrbrNNbUx3YT2ri/WsLrVST6j+us6Z\nM4crr7wS4mpAUTpMduWrYNNQD8zccceZPPqoHVskSeqqWbNmMXLkSICRrPxNxr2irC9bPPYYPPhg\n2qWQJEnZyjo8bLYZnHNO2qWQJEnZyjo8fOUrcM89tj5IklROyjo8jBkD22wDZ5+ddkkkSVJGWYeH\nPn3grLPgz3+GBx5IuzSSJAnKPDwAHHwwbLutfR8kSSoXZR8ebH2QJKm8lH14ADjoIFsfJEkqFxUR\nHmx9kCSpfFREeIBofdhuO0deSJKUtooJD5nWh+nT4f770y6NJEm1q2LCA8CBB0brg30fJElKT0WF\nB1sfJElKX0WFB2htfbDvgyRJ6ai48NCnTwSHe++FGTPSLo0kSbWn4sIDROvDpz9t3wdJktJQkeGh\nri76Ptj6IElS6VVkeABbHyRJSkvFhgdbHyRJSkfFhgdobX1w5IUkSaVT0eGhri6Cw333xU2SJBVf\nRYcHgAMOsO+DJEmlVPHhwdYHSZJKq+LDA0Trw/bb2/dBkqRSqIrwkGl9mDHD1gdJkoqtKsIDwBe+\nYOuDJEmlUDXhwdYHSZJKo2rCA9j6IElSKVRVeMhufbj33rRLI0lSdaqq8ADR+lBfHyGipSXt0kiS\nVH2qLjxkWh/uv9++D5IkFUOh4WEs8CjwPjAHuAjon6wblaxbALwEHJ/z2PHA88BC4HFgp+4VuXP7\n72/rgyRJxVJIeFgXmAZcAqwJbA+MBr4NrAXcDkwBBgMnABcAuyePHU0EjWOT9dclzzWkZ8XPz9YH\nSZKKp5Dw8DYRIK4B6oB1iFaHt4AvAu8AlwErgHuJgHBC8tgJQDPwMLAcmJQ87pAe16Ad++8PI0fG\n13bb+iBJUu8p9LLFwuTna8DfgNeJ1oatkvvZZgPbJL9vBTyds/7ZrPW9LtP68MADjryQJKk3dbfD\n5KbAhkQrw++BgcCinG0WJctJfi7sYH1RjBsXrQ/2fZAkqfd0NzwsJTpMfgvYhwgGq+dsszrwQfJ7\nvvVrZK0vClsfJEnqfX0L2HYX4FfAtsCyZFl/4EPiEsTeOduPAP6e/P53YOs866d1tMPGxkaGDGnb\np7KhoYGGhoYuFzq79WHMmAgUkiRVg+bmZpqbm9ssmzdvXtH3W8ihdA0iJPyeGGGxAfBbYtjl94EX\ngHOAS4HPAX8ADgBmEEM8bwYOBB4CTgXOBDYD8tWyHpg5c+ZM6uvrC65UrmnTYvKoe+6BPfbo8dNJ\nklS2Zs2axciRIwFGArOKsY9CLlssJC5RbA28CdwH3Ak0AXOBvYDDiFEXVwKnE8EBYDpwCjEaYy5w\nBLAv+YNDrxs3DnbYwb4PkiT1hkIuW0CMoPh8O+tmEi0O7bkuuZVcpu/D/vvD9Om2PkiS1BNVNz11\ne/bbz9YHSZJ6Q82Eh0zrw4MPRuuDJEnqnpoJDxCtD5/5jLNOSpLUEzUVHjKtDw89BH/+c9qlkSSp\nMtVUeADYd99ofbDvgyRJ3VNz4cHWB0mSeqbmwgNE68OOO9r6IElSd9RkeMhufbjnnrRLI0lSZanJ\n8ACwzz62PkiS1B01Gx4yrQ9/+YutD5IkFaJmwwPY+iBJUnfUdHiw9UGSpMLVdHiAaH0YNcpZJyVJ\n6qqaDw+Z1oeHH4a77067NJIklb+aDw8An/98tD7Y90GSpM4ZHrD1QZKkQhgeErY+SJLUNYaHRF0d\nnHNOtD7cdVfapZEkqXwZHrLsvTfstJOtD5IkdcTwkCXT9+GRR2x9kCSpPYaHHLY+SJLUMcNDDlsf\nJEnqmOEhj733hp13dtZJSZLyMTzkkWl9ePRRuPPOtEsjSVJ5MTy0Y6+9ovXBvg+SJLVleGiHrQ+S\nJOVneOiArQ+SJK3M8NCBzKyTjz4Kf/pT2qWRJKk8GB46seeesMsutj5IkpRheOhEpu/DY4/Z+iBJ\nEhgeusTWB0mSWhkeusDWB0mSWhkeumjPPeGzn3XWSUmSDA9dlGl9ePxxuOOOtEsjSVJ6DA8F2GOP\naH2w74MkqZYVGh62A+4G3gXeAK4GhibrLgOWAPOzbhOyHjseeB5YCDwO7NTtUqfE1gdJkgoLDwOA\nO4AHgY8BI4jgMDlZ/xngRGBQ1u2XybrRwEXAscBg4DpgGjCkR6VPwR57wOc+Z+uDJKl2FRIeNgKe\nAM4FPgLmAlcCY4HVgG2Ame08dgLQDDwMLAcmAW8Bh3Sr1CnKbn24/fa0SyNJUukVEh6eA8YB2efb\nhwKziMsZfYlg8Uay7TeBumS7EcDTOc/3LBE4Ks7YsbY+SJJqV086TP6QCBMnA2sC9wIXAhsCxwBn\nAF9Pth1E9HXItggY2IP9pybT+vDXv9r6IEmqPX278ZjBRD+H7YHdgGeS2z1Z2zxOXJo4Avg5ER
xW\nz3meNYC3O9pRY2MjQ4a07RbR0NBAQ0NDN4rdu7JbH/bbLwKFJEml1NzcTHNzc5tl8+bNK/p+Cz3k\nbQrcDrwMNBD9HgAOIjpRXpG17ZnAXsDuwFTgA+CUrPWzgfNo7XCZrR6YOXPmTOrr6wssYulMnx4d\nKKdNg3Hj0i6NJEkwa9YsRo4cCTCS6FrQ6wq5bLEWMJ0YbbEPrcEh4xdE58k6YGfiskUmTPwaOJoY\ndbEq0AisC9zczXKXhTFjYNdd7fsgSaothYSHLwPDiUsRH9A6l8MHwB+AJuDSZNm1wPeB3ySPnU60\nOlxGhI4jgH2B4retFFF234fbbku7NJIklUa5XqmviMsWEC0Ou+8OixbF8E37PkiS0lRuly2UR6b1\nYeZMWx8kSbXB8NALxoyB3Xaz74MkqTYYHnpBduvDtGlpl0aSpOIyPPSS0aNtfZAk1QbDQy/JtD7M\nmmXrgySpuhkeepGtD5KkWmB46EV1dXDOOdH6cOutaZdGkqTiMDz0stGjY94HWx8kSdXK8FAEZ58N\nTzxh64MkqToZHorA1gdJUjUzPBSJrQ+SpGpleCiS0aPjZuuDJKnaGB6K6KyzovXhllvSLokkSb3H\n8FBEtj5IkqqR4aHIzjoLnnzS1gdJUvUwPBSZrQ+SpGpjeCiBs8+O1oc//jHtkkiS1HOGhxLYfXcY\nMyamrrb1QZJU6QwPJZLp+2DrgySp0hkeSsTWB0lStTA8lJB9HyRJ1cDwUEK77QZjx0aIWLEi7dJI\nktQ9hocSO+sseOopWx8kSZXL8FBimdaHc86x9UGSVJkMDymw9UGSVMkMDymw9UGSVMkMDyk5++xo\nffjDH9IuiSRJhTE8pGTXXWPeh/POS7skkiQVxvCQoq99DR59FB5+OO2SSJLUdYaHFO23H3zqU3DB\nBWmXRJKkrjM8pKhPH5g4EW68EV55Je3SSJLUNYaHlI0fD4MHw8UXp10SSZK6xvCQsoED4Stfgauu\ngvnz0y6NJEmdMzyUgdNOg4ULYfLktEsiSVLnCg0P2wF3A+8CbwBXA0OTdaOAR4EFwEvA8TmPHQ88\nDywEHgd26l6Rq8/w4XDYYXDhhbB8edqlkSSpY4WEhwHAHcCDwMeAEURwmAwMAW4HpgCDgROAC4Dd\nk8eOBi4Cjk3WXwdMSx4noKkJXnoJbr017ZJIktSxQsLDRsATwLnAR8Bc4EpgLPBF4B3gMmAFcC8R\nEE5IHjsBaAYeBpYDk4C3gEN6XIMqseOOsMsuDtuUJJW/QsLDc8A4oCVr2aHALGAr4Omc7WcD2yS/\n51v/bNZ6Ea0P998Ps2alXRJJktrXkw6TPyTCxMnAIKIvQ7ZFwMDk94GdrBdw0EGw8cYwaVLaJZEk\nqX19u/GYwUQ/h+2B3YBniGCQ239hdeCD5PeFyf1sawBvd7SjxsZGhgxp+7QNDQ00NDR0o9jlr29f\nOOMM+Pa34ac/hWHD0i6RJKmcNTc309zc3GbZvHnzir7fugK335ToGPky0ED0e4Do0/A1ohNlxmVE\nYBgPTCWCxClZ62cD5xFBJFc9MHPmzJnU19cXWMTK9v778PGPx8yTP/xh2qWRJFWaWbNmMXLkSICR\nRNeCXlfIZYu1gOnEaIt9aA0OADcB6wMTgVWBMcBRwK+T9b8GjiZGXawKNALrAjd3v+jVac014YQT\n4PLLYfHitEsjSdLKCgkPXwaGA0cQrQjzk9sHRJDYCziMGHVxJXA6MCN57HSi1eGyZNsjgH2B4ret\nVKAzzoC5c+Haa9MuiSRJKyv0skWp1Oxli4yDD4bnnoNnnoG6cn2VJEllp9wuW6iEmppg9my48860\nSyJJUluGhzK1665QX++kUZKk8mN4KFN1ddH6cNddcelCkqRyYXgoY4cfHnM9OGmUJKmcGB7K2Gqr\nxdd1X3stvN3hdFqSJJWO4aHMnXQS9OkT8z5IklQODA9lbuhQOPZYuOQSWLo07dJIkmR4qAiNjfDm\nm3D99WmXRJIkw0NF2GIL2Hff6DjZ0tL59pIkFZPhoUI0NcGTT8KMGZ1vK0lSMRkeKsSee8LWWztp\nlCQpfYaHClFXF30fbr0VXngh7dJIkmqZ4aGCHH10jL648MK0SyJJqmWGhwrSvz+cfDJMngzz/DJz\nSVJKDA8V5pRTYNkyuOqqtEsiSapVhocKs/760NAAF18MH32UdmkkSbXI8FCBmprgtdfgxhvTLokk\nqRYZHirQdtvBmDEO25QkpcPwUKGamuDRR+Hhh9MuiSSp1hgeKtS4cfCpT9n6IEkqPcNDherTByZO\njH4Pr7ySdmkkSbXE8FDBxo+HwYNj5IUkSaVieKhgAwfCV74Cv/wlzJ+fdmkkSbXC8FDhTjsNFiyA\nKVPSLokkqVYYHirc8OFw2GHxfRfLl6ddGklSLTA8VIGmJnjxRZg2Le2SSJJqgeGhCuy4I+yyi8M2\nJUmlYXioEo2NMGMGPPFE2iWRJFU7w0OVOPhg2HhjWx8kScVneKgSffvC6afD9dfDnDlpl0aSVM0M\nD1VkwgTo1w8uuSTtkkiSqpnhoYqsuSYcfzxcfjksXpx2aSRJ1crwUGXOOAPmzoVrr027JJKkamV4\nqDKbbgoHHgiTJkFLS9qlkSRVo+6Gh3WBF4Dds5ZdBiwB5mfdJmStHw88DywEHgd26ua+1YmmJpg9\nG+68M+2SSJKqUXfCw2eBh4FPAtnntjsAJwKDsm6/TNaNBi4CjgUGA9cB04Ah3Sm0OrbrrlBf77BN\nSVJxFBoejiMO/N/NWd4P2BaY2c7jJgDNROhYDkwC3gIOKXD/6oK6umh9uOsueOaZtEsjSao2hYaH\nO4gWh9/mLN8O6AucC7wBPAd8E6hL1o8Ans55zLPANgXuX110+OEwbFh8YZYkSb2p0PDwJrAiz/LB\nwL3AhcCGwDHAGcDXk/WDiL4O2RYBAwvcv7potdXi67qvvRbeeSft0kiSqklvjba4B9gTeIC4LPE4\ncWniiGT9QmD1nMesAXzQS/tXHiedFD8vvzzdckiSqkvfXnqeg4CPAVdkLetPtC4A/B3YOucxI4hO\nk+1qbGxkyJC2fSobGhpoaGjoUWFrxdChcOyxMePkN74Rs09KkqpHc3Mzzc3NbZbNmzev6Put63yT\ndq0gRlHcDxwMTAW+QFy+2An4I9AI/AYYC9wMHAg8BJwKnAlsBuSrZT0wc+bMmdTX1/egiJo9G0aM\ngKuvjiAhSapus2bNYuTIkQAjgVnF2EdvXba4GWgCLiXmd7gW+D4RHACmA6cQc0HMJS5n7Ev+4KBe\ntOWWsM8+MWzTSaMkSb2hJ5ctcoPHlcmtPdclN5VYUxN8/vMwYwaMHp12aSRJlc7pqWvAXnvBVls5\naZQkqXcYHmpAXR00NsKtt8ILL6RdGklSpTM81Iijj47RF04aJUnqKcNDjRgwAE4+GSZPhhKM4pEk\nVTHDQw055RRYtgyuuirtkkiSKpnhoYasvz40NMDFF
8NHH6VdGklSpTI81JimJnjtNbjpprRLIkmq\nVIaHGrPddjBmjMM2JUndZ3ioQU1N8MgjcZMkqVCGhxo0bhxstpmtD5Kk7jE81KA+fWDiRLjxRnj1\n1bRLI0mqNIaHGnXccTBoUIy8kCSpEIaHGjVwIJx4Ysz5sGBB2qWRJFUSw0MNO/30CA6TJ6ddEklS\nJTE81LDhw+HQQ+P7LpYvT7s0kqRKYXiocU1N8OKLMG1a2iWRJFUKw0ONGzUKdt7ZYZuSpK4zPIim\nJpgxA554Iu2SSJIqgeFBHHwwbLyxrQ+SpK4xPIi+fWPkxfXXw5w5aZdGklTuDA8CYMIE6NcPLrkk\n7ZJIksqd4UEArLkmHH88XH45LF6cdmkkSeXM8KD/74wzYO5cmDo17ZJIksqZ4UH/36abwoEHwqRJ\n0NKSdmkkSeXK8KA2Ghvh2WfhrrvSLokkqVwZHtTGbrvB9ts7bFOS1D7Dg9qoq4tJo+68M1ogJEnK\nZXjQSo44AoYNi74PkiTlMjxoJautBqeeCtdeC++8k3ZpJEnlxvCgvE46KX5efnm65ZAklR/Dg/Ja\nZx049tiYcXLp0rRLI0kqJ4YHtauxEd54A264Ie2SSJLKieFB7dpyS9hnnxi26aRRkqQMw4M61NQE\nTz4JM2YWRWmfAAAUK0lEQVSkXRJJUrnobnhYF3gB2D1r2SjgUWAB8BJwfM5jxgPPAwuBx4Gdurlv\nldBee8FWWzlplCSpVXfCw2eBh4FPApnG7LWA24EpwGDgBOACWsPFaOAi4Nhk/XXANGBI94qtUqmr\ni74Pt94KL7yQdmkkSeWg0PBwHHHg/27O8i8CbwOXASuAe5PtTkjWTwCaidCxHJgEvAUc0p1Cq7SO\nPhqGDoWLLkq7JJKkclBoeLiDaHH4bc7yrYCnc5bNBrbpYP2zWetVxgYMgK9+FX79a5g3L+3SSJLS\nVmh4eJNoWcg1CFiUs2wRMDD5fSDR16G99Spzp5wCH34Iv/xl2iWRJKWtt0ZbLABWz1m2OvBB8vvC\nPOvXyFqvMjdsGDQ0wMUXw0cfpV0aSVKa+vbS8/wd2Dtn2YhkeWb91nnWT+voSRsbGxkypG2fyoaG\nBhoaGrpfUnVbUxNccw3cdBMcfnjapZEkNTc309zc3GbZvBJcX67rwWNXEKMo7gfWJoZungNcCnwO\n+ANwADADGAvcDBwIPAScCpwJbAbkq2U9MHPmzJnU19f3oIjqbWPGwJIl8PDDaZdEkpTPrFmzGDly\nJMBIYFYx9tFbly3mAnsBhwHvAFcCpxPBAWA6cAoxGmMucASwL/mDg8pYUxM88kjcJEm1qSeXLXKD\nx0yixaE91yU3VbD994fNNotJo/zOC0mqTU5PrYL06QMTJ8KNN8Krr6ZdGklSGgwPKthxx8GgQTHy\nQpJUewwPKtjAgXDiiXDVVbBgQdqlkSSVmuFB3XL66REcJk9OuySSpFIzPKhbhg+HQw+FCy+E5cvT\nLo0kqZQMD+q2piZ48UW47ba0SyJJKiXDg7pt1CjYeecYtilJqh2GB/VIYyPcdx88+WTaJZEklYrh\nQT1yyCGw0Ua2PkhSLTE8qEf69o2RF83NMGdO2qWRJJWC4UE9NmECrLYaXHpp2iWRJJWC4UE9NmQI\nHH88XH45LF6cdmkkScVmeFCvmDgR3n0Xpk5NuySSpGIzPKhXbLopHHAATJoELS1pl0aSVEyGB/Wa\npiZ49lm46660SyJJKibDg3rNbrvB9ts7bFOSqp3hQb2mri5aH+68M1ogJEnVyfCgXnXEETBsWPR9\nkCRVJ8ODetVqq8Gpp8K118I776RdGklSMRge1OtOOil+XnFFuuWQJBWH4UG9bp114Nhj4ZJL4MMP\n0y6NJKm3GR5UFBMnxndd3HBD2iWRJPU2w4OKYsQI+PznY9imk0ZJUnUxPKhomprgiSfg/vvTLokk\nqTcZHlQ0e+8dLRBOGiVJ1cXwoKKpq4PGRrjlFnjhhbRLI0nqLYYHFdUxx8DQoTH3gwFCkqqD4UFF\nNWAAXH45PPUUbL55hAmnrpakymZ4UNF98Yvwr3/BhRfCjBmw9dZw2GHw5JNpl0yS1B2GB5XEgAFw\n2mlx6eKKK2DmzPgGzgMOgMceS7t0kqRCGB5UUv36wYknwnPPwdVXx89Ro2JOiAceSLt0kqSuMDwo\nFauuGlNYP/ssXH89vP467LYbjB4N99zjxFKSVM4MD0rVKqvE13g/9RTcfDPMnw977QW77AK33WaI\nkKRyZHhQWejTBw46CP76V7j99pgjYv/9YeRIuOkmWLEi7RJKkjIMDyordXWw777w0EPw5z/DkCEx\nWmPbbaG5GZYvT7uEkqTeDg9HAB8B87NuVyfrRgGPAguAl4Dje3nfqiJ1dTB2LEyfDg8+CMOHw1FH\nwZZbwpQpsGxZ2iWUpNrV2+HhM0RYGJR1Gw+sBdwOTAEGAycAFwC79/L+VYU++1m4444Y0jliBHz5\ny/Bf/xVDPpcuTbt0klR7ihEeZuZZ/kXgbeAyYAVwL3AdESKkLvnMZ+APf4jOlTvuCCefDJtuChdd\nBIsWpV06SaodvRke+gD1wDjgZeA14ApgCLAV8HTO9rOBbXpx/6oR224LN9wQwzzHjoWvfQ0+8Qn4\n2c9itIYkqbh6MzysA8wCfgdsAewCfAqYCgwEcs8NFyXLpW7ZYgu45pqYaOqAA+B734NNNoEf/ADm\nzUu7dJJUveqK/Pw7EJ0kJwNrAodlrTsd+DLRWpGrHpi56667MmTIkDYrGhoaaGhoKE5pVdFefTVa\nH666KmayPP30+ErwddZJu2SSVBzNzc00Nze3WTZv3jweiCl7RxIn9b2uN8PDtsBRwLezln2O6N9w\nKtAIjMhadxmwOtGhMlc9MHPmzJnU1+fLFlL75syB88+Hyy6LURsnnwxf/zqsv37aJZOUlmXL4L33\nYN1143Ohms2aNYuRI0dCEcNDb162mEuEhG8AfYGNgJ8RrQ6/B9YHJgKrAmOIoPHrXty/BMCwYfDz\nn8PLL8PEiTEq4xOfgDPOgH//O+3SSepNH34YrY6PPhqz1F56KZx5JpxwQswZ8+lPw3rrRWvkxz4W\nI7Z++lP4z3/SLnll6+38tRvwY2BrYAnQDHwT+JBIQBcSnSTfAn4AXNPO89jyoF7z3ntw8cUwaRIs\nWBBDPb/97QgUksrT0qXwxhvRkvj66+3/fOedto/r2zdOIIYNgw02aPtzjTXgllsiZHz4Iey5Jxx3\nXMxuO2BAKtUsilK0PJRr443hQb1u/vw4Kzn/fJg7F445Br773ZgzQlJpLF0aB/7OQsG777Z93Kqr\ntg0C+cLBBhvA0KEx3X1H3n8ffve7mHDuoYdg8GA4/PAIErvsUvmXNQwPhgcVwcKF0anyvPPgzTfj\nQ+N734Ott067ZFLlWrKka6Fg7ty2j1tttfaDQPbPtdfuPBR0xwsvxKitq6+Oyx+bbRbf+HvssbDx\nxr2/v1Iw
PBgeVERLlsDkyfCTn8SHxsEHx7VS33JSq8WLOw8Ec+bE5cFs/fq1HwRyQ0E5nOmvWAEz\nZkSI+P3v4yRjzBgYPz6+X2dgBU0sYHgwPKgEli2Da6+F//1fePFF2G+/CBE775x2yaSeW7YsAkBH\nt4UL2+9fkDtnSv/+nbcSbLBBfKldOYSC7liwAG68MYLEvfdGX4lDD40gsfvuxWkB6U2GB8ODSuij\nj2Lmyh/9CGbPhj32iBCx++6V+yGo8rJiRecH8sWLo1WsK9t15XFd/SbaAQPioN9ZMFhzzdr6f3j5\n5Ti5uPrqOLnYeOPWyxqbbZZ26fIzPBgelIIVK6I39g9/CE8+CZ/7XISIvfeurQ/NWtXSAm+/Da+8\nEmfeCxf23sH8ww8LK8uAAXGmP2BA4bdCHrf66tEs7/u7fS0t8Je/RCfL3/4WPvggPhvGj4fDDotQ\nVS4MD4YHpailBW67Laa7fuyx+GKuM8+EL3zBD9lKtmxZzPfx6qsREF55ZeXflyxZ+XGrrVaaA3nm\n1q+f77NytXhxfEnflClw993xWh18cIzW2GMPWGWVdMtneDA8qAy0tMA990SIeOCB+GKuM8+MTlTl\nfu2zFi1Y0BoE8gWD11+P1qWM9daDjTaK5uiNN277+4YbxvXu/v3TPyCoPP3nPzB1agSJf/wj3jPH\nHBMtEltumU6ZDA+GB5WZ+++Pyxl33x0fDN/9Lhx5ZExMo+JraYG33sofCjK/Z/f679sXPv7xlUNB\n5v5GG1XX5EBKT0sLPP549I1obo734Y47Rog48sgYVVIqhgfDg8rUI49Ex8pp02DTTeE734EvfSma\nttV9H34YZ3IdtRwsXdq6/RprrBwIsu8PG2aLgUpv6VK49dYIEnfcEe/BAw6IILHPPsU/2TA8GB5U\n5p54IkLEjTfGgauhITpOrb56a0e07N/zLVt99Zg9rxbMn99+KHjllbik0NLSuv16660cCrJ/X2st\n+wWovL35Jlx3XQSJv/0tvl/j6KMjSGy7bXH2aXgwPKhCPPMM/PjHcVlj8WJYtChuXbXKKl0PGl1d\nn2/bAQOK10+jpSU+KLNDQW5IyJ4zIPuSQr5Wg+HDvaSg6vLkk9E34rrr4js5tt8+QsRRR8W3ffYW\nw4PhQRWspSWaLxctahso8v3e2frOti1kCGC/fj0PIsuXw2uvtQ0IuZcUBg5sv6+BlxRUy5Yti8sZ\nU6bEpc+WFhg3LoLEuHE9v/xZivBgNy+pSOrqopd+//7F39fy5b0XSt5/P2YbbG99ZqRC5pLCxhvD\ndtutHBC8pCDlt+qq0QfigAOiBeL66+OyxiGHxBd7NTTEsM/6+vL9HzI8SFVglVXiTL/Y8++3tLS2\ncvTrV9x9SbVgnXXgtNPi9swzESKmToX/+z/YaqtojTjmmGipKyeOUpfUZXV1ERoMDlLv22qr+Lbf\nV1+F22+Pb/r9n/+JvkH77RfT5+ebwCwNhgdJkspI376w775xOeONN+DSS6Oz8ZFHRgvEV78KDz/c\ndmRSqRkeJEkqU0OGwEknxfdq/OMfcMop0Sqxyy6wxRbxbcCvvVb6chkeJEmqAJtvHvPKvPxyTJk/\nalTc33hj2Guv6CuxcGFpymJ4kCSpgvTpE1/Adc01cVnjV7+K4Z9f+hKsvz6cc04JylD8XUiSpGIY\nNAi+/GW47z548UX47/+Gv/61+Ps1PEiSVAU++Uk46yz44x+Lvy/DgyRJVaRYU9C32UfxdyFJkqqJ\n4UGSJBXE8CBJkgpieJAkSQUxPEiSpIIYHiRJUkEMD5IkqSCGB0mSVBDDgyRJKojhQZIkFcTwIEmS\nCmJ4SFlzc3PaRSgJ61ldrGd1qZV6Qm3VtZhKHR7WA/4AvAe8DVwArFLiMpSVWnkjW8/qYj2rS63U\nE2qrrsVU6vBwA/ABMAzYEdgT+F6JyyBJknqglOFhM2B34JvAEuBfwA+ACSUsgyRJ6qFShoetgLnA\nG1nLZgMfBwaXsBySJKkH+pZwX4OAhTnLFiU/BxKXM9qYPXt2scuUunnz5jFr1qy0i1F01rO6WM/q\nUiv1hNqoaymOnXVF30Org4ErgXWzlm0DPAWsCczPWj4MeBzYsGSlkySpevwH+AwwpxhPXsqWh78D\nQ4kRF28ly0YAr9E2OEBU9jNEiJAkSYWZQ5GCQxruB35DXKb4BPA08P1USyRJksraesBviTke3gTO\no7SXTiRJkiRJkiRJkiRJkoqhmr77YjvgbuBdYmKsq4nRJgCjgEeBBcBLwPE5jx0PPE/Mi/E4sFMJ\nyttTqwD3AZOzllVTPdcGrgHeISY7uwn4WLKumuq5J1HG94HXgUnAasm6aqjnusALxGy3GT2p1yrA\nz4n/8Q+Iz6/1i1HwAuWr5xeBJ4nX9l9EZ/XsPmfVUs+MYUTfuvE5y6ulntsCfybK+QbwM9pO/FiJ\n9ey2e4kP6P5U9miMAcQH71nEcNi1gWnALcAQIlCcTLzQY4h/5sybYnRyf2fiBW4kDlhDSlb67jkX\n+Aj4dXJ/LaqrnvcCvyNmQx0I/B64lep6PQcQE7edltzfkJgF9kyq4/X8LPEBvALYLVnW03qdRRyQ\nNyQmwmsGphe3Gp3KV8+RxEFkv+T+FkRQ+lpyfzTVUc+MPkT5PgKOzVo+muqo5zrECfa3iHpsDDwH\nfD1ZP5rKq2e3bUb8cbLTz+HAq+kUp0c2B26jbao/gDizOYF4kbNdSoQmgKnA5Tnrn2Xls6FyMpYI\nejfQ2vIwAfhHznaVWs+RxEF1YNaytYgp16upnv2JM7WJxAfOx4FngCainpX8vj0OeJn4TMn+EO7u\n6/fl5Pd/A0dmrVsvef5P9EKZu+M48tfzEOJMM9sviJMaqJ56ZpxNtPb+i7bhoVrq+XXggZxtNwKG\nJ78XvZ6l/lbNjlTTd188B4wDWrKWHQrMIur5dM72s4nZNmln/bNZ68vNx4BfAkcBi2mtczXVc0ei\nbF8hmgFfB85PflZTPZcARwDnJL+/SryXJxH1+FvO9pVUzzuATxJDxbP15PVbE9ggZ/1bxOdYWvVu\nr543Af+ddX8A8RmVmad5BNVRT4jWo8OBU/Ksq5Z67kgE+8uJiaBeAI4hQgGUoJ7lFB46++6LSvZD\n4h/1ZNqvZ6aOAztZX076ANcSB9KnaRuWBtH6+mVUaj3XJq4vbgZ8OrltSJydDqR66jkcuJn4ttvV\nga2JD6FzqPx6vkmcWeXqyfs0s0051bu9emYbSFzjXgj8NFnW0edSJdVzPeLS6VGsXF6onnquTbQi\nPEKcYB8CnETrZaii17OcwsNC4gMrW+Z+7vTVlWIwcCPxRt6NSIrt1TPzxWD51q9Bni8OKwPfId5w\nlyT362i9VFNN9Vya/Gwkyv0W8D3i+nEd1VPPQ4izmPOBZcSZyrnEGVw1vZ7ZFtC9es2n9cM33+PL\n9TNrc+KA04e4Lp6pQzXUs444mbmIuJ6fvTyjGuoJ8Zn0KDAFWE60C
l5MtLhACepZTuEh+7svMtr7\n7otKsCnRw3UgsAMRHCDquVXOtiOS5Zn1W3ewvpwcQzQRvpfcGoigNJdoiaiWej5LfAD1y1qW+V6Y\nJ6meei7Os+wj4EOq632brSf1mkdcuspevz5xVliO9d6POODcDnyetsGuGuo5nDhJ+z6tn0kbEX1Y\nbkm2qYZ6QhxP+ucsy/6uqmqpZ5dVy3dfrAW8AvyKlaffXps4uE4EVmXl3t1jk/ujk/Xl2Gu9PZNp\nHW0xlOqpZ1/gn8RoizWIYVN/JkZcVNPrOYyoy3eIE4tPEmc0P6W66pnd8ayn9TqX+BttQjQVX0/5\n9FrPrudOxNnqce1sWy31zJXbYbJa6rk5Efa/QXRu3oY40c6MlKrkenZLtXz3xdeIF3oB0WqSuWWS\n/kjgQeLFfZ62b26Ao4lOW/OBh4lvGK0E2eEBqquew4jhTK8TB5vJtHbkraZ67kDM1zGX6OX9A1rP\naKqlnrkHm57Uqy/wY+KDex7RMXGdopS6cNn1/CPRijQ/53Zb1vbVUM9cueEBqqeeOwIziP/VfwPf\nzdm+UuspSZIkSZIkSZIkSZIkSZIkSZIkSZIkSZIkSZIkSZIkSZIkSZKkCvf/AK5g1WLsZc5WAAAA\nAElFTkSuQmCC\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAf8AAAFuCAYAAACPyxFYAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzt3XmclXXd//HXsBgguwsK4q3i76cips645ZJbd2klJqY2\niEtqblmRS92WmlndWZq3WfZT09QzwLgTapomiGKJ0oxbSaA3bqiogSOIIODM74/PmcecOcwAZ2bO\nXGd5PR+P8xjmOtvnOzOc9/X9Xt/re4EkSZIkSZIkSZIkSZIkSZIkSZIkSZIkSZIkSZIkSZLK10yg\nEfjrOh5zW/oxN2dsexX4ABjZznMagR9lfH8L8ErWYzYGLgVeBD4CGtJ1nAZUpB+zTfq11nc7cR31\n/wdwI/A68DHwLnAfcNA6nqP27cy6/16kstEr6QKkDmoiwnNvYCtgYdb9GwNHZDw20wAiVL+wjtdu\n7/sKIoB3An4OPA/0BQ4DricC5rvAW8A+Gc8bDtwD/AT4U8b2Be3UsAUwm2jXhcQOwObEDsZ04KvA\n1Haeq7YdA3wm6SIkSR33KPA0sJwI22xfA94B3gD+kLH9VWAJseNwWhvPawQuyfj+Flr3/A9IP+Zz\nbTz3KmA1EdLZtmH9Pf1MFxOjCgOytvcA5gAvbODrqMWlxO9AKns9ki5A6oTlRC/62DbuOw64C1iT\ntb0JuJc4bHAlMWqQiy3SX9v6v/M7opfeFbYgau2Ztb0x/R7XZ23/T+BxYBkx6nAdMDjj/v9D/Dze\nBj4EZgD7Zty/Tfq1vwvMJX62p6TvGwPcTxwu+YAYwdh2PfXfAswCvkGMWiwjRix2y3rc1kAtsDj9\nno9kPWZddWWrIn4GDcBS4C/AXun7LqVlpy5zB68H8F/Ay8BKYB5wTtbrzgRqgIuIHcoPgD+ma5Mk\ndaOZRIAdTXyYZx7DHwisAPYneu2ZPf/m77clAunBrNddX89/MyJYlgCXAwcSw/7rsw259fwPSz9+\nHnAeEYjZOwLNDgc+Ae5O//t4YgfgkfT9o9M1/534eY0lgvhj4LNZ9X0AnAR8hdgx+r/p584GjiQO\nNzybfv3N1lH/zUQIv51+vSPTz2sgDoEAbEoc1vgXMVIzlvidLgV2XEddbc3XGEjMiagFDgG+CPwN\neJ8YPRkB/D79Wntl1HB9+udwCTGa81Nih/GijNd+NP0689Lt/xoxgvQq0G8dPwNJUhebSQRFHyIs\nzs247yTig5n017bCH6KH1wicmnH/+sIfYqfiZVom7X2crud02g/obcgt/AHOIsKy+X0aiID/z6zH\nzSGCPdNRRKhuCdxOBGPmIYSeRE96dlZ9N2S9zmQi6PtnbBtChOEv11H7LenX2z9j2xbEoYzm5/2M\n6Mlnhnlv4md7x3rqyrZP+nGZoxnbETtozaM7l9J62P//EjtNF2S91mXpOoekv59J/I4zRzt2S7/W\n2eupS5LUhWYS4Q8RULMz7nsQ+EX636/Sfvg3v04D0TOEDQt/iIl/BxAT+GYR4dCY/nefNh6/DbmH\nP8SowleA3xCTC5t3BK7MuP8T4IfreI13aN3mZpemn9svo77sMFsETCF2Fnpl3O4DnlnHe95Cyw5Y\npkeIHjnE7+yvbbz2b4mdC9ZRV7aNiXYuJg6/HJHelulSWof/menvd8p6/z3T28emHzczfcv2MvGz\nkYqOx/xVCu4ghnJHApsAhxKn+W2IU4gP/N/n+J5NRNBfTOwEDCNCZz9ajyR01gri+PK3gE8Tx+4f\nJ0Y6RgNDiR2Rd9fxGkOIEM+2KP3cgRnbPsx6zCbEMPdqYFXG7UvEqMK6vNnGtvdo6VFvQsy+z37t\ns9M1Ze5EZdeVbTnxe/hTut5pxM7AdcCn2nnOJumv/8x6/6eI3+/wjMeury1SUfFUP5WCPxND/8em\nvy6gpVeafdpetgXAD4CraX8iWaY7gEGsfZpgAxHQ1URPsjN6EqMNvydGFjL9L/Adon2jibY3sfbx\n942InaCnifkJbQV187bFtIx8ZHufmDj3q6ztFaw9mTLbpm1sG0bLjkoD0aM+v43XhgjiXMwnRlYq\niFNATyAOnSyg7UMUDemvBxPzP7JreD3j+/baMj/HGqWCYM9fpeBjond8NHEud22Oz/8N8ARxqt76\nzCdCdY827htBHBvv7Gl4nxDBcyqtZ+w3a54M9w+iR/wscWgg02FEL3gr4DHgy7Tu4fckeshPEz3v\n9jxGrF3wHFCfvj0DfLuN98w2itY7QsOJY/LT09/PTLflpYzXrid2oE4ht9PyxgL/JgK5iTik8E1i\nomDzMf9P2mgbxI5T5vsPJY77D8147L60jBRAnFmwTUZbJEndYCYxC7vZ4URYrCYmcjV7lbXP82/r\n+PcoYuh4fcf8BxOhu5RY5OcwYsb8OenH/Z22h5m3Ibdj/pXp93gFmEj0Tj9HjAQsA67NeOyXiGC7\nPV3PScQs+2np+0enn1NHzFYfS/TmVxEz49dV3y7EDsaD6ed9gThl8BPWHf63pB/zEjEiczSxU7SQ\nlh2aLYlDD08RO22HEsP0jcTOxbrqyrYJMaLwJHFmwSHETP41xBkZECMzjcTORfPkvRQxunE+8TM+\ng9iJmEPLCMTMdFueTv8MJhA7Z8/i6KkkdatHaZnwB/EhvJjouWVq71S/tkwkPuQzw/9m1l6FbwAR\nws8RPcsVxFK/l9H+qV/bkPuEv1HESoT/S8w+X0aEW1uHJw4nQnQFsbDRr7Jq2ZUYCViarvlh2j7P\nv636dgceSD9vKTFJ78vrqf2WdB2nEsfLPyB2GrbOetx2xE5L83n+9cDJG1hXtt3Sdb5H/Bzm0HoH\nZUviZ/QxLTtPPYnT+l5Ob3+dmHCYOeIykxgZ+iGxY7AYuAmP90uS1MottD5mXsxmEpMspZKR6zH/\nzxF70x8Q5/5eTUwskqRsFet/SNEopbZIOYV/X2JZ1FuJ2c57Esf/vpeHuiQVtybWf6ZFsSiltkhA\nbpNVmohjjj3TtwriWNzyPNQlqbh9PekCutDBSRcgJe0g4tzY1UTw34PDYZIkFZVcgnsksbzoT4Fr\niJXG7iEWPbmkjcdvyfpXAJMkSWt7O33Li1zC/zvEObCjM7aNJ3YEsle/2nL48OFvvfXWW50sT5Kk\nsvQmMbcuLzsAuRzzX9HGtjW0vQTnlm+99RaTJk1ip506u9Jp4Zs4cSJXX3110mXkne0sLbaztNjO\n0jF37lwmTJgwghg9Tzz87yMuj3khccW0bYjFMWrae8JOO+1EZWVlZ+orCoMHD7adJcR2lhbbWVrK\npZ35lsupfm8DnydO7/s3sbraNNZ9KVFJklRgcl2X+u/EjH9JklSkvKqfJEllxvDvAtXV1UmX0C1s\nZ2mxnaXFdioX+VqgpxKoq6urc2KGJEk5qK+vp6qqCqCKta9U2iXs+UuSVGYMf0mSyozhL0lSmTH8\nJUkqM4a/JEllxvCXJKlAfPQR3Hln/
t8n1xX+JElSF3vvPbj22rgtXpz/97PnL0lSQl56Cc46C7be\nGq64Aqqr4Y9/zP/7Gv6SJHWzJ5+EceNghx3gnnvghz+E11+Ha66BrbbK//s77C9JUjdobIR7740e\n/t/+FsF//fVwwgnQp0/31mLPX5KkPFqxIkJ+xx3hqKOgRw+YNg1efBG+8Y3uD36w5y9JUl4sXgy/\n+x385jfw73/HMH8qBfvsk3Rlhr8kSV1qwQK46ir4wx+gqQm+/nU491zYfvukK2th+EuS1AWefjqO\n599zDwwdCt//Ppx9Nmy2WdKVrc3wlySpgxob4U9/giuvhMcfj979tdfCiSdCv35JV9c+J/xJkpSj\nlSvhxhth551h7FhYvTp6/P/6F5x5ZmEHP9jzlyRpgy1ZAtddF+fjv/tuBP+NN8J++yVdWW4Mf0mS\n1uPVV+F//gduugnWrIGTTopJfDvskHRlHWP4S5LUjrq6mMR3550weHAE/jnnwOabJ11Z5xj+kiRl\naGqCBx+MSXyPPgrbbRfD/CefDBtvnHR1XcMJf5IkAR9/DLfcArvsAl/6Enz4IdxxB8yfD9/8ZukE\nP9jzlySVuYaGWH7317+Gt9+GI46IlfkOOAAqKpKuLj8Mf0lSWXr9dbj6avj972HVqrjAznnnwU47\nJV1Z/hn+kqSy8swzcTz/9tthwAD49rfhW9+CLbZIurLuY/hLkkpeUxM8/HDM3J8+Hf7jP2L9/VNO\ngf79k66u+xn+klSk1qyBv/8dZsyIIewRI2CrrVrfBgxIuspkrVoVPfwrr4Tnn4fKSqitha9+FXqV\ncQKWcdMlqbg0NsI//xk91+nT4bHHYNmy6LmOGhWT1d59t/VzBg5ce4cg+zZ4cOlNbFu6FG64IY7p\nv/kmHH54/Pugg0qvrR1h+EtSgWpqisvDTp8evfsZM+C99+BTn4J9942rxh1yCOyxB/TuHc/5+GN4\n6y1YuHDt2z/+AX/+MyxaFDsSzfr2Xf8OwqabQo8iODl84cKYtX/DDbBiBRx/fEziGzMm6coKi+Ev\nSQXk7bcj5Jt796+/HqG7555w2mlw6KER/H37tv38T30Ktt02bu1ZsyZ2ANraQViwIK5O9+ab8bhm\nG23U9mGFzNuwYdCzZ9f+PDbU88/H0H5tbZyPf9ZZMZFv+PBk6il0hr8kJej992HmzJbe/dy5sX3M\nGDjqqAj7z34WBg3quvfs1aslsNvT2BiHENraQVi4MK5dv3BhjDQ069kTttxy3TsIw4e3jFJ0VlNT\n/NyuvBIeeghGjoRf/jJ2ksp9rsP6GP6S1I2WL4cnnmjp3dfXR4htt10E/SWXwMEHRy86ST16xKlv\nW2wRhxXa0tQEixe37BC8+ebahxneeCPa3KyiItq2rh2EESOgT5/2a1u9Otbav+IKePZZ2HVXmDQJ\njj2263YsSl0u4X88cF3Wtk8BjcA6fk2SVL5WrYpecvMw/uzZEV5bbBHH688+O75us03SleauoiLm\nAmy6Key2W9uPaWqKyXftjSDMnBlfGxpaP2+TTdreMVi8ONbZf/11+Pzn4S9/iZ0mJ/HlJpfwn5y+\nNRsOzAEu6NKKJKmIffIJPPdcyzD+rFnR8x08OGaaX3VVhP1OO5VHYFVUxCGLQYNg553bf9yHH7Ye\nOcj899NPwz33xGTHXr2guhrOPx8+/enua0ep6eiwfwWxI3A/MKXrypGk4tLUBPPmtQzjP/poHMfv\n2zfWhr/44uiZ7r57cpPhikH//rDDDnFrz8qVMZIycGD31VWqOhr+E4CdgC93YS2SVBTeeKNlGH/G\njDi1rlcv2GefWCb20ENh771j5r26Tp8+654LoA3XkfDvAVwM/BRYvp7HSlLRe++96NE39+5ffjmG\ns3fbDcaPj2H8Aw4oz2ViVZw6Ev4HA1sAN3VxLVKimhcEeeklGDp0w2/9+5fHsdtysnRpHKtv7t0/\n/3xs32GHmGR2+eVx/H6TTRItU+qwjoT/0cA9wIr1PXDixIkMHjy41bbq6mqqq6s78LZS/jQ1wemn\nw4MPxoU+PvgAliyJpVSXLInb+++3XhWtWa9eue0sNN8GDSqOFdPKwcqV8OSTLcP4Tz8dE/e22iqG\n8M8/P06/W9d58VJH1NbWUltb22pbQ/apD3nQkf7K88DVwB/W8ZhKoK6uro7KysoOFSZ1p6uuiiVA\np0yJmcRtaWyMHmHzzsCG3hYvbr1SWrOKChgyJPedhiFDyvuCJF1hzRqoq2sZxv/rX2MHYJNNYgj/\n0EPj6/bbO6qj7ldfX09VVRVAFVCfj/foyEfItsCbXV2IlJS//AUuuAC+9732gx+ilz54cNy2227D\nX7+pKU712pAdhTffhBdeaPl+RTvjawMHdmy0oVwnoDU1xYIzzWH/2GOxI9e/Pxx4IPzsZxH4u+zi\naIzKQ0fC30UTVTL+93/huOPiOO5//3d+3qOiIkKmf3/YeuvcnrtiRRxu2JDRhZdeavl+2bK2X69f\nv5Ydgb59o7bMW48eXbst6desqIi16mfMiKVqN9oo1sW/4III+8wL4kjlxMFDla1ly+DII2Ood8qU\nwjwHu2/fuOV6cZLVq9e907B4cQxzNzWtfWts7Nj2NWs6/xr52D5sGJx6agzj77df+xfEkcqJ4a+y\n1NgIJ50Er70GTz0Vx9FLSe/esPnmcZOkbIa/ytJPfwpTp8K0aTB6dNLVSFL3cmqLys60afCjH8Fl\nl8HYsUlXI0ndz/BXWXnxRZgwAcaNgx/+MOlqJCkZhr/KxvvvxwS/bbaBW2/1lC5J5ctj/ioLn3wC\nX/tazHKfM8c12CWVN8NfZeHCC+GRR+Chh2DUqKSrkaRkGf4qeVOmwBVXxBK+n/tc0tVIUvI86qmS\nVl8fC7yccAJMnJh0NZJUGAx/lax334WvfAXGjIHrr/cCLZLUzPBXSVq1Cr761fg6dapLukpSJo/5\nqyRNnAizZ8Ojj3oNdknKZvir5Pz+9/D//h/ccENcyEWS1JrD/iopf/0rfPObcNZZ8I1vJF2NJBUm\nw18lY+FCOPpo2GcfuPrqpKuRpMJl+KskrFgBRx0FG20Ed90VXyVJbfOYv4peUxOccQb84x/wxBNe\nw16S1sfwV9G7+mqoqYHJk6GqKulqJKnwOeyvovbII3D++XDBBTB+fNLVSFJxMPxVtBYsgOOOg//8\nT/j5z5OuRpKKh+GvovThh3DkkTB0KNTWQs+eSVckScXDY/4qOo2NcNJJ8OqrsYrfkCFJVyRJxcXw\nV9H52c/gnnvgj3+EnXdOuhpJKj4O+6uo3HsvXHIJ/PjHMewvScqd4a+iMXcuTJgQi/lcdFHS1UhS\n8TL8VRQaGqKnv/XWcOut0MO/XEnqMI/5q+B98glUV8O//w1z5sCAAUlXJEnFzfBXwfvBD+Dhh+HP\nf4ZRo5KuRpKKn+GvglZbC7/8JfzqV7GYjySp8zxyqoL1zDNw6qkxye+73026GkkqHYa/CtK778
JX\nvhLn8d9wA1RUJF2RJJUOw18FZ/VqOOYYWLkyFvPp2zfpiiSptHjMXwVn4kR48kmYMQNGjky6Gkkq\nPYa/CsqNN8LvfgfXXw/77590NZJUmnId9h8KpIB/A0uAe4BhXV2UytPf/gZnnw1nngmnn550NZJU\nunIN/7uBvsB2wNZAI3BTVxel8vPmm3D00bD33vDrXyddjSSVtlyG/auAvYHNgQ/T274BbNnVRam8\nrFwZ6/X36gV33QUbbZR0RZJU2nLp+e8FvAicDrwEvAX8Cng7D3WpTDQ1xTD/Cy/EJXqHeRBJkvIu\nl/AfCnwa2B7YLX0bQcwBkDrk17+OC/XcdBNUVSVdjSSVh1yG/T9Of50IrAKWAz8EngL6AR9lP2Hi\nxIkMHjy41bbq6mqqq6s7VKxKy/TpcP75cRs/PulqJKn71dbWUltb22pbQ0ND3t83l3XTvghMI0YA\nlqW37QP8FRhI7Aw0qwTq6urqqKys7Io6VWIWLIA994Q99oAHHoCePZOuSJIKQ319PVUxFFoF1Ofj\nPXIZ9n8YeAX4A7AxsBnwM2AqrYNfWqcPP4yle4cMgdtuM/glqbvlEv5rgAPTX18C5gGvA6fkoS6V\nqKYmOPlkeOUVmDYtdgAkSd0r1xX+3gY8YK8O+9nP4O67YerUuGiPJKn7eWEfdZv77oOLL4ZLL41h\nf0lSMgx/dYu5c+H44yP0L7446WokqbwZ/sq7hgY48kjYemtIpaCHf3WSlCiv6qe8+uSTOIf/vfdg\nzhwYMCDpiiRJhr/y6qKL4KGH4MEHYfvtk65GkgSGv/Lottvg8svhyivh859PuhpJUjOPviovnnkG\nTjkFJkyAc89NuhpJUibDX13uvfdiVv/o0XDDDVCRyyLSkqS8M/zVpVavhmOOgZUrYyGfvn2TrkiS\nlM1j/upS554Lf/sbzJgBI0cmXY0kqS2Gv7rMTTfBb38L110H+++fdDWSpPY47K8u8eSTcNZZcMYZ\ncZMkFS7DX5325pswbhzsvTdcc03S1UiS1sfwV6esXBnB36sX3HUXbLRR0hVJktbHY/7qsKYmOPNM\neP55mDULhg1LuiJJ0oYw/NVh11wDt94KkybBHnskXY0kaUM57K8OmT4dzjsvbscfn3Q1kqRcGP7K\n2SuvwLHHwiGHxNr9kqTiYvgrJ8uXx9K9gwfHhXt6eeBIkoqOH93aYE1N8PWvw4IFMHs2DB2adEWS\npI4w/LXBfv5zuPPOWLN/552TrkaS1FEO+2uD3H8/XHQR/OhHMewvSSpehr/Wa9kymDABxo6FSy5J\nuhpJUmcZ/lqvu++GpUvjvP4e/sVIUtHzo1zrlUrBwQfD1lsnXYkkqSsY/lqn116DRx+FE09MuhJJ\nUlcx/LVOkydDv35x8R5JUmkw/NWupqYY8h83DgYMSLoaSVJXMfzVrjlzYN48h/wlqdQY/mpXTQ0M\nHx5r+EuSSofhrzatWgW1tXHFvp49k65GktSVDH+16cEHYfFih/wlqRQZ/mpTKgW77w5jxiRdiSSp\nqxn+WsuSJXDfffb6JalU5Rr+xwFrgGUZt1u7uigl6/bbobERqquTrkSSlA+5XtJ3TyLsT81DLSoQ\nqRQcdhgMG5Z0JZKkfMi1578nUJePQlQY5s+H2bMd8pekUpZL+PcAKoEvAa8CbwDXA4O7viwlZdIk\nGDgQjjgi6UokSfmSS/hvCtQDdwI7AvsC/weYlIe6lIDGxljY59hjoW/fpKuRJOVLLsf83wUOzPj+\nDeB7wFPAxsDy7CdMnDiRwYNbDwxUV1dT7UyygvTEE/Dqqw75S1J3qa2tpba2ttW2hoaGvL9vRQ6P\n/TQwHvivjG37A48C/YDVGdsrgbq6ujoqKys7XaS6x2mnwYwZ8PLL0MOTQCUpEfX19VRVVQFUESPu\nXS6Xj/glwDeBC4gRg62BK4CbaR38KkIrVsAdd8AJJxj8klTqcvmYX0hM9vsKsBiYQwz5n5OHutTN\npk2DZcsi/CVJpS3X8/wfB/bLRyFKVioF++4L22+fdCWSpHxzgFcsWgQPPeREP0kqF4a/qK2FXr3i\nFD9JUukz/EUqFYv6DBmSdCWSpO5g+Je555+HZ591yF+SyonhX+ZqamDTTeNCPpKk8mD4l7E1a2It\n/+pq2GijpKuRJHUXw7+MTZ8eM/0d8pek8mL4l7FUCnbaCWIVSUlSuTD8y9TSpTB1avT6K3K5woMk\nqegZ/mXq7rth5Uo4/vikK5EkdTfDv0zV1MDBB8PIkUlXIknqboZ/GXrtNXj0USf6SVK5MvzL0OTJ\n0K8fjBuXdCWSpCQY/mWmqSlm+Y8bBwMGJF2NJCkJhn+ZmTMH5s1zyF+SypnhX2ZSKRg+HA45JOlK\nJElJMfzLyKpVcfneCROgZ8+kq5EkJcXwLyMPPABLlsAJJyRdiSQpSYZ/Gampgd13hzFjkq5EkpQk\nw79MLFkC993nRD9JkuFfNm6/HRob4/K9kqTyZviXiVQKDjsMhg1LuhJJUtJ6JV2A8m/+fJg9O3r/\nkiTZ8y8DNTUwaBAccUTSlUiSCoHhX+IaGyP8jz0W+vZNuhpJUiEw/EvcrFlxFT9n+UuSmhn+Ja6m\nBrbdFvbbL+lKJEmFwvAvYStWwB13xIp+FRVJVyNJKhSGfwmbNg2WLXM5X0lSa4Z/CUulYN99Yfvt\nk65EklRIDP8StWgRPPSQE/0kSWsz/EvUlCnQq1ec4idJUibDv0SlUjB2LAwZknQlkqRCY/iXoOef\nh+eec8hfktS2joZ/T2AmcHPXlaKuUlMDm24aF/KRJClbR8P/R8D+QFMX1qIusGYNTJoUl+7t3Tvp\naiRJhagj4X8IcBRwN+DSMQVm+vSY6e+QvySpPbmG/zDgRmA88FHXl6POSqVgp52gqirpSiRJhSqX\n8O8B1AC/Al5Ib3PYv4AsXQpTp0av3+V8JUnt6ZXDYy8kevvXpr83XgrM3XfDypVw/PFJVyJJKmS5\nBPhcYDjQmP6+X/rrcmBo1mMrgboDDjiAwYMHt7qjurqa6urqDpSq9Tn4YOjZEx55JOlKJEkbora2\nltra2lbbGhoamDVrFkAVUJ+P9+1M7/1mYtj/lDbuqwTq6urqqKys7MRbaEO99hpssw3cequT/SSp\nmNXX11MVE7fyFv4u8lMiJk+Gfv1g3LikK5EkFbpcjvln+3qXVaFOaWqKWf7jxkH//klXI0kqdPb8\nS8CcOTBvnsP9kqQNY/iXgFQKhg+HQw5JuhJJUjEw/IvcqlVQWwsTJsRMf0mS1sfwL3IPPABLlsAJ\nJyRdiSSpWBj+RS6VgspKGDMm6UokScXC8C9iixfD/fc70U+SlBvDv4jdcQc0NsbleyVJ2lCGfxFL\npeCww2DzzZOuRJJUTDqzyI8SNH8+zJ4Nt9+edCWSpGJjz79I1dTAoEFwxBFJVyJJKjaGfxFqbIzw\nP/ZY6Ns36WokScXG8C9Cs2bFVfyc5S9J6gjDvwilU
rDttrDffklXIkkqRoZ/kfnoI7jzzuj1V1Qk\nXY0kqRgZ/kXm3nth2TKX85UkdZzhX2RSKdh3Xxg1KulKJEnFyvAvIosWwUMPOdFPktQ5hn8RmTIF\nevWKU/wkSeoow7+IpFIwdiwMGZJ0JZKkYmb4F4nnnoubQ/6SpM4y/ItETQ1sumlcyEeSpM4w/IvA\nmjUweTKMHw+9eyddjSSp2Bn+RWD69Jjp75C/JKkrGP5FIJWC0aOhsjLpSiRJpcDwL3BLl8LUqbGi\nn8v5SpK6guFf4O6+G1auhOOPT7oSSVKpMPwLXCoFhxwCI0cmXYkkqVQY/gXstddg5kwn+kmSupbh\nX8AmTYJ+/WDcuKQrkSSVEsO/QDU1xZD/0UdD//5JVyNJKiWGf4GaMwfmz3fIX5LU9Qz/ApVKwYgR\ncPDBSVciSSo1hn8BWrUKamvj9L6ePZOuRpJUagz/AvTAA7BkSSzsI0lSV8s1/A8BngI+AN4GrgH6\ndHVR5S6ViqV8x4xJuhJJUinKJfw3A+4HrgUGAbsDBwH/1fVlla/Fi+H++53oJ0nKn145PPY9Ygdg\nOVABbEr0+t/NQ11l6/bbobERqquTrkSSVKpyHfZfnv76BvA88BZwS1cWVO5SKTj8cNh886QrkSSV\nqo5O+BuBFlRUAAAO10lEQVQFjAAagbu6rpzyNn8+PPWUQ/6SpPzKZdg/08fEhL/vExMABxGTANUJ\nNTUwaBAccUTSlUiSSlku4b8vcBPwaWB1elsfYBUthwNamThxIoMHD261rbq6mmoPaK+lsTHC/9hj\noY/nT0hSWaitraW2trbVtoaGhry/b0UOj90YeJEY5v8vYDhwBzAHOCfrsZVAXV1dHZWVlV1RZ8l7\n7DE46CCYNQv23z/paiRJSamvr6eqqgqgCqjPx3vkcsx/OXAYMAZ4B5gJPAR8t+vLKj+pFGy7Ley3\nX9KVSJJKXa7H/OcCX8hHIeXso4/gzjvh3HOhIpexGEmSOsDlfQvAtGmwbJnL+UqSuofhXwBSqRju\nHzUq6UokSeXA8E/YokXw8MOe2y9J6j6Gf8KmTIHeveGYY5KuRJJULgz/hKVSsajPkCFJVyJJKheG\nf4Keey5uDvlLkrqT4Z+gmhrYdFM47LCkK5EklRPDPyFr1sDkyTB+fBzzlySpuxj+CXnkkZjp75C/\nJKm7Gf4JSaVg9Gjw0geSpO5m+Cdg6VL44x+j1+9yvpKk7mb4J+Duu2HlSjj++KQrkSSVI8M/AakU\nHHoobLVV0pVIksqR4d/NXnsNZs70Ij6SpOQY/t1s0iTo1w/GjUu6EklSuTL8u1FTUwz5H3009O+f\ndDWSpHJl+Hejp5+G+fM9t1+SlCzDvxulUjBiBBx8cNKVSJLKmeHfTVatgttugwkToGfPpKuRJJUz\nw7+bPPAALFniLH9JUvIM/26SSkFVFey8c9KVSJLKneHfDRYvhvvvt9cvSSoMhn83uP12aGyE6uqk\nK5EkyfDvFqkUHH44bL550pVIkgS9ki6g1M2bB089BXfckXQlkiQFe/55VlMDgwbBEUckXYkkScHw\nz6PGxljL/7jjoE+fpKuRJCkY/nk0a1Zcxc/lfCVJhcTwz6NUCrbbDvbdN+lKJElqYfjnyUcfwZ13\nxrn9FRVJVyNJUgvDP0+mTYNly1zYR5JUeAz/PEmlYL/9YNSopCuRJKk1wz8P3n4bHn7YiX6SpMJk\n+OfBlCnQuzccc0zSlUiStDbDPw9qamDsWBgyJOlKJElaW67hvyvwF2AxsAi4Fdikq4sqZs89FzeH\n/CVJhSqX8O8LPAg8AQwDRhPBf3Me6ipaNTWw2WbwhS8kXYkkSW3LJfy3Bp4BLgPWAEuAG4BD8lBX\nUVqzBiZPhvHj45i/JEmFKJer+s0DvpS17atAfdeVU9weeQQWLfLcfklSYevMJX1/SuwMfLaLail6\nqRSMHg2VlUlXIklS+zoS/gOJ4/y7E8H/z/YeOHHiRAYPHtxqW3V1NdXV1R1428K2dClMnQqXXupy\nvpKkDVNbW0ttbW2rbQ0NDXl/31xjahTwAPAqUE0c929LJVBXV1dHZZl0g//wBzjtNHj9ddhqq6Sr\nkSQVq/r6eqqqqgCqyNOh9Vwm/A0BZhCz/Q+j/eAvSzU1cOihBr8kqfDlEv5fB0YCxwFLgWXp29I8\n1FVUXnsNZs703H5JUnHIJfyvSj++PzAg4zYwD3UVlUmTYOON4aijkq5EkqT1c3nfTmpqiln+Rx8N\n/fsnXY0kSetn+HfS00/D/Pme2y9JKh6GfyelUjBiBBx8cNKVSJK0YQz/TnjnHbjtNpgwAXr2TLoa\nSZI2jOHfAZ98Ar/9LeywA/ToAaefnnRFkiRtOMM/R089BXvtBd/+NnztazBvHmy3XdJVSZK04Qz/\nDbR4MZxxBnzmM/H97Nlw3XUwdGiydUmSlKvOXNinLDQ2ws03w/e/H5fs/c1v4MwzPcYvSSpe9vzX\n4dlnYf/9Y83+L34xhvi/+U2DX5JU3Az/NixdChMnQlVV/Puxx+KUvmHDkq5MkqTOc9g/Q1NTnLp3\n7rmwbBn84hfwne9A795JVyZJUtex5582d25clW/8+Bjq/9e/4PzzDX5JUukp+/BfvhwuvBB23RXe\neAP+/Ge4804vzStJKl1lO+zf1ATTpsWw/rvvwsUXwwUXQJ8+SVcmSVJ+lWXPf8EC+PKX4xK8Y8bA\nP/8Z4W/wS5LKQVmF/8qVcNllsPPO8I9/wNSpcP/9rtAnSSovZTPs/9BDcM458NprcN55cNFFsPHG\nSVclSVL3K/me/8KFcMwxcNhhMHIkPPcc/PznBr8kqXyVbPivXg1XXAE77ghPPAFTpsD06bDTTklX\nJklSskpy2P/xx+Hss+Pc/W99C378Yxg0KOmqJEkqDCXV83/nHTjxRDjwQBg4EOrq4OqrDX5JkjKV\nRPh/8glcey3ssAM88ADcdFMM9e+2W9KVSZJUeIo+/J96CvbaK4b3jzsurrx3yinQo+hbJklSfhRt\nRC5eDGecAZ/5TKzW9+STcP31sMkmSVcmSVJhK7oJf42NcMst8L3vwZo1cM01cNZZ0LNn0pVJklQc\niqrn/9xzcMABcOqpcPjhceW9c84x+CVJykVRhP/SpTBxIlRWQkMDPPoo1NTAFlskXZkkScWnoIf9\nm5rgtttiOd4PPoDLL4+dgN69k65MkqTiVbA9/7lz4XOfg/HjYd99Y4j/ggsMfkmSOqvgwn/5cvjB\nD2DXXeMiPA8+CHfdFevyS5KkziuYYf+mJpg2Db7znVip76KLYkZ/nz5JVyZJUmkpiPBfsAC+/W34\n05/gi1+EGTNg1Kikq5IkqTQlOuy/ciX85Cew887w/PNwzz1w//0GvyRJ+ZRY+D/0EOyyC1x2WQz1\nz50LRx0FFRVJVdRxtbW1SZfQLWxnabGdpcV2KhedDf/NgJeBAzf0CQsXwjHHwGGHwVZbxcI9l18O\nG2/cyUoS
VC5/jLaztNjO0mI7lYvOhP9+wJPAdkDT+h68ejVceSXsuCPMmgWTJ8ex/dGjO1GBJEnK\nWUfD/2RgMvCDDXnw44/D7rvD978fS/POmxfn7xfjEL8kScWuo+H/INHjv2NdD1q8GE48EQ48EAYM\ngL//HX79axg0qIPvKkmSOq2jp/q9syEPOvLIufTuDRdfDGPHxrn89fUdfMcC1tDQQH0pNiyL7Swt\ntrO02M7SMXfu3Ly/R1cMvDcCBwGPZ2zbEpgDjOiC15ckqdy8CewJvJ2PF8/XIj9vE0VvmafXlySp\nlL1NnoIf8rvCX14LlyRJHVNwF/aRJEmSJEmSJEmSJEmJ2Rz4I/A+8B7wP0DPRCvqmF2BvwCLgUXA\nrcAm6fv2Bp4CPgQWAKdkPfck4CVgOXHK4z7dUG9n9QRmAjdnbCu1dg4FUsC/gSXAPcCw9H2l1NbP\nETV+ALwFXA1slL6vFNrZ1jVFOtOunsCVxP/zpcTn1xb5KDxHbbXzaOBZ4nf7CnAJrU/ZLpV2NtuS\nWFfmpKztpdLOTwPTiToXAVfQei5eUbXzUeIDtg+wLfAC8QdaTPoSH5o/Is6IGArcD9wLDCZ2CM4i\nfkkHE/8Rm3+hB6W//wzxy5lIhM3gbqu+Yy4D1gB/SH8/hNJr56PAncBAoD9wF3AfpfU77Qt8BJyT\n/n4EMBe4iNL4ne5HfIA2Ap9Nb+tsu35EBOoIYABQC8zIbzPWq612VhEh8MX09zsSOzrnpr8/iNJo\nZ7MeRH1rgBMzth9EabRzU6KD/H2iHf8BzAPOS99/EEXUzu2JxmXufRwLvJ5MOR22A/AnWu9RjyV6\nFacSv6BMvyN2eAAmAddl3f8ia/dECskhxE7a7bT0/E8D/pX1uGJuZxURiv0ztg0Bdqa02tqH6Cl9\nh/jA2Ar4J/Bdop3F/Ld7MvAq8ZmS+SHa0d/f19P/Xgh8LeO+zdOvv20X1NwRJ9N2O8cRPb1MVxEd\nEyiddja7lBhxfYXW4V8q7TwPmJX12K2Bkel/57WdXX2q387EcOqijG1ziQ+ggV38Xvk0D/gSra9W\n+FWgnmjjC1mPnwvskv53W/e/mHF/oRkG3AiMB1bQ0uZSa+deRH2nE8NobwG/Sn8tpbauBI4Dfpz+\n9+vE3/PVRDuez3p8MbWzvWuKdOb3NwgYnnX/u8TnWFLtbq+d9wDnZ3zfl/ical7rdjSl0U6I0Ztj\ngbPbuK9U2rkXsWN+HbEmzsvABCLUIc/t7OrwH0AMS2X6KP21P8Xrp8R/srNov43N7eu/nvsLSQ+g\nhgjBF2i9szOAlt9ds2JtJ8Shm08To1O7pW8jiN5hf0qnrSOBqcBPgH7AGOJD5McUfzvfIXo22Trz\nt9r8mEJqd3vtzNSfOMa7HPhFetu6PpuKqZ2bE4cfx7N2vVA67RxK9OJnEx3kccAZtBzGyWs7uzr8\nlxMfOJmav1/Wxe/VHQYCdxN/hJ8l9tLaa+PS9L/bun/jjPsLyYXEH8u16e8raDnUUUrtBPg4/XUi\nUfu7wA+J46cVlE5bxxG9iF8Bq4mewmVED6rUfqfNPqRj7VpGy4dnW88v1M+sHYjA6EEcF25uQym0\ns4LokFxDHM/O3N6sFNoJ8Zn0FHAL8AkxKvcbYsQD8tzOrg7/fxAz4jfP2DYaeGNDCyogo4jZlf2B\nPYjgh2jjzlmPHZ3e3nz/mHXcX0gmEMNr76dv1cSOzhJiJKBU2gkRghXApzK2NS9v/Syl09YVbWxb\nA6yitP52M3WmXQ3EoZ/M+7cgemWF2O4vEoHxAPAFWu+YlUI7RxIdrUto+VzampjDcW/6MaXQTohM\n6ZO1LXPJ/aJr5+PAFCI0i3W2/xDgNeAm1r7y4VAiHL8D9GbtmcWHpL8/KH1/Ic6Ybs/NtMz234TS\namcvYD4x239j4rSb6cSM/1L6nW5JtOVCYud+O6JH8QtKq52ZE6c6267LiJ/RNsRQ620kPzu8WWY7\n9yF6iye389hSaWe27Al/pdLOHYid9QuIybm7EB3l5jN1iq6dmxMTG94jjnX8kq65dHB3Opf4JX1I\njFg035r3squAJ4hfzEu0/sMEOJ6YcLQMeJK4wmExyAx/KL12bkmcDvMWERY30zIRtZTaugexZsMS\nYpbxT2jpUZRKO7PDojPt6gX8nPjgbSAm1m2al6pzl9nOacQozrKs258yHl8K7cyWHf5QOu3cC3iM\n+L+6EPhB1uOLtZ2SJEmSJEmSJEmSJEmSJEmSJEmSJEmSJEmSJEmSJEmSJEmSJBW9/w/m6362UvOX\ntgAAAABJRU5ErkJggg==\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAgcAAAFuCAYAAAAVsrs0AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzt3XeYVNX9P/D37C4LLL0IYseG2GXtEUPU+FWjEisuIUrs\nGhNIjGIvqEQxCok9kdhdNBZMNEZELNFYWcWGYqGIdAFBysKy+/vjM+d3zz1z+9yZe2f2/XqefWbn\nzi3nTrnnc08FiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiosO4H0Ozz\n93Wexxie3c9WBd4mqmuyx6JcXQE8CGBg0gkhIqLi2RbAvtm//QA8C+Bbbdm+APbI8xg9s/upLvA2\nUV0DYGMRjlOKBkECp4MTTgdRqlQlnQCiAvsa9pKBpQDWA3gnxmMszf4Vept8ZIp4rFLE94dIU5F0\nAohSYhDkDvJsAHMAfA/g8OxrZwJ4D8APANYAeB/ASdq2w2GvIrgfwIsAfgVgJoB1AD4AcGSe2wDA\nAQBey6ZlDoARAKYAuC/k+Q7JntMqAAsA3AUpYlfaAfgbgG+yaZkB4PfGPn4P4HMAawHMA3AHgI4e\nxxwOOeeDIOe2BsCHsL+X6thjtWNPB3Cysc5sALcCeCm7H7fz7wng8ew5roV8dsOyrw0CMDX7/8va\n/wAwGPL+rM1uOx5Ajfb6Ndn0HQv5vNYAeAvAIS7pICKiFLsfwCyH5YMgGdc8AMcDGArJ6H4NoAnA\n5ZCi5+MBvA0pfdgyu+1w5Gb0ywF8DMnUjgDwLoDVsDLgKNvsBMmEXgHws+w+VKb3d49zvgb2NgdX\nZJ/fBuCnAM4FsASSYbfLrnMPpMTl5Ox535jd5rTs66dAMu5fQ+rrzwawEt5BijrnJdk0HA7JuDcC\nOCa7TgbA85DgbEQ2fXdlt/ultq/ZkM9gDIBDIUGTkxcATINk4j+GvE/N2f87ATgv+/xcyPsLyGff\nDGmLcDiAcwB8BwnelGsgn81SABdAgriXADQCGODxHhARUQrdD+/g4DJj+Z8A/NFYNiC77pDs8+HI\nzeibAfTVthmYXXZcHts8CGA+rAwcAPbPrhM0OOgGydTvMdY5CFYmCQCfOaxzOYCjsv/fnV1HL44f\nCuA3HukYnj3GFcbyBljVPD/NrnOisc6DkLYiqrRzNuSO3c9aAJdqzzOQUokDs88Hwd7mIAMpEXjO\n2M8h2fXU+V+TfT5MW6cd5PN5PEC6iFKNbQ6I7D4wnv8h+9gFwA4AdoTcqQLejQkXwx6EfJt97JDH\nNodAMq112jpvQTLKoPaHpPsRY/nrkGqKQZCMfyokUNgCwL8gd/M3aOtPhZQWTAPwZPb1RwOm4WHj\n+dOQzLY95L1tye5Pvz79C5IR7wqpigByPysnLwMYDWBPAP8G8B8AF3us3w/A5pBz1Y//GqQK5qfZ\n/QBS4lGvrbMu+9rPAqSLKNXY5oDI7gfj+XaQOv1lkAz0IgBtsq95NWJbazxXd+5evzm/bXpCAgjT\nQo99mrp7bLMIVhXGSMgdfl8Ad0KCljcgmSwgd8dDIe/XNZD6+a9hlaZ4+dZ4vhjyXnYB0CP7/ypI\ntYH6ewwSNGymbWd+Vk5OAXALpGfIfdljPw9gG5f1e2Qf7zSOvx5SzdRHW3cJcnuBLIGUzhCVNAYH\nRO4qIHfqqtthDYC9IPXvSZgHYFOH5b1D7GNZ9rGPw2t9YPWgUPX5O0OqPi6AdAvVSwcmQorju0Pa\nJnwHKRXQM3AnPYznvSHtOpYBWAHJ9Pd2+NsXwJvZbVp8jqGsBHAJJMjpB6liOAiS+TtZkX38g8vx\n9Wqn7sjVGxJkEZU0BgfUGgXNWHpCqhEmQIrP1Z286kHg9fsJeoww27yaPXZbbdlesLdT8PM2pNHc\nL4zlAyENLF+HlIx8Dqt3wjxIZjoRUs0ASBDwVPb/VQCeAHA9gEo4BzC6wdr/GQAnZI+7HnKOHSHv\nbYP2tzOAK7P7D6pPNu0nZJ9/AeBmSEmQOg/zzv8zSEnGtsbx50GCJX1MjGoA/6c9bw9pk/BSiDQS\npRLbHFBrFLRP+2JIff5vIMXRKyCZwWmQQMGr216UfvN+24yBFJM/Dykq7wbJkNVIj0Esg5R8XAXJ\njP8JCS6uA/AJgAcAbIBUIVydXecjyF33aZAgAAAmQxpR3pxNTzdI9cIXkK6HXsZCApyZAM6C9BJQ\nXQCfg9TvP5NN02eQu/ZrIT0PVMlHkPd3QfYYfwHQGVLtsTckwBqTXUeVFBwNKWX4ANLw8h5I4PAs\npLrjCkhA0WAc477s+ksgVU7tIZ8JERGVkPvgPFzyIEhmYI6UtzukUdsqSLH5QwC2hmQiE7PrDM9u\nq3oeOB1jG0gGfmoe2wBSJP4mpH3CbEijwG8g/fDdXI3cO+RzIN0m10Huim+DZIJKe0hPjVnZdeZA\nAgG91OJcSOCwGvLeTITVvdPJ8Oz5DM1utwZSYmC+5zWQ4Gdu9thfQjJcvQHoLHj30FB6QsZrmJfd\n1xeQqgEVXGQgjTPXZNOknATpSroWkvE/DWAX7fVrsufycwBfQapCXgCwW4A0ERERxeZQSHCg6wap\nJrig+MkJbTgkQ9024XTE4RrIubBqlspSob/Ym0Ci/h8X+DhErcFekOL8EbAGZHoWMnhSvcd2RESh\nFLLNwY8g9ZfbIlrjLCKyuwVSrH8epDriB0iVxy8hxfqloFyuBS0on3MhKprhkPrQk8EZz4iIiAjS\n11dVWTA4ICIiKiGFqlYIMghIHzgPxEJERETeFmT/CiKpcQ76bLbZZvPnz5+f0OGJiIhK2rcA9kGB\nAoTEgoP58+fj4YcfRv/+/RNKQnGMHDkS48d7dUEvDzzP8tNazpXnWV5aw3nOmDEDw4YN2xxS+l5W\nwQEAoH///hgwoLynPu/atWvZnyPA8yxHreVceZ7lpbWcZ6FxAA8iIiKyKUbJAQMQIiKiEsKMm4iI\niGwYHBRYXV1d0kkoCp5n+Wkt58rzLC+t5TwLLcq0snEYAGDatGnT2HCEiIgohIaGBtTW1gJALXKn\nEY8FSw6IiIjIhsEBERER2TA4ICIiIhsGB0RERGTD4ICIiIhsGBwQERGRDYMDIiIismFwQERERDYM\nDoiIiMiGwQERERHZMDggIiIiGwYHREREZMPggIiIiGwYHBAREZENgwMiIiKyYXBARERENgwOiIiI\nyIbBAREREdkwOCAiIiIbBgdERERkw+CAiIiIbBgcEBERkQ2DAyIiIrJhcEBEREQ2DA6IiIjIhsEB\nERER2TA4ICIiIhsGB0RERGTD4ICIiIhsGBwQERGRTaLBQWNjkkcnIiIiJ4kGB2vWJHl0IiIicsLg\ngIiIiGwYHBAREZENgwMiIiKyYXBARERENokGB6tWJXl0IiIicpJocPDFF0kenYiIiJwkGhzMn5/k\n0YmIiMhJ3MFBLwCTACwHsATAOACVb
iuvXh3z0YmIiChvcQcHjwFYCaAPgH0BHAbgcreV2SCRiIgo\nfeIMDrYH8GMAFwNYB2AWgOsAnOm2AYMDIiKi9IkzONgFwDIAC7VlMwBsAaCz0wasViAiIkqfOIOD\nTgDM7F6VDXR02oAlB0REROlTFeO+VgOoMZap544jGqxYMRLHHtvVtqyurg51dXUxJouIiKg01dfX\no76+3rZsxYoVBT9uJsZ97QDgcwCbAlicXTYEwFgAWxvrDgAwraZmGlavHhBjEoiIiMpbQ0MDamtr\nAaAWQEMhjhFntcIXAF4HMB5SjdAXwBUAJrht0NQU49GJiIgoFnF3ZTwRUlUxC8BbAJ6H9FhwxOCA\niIgofeJscwBIdcLJQVdubgZaWoBMnJUbRERElJdEh08GgA0bkk4BERER6RgcEBERkQ2DAyIiIrJh\ncEBEREQ2DA6IiIjIJvHg4O67k04BERER6RIPDq5zHQWBiIiIkpB4cEBERETpwuCAiIiIbBIPDnr0\nSDoFREREpIt7+ORQevYEfvKTJFNAREREpkRLDnbZBVizJskUEBERkSnR4KBtW2Dt2iRTQERERKbE\ngwOWHBAREaVLosFB+/YsOSAiIkqbxEsOGBwQERGlS6LBQbt2rFYgIiJKG5YcEBERkU3iwQFLDoiI\niNIl8WqFtWuBlpYkU0FERES6xIMDAFi3LslUEBERkS4VwcHq1UmmgoiIiHSJBgddu8rj0qVJpoKI\niIh0iQYHakbGhQuTTAURERHpEg0OuneXx0WLkkwFERER6RINDjp2BCoqgBUrkkwFERER6RINDjIZ\noFMnYOXKJFNBREREukSDAwDo3JnBARERUZowOCAiIiKbVAQHq1YlnQoiIiJSUhEcsOSAiIgoPVIR\nHLz2GrB4cdIpISIiIiAlwcGSJcD++yedEiIiIgJSEhwAwKxZyaaDiIiIROLBQadOSaeAiIiIdIkH\nB6rkgIiIiNIh8eCgQ4ekU0BERES6xIODdu2STgERERHpEg8OKiuTTgERERHpGBwQERGRDYMDIiIi\nsilkcFAD4E0Ap3mtxOCAiIgoXQoVHOwC4DUA+wFo8VqxpqZAKSAiIqJIChEcHALgJQD3AZjrt/IR\nR8hYB5tuWoCUEBERUWhRgoN2ALZ3+asB8AGArQDcAZ9SAwCoqAAuvhho8V2TiIiIiqEqwjb7A5jq\nsLwFwHEA/hl2h+3aAevWRUgJERERxS5KcPAKYq6OYHBARESUHlGCg9iMHDkSXbt2xdy5QGMjcOyx\nQF1dHerq6pJMFhERUSrU19ejvr7etmzFihUFP26mwPufBeBqAA8aywcAmDZt2jQMGDAAjzwCDBsG\nrF3L4ZSJiIi8NDQ0oLa2FgBqATQU4hiJD4IEWAEBqxaIiIiSV+hqhb5BVmJwQERElB4sOSAiIiIb\nBgdERERkw+CAiIiIbFIRHLRtK48MDoiIiJKXiuCAJQdERETpweCAiIiIbBgcEBERkQ2DAyIiIrJh\ncEBEREQ2qQgOqqqATTYBPv446ZQQERFRKoIDADjoIAYHREREaZCa4KBzZ2DVqqRTQURERKkJDjp1\nYnBARESUBgwOiIiIyIbBAREREdkwOCAiIiKb1AQHNTXA+vXAxo1Jp4SIiKh1S01woAZCamxMNh1E\nREStXeqCA46SSERElKzUBAft28vj2rXJpoOIiKi1S01wwJIDIiKidGBwQERERDapCQ5UtQKDAyIC\ngG+/BR56KOlUELVOqQkOVMnBu+8mmw4iSofBg4FTT006FUStU+qCg/POA95/P9m0EKVBczOwfHnS\nqUjO998nnQKi1it1wQEALFqUXDqoPGzcWPo9X8aOBbp3B6ZOTTolRNTapCY4UG0OiOJw9tky6mYp\ne+kleTz00GTTkVYtLUAmA0yYkHRKiMpPaoIDveQgk0kuHVQeHn006RTkr6Ul6RQky+/81WiqDA6I\n4pea4KC62vqfwQER+VHVRhWpuYoRlY/U/Kz0gIDBAREpbiUIa9bII4MDovil8mfV1JR0CogoLZqb\nnZerkgOnm4mvvgKmTStcmojKXVXSCXCyfn3SKSCipKlMv7kZqKzMfd2rWmH77eWxtbfbIIoqlSUH\nGzYknQKi5DFjE34lB0lVK/zwA0dwpPKVyuCAJQdEdm4ZZJLmzQPeeSf+/S5eLN04VXDkdu5ubQ6K\nVS15ySUyguOcOcU5HlExMTigVu2SS4Ajj0w6Fc70uvSNG4Nv98UXwJdfxp8e0y67APvtF/9+jzgC\nOOww67kKDubPB8aMsYIGtzYHjz8ef5qcqNErWdJJ5ShVwcGQIfLI4IDyFbRI/qabgP/8p7BpicPq\n1fbns2YBo0c7r7vjjsAOOxQ+TStXFma/5p24Cg7OOgu4/HI57nffAUuXyvIXX5RlarjlYo2MqYIS\nVv9QOUpVcPDoo0CbNgwOKD7bbFMeM32edJL9+dChwNVXt46MSZ2jui5kMkDPnvZJmTbZBOjaFbj1\n1uK9JwwOqJylKjioqJCREhkcUFzmzJGpf+P2858De+0V/351eqYzZYr9NXU3nca2CPkyM1t1jmq5\nU2asrhkXXijVKnHLZIDhw3OXuaWHqNSlKjgAZKREBgdUDOPHR9/2mWeADz6ILy1ReQUHd90FzJ5d\n+DQ88wzQ0FC4/Tc3S+PH116znnsZO7Yw6XjgAftzDtZG5YzBAZW97bd3vpv83e+Kn5a4eWWU558P\n/Oxn8R3rv/8Fnnwyd/nPfw7U1vpvf9VV4QYm0ktHDj7YavgXpnFmMbDkgMoRgwNqFYrVgj1fkyfL\nHem8ecHW97uLNhsN/ve/VhfAsA4+GDjxxPDbtbRI98LrrgvXM0QFAc3N9mncix0cqDYrHTrYl7Na\ngcoZgwNKzFtvFW+I2yuuAEaOLM6xgvjFL5wzypdflsegXRH9ggO9m93KlZLBX3JJsH3H5bbbpKEx\nEG5qdj046N49d3mxrFghjx072per4KBQ6enXD/jlLwuzbyI/cQYH2wB4CsBiAEsAPJ1dFkp1NfsN\ntxYHHADsvXfxjvfnP1v/e93tLV0K/PWvhU3Lo486d6FUmeiGDblpXLYsd/0wwcHixfL43XfB0xmH\np5+2/tenZnejzlsNZtTSYg8Oit0I84cf5FGlvbERePNN6/VCBQczZwIPP1yYfRP5iTM4mARgKYCt\nIUHBdwD+GXYn7MpIxWCOG6A76yzgnHO81ykUPTgwOf0uwgQHKijo0SNa2nRh2jLo8yIECQ4UveRA\nn9I9bGb87bf59VhR7/ucOVJasO++wIEHAvffHy09RKUgruCgG4D5AK4EsBbAagB/AbArgFCXIlYr\nUL7ee0/u7py8/roMlvOLX7hvr+4Um5qkSLmlRQYcmjNHWuXr1RNx1zerTNDpN+A0h4BfxlSI4GDd\nOuDf//Zf7/LL5e5XT7dXcLB4sVR9qOJ6lfbm5uijRQLAFlvIX1TmZ/Hhh/bnUdtwEKVZmFkZ2w
Fw\n+4nNB3CUsexEAHMhJQiBMThofVpa4u0Wdu217q8NHCh3vc895/z6xo1WZjZunOxr/nwZcOj112U0\nPt26deHq0f3oJQeq/YGeNlOYkgNVElJTEz19AHDRRf7rbNwoQx1PnCgjNipVHlec3r2BTTe1hiVW\n1QrmOervQ5jrxRdfRBs50q+ac+BANkqk8hOm5GB/ADMd/j4HcJix7jkALgRwetgEMThw9u67wAsv\nJJ2Kwoi7DtkrAwJyM12VhvnzZdvJk2WZCjIWLpRHpzv3oN/VGTOAr7/2X08FB077jRIc6NuoVvfq\nGI2NwO23h3//b7/dfx2V/q+/tlcrqGMNHw4cf3zuduq9BuzVCnrwqKfX7EHgRQ9SwuD1iFqjMCUH\nr8A/mKgGMA7AEEhJwqteK48cORJdu3a1LVu6tA5du9aFSFbrsO++8pjGO5SPPpIL9h57RNu+qcme\ngeRj9Wpg0iTvdZyKgTdscJ9db9YseXQq3XCrvjDtvLM8+n1+Xm0OogQHgGTQ225rpVUd4/bbgT/8\nQVrF//Sn/vsJQ89Q9aBKpdccUMiJKjmYOBFYtcparr8PHTpYJQ1xWrIE6NKFNyuUvPr6etTX19uW\nrVBdaAooTHDgpyeAfwFoA6AWgO9EpuPHj8eAAQNsywYPjufH+MAD0m2rb9/895Um06dL/WkcjcqC\nWLhQMrQ+fdzX2X13eQwTuOjrbtgAtG0bPm0tLZJxnHyyFVxMmBBs2+23l77zKtNpanKv2jjhBHl0\nej3ujEO/qzf16yddHPX68yDBwc03y126mphIvfeqmsGr2FxNchTkd7RkCfD558BBB9nfFz3wC9Ne\nQAUHl11mX24GB4XQq5c8rltXGr2nFi2SqrIpU6ShpN6Ak0pbXV0d6ursN8wNDQ2oDTLyWB7iapDY\nBsALAFYAOAgBAgM3cUXqw4endyrefOy5JzBoUPGO16cPsNlmwdb98kvvGfEeesj6TPTPWGUCYU2Z\nIhMQqYBg0SJrpj4/XbrYnwfJAPIpOQhKBQeqUaR5LHOEQjM4cOoeuWwZcPjhUkqgb6OCBK/2Hkce\nKaUOQQZlOuQQqX9XaVX0kgMzOIhSpaTvI672Hl9+Ke1KzDTNnRvsetTcLN/9sWPt23/1lb3Uo1CO\nPho44wygvr54Y4dQeYsrODgGwF4ADoaMcbAq+7cS7o0YHVVXS53vK6/kn6gkuqLl48UX5UL9wgve\nd+Eff1y8NCnV1f59/3fYwXsEvVNPtTIvPSCIGhyoDEiNnrfppjIKXxBmo7yFC6M1itTHToiDykid\nggPAugtXad1yS3n87jvgxhudA2IzAw4THLz3nv04Xj7/3PrfreTATEuUu3J9H+YdshksBNn/7bfL\nd3f0aAk8zjzTem3NmmDBQVOTDPY0ahTw0kvW8u23B445Rv5fu7Zw83HoXTXjDlipdYorOHgqu68O\nADppf50BBBwIVqgf+09+Ej0x6uIRVz12GEOG2KeSDaJPH+CeeyTqB4AjjghWJ5uvKVPsFzIvGzZI\n5uMnSBc3tT+n/92sXWu1FWhulgug6hZ31VXBjqkzi6Pfecd/G6ci8dtuC39sLyrDdiuBUY0tzeDx\nuuuASy/13kYxgwMv+uBDftTneO+99gzKKziIkpHpn4MqaVHMoM+vEeLddwO/+Y31fN064L77rOer\nV8uQ036amqx0mW1a3npLHs8+u/AzeQLynra0AAsWFP5YVL5SOXxyvlSk79S6vNAef1yKz8NYuFDq\nVfU7OL/hc+OYBe+nPwUOM/uZeIj62SxYkFvUqZcWrF9vDVELyMV0662tz3HCBLnod+smz885RwKD\nfBrR1dTY3++JE4HPPvPeJkx118cfS++HsFQGo3oW9O8P/FMbSsytGN4c2ldnZqDmPtRnsWSJ3EHr\nd6FR2racdZb9vdLf56Ym+/HjDg7MoM9vVsrzzrM/Hz0a2HVX63lDQ7DZO/V2M+Y5qeBMlfgVetCk\nxkapQtpss/Kc0puKoyyDA/XjTKLkwMv69TLfvFMdZCZjD2b8Lpq1tbLNN9/Em0YvUT+b3XaTYZLV\n+amJeJRLLrEyfgC46Sap612yRJ6rYl6V4Tz4YO4x/vjHcGnq0ME6Zps2wPPPSzsVL2GCg912i9YY\nVl3MVXBwxx3A5ptbr7vVX/fu7b5PM0i+6CIJhlTJwTHHAI89JtVaX34JPPWUtW7UBn/691e/k/7s\nMxk2W4nSvkjPXM3Sj549vbd99FHv18eOtZda6qUKXiZNsoKD9evl/NWsn6tXy29VXY8K3cCxsRH4\ny1/kf47eSFExOCiiyZOBW2+1F0Wri1tFhf0OS2UOfpymIi4U8y4tKDUyn35x1C+Q6s5YvRd+deFO\nJUJmi3Y/NTUy3kF9ffDvXNiLetCxCnQqOFDVCtXV9u+xmh/BFHZgo7o6e8Y6ebL9u6hE/Q3p5/7M\nM/bX9CqcKCUH+t2weWf89797twPxGhlTiRKwDB9ufY8aG4EnnsgtcVDvZdj9T53qX+2lf5aNje4D\nSBEFxeCgiJzuHFRmYQYHQS+aURvzRZFvJqoynXXr7OlW74HaTl3o3C5scXyu3bvLnf0ppwQPeuLo\nRTNsmPfr6pxVY9q2be1tBlTVhz5YkL5dGHqGkslY+8hk5FxXrYonOPCyahXwySfh9q0HWGawtfnm\nwG9/G2w/v/6183Kv317nzu6v6dUKTvtwCw4+/NB7gLNDDwX2209K0sxAy4l+bAYHFFXqgoOod6e6\nQgcHp50mIxaG5TTAjUprJgM8+2zucj9pDA7cegyoz8MMDtQFzAwO3DKYIG1J/AZC0uvSg37n4mgF\nPnGi9+vqvVBF8W3b2r/H334L/OMfUu2ijB2b//cgk7GX2Bx+uGSEUX9DQUu+9tlH6vjDjJGhBwTm\neatGquPGee9j3jzgzjudXzO/d3vvLe1cAKnOc6OPUeF0Pm7BwR57SCNkP0OGAD//ufNr+vH0ni5B\nSkqInKQuOIizQeJHH8n47nF78EHg9NADQzsHByqtmYy9AVvQi2sx6xSDZqJffeW8XC85cCo9MYMD\ntxKIIBnW4Yd7v64HB37DLStOwcF++wXbVufUTbGlRTJ8Mzho0yZ3EKHp0+3bjhvnHRy4Vc+4lRzM\nnAm8mh3bNGqj3qCjFqrP+J57gu9bb9y6caO0UVG9KtTd+3bbee/Dq2um+TnrA3R5NYJVbWTWr3f+\nPKJWKyiqeq6pSQa0cqtSVCN6AvbpsonCKJvgYPFiucjPmGH/cV9+eTzpUlTGF6VPvMqE9Exv5kx5\nNC/C+gXQrc87UJjgoFcv4Prrc5e3aSP1n5mMVSfu1rjSiUrriy+6lxysX2+VoMyY4byfIBmWX4av\nN4B0Knp1CkCcgoMoxbZ6P3rlnnukh4YabEhlzlVVucGBef5r19rHGAhKDw7Wrwd+9Sv5X68rj1py\nsGxZuPXNXgNeRo2y/m9qAi6+WDLEf//bCmDzGTnRz
Lyrqqzv9E47uW83YoQ9XSa1j6jBgbo2rl0L\ndO0q3TTXrs1tlKyXKhFFVTbBwdSpcuF86ing7bfjTZPy4YcyqAkQLThQGYm6OPzvf8D++zvv76uv\nrMZnXnPRRy1OfuMN99eWLAGuvDI3TTU11kBIKm0335y7vdt7owKK//7XuVRgwwbg4Yet58cdl1sX\nO3lysIynstK7pbnegM8pwHJa5lSa43ehv/12YKut7MucSlZUMHjTTfbllZX2DNppHorvv5deDWHp\nPTzc3lO34OCJJ+zjS5x2mv31CRNkUKpCU9//zp3tA0B5BQdejTf32gv417/syyorrYAsSLCUyTh/\nv/1KxPyoa6Pe7uDkk3O/X2Grv+bPl4ArjfO2UHLKJjhQDbSuvz7cXYhOFQu6ybfboLqQqYuD3hDL\naXhalR41Jr6TqCUHet130ItC795Wxv/qq95Bi5cHHpB6XJPZiwGw6nqV//u/YMeoqJDuXGqMfJOq\nmwac30On/v3fGZOPd+kiwcHy5TIHgZORI4N9b1RJh1kSYQYHS5dapQpBBfmOuH0H3Epp+ve37tKf\nfVa6buqmT7eXzhSK2yioXsGB1xDf77+fu0xvLBykCmrUKHv7IZNfQPnCC9Ll2aSujb/8pbVMDWJm\nzlWiM4NUoObgAAAgAElEQVQd04gR0m4l6u+ZylPqggP9Qhimm5668wlSVz93rr3Y/K9/lQZBvXrJ\nj/qrr2TUQvNuSr9w+5UcOF2QzeDAL8JXGY5X6YDba3PnShqnTHF+Xb+Y/O9/3ulQKiut8z7tNJkp\n0uliqdbxG1TItGFD7v7cZkoMyi1z04MDp/fwmGP8h/Du108u9N272+/enHpi6MyeBoB7plNRYX9t\n+XLnKacB98mxgtxJulWPuGVk1dX2Boz6+6m0tBR+IDK3cR/M4MCvl4gX/XsftJrFa+RRt/f0oYfk\ntSOOkC7PJqcbJ6dAxwwOjj3WPS2AdW4sOSBd6oIDvfFPmB4BYb7Yp54qxeZ33SUXxXPOsYqv339f\nirYXLrSGPZ09W+603fqYO5k1KzdjU5mG+kH7XbTPOUf64XsFB25FlCqwcsvg9MzgoIOkH7VfMGbW\nd8+fL/3KTZmMfHb9+9sH1PGzYUP8PUxOOsl5uT4Gv1MGU1MD/PjH7vvdZx8pxVAXer10x+/OUC8l\nWrZM6vrdgiCz5MCL27qPP+6/rdvdtFsmZ86i6RQc/PBDfBMjuXELDsyqg112iX4MveSgslKuFf/4\nR/T9uU06duqp8ps3JwVTvBoE69eBsGOf6IOTESmpCw704two9fpO3MZzP/985yI3czCYI46QQWP0\nhmTTp3sPcbzDDsA229iXqQvB8uWSJqcZ9HQffSSzDprBwdSpuefy7bf2+nl1zm53bk6Dx+y4I/D6\n6+7p2bgx9zNxytQyGeBPf5L/w5QeOJUc5GvcOHvrbcUpM9OpzMWtX/tBB8k6ZmPRpib/Ufh0PXrI\nFLtuRb9hgoONG+2N4sLwavTqxCw5cCrGX7XKeyKusDbZxPpffb+23tp5XfPzDTtIlM4sOTj8cPt5\nhQ2AVHBw//25Gf7997tXIz73nPs+9RLTsNUDLDkgJ60iODAzVz1jNOssW1pyM1a3EoMddgiXDlXE\nvHy5TE7jVuRvMtOvD++qLgoHH2zvK+002p3ODA5UJu511+EUHLhRd6thGkwWIjiorHS+E/MLDvwu\n+M3NUoyvd9mrrwdOOEHmFohLmOBAf6/9zs8UNjgwSw7MRnGAZILmekEdckjuMn3Oku7dgTffdG9/\n0aWLNYQwIO2QnAKVm2/2b8ypf+f1z2LKFOnW7DX2gU59V1RwEHSSsiCCdn32wuCAdKkLDvSGTSpj\nW7ZMgga3jGvNGu8W7Bs2yBdfba9njFdcYV+3pcU+aiEQf5CyfLn/hDB6MahXBtvYKNPAfv21PP/8\nc/vkNm5pd5t8x6uqY90694Z3OnOiHd2gQe7bXXBBYQauctqnnnk6zcug6ned3r+tt5YAYLPN7MuH\nDrVPkhSHysrgAZP6ngPh75SjlBzoVC8eXVOTc3AQZLIvs8vg1Kn2xqhVVdLTx62dBWDvrdKmjZQU\nmvr1k+VXXAEMHmwtP/5463+zWkE59FBpHOh1fdh9d+v/Dz+URxUcxPldzydjV9c5jqZIutQFB506\nWf+rH91bb0nm/9hjztvU1rqPdgbIRWrECCk2nzXL/iMwi5ybm3NLDuIODhob/ScK0oODn/3Mfb3G\nRvs0sDvtJBdC1a0raMmBCohUcKBPW6s88kiw4Vvvvz93v8q227pv99ZbyQQHl1yS+7pXcDB7tnw+\nfpP8xKGyMnhGrwdiYe/Y3Vr9uzGrFXr1yv1Ob9hgvc8TJ1olXmYPEqeJtC65xGov07Zt7hTuUUZS\ndSoNUvu57jpgzz2t5fr7p1crOH0fvK4PW2yRu0y1Dyjm8O5ewQOrFchJ6oIDnfmldfsR+tVrb9hg\nTXa0dKl3hKxXK6gfbxyjNgLWxdvMMPV6YlXHrdevevFr1Oj2npmZvzlVsBpxLl9mQ7c99vBe3yzJ\niYPTRdjMPM2MwwwOnALTsEX3UegZk5+mJuu3EvY7G3bQoqqq3MxkxIjcEQRVu5u+fYEBA+R/fQbJ\nqirnTLu62moQ6vQdjxKwewUHKi2KOfmUVwbqlRan794jj8h+ihkceFU7MDggJ6kODsxZ+qLS76hm\nz/YPDszjmg0Lo1LFimZw8Ic/WP+ri1XQu1K/4GDChGDvn+oZou6i4wqIzDHu9eDAKXN1GxUxH2ax\n/Lvv5t55fvSR/bl6/eabpRrh5JNz9xu1Pj2MMF0Be/SIHhzEoX176QGkO/dcmQ57332tIaFVcDBy\npL10QeeW4d5wgzxGuSY4BQdu1SmnnGL937mzd9G712fk9NqkSVIqEnf7Gi96u6lLL7XfUDE4ICep\nDg7MsfajFu/r3XxOPtl7UBi95OCVV2R4Y68RzYI2Klq71hpN0Dy+ftFSF4ygA8j4NUSaPVtGJAwr\njgmwnOjnpVchFZJ5h+Y0AJM5Fr/KXM84w719SDGCA5X266/3/07o1TlxBgdduzovd/pd6hn9n/4k\nmaNqKKtKkVTgq767+jaq14Pbb7RjR3mMUj/uVD3jFlwffbT1f+fO3tcer0zerXRg5cpgJQd6m4V8\nbLON1RPpxhul95Wizq2Y87RQ+qUyODDHII8SHGy+ufW/mbkHrVa49lqZp92r3/oZZwRLj34R0v+v\nqrJ3l1MXGqeuYZMmSQtt3dKl/sdWd2xhFOrOU79Ad+wYbdjfsKIElUHOv5jBweWX5w5RrNt7b2no\nF7TkQH3nRo70T4N+9ztmjHewqd6TvffOHeVPBQeqR5IKDvTeJGpCMxUEAFLqoKjPMkpw4FRyoDc8\ndPuedOnifXftFUi7BQAbN/oHB507uwdmUXz4oXNprPmenn++vRrthRfi7YFDpSGVwYH64q5bJ3dD\nqlpADa6j
xvdftSp3SFtlr72sbk7m3YFXhDxmjL0ofNEiCQ70YkZd0ElU9DToLf6rq62LS7duMomM\nWm46+mhrLgbFbwpgwH6+boPGmNq0kcAoLL96eP31Tp2cW5ADcrfuNQPeM8+EK3nQG5sFEaTkxCk4\ncOp3/8477uMl+NEzZq8M0cy8/IIDFYTuvLN/GvTveI8eMsaDfiydOq5TcKs+T/Ueqd+EnoZrr5Xv\nqPps58+3j+uRz4A9ZnAwbFiwon09OAkTHFx+ebjgwGlgKT1IchM0kHeqMgVySw7uust+vTv2WOl6\nTa1LKoODs8+Wx8cfl9Hj1KAymYz8UNV4+0cf7V4337Wr9aM1L4BqJkTFK0NbskTq57bZxvlHGCU4\n0KkLxJdfAh9/DPz2t/LDdTpW1KFoVXD17ruSSQUZLrmmJtw0uorfxUy/AHqtu8UW3u0uevUKV/UR\nNjMJcsF1+t44DQq1zz7xNPD0Cg7U+xo0OFD7CnKeesmbX2v9jh2BAw7InUAKkFKBxkarzYEqOejQ\nQWaCvP12CZD170WfPvZAI5+SA/PzCvqdOPJI75ID9R6a7S322cc7ODADE7MK4aabvOeIUNsHvQbp\npaI6v2oFtkVonVIZHNx9t/woFi2S5+oiol+Exo8HXnvNfR8nnhi8wY/bcKWAdaddXe080t769Vb3\nR6/uYG5tA1SGv9129n7zThdt/fz//Gf3Y5lUcPDxx/Koz3yo0zPumppo7Q78GlJusglw2WXyv9ed\nf0WFf0MvM6Ny6jamhM1Mgpy72/dryJDcAFQd32z4GIbXOajvixpDoG9f732pjCBI1Yie+fhV0VRU\nSPC53365r2Uykk51B6//JkaMAH79a/+05FNyYH6fzH14nZu6Rjj9LtV3xexu6TSDpuJUcqCPFbHH\nHlJy5xW8ffVVuNEQ9TFcPvzQCv7Z5oCcpDI4UBO5qIxZXYT1H+/vfue9j8GDgw9rGqQfeXV17qA3\ngFw4DzpIgpl99nHf3q/kwOSXOf32t97F7jrzR2/e4QAyJKxefFtTE62u3mk2Q5O6iHqVHGQy3sfX\ng4PGRsk4vWY/9Bp8yXTggbkXekAGmJo82Z5GJz165Pbl33VX+2MUQYKDwYMlU+rXz3tfenDgVyLl\nVAStLw/7PVF3wwccEG47/VhxDNgTJsC48ELgb39zTrP6rZr7695dqpScDBtmn1xpiy2kulTNVaLO\nzyuNffrkXpPat5eqUSdmycG558qjek+dqvgWL/ZukN3SArz9tvvrVLpSGRwAUvytGtvp3cq8HHWU\n/blX5qNn5EGG+PW6GL35ptwFeXXDCxscBCn1+P3v/dcB5Pxuv917PIgXXrC/XypgcsokvQS5E1Xn\n5lcFEbSLWHW1fwblNMud6fTTZV9vvOHcRmDHHYMFZO3b5wamEycC06Z5b3fqqd6vezVg0+8wKyvl\nuzF+vPv6qppj0aJw8x/EMSBYdbVMbHb55eG3VaMkHnpo/ukIo21bmVvF6fzV9UnPRCdPlt+OPs6H\n13DrRx0lvwc1FkSQwMXp+1BZ6T7JlCrlNKlzcgpk9DEpnDz5pLSDMktxv/nGe2A6Sr/UBgfdulk/\nLPXjW7LEe5vevWVoYjWgi1fmo7fgD1JyEGYEOf1CreYYUMGBOQiQWwYYpJuT3w9XWb9ehpIdO9Z7\nPafgwG+OBpNZ4uE0K6M6N1Wt4FZKErb/uJeqKulSpybscXLPPfGMUb/99rnn1KmTdeF349dz47rr\n3O+2nRqzeU3CNHSoVEH87GdWNZNf+kz51EX37h2tDc0228hx3SZcMt11l3vQFVdduhoHRf8MVPWO\nPiuq3n3QpN4Lde3wS9uiRc7vX0WF++9p9Wrv4MDkVRKqzJ0rj+Y05CeeGKyaiNIrtcGB/kNbsCDY\nNoceKg0RVX9wrzptfRKXIPXLYboD6sGBKu1QwYF5t+8WBAS5cAYdKEmfHMiL3vhJvSfqYhLkWLvv\nnlvi4XS3pO6w1OezfLlzUWicwQEgg/GY3et0VVXh74znzMmdLW+77aLdYbdtm9sbRdetm3t7Ebde\nH2622w548UVgyy3ls379dfusnm70yb2UuIYXL4RzzwUeeMD5NTMDjjq2x6hR8t3acUdrmXpPttrK\n+l15BfzmaKx+1SZmtZW+n7DBgf5b0t+T997zTgPg/tmHHY6b0ie1wYHen1+Pvr2YLXu9GvPo8xEE\nabGtgoPbb/dfVy+JeO89mZrZqdsW4J7JBcn8gvazDxpc6X2qzdbZqhGhl4EDcy9MO+0kdzn33WcV\nM6q2JCo46NDB+SLjNFiR4tQgEZDR39wy0ELYaispEtYbAEYdVrmqSr73XneYbnNTBG1TccMNUoqk\nTz8OAD/6kX8AeNppzu1uSsmhh1qjXZrBgQqwwnZ7VaVSTioqrMGH3Ir71XpqX4BVwhi2dKOy0r1U\nZc2a3PZHLS3235HXlO1ebXrWrZMgSZX2fvKJPObTAJeSldrgwLx4BWFmlm5R7T/+YX8epuQgSFGZ\nWU0xfbpVXG3+cPMpOQjaG0P1+vDTti3w8sv20g11cTrppGD7MM+9slLucoYPl2lzdfrdj+qeqjv3\nXPfP0C04GDMG+MUvgqU1TsccY/0fdVjcoI3tHn9cGuQ+8kjwfas2EHV1MpVxkO+X3zwXpdjFbcoU\nqzTPTH+HDrLs/ffjPeaee0rPghNOcF9HXQe22kp6IkXpRgzI57rDDvbBnZQ77rCGSVfMxoYHH+y+\n7622cp+V9R//kGrLsWOtGWIBKU2cP18CyyiDsVFyUhscXHBB+G2CXpTNueKDBAdeUb/JzCAXLLBK\nDtq1s9d7Bw0O/AZf8Wqs2dDg/ppp0CDglltyjxukDUSvXrkzHDpl4MccIxc/fZClbt1y77gzGfe7\noIoK+5wUSdMbIOY79LRfA9mTTpIGlkOHSuA3b577usuXy0VZjRgatAcPkPv+mt/BM8+U34U+SFAp\nyKc7ZFRes5EC1u8rk5GeSF6DZpmNm5980mpXofYzerQ8nn66ZNyqXYRZwrFuXbgqOjODN7uZX3NN\n7lDkm28uM2/+61/Bj0PJS21wEGTQGDVSmxL0omxWIwSpVnCa1teNeQFevtz6QbdtK/XeLS3S4Met\nRXmQzFgPhrxab6vxDZzsuqtE9m704MBtymxAGh5ecgmw227u6ygVFTLQlXmOYYZsrqgALrooPXev\nemDj9z0cOFAaF7p1c/PqOmbq1cs+VLipa1f5Pqp9hgkO/D6Pvn3luxV19EeyuGXQXoMuKccfbwVy\nTteNE090bwOwbh3w9NPB06muYxs3ysiJqroxSLuTYsxiSvFJbXAQZOKhu+6y/6jCBgdqSFCn7fQG\nch07hpte1fwRrFghP6pMxn6sd96xF0fr9AFR3Oj7Ctt//tln5bGmRvpLu9GDA687jOOOy38+Br+B\nn3RRR4sslDAlB6+9JkX2O+3k/HqY4CAotc8wF2jz80hLIJavNM5C6
HZ9UdNW+1G/B7dSEbeeXo2N\nwdskAdaAWG+/bS8JCDKAUpjAlJKXskusxWvYUGWTTexF+EGDA7Weutt2qo7Qv+zm6998491NzFxf\nBQdt2wZv2d27t/90zPoFJWxRdtAuU3pwUOj5552CA7fpstMWHIQpOVDcqsGCjLsRlgoOwgRwhf68\nk1JKwcE551hds724fZfUubo1sgzbdVddk77/3r48yHeWJQelJWWXWEuQTFSfvx4IPs2x2rd6dPrS\nNjdbPzjzh7vFFt5D9Zo/1HXrrOAgDL8LeT5dyMJM1gLIe1DoLmtOmeoTT0hvD1PaggM9mMw3OFAZ\nuVfr8LD0ycvCKvUeCqY+faRqL8gQ5G7Badzcvs+ZjFzXZs6UeWYefNB5PTVGids8C6++6lz1GHRe\nBkUFB+aNS5BusOUabJarlF1i/Y0aJaOPtbTIxVXVpY0e7T0CmfLGG9b/qnRip52kXcCUKdZrzc3e\nGZDXXYf5I2hslO57xZjiN6gowUHYIWtVl7F80tS9uzUqni5twYFepxs0OHC7WJrjQMRB74MfhLrY\nr1ljHzK6HGQy0ijYK8AH5DP1GlXUyWefebfxceNX5L7DDjK08i9/6fy6Cg5Upm1enzp3tvciUH74\nIVw6VTARZbCwyZO9Rwk9/vjiBWPkL2WXWH/V1c5D2AaZlx6QcfOVnj2la88110jDLb0bz8aNVmYY\nNFNUYyBcf718yVW7genTJXhZvDjYfqLadNPg66oMzK+oT517ZWX4iVnCBhNeAcvuu0t3KKUUgoOD\nD/ZuLOh2DlHaB/h57jn72CF+VC+R9u2tkgO3vvzlqqYmfEDfr1+4nk36sfKhbnTMTFsvKXKqpvQL\nDvr3tz93KzkIYvRo97FL7rhDGkY6zWpKyUjZJdZu+fLcC6hbUWzUu6y997b2qe9bzbSo/jc5lRwc\ndpjVC2HWLOlC6DWgTRANDcGmWAZkYqBbbskdaMlJZaUMSlRf772eU8lB797e4/aru6A4g4Pp04H7\n77eGpU1bcKAHZio4ePVV726GblTXwHwbeOp69PAefdGkv7/dukm1xNCh8aWH7PJtrKeuXV6Zttm7\nC7AGJHNj/s7UxFBRggPFadsoXdepsFJ2ibXr2jW3jjTfPuReMhkrAteDg+uuy13XKTgwi4k7dQqW\nUXvZa6/gs9d17iwDGB19tP+6VVUyKJHXnS1gVdVUVFglB5ddBpx1lrWOWSc9fbo8xhkcmNIWHPzm\nN9b/Ub6j+nt4661SBJzksMTm+8v64sKKqyW/+ZvTr1P33Ze7vl9wYH7uDz0kQ0U7DVoW1N13R9+W\niidll9hcZlG2WXLwyCNWt7w4PP+8ddwHH5RGPPqF34tThpVEC12vTFnd/QW92P/tb1Y7DfVZ6D0X\nDjggd055Vf8ZNjgI2qAUSF9woM+GFzY4qKyUBmdKmzb24ZiLSbUTSdv7Wy7cRu+MKzhQxfa77CKN\nLv/4R+s1p6oLv2oFp/Sas9+GFXTEVkpW6i8B5sxg5oV36FCZWc7NzJkynoA5G6KbPfeU+tbTT5fG\nP3ojRZ0ekR95pDw6/cD14MBvyum4uGXKeklM0NEkO3a02mmo4KCiwgoO3KaN9UqHm/vvlxKJ2bP9\n101j5qW+E2GDg759g3XdLQaWEBSWPuGbLo7uq19+aV2vKiqk0eUmm3hv4zekcZRh7P0EnQiOkpXC\nS6ydOXxukD6/uh12kADjzTeDfSk7dZLMyW2AGkWNCjd5MjBpkvwonQYT0oODoCUQ+XLKlGtr7e9d\nlEzALDm4+mrnrlUq4w4bHPTuDfz1r8Gm401zcBAmbU88Abz0UmHSE0XUz46CyWSkJM6c3yVsrwEn\n220HdOkSbhu/XgdR5wnxwuCgNKTwEmtnNoKKWqTVvr191sF8nX22TAV72GFSV+42fLEeHMTZwMyL\n04W9Xz+5MOUzAIweHGQy0svDqfi7kGPXq/SnMTi4+OLw25xwgkxokxYMDgrvwANze1wFGRG1EIIE\nB0GGRA9DBQdz5tjnmaF0SeEl1u644+yZTJgW14VUVSWTnfg1GtPniChWAzPzwn7ZZcEGfPETtPtj\nt24y78Fdd+V/TDdpDA6GD0/XqHtRqBKlsN1WKRzzjjzolNv5Ov10+3O/ktg2baxh5k09ekiPnLCW\nLQMefVS6e190kZSe6Ur9N1Qu4rzE7glgKoDlAJYAeBBAgOmTyps+LXEhvPGG1DXqzPHYb7hBxnQA\n8is5GDZM2k0MGeK9XiYjU7cGqR6IKsmW/OUsLW0fyl0hiuuDMEsv77zTe/3KSu/u415TPLtZtcre\n0NGcDv6mm2TuBkpWXF/RagDPAbgTwGEAOgF4AsCtAIbHdIyS5NcgKF/6oE7KiSdKQyOn1sn5ZKpt\n2qRrmmSK3w03yB1dv35Jp6S8JdXwM8jvv00bayCuigopIXAS9Rz82ldceqk8sgQhWXGVHKwHsD2A\nMQCaISUGHQDEOiag6hVQSoJMPV0Ibl2jzj1X7h78xjdIK5YYFFaXLhIA8n0urKSCgyDVcar0SF1v\n3Uo/o1btmV2fKZ3CfLztIAGA018NgLUAWgC8AeArSOlBbM1N1qwB/vnPuPZWPPkOixq3Aw6QEcrS\nlq6gfvUreeQMb1TKkgq+1HHPO899HXMSJ7cbDXZ7LW9hgoP9Acx0+PscUpWgHAIpOfgYwJSQx3DV\nvn1y9XT54A8oXqecIsWNhRwpk6iYJk0q/jFPPNH9tXvukUd9xsaJE3PXS2OjYIpPmOz2FQTL6Buz\nf78FsAjAbgCmO604cuRIdDX6F9bV1aEu3wkJCIA0Ntp116RTQUReBg8u3rFUhu6VsQ8cKI96cDBk\nCDBihH10Q7d97Lwz8Omn+aXTz7vvyhwkK1aEH9uh1NTX16PemARnxYoVBT9uXPfi2wB4GcABABZm\nl6mCX9fOMuPHj8eAAQNiSgKZvIoOiSh5xa5eCDJOiKpGUI0SlfPOk7FNFKdS0bfflqGbJ0+WKZhN\njz3m39vJzZo10svhzDOtKcVnzpQZWz/7LPgouKXG6Ya5oaEBtbW1BT1uXAVDswF8B2AcpCFiT0jP\nhX8D+CamYxARlZVit50JUnKgqm/NCd+uvtp5X7p995UGjarrtEmNLBtEY6N9vI0OHaQKZvhwe5fs\n3/1Ohr3n2BzxirPWaDCkJGIOgA8gAQPrB4iIHNx4ozWpWbGoDH+LLbzX++YbYNw473W82lM5vVZV\nFa73Vrt2VtuIzz6zlm+6qRUcNDcDH30k/8cxPwVZ4mzi9y2Ak3zXaoXOP1/GPSciUkaNKv4xTzpJ\nhi92Gkq+Tx9gwQL53y94ALwHzNp/f5nq/sorrWWVleG7dk+aBDz9tL2Kolcv52G+m5qAtm3D7Z/c\nlWD7/9Jzxx1Jp4CISLjNMfPpp8Dq1d7b3nWXBAWnnpo7Y66uogK44gpg9Gir7ULYkgPliy/sz6dO\ndR7pdcwYOdbYseGPQbkY
HBAREbp29Z+c7txzJUNevRo47TT/fVZWWsHBzjtHm/xu1arcZWom0+Zm\nK0AYM0YeGRzEgz1ViYhaoV12ibZdJiNBgtvgSLpOneTx+eflr6ICuPzycMd780331zh7aOEwOCAi\naoU+/NCqGmhoKMwxVO+EgQOtORquuw547z1rnUMO8d7H3LnurzU3F79RZ2vB4ICIqBWqqABefFHa\nGuy1V/77c8qkVXCgj26bydgbM+66q0zh7MZrvJ+1a3OXTXccco/CYnBARNRKdekC9O8fz76cZohV\nAx6ZQ9+bw597dYt0CgCUpUtzl+25p/v6FByDAyIiKoiLL5bGi2bmbwYHXoMyrVnj/triWOf9JR2D\nAyIiKohMxnkGWD04yGTcGxa2b+/d6PCii/JLH7ljcEBEREWlBwcnneQeAEQZFwGQgONvf7OeX3wx\ncNZZ0fbVWjE4ICKiyA45BNhyy3DbqDYIW28N/OhH7vMiRA0OABlZUbn5ZuDee6PvqzXiIEhERBSZ\nGpAoDFVyoIKEQgQH1dXRtyWWHBARUZGp4EA1VNSDg003tf6PMqKiwnkW8sPggIiIisoMDlSbg732\nAv73v9z18jkGRcPggIiIikpNnDRwoDyqkoMLLgB697bWyyeDZ7VCfhgcEBFR0X39NXDbbfK/ChIO\nOsgeEJiDJ4Vx333A669H3761Y3BARERF17evdXffv7/MrrjjjvaAIN+qgUmT8tu+NWNwQEREqaGq\nHID8Sg4A5+GVKRgGB0RElEpmycHZZ4fbfvXq3GUtLcAPPziv/9VXnAZaYXBARESpZJYc6Bn3I4+E\n3x4A/vIXoFMn4Isv7MsXLQK23x4YNy58OssRgwMiIkols+SgpcWqdjjsMP/tzQmfvv4aGDlS/l+y\nxP7aypXy+MEH4dNZjhgcEBFRKnkFB3rG37mz8/bmbI8qMACApqbcfZOFwQEREaXKyy9LN0SzWqCl\nxcrw9Yz/gw+AZ5/N3c+GDfbnjY3W//qojIsXA0cdJf/rDSJbM86tQEREqTJokDxOmWJfrgcHeslB\n377yZ1q3zv587Vrrf73k4O9/l8aIZGHJARERpZKZubtVK7jRSwoAYNUq6////Mf6Xy+FWL8+XBrL\nFYMDIiJKpTVr7M/NkoO77waOP959ezM4+P576/9bbwVmzQLeeQcYNcpa/thj+aW5XDA4ICKiVFqx\nwlP3FxgAABD2SURBVP68udkeHJxzDvDkk9brZtWAGRyY+1uzBrjwwnjSWm4YHBARUSotX25/7tbm\nQNl2W/vzxkZ7w0NzUKRVq5x7OpgNGVsjBgdERJRKqnvh44/LY3W1c28F3e9+B1x5JbD//hIc6NUE\nZnuCCROAmprcfejVDwAwdmzuoEnljsEBERGl0l//Ku0KTjwRuOkm4E9/cg8KlFtvBUaPBg48UEoA\nvLom3nsvsNVWucvfew+YN896PmoUMHhwtHMoVezKSEREqdSnj7QrAICLL5ZHv+BAadNGggO9+6IT\ns0cEABx5pDzqAyO1tqoGlhwQEVHJCDpIkQoOzCoC3SGHOE/O5KS1TcjE4ICIiEpG2JKD779336ax\nMbe7pEmVHugNG1sDBgdERFQyjj022Hp6cLDJJs7rNDbaB0ZyooIClhwQERGl1F13AcuW+a9XVQUs\nXQosWAD07Om8TmOjd7UDYAUFzc1SyvD88+HSW6oYHBARUcmoqgK6dfNfT83o+Nhj3sGBOTCSSS85\nuOgimaApSHBS6hgcEBFR2dGrAXr0sP7fbz/r//Xr7SUHL7zgvp8FC4A775T/TzklvnSmFbsyEhFR\n2dFnXdSDA713wuzZ9m0OPzx3P04NEV98UfbToUNeSUw1lhwQEVHZ0YMDfRREv94JJreGiOU+YiKD\nAyIiKjt6cKDPwxBXcBB2P6WG1QpERFR23IKDk08G2rUDamuBIUO897HLLu5TQjuNrFhOGBwQEVHZ\n0Yc71gdBGjfOen7FFVI9sO++wKJFufv49FP5c2JOB11uClWt8BCAlwu0byIiIk9uJQd6oNC2rTyO\nGZPbONEPg4PwTgdQB6DFb0UiIqJC6NvX+l8PDnTV1fLYrl34/btVK3zxhft8DWeeCbzxRvhjJSHu\n4GBnAFcA+BuAgNNjEBERxevcc62gwG1uBVVyECU4cCs52HFHGQdh9mygrs5egjFhAjB0aPhjJSFM\ncNAOwPYufzXZv4kAzgWwMN5kEhERBVdRARxwgPzvVnKgxj9o3z74fi+5RB6HD5dBlHRqkqZ33wWu\nvBKYOBH4+uvg+06TMA0S9wcw1WF5C4DjARwL4AUAkwEcmH/SiIiIolPVBm7BwQ47yKN+d+9nr72s\n/7/5BthuO+v52rXyqCZ9AmS451IUpuTglez65l8lgI4AdgdwWczpIyIiikTNr+AWHIweDdx0E7Dr\nrtH2P26cvXpBzfDYpo1VqvD22zK/Q6mJK6b5JYB+ABZnn7fL7nsZJGiY57TRyJEj0bVrV9uyuro6\n1NXVxZQsIiJqrTp1kke3NgcdOwIXXxxunxmtNd0dd0jDxwsvlOcrV8qjHhyoNgYnn5y7fRD19fWo\nr6+3LVvhN1tUDOIKDo4wnl8NYBCAn3htNH78eAwYMCCmJBAREVl23lkeKyuBq66yqhHitGyZlBjc\nfDNwzDGyTK9WUP7zn2j7d7phbmhoQG1tbbQdBlTI2hB2ZSQiosRsvrk8VlYC114bzz7NO/81a2Tf\nt9wCdO9uLTcbKx51VDzHL5ZCBQcxfQxERETRqDYHcTKDg8cek+mcAavXgxkYlCJOvERERGVJBQdu\nkydFsffe9ucqMACsxolNTe7tHFpKpEydwQEREZUl1Y0wruDgjDOArbd2f33ECHncsME9OHDy/ffA\npZeG61JZaAwOiIioLMVdchC0p8H69eGCg/HjgRtvBF5O0YxEDA6IiKgsqfENwgQHr70GPPmkDGj0\n0EP214IGB17VCk7UYE1uczIkgcEBERGVJZVBhwkOBg4Ejj9e5lsYNsx5f37Mbox+1H4ZHBARERVY\nlODAS9CSgw0b7AFCly6APt5fc7O9IaOar2HNmvzTGBcGB0REVJbiDg5qaoKtZwYHp5wiozEq994L\nbLYZsGiRvfcCSw6IiIgKLM7g4MorgWuukf/9RjvcuNEeHNTUAPO0SQTUTI0ffSQjNypq4ian/S1f\nHjrJeWFwQEREZSmO4GCffSQoGD3amqvh8MP9t9ODA3Mwpj595HHRImDSJGv5lVcCatqEF18E5s+X\n/0eNso++WAwMDoiIqCzFERy88w5w9dX2ZZmM/1TM771n/a+vO3cuMGOG/N/SArRta722cSMwZoz8\nf/jhwE+ysxOpkopiDqDE4ICIiMpS3G0OdBMn+q/Trh0wYUJuIHHPPfL42WfAtGn21/R1v/3WvuzO\nO4sXIDA4ICKislTI4OCEE/yrF379a+D0091LGcaOzV324otWAKDGaVDbX3AB8Prr0
dIbFoMDIiIq\nS4UMDgAr83aj2h24BQdO6XrvPeDVV+V/lX59+3XrwqUxKgYHRERUltq1k8cOHQqzf792Byojd1tv\n40bn5StXyqMKPvQgpFCBjqlQUzYTERElap99pH7/1FMLs3+/koNu3eTRL4jw27++fbHaHDA4ICKi\nspTJAGefXbj9+2X6++0XbD2TKh1wCg5YckBERJRibpn+W29Jl8XjjpPnfiUMJjUYklObA/ZWICIi\nSjE90z/4YOv//fYDTjrJeq5mXQzqhx+s/c+bB0yebL3G4ICIiCjF9ImYjjrKfb327cPtVw8O/vxn\n+2vFqlZgcEBERBRBU5M8nncecNFF7uupXhNBrVoljxUVucMmP/888Pnn4fYXBYMDIiKiCFRwcNhh\nkpHffDPQt2/uembJwVdfee9XLzkwg4O77waGDo2W3jAYHBAREUWgggNVMvCHP1gzLurMkoNtt/Xe\n75dfymPPnkDnzvmlMSoGB0RERHnwqzbo3z/c/p5+Wh779CnuZEs6BgdEREQRqF4IfsFBnz7A4MHh\n979xo/soioXG4ICIiCiCNm3kUZ922U3YHgsHHQSsWFG40R39MDggIiKKQJUcBBnHIGyPhe7dgVde\nCZ2k2DA4ICIiikAFBUGK/sMGB8WafdENgwMiIqII1NwJaoIlL17VCjvumLts5sxoaYoLgwMiIqII\nhg8H5swBtt7af12vkoNOnXKXzZ7tvv4ZZ/gfL18MDoiIiCLIZICttgq27pZbur921VXhjnv++eHW\nj4KzMhIRERXYOecAS5YAP/1p7mvHHpu77K23gP33L3y63DA4ICIiKrCKinAlBKo9Q1JYrUBERJSQ\na6+Vx3vuAe64w3/9W24pbHqUjP8qBTEAwLRp06ZhwIABCSWBiIgoGWq6Z3N4ZH15xiGHbmkBGhoa\nUFtbCwC1ABoKkT6WHBAREZEN2xwQERGl3LvvAp98UrzjseSAiIgohW67zfp/772B004r3rFZckBE\nRFRko0cDvXrlLv/zn4HvvpP/L7gA+M1vipsuhcEBERFRkV15pfPy3/62uOlww2oFIiIismFwQERE\nRDYMDgqsvr4+6SQUBc+z/LSWc+V5lpdyO8+nngLGjCn+ceMMDvYD0Axglfb3Soz7L0nl9kV1w/Ms\nP63lXHme5aXczvO444BLLy3+ceNskLgPgJcBHBrjPomIiKjI4iw52AfAtBj3R0RERAkIU3LQDsAW\nLq8tgAQHCwDMBNAZUqVwIYBv80gfERERFVmY4GB/AFMdlrcAOAkSBEwGcDeAagC3A3gOMslSs9MO\nZ8yYESatJWnFihVoaCjIvBipwvMsP63lXHme5aU1nGcx8s5CzsrYE8BiALsC+NR4rQ+AdwFsXsDj\nExERlatvYZXYxy6uBolbAvgdgCsBrM4ua5d9XOuwvqqG6BPT8YmIiFqTBShQYBCn9gDmAxgPoC2k\n1OAZAC8kmSgiIiJK1m6QNgfLAHwH4H4AXZNMEBEREREREREREREREREREZWVXgAmAVgOYAmAcQAq\nE01RNHsAeBHSvmIhgAcA9Mi+th+AtwH8AOBrAKcb254G4AtIz453IWNIpF0lZGCr+7Rl5XSe3QE8\nCGAppN3MUwB6Z18rp/M8DJLG72E1Iq7OvlYO57kJgC8B/Fhbls95VQL4E+Q3vhJy7dq0EAkPyek8\nTwDwAeSznQXgKti7q5fieQLO56r0AbAIcm66UjxXp/PcHcBLkHQuBHAz7CMbl+J5unoZchFuB6Av\ngI8gX+JSonpnXA3pDtodwLMA/glphPkdgPMgH+JPID9W9YEPyj4/APLhjYRkSGlvvDkaQBOAv2ef\nd0N5nefLAP4BGd2zI4AnAPwL5fV5tgewBsAF2eebA5gB4AqUx+f5I8jFtRnAwdll+Z7X1ZAMd3MA\nnQDUw3kwuGJyOs9aSAZxVPb5TpBA6PfZ54NQeucJOJ+rUgFJYxOAU7Xlg1B65+p0nj0hN9CjIOex\nNYDPISMPA6V5nq62h5y8Hr2cDGBuMsmJrB9k9Ec9Kj8WcmdyBuQD1N0JCYgA4GHIKJK6T5F7N5Mm\nh0CCuMdglRycCeAzY71SPc9aSKbZUVvWDcAuKK/zbAe5yxoBuZhsAeATyBglZ6K0v7fDAcyGXE/0\nC2zUz+9X2f/nAThFe61Xdv99Y0hzFMPhfJ7HQ+4SdbdCblqA0jtPwP1clWsgJbazYA8OSu1ch8P5\nPC8E8F9j3a0g4woBBT7POCdeCmIXSJHtQm3ZDMhFqnOR05KPzwH8DDJ0tHIigAbIOX5krD8D0tUT\nLq9/qr2eNr0B3AtgKGRAK3XO5XSe+0LSdjakiG4+gFuyj+V0nusADAFwbfb/uZDv8njIeXxorF9K\n5/k8gG0BPG4sz+fz6wJgM+P1xZBrWFLn7XaeTwH4g/a8PeQapcYR3hmldZ6A+7kCUgJ0MoDzHV4r\ntXN1O899IcH73ZDBjr4EMAyS6QMFPs9iBwedYI2gqKzJPnZE6boe8kM8D+7nqM6vo8/raVIB4CFI\nRvkR7MFQJ1ifnVKq59kdUre3PYA9s3+bQ+4uO6J8znNLAE8DuA5ADWRo850hwUKpn+ciOM/hks/3\nVK2TpvN2O09dR0j98moAN2WXeV2X0niegPu59oJUbw5FbpqB0jtXt/PsDikFeAtyA308gHNgVRUV\n9DyLHRyshlyUdOr5qiKnJQ6dATwJ+ZIeDIny3M5xZfZ/p9c7aK+nyaWQL9Md2ecZWFUp5XSejdnH\nkZB0LwZwOaT+NoPyOc/jIXcgtwDYALnLGA25+yqnz1P3A6Kd1ypYF1an7dN6veoHyUwqIHXS6hzK\n5TwzkBuWv0Dq0/XlSrmcayOkIe39ADZCSvZug5SYAAU+z2IHBx9DWvT30pbtDOAbpO+D8bMdpHVo\nRwB7QwIDQM5xF2PdnbPL1eu7eryeJsMgxXfLs391kEBoGaQkoVzO81PIxaWttkzNO/IByuc8neY5\naQKwHuX1vdXlc14rIFVL+uubQu7o0njeR0Eyk38D+D/YA7dyOc8tITdiV8G6Lm0FaUfyz+w65XKu\nn8Cao0jR50Mql/P8/14D8CgkUy3V3grdAMwBMAG5M1t2h2SeIwC0QW7r6EOyzwdlX09jq28398Hq\nrdAD5XOeVQBmQnordIB0KXoJ0mOhnD7PPpBzuRRyY7At5G7kJpTXeeqNuvI9r9GQ92gbSDHuRKSn\nxbd+nvtD7jSHu6xbyucJODdIVMwGiaV8rvp59oME9BdBGhDvBrmRVr2NSvk8HfWCNLxYAqlrGYvC\nTh1dCL+HfIg/QEo81J+K1GsBvA754L6A/YsLAL+ANIpaBeBNyAyVpUAPDoDyOs8+kK4+8yGZyX2w\nGsmW03nuDRmvYhmkhfR1sO5GyuU8zYwkn/OqAvBHyEV5BaThX8+CpDo8/TyfgZQCrTL+ntPWL9Xz\nBMIFB0Dpnqt5nvsCeBXye50H4DJj/VI9
TyIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIi\nIiIiIiIiIiIiIiIiIiIiorz8P1Q9VlO/iE+iAAAAAElFTkSuQmCC\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "# Plot the eval metrics over time.\n",
+ "plt.title('MNIST Frechet distance per step')\n",
+ "plt.plot(*zip(*frechet_distances))\n",
+ "plt.figure()\n",
+ "plt.title('MNIST Score per step')\n",
+ "plt.plot(*zip(*mnist_scores))\n",
+ "plt.figure()\n",
+ "plt.title('Training loss per step')\n",
+ "plt.plot(*zip(*loss_values))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "# GANEstimator\n",
+ "TensorFlow offers a tf.Estimators API that makes it easy to train models. TFGAN offers a tf.Estimators compatible `GANEstimator`."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "## Data input pipeline\n",
+ "`tf.Estimators` use `input_fn` to pass data to the estimators. We need one data source for training and one for inference."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "tf.reset_default_graph()\n",
+ "\n",
+ "def _get_train_input_fn(batch_size, noise_dims):\n",
+ " def train_input_fn():\n",
+ " with tf.device('/cpu:0'):\n",
+ " real_images, _, _ = data_provider.provide_data(\n",
+ " 'train', batch_size, MNIST_DATA_DIR)\n",
+ " noise = tf.random_normal([batch_size, noise_dims])\n",
+ " return noise, real_images\n",
+ " return train_input_fn\n",
+ "\n",
+ "\n",
+ "def _get_predict_input_fn(batch_size, noise_dims):\n",
+ " def predict_input_fn():\n",
+ " noise = tf.random_normal([batch_size, noise_dims])\n",
+ " return noise\n",
+ " return predict_input_fn"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "## Training\n",
+ "Training with `tf.Estimators` is easy."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "INFO:tensorflow:Using default config.\n",
+ "WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpg4AdEb\n",
+ "INFO:tensorflow:Using config: {'_save_checkpoints_secs': 600, '_session_config': None, '_keep_checkpoint_max': 5, '_task_type': 'worker', '_is_chief': True, '_cluster_spec': , '_save_checkpoints_steps': None, '_keep_checkpoint_every_n_hours': 10000, '_service': None, '_num_ps_replicas': 0, '_tf_random_seed': 1, '_master': '', '_num_worker_replicas': 1, '_task_id': 0, '_log_step_count_steps': 100, '_model_dir': '/tmp/tmpg4AdEb', '_save_summary_steps': 100}\n",
+ "INFO:tensorflow:Summary name Generator/fully_connected/weights:0 is illegal; using Generator/fully_connected/weights_0 instead.\n",
+ "INFO:tensorflow:Summary name Generator/fully_connected/BatchNorm/beta:0 is illegal; using Generator/fully_connected/BatchNorm/beta_0 instead.\n",
+ "INFO:tensorflow:Summary name Generator/fully_connected_1/weights:0 is illegal; using Generator/fully_connected_1/weights_0 instead.\n",
+ "INFO:tensorflow:Summary name Generator/fully_connected_1/BatchNorm/beta:0 is illegal; using Generator/fully_connected_1/BatchNorm/beta_0 instead.\n",
+ "INFO:tensorflow:Summary name Generator/Conv2d_transpose/weights:0 is illegal; using Generator/Conv2d_transpose/weights_0 instead.\n",
+ "INFO:tensorflow:Summary name Generator/Conv2d_transpose/BatchNorm/beta:0 is illegal; using Generator/Conv2d_transpose/BatchNorm/beta_0 instead.\n",
+ "INFO:tensorflow:Summary name Generator/Conv2d_transpose_1/weights:0 is illegal; using Generator/Conv2d_transpose_1/weights_0 instead.\n",
+ "INFO:tensorflow:Summary name Generator/Conv2d_transpose_1/BatchNorm/beta:0 is illegal; using Generator/Conv2d_transpose_1/BatchNorm/beta_0 instead.\n",
+ "INFO:tensorflow:Summary name Generator/Conv/weights:0 is illegal; using Generator/Conv/weights_0 instead.\n",
+ "INFO:tensorflow:Summary name Generator/Conv/biases:0 is illegal; using Generator/Conv/biases_0 instead.\n",
+ "INFO:tensorflow:Summary name Discriminator/Conv/weights:0 is illegal; using Discriminator/Conv/weights_0 instead.\n",
+ "INFO:tensorflow:Summary name Discriminator/Conv/biases:0 is illegal; using Discriminator/Conv/biases_0 instead.\n",
+ "INFO:tensorflow:Summary name Discriminator/Conv_1/weights:0 is illegal; using Discriminator/Conv_1/weights_0 instead.\n",
+ "INFO:tensorflow:Summary name Discriminator/Conv_1/biases:0 is illegal; using Discriminator/Conv_1/biases_0 instead.\n",
+ "INFO:tensorflow:Summary name Discriminator/fully_connected/weights:0 is illegal; using Discriminator/fully_connected/weights_0 instead.\n",
+ "INFO:tensorflow:Summary name Discriminator/fully_connected/BatchNorm/beta:0 is illegal; using Discriminator/fully_connected/BatchNorm/beta_0 instead.\n",
+ "INFO:tensorflow:Summary name Discriminator/fully_connected_1/weights:0 is illegal; using Discriminator/fully_connected_1/weights_0 instead.\n",
+ "INFO:tensorflow:Summary name Discriminator/fully_connected_1/biases:0 is illegal; using Discriminator/fully_connected_1/biases_0 instead.\n",
+ "WARNING:tensorflow:update_ops in create_train_op does not contain all the update_ops in GraphKeys.UPDATE_OPS\n",
+ "WARNING:tensorflow:update_ops in create_train_op does not contain all the update_ops in GraphKeys.UPDATE_OPS\n",
+ "INFO:tensorflow:Create CheckpointSaverHook.\n",
+ "INFO:tensorflow:Saving checkpoints for 1 into /tmp/tmpg4AdEb/model.ckpt.\n",
+ "INFO:tensorflow:loss = -0.492021, step = 1\n",
+ "INFO:tensorflow:global_step/sec: 3.10231\n",
+ "INFO:tensorflow:loss = -2.58632, step = 101 (32.223 sec)\n",
+ "INFO:tensorflow:global_step/sec: 3.0688\n",
+ "INFO:tensorflow:loss = -2.53612, step = 201 (32.586 sec)\n",
+ "INFO:tensorflow:global_step/sec: 3.11178\n",
+ "INFO:tensorflow:loss = -2.61272, step = 301 (32.136 sec)\n",
+ "INFO:tensorflow:global_step/sec: 3.06442\n",
+ "INFO:tensorflow:loss = -2.87477, step = 401 (32.633 sec)\n",
+ "INFO:tensorflow:global_step/sec: 3.09906\n",
+ "INFO:tensorflow:loss = -2.0928, step = 501 (32.268 sec)\n",
+ "INFO:tensorflow:global_step/sec: 3.09819\n",
+ "INFO:tensorflow:loss = -2.49558, step = 601 (32.277 sec)\n",
+ "INFO:tensorflow:global_step/sec: 3.09766\n",
+ "INFO:tensorflow:loss = -2.6964, step = 701 (32.282 sec)\n",
+ "INFO:tensorflow:global_step/sec: 3.09992\n",
+ "INFO:tensorflow:loss = -2.25868, step = 801 (32.259 sec)\n",
+ "INFO:tensorflow:global_step/sec: 3.102\n",
+ "INFO:tensorflow:loss = -2.59776, step = 901 (32.237 sec)\n",
+ "INFO:tensorflow:global_step/sec: 3.11788\n",
+ "INFO:tensorflow:loss = -2.98097, step = 1001 (32.073 sec)\n",
+ "INFO:tensorflow:global_step/sec: 3.09916\n",
+ "INFO:tensorflow:loss = -2.42849, step = 1101 (32.267 sec)\n",
+ "INFO:tensorflow:global_step/sec: 3.10695\n",
+ "INFO:tensorflow:loss = -2.0718, step = 1201 (32.186 sec)\n",
+ "INFO:tensorflow:global_step/sec: 3.08804\n",
+ "INFO:tensorflow:loss = -2.3412, step = 1301 (32.383 sec)\n",
+ "INFO:tensorflow:global_step/sec: 3.08785\n",
+ "INFO:tensorflow:loss = -2.65558, step = 1401 (32.385 sec)\n",
+ "INFO:tensorflow:global_step/sec: 3.11728\n",
+ "INFO:tensorflow:loss = -2.94126, step = 1501 (32.079 sec)\n",
+ "INFO:tensorflow:global_step/sec: 3.11366\n",
+ "INFO:tensorflow:loss = -2.15374, step = 1601 (32.117 sec)\n",
+ "INFO:tensorflow:global_step/sec: 3.10287\n",
+ "INFO:tensorflow:loss = -2.74035, step = 1701 (32.228 sec)\n",
+ "INFO:tensorflow:global_step/sec: 3.09625\n",
+ "INFO:tensorflow:loss = -2.60946, step = 1801 (32.297 sec)\n",
+ "INFO:tensorflow:Saving checkpoints for 1856 into /tmp/tmpg4AdEb/model.ckpt.\n",
+ "INFO:tensorflow:global_step/sec: 2.94229\n",
+ "INFO:tensorflow:loss = -2.44058, step = 1901 (33.987 sec)\n",
+ "INFO:tensorflow:Saving checkpoints for 2000 into /tmp/tmpg4AdEb/model.ckpt.\n",
+ "INFO:tensorflow:Loss for final step: -2.18134.\n",
+ "Time since start: 11.023208 m\n",
+ "Steps per min: 181.435391\n"
+ ]
+ }
+ ],
+ "source": [
+ "BATCH_SIZE = 32\n",
+ "NOISE_DIMS = 64\n",
+ "NUM_STEPS = 2000\n",
+ "\n",
+ "# Initialize GANEstimator with options and hyperparameters.\n",
+ "gan_estimator = tfgan.estimator.GANEstimator(\n",
+ " generator_fn=generator_fn,\n",
+ " discriminator_fn=discriminator_fn,\n",
+ " generator_loss_fn=tfgan.losses.wasserstein_generator_loss,\n",
+ " discriminator_loss_fn=tfgan.losses.wasserstein_discriminator_loss,\n",
+ " generator_optimizer=tf.train.AdamOptimizer(0.001, 0.5),\n",
+ " discriminator_optimizer=tf.train.AdamOptimizer(0.0001, 0.5),\n",
+ " add_summaries=tfgan.estimator.SummaryType.IMAGES)\n",
+ "\n",
+ "# Train estimator.\n",
+ "train_input_fn = _get_train_input_fn(BATCH_SIZE, NOISE_DIMS)\n",
+ "start_time = time.time()\n",
+ "gan_estimator.train(train_input_fn, max_steps=NUM_STEPS)\n",
+ "time_since_start = (time.time() - start_time) / 60.0\n",
+ "print('Time since start: %f m' % time_since_start)\n",
+ "print('Steps per min: %f' % (NUM_STEPS / time_since_start))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "## Evaluation\n",
+ "Visualize some sample images."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "WARNING:tensorflow:Input graph does not contain a QueueRunner. That means predict yields forever. This is probably a mistake.\n",
+ "INFO:tensorflow:Restoring parameters from /tmp/tmpg4AdEb/model.ckpt-2000\n"
+ ]
+ },
+ {
+ "data": {
+ "text/plain": [
+ ""
+ ]
+ },
+ "execution_count": 16,
+ "metadata": {},
+ "output_type": "execute_result"
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAWQAAAFgCAYAAACMigM2AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsnedznFd25n+dcwTQjUYjByIRgSQIMA45osJY4xnbsnc9\nsne/uMrlf2U/b9WWq7Zqp7Zqw9TseOWZKdnWzEgaiklMIEEi5ww00OjcDXRE7wfuvWpQlMSA0CL7\nqUJREgXgvv2+73PPfc45z4ESSiihhBJKKKGEEkoooYQSSiihhBJKKKGEEkoooYQSSiihhBJKKKGE\nEkoooYQSSiihhBJKKKGEEkoooYQSSiihhBJKKKGEEkoooYQSSiihhBJKKKGEEkoooYQSSiihhBL2\nH4qjXsCzoFAo8vl8/qiX8cpQKBS8DtdRQgklvDr+Px98K+cWJSEDJRYroYQSXkd8K+cqD2sVJZRQ\nQgklfDtKhFxCCSWUUCRQH/UCSjg8KBQKFIonJ6bd3d0jXk0JJZTwNEqE/AZAkLBSqUSlUpHP5+VX\nCUcDcU9K96CEQpQI+TWGUqnEZrPh9XoZGBigpqaGnZ0dJicnuXbtGqFQqEQIhwiFQoFKpcJisdDQ\n0IBOp8Pn8xGLxUilUqRSKTKZTNHek9ImcvAoEfIRQqFQoNFoUCqV5HI5crncK0euKpUKlUqFVqtF\nr9fj9Xrp6uriZz/7GT09PQSDQf7whz8wMTFBIpEgnU6XXrBDhEajwel0cuLECRwOB5OTk6yvrxMM\nBgmHw4TD4SO7H0LSEicppVKJWq1Gq9WiVCpRKr9KOT1rjeK/JZNJdnZ22N3dLUljL4gSIR8htFot\nx44dw2634/P5CAQCxGIxMpnMS/08hUKBw+GgsrKSjo4O2tvbcTgcVFVVUVdXh1arxWg04vF46O3t\nBWBubq6oo7LXCUqlEqPRSGVlJX19fRw/fpxoNMrCwgKjo6M8ePCAwcHBIyExpVKJRqPBaDRis9mw\n2+04HA7q6+tpbW3FYrFgMpmAZ0fK+XyeXC5HOp3m7t27/OEPfyASibC9vV2Sx14AbywhFya4BMRD\ncxgPj06nw+l00t3djdfrZXh4mHw+TzKZfClCNhqN2O12GhsbaW1tZWBggBMnTpDP51EqlaRSKTY2\nNlAqlTgcDnp6eojFYqytrcnovISDhVKpxGq1UlVVRVtbG/39/ahUKubm5jCbzQQCAYaGhshms4dK\nYAqFAr1eT01NDR6Ph/LyclwuFy6Xi87OTk6dOoXT6cRisex5Z54m2mw2SyqVwuFwsLm5ydraGoFA\ngEwmQyaTIR6Pk0qlDu26ngWTyYTJZEKv18vIXzRwCU54mhvEdYrri0QiJBIJdnd39/0+vZGELI5f\nKpVqz24vjlgH8UEXQqFQ4HK5aG1t5eLFizQ0NKBQKNje3iYQCLC9vf3CP7OhoYFLly7R3d1NR0cH\nVqsVvV7P5OQkc3NzrK+vo9fr6evrw2q10tHRwebmJsPDw2Sz2Zf6nUeJ76OeqVar8Xg8NDQ0SHLb\n3d1Fp9Phdrux2+17ZIHDgkqlorKykg8//JCzZ89iMBjQ6XTodDrsdjtlZWVotVpJXCIaLpQk1Go1\nSqUSrVZLT08PGo2GxcVFlpaWiMfj+P1+Hj16xNLS0pHcM0GyTU1N9Pb2UldXh9frRafToVaryeVy\ncv0ajQa1+gk17u7uksvlyGazhMNhVlZW+PzzzxkeHmZnZ4dsNruv63ytCFnoX+JLPDBGoxGTyYTF\nYsFsNmM0GuUHrlKpUKvV8kHb3d0lk8mwsbGB3++XOut+Q61Wy4deo9HIHfhFH1atVovFYqG1tZXL\nly/T0dFBXV0dMzMzDA0NMTw8zPj4OIuLizidTrxeL06nk4aGBlZWVqipqSGTyRwKIT8ddbzI96nV\naoxGI9XV1TgcDlktks1m2djYYGFhoaijfJHQq6yspKGhAbPZvKe1vjAqO+zoWKFQYDAYaGpq4tSp\nU5hMJlQqlQxM8vk84XCYSCRCPB6X74R4XpVKJXq9HrPZjM1mw2AwcPLkSbxeL42NjcRiMVZWVlhf\nX2dpaenQru3p61Sr1TJwaWlpoa6uDp1OB0AkEmF3dxeDwSC5Qa/XYzAY5AYUCoVYWFhge3ubTCbD\n7OzsvifGXytCFsksjUaDVquVD01NTQ0NDQ10dHTQ3NxMdXU1er2e7e1tNBoNFotlT2SSTCb5t3/7\nN7744gtmZ2fZ2tra97UGg0FmZ2e5c+cOc3Nz3L59m6mpqRcmRqvVSmtrKz09PRw/fpzy8nJyuRx/\n/OMf+d//+38TiUSIRqMkk0mam5tJp9Po9XpcLhcNDQ0cP36c7e1t1tfX9/0an4bYMF/kBCK+x2Aw\n0NDQwM9+9jP6+vrQarXs7u6STCb5l3/5F37+85+TSCQO+ApeDYKQ6+rqMJvNKJVK8vk8qVQKn89H\nKBQ6siRYKpVieXmZ+fl5amtrMRqNZLNZKTeMjY0xPDzMzMwMi4uLkpRFVGkymWhoaKC7u1tGn06n\nE6fTSTabxeVyce3atSO5NviKG2pra+nv76esrAy73Q5APB5na2uLaDSK2Wwml8uRSCTweDw0NjZK\nTqmoqECr1fLee+9hMpn46KOPiEajMhm/H3gtCFns8mazmfLycvklKg7q6+upr6+nqamJ6upq7HY7\n2WwWn8+HyWSitrYWvV4vZYxMJkMymZSabjgc3tcPPZ/Ps7Ozw+bmJo8ePcJkMsnd9nmjPHHNFRUV\n9Pf309vbi9vtZn19ndHRUW7cuMHQ0NCeY2UqlZJHM4PBQEVFBY2NjczOzu7LdT1LfwOoqKjA6/UC\nT46AkUiEWCwmy71ElKVWq3E6nVRWVmK327Hb7fLzMJvN1NXV8YMf/ICuri5JyKlUipmZGWw2m9T4\nihVqtRqXy0Vtba1MkOXzeQKBAMPDwywuLh56lC9OhdFolMHBQdLpNB6PB4PBQDablaQ8OzvL5OQk\nS0tLrK2tyeO6IDqDwUAymcTtdlNZWYlOp8NoNKLValleXsbv95NMJg/12grhdDqpra2VAZlOp2N3\nd5fJyUmmpqaYnp4mEomg1+slIbtcLurq6jh+/Djt7e1otVqcTifHjh0jEAhgs9lkhdR+4bUhZPEy\nt7a2ygqDsrIyHA6H3A21Wi35fJ5oNMrKygrj4+OUl5djs9mAJ4k2seP39/djNpuZmZlhZmaGZDK5\nrx98NpslFosxNjaGUqlke3v7hUhfXLPH4+HSpUv09vZiMBgYHBzkH//xH1laWvrazxMvn/h+i8Ui\nN6hXxdPlUuL3AbS2tvLee+9JrXphYYH5+Xnm5+fx+/3AE7IymUy0tbVx6dIlOjs7aWtrk0lOg8Eg\nN1yxeYrvs9vtVFRUkEwmi56Q3W43NTU18iiczWZZW1vjxo0bTExM7Lsm+TzY3d0lGAzy2WefcevW\nLZnsKmwgSqVSJJNJ0uk0mUxGnnIKt
eR0Oo1Go8FkMmG32yXpTUxM8Pnnn7O2tnZk+nF1dTWXL1+m\ntbUVo9EIQCgU4uOPP+bf/u3f2NzcJJFIyOsW2r7RaOTv//7vaWhokHKG0+mkoqICvV4vA5BShFwA\ncQTv7u7m8uXLNDc3U1VVhV6vR6VSsba2xuPHj0kkEsRiMaLRKFtbW6yurmK1WhkbG8NsNqPT6Wht\nbaW5uVlqYhaLBaPRSCaT2VdCFsQojn0vGoFbLBba29sZGBigsbERh8OBUqkkHA4zMzNDPB7/2s/L\nZDKEQiEikQi5XE6WYFmt1n27LhF9i6SiRqPh2LFjtLS0oFKpJOHEYjHi8bg8xXi9Xnp6eujs7JQ6\neE1NDYlEgng8zs7ODqlUimAwSDqdxmazodVqUalUuN1uTp48KYmlGCFyFUKXVKlUbG9vs7GxwdLS\nEpubm8+8Z4cBsTGIOmh4vqSpKJWrrq6mv7+fEydO0NPTg9frxWg0srS0xPT0NLdu3eLhw4cEAoFD\nuR4BhUKB1WqVz8eFCxdoaGggn88zMjLC4OAgt2/fZnJy8pm5IhFgDA4O0tjYSFdXFw0NDVISBfa9\nAOC1IGShXw0MDPDTn/6U8vJyFAoF8XicYDDItWvX+P3vf8/a2hqhUIhsNit3dpFs0Wg0aDQa/vIv\n/5Kf/vSnVFZWkkwm5Y5/EMk9hUIhs7ziePi8cDqdXLp0icuXL+PxeNDr9TJKSSQSzyydS6VS+P1+\nAoEA2WwWg8GAy+XCbDa/8rWISEqhUGAymaiurqasrAyz2UxzczMulwur1YparZbJncrKSiwWCwaD\ngf7+fv72b/+WmpqaPfdDJJfW19cJBAKYzWYqKirk3wN4PB4uXLggj/7FCEFehZU9iUSC2dlZqcke\nRXT8TfgukhHvjV6vp729nX/4h3+gt7cXnU4nT0kTExP8+te/ZnBwkPHx8UOVY0TkWlZWxokTJzh3\n7hwXLlyQ2viNGzf45S9/yfz8vNyEnoaI/B89eoRKpcJgMFBfX7+naGC/r+m1IGS73c6pU6fo6urC\narXuaQ2ORCI8fvyYubk5otEoOzs7zyxUF7uhz+djZWUFm82GXq+XWmYoFNr3dYuk1IvoUIX6b319\nPVVVVahUKuLxOLFYjEgkQjabfWZyKB6PMzExQV1dHadPn8ZkMqHVamVW+VUhIi1R69zY2EhdXR3H\njh3D6/WyvLzM3NwcExMT+Hw+dnd3sVqtsu5Vp9PJ4/HExATT09P4fD42NzcJBoNsb2/LZpof//jH\nMgLf2Njg7t27LC8v78t1HASEXm+z2eQRNxwO8/DhQ8bGxtjZ2TnqJT43VCoVdrudmpoazpw5w4UL\nF2hqapKJylwuRyaTYXl5mcHBQdbW1l662ell8bSMWVNTg16vJ51OEw6HWV9fZ3V19blOJW63m46O\nDpxOJ5lMhs3NTZaXlw+k6eW1IeSTJ0/S1tYmjxj/+T//Z7a2tl4o857P54nFYmxsbNDS0oLD4cBm\ns2Gz2V6JtL7p+CeShi8ClUqFzWbD4/FQW1uLy+VCoVAQDodZXV0lGAx+o/whCLmxsZFEIoHJZJI1\nl/uhgwlChieSSl1dHadOnaK6uhq3283g4CBXr15lbm4Ov9+P0+nEZrPJSHpnZ4dQKCS7vX7729+y\nsLDA6uoq8JVOffr0aU6cOEF1dTVKpZKVlRW++OIL1tbWXmn9BwWFQoHb7aa1tRW73S7rj0OhEA8f\nPmR8fPxIE14vAtHuL2SA//gf/yOnT5+WzxA8keHi8TiLi4s8fvz4SNcpCFkELtFolM3NTXlS/Lac\ngwjS6uvrGRgYwO12k0qlWFlZYX5+/kAkpteCkEUdZCgUYnx8nEePHhGNRl9IYqiqqqKpqYnLly/z\nwx/+kPLycvlQzc3NvXSd7n6PcTKbzVy6dIkrV67Q2NiITqcjnU4zMTHBJ598wv3797/x9wnCFKQp\nWmXF0X8/kM/nZeJO6G3b29vMzMwwPDzMgwcPCIfDbG9vE4vF8Pv9LC0tMTExIRNKu7u7LC4uMj09\nLe+jUqmU0pRofNnd3WV7e5t4PC5rQ4sN4ujc1tbGlStXZLVJPp+XxCV8H4odoqGppqaGCxcucO7c\nObxeLyqVas//d+/ePX79619z48aNI1rpV0niiooK6urqKC8vR6lUkkgk2NzcJBqNyuTks6BQKGhu\nbqa3t5dz585RX1+PxWIhmUwyMzPD+Pg4sVhs/9e97z/xCCB2skAgwK1btxgdHSUejz/XCyqSLfX1\n9fzgBz/g/PnznDx5ko2NDSYnJ1lcXGR1dfWlSPXp8q9XhagoGBgY4PLly7jdbrLZLH6/n8ePH/PJ\nJ59851oLtV6RZNpPQoYnEVIkEmFnZwedTsf29jarq6tMTU0xMTHxzESI0CQL/ZoLZRxhciNaw61W\nq6xUEQRfjIQs9PCWlhYGBgaw2+1SqipMWBYrIYtTidFoxGw209LSQmdnJ5cvX+b06dMy4helnLFY\njC+//JL/9t/+25HWhWs0GhwOB263G6/XK7sg4/E4Gxsb0jOm8DkUpZc6nQ69Xk93dzfvv/8+PT09\nVFVVkc1mCQQCzMzMMDk5STwe3/d1vxaEnMlkiMVisq5XlHw9D4S3wJkzZ/jxj3+M1+slnU5z9epV\n/vVf/5WZmZmX1on2MzJWKpW4XC4aGxupqqrC4XCgVquZm5vjo48+4saNG2xubn7nEcxgMMhyHeBr\nf+7Hmu12O729vXR0dGCz2VhfX2d9fZ1oNPqtWelcLvet8o5Go8Hj8VBTU4PFYiEajTI8PMzc3BzJ\nZLIoSc1isVBRUUFFRYVMau7s7MgmjHA4LGuxixGi6ePChQu89dZbss64qqoKg8EAPEkWZ7NZhoaG\n+Pjjj7l58+aR3w+RTG5oaJDPez6fJ5FI4Pf7pf4rIPoYhLQkGq06OjqkLBgMBpmbm2N6elp27O03\nXgtC3tnZYWlpiVQqJY+43xWdiojM5XLR09PDyZMnOXHiBAqFgkgkwvDwMFevXiUWix35y6JUKtHp\ndDQ1NdHX10dNTQ0mk4l8Ps/6+jrXr19naGhIdg19E/R6PR6Ph8rKSlmTfRC+HXa7na6uLlpaWjAa\njUQiESYmJtja2vpWOaXwz0KIY79Op6Oqqgqv14vBYGB9fZ2HDx8yMzNDOp0uKkIWz5/b7eb48eOy\nDBOePK8rKyssLy8Ti8WKqrriaZSVldHc3Mzly5f54IMPZHloIpEgHA7LRHkmk+H69ev88z//Mysr\nK0fuIKjX66mqqsLlcqHRaCQhi6qpuro6otEo8NXzZbPZqKmp4cSJE5w/f56qqirsdrusgpqfn2dw\ncJCZmZlvfZZfBa8FIQcCAW7cuEFlZSVdXV2o1WoCgcC3ko3QmFpaWnjnnXfo7OyUpSyigeFVrDD3\nE1qtFpvNxoULF/jggw+ora0FnrR4x2IxgsEg8Xj8OwnJ4XBw5swZTp48icVikRUNwpNgPx4w0X
DS\n0tIik26Li4t8+umn+Hy+l/65KpVKWodWVlaiVCrx+XzcuXOHqampoiM1seG3trbyZ3/2Z7S0tKBQ\nKKRviM/nY319vagbWRQKBR0dHfzt3/4tXV1d2Gw2NBoNuVyOpaUlRkZGuHPnjjQM8vl8+Hy+ooj4\nC42CCn2eGxsbsVqt9PT0EAqF9shkhU0tFRUVGAwGtFot2WyWeDzOrVu3+Oijj1hcXDyw63stCDkW\nizE+Ps729jbNzc1SB4InWqTQ8fR6vSzMF9ppQ0MDDQ0NOJ1O4EltqPAlLoYHC5AlZMePH5cbRzQa\nZWJigsePH8tmiW+DsH5sb2+XjS+hUIjZ2Vl8Pt++XmcikWBxcVFGJSKqeBWfZ71ej8ViwWq1olQq\nWV5eZmJigpmZGfx+f1FFx/Dk5TYYDHg8Htra2nA6nbK6Qmjqa2trRVtdIexc29raOHPmDG63W0oU\n2WyWdDpNMBhkfHyc0dFRMpmMnHpSDPdiZ2eHtbU15ubm8Hq91NbWUl5ejsPhwG6309DQILmh0MUO\nvj57UuQ/BgcH5bUeFF4LQk4mk2xsbLCzs4Pf7yedTktHt2w2K53ePB4PHo9H2u4lEglqamqksfbu\n7i5bW1vMzc0RiUSKgowBamtrOXPmDF6vF7VaTSaTYWVlhX/6p3/iiy+++M4OKFECZLFY8Hq9uFwu\nVCoV8/PzfPzxx4yOju7bWvP5PLOzs/zX//pf5Qvs9/tfyeNXbCZOpxO1Wo3f7+f69evcvXtXdh0+\njf2ubnlRCL/rQlN30Wq8vb0tJYtiJWSn00lvby9tbW2yK1Jcg6gzr6qqwu12s7q6KssViwXBYJCb\nN29K7+Jz585x9uxZqSeLMlZxTWITKbQWED4e9+7d47e//S0jIyMHPmHntSDkXC7Hzs6O/FOU4Ygk\nlvC0UKvVJBIJtra2ZMZVr9cTiUTk0TEWi7G+vl4UzmFip66rq+PMmTNUVVWhUChIp9MEAgGmpqZk\nQuvboNfraWhooLOzk8rKSkwmE7u7u/h8Ph48eLDvDRXxeJyZmRn574UP/MtAtMAKX14RYfp8vqId\nQWWz2WhqaqKqqgqLxbLHYnVnZ0faWRab1CIgjumpVEq2t8NXDRd2u526ujpOnjwJwNTUFOvr67Jb\n9KijZNGVOjExsadOXzRDFdrtik2mvLyc1tZW6W2Ty+VIJpMsLy/z+PHjQzmJvRaEDF/VdWaz2T3u\nb8KQxul0srS0xNTUlBwmubu7K70gREukMIkvhshFlPMJQrbZbOzu7sryomg0SiKR+E5CMplM9PX1\nceHCBSorK1GpVKRSKQKBANPT09LgZ78gIsH9qtxQKpVS19PpdJLUvi2Tf9Qk7XQ66ezspLGxkbKy\nMvR6Pfl8XjoJbm9vH3klwrchFAoxMjJCd3e33DQKtVgxnUalUuF0OlGpVJLAhC3BUSOfz7O5uSmT\n9L/5zW8kJ4iySrGh6/V6+vv7+fu///s9Lm7C7XFzc/NQPMNfG0KGr4gAnjw8oi15fX2dSCSC3+8n\nFApJG0p4kiyqrq6WGt/i4iK3bt16pQTUfkHoeMKRTqPREIvFuHnzJp999pmMSL4JKpWKqqoqaULU\n3d2N1WrF5/Nx7949eeQ/KE1s3xyw1Gq8Xi8tLS2YTCai0ag0Sy+GF/9pCM3b4XBgNBqlM52omw4G\ng8RiMXmqK0ZkMhmi0SiPHj3iF7/4BXV1dVRVVVFVVUVFRYW0cXU4HLS2tsoEps/nY2ZmhomJCcLh\n8JEZJgmIRijRBSqGQojoWLT6W61WabMLT6SLubk57ty5w+jo6KF5jbxWhFwIUaieTCYJhUJ7hPtC\n8V4kLjweDwqFgunpaT7//POi0MPMZrMsahfNG+FwmN/97nf85je/kWU73wTRkHD+/HnOnDlDe3s7\nCoWCoaEh/s//+T88ePBAensUM9RqNbW1tbS3t2M2m9nc3CQcDhOLxYqO0EQUKTL2Io8hjvyRSESa\noRfDKeybIKSHBw8eMD8/T09PD6dOnZI6rCA0rVZLfX09LS0tZDIZEokEn332Gel0mvn5eSn9HfUz\nJoyCstns1+RI0R1amMzL5XKMj4/zi1/8gtnZ2UPzGnltCVngm8q5Cv171Wq1zH4Lz9ejjrwUCgUt\nLS385Cc/4fjx4ygUCnZ2dohEIjLy+LYdu6Wlhfb2dn7wgx/Q399PVVWVzDyPjY2xsLAgSwOLGcIl\nraysDLfbLWt5U6lUUerHotvL4XDQ0NCwpzohkUgwOjrK7du3D2QKzUEgmUwSDAYZGxsjHA4zPT2N\n1+uVSfLu7m5ZFy9kwjNnzmCxWBgZGWFkZITx8XEWFhaO+lKAvRuDKE2sqKjgrbfe4q233sLpdEop\nIxgMsrq6eiAt0t+E156QvwmFdYpKpVLu7kIDO8oXXezUzc3N/Nmf/RkVFRUoFF8NQY3FYnKiydMQ\ntpSdnZ289957XLhwgY6ODnK5HOvr64yNjTEyMsLy8jLRaLToCO1piFl6whRc5AgymUxRErKIjh0O\nB/X19dLFLp1Os729zejoKHfv3v3eEHLhxOjZ2VlUKpVs0BESWGVlpWzDBzhx4gS9vb08ePCAmpoa\ndnZ2ioaQC6FQKNBqtXg8Ht5++20uXbokHd2i0SjBYFDO1TwsvLGELDx7hQtUIBBgfn7+wDpwXgQi\nKiycfpvJZBgfH+fmzZtsbm5+Y9R/6tQprly5QldXFx0dHXg8HlKpFEtLSwwNDfG73/2OwcHBouhA\nfB7U1NTQ2dkpNyWxZnG6KTaIqEvoxiLaSqfTMkH0PHXjxQohZfj9foaHh8nlciwsLPDTn/6U+vp6\n6fqmUCiorKyku7ub69ev75lAUgwQG2d1dTXHjh2T3twKhYKtrS0mJiZYXFw89NPyG0vIopuspqYG\njUbD5uYmc3NzB+J7/KIQ0XthBJ/L5VheXmZ0dHTPGsXDbzQasVqtnDlzhr/5m7/B7XbjdDpJp9Ns\nbW0xOjrKrVu3+OKLL5ifny+aF+O7UFtbS19fH263e894nWKFMDLX6/VyM83lckSjUWn5+E21098H\niNI9MRdxZWWFSCQiPSPEIFBRfZHP5/cYEBUDRBu+3W7n2LFjHD9+nMrKSoxGI7lcjkAgwKNHj1hc\nXPxWR7iDwBtLyA6Hg76+Ptrb29Hr9aRSKUKhUFEkWkSErNfrMZlM6HQ6MpmMnFRQOMdL+Fx0dXXx\n1ltvce7cOflwZbNZVlZWGB0d5ZNPPuHWrVv4/f6ieTGeB01NTVy8eBGv14tSqSSbzZJMJqXWX2zX\nYjQa8Xq9eDweWWWhUCiYnZ3lyy+/ZGVl5cAkscMmPVFqOjc3xy9/+Us2NjZ4991399S6ixK4YrpP\nYsZeW1sb77zzDmfOnKGyslIm9Px+P4ODg8zPzx/65v9GErIwEmltbaW2thaVSkUkEmF+fr4oImTR\nVSfIWGTqxXgmUXIETyoxhEHS22+/TWNjI0ajEb/fz
/r6OpOTkzx69Igvv/yS8fHxonoxngcul4uW\nlhZ5nNzd3ZVtukdtYPMs2O12Ojo6aGpqwmKxoFar5WRsUU5ZzBH+i0CUmQYCAQYHBykvL+fMmTOU\nl5fv8XsWG2cx3CsR7DQ2NnL27FlOnz5Ne3s7Go2GnZ0dgsEgCwsLTE5OsrGxcehrfuMIWXQaGY1G\nysvLMZvN0izl1q1bcjrFUUKY6Iijnoh8tra25OgYoaG63W4GBgY4ceIEbrcbtVpNNBrlD3/4A598\n8gnr6+tyDFKxvBQvi8LuKjEXsViuR9wnj8fDlStXOHHiBHq9XhpAzc7O8vDhw+8sVXwVHMVnIa5b\nrVbLvIeQlqLRKGtra0dei1wIISkdP36cH/7wh9IAK5fLyVFg9+/fx+/3f2Pi/CDxxhGyVqvF6XTi\ndrtlj346nSYUCrG8vHwgptMvCoPBINdXaCMqZvxls1lZddDS0kJ/fz9ut5v19XWmp6cJh8N8+umn\nfPHFF7IB4fsGUZYoJvwKJz7RoXjY2t53QRgJdXZ2cvz4cTlJY319neHh4SOLuAohJK9CHf7pMrDC\n5+27Pl/hwFddXU1vby+dnZ1YrVbZtTc3N8e1a9eKIsgRqKqqoqWlhe7ublpaWmSTSCKRYGlpiTt3\n7jAyMnLfbB9MAAAgAElEQVRktqhvHCEbDAYaGxtpbGzEbDaTzz+ZaydsKIvhJRfJEavVKqNjhUJB\nfX09/f39+Hw+1Go1fX19NDc3U1lZydLSEp9//jmjo6NMTEwQCAT2dB7tBw5ToxT14Xq9Xo6ZEubi\n6+vrRaH1F8LpdHL+/Hnpoyu049HRUX7xi18wNjZ25KWUwvUwl8t9zXJVbIAiR/F0E9Wzfp5Op8Pt\ndnPq1Ck+/PBDuru7KS8vB55Mjbl//z7/83/+z6KQAeHJmru6uvjggw/o7e3FarUCT2qtA4EAs7Oz\n3L9/n6mpqSOrgnnjCFmr1eJyuXC73TJZJhyhiqUHf2dnh42NDfx+P5FIRBbdC++AcDiMWq2mpaWF\nfD7P+Pg4Q0ND3L17l7m5OelP+32GXq/HarVK1z6FQkEulyMWixWdqbvwdhClhiaTiWQySTQaZWZm\nhqGhoaIhJREhi2O6KP+y2+10dnbKjlVhwpVMJslkMtL5rKKiAqfTicFgwGKx4HK5aG9vp6urS/pU\nj4+Pc/fuXW7evMny8nJRPIsiQd7Q0EB3dzdut5t8/snk76WlJe7du8fNmzdZXFw80pb8N4qQCyfR\nOhwONBqNlCtE51sxPDzhcJjx8XFaW1tlVYTVaqW5uZnGxsY9FoF37tzhV7/6FUNDQ2xubh6o7nWY\nn43RaMTlcsnNqNApTcygK4ZSKqGfiiSxGOy6tbXF4uIiCwsLrKysHPlG//REFiFPKBQKDAYDdXV1\nfPjhh5w9exaFQoHP52NsbIytra09g2RPnjzJ8ePHqaiowG63o9PpMBgM0rNjd3eXu3fv8p/+0386\ncommECaTiaqqKqqrq/F6vRiNRtLpNMvLy9y7d49/+qd/Ymho6Mj9Ud4oQoYnO6XX66WqqgqdTkc4\nHGZjY0PWhhbDA5RKpdja2uLx48d8/PHHtLS00NjYiN1uR6/XEwgEpInLgwcPGBkZkX7QR/3i7xdE\n52Q6nSaXy8njtsPhwOl0SjvLYkChmbnYMB49esTnn3/O0NBQ0UTzuVxOVqaIUjSVSoXNZpOTWCor\nK4En0p7BYGB7e1u6nqVSKerq6qiursZoNKLX61GpVNJ2YH5+ngcPHvD73/+e1dXVopCVxHPT1NTE\n+fPn6ejowGw2o1AoiEaj3L9/n88++4zZ2Vk58/Eo8UYRsnDhqq2tpaamBr1eL+WBcDhcNPWSwqdh\naGgIv99Pf38/AwMD1NXVYbPZGBsb4969e3zyySfMzs4e2rr3cxDqd0F4KAgpSaVSSW3d5XKh1WoP\nfA3Pg8IqmO3tbWnUfu/ePf7H//gfRSNVwFdddgJCNxbTmc1mM2q1mnw+j9PppLy8HI1Gg06nk+Y8\nogtRVLns7u6SSqWIRCLcuXOHf/zHf2Rubu5rQ0SPCqKiqrW1lT/90z+lubkZg8Eg52/evn2bP/zh\nD0XjvPfGELLoWzebzTidTumgJnxfV1dXi+IBEsjn88RiMVZXV7l9+zZLS0tYrVZ0Oh2BQID19XVp\nmH1Y6z7Mz0ev10tD+nQ6jUqlIp/Po1arpaZcDBBSis/n449//CNjY2PE43GGh4fZ3t4+8ojruyDK\nvR4+fEg+n+eLL75AqVRSVlZGbW0tTU1NNDc3S+1ZyBdTU1MsLCwQDAYJhULSeEiMpTrqd6nQS/z0\n6dNcvHiR+vp6jEYjiUSCe/fu8eWXXzI5OVk0Y6fgDSJkcTSrqKigrKwMi8UiZ9NNT08Xld4lsL29\nzfb2NhsbGwwNDR31cg4VYvMUJCB8K4ol8SogmiP8fj83b95ErVbLaSDFQEzfBrH2ra0tIpEICwsL\nsuW7rq6OU6dOsbOzg9VqRa1Wy6Tq5uYm169f5/79+ywuLuLz+eTUnWK5N0qlEr1eT319Pe+88w4n\nT57E6/XKJL4Yy7SwsFBUviJvDCGbTCYuX77MO++8Q3V1tRxuKEqrhN9wCcWBeDzO2toaMzMzjI6O\n0tDQgFarZXx8nLGxsUO1RPwuiNJJn8+HQqEoWmvQb4LofozH4ySTSTkmLBKJMDo6yr/+67/KhF0m\nk5E2rpubm8RiMba3t4uua1Kv11NZWUl9fT1NTU1UVFQAT/yoV1ZWWFpaYmVlpShGtRXijSFkrVZL\nTU0NTU1NGI1Gksmk9BcutiaDEp6U/qXTaaanp6WRvtFo5NGjR0xNTRXdi5TJZAgGg0e9jBfG092P\nArFYjLW1tSNc2atBzNNUq9Xs7OzIbtW1tTUWFxeZnZ3F7/cXhW5ciDeGkBOJBLdu3QKeyBd2u52J\niQmuX7/O1NTUd05uLuFwIUhidnaWXC7H48eP0Wg0rKyssLKyUhQZ/BKKFzs7OywvL/PZZ58xNTWF\nTqeTnuKiK6/YyBjeIEJOpVJMTk6yu7uLw+HAYrEwPDzM8PAwPp+vaLLCJXyFfD7PxsYGsVhMWjqm\nUikZPZdQwjchk8kQCoUIhUJMTU0BRz9G6nlQHKnqr2PfPznRFGI0GnG73ahUKtn1VYyz2Up4ArVa\nLUuthNtbYetvCSV8z/CtnFuUhKxQKPKll604UVh3e9j3qBg680oo4RXxrZyrPKxVvAiedp0qoXgg\nmgkK78/T/1z49bK/41nYbzIuPWMlFBuKkpBLUVDx4rsi48K/K93HEkp4MRRriFB6k0sooYTXEd8/\nyaKEEkoo4U3EG1P2VkIJxYZCnb3UmFQClAj5WyESWGq1WjpeFYsrVAnfX4iWfZfLRVdXF4lEgseP\nHxOLxV47Yi5VxrwYSoT8FIRLlMlkkq2XOp0Os9lMIpFgZWXlezmjroTigUqlwmw209DQwFtvvUUk\nEiEajbK0
tEQkEvnekrJ4d4RhPXzlWri9vX3Eq/t+oETIT8FkMlFeXs57773HwMCAbExQKBQ8fPiQ\n//W//ldRDW0s4fsFhUKB2Wymq6uLc+fOcfr0aZRKJbW1tXz66ad8/PHHRe8S900wmUy4XC4GBgb4\n4Q9/SD6fJx6P85vf/IZr1659L6/psFEi5P8PMcW5vLycmpoa/vRP/5R3331X2j6m02nUajW//e1v\nS8ewA0ThZOTvGrT5fYVKpcJqtcohm263G6/XSzgc5ubNmwSDwe+VV4eQ9lwuFydOnODdd9/l3/27\nf0cwGGRhYYHbt28f9RL3FS86nftF8MYTsvhwT58+zV/91V/hcrkoLy+nvr5eRsZi9E2xWQy+ThD3\nQavVotfr5WeezWaLZgTSfkBEjQ8ePMDn8zE0NMTAwABXrlzB6XTS0tLC7OwsKysrR73U54ZKpUKn\n09HW1sZf//Vf093djVqtZmxsjE8++YSZmZnX5r0R07vFl5jCsl+k/MYTst1ux+PxcObMGd59912c\nTqccrAlPXqBUKsXm5iaBQOC1IoejhEhsmc1mqqqqsNlskoyNRiOZTIZ0Ok08HpezAsVDL8g6FosR\nCASOfKMsjJaeZx2pVIq1tTUCgQDLy8vs7u7KAKCnp4dUKsX6+rp0vCt2mM1mGhsbOXHiBCdPnsTj\n8aBUKllbW5NjyIoVIhAQfujZbPZrU3h0Oh16vR63243T6USpVEr/63A4vK/reaMJWaFQ0NDQwPvv\nv8/58+epqKhAq9V+7QWLxWJMTk4yMzPDzs7Oa7PbHyWUSiVGo5Hm5mb+/M//nOPHj2Oz2dDr9Wi1\nWknIm5ubbG1tyWhZDNwUk7lv3bpFOBwuGkKG5+9QFI5k09PTXLt2jcbGRs6dO0coFGJ4eHhfI6+D\nhNvt5t133+XcuXMYjUa5kaTTaRKJBJlM5qiX+I0QZCwmCMXjcTlYV9xHi8WC1+vlvffeo7+/n2w2\ny+zsLB999NFzE/LzzqN8YwlZjAhqbm7m3LlzHDt2DL1ev6cuVERiW1tbjIyMMDU1ta8VFkJ7Uyqf\n9OeIB1kci8T8uFQqRSqVeunfIX52MUFUsng8Hrq7u+nr68NqtaLRaFAqlWxvbxOLxeTYrVQqRS6X\nw2g0sru7SyQSQaPRMDMzQyqVOnT7VCGvaDSal3agKzzu6nQ6KioqOHbsGBMTE9TW1rKxsVHUpvfi\nOS0rK6Orq4uWlhYMBsPX3qFie/YEBBlbrVZqa2tRKBSsrKzIShfxDra2tnLmzBkuXrxIa2srw8PD\nhEIhksnkc9/zfD7/XN4pbywhGwwGvF4vLS0tdHZ2UllZuYe8crmcPDL7fD4eP37M5OTkvpe8aTQa\nObRTvKAqlQqtVovJZMJsNhMMBl96JNCLHqcPA2IjEgNny8vL5dBZkTBNJBJsbm6SyWSkmZFOp8Pj\n8WCxWEgmk8RiMe7cuUMkEjn0k4uoljCZTHL6zMuSj9Pp5MSJE3R0dOByuWhsbKSzs5NsNlvUhKxU\nKtHpdNjtdmpra/F4PNIqtfCrWCGeqbKyMtrb21EoFCQSCZLJJJlMRr6D/f39/OxnP8Pj8ZBKpbh1\n6xb/8i//wtra2gvJZc/z/72xhFxRUcHFixfp6+vD4XDIsfIiSs1kMmxvb+Pz+VhYWGBlZUXqlfsB\nvV6Pw+Ggr6+PpqYm4Il+lUwm5ehy8XBHo1GCwSCPHj1iZmZmzzqfvskajQaNRkNVVRUejwer1YpS\nqWRlZQWfz0cwGHzpaHu/oFAoMBgMNDc309HRgd1uB54MdV1cXGR0dJS1tTU2NjakdpzNZrFYLPT3\n93Ps2DHsdjstLS28++67aDQa/vjHPx7adYkN02azYTab2dzcfGl/ZoVCISdsG41G0uk0Ozs77Ozs\nFH2+QqvV4na7qaqqwmKx7HmHxLMcj8eLUrLQaDTY7XbOnTtHb28vNTU1rK6uMjQ0JNdbUVFBc3Mz\nbW1t1NfXo9FoiEajbGxssLa2diCnsjeSkBUKBR6Phx/96EecPn0ag8Eg/05Ex2Lo4/LyMrOzs/h8\nvn3rpFIoFBiNRmpqavjwww/5kz/5EwCpuYljVDKZJBqNyoGT/+W//BcWFxflOsUsQPFQiCOY2Wym\nu7ubc+fOUVtbi0aj4erVq9y9e5dkMlkUhGwymejs7OTkyZPY7XYymQzRaJQ7d+7w85//nOXlZUKh\n0J7r83g8pNNptFotXV1dNDc343Q6SSaT3Lx589CuS6PRYDAYsNvtWCwWNjY2Xrp2WCQ3RSNFJBJh\na2tLTrEpZmi1Wqqrq6murpbvkKhKymazJBIJQqHQkT9vT0NExm63mw8++ID333+fZDLJF198ASBL\nDkWyv6WlBZvNJjfKWCxGPB4/kBPZG0fIQvfS6/Uywik8VhWWtaTTaaamphgeHt73DiqHw0FNTQ1O\npxOLxQIgNVKhT6rVajQajYwOf/rTn9LU1CTXUajViXVrNBq0Wi319fXU1tZit9tlxK9SqYhGo5KU\njyJhpFaraW5upquri46ODiorK1GpVKyurnLnzh1u3LjB0tISoVDoazLE1tYW169fl7prS0sLJpMJ\np9MpSf0w6nfF5BkxVupl69JdLhc9PT1cvHgRl8uFSqWSUkw4HC46InsaNpuN8+fPc+nSJcrKymTN\nvkqlklGymEhdTFCpVPT393P58mXa29tJp9Pcvn2b69evEwqFpN5rsVioqanBZrORz+cJBoNyUvVB\n1cd/Lwh5PxNTIiLR6/WYTCY5/LDw74VmmUqlmJqaYnR0lGg0+sq/u/B32Gw23G43ZrNZPsjCN0Mk\nQwQh7+7uYjKZ+PGPf8z7779PNpuVu7xSqSSXy8nvLSxaF0SRSqVkKd/4+Dh+v1+W9xw21Go1TU1N\nnD59Wka4AEtLS3z66afcv39fasdP3+9oNMrt27eJRCJUVlZit9txOBzYbDacTqfU/w4aWq0Wo9GI\nWq1+pWeyoqKCK1eucPHiRcrLy4lGozICi0QiRU3ISqUSh8PB+fPnuXDhwtc2JnGCKzbvF3EaOXfu\nHH/9139NeXk5GxsbXL16latXr0rNXqlUYrVa8Xq9WCwWcrkcfr+fxcVF4vH4wa3vwH7yK0BErAdB\nGGq1GovFgsVikUmkQoidfWtri8XFRVlruJ+7fD6fx+fzMT09TTgclhGu0N3S6bRM7mk0GhmxiySJ\nIF7xEogXQRA17J3coVKpsNlsNDU18c4776DT6bh27ZqMBg4T2WyWhYUFrFYrDodDSkRTU1NsbGwQ\nj8efqY0LiMhElMApFAqZJY/H4weaBCv8PJVKJX6/n1Qq9dIvqDip6XQ64MkJ4PHjxywtLRV1uZha\nrZYdrWKjFzmNXC7H6uoqs7OzLCwsFE2VhQi0+vr6OHfuHGfPnsXhcJDJZPD7/ayvr8vySr1ej91u\np7KykqqqKsxmsyx1GxwcJBAIHNg1FS0hi2ND4Y67H9BoNJSXl1NeX
v61mmPxe7LZLD6fj5mZmX3V\njgt/RyAQYH5+Hr/fTzweR61Ws7Ozg9/vJxqNsr29LTugxJeQWMS6t7e3ZUlcOp2WL7BImhWeAkwm\nEzU1NZw9e5ZoNMrg4CChUGjfrul5sbu7y+rqKkqlEq/XKzeXtbU1tra2SCQSz/VZF25SJpOJyspK\nlpaWDnz94nnJ5XKEQiGi0ehLJd8KtWNxShPllcVsYCXktLq6OlpbW7HZbHuks0wmw/LyMl9++SVL\nS0tFUUctchZWq5WBgQH+/b//91RVVaHT6VhcXGRqaorV1VUZHDmdThobG6mvr8ftdmMymeRGMzk5\nue/NIIUoSkJ+2sNgP3cjkfDq6urCYrE8k5AzmQxTU1Pcu3ePra2tA9GLstks8Xicubk5hoeHsVqt\n+P1+bt68ycLCAqFQSEoRIol0+vRpent7qa2tRalUMjo6yuTkpIy0U6mUdNzq6Oigs7OTzs5O6urq\nZGQtiPpVtM9XQT6fZ2dnR25IdrtdvtiifOzb1iQSotXV1Xg8HjQajYyyD/paRBQodPh0Ov1S1RXi\nnppMJhwOh3wOA4EAExMTbG5uHtAVvDq0Wi1lZWVcunSJP/mTP8Hr9e7ZpHZ2dlhaWuL27dssLy8f\n8Wq/cqBrb2/nBz/4ARcvXqShoUEGBr/+9a/59NNPmZ+flyez+vp6/uIv/oKBgQFsNpvU9r/t5LZf\nKGpC3m+InbK1tZVjx47JBFohKYu644mJCUZGRggGgweyFvHwLiwsMDQ0hMlkYmVlhc8++4yZmRmC\nwaDUUcXRNhgMEolEaG5uRqlUMjg4yNDQECMjIwQCAdLptCTezc1Ndnd3cblcVFdXA09adgOBgCT7\no0A+nyedThOLxVhdXcXtdtPY2CjXl0qlvvHzFq3WotTK4XCgVCrZ2dmRScDDWH8ymXwlrVqpVKLX\n67FYLNjtdpRKJRsbGywvL7OwsPDMCKzwOT0qwyWFQoHL5aK9vZ3Tp09z6tQp9Hq9XFMwGGR6eprH\njx8zOjpKIBA49DUWrlXo3F6vl3PnzvGjH/2IxsZGrFYrk5OTPHz4kM8//5ybN29K2VCj0VBdXc35\n8+dpaWnBaDQSDodZX18nGAweeBnfG0PI4gaJo7vX65XaXeHvWlxcZHBwkOHhYZaWlg604SCbzbK6\nusrg4CDhcBifz8f8/DyRSGRP26yIKgcHB5mfn5e6XSgUkl66grxFpCh+Xjwel//N5/Px6aefcv36\ndSKRyJFpe+IUEg6HWV1dZW5ujs3Nze8sHTMYDLS1tdHd3U15eTkajUYmW8bGxtjY2DjEq3h5qFQq\nScZms5lQKMSdO3d48ODBN5aJiSQu8K2b1kFBaOe9vb28++67NDc3o9VqpbyYzWYZGxvjv//3/86j\nR4/w+/2k0+lDXWMhRBDT29vLhx9+SGdnJzU1NajVasLhML///e/553/+Z+bm5mTkK2rLy8rKKCsr\nk+/Z3NwcX375JUNDQywvLx/oxl+0hLzfEM0Sra2tstxMGIoUSiOLi4vcuXOH+fl5otHogT74Qqve\n2dmRTRuJROJrmqQ4Kq+vr7O+vv6dPzeXy2Gz2aitrcVqtcqEYSgUYnx8vCg8OXK5HOFwmLW1Nebm\n5uQm9Cyo1WqqqqpobGykv7+fvr4+ysrKZCfbxsYGm5ubB5r93k8YDAba29vp6urC4XCwvr7OnTt3\nGB8fZ3t7W97/woofUV8OX2m1h6nPejwe6urqZELM4/GgVj+hD1E/v7S0xL1791hcXDzyChG73c7x\n48d56623uHz5MhUVFSgUCjY3N6UWPDMzQzwel59zRUUFvb299PT0yM7RfD7P3Nwc165dY2ZmZl+r\nrZ6FoiTkg4DBYKCvr48rV65QV1cnO+EEKYkM8dLSkiy9OmjCymaz+P1+GeXuZ4nQ8ePH+clPfoLb\n7Uaj0ZBOp9ne3iYQCBCJRF7p9+yH9pzL5YhEIqjVakwmE6lUikwm88yfrdPpOHPmDFeuXKG7u5v6\n+nqsVivBYJCZmRnW19eLqrTqu2C327ly5QrvvvsuFRUVjI2N8ejRI1mVIFDoZ2IwGCgrK9vjf3GY\n0kVvby8ffPAB3d3dtLS0yETk7u4uyWSSQCBAIBAoirpjhUJBTU0N/+E//AfOnz+Py+WS7fhzc3M8\nfPiQ7e1tKisr8fl8MlfT2NjI3/zN3zAwMCC9qvP5PAsLC9y4ceOVyPh7bS6038kmodnV1NTQ1NT0\ntWTe7u4u8XicaDTKysoKy8vLxGKxffv93wTxe5VKJalUal9aZYVVoIhotFot2WyWQCAgK0Ze1hcD\nvq5lviyElhyNRllfX5fZe4PBsOdI7nQ68Xq9nDx5koGBAaqrq7FarfJ0ce/ePaanpw+1zbjQi8Nu\nt+N2uzEYDCQSCcLhMFtbW9IMSdSSiw1HeCeILL5eryeTyRCJRMhkMpjNZlpaWmhra5Ofs2jiMRgM\nJJNJXC6XPC2Jz0rcl0Kb0v2A2BCqqqro6uqiqqoKo9GIUqmUMyZXV1e5e/cujx49eu4qmYOC8Nco\nLy+nubmZmpoaNBoN8XicWCwmm696enpobGyUp1KlUkl7ezsnT57E6/XK+n/RQer3+6XhkPCYsVgs\npFIpfD7fd8pI32tzof12KFMqlWi1Wmlko1KpyOVysiFDlDAtLS2xurpKIBA4lIdKJIj2cwMyGo1U\nVFRgs9lkO6t4aRYXF1+5/16Q0atmnIXenUwm8fv9GAwG2TUpmlZEpHPy5El6e3s5duyYfFGSySRL\nS0t88cUXjIyMvNIm8zJQq9VUVlbS2trK2bNncblcrKysyGRRIBAglUphNBqxWq0EAgGi0ShWq5WK\nigocDgdms1muWaVSYTQasVgs/OQnP+Hv/u7v5OezurrK1tYWyWSSSCSC3+9nZGRElmOKZ1mpVO67\nZaeQSux2O06nU27warWaXC5HPB5namqKX//61zx48ODIZSOhzwsr18KNIxKJYDabOX78OBUVFbKL\nVTxrOp0Oi8UiSzGFhULhqUUMp62traWuro5QKMTNmzef6/krmQvxhECcTid1dXVUVVVRVlYmbxQg\nTYTGx8f5/PPPmZycPNTj734fO71eL6dPn6ampmbPjiyaTHQ6nfQbLoaCfUE6oi5XlIOJ9fb29vL2\n22/T0NAgM/qiSiMUCskI50U3iFdxwWtsbKShoYGysjI8Hg92u11KCj09PbIdV0TEOp1OSlNlZWXU\n19dTWVkpfXcdDgdnz55ld3cXj8dDX18fRqORoaEhRkdHZW164QSVWCy2pymoMAG8n3C5XHR0dNDS\n0oLdbt9TMik0/PX1dRnIHLV0pNFoqKiokKcWMdVD1PEL32NB2ELCS6fTcrCxTqeTEoeojRcQp1qf\nz0cmk5Hdofv1uRclIe+3XOFyuWhqaqK6upqysjLZ/aZQKOTL/fjxY371q18Vtd3hd0GhUFBbW8tb\nb71FfX29/O/CDN5qtWI2
m+WIpFfVkfcjsi+0HBWRl8ViQa/Xy9rrH/3oR3t8dnO5HLFYTNYDv6zT\nmoieXgQKhYK2tjbefvvtPVLB9vY2Wq2WxsZGTp8+jdVq3VNPv7m5SSQSkRNpdnd32d7eJpfLUV5e\nzjvvvEN5eTmtra0ABINBPvnkE/7v//2/7OzskMlkpIeGzWaTMoaQKQQp7zchezweLl68SHt7Ozab\nbY+lZiaTIRgMsrm5STgcLopmFo1GI8sihbSSz+cxGAyyM1d8JZNJFhcXZWWT2+3G4XBgMpmAJyWw\nq6urRKNR+T0i7xMIBJiamtp3Lf+1JmRRgN/S0sKZM2dwu90yay1+z+bmJg8fPmRiYuJ7YejyTRAR\npYjAbDab/LtMJoPP52NlZYVYLPZK0XHhi79f96lwuoQYIX/8+HH6+/vp6enBYDBI34jd3V1Zvnft\n2jW2trZeurzqRdcvxks1Nzdz6tQp/H4/m5ubrKysEAqFpLlRV1cXer2eZDJJIpGQMpFWq5UnFKE3\n+/1+MpkMtbW1skX33r173Lhxg/v370vHOyEVCa9ekaQV9/Lpztb9gsPhoK2tDbfb/bX8QTweZ2xs\njJGRkSOXKgQsFgsDAwOcPXsWs9nM9vY28XicxcVFpqenZXdrJBIhFArJTlmdTkdPTw/9/f1kMhlS\nqRSDg4NSiil85gv/eb8/76Ik5P2A8Jm12Wy0t7fT19eHy+V6JiHfu3ePqakp6aNQ7HhWd2GhR6/H\n45ElUqLmd2tri42NDRmVvQoh7yfEQy0y80ajEbvdTm9vL3/5l39JeXm5rL8VSZb19XWuXr3Kl19+\nSTAYfOkI+UVhNpuprKyUQw2mp6cJhUKsrKwwPT0tOyXPnDlDKpUiGo0SDoeJRCJyork4mYnxTTMz\nMxiNRjo6OlAqlQSDQW7evMnPf/5zeYoRHteAtIUVksjTpLBfn4PYAMTg1fLy8j2/Y3d3l1gsxtjY\nGOPj40VByMIQqKenh+7ubgAZwQ8ODkrzIJG8F3XrRqOR2tpaXC4XqVRKdpIODg7yy1/+8mtNQAdZ\n3fJaErIoF+rv7+fChQucP3+e6upqzGazFOwFwuEwY2NjrK2tFYWm+m0QEYqwfczlcnvG5Ai7w3A4\nLAlZaLN2u53y8nIsFgs6ne6VJYv9hthQOjs7+fGPf8zJkycpLy/f41WdTqdZX19ncXERv9//Utqx\nwO6tKicAACAASURBVMu05be2tvKjH/2Ijo4O4EnZ4s7ODltbW6ytrUlb02AwiNFoJJfLYTabsVqt\nnD59GpfLJSfRpFIpwuEws7OzRKNRRkdH5YSax48fo9PpUKvVUgMt1IlFydnTG9F+Pr/CYMflcuFw\nOOTxX0gk8Xgcv9/P6uoqGxsbR9oEAk+en/LycioqKgiFQoyMjBCNRllcXGRsbIzZ2Vnm5uZky3vh\nBmIymWhra6OtrQ29Xs/8/DxXr15lcHDw0IcEvJaELKKsvr4+/uIv/oKqqirKy8u/Nr8un38ywFSY\n/BQDIRe6tQmI0iMBYS4kkjzihRXNH3Nzc5hMJioqKuT3l5eXU11djdfrlSVWh9Gb/7wwmUxUV1fT\n19fH+++/j8fjkVpeYWff5OQkY2Nj+P1+OY36ZfGi115XV8fbb7+Nx+MBnpSjxWIxgsGgdH7z+XwM\nDw+jVqvR6XTU1dXR3NxMbW2t1IKFm+Ds7CyTk5Osrq7KzVQkiURlUOHAV0EOwnb1Za7heWE0GvF4\nPLhcLrmJCzJOpVKsrKwwNTXF8vLygdkLvAiEoZZGo2FxcZFYLMbm5iZTU1M8fPiQYDD4zOdF2Cm0\ntbXR2tqKTqdjbW2Na9euMTk5eegn5teKkEUEWVNTw+nTpzl58iS1tbXo9fo9EYZ4sHK5nByQedTF\n7GJdwgxFbB4KhUK60wkPhUQiseflFte1u7vLxMQEH330EUqlkvr/P1pep9NRX18vXxqj0cj169cJ\nBAJHNiZI6J3injU3N/Phhx9y5swZXC7XHgLIZrNsbm4yMTHBb3/7W27evCnHOx0mEej1epxOp0ww\nxuNxtra2iMfjsu5YrEdcl1qtRqvVEggEGBkZYW1tjdXVVVmZsLq6KiP9wjIrUaolEo+F9cWitO0g\nycJisdDc3IzH49kzwCGVSuH3+/nd737H7373O5aWlo6cjOHJsy/8XHw+HzqdTk7cEWPLnrVOUU9+\n7NgxmpubZR18JBI59MG58JoRspgk3drayqVLl+jo6MDhcMhjvBiYabVaUavVkuAKo4+jRmEWWHyJ\ncjWhoYqIUXgCC+TzeXl0Pn36NDs7O+j1emk5KszsQ6EQDx48IBgMvnQi6FVrmcWfIhITXZQtLS1y\n+reoRIhEIgwPD3P79m1u3brFyMjIoXapCSQSCdm6bjQaWV9fZ2lpSVpwFq5HzMnT6XRsb28zNzeH\n3+9ndnaWxcVFWXXxIoMCxOcmgoeDuH7xzDkcDtrb26mtrZWns3w+TyQSYXFxkfv373Pnzp2iqKyA\nr8rR4vE4a2trwHf3M6jVajm5p66uDpfLJT3Jw+HwkYzQeq0I2W6309zcTF9fHxcuXKCiooJ8/on3\nsPDbVavVdHd3Y7VaiUajJBKJojHRFs0SIqsOTx4q4TQldN+nI+NCiN1dXJuIuMXRrKamBo/H80wv\n6BdZ537B7Xbz53/+57z11lvSo7YwMt7Y2GB2dpaPP/6Yq1evsrq6eiRkDDA5OcmvfvUrBgYGaG1t\nZXx8XI6Ef3o9ZWVlHD9+nEQiwfLyMvPz87KSJJFIyM7MF7mOw7hmkaOorKykr6+P5ubmPdPAV1ZW\nePz4sRzyeZB5iBcNFp7+f7/re3U6HZ2dnfT3/z/2vvO5zevM/qD33kgAJAh2EqwiRVJUs6TEXjvF\n9m42zpZJPuzOzv5PO7MfMpOdZLJJdrOJFTu2ZJmiKiU2sYMkQPQOonf8Pvh3r0AVWwUEIApnRhMr\nkoj74n3f5z73ec5zzhRUKhWAx7oc9XAyB05YQNbpdDh79iwtVSQSCezu7mJpaYnS2sjEkVQqxcbG\nBjY2NhqK6vasYEMMI18EZIe32Wy4f/8+VCoVFAoFpY5VTijWE1wuF2q1GhaLBWfOnMHw8DCl6uVy\nORwcHFDXCavVijt37tCaXjWZBC/zs9xuN27dugWRSAQulwu73Q6Px/PMLJEMvESjUfh8Plpmqtdm\n8qIQCoXQ6/Xo7u6G2WymJyuSLFitVty6dQtut7thTpWvAlJ+IiJPJEHb29vD/v5+3dyyT0xAZjAY\nMBqNtAvOZrOpUNCXX36JBw8egMVioaurC729vWAymfj9739PhUZOCkitfHl5GblcDgaDAXq9/ki3\nPBqNHhGoqTVIqWJ4eBizs7Po7e2FRqMBm82mHfD5+Xn85je/gdfrpcf7agfjlx3RJz53Ozs7kEgk\n8Pv9z9RuYDAYlAVCuMnHXfOtBhiMb7wex8bGMDIyQksulSeW9fV1XLt27dhVz4DjO
xEwGN8YmOr1\nevT391NtdLfbjevXr2NhYaFuMeFEBGSifapQKKDX6yGXy8FgMOByubCwsIC9vT2k02kqUGMwGJBM\nJqnjc73cM44DhG3h9XpRKBTgdruhUqmg1WqphGU+n4dSqUQ0GqW2NbUCi8WC0WhEb28vzp8/j+np\naeh0OnC5XACAy+XC0tIS5ufnsby8TEsvr4pKpxSxWIxcLke1oElN/UUzPdJrsNvt4PF4ODw8pCyI\nJ0eXiS9io2fEBEwmEwKBAHq9HlNTUxgZGYFYLKbvRaFQoFS9RmEkvSqYTCYsFgvOnz+PoaEhKuXq\n8XiwurqKvb29utH4TkRA5nA41LiUx+NR7qbb7cbi4iJisRhaW1vx93//93j//fchEAjg9Xpx7tw5\npFIp7O/v10X0+zhAskgyyurxeOgcf1dXF814DAYD4vF41f0CvwtsNhsDAwO4dOkSLl26hIGBAapR\nUSqVYLVa8d///d9YXl6mrievAxJoiClnPB7Hzs4O8vk8zchfNCCTYzvxvEun05BKpZRlQf5OraUx\nqwE2mw2pVAqTyYSZmRkMDw8fMXAgpbBa6zAfB9hsNqanp/EP//AP1FszGAxSH02Xy1U31tWJCMhE\n/0AkEoHNZlOazvDwMH72s5+hUChAJBJhfHwcCoUCwNH6IdG2eNNeoueBsDBKpRJVAAOASCSCaDRK\nNREq9SFqAaPRCLPZjNnZWUxNTcFgMNA1EPqh2+3G1tYW1amtBginl2jcEqeRV0VrayuGh4eh1WrB\nZrOxubkJn88HFouFaDQKm83WMOyDFwWLxYJCoYBGoznCOz48PKSUw7W1NTx69KjeS30tELU9g8FA\n718oFMKtW7conbLWyoGVODEBWSwWU5UwUh+cnp7G6dOnKZ+XBGHCVCCZDZmIepObFE+CMDJIQyaT\nyVAKGXGuJmWCWqGjowMXLlzAuXPnMDEx8ZTIeSQSoZN41XbEJvKWRKSHbFKv8uJ1dHTg8uXLOHXq\nFJRKJa5evYqtrS1wuVzs7u7C7/e/cQGZTHOqVCr6XBCG0sbGBv70pz/hj3/8Y0OMSL8O5HI5Ojo6\noNVqIRQKkclk4PP5cP36dXz11Vc1k959Hk5EQCbTQ+l0moqHVHInSUOFGGs+fPgQq6ur2NjYoFZN\nJykYV4JsQGSgJJFIQC6Xo6urC7FYjGoJHzeIEh2p4ROxoEwmg0QiAavVitu3b+P27duvZSD6JMiA\nhd/vx/r6OrLZLLXJetW+wfr6OnK5HL7++mvw+Xzs7e0hHA7TDDmZTL6RJy3yrJTL33g4xmIxLC8v\n49NPP8Xy8jLV0HiT0drailOnTkGn01FeNbEAI7zweuJEBGQScCKRCFwuF4DH8/4E+Xwefr8fjx49\nwu9+9zs6TlnP40ktQF4ysmFls1nw+Xx0dnYiEAjQrPE4a2bkdGIwGDA2NkYHcwidz+12Y2lpCZ9+\n+mnVaYhksyaOwaScQ6YiX+Xe7+zswGq10t+fhOeHfE9k+pA4YS8sLOCLL75AMBg8EWwknU53pJHn\n8/mwv78Pn8+HeDzeDMjVQDabRTAYxPz8PAKBAC1dVIJkSuFwmOqf1kolrFFAuL9tbW0wm804PDzE\n2NgYdnZ2jnUElsH4xrFYIBBAJpOBy+XSjcLj8eDq1au4desWFdqp9pGRnJJIdlfJiHjVzzppz00u\nl4PD4cBXX30Fh8MBgUCAVCpFtSoaiav/KqgU5hKJRLQcc+3aNXz22WdwOBwNIbZ1IgJyoVBALBZD\nLBbDxsZGvZfTkCBeYwqFAi0tLdDr9UgkEhgfH0c6nYbD4QBwPIGG6FITPWEA1EfOZrNhfn4e9+7d\no6JHx7GGZ/GAT0oTtxrI5/MIBAIIBAJYXV2t93KODalUCoFAgCZsN2/exJ07dxqGPXIiAnIT3w4i\ncsPn8yk9UCgUwmg0Ynp6Gm63G3fv3j22QREink9eAqJRsb29jaWlJbhcLnpcrGWAbAbjtwfkXq+s\nrCAcDlPBMavV2jB2ZkAzIJ94EC2LSpHzVCoFv9+PSCSCZDJ57I0akonabDbMzc2hVCrRhtHi4iJ8\nPt+J4YE30bgg4lter5c+k412SmoG5BOOSglI4JvxX6vVCqvViv39fayursJqtR5r/Yxo/V69ehXL\ny8u0nkvKTGRyrokmjhukd1H5+0ZC7aYCXg6N9S29wSDZsUAggFarxcDAACQSCbLZLFwuF/b29l57\nPPlFQTSe38RJtiaaqBK+NeY2A/IJBxmKqQzMRMMhl8shk8lQyc9a4KRohjTRxCuiGZDfZlTqKlfy\nshuxftZEdfCsTa/S8IDgdWh/TbwyvjXmNmvIJxzkxWwG3rcDz5IVJUMwlZ6SAJ4yZmg+I8eHF5V6\nbQbkJpo4YXjWyadShY6gMjtuBuPjBfGP/C40SxZNNFFDVGaqlTortQqILyvK30TV0awhN9FEI4DL\n5cJsNqO7uxsDAwMoFou4d+8e9vb2EAwGG8L5vInjw/+v7TdryE00UW8QEwWLxYKLFy/inXfeoeqE\nh4eH1BW9iZOJF9UdZ373X2miiSZeFxqNBoODg3jnnXdw5coVGI1GMJlMFAqFt07kqonno5khN9GQ\nkEgkkMlkTzmblMtlpFIp6uRcL6PWlwHRgj59+jQmJyfR39+PdDqNRCKBWCyGVCrVpJ81AeAtC8iV\nriHAUVoQQaO/3Ccd5F709PRgdnYWAwMD6Orqov9/qVTC5uYmfv3rX2N9fR2pVKohZBOfBzKUMzIy\ngh/96Edob29HOp3G/v4+1tbWsL+/j2AwWHcd3iaOFy8aV05EQCY8S7FYDJVKBQaDgXw+Dy6XCx6P\nB4FAAD6fDy6XS73znvdzSqUS8vk8PB4PAoEAUqlUQ6lBnVSQwCWXy6HT6TA5OYlz585heHgYfX19\ndOKwVCpBr9fj4OAAhUIBVqsViUSiYTNlqVQKlUqFvr4+9PX1UZGlO3fu4O7du3C73SdC+L2J6uBE\nBGRiYkkcc5lMJuLxOBQKBdRqNQwGA1paWiCVSiEQCAA8nRWT7LlQKCAej+PPf/4zFesm/m6N+MKf\nFLBYLPB4PPT39+PSpUsYGhpCX18fFAoFisXikUmz1tZWfPDBB+ByuYjFYsjn88hkMg13fxgMBtra\n2jA+Pg6TyQQWiwWbzYaVlRX88Y9/xIMHDxAOh+u9zCYaCCciIGs0GkxOTmJsbAwjIyNgMBhIJBKQ\nSCQ041Kr1RCLxdTaHHhsb57L5cBms8Hj8VAsFpFMJhEKhZDL5bC+vo6DgwNa68tkMs16X5XBYDDA\n4/GgUqmovZPL5UIgEIBYLIZCocDg4CANaiKRCD09PQiHw3A4HODxeNjd3UUul3ul8sVx6GsQMf62\ntjYMDg5CJBIhFAphcXERc3NzWFtbg8fjqepnVguV4/bkfzkcDnXuFggESKfTyGQyyOVy1AKs8pTS\nKJsjUToUiUSQSCTQ6XTQarXU9NftdlN7
qkZguZyIgNzR0YF//dd/xcTEBIRCIQ20wGOnDFKuqCTk\nE1nIeDwOoVBIHat5PB5GRkYgEomgVqvx6NEj2Gw2eDweGsCbqA5I1isUCtHa2goGg4GtrS14PB7Y\nbDYIBALodDr827/9G7RaLfh8PjgcDpRKJW2OsVgseL1e6rT9Kp8PVC+IMBgMiEQi6PV6mEwmmEwm\nAIDdbsfXX3+NL774ArFYrCqfVW1UnkQqR65Jk7Wrqws6nQ5+vx/BYBDRaBTxeByJRAL5fJ6OYzdK\nQOZwOJDJZNS2bHZ2FtPT0wiHw3C5XLh+/ToePHgAt9vdDMjVQOXDE4vFqDcWCbzlcplmxoT3GYvF\nEI1GEYlEkEgkkE6nwefzaQ1aLpcjl8shnU5Do9Ggp6eHGoUeHh4ee0CWSqXQ6XQwmUwwm81wOp2w\n2+3wer2vdcTl8/lQKpVgMplIJBLIZDJ1F4Ynn53JZBAIBJBIJMBmsxEKheDz+cDhcBCJRHDt2jUw\nmUxMTk7CaDSCw+HQ+yUWixsmCBBDAIPBgDNnzsBisUCj0eDg4ABWqxXb29sIh8MNc8ricDjQ6/W0\nrKdQKKjnYT6fp04zQqEQIpEIWq0WcrmcBuJ0Ok1/ZTIZZDIZ7OzsYGVlBel0um7JC4kJOp0Op06d\nQm9vLzo7O2GxWNDb24tkMgm9Xg8+nw+tVouvv/4aVqsVqVSqrg3WNz4gA9+YnHo8HqRSKezs7KBY\nLNLSBIPBgFarhVgsht/vh8fjwcHBAex2O/b39xGPx1EsFsHhcGgNs7e3FzKZjPrPdXZ2IplM0iPy\ncUOpVGJkZATvvvsufvCDH+DGjRv47LPPcP/+/VcOyCQLNZvNYLFY8Hg8tCxT70BWLpeRSCSONLcq\nTzH5fB6ff/45gsEglEol9Ho99enj8XhgsVivJSFazWBe2c+4dOkSWlpaIJFIcOPGDfz1r3+F3W5v\nGEYFKUX09/fj7NmzmJmZQXd3N0QiEYrFIuLxOHg8HuRyOT1dklMkyYRJo5WcNmOxGP7whz/A5XIh\nGAzW9TTJZDJhMBhw5coVDA8Po62tDQqFAlKpFDKZDDqdDh0dHejq6kIkEqHvQzMgvybcbjf++Mc/\ngsfjIRQKoVQqgc3+5tLI8ZHL5SKZTCIej9PJqEgkQgNSqVSinX6pVAqj0QiNRkMZF1arFQcHB8fq\nvksaW11dXXj33XcxMTEBuVwOjUaD1tZWiESiV/65IpEInZ2duHz5MnQ6HaLRKO7du4cvvvgC6XS6\nIYLys8oN5N7k83lks1n6d8rlMmKxGKxWKz1uvso1VPu6pVIpOjo6MDAwALPZjFwuB5vNBrvdDofD\ngUQiUdXPexXIZDK66Q8PD6OnpwddXV0wGAxQqVTgcDgolUrg8/lgs9lUQxt4+vuqZL8Qve2zZ8+C\nxWJhcXERKysr8Hq9CIVCNbu+ynJLuVxGNptFPp8/0sivdEJva2vDBx98AD6fj6tXr8LlctWNSvnG\nB+RyuQyv14tPP/30lf49uXlktFWpVEKn06GrqwtqtRrr6+twOp3Y2trCwcHBsQYuNpsNsViMrq4u\nXL58GW1tbeByubSxVdmQfBmwWCxIJBKYTCZcvHgRg4ODyOVy4PF4uHv3bt2zgkqQLKwSpOFKsmES\npCORCLa2tuB0OpHL5RqiDCCTyTAwMICBgQG0t7dja2sL29vbsNvt8Pl89V4eAEClUqG/vx8ff/wx\nfvjDH0IkElHTT4JyuQwul/tUjf3JBh4JfCwWi96n06dPY2hoCJ999hlYLNYRNkmtNn5S+y4UCjQB\ni8fjNMPncDhgs9lgsVjQ6XR49913wWazsbi4CL/fXzca5RsfkF8XPB4PIpEIFy5cwPvvvw+9Xk+P\nmbFYDKurq5ifn0c4HD72GyQQCNDS0gKNRgMej0ezEo/Hg+XlZQSDwVf6uZVC5CTjEYlEaG9vx9jY\nGDY3N2G326t2Ha8C8uJrtVq0tbXRwMvj8SCVSjExMYGJiQn09vaiWCwiHA5jd3cXy8vLsNvtDTMc\nIpFI0Nvbi7a2NvB4POzv7+Pq1avY29ur99Jo4BweHsYnn3wCi8UCoVBIT5PA0+Ub0vQm/3ZnZwd2\nux25XI6ykTgcDtra2tDZ2Ynu7m4IhULw+XyMjo6Cz+fTU2YqlapZCYPJZEIoFCKTyWBzcxNOpxN/\n/etfKetiZmYGY2Nj0Gg0tMekUqnQ1dWFYDBIN/la460NyOQBU6lUtN7385//HBwOB+VyGYeHh/D5\nfNjc3MTq6uqxB2NS4zWZTLTZQPznfD4f1tfXX/rYV0nnIjs+i8WiQzKtra3o7e1FKBSqe0Am5Rqj\n0YiZmRnweDyUSiWIRCIolUpcvHiRUhrD4TD29/exsbGBra2thsk8gW8CstlsRktLC1gsFtxuN+7c\nudMQGwY5pnd0dODKlSs0GJNjfaFQQDabpRSwUqmEw8NDBAIBsNlscDgc3Lt3D6urq8hkMohGo3A4\nHOBwOLBYLDh9+jQdE9dqteju7oZer8fKygoePnwIn89XkyBHMncul4t0Oo3t7W26eZDSRSQSAYPB\nQF9fH4xGI/h8PhQKBfr7++H1emu21ifx1gZkHo8HhUKBM2fO4MMPP8TY2BjYbDZtWNjtdqyurtKh\nkOMGg8GAUqnE5OQkLBYL+Hw+pdiFQiG4XK6XMiKtrKGROh95IUnmTSiBlRlSvUBob6dPn8ZHH30E\nsVgM4HG5oqWlhR6dSYnqq6++aij6GKFMajQaCIVCyltvFPEg8gxks1lEo1FatioUCkin0wiFQjg4\nOMDq6iq8Xi+SySRSqRRSqRT1ZPR6vQgGg9STMZlMgslkIplMIhAIwGaz4Z133sH3v/998Hg8mj2T\nMlmt7le5XEYul8Ph4SFlRlX2Sm7evAmfz4fZ2VmqLyKVSjE1NYVQKISVlZWqG/++CN+9/m9ijUG6\n82q1Gn19fZidncX3v/99qNVqsFgsFAoFJJNJbG5u4v79+wgGgzXJjlksFhQKBW0GcblcRKNRuN1u\nOJ1ORCKRF6qRcrlc8Pl8SKVSCIVCJJNJ5PN5Wn+trAnm83kkEoljbVS+CBgMBuRyOYaHh6kAj0Qi\neervkNLL4eEhHj16hO3tbWQymTqt+tkgvQiBQEAblY1Q2wYel648Hg/u379PT0q5XI4KNu3u7uL+\n/ftwOp2Ix+OUW8xiscBisejvn0QkEkEymUQmk4HZbEahUKDPXGtrK7q7u2tatimVSsjlcshms7QZ\nXPke7+zswOPx0N6DRqNBe3s7ent7sb6+Di6XW9X1vKj85lsXkNlsNuRyOfr7+/H+++9jZmYGYrGY\nZo2JRAIejwf37t3DtWvXatIdZjKZ4PP5lIojl8vBZDKxu7uLzz//HOvr6y/cZFAoFDAYDLBYLDCb\nzdje3sbBwQHS6TQtx5BR5GAwiEePHtV1Yoxk8gaDAe+++y4mJyePUBYJSG2zshZOTjSNBFIKI4NI\
n1X6xXweErXL//n14PB56WiK0tUwmg0QigUgkgnQ6TTN7Mmj1ZFB7EmQzIuU24PFmq9frIRQKa3Wp\nNCCT/37WujOZDNbW1iAUCnH+/Hna2BQKhfRUWeuTzVsTkNlsNhQKBXQ6HXp6ejAxMYGpqSm0t7eD\nwWAgEAggFArBZrNhe3sbDx8+xMHBwbGvi8FgQCwWY2BgAKOjo9DpdFRvw+VyYX5+Hna7/YUeDFK/\nu3jxIuVckuEKqVSK1tbWI6WQcDgMp9OJw8PD477M56JyM9JqtTSrL5VK4PF4TzklA4/5s40UkCup\nVmSzIHVxkUhER4zrCRJcPR4PvF7vUxOK38XHft6fkWuWSqWUOkeasoVCAeFwuOYiSpU0yuetu1gs\nIhQKwev1IpfLUbofh8M5YgZbrfW8yLP6VgRkBoMBPp+Pvr4+TE5O4uLFi1S4hsvl0k7s3bt3ce/e\nPTx8+LBmvEkGgwG1Wo33338f3//+96HT6WjWEgqFsLGx8ULDICQYWCwW/PM//zP8fj+d7vP5fDh7\n9iyGhoYgk8mQz+cRjUYRCoUQj8frWrIgLzKbzYbf74fT6UQ6nYZKpaIvNslWyHWSeiaHw2mYgAzg\nSEAmQlVkEiwcDtd146vEs/QmXicTJHxenU6HgYEBGAwG2izMZDJYXFzEZ599BpfL9dprf1FUnqa+\n7dqedOSulFc4jjV9F058QGYwGJDJZDAYDJicnMT58+cxPDwMkUgEn89Ha7RbW1tYX1/H1tZWzWpd\nJNOTyWQwm83o6Oig2THZUSubcM/7GUwmEyaTCRaLBZ2dnZSV8ejRIwQCAYhEInR3d6O3txcikQix\nWAzb29twOp3IZrN1rXGSTCYWi9HJSaKdQIShtFottFotZDIZGAwGpFIpLBYLfD5fTUbZXwaV7s5M\nJhPt7e24cOECgsEgYrEYFAoFLZGl02n4/X76DNZyY6xmwFGr1Th16hSmp6cxMjKC1tZWMJlMZDIZ\nxGIxBAIB+Hw+pNPpqn1mNUCasAKBgG6i9R7BfysCslarxeDgIM6ePYvZ2VlIJBJ4PB4sLi7i66+/\nxtzcHCKRCFWvquXaiIaGXC6HWCw+MvhAso5sNvvcji9pUo6MjODf//3f4fP58Omnn2JxcREbGxtQ\nKBRUi7e7uxt8Ph8ejwerq6vY39+v+0BIsVhEKpWC3+/H9vY21tfX6XGay+ViYmICp06dwunTpyGV\nSsFkMqFWq3H27FnEYjFsbm4iHo83BIuB3DfSKOLxeBgYGACTyUQ4HEYmk6GqdWw2Gz6fD/fu3cON\nGzcQDofr3lx9FTAYDBgMBvzd3/0dpqen0dLSQoW60uk0fa8a4f48CSaTCYFAALFYTDP6ejdhT3RA\nJkMQ/f39mJ2dRWdnJz0ep9NpmpkQO6Ba746EmhMMBnH//n0IhUIMDg7Spl5PTw8+/vhjPHjwAEtL\nS4jFYkgkErTBAjwWDEomk7h27Rq8Xi82NzfhcrlweHgIoVBIAz9pWoTDYWxtbdV1RJSAUO84HA4A\nIB6Pw+l0Ip/Pg8lkIpvNIhAIUDEhuVwOLpcLtVoNpVJJp8nq/cKTE02xWKTTbKSGrNFoKItHr9fT\n8WRiTSWTydDZ2Yn79+/j7t27lCrX6BAKhWhvb8f4+Dh6enroeD+TyaTU0YWFBTgcDmQymZo/a9/1\nTJA5BJ1ORxvJleWLZlOvyiAv8dDQEM6fPw+j0UiVrDKZDPx+PyKRyJFuci1RLpeRTqfhcrnw8qDz\nRAAAIABJREFUxRdfIJ/PQy6XUx+5wcFBtLW1QalUIp/Pw263w+VyIZVK0bUS/q7X68V//Md/0Gko\nMgRCggOpj2WzWUQiEWxvbzdEQGaxWFRJjOhRky5/uVyGz+fD1tYWWlpa0NbWBjabDZFIBLFYDLFY\nTBswjUAtIxlWLpdDLpejNEM+nw+VSgWFQnGkkSaTyTA2Nobh4WF8+OGH+OUvf4nNzU1qfNrIYDAY\nkEgkGBsbw9TUFNra2iCRSGgwzufz2N3dxVdffYW9vb26BOTvAovFglqtRmtrK90cK+vI9ehPnOiA\nTKg8zxOtyefzdQvGlchkMjg4OMDdu3cBAH19fWhvb0epVEIikQCTycTExAQV1MnlcpR/m0wmaUZJ\npANJ/ZLP52N4eBgXLlyATqejOz5R5qq39CbwjRjP+Pg4BAIBbDYbXC7XETYCWa/H48H+/j60Wi0k\nEgnEYjHlWnO53O+kZB03yMtMFOjYbDb1/5ubmztCG2MwGGCz2ejo6MDly5epnOjo6Cj+5V/+Bdev\nX8fc3FzdruVZIFoVEokEIpEIDAYDRqMRU1NTGB0dhUwmo0GMSNXG43GEQiGk0+mG2DArQUpiZrMZ\nAwMDtBzG4XAgFAqhUCgQCoWQSCSq8lxVfjffhhMfkCvdfaPRKJ1lJ0GtEeQns9ks3G43EokEAoEA\ntre3MTw8TBs/RCZwf3+fcjzJJkImqZ4EyTwtFgtmZmag1WppcCNuCY3QDBMKhejq6kI2m8X8/Dw8\nHs9TD22xWEQgEIDD4cDQ0BDVKSBZdSNMGgKP6V98Pp8OGW1tbeFPf/oTVTwjzxqTycTU1BTVUiB9\nDrVajUgkUveATAZBCJuFy+WCy+VCpVLR4Gs2mzE+Po7BwcEj3GNyCguFQgiFQg3pqk2eIZPJhJ6e\nHkgkErpRCgQC6nheDXW+SibHWx2QSQCam5uDz+eDTqdDe3s7BgcHqSMFkeusdx2SlC8IX9PhcFDq\nlFAoBI/Hg8vlot3q71qrUCiEWq2GVqulYkXpdBoHBwdwOBxIJpMNMdIbDodx48YNFItFRKPR5764\n5KGuzPKJ7kIjNcOIHnI+n0cwGITP56PCOpUnsVKphIODA/zXf/0XQqEQPv74Y0gkEqjV6leWWa0G\nCLNHrVbDZDJhYGAAw8PDdKjIbrfD7/dDIpGgo6MDSqWS0g/J9W1vb+PatWu4desWXC5XQ0iOEhC+\nuFKphNFohEKhOLKZAKB6L5XX9aqfU8mWYrFY35kEneiADHyTJZMxW7lcDpPJBJ/PR8sExKFCJpNB\nKpVSGyAyvZROp6nwynHv8mRYIxwOP0W9e9kHQygUQqVSQa1WU741aebt7u42TNYSj8epeNPzyg7E\nUZw09IrFIhKJBHWsaCRXcJIll8tlJJNJHB4e0rH3J3m/fr8f165dg0gkwvnz5yESiWgJph4gAldE\nfnZ4eBhnzpzBhQsXwGAwkE6nce3aNSwuLkKv16OrqwsymYzWyslGabPZ8MUXX2BnZ+fIqaCeqHQW\nYrPZUKvVMBqN1PKtco2k5k/0VF7n8573++fhxAdk4HG9+PDwELu7u9QlxO/307rY7Owsvve979Hy\nRjqdRjgcxtraGmw2W03oO98WdF/2c0UiETQaDW0SkmnEubk53Lt377WylmqfJr5tYyC1PjLUo1Kp\nkE6n4XA44Ha7G6IOTtZZ+dKR6UiiWvesNRLKH1HbI03o
TCZT8xMbCVSdnZ345JNPMDo6CpVKBY1G\nAy6Xi0QiQXWFE4kEPYGR7JKc5ohwkdvtRiwWe63+TLW+g8oMldT4tVotWltbkc1m4fV6oVarIZVK\nATx27PH7/bBara+VuJD1P08D5Em8NQG5WCxS7y+iK0zGllksFrRaLYaHh5FKpZBIJMBisZBMJqHV\narGxsQG73Q6Px3NstjTV7ujyeDyIxWLq+kAaglarFTab7ZWP+ZUTc6/zoj2pU1H5Z5W/J5l+W1sb\n1Rj2er301NMIbifA4xFwomP9ItSpUqmEUqmEZDKJSCSCWCwGDodTFzYCCVoSiQRdXV1oa2tDJpOh\nlmck09/b20M0GqVZJKGLlctlRCIR7O/vw2q1Uifnet8b8pyRkwvJ5guFAmKxGLV8MxgMlJpIdMnl\ncvkrv5dPnobIvf4uvBUB+Xkg9LdgMAir1Yp79+6Bx+OBz+ejt7cXFosFg4ODcLvdWF5exp07d3Dj\nxo1jGauu9oNLgh4RlHlRgZgX/dmv8zMqXUGeFA168nOIyaxMJkOpVEI6nYbT6cSNGzdw//79qksk\nvio4HA7kcjlkMhk4HA4KhQLi8fgLqdGR74M0z8hwUK3BYDAQj8exsrICq9VKSw7EUZpooEgkErDZ\nbMoDJ/dwd3cXv/3tb3H79m3EYrHX1u6o1ndQSWEjVmAkyeLxeBgcHKQOL3w+n74zr/OuPDme/qI/\n560OyABokNrf38fc3ByEQiE9NhaLRXR1daG1tRVSqRSpVIra0dR75/8uFItFahBKOK2kPl4vihgZ\nh9br9VAqlVQf1+l0Un3eSm9DQg0bHR2FUCiEx+OBw+HAgwcPsLKy0jDW7cDjIz9hfBCut9/vf+6/\nIYMjUqkUCoUCHA6nbnKo5BQZDofx4MEDlEol7O3tIRKJUBYPi8WCyWRCZ2cnHQEngjy5XA5utxsL\nCwvY3d1tmFIS8DhDrXz+s9kswuEw9ZsMh8NUEZEwLcjQ0et87suiIQPykypUxwmyu9vtdgQCAcpF\nXF5extjYGP7pn/4JExMTGBwchN1uPzKJ1Mgg9DaiMkYCReWU36vgdWqCRHzm0qVLsFgsODw8xM7O\nDq5evUoHIkhA5vF4EAqF6O/vx/T0NDgcDtbW1nD16lUsLCzQ+vHr1PeqWaclJxCyAbrdbty+fRs2\nm+25/4bL5UKpVKKlpQUGgwEcDofWXmsN0mcJBAKUD0+SEqKHTLQ5Tp8+DYPBQOl9RFc7FArB7XYj\nGo02VDAGcOR0WLk2kg0T+iixc1Kr1a/V1HtVNGRABmpnhkiQyWTo8ZLL5cJoNNKaE9k1RSJRQ0k+\nfhsqywClUgnhcBiBQACZTKbmBo4kc7RYLHjvvfcwNjYGk8mEUCiETCZDBZWAx6UWuVwOo9FITyek\n8UXcvxvF1JSgUCggkUhQ6VClUgmLxUJF230+H4LBIGUytLe3o6OjA2azGRMTE+ByubBarbh27Ro2\nNzfrcg1klP9ZPRKSzZtMJgwNDUGj0dDSitfrxfz8PG7duoVIJNIwpxaCbysbkI0oGo3i8PAQSqXy\npcpN1UZDBuR6T1yx2WwYDAb09/dDJpM99+81ShbwbSDZvNfrhcPhqEsTjEw/jY+P42//9m8hEono\n4AQRLSdlFRKQVSoVBgYGoFarUS6XEQqFqHNKtdgu1fweiLtxLBZDsVhEZ2cnWlpaoFarweVycffu\nXQSDQTCZTMhkMkxPT+P8+fM4deoUlUR98OABfvnLXyIajVZtXdUCYYC0tbWhv7+fjoETzYrf/OY3\nuHv3bkNZar0o8vk8pZsajUakUil4vV4cHh42tSyeB9K11mg0kMlkSKVSSCaTSCaTz2U9PHk8IR1W\nklkR+Us+n4+Ojg709fVBLpdTm/Te3l5IpVJ4vV6srq7ixo0bz+SUNiJIeUCj0VBFORIIaw0y1EHE\n2jkcDn0JotEoZDIZ1WomIkOdnZ3o6+tDS0sLmEwmYrEY/H5/wyqHkRrs/v4+fv3rX+PChQuYmprC\n0NAQeDweOjs7MTMzA4FAAJVKhcHBQZjNZmg0GjgcDszNzWF+fh7RaLShBl3IaZCoJZ46dQoqlYq6\nSROHZhLAGr2U9yyQ6yAMKpFIBKPRCKVSWfPT8BsVkDkcDoxGIzo6OuDz+eD3+ymd5FlEbFLIJyBC\nNCQbIwV9uVyOmZkZfPzxxzCbzTAajTR4E5uX//3f/8Xc3Byd7Gt0tLa2Ynh4GFqtljrqksZRPfDk\n/cnn81TcSa1Ww2AwYHBwkHJB1Wo1WlpaKH88Go3SgZ7jWFu1grzVasV//ud/IpPJUBW0np4eTE9P\nIx6PQyqVUrlHolWys7ODX//619jZ2WmIcfYnwWQyMTIygp///OfQ6/VUl5rYnTkcDrqRVOt7rGUf\niQTkQCBAWSSdnZ3QarXNgPw8VGo3EFFyDocDrVYLtVqN0dFRaLXaI7zIbDZLm1rlchkCgeAIT5To\nDggEAphMJnR3d0Mmk4HNZiOTycDr9eKLL77A/Pw81tbW4PF4Gq4+9jwQqp5er0d7ezvEYjFkMhnl\nJdeSaVHZNMlkMlSHtre3F3K5HCMjI3Rcl3TtCRVsZWWFsioODg6qTnMTi8VUs6Aa5Zx8Po9YLIab\nN2+iWCzi/PnzOH/+PJXZJC7P4XAYOzs7mJubw927d+FyuRoqMyYgzVWlUgmNRkPNW3O5HDweD778\n8kt8+eWXVZ/Iq6RtHvdzSq4nn88fafLV4368MQEZACXRh8NhKgokkUjQ29uLH//4x+jv74dYLD6i\nOEWYBqVSicpaEqEU0mwiDAQSxOPxOILBINbW1vC73/0O169ffy12QT3g8Xjw6NEjTE5O0s1IIpFQ\nJ+BaHi0Je4J4q+l0OigUChiNRrS1tR35XglTIRQKIRgMYnV1FV9++SU8Hg/C4XDV1038/PL5PM2+\nX+c+E+nMBw8eYHV1FfF4HAaDgZaMyKbkdDpx69Yt/OpXv8LOzk7DPlsCgYDqoSiVSrBYLMoH93g8\nuHXrFu7cuVN1N5DKycdaBGTy3JFTNZFLqDXemIBMygShUIh2ssvlMiXku91uKBQKmoERx4ZKR2ki\n9EGYEsR9mclkUsL40tISlpaWKDF+c3PzjQvGAHB4eAiXy4VwOIxUKgWBQEBV4mpdciGfOT8/j0wm\ngytXrmBqagpKpZIK6ZAXIRaLIRwOY2FhATdv3sT6+jrcbje959UG4diS2nS17jPh5s7PzyMcDh8x\nZSVaHD6fDz6fr6rH/Go/pzqdDtPT0zCbzVRsh5SbyAQfCWRvMkhpolQqgcVi0SnXWuONCcikaRKP\nxxGPxwE8bhY5nU5sbGygWCxSXdNyuXxE7UwikVAKEpnCyeVy9OieTCYRj8dx48YNfP3119Rz7k0F\nkfKMx+PI5/NHSjn12FzK5TLW19fhcrnoBqnT6SiLhRwVI5EIAoEAbty4gc8//xzxePzIBlxtkEyo\n2iUc0tvY3Nw
8QmM7rtpotWudJDvVarU4deoU2tvbaUO4WCzC5/Ph4OAAsVjsWFUDa/GskpFxiURC\nFdmInk2t8cYE5GeBZF5erxefffYZbt++DTabTUcl29raYDabcfHiRQwPD9N5/FAoRGkugUAAfr+f\n/n+kWUiC/psKEuDIpkVcHOqZ6ZP66pdffolHjx5RiUPg8b0kpYNAIIBwOEzLHce1bvKza/W9vCkn\nLcJA0mg0GBgYeMrggGTIRKir2tdFGvW1ABlAIiXP/f19fPXVVzTJqyXe6IAMgIrmbG5uHtEfZTKZ\naG1thc1mo/VLEpD9fj+CwSANwF6vl/75m1ieeBaIfGgikUAkEgGLxaJGmvW6vlKphEwmg93dXezu\n7tZlDU/ipNxv8txXC2w2mwo7GY1GyOXyI5+VSCSqzqyoxMtqQLwOSIOfOGW7XC4sLi7Cbrc3eciv\nArJrPwmPx4NIJAKHw4H/+Z//oQ0XMo2Uz+eRy+XosfWkvJzA4wkkt9uN9fV1sFgs7O/v18XMtYna\noJr3lMfjQalUQqlUUhnRJz+rFs22WiCVSmF3dxdarRYA4HQ6qXlFMyC/Ap73cBC5zUgkUodV1Rck\nIFutVvoyeTyehtIZaKJxQZgGdrsdN2/ePNKHCQaDWFpagsPheKZ92JuGVCqFra0tlEoleDwe7O3t\nvbAzT7XRqKIMzYhRBRCdZ+KEkkqlEA6H6zKj38SbBTKI1dLSApPJhLa2NhgMBjgcDjidTjgcDgQC\ngbrRw6oJUp4hPo1EM/2Yei7fGnObAfkEg1jQk6mwSq5lE018FxgMBgQCAcRiMZRKJXVijkQidRPf\nOQ5UmpASnvUxWrY1A/LbjCcbPc1yRRMvg8oBjUoj05Pah6jBIEozIDfRRBNNNAi+NeYyv+0Pm6gO\nXtRxttFxUq7jSZzEa2rizUSjPonNDLmJJt5QkJpsPcb03wB8a8w9EbS3Jppoor4gDWRigcTlcqls\n6pugHw58w0qqFB7LZrOUt18rNANyE000IN4URxrgMUVucHAQly9fRltbG5RKJa5evYo///nPSKfT\nDSkt+iRkMhm6urrQ1tYGvV6PtbU1zM3N1ZTW1wzIbzie7IITUZsm3jzweDzw+XzI5XJIJBJKMSNi\nWI0I4hGo1WoxNjaG999/H11dXVAqlbDb7fj8888bPhgTHrLJZMLMzAwGBgbQ3t6OcrmM27dv11Q7\nvBmQ33AwmUxwuVzqCpzL5U4MP/Rtg1KphNlsxszMDCYmJvCXv/wFc3NzCAaDSCQS9V7eUyC1Yq1W\ni4sXL+L8+fPo6OiAQCBAMpmkpq+NbupAzBJmZ2fxox/9CO3t7eBwOFhaWoJAIKDCXLVAMyC/AuqZ\nlTIYDPB4PAgEAshksiNODkwmk3oN2mw2uFyuE8sXPUkgzuZmsxmXLl3C7OwsxsbGsLGxgdu3b1O5\n0kYDcX5pbW3F+Pg4BgYGIJFIEAwGsbOzA7vdXlcxqxcBMZ0dHR3FmTNnMDw8DKVSiVwuB41GA51O\nh3K5XDPj2WZAfkkwGAywWCyakZKstFYBmcViQSaTwWg0YnBwEMPDwxgbG4NCoQCTyUQikcDh4SF+\n9atf4fe//31Nj1tNvBq4XC6kUilGR0fxySefQKfTgcfjUSH7Rs0w2Ww25HI5DAYDenp6oNPpkMvl\nsLS0hD/84Q9YWVlp6IYeeZeVSiVmZmYwPT1N9dTZbDaUSiW6u7uRz+ebAbkRQezsNRoN9Ho9stks\nYrEYAoEAQqEQgOObhGMwGJBKpdBoNBgcHKSu2N3d3ejq6oJUKgWbzUYul6Oz+BwOB/F4HJFIBHa7\nnZo4NmKQlkgk0Ol0UKlUUCgUyOfzSKfT1PqIbH5EMzmXy4HBYIDL5YLD4YDL5QL45vs/ODiA0+lE\nOBxGIpFouGslIEd+o9GIqakpnDt3Dp2dneByuUilUsjlckgmkw2pFcFgMKg58IULF2AymZBIJLC+\nvo65uTk8ePAAwWCwob97oVAIi8WC2dlZWCwW6HQ6cLlcGqjb29tx8eJFKsifzWaPfXNsBuSXAI/H\no4aq586dg8/nw87ODtbW1hCJRI41S2YymdDpdBgaGsJ7772H8fFxyGQyCIVCsNlsKkZPTCk//PBD\nXLp0ibqp/N///R8ePHiAaDRaFe+4aoLBYEClUmFqagqjo6OwWCyIx+Pw+Xxob2+HwWAAn8+nbg6p\nVAqRSIQeN6VSKaRSKbVH+vTTT/H5559jZWWlIWuvBEQEvq+vD7/4xS9gsVjA5/Op5kg6nUYqlWq4\nJi3ZSDQaDX74wx/iypUrkEqluHXrFn77299ieXkZTqezYTN74Jv3SSqV4r333sOPfvQjmM3mI646\nDAYDXV1dkMlkCIfDePToEaLR6NsZkGtpAf5tYDKZ4PF41DWho6MDer0enZ2d6OnpQTgcRldXF9Rq\nNaRSKex2O3w+X9XXIZPJoFKpMD4+jlOnTqGlpQX5fB52ux3JZBLJZBIsFgtCoRBGoxEmkwkcDgcq\nlYo6a5fLZfT19SEQCGB3d5cacNb7ZZfJZOjp6cH4+DjOnj2Lnp4eGAwGZDIZHB4eUlcR4vRCzAQS\niQSUSiUGBgag0WjQ2tpKhWHGx8ep7CpxCq/3dT4JBoMBsVgMo9GIvr4+mEwmKJVKMBgMBINB7O3t\nIRgMNuSRn8fjYWBgALOzs+jp6YFEIkE6nYbX64XVaoXX66UmxI0IBoOB1tZWesrU6/VgsViIRqM4\nPDyk2TOh6xGPvWo7nj8LDRuQG+FmMplMiMVi9PT04JNPPsHMzAwMBgNtoGUyGaRSKWg0GkilUnzx\nxRfw+/1VX7tSqURvby+mpqYwOTmJQqEAj8eD7e1tHBwcwOfzgcViQSqVYnp6GiKRCCKRCAKBAFKp\nFDKZDJ2dnYjFYvB4PLh27RpcLtexGYe+7LVdvHgRV65cwenTp6FQKI7cf6/XC5fLhUePHmF5eRmr\nq6twu93I5/Po6+tDsViEWCyGyWSi1l39/f3gcrlYXl7G8vJyQ1IBSQlqYGAAvb29kMvltOzidrux\nuLh4LM/S64IowE1PT+Nv/uZvYDQaUSqVEIlEqGlrPB5vuHVXgslkor29HePj42hvb4dIJEIul0Mo\nFILNZqOytWRTz2QyEIlE1J3+ONGQAbmWN5PFYtEvutL+CQDkcjkuXLiAs2fPYmhoCFKpFKlUClar\nFdvb2ygWi2Cz2VhdXcXW1lbVC/+tra3o6OjAxMQETp06BalUimAwiM3NTVitVrhcLkqJIh1v8hAZ\nDAYYDAa0t7dDq9WCx+NBJpOhVCpBrVbTUke96skk62hra8PIyAh6e3upQ3i5XMbm5iYWFxexv7+P\n/f19eL1eeL1eatxaLpeRzWbBZrOpGzI5WXE4HJpZs1ishtOq4HA4UCgU6OvrwzvvvINTp05BKBTS\nUWObzYa5uTm4XK56L/UISGbZ1dUFi8WCzs5OiMVihEIh3Lx5E4uLi0il
Ug0djAmIvdn+/j4ymQzs\ndjvcbjeCwSDEYjHa29vB5/OpJKdOp0MwGDz2db2VAZkEXRaLRWuu5HPJZ7NYLBgMBrzzzju4fPky\n5HI5crkc3G437t69i7/+9a+0hmm323FwcIBoNFrVcoter8fZs2dx8eJFTE1NYWNjA8vLy7h9+zaW\nlpZoPZgYQhLfPKfTid7eXgwMDIDNZkMmk0EkEkEoFILJZEKpVEIoFILD4dSNtE8MNDs7O2GxWGA2\nm2nTrlQqYX19Hb/5zW+wubmJvb29I/eG3LtyuQyhUAg+n38k6JIGIPnVaOByudDpdPTYPzg4CA6H\ng1KphFwuB7vdjtu3b+Pw8LDeS6WobECOjY2hv78fBoMBDAYDHo8H8/PzWFpaqotT86sgk8kgHA5T\net7CwgIODg6QyWSg0WjQ3d0NvV4PnU4HJpMJuVz+lI3VcaAhA/Jxgox5qlQqdHR0wGg0orW1FTwe\nD0wmEzabDU6nE2q1Gp2dnTCbzcjlcpibm8P29jZ2dnZgs9lgt9vpUEY8Hkc8HqeWL68bjMnDbzKZ\ncOnSJbo7Lyws4Pr169je3qYGk5U1RkKT8vl8yGQy8Hg8NAMmtW4ulwuBQACRSEQ7yrXOaMgDfvny\nZbz77rtobW09spEVCgWEw2FYrVZEIpGnvlMOh0PpVp2dndBoNDRAl8tlRCIRuFwuhMNhJJPJhppy\nI7Xj0dFRnDp1CnK5nN6DeDyOYDAIn8+Hw8PDhppw4/F4EIlEGB0dxeXLl2EwGBCPx7G1tYXbt29j\nfX0dfr+/IRkhT6JcLlOuNMl6nU4notEoisUiOBwOYrEYTCYTjEYjbDYbNjY2amIF91YFZFL/0ul0\n6OrqwsjICHp6emA2myEQCMBms7G4uIj19XUYDAa0tbVBKpXC5/Nhfn4et27dwvr6+rFTqbhcLsRi\nMTo6OnDq1CnE43E4nU6srq7i7t27iMfjz3xZy+UyPTL6/X7Y7XZaT5bJZFAoFOBwOJRi5nA4cHh4\nWNP6KoPBgEQigdFoxOzsLM6dO0eDEvDYLfvw8BBOp/PIYEHlZtrV1YWBgYEjzbByuYxisQiv14ut\nrS1qMdQoR2gmkwmhUIjW1laMjY1hZGQEMpmMrj0ajcJqtcLj8TQMO4TcF5VKBZPJRBvLwDcejXfv\n3sX8/Dz29vbeGO/KcrmMw8ND5HI5OJ1OlEolunGT8qVYLIZarYbRaESxWMT+/n5N/APfmoBMOKtG\noxE//vGPMTY2BpVKRa1pGAwGCoUChoaGoNPpwOFwAACbm5vY2dnB/fv3ab3puF9wjUaD4eFh9PT0\ngMvlIhaLwWazwe/3Ix6PP5WFkJeGXAPJ1IvFInZ2dmhwb21tpQ/amTNnkEql4Ha7a5rVcDgcWCwW\nnD17Fl1dXZBIJEeaJaSOStzBKzcLLpcLvV6P0dFRfPDBB5iYmIBWq6U15EKhgEwmg6WlJfzlL3/B\nwcFBwwRj4Jv19/f3Y3p6GiMjIzCZTJRjXSqV4Ha7cefOHRwcHNR7qRRkSGJmZgY/+clP0NPTAxaL\nBavVipWVFdy5c4cmKW8KyCmskv5ZLBbB5XIhl8sxOjqKn/zkJzCbzVAoFODxeEin0zV5T96KgEwa\nXiaTCZOTk7h06RKGhoZoLZLFYiGRSCCdTqNYLILBYCAWiyEUCmFzcxObm5vY3t6uCdGdwWBArVZj\nbGwMHR0d4HA48Pv9ePToESWnP+/fAd+ULcg1EBddmUxGRWrK5TIkEgl6enqwsbEBFot1rNdTCVLi\n6e/vx5kzZ2A0Gp+q/8ZiMTgcDvh8PhQKhSPfN4/HQ3t7O8bGxnD+/Hl0d3fTJmCxWITH44HNZsP9\n+/fx8OHDhqrBCoVCqFQqOqLb3d0NpVIJADQ4uN1uLC0twe1213m134DBYEAmk6GlpQUTExO4cuUK\nCoUC4vE4dnZ2sLCwgI2NDbjd7oYqC70IKtdLTl5k6GpmZgYzMzOQyWQoFArgcDhPPYvHhbciIHO5\nXGi1Wvz4xz/GlStX0NnZSbvaZNDA4XBgf3+fZsShUIjyXUmN+Lh1IQhTQC6Xo6+vj/IjiRX7817U\nynU92VTMZrM0EyCMA6KdwGazKVWsJg/b/1fVIuUilUp1hB1RLpexu7uLP/3pT1hcXHzq++bz+ejs\n7DzCyABAxV/m5+fx29/+Fpubm3QDehHU4vpbWlpoE29mZgZqtZpumtlsFtFoFB6PB7u7u3Tqs54g\n98VsNuPKlSsYGRkBh8NBOByGzWbD6uoqHj16dOwDUccFsmbCyBGJRBgeHsbPfvYzTEx+ostvAAAg\nAElEQVRMQCqV0mSFvCe1uM4TH5BZLBb6+vowMTGBc+fOwWKxIJfL4eDgAA6HA9FolB7dHQ4Hdnd3\nYbPZEI1G6zJ2S+qMLS0tKBaL2NzcxNbWFnZ3d1+ImF653lKpRDNmIr7NZDJRLBaRyWRqPkkll8vR\n0dGBtrY2tLa20qZi5Vo9Hg/u3bsHm812ZJMhEo99fX3o6emBSCSi1xuPxxEIBLC6uoqbN28imUw2\njOIdkdQcGBjAuXPnMDQ0BKPRSEti5XIZiUQCLpcLbrcboVDoO5kKT25ild9TtZ5XDocDgUAAs9mM\n2dlZdHZ2gs1mIxqNwmazYX9/H06nkwq4N1Jp6EVQLpcp46i1tZWq7E1NTaG9vR1sNpvOGWSz2de2\nL3tR9tWJDshkNPXSpUv46U9/CoPBAABwuVy4d+8ePv30UzidThQKBZpJZjIZZLPZqhxRXpYCR246\n0cxwuVxYXFzE8vLyKzffCLeax+NRJkk2m6VW7rXMbvR6PcbGxmgwrqSkEY2KaDQKu92OcDhM/4zF\nYkGlUqGzsxODg4Po7u4+wtsNBAJYXl7G3t4e7ZS/DI5Tf0QsFkOn02F6ehrvv/8+DAYDrXmT9cdi\nMezu7sLj8Xznc0dOOE9uZE8G6dcFn8+HVquF2WyGxWKhpxmii+Lz+RCLxWgZ7E0DeSdMJhMmJibw\nzjvvYHh4GC0tLWCz2WAwGEgkEnC73YjH41X5PDLa/2040QFZJpNBp9NRAZ5cLgeHw4GbN29ifn4e\nKysrtC7cKLs84XVev34doVAIa2tr2N/ff6WGAofDQU9PDyYnJ2mjkpQrBAIBDdC1KlloNBr09vZC\nqVTSzyUgR/dMJkMFkIBvTgx8Ph8jIyO4ePEiOjo6aCMsm80ikUhga2sL169fx87OTsPoJ5Dvuaur\nC5OTkxgdHYXJZKJTngBoA9blcmFhYQFWqxWZTOY7X9rKe1Xt+8ZmsyEQCNDV1YWZmRlMTk5CpVKB\nxWIhmUzCarXiwYMHDT8e/V1gsVgQCATo7OzE2NgYBgYGYDQawePx6PsQDAaxuroKr9dblet8kZ9x\nogMy0aDQ6/Xg8XgIBAJYX1/Hn//8Zzx48OCFHv56YHt7m3KJSRbysiAZwOnTp/HBBx9Q0W3gG0Fu\njUYDuVxOs4HjBoPBoALsRAi
[remainder of base64 PNG data elided; the output is a 6x6 grid of generator samples (MNIST digits) produced by this cell]",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "def _get_next(iterable):\n",
+ " try:\n",
+ " return iterable.next() # Python 2.x.x\n",
+ " except AttributeError:\n",
+ " return iterable.__next__() # Python 3.x.x\n",
+ "\n",
+ "# Run inference.\n",
+ "predict_input_fn = _get_predict_input_fn(36, NOISE_DIMS)\n",
+ "prediction_iterable = gan_estimator.predict(\n",
+ " predict_input_fn, hooks=[tf.train.StopAtStepHook(last_step=1)])\n",
+ "predictions = [_get_next(prediction_iterable) for _ in xrange(36)]\n",
+ "\n",
+ "try: # Close the predict session.\n",
+ " _get_next(prediction_iterable)\n",
+ "except StopIteration:\n",
+ " pass\n",
+ "\n",
+ "# Nicely tile output and visualize.\n",
+ "image_rows = [np.concatenate(predictions[i:i+6], axis=0) for i in\n",
+ " range(0, 36, 6)]\n",
+ "tiled_images = np.concatenate(image_rows, axis=1)\n",
+ "\n",
+ "# Visualize.\n",
+ "plt.axis('off')\n",
+ "plt.imshow(np.squeeze(tiled_images), cmap='gray')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "# Conditional GAN Example\n",
+ "\n",
+ "In the conditional GAN setting on MNIST, we wish to train a generator to produce\n",
+ "realistic-looking digits **of a particular type**. For example, we want to be able to produce as many '3's as we want without producing other digits. In contrast, in the unconditional case, we have no control over what digit the generator produces. In order to train a conditional generator, we pass the digit's identity to the generator and discriminator in addition to the noise vector. See [Conditional Generative Adversarial Nets](https://arxiv.org/abs/1411.1784) by Mirza and Osindero for more details."
+ ]
+ },
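+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To make the conditioning idea concrete, the next cell is a minimal sketch and is *not* the network architecture used elsewhere in this tutorial: the one-hot digit identity is simply concatenated with the generator's noise vector and with the discriminator's flattened input. It reuses `tf` and `NOISE_DIMS` from the earlier cells; the batch size of 32 and the digit 3 are arbitrary illustrative choices."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Minimal conditional-GAN sketch: the label is concatenated with each network's input.\n",
+ "def conditional_generator_sketch(noise, one_hot_labels):\n",
+ "  net = tf.concat([noise, one_hot_labels], axis=1)\n",
+ "  net = tf.layers.dense(net, 1024, activation=tf.nn.relu)\n",
+ "  net = tf.layers.dense(net, 28 * 28, activation=tf.tanh)  # pixel values in [-1, 1]\n",
+ "  return tf.reshape(net, [-1, 28, 28, 1])\n",
+ "\n",
+ "def conditional_discriminator_sketch(images, one_hot_labels):\n",
+ "  net = tf.concat([tf.layers.flatten(images), one_hot_labels], axis=1)\n",
+ "  net = tf.layers.dense(net, 1024, activation=tf.nn.relu)\n",
+ "  return tf.layers.dense(net, 1)  # unnormalized real/fake logit\n",
+ "\n",
+ "# Hypothetical usage: ask the sketch generator for a batch of '3's.\n",
+ "sketch_noise = tf.random_normal([32, NOISE_DIMS])\n",
+ "sketch_labels = tf.one_hot(tf.fill([32], 3), depth=10)\n",
+ "sketch_images = conditional_generator_sketch(sketch_noise, sketch_labels)\n",
+ "sketch_logits = conditional_discriminator_sketch(sketch_images, sketch_labels)"
+ ]
+ },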
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "## Data input pipeline"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 17,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAfkAAACHCAYAAAAV6SDhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsnelzlFd+7z+9r+pN6k3d2vddCMS+mAHMeJnMxOM4mSRV\nqeTWzZv8AffPuK+SSiUvUpXUrcpyM2NP7DtgDBgMRiAhgfZ9V7ekXtTaWq1e7wvqOQM22NhGUuM8\nnyqqbCGh093nOd/f+a0gIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj\nIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj\nIyMjIyMjIyMj8/qgOMDfnTvA3y0jIyMjI/O6860artyPVcjIyMjIyMjsP7LIy8jIyMjI/EiRRV5G\nRkZGRuZHiizyMjIyMjIyP1LUB70AGRkZGRmZ1x2tVovBYMBut+NwOHC5XCgUCra2tojH4yQSCQBi\nsRjLy8tks1lyub3PP5dFXkZGRkZG5gdiNBrxeDw0NTVx6NAhTp48iVKpZG5ujsXFRSKRCADDw8OE\nw2FSqZQs8q8jJpMJn89HZWUl1dXVOJ1OCgoKXvj9oVCIQCBAMBhkeXmZlZUVcrkcGo2Gzc1NdnZ2\nSKfT+7IZvgtGo5Hjx49TX1+P1+ulq6uLL774gp2dHVKp1EEv7xsxm804nU7sdjtut5vGxkacTica\njeaZ71MonlSnfPrpp9y9e1d8FjJ7h81mo7GxkaamJurr61EoFASDQT7//HNmZmYIh8MHvUSZfcRi\nsWCz2SgqKqK2tpaOjg7U6ufLVi6XI5VKMTw8zPXr11lbWyMej+/5Go1GI1VVVULYvV4vXq8Xn8+H\nQqHA4/HQ2NjIzs4OAIWFhczOzrK8vEwsFtvz9cki/4pQKBTodDq8Xi9Hjx7l+PHjHD16lIqKCgoL\nC1EoFM8V6rm5OUZGRpiYmGBiYoKxsTFyuRw6nY5QKMT6+jrJZFK4djY2NtjZ2SGZTB6Y8CsUCoxG\nI6dOneKtt96ivr4eo9HI6Ogoy8vLeSnyer0eo9GI2WzG4/FQUVGB3++nqqqKc+fOUV5ejk6nE8Iu\noVQqUavVrK2tsbGxwfb2Njs7O+zs7LC7u5t3BphSqUSlUuFwONDpdKytrb02xonBYMDv93Pu3Dku\nXbrEuXPnUCgUDA4OsrKyQjQazWuRNxgMmEwmADQaDWazGZVK9dzvzWQyrK2tsbm5eaDPcr6hUChQ\nqVRotVr0ej2lpaVUVFRQUVHByZMneeedd9DpdM/92VwuRyKR4MaNG4TDYUZHR1laWiKVSpHNZvds\nzVqtFq/XS2dnJ3/0R3+EyWRCq9WKv/d4PM98fyaTYXR0lL6+PnZ3d9nd3d3T9cki/4pQKpWUlZVx\n4sQJ3n//fWpraykqKsJoNJLL5V74EDudTgwGA3V1dUSjUZaWlsRNXhIVSUh2d3e5efMm/f39YvMe\nBCqVCoPBgM/no7i4GIPBgNfrpba2lu3tbba3tw9kXd9EWVkZbW1tnDx5ksrKSmw2GxaLBYvFQmFh\nobjFP/05KRQKstks58+fp7S0lFQqxeLiIsPDwwwPDzM1NcXa2hq7u7sH9bK+hk6no6CggD/5kz+h\nsrKS//zP/xTuwXxGqVRSXV3N6dOnuXTpEvX19eKzeF0EsLq6msOHDwNQXFzMiRMnMJvNz/3era0t\n/uM//oO7d++ysLBAMpncz6XmLSqVCpvNht/vp6amhiNHjtDW1obb7cblcqFWq79xP2g0GhobG/mr\nv/orPvnkE27dusXy8vKe3uh3dnYYGxujubmZZDKJwWD4xu+vr6/nr//6r/m///f/ks1mmZub29Mz\n80cl8iqVCrVajcPhwOl0UlxcjMlkEgf49vY2S0tLBAIBlpeXX+nvVigUOBwOysvLaWxsxOfzvdCt\n9DQGgwGDwSDWW1JSIl5LIpF45ha/u7uLTqfDbDZz584dQqGQcAHtJ5LRolAoxM3RYrHgdrtfaGW/\nalQqFRqNhkwmQyaTeaEhpdfrsVgsHD58mEuXLtHZ2UlxcTFarRaNRvONh4b0GsvLy4U1vrKyQnl5\nOSaTiVQqJW70e4VSqUSv16NUKslmsySTyW+8lbvdbjo6Ojhx4gRut5vr16+/1D48aBQKBS6Xi8rK\nSsrLy3E4HM+IfD4KvUqlEoauz+ejpaWF5uZm4Pefg9FofO7PxuNx8ex+9NFH+y7yGo0GvV6P2+3G\n4/FQWFjI1tYWY2NjrK2tHYihrlKpKCgooKOjg/b2dpqammhoaKCyspKCggI0Gs237gOVSoXL5eLY\nsWPMz88zMzNDLBbbU5FPpVKsrq4yNTXF48ePqa+vx+fzoVQqyWQyJJNJtFqteA4LCwspKChgbGyM\n6elpVldXZZF/WdRqNWazmaamJk6cOMGlS5coLS3FYrEAsLi4yCeffMKnn366JyKv1+sxmUzo9frv\ndbBqNBoKCwuf+drTmzqbzWK323E6nYTDYdLp9IGIfCaTIZFIEI1GWV9fx+VyodVqMZlM+yYoGo0G\nq9UqXOeS0H+VgoICampqeOONN3jnnXcwm83C6Puqa/5FGAwG9Ho9uVyOgoICKioq2N3dJRQKsbi4\nyPr6+it9bU8jGa0ajYZUKkUsFmNra+uF319VVcWvfvUrysrKWF9fZ3d3Ny/DJ8/DbDZjtVrR6XQv\ndHPnExqNho6ODt566y1++tOfUlhYKPa/FOZ5EQaDgZ/97GdoNBpu3ry5p3voeeh0OtxuN2fPnuX8\n+fN0dHQwNzfH3/7t39Lf338gIq9Wq3E6nbzzzju88cYbVFRUCHF82WcVfv+8lpWVUVZWxsTExB6u\n+sm5nEgkmJyc5JNPPkGtVoszcXd3l7W1NWw2m/DqqNVqVCoVJSUlVFZW0tfXt6fr+1GIvE6nw+Fw\n0NjYSEdHB42NjdTW1lJZWYnVahXxEb/fz4ULFzCZTDgcDnp6epibm3sla1Cr1Rw9epQzZ8680EX3\nbSgUim/czEqlErvdTltbG//jf/wPRkdHGR8fp7u7m7m5uX2N7e3u7jI8PCxuXlIy237d5C0WC3V1\ndaysrBAMBonH48+Na0nCOD4+zqNHj2hpacHhcJBOp8XDlkwm2dnZIR6Ps7i4SCwWw+v14nK5cDqd\n4nNRKBSkUim2t7cJBoPMzc3tqZHl9Xqpqanh3LlzpFIpBgcHGRgYeK7Ia7VaHA4H1dXVtLS0EAwG\nGRgY2HNX5avAZrPh9Xo5fPgwbW1tmEwmdnZ2iMVi3Lt3j7t37zI4OLgvSUovi81mo6SkhHPnznHq\n1ClcLhd6vf4bfyYejxMOhwmFQqytrQEwMjKy5+EetVqNTqcTYrm9vU1lZSV/9md/RktLC8XFxQQC\nAXp6epiZmXmuweHxeLDZbCwtLbG5u
flK1qVQKNBoNJSUlNDa2kpZWRlVVVWcOHECv9+P0WgU5+FX\nz8VcLkc2myWdTpNMJtHpdOKcl55VlUqFSqX6TgbC9yWXyxEMBrl9+zYNDQ00NDTgcDjIZDLs7Ox8\nTRMUCgWzs7MMDg7uuUH12ou8UqnEbDZTV1fH+fPneeedd/D5fFgsFnZ3d9na2iKVSqHT6dDpdLS3\nt1NQUIDVaiUUCjE/P/9KhFGlUlFRUUFNTQ1qtVqUR0ibMJFIkM1mUSgUqNVqkViiVCrFz0v//SIk\nb4EkrAsLCwwODrK+vs7Kysq+lWTAE/EcHR2lpqaGixcvYrVa8fl8mEwm4VreS3Q6HYWFhcTjcUKh\n0Avfu0QiQTgcpr+/H4PBgNlsZnd3l0QigdFoFLkPUh3r+Pg44XCYbDaLwWCgqKhIJE1ms1nW1taY\nmZlhdHSU6enpPRX5kpISjh8/zrvvvksoFCISiTA1NfXc79VqtZSUlFBVVUV5eTmDg4M8fvyYlZWV\nA/H2vAxKpRKNRoPP56Ojo4POzk7q6+vR6/Wsrq4yOTnJhx9+yPXr19na2sqruHVxcTHHjh3jxIkT\nNDQ0kEqlxFkDT253qVRK/AFYX19nenqamZkZFhcXAfbcUIQn2d/Ss6lQKJicnKSkpIQPPvgAl8tF\nLBbj+vXr3L59m+npaVHP/TROp5OysjJisdgrE3kpzNfQ0MB7771HS0sL5eXlGAyGb/UI5nI5kskk\nm5ubRKNRnE7n17yg+000GiUajTI2Nsbi4qIwOl50Ji8uLjI2NrbnRvhrL/I6nY7S0lL+4A/+gOPH\nj1NWVgbA8vIyU1NTTE1NMTc3R0NDA/X19dTW1mKxWCgvL8disQhB+qHimE6n6e3txel00tDQgMlk\nIpPJsLy8zMzMDENDQ8TjcZRKJR6Ph/Lycpqbm8WDZ7PZMBgM3yr0gIiFu1wuGhoa8Hq9mM1mtre3\n91xcJdLpNKurq6yurpLJZCguLqa9vZ3i4mJGR0f3fC3hcJiuri5xA39RnFq6yY+NjQlxN5vNrK2t\nUVRUhMlkYn5+XoR5KisrOX78OAUFBcJggSchio2NDR4+fMi///u/8+jRI6LR6J4KT1lZGdXV1QSD\nQR4+fMhnn332wgQ6g8FATU0NpaWlqFQqFhcXGR0dzetbvF6vp7i4mDNnzvDee+9RW1srKhzm5+e5\ndu0a4+PjrK+vk06n921vvwydnZ38zd/8DWVlZWxvb7O4uMji4qIIA25ubn7ta6lUing8zvb2thDS\np2Pze4FSqaS8vJwPPvgAn89HOp3m7//+70UGezQaZWRkhAcPHjAyMvLC0I6UA/My59PLotVqRXLd\nsWPHRBLyy4RqpOdxbGyM7u5uzp49e+AiLzE0NMTNmzdJp9OUlZVRXFz8XA+nlM+0156G11rklUol\ndXV1nDlzhuPHj+P3+4nH4wwODjI2NsbMzAyzs7MsLCwQCATY3t7G5XJhMplEsonZbGZra4tMJvOD\n1pJOp+nr6yOdTjM+Po7RaCSbzbKyssL8/Dyjo6Ps7OygVCpxOp2UlpYyNTWF0WhEpVJRVVWF0+kU\nFqxKpcLj8WCxWJ67EaQyNqfTSUlJCV6vl2g0um+lUrlcju3tbeLxuIhVFxcXU1paisvlYn5+fk8P\nZSkW/21IiXmrq6vs7u4K11kmk8Hn8+FyucT67XY7JSUl+P1+4Pduv0gkQigUIhQK0dvby507dwiH\nw8+98bxKvF4vZWVlYg/Nz8+/cJ9qtVqxpxUKhbjh5HPpnMlkoq6ujkOHDokktVwuJ4yyO3fusLi4\neGDVCxqNhoKCAsxmMwaDgWAwyMbGBvDkeV9fX+fu3btEo1ECgQCBQIDV1VXgSfa89LWVlZV9X7tU\n0iuVnr3xxhvY7Xai0Sg+nw+r1YpGo2FycpJ79+4xNjZGKBR64TMrXSxelSCp1WqsVisNDQ00Nzfj\n8/meG+4IhUKEw2GSySQajUbUzQNMTk4yNDTE1NQUra2tr2Rdr4KpqSk0Gg2JRIJjx45x7NixZ0rq\nJIqLi6murmZjY2NPz5LXXuR/8pOf8P7771NXV8fGxgZDQ0P84z/+o7CkpBtAOBxmd3eXw4cPU1lZ\nKRLYCgsLSSQSr0TkHz9+zPDwMBqN5hkXr7QOyVswOTlJd3e3+D6tVsvp06eprq5Gq9WKB/TixYs0\nNDRgMBhe+HCp1Wqqqqqorq5mYmLiQFyz0gHwdCghEAjkVcKXVKkgufYqKioAsNvt/OQnP6G6uhqL\nxSI+E/i9yM/OztLX10cwGKS3t5eVlZV9ER6p4iIQCHzr56pSqTCbzc/EMfMds9lMW1sbdXV1WK1W\nYZzMz88zMDDAgwcPDtRIMRqNVFRUUFVVhdfr5erVq0LkHzx4QCwWY2pqilAoJIxJ6RzJ5XLP/P9+\no1QqKSgo4PLly7z11ls0NzcTi8VYWFjA5XJRWFiISqVibGyMzz77TJTk7le4T6/XU1RURGtrqwhx\nPo/R0VG6urqIRCJYrVbq6+tpbW2loKCABw8eMDY2llceHoCFhQVCoRDDw8Osrq7i8/nw+XxfE/rW\n1lZWVlaYnp4WORp7wWst8vD7UhCNRkM0GmVgYICFhYVn3jSFQkEikRBx8UwmQyqVIplMvtJkNenf\nexmeFgmVSkVfXx9zc3Pi1q5Wq5mZmcHr9aLX66mpqaGxsRGXy4XVahXfJ2VplpWVfa1j234hvX+S\nwSIZKvmEWq3GZDLh9XppbW3lzJkzuN1u3G43VVVV2O12cdBIt7Tl5WXm5+e5e/cufX19bG5uEgwG\n9zzBUdrTJpNJuPm+7ffp9Xqqq6vxer2kUinS6fQLKw7yAa1Wi8vloqOjg4qKCpHUuLKywt27dxke\nHj6wG7xWq+XUqVN0dHRQV1eH2+1GrVbT39/P6OgoAKurqyQSCSKRSF6FRKQzobKykra2Nk6dOkVF\nRQWbm5vcu3eP27dvk0wmcTqdKJVKNjc3Rd7GN+2VdDr9Si5DEocPH+bNN9+ks7MTr9eLUqkUpZLJ\nZJJQKMTMzAyfffYZX375JZubmxiNRoaGhuju7sZgMPDll18SDodFbXxFRQXFxcUvLFvcL9xuN8XF\nxaIktLCw8Lleiq2tLWKx2J4bsq+9yEs35Vwux9raGmNjY6yvr6NUKlEqlWi1WnQ6HTabTbSX3dzc\nJBwOE41GX5iVvZ9kMhmmp6eZnp4WX1MoFPT09AjRvHDhAul0mkOHDgkXPzyx2CUX1kGXHUk3+lcZ\nt3tVWCwW/H4/HR0dnDt3jrffflvc3KVbl2QISvHUoaEh7t+/L+KV+1WvrVarRVLg05n934Rer6e8\nvByn0ylCGfnYSU2KQzqdTiorK2lpacHn85HNZtnY2GBubo47d+4wPj5+IOuTPCIXL17kF7/4BZWV\nlSgUClZXV4WbGJ4k0e132dvLoNfrsVqttLe38+abb9Le3o7ZbGZ8fJzbt29z9epVzp49i9frRa
FQ\nEI/H2djY+FahicfjrK+v/2DvnFRaeOTIEX75y1+KXibw+1K0YDDI4OAgXV1d3L59m76+PlKpFGq1\nWnQVVKlURCIRstkser2e8fFxZmdnsdls+y7ykmGl1+uFd6qpqQmPx8OhQ4dwu93PPRNXV1eZm5vb\n87Dfay/y8PvyinQ6Laxqg8GA1WqlsrKS+vp6/H4/lZWVqNVqurq6uHr1Kg8fPnwl8fi9QOrDLDVT\nCAaDTE5OUlNT88yBn0qlGBgY4NGjR3u+WV5n2trauHTpEocPH6a6uvprLUdjsRizs7P09vYyODhI\nKBQiGAyKwRL72ZAlmUyKLObd3V2RhS4ZJM8zSiXvTyqVEvkDsViMXC6HSqXKmz2u1+ux2Wy8++67\nXL58GYfDATw54EdGRrh79y5DQ0Mitr3fOBwOqqqqqK2txe/3o9FoGB8f5+7duwSDwQNZ08uiUqko\nLS3lzTff5PTp03R0dKDRaBgeHubjjz9maGgIg8HAsWPHaG9vZ2dnh62trZdqq7qyssL6+voPzqw3\nmUzipmu325/xPiaTSRYWFviHf/gHHjx4IHJhnq5Uisfj4pmQmoNJDcV0Ot2BXDDUajV2u53Ozk7e\nfvttKioq8Hg8GI1G7Hb7Cw30paWlfQmxvvYibzAYxM22oKCA0tJSstkstbW1eDwe8cBKt5t79+7R\n3d3NzZs3iUQieVWW81WedrdKrmbJioXfl4eNjIwwPj6eF+1V881NL1FcXExbWxstLS04nU7g2bXG\n43GWlpbo6+vjzp07ole9lFi4n0ix3HA4TCQSERUbhw8fZmFhgdXVVdEJUeKriVE+n4/GxkZRk50v\nNeYFBQWUl5dz9OhROjs7MZvNoib+0aNH3L9/n6WlpQNrjVxVVcXbb79NXV0dBoOBjY0N+vv7uXLl\nCoFA4EDW9DJI7WBra2u5ePEibW1teDweZmdnmZ+fF17Cqqoq6uvrcbvdRCIRNjY22N3dfcYI1Ov1\n6PV6dDodmUxGVAN8UxOml8XhcNDR0SF6azwdiw+HwwwPD3Pnzh16enq+9rOSgfu0N0Gv1+PxeERf\ni/3q0/E0arWawsJCmpqaRFMks9n8rR44aTTtXhsmr73IW61WioqKUKvV+P1+Ll26RDqdxmAwUFJS\nQkFBAUqlkn/913/l6tWrDA4OEg6H88JN/zJICW1tbW28//77opueQqEgFosxPT3NxMQEi4uLeZXo\nlm9Ice4XPVDpdJqtrS3W19eFKO7u7h6ou3thYYGFhQVOnDghDrErV65w7949otHoc406KaHp7bff\nxufzcePGDR4/fpw3Ii81raqoqBDVJKFQiNHRUe7du8fDhw8PdPbBkSNH+Ju/+RtMJhPxeJyZmRnh\n+ctnT5laraakpISWlhY6Ojpwu90ix2FtbQ2TyURxcbFIIpSSkWOx2Nfi8RaLBa/XS1FREYlEgtnZ\n2VcWO3a73Vy4cOGZckmJ6elpkdD4skhNserr66moqDiQFs5Sp1K3201hYeFLC3d9fT1Hjx5laWlJ\nJHTuBa+9yEtJdOl0GovFQm1tLfCkft5qtaJUKkUL1oWFBVZWVvJygMpXkZLYfD4fnZ2dHD58WLTn\nhSdWbSAQoLe3l4WFBTY3N/PCaMm3GLDEyMgIn3zyCYuLi+Imb7fbxcNps9lobm5GqVRSWlrKF198\nweTkpHDVHwSjo6PY7XaKioooKSnhxIkTOBwODh8+zOTkJOvr62SzWYxGoxhrrNFoMBqNou1uIBDY\n08zdl8VgMODxeDh27BhvvfUWZWVlwlvx4MEDYYBvbGwcaGjBYDCIEEIul8PlcnHkyBEikcjX/uTD\n+yohvZezs7NMTEygUqkoLCzE7/cLN77RaKSoqAin04lKpcJqtdLa2srbb78NPKkmcDgceL1e3G43\nZrOZ2dlZrl69yuTk5Cs5N6UE2KeTc0OhEJOTk9y8eVOUJL4sUr3/0x3v9pvd3V0WFxd5/Pgxn3/+\nOc3NzZSVlX1ryeFXh2PtFa+9yEuT23Z3dzEajZhMJoxGI1qtVowelBLtwuHwa3Hb1Wg0ok1pS0sL\n7733Hu3t7eLvpUzX6elp7t+/TyAQONBbRr666J9mcHCQ1dVVpqenhciXlpZSW1vLoUOH8Hq9NDU1\nUVdXR2dnp7gRSC1vD2Kk7OTkJOl0Gq/Xi0ajob29nYqKCs6ePcvAwACRSIRUKkVhYSEejweXyyXm\nGczMzPD48WPGx8cJhUL7uu6volQqsVqttLW1cfbsWS5evIhWqxVthO/fv89vfvObvOhql0wm2d7e\nRqfTodfr8fv9vPHGG5SUlIjk2KmpKUZGRkgmk2Lc8EGTTqcJBAIMDAxw584dFAoFbW1tFBUV4XK5\nOHTokBjIBE9CfXa7nSNHjgijxuFwUFlZSVFRkbhQ3L9/n4GBgR8cqpDKgo1G4zOzPXK5HMvLy9y+\nfZvPP/+cnp6e73RGSwmzOzs7JBIJtFqtuEXv17m0u7vL7Oys6LOfy+WEIfP0jV6pVIpmP0+/H7K7\n/hvIZDJ89NFHDA0NUVpait1ux2q18uabb4ppUFK94vT0NJFIJC8eyBchxXD8fj/19fW0tLSIRiEu\nlwtANHbp7e3lxo0b3L9/n0gkcqDrztfb+9MkEglWV1fp7u4WcTupKdKFCxfo7OykqakJo9GI2+3m\ngw8+oLy8nE8//ZTBwUFmZ2f3fC71V0kmkywtLfHrX/+a8fFxOjo6qKqqwuPxoNVqcbvdopGP1DRp\nZGSE//f//h9dXV309/e/shakPwSz2UxNTQ1/8id/wpEjR0TYJJPJEIvFxFz1fHg2p6enuXr1KkeO\nHKG0tBR4In5S2+DDhw+ztbVFd3c3t27doru7m4WFhQNe9RNyuRyLi4v85je/YXJykqamJlwuFzab\nDZPJRFVVlegPIT2zLpdLZKNrtVqMRqMoH11bW6O3t1fMhvghGI1GDh06xLlz52hvbxdNqHZ2dohE\nIszOzhKNRr9zrX4oFOLevXuUlZXhdrupq6vDbrf/oLV+X4LBIJ9//jnxeJyBgQH0er3In1IoFBQV\nFXHmzBm8Xu++eh1ea5HP5XKMjIwwOzuLx+PB6XTidrtpbW0VIh8MBrl79y5zc3N5Vc/6PKRBOx0d\nHZw/f57GxkYqKyvxeDxoNBqy2SxbW1tMT09z/fp17t+/nzcHDDzJkJYSHPNN+KWY+1eTh2ZmZoTY\nxGIxKioqhJEljSlOpVIEg0Eymcy+irzUunNoaIhIJMLS0hI1NTWUlJSI8j94EucsKSnB4XCwuLjI\np59+yuTkJMFg8MA/B4VCgc/no7W1lSNHjlBeXi7+Lh6PMzExwcLCQl4kjcKTbmUff/wxq6urNDQ0\n4HK5RDtXl8slRNJisQhBjMVi+9pS+kXkcjnW19cZGBggGo0yPT2N2+3G7/dTUVGBzWajtLSUSCTC\n4uIiExMTz/WcbG5uEovFiEajzMzMsLS0BDwxdr6vt
0WaUVBRUYHX68VgMJBMJlldXWV2dpaxsTFR\nEvdd2N7eZnt7m/n5eQKBwDP7a7/Z3NwUxur09PQz0xSly5tU6uz3+9Hr9RQUFIhy2b3yMr/WIi+x\nu7vL0tKScFk/XZIgxZTyOTNWwmq1itGVP//5z0WcSXJtZbNZVldXGRgY4MqVK8zPzx/wip8lk8mI\neGW+lGx9G9vb23R1dTE7O8vDhw85f/48Z86cEWWXP//5zxkfH+fevXsHFhKRPnfpAJdER3JHHjp0\niLNnz+LxeNjc3GR2dlbE6w8apVJJc3Mzx44dw2QyPWN0RKNR7t69KxrM5ANTU1MEAgGuXr1KZWWl\nGL4kdaCsqakBoLq6GqvVyvj4OIFAgOnp6bwwVKQ5HIFAgFAohFqtpqamBoPBwM7ODru7uwwODnLt\n2jU+/PDD52bMSw3DpHynRCIhGm5NTEx8p5j5V5FKUXO5HLu7u0xPT9Pf3y8Gbb2Kf/egDdtgMEgo\nFHomu16hULCwsCAmdfp8Pux2Oz6fD6fTycLCwp4lx/4oRD6bzZJMJkUZydNZm8lkko2NjQOP9b2I\n4uJiKioqKCsrE1Ps2tvbn6kfTqVSorxICj8sLy/nXbZvNpslFovljcC8DNlslu3tbdE6NpfLsbW1\nhdFoFO3hA6/oAAAgAElEQVRWa2pqaG1t5dGjRwcWGpFaIz8v+amurk78t/Qs5IORZTQaRVlXTU2N\n6PqVy+VYWVlhcnJShNHyhd3dXXZ3d4nFYqJDpsFgQKvVEolEOHToENXV1bjdbpxOJ5cvXyadTvMv\n//IveSHy8PseG+l0Gp1OR0FBAe3t7VgsFubm5rh+/To3btxgdnb2pdcsvS+vcl9ls1kxT0LKe3mV\nHJTYPz15UEJqElZVVUVJSYn4mjR9dC/j8j8KkZcwGo24XC4MBoPoYpaPYiPNOdZoNNTU1HDhwgXO\nnj1LXV0dRqNRxIwzmYwYl3vjxg3+7u/+TiSa5GOFQDabFS6rg7amvyvSQbOxscHa2pqIyUpTDlta\nWvJOkCSkLF2p6U2+vPeSwNfV1Yn3UupQKd3gAoHAK6m/3gui0ShffvklgGgzPT4+zs9+9jMh9m+8\n8QapVIrf/OY3L5wQeFAolUpMJhNlZWWcPn2aZDIpMsC7u7u/07+1vb2NQqH4XpclScy0Wu0zsyFe\nBdI5qtPphIdL8kRIbcbzQQN0Oh1ut5tjx45RX18PPJvUu5dr/FGJvORekmLD0gSog8iMfhFKpRK3\n2015eTktLS00NDTQ0NBAaWkpFosFtVotNurq6ipTU1P09vZy9+5d1tbWyGaz4o/Mq0dquJFMJkUS\nUCwW27ehNN8F6fD0+/2Ul5ezvb1NJBJhZ2cnL27yra2t/PVf/zUNDQ0iMXBjY4NgMMiVK1e4evVq\nXpWhfROZTIZAIMD4+Djj4+OUlJRQUVFBIBDI2x4VarWa4uJiMRxFGvzzfWqyt7e3v9NsjqexWCxU\nVFTQ2dlJQ0PDK6lll1rJFhcX09zczJkzZ8TgGmkQj9SSOh/aD7e2tnLu3LlnWiNPT0/T29tLIBDY\n03yxH4XIK5VK0Z/e5XKhVquJxWIMDw8zMzPzjfPG9xulUondbqe+vp7Lly9TXl5OYWEhDocDnU5H\nLpdjY2ODSCTC4OAgfX193L9/n7GxsbxPHAT23PW015jNZgoLCzEajaLpkEqlEv+dT0izGYqLi/F6\nvUQiEZaXl1+5W/X74vf7OX/+vLhlwZOuZv39/XR3dzM0NJSX4vgiMpmMCJtIHpONjY28DE9JZ2JV\nVRVlZWXAk+ZK37cx0vcVeGktT9/kXwVqtRqXy0Vraytvvvkmhw8fxufzoVKpiEajBINBMWr8VZ39\nKpUKg8Eg+v5vbGyIrpjf9DMqlYrm5mZOnDiB1WoVfzc7O0t/fz/RaHRPw8k/CpHXarUUFhZSV1fH\nyZMnsVqtLC8v8+DBAwYHB79X1uZeoVAoRPeplpYWvF4varUalUol4u+Tk5Pcv3+f69evi01wECNk\nvytSa9WDHpTzQygpKaG9vV14VhQKBcXFxdTV1XH//v2DXt4zSINsnE4nVquV/v5+FhYW8mYwjUaj\nwWQyPWMcLS0tcfv2bRYXF/NmnS+DUqmkuLhY5BcUFRUd9JK+EanNd2trK3V1dSQSCSYmJnj48OG+\ne0/W19eZnZ2lu7sbp9NJWVnZD77NSyV5ly9f5he/+AV2u13Un0uh2nQ6/UrH52q1Wvx+P++99x5K\npZL+/n76+/uZnZ194c9oNBoKCgpobGyko6NDDEmDJ0bX2NjYnodeX3uRV6vVeL1e3nzzTU6ePElz\nc7Pogdzd3f1KLblXgUqlorGxkUOHDokWiBLr6+tMT0/z6NEj+vr6mJiYEDezfDFSvgmVSiW6ZUWj\n0X2/TUrtOKXSQ3iSLf3FF1+wubkprGVpcIRUBgVPut+VlpbS2NhIS0sLHo+HdDrN2toaw8PD9PX1\n7Wnrye+DxWKhqqqKwsJCMpkMExMTzM/PH3iGsc1mo6mpiaamJtEcRFrP2toaIyMjYnjOQaNSqaiq\nqsLn8+FwOFhZWWFmZoa1tTVxQysoKMDj8fDWW29x6tQpGhsbKSoqIpvNiiEq+XTGAHg8HlpaWmhp\nacFsNvPw4UNGRkYIhUL7noT8dG7Rq5gF4fP5aGho4OLFixw7dkx4b6UhNqFQiMePH7OysvJK91hd\nXR2nTp3izJkzqNVqPB4PgGiKtLOzQzwep6CggIKCAkwmEyUlJdTX19Pe3i7GWUudCaempvald8tr\nL/JGo5Gqqir++I//mJaWFgwGA729vdy8eZPe3t68S5TSaDS0tbVx9OjRr41E3NzcZGhoiJGREVE7\nLLm3nt6sTycVHrT4S2vI5XJoNBr8fj9+v/+FNbh7hVKpxOl0cvToUX71q19x+fJlFAoFn3zyCVNT\nUywuLoqJbE6nk5qaGkpLS0WToerqas6ePYvL5RLWdiQSYW5uji+++CIve5fb7XaamppwOBzs7Ow8\nI/IHhfQ5vPPOO3R2dopbvJQMJU37y4cmPSqVCpPJxJEjRzh9+jTV1dU8fvyYa9euMTk5KToF+v1+\nmpubef/99zlz5gzw5PVIHfvm5+fzJuwglW2VlZVx8uRJ6uvr2d3d5datWwwPD+dVwq4UCpOa8Eih\nkOftX8lDKM2Ov3DhAm+++SZVVVWoVCpxJkolpHfu3BH1/a8K6fe2tbWh1+spKysT7vq1tTUikQir\nq6sUFxfj9/txuVwcPnyYCxcu4Pf70Wq1ZLNZFhYWuHbtGhMTE/syBfW1FnmlUvnMm7i5ucng4CC3\nb9+mp6cn77J2pXGgRqPxue0M7XY7J0+epK6ujnA4zNDQEIFA4Gud1tbW1hgfHycYDAoj5mmx3Q+k\nQ25tbY1gMPiMR2K/UavVFBUV0dHRwQcffEBDQwPwxBiyWCzP3Lyqq6upqamhpqYGh8OB2WwGntzW\npBpW6T2U
2m1KNdD5EOd+GrPZLIYwJZNJlpeXf1AN86vAbrdTXl5ObW2t6MiXyWSER2RgYIDNzc28\nEMXKykqOHDnCu+++K1yp0rTCjY0NYdSZTCZRKSCxtrbGzMwMn3/+OV1dXXmTLyON8m1ra+P48ePi\nTLx//z6Li4sHvbxn0Ol01NTU8LOf/Yyqqipu375Nd3c38Xj8mf2hVqtxOBw0NDRw8uRJqqqqqKys\nxOVyoVAohHEQCoX47LPPuHnzJkNDQ688LDE5OcmXX36JzWajuroal8vFO++8Q1tbmxD45eVl0VDL\nYrFQVFQkpoemUilisRiDg4N88sknzMzM7EtS+Gsr8lL9Z0dHBydOnKCwsJCZmRl6enq+NU5yUBiN\nRgoLC7FYLF+bwARPDpPKykoqKytJp9OUl5ezurr6tbhSOBxmYGCAhYUFUS+/vr7OysoKW1tb+3Lj\nlJpZbG1tiZpivV6P2WwWYxb3CynJyG63U1ZWhtVqJZfLoVAocDqdnDp1io2NDVQqFU1NTVRWVoqO\nU9KMdimrPpFIsLa2RjQapaenh9u3bzM3N5cXoiQhvV63201TUxMGg4FoNEo0Gj3Qm5pCoaCgoACX\ny4Xf78dut5PL5Ugmk6ysrHD79m36+/vzJhG2qqqKd955h+PHj4tOdsXFxTQ1Nb3wZxKJBJFIhJGR\nEXp6euju7mZmZma/lvytSC2Ea2tr8Xq9DA4O0tvby/j4+IFOIpRq4hOJBMlkUjT58nq92Gw26uvr\nRXvmp0Nr8MT76XQ6aW9v5+LFi9jtdjFefGNjg5WVFUKhEFNTU3zyySf09fURCAReuZdzfn6evr4+\nqqur0Wq1OJ1OSktLaWhoYH19nWg0SiQSEQ1unu5dLzUK6+np4e7duzx69GjfwrCvrcjb7Xaqq6s5\ndOgQDQ0NaDQalpeXefTo0YHfZl5EUVERjY2NOByOb83WVqlUlJeX4/P5vmbppdNpUfe6s7PD0tIS\n/f39fPbZZ4yOju6bxS7VPKdSKTKZDBqNhpKSEvx+/76OfJRuijMzMzx8+BC1Wi0GbJSWlvLLX/6S\nTCaDQqHAYDCIToJPJwhKdbVLS0uMj49z69YtHj58yNDQUN55hDQaDS6Xi8bGRs6ePUssFmNxcTEv\nSvy0Wi0mkwmLxSL6VcTjcebm5rhy5QqDg4OvNBnqh1BZWcnly5efSYb6NsLhMJ9//jnXrl3j5s2b\neRcOdDgcHDt2TMyL7+7upq+v78BHa0uNsiKRCOvr62KWuiT2JpOJd999l3Pnzn2t14NCoRDTFa1W\nq0iwSyaTzMzM8Nlnn9HV1cXAwIDIpdiL1xqJRJiZmWF6epp4PM7W1haXL1/myJEjWK1WTCaT6Esv\nlUJLJJNJ5ubm+Id/+Ae6urpIJBL79gy8diKvUqnQ6/U0NTXx7rvv0tjYSDqd5t69e9y+fZvHjx/n\nrcjH43GCwSC9vb1i3r3ZbEan06HT6Z4RRkmQnucGT6fTYlSjFAtXKBSi7Gu/RF66zYfDYba3t4X7\ne79v8lJfhLm5OW7cuEFhYaGYLW0wGESnNWnN0qQ2eNLJa2ZmhtXVVTY3N5mammJsbIzh4WEWFhaI\nRqN5IUhPo1KpMJvNWK1WbDYbi4uLBAKBvBB5KXYqdfKC338+kUgkr5IXpazviooK0YAqGAyyuLiI\nzWZDqVQSi8WYn58XLaQjkQhDQ0MMDQ3l1dwIaaqZ1+uls7MTs9nMzMwMo6OjzM7OHnglQzKZZH5+\nnomJCRYXF9Hr9WLuurRPnE6nmBD5VaTzRHoN8XicoaEh7ty5w7Vr1xgbG3vlMfivkkqlCIfDdHV1\nYTKZiMfjogc/IDyZ0uhqQLQHfvjwIZ999hkDAwOsrq7u6Tq/ymsn8hqNBofDQWdnJ3/xF3+BSqVi\namqK//qv/+LWrVsMDg4e9BJfSCgUIhaLodfricVinDlzRvQwttlsQtCl5Jmnex9LSAem1BtZpVKR\nSCRwu900NzcTCAS4cuXKvr0mKfmopqZGzK6WLO39QmrjOT8/z8bGBs3NzZw8eVIMflAoFM80Edre\n3hbZ3Wtra/zud79jeHiYcDjM5ORkXh3ez0MaWSkJUzgcFuOWD5qv9hDPt94CTzM1NcWVK1d45513\nRG7G+Pg4n3/+OTU1NWg0GiYmJrhx4wbXr18/4NV+M0qlErPZjN/v59ChQ8zNzTE+Ps7U1BQrKysH\nvTySyaQYmDM9PS1G4H4XpJBaNptlbW2Nu3fvcuXKFb744ot9C6dtbGxw+/Zt0TtDq9USDAaBJ14U\nv9/P4cOHxVmeTCZJJBJcu3aN//iP/ziQz+K1EnmVSkVhYSEnTpygqakJnU7HwsIC/f39PHr0KO8S\nS56HVOq0vr5Ob28vXq+X4uJiPB6PcBtKMc3S0tJnOiQlk0n6+voYGBhgfHxc3OIjkQharZaamhqG\nh4f39fWsrq5y48YNzGYzBoOB/v5+JicnDySGnUqlWF9f59atWyiVSlpaWqiurhazwCcnJ1lbW2Nu\nbo6hoSHS6TTJZFIMh0gkEnl103wRer2eqqoq3G4329vb9Pb2cuvWrQONuUqsr6+ztLTEzMwMVqv1\nhTezfGB6epoPP/yQnp4eMZ50dXWVYDCI1WoVDU+kQzyf0Wq1NDc3U1ZWxszMDPfv3+fWrVuiQiBf\nWFpa4uOPP8ZsNlNaWvq1mevfRCQSYXJykvHxcYaGhujp6WFiYuJAEmKlUOXQ0JDYHzqdDpPJxEcf\nfSTChVJVyfDwMMFg8EBmqLxWIm+z2aisrOTUqVNUV1eTSCQYHh6mq6uLiYmJ16JFZjabZXl5meXl\nZQAKCwvxeDy4XC5xm/D7/VRVVVFTU/PMIZlIJLh9+zYPHz5kbGxMiLzkHZicnNz3yXSxWIxHjx5R\nVFRELpcTBshBiHwmk2FnZ4dHjx6xublJIBBgaWmJ6upqhoaGGBwcJBwOMzU1RX9/f14l030XpEQk\nk8nE+vo6ExMTDA8PH/gQJmm4z8LCAl1dXWxubuLz+cQa862h0+rqKqurqzx8+PCgl/KDUCqVGI1G\nmpub8fv9jI2NictAvmT9S0ju7pKSEjHd72WbZwWDQQYHB3n06JEQ14Ma8StVjQQCgbyfcPraiLxC\noaCqqoqTJ09y9uxZrFYrU1NT/O53v+PKlSuvhcA/j/X1dXZ2dlhcXBSbXafTiZjV03F6aV701taW\nODClEhKlUkk0Gt13l20ymSQajfLpp5/S1dUl5jsfpICGw2G2traYnp7m448/Rq/XE4/HRWlOIpHI\ni+zuH8rOzg4rKytiymI+5A5I9fr/9E//hNFoRKvVkslk2Nraygu38Y8RrVaLzWajtbUVp9PJb3/7\nW8bHx9ne3s67ss9EIsHq6ir/9m//xqeffvrSt3gp0U46X+LxeN4Mn8l3XguRNxgMYtZ6a2srm5ub\nTE5O8vjxY9Hg/3X9sKUaz1fBQTRrkWJkoVAob1yDUp/t18H1/n2Ix
+MMDg6ysbEhOiPmy/7PZDLi\nIJbZHyoqKjh8+DDZbJbJyUkGBwcJBoN5achKpXRLS0t7nign84TXQuQLCgooLy/n+PHj1NXVcffu\nXfEnFovlnbUqI7OXrK+v8/nnnx/0MmTyhJaWFt5++23Gx8fp6enJu852MgfLQaa+vrRv0WAwYLFY\naGhowOFwsLy8zMrKiujrno8Wq4yMjMx+UFlZSUVFBbFYjFAoRCAQkM/E/z58q4a/FiIvIyMjIyMj\n8zW+VcNf38HfMjIyMjIyMt+ILPIyMjIyMjI/UmSRl5GRkZGR+ZHyWmTXy8jsJUqlEp/PR0tLC01N\nTRQWFtLb28vQ0BCjo6Ny9YaMjMwr5yc/+Qm1tbWsrq4yMTHBwMDAnvyeH73ISz2GLRaLaL26sbFB\nKBQinU7LB/h/c1QqFSaTiaqqKi5evMgbb7yB3+/HbDazubnJ+Pi4vEdkZGReGQ6Hg+LiYn76059y\n5MgRxsfHUSqVDA4O7klDqx+9yOv1ehwOB+fOnaOzs5Pa2lru3LnDv/zLvxCJROR60v/m6HQ6qqur\naW9vp62tDafTmTeNZWRkZH58nDp1iv/1v/4XPp8PjUYDsGe3ePgRi7xGo6GgoIDGxkY6OztpbW2l\nsrISi8WC0WgEyIsJWV6vl7KyMjweDwaDgZ2dHaanp5mampJ7AOwD0lS/ra0t4vE46XQavV5PQUHB\nvo/MlZGR+fEi6dH58+c5efIkCoWCQCDAyMgI09PTe9aW+kcr8nq9Hr/fz1tvvcX//J//U/T5fvTo\nEcPDw2xububFgJK6ujp++ctfcu7cObxeL4FAgH/7t3/jn//5n4lGo7LI7zGJRIKxsTGMRiOVlZVi\nFrTb7aaoqOile2vLyMjIfBMul4s//dM/5cKFC+Jrq6ur/J//83948ODBnv3eH53IKxQKvF4vTU1N\nvP3225SVldHb20tvby+jo6MsLy8zNzcnbm0HhUajwWg0Ul1dTWdnJ16vF4vFIpLAXC6XGKryuuFy\nuWhsbKSpqYny8nLW19dZXFxkdHSU2dlZMYEvH1AqlWLAR0lJCRaLhZ2dHR4/fszIyMiBx+M1Gg12\nu536+no6Ozux2+0olUoWFxcZHBykp6eH3d3dA1/ny2I0GrFardTW1lJcXIzBYKCoqAi3200gEGB5\neZlwOEwqlSKbzTI9Pc3KykreDOB53VAoFKhUKjQazTNeqcLCQqqrq9Hr9cLrGYlE6O3tpaKigqam\nJkpLSykoKCCTyTA1NcXIyAjDw8OEw+F9fQ0Wi4XTp09TX19PcXExQ0NDPH78mKmpKdbX1/d1Ld+X\nc+fO8ZOf/ITjx49TXFyMQqFgeHiYL7/8knA4vKda9KMSeenArq+v59y5c1y+fJmlpSWuXbvGZ599\nxuDgIKlUKi8OC5VKhdFopKioiJKSEjFxzmq14nA4sNlsaLXag14mRqMRg8FAIpEgmUySTqe/8f1T\nKBS43W4uXLjApUuX6OjoYGVlhYGBAa5fv04qlcorkddoNBQXF1NZWUlVVRVms5mNjQ2GhoaYnJw8\nUPGU9kNDQwMXL17kvffew+v1olKpGB4e5r/+67+YmppibW3ttTEGrVYrVVVVnD9/ntbWVgoKCigr\nK6OqqorR0VExLjmRSJDJZOju7mZkZIRwOMzm5mbejavNB5RKpZhgqVKp0Gq1aDQatFotKpVKDPh6\neqSr3+/n2LFjmM1mdDodDoeD6elptra2OH78OD/96U9pbm7Gbrezs7MjZiUsLCzsq8grFAoKCgo4\nffo0P/3pT2lpaeHatWvo9Xqi0SgbGxt5cZ6/CLPZjN1u58KFC/zhH/4hFRUV6HQ64vE4fX193L59\nm1gstqdr+FGJvHQruHz5MufPn2djY4N79+7x61//WmTT58uGSKfTbG9vs7KywtTUFLW1tRgMhoNe\n1teoq6ujqamJ8fFxFhYWxPv4PKRbg9fr5Y033qCyshKlUonD4aC6upp4PM7U1FReze+2WCy8/fbb\nXLp0ierqapLJJMFgUCRlHuR+sdvtNDU18ed//uccPXqUkpISMXq4pqaGpqYmKisrmZiYeG1E3uv1\ncvr0aU6dOkVTU5MwduGJ8NjtdlpaWshkMuRyOU6dOsX4+Di3bt3i4cOHDA0NHfAryD8MBgNmsxmV\nSoXFYqG4uBifz4fH40Gn0+HxeKivr3/m0mAwGLDZbGI/7e7uUlBQgEKhoKmpiZaWFvR6PbFYjNXV\nVaamppiZmdn3RGWFQoFarRYGSy6Xo7q6mhMnTtDb20swGNz38drfhaamJj744ANOnjxJWVmZeE8X\nFha4c+cOX3zxxZ57I34UIq9QKFAoFJSUlHD06FHq6+tRq9XcunWLL7/8kqmpqbwRd4lsNksikWBt\nbY3l5WVKSkoOeknA74XaaDRis9k4fPgwZ8+epbq6mt7eXm7evMnW1tYL30+FQoHJZKK0tBSbzYZC\noUCv12Oz2fB4PBQUFOzzK3oxRqMRj8dDW1sbDQ0NFBQUMDw8TH9/P8vLywd+a5Rcqu3t7VRVVaFS\nqRgcHGR5eRmfz4fD4eDy5cvAk8l0u7u7eV8Z4Ha76ezspKqqCrfbDTwRmLW1NZLJJJlMBo1GI7KO\n6+rqKCwsJJlMEolEDkTkJREtLCzEbrdTUFCA0WhEr9cDsLm5ycLCgpjfbjabyWazRCIRNjc398wA\nk0J+zc3NNDc3o9VqsVgsuFwuPB4PTqcTrVZLYWEhFRUV4j39Kjs7OwwODhIKhVAqlaytrTE8PMzO\nzo4Q+Z6eHmZmZvbdmJRKoCXvBDwJB1ZVVWG1WlGr1XkZylGr1RQUFNDQ0MClS5fw+/3i7JudneW3\nv/0t3d3dBAKBvV/Lnv+GfUCy9hobG/nVr36F2WxmcHCQf/3Xf2VkZCTvNgA8EflUKsXW1pY44PIB\nhUIhrP/m5mZOnz7NxYsXSSaT+P1+Hj58+K03XKVSiUajecY9mI84HA7Kysrw+XxYLBbS6TQjIyN8\n8cUX4oZwkHvH6XRSXl4u4vDxeJyPPvqIL774ggsXLtDZ2clf/uVfEg6HmZiYIBqN5s0+ehEul4uO\njg4cDof42ubmJktLS6yvrz8jIiqViurqaqxWK83NzfT39x/EkrFarVRUVHDkyBFaWlqorq7G5/Ph\ndDoBmJqa4sMPP2RxcZGdnR0qKytJpVL09PQwOTm5Z8Ko1+vxer28++67/Pmf/zkGgwGNRoNSqRR/\nFArFM+78r5LL5djZ2aGrq4tHjx6RSCRQKBTkcjk2NjbY3Nxke3ubYDDI6urqvhuRUghWyh2AJy7w\noqIijEaj8ETkGzqdjtLSUmpra6mtrX3GwBocHOR//+//TSKR2Je15Oc79B2x2Wy0tbVx8uRJqqur\nuXbtGjdu3BAJdvmIJIRWqxWXyyVuBQeNUqnEarXS2NjIL3/5S9rb27FareRyOWw2GyqV6qXKyvK5\n9Eyr1WI0Gjl9+jRvvfUW
1dXVpFIpRkZG6O3t5dGjR6yvrx+4cSgdzgqFgnA4zPDwMKOjoywtLYmw\nid1up6amhoaGBh49epT3Ii/te6VSKYzc+/fv87vf/Y6NjQ3hPdFoNJjNZt5//33a2trweDzU1dXR\n0dHB8vIya2tre+65MJvNInZ96tQpysvL8Xq9OBwOMpkMkUgEnU6HzWbjzTffZHt7W3wm6+vr4ka/\ntLS0J+tLp9NsbW2RTCZF7P2bzpH19XVisRipVAqDwYDT6UShULC1tcXQ0BD3798XYZJcLsfu7q7I\nw4nH4weWnyKtR2JycpKuri6RkJlvFBUVUVdXxy9+8QvOnj2LVqtFoVAQiUS4ffs2169f/0Zv6Kvm\ntRd5hUKBzWbj9OnTtLe3Y7FYGBgY4ObNm6yvr+dtCZpKpXrGjf10PF6lUqHT6cRtOJvN7tuGkBJd\nKisruXDhgritwBPrNJ/F+2UxGo34fD5OnjzJ5cuXMRgMzM7O0t3dTV9fH5OTkwe9xK8RjUbp7+9n\ncXGRjY0NISharRafz0dFRQVjY2Osra0d9FJfmnQ6zfr6Oo8fP+bf//3fhWApFAqsVivFxcWcPn2a\njo4OCgsLaWxs5I033qC/v5+JiQmWl5f3JB4reQZdLhdHjx7l8uXLXLp0Ca1WSy6XIx6Ps7i4yOzs\nLCaTCbfbTWNjI2azWWSxS56gvQwvpNNpNjY2WF1dJRAIkM1mMRqNpNNpDAaDyHXIZrOk02kWFxcZ\nHx8nHo9jtVqpqanBZDIRiURE9ny+IRm60lmoUChYWFjg8ePHhMPhvHTV+3w+jh49ys9//nNqamqA\nJwbW2NgYv/71r/nyyy/3dc2vvcgrlUpsNhutra3YbDYmJiYIBoNsbm7mdVmRQqEQt5WvZtJbrVbK\nysoIhUJsbW2xsbGR16/ldcPpdHLy5ElqamrQ6/UEAgG6urr4z//8T8bHxw96ef9tyOVypFIp4vE4\nm5ubpNNplEolOp2O8vJyOjs7KS8vF6Wlra2tuFwu+vr6uHPnDr/97W8JhUKvdE2SwLvdbg4dOsT7\n779Pc3MzBoOB5eVlJiYmuH//PiMjI8zOzqLT6aivr+cP//APaWxsxOfzAU/i3IuLi3uaOS3dsPv7\n+/nwww+pq6tDr9cTCoXo6Oigs7NTrCUUCnH16lU++ugjdnd3Rd5MS0sLhYWFeVuKJiVTl5SU4HK5\nUFRvhdAAACAASURBVCgUogxTqVTmZQ5Kc3MzZ8+exWazia9dv36djz/+mAcPHrCysrKv63mtRV6q\nia+pqaGsrIx4PM69e/dYXFzMSzfO00gHnNRHX0roASguLubo0aNsbW2xvb19oK6yHxNSn/qKigqO\nHz9OaWkp2WyWkZERenp66O/vZ2NjQ3y/UqlErVaj0+lQq9Wk02mSyWTeZfPqdDqMRuNr17hHMnSN\nRqPoT2AymWhqaqKjo4OjR49SWloqDGCn04nNZiMYDO7Z67VYLHg8HpFw2tHRgdFoJBwOc//+fbq6\nuuju7mZubo5IJEJRUZHwdimVSvFcR6NRRkdH9/RAz+VypNNp0S1tbm4OrVbLysoKmUwGj8cjBLy/\nv5+enh66u/9/e2f61OZ1vv9LCLQjoQUkgRbEjgQCbGO8YZM9cZOmM+mbtNO0r/sn9NXv90d0+i6Z\nTmc600kz7Tie1Em84Dhgs++bBEIIiUVoQ2hfvy8858QEO3Yagx7S85nJxDNsj6TnOfc5933d1z2G\nbDYLoVAIh8MBn8+H2tral75ZellUVFRQ4SPJdpJnkmgHuIJOp0NzczMuXbqEzs5OyGQyujG8ffs2\nvvnmG2xtbZ24oPfUB/mOjg5cvHgRKpUKIyMjuHHjBjY3N0t9ac8ll8vh4OAAa2trGB0dhVqtpotF\nU1MTNcMJhULwer2cCyynEdITb7VaqSlFPB7H+Pg4Jicnj5R3ysvLqapaJpMhFoshEokgEAhwanEh\nvbhcFSE9CzI4qra2Fk1NTdjZ2YFOp8Pvfvc7eop/MsNVLBaRz+exsbGB1dXVYxEu1dbW4sKFC/jw\nww9x/vx5iMVi+Hw+zM/P47PPPsO3336LaDSKbDYLPp8Ps9kMu90Om82GmpoaFItFxGIxeL1ejI+P\nw+PxvPRr/D47OzsIBoNYWloCj8dDMpmknSN9fX0IBAL45ptv4HA4aHo7lUphd3cXoVAIfD7/xERg\nP2dsNhv++Mc/wm63w2w2g8/nY2hoCH/5y18wOzuLjY2NkhzWTteq8D14PB4MBgOMRiMikQjW19ex\ntrZ26DTGVYiYxOVy4c6dO+ju7kZbWxuA73avRFHKlTo4OQUnEgnO7vyfBekhPn/+PM6fP4+amhrE\n43E4nU6srq4iEAhQdzmiuidtSDqdDnK5HNFoFHNzc7h79y5CodCx9wzn83nq7UB0BM3NzZBIJLDZ\nbNQ5S6vVwmKxQKFQoKKighN2zS8CadU8e/Ys+Hw+Dg4OUFlZid7eXtTW1tKaMklLezwerKys4MGD\nB1heXn6pgUmpVKK9vR1XrlzBtWvX0NbWhkKhgPn5eYyOjuLBgweYm5tDOBymn4lQKERXVxfOnTsH\nuVwOPp+PdDqN8fFx2qFxEsLffD6PQqFAAwg5QJDNSGVlJVpbW7G4uEh/hmyYuJ4hNJvNOHPmDBQK\nRakv5ZkIBAIqtuvq6kJNTQ0KhQL8fj9WVlYwPT39g/4ix82pDfJEuGYwGFBdXQ2Px4PV1dUjbmqk\nz7JYLNIHgUunMK/Xi1gsdsRFqlSBnaRQn7a5kMvlsNlsCIVC9DT7pCiGKHxJDZVLkAexr68P3d3d\nkEqlWF9fx9TUFDY2NpBOp6HT6WAwGNDU1IRz586hubkZer0eWq0WcrkcBwcHuHPnDrxeL5aWlo49\nyKdSKSqyIzqNnp4emEwmdHR0QKPRIJlMQqlUwmKxoLq6GhKJhLP11e9TVlYGkUiEzs5OdHZ2Hvpa\noVBAJpNBLBbD/v4+QqEQJiYmcP/+fYyMjGB9ff2lXAPxcTCbzXj99dfx+uuv49KlS8hkMnC73Rga\nGsJXX32FwcFBpFIp5HI5+oxUVlais7MTdrsdIpEI6XQawWAQw8PDePDgAYLB4IltuEiZgBCPxxEI\nBBCPx1FdXY0zZ85gamoKIpGI+hGcBkwmExVUE8hazpV1XCQSobGxEW1tbaivr6cbVo/HA4/Hg3A4\nTLuWnkU+n0c2mz2W8eenNsgrFAoYDAY0NDRAKBTi3r17mJqaOvJ9VVVVqK2tRTKZxMHBAUKhECdP\nOsTQp9Tw+XzodDro9foj6V+NRoMrV65gZ2cHDocD2WwWcrkcjY2NqKqqglKpRENDA3p7eznn3qfR\naNDc3IyWlhYolUoEg0Hcv38f//73v7G9vQ2LxYI33ngDVqsVDQ0NUCqVkMlkEIlEEIlEtJ5vNptx\n4cIF7O/vH7uRxfr6OsRi8aFNh0ajoaLS7e1teL1emEwmaLVa2Gw27O7uHttc6pMkF
ovB7/dT05Bo\nNIpAIEBTzC+L8vJy9PT0YGBgAO+88w4sFgsymQw2NjYwMjKCmzdvYnFxkdrsErW3TqdDS0sL9Vjg\n8/lwOp149OgRRkZG4HK5SqoLcjgc9Dr7+vpgMpnQ2tqKpqYmeDyeU7URJP3+BBIQuXKPy+VyXL58\nGV1dXfQ60+k0Njc3IRQKcf369ef+DlKW9fl8L/X+Bk5xkFer1ejs7KSnmcXFRXg8HvB4PCiVSlRX\nV6O2thZGoxF1dXVIJBLY3d3FwsLCsbyRPxfKy8thNBoPWagSiB8B6U/NZrOoqqpCc3MzVCoVPVEa\njUYIhcJDPxuLxeByuRAMBk/y5dBMTmtrK65cuYL6+noqtpuamsLy8jKMRiPOnTuH119/HU1NTdBo\nNFTwuLe3R9u6amtrUVlZeci96jgJh8NwOp0YHBxEIpFAV1cXysrKkEwmsbm5iUQiQdvNqqurYbPZ\nsL6+jsXFRU6ddIDv2kJFItGRRftJ4vE4Njc34Xa7sbq6ii+++AITExP0FP0y1dQkk9Dd3Y3+/n60\ntbWBx+PB5/NheHgY9+7dw/T0NILBIAqFwiH3uJaWFlitVlgsFvB4POzt7WFmZga3b9/mRCtjIBDA\n/Pw8hoeHIZfLMTAwgPb2drzyyitYWlqipYRQKHTs3uk/hacdfgKBADY2NkruSAmAtk8SF0dyreXl\n5aiqqoLVaoXVan3u7wmFQvD5fLQsFIvFXlp6/9QGeWKPWVFRgd3dXezt7SEej4PH46GpqQn9/f14\n9913YTabIZPJkM1msb6+jhs3bmBwcPBYR/v9N5AafakX5vLycpjNZtTX1x+xwSQ1eTIat1AoQCgU\nQqFQ0L5+8t/3HbZ2d3dx584drK2tneTLoWnVy5cv44MPPkBNTQ2cTie+/fZbrK2tgc/n48qVK3jr\nrbdgt9shkUiQzWaxtbUFt9tNVdQVFRV47733TjTNWSgUEAgEcOvWLUxNTcFkMoHH41H1NnFi6+jo\ngNlshs1mw+LiIvX4LvW99CQVFRVQq9VQKBQQCARPLecUi0Xs7u7i888/x8OHDzE3N4dgMEiNQ172\n6yH3RkdHB+x2OxXZzc3N4bPPPsODBw9oZ0tZWRnkcjnsdjt+85vfoKOjA0ajEVKpFPv7+1hYWMDQ\n0BDu3r2Lg4ODl3qd/w35fB77+/u4c+cOBAIBenp6YLPZoFarsbKyApfLBa/Xi7GxMczMzHDqXnke\nRKzMhc2JzWbDG2+8AbvdDq1WS4M8Od2/6HpBvAxEIhH29/fpsKCXwakN8jqdDhcuXKAmJuFwmPpe\n2+12dHR0wOPxYGlpCfv7+9BoNFAoFLh48SLy+TwCgQD29vY48UA+jVI8dKTuW19fD61WeyRQE4tJ\njUYDuVxOa/Jk0f6hOnwmk0E4HD7x3bdIJIJOp0NtbS2USiX29vYwPz+PkZER5HI52Gw2nD17Fg0N\nDQAAl8uFtbU1PHr0CGtrawgEApBIJKivr6fOYicJWayz2SxNsZK5B2Ri3rlz52AymWgK2WKxYGdn\np+QpWWKpqlar0dDQgEuXLuHy5ct04iJpAQuHw9jb24PD4cDs7CzdgO3u7tKRs8fBk66TxM3R6XTi\n5s2b1OimtbWV6jJIi1Rvb++hOQzRaBQzMzNwOp2IRCKcqHeT93ZnZwczMzO4efMmenp6YLFYIJPJ\n0NjYiEgkAqVSCYFAAK/Xi3A4jFQqxYmAT7Q+1dXVMBgMEIvFNE0fi8Wor0Kp6ejowMDAAGpqag5l\nPsvKyiCRSGhp82nvqUqlglqthlqtpplPk8kEs9kMn8/3vxvkycKh1WrR1dWFsbExTE5OIhqNoqys\nDGKxGK2traitrcWnn36K8fFxuFwutLW1ob+/H7///e+RzWaxtraGqakpzgb5UiCXy+ks+8rKymcG\nbYFA8KPG4BIl70kLfng8HmQyGerr61FdXY2ysjK43W7Mzs5ifn4eTU1N6OvroyYrBwcHmJ2dxeDg\nIG7fvg23241MJgObzQatVkuvPZFInKigirRlff+hj8VidGhLW1sburu70dDQgJaWFqRSqZIFeZJi\nFQqFEIvFaGlpwaVLl/DrX/+azjAHHm/89vf3sbq6SkcRT0xMnNhkMTKMidzPPB4POzs7mJ6eRnl5\nOWw2G/Wrb2trg1arpcZV5NkoFAqIRCKYmZnBxsYGp/Q+hUIBsVgMy8vL+Ne//gXgsRubXq+nA4/E\nYjH4fD4ePnwIh8PBmWmd5Np0Oh1MJhPEYjH1FYnH48hkMiU1wqmoqIBQKITVasW5c+cOfe1JO+Cl\npSXcunXrqdfa0NCA1tZW9PT00CCv0WhQW1t7pNz5Uzh1Qb6iooLOWyfmE6T+KJPJUFtbi/39fczN\nzWF6eprWbtbW1iASidDS0gKFQoE333wTu7u72NjYKPVL4gyxWIymqRsaGmA2m3/yTHvigZ3P5yES\niU6sl5s4lxkMBrz66quwWCxIp9NYWlqCx+OBTCbDxYsX8atf/Qomkwm5XA7b29t49OgRvv76a+zs\n7NCTgkAgoDOg/X4/BgcHOeHFQIYcOZ1OTE1NwWKxQCqVoqGhAW63u2TXRSZwNTQ0wGaz4erVq+jp\n6YHRaKRtccDjOuTg4CCGh4cxMjJCRXUnFShJijSZTCKRSEAsFqO/vx9arZaWoshaU1lZiXw+j3g8\njlQqRQWZqVQKe3t7WFpaOtLZwxWi0SgVY7rdbvT396OjowMGgwHt7e0Qi8VQq9X0cwiFQiWvd5Pn\nVyqV0mlzoVAILpcLW1tb1IO/VBgMBvT29sJisRz52vz8PObn5+Hz+bC4uIjJycmnbppkMhk6Oztp\nlvG4OJVBniifeTwe0uk0EokECoUCHXzhdDrB4/Gwvr5OBTDEn/nRo0e4cuUKent78Z///KfEr+bZ\nlJWVQSaT/eCJ+mVDxIkTExMQiUQ4ODiAXq//wRswlUrR6Wckw/LkNReLRWQyGVrrJ3Xu44bP50Oh\nUFB7VK1Wi1Qqhe3tbWSzWXR0dODs2bOwWq2IxWJ0zv3ExAScTif9HWKxmM7n3tvbw+LiIubn54+0\nPJYCkiFZX1/HzMwMrly5Qm1Wp6enS+IIRgafdHR0oKurCz09PTh79izq6+uPfC8ZjDI2Nobx8fET\nvU7gcSkkmUxiZWWFZkOMRiPMZjMymQy1241EItjY2MD+/j4qKipgMplQU1MDgUCAvb09eDweaqXN\nRVKpFFKpFNLpNHZ3d6ngzmq1UmtbHo9HN/QzMzMnrp35PiRjS07MwHcjiQ8ODkpu3mM0GnH9+vVD\n97XX66WDfmZnZ+Hz+ahi/mnPYU1NDWpra499s3Lqgjyfz6etTQBoD20ul0Mul6O7vEKhcOShi0aj\nmJiYQHt7OxoaGjg12/z7KlI+nw+9Xo+6urpnzoF+2WQyGQQCAXz55ZdwuVw4e/Ys+vv7j6SjnsTn\n82FsbAyhUAjl5eV4++230draStOfpH+X
GNG4XC5MT08f+2upqKig5jEtLS0Qi8XY3t5GMpmEVqvF\nxYsXYbVakc/nsbi4iMHBQdy6devQBkQoFEKpVKKrqwtnzpzB7OwsJicnj20wyn/L5uYmKisrsbGx\ngebmZnR1deHOnTvg8/kn6gvB4/GgVqvR09ODjz76CB0dHaipqXlmOyUREJYqOJIBObdu3UI2m8Uf\n/vAHmEwmCIVCRKNRbG5uYnFxEdPT05iZmUE8HofFYsGHH34IqVQKmUyGtbU1OBwOTt0Pz+Lg4AAu\nlwuRSASzs7Ow2+345S9/iddeew02mw1KpRIGgwHFYrHkQZ7rGAwGXL9+/VDv+/DwMP70pz/RTQjp\ne3/W80c87o/b6OfUBXmywyMnxXg8Tge4kBpUIpGgwpMnyWQy8Pv9iMViEAqFnJ53TiwzW1tbodVq\nkUgkjt18hQRkUpcjC93Q0NAzfyYSicDtdtN0p1qthkAgQGNjIzXUEYlEtM1LKpUe62sgCAQCWCwW\n1NfX08lciUQCfD6f6jlUKhXC4TDGx8cxOjpKFa0CgQBarRYtLS3o7u7G5cuXodfrMTo6SkVwXBqM\nkclk6OdQV1eHhoYGqNVqiEQiJJPJE9FBiMViVFVVYWBgAK+++iodDysWi+nMcr/fD7FYjJqaGgDf\nicNK9V6Sv+92u3Hv3j2kUilqDxyPxxEOh7G9vY3NzU1sbW1RwVp1dTUVnobDYQSDQU6IwJ5HPp+n\n5QWih7Db7QC+E6iWlZWhra0NdXV1nEjbcxWBQEA7RQixWAwej+e5J3OZTAaVSoVLly6hv7//0CAb\nUnp7WaI74BQG+Schi0c8Hj9i7fg0SHqOa+MJSUo7nU6jvLycbmQMBgNaW1thNptPxEaVXEs6ncbO\nzg52dnYwMTHxwj+rUChgMpmg0WhgMplo+UQqlUIul0MikZxYVoKkVevq6sDn8xGPx5FMJiEUClFd\nXY3Gxkbk83l4vV7Mz8/D4XAgmUyCz+ejsrISNpsNAwMDuH79OvR6PRKJBC0NcZFMJkPdE3U6HVQq\nFaRS6YmJHYlfwttvv40333yTTgnLZrPUCW5paQk6nY4GeS5QKBQQDAYRDAYxNjb2zO8rLy+HWq2G\nTqeD2WyGSqVCPB5HJBJBKBTihKL+eTxpLBOPx+F2uxEKhVAsFsHj8SAWi1FXVwej0Yja2lokEgkW\n5F8SfD7/kJFSe3s7Ll26hN7eXuTzeSQSCaRSKczMzLz09u5TGeRJavvHusSJxWLU19dDpVJRMRgX\nICrMubk5KoQhKJVK9Pb2IhwOH7vD2k+FzKz2+XycPNmQDYdEIgGPx6MWpXV1dejq6kIqlUJlZSX0\nej2uXLmC9vZ2GAwG2jfvcDjg9Xo5dYp/FlVVVdSf/ySc19rb2/HRRx+hq6sLMpkMZWVlCAaD8Hg8\nmJmZgdfrRXl5+Q+WfrgOsW4GvssCrK6unpp0fWVlJWpqamCxWKgd+Pnz52ngTyaTCAaD2NjYwPr6\n+okcKv5XIPV3vV6Ps2fPYmBgAC0tLUin09ja2sLQ0BA+//zzH3WoelFOZZD/byDtVG1tbVAoFPB6\nvZy5iQuFAjweD9bX19HY2HgoyIvFYphMJqhUqhJe4YtB0v1cCPC5XA5+vx+BQICKMiUSCaqqqmga\nG3i88Nntduj1egCPsxE6nQ5dXV0Qi8UIBAJwu92Ynp6Gw+GgJx+uo1AooNFo4PV6j/XvkD7ztrY2\nXLt2DRKJBOFwmN7PLpcLY2Nj2N/fR3d3N2c21j+WYrFIS2bkNZAsgN/v5+TrImI6MqXQZDKhsbER\n7e3tqK+vp+1pRDtDMi57e3sIBoOcu89LXd55knQ6jUgkQo3AgMdricViObT+yWQyaDQaOkjNYDCg\nq6sLFy5cQDKZhM/nw9DQEG7evIl//vOfx/Ken8og/6Q73Iu+KWVlZVAqlTh37hykUilGR0c5oZAG\nvuu1PS1pv2dRXl4OvV4PnU5Xcr1DMpnExMQEzGYz3n33XUilUtTU1ECr1aJYLGJvbw8ajQZGoxFq\ntZo+mMSEQywWw+l04quvvsLw8DCmpqawu7uLdDrNucXvaVRWVkKlUh17yyJpA7JaraipqYHf78fk\n5CQ++eQTrK6uIh6PIxaLQa/XQ6PRQK1WH+v1HBeFQgFer5dzvfA/BFnzWlpacOHCBXR1dcFqtdLR\nyeXl5bSmTEqG0WiUM4Y434doCrhwiIhGo3A4HGhuboZWqwXwWHH/i1/84tAabrVa8corr0AoFEIg\nENC1hcfjwe/3Y2xsDB9//PGxug6euiCfzWYRiUQQiUSQSCRo+ml9ff2p9VJieGGz2dDX14empiZs\nb29jcHCQM+nvfD6P1dVVzM/P4+rVq5BIJLR7QC6Xo6OjA9PT01AoFCdqxPJjISI7kUhU8mE7uVwO\nu7u7WFtbw+LiIhobGyGTydDS0oJ8Pg+FQgGRSAShUHjIeCKVSiEQCODevXsYHx/H2NgYnE4nLUFw\n4RTxIhDP/uP8HKRSKYxGI/r7+9HZ2QmBQEAd+ohzYKFQQKFQgMFgQGNjI82YkBa1RCJR0kEuLwqP\nx4NcLqfOeFyFzAiorq5GXV0dbDYbbDYbOjo66CCjJ/0qYrEYtre34XK5sLy8jIWFhUMjaUtFeXk5\nxGLxIQ1POBzG7Ows/H5/Ca/sMaurq/jb3/6GDz/88FCQv379+qFgXVtbi+bm5kPPocvlwvj4OBwO\nB2ZmZrC0tHSsFr2nMsgHAgH4/X6EQiEYDAZ0dnYiGAwinU6jUCgcqtcLBAJIJBJcvnwZ165dQ3V1\nNSYmJnD79m3OpOtzuRyWl5ehVCqxtbUFjUZDU/ZVVVXo7e3FxMQEVCoVcrkcJ4M8EQs+6R5WSvL5\nPA4ODug0MbFYDLvdjtbW1iPfS/rNc7kcAoEAFhYW8PHHH2NkZIQOJynVyYa8ryTDAIAKTEs9D5yk\nJ69duwar1Qoej0edDdPpNNU8kGEdxImSpL4jkQj29/dL3vP8IvD5fNTV1VFBKZcgzxpx/FSr1bDb\n7Th//jwdvKNSqejmhJgo5fN57OzsYGVlBV999RWdF8CFTVdFRQUtrRGIOHJ7e7uEV/aYlZUVrKys\noK2tDRcuXIBAIEBdXR3q6uoOfR95r8m/c7kcJicn8ec//xlOp/NEXsupC/L5fB6xWIzemAaDAe+/\n/z7KysowMzOD9fV1VFZWQqFQQKVSoa2tDV1dXTAajcjlcvjHP/6BsbExxONxTqR9noS0+EWjUZrW\nJPUyLqbPnkQgEECpVKKtrQ2NjY0n5mz3PLa2tvD111/DZDLRdqEnIS5mLpcLU1NTWFhYwMLCApaW\nlhCNRks+zU0oFFJhIJlzHgqF4HQ66UJTiut7cgNNfCuKxSJ8Ph/W1taQSqVoW11zczMuX758yKXS\n4/FgYWEBGxsbnBg08jzISZ6MleUS5HNQKpVobW3F1atX6Wzz75tT5XI57O3tweVyYW5uDouLi1hZ\
nWYHP54Pf7+fMmqjRaHD+/HnU1taW+lJ+kLt370Imk+H69etHAjwAeDweTE1NoVgsYn9/H5OTk5if\nn8fKygqi0eiJXCM3VuIfQaFQQDqdhsvlwp07d/D+++/DYrHg6tWrqK6uxsrKCp1trtFoYLFY0Nzc\nDL/fj+XlZdy7dw9ut5uTath4PI61tTVYLBZql8j14E6oqKiATCaDwWCAXq/nxEJYLBYRCoUwOTmJ\n9vb2p9aDc7kcDg4OsLKyguHhYSwsLMDtdpc8uBOEQiHq6+vR19eH9957D6lUCsFgEA6Hg3aJ7O/v\n09LDSb7vpO4vk8kO9QtLpVI0NTUhn89Dp9Ohu7sbZ86cgVwuRyaTQTwex9zcHCYmJrCzs3Nq2rS4\nck8A35mCabVaqFQqqFQq6HQ62O12vPbaa6irq6MmK7lcDrFYDPv7+wgEAnRWwOjoKFZWVuDxeDjz\nughk5sRxG8X8VKanp1EsFiESiWA2m4983el04uHDhygUCgiFQhgaGjpxLdipC/IEn8+HaDQKPp+P\ngYEBXLhwAa+99hqy2SztSSwUCnC73RgdHcX9+/cxMzODvb09TgZ44LGxzPj4OJqamnD+/PlSX86P\noqysjHpNi8XiE7PifR5EMfz3v//9qTbGJFWfTCapkRKXFnOyeJBatkAggE6nQ0NDA4xGIyorKzE9\nPY1UKoWqqqpnussdB1qtlo5yJpsLq9VKfesFAgH0ej0V3UmlUoRCIaytrWFwcBD3798/sdPMT4Wk\ntok2o5TlKDL8p6WlBe+++y6ampqg1+tRU1MDpVJ5xKQlHo9jc3MTY2NjGBkZwdLSErxeLyKRCJLJ\nJGfu9dOI3+/H0NAQlpeXnzpUJpVKUUdHcqA4aU5tkCeTfqamppBMJrG1tUXTgcB37VxerxdOpxML\nCwvY2tri1AL+fZLJJDweD3w+H8LhMKRS6U8eEFNqCoUCotEo3G43pqamTryeRoRf29vbnKjl/Vgy\nmQy2trbw8OFDrK6uQiQSQavVore3FyaTCdevX0d7ezuSySQ6Ojqg0+mQSqWwtbWFjY2NY613Hxwc\nUJOmbDZLXcAqKiogl8vp/6VSKSoqKqj5EFkU/X4/J/UlT6NYLCIYDGJ7exvhcBhKpRI8Hg8SiQRS\nqZTOnT8uhEIh5HI5bcXSaDRobW3F5cuXodPpUFVVdWi9KBQKSCaT2NzcxPLyMsbHx7GwsICVlRXs\n7OxwZlTri0D0Nbu7u/D7/ZwypMpkMtRxkquc2iAPPH7wHA4HHA4Hbty4UerL+cmk02n4/X74fD74\nfD46s5oYt3B1cwKATgMkgzBImwhRuc/Pz+Pu3btwuVylvtRTRTwep5754XAYNTU1sNlsKC8vR19f\nH1555RW8+uqrAB5nU9LpNA4ODrC6uorl5eVjXRB9Ph8cDgd2d3dRU1ODiooKKrx80qqT9DenUims\nra3h3r178Hg8nBB4vSiFQoHa3G5vb6O6uhpCoRBVVVXUHvk4n1GJRAKTyYSBgQEMDAzQPnfiLEiu\nkWw0yDCX0dFR3L59G19++SXC4fCp2lSRTpZsNovd3V14PB5sbm5ydhAQVznVQf7nBmkP/Prrr+F2\nuyEWi+nOfGVlBYFAgLOlBjLh7caNG8hms3jrrbcglUqRTqcxPz+PR48eYW5uDnt7e6W+1FNFLpdD\nJBKhZYVwOEzV/1NTU+jr64PdbkdDQwPKysrg9XoxPDyM1dVVWno4DorFIiKRCObm5vDXv/4VHLST\nRAAAAyBJREFUb775Jt55550jbU9EjEcm5T18+BDz8/N0OuRpIxgM4v79+xAKhTh37hx6e3uxv7+P\nfD5PS4jHgVwuh81mQ3d3N+x2O2QyGS2LxWIxhEIhmoInhx8ibNzc3KTzPU4L8XicTv4jGdmtrS1O\nOZWeFliQ5xDEw5govE8T2WwW4XAYQ0NDVIlcWVmJaDSKoaEhTE5OYnNzk7ObFK5SKBQOncZJb/nW\n1hZVRIdCIfj9flRUVGBpaYlmTI77pJxIJODxeGhdV6FQQCKRHKpNFotFuFwuLC0tYWRkhFoDn1bC\n4TBGRkZgMplgtVrR3NyMRCJBXe9isdixdcPweDzEYjH4fL4j1+T3+7G6uopQKIRCoYDZ2VnMzs7S\nVsbTRjQapcNaCoUCRkdHsbq6imw2y+mMJhcpZTMz+6R+ZhAzHLlcDo1GQ0edhsNhRKPRYz1Z/i8i\nFoshk8no8B/iPx6JRBCLxU5EtU5mfhMbXT6ff0SUlkqlkEwmcXBwgGQyeao3eiKRCEqlEh988AF+\n+9vfoq6uDvF4HI8ePcLnn3+OL774gvagv0yEQiEUCgX9rJ+EeBMQN7hisYh4PM45EemPgdhQq1Qq\nSCQSxGIxRKNRmqlgUJ4bw1mQZzAYjBeEbGrOnj2La9eu4eLFizCZTMjlcvj000/xySefUGtYBuME\neG4ML2Uz8/8r4d9mMBiMHw3RRvj9fiwtLaGsrAxqtRrd3d3Y2trC9PT0qc9WME4V//9538BO8gwG\ng/Ej4fP5EAgEqK+vh9FohF6vpy5ypKWQwTgBWLqewWAwGIyfKaUdEsJgMBgMBoPBYDAYDAaDwWAw\nGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAY\nDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgM\nBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwG\ng8Fg/A/zf2NOVGbNo4oZAAAAAElFTkSuQmCC\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "tf.reset_default_graph()\n",
+ "\n",
+ "# Define our input pipeline. Pin it to the CPU so that the GPU can be reserved\n",
+ "# for forward and backwards propogation.\n",
+ "batch_size = 32\n",
+ "with tf.device('/cpu:0'):\n",
+ " real_images, one_hot_labels, _ = data_provider.provide_data(\n",
+ " 'train', batch_size, MNIST_DATA_DIR)\n",
+ "\n",
+ "# Sanity check that we're getting images.\n",
+ "check_real_digits = tfgan.eval.image_reshaper(real_images[:20,...], num_cols=10)\n",
+ "visualize_digits(check_real_digits)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "## Model\n",
+ "\n",
+ "We perform the same procedure as in the unconditional case, but pass the digit label to the generator and discriminator as well as a random noise vector."
+ ]
+ },
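+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A minimal sketch, not part of the original tutorial: both conditional networks below expect a 2-tuple of `(noise, one_hot_labels)`, so the generator inputs can be packaged as in the next cell. The 64-dimensional noise size is an assumed value chosen only for illustration."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Minimal sketch: package the conditional generator inputs.\n",
+ "# `batch_size` and `one_hot_labels` come from the input pipeline above;\n",
+ "# the 64-dim noise size is an assumed value for illustration.\n",
+ "noise_dims = 64\n",
+ "conditional_noise = tf.random_normal([batch_size, noise_dims])\n",
+ "conditional_generator_inputs = (conditional_noise, one_hot_labels)"
+ ]
+ },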
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Generator"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def conditional_generator_fn(inputs, weight_decay=2.5e-5, is_training=True):\n",
+ " \"\"\"Generator to produce MNIST images.\n",
+ " \n",
+ " Args:\n",
+ " inputs: A 2-tuple of Tensors (noise, one_hot_labels).\n",
+ " weight_decay: The value of the l2 weight decay.\n",
+ " is_training: If `True`, batch norm uses batch statistics. If `False`, batch\n",
+ " norm uses the exponential moving average collected from population \n",
+ " statistics.\n",
+ "\n",
+ " Returns:\n",
+ " A generated image in the range [-1, 1].\n",
+ " \"\"\"\n",
+ " noise, one_hot_labels = inputs\n",
+ " \n",
+ " with framework.arg_scope(\n",
+ " [layers.fully_connected, layers.conv2d_transpose],\n",
+ " activation_fn=tf.nn.relu, normalizer_fn=layers.batch_norm,\n",
+ " weights_regularizer=layers.l2_regularizer(weight_decay)),\\\n",
+ " framework.arg_scope([layers.batch_norm], is_training=is_training,\n",
+ " zero_debias_moving_mean=True):\n",
+ " net = layers.fully_connected(noise, 1024)\n",
+ " net = tfgan.features.condition_tensor_from_onehot(net, one_hot_labels)\n",
+ " net = layers.fully_connected(net, 7 * 7 * 128)\n",
+ " net = tf.reshape(net, [-1, 7, 7, 128])\n",
+ " net = layers.conv2d_transpose(net, 64, [4, 4], stride=2)\n",
+ " net = layers.conv2d_transpose(net, 32, [4, 4], stride=2)\n",
+ " # Make sure that generator output is in the same range as `inputs`\n",
+ " # ie [-1, 1].\n",
+ " net = layers.conv2d(net, 1, 4, normalizer_fn=None, activation_fn=tf.tanh)\n",
+ "\n",
+ " return net"
+ ]
+ },
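+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A minimal usage sketch, not part of the original tutorial: running `conditional_generator_fn` on a `(noise, one_hot_labels)` tuple should produce a batch of 28x28x1 images in [-1, 1]. The 64-dimensional noise size is an assumption for illustration."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Minimal usage sketch for the conditional generator defined above.\n",
+ "# The 64-dim noise size is an assumed value for illustration.\n",
+ "sketch_noise = tf.random_normal([batch_size, 64])\n",
+ "sketch_generated = conditional_generator_fn((sketch_noise, one_hot_labels))\n",
+ "print(sketch_generated.shape)  # Expected static shape: (32, 28, 28, 1)."
+ ]
+ },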
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Discriminator"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def conditional_discriminator_fn(img, conditioning, weight_decay=2.5e-5):\n",
+ " \"\"\"Conditional discriminator network on MNIST digits.\n",
+ " \n",
+ " Args:\n",
+ " img: Real or generated MNIST digits. Should be in the range [-1, 1].\n",
+ " conditioning: A 2-tuple of Tensors representing (noise, one_hot_labels).\n",
+ " weight_decay: The L2 weight decay.\n",
+ "\n",
+ " Returns:\n",
+ " Logits for the probability that the image is real.\n",
+ " \"\"\"\n",
+ " _, one_hot_labels = conditioning\n",
+ " with framework.arg_scope(\n",
+ " [layers.conv2d, layers.fully_connected],\n",
+ " activation_fn=leaky_relu, normalizer_fn=None,\n",
+ " weights_regularizer=layers.l2_regularizer(weight_decay),\n",
+ " biases_regularizer=layers.l2_regularizer(weight_decay)):\n",
+ " net = layers.conv2d(img, 64, [4, 4], stride=2)\n",
+ " net = tfgan.features.condition_tensor_from_onehot(net, one_hot_labels)\n",
+ " net = layers.conv2d(net, 128, [4, 4], stride=2)\n",
+ " net = layers.flatten(net)\n",
+ " net = layers.fully_connected(net, 1024, normalizer_fn=layers.batch_norm)\n",
+ " \n",
+ " return layers.linear(net, 1)"
+ ]
+ },
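+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A minimal usage sketch, not part of the original tutorial: the discriminator only reads the labels from its conditioning tuple, so for a quick shape check the noise slot can be left empty (an assumption for illustration)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Minimal usage sketch for the conditional discriminator defined above.\n",
+ "# Only `one_hot_labels` is used from the conditioning tuple, so the noise\n",
+ "# slot is filled with None here (an assumption for illustration).\n",
+ "sketch_logits = conditional_discriminator_fn(real_images, (None, one_hot_labels))\n",
+ "print(sketch_logits.shape)  # Expected static shape: (32, 1)."
+ ]
+ },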
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### GANModel Tuple"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 20,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAfkAAACHCAYAAAAV6SDhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsvddv5Gd65/upnCMrsYpVxRyaqckm2XnUkmYszWikkWZ2\nNJ4d7NqGbdiwfWfAgC8W8PkHFsYC3osNWCxkrHcg2fIojMJILbW61YEd2M2ciywWK5GVc7HCuRB+\n79m5ObM4BwfjA9QD9BW7m6zf732f8A0PoROd6EQnOtGJTnSiE53oRCc60YlOdKITnehEJzrRiU50\nohOd6EQnOtGJTnSiE53oRCc60YlOdKITnehEJzrRiU50ohOd6EQnOtGJTnSiE53oRCc60YlOdKIT\nnehEJzrRiU50ohOd6EQnOtGJTnSiE53oRCc60YlOdKITnehEJzrRiU50ohOd6EQnOtGJTnSiE53o\nRCc68f+fkP22vvHf/d3ftUulEtlsFq/Xi0ql4uDgAJlMhsViQS6X02w2KRQKyOVyNBoNAAaDAZ/P\nh81mQ6/Xs7u7SyQSoVgsUiqVqFQqXLp0Cb1ez82bN+nq6mJ2dpaDgwPS6TRqtZrT01OOj4/x+/0Y\nDAZSqRRqtRqDwcDu7i75fB6n04larUahUDA4OIhWq2Vvb49Go4FGo8HpdKLVaikWi+zv77O7u4vV\naqWnp4eJiQkUCoX4mslk4tVXX6XVapFIJDAajRwfH/Pee+/h8Xg4f/48rVYLjUaD3W7n5OSEaDSK\nx+NBqVSSSqXQaDQYjUYajQZKpRKDwUA2m6VQKHD58mWazSaffPIJZrOZnp4e8vk8tVoNmUxGqVSi\nXC5js9kAyGQy6PV6NBoNa2trNJtNRkdHAahWq8jlchqNBpVKBZ1Oh8ViwePx0NXVhdFoZGtri4OD\nA2ZmZpDJZKysrKDX6zGbzWQyGVQqFV6vl1KpRD6fR6lUEgqFuHnzJsPDw4yOjpJOp3E6nVy6dIm1\ntTVWVlbQaDTI5XJarRZ2ux2DwUAymUSj0dDX10ehUKBarTIyMoLBYKBQKFCpVKjX62g0GgwGA2az\nmYcPH7K2tkZXVxder5dAIEA2myWRSHB4eIharaavrw+9Xk+73ebk5IREIkEikWB4eJje3l7K5TJd\nXV2Mjo6KM2az2TCZTOh0OhwOBwqFgpWVFc7OznA6nahUKtrtNsVikUQiQTgc5uLFi/j9fhYXFzGb\nzVy+fJnd3V2Oj49RKpUAtFotzp8/j8Ph4MmTJ6RSKQC8Xi82m41qtUq5XKZUKjE6OkpXVxehUIhk\nMkk+n6der1Ov16lWq3g8HkZHRzGbzahUKnK5HPl8nkKhwNnZGUqlEpfLRTabZWdnh3A4TLlcJhAI\n4Pf7CQQCbG1tUSgUGBwcpFwuc3BwQLlcplAokMlkUKvV2Gw2gsEg/f39DA8Ps7Ozw+eff04gEECv\n1xOPxzEajfh8Ps7OzqjVapTLZRQKBVqtFrfbTaPRYGlpCYPBQH9/P0ajEYVCQalUYn9/n52dHa5e\nvUpvby/FYhG5XI5arSaXy5HJZDg5OWFvb4/9/X0uXLiAw+EgFovh8/kYHR1le3ubQqGA2+1ma2uL\nZ8+e8dOf/pTh4WFCoRClUolarcba2hqlUolgMIjRaESr1RIIBKjVajx48ACVSoXL5cJqtSKXy0mn\n0wDIZDL29vaQy+VMTU0xODiIx+NhZ2eH09NTWq0WVqsVq9VKqVSiXq8jk8koFovk83nkcjn1ep1s\nNotMJkOr1eLz+dBoNORyOXp7e/H7/dy7d0/kSUDkgGw2y+7uLul0mlarxfXr1zl37hwej4d0Os3B\nwQFfffUVKpWKl156Ca1WS6VSYWdnh1wuh0qlwuPx4PF4UKlU1Go1MpkMCoUCo9FIX18fcrmc4+Nj\n6vU6pVKJpaUlDg8PyWaz9Pb20tPTQyaTIRAI8Du/8ztsbW2xvLxMLpdDq9Xi9XrFndjd3QXA7/fT\nbrdpNpuoVCoUCgUKhYJAIIBOp+PTTz9lZ2eHfD7P0NAQ09PTXLt2jUwmwy9/+UtGR0cZHBykXq8D\noFarWV1dJRwO43a7cTqd2Gw2IpEIiUQClUpFf38/CwsLpFIpTk5OxB09PDzk/Pnz+P1+9vf3qVar\naLVa0uk0tVoNl8uFTqdDoVCIdy6TycTZ0+v1GAyGX/s7Ui7Y398nn8/TaDTEMx0cHMRkMiGXy8lm\ns8Tjcba2tggEArzwwguEw2Hy+Tx2u51arSZyQb1eJ51Oo1KpsFqtpFIpCoUCf//3f/8ba7jy/22x\n/n8aPp+P5eVl7ty5g0aj4ezsjMPDQ87OztBqtfzpn/4pU1NT/OpXv2JjY4NYLMbFixcZHBwkHA4j\nl8vxer1sb2/zxRdfcHp6SrFY5OzsjK6uLnp6ekThkBJhOp1mf3+fdDpNpVKhu7tbXFybzYbdbmd/\nf59isYjVaiWdThMOh8XLzuVyHB0dEY1GCQaDjI6OcvHiRSqVCk+fPmV/f5+TkxPm5uZYWlrif/7P\n/4nL5WJ4eBiXy8Xp6Sm7u7vMz89TqVTY3d1lcXGRjz/+mNHRUbq7u9FoNGxubrK6uorP50MulxMO\nh+np6WF4eJh2u02hUODo6Ij+/n7m5uYoFotUq1UikQihUIh8Pk9fXx9Go5FyuUytVqPVatHT00O5\nXObRo0fIZDJR5JrNJo8fP2ZycpLR0VGePn1KsVjE7XZjt9uRyWQ0Gg2y2SwWi4XV1VXW1tZE43Xv\n3j00Gg1WqxWlUolMJmN9fZ1KpUKxWOTk5EQ0JHfu3OHevXvY7XZmZ2eZnp4mHA7z+PFjrFarSIJd\nXV3ioLtcLuCbYlgul9nf38dut+N0Onn8+DH379+nWq3S19fHiy++SCQSYXd3l/X1ddEQSQn1+PgY\nrVbL8fExL730kigGu7u7ZDIZDg4O0Ol0jI6O4vV6OT095e7duzx79gylUondbsfv9zMzM4PdbueT\nTz4hFouhVqs5d+4c4+PjDA4O0mq1xHOKRCJEIhGR8G/dusWTJ08AqFQq5PN5/uiP/oiZmRk+/PBD\nNjc3KRQKBINBbDYbp6en6HQ6PB4PWq0Wk8mEwWBge3ub//yf/zNmsxmDwUC73aa7u1ucJa1WSywW\nY319ncePH9PT04PP58PtdpNMJlleXkYul6PVajk8PMTlcuF2u8Ud1Ov1rK6u8v7773P9+nW8Xi/p\ndJp6vU4qlaJWq1GtVqlUKhwfH3NyciISJcD58+c5d+4cb731Fl9//TWNRgOz2Yzb7ebixYuoVCp+\n8Ytf0Nvbi8FgEA10qVQiHo+zvb3NwsIC1WqVzz//nJOTE9rtNuPj4xiNRqLRKLFYjHQ6zfHxMfl8\nnlAoRCqVolQq8fjxY5rNJs8//zzNZpN6vU4+n+fw8JC7d+9is9kIBAJEo1F2d3cJhUKMjo5y7tw5\nHj16RLPZxOfzsbq6yocffsjQ0BB6v
Z5YLCaKtsViIRgMUi6XOT09pVwuc+fOHfG+1Wo1rVaL3d1d\nNBoNV65cYX9/n+XlZbq7u0Xj3tXVhdPpZHt7m3Q6TSQS4caNG9hsNg4ODjg8PBSfMZ1OI5fLyefz\nRCIRACwWCy6Xi3a7LRrIs7MzWq0W0WiU9957j2AwiNls5uOPPyaVSjEwMEC5XObs7Az4psHPZDLs\n7OxQKBT4sz/7M2q1Gn/3d39HX18f/f39hEIhEokEAHt7e6RSKc6fP08wGBTFPJPJUCwWicVirK2t\nkUwmqVarXLhwAb1ez9bWFolEgtPTU0qlEs1mE4VCgdvtxmKxiAJWLpdRq9Xo9Xp6e3s5PT1lc3OT\nvb09HA4HIyMjKJVKTk9PefjwIdlslj//8z9nZmYGrVbL4uIiH330EWq1mtHRUc7OznA4HLRaLVZW\nVmi1WoyOjjI0NITBYODevXucnp5is9nEUPDhhx+KeqHT6VAqldRqNQwGAw6HA61Wi0qlAqBYLJJK\npbDZbGi1WgqFAs1mE7lcjtVqRavVcufOHZrNJjKZDKvVSqvVYn19Hb/fj0KhoNFokM/nOTg4IJPJ\nUK1WUSgUKJVK1Go1VqsVh8OBSqUS+fc3xW9tkv/3//7ftxuNhkgUyWSStbU1DAYDg4OD+P1+dDqd\nuOxra2u43W7cbjfd3d0MDg4yPDzM48ePicViOBwOZDIZZ2dn4iFqNBrRKUqNgvRwnU4nAAqFArVa\nTb1ep1wuYzAYUCgUVCoVrFYrLpeLUChEtVrF5/MB31wGq9VKrVZja2uLVCpFpVJheHiYyclJLl++\nzMHBAY8ePaKnpwePx4PdbieVSnF0dEQsFiOXy9FsNtna2mJ3d5dXXnmFQCDA8fExW1tbhMNhnn/+\neTQaDTdv3mRiYoLnn39edNyJRAKHwyGmKYVCwf7+vki2rVaLs7MzKpUK2WyWbDYrPq9KpSIQCOD1\nejk+PhZJsbe3l4GBAWq1GsfHxywvL1MsFlGpVFy6dImBgQHMZrP4PgsLC6hUKtbX18VhlFCZbDZL\nOBwmEomI7lhCR9rtNrFYDJPJxMzMDAcHBxwcHGCz2VCr1TQaDVKpFOVyGb/fj1qtJp/PY7PZcLlc\nAlWwWq0iQT979oxyuYzD4UCj0aBWq9HpdDidToLBIMVikUgkwueff06tVmNkZISLFy/idrv5+c9/\nTiqVIhAIEAgE8Pl8OBwO8vk8W1tbYgIzmUwYjUb0ej1KpZJSqcTTp0+Ry+UMDg4SCASw2+0UCgU2\nNja4c+eOuIxqtZqJiQleeOEFHjx4wObmJhaLRTSXR0dHYlI3mUw0Gg1KpRJnZ2eYzWYx9UvvUGqM\ni8Ui7733Htvb2wI1MJvN4gxIye34+FigN+12m2g0ytbWFnq9HoVCQblcFg1RsVhEqVTS39+PxWJB\nrVYjk8lot9ucnZ2hUCjEs9VoNKRSKdrtNlqtllwuJ9ADAJVKRTQaBWB8fFxMn9/+9rfx+XxsbW2h\n0+no7u6mVqtxdnaGTPZNWmq32+KzVyoVlpaWWFxc5I033sDpdPLxxx9zcHBALpejr68Pm81GrVbD\nbreL6dRsNuP3+0UBs1qtnJ6e8tlnn3Hu3DleeOEFvvjiCxKJBP39/Xg8HhwOB81mk3Q6zfb2NnK5\nHJPJJJ6/TCbj6OiISCTC+Pg4AwMD+Hw+KpUKqVSKaDRKo9HA7Xaj0+loNpt8/PHH1Ot1XnrpJdbX\n17l37x49PT243W6sVisqlUogFSqVCrVaTbPZ5OzsjGw2K55FuVwWDW2tVmN5eZlyuYxGo+HixYsE\ng0G0Wi3hcJijoyPx/fP5/K8NQRqNhkajQTAYJBAI0Gw2iUajrKysiLwSDAaJRqN88MEH9PT04PV6\nxSSfy+UwGAyYTCZMJhPDw8Ncu3aNjY0Ntra2cDgcZLNZVlZW6OnpIRAIoFAoBNr50Ucf8fDhQ55/\n/nmCwSAqlYoHDx6wsbFBd3c3Xq+Xnp4ezGYzGo1G5BeDwfBrzWWj0aDdbhMOh6lWq7zwwgu0222e\nPn1KOp2mXC6jUqno7e1lZmYGt9uNXq8nmUwSjUY5Ojqip6cHrVbLgwcPODs7E+hTqVTi9PQUq9XK\n8PAwGo2GVqtFMpmkUChQr9fp6enBbreL865Wqzk4OKBYLIpBC6BQKFAsFimXy+j1epxOp/jZpVyl\n1+s5OTmhUCigVqsJh8M8e/YMtVqNx+Ph0qVLos6trq4SiUT4+c9//i93ko/FYng8HiYnJzk6OqJc\nLmM0Gunp6WF+fp5QKEQkEhFdUqvVIpVKUa/XBewoXX6LxcLQ0BA6nU4c/FKpJGDdWq3GxsYG5XIZ\nu91Od3c3Q0NDlMtl6vU6SqWSk5MTqtUqly9fpquriydPnuB0OhkZGSGZTHJ2dkYgEBDTpjRRHh8f\nU6lUsNvtjIyMMDY2hkwmw+Vyce3aNQGdJpNJcdkePXpELpdjbm6OcrlMIpHAYDBgNBqx2WyYzWb0\nej1utxutVovZbKarq4tAIEBXVxcKhYJsNku9Xkcul6PX6+nq6sLlcpFIJIhGo6RSKXK5nGhcdDqd\ngOh8Ph/9/f14vV6USqXoOrVaLfV6HafTSa1Wo1AoCCqjVCrRbrcFXWI0Gunu7kYmk9Hd3U0+n6fd\nbhMMBjGZTCKhnJ6eisYqGAzi9XrRarV8/fXXSHSNXq9ncHBQQPQqlYq1tTXC4TDd3d1Uq1VBK1it\nVtrtNvBNEejv78ftdhOPxwWs7Ha7BWzscrkIBoMi+W9ubpJKpVCpVCiVSlQqFQaDgVarhdvtZmxs\njKGhIZE45HI5TqdTTMh6vZ5SqcT29jbhcBilUonP5+PSpUuoVCoqlYqYZKSzJ01L0nkzGAwEAgGG\nh4dxu92YzWZ2d3fZ29vjxo0bzMzM4HA4WF1dJRaL0d3dDUAulyOZTJLJZIBv0LDZ2Vnu3bvHwcEB\nFy5coKuri2g0SiKRoFarMTU1JSZ46WfJ5XLYbDbGxsZQq9XUajUODw9ptVrAN9Bgo9FAJpMRCASY\nmJhgb2+PQqEg4Nd8Pi8SmHTeR0ZGRGOnVCrJ5XIi2Q0ODnLjxg12dnZoNBrimVosFjGlSO9Gr9cz\nPDzMuXPnePjwIclkEq/Xi8FgEPe/1WqJf6dSqUTzMTw8jNFoFBSPxWKhVqvhdruZnZ1lf3+fg4MD\nCoUCrVYLs9nM+fPnqVaruFwuQdFJsP3q6iojIyNcv36dcDhMqVTCaDRisVjQ6XTMzc0xMDCARqPh\n4OBA5ALpfkgTtwTZy+VyDAaDoPv0ej3BYBClUkmr1RKNiQT7b2xs0NXVJVA3qRHr7e0Vnz+bzdJs\nNunq6kKn04mGTKFQMDExgVqtZnd3l7W1NU5OTpiensbpdApKRSoyuVyObDbL/Pw83/72t9nd3SWX\ny4lGsNVq4XA4KJfLonhKTXmhUODk5IRGo4HVamVkZIR4PM7S0pLI88fHx3R3d/P888+LHHXp0i
XG\nx8fR6XQkEgk2NjYwm83i3KnVasrlMk+fPsVsNjMzM0OlUuH09JTDw0MBhbvdbtRqNXa7nd3dXW7d\nusX4+DgzMzPodDqsVit6vZ5ms0mj0cDr9dJsNkVzIDVU0v8h0cUSnTExMSFQzY2NDdLptDjHZrNZ\n3Bdp6i4UCly6dAmXy4VSqRR0glqtxuv1Mjw8TDKZpF6v4/V6qdVqxONxMf0Hg0GazSbLy8vivAwN\nDWGxWMR5KhQK/1u19rdW5GUymZjgHz58yMbGBrVaDafTiUwmE1CUVGy0Wi19fX3odDpCoRCVSoVa\nrcbm5iaZTIZHjx7RarVEoe7u7uaTTz5hYGCAq1evCj5qd3eXRCLBzs4Oer0emUwmoKFms8lzzz2H\n3W7HZrOxvr7Oxx9/jN/vFx1yuVwmnU7z1VdfUalUeO211zg8POTevXt8+eWXLC4uUq/XsVqteL1e\n9Ho9FouFR48e8eTJE54+fUoikRC8t06nE/BgJBLh9ddfJ5fLcf/+fX75y1+iUqlIJBKsrq4CMDc3\nh8fjoVQqcf/+fVZXVwVElUwmOTg4IBKJCN6nWq1isVjwer0oFApSqRSrq6s8fPiQYrGIQqFALpcj\nl8splUqUSiVR9PL5PMFgkN7eXsLhMI1GQ1yAdrtNJBIhGo3yxRdfoFKpcDqdtFotBgYGCAQCGI1G\nqtWq0Dncvn2bF154gZmZGRKJBI1Gg4GBAZGsDAaDmGwbjQYGg0HoM9xuN4lEgoODA5xOJy6Xi+7u\nbmZnZ7HZbMRiMTQaDb/7u79LoVAgFouxvb3NyckJqVSKiYkJfD4f7XZbQLljY2NcvHiRl156iSdP\nnnD37l3x/0vUxNDQEEtLS9y/f5+/+Iu/EHzz/fv3WVxcFIjJ+Pg4y8vL7O3tCej1e9/7HgqFgpOT\nE95//33ef/99AdM7nU6Gh4dZXV3l3XffxW6343A4+G//7b/x4MEDXnvtNR4/fszq6irZbJZqtUqr\n1eL73/8+AwMD/OIXv+Cjjz4SzYjVamVgYIB4PM7bb78t+O9CoSASXCqVIp/PiyRy/vx5AXkfHR1h\nt9vF+9BqtUxPT9NsNnnvvfeoVqtC77G2tsbbb7+NTCbD7/fz+uuv43a7KZfL3L59m7t375LJZLh0\n6RI//vGPhZ5EJpMhl8ux2+08ffqUe/fuUS6XmZyc5MqVK2xsbLC0tITJZKJUKmGxWLhz5w6Li4uC\nh9Zqtbz77rsYDAZGR0exWq2C5jMajUIPE4vF2Nra4uTkhM3NTcbGxrhy5QrLy8s8ffqUnZ0dSqUS\nxWKRy5cvo9fr+eyzz8jn88hkMjFpHR0d4XA4yGQyHB8fc3x8TLVapVQq0Wg0SCaTAkGMRqNEIhHa\n7bYomjs7O6ysrKBWq1EqlXz00UdYrVb6+/vZ2dmhWCxit9sZHx/H7/ezt7dHJpOhUqlgMBi4ePEi\nxWKR4+Nj1tfXyWaz1Go1FhcX8Xq9jI2NIZfLSaVSPHjwgHa7TavVEo2Jy+Wi0WgIiuX09JTPP/9c\nNDJSc3T58mVSqRRPnjxhbGyMQqHA0tISX3/9Nc+ePRNoy8zMDH19faRSKaEjuHbtGiaTicPDQ3Q6\nHWNjY4yMjNBoNCgWi0LzIL3DSqXC/Pw8NpuNZDJJMpnEZDJRqVQEcrq3t8fnn3+O2WzGZDKh0WgI\nh8PcvXtXNCXSOQmHw0LX4fP5hKYnEAgwOTmJx+OhUCiwu7vL1tYWyWSSrq4uPB4PQ0NDQntwcHAg\ntAHtdpt6vS5qxj/8wz9w7do1enp6uHnzprjf9+/fJxqNcuXKFeLxuKBeBwYG2Nvbo1Qq0dPTw9On\nT1lbW+Py5ctC41Sr1YhEIjx8+FDoQxKJhNC/hEIh4vE4er2eXC7HP/3TP1EqlUin07z88su88cYb\n/Kf/9J9+Y61V/H9ayf9vYmBg4G8kMdHx8TGFQkHAkM1mk1QqhUKhIBgM4nQ6MZlMAm7s7u7GZrPR\narWEaCUYDFKtVgVfo9FohHhCuqDSZOJyucSUJcGhHo+HYDAoHmgulxOTq81mExdJuoCpVAqtVsvQ\n0BDZbJb9/f1f45YkSFkmkwmYXuJ1ent76erq4uTkhLOzM3Q6nZiEpqamKJVKJJNJwdfCN1OrNAlW\nq1VkMhnNZlNcKknEJE3AHo8Hl8uF0WikVqtxcnJCqVQikUgQCoU4OTkhn89jMBjQarWiA5e6ylqt\nhlqtpr+/n6GhITQaDVqtFoDT01PS6bTgpySEQKfT4fP5UCqVhMNhkRC7urqw2WzIZDKUSqXgbCWB\nlKRVqNfrqNVqXC4XmUyGZDJJuVwGoLu7WzRfFotFiBilhuP09BSHw0F/fz/xeJy9vT2Oj485OzvD\n5XJxdnZGOp0WZ6hSqeDxeDAYDOJ9hkIhGo0GzWZTJJZyuczu7i7JZBKDwUA+nyeVSolk7PV66e3t\nxe12U6lUaLVatNtt8TykwiZBdBLK0Ww2uXDhAgAHBwfAN5Dezs4O2WyWSqWCQqEQ4tJYLCa0Ft3d\n3bRaLdHENRoNVCqVgCEzmQwjIyMEg0FisRiZTEZQMPF4nGKxKOB2i8WCw+EQ90uiJqRptVQqcXJy\nIhKT2+3m4OCA27dvi2YGvuEjC4UCX3/9NYeHh3R3dwv+UxJTSgld4lorlQrLy8tUKhXUajXVahWb\nzUZPTw86nY7T01PW19fJ5/N4PB70ej2tVktApWazGbVajdlsFiJT6dmcnZ1xdnYmCosE+0vTdl9f\nHxqNhmQyidvtBr7hmBOJhPgZi8UilUpFTNlS8ZaQv4ODA3p6eoQIs16vC7REuquNRgO9Xo9er8do\nNGK1WjGZTKjVasGtS1qLVqslPodEIVYqFUHNRCIRDAaD+IzwDXWTTqfJ5XJ4PB6RFyUqx2q1CmRD\nuheJRELkW61Wi8PhIBAIUCwWefToEV6vF4/HQ7VaFZqddrstIHOZTIbdbhcQvl6vp1wuk0qlsNvt\nIretrKzw5MkTgsEgfX197O/vo1KpmJmZwWazYTQahdbg9PRUoKnS/dzZ2cHlcomiabfbAYTAVZro\nJapldHSUSqVCqVTC7XYzNDSEy+USaBHAzs4O+/v7giIplUpotVq0Wi0KhYKenh76+/spl8vk83la\nrRaNRgO5XE5/fz8mk4l4PC7E19VqFZ1OR09PD9FolFu3bomaIb0PlUrF9vY22WxWUBd9fX3E43GS\nySTtdptMJsPu7i5yuVwgt5lMhs3NTYFWGQwGyuUy0WgUr9eLyWTi3Xff/T9+U639rRV5jUbzN5lM\nhkwmg06nEzxRJpNhbW0NtVrN2NgYP/7xj4XQ5vj4mHa7ze/93u8JIYZer2d0dJSf/OQnGAwGoQyu\nVqtMTExwfHzM//gf/wOZTMb4+Dj/6l/9K+bn5/H7/WQyGeRyOc899xwvvPACN27c4O7du9y+fRu5\nXM7CwgI/+9nPKBQKbG5usri4SDabxWQyMTY2Rm9vL
5FIhMPDQwqFAhaLhenpaf74j/+Yy5cvC8ht\nZ2cHtVrN+fPn+eEPf8j3v/99AoEA9+7dIx6P02g0+JM/+RPeeOMNcaEkzv/s7AyDwSBU8oeHh2Qy\nGTweDzdu3OBnP/uZEOt99NFHrKyskEwmmZmZYWpqir6+PnZ2dvjwww9ZWVkhFApRq9WwWq309vYy\nPj4uINjvfve7/P7v/77QMuj1enp6eggGg2JiXlpaYm9vj9PTU9xuNz09PQwNDQmofWZmhkKhwH/9\nr/+VfD7PwMAAr732Gt/61rfo7u4mEomwvLzMwsIC165dY3x8nPv37/OLX/yCXC6HUqmkp6eH/f19\nVldXOTo6Qq/X8+KLL3L9+nVu3LhBf38/jUaDZ8+ecXp6SqFQYHp6WjzvlZUVdnd3KZVKXLhwgb/6\nq78SaMuFCxdEgi+VSuzt7Qm1rTQhuFwuXn/9dYxGI++9954ouM+ePSMUCgleW4IeJQHQ5OQk58+f\nF0Kw/f19IpEIMpmMf/tv/y1vvPEGFy5cEE3SxYsXGR4eZnp6mq2tLe7fv49CoSCXy/H48WO++93v\n8sorrxCNRqnVaqLRMpvNvPbaa1y9elU8i3A4zBdffMHZ2Rk/+MEP+N3f/V2uXr3K5uYm+XxewJ5n\nZ2dCI3BVutqhAAAgAElEQVR2dsaVK1f4zne+w9TUFPV6na+++gq9Xi9Ee41GA5/Px+TkpPicUrF+\n8803mZyc5J133mF1dZV6vc7e3h5ms5l/82/+DcVikf/4H/+jKHwbGxuC/rlw4QIul4svv/xSTCzf\n+c53+OlPf8q1a9fIZrN88MEHtNttpqam+MM//ENsNhvb29vCdSBRGc8//zypVEokTel8S+6Il19+\nGYfDQTQaJZPJ4Pf7+ZM/+ROMRiMrKyvIZDLR1ErCp3a7/WsT+cHBARMTE1y4cIGBgQG2trb4+OOP\nmZ+fZ2pqiu7ubkE1SW6Ger3OpUuX+OlPfyqGiRdffBGNRkM0GhUctfSzra6uYrPZ8Pl8OJ1OQqEQ\njx8/xu/3Y7PZKJfLLCws8N3vfhePx0Or1WJ7e5ujoyPq9Tpvvvkmzz33HFarlXv37vH2228L14TD\n4WBnZ4enT58CCKrg8uXLvPLKK/h8PqElkb525coV5ubmCAQCQm+xvLxMo9Hg5ZdfZnZ2Fr/fz507\nd9ja2hKiSpVKxT/+4z/yxRdfkE6nefXVV3n55Zf51a9+Ra1W4/nnnxfFLxKJEI/HRQNar9eZm5vD\nZDKxvb3Nq6++yh/8wR8wPz/PzMwM8/PzApXc29sjmUyiUCh47bXXuHTpEv/lv/wXyuUyf/mXfymg\n7Wq1KlCe4+Njcrkc8/Pz1Ot1PvnkEwqFgtA0TE5OCt3Rzs4OzWaTiYkJ/viP/5hz586JgUWhUHBw\ncMDly5d56aWX0Ol0nJycCMRTrVYL3YlEXUqNnsvloq+vTyBNEnJy69Yt+vr6hGC1WCzy8OFDyuUy\nWq2Wl19+Ga/XK5qb9fV1tra2fmOR/63B9UNDQ0IpLMHGExMTlMtlIpEIPp8PvV7Pu+++i06nE51f\npVJhZWWF4+NjHjx4QF9fH1qtlp2dHaLRKBqNRgigYrGYgKIkxGBjY0MIqmQyGSqVisXFRba3tzGb\nzWxuboqkHgqFSKfT5PN5NBqNuIB9fX3A/zW9SHz42toat27dIplMislEamLGx8fZ2dlheXmZGzdu\noFaruXTpEktLS4RCIX75y1+ytLRENBql3W6Loi6hG319ffT09HDv3j2i0Si/+tWvUCqVWCwWcSAl\niNNgMPDs2TPi8Thut5vt7W2azSZ+v1/wUplMhkgkQr1eF9alZ8+esbe3JwSKQ0ND5PN5vvzyS+x2\nu+Ci7XY7Xq+XUChENpvF7/eLifrhw4dEIhHK5TIejwe5XM6nn35Kq9US9qh6vc6dO3fY2NjAZrMJ\n+2EgEBD/RuLRAEGP/K/Cr3g8zsnJCclkklQqJURAuVwOjUaD2+2mUCgIDUSxWBTct4QCSNPr6emp\naKiKxSLRaJRsNisEV5JYsFqtkkgkePDgATqdDrfbLURsjx8/xuFwCBuh9A69Xi99fX0Ui0Xu37/P\n7du3cTqdvP7668jlcvb29giHw6hUKgYGBoRop9Vq8eDBA5RKpSgiu7u7WCwWkskkf//3fy84eskm\ndP36dUZHRwkGg9y+fZu9vT3i8bigsXw+H36/X6iFJfhScgIcHR0JB4CEBEmFa3FxkUQiIbh2pVLJ\nrVu3sNlsLCwsEI1G+frrr4VQaXt7m1QqJRJmNBoVoqaRkRE++eQTQY8dHBywvr7OxsYGbrcbn88n\nxKGSBkSpVArx0cbGhrBtmc1m0TRJ1IRcLhf6Fq1WSyKRYG1tjbt37wqLnVRk/X4/oVCIZ8+eAaDX\n6zGZTIKukhTzkpW2UCiwtbVFJpPhW9/6Ftvb24RCIZrNpjjjw8PDmEwm1tbWuH37NhsbG4LPlpwq\n0sRuNBrxer3k83mBwFksFtEsHBwccHp6KpACSci5trZGPB4XQtxYLMbNmzfZ2dkRKNjZ2RkfffSR\ncCZJVriuri7a7TbxeJzV1VVarRbNZpNiscjs7KwQkR4eHgoYXK/X4/V6CQaDgq+ORCLkcjnS6bTQ\nBSSTSaGFGRkZ4eTkhKWlJTQaDYODg9RqNf7Df/gPIkdLrhYJ+ZDcQxLfvbm5yVtvvSWEtmazWei1\nJI56fX2dhw8fcnR0RCaToauri8PDQ2KxGLu7u2xubgrEUy6XMzExQSQSoVKpMDU1hcvlol6v8+DB\nAwwGg0DslEol6+vrbG9v8/DhQ5rNJtVqVdBehUJBIAxS/i6Xy7TbbSHs29nZ4dNPP6Verwul/cbG\nBvfv3xfW5tXVVaGhabfbgnrJZDI899xzxONx2u22oD0jkQjDw8NCq/Ob4rc2yb/55pt/Y7fbMZlM\nQj06OzuL3W6n2WzS19eHSqXi7t27FItFIYrR6XQUCgX29vbY3d1lbGyMYDAovLPNZpPJyUl8Ph8n\nJyfYbDamp6fJZrOk02nOzs4Ih8Ps7e0JyGNzc1N0+RKHHQgEKJfL7O3tiZfTbreFujuXywkPv8Ph\noLu7m3g8TjQa5fDwUHi/JU/6yMgIiUSCp0+fCrGGXq8nnU6zu7srOL3NzU0BAclkMkElDA0NMTAw\nQDabJZ/Pk8lkhOVpeXmZ/f196vU6Pp+P3t5ewQ8Wi0XS6TTNZvPXFOD5fJ5oNCosVE6nk1gsxvLy\nshDxDA0NkcvlhCc7Go2STCbx+XwMDAyQTqepVqvCoiaXy4WgRlKEajQa7t27x+bmpoBPJc40HA4T\ni8Vot9v4fD6mp6cJBALCi5zL5YBvfMGZTIb19XVhzZP0AxK0abPZODs74+TkBIVCIeB2vV6PXC4X\nithqtSpgWJ1OJ4RLEhQpNRLxeJxarSZEhpJVplKpcHh4iEajweVyiXdy7949YrGYsLtJ1EZfX59Q\nFkt2z5mZ
GRYWFgSfuLGxQbPZBBBuBOnnVqvVXL16FavVSiQSwWw202q1WFpaYn19ncPDQ+r1Ona7\nnYsXLwpu9/PPP+eLL74AQKfTCbGf2+0WDY1EjyQSCW7fvk0+nxfiTqfTSV9fHy6XC5PJxOPHj1la\nWiKXy1Gr1cRED/C9730PmUzG4uKimE5dLpeAGBOJBLFYDKVSidVqxWazcf/+fRKJBK+88gparZb9\n/X3hbDGbzeJ5eL1e7Hb7r3n89/b2yGaz9Pf3i+eRSCQE1Ot0OvH5fNjtdjQaDYVCgYODA2H5UigU\n9Pb2CuHY4eGhoLDMZjODg4OiSZemZGkPw/HxMZ9++ilGo5H5+XnW19d58uQJOzs7pFIpsfPA6XRy\neHhINBolHA5jMplwOBxiL8fp6SlGoxGn04nX6+Xo6IitrS0sFgtGoxGTycTJyQnxeJxQKEQsFkMu\nl2OxWAQS0Gg08Hg8QuwpUSDS8CFRlWdnZ7jdbgGNSzRVuVwWIuLDw0PK5bJoSqWfv1gsYrPZBAfe\narXEeUokEuzv75NIJMSdyuVyVCoVAoEAKpVKUGZnZ2fMzs6i0+n46quvKJfLtFotIaQsl8siF0nK\n/2azSaVSEQ291MxJ1Of09DRdXV0UCgUikQgbGxsAoplNJpMcHh6yuLhIPB4XX+vu7mZtbY1isShQ\nEoVCIZxJpVIJpVKJXC5nZ2dHnJ1IJEIsFiOZTAqtj8/no9ls8stf/lIMiJK9TjqrN2/eFGLcbDYr\n9rRIjq5nz56Rz+cFxaBWq7lz5w4ymYzLly8jl8sF1ZnJZEgkEmJHwWefffYvd5IfHh7m6OiItbU1\nNjc3kclk9PX1UalUWF1dxePxMDw8jMPhEAV1YWEBh8NBIpHA6/UKqK6/v590Oi0Sbm9vLyaTicHB\nQYECtNttlpaWODk5YXR0lN/7vd/D6XSKlyUlSWnxjpQ8yuUy6+vrHB0dUa1WCYfDwvfp9Xr57ne/\nSzwe58mTJ7z++us4nU5KpRIPHz7kq6++ol6vYzKZ8Hg89Pb2Mj09LaY/KXE5nU5+9KMfMTIyIhSj\nkorWbDZjs9l48OAB7777LtevX+d73/uemMzsdrvwnE5OTgoaQeK5JDW1NOVLfyTLk9QRGo1GYrEY\niURCPLfT01NmZ2f5nd/5HUKhEKFQiP39fQYHBxkYGODcuXPk83n29vZwu90MDg4yNTUlHBHr6+us\nrKyIySEcDjM6Osrs7KyAryQNAHzjq67X6zx69IitrS2Oj4/x+XxiqpS46sHBQc7OztjZ2WFsbIzB\nwUFSqRTLy8tsbGzgcDjEs+7v7ycYDApL0auvvkogEKBer4t3r9FoUCqVAtLr7e2lUqkQi8XY398X\nkOAPfvADotEo//RP/8Tk5CQLCwv09/fj9/uxWCyi+PT19QmuVBJYejwevF4v09PTHB8fs7i4SG9v\nLy6XC6fTyZMnT9je3qbRaNDd3S14TEnwub29za1bt3j55Ze5dOkS8/Pz5PN58c7y+by4F36/n+vX\nr6NWq4VwbHJyUiQZl8vF4OAgvb29fP311ywvL6NWqwVKJWldlpeXGRkZYX5+nj/8wz/kJz/5ibDv\n3L59m1dffVVQaRLKVavV0Gq19Pf3C7fGW2+9xfHxMW+++abgxa1Wq7ApTU1NMT09zcOHDzk+PmZu\nbo6uri6hds5kMsJyNjc3x7lz51CpVJyenuJyuVhYWKC3t5eNjQ3u3r0rkrDJZMJisTA+Pk5fXx9X\nrlwRqI604Kavrw+/38/m5iaPHz+mv7+fixcv0mw2OTk5EXayc+fOcefOHbFboNFokMvlsFqtzM/P\nMzIyIvhflUpFvV5nenpaNGjS2ZWSeyAQEJ9D8sbr9Xqhw2k0Gly+fJmrV6+yvr4uuFtJBDw/Py+a\nUqlZlhwq7XabZ8+esbm5yeDgoNANSAJFyZXy0ksvCW/7/v4+h4eHfPzxx8RiMebm5oQr6G//9m8x\nGo0YDAZ2dnbEe5Ca7WKxCIDL5RJCt88//1ygHUqlEpvNxtDQEG63m+eee04slWk2mxwcHLC4uCiQ\nMAlB7erqYmxsjL6+PhqNBsfHx6ytreHxeIQ2xeFwcOXKFf77f//v/OpXv2JoaIj+/n7GxsYwmUyi\n4dfr9QI9LhaLOBwOQqEQ9+7dY25uTtx76XlIOpJvf/vbqFQq7Ha72AMi2XbX19fZ3Nxke3sbk8mE\ny+VifX2dV155hatXrwpXyF//9V8LlFZyukg7OCRffnd3N9PT08KRIVlx5XI5o6OjYqFPLBYTOwHu\n3r37v1Vrf2tFvtlsCrGGBGVIXLxKpSIWi4m/K21Ok6AQk8mE2+0W8JVkf6pWq4RCIfR6vVjCIqnq\nZTIZNpuNZrMpYCW5XE6lUsFoNAo/qeQjlzhbCY6SrA6tVksoRSWlvSTGkzrIzc1NYQM5OjoSnaYk\nKJEWc0ifS1paI30fabKcn58nGAyi0WgEbFyv1wUkmMvlePjwIYVCQRwyaRNWrVYTXbAEX0oLMrLZ\nrGicTk5OROcJCNtcq9Vib2+Pqakp/H4/3d3dohC2Wi1CoZDYxpbP57FYLELFK5PJ0Ol0AIK7brVa\nhMNhAUVLE3M2mxUQvuSHlZYKNZtNXC4Xvb29wvooKZ+j0eivWcGePHkifKdut5vh4WF0Oh2pVEog\nGb29veL9SYm41WoxPDxMs9nk6OgItVpNd3e3ED1JojyZTIbH4wFALpejVCoFhGaxWBgbGyOZTIrC\nI3npK5WK2KUwPDzM+fPn2d/fZ2lpSfDAEiQuTbEOh0MIn3Z2dtBoNKJ51ev1VKtVcUb9fr/gNvf2\n9lAoFHR1dYlJcWJiAqVSSbFYJBwOk0gkBC8v6R8kAZ809UoJXdKCSMtfJEFiV1eXoKwSiYQ4r81m\nE6/XK5bAdHV14Xa7ef/99wmHw6J5ODw8FAldo9EIoae0G2FzcxOVSkUmkxGIwZMnT8jn82KyHx8f\nF15t6d5IdjSHw4HRaCQUCnF4eIjb7aavr4+hoSGxDEt6Vk6nU2iCJCvX6empeM/j4+O43W6xYbJe\nrzM7Oyu2mvl8PoEkwTc2RymHSMLDbDZLMpkkl8sJn7rkxZZsY5Ii3u/343a7abVatFottFot3d3d\nmEwmoROIx+NCRR6Px1EqlQwMDIgtlX6/X/i6u7u7aTQaPH36lHK5TDAYFHqM/zUPSechl8sRi8XY\n29sTsLW0Q0H602w2icViNJtN3G63ELVJVEksFiMajdJsNjGbzUKpLokopf0BkuNAogokMeH/KlrW\narViCpbusbRTQLLMSohOd3c3gUAAmUzG48ePxZZIyfkzODjI8vIyz549E5SkxO+XSiVhS9vb2xN2\nRalBAEROlejBGzducPPmTba3t5mYmBBibofDgdvtZnFxEZlMxuTkJC6X69dcJhLy3Gg0xPIqi8Ui\nBrJyuUw2mxUbRE0mE+l0WjhapHf9vxO/tSIvcU3SFiJp2YXUpW5ub
nLz5k3hg5emBEmdKCkeJe5v\nfn6eSCTCO++8QyAQQK1WEwqFUKvVYs2h0WhkaGiItbU1fv7zn4sCJvkape17kj8+HA7z8OFD4bOd\nmZlBoVCQTCY5OjoilUrxzjvvoNVqhQ92a2uLd955RzQeh4eHHB4ecvPmTdLpNLFYjJmZGcxmMwcH\nB4J/fu+998SmMUlMJi1LkXyR3/72t7l16xZbW1u8+eab3L17l/fee4/z588zPDyMWq0WQhwJwjs+\nPhbiOPimiEsLd/b391lcXGRsbIw33niD9fV1bt26JRwOuVxO0Bi9vb04HA6cTie3bt3i9u3bqNVq\nLBYLbrebbDYrBFQSelGv1wW3Lq02jUQiwjtcKBTEulUJWTCZTCJhWiwWMWlJCUPSZYTDYT777DMC\ngQDj4+NEo1HRaF27do25uTk+/vhj7ty5w9LSEt///vf5/ve/z6effsr9+/dxOBzCFjY1NUUymeQX\nv/iF2IynUCgIh8Ps7OyIVbiZTIZCoUAulyORSIhOfnh4mJ/+9Kek02mi0ahoQpvNJk+ePBHWuhde\neEH4h1dWVkRnL1l/pGQn0QOPHz8mEonw5MkTLl++zB/90R8RiUSEANBqtTI5OUm73UYul3N4eMjW\n1harq6ui4P7rf/2vOTw85K233iIcDpPNZjGbzcRiMU5OTvjhD3+I3+/nww8/ZHd3l+XlZebn5+nt\n7WVsbIx4PM7Nmzd59uwZ2WxWIEVTU1O8/fbb7O3tcenSJdGcfe9732N4eJjj42OGh4fxeDwUi0W2\ntrb4+c9/TqVSIZFI8NprrzE/P49Op+PBgwe8//776HQ6sZFNsl9JDoednR2xaWx6eprz589jNpt5\n9uwZH3zwAfl8XqwN9nq9aDQa3nrrLWEr+8EPfsDPfvYz9vb2WFpaEkOG2Wzm0aNH5PN5pqam2N/f\n54MPPhBOnBdffJFMJvNrup0f/ehHAo2TJkNps2Emk2F+fl7ArpI1U6PRCBrH4XDg9Xr59NNPBaU3\nOzvLtWvXGBoaEmugV1dXOT09/TVXiURZBQIBsWlzbm6O2dlZvvjiC7RaLT/84Q+F/TMej/PgwQPe\neecdZmZmuHTpEuFwWOh4pH0DPT09Ao2RtmyGQiEcDgff+c53hJanv7+fVCrF559/TjAYZHp6GpVK\nJQSLN2/e5P79++j1ehwOh1iUVCqV+PLLL9FoNIyOjjIxMSG0CxqNhvPnz/PgwQNxniQdxnvvvcdn\nn31GqVTC6XQK73w0GmVjYwO1Ws3IyAjNZpOFhQWhcn/77bfFYiOJCtDpdNy/f59/+Id/QKfTcfny\nZX784x+L3QkSbfHll18CYDKZBK1TqVRE4Q2Hw7z44ov8u3/379jc3BTiTWn6l6zhjx494vT0lLW1\nNebm5ggGg4RCIVZWVohEIsKud/nyZdRqNU+fPhUW6HQ6TbFYFIJbiSLp6ekRtKvH4+Gf//mff2Ot\n/a1x8gsLC38jbW9yu9309/czNTWFUqlkaWkJl8vFuXPnGBsbo7+/X6w2lVbLStYsKenfuXOHer3O\njRs3GBwcpKur69emieHhYXp6esQmPKfTKaxyPp+PoaEhsfQjEokIlajkD7VYLGSzWdbX11leXsbl\ncuFyucQSH6/XKywbRqNRTIapVEqoLSU468qVKwwMDCCXywVsNzs7S39/v7CkSdO0tBlPWpl4cHBA\nPB4X/JpKpWJsbEx4TiV9w9DQEIODg3i9Xtxut7AL5XI5jEajsPdIW5l2d3dxOBy88MIL9PX1CRtN\nV1eXsBAZDAaxbESn04m98t/61reEnz8cDgtea2hoiIsXL4oNZtIaUcmy5XQ6xVIPuVzO/Py8gIvP\nnTvH1NSUuMzS3mtpd7xEqUxNTWG324Wlz+fzMT4+jsPhYH19nVqtxsDAgFg4Ik2ycrmc4eFhLl68\nyMjIiOCRJSX1+Pg4TqdTLMRwuVzE43FyuZygK6RFK9JE5vF4mJmZ4ezsjFAoxM7ODjqdjsHBQS5c\nuMClS5fo7+8XYken0ykERX6/X0DcgUCA8+fPMzExwfT0NFarlUqlIuw0gUCA0dFRMc3pdDpUKhXF\nYlEsHkkkEoLfb7VaYpubNIFK63nT6TTJZFJsM8xkMsI2Jy1vCQaD5PN5SqUSgBBW9fT0MDAwIKZC\n6fcJlEolsTzk9PSUW7ducXR0hMlkEhOXZL2S1iBLMLfBYGBsbExYXePxONVqlevXr9Pd3U2pVBIU\njt1uZ2tri6+++opz584xODhIsVgUYkS3283k5CR9fX3Y7XaxDGtiYoLZ2Vnx/CSxYTqdxmq1Mjc3\nx+DgIBqNhpWVFUHNSRvXZDKZgFol8ZkkHB0eHkalUpFKpQT94vV6MZvN+Hw+FhYWRBGXkAHJUih5\n1yU0x2g0Cl47Go0KrYZ0lpvNpmi0pWnVbDZzeHjIxsYGoVBIaH8k4Z/BYBD72KV1zHa7HYvFgt1u\np6+vT1ht5+bmxE53SaV+dHQktnaqVCqBusbjcbFRcHh4WKz7XVhYwOVyCaeD3+8XOViC7KX8eXp6\nKhw5lUqF+/fvo9VqGRsbw+Px4Pf78fv9YsOgJATt6ekRVsrh4WGh6ZBcMNJ2yvX1dUwmk3AKOZ1O\nrFarQGIlFFlaNyt516VVws1mE71eT19fH3Nzc8LuLJPJRPP84osv0mg0BHXo9Xp56aWXGBsbEw2H\nx+Ph6tWrnDt3jt7eXqFD6Onpobu7m97eXpG7zWYzFy5cYGpqStSsBw8e0Gw2USqV3Lt371+uhe7q\n1at/02g0aLVagq+enJwkmUzywQcfMDU1xcLCAsPDwwJmz+fzxONxoZCWDk+5XObtt9/GYrHwk5/8\nRGyjksQyGo0Gn8+HxWIRwghJ7CKJjfr7+xkYGODrr79mb29PCK4k77hSqWRlZYX19XXC4bBYCpHL\n5cQBGxgYEIrk4+NjHj9+TKlUEpushoaGuH79OmNjY9jtdiFQmZ6eZnJyErvdzuHhoYBhpJWzbrdb\nLMSQBHBLS0u0221x4GUyGQcHB9RqNXEwpIMlwbXRaJR6vS5Wfup0OoLBoFC7Lyws8Ad/8AcCrpUm\namnqlhSq0vPd3NxELpdz5coV8vk8u7u7AoWoVquMjY1x/vx50ahIxV2tVosiJ/2yCskF4fF4yGaz\notO/f/8+a2trQu0ubaozm81C9ZvJZEin02JBivRsNzY2UCgUTE1NAYhd2VIxOX/+PNPT08A3v/ho\nYmKCzc1N1tbWhAhUErUplUru3LmDXC7nRz/6EV1dXSIRpNNpdnZ2eP7557lx4waPHz8Wv8sgEAgw\nOzvLwsICfr9frCCVNheq1WrxC1tcLpegBQYGBpicnGR8fBz+T+beJKbx/E7/f4wNGINtbGNsvGAw\nu82+VbHXQnVq6apeqzsjZdJKNMqMehTlMHMYjXKY22juiTLSjCbK1tOddNJde3XXQi0US7HviwFv\n2MbYxhhsMLYx/0Pn/f5V
nSaXvzpIOYw06gL7+/183svzvB4AXq8XIyMjKCwsRE1NDVpbW9lVAoAF\niFS4zM/PY2pqCn6/HzKZDD09PVAqlZDJZMxc1+l0GBsbg91uR39/PyQSCRwOB4vFaH9/5swZFnzS\nc6DT6WCxWGAwGLC8vIxEIoGCggKsrKxgZ2cHfX19ODk5weTkJNsZaRVC0weymZF+hkKaSOdgMBiw\ntLSERCKBt99+G/n5+dja2oJMJmMM6NzcHAYHB3H69GlelXk8Huzu7qKrqwudnZ0oLCzE4eEhNjY2\nUFNTg4aGBpSWlrL3nIpkQrD29/czlvXu3bsQCARMz8zIyOBLji75VCqFyspKlJeXo7S0FAcHB3C7\n3ZidnYVcLkd9fT2kUilTE91uN4aGhnjlQgQ/EtwSHZLWPTabDX6/H0KhkAmOtKqkCVAkEkFtbS2k\nUikmJyextLQEh8MBvV7PzZBAIMDh4SE0Gg07VPLz83nkTt2/2+2Gw+HAG2+8gfz8fAwODrKwjsKa\nCFAGABsbG/D5fDg8PERFRQWampqQSqWgUChQXV3NE0lqEGgVR1wBCm4hC6RCoYDL5eIwmnPnzvHf\nkE6neWVD9lStVotUKgUAMJlMrMI/OjpijorX68Xg4CDa2tpw4cIFdhSQkp8aGRIMh8NhBINBHB8f\n4+joiMmo+fn5sFqtKCkpYYHgyckJVlZW2Ho7OTmJx48fIxqNoqamBpcuXeLGLxqNQqfT4dSpU1xs\nBYNBzmJRKpXQaDQoKChgxkFjYyPKy8t5Pfjo0SPI5XIK4/rrFd7pdDrs7+9zlTU/P4/p6WlsbW3B\nYDDAbrczi5wU6JRWRIEaNpsNH374ITQaDUpKSrC8vIx/+qd/QlFREVQqFXJzc9keo1KpuEry+/1Y\nWVlByZ9Z7R6PhxPm6MFxOBw8MaitrUVmZibi8TjMZjP6+vq40s/JycHGxgZmZmb44fjTn/6E5eXl\n1zprgUCAcDjMByjBDzo6OtDc3Ixnz55hbGwM09PTqKurw1tvvYXPP/+cR4o7OzvY2tqC3+/nXc7h\n4SHDT2jUSRoF8lJubGxwd01jzKOjIxiNRlRVVeHFixdYXV1lTcLm5iaGh4dZ8U8HO/m3yQK0vb2N\naDSKSCSC//mf/2E1LIVk+Hw+3LhxAwMDAyw2nJub492bVCrlQyc3Nxf5+fkYGhpiGxJBYShsyGw2\nYxRCUdwAACAASURBVGJiAr/5zW+4a7JarWy9SiaTaGpqQlNTE2w2G/v5SVSUSqX4ICgpKcHp06dx\ndHSEhw8fwu/3Q6lUoqmpCXq9Hq2trbh9+zZX6bRrp5e2rKwMy8vLzJYmN4hIJGKNBuke7HY70uk0\nkxDj8TgHEa2srEAsFqO8vJztZ21tbTg6OsLjx495J1lUVIRTp05xIt69e/eYxX/r1i1GEV++fBn1\n9fUs1qM9+fr6Ov7whz/wc0hdI6mWj46OmLq1vr7Ook/aDZJfPDs7m8OQRCIRbt++jcXFRdjtdt5B\n04H1u9/9DqlUCl6vF3q9Ho2NjRgaGuIsAVKkRyIRFBcXo76+nh0tjx49gtFoxKlTp6BQKJBIJLC8\nvAylUon6+nrMzc1henoaDQ0NsNvtAIAnT55gbm6ONQmEWp6bm2NrVSwWw9DQEAYHBxm8IhKJXjsv\nfD4fvv76awSDQT6LQqEQPv30U7S3t3MuADlpCNZ1fHyMmZkZ7OzsoKGhAfX19awGHxsbg9VqRTQa\nxS9+8QsuQnw+H0+z6L1yu928e3/58iWGhoaY1qfVajE7O4uvvvoKdXV1qKiogMFgwODgICYmJjhv\n4OLFi0z2e/z4MXen1NneuXOHJzTEJ3E4HPD5fDg4OMDh4SEyMzNx584dqFQqNDY2Yn9/H36/n8Nj\nCCJFkyAAyM7OxtraGmOVSYBcU1ODoqIiPHjwgItHYvbT2qitrQ2RSASrq6vsVKDcEsq0cLlcePLk\nCU+/yOaWSqUQDAYRj8fh8XhQUVGByspKLCwsMFeEzpwXL15gZWWFJyHxeJyZ9sPDw/B4PLxWjMfj\nqK+vZ5S12+1mXcHy8jLy8vJYL0BaqE8//ZQ1Fjk5OfB4PPj0009xcHDAwLHj42P8/ve/52alvLwc\nKysr+OUvf8n6BtJWNDU1YWNjA7u7u3j+/DnC4TBaWloYsf6X/Hxrl3w6nWafOoksiKVM4h2pVMqK\nRhLnCYVCFnkR9z2RSLAnc3t7m6leNFI1m83cAev1evY69vX1wWQyYWpqiqtTGnmp1Wqu5lwuF8Ri\nMYNpSKB0cnLCkZRk06GAAfoC6WJUq9XcyVIynMvlYsIXYV7J556ZmclxnCqViq1LNOJNp9PMud/c\n3GThiEKh4C6RhIM0alOpVExsogAI2qPT9IB4+GKxGHl5eYhGo5wKB3zTUZLrwGAwsAWSXAnkY6fi\njEIiCFVMo8iTkxOkUimIxWKoVKrX+P7039ze3uaVAX0uWVlZ/DuRpaioqIgvIofDwcUSoXW3trYQ\nDAZ5Z0/JYCQyCgQCyMzMhEAgQFlZGbRaLR8GFKRTUVEBgUDAnQmFoBQXFyM7Oxt1dXVQqVRMn8vN\nzeUQDafTyesElUrF4iFKkyM/t1gshsFg4MuGoCxqtZrDYEhpb7PZEAgEmOCXm5vLBwQBZA4PDyGR\nSLhYI8SyVqtl6xR9TwRJIovj8fExYrEY44Hz8vJgtVr5MycGeGZmJkwmE4splUolTyfIKkaKayKl\nEf+BxGQKhQIFBQWQy+X8fhOJ0Wq1MrEQADQaDZaXl+Hz+djjTZ0jCfiIFBiJRF57lonTThyMnJwc\n7g4pZIjWGyREbG1thcPh4BQ5moAEg0H4fL7XlNwUG1pUVAS9Xs9IWhrZE4yI+Au0x5XJZKz1AMDN\nAE2RzGYzd44UrkVrOHLuUFRvZmYm8vLyUFlZCYlEwtbAUCjEnSF9xrRSIQEfXaxKpZJ1FkqlEtXV\n1ez6ycjIYPsbFUaUICgUCrG3t8cCQ+K4U+QzUUG3trbYWUEiZgBQqVQwGo1s7SUtFaUs0vrzVTKq\nVCplQAwVpGKxmEXIe3t72N7e5smfTCZjlDl95vX19eyiICAQPaMUK02YYypcSVBM90ZraytOTk7g\ndDp5Mp2VlYWsrKzXVqVisZhXVMT4IJKfVCrlyQKdDclkkumdROfLy8t7jTb5f/18qwE1dFC0trby\nmGhubg5TU1NoamricSCFaBB3XigU8o7m5z//OcbGxvCDH/wAJpMJAoEAf/zjHznnubu7GxaLBZ99\n9hlWVlZ4l0v7s7q6Omg0Gg50oXEuEYnC4TCHd2RnZ3MVT+PKc+fOMat9ZWUFx8fH6Orq4vE1oSGb\nm5s5c1osFmN2dhYDAwPsF+7r68OFCxewvb2N9fV13Lx5k3eeGo2G92c09qU99tHREZ49e8YqTOrQ\nk8kkE/3oEqCiKZVKobCwEMXFxejv70dubi5u3brFh09zczPbe2hUV1tbC61Wy5hL8
u9T10685cnJ\nSWRmZqKrqwt5eXlsNyJf/5kzZ9DR0YHl5WXObD88PGQ3AQAcHBzgyZMnPGGgC0Or1eK73/0utra2\nIJVKYTab0dvbC6VSiWg0ipmZGdy5cweTk5M4PDzE3/zN30CpVGJ4eBgjIyMsYiJoD4lZ8vPzUV5e\nzglwIpEIDQ0NePToEX72s5/h8uXLePPNN1nJTOx3tVqNuro6tmNRWh55lxUKBVuigsEgysrKcP78\neSiVSo6zpdWFwWAAAJSVlbHIiTjWpAmhNZPNZsNvfvMbAMCHH37ICmbaxZ6cnODNN9/E+++/D7/f\nj/X1dSwsLEChUHARmE6n2UpG+eA0iaHRPOWlf/3117h06RJaWlpgsVgQCAQQiURw+vRp9Pf3s1Br\nfn6ei5Kenh4sLS1hf3+f9+Dvvfce5ufnce/ePRiNRtTU1DDshvjex8fHeO+99zgUpeTPWfJjY2O8\nM7VYLBweQ+NcYlrcuXOHc8JjsRh0Oh3eeust9u2bzWb2qpMmiEaunZ2dLBJbWVlBMplkoaTNZuNC\njDC4ANDZ2Ym6ujpWnFNAVmZmJt577z3k5OSwe4TywIk3QRYpQv7a7XY0NzdzPHZ7ezuuX7/OY2Wv\n14uWlha8/fbbXGiHQiHU19ejs7MTh4eHCAQCbC09f/48IpEIXr58iU8++QTHx8c8Qi8uLkZFRQWv\nb6jJODg4QEdHB06fPg2xWMw0QHoPyfK6vr7O6nhqegAw3yA3NxelpaVob29HaWkpZyt89dVXLLpO\npVJoaGhAdXU1hEIhmpqaoNPpWJgWiUTQ19eH1tZWzgsoLS2FxWKByWTC7OwsDAYD3n77bYyOjmJt\nbQ3t7e28NqVsEtJ8VFZWorW1FXK5HH/84x/hcDg4oTM7OxuXL19mvgplABwdHSEjIwNGo5FXGy9e\nvEBxcTF+8pOf4JNPPsHc3Bx6enpwdHTE5z+5Wui8fPjwIRYXF5mtYbVacefOHSwsLPB3/eMf/xiz\ns7OIRCJob29HIBDA8PAwJicnsbW1hWvXruH4+Bjz8/OQSCR/MQznW7vkDQYD27WePXuG4eFh3l1X\nV1czu3dychLRaJStUdnZ2RzVurOzwwKcRCLBSnvizq+trUGj0aChoYHBEG63my/Sx48fc7447VYJ\nlUtdNI3FJRIJrFYrc4aTySRDdIiIRgrNRCKBaDQKv9/PWMmqqirs7e3h/v37yMjIYCpcdnY2EokE\ng0RI7JFOpxEMBvnfe9XPTfCa1dVVTExM8Fia4gwDgQBEIhHTA3U6Ha8cDg8PcXJywspwg8GA5uZm\nFBYWIhqN4u7duygvL2dFPAlZPB4PNjc3UVxczN5s6h6dTifz2U+dOoWcnBwWw9HLTC8pqYmJyU+U\nunQ6jQsXLkAkEsFmsyE3N5crbBrJyeVyqFQqXoNQUJFYLMbc3BwzFwoLC1FQUICZmRmk02lWxcfj\nce6QDw8POQnNYDBgd3cXX3zxBfPQPR4PotEo+vv7EYlE8Ic//IG7RYohJieEXq9HT08P4vE4/82k\nAh8aGmIaISmZFxcX2eJGh6hcLodarWbgi0KhwPj4OBwOB65du8aBQCRuJDEjZREIhULWKFA4C+0v\nSUxGHREp1onRQNnZGo0G7e3t2N/fx9bWFmZnZ7G1tcWQFFKZFxYW4tSpUxgfH4fL5YJEIoFWq0Vv\nby+LRm/cuMGccGIOUEgK8I0+IhKJIJVKwW63Q6/Xs4WQnhehUMgKY5VKxdGlNNWqqamBSqVinkEw\nGER1dTULmagjo+fg8PAQ7733HqxWK3MMnE4nhy7Nzc3xeoqimZeWlpj/8OzZMzidTkgkEuTn56Ov\nrw+BQAC3b9/G9vY2i9GoS93b20MymeRMglgsxgWWx+MBAAb5kEYkkUhw4iIJwhYXF5FIJFBaWoqd\nnR1MT09DoVCwvsXv9zN1jljuy8vLmJqaQkdHB3JycngSIhAIsLGxAbfbzfRKgUCAsbExbGxsIJlM\nQqFQ8Dg9HA5jdnYWSqUSer2es+43NjZY9U8Jddvb27zX1uv1/N2Si4j20tSBUia8QCDAvXv3oFAo\nuEMnu9nOzg4mJyexsbHBtFJaEW1tbWFzcxO7u7ts7xUIBFhZWcGXX36J5eVlZGdns52WVsBU2KpU\nKnR1dUEmk/Fk7OjoCKdOncLw8DC2t7dRWlrKuHRK76RAnN3dXW4SKVwmlUpxqFRtbS0KCgr4PJRK\npTwFyM3NRVlZGQoKCniFQbwVu93OWQSUaup2uxGNRjkFcX9/Hy6X6y+6a7+1S558g8fHxxgZGYHb\n7eYL2WKxQCQSwe1248aNG7ynevPNN1FVVcV7IJ/PB6vVCoVCAY/HA6fTCYfDAYlEwuNfStki/zbZ\nIAKBAJ49ewaHw8EvF9HqJBIJjEYjJ3bRaJFEYpmZmQgEAtjc3MTz588hl8thNBq5O6Ds5lgshv7+\nfhbAEX4xGo3yf5tEbg6Hg4l9lJueSCQQDAaZZvVqFCb5JsfHx1nAEgwGEQqF4HK5YDQaIRQKeW9Y\nVlbGkb6vag7eeOMNNDU1oaWlBV988QVbmQirSQEmVPWS8pUOf7KWkZCurq4OOTk58Hq9LBSkhL/M\nzEzMzMxgbGwMer2ek/Jop3f69Gnk5ORgc3OTu2sqsihnnDymZDHx+/04Pj7G4OAgvF4vBAIBC1U+\n//xzbG1tQa1W4+TkBHK5HMFgkJGfOTk5PPoLBoOMqyUudU1NDT766CPcvHkTT5484ZfZ6/VCq9Uy\n5KS2tpYvDiJ+qVQqThckUaBYLMbx8THb3Oj32t/fZ/EnPaPhcJj1Eu3t7RAIBGyf83q9vBqw2+18\n0ba3tzOwZHR0FKFQCL29vaioqIBOp2PFNoX1EBObBF4k8qGMBI/HA7fbzRASCuBobW3FuXPnsLGx\ngRcvXiA3Nxe9vb3o7OzkLnlgYABbW1sQCoWYnp7G5uYmW02VSiWPpHd2dpCdnY2CggJ0dXWxw4HG\npTs7OwD+n4bH4/FwJrfZbEZGRganNMbjcfT29rKQLBwOw+Fw8KXmdrtZmJWVlYVQKIS1tTWYTCYc\nHBzg6dOnrHchUEsikcDVq1dRVVXFvPB0Oo3vfOc76Orqwp/+9Cc8f/4cGxsbUCgUMBqNAMB+amLf\nr6+vczFHDhqymXo8HhaBjo+Pc8FCkcpff/01aw08Hg9WV1dRXFyM4uJimEwmntQQmIYK2rGxMX7H\nCLsbj8fhcDiws7ODdDrNUyJaTSUSCSwtLbHuxe/3Y2BgAD09PTCZTJiZmeFmihw6QqEQyWSSG5Gs\nrCwOGQsEAqx/aG5uBvDNtIrcANXV1UgkEhgbG2PBGf23yaVANkC/34+TkxPW61C++8jICAOiAoEA\nNjY2cPfu3ddQ5IlEApubm3C73Rwr29zcjLNnz8LpdLKYUqlUoq6ujv9Git0mYZ3BYEB1dTVycnKw\nsLDA4TnPnz9nYBGBiORyOWOSCczkdrux
tbXFoJ/i4mLEYjFO/ZuammI4Und3Ny5evAiNRgO3241b\nt25x9C9pAf6Sn/8zcP7/r5+f/vSnJySeWltb4/0ihWOcnJwgEAhgdHSUU+pIbUy53lQRERowMzMT\nRUVFvAZIJpOckUzj1YyMDKytrWFychKbm5tQKBT47ne/i2g0isnJSbx8+RJ7e3toaGjg3Q1lFZNt\nZm9vDxaLBXq9HmKxmIsO6gLpcqaHkkZkFRUV0Ov1bK+gNCTyHdNueWpqCtPT0+jv70dVVdVrCVeZ\nmZm8l3n58iUGBwehVquZ6U+c/e9973swGAzsb6YKnbrXV0E+ZEMh3C+lNlFOdDKZRH19Pf+91CnR\nbon2sbTHpSKGoBc3btzA9vY2TCYTC/befPNNmM1mJJNJzM7Osj1MJpNx2I1EImF85tjYGKcPDg8P\n85SBbGEKhYLjJEk9/Nvf/hYOhwNSqRSXLl1CbW0tbt26BbfbjczMTHzwwQfo6enBzMwMM+8Jj9rS\n0sIXwt27dzE9PY3GxkZG9dL+mjqO/Px85OTkICMjgzvBcDiMhw8fwuFw4NKlSzyKJ5smfSe0087O\nzuaIylu3bkEmk8FkMqGvr49hRL/+9a+xsrKCjz76CLFYDA8fPsTh4SFkMhnOnDnDVlMS/xHlbmdn\nB9euXWMRFRU7o6OjWFhYQDAYZOqW3W5HPB5HRUUFrw3IZpWTk8OCqKamJmi1WgQCAVZkUyjSq8/I\n4OAgnE4nFAoFrFYrmpub+T1KpVLw+/3wer3o6elhzn8gEIDf70dhYSHns9PqJxQKwel0YnJykldm\nQ0NDWF1d5SI2IyMDwWAQUqkU3d3dWF1dxfPnz1FZWYn29nZcuXKFaXHDw8M8Ijabzairq+NwoUAg\nwJbb/v5+SKVSjI6OorW1FWfPnsWNGzcwPj6OQCDAGOJYLIaqqir88Ic/hFwuRzweRzweZ1Y87eFp\nikXxuhRpS+uGrq4udHd3Y3JyEjs7O5BKpWxvJVtbTk4OcxJolG6z2VikvLi4yGsCGj23tLTwms9i\nsfAzEQqFGEKmVCpx5coVKBQKLlAODg5YZFlZWYm5uTksLS3xTjkcDuPUqVMwm834/PPPEQqF2Cqc\nk5ODlpYWqFQqxONxRiNbrVbk5uZyLC9RMrOzs1FfXw+LxYKCggLMzc0xQXJkZIRXEiTK1Ol0bDGj\nPHufz8eTO7pUaYowPj6OrKwsdiSQtoUU+QsLC3C73XwXxeNxDmlSq9Vwu914/vw5zpw5g+rqavh8\nPp4i0yTme9/7Hkc3O51OjiWmqcjAwAA8Hg86OzsRCoUwODiI8vJyKBQKzmHRaDRwuVw4PDyE2Wxm\nYTSt7373u9/9n3f4t9bJ7+7uQqlUstmfLhZKQCJUK1kXcnNzOfygtLSUhVyUkETxg7RjI+oaWT8A\n8A6bBCRZWVmseqf/P4pNtNls7LEnQhSRvUQiEe9ziSIXiURQU1MDjUYDr9fLakrqaJPJJCNHiQhF\n0Z400iYQjNPpZMxvU1MTBAIBVldXsbCwwGQvSpai1DdaaSQSCSQSCWxvb3N8ZSgUgt/vZ6sQ+Z3J\nK0vZ7YQRJaFeNBplZSp1quFwmHfKxFJvbm5mxjbtHwmQQeIWmm5kZWVBo9GwYnZ/f5/tLEtLS5DL\n5Th9+jQAsBqcugT6rKjLezWmmMInXuXPU6FH0byvRkoSn7+4uBjT09N8YO7t7SGRSAAA4vE4nE4n\niydJvCUQCBCNRnnkTaNZ+vfpc3I6nRyXTDZMAl0Q1ler1TKWNxaLIRgMYn19HdPT0zh79iw0Gg02\nNjZ4rUTaC3Im+P1+FosuLy/z/314eIhEIoHc3Fx2QVAOg8fjYQrk/v4+MjMzORhoZWWFSXI0OSLy\nWTQahVwux/7+PmZnZ2G1WtmzTsJTQho3NjbyGJwEdiRgValUTKkj2qPf78fh4SE/MyTIou+dqIkU\nk0sjS6/Xy2ubk5MTvgjIz05BTzRCJgU36QEODw/hdrvh9/shl8s5ddHtdjMZkIiAdD6IxWKO7t3b\n22OBIE1FyO0yNzfH3vREIsE7+KKiIkZ401qCxGomk4lXkBKJBB6PB0KhkFd8JpOJ2esOhwPJZBJG\no5EV5TTRi8VifBYSPY8w16dPn2a3EDUoYrEYWq0Wer2eL2+DwYCSkhKeqFCoD3Wo9PfSc1FfX8/C\nVArcoQaC1p/RaJQhWQT4AcAgGpvNxkULxTvv7+/z5E4gEDC7Q6VSMWyH3C8AuIAi2BpNGlUqFa9L\naIpCzzo1FF6vl4tFAKwLKi0t5b8pJycHfr8fU1NTrHF6lYfyatFD7g8idObl5QH4Jo9jbW0Nbrcb\ner2ezxCaPDidTm4qNzY2IBKJ0NTUxEmt5HL5S36+tUuePlCHw/HaxWC1WtHZ2QmbzcZjTeAb5SUd\ndrTfdTgc+Lu/+zs0NTVBpVJhamoKN2/e5O5KqVTCZDKhsbERU1NTuHPnDg4ODliFT2Knzz77jC8F\nCm+YmJhgEM7q6iqi0SgMBgN3NMRSnpmZYd/n2bNnodVqOT6RAguIvkdRkmShI+jExMQEZmdnUV5e\njubmZvh8Pq52aT8/ODiI//7v/0YqlUJZWRnef/99GI1G1NfX4xe/+AWGhoZ4dCYSifDb3/6WYw2p\neJJKpUilUpibm2OVL0Vdkhef6HoAOFREIpHAZrMxSY8OvHA4DIvFAp1Oh+HhYTx9+hQWi4VFKhTb\nS2Cf7e1t5og7nU4ea1HhRRYzCpMgEiFZ2ejh1mq1rDlYXFzE1tYWK9QFAgGampr4UiAvOhHLTk5O\nOFubVMeJRAJ2ux1TU1M8KVlZWeE0NofDgb29PWZTz87OcmIUXVq0s6Y9M332JpMJUqmU96Qejwd2\nux2JRAJnz57lQofY85OTk7wOkEql2Nvb42S/vb091NbWoqKiAtPT01hbW8PCwgL6+vqQlZWFZ8+e\nMenO4XAgLy8PH3/8McxmMzQaDex2Ox48eIC5uTlIpVLU1tZCoVCgqqqKV1B2ux3V1dUwGAysSRGL\nxfB4PPD5fJiZmeHLyO12Iy8vDz09PWhtbWUr7IMHD7C6uso+YnI3xGIxTuUim5bBYGABGcXPrq6u\nsqeehJ+9vb1YXl7GgwcPcPnyZQajrK2t4eHDh+wbJiEfHZRutxuffPIJW9Uo3OnWrVtcNFDHRtqU\nUCjE05bi4mKcPn0azc3N+Oyzz7C+vs7ZBx6PB3NzcyyIpR0rTQV/9rOfoaioCFarFdPT05BKpbh+\n/TrT3KanpzE9PY2FhQWkUikolUq0traisrISoVAIT548we3bt3nvTeszgUCA4eFhDmtqa2uDWq1m\nPcHa2homJiawvb2Njz/+GGKxGJ999hmDhdbW1vjdmpycxMLCAmQyGRoaGnDt2jXMzMxgaGgIc3Nz\nzGygkK66ujokEgncv3+f98rz8/NQKpXo7+/nd/Lo6AjRaBQTExOcDErRsGtra2hra2Ox4NraGn75\
ny19ibW0Nx8fHnJHw5MkTPH/+HAcHBxzXmk6nOb9hdHSUrcrky+/v7yf/OHfWwDeXKjVXlA5KoKBE\nIsHJkVtbW5ienuaGiTDV+fn5CAQCWFtbw/j4OK+LaX22tLTE71RJSQlkMhkmJye5WPB4PGwrPDk5\nQVZWFuuNfv7zn0OtVqOxsZGtmy9fvmQ31e7uLiQSCba3t5mRce/ePY4N/r9+vrVLPi8vj3cRJpMJ\ncrmcq1KFQgGtVguTycQdSU5ODmpra1FUVASj0cg74qmpKezs7ECn0/GovKmpibGSOTk5yMnJgdls\nRkFBAcMs1tfXYbFYeES+traG+fl5zn2+cuUK72LoIqBOpKSkBPF4HGKxmC0fTqeTgyI2NjaYtEb7\nbepK/X4/VldX4XK5kEwmIZfL2U5DwsO8vDxIJBIMDAwgEAjgwoULKCkp4ZeCJgI0wr1y5QpkMhkG\nBgZQW1vLIjCyHbpcLratAGASIOE16YKkC5AeLAqOUSgUqK+vh0qlQiKRwNbWFqe0FRYWsuOhoqIC\nDQ0NMJlMkEgk2N/fx+HhIU879vf3eeReWFjIFkMS4XV3dyMrK+s1KyR1r7Q2IMpdIpHA06dPmTJn\nMBggFot5n+1yudDR0cFdECmo6bPNyclBRUUFq7/r6uqgVqvZhUCeYiJekV0yGo2yvqCgoACtra0o\nKCjA8fExNjY2mLAFgLt7mq5IpVJ0dXXxc0DfIVmdotEoGhsb0dHRwcUdsREIFtTY2MgX4fHxMVsC\nKbqYaGLki6YJ2OjoKIBviuVLly5x3rpWq+Us7HA4jHA4zIULoZUNBgOHOcViMUgkEigUCoRCISST\nSXaV0OFGqwC5XI6qqioWShLamIBNtJ+n6QullBFvgKJqS0tLGYFKo3ifz8di2IqKCszPzyMSiUCt\nVrNuo6ioCGq1Gtvb2zy1AcB8CXLpkF6HxGYCgQAXL15EMplEXl4eExPPnTuHoqIi/ndI60B6D2Ij\nkDD26tWrPFE8c+YMpFIp8vLyWCgXiUSQk5PDHHuRSISFhQX+LLq7u9kRkZGRwfvYdDqN9vZ2Tokk\n8d3a2hqkUik6Ojqg0WgQDAZhtVr5IltfX2deBpHpIpEIAoEA1tfXucgsKirCRx99xDv24uJibG9v\n8wVNe2ayH5IIVCgU4ujoCPF4HNXV1Uwa1Ov1KCgowN7eHhciNNETi8WcYElFulAohF6vx7vvvsva\nhsnJSRwdHbE9jciQhYWFLJymVQSle1LQzauhMNS9u1wubqReTTw0GAyor69/rQAgmx6he+12OzQa\nDbu/SJNA7znZHkkQTiu8cDiMjo4OXjGTvqSvrw9CoZAdCjTtoSkYuQ7KysoYD7y6uspapv/r51u7\n5AkKcHJyAovFgqqqKt4PJ5NJFrplZGRwglNnZyfq6+t5J+3z+TA9PY3l5WV8//vfR35+Po+Nmpub\neb8SDofZf0zih6KiIlRVVUGv17M9xGazsSf86tWrmJ6exmeffcYBBR6Ph+l76XQaarUap0+fht1u\nx+DgIObn5+FyuRCLxRilS18WKW9JFe52uzmHnpTDVBBQ1OTw8DA2NzcZFHLlyhWEQiHezZGS98KF\nC8jPz8f8/Dw6Ozvx4x//mDtDh8PB4yyaVhBgpaysjAE5NKkgm4jP52PBI3H7TSYTkskk77rIYuLz\n+ZCbm4u2tjZUVlayqNJisSA7OxtarZZ3hTQKLygo4NUBxc1euHABJycnuHXrFjweDydRUXFC0c1S\n2wAAIABJREFUL6PRaGSvfGtrKy5fvsxsfpfLha+//hoejwfXrl3jURiNCck5UVJSwhcJMQwEAgGz\nBgh6Mjs7y2wAg8HAIzUqHC9fvoz8/HxWJgsEAva7ut1u7Ozs8GWv0Wh4H52VlYWSkhKoVCrs7e2x\nLa22thZVVVUQi8UYGRmBx+OBQCCAwWDA+fPnYTabcXJywrvdkpISFla1tbWx/YoY5EKhEHa7HU+e\nPEFjYyPa29tx9uxZSCQShEIhZhZQcBI5U0iBL5PJUPLn3ALiOygUCuh0OgwODrKQknIGqqqqEI/H\n8dVXX8FoNKK7u5uLA7FYzHoAwuy+qsMhmAlhQoPBIE6dOoX6+nrGLBcXF+PRo0dwOBwcwUycCep4\nyK5GcKPKyko+M8huOTY2xs8CRSa/CpM6d+4cO06oWaDPf3BwEBUVFVxU0cHv+DNyurS0FGVlZejo\n6GCdS3t7O7KzszkW2uv1cnwr2egIh7q/v4/m5mae2lAE79HREVZXV+F0OvkyI+sdRdWWlJSgo6MD\nVqsV8Xic12JFRUVwuVxwuVyMVu3o6IDf78fCwgIL8p4/f46LFy/i6tWrDJnS6/VYWVlhaqBUKn3t\nbyexHRXER0dH/J2ZzWakUileRYVCIUgkEj7j6FInlCz58CUSCXp7e1lkRtjm+vp6nuTqdDrodDrU\n1dXxOL+goABZWVlIpVKQyWQIhUJobW1lpgcFBk1NTbGqndaUhIwWiUQoKCjglQmtgWk0/+LFC14h\nUXbBwcEBN1JUJCsUCohEImRlZbE1jmx6Ozs7+Pzzz+F2u9Hf34+dnR3cuXOHw8YMBgMLC9vb29mF\n8OzZM9y8eZOD1v6Sn29NePfVV1+dvHz5Erdu3cLHH3+Mvr4+7O7uYmJiAgMDA7zbJN643W7n2ES5\nXA6fz4elpSX4fD7o9Xr88z//Mw4ODvDixQse65J6tKSkhClKdDiTpY0ykYnkFI/HYTAY8O677yIQ\nCGBsbIyxpjTSEQqFCIVCEIlEqKysRCAQwOrqKuLxOCcfUSocFSqOPyMmSYlJaF6ZTMZEOwCcTU9d\nOh1YROebmZlBbm4u3n33XdhsNg5bIfHh1atX8c477+DGjRsMMTEajSgsLGSICO0WqZuLxWIc4VhZ\nWQmj0Qi/34/bt2+zyhYAkskkDg8PuSgjPrrD4UBpaSnTt46OjmC1Wjks58033+RoRzrkjEYj+2sp\nQKKxsRHJZBITExM8dm1sbOTIVMpPp50boYRJPet0OjExMcGsZxqf7+/v8+9HVsfR0VG0tLSgtrYW\n0WgUa2trGB0dRWVlJUpKSjiNanx8nG03FouFA4qIdpiVlYWMjAy2OhJAhMbHRI17NXWQQBrvv/8+\ntra28Mknn7Bnvby8HJWVlaisrMTS0hJT13Z2drC4uMjiveHhYf63gP+HtSUwE4lQz507B5fLhS++\n+AI1NTWoq6tDS0sLuwleTV10Op2Ynp7G+fPnYbVa2TcdjUbZepdKpfhyevz4MRYWFpBMJqFWq2E2\nm9HT0wOVSoWRkRFsb2/z3l2n06GxsRFLS0u4e/cuZDIZF8LxeJx3lgQ0Idsp0cZoLE6iJ0rLi0aj\nCAaDnK1AnxUJ/Sh9kX6om5uenkZ2djYKCwuxs7MDuVyOnp4e/gza29v5uS8sLIRGo+Ei1Ww2Q6VS\nsXWTBH4bGxuw2+04e/YsampqIJPJ8OjRIwwMDODatWvI
zMzEl19+CZVKxW4Xl8sFp9OJsrIyVFRU\n8CVPHSp508nGd+/ePcZ+Z2VlYX19HW1tbairq8Onn34Kp9MJvV6P5uZmJqPF43HMzs7yZC8QCHCR\nOT8/j9nZWQBANBrF1tYWqqurUfLn6F6lUolLly6xEHp8fJyTLWn0bLFYYLVaUVdXh2fPnmFxcRHX\nr19HXl4eZmZmUFRUBKVSidXVVXZUSCQSqFQqmM1meDwe3LlzB42NjfwZRKNRpsHJZDI8e/aMbaeU\nykdnAb3rZPXc2dlhMSPtwAsLC1FRUcHP1MDAAH/O+/v7PF1SKpWs8N/a2mKnA038pFIpAPBK2PHn\nQBkKUyJITVZWFpxOJzPph4eHsbW1xY2Q0WjEysoKXC4Xryvpu6HCKplMcq4EaWbIxksT0/fff/+v\nV3hHIx+FQoFwOIzV1VWmDU1MTHBCHEVd+nw+JJNJTn6SSqU8Fs7NzcX6+jqkUikqKysxMTEBj8cD\nl8uFqqoqHqceHBxgY2MDAoEAJSUlbAEhlXR2djaPv4jGJ5VKoVQqeeRO0YsSiQSZmZmIRCK8ZyWw\nCSFvBQIB7+sBMN2MVMc0jlGr1VAqlfwFEsDGarXi6OgIjx49YnEdUaIICzo3NweFQsFCrLW1NTx5\n8gR+v59HZxSVSX8rdZrb29tsWZqYmEBHR8drlCwaXdJonsRrRJeiz4hGbVlZWbDZbPxdEQeAwCY0\nUqNADRJZEhiEQiBoHE6XqEQigVKpZOb/3Nwcd5kZGRkAwJ8PZRqYTCa2ClEACdG9PB4PIpEI+8Rp\nekITDYqhFAgEkMlkUKlUvLOWy+VsEyTxDGEtSZREamKj0chRuoRiXllZYT88iTJJzCiXyxEKhTh9\nDADbSufm5hjIQuNHIs55vV5Eo1EoFAp+ZsmuRmLB/Px83lWT7YwuuoKCAlgsFj6gotEonE4ni6qC\nwSDrUVKpFLsy9vf32ddOCmVaV9A0gjy/6XQacrmc427JT+1wODibnUabr5LppFIpK8NpL0tQJ9KG\nvCp4PD4+5pXF5uYmCyxpjUMRt2QlpHhh8uS/GsNMhz91qNQkEBeDvMqkIqduldwmtCuWyWTY29tj\nARxFO+/t7bGojfIW6O9aWFiARqNBcXExvF4vj4PD4TCkUik8Hg9DcuhsoajThYUFTu48PDxEJBJh\nPRCtWciOSKJDjUYDlUrFEdGE9N7f38fc3BysVissFgsXbvS+HR0d8Xt8eHjI+hISVFP8N01oqPAi\ntj9Fru7u7iIvL4+1HpS+SMAcjUaDZDKJra0tnrbQme5yudh9RAFmU1NTnEVP4UqRSARyuRxlZWUs\nSKUJGk0LMzMz2SGxu7uLtbU1boyoIaPnk0TBFFtN6z273f7a1JS0HxQdTQTNiooKyOVyzM/PIxAI\ncIASkT1JTOhyuVhQSpPWgoIC6PX6v+iu/dYCat58881/k0gkqKmpgcvlwuDgIKanp3mMdf36dbzx\nxhv8x1K+MVljGhoa8Pd///eMG/3yyy+RSqXYdqbT6eByufhyOHPmDBobG9lXTQK2zs5OXLx4EWVl\nZQzj0Ol0nFdNnONXFes0pu3v74dOp2PF7tHREcd7kjVkaWkJwWAQ9fX1nIi1sLCASCTCBKazZ8/i\n4sWLOHv2LKtT0+k02tra0NLSgtbWVkbb0uE0NDTEYQcCgQChUAjLy8scRfnOO++gu7sbbreb+f2E\n7W1tbWUuAOFS4/E4zpw5gzNnzuDu3btwu904d+4cV8UmkwmnTp3C1atXWQmvVCqhVqs5bpY6bkok\na2pqwsWLF1FZWYlYLIbR0VFYrVa8//776OjoQHt7O3vADw4OOLubSFG7u7tcQHm9Xh7Nk2NgamoK\nOp2OR4MWiwVnzpyBQqHA5uYmlpaWoFAocPXqVb7g6YCtrq7mXT4hhNva2ph41t3djd7eXvT19aGl\npQVGoxHhcBh6vR5vvfUWmpubYbVaYTQaodVqIZfLUV1djfLyck4TI8tmfn4+Tp06BZPJxLvoZDIJ\nm82G4+Nj9PT0oK+vD21tbcxHWFhYQEtLC7q7uyESieDxeLCxsYHGxkb09PTg6tWrTNujS/ODDz7A\nu+++i3fffRdnz55lMdjBwQEcDgcuXLiArq4uiMViuN1uTE9Pw2AwoLa2Fn19fVCr1UgkEtjd3eVV\nA9n6KDCI9BpTU1Po7+/H1atXUVZWhsbGRrS2tiIzMxNCoZAnBh0dHdjd3WV7VDAYREFBAc6ePYuG\nhgZ2yLzxxhu4ePEiEyTdbje+/PJLVFZWoq2tDadOnYJSqcTu7i7a29thtVqRTqfR0NCAt99+G2tr\na5z6R6hoYjQYDAbE43Fsbm4iGAxCrVbj448/hk6nQyAQQGFhIYtpJRIJysvL0d3dDbPZjHQ6zQAY\ns9mM/Px8zMzM8FSPLmCZTMZaHyrynU4niouLceHCBda3tLS0wGAwICMjg4syYk5Eo1GOVhYKhTh3\n7hzeeecdpNNp2O123L9/H1VVVfj+97/PVrfe3l6IxWL4fD6UlZWhrKyMnSDxeByjo6OYm5tDLBaD\nUqlEfn4+FhcXYbPZYLfbmTcgFAo5MY0U701NTcjMzMTt27c5UW57extZWVmorq5mtxKdyxkZGbz6\nIKytxWLB2NgYHj58iI6ODpSXlzOYLBgM8jqVKJKUUyEUCjE7O4v29nacP3+eiZv0OQJg0W8qleJz\niIR4pKUoLy9HTU0Ni2lLS0tRXl7OhdLvf/972Gw2HBwcoKamBkKhEHNzcyycTSQSyM/Ph9lshsFg\nYB0Rrfref/99vPfeeywsJaeVRCLBhx9+iMbGRgDg+Gqj0YjW1lZ85zvfQXl5OaxWK9rb25FOp/H4\n8WP87d/+LT766COeGshkMnYnqNVqbrIozvbhw4d/vSl0/f39/0bqZFJuE/0rEomgrKyMVcnUOcfj\ncezu7rKK/ODggG0IJBIBvrE9ENyBLuxoNAq73Y7l5WUWbfj9frjdbhZfkC2MPN40zidLB42XKioq\n2JJCXy4J/CgLHADDfijcgwQfJFgrKSlBMpmEy+WCTCYDADidTq4uybJBtKeVlRVOVaK/q7KyEuvr\n63C73Tw9IA/qwcEBe+zJGkafLyXchcNh5rfTYUdTlXg8zr5xmUzGytyNjQ32RhNTmQJ0aPTk8/nY\ngkeUQZFIxCI/skAR4YnG1aS0zsrKglQqfY0fXV1djYKCAkZYUiIaALaxLCwswG63w263c5hERkYG\nZmZmMDIygqWlJSwuLrKCWyAQ4Pnz59jc3GTCFgFt1tfXMTY2hpWVFQQCAd4Bk61tYmICMzMzSKVS\nsFgsvA/2+/3sdSbB5erqKtbX17G3t8frIgBs8SH9xsjICBMEKY6VPOO0H6YJjc/n49xvqvQPDg4Y\n8bq0tISpqSlMTk5ifX2dU9eIbkaWJjqY19fXsbKyArlczroKAuNQYpxSqWStAhHXpFIpdzA0caDC\nd3x8nG1P1Ml
SYl06ncbGxgaLumjaQkSy2dlZtqGurq4yMS4UCvHfFo/HkZWVhbGxMbabHRwc8PtD\nnz9hVUnIS/kC5eXlvM+m7pgugFf1AYRuJoodde0U20tnFQWbhEIhHB4eIhaLIRwO86Rwd3eXP2fS\n68zPz7+W40CCS0pTW19fh9fr5ef91dWQz+fjC5sKFWLEE3BGqVRymho9lzs7O4jFYrBYLGhoaGCx\ncEVFBbxeL1ZWVrC3t8dnQ05ODtsGac1IIrp0Os0BT6SdODg4gM/nw/r6OmZmZrC5uQkAvHajlRJ1\nrQB47067cvq9XS4Xa1wWFxcRCoVYs0FTXuJlkIYmkUgwFTCRSODw8BDpdJpjtiORCLxeLzY3N/ni\nFwqFWF9fx+joKEN8yH4pl8vZfSQWi/m8IRv47OwsC7QTiQSP6WnSrNVquUihFej09DRsNhsz82l6\nQ/wROv8pD4BcRiRu/TPG+q83hS4QCLwGuBAIBBzqQQcEAB6xnDt3Drdu3eLDeHV1FWNjY8jOzoZS\nqeSL98GDB3A6nSzCIPTi0NAQ5ufnObpVrVbj3r17cLlcqKio4PxustjR5UZe2qysLCbHUaISdQ5y\nuRwdHR3weDzY2trC1NQUiouLUVNTg+rqau5IqaunvGqpVIrx8XEMDw/z6JiIWjT6Ju8njeoLCgpQ\nW1uL7u5uJkfduXMHOzs7qK6u5kvj7t27MBgMuHr1KlQqFYsQCXlKqnNifItEIlbnUpzryMgIJBIJ\nCgoK+EGenZ19jbQlEAhQUVHBav6qqiqoVCrex4+OjiI3NxdVVVU4d+4cE7XS6TS8Xi8+//xzZGZm\nQqFQcJ7B7u4uq2EpfpNEWsSBj8fjDA+ZmppCWVkZIpEIhoeHkZ+fz4EWNMp2Op3wer3MNqCAnL29\nPdy4cQNCoRBnz55lZvXo6CgWFxcxNTUFmUyGmpoa/OM//iPKy8uxv7+PFy9e4OnTpwgGg+js7MSl\nS5fg9XoZQWy327G5uYmuri4AwP3795FIJFBeXo4zZ86gvb0dGo0GIyMjePbsGbq6uhheQjzyWCzG\nlxsBPba3tznjgVTpBFt58eIFHxSRSIQvOrISPnv2jMfMJFianp7G6uoqPB4PEokEAoEA3n77bdTX\n1/MUZ2hoiC8amUzG9kvCd/b29sLn8+Hp06fc2VLk7MbGBn70ox+hra0NSqUSNpsNq6ur3MVtbm5y\nx00aHBI25uXlYXx8HAMDA0ilUujt7cWVK1fw4MEDTExMIBAIwGQy8UFPz04oFOIdK4UdmUwmtLW1\nIZlMYnFxEV988QWuX7+O733vexgaGmIGBYW2TE1NISMjAxcuXGBC2uzsLONn6V1qaGhgXQQVr7Oz\ns7Db7Whvb4ff78f+/j7efvttSKVSvHz5ErOzswgGg6itreXVFZE0KSCHiquMjAy28JaUlCAYDOLe\nvXsoLS3FyckJnj9/zo4H4rVHIhF+v1taWiAQCDA0NMSj+XA4zONllUqFkpISPHz4EOFwmLPrNzc3\nsb6+zjZHslAS8vZVuy5d9GazmT/vmzdvYmZmBnNzc4ypffbsGT+rVqsVtbW1HD5EomSfz4ft7W2e\n7jx69Ahff/01ampq+LkiLgo5ESg8iZoCWm3QiN7tdkMikbCFcmpqildzxcXFXJzQu7C8vAyHw8Gg\nMp1Oxx303t4eW69J9U+6DPo76XeglRfpSw4PD3H37l1eT9+7dw+hUIhF4TU1NbweaG5uxt7eHjY3\nN5n4SUJbk8kEp9MJp9P5F92135rw7kc/+tFJdXU1Ojo6sLCwgLm5OWZMK5VKFBcXc1VLSvD79+9j\nZWUFKpUKXq8XCwsLvN+4du0atre3MTg4yGIfsteUlpZyyhJ1aXQpUedYXFyMuro6hMNhAGCSF0Wu\n0gjMYDBAp9Ph4cOH2NnZQVtbG1ddOp0Ox8fHWF5eZl/uyMgIDg8P0draivz8fJycnGB6eprz1unQ\nJRDL/Pw8NBoNysrK4PP5kJGRAbPZjNnZWYyNjaG3t5cnAKWlpVCr1fj3f/93TExMsIKUOmIqHBQK\nBTsPRCIRCwRJEEZhLyS8oamK2WxmIWF5eflrB0hOTs5r6XB3797F8+fPceXKFVRWVkIoFMLn88Fu\nt2NhYQEikQj19fWM8aUL3Gaz8V7VZDIxNIV+CCBCI2+r1Qqfz8eCP8Luki6B9t8U5BCNRpGXl4e5\nuTn4fD50dnaitLSUxY5U0JFVrbOzE7W1tUwnGx4eRnNzM0cCS6VSRKNRTE9Pc+IU7dOXlpZgs9l4\nekC0tdLSUszOzjLgiL43mkzR/j0Wi2FxcZEdDaFQCHt7e8jKymKhqU6n4yQ7GrUTZ0IsFsPr9WJu\nbo4FRGSRI/gTAIb7ECyKQpQcDgdmZmbQ1NQEs9nMoleiIFIIDu017XY7RCIR+vv7sb29jRcvXgD4\nxjlTX1/PkZ2UO9DY2MgTMHI93LlzBwKBAEajEVlZWXywjo+P4+bNm8jJyWH8KcV9kgZELpcjEonA\n7/ejqKiIV1mk5yD3Cb33dKiTYM9sNsNisWBnZwcejwcrKyuorq7G6dOnEYlEuKunydzo6CiCwSCT\n5XZ3d9kmSvZfo9HIaOyenh4WK0qlUgQCAbx48YLf90uXLrGne2JiAjabDe3t7SgpKYFEImFs8/Dw\nMHJycvDOO+8wLvbw8JD/R3taAoPt7e0xRjUWi2FtbQ2Dg4PQ6/UoLCzE7du3YbfbIRaLeTdMti6X\ny4Xy8nJUVVXx9CE7OxsTExOYn59HYWEhSktLYbVaMTg4iJGREeTn58NoNKK6upoLraWlJRarUkf+\n9OlTLC8vIxAIoLW1FZ2dnWhoaEAymeQJ3M7ODgcW6fV6zphPJpPweDyYn59HY2MjPw+BQICxwGVl\nZQxRC4fDjDz/+uuvkZ+fj0uXLrGVd2dnB263m3My1Go1LBYLTk5OGLGek5MDl8uFvLw8VFRUICMj\nA4FAAPfv30d+fj7eeOMNjIyMwOVy8epAoVDgzp078Pl8+Oijj5CXl4f19XWGuhUVFbFVkWx2Ozs7\nUCgUKCsrw9zcHGeRkGe+qakJcrkck5OTfK/t7Oxgf38fN27c+OsV3lGlRRGcJMgQi8XclTkcDvZt\nK5VKRgRS90wvNR209DJXVlaivLwcZrMZwDfje/Kg63Q6tru0t7dDIpFgcHAQ29vbWFhYQElJCQNk\nyPpxfHzM1j7i01Mk4P7+Po6Pj7G3t8dRsiUlJSy8oYjP6upqjhyky5SyxImVTDv2nJwczhNWKpXQ\naDQsMiosLGRmeTgcZn+uQCB4zSJG4rLNzU0OSCBhDYFpCMVL2d2Tk5OccESsAvp96TCpra1lfykp\ngElgR+KhdDqNqqoqaLVa3o8TMIKoUzRGpdxlWmnQ2JLY6gqFAgAYpSoUCpGbm8vxxJmZmVzUFBYW\nor29ndcThIf0eDzQ6XRcEJSWlqK1tZVDKBQKBe9diX9AYsySkhJoNBoW
ldE4ji4kAloEg0Gsra1h\nZWUFEokEOp0OJpOJ95W0+6doV7/fD+Ab6qBGo8HS0hLcbjcXDCqVij+3qqoqZGRkwOPxsM2RSH/x\neBxVVVWQy+Xsu6XCjqyC9Pe8GgBFqxiylxGAiOAfRBWjIpCys6emplBdXY0LFy5wVwaAdS4kYqPI\nV6VSifX1dezu7kKtVrPanS5qvV4PpVIJi8XC++Pt7W0W5FHaVlVVFSYnJ1kgShnlhIeWSCTQ6/X8\nPJCA6vj4GBqNBkNDQ5iamoLZbIbJZOKzYnR0FMXFxdBoNFwgEgWNujISyhqNRhZfUVY8EeaoMCaB\nIhVZdGYcHR0hFosxnVAoFHKXn5eXxyNukUiEdDrNnSNpJCjSF/hmrE2oWYqVraio4DhTErrSe0ri\nOxJ8ktCUyIPj4+Noa2tjfgRNUYqKijjamZTnNHU4OTnh9YVWqwUA9uHTtMBkMkGpVEKn0yEjI4PX\nJaTZIY4D+cK9Xi9HaxcWFvI6paioCLOzs5BIJGhsbER9fT1KSkrg9/v5WaIQIGqQjo6OWAxIUcYU\n4Uyx0wC44KOinGyUubm5HF9Ngk5y0RwcHPA7QUJWuVzObohXtTipVIoL+ZycHDQ0NGB3dxfLy8vQ\n6/UQCASYmZmBwWCA1Wrle4bWcxQvTbZAatzo/flLfr61S76oqAjb29v43//9X+Tl5UEmk6Grq4sZ\nwvfu3cPi4iJf/MQNJ9xhbW0trl+/jpcvX8LpdOLly5eYn5/H0tISzp8/j/b2doTDYQwPD+Phw4cc\n8GC1WhEMBuFwOHDlyhWUlJRgcXER8/PzmJycxJkzZ/gLJ8Ef8I0Fo6amhkeoNGYbGBhg64TX64VG\no2E619OnT1FcXIyqqioolUoMDAzgl7/8JeeZLywsoKOjA729vZiamgIAjk389NNP8Q//8A/QaDS8\n/7x8+TKePn0Km83GKmcCkpDKXaVSwWQyYWlpCV6vF0VFRZDJZCgsLGTal0gk4l0uXaxEodvc3MQH\nH3wAhUKBly9fYmtrCzs7O5iZmYFGo8H58+cZzPH06VNORmtra8OlS5cwODiI1dVVtLS0sGeeQjo2\nNzeRm5sLpVKJyspKpFIpLC0toa6uDjU1NVheXmbkLmkhrl69CqFQiJs3b2J4eBiPHz9mm2NnZye2\ntrZYx9Dc3Ayz2Qyv1wu/38+wkJGREfYu3717FxsbG7BYLFCr1SgrK4PNZoNEIkFdXR3y8/Oxs7OD\nly9f4vDwEJWVlcxlp0tpf38fgUAAKysrePHiBYv2AoEAnE4nB6dkZGRgdnYWk5OT3E2/ihn2eDyQ\ny+WwWCys8CXLJ+39l5aW0N7ezhcL6Srm5+cxMDCAzz//HP/yL/+CpqYm/Nd//RdSqRSvjkhwSTti\n2nv/4Ac/gEAgwNTUFHp7e6HT6bC1tcXPBvndCd35+PFjFBYWspqbVmpUgEWjUXi9XrjdbmZa0PpN\nJpMx4e1Xv/oVTk5O2AdMnS9d9KFQCCMjI6zPIH94PB7nABHyeBMTXSwWIzc3F4FAAGKxGC0tLSyS\nJaFZUVERbDYb2wtJP7K5ucmMB5lMBrVajenpabjdbuh0OlRUVKC1tZV30/X19WhoaIBcLsfQ0BA2\nNzfR2NgIo9HIq6mFhQXU19ejqqoKCwsLGB0dxeDgINrb29HQ0IC6ujqsr69jamoKT548Yf91Y2Mj\n3nrrLUxOTuLZs2eIRCKcRkguoGQyycIu+rwJ8ZyXl4df/epXWFtb4+8qOzsb77//Pscnz8zMYHJy\nEhkZGaiqqmIR8Pz8PLxeL9rb2/HDH/4Qg4OD+M///E/U19czk6G4uBgGg4Fjd8PhMBYWFtjiRTHA\ntEemooa6VLpoi4qKEAgEsL+/z0RQyninTIKcnBzePZNVb3p6GnK5HB9++CGi0ShsNhuGhoZwcnKC\nyspKLC4uwuPxoKqqiuFFAwMDcLvduH79OrRaLVZXV/H48WOsrq6irq4Oer0e9fX17P4gAunS0hLW\n19e5YczLy+NiksBrgUAA//Ef/4F33nkHXV1d+PWvfw2HwwGhUIjOzk60tbVhdXWVccu0lqb3LDMz\nE0NDQ0in0yzkI0dHIpFgp0s6nWb7dzgcRkVFBbq7uzE6OgqbzfYX3bXfmvDuzJkz//aqFSOVSqGo\nqAiZmZmIxWLMs08mk1AqldBqtXC5XMyFp25tYWEB6+vrcDqdnE9eVlaGVCqFsbExTE1N8YOv1+uh\nUqn4d6CRDAFByPISCAQYgEJjxUQiAblcju3tbTidTlRWVkKv1yMajUKtVsNkMsHj8eD//wuzAAAg\nAElEQVTo6Ig7pXA4DLPZDKPRyOM3kUjEymyCOUgkEohEImi1WhiNRojFYigUCkSjUYa16PV6qNVq\nFkvZbDZotVr09PTw70JdWmNjIwQCAQeOZGVlsYKXChjqholXXVRUxGNHuVzOuF2ytZHl7fj4GG63\nG06nk+1JNTU1zMUfGxuDz+dDVlYWE6Qo2SsjIwM1NTVobGxkIeLm5ianNJFAirChJDQLhUIIBoPQ\naDQoLy/nznpvbw9utxt7e3uoqqqC0WhkuA6lBC4tLcHhcPBelew+BK6gi5j8sLFYjA8yqVSKU6dO\nMafh1YQ5SvsiOxKphQkkQ7tyYrQXFxdzcp1KpUJubi62tragVCrR0dHBUxufz8cHd3Z2NndBlOhH\nnVUoFEJhYSHa2tpQXFyMg4MD3sf39fWxvYh28kQeq66uRkNDA0QiEfb29phJTjagUCiEeDwOl8uF\nmZkZbGxsMKGMLEJ08JJNMz8//zXYycHBAQoKClBYWAitVstiIioCKHaXCHhUHFEgT3Z2NkNsysrK\noFQqGRes0WjQ2NjIGo6srCyUl5czq54yJkjkSloDEuwmk0kGuJCNl/DGtMIoKipiIiMApg8C31AM\nCdREVleayOl0OkilUhbUud1uAIDJZIJKpYJSqURJSQkX5PF4nKeEZGEjW1llZSV3gjQtjEQibGEj\nVwyde4S8LS0thUaj4f/R9EmtVvPqwWAwMDb5VSSzUChESUkJHA4HvF4vP5NkC9ZqtYz7pef9+PiY\nMdOUrEn7bspMqKioQFNTE4BvrNNlZWWwWCz8WdBUjCa0AHivTlMF8uQToY+iok0mEzQaDQucyW6X\nSCSQTqdZsEv+dhLTyWQynpqQPTg/Px/7+/vY2NiATCbjCYVKpeJ0RnoXCJdN+ir690hQmJWVxRqF\nWCzGKZQEx8rLy+OpSWFhIVt0a2pqcPr0aabp0US15M8hZg0NDdDpdCxWX1xc/OsV3tFISqvVYm1t\nDbFYjGlD4XAYLS0tOHXqFO7cuQPgm86O9qoajQYOh4MV4pFIhMeWHR0dbB0ZGBhAKBRCRkYGI1cB\n8FiKRmxXrlxh3CTxwAUCAc6dO4dr164hFothfX0dDoeD0+Xo4REKhXwpr6ysvJb5S0S9zMxMHhn/\n8Ic/hEaj4RfBZrNhbW0NTU1
NPCqsrq5GX18ffvrTnyIUCuFf//VfoVAocHh4iIaGBo5WbWxsxE9+\n8hNsb29jaWkJz58/5yhE6gDy8vLw4MEDPH36FBUVFTCbzexHJY8zZTFT8M/S0hJWV1extrbGgQvl\n5eXIzMzkAyAWi6Grq4ttZpFIBIuLiwC+CXahC1MkEqGiogJCoRAulwunT59GfX09Xr58yRatvb09\nbG1toaysjIUl1PE+efIEGRkZzHbu6OhAMpnE3NwcvvzyS15ZXLp0CSKRCLdu3eIQmpmZGR75kyjy\n+vXr2N/fxyeffILe3l5UVVXB4XCw4ndjY4PHY0QElMvlODg4wL1793hCQFYtChSan59nYSZV5hQx\nSx0rHWp5eXl82RUVFeHq1aucVLW8vMyj2AsXLkCr1eKzzz6D3+9HyZ8pfcTq7+7uxgcffICVlRWs\nrKxAo9HAarWipqaGL1FaszQ0NKCqqgoGgwEikYjFbvfv30cgEMC7777LI/rh4WEupslvTWIitfr/\nY+7Ngtu8zyvuwxUkQRIgQGwkAQIgQYLgvkiUSe2yJduxvMVOXTtxkkmmyUwnk160F53pdDKdaTPT\n6VWnnbaTdpImaewkXjJeZUnWLpEiRYobuIAkNmIhCIALKBIE1+/COc9HfRdfcudoxjexI4l48f6X\n85zzOzokk0lcuXJFijs4Cy4vL5fxUHNzM8xms8zyk8mk1ONyA6DaQLPZ3/7t3+LIkSO4f/++SNFE\nlN64cQNlZWXo6emBXq8XJkR9fT3OnTsHt9stihzwxRiEsVGyMrq7u3Ht2jV4vV6cOXNGzK9UOXgL\nO3PmzGNRQm7czPinUilUV1fDbrdjfn4eOp0OX//61yVyy9n2xMSErCFk/vPwq1QqZQ1aXV3F3t6e\ndF4UFRXhqaeeQjwex/j4uHQRvPXWW7Barejq6sLTTz8Ns9mMsbExSTT8/d//PY4cOSJtZ+l0Gu++\n+y4A4OWXX5YxYXZ2tuBiW1tbZVQQDofR19eHR48ewWAwwGQywW63C+KV1Eiumxyz8jZKlLDRaBSW\nAv08Tz31FD777DMAQHd3t4zbaHjU6XQSg7158yai0aiYrhsaGnD06FH09/fjrbfegtlsRltbG555\n5hmoVCpEIhGRs7u6urC+vo579+5J7G5zcxM7OzvQ6/VC4aSqk0gkJDrNQ2A6nRYYEiumX3rpJfE7\neb1e5Ofno66uTsbGzz77LKqrq0Vp3dzcxLPPPivwLLVaje3tbdy5cwcKhQJNTU2PjS4CgQAePHgg\nseODgwNJJ+h0OvE/LS8vY35+XkBff8yvL22TZ0aUG4fP55OcOQ0X6XQagUAA5eXlIrORmESnY0ND\nAwoLCzExMYHl5WXcuXMHXV1dAiwgcrGoqEhkZrLXy8rKoFQqMTk5KfKM0+lES0uLnJ6vXbuGYDAo\ncurW1pacwvPy8tDV1QWS+9bW1iTPHggExFnOuRS/QFVVVaitrUVTUxM8Hg8++eQTtLa2QqFQYGJi\nQjKaeXl5sFgsGBgYwNLSkmxABQUFOHPmDNbW1vBP//RPqK6uxsbGhqAnM5mMUOBu3bqFhYUFMczw\nthgMBsXFT4KZ3++XuFhRURFefvll+P1+cXyS1200GrGysoJ4PI6rV6/C5/PJbeXChQvC8L9z545k\n2TnPJQK4pqYGpaWlSKfTCIVCQpVjkxjz9iQ/JZNJ3L17V/Cx+/v7gvZdXl7G7373OzHtsbaVkmtL\nSwv6+voEJqNSqXDhwgX4/X6pCeUzKi4uhkqlgtFoxP7+Pn7xi18gnU7DYDBInSXVI7/fL583Z6MN\nDQ2PoWF5wp+amsLw8LDcxtnhztwyyVl6vR6ZTAb379/H+vq6HMw427bZbNJCuLa2hh//+MdiMmM5\nCOthV1dXUVBQgMLCQmkDzM3NFU59KpUSuh4hNo2NjdBoNAJRmp+fx+3bt+FwOGTDpXGPkUu3241k\nMonZ2Vlp2Ovv74fb7RZoEGealOE574/H41AqlTCZTIhGo7h37x5u3bol/fZULYiBXl9fxxNPPIH9\n/X2pbJ6amhIGPeOXxcXF6OzsBPBFmyHb8kZHR7G4uCjfh9LSUuEM0FDJ9jP2VXCTJxFSoVBgdnYW\nb7/9NjY3N4VGx2ifxWIRaMr09DRWVlZETbTb7RgZGcFnn30mzHzifouLi+Uw/9Of/lRIlYSxUEVj\nORQ9B8AXN9Sf//zn8Hq9OH36NPr7+/HZZ58hHA6jrq5OiJD9/f0CXsnJyZGEAj8DhUIh8UGDwSDr\nztmzZ+FyuSSRQtWUpVc0rjFay0QHkz6BQACxWExm0KQpcnZ/7tw5MV/TwU6mg8fjwcDAAKanp+W2\n7PV6cfXqVRkFcv0miGZiYgI6nQ5qtVoUVFYlFxYWipmaI06lUomZmRkEAgGJ5zIet7+/j3feeQd+\nvx8rKyvY2dmRkcPy8jL6+/uRSCRQVlaGzs5Ombn/8z//s3gw1Go1CgoKRPnjwYOz+/39fVgsFly/\nfh337t2TpkuClurr69HS0oK5uTl8/PHH0Gg0aGpqwpUrV/7gXvulFtRQ1qCUQYlSpVJJvSsbmJLJ\npGRSATxmRKMblf3PKysrkou1WCxS2MEXgka08vJyqFQqmXkpFAq54ZL6RmoWiyHoKicq02g0CpDE\nbrfDarUKB5n54Xg8LoYtuvppoOAMmpGX/f19zMzMYHh4GD09PWIeZCxrY2MDNpsNr7zyCoLBIEZG\nRrC3t4ecnBwUFRXJ3KekpEQMHLxRk9LHKllKkAUFBZL9ZSSFkBnKWjTJ0JBCOt/Gxgb6+voQj8fR\n2Ngot9/R0VEMDQ1heHhYXMaUqBOJBOrr66WshmAQZpxp6Dos35FyFQqFEAqFUFZWhqNHjwqR7DBy\n+ODgAMAXlZMWi0WkNua0i4qKcPz4caytrWF8fFxefI4NdDodWlpaJMFBYxSZ9uQxHKZa0aymUqmk\nxQ+AxHiIvmS+NisrC0eOHIHRaJSUwcLCAnp6elBcXCxSKGffjDVWVlbCYrHAarVidHQUV65cgVar\nlUKa9fV1PHjwQGo5LRaLvF/xeFw2YUag6urqJJfOKmUaiUwmEzY2NoTSSBmdUVen0wmNRoPp6Wm5\n5bJgigfiw7K2SqWSQibmfomWpVk1FotJ6oUjAFLdWEdKAyl/H6JguYYwIsmc8srKCnQ6HcxmM+x2\nO1ZXV+XAv7a2Jge4ZDIppkh2ZNB0SjMpXedzc3PweDyw2Wzi8qfsz5EgD3msYU4mkxgZGcHU1JSk\nFDi24hplMBgkS0+1YHl5Gaurq9jc3JQDzPz8vMziD2NeVSqV9CPwO0cKIEcY/HlIdkun07BYLHIQ\nY9NdQUEBVldXMTExgfLycrkwVVZWCtHusHRdWlqKxcVFUagOY2VjsZiAkhgPjMVikhShsrC4uChx\nQiZMiDEuKSkRCE92djYWFxfFFEd5nxRHEj75nWc6iskWUjsZkyZ/gRcr8kRYIMXvIRkV
u7u78p0E\nALfbLeVpzMPzpn34fSYbhuMveizKysrQ0NAgyi5VE7brcb8jslmn08FkMv1Re+2XtskT6rGysgKD\nwQCj0Yi2tjYBihDQX11dLXCb+vp6nDx5EtnZ2RgZGcG1a9fkRsGs7tDQEEZGRpBMJvHyyy+joqIC\nOTk58Hg84jA9TFbjzY6gCp6kCWTIysqSW+fa2hpKSkpgNBpRXl4umMmKigq88sor0kgWjUbR2NiI\n119/HX6/H9PT0xgZGXmMx82ZXVVVFX7wgx/AbDZLrOZXv/oVfv3rX0OpVApprKOjAy6XC5cvX5b2\nu7q6OrS1tWFkZATb29t4+eWXBcCzubkppSUPHz7E2NiYlMpkZWWhuroaOp0Ofr8fAISrT4MI56Ud\nHR14+umnpWCGysbu7i5ee+01ZGdn43e/+x0GBwfhdrvxne98By6XC0ajUfret7e3hS44OjoKv9+P\n2tpa8QrYbDbY7XZ5keheLy0txaNHj6DX6/H0008DAJaWlvD++++LuYu3iebmZjEU8gDm9/vh9Xph\ns9mg1+vR2dkpZsPOzk6J7fn9foklMb1htVrl1kw3bklJCWpra9HV1YXGxkZMTExI5I2GvJGRETid\nTuzv7yMYDD6WZW9qasLzzz+PS5cuIRAI4LnnnoPVasXe3p5IfEwlnD9/XhzYv/3tb+Hz+cQpzg29\nqakJ3/ve98SAdbgvnYsPm9Rqa2tx584dJJNJnDt3TvwdLGy5f/8+Hjx4gMnJSYlF9vX1wW634xvf\n+AbGxsaQTqfxjW98QxzZ586dQ2FhIdxuNzKZDEpKSvDaa6/h1KlT0nrG2JVCoUAymYTNZkN7ezuu\nXr2KaDSK9vZ2cVLTD0CnuN/vl0IjHlICgQCuX7+Oqqoq/MVf/AXW1tYwMDCA8fFxbG9vy+2dIzwi\nsGl8PHr0qDTaEUrz0ksvQaPR4Nq1a5iYmEAsFsP58+dhsVjwySefSAsguRexWAxqtRoOhwPf/e53\nJTefyWRgMpmEo35wcIDGxkbhV7jdbvz0pz+Fy+XCN7/5TfH43Lp1C5WVlTh27BgWFxexvb2N7u5u\nOBwOVFdXC8FvbGwMTU1NaGhowG9+8xv4/X4YDAZ5l6enp5GTkyPv1/e//3309/dLcolxxKNHjwre\ntqKiAhaLRRIbV65cQU9PD9ra2vDo0SN4PB4sLy9jampKTIUdHR0oKyuD2+3G7OysXJRGRkYEdUyz\nKJUCi8Ui/QqFhYW4ffs2ZmZmcPbsWWi1WqysrODMmTNoa2vDxMSEHFhnZ2cRi8Vw9uxZGAwGZDIZ\ngeHwJsyDx/7+Pn75y19Cr9fju9/9rqQeCOwCIJ0DhYWFcqCjz4dVxTSF8zbP+DFrs3nw7erqwsmT\nJ6HRaPDb3/4Wk5OT+N///V+88cYbePPNN8UAGggEcPfuXSwsLKCurg7Nzc1obW2V+N/c3BxMJhO6\nu7vR3NyMaDSKhw8fCgTu/PnzaGlpERXjxIkTKC0tlUvaH/r1pW3yzAGGQiHU1tZCr9dja2tLWrzY\njMTqSKIP9/f3EY/HkUgkkMlkBODidDqhVqslf8ws4sHBARwOBzY2NoQaRdMUAFEP9vf3UVdXJzxm\njUaDg4MDoRzl5+fDbDbLTZknwZaWFgHQkL7W39+PnJwcVFVVCfGKFZ7pdFr+/FAohGPHjuHo0aMY\nHx9HOBzG7u4uYrGYvNg5OTm4efOmbHw0raXTaUQiEXi9XiwuLgqXns7s6elppFIpKWZhrePGxoac\n5nljYjaUN3RGRfb29sQVu7u7i3A4LBEZUrk0Go1ImQTQrK+vi1z+zDPPyJ9FyMn29jai0Sj0er2Y\npjY3N2VeGw6HhSXNaF0mk8Hi4iLm5+extbUlMZv5+Xlx8/J7Qna80WiUKBolSJ7Eh4eHpVAklUqJ\n+YlzYN6Gw+EwAIiPITc3Fzdu3JAFIh6Pw2w244knnsCVK1eE3cDcMg1hvHWQu69UKrG+vg7/7/n6\nrDoNh8OSsydBjYpWdna2OI6ZoV9ZWZEs+pEjR7C/v49AICDz8IWFBaHaPXz4UNq7DhuIKNMyR8/3\nkTAWcr4XFhakeKe+vl647TwMcxNRKpUYHx9HOp1GQ0MDtra25GbCqKrH4xFTKQ+WnFufP39eIok9\nPT3QarWYnp4WrgEVGZIhye2nxyc/P19MbbwFk27J51hdXY2trS2Mjo5iamoKNpsNR48eRXZ2NgYG\nBiRi5nK5RDFIJBKYmZkR1chms6GwsBCbm5tiQiwsLITH48HBwQFaW1txcHCAwcFBMcIyunXYSc0m\nPYVCIRwG9lMMDw9L7O3o0aPSGsmZMtNHRqMRExMTsr5ptVopx1pZWZE5N9HTrG1Np9OYmZkRJePR\no0fiNxkbG5MaYd7I6aNh1wPff3qMrFarUOWIo6aBbHR0VDxS0WhUehP29vYk3shiH8r+S0tLWF5e\nxvT0tHw3uf6rVCpkMhkBMLFXgqpuIBCQ9ATfN0YB+f/v6elBfn6+dI/wXaFqFwqFMDc3J56U6upq\noRFynEH/yt7eniRDFAoFfD4ftre3YbPZJB6Yl5eHUCgk8UuOyGgApkG0ublZuu2j0ah8xgAk6sd2\n1D/060vb5Dl3HR8fR0dHB8xmMz799FOUlpaiq6sLly5dgt/vx1e/+lWZYRC/OTY2hoWFBWxubmJi\nYgJbW1vycNltHI1G8fbbbyMSieDFF1/E4uKi4A4BSKEFZfL29nb09vZidHQU6XQa1dXVKCwsFIMf\n/4xgMIihoSEUFRXB5XLJKZ7NVbm5ufjd734nsp3RaEROTg4WFxfFSctqUf/vm+l4G/7ss8+QlZUl\nLlyr1YrNzU38+7//u8jTLS0t4n6/ceMGbt26JTfwq1ev4syZMzh79ixGR0clK//cc8/hzJkzePDg\nAfr7+6XP/nB/Ng8L8XgcCoVC6kRZhMCFk89id3cXAwMDsFqtsFgscLlcMBgMeOutt3D9+nWJ/D39\n9NN47733sLa2Jq7n5eVlTExMwOFwoKamBsFgUDbHRCIhUCTe0AFgenpaYpKEo1RUVGB8fBwPHjwQ\nGZrxRJPJhL/+67+G2WyGx+PBlStXMDo6io6ODjFSkvRGubimpkZMdh9++CHm5uaQyWRQU1ODzs5O\n2Gw2+Hw+/Nu//ZvI54uLi6isrMSRI0dw/fp1zM3NCcKys7NTxg3BYBDDw8P4r//6L1RUVMBsNmN8\nfFyqeh0OB/R6vXDR9/b2MDQ0hLW1NXzlK19BbW2tEMoO3xDm5uYwPz8vM+CcnBzMzMygo6MDNpsN\n9+7dw+LioiyIRUVFuHv3rnSK81aiUqlgNpuxvr6O8+fPw2az4fLly9jZ2UEikZCmRUb6Ll68iF//\n+tcYGhoSZYEIZa/Xi5///OewWq14/fXXMTg4KFhdj8eDzz//XMyFdF+bzWZcunQJOTk5OHHihEBg\nnn76adjtdly9ehVzc3NYWlqC0+lEaWkpbt+
+LchPViFz9lpaWirsgZmZGZGXuVHSCb+6uop33nkH\nzc3N+Md//EcAgNfrFSnXbreLb2hmZkbMpdnZ2aisrEQkEhFJmHn6WCyG0tJS9PT04LPPPsMvfvEL\nyfaz5Gp0dFRQ0ufPn0dJSYkc7jKZDMbHxzE+Po5AIICWlhb09vbiwoULopCQGMjZ7d7enkjlLpdL\nZPcjR47gwYMH+O///m+oVCrodDp8/vnn0Ov1+PM//3P5ufR6vfxem5ubWFpawvXr1zE9PQ2tVgun\n0wmdTodr165hfn4eGxsbgs2tqKgQ4xrNjD/72c9w8+ZNzM3NyX/zwQcfYHh4GJubm6iurkZTUxOW\nlpYQj8dx+fJlPPvss2hsbITRaMTU1BQ++OADpFIpIcWxItzv92N8fBwtLS1SG6xQKGA2m/HSSy9h\nc3MT77//Pubm5rC1tYWTJ0+Ky720tFRy9+Xl5Th//jyuXbuG4eFhqNVqpFIpeL1eXLhwATabTb6r\nm5ubuHjxIpxOJ8LhMPLz81FQUIC7d+9ifn5eElLsf5iZmcG7776LqqoqvPHGG+jt7YXBYMDExIT0\ntGg0GnR2duLZZ59FMBjEL3/5S4TDYRiNRvzDP/wDgC8ATEVFRdjZ2YHD4ZC1t7i4GOXl5X/UXvul\nRejOnj37IwJIDAYDcnJyJINL4wvnXTk5OeJs3NvbQ2lpKerq6nDs2DEpIGhraxNinNVqRVtbG0wm\nk2AsSbIymUzY3NzE7OysmI5KSkpkLup2u2XhZIc4Z9nJZFLmqdwIq6urpTmL8x6Xy4WmpiaYzWZ0\ndHSIlOtwONDd3S19yiaTCevr6yKLdnV1SQWjTqdDQ0ODdJ2n02k8fPhQbrI9PT0CTujq6kJ9fb0Y\noXZ2dlBXVweLxYJAICCNcJOTk9J4ZTQaUVpaiqWlJfmZVCoVKioqYDAYUFVVBYfDgUQigb6+Pvh8\nPinNcblcqKysFOxiJBJBIBBAOByG0+mEVqsVrvjh2V0oFBLADGE4paWl8Hg8YohSKBQ4fvw4gC+k\neWaheQui4Y7qQjweFyogD1R0SNNc09zcjPr6ejkxKxQKkQ+5GBISxK7v7u5unDhxAq2trTh+/Dg6\nOjqg0+nE9EaTF6NRnD/TF0Gq4uDgIKampuB0OpGXl4e5uTnhiPMwcurUKTE4VVRUSLNcVVUVbDYb\nYrEYsrOz0djYKO1d5Mnn5+ejqqoKer1eMLzr6+s4ODiQLDijPydOnMC5c+ek+ZG32Lm5OSEhkr3g\n8XgQjUZlIXc4HGhsbERWVpbk1js6OvDCCy/gyJEjAmgxGAzQaDRobGyU+CRvICMjI9jY2JB4msPh\nwFNPPQWlUgm/34/6+nq4XC4hALa2tsLn82F+fh4nT55Eb28vWltbsbS0hHQ6jeeeew5NTU0S1eR4\nIxQKCdGNcjQjY1TWOPe1Wq2SdOA8v66uTuBHarUaq6uriEQiqKysRHV1teSkmR83GAwCGlIoFHA4\nHFCr1bh79y7y8/PxzDPPoLOzU1DRVqsVFy9eFHMtkdeTk5MCJWLUsaWlBUtLS5icnMTo6KgcHtjA\nOTo6CpVKJbClg4MDTE9Py0Whv78fXq8XBoMBWVlZEvO02Ww4duyY/Mwsz6KfZGVlRSh2NDyGQiGY\nTCaB6NB9HwqFhJ9PCZ04cXpm4vE42tvbcfr0aXR1daG6ulqc6mTPp9Np+T3C4bA0bFZUVMjzycvL\nw9LSktQRW61WieeRiEfFgK2UhKCdPXtWjJNUGaanpyXieuzYMVgsFkGY83BHEyeR5SQhErajUqlw\n6tQpGAwG6erw+Xyoq6uDwWAQQufq6qr4mzhCs9vtKCsrg8fjQX9/P77yla/gq1/9KpRKpeC7VSqV\nRA/ZDEng0bvvvvunG6EjgYsQhJKSEjGhzM7O4siRI6iqqsLq6qpQt3gLZ6yHABeStmi4MhqNKCgo\nkBkt6XKsXCWjmdQlAAiFQlLSkMlkpIxDo9EgPz8fy8vL8P8eo8o4WUlJCXw+H3JycsQwk5OTg46O\nDpGWlUolUqnUY6hGFiXk5eVhcHAQ/f390v3MIhnearOzs+FyuYS/PjU1hb29PVy4cAEGgwENDQ0y\nzy8pKZFNu7e3F3q9XspJOMOmREZyE00rvAGx6pLmQUrKzO62trbCZDJBpVJJhSPHJgsLC3A6nTCZ\nTNKMlZOTIxE6zgCJdszLy5NKV8r/Wq1WbtvMrrOow2AwIJ1Ow2q1Ijc3F/Pz89jb24PVapUbbigU\nEnbARx99hNXVVdhsNjidTthsNnz00UeIxWKC0mVtKRu0EokEFAoFWlpaYDKZsLi4KLM6krG6u7sF\nRsIWwLt376KsrAy1tbXwer1IJpNy88zPz0dLSwu2traQm5srM+JYLCb8Bq/XC+D/rbrkZqNUKnHp\n0iWsrKxIuQsPEHyH2tvbUVxcLHE4ztuZfaa6UVtbi+7ubpEK6UYuKSkRXgNjcFtbW/Ics7Ozodfr\npR6V5UN0/BKYxDz75uYmjh8/jkAggCtXrjx2S+R4QKfToaqqClarFXNzcwiHw6itrUV5eTkePnyI\nyspKdHV14aOPPhIcLaFOIyMjUmBCuZpxKMrz9AhQxudBk1Q4Ki7V1dWwWq3y/re3t6OlpUVKUbiO\naLVaAXVxzEHzIquQ+S4wpz08PIxz585JZwPpc7m5uaioqJAikrt37yIej0sqRq1WS4Z9f38fk5OT\nCAaDQtbMzc2F3W7H0tISFAoFdnZ2sLe3JyVJ8/Pzkg7a3NyUz5seKB562IvBrDxTGpFIBIlEQuqH\nWfFKOBQArK6uCvmOt/H8/Hy5PbORkYVf3OQJzgmHw5ifn8eDBw8AAPX19UKdpMbBmvoAACAASURB\nVOy9s7MDtVoNo9Eoo0ySHmlKZWuoXq8XJDfrfen7IruBz3BrawvFxcXIZDLw+/2yhpJvcnBwgHQ6\nLcbAvLw8mEwm4bG0t7dDoVAgFovJrZpwNLPZjAcPHmBubg61tbXIysqSpAtTCPQEMNFCv4DJZJKu\neV6qTpw4gWQyKT6p/Px8eb9XV1f/qL32S7vJt7W1/YiQGW7wNNdMTk5KppdwE4/H81iTD3nDlJlp\nwDCZTJiZmcHg4KDQx772ta/B5/NhZGREXLc9PT2YmZnBw4cPBaBBE11tbS2cTqfAJLa2tqTHvb6+\nHk8++SQ6OzuF2c6cv8fjEWjKwsKCSLSBQEDY2lyglUqlbFZZWVkoLy8X5jPNHw8fPoTb7UZ2djbq\n6upw7tw5+H+fUz/svCTfmdG+ra0t6cemgzgejwsOksznpaUlGAwGlJWVCaaXDWY8eRK4YjQacXBw\nIO1+jDsShLO9vY1wOCy37YaGBslj86DGMopUKiWZeCJ09Xo9VlZWoFQqhULFLChpdJzTW61W7O7u\nSmEIUa96vV5Y3NnZ2fjwww8xPDwskR7OI2dmZnD9+nVZOBkPo/ROGhapXHT9MiJGLv/Ozg4sFgt2\nd3
fh8XgkAkXcLln1JpNJsKpbW1t48sknceTIEdy4cQPhcFgoc3V1dSLj8nZaUlKC9fV1ie24XC6R\n/pLJJG7duoWOjg50dHQgNzdXZHuXywWLxYLl5WW57ROaRP8K0wjFxcVobm6WxZJzxe7ublHDHjx4\ngImJCRiNRnR0dOCpp57C7u4u3G43+vv75d1gxNNutyMYDOLGjRuIRCJYWloSklkikXgse804Iju8\n+S7TuNbQ0IBAICAzah7OgsEggsEgVlZWMD09Lc9ZqVSKIzkUCmFiYkLy0FS46DRnFwahO4xTscnO\n5/NBq9Wiq6sLMzMzGB8fRywWQ0FBgZAs6Yyn1DsxMSGoXRrnmPoBvpjHT01NyUE/FouhsLAQlZWV\nQtTjxk7yY0dHB7797W8LKW1qakpSNBaLRYicfE/ZceB0OmVEs7+/D6vViqamJuTl5eHOnTtIJBJi\nNs1kMoKTZqvg2NgYJicnoVQqhQ9B/8Xy8jI8Hg92d3dRUVGBjo4OxGIxWWdZOdvS0gKXy4Xr169j\ncnISZWVlssHxQJtKpVBbWwuXy4WRkRGEQiEBOJWUlCASiYjJmqMqttINDg7Kbbujo0OeC9ez/f19\n4QAwNcAOjpKSEsGRE8iVm5uLu3fv4uHDhzh+/Lhw4/Py8qRW+eDgAGNjY4I4Hx0dRSaTQWdnp6y3\nhCG1t7cjkUhgbW1NxsA7OzuYnp5GLBZDb28vXC4XGhoaMDMzg88++0zy8efPnxeg0WFwEtXTe/fu\n/elWzba0tPxIpVLJS6dQKBAMBmVROlynyLkbW+so66+ursrtinIjN7VHjx5JRWJZWZlkvWmO0ev1\n2NjYkC8Co2OFhYVQq9WSHQ6Hw0gkEkIOYwSIBheFQiG8YzqpGXfZ29uDx+ORTY2bBs19h6tzyVZm\nPImRJrrcuaESrbixsSFSNg86lJF4Sl9YWMDCwgJycnKg0+lEoif5ijNE4Iu4T3Z2tsTxAEiuWqlU\nQqVSiRmMGF3GqdhqxZs/q2m5UTM+yJgiCVa89dGNTFMMa0+pBDBXzc80k8mI4sGqSnYMFBUVCX51\nZ2dHXiiaVKLRqFTBMlEAfBHx8Xq9MBqNqKiowNLSEkKhkJyiafrMzc1Fbm7uYxhKfj/Ky8tRWlqK\njY0NefnJSk8kEvL3Z6Um5btMJiOuX+aEFQqFcLO5IB6Och5GYdbX16OiokIkUm7qBQUFmJ+fR35+\nPhwOB0pLS6VAZGlpSZ7Bzs4OFAqFxHToQTEYDMIoAL4wH5KWWFRUJCVHLDw6HDPKyclBOp2W6B2B\nUaSW1dbWorS0FG63Wz47jUYj0SjGKilPEqV6+GenIsHnR0RsQUGBlEbx5yssLBR8Kg/hjx49khgT\n+9wpfVP9oMmUcBngCxWSsSviXNnhwMIcHlYKCgoEbBKLxeTvzt83mUzC7/dLPJetlwsLCxIZJnQJ\nACKRCILBIOLxuPy8er0eGo1GOgo49w8EAqKSkUS3v78vt1SWrxiNRqhUKqyvr2NyclLibYd7Lijr\nH1aRUqkUUqmUuOaZUGJr5/r6OmKxmPwMNLGWlZXJ581f29vbcknhHkAVgSMevV4vngUma3Q6HXQ6\nnai0JARSWbJYLLKe0ZyXlZUlCG2NRiOMhOXlZRkDTExMYGFhATabTSrNS0pK5DvKfng235GjX1FR\nIc+fhlCn04nc3Fz5+2xsbEhHiFarRTqdxu7urrAe+H0kNTUQCMj3eHl5WQ62u7u76Ovr+9Pd5Jua\nmn7kcDjwla98BdXV1dje3sann36K7e1tdHR0SMaa0RfCMxiNiUajmJycFB7y7u4uQqGQ5BXLy8vF\nnenz+YQhzIgKT2WVlZWYm5uDQqGA3W7H+vo6srKy0NLSIgaZaDSK9fV1MUAEg0E8fPgQu7u7+OY3\nvwm73Y6dnR1ZWCm9ajQazM3NIZ1Ow+FwwOfzCa+aAIfV1VWhYNFLMD09jXfeeQdmsxlarRb+39ce\nms1mybiz1zkrKwtzc3OCra2trRVu8tjYGDwej8ybNjc3kZWVhaamJlgsFuh0OsnSVldXiwRJh2wo\nFBJUKWM2PFxQNdnY2MC9e/egVqvR0NAg4xHCijijY4OY2WxGbW0tbt++jbGxMemoX1xcxKlTp6DV\nanH9+nUBrNB1PDc3J/Io61dra2uRnZ0tZDJKiIuLi0ilUjh37pyMOrjwsfO5sLAQJ0+eREdHB0ZH\nRzEyMgKfzycFI/fv34ff7xdg02HnOLOtiUQCY2NjiMfjwj4vKCiQz4w3b5fLJQ2A5OIPDw/j2LFj\n0Ov18Pl8ssDxefBZcYxCtcvr9WJ0dFRupna7XQ5BjY2NkkZhrMntdqOoqAitra3IZDIIBAK4c+cO\n/H6/fHZk8R8+AHG8wwNBZ2cnmpubsby8LAcg1pvqdDosLCzg448/llz50tISdDodLly4AK1WKyMP\n4orb29tRVFSEjz/+GAqFAmfOnEF1dbXUonJcMTs7i8nJScRiMRQXF0sP9/T0tMw3GbfKycmR0g9u\n8AcHB7JBlJSUwOPxyOybKtDU1JS4941GI1paWqDRaMRER/WHEjQPMXwPqMbt7+9L/v6we57JHtao\n8pAfiUQwPz8vPfb0JXHcYDab0dDQICrfxx9/LBcXmjNppisrK5PNpaSkBGNjY7hy5YpQF0+dOiW3\nx7GxMQSDQTGrVVdXy6jv1q1bGBsbQzgcxrFjx+TAy+8BD3ZkauTn54N48sOKBBM0MzMzknbhhry3\ntycSPw8PHL9NTU2hurpazL4+nw+Li4t49tlnYbVaEQ6HJbnT0dGB7u5u8fAwrjw7O4tgMIgnnnhC\n6IXkx5eUlGBzcxO3b98WnxQ30snJSeTn5+P48eOiorD1j4VX+fn5UiHb1dUlBz2uMRwj8LCl1Wpl\ntNzQ0ACNRgOv14tPPvkEZ8+eRUtLC371q18hGAxKusDlciGZTGJoaAj/93//JxcrAJidncWnn34K\npVIJg8GA27dv/8FN/kurmr148eKB2WyGy+VCJpPB5uamLDgbGxsoLCxEfn6+nBJ5W+N8MJ1OI5FI\noLGxEUVFRRgbG5OZX2trKwoKCnDnzh2RtFnSwNNbZWUl9vb2pNuXc2hWc/KmSBMVISpZWVkih5Ft\nTZc+gSt0Wu7v72N8fFwauPi/8caUTqelDpTz06qqKqyvr8umXVhYKG1OXOhZTsMv1eDgIDKZjCBv\nqWQkk0lEIhHU1dXB5XIhFAohKytLXPu8GeXl5UktJatdGadjy1hLS4vAbDiXDAaD4gjmrNXr9WJv\nbw9ms1meNW+plMw5SmBc7zCClf9NNBrF6uqqwIACgYCU3gwNDUnrHBcJYiPpbOYzp1EsGAxKTSgj\nSIxibW5uyg2FJjbeeNkWxz7qnJycx07TzIBTSmMue319HcFgUJrvmGGfmJhAMpnEwcGBGBjJzF5Z\nWRHTqcVigUajwfb2tixIVIxKS0tRU1MjHgTKp/xOkhI2Ozu
LQCCAvLw8obWp1WoEAgH5fjOeVVBQ\nIMrS4Y1LqVRCrVajuLgY+/v7WFhYkA6A1tZWmM1m7OzsYGpqCgMDA9BqteLyV6lU0Gg00sZI/wUA\nobWxk6Curg47OztCvVOr1WhsbEQqlZI6XEbLysvLkZ+fj5WVFZSVlcFkMkk8kgpdIpGQdyY/P18O\nnYfBUQDkdrS7uysRXI1GI5somwyDwSA6OjpQWlqKQCAgEcmlpSWsrKxIUxvn6IlEAtPT06iqqkJN\nTY3MyUtKSsT7UV5eLvWlfAZcwzg2KCoqEub56OgoWlpa0NLSgk8//RQrKyty8KioqEAikUA0GpVD\nA02mNpsNPT09cvCnKkY1y2QyiSpC78zGxgYcDgeysrKkj50qAxUGPg+CquhnouGVvg+isdvb28WL\nAkDWGRqY+/r6MD09jY6ODuzu7mJiYgK1tbWitqVSKUQiEYH7UGHkc2bih2ZTHvJJHTw4OIDT6ZTM\nO2PTCwsLSCaT0Gg04tMioCcSiaCwsFDi09y8uV4ZDAbk5eVhbGwMhYWFqKurE08DkbUs5FKpVJid\nnZURNVWdxcVF4cTwEHL//n0BQ/GQTAVhY2ND1Jcf/vCHf7pVs6RiMUqTn58vUsbCwgJqamoE2MCT\nGG/gFRUVODg4gEqlQllZmcipnGVbLBY57ZKcRjmHdY6kOZFRTvKT1WoF8MWJiRssT+mkgSkUCtTW\n1mJ/fx+Dg4OPRSfYKMVb0KNHj8QgQ4Y5F99YLCaZatYz7u7uynxtZGQEAOByuQBAyEhlZWVoaWnB\nxsaGyK4HBwfo6uqCx+ORRUij0QjdD4C8eCxB2djYkJk8m7gMBoMgL7lx8uXn/84XYWNjQzjXXESZ\n4WSsjzWSSqVSsJzxeBwWiwVFRUUivaXTaQwPD2Nvb09Y6Kurq1AqlTJiYSrhcOkNPz+73Y6NjQ3s\n7OzIgYMLDDO+rLE0Go1QKBSYmppCOBwW2BElTaVSCb1eL7RAqkA0oR0cHAh7vLGxEaWlpdjZ2cHS\n0hK2trak1IjybTQalQ3YZDKJg5kHAKfTKVXDLBahxMe/A0s5WM6j1WpF1eHhcm1tDXa7HY2NjQAg\nmxTjPlQbAMiBmtW2PNgyv8xxGeXclZUVLC8vCyKUc1xK4DU1NdIwx9sdO9cZ8aJsq1ar5XZCOBSV\nuvLycrS1tckGQVVCr9cL0ZLf2ZmZGZm/A5BRAQDJb6vVaok6rq6uCsiK/QGxWEygR4xPLi0tIRaL\nyTtLd/Ph0pTDBTNUdoqLi8UotrKyglgsJqVKNM2xRlin04l0zBEVjcVKpVJqWXkgIGjH6XSiublZ\nvDrFxcUyRiEAKR6Pw2QywWazyQiPcjjXzMOlLAqFQpQjm80Gs9mMTCYjIyqHwyGHt1AohOzsbFgs\nFkElc/NhBwVNbvxZU6mUZMRZCkSTNEdupMVRfeFzrKmpQXt7O27fvi0mVvpdOJJjjp/RSZbdcORC\nNj3XN3Y5pFIpABA4mU6nE0+S1WqF0WhEKpWCVqtFc3OzXJ6ysrLkO1JdXQ2TySSm2vLycjkEUXrn\ngWF/fx/hcBiVlZXo7e3FrVu3EAwG5bOlJM9DND05eXl5AL4oSuJYi76JP+bXl3aT/+CDDw4opQMQ\nJ6jZbBbqD2fRrJLNyclBeXm5nKjp7NzY2BCHKzurgS8iWDSlNTQ0CDGJIBsufoe7wllWwfkR//3m\n5iZmZmZQVVWF1tZWiYcRWxmLxdDQ0ID6+nqJaWxsbODtt99GIpHA+fPn5RY5NjaG0dFRjI2NSQSE\nN/vNzU2RehcXF6HT6XDx4kU4HA6oVCp8/PHH2N7exnPPPYecnBy5nRBawtO0SqVCIpHAnTt3Houx\nsamLBqlUKiVGKTbdUVJNJpOyIF67dk1kKrrwy8rKkEgkMDAwIAqGRqORDOze3h62t7dRWloqhwuS\npCorK+UfZujz8vLEDUuvxPT0NDKZjECO6G7f3d3FgwcP0NDQIPIqACmnmJ2dxdGjR5GTk4P5+XmR\nFrVarRwE6dOgWhCNRoXOlZeXh2AwiPv370uveX19vfC+abjy+Xzy77xer0CFlpaWhDXP9ENWVpYA\nf2gA5YaYn5+P7Oxs8Q9otVqsrq7Kd2Fubg4TExPIzs5GUVHRY6UifL41NTXo6OjA8ePHcXBwgHg8\njrt376K0tBSdnZ3i9qfrnP3gVMr43Ofm5iS2R0IZD7nc+CsrK1FRUQGj0ShmqJmZGYmmcQNjph/4\nAmRCrC9d73R8J5NJWYDz8vIwMTGBe/fuQaFQwGAwiEoVi8Xg/z3nf2lpCZWVlcJo2NzcRH9/P+x2\nO3p6eqRVkqZOPmulUikGMzq4qZId3sgPDg5EdqdnhJ6EgYEB9Pf348knnxQza1FRkbwXgUAAb7/9\nNurq6nDixAkAkBpWmiqnpqZEsmbRlsPhEIDV+vo6FhYWcO3aNeTn5+OVV15BdXU1iouLRU5eXV2V\n5E4qlRJTGi9EDocDubm5AmDhoYHz4+rqavHwsG2RNcBbW1vyXcvKypINkN9dtlLSu8JLx9bWFlpb\nW1FaWipjFMJktFotHA6HmJR5gOJavLm5iRs3bkChUODEiRPScEcCHA9I+fn5mJqaEl8C1xo+04OD\nAzQ3N0Or1UopTGdnp3inGOkdHh4WU/Tc3ByUSiWcTqewCGw2m1A1qRa43W5RBlpaWmCz2SSB4PV6\nUVlZiYKCAszMzAiKmmoUxzwcQbPsiXsdcd98L5gYyWQyKCsrEzTz6OgoAoEAfvazn/3p3uRjsRgS\niQRWVlZgs9mENMXbIpuZ0um0FCpQ0iR21Gg0YnFxEeFwWG4VNPMpFArMzMwIKnNmZkZcsclkEtFo\nFA0NDTCZTOL+5p8Vj8eF472/vy85WMInWH+alZUlABG+WPF4XIpk9vf3RU14+PAhotEoysrKMD8/\nj0gkgpWVFZEv6fosLS3F6uoqJicnZcO+e/euSM0sRrh06ZIYahoaGpCfn48HDx5ge3tbkKc86adS\nKSQSCVitViFTMSoSDoeFP8CoIW/YjLSVlZXJYSovLw9ra2vw+/3ixA6Hw3LIoBmHmzbRxJlMBlNT\nU4/NmnkDYuxPq9UiOztbDDmUwLOysmSR2Nrags/nE3NkUVERIpEI/L8vi6FzmVSs7e1tDA0NycI5\nOzsLr9crGFkqEIzN7O7uimGQjYPpdBpKpRJ1dXXY2trC9PQ0ampqoFar4fF45Bafk5MjCg4PWqQT\n+nw+lJaWwul0ikSXSCRQUFCAvLw8kf0DgQCAL0YcLGPiZ06pkCMKxtLIag+Hw0JqYykND6A7Ozvi\nYK+pqRFDHW9shw+86+vrUohCdGZFRYVgiEllDIfD8mdxLku1g3jhgoICqfXlbYdRIIJzFhYWEIlE\npFK5trZWRkech2ZlZaGqqgomk0l8MsXFxVKxC0C8H7z9M8rI2xQAzMzMyN+F1c5cnHkzYrva3t4e\ngsGgHKxI/GtoaJ
BNJRAIiNJVVFQkEeBEIiGlQbOzs6IGMrrK58IYGG+26+vrcsHhqGlpaQmZTAaj\no6OIRCKipK2trUmqgK1uHOFxc6CJk+8Zq1Z5sEgmk9KHUFhYiL29Pfh8Pil6KikpEVY9o3C5ublY\nX1+H0WiUz4VjlOLiYom6hUIhrK2tScUy1cu8vDz4fD5puczOzgYAYZtoNBqkUilMT08jEonIWrq7\nu4uZmRnxk9DsxrWLhxeOGjn28Hg84q2hv8ZkMon3hZ9ZIBBAQUEBdDodAoGAyP4OhwMNDQ1S9xwM\nBgWlS0mfLaYDAwOiGnP0xn2A6wyl/oaGBonYraysyKiaRVhcZ5lC43iVpUpkKvyhX1/aJv/+++8D\n+GJza29vh1qtxqVLlzA0NIRr167BarWK45P1nOTJJxIJdHZ2yk1qeHhY3KpFRUX4+te/DqPRiHff\nfVfmw2NjYwC+WKyi0ShmZ2fx0ksvYWdnRyRkyrs8lVEOJdpye3tbwC2PHj2CWq3GsWPHkEgkEIlE\nJCLFW0FRUZFI9z/5yU9gMBjgcrkkV0xHstvtFlfnk08+Ke5abshsjyKgZn9/Hz/5yU+EsvXaa6+h\nqKgI77//vowG3nzzTbhcLlloWGrAFzQajco/OTk5CIVC8sVhtO3wZ0/piMQtEsyYvScSleYcxgIt\nFgvOnj0LAAIHIbObDt1IJIJkMinmHM7zs7OzpXsgmUwKDOPKlStwOp34/ve/D7fbjZs3b8Lr9cqL\nSYAGzXs3btwQXPLNmzcRCASk4e5wL3llZaVwCKamprC/vw+j0YhkMinPcnV1Fe+99x4aGxtRUVGB\n+/fvS6Tx3LlzIutlZ2ejtrZWEhUPHjxATU0NGhsbEQqFcO/ePSwtLUGv16O9vV2kx88//1y8AYFA\nQBgSPMDm5eVBrVajublZTKV0/E5NTQkJ7XBbVywWE7cxVQUedJghpvmTcUKWxTCCxBGH2+2Gz+fD\n7u4u7Ha7KEalpaUwm82orKzEwcGBRI4qKysRjUaRSqWQk5MjEdXDB1j+HXm7PH36tDAAKLM/fPgQ\nJ06cwIULF5BKpcSDEolEcP/+fTE58eDLiJzf70cwGJRioatXrwrU5ZlnnkFXVxcuX76MmzdvygbE\nYhVuHuXl5aiqqsLCwoKMB8myd7vd6Ovrk8OwUqmE1+tFdna2qFnDw8NIJpPCdCdylfNtHqTy8vIw\nOTkpzH0y/XnDnpiYEFCVzWbDxsYGBgYG5PBElzmraTn/1Wg0cDqd8Pl8AsbZ2dlBIBCQKuQXXnhB\nbrQ3b97EwMAACgoKHmvG45/Dlsb6+nrodDoMDg6KZ4Ueh2vXrskm2dLSArvdjrm5OWRnZwv/IhqN\nij+FJUrl5eVobm7G/fv38R//8R/Q6/UCMkulUvjwww/l0MPvMw2afr9f1COueSaTCXNzc9je3sbk\n5CT0ej2MRqO47jkm4kGcSZFwOIxoNIr+/n6cOHECbW1twjQhjI3YaLfbDb1ej4mJCWkXJERKo9HI\nWsL3kWodx1NU7NLpNLxeL4qLi9HR0SFrrVKpBPAFifHRo0cSNT+cTvj/+/WlyfX/+Z//ebC3t4fd\n3V3U1dWJoYWMcd6Qenp6kEql4Ha75eadyWRgtVrR0tIiJ7vD7uD29nbpNFar1TCZTPLyNDc3Y25u\nDnfv3sU3vvENtLW1yQbPQoBYLIZLly6hpKREjBmpVEqypJzLcFbj8XjQ19eHlpYWIVFNTk5ieHgY\ntbW1Itky5vTgwQOB+HBzOnHihLiU9/f3UVBQIFWq4XAYfr8f0WgULpcLdrtdEgX7+/tC+3v77beR\nm5srZp/9/X1MTExIyUNjYyPq6upgt9vFvDc8PAwA6Orqkmz/kSNHoNPppHBlfn4ezz33HOrr65Gd\nnS15ZeZjn3jiCczPz2NkZASRSAR6vR5PPvmkGLCKiooQj8cxNDQks0Kj0YhYLIZr166J5HzkyBE4\nHA7odDrJ2TIaxVa/wsJCaR9sa2sTdWdnZwderxd9fX2oqKiA3W6HxWLB6uoqhoaGZJbHgxGrRwkd\n4ryNkTxuJDQCktBGiQ34Yp5IrOnq6ipqamqg1+uljYwqEuVDvV6Pmpoa5OXlYWNjA6Ojo8jNzRWS\nFaN/nNMzSpNMJuH1euF2u4WE1tPTg6GhIfzrv/4r2tra4HK5xDDIWz/HOLzh8aDU2tqKRCKBwcFB\nofjR8MUbCG8dlBBpfmJBEW/MRUVFqKyslMWShjWDwSAeFJ/Ph0ePHkmEk38efQujo6O4d++ejFJq\namoE9sFDw8OHD+W50XdBYmMwGITZbJbq4urqatTX10t6hRCb7OxsTExMYGdnB21tbejs7ERtbS3+\n5V/+BSMjIwJ/yc/Px+joqCh/JCOq1WrY7XY88cQTUKlU0vlAwI7VaoXdbofH40E6nZYGs93dXXz+\n+edYWlqSv5vNZpMI5NbWFiKRCCKRyGOm15qaGlRVVQlroLS0FG1tbXA6nRgdHUUoFEImk5GKYxr3\nmG9nORPXHh7qeVsmz508CkZjOeOn3yKdTstoy2KxyG2dptl4PC5zcXZYeDwe5Ofnw+VyyQiVSZvC\nwkL09fXB7XZLRTVBPIy7HW7NzM3NRXZ2NtxuNz7++GO5WfNQt7q6KmZavustLS3Q6XQisXPzLiws\nFONhIpEQFVipVKKkpETgVD6fT3rgqai43W5EIhFJXmxvbwtqXKfTYWRkBL/5zW/Q1NSEmpoaiWuS\nn5/JZMQgTrZCWVmZ+Ba4J/DAxrWI9bRMUxiNRlGn/qTlenKcD+dU7Xa7uOAnJibw6NGjx2ZTPT09\ncDgcGB0dFQoaF9bd3V2Mj4/j7t27UqVpt9uFMre5uSlRJ61Wi6qqKjHQ0PhDk85h2p3ZbIbf75c2\nNG40zCzzZ8jPz0d3dzfa2tpEmqGZand3F21tbSgvLxcjC81CdOwTWuH1eqFUKmG326VjnrNz+guq\nqqpw9OhRyVuzkYgYxdraWpEN2b5HVYC38/Lycuj1ehmPtLe3y4y0rq5OJHZmqfV6PaqqqlBcXCxS\nKTvUW1pakMlk4PF4pBimu7tbiipisRiSyaRIyZWVlQIaUqlUUirDiE1xcbFkw0l1UiqVsgiYzWYo\nFArMz88/loVlWQjxvFarVebdHo8HoVBIIETt7e2oqKgQtSgYDGJychIFBQVyu9nb28PGxoa8cJFI\nBOXl5Xj22WcxMzMDn8+H2tpamYfy4JRKpYSjQNOZVqvFwcEBIpGIjA74+/OFJW7XarWKeW5zcxPB\nYFDYCXxuBImQ4qXRaNDb24tUKoX+/n4AkH+v0+mg1+sxNTUlkaqdnR1UVFSgtbUV5eXlmJubk7nq\n/5dpoNPpJGNNpgVHATk5OYITZv6bhTrEwHKRJdqXTIP9/X2Ul5eLLMko7ngL4QAAIABJREFU4GHC\nV319PUpLSyWCNjMzg5qaGuF2k2vBzZHvwuLiosQ8i4uLRfo3GAxQqVRwOp2
CRia+trOzUwpTEomE\n0Mw41iEGm6MNKnYlJSUyvrL+Htm8tbX1GOmPCznNvnxHk8mkFB3RJEd3PouDaKA9DOmampqSNBCb\n4Q5XSzOBVFdXJ8/HbrejpqZGPBVOpxOzs7MYGBjA1NSUtOudOXMGvb29j+XWOcLhKNTr9Yr0r9fr\nkclk4PV6Rakj4Kqzs1PeH/qY6LGprKzE0aNHhcbn8Xjg9/tRWFgoCHB6dHw+H9bX16FSqQRIw3fX\n6/WiqqpKzL15eXni02EihT8Hi31WVlbEy8IDztGjR6XUhweChoYGzM3N4Z133sHa2pqMadbX17G0\ntASXyyVjOo4QOzo64HA4sLa2JtW59HvYbDbhSZAc6XQ6UVVVBZVKJf0sxcXFYhbMz8+XESr5E6Q9\n/jG/vrSb/I9//OMDxl9GRkYwOzuL1dVVWSB5ij537px8CBqNBslkEu+88w7Ky8tx6tQprK2tSZzn\nzp07+NnPfobTp0+juroa0WgUVVVVaGpqQn9/v9RI0hxGZjVvzrzFs+0tEolgaGhIUKSHbyyjo6OS\nq+Xs+fXXX0dRURH+53/+BwcHBzCZTJLRr6ioQDqdxsbGBl599VW0t7dL7C6RSIjBq7e3FyMjI7hx\n44aUYwQCATHr3LhxA9vb25J75dy5rKwMTU1N4r5mXGZgYABNTU04duwYlpeXEYlE4PP55MZEWbC2\nthYffvih5DedTqdAcw53N1dXV+PTTz/F1atXcerUKZSWlgrljxCKwsJC2O12OBwOWCwW5ObmYmVl\nRTZlRpSIixwbG8P4+LhErPiPRqPBCy+8IM5qt9uNQCCAqqoqZGVlIR6Po7W1FTabDSMjI9ISNTMz\ng3g8jgsXLojpiuQuYl87OjrQ2toqqkYgEMCDBw8EeKNQKGR0QEPQ4OAgDAYDLl68iFu3bmF0dFTM\nimq1Gn19fRgeHkYqlUJjYyNeeuklTE9PY3x8HLOzszAYDDh9+jTcbjcePXqEF198ESsrK/jggw+E\n9c7yi4qKCoyNjUk1rtlsRmNjo6Qidnd3xaR6+/ZthEIhHD16FKurqxgcHJSNiTfbjo4OKVJhdNNu\ntwum9NNPP8Xc3BxWVlbQ09MDi8WCaDQqs2MewEk1BCCNeTTHWSwWqFQqaXcrKyuDy+VCJBIBADQ2\nNkqMtaqqCtnZ2ZicnBRX+ODgoOS3gS+iqW+88QasVivu3LmD7e1taRhjTjkej8PtdqOiokLgObu7\nu9je3pYK68rKSty5cwcffPABenp6YDQakUgkRDngM7PZbGIupGOa3wmNRoMLFy6guLgYt27dEiAO\n/6ytrS1oNBqUl5cLEvjUqVPo6+vD5cuXhYnAuWpOTo6YDw83a9LYOTw8LGoI3//XXnsNDx8+xO3b\nt+V58IZKo6zZbMazzz4rwB9y430+H06ePInjx4/D//t6aavVips3b+LGjRuyFi0tLeHEiRPo6OjA\nwMCAqKFkhZw7dw67u7u4du2awINeffVVlJSUCD0UgMSiU6kU3njjDZw8eRI///nPMTExgb29PSlp\nqqysFIMojbWHTbQ0SjNCarPZRCnLzs6WsQEP80eOHIHf78dbb72FEydOoLGxUSTy0tJSRKNRxONx\nUZBZ3OX3+9HY2IhkMom+vj7xFHV1dWF1dRV9fX2ijBUVFaGxsRGnT5/G0NCQUBH1er00SbKxkw2V\nrOytqamB3+/H559/LoqUWq3GyZMn8eqrr+Ly5cuS8V9ZWcH9+/fR29sLtVqNy5cvS3sg98O/+Zu/\n+YN7+JcGw3E4HD/ibIPxAt5aCwoK4HA40NzcLIUQNJ1Fo1GJgs3NzYmJi8YFlUqFjo4OFBcX48GD\nB8jPzxfEKpvUsrKysLe3J25HwiSINk2lUigqKpLbHeto2bRFKYmAE5atdHd3ywyXDnMS7agClJWV\noaGhAVVVVdBqtdK5vrm5CYVCAa1WK9Is5eSdnR0YDAZUVlZKVpMGPMrhVDb4olIhYFabc3DOdBix\n48mfp2CVSgWLxQIAWFhYEJMKf0b2Ys/OzsoNg4ZBjUYDvV4vtagWiwV2u11uM6wJ5ZiG0iCfK82M\ndrtdzIK8OfDWkJWVJTPC2dlZIQJywaH0TbAIaXJ0tGYyGezu7j6Wk8/JyZETOr+DKpVK3MN89iyG\nYASMkTZmndfX14UvX15eDofDIeCTwsJCVFVVwWKxwP/7/vqamhrk5OTIfHFvb08OKNz4mHuvqKhA\nXV0dwuEwIpGIFP10dHRgYmICHo9HTJylpaVyq6Z5lLQvIkLpN6F8yQWW+Xa+L8lkEpOTk3K4YC1s\nKpUS6hvVt0wmI93mVG6osrFwKisrS9DA/PyJOmbxTH5+PgwGg3DODytrxKlyHklpE4BEupiGMJvN\nKCsrE2Ke1+tFeXm5KER0K5NQubW1JYea2tpaGI1Gef5UvwAIHc3v94tvgX4VRqvIIeDNnWVb7InI\nzc1FOBwWL8phVz/lab1eL7jckpISgaQsLy+js7NTKrkZM6YC5XQ6hYI3NTWFUCiEwsJCOJ1OVFdX\nw+v1IhgMIhQKSV0tY53Ly8uPRQYPDg7EBMm/+8bGhtw2tVottFotAMg6ZzabpfKXKh17Lhi3YzKD\nMVz6MQwGA5aXlyUiS0WA6zzNc1zTOJojQfIwA4OHsMOfOVUv/r46nQ4zMzPC94jFYggEAkL1I1ee\n8DAqaY2Njejp6cHm5ia2t7elSI1rVEFBgfz/WKpEtSaRSCCRSMgYJZ1OP0a/U6vVMs4ZGhoSoyVT\nCExUbW5uYmBg4E+XeOdwOH5kt9tx+vRpqfI8ffo0Ojo64HQ68fLLL+OZZ55BXV0dCgsLkUwmpaP6\n9ddfx6NHj/CTn/wEDQ0NUKvVeO+996DVavGDH/xAADsffvgh1Go1zp07h4aGBrS2tqK2thapVAr3\n7t1DY2Mjzp49i6effhptbW0wm80ilfX394tD+PBmwJnSU089hVOnTol7fWFhAb29vejp6cHZs2el\ntYxFGs899xwuXryIF198UQoYtFot7t69i7feegtHjhyBxWLBjRs3MDMzg/39fZw8eVJkfhro2tra\noNPpcP/+fWg0GvT09OCpp55CTU2NsLVnZ2dx9uxZOBwOaVVzu92Yn5+HVqvFt771LRw5cgRqtRqf\nfPIJxsfHodVqcerUKXz7299GdXU11tbWcOPGDTkAsAIyGo0+htW0WCz43ve+h9bWVlgsFrS3t8Pl\ncsFsNuPEiRPo7u4WhzB74vPy8uByuaBWq4WTTymsvb0df/VXfyUzdhbH8JbS1dUlkqjf75d5em9v\nL7q6uqDT6WRs8tlnnyEej6O7uxs2mw12u11GAYuLi+JmLS8vl8IUk8kk5jAAcLvdovzw0MXFhJt2\nPB7HRx99BLvdju7ubjFvEtNcXFyMkydPiuGM3epk9Z8+fRobGxty66cZ6ZVXXsGbb74pPfcEhCQS\nCRw/fly4EqOjowgGg9je3kZ1dT
UuXryISCQisBNuKMeOHZNWs5mZGVy7dk3wsa2trairq4PJZMLE\nxAQmJiZQUVEhnQSUN+nLCIfDaGtrQ29vLzo7OwEA4+PjsNvtsNvtIOiKFbq1tbXo6emRPDDNU7wp\nXb16FQqFAiaTCZWVlTh+/DhefPFF+dnOnTuH8vJy6VhIpVIYGRmByWTC888/D5/Ph8nJSfh/3xpG\n5j3rQOPx+GPcB71ej2PHjuH06dMIBoNSfEIvhsPhEJY6DxYsoDp58qQcyNbX11FYWIjW1la5bNAQ\nOTQ0hLa2NvzlX/4l0um0ZMhramrQ0tKCmzdvwuPxCGdidXVVWArkvTO9Q2+IyWRCT08Puru7sbm5\nibfeegtFRUV44okn8NWvfhVHjx6Vw6fRaMR7772HeDyOixcvSixufHwco6Oj6OvrQ2trK/7sz/4M\nAGQzZ579/PnzqKurQygUkmQAW+K6urpQV1cHnU4nilNxcTGeeOIJPPPMM1L6otVqRc341re+hWee\neUZGSADg8XhQUlKCV199FadOnUJNTQ36+/tFrXzxxRfx/PPPS/x5ZGQEJ0+exJtvvolz586ho6MD\nBoNBLj63b99GaWkp/u7v/g4bGxsIBALo7u6WA/XJkydx/vx5WK1WqXPu6+sTzxabCTnmCYfDsFgs\n+M53voO6ujpYrVY4nU5pj6urq0Nvb6/4gz7++GOcOXMGr776KlpbW9HY2CjcjKWlJdy/fx/xeByV\nlZV48skn0dvbK+rNnTt3UFdXh66uLrmgTU9Pw+fzIZVK4Wtf+xrUajUGBgYQjUYRCAQQjUb/dIl3\nP/zhDw8MBoOcWPgPZypEhHI+l5WVBbfbjaysLLzwwgsIBoP49NNPoVarsb+/j/n5ebS2tuL5558X\noprH4xHQjt1uh0ajeQzyAEBMVUS6sjDjypUrQqHjxk6DCKXcg4MDeL1e4d47nU5Yf9+IxjnQJ598\ngnA4jPr6eiiVSpnf0AzFG5JWq8WjR48wPDwsGyiLFkg5ymQysNlsyM3Nlcw6ZzckP9FglEgk5EtC\nFy9bvji/BCDtaixE4Kw4EolgZGQEDQ0N6Orqkpusz+fDwsICVlZWJJ+cl5cnrm46+Dc2NtDe3v5Y\nPert27flRtvb24u9vT2R6QjAsFgsOHnyJObn5zE7OystYxyTsGSksLAQq6uruHbtGsbGxtDS0iIO\ncM6cmY/t6emRitHDCya/B3zGWVlZqK+vR21tLdRqNUKhEO7cuYOGhgZUV1dLE1ZJSYkcTKqqqgB8\ncQNqbm6GTqdDX18fVlZWxEFLCZOKB522k5OTAlbi2IdG0nA4jKamJjidTlFwyNdPJpNQqVTiSGYU\nlTKxy+XC4OAgQqGQ+BKi0ShqamqkOZFQKX7/q6urJdXh9/vFI5NOpxEMBoXG+PTTTyMrKwuBQEC8\nJzRIhkIh2O12VFRUSPWyy+VCVlYWNjY24Pf7kZ+fj/LycokNbW9vIxgMwuPxiDs9OztbDFfEUNNw\nSxcyFZknnngCFy5cwPT0NLxeL8LhsJjySA3MycmB2+3G4OAgCgoKZI5N9YGtYCwu4TyecUUSKvPy\n8lBWVibmuoGBAYHpfPe73xUqHWO4S0tLOHXqFJ577jkZu126dEngXGtrawAgBVGpVErMYqyNPTg4\nENocq0339/fx5JNP4uDgAJcvXxZML8E9/Htvb2+jr68PmUwGTU1N8rNQrmYstLq6GvF4XNCuXV1d\naGtrw//D3LsFtX3n5/8PEgIJEEiABBJIIM4IMOejCY4T23GcrDfJJuluOtt2d7uznd51Or3oXv1u\netPpVZtpu+0223Yym0l2ktZJ7OBTfMIYzPkoTuIkIQkQOoBASCD4X2Sfd+2b/+5dlpm92MSODfp+\nP5/34Xlez8rKikyVuJYgDpj2LyKMWVyenJwgEAhgaWkJp6enwsEg7W9vb0/86IQwMUODNEzCn2hZ\nzM/PR1VVFdbW1vDpp59KjkdXV5dw8Pv6+jA0NCTe9suXL2NiYgKhUAg/+tGPkJGRIe4E2lV3dnaw\nuLiI/v5+ef4osmNgkcfjEdsaCZGMtc7OzkYgEJAGhWs+rhsmJyeh1WpRW1uL+fl5TE9PY2RkRNar\nJEZSbzM9PS059kQ3Ly0tISsrCyUlJXj11Vexs7ODBw8eyJnV29v7hyu80+l0iMViksTj8/mwvr4u\nAgZ+4NyDMK86NTVVQlfq6uowMjIiisfDw0OMjIwgGo1Co9HAbrdjeXkZfX192NjYEBRgbW0tGhsb\nJSFuc3NTvNiM1CQSlRcGFa+kUk1OTooi32azoaioCDs7OwiFQjK+YrBBIBAQ0Mn+/r4AP6LRKCor\nK9HU1ASn0ymJUxzPbGxsSJIRx1UKhQJ5eXmyCvB6vfB4PMjJyUF7eztKSkqg0+nwL//yL3j69KkU\nArSmJCcno7+/Xx7eP/3TP4VKpcLY2BhWVlYwODgoCFOO0jjmZ4eSmpoqL148HsetW7cQCARkbJWR\nkSFjba5I2EHxsCIsiJMOdjNcmSQnJ8NisUg0p06nkz0XVw3FxcWYn5/H+Pg4VldXhYjFcaJarUZK\nSgrGx8elE6N1rqGhQYQxdGdEIhFcvnwZOTk5ItShf56IYAJiXC6XjPWsVisqKipEzMnCidkC8Xgc\nKysrIp56Nr+eSV5paWkoLCwUypbb7cbk5CRcLhc6Ojqg1WplXRCNRuFwOGREazabkZubK97okZER\noQXq9XqZRHk8HulWmKY4OzsLt9uN2dlZBINBeDweAeQEg0EZoxK1S1AQgTAUg3JPT8EYASv0Ux8e\nHuLp06fS4TNIZH5+HoeHh0KapGeZ1jeLxYL09HQ4HA5Eo1EJm+GImZM0+pupUyF22Gw2o7a2VgpF\njUYjpMGlpSUMDQ3hypUrqKmpkexvZrvT9aHX6wV3TTIgo4qzsrJkd074C1dloVBINAMUUi4uLkpR\n39HRAb1eLxkRLEDD4bCcM+zSCT/y+/0CWjKbzWhsbMTGxgb6+/uxvb0tJMadnR0Eg0EZMXNcTCsl\n98fhcBjj4+MieqT2yGw247PPPsPGxgbq6+ulWKJbgGwRgrWKiorQ0NCA6elpOBwOfP3118jKysLb\nb7+NlpYWpKSk4Msvv8TIyAjW1tag1WphMplQX18voLGTkxPodDqcPXsWycnJGB0dxfT0NLRarVy6\n6enpWFxchNvtRmZmprio5ubmsLGxIZcxi1xGmfPL6/UKGZIiwcLCQinEOYkpLy+Xwnt1dRXDw8Pi\nqGKQzLNNTyKRQG1tLc6dOycC0cnJSej1euTm5opNlRMJEj05SXtW4MpskkgkAoPBAKvVisLCQplo\nkklC7crv+vrWOvm//uu/Pi0qKkJdXZ2AIhigMTs7i2g0iqysLLzwwgtwuVy4ffs21tfXkZeXh7/5\nm78R4tHdu3dxcHCA73//+0hKSsLy8jLm5uZwfHyMjo4OQaouLy9L2pVOpxPePWEQjFnk/nh
lZUVU\n2AQ7uFwu5Ofno7S0VKYEVVVVEhnJEbTf78ejR4/kQa+ursZrr70mB9KNGzcQiUSElsUdLxXa7PKZ\nIre+vo7y8nLZ7dCzTRUog01GR0fFA8qoykAgIH714uJiGAwGaDQa3L9/Hw8ePBB7FCcqvAi44+Jh\nVl9fj1AohM8//1xyyYuKinBwcIDR0VG53B8/fgy1Wo3Lly+LqpbjTofDIZQtJqR9+eWXKCsrQ0VF\nhSBVuWbguJ67NwqV6LumTQyAHMAajQYzMzPY2trCpUuXEIvFcP/+fYmsvHfvHgwGA959912JhKS+\no7+/HxcuXBD2QUZGBsrKypCRkSGoV9qphoaGsLKyIvbPRCIhkxcSDcfHx2G1WiVfnF3z7Owsdnd3\nce7cObmY7t69i5GREQk6OnPmDLKyshCNRvH48WOUlJTgnXfekcJuYWEBiUQCOp1Odplk4DO8SK1W\ni+VJoVDA6XRid3dXokmzs7PlQLp//74kGZpMJuj1eiFMktsNQDjqLpdLcKd0vzidTnR2dop9k/yI\njo4OpKen4/79+zAYDIJfpsiVvvTr169LBCtxsWVlZSguLhYgVTQaFY43EahJSUli3yQDnvvLg4MD\nLC8vo7m5Ga+88ooolPV6vawNTCaTdHj37t3DtWvXJFaV3aTFYoHJZJI0xvv37+PevXu4fPkyqqqq\nZHqg0WgwPj4uRMuKigq0t7cjJycHfr8fd+/eFWRpTU0N4vE4+vr6sLKyglAohIqKCqSnp8Pv96O2\nthYtLS1iZ9NqtTLNmZyclOaE3v7GxkZxv3Dic/78ecHULi4uwuVyCdRpdHRUhLdU3ZPcl0gk8MUX\nXyAej0u0MRP+FhcXcePGDZnK5Ofnw2aziVg3Eongww8/hEqlwve+9z3JGqDjZ3BwUESb1PBQB8Jm\niAEsLpcL0WgUr732GmKxmFhu9/f3UVRUhN3dXUxOTgqX48UXXxTs+eLiovxcy8vL8eqrr8rEq6Sk\nRPLkKRL0+/0ygeGaiQ0Hg5/S09NRWVkpVkte8rRC0p6p1+vx+PFjKJVKlJeXw+PxSOENfKNf4Pnu\n8XhEoFdaWirvFONlycFYX1/HxMQEnj59KtOPr7766g+3k8/PzxdrGK0CtLIdHh6KApTRmkw4ysrK\nkpAUCuc4QqOAiyIiCvFSU1MlPYtJdiSREUHKCo5MddrJKF5iSAGZzMzZpjBmZ2cHjY2NosrlQdPQ\n0ICmpiYZW7Hr4ehRpVIJIe3ZiQG540dHR/B4PBLIs7W1JZMGUs9oC+FejLYydkXFxcVSTBH8oNVq\nUVlZKbGehLjs7u6KVY3iFfpp9/b2YDAYoFQqsbW19ZyITq/Xw2q1CtaVY2RiPLOystDe3g7gmwec\nFbXJZIJKpRKxIwlRtDwRicuump8lpz85OTmyMqBok6FCOp1OHA18logvJumO/m3GftJu5ff7AUAu\nIXbfpJVxJEoVezgcRiwWg06nE5FgLBaTcWt+fj4AiO89GAxKR8k/i0Kf1NRUVFVViU3H7XYDAJxO\np0y3+OxrNBqEw2GoVCrU1dVhd3cXGxsbyMvLg0ajgdfrlYAMdgPRaFTyuE0mE0wmk8BOKLoiz5vR\nsM+GSPHzpciQ42xakqgh4dqMKzKOYp1Opzxbfr8fGo1Gnmm+N3wXgG9U9swHJ0vg2b8P41rpYuEK\ni2hkAPIZUWDGqSDtZXz/KH5loU3FfG5urrAGGO5ErsDx8TEmJiakGzs8PITBYJDJGZPUTk5OBEFr\ns9kEPRwMBqHT6cSeyAuL6zfia202mzy3KysrEu6j0+mQm5srO3WeMVwNcqTOpE1aGnl2ra6uQqlU\nwmq1isB5dXVVpgnPgqjKysrkMuN4mq4Gt9uNjIwMyTKgM4f2N5PJJEJl2jQpeGZqoVarRTAYlHfP\nZrPh+PhYLJ7MH0hNTYXb7cbKygo8Hg/sdjsqKipE2El3hVqtxieffAIAeOWVV2SaylAgjvv551HA\nTE0NEbi0/u3t7QH4BqpGZ41Go8Hi4iIODw8lDItiSJVKhWAwKOJOFkq0YCclJck5xuhuWjApVmQj\ns7S0JDHQJpNJdA2/6+tbu+SbmpqwvLyMmzdvij/aaDQiJSVFABrr6+vo7+/H/Pw85ubm0NbWhvT0\ndHz22WeS1UxgAfd6fX19MvYMBoNCcvN4PNjf35fUKqrbk5KSMDY2JuOR5uZmWK1WbG5uYnp6Gnfu\n3JFxLOl1e3t7MnZhPOLq6iqqq6slR91gMOD8+fO4cOECjEYjFhYW0Nvbi97eXlGe/sd//IcEWjAV\niwUDU5Q0Gg22t7cxNTWFzz77DElJSairq5OHf3V1FQsLC4IH7ezsREtLi8Q5AoDRaERtbS3W19cx\nMjKC69evo6GhAd3d3fjoo48wNTUFg8GAqakpDA8Pw263y0i6p6cHbW1t+M1vfoOdnR3U19djZWUF\nt27dEpAKq82CggLU1NTI3m1xcREejwd1dXXo6OjAhQsXMD4+jidPniAcDstub3V1FWNjY2htbUVG\nRoYolClM29/fl4OONDjusYgQttlsAqmYmJiAz+cTu47H45Fijp7YoaEhrK2tIRQKobOzE3q9Hs3N\nzTg5ORHON1+wM2fOoKKiQgJ2RkZG4HQ64fP5JEqYKmrmwCcnJ4tjgeK+aDQqISvhcBgLCwsCOsrI\nyEB3d7e4FtRqNR48eIDHjx/LIbG+vi6To5GREWRkZCAWi8lun/u90dFRsWj+6le/Qk5ODt544w0Z\n437++efY2dlBeno6fvzjH8NqtcqBnpWVBQAy1menwoM5GAyivr4eV65cwdzcHB4+fCiJgQcHB+Ks\noP6DeN6lpSWsr68jEolgYmIC58+fh1qtxs2bN4Xhz0Co1NRUyTmgVoN2rNXfxomGQiG52Px+vxTe\n5FgwWjgpKQkdHR1YXV3F3/3d3+HSpUsy0bFYLGhvbxeK2/HxMQwGAy5duoTbt29LR0el9MTEBJxO\nJ7xer6juHQ4HZmZmMDk5KVS/zs5O1NTUoKamBgMDA+jt7UVGRgZMJhNqampQXl6OkpIS3Lt3DyMj\nI3C5XHJW9PX1SeepUqkwMzODhYUFJCcno6mpCRsbG5ifnxf9SktLC46Pj+Hz+XD9+nWJYmUS4+3b\ntxGNRjE7O4sXXngBLS0tuHbtGk5PT3H58mX09/fj/v37yMzMRF1dHa5cuQKn04mpqSlxCKyvr0Or\n1aKoqAh5eXkyzUokElLA8R1YW1vD3t4eenp6kJubC4/Hg3v37mFychLNzc04OjrC6OionMWNjY0I\nBAL45JNPUFNTg9raWhHRbW1tobGxEdnZ2fj3f/93OBwOJCcn491330VlZSVGRkYEZlNRUYHS0lL0\n9vbC7XZDqVTihz/8IZqamvDFF18gGAwKMS4SiSAYDKKwsBB2ux0zMzNwu93SaVssFjx+/Bhra2to\nbW2VCQz1IOnp6aitrcX58+eFrTAyMgLgG/7LvX
v3MDc3J8Q7fi9s1NjQPrua5lowFAohFArh1q1b\ncDgc8Hg8suLzer2ChK6srERLSwvef//933nXfmvj+l/+8pen7Kh46W1sbCAtLQ1msxmxWAxbW1sY\nHR3F6ekpDAaDqJtZdZNwtLu7i5aWFkFxbm9v4/T0VNTUCoUCXq9XyGglJSWora2VLHACOfhQ5efn\nIxwOC36Se3Kr1Sp73La2NmRnZ2N1dRWJRAIajUYwj7Q3xGIxyTP/zW9+A5VKhcLCQkSjUSlAKNp4\n5ZVXUFBQIPs52kgIzmA1OTMzI7sudthjY2MIh8NSWRIC4fV6cevWLZl2AJBghMLCQuh0Ovz617+G\n0+mUvWJeXp74UAOBgOgNVlZWcHp6KjkDtLAplUrY7XZJ96KVyOv1ys+AFsDm5mYcHh4iHA7Lnpua\ng9TUVLG05Obm4vHjxxgcHJRwC1bdWVlZMgVxu92Ix+NISkqSlQnbKrCcAAAgAElEQVTwTa703t4e\nLly4ICI5junoaddqtZJtcO7cOdhsNqFykeXOFDU+e6QMEkvs8/kwNTUFtVqNuro6CTSZm5sTCyG7\nIOZaDw8Pi2iGGgSj0YhQKISjoyPpdtgVMBmOY2ZCWKi4T0lJkYOBwq3KykqxifX19SEQCCA5ORkv\nv/wybDYblpaW4Pf7xTmRnp6OjIwM7OzsYGlpSeKHKUQi8pPuh3g8jtXVVdTW1kKn02F5eVmmJQxy\naW9vR1pamkwHmG1Pm+Py8rIgopk3QYrZ7u6upN7R1lhQUCD2qN/85jfY2trCq6++KkUgpxuEJ1Fg\nZzAYhA3PIovRslVVVThz5gymp6eFHldbW4v6+np88skn8Pv9uHjxInw+H2ZmZgRYcnp6irm5OczP\nz+Ps2bNIS0uT3IiMjAycPXsWFRUVyMjIkGnW+Pi4AIFoQzQajQiHwxgYGBDk8fLyMrRaLV588UWo\nVCrRLSQlJaG2tla6UNIZS0pK5EyihTAYDMq7R1If7azZ2dm4e/cufD4fdDqdTAuWlpYQDoeRnJws\nz7per4dKpRK4GFMGSWekh93v98v0lGhdTjRWVlYkCpkq89nZWdF1aLVaLC8v49atW0hPT5d1KDvg\npqYm5Obm4t/+7d8wNzcnluiKigrk5uYiEAhgamrquQAZrVaL/Px8VFdXIyMjA19++SWSkpLQ2toq\nYkZqYmivJJ+CFlpCZxYXFyVJ0+fzic2XbhFaVLkGzszMxIcffoiRkRE0NTXJ2pU2XE4nGhoaJGTp\n2YQ+vteRSARTU1N4+PAhzp07B4vFIvqVZ+10f//3f/+H65Nvamr6f2lpaRIpG4/H4XA4cHx8DKru\nU1NTcfPmTeTm5uIHP/iBHNAWi0WEcSSMhcPh5/J8n42VpOCOozWLxYLq6mr09/cLBIHiJR4WVNJT\nhU36G8lbNpsNer0eoVAIVqsVbW1tUijwIcrJyZFx3VdffYWKigq899578pJz9Hp6eoo33nhDKl2K\nO6jSvnLlilCUOLbOy8uTNYff74dWq0VPTw8CgQCGh4dRU1MDvV6PhYUFOUxPTk5gMplw9uxZZGVl\nIRQKCa7S5/Ohvb0db775poiAeOkFAgHBPVKRW1tbK2uT1tZW+TyMRqN4zC0Wi+ztSbvKzMxEcXEx\nrFYrTk5OxAZVU1MjPzfiaJ/lM+fm5sJut8sYm5hH+l+fDbyhCpbs7bKyMni9XrjdbtTW1qKgoAAA\nhC5FX6/ZbBawEbtL/h24xmCHnp2dLeE4eXl5uHTpEkwmE9RqtYRqVFRUiKKfBMHl5WWUl5dLnkI8\nHkdZWZkImJqamqBUKjE5OYmioiJUVVVBo9FAr9fLqkSn0+GVV15BZWUl1Go17HY79Ho9/ud//gdK\npRJvv/029vf3xY8eCoUwPj6Ouro6NDQ0oKioCEajERqNRna77733HvR6PSYnJ4VXTxuWxWKRVLor\nV67A7/fjww8/RHt7O5qbm2VnWVRUJFMlTg1WVlZEl8GUyTNnzuDRo0cYGBiQ98rn86G2thYVFRWi\nZjaZTMI3YPxvcXExZmdncXx8jEuXLqG6uho2m01+7eLiouzqi4uLxUFgtVpRV1eHiYkJuFwuFBUV\noaioCHq9HisrK5iamsL9+/eRnZ2N1tZWsesykay/vx82m00IhxsbG5icnBR+P0exZrMZVqsVer1e\n1hBms1kSAJOTk0XpX15eLroIUj85TWAEL5ke9Henp6dDp9PJ+2g2m6XhoQuElxRXJMXFxbhy5YpE\nKpNeuL6+jq6uLrz11luyFhwYGIDJZEJHRweamprQ3NyMhoYG5OTk4OjoCNPT0+Jw4qh7YWFBxt92\nux0NDQ3CqeA0pKWlRcSJZCbk5+fLZIuXeiAQQFVVFTIzM2WNkZycjOXlZRwfH0uRn0gkcOnSJRQU\nFEjaKHM3amtr0dDQAOCbSGVmb/DM5EpqZ2cHExMT0mXTbbG9vY3y8nKYTCbcv39fXBy8xFNSUkS8\nSHxwfX292FWnpqawubkpDBhmzXMVaTAYUF5eLmCkM2fOICUlBYuLi8IAqKiogEqlws7ODjo7O1FV\nVSWAMLPZDKfTyWf5D9dC96//+q+nubm5KCgoQG9vr4gnbDYb6urqsLa2hvn5eTx9+hSVlZX4zne+\ng83NTeH+skIuLy+HQqHA4OCgpHxx956dnS2ZzOXl5UhJSZHRdHV1NZ48eYLV1VWkpqbixRdfxOuv\nv47R0VHMzMyIIIPiI+4JmZGcSCQEFKLValFYWCiXYzweR2dnJy5cuCBwDRL80tPTMTk5KZ7xqqoq\ntLa2oru7G9FoFB988IH4zf/7v/8b6+vraGxsFKwhldMkeeXk5GBqakrgHqze/+Iv/gJ5eXno7e2V\nbpIPKgUfa2trktiWlJSEc+fOoaurC9PT09jb20NOTg7cbjd8Ph/q6+vlpSZTeX5+XhTYFN5Fo1Hk\n5OTgzJkzWFtbw+Li4nPsaB7ebW1tODk5wfDwMBoaGlBTUyPja1bL3JNnZmbCbrcjPT1dbEEUUnKH\nfeHCBSgUChGv8fdXVVXh1VdfxbVr13Dr1i3k5OTIQbyxsYFAIACLxSL44/n5eVH/kp72rOUxJSVF\nlK0qlUr21icnJ8+pril2TE5Olo6AqXLM8KYfXqPR4JVXXkFpaSmePHkiQh6NRiN7f5L4mMdOtXhS\nUhLsdjsyMzMxOjqKeDwuuoCNjQ0sLi6KavnMmTMwm82i0g0EAlhfX0dubi5+/vOfw+1245e//KXA\nfF599VXk5uYiFouht7cXS0tLsFqtEsb04osvikWOhxgv0eTk/9sEJhIJZGZmora2VvbatKs9ffoU\n0WhU7IU5OTmwWCwijuUeljGjZrNZUugotExLSxNy35MnT6SjpPo4Go2KO4XWMk71VCoVenp6EI/H\n8atf/Qrnz5/Hn/3Zn+Hzzz+X7ru0tBTV1dXSeQWDQYHJ8D3kPp1rgoKCArz22mtYWlrCo0ePMDMz\ng1AoJ
JeWSqVCfn6+0CSNRqOwGA4PD+FwOORdJquD4BWFQgG73S7j7aGhIbmM6GIBvpmOMUglFouh\nqakJjY2NgkoeHx+Xgnt2dhZDQ0N48OCBEAyrq6vR2tqKnp4eXL9+HdeuXZMY2T/5kz/B8PCwkAjN\nZrP4uyk2S09Px8HBAfr7+zE7OyvvAwW+0WgUTU1NyMnJQSwWw927dzE3N4fXX39dPksG/1ADQdT4\ns/a1Z8l4u7u7SElJkX21Wq1GaWmpiHtDoRBcLheePn0q5155eTl0Op3YH9locV11enoqq4Xs7Gw8\nfPhQ9uLUb9ENplAoMDU1hY2NDWRnZwsym882izIihGmXpFqfLgfi0IH/m77SDZCbmytugoGBgT9c\n4R2jDLOzsyUakiAMjl1OT0+fQ6Myd5ycYr1ej+LiYmRmZooYJCkpSUROAIS/TYEXRUq07VCNGolE\n4HK5ZEzpdDqli6RWgP+fwQ3Mc6czgA8kHzR69Q8ODmC323F6eiq0LuYuNzc3o7m5WeAHi4uLElrA\ni5VjP6/XK6xyr9crBCc+KM8KmA4PD58TNXFPxweKmFhW+hzx+/1+EZNsbW0hKSkJeXl5iEQigpzk\nzplEN3a5ZOnTAra5uSn+WEJt6FGnvY+7YB7C9KlSNERiFe13/D3Hx8dISUl5Lseeo1vu1wi0cLlc\nMBgMcpjr9XoUFRUhOztbWPfPho1wSsAXODMzU4AyxPFSTc0JxNraGvx+P2KxmHRMZEyzEz88PATw\nTbANOfDsHij0GRgYQDAYRF5enojcyErweDxiA+PImYVxQUGB5FYzcpWdM+mMRB4zfIYXGEVSz2bB\nFxYWCvNge3v7uRz2jIwMtLa2yh6bufTHx8eSVhgKhSTKkxxwo9EolxXDbWgtU6lUopngGJW54enp\n6aL858qCJD3+WhYWFFc+GyHMy/309BQVFRWywqHanJx5IrSZWMiRM4OWWNwz9CYvL08U4QaDQQSk\njJ1OSUmRd9lisUCpVCIYDMJqtaK8vFwu33g8LkE47Jhpqd3b20NtbS2ysrIkRCmRSEiR9OjRI7H7\ncSSs0WgkAIjTT+ZYHB8fSyoc3zMWlEVFRWhubsbe3p4UbicnJ9ja2kIikYDRaBSKIaNifT4fysrK\nhHTIdQhz4Nl1U5sSiUQwPz8vdtS2tjYhapaXl4vgkoUtz/CysjJZLbFIpqVSoVBga2sLR0dHyM/P\nFwvvwsKCrLA4hfV4PFhfXxe9AQvQwsJCrK+vi7iXGhSq4WnR5FSYlzL98vy+SSHlpJITMWYsGAwG\nEYzybN/e3pbvkSJW/h0Iz+Lfk5ZovV6PlJQUyan4//v61i75GzduiH+1paUFV65ckeqLu0LC/3U6\nndgKtFotzp8/L10jK5yzZ89ifX0dMzMzSEtLE2VvaWmpcLtDoRAqKyslPKS9vR1KpRJOpxNjY2N4\n//338dZbb6GtrQ1paWlYXFwUljY9zQzwePXVV1FaWgq/3y+Z10wj4tje5XLB4XBAoVDg4sWLongn\na3ptbU0QhiQbUbXK75PwCx7+3/ve95CSkoKPP/5YfLJqtRo5OTkiWiJbf2NjQwA88Xgcc3NzogAm\nirWtrU0odQQStbe3Y21tDV9++SWamprQ3d2NmzdvIhqNoqenR5KmmIWenJyMra0t2S95vV4BQXR1\ndYl/+NnL9ZNPPkEgEMCZM2dEIc1giqqqKhE00d8biUTQ2dkpHAAG2TDqlla7S5cu4fPPP8fGxgbO\nnTuHeDyOTz75BG+99RZ+8pOfSFGWkpIihVpGRgZmZmbw6aefwmQyobq6WgJxiNnV6/XyM2XHzj1j\nZmamxHvu7e2htLRUDh7u9N977z0cHBxgZGREBJGNjY3yWfT29uL69etYWFiQ5z0SiUClUqGtrQ0+\nnw937tyR/Z3RaBTRGTuE2dlZ5ObmoqurS37d22+/LTG0pBYSVjI5OYnz58+jpqZGcq2Lf5ukxjCi\njY0NzM3NyWiUnTcJfCyKqernqJV22Pn5eajVamETGI1GKJVK/PKXv8TGxgbefPNN5OXlAYBEQPf3\n9yMcDktEKvejAOTiMZvNOHPmDA4PD+FyuYQ/X15eLgKwtLQ0GY0WFBTAZDLJLvf09BR9fX3o7e3F\nxsYG1Go1/uqv/gopKSki5GtqakJPT49w73/0ox/h4sWLmJ2dFSGhVquVnz9/HpxcabVaVFVVobOz\nE0NDQ3j8+DHu3LkDi8WC8+fPi0f8888/h9/vl5VNRUUF6uvr8ejRI4yMjODq1atobGwUABS90sPD\nw3j//fehUChgsVjQ0dGBoqIiqFQqobgVFBSgpKQEubm5ePDgAR4+fIi//du/xZkzZ0RDcnBwIKmM\nFPKpVCpUV1cjkUhgZmYGra2tuHr1KjIyMrC2toY7d+5gd3cXeXl56OjogNVqFTQv1fybm5twOp2w\n2WxoaWlBJBLBwMAAlpaWpLkwGAxS7LS3t4vdLB6Po6qqCrW1tbJbp+vG5XIhFArh0qVLODk5wejo\nqNiFf/rTn6K1tRWpqan4h3/4B9y4cUMK5Lm5OeH5d3Z2IhaLYWxsDMA3l3hZWZkELsViMfh8Pjx6\n9Ajp6elCzSRYTavVQqlUCrOD5/rS0hKamppkBUebLh0fLPoo+t7d3YXD4YDBYEBbWxtcLpewV3gm\nFRcXy7qLK2TaFT/66KPfedd+a5c8rSF3796VfTkrcXbX7BCpJPT7/QKTIEWL3cMbb7yBcDiMwcFB\n2O12yQpfW1vDxMQETCaTeDJjsZjQpjIzM1FSUiL+YiZFWa1WGbWwg6I4LC8vD1NTU3A4HOKrPzg4\nkFhY7rk3NzfF1z44OIjBwUGx8ul0OhQUFGBvbw///M//LGSv9vZ25ObmYnZ2Vsh1ExMTQv0aGhpC\namoqampq4Pf78dVXX8nFOTY2JtVucnKyRDGyS+jv78fMzIx06t3d3QgGgxgeHpYOipYo/n18Pp8o\nRdlNUCSWm5sr1rJ4PI54PI7x8XGplIFvDuVnpzbcCZpMJsTjcaytrUm3yykA7YyRSES+N6r4vV6v\nxNHysFMoFKLqraqqEpId7SbhcFjCM/b29lBYWIiuri6xyhECQ0vn7u4uSkpKZFzOw3B1dVVEZtzn\nDg8Pi4WTPPNYLIbCwkIUFxcjGAzKc0o0KIV76enpQh9zOBzCCSgoKEBFRQXGxsbgdrsFxvLee+/B\n7XZjfX1d+NwKhQKPHz8W0tvW1hb+67/+C+FwWMSsu7u7GBsbw8zMjOQvUCFNixyBQBxr5ubmYnd3\nV0arT58+FRcLfdsOhwOHh4fo6ekRaiJFWXy/ysrKsLq6KjoZjkvZvXAUzckTd6i00dKbDECyB3Z2\nduR5Yooa0csVFRUIBAJQq9UCYZqYmJD8d6VSKd/P2NgY5ufnhdnPHAnqV8hySE5ORltbG7xeL/x+\nPyKRCDweD9bW1lBRUYF4PC4WKnLYGdnr8Xhk/ZKcnAy73Q6z2Y
z9/X1cv34d29vbsku22+3Izs6G\n3+/H1NSUKLHHxsawvb0trh9CXigKLi8vF1Ero0z5z4eGhjA8PCyT0fr6emg0Gnn2GfrEwpSW12c1\nI7FY7Lmc9uPjY1RXV0ucMqeE1FMxvY4XnM/nw8DAABKJBILBIPLz8+VZ4ASW5wcA0f643W5xVpHX\nUFpaKqmiTCqlLfTo6AherxePHz+G0+nE3NycWJR3d3exvr6OcDgMhUKBhYUF0dYwvpqYYl7EtByS\nRMozmVoBm80mU6StrS1ZM9LSSA6E0WiUnx27c2qHwuGwnI+BQEDgbB6PR7IFqOD/+uuvEQqFkJSU\nJGCq3+frW7vks7OzEYlEMD4+Ln5ugi5YSXEsG4vFUFBQIIpqWiF2d3dlr809PlnQeXl5SE1Nxfz8\nPK5fv47Lly+js7MTOTk5z8Wfms1mlJeXo6KiAj6fTxCdZWVlUq35/X4oFAoUFRXBZDLBarVKrjYj\nRJ9VZTNxiN9PUlIShoeHsb6+LqpksrUdDgdu374NhUKByspKvPLKKzg4OMDU1BQ0Gg3cbjdu3ryJ\n2tpadHZ2ore3FwqFAj/4wQ+wsbGBhw8foqWlRSYPtLqcP38eVqtVoEKMAp2bm0MwGERVVRUqKirw\n6aefyovOqnT1t9G6/Iymp6fR1NQEk8kkrgCfz4f8/Hzk5+cjLy9PgnQY6cpQmEQiIaNEYlUVCgVK\nSkoEC7yzsyMpVz6fD06nE0dHR6KSLSsrQ1pamjgObDYbcnJycHBwIJcEQS/Z2dkyJuVkhEl5y8vL\nCAQCqK+vFz7/2tqawJPy8vLgdruFk69UKmVVQIU5IUjl5eXIy8vDzZs3kZWVheLiYrmoSGWrqamR\nw4z0tJ2dHUkcc7lc8hw6nU5Eo1EUFxeLHZBFFkODzp8/j97eXiwuLkoBrNPpMDg4CIfDgZdeegnb\n29u4ffs26urqhAi4sbGB2dlZERjl5uaiuLhYul6lUikHGOE9arVaeO4USNKnXlxcjPz8fCwuLiIQ\nCKCiokJSz+iuoCe9sLAQq6urCAQCcuA/efLkuUIvFArJZ8/fx0KPOg56o0li5HooFAphenpa8ims\nVit0Op0IDtfW1jA6OoqMjAw0NzeLPY9wLV5aOTk5CAQC2NjYwMLCgjD6h4eHxRkyOjoKn88HtVot\nzhtS0agXIlODsdFut1uobTqdDpWVlUKzvHfvHrxeL/Ly8lBZWSkY4bm5OXz11VcwGAwoLS2Fw+HA\n4uIiKisrsbW1JZfC4eGhJLOVlJTgwYMHcDgc8Pv9aGpqQk1NDR48eACfz4fMzEy0tbWJwpyBUE6n\nU3RF8XgcX3/9NVpbW5GZmYnl5WWcnJygsLAQPp8PkUgEXq8X+fn5uHTpEhYWFrC0tCSFAKN+uXcP\nh8MIBAKiceJKtqSkRKBb/DXUarCA5rpzcXERS0tLyM7ORltbG6qqqmT8/tVXX2FnZ0dgOyqVCktL\nS5iYmMDjx48FPnVwcIBwOCwFXVpaGpaWlmA0GmWyFIvF4Ha74fF4kJaWJlqF4uJirK+v49GjRxJ+\nlJWVhYaGBknQ5BnGUKHc3FxkZmZKIcsgMMLFSLojwyMcDgt6+bc8ekxNTUlRUFhYCIvFgvv374sO\nKTk5GZmZmb/XXfutCe/eeeedU3pjn6VUkerz+uuvo6SkBDMzM9JlMsmNNCwCCmg3ysnJgdVqFZiB\n0+nE1tYWdnd30dbWJvz4ubk5PH36FDabDdXV1Whra0MgEMDs7KyI546Pj8WOwkQk7mDr6uowMDAg\nO/KUlBQA36wZtFotGhoapGolppL5y9FoFN3d3RKfqlAooNVqhXzHYBYmrCmVSvn3tFFQMMJkKNqz\nPvroIxn3vfzyyzAajVhfX8fa2pr8j9AUhUIhLxEfrKamJrS2tj6X787dG/dWNTU1WFxcxOTkJBKJ\nBCoqKkRpy/jXYDCISCQiKUt8iXU6nTDk2Y1SP3B8fCxc/b29PVmVcDxG0eDp6an4R+lL5h6fawsK\nmnp7e8UnTbDR2NgYdnd3Za9cWVmJzc1NxGIxJCcnY3x8HKFQCD/5yU+gVqsxODiIjo4OmM1mfPDB\nB5iYmEAkEsF3vvMdtLS0CKCG0amRSAQOh0NeZOJt8/LyRPDU2toqOe5erxdDQ0N48uQJIpEIfvrT\nnyISieCLL77A5uYm0tLS8MYbb0ChUGB6elpCcaxWq3TPIyMj8Hq9KCkpkRGnRqPB4eEhZmdnRW1O\nu6FOp5OQl66uLthstudQqsQNE9JRWVkpiF2KCgsLC0WIxkkFXR9U5XNfztUHixYKkZ4VJ/J93t3d\nxebmpkwpSGX78z//c0Eez87OIpFIoKOjAxsbGxgZGYFOpxOEKNdldrsdoVAI//mf/ym2p/b2dtnL\ns2Do7e2FSqXCX/7lX0pexsnJCVZWVnDjxg1BLNfX1yM9PV1wuOPj4zCbzTLlIqSJ1kYe0EajEX19\nfTg6OkJXVxe2trawurqKoqIiKBSK58BfBLrMzc2JpYxnA62NSUlJWFpawvLysqCVCaai9kGn08m0\nkHt5Fk7BYBB6vR5NTU0CDMrIyMDU1BQ++OADNDY24ty5c8jLy3sO2BMOh58rLEKhkOhQwuEwNjc3\n0dzcLCu46elp3L17V+KJy8vLhWFSVFQkYlFOHdbW1rCxsSEuhZKSErGTsgl54YUX5NlmlonNZsPA\nwAAmJyehVqtl8rW1tSVnz9raGkZGRuTfFRcXC/HuwoULqK+vh9frBfBNA0pLYV1dnWCWqTNiMU7G\nAgFNmZmZsFgsAjTa398XFLDb7ZbCilkR1CqwORocHMTVq1dRXFyMu3fviuWXa2CfzyeTR4KDfvSj\nH/3hCu/oeSY/mUpSCmtSUlLkA9ve3sb8/DyysrJgs9mgVqulIqL61u12i/eUh+vy8rIod/1+v3gz\nV3/LxQYgKwKyykkXIjQlLS0NBwcHAt8hlY2jTop6jo+PRbFLtTWreoYxEI/KiNn5+Xno9Xro9XqB\nhywvL0tqW1ZWFgwGA4p/m4pEktjS0hL6+vqQnJwMk8kkLG/y7EOhkKgvWfn7fD6xOdXU1GB1dVVy\nkPmwR6NRJBIJUZMTI0pG/8nJyXM7UYI6JicnpbNlzoBS+Y07k+E9nNYcHR1JwUaPOB9e7qkUCgUK\nCgqgVCrlAqbYiElYxI1SQFNSUiIWKopUtra2ngsYoYZjd3dXRtfcqR4dHcmKAoBEYvp8PqysrEgF\nn5+fL6p8RvEmEgmsrKyguroaVVVVAIC5uTkJB+KIjuQ+PkucTG1ubkpgCIl+Ho9HOOLc7ZEjQWsd\nAGH6ezwe7OzsIJFISCfE75srJz6/Wq0WAMSt4PF4ZDJGeyLpdQaDAbW1tVhaWhLeAdcaVPzzs8nO\nzpbxIxXkh4eH0Ov1OD09lQkXiw0WGhRccszr8Xig0WgkUIoWOhLUioqKsLe3h8nJSSnuuQvnecCV\nUlpaGgoKCrC8vCw4U
KVSibKyMmkwaL/b39+X7s/lcsnYn+sHopSDwaAIMxndXFBQIBchsaz8Oz+7\nTyYYJRQKobu7W9aHDNfJysqCTqeTwBzatVJSUuB0OpGdnS3ef6rOvV4vXC6XiB45tSO7n1RIXthc\n4RC4wz+Du3OqyikcW1tbk7OBRfnY2Jg4KHjxcF+uVCoRDodFXMfnmta+k5MTsYTu7OxIXC8BSPyZ\nl5SUyN+vvLwcqampwgMIBoPw+/1CBc3Ly5PgIMKY+Iyy6VCr1TJJYE4BCxeVSgWLxQIAIoAEIPHY\nbLYYYpZIJLC+vg6bzSbNJ6eKKysrYumm6v5ZQuXR0ZE0XNQ07ezsYHp6Gq2trTKp4PtFZw7Jnvv7\n+wLU+X2+vrVLntWl0WjE4uKi7KBycnJQUVGBqakpTE9PCyksGo2KQIbeSF4yjD4MBAL4xS9+Iap0\nksX4ElGNz1ASp9OJ4eFhjI2NIZFIiHiOIAju5zk2IWNbqVTKWBmAvPi06vDSoveXlTg1AZOTk5ie\nnsbU1JS8vDk5OWhpaRFojd/vl7Eyo0mp8PV6vfB6vTLqjMViKCsrw8WLF9HX14fr169jYGBAHkwK\n1RobG4U3zghaenWTkpKwsbGB4eFhEZFQrHhwcCAK+8nJSVy8eBE9PT2id2B60tmzZyXWMysrSwo5\nqrb5stMnS+Y71ef7+/syqmIxkpOTIwpgMqsnJycRCARgt9uxuLgIr9crsai3bt0SxS0tUhzbn56e\nyhqBXvS9vT28/fbbSEpKwuLiouB6SctzuVxYXl5Geno66uvrUVhYCK/XiydPnmB0dBQ1NTU4Pj6W\n0XVHR4fkpvt8PqG1AZDd5dTUFKampiS8iBMNvV6PL7/8UtK7SktLodFo0Nvbi/T0dLS2tsrunrSu\n+vp6TE1Nwe12Q6vV4uDgAENDQygrK5MR/+3bt/Hxxx/DYDCIf9tut+PcuXNYXFzE0NAQ5ubmsL29\nLTTHkpIS1NTUCLd+fX1doEksfmjP6+npEf3HnTt3sLi4CHGDRpcAACAASURBVIfDgZKSElRUVEjB\nOT4+LsAYOgoo8kpNTRVEMENhTCaTuGGmpqYAAJubm6ivr4dCocCvf/1rAEBFRQVKSkqQlpYmF3ww\nGMT4+DjS0tJQXl6OaDSK0dFRLC8vIxgMCuo0LS0Nzc3NsNlsmJycBPAN7W96elq858xDv3btGjwe\nD5KS/q95YjQr090IrrHb7bhy5YpQKdPT0xGLxTA9PQ2lUinnX05ODoqLi0V5z8x6WkOVSiXW1tbg\n8/nQ19eHrKws0YOwuGLjwTCimZkZKUho8y0pKRHYVGZmJvb39zE6OgqXy4Xd3V388Ic/RFVVFSwW\nC7788kvcvHlTPiPqH0wmEzIzMxGJRDA4OIhIJAKlUomSkhJYrVYUFBRgbGwMH3zwAXZ3d6HX6wWY\nxPE9R/vk7dNpxJUZACmQFhYWBIn77rvvSure6OionPtWqxWtra2w2+0wmUwYHBzE0tIS3G43Dg8P\npbACvnFJsRCnPicnJ0cQzbFYTIiCzc3NqK6uxqeffir3Dj3+XPnOzMzI+c/8kIWFBayvr0swE/NF\nmMGQnZ2Nubk53Lp1CxkZGXKnkKZ348YNPHnyBHt7e0hNTcX29jY8Ho/oIE5OTrC9vY3XXntNipLf\n9fWtXfKMRs3Ly5OOnjm9lZWV0p1sbW1BoVCgtLQUwWAQQ0NDWF5eRmlpKa5evYpbt27B5XIJH7y2\ntlaIdz09PaIu39nZEWtUbm6uVEwUfJFD7/V6sbq6iuPjY2g0GhHmUaHucDjgdDqxsLAApVKJ9vZ2\nARNwPEdB28LCAkpKSlBcXIyCggJoNBoAQElJCVJTU1FUVCQMb0I7uAOlRYQHxsbGBm7duiUVbVNT\nE3Z3d8WawnAdu90uD+329jYcDof89+lb3dnZEZY29978OSQSCaH20evJacHm5iaePHkiVhN+VswH\noB3JYDCgtbX1OQY9u+vV1VXx1qvVaszOzgoPmiyD3NxcsfkdHBwgNTVVDvuMjAwJmqD16PT0FC0t\nLUhOTobRaMTg4CBWV1fR0tICk8kkP1dS6pRKJVpbW5GXlye2KIrqqqurUVtbC6PRCK/XK/99Wqe4\nw93f34dCoUB1dTXy8/PR1NQEtVqNg4MDsVMxPIQHuM1mw6VLl+QZZjFKVjyLhfT0dJw7dw5WqxUa\njQalpaXCaCgqKgLwzUW0uLgo9sHOzk4p6HQ6HVZXVzE/P4+CggJEo1G0trZKZ5OWliZCJkJY3G43\n1Go18vLyZKJF8aLT6UQsFoPBYJAurKysTNDOFACFw2E0NTWhoqJC9txmsxkOhwNLS0swm83CLmes\nrFKpFBva3NwcwuGwEC/Jt2DnxfEru/crV66I0I0XcTgchlarhdlsllXZ6uoqzGaz2LUikQj6+vpE\n3EuGPd+Bg4MDma5wmqLRaGAymeT30A7m9Xql22xtbUVNTQ1WVlYEJa1UKlFUVITS0lJhrdONoVQq\nZT0Zi8VkShaLxeD3+5GdnQ2dToeFhQWEQiHU1dUJ04FTMYK9WBjR3eLz+bC/v4/29naYTCbRC6nV\nanR2doqnnpPL4eFhRCIRtLa2oqioSGAt0WgUx8fHsNvtYuWjTYwocNor3W43dDodzp07J+vKtbU1\n2Gw2VFRUYHx8XARyHGXzDrBYLALuCoVCInqjq4owLYPBgBdeeAEdHR3ClVer1SIYrqurk8nf8fHx\nc4JXADCZTADwXEAO7b17e3uSecGzkJ06n/tn4345MeDPNxKJSLyzyWRCamoqjo6OEIlEUFZWJsE5\nPFPJu2AoENeSpaWlODg4kHvFbDaLJZerQZ7Fv8/Xt6qu56iUgSptbW0oLS2F2WyGy+WSGEylUgmb\nzSZxqPPz85JtTWRnVlYWampqcObMGUFSvvTSS7DZbNBoNJifn5eHiOMrkpnu3bsnH+IXX3yB6elp\nSYrS6/WwWCwiqCMCkXa8mpoa2cURAnHhwgUcHh4KsvPZKNVgMCgdzrN+ZlaHz/pgeaEUFRVhfn4e\n9+7dQzwely53Z2dHDj4GgZSUlKCurg5arVa6S3ZFVF8z7IHAER4U8/PzQmRLTU1FVlYWzpw5g+bm\nZhwcHAjQY2FhAS6XC2+//bbsdAcHB/Ho0SMkJyejtLQUFy5cEKIeAUHsSiYnJ2X/fvv2bezu7goV\nrfi3lDLmmtMTyg4vJycHV69eRSgUwsOHD1FWVibZB3q9Hi+//DIyMzPR19eHN998E5WVlVLwrK6u\nSqZ9T08PSktLkZqaiq+++kqCLq5evYqrV68iEAhI4ZeamopEIiHiKk4IaPFqaGiASqXCxMQE1tfX\nZbrCLouIZrvdjpdffllG3x0dHbKbT0pKEogTLzCm7rW0tMjaxWKxQKVSYXBwUPQAzABg6l9qaiqG\nhoYwNDQEi8UCq9WK7u5uodtlZWVBq9UiJSVFsuopmmtpacGTJ0/kMvN6vXA6nb
Ifjsfj0Gg0qK2t\nhcVikTXO5uYm/H4/GhsbYTQasbGxIZ5jsve7urqgUqng9/tRVVUll6dCoZDiIxAIoKamBi6XC2Nj\nY4K7zc7OFoZBIBBAXl4e3nnnHXi9XszMzMg7QP1DdXW1vKcrKyt48cUX8cd//MfIzs6Gy+XCxsaG\n+JXZpXGiRU8+aZQcZZM4ycAZvvP07zc1NcFut+PevXtIJBISzMQGhTobg8EgqY+chmVlZaGsrExE\nvPxeVCqVBCKdPXtWfu5ra2vY39+XlSZ39rQyut1uRCIRvPTSS8jMzMTCwgIODw+RlJSE6upqWRdx\nhD40NCS6DiYFPruLb2xsRGdnp8BnDAaDaJ5KS0sRDocxMTEhZ7BGo8Ho6CgGBgbkn7lcLvnZEs+q\n1WpRU1OD7373u3jw4AF6e3sxNjYmxMzKykqZOiQlJaGyslL2+mlpaQgGg1haWsLIyAh8Ph96enqQ\nmpqKtbU1JBIJgQ4RrEWdiNfrxfHxMfLz88W1s7u7C41Gg5KSEmkyeRbu7e2JwI8ANbIksrKykJSU\nhIWFBQH8nDt3DuFwWNxI+fn5yMzMfA49Tc7C/Py8xNCePXsWra2t2Nvbw61bt9DX1we73Q6bzYZ7\n9+5BrVZLIclJ8u/6+tYu+c3NTWRmZj7Xdfb392NgYEBGKdxnEsjBEcrXX3+NkZER/NM//ZMcWhTP\n/eIXv4DP55OLgeExz+4a9Xo9zGazfLjDw8Ooq6tDd3c3XC4XdDodWltbBfU4MTGBlJQU1NTUiDc7\nOzsbiUQC77//Ptxut6jr/X4/hoaGREW+tbWFhYUFoZ3FYjF8//vfR11dHSKRiHTbt2/fRm5uLl56\n6SUJVXnppZeQm5uLzz77DMFgELW1tYKoZfpYMBiUjsloNGJpaQnj4+NoaWmR7PmcnBxRHEciEayv\nrwt+cnR0FA8fPhRxytbWluyXHj58iPn5eclmVqvVskLgXvHGjRuy6yYciIQ/gngopJuZmZHx4O3b\nt6XDJTluYGAAq6urQqMLhUKCiyQlyuPxwGAwIBqNyp/jdDrhcrlgNptx6dIlABBELbOtA4EAAMBu\nt8sl+eDBA1nDUBuxubmJ8fFxUQkTXcy9NS8uomnv3buHoaEhqf53d3cxPDws8B2GlxBv+atf/UrG\npB9//DEePXr03GdETz7teHt7e6Im12q1ePTokQT3UGj28OFDeDwevPDCC7BarZLkxbwGwk34/YTD\nYUxOTsrOVqFQiLd9ampKrKIs0A4PD6Xz4qU9OTkpAi+1Wo2ZmRkhgVmtVjgcDtnhGgwGmEwmFBUV\nwefzibXR4XDIdE2hUMjF//XXXwsi9fDwEMPDw6IX4LqGWpv19XUMDw+LAr+yslIOYzIJqqur4ff7\n8Y//+I84f/68ZFlkZWWhsrISBwcHYj9lVLXdbkdGRoZYD1nsMa+e39f58+ext7eH0dFRgb9QwzE0\nNCRrI5vNhqOjIymwqURnzCv/7tSZhMNh+P1+3Lx5UyYlpFd2dXWJRiASicBkMsFsNstOfHp6Wnbh\n9+7dg0qlks9CrVbjww8/FDQyV2Hcq1+7dg0lJSXIz88XsNTi4iLMZjOys7MxMTEBv98v+3juk1NT\nU3H27FmsrKzg17/+tRQdzc3NOD09xczMjIBsgsGgTCvIegiHw8jIyBDH0crKCq5du4bk5GTh3VdX\nV+P8+fNwu90YGhqSvXk4HJaVVVNTE3w+Hz766CMh9zU1NcFiscBisWB2dlb0C6RW+v1+KQroHKHr\nYG1tTfRVra2tMBqNGBsbg0qlQiwWw/j4OHZ3d0VTQAw6XUic+h4dHUGv16Onp0e6dFqQGeudlpaG\nTz/9FLdu3RKwGPHp1DIQWsTpwRdffPE779pv7ZKn8CA3N1eyg30+n4BGsrKyRGRAwQTFRjs7OwAg\noyT6u5m1TQzgysqKZMMzhINqWHpFw+Gw2Nw2Nzdhs9lQWFiIsrIyiZVcWFiQB5YoXo1Gg93dXUxM\nTAg9LxqNykMDQEArFK8dHh7K4axWq7G8vAy/3y+4Sgo8GChhNpuh0+mks09JSRFsKHfeFotFsLsr\nKytYWVmB1+sVex8BPLQjhUIhqFQq5OTkwGAwIBKJYGdnR6h+CoUCOzs7ss8jOYwdZjgchtFoFKU/\nx4LspCkOXF1dFdhFaWmp6BYYTkNhHsf2jDbli0lIBwVw9JufnJxgaWlJ7FzP2i+fpR4eHx8L45wR\nkRT0kUlPby5HefTfOp1O1NXVITc3V1jqnMJwzdLc3AyLxYLp6Wk4nU7s7+8jLy8PKSkpcDgcCAQC\ncjjr9XrU1NRAoVBIEIpCoYDL5UIikZAQpeTkZAFfcCwdjUZFk/FsMpvVapWcaeJ319bWBH1ps9nk\nYuTqh88tdQe7u7vIysoScRB/TsxNX1lZkenU/Py8iEFDoRDi8bisabgDJ0KaBW40GoXH45GgGAKI\nGMZxdHSE2dlZRCIRpKWlobW1FRaLBQ6HA/F4XFYUfOcp7qNVkYwBnU4n2GF2pkdHR+LaIDP84OAA\nk5OT0i0zQpjrLor+/H4/Ojo6oNfrMTo6ilgsJoEgHM3Tu5+WliYTuKWlJRnZcg3CX88JHZuPubk5\nUU3X1taKQ4QFZ0tLC+bm5rC8vCwBTbxAmKRGgiJFhhQSMhee1tSsrCxJ02MOB/3ldA7U1taKQp5/\nDtcyzIjgz4Z0OU4QgsGgrDYZKEX8LvkWPMfNZjPy8/Plv8PgKIKw9Ho9zpw5A4/Hg9nZWWg0GqSm\npkKn04nokyJUNogej0cgYsvLy89lsJ+cnGBxcRFJSUlobm4W7kV6ero4lMjkKC4uFgcHGSecEtPr\nHggERCzHopMWPUKJ4vE4XC6XcBXI2AAg7HsGiTHbgFx8vu+Hh4eyfszIyEBGRoZktjxLcPx9vr61\ngBqNRvP/QqEQtra2UFdXh7q6OuG7l5WV4fXXXxcus9FoxBtvvAGdTof09HRUVlbi7NmzeOGFF+D3\n+7G8vCxqULvdLkSr2dlZEcow3aipqUmgLtzdd3Z2wu/34+OPP5Zghvn5eRkjU+GdlZUlFXJTU5Pk\nO9NaRpoclbsKhQJnzpxBTU0NDAYDXnzxRbz11luorq5GOBzGrVu3EIlEYLVaceXKFXR2doowqaqq\nCnq9HpmZmWhqaoJKpRIOQCAQQEpKCi5evIif/exncqB/9tlnODk5QWNjo3RHpaWl8Hg86Ovrw9jY\nGCKRCNra2qBWq0VUZjQa8e6770p1S8Ehk510Oh2uXLkipDqj0Yju7m4BurS1tcnlTqTrysoK9Ho9\nysvLUVdXh5aWFjnMjo6OcPHiRZSWliIQCIjjoKenBy+//DK6urqQnJyMg4MDBAIBOJ1OLC0tSXDR\n2NiYZKEbDAZYLBZ0dnaira1Nio/BwUHxY//sZz8TSh/pX/n5+ejs7ERdXR3GxsZk/8sLoLq6GsXF\nxSgsLBRUMFG5Fy5cQFdXF+rq6oQl8
<base64-encoded PNG image data truncated>\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "noise_dims = 64\n",
+ "conditional_gan_model = tfgan.gan_model(\n",
+ " generator_fn=conditional_generator_fn,\n",
+ " discriminator_fn=conditional_discriminator_fn,\n",
+ " real_data=real_images,\n",
+ " generator_inputs=(tf.random_normal([batch_size, noise_dims]), \n",
+ " one_hot_labels))\n",
+ "\n",
+ "# Sanity check that currently generated images are garbage.\n",
+ "cond_generated_data_to_visualize = tfgan.eval.image_reshaper(\n",
+ " conditional_gan_model.generated_data[:20,...], num_cols=10)\n",
+ "visualize_digits(cond_generated_data_to_visualize)"
+ ]
+ },
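+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "*Optional sketch:* `conditional_generator_fn` and `conditional_discriminator_fn` used above are\n",
+ "defined (or imported) earlier in this notebook. As a reminder of the call signatures that\n",
+ "`tfgan.gan_model` expects for a conditional GAN, a minimal placeholder pair could look like the\n",
+ "next cell; the layer sizes and names there are illustrative only, not the networks actually used."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Placeholder conditional-GAN network signatures (illustrative only; the real\n",
+ "# networks defined earlier in the notebook are the ones used above).\n",
+ "def conditional_generator_fn_sketch(inputs, is_training=True):\n",
+ "    # `inputs` is the tuple passed as `generator_inputs`: (noise, one_hot_labels).\n",
+ "    # `is_training` mirrors the real network's signature; it is unused here.\n",
+ "    noise, labels = inputs\n",
+ "    net = tf.concat([noise, labels], axis=1)\n",
+ "    net = tf.layers.dense(net, 1024, activation=tf.nn.relu)\n",
+ "    net = tf.layers.dense(net, 28 * 28, activation=tf.nn.tanh)\n",
+ "    return tf.reshape(net, [-1, 28, 28, 1])\n",
+ "\n",
+ "def conditional_discriminator_fn_sketch(img, conditioning):\n",
+ "    # `conditioning` is the same generator-input tuple; only the labels are used.\n",
+ "    _, labels = conditioning\n",
+ "    net = tf.concat([tf.layers.flatten(img), labels], axis=1)\n",
+ "    net = tf.layers.dense(net, 1024, activation=tf.nn.relu)\n",
+ "    return tf.layers.dense(net, 1)"
+ ]
+ },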
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "## Losses"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 21,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Generator loss: -0.314357\n",
+ "Discriminator loss: 0.044576\n"
+ ]
+ }
+ ],
+ "source": [
+ "gan_loss = tfgan.gan_loss(\n",
+ " conditional_gan_model, gradient_penalty_weight=1.0)\n",
+ "\n",
+ "# Sanity check that we can evaluate our losses.\n",
+ "evaluate_tfgan_loss(gan_loss)"
+ ]
+ },
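+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "*Optional sketch:* `tfgan.gan_loss` also accepts explicit loss functions. The cell below is an\n",
+ "equivalent, more explicit form of the construction above: it spells out the Wasserstein losses\n",
+ "(the TFGAN defaults) together with the same gradient penalty, which makes the loss choice visible."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# More explicit form of the loss construction above (illustrative only).\n",
+ "explicit_gan_loss = tfgan.gan_loss(\n",
+ "    conditional_gan_model,\n",
+ "    generator_loss_fn=tfgan.losses.wasserstein_generator_loss,\n",
+ "    discriminator_loss_fn=tfgan.losses.wasserstein_discriminator_loss,\n",
+ "    gradient_penalty_weight=1.0)\n",
+ "\n",
+ "# Sanity check, reusing the helper defined earlier in the notebook.\n",
+ "evaluate_tfgan_loss(explicit_gan_loss)"
+ ]
+ },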
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "## Training and Evaluation\n",
+ "\n",
+ "\n",
+ "We use a slightly different learning rate schedule that involves decay."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Train Ops"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 22,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "WARNING:tensorflow:update_ops in create_train_op does not contain all the update_ops in GraphKeys.UPDATE_OPS\n",
+ "WARNING:tensorflow:update_ops in create_train_op does not contain all the update_ops in GraphKeys.UPDATE_OPS\n"
+ ]
+ }
+ ],
+ "source": [
+ "generator_optimizer = tf.train.AdamOptimizer(0.0009, beta1=0.5)\n",
+ "discriminator_optimizer = tf.train.AdamOptimizer(0.00009, beta1=0.5)\n",
+ "gan_train_ops = tfgan.gan_train_ops(\n",
+ " conditional_gan_model,\n",
+ " gan_loss,\n",
+ " generator_optimizer,\n",
+ " discriminator_optimizer)"
+ ]
+ },
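+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "*Optional sketch:* the section intro mentions a learning rate schedule with decay, while the cell\n",
+ "above passes constant Adam learning rates. One way to add decay is sketched below; the\n",
+ "`decay_steps` / `decay_rate` values are placeholders rather than tuned settings, and the decayed\n",
+ "optimizers would be passed to `tfgan.gan_train_ops` in place of the constant-rate ones."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Decaying learning rates (illustrative only; values are placeholders).\n",
+ "global_step = tf.train.get_or_create_global_step()\n",
+ "decayed_gen_lr = tf.train.exponential_decay(\n",
+ "    0.0009, global_step, decay_steps=1000, decay_rate=0.96, staircase=True)\n",
+ "decayed_dis_lr = tf.train.exponential_decay(\n",
+ "    0.00009, global_step, decay_steps=1000, decay_rate=0.96, staircase=True)\n",
+ "decayed_generator_optimizer = tf.train.AdamOptimizer(decayed_gen_lr, beta1=0.5)\n",
+ "decayed_discriminator_optimizer = tf.train.AdamOptimizer(decayed_dis_lr, beta1=0.5)"
+ ]
+ },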
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Evaluation\n",
+ "\n",
+ "Since quantitative metrics for generators are sometimes tricky (see [A note on the evaluation of generative models](https://arxiv.org/abs/1511.01844) for some surprising ones), we also want to visualize our progress."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 23,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Set up class-conditional visualization. We feed class labels to the generator\n",
+ "# so that the the first column is `0`, the second column is `1`, etc.\n",
+ "images_to_eval = 500\n",
+ "assert images_to_eval % 10 == 0\n",
+ "\n",
+ "random_noise = tf.random_normal([images_to_eval, 64])\n",
+ "one_hot_labels = tf.one_hot(\n",
+ " [i for _ in xrange(images_to_eval // 10) for i in xrange(10)], depth=10) \n",
+ "with tf.variable_scope('Generator', reuse=True):\n",
+ " eval_images = conditional_gan_model.generator_fn(\n",
+ " (random_noise, one_hot_labels), is_training=False)\n",
+ "reshaped_eval_imgs = tfgan.eval.image_reshaper(\n",
+ " eval_images[:20, ...], num_cols=10)\n",
+ "\n",
+ "# We will use a pretrained classifier to measure the progress of our generator. \n",
+ "# Specifically, the cross-entropy loss between the generated image and the target \n",
+ "# label will be the metric we track.\n",
+ "MNIST_CLASSIFIER_FROZEN_GRAPH = './mnist/data/classify_mnist_graph_def.pb'\n",
+ "xent_score = util.mnist_cross_entropy(\n",
+ " eval_images, one_hot_labels, MNIST_CLASSIFIER_FROZEN_GRAPH)"
+ ]
+ },
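+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "*Optional sketch:* the same evaluation tensors can also be exported as TensorBoard summaries so\n",
+ "that progress can be watched outside the notebook. The summary names below are arbitrary, and a\n",
+ "`tf.summary.FileWriter` plus a session run would still be needed to actually write them."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional TensorBoard summaries for the evaluation tensors defined above.\n",
+ "tf.summary.image('conditional_eval_images', reshaped_eval_imgs, max_outputs=1)\n",
+ "tf.summary.scalar('mnist_cross_entropy', xent_score)\n",
+ "eval_summary_op = tf.summary.merge_all()"
+ ]
+ },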
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Train Steps\n",
+ "\n",
+ "In this example, we train the generator and discriminator while keeping track of\n",
+ "our important metric, the cross entropy loss with the real labels.\n",
+ "\n",
+ "This code block should take about **2 minutes** on GPU and **10 minutes** on CPU."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 24,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Current loss: 1.288447\n",
+ "Current cross entropy score: 2.326701\n",
+ "Training step: 0\n",
+ "Time since start: 0.021370 m\n",
+ "Steps per min: 0.000000\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAfkAAACHCAYAAAAV6SDhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsvUeTpFma1/t3rbUIdw8dkZGVqjKrq4aGopkxrDHD+AKw\nZsOaJVvuCr4Ee7ZsMFgyYDBD0zVTWSplCI8I11prv4u4v6ff7Gt2p64wq7lmfszSZqq6MsL9fc95\nxF88R9qt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt\n3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt\n3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt3dqt//8s\n1y/1i//Nv/k3W7fbLY/HI7fbrc1mo/F4LK/Xq3A4rO12K5fLJZ/Pp8lkon6/r0AgIJ/PJ6/Xq/F4\nrG63q729PSWTSXk8Hnm9Xnk8HjUaDY1GI7ndbgUCAQUCAW02G202G61WK/l8PgWDQXk8Hq3Xaw0G\nA63Xa/t9kjSbzRQMBpVIJPTixQslk0nVajV1u10NBgMNh0Mtl0sFAgFJ0na71Ww202azkdfrldvt\nliQNh0N5PB4VCgWtViuNRiNtt1ttt1v7b4PBoDKZjBaLhd6/f6/1eq1QKKS9vT253W7V63WNx2Mt\nl0slk0mFQiFJktvtlsvl0mw2kyQFg0H7uev1WsViUV9//bV6vZ7K5bK63a4mk4lWq5W8Xq98Pp+W\ny6UWi4Umk4n8fr9CoZDcbrem06na7bbS6bTS6bR6vZ5Go5GWy6UikYii0ag2m40CgYBisZja7baa\nzaY8Ho9CoZBisZiWy6Vms5kmk4m8Xq+SyaR8Pp9cLpeGw6Hm87k2m429U7fbrfV6rfl8rl/96ld6\n+vSp7u/vVavV1Gw2tVgstNls5Pf7tVqtNBwOFQ6H5ff71ev15Pf7tbe3p9FopPF4bO99Op2KveZy\nueR2u+X1epVIJBQIBNRsNtXr9TQej5VOpxWLxTQejzWfz7VcLpXJZBSJRDQcDu07t1otjUYjRSIR\n+f1+20vRaFR/9+/+Xfn9flWrVfvZy+VSLpdLfr9fk8lEi8VCfr/fvnsymZTX61Wj0dBisZDP59N8\nPv/kXQUCAa1WK3k8HqXTaY3HY93f32s2m8nlcimTydi7Wy6XkqRYLKZEIqFkMqlut6tut6t+vy+v\n16t4PC6PxyNJWiwWOjk50a9+9St1u101m02Vy2WNx2NJUiQSkSS122253W5FIhE7X4lEQrPZTO12\nW5FIRNvtVp1OR9vtVn6/X85zHo1GFY/H1Wg07NwFAgGFw2G53W7N53O1Wi1Fo1Ht7e1ptVrZHp1O\np5pOp5/sF5/Pp0gkYme0Uqmo2Wyq3W7bWeA8cN7W67WGw6FSqZTy+bza7bZms5n8fr82m42Wy6WW\ny6W22618Pp82m42226329/flcrl0dXWl7XarYDCocDis9Xqtdrstj8dj/47vvV6vtVgs7PPkcjm5\n3W5tt1t5PB4Vi0V99dVXarVaury8VKvVss+yXq+1XC7t+fl8Ps1mMy0WCztfjUZD2WxWuVxO3W5X\ni8VCHo9Hq9VK6/VaPp9PiURCe3t7urq60uXlpeLxuP1ZLpeaz+dar9eSJJ/PZ3s/k8los9mo1WrZ\ns0ulUvJ4PJpMJnr58qUeP36sWq2mWq2marWq+Xyu7XarZDIpSRoMBvL7/QoEAnK5XBaniSfso8Vi\nYefbeaYSiYR8Pp9ub2/V6/U0n88VDAYVCoUUDAa1XC41HA6VSCQUi8Xk8Xi0WCw0Go20WCy0Wq0k\nSYFAQNFo1M7EF198Ia/Xq0qlonq9rk6nY886EAhYLOf98jv8fr9arZZcLpdisZjFDb/f/0ncDwQC\nymazWiwWarVatv/4/n6/X9vt1n7fdrvVfD6357BcLi2Wsp9cLpcODw/14sULVatV9Xo9/ZN/8k/+\nxhzu/X+epv/fLY/Ho/v7e719+1ahUEgul8teei6XU6PRsI02Ho9Vr9d1dHSkcDisSqViia1YLCoW\ni1kg9/v9ajabGg6H9nu8Xq9arZZWq5Xy+bzi8bhCoZC2263G47EqlYoCgYDS6bT8fr8kaTQa2SGR\npJOTE5XLZa3Xa7ndbr1+/VrdblePHj3SaDTS3d2d1uu1/Umn08pkMnr//r22260+//xzzWYz1et1\nSwij0UjxeFy5XE6r1Uqz2Uz9fl+bzUZut9uSzQ8//KDVaqVQKKSjoyPt7e0pHo+rUqno9vZW2WxW\nsVhMXq/XkupisdD5+bkymYwmk4nu7u4kSZ1OR2/fvpXP51M4HNZ4PNZwONRwOFQoFFIikVA6ndZg\nMNC3336rZ8+e6fPPP9ePP/6oWq1m7ySXy1mR0O12NZ1OLTkRxLrdrobDoXK5nJLJpKLRqCX67777\nTtvtVk+ePNFgMNBgMFAmk9FyuVS1WpXf71cikdDV1ZV6vZ5cLpeur69VLpflcrnsIMfjcQUCAdVq\nNcXjcb148UJ3d3dqtVp6+vSpFouFfvzxRyUSCcXjcY1GI4XDYRWLRd3c3Gg6narT6ajf72swGCgW\ni8nv96tSqVjiTCaT9ns6nY4uLy8ViUQUDoclScvlUpPJRJKUy+UUjUaVSqXUbDY1n8/V7/f1ww8/\naDgcKhAIaDQa2Z7abDaq1+s6PT1VMpnUx48f5XK5lM/n9f79e9XrdSUSCUvUBJH5fK7JZKLxeKzJ\nZKL1eq1YLGYBiUASCARULBZ1dnamv/7rv1atVtPh4aE8Ho8V1dvtVv1+X0+fPlUkEtF4PLYgeXt7\nq++++05Pnz5VNBrV27dvJUmpVErtdlt+v19/8id/on6/r59++kkXFxcKBoN6/fq1VquV4vG4Be3V\namVBzLlIhOwjkkkul1M4HNZsNtPHjx8Vi8V0cHCgfr+v+Xwun8+n1WplwfLo6EiXl5f283/44Qd1\nu12l02kNh0O1Wi2FQiFtNht1Oh0Vi0Wdnp7q8vJSs9lM+/v76vV6qtfrymQyCoVCVmCsViulUilt\nNhtVKhWt12tL6qPRSB8+fFA6ndbR0ZGi0agikYgikYiazaaq1arcbrdCoZAVzdFoVOVyWYVCQQcH\nB6rVavrw4YOdm48fP1piGgwG2m63isfjarVaarfbKhaLWq/Xurm50fn5uc7OzvTmzRsNh0NFo1FL\nOjQM2WxW5XJZ19fXVqimUinNZjMtl0ul0+n/U5MwGo3k8/mUz+fV7/c1HA6VyWTk8/k0nU4VDAYV\niURUqVQ0GAzk8Xh0eXmpy8tLlUolud1udTodpVIp++88Ho8ODg5UrVYtfm42G5XLZTsXx8fH8vl8\nGgwGluDa7bZ9Bp/PZ8UXhYPf71c0G
tXBwYGGw6F++OEH5XI5xeNxK8BSqZQmk4ni8bgSiYRCoZDe\nvHmj7XaryWSi9+/faz6fKxaLqd/va7FYKJ1Oq9/v6/LyUhcXF0omk/rhhx8Ui8X08uVLVatVtVot\nSbIz52wUaRTYk8FgUJPJRM1mU6enp8rn89Zger1eXV9fazwe6+zsTG63W8Ph0H6m3+/Xy5cvrWCr\nVqs/K9f+Ykm+WCzaS6V7p4tKJpMajUaazWZWFabTaR0cHGi73apWqymRSOjo6Mi6hcPDQ3U6Hd3c\n3CiRSKhQKMjj8VgnX6lUNJlMlE6n5Xa7tVwu7SAQfKkGo9GoQqGQ7u/vdX19bQc0Ho/bSwsEAlaN\nhUIhFYtFhcNhTadTvX//XpvNRqFQSAcHB/L5fCoUCrq6ulK9Xtfz58+teKGrbDabkqTHjx9L0ied\nX6lUss6LyrfX62mz2SiZTFoXNJ1OdXFxobOzM71+/doCq9/vt6pyPB7L5XLZzw4Gg9rb25Pf71en\n09FgMLBkHAwGtb+/r2Qyqe12q2g0qi+//FKBQMBQhOFwqMFgoL29Pau6QWboqqj++/2+JFk3x/cM\nBAIqlUp69eqV5vO5/uIv/kJut1uz2cyqZzoUEAqfz6fRaKRoNGqdhfTQOWSzWRWLRYVCIXU6HQUC\nAeXzee3v71tnkc1mVavVNJ1OFYlEFI/H5fV6FYlE5HK5FAqFLCDRRY5GI7lcLh0fHysejysYDKrb\n7SoUCml/f1/39/cWVCORiCX/1WplQTQYDCqVSsnlcimVSmkwGGixWFjRenJyYgVYu93WZrPRkydP\n7OfF43Ftt1t9/PjRAhvvMpfLqd/v6+rq6pPuebPZqNfrKRqN6vj4WPv7+9psNup2u/YsP378aN/J\niXrxmQmohUJBkUhE+XxeP/74o4bDoQXRo6Mj5XI5LZdLTadTJZNJvXjxwrqiaDSqTqej+/t7JRIJ\n27eSrNiZz+efnFE6nWw2a90gZ3Fvb0+1Wk3X19darVYKBAIqFAqaTqcaj8cWGBOJhH2PYrGo1Wql\nH374QZvNRv1+X4lEQqVSSUdHR7q5uVG73bb93O/3FQ6HFY/HVa/XNRwOdXBwYB2Wx+NRt9vVcrlU\nqVTSxcWFptOpdWV+v1/5fN460+12q729PT169EjD4VDb7VaLxcLeDcmAc+TxeBSNRuX1ehWLxQwZ\nAsFkL9PpJ5NJPXr0SOPx2M4CqFAymdSzZ8/k8XgUi8WUy+V0fX2t+/t7nZycKBQKqdvtSpIhqW63\nW+Fw2Pb46empFc4gdvF43Ap7UD/OUyAQUDwel8/ns/fb7XYVi8WUzWaVzWatiDk8PLT3zPuq1+tq\nt9uGutDdSlKtVtN2u1UikdBoNNJqtbJ9SgwPh8N25o6OjvTmzZtPEMl0Om3vihhCHOPcOhG2+Xyu\no6MjpVIppdNpVSoVzedzFQoFaxCkhwQ/nU4Vi8W0t7dnnyscDms4HKpSqahQKCgUCqlWq9k7oQkK\nhUKG4B0cHMjv9+v9+/eazWZWBLtcPw+I/8WSfCgUMgiCYBWPx61yA0b0eDxWeQFRbrdbBQIB5XI5\njcdjeTweJZNJ60hzuZwlLhI5dEA4HNZqtbLKiY1IYCaAECxubm4M3g+FQtZ1EEBns5nC4bD29vYU\niUTU6/UssLrdbmWzWYPwVquVJZlcLqdUKmUd2WKxsP/e7XZrMploOp0abO/1ei3x0PVTkQeDQYM6\n0+m0Hj9+bJtP+kMRxXfm8Pp8PitqIpGI1uu1dc2xWEzhcFixWEySjOYgKTghVOkBzi0UClbFAjm5\n3W4Fg0E7SLPZ7BO4VZIFgkKhoMlkYsgOBRRJnGeeSCQUjUYt+AH5zudzDYdDSzagMXRdHHgqf4qe\ndDpt3XIgELAuxu12K5VKqd/vW+B2u92Kx+OWtOmCHj9+rNVqpW63a98vGAxqvV7L7/crHA7L6/Uq\nGo0aNEqx5HxeuVzO4LtQKKRoNKp8Pq9IJKLRaKREImEQJHAmHVUmkzE4kedLp0txR0EDLE1QbbVa\n9hnplCiQgbElWXfNsxyNRgZl5vN5BYNB63w5l4PBQJvNRrFYzBACimq+P+dqtVppb2/Puh0npEmS\nDAaDikaj9o4pSKQHKBYKi+9BsocWg4pbrVYaj8eGTFF4EGRJVKlUSqlUymDrbDarVCplyInP51Ov\n11M+n1cymbS9O5/PLZaB+oAkZTIZe8/Ozno8HisYDFqycMZKr9er1Wql6XRqiYxkxjknWbCnKAr7\n/b6i0ahKpZIVDiTYdrttiXw6nRqtNRgM5HK5LOFSiPR6PftsnCW+C1ReMBg0Gsbr9Wq5XGqz2RgC\nVywWlcvlrLMl5tDZhsNhZbNZdbtdi2upVErRaNTiD/Etm81Keijw3W63xTbOXTgcVj6f18HBgW5v\nbzUcDq17dlLD7FmeN3D9aDRSIBCw+OWkiGezmeUg0BtgeX4WxQkxgaIGypP4xPOiceXsgZ5A3/BO\neM9/0/rFkny5XLZOhSS63W7VarXU7/fVaDSsk3C73RoMBva/wU//9NNPSqVSCoVCevfunXEfo9FI\n3W5X0WhUrVZLvV7PDlOz2bQD3O121ev1NBgM5PU+PIpGo6FGo6F2u63hcGjds8fj0Xw+12g00mAw\n0Hw+13Q6VbVaValUUjabVafTUaPRsKKi0+koFovJ7Xbr9vZW/X5fPp9PtVpNPp9PmUxG8/lcvV7P\nqkhgYrqR9XqtSCSibrercrlsEBRVfbvdVjQaVSwWUygUUqVS0X/8j//RuDonBwlUCj8EzzabzVQu\nl40qQHuARoEkMZvN9N//+3/X/v6+crmcWq2WWq2WQXVAZ1TrbFT0C3zGzWajarWqQCBguoPJZKJv\nvvlG/X5fd3d3evLkiRKJhOr1ugaDgXFU4XDYOkWS6mAw0HQ61Wazsc5SeihM0COMx2N1Oh1dXFwY\nLHZ/f69+v28BdTgc2mFHA1Auly2heb1eTSYT1et1TSYTpVIpQ1DevXuner2u5XKp0Wgkv9+v0Wik\n6XRqmhIONsGC9xyNRrVcLtXpdKx7Y1/0+339+OOPymQySqVSuru703A4tGKMgMdenM1mtmeAPFer\nlVarlW5ubqwTCIfDWiwWur6+lvQQIOlq4H1BfAhEFB7tdlv/63/9L0ky2HS9Xmu1WqnRaKjX68nr\n9arb7ep//I//YdB3o9FQvV5Xo9GwYL3dbjUajdTv9+27397eajqdGuW23W5VLpcVDAatY1qv17q/\nv9d4PDYYvd/vW9Dl70oPFFUwGDSaqNPp2DsCjRuNRmq1Wup0OvJ4PCqXy5pOpzo/P9dms9HHjx91\ne3uryWRi8YDfBe1xf3+vm5sbKxqhbFqtlpLJpOl3arWa1uu1oVx+v1/z+VydTsfeOf9uPB4rFotp\nNpupVqtZDCFe9no9zWYzox/7/b7+4i/+QrlcTplMRv1+X6PRSL1ez/QwxLB6vW6U0jfffGOFF+
gY\n2g2Kq/F4rNvbW7VaLVWrVYuTxC+nLglkyufz6e7uTrVazeLsdrvVdDq1zzYcDuV2u1WpVNTtdpXL\n5RSJRNTpdFSv1w32RmsDn0+BsV6v1el0DDlxogU0cqPRSG/evLEcwb7iHUITSdJkMjGUBs4eRI73\n3m63VavVVKlU1Gg05HK5NJlMdHh4aI0OCMrd3Z0ikYgVQuQyaB/QXPY8RRyF4tXVldbrtabTqbbb\nrZbLpfr9vjqdzs/Ktb9Yku90OtpsNsrn80qn05YMJpOJwZIEI7ovquDj42Pj3yWZgAsuiq4WodJ6\nvbZigUBC90c3kcvlVCqVVC6XNRgM1Ol0rHOlm+SQwgdGo9FPBHbNZlOTyUSlUsmCI79zuVwqGo1q\nf39f8XjcNgWdXj6fN1RjPp9bh+/z+ZROp42HJjnn83kL7icnJ8pkMmo0Ggaf06WT5Nfrtf0+4FA2\n7HQ6VaPRsKqRQ0VH7fF4jC/n87VaLS2XS4VCIRMT0YUjngMJQARGd0k3hiCo3++b4AiIDBENAQMu\nLpfLyePxWPJst9sajUbGV5PsQXtisZhBa2g3qLLRCgSDQUvu7JdoNGqFIbBYLBazpApnzD7m/TpF\nnX6/X4PBQLPZzBAC3itBEa6Qg85nR4PhcrkMDu31evb50um0/X0+OyI5aCdJajabhgLBQ7Jvl8ul\ndboUis4iaj6fy+VyfSL27PV6qlarur291fn5ufL5vO1zgmQoFFKhULCigHe4Wq0Ui8V0dnamUChk\nwsLFYmF7Akh+tVp9QjkAL5+fn2s4HFrR6fP5dHR0pHw+r1gs9gn0yvORZFArdAqICkJOBJHQV/Dh\ny+XSCkTQxlgsZoUBCFcmkzEkkc8MhE6coWvls0NDwP1TKHm9XqNeKM4J7LFYzLRDdHXsT37HfD7/\n5NwjSOX3sGckGTxcqVQM4eRzQ9+AxlB8uFwulUolpVIpS6KbzcZExRQgxPJms6l+v2/olyRLYqBb\naAwkWRHA+4OSBEJHGOmE1UEneX57e3vWEJHI+e6gV5IsnqK/4f+nIWo2m1oulyoWi9ahk+xpUnhG\n0HfZbNYQuMlkYhoZivfVamVF9mKxMGgeMTDveL1eW1EDbU08JQ78nPWLJfnRaKRgMKhHjx4pFApp\nNpvp5uZG2+1WqVTKuNj5fG5QJIpXqu/lcqnr62uDf+BEbm5ujO+iSECZv91urRPiBXW7Xe3v7+vi\n4sIOxmq1UiQS0f7+vkHd8Xjcut14PG5Q2Wg0UrlcVrPZVCQS0ZMnT6zydsJGQDfJZNI6ZhLrycmJ\nwVqtVsvg6WAwqFwup2w2q729Peu80Bx4vV79+te/ViKR0H/9r//VEnEikTBonU1JAjw6OrJAjtp6\nPB4rm83q6OhI9/f3BkMDvZHI4/G4Li8vdXNzo6OjI4Po2+22oSckRyBwt9tt1TId4cHBgYrFoo6P\nj/XmzRsNBgNFIhGlUik9evTIYDsKslarpWAwqIODA+ve6/W6QYdHR0fy+Xy6uroyrpDqWZJOT0+V\nyWT0+vVrbbdbnZ2dWSGB0psOVpId1EAgoNvbW3U6HVOvJxIJPX36VNlsVn/5l3+pxWJhOhC4RqA2\nOqVsNqvpdKper2fVOj+LAo9ESXBj75ycnKjdbuv169emT4nH41osFhYAKGSDwaAlATqfaDRqRY7P\n51OpVFKn01GtVjMoejqdqlgsWtKmw5Kk/f19UxxfX1+rVqtpNpupUCjo9PRU1WrV9iqQeCwWMy70\n6upK/X5fyWRSuVxOuVxOHz9+VK1Ws/0ai8WsyPD5fOp0OkYvAE0+fvxYn3/+uf7Lf/kvJvY6OjrS\n48eP9fjxYyWTSevi2GsUOOPxWP1+X4eHh0ZrVKtV67JIdpIsSazXa3348EG9Xk+LxUKPHj1SKpXS\narVSp9Oxjg8RJnA0xTRIZLFY1MnJifx+vxqNhqFfx8fHOjs7UyQSMWifZiSbzZrAmEJ8uVzq9PTU\nRMDT6VTxeNyKlFQqZbRirVZTvV43l0I+n1e32zUhHJ0qsTQajZrIi5h5enpqED/oKtz44eGhTk9P\nlU6nrcCE4kokEhZb6IL9fr8+++wzBQIBlctlc/KkUin5fD4rrCTp/v7e4h9nCf0GjQ6fy+VyqdPp\nGI3Bfn/16pUhmW/fvlW32zU9BOJIGjDg78PDQyv0e72eITvZbFYvX740cS4F+OHhofb397XdbhWL\nxVStVvX69WvTGmSzWdXrdV1eXlrxJMmoCFxBLpfLikoKq0wmo06no2azaYXL8fGxOU6A/n/O+sWS\nfC6XswPR7XY1m80sEaE6pfNCuU0nQ5KcTCYGOVerVVMbdrtdud1updNpg/HoJqkyXS6XKZMHg4Fx\nTu1223gxVOnn5+eKRqMmvKH7GQwGqtVqxldKD4KLSqWicDisVColSRZ42CT5fN6qaipJbFYEDsQa\ns9lMHz58MCEO3BzPzOVy6ZtvvpHb7dbd3Z3BrIh5gNroyjgojUbDLG9AYa1Wy3hiUBEU//CjqVRK\ni8VCqVRKvV7PrEF0FcC32AyphBEvUsGv12t7nqAGq9VK/X7funmEjnBnzsBBJwQaAS+Mun2xWBj8\nB7cI7L9er3V9fW1IAcEN0RhQIIEGUVelUpH0YF388ccfzQqE1iIejyufzyuTyRgsKsmSFlAs8Dac\nIgkRZIoKnqLM2YEC6wOR0jnEYjEFAgEtl0vd3t6aFbJQKMjtdpvtx+VymRVpu93q7u5OzWbTAjMQ\nKs+drqrT6RgES1EOn8vexHY5mUzMBoY1yePxWGJst9uq1+tarVYqlUpar9fq9/uq1+vWtRM4sURN\nJhNdXV3ZZ6Rjnc/nqlarqlQqprGQHsSd7CecNS6XyyiaYrEoSZacgUqhOxaLhbxer3VbQNwkgEgk\nosPDQzWbTSuK4WRRZtP1ulwuEyf2+33T22BVbDabGo/H9rsQHI9GI0uQxKter2d0CFQi8C8qdqgs\ndAdQRcQfeGPeDYgL+3a5XKrX6xlETDfq9/uNI/f5fJbwKSZBYPh+cOt/jERAYbLHnbYyUAaePe6U\nxWKhcDhsOiz2Es+G5+GEw5vNpiE0xFIKeRAz/n8KN6f9lIJHkhUSIBwul8uaSahIt9utg4MDc1RE\nIhHNZjONRiPj5bHcSTIUKZfLWSxBK0AhWK/Xlc/nTQQK5eHc63/T+sWSfDabVavV+oTvCIfDn/De\nHBA2CH5WNjEPD6h/OBxaYKWDBDZDSQ08hvKcn43fFm4cFIEOD/iJgA/s0ul0rHMG6ry/v9f+/r6O\njo4sYBKcOOyJRMJgLhIp/+3x8bEODg60Xq/VaDR0e3tr8DaJ3gltffjwwQ4rUDSQJt0g8NJisbBn\nheaAAwbXdHBwoFgsZsGPgEQQAv6uVqvqdDpWFKVSqU+EUnxffNUIxAj+vV5P9/f3SqfTSiaTBhET\nVCie6MZIKlS/mUzGvK945nludG7YrVqtlln8XC6XB
Vu6rlwup/39fRMgkowRfmI3JPiT3BHPMIOA\nQCfJOkkcHCRXhHkEGqxCwMMEGZ4FRRD/u5N35h1iAW02m7q+vjbVM4Uu35O/D0SPJoUCG1gd6gjd\nAAVSJBKx9wGPfHBwYHSN83t6PB4NBgPb6yi+gYbxakN33N3dGccOEgO65eSk/X6/+fa3260ajYbx\n7qBWFE1Qd3jeB4OBJQe/32/OEcSvwOnQPdAGuDnQ+5ycnJjwr9lsqlarmTecvcvnQajrtPxhlWu1\nWhbPiHdQaxR5/H588t1u12zAJDEKE0n2dxCX8l4pYHjeo9HI4u+zZ8/szNPNEgspRimeiYPn5+eS\nZM0KcYnPgxiNghbRcaFQsP3LZ0YAHQgELB4R4weDgcbjsZLJpFFb0oN+ijkAxBeK2Hq9boJn9Bh8\nN1A0ziGNEDEYsTBnZLvdqtls2swJZ0GMvqPT6RjC8fbtW93d3VlDRuECVYlGgpyBWBR6hv1J3kgk\nEspms7q9vTUalDj2c5bn/356/v9m/bN/9s/+dbfb1eXlpTKZjPL5vFXxh4eHkh5e/NOnTy2RZzIZ\nG7wBD4/t6fj4WH6/X8PhUKVSyRTphUJBL1++1O3trdrttkqlkkFcKN8bjYbxK9hl4FBXq5Vevnyp\ng4MDg/zatsU4AAAgAElEQVQR99FxZjIZC6g+n0/39/dKpVJ6/PixdURQECRIKABeYLfb1Wq1UrFY\nNFvUxcWF0um0bm5uJMkCK8Mcttut2u22QfmpVEqZTEbpdNqgrpcvX1qniMOgXq8rmUzq8ePH9nfw\npq5WK33xxRfa29sz+qFUKhlkCi8EtM4h2NvbM37d7/fr7//9vy+v12sFFMXawcGB2YdcLpcpVSOR\niKnbp9Opnjx5omfPnkmSVa3wanjX0VIUCgVTD3/99dcWaJ48eaKTkxOFw2H7HQQF9hhVOEp9oN/L\ny0v1ej2FQiGFQiElk0l98cUXxsPjsUZNTDUeDAb14sUL5fN58xMD/Xm9XpVKJXt2r169Mr/0F198\nYZCg2+02CD8QCOjo6MisPPl8Xtls1qyRmUxG2WxWpVJJX3/9tdxut/76r//a6CHpASk4Pj62ISen\np6cqFovmWAExy2QyevbsmXw+3ydiK4Yi5fN5RaNRHR4e6uXLlxbMX716pWQyaVoFHABwlQRtIPBa\nrWbvkLkLr169MsqHgOj3+3V8fGznIxKJqFgsmgMAbclsNtOrV690fHxsqNJoNLLC3kk3lEolJZNJ\n1et1hcNhnZ6eWveG9gBkgmSXy+V0dnZmtBbPzOv16vDw0LhXCofFYqFcLqc//dM/1XQ61du3b83x\nAzxNDEmlUvrtb39rGgG4dJocNC5+v19HR0fm6CgUCoYUlkolffbZZ1bQUBwQGymwsD5ikT0+PtZk\nMlGtVjPtBPugWCxaA/PixQs7H/Dgo9FIZ2dnOjk5+aS5QMycTqcViUS0Wq20v7+v4+NjDQYDJZNJ\n/emf/ql9Tz7Per3+JImjKer1eup2u0YrhUIhXVxc6NmzZ7b/GHrl8Xj0+PFjQ2SPjo7MZh2Px3Vx\ncWFF9KtXr7S3t/eJp50zur+/b4UJqEe73dbJyYkePXpkxeHJyYlRejRRwWDQoPpKpSK3263j42OF\nQiEFAgEdHh4qFAqZRiEajRrF8Pz5cysajo6OlM1mFY/Hbd7JbDbT4eGhvvjiCztb/+7f/bv/7W/K\ntb8oJ08FK8kSARUv0AmBVvrDhDf8kARV7A7YKrbbraSHipruEdUnf9hIdHmbzUalUsk4aDo15wQi\n/hn6AOsLlSVV22QyUavV0v39vdm4qDI9Ho/BlyQd4MDpdGqQa7PZVDabNRgc+B8lL51qr9czLpTg\nASSFOtflcllX6BS0oFGQZJUzUCrzAKgWGZrB33VWqHjNeTc8MyxRQFn8DLgzLD5YDvmZCLjoPJ0T\nyAgmJALoF4qXy8tLjcfjT7zefBb0HLwHuh/2D0gQokjELtiAnKgLCAl7F3Us6AM8JYEIDl2S8abA\nbk5+HLESQZDCzNmZLhYLmxaHAA8ol0SERYt9AWzb7XZVKBQkyfYv3Q9KX54754yzh3CSzwASUqvV\nrGjlDNIFsncR+k0mE/NR+/1+2/PAsNj6OBu8I+BiBIVOC6Zzkphzshion1OEiVKd7iuRSBicy7vi\n/SEMo+ih08KFgKaG/xaOm66SqYycZXQyFP48QyxdUEnsXUl2TnGWAEVj1ZNk6BnvgwKJ9w+CAqLW\nbrc1n891eHhoZ4/3QLInzgGrU3A60S+oBb6/U4DpnLaXSCQ+cd2AqvEOKcr5ns4BR8QKKBz2JOcV\ntwPxAuqAZ0+xKj2o5qHiEGjyzNnjUGkgCyR7zhfOHQS2NJs0gLwzYggxgjwA8uiMwyB6Tj0YdCtI\nHKgHnwXU8+esXyzJVyoVjUYj605RSAOVEpBarZYymYxKpZJtMLzZiE0CgYBZopjCRZKq1Wp6/fq1\nWq2WQSVAdwSc6+trDQYD5XI5HR4eKhgM2uYimTYaDV1dXdnkJaB1eHEKkH6/r/V6rdvbW41Go0/E\nRPB+6AqoRhndul6vVSgUjELgwMKFHxwc2Eb99ttvzZpCkZBIJMwZQAF0e3trlrD7+3srfNAiUIWu\n12vd3d3p+vraOp5isWgbm+E7THOLx+NmmeLvUM3OZjP9+Z//uXHuICTOn4XIkiAGrcLhHo/HNk0K\nIRVaina7La/Xa9P8cBRUq1WbAFYqlfT69Wt1Oh21223rhg4ODiQ9HHh+PgfM7Xbr22+/Nasl+w7f\n9TfffKNKpWLvKxaLWWfOoZZke+Py8tI4zEKhYJPMGILx7t07m8733/7bf9OHDx/MPcCI0Ol0qsvL\nS2WzWV1cXNj44Hfv3lmxQMK6vb01SBuPL8/ow4cPurm5sfHQDFIhORPcxuOxWq2WGo2GceuSbD8y\nHpffs16v9ed//ucqFAoqlUqqVCqq1WoWDEOhkM7Pzw3huru7s1kCDALidzJEChHtYrGwd8S8islk\nokKhoHA4bJY5EuZgMFCj0dB0OjVB2Xg8NigdHRBnkOfFJM16vW6xgQKSBuH6+tpmT5RKJUuczWbT\naDVmCOBn//3vf2+ir8vLS0MWSarEuu+//97cQ07rLIgl76jdbpv+gHkC2WzWkhpTLQ8ODowWwyaG\nxYymBq4c4SpanlQqpdFopNvbW0MX+c6JREI3NzcWN0B5GNBDvIpGo+p2uwqHwzo5OTENEPQdQlQ6\n9V6vZwp8xMy80263q9FoZDRnOBzW5eWlTbXDmQQc/91335kT4P7+Xh6PR8+fP9dkMtH3338vScrn\n87ZP7u/vTdtBg1apVJTNZpVMJnV9fW0N39u3b1WpVJTP5+2cQ80UCgVrIOr1uiqVimlHms2m0Rmr\n1Ur1el21Ws1iKYn8hx9+MG0Wk0z39/ctVzLN8OrqSrVa7ZN5Bf9X6xebXf+v/tW/2sJNIRIiAcNh\nOhOo0x4F
jxGPxy0JswnYJHQBeNfxXlL9IBJjAAwTtOhgqPC8Xq/+7M/+zB52tVq18a7YRUiYuVzO\nKAOnuI+ODq4azjEYDFrnDDxVKpVUr9d1c3Njw3mosIHcES/BjaLIxgNP8mQu9nq9tsCMUhj+C1EX\nndZyubTuT5KpdKFLUObDT7L59vf3dXh4aAHMObMa8Va327Xugm7a6/XaIYOPc7lc+nt/7+/p888/\nt+DabDYteQJZ4mkGHZH+IJbxeDzmdQclQkVLEYXCFy0IPw8ePRqNWpJkAiM/g2ft7GB9vocRoP/o\nH/0jhUIh3d7e6vr6Wo1GQ5JMIMhnobiA23fazQj0Xu8f5tqThHl/nAMgUqfQD4U7+wHuf7PZGGzO\nvqcjubi40G9+8xvV63Xzs/NzQVec43ThHOkcg8GgarWaIQ8I0eiqb29v5Xa7P7ECffz40WxrTg0N\nBQ4wMXoH0B5JZj9Kp9P69a9/rVwuZ2LLfr+v6+trTSYT5XI5e0fA+YiW6L7ZT8ztwA7p7PZ8Pp9N\n5kQAClycyWQk/YGf5qwPBgPTs0QiEZ2cnFhydLvdOjw81G9+8xt5PB4bsQ2Kyfm/v7/XYrGwgV+I\nPFGjQ3NAZ1KYwHujnmfWA2JZBGTsD/4QExA+oiPBxgnSxBl1jpxtNpt2zolJdOAUB4hBndqDxWJh\naJVTOwIqk0gkzPXBniOW8xz4e5IMbeL98LsY8/yP//E/VjAY1NXVlfH+zGZxDj0jJ4HQ+Hw+66QZ\nhMYfYjMFD4OU9vb2bAhPPp+32SQ0Bdih2+22aW+c90H4/X57/k+fPtU/+Af/wOLKv/gX/+Jv7+x6\nIA5U3IvFQvV63Xy5wJBsDqrKPw7AzWZTjUbDeKqzszODvJyq8mKxqHg8biI1lKI+n0/n5+eWGO7v\n7zUYDMyGBX/OwSdBsdGCwaBBVnSXBwcHJlzhYhgndYBntVarGVSHPYLAQFDneTDshCQ1Go1UKpV0\ncnKi5XJp89/9fr+NkURUB9TGRCsUrZFIxA6hz+ez6ppk1O12bUPzPAqFgtrttnUG8MdwcAQ5Bpdk\ns1ldXl5+8t9jyQPd4O84Vfjw/3jOSQCMqB0Oh1YIUfBBnTBche7daRkChcF+dXBwIJfLpW63+0nH\nj12OoUhcjJLL5XR6emrFDWgOB54gy8hMukOSVDqdNgV6KpWyZ07BybAOfLCBQEAHBweaz+f6+PGj\nTWvDa04wp2tFC4LAiHkP6XTaOmCUujyjzWbzyaQydAjsc6yIBEoStVNDQRGHMDWTyRjvTme/XC61\nt7enp0+fyufzqd/vG2/pdj/Mnne5XPr+++/V6XQ0n89t8BJWJdTWQMZAmiQN5/RH57QzkiPFCMUD\nIlUSF9SO08XCvsWGSQESDAZtkAu2ROdcAqxh+XzebId0/jxDhtY4LZSS7H/PZrPmmuH+AudnRXHd\nbDa1v7+vRCJhxSCCxGQyqc8++8w6Se5PGI1G5ieH++YMBINBcwFR6AE/UyzQeFAk8pxwNk2nU1Uq\nFftvuQMil8uZmI4zHw6HrchnKiLUDQJbOv5gMKh4PK7Dw0Nz0Tghd/RDxNBarWYaEeZmoFVwCtho\nSmKxmF1yRswm8UrS5eWl0czorCia2S9Y8uDVmZqJViqfz+vbb79VpVKx5hPXRjqdVqlUUq1Wszsb\nfD6f7QemL1LU/03rF+vk/+W//Jdb+xD/h6LUOY2LjhCrBC8JSByvoZMTkf5gSyAJw29gLfnxxx+V\ny+X07Nkz2zTYHZjLjdKYhPn1119rb29P9/f3NiWKwM0QGUZ2brdbq8KYxEbHT1f/5MkTRaNRm25H\nQCWISjLkwuVymTiDm7MkqV6vm6qfhIRVyO1225xpxH+dTsc6HBIdCYM/dOBOnzwwWLvdNqEc/BM2\nJSpgPNher1edTsdQCJKa831Lss7I6YX3er0qFov64osvdH5+rvv7e+so4fiBxlutlg2PAIa8vb21\nqptBGPV6XU+ePNHx8bHu7u4sYTu5TyenCPJBpwAvj30ynU4b+uTkx+BcP//8c3m9XpvwRvLjd9EN\nMQsAK5MkQ2uAlbfbrYrFou1JhD5OTtG5d7AZOW1MJC4C2MHBgSEo1WpV0+lUpVJJL1680J/8yZ/Y\n5RcUS51OxwpOZ9HlpLPokqEbyuWyCf5I2NxQhzaBIp6BMdIfBFAUe04vM8EWVwEiuMPDQz179kzp\ndPqTy4bQgICoUWhTBED/USC6XC57Z0+ePFEymfzEZQF1xrAZpxaIZ+1yuWyyI5QE7gmn5XM6nSoc\nDuvRo0f67W9/azAx1lPOy2az0c3NjQ1koXgDzeC9o0MhflIkOhXeoGiJRELj8disuSBTkkx7Aj9M\nXJQeUBziQDqd1ueff67z83MbwoW+pN/vmw6IwVygAAhGKeyd0x7RTIB0sa9Bd/kD4oDFj4vNmNfB\nnsrlcnbJF1w9s0B+9atfyefzmTgT2o/9ThylcXGOzAUNAlVk7xBDQEC4j4R/Jk9QyF1fX9sFShTp\ng8FA0sNcD4poEKmzszO9ePFCX375pe7u7tTpdPTP//k//9vbye/t7ZmIxDnulOTt9JrychHqMNAf\nwZPX+3B5A7wkM5Gp1P7YjsDDAyLEmsBmIpCxkZweTj5fo9H4xIrELGM6YyYn8bsikYju7+/Ni55I\nJExUmEgkbOzjbDYzpT58GN0CYipJNucdhS5VKkmQz84GJMDP53Pzx3NJDPZBp30rEAh8cmEJFT5B\nARUpmgLnVDs6cQIrwQXBoSSDOguFgkGpmUzGAj9JjMQCLE2HBkyLvQ0vMB1WNBq1G8zQEzC4hGdR\nq9XMegVMxsUPJF4UvU7tArOnGUbBPoRDxDIl/WGWOj511LokPaxbdGcUPU7PNM+VAI0lFGud8/wQ\niHkGdG4kBygvLFkU1NIfBFxOGg0dBQ4Enj2KZophoFYEhghEJVkgTKVSRp9hGcxkMha4nZdI8Xf5\nXjxX9gNnjA4KNIr/G4vFPvHrQ88BUaPZQRiaSCQMIXN2b3SaXq/XxiPjT08kEiZavLu7UzKZVKlU\nsjPI55nP5+Z15gw4/2BfpVlBGAiSgnAxFouZngAImX3LaGoU64hOM5mMVquVXZ7EWfR4PIaEIuRj\nP4MEMnCHvU9h+MdCQ74HBZRT0AmqhzCQYpc9hJiPv4+Nj6Kk0+loOBzawCEu2hoOh7q9vTV3DIgl\nMc4J9UONghqDXjnfEfuIz0aSTyaThpbN5w93V2SzWWs0mMNBzuH5SQ+j2/lnhqeRw4DmKRQZmvT2\n7VsbVUxsw5GBUHM2mxli8XPWL5bkX716ZUK2g4MDJRIJffz4UZlMRs+fP9eHDx9MqAJ8OJ1O7e7f\n/f19/frXv1a73Ta+Ci49mUza7W+IkqbThxuBnj17ZmKUs7MzHRwcWGW8t7dnkBP8/Gw2M4iHiny9\nXtuBf/r0qT3s8/Nzud0P979TGTMSl+QALHnyf9w2xiHudrv2fdkUi
GD+6q/+Svl8Xufn5ybiYJpS\npVKxoEO3gMip1+vZPzNRi9nup6enevr0qS4vL7VcLk00FIlEzCaCN5wgH4vF9PXXX5t/NR6PW8VP\nFwsVkU6nje/67LPPFAqFVK/XrWtimM3Tp08NZbi4uNB8Pte7d+8sqNGNxONx83QjlOIGOPhp4LZC\noaBMJmMIApa/aDRqwqdAIGDIB1BcIBDQixcvzG/O+2VAEiKicrlsQdTv91t3fXNzYxPFQIUkmYjU\n5XLp9PTUrkJmzKzTKcJlGwS+2Wymzz77zIbJIE4lCZ+cnNg1vl6vVzc3N/rmm2/0+eefq1Qqqdvt\nGvTKZEHmqM9mMx0fH1tHBAKSSCQsoULNnJ6e6ujoyNTMFJwkTWiTRqOh+/t7JZNJu/nPyfeCdmF9\nRNOwXq8Nuv/iiy8saT158kShUEjff/+9BUOGntCdzedzS7AUTNvt1mbb53I5K0hOT09NlU9BxRW7\nd3d3xl8fHx/bWaewffPmjabTqd0iGQwGLWH4fD4dHh7qyy+/tGuFEaeu12sTrkHxwPOzN0EA6RoR\neB0cHFihWSwWbSAThblTfc1FKl999ZXplLjsh2t28ddjnf2f//N/qlqt6uLiQpFIRLVaTYVCwSym\nODj+uDClQcnn88bZu1wum/B4eHgor/fhdk1oIJTzUCycYUR6xWLRbrGjKOf5MboYlLTRaNiV2Y8f\nPzbNzmeffWYiYopf4kM0GrWGhKSPHgg0wefz6dWrVzaiHJ0RN925XC4dHR2Zw+l3v/udBoOBObOg\nepn7kkwm9fnnn3/iaUdofn5+rng8rqurK2WzWb169Urh8MNV6kwY7fV6Ojs7k8/n07t37ywuEHN+\nzvrFfPK/+c1v/jVVGtYHuM16va6rqyvrfqi0eDj39/fWdRM8Op2OKXERNVB5AoMg1KGrGQ6Huru7\n09u3b80f6fTCt9ttNRoNnZ6eKhaLmdKbi1kIJvBBCOYYXsP1o9Vq1YZ9XF9f23fa29vTdrtVvV7X\n/f29Va28RKZr3d/fW1UPrcFITsYuOm/fYnylz+fT8fGx0R5Ah07LELAYE8dAGiSZehm153w+N9Qg\nHA6r0+moXC7r5ubGgi0Q63Q6tWeBBZLuvN/v6/b2Vuv1WqlUyjoqhCYEp2w2azwXFw05hTwMW0Hd\ny4AVNB7400F+KBzg2pkqBecuyZTAb9680Ww2s6CGnfP+/l7lctkCLF0h1rVAIKDz83MLeCh9qcDr\n9boVdtBFTI0DhmcATqVSMQqDCWuDwUB3d3dmcwT1oHtstVq6urqyAkKSccFciUo3s1o9XPDTaDRU\nrVZNDU13glNjNBoZvQH/yvx6Csn1em0FZLPZVLlcNj0Moterqyvd3NyYToHiBviTCWLxeFwej8f8\nxDc3N7q6upLL5TK0B1EUHRfXcTJBDnUzHRvDfUBb2u22AoGH2fEIuiaTiSVH9kQikbB5FIyZBX4m\nvvC9QR9AIfldTHoDUaAD5PYyxJ1Axa1Wy6gFEBPoKBwr/CwoFfYmt/IBNSMoo9GBV+d8gp7kcjlt\nt1uLn5xttCagclA00+nUYgE/n+cKHeb1PkyhY89iVQbtgTrkshYs1Vza0+12Va1WDUUk4eOYYbIp\nlCAaE2gYPgP/jgtv0HpIsnPBPmevgDiAZPGu0BHxd3AbUDhjgWZP4JagAYUOQnhJnGImA5owZi7Q\nKFBYJZNJHR4e2kS///Af/sPfXp/89fW1QWEc+ng8bnAeqmiv12sQF8lju32YP399fa1isahAIGDd\nkcfjsYOAShzryGbzMOIWxSiWu1qtZhAVnmICHf+XroHOBh9ls9m0YPDhwwfj65lmRGcDPTGfz/X9\n999ru93aIA4ONVCcc4QrY175Z7j9Dx8+WCFDIEOZTUFDQYC3GxUrN/A1Gg09evRIXq/XgiJTzRaL\nhQVCpxoUzhuxDhU0gRclMxOhGM2IMBEOjQPFwA1J9rM4YHxvxuqiYKfLzufzllRQlBeLRZuM5xRo\n4WBgUBDwrySbisfwHrhFkgwFCDag6XSq4+NjRSIRK25AYugOer2eTQOEu+v1enr79q1evHihQqFg\ns7ERVaEF4FwQvO/u7pRIJPTs2TNLktBL3HPgtF3BQePn73Q6ur6+tktoeMY8Bwqo/f19E0IShOnk\n2LtoE5z7j3+H1oCuEnjz6OjI/NlobCjgoUp438vlUvV6XcViUalUSt9//70VkaA1JBtoEdwRCMQa\njYZqtZolycFgYGgNWgNmF0SjUft32MIoHBkyxZhXFupxJtCh+Ia6IiE5kQZn0obP5Vk1Gg2LbSR/\nRlLDzaP/QXPEWWe6IHqa9Xqtt2/f6uLiwsbu4jJhTgGFC50hdACN1M3Njf38ZDJpSQiNkCRza4Cs\nIqTDW494GBsaRRqoAI6QzWZjtC1aBEbAUgQw4ZL9AceO6px4QlMCJYHaHppqPB4bMst5Z9+it8EO\nya1+5XLZYiIOimq1aqJB9jjjgqEw0UjgvDo8PLRGkYa21WrZZ2IfsTehOUGQnbfuuVwu2zc/Z/1i\nwrt/+2//7ZYADuQlyR7c3t6e2WzgROjiOCxwjVgVEI1h6WHiEqIYgnyz2dTV1ZXB086byjg0eExd\nLpf+zt/5OzZWkGElTi8/XTaCE+6Kj0ajNhscrg/+jAlgwJU//fSTWSzopJ3Q/OHhoXK5nF2p6/F4\nlE6nlU6n7fB2u12VSiXt7+/bdZ5ffvmlhsOhzfXv9/u6vLz8ROkOl40amWdOF0+SozsAFgfCevfu\nnd1PzXx1rB9QLWgB6LyA9BlxiaiMTuerr77S06dPLakRCLrd7ifDjEAlEKq43W6Vy2Xd3t5qPB6b\nb5exrCcnJ3YbVrVaVb/fNy88c6VBBHB50PHQmTCsxe12m7iG7jMWi+nFixfyer2q1+vWPdERAJuy\nX+LxuI6OjqzwrFar9jngEpfLpamTeTedTkfhcFiFQsH2LCpzPL8MVeLP8+fPdXZ2ZuNJuVCIguLx\n48f6h//wH9p1osvl0sSMKNDhqRkKwxlj3CsaGYSCw+FQhULB7muv1Wp69+6dQa98b4Ii7gmCIEkO\neDmVSunDhw+aTqfmigGy5X4Jkj1BlnNE4vD7/ebdn8/nev/+vb0LunHmPnCOvV6v3r17p81mo+fP\nnxuHC/xbKBR0fHysfD6v3/3udzZHwallYNjUwcGBjo+Pzd1zdHRkaCbPsdVqmdYB4aLX+4dbyvDU\ngyI5mxo0RDQ4dIloDUKhkBaLhTk5AoGAnjx5ovl8rt///vdWTAOr4ytn0iNn/Pz83AoJUEE6Y7Qd\nfC+ekXNIEMUaiAqiPWILMbfdbuvi4sJsdLiViOcHBwd68+aNPn78aDNImFC62Wys4Ds8PDR7529/\n+1v5/X5dX1+b4A6dC8UbxQF3iaCvoBDH2odbAQU8qB2FtFMU7tSCUFxhacSeydhvRKOcHY/Ho7Oz\nM3355Ze6vLxUo9HQP/2n
//Rvr/DO5/OZ2Mg5KcspWsHa5HI93ACH2A0FPAfXOUwlGPzDLVwMKkCA\n55zExPW2WIqoCOFo6FoQ1Dl/FzAx/46XSgUpySpQfhdUARzmYvFwjSEWGpIGjgGGQMC9ESicoih4\neOchYyPRDTt986iUOfher9fUnE71KkgAh2Q2+8N91RRbKG+BVQlo6A/W64cxlcVi0bpp3hNFmCSD\npukeQUgQ8LBXCOZOwRXBC5EbkDrzp/k92AIpFinqQG4YqcxnkGR2H4oJOD2njYmpeAQ09gcWKQSf\nTNziwDvV2M5iBUiQfUqw5hkBndJNYVVzTkpj3CV7mT3K3sFRQaHE70edjKOAPe9Ut4NwsQd5RyAh\nfHcgWQps+Nu9vT1LMni6KR7oGqFTsMCiUsfyRJH4x3uBz4O2ge/K5wfNg+ZDCMZ9GJwbRKq8A5IJ\ne5KiEcGk8/mSZCmCiTtONBFqj/NIMgWhZP/zXaENeMeca+aIwHW7XC7T5NDxcueG9NB5QzHRfUKV\ncYbpvIkNFCh8TgR0LJ638/nQafL7mQsBRUTyBJHA/kncoEtnBgV7gIK33+9bTHD+bm5wo7Bnj5JL\ncFrRkPG/UUixf50Dy3AAUKxwDmkyne4FEjJ7l7iEMBIRLb8T+6EToeD9I2bk9zNxkNzhtBn/nPWL\ncfJ/9md/9q+paOh2gIThMW9vb/X69WsFg0F9+eWXxpED+6FqjEaj+vjxo00GYgoeMJWTm+/3H668\nfPHixSdqfW51o+piYp0kQxU4qMPh0P4OvE21WrUZzG/evLGChORHEVEulw3WdA6yAQrz+XzG5zg3\nDt00trzhcKhWq6W7uzujIPb29tRoNPTNN9+oXC5rsVhY0UC3zFAcJ1eM57Pf76tarRo0jtIVKww8\nVDweN798tVo1KBerIoIxhC9w6SSSzWajRqNh8Dcb/Obmxrp9Ago0DJQD0BVIBN+NiWnffvutwYKS\nDHrnXcIjAuH3ej0TWJFISQYkAVS7BKWbmxuD2rAe0l2i7ubgojEg6XQ6HUmyomI0GqlarapcLqvd\nblvBCefstGxifUJngF2Hwkv6gxWQd8IMgUQioXa7bZ0nwRM9CJfUIHCFnkC34CzGOLe9Xs+GJblc\nLmWzWQ2HQ3U6HRvVyyAqBrlUq1XV63UrbOmWSDzL5fITESp7Ce6YhOrzPdyBDsrD1dRoCRBpgsyQ\nIHJ2rpIAACAASURBVOmeGDtdrVYtYEI50dlRmMPZ393dfQLNs6fRz8B/UxSSpEGg1uu12Rs/fPhg\nw4GeP39uOhvn8CzOlySjF+HsuY6VrpoiADpMeigkoIQYBBMKhQyNZLAL7544QFIkHqNX8fl8Ng2R\n6XTYe/m9wPB06SR7EERuC6SYGY/HRifd3d1pOBwqm83K4/GoUqlYguNz48qRZPA+NCSxFJSKQhkK\n7c2bN2ZnhZKEUuHndLtd3dzcWPIn5qDvYRooNNDt7a0+fvxoPHo2mzUKkVjNc4Veo6AgDlDYOvUV\n/PtsNqtyuayPHz/aO0ylUkYb/vt//+//9nLywHxwbdFo1AQqgUDAuoOvvvrKeA4U3PAfWEUQMbDh\nA4GAVUNcJsAQCoI8FXokEjFoO5/PW7BEsMZ/hyWHAEfFyRhc5wUPzgtcsBwBu3a7XbsIgw6Uaht/\nNLAlnnO4JpKks1PZbrcqFAp22MLhsM7Pz3V7e2uBnEPh7H6dN4Ah3lqvH26OKpVK9h2cNka6qlAo\nZBUzzxyoH2ESTgX45HA4bEEaZSq+eyrl4+NjS8pAznhqnQm9WCxagUY1D292dnZm170yeKZcLiud\nThsXiQMDaNY5gQ4KhMDGe6WrJQmg9gX+PT09NQQIWBSoD1fIdvtwjz0cKsXnYrGwAR0UfolEQldX\nV5rNZjo9PTUlOO8R1CgSiZi+AUsX+4DCVHoobhD14ctFgQxUSndHsYJwD1cJFk3+0KUVi0VT+BMY\ne72eUqmUnj9/bpw2okzOBUGZkc109cQA9hd8cigUMph7s9kol8sZauWciwD0D/eJTRBRl9vtVqVS\nMbcF+5ris16vf6IOZ+gO3SOzKujaKQjpKkFoAoGAJT0ueWJa3eHhoe7u7mxv4dJA/wKMvb+/b58d\nId1wOLTiDSQSfQoIJEhBPp+3uevMUMB6BgLgtFCSLEGQ6Ojh7AOBgE22g75zWjNJhgzIYtwwSZDh\nRAjLXr58qeFwqPfv35u+Ci49GAwaCohmCrdMIPBw9TNIHe/86OhIg8FA5XLZOnzuZWBkODoS9E3O\nWQdut9toBeB4Cn8uuUIz8fjxY/3www+aTqc6OTmxM0bBeH9/bxfMUGgdHR1Z/uB7sgcikYghFRS4\n6Ek2m40V99AfP3f9okmergyYDiEO9qvlcqmXL19quVxaxQ1/yizvH3/80YaiADeycUmIsVjMuiE2\nHvOJnT7FfD5vqu+TkxO1Wi2Vy2VLbFAC2I9ms5l5fVOplC4vLzWdTpVOp5VKpSyR0H3QoaA2xm/N\nwBK6eaeNB3EayZ/PAiS+2WxsQM2PP/6oYrFoAROLE7w1xQQJgWtZR6ORBTOPx2OTs5wQOoVCLpcz\nLQLPHOieLhAeiaEqX331lUHmQNpc4nFwcGCV//HxsXVMJBkqYScEyM1VtVrN3jldNUEbny7wJqI8\nVN/O4SbOZ0kCKpfLNuISlAlYbj5/mL19cnKib7/9Vj6fTycnJ4ae0OEQcOkQwuGHq3AJ/MDGm83G\ngiI+/1QqZUNGEEc2Go1PoGbnYBr0AKA9h4eHRnF5PB4TUPl8D/eAo1xm7+LGyOVyBmGz5yeTid0Q\nRiJwdt+PHj2y2wij0ajNAtjb29Pz58/1u9/9Tnd3d/buocCA2Ul6zCQHhcGjTVHLjYhQZVxqQ1cZ\ni8XU7/eNAgSRwJ8MGiI9JLRkMqlHjx4ZskESqVQqOjw8lM/n0+3trSTZzAhmFUiy5w51A5dMEcKV\npJwphgSlUikdHR3ZjYMUsWhS0DOkUikVi0UL/EzsbLVaNk3N7XbbZwPmTqVSlrhwWTBSFlQRioK4\nQNyhywT25+fTCKB8b7VaZmfk+1JY9Xo9uz2QnwEFAAoIlfLkyZNPtBuJREI//fSTNXygRxQIzLII\nBoPmSonH47YXS6WSjZSm+bq9vZXH49HR0ZHFYJ4LugQ86CR50B+ea6vVstsHK5WKQqGQTk9PDWG6\nuLiQ1+s1+yjjiAOBh5vnmJDJHSzM9eA9ERMp2LPZrDkgQBG5LwK6leLsb1q/WJIHKl4ul3r//r1V\n6ZJM6DOfz/VXf/VXxgvTeVar1U+GXJAwgay+++47E7YtFgt9+PDBJp0BybtcLl1dXZktz+1268OH\nDxbw3717Z7Axgj6gcTYDwg24cTop/NnT6dSEaq1WSx8/ftSHDx8+mV0/Ho8NMkU8hX2s1WqZxcTp\nNmBTEiTL5bIVK0xY40A6ITwOdDAYNOU+gy7olPl3XHVJEEVRTHAGbgKWrlar2m63K
pVKkmSX53Q6\nHd3f3yuTyRh/SjeMGphE+91339k7ggdENT0cDiU9DJdhnC0VLp0Foj0OsHO0J50y9NC7d+8+mRPu\n5NolGdR2e3trtAPw5WazUblctsMKVMtgFQoT0BT0Er1eT7///e+tgwKuR+nP/HFEXcCiXGaBXQ+K\n5fb2VuVyWYeHh8pkMqpUKgbJA7+iQoZWQMUMUtRsNg0mBba8u7tTpVKx7oahJNPpVLlcTpJMfIcw\njitwmXmPoyKTyViR4fF4bG5Fo9GwJMHvxKnCfsnlcvYZRqORbm5uFAqFjKcGusVR4/F4TExJYsD/\nnE6ndXJyYnulWq2a7gRkYrN5mC5Xq9V0d3dnnwGueL1+uFDmL//yL202Be+vUqno4OBA5+fnBh+D\naEDJOKlA6MFCofCJ2hobnSQbisL+4yxDidBl89zu7u5sgAvdIg6PyWSit2/fmuU2EAioXC6bhTmR\nSCgej5uuBgoVego6CwqRmLfdbk3YCQQfjUbtMqT9/X3TIlWrVSsmaQrevHmjVqtlsZekRxxGF/Gf\n/tN/Ujqd1vHxsVGhJG/4bgpBeH6m34EGMdIczQq6KETUxJ5Wq6VSqWSiZs4iOQia4bvvvrOJecPh\n0NxDxEi3261arab//J//sxVS3333nb0XJmkyo4K/R07kfTN7gOLfOV/g56xfLMnDU3AQ4bupwJlY\nh0AJ25wk2/QM7EAgRhB0VtJ/LGKiWgX+o9sF+sJrTtBg+AiCEbhsj+dh9rvTtoTojk4DrkySJcZc\nLmciNqpjoGHEThQeJETnZRlOf7HzWcJbwQtyCKmwSSqbzcbU5hx+Dqazguc7obB3ino4jFSSJBRg\nLTjpQCBgiRN7o6RPxE/OyWloKJwXMkBhcBDojijMJJkIkYSArRA4eX9/3yA0p7gO4RCBjv3IBC+n\neh9tBLAqym2KK2fB5BSEsQfoboEtEWQh/qTbcQY2pvMBlSK4Y88zEyGfz1sxQeKi4+b5OWmn9Xqt\nUCikXC5nGpE/LiBRB8MN8s88X6fwx6mrAQbOZDJWaDnfN58H2gvhGmfEGbTZg3wGhIwUSdjanFPO\nnF0OYi/+nXNWOkI+9qxTGMW44el0ahQA75r4xPPldyEEJFFw7qEc6NBxpDSbTRPboqngM0AP8Jyg\nIHkOjLFerVafICvEM6cziL1L8mMSIWcUNAgRaTgcts8NOgOVsVqtjJ4qFosmDqTQp/NltjrTJEFU\n0SwRz6FaoZCIcfx+aMvFYmEFXTabte/MM6HQhP5C+MnvxuKJiJuFOI9zx/6CisWyCS3G0CaEq5VK\nxdAR9hoNC0K5fr9vczVwFkiyf2bfEA9wN/EMwuGwCaHJUYiWfy5k/4sl+UAgYGrPg4MDbbdbVatV\n8zsnk0kToXEQGQzy5ZdfWmeCPebk5ETz+cNIT3irZrOpQqGg09NTm3oVi8V0f3+vDx8+6MmTJyqV\nSmo0GspkMjZnG9+sJBv8gtWOjg2LHoMiuFYQ3hdYn4TBsB2UuXTSQMQMo8Dacn19rSdPnmh/f9++\nq5OLwU4CBAmXxTMA0mbzcm/2er02FMFZJWJpKxaLtvGcNpH9/X25XC7rzoGLUQcTMOnOi8WizVyn\nK14sFvY8gW2ZfV0ulw2+ZDY8xQyBi99Fh1qv15XP53V8fKyDgwONx2N1u1175olEQicnJzo6OrIK\nnW6HSWJQRlituDyC+7pJXBRHvCMoJkkmrmK/hEIPN9uxt4D3k8mkCTedQz+cRRuJyufz6eXLl6ZA\nRyDEnoKG2tvbk8/nMzj26Ojok+BBsup2u8pmszo/P7ckAGLAZ4G+glsEJiYJU0DHYjHt7++bOtup\nEM7n8zo7O1M2m7VnDvrAvuO+Benhsg9shGhOnJZQusvPPvvMrv7Fx05CBNVx6mYoCuHNK5WKfvrp\nJ3355Zf67LPPVCqVrLADaVmtVnr8+LH29/dNBEf37VT6h8Nhs0NyR4RzHC3PhItnXC6XCoWCwflY\nxfjcs9nD6GI0LZxvzmC329VkMlE2m9XZ2Zld8ELxhGDy8ePHphBH/5JOpy0ukHivr6/tsinGsb55\n80bNZtMQLWfjg9Cr1WoZggeyQHeLZx9tjvQHu24mkzF6i4KV4iAQeLgVbn9/3zpm3hmNCxezgMqA\nml5cXBg6hPNBetBKHB8fK51Om1aJOQRut1uJRML0IyA15AYSOO6m8/Nzi6UUTBT/o9FIjx8/tuur\nGQjkvGyHvYLOIpPJKJ/Pm1aCsw6aiuCY/QmCyjnlO/Jefs76xZI8lSsKVfhRlNwMusA+QrWF4tqp\nYF4sFnY/uSTr6rh9iApPknFPzDsn0MGLONXVTosRHR5VKQHC+T0Qnzi/C90DqABeZl4olT+HlYKA\nCp/pVovFwuZmg3A4IVa6PoRIVKkgJHQriGfYsF6v1zoInjlCG7hoJypCsqYix1aFNoGRr8Vi0Tod\nkkUikbBEIcmQAKgNui8Gw3BRBM8dbzDdIJ0T8wl4lnQJToGbcxYAwQK/rXMWu9POSXJjz1AkUY0j\nMiSoMlyFBEq3xd/DPsPeAN1gj7JvgPP437A5Au05VdAkRaiC7XZrf5fPDapAseVceNFJ2Og/2NtM\nD3Na6vj8FEd04D6fz9wvJFkQNLfbbcmKzi0cDn8y15vzTMf0vzP3ZruNnum1/yIlihookeI8a5aq\nSqWqtst2dzudDpAOEiCNnOQ0V5C7yP8kAXIJuYAgRzkKEOQoQHfSSaPbdtmuQVWaSImDKI4SNZGi\nxP8B/XvqlfeBawM7qAgwsne7XCK/732fYT1rrcdlxQPP88zoSmkUIEVy9nmPw+HQOj6+Lx0X555z\nwmgjHA7bEic6ahAdzhJnFtSR3w1qCKpIJ+4iLOjaQRN4P66GWpIhfcxpXXmaC2ljqAJ3AtkbxSmJ\nKZVKKRqN2j56Yh/vk+fO+UXLzu/h3VOE8a44hzxjOlOQIVBNeCgUu5eXl6Yk4mzxfEF4iFHEdO79\n5eWlqtWqnUsKMO4ucYCCD+8B+FA8WxelA5GZn583EiNmRiAgxMVOp6NarWYoHJvkYMm7mw0zmYwl\nes4pSCqx372n7viUZ0NxxTvin/f5+aBJnuoGCAK4hJkTF1Z6V0lLo+ofyHB6elr9fl+FQkHj4+MW\nPG5vb41FCgwESUaSMe4xTUHiQ0csyQ61q7MHnoEzQBKFN8BnBt7CCMfjGVmOzs3NKZvNmuaay0Rw\nwFsaWJrqGZlKMpk0qRBSjU6nYysP+Z5AxW4hRFIFaido8WeBe2G5Uq32+32rTt1ChkDWaDRMTeCu\nYcRNqtfrmbkMSAAeCEdHR1b1Am1yCSGnAJkh3yHYATMC15EM+M7n5+dqNpv2fOicLi4ubC57fn6u\nxcVFC7gUPTh2kegl2eyUd8Z5YNwAAuFCdnwmEgjcBLejv7i4uEcSvbi4ULVaNbYyaohgMGhjEXe8\nwPOG+VwsFu8hCgTPZrNp7xPlgltw9ft9
e1/8/yk4CKKMI3A7w7Vsbm5OkUjEEnG1WlU6nVY+n9fU\n1JTN4lnC1Ov1rDCkgwchgnvjchdwgmPOD3LEwik3qUqy98hYDv8AJI0kZrdovLi4MI27u4WQRAGp\nlLtye3tr/5cxk1swnp+fG6rI3zs5OWkFNe6Q7n3l/xJL+Fzu2I1nQ0GIRK/dbiuTyejBgwcWY0k8\nMzMzJgVG0tbr9eyZMyMnifR6PZVKJfuONGLENcaJFHIUcy4h01U53dzc3CMxgxQVi0WNjY2p0+lo\n8bs9DHxXxgTuuJX3en5+rkqlorGxMSsU4c0QGyEvUry2Wi3bucCzZtRJpy9JkUjEmhneb6vVMn4V\n3g/tdlt7e3umsiJZ04xAgkwkEpJkSCXngs/gkq8lGfoJuhiPx40rQSHlNhs/9PPBkjzyt+PjY83M\nzBg8ykur1Wq6uLi4d/BgcgOJoDmdmJjQT37yE2NBEtwg2E1MTJgtYCQSsYqPqvrg4MAqI4LC6urq\nPSlLMBhUIpEwmB4CBdIpmJ6SDHLz+Xy2KGdmZsb0t8lkUqFQyGY6yWTSiIEE57m5OZv9wTR1iUiJ\nRMI+L/NP5HSrq6tGFMHoJRwOGzkLiDSdTuvg4ECDwcC8l+lskeW4Xdjt7WjRBkEqk8kYKVEaVZlA\n0PF43J55KBSymRXqCYxfcHKC4wD3Ym1tTY8ePbIEeXZ2ZiSX5eVlSdLe3p45H7qICZU/y1fwfI7H\n47aBjz3fSFRIEszyYGnH43EzEPJ6vVaBQ+5DHfL06VM1Gg01m009ffpU4XD4Xhf06tUrUzUQIFl+\ncXJyYkUIzNpsNmsEu1AoZNpuTGQKhYLN1pmr8q7GxsaMCMcM0p09sqTn4uLClrMAL29ubpofAl3s\n69evtb6+bkUagZFnQnHj8Xi0sbGhwWBgbHqK59nZWaXTaUtKGJC4BTF3PJFIGGxKYUn3HwwGLVjC\nzK7ValpdXVUqlTK0A7j/+vpasVjMJFjudkveu/SOnwHRc319XZOTk3a2x8fHjUiZz+etEKTIQhLK\nqIQz3Wq11Ol09PDhQ9s86S5iwZ4ZuL9er6tcLqtQKFjhTlfubp4Ebbu6ulIoFJIka1KkEZqZSqVM\n747XO/wQYgwJKZlMyuv1WnLsdDo25+de3tzcaG1tzeRcoVDI4G74Hq9fv1a1WlUymTRnvomJ0YIp\nFl9FIhGD0EH3+AfpHEt0jo+P1Wg0THIJV8CVEMbjcSvsQepIstKoSIvFYnr48KHFE0ifjFUlGZcJ\n8iTfH+SVpUSVSkXz8/P69NNPbRzFGKLXG/ni+3w+U1shn+X8gkIwFkFeTXF8c3OjRCJh94OYCtky\nmUxagfU+P94f/iP/Mz9UZRxC4D0Ss/SOtMIqSLotoCLmFzBKSeYwyVOplOLxuBHiXBibeTaX2yWp\n8OKpyqi2XBMJkIjvw/kwcBlDsEmKDpzP5kLBQDiwWZHZ4cqGmQyBEaYvBYw7B6NjIXAiHwIKl0aQ\n5dXVlc1t+TxAyzDwo9GoIpGIXVY2LLnb7viHMQtrcqlOkSeCwvC8SX6cAS48HAekO3wPOpfBYHBP\nnyy989G+urqyABAMBi3QwKNwDSjgBkDsoRuFgMi7+P4PnQWzWfgXoVDIOitQDpfX4RIxgSDd5Mac\nnfe3sLBgZx2kwyUBQg4iwFC4un8/z5ZnROfAeSFwQTST3s2Ce72enWkX1qSDABaFJDY7O2u2nGiL\nObO8LxzUIM2iw+Y8gMpQRIDYuKgTs9Hb21vjonBneM6SbPTGSAtDI/ggzDcDgYASiYRp/ZnH4lVO\n0chZ+b4cjMYDuBx41+3QeO6SjBEfDofvQezcX94ZSYo4SdHAGXRRA7Yegm7AMcAl0+PxGCzOKIf7\nSBzkHkvvrMWJca75FHcENI/vT8y4vb21cw9ywd2Fn8D7ofCbm5uzJTB8brz7XeItRS33AZQvEokY\ngZt/aNDoxAOBgJaWlgw1cEc6xATeFd+Vopn3yxnmsy8uLtqYiXgMYVySxUQW84BYU2ATGyi6KBhB\njHgnFDTEEBfaf5+fD9bJMwefmZlRsViUJGWzWWO2kwBubm7ueT+3Wi3t7+8rGo2aRePNzY12d3d1\nfHysQqGgsbGR5/TDhw/t4PLCSWywfpnZESD493gmI6+o1+v63e9+Z9ApF/f4+NgOAgm+WCwqk8mY\nTzhM4EqlosJ3S3UIQK6LH4gA5B+638PDQytIuHggHQQUSQYFuUxhSSqXy/ryyy9NygeZAxLg1dVo\nS93R0ZGq1arJ+MbGxtRsNnV0dKRvv/1WPp/PiDxIa5hLs8hnY2PD1nYyg0W2wlwNhYL0bi7P+AXW\nOsTJb775Rp1Ox6RpkP8IerDoj46OFAwG9fjxY7sAdN50/HAggCnr9brNoNPptD7++GNDiqrVqgUl\nAhmb5pDcsdHw9PTUXK/gDiCXA7rlLHPWb25u9Pz5cxsvdLtdK+woHqrVqiqVilkwS7JzcnBwYMEC\nyPvm5sagXTT2QJ43Nze2+IkukIByd3encrms+fl51et1ffvttyoWiwqFQsYw5++V3qlAYJEPh0Mb\nP3W7XZXLZZXLZXufFJCFQsEW90CYc0mW7tISpEoU17Vazd5fq9XScDjUy5cvLRhLo052e3tbvV7P\njGz4zsQLCkUIXGjYh8Ohnj9/rlqtpsvLS/MxZ/kSUG6v19Pz58/vkcEgRsJ7gMjKOfT7/SoUCmq3\n28rn8/Z+WAA1MTGhcrmsr776ys7s5eWlrUImVpJ8iYmSDNIfDAbWBfr9I///mZkZ21xXq9WsuGKV\nMGNRCGB4QoBysOns7OxMy8vLWlxcNCdBxoStVks7OzvGy8FXAxUD5NRms6nd3V35fD6LERRNnU5H\n3377rYLBoDY2NoxRzlZC0EwIs8TNo6MjWyKEkydIDBD97e3IZRB+F4gG8sTXr1/bmfD7Rx70v/3t\nb+3cMjLd3d29t9vj8vLy3jY8zkMkElGhUNDR0ZFJITFqGg6H5gKJ5BC0ivEFCCwrhU9PT7W6uqpQ\nKKSDgwOdnp6qUCjo8PDQyL4/9PPBbG3/7M/+7G9IjFT4w+HQpDPujJJun2AZDoeVTqcViUSswwT+\nJeC5nQ/r+gj0JD8XIXBZy8Ph8J794ebmphmLEFRchj2zV+Q0QNKQ+zBqAUaGWQwTHTgrGAwa6sD8\nkeSOExqHD5iWQEh3wQGcmZmxQgP2P5/z9vbWOi4uOP89picQ/9zRQyQSUSKRMDiPAM8hxgCm2+0a\nkjE/P2/vDmKYJPv8dPYUNCAlsN9xv5N0T5ZHt8MzZ4lELBYzuBazHo/HY9vnQEXohGDJg5LQMSD5\ncWVrQOLuZweahegTCATMKwBSjST7nSRwzouLHPFnOQ90D6lUSnd3d6ZVv7kZmZxQFDDPA12h86Oz\nCAQCCoVCdp+A/+iGIBnmcjktLy/fU5Ag18M+muKWzh6WtiQLaig1crmc5ufnTSvOu2V+SlHKGeLz\
n8swh88HXQV7HMwO1mJmZUTabNeMS3hcxgd8HHM9/R3HuGlHNzc0pHo9bIU/XR8fI0iy4Ee695f3h\nycA9BaFCjXN2dqZqtarx8XHlcjltbGwYgZTiy+USUGCAohHPiJWcH4iMzK2524zPWMwEv4HzH4/H\nrRmi2AZRBdaHzAdiNj4+ro2NDS0sLBh5E5SHWAG6RJwHqYUQyNwehCafz5uhEcQ1RrluIYe0DK4O\n8YtzwvlnmQ/IGlyYYDCoJ0+emOcC3Tv/PaMoikW/329OmB6PR2dnZ4YGcp5BBGlghsOhcSD4bHB3\nxsfHFY1G7yV57imoCkUBxReNVSaT0ebmpqF4//qv//qDtrYfLMn/+Z//+d9wKJaXl813vdfraWZm\nxkhSc3NzNlelQ3/69Kmy2ayRiehu4vG4tra2zHN9b29P0sgiEAmcS6DLZDI2Z5dkB5Hfh8Me80gO\nMEGGUQLdJyMEdw0pHvjpdFrJZFL5fN5Y6u7vyGQyisViNnvZ29uzw4zMBVhsampKmUzGDhcdMyTE\nq6srpdNpLSwsmHkJTGtpBBERsEjkvd5o89/KyookGbGKuXQ2m1UqlbJ5tbvBKh6PK5lMan5+Xm/f\nvjXWaS6XUz6ft+qUpDs1NWWb3JijTU1NmdohFospnU7bMweym5gYWRQDq7KpaXx8XGtra5YMMRXB\nVAiv6YuLC/ORD4fDymQyWlxcNK7DmzdvzHRiaWlJoVDIEgWJF0kXUCyb787Ozgxqp4siMFJYIR9D\njUHwnJyctOCCdpm5NEUI3SUQ99bWlhYWFkyLe3Z2ZlvhSJQkuFgspnw+b+oOCEQQOK+urpRKpbS0\ntKRMJmPQtzQq7oLBoNLptFmEQgrju5LI37x5o2AwqFwup62tLcViMZsH4/keiUSUy+XsrO7u7kqS\nFWjY6lLYIunCZQ4OBZanfL5EInHPepYihO2IJBe3GGMcV6vVzD4YySUIkWvfvLi4aFwiiIS5XM7k\nhCAWbA3jvCYSCYPkQU0KhYIWFha0trZmRRKwNsgcCCZjQd4fkjaXXzA3N2fSRQx6IBgmk0k9fPjQ\nPOF3dnbU7XaVzWa1urpqrn80RCReLI8hCOLhfnc3cpB89OiRlpaWdHl5aWNE1Dk0ChgTjY2NaWFh\nQVNTU8bu93g8tknxyZMnNhYsl8s6Pz/X1NSU4vG4OXqen5/fO2NwDgrfGZIlEgkrqHw+n6LRqCGA\nKIVmZ2e1tLSkhw8f3vO7oDFiVIAVL89vc3NTHo/HCIs0J8RuznmxWLSY+fDhQxsXwhMYDoeKxWLa\n2NgwFUAoFFIkElEymTTvDJA7F9n0+/1aW1vTs2fPrMH553/+5/+93vXlctnmJlxMIDk0k8BC0rtZ\n1cXFhf7zP//Tqj9kUQTSer1uSysymYz8fr8tEyGAcpEmJibsAQLjwjwGGgeuooJnhzovjOTUaDRs\nMQ4vFKc2r9erarVqsghJ1qFdXo52tLuVptfrNV0tBjwwhem2IYqhC4eJTWKA8IFemdkQcD2FFPOi\nbDarfr+vg4MDY6vTjdzc3NwjmIyNjZlFLGMP3ifvpdPp6ODgwBQAwOfMniSZjMaVqwHNudXv5eW7\nbWEkS0x/cOriebsrGnd3d60jQVoIEvHmzRudn58rEomYtwJzUbp9oGKQCmRCLiMb2I1ZNqjF9fW1\njo+PjWzJcwbuZzSEFh+mLOY81WpVx8fH6nQ61rlhmQp8TRClo8YBjATqzu6AlUFYUB8wk3ThYp2+\nkgAAIABJREFUQoh3QLLdbtfQJ3zACYYej0fFYtEIVjguEgRBOiC90QXz59y9BhTrJAVQIUZtdC88\nJ5fNDLrAaA1VBR0tsDYSM2B1ukW/f7TQivfE2ccOGNLd2NiYkQqXlpZMiunz+Ww8AFmKWMASmJub\nG+s+E4mEOeHxXlCDsLwJrgMdOc0HKAXnHDUGbHi+E97+uBCCNiwsLOjycrSf/urqyopWOl6eJX8X\n3Sfcmaurkd3q/v6+wuGwLYBicVen0zEin7vlkmVTdMI3Nzem0MAohmJjMBgY0RYLW2SHvV5Px8fH\nkkZjL1CXVqtlZk/uyJJuv16v2yyc50HSvrq6svc6Pz9vCNbV1cj1slar2TuAe8V7gp/hoowXFxfa\n3d29JwEmvjUaDZMTurJYEC8ULRRBNAWS1G63dXh4aHnpfX4+WJKHbONaNNLdwL5kBgUBYXx83F4+\n8B2QNTArtp2SzAmMgwVkTAIHAuFiQ96QRlU0bH2CCAGLAO/CdnQH09PTVu3SiUiy6r/dbtuOatz0\nKC6ABelUgIGB8VzEgXkhBx/5CPAaiYViBHIh5DpgOb5XMplUuVw2lYCb2Cgm+v3+PdkTl5tLcn19\nbZBWp9MxrTFdFQx9HKhAAqTR4eeC8vdyoUiGBGwSJbPJSCRiAQSIDS21qzGGGEY3ikyOuR+e3wQE\nOmF+P8EHYhOztImJ0fYpLh7FED8kfTdA0PEA8UkymJogxed3u1oIasDcmKRgHEXRBETtFoMgJUD5\nzEzv7u6sS4Nkx+dAxuP6TdBJXV1dGS8EUhYyPDoxxkKu8gSjEUZ1KEOQC6ZSKXk8HhsVwVlg3kw8\nYGxGAuUHrg2EXu4LNrJ+v994PyB8rhfF9fW1stmsjTiur6+NJc/nwvKWeOMmV0YgEAZdYpYkKyrg\nbpBUee6QJIGhOb+gg0jyQH84BxjkED8Yr8AZmJ2dta5xdnbWmp/z83NbykKShWxGbIJ0NjU12ipZ\nKpXubXfknTD3Bp1ipMHyFZoyngFdNrGVkabr9+CS0VABMZbk7hF/KIi5x5BLJyYmrGCk0aEwhxiM\nlTekPxpCTMMYt3zf64KRUSAQsM8G70eS5SgaJvLK4ndLbYhbPHMSOuRe1wuD0Sv3+L1y7Xv9qf+B\nn48++kj7+/vWoTH3AMYl0bCQBij87u7OtqQB1/T7fSNA0KUyb+dhACWtr69bYIdkxYrPVCplwSwY\nDKpUKlmHyryfTolDztw+Eokon8+r1+vp5cuXSqVSevr0qXWQt7e3qlarury8VCaTUSQSMfZ7Lpez\nAwGkJr3r6jBxYAMXQafdbhs0Hw6H7bPTWQO5MRuH6IJkJp1Om0wjHA5bR7K+vq5IJGIHnIQSCoW0\ntrZmh39zc9MSC10OF4DVpt1uV5lMxiRgdARskPrRj35kQQUCzc7OjqrVqm1fQ7d/dHRkZC+PZ2Q/\nmsvlFI/HVSwWdXd3p62tLWMYg5yw/AJZymAw0OrqqhHrgMnoDnu9nl68eKHJyUn7+yio6Mqy2azJ\nh2ZmZsy3nSUSwWBQKysr/4djFZfa4/EonU4bXOy6fAWDQUWjUZvHoQsHiQIanZiYsIU8FCTlclk7\nOzt68OCBlpeXjRCJJp+ODRgbNcN//dd/GSzO7JfgQsfH7DeXy+mzzz7Tr371K7XbbS0uLhqzmzki\nS2OQfWG4gzSJjodxzezsrI6PjzU5OalPPvlEl5eX2tnZ
sULYXbhEApFkq3jdOwonhq4INQzJCsVK\nIBDQ4uKiMc2B5nk+FHjSOyh9OBztZ+D5wHnZ2dnRwsKC8vm8FbgQ6G5vb7W+vq7Z2VlL6gRs0EBk\nunA+QMxcO9u1tTXt7e3p9evXtqSF2ALznLgJkdGVWVIgwVFZXV3V/v6+yuWyuR2yg0KSIXpsdJyf\nn1cwGDSUic/MCI6zBOeHxoZY8/XXX2s4HNq9RrIHwgGxDW/2brdr4xr+vru7O3PYLJfLNhbEgwEU\nFkIjeYXkzY4QODssaJqdnbVuHbtyvjuIF+MavB44T5KsaaNx6na72tvbu4d6MXJFyoj6C2J1MBjU\n/v6+xsbG9KMf/cgUIjR7yGb9fr85aL7PzwdL8syX6ZAkGft1b2/PiF3Mo4Bt+/2+JToCIkQ5OlRX\ntyrJ4D9Y0pg5kASoCCFbQIwCvqNzdDtj/jvQATp/SEt0LPydGLA0m00jXYAk0G1KMtYvdohAyWyC\nIiECr7ozUqBBjByojGHtYxFMEKYgQAvNwZdkIwxQELcTIMDwe/gdoC4cZDf5uyYOQIB0OUCEqBlc\nExS+Dw5VzGth1/N7eG/M3Kl8mZ3y5zGVoGOhMyPJAqm7lqb8brrucrmsmZkZKyKp2kGE6KDp8kCl\nYLkjzaHj5Tlw7pDeADPDU8hkMqpUKqZhBgqEjAYfQ9I9+Rj/nuAEV4ACCK20NBojce5d9jzvic52\nZ2fHjJ5OT08tkfLsmYFibkNCl95ZojKi4Pkx7kFXjAwRJjN2pnR/qGRc5MFVQCCTw+CKuanbKcNJ\nAOUCWZRGZMlcLmf3lPPgSpzorICKXeIc78R9RhQNFB2MDOhWKSLb7bZ1mSCMtVrNTIQ4lzxX3jfd\nLmOsarVq8mTiEsmDzz4YDNRsNs2EhYKX94Us7ubmxooXpLmMMOEHIMnDJdEdjRCT4DsQwzif09PT\nxkGh0OH5uXJn7vvJyYlp/fmHs99sNq3h4DtyH/nskowkx8iMOE/spFCjUWS8AALKf3dycqK7uztF\no9F7z9WVFHKviX3kkPPzczszjG64H4wD+feQfUHm3ufngyV516KSRBAMBlWv1/Xq1St7Ed/XwJ6f\nn9sMu16va3FxUX6/X+VyWRMTE4rFYiaRANYiyZAUkOEw0yc4uh7ryONIImdnZyoUCtYtwwCl6h8b\nG1O5XFan0zEY6s2bNwYZArNhsHB9fW1M11arpWQyKY/Ho1KpZB7y+/v7Ojk5sWqwXq+blMaFbggg\n+Ed3u12DxDye0Va8vb09m/VwIRqNhhUnX375pflwE/g7nY5J7Pi8bkVbKpV0czMy6fj+2KXwnQMh\n8zKKOZIdvAgg836/r8PDQ93c3FjRxDtmXEGxQfdwd3enQqFg/5skvXnzxqBAIGjg4YuLC21tbVkx\nwEWmyMKwiPc0MzNjhiUY+1xdXZnMkMLLldyBqgyHQ719+9beP13uxcWFZmdnNT4+bpvs6FBADPh9\n8Bdg3H722Wc2HyTo8fwIVHQXcCgowtjUyEY2EoPbbcJfKJfLOjg4MEiZDhZG8cnJiV69emVJnX0A\nvFeKK5Lts2fPFI/HrRsPh8PmDc+M1UVfvvnmG4M+KXTdtbjcSUmmuKFwq1arNotnlowS4ubmRqVS\nyRoEXN0k2Z9rNBqqVCpWwESjUeNIsIuh0WgYP6ZYLBpHZ2ZmxhwdUTvAGYBsxg4Nim2XlwISWalU\nzKYbQlm/P7LuRqpIwYZ7HE6HNzej/QGPHz9WPB7X/v6+6vW6FQZIhknYMLsPDw/t37sjFBjh/D6Q\nETp/4jHEaOInsQWUrVarKZPJ2JiPe+Zq8vHmIBZSDDB+BcnpdDrG9aHLp7ijMcBkjb8f6RkOiRAv\ny+WyNZAUWN1u14yVisWiobW1Ws1UEH6/X5FIRKenp3ZvSfg0Zqg3+Hf8Pgyk6OgxIJJkTQwNJeeT\nuwfaeHJyokKh8F659oMleUlm5ECXV61Wbb0mcCgSKpiRSGGQdzBHo7PCoazb7RqBJhaLWbBjBzXd\nuMfj0dramkmSgDcvLy+Vz+e1sbGhbDZrkDzEKqQq6Ma52ECFyGmY0wHrtdttY06fn5/L5xttpnNh\n+uvrayOBLC4uGpzscgF6vZ4xoiVZl06CdmehwGnMYWFvTk5O2rOIRqNKpVLK5XL3SFC48wHjJZNJ\n9ft9vX371uBkrH5dfgIzTZAWUBZ+/+bmpp0DAh4SOZj6uG0xz2T2TOFAop6amlIikTCSCsVaJpNR\nKpUyC9qJiQmVSiXrKNLptO1td7tc5n6woYFDgSE///xz88Cu1+v3AmM8Hrfvyf/GnBBSD3wKxlSR\nSMTIo2/fvpXX61U0GjX4lSD+b//2b4YWJBIJeb1eCyxjY2NKJpO6ubmxJTKMVQhiKysr2tzcNNfC\nSqVihSodFD4QkUhEl5ejJUSpVMruHUWLz+czbsnMzGjlar1et6Ka9y6N0AHGaTDogZNJGjDMGTGB\n5LhjPCBhEAZkZsCkzFEZ90Wj0Xu8D9eWeW1tzd5LoVAw74NsNqt4PG5d7MuXL63ARQ0BC7zRaMjv\n9yubzdrvRkUASW5ubk6pVMrOOvA0d4dnSKHHWXbvrSQrqBgjMJ+n62SLGyTMbrerr776StPT01pb\nWzNPBxBGnm0sFjNjHne02el0bPQhyci/IAtA4Rh+cTYymYy5HCKT4+wD+fNde72e8vm8JNkoz+/3\n6+3bt8YjwkeBZV5wZzijGOpQxMGRWVlZsaTIaAtZLo5xnHXiwerqqqFajDwgAFPgXV9f6+DgwL4P\n0jmksXt7e2awhAwWFLXf7yuRSNhCMopAXD9xKGWURiHLd0dxhOlXJpN5rzz7wZK8q/cmidOZAksQ\naN3Omy4IqJWqnSBFZw08CWGEapBgj0aS/xa/e6RHwF7pdNqqWncVrWsfCYTV7/cN9sVUB5gTjSqa\nbg7l2NiYBSXYvBiZoHtFC828i8OMPAcPdv4dJELkgCAldCtUg+iUGXsAkVEEMIdfXFw0uHNqasqq\nWawmXViUGRKmLvx76Z2Jyu3trc380X4Dt0M8ZBbJbI0CwP07SSLMUfnOEHywl4QVTwdPgIMsQ2EE\nFOpud0IaKcmSChce1zbUAaAnJN2pqal7EDUcAMY3rhSHgoL/dywWs+54bGzMuklmrq6lJX8GpzlM\nQDjPEOiSyaQVFMDEPGO8JehAQbmApfl9FJm8q+npaSsWr66urHjkzEEIZDxGIcufY3Up948zytiC\n78Y+eveOcpfm5+cNzXDdECcmJmyMBALnylApICE6guYgWURGSuHOvSbQYmlMIry9vbVnANRM4wHy\nR8dGnILZTjyDZMd3YgzCeQeh4DwxysP7gO+NHezKyorNbrkHrVbLCgpm9cQp7hbnZWZmxpAP7hgS\nO9j4vMter2dcBT4nd8F1MuUMEGN5d8QGSK/YdfP78AXgmcM34M9QoExNTZkBDiMY5JuoK1D7YFZ1\nd3d377PwrDk
vXq/3XpzmvnOfKSZOT09NZsv7m56etrHc93cEoLIhxnEuGR2yawCSKiMCVEzv8/NB\nF9RglEBSo3MJBALmXoQMZGdnx6AWmLPMogm8V1dXOjw8vKe3hc14eHhoF5PuCPY38zESKwENZjvz\nFqo/Ok/gT7oKuvzLy0uz76RqxrMYpjKVNcVAvV63CwOTdmxsZN5SLpdNe8r3Bt5jlk1nwOeHYAfc\nDIxF4HTn+ZJMpkL3QZcAOQXiGZIWCJGoHbgY7mx5d3dXnU5HuVzuHtOb7pPPSKIkudPpulBxt9s1\nOJOCAGLYxcWFDg4OzAgDXX46ndbY2JiOjo6s+ifpA813Oh0bcUAS8vv91sGxrpL5KwGXjhb2NsGV\nmR7fF0gclIURFXeAf0+Qhn08Nzener2udrttic5FZ4DzmEfjl9BqtfT27VuDtmdmZoyD0e/3TXrH\n2cYykzk6fycdGB2rJBuxMB900RTWhBKseUdjY2MWlJaWluT1eo1oxAyXuwfKksvlzHrWlZfBJGde\njwwUPg2dD383LGpJlgxgfB8eHtr8vNls2tmk4aBzRsedTCZtdAQfBY7C3d2dDg8PNTk5aQtRIIjd\n3t6qVqsZ54DkwD8+n++ewyd3kM+BRIzkB/oIStFsNg1lQQlEgcF4DY4Fz4vEAqOcpC/J/lvQUMYo\nFHYUInA6iCPEGXg8jCcwsgkGgyqXy6pWq6ZA4q7Q9PD8KZbD4bAZB7nnjmJtZ2fn/yD8wdEgpruG\nQfCFKPooAlwlCzJctyHpdrt68+aNEa9h2tNUQSwFKYYsiiMjxRJ34eTkRK1WS9lsVpFIxJI9RRvc\nLopGYirxij9D7P6hnw+W5LPZrGq1mhqNhlKplEGIkuzh49QkjeDofD6vWCym4+Nj+Xw+xeNx9Xo9\nM7ZhTzOGH1T6iUTCXgTJiaKCRRazs7PKZrMWUOmsYMDDdgYSOzk50djYmKLRqM3Hgf/weMcBbjgc\nLbXAGSoajWpubk6Xl5f255i3wyaF+dzr9cyy1WV2Su98p3O5nJGJuCBIBik2qEQ5xFTxFBpYkna7\nXWPR3t3dKZ/PK5lMqt1uW4dBMqW6brVa1v1TSedyOb19+1b7+/tmVELSlt65weVyOZunoa2F5c/G\nKIoWpG7AW5KMOYtenER+dXVlC2CazaaZavB38Fmvrka2rb3eyA5zZWXF5HAUCnRA4XDY5pPMF/EA\nYHTEHBUkA3MabCrZQgfRCHkOqhI68o2NDfu9OGuBQtF5hsNhM+zhTvGOs9msotGo8SAwULm7u7MF\nNUCyPt9oRSzwMT+gTZBFQXFAGygSeUfj4+NaXl7W3NycSqWSJicnrYuFr8C8FOMnkhTa5LOzMwum\nfG7Itvw5Rm/AzJDiIpGIGUhNTU3ZqI+/7/b2VisrK4aEAe2zqAS3v0qlYs/F4/FYwYRCJxqNan5+\n3pbmwJHBLQ8pG7NzTHPQ/lNYdTodSbL362rFz8/P75mBSbKlT1gZU4ylUiklEgl7zi4cHQqFrKFi\nr/vc3JzdHRI0RSnyLYq1qakp5fN5QyJ4xnTHjDEgtVYqFVvC5SoEPB6PNQOw2QeDgfL5vLHsMZZC\nScVZZBwmjRQQjI9arZZJ6FznPDpz0BfiUyKRMEUUhTTST1RC5BISPIZcNzc3Rpjm3eTzeZML0rVP\nTEyYEsDv95uJEkghfBnitqtwgoPU7/ftHbbbbRtBw08jLv6vT/KPHj3S3d2dXrx4oY8//lipVEqv\nXr3S+Pi4EblcV61MJqNMJmMBPRAIaGVlxWwn3fWouHMBreVyOXsoGxsburi4UKVSUTab1c3Njf7j\nP/5DiURCH330kXX8sVhMtVpNhUJBmUzGfhcuSMViUdPT0+Y+VK1Wtb6+Lp/Pp+3tbTs8dIb5fF4H\nBwcaDofa2NgwsghQGysLsV9FIjYYDLSzs2OwJHa7gUBAzWbTILnx8XEVCgVls1nlcjn96le/kiSl\nUilLmPiKdzodpdNp+/48p2q1qmq1as8rn8+bHSYowMrKigVWiC2Qgebm5tRutxUOh/XJJ5/YisS1\ntTUzG2FWvLOzI7/fb9vbWq2WFhYW1Ol09Jvf/EYzMzN2OZA74UK1trZmCWLxO8e6t2/f6vZ2tCWP\nWfPjx48NNsSZDHQH6P76+loLCwsaDke+0mtra5bMmRlyQaPRqNrttl68eKEf//jH2tjY0IsXL4zs\n9PXXXxvnIpPJWHeJhj8Wi+nx48eG7iwvL+v09FTffvutstmsSeDm5+f1+PFj64DYl85sHrg2lUpp\nY2PD1iWvra1ZQHvw4IEikYjK5bIx+kEnfvrTn8rn86lcLlsXz9gKe1g8Be7u7lSv1/XkyROlUikd\nHx8rEAgom83q9evXqtfrtkns9PRUm5ubBtdHo1F99NFHZkIyNTWlRqOhfr+vzc1NRSIRHR8f29w6\nGo3q5ORE8/PzxnTGH/6bb76xTWLNZlNer1eLi4t6+/atfve731nQnZ+fN8IlI76nT5+akxrz22Kx\nqFgspqWlJZVKJXk8Hj169EjFYtESJzCrG7wpBtxk1Ov1bN4ci8W0s7Oj29tbPXjwQHt7e7q+vtan\nn36qUChkz3xubk6//e1vNRwOzSGR9ySNZtTLy8t6/Pix8TSePn1qe8wx5+J5oXsfGxvTT3/6U0Ma\nGbsgZ0yn02o0GtZh4qUejUZN8QESVyqVFIlE9POf/1x7e3sqFova2tpSv9/X73//e0vEIBauMuTB\ngwdmRAZnAnIsqOtwODSu0rfffmsILDwXUKtQKGROoZ1OR4lEwjTrXq/Xkq270Ovo6EiLi4vKZDLa\n39/X1NSU1tfX9cUXX9iZTSaT1jBgvDY1NaWlpSXzS2AHAYuEKMIDgYBWV1dVLBZ1dnamhYUFSTIJ\nNwXM/Py8tra2DG1NpVLqdDqKx+N6+vSpMpmMxfTx8XG9fv1a3W5XT58+NQJzLpeT1+vV73//e1u0\nA+/ofX48/wP5+71+/umf/mnY6XSMlQxxBfiV+QPwIfAss1UgDvTfMBiB2iC0Sbr3MNwZEtUbiQ44\nDzibcQAWp0CzmIcwq3IhLKpbYGm3woOFSRdHEqcTo5ucn583NjLEIaQWzGHo1Jkr8Wxcturs7KxW\nVlZM786/c2FeCCOwPyUZG57nhJQEcwnY6yQr4FKQCMYuwGU8B56hy+YOhUL2ueiISNawioHxcD6D\nke5q25mtkZwhR0ojyJk5M0mjWq3aeXA7BzpfoHg03YxeeHbM93i2GFRMTEwYjwMikysZ4p1xZqR3\na2DhLuAx7wZNPCMODw9NrsVnBy3gDKEE4DvTPYME8P6wtWVkFYvFtLy8bGfF4/HYd6PTAPKExU/x\nAXwPIRUODAQ4pHHA0vAP6NBc3sP5+btVoO69xsAJPwFXYrm8vKxIJGIjwMFgYFyZ8/NzQ1fcOb87\nWsBnAPgVTgWWvScnJ7bQBn4I8YqZLR0fRY67f4K5MeQ/PjedMuf39PTU
VBI8V0jGrqkQ55x4BFQN\niZI1rDDYGY0NBgOl02ljxvPfY0gTiUTuLZBifEp37MoCWdfMuYXjg2SO/x54HJMbOlC383ZNdSjy\n6vW6zcBBDkjK/D38bjhPeJEQi135LqoGCM0UVvCrGJWAxMIncUnewPwgL9wVmrpIJKJGo2HGOpwx\nzKkYF7imQHBp6vW6xWX3x/VCwBaaM/LXf/3XP5jDP9iqWTqMbDarXq+nRqNhl5c5JEQgZkRAVRQA\naEmREg2HQ2PP12o1STI9JcEVH23kdECWY2Mju0peInr3YDBojGVmnsD7BAB09qenpwZrEwxdDThm\nKIwIuFyNRsOKDNidsHOxW727u7PZGcEXgwcKAKR57KmHVMa2JMYFkEIIFNikwhJ13bNOT09VqVQs\nyOMb4PF4bJYOqx1p0eXlpXZ3d3V9fW3mL51Ox2boSP6QI1HVw82AdIYNL/9QuDFzg+wCUYdihHk/\nlxZbS6A6l4yDBS+yGJQFFBDwHugimNN3u10dHBwYP4LNeCQSnjdqEWbHyK3gMvDvKPAI1vv7+wYp\n01W4rmDAuEgjIaHCDOa5UBBjj4t0h7sDF4EEBgueMwO8j+EUc0LGD8g0mU/j6oik8+DgwOalKFBc\nq1jOWqVSsTm7G0yRH2Keg/Tu+npkG9zv922uiVcBfguQmJrNpikFmHXjFQDRl3vD/eC5Me+GdAnc\nympUkjLJ290/T9yJxWLGbSChMXIMBAImfeXZ397emowM8xYUN0D8FL6STNkA34R7hOcDd/Ty8lKH\nh4eWHF0jGkZ1EM+Qsnq9Xu3u7lp8IWYyAry8vDR3TeB4inkS5MXFhaEvnAvQI2yUIUnik8G7h7jG\n80F222q1jJh6fHxs6Kb0rrAcDAb255BPolzhjrZaLfsMGNa4ttZ8VuIlyZsihKIRzwtY8tfX1war\nl8tluwN4vLCIB0nn5eVoxzyj4uPjY9XrdXm9XuML4MzIiIWz8EM/H6yT//u///sh2+R4WMB+rVZL\nsVjMLhvwMDMeDgQVOlUwhiH//d//rWq1alI1UAJIJSAD/J2QaoDmIWXBzAfCrNfrRh7hf2MuhIUk\nxYlrKAHMCouZuQtdjCSb2R0eHprRAh0q2+zQOCPb4DLUajXd3NxYV0wQi0ajyuVyJuM4Pj7W9fW1\nzX6l+6YcLH+B9AH7+ezsTNFoVJIMcud3IXGq1WpGqqFTi8VimpubMyg9l8vdMx+C9Q0SQSGQSCRs\n2QQa9GKxaHwFCI6SrJMCCoTFCp+CRE7XEwwGTdu+sLCg+fl5vXr1ypI3FTxwsWv8AcksHA6bhS1K\nDt4n29c8Ho9arZZKpZJOT08Vi8Usqc7Ozsrv96vb7VrnQjCmgMP/e3p62gpWdkojOWs0Gnrx4oXS\n6bQVRtIIGQB5ca1z6QTdbXF0iiAFy8vLqlQqOj4+tsIFSRQMX/S/vGsCnfRODukS+dLptN0fnCoL\n362dZcc7zn5jY2PmWQ4DmntYr9dVLBbNxdGdha6vrysQCOjo6MgIcsx46UpJYCACyFwpdjnPFMs8\nFyS77LrApwNey9TUaFHVycmJqtWq4vG4dX6Q3YrFovE/uKOoY/L5vKknSJTxeNwIYbC86QhByzD/\ngYQFnAzD391RPzExoUqlolqtZtvR3Lh4fHxsc2wKV5CU3d1dIx9ShMzMzGhhYUHxeFzValXlclml\nUsnGdkhzKVIpNEA0QPlQmPDM6fiRYs7MzMjr9erg4MDOM8UMDU65XFYmkzHZGUoBul2aEEjASJIp\nKlD44EJK0cHoxOfzGVrsIsvf/6GIgfOTTqd1enqqnZ0dszx3FSHs/lhaWrIiDkXQy5cvNT4+rmw2\nawRNms5EImEky1/+8pc/mMM/qHc9sBDJEZiH7oILQlfiOq0By2EG4pKU2u22LcDA3hboDDkc1RnG\nNq1WS7VazSpCoNZOp3OPBEeXyPwLuLbT6ZgMotPpGFGOzWGMAQaDgQVEIHKkK6ATHFQuIAEIpuf0\n9LR1DsyLJVmVSaIYHx9XPp83+Rsd8Pn5uaENVJaS7DtQNDGP9Pl8qlar99iqQPC8R7o/ngsMcEiO\nXFqYsbBv0TdL75zEIFK5W+joqpk5A83z/HC4AoakWmbWDnyHvAwkAKSDTgvCHD4BnAFX5kMHhNRI\nepdYSWq4gZHsDg4O7OxDoKFjLRaL9hyALf1+v05OTu6Nc9rttiYnJ60jbbVahk6APrgMeJ49nQeB\nF6MRWOWgWmiJSdKcCdaicu4pVklIFE4Q09xxFncbTszl5aU9c8yZ6LhBB7hnIChwVEB8Q7iVAAAg\nAElEQVSELi8vjTgFJLq8vGxFNUXr8fGxJWfuiutaB8G1UqnYeQENoZmgkAC+B0GggAwGg+aoxziJ\nEQr6bu5Mv9+3xSqgj9KowOcMYQDGshmeK3JbTFsosrGv9vv9Oj4+tsKImEKCkKTj42PVajVTVSCt\noxFBFgprnOYLlAQEChSKubYr16TTp+DkOxAX4V+w3AUkiLsgyeI+8j08Fa6urqzj9vvf2VxTZBLX\nYPczO+cdUEQPBgMtLi5a4iQuYo3NM+GzDAYDI4ROTEzcs6FlbwJbCHmO7oj15OTEGgTOFWZdkCrJ\nJYVCwYq78fFxc9ckXng8Hq2srBgC+l659r3+1P/Az9TUlPb29vTFF1/c0/0ye5VkHr1UuTB3G42G\nscJhVUoy4svLly/NTYx1iRDb6Cq8Xq9V+jgZ3d7e3tsFTWL1+XzK5/Pa3d21hLq9va3z83NbptNu\nt61KLxaLSiaTWllZ0bfffqvLy0utrq7q9PRU5XJZyWTyHkqBVJAk5Ep4kFxMTEyYCQZkvUqlosPD\nQyPuuHr4sbEx61SBRhlDsA6RwIAhCdDj6uqqJGl3d1fr6+va2NjQN998c89kBS2wG4SYIROs2LbE\nGlEkYF6vV8+fP9fd3Z02Njasa3Z1wBzgWq1miEuxWFThO9cqIDICGlV+Nps117O1tTVdX1/r1atX\nZsxD0KITgyvBrJ9ulwuPycjk5KTJIHnmLkEPLaurxYUQ1+l0tL29bcxq6d0yGsYh8D4gO21tbWl/\nf1+VSuWeZwRMZSA8JEN0F8xnUW2ghpDeGbFkMhmDDl0nQsYyyKG8Xq9KpZK++OILczhrNpvGJzk6\nOtJgMNDDhw+tE2EtKEGIz4KOGQSOoon7JOneiAzXQ84DKA+QLAmMO8puh0KhYEEdk5tkMmmSUIqN\nw8NDZbNZra+v6+XLl+r1etrc3FStVtPu7u49T3O4NKgS9vb2rCAlUaKn5/wwGimXyzo8PDSkzkUB\nPB6PVldXzTkRy91Op6NXr14Zd4UiHB4A8/nLy0vt7e3ZutPXr1/r4uLClAoY42CkVa1WTUKJSRhn\niPvB80WO7PP5jGAGqQyYHYkhYwSPZ+TYWalULL7BzYBDAVT+xRdf6O3bt3rw4IGNJej8UeMQ1weD\nwb2d9ejZt7e3dXd3ZyRJDMt
KpZJ+/etf69mzZ1pdXbU5PaPdeDyuX/7yl5qenlalUrGxxvPnz634\nQ8YYDofVbDb1+vVr/fznP1c+n9evf/1rDYdDraysaH9/X2dnZ1pfX7dGEE4UxYMrs+U58HvgTpC7\nvvjiC52cnOjhw4fWQLmFwGeffaaVlRWVSiUVi8X3yrUfLMnDoKdzwlQDPeuLFy9sBhMOh20BzO3t\nrXZ3d421TeUFPLOzs6PJyUltbm4anEbXenFxoeXlZSO4oHENhUImf5uaGu06X1paUq1WM2Yme7mp\nGjkc19fX5qk9NTVlQR2iSzabNfczAjKFB2xLJG+Q/KR3dp3n5+cqFAoGXfEZkY5NTEyYFrXX65nL\n297engVSmLV0VsCsyDuAy/b29nRwcGAw5fr6uvL5vJHAcKoDlWAmXq1WjQNAV+Oa+WQyGauwgc+T\nyaQVAxDN6JRqtZqxZ90tcCQ4RhYEVCpg4HVMjPL5vOr1ut6+fatEImGsWSBvklk4HLaNhRRbdBaL\ni4tWdIFQkDh5BvF4XPl8Xq9evbJLiWSHghWSGktyJFkXWSgUTO64ublpCU4aQYUw1pnLeTyj9a6Q\nyYBDgUbxN2AsAFmVZE5XynhGkorFoiKRiLmtURDe3Y2MlXAhdJ398C5g1EFHenZ2plKpZLu76cb8\nfr918GwyRCrLWAJpIKMcimy6dopTlAAY1gBJ4wRYr9dNkkZCRkKGZwExI5/Pm1qh1+tpampKa2tr\nikQitq0MJUSv19P6+rqNDjHzAmGZm5szPgvzbkhhMLyj0ajC4bBevHhhxRmEXc7UcDg0xj7PAch9\nf39f+XxekUhEn376qXFsaALW19ctkbDZsdFoKJlM2l0EPdnb2zPjG3avQ0JkvTDyUcZZ3CGaM0ZW\nw+FQ1WrVECw8BnAA/fLLL40nk8lkFIvFFIlEjNiIzJVn4Pf7zd9heXnZIHZsxCnYYMAzfojH49ra\n2jIX0UgkonQ6bQXA3d2dySddUiFoI+6ZEDCxFoZzlMlkbEQTi8Ws6GI8wkgLFJe7TWxst9s6Pj62\nO8AIgHGM1+s1CScqF0YWEPe4A+/z80E7eYIUXwY5BgkHkhkMRAhPwBYEAxI9BJ6VlRVlMhnTb7pQ\nGtvagJNw+ELny/wvlUpZYOdScJmAeiUZvI8BAlU60BcWqXTZwMYcKBa+ANG7SQ0dP5AnxDXY+rBD\ng8Ggde+xWEyLi4sG3Umyiwj50HVZo8vy+XwGefN8UqmUzYH537jo/G8EYeaLs7Oz6vV69+B4SDBI\n1giIfA4QGTTfELf436nAmY2SoLjs6E9h9mazWdtUCMEMJITkCfSJ5I+Oh+IJl0EMOSAy8XcxYgAR\nomMAbuf5usxqkhWohbuIg/eUSqXsHQC5M5s/OTkxRn+1WtXExIRyuZxBzCR5STYD5HNSeADPU0C5\nNtCYTdG5MnckgcRiMSsA4CnQjfv9ftuSCISJ5pg7C2oCMkEwBJ0i6WL6xDNw7wyzdLo2CIdAoRT2\ncFokWRKGM9Hr9UyDTaFNd8jv4F5eX19bwmOTIEWBO0ZgJEfQBjHy+XxGmKI4yuVyikQi9zox4gwc\nCeIes3liwNHRkRVxiURCq6urBmfDX2C2fHFxYUYxxBbXLnl8fFzVatXeAQRQlw/kxmCMZ+DqMPqg\n4OWsuPdkfn7eigWaIjgqFOnEUz4fJDXMpIjbIHH8DhAaxgvSu5EHSRIkCVY6uz5c2NwdC3G2OQsg\nWrOzszZOphCH4+NyNoLBoFm14yGBth1LZhoydtBzzyk0cEIkJ7FtFeQMFJum74d+PliSLxaL1k3z\nIkOhkIrFora3t+/5JFN5oyckeO7v71twZtYMXEk3QXIFkvn1r39tfsck/1arZQEYJvKXX35pm/Bc\nVyR3OxgvF8irXC4bC5UZDRCca1uIIxvMc3zggeo4bLxEIB8CpM/nMxZ4s9k0ExSQjFarpUKhYJUe\nxDueYbPZNHiOYNBoNIylDgtaGsF2zDhvb2/11VdfKZfLKRwOa29vzxjYzKB4RsxXYdCTZPiOpVJJ\nExMT2tjYsDkZHgh4yTO6obuAJNhut02vCoOd54JkChj86OhIzWZT29vbhuRAeqtWq/adGQ9A8CHQ\nFgoF60x4Hqenp0qlUqZeOD4+1suXL83/HOknG8CY6XPegOxevHihdrttzHHY2pxBIO+DgwMrtugQ\n4V9AJoXNjCkMjoEEHGbU/X5fa2trlhwvLy+tCyMwwb6XZCMZGNU8u8PDQytqAoGAyeNgnkuj5Ari\nAN+ArhRlAQhXp9PR8fGxPU8CHWfq9PTUvB1OTk7MhAjnQVwIa7XaPbb72dnZPYIrplOMt7hf3CXc\nzl69eqX5+Xnzktjf37c5uWuAw9iE98GcF0Y3fw55HfLNk5MTUwpRDICeAC0zz+Xdwx0gcaMaqNfr\nNtphtotVN88OcydUC4yLUOxAyoWojNpAkhllMQpgNMVdoIHiXVBsEHdfvnypWq1mCY4Ckhk6cb1U\nKqnb7ZqJE+9kMBjo6OjI5MfdbteSNagYBE6e7e7urtLptBXNjHXY5Ac/Az7CxcWFIRSMHmDn45yI\nXS3vHbRvMBgYaTuXy5mjHUgA5kFwSiCxsmGO9cmSLE6juz87O9Ph4aGNLJLJpD1XYtIP/XywJM9l\n45AjceLh0F3xkPC2xinIlVlAdggEAve85oEoJRkbl0qZA0c1ByEEwhB6ShzygFSYOwPNcXnRMw8G\nA8XjcfM1pnOGPDE5OWkJBPjc1WrCjAVmlWQdGJ387e2trScEtp6cnLQChE7fXakIx4Aiyf19BCk0\n3hxyYF5pBC1TrFCN0/kCk8FIhsxC5c874rvQJQCDEYxxKANqpiMjaY+PjxucTMDC5hbb2larJY/H\nY+/D4/FocXHRgoDrRQ+KRJGIZpoRDu+IWRoQITBkIpFQ4bvNhGzgozrnzzIfBq1ptVpGNiKhM1N0\nGcGMO+bn5xWNRuXxeMysY25uTtlsVt1u18yZkF/SpZP0XDgcNCedTht3hLPizs3p5OhEXJdIiknm\ns3Nzc+YjIMmkdHSwjHT43yEv0RW7HgI+n+/eKlmSXyQS0fj4yDI0FouZ3SrfFw0ynQ8dI6gccjkC\ns8fjuYdQAZUCrzOOIfASyCmakcUCEXOW6bZdhEqSnVsY93y32dlZM3ZxlTjwEiSZF8Dp6an29/c1\nPj6uhYUFiw2lUsmUFJAAISdjQDQ5OalsNmt+DDQUoEy5XM7OJPGDebvf77fETiKHdIayiTMLmhCN\nRu8VuxRPPANiHp1rKBQylzniFL4o8G94p6Az8GS4vxQd7G4HFYjH49bps9GN5VJu4Y03A7GMxk2S\nJXfuGTERL3viNQUw54zPivyVfMN/B4EckjQ5BZQKCfP5+WjNL4UvOQWU7od+PliSp4vHdhGih9fr\n1dLSkjnFVSoVTU9Pmxzh9vbW5Fy93mhV5MXFhRKJhNkYEsiZ+wJVBQIBPXnyREdHR9re
3rYDIUmZ\nTEbr6+v65ptvLHgAfWOKAKkNOBfdKIYXMFkzmYzByJBZQqGQETFIXLu7uxYAqBArlYpCoZAtQUFX\nDnzETOvi4sLm3Q8fPtTd3Z11pUCxwLxUrMj9+HwuCjE+Pm52ncVi0eQ/VOQELr//3frP1dVVM5Ch\nep2dnTXInufrJnEXZg2FQlpfX1e5XNbp6alZRG5sbBhpkE745OREgUDAiI7NZlOFQsGgxEwmY+oK\nLq/P51M2m1UqlTIi0Js3b9TrjTb44aAIqsTsjOAhyZi1SA8h+W1ubiqTyVjnC/yOORPPDu17KpUy\nmSZESd4V26QoMjAdyufzymQyevz4sarVqvb29rS4uKjl5WUtLi6qWCzq22+/tTNGAckcGHTLnTOC\nnhwdHdn7IDkwvsEwiB3uT548sdXQSCBhtCMNJNhRSCGzmpub0+vXr3V5ealHjx4pn89rYmJCu7u7\ndl6bzaZqtZqePXumaDSqcrmscrmsSqWiR48eaWFhwSSijPSurq4sic/PzxtfADIZs3QWvdCRIalK\npVIaDAbm9c75h+GdyWQMzpZkicjj8djMnQQIQZYA70o3pREasrm5KZ/Pp52dHSOjZbNZ5fN5Gzu6\nqoZUKmVSu06no8PDQ21vb2tra0uPHz82tGd7e9uMnIDAp6entb29rUKhoPX1dWWzWSWTSdVqNdsP\nQJGSyWTMqrbRaNjfNzc3p8XFRYXDYbMg52xOTk4qmUyaS+HNzc09TwqKPlBSGgt2ElQqFUNOgaLx\nv+j1ekbCw+mNGC6NmpV4PG4cH6D0169fq1araWVlxTzhl5aWFI/H1Wg0dHR0pIODAy0sLOjRo0dK\npVLWNNJls9UuFotpe3vb1ACsv2UUhbw4kUhYsUKTdXh4eM+mGdUOmxS5lyBvzOcHg4FmZ2e1urpq\nS8Hc7l+SFhYWbKMhRc77/HywJM/cg5k5hxtovt/vW1fEDIM9vbBkgcCHw6ERgKampgxmZ36JBI3O\nDgKY9M7/vdVq6c2bNxawIeoVCgW7PBQUkCnoJoDnmK8yC2Nud3V1ZRA4+nHmlY1Gw+BYSTY66Pf7\nZocKyQWiEvMkEg6zGgIfs3Jg0F6vZ9/L/TxIsiAiknRIiCywwHENVIDvUSgUVK1WzUca3bcrt4Id\nDmmKLhvIjc7KZZ5irhEKhSz50gmzEOPi4sKkUViYMkOU3rHXLy4ubHnI+Pi4dWVHR0fW9VFcZLNZ\ntVotVSoV+w6gFzC56YQwscAtjmd4cXFhWle4Asw66Vbd7wuCRQc2Pz9vyAXKB4hrc3NzJoHy+Xyq\n1+smnUNeBAwq3dftYgIFlMrfgX6YGSeKD2BX/j7gTDp8FAW4i8GrQUsMnMr9uLsbLXFhfzf/O9ak\nU1NTBj+fnZ3J7/ff2z3A+ebvZSxH4dFqtSzQUmC4xSXv8eTkRJeXl1pcXLTxkWvm5MqSQGIYFRFj\nzs/PbS1zrVa7h0pwb6QR1Eqnv7e3Z3cW8y+6Xr6Huz/g/PzcvCco9l3ZHB0dREwkYKAlGMmAmJ6d\nnVnhlEwmLabiKYDUDXTy7OzsHhJBN+qyxYGn6cCZzV9fX5vREYgBY6WJidE+EuLkmzdv7HfT4YJW\nAsczqpybm1M+n1e73bb3DjoLisYdYHwKugpKQyzEjIwGCgOfRqNhn0l6p0ghV0BYvry81Ndff22N\nD2oJUAAsy8lnSL2xpUYxMRwOjSCIqVG5XDZ/e+4yzSuSae7o+/x80CRP9UrggTlIJwDbHEiIf9ft\ndq3aJMBQlQO9SDIIDxgFKBboG0icoOPqRUkg7iFgpzrzfchykgw2pKqfnp62y0aggSQC8SwcDltQ\ndOFiJGRUcWh/CSIw2YHkcDbjuxJIuIx8LxIwSb5erxubfDAYWOIFEWHmxGyJz4FageCGuYWrP8db\nv1KpGFMepMM1ooBpjTphMHhnR8qslQvMu0fvStFDggOtgcQCEe3k5MQuI4Gdc8IzSiaTZuJBgMEr\ngS1SwJUEZIIRMzISF4UngQXHLog6nDvUBO12W9FoVJOTkwoEAvf00pxpjIBOT09Vq9WMOAgLnREQ\n3BF+L4Xmzc2NFSTAfBjl8O4pXilmCCy8Z8YXvBM6Kc4LIyC8AZjdSrLzgiskpim3t7eG+OA5PzY2\nZv4WeDO4IyVGCoxt4B+4KgjuB9yVmZkZRaNRS1zM5V1mOBJOHC4xL4GXgF3p93XRFDOQ+/DskGTo\nXbVate6P+MEZx1yIUd3d3Z0ajYbdR+IYi6iOj4+NuIyVL00BSRabVde/gYIkHA4bgQ5SKQTidDpt\nIzw8IFySLONKHN7Q5RNPST7wPyDq4u0wOzurhYUFnZ2dmSskRFR3ec7d3Z0V9XipMKLFoRNbYArm\nQCBgYyrm69j1SrJdDGju4S/wLBhDHhwcKBqNmnwSlQm74CWZUykxBWMsdieA/MJtohgmhsFH4ezz\nWWq1mvGKeO+MvRj9MTqkUf2hn/cD9f8Hfv7iL/7ib87Pz1WtVhWJRAyGnZx8t8Z0cnJS6+vrGh8f\nrVFMp9NaWFiw+Tqz+lAopGQyqZubGx0dHSkUCpm0YWFhQR9//LG5eIEOeL1eLSwsKBQKqdFoaG5u\nTqurq5YI6Vol6cmTJwYH043SdbgsThj9kOE++eQTSxrMXSXZXPL8/NyY2QQXZj9er1cbGxtKJBKm\nXwaeiUQi+uijj+T1eg1yjUajWl5eNrctEsPnn39uvw9uQavVUjAY1NLSks1UFxcXzZbz6dOnSqVS\nFhz5XvwdHOZMJqNAIGAmMvl83mbxq6ur1mHx3w0GA62trenhw4e2xW95edlMjmA4dzodra+va2tr\nyxjcLKq5vr5WNptVOBw2ghHFUigU0ueff24JN5vN2kVlHgasvry8bAkbd8Ver6elpSVtbGyoVCoZ\nwY/zlEqldHd3p93dXaVSKS1+538dCARsPjk/P6+nT58aLEeSQjEAMxspnsczsgpeWloy5zOv16tM\nJmNzT/gRsHRZVOTz+bS0tGSs8U8++USBQMBGAkDpc3NzWlpaUrPZ1NXVldbW1qyoSCQSttGM8RRF\nCpyHw8NDZTIZ23Q2OTmpfD5vY5Znz56ZhzncFXgV7hwSCWq5XLYxCmfg888/tw4NpOv2drTDIBgM\nmiMbTHfmvqANjx8/tvcDikOBnEgkLGgDX7fbbUUiEW1ubppiZW1tzeIGbG5+VygUMmJvPB43khz8\nm729PY2NjVkiiUQi+oM/+APrptkYCJKDpDIajerZs2eWrCDbYTazurpq3Iz19XWbm29sbCiVSml8\nfNwY7C5SFgwGtbi4qI8//ljpdNruPJB5KBTS1taWer2eqtWqsbVRWOAsOT09rdXVVft7kQNeX19r\nZWVFi4uLVhhSHGBkFovFFI1GDdbHJOwP//APjRxNcY/kDrI1IzOKaP4OVCs4h5I8aTYePHig2dlZ\n1Wo1xeNxpVIpeTwei3dwCj7//HOl02kz9EI
RJMnywuzsrD0v4uLm5qaRUB8/fmw8A+bq7LpnNwq8\nLhDG1dVV+f1+1Wo12/Y5GAyUSqX0k5/8xIpoRg6MKCFRklcCgYDC4bD+4R/+4f/7oVz7wTp5YE+q\nfToPaURoaLfbxpCmS6Drpsvv9/sWMKnC6WrGxsZ0cXFxb65xfn5ul5hEDjxMBex2eq7LHNAyVSOB\nxJXwAQudnZ3p5ORER0dHxqSky6U6JZBCFiIhAJ+7Bgru53L/PN0AnRlWl9iJUkUOh0PrFOjEIW64\nRQ2WnUdHR4Ya0JEC00HmcReL4IvO56KLA+ZiaQmdLl0//0i6Z+LDMyV4uNa7dJ7IYkBK6DgZQ5ye\nnpqPPDpp/n6QhuFwaBW9JIMHGbtQeFBc8dlcdzzGTnxmpD9AniBQPAuInxSRuImxmIhK3R0t0MXA\nwCagQphzl8zQ2SK5dD83LGJQD/4+EqP7rnnOfC/IoSxfoYC7vLzU69evrYjl3bsLR1yeBOfIHX0A\nP4IWuN+T70+Hzrl07x53hHPPOIHvLslg1tnZWVNWeL1e8xngjjAH584hpQQpxJqWd8Pvg7MBgc79\nbxqNhnWz3GsSi2uc4n4P0BIMViTZDgZm68QH+DXcZcYvoB7EBJIj7w30ErIZ7wkSKKMiFA58d8if\n/Bn+HLGGWASMzQix3++br4E726bD5b+ngHfdAuH+4MrJPQO1Aw3mXoFyMXYjvnP/ePagnO55ccnK\nFF/sFeCe8byBzHnWmIHd3o72lVAouGgt58a9Y4w03HiNf4JLIOeOgla8z88HS/IQVFxTEpI88CEO\nTSsrK9ra2lKlUjHnI0mml2XGhSf3zc2NQarIKXZ3d+8lCFi8JycnKpfLNgtingpMOhwOTZoCvIx9\nJ6QhPgsSOhYe7O/vK51Om2UnM1S84JPJpE5PT3V0dKRyuWwdpzR6mbu7uzajxJea+eHz5891dHRk\nenQOOTNq4J5SqaSrqytzorq8vFQgENDJyYmKxaKWlpbMuOH4+FiVSkX//u//rpWVFa2srJhFMMHL\nXbYCmYV/TxDDlrhSqajf7ysej2tyctJ8xXnuQO/JZFLRaNSKCIIHXt7owEkeR0dHisfjWl5e1s7O\njgqFgnEHgPbchIX1J8YSdBEseYjFYnbx3759a981Go3eI1Hy/q6vr7Wzs6NOp6OHDx/a/Bdnwna7\nrYuLCxWLxXsJDLkTRMtut6t6va7j42Mjp7FshZXHzBthBHe7XSM5wT/BdIPiBmtRCHn9fl+7u7t2\nR77++mvrctwkS7BjaQcoFOcF6eXZ2Zm2t7eNzPaP//iPevDggX76058am3p7e9t0/6lUSsPh0DrU\n6+trNRoNSTLTqMPDQ9t2R7EL8RY5JdJR4gfjrZubG9VqNQ2HQ1ugRMcmjZZhhcNhs+KtVCo6ODhQ\npVJRuVzW0tKSZmZmjAyI5BN/dmbSzJkx1pmbmzOJ5s3NjRXOJKPf/OY3Ojg4UKFQsJEiBRKjQ+Ic\n5wvonO/Kn5NGfKWDgwO9efNG3W7XZuIkcv4MCRw4nMRKccIZqlarlmjRwZNokPrRzVOQuyZVg8HI\n3hY+DgUq31UaJUoIiXTL//Iv/2LjFuSZxBAIavCsTk5OzF4b1QhJjxEpPCJ4HdVq1c4qY2GcPlFJ\nlEol1et1WzLFmev1eoYGM3Lk7H/99dcmveM8w5eBG8Fym3a7rVKpZOx+aUQULxQKqtfrhnBKMgL3\n119/rVKpdG+RE7sEeP549ZdKJcs9P/TzwRbU/O3f/u2QoI3uHeJJIpGwhImtKLNuvjCQMYcaggxQ\nPppIZrN0r6AGVJqSjIhH1cZcBwLes2fPlEgkjLCB1pEATHKi6ur1egavErRIaMPhyA4xEAio1Wrp\n+PhYzWZT+XzerEPpSKmsmXkx60NWQ0fsSpg4DH6/X8vLy/rjP/5jO+AEJGZ/dAZ8fp4tAXFiYsL8\nrpmhcphd0pMkY/U2Go17DmAuGQcGK/A63S3VLB1Ev9/XZ599psePH6tUKlmxh64bGR7jB4IL77XV\natm6SDpv5mkuj0CSjXzOz89VKpWMIc6sluBNd05SRE7DRWVWGQqF9Mtf/lJ+v187OzuWeF1GNsUm\n5CPMTZDBUUhiTQvrGydFCkw+F90hndrMzIwRdOig6WAofpgf0wldXV3p8ePH+tM//VMjyOFBcHx8\nbGeTpIE1LXA5hTMFZr/f1/z8vNLptHXYdD0U55JMj+9awnK3QJ9c4un09LR1gbwjr9ern/3sZ0ok\nEioWi/cUKHALXH8BeEDImiC/7e/vG9eDz8l5qtVqhjbia0D3DaoGUgCSAj+GmAI6RJK6vb1VPp/X\nL37xC11fj2yBy+WybS5k7zxFAIUh94i/i+frmltxp12OAnNfUEqkvSRxGopcLqd0Om0NBDveW62W\nksmk3auHDx9qaWnJij0SMgoi7j1Fiis3oyB30QziCzp9EiH3ER8E3h2SN7rjsbExZbNZUykx7gIh\nGh8fN1LnH/3RH1kT5BZFjBtBiVyGfDqdtgKk2Wzq6OjIRhuM+0ASMadCtfXy5UuVy2X7TJxjip9g\nMKh0Om0xHeI3eYA4s7W1pV/84hdWLPzVX/3V/94FNa4ki+4U9vv09LTNczKZjJrNpt68eWO6aw5e\nNpvVwcGBHSy22pFQgOqGw6HS6bR8Pp+2t7eNuIcd7dLSkiUjNmDR+TEDxG2JLh7ol8AGn8AdHwwG\nA+tYcU7C93tiYkIHBwc6OTlRp9PRRx99pOXlZYOX3eTJy4eZ6UL7rjUjlR3OS/F4XMFg0JI+JCy8\nqjOZjHZ2dsycgo4aKJHDikxkenpaCwsLKpVKKnxntTs3N2dWom4i6ff7thP+5R0j1IMAACAASURB\nVMuX5j3OCIC5JhcSH3VmUFTSQKCQ7Qhkl5eXdpFZIEQRxKWXZMk/FospkUgYKebs7Exra2vK5XJ2\nfhhv4D0ANMqI5OpqtMZ1cXHR3BQ5LyQHl9TEuIDChFko2makX+j16XQYH1GIxeNxk9/BD0C+g2c6\nvIdkMqkHDx7oiy++0M7OjiTZ6COfzysYDNq4iI4RiJLzjjUrHRQd1sTEyFb19vbWpEWwvBkDUPiF\nw2HbXLa/v29Svng8rmw2a4Qmxlkej8eIcayKBaqHZIsfhOt6SfJntzmQKt2t1+s17wf2WTBnlmR/\nljuHJTLdGijC+fm51tfXjRlOEqWLRAJLEQiEDB+gWCwagZW45PF4jIPD+3aLN5z3cK8cGxtTPp+3\ne9HvjzzkIahKo9EYHv7YBIdCIdsAiVMdn4MERWMBbymTyWhjY8Pm4i5x01VTICHkDjBWwhOAODw5\nOaloNGreHYPBwJCLsbGR9WwsFtP09LSKxaLlAjZSklhZvzoxMWFeBxA6h8OhSSlh/bsFBl09eYf4\nSbySpOnpaWWzWVMd+P2j/Snwdvx+vzVr3W73ngIMiJ3mBckbDqGovzKZjDY3N/X27VsdHBzYyBAk\nj0
KnXC7r6OjIoHzIlkhD+W4/9PPBOvm/+7u/G7odOKxqEhTOVbwEGK3D4VDFYtEOM50ZycydkWGe\ngEwHWDASiWhlZcU2K0FoisfjdhEqlYri8bg2Njb05MkTBYNBFb5bjwkkTxBkNo72mTkxbmAkCGZ6\ni9+Zs+D45Br90D3e3NwYdwDYmf+NyrbT6ajdbps2m0uFH/ejR4/04x//WI1Gw/aOn5+fGyTI84Iw\nhG4Txi9QL8tbOKCgJ8hG3Pl7NBqV1zta70lCPj09tfknnxmFQDgctueHO9nq6qrW19eVyWRMR1oo\nFOxyuRp2uiLgcNaXUvi0220dHR0pm80qHo9bN4x+mufNWeQzDwYDS1xAs7y/Xq9n1qSw5uGMhEIh\nffrppxofHzd4150T001i4NPtdlUsFg2Ngb0tyQLgo0ePbGwEBE+x1mw2FY/HrZOg+3zz5o2azaZ5\n5UNEgqxGF0uBF4lE9PDhQz179sy6SZ4Jm/voPikKDg8PVa/XDQKFmMc/kUhEy8vLZpbi8XgsuLbb\nbevemGdCktze3jaCJDwOkAK8E+DHUNhvbW1pZmbGZLB8djgoIC0wlYHHQQ8J1HTnkNrYBAj6QGfH\nXFoazX/ZLkcxTfJhrgq3A24E46CFhQU9efLEJHwU/SCJkPH4DJx10MLZ2VnrpKVRU5DP5++hm8jy\n4CYxP4cEiAKAu0UDhu14PB63v6/ZbGpqasq8GrhTrVbL3NqY31OocEd5thCXue98R9BAfAQwMaJx\ngNOB+ybjwZOTE3vuxExXQYVyo1gsKpFIaGVlRT/5yU80NjZmK4DdEQrGU5g/gVqCBtBoEkch9oEW\nE79dbhFoG0gznhZwmohtFESQpd33HwqFtLq6qqdPn5pi4i//8i//93byHFi6IyRWBBDmHa52m1l5\nt9u1ZMe8IxaL2fyYxAmrd25uzjSTdL7j4+N2wJrNpvx+v3XYMDuB3l1ymKvFvr29tUPF30fBApwU\niUQMnqIa5kV6PB5bvrO9va3T01Ozo/y+1Ay1AYeX6hbLVGaYLimJBAMJB7IglT/wMRW5SwJCusF4\ng2cAlDQ3N2fzIiR8wMuMGc7Pz9VqtczFCQKb1+u9B5vyzBl/EHBcQhEaWDox4HZIYTjfgXAAweIg\nxmXk3CQSCZVKJTPZoYIHqkUri3kLZD3WWLraW5cPQcAgsAHZcpF5V4wWkCrRpSK7cTsnDFzY5Ieh\nCO5knMtoNKqzszOr/mHv8p4phEBH3ALTJTm6zwpfAhIhBTNniWIR90NGGN1u194h91saJURsiEFt\n6GbL5bKazaaRuugWGbkwWiLhk4xAvzhHdFfubB4HOjgob968sXPOmaUjdcc/PKOZmRlrHBgBsK8C\n8pZLSLu7G20wZFwTi8UUCASM58Oog8QBqRZlDPca9I177ZI5SWrcBxflowBARsdyFUmGKrEPodls\nmpoAPwjijiRrIti7gOadRAZiglQUvwzXCc7j8djYgbgM4uVC5ZxLV/dO88fzpSgB7cHMh7tF3Haf\nMYWtK5eUZIUncdslu1K0I9lDOsw7YTyJnFqSGS15PB69evXKxlqQdLG4PTo6UjqdNqknOYTmkHEp\nMluQIe4CxfZ75dr/y9z8/+yHzuf6+lqZTEbJZNISI5IKn8+nWCxmW+JYITo+PnKJ29raUqPR0PX1\ntXUyVOHM66lEuZyLi4uSRolueXlZkvT8+XPbr8zvxSWOypcXjmYfl6J8Pm+d/fLysmZmZnRycmIB\nBIJKuVw2mP/JkyfGOwAy4tItLS3ZQYXoRpKLRCImPWM2dH19bRILTGPcKpPPTTCEeMTfx6VlTe5w\nODTtJjNAnM8gUnERFhYWDPUgcTLPz2az2tvbU7Va1eLiokntHj16pEgkYmsd6f48Ho/S6bQkWQXN\n1sHBYGD8AXdz4cXFhQKBwD3ImU1WyMKGw9Eyl2w2q+XlZfN3Z/1wuVzW+vq6YrGYstmszee//vpr\nBYNB/cmf/IktBUHq6G5EozOk4uZS43JIYfrixQsNh0Otr69bYIRBn0gklM/nlUgkTKsLcYiOloAN\niW1nZ0f9/mi5CoHn008/ValUMmc8PMADgYBisZhevHiharVqhSS6fIxq4HeQ5LxeryqVit6+fauf\n/exnWlpastHU1dXVvfFOOBy+V3hKsu1tFFqSzJDlyZMnyufzVuhGo1Ht7u6q2WzqZz/7mW5ubmzZ\n1OTkpPb39430Fg6HjdDH50O2xzkB1en3+8pms5JGZLIHDx4Y7wWmeCKRUL/ft3cUCAQUj8fNd4EC\nl+IE+Ba7YTr+eDyuhYUFHRwcmKyvWq2qWCwqGo0a0keMw3CGESNdX6fTsfXD6+vrNtKYnJw0xICE\nh/03xj2um1+/37elLLu7u5qZmbGZOufzq6++UrPZtDsAMdfn8+nNmzc2IqOQxHuiVqtZkiJugO6c\nnZ3p8ePH1kGz6AoEEOe3qakpUwr4/X7FYjGl02lbYxsMBk0JQJzk+97d3enLL79UKBQyR8bT01Pl\n83mLtxTv/X5fsVhMKysrajab91AlkCR8Bmg6eUZI5prNprn8zc/PW2Ph8/lsVAA/IJfLaXp6Wq9e\nvZLP59Pi4qIymYwVDMVi0UzEWGsbjUb18OFD7e/vq9VqmbwWQyEXYaFwet+fD5bkSchAW3yZ/7+9\nM+tt9Mqu9uIgUdTMmRQpapaqVOWu8tDudKODRhAEyS/Iff5Jvn8RIH8juWkgQAK4YyAuJzZq1EBS\nEmeJGimJ4iDyu1CeXUf+gM++CSoX7wEKcLtdEvm+5+yz99prrY00oFwuW2/v/PzcnNPIxs7OzvTh\nwwfLEoEYu92ukW8SiYQxTJFt4G0OZDUYPPiOAz1LMlgSoh8VNoYcl5eXVrVVq1X5/Q9TtSqVisG8\nwET0qe/v741M0mg0DFLC2QyYfGdnx/p49C1PTk4MngNZKBQKuri4MAMMqiFIQK5jEvA5TnKoD6g0\nYddDjKLqKxQKkmT9J0nWO41Go0aKo0VAIgWB6vj4WPV63eaXwzjHohOIlgOH5KjZbKrZbNozabfb\n1l/kOUOAogeLpCgWixlK4PP5rE98cnJiydT9/b31/e7v73V8fGzICRUbrYxyuWzPZXd31/ZiJpNR\nMpk0w5WzszNrz6CSIDhz8d/dPcwxB9o7OTkxWJCLEzQCaV2n0zEoP5FIGHseL4B3795Zn/j9+/dG\nZENlgSqkWCxaYgofwjWQOT4+1tzcnGmPXdlaPp83BQvcFCoYn8+nTCaju7s7HR4e2p6u1Wo6PT1V\nt9vVxsaGIpGIGo2GoR8M74lEIrZnISHhUDg2NqZyuay7uzsdHx8rFotp+b9tpoGIYRp/9tlnBnlf\nXFzYHuD7g+AVCgXzLQA2hTdD7MHdLZPJKJVK6ezszMiIEOwgNrqJMBcd/fpvvvnG5HDYCrt6+JOT\nE4OuXYUQaAvDq5LJ5COuAu+MFiDIIkkmPIR0Om1nEE4L3Be
Y5FTkkM/Yy1Ty09PTFqsxB6P9AlII\n2uOS7ur1ul2WjUZD1WrVZmKQwLHHiB84T7pTK/kD+sj5BLGijUYSAkEX1QSoLecFbhRmV3x+qnH+\nQJA7OjpSo9HQ8fGxoc20qH5q3IVXCIgydtj1et3OPxbOPG/iHhbF5+fn6vf7FiNAGfg5cGC4g37J\n+mSXPJff1NSUSUk6nY7Z+kH0griEHSNwTrvd1ocPH5TP5w12oj/aaDTskkViRO/x6urKfj89OzSV\nwECunhGIznWg4xBAyKHqLZfLRjBDM0xfzJXOtFotCzCgAGwyCCfT09N2aCH7EVjYIHw2YBzIOPTe\nXQkOU824IAhoiURCksxHnGrj5uZGpVLJKgYOMIY94+PjqlQqqlQqVqnh+gV8x8HncCP/gK08Pz+v\npaUlgw/xNSCRYiPzjHiuBwcHBvmRNPFMXW4BySDvutls2thLmL34A5ycnBg0C1zd7/d1cHBgkPH+\n/r5dCK5O+PT09FGFRc+U7wPxzjXcQFYJp4T/nsDnsp3p4QeDQftdBK5KpWKVPHs8HA7bO5mYmFCr\n1VKxWHzkTgfRBwgf2SPVL59nfHzcpJTHx8dG+Dk9PbX/P5PJmLOgJLso+OeVlZVHsOPk5KTpjkmK\ngYADgYDNb5ibm1OpVLKJc5FIRKlUyiqrZrOpk5MTY9EPBgNL6ElwqLSATD98+GB9ZXqe5XLZqjJQ\nsHK5LEmWoCKlYnAQcC/V2fX1tbU47u8ffOXfvXunUCikdDpt74N2DNpzPDh4ZnAV5ufnjWhGIgwP\nCOkg7TDaPPA+MDaanZ3V7u6uGo2GqSAoTtg/8Em63a7F4J9K0NzLGHkXybKLmpBgEo+xJq7X6zbQ\niKSB3wHSSSHBrPl+v28yUlfiRoKGtDYYDFp89Pv9qlarpgQAWZI+ql/47LSCIRjTTiXmQwJ8//69\ntcZoK5Ass6eJUSRxTEmkl067C9ifVhZtkeFwaJM7icFwA2hB+P1+UwKQSP6vl9D9wz/8w4jeEz1f\noHf6WTzE6+trC/hoVF2GKMQSMm6IV9g2Hh8fm+MU0jUuJ1oC6CzxPnZH+j19+lSzs7Oq1Wp2uWJT\ny4Gj+hkfH1cqlTKyBr1iV5oVi8UUCAQMBp+ZmdGbN290dXVlFqqBwMOIVipjIOoPHz7o+PhYPt/D\nJC16PYPBQM1mU+l0WktLS7q7u9Pc3JyePHliFoy0H+r1uiVZtB8qlYpyuZxWVlYs0MGL4NLjIPHc\nkafgvU6lTE+SjQphkqoash/KAbJnYHdJ+vWvf62nT5+aAoHKjArDNZ3gYgMGJQAy/hQmPP05UAB+\ndj6ff9TTJKjRB4e9CweDCwhvhEDgwfqWg//y5UurZrjMSLgGg8EjoiBjj+mJu5c7yRJtpPPzc7Pf\nBYmB+0BWPzc3Z5cKQQ0EKx6PG4kQ6ZVrFLK2tqbf/va3qtfrRrYjsMNoHx9/GJJUqVQMzZibm9Pp\n6alKpdIjchLwLu56sOlps3DJck6mp6ft8nVRrH6/r1gspmw2q2w2a5apwOWj0YODWSKRMHIslyhJ\nMsSs/f199ft9ra+vW0J6eHhoA5/gw6CfZtb8xMSEjo6OTDVCD9cdbZvNZpXJZLS/v2/JLBA+bZFW\nq6VYLKZ0Oq1Op6NYLKYvv/zSxkOTcDGU6eLiwlp4JycnJh3mciNBubu7Myi+1+sZF4kLnKQatAZU\njKJpfn7eGO/NZlM3Nzfa3t42DwnXCIj//dlnn2l9fd3klsj7+PxwF0gMXrx4YXPZ0XkzHZLWESgN\ncRmEFlQgFArZmGLUV8vLyzo4OLDR4phYobAgYdna2jJE8sWLFwoGg+YxQbKOORCGM5Cjo9GocT9I\nKuFtYC/r9/uN4A3qm0gkLAZDviMhXVxc1MTEhEltaQUiy0Nvv7y8bJbWKysr+rM/+zMVCgUdHx/r\nb//2b//3Eu8IuhARyDAhGBDsCXZufwYCg0twc7WE09PTRjgim2f8IBaEXApAtDCxIX/xEjGHkD6+\nROB5yFEcTEkG+5L5koVSVQGpkxygDYXtjD0lG4L+OzafEIXQ3NKjod/PBcvhpZfDJQ2ZiE0MZEVW\nyWVEoADdmJqaMg4FFZcbAOjxAeXRQgC67/V61pqBBEWCwc+kqqAHCEEGUo/7WQiYkJMgFoImzM3N\nWWXJ73UdsPi5WLsCM3OxYlPLoaQ3zj4iyLvPmXfhMtC5hNjvoFZAwrw33gE/B5SLPY1ig3460jL6\n7rxLpDa8c97D9PS0tWZAl3gvrsQS5YLrsMUzANrsdruGXo2Pj5sBUTQatb45fWJMhvhsJM6Q2VwI\nGIgUbb8kc1akQgau5jySTPC5Id/R+pNkbGf2JkkqVSEwu0s4ZK/jTw75DILhcPhxIFG32zV7W1pg\n7Pu5uTlT3PDzpY/KEJ4x34c9AW+D78He57vwLOl7oyQi6eNnQQwmNvHdIAwSK939PDk5+f+okvjs\nfH434Wffs0ckGQnYdYQjaWw2m5JkfWeIq8Ph0EiH4XDYBreEw2Ebac0dMTY2pmQyaeZWEOHYK3hF\nwE1yiXrEC+IAjH+0/VTWeNaD8FF1x2Ix+3luXOUculp3fC5IrFwEhCSY4oRYQjuZn4EknM/p7p1f\nsj7ZJe9e7AcHB+ZZTZ8D6KfdbisajWppacmY8EAj09PTWl1dVSgU0vn5ufXjeODAapCB6JXyopHn\n0e9FZxwKhcytDOjcNUThMqAPSWKA3KjX61nPDXkTcEytVrNqEmkKfe1er2cOU2SIwOD07aPRqJaX\nlxUMBm2EItVqLpfTxcWF/uM//kOStLy8rKWlpUdwMFaIVEkYigSDQTWbTYO9f6p2QAlxcXGhWCxm\nEiMqPYhVGJm4WmACN89RkrU8QAQwWsGoBgIfhhEkUyALtDK4qHDn6vV62t7eNvILU/4IWsyt3t/f\ntwBPUnJ5eWme66ALZPKYwVA9cckxuOb77783Qh8/g3YK37Pb7Rpvw+3flstlQ3QYo0wCCSM6HA7b\nDHn2K1I0SY8qcrTEMPzxtt/d3bUxnhjVQJTCmwCYEVkk7ZNWq2W9T4hpnU5HhUJBzWZTmUxGz58/\nNz4M2u7hcKhcLmf2sECewNYuoYgzwL9DYgm8C9KC8qVYLBq5NRqNPmo9QBC8vLw03gp70u/3W5Xc\narWUzWYVCoVUr9fNgpa9BapQq9VULpftgmXf8X6Gw6Gq1ao6nY7NvceuGEfNeDyutbU13d7empoG\nLwG3RYIbI+0bLtZe78G98fDw0Miy2Wz2ES+k338Y+7qxsWFzOVqtlk5OTh7JOEkmXIMxSXZZ0jbh\nDFNIvHv3zhLMZDKpeDz+6GwSA6WPtsMoVhjfenp6qu3tbS0vL9t7oO1yf39vyBauipwf0DYSRtp/\n3W5X8Xhcy8vLarVaOj09VbFYtCIwlUrp+vpaf/zjH22S3dLSkqEMJJmofpiTQFJC3x2jnJWVFUNH\naZ9QfNF+4JnSxi
JRgnyIAiEajVqhQrKPdDYajSqbzapYLJqjI/ch6OYvWZ/skucgAaWQkQGH8sLT\n6bTJzAKBh7nYhUJBkUhEW1tbkmQmFlTmGCvASI1Go9brBspkYAP2jRwYsuZcLmd9P4INVQDZWDAY\n1MLCgsGsbk/c7/crl8uZbzts2pubGy0uLlqmORgMjBwyGn2cSd/v942RenFxYYYWZI9I3tCaQvhC\nNkW2DCSK4QawVDAYNJiafw+KwrMEsUBP6vP59Pz5c+snuZU1TGQ2L4oGXKvcCg42Kxk6PIiVlRVT\nAPDZXTgMEg7tFS5OgoDrJsXBp28dj8fNB8GtmvnD3sJ5DnISVSGyJxIOWPxUrrDSSSSpPHkvtVpN\nd3d3SiaTjzzZ3Yo1Go0aurCwsGAWzpAqeRZjY2Pm6Mfz4x3T92fADZeVJLuguNRub2+Vy+XMBAp5\nFy0PEsFKpWK/i6QIUxz2eSKRsFYV+4XhRqBursskQZTKiIE3ENtIUkBGIKMtLCxY/5TZ8cDLsJDh\nVdDDjkQij2R/SCvd4R/EDXrLS0tL8vv91kJEksa+pPrEYY39xDvkLPOuaWvc3d1Z9fz+/XtTwFAt\ngjD0+32bMsfvI0byXKampszMJp1Oq16vm/wK9QnxynW4Yx/hQ0DRxDnj+TGgiVnv/X7ffNgZic3P\noEjiXROvcdeE4wJ5Ev05ds2np6eKRCJW8VMcUFjBO+DSDgQC1rJKJpO2rzAl6nQ6drbv7h7mELx8\n+dIQV4pE3hekREnKZDLWsuMcwwnhHY6Pj+v58+cql8vWOsDThHcCg54WB229nyoG+Hl+v9/ItrOz\ns6ZkwGHx8PDQPhMKtF+yPtklz8tk4tj8/Lyq1apCoZBpmPv9vjY3N41RSIbcarWUSCT0/PlzFQoF\nXV8/DJ5BK04PG7nUzMyMSqWSer0HH3U2NhfgaPQwzWxzc1Pv3r3T7e2tFhYWrMqn900vHDgxEAgY\nSQRosN1uq1QqKRKJaHFx0eBaXvRgMNDa2pqmpqb0ww8/WLIATOPCdfTxjo+Plcvl9OLFC3333XdG\n/qGXxcUJnIt8EBiZqpEeOeMmc7mcbUxkIMFg0AI2DkywTMfGxmyC3MHBgUFb9MuowOinYaML2Qt4\nkArd7/crn88bSpHL5ax/CExHggCcjgSMaVnY6IZCoUfOXp1Ox2SVeFEvLCyY7zcJDEkO8jsCLZAn\n0xEhK/EOqbpLpZKkBxMgDEnQLrtWo1Q3q6ur1qoB+kb6iN1vMBhUPB43Fy7Y1VRQBB7cr25vb+2y\nxIYWNjcohavvXlxc1GAw0MHBgdbW1jQ5OalGo/FIx0/VBOFubW3NPhNzzpHOra6umqfC3NychsOh\n9vb2FIvFtLm5aVU4bGEuExLFdDqtp0+f2rkkweQC5HmRRJCIJxIJ9Xo9Y6e7WmmXNMt7xaiE35NO\np82sh3ddKpV0eXmppaUl48WAmtDnJQYQbG9vb3VwcGAuf/RXIWFKMhe9RqNh+7RcLtvZJ0nj80uy\nSx7Im9+PhtxFLP1+vxEvcUOE+0CCD5oGVEwhhH8IFwdJPMQ2DKskGcny4uLCJGh4QADfk+DwhxhE\n3Nra2jK+Cg5+JJyxWEy7u7tGPuTzkqDRFkRRNT09bZU6Bl2MjIbbBAv/iy++0O7urm5ubkySOj8/\nb3uRoUWZTOZRMQCiR6+chGRzc9P+Gzg51WrV2qugyrRoKQr9fr+Rh2kBcU/xXROJhPr9vlqtljKZ\njE1i/Gli9UvWJ7vkDw8PrZfrMp3HxsaMeEQmyqaHqYj/8P7+vjFWIc9I0rt373RwcGDsSnSdkJe4\npCHFYdDx+vVr629999139iCZfQxr8/r62sguVG0cEP4A5XFhA8PgRyzJxgxirtLpdPTDDz9ocXFR\nS0tLOjw8NAb2wX/byDYaDVMJMK8daQmQIf054ChYnXxXDubp6amy2ayxVukxMsP6/v7ejG4KhYKR\n/QhyMLJvb291dHRkvdipqSnTyJ+dnSmRSBiZhVbH8fGxEUvo+37zzTdGbIHoA/x3dnZmlQgXErAe\ngQpDCoLm8fGxEYJce9pwOGxthXa7bS0KJrNJMuSn1WpZFk71IcmkdVSFDKdBrdButw2+xUa50+mo\nWCyaZposnnn3XOK0I4ADcdRiYAbQYaPR0M7OjrLZrOLxuLGYR6ORDYIhuFO5n56e6k9/+pP9/rdv\n31pAZ9gJOm0qOap9pKaMBq7VakbQ5Pwi0+x2u9Y6oEcJJIscTNIjG9NSqWRnklkJDBFivyG9lGRD\nd7rdBwtoWi6w1Ll09vb2lEqltLi4qKurK2vtURHTo0beiUQSi2I34F9dXaler2tra0svX77U0dGR\nSUlptZG4807Pz89VKpWM11IqlbS/v6+zszNlMhmDdUFTrq6uNDk5+ciDgj8wwbPZrCELnNlKpSJJ\n5ndAMo0XPjIwEmf+PaRXPjdOgJyto6MjS87L5bIRx2jfQS6FUDoajfTu3Tu7CLvdj/PnQS9BK5DK\nvn371toxVM5A6fSjM5mMPvvsMx0eHmpvb8/8B7D9hkiNhBhPEVDTP/7xjxqNRiZP7vV6KhQKBsnD\nLUE6iEEX8Xxvb884Un6/X69fv1axWLQ2H2glSinaEKCP09PT+vHHH1Uul/Xu3TuTd7ZaLU1PT2tx\ncdFaTZIsYdrd3TXzI/ZFs9k0ztHPrU92yROM6NfwggiyPGBY61RXnU5HmUzG3KOogNEkY4vYbrfN\n0Q7P6Pv7h6E1QDW8QH4GnwGNJpUNlTqbnuwc/Tufj4qZfjL/PxvQJW1RHUByIzgwAIR2BRUq0kAY\nzvAFyJTJ8OkzAq0CZ7sOarQB4D5gk8ofYHigaDJzSJJAS1QIg8HAkiMgdpIz3KioqqjiXJIYVf67\nd++M5Uwlx/PhM8zPz5ushmobIx+30uZPOBzW0tKSOYlR7fEdCFY8G54BKgEUHZBoGOxCG4MqhMvL\nJTqxN4bDoTHa3eEVkAwhKZHtw38gy3flTvRfp6enJT0kI/Stgf8hKeH2BowKCkCSNjs7q0ajYaYs\noBrIgthffEe3LRCPx60PC3cEsqnbh5dkF4i7p3mv/B1XJcFnDoVCluTwzOmzuuoALuOfJtRA97iI\nIZkFVp+YmDBmPHsYHwh+FvEAVEGS9VSRNgYCATMugufiEjAhGcJ/aTabajQaZtbiIhskzZw74H5U\nGcQdnrck+260cnjmMPKJPfSxMcCCmElLCoIw75lY0O12LU6fnp4adM3fJ9FFiozZ0E/19Dh2ggBS\naNFmhOTH8+DC5x5wW560OVFgEefgoYAourLa/f19a9uBenBXAO+zT3mW3A0QVjljWF7TOx8Oh0bm\n5X6A80BhBs/n5ubGZmOgvGGfEWs4j4FAQCcnJzo/PzdEj6Lofz1c71YYhlKtCAAAIABJREFUHAiq\npkQioaWlJdNuAlEwDe3Jkyem1YWsxgOBEAIrFUcrt5KivwbcxbCQbDZr8is
CDJcHOlv6bxwMl0wU\niURshCyXO5OgRqORNjY2tL29rXK5bJXp3NycVVn0y/r9vvb29pTJZJROpyXJKlxgOp/P98jdjT4t\nwYRnEQwGDdYlI+YSRac/Gj0M8IEEQvADop+ZmdHTp0+tbwXDNZPJyO/3WyXhZt0LCwtWGdPnPD8/\nVzweN0lYLBbTV199pX6/b1r3y8tLLSwsGDTJ9yCJIbHodrvmzZ5Op02VEIlErDpIp9M2zKVer1vl\nCfHM7cefnZ2pUChobGzM5iKEQiELAsDjkDnz+bzZV56fn1v/lcAC/E8gIJmht4nBUjweN6mTy8L2\n+/1GnnKH6rgKC5JGEtHh8MH1LRqNWoJ5ff3gsx6LxTQajZRIJLS4uGiJItUb8CXJbCgUMhYxSWy7\n/TCiFIviTCZjEDracnTha2trGht7bMvs9p+BMIHxeRbSQ8JPMkXifHt7q0gkokwmY6Q0pLS0TvBO\ncPu3JF6Xl5fa29vTwsKCFhcXlUgkDFUjDo2NjWlzc9OSJnq07ENcIuFZvH37VisrK1pbW1MoFFKx\nWNT+/r5BslT2yWRST548UTj8MPUMAmM0GlUymdT09LS1DCFm3d3dWYsJ06GpqSklk8lHfvBwd3K5\nnBUa7BFJ5uDHCG0SkWAwqMXFRUue+v2+KTiQYZL0UJi4skz6zsQWkAKc6mg1Ms4WDhC/r16vq16v\n6/b2VolEQn/5l39pMYJEgOJnbm5OlUpFrVZL//qv/6rl5WX9+te/tsuXM8jFj402aAxkWcjWELVp\nX3Bxw9Hie8O1od8P2Rr0yOfz6dmzZ4pGo6Z/Zx8FAgF7d7Q3Tk9PFY/Htb6+rlQqZeZGPEtaRsQs\nEGzk4yR1EE1J+n5ufbJLHoMHXhTQD1mm9HEuPHpWXiQ2iDx0ZF+QO8gy6RUR0NzKhmyMzBWYkCob\nByt0p/Q9OVxkojDp+ZyYelC5smFdCJZqPRgM/j+VDEx8DFfIpqkKJBkkRIZPBYr5Dpkxzw2IGUIQ\n5JOpqSmD62gl0GOHFAVMCRGNd4Y+nuoDKSIBCpUBfsy4OJEUALHv7OzYwQLZAbLmc9Je4X1LssoB\nbSuBBMKQa79KgHJNilx7y4WFBdO8wiuA6Q2Zi0yevUGlTDIIo52eM9U374ef5ZocUbHQc6Xqu7y8\nNE12Pp832JlEgPeCC57rtgcD22XrQmTi3ezu7hqXA80xFyPSUlQowK9c9lweJMB8HvY7z5znRlXD\n/r69vTXjGVfCxCXD8+LiIVFmch9JtqtECQQCRtgjkQAt4Pdz/q6urgwV5FLEAAjkiIRpMBjYRD6S\nLJI33Ap57iSjtGIglwLX7+3tWbLS7/cNIZEerH5BtqjgIBG7VTvfm1gmfXSn5PNLsnjHJUBMA+6F\nn0F7k5/Nz6VFyfmVZPIy9rJrykW7hKQOdId9QTxnstvExISNAyZ2sJ9IXNhLoVDo0TOv1WrGxXDl\ndIFA4NFAo9FoZKRJ9i8oI+iYi9pRxcPPcVsLxB9XfoiRkfSxtQeK1Ov1LPkgAT08PDRSHSgx543v\njMkOMl9kjbTnOC8uCvZL1ie75GEFc7BdhzAubCo22L5UkdfX11YxAy260BaEC3r9kIi4rDg46JXJ\nBIGlSBBgjXPJA5kNh8NHbnLAx/SZ6FES1MmgMaRBrwkxDHgIQwieCRKaTqdjGSrtAuB2+ufAi1wm\nEBFJSiQZAnB2dmaVIBAezmywOd2BHTCtGV/LRmdu++XlpVmActCRskgyuDqdThvfAtXA0dGRJFmy\nNzk5aTA5cDwyJvSi9PdpqzDVDu0uiR89sbu7O6sgXHtJjFZAOzDzQZ5Iq4Rkif4wSQSqDFAFSEF8\nDiA1PjeHHsgdjfX5+bkpQVBQvHv3zqoy2MP84dlAkiRoQBKkVwnT2JXytdttVSoVzczMaHl52Zjp\nOAHSB+ccEoD4Zwh2cCB4BySgfKdqtWp8FQafcBlXKhXbf3BJSFBvb2+tNUMQu7y81MHBgZkv0d7A\n6IYkEVMo3ifwKIkICS7PnhgCz+f+/sE4BWSBsbtwFjhrTHk7OTmxpIFkEjKrJGPnt1otHRwcGGIS\nDH6cStfr9cym2YVp3eBPjCM+krTiq0GiDWrEGSXRGg6HNp0RHhAySWItFzG8Bi5fkDjMi0gIuOBd\nZj0cgHa7/cgmnGQK+ajLc0D6VywWrXLFCOny8tLMYUBn4RzBVwCVy2QyZjkLsRYFABNKXbMuklqS\nM/YxMjmXRwAqjIsg7Qlg9NnZWa2trdn+592DhmWzWWvDQPC7vb3V+vq60um0kSRp/0iy38tzphXg\nXvL/6+H6lZUV7e7u6s2bN8rn8wYVw+4ki8K3Gxi+3+/r5OTEhnWQeSGtATIMh8NaW1szSIaLkwpC\nkkmi2u22YrGYYrGYEYU2NjYsm49Go1pcXLS+UDAYtAp0Y2PDSFYc2vv7eyP6QDRzpR1MEuP7hUIh\ns+5k+MJwOLSBLUdHRxYYJFngpsoB7qWCj8fjBnlSCbbbbYMFkQUuLS3ZQQW+vLq6sp4kbP1wOKxW\nq2WaZCrxfD6vm5sbffjwQdFo1HgAfr/foOqzszNrIdCrDwQCpnZgxjRZMiTI8fFx5fN5M53w+/3G\nSdjY2DBdvyTbLxCpaBNQbdOO4f1OT0/bSFUmEJJg4LRYq9UUDoe1uLho6AxIAhU0LmGTk5NKp9N2\nCRLw8M6mRxkMBm0QE1InkthoNGqVOLp8AhDBFQe1bDZr6gbkNnNzc8Z8Ho1GymazdmFDWGw2m/L5\nfPr888+tQvr973+v4XBoSAQzGkiWy+WyuSEydxzTKCpYKuhOp6Pf/OY3Gh8f16tXr6ylxLNDmlSv\n15XP5xWLxQyh4fdx2cFk5nfl83nlcjk9f/7cEpNcLmcXBYGe959MJtVqtdTpdIxcipQLGJZL3L0A\nSIjY9zc3N4pGo8rn86pUKlZBz87OamVlxdo+kITh1sBnAGXJZrMma0ulUlpYWDCy8Ndff61QKGSt\nPca2MlwL2DqdTltymkwmjciH/O/u7sGqFatvn++j7TIcCxLNWCxmFWatVtPCwoK17+LxuL0nYGyq\n4KdPnxpZlPYB+wLJMZ4nWNiS2FCp53I5QyVQjSAFjsfj9vchaGKUBGmQzwMPIZfLGRcBtc7Ozo4N\nfQLd3d7e1uHhoY6Pj/XVV18pFotZa/Lk5ET1el2BQMA8QIiX7NG1tTVFo1GVy2V75iC88Lu63YdZ\nDXNzc9rd3bUEaXJyUsvLy8YRqdVq1r6j/ZbP5/XmzRsbHy09XPZIBkEQl5eXLYH4Jcv/8//J/8wC\nZsIPejgcPuqnIFWhEojFYjaBSProdsWlTSWB7IDLBd0zzGqcgiBgSTLom0zS5/NZ7w04CMkfMjNe\nqCTTuIIQ0BOLRCKmH2eaFdk27kxun5mKmiBExUrGDexOZknw5GINhUL2DNmcVPmgGBBD+FmQ0Kjy\nMQ1B3kH2T/bu9/sNioJQBEx+d3
dnaITr9uYSKvlcVIJo0dH9BoNBq+aAbEFFXFLQ+Pi49QRRZPD3\nCD7YqdLT4juQ/UejUZsJzt/jeYHOUJG7EiGCJe87GAwqmUwaWRT0A8kiSYIL/3PhuO0WqotIJKKn\nT58+an9QvbP/uFSpWnBb44LhPQFH0sflu5NcMlmLnyd9JJYBjV9fXxucTFVGcKONABSMpwVnAjkm\nqAb7DnSJHqZ7dkBTeC8kuCSznJl4PG5cCEiMQNZU+8D3uCBCdCNww1EBZoejA4LEGSfWsHeJEahN\nXHMSkEkX7SOGEHdAZOhv015wLwveJURJkDcuc5JY+E3Aza4DKOcQOSTVKs+c/Sp9dLELh8OWrLo/\nG6QTxIT4wWd20QA4NEDPPBdJ9i6wWQY5I76wX4mBtHikB4MviiQ+L6Tp6+tri99UuSCccAtAZdjL\nblylQub5EN/c2M/fc/kRtDEkWbzk7DKLBYQF8q7Lz+Ed83NIXEEZKVB4ttxl3Hk/tz5ZJY/LG4Qx\nNjKHwe1rQ0ZgIAqSJy66i4sLNZtN641TnX3xxRdKJBLWB6ZPS+IAVMRn4edRkYMqIEFDk09P6ubm\nxsYJcnkC2Z+fn+vo6OiRhr3ZbKparZq5gdvXgeXMnGE8unHx47BRbaA9BobEYAPDGC4lpqa9f//e\nqnNaClQNHAKmLcEw5mJAAhQIBLS0tGTti2KxaBIqPK+z2azGx8dNPuYGdZ/PZ5IeBs7geui2L3Au\nazQaevfunWlyb24eJoTt7+9b9QqshpMg6ga+Dy5mQP60RobDobV/+v2+eWhDIjw8PDTSD9JCUBbQ\nBngVPp/P9Nq9Xs9g3KOjI0MSgPFwSKSlcHp6aiSkVCplI04xBXInt/l8Ph0dHVlyTADjuSFF4w+c\nDuSdSJIgnoIg4bqI/eabN29UKBSsH93r9fT27Vuzd6VVwWVA0KPK4Zni2cD33N3dtcr2/fv3NuaX\nxIoKPZPJ2CxzSVbpVyoVG3wC6kECRD+0WCyq3++bBDQYDNp3JrG4vb1VoVAw5Ucmk5EklUollUol\nVSoVG6M6GAzUaDSMzElBkEgkDJlotVp6+/atoR0oP05PTw2hcPu7TE6kQsWnfnd31ypckDnigiT7\n/sg+gcZRaGA8xXQ7EpSTkxOVy2WDl1dWVnR5ealXr14Zz4FEDSSJRPf+/t4SaKB9Wle0CL7//ntT\n6YBsgUoQa25vb41LNTk5+YiUW6vVdHR0ZJcXMY24cnJyokKhoHQ6ra+++srirEsE5MIjNs/NzanT\n6Rikz+XNPItu98FF8dtvv7XkjRZCoVCw2D83N6ebmxvt7u6aFz1Fy2AwsGe6uLioZDJp6BTOj8TH\nr7/+WuFwWMVi0VoKxHPkjJVKxZ750dGRkVFBzGg11ut17e/vm2nYz61flgr8D6y/+Zu/+XuyXCRn\nkqz/S3XCv+dCpOKHxIXNLXImxmhyqMmA3crLJTIB1QLduaQpKvKnT5/aCFNePj+XjBeyBTpsqllJ\nBpdiJQqLGMtGAnUkElE4HJYkI7SEQiHLGKlY6HVSHf80W+ZijUajZsEIggDkzLN1s27+2R2AQX/T\n7/dbRsm8czJu5EiSHiEcVOdUZlRXroQLZOb+/v6RlOXJkydaXV01Ap8rSUokEtY34x3DII/H41a1\nUKGl02mDVQkgoAEMJfH5fFaxcjHyfAiC6XTaqnUqV9oMJH9TU1NaWlpSMBi0ygJuw9zcnJaXly1Q\nor+HBEeVTJUqyfYQlQQVNRU4ZinwPPi7/A6qBvp5IC0Qu/AkQDGwvr5u/9uVj7k+4VQZzFSHqHlx\ncWHVRyqVsvYShFr2LRUa+xVUyO1FA8ti+EQC514EJCBTU1Pa2Ngw0yH4GvTJgV2pekFPMM2imqWK\njUQi1lsH8h8MBoaAuIQoKipiCM/arfhAnH5awWH8tbS0ZOeS4gGFDy0d9gIIFu5wnCfpwSgmkUgY\noxxnS0ixvEN+P2oEzihoIFU+nx+DI2KfGxeX/9ua9u7u7pEMDG4T8Y39zoAdV6qLvwSSS8YYE9t4\ndiB2buuVVh+XpSRrp8HT4bNcXl5qbGxMCwsL2tjYMJ6U+w45gxQbvEveI2gwz8r1jbi/vzdyrnun\nELNBMkiIQaxJYkm+Qad+2gobDAZaWlrS8+fPjb/yT//0T//n5+7aT1bJU33Sp5BkU8qazaa5H8HO\nLpVKVhmwUdG/0jdPp9NKJpPKZDKqVCrmKpRKpSywSbLLYW5uznSUvDigElig9OqxIxyNRjafGigU\nJIAAiuTChWIjkYjS6bTi8bgajYYODg4MwqV3jg3p3t6e9vf3lclklM1mlc/nLRsHupyZmbE+EoH4\n9vbW5EkkMEjgcFACngRWAv7j4mXDdjodI59lMhm9fPnykYaafj0yvuPjY5u5PRo9GFcw2Y+fzzun\n9+ZqW4E8JZn0LZvN6vj42CBzeqf0nt+/f//I6Q6C4w8//KB6va6//uu/1vr6uiYnJ80JDvJXr9ez\nwELFVKlUFIlETGZ2fX2tUqlkyQK2x7COCSwQCGkREDSRYoFWRaNRPXv2TMVi0fr3XOYYaMCxmJmZ\n0ebmptLptPb29qznKz0E81/96lfmkPXq1SuT4mQyGeXzeRWLRZ2fn5tRk9/vN1MiVyMN45dn7vf7\ntfzfU6/wy+71enr69KkmJyf1X//1XwoGHxz58vm85ufn1e12rRpLpVKP3MAqlYrxBGiBjY+Pq1ar\nqV6vm084CelgMNDOzo4mJiaUzWYtgXa9GaggQR82NjYkydADSWaYBYkOLgEw/OLiosUE9uzq6qry\n+bxdxCTh7hmdnp5WpVLR/v6+VZ+ZTEZffvmlmarMzMzYmaNN4WrhaeeQZLhJHcZKriEQZEbiCIhS\nt9vV5OSkJZhLS0vWPkB/zYCUtbU1+14fPnzQaDTS2tqaPV98SyCpgUC4bQZQLJITkoZ0Om37hAvV\nVSXRniI2zszMGMQ/MTFhHAXeKURUEnreaa1W0+vXr42rALlY+ljBk+hGo1HT6oNINRoNra2taWtr\ny5AeXDbZYygwUCvc3t4qGo1qY2PDEBrcQcfGxrS+vq7x8Yex20dHRzo8PDRv/M3NTeM2VKtVXV5e\namNjw4oiZhtgUZ1KpSwWHB4eGjpFe4H2Bvwn2iw/tz7ZJc8AgZ8aLQwGA/uykC4kmeYb21hmkdNX\naTabdqkCRW9sbMjn85lFrttPJ9tCp84lCnToQkH1et2gpUajoVqtZr2vTCZjwTkYDBoSARMfctbO\nzo52d3ftcuCyxeDn+PjYSDV4SeN5DtuVzYcjFnAkFYAkaxewQejlYnwDGQ74imcOSYYLkMy70+mY\nExd/BxYxLGw+e6vVsiCDbSXvjcsFnTwQIPIQEBpQCQ7l5eWlsdpxqeP9QV6bmJgwGBuSy8LCgsrl\nsrFxqZbphRJILy4urGVBBbS3t/eIRQ5HASgYUx0IM1QAqBtISoDrcLWimmUMKcGaIA26AFz
Hd5Zk\nve5KpaKzszPt7OwYUXEwGJg5DZp9WOS0pnK5nLn/sdzkieSt1+up1WoZRE2r4u3btxoOh2atCYLA\nvqRap81CO4H+eSgUMgc/ZhzwzIPBoLVWIFoCDVNVuhI8/n+gTPZSKBQyVjafiTNPW2RjY8MuX9pQ\nqFd8Pp8RDdPptCXMKHGYX7+/v29ntNvtmtaeRI0LHXTIVYmAesDmhoiF6kb66I1PWwhrXdQRxDBQ\nCiRdb968MSkZSQmtGJjt0oPTpiRTcPBZXeQRXgexDbgergyVcbVaNSUG3+ns7MwGiiFhAznt9XrW\nWgPNu76+1o8//mhJ8+Liog32cbknFxcXlkCMj4+rXC5Lks1EgDvQ6XTMnRRDHsjAFxcX5hIJiRrS\nIEocOFMzMzPmoEgLCiQNpUa1WlW/37dR5iBNuKyivAGFof0KYix9NIvCBhckCkQHfsBoNNLZ2Zne\nvHlj7pG/ZH2yS57AAgQPE3RmZsYqOOA/ejjhcNgqTIKHS04C4up2u0aGohd5cXGh+/t7ZbPZRxpm\nLi82o/SRjNZut3VycqJWq2Xae+AaevjRaNReChcaF6cLDyMZa7fb5gbGAQVeRr7iSmP4ewQJGPWu\n3p8LmayWLN/VLLvfmSDD55ZkPAT3UAIHohEeDAYGraFJ5ZLGsAMiG/yIm5sb8+0mUNAbp8rlGUEe\nu76+NskSsCF9OCRTJIhUCFwyrrELWtyrqytjD9M2wKOABCIcDts0MCYWhkIhSwIhoJG80YIATsRs\nBtQCYhQIBhcFjH6UEHwv3i+wvSvhAZanVeJaI4NI0Vrg+fl8PvvckJDgG7jkM1oxXJSgQuwL4Nlm\ns2mQIYQzLHZdyJ0KqVQqaWZmRktLSybH5P2TXJCAUCGiawYWdX3sudRIkEG0iBvAnQRZ6WPCS9xA\nzYLHP0Q1F06F0zM5OWnEWfYEl7DrtEgbCZc6nic/m/45MYzYAnMcxzcWiQWBn0uK50usIMHkc+Pj\nwO8FSeCyoxUmyYh+8JRub29NgkY/nhYTihtaGUDgnFHXy4LK9+LiwhjvcFZ4Jy5Hh3NEnzkajVp7\nbWxsTGdnZ8ZpwfMEtDQcDqtcLpuXCcvv95ssFTLc/f29tT6YT0/LkncPouGiRjzjq6srHR4eGp+A\nxB7+DYqp+fl5Q404g3xP2sJc3LTAaFfCByBmsK/cwoUCz71ffsn6ZJf8s2fPDMKgrwoM9OzZM6s+\nGaBBX8TVbCNJQh8JIYIJSJJM/sZgl+XlZY2NjdklfXFxYdD4H/7wB9PLzs/Pa39/3yY7+Xw+LS4u\nmtvQ/v6+WZny2RcWFgy+h6TEZby2tqZGo6GjoyObrOca2gABQfgBMr+8vFSj0VAikVAul7PnhbwI\nCHJmZkbJZNIOGyQsSGVLS0sGldNLYrOOjY2ZZKPdbltgpsKVZJcdA2swWiEAJJNJOywkPM1mU7e3\ntyYpqVQqVlkgLQFt4VLhGa2srGg0GtmgFS4KEgsqO7gOBE96cSSRsKDpVULwe/78ubnZuXpuglmr\n1VI8Htfnn39uHARgVi4o93mmUikdHh4a63x+fl4rKyuPjIFApagS8vm8Map558ilEomERqOHuQsr\nKyv2/yUSCfMwJzEBdYJdPhwOlUqlDKlpNBqGQg0GAz158sQSgHw+r8FgoFevXtk+R0aIH/7BwYEG\ng4/jcdkLIDtLS0t22SYSCXW7D0OBJiYmlMvlbJ9ns1ljDxOokSiCFnQ6Hb148cKqFhzJSqWS6aX5\nd7ReUGmk02lTs0A6dT0FMpmMEomEfD6fPnz4oOnpaT179uyR9nt2dlYvXrywJBd3u+PjY4OwV1ZW\n7NwAPVcqFaVSKeXzeZ2dnZmxFD83Ho8rkUhYywIvDhJTqkp6/ejMkRPOzc1pe3tbBwcH2t/f18rK\nimKxmHw+n2nnSYDgDrA/5ubmTINNkh8Oh/WHP/xBr1690ps3b5TL5bS8vGyV5PX1tQqFgjqdjqkm\n8F44OTnRd999Zxc5bVVkmCxkz6CjOGiurq7q6urKfjYFCjwYCjqkpp1Ox3gNkqxlwaXLcCMqW2JM\nNpu1mAxCiHkW8Qb0kFYCbQ4S3pWVFdXrdb19+1bLy8s2fIa+Pck9cRGJN6RyYgTnHMI33h4kkRgI\nwVWgdYYSod/v6/3797q7u1M8Hjde0S9Zn9TWlv4UmVMoFLJemJsV84DIRiEG1Wo1xWIxm0ZG4EfH\n/dOhMZC7CCpU8qenp5Y0AE9RDdPbHo1Gj/SmwMUYqpDxSR+JcFw0EMeAfAgY9K2ohuiPsylpF1C5\nuCQsGPzValWZTMYqYswTzs7ODCbt9/tWPfZ6PYO9+azSg9qBBALSn2uiAvRLlulKtsbGxix4cxlh\n/ANBznWiAh0gmEDQwTKS1gqfBWIlz5yAQLJWKBTMxQtCoPSxT8fhJ6NGX87hRg/MYeQy5zPRvqBn\n12g0DDbke0C+o5XQ6XQMxiXh5FkiE8UbgWcE6TAQCFgCB7mLCYwMWHEDGgQm9uf9/b3i8bhxAQgG\noCe0wVxJJo5aQMwknVSOvGcIRrQ7XOQAgxDaREDctFToc5KI8FmAmzGuoVVFbLi5udHBwYGSyaSS\nyaRVt0jIXE4KCQ0IF9UQrQOCPBwaSXYRc3ZQ4MTjceufQtxlkQQgN6QXT0yjKnMRQxzziAO4FlKx\no34AXYT0BaLWbDat9eQSLdvttjH2IXJCCGYP856JM1dXVyoWi7q5eZjWhhMbaCHvbjAYmIqAthwx\nlBYVyTLFmqtowNWPdiJnj/1eqVTsn4HjuUBBq1wJcCgUMnj98vLSUD5+NkgC3hh3d3cWL9nLxFaK\nFBJw7giQZRAsLmR4Byh1mAsQCDz4y5Pcg/bS1gM1dN33aIWCvtC6BEV1pc+0+twWFkqxX7I+2SWP\nWxrmIvQsMFcJBB68ot3Dxz9z4XS7XdOgAwNi7HF9fW3yFOCl+/t7lctl22hUy1wEyDnw5+aloTsv\nFotGumKjkf3Pzs7ahgWCIdDx+/AxLhQKOjs709bWll0aVKNAlhw0qj+CIhcvh75arSqVShlfgQlb\n5+fnNqu5VqupUCiYKUuj0TCWOHPf9/f37RDR60Li1m631Wg0bHMDmSHhCYVCJt3LZDK6u7vTzs6O\nGbc0m02TGGLZSeJQrVbt+QDJQfYZDofW8+NCRRqGEoFpTFxQh4eHj5yscEsjCGYyGfV6PZXLZTMz\noVXCXGu8qZHScVmAJtRqNeuT0Rry+Xzm/AbUXiwWjUlMotPpdAwuffv2rQKBgFZWVqz/nUql1O0+\nOAYygbDRaGh7e1urq6t6//69SqXSI/kh0qVisWhQMlA8k+OoDCAZkjBSfdFr9fv9ajabqtVqmp6e\ntv48ZEAYzvAt/H6/vUNJj9zZhsOhdnZ29Pz5c83MzFiiKX0cxkJSQOJO+wsJGr1fzJMWFxcNGSHp\n5HsdHx/rzZs3xlzHfAU0DRMm90wBBXOxlkol87
hH4cF+Ir7Qjx8bG7MpY7RBaGOAyEDoJYHmwqcv\nm0qlrE1wdHRkSRyENZKT6+trVSoVMww6PDzUcPgwppek/u7uzjgd+GXAg8B++fb2VjMzM2q32/r+\n++8NmTw+PrbCwE2YMCKD00CSnclkrIhqNBqPfNvpPYMqwHvhmSMLpUUD059YA+9ld3dXmUxG8Xjc\nZNYYVVEo0fYgvsAJAPFttVqGCIBkxWIxO4/Hx8e2X3mGnIXhcKj379+r3++bYgUDnqurK1UqFeMo\nUVhS6aPvJ2FwzwWXOPEdNAYPg4mJCVNcYIzktrGI/b9UQvfJLnmf72F85vr6uvUh6BP3+w/DS4Af\n6U1jOIIUYnZ2VkdHR5bhzMzMaHp6WgsLC/YAqbIY0MABR7vq9/u1ubmppaUlpdNp65kieUKSNxwO\nlUwmjQQHDEaVHQgErCrZ2tqS9FEtMBo9DJ8AdmYsIUF8fHxcq6uWiSXvAAATAElEQVSrNredAwAD\nvtvt2t+HNcqkuT/84Q9mlMA4XQZMMCvb5/MZHO+SayDQkCXDIscEpNVqGbsVFjTWqf/5n/+p09NT\nk5C4RB2/36/PP//cTDno8RF0Jenp06cGU3KwQW+o2NBhk93S26cXS9AaDofW1nEDHn1snlskErGs\nnQMH+xyCDwqOTqdjPXkXvkskEvqLv/gLg/dcu1G/32+/j9YGzxekiGqHrB3TIdoSJA3RaNT6rXzu\nf/7nf7aJcZubm7q/v1etVjMC0+9+9ztrD/DcXJkOzyCVShnxjGoklUrZ75ufn7f3lE6nH0Gwbp+Q\nIRrY/JKkUF3CkAaWJqGfm5uzVg7JVTQaNTY5fAFJJnn6q7/6K3uWXLDA9Az3CAQepsFhfgSCQD+T\n9y49zHcHvgdhiUQiCoVCjyr4H3/8UaPRw3hS9jmIDe26dDqtpaUl+1yzs7NGYozFYqbDJ3nBUjaZ\nTBr0PBqNLIjzjOjp1mo1aznAFeDiAj2EhIgkDiIebRwMiviZ4+PjWltbs5hJ9cxlDVmTyY9UlMQh\n2i3hcFjJZNKSXNqWIEoQrCXZGQC9ZB/ThqPYI+HK5XLm1YGSBP7M+vq6jdN1DXmItcxg577gO6dS\nKS0vL1urgQQftjzFQaVSUbPZ1MTEhBYWFrSysmImazhtLi0t2T6HTPztt9/avoWIBzJ3e3tr5xx0\nDxQL8zRaM5FIxOIbvh/z8/OWBBPTfsn6pBI6vpgrfyAYMC2ITcm/Y8MBYzHVKZPJGLyBJI9LHIIe\nPS+yL+B9qkJgFGCuQCDwKFDTVqCKINC4TkXSR7iezUd2xgVLTxGYnn4Tf+jpsUmp7GF+Ag3H43Gb\n9Y2cEO03JBA3AQL6Z8ADyADQLRc/Mhh6kGiGQT9gQkOWAwHABhTZHckZla4kO4hcKvxuKh760Tgi\ncgnyWSTZ+0GTTzuEC4JeLORMSDFwOtCL83kgNLXbbS0sLCgSidiMd0xJ+IyTk5PKZDIWFN3LA8Kf\nq00GImcvY7pBpQuKgewSmJPPJ0mJRELValX7+/vmBTE1NfUI/pNkbGp+3/39vSXKtAJAUnq9npG7\neM4gDqAUoDYw9yGv0cMGOqQ9wn8PKkXPniSJS8YlJvIusBhln4GY0dZaXV01Ehv/HZ+Vqgv4mlac\n65yH8gGSIOgF34czChkKQu/Ozo6RgTkT7runnQekT2yiegPlg5zKu8Jngb8H+xySGok2KGa73X4k\nb+O9npyc2NlzPwtscWKfi0aASnFJ8N2REHPG4H9QjdL3xxmSvcfP5X24mnGQIoyl3HZUt9s1221a\ne/T2QXUgfo6NjVkSwrhYtxVAywa5IokQz8v1xUC2yLN2SbKceZBTLv2pqSkbZASRMp/PG9QOWlyr\n1ZRKpaxdxs8lCYGcys+mVQZKQaLmkvbYJ3xu9hgx4+fWJ7vkYUe3223LbHmwQOhkjEwEgyhXKpXk\n832c+0zVg6scukp0m8lkUo1Gw6rzwWBgDGqkTpeXl9rf37dNgq0nn5XePdAe1TbaeS4Un8+nYrFo\nPZzb29tHgbjX6+nw8NACKyQTIE/YpvgXQ7iYnJw0vT+fq91uG9mLjc4FD/xNjwkIjgCK65xLtKEf\nBjeh2WzaxgM250BAYhuNRiqXy4/kY8BaPK+FhYVHQym4IAaDgVqtllWNPMNAIGDyHIhCXMIwkn0+\nnyKRiAXP3d1dq7aurx+m/eGO5mqGkU3G43GDXzl0p6enSiQSGg6HRhICSUGlwAXN5UAFD8Od4Exw\nIokiqWu1Wo+MYDC1gLxEZYAMCFnaYDBQLpezwSaFQkHSQ3XEuQDae//+vTKZjLG/+/2+VSWBQMBs\nNiEAkkwCG1NtutIl9j3vFlc/qlCka5A1QdtILiHOIsnEuAilzPj4uN6+faurqyvzPJiYmNDx8bGa\nzaai0aju7u6shyk9BHcuGypd9hYyJr4L+wvJKW0PxuvyGUDtIKrBa6GihsxJooOyo16vG/JHUKcP\nX61WH0kBQS0oXvDnp1ggKULuCp+l0+mo2Wyaf4UkM7G5ubmxlhve69Fo1KBjhqJQLE1OTmpyclIH\nBweqVqtaWVmxeQHNZlOVSsXInXiDSLJ4whktlUq2V1xEFl4He4b4TjXKc4lGowoEArY/QSjYq5x/\nzhFKlNFopDdv3hjyNDU1ZfsAXgwkWUZr0+LCXY4Lk7uIPU4sJQ6WSiWVy2V7B8RVziZcMRJWJIzI\nSPn9cHpOT091fHxsZD1QD5/Pp1qtZugcewgSJUUTSjJXDvv/W5/skk+lUtY7RjIBhMmLxiQDeCOd\nTisUCqnZbCoUCtmgA5/vwUmJvgqe1jMzMzaTudt9mNqUz+eNNBGPx00LDZOWAM087F6vZ3IU4GP6\npL1ez1jMo9HIZr8jz0BrHQgEtLm5aWx3tMGQrsLhsM1iTqfTmp+fN3c0WMYzMzM2EnU0GlnrAKtF\nKidMIiqVihFfuESBVpvNpkFCVGfZbNYOFNUHzmqxWMwmys3Pz1vAcuVWyNLo/7kqARiuV1dXisVi\ndtikB9iUw0sbA8JVPB63Spf+vySbh04lmUwmLeAzmTAUChlDv9frmc6cyjAWixlBZ25uzohtzLwG\naeCzw5Sl98tli5UoGlyIOQw+cS90fg+6eHcATzabtcokHA4rl8uZdha1CHO6qZyCwaB9D+mBdQx0\niILj7OzMoF+SBVAo5H+SzPqXXiuac5A1bFzhRDAcCvIirY/NzU1NT09rb2/PDG1okc3Pz1trhkos\nHA7bhELMkBYWFuySjsfjdmkAIRNQl5eXjSA5NTVlZxSGNhbCDCmhHTMYDMymmTMK7MxeQ6nCMBTQ\nQRz7uHSQXfZ6PXvuEDV5N5eXl6YCgmkfj8dVrVZN8gl6g1Tx5ubGjGxIqEne4QFJMq8IkJvhcGio\nw3A4tH14cnLyaB4H6AR8CBAsLlFiLp/h/Pxcl5eXj8ZF40cBaRIPg+vra0ObTk5ONDc3Z6REnl86\nnVYwG
Hy0z12TLvr+kPqIxRhfYVs7Go1sL3a7Xa2urlqRSOXLBRsOh629AbIAcgTRdXx83CbDgXyS\ncIHSQSZdXFy0wsR1PgWJpLWQy+U0NTWlXq9nqBifG7MjiK9YkrPvz87OlMlkjANFm5vv8EvWJ7vk\nV1dXJT34MW9vb2thYcEqYP758vLSTDfi8bhWV1etMpmdndXm5uYjd6h2u20SF0bZQhIDTt/a2rID\nBPS/u7urWCymtbU1lctl+f1+vXjxQpVKRW/evFE8HtfTp0+NDenOIl9eXrZKn97bwcGBZmZm7IKc\nmJjQy5cvjRj18uVLC0LALmiX19bWbFNls1mDa2KxmBYXF03fG4/H9f79e719+1a/+tWvND8/r9PT\nU4PpyYbJlGkRVKtVVSoVra2t6enTp2bwsrCwYM5M+XzeNvPCwoJSqZShF+vr6/Z9XQ4F/aTj42NN\nTk7qyZMnqlarqlartmEbjYbZ+r5//17hcFhff/21qtWqarWaGex8++23mp+f1/b29qNqErb57373\nO4XDYR0cHNhz29/f1/j4uNbX1zU7O6t2u62nT5/a515aWlI2m7UqKJfL6fDwUJKs90frRJK1Z7a2\ntswUY2lpyQxl1tbWtLi4aIF6YWFBr169Mp4CVrrT09OWtKZSKX311Vfa3d1Vq9XSkydP1Ov1VCgU\ntLq6am6OY2NjWlxcNJ7H1taWQZwkWKBXEL2QshH8Njc3NT8/b2cql8tpb2/PnsvY2Jj10Hu9nv7l\nX/5F4XBY29vbarVa5hlOAHr27JkymYz29vY0NjamVCplPvWZTMagyt/85jeG2oRCIZvSeH9/b1Ku\nUCik5eVls2eNRqN2eZ+fn+vZs2dGEOR8/fu//7s5y+3s7GgwGNg/f/PNN+bqhlEO6obx8XH9+Z//\nuS4uLlQoFOy7R6NRk57yWTOZjGq1mkqlkra3txWJRHR0dGTJGq0pEperqyttbW0ZGjQ9PW3OmCRb\nmOt88cUXikQiqlarxhVBvolGndYVyNPq6qpevnxpVSRT6KgCe72euQymUikL/uvr60bApcDpdDpa\nXV3V2tqaechvbW1Zq+3FixeanJxUrVYz/s7JyYnm5+f18uVLVatVNRoNrays6OLiQn/605+USqX0\n5ZdfWrxqNpva2dnRxcWFtre3NRqNbG9ns1njG5EgE09AAog1sNlTqZQRoUHeSJJcm9lcLqdaraaz\nszNFo1EzTNvY2FAmkzFnwsXFRb17905XV1fKZrNKJpPGrj85ObERzL///e9tRgCJ6eHhoUmoi8Wi\ncUWA8SnIIKSSuCQSCW1vb9t008XFRePApNNp8xsBHSkUCrq9vdXz58/V7XZVLBZN2ohhELEFhdTP\nLd/P/yf/M+sf//EfR2SRMBPpHbnmLVT3ZEySTCpC1g6EKMl6qJCg6KHTj3PJTrxIKkIsF4FpkOes\nr69b1evK9IBM+Gz890B+QDBo5pGAkVVOT09b0oAXAJUgHt6gBMCDoADAqrBwXXcuF0pEw0xSAsRM\ntQF/ACgRcxZJxiMgc6d3x2S/crmsXq9nOk4uRnrwVJyQ8WZmZizjdzNgYCgu5NFoZIcQ6EySwf/0\nvOnpuxpTviNVDBWsJCOo8e+osnErnJ+ft7aDK7uEUMXzhkBGb5gqkZ+5vr5ubnYYstB3JJCzLyAk\nslwjIrf3DRkLX28magGZw+R2eQqBQMBGlSIt5N3gvuZWYclkUhsbG4/2PrAr75IWB4gAz5L9RAVc\nr9fV7/dNR42yg7YVralIJPLoZ8KKhlgEWoXUC0trzgVw6/LyshHeOKMYvcADoB/Ns+f38f8h0wRV\nYq5Bu922MZ888+npaauw+TzsF3TwnDP2IvGL5JiWJEk1z5t3wllATgViNj4+bgoDOAc8U3r7nGl3\nH9MCwa6WVheSWUiHtAmIKZjkuLLefr9vNuI8M/cPVTbxA/Im5wvVAPJAzhBnn3YAzwopqfvMMcHh\nOQaDwUcMeVBSCkM4UGNjY8rn85bY87kwJiLZRwrIz+IeARklXsPzmJ2dVSKRsMSEFgE8CThZnHPQ\nOMyakNxKsmeEEonfS7EH6vp3f/d3P3uHf7IBNb/97W//HjYs/bGZmRl1Og+Tg7j0yuWyBVUOPJKZ\n09NTg7rIuMmeIUdcX1+r2WwaPI6Ebn5+XoVCQY1Gw2BYIK1AIKA3b94Yw73T6Zik6fLyUv1+35IL\njEK49Bhxy8xgyD1IPvBRv7m5sZnrpVJJU1NTGgwGKhQK9vPq9bqurq40NTWl4+NjFQoF6+XBWgUS\nQ2bWarVULpftUCKrI+DBVG00Gtrd3bV+I4zRUCiko6MjXV5eGkTUbDYtcaESRglwfX1t8qharWaM\n82KxaJcT0sJkMmlyPhi9tVrNLoi9vT1dXV0pnU4bVIUEiMMoPUwLQ5fM9DueCza2ZNK0W87OznRy\ncqJIJKJut2uV/8TEhA4ODnR1daWJiQm1Wi2DrtvttsHOoVBIu7u7ury81OzsrPWfkSbt7+8b+/nm\n5sbgP6RhiURCNzc3+uGHH+xwsw8wUKrX64/kYBB3arWawb+1Wk0nJycGPRaLReMCVKtVk5ceHh6q\n1WppcXFRNzc3ev369SOvds5UqVQyCackm/zFPqeXDtuYirNWq5mkrlwuW+KGZlx6SMbpVQeDQeOP\njI2NmWUxvuKcf0l6/fq1Gfrs7u5a/5SKCx7Jhw8fdH//MKeeIOm6u42Njdl/B6R/cHCgs7Mz2wet\nVsv4Kq7uGXksiUalUtHY2JjNTPD7HwY2HR4e2s9rt9uq1+smuUWKSt8eFjVnlBYXclxc2kDemB7J\n92V2xdTUlEqlkhlKwQmgT88FQ2sTPgqcENQTxJBAIKB3797p7OxMqVRK9Xpdh4eHJpk9ODiwy+jg\n4MBiV7fbNdtduCUoaujVkzTU63VLEhqNhskaQWLn5+ftDEAm3t/ft2SuVCpZcgufiIl/xWLRCGsH\nBwcWk1AfoJDZ2dkxUinnl7jIxX59fa3Xr1/buykWi+p2u0okEkY4xp+B9pIkk+rl83kdHR2pXq8b\nSlSpVIzv0Wg0jISKjHx8fNykviB/e3t7phBi+iAtGpKw8/Nz/du//dvPDqjxlre85S1vectb3vKW\nt7zlLW95y1ve8pa3vOUtb3nLW97ylre85S1vectb3vKWt7zlLW95y1ve8pa3vOUtb3nLW97ylre8\n5S1vectb3vKWt7zlLW95y1ve8pa3vOUtb3nLW97ylre85S1vectb3vKWt7zlLW95y1ve8pa3vOUt\nb3nLW97ylre85S1vectb3vKWt7zlLW95y1ve8pa3vOUtb3nLW97ylre85S1vectb3vKWt7zlLW95\ny1ve8pa3vOUtb3nLW97ylre85S1vectb3vKWt7zlLW95y1ve8pa3vOUtb3nLW97ylre85S1vectb\n3vKWt7zlLW95y1ve8pa3vOUtb3nLW97ylre85S1vectb3vKWt7
zlLW95y1ve8pa3vOUtb3nLW97y\nlre89T+z/i+NTGqQknu/YAAAAABJRU5ErkJggg==\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Current loss: -1.197917\n",
+ "Current cross entropy score: 2.008666\n",
+ "Training step: 400\n",
+ "Time since start: 2.045402 m\n",
+ "Steps per min: 195.560562\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAfkAAACHCAYAAAAV6SDhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsvVlzHMl1NvxU7/uKxg4CBLgOKY41lDThcFjWa0leI2Tf\nOMIRvvQfed8f5AtfWAo7bEnWyJJGM5oZbiABggAIoAH0vu/bdwGf5OlkVXdVdTUAxtcnggGw0ZV1\nKivzLM9ZEpjRjGY0oxnNaEYzmtGMZjSjGc1oRjOa0YxmNKMZzWhGM5rRjGY0oxnNaEYzmtGMZjSj\nGc1oRjOa0YxmNKMZzWhGM5rRjGY0oxnNaEYzmtGMZjSjGc1oRjOa0YxmNKMZzWhGM5rRjGY0oxnN\naEYzmtGMZjSjGc1oRjOa0YxmNKMZzWhGM5rRjGY0oxnNaEYzmtGMZjSjGX04pFzVjT0ez6Df76Pf\n718woihD/2w2G+x2OxwOB2w2GwCg1Wqh0+mI79vtdtAY/X4fg8FA/AMw9DtdQ5/LRH9T+zvnyWaz\nQVEU9Ho9cU8tGjWmUeJjyeNpPRefR7vdDgDo9/vodrtjeVe7p9o91EiNt1F/k/mX3/9gMECn0xl6\nx/xa+kfPx9//uOeR39Eo3rV4lq+jdWK328Xfer3ee9+ziuj55fnj/NE+G0XEt8PhEGuc1jmNZZQn\nPk+ch1F7cdy4au/V4XDA4XCIOe/1eqrrXH6/Mp98rwMX64nmYRzPWmud34OvF/nvJFv4XGmtx1Hr\ndhzJfGrxPW5MmnN5j/Z6PdXr+fOO+ky+Vm0fmiWaa4fDAafTCafTiXa7jXa7rSk79MhJeW0YlYFq\nY6rdm/any+WCw+FAoVAYOzlXpuRtNttglFCVFb7NZhOblkjvYrSC1ATnZdx3UuLCA3gnbK8771zR\n61GSZpXGtIjWrJpQt5rUjCQiM4qZKzgr+L6Md8PXC4Ah418vf/S7msA2o0TNzP1lyrRJidY4rRc9\njs91Ie5EdLtddLvdD4JvbqTY7XY0Go2xOtxxGYzpJe6hEXHrlr7Dv3/Z/NHPD2FBEMlz9qHwPk45\nankE1+H5ZOU+bQVvpbejtg/NkJrCnCZZMeeyd21EacmOwKjxjf7tOtJlrfFp0Yfi8GiRXr6vlZIH\n3gkY2Svhi+gqXorM14wuh67zfF/1epBhZpkm4c2K5yKlp8ebtoqsuJdRI4e+y8NiXIHokVtm73nV\n++Oq5bJZ4mGsD4lvIiNG1ZUpeb0WLYdRrtpaNGLRG73mMsabtjepdq9pKkK1cS9zfYxCGKz0rLXu\nwcMZxM8kAlf2YieZy8tWQmpOgdFrOenNQ5BjvJTPAFzILsoNoHDTOD70rpvrpJiuG4qmhz5UBILv\nb70G7bXz5DnxB7EiLuh2uxGNRtHr9VCv10UsxqrYI92H/wTMJRfRTzkxaNIELj3X6U0w4rz5fD54\nPB70er33Elkm2VAk+Kw2nKygUULNCkWpNiZwEcJyOp2IxWJYWlpCo9FArVZDpVJBq9USSsXo3lFL\nyDKzdilO63a74fF4UK/X0Ww2DfEyjsdprwE9yt1ms8Hj8cDv9yMSieDGjRtYW1tDr9dDrVZDNptF\nMpnE6enpB6lQ9JBVoR1APS+BPiNj1kpZbcU4xJvb7cZgcJF8SDKa/maz2UTyplXr3wjZJ7rjBGSz\n2f6vzu9ZdT+Ew2E8fPgQoVAInU5HTJheyIbyA8ZlRMoJTGrfGzUGF5R2u11kgTocDt3JRPJ44xaH\nmkGhhz/yYObn57G6ugqfzyf4VEvkMsO32lxOI+PWzDVq64YnJFlpPNI8OJ1OBINB3L9/H3/+53+O\nWCwGt9uNTqcjFPwoZSjPrdq6NZsERpC10+lEJBLB0tIS2u02Wq3WxFD6uD2l9pN+1zKcza4jbmit\nra3ho48+wt///d/jn//5n/G9730Pd+/eRSQSQb1ex+npqRDy455PL5nl3ep9M+mY/D2QQuREMsbt\ndotqKsAaY1FeF2YNWpfLJfYg1yXEt9frHao6sIrv/zUe/t+4a661Jw9YB3d7PB4EAgF4PB6Ew2GE\nQiEcHBzg/Pxcl3fAhfqopBpZOJqBlLkS5RnDZiFYPZtQ79iDwWAoo5agyl6vh0ajIRY9V3Kk8Ck+\nOwmUPM34n9Y60PpcLU9D3oRWxfxIcTocDni9XiwuLiIcDgvPvd/vo91uizKmUcbgpFD2uO9TmWa1\nWkW/30e9Xjc1B3xu+Z7QiqdyoT3qXVmFCg0GgyEEZXd3FwCQSCSQy+XwzTff4Pj4eAhV0TuuFd+x\n8rpRxOWiWaSO88blBq17l8slriGU0Eoyo+D5M7daLTgcDvh8PuHRu91uuN1uOJ3OId6NKHs144Pv\nBT10ZUpeL+xmdlHSRFCZRCgUwtzcHJaWlhCNRmGz2VCpVJDNZg2PO07R00+z3iHxzX/nynLapNcI\noWckOKrVasHr9b4nnOlZjAgCOcbKP5tEUBlR5LK1r6WwRyn6SYmMpkAggHA4DKfTCb/fj6WlJcRi\nMQAQIRLqIyH3jTBLZo0xune9Xkej0TCcoc7/ccHv9Xrh8/nQarXEPz62lpJXU+xmPDeZT+Cid0e5\nXEaxWMTe3h7y+Txu3bqFUqmEp0+f4uzsTHdfCpnv/78RyToyjrvdLpxOJ3w+H0KhEAaDASqVCgD9\nYctxemacPB81Jv1Oe9TtdkNRFIGk+f1+hEIhhEIhlMtllEollMvlIVRrnGM56m965cuVKXl6kZNu\nNi2y2+3w+XwIBoPi3/379/F3f/d38Pv9SKfTePPmDfb396fGg5kxZe+doFOH4+JVUTMgvWS1USAb\nHsBFgpHdbofX60Wn00Gj0RAWK6Eog8HAsEcDDMfl9Lyncd/Rsoq5IiciL4KuIwVK140TEJN68TR3\nKysr2NzcxN27d+F0OsXeCQQCmJ+fR6FQEMqUw49WIgl6eJXXGhl43OgY9X36nBALh8OBTqeDwWAA\nu92Ora0tPHjwAPv7+zg5OUEul0Or1RLrTDY+1ZQ/YE0s1uFwwOPxQFEUdLtdvH37Fj6fD5FIBKen\npzg5ORHvhTxPK+WMXidJ/r6Ra/SSWZRG/j8Zcb1eD51OB61WC36/H4lEAvfu3UO328Xe3h6y2ex7\nte1qDsC4sJMZQ5zWGI1HDuTDhw/hcDiE09hoNBCLxXDv3j08fvwYu7u7eP36NY6OjlAoFNBsNsWe\nUJOJspeuhkzpnfcrU/IUt+UNbqxafGT5zc3NIRQKIRAIYHFxEffv38fa2hra7TbOzs7g8/kQDofR\nbDaFEJ8WsqCX+ILkSARgTXnQJMQtbb55er0ems2mgC0bjYYQvNSx0AzvRtAQ2bo2Axuqjcm71nFE\nQo8xYdbAous8Hg/i8Tju37+P27dv48aNG3A6nej1eigUCmi320ilUkKZEJRPXbxoPU8SHjFLHFKX\nnwt4Z8xyA47yO+gZnE4nut0ubDYbA
oEAbt68ibW1NWSzWWEIUKITJy6Ird4zFCP2+XwCVSAUJZlM\nwu12o1QqIZvNolarDRmGVhA34jipIV/8/zy8poXyXDaKwJ/FbrfD4/HA5/OJ+DWFWIPBoJhrMq4C\ngQDa7fZ7idM013x9GUFS9BB3PGg90Nol1M3r9SKRSCASicDlcsHv9yMej8PpdCKfzyOXy4mEVHKI\ntN7hKD700JUpeZfLpWqJyb8bJUVREI1Gsbq6io2NDfh8Ptjtdnz00UfY2tpCq9XC8fExXr58iX6/\nj0QigXa7jcFggHK5rHr/y4DI6b48QYMvnHa7jWazaUnyhhmiBc09W85zsVhEsVgcmiuKSfX7/aG2\ntGbvr/c7RuBYNUuaDCx5DJ4YpAYXmrm/2jPQv2AwiLW1NXz729/G0tISBoMBnE6nmMtkMon9/X2U\nSiU0Gg14PB64XC70+31UKhXDSZqThEPU9gxPNuPGESl0StKjf+TJhUIhuFwuYVT5fD4sLS0hHA4L\nY7zdbr83X8QHj1fyd2SFB88RwkAgIIR1t9vFq1evsLu7OxTG4s9uFal5qFxhyoaAVmc6vjYmQRrM\nPhvxbbfb4XK5hKzjrczdbjdcLheKxSJqtRoajYZI6iQFydvpys4FPZva81m1zjudDk5PT0WCHVUa\nLSwsoNvt4smTJ2g0GnC73Xj06BEqlQoODw9xfn6OXC6HSqWiGavnit8s71em5H0+HzqdjoDzKK5o\nxuqlRRKNRrG4uIgf/vCHuHv3LprNJs7Pz3FycoKvv/4aOzs7WF9fRyaTwc7ODvL5PEqlErrdLvx+\nP3w+HyqVivBE1QS4ERp1zbhYjJxVz70yo4tTDZYyGoMi4TY3N4dms4lqtSoWpuytcaFCnhl5XEay\nrLXmT+1znulPUJ7RkhW1TUQoymAwEEapPK6asDVDNM+Ukbu5uYmHDx/iwYMHCAQCSKfTyGazOD8/\nx8HBATKZjDBMg8EgbDYbgsEg4vE42u02CoUCdnZ2UCgUdIV5zMCX/Fo1gSQjLMDFu1pcXEQkEhkq\nYw2FQlheXsbDhw/FmqnX6yK2Twl89H/ZAwIgkrWon7r8zyzZbDb4fD7cunULf/mXf4lcLoft7W0x\nt2ryQi77Gjd/44jGpDVCCob2oKJclAn7fD44nU4hW2ldktENQITVSLHY7Xa0222USiVVdGQUT5Ma\nMbSnyOPlirvT6aDT6YhnCoVCaDQa6Ha7CIVCCIfDWFpaEsm/+XwexWIRhUIBiUQCbrcbe3t7mnvA\njCzl13Q6nSF5SDLCZrOhVCohEokgHo8DgMgDs9ls2NjYwIMHD9BoNPDVV1/h+PgYhUJhCImQDVTO\ngxG6UrgewNAilHvT6x2HalVv3ryJjz/+GH/zN3+Dzc1NfPPNN8hkMsjlctjb20Oj0cDW1hbq9TqS\nyaSI55EC83q9qgdbqHkLk5LWxuAwkJpHYlQ5q8F7ZhY21T2HQiEoiiIErhrMxPkk4djpdISxYvTe\nenjmCpIgyUmQA/nd07rkHpDa2Pw9GSn/5NCf1+sVCm9jYwPxeBz9fh/NZhMnJyfY3d3FmzdvhFcT\nDAbhdrvhcDgwNzeHra0t9Pt9ZDIZnJ6eolwum5p3vXOnFzrmn4XDYczPz4vwTrPZRCAQQCKRwP37\n9+H3+1Gv15HJZEStOXn15XJ5SMmrkd1uH1J+kyBgtLai0Sju37+Pv/3bv8WTJ0/w+vVrEXJUg761\nQhWj7kPXqv2N5+t4vV7Y7XYRu26322LthMNheDweEbbp9/uw2Wzw+/0IBALi80KhgG63C0VR4HK5\nUKvVRKxYln9WQ/ncICY5AlzIC1KcpOhdLheCwaDw9gnRpCqTR48eIRAIoN/vY39/H2/fvkWn00Ei\nkUAgEMDp6SlKpZImH5N49N1uF41GA9VqVRhKPF9gYWFBIBSKoqBUKom9fefOHdhsNqTT6SEkVHYW\nJnU2r0zJV6vVoW5QZpoFEDQ/NzeHaDSKTz75BN///vfRbDbxq1/9Cv/xH/+Bly9f4uTkBLVaTTSp\n6Pf7QsEDF5AQNcgBLmKhciyNNplR+NOoYUDCgRYPoR08c9osTbKgB4MBGo0GTk9PReiAFrLs1dIz\n0GbN5XKmPGs1HrQEIN2Pe05G5lyNKNeA3qFa4ySrYVgqP6Q4bzqdxi9+8QuUy2WcnJzg8PAQp6en\nYh3TdwOBgIgN8rgwrWkjpGasjSKt74wygk5OTlAsFoXhR94QKXyPx4Nut4ujoyOk02lRjqcoCorF\nolBG8r1oLfB3Zdbgoz1PXvCtW7cQj8fx29/+Fs+ePcPBwQGq1epEc6SXD/47hTHI861UKsITttls\nQgna7XbBn9frFXPPwx48uRTA0Klsenk36/zQ90kuUB4DKUnazzabDfV6HU6nE4qiCNnd6XTg8Xjg\n8XgwPz8Pm82Gly9f4vT0FM1mE69evQIAIX9GOVVmZQUhfIVCQYzDHYGzszNUKhWBrHS7XYTDYWSz\nWeRyOTgcDpydnQn9JM+J2v1Ir+hV+Fem5Am2pc1oVHlR8sWnn36Ke/fuIRwOY2NjA6urq0gmk9jd\n3cU333yDw8NDUXYBAPV6XWxeroTD4TCWl5cFZEUbRbaiLis+D2DIE5lEwctWoVlF3+l0howzWXDK\n49L3yJOa1LMepUzUrF05Ri7HMfXck/in55mGcideqYMaxfS63S7S6TT29/eRy+Vwfn6ObDYrIHri\niyox3G43qtUqisWiqHggxW+GHytIyxskb9zlcolQXbPZFMYsealnZ2coFou6lbW8Hs3IFuLX6XSK\nxK9oNIo7d+5geXkZ/X4fpVJJZPdfBtH6pdBkIBAQjado/igOPRgMBMpBxiBHLHkOBBmFLpcLLpdL\nJM/qVXyT7gO6nvop8M6I3W5XPHuz2RTrmeLutEaoeqderyObzSKbzYrreThoWkROo/xMwIVjUK/X\nh0oDqRKGyuvUdOC4NW5Ejl+ZkueJEmYoFArh5s2b+Id/+Af8+Mc/Ft7L2dkZyuWygOlrtdp713J4\nSFEUtNttrK+v46/+6q9weHiInZ0dZLNZtFotVSVmJWyvRnIDHB7zM3vPSYU2X1g8OVDPNdTutlar\nCYVrFAo28n21EAVPVjIyj7JXa8TiN4pKUegpGo0iFouh3+/j7OwM+Xwe2WwWmUxG1SNpt9toNBrC\nU2g2m1hbW4PD4UC5XEa1WjUMuZpBocaNx38no4+SXumzwWAgFD/lx1Cowcg7IwUwyX7x+XxYWVnB\n+vo6NjY2cOvWLSwsLMDn82F3d/c9NGFSGvWOuIKnzG2fzzeUfEbz5HQ6RQ4Dd6YCgYBoNUw99gkR\nWFxcRKlUQqFQQK1WM/TurVofHCVTQwe54id0MxAI4NGjR6jX6zg5ORFriif6GnEQJuF/1N/IWRgM\nLhrnVCoVzM/PY319HW/fvsXZ2Zmh+9B86aErVfJmvSJFURAOh7G1tYVIJCIS1DKZDH7/+9/j6d
On\neP78Ocrl8tiJIIuWSpMcDofoRU1d3HhM3iwkpfe55M3FF+kki1HLozJyPb0zPe05yYO02+0C2jdi\nqMjzMI5vHrPkOQS87I+8nG63KyxvrnDUvMRJ5t2oUqWQEdXCU6JisVhEpVJ5T2lxaK/f78PtdovE\nu16vJ5KBuMIex5dVQm8caUGRHC3iISG9Y9JPGTky+kwUOgmHw1hdXcW9e/dEQuOzZ8+wt7dnaVmW\n3pg9fY8MIPJ+KYY9GAzE73LOQr1eRz6fR7vdFtB+KBSC3+8Xa81oRzaraZxCBt7NFcnsdDqNUqmE\nZDKJer0+tJe1xruMNa51TyoXBPBer3u9RpUR/q+FkjdCZNEmEgmRdVypVES262effYZnz57h6OhI\nV7IRddKitokU56Iszna7PaTgZcieP8Okno+sHM1CjfKY/HezCh7Q987o/VCmO8UI+YE1eu+ploQy\n7r68kQrF6+ify+USQrBWqwmLn5R+vV4fqhbQapJjlPTMOxkfhHZQTLLVaqFarY70SmleKTEyHo+j\nXC6jXC4PQZ78Xlo88DmfRBByNMUoAkIxVyMKnhMhTZM4ESSIyWian5+H3W5HMpnEZ599htevXxvm\na9T95N9lnrnc6fV6KJVKAvolRFQNmuYyq9lsolgsolqtwu12iy6Kfr9fGFg050bWgVUK08g4FHpQ\nFAWFQgHpdBpnZ2dDHRaNInZ6nY9J5LuiXDT9icfjorzYDH0QSt6sd2q32+H3+3H//n385Cc/EbWq\n//Zv/4bPP/8cz549Q7FY1A3VkUBdXl7GD37wA1Hnenh4KDYVKV2eKCiTFUKRYoBUK1qtVkWjHrOk\nV0nqGWcccUVLWe6kZIxkmhu9Pz0fJeB4vV6Rs0ElLHfv3sX6+jr6/T6q1SoymQxOTk6QyWREhuur\nV69QrVYFxEmGArVQVWu6osWj0fwN+j558LVaTXhWo5QdXcNr+AeDgcgsrlQqwqjRYzSaUfBqz8kN\nPQ6fjiO73Y5gMCiuMUPkyU8ijAnNC4fDaLfbODw8RDqdxtu3b1VDeXrGJN7U/sYdCS3Pk56L4u80\nx2QQjko4HQwGIvueqjgI/YnFYqJC4+joCNVqFdVqVZNfK0nLwBm13nnJ9MbGBr797W/jD3/4Aw4O\nDnQjhmZkohzyk0mPA+R0OrG4uIjvfve7+O53vwuv14snT56I0kY9ZJT3K/XkzRBNFCUTJZNJvHjx\nAr/4xS/w8uVLZDIZw54DZfc2Gg0MBhfZnLwMgie9jCqfmpQIIvR4PLDb7aYP9pDJSkWvBzYH3nmX\n8k+r7yd/fzAYCCFA/Hi9XnEimt/vR61WQyAQAADxrlutljiGmJSL3JXNKBm5jninmCJHGfQ8P0H9\n1Ec9FAqJZCu55lYPL2ZyIfjvDodD9POmccYlQMnzZXTOuaKcFEofDAaizIyUfCqVErk+ViXcccRD\n63n52iDnhWLqvDpJKzxBc0JOCiWvcSNSdl6MesHyPc08Ox9Hfn/cCCajlpA5yjuhpkRWhlHUeOZr\nTA7jahlxZFh5PB7EYjFs/G+jNjoN1cxa10tX6slPcu3h4SF++tOf4vj4GK9fv8bz58+HmgnoJYr9\nvXz5Ev/6r/8Kp9OJbDaLUqkkrHXyRikGpPZCrVLGfIFOAllOi/TCzqQorRK6o94rjdtoNHB+fi5q\nr0mZ1+t1dDod5HI53Lp1C71eD2/fvkU+n0e1WkUqlUK1WhUHSVBp2mAwGGoYonZfPfCqkWek0iW9\nSBfNbavVQiaTEcKeNwcx22TKqFFARjA1QaIkL/588jPJSr3fv+jWR8a23jnkioJ7W2bhfgqRFAoF\n4T1zWJwEvJZQp3nhczSOd0Jx1IgbSbxqhT6XFbR8raIoAsqmrPzBYIBcLofT01MAQDabRbFYNIxS\nmCE5BCojUfSM/LtUFUDfabVaODg4wMnJiZDZlINilPR44USEgtAa5WdEqBkn/CQ9r9cLv9+PYDAo\nEu6oX4FZI2kcXfujZtWo1+shlUrhD3/4A46OjpBMJlEul00njPT7fQSDQdy8eRPlchn5fH4oeYM2\nFgmhaRD1515fX4eiKKjVasJaNyus6BpZkE4aWlAjNWtcTchNAqGOupZDmVQGRCVZNptNJFBRpy/u\nkd2+fVsgJ+Qd8AN2tBT9qE1m9J1NYjjyhiyVSkU0xaFnIYVk1NDSs07onft8PgQCAQwGF6134/E4\nNjY2cOfOHZyfn+Ps7EwI43K5LAQ28dXtdkVTq3g8jkqlgnQ6bdhQmjQsRMQT/7jCGXdvfr6BXsGt\nZfSofY8bStyAVhtLvo7mmY4nVpSLplaVSkU0AuPVDHrIqDyh71PdOACB+mxsbIgDl6hrHXW38/v9\nooMgGSFUgsl7LRjRAUbXFq3xYDAoQmoUPl5ZWUG/30culxOoQrPZhN/vx/z8PMLhMLxeL7rdLpaW\nlkTJouxIToP3D0rJcwWSzWZFz+BCoTDx2PF4HPfu3cPr16/x9u3b96AjIhL8VipJsvb8fj9WV1dF\ndixZt5Nmu3KhMA5amoTUvCnu8Ux6Pz3zTvW25LUQ5L2+vi4yt6vVKgqFAsrlMpxOJz766CNEIhHR\nNKPdbg95NCTwZV60SPYi9cKZZueHG6ONRgM2mw1zc3PCi1aDFfWSnmuovj8ej0NRLnrub2xs4NNP\nP8Wf/dmfYXd3F69evcKLFy/w5s0bHB0dDZVxkeKhU9xWVlZwfn6OTCZjiE81uN/snNJ7V4vtq4UU\nFEURgpuu5Y7CuDWgBq+r3U/NeDaC+BDqQ94lJZy63e4h49Yo8XnQs14I6aG1EwqF8PjxYzx48ACH\nh4c4OjrC8fExMpkMarUaYrGY6ChHeSbNZhM+n2+oVe8k3vC4a/1+P+bm5jA3NyfCBG63G/Pz8/je\n976Hfv+i4x45EPl8XvRXmJ+fh9/vR7FYFEed+/1+YZjw5kNmeB9FV3qevFHrj9qWejwetNttnJyc\noNFoTMyHy+VCuVzGy5cvRYINxXwI/uGxeNljNftcnAg+Iwg5k8mMhIr1Erf4rVK2MqnBpDRfRq3l\nUffQI8h4AyEAIvZIbY2j0aiAK+mEQgAiVrm6ugqbzYZcLic8CurGZrQkiyv5cXDtJO9EvtZms6Fc\nLouEO4rNE1mNRpEn0mg0kEgksLGxgT/5kz8RxtPq6iocDgeWlpZw69Yt7O3tDbVsJg8TuKj5z+fz\nAibn63YUEdxt5TONuw/3Sj0eD8LhsEgEph7l9HyjxqSx+JhaiM4khjrNJ6EN5ER0u10cHx8jn88b\nOluC86TnPakZYR6PB6urq3jw4AG+9a1vYX19XcDaHo8HbrcbqVRK9H8gBMLtdsNut6NWq+Hg4ADF\nYnGo94nRuRn3fIqiCLSg2WwiGAzixo0b2NjYwObmJra2tqAoCtbX19FoNFAqlXB8fIzB4KLDoM/n\nE5VbpFeePXuGly9f4ujoSHTW1IucGaEr9eTHPRQ/4
MTn8w1Zma1WC+Vy2RLvkMYrFos4Pz9HKpUa\nKvniilHPRjV6f7JmqQMVh81GZcxeNfFYInVpo3I54P3EuWnFnIj4fWQLnYwmn8+HeDyOhYUF+P1+\nITD8fj8cDoeAMnnMrVQqieoKvQpZ9uRHGYX0U21cI54Gv4aMVKoOoAQfUqZ615Se7yiKIjK3FxYW\nhNBbWVkRCYButxvhcFjE3IPBIPx+v2iU1G63UavVRGtTo+/ejPAzQ3RICiX+0pkM0WgU6+vrWFpa\nwvPnz3F0dCRyU8YpQTW+9YSnjBIZwfQ775DHS0eNEF+/er9P5YmhUAiRSAQ3btzA/fv3sbW1haWl\nJdjtdjQaDSSTSRHOIXnIu93xdui81NlqpJWIh219Ph8WFhawurqK9fV1rK+vw+12Y3l5WYSaqIsj\nnTNAPFG/gu3tbWxvbyOfz5uuJNFD1xqup3Ka5eVlrK6u4vj4GOl0WnRysgr+7XQ6QlFRIhYl3tFL\npcVG12mNZYYHh8OBQCCAubk5rK2tYTAYiGSYSRW8lbzKY9BmczgcmJ+fRyKREHPHT/KT/5m5l9Hr\n6PskvKg5iN1uR6lUQq93cV51v99HOBzGjRs3hMfearVEyVksFkMmkxGn23FBOIovLaE3Shjy8bjw\n5Cc16n1JpbYYAAAgAElEQVT2+fl5LCws4PT0VJRZUUY4wbZaULQRZUn8hUIhrK6u4s6dO9ja2hKe\nSzAYFJ5jLpdDsVhEOp1GPB7HjRs3MD8/L8IjqVRK8EZJlHq7Ysrzxp/FKghXUS5K67a2tkQr1Ww2\nCwBYXl7G48eP8ejRI9Eam+LJakanfD81T17rPUy6d8l4pT4Wdrsdy8vL8Pl8IglMz5ybUapUQRSN\nRrG0tIT5+XlsbGxgYWEBi4uLWFpaQqfTwc7ODg4ODgTCwBFCLnuoVBZQP0J33DzQTz0GFaE2dBwy\nGbCNRkOU64ZCIdTrdRQKBZycnKDb7WJhYQGKoog4frFYRDabxdnZ2dApglYZ3TJdqZLXA792Oh2U\ny2WkUikUi0Wh4I0mKoy6f6/Xw9HREX71q1/hzZs3QsGrJc5Mw1sguLNarSKZTArLzupSELMeNSfy\n3ElwU1MNv98Pt9uNRCIBh8OBVCo15CX0er2hLFQ99+E/JyGKNdJGa7VaKJVK4njWr776CsViEcFg\nEG/evMH29rY4cpOOh6RxtGBUNd7594x44fJ39c4Zwd50KBAAEfPj/Qq0Qk7ynOsVOryEi9dsEx/n\n5+fY29vDb37zG+zt7SGVSomOfJubm6jVauLAGkq4SqfTpnqO8zU3qYGsdS2FP6iU1+l0YmFhAaFQ\nCAAQCASE4lGD6o149NMgkqvU7phkT61Wm6gVsB5lSWuEh2JoLunQlmfPnuGLL77A4eEhisWi6Doq\nzyHlK/HqByPrxaiBQu1oi8Wi6ORJSM5XX30lULOnT59ie3sbJycnUBRFyEJqYlSpVFCpVFCr1UTl\njxlDVO96udYldCScKYnBaOanET4ODg5weHioyR9Zv1ZvRFr4rVYLhUJBZGZOo72kFd47WeJ0tCll\nQ5OSDwaDcDqdQkkCeO9EKzUYW/4b/d0q3ikBjyt7ajT0m9/8BicnJ7h58yaePn2Kp0+fAngHz5kt\nQSP0R4+ykT1Q7snpuTeNT3OeTCZRKpXgcrmG9g29Q7pGqx5ZL5FCpe58tVpNtNOlBKlnz57h888/\nx09/+lPk83kh0OkI0GKxiJ2dHbTbbdTrdZH0aERo0zOQMJ1GrTS9F8rTISVPYQq3241arSa8Pe5V\n8jWvpeDV9gP9TWsMs6gc1cdTR0UA4h0amXOj9+Z9IHg/iE6ng7dv3+L09BS//vWvxWlyag2o+LPT\nmuHjG+FJr6IfDAZCXrhcLtTrdeGJd7tdJJNJdDod1Ot1fPXVV3jz5s1QgiGhcXxtT2KE8n08jq51\n4h2V4vT7fVNxOq378vvr8cxk6EwLUjO64ehFkWdM0DdgLlN01H2MeGdaRPDY8vIyVlZWBDwVCAQQ\ni8Xg9/txfn6Ow8NDccxpt9sVTR+olSw/ZIKOuSTvgmcjywt50oQxQkzodypTXFpaQjabxfHxMY6O\njkSJF8XqCT42C/uaDVHQtXrXlewpkRAn4URQJ7XT5P37tQwvPXxSyIlK4FwuF3K5HJxOJ+bn57Gz\ns4Pf//73oiSq1WqJc8GphKtcLouyI16TbmTuSKlasU54YiDND4Ul7ty5g1gsJmqfCcEi2PnVq1dD\n9dp6ZAz/u5qDofV/M2uLriGFSwYLGbXTKhPmxNtOV6tVPH/+XBhJqVRKlDFr8U6/8yRb+swI7G3W\nUCGD9vXr18jlclhZWUG1WsX+/j5SqZRIpCM0imLyVhw2Rtfqvf7axuRJyFKLS7XT5IwQKVJazHrj\nTjKNE4JmLGuOENDCnaQtpxZfk15PXjwly9y4cQPxeFwc5OF2u9HpdJDP55FIJERckk6M4h4xwXSk\ncPgzqymcSRNqSFlywUHwW6FQQKlUQiqVEv3rqYrD5/NNXMFhhD+tvxkZhycikZdEwpvgbE7yHPPP\n9YZWqKMXJdS1221kMhlUq1W8evUKb968EWeFK8pF74FKpYJUKiWOnSUDgJeeGXluDtMbJdlD5AY4\nyY5AIIDV1VXcvHkTsVgMDocDsVgMLpcLKysrok8AGS1GlKVeh8MqovF5kqkasjOKzCJt/DpKsqOz\n1ZvNJjKZjO5un+MMIquJK2rSS9RGnZQ8/Z3kOp0OabURde2V/DgGKTOYjp1US3rTGkMN3iJYLRqN\notFoIJ/PGxJi8v8nVTry2DyeybPTJx1b9gjpczNeKV+0FIekbFibzSYUi9frxeLiojj+lxY9GVj0\nPqj/NhlcfF5l3iexfIn4HNB4jUYDu7u7Aubl8CtBrpSgZPSdG4VUrRJQhFIQ+sDnjlAJ+l1N4ZuB\ng0lBBINBxGIxNJtNHBwciAziarU6lEdDwlwtgdYs1M6f1eh6kRU8X+eUaLW2tiaqBqgUMxAIiKqY\ndDqNJ0+eYHt7W0C5Ru5vddmpnnsC7zxTzoOZcYwQhQkITUin00in0yI50+i7U5PRo8Ywi7LR/qGS\nVCqro/XNT33ke4l3SrSCjPB9LT15UijxeBzf//73Ua/X8ezZM6RSKRQKhaFznPnDUgzk0aNHWFxc\nFJ3PCDKOx+O4efMmXr16hV/+8peiYYpenogvXnNMPBgVKrIgISVCis8qI0K+n/wZ8a+Hz3A4jI2N\nDTx8+BCPHj3C2toafD4fWq2WqG2NxWIolUrCk6H6c5fLNTQWzZcMiap5M1YoeLn9JBHftNQZb2lp\nCQ8ePMDS0hJCoRBevHiBFy9e4Pj4GJVKxVDd87jvWU0kYLSy0ulv/PujPMhxvNNRzbFYDIuLi6LH\nwNu3b3F4eChqmNWSWCkMB8AyISgjQWbmnrpPUrULVY6sr69jbW0NKysr8Pl8IsRWq9WQTCZRKBRE\nXHta7VWn
QVYZ0XqIJyz2+31ks1m0223R1KxSqViSVD2ORiFnWkQymuLwAISDQiEx+keyjstESuC9\nbLqWSh642KCxWAw//OEP0Wg04HA4sL29LeAw8vyIBoOBKNn567/+a3zyySdIJpPI5XIolUqIRqNY\nWVnBt771LfzsZz/D7373O0MnXcnwHVdA49pKjhqTFj0JDLfbLVqx6g0nmLFKZSOCX6tmDFAt8P37\n9/H48WN8/PHHsNlsaDQaqFarIh5LjR8IyqpWqwiHw0OH+wAQz+Z0OnV5YJMIIZpnu92umuvA48qR\nSATr6+v45JNPRHOOn//856IsimBE2sh6Ff1lkhkPlv/Uu55oXgOBAOLxOFZWVtDpdLC/v48vvvhC\nnNYmn6RIxh2tcUKw5EZGRp9F5s0MWkX/QqEQNjY28Md//MfY3NwUZbzz8/MIBoPC+81ms+KQLHIy\nOBmBvq3ImzFLVtxTz3qhkB/t+9PTU1QqFWSz2aGseyMkOzB8f+rhSy+Rg8dh91EdDQeDAbxeL5aW\nlkT5nBVkVK5cSyU/GAyEdfezn/0MHo9HlB0oioL79+8jGo0iHA4jHo+LuGkwGMTi4iIePHiAubk5\nBAIBfP7553j69Cnm5uawv7+P//zP/8Tu7i4qlYpuBc/j+cBwgo/cBc2ooqUabvLeyRMwkgCldU/u\n0cj8yb/L16kJWjqwg2o8qbMXGUvNZhPJZBLHx8d4+/atQFKKxSL6/b6Id/NNSMl2WjH5Uc+nh/g8\njmrvSm1ZFxcXRb2ux+MRECI9i9YBRVr3npR/q0nLgOK/G/Fy6L1ms1ns7++LGGUymRRZ9mrxVUqu\nI3SFe29WvH8zRiF/r2R4UBlgt9uF1+sV/caLxSJOTk7w2Wef4fnz50in00IWHB4eGk6cvWolf1lE\nxlGpVBJJllSKpqcSRQ4/0v/l7oL0HauIy2oae1QHTHqOSqWCo6OjoQqASUl+9nF0LZU8cAGD5HI5\nfP7554hEIvB4PKjX63A4HFhbW8O9e/dw69YtLC4uIhAIiBPH5ubm4Pf70el0UCwWUSqVsLe3h0wm\ng36/LyBXI3Aa9wRlK24SL56UDY1DcKWVmfUyjVLu8vdkRU/PXiqVcHJyIk5y63Q6on40l8shm80i\nn8+rWuXyJlYT7mo8mLHugXcwfSgUgtfrFWVdvKTM4/FgYWEBd+/eFR2sotEoHA6HaFGZz+c1w0Rq\n970qL14PacHZo2B7LSLvrFar4fj4WGRG0xxrrWU5EdCqeLRV+4Z4UhRFlLeSsPb7/Xj79i2ePn2K\n//qv/8LOzo7YG7x8zyiictlr5irW6GBwEZ+WKyn0KHfuSdN3qU/BtJwEGkdGPfW838FgIKqKeCLe\nZdO1VfIUK6UT5qiZv9/vh9/vx+3bt/HjH/9YnEJEtbqdTkckfH399dd4/vw5MpkM0um0aIQyaeY6\nV8z8M73EXzQ/s1weXw1WNusdyJ/pgZs5URlIOBxGo9HA/v4+kskkUqmUgGWpNprXZevZCJP8XY1I\nIAAQ7V1v376N1dVVvHr1CmdnZyLpx+l0IpFI4OHDh/jRj36EcDgsWtwOBgNEIhE0Gg2cnZ0NlZyN\nuz//3crcilH3NLpexiFAesjpdIrDQ0jBUxLSuAQ6vs7593jOhlmaFAUA3rWwpWqB58+fAwAWFhbw\n9OlTfPbZZ6Jdsl7jWet+kz6vEZKTwi4TOaCwGDkzemUxXUcnuVGOh8/nQywWQ6VSQbVaHWo9bVVe\nDDcobDabbidsMBiIaiQ6+prnBY0KmRrhaxxdWyUPXHh55XJZZGL2ej1x6pDX64XX6xWWIQDkcjns\n7+/j7OwMBwcH+Prrr7G7uytgYrMNTYB3FqhaeY+ZlyMrX+JN9gYmIdminWScwWCAarWKg4MDnJ6e\not/vo1QqiX8k1LU6BZq97yQKks4ECAaDouWqzWbD4uLiUF/xhYUFPHjwALdu3RL97EulEpLJJM7O\nzvDy5UvxjHo392UK7qtCD8iQoqYmrVZLGHjjnn3Uu51EMQPWzAOVgu7s7IjS0HQ6LU4+Ozo6wsnJ\niVBUkxDN1TQa+KgRKRcZRbHC4x2FcPGwmdE+CPx6nvjME+GmVd8vo6569zVdw8tY1dBK/nMadKXN\ncIDRD0dKg+qZnU4nIpGISNgpl8siHjwYDPDixQv8z//8D37+85/jxYsXlikb4kUW3lagAXyDExqh\nxbfZ+9E91GLz464j6vf7yOfzKJVKAnmQwwzTUGqTPDMp+Vgshvn5edy4cQN+v1/EV6lZTyKRwMrK\nijietd1u4/j4GH/4wx/w7//+70ilUrorMbjy4kbWtDaxjNaoJSAZJb0CjLwSOiiKDCe9Y8jGLlcC\nRnjRGneSOW82myIpjFAdav1K8KtVhiwAEbqYNvEEYoK+FeVdxcU01otshBqVy3yN8BJc4MIpKhaL\nosmWLOf08Dfu3nytc6NIz3XUMpj0lJFS8FFjG7nuWnvynMiTLhaLePLkCRYXF/Hw4UMcHR3hzZs3\neP78OQ4ODpBMJkU7RKsEK71YetFWWoxygwSzMb1xNOmCV8uMlw2VaSgyM8KaC046rZAgZGpiQq13\nu90uNjc34fV6xdp5+/atOCAjm82+lx2ul7dpW+m82kMNeuXzoJfUhKRM3LCgfBWCUPV6o2phDDWE\nS+06owaqUaJ13W63USgUAEDAy9M4U4Lfd5rGIJWy0uFBNpttCH0x+mx6jCk1lMlomEBRFNFGm/er\n52NMq5UxETfYjcgkqpii5jnyWJPQB5F4Z/SF9Pt9VKtV7O3tYXFxEWtra9jf38f29jZ+97vfIZPJ\nmK5PHcUjF9ZWKDQuRHl8hnv1VpIsSCdV9FrKZFpkdjOQoK5UKjg7O8Ph4aHozDcYDETNezabRb/f\nx9HREXZ2dvD69WvR3pYn9Jihy5gb8jTkslKzpEfZ8u9yRMfM+jKq6NVIvmbS90VCmQyXaSFVZslM\nzgEZZG63W/StoMN2qBOhUTKKmkz6fmmdyWtNlstWh66M8s3nRa0Sy8yYZukqU4BN7RZqn0nlc41G\nA/V6HeVyeQgOsZLUYlhWbHY1qHIa3h/3vKw0UKZJdB+CMM22ICZYMhwOixp+4AKOBYYt7Xq9LiDZ\nSQ6R4N3xpqkceAIVNxL572buy3kfRfScHOG6SiVoZTIZh86njcjQ/SZ9b+OIyoDpwBR+HjsvZdVL\nspc+7lqzEDWtMdk54jTJOtcz32Z551VZVqDAfL8DQK/XG6vDPzglTzRpZqKZe1HWr1WhgMtSmJch\nQKwmec7NdsHi45CQGwwGaLVa721Ao+dRaxFv/ztNGFHe8GpK3gzxnAs9UCzRVa8rtWQyK8a8jOeS\nOzJafU++BxwOh4gXj/I09YxJvAPm2xEbuRdg3Trjcz6tpD3AWjnPHU5FUdDpdMbq8A8mJi/TZSsr\n7rFclnK2griSs/rQm2kTF9pmha0cs+MwH/+/1QKKEIRphF+A9xU8/
5xDlkafSWvc604y31Yp58vY\nLzJSaEbRakG/9Lk8PzyJTIa4jd5f9uj59Vo8mSWr3ofsREybrEZmqQ23noZuV6bkzSxoTpcJG3PY\nlywoK04TIkEke2NWEvFN1rta0wgj/OohtU2udq9R43G+CT3RU+qjNuYoL4B7v+PG1sM3Xyu8xek0\nThXUUsZmFRyNR2sFGO2dyWgaYG7+xikFPUSCj9aLkSTAcTTpM8nfk5+Xe9lcthiBkNX+zz/XOmBJ\nb2hm1LjEt4wIjDJAifQ+I5eVo+ZWL9+0zvkeHTUHelHjUfJHnnej656vFTrjRM8JmVem5DlMqkb8\n5arBkfQ3tYk3KnTGkWwJkwU8iTBV+13+v2xhcxpluct88wxsWVmO8wRG8a5GfCPKz6m2qOXn5RAg\n94Y5JKg2vtpYemmU9ztKcMrPTZ8T33yNT4oWaD2rlgCR94w8X1pGFxfaHAVR40NtH45aH1prTH6f\n8jOpfS4/r2wUAtCl6NXWoRrfRhG8UetI7Xn5XuX3kO+lpTi1HAXZEJTHpeu508X/jSLZgRhVhjdq\nvY4jLUU/SnZprXVZUfKzM+i7amOP43uUcuf/H+fsjFqHXJYT/3roypQ8dakjUlvw3LpVFGXoSFI5\nziz3FOebWw8cJVu58ksnD1gPjVO+k3hZo64d9zeexCbX5fL550kucoxQ7yJX29Ty/GoJd+KBSmaI\nV63vqik1zquaB6DGr5bxMOpZtTwTviZlHtXuP2p9jDIouPLhzyHPh97acz6WLPjV+NRak7LnyHkY\nJxC1FK88HieSFWRY8YOkRu13eb2rKUXa93xPaPE/StES77KRze/B500Pybyo8c+NNXm9qPEufy4/\nD/2NK0oe+lK7ZpzM05KZeuTNuP4Cas/F1wsPZao5b+P40Nqfo2TJOMNEbR3JjoTevgpXpuT1JJjQ\nhMvHkfK/qVmeZr0lreu5ISF/Nmocq0hLwOq5jvOt1tBHzRoepbjG8aj3c35PWSByvkmRqa0XNQWj\n9V70vpNxClbtO/J74QahDGFqKUajc6s1Fjcq5O+OI877qHnXw/coITlqrekhNSUrl/GNg7z17iW9\niXBqRqyee3F5xv8vyzQ9NGrda607NQWvdw+NmnO9ClrPs3DlNu6744gbWry6QE48NDrv42Sn1t9G\nGVJqnxHfRhKRzQXALCCbzTZQWwhanhrPgpQn1OhmMEPEg5nNd5Wk5tkZKXWhay6bZLh+XA32JErD\nauKW9rSbBWndHzCPGMnZ9ZPwPU6IWUUc/aM1rpd3GcXRQvSmSbJhNs1sb6tI9ihJ8VyHPTiOOOTN\nj7i97rzTOiGUs9lsjtXh1y67nlucPD4of0ft52XwZuR+WlCmWTIznllBT9de5aKXUYdRXja39q/D\nRuX8TnMe9cK6RsiqfSUbxtMkNQ/YiIKn3/lY8thGxjLrEU6brN4f8lxfh72nl2Rj8EPjXS+/02+W\nPCHJCueqFtVVL2KrhPl1Ezrj7k3v2kic8jrRZXivWnMz6XxZcf1lvzMzyY0ypDoKYtUiMmioAsdo\nbP1DUjCcPmQFz5X8h0ZGlPy18+SBd948xXvos6u2tq7KWxznaegdQ69nc50WvR5ezAjladNlKDcO\nT6t5n2bnwgreRyWpTYvMPO+kCoqHEymJiysQnp8x7h4k96ZJ09of12XfGaGr1idmyIysu5ZKnpPe\nrGA9REKRjzcNS1SOyWoJunH3kwWIoiimDsjQIzjMGhL0fafTKcqXePIWfcdMD3huVFmRdKNFZown\nrRASj2nL8V2riNaW3+9HJBJBs9lEs9lUPeZX7d6jeOfvzKwxSXPgdDrhdDrRarV0n8E9bmzOrxVj\naY2nZ2/abDa4XC643W74fD6srKxgZWUFtVoNpVIJmUwGxWJRnJFw1QplWnkr0w5JkvwDYHnsnK99\n/n+9xHNBZP6mkdtkxgi0j//KdEhRlP875u9DP62YIOpTTo0EjMKJvNRGjbhgp9IS+ft678mFJQkS\nj8cjskH10ig4V/47v+c4IS9/3+/3IxQKweVyibmlU6+cTidsNpuhIybVeJL/ZoZkRWZWGavli/AN\nT7Ct1d4Cje92u7G4uIi7d++KWl+tjHI1HvWUTwLG9x0ZIA6HA4FAANFoFN1uV5zAZZbU+Na6v9H1\nTp8bIUraCgaDmJubw8rKCn784x/jH//xH7G1tYVIJIJOp4Nms4lKpaIbRZsWyc9p5F6j5pL/tMKI\nk/chrXev1wuHw2EqoXLU38bJdD3jy/zxv/PERKuUPOe91+v9v3HXXFtPniwhq71rh8MBr9cLt9uN\narWKer2uKxSgR/BxnnkvdKNoAX+RssIwQ6OUmDzPeuZdHo8/s8fjGWowwQ0GsnbNQLhq82fW0+S/\nywaNEaXMrXVZME0qQNT45grU6/VCURSUy+Who0K5JzHKQ9WaR3pngPH3RPNIBkej0UCv19M8qnfc\nWPy9yMp7nDEzDsFQC28YIVrvjUZDvI9nz56hWq0CADKZDI6OjlAsFg1XKUzD41fb50auHUVWymkA\nQ0qROke6XC4A706W1LM2jcy3Hh2gNX6/f3EMMclrmmteIcRL9Ywihmrr/IOok9dDVsIbNpsNHo8H\niUQC4XAYXq936Nx5vcpNb1xNbRHqNVzkBeJ0OoWCnMTqHMXzqP+PGlO2ll0ulxB+wDv0hJ9XrZdk\nJTppWIXzSmMT4jAYqLdCHaUYxnkJVhCNRWgO8ev3+2Gz2VCr1VCv1wUkrgflkY0zrfsa5ZOP3+/3\nRRjBiACVjVt6H6RIvV4vqtUqGo2GeF9qfMj3U3ueSeXLYHBx0BG9n5cvX+LNmzeIRqNoNptIJpOo\n1WqGWtReBqQ/jXtYsd5lw7jX68HpdMLv9yMej2MwGKBYLAoj0gqUjBumk4xFxjHJEloTPp8PwWAQ\njUYDjUYDzWZThK7GyTOOlKjJHr1zfuW9680cIaqXaNGQwFhYWMD/+T//B6FQCI1GA3a7fUj5kHLW\n49WP+ruactKroDjiwK1Yj8cDh8OBVqsljknVOwdWk7zABoMB3G43QqEQAKDT6aDdbou47MrKCrrd\nrhDMZo+NNQJ5jkJb+PcURYHH48FgMECj0RgyxGR4mL9HnnMhE31mBVyvKAq8Xi8ikQjm5+fFeP1+\nHw6HA263G8ViEc1m871OanrnwiqS1wT3xvXei4xCQttarZZAiO7cuYOPPvoIX375Jfb29lCr1Yb2\nLu9yJxtzxJ8VigF4d+Q1cFEfXigUEAwGEQgEcHp6inq9LpAMI0Yz8az3u/L3x733aRgTZj15GQ2k\nc+5JL3S7XQSDQSwtLeE73/kOOp0Onjx5glQqJZKyx8nraT0v/93hcCAUCokDY+gdhMNh3Lp1Cx9/\n/DGOj49xfHyMVCol0De1Jjw0poxayWRkHV9p73pKzrJa2MilLHa7XViDsVgMPp8PiqJgbm4OjUYD\nLpdLeAeUvAS8n5BkdMGY
tRBJgHg8Hni9XsGv0eNW9YQYjBKHovmYpMQ5dBUKhTA3N4dYLIZaraYb\nXlLjX+93uYJWe2Y11EIt+5krDXmzccWhNiZ9Nmmylc1mg9vtRjgcRiwWQyQSEfHtRqOBdruNdruN\nZrMp1obsDek51McqMrqXZWFJCj4YDCIcDiMUCmEwGMDlciEWi2F9fR03btzA3t6eyPOQ18go78cq\nz89ut4tcE0JPyPCm39vt9lADp2nOP58DOQbM721kP017vfD3RGvW6XTC4/EMfS8WiyGRSCAWi6HR\naAiD1+FwoFarDSV1aqFwWvtTNjImQQgBCJ3jdDrhdruxvLyMlZUVJBIJkZMSj8eRy+WQSqWEIUhr\nhTd64wayXlmmRVeq5IFhYWiV1UUJX5QYZbPZEI/HEQ6HUSgUUKlU0Gq1EAgEsLGxgVgshnQ6jVQq\nJe4v99WXJ1+NT1nwm93cZB2SsPP7/ahUKiiVSoazlK305NUUH/FSrVaFIqfNurKygo2NDfT7fRGr\nnJTGISyjlK/atb1eD/V6HcBwC1NuIPLxidQscD086iVaAz6fD4lEAqFQCHa7XSj2Vqsl5pyHQQi1\n4nwYRU7Mrls1Acv39ihvhTy5QCAgUIvFxUWR2La+vg5FuchBkK+X37tZo1wPkeFFXmev1xPzX61W\nLVvno0htT/O9qaXk5fnXUv569o2Rv48iruAJteQ/4/E4IpGImNvBYIBoNIpoNIpMJiO8YkII+XkR\n4/gzK6Pleez3+6jX68Ih83g8iEaj2NzcRDgcRiaTQa/XQyKRwOLiInK5HLa3t5FOp5HL5VAulwUf\naqiT/PmoZ1Kja6Hk5fafRol7AT6fDz/84Q+xuLiIp0+f4vT0VCj2s7MzofzJsqLYycrKChYWFnB4\neIjz83PUarX3DhgxkvxG1xjx5ug5AoEA4vE45ubmEAgEMBgMUKvVJpqfSclms8Hn8yEej6Pdbgsr\nmrwV4t3pdAoPfmtrC/fu3YPf78ebN29weHgoYlJ6+daTmUr39ng8CAQCaLVaIhltVAkm/Z97u1wg\nkqFFSZSdTuc9iFBLiBolrphIUc/NzWF1dRUff/wx3G43CoUCTk9PhWAjKJuE5GBwETbx+/2Ym5uD\n0+nEyckJisWiCEXoIVlImiEag+9tLqgoL4ZDlsFgEKurq/jOd76DhYUF+Hw+8f7JU6Y9RR602mEv\nfENxwoIAACAASURBVD5HeUJGSFEU+Hw+bGxs4E//9E+RSqWwvb2NZrMpECw1I4d40zO+EeLHXhNR\n3kIoFEK320Wr1RJ7tN/vi2v4eQrkEBFqUiqVRDkm501L9ljlRPT7fbhcLvj9/iHjtF6vo1KpIJFI\nwOv1Co++1WohHA5jfn4e0WgUlUoFxWIR2WwW1WoVrVZLnI7HkS6ZJl0X3EklGWGz2VAul3F4eIiV\nlRXcvHkTvV4P7XYb2WwWzWYTsVgMa2tr6Ha7ePXqFU5PT5HL5cRYvJ23zKfR9XzlMXmuOM3Eagna\ndrlciEQiWFxcxA9+8AOsr6+jUqmgUqmgUCigVCqhXq8LAUqQMlldkUgEgUAA5XIZ5XIZzWZT1Zs3\nKjSMWF4k4L1er4ArXS4X6vW6iFFNCimZJfK0otEoarUa2u32EExGqInT6YTP50M0GsXi4iLW19cR\niUTQbrfh8/l0H48oG1ajjCX6HgkrMtBkw1ELwlX7HYAwHAaDAdrttuZBP2rjmJlv8t7JqyEP9u7d\nu2IOzs7O0Gg0UK/X0W63xR6iZyM4c21tDR6PB7VaTST9TItGKTTZo6Z3RSWX3W5X5MUEAgHMz8/j\nwYMHmJubAwDUajXkcjkcHx+L/UroBVfyXBHxdWOVgrfb7YhEIrh9+zZ+9KMf4cmTJ9jf39c05M2g\nbXp4lQ1BnvfB9x4Z4kSUxEYnxlFMm9CTxcXFoUNbaG3p5WsSorVNyBntNVJy/X4fiUQCLpcLLpcL\nlUpFQPdLS0u4ffs2isUizs7OhJzsdruiv3u73VZFkqzkneaODIpms4lCoQAAWFxcRLPZRLVaRaVS\nEQ7T+vo6PB6PQOQIqQWgme9jhq5MydPL03tykRZ5vV74/X54vV5sbm7i4cOHOD09xe7uLp4+fYrj\n42MUi0X0ej3YbDYcHBwM1TdTKV0mkxFxHq2EKTnOKdMkG51fQwukWCyK+5HBotcL5uMZ9Sq0qNFo\n4PT0VMSBeeiAhEa73Ua9XhcwVKVSwfHxMV69eoVWq2X6/qOuIyu60Wggl8upetmjxtD6vNPpiE1J\nil7OjB01nhFBwmFL8kAoCWl3dxftdhuZTAYnJydIp9NDTZGo+oKEYqvVQjKZhN1uHyoRNcKL2fit\nfB+KM8rwY7lcRrvdFgeEUGyVZEIwGITNZkMqlUI+nx9KVMpms+L/8n35erRCuZMB5Xa7sbGxgWg0\niq+++go7OzvI5/NDyvCyiBvA1H+ClHO9Xkc6nRZroV6vC5lBc0toZr/fF4qfUCoi2YCd1jPyNUGI\nE1eaNpsN2WwW+Xxe7Acy8sLh8FBidb/fR61WE0Y+NxTUZLO8Lid5Boqry6GiV69eIZVKCRnSaDRE\nknK5XIbb7cbx8bFAUOT1q8Yb510PXelRs1rZhXqIMosfPXqEzc1NBAIBxGIxxGIxnJ6eYm9vT0D1\nHHqirG8Sik6nE91uF9FoFLFYTPfkjePXjNXIIW+HwyGSAXkyz2ULFOJLURT0ej1Uq1UBS40SBIqi\noN1ui1ianNRo5N5Eo5Q0907k+CQ3dIwkotG4dJ3axlOLZ3LEx+i7p6ZHBLvbbDZh8J2dnSGbzQ55\naMA76Jb4JGXpcDiGGj/pfW6zpHdsMmS73S6cTqfwHMmIarfbcLlcUBQF6XQalUoFAN47YlPtfvK8\nTyLACf6mPIHNzU0sLS3BZrOh0WiYMronJa7g5b1B88gRp1arNZTgTD8JAaX8mWAwiE6ng3K5PISu\n6pFzRpAI+Tr+fQoTEDrCUZpKpSL45d5uLBZDIBAAACFfSE5aUSml91nIqJDHa7VaKBQKQw3B3G43\n6vW6KA+sVqtDyDE9v1V0ZUqeJl+G2/SSx+NBPB7HT37yE/zFX/wFQqEQTk5O8Pz5c3S7XWHRqXXa\n4i+ffl9ZWcG9e/ewu7srYBbOp5pgN7KgxxGPK0ciESQSCSEMKb48iRdsZiPKyR4AqMuS6iIkuJ4g\nY+ACbiXja5SSV+PLKKRGkD1woRB4H3ESimq18FpEiW+8Y5sRpWEE7uQJi5FIBLFYDOFwGE6nE+12\nG5VKBUdHRyJ0Q+NyuJjyXDqdDubm5hCNRtHr9YbQKTX+tGhSAaiWU0HzR94noTv0Gc8wpgRDrkj1\n7qVJ4U7y3hOJBFZXV7G+vo7NzU0sLy8jHA7j6OhIeIxW0ijZIu9D8l7pb6RsaJ3THKuVdNJepVru\n1dVVdDodFItFlEolw+/ezDqRESCt/Bm+Pshhs9vtaLVaCIVC+OSTT5DL5UQ8nObBL
F+TkNoeI0+f\nSFEUVKtVEV48Pz8fiYZpja/32a7UkzfDMAARm4vH47Db7Wg2m3A6nUilUnj58iV2dnZwcHAwstOW\n/Hm1WsXp6SkqlQq63e7QJjLSucwsNE5K3uVyodvtolQqCSNlUujRivgTbRy+geR7UD5BIpHAnTt3\nRCx+e3sbr1+/FlCc1vhqn+lRqrLnzmP03JsnT5C8Ra54uOcAvOu6RYaJViOfUevLiEFAHni9XofL\n5YLX6xV5JMViEel0GvV6XdWTJd4JBfD7/QKmJVicC/hRfOn14MbRuDUno0BEVIrJ35dRo8qKZ+Ae\ncyAQQCKRgNvtRj6fx/b2Nvb29gyfxWAlcWOVJ35xhFTusEYGAPVUoP3B+29wJGAcqXnjk5Ae5Ub3\ntNvtCAaDImcpk8ng7OxsSObrResug/h9yMAi5JHyS/TOpVF06looeaPK0GazIRKJYGNjAwCQSqVw\nfn6OFy9eiISYdDqt25JXFAX1eh2pVErA0dxK1qvk5e8Y9ZrJcxwMBkLAUwa1WbJKwQPv4u5a96Fs\n9IWFBWxubqLT6SCdTuPp06fY39/X/Szcm9HT7ILWBMHdpNg9Ho+AvknJUT9xUpg0LiXL0IbjxsEo\nJa9n3vQQKXnyyF0ulzDwCLJWC9fwOSLDxufzodFoiDVEzyOjV2r8cliX5teoIKT3QaECtQZIo4wM\ngmyNCjM+xqRwJ4/X0vtoNptIp9P4/PPPcXR0ZJo/tfsYIVqTau9GNuL43+UGMjwPhPYFrUEZ3h/F\nCz3HZRHxHIvFEAwG0Wq1kM1mcXp6OtRhUQ9Z8Q6NECl4v98vGpyZ4eGDUPJGPR0i8hYfPXqEf/qn\nf4LL5UI+n8dnn32GFy9e4M2bN4ZPfer1eggEAlhbW8PJycl7XeXk2lM1mmSRc8t0Y2MDnU4H+Xxe\n1IVOsgi5wDZzrdrvakQLd2VlBfPz8+h2uzg6OsKrV6+QyWQM5RNwwaHnGhKUgUAAN27cEFm1Xq9X\nNBS6ffs2bty4gU6ng2q1inw+j3K5jFarhWg0inq9LjpSZbNZ1Ot1OBwOUefKE/r4fbXmxuycE9RK\niZYUetIK1/BcA4Iz6/U65ufn4fP54PV6kcvlhuBbrsDUiCsIM2uP9mggEEAwGBRJmOMUL61VKgc0\nmr/Bx7GiJJcg/1wuh52dHTQaDRQKBWSz2aE8H71jqsVaZSWpB2Xh35eNIS0FrzYW8eJ2u7G2toZm\nsylyPigPQg+KZoWC5wlrWrKb/u50OhEMBrG1tYWPP/4Yjx49QjqdNtQb3gzPfN/I9xgnq+hau92O\ncDiM9fV1PH78GOFwGOfn58jn84b50PsMV67kjZKiXGQdkyDPZDJ4/fo1vvzySxweHg41FtDLB3lR\nFB9UE97kzVuZECGT/OJ4huhlQGKTEI+/VqtV7O3t4eDgAIeHh6hWq6ZyCvQaKPQOKSGQPMh2uy3y\nHCi8Q32kz87OUKvV0O12EYlE0Gg0REY3ZaQTOjAuBKO1wc0IEkIbqtWqUNzjWgHT8xMsDwDhcFgc\nFqS3bHFSouclRIXK5Iy0eJ3UWNajLMcRXUvZ6ufn5yiVSuJAK4K1zfI4qdHOf3J+5c9HEZd5dMYA\nGeJ6584q752jcbwclCMOHB2iNt+hUAgOhwOlUmmoVey0PXO+xvgcqOkN4p1kks1mQzgcxsrKCgKB\nwJBBOS26UiVvlhRFwf7+Pv7lX/4F5XIZqVQKBwcHwoM3ykev1xPeJk9akbOpHQ6HIaVr1NCo1Wqi\nHrjVahmKj101UQnb+fk5MpmMKHPiB6eYNerGeST0r1gsYmdnB9FoFD6fD+VyGS6XC/Pz8wIdefTo\nEWw2G9LptKjQIK9lYWFBhBWoD3U6ndZsRDTOCDFibXNjU4ZiR3kn9DklsfHKh0KhAJfLNVSepqea\nZZynP+oanltCpYCUKzDu9DC6J6+QMKJwrPIqgXe5Ae12WxyKQu9ITwgJUFfEaiE9o7wTH1rGJ1eM\namuW3gEZk6lUSpSAFQqFoXMQpkkcjZDr/skr52uKElPpmN92u403b97giy++wOvXrwXyZeT+ZsJC\nspLXet/cMHG73eLocQppJpNJgSha6cjJdK1PoeNEE8m9xcPDQ+RyOeRyOdTrddMLk14IP69Y/ju3\nLIkfGWozS7QYqKEPP4N6Usuf38OM8NZL5PVSDJgaO5gtkeQKjv9/1PdJMAMQp7NRiVY0GhXCg2Dt\nXq8Ht9uN1dVVUYZDLW7Pzs5Ep8RRymacAQLo9960vDE9zw4MN5MiIcl7e09VkPyvh+V0OgUcubW1\nha2tLezu7oqOh1QLzT0X4osEInXCM9PAZ1QIxQhxRcPHMwpf6313ZvanlpE5qjMn31fcoaFKlMtw\nLGTvlZTgnTt3sLa2hnK5jGKxKPZfu91GNBpFIBCA1+sVodRyuTxUC6/3CFo1fvTMOc/xobJmQpbj\n8TgcDodI6qV9F4vFcOPGDbjdbsFzPB4XCbJ2ux2vX7/W/c7NGLIfhJLnG4CUCSXB0Kk+k25q6hYl\nCxau4Ll1OUrBGxXuwLujcGOxmCgHsdKaNsOTESLrmpJ35PIdM2RUKRG8zXMZqLshvVcyBgmaVBQF\n6+vrmJ+fFxCs1+vF119/Ld4tHRdqlm8jwnsSiBkYTo5UFEV0k+PfsUoREpHQpnrrpaUlPHjwAJ9+\n+im++93v4ne/+x38fj8URcHx8TEymcx7sGy/3x86oIaO5TTCA/f+rSAj60+WUdxj5spYLyphBL2Q\n15p8f/48avJKURSBtJjZs0ZkCpfhpOh9Ph8ikQg+/fRTfO9730MymcTJyQmOj4+RTCZRKpWwsrKC\naDQKv9+PQqGAcrksDPJgMAgApjqCEu96YuoejwfhcBjBYFD0DKFcqtu3b8Pj8aDVagkHo1qtYm1t\nDY8fPxaJtMlkUuyXSCQi5sCoYfVBxOSNEIcA6UzpTqeDQqFgqCe3FlGTAt6uVYYLuXJX+ynzCxgT\notR+8eTkZKgJziSk5X1YqehlwUYQp1WxMSPGiZpgI4V+cHCAVquF3d1dRCIR+Hw+zM/PC6+RmrOE\nQiFsbGyg2WzCZrMN9cA2y/tVEG8qQi1NeY8DNcFmFu7mRnAoFMLNmzfx+PFjbG1twev1Ynl5GQ8e\nPEA4HMbe3h52d3eFQKYQGPcmeXc7vURr8DKJw828moA3RiEyogSNePL8Jy8j1ape4hAznXTp8Xje\nq+YxYmiY4Zu+63Q6sby8jD/6oz/Ct771LWxsbIj++3So2OnpqciPoJLGYDCIer2OcrmMk5MTZDIZ\nU8aJ3u8pijJU0knnxM/NzWFxcRGbm5vw+XzodDpiDVC4LBQKCXlIvffL5TJ2d3dxcnKCZDJpyKCl\nOdRL11rJ00L0+XwiSYHXLxOkPek9gHeCiuAr2YqXYTu1RW1WeXIlSUkwVnrysiC3KgxA78fhcIjm\nLdRb/aoUnPysNId0MEQ2m0UikcDKygrc
bjdisZgQyLwELRAICGFCcUAjYQeroHGjBiNfn7z0k7x6\nrvi1GoYYVfSKoohWnTdu3MCdO3ewtbWFpaUlBAIBLC8vi34T5XIZ5+fnIk5PBjslO1IMnBqbGCmD\nleOj00CtyOHg96F1Q2dg0OlolItihG/6jMbWyxPfi9RAaVSpJJWhUVtwqpU38+6NzjNHfsLhMFZX\nV/HgwQPRbIiqK8rlsigxI2SHDEmCzKn0j1A7I2EpLWRj1PfJAOInJiYSCdy6dQuRSETsLWpfS22E\nyXFrt9sol8vI5XLY29vDycmJOEVvWnStlTydA7+0tIQbN26gWCyKf6RIrIi90b08Hs8Q3EwwIi1k\n3l1KJr7QjXif3BOm0pBpxVAnzR2Qx6I4rN/vx61bt7C+vo7PP/98qP80J7PPYlZY0zU8Dl+r1YRn\nTtn33/72t+Hz+QBcQPPlchmZTAaNRgORSAR+v1/E9vk7GcWXFpQ66vucZz6G/Dx6ntvv9yMSiYhW\np5RFTeNRDFNW9Ea9eRLY4XAYm5ubePToER4+fChqmKnzISVJUab98vIyVldXEYvFUKvVkEqlRFvT\nUCgEACgUCqZgTPmnWQhXjSikwLsL2u12hEIh3L17F3fv3sUXX3yB/f19USExKkmO7sf/GVXu5L1T\n1rnP50O1Wh1ZAWCz2cSpnXRqJJ3iRkm/eu7Pfxrh2e12iyzz1dVVLCwsIJFIIB6Po9/v4+joCPv7\n+9jd3cXx8bHI96A22W63W8z96uqqSJSlOQf07Rc9clbek2RI0VknlUpFnCo3GAxwcnKC8/NznJ6e\nwm63Y2VlRSQ4ZrNZpNNppNNpFAqFiVBPvddcqZLXA79SW85MJoNSqYRKpSJO4LKSDzoUhp9NrMXf\nqIUhQ2N67j0YDEQzlmKxaOpQkXFkZRyWhApvfuNwOFAul0VpC9WVk9Aa5Vlp8WYWPlYjml/igQyR\ncDiMN2/eQFEujj/d3t7GN998g6OjI3EwCmXfyzyOU9jy98YJErVnNbv5qR0yhaJoXfPMcNl4VeNf\n7xomxI0OP6EmOJVKBfv7+3jy5Al++9vfYm9vT8TkG40GFhYWUKvVkE6nRdOhVquFUqlkSjnLIYlp\nefIU/gAu6szj8bhAG30+n1C0WutdVhxGiRtl9JPepZ5zLqgao1QqweFwiJMNKSHVqGduhG9ae5TH\nQ8ZJoVBArVbD7u4uvvrqK3z11Vc4OztDqVQSSlU+xMnpdIoKGbOdQfUaVnT4VrlcFrk/Pp8PTqcT\nX375Jd6+fQuHw4GdnR3s7++LQ3VOTk5EMmEulxN6jFe9mOVbD13rErrBYCBi73S8oNyO1Cqi4zhH\nKUOtSZ1UEZGSpxrcSY6UvQzi+RGxWAzr6+vodrvI5XICgqOyI+D9Ol6OjvC5k6sX6KcRZESLuOCi\nuCNBrDs7O2i321hcXMSXX36JX//616jVaiJ0wvurGyESumbgQ6PGovxdSkryer1otVqoVqvvjS2v\nWzNGFT0bCV/KgSDIslar4ZtvvsEvfvELfPbZZ8jn81AUBYVCAclkEisrK2g2m8hkMiIkQi15zXg4\nspI3S2phLY648ZIov9+PhYUFcc6B2+2G1+sV86NnDcuhBqPePBEZs+PaYdP7IUOWjEG5dbLVRPuf\neOef0Znqv/71r/HkyRPs7e2JpFF6ToLkyaPmoRMzlTx6ZcpgMBg62rlWq0FRFMRiMfT7ffz3f/+3\nyLx/8eIFjo6ORNWAx+MRlS6lUmmiVuVmELcrU/J6PEuC0GkBmm2Fy0mtdIeTnnHlTWsFkQDRKuOz\nguRSMO5lG+EzGAxifn4eGxsbWF1dxdLSEvx+PwDg6dOnaLVaomRkMBiIE8UohkZChCoa+JniXLBz\nQ8CKOSGjkd5ZNpuFx+PB3bt3kUwm8fvf/x4vX74U51LT+qO1Z4TMKEy1uKwZIs+aDvHgJ1wBGIpd\nqt3XDGRMpW+hUEj03QcAn8+HN2/e4NmzZyJOTUReO8HDPFfAqKImRUHrx0xcmxPlM9BcUE6B3+/H\n8vIyAoGAyJgOBAKiXHBhYQHFYhF7e3uqsmoa6BxHMPg9RiGRwLuk1MFgIAwss+2b9coRmlOqxPB4\nPKjVatje3kY0GkWr1cLR0ZHYgzLP8mf8ICoj+kHNAdHDO293Tb03Wq0WEokE8vk8CoUCcrkcGo3G\nUEtncuSMHJClh3c9dKVKfhSTVGZBZwgb9eC1Ni0tiEmas4yyzI16mzyuRuUUVhkOMsmHlHAyukFj\nsRju3LmDmzdvIhKJIBQKYTC4iEd5vV4Eg0FhsHg8HiiKIpoNES8E05Hnpza3/N56+dQi7kUAQKVS\nwfn5OQ4PD6EoijgjnA6Joexjo5mv14EoiZSOcJW7h9F3gGGvFZhsjnnslJoI8QQjEtLEGyl+XnZJ\nvJlJdLTCGOT7kSBlisUTchWJROD1eqEoF+2Ul5aWMD8/D7fbLWDdaXfHBIbfFUeN9LQQBt4ddENy\ncRIUUa/sk3MJqEMm5SNR2ZyWjBqFEBrl3ais5veiUDIlLLZaLaRSKTGntJb5GRhWOXBGxri2cD0/\nV517UnrihbIXRR4yjUcZmWaIFqfWojLzAgkCdDqdQ/xbIaxkRSlnLBsV7jSGy+VCIpHA0tKSgCZJ\nqFMLWYrvkZdBKAovN+L9B7T4lu9vhUcPXAi4fD6PX/7yl1CUd9nn1NzC7Xaj3+8PtaQ0mtRjlC8r\nDDxFUYYgW54EqdfAGwcvy/eTk54ajQYODg7w4sUL7OzsDHVRI1SHlD4hNVyxmxWGXNEZuZ4bQDzn\nxO12i6N/qWRqeXlZHOUbDAbh9Xphs9lwdnaGr7/+Gvv7+yiVSqZ6KxgxaoB374fPm1rSq9YY8nxP\nA0FUu2+73RYly+VyGbVaDYVCQXjC41AIIi3vfdy6VQtV6fH+Cf2gMMFgcJH/cnZ2JhBJGoeXNFJH\nQavm94NQ8qOIPO5gMIh79+6h3W7j7OxMNECQTzOT4d1wODwUExsMBsJrDAQCIh6oV5DQxie+uIIi\nVMBMTTvBnFRLSR2QqLWkFQqN7kP3+v/aO9fWNo4ujv/t2HKUlaxKUawKh1xKqSlN6YW87KtCod+3\nX6B9WWgKgULchtiWfI0l6+7m4lR+XuQ5k+PxrDSzu5Id+P8gEMvW7Ozs2TmXOXNGDBTZbqNPnnJN\nDHpsb968iXK5jEePHuH777/HgwcPTGGIlZUVrKysoF6vm1KNEvqTbVJSVEZbtbIU47MvOukkpNfv\ndPhM/vV6PeO5lUolrK2t4YsvvkC9XkepVMLff/+Nzc1N7O/vX1Bk9hhN6rdvP+M+D21DK8w4pe47\nsU1CijgVi0WUSiVT8KPRaGBnZwfb29vodrvOdV5Zp7SXZEL6Z/c16Ttjh74/+eQT1Go1PHz4ELVa\nDdVqFaurqyiXy1hbW0OhUDBGoCQODofDCxGJ0H4kMfBsBe3jxbvaSPJuJRlvHU2Vgmbj8dgcyCSH\
nMfm0o/uRhNA2dA0Ce+63n4PIdBRFWFtbQ6fTQa/Xm1mUdhLXTslra3p1dRVfffUV3r59i+XlZRwd\nHaHdbptQpB2ukcm8Wq3i9u3bAD6EBWWrV7VaxY0bN3BwcOA94NInUfCyj1NepiSJFHoNXvYYl8tl\n5PN5jMdjdDqdTMN9ci0RUikqpENP8sKKx62LYojArq+v4/Hjx/jxxx9NNbl+v48oihBFET799FMc\nHh7ixYsX5ix2CeXrtVeZ9HO53IW1+LjJJs2LLN758vKyKbtrtyvPd3V1FXfv3sW3336LR48e4fPP\nP8evv/5qMsXtGgD2RKfzCEL7ad+vHYUJUfSu0KZ9rWmejo8HLyfOlUolVKtVjEYjtNtt/Pnnnzg+\nPsa///57abeIvjfbWNcTpP58Wr+SRkFcS0KLi4sol8vY2NjADz/8YLb6yZbASqVilNTBwQFevnxp\nzjzQWfd236f1I4ncAO7Sz6ERgaRo+fd1mOTAJEkIlZMTfYwN1/jYshI6D/v+vVbyOm9Eb5G024qi\nCHfv3jXZ+BItnEfERLh24Xr5/N27d+j3+9jc3LyUfLe6unrB8pbwjyjySqVi1npl+52E6vf29sxB\nBiFKVDw9uZYIk7bkQpF7kH3ckuH/+vVr43VmhXjPEjqXiIgoYnsSjuur8N9//xmlJ8kyrVYLzWYT\n29vbaDabZpuaePCnp6cXrF4dpk3qTfii79H1kskSQ71ex/r6Ou7du4dCoWAMS9nyop+1jxL0RRsJ\n9mehxHnu09qV8REl5evpv3nzBp1OB41GwxSpOjg4MGMmxUrs72nD0v6dK5w6qe/279IoOb1tt9Fo\noNfrmeNY19bW0Gq1MBwO8fLlSzx58gTPnz9Hp9Mx1TL39/cTl1gNwfe5XifOz89N8SMJYdunLMbd\nh72M6TKKQp97iKOnkzpdOSP2tcfjMfr9Pra3t527W+bFtfPkgQ8KaTQaodlsmmMzZV23UCjg9u3b\nqFarxrJ69eqVCQ/LYSODwcDsT5Rwe5I9tDq6IH3TSj5knVa3Keu8kuAja5s6CS1L9PqbLvYzaSz0\n5ysrK7hz5w4KhQLOz89xenqKk5MTdDod7OzsYGtrC+12G+12GycnJ6bt0WjkbA/4kC07bbJPagDI\nc7t16xZu3ryJ0Wh0QelIUZBarYYvv/wS6+vrppb9ysoKBoMB+v0+Op3OpSz1uPFKa6xkpRxmPZnI\n2MrBRHt7e+h0Ouj3++Z9jFN2kybHqwhp6n7INr5ut4vDw0Mjy91uF9VqFVEUodVqYXt7G0+ePEGz\n2TTGvr0cFHLdNH3+GND5A2IQhczFEo3Ufy+fTTuOOQuk7/b/5WfX3759+xaj0cicrHgVXEslD3xI\nzpAEliiKMB6PzR7Ur7/+Gj/99BMWFhaMQfD8+XP89ddfyOfzZp2s0+k418yTvBzaE9TbdZK0J2Er\nqVAla/Ji5YaE/329Px050MrIR8EDQKVSwePHj3Hv3j1jXB0fH+P33383xYqkoEVodMPHYwxBh16X\nlpZw//591Ot1bG5uXqhzvby8jDt37uCbb77Bzz//jEqlgiiKkMvlsLi4aBIIDw8PLxgHoUs9aSb8\nSR6AK8w7KWIR0gcfWVpaWkK5XMbi4iKOjo5MdMd3C5xLwesdJrYB7XtPae5dJmepfgi8rzvQbDZN\nhv3R0RG2trbQ7XZNTYI0Bt4so1hxuGQ4jbz4/J2W1RAFv7DwvvBNFEXGmBJ9UCwWMRwOL3nLoGNs\nAAAABxpJREFU9runrxPqRJ2fn5vlzpAojdTiGI1GxknVS6B2v2bBtVbysg4qiu/s7MwUW5HMVqkJ\nPBqN0O/30W63jZfZ6/WmVn7y7Yud+ZtFNqqErMU6leQ08Rh9hd/+Oc5z0tuTfMOe0ubCwgJOT0+x\ntbWFV69eoVgs4vT01CRXDQYDU2VNZ5hmQdK29NGlcuBFvV7HrVu3TFg6n8/jwYMH+O6777CxsWGq\nlrXbbezu7mJ3dxebm5umypXu07SXNKuJe1obtqIXGQ29tq/BKteRSJR4UbKdKEldAVc/7Gv6yKxt\nhCUdf8k1aTQaOD//kE+ysPC+op3MNWnvVfqoZWUea7YumUmq4EMML1tGQ/or39OFl8TJi5svkzph\nk64fMlbj8YcKhFdV5OxaF8ORiUMqMkmGfKlUwtLSErrdLnZ3d01IbWdnB81mE8Ph0LkOmAYxOoDJ\nmcuh7Ylwyvq4VPfL6hQ3fT3twYdOTKL4fvvtNxSLRRQKBSwtLWE4HJoEq1kIcZr2xPrP5/PI5XIm\n+18UfS6XQ7FYxMbGBj777DPUajXkcjlT4/7p06f45ZdfcHx8HLuu7NP3eU/aOuw8S0Uv9dslHCkT\nWRr09e1x880R8P3bSZydnaHX62EwGFzyPEPD8T7M05PXy4/2vSUhNOKYxqA4Ozu7tKVV6vSHGkoh\n/Q7dPitjLMainB/h2goeSmgU4tol3sX9rS4+sLe3h2q1ivv376Pb7eLg4AAvXrwwZ8vPwpPUk09a\nBa8nZDnmVJAa46EhMPm//TtN0n7LxCYTea/XMxXN3rx5Y8KzV2GlTkIMKSmHvLi4iFqtZg7myOVy\nKJVK2NjYQKVSwWg0wv7+Pra2tvDHH3/g2bNnODk5iTUYJ00k8xgLmXxkXdwlB6HP2zdqoAsr6XyX\nrJRV0gTErMY97j0P9eRCrxnSbmi0Qp6b7BDK5XImP0dygWaxX17Pd3reC+m39Ff6LAW0xOFy5UZl\nFQ6XdpIoZok6iPGb5dj69ufahus1MjAyYUuVskqlgr29PTSbTTQajUsh1az7oPuSVRhWKyH5OWuP\nOIu+ixUtRU30pJDGC5h0vTQvqZaZ169fo9/v4+TkxJxwJr9bXl420ZNWq4V//vkHz549w9OnT9Fq\ntVIl9GQlKy5jQlvzrvX4Sd/1wec72viepSz4klZm7Ha0op8VLoXk6ymGohMlpU6GPLck7aUZ75Dr\n2VGqOKPE5eSkTXZL8/5I7pY2Qq7i3biadL/3BN+tFtIoilAoFMwWHdn/PItBtL2XLLI4pT3dPnAx\nAz4r9AuSNvyt9wAn2VUQej07AStJG5JdH0WRqU727t075PN5k3UvW/wGg8GFohxJxkzLy6wribkS\n74DkBobvOq2+rl0g5Cqxz2f4WNAJWbPqv/bk9fVcSijriELaJD+JVMXJWlI5F0XsE8FK2m8pnCb/\n0mJFRKbq8I9KyQu2ZTfrl1obF7qQS9o2XcziPuSFznISntc6sxgVaQwrkRXxYID3a64S/hPDza6f\nnrbfN27cmIsFb6/FpzHo9JhP67fLsLhKtCGepZE8D3ShlVnNZ7rMqswHwOVEzVAF72sUCknlUpNF\ndEzmBJ9qm2mvBWTzfkhbasv1VB3+UYTrXcxDuQu2J59Wwc1zctTCDKRLsNHMQ8HrMU+DfF+/zOPx\n2GRMa1nK6kXUxZPm4clrsojYaCXwMWBHNOZhhGaNbaz5
fgcI24Hh+k4SRRS3Vu1q1/5/0meT5TMN\neUfTylOa5QzXUp303acE8JUp+dAJ0BakLATG95p6Hcs+nMZ17Tihdwm8XjeyBcn1vWnYL5ju+//D\nO6kr9IX2wf7Mpz0dWpTv+ng5LrlwKcI0a3U+/ZYDljRZy2ncJCufuTyzaX1fWFgwsjJtTTpEKSWR\nZZ/2dD9Ezue5Bhp3X77X1Z6ZZIxP63uSMdQefJp27PuVd1TkwHXY1KRnHxImt9tz/a2PnGl50e+o\nPebSnmt+Dnk2SZYz4trR86I4btO4MiWvQ1OhuCa3LMPQuj09sBIiCfWKJwmfFqQk92MLvd2e7ru0\nm7RCn8/vdKTDnlBcyibunm1vOGkY06Xgs8Q21nS/XWtx0yaHaQrZliOXZ6bHP+55uK4vfbePZZ7k\nmfkqt7h3wPd+J7Wn+6J3GQBhSw72+KXBpw37GepDr+KUSag823OBNswmteM7r2mZcTltk4yKpHOc\n3d60e4nrs/2O6jG3xznunZp2rZD7cH1Hf6bHWs/phBBCCCGEEEIIIYQQQgghhBBCCCGEEEIIIYQQ\nQgghhBBCCCGEEEIIIYQQQgghhBBCCCGEEEIIIYQQQgghhBBCCCGEEEIIIYQQQgghhBBCCCGEEEII\nIYQQQgghhBBCCCGEEEIIIYQQQgghhBBCCCGEEEIIIYQQQgghhBBCCCGEEEIIIYQQQgghhJCr4X+f\n6KUd7W/7WAAAAABJRU5ErkJggg==\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Current loss: -0.148910\n",
+ "Current cross entropy score: 2.055502\n",
+ "Training step: 800\n",
+ "Time since start: 4.077344 m\n",
+ "Steps per min: 196.206171\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAfkAAACHCAYAAAAV6SDhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsnVlsnFeW33+176yFVUUWi8WdFCWKoiRLsmXL7aXtltNu\nTXdPexqZmTTQwSQPQYB5SBAgD3kZIE9BggRBEGAQYLIB6R7M5sykx+1u2223JVmydlHiKu5bkVXF\n2vctD8a9XZK1UBZZVXR/P0CALRVZ9/u++91z7zn/cw4oKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgo\nKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgo\nKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgo7B9UDfzuagO/+7cOlUpFtarc8nqi3HMFBYU95ok2\nXDHyCgoKCgoK+5Mn2nB1PUahoKCgoKCgUH8UI6+goKDwFVCpGukIVVDYGYqRV1BQUHhK1Go1Go0G\ntVpZQhWaG22jB7DXqFQq1Go1bW1t6PV6wuEwuVyOUqnU6KF9rRgaGmJwcBCDwYBWq71vAYzH46yu\nrjIxMUGhUGjwSL+MSqVCpVKh0WgwGAzodDpKpRLFYpF8Pq+I5xRQqVQYDAZ8Ph9jY2NoNBqSySTj\n4+NsbGw0engKCo/kt8LI63Q6Dh48iNPp5MqVK2xtbSlGfhdRqVS8/PLL/NEf/RFOpxOr1Yper0ev\n11OtVpmZmeFnP/sZi4uLTWnkAbRaLXq9ntbWVmw2G5lMhkQiQaFQUIy8AiqVCpvNxokTJ/g3/+bf\noNVqmZmZ4T/9p/9EMBhU5ohC0/K1MvLiNHbixAn6+/vR6XS43W78fj9+vx+DwcAbb7zBL37xC/76\nr/+60cP9WjAyMsI777zDSy+9RH9/vzwJi9N8tVqlr6+Prq4uNBpNQ8YoYqcipU0syK2trQQCAU6d\nOoXD4SCTyeDz+XA4HIRCIe7du8etW7dYX18nEok0ZOwKjcdkMtHa2sr3v/99vvWtb9HZ2cn58+f5\n8z//c5aWlqhWq0q6ZB0QHrfad1jhyXytjLxarUav1/Pcc8/x6quvYjab6enpYWhoCIBqtUq5XCaV\nSjWVkReuQJVKRaFQoFKp7ItJrFKp8Pv9fOc736Gvrw+z2UwikaBYLOJ0OqW73uPx4Ha7G2LkxcIg\nDH3tfbVYLAQCAd588016enpIpVL4/X6cTifr6+uMj49jsVi4cOFCQ438w8a+X3hw7OJZiJi2TqeT\nXp90Ok0ul6NcLsuFvHaDJkJApVKJcrlct2swm820t7fz/PPPc/z4carVKuPj4/zsZz8jm83WbRw7\npVYQWPvfjbp/T4OYG2azGZvNJg8NtdqHcrlMPp8nEomQzWapVCoNHPGX0Wg06PV6LBYLBoMBvV5P\nJBIhkUg0ZDxfKyOv1Woxm814vV56enpwOBy4XC5pXKrVKhqNpmEnyochwgmBQACdTsfq6iqZTGZf\nhBPUajWrq6v85Cc/4Rvf+AYHDhzgwoULmM1m3n77bSwWC9C4ojC1xqRarX5pMYjH46ysrDA7O4vD\n4SAQCOByubBarXR1dWGz2ejr6yMej3Pnzp26j19cg06no1qt3jcn9oPBV6vVqNXq+05earUarVaL\nyWTCbrfj8/kIBAJ0dnZy5coVZmZmSCaTcrMrjLtYNC0WC+FwmGQyCdTnPhiNRiwWC9euXSMej+N2\nu5mZmSGbzUpj2SzPQ8x58d9arVY+B6vVitFoJBwOk8lkmvJErFaraWlpYWRkhNdff52enh7a29vR\n6XRUKhXy+TzZbJaVlRV++tOfMjExQTabbarrsNls+P1+6VHu6Ojg//yf/8PHH3/ckA3J18rIi4Vw\ndnaW27dvMzY2htVqpVgsyskO4Pf7eeGFF5ibmyMUCjVkrLWCwK6uLnw+H4VCgXQ6TblcbnojL04I\nm5ubfPzxx8RiMfr6+rh27RpGo5FUKoXL5cLhcDAyMoLH4+HVV1/l6tWrzM/P12WMYhETBv7BhSCT\nybCxscGlS5coFoscO3YMjUZDS0sLVqsVs9mMx+Ohra0NjUZTdw+L0AgcP34cu91OsVhEpVJRqVQo\nFovypLu9vU04HGZ9fZ10On3fdTdy8avdZKlUKkqlEgaDAZvNht1up729nYMHD9La2orBYJDeLGGU\nahfExz3HvRy/RqOhVCoRjUblJsTpdDI5Odk076jwjhiNRpxOJz6fD6PReN+aV6lUsFgsaLVapqam\nCAaDZDIZyuVy05yExft29OhRTp48yQsvvEBPTw9erxedTkcmkyEcDjM/P0+xWKRcLstNcLlcboh3\nQq1WYzQa8fl8+Hw+Ojo6cDgctLe3Mzo6Snd3N16vl0gkQqlUYnJykmg0Wte5s6+N/IMnxEKhQKlU\n4t1332V2dpYf//jHwBc7cbPZjF6vB76II//jf/yP+Z//83821MhrtVoOHTrEm2++SaVSYW1tjWAw\nSDqdJp1ON2RcT0OlUiEajRKPx5mcnESj0UhD9LOf/Yy2tjZGRkb4l//yXzI4OMgf//Ef86d/+qd1\nM/JijI9axIrFIqFQiF/84hfMzs6ysLDAD37wAzo6OtDr9Wi1WrRarcwYEIa1XphMJg4cOMC/+Bf/\ngpGREbmolctlEokE5XIZjUbDrVu3uHz5Mu+//z6rq6vSHSuuvVFelNrr0Gg0pNNpecpxuVz09PRw\n4sQJYrEYd+7cYWtri3Q6TalUui/OLa6nWCySSqXqJoYU4b9kMkk8Hr8v5NBs7m6tVovT6eTw4cO8\n9tpruN1ujEajvG/ZbBa1Wk0ul5P3dGNjoyk2gwKXy8Xo6Cg//vGPGR0dlZ5ZgFKpRDweZ3FxkZ//\n/Od89NFHrKysUC6XMZlM5HK5hsx1jUaDy+XizJkzvPHGG3zrW9+6b+0QIZJ33nmHzs5O/vN//s+M\nj48rRn6nPOyBVioVMpkM8/Pz/NVf/RWXLl3C4/HwzjvvcPLkSdRqNcFgkM8//5xwONyAUX+BxWKh\ns7OTAwcOMDQ0RDqdJplMyrhksyPuvThdPThpRWzVYDCwsrKCw+GQC3QzUalUyOVyrK+vyxP90tIS\n3/zmN6VYUBiZei4gKpWKnp4ejhw5gsPhIJ/PEwqFiEajFItFKRC0Wq0cOHAAo9EoQwti/gSDQX7+\n85+zvr5et3ELasMjuVwOtVpNuVwml8sRjUYxmUzEYjFu3rxJMBhkcnKSYDB432Jde79rT/L1NLDi\nO8UGrxmMIXwxP0wmE06nE4vFgs1mo7e3l+HhYXp7e4lGo0xNTckNSjQapVwuUygUWFpaYnt7m2Kx\nKJ9RI3UftZqZRCLBr371K65evUoymbwv/JBOp9ne3ubevXsyrGkymfB6vSQSCeLxuAzz7NU41Wr1\nfTqBM2fO8MILL3D48GGGhoZwOp3Sc
yWoVqtsb2/LVEudTrcn43sU+9rIP4pKpUIoFOLDDz9Eo9Fg\ntVoZHR3lueeeQ6VSEQwGuXLlSkONvMFgoKOjA4/Hg9lsJpvNUigUSCQSTSnm+Sqk02lCoRCxWIxU\nKiVFMlqttu5G80nE43Hi8TihUIi1tTV6e3txOp1otVry+XxdTwnCezA4OMjIyAiVSoXFxUXGx8dZ\nWVmhVCpx/PhxAoEADoeDQqGA2+0mEAhgNBoxGAyo1WomJye5du2aPLHVGyF0FUYeIJvNsr29jdVq\nRaVSsbS0RDAYZGVl5Yn3+GG6ir1EfN/jvEGNQqPRYDKZcLvdeL1efD4fBw8epLu7G5fLxb1797h6\n9SqRSIRwOEwoFJIGsHYTJYzRg9qJeiIMp0ajIZFI8OGHH5JIJAiHw/dtRB42PpvNRktLC8ViUXqB\n9tLIa7VaqdGwWCy88sor/M7v/I4UFsdiMTlOcTgol8uMj49z584dMplM3Ssl7isjLxaKnU7GarWK\nXq/HarWi1X5xqULMo9VqG1qWUrhcZ2ZmyOVyTE9Pc+/ePWKxWNMtKF8VES/zer0MDg7i9/u5fPky\ndrtdiquakVKpRCwWk/nPyWSyrouf0+mkt7eXI0eO0NXVxcbGBlevXuVv//ZvZcz9448/xmQyyXhk\ntVrFYDDw4osv8sMf/lDWKGiG02etcRbjCYfDRKNR0uk0mUymKTNKqtVq3UM0O0FoBUS4TKQI2+12\n1Go1qVSKxcVFbt68ST6fJ5/Pfylr52HX1Kj10Gw209bWhsPhQKfTSY/Vg6fyh405m82yublJOp3e\n01M8IF3wIkTQ19eHTqcjGo3Kw+Pt27eltioYDJJKpcjn8ywtLbG2tkYsFqv7IW5fGHkhxLHb7Wi1\nWulqehw6nQ6r1crAwABjY2N0dnbKf9Pr9TIXOpFIyBhgvRCpcpFIhEqlwurqKouLi2xtbX2tiq8c\nOnSIl156SZ6KbTYbPp+PtrY2CoXCnhp5sRCK0+STEAIxl8tFW1sbxWKRjY0NotFo3Tw+arUanU6H\nx+Ph4MGDmEwmgsEgMzMzXL58mdu3bz9yERMb10KhgNfrpb29nXA4TD6fr8vYH0WtG1jE2CuVCqlU\ninK5LMVfzYROp2N4eBi9Xs/k5CSZTGZHPydisLWnz71CbEBSqRTxeJxwOEy5XGZ9fZ1KpcL09DRb\nW1s72uTVviuN2GyJeS+0L4VCQYZtdooQK+/lfRf3R4wvlUpx584dkskkFouFYDDInTt3KBQKZDIZ\ntra2SKVS0stQG4YVnhPxe/eSfWHkRbnRnp4ejEYjt2/ffqKRNxqNdHd38/bbb/OjH/0It9stb6zF\nYsHn8xEOh4nH41LRXi+EkQ8Gg2xublKtVikUCk3nwn5WvvOd7/Cv/tW/wmazAV9MZrvdTmdn531p\nUHuBcAFWq9UdpdhoNBoZ1z58+LBMZ5yYmCAYDO7ZOB8cs9VqxefzMTw8TDgc5vr161y9elUu3o9C\nuAdv3rzJ+vo6p06dwmaz7ek9fhIihlkrWAPkPG/G+S5i3e+88w52u51//+///Y5TtPR6PWazmWQy\nuaebK2HghRZmfn6eaDT6Jd3DTg2eEBg+TFtTD0qlEul0mpaWFhmv3um8MBgMtLa2ylDnXlKbiRAM\nBgmHw9y8eVN6hUulkjyk1c7v2j9qtVq+F/UKAzW1kRc3Y3R0lFdffZXt7W1WVlYe2RRCp9NhNpt5\n7bXXOHbsGD6fj5GREXw+n3TXq1Qq+vv7OXfuHG63mytXrsjdWL0WnAcnQDO6Kr8KZrOZl156iQMH\nDuB2u3n99ddxuVzyBCdibqurq3vqsnI4HDgcDiwWC+l0WqpwH8RgMOByuRgbG6OnpwePxyNTXlKp\nFKlUimq1ik6nw2g0yo3YXiAKgPT09GCxWLh37x7Ly8ssLy+zsbGxo/tVrVbJ5XKEw2FMJhPt7e2Y\nTKYvpaPVi9ocd41Gc59Rb4YwwsNwOBx0dnaiUqmkt+FJBAIBXn31Vbq7u7HZbNy4cYO7d+8yOzu7\nZyLaWg9VrdhSiASfdAgSiIJEjXwehUKBWCyGxWKRHimDwfDYvhG1G8h6CXpr75G4zw/7zKOoFe6Z\nzWZyuZzso7KX976pjbwQzR07dowf//jHvPvuu8zPzz/yhhgMBpxOJ+fOneP73/8+FovlvkI4wmXY\n1dUljU86nWZ5eZlsNls3t32tcf+6IGp7nz17lm9/+9v09/fLlEVA7rRFaGKvTjoqlQqPx0NPTw96\nvZ7NzU15Cn7wROlwOBgcHOTcuXOcPHkSv9+PWq0mkUhw7do1isUidrsdp9OJw+EgFotJEd5uo9Fo\nZGU1lUrFnTt3WFxcZHNz86l+j1h8XC4XgUAAk8nUsFirWNRsNhtGo1GKS+sdHnsaWlpa8Pv9hEIh\neSJ/3Fg1Gg3d3d386Ec/knU5fvKTn5DNZllaWtrTTBnhvSmVSjsOKdQiNmEiztyovP9isSh1MMJ1\nLzbVjzPytTUM6pGR9KxzVozZaDRis9mkR2avPVpNbeRbWlo4fvw4Bw4cQK1Ws7y8zPT09CNPNXq9\nXhYyMRqNFIvF+8QYYgLl83lyuRzFYhGdTkdHRwelUolQKPS1OVXXG6PRKHOf/X6/9JwIpqen+elP\nf8onn3yyZ4ZSxNX7+/s5fvw4c3NzbGxsoNVq71MVi6prY2NjvPHGG9LAi01fIpGQhYpsNptchKan\npwmFQrvemU6UNdZqtWxsbFAoFKRe5GkRRl7MdWh8NTa3243T6SSZTJJKpchkMjJW2eixPYxiscil\nS5cIh8OkUqlHfk5UZ2tvb8fv99PS0kI+n2dycpLx8fGmToUVBqe2UFEjwyfValVuokU52AcPaGLc\n4o8ITeylh203EYVzhJcom83W5R1oaiNvMpno6emhUqlw/vx5ucg+ykCUSiWy2SwLCwt8/vnnUqAn\nTncOhwOz2Uw0GmV5eZnJyUni8Tg+n+++YhuNFivtN0QI5MyZM/T29tLS0iL/TRSxmJiY4P3332d+\nfn7PXsjaWFc+nycejz82Jl0sFonH40xPT7OysiINULFYlPm33d3dJBIJEokElUoFjUbD6urqri2I\nwl3a0tKC2WwmFouRTqeJxWJf6WTldDoJBAK0tbXJCnKNolaoJNyp4uTYyHE9CmHshDpdCGMfhUaj\nYXBwkMOHD9Pa2opWqyUajbK+vs7GxsaOXeb1RBh3q9WK2+0mk8mQTCabIk1QeFP1ej0ul4ve3l55\nWMjn8ySTSak9qFQqclOwHxBe5sHBQdLpNLOzs3XbnDStka+djJOTk5w/f/6xBh6QBvr//b//x2ef\nfSZvpMlk4qWXXuLQoUM4nU6mp6e5cOEC0WgUo9HI0aNHMRqNlEolVlZW9s3EaQbErvrMmTP8s3/2\nz+ju7pb/JgSFMzMzjI+Ps7S0VJcmDbOzszI3OB6PS5GSmDtCQHP58mXGx8dxOBwA0uXp8Xj4wQ
9+\nQG9vLx6PB7/fTzQaxWw2o9Pp2Nzc3DWPjzDwra2tmEwmNjY2yOfzX9kI9vX18YMf/IBDhw6Ry+Ua\n2qdBeBWWlpbY3NyU+oAHT2XNcJoXG8R8Ps/29vaOvDV6vZ6XXnpJNsPK5XL3FbRqhut6EOE18vv9\nnDx5ksnJSW7dutU0Y61UKqTTaUZGRviH//AfyuIy29vbTE9Pc+XKFdRqNcVikTt37uybtbqlpYXB\nwUF+93d/l3v37nH37t26eR+a1shrNBpZFS6bzcpynY9DnBqEaEmklQi35b179zCbzWxtbbG0tARA\nT08PfX19aLVaOjs7+fnPf8729vaeX9/DqBXBCEPUrAIlgRDKOJ1OvF6vNK5Go5Hr16/z+eefs7a2\nxszMDKlUqi6nBXEazmQyj8xaqFarZDIZstmsPO0XCgXpsv/oo4+Ix+Oy5rSIzVcqFZaXl9nc3CQe\njz+zsRdzVqVSYbVaZaMisbldXFzc0Rwwm810d3fz4osv8tprr+HxeFhaWrrPqDYCsdET98lsNmM2\nm6UgTzwjEadv1FhtNhs9PT2YzWaZf/44RDGanp4e1Go1//t//2/C4TDhcJjp6emmeWdr+wd4PB46\nOjoYGBigv7+fgYEBUqkUN27caKrxmkwm/H4/x48fl90s0+k0nZ2ddHR0yPV7bm6u0cN9KOKAajKZ\n6OrqYmRkhEAgQH9/P8eOHZNpo/W6501r5HU6HTabDY/Hw9ra2o53bNVq9UspT/l8nvHxccbHx+Xf\nqdVqLBYLarUar9dLR0cH/f393Lx5k7t37+7qtdTyqJOLUF06HA6ZV1kPUcbDxidcZDsRSInJLBbu\nmZkZotEodrudn/70p/zkJz+px7CB38Se0+n0lyp7Pe5naudWtVolHo/zySefsLq6SigUoqenh0Ag\ngNfrJRAI4Pf7yWaz8hk9y/Mpl8tSkKbRaOjo6KCjo4Pe3l7K5TLxeFy6J8UmVywitdXKWltbefHF\nF3n55Zc5ceIEANvb2w038oAcey6XkxX5RItZMd/y+bzUPtQTUbCptbWVQ4cOARAOh+8TjT74eZVK\nJdPlTCYT6+vr/Lf/9t+4d++e7O724M9AfbURYpzi4GAymejt7eX48eMyrGYymbhy5UrD50ctOp2O\n9vZ2+vr6GBgYkJURq9UqnZ2dDAwMcOvWLYrFIgaDodHDvQ8xJ/R6PUajEbvdzgsvvMB3v/tdenp6\naGtrw2g0yk3Vb7WRV6lUsuXn0tKSrPm7m4h0o83NTa5cucLBgwfxeDx77kIRCv8Hi4QYjUYCgQDP\nP/88Kysr3Lx5s+6KV7F4+f1+ANbW1p5YnMdsNhMIBLDb7aRSKd577z2uXr2KTqeT3pJ6IdSqtff2\nq75IYrP461//mqmpKbxeL263m+3tbebn53flFA+/cWmvr6+TzWZli9tCoSDrYff19VEsFllbW5Mp\nfaIcsghpORwOuVkVi2IzLd5CCZ5IJMjn89L4GI1GrFYrGo2Gzc1NEolEXcet0WgYHh6mo6ODRCIh\nyxrH4/FHft5sNmO1WrFarfzVX/0VxWKR5eVl6aKvHb8o5CW6B9ZL2CvurcViwW6343a7GRwclO2U\ntVotyWRStsttlrnicDj44Q9/yFtvvfWlzBBRxMzr9eLxeHbFyO9WuEilUjE8PMybb75JZ2enHJ/H\n4yEQCEjNjWhYU8/73ZRGHsBqtaLT6ZiZmWF5eXnXf78wCNFolMnJScrlMn6/vy4xY4GYwMLIu91u\nDhw4IN239V6oxalKCOd2UgRGiKvu3bvHhx9+yMWLF7l27dpeD/WRYxGbJvH/z4LIlQ+Hw6ysrMga\n2eFweNdKaIoxJxIJCoWCLJ0p5kR7eztjY2NSkCQ0JsPDw7S3t9PS0oLNZsNsNsufq1arLC8vMzU1\n9ZVSq/YCEYLK5XLk83lZA1xkAeh0OumdqPecF2uNEMyJAlWPQxwSrl27JrVAD/sZq9VKS0sL8Xi8\nruVMa1v2ipO8qFdQW0+hXiG0nWIymRgdHWVoaAi4v3BSqVSSYR4xd3aD3ZhzQtx98uRJvF4vTqdT\nZnmJP8JDWu8NVdMaeb1eT6lUYnx8/KlzhZ+GXC7H6uoqqVSKiYkJ1tbW9uy7BA97yKJAgtFolOOq\n98snPAzixLGTHX4qlWJubo7NzU0MBsMTY5n1YLdfIhE3Fu1/a1u57uZ3FQoFGSK4e/cuDoeD/v5+\nfD4f1WqV6elpWULT6XTi8XhkEx2xsRHP7ZNPPuHdd99la2tr18a3G4hNjcjtLhaLZDIZuYGpd6OU\nSqVCOBwmnU6ztrb2xAI45XJZZmGICnOPmgcqlUq6nkVzknpdV216mdClGAwGLBYLbW1tpNNpNjY2\n6nqoeRrE+IUBFulmoiTybpRD3i3h54Mi2bm5OcrlMp2dnXi9XlmTRavVNsTD1rRGXsSl97orm9gl\nRiIRmU5SD2oftEajoa2tja6uLlpbW2XxkHrHJ1tbW6W4JZlMPrTv+4OnZJGu9rQqV7ELb6ZTxKMQ\nLvXatpd78bKKeykMn9hc/O3f/i0AGxsb5HI5KQ6Lx+NkMhlp7K1WK7FYjMXFRT7//HNu3br12Dzv\nRvIww9iIuVCtftEsR6vVSi2HwWCQddAfJth8XCErkTvf2tpKR0cHTqdT5nzX8/pqK7KpVCpisRhb\nW1uybLLT6ZSakmZBiI5v374taynY7XYMBgNra2tSP9XS0kJvb6/sNf9VEc/2Wd9j8fOLi4u8++67\n8jAg3PZOp5ORkREGBgZobW1VutDVImoqP1hYZbcQObsajYZkMrknsf/HISaHmLydnZ20trbicDho\naWnZ8yYuD+L3+zlx4gQul4u1tTWmpqbkolZ7qn+WE2ytG1H87t0uLAO7f5oXAq0HRXB7gfCkiHDB\n4uLiff9uNBqJRCKyYM7Y2BjDw8MYjUY2Nze5ePEiN2/eZGFhYc/GuBuIeVXrqn9Wnvb5iyIsokCS\n8KZlMhlZMGunv0voC4RA7MiRIyQSCZkhUU9qNxSFQkGGnYxGoyzbW49GOjtBPDOLxYJer+fq1avc\nuHGDjY0NOjs7sdvtTExMYDabGRsb46233uLAgQNYLJZn/u7dei7VapW5uTmp+BfxeLH5fuuttzAY\nDNhstl0LM+yUpjTyarWa119/nRdeeIGVlRXOnz/Pe++9t+vfEQgEZBOPRvZwr1QqbGxssL6+TrFY\nxOFw0NvbS7FYJJfL1U0YMzo6yu///u9jNBpZWFigWv2iLnSpVGJ5eZlIJEI+nyedTn/l+yVSqISq\nXbRO3Q1E/LE21LBbv9tisXDq1ClUKhXz8/Oy13UjKBaLbG1tcePGDTY3N2Uzpmq1ytbWFpcuXdrT\nENdu43A46OrqYm1tTRagqcd8F6llojFQZ2cnfX19BAIB7t27x/z8PAsLC0/0UomN66uvvsp3v/td\n7HY7Go2GTCbDhQsXuHHjBtvb2w0Tt4kMjkQiIfuda
7VaSqVSUxQl0mq1mEwmjh07Rm9vL3fu3CEY\nDJLNZllbW0Ov1xOPx6UYO5VKyRLUzUqxWGR7e5tUKoVWq2VkZIRIJEK5XFZO8oLe3l5Onz7NgQMH\n0Ov1JBIJ5ubm2Nraeubdp3gpRfvTUCjU0BKU5XKZcDjM2toa29vb6HQ6/H4/q6urdZ0QbW1tjI2N\nyZSi2vQu0boyn8+zsbHB6uoq29vbOzb2te0k9Xr9nhhIkV8uRDmRSGTHHcQeh+icNzQ0JGuSN5Jy\nuSw3pltbW5w9e1amo21vbzM+Pk4kEmnoGHeC8NR5vV6GhobIZDLEYrE9n/O1qYciO0GcuLxer3Sr\ndnR04HK5WF9fJxQKyXx+gZhXoh55IBBgdHSUZDJJKBRieXmZhYUFlpeXG1p2VWyaMpkM6XRaek0i\nkYh0LTeK2jQ/g8GAWq1mc3OTlZWVL723LpcLv9+Pw+HYNc/PXlGpVGTITfy/yWRqSLXHpjXyYnEe\nGBjA6/Vy8uRJ/uN//I+89957z5ybLAyOCAMIQ9YoRG72+vo6c3Nzsn2iaBxRjxOAKOmZz+cxGAwE\nAgFaW1tl+0Sxs9br9Vy/fp1PP/2UCxcuPPRlfBgi/UWIaIToajfdhcJF1tfXh81m4+LFi6yvrz/z\nAtvX18c/rCFTAAAgAElEQVSRI0dwuVwsLCywubnZFKr1crlMPp/HZrPR1taGVqslkUiwsLDQsPn8\nNO5ynU4nG+kMDw+ztLS05xsoIbaq1Vb4fD6sViubm5tYLBZaW1t5/vnn+eY3v8nKygq//vWv+eCD\nDwiFQlLjIAynSDt1uVxEo1E+/vhjzp8/z+zsrIwrN0NDHiF2FCWbi8Uid+/erVsb5YdR+xyKxSK3\nbt3izp07RCKRh96vAwcO8M//+T+ns7Oz7qHVZ6Wnp4fR0dFdCTE8LU1p5CuVChcuXMBut/O9732P\n1tZWDAYDL774IqlUSlbystvtLC0tEQ6Hn8o1azabaW1tJZlMsr293RAley3V6hdVwaLRKIuLi7hc\nri/FwesxhmvXrvE//sf/4NVXX2VwcBCLxSIFPKJqoNFoJBQK4XQ6MZlMX2oh+qTvKBaLUim726eb\nYrFIIpEgk8lgtVoxm81YLBay2axclHfynMXCLbrZ9ff34/V6WV9fZ3FxUeYWNxpxPSK+ura2Jssy\nNyozQyzaT5oTInVteHgYn88nU+uedQO/k9Q3uD9mHY1GSafTsi66Xq/n6NGjWK1WTCaT9D4JDY9W\nq6VYLEo3frlcJp1OMzU1JSveCY9jo417LVqtVr7HIoWu0ZtVoc0pFAoyDfFRehez2UxHRwcWi6Wp\ncvt3gqhXIBrr1JOmNPLVapUPPviAaDTK4cOHGR4eplKpcPz4cXQ6HZcuXcJgMNDd3c3HH38sd6hi\nojyM2u5FLS0ttLW1sbi4+MR6+PVCNMZYXV2lUCg8sdXiXvDpp58yMTEBfHHK8nq9sva5qO6l0+mw\nWCwy91Ov1+/IZS/Kt4oOgHtxXaK4kaiSJbwHOp1OpjrVlrmtHUNtjr1Op8PhcDA8PMzrr78uG49M\nTU0xOzvbFCczgdA2hEIhbt269dhWzHuJcLuKRftJBk6o0EdGRjCbzQSDQVkrYK/HX/vsK5WKTJut\ndWm/8sorOBwOGZYSc0atVssiLcIoieySZg2RiHVPCAMf9KY1emzCyD/qudeW5lWr1RQKhabZaO8U\nUQuiXC7vyrifxmPWlEYevnj55ufn+Xf/7t/R3t6OzWbD7/fT3t7Oj370I+x2O1arldOnTxOJRMjl\ncvzqV7/iL/7iLx7aUcloNNLS0oLb7aZQKDA/P08qlWqaxRp+kxsaDodlTKeeVKtVkskkf/mXf8nF\nixcxGo0cPnyY48ePc+jQIdrb29FqtbS1tXHkyBGWlpaIxWKsra09UW1eKpVkitJe3XNhXIQ7XaPR\n4Pf7OXz4MH6/H6PRyO3bt1lfX5e17cWGQ3R+83g8Mv7e19dHe3s7ly5d4vLly2xsbJBMJptqzgB8\n9tlnpNNplpeX5Sat3jzoeRIL84ObKfH3bW1tuN1uIpEIGxsbBINBotFoQ06/td8n+pP/+Z//Ob/8\n5S+Jx+OyJr0QwQph535BtOAeGhpieHhYVrsT7Y0biVinH/fM9Xo93d3ddHV1UalUuHnzJh999BGh\nUKhew3wmxEZGFPTZjbnzNO9I0xp5kb/6/vvvY7FYcLlcvPjii7z++us899xztLe3o1KpOHr0qKyL\nDXDt2rX7+lYLNaNwtYn0mGacIFqtFpvNRrX6RQW0eufJwxd1/q9du8a1a9dQqVQsLi5KF3h3dzdG\no5G1tTWi0SjlcnnHAhhxit5rxL3LZDK4XC7a29v51re+JcMPnZ2dLC0tkUqliEQibG9vy8IgmUwG\ns9lMS0sLdrsd+KLq39TUFHfu3Gk696tgdnb2vup8jRhj7XdqtVpZclSEZSqVCmq1GpvNhsvloqOj\nA7vdLo3o6upq3UvaPoxyuUwmk+Hq1auP/Ewj3suviijDK8SNo6Oj5PN5CoWCTF1sBKIbXiAQwGg0\nks1micfjJBIJWZNCq9Xi8/no6uri0KFDHDp0iGKxyO3bt/nVr37VtJ6TWoTX2G63U61WZYGietK0\nRr4WoSKemprC5/Px5ptv3lcSVuS3Dg8P853vfEfmK87Pz5PNZqVrLRKJyEnUjNhsNg4dOkQmk2Fu\nbg6dTtfQVpzVapXJyUm2tra4ffu2bLCwuLjI3Nwc8XicdDrddIte7f1qbW3l+PHjuN1u1Go1L774\nIsePH0etVrOyssLi4iKLi4vcvHlTbmjm5uZkKeVSqVTXtK6nQbhhc7kc2WwWi8WC2Wxu2JwRqVlC\nCAhIj0mxWESv13Pw4EFOnDiB0WgkmUwyPj5OOBxme3u7oeLXryO1IatAIEBvby89PT0sLy83fC6r\n1WpcLhfvvPMOnZ2dBINBPv/8c27evCmbMrW0tHDu3DnOnTuH3+/HYrGQTqeZmJjgypUrDdcT7IT+\n/n7efvttDhw4ACAPn/VkXxh5cQpcW1vj8uXL/Pf//t9lI4/aingrKyvMzs7K+F6t+1C4Seod534a\n0uk0c3Nz5PN5tra2GprWVzsmIYqan59Hp9MRiUSkArZZ76U4kc3MzPDXf/3XdHV10dLSIl3upVKJ\ncDjM1tYWoVBIaiGEMFDEzsS8acbrFGMSDVNEWl0jxyoU/4lEQgoCxalNpE6GQiGZUbK6uirrLzTj\nPd7P1Ma7Y7EYGxsb2Gw2eQDa2tpqaH0Q0RZcq9Xidrt57bXXOHnypDyYmc1mTp8+zdjYGEajkZmZ\nGd577z0+//zzRzYQahaEtsdms+H1ejEYDITDYT744AOuXr1a17neyETDr3yVokJcIBD4Sur6ZkUU\nhahNd9lPsb9mRHTjOnr0KN3d3UxOTrK2tkYsFvva3N8zZ87g9/u5desW6+vrTVGPXFQ1FHnoQrSp\n1WpRq9Wy
dG8kEmlaz9p+prbVrBDc9fX14ff7WVxcZH19nc3NzYaJSMX80Gq1jI2N8b3vfY9XXnmF\no0eP3telU1SXzGQy/OVf/iX/+l//66Yt1VyLRqPBYrFw4sQJvvOd73Dq1CnK5TJ/8id/wpUrV3az\nfPoTbfi+OMk/iDgFiGYRXwcDD78R3gEPFQ8qPD3ins7OzrKxsUEsFpM1u78u91c0WIpGo01jMGsV\n9qlUilwuJ1uuiiZIuyVCUvgyYj0U818ImYPBIMlkknQ63VAPVa1Qc3Fxkb/7u78jmUyyvLzM5uYm\nkUiEaDQqvYilUomFhYWm8G7uBJF1MT8/z9///d8zNTWFWq1mcXGx7tewL0/yCgoKv8FqtaLX66Wo\np9k0EgqNpTZ9GHamaK8nQhT93HPP0d/fz/LyMsFgkFAoRCaT2ZehHHG/DQYDRqNRdotcXV3d7Zj8\nE224YuQVFPY5tX3Yd1rwR+G3j71q3vSsCIMoSlIL9b/wtu3X+VwbMhHv6B4UqlKMvIKCgoKCQqPY\n42wXxcgrKCgoKOwdzeoh+C3hiTa8eXv1NSEPxrb2E/txzPud/Txf9iv7/V4L1fl+Yr/P8f06/p2O\neX/NJgWFfcJ+XTgUGosybxR2G8Vd/xQobqn6o9xzhd8m9uN8349j/hqhxOR3k0aWmFVQUFBQUHgA\nxcgrKCgoKCh8TVGEdwoKCgoKCr+tKEZeYVdRhEMKCgoKzcO+rF2v0LyI9J9mbM2qoLAb1G5ilTmu\n0Ox8rY286DVvMBhobW1Fq9USDAbJ5XJKY4xdRKVS0dPTQ3d3N4Bs0Sq6jyUSCYLBIHNzc03TQEUg\n5ogQVYrNydel6ZHC7qHVanG5XPT397O1tcX8/Hyjh6SwzxD17E0mE9lsti51+b+2Rl64jU0mEw6H\ngxMnTmA0Gvn000/Z3NxUjPwuoVKpUKvVnDp1iu9///uya1SxWKSnpwefz8e9e/f41a9+xdraWkOM\n/OOyIsRLp9FoqFarFItFisXifQZfQUGlUqHX6+nr6+Mf/aN/xIULFxQjr/BUiDr2NpuN9vZ2gsGg\nrM+/l+vM18rI63Q6TCYTb775JsPDw2SzWex2O21tbfh8PtRqNadPn+aXv/wl//f//t9GD1ciDCXs\nPze3RqPBZDJRqVTY2tpiYmKCTCZDW1sbra2tBAIBuru76e7uRqut/3QTL5Zocyo2GSqVildeeYUX\nXniB1tZWKpUK29vbqNVqMpkMV69eZW5ujs3NzX33TPYboq+42WxGo9GQz+flRrEZUKvV6HQ6BgcH\n6e7uJhQKEY1GlTmhsGOMRiMej4eTJ0/i9/uxWCzcuXOHqakp1tbWyGaze/bdXwsjL4yk0+kkEAjw\nO7/zO7zyyivE43Hsdjs+n0/2sC4UCqRSqaYx8mJj4nK50Ol0ZDIZEokEyWSy0UPbEWIBjEajTE1N\ncfHiRbLZLENDQ1itVqxWK21tbVgsloYI8sTcEC550cNao9EwPDzMuXPnGBwcRKVSsbGxgcFgIJVK\n4XA4sFgsTE5OEolESKVSDRm7VqulpaWFUqlEIpH42hkWlUqFxWLB6XTi8XgwGAzEYjEikQjhcFh+\nTlx3I8IoWq0Wi8VCe3s7VquVpaUltra26jqGp+HrUJxGq9Wi1+uxWCyYzWZ0Op1cwwFKpRKhUIhs\nNtuw63xwPXvcOLRaLW63mzNnzjA4OCivrVqtEo1GFSP/JDQaDWazmePHj/P2229z5MgRXC4XNpsN\nvV4vT5Ci5Z9er2+awjatra0MDAxw9uxZfD4fi4uL/PrXv+bXv/51o4e2I0qlEslkklu3bjE3N0ck\nEqFcLpNIJFheXuby5cucPn2ara2thvU5F274arVKuVyWz/7OnTu8//77mM1ment78fv9aLVayuUy\nv/d7v8fo6Cg3btzg/fff5/PPP6/7uNVqNR6Ph9dee41wOMyHH34oNynNzk4WQLEB6+vr46WXXqKn\npwez2cz6+jrXr1/n0qVLwBferVKpJNuOiv+uFxaLBbfbTTQaJZVKkc/nCQaDdfv+p0FsaCuVSt3m\nym5vKtRqNQ6Hg66uLp577jnGxsZob2+XhyCAYDDI//pf/4u7d+9SKBR25XufBjF3RQvZcrn82La4\nhUKBbDZLJpNBp9MRCATI5XLkcjnu3LlDJBLZs7HuWyNvMpmwWq3Ab5o6lMtltre3mZ+fp1KpYLPZ\ncDgcMuYqPnfgwAG+973vcfv2bZaXlymVSg1ZOFUqFQ6Hg/7+fo4cOYLf70ej0TA+Pl73sXxVxMQO\nh8NEIhF50spkMkSjUTY2NigUCphMJgYHB9nc3CQSiVAoFOqii3iUmK5SqTA/P49Wq8VoNHL69GmO\nHj2KwWCQJze73Y7H42F6erohRl5sXkWo46t4Qhpxqqsd55P0EEILIbQcer1eGiqtVisNeiPEkELX\no9frMRqNbG9vk8vlKBQKpNPpuo3jcahUKukN9Pv9eDwenE4npVKJfD6PWq0mnU6ztbVFIpGQm5RG\nrXmPw+Fw4PV66e/vp6uri87OTkZHRzl48CAejwetVks2myUejzM1NUVLS4tc24vFYl0OEWJOiPkp\nDg5PolwuE4vFuHXrFlarldbWVtxuN729vdLLuVfPY18aeZVKhd1up6enB5VKRaFQIB6PMz09zfT0\nNCdPnmRsbIzBwUGGhoYwGo0YjUYZ93755ZcZGBjg3/7bf8vW1haZTKbuQjyxE2xpacHn82EymSiX\ny2Sz2aaJRT4NDy7A4vQcjUa5dOkSBw4c4PXXX2dycpJr164RjUbrZuQf/B7xdxsbG2xvbzM3N8fq\n6iq9vb3odDp0Oh0ajQav14vL5cLn8+35OB+GRqNBq9VKPcHTUluzoF4L+oN1Eh5nmMViuba2RjQa\nxe/343A4qFQqbG5uUiqV5PtQ666vF2J8tQeIeDwOUFdPwqOoFRd3dHRw9uxZTp06xcDAAMVikVQq\nhU6nY21tjU8//ZSZmRkWFxcJh8Ok0+mmM/KdnZ28+OKL/MEf/AGHDh3CYDDI91GtVlOtVjEajWxt\nbbG5uUmxWMRgMGA2m0mlUnXzFOp0OgwGA0ajkXQ6vSOFfLVaJRKJ8N5775FKpfB4PPT09OD1eqVt\n2qv1cN8ZebVajV6vp6WlhdbWVtbW1giFQmQyGYrFIuVymRs3brC+vs7ly5dxuVx4vV6++93v8txz\nz2E0GuVJrb+/n6GhIaampuq+MxeLXzQaZX5+nqGhIcLhMJ999hkLCwt1HcteIU7R2WyWZDJJPB4n\nlUo1TQpjuVwml8uxtbXFzMwMly5dYmxsjN7eXtRqNZVKRapfG4GIO3744Yckk8mncr+K0JRYPOp5\nDQ9u9h73uVKpRDqdlsbcaDRSrVbl39XbNV+LyWSivb0dvV4vT4rNMG/hCwPf0tJCZ2cnHo8Hv9/P\n8PAw3d3deL1eCoUCVqsVo9GI0+nE6XQSDocJhUIsLS2RTCZRqVRMT
EwwOTkphY5f5V4/62ZBo9Gg\n0+no7u7m4MGDcgNeqVTI5XIUi0V0Op0U9i4tLbG4uMja2hrd3d28/PLLfPbZZ1y7dq0uIYpyuSy9\nITvdgIqDRSaT4d69e/zsZz9jdHSUarW651qbfWXkNRoNBoMBq9UqlbjxeJxgMHifAnp+fv6+9BaD\nwYDf72doaAidTic/GwgEGBwcZHFxsSHuN6Honp2dpb+/H41Gw5UrV1hdXa37WPYSvV6PRqMhl8vJ\n3NBmOAkBchOysbHBrVu3aGtrIxAIAJBOpwmFQg0TQZZKJba3t/nss8+e+mfFwilO1fXyDj3NYiWM\nvNjw5nI5Ka5qhlOmwWDA5/NRKBQIhUJNM2fhCyNvNpvx+/20t7fj8/mkkEvEfjOZjPQWulwuAPL5\nPDMzM8RiMRkaWV1dlUaoEcJGvV6Pw+Ggu7ubrq4utra2WF5eJpvNEovFyGQyGI1G5ubm+PDDD4lE\nImSzWQwGA0eOHOHcuXNEo1Fu3769Y/f5V0X8/qfdTNTe13A4zK1bt6hUKphMJtLp9J7OrX1l5MVk\n9Xg8FItFJiYm2N7efuKiUK1WicfjRCIRbDYb8MVL0traitfrRafT1esSvkQmk2F9fZ1Lly6hVqsJ\nhULkcrmGjWe30ev1DA0NMTo6ytjYGKlUitnZ2aY6FQmq1aqMjSWTSSYmJvjkk0+YnJxs9NCeChEK\nEimDjRI8PokHT0DNWoSoXC431KPzMMSadvfuXTY2NlheXmZ5eZn29nba2toIBoNsbW0xODiIx+NB\nr9djtVrR6XRMT0+TTCax2WzEYrGvrPfYLVwuF8eOHaOnpwe1Ws34+Dj37t1jc3OTeDxOJpNBo9HI\nTbfYsIosGbPZjMlkwmg01mVdedb56fP5eOWVV3A4HMTjcRlG3iv2hZEXQpyDBw/S1dVFOp1mZWWF\njY2NHcdDlpaWmJmZweVyYTab0Wq1+P1++vr6sNvtbG9vN0SlWSgUSCQS3Lt3D5VKRSqVajrj9yyY\nTCaef/55xsbGsFgsGAyGRg/pPjQaDVarlY6ODgYGBuSJJ5PJsLy8zMWLF1lbW6vrmISRfhaDJ0Il\njXR3Pw6xsO3UtV9PNBoNAwMDtLW1YTAYSCaTZDKZptosCc9HKBQilUoRiURYW1ujpaUFh8MhhbDL\ny8u4XC60Wi1WqxWDwcD6+jr5fB6LxcLa2poMnzVqg1WpVMjn80QiEfR6PdPT08zMzBAKhUgkElJR\nX4vYxIrQrdFobOqeGcKjMjg4yKlTp/jGN75BJpNhdnZ2z+uH7AsjL3Zrr776KgcPHuS9994jl8s9\n9OE/jEqlwsTEBG1tbfT19dHZ2YnBYGBgYIBEIoHP52Nra6shRl4I1IQyvTbG83XAbDbz5ptvcvDg\nQS5fviyFKs2ykdHr9QQCAY4dO8Y3v/lNWltbAeSis7CwQCwWq9t4avP6hVvwaRFucBGLb8b5JFKP\nhOFspjEaDAbeeusturu7uXjxIisrKyQSiaaZswKhGRG6BiHEq9VhhEIhaRDFH5FGKrIXaudKIwiH\nw1y+fJlUKkVnZycLCwskEokn/pwoxOVwONDpdE2dXirS5v7oj/6I559/Hr/fz/z8PNFodM89yU1v\n5FUqFaOjo5w9e5aRkRHy+Txra2uEw+FHPlCxgMBvXLBbW1ssLCywtrYmUzXgizz10dFREomETI+p\n58ssTly1G4xmyeF/Vvr7+zl16hSBQACn00lbWxtGo5FCobCn1yeMpMFgkDH32u8TqZSvvfYaZ86c\nwev1Mjw8jNvtxmAwkE6n2djYIBQK3VeNcK/RaDQYjUZ0Op1MFyoUCjJO/TQnSTHvm6EwihABioIg\nQixWLBa5du0a29vbTZNRYjab8Xq9+Hw+rFYr6+vrhMPhHRmQRjSuEafvRxnoR61lD87rRs4PsUkR\ncXi1Wo3BYCAajT5yzpvNZo4dO8bIyAg6nY5SqUQul2s6r5Ver8dsNnP27FneeOMNXnjhBZxOJ5lM\nhlu3bnHhwgWZsbFXNLWRF6UuT5w4wT/5J/+EhYUFPvvsM5aXl9ne3n7kz4n8W/iNEY1Go6yurrKw\nsIDf76ezs5NSqYTNZuPIkSNsbm6yvLy858KNBxEvl/jOZnY5PQ0qlYrh4WHeeOMNPB7PffHuvV7Q\nRQ0Fm81GsVi8T7dRqVTQ6XRYLBbOnj3LP/2n/1RmXMBvXIebm5uk02ncbjfJZHJHJ4tnoba0a226\np1jwRFjnq/zeRp/mxYmrpaWFgwcPcurUKXp7e4nFYiwtLZFIJJrGyIvUXIvFQiaTYXV1le3t7cfe\nP7HeaLVaNBoNhUKhKfPQa6ktDNXozaAYi0hnFqmsQln/MAwGA/39/Xg8HjY3N4nFYuTz+TqP/MnY\n7Xb6+vr4vd/7PX73d38XlUrF6uoqU1NTfPrpp3z66ae/3Ube5XJx5swZnn/+eVpaWrh58ya//OUv\nn/jSPeh6EorTzc1NxsfH6erq4tChQ7JYRF9fHz09PTidTgqFQt2Fb9VqVbrUmnlh2CmiElRfXx+n\nTp3CaDRy69Yt/st/+S91KfTT29vLgQMH0Gq1ZDIZqYwulUrEYjFaW1s5cuQIPT095HI59Hq97Jwn\nFL0qlYre3l58Ph9///d/v6cVzmoLmphMJum+s1gsOBwOAoEAkUiE8fHxHc+P2oW70RtHnU6H2+1m\ndHSU06dPc+bMGYxGI1NTU/e57BuNeObf+MY3CAaDLCwskEqlnnjPhTrc7XZjs9lYWFiQlR/3y/tc\nqwFp1JhFitn6+vojQ1ViXmezWS5dusT8/Dx/8zd/w8TERANG/GSOHz/OH//xH3P48GHgi7X+5s2b\n/If/8B9YWFggHo/v+fxvWiMvCt688MILdHV1sbGxwd27d5mYmHhiIYeHTVRRmOXOnTu0trZisVgo\nFosy37S9vZ329nYSicSen9oeRJwERPUvYXD2KxaLhba2Nrq6unA4HNID89lnn9U1vi02eyL2CL+p\n0FcqlZiYmCCVSmGxWFCr1RSLRVlqslwu09bWRkdHB3fv3uXKlSv3VV/bLcQJ0Ofz0draKutAGAwG\nWdd9YGCA+fl5JicnnyiQEhssUblP5PI2Mp5c604WFR5LpRIrKytAcxSW0Wq1mEwmvF4vHR0dXLp0\niYmJiR3VRheiNovFgslkkl7E/UBtquVX1YDsFkJjINL7RLVS4Y4Xc9lkMsm24UtLS0D9UkR3ik6n\nw+l0MjIywmuvvYbBYCCXy7G2tsa1a9f49NNP6+Zha2ojb7PZOHz4MHq9nk8++YTZ2Vmi0ehXXhRS\nqRSTk5PEYjGuX7+OTqejr6+Pc+fO4XK56OnpYX19nc3NzV2+mscjTnK1+eT71ciL1MTTp0/j8/kI\nh8P8/Oc/56OPPtrTJgy1LC8vE41GpWdEeHLEfd3Y2GB2dva+alrwG2Pkcrn49re/zcsvv8zg4CAd\nHR3YbDZSqdSu6wl0Oh1W
q5WRkRH6+/sJhUKygplw33d1dQFfuChrRYsPG4cwVna7Hb1eTzqd3pNx\nPw3ZbJZgMEipVGJoaAj4zb1udChBoNfr8Xg8WCwWcrkcKysrLC0t7UiMKzbpqVRK1rffL6d40dtc\nFHhpNEKfZLVa6e7upr29HYfDQTqdJpFIEIvFaG9vx2AwcOnSpT13dX9VLBYLhw8fZmBgQNYjiMfj\nfPjhh1y9erWu875pjbzD4aC9vR23283i4iJ/93d/x8LCwlfeaQrFcTKZpFQqEY1GsVgsMvbT1tbG\nyZMnmZmZYW5ubpev5tGImGtvby8ul4tqtcrq6mrT1MaG+8t7ijrRD+6cxULncrkYGRnh9ddfJxwO\n82d/9mdcu3aN+fn5um1cRPW0Wje1UCGLF+txC1oqleL8+fMYDAbZKvftt9+mWCyytLTE9evXd62g\nj4hXt7W10d/fj9frxWKxyFLHVqsVp9MpTzN6vZ5qtcrm5ibJZJJCocDi4iLb29tYrVZ6enoYHR2V\nnetu377N0tKS/Hy9Nlq1CB2GqHooNuqiol2jjaHYmL788st0d3eTz+dJp9NkMpnHPmOVSiX1HLFY\nTHqIcrmcvCbhXhan+2a5XrPZTGtrK11dXbjdbhYWFmRKcjNkEfT19fH7v//7tLW1YTabyefzJBIJ\nwuEwuVyOYDCITqdr+L18FC0tLbz66qscP34ctVrNwsIC169f56OPPmJiYqKu425KI69SqfB6vXR3\nd2O1WgkGg3zwwQfPfGPELlGImGw2G+3t7cRiMbq7uzly5AjvvffeLl3FzhDu2e7ubnp6euRCIUSA\njZ7EwsMgik3odDq5UNcuZKJ1YldXF0NDQwwPD/OTn/yEP/uzP5MlIOvFgwVgnvYeptNpPv/8c7Ra\nrcydP3v2LBqNRrpxd6s4ijAAer0em82G3W6npaUFt9uN0+nEbrdLN/KBAwdwOp0ATExMsLW1RTqd\n5uLFiywsLOD1ejl16hRvvfUWVquVaDQq3f5iY1ZrgOqJ2GTH43FZd0A0KmqGOe52uzl69Cg6nY6V\nlRXS6fQT56ww8mq1mu3t7Yd+XoRj9Hp9XTvDPQyxSRdtcwcGBhgeHpaZDkKL0ki0Wi12u52xsTH+\n8A//EIfDIcWjoh7A7du3SafTe55f/lURXfReeukljhw5IssHf/DBB1y4cKHudTea8i6pVCpGRkY4\ncUg2MA4AABIgSURBVOIEWq1WFrzZzZdDuHBXV1f5+OOPGRsbo6urq+658rU1jYUIMBQKSUV1o3bV\nIsWmo6NDCtDcbjctLS1cunSJ8+fPy0VNr9fT29vLwMAA7e3tFAoF/ut//a/cvn27YSeDZ50r1WqV\n6elp/vRP/5STJ09y8OBBCoWCFFTB7qQ65nI5wuEw58+fZ3V1lY6ODrxer+xQJWqRG41G3G43arWa\nQqGA0+nEZDIB4Pf7yeVy8jMulwu9Xk+pVMLv9zM5OSmVy40yMGIzs76+zkcffUS5XGZ1dZVUKtWQ\n8Qg0Gg2BQEDO61gsxtraGmtraw9dcx5swCM8bo8SiYkqnWq1umElnWsb2bjdbl5//XWOHj1KT08P\nkUiE+fl54vF4UzStcblc/PjHP+Yf/IN/gM1mkx4QtVqNyWSS1ft26z7uhdjZbDbjcDiwWCzo9XoA\nbty4wQcffPDYrLC9oimNPHzRkainp0cqnvdi8gkxnlD5plKpugrD4P4TjnD5PbiQNALRxKe9vZ3e\n3l46OjqkYHFqaupLndFEO05Rx+DmzZuPzXPdD4TDYcLhsBQmqdXqXVdNixzhubk5QqEQgUCA1tZW\n7HY7a2trrKys0NPTg91uR6vVkkql5Ptgs9nweDzYbDYpMtRoNMRiMQqFAltbW6yursr6+43cMAox\nYDabZW5ujlgsRjAYbEj44GHjS6VS3L59m0gkcl/L5MdRrVYfeSgQJ3hRSXFjY6Nh5arVajU6nQ6f\nz8fBgwc5c+YMzz33HF6vl48//pjFxUVZD77RIkiRMj06OirrQwjNjGjvCuxqTvxuGXrh8RwZGeH0\n6dNSSFsul1lfX+fevXu7MNqnp2mNvFCrxuPxPVW7CwM7MTHBzMxM3V0pQlEaCoVYWFhAq9Wyvr7e\n8Mp3JpMJn88nxUjCaIgqWrXV1CqVijyViVhmI9r37hXT09PEYjE6Oztli9zd9CxVq1VZYS+VSsnF\nTHQQ6+/vx+12y+IsmUyGjo4O6e0KhUJEIhHpwrRYLASDQebm5vj0009ZXV1taCxYaDpEVzSAYDDI\n8vJyw8VeYu7Wpm09bt7u5LmL6zWbzRw4cICXXnqJd999t27tlR9E6D6Gh4d57bXXGBgYwGazUa1W\nWV5e5vr162xsbOzZYeppERqOdDqN0WiUKa6A9GTtRidLcZjarWu2Wq0EAgF++MMfcu7cOXw+n1zf\nG7kWNq2Rz2QyMs61l13AxG58a2uLVCpV9/Q5MYZMJkM4HMZgMBCPx9FqtQ3LWxWV1+x2u2xjabPZ\nUKlUsiNUraGrVqukUiny+bxsWdkMi8VuIVrlwhf3Zi+UsUJpXlugRChyk8mkdP0lEgl5snG5XKys\nrDA/P8/Kyoo8wVerVWKxGKFQqOEucZVKJXP9jx8/jtvtRqfTsbS01BTzRGywxH/Ds1Wu02g0OJ1O\n6QGz2+3Mzs4SDod31GdjL6htWFQulykUCvKa4/E44XC4KU7x4tS7srLCzMyMzDLxeDyUSiX5XvT1\n9XHy5EmuX7/+TIcy8Sye9ZmITdThw4f59re/LTOLAK5du8YvfvELbty48Uzf8Sw0tZGPxWKy5/de\nIMqHlstlksnknhY8eRzV6hfNJuLxODqdjmw2i8lkuq+LWD0WB7EYmEwmbDYbDoeDtrY2uru7cbvd\nwBcubKvV+iVDJ/LLv65oNBpaWlr2fCGs3dSJ7me1RlrEeX0+H9vb27J61szMDNvb28RiMVlgo5GL\ntjjN6nQ6Ojs7OXbsGGfPnsVqtRKLxbh582bdygU/idp5LAyJcAvvdCMi0mDNZjMdHR0MDQ0xNjbG\nwsICFy9eZGNjo2GbGvGd6XSacDhMNBqlpaVFHi7S6XRTeN1EbYf5+XnpkSoWi1J4J5rRHDhwAIC/\n+Iu/eObvfNbnIUTHHo+HkydP8qMf/QiHw4FWqyUej3P+/Hn+5E/+pKFhy6Y08iqVikAgwJEjR8hm\ns4yPjz9zV64H0Wg0DA8P43Q6WV9fb6iBEu4pcQpraWnBarWSy+Vk2kg9Joko99rf309fXx+9vb0M\nDQ3h9/sxGo3/v727+2mr/uMA/g5dSymlhY51CCUdUsgGQ4fBmfiweIHTmCUmxjv/Df8T/wSvTYzJ\nDMZl82JuIlvieNBuUCiUQtuVnva0PRTa0/O7MJ+vZVHHHHAO/t6vyyWDtpx+Hz8P2NnZQSqVOhBZ\nf1itu6PW/HWnaw1YGhkZwfb2tkojtIM8K5JK1N3dDcMwkE6nD9QCsLuMrdfrxdDQE
CYmJjAxMYHx\n8XFVhbBUKiEUCh3oMXFUXvb4Ve6v+/v7Va/1vb29A7XeZYErv0dKEk9MTGBychKGYaBcLuOHH35A\nNptVWRB2/U0ajQYqlQry+TwymQw0TUM4HFZtt+2uVSAxG1euXMHY2Bi2t7exsrKCer2ugtgGBgbw\n5ptv4qOPPoLH41FxMna+ZhkbBgYGMD09jXfeeUe18zUMAwsLC0gkEraPc46c5AGoVVtPTw8uX76M\na9euIZFIIJvNvvSKWJoGhEIh+P1+dcxsJ6lyZ5omIpEI+vv7kclk1DHscU/ykg4UCoUQjUYxOjqK\nV199Fb29vWg2mygWi9A0TUW1vsixtTSEkRW57DCPq4xva3Gbl/n5Ug0sEAggEAigWq2iXC7b/qU1\nTRPValXtwCRgVO7+TnIH39o1T+II/H4/ent7MTY2hqmpKYyPj6uWzrquo1KpODog0+VyqdOrSCSi\n6i7IqZppmtA0TUVKy4AvgVfFYhHZbBbLy8uoVCq2V2OT1GEJdpRGXBIDYnfhHnmGOjs74fV6kUql\nsLGxgWq1inq9DrfbjQsXLsDj8eD69eu2Te7PPusejweRSARjY2O4evUqRkZGVEplqVTC7OwslpaW\nbB8vHDnJS7GP7e1txGIx1ar0yy+/xM2bN1+63m9XVxf6+/uxu7sLTdPU5GUnCWCr1+uIxWKYnp7G\n/fv3YRgG1tbWjvV3yyAldQPOnTuHvr4+XLhwAbquIx6P4+nTpwCgUrTcbvehAkrkiyHR97Va7dgC\nfFoLj0iZ2pddDErHQo/Hg5mZGWSzWVtzu+U9St5wKpVCqVRSE9BJH9FLMSe/36+udwYHBzE6Onog\n7dKyLJTLZSwuLuLHH39UxZGO+nM8iloalmVheHgY7733HsLhMPb29qBpGnK5nGqm8+DBA9y/f1/9\nn2aziaWlJaytralNgx3pcn/Hsizouo6nT5+qcsdLS0vY2tqy/eRHxognT55ge3sbW1tbKJfL6m9h\nmiaSySTS6bTqdWBHxzkZx/x+PwKBAILBoMpUGBgYQHt7u4oryOfzuH37Nh4+fGj7M+DYSX52dhZu\ntxvXrl3D4OAgent7EYvFcPnyZeRyObjdbgSDQaysrCCbzT43iEIGf6/Xi87OTpw5cwaFQgGlUskR\nX0bL+qNLW3t7O3p6ejAwMKBqqh/3ylU+M8MwkM1m4XK5oOs60um02pXUajUEg0HEYrEXWv3LbloC\nfo5zQJGf6/F4VEnXF53o5URD8tXPnj0LXdfVIHOYhiXHrdlsolarYXNzE5ZlYXt7+1jq6h+G7GC7\nuroQCoVUjr+UaJa0P9M0cebMGSQSCfzyyy+quprd37tnyXVIPB5HMBjE1NSUysDxeDzo6+tDR0cH\n1tfX1QJXJvlKpaKeD7ufkb8imwgJNJ6bm0MqlbL9byDjg4zHhmEc2MQ9G6H++++/486dO9jZ2Tnx\n1ynBsXKiI6V2I5GIusaLx+O4e/cu1tfXba8DATh4kr979y6SySR0XcfU1BRGR0dx7tw5vPHGG8jl\ncjh79iyi0Shu3rypVnWtd2UStGRZ1oH+4lJBrF6vY2dnB5qm2f12FVm4BAIB+Hy+E20TalkWNE1D\nsVhEMplU9c9rtRoMw1ClVhuNhroyOSxJSzqJHbBlWaqjm0wissCTXYB8prKDkON90zTR3t6OUCiE\niYkJRKNRBINB3Lp1C/F43PbBUEgwqtRXt7MUqdxhyzG9LKB1XYfP50NPTw9yuRwAwOv1YnV1FUtL\nS6hWqydeeOqw6vW6CpaTEz5N09Q1Wnd3N7xeryOyAw6rNaZArp3m5uawublp8yv7c5H0d2OKHOVL\n8acHDx7gm2++OfFAaXmNu7u76kRteXkZHo8Hw8PDqu7KvXv3MDMzc+KLkL/jyEleFAoF3L59G/Pz\n8+ju7lYpFZ988gkikQgCgQDGx8dV8Yr9/X3s7u7CMAysrq5iZmYGjUYDfX19iEQi8Pl8KJVK2Nzc\nRDKZhGEYdr9FAH9G5p4/fx5vv/02hoeHDzStOek7KNkptuYNywpW6o//Gyc1IBqGoQaMV155BcPD\nwxgZGUEwGMTPP/+scqJl99nf3496vY7NzU1MTEyoRjHpdBqzs7NYX1931GAuz7oc29u5+JCgLtnJ\n6rqOwcFBXLx4UV3pbG5uolgswuVyYXV19VAlY+1mWRZyuRxu3boFANjf30dnZyd8Ph86OjqQSqUc\n9Uz8E5ngJec8Ho+jVqupOA4nk74Cn332GT788EOYpol0Oo1EImHL+C3fPSliFggEUCqVkEwmkUwm\nsbq6ivn5eTx+/Ngx/UccPckbhqGq0bW1tWF4eBjd3d147bXXEIvF4HK5MD4+rr5ssuuUVerc3Bwq\nlQp6enpU28JyuawecCfxer3o6+vDlStXEI1GVTpOR0fHiaYayUnIsytruQf7N5H1J21/fx/NZhN+\nvx9DQ0P44IMP8PrrryMUCiEYDGJjYwOmaSIQCKgTIV3X8ejRI7z11luIRqP47rvvsL6+joWFBdvq\nvf+TZ+sU2KXZbKraCNVqVdXtbm9vh8/ng9vtRqFQQCKRQL1ed0QBnMOQGIJ4PK7+TRbbTnsWDku6\nLmYyGVSrVduvS1pbbEu0vNQtkOJJ4XAYly5dwo0bNzAyMqKqQOZyOdvSEWXT02g0VBlkqfD566+/\nIp/Pq5gCJ3D0JC9k0ikUCqra2rMpWQBUSz/p5StRjslkEo8fP1bBME7L525ra1M97YeGhhAOh1UR\nkUAg4Jh8YuD0DHBtbW0IBoMYHx/Hp59+inA4rMqM7u3tqZRMuVNOpVKo1WrweDyq5/Nvv/1mS4DP\nYThhghfy+ZimqYooBYNBhEIh9PT0oNFoIJPJoFAoqKP708gJn/W/1XptKc+N2+1WteHtIPEv0phJ\nAgIzmYw6pf3444/x/vvvq2yjn376CRsbG7Z/J2Wyz+VyKJfLqi2xBMHa/fpanYpJHoCqqvbkyRN8\n9dVXiEajqtyqBEFIQESz2TzQXlN2+E7oePVXLOuPZjnpdBr37t3D4uIiarUaFhYWsLy87Ni7Sydr\nNpuoVqtYW1vDnTt3MDk5iWg0CsMwsLOzg1wuh52dHei6DsuykM/nsba2phoDJRIJR6TL/ZWjqtR1\nlGTQ0zQNq6urAP7olOfz+fDo0SMVhOSUI8z/J63FcAqFArq6utSGx+5rE9M00dnZiVgsphpcFYtF\n+Hw+DA4OYnJyEiMjIwCAra0tfP/99+r5spucOkjBKjnCdxo7u6C81Agl6TqVSkVN4Pv7+wf6iDtp\nEHwel8uFYDCIaDSKRqMBXddVPfLT9D6cpqurC+FwGJ9//jmmp6exvr6OpaUlzM3NYWVlBVtbWweC\nNelovExpWDoeUh/k/Pnzqma/E47sL126hOvXr+PGjRt499131Slba6nnvb09fP311/jiiy8cEbHu\nIM+dw0/NTv5ZhmEgk8mo1IrWIiCncVCR
naf0kZcmDKfxvThJrVZDLpfDt99+i4cPH6JSqUDTNOTz\neei6riZ2fs5Hi5+n80hWRj6fV7tOJ/ydNE3D4uIiIpEIPB4PyuUy0uk0lpeXoes6dnd3YZqmqkFA\nL+bU7uSJiOjwWotFAXDMJC8VEq9evYqLFy+qQM35+XkUi0UYhuGI1+lQz53DOckTEf2fcOI1ikTX\n+/1+dHR0qEweKdvMa7R/xEmeiIjoP+q5c7hzcrOIiIjoSJ3awDs7OPGoi5yLz8vJkjtnp9QP+DdO\nY2YQnbwXeU64k38BMogQPQ+fFXuc9s/9NL7+0/ia/wv4mRMRERERERERERERERERERERERERERER\nERERERERERERERERERERERERERERERERERERERERERERERERERERERERERERERERERERERERERER\nERERERERERERERERERERERERERERERERERERERHZ639n5WHwaNQC0wAAAABJRU5ErkJggg==\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Current loss: 1.437691\n",
+ "Current cross entropy score: 0.656794\n",
+ "Training step: 1200\n",
+ "Time since start: 6.129460 m\n",
+ "Steps per min: 195.775811\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "<base64-encoded PNG figure omitted>",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Current loss: 1.147804\n",
+ "Current cross entropy score: 0.324483\n",
+ "Training step: 1600\n",
+ "Time since start: 8.179397 m\n",
+ "Steps per min: 195.613445\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAfkAAACHCAYAAAAV6SDhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsvddzXNeVPbw655zRAamRE5FIghSDSEu2aDmMxi+eKpc9\n5Qe/+WX+DT+55nFmamqC61eyXWXZlkxqRJmimQ0SOaOBBrobnXPO3wO/c9SASDGgG2hQd1WhKDF0\nn3vvuWentdcGGDBgwIABAwYMGDBgwIABAwYMGDBgwIABAwYMGDBgwIABAwYMGDBgwIABAwYMGDBg\nwIABAwYMGDBgwIABAwYMGDBgwIABAwYMGDBgwIABAwYMGDBgwIABAwYMGDBgwIABAwYMGDBgwIAB\nAwYMGDBgwIABAwYMGDBgwIDByQHrGL+7eozf/dpgs9ngcDgol8uoVCrHvRwGDBg0ECwWC9XqiTyq\nGHwz8EIbzhj5VwSLxaIvPvPyM2DAgAGDY8QLbTj3KFbxJoFE8qVSiTHyDBi8gWCxWBCJRODxeGCz\n2cjn88jn86hUKsw7z+DEgTHyrwgWi0XT9fX6PDabTTMDzCHSeJB7XotqtcqUX44AJBNG0Gx7nsfj\nQSAQQK1WQyQSoVQqIR6Po1AoHPfSGDB4LTBG/hVBjEG9DiZyqBSLRZRKJZTL5aY69N40sFgscLlc\nCAQCsNlsanCKxSKy2Sxz7xsMNpsNLvfpsUPeJfJznCDOh0qlQktLC6xWK/h8PrxeLwqFAsLhMLM3\nGJxIMEb+FVGtVutiiMmh0t3djcHBQayvr8PlciEajaJUKtVptfUHiYJZLBZ1dg7eCz6fD5lMhlKp\nhEKhgEKhULfMx+uCx+NBqVRiZGQEJpMJAoEA5XIZ5XIZPB4PxWIRyWQSoVAIfr8fe3t7SKfTb9zB\nTvYdcW6OMgXNYrEgEAgglUrB5/PB4XBQqVRQKBSQz+eRyWSOLWLmcDgQi8Xo6urC+Pg4KpUKwuEw\n0uk04/ydAJAyKp/PR6VSQT6fb7os0XGBMfKviHpFHSTtf+7cOfziF7/Ab37zG9y8eROpVKppjTwx\n8DweD1wuF4VC4ZncBIlEApvNhmw2i1gshlgsduxGXiQSoaOjA7/85S9x5swZ8Hg8JJNJ5HI5yGQy\nlMtlJBIJPHr0CHfu3MHNmzeRzWaPfd2NAIfDoY5asVg8smsktW6dTgelUgmxWAwASKVSiMViNGo+\nDvB4PKjVaoyOjuJ73/se7t27h83NTTidTiQSiWNZE4OXQ212TqFQoFgsIhKJ0GCMOLTfVIPPGPlj\nAp/Ph0ajgcFggE6nQ29vL3Z2drC9vY1sNnvcy9sHDocDlUoFpVIJuVyOcDgMv9//3IxGpVJBqVTC\n1NQUbDYbPvzwQ6ysrBzDyr8Ei8VCNpvFkydP4PP5EIvFUKlUIBKJMDg4CKPRCLFYjP7+fiiVSgwM\nDGBrawsOhwOrq6vY2dk58ZEBi8Wi+65YLCIWix1pFE9QrVbB5XLB5XKRzWZpBiWXyx3JWp61Nr1e\nj6tXr2JychJqtRoejwerq6tMFN/kIIGHRCKBRqOB2WwGj8dDJpOBx+NBMBikGbtmDZ5q0QiH5Btj\n5MlmIB6fUCikaUuSNiQ18UKhgEwm09A6oUgkgsVigV6vh0QigUqlgkKhAIfDadh3vg4UCgU0Gg0s\nFgs0Gg1EIhFWVlbg9Xqfa/Q4HA4EAgHGx8dx4cIF3L1799iNPInU7927BxaLhb29PfB4PGi1WkSj\nUQwNDaGzsxN6vR5tbW04c+YMdnd3MT09DQAIBoMnOrJnsVgQCoVQq9Xo6upCPB5HIpE4slo4m82G\nSCSCUqmETqeDwWCASCRCKpWi7xp5P8k9Jgceh8OhTgEhvZJS0GHLDaSEYDKZcOHCBXR1daFYLMLl\nclHH7psCch7yeDzweDywWCx6HjYDb+JZEAqFUCqVsFgssNlsaGtrg0QiQblcxurqKra2tpDL5RCN\nRhEIBJrqGmozo0KhEGKxGKVSiZau6tXB9Y0x8mw2G2KxGFqtFna7Hd3d3eByueDz+RAKhchms4jH\n44hGo/B4PFhYWEAqlWrYeqRSKXp7e2E2m8FisbC0tISZmZmmi+InJyfx3nvvQavVIplMYn5+Hjwe\nj9a+nvXSkNp3S0vLvgjuOJHL5eD3+5FOp+n/s9lsuFwu7O7uYmlpCRcvXsTExAR6enrA5/Nhs9mg\nUCjg9/vhdrvhdDqRTCaP+UpeHeQwaWtrw/DwMC5evIj19XV4PB4kk8mG7zkWiwWJRIKOjg6MjY1h\ncnISSqUSQqEQpVIJ6+vrePz4MTY2NuDxeJBOp1GpVMDhcMDhcCCRSGAwGKBWqyGTyRCPxxEKhbC3\nt4dMJoNisfjaa2Oz2TAYDOjs7ER7ezvS6TQWFxcRDAa/UQYe+NI5J9lFgUCAVCqF3d1dpNPppqtz\ns9ls2Gw2+t729/dDJpPRoC0ajSIUCsHr9eLevXv46KOPkMvljsXQP+sc5PF4EIvFNJM7OjqKvb09\nbG9vY2lpCZFIBMVi8dD3+4018uRga2lpgd1uh1KphEqlglqtRmtrKzo6OvZF9plMBrFYDOFwGCsr\nK/D5fCgWi8jn8w1ZX7lcRiaTQT6fR6lUwt7eHtxu96EOrHqDRDkikQjhcBgejwdOpxPRaPRryYcK\nhQJDQ0OQy+WIRqNNcU2lUgmlUumZBi0cDiOXy9G/U6lUYLVaIZPJIJFIoFQqweVym8ZheVUoFAqY\nTCacPXsWp0+fxujoKLLZLDgczpFck0QigcViwdTUFCYnJzEyMgKRSAQOh4NcLgeRSASxWAyDwQCn\n0wmfzwfgaZQml8uhVqthNpshk8nA4/HoQUiincPsL8ITkMvlkMlkWF5exu3bt+H3++t1+U0NUsJp\na2uDxWKB0WiE0WiEVqsFh8OB3+/H/Pw8Njc34XK5miKTRRw/u92O06dP48qVK+jr64PNZqOZ0Gq1\nilwuh0QiAY1GA7fbTVsij5L3YTQa0dXVBYlEQvc8AOrEikQiaDQadHd3Y3h4GF6vF4uLiwiFQkil\nUnU5O99oI8/lcjE+Po6f/exn6OrqgtFopNEB6U0n6b58Pk83hUKhwNzcHBKJRMOMfCKRwPz8PMbH\nx5HP55FIJJBMJpsmnUScpL29Pdy+fRtPnjyB2+2mJY2vS5PK5XIMDAyAz+fD6XQik8kc8epfDZVK\nBR6PB6FQCJFIBKFQCNeuXUNnZyeNZuLx+Imo6T0LJpMJFy9exNWrV3Hq1CnKbD+KrgfSltbb24tv\nf/vb6O3thVqtBofDQbFYRLFYhMlkotHM3t4e1tbWwGKxoNFoYLVaYTKZoNVqUSgUEIvF4HA4wGaz\n4fV6kUqlaHbmdUAiU5KV2t7exl//+tem37Nfh1eV4pVKpbh69SquXr2KsbExSKVSuj+2t7dhsVjw\n8ccfw+PxNIUgEI/Hg8FgwAcffIBLl
y6hp6eHlnnI+VmtVvc57VwuFxKJBNls9siMPIvFQl9fH37+\n85/DZrNBr9dDJBKhWq3SEggACAQCiMViSKVS9PX1Qa/X4+HDh/B4PHXJJr9xRp7FYqGjo4OmP8bG\nxjAyMgK1Wg2pVEoNeiaTQTweRyqVQj6fp3VxQrr66U9/iuvXr+OTTz75ymFYj01OPFGNRkOjyGZ4\ngQjIwUfSp16v94UHH+E3SKVSmEwmrKys4Pbt2wiFQke06tcHuf+RSISm9dPpNCWFNUtGorYFjpSb\nCoXCvrQe+ZVE6RKJBCaTCQaDARqNBuVyGUKhEAKBoKGpeoFAAIlEgitXruDKlStoa2uj0ThxssVi\nMdhsNgqFAoxGIxQKBXQ6HUqlEq3Fl0olJBIJuN1uLC8vY319HQ6HA36//9BkPR6Ph9HRUZw+fRp8\nPh+5XA6pVKppnO1XBXkHLRYLWltboVKpEIlE8OTJk6/wjFgsFsxmM/r6+jA2Nobe3l5oNBpwuVxU\nq1Ww2WxUKhX6PpCOjKPW8+dwOFAoFJDJZLTMOTIygqmpKZjNZrDZbGxsbMDtdkMoFEImk0GlUqFQ\nKCAej8PhcCASiUCtViOdTjfcgePxeNDr9ZiamsLly5cxPj4OsVgMPp9PndPV1VXI5XIoFAooFArq\neLNYLJRKJWQymboRUd8oI08OvoGBAVy7dg3vv/8+9Ho9JZCQBxwOhxEIBBAIBBCJRJDJZGAymWC3\n2yGTyaBQKHD58mV4vV7cvHlzX+25XptbKBTCbDZDKpUil8uBz+dT4kUzpMSAp9dK7tPLgMPhQK/X\nw2KxQKvVIhQK4fHjx4hGow1e6eFxUOSoXC4jEonA5/PB7XYfKUntWSCHq0AgoD8SiQRSqRThcBix\nWIw6o7VtQywWC2KxGEajESqVCkKhkBqxRqfrJRIJrFYrzpw5g8nJSVQqFcRiMSQSCUgkEnA4HHrf\ny+UyxGIxLS2k02lEIhF4vV54PB6wWCysrq7i0aNHcDgcdE8e9pmQ82JoaAiVSgXZbPbEZmyA/WXK\nsbExWgJZWlra59CR/aRUKmGz2WA2m6FSqcBmsymxMZfLIRQKYXNzE5FIhP67o4RQKIRCoUBbWxs0\nGg3EYjHOnDmD0dFRKBQKpFIpJJNJTE9PY2FhAXK5HCaTCR0dHSgUCohGo1haWsLe3h74fD4VYmoE\nyDupVqvR29uLH/3oRxgfH4fVasXe3h5cLhdKpRJWV1dx9+5dmEwmtLS0wGQyob29HQqFArlcDplM\nBqlUijHyzwKpcZw9exZXr16FQqFApVJBsViE3++H0+nE7OwsVlZWsL6+jlwuRw9GlUpFb7bRaIRI\nJMLa2hqN8ID6tjWkUimsra2hvb0d7e3t6OzsxPDwMKanpw+VfjxOSKVS/OhHP8L7778PqVQK4GjF\nVg4DcujZ7XacPXsWSqUSq6ur+J//+R8sLy8f+3VwOBzweDzYbDb6YzQaaWpvbm6OpvfIWsk1yWQy\ntLe3QyaTIZfLweFwYGtrC+FwuGGpSxaLBYvFgkuXLgEAnjx5gt3dXUQiEaTTaZw6dQo2m41G7KTt\nSSwWUy7H2toa7t69i/X1dVrSikajSKVSdXWEeTweSqUSvF7vicg6fR2I05TNZhEMBuF0OuF2u78S\nxZMyhcvlApvNRnd3N+RyOWw2GwQCAUqlEhwOB+bm5jA3Nwefz4dCoXDkxLuuri5MTEzg3LlzKJfL\nmJ2dxebmJtbX12nwIJPJsLu7C7/fD7FYDIvFgmAwiHw+j1AohOXlZfh8vmdG8fXMSrS3t1MS4MDA\nAFpaWmgQ99vf/haffPIJqtUqkskkIpEIhEIhVCoVurq6cOnSJZhMJmSzWcrTqte63ggjX3sA9vf3\nY3h4GEqlEnt7e/D7/fB6vSiVSnC73fjiiy+wvLy8rz2GkG9UKhXK5TIEAgF4PB5EIhG0Wi3C4TCt\njdTrxpM+5UwmAw6Hg5aWFlgsFszNzdXl848aRFFufHwcIyMj4PP5lMzYDGnu54EYQr1ej66uLpw7\ndw69vb0IhUKYnZ3Fw4cPkUqljsXAk/YauVyOlpYWdHR0oL29HTabDSaTCTKZDHw+Hz6fDx6PB9Fo\nFLlcDuVyGWw2G0KhkBJ/SKo8k8lgbW2N6jE04rpqCUUmkwmpVAoej4cai3Q6jUKhgGw2C6PRSIVx\nSqUSotEogsEgNjc3MTs7iydPnmBzc5MOiAHq9w4SlTRCACQlgNe5VpVKBT6fDxaLhUQiQdnoB1Uh\n653qJpE7icABUENCeCahUOi5LO1UKoVAIIB4PI58Pk/T/fl8Hmtra1hYWIDP5zvyd0AkEkGhUGBs\nbAyXL1+G3W6H0+mkGh3hcBiRSAR8Ph8GgwGlUome2Ww2G6lUCsFgkNqAWCxGuwNIBqle1yOVStHf\n34+pqSm8/fbbGB4ehtFohM/nw/LyMra3t3Hjxg3cvn1733dyuVy0tLRgYGCA8iBIC1095c1PvJEn\nKXq5XI7z58/jn//5n2EwGODz+fDw4UPcvXsXDx8+xMDAADgcDmZnZ7/SHkOIEIlEAnK5HHa7HXa7\nHWw2G263G4uLi3WXOCUvk1AohFQqhUqlglwub7o++ZcF6cEmaWEAtB2xWcoPB1Grk9Df349f/OIX\nGB4ehlAoxOeff44HDx4gnU4fS/q2lt/Q3d2NK1eu4IMPPoBWq4VUKkW5XEYwGMTOzg5txREKhVSi\nl2gAnD9/HmfPnoXNZgOLxUIoFMLS0hKcTmfDDm0+nw+dTgepVIpCoYBUKgWXy4XFxUXEYjGaepdK\npejq6oLNZoNSqYTX68X29jbm5uYwMzODubk5xGIx5HK5hkSQxInicDhIpVJYXV2F1+t96X9PmOkG\ngwHDw8NQq9XgcrlYXV2lBolkComDUk+xk9q+dsInIO8a6YAh6d/nlTXIPpNIJFAoFFAqlRAIBMjn\n81heXsbi4iK9/0cJpVKJwcFBXLhwAVNTU/B4PFhbW8P09DQikQgKhQKVSCYdGFqtFmazmfJUgsEg\nkskk+Hw+JBIJLdvWaqLUowRnMBjw85//HBcvXoTNZgOfz0c8Hsfs7Cw++eQTfPTRR0gmk1+5hwKB\nABaLBf/wD/+A06dPQ6FQIBKJ0Of4jTfyxIOdmprC8PAwLBYL+vr6YDQasbi4iIWFBTx58gQbGxvY\n3d2lacFYLPbMQ5vL5UIqlcJoNNI0SyaTwc7Ozr4UaL1QrVb36aWTVGGzGsQX4fz58/jggw/Q1dWF\ncrlMSY3NVN8ke4ZoI/T09KCrqwsajQaDg4MYHR2FRqNBOBzG2toatra2juV5kCimv78f/f396O3t\nxcDAAKxWKxX62Nvbw+PHj3Hnzh1sb2/D4/GgUChAqVTSLhKTyYT+/n5YLBZwOByk02kEg0E4HA7a\nplZvkPLA0NAQbDYbyuUy3G43HA4HEokESqUShEIhNBoNWlpaaO+7SCRCNBqlKoNut5se5o3iQp
AW\n2kKhgEgkAofDgWAw+FLXqFQqodfrMT4+jsHBQXR2dkIoFKJcLmN0dJRyJIiw1tbWFpxOJ1wuFyX7\nHgYsFgsGgwGDg4NoaWmBQCDAF198ge3tbco/IpLFX9drPTk5iStXrmBsbAwqlYqm6Z88eYL5+Xn4\nfL5jeQfMZjPee+89WK1WBAIB/N///R/u3LlDlRFFIhFOnTqFjo4O6HQ6ep0kat/b24PX66WOFhkA\nVjsQqR7n+cWLF/HOO+/g9OnTMJvNEIlE2Nvbw+LiIj755BPcv38foVBo33eRc6i/vx9vvfUWWltb\nIZFIUKlUcPfuXXz00Ud11Wk4cUa+9qCWSqV4++238f3vfx+dnZ2oVquIRCJ4/PgxPvnkE2xsbNAI\n3OFwAHh+jVgsFkOv18NsNkOj0SCdTsPj8cDhcDRkdny5XEY2m0U4HMbu7i52d3cRDAZRqVSOnL16\nGJBo4vTp0/jpT38KLpeLcDiM9fX1pqpv1kbtEokEOp0Ob731Ft555x20trZCq9VSoZXd3V1sbm6+\nUlRXD7DZbAgEAmi1WlitVly+fBmXLl1CR0cH7dUvFAoIBAKYnZ3FX//6V3z88cfIZrO05Yw4qcTg\n63Q6cLlcRCIRhMNhOJ1ObG9vN+zZEAnkkZERKBQKxGIxmjYtFArgcrkQi8Uwm81ob2+HSqWCWCwG\nj8dDNBqF2+1GMBhEIpFouHAJOUMAUPY+IZg9DyT6b2lpwalTp/DBBx9gcnISGo2GsqLL5TKN8tPp\nNAKBAO7evQuBQECN72GNPJvNhslkwrvvvou+vj7w+Xzs7OxQclcul/ta4hbJ/rz11lv4yU9+AolE\ngmq1Co/Hg7t37+LmzZtYWVk5NtIs2UMcDgebm5v47LPPMDc3R/eQQqHA6Ogo7HY7OBwO3G43DZZ2\ndnawsbGxzzmpHeldr9kjbDYbly5dwj/90z/BZDKBz+fTMsetW7fw2Wefwel0fuXfEF2GyclJXLhw\ngXaThMNh3Lp1Cx9//HFdHasTY+RrJQAlEgna29sxODiI8fFxqlccCATgcDiwtraGnZ0dusnJxDTg\n2WkyNpsNo9GIyclJtLa2IpPJ4M9//jMePHhQF8WhZyGfz8Pv92Nrawt6vZ5OPeNyuZQI1EzqUs8D\nUckSCAS01rW2toZf//rXePz48XEv7yuoVCq0j7+7uxtWq5XWU/P5PD766CP89re/xdbW1pGui7Dg\nu7q60N3dTX8lRoikfe/evYv79+9jenoam5ubVIFPo9Hg7Nmz6OnpgdlsptLD9+7dw8bGBsxmM0Kh\nELa2thAIBBqWYSHsZlLuisVi1KAAoENESEsfmUjHZrMBfNl+dBQOol6vx8jICCwWC/L5PI32vg5C\noRAGgwFTU1P47ne/i+7ubppdqZ2mR3goGxsbWFhYwNLSEjweD+RyOW3NPOy7LZPJ0NPTA4vFQkmL\nZB+/6LNbWlpw4cIFnD59GhqNBtFoFPPz8/jLX/5Cyw0vcngaCZfLhY8++ghKpRLpdJqWH9hsNrRa\nLY1+9/b2MDs7C5/Ph0gkgmw2SxUTa1Ev404gEAggk8moOiCPx0MikYDX68X169fxpz/96StdSaT8\nNjY2hnfffRenTp1CT08PJBIJlpaW8Oc//xmLi4t1Hzd+Iow8IcgolUpotVpYLBYMDw9jcnISPT09\nUCgUNIX56NEjbG9v0wEkwPPJLoScpNFo0N/fj4mJCdpzeffuXWxsbDQskiApNafTSSeiFYtFGAwG\niMVixONx5HK5piatAU+jIbVaTSOx1dVV3LhxA1988cWxHhIHQZ7/wXQdGUsZCASwsbGB69ev4/bt\n20c+DY3H40GhUKCnpwfDw8Po7u6G0WiERCIB8LTGGovFcO/ePXz22WfY3NykbX1SqRRKpRJqtZqK\n95CUZTKZhEAggEqlQjKZRDgcRjweb8i+JgRWhUIBuVwOADRqIY6URCKBXq+HwWCAUqmkJCkiK02c\nMKlU2vCebJPJhLGxMYhEIloaeFEExePx6Mx5i8WCeDwOn8+HeDxOx9KSkbmVSgVbW1tYXV2Fz+dD\nLpejBMPDolqtIpvN0rJLLpejGZ0XtbmxWCwYjUaqFMflcrG2tobbt2/js88+QzAYPHYxoFgshoWF\nBWi1WgBPgyI+n0/HWHM4HEo2nZ2dpSTmr9sr9dxHZrMZp0+fht1uB5/PRywWw8rKCu7fv487d+5g\ndXV139/ncrk0O/H222/jO9/5DvR6PeRyOd0nN27cwO7ubt33e9MbeUKsEwqFaGtrw9DQECYnJzE4\nOIi+vj6IRCKwWCwUCgVsbm7ixo0bcLvdXyHWPQtcLhdarRZjY2M4f/48xsbGMDMzg/v37+9r0WgE\naltYgsEgZDIZVf2KRqPY3d1FIBBoeiMvEon2CVL867/+K27dukU972YCuefhcBizs7O0P1ipVOLJ\nkyf4j//4D7hcrpeKhOoNoVBI5yrY7Xa6LqKQ5fF48PjxYzx48ABLS0uUDMVms6mIE5Eedrvd+3rn\nyd8jxKNGqDjWZtqIWhohY5K5B1wuF2q1mtZRiZx0OByG2+2Gx+NBLpeDUCiEUCikmaFGwWw248yZ\nM/D7/dje3n6p+0LmzhMp6rm5OSwsLGBlZYUqZNaWBIvF4j79AsKCP+x1VSoV7O7u4sMPP4TJZKK1\n4Jd550g0fPr0aVitViSTSdy8eROffvop/H5/w1Q+XwXFYhGpVApcLpcGeUQ4hs1m03kSqVQK0Wi0\nISXV54HFYmFwcBD/8i//ApvNRjszrl+/jn//939HPB7/yr8RCARobW3Fj3/8Y5w9exZWq5Vq7KdS\nKXi9XqysrDRkEmPTGfna2ikRQtBoNNDpdBgbG8PAwADtERaLxfRASSaTCAaDcLvdL2TCE8eB6Hj3\n9PSgo6MDHA4HGxsbmJmZQSKROBIjRWayA091jnt6elAoFKDRaPD3v/+9oUNyDgsWiwWpVIqOjg7s\n7u7C6XRiZmam6QmEpPYNgErWbm9v0wjmOEokJPWn1+uh1WppWxmXy6V6DqVSCSaTiSpoETEcMiEQ\nAFZWVrC6ukqjSnL4kXKATCaDXq+nhyj57MNeM3mnajsslEolFZfq6+ujevlk7aTVzOfzweVyIRqN\nolqtUlKb0WikU+rqMXHu4HrFYjE0Gg38fv8L2wlJjb21tRU/+MEPMDw8DI1Gg5s3b9JRxKQe/zyQ\nVl9SkjusrHAymcTGxgai0SikUikSiQRtESPCLwfFfQiP6bvf/S6MRiMqlQoCgQB2dnbg9XrrshcO\nA9ItYLPZcOXKFZrlJJlPcp0ks0X03Y9qzSTlrtfrKRE2HA7j73//O2ZmZiivikAul1P1u/Pnz+PM\nmTNoaWkBn89HqVRCPB7H/Pw81tbW9nVH1BNNZ+SBLw9hpVIJq9WK9vZ22O12TE5OorOzkyp/Ee3r\nTCYDv9+PQCCAaDT6wlQrj8eDTCbDhQsX8M4770Cj0
dCpRRsbG1hbWzvSdC0ZpkAmhZHe6I2NDbhc\nrqasy5MZzkajEd3d3Zifn8edO3cQj8eb2sADT6NmvV4PsViMQqFA+24FAsG+muZR3nelUomWlhbo\n9XqoVCqqCFcqlZBMJlEoFCAWizEwMIDe3l7odDpotVpotVp6LaFQCJVKBY8fP0YymaT90yTCVqlU\n0Gg00Ov1yGQytKOjHpLKxLBotVrqgEulUtqaVSwWUalUkE6nKaluZ2cHgUAAfr+fStSSdL/RaERb\nWxsikQii0Sii0Wjd+odrybtkaMjLpLhJy98PfvADmqpPp9N0Gt7X7XtyZimVSgCgTsth3pVcLgef\nz4dMJgOJRELr/KR3n8/nf6VNTCaT4f3338e1a9egUCjgcrlotwVxEo4TXC4XSqUSvb29+O53v4ts\nNguXy4VMJoNQKAQej0ffz3w+f+ROCVlfbZYtGo1ibm4Om5ubAECzZoSgOTIygh//+Me4evUqzU6R\nAWVutxv379/HyspKw0rDTWfkSWqGHBJSqZQSLWQyGWWwkggsFovB4/FgcXERMzMz9CB4HlgsFkwm\nEwYHBzE0NERJKzs7O1T+8Cg9w9p1kR8SZR1lCupVQA68K1eu4Ny5cxgbG6PRULMbeNL+1N/fD7Va\nTeeYa7WGpuWjAAAgAElEQVRaDA0NYXt7G263m3Ikjgomkwm9vb1UbxwAPB4PvF4v9vb2wGKx6DQ2\noqtAWOlE91qn06GrqwuTk5OYm5ujxptEEwaDgRKFEokEVCoVFhYW6jI+l2TeZDIZdTSIrKjX66Up\nVfIjEAhQrVZppJ7P5+l7Tu7HmTNnkM/nEQwGMT8/D7/fj1QqdegImGQdCCFXoVBQXsnzOAAcDgcd\nHR10RHUsFqMltRdp3ROuAmlpjEQicDqdh1a2JENYSEseObfYbDZKpRK9TrFYjGq1SvkzfD4fwNN5\nDffv38dvfvObhuomvCxISv7tt9/G22+/DbPZTCeDhkIhms7OZDLIZrPH0r9PslVEyY50ZZRKJdqa\nS6L91tZWTE1N4dvf/jZ6enqoI0mcu8ePH+OLL77AF198gc3NzYYFFk1n5GtT9TKZDGq1GiqVCjKZ\nDMlkEvF4HF6vlxoUcoisr6/D6XTSjU4+pxZcLhcikQg9PT24fPkyurq6IBAIsLW1hZWVFczOztJo\n6Dg2PJfLhUwmQ7VahUQiaajO8sHvJbO9SQbjoP45YUBXKhW0tLSgp6cH3/rWtzA2NgaTyUQJJK8K\n4tiQz25kyxS5DqVSidbWVioOQxTv5HI5jEYjtre3aertqKDT6dDe3g6lUolKpULbENfX17G3twed\nToeRkRGoVCrodDooFAo6eYtEt2KxmDqw2WyWRs4ajYbK2qpUKhiNRvB4PPh8vrrVvckzjMfj2Nra\nQi6XQzwep/rztfoUtfupNhLL5/OQSqXIZrMwGAywWCw04ne73TRyrpd+eiaTQSQSoW2LKpUK8Xj8\nmdKnPB4PdrsdHR0dyGQycLlcmJubg9frfWENm8Ph0Il6JBPpdDoPfR3EKBw08OQ7iZY6cVyIgmco\nFMLMzAzYbDZu3ryJ+/fvH1tZkJzTpGvKZrNhbGwMfX19EAqFyOfziEQiCAQC8Hq9NFtxXA4JIWuz\nWCzE43GqFFirWUDmkCgUCgwMDGBsbIw+BwCIx+PY29vD/fv3cevWLaysrDyzjl8vNKWRJ+lFIk4j\nk8mQzWaxu7sLl8uF5eVl7O3tIRKJ0Kk9tTKSZOMIBAK6wdlsNj0EJyYm8O6770Kj0SASiWBmZgYz\nMzPY2NhAMpk8tj51gUAAk8kEgUCASqUCmUx2JGsRCoVUfjQSiVAnp/bAUKlUtETyzjvv4B//8R9h\ns9kgl8tRrVZpRPkq0S8x8KR+2Og0OSkxKJVKqFQqqndtt9vR0tICrVYLPp8Pr9eLQCBwpEZepVLB\nYDCAxWIhGAxidXUVs7OzWFtbQzgcRkdHB73fRNyDDBIhz4oclGQ4h0QigcvlogM+SA2ckH1qu1Dq\nQQQrFApYWFjA4uIiLaWRtrSDafZaZxJ4+mx8Ph9EIhG6u7vR3t6OgYEB+P1+RKNRSlirhzgOiYDJ\n8BWz2QydTofW1lYkEolnGnmBQICuri5YrVZ4vV48evQIf/3rX+F2u7/2u0g0bbFY0NHRAY1GQ9t7\n65X1IunfWpCSVG3ZRy6XQy6XY3Z2Frdv38b29jb8fn/Dui1eBqRvnJSrCD9KLpcjmUxSeeNa3sRx\nZxyq1SrlkgCgPe+Tk5MQi8UQi8XI5/NYWFig3THAl11eLpcLn376KW7duoX5+fmGEx2bzsiTgTLk\nQNbr9TTqIINlCEmHjD8EQDc5SSWTKUykTQf4MlIeHByk6ZZgMAifz7evnt/o1p3ngXwvEckha2n0\nOkwmE95//32IRCJkMhlsbm5SjXGiFNjV1QWtVgsej4ezZ8+ivb2dtmyVSiWIRCIIhcKXHl3KYj0d\nczkxMQGlUklTh0SdsBHXTFQG3W43Pv/8c4RCIdoC1d3djZGREXR2dsJut+PChQuoVCpYWlpqaAmC\nOLUkQidEKKK4RxjEAKh0KUnBikQiAE+FXGKxGGKxGFWXi0QiiEQiVHkwnU5T/XGy31+GpPqyIIYm\nmUzukwx93mcfFCohKUziGLBYLDoxz+/3IxQK1U15krRObmxs4KOPPoLVaqVdIrFYbJ+jS76rXC5j\neXkZ+Xwe2WwWa2trlFH/LNRGqOQ6nE4nlSLOZDIN0Sog10YEeex2O4xGIwBgZ2cH29vbdKZEKBQ6\ndOnjMCCKgxKJBFqtFi0tLVCpVFheXobL5YJYLKYTIA92LRwXCIdnZWWF1uZ5PN6+kkk+n0cqlYLP\n56PvBXEs0+k01tbWcOPGDWxubh5Jq2LTGXlyONSyGEulEpxOJ9bW1rC6urqvXl1LNCHtFgaDAadO\nncKPf/xjWK1WStYgMpMCgYDOjSYyiAeZyMcBoqmcTCYRCARQLpcpC7dRm5sY2x/+8Ifo7OwEAHz6\n6aeYn59HIBCgRmV8fBxtbW1U7EQmk0EoFILP51Mio0KhoF4ph8OhpC+ywWuHjJAaJ1GLSqVS1FgR\nlnW9QXqLHQ4Htre36f5ZWVlBMBiESqWC2WxGa2srLl++TA/zRh6CpHecpIsJyWxnZwehUIgag3A4\njPn5eRrxmM1mSjrd29uDw+HA8vIyVldXsb6+DuBLdTaigEbqtGKxGLlcrm73uZZPUptdeFmQw14k\nElFBF1JGCIVCNONSDwEZgmq1is3NTWxvb0Ov16O3txeXL19GsVhEMBjcxx8gZ8LKygq2trZomj8c\nDn9lPnutcSfdERwOhw5WyeVyyOfzDYveau+PQCBAX18furu7US6X4fF4sLy83BTjdMneFIvFVJmR\nEDanp6dRKBSg1+v3tRwet4EHnrb27e3tAXhKfLRYLJDL5YjH4wgEAlRON5lMQi6XU1I3mQzodrux\nsLCA27dv
<remainder of base64-encoded PNG data omitted: generated digit samples from the training visualization>",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Current loss: 0.442172\n",
+ "Current cross entropy score: 0.430340\n",
+ "Training step: 2000\n",
+ "Time since start: 10.233876 m\n",
+ "Steps per min: 195.429384\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAfkAAACHCAYAAAAV6SDhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsvflv3Hd+3/+Y+75IDjm8b1IiRVH3bWvr9db2LooN0iQt\ngkWSRRIEKAr0v8gPBfpLgAJtgaTBBm3aTXez2dr1ep31yrIOU7JuivdwZnjPcO77nu8P+r7fO5Ql\nW7LFmaF3ngAhWBbJ9+cz7/f7dT1fzxc00UQTTTTRRBNNNNFEE0000UQTTTTRRBNNNNFEE0000UQT\nTTTRRBNNNNFEE0000UQTTTTRRBNNNNFEE0000UQTTTTRRBNNNNFEE0000UQTTTTRRBNNNNFEE000\n0UQTTTTRRBNNNNFEE0000UQTTTTRRBNNNNFEE0000UQTTTTRRBNNNNFEE000cXCgqOPvruzHD1Uo\nFFQq+/Kjm2jipaFQKFAoFJTL5bquAUCtVqNSqahUKpTLZYrF4jf2rNTiHlAoFKhUKjQaDXq9nkKh\nQDabpVQqfWPfaxMNhy+14cparKKWaB6uJhoJwqDWC8LAKxQKtFotBoMBrVaLUqn8Rp+VWjxbtZHX\narUoFIpvtOPUxMGEut4LaKKJJvYHCoUCpVKJXq/HbDZz6NAhXC4X6XQar9fL7Oxs0yB9BSgUCpxO\nJy6XC4BMJkMoFCKfz9d5ZU008Xk0jXwTTXzDodFoMJlMjIyMcPjwYdLpNACzs7N1XtmzIbIPjeiA\niPKL3W6nv7+fbDbL7u4u29vbFAqFei+viSY+h6aRb+KF0MgXbxPPhigVZDIZIpEI5XIZi8WCxWLB\nZrPVe3nPhFKpRK1WS85AI0KlUgFQKBQIBoPs7u6Sz+ebZ6OJhsSBNfJKpRKlUvmNJbkIowr1M6wi\nalEqlXIdor7ciO9crNdgMGAymTAYDJjNZqxWKzs7O/j9fnK5HKVSac/3CDTiM30RqtcOz15/pVJB\nr9fT3t5OX18fg4ODFItFHA7Hvq5LnE+VSkWxWKRcLqNUKjEajZjN5j3kP4PBgM1mo7u7m5aWFtRq\nNW63m4cPH5LL5RrK2BsMBpxOJ2NjY4yPj5NMJsnn85TL5QO3f5r47cCBNvIajQagoS6BVwFhrKC+\nBl5c0tWMbKCuRLIvgiBCtba20tPTg8vlYmBggNHRUa5cucInn3xCOByWF7J4z4KJfRAvafHZiLU/\n6xlaW1uZnp7mxIkTTE9PEw6H993Iq9VqdDodOp2OdDpNoVBArVbT3t7OwMAAlUqFfD5PKpWio6OD\n8fFx3n77bY4cOQLAj3/8Y9bX14lEIg11vs1mM4cPH+b111/nxIkTbG1t4Xa7USqVzc6eJhoSB9bI\ni4vt6UN1kA+aMKpnz57ltddeo1Kp4PV6+fDDD2W6tVbQ6XTYbDbUajWVSoVEIkE2m23od6vRaLBa\nrZw5c4bz58/jdDpxOp20tbWhUqloa2sjFApRLBbR6XS4XC4cDgf5fJ5Hjx7x/vvvk8vlGvoZFQoF\nRqOR/v5+uru76ezs5P79+ywvLz9z7QqFgr6+Pt555x2GhoaoVCr4fD62t7f3bY1KpRKDwUBnZyd9\nfX10dnbS2tqKVqulvb2d7u5ucrkc+XyefD6P2Wymra2NwcFBDAYD+Xyezs5Ozpw5w507d0ilUvu2\n1peBeK6Ojg7MZjPFYpFEIkEikfjGZhRriWqnW3Qt2O129Ho9gUCAVCr1uTtQo9FgNBrJZrPkcrk6\nrbyxcWCNfPWGePrv4OClXgHJhD558iR/+qd/Srlc5saNG8zMzBCNRmu6FrVajcViQaPRUCqVSKfT\nlMvlho3i4cmajUYjQ0NDnD17lra2NsxmMxqNRpLNotEo5XIZg8HA5OQkQ0ND5HI5rFYrv/rVrxq+\ntirKEb29vRw9epSJiQnC4TAej4dCobDn81GpVBiNRgYHB3nttddoaWkhFosxNzeH1+vdt+dUqVQY\nDAZcLhdTU1McO3aM/v5+lEolVquVlpYWIpEIyWRSlk4UCgXRaJRIJEI2myWZTNLR0YHBYNiXNb4s\nRAuiyWTCZrORzWbxer0Eg0F5NhodKpUKrVaLxWJBq9XKckmhUCCTyciyQz1QXRoUWVqtVktLSwt2\nu12uLZvNyjWq1WpsNhsulwu/308wGGzYs6vX69Hr9Wg0GpkdTSQSpNPpfXcQD6SRVyqVsuc3mUzK\ndN5BTbkKqNVqzGYzNpsNu92OUqnEZrPVpde6WCySSqXQ6/UolcoDUXMsFouk02k2NzdZWloin8/L\n9zg7O8vt27cpFotks1nS6TShUIhoNIrRaCQSiXzOaWxUFItFkskkoVCIQCAgz8DTe8RkMjE5OcnE\nxAROpxO1Wk0kEuH69ev7yqxXKpXodDqsVqvMooTDYba2toAnF97jx4/x+XzE43FyuRyFQkG++3K5\nTCqVIpFIsLu7u2/rfBkIRr3NZqNQKHDr1i3cbjerq6sNfy7gyfrNZjNdXV1861vfoqenh3w+TygU\nYmtri9nZWba2tkilUjV9nurIXZQ8AGnQk8kker2ejo4O9Ho9Ozs75HI5FAoFLS0tjIyMMD09zczM\nDJFIpGEzKgMDA4yMjOByubDb7ZjNZq5du8aDBw+IRqP72plx4Iy8YN9+E8U8NBoNLS0tMoIWnm09\nnBdhSEQttVAoNPz7FhmH5eVltFotq6urmEwmKpUKDx48YHFxEUDWgnO5HH6/H7vdjtvt3kPIa1SI\nWrbf70ehUJBIJNje3n6mCItOp6Ovrw+Xy4VarWZjY4OHDx+yvLy8r8ZTq9XKPvL29nbK5TKbm5vc\nvn2beDyOQqFgdXWVra0tksmkNPIC1Vm6RoiQFQoFGo2Grq4u2tra8Pv9rK6uMj8/37DiN+LuUKvV\nOBwOent76e/vZ3R0lPPnz9PZ2UmhUCASibC9vU1HRwcLCwusra0RDAaJxWI1WWc1QbM6mBEqgt3d\n3YyMjGCxWPD7/dy7d494PA5AV1cXfX199Pf3s7a2hs/nk0RIYE9gUu/PaHJyksuXL2O323E4HLS2\ntlIsFslkMjx+/JhYLLZvazxQRl5sXK1WC0A6nX4mKeegpux1Oh3t7e2y3lcqlepWIxZGPpPJyC6G\nRkepVCKTyTA7O4vb7cZkMqFUKsnlcqTTaTKZjCTcAYTDYR49eoTVapU14kbfM5VKhUwmg8/nY3Nz\nE5VK9TkjKSAueIPBQDqd5vbt23z00UcEAoF9I7OJckJfXx8DAwN0dHQQDAbxer1cu3aNra0tSqWS\nzDw8y4FttIycuHOGhoaw2+189tlnbGxsNPR+ETVtk8nEoUOH+Ff/6l9x6tQpJiYmMBgMkmtTLpdJ\np9McPnyY+/fvc/PmTe7cuUM8Hq+ZLLCI4vP5vNyXRqOR9vZ2zpw5w+XLl7FarSwsLBCLxdjd3aVY\nLNLb2ytLOk6nk56eHjY3N2XtXtyh9
Tb0CoWC48eP873vfY9kMonZbMblchGJRIjH46yvr+/r+z4w\nRl7UxJxOJ8ePH6dQKLC5ucnOzs4eUppSqWRqagqTycTDhw9JJBLy5QknYXx8nOPHj7OyssL6+jqx\nWEzWM+uZ7jGbzYyPj9Pd3Y1arWZ9fR2fz1c3kY1KpSLfh1qtlpGVMJRGo5Genh6mpqawWCwyjbmz\nsyOZ1DabjZ2dHYLBIPF4fF8jM/G5CaOXyWQA9hgVAYVCQalU2pOh0Gg0dZehfRFUKhV5gYn/fhom\nk4muri6mp6fp6ekhmUyysrLC48eP95XIptPpcDqdnDp1ivHxcZlRuX//PqFQiEwm0/DvV0DcF729\nvRw6dIgTJ06Qz+f55JNPSKfTDWnghXF3Op10dHRgNBqZnJzkyJEjdHZ2yj2eyWRIpVJEo1Gi0SiJ\nRIKuri5+53d+h97eXmZmZnj8+DGBQGBfntNoNMqvQqFANBrd49zlcjnC4TC5XA69Xo/L5cJisWA0\nGtna2iISiaBSqVCr1TLdrdfrsVqtHD58mAsXLqDVaqlUKiSTSebn5/nkk0/IZDI17dbQ6XQYjUbS\n6TSzs7PcuHGDeDyO0WhkfX0dr9dLMpnc14ztgTDyInWn1+vp6uri8uXLxONxbt68SSKR2OMFqVQq\n+vv7cTgcrKyskEwm97w4hULB+Pg4v//7v8+nn37K7OwsOzs77OzssLu7Kw1bPSBUyTo7O1GpVKyv\nr7O6ulpXuUyx8bRaLUajEZ1Oh1arlaSYqakp3nnnHVpbW8nlcmi1WhYWFtDpdAwODtLe3s6dO3co\nl8skk8maXPClUkka8C96LvFnNptFrVaj1+spl8sHQrnsyy6ElpYWBgcHmZiYoKWlRaaYPR6PdH72\nA1qtltbWVimhG41GcbvdPH78uGaf/6uCTqejtbWVo0ePcvHiRSYmJmQE36gStqK0YLfbcblcUoNA\nqVQSDodluUQY0e3tbYLBoCSsXrp0CavVSqlUYmtrS96JrxotLS2yjCQIl9V7Q6xvd3eXSCTC2NgY\nXV1ddHV14fV68Xg8bG5uSjGiZDKJRqPB4XBw7Ngx/viP/xir1UqlUmFnZ4f33nuP27dv15SBr1Ao\n0Ol02O12tre3uXr1Kj/96U/x+Xx7glIRQO0XDoSRF5vMZDLhcrkYHh5mdnaWxcVFIpHInmisWCxy\n//599Hr951IgwoCbzWaGh4fp6+vj+9//Ptlslp/+9Kf8t//23+ryfAJik5pMJorFIh6PR7ZG1ROC\n9T86OsqlS5cYGxujp6dHkqs6OjrQarWUSiX+4i/+QtaXNBoN2WyWaDTK9va2FEbZT7zshSSMvBBp\nEZPEGjFKexkMDQ0xPT2NzWZjd3eXGzdu4PF4ntmG9KogLjXRzbCzs8PGxgYbGxsN1+/+ZRCth//6\nX/9rpqenGRoaQqlUsrm5uSeL0mgQhLX19XVCoRBKpZIHDx7w8ccf7zEmwgkWU/OcTieXL1/m1KlT\n5HI5YrHYvpQjRHZkcnKSEydOsLi4KAOxZ93Vd+/eRaPR0NPTg0ajwe/3Mz8/z/3795mfnycYDJLP\n57FarfKrra1Nlhiz2Sx+v59AIPDc8u5+QJQidDodFouFzz77jEwmg9/vf+Zz7meJ6kAYefj8UIh4\nPE4gEPhc21CpVGJnZ0fWYp9V7xMeuujBFMSsejPIRVrKarVSLpfZ2dlha2ur7pFldXpYoVAwMDDA\nxYsX9/wbwYxtbW2VXIJQKITH45GRT6MaTnHADgKzXqPRoFQqpWP79DsVbVKjo6McPXoUk8nEysoK\n165dq0kdWUi+xuNx0uk0gUCAeDy+p/Wp0SF4BT09Pbz++uuMjY1hNBpZWVlhZ2en4fdyqVQiHo/v\nCXKWl5f3/JtqaLVaCoUCiUQChUJBJBLB5/PtC9PeaDTS1tbG1NQUx48fx+v1PnNviL29tbXFnTt3\nuHLlCl1dXfj9fhYXF1lYWJBrVKlUWCwWHA4HKpWKfD6P1+tFr9eTTCb59NNPmZ+fr1mLoAiKRPCT\nSqWeS2asRXnwQBn53t5euru7WVtbY319/bmkNBH5Pm+DihYfIdcqavH1ZMpWE5ZaW1tJp9NEo1HC\n4XDdo4ZyuUw8HmdpaYlsNktvby9nz57dI0ikVqtlR4C4aLa2tnj48CFLS0uSAd6IUKufHINsNtuw\nbGkBvV6PVquVnQ/PYtTbbDbGx8eZnJxEq9WyubnJ9evXP1e62g+IdstQKIRWq615DfRVQLTLdXd3\nMzw8THt7O6lUiqWlJR4/fkw2m633Ep+Lp6PEF4FKpcLhcOBwOFCr1fj9fubm5kgkEq98fS0tLZw4\ncYJjx44xODgoW1mfx4XKZrOsra3x3//7f0ej0ZBMJiUhuFgsotVq0ev1tLW10dnZSSgUYnt7m2vX\nrmE0GonH4/z85z9neXm5JvtQqD06HA5OnDhBOBzm+vXrz/3dT7ft/tbW5OE3adVAIMDa2toXinl8\n2YtaXl7mJz/5Ca+//jp9fX3s7u4SDofrfrmL+kwul2N3d5dgMEgikah5BPR0d4LwNlOpFBsbG7z7\n7ruy59lms9Hf38/Y2BiDg4MoFArS6TS7u7ssLi5KzoNgttcT1bPVheyqyWTC6XRSKpUaWkyjmpei\n1+ufS/xqaWmRdfhEIoHP5+Px48fE4/GalErS6TTb29s8fPiQzs5OrFYrra2tOJ1O2bZYrWvRqBDO\nai6XY21tjeXlZW7fvi31FxoZL/teNRoNQ0NDUm44kUgQDof35TkzmQzb29uSiLm9vf2FHUSVSoVC\noUAgEACQDHwR+Iiyw/b2NpVKhVQqhVKpZHt7W2Y0NjY2auKYiVJEuVwmkUiwuLgoJZ2/KOA0Go2y\ny2E/cGCMPDzx6iKRCMFgkFAo9JV/js/n48MPP2R4eJjW1lbcbjd+v/8VrvTlUe3Rid7ncDhcF0nP\n6gxHtaEXF/SvfvUrrly5gkKhoKenh7Nnz/LWW29JPfRQKITb7WZ2dpa5uTk5a7sel7pg4Apmrvg7\nrVaL2WympaWF3t5eYrEYDx48IJ/P77l0GsUQCflOo9H4TKKOuGA6Ojo4ffo0DoeDUCgkWdK1Mkzi\njG5ubmK32xkZGWFsbAylUkkoFJKGXuBZ71cwomut8ihQPZRJZNQePXrEwsICGxsbzy2fHdQ5CFqt\nlpGREfr6+shkMsTj8X0TxUkmk3i9XsmSD4fDUi5Yr9ej0+n26CSIfV4oFEin05+7R0QpUTDuRdlQ\nlB1isVjNMqGiDl8qlUgkEjx+/Fiu8VmwWCxYrVZsNptUv9sPHCgjb7Va5cARjUazp870MnC5XJw+\nfRq73U4wGOTTTz/9yj/rVUBEliJtHIvFZL2pHmupPijVUrbVl5iIxsLhMLOzs1KjXBziO3fusLy8\nzNra2r5t3hd5FpvNRk9PD++88w6XLl3a8xwic2KxWPB4PLS0tHDnzh2WlpYkQ79UKtWdq6FUKu
nq\n6uLo0aN4vV42Nzc/Fx2oVCpJKL18+TIWi4VAIMD8/Dw+n69ma1Wr1XR2dvLd736X48ePMzg4yJkz\nZ0ilUjICq85MPX1hi1LPu+++y1//9V/XbN3VEPumpaUFg8GASqViYGCA9vZ2LBbLM9PYgtWu0Wga\nbnLeF0HUj4eHh3E4HMzPz7Ozs7Nv+71cLpPL5STJ2OVyYTKZsFqtnDp1iiNHjkjypnBmM5kMCwsL\n3Lp1i48++kg6WdXnuFQqkc/nZYZOr9cTi8VqenaFqM+L/s5/+S//JefPn6dSqTAzM8PGxsa+rOtA\nGXkh+VosFqWn/VXgcDgYHh7G7/eztbXFzMxMTS/CpyF6zkXaJhQK4fV662bk9Xo9BoMBrVZLOp3e\nIx38dKSSy+WkapbH4yEUCrG0tMSdO3cIBAIkEgn5vcJ5UKlUmEwmVCqVZPi+ykhfqVRiNpuZmJhg\nZGSE/v5+3n77bUkWFOvPZDJy9KxKpZL9q319fbLsEIlEWFtbY2dn55Ws7WUhVBBF+9D6+voezXf4\nDau9q6uLoaEhRkdHJfFqc3Pza2W9XgYKhQKXy8Xk5CRnz55lcnKStrY2+f+q9061I/kstnGhUMDt\ndrOwsFDzdy9SxEJvQWikt7e309bWRiQSkc/kcDhoa2vD5XLJTNH8/Dybm5s1Jxs+PbvjRc7T2NgY\n586do6enh2g0yq9//Wvcbve+rbFcLpPP56WRHxwcRKPRMDw8zJkzZ5iamtpj5OFJZtPhcLC7uyvv\nDAERcIh9o1arpSMpSNm1MvLVHKUvgtPpZHBwkIsXLzI9Pc3s7GxT1lZADKt49OgRq6urX+vn2O12\nPvzwQ27cuMHy8rI8uPVAtS62kCz1eDwkk8mar0UMEWlvb6e9vR2/38/a2pqMxMrl8p7LWaVSyRrx\n2tqaXLvQwa72ukXqXExR0+v1JBIJOVwCXk16XEiQ/rt/9++4dOkSJpMJs9n8uRR3JpORvcO5XI7D\nhw9z5MgRqRQWDAZ5/Pgx/+f//J+6GXmDwcD4+DgdHR1kMhk5SORp7QeTySR5ESaTiVAoRCwWI5FI\n1KwFU6FQMDExwcWLFxkZGaGlpUWy7cXlVywWKRaLey7xasMvjP+FCxdwOBz8p//0n/jggw9qmkkR\nnS0+n49gMIjZbJZtZr29vVJSWK1WMz09zaVLl3jzzTexWq0kk0n+63/9r/zzP/8z29vbNTPyT+u/\nCyLxl+G73/0u//7f/3symQxXrlzhf//v/72vEwqF8VWr1bS2tmIymejr6+PSpUs4HA6MRuMe7ky5\nXEatVmO1Wj93hqu7fqr3jvg3tSTRirW+SLAyMTHBD3/4Q/r7+4nFYvzTP/0T9+7da7bQCa/ZbrfL\n8Y5fFaIeu729zdLSEtFotK5kGpVKxejoKIcOHUKj0RAKhVheXt4XduuXQalUysEPhw8fZnV1lXK5\nzNbWFul0Wo7a7OjoQKVSYbfb6enpoa+vT4o+iJYpkVJWKpVYLBacTicDAwMMDQ0xNjaGXq8nGAyy\nsrLC0tISy8vLhMPhr/0MpVKJZDLJwsICvb29HDt2TKb9CoUCKysr3Lx5U7Z3ib0ldLDb29tli6XB\nYGBmZga9Xl+TFpzqIR3d3d309fXJSXlir4rLQFxoJpOJ1tZWurq6MJlMRKNRNjc3WV9fr7nCXLFY\nJJfLkUgksFqt6HQ67t27x8LCAoFAgEwmIzMn1c8q9klvby9vv/02VquVsbExTp48yfb2NisrKzVV\nmSsUCnJoTiqVYmdnRyqridG4LpeL8+fPc/LkSXp6ejCbzeRyOc6ePUs4HObXv/51TeRhYW+Z7emS\nyLMwNDTEG2+8waVLl6hUKvziF7/gl7/8JTs7O/tKUhOGeWlpCXgS1ba3t2O326UMNfym40mcORF0\nCKex+ueJr2e14dUaz/udogR05MgRzp07R2dnJ/fu3ePOnTssLi5KPf79wIEx8vAkXS/kU7+qUa72\n9MLh8OfECeoBtVrN+Pg4ExMT0sivrq7WPJIX0UlLSwsDAwMcPXpU9pqKCW92u52pqSkmJyfR6XQ4\nHA66u7vldK5Hjx7tqeHDb+rFvb29nD59mvPnz3PkyBEMBgOBQIB79+5hsVgIBoOfk7f8KigWi0Qi\nEa5cuSIzE2JQTTqd5sqVK/yX//Jf2N3dJZ/PY7PZGBoaknLJohVQp9PR0tKCzWbDYDDsUcPbjz0j\nRmyKi25gYIDJyUnsdjsrKytSwVFcdCJ6sVgstLW14XQ60Wq1hMNh1tfXWV9fr2m7V6VSIRqNSjZ6\nJpPBbDbz/vvv84tf/EIKnzwPSqVSGsyJiQm0Wi1Hjx6VRNta1rqFkRH94x6Ph52dHTmZsbu7m4mJ\nCSYmJujp6ZGfmdFo5OjRo2xvbzMzM7NHVnu/UF2brhZXeR5EUPGnf/qnmEwmFhYW+OlPf8qNGzf2\nfa3ibM/Pz7O2tsbExAS9vb1ks1l0Oh1qtVpme8QYXJFF6erqoqWlBXjCsn9al74RdBieLj8JKJVK\nHA4Hr732GocPHyaXy/HrX/+aDz744JkTJF8lDoyRF8IIRqORSuWJmtrLQhCtgsEgn332WcO0TKnV\navr6+ujq6iIWi8nWuVqL4IiUn9lsRqfTkUgkMJlMHD58mJ6eHrRaLV1dXfT29tLV1YXRaJR920Ld\nbmRkRHYGRKNRyVQXYy3F4BjR1yoibDHqVafTPZOg9bIQkW88Hufu3btyIEc+nycQCMiIpVwuE4lE\nWFxcJBAIcOvWLVpaWujo6MBsNqNWq7l//75k5FdHSq/yYAp1LJvNJvXFDQYDxWIRv98vMxxGo3HP\njPVKpSKVvgRJrFwuSwe21hmqpaUlwuEwMzMzshPA6/W+UIRYqVRwu9385V/+JSdOnOD48eN0dHRw\n8eJF3G43xWKxZmUTpVJJNBrl+vXrRKNRlpaWUCgUFItF8vk8Ozs7kvEdDAY5c+aMHIcqnDXxTPsN\nYeCeJ5BUDdETbzQaicViXL9+nY8++gi3213zuzCbzbK6usq9e/e4efMmo6OjUoI5m81Kbo3ojDl8\n+DB//ud/zv3791lYWNgzCyGfz7/Q8+8XnqVPUK1Hr9FopCro0tISHo+HxcXFmpQUDoyRB9je3qal\npYXJyUnK5bKcpvWiLRJqtVqmYEUqThA16mXsRU3b6XTidDqlil89FMLENKehoSG6urr2RK4WiwWX\ny8WxY8dwOp3YbDZMJpN8f/DEuxafjdlslmn7fD5PIpEgFAqxsbGBXq9namqKlpYWtFqt1Nnu7u6W\nwypE7fmrolgsyghwfn5eetjPIuII+UvBCxDa3yaTSQq6GAwGDh8+jNVqRaFQsLy8jM/n+9opfI1G\ng06no62tjba2NhwOh1RrbG1tRaPRkM/nZc29urYtnqW3t5eRkRG6u7vl1DkxztJgMMjBQLWI0oLB\noHyPX+X7w+EwV65cwe/3E4/H+Tf/5t9w7Ngx1tbWp
HHdbwiOjMFgkAOwotEoDocDvV6PWq0mlUpJ\njkSpVKKzsxO9Xo9KpZJDX2rJsBcR/BdBcJGOHDnCwMAAGxsb3Lp1Sw5uqSXEHg6Hw3g8Hu7duydH\nWi8sLBCNRlEoFIyOjkqSbl9fH9/97nfp7u5mYGBAts2JdkshxSv04sPhMJFIRJJra+Vwwd4splAJ\nnZiYwOFwsL6+zs2bNwkGgzW54w+Mka9UKvj9fg4dOsQf/uEfcu3aNTll7kWMvGCNd3d3Mz4+zrFj\nx7h69So6nU7qN9cDQtvYYrFgs9moVCqYzea6rMXhcDA2Nsb58+cZGBjA5/NJQRt4YgwvXbqExWLZ\nM6hGbOhyuczJkycZGxvjjTfekAZJSEv+7d/+LaFQSJYkCoUCDoeDyclJGTVrNBrcbrf8/1/3YD6d\nxnuRnyfS/eKiEcN2fvjDH3Lq1ClMJhP/+T//Z/7n//yf8hL5qjAYDLhcLk6dOsWhQ4cwGo3kcjmi\n0ai8IMRgJlESyWazxONxWdseGxtjeHiY7u5uLBYLWq2WU6dOoVAoWFlZIRqNyii6ETJXL4KtrS1u\n3rzJW2+9xdmzZ/nDP/xD0uk0n3zyyb4/gxhyNTY2ht1uB57U6O12u5wBHo1GpfCPwWBgZWUFh8OB\nxWLB7XadGiBkAAAgAElEQVSzuLhYtzHRz0NLSwuHDx/m93//97FarXz66aesrq7uW0/8i6BcLhOL\nxVhaWqKjowO1Ws3c3JyUYBbtfYI3YzKZGBwc5Nvf/jY7Oztsbm6ysbGB1+tlfX2dSCSCy+Xi5MmT\n3Lx5k9u3b0suSD2eUQy+ev3117l8+TI6nU4OQ6tVKe3AGHmA3d1dAoEAFouFs2fP8id/8if84he/\n4NGjR8/9HlG3FGno1tZWWlpaJFtTRPX16oUeGRnh8uXL9Pb2SiMnVJNqBVHTMxgMmM1m0uk0q6ur\n3LhxY0+ftWChm81mTCYTk5OTktGt1WpRq9WYTCb0ej02mw14cjn6fD6KxaIkTBaLRW7fvo1er+fU\nqVPk83mpc767uytZ+a/q83jZn1OtAyD+W0jeVksPGwyGZ+pRvwhEC9H09DT/4l/8C0ZGRnC5XFQq\nFeLxOH6/n1QqJYVJisUiCoVCRonxeByz2Ux3dzcul4vBwUGZBSgUCqhUKqm/Hg6H2djYqLs88stA\n7Id0Oi15IlardU+L1H5BrVYzOjrK8ePHZbkhFArJUo34XMSIUMGTUCieTHdbXV1lZWWl7oOlnoaQ\nox4YGCAYDPLo0SO2trbqWstWKBTE43EWFxdpa2ujUnmiWtfa2iplhcXnLva0SqUikUgwMzPD3Nwc\n4XBYjgvv7e1lenqaixcvyhLdjRs3ai5FLO4cjUaDzWZjcHCQwcFB3G43kUikpoHlgTHylUqFUCiE\n3+8nl8sxMTFBR0cHPp+P2dnZ55IdRB1ejEq12WyYzWY0Gg0GgwGDwSAv0XoY+UOHDvG7v/u79PX1\nAcjyQ63XIhyhSqXC2toa0WiUDz74QJKNFAoFXq+XmZkZSWZ75513eOedd+Sc6qd1mAVpaW1tDZ/P\nRyQSkUbr5s2bcrKdUEhbXl7G4/FIg9ooKBQKJJNJdnZ2CIfD9Pf3y3bAr6rXoNPp6O/v5/Lly/z5\nn/85JpNJMrn9fj9qtZpQKCTZ6rFYjK2tLRkFJJNJBgYG6Orqwmq14nQ6UalUpFIpWY+Px+O0trbS\n2trK5ubmc0lBjQhRKhIELMG+Fxf3fk7SU6vVDA8Pc+zYMZlVERmUZDJJJBIhEomQSqVkOUWn01Gp\nVIjFYqyuruLz+eo+WEpAnMuBgQHOnDmDxWJhcXGR+fl5KRdbr3Wp1WrS6TRut1sOmIEnwc8bb7xB\nR0eHLJEB8h0vLi7y93//99y6dYtyuYxWq6Wzs5PJyUlGR0eZmJiQGh5CWrseMBqNdHd3093djdVq\nZXt7W87xqNVZPDBGHn4jUpFMJkmlUuTzeUlYero2Wk1+EVFhsViUdTOhuiRITPW6/BwOB0NDQ1gs\nFjkEox4bUjhRCwsLhMNhMpmMlKN9+t8BsnY2MDDAG2+8gdlslmS09fV1ZmZmWF5elgptQsFPkGTW\n19fx+Xyydiki0FrWz14GGo2Gjo4OWlpaZDQnooeXhYhMz507x/T0tNyTQjUtmUzy+PFjyZjf2tpi\nc3OT1dVV2QYotABOnDiBy+UilUoxNzfH3bt3uXXrFolEgng8LssO+/lOn5518CrQ1dXFkSNH6Ozs\nlAIoItu233tDOLFOp1P2Pa+traFQKMhms7L2K4xnJpPB6/USi8UkmayRUvVqtRqDwSBbMn0+H263\nW/IJ6gGFQoFWq5XRezQaZWNjA51Ox9jYmCSTCplbeOL4FYtFfv7zn/P3f//3rKysyD0hMoUejwef\nz0cikeDevXt89NFHBAKBmmcrhP05fvw4P/zhD+nq6sLr9XL16lUePXpU071xoIx8qVQiGo1y9+5d\n2Vah1+sxGo17vP3qFi5heKrbMaLRKF6vV9ZT6ylbqtfrsdvtaDQaEolEXRS+4DfDRUStSESQghkq\ntKU1Gs2eOclarVY6COFwWEqpfvLJJywsLEjlvmonTNS819fXWVxcpFQqsba2JlP5r+rzqO65/Tqo\nlvgVz7m2tvY55bmX+XlarVZGKblcjmw2SyaTYWNjg4cPH/LgwQP6+vowm80EAgECgYDsLrHb7XR2\ndjI0NERbWxs7OzvMzc1x69Yt+fV0e9F+oTp78yp129va2picnASQ4koej6dmZ1Wj0UimvFA/TKfT\nUiWxXC5L5UaNRiMHZ21tbbG1tdVQmSi73c7k5CTj4+PYbDbcbjdzc3Nks9m63Xui5U+n00nhnmAw\niF6vZ3x8HLPZLLszBKlwZ2eH+fl53n//fT766KM9ay+VSrLMeO/ePTo6OpiZmWF2drYmbYxPw2g0\nMjY2xtmzZzl37hyzs7Ncu3aNR48e1fx+P1BGvlAo4PF4+Ju/+Rt+53d+hz/6oz+SpDWhjy4YmkJI\noVooIZ/Py1SPz+eTL7xeaTVhOERkFo/HmZubkxPeaoXqAyAurlKpRCwWkx5pX18fTqcTq9UqB0qc\nO3eO48ePo9FomJ2d5erVq1y5ckUK+eRyOWncnza0QmDnww8/JJlMyjTWq7rERSoQ+Nr1faVSSbFY\nxOPxEIlEWFlZ4c6dO1/5khT63cJw2+120uk0Pp+P9957j+XlZWKxGA8fPgQgGo2STCbJ5/Oyh/zs\n2bO0t7cTCoW4cuWKJBjFYrGal56qh4m8qoyByWTC5XLh8/m4f/8+169fZ25u7pVyNZ4HkTEUnSFK\npRKbzSazKJXKE6VHjUZDZ2cnbW1tpNNpPB4P8/PzNWeqfxmGhob4sz/7M44ePUo+n+fu3bvcu3ev\n7tP0yuWydJSFgJUgnBoMBlkOE5yYGzdu8Jd/+Zesra09cw/kcjk8Ho88O1tbWzKLVUsoFAra29v5\nwQ9+wMmT
J0kmk7z77rv84z/+4ysR+3pZHCgjL6LN1dVVSYYRgztmZ2dJpVJSg1ywKas3Qz6fJxgM\nSkJMMBis23Q0YYSqlb9EK5XFYqn5egDa29sZHx9nfHwcu90uLyuNRkNbWxs2mw29Xg88caZEq53B\nYGBra4vbt28zPz8vxz5+EUSK7tGjR2SzWRKJxCtlwJpMJk6ePElvby9msxmfz8fq6iqbm5svJTKk\nUCg4evQox44dw2g0sr6+zq1bt74WYUmQ6z777DMCgQCffvopuVyOYDDIw4cPpcSycFJEC93AwACn\nTp3i2LFjDA8PEwwGuXv3Lg8fPmR5ebmuXSK9vb0cPnyYUqlEJBJhfn7+pefXKxQKnE4nJ0+e5MKF\nCxw5coSHDx/y8OFD5ubmaiJcJe4MUTrRarWMj4/z5ptvEovFiMfjUsOiXC5LB3B1dZWtra2Xfub9\nhlBaO3ToEO3t7VK/IhqN1pVwJ1LsooQnsq2pVAqPx8P6+jq5XE7e5x9++CHvvvuu7Fp4FoTzHAqF\nZNalHhkVock/MTGBzWbD4/HUdf7FgTLy8CTVK9jH6+vr9PT0cO7cOTkSUqlU7pGgrNZwLhQKBINB\nOZ2ongdSkIjE5CoxV1ikYGsJEYV1d3dz4cIFLly4wOjoKFarVbLmhTMCSHZztdLW7u4u8/PzLyXj\nmUwm903Vz2w28+abb/Ktb32LwcFBPv74Yz744AM+/vhj0un0l15wwgnTaDS89tprvPXWWywvLzM7\nO8vy8vLXioJE29CNGze4cePGM3+vVquV5RGtVktvby8nT57k9ddfl1oEq6ur3Lx5s24TC6vR39/P\n9773PVQqFevr63KOd3WZ7HmoTt0ODQ3xgx/8gGPHjmG32+WEyGAwWFOGtIgmTSYTx44do7OzUzK/\nd3Z25ACghYUFFhYWWFxcrGsr2rMgAgchKS3ahYXkdD0hUvDV+7ZcLktuiWiTE/f53/3d3/Hxxx9/\nIdehOpuUTqdrbuDFPu7p6eHw4cN0dXWRz+d5/PhxzYZEPQsHzsgLbG9v8+mnn3LhwgUuXrwo02dd\nXV1sbm6yubkpNa+FspBSqdwz5KPeLS5iUwgegSCO1GtDiDGNYiKeaFd5evCDIImJgypYrIFAoO7v\ntBqi9m2xWDh37pws65RKJba3t59reMTUscHBQaanpzl9+jR6vZ7PPvuMe/fu7Xs6XJSWyuUyFouF\n4eFhzp07x9tvv01XVxelUokbN25w9epVVlZW9lX3+ssgSmFut5uf/exnvPHGG5w4cYJDhw6xvb2N\n1+vlypUrzM/P7xEKqVYFs1qttLW18frrr3Px4kWmpqYwmUwkEgnW19dZW1urWf1YON8CYgqgwWCQ\ne729vZ1AIMD6+jrXr1/H6/XWRbzqy2A0GnnjjTf4zne+g9lsZmZmhp///Od4vd56L+2ZqBbI2djY\nwO1243K5JDn36QFTz0K1fn+tHS6r1YrL5eL73/8+b7/9Nq2trVy9epWf/OQneDyemq6lGgfWyAcC\nAT777DMuXLjA0NAQsVgMu93OoUOHpGzp1tYW169fJxQKyTYwh8NBKpWquSLVs1Bdk4cnka0Yz1pL\niMMQj8dZX19nZWVFilCoVCo55UmlUqFSqQiHw+zu7pJIJEin0+TzeTlkoVEimUKhIIlQhw4dkiQ2\nt9tNPB6XGvmCuW21Wuns7JSzqB0OB+Pj45w5cwaVSiVFgXw+376mxIXRFF8KhYKenh4mJyc5ceIE\nxWIRt9vNrVu3uHv3LoFAoO7vXAhVJRIJurq66Ovr4/z585RKJXw+n5RSLZVKUjpZGESlUikVD99+\n+22OHz8uh0fNzs6yurpKOByuKTlWdIisrKzIqX4Gg0Fq5yuVSimEs7W1RSgUajgNAqPRSGdnJ5cu\nXZL75uHDh7IttlFRKpWIx+Ps7OzIkbdiap1er5eZ2afft+AOiYmBtSbbiUzot771Lb797W8zMTHB\n7OwsN2/e5M6dO3XdHwfWyIdCIWZnZwmHw7KOJsh2RqORgYEBRkZG5KUoeo+FRObu7m5d1y+i+Hw+\nTzgcllKZtRzCUY1KpcLi4iJ+v5+7d+/y2muv8YMf/ACNRiOnz4mva9eu8ctf/pLFxUWCwSClUqlh\n5gAIxONx3nvvPRQKBdPT06jVaiwWC9/+9rfJZDIsLCyg1WoxmUzE43Gmpqb4t//239LR0YHFYkGp\nVKLX6zGbzfzoRz/ixz/+MWtrazU5rNWseLVaTUdHB+3t7RgMBra3t1lbW+PRo0f4fL6GeeeitfX9\n998nEAjgcDiYmJjg1KlT9PX1SdU90c5VXdMWmgOiZSqdTnPz5k1+9KMf4Xa7az4TPJ1O88tf/pJI\nJMKbb75Jb2+vnH4ZCATwer3cvn2bmZmZmo6TfRl0dHQwPT3N1NSUbMP0er1sbW3VnXD3RRBZrHA4\njNvtxmq1YrFYcDgcOJ1O1Gq15O9UQ6vVYrVa6e3tJZ1O13S4l7jLp6am+A//4T/gdDrZ3Nzkr/7q\nr/jkk0++1v54FdoWB9bI5/N5IpEIH3zwAWtra2xsbEhGrrg89Hq9HHGZz+exWCx0dnZKLe96QkTw\n9+7d46//+q9RqVTs7u5Kx6UeSKVSZLNZWc8qFotSU0Cr1aLT6dBoNMzNzfHgwQO2trbq0p7yIhDD\nTGZmZvibv/kbjh49yujoqEzdi44GnU5HJpOhv7+f8+fPY7PZUCqVbG9vs7q6ysLCAlevXsXtdtec\npCmc10KhgNfr5YMPPmB5eZn79+/j8/kaisUtUq27u7s8evSIn/zkJ/j9fk6fPi2V94QMshjUIYa9\npFIpEokEjx49YnNzk1AoxGeffcbc3FzN5UjFc3i9XgqFAvF4nLa2NoxGo5QTDgQCeDwevF5v3US0\nvgw2m42uri4A3G63lAGvZ9vci0JM/xP72+FwcPHiRfR6PXfu3JEEO0GQVKlUDA0N0dnZKTtRaukY\ninbY7u5u7HY7t2/flh0vXzVr8iKliRfFgTXy4vL78Y9/LP9Op9PJyV2iv1sIrNjtdpxOJwaDAY1G\nU9feeHhi5LPZLFevXuXq1at1W8fTEOxo0W99UCEyOw8ePODRo0e89dZbvPPOO5w7d44jR45w4cIF\nSSQUB0rU8WKxGG63m/fee4//8T/+B6lUqm7ptkKhQDgc5rPPPuODDz7g0aNHDVtThSfvfXNzkx/9\n6EeytNHV1UV3d7ecOijKU8KI7uzssLGxwS9/+UtmZmakA1MvR1x0fkSjUR4/flyXNXxVCO6M1Wql\ntbWVcDiMz+fj7/7u716o66VRIBxclUpFa2sr3/72t7FarWxtbRGPx4nFYpRKJZnGP3z4MCMjI/zs\nZz+reXbFYDAwMjJCa2srfr+f//W//hf/8A//8LWc8FepOXFgjbxA9UsQPaziAxYpQVGj8Xg8/PM/\n/7NMMR+UDd/EV4fYD6I17aOPPmJiYoILFy7Q29tLW1sbpVKJ5eVlr
l69KidX+f1+1tbW6mps4El2\nRUzoSqfTdcvyvAyEgzU3N8ff/u3fSiKn0WiUsqXwm5anTCZDKpVic3OT3d3dhlKLO2iwWCx0dXVx\n9uxZTp8+zfz8PA8fPiSZTNadg/Qy2N7e5le/+hWhUIi5uTnZTeJ2uyWfRsxoHxwcpFKpsLq6+sID\ny14VjEajfN8mk4l3332XpaUlKZj0VfEqlR0PvJGvhhBVeBay2azU/BaRfvMi+e1ApVJhY2ODzc1N\nlEoli4uLxGIxRkdHcblcFItFHjx4wP/9v/+XQCBAPB5vmDRsNpvF5/NJZ/WgQIgdfZGw00HS0j8o\nMBqN9Pf3MzQ0RHd3Nx9//DGrq6sv1DbaSIjFYsRiMdk3X6lUiEQiklMgSKliVHcoFCKRSNS8jdFo\nNMqxzvF4nKtXr76SuQWv8hm+UUb+yyDEFpoG/rcTIqr3er385Cc/wWAwSFleMdgln883TJZHrOEg\nRWAvg0Z4x980aLVaWlpaUCgUsgtGdBIdJCMvIIZCAXuktkU9PhgMkk6nZfao1jwVo9GIUqnk0aNH\nBAIB7t+/X/PuqC/Db5WRFxuhid9eCPa0kEFudDQNYRMvA3HHeb1ewuGwHJzTKJmpl0Uul3uu9kal\nUiGfz9dE6vh5yOVyMjgQ0wkbzZn6rTLyTTTRxDcb+zER76BAoVBIRdAHDx6QTCbxeDwNVX7aD9Rz\nyE48HieXy8l26EYMIl8dT//l8c3ccU000URdIJjlv83lOJPJRHt7O/Cb7gWh4f7b+k72E0LyW6js\n1UEu+EtteNPIN9FEE8/EQYuKm0YeKS4kIHgov83vZL9R53PSNPJNNFFPHDRDKSBUvJ5lHF52slyt\n5UUP2rt+1XiWkMpv+zv5IhzUM/r/40tteLMm30QTTbwwGv1CbNR11RLNd/Dy+CY7h8p6L6CJJr7J\nOKhpUrHu6i+BVym5eZDRfA9NHAR8oyJ5MehCzDtvtFaGJpo4SPi6qXpRHxZn8UW/t9GyBRqNBo1G\nI3vNq0dENyqjuokXR6Pss/3CN8bIi4OnUqn2iCV80z/AJprYLzwren9ZIy/mR1RP1vsifBEXoNYQ\nRD6dTofRaCSTyUj2tDDyzWj+m4F677X9xDfGyIvpUdVM0m/yB9dEE7XEVzlLYqxs9TyJL3MWhGMg\nRo7WE0qlEp1Oh9VqxWazEQqF5HRG0ZLWzBY20ej4xhh5+OrpwYMAEeGI1KFWq0WlUqFUKkkmk3IO\nfSM8s4iABGrlcL3KFiqFQoHL5WJ8fByv18va2tqBdhxrmQIXmuLt7e0MDw+zu7tLKpVCo9Fgs9lo\naWkhk8mQzWbJ5/PSGTAajeh0OpRKJV6vl9nZ2bqU3VQqFYODg3R0dGAymcjlciQSCYxGI4VCQU4l\nbBr4Jg4CvlFGXowdLBQKMkV4UC/lp6FUKlGr1VgsFqxWK1arVU71Wltba7jJeiKdCcgLcb/XplAo\nJCfj69RJhZE6dOgQf/EXf8E//MM/sL29XVf5zK+Daufnq5awXsZJEM7o0NAQb731FsvLy0QiEWw2\nG+Pj40xPTxMIBAgGg0QiEUwmEx0dHXR1dWG1WikWi/zsZz9jeXn5a0/z+irQaDRcvHiRs2fPYrFY\nuH//PlevXsVkMsnJec06fBMHBd8oI69QKD5XA/ymoFKpoNVquXz5MsePH8fpdBKJRPD5fITD4YaY\nFa1SqWhpaaG1tZXW1lZGRkZwuVwsLCywvLzMysrKvn42whGy2WxotVri8TjZbFYOeHn691ZnR6pn\ny7e0tPDWW2/x+uuvc+LECfR6PSMjI/zTP/0Tbre7HqpWXwlKpVIO4BFr/qrv/qum641GI1NTU9Lx\n6unpwel0olKpsFqtKJVKDAYDZrMZeKLSlkwmUalUdHV14ff7icfjX2nNLwsxh93lcuFyudBqtWxt\nbbG+vi5H4Iq6fDOKrw3EuTxIAVujEUcPvJGvfqEiYvkmEu7EdKlLly7x9ttv43Q6uX//PrFYDIVC\nUXfHRqFQSGM4MDCAw+HgtddeY2pqiuvXr/PrX/+a7e1t4vH4vkRB1eUBYTRyuZx8L8IBUKvV8t/q\n9Xr0ev2eGnChUKCtrY3vfOc7vP766zidTnp7exkeHmZtbY14PE4gEKhb1uRFLhCFQoHNZsNkMqFU\nKkmn00QikVotEfhNp4tOp8PpdKJWq9nZ2UGpVMpUfalUktmoTCZDOBwmGo0Si8WIRCJYrVY5gWy/\nIRw+m81Gb28vNpuNQqGAz+djfX2dYDAoa/GNlDF7WahUKiwWi/wcBITjUm/nRXwO4g7X6/VotVo5\nXhaQf1aXLNPptJxQVy+oVCrpvJpMJrnWcrlMNBollUrVZV0H2siLtCog08GlUkn+fb037NfB045K\nZ2cnJ06c4PDhw3R2dqJQKPB4PHz00UcNEcVrNBpaW1v57ne/S29vL0tLSxiNRlpbWzl9+jTJZJK7\nd+/KSO1VQ3z2uVyOYDBINBqVh16j0cgMg9PpRKfTUSwWGRgYoLOzk0wmQzAYZHNzE4/HQyqVYnt7\nm93dXTm20+Fw8M4776BUKnn//fdJJpM1J4ZVX4DV5Y+nWfAqlYqLFy9KPsHi4iLhcLhme0SlUmE2\nm7FYLJKVHo1GmZ2dpVQqYbVa2djYIBKJyEliwiETvJJ0Ok0ikajZtECR9bDZbDidTvL5PIFAgEAg\nQDQaJZvNHnh5WKVSidls5rXXXsNgMDA/Py/vyEAgQDweJ5/P1/X5lEqldEIKhQK9vb20t7fLtSkU\nCjlaVjhjWq2Wx48fs7y8XFcHzGAw0NraKoMbUTpOpVK899573L59uy5rO7BGvvrCU6vVaDQarFYr\ndrudTCZDMpkkEol8LsJttFTKi8Jut9Pf34/D4SAej3Pnzh2uXbvG0tISyWSy7s9jMplwOBzk83n8\nfj9utxu73U5bWxtWqxWn04ndbicUCu2LkYffGPrqCEVE7gMDA0xOTmI0GsnlcoRCIRwOBzabjXQ6\nLfdMJpMhn8/z6aefolaryWaztLa2yhrz2NgYMzMzFIvFumVPvuh3arVaTCYT4+PjHDlyhI2NjZpf\n3Dqdjq6uLlwuF2azWTpQCwsLxONx1Go1gUBAjkAVRh5+46SIs13r0ojNZqOvr49isUg4HGZzc1PO\nYz/IEGWowcFBTp8+TVtbGz09PWg0GhQKBQ8fPmRpaQmv11vzZ60m6orPXZCLBwYGGB4eZn19XTpc\nYn0ioNNqtej1eoxGozyX9SipGQwG2tvbGR0d5cSJE1itVvL5PLFYjPv37zM7O1sXPseBNfIi/Soi\nNbFp+/r6CIfD7OzskE6nP1eP/boEpFrhacfEYDDgcDhQKBTMzc3xH//jf2RhYYFoNNoQz2Gz2XA4\nHFy9epVIJMLa2hqBQIBkMsn58+cpl8sYjUa0Wu2+r6X6fYjoa3x8nDfffJNQKMTq6qpkzPv9fubn\n59nY2JAtUpVKhXfffReP
x8P29janT59mZGQEjUaD3W6nvb2dVCpFKpWqaReHcGLg+c6quGg6Ozux\nWCx4PB7W19drsj6xLqPRyPDwMP39/VitVtbX19na2sLj8bC7uytT3s/KRoi2tFo74+Ld2u12RkdH\n5Wfv8/lqXup4Vai+65RKJX19fZw7d45jx47R39/P+fPncTgcqFQq/t//+3/84he/YGtrqy5GXkxy\ngycz2lUqFXa7XTrn+XyenZ0dfD6fjOiTyaS8dzKZjCyxiHu/1veiXq+nra0Ni8WCwWDAYrFQKBQo\nlUrYbDYsFkvNSMjVOHBGXqVSYTAY6OjooLe3F7VaTUtLC5OTkzK1k8vlSCaTBINBSZQRUZdGo5GR\nf7Xhj8Vi+P1+rl27xvLycp2f8jfQarWYzWYOHTrE+fPnsdvtskaYSCQaoiQhLnaz2YzX68Xv95NO\np8lmsxQKBYxGI3a7HYvFUhMjX41qMp7JZOLGjRs8ePCAtbU1dDodarWa3d3dz6UqM5kMkUiE9fV1\n+vr6MJvNuN1u5ufnZQRaLbxUazzvd1osFgYHB+ns7MRms8kRmLWC0Wikp6eHixcvcujQIQwGA8Fg\nUM41r057fxHqcUF3dnZy6NAhpqen6ejowGw2Ew6HWV5eboiS2ItAr9dLY1ct3qPRaIAnBlR8Blqt\nFoPBgMlk4vjx48TjcZmF8/v9NVmvIF5arVYymYxcs2hTnJmZwePxsLOzg9/vl5GwQqEglUpJB2Z0\ndJTu7m6Zjctms/h8PjY3N+Wo3f2CVqvF6XRy4sQJvvOd76DRaFhaWkKv15NOpwkGgxQKBUZHRwHI\nZrOkUil2dnZqEqQdOCMvoqnR0VHOnDmDVquls7OTc+fO0dXVJes5wJ5UoJirLEhJra2t0nNUKBRs\nb2+ztLTE1tZWQxl5o9HIwMAAR44c4fjx42SzWWlAGyGFWC3xKbzrZDIpU256vR6LxQJAS0sLer2+\n5usTbXWiLvzgwQNisdjn6ttPH7ZcLkckEiEYDKLX67l//z6PHz8mEonIaKLRYDabGRgYoL29XRLv\namWcFAoFFouFnp4epqen6erqIpFIEAwG2djYIJ1ON2zrmThno6OjjI6O0tHRgVKpZHFxkd3dXba3\nt+u9xBeCKFuK+0HUqNVqNZlMRkbDovOhVCrJgOn06dMEAgGKxSKhUGjf69tKpRKj0Sjv42AwKPeI\ncGDIc7sAACAASURBVFLu3r27J+ta/aeI+HU6Hf39/Zw9e5adnR0SiQSFQoF8Pk8wGJSloP2AIBz3\n9fUxNTXFuXPnuHv3Lo8fP5ap+lAoRKVSobW1FZvNhlKpJJfLsbCwgNfrJZFI7OtdfqCMvHihnZ2d\nTExMcP78eRmltbW1odfr97RCqVQq9Ho9arUaj8fD0tISa2trTE1N8Z3vfEde8gqFQrZ8Wa3Whkrl\nd3R08L3vfY/p6WkAksmkZKjv1xq/Sqo0EAjIg2k0GtHr9fT09NDf34/RaKRcLjM8PFw3B+rWrVt4\nPB4WFxdlBkSkhZ/XWqdSqdBoNIRCITKZjEzhiqi/UcSHqmE0GnG5XBgMBnlZ1sqwinPU0dEhCXUi\nfSois0aF0WhkcHCQtrY2yuUymUymbmnfr4NMJsP29rb87EWmL5PJsLGxQTQaxePxyKza5OQkx48f\n58SJE/T09PAHf/AHpFIplpaWSKVS+1rb1ul0DA0N0dLSgkqlIh6Pf+59f5kcsrjntVotSqWStbU1\nVldXiUQibG1tyZ+5X1Cr1VitVkZHR2WWdXV1ldnZWTY3N4nH4+RyOSmP7HA4OHLkCK+99hojIyMs\nLi5y5coVAoHA/q1x337yK4IwwqKfua2tjZMnT3LixAlGRkZke5RI8whmr1arJZfL4fP5WFhYYHFx\nEbfbzfr6OrlcjvHxcdrb22WUqdPpsNls6HS6hjDyome3r6+P48eP43K5yGQybG5usrm5ua/M7pd9\n9kqlsmczt7e3Mzk5KT8ji8VCLBaT6dpaQrTGidp0KBTac3GJmqWooYkIvbe3l8HBQYaHh1GpVEQi\nEfx+P+FwWKYM61Eq+TIHzGw209fXh1arJRaL1YxtLC7alpYWHA4HxWJRpklFRNnIEE6p0WiUZRqh\nQVFN5Gx0FIvFZxJby+UyiUSCRCKB3++X2TfBqu/r66Ovrw+Xy0VPTw8Gg0GW2/YLKpWK1tZWOjs7\nUavV+P3+Z56pL9q/DoeDo0eP0t7eTj6fZ21tjaWlJaLRKOl0el/vSYVCgdPpZHR0lImJCVpbW9nd\n3cXn87GyssL29jaZTEbeMRqNhmQySXd3N2q1mr6+PhQKBbdu3dq3NcIBMfJqtRqtVitTau+88w5H\njx7F4XCQTqeJxWKsr6+Tz+dRqVQMDQ1hs9kIh8P84z/+I3/1V38l23NKpRJarZbR0VGpaAV7SU2N\nMCBDpVLR2dnJ8PAwPT096PV6IpEIi4uLLC8v113XW0C8J9ECpdfr6e7u5nd/93eZmpqir68PnU7H\nw4cP+fTTT1lbW6vp+sRnLvbG0+9NRAIOh4PR0VHC4TAKhYLvfe97jI+PY7PZ2NjYYG5ujng8TiKR\nqGubjsg+PW8NFouF4eFhNBrNvkcx1dBoNFgsFhwOB2azmWw2SyaTkdmSamJVI0Kn09HR0YFKpWJj\nY4MHDx5w7949fD4fsVis7k7/q0R1Fmt1dZVyucz3v/99ec+KL/GZ7WfGUKfTyfZWt9v90t8/MDDA\n7/3e76HX69nZ2WFjY4ONjY19P6NiT4+MjHDhwgWOHz9OsVhkbm6OjY0N1tbW9mQlhCCVsFWzs7OM\njY1hs9lQq/fXDDe0kRcvUqfTYTKZOHPmDJcuXWJoaAiLxUKxWORXv/oVt2/fJhz+/9o7k982r6uN\nPxzEmZREkaKokZpM0bIsy4rlMXYdxwmMBk6AFF0UAbLoouhf0F0XBbotiqJZBGjRokUXcRu4SRq7\ntuxPFjzIkWRL1mCNpDiIIimKFMVJHMVvEZxryvEUW6To5v4AI0YSWy9fvu89957znOcEkU6nWasI\niR4mJiawtra27e+dmZnBP//5TxgMBrS0tLCfVUqTpSjwyOVyeL1eFtwfPnyI2dnZgtaZXgZSRVN6\nmIxQpFIpEokEc+UrVPvc866Nrg94ZLlrsVhgsViYUE2v1yMWiyGXy2HPnj1IJpOYm5vDgwcPMDU1\nhWAw+FT3vGJ+lmdBCt/p6WncuXMHGxsbRbuuTCYDv9+P+fl5JBIJtLa2srbDZDKJjY0Nlk6lQJP/\nneS3Um1tbRW1zED++fF4HCsrK1heXmaZm9fpJP99oO8slUqx8hRtiGmEbiGfczIbqqqqQnd3Nyoq\nKl74zwqFQshkMkilUohEIkxMTGB0dJR1BxT6/SRBb2trK7q6uiAQCLC4uIj+/n7Y7fYnZkDyfVyk\nUikrmZDXS6Eo6SAvEomYMlun0+HEiRM4c+YMjEYjACAUCuHq1av4/PPPmWL0RSC15
vnz55HNZtkC\nQ8p9uVy+azXE/AWnsrISEokEdrsd09PTuH37NlOCl1qQJ/IXb1q0KdtCpiLFgH42ZYDkcjkkEgnb\nCEqlUrz55pt45513cOjQISa0yr/+e/fuwWazYXx8HJOTk7ten33e5pM0K5WVlXA6nbh37x4ikUhR\nro02eD6fj5VvFAoFOjo6sG/fPhiNRsRiMUxMTLDFLpvNIpPJbNtg07uYTCYRi8XY8KVinMokEgky\nmQzW19fZwCdKWdOJNn9IFHXqkLsfiThjsdhrY38rk8mg0Wggl8sBgAlnNzc3C77+pVIp2O12tLS0\noKGhgZUL6P4/C3qHs9ksvF4vhoaGcOPGjaK9nyRcbG5uRnNzM1snBgcHn5llzeVykMvlMBqN0Ol0\niEQiP9yTvFAohEajQUdHB8xmMzo6OtDb2wuDwQCZTAaHw4HR0VEsLy+/1CKQy+UQjUaxsbHBPLSF\nQiE6Ojpw4MABjI2NFW2BJGihqaqqQm1tLWprayGRSODz+eByuViNp1S90wUCATPFqa6uhkqlYqWP\nYvaG5osuzWYzTp06hfb2dtTU1DBfdLVazQxbKisrWXDJ/ye1I5H/e/7fX+xgTyct4vHgQWNRlUol\nMx/S6/WsdarQ0KaKNA10gm9sbITRaGQbpGPHjuEnP/kJgO1+5I9vYKjL5erVq9um0e30fc/vAiET\nn9raWpjNZtTW1rL1hWxVVSoVmpqaYDKZ0NTUhOrqavarrKwMiUQCV65cwfDwMBYXF1kWoFTT/RaL\nBW+99RaMRiPC4TBmZ2fhdDqLcsghcaZEIkFzczM++ugjmM1m/OlPf8L09PRz79nW1hZsNhs+++wz\nLC8vF/Ueq9VqmEwm6HQ6xONx/Pe//8XQ0NBzDbKEQiGMRiNOnjyJyspK2Gy2gr+jJRvk6VRSV1eH\nzs5OHDp0CHV1dZDL5Ugmkyw14nK5XuphzOVyWF1dhdvtZmnlXC6H2tpamEwmPHz4sGhBnuwmdTod\n6/XX6XSoq6tDNpvF7OwsXC4XwuFwSY/RlUgkaGlpgdlsZiWTbDYLv9+PtbW1gp9qSA1vMBhgMBhQ\nXV2N7u5uvPXWW2hvb0d1dTXi8Ti7308qzZAqmaaf6fV6GI1GGAwGNidApVIhGo0iHo8X7bugDM/T\n0thqtRpdXV3o7OyETCZDIpEo2JyAJ0EBmzZ6JpMJjY2NTDlNm4DNzU3U1dVBqVSyzpcn4fF40NbW\nhlQqBbVajWQyyd7XTCazo8+SVCqFQqGAWq1GdXU15HI5s+JVqVTMU6OqqgotLS3o6+tDe3v7tiCv\n0+lQVlbGhFaUictms6xDY21tjbWo7TZlZWVQKpXYv38/Tp48iWw2i8nJSQwMDGBubq4oB4l8HZRM\nJsO+ffug0+mwtLSEXC6Hubm5p94r2iBQe2Ox76lYLIZKpUI4HIbVasXExATsdvtTn0sS99bX12Pv\n3r1oaWkpWqanZIM88Kg9Qa/Xs4UhnU4jFAphfHwcX3zxxUunf7e2trC0tITZ2VmWqs1kMlCpVMwF\nqhjQ4l1fX8/MFFQqFauHzc/PY2pq6jtCjlJELpfj6NGjOHr0KDQaDUQiEZLJJObn57GwsFDwhYO8\nx48cOYJTp06hs7MTTU1N0Ol0rHZHu2ZqtXwcqh2T2ZDRaITFYsHm5ibm5uYgFovR3NwMu93OxJ7F\nCKRUA8wf1EFQR8NHH32E06dPM0HVgwcPiqaBoAVbJpNBq9WyACiVSgE8Sud7vV7YbDY0NDQwwxkq\nk+Tb2er1eqjVakilUhw+fBjhcBg3b97EpUuXWAp/J6C0L60zlNWhFD2l5IVCIXp6enDmzBn8+Mc/\nZlkSGkpC64VEIsHhw4dhNBrR09MDjUaDyspKjIyM4Pbt27hx40bRM4RPQqlUoqmpCQcPHsTBgwcx\nPDyMa9eu4eLFi9/RMBWaVCqFYDAIlUqF2tpafPzxx9BoNPjd73731OC9tbW1be0v9rqYyWQQi8Uw\nOTkJqVTKWoifBs2/ePvtt3H48GHIZDIEg0Gsrq4WXERdskGeBHeVlZXQarXQaDTY2trC8vIyLl++\njIGBgVd6WehEplarWR2NVLT5/faFgk42lZWVqKmpwRtvvIGenh7U19cz//eFhQVMTk4yi95SDvBG\noxFmsxk9PT1obGxkJ69oNIqpqSksLS0VdLdNAQJ45ChFPu753yd520ciEWxsbMDv98NqtWJtbQ2J\nRII5gsViMchkMjbSVy6XM1fFgwcP4v79+xgdHYXVamWtaoWCAlF5eTlT9+cH+iNHjuD06dM4ePAg\nkskkLl26tM2ZrxhQEA+FQlhcXMTly5fhdrvR0dGBxcVFeL1e9t8DgQBzQMxvWRUIBNDpdDhw4AAa\nGhqg0+lgMBiYeHNpaWnbWOCdQCQSobW1FRaLBUqlkgV0EqDlcjm2FnV3d+Pw4cOora2FQqFgf0cm\nk0EkEkE0GkUoFMLY2BhWVlZYara9vR0ikQiZTAZWqxVOp3NXBKjAtxvxgwcPoqurCxaLBRUVFbhx\n4wYGBgYwMjJSlKDzOAsLC/j000/x4Ycf4tSpU2hqamKWyOS18CR2az2ktTuXy2FpaYllzZ51PQcP\nHkRfXx8OHTqEmpoapFIpOBwOzM3NFXwIU8kGeaqJ0hARmUwGv9+Pubk5XLhwAbOzs6/8M5RKJZRK\nJXM283g8CIVCBR08ku+yJxKJoNPpsHfvXvT19bGFxul0YmlpCbdv32YOa6WQ4nsSJDpqaWnBsWPH\n2MLhcrkQj8fh9/sxNTUFh8NRsM9ANXQAzON6YWEBDQ0NUKvVrDUol8shFArB7/fD6/XC5XJhfn4e\nN27cwOLi4ndapSigk/q4paUFhw8fxjvvvIPy8nKkUikEAgEWeAv12UhASIselRIoRX748GG89957\nMBgMmJiYwL/+9S/MzMwUVZxJ1xMMBhGJROByuTAzM4Oenh4MDAzg4cOHz1VrCwQCNDc346c//SmO\nHDmCzs5OCIVCNrOB/rmT3S9CoRDNzc1ob2+HVCpl2pxUKrXNVVImk6Gjo4OVQ4hcLodYLMbsnN1u\nNy5dugSv14u2tjbU19dDpVLBbDYjFArBaDQWdEjT85DL5Th+/DjefvtttLe3Y3BwEF9//TWGhoaw\nvLy8K9e0sLCAxcVFVl5TKBSorKxERUUFYrFYyWUw8ztAXC4XgsHgEzPK+YLSAwcO4MyZM6iqqoJC\noUA8HofVasX09HTBR9CWbJAHHo0vlUqlCAaDuHbtGvr7++Hz+V55Ud3a2sLc3Bzu3LnD2rvGxsbg\n9XrZ3PCdJN/UJ78WXFVVxQRKOp0OwLcP/cWLF+Hz+YpaV/2+kEVwU1MTTp48ibNnz0Kj0cBqtaK/\nvx9ra2tYX19nG5VC1J/yd9XUvmez2RAKheByuWCxWHDgwAHU19dDJBLhypUrmJmZwfr6OmKxGCKR\nCFZXV5+YKQkEAhgdHcXW1hbKysoQCoVQ
U1PDesLJHbHQp3gqMZCISyaTsToxnY6npqYwPz+P8fFx\n3Lp1C36/v2DX9DTy2yjD4TBmZmbg9Xqxurr6Qu1YpJP54osvcO/ePTQ2NuL06dPo7OyERCJBLBbb\nNnRqpyDPdOBRGScSibANS/6J/vEyXjabxdzcHD755BM4nU5sbm5iZWUFyWQSXq8XdXV16OrqQjqd\nht/vx+rq6q7NFQceBR6Hw4Hbt2/j/v37GB8fRyAQ2LVroueiv78f6XQaZ86cQSwWY2OgyaK2VNbB\nXC7HxlPT8/EkIWxZWRmbDbCysoKhoSHU1taira0NcrkcNpsNMzMzP9wgL5fLUV5ejvLycpSVlSEe\nj2NhYQEzMzOvnN7Ir2+Gw2Gsra2xFgiv1/vUndnLQoGITpRSqRQSiQRlZWVoaGhAa2sramtrUVFR\nwQbrTE9Pl4TAjjYl1CpEi6BQKERFRQWam5vR29uLY8eOoaOjA+FwGKurq8yfYHNzE4FAYMfboOie\n5p/it7a2mO92KBSCz+fD8vIy3G436uvrIRaL8d///heLi4svVP6Ix+PsWaNMQCAQgEgkYrVWGlJT\naNLpNKLRKDKZDMRiMXQ6HZRKJTvh06ZmfHycCZd2g/z2OL/f/703G9FoFDMzM3C5XLBarez0TN0D\nCoWCLfo7gVAoZGUBKutks1k2s4AMffLb5fI3lZOTk7h69SquXr3K2geBbzNclL6n55IMuZ5mp1xo\n9Ho9WltboVQq4fF4cO3aNSwuLhZtGM2zyOVyePjwIRubTSUS6tJRKpWIx+Pwer0shb8bY1vpWsnR\nsampCVqtljlrJhIJttZrNBrIZDJIJBIIBAJ4vV7IZDImziSPgh/kgBrywDYajVCpVExVDDxqMyPD\nhpeBehybmprQ3NzM6mv5rng7jVAoZDViGo9YVVUFi8WCpqYm6PV6KBQKlvothV0rPaxSqRQqlQpy\nuZx1IZSVlWH//v3o6+vDm2++yfzSQ6EQNjc3WZ1aJpNBJpOxWvdOPNAU3POFWsAjhTe9QJFIBAsL\nC3A6new0HI1GX+rFIqEPpe7Ly8tRXV1d8Kl6tHjQokLZk7q6OrS1tbEWtVQqhRs3bsButxfErfFl\n5hm8ChQYlUoly+bV1NSgqakJNpuNLZSvApVBenp6cOTIEZSXl7PaeSAQYGIqWi9oaiHwKE3/17/+\nFRcvXmRDSAiRSLStnTQSibCNod/vZ4eIYgb6ffv24e2334ZWq2Vi2GAwWLSf/zwikQhsNhsuXryI\niooKNnysvr4ehw4dgt/vx3/+8x8Eg0Gmp9ktoyI6qZ8/fx7vvvsu81fwer1ssh75uUQiETx48AB+\nvx9VVVWoqKhgSvvW1la2ZhaKkgzywLe11Xg8zgRSAoFg2wv3Krt5uVwOnU4HnU7HzGYcDge72Ttd\nAyKTHRL6VVdXw2QyoaurCyaTCVKpFOl0GlarFV9//TVGRkZ27Ge/LPlDWihFSacRapOitj+aeJZO\np1mKnkot5GNOAXInNi8k0tra2mKZBgAsJUy/aLO0E1kZ2jjQ9QcCAZaeLfRCne+UBXwrLPR6vaxf\nW6PRwOv1IhAIYH19vSDXsxu+AKReJ9tkAKylbSdPwuQtIBaL2c+y2WxYWFhgJlsbGxtYXV2F3++H\nWq3GxMQEBgYGcOfOHXi93u+ka3U6HY4ePYqOjg52vWSd6/V6n/k95ZegduIz0sEom83C5/NhdnYW\nVqu1qLbHLwINBnK73aisrMSPfvQj1NfXw2g0oqmpic2Pj0ajiEQi8Hg8mJ6extDQUNE/B60HNBVV\nrVZDr9dDo9FgeXkZq6urcDqdiMfjiEajrBOnqqqKrRk08rzQDqslGeTJqGZtbQ0+nw9SqRRlZWXI\nZrPsVJlIJL63ul4sFkMmk7Egq9PpWJua1WplafqdDPL5Tnq0mBgMBpjNZpw8eRIikYgNjRgbG8Nf\n/vKXXRPA5EPp0Xy1MU2Zo9RlfpDNf0F9Ph/EYjHbhS8vL2/bde9EGSI/5Zm/IBbiFAs8uh9UC3e5\nXJieni6qgIo+1+bmJpxOJywWCwwGAzvtb2xs7Gq9d6cQCARQKBTQ6/VsA0x183xNy6sGetIQ5D+P\ntCl0OBxMOZ3NZhEIBGC322Gz2WA0GnHt2jX8/ve//45pDG1OGhoa8O6778JisWBra4uVq6qqqqBS\nqZ752R8XF77q80yDg+LxOKampvDw4UOsrq6WpBsfdTpptVq89957aGhogEqlYge6zs5O1iO/urqK\nr776Cg8fPixqJwnw7XOSTCaxtrYGj8fDMjwikQherxejo6OYnp5mdusAmMkZlSQKtVY9TkkGeQCI\nxWJwu924f/8+EokETCYT67/NZrPf25NbJBKhoaEB586dY/OitVotVldXWfqnEGM581PHKpUK5eXl\n2LdvHxuwY7Va8c0338Bms2Fubu47ab/dgqaZxWIx1nFAi0JlZSXq6urYyT0YDEKpVCIUCmF4eBi3\nb99mgb2iogKBQAASiYR5w5N46lWDPd1buq5CvjQVFRU4f/48zpw5A7FYjOXlZczMzOyKSloikbDU\ntclkwv/93/9hcHCwYKf4Z0Gb2Hxjk1eBylrHjh3Dxx9/jLa2NgiFQpY5sdlsrF1pJz7r1tYWPB4P\nXC4XGhoa2IaJvDnoeU2lUhgZGUE8HkdlZSWGh4e/IwIUi8VQKpXo6urC8ePHYTabIZfLmUZlfHz8\nuQNvCvH96fV6nDhxAuFwGMvLyztWNisU1MZcXV0NjUaDTCbDSjRU7lSr1aipqUFvby/ef/99DA0N\nYWZmpmjXSOWzK1euYHp6GgqFgmWB1tbWEAgEsLGxsU2LpNFoWMlGLpczl8Uf5Eke+NaUJBgMYn5+\nHpWVlWhpaYHRaGSB5UVroZRyrq2tRU9PD86ePYvm5mZIpVI4nU6srKzA6/WynWAhxG5isRhqtZq5\n93V1dcFgMMBut+P+/fusVc7r9e7oz31ZSPSiVqtZmjiXy7HTbL6/AI3lzGazWFlZwdjYGObm5hCL\nxRAMBlk2oKysbJvvAfUhJ5NJxOPxl65J5fvkFwpyX2xtbYVcLsfw8DCbF13s0bnAt7PPu7q6YDab\noVAo4HA4dsWGGXgUlEmIRqfjl3mHBAIBswvt6+vD8ePHsby8jPHxcTacyev17qiIM5PJYGpqio3/\nVKlUbLpZZ2cnS7eS6G5tbQ1utxsul4uVrmixpo1Xd3c3Ojs7oVAo4Pf74Xa7MTIywlpJn7dJ2anP\nJhKJUF5ejj179uDUqVP45ptvMD4+XvBZADsBaRqSySRWVlZw9+5d+P1+ptGoq6tjo6zPnj0Lj8dT\n1CBPm8G5uTnMzc290J/JV+Hni3d/sLa2AFgbSjweh1arZUr7hYWFF7oxJBxTKpU4duwYzpw5A7PZ\nDKlUivX1ddy/fx+Dg4NYWlrC+vp6waYukeXom2++iZMnT6KhoQFutxt/+9vfMD4+DpfLtSvB4klQ\
n7VAoFLJBONQLT+UOGuQhEAgQjUYxOTnJej7n5+cRjUZZ8KWFkNzmLBYL9uzZA5FIhHg8Dp/PB7vd\n/sre04VctMh/3+/34/r167h79y6sVmtR6vFPQqvV4ty5czhw4ACCwSAcDgccDseuzDSgxRgA83in\nrM/3vTdisRi1tbX44IMPcODAAayvr+PLL7/EwMAAlpeXEQgEWFfETt33VCqFK1euIBAIQKFQYN++\nfdDr9eju7ma6lHg8DqVSid7eXkilUoyOjrL3oaysDBqNBkajEadOnWKOgwDgdrsxOzuLsbExTE9P\ns03hi/hw7MTno/ftxIkTOHXqFKxWa8nY6j4Pygw5nU7cvXsXFy5cwOLiIjtotLe349e//jU6Oztx\n9OhRXL9+fbcv+bmQh4ff70c6nUZ1dTWam5u3+S4UgpIO8plMhqXRy8vLmR+0WCzGnj17MDc3x2Y+\nk+KbbDUppaPT6VBTU4PDhw+jq6sLWq0WDocDw8PDGBkZwezsbMEEKNTjXFlZidbWVpjNZrS3t0Mu\nl8NqtTLl924ZYzyORqNBdXU1zGYz9Ho9BAIBYrEYEokEDAYDKisroVAomFJepVJBpVJBo9EgGo2y\nVCqlbfMXKgr69Pc1NzeznuKdKpFQXyoJjWpqatiAGafTyYaNvOjPo0WevKZtNht8Ph/rbd2tmqZU\nKmWOcIFAgDn17QZ0crdYLOjt7WXaDL/fD5vNhvn5+e90rNB7Qe9zfX09TCYTysvL0djYiCNHjiCd\nTuPmzZsYHR1lNddCmFTlcjnm5fD5558jEAjgzTffRHl5OXp6elBeXo50Og2pVAqTycTMeZqamnDg\nwAE27MhoNGLv3r1oa2vD5OQk5ufn4XA4YLPZsLS0xNwfiznrQKFQoK+vD8eOHYPBYEBZWVnJGcs8\njUgkgsXFRYyMjOD69euYn5/H6uoqgEfZo3A4DIFAAI1Gw+yTX4bHs4CFuj+JRIKZJolEIhgMBjQ2\nNv6wg3w2m2WLqVqtRlVVFdRqNbq7u5mr2aeffsp6Uallpb29Hc3NzaipqUFrayv27NmDqqoqKJVK\nZDIZ2O129Pf3s1N0IR962nSYTCY0NDRAr9cjGo2y/vxSEUqR/3l3dzfOnTuHxsZGrK+vIxwOI51O\no7u7G42NjVAqlbDb7VhcXEQ0GoVQKIRarYZQKHym+IWCQSAQgMvlQltbG8rKyhAMBhGPx3ekRUsq\nlUKv17Pg0dvbi8bGRqjVagwODrJNI3VQkD3s4z+Tgjt5GvT29mLv3r34+9//jpmZmV09CVEphbJa\nZMaxW1Dacs+ePfjFL34BuVyORCKB+fl59Pf3Y319HfF4fFuWgQKQTqdDa2srTpw4wSahlZeXQyAQ\noL+/H5cvX8bU1BTW1tYK+o7mcjm43W589tlniMVi0Ol06OjoQFNTE1pbW5nQlMbJ7tu3j4mCk8kk\nJBIJdDodxGIx4vE4lpaWMDg4iLGxMayvr+/K9yOVSpnC/9ChQ8wDYDf6878vAoEAwWAQ9+7dw9Wr\nV9Hf379tQ00ZQhoStRNtrMVoEU0mkyzIi8ViGAwG1NfXv9IG5UUo6SAPgJ0A8g1kgG+VihKJBL/8\n5S9x/vx5ZDIZNjtco9Ewz3Jq9ZJIJPD7/RgdHcW1a9cwPj6OYDC4oyr6JwULiUSCiooKNpErkUjg\n3//+N+v3LIUXrqysjKUq33nnHaYZoJY5UgWTt7der0c8Hsfw8DDm5+fh8XgwNzf3zFMyBflwRG18\nBwAACU1JREFUOMxENNlslo3P3YkUrFqtRmtrK44fP4433ngDOp2OtVyZzWZ88MEHbILhnTt3YLfb\n2c+nwE3TomhOdHNzM1KpFL755hsEAoFdVyRT66VKpcLExAT++Mc/YnJycteuh/wDhoaG8Nvf/hZn\nz55Fd3c32traoNVqcfz4cSZozVeOU7ZFoVDAYDAwP/h0Oo1AIIDZ2Vncu3ev4AH+cR48eIBPPvmE\n6YCOHDkCi8WC5ubmbcZL1NJ7+fJlLCwsQC6Xs6EldrsdKysrRVd8EwKBACdPnsQHH3yArq4uSCSS\nbUY8pbDmPAuBQACn04kLFy7A5XI98Zqj0Shu3boFAOjo6GDTJV92HSnGPaEuC9rw0mhoGp5VKG+U\nkg7y9IWtrq5ibGwM3d3dUCqVEIlELFVcXV2NbDaLTCbDnNmonkPWg1Q/dblcuHnzJu7du1fwE3z+\nYkb1aLqGwcFB3Lp1q6RO8Y+78dH9zR8TCoAFaofDweqNi4uLzGP6WeRyOdYKVahJV7lcDiaTCceO\nHYNSqWSbwvb2dnb9s7OzqKiowPz8PCuX0IuXy+UglUrR0tKC2tpaVFdX4+7du5icnMTGxsauB3mL\nxYKjR49CrVbD5XLh6tWru1KLJ0hwt7i4iOXlZTbBjYaM9PX1sf83f24Dbfoo8GxtbcHn82FlZQVL\nS0sYHR1lZbhi4nK54HK5WDfO+vo6VlZWYDabmfgOAMLhMDweD27cuIHx8XEIhUJWxtrNQCqXy6HV\nanH06FG899570Gq1CIfDTPRX6gGeCAQCCAaDT9RJkZW0x+OB2+2G0WhkI31fpqRDbbj0+0JB6ns6\nDOWPOJbL5QUr25Z0kCeGh4fhdrvxq1/9is17pr7E/J11vrJ3c3MTHo8Ht27dwpUrV+B2u+H3+xGJ\nRBCLxQpS23sSmUyGzRyORCKsHSQUCpWEqx3w6Brv3bvHvKJFIhEz6sm/t9lsFjdu3MCf//xn+Hw+\nhEKhXbOXfBzymt+/fz+6urrQ0tLC/OUJoVAIk8mEDz/8EJubm0wIRdefTCbZv7fb7RgbG2Ojfncz\nLU709fXh3LlzUCqVRbHEfFEymQzi8TguX76MkZERGI1GvP/++/j5z38OiUTCUt0EdWsIBN/OmA+H\nw7h06RIbFuTxeHb1s1Fr3VdffYXr169DLpdv2+xSdwiJAenPFKv3+WlUV1fj5MmT6OnpQXV1NYRC\nISYnJ/Gb3/xmR4Z6FQO6j08rLYjFYsjlchgMBlRVVbHSmkwm+86ExhelmCf5RCKBRCLBDlQ1NTXQ\narU/7CAfCoUQjUZx6dIlbG5uorm5mZ2yyBkvFAphY2MD4XAYUqkU0WgU09PTGBsbY6n5Yp+cabOx\nvLyMwcFBKBQKZDKZgk5kexnopfL5fKzs4ff70dPTA5lMhnQ6DZfLxU6yAwMDmJ2dLZngTqTTaWxs\nbGBoaAjZbJY9I/k1Vsr4UAaIDH0o++Pz+bC0tIQHDx4wAdVutco9iWQyCZ/PB4/Hg6mpqZIJ8nT/\n1tbWmDscdWFoNBpotVoYjUZW8qF5FA6HAx6PBz6fD0NDQ5ienobf7981IWH+50kmk7sy6OdVqKys\nRE9PD6qqqrCysoLh4WH09/djYmJix4duFRJ6rulARyZlAoEAFRUVaGhoQHt7O6tpP25xXYrQO7K4\nuIgLFy7g9OnTKC8vh8VigcPhgNPpLMjPfS2CPKUEv/rq
K4yPj+P48eM4dOgQDh48yARsdrsddrsd\nLpeLDXq5desWgsHgrqRYaUcfj8eZbW7+fys1yGUwFosx60uXywWZTIZIJILBwUGW7nvZ3XKhoXt+\n584dDA0NQSqVor6+Hvv27cPPfvYz6HQ6JvKjDVc2m4VcLgfwqMVlbGwM//jHPzA7O1ty35XT6cSd\nO3fY9LBS/B7S6TSbGjk4OAi9Xs/G9NbU1EChUECpVCIQCODatWuYmZnB8vJyweZG/FCgmQoWiwVi\nsRgPHjzAH/7wBwwNDZXcc/w8KGuiVquhVquZ3z8JhNvb22GxWNDQ0MDe6dfhM+ZyOUxOTsLj8UCr\n1eLdd9/F/v37sbi4iFu3bhXkM7wWQZ5IJpPweDy4efMmZmZm8PXXXyOdTiORSCAajbJfdIIg+8BS\noFSu43nQhsrtdmNgYIBN66NTPv0/rwPpdJrpOTY2NvDll1+yIEK2vLlcjpV+yEmRTsql9jkFAgGW\nlpbYRKtXncZYaEgFTdae6+vr7GRGp7N8gVopblheF8RiMWpqamAwGJBOpzE4OIiBgQHYbLaSe45f\nBLpmk8mE9vZ2OBwOBAIBbG5uorOzE729vdBqtQgGg5icnITD4WAWxKVOKpVCKBRCPB5HWVkZTCYT\n6uvrIZFItpUqqDz6qrxWQZ5mZ4dCoRd2GeJ8f7LZLLvPryuUcQiHw0wo+DpDJxun04lYLIZUKlUy\n/gpPg76DWCyGWCxWMo6O/4vQ6GexWMy6R65fv/5aBL2nkct9O3JWJpMxG9hMJgORSIRUKgWr1YpA\nIIDh4WE4HI6S0qg8C2r/C4fDiMfj2zrC8rOkO2Wz/loFeQ7nhwqVInw+H2u9fF2MTTjFw+FwYH5+\nnplMve7Px9TUFKxWK5LJJCvn0IwMuVzODn6RSOS1yQTRu+vz+WCz2Vgbqk6nY4GdNsc7oQXiQZ7D\neY2gFpxSFhhxig+Nw43FYlhfX981y+WdhjJx+SQSCaytrW1r632dPitd69TUFCQSCYRCIaxWK1Kp\nFLMVVygUEIlEO5JN3c2V4vX5VjgcDqeEIatg6iEn7wFO6aLVaqHRaCCRSJBIJBAKhSCXy5l5mt/v\nx/j4+PP+mufGcH6S53A4/zMUw9SkFCH9A/3+h/b5X0fI9VMoFCKTySCZTDKxHbkn7gT8JM/hcP5n\neBVrUw6nmJCbKD2vJDSk7pNMJvMiNXlet+NwOBwOh8PhcDgcDofD4XA4HA6Hw+FwOBwOh8PhcDgc\nDofD4XA4HA6Hw+FwOBwOh8PhcDgcDofD4XA4HA6Hw+FwOBwOh8PhcDgcDofD4XA4HA6Hw+FwOBwO\nh8PhcDgcDofD4XA4HA6Hw+FwOBwOh8PhcDgcDofD4XA4HA6Hw+FwOBwOh8PhcDgcDofD4XA4HA6H\nw+FwOBwOh8PhcDgcDofD4XA4HA6Hw+FwOBwOh8PhcDgcDofD4XA4HA6Hw+Fwdon/B6CEUde3fJZz\nAAAAAElFTkSuQmCC\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
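+ "# Train the GAN one step at a time with TF-GAN's sequential train-step helper,\n",
+ "# recording the loss and periodically evaluating the cross entropy score on\n",
+ "# generated digits (xent_score and visualize_training_generator come from earlier cells).\n",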
+ "global_step = tf.train.get_or_create_global_step()\n",
+ "train_step_fn = tfgan.get_sequential_train_steps()\n",
+ "loss_values, xent_score_values = [], []\n",
+ "\n",
+ "with tf.train.SingularMonitoredSession() as sess:\n",
+ " start_time = time.time()\n",
+ " for i in xrange(2001):\n",
+ " cur_loss, _ = train_step_fn(\n",
+ " sess, gan_train_ops, global_step, train_step_kwargs={})\n",
+ " loss_values.append((i, cur_loss))\n",
+ " if i % 400 == 0:\n",
+ " xent_val, digits_np = sess.run([xent_score, reshaped_eval_imgs])\n",
+ " xent_score_values.append((i, xent_val))\n",
+ " print('Current loss: %f' % cur_loss)\n",
+ " print('Current cross entropy score: %f' % xent_score_values[-1][1])\n",
+ " visualize_training_generator(i, start_time, digits_np)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 25,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "image/png": "(base64-encoded PNG data omitted: 'Cross entropy score per step' plot)",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "image/png": "(base64-encoded PNG data omitted: 'Training loss per step' plot)",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "# Plot the eval metrics over time.\n",
+ "plt.title('Cross entropy score per step')\n",
+ "plt.plot(*zip(*xent_score_values))\n",
+ "plt.figure()\n",
+ "plt.title('Training loss per step')\n",
+ "plt.plot(*zip(*loss_values))\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "# InfoGAN Example\n",
+ "\n",
+ "InfoGAN is a technique to induce semantic meaning in the latent space of a GAN generator in an unsupervised way. In this example, the generator learns how to generate a specific digit **without ever seeing labels**. This is achieved by maximizing the mutual information between some subset of the noise vector and the generated images, while also trying to generate realistic images. See [InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets](https://arxiv.org/abs/1606.03657) by Chen et al. for more details. (A minimal sketch of the structured noise input follows this cell.)"
+ ]
+ },
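As a rough illustration of the paragraph above (this sketch is not part of the original notebook), the snippet below shows how an InfoGAN generator input is typically assembled: plain unstructured noise plus "structured" codes, a categorical code meant to end up encoding digit identity and a couple of continuous codes, which are the "subset of the noise vector" whose mutual information with the generated image is maximized. The specific sizes (62-dimensional noise, 10 categories, 2 continuous codes) follow the MNIST setup in the InfoGAN paper and are assumptions here, not values read from this notebook.

```python
import tensorflow as tf  # TF 1.x, matching the rest of the notebook

batch_size = 32

# Unstructured noise: the generator may use this however it likes.
unstructured_noise = tf.random_normal([batch_size, 62])

# Structured codes: the InfoGAN objective adds a term that maximizes the mutual
# information between these codes and the generated image, so they tend to end
# up controlling interpretable factors (digit class, stroke style, rotation).
categorical_code = tf.one_hot(
    tf.random_uniform([batch_size], maxval=10, dtype=tf.int32), depth=10)
continuous_codes = tf.random_uniform([batch_size, 2], minval=-1.0, maxval=1.0)

# The generator consumes both pieces; keeping the structured codes separate is
# what lets the mutual-information penalty be computed against them.
generator_inputs = (unstructured_noise, [categorical_code, continuous_codes])
```

The notebook's own cells below build the actual input pipeline, generator, and losses; this fragment only makes the structured/unstructured split concrete.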
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "## Data input pipeline\n",
+ "\n",
+ "This is the same as the unconditional case (we don't need labels)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAfkAAACHCAYAAAAV6SDhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsnWdzXFd653+dc250N0I3cs5EIAlmiuJQoxlNsFxrzdpr\nV3m3tuY77FfYF7bfuFx+ZU+N7ZoahxntUCNS4jCTAEiCRCByzo1G6AbQuXtfsO4ZQqLSCECDnPur\nUkmFoD734t7znCf9H5CRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGR\nkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGR\nkZGRkZGRkZF5fVDk8LOzOfxsGRkZGRmZ152vtOHKw1iFjIyMjIyMzOEjG3kZGRkZGZk3FHWuFyAj\nIyMjczioVCqcTicmkwmtVks4HGZra4tEIkE6nc718mQOANnIy8jIyPyRYDKZuHz5Mm1tbRQUFHDr\n1i2uX7/O4uIi29vbuV6ezAEgG3mZnOHxeKirq0Or1RKNRnn+/Dlra2u5XpaMzBuLRqOhrKyMjo4O\nysrK0Gg0pFIpPvnkE3Z2dshmj0Y9tNPppKSkhPz8fOx2O9FolIWFBYaHh9nd3SWZTOZ6ia8NspGX\nyRkVFRX87//9v7Hb7ayurvJ3f/d3spGXkTlAlEolVqsVp9OJy+Xi1KlTOBwOxsbGmJ6ePjIh+0Ag\nwI9//GMuXrxIXV0di4uL/Pa3v+Xv//7vWVpako38N+CPysjb7XZqampoaGigpqaG4eFhhoaGePbs\nmRyqOkQMBgMtLS289dZb1NfXs7m5yfb2NqlUKmdrUqlUqNVq1Go1BoOB/Px8bDYbOp2OyclJZmdn\nyWQy++LpqNVqTCYTfr8fo9FIOp3GbrdjtVpRq9Xk5eURCAQIhUKsr68TjUZfufkmEglisRiLi4us\nrKywvr5OIpH41uuTeTPR6/U4HA4KCgrweDyoVCqsViterxeDwYBSqdy3Z/wPRaVSYTQaKS4uprOz\nk0AggMlkIj8/n6KiInw+H1tbW4TD4Zyt8ZtgMplwOp1cvHiR5uZmgD33NxKJMDAwwNDQEENDQwey\nhj8aI69QKHA4HJw6dYp33nmHs2fPcufOHa5du8ba2hpzc3NEo9FcL/NzKBQKVCoVNpsNm83G+vo6\nOzs7pFKpIxNa+yaYTCYKCgo4d+4cZ8+exe/3s76+zvLyMrFY7FDXolKp0Gg0mEwmzGYzJpMJk8mE\nw+GgqqoKj8eDyWTi1q1bRKNRNjY2iMfj3/pzrVYrfr+f48eP43K5SKVS+Hw+3G43Op2O4uJimpqa\nmJubY2FhgXA4/ErPJR6Ps729zfDwMGNjY8zMzLC5uUksFiMWixGPx0mn06/lcyKz/+j1eux2O3l5\nedjtdlQqFel0GpVKhUKRS8mU36NQKNDr9TidTgKBABaLBaVSicViwel04nA40Ov1uV7m18bhcFBX\nV8f777/Pu++++7l3MRgM8umnn2I2mwmFQuK9TSQSZDKZfVnDH42R12g05OXlcfz4cSorK1EqldTV\n1ZFKpVhZWeHBgwcHdpL6Nmg0GqxWK++88w5vv/02v/zlL3n48CFra2s59Xz/UOrq6jh79ixvvfUW\ntbW1KJVKVldXmZiYYGdn51DXYjQa8fl8dHZ2UlVVRVFRER6PB5fLhclkQqfToVKpcLlc2Gw2Pvnk\nExYWFr7159bX13P+/HkuXryIz+cjk8mg1+vRarUolUoMBgNarZbCwkJxCHjVC5/JZEilUpw+fZqN\njQ0WFhYYHx9neHiY4eFhpqenv/CAIPPHhxSlUqvVKJUvuqeXlpYYHBxkc3Mz5148vHimo9EowWCQ\n8fFx9Ho9ZrM5p2v6Nvj9fr773e9SXFxMNpv93P21WCycPn0ao9GIxWJhbGyMiYkJ5ufn983pfOON\nvEKhQKlUUlBQQHV1NRUVFbjdbhQKBRaLBbvdjl6vR6VSHdp6pM/LZrMkk8kvDbHa7XaOHTtGV1cX\n7e3t3L59G41Gc2RO3t+UwsJC2traKC8vx2KxsLm5yeTkJM+fPycSiRzoZyuVShGy9Pl8BAIBKioq\naG1tpbS0FK/Xi9lsRq/Xk06nUavV6PV6otEo0WiU4eFh1tfXicVi32ozLCsr4+TJk9TX1+N2u8XX\ns9ksqVSKWCxGKBTa8xnSphyPx0mlUuj1enQ6HUajEa/Xi1KpZGNjg4qKCiorK9FoNEQiEaLR6IEZ\neaVSKf4xm83YbDaMRiM6ne4b/X82NjZYXV0V13bUMBgMmM1mXC4XDocDi8WC0WgUHmUkEmFubo6d\nnR3S6TRms5lMJkMoFCISibC7u5vjK3ix7/h8PmpqarDZbGL/WFxcpL+/n42NjX3zHL8N2WyWeDzO\n1tYWS0tLlJSU5HpJ3xitVovD4aC6upqzZ8/S2dlJXl4ewOf2Da1WS35+Pi0tLeh0Om7evMnu7i7B\nYFA28l8XhUKBWq2mpqaGtrY2PB4POp2ObDZLNBplfX2dmZkZQqHQoa1H2gyTySThcPhLjbzP5+O9\n996jsbGRVCpFIpEgkUjk/MT9h+JwOAgEApjNZqLRKLOzswwNDdHf33/gRT9Sj3BTUxPnzp2js7OT\nuro64TkrlUoikYgw5CaTCa/XS1FREc3NzQQCAWZnZ4nH49/q/hcVFVFTU4PJZNrzdcmLWVtbY3Fx\n8ZWfsbq6SiwWw+PxYLVaMRgMFBYW4nQ6ycvLw+l0Ul9fz/LyMmNjY6ysrPzB6/wq1Go1Wq0WjUZD\nIBCgrq5OrOWb0N/fz+3bt1lfXz+SRt5ms1FaWkp7ezuNjY1UVFRQWFgoNu6JiQn+8z//U3hfZWVl\nJJNJent7GR8fz7mRlxydiooKzp07t+dgubCwQF9fH5ubm0diT5Ecn93dXfEevm6YzWaqq6v56U9/\nyrFjx3C73RiNxi/9nfz8fLxer4hgaDSafVvPG2/kLRYLXq+XM2fOcObMGXGKTaVS9PT0cO3aNSYm\nJtja2jrwtbjdbgKBAOfOncNutzMwMEB/f/8rP1upVOJ0OikvL6ehoQGAJ0+eMDc3RyQSyfmpW6VS\n4Xa7KS4uFkWM/f39XyiqYbPZ8Pv9VFZW4vF4SCQSTE1NcfXqVYaHhw9sc5fy7jU1NdTX11NdXU1l\nZSXl5eUUFhZis9lQqVQkEgnC4TA3b96kt7eXTCZDc3MzP/7xjzEYDKIwzmAwfOs1dXd3o1Kp8Hg8\nKBQKtre3yWazpNNpYrEYm5ubBIPBV266UvuQyWQS3uQPf/hDEfLTaDRoNBr8fj8VFRX7+mwbjUb8\nfj8ejwe32y3+7XA4cLvd5OXlYbFYvnHOtLKykqqqKj766COePn1KPB7P+fMNLzbroqIijh8/zqlT\np0RLl9PpJJ1OEwqF0Ol02O12Ll++L
GplHA4HW1tbwqPfjxTPt8Hn84lC17a2NhwOh/ielPI5CgYe\nXhxINBqN2Lc/exB+HZBSbpLh1mq1XxkpfrnwV6lU7muk9o018lLBmsfjoampiRMnTtDS0gJAKpVi\nZ2eHnp4ebty4IUJtB43X66WtrY3vfOc7mM1mIpEI09PTr/xZpVJJfn4+5eXllJSUMDIywqNHj1hY\nWMi5ZwAvagXKy8vp6uri0qVLfPjhh0xNTREOh19p5J1OJ+3t7dTU1GC321leXubZs2dcv36dycnJ\nA1unVqvFZrPR0dHBu+++S2NjIz6fD61Wi0KhIJvNkslk2NjYYHJyko8//pirV6+i1WpJJpO89957\nGI1GDAaD8Pi/7QvY29vL1NQUbrebZDLJ2tqayNdJXkwkEvnKjddgMGC1Wqmrq6O9vX1PGkgqVNpP\nj8BisdDW1kZdXR3FxcUUFxcLj1YK0SeTSVFDkMlkvjA6o1Kp0Gq16PV6qqqqaGxsZHl5mdnZWdbW\n1nLaJSBF/zweD52dnXznO9/h7bffRqvVks1m2d3dZX5+nunpaRHtqaurw2w2i1Ta0tIS8XicwcHB\nnF6HXq+ntLSUK1eucPr0aSorK3O2nq+DQqFAp9NhtVrJz8/fk49XqVTodDq0Wi1qtfrIFpWaTCaR\n2pEOKS/vGbFYjO3t7T3vgFp9cKb4jTXyarUaq9VKS0sL77///p7cjrShj46OMjc3ty8V018Hn89H\na2srkUiE58+fc+PGDSYmJr5w/cXFxZSXl6PX61ldXeXZs2dsbGwcylq/CqPRyIULF7h8+TIVFRUM\nDg7i9XqJx+OvvJ9er1cU2ykUCh4+fMiNGzfEweCgsNvtNDY2cvz4cdrb27FarXtqGtLpNFtbW/T0\n9PCv//qvPHv2jM3NzT3ezn6zvb1NIpEQefeX0y+SYfw6m5fD4aC2tha/34/VakWlUonfHxsb48mT\nJ/ta55CXl8cPf/hDmpqaxKFHpVKxu7srWlAXFhZEPnFra+sLn1cpqtXS0oLRaBTCSOPj4/T09OTM\nyEsG3uv10trayvvvv09DQwMGg0GkQB4+fMjz58+Znp5Gp9NRU1PDj370I5GuAIhGo8zPz7O5uZmT\n64AXB9z6+npOnz7N6dOn8fv9OVvL10W6/0aj8XOV9CaTiUAgwNLSkqh3OIoto42NjSJa+yrGx8e5\nfv26aJVtbGw80P3mjTXyNpuN48ePc+bMGTo6OnC5XCIkurCwwMOHD5mcnGRra+vQBCDcbjfV1dWs\nrq4yNzfH7OzsF24CSqUSt9uN1+tFo9GIYozDOpB8FWq1msLCQkpKSnC73djtdsxm8+dOpFLoSir6\n8Xq9pNNpVlZWRHvYQb6oDoeDpqYmKisr8fl8AKK4bXd3l42NDXHgunPnDuvr66LafT9C868imUyS\nTCb/oOiRQqHAaDRSVFREY2MjnZ2dlJeXC096fn6ewcFBenp6GB8f39e2UJ1OR0FBAfn5+WQyGZaW\nllhYWBC56Gw2y/LyMqFQiGg0Sjgc/tzzrVar0el0NDQ0YDKZSKVSYlOX2hilIsNcYLVa8fl8tLW1\ncfbsWY4dO4bRaGRtbY2HDx/y4MEDenp6RB2PlKaAF8+6FI1ZX19neHj4QGsivgy73U5hYSFnz57l\nzJkzlJWVoVar2draEmmdo4hUfLqzs8Pq6ioej0fUeLhcLjo6OtjZ2SEcDjM5OXkkjLzkUBYVFVFe\nXs7FixeFQyE5EysrKywvL7O4uMjjx4+5efMmNTU1pNNpEV1RKBQHUlD9xhr5/Px8/uzP/owTJ05Q\nWFgovJx4PM7ExATXr19nbm7uUPNRdrudkpIS1tbW2NnZ+dK8o0KhEH3bR6mPVULazKT7p1Qq97Tm\nSEi5+/z8fNxuNwaDgUgkgkqlOpSOBofDQXNzM16vV3wtkUiwvr7O/Pw8z58/51e/+hVPnz5lZWWF\ndDqN0WjE6XRis9lyanBehVKpxOVyceXKFS5cuMCJEyewWq3i+8+ePeP//t//y9jYGEtLS/ua25aM\n1/LyMslkkmvXrnHr1i2ePHki8v7pdFq0YkmpkJeRxEHKy8uFAAu8eJ62t7cP9dD9KgoKCjhx4gQf\nfPABnZ2dGAwGFhYWGBgY4Je//CV37twRbYkqlUpoGtTX1+PxeMR1zM/P09vby+zsbE6uw+/3c+rU\nKX7wgx/Q2tqK0WhkdXWVjY0NCgsLj6yRz2Qy7OzsMDs7y8OHD3E6nSICIaWGkskkoVCIlZWVIyGK\no9frKS8v57333uOv//qvxfAftVot9u3BwUGuXbvGRx99JIp31Wo1ZWVlpNNpkYc/CEP/xhp5vV5P\nIBDA6/UK71IKkz969IjBwUE2NjYOxcCrVCrRhqPX68WJ/6t+JxAIUFJSgkKh2GNQc01ZWRnHjh2j\noqICg8EgWqA+G2lQqVSYzWZaWlpobm7GbDazvb3N7OysSJUc9Ia+tLTE9evX2draYmpqSqxzbW2N\n1dVVoYcdDAZFmFxqCfuqitjDxGKxiBB3fX09ly5dor6+HofDIdrn+vv7uXXrFuPj42xsbOz7vV1e\nXuaf//mfcblcpNNpxsfHmZ6e/lpCRpK3Xltby5kzZ4Q2gV6vZ319nYWFBfFM5MI7k1Ifp0+f5ty5\nc9TU1JDJZBgYGKC7u5vbt2+LVjPpPdTpdDQ3NwuvTaVSEY/H6e3t5fbt2ywtLR1I/YxUb2QwGLDZ\nbJjNZtEO7PF4RIStvLyciooKIpEIDx48YHR0lI2NDf7kT/5kz8HwqJHJZFhcXOT69evU1NRw/Phx\n4Pf7qF6vR6PRHJkDuJRikA6wL0czpf26v7+fTz/9lOnpaZFCSyaTew7F0s/v9x7/Rhp5KZ/jdrsx\nm83C6wwGgzx48IDe3t4vLHg7CFQq1R6xE+mk9mUnNqVSSWFhIYWFhaRSKaLRqKg6lgrGDhupFae6\nupq3336byspKVCoVU1NTzMzMiPYuCUndrrOzk+bmZvR6PdPT0zx58oSBgQHm5+cP3MgvLi5y9epV\nVlZWKC0tZXR0lIWFBUKhENvb268MZ6tUKiwWC2azOecRFGkDkSqkW1tbaWtro62tTYQx0+k0m5ub\n3L59m+7uboLB4IH0xq+urvKLX/ziG69fqpYuKiri9OnTfPDBBxQUFIg85PT0ND09PQwNDbG4uHio\nRl4qTisuLubSpUtcunSJrq4uEokE09PT3L17l48//pjf/e53xGIxUqnUnmtqbGykqakJvV5PPB4n\nFApx7949bt++TSgUOjCNAp1Oh9vtprKykoKCArxer+heaWxsxGg0olAoiMViDAwM8Otf/5rBwUFi\nsRinTp2ipqbmQNa1XwSDQYLBID/84Q9zvZSvRKVSYTKZRP3Ay3uzFM0aHR3l0aNH4uuf9dpfNvL7\nzRtn5JVKJR0dHVy4cAGbzQa8uHFLS0s8ffqU69evH7qyXSqVIhKJCHESQIRzJGnJzyJt7pKoRj
AY\nJBQKkUqlxO8ctqHXaDSYzWaqqqro7OzE4XAwOzvLf/7nf/L48WMhBiJRX1/PW2+9xZkzZwgEAuzu\n7nL9+nV+9rOfMTU1dShtUslkkq2tLfr6+hgbGxN/g0Qi8YVtexqNhvz8fDweT869Ba1Wi9fr5ezZ\ns/yP//E/yMvLw+FwfE4FLJVKEQ6H2d7ePhLKZfD79kW/309DQwNXrlyhpaUFv9+/p95hcnJSdGcc\ntgaEWq2mtbWV8+fP884771BaWkoikWBmZoaHDx/y4YcfMjQ0RCwWE2FVlUqFz+ejqqqKwsJC4cWP\njY3x4MEDUe9zUIcVpVKJyWSipqaGH//4x5SVlZGXlyeKIXU6HXNzc4yNjfH48WMGBgYYGRkhHo+L\nlMLrwqvC17k+eH8Wq9XK6dOnaW5u/tzaEokEOzs7eyKckoCUXq/HaDQeeNryjTLyLpdL5KI6Ojow\nm80kEgm2t7d58uQJN2/epL+/n9XV1UNdl1QLIIUlLRYL1dXVtLa2Mjo6KtptPmvspYcBXoxlbWxs\nJBgMsr6+zubm5qHlLiVvp7CwkPr6etra2vD7/YTDYYaGhrhz586eTc1kMpGXl0dnZycXL16kqqoK\ngJGREXp7e/ecaA+adDpNOp1meXn5c99zOp3Y7fbP9aW63W5KSkpwuVwoFAohzLGxsSF62g8LnU5H\naWkpzc3NnDhxYk+eT0IqxqusrCQcDmO1WtnY2GBzc5NwOMzOzk5OREUcDgfFxcW0trZy/PhxLly4\nQH5+/uf66KVIV1lZGR6PB3ih7Cfd90gkQjKZ3PdDobTRtrS0cObMGWpqalAoFCwsLHDv3j1u3LhB\nX18foVCITCaDVqvF5XJRWlpKVVUVdXV1lJaWolAoCAaDwokYGRk58C6YlzUV1tfXxTMhRf3Gx8cZ\nHBzkyZMnzMzMEIlEsFgsokhQQgr9HzXD+TqhUqlwOBx7Cu0kgsEgAwMDe2yOxWIRhchlZWUHrsX/\nRhn5qqoqfvCDH4h+bL1ez/b2NnNzc3z00UdcvXqVtbW1nBX2rKysMDg4SFdXF2VlZTidTq5du8Yn\nn3xCKBR6Zf5Oo9HgdDo5d+4ceXl53Lhxg56ens95zQeJpN/e3t7OX/7lX1JXV4darWZ4eJju7m6G\nh4f3bGp5eXliAE1bWxs6nY6RkRE++eSTL2wZzAUVFRU0NTV9TqzCYrFQVVWF0+lEoVAQCoWYnJxk\ncnKSYDB4qEItRqORmpoaSkpKvlAkQ6lU4vV6+dM//VPOnDnD1NQUz58/Z3BwkMHBQWZmZl55yDlo\nSkpKePfdd7l48aJou3tVP3BVVRUffPCBiLAAYmhUb2+vyCXv932XQu4NDQ1ifQsLC/T39/PLX/6S\n27dvs7u7Kzx4q9VKU1MTP/nJT2hoaMDv92Mymdja2mJwcJC7d+/y6aefHrg8s5Se6e7uZmRkRCgd\nwu87R9bW1giFQkIqWNIt+CySauFRNfJSjvqzIfCjRDKZFF0l2Wx2z70cHx/n5z//OaOjo+JrBQUF\nnD17lrfffpuurq5vLAP9TXkjjLxWq8Vut1NdXU1nZyfFxcXY7XbUajXLy8s8fPiQsbGxA8tVfl1m\nZmb49NNPRe92c3MzNpuNhoYGxsfHRfGXXq/H5XJRUlIiik1SqZSQOw0Ggwcu/ymFybxeLyUlJZw5\nc4aTJ0+Knk5JcKWoqIhjx44xOjrK/Py80GI+duwYJSUlqNVqZmdn6e3t5datW8zMzBzour8IKXRs\nNpspLCwUKYf6+npUKtWesLxWq8Xn82Gz2djd3aW3t5erV6+ysLBw6M9PIpFgaWmJqakpxsfHcTqd\nWCwWUd8hoVarsdlse9TCampq6OrqYmRkhKdPnzIxMcHi4uKhKcpZLBaKi4vxer1fWujldrtpamra\nM4hnZ2eH9fV1kZ64e/fuvhfKKpVKNBoNNptNTGUbGxvjww8/FEI31dXVQrnM5/NRWVlJR0cHPp8P\ni8UCQDgc5unTp4yNjR1ahC2dTrO7u0ssFiMSibC0tAT8XsFOmkT42d+RIiSxWEzk9cvLy3n27NmB\nr/lNRa/XU1FRQVFRkTDw8XiclZUVRkZGeP78+R4nyOfzcebMGUpLSw+luPeNMPJSJX11dTV1dXVY\nLBahT7+wsMD9+/eZm5vLuQ7y3Nwc29vb+Hw+TCYTra2tnDt3jnPnztHf3y+MiNVqJS8vj7KyMjKZ\nDLFYjPn5efr6+hgZGREv9EEiqUtVVFRw/vx5/vRP/5T6+nqUSqXYLLxeL01NTeIgIsmtVlZW0tTU\nhMfjEYU/d+/epaenR4imHBZSDtVqtYrBNMeOHRPV6X6/n3Q6LYqpXkYSdJGq1tfX1w/di4hGo4yN\njeFwOHA4HEJa9WV1NY1GIzwyo9GI0WikoKAAeOH1PH/+nNu3b/PJJ5+QTCY/VyB5UEgHK2lGxKuQ\nWi99Pp+QRFYqlUIT3+l0otPpmJ2dFam3/UIKVWu1WuHNLi8v09fXh1qtpr6+XujVSxoPdrt9zwEr\nk8mwubnJ06dPmZmZOdRDoKT7IdX7fBVSKF9K4UiDVAKBwJHqJHmdkAZENTY2UlJSIgrtIpEI/f39\n9Pf3Mz8/v0cTw+1209bWtid1IqUVpcPZfh7C3wgjb7FYaGlpoaamRiiaJZNJMeHsqCjFSWv67W9/\ny/T0NG1tbVRVVVFcXEwmk8Hr9ZLJZDAajcIrm5ub4+rVq9y5c4eenp5Dqyew2WxUV1dz5coV3n33\nXYqKioDf97Fub2+LiEQgEKC8vJxz584Jjfrq6mp0Oh3BYHBPdfJhIrXwud1uLl26xOnTp3G5XLhc\nLtGzHwwGWVlZEXPcNRrNntC9UqmkqKiIuro64QEdZronkUiwsLDArVu3GBkZEaNwJeEYg8FAWVkZ\nFRUVYsLYyygUCoqKirh06RJut5vCwkL+/d//nfn5+QM/sExNTfFf//Vf3L9//5WevDT6tLi4mLKy\nMoaHh1lbWxNV6+3t7QQCAS5cuIDRaOTq1av85je/2bfCQsnrjUaj7O7uYjAYOHPmjHgPdTqdqNuw\nWCyk02lR36DX69Hr9cRiMYLBIM+fP89JSuSbIB2SNjY2iEQi2Gw2kYo67DHPbwpnz57le9/7Hj6f\nD6VSKfbH8fFxfvGLX/Dw4UMikcjnIq+fra7f3Nxkbm6Ohw8f8ujRo31N+bzWRl6hUOByuUTotbKy\nUhQxhEIhHj58SG9vLzMzM0fiIZa88tHRUWFcRkdHKSsrw2q1irU7nU7y8/OxWq2EQiFu3bolruOg\nw6xKpVIYeKkyvq6ujnQ6TTAYZH5+nrW1NaLRKDU1NaIK3e12C/1uaXhKIpFAqVRiNBqxWq2YTCYx\nYOUgkToTbDYb5eXlHDt2jCtXrnD8+HGhljU3N8fGxgbr6+sEg0FRJW2320XltxTNqKqqoquri/X1\ndeLx+KHO3k6n04TDYcLhMFNTU5jNZsxms6ikNhqNl
JeXU1dXx/b2tigYNBqNaLVa4EX1r/R8qdVq\nnj9/LozTQbK6ukp3d7foa34Zo9Eoith2d3eZnp4WgkQWi4Xl5WXS6TRFRUUUFhZy8eJFlpaWuHnz\n5r6Nz02n00SjUUZGRhgcHKSmpga/309xcTGJRELMENjc3GRmZoatrS0xcc/j8aDVagkGg8zOzrK0\ntHTgufhvSzqdJpFIiHbcbDZLOBz+WjoHueLldmPpfZOcoIPUe/8qdDodJpNJFMS+fLiWWin7+/sZ\nHx8XX5f2JIfD8bl029raGr29vTx//nzfBxq99ka+traWCxcucPr0aYqLi8WDsLS0xM9+9jMePnzI\n1tbWkZhqJZHNZtna2uLZs2cMDw+Lwi/pgZZqC+x2uwjVr6+vH4oHqdFoqKio4MKFC/zkJz8hPz9f\n5PiGhob4t3/7N5aWllCr1Xz/+9/n+PHjlJaWCm9Hym8rFAqRn/+f//N/Ulpayj/8wz8wOzt74BP/\nJCndkpISLl26xF/+5V8Kpayenh76+vqYmJhgamqKlZUVNBqNEEDR6XTCyKvVasxmsxgXuba2RiwW\nY3Bw8Aun7R00UjRBusdKpZKxsTEePXrE6OgoJ0+epKurC7/fj8vl2vO7Ho+HhoYGzp07Rzwe5+bN\nmwe6VsmjhZMeAAAgAElEQVTr/WwblEKhoKSkhNLSUjo6OoTXIymYqVQqJiYm6Ovr44c//CGnTp3C\n5/MJ47q6urovRj6VSrG1tcVHH31EMpnkr/7qrwgEAuh0OnEQHBoaoq+vj6dPn7Kzs0NpaSkffPAB\nJpMJs9nMxMQEo6OjR0Zu+psSi8WEgt9RRTLw0jMkTb/MZYrBarVSWloq3jO1Wv3KIsGXMRqNVFdX\nU1ZWhslkEr8DL+zVrVu3WFxc3Pe1vrZGXqvVYjKZqK+v59ixY2JOfDqdZm5ujqdPnwqPOZcymV+E\n5EW8SoyloKBAPNCZTEao3R0GWq2Wuro66uvrMZlMTE5OMj4+ztTUFENDQ/T29hKJRNBoNBgMBrGJ\nS4VIL4s7SC2AUgvhzs7OgQqdSMakrKyM+vp6Tpw4wYkTJygqKiISiTA+Ps69e/d49OgRi4uLRCIR\n1Go1HR0dtLe343a70Wq17OzsiJZAs9ksVMS+853v4PV6GRgYYGxsjPn5+T2tXZFI5MAnBL6qSlqq\nSu/p6WFtbY2hoSEuX75MZ2cnTqdTePRS4WFBQYHoHDjIaMSr1iqpgjU3N1NVVcXMzAyDg4NMT0/v\nqa6fm5sjGo1SUlKC1+vF7XbT2NjIT37yE371q1/R19f3rdcn6aRPT09z48YNYrEYDocDtVrNzs4O\nGxsbLC0tMTc3x+LiImazmfLycvLy8rBarWSzWTY2NoR+xeuEZIzi8Tjb29tHco/8IgoKCqivr6eg\noID5+flDb2uFF8+x3+8XBbovG+xXIQlsVVVVUVJSIjpNwuEw/f393LhxgydPnrC2trbva33tjLy0\nkVssFgoKCmhsbKS+vh6j0Sg8zoGBAZG/PqphqC9D6jPXarWHLnqjVqspLS0lPz9fKARev36d7u7u\nPWEktVotNjapStRsNgtVMCmcLfXsSt0D+zkw5VVr1+v1NDU18e6774q+bEnc5NGjRzx48IDBwUHi\n8Th2u53y8nJ+9KMfcerUKfGzS0tL3L17l0gkQn5+Po2NjZSXl3Py5Emqqqpob2/nd7/7HY8ePWJ3\nd1dc7+TkZM7GAEejUUZHRxkbGxNphvz8/D1he/j9ZpMrL8hut1NVVcWpU6dwOBz84he/oK+v73PR\nHUlAZHBwkOLiYpqbm4U+vORd7weS2FQoFKKnp+cLf06tVuNyufD5fBQXF+N0OtnZ2RF96kfFSL6c\n6305eiIZdUkiW4pExeNxUeglhY+lQs5UKkU6nc7pASaZTBKPx/d0wHi9XmprayktLWViYoKdnZ1D\nN/IGgwGv14vL5cJiseyRKk+n0ySTyT1rMhgM5OXlUVNTQ3FxMXq9HoVCwebmJjdu3ODTTz/l+fPn\nB7LW187IS3mNrq4uvve979He3o7L5UKj0TAzM0NfXx8ffvgh9+7dO/Cw8H4jVfv6fD5qa2tFHnxn\nZ+fQwmmpVIqpqSlSqRQLCwuiP/yzhYuSkIjdbic/Px+LxcLOzg6ffvopQ0NDYppbIpFgbm6OycnJ\nAz1wKRQKCgsLxfzvU6dOYbFYmJyc5OHDhzx+/JinT58yPT0tZoWfOnWKs2fP0traislkIhQKcf/+\nfe7evcvjx4/Z3t7GZrNRWVlJVVUVlZWV5OfnEwgE+P73v8/Zs2dFRGZnZ4d/+qd/OpBw2zclnU4T\nCoVYWlqitLR0z/dSqRTr6+s5G+wRCAS4dOkSgUCAcDgsRoa+imw2y9raGktLS8RiMbGh5mq4ystD\nlaQowPj4+JEK12u1Wmw2m5DRht8XGMbjcTQaDYODgyIakclkRJ2HNBGwqamJiooKJicnmZ6eZnp6\nOmeGfmpqiu7uburq6vaMYzUYDDQ1NTE3N8fCwsKRSsdKcxhePvC3traKtHJZWdmeUddfJK+9X7x2\nRl5qq6mvr+c73/kODocDk8kEvMhr3Lt3TwhovG5IrUMej4eysjI2NzdZWFg4VOGbZDLJ6OgoMzMz\nYtDJZ4sWpXXm5eVRUFAgDlmSbvfNmzcJhUJi+IIkJXtQG4Vard7zTHR2dlJQUMDc3Bw9PT1cu3aN\niYkJVldXcblceL1eysvLOX/+PG1tbajVaqanp5mZmeH69evcvn1btFxqtVqmp6cZHh6mtraWxsZG\nWltbycvLE9OxgsEg4+Pj+1YIJLWVSUWAkjTm1zHMkrcmteR8VkQkHo+zurrK5uZmTkRFpL+Tz+cT\nkrdS//vu7q4I10vFmlarFZ1OJw7Aer3+UKYXfhZpfS+/i1IUYHV1NWeevNRCqdPpsNvt+Hw+SkpK\nsFgsopBXOmxHo1HS6bS499lslvz8fDo6OvB4PEQiEbRaLe3t7VRXV/PgwQNSqRTz8/M5M/JLS0uM\njo4SCAT2GPmjJD39WaQU1M7OjhgsdfLkSS5cuEB1dbWYM7+9vU0wGGR1dfVAizZfSyMvbdRer3fP\nC7+2tsazZ8++sCf3qCP1m0thwbGxMSYmJoTq1mEQj8d59uwZCoXilVK78MKoStK8lZWVQj54ZWWF\noaEhBgcHxaYg9Y0eZDW6Xq+nsbGR8+fP8/bbb2O1WllfXxejUPv6+rBarTQ0NNDS0kJjYyN1dXW4\n3W4ymQy3b98Wc8JnZ2dZXV0VGupS4ePq6iqDg4M8fvyYoaEhmpubKS8vB15MmPr1r3/N8PDwvlyP\nVqvFYrFw7NgxAoEAwWCQiYkJ+vv7v/IeSsV4DodDVIBLSNezsLBwILm/r4N0bV6vl/z8fP7bf/tv\nFBQUcOvWLaanp8W6pCJB6VAlhURzRSaTYX5+/tB74b+MlzUgvF4v7e3tdHZ2cvz4cdHPD3s9+VQq\nhdVqFQOY
XC4Xra2tbG5ukkgkUKvVYrLh1tYWMzMzOTlUSUQiEYLB4JGYG/91mZiY4PHjx0QiEUpK\nSnjrrbe4dOkSx44d26NuNz8/z+DgIFNTU4RCoQNbz2tn5KWJP0ajUYTtpPngc3NzzM3NHYl2uT8E\nqWfY4/EIpbjx8XFisdiheV3ZbPYr719BQQENDQ2cPHmS6upqMpkMjx8/Fh7zYeeljUYj7e3tdHR0\n4Ha7RQud1+ulubmZwsJCMaWruLiYoqIifD4fCwsLDA0N8cknn/Do0SOmp6fZ3t7eE3rNZrMkEgnR\nYyzlMWdmZoTO+vz8PAMDA986BC5FSKqrqzl16pSQWv2v//qvrzzpS78rKRQ2NzcTCAT26GLv7Oyw\nurrK9PT0gbfPfRHRaJRQKERxcTEul4vm5maMRiN+v5/FxUU2NzeBF89YWVmZmMRoNBqZmZnh6dOn\nOUmJKBQK0WKZS6MnrUVSZaytraWiooKysjLKysooLS2ltLQUg8GAUqkkHo+L4tBIJEI4HGZ9fX2P\n3oYU4ZHy8tvb28RiMSYmJkQrY66YnZ3lyZMndHV1iRoTeHGwr6qqorq6GofDwcbGxqGnTD5b+yD9\nW6/X4/P5KCwspLm5mcuXL1NTU4PFYhHiSUtLS1y7do2bN2+KotOD4rUz8mq1GqvVuqdwKBqNMjU1\nxeTkJCsrKwd6ww4SacKbz+cTVb8TExNH6hQrtT+dO3eOU6dO4ff72dzc5NatW/zTP/2T2KQPE4PB\nwLFjx2hsbESj0ZBOp4WiYH19vVD2knKQkmczMDDA1atXuXbtGrOzs18rr7exscHGxgZPnz7d9+uQ\nQvTt7e38r//1v/D7/aIVdGVl5Ut/T6qPaGpq4sKFC3R0dIh0ArzYyNfX15mZmdnjMR824XCY6elp\nYcADgQCFhYV0dXXtmdZlMpmwWq3Ce0+n0wwODvJv//ZvTE1NHfq6VSqVWG+uagIklEolZrOZ6upq\n3n//fU6cOEFdXR3w+8hZKpUS4lvRaJRsNkswGGR5eZmFhYWvPDTu7u7S3d3N+Ph4TiMX09PTZLNZ\nvve971FZWSkOLwaDgcbGRmZmZsjLyyMWi+WkLuJVRY6SrHcgEKC1tZWuri7RIi0NM+ru7uZXv/oV\nv/vd7w58ja+dkXc4HJw/f57GxkbxtXA4TG9vL8PDw6La+XVEr9dTUFCAzWYjk8mwtrbG6urqkbke\nrVYrNpeOjg4cDgehUEjUQGxtbeVkQ5BUpqLRqOjV1+v1eDwe4Z2EQiHhmUhh15GREcbGxsRgiVxj\ns9l46623OHfuHEVFRaKX1mazYbPZSKVSe/Tdpdx9Q0MDzc3NtLa2UllZid/vJz8/H3ixCUkb/sOH\nD/n4449zms6anJzkP/7jP8hkMqTTaaqrq0UHgCScBC8OLpIWwNbWFiMjI9y7d0+0CR42kicvjZXN\nJRqNRmjot7e3CwljeLEXzs/PMzQ0xNDQEPPz80K8KRaLCXW/r3pPpVkZ4XA450VtUiHpxsaGiNRJ\n7Z+5fm9fNTynvb2d0tJS0S768s9ls1lmZ2f5f//v/x3aHI/XyshLox7r6urw+/3COxkbG6O3t/fI\neb1fF4VCIYZFSOGncDgs5CePCg6Hg6amJmFM0uk04+Pj3Lx5k9HR0ZxFUGKxGMPDw2JErJTKkYZ4\nhMNh0SUwOzsrhr1IRYVHZf66VqulsLCQoqIikRc1mUy0tLSQyWTESGLp0Cfp1Z8+fZquri7a29s/\nl4dPp9Osra0xMzPD3bt36e7uzukzJRX92e120T0iqTt+dlCQFEaen5/n7t27PHz4kKmpqZz9rY7K\nc6JQKDAYDNhsNlwulzjERiIRZmdn6evro7e3l76+PhYWFgiHw0emxe8PIZFIMDs7y+LiIuXl5ahU\nKlFgmstnQTp0p9PpPV0XJSUllJSU7Pn5RCIhRgMPDQ3R09NzaIft18bIKxQK7HY7eXl5GI1GIT7w\n9OlTPv74Y7q7u5mbm8v1Mv8gFAoFTqeTyspKurq6MJlMTE9PH7nagtLSUv76r/9atJwNDAxw+/Zt\nfvOb3xzK0JwvYmtrS/RbNzQ0UFhYiMPhYGdnh+npaSEyEQ6HiUajYgjEy/38RwEp17+0tCTW5PF4\n+Ku/+isuX77M/Pz8HsEYSZCovLxcjD19OZQsVVX39/fz85//nEePHolBL7lCWtODBw8YGRnBbrdT\nV1dHa2vr53r6pQFTY2NjdHd3s7S0lDOvMp1Oi1B3KpXK6WjWZDLJ5OQkQ0NDzM3NodfrWV9fZ2Bg\ngCdPnvDgwQMxujqRSOTcE/+2RKNR+vv7KSsro6urK+fpEvj9FEDp0P1FY6AlIpEI8/Pz9PT0cP/+\nfSGRfRi8NkYeXsgZFhQUYDabhZHf2toiGAyyubn5WgrfwAsjL+kx2+12dnZ2mJ2dPXJGXnqwpdYs\naW758vJyTteaTCbFpKe1tTWcTicmk0m0i0k9+lIB0lEx6p8lFouJdr2RkREKCgqw2+0ihSOJ9Uhe\nmTTl7eU2UiknOz8/z/z8PIuLi9y7d4979+6xsrJyJN6RTCbDxsYG4XCYhYUFoSyn0+n2tCFKkbqV\nlRWhgJcrstms0B7Y2NjA4XCI91aayXBY3nImk2F9fZ2+vj5+/vOfo1Kp2N7eZm5uTvS1v44RzS8i\nmUyyuLjI3NwcoVAIl8u1p6A0F0geeXd3NxaLhfr6+s8Nh3qZe/fucePGDSYmJkQx9WEdvl4bI69Q\nKMSoUMmTB0S+8ahu3F8HhUKBXq/HYDCgUCjY2Ng4MkN1XmZ9fZ3e3l4xH3xiYoLp6emctxRls1kx\nQvWwpvQdBJInL3ktHR0dmM1m8Xx4vd5XKpkB4uAl5br7+/u5c+cOT548YXR0lOnp6Rxd1auRCiCT\nySQTExNMTEzkeklfinQwWVpaYmlpiby8PNGb7nQ62djYIJVKHco+JFXK9/X17Zvy31EmlUoRCoWE\noVcqlUKWWbrnh73/r62tsba2JlrivF7v52S9X9ap+M1vfsM//MM/HOoaJV4bI5/NZhkfHycvL4+3\n335bnKSlHOtR8FD+UFQqFX6/H7/fTyqVYmhoiI8++iinIfBXEQwGuXHjBuPj4zidTkZGRnIqlPGm\nkU6n2draEjKvAwMDQjgGXhRVFRUViTnUu7u7QtUxFouxtLREKBRic3OTwcFBJiYmCIVCr53y41Em\nFApx8+ZNdDqdaNvc2toinU6L/LfM/iIdah48eMDW1hZms1kYV0nxLldRnvHxcTQaDXa7nVOnTtHc\n3Ew8HhfCbFKRaH9/f07WB6+ZkZfEVrq7uwmFQiiVSvr7+5mbm3utw1NKpRKXy4XdbhdjN588eXLk\nrkkaeXpQGst/7GQyGaLRKLOzsywsLLCyssLY2BjFxcXAi0hKeXm5qKaORCKsr6+LGQEzMzOsrKwQ\nCoVYWVmRjfsBsLGxwcOHDwkEAtTV1VFZWcnu7q5QvZOGpbzOkcWjh
vReHMWIjzQR0WKxiFqf3d1d\nZmdnuXr1qtB0yGUk7bUx8hJzc3P87d/+rcjJrK2t5ax1a79JJBKsra2xubkp5j3L/PEh1Q1MT0+z\nvLxMb28v8CI3aTAYRHGaNAhD+p14PE4ikRApLJn9JxKJiNnzTU1NFBYW0tLSImo+ZmdnSSaTr3U1\nu8w3IxKJcO/ePQYGBviXf/kXIZi1sbEhHLVcRppfOyMfjUaZnJzM9TL2lVQqxejoKJlMhoGBAfr6\n+l77iliZb4eklZ6rqXYyr0ZS13z8+DEWi4WTJ08SCARobGxkeHgYq9X62resyXwzpKFPR1VO/bUz\n8m8iyWSS7u5uuru7c70UGRmZL0GKmDx69EjUPFy5coVz585RWlqKw+EQYVsZmaNA7po9QY5Fy8jI\nvJaoVCq0Wi0lJSVCYXBycpL+/v5DHQ0t80fPV9pw2cjLyMjIyMi8nnylDT9ag3hlZGRkZGRk9g3Z\nyMvIyMjIyLyhyEZeRkZGRkbmDUU28jIyMjIyMm8ospGXkZGRkZF5Q5GNvIyMjIyMzBvKGyuGo1ar\nMZvNuN1uvF4veXl5aDQagsEgKysrrKyssLOzc2gzfb8JGo0Go9FIXV0dhYWF6PV6od88NDTEwsIC\nu7u7sireN0CpVIr56zabDZfLJWSDTSYTOp0OrVbL2toaKysrBIPBnI42lZH5tigUCrRaLQUFBbS0\ntLC8vMzU1NRrPZb7TcBgMGC1WqmtrcXv9wOwsLDA4OAg4XB43/edN9bI63Q68vPz6ejo4OzZs3R2\ndmKxWOju7ub27dvcvn2bubm5I2nk9Xo9+fn5/MVf/AWXL18mLy+PZDLJysoKf/u3f8tHH31EPB6X\njfw3QK1WYzKZyM/Pp7q6mpaWFnH//H4/brcbm81Gd3c3d+/epbu7m1gsJs8PkHltUSqVmEwmjh8/\nzv/5P/+HW7du8a//+q8MDQ3JRj6H2Gw2Kisr+elPf8q7776LUqnk6tWr/M3f/A1jY2Oykf8qFAoF\ndrud6upqvv/979PU1ERpaSn5+flotVpaW1txOBzU1tbys5/9jPv37+d6yQLJgz958iSXLl2io6MD\nj8eD0Wgkk8mgUCi4cuUKAL/4xS/2XSvZ4/FQVFREZWUlfr8fj8eDWv3qRySZTLKwsCDma0vTz3Q6\nHalUiu3tbRKJRM40vFUqFSaTiUAgQENDg1AlczgcuFwuXC6XMPIOhwODwbDnIJBKpXj8+DErKytH\n6jClUCiw2Wy0tbXh9/txOBzE43Gi0SjhcJjV1VXm5+cJh8Ps7OyQSCSO1Pr/GFCr1RQXF3PhwgXc\nbjfZbJYbN27w/PlzdnZ2Dv3voVQq0el01NfXc/nyZUKhkBiB+iah1WrxeDzU1dVx4sQJ7t+/z6NH\nj4hEIkdCgdBoNFJSUkJTUxOdnZ1UVVWh1+tRKBRUVlbyox/9iL6+Pvr7+xkfHycSieyLk/FGGXm1\nWo1Wq6WsrIyTJ0/ygx/8gNLSUnQ6nRj/WFxcTGFhIY2NjTx48IDHjx+TTCZzOh5SCqs5nU5KSkq4\nePEi77//Pi6XC71eTzweR6VSYbVaOX78OJubm3z44Yf79vlqtRqdTkdFRQUnTpzgxIkTNDY2Ulpa\nilarRaFQfO7exONxnj9/zujoKGNjY4yMjDA7O4vRaCSZTLK6ukosFhNTmOLxOJFIhHg8fijz5zUa\nDX6/n/b2di5evEhFRQWFhYVotVoymQy7u7vCAGo0GjE5yu/3k5eXx+joKMFgkGAweOSMpNFopLq6\nmo6ODqqqqkin0+zs7LC4uMjU1JRY+8rKCgsLC2xvb8sDU/YZpVIp/lEoXoiOSe+I2WympqaGP/uz\nP6O4uJhYLCb+NtFo9NCep2w2SzqdFlMJ/X4/J0+e5Nq1ayiVykNbh0ajQaPRoFQqSaVSBzZhU3rn\nu7q6+OCDD0gmk4yPjxONRnNu5A0GA16vl5aWFk6fPs2ZM2fw+Xxks1kUCgWFhYW8/fbbeDwerFYr\nm5ub+7buN8rIWywWCgsLee+997h06RIFBQVoNBrgheeZTCbJZDKoVCqMRiP5+fkUFBQIg5SrjVCp\nVFJYWEhnZyfvv/8+9fX1uN1utFotu7u7LC4uYrPZyMvLO5DPt1gsVFRUcOXKFd59913y8vKw2+2o\n1eovPPyo1WpKS0vxeDy0traytLREMBgUnnw4HBZjTwGmpqb43e9+x/T0NMFg8ECu42XMZjMXLlyg\ntbUVq9XK0tISs7OzxONx1tfXWVxcZG1tje3tbXGaTiaTfPe73+XYsWPU1dUxNzfH06dPD+VQ8nXJ\nZrNEIhF6enrQaDS4XC6Ki4vFs+7z+Th//jzJZJKpqSl+9rOfMTQ0xObmZq6X/kah0+kwm80YDAax\nx0iRq+LiYhobGykpKcFoNLK5ucnu7i6xWOxQD4zZbJadnR02NzfZ2NgQUUFpzYlE4lAcG6kuymAw\nsLm5ydjY2IG8UxqNRnyOFM06KpSWltLS0iIcqKKiInQ6nfi+yWSiqKgIq9WK1Wrl2bNnhEIhtra2\nvvVnv/ZGXiqoMplMIkxz5swZ6uvrMRqNqFQq4bmtra2xsLCA1+uloqKCjo4Otre3efr0KVNTUywu\nLh76+jUaDRaLhdbWVi5cuEBXVxdutxulUsnu7q4wjiUlJRw7doxUKoVCoRDew34g1QCUl5dTW1sr\nTt1fhlKpFA8kgM/nY2dnR9xvyWOXNrWZmRkMBgN37tzh8ePHBz7EI5PJsL29zczMDKOjo0QiERFJ\n2NzcZHV1lY2NDXZ3d9FoNCgUCtLpNFVVVTQ3N1NRUcH4+Dg6nU4cDnOBQqFArVaLYh0ppWC1WrFY\nLLjdbpxOJwDBYJBwOIzRaKSmpgaPx8PS0hIKhYK7d+/mZP1fhHQ9Xq8Xl8uF3W5Ho9GQzWaJRqOs\nrq6ysLAg/ka5QLrvbrebvLw8dDodKpUKAKvVitPpxGazodfrAVhaWmJ5eZmysjKqq6txOp3E43GR\nujrsZyibzZJKpUgkEsRiMRENtFqtIuJ2WEa+pqYGm83G7Owsk5OTB2bkXS4XNpuNTCYjrm0/98qv\ngzS8SHIkS0tLqa2tpba2lqqqKvx+P2azmVgsRjQa3WPDTCYT6+vrWCyWL0yVflNeeyOvVquxWCwU\nFRVx/vx5/vt//+94PB5MJtMeQxUOhxkdHeWTTz6hra2Nmpoavvvd71JbW8uvf/1rfvvb3+bEyEth\nnPPnz3PhwgXsdjsqlYp0Ok0oFOLRo0f84z/+I+3t7eh0Onw+376vQTIcRqNRhOe/KdLvS3x28/B4\nPJSWlqJWq1lZWWFubu5AjXwkEuGTTz4BYGNjg0wmIzZZ6b9fFaWIRCKoVCoCgQCBQACj0bgn7XDY\nKJVKcQirqKjAbDZjt9vJz8+nvb2dlpYW4EWk5P79+zx9+pTNzU1++tOf8vbbb/Mnf/InZDIZ7t27\nd2SKCKW6gqqqKs6dO0db
WxsNDQ2YzWYymQwrKyvcuXOHq1ev8uzZs5wZeZ1Oh9frpaOjg66uLhwO\nh3jGnU4neXl5OJ1O8bXu7m7u379PQUEBZWVlaLVawuEwW1tbpNPprzw4HzQGgwG9Xo/D4cBsNhOJ\nRA7l4OF2u6mursbj8aBSqVCr1fte8CwdhqXOGSnVedgGHl7UBlitVoqKinjrrbf4yU9+gt1ux2Kx\noNfrhfGWng2FQoHVasXj8RzIel5rI69UKmlra6OtrY36+noaGxvx+Xyv/OPa7XZcLhcqlYrNzU2m\npqZEWLqoqEh4Q4dNbW0tV65cobW1FY/HI8Jo4XCYO3fucOPGDZaXl3n06BGpVAqr1cry8vK+hqJ8\nPh/vvPMOdXV1f/BL8VXRBYPBgMfj4dKlS+Tl5TE+Pk5/fz/d3d1sb2/v+8leOiRls9mvVSVvNBpx\nOp0UFBRgtVpZXV1lcXExJ2kchUKBUqmkurqampoaqqur8fv9+Hw+tFqtCBV7vV60Wi3ZbBa3283l\ny5cxm83cunULeHHQysvLIz8/H5fLxfb2ds6rqouKiqipqeHYsWM0NDRQXl5Ofn4+Ho9HXItOp+PU\nqVM4HA5sNhsPHjxgcXHx0DthjEYjpaWltLa2cubMmT2hea1Wi0ajIRQKsb6+Tl5eHiUlJZjNZnEQ\n02q1LC4u8vjxY5aXlw8sF/1VJBIJ0TZntVopKyujpKSEYDB4KKkov99PS0uLiCYcBHq9HrfbTVNT\nE0VFRQSDQVZXV1lfXz+0A7rD4SA/P5/KykoqKyupqKigrq6OQCCAVqtFrVbvqeEYGRlhaGgIpVIp\nIm8HwWtr5FUqFTqdjo6ODn70ox/R3NwsQsfJZFIUVklhE7PZjM1mQ6lUsra2xtDQEM3NzRgMBpxO\nJ2az+VDXr1arMRqNNDc3895771FcXCw8mY2NDaamprh16xb3798nHA4TiUSYnZ1FpVKJ6vX9wmaz\n0dTUhM/nE951JpMhmUwSj8fF5iqdwnU6ncgnKRQKVCrVVx4O1Go1arWajo4OGhsbGR8f5+OPP2Zi\nYmJP7n6/kFI0XweFQoHD4aChoYFAIIBOp2NsbIzR0dGcGHmpl7+zs5O33nqL9vZ2XC4XgLhXkuey\nubkpDigXLlwAYHR0FK1WSzKZxGAwoNVqMZlMh27gpcOKwWDAYDBgNBppaWnh/PnznDp1isrKSrLZ\nLKXTcsYAACAASURBVMlkkvX1dbLZLBqNBpvNRm1tLaWlpYRCIVZXVwmFQodm5BUKhQi1NjQ00NLS\nQkNDA/F4XKRudnd3WV1d5dmzZ8RiMaqqqiguLqa9vV0ceDOZDPPz8zx69Ijl5eWcRYOi0Sjz8/OU\nlZXhcrkoLy+nrKyMJ0+eHMrne71eqqqqmJiYOLDPcDgclJWVUV9fj8PhYHR0lJWVlUOpRZGibYFA\ngM7OTjo6OmhpaaGiogKr1SoKlz97wJuZmaGnpweFQoFOp+PMmTMHsr7X1shbLBbR81xeXo7BYBA3\nc2VlReR+ysvLOXnyJNFolPX1dRYWFojFYmxvb+Pz+SgtLc1JSMflctHR0UFHRweBQACTySSqpB88\neMCHH35IT08Py8vLwgBKobX97gRYWVnh2rVrtLW1UV5eDrwIWy8vLzM8PMz4+DiA8B5ra2uprKwE\nXpygbTYbarX6a91H6YEuKSmhrKwMp9OZ07yrQqFAo9FQVVXFn//5n1NVVcXKygoff/wx9+7dy0lV\nbnl5Oe+88w6nTp2irq6ObDbL4OAgg4ODjI2Nsby8jFarFV5DR0cHFRUVokvDaDSi0+mIxWJMTk7y\n9OlTgsHgoRsZKc/Y0tJC2/9v78ye2s7S8/9ISEhCG4uQhAAh9sXsZnHbeAG84GnP9HRNTyq5y8Xk\nIpX/JpVcpFKpVJZKajrTk5npcdntBTCL2cECSWwCtCCBJLRvSCD4Xfh3TnC3u9vdRhKeOZ+qrq6y\nwfpKfPmec973eZ/n8mV6rysUCnA4HNjtdvp7urW1hUgkAo1GQ1XpAoEASqUSKpWKnqAzAY/Hw+XL\nl3Hjxg3cuXMHNTU1SKVS2NjYoEZUJpMJS0tLcLlc4HK5qKiowE9+8hM8ePCAPovi8ThsNhttoWQL\nj8eD8fFxlJeXo6GhATqdDjqd7tx6vt8H2fSk8znb1dWFTz75BCqVCqFQKKNmVnl5eWhsbMTg4CA+\n++wzqtXIy8t75/d99uvO+3P64BZ5ooxvaGjA1atX0dzcjKKiIvB4PMRiMQQCAczMzMBgMMDr9SIS\niUAmk8Fms+HVq1cwm82IRCLweDwYHBxEfX09iouLaemeVADSBZnjr6urw+DgINrb22mFIRwOw263\n49WrVxgfH8f+/j6i0Sj93nT1zzweD4aHh+F0OqHT6QAAkUgEbrcbm5ub2NnZAfD6ZlYqldjc3KSb\ngYKCAtTU1EAoFNKeo0QioaXlr5fnyMlfJpNRUQopeWYa4opYW1uLa9euoaenB16vF3q9HouLi7DZ\nbBkVS+Xk5EAkEqGmpgZ37tyBSqVCLBbD0tISDAYDTCYTtra24Ha7qZKenFjq6uogEAjg8/lQWVkJ\nmUxGVfgGgyGjmyiycSIq856eHnR2dqKxsRGxWAwWiwV7e3twOp2wWq1wOBx040LGGWOxGKLRKPb2\n9nBwcJDRCYecnByUlJSgrq4O9fX1kMlkCIVCWFpawtLSEsLhMNbW1mAymXB6eoqioiIoFAradyei\nz62tLWxubsLlcmV1hJFMuxweHoLL5UIul0Mmk6VdI0AcJomjZDpfr6SkBA0NDZBIJPS+Og9l+vdR\nUFAAnU6HGzdu4MaNG2htbaUi3uPjYxwfH4PH471R7SSHtFQqhVQqlbYWBuGDW+SJGObmzZv427/9\nW9r74nA4tAz/m9/8Bnq9HhUVFTg6OoLX68XU1BSMRiN9YDidTni9XohEItTW1qK+vh5lZWVwOBxp\nXeSJqKunpwdDQ0PQarX0hxwOh7GysgKDwYCdnZ2MLTAejwcjIyOYmJigu3siTiM3KvB/5dezX6fV\natHX1weZTEZPW1VVVbh9+zYKCwu/8wbOz89HW1sbHA4Htre30/wuvwk5Df/iF79Af38/NBoNhoeH\n8d///d/Y3t7OuOMdn8+HUqlEXV0durq6YDab8fLlS/z7v/87NjY26M8ilUqBw+HA7/djc3MT09PT\nkMlkyM/PR3d3N37yk5+gsLAQHo8HU1NTMJlMGXsPwP9txLu7u/GrX/2Kbjp8Ph+Gh4fx+eefY39/\nHz6fD/F4HHl5eVSc1dHRgaKiIgSDQSwvL2N0dBQzMzPn2p56F4jnRm5uLpLJJNxuNyYnJ/HkyRME\ng0E6DkfK3wMDA2huboZYLMbR0RH29vbw7NkzGI1GHB8fZ1X0KBQKoVarIZVKM/q6IpEIJSUlUCgU\nkEgkaV3M+Hw+PWiEQiGsr69n5OCg0+lw/fp1fPLJJ2hsbASHw6FeAOFwGMDrz18kE
tEWJ9kAHB4e\nIpFIQCgUvlGdPe975YNa5CUSCUpLS9Hd3Y22tjYolUq6a04kEjCbzZidncXOzg729vaQSCTgcrlg\nMBhgt9txcHBAR1nI/8npqbKyEtevX8fIyAj8fn/a3gOHw4FIJIJUKoVUKoVAIMDJyQl8Ph+MRiOe\nPn0Kk8mU0ZML2VG+a8/27NcRlfrZ8SKFQgGDwUB1EMRYhwjFyNdJpVLU19djdnb2/N/UWyDOXwqF\nAuXl5aisrERDQwP6+vpQUVEBHo9Hd+Y8Hg+7u7vY3d3NmIHP16+T3CPHx8eIx+PfuIZUKkXHoPh8\nPh3Rqaurw8bGBubn52Gz2TI+L0z0BB999BHq6uroyXdsbAzz8/NYXV2lIsBUKoWCggJUVFTgypUr\n6O3thUwmg8PhgMlkwt7eXsZd4kgvfXNzEy0tLVAqlVSzsbu7i7m5OVqN6unpQV9fH3p7e6FWqxGJ\nRLC6uoqpqSmMjIxgZ2cn61MN5HmX6WqCTCZDfX09bbek4yRPNEL5+fm0bZhIJOD1etOqQZHL5Sgp\nKcHdu3dx+/ZtVFdXU12X3+/H1tYWRkZGcHJygsrKSnR3d9MWp8PhwPLyMmZnZ7G1tYWqqioAoMY4\n580Hs8hzuVwolUo0Njbi+vXrdJ7b7XbD4/EgEAhAr9djYWEBLpeLzkV/H6R8XFZWho8++ggGgwFr\na2tpeQ9ELEVOXWfns10uF0wmE8bGxrC7u5uW108H5MR1Fh6Ph+HhYbqg/uIXv8DAwACkUilycnLo\nIk/+XiwWp/06ycyzSqVCQ0MDHT9rampCSUkJRCIRUqkUysrK0Nvbi9raWmxubmJhYQFOpzMjpwLy\nMI7FYnTevaysDBUVFfB6vQgEAnTDcXbhIL3rzs5OWrLc3t7GzMwMXC5XxlXpcrkcV65cQWdnJ1Qq\nFUwmE7766iv8+te/xt7e3lu/vqamBp2dnWhtbcXp6Sm8Xi9MJhO8Xm/GzYhSqRRsNhtWV1dhs9kg\nk8lQXFyMnp4ehEIhWCwWJJNJajx08+ZNNDQ0IJlMwuFwYHJyEs+fP8fc3Nw7PYPSzVmb6UwilUpR\nV1eH4uJi6p9x3huNvLw8qFQqKJVK2vY8PDxEKBRKy33P5XKRm5tL3TTv3LmD/v5++t4SiQR2d3cx\nMzOD//iP/wAAXL16FSqVClqtFolEAmtra/jyyy8xOzsLl8uFoqIiOqabDu3JB7HIE7OArq4u3Lt3\nD93d3VCr1YjFYnjx4gWmpqbgcDhwcHBA+/A/FDI/etaF6LwhYx6dnZ3o6OiASCQC8PrhfnBwgP39\nfcTj8Q/egpTc7KQfRfqupF9FIIIgi8WS9mtSKBSora3F/fv30d7eTn3fZTIZBAIB3ezV1dVBpVIh\nmUzC5XLh+vXr+PWvf42nT5+m/RqPj4/h9XqxtraG4eFhtLa24tKlS/i7v/s7zM/P4+XLlzCZTFQr\nQBb6y5cv4+bNm+jp6UEsFsO//uu/YmxsDK9evcp4mRt43QZRKBSQyWRIpVIYHx/Hw4cPv7VCRh7U\nIpGIli09Hg/0en1WtBonJyfwer2w2WywWCzQaDRQqVTQaDR0rFGr1eL69eu0zZebm4vV1VWMjIzg\n2bNnWF5evjAphslkMiviVoFAgOLiYioq9nq91LPivCgvL8f9+/fR0tICoVCIUCgEn8+XtqkYIrK7\nceMGPv74Y9TV1VHTIb/fD5fLheHhYTx//hxer5dOxfj9fqyvr2NtbQ0TExMYGxuD2+2mi77RaITF\nYoFKpTr3a/4gFnlSjmltbUVvby892SwsLNBeMpn5zMnJ+VE3M4/Hg1AoTGvfSCaToaKiAo2Njaiu\nrkZubi4tk9vtdrhcLhQWFtJEtLOQU95Zp7aL5qlOIKIS4HX1gryn3NxcKkyKxWLY3d3F4uJiRkyI\nyAarpqYG5eXl4PF41N6WWPCSa+fz+cjPz4dSqYRWq4XZbKYCqnQ+uEmc8NbWFh4/foxAIIDW1lZo\ntVqIRCIUFBRAqVRieXkZFosFHA4HRUVF6O3tRXt7O9V0jIyMwGw2w+Vype1avwuideDz+bBarVhd\nXcXm5uY3vk4kEkGtVqOpqQnNzc0oKCigI3WBQAB2uz0rc/3EDtbv9+Pg4ADRaBRcLheFhYVoaGjA\n4OAgKisr0dvbC4lEQid6VlZWMDk5CZPJhP39/Yxf97dBNBK5ubkZfV1issXn83F8fAyPx4ODg4Mf\n/dwiG3GJRAKZTIaCggL09vZicHAQVVVVODk5wdbWFra3txGNRtNSAZJIJLh8+TL6+vpw+fJlmi1i\nt9uxvr6OlZUVTExMYGVlBZFIBBKJBD6fD69evYLFYoHBYHhDc8XhcOB2u6lGJT8//9yv+YNY5CUS\nCcrLy1FXV4eqqirw+XwYjUb88z//M1ZWVmC325FKpWhP46IufkVFRWhsbERVVRUtYSUSCQSDQayv\nr8PtdqOlpYUa+5zl8PAQfr8fIyMjePXqFaxWa9bmbt8V0uMeGBjAzZs3IRKJwOVykUwmsb+/j62t\nLWxsbGTktEYU2y6XC8fHx3T2en9/H0aj8Y1rkEgkdJ77448/RldXF/b29vD8+fOMtFJsNhsODg6w\ntLSE5uZmfPrpp2hubsZf/uVforW1FVNTU/j888/B5/Nx5coV9PX1QSqV4p/+6Z8wMzMDh8OR1UAO\noVCI8vJyHB8f05Lk2ygsLMStW7dw//593Lp1i+pT4vE4IpEIotFo1vrZRO9weHiIo6MjOjtPbElz\nc3Op6ZbL5cLCwgJmZ2dhMBguXE4A0b6kK/viXTg+PobL5XqvwKecnBwIhULodDo0NTWho6MDHR0d\naG9vh1gsRiAQwPT0NObn59NisAW8/iyvXr2Kjo4Oeij0+/2Ynp7GkydP8PTpU0SjURoFHgqFYDab\nqR+I1+tFLBajh6DT01Mkk0nqf5GO+/2DWOTJDCoRKOzv72N9fR1LS0vwer0/erFL9+zm1yFWpGcj\nXGOxGPx+P8rKyiAWi1FdXQ2tVguNRvPG9xKDH7lcDoVCgS+//BJut/tCLvTEqKi1tRXXrl1DdXU1\n7bufnp4ikUhgY2MDJpMpY/OswWAQZrMZDx8+pGU90vve399/o/ojEAgQi8WQl5eHS5cuQa1W49q1\na1hcXMzIIn90dEStUOPxOBKJBNra2tDe3o7i4mJcu3YNcrmcjqm5XC7MzMxgZWUFe3t7WS8TkxE6\nuVyOiooK6nJIXN84HA7UajXq6urQ19dHVekA4PP5sLS0BLPZnPXNOvk5kM+TCCJJS49Y8K6srGB0\ndBTLy8vweDwZ10B8H6SPTAKnkslkVjaBXC73rZVSoVBIR/r4fD4UCgWkUik9FAiFQpSWltIJnsLC\nQmrNS/6My+XSE/Xe3h499J0nJGuBXB+Xy8Xjx48xOjqKtbU1bGxs4ODg4I3Xjcfj9GBBzMW+fl9/\n3ffkz2pOnsvlgsfjoaamBh9//DG0Wi0O
Dw9hsVhgNpthtVrP7bVIXyWdDxYSNVhQUED/jGSBNzY2\nUjVqbm4uvSkAvKFcLysrg0QigclkwuHh4YXKhSZmLGKxGIWFhbh27Ro+/fRTumEhDxi/34+VlRUY\njUaEQqGMPHBCoRBCodA7uW4dHh4iGAyisLAQXV1daG1tpWl2mYD8whPxqNlshl6vx/r6Oj777DPc\nvHkTXV1ddKP4D//wD3jy5AmsVusbvgrZgvwuyeVy6kDW3NwMg8GAcDgMLpeLpqYmamQllUrpg87r\n9WJycjJt4tcfAmmlve3+JBuw9fV1TE9PY3x8HDs7Oxcq+YxAnqOkyhmPxzPW7iP3ck5ODm03KRSK\nNw4nxFqciGNra2uhVqupkE4ikaC1tRVFRUU4OTmhwjqr1UpP7MSh0+VyUTvr84JECpeUlKCqqopu\nsDkcDh49eoR//Md//NbvPesY+m2cNQv6s1PXi0Qi6HQ61NXVoaysDHl5efD7/dR56n05u4MKh8PU\nzSpdEAvds+I+Mud/cnJCZ3O9Xi/sdjvMZjM4HA61nJXL5RCLxaivr8df/dVf4Xe/+x0eP36ctuv9\nIXA4HAiFQjQ0NKCpqQktLS3o6upCbW0tfYinUimsr69jZmYGo6OjWF1dzXrO83fh8XgwPT2NoqKi\ntPlKvysulwuTk5OoqamBVqulnysAGkNMnAmzTTKZxN7eHlQqFW21EeEjKaGSDZ9er6eCNmJ+Q9pR\n2YTD4UAsFqOiooI69J3l4OAAm5ub+J//+R+Mj4/D6XRmReT4LhBvETKOabfbsbu7m3aBL1l4U6kU\nJBIJBgYGUFVVRdurBLLIk8kbsVhMvU+CwSDd6C4sLMDj8cDhcFCfEzKaRtIifT4fgsHguS7y5Ll9\n9+5dDA0NoaKiguYsvO/rkOoFGS3+s1vkc3NzUVxcDJVKRUU5gUAAq6urP7psyuVy6U0lk8lwdHSE\nSCSC7e1tzM/Pp/VkzOfzafwtgfh6k3hWv98Pg8GA2dlZrK6u0kCd3t5etLW10czkK1euQK/Xp+1a\nfygymQylpaW4fv06ent70dTUhNLSUqouJeYQr169wpMnT2AwGOB2u7N81d9NNBqFzWZDOByGSqWi\n/vvZyJfn8/mQSCTUnGVrawtSqRT5+flQq9Xo7OyE2+2mngvZJBAIYGJiAn6/H2q1Gmq1mkacJhIJ\n+rl6PB6cnp5SMxAitrNYLPB6vVm7/pycHFRUVKCtrQ3Nzc1vVTxHo1E4nU4sLy/DYDBk4SrfHaFQ\nCKVSSee4M6VzCAaDMJlMUCqV1Bukvr4eGo3mjSqCRCJBUVERYrEYIpEI4vE4AoEAVcoT7YzL5aIR\nxD6fDwKBALW1teByudRLIh0hTMXFxbh8+TKuXbuGrq4uOgp8HhADLLVajcLCQrp5OE8u9CJPenvE\nFjAajcLj8WBlZQU2m+1H/Xs5OTmora3F7du3odVqEY/HYbFYMDU1hS+//DKtRjjfBjnV+Hw+bG1t\n4auvvsIf/vAH+P1+8Pl8mM1m5OTkoK2tDcDrzU82QnW+i/Lyclqeb2trg0AgeGPmMxaLwWazYWZm\nBs+ePcuaV/2PgZQ7iSd8Ovp93wWHw0FNTQ1+/vOfo7+/H0qlEl9++SX4fD7a29tRXV1Nw4U4HA5m\nZmayasDicDjwL//yLyguLoZGo6FhNADgdrthsViwvLyMk5MT3Lt3D8Dr34H9/X1YrVb4/f6s9rVz\nc3PR39+PBw8eoLe3F4WFhd/4PM87PyKdEB8FiURCdRxkwiSd2O12fPHFFzg4OIDb7aY56l9vEwSD\nQezv72NjYwObm5t0IbfZbNTEh7g9kv9EIhHa2trQ1NQEuVxOW2zp2IDrdDr88pe/RFtbG20hnBci\nkQj19fVoampKW57AhV7kCaRXcXx8jFgshoODgx9sMkFO8HV1dbh16xbu3r2LwsJC7O3tUdFSpqIX\nvw6Zy11eXsajR48wOztLndYkEgmi0egbZe1oNAqLxZLVfnxubi4qKyuh0+lQUVGB6upqNDQ0oLa2\nlo7/pVIp+Hw+2mN1u91YXFy8kH3Lt0HcEImRRzoVsN8GER11dXVhYGAAp6enmJ+fx/j4OGKxGIxG\nIx48eID6+nr09vYiEolgf38fXq83a0YsR0dHdPSMqIlJ1SkcDsPn80GpVKKhoYFmN9hsNoyOjmJ4\neBiBQCDjCyhRbqtUKlRVVaGvrw9tbW0oKChAMpnEwcEBQqEQ+Hw+ysrKaFUuUyEv7wPZpJIES2Ig\nlu5yPVGN6/V6hEIhmvH+bZCTejgcRiAQ+M6ZeqlU+sZmheRspEOTIhAIUFBQ8EYVdm9vD+vr6+8l\nxBWLxdBqtbh16xY6OjogEAgQj8cRDofP1aHw4t+h/x8yP0tKOu9akiGnd5FIhLKyMty8eRMDAwPo\n6uqC2+3GxsYGJiYmsLa2llGl+unpKTU0OTo6gtPpxPz8PH7zm9/QkSOi5i0qKnrDFS4cDsNgMGRk\nvvws5LPk8XiQy+Xo6urCrVu3cP36dRQVFSEvLw98Pp/236PRKOx2O/7rv/4LExMT5x6Rm27IZy8U\nCqkIK5MnTNIXbm5uRm9vL7q6uvDo0SM8f/4cMzMzcLvd9AGkVqvR3NyMaDQKs9kMo9GISCSS1dNm\nPB5HPB7/RluGx+PhV7/6FR48eIDu7m7qmvjs2TOMjIxk5VrJNMClS5dw48YN9PT0oLy8nJoiWSwW\nuFwuqn8gvhrpDnl5X8gBieSYHx0dYXt7Gzs7Oxk70Ozs7NCQq/OCz+ejtLQUKpUKXC4Xu7u7MJlM\nGXu+OBwOPH78+EdpR8jPo7i4GI2Njejv70dzczNOT0/h8/loMNmf3SIPgIrRfkjPRSqVQqFQoKOj\nA11dXejr64NYLMbi4iIePnyIqakpWK3WjPcAycPv6OiI7nbX19fpIsLlciGVStHY2Ii/+Iu/wOXL\nlwG8PvWTxTPTrQWxWEyjZpubm9HY2Ij6+nrac8vJyQGXy0U0GqVZ20Rb4Pf7qer6Q6G8vBw//elP\nUVNTk5Xqg0AggEajwdDQEHp6esDlcrG1tYXFxUWEw2G68Xj58iVEIhE++eQT1NXV4Wc/+xmOj49x\ncHCQNuev90Uul9MAI4vFgq+++upcxLQ/lry8PBpLff/+fWg0GhwcHGB2dhZzc3NYXl6GSqVCZ2cn\n1fG4XK6smPW8KyQng9hoCwQCJJNJWCwWWK3WCy16/aE4nU6aiZAJjo6OvlFhfVeIsdXQ0BDu3buH\niooKcLlcRCIRDA8P49GjR7BYLOd2b134Rf6s2pD0Zt7ldEJOONXV1WhsbKRisJKSEmxubmJsbAyP\nHj3KmniNmLPYbDbs7Oxgfn4ea2trdI6Yx+OhtLQULS0tuHr1KioqKnB6ekrH5sxmMzweT0avmaRZ\nkdKxQqGg2gAul0sT/6xWKwwGA+bm5rCwsIDd3d2Mzm4TLUdxcTFOTk5oafJdT7VEjXzp0iV0dHQg\
nkUjQwKNMIpPJoNVq0dbWhvLychweHsLhcMBisSAej9OkwLW1NTrZUFNTg6tXr2JtbQ1bW1twOp1Z\nn5s/i1wuh1qtRnl5OWQyGfWon5qayooQk5jclJWVoaenB93d3airq6MhIk+ePMHS0hJ2dnbQ399P\ns+JjsdiFnIk/C3mGaLVayOVy6gb6vqY0F5FgMAiXy5X2aixZj2QyGaqqqt55/eByuRCJRNBoNCgr\nK4NWq8WdO3fQ29uLvLw8qkkYHR3F1NQUDg4Ozm0TduEXeeD/Sk5kdOi7+joEEpIxMDCAgYEBFBYW\n4ujoCOvr6/jjH/+Izz//PKu9YZJlvra2hv/8z/+Ey+WigiNiANHc3IzLly/Tk/LJyQn8fj8sFgte\nvXqV8ZMPKWlWVVWhvb2d9vmIujUcDmNhYQHj4+N4/vw5nE4nDVXJJBwOB1KpFDdv3kQ8HsfTp08R\ni8XeaZHncDiQyWQYHBzEjRs3IJfLMTc3h+np6Yw7manValRXVyM/P5/qNvx+/zdKeV6vF0ajEQ8f\nPsT9+/fx4MEDtLW1YWdn5w0zl4tAZWUlbt++jfb2dgiFQvrZZrpdRiDzz5cvX8bPf/5z1NfXI5lM\n4smTJ3j8+DH0ej0ODw9psh55KMfjcfh8vgu9yAsEArS3t6OrqwsFBQU4OTlBIBBAOBzOeIzyh87X\n59hramqgUqlo4uD3wePxaKz11atX0dzcTEPKQqEQpqam8G//9m/Ujvo8K54fxCJPkEql0Ol0uHbt\nGvLy8uB0OpFMJpGTkwO5XA6lUomSkhJIpVKUlJSgpaUFNTU1KCoqoiXumZkZGAyGrPh6BwIB7Ozs\noKamhkYjlpWVoa6ujqpGz/5Zf38/Ojo6IJFIqJ/3+Pg4RkdH4fF4Mv5QLCoqwpUrV1BbW/uN5Diy\nE11cXITBYIDVakUwGMzKg5tsRPr6+qhAymg0Ymdn51s9rcnXkUzzwcFBGpNqNBoxMzOT8UVeLpdD\npVJBKBRSJ7i9vT0aL0s4OjqiBkNNTU3g8/m0RHtRhGHEx7ypqQlDQ0PQ6XSIx+MwGo3Y3t7O2rQF\nl8tFaWkpamtrodVqweVyYbPZsLS0hKWlJXg8HqhUKtTU1KCsrAyFhYU0zjQcDl/o9hOfz0dlZSUq\nKyshEAgQDAbh8XhoQtuf0iKfl5dHo2bPG4/Hg5mZGUgkEigUCppdLxQKce3aNSr2C4VC8Hg8NOug\nsrISYrGYVhZLSkrQ39+P+vp6qNVqWCwW7OzsYHt7G1NTU1hZWUE4HD73NsrFeAJ8B2dHVYgn9tDQ\nEORyOWZmZhCNRunN3NbWhq6uLpSWlkKpVEImkyEWi8Hr9VLXsImJiXdyPUsHHo8HBoMBCoWChtC0\ntbXh+PgYQqEQZrMZPB4PPT09GBgYQEtLC0pKSgCApmI9fvwYz549y4qArbi4GP39/aipqfnG3+3t\n7UGv18NsNlPVPwmkOQuJZDybonbe5Ofno7q6Gt3d3SgtLUVVVRX+8Ic/0LI7+ezOCpKkUimUSiXu\n3buHjz/+mBq3WK1WLC0tYWFhIeNuchKJBAUFBeDxeHA6nXj58iUcDsdbe+yJRAIWiwVOp5MGNQkE\nggshDONyubQk3tbWhv7+fiSTSSwvL7+X58V5XZtarYZWq4VEIqGeGevr67Db7eDz+VCr1Whra4NS\nqQSfz6ettkAgcCFtpQk8Hg8ajQYajQY8Hg/hcBj7+/tvBDL9KUAEqoWFhWmJanU6nXj69ClKhCFc\nqgAAC5JJREFUS0tRWVkJmUxGw35u3LhBk+jsdjs9QPL5fAwODtLnNwnWKS8vB5/PRyQSwfz8PEZH\nR6HX62G1WtPWrrrwi/xZOBwOVXUTH/h4PE5vZqVSSVXewOsfjsPhwM7ODo3edLlcWQuQsNlsGB8f\nh06ng0ajofnmvb29KC8vRygUApfLhUKhQElJCU0kOj09xdraGh4/fgyTyYRAIJDxfhpR+kul0rfG\n8VZVVUEsFqOnpwc2mw1Go/GtwhSLxYKtrS243W56eiOL/nnh9/tht9vh8Xig1WrR0NAADocDnU6H\niYkJWK1WJBIJutkSi8UoKytDY2MjamtrUVZWhkgkgqWlJfzv//4vFhYW3giVyAaRSARWq/Vbx+KI\n/ScZ88lkJsN3weFwUFpaira2NvzsZz/DlStXcHJygsnJSTx9+hTLy8tZt2YmBwmy6Tw9PQWXy4VM\nJoNOp0N/fz8+/fRTKBQKavAyPj6Oly9fZlwX8z74fD5YLJYPyqPiXSDBZOm650mA2B//+EeEw2Hc\nvXsXWq2WmrURR0+NRoO6ujrE43HaBsrLy6PXlUql4HK5aGLd+Pg4lpaW4Pf703pou9CLPBnDIh7e\nQqEQIpEIFRUVUKvVaGxspD3sgoIC5OTkIJVKIRwOw+l0Qq/XY3t7G1arFZOTk2+Nu8wkZE68sbGR\nzuyLxWLIZDJUVla+9XsODw9pVOHw8DCsVmvGe6zkgVdQUECjI78OWTCB11WHqqoqxOPxb5QzNzY2\nsLq6Svv1h4eHNIIyFoudS/mTuJGtrKyguLgYtbW1aGtrg0ajgUQiocI14sYmlUpRVlaG+vp6nJ6e\nIhQK0V/CJ0+ewO/3Z+3kQxYf8p7eVk0gWgmdTkdHikiKWrZLshwOB1VVVbh27RqGhoYgFouxtraG\nFy9eYHR0lPqPZxNip8rhcOiGs7y8HDk5OWhpacH169fR2tqK7e1trK2tYWlpCdPT01mrCP4YyD30\nPoFeFxmhUAiJRJKWcj0ZBZ2ZmaHBVc3NzfRQqVQqcXJyguLiYuh0uje+l4wTk6CahYUF6PV6mt1h\nt9vP/Xq/zoVe5BOJBHZ3d7GzswOr1YrS0lIa7kJSlcgJkMfjIZlMIhqNwmAwYGpqCr///e/hcDiQ\nTCazZgxylkgkApvNht/+9rfw+Xz467/+a1RUVNC+zdc5G9gxOTkJg8GQlZEd4pKl0+loMtR3IZfL\n0dzc/NaSfGtrK+LxOA2qcTgcGBsbowEfwWDwva+XZFf/9re/RSgUwi9/+UtoNBpotVr89Kc/RTKZ\nxMnJCfh8PnVUzM3NhUAggN1uh8lkwsOHDzE9PQ2fz5e1USPipUDKwwcHB2/d4MlkMlRUVNBENy6X\ni8PDQ0QikayPz3E4HLS0tODKlSuQSCQwGo34/e9/j6mpKWxubmZ9BI2MmZETF0lou3PnzhsRrclk\nEi9evMBXX32F5eXlrGh6GN9OYWEhysvL30mU/WPxeDyYm5uD2+1Gc3Mzuru7cePGDTQ3N3/r95A1\nyWazQa/X44svvsDq6ioCgUDGKioXepEngqJXr17hiy++QEtLCxoaGqDT6ajbFNmhbm1twWKx0H6a\n0WjE2trahXJXIw/r7e1tuqg3NTWhsrIScrkcAoGARrGS/3Z2djA8PAy9Xp+1E08qlUIwGMTW1hbG\nx8dRV1cHjUZDYzcFAsEbmxQyOfA2SDUmJyeHxkSmUilI
pVJ88cUX57LIn56eIh6PY3NzE0KhELm5\nuWhqaqLiKYVCgVQqhUgkAp/PR0eKDg4OsLe3R6cX7HY7kslk1k7D+/v7MJvNCAaDUKlUGBoawubm\nJvb39wG8PoHy+XxcunQJly5dQlNTE8RiMebm5rC4uIj19fWslmaVSiWqq6vR0dEBrVZLRYzDw8Nw\nOBwIhUJZ34SQe4WkshHRJjkZ5ufnw+fzYXt7GzMzM9Dr9fS+uOiQMJrt7W2IRCI6fnkRkgrPm9zc\nXJrvni6SySSdFgqHw7TlGAgEoNVqaaWTxIfbbDa43W4EAgGYzWasrq5ieXmZ+qNkigu9yJPS++Li\nIra3t9HX14f+/n5IpVJaljk9PcXBwQGmp6cxNjaGiYkJeDyeC3FyfxskiGNhYQELCwtobW3FRx99\nhMrKShQUFOD4+BihUAh+vx/BYJAqL7NZ0jw+PobFYsHx8TFSqRT6+vrQ3d2NwsJC5OfnIz8/n/5y\nnRWznYWcSkOhEAKBAA1aOTo6QmtrK6qrqzE1NYX19fVzuWZiqbu4uAiHw4HOzk709vbi1q1bUKvV\nSCaTcDgcNJhIr9fDYDAgFotR5XG2S90WiwX5+flwu92ora3F3/zN32B2dpZ+RgKBABKJBIODg2hu\nbkY8Hsfy8jKeP39O882zBZfLhU6nw9DQEDo6OlBUVIS1tTUYDAYsLi5m7bq+zunpKfx+P51WKS4u\nhlKppE6UkUgEq6urGBkZweLiIux2+wdjIpNMJrG+vg6VSoW8vDx6+LlIB5/3gVQKM6k/IYeDzc1N\nbG5u0vCc27dvo76+Hjk5OTSh8NmzZzRjfnt7m27OM82FXuQJZAc1NzeH3d1djIyMQCQS0b+PxWLY\n29vD/v4+PB5P1kuAP4Td3V28ePECCwsLEAgENJ4xkUjg6OgI4XD4wszi+v1+am4zOjoKjUaDkpIS\nmtBGfL3Ly8uh1Wrf6I/t7e1haWkJa2trsNvtEAqFSCaT8Hq9KC8vh1gsTksJ9PDwEG63G3Nzc7BY\nLBgbG6OeA7FYDKFQCF6vF16vl2ZTX4QFHnh9X1ssFvzud7/DzZs38dFHH6Gnp4eWB4nFcCqVwtjY\nGObm5rCyskI/42xBIlqvXr2Ku3fvoqioCHa7HY8fP8bS0lLWruttpFIpbG5uQiwWQyAQoKurC42N\njfB6vXA6ndjZ2cHk5CTGxsawu7ub8eyC9yGRSECv18PlcmFsbAx2ux0ul+vCPE/eBzJNYrfb0dnZ\nmbXrIEJGo9EIuVwOkUiEeDyOUCgEp9OJYDCIw8PDrB46P4hFnsyQp8MDOdv4fL6sR4O+K9FolIbj\nEHMHlUoFpVJJR7aqq6tRU1OD2tpaOmYCgE44mEwmusiT2X+SN54OlfXR0RGOjo4QCoU+uHsnmUzC\n7XZjbGwMp6enkMlkbwgfj4+PkUgksLm5ieXlZbx48QLb29tZv5/y8vKo2LG1tRWhUAg2m4324S8S\nJycncDqdtNIUDAbh9/uxv78Pm80Gs9mMpaWlrFZFfiykXJ/NDV+6IPf9wsIC1Go1NjY2YLfbMy5K\n9ng88Hg8MBqNGX3dH0I252w+jO0w461wOBzk5uZS8Rop0xOTCGL/SUgkEggGg4jFYkgmk+BwOFR5\nSv6dH+sF/acM+UyJ1zuXy/2G8DEejyMajdJTQ7YNWioqKvDZZ5/h3r17uHXrFra2tjA6Ooq///u/\nh9lsvnA/Yy6Xi9zcXIjFYkilUkgkEiSTSSQSCSpg/FPsY3/I5OTkUAOcgoICmiWf7ZjiLPC9a/gH\ncZJnXDzOCgTfl4v20L9IkLYCaUl9CBDzGw6HA7/fj9nZWYyPj8Pj8VzIn/XJyQkODw9xeHiY8aAq\nxo+D6LXC4XBWzZQ+BNgiz2Awzp3T01NEIhE4HA48ffoUDx8+zPo8PIPx5wgr1zMYjHNFKpWitrYW\nCoUCUqkUy8vLNNr0QxGtMRgfCN+7hrNFnsFgMBiMD5OL4V/NYDAYDAaDwWAwGAwGg8FgMBgMBoPB\nYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8Fg\nMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAw\nGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYPwZ\n8v8AmDaALfzjUe4AAAAASUVORK5CYII=\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "tf.reset_default_graph()\n",
+ "\n",
+ "# Define our input pipeline. Pin it to the CPU so that the GPU can be reserved\n",
+ "# for forward and backwards propogation.\n",
+ "batch_size = 32\n",
+ "with tf.device('/cpu:0'):\n",
+ " real_images, _, _ = data_provider.provide_data(\n",
+ " 'train', batch_size, MNIST_DATA_DIR)\n",
+ "\n",
+ "# Sanity check that we're getting images.\n",
+ "check_real_digits = tfgan.eval.image_reshaper(real_images[:20,...], num_cols=10)\n",
+ "visualize_digits(check_real_digits)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "## Model"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Generator"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def infogan_generator(inputs, categorical_dim, weight_decay=2.5e-5,\n",
+ " is_training=True):\n",
+ " \"\"\"InfoGAN discriminator network on MNIST digits.\n",
+ " \n",
+ " Based on a paper https://arxiv.org/abs/1606.03657 and their code\n",
+ " https://github.com/openai/InfoGAN.\n",
+ " \n",
+ " Args:\n",
+ " inputs: A 3-tuple of Tensors (unstructured_noise, categorical structured\n",
+ " noise, continuous structured noise). `inputs[0]` and `inputs[2]` must be\n",
+ " 2D, and `inputs[1]` must be 1D. All must have the same first dimension.\n",
+ " categorical_dim: Dimensions of the incompressible categorical noise.\n",
+ " weight_decay: The value of the l2 weight decay.\n",
+ " is_training: If `True`, batch norm uses batch statistics. If `False`, batch\n",
+ " norm uses the exponential moving average collected from population \n",
+ " statistics.\n",
+ " \n",
+ " Returns:\n",
+ " A generated image in the range [-1, 1].\n",
+ " \"\"\"\n",
+ " unstructured_noise, cat_noise, cont_noise = inputs\n",
+ " cat_noise_onehot = tf.one_hot(cat_noise, categorical_dim)\n",
+ " all_noise = tf.concat([unstructured_noise, cat_noise_onehot, cont_noise], axis=1)\n",
+ " \n",
+ " with framework.arg_scope(\n",
+ " [layers.fully_connected, layers.conv2d_transpose],\n",
+ " activation_fn=tf.nn.relu, normalizer_fn=layers.batch_norm,\n",
+ " weights_regularizer=layers.l2_regularizer(weight_decay)),\\\n",
+ " framework.arg_scope([layers.batch_norm], is_training=is_training):\n",
+ " net = layers.fully_connected(all_noise, 1024)\n",
+ " net = layers.fully_connected(net, 7 * 7 * 128)\n",
+ " net = tf.reshape(net, [-1, 7, 7, 128])\n",
+ " net = layers.conv2d_transpose(net, 64, [4, 4], stride=2)\n",
+ " net = layers.conv2d_transpose(net, 32, [4, 4], stride=2)\n",
+ " # Make sure that generator output is in the same range as `inputs`\n",
+ " # ie [-1, 1].\n",
+ " net = layers.conv2d(net, 1, 4, normalizer_fn=None, activation_fn=tf.tanh)\n",
+ " \n",
+ " return net"
+ ]
+ },
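+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a quick optional sanity check (a sketch added here, not part of the original tutorial), the next cell builds the generator on dummy noise and confirms that it emits 28x28 single-channel images. The noise sizes used (62 unstructured dimensions, 10 categories, 2 continuous codes) are assumptions that simply mirror the values used later in this notebook."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional sanity check (a sketch, not from the original tutorial). The noise\n",
+ "# sizes below are assumptions that mirror noise_dims=64, cat_dim=10 and\n",
+ "# cont_dim=2 used later in this notebook.\n",
+ "with tf.variable_scope('generator_shape_check'):\n",
+ "  dummy_unstructured = tf.zeros([8, 62])\n",
+ "  dummy_cat = tf.zeros([8], dtype=tf.int32)\n",
+ "  dummy_cont = tf.zeros([8, 2])\n",
+ "  dummy_images = infogan_generator(\n",
+ "      (dummy_unstructured, dummy_cat, dummy_cont), categorical_dim=10)\n",
+ "\n",
+ "# Expect a [8, 28, 28, 1] Tensor with values in [-1, 1].\n",
+ "print(dummy_images.shape)"
+ ]
+ },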
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Discriminator"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def infogan_discriminator(img, unused_conditioning, weight_decay=2.5e-5,\n",
+ " categorical_dim=10, continuous_dim=2, is_training=True):\n",
+ " \"\"\"InfoGAN discriminator network on MNIST digits.\n",
+ " \n",
+ " Based on a paper https://arxiv.org/abs/1606.03657 and their code\n",
+ " https://github.com/openai/InfoGAN.\n",
+ " \n",
+ " Args:\n",
+ " img: Real or generated MNIST digits. Should be in the range [-1, 1].\n",
+ " unused_conditioning: The TFGAN API can help with conditional GANs, which\n",
+ " would require extra `condition` information to both the generator and the\n",
+ " discriminator. Since this example is not conditional, we do not use this\n",
+ " argument.\n",
+ " weight_decay: The L2 weight decay.\n",
+ " categorical_dim: Dimensions of the incompressible categorical noise.\n",
+ " continuous_dim: Dimensions of the incompressible continuous noise.\n",
+ " is_training: If `True`, batch norm uses batch statistics. If `False`, batch\n",
+ " norm uses the exponential moving average collected from population \n",
+ " statistics.\n",
+ " \n",
+ " Returns:\n",
+ " Logits for the probability that the image is real, and a list of posterior\n",
+ " distributions for each of the noise vectors.\n",
+ " \"\"\"\n",
+ " with framework.arg_scope(\n",
+ " [layers.conv2d, layers.fully_connected],\n",
+ " activation_fn=leaky_relu, normalizer_fn=None,\n",
+ " weights_regularizer=layers.l2_regularizer(weight_decay),\n",
+ " biases_regularizer=layers.l2_regularizer(weight_decay)):\n",
+ " net = layers.conv2d(img, 64, [4, 4], stride=2)\n",
+ " net = layers.conv2d(net, 128, [4, 4], stride=2)\n",
+ " net = layers.flatten(net)\n",
+ " net = layers.fully_connected(net, 1024, normalizer_fn=layers.layer_norm)\n",
+ " \n",
+ " logits_real = layers.fully_connected(net, 1, activation_fn=None)\n",
+ "\n",
+ " # Recognition network for latent variables has an additional layer\n",
+ " with framework.arg_scope([layers.batch_norm], is_training=is_training):\n",
+ " encoder = layers.fully_connected(\n",
+ " net, 128, normalizer_fn=layers.batch_norm)\n",
+ "\n",
+ " # Compute logits for each category of categorical latent.\n",
+ " logits_cat = layers.fully_connected(\n",
+ " encoder, categorical_dim, activation_fn=None)\n",
+ " q_cat = ds.Categorical(logits_cat)\n",
+ "\n",
+ " # Compute mean for Gaussian posterior of continuous latents.\n",
+ " mu_cont = layers.fully_connected(\n",
+ " encoder, continuous_dim, activation_fn=None)\n",
+ " sigma_cont = tf.ones_like(mu_cont)\n",
+ " q_cont = ds.Normal(loc=mu_cont, scale=sigma_cont)\n",
+ "\n",
+ " return logits_real, [q_cat, q_cont]"
+ ]
+ },
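+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Another optional check (a sketch added here, not part of the original tutorial): run the discriminator on dummy images and look at what it returns, i.e. one real/fake logit per image plus a 10-way categorical posterior and a 2-dimensional Gaussian posterior over the structured codes. The `logits` and `loc` attributes used below are assumptions about the distribution objects created via `ds`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional sanity check (a sketch, not from the original tutorial).\n",
+ "dummy_imgs = tf.zeros([8, 28, 28, 1])\n",
+ "with tf.variable_scope('discriminator_shape_check'):\n",
+ "  dummy_logits, [dummy_q_cat, dummy_q_cont] = infogan_discriminator(\n",
+ "      dummy_imgs, None)\n",
+ "\n",
+ "print(dummy_logits.shape)        # Real/fake logits, one per image: (8, 1).\n",
+ "print(dummy_q_cat.logits.shape)  # Categorical posterior logits: (8, 10).\n",
+ "print(dummy_q_cont.loc.shape)    # Means of the continuous posterior: (8, 2)."
+ ]
+ },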
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### InfoGANModel Tuple\n",
+ "\n",
+ "The InfoGAN model requires some extra information, so we use a subclassed tuple."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Dimensions of the structured and unstructured noise dimensions.\n",
+ "cat_dim, cont_dim, noise_dims = 10, 2, 64\n",
+ "\n",
+ "generator_fn = functools.partial(infogan_generator, categorical_dim=cat_dim)\n",
+ "discriminator_fn = functools.partial(\n",
+ " infogan_discriminator, categorical_dim=cat_dim,\n",
+ " continuous_dim=cont_dim)\n",
+ "unstructured_inputs, structured_inputs = util.get_infogan_noise(\n",
+ " batch_size, cat_dim, cont_dim, noise_dims)\n",
+ "\n",
+ "infogan_model = tfgan.infogan_model(\n",
+ " generator_fn=generator_fn,\n",
+ " discriminator_fn=discriminator_fn,\n",
+ " real_data=real_images,\n",
+ " unstructured_generator_inputs=unstructured_inputs,\n",
+ " structured_generator_inputs=structured_inputs)"
+ ]
+ },
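+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "If you want to see what the extra information looks like, the next cell (an optional addition, not part of the original tutorial) prints the InfoGAN-specific fields of the model tuple. The field names `structured_generator_inputs` and `predicted_distributions` are assumptions about TFGAN's `InfoGANModel` namedtuple."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional inspection (a sketch, not from the original tutorial). The field\n",
+ "# names below are assumptions about the `InfoGANModel` namedtuple.\n",
+ "print(type(infogan_model).__name__)\n",
+ "print(infogan_model.structured_generator_inputs)\n",
+ "print(infogan_model.predicted_distributions)"
+ ]
+ },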
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "## Losses\n",
+ "\n",
+ "The loss will be the same as before, with the addition of the mutual information loss."
+ ]
+ },
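+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a brief reminder (an added note, not part of the original tutorial), the mutual information term is the variational lower bound from the InfoGAN paper:\n",
+ "\n",
+ "$$ L_I(G, Q) = \\mathbb{E}_{c \\sim P(c),\\, x \\sim G(z, c)}\\big[\\log Q(c \\mid x)\\big] + H(c) \\;\\le\\; I\\big(c; G(z, c)\\big) $$\n",
+ "\n",
+ "Here $Q$ is the recognition network defined above (the categorical and Gaussian posteriors returned by the discriminator). Maximizing this bound keeps the generator from ignoring the structured codes $c$, and `mutual_information_penalty_weight` below scales the corresponding penalty relative to the adversarial losses."
+ ]
+ },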
+ {
+ "cell_type": "code",
+ "execution_count": 17,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Generator loss: 3.383600\n",
+ "Discriminator loss: 2.162185\n"
+ ]
+ }
+ ],
+ "source": [
+ "infogan_loss = tfgan.gan_loss(\n",
+ " infogan_model,\n",
+ " gradient_penalty_weight=1.0,\n",
+ " mutual_information_penalty_weight=1.0)\n",
+ "\n",
+ "# Sanity check that we can evaluate our losses.\n",
+ "evaluate_tfgan_loss(infogan_loss)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "## Training and Evaluation\n",
+ "\n",
+ "This is also the same as in the unconditional case."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Train Ops"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "WARNING:tensorflow:update_ops in create_train_op does not contain all the update_ops in GraphKeys.UPDATE_OPS\n",
+ "WARNING:tensorflow:update_ops in create_train_op does not contain all the update_ops in GraphKeys.UPDATE_OPS\n"
+ ]
+ }
+ ],
+ "source": [
+ "generator_optimizer = tf.train.AdamOptimizer(0.001, beta1=0.5)\n",
+ "discriminator_optimizer = tf.train.AdamOptimizer(0.00009, beta1=0.5)\n",
+ "gan_train_ops = tfgan.gan_train_ops(\n",
+ " infogan_model,\n",
+ " infogan_loss,\n",
+ " generator_optimizer,\n",
+ " discriminator_optimizer)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Evaluation\n",
+ "\n",
+ "Generate some images to evaluate MNIST score."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Set up images to evaluate MNIST score.\n",
+ "images_to_eval = 500\n",
+ "assert images_to_eval % cat_dim == 0\n",
+ "\n",
+ "unstructured_inputs = tf.random_normal([images_to_eval, noise_dims-cont_dim])\n",
+ "cat_noise = tf.constant(list(range(cat_dim)) * (images_to_eval // cat_dim))\n",
+ "cont_noise = tf.random_uniform([images_to_eval, cont_dim], -1.0, 1.0)\n",
+ "\n",
+ "with tf.variable_scope(infogan_model.generator_scope, reuse=True):\n",
+ " eval_images = infogan_model.generator_fn(\n",
+ " (unstructured_inputs, cat_noise, cont_noise))\n",
+ "\n",
+ "MNIST_CLASSIFIER_FROZEN_GRAPH = './mnist/data/classify_mnist_graph_def.pb'\n",
+ "eval_score = util.mnist_score(\n",
+ " eval_images, MNIST_CLASSIFIER_FROZEN_GRAPH)\n",
+ "\n",
+ "# Generate three sets of images to visualize the effect of each of the structured noise\n",
+ "# variables on the output.\n",
+ "rows = 2\n",
+ "categorical_sample_points = np.arange(0, 10)\n",
+ "continuous_sample_points = np.linspace(-1.0, 1.0, 10)\n",
+ "noise_args = (rows, categorical_sample_points, continuous_sample_points,\n",
+ " noise_dims-cont_dim, cont_dim)\n",
+ "\n",
+ "display_noises = []\n",
+ "display_noises.append(util.get_eval_noise_categorical(*noise_args))\n",
+ "display_noises.append(util.get_eval_noise_continuous_dim1(*noise_args))\n",
+ "display_noises.append(util.get_eval_noise_continuous_dim2(*noise_args))\n",
+ "\n",
+ "display_images = []\n",
+ "for noise in display_noises:\n",
+ " with tf.variable_scope('Generator', reuse=True):\n",
+ " display_images.append(infogan_model.generator_fn(noise, is_training=False))\n",
+ "\n",
+ "display_img = tfgan.eval.image_reshaper(\n",
+ " tf.concat(display_images, 0), num_cols=10)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Train steps"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 33,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Training step: 0\n",
+ "Time since start: 0.025766 m\n",
+ "Steps per min: 0.000000\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAf8AAAFBCAYAAAB5Maa7AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsveeSnNexpvuW9951tW+ABpIoKbT/ae8L2bczFylSogPa\noqvLe2/mR8+TyCpS5MSJOTE4cWpFMAQB3VXrW1/aN9/MJR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3X\ncR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3X\ncR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3X\ncR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3XcR3Xcf3/dQX+b2/g19b/+B//Y3d/f69vv/1WtVpNiURC\n3W5X8Xhc5XJZzWZTk8lEqVRKwWBQu91O1WpVwWBQt7e3isViqtVqarVa2u12ur6+Vrvd1g8//KCr\nqytlMhkNBgMlEgnl83nd399rNpvp9PRU8/lc3W5Xp6enikQienh4UCKRULlc1mg0UiAQUK1W02Qy\nUa/X09///ne9efNGu91OnU5HT09P+sc//qF+v69arabZbGZ/DofDenl5UTgcVjKZ1Hq9VjweV7Va\nVa/XU6PRsJ9rNpsql8s6OTnRv/71L81mM11cXGi9XmsymahQKGi1Wunu7k7ZbFbValWdTkeBQEAn\nJyfq9/tqtVoql8uKRqOaTqfKZrPK5XJqNBrKZrP6r//6L9vH8/Oz3r9/r3/84x9KpVIqFov2ebVa\nTcPhUN1uV6lUSuFwWOv1WsViUdlsVnd3d9psNvZz4/FY19fXWq/X+te//qVSqaRqtarBYKBAIKBi\nsahGo6F2u63z83MFg0G1222VSiXlcjk9PT1ps9moUqloPp9rvV6rUqlou93q5eVFf/3rX/WXv/xF\n2+1Ww+FQjUZD3333nR4eHnRyciJJarVaKhaLSqfTajabWq/XSqVS2m63CoVCOjk50Xw+1+3trQqF\ngtLptDqdjpLJpG5ubvTzzz/r+flZNzc3ikQiGgwGymazikQiur+/VygUUr1e12Aw0Gw208nJiZbL\npR4fH1UqlZTJZDQajZRIJFSr1dRut7XdbvX3v/9dtVpN2+1WzWZTd3d3+sc//qHNZqN6va5ut6v5\nfG6f9/z8rFQqpXg8rvV6rXQ6rWq1qqenJw2HQ9VqNa1WKzWbTV1dXSmZTOq7775TNBrV1dWVhsOh\nlsulisWihsOh7u/vVa/XlUql7HlPT0/19PSk8XisSqUiSRqPx6pWq4pGo2o0Grq8vNR//ud/arfb\naTabqdFo6IcfftC3336rer2uRCKhVqulZDKpWq2m5+dnTSYTpdNpBQIBbTYbVatVhUIh3d/fKxaL\nqVqtqtlsarfb6c2bN6ajl5eXymaz6vf7isfjymazenh40Hw+1+npqZbLpXq9nur1usLhsJ6enhSL\nxVQsFjWZTH5XRz9+/Gg6Wq/XNZ1Of/F5oVBI6XRaq9VK8XhctVpN3W5Xz8/Pe7pcrVZ1dnamf/7z\nn5pOp7q8vPxVHc1kMiqXy+p2u7+po7lcTrlcTi8vL/9bOtpqtex5B4OBer2eEomEwuGwdrudSqWS\nstmsbm9vtV6vdXJyYjp6c3Oj1Wqlf/7znyqXy6rVaur3+6ajLy8varfbOj09NR0tFovK5XJ6fHzU\ndrtVtVrVYrHQarUyuX5+fjYd3e12v6qju91OrVZLlUpF2WxWLy8vWq1WSiaT2m63CgaDqtVqms/n\nuru7M1vTbDaVTCZ1fX2tDx8+qNFo6OrqSuFwWP1+X7lcTrFYTLe3twoGgzo9PdVwONzTqcfHRxWL\nRWUyGU0mE8XjcVUqFbXbbe12O9PR3W6nRqNhOrrdbnVycqJut6vFYqF6va7VaqXn52clEgnFYjGt\n12tls1nVajU9PDyo3+/r5OREq9VKrVZLV1dXSqVS+u677xSJRExHV6uVSqWShsOhbm9vdXp6qlQq\npXa7rWQyqXq9ro8fP2oymahcLmu73Wo0GqlWqykej+vl5UWXl5f6+9//bn70v//7v3/Tv4f/X/Th\n/49XLBZTPB5XIpFQoVBQKpUyA/bFF19os9koGAzq7OzMHEA6nVY4HFahUFAoFFI4HNZkMtF2u7Xf\nTyaTqlarSqfT6na7kqRMJqNkMml/Xi6XGo/HKhaLKhQKGo/HCgaDisfjZlgymYxWq5VWq5Wq1apu\nbm40Go20Wq3U6/XMyVSrVU0mE+12O11cXCgcDms0GimTyej09FT9fl+SlMvltN1uNZ1OFY/HtVgs\n1Ov1VCgUTOhjsZguLi7UbrfV7/cVjUbtnNLptCkQApVIJDSZTJRIJBQKheyzc7mcms2m4vG4bm5u\nFI1G1e/3zVFx5jiVcDism5sbPT09aTab6fz8XNFoVJ1OR6lUSqlUSvl8XqvVStFoVPP5XL1eT+/e\nvVM0GlU6nValUtHp6anG47E2m41SqZSSyaTi8biSyaQ2m40mk4murq50c3OjyWSi2WymRCKhzWZj\n73C1Wmm9XqtQKOjt27cWjBHIpVIpFQoF7XY7zedz1et1lctlzedzbTYbXVxcaDKZaLFYKJPJKBqN\nqlAoKJlMKhgMajAYKBQKmVFFuTabjV5eXpTP55VOp5VIJBQMBpVOp9Xv982objYb20s0GlUwGFQ0\nGlU+n7efOz8/1+npqUajkabTqRKJhOLxuILBoCn1YrHQ1dWVxuOxOp2OarWaCoWCer2eotGoUqmU\nMpmM6cp0OlWn09Hbt28t4EmlUrq8vNSPP/6o2WxmMsnvJ5NJffz4Ufl8Xm/evLEzwtEEAgHF43HF\n43HtdjtlMhl99dVXGo1G6nQ6Go1GSiaT5oRSqZQWi4VKpZK+/PJLLRYLBQIBXV1dabPZqNfrKZPJ\nKBwOK5/PKxwOKxqNajKZaLPZmKMleCAYk6RsNmuBfiaTUa/X03Q6NUc0Go0UCoWUTCY1n8/t59br\ntQWlhzqKbFUqFY3HY63Xa52dnSkUCqndbiubzer8/HzPTmw2G3tny+VSg8FAlUrFHGw0GtX5+bk6\nnY4Gg8Ev9DOXy6nVau3p6Gw2UzqdVjAY1Gw2UywWUy6XU7vdViKR0Js3bxSJRExHOfNCoaBKpaLl\ncqlIJKK3b9/q4eFB0+lUZ2dnisViGgwGymQyymazyufzWi6XisViZl/evXtn8lwul3V2dqbhcGjv\nYzKZaDqdKp1Oa71eazqd6vr6Wm/evLF/SyQSWq/XWq1WSqVSWq/Xlhh88cUXezqaTCaVTqdVKpUs\niDw5OVG1WtV4PNZ2u9XFxYXG47Fms5npaD6fVyqVUiAQsHddKBTU6XQ0Ho9Vr9e1Xq/VbDYVDoeV\nyWSUSqUUCoVMPpADdBT7M5/PFYvFT
Ec3m43Oz89Vr9c1Go00mUyUTCaVTCYtKFqv11oulyZTrVZL\n1WpVhULBktRMJqNcLqfdbmc62mq19PbtW5VKJaXTaaXTad3c3Oj777+3IAQdzGQy5gtyuZzevHmj\n5XIpSWYLOaNEIqFms6lsNquvv/5ao9FI8/n8d/3sZ+n8t9utAoGAotGoCe5isVA2mzVDvdvtFA6H\ntVqttNlsFIvFFA6HTaD+9re/abVa6eXlRb1eT51OxwwNWUIoFNJ8PtdisdB6vVYoFFIikVA6nTYF\n6ff7qlQqOjk5MccRDAbNSGO87+/vTXECgYBisZiy2aztMZVKabfbKRgMKhgMKhQK2f9PpVKWoV1e\nXioWi1mg0Wq1LEjYbreGVoRCIS0WC/u+aDRqUXO/3zcDfXNzo3g8rn6/r+12K0lar9dm0Gezmdrt\ntmazmSQpEomYQ59OpwoGg5ZJ8F4CgYC22609493dnSKRiP7yl78oHo9rtVppMploMBiYwY1EImaA\nttutGeZoNCpJSiaTWq1W6na7GgwGCofDqtVqkqTZbKZQKKT1eq3ZbKZwOKx0Oq1Go6HBYGDOCkPB\n2aTTacViMTvrUChk+ycjnc1murm50dnZmRaLhZbLpcnMYrGwvVUqFcViMS2XSy2XSwtc2PdsNtN0\nOrVIv1araTQa2ZkRLCYSCUnSy8uLGbxIJKJEIqFSqWRGNpFIaLFYWCDL+YfDYRWLRX38+FGr1Upv\n377VcDhUv9/XYrGwoCUajSoQCCibzZrsLJdLrVYrk3MMaq/X02Aw0HK5VKFQ0Gw20/Pzs3a7nSRp\nuVwqEAgon8+r3W6r1+uZIeJd8HuZTMbOXJICgYB2u52hXASI6OhisVCj0fiFjkajUdNRzo7ghXP3\nOkoGPpvNNJ/P7bym06lisZhisZju7+81Go1MFgiGw+Gw5vO5JQHRaFThcNieIRgMKpvNmlG9urpS\nLBbTeDy27JCMWZISiYSy2awCgYCWy6Xm87lyuZwF15vNRv1+X91uV6PRSNfX14rH4xZohEIhLZdL\nc8LT6VTtdlvT6dSQK4KuyWRi/59zj0ajikQipqOZTEbb7VbxeFzffPONYrGYVquVptOpJTvL5VLB\nYNBs7XK5NLuI/BEY9no99Xo901EceSgU0maz0XK5VDQaVSaT0fPzs/r9vlarldmXbDZrCQk66m16\nIBAwuxgIBDSfz/X27Vudn59rPp9ruVxaErRYLLTb7RSPx1UqlRSJRAyJCIfDZp/RT4/ElstlDQYD\nSyYJXBKJhNledDQcDisWi6lUKpm+x2IxzWYz2y92MRQKKZPJaLfbKRAI6Ouvv1a/39dgMDB5x8Zt\nt1ul02ltt1vtdjuzL9hdzqDf76vf72s+n6tQKGg+n6vVapnPJNguFApqt9t6eXn5XT/7WTp/HEQw\nGFQkEjFDG41GTUAikYgZFn42GAyaoymXyyoWixoMBppMJprP5wqFQtput6YUkuz3MQiRSESxWEzz\n+dz+I8tDuDebjTluHAo/u16vJck+i73z3QQpu93OhCMcfn0N6/V6L1Jfr9cGCRFE7HY7yypZgUDA\nvm+xWFiWAGRJUIDDZX8I33w+N+PK32NkJdlzRiIRSfrFefmAiqh2sVhoPp/b+WJE2C8rFApZhrxa\nrQxGz2Qylv2xb86L31mtVrZ35CIWi1kAw369M+IzIpGIBRQgQpQ6+v2+KSDf+2vvLRKJKBqNKhQK\naTweazwea7FYKBKJWHCBQfHystvtNJ1OTV6QgVgspkQiYZ/vAy++NxgMKhaL2f/P5XKKRCLK5XLa\nbDYajUZ7esTPY1z5DOQcRzSbzbTb7SyTI1BAT5CLzWZjBvhQR1OplOkon4+O8n7QoWg0alnzcDi0\n8+DM0RX0mkAE3Y9EInsGPRQKmVNBRwm8/p2OIuebzcYQMs6A5+C8cEqbzcayOhAvAtBIJLIni+yb\nffB9s9nMAmOy8UQiYXqyXq/3fme73Wo2m5kzRvbQbVY4HFY8Hv+FrPMZoFpeR2ezme0TO8NZYyt5\nB9Fo1OzLbDZTNptVJpNRt9vdO3PknCDm0L4QnHgdxZ6zb54HHSIAJ7sejUaWsG02G9MfbMyhvASD\nQU2nU0vgCGZwuOgo8iLpF2cejUb3zpyf97Ljz5w9FAoFRSIRFQoF01F+dj6fW7Lo5QW9ikaj2m63\nGgwGmk6nhgwQUB/adOwxydxvrc/S+eOsN5uNxuOxwuGwlsuldrudXl5eLCvg4UOhkGWERGM//PCD\n/S6Rezwet1or9aFKpWLQJoaK7/WGEyhqNpup1+tpt9upXC5rsVhY2aHX69m++Qz/Z+CqYDBojnG3\n22kymVhG1mg0zIDj5MrlsqTXOuxkMtFoNFIul1M6nbYMD0h8sVhYtEpWisDzu5RUBoOBttutcrmc\n1ZrJrgk6iDoDgYDBdUSZq9XKnmu1Wunnn39Wv9+3M4vH41YvJDuKxWIql8tKJBKWnXD2oCrRaNRq\noD5aD4fDtgdKH0T1KO9kMrHPpeyTzWYtI+H7xuOx5vO5oSJ3d3d7AQrnO51ONRqN1O/3dXZ2plwu\np2QyuYc6rVYrC7aI+Pl8Sep2uxZMATMWCgVTaDJ8jOpyuTQUqVwum+FCNoFSkZf1em2GLxgM6uTk\nRNFoVK1WS71eT9vt1ngNOHMcDJB8MBg0eUEOyCoJMFqtlmVkzWbTEIHZbGbPD5KQSCRULBbNmEWj\nUTt/guuffvpJ0+l0D6WIxWLabrdqtVoaDAYqFAp25qACOJTRaGTZXTAYtFLKdDpVt9s1HUUHMpmM\n+v2+fc56vbZ9U5v2Muah08lkYg4e1AYDHQqFjCtBSWI6nVomGo/HzYlxHpQDyaZ9Ft7v95VIJJRM\nJg2xy+Vy6nQ6hrjM53NDa7Bp2+1WxWLRUEnKFP1+3/T2/fv36na75qzi8bjOz88ViUT2SoqVSsWc\nmiQLTKbTqaRPpdn5fG5oCzqaz+e13W7V6/XM1jQaDbNRo9HIAljkmtICskc5l8Cu0+kYbwFnThlu\nPp9rPB6r1+vp9PTUIHOQFwL46XS6xyvgzCWp3+8bAjebzRSPx1UoFNTv983pLpdLC9hAQLbbrdk4\nbDrPiK15eHjQZrNRPB63gOb8/FzhcFidTmdPR30AzXltNhsNh0MLqAja0FH8m9fRYrH4u342+Ls/\n8X9hYQyWy6VFy2QiREc4VUl7AooTAZqMRCIGRXooi+xB0l7ERmQK8QaYmMyG/zD+PpvwUTYwWyAQ\nMLiOPfJnoCKf1Y7HY4OigM54JoTwEPXwmShwIBkkAUw6nbZsOhgMmgHFSCBYQGAQzIC+fQbHdwFz\nSbJaKpkENTV/Rj4i5j/eH8+ay+UM1YjFYlYnTCaTFtV64Scil2TOm3IOETSGi+9j3/y7N6ThcNjK\nNZyRNzicuc8qgY+pq3N+qVTK6u+cOc7dIyecIcEpRkTSLzIqzhq5hbxFBsIeyIq9PvHsoApwMs
rl\nsp1tJBKxmmM6nbY9ceY4G4zYcrk0VOBQRw9lHb2VtKejkOvILIGdyTgP9SYUClltHHid7An9JEAj\nmEVeJNmzomO8G84H58O+PWoiSaPRaE9H0+m0PRPvjjNHP1mcr9dRZIg6MPVl4GWvoyBevFeCT19e\n8eeEDJHhdrtdKymiozhdskV+DxmTPqEKoJIEyQS1yAt74iwIrLC1oISRSMQSJX++HjnwqAlBDNwi\nUB7kEtTCy5lHT9BRHL8/czgCv2YXCfD4LN6Zt4uHSJM/8/V6rX6/r+FwaD4BHeW7kAuPiKKjyHSx\nWNwr/fmyIwHzfD63gAZZ/631WTr/fD5vzpp6IkLk4XAPj3hYbrVa2QGEw2GVy2WFQiFjS4ZCIeXz\neSNU8LMYxGQyqYuLC52fn1tNvVgsGiP29PRUsVjMsmUyYOo3CHKxWFQ0GtVoNDLokIwQo4Lh4j+c\nJXXlUqmk+XyuTqdj6AZ1XJwqRjuVSqlcLuvm5ka5XE6z2cwCmWq1qtPTU52cnGi73Wo8HptxASGB\nrQoBCCUGZuIZcVpe2QgUMEz5fF6JRELtdlvD4VCSjFy0Wq0sCOPME4mEqtWqLi4uzADncjmVy2VV\nq1WdnJwonU5rOBxqMplouVxaBpfJZExx4YXgEMkYydQ4c2/U2TvBYrVateyTM4dIChIB4gTv4fr6\nWuVy2ZCHYrGoSqWiarWqer2u3W5n8gI6Ew6HjQvC95LlYywpQRCc+cCFzG42m1l9mOy22+1quVwa\nf4PMBXnGCVWrVX311VeWjSeTSZVKJdVqNdXrdRUKBU2nU3v3wPMgNxAzs9msGUwg5eFwaAGoX+yF\nYIJMJRgM6vn52d5dOp2270AnCMTz+bwuLi50enpqQWChUDAdrdfrikajlomDZvka62azUT6fN5QJ\nEi0ZJ7Vu5J26MAEKiEs+n9dsNlO32zVuQ6FQsCCMgAl5KRQKur6+VjabtdoxBD74ImR7XkcppVBa\n4r2uVisLQEDz0E/+F+eP7gQCAWPGwysCXodvslgszMYSnNfrdV1dXe2VQOhKOj09VTabtfMjM4Xz\nROkTFIoghO8jyyfho6RGFsyZg0ys12s1Gg3LfvP5vOkPwT/ISzab1dnZmYrFoubzucHwnHm9XjeU\nk2cfDoeGdOF0K5WKJQyUczudjqGW6Cky74NPSOdeRz3Bj4DA88bo7nnz5o3xZeB9lUol65YYj8ca\nDod25oPB4Hf97Gfp/AeDgTnJ0WikbrerVqtlLG0IPJB1gC1h/FMfwkBVKhWVy2Vls1m9fftWl5eX\nBudwiGScKDEkGOm1dgpcSHZDXY9goVgsKhKJaDQa2cseDAbqdrt7sCpZYS6XMwNIBE4nQT6fN8Ti\n9PRUhUJBJycn+utf/6psNquPHz9aIFSr1ZTL5RQKhTQajSybIkMHtqaUcph1ptNpg+lg2XPmnU7H\nyGdA+RCXMDKZTMaMUqlUMrIMDiSfz+v6+lrv3r0zeJxss1QqmTGDoEi0DWzoDQmOhGAB4zUajaws\nA0Tfbrf3yDvAkdIrhEvJAOi1WCxaJnB6eqpSqaRCoaB3796pVqup0Whos9mYU0deQA2omeOsut2u\nwYw+AyGSz+fze+UB2tKQPVAPmOu5XM6cKyUsuhUoNeRyOZ2cnKhQKOj8/Fx/+9vfFAqF1Gq17GfL\n5bIFc8Ph0IimLMiDZJwYf2Q9m80qnU6bI1ksFns6Op/PjaUNFyESiRhsicHEQeJcqf+nUind3Nzo\n8vLSdCmbzZoOgw7A9AZpIKtFXnyQw5nzfXzufD63FtZ2u22OyOsowXg8HjfuiEcF0+m0zs/PLVD9\n5ptvjOhGjb1SqViAimGGqAbZkD1RSvIcAWwCOkqGR+cFXT3sD/IfyA82k04KOhMqlYrq9bp1fHzz\nzTfGYYCVj13zOoq8AH378hX6C5qXz+eVTCbNJqIbrVZL7XbbeEnIWDabNcgfOcG2E4BHo1HVajUV\ni0UVi0XTUVp6ceq8I8qGyAXIYL/fN/3z8kJHTLFYVCAQMPuJfaWbg8BCemXgZ7NZSwY582g0uudf\n8vm8Tk5OlM/ndX5+rv/4j/+w1m5kCx2l1j8cDveQBpAn/JHnOyG3pVLpd/3sZ+n8fa0DqB4jKskg\nFpw7GSSGhMyA7IkMVZIJDM6C6JKMymew7MFn4zCP+Xe+D6KNr6X6fQNhAdEB7ZEREC0C/ZGZgmjE\nYjFr4SFiPMy+YYry/eydSNZDqtRhYcNyrpC92DswLf9GdsRzkh0RjFHL83vHKYGQHO7bnxXvlogZ\nZeK8MfaHZ+6f138OZw6Ry5c2OPNEImFBDEgS+6/VakZqAhngufhskCO+EziPv2ffnpgEGc7/u693\nStojYdElIH1io8Ocxpn7M0+n07q8vJQk42zwXLxPZMGflZcRng2ZxbhQk0Qm+B0v6zwHdU5PhPWI\nCVkp72K321mLKxmclxdJBlN7ohbsda+nnK1nfXtCILLO53h5IRAgw6XEA1oDCxt5ofwEKtjpdEzH\n/Jn70iP78Prm/6N8gKxLMt1GZjwZlr+npOlLmUDFhULBZAndDQReOznOzs4sc/Ryzvv0RE90FPTR\ny40nJQLxs+/DUiny4nUUhAV54cwJoLxND4fDqlarlk1D+P53SPChjnpZ9zbN66iXFc7A2ynKhV5H\n8UUgr6lUyp4HefE6OhgM9vSAd+1RJu9XDuWFM/dt8r+3PkvCH0QLHiKXy+ni4kLRaFQfP35Uo9Gw\nqJ/BP0CSCB31kfF4rPfv3+vu7k739/dGvuJge72eHh4eNBwOrU2sUCiYEwM+wnBQy+r3+xZpQggk\n+0KAgeF9VsqQn/V6bSQXonkifZSFbOTDhw/2HRjYxWKhbrern3/+WbVazeo//D5ZNm1dOElqZzjx\nzea1xx6ICWOTTqdVr9etrebl5cVKD2SugUDAhJbvhVX98PCg0WhkQ2VQGuC0p6cnvX//3vZNr7gk\nTadTUyYQB57HZ/QMBiEb9fMeLi8vFQwGbd+wrMmkyGyJ9KkD93o9/fjjj3r//r2en58tSyNwabfb\n+vDhg8LhsCEufCe934FAwGBOf+aTyUTSq1Pn3eRyOcsqmYngUZxms2noDhm1ZzrDh1mvX4fAtFot\n3d/f70HOQMc8GyUGzt7XTqnxQ+L0XA4G+QA9wtYGhjw/P1csFtPT05MN4gI+LxaLVqYgeENPJpOJ\nPnz4oIeHB2sNo47Je6eVloChUCjYXsfjsZWZ+L3FYqHBYKBWq2XBHm1ZXkcpAXgdBSmhCwAdpYWN\n4Gu326ndbuuf//yn7u/v91CF5XJpwchPP/2karVqjpCEA13wbG+C3sFgsBdogwZls1lrGQSGXywW\npqPdbtdmB5D5Qh7l+6VXJvvt7a3G47EeHh50enpq3CRsaaPR0IcPH6xcR799MBg0xnwqlTJyG4RF\niInhcNjOgn50OBKRSESXl5cKhUI2TIh3RxkJe8E5E
r8vJS\no9FIxWLRGNzAwrVazeq8ICSpVEqVSkXX19e2/RFi4+7urqrVqnZ3d5XJZCyJaLVaur291adPn0xH\nfetevV5XrVYz7gd3EQgsR/N+S0cl6cuXL0ZywwFSn8dhoWvrOloqlUxHU6mU1aJDoZASiYRCoZAh\nohcXF6ajkowLsrOzY4N/mOZHKbbf7+v29tbI3wzK4Ts1m00Vi0UFg0Eb546jDAaDymQy1jmEjn74\n8MH4Stwfg87m87npaDAYVLPZ/JWOgiwUCgUbAQ9PCMQrGo2qUqkYT2Uymej6+lpnZ2eKx+P68uXL\nr3R0d3fXdJThUYPBwGzIxcWF8Zq8jmKTBoOB8vm8oWTf+3mVzp+oZjgcWg2s3++rXC7rz3/+sySp\nWq2aQ+ChE1H5ehEwCtOmcP60Z2QyGRvgAFt4PB4b0/3u7s6mMgFdAw1Fo1FjoQIpecgRdutoNNLb\nt28VjS4nOm1sbJiikb3BpmfwB8oNgQTSFVFoPp+3gAdEhOlwkOMajYbVF8k+IpGI1aEgNzLDANid\nWl6/31cymdRPP/1ky4kQtl6vZ8OEyKyBoagzU5vN5XIqFosGsZKBS7IgZjQa6fDwUOVy2dawwmpl\n6BFZDjMLiJhh6U6nU8s0ISCVy2V7vjiE4XBoGRrOlPcAMYA9e3BwoPF4rHq9bjsAaMMiuyS429nZ\n0dXVlb0WY+HLQ5lMRul02tofUWIMFfLxxz/+0dp8UqmUdnd3LWBl2uR0uhxdzXf39d/d3V2dnZ3p\nl19+WTHaLFLZ2Niw6Zbv3r2zHm6yfWbLc3aInEDbGGDIkAQX5XJZf/rTn7RYLFStVpVMJk1H4TUg\ng+s6OpvNtL29bQFmu922IIsSE7L8+PioH374QbFYzLo54vG47Vng7LHYcsJjLpczWDYajRrESgAx\nmUz0ww8/WBmEe8YZ0n5L2YnAHsQAHT06OjIdhfDG/AECGghcECt5Pd0h6BUTHnlu6Nd4PDaHNRgM\nlEql9Kc//UlfvnyxVkCvo5ubm8aS922hPAuSkWKxaHNBYLsTCEBcLpfLOjo6Ur1eNwIpZUCvo/v7\n+yoUCisDbiQZh4Vg4uTkRKVSyVA4AqTRaGRkS94Tng66xzAfupFI/JLJpEajkRaLhcklhGSSAXg1\nlHPg0kjLjghkjmcCNE8wEY1GV3SUDoBer2cEVNAvzk0ADC+G3QO//PKL7SIAzaTm32g0FI/H9cMP\nPxhCSLYPUToajZr8kyAS2P3ez6vs88cIo4AMu6H+xXAEP6BD0q9YyMyiJrL3ZCrfWw9MDHELQgp1\nMVpa/DQ3auGQdKi9SC8Tr9LptBlTIjFgIQSNz8YY0q4BuxSYCYdNfZ3oGQY8/cjAovwdUL8naFEn\npQVv/c7JOCC+cUcIG8GJH64BAQt0gc9HcblziDQ8E163/neUETB23B9BDAQ2anUocCqVMt4Hr+Gs\nvD9wKIYPghgOkqACeeHO/b0Gg0HrgIAQhWxiuGnfIeOl9sr98APJDSeKnDMJj/IUxthDvcgLMsh3\nQ14gKXGvs9nMyk4QBJElSUZIoqYNfOmJZMCyfM7+/r4RQb28IG/Uz39LR9kxANQLyRNSpX+GkLO8\njmILIJgRLM7ncytzIEvIOuf4PR2lhsqdY7hZaERLnf9ufp4B9XscI/ICjM5ZeE7wdDY2NqzG+1s6\nSveM11HkhSDft+D5lmX+cD6+K3KOXaRN71s6yh2CwMGLgUdB5s/35A5BmngdZ/c6yn97eB3CHvfh\nv1ssFjPyHLaZgPpbOkowQisxKANJAp0J3KnX0Ww2u7LV9Pd01LcU0t7Kc4Gz5OUc2zQej82vIC++\ntk+QwJTLcDi8wo/4n7T5Sa808+fBAWfv7u7aGkSiatotcMj0odLrnEqljBw0HA5XCDoYCoaJkNFj\n+IDZGH5Cfejr169GzhuPx2o0GiYURHlEd7QAsSKWDAKy0ToZivY7GOXUldne5tm79Ikz6ISpV9QB\n2QpGhBqNRi2yjEQiK4QxEAtqn7w/Z+d7NhoNQyoikYgRbp6fn82wMCzIE/1QHljmkCwZKoOxhDhD\nVsd9MOgCg0Brzd7entX4CcgwiPweQ2LY6oej8Cx77jyVSimfzxuXg3KIJHNw1NofHh6sxZJaHpk8\nU9aoB2LMWVBCNgqxiWCB1jfKGEyE7HQ6tvkQA4k8xGIxG/5CBsOQFe4dFMCP/oxEllMTaXtijvjz\n87NlSt1uVwcHBwoEAmq32xqNlpsSIfNBlKUrAx0dDAamo4w5lWQDargb2Pm5XM6Gn3BfGHXmtnPn\nkqzMhsPlzhkOc3V1paenJwtya7Wa6Q0cIvgbIEjtdvtXOgoXAuPuHThoJCgDtWCMP4YbGPv+/l7p\ndHoFgvbjnHG6g8HAOhaYZc+GO08Q3N7ett0GkACbzaYNEcK54LxJAuAQ4TglGXIFn8R3JZHVEqzh\n4P162/39fRuohH7VajVNp1Pt7u5ajX9dR2ezmWq1mvWqM/3UB2BwPpCVZDJpaAKDu3C2kUjEdrgw\nRCmZTBqyg9zyrLB/fEYsFrO14GTT6Ch8sJ2dHYP613UUMiQB9nQ6NTQBeaEDxo/x9vYTGwJ6kM1m\njRjJYCJJhnD50cutVsv4KL1e73+U+b9K548x8EYSoh5DH4huPVPZMzUhnjSbTRuEsb+/b0LD3GcC\nBYwC62slGemMVaEYX0gwh4eHpijAWpAMg8GgMYfJSCRZpiHJlC4UCtnZpeWUMMoXDKsIh8NmXGnH\nicViBh1j2L0yU//xM+Qx4gwOoqYKg5msnnowbTqbm5vK5XIrEDvRP9ndYrEczVuv161sgnOqVqsG\nm1Lm4A4QeCBZomEQCkkWvB0cHBgbFkIMC4nIQAKBgGU2ZE9kdr7VizsnIEGBiJ6j0ahBpAQcsdhy\n/CnwHBPlYKl7qBdDQP2XTGM6nVo92M9ep4UV1IC6IpkxPAv+4CgfHx91d3dnC0PIZO/u7swB01oG\nOZOJZ9JLsE2ZisCHEkQ+nzfSIoaSTWxkvnwnYHOIpJFIxLJYSTYnAEfBHHe6PQhKms2mLSlJJBJW\n+4Yd3u127fuD2MC0R6Y2NjZ0dHRkZD1KFSQOMPyRF94PHaXzgxIc5Cvupl6vW0AAH6TVahlrnmwU\ngiMTCenQoEUMDgPdSCQcXkchzPLf/C4tiGSLv6WjoEmgIdVq1dAazkDdOBAIrLQZ4jyZPAhhFOIu\nQSzBEDqKPFOGQ0dBM9d1lICLAJ1ggAx7XUcp7UDYJuCAo4VdZCohyV46nVYwGDRCHsPZyPg94oEt\npJMBnSCgDoVC1ruPvEhaOTedCugoCWokElG1WjVyIqRPdJSJhJJMJ5n+x0Iggq5cLmc6SqL1vZ9X\n6fxphSA6lWT/DusfuAO4BXakpJXMkmyC2ivOk4CCqBuBo92Huh4sU9qGcNDUniE6SbL3IPJGYDg3\nKIFvgyFTwMgQ3ZEV46RoWwG+93MMJBm6QWBClM
1nUCKRZFOlgDell8zM3zP/BBb0CyS4cwhytCKy\niIloORqNWicDdTw4CmSbQOKS7DvxvXi+vgxEOQRjjcH3d80PvAMMC3JAYOMNLs4DeSGY4f2pqQPD\nwv7mziXZHUkyw7dYLAymxoljiJGXb/3Qg0zJi4xkc3PTNtOB8vjNeDhkHCgOm6CRABajhqxzz5QV\nMGAEPbD6+Q7funPkBR3lDrhzIElY3LQ0wo7HoRAk+RHYvI479zXrjY2NFScNhEw2yHMlIAOaXf+h\ns8SfX5IlIjgQdBTOBb/HOGRfZgIm5s79NLrt7W0LnNFpHAFMcN9SyP2vy7kk42bAR+DOkU30DHlB\ntmKxmN0rfILNzU1LemhL43sjH8iUt1G/paPIC6ULf3buweso748jpiXaB3jcOVwiEADkBTn38iLJ\n5hv4lmIQOMpaXkeRFy/f/1Md5d44O8+Vu6PFGh3lfP7sPFsSLV/nb2E4AAAgAElEQVRy40xeR3/L\nnqz/vMqaP5na/f291TVrtZr6/b5NiQN2hOTAyE3ara6vrxUIBGwmOoxtYB8MP3Aoaxul5QNliQsC\nAtkLuA0oHYNCyxNQ5Ww2s+4BliwA7fhhDfl8XqlUSpLUbrdVqVTU6XSMHcwcdRbfEBEvFss5BGQS\nGLOdnR0dHR3Z6xgEkUqljGzGa8nKms2mKRxEr0wmY8xohql4oluhULBlMiAV19fX1t7F8A9mUvsS\nAB0VfmsW5MujoyNTasaycucQ7HgmbAwjyqUvNhKJ6PLyUpPJZGUMMFyBcrmsbDarjY0NW0Ncr9eN\niEn5gvcnMybQYI0wd765ualyuWxzGkA8KJ8wyY6sDQY4nAdQEKbcVSoVq3uz+jkWi9nqYDK1ZrOp\nSqVicHEqlTLYno2H3skx2pbPA7Z98+aNDfphhno6nVY2m9X+/v5KeYddC/BO7u/vjUfzLR3FMJXL\nZZvMx1bIm5sbSbI7h42PvHj+ADrqa8GFQsE6QkKh0Mq5IcqBrFC+8DrqM+/Ly0s7q3cu6XRapVLJ\nsuB2u22LnSh50NnBxEgyXGnpcFk6RALAlEU2ilK+4eyZTMaeF8Gz11E2VaZSqRUdpR9ckk0HLBQK\nxkJHXiDG0hL48PBgk/S4y8ViYTpKUBWJLKdwnpycrOgotgWEiKwUuBodJQACuf369audRXrZQsmI\n6EKhYOer1+tqNptmN7kjv/MD1Hc2m1kpiyRra2tLh4eHymazKzrKRMVisWiIHeinXzeO/cpkMpbF\no6P8XSy2HJ9dKpWUy+XMpoN6MkacccScEZ8jLZM4SiXceSqV0rt37wyVwDew8IyuBco7dAx87+dV\nOn8iMyIfVmjydwgeURitFUTTEFyklyl3EEwYgcllrfdS4lj91CyMP59HNuNhKXrtYajSS8sgH/po\neQ9eC/GI70QGRpaWz+ctWwaOJkv2Z8fA8x09CYS/I5PhzNydz55gVXN2zuGjTF4fiUSMgQzBBSfp\nSWDrfAAQFE/4QXkwzMD3RNlkBHw2zxl0JhAI2J2DNFDSwBD7e/dIEdG6tIywCWoou5ABkZ2t3zlk\nUbJWj6hg8Pl9n92AcJBNcnZIeWQC/tz+zsnyaT/a3d1VNps1o+Q38vGcPQTM9/G7Bsgq1u+cz/fy\nAgeCc6OjdBMgc7yeUgpyhz54HaUsxOAkAmVKS5xtXUdJBvydo6N8BtkVOurv3Jec+ByvI56kRx81\nARGlGRwTZRGyM84uvcyYQEd9EMmdex31dw4xzK/0Bdnk8719wVZBGPOIznw+X9liSeuv7xxZv3Oe\nBeUroGg+y9/5OtLmy05eXkBD0VFvZ7GFXkdBPRmlS0DvEwMvL9w5Oor9JIlat4vrZ0dHQYU5NwOu\nvE33NpjgmDsPhZat0+gospbL5RQKhewusC/oJ9/X66hHljnnupx7Eu9v/bxK2B8FnEwmtksb4wFE\nyYMFbubvqRGWy2XV63WNx2NraWu1WiqXy5Jk/cUQojCEwGsYSqA6Hr6ftOQnhxHFsxlqMBhYOwxw\nEg4H5cNBEThgsMPhsC4uLqxm2ev1FIvFdHJyovPzc11fXxtUBkRJtC69LKigbTESidgGMs/Y53ch\nrtF21uv1rOcVJ4SToa4FNItShsNhlUol2++O06Seu7+/b8NJIM9QI4ZUA6Oac8JRoK7IM8YgMc2Q\n5zEcDtVqtQz+gwVMMAXiAYKx3gpGJloul82hHB8fq9Pp6MuXL5apMxKaO+fzJBnREnnBmBKgYpCp\nryPf7LzHCXnZQl6oN5IBgsAwKXB7e7lOmf714+Nj/cd//IdljbC9OQf1SYwbGTeQPMjRevmCs8Ox\nYOoZcD8dBvApaJlCXsh0y+WyqtXqio42m00dHh5qsVjYnA7QPzp2/J0SgCAjGGgQJZydJxD6eilt\npciKH9u8WCysbMJz80HV58+fFYlEdHBwoHa7rY2NjV/pKBwMAjnPDIebQLaJfYHbgePjvnznSK/X\ns9WxvsMAhwOiSV0dHWV+xOPjo3V8wNHZ39+3IV+Q0CgdcqfcA+ckWcC+rOsoWTM6OhgM1G63f1NH\npZdyGs+bDoFMJmMzO8rlsiWHJycnttqdvQsEj9w5yJ8kIyxiX+D8YH+wDV5HQRMajYZxXXDwyAu8\nBwItgp5cLqdOp2PTBkulkpHFT05O9O///u+/0lHKq8D4lGZAaLDB2AuCTAJBXzL/rZ9X6fzpQU2n\n0zo6OjKyz+7urq3LfHh4MDLb3d2dtaLQC0wE+Pz8bFOaPnz4YDAJY04vLi50fX2txWJhO6xBDcg8\nhsPl2llq4k9PT2q327q7u7PsGidEX2wstlyIww76YDBo++XZRkjdlmmEvJ469Wg00sXFhT58+GDn\nIwtvtVpqNpu6vb3V6empUqmUjbmlToZBl17G1sKAh2mKQ8SYArm9ffvWnDBQVKfTsaEebMCD2EgN\njZZF4K5ffvlF//RP/2Q77fv9vq6urnR3d6dOp6OjoyPL4CD/cJ+QAHEgo9FohcTm2bgQEk9OThQI\nBKzTgdW0koyMxpAjWMgEl7QQXV9f6+PHj6pWqzo4OLAInyE0t7e3VpKAEMi9Q+55eHiw0tLT05N1\ng6DMyIv0sjvi7du3VkKazWZGWE2lUgoGg8bsZjIeddRIJGJbJ798+aJffvlFuVxO79+/N94I93Z7\ne2vLXprNptVmkZn1UasEJfl83mqhnjFPqQYmO73OsOcPDg5s8Aub+ZgnsFgsjG/DKta//e1vNoAF\nYueXL190dXWlxWJhC1M6nY7JtC+BEdAwN6NarZoj8c/b6yiT/CgNQhJmyMxoNPqVjpIBTyYT09H5\nfK6zszPT0WazqUajoZubG1u60ul0LGDkD+UgbB9BCfct6Vc6enh4qLdv39q6cZxqu91WsVjUYrEc\nCkSSRKkAhG17e1udTke1Wk0///yzfvrpJxuu9PDwoK9fv6pardpQGbo5QJkIwIDcOSP99sgRMv8t\nHYW0y
50TOIxGI9XrdVuQ5icCkqiho5VKxXSUGQbISyKR0Pv37w1al15QADgE8CiGw6FNjsXu+SCS\nVr23b98an2Y6narT6ajRaKysdMemwxfy/JPHx0fT0Ww2qz/84Q+2m8DraCaTUTabVbfbNQ6HJPM/\nfA8mAtbrdRtiNp1ODYX7vZ9X6fzpo6aWyIpVSVZDos6CgcIhUG9HUSSZUiCoRHaz2cvsdhwZmbhn\nbRKhY7iJduEP4ERxqGTDCKrfW053giR7DwwMsDiRHK1K3W7XAhmieBwfpBrOPZ2uzpjnu/H7ZNLA\n0xgEyFu09bBKGFIWGS0ZNOfmDiRZZkAtmHYUBJFMENITz4KzYyyAvCBs+gwDNIPzcnYgfiBE6s7I\ni89CkReeFzLja33tdnul3s/9cyaIfWTmkMGI2BnXC6RHlO4Jfr7HGESBoTiw2kFYkB+foUOKw4ki\nX71ez5wr2bE/A9kbmQKkIn4PLgyyBVSLnHPnBEzrOkodFkgaHSUgQj+5n0AgoFarZUx5P9sCZwgp\nikyLe5ZkMuztAvLC/0Ne1nWURIGBYcg745lp/SRb5M5B8tDRTqdj2TxZH/qJ/KPXXkdpmyUTXddR\nb8e8joKGMLgKXfNdSY+Pj/Z8gOh9htvr9ey78ayQZ3SUFj3Ojv2jJOHR0HUUwMsLiAeBPq1ssNpp\ng/UOjQCXzgj0i86Eb+molxeIeGTmvtSLw+c+QBvQUW/TvY4yAI3nBOeK3weB8hk69p56PFwKj2aF\nw2F77uwmAeVGDrGh3AV37nXUkxO/9/NqnT+RWaVSsWhMkvXJM74S2At4GgOOQ+WCEomEfvrpJwWD\nQYOdksmkjo+PrSUE+JTxnPTrw8xkcY20fJBkZMDBTMeiFsfIShze09OTte0BJwI10t6BocA4TqdT\nI8FBhopEIkZQAR68u7uzZTws/EgmkytQPvVvlouAsOzu7qrdblvZglnpBAi0L9KChpEAsuPOUToi\nz83NTf3jP/6jkcEmk4lNtdrZ2bEyAEaLoUl+IMfT05Pu7++NjcvIWYhSvtXp/v5eX79+XWH6gkZQ\n+gBC29raMsOKo/WKlMvljByEMmYyGeMxYIC63a59h+fnZ9uyBUxIhO67SiTZxD5g6GazqY8fP1rZ\nCAIqTtwzgrmf+Xy+YiBhar97985GxALhHh4eajQaWcdBvV63+Rb0FVNzHwwGarVayuVyVkbb3Ny0\nuQDb29vWM//4+Kibm5uVLKnb7SoUChlhEYIVOgoZECIccH0ikdBf/vIX61uez5ejVo+PjxWJRIxs\nyOvQBWB1HB7BHDwZAnjm3/sS3vn5udVrcXhkYhB6qc8y7Ag7gOMYj8c6Ojqydj7mVDB+3OsoMtHr\n9RQIBEyeJ5OJTf/LZDI2cAyEZXd310jFj4+PNtUQmfY6igPw8oLzIxjmDmKxmH766Sft7e0ZpL2z\ns6OTkxN7zsj309OT2UXPU6LvnbXYDOCaTqdm3zyCenl5aRwAgkrKEjD3+Qxkajgcmm1GP9BR34HB\nemLsR71eV7fbNdmBBEwChV1EPryOerQHlODDhw/GueKscCDQJwIFkkhsA/Z1a2vLdjbU63XrRIA0\nTDDLrBK6AuBzIavNZtPajr2O4re+9/Mqnb+vc/pWCaJNDAjRM5AxQUMoFDImJVkEpBcfiQPJYvjp\nFvBEOTgAKDp1Px/t8eCJknkN0S8tHvRfhkKhlbGSgUDAHCZsYSJhPk+SZb1kmkTICBvIgScrAdFB\nGIKg4mvgCJhntHtYjCwMwwLRCQfkZyQQAEiySH4+n5vRI1ug3Yid4kStEFc8CkB9zUOlnlTpe5i5\nRwIJEAHqjdSl/TNCBoBGCcp8uw13CaSPQ+DsnoQDmuBJoPBHFouFPTeyIUkrxmE4HNprmBGBUuO8\nvyUvbOnj2TPsw8OcDw8PZjDINJEX3hdo2JMOPURNiyCvAang/1N/5nO9jnqdRl44Iw5gHU2jPY36\nJsaPLE6SnQH7QNkMOScjpPaLvCDP6ChcEoIgT5D03xMkhqyNgMF3KpCxMg6Z9loyQRAu7pySGc7a\n6yiBiCexIS/0pHsdpdUWRI2z+PZKAhs+CycFAkSWijPyOorMgiTx/cj0QVjQUe7Rv+7x8dGeOeUp\nZIhWS68X2GZP0PY6CtfBJxJej9FR5GUymZh9BTH0CKyfTeBl3dsEkimvoyBqvkWRGj1EYnSU4ALE\ng++Zy+UscfPkRBId0D4CFuTZE1m/9/Mq2f6eYIYx48GT9VM7nM1mxvb2MLqH6hBwz2SXlpHbxcWF\n1Xdpw2CKH9A7AgzxhNZBWo+m0+kKuY9oHyY3c9jpCeX74Py9cPs/oBoonmeSNxoNffnyRR8+fNBk\nMtHh4aFF9xheXrNYLKxmRTbho9dqtWrRN2UFsop+v28lGAzLZDIxY46RAHrEicJQxkmhfMPhUFdX\nV/r06ZMuLy9tWhkRPb29OBlaXXBsGHBq48y7h5SGItBvHYlEjFhI7c93ZPjyBZE8hsiTDYPBoLrd\nrsnL/f29Dg8PV6Jtyiy8BgY1097IGEejkSqVigWcQNAMW+r1eqbcRPDIH2xv36/N2SUZ9MlzIRC5\nu7vT+fm5Pnz4oGAwaG2sZDwYSBCobDa7MuCKIIttmzhrHBbDVnhmfmTtfD5fafVaPzv8DF+egIiF\njn7+/NmIaSxA8QRRn7WT/ZFJ4nDR0c3NTSNVUpoD+aKW752/Z3v7Mhg6SmDK3wNDs8zm48ePmkwm\nKpVKhnoQVDHpb7FYWAubnz4qLYP+arW6QsAFgiaIIugCncMuIpPcu5/FQd8494etGI/Hurm5sdo5\nJE/sJnaN94tGo8rlctZuzR1wr0wrZOYAwRUTRAlaqPdDZvYlVs6H/hN4Y/N5Xa/X0+Xlpc7Pz3V/\nf28bV7EFtNxyxng8brIuyZ7FcDg0HUVeGM4DYoC9SiQSlsBBdAX99feOLcXm8yx4XvV6XZeXl/r8\n+bMhjZT8uA9kLxRaDhfKZrPWbo3O9Ho93d7eftfPvkrn77Ma3xYSjUZVKpUsmgVi7Xa7CgaXm7LK\n5bL1FBcKBetHpyaaTCaVz+eNaX94eGiZEJPmiNRgCfPggeU7nY6xl6Vlhsucb4wEwkrpgYUTCEgu\nlzMHRMvH0dGRPVDOnc1mJS2zhMPDQ1tswoZDCIWtVmsFFvMtjwyuYMuZ/x36ujFKvt6IwWdqGESW\n3d1dy+QTiYRKpZLdeT6ft3tHKJk9IC0jV/pgGXnZ6XTMSIPubGxsWPYLXOkJXcC99L5iJPgzHi8X\nFHF/ODxQoY2N5WSzw8NDFYtF6w2nN5rAhW1dZG+lUsnWmBJ8eAKORyokGZHIDzEiiATu9XdORsbI\nXBwa097G47EtfDk8PFQul1uRdbZRbmxs6M2bN+YcM5mMcrmcTQhDXnxLJfLimejdbtcy1VAoZHwQ\nSZbV+GwyGo3aJj9q6egopahyuaxSqWRn4t5B1FKplAqFgiFpbH
SjTo3+rcsL2TcdCARb1INZR42O\nEoBQW4fEiY7m83njeWxubiqbzero6Mj0EllnkmYgsJxuh47u7e2pUCgYoZDxuzxjdNQPlwFCJ7sj\n2UFHuXPOTjmIuQaj0cj2a6AvLGUql8vKZDLWk18sFhWPxzUeL7c+HhwcWNZeKBSUyWS0tbVldoMV\nyKA+BHleR0EP4JXs7e3ZsCLunLN7HaU8yl32ej1r46PnnzkIzHeAE5LL5VZ0tFgs2n2ho7RUcvZv\n6ah/LsxHwZETeIBseh1lIFsymbQy1u7urnK5nG0M5c75HvAITk5ODHFIJpPKZrPa3d3VaDQyeSHo\ngpNG2Y+SAjqKDpN0fO/nVTp/oHTqiEBRW1tbthsZBi5ROfAuxCnm8eMgyCJ4SCiWX3ACbOIhFsh0\nnk8AosDDoP69DvdAmEJJqKNT6wO2oi7HAg0WOjCJD0Y7DGiyexYXhUIhg/e5P/59Y2NjBfkACubs\ntLt5SBYYH0fFqFOEHIfrCTy7u7u2kIizh8Nh24qVyWSMVEkAQZ3Wk/f4w2dzpzgY6aUXmr/3ED6l\nFpSEbPr5+dnmpgMXc3bunHNBAoM0BFpCbc3X87hnZAV54c6n06mhEP7OaROiDMDZJRkRDfYwbH5+\n+F7MyEfOffYkyQYZLRaLlZYz3+bnz055B7Tt6enJgjyQMBwVkCzBOfwJ0Ar+jgl9HmIlQ/Vn9zoK\nz4C7pG2UZ4dzwVFJWtFRIGx0jecBDC7Jgi5gXtqyQLzQUb7nt3SUJUPI2HS6HFhDZry5uWlLiCgD\ncN7fuvPRaGTGHAfL2Sltcm4CN7Jp6WXqIJweAnxPzPTywqyBcDhs3zcQCJgew29BXrCR6CEIrSRz\n6jwP5AVuAIgUpESvo+gt0wEJYPl7v5SIs4OIorO8xttFyHv+zqWXuQGUmRh+9S276AmkoDHoKMkE\nOsr7Y1uQF86M3UPX4Xygo/w99+UJtthFSgteRz0ZHHn53s+rrPmj6N6B+elR7IKnhp5KpczAVCoV\nZTIZnZycWG2RudREd6zJBX7HMNHOVKlUDA6jp/NbXANPeAOSR9FRbBinkUhE/X5fzWbTaqL0XPN3\n19fXyufzikajBucx3xnHDIyNswPeOzw81MHBgXq93krwAZsUKJtMgRod9XEExzswtoYBc9FXDOOV\nkau0JZ2cnFimBNuVLgsMqCTLYAOBgC3EKJfLNhzFt98RlGFcqbWT5UwmE5tzgIJsbW1Z+w/kNe6Z\nRT6xWEwPDw+6ubmxZSMQ2JAXZpkjZ5x9MpmszJLA2YLyEFjSOgiUiAMJBoO2qGY9AEomkxZ83t/f\nq9Fo6OTkxOBFggZm4rN8h8xkPp/bumpPnILHMZvNFI/HDSGDi+AZ8V7GvFPf3t42EqEfdEMWUywW\nrWyGjkJ2JHB9fl5ONwRto8RDBgODG4dFVsQ50NG7u7uVfnMyfJ/ZQ2glUyL78/yDxWJhSQRlAnQU\nR0J54/HxUbe3tyoUCtZ3jaxD3gQtY0QrgRGZNTrqnwf6hj7hQP2IXV8mJOhBfyDG0WL5/PxsSQ5B\nVavV0mKx0Onpqcnx4+OjtS2DvKFLvpOIWfjlctnIyeioJBuQRRDrh+Ngh3iW6Ojm5qbtAOj3+6aj\nZK4EUegow3EoN4BK0f7KTAEcPkQ8VoT7LhDOALfAk7mZQRIKhVbKKZ4nBaqwWCxMR09PTw1dwif0\nej3V63UdHBysoDaULKnzk4h4O+cRVUpQ/s4plcBHQ0chEWJffu/nVTp/yDwIA9k28COGACHEEEEE\noVbERWI8gLaBEyWZskqyBQuZTMYUkU4ColGcjWd0AhMBIQJ5UTemLg3BC5Y2qAD1YggrGAWyE0/8\noT3Ik3qen5eLN6h3E/0xbAXiFUaMuiHn3NrasmzD3yG1cN+TS12Jc1NH862HGDYgQgIVsgregywr\nEokomUzaXHKcFYNPqAuiJNwtWQWKSzQMc1p6ca6SLAhZlxfOB3QGA5l7IwNDriAyAk0SfE6nU0N4\ncL4YOgJFMovt7W1jH0NekrTSekp5Ao4GARocB+qMtP7wOoIJ38LniXcQi8js4KcgL7yWaZp+LwGy\nRQcGGetsNrO5DJDYgsGg1VbhK0CQQ569jpKpIkt8F747yAA6CmP8WzpK9kMGCG/Fk7XISGmP5Hsg\nExh/f+egbF7O4bPwndBrsk6vowzswTn6yXyUNoGk/VAoetjRi2/pKEEZ+gXqx9nXdZTgeZ1gy1nQ\nH94XHcW2kGljp5An362DjsIn4M4pzazrKDwK5IWze1SSZ4aOepQEZORbOoqD5c7J+iX9ro5iA5BH\nytEkj5BnQVC+JS/oKa/jznlWXkd3dnYsgIUgyNnQUZBJ+EyU66SXboPv/bxK589CEpwrbXMQjqRl\nGwbGE1iSOg1wKcYXhSabwSiirGQLX79+VSgUMmIcdR2cJrV8DA/wVyAQUDqdtg1vkkyI2u22Pnz4\nYH2dtGrU63U9PT2Z0WJ9MZkz8CYKBgkFQScQ2tvb09PTkz59+mQ1eOA9Zr3TrkcmwEQzIv1MJqNa\nrWbsW2pp19fXGgwGtimKtcE4R5wRwyt8nzXERISQujhQHcpVrVattBCPx21uNUFWs9lUr9ez9hzP\n6t3f39dgMNDFxYWhLjjNv//97zZLG6JRo9EwQwkakkqlzKkQVBIEEYRIss8ki5vP5/r48aMkmVGU\nZHVADweyhhQIFZSJ1lJgxsViYQOpIEmR7bRaLXv2EEdhwPM8cDZksQTRkgxpYR485wG+ZgfC9va2\nzYunVDAcDg0Jo9QAGQodfXp6sol3kA/RUWSBIIOSF5wDzh4MBlcyZkmW6XkdpRZMKYRyDsHkcDhU\nNps1A+t19OHhQe122wKZQCCgZrOpDx8+6ODgwCBadBTCVySy3LZIvRbiGX94fj6I5M4TiYSGw6E+\nf/5smSO76dHJ2Ww51Im6Nm14tMtls1nb6olz4T7IhJE/+DqSzBmx0RSeAAGM9LLGFy4QNgikjL0X\nPPu9vT0lk0lrLe10OmZfeAboSyKR0GAw0Nf/O8cf9LTX6+mXX36x70uww8RFyqk7Ozs2URGUlrul\nnIqdJPCn1LFYLPTp0yctFgslk0lrZWVNPEjPxsaGSqWSoRaUJHK5nOkoZa/5fK5KpWLriL2ONptN\ney2dOgS43Dk6SkDt5zIQlKbTafNtGxvLSYP8gQQO8ZYpjCRRcKwIsn7v51U6/3Q6ba0tDM1Ip9Pa\n3FzO0W40GqZwRFwQr9rttkFd1NggEhF58oB5GBhRMmd6XYG/qFFSh02n02o0GkZwQUCm06lSqZQq\nlYoZZAgoEFQajYYRTmaz5UQ19omzFxpIEQiMjAZjReRObRNIGYQDwghZB5AYxqZardp6UZy770Tw\n8BbEQoaIkLWRLcViMXNM1AZRcqA179wwltSqmKFAEAY8BtQOoYYFM5wDAiAtbrB/yUZZgBOPx42d\nHovFbPgQJKBWq2XPF
xY2tXOeE84JY8kaTWq7tIf5nnYCmVgsZkuSkBdKDpJslKckg6ADgYAKhYIZ\nFM/KxnkxQS+bzVpWhIH23SaSTF4ogTH0BKSATAbnTwcApZ3r62tbAczdU4ulRBEOh82I5/N5NZtN\nQ6YgSnHnzWZTOzs7Rqjjc8mKyH4xijCjkRE4MMgrTgiuDJleo9Gw8zIVlLo8dVdqraVSSYVCQdFo\n1O7cy3an07FBRJSNaAMjG0bfaffzOgqSSbLiB35RCiJo39/ft5kAXkdxxqBEkJz39/eVy+WsA4Hy\nHc+b7zSdTo1BTuDPs/PttMgLvfG8nw/2KB8mEgnjV6CjfC5EXnSUcgtwOwvZ4vG4TSMEQWWS62Aw\nUKPRMNTPcykIygjS0QGCea+jtBaSvBAQoeupVMrKH3xndBQyNz8EEchvq9Wy4BNiZKfTUbPZVLvd\nXlkuBlzP76NDoBpPT0/qdruGShKQ+CFrJBp0H3gdxc6Q4P7ez6t0/mQuCKknLCWTyZUWEWAVIiic\nuPSyBpjMD7hFkkXnvn8X50UwAOQqvfSsUwMKBoPW28/nA1cBdQPR4XQlqV6vG5Qovaw2xXHjeEEz\ngLmp8SFE1Ob5fDoHuDOcMHAVGSdwHRwFkBBgM7J2olMgeSJWzuLbybg77pwaMI7fzxwgQsfI+DvH\nYXJvQMoEN5JWpv75ASF8R5CSRCJhyg0a44Mn4N91QhBZA+/pSzGcgcBM0krWTTblIWUQJww+hhX2\nP3LAewCTgmqAsJB1+C4Y7o8g2UOunJ335fuRaSMvPrghawH2hvTkSVpkMgTR3Dn8Foh8rJ9GR332\nQ9uir10Du3Jur6M8A2rQXkfpmPDPd11HccReR5FBvidOlDZagkV0zHeREJRwX8gLwRSyjf3xrX1k\n115HQ6GQ3SWIFDrKc6E8iVzx3ElIfAvauo6ChkDW47UEuOuzDHie1JN5L697PBveg3uVXpb38Pno\npC9J+Yx2Z2fHdJT3ANHyerpuF9FRX3ZEl7lXUALsJr9HkEHM+IMAACAASURBVCnJAi90AkdMJo5c\ncXcEHes6ChzvW8q9vGAX4bTx3Llnnis2BWifshGyQPnM6yif6UtR3/t5lc6fAT2wWsmSYTvyEPji\nQEbAYygthoR6MZeIMhBpA01vb2/bPAHfpw/cX6vVNBwObe49GUc6nVYgELCZ1gg/LSE8fElmiH19\niaybljpfUydC9Oegfu+JNczTn81mFmGjOLPZTDc3N9aqhkDRyhIMBlWpVCy6pG5IZiO9OKd11jcw\nLgbD37cn7PnBIjs7O1Ym8D3mvr+X78r+grOzMwtUotGoje+F4Ma9kM1hBLhzzsK5MVq0HEK6Imv2\n2THPkywFsijyQo3Ztxz2+33d3d3Z/HfuE3Ihd85iF8ohyAd37vkRnJ+6sSTjFvhzY/whdFESkV5I\nZGShHm1AZtvttr5+/arj/zv9EscD/CrJVgn7NlzPSPbcGn92dBTmPvVOf3aIl/ASCApBsNZ1lHsa\nDJarTAeDgd69e2ffZ3t727pNQL3WddSz8Km1onsEAvSEg2J5Nr5vCSPII5CD9Q1EDhmUuQZeR7e3\nt3VwcLBSZ0dHb29vV3SU12HnCHCo93M2CJc4GOQGHcB5+jY2yNWUX9FRT2bkztlf8ObNGysZAIcT\n1JDFY9PXddTzmby9DgaDhigRKPrODrgOXkcJPik7kqRh/z0RnD0d6OhkMjEdDYfDqlQqK+OPcbCS\nLEDynCPuHCSKwMCX7LDd8EpI2rArLD3zOkq5mDtvtVq6vLzUycnJSqmFO0fff+/nVTp/38pFSxMX\nTqbssyiyXzJW/pB1+v5SGOM4Ks9AZXhPu902I8fD8xO6+CO9TOny3AMIGH7aFpGnh/swkOvnRkgW\ni4URehA+MhumlJFdAVvx95JWCIVkQkBHnN1HnfwurTWchcADOBMDTbbrz8/3h7TY7/dXGOwENsC5\nlFdYFMR94vAI7Mg2CKIk2ed7Iwzsze8iL/6+OTuv5/ljZOmjRin9neNsNzc3lU6nbSYE8kL2RlBG\nEOZb6wgSCDhAOOCZkMGMRqOVMdVkLOv3DsudjNhPq0POMZQgY8iLn162WCxMdulU8KgOmSHoBmf3\nOopM4gRBCrhz7o8MmayT7wopk7MjTzwjBqqgo8iwRw9gnfvs2zO9vX2BB8T7w8gm6/6WzHBukgzu\n3PdjI+egHhsbG1b+wcZ5efHvR8C2Li/eNvqhW15HfXCwfna+I46f/9/v963Nj2AA++PLK81m0+4c\nR4yT5vM8eoks87x9MMLve5sO0urP7d9TetnS6GcdEIhx55RJ4TNxn15e6Jwg2cKfIC9k4P75UIZd\n11HunKyb4Bc99Zw0goV+v29JGn+wm5PJxMpudO54HUV24YQgq9hNgsDv/bxK50+/YzweVyaTsVqS\nJKsVYyioAdNC0ev1TKHJWrvdrur1us0j90xvmOyTycRm4eNEQRtisZjS6bRKpZJFjz6iGwwGthGK\n8gQ1IR4MkFKv17MWNJwp36nb7RphEeVsNBrWSocRx0lNp1ObNFcoFNRqtQwy8rXora0tGxCEESHT\nm0wm9n39gpZCoWAKMJlMrBYGQebh4cG+x/39vebzuWVX0WjUWuYajYaNqsQBEYQwCGVra0u1Ws16\n54EQk8mkCoWCdVCQSZJdkE34vnG/w97Ly8bGxspkNrKBTqejra0tq9suFsuBSfV63djIvqsBYx+N\nLofA8D4YSkmGBpVKJYNvKSlMJhNbDQqzlyi/UCgYQ5lMjDsHZQB+hfNAhuLn6Ps7950k0+nU5sfn\n83mrpeIYgsGgoQCHh4fWbUFAxkAmoGXOzjpYDBBOiHovsg6pstPp2M4CoFdaSWlVA6Hh7Ogow35a\nrZY5PeDU/f19G9DkkQccXLPZNNLizs6OIT8YblDBh4cHbW9vm7yMx+OVtlt4OSQD6Kj00j/uS5Lo\naLFYtFn5lBwkGSpAK53fIUJrKbwVeDCpVMpqzsDLtB0+Pj5aayJzFZhIR+kuFovZFNRms2lDlqLR\nqDlrdJS+eXQUSJr203w+byx9EDycHMuamDOwt7dn2Sl6A2FxXUfZkNftdm12C7ysVqulRqNheyRI\nEiiLMN6XFcYEGth0AuBisajpdGp8EOzM/f29arWa+SOvoxAWcbwEIcg5dhmOzGw2M33hWTUaDeXz\neesOQk/R0a2tl9W/oGrYEHSnXC6vdMAgv/ANvvfzKp0/087IHIbDofVFkp3s7+8rkUgoFluOk8QI\nUo/xdTUyOFpO6BjAwdObfHl5aREdve/h8HI85t3dnS1uYAgJnwlhiYfus/t+v69arWYsYYQYks50\nOlU6nTZYCgcDNEX7xmKxXGM6HA6NdAfJajQa6W9/+5tNOvQCx9YpDOHOzo5SqdTKRinWsPpxqzhG\ngqpgMGjnRqkIzAgquE+gKT6Tfn/eBxh6f39flUrFDAFQLtE5/aooK7MHfKfCaD
Sys/M95/O56vW6\nEe4gCCUSCSOVZTIZDQYDa5Xx6A2GDEeMHPAcmKr4888/G9GTQA4DRm8xaATsaIIfmPCUoLg7WLw4\nNe48mUxaWQgyG3C8h70JaAkCeAaUyXK5nI2yhVSLYwiFQivPnCyN1le+D2xjauBApvSS7+/vW00U\ng/34+GirrP1+e7Jpgijap3ygOJvNtLu7q8fHR52fn5uODgYDC4in06mV5WjrSyaT6vf7pi+w6oFx\nQadYWwxM63U0m80aUW42m1kmRmBMVkgpEueODQqFQoYQ/fd///evdJRAGnSI+8lkMjYsCVIh8sKd\n0y73+PhoiIi/836/r3Q6bSOmvY4SoHFfyCxJDToaj8d1e3trATUIICW3ZrNpLZHYXqYNwhsaj8cr\nS5zIkOv1urWoMRxqf3/fuBNk7cxa8OiN7+ailOJ1lGf517/+1XTU74d4enpZs03ZNZPJrOhoIpEw\n20JGD6+iXq8rHo9rNpvZuQnI0FFWc6Oj/u6kZamjWq3a96WGT2cHJEVpGRxjP9FR7nx7e9vm4PD9\nKAv+3s+rdP4e8vEC59nvOPpwOGzTqDzUBqzooWFfwwcioXY7nU4tuiZD88YHBZVk0TROAwUFcuHs\n1ADJ3Hw/tS8L4KQ4u/SympighSCGc/Fd9/b2NJ1OValUrDbpYS5PPKFex70Bw1IjxhBTxyWiBZpn\npwJGgWFJRKaeyOe5DdSf+S6ckz5avo8nwi0WC7tzsgwgRu7H1zo9mQdnjEGg7OEnnwHfcecEHASN\nHoYDDgWG3tnZ0WAw0O3trUGfBEwEizgYugeA0YngCeoo+3DvZBIMFiErBJ0AqvbyQunGk1Oll61j\nnD8ajSoej6/Mo/A1y0AgYOUOmMnIi2+jQh7Rren0ZZkSZ/YdB6FQaKWFzmf0/uzIBYGgL3PhRCDM\nwvLnmRA4wZEgWPFDlJA9D8kSZMOSB37n3tHR3d1d9Xo9y9AIFnl+oFbeUQCh00mCvMBo9zpKIEJL\noR/hCoKDjoLske0z1Y96MfLipyeih9y5L4ugv+wqoQwFutHpdOzOQTolWScD9pQ9JGTn2BdKH9gz\n9JSMlRIf9gXklsFaPBeeHc+N8ihcBF8qotbvdZTgg0CGAALdgvuFHSDgINhEVsnwQa28jrIFlmfo\nCa/YYZDExWJhgQny4tt4vY7ybCSZ/kI+5Ox8jicz/t7Pq3T+nryE0/LDE3ytGkfFQ53NZkYwQhgg\nkfFgfN0H+A9EgQfjh7QAZXP5W1tben5+VqPRsJosBhOnQSZGfZhalfSygcxD9BhRnDGOif53IlSI\nY0CLXnm4M0kGGxINY5CAMyED0qPNvVNf9e1s9NdTp/d3jiJ6o4EBgPQCuQkDDGlr/c75bBAJghKi\n2clkOZMeOJBgiHoiLUgEC0DH3uBh9Hygw3fnWVNCwADwvGB2e2fKuZEp6nuUhCRZRN5sNi1zgyXs\n2yC5E2rXnA3n6sseOB4+i/vn7N7APD8vW9A849jfuec9gDyRvUajUVujSy0RB8vZcaDoLQgWz87z\nK3Da6Ikkc+7z+dwGJuFo4EJg/NcHqCB7kuzOKWE1m01bXYy84HApDYAKIq88WxAs0CCclv/uyCl3\njl56hjtGGBvCncGg9/wGZAyUg6CAkiHPBxnA6XF25Jl/ei4OpQ1KhugI8DdJFXKwvb1trW5eR0FZ\nCFR8RwkT/ZrNpjlW73B9+xqfDxnSL70iwCHYeXp6snKoRyvpzvCtfhAsCbwYuEU5AETNcxICgeUs\nBtAy7t3rqG9xRl4ZmIWs8tw5O3bSdzvh3JmJgO7jyOEcIDvIjddR6YUMDJG22+0qHo/bOeCf/N7P\nq5ztT/aL8uPAYrGY8vm8Qck+28MReSGTZMYDyJ8okMwZRw7s/vz8bPVE+kp5D4wCUTCRmnc0CCjB\nBYrGPHCvmJwdZeDcPhhhXjSCzffCCOKUySaogWPcUHCE3mcknj1OUEVNDGGPRCLWQeFhO4/OYAyJ\nviVZLQtim2cWw0TFYQSDQSsLUM+ChU4QxrMk8/RdHpzVR9DU09PptCStyAvn5xl5eYlElq2Z8BI8\nnMiz5rUoP33BBCgEODhelN/LC3AnMuU5KoFAYGWxj5cXAhhJv5IXMlUMPndORg28i+O9v7/XZDKx\nvmW+k2cZe8TDyzmBBUaG11MSIhDzsoJjQqdx6KFQyNj169P34DN4BwPc7HUUdGFdRz0pjTvHeXIW\naWlMgW/XdZQ794Ed8uI7L3CuBD58lkcjKGeCjvCd1nWU83sdxW554iJyQ0mIgNif2+sod06ABKKC\nveFzyeIpPaKjz8/P1mcuyQIzauroPwiKlxdPXOS+GOLjmfTIiv+e/OF57u/vWyCBjmJD4S1gA7yO\nYpMIlAnCkAPe33cGeQIftkWSbZgkoF+/c+yoty/b29va29tb0VHuzJMZvY6Ox2OzSb+lo+igv/Pv\n/bxK58+DhAHK4JGtrS2dnJwoEolYVo2j5eF4YhlQJsJKTYvMBOdP5oLyfPz40aJBMolkMmmZBUqe\nSCSsn5xpeh7iIugYj8dKpVLKZrNWxiDjJ1LzdW1JJhjxeNxISL4FjaiQuu3W1pbq9brq9frKNDzK\nCzhHvj9nTyaTNkdbkrWM8JpQKKTDw0NtbW1ZW5uHHclGOY8kq69CngR+JQLH0EHyeX5+1vn5ucbj\n5fY5nD/fw2eNEEG5d5wG5Ryc8Gg0stneZE6cGwiOaBvjJskMy2w2MyIkf08GhJGiDnt1dWX3zBhk\nv+aTiN+fnXGjkNko/xAwsA2O8oFHuqjTUmOFCwPrHgIsd468SLJpjeFwWDc3N3p4eDBnTf2SMhiB\nILAozhGuDfwIIGKCzdPTU4XDYeuuIYsmAPOtljgl5qXX63W7L86JwX56erIRvZ8+fVIwGFQymTTY\nHU4BOgqK46fp4azIygiKhsOhTZsDyePOfUsgUC5BJtyV0Whkg3hwsPweHJLt7W0jB/p9JBBtKZmh\n+15e2OsgySBg2N4QNOFi4Ig4N4gC8kAQzvAaiMI8E54n3SMgFZ8/f9Z0utx0SQDBBE3vHHFwfgES\nekw5LhRaDpfa3d3VwcGBQdvYFT+bgCCf10GofX5+tmmNPlgkKYLP1ev1dH19becajUY2DdaXgSi5\neHkBTvdDpggc8/m88QL4PNoOsaPoKGdnGRRkWI8IU2ZgoqDX0VKpZGUgprhybkjpyJGfK/O7fvb/\nlcP+f/nj4TmiLPp8maCEcEhaifJHo+VKy1wup+FwaPWRaDSqd+/e6fT0VIlEwiDW29tbPTw8mINH\nWRFWIByCD6JXJp5Jy4wU9jVGgigNR0CtCMgdh48iAnmxBQplfHh4MKN0dHRko4+pCfF7uVxOpVLJ\niEdkEWQr8Xhc4XDYGME4M0hMEK5w4BirUChkk+m4cw9lEuxEIhEblUv5ZbFYLhI5OzvTwcGBkQlr\ntZq63a7V9cLhsMrlsmXSnBsDB/GJi
WNkVfw3htoTISHl9Xo9I82QveFUCProSUZh6Rj54YcfdHx8\nbLVNSGW00qXTaRWLRUMv+M6UYjB61JQZWyrJpp9RjkEG/JwDyJrAz9w5aAgBCCxidIe1oUdHRxoO\nhzYimUBsa2u5QrpcLltJyUOhtI4SmEBO297eVrvdtkl9vtSDM4lEInbnOB6CSDI3AjMIXfAcIpGI\nfvjhB9NRSk/05kPkkmST6zB+/GFYC/V3ZD0cDtvEPOwGTtrPqlgfouSzN54LtV3KZt1u1xKAo6Mj\nI6J6R5DP55XJZFQsFlfkW9KKzCIvzInHCXe73ZUlZAR1oDbUoclCCWp8zR0SJjaLoPL4+Finp6c6\nPDw0mURmCE4ikYgty/LtnyQtvC/PUlo6s0ajYaQ5ZAv0js4c360Fyoece+Iz8gIKsbm5qTdv3thC\nMWwDkx1plyO4JRgn08Yu+uCSzwoEAup2uxZ8kywGg0HjrhDMcOegfAQgJBpMhyXp6/f7yuVySqfT\nOj4+NrIlkwEJCjKZjEqlkvkDklmQPMZkE6yCdHQ6HetU+r2fV+n8EUpPqMPheJYnEJI3ur7/lOyV\nyDubza6MP4XRTuaAsrEwxMM8GAxqop4wSPQI5IMDRNA8usD3wbggiPx/aniJRELz+XI4BrUx3xLi\nlQxho+7H/aA8fDcCEs7OGcmCqOMB//Pv1Ex9zZ87l174Bb6dBUH3C3s2Njas3XGdywH3wt8JAu15\nDAg/zpazcw84d+BhsiScDpkVz5C/i0ajlhXQQ55Op404Bv+EQANnBpyMMSeDptYHxCzJ7hzZJRPm\n2fDvOAC+F9+TgI5M3zsmAkyMDVkdmQXBIo6a2iiwtPTiEKhde/SM1xH8YozIVnACwWBwRUdxoNyB\nJLtzCIXz+dwWBbHPnHrobDZbWUMNFA5KQoYPUogcciZes66jOCAMNrpDbZ3vSQmBO+f3Y7GYdcy0\n2+0VIiboI4x6H+SDehFwoaPoAiUcZAXH7WFwXy7iOZGZYrc8csNzJEOm7Y4kyy/siUajK4RVX95E\nR7k/dBSnzg+v8VwbZMzzlAh0PZmNbBabgvyjo5TCKM2CoBHwgNTMZjM7i2/L9ARqvp9PMn1Jx5eq\nCLhApQjwKOmAEnkd9ckX/C2eI6N5KS+QmMG/AhHz45Q5H3aDIAX5ItADQfrez6t0/kC8GAaMNqQ7\n6po4ZbI/diFPJhPd3Nz8aqDOdDrV5eWlQqGQbm5uFIvFlM1mbQpbrVaztZi5XM4goMViuULUz4jP\n5XIWoVHTv7+/N+HwQo8ihkIhW21KNohRwgCxP52sybPAiXhrtZrNIU+n05rNZrq8vFS1WlUoFLIM\nMx6PW7893ymRSOjg4MBYzOHwckkKw0eAFjEmBAgMzMDQ3N/fW0sUDqBWq5lSwTSH5Qz0BtEH2KrT\n6Wg6narRaBgsR72RMgabuTKZjE5PT621EJi70WhY4OY5CThtD6nSQ4wxYmVqtVo1ZMKzic/PzzUa\njWxVNOtGg8Ggrq6ubDJiPp+XtFwyMxwOValU1Gg0LOAsFAortdJkMmnzEbyDhldBtpHL5X7VogqM\nG4lEbFAPBCUMP8HIzc2NlVeYrjcYDHR1daVaraZkMmktlrHYcpY9i30ovRweHtryJkpR7XbbZJYS\nDq1vPjsGCv2WjnJ3OCH05PLyUsFgUDc3N9alwYIhlh61Wi0LFBgwdHNzo1qtZs4qm81amXA6nZqO\n4gBAk9A1AjvaWUE9qLMTVEKEW+djjEYjffjwwXS0VCoZijKdTnVxcWE66jsjWGXcbDZNXorFou07\nCIVCv9JR0CjsAsgJHBf0fnd31xwFOspZgdmn06khoOgoLWR7e3uWIDUaDQscsHu3t7e6u7uzFsb9\n/X0dHx9beYbfR0cpyVG+Ijgi4RqPxzYTAnSU3SygbugogR86Wq1Wlc1mzbEGAgF9/frV2rTZg0Fr\n99XVler1unK5nM2+gAewWCyXAdEG68tmnMHPN/GoqtfRcDhs7aGedDqdTlWv17VYLHRzc2NzCfBJ\nj4+P+vr1q2q1mo0RRudrtZrpAL7n8PBQe3t7NtMEOfi9n1dZ8/cknGg0ag6JS4OQwUViUIBUgY8Y\neEHNBbgKpV8sllP9/EKf+Xy+UrcjSyRihnEP1AOsR5TtI3d+nxoRdVlpiWCEQiGbdsXrcJ6s2IUB\n6jOpx8dHiwqpxxGB8l15DREzyoXgAi1ivKSXiX+e1e6VDMgP6NhzFLhzjA6Zlc9A6IigNYYInIDJ\ns4c5B1H7dPqy3AU4leflM2EyRt/miWzc39+boaGG6LM8357n5YHvhbHGEBCNc3ZJFnHDVaH27RnE\nfuIfd06tlGCL96E1kUDT15w5my+HECDz/jxr2h3JpgnskHVPfPKT70B9MNRkn/yQiXo5BWKmNLOu\no3xnDD2GjayS5w8UDOrl/x4SGYgJKAgcDmTE66gnSIFaIcPIOc7G1179LHa+I8EY8uV1dDwe25wQ\n6tVeR327IDqKvKCjID9+nr6XF0+c5c4hR5KRoqOeA4X98OUQX97knjzJ0LP7uQevo3Qb0D3xLR3l\nztFRgi8+m5bQdR3FvhBEoqMgHiB5fC+en+dySDK7wGtAK35LR7FV+COPEGEXQSlJpOBHMCyJ70zJ\nl++27hPQUewd8uL5VL4d26Nf6CjP3M9p+a6f/e5v/H/4AUKC4EObB0YLGI/LLBaLttYTR8NgkYeH\nB6VSKcv+gESBkyEJAc9Qy2UtK9B6JpMxEla321WlUtHNzY1xEPyubR4qJEDviHy5YWtryyJ9D6ct\nFgvLvMjww+HlpjKyJqZl8ROJRIyQBUGy3+9rc3PT+ADBYFB3d3e6ubmxDOr+/n6l3xZHkc1mzZH4\ndjgcQDabte1oBFQYXdbB+hWU6XTanhuvA/6l5hqPx600MJ1OrV5Htsqdt1ot9ft9q98CrY/HY8Xj\nceuQIAvGuXU6HUUiyxW7bNPzDoEaJ5O3JK2Qa6LRqLFuqWOz5pWeZKbfceebm5uq1WqqVCq6u7uz\nqV+suiUoDQaDK8NRkCHulEEurJOmNEOphLWr7A3gGXI3ftYAASEDYeBakJnn83kdHBxoNpvp7u5O\nt7e3tg6ZgJo7eHp6stoqRpHvBRfG6yjfj/tj2iJLmDgrPA+yXqB8gqRMJqNEIqFwOGzyy517Ha1U\nKsavoNwE1E5rIZ+L7cDhoaO5XM6mdmJUIYTyuf+HuvfKbjNNsnY3LEGC8J4wpGiUMtl9UWb1PHpE\n50yhJ9CzqqysTBkaeEcDGjjiv8B5ggGkUqq7nwdraVWqRAIv4gu7Y0e8JN3eRplZp21Ae8Yv2WJT\nZqVSUa1WMxttNpvqdru26Q45YqPwVAgk6AIJFzaaz+ctmQE+f3x81Hg8tgoZNM6v4faLmEA8WOpF\nVT+fz433AmeGsw+HQ9tQCRKJjUKoI/FG5qvVSsPh0GyUkVWQrOfnZ2sBUa1T+YIGQtrbniQpFo
tm\nu+PxWNHo+uZPbLTb7arZbBq/hFsFKXxAAvlczo1sCLxsGWU6zCeEg8FggxsViURsK+p8PrdkzU96\n5HI55fN52+fy+PiodDqtarW6YaPNZtNGiblR8UevVxv8ybKfnp50e3trV2uy3jMajaper2t3d1fn\n5+cbrGMqSjJNKul2u63xeGwsUjgCvn8O5E4GS+C4u7sz0g1GfHBwYEEL5+rnyulv89C5OyCdTuvN\nmzeaTtdLS3zWTkUnvdxlzipLCD98L9ANggfKARmL4HZzc2Nb3fb29lQoFFQul+3K3kQisVEhQGQZ\nDod6eHjYmNWvVCrKZDLWIpFesmoMQpJV0p1Ox94HI16tXq4b5XdAU/zCjvv79VXNXNKRSqVssyBB\nfruKB47jZja2ze3v7+v4+FjS+lIaz5inkuH50/cnQYKDwr/7/jnPaDKZWBVI4sVGRsg5lUrFtq2x\nMtX3byeTiV3Jyf0KgUDAEk90iWdLkkxQ4Ll1Oh27+50gJ2mjzcIzgigmySZTPKSYSCRUKpXMebI4\nxiNtPGtslOQuFoup0Whod3fXoE30BbviNZ+vN6ddXV1pPB5btct3Q8YEDP5OawduCjYaiUQseSRo\nsY0SfeF9aHNgo6FQSJlMxmyUTYlU8h45QF/6/b7B0h5NI8BT9VMFwkOC0OeDDclNuVxWNpu1CQvQ\nMt7H2ygIJjaazWatZQZy6bkQJI93d3cWrKl8SbrQBd9iZfEXHAV83PX1tYLBoF0B7mUOnwC04+np\nydY5S7IkDhsNBNaXpDH26e0blG82W+/84OId2mX8DOfELmj34QOxY+IBBYiXeTabNY6Yt9HRaKSb\nmxtL7larla1E58ptEuBtG0Xn2u22bYNEX9DJ0Whk34X2zM3NjcnC2+hqtb6EqVgsqlgs2rlzudwP\n4+yrDf5+pt3P5HLDFHO5sItxdCgllYeH6chCgdo88cpDiWTJ/PFjM/ShgEwhyZCQAOnxoKk+aVWQ\nUDCi5LfY+ZlQYDqgUYIQCIUnu9FiIOnwZ/cQHDIMh8N2HmTNd/cwG0GCbXP02ff29uxeBQiOOBda\nNgRFmN9UTNtkNz7bf+b22ZEPz9afnf/27Q7kTkKFIybjZgUznAff5gFuo1KEuIhs4RIAL3rInOSG\nJAyHRAVCwObs6I6HZSEMEVxJHiAj0uelPeW/Ny0jpjM4J46elg1ExW2iktcp/729zP3ZvyVzuDok\n1qA59EqpoJDFdvsCtjM2yvPAQZLc8zvAxl5fQKq2Ze7/oKvInNZTNBrdmF5ZLpemLyRd3k6QH9U5\nNur15Vs2Cp/H/wztQ88Y35b5tn9Bt3nv6XRq8/skGX53AoUV3xsU4FuFBQRSfg+ZkyB8y0a/pS+Q\n2v7MRiFBYqMgRoyK8n4kefwuwRuuBvrCGbFRT1TlO+LTaU1A/ANm3/7j/QM2SssIG02lUpYI07Li\nGXqZk/zc3t5uJOdeF7BRf1YIi17mJGrb+vLvzPm/SsIfisPsNBd2QOwZjUZqt9sqFArmYJ6ensxx\nJJNJg9/I/jxLnP79crle6CPJLhFCqPTOBoOBGRwQ/d361AAAIABJREFU9/39vdrttj5//qz/+q//\nUrVaNQWMRqP20GBgHxwcGEeBOWagTGl98RCOEWUliHJmz7qFAAPSwd53Amostr6Dvdfr6fT01JwB\n0PfXr1+tNw+sTFZJvzebzapcLlv1dXNzo2azafPJ9KKBg4H5R6ORKbM/Oz3K5XK5MeZHewH4mL4f\nyRlnAlb79ddfdXZ2ZmOTs9nMZM48+v7+vur1ukGBXKJTKpU2SFskV8jQs345M06TFa3j8VjhcNjg\ncaC/WCym0WikTqdjo0UE2NvbW11eXmo6Xe9ipyXlv188Hlcmk1G5XDYHOplM1Gw2LckFhgUORm8J\ntAR6z3AHkqT6ZPER+oJT5RIWWlPM+g8GA/3++++20pieIt8PMl4mk1GtVrM+JLKAZMXvE3DYm39x\ncbEBaSNzKsHVarVRIZbLZUuqgJ+p2qWXWemHhwc1m019+vRJf//733VwcGAji5wbG53P51bJPT4+\nmo3mcjlLNPyFPIx6kXChA7xgpTOmCfLDJTIUA3zW8fGxLfp5fn7WeDzW169fLQGl3Ybu4hd5T+D8\nm5sbI73xc6CLtHQKhYK1Pzg7/obEDb84m80MMqdwoRAAjSMh2rbR09PTjct58Oe0GpLJpJHU5vP1\navXn52cdHBwYB2I8HluiwJ0knz59sj4/esLP8ywhoyaTyY3LsnZ2dgzRA83CRm9ubnR5eWk+nzFr\nWsicPZfL2cx9JBIx9IHdFyCgkAxpW0myXryXNwnKdDq1ItbbqB/t5BKm4+Nj86fI7vfff7etjCQf\nP3q9yuBP5kYgpsfie6HAMJCwgHc8O95XBaAGVBI4C4LoarUy6NaTxnC4/BvJA06KF8HKowSeZETW\n5n/OE9w8wkB/jD9k62Sz9MFIBhiHgsfgqwSqUI9WEBj82ck6kTlZJTKn8gamZ1ES5+Z3yWR5Nr4i\nAcnhPJ60uV2hgRxQOYOKbJ/7W/oiaaO68D9HoOaZ+N/FGXIe39vzpC3gy/F4bPriKxxPViMjJzgA\nF1N1kukHgy/LfqheyOrpd3q4Gn2COQ3vAmdFRULyQrVGQIBwCsGIyowqVXoZe0JfCNDSy/ghrRpv\nox5+RQ/oZfqK0beCgD39CB+QKQk8M+HYJYEeQjA2ir74JBqZAxlDkCJh9mgf3xVkiGoYkheVmbdR\n+DEgXCzgkWQ2ClsfefPHjwGT+HkbJaHzlaF/7ugRfI4/s1H03E+D4D+3bZQRPiZN8C2cmZW6/sY8\nj1p6mW/bKJM23gb5uye0SS+oL9/Z24dHDbDR5XJpPCBsdLlcbpybQoiCzSNo2zL3yBNjdts2ys/x\nXh5R8tNq30IN2DqL7ZGcgvj6cWLIrN5GKVK8nmMrP3q9yuD/8PBgPbBer7dhONPp1KoNDMQH69Vq\npfv7e3U6Has0gNozmYw5Zn4XQsd8Ptf5+bkxl+nvQaCit8b+gFwup5ubG+vDxWIxgweBXVlwwbIW\nxs6oOjAW4KzFYmE3VPmRNRQcQ6MaIREACmbVaDgctoydqgYyTCaTUT6fNziWgAvUDRGIcS5PAmSJ\nC5m2Zy3f399bRepZqfAcqFThJmBczWZzY9Y1Fltfn+xhd0iDENrYnBiLxaz/Dq+BXv94PDayHtUM\nCRBnJxDBXcDAQRNADkj4pJcLhR4fH/Xbb7+ZLoIOVCoVSdroJaZSKeVyOfs7rH3gVqDLXq+nwWBg\nREaqBhwhTgpyFAt8cM44FrYr3t7eWgLhHSR66WeNs9mswaAEGVCgYrFo6AO2BpqAjVLt06eMxWJG\nevIIB0nZ7e2t2u22nR3eCyRRknL62wSnL1++2Bpj9K9UKkmSIX8gKblcTre3t9bnZ5QW1ItKajwe\nW9UVja53r1OFY6OM/k2nU9sD4YMAyY+/zRIUybcmQ
dVAm7iW19toOp1WPp/f2NVARcvV3/v7+0qn\n0xoOh5JeVmVj/9ioT5Imk4k6nY6NGFPMQITbtlH+++rqyvrbsMxLpZJVmRQK2DDj0/hSEKPr62u7\nbhYbxR8SkLFRAh4FBSgAfgUCbiKR2Nip4ZM2trX664BBqCD4gvLi02lrgvICz2PPXDvtl4dBNscf\nUzgQy3q9nukz3xFSIsRfillvo4xqorv4c1AbZICNcg7Q4x+9XmXPny+LUyUzgnhHRgq8B9SMk/RG\nB5zKWAyKhrJWq1XLLI+OjpTJZMzIyMa/1TLwfWoyWqp0FEiSVef8DM6Efv/9/f3GuBW/g/PkgQKD\nQmaESHN9fa3d3V29e/dOweD6ukd6SdvtDhQDI6GKINAAJ5JIIXuPiEjaGEVkpIbqiD47ZDw+F65G\nOBy29aDX19c6ODgwogx311OR4PyBOxnvATqmcuR7UcHg+OgL4hho2ZBRY7BU4wR8WOl8LsRJJkHI\n1j98+KDd3V1zBlQ0Xu7IA32FVQ/sT4ClavFEKuBSAiDQvR+xlGTPjtWhoCQQBler9eQJTiWXy+nN\nmzcGcTLlgO5xdoIHMic5AP3AaaJP2KgfafKEW76vrwy9zLEZEgV0rlKpmLM7PT1VJpNRt9s1Fr+3\nG6oenhuJBTLH7nHw6BqVF0EUG4U45vv8fJZvS4LO0bpaLBZGpoXA+eHDB0myhMP7Fyo3qlmCAQRe\nChbOzmfg5JnZ9zYKWY9iRnrZJkjixzQCfg2EjxbZ7e2tarWaKpWKBW4Y99v6QmIIr4egT5DyQRUk\nAfQAVJIEytsoOizJ+C/sSwFdhKvB5MnDw4NCoZB+/vln7ezsqN1uazAYmN1xZuzHyxRio4f98Ztw\nnEKhkPkfnhc2SoGKvmzLnK2M3kYlWaI/GAyUz+d1fHxsreLhcLixjAuZkJB7mdNu+dHrVQZ/nCZC\nlWQGCbSFAGBP4qz4N4IsSo0joNdCNcEWQMZ+YrHYH1Y2emYvny/JAgcVJO+PEUiyv/OwPJznCTUo\nij/7zs6OBTQUiACCg4TtXCqVLLv30J4/o4dt/blhslN5Q8oBsvZQNaQXD917p8sfgrXv5ZLZkhTM\n53NzkDhnz/L2Mvdn5//zMgfB4MW5tvXFI0EkN0BnXl9IFtA/nhNjlKFQSOVy2aDobX1BTp7Rjp75\n9hUEI1o9nIGX7w16YioQpYf96Kt6p8bn8Z3gVuRyOUN7fLuGc37rGXgyKTZKMCfYI7dtmTNdAcnP\n64tPkAjGoHLYKK9CoWAoG/qCLDkH+uzlDbwMTOsTbgIXzwyZSzJiHhUdz4Szb9uoby1go6Ao5XLZ\n2P9+xwZyhlX/Z2f3yBVy/Z6NQk7GFtAJkkO/g4DfRXbeRhnDhDFPO863Mb2f3NZz35/fRi3xNchU\neimovC14mfv5fO9TPcl3uVzvIDg4OFAoFNpYOe3bUr5dti1zfs63S3271dsZz93voeD7+PYZiAYv\ndJAkLRJZb4YEifAkX7/vATkhS2+jIMY/er3K4O8dKQYBZM4+aE/cwTDpleGkCVI4LLbqAc/z3zh8\n+pfeYcM+B2ZlxMKv1gWaCwQC5ow814BLJLj2kz4bBBGcJspBFYIMqFyYyed7zGYzg1ap9jEGWKEo\nBItg2O+Ow/OLPDyjlP4VpDwSDSAtPymAccC9oHdNtQjMyvIMFoxA9oFJjdMnKcKJAi/T70XmfpEH\nwYv32dnZsRYHzolze8a2D5rIACiXMRuCDOdH58jypZcePvoGdE07gv471TStHxI1ZE7fm9XC9PeA\nb9EXnAkVCwx5njufjS1R8WSzWa1WK6sUccrYB0kVTGp4DR6KBQnxNkqQYdQrHA5vTHOgKx7JgzTq\n9cVXL0wesDxld3fXnCAyl16IVCRIjFtBeuJ3SYwIXug6cszn83arn+cVeD3nWUDYIyGDW0NwZMJo\nsVhs2CjBAwQPVI0iBn3xo4W7u7uGhPB5+MVIZL02lh0RQOBez5E5yIQkq8g5O61W/A7VOERX1s6S\nPMCvwUZp01It41NBHXzfm4SLNiWXec3n8z9MTpEc8AzRbc9jAJ3h75BBI5GIjffho1arldkt9opP\n9+O/e3t7hoR4G8X/xONx29GyPX3D2UlK/NprfCzPkrNgf9Ka2LpcLs1mmXaAB8AzYxT6+vra/DWo\nMyja916vsudfKpUsYJ2cnCiVSqnX69nsLTDa8fGxbbJ6+/atdnbWd4+n02kdHR2ZQbFIYTKZ2Nxx\nJLJehVmr1cyIK5WKdnd3NZvN1Gg0zKmy+pGMrVKpGLzHJSq+ygLSOz091XQ61Xg81vHxsbHtWYEJ\nmfDs7Ew7OzuaTCZ2wY20ZnJns1nV63U9Pj7amtVUKqXDw0OFw+u97KVSSaVSSYeHh5pOp6pWq3ae\nWq1mFQoQ+9PTk7GtCbiQx25ubuxCiX6/r0gkYvPxs9nM1qXu7u4aHAi8eHh4aFAwkwJv3rxRLpez\nZUrR6Ho/A8GEi04mk4kKhYJyuZw5xFqtZtButVq1M1SrVVupSdZ7fX2tUCik09NThcPr1ca1Ws0W\nvjw/P+vk5MRu1GJiYDgcqlqt2jnos8PYbzQalnweHh4qn8/baA/M81gsplqtZjBnqVSyC0PgAZAM\nMoMNYYx+ZjQa1enpqe1jOD4+1nA41HA41NHRkcGcyWRSp6entrmyXq8bilWtVrW3t6fDw0NFIhE1\nGg2DQxuNhvWjq9WqrQLlGXARSr1etwkIf/FPvV63pUGSLIG4u7vTycmJ0um0+v2+7bAYj8fa29vT\n8fGxjZ+9fftW0ej6psd0Oq3Dw0ML/rz3w8OD2ejOznrJCfr/9PSkcrlszHNsdDKZaHd3VwcHB2aj\nBwcHdssms/6ePFmv15XP53V6emrPABvlfG/evLGEARu9u7tTrVazxTncoFav1/X09GSX4/w7Niqt\nq/16vW4clkqlomq1qtlsZtMf0ktVzvhssVjcsNGTkxOr9t+8eWPBsl6vq1Kp2HW8XDyEXnobZYfK\nzs6O6vW6FSOVSsWeN8u9WGrWaDTsorR6vS5JZrvb+oIvOj09VSQSUa/XMxvlSu/T01NbS46e9/t9\nVatVu7gsFAoZ6z4ajarRaFhL6fDw0Pg1sVjMlg/t7OyYjd7e3toSIWy0Wq0a4sU+Dm+jh4eHikaj\nOjs7s+Ts5OTEdrAcHR3ZQqVUKqXT01Pd3t6aD6ESR3d4v0ajYejQ4eGhjU9zoc/19bXZKBeONRoN\ns1E/MdFoNGzhFMXp916vMvi/e/fOFiy8f/9emUxGl5eX2t/fN+MZDod69+6dlsulGo2G3r59K2m9\nezmTyejjx48ql8u2t7xQKKhQKJjwcVTVatXIPO/evdNkMlEul1OpVLIKPR6Pq1wuq9vtSpJOTk6M\nsHR2dmajHxBK6Jm/f//eyIckI9PpVNlsVicnJwZJnZ2d2e7yn376yW7wo4oHWq3ValYdHB4eGs+B\nn4egViqV
VCgULLkJh8O2w5olPfv7+zo6OjI4O5FI2ArWarWqw8NDXV5eKhAI6PDw0Niy79+/N6eC\nUwWFeP/+vU5OTgxNeXpa37CYz+eVz+fV6/U2gmir1dJPP/1kKEA6nbYz4CwgSpGAZLNZvXv3TrVa\nzapoemW1Wk3v379XKBTSxcWF7dOnUnz//r3tejg7O9NgMNBwONTZ2ZlOTk5Uq9UMOmThSL1e12q1\nvmegVqttXDrz008/KZVKaTQa2S2SkKFIWElSuc75+PjYIHiIlZB23r9/b3cMHB4eGkpzdnZmzh7H\nQkB99+6dYrGYJT2MRcXjcdVqNfV6Pc3ncx0dHdlzefv2rUG5EOZod9VqNc1mM/V6PZ2cnFg//vDw\nUI1Gw1Aurpjd29vTx48flclkdH5+rng8brcJjsdjffjwwUZCT09PFQisx6rS6bQ+fPigSqVifXZs\nlOVd7DXHQd7f3+unn37SZDKx8dZgMGg2WiqVbCwQG61UKjo7O1OtVrNqlDHb5+dnffz4UXd3d2q1\nWmajT09PyuVyOj091cHBgYLBoM7OzuxSmbdv36pQKOjo6EjSS/8/EAjY7ZWj0ch0FqIqgRcbxc9w\nM2Sv11O5XNbBwYFdJ3t0dLSxuW5/f98S76OjI339+lWr1UpHR0eWXH/8+FF7e3tmo/l83pDBd+/e\nmWywUarvXC6nTqdjCX8+n1ez2dTbt28VDAbtylhsVFonWSBMJycnhi5hoyQ1+MVaraYPHz4oGAzq\n69evZqMQ4j5+/Gi3RmKj4/HYbmSt1+tmo9g213a3223V63XbDBiJRMxGh8OhjbdChuY64UQioZOT\nE/v34+NjK+68jSYSCX38+NGu2X3z5o0Gg4HFjlKppHq9bgn609OTEomE3r17p52dnQ0bTSaTlmx3\nOh3NZjMdHx+bjZ6dndliJ2+jk8lEBwcHms1m6vf7Ojk5sRZBo9FQvV5XNBq1e0C+9/oxJfD/wuu/\n//u//x/gLWA6D0lC+oMU48fQUDa/vIe+CH0b+ibArkBsfoSE9wPmkV768cAuQG70du7v7zUYDGwk\nzzP0JdmZeD+/+AanjUECPQKR+d4QVYBfOASkxntDyAHy4zvxc0By9LqYSuD3gP6AGP3SCg+X8r/A\nkn5E0MucniXvDYveT1wgc/7Ov3uyHFAkzns6nZoBIi+gbmSMY4HrwYte+M7Oy2Ujvu/K5wJ7+747\n34neomfrwvfgO/kFNRCsgAH9Nq/ZbGaLTXxLABIVMD3fE2jP6zm2wAQLNgTMSO/Uj2dBivN9V74D\nPwcpkWfP+lZkAQHT2yjvv22jVJfoAfri9dWPaLK4h9YKtkf7j2fgSZbAwpHIy9Y+b6PYDvAq3xM9\noNcMcVZ6ueaaat63a7xvQF983/3fsVH+F0h6mwXvbZQWIzYKmRX5+nNt2+h2u8bbKP4GX7Vto97/\noS+ekM3PYaPYDNNSfKZv32Cj2DzPAl33d6r49p63Uc7DM6TlStvW+37iBK2QYDBo3I5tG725udF4\nPN7wJ9s2CqfD2yjPBB/7ZzbKs/AET8+RwobwD/yb57hxNm+joVBI//M///P/fjPA/n+vwPf+8f/W\n63//939X9M88sxFWN8Kn5yvJxkykl5lSFJlKHGGitEDGnqhCfwWFgkzo57UJmMFgUI1Gw0hIvV7P\nlnPgrHhflmNwKQ/KQn/OKy5GyGd6oySIeTIZ5/c9MuZi6W3jzPidZDJpWW8wGNSXL1/U7XYtY/Sk\nPxbFeKY5TsOTj1BwzoLT8KQ/zoAM+SzOt1wuLYHwvXneNxAIqFKpqFKpKBaL6fb2Vl++fLGKHsPA\n8MLhsPV+gYTRCX7WOxUIRd45oi/+3PzhjJB9mJzwvVZkDmJC1Xx+fm63qdHv5/MYHQPSRidp4fhW\njefFEPT5TC8Pr+feefhEA13xxKxAIGAoEm2xL1++aDAYGLSJToKAMWbLngc4AT7xJTByFk80RF+w\nUc79PRvl3JyF71+v1/9go/RaOQ9BBhuliNiWuZcVzx/9/Ja+bOv6n9mo1yNIh9go5LsvX76o0+nY\nTYMkU5DceD8/0eOJip7T4pMNTzr8lo16oiE2SjHg+Vb/jo1yiRP6SwsWG+UZSPqDX9wm4PmkEXl4\nmaMznI8AyvSGt1GSZG+jsVhMFxcXGzbqSalsD51MJlZEeP1EjiQ90gtx3ZP1tm3UEzC9jcJrgFPA\nWbZtFE7Khw8fvhvfXyXsz4Uc//znP20vNGSQRCJhO8SLxaIkWf/j+Xm9iYt+FnvQ6UtfXl6qWCwa\nuQ8DZ3QFxQNOXy6X6na7RsKiT51KpYxU9Le//U3Hx8fWy7y6utIvv/yi0Whk29cWi4VdwjMYDIxt\njfPDQBh9Y9yD+fDz83M9PT2pWCxa64CKgA1g7AIgw7++vtZ4PLY97IFAwO4faLfbisfj+vvf/65U\nKqVgcD3L++XLF/3yyy8GSXt4jcsu2PYFVBsOh9Xv9623ytpf+tzn5+d2mRKEFjbc3d/fb4xGQZLp\n9/sGjWHo9J7b7bY+fvyoDx8+GE/i6upKv/76qy4vL63Km81mlgkzl82cLInj09N6JzuXxLDNsVKp\nqN1uazgcGsGUHmIo9HLzGKxinuHj46N6vZ5tVIRIlkql7Dv99a9/VblcVjQaVbvd1vn5uX755RdN\np+sLT0hQuWaasR8gZ54vCFO1WtVyud7IBunr/PxcoVDINlaCcLBFzN+oSCLKBjvGawk+oVDIrqf9\n61//auTQVqul3377Tb/88ovpGDspksmkcRewUWb/n5+fTUbbNnp7e6urqysVCgWb8YbcScXrL6kh\nQer3+6ankD3hdoxGI/3tb38zWJwLf/7xj39oNBrZld3L5XpduCQNh0Obx/c2ShVIr9nb6MXFhabT\nqQqFgtko3xcol8U/BJmbmxtdX19v2Ci6Q8vor3/9q32vVqtlNopP8rsBIKcWCgWzUZKNXq+nYDD4\nTRu9uLgwG4XYh40yHUJFDosdfcbekP98vr6q9sOHD3r//r0F9larpV9//VUXFxe2o4UNgru7u3Y1\ndaFQsGqd79Dtdu1uiW0bHY1Gxv95fHy0hIEiBj9GMcCsPvs/+K7o7HK51F/+8heVy2VFIhF1u11d\nXFzoH//4h7UMQGwghw6HQ7sgyhMu+/2+np6e/mCju7u7uri4UCgUMg7Rcrk0xJr9MZD8SO7Y6cGO\nlkAgYLLs9Xo6ODjQX/7yFyOQ/+j1KoM/G8gIvIHAeiFEJpOxkTay3OfnZyPYQEqhmuOijEwmY8xt\nHhgkK+6rnk6nyuVyBtVvr4WEacwMJ1u5yPL99qrxeGxrXIGJgMGArjkPWRu9fFZU9no9rVYru3va\ns1PZWkWyg0Pk0qFKpWKrZMn6QSJQKD+TCwQHuYagCdkkn88rEAjY9wPWhYUMQsP918Ph0Bwpd8nj\nBMlquQyESunm5saCHEx0EAeqRt/qgMjGM4Gxi/GPx2MLAoztwFCmL
fP4+GjBEGfO92X0kDsYhsOh\n7UAYDAaGwoBCsbp1NBoZUgSkh5OmukeWQOU4HdZzPj8/K5PJGJwOmsC60GAwaJMHmUxG0+lUnU7H\nnDTseEn2czznXq+nbDar3d1dIxDBWEbXQXGA57EzghdnYkkOldx4PLbFNSQTOLbJZGKBluQpnU6b\njWazWWNe+zs7aK1tQ693d3fWj0dP6adT/VLp+ufBmZhk4Hc4H/YB+gLjGllCKIYXIa2JuVy9C9mM\nJVPPz88aDAbWooGYyud+y0ZJTKnwvL9jDTNXS/PdIe/SIqUiZWKBCZtMJmPLdrBR7oB/fn42/ZNk\nyQk+ajwe24IwSLkgDiRIIHdU3CAzkImHw6El+fhIfB72TJXPCLBfpjYYDLS7u2s2end3p2w2a60o\nbHQ0Glkr5ubmxoiuLIxj4uJbNoo+exsdjUZaLBYbNso9BEwAYZegJ0x+eBuliMQvolePj+vb+u7v\n79Xr9ZTP5xWLrVeGc4Mr9kHS623UJxD/7qjfqwz+QNvBYNBuEbu9vVU6ndb79++tekgmkzbageHS\nW8VoqOIhLuXzee3s7NgmPYIFVRpjOsA+rVbLrvXkFiWydBjW/BuOGQPO5XKWjR8cHFj1CkmJDJIr\ni8n8GQXxleNisVChUDACjF8KxIgHK1vL5bI5JyBmjA7UAYiIysmfnR3g9/f32tvb09nZmZ2LpRTT\n6dSuXgWO8ys5aQlwhSkbEake+L4Qh5bLpbFue72eptOp8vm8zeZSdReLRR0cHOjg4MBGLjl3JLLe\ncEbSViwWdXp6ag6IRUPSy1W98/nciG9AcywSWS6XyufzVoFThRJM9vf3zdFy7fDV1ZVVg1xPure3\np0wmo1AopFqtpmQyqcFgYC0tKoVSqWSGe3R0ZOeGYAX6RZJKS4BtkCA2yWTSbm8kKJEsEcR3d3fN\n+TYaDSMXZrNZ4wDAhM7n83bNL/1yZB4KhYw8eHd3Zza6WCzU7XZNz+fzuS0goiqmtYCNok/s20DP\nGRfDbqj8SWLa7bZ9XzbegSgFAgG7chYECIiYDZ7sDDg4ODCECtujUvdXOYP4eBvlZkwuj6K4wEZZ\nZgQKQDUI8RgbRQ7YjK8avcyTyaRKpZKhZ2dnZ2q1WppOpzZyCyk0kUhYYPNwNMlFMpm0m+CwUVpm\nwNsewUQPtm3Ur/KuVCobNur1nPHb8XisYrGos7Mz4y/QI6eqJUkiaGKjyHw2mymXy2k+n9sdDwRi\nSJLwrkqlkh4e1tcOM0mFX+S98OnbNgrqW6lUbMyVaRW2SHobzWQyVvhR0XsbRV8KhYKNnsLJwAft\n7e1ZsnN0dGTLuLyNsnmRGwl5NnBuvvd6tcGfh8hDzmaz1mMZj8c2juPn41FsmOx+xpqKlHEoT+ih\n74KSh8Nhc6j+XmRIh4FAwBw9DxuyEQkFWSJz6xD5ut2uGTcOjQtKVquVQVhcTgEpTJL1pHGaKAqO\nheBLtkw7IxqNWmUTCARsbS/9LowNme/u7tpmRPqerCsGouR74mQjkYhNKBAQIfrAK8DxMl/sWdLM\nhbPuliCNAdFTv7q60n/+539ubC0D0fCbuNhox2TIbDYz2QKjUUlms1kVi0Xd3NzY6BnIBOQZT4yj\nqtnf37c5Yirh0Whkzs0TDT2Z0i+AAnbn/CRT9J47nY4hGPRd9/f3NRqNjD0NZCnpD/Lj84FSccBU\njoxZskMCveISHIJ4qVQy2XobZQrEb9IEWQMJoI1AfzcUCimVSm2MW4LejEYju2OdHi5VDY4TPgxE\nSq6TDQaDhlKAhF1eXlpv2dsokyJA87PZzEYNaR2iL8FgcGPVc6VSUSi0XhwDp4S7NfAPnnD5ZzbK\nelxQvMlkYggmF47hS7b1hQSOxGG5XForksKFgJVIJNTv97W7u6vj42ML0PwO7Q8+w9swz43vQNLJ\n98WeIRtOp1NdXl7qP/7jP2wHBYHfyzwYDCqXyxmRj42hjB+uVutlYCSahUJBxWLRSKZUwZ4ASfsQ\n4ikJuieAszbco5keueK9vuXTOf+2jfb7fcXjcWsrYK+gj41Gw1pG/A7jx56bANlUkhEPt22UET7Q\nNGy00+moVCopHo/buvIfvV5l8IcxCdGJyh3bTNlHAAAgAElEQVSYhh6OJ98RSGF9Uv2QyQIl+cCD\nUkovm/j4faAs/k162cAFg3YymVhA5iz0UVmywUytJDM46YWIJWmDqEiLAYMDpsbI/HflXDgE2goQ\nQ/g53puzAyF6tiuK6Mkk8XjcoDuqRmQGXO8RCEbMcBIwYUmk+F2MCpn74H9/f2/vjZzRB0968d/N\ns5kh8SQSiQ2mPTLn8/k8P7UBmRT5oGswc3lm/D6f55+TP5dnudN24ByezAZKQpXrGdV8t+1Eh9+F\n+0EQ25Y5v8O5IdeBvtDCoSVB0PLTGVQmXn7oAnpOcIMJjwP3REnPFicBiMfj37RRnrn/g74gZxyl\nZ0BjoxDZCBZe5iQSyBxHS4IJoYugIL0QWP1ysW0bhVDmSVycD19AW8HrMT+LvtC29Mxvb6OeCOtt\nlGfGe6JrFAtU0/wOCfr39MWTzrAZknEvU36XQEXSzDmQKTLHRn0igpzwL0woUcBwDwAwOagvidu2\nnvugTaAngePn/VTE9jIj7NiTrykSIdt5mVNMbdsoCatviyBv9MXrCok1tkU7Bpl6n078ozWMzJD7\n916vMviPRiODKultSOsvyxIa5q1h1qNA9MRarZY5FUm2wYuFHovFwloBQJkoDZnv7u6u3r59axks\nWTswUbFYNKUB4iOjwzmh1BAMT05OLGDy8PxGtm63a+0H4Cv61lz+Ickuq/EKL8kcXiqVsvlr+udU\nmNls1lomJCoPDw/GxCUL9WNbyWRSP/30k/b3983g6KHST2w2m7aDmvGabDZrWTVyymQyVvn4F8+c\nWVUMwG9xLBaLikQiG8x+EA6c0/b71Wo1MzD6wMCM19fX6na7ikQiVvlDkKN6l14mNoDipJcd/oxZ\nRaNR/fzzz0qlUhbImUJh6Q+ZOgQ54DlgWhz8/f29dnZ29OHDh43bKKlguGyk1WpZX5LtXvTMsaNw\nOKx0Or2hJ/y3tynIsPAPmOnP5/PW/8apsg0OJ4gTA70pFApG1vXb+ubzuVXMFxcXhsxB8Mrn81ou\nl5aQ0Hbwus/ZaQednJwY3wGkisBYKBRMFzirv0zFV+cQ5E5OTjZG1/AHzO+3Wi0j7O3t7Rm0jv8h\nwKRSKT0/PxsPhGQKZCWRSOjDhw9KpVKmcySU7LygMo1Go7bnn/fibMgrmUzq7du3ZqPYJ5eN3d3d\n6eLiQv1+364+ByWDOwEZFuIdCStyR46QHjkbAZHWHCgKNsqegWDw5TZWSRbAWbIjvUyCwLsZj8fq\ndDoKhULmu+fzuSFl+H9JSiQSllCSRPBiiuPDhw/WxmGZFH4W6B45YKMUOBRBsPh3dnb07t07a4uC\nVODfxuOxrq6ubPsjqCo2
CpclGo0aEuL1hfgCWuFtFJlHIhFbMIRPpZ36vderXO9LZrpcLs0B0buk\nSvbjM2RynjhDfwTHRyWIEwUG5UH72Us/1kcF4zNSfhfD87AXEwOBQMA29bF+0WfvvIAgyaD9BST0\nEEkCCBz+3JwDOBtlkGTVth+tAfkgKyUIcg4q0Xg8bkmJD3IEUd4Xw5nNXi6eocJg2YaXmT+7r7A4\nux+b2pY5zo45X98rQ35A06xS9aNLvMjMfdXOz9NTp8dIAsS50Z1v6QvojCSTOfrGc3p+fjZCq/Ri\npPRxCSDSy5yzlzsyB+KEVARM6i8l+pbMcbxe5jx3UC9kxu/5ESRIUH7UCCgX8iU2ip55mYP0bNso\nF1Ux7uZ1HZlTwXP27dFGP1rpZc77gdTRmw4EAhurd3mmfyZzEmMSZW+jFAggVdv6gk5wbuwCVM/L\nnCQQPwchkQIEhI4gDyHUIx/InNbMn9koC2xAHjg/QRm/5XUFpAGUE7+GjvFiJBAbRX6ggyAE/P8e\nteG7ehv1lxdho74FSiWOzPGL+BUQAFBdr6d8V14+MYfjRSwh2adK92Oe+FF/URCcCG+jtLX4XfT8\nz/QFwjOJJXq6LXNJ1t740etVBv9KpWL9HrZKce0p26D6/b7BIAibffmr1WojgEK28/OsvrdLT5H3\nop8bDofVarV0fX29wbrlggugmPl8bqMv5XLZlCqfzysYDKrT6UiS9fJx/ECOjOBhVDgEVtQC6WJk\nKCaGLMkcCj2pp6cnnZ+fG1wLDM/vUdHe39/bxAB3D3giCuzS2WymbrdrjgPyIKScYDBoctzZ2VG5\nXLad3ThRL3NY8sgcZxaPxzUYDIyE6HuKnB3DHo1GthoWSI0eKJvtYLVz86OXud8miMwTiYQODw+t\nkpO0ETzQQw/X0+NbrVb68uWLVRhAfOgLLY3pdKp2u61QaL2mVFqTNhm9ggG+Wq3UarXM0QAdp9Pp\nDbgXB5rP521bnIfcCbTe8SNzNt7d3t6q2WxatUHih8wJ6re3t8bIJ/gwJsb7extF/yCyoS+STHfj\n8biq1aoRmUguSYRYKMT35Owwu1utlhHssFFWLRPwIdctFgtbgUovORQK2VXQ4XDYrlUmQCBzrkrG\nWZO01Ot1Y/OThFKt+dv7KGTYBDqdrq8ofnp6siSG5B8EIRQKmSw5BwlMsVg0R7+3t6fZbKZOp2MJ\nOzKHi0ECsFqtbGU5UwLSywphFiHx3EnckDnj1kxjefSSGwhJGti/wZ6F1Wpl/K3BYGDfBb4HLRn8\nEckJMqVddHh4aDbnYfDb21sNBoONhUfeL65WK33+/Fl3d3cbrQ/OTatlPp8b2gCJMxqN2hpw+vmS\n1Gq1LGH3hE1iCO2caHS96REbJQaALA2HQ/Mv3qf766mvrq7MRkB20BnGpSeTidrt9g/j7KuE/cne\ngQill6suIQ3RE0YpvIEx5w9cw0OaTCY6Pj7euAqXfdMIjlWVQHBAqRA6AoGAjQuSdYZC65lqSIUE\nWtoJwMVUqFQdKAXn5EIXnBGMZIIdTPjr62vLHoHxqeIxrtvb243PJeDxeyj43t6eyuWyjfQASQJt\nswmQrBk4iv4YS4tSqZRKpZJVo/T97+/vlc1mVavV7HtWKhUzeNi89/f3KhaLyufzRkxE5jgCqkSc\neSaTkSQbIWS8k2cOpMfzlWTGzlY3HHEul7N7xoHVmNteLBbq9/tKJpMb+88JNARA2OjoDLB/LpdT\nr9ezPm8kErFd3dyLgMFSJcF69n1HHCzJI5A4uw6oUKnMG42GcReKxaKx6BOJhAWadDqtQqGgTCZj\njhgZ0JP3QZV2F4QvyH0kG4yWbes5iQT2y4pVdrqDgLA2NZFIaDgcam9vz+4oIEH3zHCcOmtv0e1s\nNmtIBs+ec2OjXJdKi422GDKXZJwRiHlcGRsKhdRsNhUKhQwlWywWKpfL6vV6Gg6HNo0Qi8VsnA2U\nJpPJmJ1B7ELm2WxWzWbTdHp3d1eVSmXDRmkRIjsqa9/+Q+YsCUJ/n56erKChn5xKpVStVnVzc6NQ\nKKRqtWoTHPAsJpOJ2ShTNdgnhE+SBSpSbBR9YbcGwZCevNdz2nK0epBfJpOxQorbNVnVDektmUxa\nooEPYnwQOZDIwajHRmmHTKdTpVIpHRwcaDweb9hoq9UyG+X9Pd8Lf8j3RV9I5kBiHh4erCVJ0lEq\nlfT8/LLZET/FJAaXcpE87+/vbxQC+FuS6R+9XmXwBwrxkDxZM0ELeBVOABuiYM+n02lzhB6CY3kH\nVVs8HreRL1jYBBkcEpUrAdCTsnDQ8XjcNjn5s3MeghY9Mw/jUwWDOAAn41ypSumV09ukUiap8KNq\nVH/IxJOUPIRPxe1RDP54WJ+gSSWPYjJ6xXgcrHNgLHYKwL5nCoLvzgga/UDgMj/GiDFIL9fLSjID\n8X1AqgOCDImKDwQ4RRI3Rm88FwMYEAQAshlz8cCJtCQ8KxuZe/Kmh5PD4bCNIKErZPIEUKowuBeS\nzAHjjKjEeO7Al+yiyOfzury8tEoKZ4SzBS70LH6qXxJHP76JHZK4eZnzLL1MqDKROdB6IBAwfaGX\njZ5z9nQ6bZwPZA76QKXpZcXZ+W6s4fWELvYZeBv17GpsCCcN6rRcLi0QMbrpYXZkvlqtbFabfvL+\n/r4mk8mGHVGV+6CHzpI4ehgeGfgb3TzRjWfkbZQWzd3dndkohRGJK88Pbgx3ZMCHAuGAaI2NYlvo\nHW1CeDL8HoHMr2anpcCzQ+aeFAyp0LdikCV2wbPzZwVhQubej6KfIAtA9j558T45GAzaNcYelgcZ\noBWAzPGnjInCKyHGgDp7vxiLre8lQQ7Ij70jyML/f7SH4LTAzfGtVb8Z8XuvVwn7k6lR3QHbS+sq\nr9Vqqd1umxEzMuPvmsbJPj8/6/LyUqPRyBwFVeVyuVSn09Hnz591dXVlgQZ29vX1tXq9njqdjmWr\nMOpxwtJa+SHWwOQExmVb2XS63kF/eXlpiz6m06nBqnAaCCAQpT59+mROh8+Kx+M2avjbb79Z5o4S\nsJSi0+mo1+uZ84F4eHt7a9Uh7Fz6tRhTtVo1Uszd3Z06nY6azaY5dGD4b8k8HA6r0+mo0+lYT5CM\nNRhcb/D7+vWrPn36JEkb8BgjhcgcchxZrQ98IAaweCUZKYb58uFwqGazaUs/QCZub2+N2U8QpMJB\nphhgKBSyWWk22/X7fWMf0/IYj8fqdrvq9XpWQYVCIXPCVAWBQMA2HPL5sVhM9XrdEtPJZKJer6fL\ny0tL+IB4/X3wOL14PK7hcKjz8/MN3Sdpvr6+1vn5uf7xj39oNpuZvkAWGwwGJnPQKnqbkLV4DugL\ngREbTSQSWq1WRtBiCQ5tOPTFr8cFVYD892c2+vvvv+vq6sqCQDgcNhvlsxiJJaFER6V1cAXaBa0J\nh8MbNvr09KTBYKBms2kLs7yt87skE3t7e7q7u9Nvv/1mST0VLcnd5eWl/vnPf+ru7
s64Afi44XCo\nVqtl18l6G4UkSZD1z3yxWO/b4Ea8QCBgSECn07EEw7dt2F+CzKPR9YbJVqtlDHGCTCi03uD35csX\n/f7770ZeJDHARrvdrhGcabuhL8gB3UfmkjbIpbPZeodGu922HQj4JGzUE6fxpb/++qtubm6sYIMz\nM5vN1Gw29a9//UuDwcAuiMPfQpJl/wDfnfYcJEVijffLsdj6pkv2stzd3anX69l+he3Y4GXO2OFw\nONTXr18tCUF2tCex0fl8rlwuZ7wxWjGdTsdGEtElZOxJiv/OxT6vsvIn28L4MaxAILDRT/J/ECDG\nQl+W7B0Fpmr1ZBBgYU/iQVGp0umv0DtkPzSZrn8PIE+/dYmMV3qZvfZVFtkm86O+pQGkx+gZBBwQ\nCRQOIgjyI7iDjHhyChk3wZnM3o8vQa7yM7/+7J7lS9bLe6P0zKeT/fsxG/85OA//PWAbo/h+L7jn\nQID6wNQlM+dMyIlzoy+S7LnRf5ZkhpxIJKwS4LnxbEkiIIMCr2/3nUkEeU8/9sb3gzVNr9ajIf7c\n0kv7C8iRYECVC7ISjUb18PBggcCPlDFGRrWPs/b9SRICPtcjSzw3bBQeCskS1fV29UHvlbPybNA/\nb6NU0pzb96OpbrBRnDWtKOTBBIYk0xeSF96Tz8FG0QM/GUGFiG7hT6gqn56ejBDIc9vWF2yU9/Zo\nH1U6/06ww8Z5L2+jPEdslADlz42P82Q7khcmY1arlbWseKboy7aNAi0z6ocsaP9FIhGzUWwTP+bv\n1+CctLbwDdg2700xA0oFAsp3J6Di0/7MRnlvikqPwpKMgrzhy5AdnwtpkMoe/cNGObdvfyEvbJS/\nE7ixUWTu/SvxBjn6CRUQIW5m9L6BmIdtf+/1KoM/Pc1MJmMPD4VgBA9YEBgul8vZ3e2r1cqWP0Ao\nSyaTBrtyBS1ECshQ3BlAG2C1Whlxg3PRK14sFuaoyLgwahYztNttg9uATemj4uQZXwuF1ktD/Iw0\nhks/llWtON1wOGz97larZb9DAsI1kFTAVF4+gyb58N9pNpvZchQcWiSy3p5HMrSzs6NUKqVisWjj\nKsgGljFjRKFQyMhgoVDIeoNUEQRrxmfYIhgMrlfVQjQEiqPvS9Akg6ft4FnlPhhKMngtl8tptVoZ\nugAxDEIZlQ5QH8+Yu8yXy6Wazaa1NQhkoA6LxcJWghKwYPPD7yCxRR8uLi40m81MX4LBoF2FKsmc\nXalUMh2mukDmbBMEOcKxUv3wHOCUBAIvK6xBHe7u7tRut22Ujv7mw8PDBq8AfokPBlRYcAtwjIx4\nsp99tVptzC4DVdJKo1qk18vV0f1+X9Pp1HQWqB3UAR4FZ0qn05Jk5E9JGzZKD5lARMKKPnp9CQQC\ntgrcBwhsQVpX9DwT2ikkiVdXV+bAWYkNh4QFYKxzhXeBn4rH4waNY6PwAiB+wr0h4NJeYvMhY5je\nRkOh9X55zkmgZ4SOwIP+Acn7iYmbmxu7ipoKHL8BY5+kYH9/X+Px2Fo6BDOCIYEwGl1vB12tVup0\nOnZ2bBSSbTQatSSa5JulXc/Pz7q6urJqHjukvTGbrbegwqGYz+f2mdIakSRJwEZBP71+ZDIZS2jx\nrfl8XtfX1xbEvY3CM4pGo0YExk9xzqenJ3U6HSsu2XbJ82WB1d3dndnozs6OFcrslvne61XC/jgv\nv5ABmBFDgzFKVg0J7O7uTjc3NwZtb4+HQbrhLncu1JnNZqpUKtrd3bVgI73cMnd9fW0M/HK5bAth\n6N/g6FOplAUdqkCgR5wIFzPwYCUZhInjIjiSsNAXA3KDiMKmt0ajocViYWOFnoCCE2cFJNUZgWk0\nGllFSSbrt2LhRDyDlSBGVs9GPEhtfjwM4xyNRhoOh7Z9EDILSRtseMhtJDbJZFLlctlWr4KAwIwl\nIaD6xDiA6tLptG0949mQtPH8b25uDOHgDGTtXubM7kpSo9FQKBSya3l5VrCemd1lbwGygb0tySpn\n+qM4V2BvLgyh7UNmDyHq+vraHLp/JpBOIbjB0ie4sOqUwMRzIiAj82KxuNHXpMVGQkD/lt0LOGAm\nADyHg89hQxxMcMb4kAOQ/Wg00ng8tp+fzWa2aZB9EvwO8oCdXavVrPfrbXQ+n5tjBJXje/EMuLMe\nJMMjiMgRJryfCMFXcM8EP8f0Rr1eN3/A9wXtmk7Xa3lLpZJqtZrZKPqAjQLrUhmD4oRCIRsj8xwO\nzsa5sDv0lX8HmeBeEjbh3d7eKpvNWiEGOsBzurm5USAQsFXEmUxGj4+PBvPjE/zoG9UqugZpk+kb\nkhfpZW8J2/y8T/ffjeKNfQYQfxuNhu3w4HmBtHCuYrGow8NDGyknqe71eoYEo7/YqC+iIN19iyyK\nHGnj4Jv9d2MqhFYHd0EUi0WzBc7tk6BcLqeDgwMjWuO/aAH96PUqgz8ZkF/qQbCCAU62I8mUmeCO\nA/eQGVAiMK7nFfhedygUMuF6cgmZG5U4mSMVKlCf34PtZ7Hpy5Kh0T/DmICDUDJfSXkCGhU235Nq\nGoiaUS5pc06bAA5Ll/NhxKAofoadQAtpbWdnx4wTboMn8uCIPUudQI1j5Pt60osPjMBcEGOkdS/e\n7+IHWvOkIpwz8BtBzcvcw4xAZyAeHl3yxC9k6PuttI1YiUrSglz5vlQ6hULBZMuzIbGl3eFRFloB\nEO6QkdcD/zle5ugi34vgwM8/Pj5a9QGMTL+XnwGtSqVStqiG8/HZMPo9TO5RKS9z9IXzInvvCP3I\nGPZGMoGObldHoCPoC0gfC734jvgA/nvbRika+F1kztpcnLY/+/f0hZYPNgp8CzmTc2Cj6B8VG0t3\n+D4kw6BS3kZpN2Gj0WjUCMl+Vp/P8Wf3o7SgCgRuZI6Ngvagw17mFAvc8YBf4Dt8z0all3voIZNi\nH15PfHIIrO31Bf6P95PYKIgxuuYJicic5WG0FmiRBINBQySwAaZufNsT/+h9rpc5uo6N4t+QObaD\nrwHF860FEkw4Ssic8U8fc3wb889erzL480XIHnmwwWDQNk/d3NxYj7Lb7VoWydINLumAyfn8/Kzh\ncGjZWzweVzqdVrFYtGspqSQxIPqAzPPCvvSwNtk2ZB6yaJSSTVV8pr84gxlmvhswerFYNCa/H/lA\nyRirgcwyn68vtfDsUpIGWNs4Xs5NRcwICfwAlI3sd7lcbmzpI9kZDodWjQGHHxwcmBxg5rI1DOcE\nDMk0BkQwGL0QzZhuwCAlWX+T8ZlcLmfGjcHjCIB2udGRi0em06n6/b7BwoycVSoVa8kwCseWMM6O\nvICY2UnvnQDoFI7eczRoJ3CxEhXFtvNl1CgajZrOxmIxq9xpeWQyGSOsIXNkSnuMcSYQFvSJUSSm\nPXDosMG5rx2Z8x6010gECDBUKMFgUKVSyQIq3AtslOSAi6U4E9U4s98EHMYZ4Y5QSdJXJRiGQiGD\nbEmAQSS4GySfzxuBClTH2yjPnplp9hUM
BgMjXTLSSnuH7X7YKLwGj1B6GyXZArXERmHF811oheBb\nGNPEv/gEabVa2U4RRvjQHb+1z9uon3DxBDICMAgl+kQrloQOG4VwuL3yd9tGfVAiiWVDHm1AiKiL\nxcJGYyWZzyiXy7aemJ+DPE2P3SOUq9XKyJR+AgibZYKEII++JJNJ5XI5FQqFjftZvL7QIojFYkYm\n5nK18Xi8YaOlUsnsnTXitLWQOQUOKAJTTLRz0NvlcmnPDR4AHIvd3V2TeT6f/2GcfZU9f9+jlWS9\nDoKr9LK9isqKqh1ylt9yxsIPDJigClQM/EVmh9IDJyaTSYO9PAGMbI8sFOicv+O02XQnvewh52GT\nNcPWJEiQ1cIKxyHiKCTZOTxsDHsV50bvFXgOljOOkyqAascTbkhA6NsGAi+7tzFqP/LjZ+pBVjBg\n+paSLBHh7MHgy2pNT6iCveshbQwFSBbWNbA4BDx/KZH0skmRG8yoMAjOjEsRxOi7kYH78StfQRCg\nQC+orFOplF04QzXhZ+CpOpgu4eXXpKIvnvuBs6AKQK/gD8AN8ef2lStBXpLNxvsRJ0i2wOHAtH40\nzX8XXvSJGStDz3Gy20tPsB8+20/LEDywUSop/3xZHcwzCwbX89v0buEDcE4/LeNhcX7GcyuQOcka\n/gTEjZ4zgRguDhA26KS3UT6L5w05C85IIpGwta5+FA7+x5/ZKLfDYZ8EsGAwaGu8sXO+B8+AeXqI\npttjhgR4gjqf5VEJpmBAZrAJb6MkO9s2SgHg9cUna95GCd4UYLTWaPVhO/w7L3wGSS3JNTrE9b7b\nNop+YKMezkcvQZRJ7vH1JN2gS+gp8USSrXvHx/q2oSeagqL5sVX4Eb54YH8A/uXfeb3K4E/2g4Bj\nsZhds0jWjzLl83n9/PPPqlQqRhAjeM5mMxvHgJSG8cM6hUVOleNv84K8Q4bLuBZQpF/pyVYmFBp4\nh+8CpOsJVPV6XYVCQdlsVv1+385ILw+HzHIUqn6WRPDg6f+RQZM4PD09mUJCMPJVE46GDJx1mRhh\npVIx7gUQOlXF27dvValULEvFQZNsceUnbQkcCyNxTCSQsZIZk1mTtCAD4DSyd/p2y+XStsahL9zq\nSDWFoYRC6+uOJdmCGUiWVIxUZiBJODEgOEg8BBRIfJ69TwKZyWSspyttBjtPUvRBh9vGgNqp5nd3\n1zey0R+lt+fbWFxT/fj4qGw2awkXG+NIMH2V4GFlqj9aX7RIIFBKMpiZ96bqwEZZDION0oP/+PHj\nho36wM1Yo7+Fkvd/fl7vxoe4iMwLhYI5c9jaJOkkbtionwDCRgnm8Hg4KxA6ThQbzeVydp02uj6d\nTg1BmUwmlpzv7++brsNVIrBks9mNjX5UiCBCEHYpJGiZ8BzYMkilWiqVbKQOGwWpwUapcElE0BeQ\nCAoC9imADNFa4SwgTPhWfAaJOVA27Qf0hWrV70eIx+NmowRC2riQ9UqlknGEQP9A9jqdjrVn+Tz2\n6/vlQPg/UC7aj9hoNps1lEp62XmxbaMQiOE3IEfaxUyO8Z3S6bS9LwUWS8iYzqBNsr+/bwUHBSY+\ns1AoWMLN5+HLiAGg1ewL8JMl33u9yuDPgyX7fn5ez+rHYjGrEJbLpbrdrmWFZMcgADhpHBMQHeMd\nBLntcRtvaH6uk0D5+PhokwOMHgEp393dqdvtWiYWjUZ1e3trs9p8zv39vTHFfaafSqWM4EP1RcAP\nBl+uhSWrpJKAOY0zAw1hyx+fKcn+DqGO7LPZbNoGMSqHdrttWTZzo9w5To8WY4dkhyNmIQUGCWES\nxjCZPBUD2+aAssmmSSaA2pE5WwSDwfX6ZCBnxu4+f/6s6+tr2wwYCq03ssFYpzpgYsMzz3HifBaO\nGwPk+8PURQaQnjgrAQvnA2EM+K/dblu7hiSDOV7IXqFQyIhsnuzHvgcqGY8QUH37bYdU/SRUzI2z\n352eJ8xzdMcHzvF4bPJpNps2qUFVcnl5qZ2dHUual8ulrVn2Fay3Uap2EjSSSRwvVRn2ipP07Tla\nNsgGEigBCb0kEfLTI5DBLi8vrbWGQ221WvYM/CgizpWCAagcEhtJPjZKwISMRkWXz+dVLpeNW8DS\nLP4O9IuNssaYZHPbRrlgCxKw74tDFPQrniHeMWq8baMkESTF9/f3Bk2zFhnZsGJYkhGJ2ZURCoXU\n6XTMPzEy9/nzZ43HY/N7s9nMLqqihUZr7vn5+Q8TXEwy+dW6fD6BEJkjg2w2q3K5bIklfB0+A78I\n8ZOdADzTcDhsz4H2EMWCR+s8twYfwXf3/CNslNFdEE+ex93dnVKplNLptLV2aF3c3NxYEYKv4ip4\nply+93qVwd+PL0G6GwwGBt+gFLBmgbDIHql+fKXgBYzDwbnQM6ZX+vi4vsmKkQyMEQKPz8SohHgP\nP8+Os+IWMPrAMJOpAICCPMGIvzMaNZvNLIvehoUxahyiZ/6SxRKg+D2qVKBzMmr+LmmDTQ/aAZEJ\nY0XufC+Mge8KrIdMmeUFTqQaI4v1Y1MgCZCKfG+fSoeMGhQhGFwvw2G3tf/+9NmouGDycj6gVfbU\n+xEdv5sAaJEAimHTF6fC8HrmZU4yQmsGfQkG1xfMdDodQ75ISmH0w6onYHpCEFUnFRTfycucAAlB\njwDvNxmiC8C9noHvp1N4LqAZ/X7fqkfxlfoAACAASURBVEFslESW54bj9gRBqit0G7lgQ5ybCpvK\nXtIG1IrMeWbYqEcH0FOSoWBwfQNbu902GyUIEDSovmkR8B58PsuevI2iL8iLqRFQNvq42Cg66duG\n9MU9hwcSLJAzNorMSVywGz+xA+xMAKXF4/eBSC97EySZjXob4TszvUE/HHnyXti2J1vzM36UDRsl\nkG/bqN/Z4ScyWKSD/lO0eBv101O0CGnxoGcgnb6lgI168iBnp8jDn+KDvG8EMSaJ4YwkuH4hG4Rs\n4H5PUOTZMkGDzUEMxsfxHPzI8o9er5LwVywWzQhR4MViPQtaq9XswZFp9vt9c9ZAWzjd/f191Wo1\nu+yGzAsBMc8KpIfBA/HjUBCqJ3dRiS0WCw2HQy2Xy40LLHBEq9XKiBgYANd1cmmJDwIgHtls1vaa\n0/NHIYG6xuOxQWMPDw829gKUDXvXE+pwpIFAwCB6nCiOHJlHo1HbxQ/cHQqFNBwO7WIdHD1GFIvF\njIhGr5GsnMx2uVxaq4OgTLWAAfnLKoBi/YgPkD0EJowXjkAikVCpVLKfJ3FEX0hWJBlsx6UhIBYQ\nJElkQHiQOYGI6hJH5y9XAUrEgUynU5uzZ+2xd5Dh8PqSHn+lK4Q/CKLIHH0Jh8N2aQiBgR4pcLAk\na4HhZNGX7cCBzHkPae2U2BUBYoA+8vt7e3uqVqsW0Kh62eDm91hw9ng8rlqtpkwmY06X1h1TKoxH\neocPCoCfgHSGzHkPSdbie35+tu1y6As6kMlk7N4ObPT+/t62ZKJXkqzqzWQyNiZGICVholIcjUY2\
Lm7u9NsNvuFXXx4eLBtniBQ4/HY5AWEBFTl917fJOEPR+Rr\nsdRhIGYBJ+OgIKOQGRI9jcdjNRoNq8dhAPwDRqm5UARlPB5rMBjYLGqUh5/3sDmwGAQOSQZz0e8O\nY1aSZc9wDBA66n6w0KkfQ+YgAOA+qCP6Ugd/x7lBRwaDgSEe6zwDMmZKB5wd4SKoItiBZEYUzD2A\nXBCIAeFRN/XkJwIaOh1AS4DvCLL8M/KfBXyIYeHOkZfhcGi1MAIPDM76nQO5zWYz26YnyfqIfYcC\nLXtkIQRNEOk8qxvnj6HhD7Au55BkkDb3Q70SiBBDggPgDGSHMPgpLcFtABGhLk29cH0uxNPTkwVY\n9Nyjh/7c3PlkMjGCFf8fR8TvUmKAcY3j8wTMyWSyoqMgRD5Q4j7RUVAR/o4/6Ch6AfrH5+GMfO1b\nWpadYIwjf5wNeSF48+giOsr8D7/LgJIkwR/6wTn8nUNyHA6HFgxwr14uPFkZWccRYaOQaeyi11Hk\nAR3lu6JjyBPPHx1FXkh0COh5ruvPyN87coGeeHnhfugMoNzmO1W8jYXgWKlUTL4pg3K3vCf3Rjut\nt4voCvoHn8afm4RpXUd5fo+Pjys6urm5aXLrdVSSoWB8Lqgwg8C8jpI8YNNBS3zHCKUH2kNBVjgf\nsvFbr2/S+ftsDAKSJFN24EKc4snJiXK5nK2w9Bu1yFiAtzDurBUFfiFy9s5tOp1aNkb2OxqNrC+Z\nmpO0zAIgoyEwZOw461AoZO039MgeHBwonU4byYYNUJ67cHx8rNlspkajoWazafVEYEofJNBbC3wK\nca1arVp0SoBBABWLxVSpVMyJdjodG6dKrRkmMfXfQqFgu8Lz+bwWi4VNv4JtenBwoLdv3yocXo4A\nZjQn6MD29ra16fFscdSQeer1uo1oBS4mWsehk1mMx8s55ZKsdjibzWwFLC13R0dHymazmk6ntt8e\niBNDVygUFI/HjezW6/Usa2PPQLlcNsMKSsFOeLIm4GacCXcTj8et159s7+bmRk9PTxZ4hsNhnZyc\naGNjOWiH0a/pdNpKQLQjSTJiECtEb25uLFMgoIzFYnp6elKtVjOj0e12LRvEiVQqFUWjUZN/jL4f\nmIO8NJtN01cMOjsqCBgpPzHuGOSBoAqOApnl/f29jfD1tXh0FIRkNputTPesVqtG6qOtihc6SgA1\nmUxspwH6EAqFrNRGmzBrVyXZ2mJ0FPuCjtZqNSO5Ua6IxWIaj8eGKKCjlCHpP69UKsZxIOAj2EdH\nkRevo9hM2sGo/6Kj2WxWuVxOs9nMMmvkJRaL2arlm5sb22uAPOzs7Ng0SWzB09OT6Wi73Va1WlW9\nXjcdJQmbzWZWgiAoe35+VqVSsc+HyLeuo8jLdDpVPp+3yXiU7QKBgG33REeZgAgcP5/Pbe4G6OFi\nsbABQ/TOU7aVZFwO7gYiN7M/bm5u7Nkxr4IJqOPxckQvc1IajYaVIQiypCVqw1huxmqPRiNls1nT\nUYjY6Ch+hJZbZrn4iY8EHdic33t9k86feiH1fh8FY/SJBmk3QohwjAyuwChj/IhypdWOAOqww+HQ\nZvbzmUR6EEkwLBg8sknfrgO5DagLo48D8S2D/Ozj46MZIiArAgA+z98FznB/f994DIyjxJCR3QB/\nUef22SqRMxAjd8odAXlRvyQzJPokO/eDK6i7+bYfnoO/c+pVzGQgI/CfG4/HLeAiqwVFgRnuyXw+\nswAy9y1BvAdZfLfbNZY83xkCIO1JyArBSyQSUSqVMmgSghp3zudSI2dSH605wOs4Ee4cw0Y2gBPl\nu3N2UA0MH2Uw6pVkOcgfd05Jh1595MXLqedYUNsGWQF98KQizsWLcoPPkji3zzC73a6hBNzFuo5y\ndgxfMpnU8/Oz2u22oTNkOegodWSvo2RqcFV8vZ+syesKNXP+DjlneU44HLaAEJTRD77yWRj8hmQy\nqfv7+6/qKPVbr6MgK/7O1uVlXVd8iyt/yOJBZXjubFcESuf9vLx4fkqn0zHHzudi80i4PLsdHaXU\n4AnQXkclWXcLz4p/ksWz0IaBPthFbK23izwP+ADMk0BH/efCy4nFYjbaGNtMh4on92LHJBkSBI+F\nc3MekLZut2udBLQn0rYHkuN9EeTVg4MD3d/fW7aPLFJWRkcpi6Oj3Nnvvb5J539wcGBtIBgZFEmS\nQSEMFGGgDxkDf1+pVDSZTJTP5y1zARLzbS2h0LLP982bN7q5udEf//hHY7AykGNzc9OIGkdHRzaE\nAcPG5j3qlGQEnilKts779ft9K2XUajVdXl4ay7Zer5vA8WBp7fCdCRsbG8pms3r79q3VJqlF8n0x\nKJlMRgcHB/r48aPVoR4fH1Wr1RQMBo0HgRBy374cQptKs9m04UlMQdza2lKz2VSr1TI2LoSkdXJf\nMPgy0Gc6ner9+/dGcPJtT0zWOzo6stokCtput43MRskDqNhDppDEYGFTI358fNTV1ZWVCKrVqjLt\nYuIAACAASURBVAVm3DltY76TIRRadpD8+OOP6vV6ury8tDv35LGTkxPFYjEdHR3p7u7OhgixvAP2\nL/fje5ullwFHBCKdTsdqf+VyWdfX19YRUK/XbSANhhBin4cCyVKz2ay+fPlirZzAsny3/f19HR8f\na7FY6P3794aydbtdtdttIyFibNFRSStOCu4ALYuj0Ug3NzfWxQHixDAenDN3Tg0THX379q1ubm70\n008/regoslksFrW9va3j42MzguhorVaz9bb063PnvLArGHzqqfBrvnz5Yg4TFAIniAOGrIVj39zc\nVKFQ0I8//qhSqWRz7kE4yTIhDCYSCX369MlKZf1+X5VKxeQK5MO32SL73BlcnVarpVgsZq1ttNi1\n220VCgVJL50//s55riBJ6zpKmQMdzeVyOj4+Nj4KsDd8JpA+fge7gi0gOKAujx1BvwiCqtWqJWYg\nHJzbdyaA9v7jP/6j/uu//utXdfT4+Ngm93kdfX5+to2VcHpARLyc8//wJTjdVqtly9V2d3ettJXJ\nZMyu48sYfuUD3bOzM+VyOX358sVaOSlXQerb29uzHSfv37+3ksPDw4NKpdLv+tlvkvAH9EhNR5L1\n62azWS0WC4NHfcZH2wejFHFOjKW8vLw0RYOABSTb7XaNhEFUxQhKauSNRsOIS/7zPVlrvfaG4SMb\n8dmq9JJlE6VybpjbxWJRvV5PzWbTskCWWzD+F8IM7Uqso6T+NhqNdH19bVuwfP0ZkiPQElkYhn4w\nGCiZTFofuySLconw2VtNtMqWrN3dXd3e3moymdhIYljpoVBohci1WCzsfXACweBycc7l5aWVanwG\nToZGbRJjiaEPBAIG8QGj8/fUvdPptJLJpD3vTCaj4+Njg4QhnPb7fftMD7vSDx2Px60EQ/BQKpWM\njOpnn9PVIMnmDkAsfHp6sqU51PE9kYfsktHX/s6BR+v1uvr9vnK5nILBoJ6fn60r4OnpycibtN
X5\nwToEhF++fNH9/b2NFiaAAM0gC17X0el0qmw2a58L0uNZ9WyQo5uA5VjBYFBfvnxRMBi0iZmS7K66\n3a7K5bKN7KVsho4Gg8s1uTc3N6pUKqajBLOgTIzc9ToKmZUZDzhyMmKIWcyl5+zZbFaFQmFFR4H1\nKVcwGY4xxOg6cDiOjRXJlFG4c7Jv+CMkHaFQyHQUpA05o3TEd2FRGXcGY397e1s3Nzc284Bkgy4P\nxhD75V2M70ZHu92u3Tm8Dc5AUPP4+Gi1/UAgYERNdBRbSKALwS8cDtuynnV5GY1Gur29tXuG38Rs\ngna7rbu7O/X7fSMEoqNk6IzkplzkuTzYIuSc/n1KDKAGcEsgAnJ2xkdjW2DsJxIJVatV9ft9S0xH\no9GKPaMMNB6PbaIg3C04NDc3NyqXyys6Ki3RDMZt/9brm3T+GGcPQ5HlRqNRY88TeREAoBA4Rv6g\nZNVq1SBJIlvqsMCZ9LwS/QPJhMNhPT09WTTqldMTMzg7ikGEStbb6/VWYDAcHdk1GZVf1APsRUaE\nM6DGA/kOyJcIm/tYLBY2790z5iUZzLROQPPRLbDSw8ODfTbvS9ZAvyxnBzZtNBoaj8dWU/dZITVA\niGFAddIL2gADltnjnIk7X4fE+XtKDJHIchQwdW+fqfH9OTt3jgIy+hVuB9kW8B81RH6foAjDx4ZD\nCFzIAyUoyI8EUR5OBcl4eHgwBIIMleCNLBO2Pc+x0+nYHnayLJwwzGlKFJydO6H2zZ1TqwSN4DMJ\nRNZLFaBbQM0eRcLg8qy9vJD9eG4KTsTrqJ9k6HVUWhK7BoOB6ShEWEmWxaKjOHReyNvGxssuA9+b\nz/NDPry8EFB0u13rL/c6CqES8ii/z/MHDu92uzaZ1OsgBEee/ddKFZR6PMlz/c45+7qO1mo1jcdj\nQ6F82QuiGTqKfVksFib7ICPoqEfevI56m+51dGtry6Y2+nZrvuP6nTNnAx0F8ZBkmbjXUeTWE1zR\nUXYbMLTM20V0FIcOogZiAaq2PmQOXwRyxx8QSknWBUWrIHaRoA1UiO/mF0VhZ9iD4TcTci5KR7/1\n+iZhf/rAyXY2Nzd1fn6ueDxu+9tZpEB/v2/jmM1mOj09tbrn5eWlOX4Y82THLBEhqt/e3raFDizI\noA+cfeEYemAn6scQuWg1Y7rXu3fvFA6H1el0VKvVrDSws7NjysBUJhbcLBbL6VhASbRRRSIR5fN5\n+y5kSYeHh2awYfESDKDsKI2HzMhsaDfxzvfw8NDg0VarZdmiH64CoXCxWOj09NQcMbusUfjn52fb\nFQCUGolEbD43z4fFSwgw0TU1VX92omBq0xjTnZ0dvXv3zoictVrNnAkGJBAIGJwai8Vs2mCv19OH\nDx/s7snUiNB5Fsxzp5YKO5+ghMwd6J1gDEfihxB5Q8vSE55/pVLRu3fvTAaBF2nNOzw8tKCCSY3D\n4VB7e3s2e57vQSbOkpZ6vW6fA3mJeQfcOfKCTFCjBHYkGAqFQqajEDVbrZYWi5debko6jUbDnh9n\n+/z5syqVijl5yJORSESdTkez2XIaIogSU84IwPydo7M4VUn2eaA09F3Tew3prdPpGNmRUhdOCCJk\nPp+358AUSJIRdBRCKeWSVCql09NTs110AFD/RT99ycF34oDS0G6Hc2SzH0tmqtWqptOpETPpRkGe\nj46OzBHf3t4aDwIHxzRUukKYNbK1taVarWaJBDpK2x7lqfWACp1d19Ht7W199913No2Tkhs2C8f7\n9PSkSqWi/f1909Fut2s6irygo8Fg0Kb+xeNxWy9NIMCsfOwdn+O5WpLMpkO29IhEOp02fkOz2VS5\nXLZVx8iNJCORU9KiSwFUDF4AMsaukK2t5R6P09NTs62+pRF/R3eEL9GSxJEo/dbrm3T+vmfRt7SR\nmSwWLzuNA4HASoYDzCfJjLLvYYUsxfsACWE0EUyygGQyaQbRt394Q04mQmZFZgpJiN5wCEkwQzHo\nnqhCdEdNHkQBwwI0RvbmHRkR++bmpsFztL8QTWO8IaUQWWJgMZjcHaNVGV3My0884/w8O+4cR0yZ\nhp8hO2HSIucOh8MGz/KdidAJsjB6ZCJE5tJLOQJ5qdfrNpYW/sHm5uYKCoG8UOdbLF62CPK9OR93\nxx0T8VMnZcMdmRGZOjLC/UuyDIA7R3bgklDm8AQuAhjunLP71kDIiGQkOGZaADFajAL18Cy1exA1\n5FuSldao2fIsfJZFGynPAaPG+6Nz6FEg8NJqi9Omk4CJZxh7375GAMrzZ3037+1r8NyxJ26BnOCI\nKGEQgOKA+RzsDDYAQ05W6evVvg2ZbJasnXowOh2NRk1HsV3IC5k15Sl0jWwZXYPQfH9/b+UdaekE\nCOaRF+59MpnYnVNC8nJPMrJYLAzWlmRy70sofBb2BRQIHYWfQNaM7NByxzwPEB6gfkh03u5KMuSH\nTHh7e9scHXZ1NpvZHTN+mP/2ZVEvL56wimz/mo7ScscsAZJCyOjI+brPwJ76hAAdDQaDFohBKKdr\ni/dg9ornHnHnkqx8RAD1e69v0vnTjsMl9no9ffr0yWrqPECMKs5/sViOrGQyF4a0UCgYU3g+n6+Q\nxnjI8/ncVj0S0RJRAz/jgPwGPBR1f3/fSE1AuNPpVNVqVe/fv9fDw4OtUaXdCodBtgj71hu8zc1N\nEwYUnTPM53MdHBxoPB7r6urKosZYLGYCAeEKwcApr2f819fXJmzSsnf26urKOgi2trZ0enpqzhbn\nT/DlyV0PDw+2sYxRr/S6Y2Q2NpZDkJrNpjliJpkVi8WV8gmO2TNxCSzC4fDKMJ75fNke+uHDByWT\nScuk6eQIBoMGb4bDYRtPSmYnvewmwDmhYJRWMLBfvnzRaDRamVKGsfesYC9vBDsweT2PYTKZqFQq\nGSM8Go3q/Pzc3hcGPbA4A0BAe5B9HG232zW4EhQrlUrZ/weZYtUss+QJUDgrhhwUhL0GEOakJU/n\ny5cvpqNbW1vWwirJ2pHm8/nKeFIMaTab1c7OjukohpCgCSd2e3trnBRIg+go3Q08K7JOyirRaFT1\net2Ikziler2uv/3tb+r1ejbtEH4OMg4sS+shdgkCHBkzzh87s1gsbDvfx48fbWEXq2YlGSKCs8fB\nIfc81+vra9uTEQgs+9nJ3vv9viKRiE5OTixY9JM5kWccZLfbtSl06zoKkkEHAoN+kDuQRpANZNtD\n46ALkJtBWJGBTqejjx8/2hpwdBQEigFG8K8gI3OnnqDM5xLIwWWazZYDu0C8cMynp6fa2toyFJh7\nJsjziCK6SLA4mUxUqVSsjTsajerk5MSQSd+OiG1lpkKn0zHZR0cfHh5MVgkEEomEdbMwHGpvb0/5\nfN6QPC8vPuBFFwjMfuv1Tdb86XfEkTw9PRm5JpFIaDKZ2CQwnAMGlIyWaA+SyGKxMFiUPntqsfTw\n0/bFtED+myyc+hSRG7XPx8dHm+LFaFRqj4FAwHptd
3Z27HN9O4yP+FEexu4eHBxYnzkIRLvd1nA4\n1GAwsL5i1vDSY0wbGNman4/u+QmQBYGg6WelNSafz9syGH/nKArZGY6aYIXMgMVCniQ2mUyMYEnr\nUaPRME6Cb/8iWufc1MIp8dBPHYlEbEAG5KVsNmvkzvU7J5Og9ouyQswBKiW7ZA3qeLwcGkXGQimH\n74Eygxb4c2P4ut2u7u7ujPyGs2u1WgqHw8rn87ZfgOls3LkkC3I4O8EKgQwyCzmRO396elKpVLJ7\nhlHd7/eNBIe8MFfd16dhLLdaLRs/DLJF10Aul1vZZe8Nsw9yaFsNh8MG6Uqymj/EVgJe7hn5YIEN\ndw43wde24ZCAPNH9QwcJxK1gMKh8Pq9cLmfEMYIP7h3nh7OTZAEY67FZeUxZhnJXpVKxhKbVatlA\nI/74IV2cnbLFbDazUoQnJKLjbCvN5XIKhUIGD8NrwL54R02wAoGYMicOEQcLwRISc61WM/a+5yZQ\nuvBnl2TzC/z8EtrfZrOZMpmMcrmcOX3sMWcHbWFGgLfp6Chr0yEeYp+YD8JCLSZOeh2VXrgAlC24\ncyY7oqPY+YeHB0UiEeVyOVu1zfuhkx79AqFd11HIj34aKQEjXQd0PXh5QS4Jfrlzj2o1m03d3t7+\nrp/9Jp0/JAhY2kTtMJuBXxAS4HBPcAEe8wtXYGsCm6MI1LskWX2bEaSQf9ZZ4pB5CDgwPkBAvGcw\n+LJPgLIDTghYkBdZC5Cznxe9WCyMmOUHRiDskixYwejQGyrJ+qaB7si2cQoENkSVGIN4PL4y0pI7\nIMMhE+XOcdbUu3FOzFSn7urRi8lkYpEu740D5e/X75xofTqdrhBzUGqMG2N5uXNP1CNa5o7IanmG\nOCfgXBAdWNSwbv34VD+djcDU33kwuJxNAT+AsghkNrKeRCJhhsFnNdwn5Q/OBLcDmFrSSr8/vwPy\nM5vNLGDhvZErj1BgEAlQIQx6aBxnQ8DKjH7u3CNOXkdx1tS8NzY2TD5AkQgoebY8d4ZdeR31zxN4\nmjtf11EyPO6GczPzgDuCLMrLB760/AGbe7Y6g5cIVNG/fr+/Ehigo74+zp370ggZPyQ9r6MErGSf\n2Bg/FZPv5J2111E+F/0Gnud8kPp4T5AfSb/Q0fVSGrIBwkHWure3Z91N6AvySvDv7SI66vlXOELK\ntgQ76JMk01F4LzhQnqfniKzD9OgodtHrKCVh7pzf8bbLB4+e3OpLC54zRpDBe7IVExkgGObOKZdw\n5xCpfRfOr72+Sdgfx0J0s7u7awQ2skrKAq1WS6VSSalUyqA3IBpPDotGozo9PTWhofabSqWMfANJ\nB8NMFE+GW6vV1Ov1bDrTZPIyxxvyR71et/oibSXAh8DgGIt2u63n52dbE+nrcigPUC0KCTmFmhgD\ndVKplG5vbzUej63HGCb+aDTS5eWlQqFlvzyKxF0EAgGL/nEknF2SwfHU4waDgWUiTJLizj0EFYlE\nbOIi8Oju7q5NHgwEAspkMubsJa3UqiBJXl5eqlgsmqFC+ebzuc0YR8EwMv1+X5ubmxYEcLZer6dK\npaJcLmeyQl3SM7tjsZi1QhE0YqxYe8wAEeBvHA9yd3d3p93dXV1cXJhzYICHpJXZ6JFIxAa/kOFQ\nz8aoNxoNgxklWVkIueH75HI5C4iQF2D3Xq9n0y5BPPzCI1qZfv75Z2UyGUMPgsHgyr9DuKMbYmtr\ny2q3BOrwCkAcstmsbawD0gWipl7KshXOTikgk8loOBwqmUyaThFoojubm5uq1Wrq9/s6Pz+XJOPW\ngP6xG8Q7QQxrOBy2xTXAq8wmoJ8aB4cuSsvAPJFIrASLyB5914lEwjZMUrJiBgEJyNXV1YqOEqgg\nu+gojhrkRZLB8cDmw+HQiK70lnNH8Dpwevl8fkVHuC9g8WQyaTrK8/WdCJ1OR1dXV6ajBHBeR2kD\nRV7QUUoLlB4kGfp6eHhotnf9zmezmbXSgRJTCmAKJmegDRp7TsAcCARUKpV+oaOUQiStrBAHVSIh\nhVsD8gj5mb0FyAG2wdfqM5mMlSF86Qj0htZNUCbsPXKLjkI+JIDDhvxfW/PnohjTCJSJQyf7WBcI\nsliiKQyjbzPh94CUiHRRNJir0gt5jaEn1NL9kB3/ItCgpre/v2/kN7KP4XBotUrvcMga/cAS0Ar+\n+LoUUaWH8VACUAVv0Nj6RS2Is/sACeVhRKYnQ5KRUzf0LXNEo94ZkD1y5x7h8HAqkDL1bD4HtjI1\nsq2trZWAyDPQCbb29/eN70E9luwDo7eexa7v+2a8LHAd98k9wRAnuONZSqtz/5miBwzq5cXfuSRD\nd/b29gxB8NkHZ/MBLRkvDGA/657/B+/A6wDBE7A95R8+F1nhzr28UPvm7H7YCHPLPWHUD4Lh/8Nd\nAQnyLWleXvj//s4hoPm2Rp4hsxogNJIN4UTX5QWeBneOY0OeyYppDfPygo4iu5wbFIQ7RyfQC+SF\noIsMFx3le63rqD83zpc7J8DifCAOOEzsDjLuy4DrOup5UJzbOzkCa54hZQhsDyW5X7OLJC3oKHMY\nsIvIiQ9QQDB8iyfPxNsWbxdBy5gKydm+pqPcuZ/ih456ucehQ1ImefE6um5fRqOREVQJZNf9Fx0v\n+D2vo3SArOvo7u7uyp37IJRz/29e36TzJ7JlcArwN5ArNe5QKGRDfXy/KQoJg3w6nRrjnvo1UAtT\nsEADaPuCAPbw8KCjoyOl02mdn5+r0+lod3fXBo3wPhgFBIV/Si/sduY1A/9QFwfKbzab6vf7RnDZ\n2NgwiPP5+Vn5fN4gMyBypkYxh51zUBPc2tpSKpXSd999Z/A9Qo6xgqUNoQmlAKGAY1Gr1fTdd9/Z\nykjgK8g8oDN7e3sr35c6HbBet9u1licyymQyaX23DAqJx+M6Pj7WdLrcsUDwwZ176HF3d9eCNJwE\nUGC9XjeC4v7+vg1kGo1G1ivLMA0gZtqPGIAjyTIh6oyMegZG7Pf76nQ62thYTl18/fq1wcNAehDn\nKA8gcxh1vuN8vmw7LJVKOj4+XpEXSdbyyvx5sobBYGDz/D3PgbM1Go0VyBX0iJ5+Sm5v375VKpVa\ncc4E135veiqVMhIczp8MiKAXeSGTeXh4sNZcAi7kqFwuK5PJ2H2Nx2PTUUl2X6lUyu4BHaWdDh2l\nHZg798O3IKlybwSnXkd5vjhRavaM9SVzQ46en59VKBQM4qcc0Ww2be0siAiTPb2OvnnzxgZqESyS\ntKCjDIgicMHZw12qVqu6uLiw1cD/X3SUTB0dbbfbxonwOsqdJxIJ09HxeLlz5dd0FHvIsB70CTmo\n1+sWcIHWcbe0MENQBinBJhUKBQuAKCXCt4LHA2+m2+2u6OjZ2Zn5G8+98guWIAZLMl4Un9PtdlUq\nlWxfCDwK/Ac6SjIAER0+gi/7oKN+6JkfsAbvCIL769ev7Z68TYfo/nuvb9L5k9X6QQcYnOFwaJEa\ntXsybYg7
tBgRxZI5saPa10f39/eVTqc1HA51c3Nj7VBAOyycYB0pk6oCgYCRWCRZlkQGA4OTBwwk\nCnMTtjm9u6lUSvl83uruvo5OhEd9mLq0JOVyOY1GI/38888mqEwpJFhgHSkkKohKkiwLoNTCnSPg\nnkGKw4c3QY03m80a/Og5F6FQyAI4SDWS7PtKUq1Ws8wVh0fw0Ww2jcyFMHMGSGH0PNOCQyTNhDVQ\nEbo3KL/QUpbL5YwZDBzo4Xk/24FIHJb2hw8fbBsZSlwoFLRYLGzKHkQk5A9oFcgWlIFM32/uCoVC\ntnSIs4OIpFKplYmG6AzQ3/7+vt05GQvkUxwBZ2m1WgbdsyuAc0iy5wGatrW1ZXVJ7pwgAjiWnnWC\ncurjwWBQmUzG+uw9P4I7Ql4gwLGHfTAY6ObmRvP53MpSs9nMghTKcpDv+CyQGUoTOCZ0lACYYCWR\nSBhkD2GQPRSZTMYCRI9goaOUCzj79va26eGHDx8sqyMAQn/hMDBgCt1a11GMO/V5UDcQhHQ6bdmi\nHwDF/SHn/AmFXtb18szm87lNFYQoDTeDQILMs9FoGOGNduNUKvULHQXVhLeCs8N+oqPYdWx0LBZT\nLpdbaW+WXvY4kCgRZPOdM5mMJOlvf/ub+QjkkwVltAKHQsv5Hdg40M6dnR3jLKBroCj9ft/kOZvN\nWtDMGSjx0mUDsoK8g2qu6wjlRtbPk8k3m02TJwjRINPer5BQQA7/rdc36/wxLDDtidin06kpM4oH\n3EJNGfjJM7uB1iAqAcMkEgljWlJa8BAzTpL97wggzo0aEg8JYggEKowj/AAMOQ+Wc+NMaVkEakSJ\ngbwwtBgnBuYw6QpnSJYLexcnRW0Ig+vrfzh/7p1aFwLFNjt+jz84dRwR98Gd+7kJBAR8ll8qRM2P\neyArB+7i98mIvLzg/CH9+OfEMAzYz7yPb//he/k6OfIivexlQKHJvLlzScZFgHFL3z2fS2DhywiU\naCCG+eALwwzi5XkVGBgf8VNL950d1BwxrDjth4cHeybc0WKxsC4RyKjA69LLohnpZYALd07LFTqK\nTCAPlOc4H3fmuwCoY6OjyBZLq2Cgoy/oKDwCOnZ49jxrAqOvyQskMIwziQTtvegoMopj5ftzxwQJ\nkUjEhvcQ5EPUarVahkau6+i6rfgtHaWll3Pz/HnOXpbhD0UiESOCAi//mo7S5sbPoaPYRUlmx9rt\nttlFZNajXOt3DurhdRSkbnv7ZSQ55yAx8F1FXkfRbXQU7gMJQ7PZtNkX1O0Z/wuxGx3Fhq3rqHf+\nXl54NrQleh3lbr0v4jN8RwrlX+SF1lVKA5SWCHYCgYB1jCEr3v+go/8b+P+bZPt75w80xoPDiEwm\nE2uhIRqlXQxyBzVxIF8EzdeBWQSysbGxwt4kyp/P5zamFYiN2vrl5aXBOmSoCDljI6l7elJZv9/X\n7e2tLfaBEMP4XCDpYPBlsIPPkBi4k0wmLUKlxXBjY0PFYlF7e3sGLdJOhCJBkqQ04FuyKBkwVpY7\nR/kajYaNA318fLTWL5RIWta6KG2QleIcAoGAtVQFAoGVjX2sZN7cfFlMQjsRGSxramkZIprHSBEV\nE/AR6A2HQ93d3dl60oeHBzWbTRtuBAFOkpUFfI8/XI1sNmtKTauPtGxPZY8AMDp15L29PVvZC3pE\ntuTLRpwN5wbk3ul0VCqVzNEiL56TAb+DmjfEPIxDOp020huyA3R/dnZmswn8rgGcQaVSUaVS0WAw\nsNnyZDFk2SAz3A3QbKVSUbPZtJbDVquldrttEC7BM1PUcAzIUSQS0dHRkZXBgM9Ho5ESiYTtL+DZ\no4O7u7saDof6/PmzlQn93+OIeA44E+DfXq+n29tbC4To0+52u5JeevM3NpYTA7lrH4xhQ5AB2vFC\noZARYdEjv2tgZ2dHzWbT5tLTikmHEzpKyxwOnIAKHQV2b7fb1h78NR2Fg+DZ84eHh1aOYpfBaDRS\nNBrV0dGRzVVgp4bX0cvLy5W2THQYm06bG0EJaOJgMLDV3+PxWL1ez84OmoKjYxAYtXeCiVgspsPD\nQwuOi8WiJU/ZbFapVMrQTAJFyhrdbldXV1dmd9Zb+LyOegcOenZ/f29B3MPDg432xi4/Pz8b14TA\nH7sKUZc9B37VeyQS0dnZmaLRqNk5EBSCh1KpZDai1+tZyeu3Xt+k8/fEKD/qcWdnR/l83qJYTy4i\nY/dkEcggX5uZzPvDNMbRzufLEYzUFgeDgUEykqyEgHGGgLQ+kpNuAiI/WuYIPjw7nAiQwAYYD7iR\n7wgMT/SN4YRIgxFhBSeGirYd4GvOD9THPfKCeY4TYkaBz4xROt8f6wl7ZNYYK2rz1I/JQqhf1Wo1\nSbLOBgbKYJCAmwlgiJK5C+kl8idCBi7GIGMccWzr2Q4QHxm1lxd/57Di4/G4oS5wU7rdriks5Rsy\nWUiC3LknRyEvkMzoXiFD88Sz9ewYeQH69OgTsk0QM5vNzOjTcpbP5yXJhi/huHGI3D1DY9YJRkCt\nBKL5fN64NgQwPvjkPfmc8Xhs34Xglhq8lxeQEGlZLqKLwU/94+d/TUf9nYPmMMiHllwIWGSLODbk\nhe9EoALK4UtlsMk9wZC97k9PT1aqoISIPeA+QRN+TUfD4fDKaNpsNmtM9PVnSFbN2YGuIdchL3wf\nLy+SbFIgXJd0Om3fFedH0IAt8DoKEiDJEATs4v7+vjKZjD03ngUIL/LibbgnCmMr+K7+zjc3N20e\nQKfTMZ7Gw8ODPRPsMfqEHoIIenkBWeZ+U6mUrQkm2UBHuXOP9IBiI1fYSu4cnaI1cm9vz1pxi8Wi\nJFkXE4EPARTBCDq6Li9fe32Tzh/4MhJZbr/K5XIGdb969coMXDKZ1M7OjgaDgaLRqHK5nOLxuP09\nsB6QMhkWWdJ8PrfsnOj8+flZnz59MigK1IFaH9nw/v6+crmcCoWCtcoQIGxsLEd3FotFM1YMtMBZ\ns/lssVi28rFxi3oifdEIChHddDq1YRz8N6zWRqOher1uMGi73TaBQJEhDOXzeRWLRWUy+VC/bAAA\nIABJREFUGXPAOJl4PG7zvLe3t3VycmI11lgsplgsZgERm+Sou5Ft8Axhw1L/pgaO0AM3X11daTQa\nGTnn+fnZaocEHTzjQqGgYrFoz196MSpk4NJytjfb4kKhkAWOZGVE2mQswG04iclkYhkZPffU4nBE\n/X5f9/f3puSNRsNa9Hi+QI35fN4GPtF6B7QJ6oHRKRaL5ix2d3eVTqcNFSoUCjYVDSNPsAgkTbbK\n/2NwDYYzEonYAJdMJmOZ+f/b3pkutbkdbfvWAAjMIEATmhEC7G2nUpUcSXJE73cEqcrR7QzGmEFo\nwEICARJo+H6orqaleHvn3+uqd3WVK7E3SOtZT3ev7rvv7gXT3wd36XTa9jyVSlmtE4eH/vIctVrN\nAvR0Om12RfbLZ6Av2J/vcKCv3dsohwVtcdROQamwUZjXtIIVCoUFG
53N3i4gyuVyNrY6nU5bu+/a\n2poRySKRiP03P4WTgx4SJsgJGZkfakXgB6ELRO/29tYCK7oE1tbWFmyUPcShAzWjz4lEQuVy2W4Q\nZGYBgSz6QqmQgxsbnU6nhupQboD7AYoSjUZ1eXmp8XhspS9aL2lFpSMAPc/n8ws2yrOBQmGjtPRF\no1FDYvke9AWmvOeBESSQwWPboG6UGJ+entRsNi0A4jbB3d1dCzrYU/Tc2yj64qfsrazMr2mmJLW5\nuWlcIP6bJwCCCOIP0H0CkXg8biO6eW+JRMLIu9ls1sivm5ub1qKKjWYymQUb5dz5kfyUNX/qSNTC\naG2IRqNqNpvGFKfH2teeqNsTpbFhEGKYnvbvf//bIPdWq2W1kv39fdXrdUkySJ9MGhILECNQPw4M\nWAjYlAyY0bL899lsZnXvyWRiV0pibBDSJpOJXaazvb2tw8NDmw5HzafZbBpLFiLL96ZYYYBA+sBQ\nZLfUeoGWaSlZW1uzqWTsORn3/f29vn79qm63K0mWtayvrxs0DAGoWq3q8+fPlikzgevo6MgODKaN\nMTOfADCRSFiNlv1lzz3UD19hbW3NrkBlAuLr66td2xmJRGy/YfRSX4asR1bDddD+voJms2m1eK4q\npnMEWJlsmZYmSivsLfoCp4HShN9bRtGScaF7Z2dn5hSktyyQkgC92UdHRwZj9no9uyioXC7b1cVb\nW1tqt9u2PrJQyF3Ay+w5z0B5gBoo0D17SzkIG43H4wvTJ31POTYK8pDNZo2E+/T0pHa7rVarZQNZ\ndnd3dXh4aL6CVj8yT18CYkIoe+7hf3QK6B6G9+3trabTqU0THY/nF1U1m017v3BxRqORlcGYwdDr\n9XRxcWFQPXfQ438gu8He5h0SVLDnKysrC8RhAjjew8vLy4KNttttmzUCQnN/f68vX75Yt4Qk44Ew\nFluaE2mr1ar5Rfb89vZW9Xrd/A/f5cfsUhoEIl/ecwILX8ummwt7arfb5u/QQ2yUoJVgE7um1JjP\n581mKNO0Wi2bXYCNEhjAC2IfYOYv2yh8KZ6FrhC/t+wRQb80n1Pz+fNnm8sgvXG3uD/h9fVV+/v7\nqtVqVj6mDNlut1WtVrWzs2MlXH+ueD7B6+vrwsRFdAPf/yP5KQ9/P7GLuj2QD4rnJzVFIhHLNqld\nQ8iDIEErF722QEZ8FjUpskdJ1nsK5MQAC+B6afGKTeqIEA09MZDv4YAia+ew9pPXJBkC0O/3FYlE\nDAbipfI5EG1wvjCAyV6AiIDTfC8vNURIZygqNS7KIv7A9VMDh8OhTbcDFSGb8GtgfkI8HjeDomYF\nksFQDgYsUV5g/32pgFKCr/VjFNTMOUzhapDZ+m4AnolDBWINmaTvqvC65Zm7fgASTo0SCKUgaqro\nGH/3UCvvA4HrAqMXB+iDWdYgyfZWeit/QPIjs2OfcPhkwqBbBB4EXUCT7D3PT80U2BHWO89Kmx+1\naWyUWiUHGr8D0RCEhcFe2CjcGOxtbW1+GyTBEIcPKAqI039jo95XoJfYlh83SwAEyYpyysvLi90i\nR9DOe/DPyu9sb2/bLAF0n0CIshQ6io5Qmly2UXR+MplYfRl0k5ITATNBDvqCz/TscCBtgmKQDew4\nlUrZPmGf2DuIC7bEXmOjdLZAqGPPCVj5Lm+jjBX2xDb0jZsF6bCgBEVCg2/zNsq/A4ujL5FIxNbK\nvrIGkiOgf2wH8ikBA0ECv8PEUtaOf0E3fAcRvpOOE/YcdOJ7NkrQxXqXbRSk90fyUx7+1A65fWk8\nHlsGARQDVM0gBd8eImmBuETPfzqdtozUM3VzuZw5GT/6cXNzU6enp0aUkmTcAuYFQEKhbs/sAeBW\nepd5ydR8UWL6OR8fH62d5eHhwZCKyWRiPck3NzfWTgMMValUrB7nB35kMhn98ssv2traMkdALQ64\nisOamiIZM213ZAvAy760MRwOrc8XmJgDlYOBmttoNNLFxYUxZldWVuzd+podXIhqtWrwI0ZNaePw\n8NDqbNRFCY6keQb1/Pysm5ubBYIYkT7OyF/xS2BAOQjYDViW/YOpDimRQJSDZHd3V58+fTLin0cv\nstnsQv2dmQIMG+H7er2e+v2+BR30auOgqHOT6XOIsOeUeSilAOXy/ur1uqLRqE2uo+xSKBSMGAux\nlfJWqVTS7u6uQakcWPSng8y0Wi09Pz9b3TGZTNogEzgc1HvpriGIARqn+8b/2+rqqnK53ELQhqNe\nX1/X8fGxZZHsOWvDYWKjZHrYRSw2v7fh6urKLopKJpNGSEUvuSKZDBcbwP4pY1xfX1siQuBarVa1\nvr5uRNV4PG4tiu/fvzdiLDYKdL5soxyyTAmdTudTRSVZKyvvg4Of8bnwAUg0OJTQi+FwaDZKokNJ\nkMPbM8vL5bLZp++o2traUrVaNdSVDBVuBL7k6elJ19fXhp7RSUTgBodDkqGUwPnYMb4IG0VnQexW\nV1cNvSUQ2Nzc1IcPH4zx7/UFG+W5CIYgBLIeiJ/oCK2rkA4p8yUSCSNs4nc4W9A/bJQ9Z/ZGJBKx\nq7bH4/lAo2w2a2cMn+NtlJIpJa3fk5/y8PeDaIAGcUz8G1kaPw9Jgz/AmcC8kizDJUJlk7jrmk0m\nwmaTgap8HZt/AyYnIvQkEaBkRnRKWqhVYTDc8kdPM61nnnxDtkBrjye7Ef2RJfiMnudgXUS/vs2O\nvSBwYF04GA5gIk2e8927dwvDTra3tw0WpYZHRgvMSvmEn8PQ2XMicEkWcRMEsedkZrDx/VAUjNXf\nNY+zJbP2LZdE3xgye05G5fWFVi9uhWPPyTaIuNljslCfeTKnghop9WL2F7idNaALoF9+At/j46NN\nqMMZoBPUznlmAgh0ibV75Mg/L47V64ufowDpiQMUkigHO5khDhZ9gZAJMtHpdMy2sFPavQjCmDbp\n5yKAelGz9u1YkCNZu+/pp+7M96ED1MPRbZ6dn6djgzkL3kbJgD3PCBuNx+fjfb3P4P8TIMI490iR\n1y32ETteDoixSZ4d+8H/QBokYQD54HcIcnzQzhwMgjPvF9EXScbd8UjRsl1QBsNGOWwhErJWbBSb\nx2+S3XOgYaPourdRfs8jZSCu2CjvlkACHgTPhR/yZFvkezYKuoZN+z1HXyiNsecgIKBxkixgAhWG\nu+P3kX8DmcSn84782n9PfsrDn0iR2i/tJRgshgYExuhdHAtXTt7c3BgkDRkHUh3G5FvigBIhHI3H\nY7sJjKljKysrdiPa1dWVut2u0um0QW1kQThrJpsxYER6mzgGErC9vW239kGcOj8/t+/D4IACgbtm\ns5lN4IIl72vjNzc3NrAEp9zr9az2zZhLuAnUp3G6hUJhgdAE9IgBMJiDfS8UCnp+frbBPZIM0ZBk\nbUkEZt1u12ZoA0s/Pj6q3W5L0kLmhsNvNBqWcYPs8GxkXLQBcfh7Zi2BAZ0cTKnL5XK6uroyR8Vh\ngoP2HQLU7SCHAWcPBvM7
vj2jndZOuBvc/MYAHVqmQF645Wx1ddXqqr6bBX0hwD04OLC2HoINDk/2\nHNLWaDRSp9OxPnYCQN9uur+/b0Q6SiDNZtMCO8pXvlY6m83bByuVykLQxx5KsjHATOuE6EULIexl\nD8mTmSaTSX358sWuxaaMQTcDe0uNF/QHjgM1fmyUzhUCNAbjUJMlgGNdv2WjX758WQg2QAmwUVBI\nygK0y0kynwN/hL2R5iUfngmdZXoiWTPPUSwWjUTM+0afKHPy2YlEQvl83ghwviuKQ5HPSiaT1mbn\nryv27abT6dSGdUFAA3WDc4SNer4U6Cjsel8m9VMhuR6X+x1yuZwuLy8tmOR3CMLwBUzKZH3YKPre\n6XSMJ4Wd4tMpBcNbIvijW8WTnEHJOItIVBm97G201+vp8+fPC/pCgOy7KmgRxKf6oJvJqNgobb3c\nhcDdDvjG35Ofku3vs1QOnLOzM2PJ8m+0kNAPChGw0+lYzyXZ0nQ6NWKaP0yI+MjEgY0grpFl0QdO\nXXdvb0/FYtHGVfrZ+Ri/NCck/fOf/1xg6ROhwvak95kDu9Pp2EHJ7wwGA2uP8bAdUS9OD4WGJAm8\nDckFVurBwYGN8vVtcz4LvLy8VKPRsNY3DBOnDxmN/+V+eowgEolYbQ7Y0WdvwKfAozyvd1hEuhwC\nsJ/pOsCIvSN4eHjQP/7xDz0+PlpvLBAk9Vv2nOs+m83mQjsatXUQEWrYOFcOJurN6I1vE8KpAkHn\n83m74AUY0denI5GIWq2WXfCysrKyQH6kLx0973a7ajQaNiYZRwapiUmXZBc4VwIcv++gHxxAOA+c\nF6x4HBt7TqAxHA71+fNnffv2zWwUCBIHxZheZixgo+z5eDxWq9WygNOTCck6o9G3i5jQFz+/HZsA\nQqUrhEDVD00hO+p2u/r111+trPIjG+31erq5uVm4wIuDApLebDZbqFX7uQl+371/8Yc6AUYulzNd\nwUaX69PYKL6B9wbXye+5n+9BMClp4XIqSJG+Fx4bRd/xL+gLwRjt0r5DgUCV7BgkajAY6Ndff7VJ\neaA08KrQb67h7XQ6urm5MWSCslen01l4XvQFMirrxj+ydoiSZMpwGvL5vAVjJE3eL0ajUbVaLZ2d\nnZkuMiQOPgRBE7MhsFECiVgsZmOiIbxSmhiNRup2uxaE4X/QF4Jnnhcb5Spt7n1gON2P5KfM/DFQ\nT+rqdrv2Iomufa+8H09L9g2URt2x2+0alOPhQJw5Rul7JCkxEJBIspdBRs/vYBhEoUS67XZbx8fH\nFoSgBNIcfqLGjyL6AwWFAjXwRDHpDfICYiSK9kxc9oy/+yCB5+H3PRzJ/QXAa5PJ2zRFnLsf8kFW\nwyEEA562RqAsv+/+Hfk5CR4G/V5ww3o5dNlLPr/T6dhQFSJrT3qE3OWhb74D8hxtOMv6wrRE1r3c\n9866EPbcH9A+cIDgI73N7Od7cTgcKsPhfLAKJSAOXz4XIhLvzesLa5e00K/O++Z70AEgT9oYlwmk\ntM8S7C7bqC/ZkH0Bq/LH78nLy3wOPgfE98o8yzbKu6X0wHvAZlk774l/A6aVZNmgR5GWbZQWYdbk\nfUA0GrX/jm/yNkrAha/ADnw5DZvy/oXP53kgHnMQ8a6lt4toCOy8jXIQsvc8GwFWt9u1Xnv8Frri\nh/F8b8/Ra6+nrJ33hL5Ib4S50WikdrutbDb7HzbKYbtso3Tl8LlPT0/qdrtGjiNABHHwffZ8B/u+\njAzyd++P0Bf8Ks84GAzsgObZ0T8SUwIRghp8NKRO0CBKwN4+OdThpGGj6CUBkif7oius/b/p8/8p\nD396I1FYerwhQUDAQFH85QaQZT58+KDZbGbTomit6ff7RgADSqHGQi2WFqder6fr62urI0MAmU7n\nk+4+f/6sP/7xj0qn06aE9NhzEK2trRncB9GLqBPmLw5gZ2dH1WpV29vb5mB8a5JvxyLYAYL2bPpu\nt6tms6lGo6FarWa1efbUTw6bTqd20UUikbAsNBqNWtbM4BqIaSsrKwvkP+rmp6enurm5MQNjSpaf\n8iXNYU2YyRA1eSYifAIlsiNQhH/961+qVqsL09moDfp6Idknc/U9quC7CSgp1Wo1O6QgXfqLV0BP\nYNrzWRg8mW2j0VA+n1cul7Pe7NfX+aS74XCoP/3pT4rH40bUgleBA0M/yNbJnBhbymEA+nR0dGTZ\nbjQaNeeBk+Kg4XlgfbNno9H8Uppms2kZKrAv3RdfvnwxCJJ3h6PxNV0uyeFyLWb0x+Pz6ZmUCjY2\nNpTL5fT+/XsjORIcQLrlnbM+mPcgBGQ+XNiFk6RLABs9OzszG2U6G7VjPgeSJAcTXR6UEL2NQrKl\ns8Zny74d1O8/CAj6QnYH4nR4eGh3GqysrNg0Sk9upLwHeZGAGBsFIQLtW11dXeixZxDT8fGx9blL\nsuAAoi4lVboHCIhANYfDoTqdjvXNo4sQVL2NEkDDcyFw8foLWkFAz7sg4+UZC4WCarWaBVeUZWDI\nE8w8PT0Z+ZmyB8glk++azabZKGRnbHQ0GunPf/6ztaYStMGVQceo42OjoE/+vpfV1fmFQNVq1ZJY\n3/FEAEPC0+v1bPAW3AXQLKY2ehQJX4aNvnv3zmyUoPBH8lMe/hAngE8l2YHi/+A0qd9xgJAp83Ik\n2YAYHNbz87NB4hzOtH4RMfoXT/+oJ/JhrEBoRGNko56sR4TJsxBJegITkTZZMxErwRBZLZm2b7+h\n7svaKV1QAyKboocbY+RZlqNNSbZGHzF73oEnDrL3RLuwe/0lLmR6kqz9hqDIIym+P95PTIP447No\nImbKO37PWTvi106XyLK+IFw+xAEJFEz5IZFILJQ0WDeHk79BDlKZh6w3Nub3ufuMAMe7vHb2HQIT\nusLekzkAlzO4yJP6eB+MDIVh7J0uzym9HYzoC6gJP4+9cbiz55Ae0XX2nKwTNIw/nrjJpUmU2IbD\noQWGDPdhX33XAIfQso2iL+w5pSf2iWdCJ0g0lvec/fXr5pAm6YBLJL2RyUAnuQ2PoUu+rMheQqxj\nz/3oWHQbX0YW7vccJOB7+sKaJS3M8UD34DxgF79lo+wZa+fgQffYcwIFbJQACRsGkfItfF5fvrfn\n6DwBiiS79Y6gwieLBMoEll7P19fXLcHziC+lE/wie+LLZHBZKBVD4lvec3wK66aMwvlAVxU/TxBN\nJxk2is2x59g/34UNM8CInyfo+5H8lIc/cKu/wY2hK/zdwyT9ft/+vdPpKBKZD5ZpNBoWIedyOf3h\nD38wUsTDw4NlVcViUQ8PDzZiltnnkUhEhUJBkUjECGooMzdN0XYTj8ftc8n2iSiplQOrs3aiuI2N\nDSNWff36VTs7O2q32+Z4Dw8PrQ0N4mI0Gl3I/M7PzxWJRGwmNEpC1MgVnFtbWyqXy0aQBCZi4Ecy\nmbRaOwcvkSyw2ng8tmBhNptZTfHs7Ex3d3e6u7tTsVjU3t6efvnlF8viifAhzayvr
1v/6vr6urUO\nFgoFg9ZYF8Sr4+NjZTIZI+pNJhMjL4EAMVSEtWOkGCW1NohVFxcX1kpJy1ShULD6IOuYzeYXPVUq\nFQ2HQ3358mVBX9bW1qy2x42EtG4VCgXNZjPjM0DoozaPsZIVwK3g8AZKxNgZavXlyxfNZvPBRJlM\nRru7u6rVakaMxflAbAPabTQaikbnt4ExHSwWi9nh6q9VrlarNrKXbO3bt2+azWYWmKITQJBA3+jh\n/f29wazY6N7enq6vrzWdTm3w0IcPHyyrfnh4MI4BOtFqtYy7guPnClQGUdF2m81mTS+8jaKHkgwu\nBt2id599p3a/srJiA1guLi60vb1t7aSJREKVSsWCAt4VNgqqdH5+bq27fkTu6+ur1YDhEJVKJUPa\nPJeDTJMWQ9AFX+KhjIUNvL7OZy/E43Gz0W63a/Xtk5MT44fQYpxIJMzH+btVksmkUqmUIQPoIYfo\n1taWarWatRxykDWbTSP6oWd3d3d294Fn2oN2xuNxG3Z1eXkpSbq+vrZ2woODAzuk4QbMZjMlk0mV\ny2UNh0Odn58bYsTEUBKl29tb8wWbm5vK5/OaTCZWN4fQRxDKnoM4EZzjzymDUnLjnZ6fn2s6narR\naGh/f9+uEibIwubx99I8gLy+vjafkc1mDVHzc25IDiuVigXOkLx/T37Kwx+DBOrg0OJA3dzctPoy\nGRIHOYc65Ahgpl6vp69fvxqsCmFsc3NTX79+Vb/ft7u4mdwnyW4/g/1M1Nfv9/X161e9f/9eu7u7\nCxcQMWkQZjKRNXAqJC8YnzBmx+Oxdnd3LXKDFcrEOa53hQQUi8V0e3tr419hwQOf3t3dWWbmUQYu\n3SCIAD7HCZElEU1CCFte+7t375TP5w1yZAQwmfjT05MuLy8tEODyFlitnU5HhULBCDDMawAq9HtO\nNs3nscdMTKRLIJVKKR6P22z81dVVK6P42joG8vz8bEQZP+fdd0uw5/SvY/iFQsF61YEgu92uZQ9k\n8JPJxN4NV7n6S2aYzuXXDuJEQLO858PhfHwvw0toP3t9fVWr1dLGxoZKpZJNxqS74ebmxqaGAY8S\nLBFE+3UPh0MjfeKUuQxpOBzaRDn2k3IBNkqmNJlMbD7BcDg0Ah595JQYrq6uzC49IRUbLZVKxt+h\nHt3r9RZasPAJ/X7fLorZ39+3koUvGUDcy2azhir5wJ2MlkAiEonYNd++9xv9hRjJ2mOxmDqdjra3\nt5XNZm0yI3t+d3dnCB/rHo/HNhMAPhBwMNwKbJIyHIEql/GwD5ubmzo4ODBUizIGJYuXlxc1Go0F\n8jLJC8TMcrmsWCxmfoj1AK37PX96etLV1ZXK5bJNc4Q7RFmv3W4rFptP3vuejYL0Mab29fXVyI8k\nLSsrKzZhjxn7yWTSUDlQlkqlom63a4czpUgOaI9s3t3dmV6srq4uTAPEztvttlZWViyQIOHCRgnu\nisWixuP5/JBMJmMBJolVp9PR+vq6JWKgFYPBQI1GQ+VyWdvb20bGRF8IVjza8/LyNumVZJjz60fy\nUx7+wFlEgR7K5qXRmgZsxwHtWfcYFEiBv5HJE/WIlmGueoLdYDAwjgBGA4fg27dvGg6H1g8L5AgE\nRfSPY2bt/IHwQy+zv5QBR0PGDNwFJOhZxX6uAdAtzwCRzffTg6zwPGQP7DlQOEGAX7MnFpHVwl1Y\nJsARSDG9D+gNQ6FWF4vFbM1ra2tWpvAwPTAbThYyExAYz8Z+4MiX9xxBl0A6gPV4L2SrQH7oC7V/\nshiGA7Fnj4+PC7d2oS8YrmeAA0ED+wHzUYdc3neIX7xvD3mzdrJBAhCyuNXV+XW+9/f3KpfLFhj7\ntYP0oBtkpjhF9pxSw7KN+pKN33f0BRv1XRroMSgHMO2yvoAiwJFg3XS/cAj7UuDj4+N/2CgBCc9G\nMICN+j33fensF/riCXiSzN68jYKYoRPY6LK+UOb0+tLv9+05sPPl7JihQ9goaA8oALpL4oS+sKf4\nBfSSPaf0Bw8DP0kJIZFI2F4u2yjkbKBu35JJwDgYDIxXgb7wh/3ERkGn+FnfQouPRf/Rc/wJaCHJ\nBPuOjyBwYd3wvOiMYTIs+sLP4NPx0cv6QvBLIEY278s1yzaKvkynU0P4WB9rR8d8kEhCRwLhidi/\nJz/l4Q+hIZvNanNz06A4YOfb21tjqALf8eC3t7cL41Y9CWxra0uSrOaFg8fhsvkoCYqIUvqDnawd\nyJdDczKZGORCzZ3BH7PZTDc3N3ZpB5kabTjfvn2zrI+Dm+DAs+Qxfl8jh/DmnbNn6lNjYx2ULFAu\nlJqMOR5fvGENooyfOEe/L7PnPTOb/YGo6ccog5BwAECKou8e+JX957/z7+/evdPe3p7tORcYEY1L\ni1Bcq9VSJBKxq0gnk4lBlZ1OxzgQBHzoCw5/uXZKpuXLOjDB0RcYurwrWneSyaRNVKNHnks46Df2\nAcPl5aVldLFYzHrygSxTqZR9B+Qs0B4gS75rPB5bmx57hL6DiKAv7Lkn6e3v7y/MkYjH48pkMnbx\nix8yQgvowcGB1eYhr7Xbbe3u7iqdTpueUwtlaI3nNOAMfWukD1C9vpAJEUByACSTSduH0Wh+NS12\nwB4RcN7c3CiVSqlYLNqhh412Op2FOQz0WgMJo5O+PZcDlwCH56KGTyKDLfrugGQyaQ4dX0XrInsB\nzA9a5e3g6enJ1h2NRm3PfdDBdEICQA4gSca7Ys+XbZTDDPvwNur7ziH3+rX5klar1VI0GlWlUjEb\npSbfbrdtD/Hz6Iu3UYiXIIjL3T2sgz1n/wgi4AYQLPEOx+Ox2ahHUQkGrq6utLW1pb29PQtKaDnn\nllW+g30gwPecBiYBUhYkCSEhxaezBp6LIASCKj7+9+SnPPyz2aw5z2KxqPX1dd3c3Fj9sdlsKpFI\nqFqtWsYONDUYDKwfnAOPWsjLy4tKpZI5EepXGEK5XLaeUz5vOJzfRsad4rPZTPl83g6ug4MDO4hw\nYhz2xWJRg8FAt7e3qlQqmkwmajQaymQyqtVqdsCWy2Uj9PDZs9lMe3t7SqVSqtVqGg6HOjg4sHpU\noVAw1CKbzVpN9Pn5WaVSyaLjarVqCs3NZKPRSOl02uqOfkIU2U0qlTIWNTXuwWBga4XIxM1z7Aso\nBjcYculKKpUyxjRtRYnE/DYyao3c9gXUVq1WrVOAmvl4PFaxWLSDCCiU2nmxWLTAgpGX1Ler1ao2\nNjaUyWRUKpUsezo4OLALV1ZX57fUVSoVra/PR/mSQRWLRbslbH193SBonoP2u0KhoN3dXZtVwDCg\n2WxmN32hLwSp8fj8DnXQm0qloo2NDV1dXalUKll9c21tTeVyWS8vLzaqmYObG8fq9boFz5JMRwjk\nKpWKTUFbXV1VqVTScDi02iFdK+j8aDRSuVw2PWftBA+FQsEyOT6Djo1KpWIEvVKpZB0s3KBGoJNO\np83eQCa8jRaLxf+w0UqlYjZA1wZM/oOD
A0N0GMqCvnDI8q4Y8MKzX19fL9Rl+V7mP8AS9zZ6eHj4\nHzaaz+eNA5DJZFQsFu2ALZfLhiRho5ubm8pkMspms3p+flYqlVI6nbY9wkZfXl6jXkcHAAAIiUlE\nQVS0u7ur/f19Ix3DrsdGOVxSqZT29/fNpgjGmCUwmUx0cnJi3yXJbFSS6RtZfDqdtsNsNpuZXmCj\n+FzmK7Bmb6PYsrfR6+trxWIxs9F0Oq1yuWydCLlcznwKde1yuWx6BRJZKpXMJ2GX4/F4gZcxGo1s\nxDkoYSaTseBv2UZ9e2ehULCplKwVG6WEuba2tlCeyuVyRhwkoT06OjLuGDaay+Xs7KhWqwtj38vl\nskajkXZ3d1Uul/X6+mrlBToPyuWy6blHOX9LfsrDv16vm/Os1+tW80smk6rX61YT+/jxo6LRqE5O\nTkzxqH0dHx8rl8tpNBrZXff7+/s2U75er2tjY8MgyMlkonq9rvv7eyOdxWIxqz8WCgWDywkYer2e\nKpWKKTlZBuUDPq/ZbOr4+Ngi1v39fZ2cnNh1s/l8Xufn59ra2tLp6akd7kSI1GEhuKEE0+lUW1tb\nymQy5oAxvFarpVwuZ8FOv9+3wRX7+/tG9IIhy7AUyCUHBwe6uLhQLBbT8fGxXYj06dMnbW1t6ejo\nyOq8sFBPT0+t5gRpC37Czs6ODg8PjYCVSqXU7/dVr9cXCFz8/HQ6NWMdDofGn8hkMqrX67bn0WjU\n1nZ/f69arSZJuri4ULlctpqhJH38+HHB+OlLL5fLdjWzJNOJSqVih9LT05PVJxluUq/XtbOzo263\na4fI9va2DfOBL4LeSbIrS2mp89PAarWaGo2GHh8fdXJyYr3np6enyufzFhBQs282mzo9PVU0GrX6\nPqWY1dX5bPZqtarpdKpkMmnrOzo6WhjWUywWtb29rX6/r3K5LGnejsmd8xxuqVTKkDQua9rZ2VGt\nVtPm5qYuLy9/00aPj4/NRtnDk5MTC0qSyaSy2axdD8z+JhIJex/YVL/ftwE+oG8cKhAzU6mUBoOB\nSqXSwtrp1sDma7WakX2x0dfX+Y1rp6enKhQKikajKhaLOjs708bGht6/f2/vmYOUFjwCPYIpMrls\nNqtisWiZarFYVLPZtNsVvY3u7OzYTP5cLmdII1M1sflsNqurqyvF43EdHx/bQfbp0ye7hQ+bIvl5\n//69cTeYb893clsiQdfe3p76/b6Oj48lacF3UHYqlUoWSOGf0um0jo+PLXChNRsbPTw81Gw2M25A\nMpk0vfr48aMODw81Go1ULBb17ds3xWIxVSqV37RRZgXgO+FD0NrItEICfobi8HwgEkwUxWeCpNHK\nt7GxoWq1apNjT05O1G63bV8LhYIFoBzKjUZDHz58MPI4XSEgzdls1loBsVHOPc6etbU1s9Fer6dS\nqSRpfn8N/BNuZSSh+2/6/GO/+xP/C/LXv/71f4jC/PhJ4A0IZZIMrpNkRBF6KYGeY7GYwSUcwNRU\ngXD8IBcYy9REPbeAz6LnE/gN0hlzBchSfSsbtRrgftbIzwKx8jv0uUoyWFV6mz1Oxgc8BBTGYUW7\nIOLb4AgsILv0ej2D2aS3me7UsmgB4v+zRv9M/ndAHnz9n7/7PWePqJUCcVFrRQeoY0IsZDgOJQng\nMcojrIlSDRAf7Gr0hRohWSxtPuyXfz4/bEOStanB0aDEw8/OZm/3BKBzlEEgnVFjRAeGw6HpKutk\nb1kjpCh0irY7X7PGTvgs36cMBwa9pNcachs1eP7/dDq18bDU+rm2lc8kS8I+4EV4G2X/2GNvo9g2\n75x/j0ajCy1VlBVooQSO5me9vqBjfm4/ZSJ0BB0iKFnWZ2+jBMnfs1G+E9v37bIIZUH040c2CgRN\n0CO9zcfA7slKsStslP/PO4b3RABA0OxtlGfi77wD9pFWPewOG8Wn+ncHAZv9/J6N+pZL/g6cTUkE\n3cNGIQTy7vlO9OO3bJTDEDunnIW+LNso5eFoNGoEQNYDUue/G9vzbdjsHz7Nt1p7vhqttXTCcIbh\n09ljSVYWYM8k2f5PJhMjgPIcf//73//fj87Zn/Lw/8tf/vI/GA2Ql6/p4VRxKvwstUkY1F7BCQpg\ntWPE1NU5vGk78wcT4g1DktUhcf6DwcAGQ9B2AxzlyUMccF7BIXIB+/CZBC/0R9MrywFAj+vj4+NC\nLyrOGCXDwFEa+ur5fAapsB7YwZ6AwwEHWsC7gHNALR+CEHtOoMP3sefRaNTqzX7sKk6LQxLl9wRJ\ngpyXlxfLkDFOf3Bg3JLMqeCAQA2Y4IZT8eQjjJk6p2+r4vphBu8QPPo9572zfxA6aQv05Eue2++5\nP+DYcxwis+jhyPB9ONGHhwfrYeYAA4YeDof23vhuHA97TsArvU1tRBdvb28XDhdfv+Z5sVHsgPdH\n1k3nA59LEIBdL9uoJONJMOLUO2O+kz3nHQOBU6dnfDZOGTtCT9g/6qzs+crKijGqh8PF0bE+AVi2\nUXgDlFlw9N5GWftv2Wg8Hle327WaN7/P73l94RDCL+LTotGoISPeRgl0qJX/yEYpg/Duva5LskCA\nkgvPjo16opp/VzwDfhF9icVi1kU0GAwUj8dNX7BRuAb4fvQYG+B94BcJHjkzvL5go/B6lm2UoM+T\nWUnmvI2yx95Gvb5Q8n14eLCOKnQNGxgOhwvvw88xwNbQF/r78QOrq6v629/+9sPDP0iQIEGCBAkS\nJEiQIEGCBAkSJEiQIEGCBAkSJEiQIEGCBAkSJEiQIEGCBAkSJEiQIEGCBAkSJEiQIEGCBAkSJEiQ\nIEGCBAkSJEiQIEGCBAkSJEiQIEGCBAkSJEiQIEGCBAkSJEiQIEGCBAkSJEiQIEGCBAkSJEiQIEGC\nBAkSJEiQIEGCBAkSJEiQIEGCBAkSJEiQIEGCBAkSJEiQIEGCBAkSJEiQIEGCBAkSJEiQIEGCBAkS\nJEiQIEGCBAkSJEiQIEGCBAkSJEiQIEGCBAkSJEiQIEGCBAkSJEiQIEGCBAkSJEiQIEGCBAkSJEiQ\nIEGCBAkSJEiQIEGC/N+T/w+qI5cQffFvlwAAAABJRU5ErkJggg==\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Current loss: 5.815053\n",
+ "Current MNIST score: 1.084848\n",
+ "Training step: 1000\n",
+ "Time since start: 4.819182 m\n",
+ "Steps per min: 207.504112\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAf8AAAFBCAYAAAB5Maa7AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsndlznNeZn5/e9x3oxt7YdxAguIISSVGiJFtS5Jlx7Lgq\nSU0lcWpSuch/kVzlPneZ3Lg8NYlnxnamxnJGHm0URYoLSILY18aORu/7ngvWOW7SpER52GhQ+p4q\nlmyKIk5/fb7znnf7vaCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCg\noKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCg\noKCgoKCgoKCg8F1FVe8FPINKvRegoKCgoKDwEvOV9l19VKtQUFBQUFBQOB4oxl9BQUFBQeE7hrbe\nC6gHKpUKjUYDQLlcplKpUKkomQaFP0SlUmE0GrHZbGQyGdLptNwzCgoKCi8r30nPX6PRYLFYMJvN\nGI1GeRFQUHgSlUpFS0sLV69eZWBgAIvFouwXhW8tarUatVqNSnVcy8EUXhTfKc9frVbjdrtpb2/n\n5MmTaLVaotEoX375JWtra/Ve3teiUqlobm6mubmZ7e1tIpEI+Xz+pfBCVSoVOp0OgEKhcOzXrFKp\nsFqtjI2NMTk5yfnz51GpVGxubpLL5SgWi/Ve4rcanU6H1WqlXC6Ty+UoFAqUSqV6L+u5UKvVaDQa\nyuXysY8SqVQq1Go1FosFj8fDyMgIer2eQCDA5uYm+/v79V7i1yIuKiqVSonifgO+M8ZfbPKOjg7e\neOMN/uIv/gKz2czGxgb/9b/+V9bX14/1plGpVKhUKgYGBrh8+TIffvghMzMzFIvFY38oimdvMpmo\nVCoUi8Vj/azhUXSooaGBH/7wh7z++uv09PSwvr7OtWvXiMfj9V7etx6DwUBLSwuFQoFQKEQymTz2\n+1yg1WrR6XTy3SyVSsd2v4sUqNfrZXx8nP/8n/8zdrudf/iHf+A3v/nNS2H8ARmtOO6XrePEt974\nC6PpdDppbm7m3Xff5erVq7jdbu7fv8/f/d3fsba2dqw3jFqtpqmpiVOnTvH666/zyiuv0N/fz/T0\nNB9//DFra2scHBzUe5nPxGQy4XK5GBgYIJ/Pc/fuXTKZDOVyud5L+wPERcXv93Py5Em6u7sBuH//\nPisrK4RCIQqFQp1X+e3GYDDg9/v5V//qX5HL5ZiZmeHevXsEAoFj7dkJQzo5OcnU1BSHh4cEAgHm\n5uaIx+PkcrljsXZxJqrVaoxGI3a7neHhYc6fP4/H48Fms3Hy5EkePnyIVqs9lpcX8aydTienTp3C\narUSi8VYXFwkEAjUe3n/LKpTLrV87t96469Wq9Hr9TQ1NTE2NsaFCxc4deoUGo2G9fV1fvvb37Kz\ns1PvZT4V8YLabDZ6enq4evUqFy9e5MSJE/T29tLe3k6pVEKtVpNMJsnn83UNR4sXUqPRUKlUUKlU\naLVa2tra6Onp4dSpU8RiMba2tjg8PJQXgONysKhUKvR6PWazmZ6eHk6cOEFjYyOxWIybN2+ytLRE\nIpGo9zK/tYj93tTUxPj4OFevXiWRSKBSqdjb2+Pg4ECmuVQq1bEySqIw1O12c+bMGf70T/+U1dVV\n7t+/TzqdJhAIEAqFZCqg3msVqQmtVovBYMBqteJ0OtHpdFgsFrq7u/H7/fh8PsLhMJlM5lis2Wg0\notVq0Wg02O12Ojo6uHr1Kg0NDQSDQUqlEnt7exSLxbo/52+KOC8NBoNMkaZSqZp9lm+18ReHudvt\npr+/n6mpKZqamgDI5/OEQiE2NjbI5XJ1XukfIgypwWBgcHCQqakpzp8/T3t7u9wkra2tvPfee2g0\nGmKxGNvb28Tj8bociMKbsFqtWK1WCoUCOp0Ol8vFq6++yqVLl+ju7iYQCLCxscHc3Bzb29tks9lj\nE85VqVTYbDY6OjoYGRmhv7+fXC7Hw4cP+Zu/+RtWV1frvcRvjPhejrPHLFCr1RgMBi5fvsw777xD\ne3s7W1tb2Gw23G43LpdL7m+NRkMmkyGfz9dtvdW5ZrVajdfr5ZVXXuHChQsMDg5SLpdJJpMkEgnU\najWZTIZMJlP370KcLZVKhWw2y+HhIffu3aNSqdDV1YXT6UStVtPf389rr73GJ598wubmZt3WK9Zs\nNBrp6urC7XZjMBjo7+9ndHSUs2fP4vF4KBQK7O/vMz8/TzQaJZfLvTQXAPGe2u12mpubaWhoQKVS\nMTc3Rzgcrsk+/1Ybf3GL9fl8NDU14fP50Ov1hEIhHj58yMzMDKlUqq5rFAeHTqfDYDBgNBrxeDx0\nd3djsVgwGo309/czPDxMY2MjKpWKWCwmD5GmpibOnTuHRqPh888/l5ul1hEAYejtdjsejwen04nd\nbqexsVG+iFqtFpvNxvDwMMPDw3g8HnQ6HRcvXsTn87G1tUUmk2F/f5+VlZW6FS+Kg6WtrQ2/309n\nZyder5dMJsPCwgL3798/Vl6/OCg0Gg16vR673Y7L5ZIeEYDRaMRqtZLNZkmlUkSjUeLxuDSelUql\n7p6zuJxbLBZcLhetra34/X7eeOMNxsbGsNlsWCwWvF4vbW1tlEoluecsFgs3btxgYWGBdDp9JBfI\namMvnr1YT0tLi7ykDw8PY7PZcDgc+Hw+ent7SSaTxONxdnZ2jkXNiIi4lctlisUiu7u76PV6VlZW\ncLvd6PV6GhsbGRwcJBQKoVKp2NnZOdLIojgXxZnY2trK5OQkzc3NaLVaOjs76erqorW1FZvNhkaj\n4cKFC2QyGaLRKOFwmFAoRCAQOPLahW8SutdqtWi1WvR6Pe3t7UxMTNDb24tGo0GtVjM3N1eT6PS3\n2vjr9XocDgdNTU243W60Wi3FYpFAIMBf//Vfc/369bqur9rw22w2XC4XHo+HiYkJfvSjH+Hz+eQF\nRq1Wk8/niUajFItFGTI3GAycPn2as2fPYjQayWazJBKJmr6k4vDzeDz09vYyPj7O4OAgnZ2dtLe3\n09TU9NhhLEKdOp0On8/HG2+8wYkTJwiFQmQyGW7fvs3e3l7dugBECPH06dOMjo7S2tpKMplkfX2d\n9fV1ZmdnicfjxyLXLwy/CA86HA66u7sZHR3FYrFgMBgAaGhowO/3s7e3x9bWFktLSywvL7O8vCwN\nfy6Xq2vURaVSYbFYaG1tZXR0lIsXL/Laa6/hcrmwWCyoVCpMJhOtra309PTQ3NzM1NQUfX19NDY2\n8t/+238jHA6zvb1d888hnrt4Z8WzFymtS5cucfLkSbq6urBYLFQqFYxGIy6Xi46ODpLJpIwC1Nv4\nP5l6qFQq8mIyPz9PQ0OD/BwdHR0UCgU0Gg3BYPDIjL84YywWCw0NDYyMjDA5Ockbb7xBa2srpVIJ\ns9mM2WyWl1mVSsWlS5eYmJgglUoRC
AS4f/8+f//3f3+kxl/skaedZeL3qi8HWq0Wk8mE3W6nt7eX\ny5cvc+bMGfR6PYlEgkQioRj/b4I4IB0OB0NDQ/j9fnQ6HR999BGzs7PcunWLvb29uq1Pp9MxOTnJ\n4OAgra2tqFQq8vk8ZrMZv99PS0sLRqORdDrNw4cPWV5e5v79+yQSCUqlEsViUR5Cb731Fm+//Tav\nvfYa+Xye1dXVmuborFYrra2tdHZ24vf7MZlMZDIZDg8PcTgcOBwOcrkcqVSKWCwmjWgqlSKdTpPN\nZjGZTHg8HiYnJ3n//ffp6+vj17/+NR9++GHN1l1NtffW2trK4OAg586do6urC71eL72HpaUlAoFA\n3VMTNpuN0dFR+vv7ZRGiqGfx+Xz4/X7pKahUKgwGAxaLhc7OTpLJJKdOnSKRSJDNZgHY3d3l7//+\n71leXiYSiRz55xHPv6uri4mJCU6ePElfXx9arZalpSV2d3fZ2dlhc3NTem4tLS1cuXIFu92OVqsl\nnU4Tj8ePxPDD7w9u4TWLUP/o6Cjt7e1YrVbS6TRbW1tsbW3JFuJMJkM8HicajdY9dw5P90TL5TKp\nVIovvviCYDBIc3MzbW1ttLS0cO7cOXQ6HXfv3qVUKtU81SIuWYBMS8zMzBCNRkkmk7S0tEhPWaPR\nkMvl5EVB7PvW1la6urrweDwcHh5yeHjI1tZWTSO9wuhX75fqi0l1rYXRaJQXBOH55/N5Dg8PWVxc\npKenh5aWFhobG3G5XDVZ77fW+AuMRiMdHR14vV5UKhV3797l008/lfnmeiAOPhES7OzsJJvNEg6H\nZX/w1tYWlUqFaDTKzMwMd+7ckW1m4tYujH9LSwvvvPMOw8PD7O/vYzQaa7p+Ec43Go1UKhXZihUM\nBqXRz2azxGIxmU+8f/8+4XBYPvOOjg7Gxsa4cuUK/f39DAwMsLS0dGTGH37fj22z2WhoaMDtduNw\nOGT4PJ1Os7u7y+HhYV1zh2q1Grvdzvj4OBcuXGBiYoJsNks+n5edLE1NTVKgRRwqohi0XC6TTqdl\nu6VWq2VtbY3NzU2i0WjdjL9KpcJsNuNyufD5fJjNZlKplNzvi4uLbG9vs7+/T6lU4tSpU+h0Okwm\nEyqVSqY0avndPCl2U50yKRaLmM1mmpqasFgsFItFQqEQc3Nz3L59m88//1y2EOv1enQ63bGsLxLk\n83kWFxcJhUL4fD7Onz9PR0cHfX19pFIp7HY70Wj0yOosyuUy+XyefD4vz5dkMklDQwMGgwGt9pH5\nSqfT8j1ubGzE7/fjdDppa2uTKSOXy3Uk3r/Y189Sjq2u5dJqtY9dKHO5HMFgkIWFBc6fPy8dQL1e\nL+t2XiTfWuMvillyuZx80NlsllAoxOHhYV1DuOJAmZ2dZXd3VxYDpVIpKYZjNptRqVQUi0WZLxRe\nv9gE1Z+xUCigVh+NYGMymWRhYYGNjQ30ej2A3NCitkKtVlMqlaQntL29LcVxhKiIECkqFovEYrEj\nPRiF3kC5XJaevV6vJ5lMMjQ0BCCNaL2LhoxGIw6Hg5aWFmw2G7FYjL29PcLhMFqtloaGBrLZLA6H\nA4vFgl6vf+zziTRRpVIhnU7jdDoxm8309/ezurrK0tLSkX8msa65uTm5d7u7u3G5XFy7do0bN24Q\niURIp9PS2JRKpccKGIvFYs01I571dxcKBWKxGJFIhGg0ys7ODpFIhGw2Ky/qm5ubJBIJNBoNxWIR\njUZz7AWuyuUyarUaq9UqvWrhnQpPu9ZUe8uCUqlEMplkfn5erkOn08lnK9KnojuhpaUFvV6P0Whk\nc3OTzc3Nmjt74lJY/f+f/Ezicwn7o1ar5UW+UqlwcHBApVKRRX7hcLhmRdzfWuMPj1f7u91uksmk\n9ILqfaCXSiUODg4IhULk83kKhcJjF5KnhY+eXLP4vUwmQyKRwGq1Hok0Z6FQoFgsyjYssaE1Gg3R\naJT9/X15qxWHZDKZlH+uXC7LPBc8amcJBoNHXnwpbufxeBy1Ws3q6ipNTU0MDw/jdDrxer3ycvMs\ntFotarW6pq1Fer0ek8mEWq1me3ub27dvc3BwQDweR6vVMjAwgM1mk96D0WikUCiQSCTIZDLEYjF2\ndnYol8uYzWYmJyfxeDyMj4+zvLzMRx99VBeDVC6XicVi7O7usre3J9NHm5ubbG9vk8/n5WFanUOt\n9r7rVbQoLh/5fF4e3sVikZ2dHRkBE+vXarXHXu1Pq9ViNptpaWnB6/ViMpnI5/NEIhHpdNjtdoxG\n45EVvj75rMSZA79P64quBTF/Q0QIAoGADK/v7+/XvA7qWWt+2r8vl8tyX4jzQ1AsFkmn02QyGeks\n1WrPfKuNv5AI9fl8tLa2kkgkMBgMda9yFgdXMpkEeKrReN48ZqVSIZPJEIlEZB7pKKg+iMU/q8PL\n4vdEYVn1y6rRaHC5XDQ2NqJWq0kkEuzt7dWlmr5SqZDP50kkEgSDQWlQm5qaSCaTmM3mr/zv9Xo9\ner2+ZuFnEQkSOe719XX+7//9v6TTaYrFIlqtlqtXr3L27FlisRiJRAKLxSKLhEKhEJubm9y6dYti\nsYjP58Nms9Ha2srp06eZm5t74Wv+JogccjqdJhaLyffiWYI4Yp+JvVWvS3z1cDDR2VIsFmU6UeSa\n0+m07NkWxqcWIdx/LjqdDofDIbty4vE46XSaYDBIMBgkm83icrm+9n04KsTlS6S2RHeCSK0EAgFZ\nW3RwcEChUKi7wycQe1fU5gjHQThFIv+fyWSkFkMt+NYaf5EL9Xq9OJ1OKpWKDPeL22K9XsAX9XNF\nCFSEhMUBVM+DReTo+vr6sNvtshe7Uqng9Xqx2WxotVr6+voYGhrCarVyeHjIgwcP6iolWiqViMVi\nFAoFfD4fe3t7RKPRZ6aHRIva2bNnaWlp4R//8R9lncaLJpvNsru7y4cffkgymSQWi8nvuVAosLq6\nygcffMDY2Bh+v59EIsHi4iLXr1+Xaa69vT3K5TKHh4eEw2F58Oj1elkXUK99Y7VaGR0dJZ/Pc+/e\nPUKh0B8USjU1NdHZ2YndbketVpNKpepahCkKzLq6umSrbTwex+/343a7SaVSOBwOtre3iUajMj13\nXFT+qlGpVIyMjPDKK6/Q2dkJwNraGhqNBpVKJdOK1a2kxwmtVovdbqetrU0aUSEkZjKZZJfCcXju\nGo0Gk8kkdSucTicHBwdEIhGKxSJWqxWPxyOjuPF4XDqJL5pvrfEHpDiIyJ9ns1kZDRD5QnFzFDdJ\nQa0Pwz/27xaeMzwyQDabDa/Xi8PhIBaL1bwA6usQz7S9vZ3u7u7HCs/a2tpwOp2y2LGxsZF8Pk8g\nEJCFRvWiXC7LanhRFAePpGZFO5EoEFSpVLjdbjo6Orh8+TJtbW3cvXuXra2tmqwtl8txeHgoC9+e\nDF9ub2/z2WefyUttKpXiwYMHfPHFF3JPVHudwWBQ5v5F2LGee0Z05UQiEWKxGAB2ux2TyYTRaM
Rk\nMtHb28vY2Ji8UO7u7ta1cl6o4A0NDTE0NEQ0Gv2D8L7X65WV//v7+xwcHMhw7nEwRPD7CEZ/fz9v\nvvkmTqdTXlbEvjk8PCQajco/f1wQin/Nzc0MDAwwPj4uI0jRaJRoNMrBwQHRaPTYeP1CBXJgYACv\n14vFYsHtdhMKhchms7K1UdioZDJZs1qFb63xF8ItZrNZ9oQ2NDTQ3NyM3+8HHlWJJhIJ9Hq9zD+K\ncIy47R4nxGeyWq2oVCoaGxs5ceIEPT09ZLNZdnZ22N3drfvEOY1GQ3NzM2fPnqW3t1d6lyJvLS4I\nqVSK2dlZbt++zfb2dl0Fl0TtRDweJxQKYTabpcSpMDJmsxmTyYROp2N4eJg33niDjo4OEomErEB/\n0Yd6dXjzWdGqWCzG/Pw82WyWmzdvEovFpMcvQorV/cW7u7usr68zNDQka0TqGYpOJpPMzs5KaVyL\nxUIymaS3txe/3097ezs+n4+WlhY5jOv69etHVr0tqH6Gzc3N/Nmf/Rnnz5/HYrHIIl0xKtxms9Hb\n20s0GiWdTnPr1i0+/fRTUqmUvKAfhwuAyIu3tLQwNDSE2WwmHA5LnYudnR3C4TCJROLYdSqITqd3\n332X8+fPMzw8TCgUklLQS0tL3Llzh2KxWPdLS3WUdmBggD/7sz/D5/ORTqdlYbcogBbnvKgHU3L+\nfwQOhwOPxyOrPt1uN5OTk9KrELlejUYj5X7T6TTpdJqFhQV2dnbqXh8gEGpiAwMDDA0NkcvlcDgc\njI6O4vf7ZQGh0LWuFyKc3NHRIYV/dDqd3Pyi8lxUsQpvr96CM4DMIYsWtNbWVqampujp6cFsNsu2\nG61Wi9/vZ3JyEp1Ox9raGjqdrmYe9Nf9naJgdG1tTV6injUCt1ogSLQF6nS6umnOi/3S0NBAZ2cn\nRqORhoYGyuUyvb29NDc34/F4MJlMsl5nZ2dHto7Wem1PXoxEDYbL5aK/v1+qzZXLZRkVEp0vLpcL\ng8FALpdjYGBA6hKIfPpxOVfcbjcejwe3241Op6NQKODxeNjZ2SGXyxGLxWT4uZ5yytWoVCo6Ojo4\nffo058+fZ2JigtbWVtnxYrFYyGQybG1tEQwG637BFXUJg4ODXLhwgeHhYVwuF5lMBoPBQKVSIRgM\nSvE2p9NJMpmUe6oWfGuNvwjNNjU1SY9NpVJx+fJlXn31VVlsJioq0+k0oVCI/f19tre3+T//5//I\nUEy9b+niJtjS0sLrr7/OD37wAw4PDwGkeqHYPPv7+3U3/kajke7ubrq7u9HpdLJIq9r4RyIRDg4O\nZNHacfBAhWEU3ptOp+Ptt9+WLYxP9s+L1sBCoSAP/Xo9+0ql8lwFk5VKBY/HI1uhhFaESIMdNWq1\nGpfLxenTp2WaaGJiArPZjM/nk+1xono+l8uxt7fH4uLikSjlVRevCpEWUVkuCuBUqkdDhoS2hYgY\nlkolCoUC6XSa5uZmbDab1FcQ50q9MRqNsi5KXGDFOyBqiKrf2Ww2W/eCRXFODA8P89ZbbzE+Pi4v\nYRaLBY/HQy6Xo6mpid7eXnZ3d+su1CW6Kc6ePcvbb7+N0+mU006ru4Xy+bysJxHvZq3qLL61xl+t\nVjM0NMTExAQ6nU4am2AwiFarxel0UiqV2N/fJxAIkEgkcLvdUiRCpVLR1tbG9evX2dvbk61q9UCl\nUuHxeDh37hyjo6M0NTVhMBiIRCKsr6+zt7eH1WqlqamJkZERPv7447oN+BGbu6Ojg3A4zP7+PtPT\n0zx8+FC2oiWTSVKpFJlMhmKxiMfj4f333+fu3bvcuHFDRgKOcv16vZ7u7m6amppkqLZSqWCxWKSg\nUblclopjom1O1F243W5sNtuxqip+FuJdEAe8yWSSxWhHiUaj4f333+fNN9/E7/djMBjI5/OYTCYZ\nSs/lckQiERYWFtjc3JQqka+++irxeJyDg4OadVloNBoaGhrQ6XQkEgl5yXO73TIvGwgEWFhYYHZ2\nlkAgQDKZRKvVyv0iHIeRkRHGx8d57733aGtr46/+6q84PDyse7TLbDbT2dmJ0+mU6w0EAvzt3/4t\n8/PzbG5ukk6nZTV9Op0mHA7XzSGqngchtAiE3oXD4WB6epp79+5JHYxYLIbf76exsVG2yNbqmVd3\ngFQLsen1es6cOcP777/PwMAABoOBVCrF4eEhiUQCn8+HVqtlZWVFCqH19vbicrm4fPky6XSa6elp\nReTnaVSHRapVw/r6+uju7qZYLBIMBnn48CG7u7tSICKbzbK+vs78/DypVIq+vj7OnDnDwMAAJpMJ\ns9nMzs4OiURC5qOPesOr1WqpVjU0NERLS4v0hg4ODpiZmZFCL2NjY3R1dWG324lEIkd+mKtUKoaG\nhnjrrbcwm82srKxw584dfve73/H555/Lti6RQxdVuq+++irnz5+XWtazs7MEg8EjfdZC31/00lut\nVsxms7yha7VaaWzm5ubIZDLY7XbZ1dDT08Pa2tqxKi56FqKuob29XYanhchUrRHRFbvdjtfr5e23\n3+Z73/uejMKJmhvhBYVCIVZXV/niiy/kezo0NMTk5CR3795laWlJhkprsVaj0ShnZoifIdKE6+vr\nbG9vc+3aNW7dukUgEJB/plr/QqvVksvl6OzsZHJyEp/Px71793j48CHBYPCFr/ubfD4xLtztdstn\nvrW1xaeffsr6+jrxeJxisSgvPCJVdFSqitViP2LfOJ1O+UvIme/v7+N0Ovnss8/4/PPPCYVCcrLo\n+fPnGRsbk2H1WCxWs7NFrLV6BocoWP3BD34g0yh7e3vs7u6yu7tLR0cHJpOJ2dlZ9vf3SSaTqNVq\nJicnGRsbY2VlBY1G88JT0C+98a/WUxbV2F1dXZw+fZqOjg6pUrW+vs6HH37I6uoq2WyWL7/8Ug5/\niMVilMtlvvzySwCmpqZoaWmhp6cHr9eL2Wyuaz50bGyMkydPyjzQ0tISn332Gbdv32Z5eRmbzUZX\nVxft7e3YbDaam5sJh8NHavzFs3c6nXg8HpaXl7l+/Tq/+tWvHhs1XF20ViwWicfjZLNZrFYr586d\nw263E4vFjvxQzOfzbG5u4nA45MWxs7OTYrFIoVCQh/3t27f5zW9+w+HhIS6Xi3/zb/4Nly5d4u23\n3yabzTI/P1/3gsuvQ4hClUolHA6HnDpX6xw6POont9vtnD9/nnfeeYdz587hdrulAp4ovA2FQtjt\ndubm5vj000+lcdXpdLS0tOB2u6X2fK3U24QAVCqVIplMynax3d1dPv30U3nxEPUr1d5w9T+LxSIH\nBwcsLy9z4sQJent7+fGPf8wvf/lLPvjggxe+7udBeKkul4vR0VGam5upVCrEYjH29/dlq6uIwIXD\nYR48eCDPlFqG/sVZUm3s1Go1NpuNs2fP0tXVhcPhwGazsbe3x8rKCpFIhFKpxNraGltbW7I2QaPR\ncOrUKXw+HydOnKBYLDI9PV0T7/9JhT9hk4TgmZiuu
b6+zo0bN1heXmZnZwez2SwF0kRrtHD2RBRJ\n1I68yLPlpTf+TU1N9PT0yEIsQI7WbG1tJRqNcvPmTT766CNu3brFwcEB2WxW/lOMv1WpVDJMXalU\npHSkKM45asTGgUfV0Jubm2QyGamKd//+fVZWVjg8PMRsNpPL5QiFQnKi2FFXt4r8VCgUYnp6mps3\nb3Lz5k0WFxef2ZIl+tTj8TiBQACr1XqkQkXVlEolIpEIS0tLVCoVNjY25FxtMStBRInu3LlDPB7H\nYrFw6dIlcrmcDEPWu6r4eTg4OCAQCDA5OUk+nycWix1JIVf1cBMx2CmXy7GyssLi4qJULRQTFIU0\n6+zsLBsbG0SjUZkasNvtWK1W2UFSC4Qcskr1aOiWcADEcCqR/vk6Q1Iulzk4OGB+fp5kMonL5app\nIdfzIBwLp9NJc3MzpVKJpaUlbt26xSeffCLDz+IzP2l4ahmV0+l0UqJa/DKZTDQ2NsoaLjGFMBKJ\nEAgE5MwBEaUVNUZqtZrNzU3cbjeJRKLmTpz4ufD7i182m2VxcZFf/OIXJBIJuRd2d3eJRqPSeRVd\nPeVymZWVFZaXl+WlTJH3fQKVSkVvby8/+tGPsNvt6PV6CoUCfr+f8fFx9Ho98/Pz/OxnP+OLL74g\nEAjI/1bf/M/HAAAgAElEQVT0E1cj8tAi/CUm0h21F1pdZVwqlZifn2dpaUn+f+GNikNHrFm0dtWj\nLUe0DC0vL/M3f/M3fP7552xubj7Xpg2Hw9y5cwe3200mk6lLfYWYara2tsbGxobM1blcLukBZrPZ\nx7TZo9EowWCQSCTC4eHhY0OXjjNbW1ssLi6STqeJRqPMz88/9X14kYj9DL/Xxd/c3KRQKBAOh/mr\nv/orFhYWiEajT/Wexd8hvEFRE1ArRJGqeI+e3I9CWvh5CQaDzMzMEIvFcLlczM/P11XUSqRHRRg9\nEonw4MEDfvaznzE9Pf0HNTfCkFUbtlohxpiLn1Uul/F4PDQ3N2MymeSwMKFg+axwuPDEZ2dnOTw8\nlHLFtT5bxN8v9kg+n+fzzz/nxo0bj3XVfNU6lpaW8Hq9jIyMyHqcF322vNTGH2Bzc5Pf/va3cliJ\nWq0mFothsVgwmUxsbm7KFr6vQqvVyjxvpVKRwjP1EM0Rm0KEEcVtUoTanmzLEnk4ofwmKtCPEvFz\nl5eX2d3dJRwOP/dLls1m2d/flx6VGEJTD8TzhUdGShwWQqe9OgwpPM50Os3i4iLr6+svhfE/ceIE\nly5dwm63S2ndWqcqqlM96XSa2dlZ4vG4jFqtrKw89q497fvXaDRyhrtWq5Ujo2tVwPUi96DQGTEY\nDLIFrR7TFAUGg4ETJ05w8uRJbDYb29vbrK6uSlXLelbzZ7NZWfsh1iFy5aKbSWgRPM/7JlrmhJ5L\nPRAXkef14oeGhjhz5oysTauF9//SG//Dw0Pu3r0rQ4AiLCj6tIXe+dd5whaLhcHBQVpbW6lUKiws\nLDA9PV23Kv9v8mUbDAbsdrsMTwoP9SgRt9y9vb1v/N+KophkMsnOzk5dlduqn7tISzztuxAhU6vV\nSqlUYmVlhY2NjWNt/EUYtK+vj4mJCZlHFIdtrREHYKlUksp33wSTyYTf76epqQm9Xk82m5W1C8cd\nIT9rNpulIFetoy1fhV6vZ3h4mKGhIUwmE6FQSEZenvY8hTbEkwPIasHTfoaIUu3u7j4WEXiec1Lo\nvNRLywK+2XkO0NnZyejoqNRdqIUNOvrk6gtEeGlCgnJjY4OVlRVu377NBx98QCQSwWq1yj/zVbS0\ntPCv//W/5vLly5TLZa5fv85vf/vbur6gz4vb7aa/v5+enh58Ph/w9cIwxwWhlvb9738ft9vNyspK\nzbSsvwnV+canvXgNDQ2cO3eO3t5ezGazNGbH+bmL1IzQFK/2+o/zuuH37a5vvfUWp06dwmQyyR76\n4752AK/Xy+joKE6nk0KhQCgUqus+1+l0Uj0RHkVQ79+//9TzTqVSYbFY6OzsxO12P5bCOUqq38fn\nNfzwqAbs8uXLOByOYyPa9nU4nU4aGxsxGAyKyM+zEFPjqmfFiyK4UqlES0sLV65cwWq1SmEQEVYW\nh+GZM2e4fPkyFy9epLW1lUwmI1sxjuPBInpdPR4PfX19DA8PMzo6SktLCysrK3UXJXpeDAYDg4OD\nnD9/noGBAb788ssjG735VVTnNaufo0ajweFwMD4+Ln/19PTIosVUKnUsn7tQ0Ovp6WFkZIQTJ06g\n0WjY3NyUffLHcd3wyEiZTCYmJyc5d+4cr732Gh0dHVIjv94h6udBqNFduHCBhoYGqZz3ovd5dUEl\n8ExD19bWxtjYmBThWlxcZHNzU7aYCUG0rq4uurq6pNaF3W7nxo0bX5lnrzXVrXTw7NSM0GPw+/2c\nO3eOrq4ulpeXj2ydfyytra0MDQ0xMDAgW81rlR566Y2/qKYUVZIiVCumUPX09PDTn/6Urq4u/vf/\n/t+srKywv79PuVyWFcc//elP+Zf/8l/KsakHBweywOI4HixqtRqz2czg4CD/9t/+W86cOSM39927\nd4/lheVpWCwW3n77bd566y0pplLv5/0slUFx4WptbeU//If/wGuvvYbP5yMej3P//v3HZs8fJ4TX\nNjQ0xJ/8yZ/w53/+5+j1etnZsLm5We8lPhOVSoXJZMLn8/Hv/t2/40/+5E8wmUzEYjE2NjZqlgt9\nkYj91N3dzZUrV4BvHgL+Jj9L5LYrlcoz02dDQ0O899579Pb2kslkuH79OoFAAL1eL1X+NBoNV65c\n4Yc//CHt7e1otVpZO1WvM0YYfFFv83WXp/b2dv7Fv/gXnDt3Ts5bOO7dOIODg/zFX/wFJ0+eRK1W\ny/kKteClNv7Vak/pdFq2K2UyGQ4ODlhZWaGnpweHwyF7PcWscED2UJ45c0Zupjt37vCrX/2K+fn5\nmgpBCAPzvD9DvNRXr17lvffee8zz93g8ZLNZ7ty5w82bN2uqG/6k3vk3PQSEZyLWPjo6Snt7O7lc\nTqq31cKIin7b6kvik+vS6XRSrU/IuAqZ4u7ubvR6PQ6Hg4mJCRwOBwAPHz7kd7/7nZRbPi709vZy\n/vx5ORynoaEBv9+PRqNhe3ubhw8f8pvf/KYmymGCasWz6lBtNeI5ixGnbrebsbExOjs75TwF4fnr\n9Xry+TzT09P8+te/lm2ZtVi3VqtFp9PJGoVv6umK/dTb28s777zDm2++SblcJhwO12zErE6nw+Fw\ncPbsWak+GAgEZB2O2WzG4/EwOTnJmTNnqFQqbG5usrq6KlU2h4aGpJiP8PzNZjORSIT9/f26FgQ+\nmYoTtUJer5fx8XFee+01OaeiUCjQ0NBAX18fTqeTw8NDeb68SKqjLaJd75s+G7fbTXt7O1NTU0xN\nTTExMYHb7SYYDHLv3j1WVlaUVr8nEYeL6MmHRxskl8sRDoeZmZnB5XLR29tLR0cHV69exWAwyNut\noFKpyL7d
W7du8ctf/rKmbTjVwz9UKtVjkYunodFocLvd9PT08N577/Ef/+N/BJC1DJFIhI2NDW7d\nusWDBw9qNgJSIIp/RPtP9dqfdiEQ6RWhbw7IMZy9vb3odDpWVlbY29uriayv6Gm22+1SSCaRSMiJ\nWeLfNzY20tzcTFtbmyzoO3nyJOPj44yNjUmxDuFVBYNBpqenuXbt2jfqbvhj1i8OmGcZUUCGbJua\nmnjllVd4//33OXnyJJ2dnXJqYSQSYWZmhs8++4zr16+zvr5ekzWL9RiNRvl+ptNpGVETn8flckmt\nDiHa8+qrrzIyMvLY9yVab7e3t/n888/5h3/4B6nJUQvElD6hby+U7p71joo95HA4sNvt2Gw29Ho9\nExMT/OQnP6Grq4t8Ps/a2horKys10VXQaDRYrVbGx8fp6OhgfX2dubk5VlZWUKlUWK1WfD4fAwMD\nNDU1sb29LVvghPrg+fPnaW1tBZBtauFwmPX1de7fvy+nhtazGwcePW+Hw4HP52N0dJQ333yTH//4\nxxgMBinDXalU0Ov1HBwcyFZc8fsvCnGWm81meSYKCfOvcu6EaJHH46Grq4vx8XHeffddBgcHpYyy\n0AOo1ajwl9r4w6PKUDHFDB4fgfpP//RPLCwsMDg4yOuvv84777wjPWjxZ8Wf39/f5+OPP+aLL75g\nd3e3pn3yQhBH5Nw2NjYeM0bViLDtxMQE/+W//BdOnjwpve5CoUAikeDOnTt89NFHfPHFF2xtbdW8\nGtdoNOLz+VCpVBQKhceiKWIAS/VLajab8fv9/Kf/9J8YGxsjnU7LW7vb7WZubo6//uu/5vbt2zUx\n/EJ5sK+vj+bmZvR6PXfu3GF/f59SqYROp8Pj8XDp0iX6+/tpbGx87JeYew/Itr+trS3u3r3LtWvX\nePDgwXMN1fnnfAadTofRaKRUKkldh+pnJYxPV1cX//7f/3umpqbo7u7GZrM9tuaZmRk++OADrl27\nxs7OTk33itBbb2hooKGhQcqwitocvV7PuXPnuHDhghzOYrfbcbvdUrddeHF7e3s8ePCAX//613IE\ndK2EiSqVikytdXd3o1KpePjwoRRDetoeValU+Hw+Ll++zIULF5icnKRSqWA2m2lqakKr1RKLxfj0\n00/55JNPaiKlLC6ydrud/v5+xsbGePXVV4lEIvLiWigUcDqdrKysSP2SyclJBgcHGR0dlboW5XKZ\nZDLJ/v4+n3zyCdevX+fmzZvs7OzUPc0iJlEODAwwNTXF1atXGRkZwWg0yrC+qJLPZDJMT0/z//7f\n/yMYDL7wdIXYx83NzbS0tGC329nb22N6elrO+XhatEun0zE6Osr7778vVR+dTidGo5FCocDW1hbz\n8/NSXr4WvLTGv/pLtlqtxONx+WKKh72/v08ikSAWi5HNZgmFQng8HhoaGvB6vaRSKeltVg+gqZVW\nePXaDQYDZ8+epbu7m0AgwM7ODnt7e8TjcUqlEna7XV5i+vv7uXjxImfOnKGxsZFcLsfdu3dZXl4m\nGAwyOzvL/fv32djYOJJRoU6nk9OnT0tNBOHRiQl9KtWjmfHwyMNvbGyko6ODK1eu0NnZSTKZZGVl\nhbW1NR48eMDs7Cw3btyQ/82LRoQIzWYzIyMjMhK0u7tLKpWSHujU1BRer5d8Po/NZsNisUhPIhqN\nMjs7y+LiItlslt3dXZaWlrh3756UFq0lVquVU6dOYbVaSaVSJBIJmepyuVy0tbVhNBppb2/nwoUL\n9PT0YLFYiMVi7O7uMjMzw9LSEmtra0xPT7O+vl4T4ZBqhPEfGxtjamqK9fV19vf3SaVS8h04f/48\no6OjeL1eNBoN2WyWQCAgoxTiVzgcZnV1levXr8t20FrucyGkJSS+TSYTwWBQ9oyL1j0hQytai0+e\nPMno6Cj9/f2yql9otgtZ4Fp5/qL4OZvNotfrpeR3KpWSZ93i4iKrq6ukUikikQiFQgGDwYDRaMTp\ndMrW6MXFRaLRKOFwmPv37zM/P8/6+npdR/qKdIxGo0GtVuNwOGhra5MDuVQqFel0msPDQx48eMDB\nwYFME929e7cmAj+ifVWn0+H1ejl37hzZbJaenh52d3c5PDwkmUxiMploaGiQWixtbW1MTU1x5coV\n/H4/TqdTnjFzc3Nsbm4yPz8vJ5/WgpfW+AvE6M98Pv/Uaut0Os3q6ipra2v88pe/pKGhga6uLs6e\nPcvW1haff/65FJepbh+p5cEiDr4rV67wzjvvEI/HmZubY3p6muXlZfL5PN3d3XLYyve//30mJydx\nu93yM/3iF7/g7/7u79ja2jrSCXiVSgWv18sbb7whp1EJjyKTydDU1ITdbufjjz8G4NKlS7S3t+Px\neGROzGAwMD8/zy9+8QvW1takrGstNrnwYoQyYk9PD++++y4XL14kHA7Ll0uv1+P3+0mlUszOzpLN\nZuX8cqPRSLFY5Oc//zk/+9nPSKfTj0Waak2lUsHpdPL+++/T29tLOp1mc3OT3d1dEokEg4ODfO97\n30On08mUjEqlknMgrl27xl/+5V8eeSeImJ555swZfvrTnxKNRolEIoRCISqViowg6fV69vb2CAQC\nbG9vE4vF2NnZ4eHDhywtLbG6uvrYszgKROi2s7OT06dP09TUJMWrxKCWt956Sw798Xq92Gw2mfsV\nZ8jOzg5/+7d/y61bt1haWpJjiWtxWSwWi6RSKdlGaLFYZHoxFouxsrLCb37zGzY3N+VEuXK5jNFo\nZG1tjb29PXw+H8vLy/zsZz977FJbz8p+8fNFjZc4c8Rerx55G4vFmJ2d5X/8j//BvXv3ZEH4kxHJ\nF4VISRWLRaxWK1euXKG1tZXDw0O+/PJL7t27RyAQwOfzcfr0aTmc6uLFi4yMjNDV1YVWq5XCTx98\n8AH/63/9L5LJ5GPnTC04rqWPz/UNialbFouFZDL5XLluk8mEzWbD6/XKsJbwsKvFXWqJyIUKD00U\nMuVyOTQaDc3NzZw6dUp61X19fZjNZnZ3d5mbm+P+/fvcvn1b3uCPusq8sbGR4eFhKawkWmpGRkbo\n7u7G5/PJCtWWlhZMJhMajUZO7Pvtb3/LnTt3mJ+fJ5FIyGEWtayCFrOzBwcH5cCeXC4nozwiXyok\nQOH3eV+NRkOlUpHGqB598Xa7nRMnTuByueTaDAYDNpuNkZERLly4wNbWFhsbGzKNJBQK9/b2WFhY\nOPIxz3q9HrfbzRtvvMFPfvITXC4Xer2eZDJJIBBgeXlZDs2JxWJy1HMulyOdThOJRIjFYsTj8SNb\nczU6nY6JiQkaGxuJxWIUCgU0Gg2vvPIKFy5c4MKFC5jNZorFomyPy2azLCwsyNkbS0tLLC4uEgwG\n5QCxP6aA8HkQxrGvr4++vj56e3vlGRIMBtnZ2ZHjy4UxhEe1Al6vV0Y44vE4S0tLRz5W++sQKTxR\nr9XT08PQ0BCnTp3CbDYTDAYJBoNsb28zMzPD4eGh9MyfVej7otblc
Dhobm5mbGyMxsZGjEajbEVV\nqVR4vV56enqYn59ne3tb1p7pdDp5QYlGo2xsbLCwsCDl26vHQv8xS/uj/2UdOT47roZUD9dwOBw4\nnU6ampro7e3l1KlTaLVastksZrOZRCLBzMwMX3zxBTdv3vzKAsGjQuTtxaa/fPkyXV1deL1e+WeE\nHruYy37jxg1+/vOfc3BwUBclvyerc6t5spr4OB18AlGF7vV68fl8cvrkwMAAc3NzzMzMMDMzI0ea\n1vszqFQqRkZGuHTpEn6/H7fbTS6XY2FhgS+//JLt7W1CoVDNvOEXhTCsDoeD73//+7zyyisMDg7K\nQVsi+pVMJrl16xaffvopDx48qItWiHCKXC6XnE8g6qLqvR9eFKKWyOVy4ff7KRaLbG1tEYvF6qa3\nIc4Vu91OY2MjDQ0NeDwenE6nHIksZoeIzgnhtD6rPuCfu6Q/+l/WkW/HDn0ORAua+KXX62V0QoQP\nRRtiKpWSU8+Ow0ssNrvBYMBisciCFVFQKRCGVIjhiFxcvS4vX9frexye7bMQlxe9Xi9/mUwmzGYz\n6XT6sfGz9b4cCqr3hlarlZ0H8Xj8MaGe4/zc4fejZj0eDw6HQ84SEQgvM5FIEI1GZRSjXmsVuhnC\nezwu++FFodFoZLu2qLIXUdx6Ua21INJw4sIudPqz2axMFVRHJGqw/xXj/zLzNMEZBQUFBQWFr0Ex\n/goKCgoKCt8xvtK+v9SDfRQUFBQUFBS+OYrxV1BQUFBQ+I6hGP8XRL3GXP5zeVnXDS//2l9WXta1\nv6zrhpd37co7Wh+eZ+0vvciPwncXpRCyPijPXeF5UfbK8eW4Xm2UHaOgoKCgoPDHoxT8KSgoKCgo\nKPwexfg/wcuco1JQUFBQUHgeFONfhTD8Go1GuQAoKCgoKHxr+U4X/AljbzQaMRgM6PV6KpWKlNKt\nlzSnwvFCyBhXT2yrlktVipoUFBReNr7Txh8eaWCLwQtWq5VCoUAikaBUKtVkBKTCy4fQSdfpdGg0\nGjnrXWjSKxLMCs+LiCgq+0Wh3nwnjb/X65XjdMWc7q2tLdbW1giFQkSjUTKZzLF8QYUXWo3wRBVe\nLGJo0fj4OO+99x7RaJT19XW+/PJLdnd3j2wE9HcZEZ0TF6yXYQDQk4hUok6nk1MAq6fsHfXnefJ5\nCnQ6HW1tbdjtdlQqFQcHBwSDwWMxQfRl5o+98D3re3pRfGeMvzCajY2NjI2N8c477zA8PExjYyMf\nffQRwWCQRCJBJBIhkUgcqwNGpVKh0+mwWq243W5cLheFQoFsNks6nSYej5NIJGo2r/pFUOuN/CIR\no5ZtNhutra2cP3+et99+m7m5OVKplJxMd9w/x8uM0WiksbGRUqkkDWWxWKRYLAL1uXBV1wF91c83\nmUyYTCby+TylUolKpSInvRmNRjnV8KgjRtU1TWq1Wq4NHk3Is9lsjIyM0N7ejlqt5v79+6RSKdLp\n9LE0/uJM1+v1qFQqyuXyY3uk3usS/xv4xueFiDKWSqXHvqcXyXfG+KvVaoxGI9///vf53ve+x+Tk\nJNlsloWFBT788EM++eQT4vF4XUfNPguNRkNDQwMnTpzg9ddf5+LFixweHhIIBHj48CHT09M8ePCA\nTCZT943/NES+XKPRyNG+x9VwisOxubmZ4eFh3n77bZqbm5mbm+ODDz7g448/JhQKHevZ8y87KpWK\n5uZmfvSjHxGLxbh16xbBYJBIJCJrcuqxpuoLrODJfaxSqWhtbaW3t5f9/X1isRi5XI5yuSwvMmKM\n8VFfIIUTIUbNijWIGfTt7e1cvXqV0dFReV4eHByws7NDoVA4snV+FdUXMI1Gg9FopLm5GZ1ORyaT\nIRQKEY/H67jCR+syGAxoNBoA8vn8N35+drsdo9FIOByu2VnzrTX+YqOLucqtra309/dz5coVTp06\nhdfr5YsvvuBXv/oV09PT7O3t1XvJj81qt9lsWCwWrFYr3d3d9Pb20tfXx+nTpzlx4gTBYBCXy4VG\noyGdThMOh9nZ2TmSjS8MpFarxWAwYLfbsdlsmM1mrFYrBoOBg4MDVCoVXq8Xm82GyWRCo9Gws7PD\nzZs3SSQSx+qiIvaLmD0/Pj7OyZMnaWtrIxqN8umnn3L37l22t7eP7cXlaYgixeMUqag2pKKzplKp\n4HA48Hq9+P1+xsbGePPNN9nY2CAej1MoFIjH4/LPq1Sqms9uV6lUOBwOLBYLBoNBFgYLY24wGEgk\nEkSjUaxWK16vl66uLvr7++no6GBlZUX+CofDcpa7MLpHSfXFRXjIwqOsVCqUSiVUKhUNDQ10dnai\n0+kYHBxkaWlJRhaPcq3CYdDr9ZjNZjweDw0NDXi9XnQ6HblcDrPZjN1uf+z3PvvsM27fvk0mk6mZ\nx/wstFqtjM42NjaSSCSIx+NEIpFvbPx9Ph8ul4tMJkM+n6/Nemvyt9YZsdHF5rDb7Vy6dIk//dM/\npa+vD6/XS6lU4t69e/zP//k/j8WhWH0Y2mw2urq6aG1tpaOjgx/84AeMj49jNBrR6XQyLO10Ouno\n6ODw8JBQKCQ3W60RBXAWiwWXy0VfXx9+v5/m5mb8fj9ut5sbN26gUqmYmpqira2NhoYG1Go1H330\nEYFA4LGQaL0Rz91oNOLz+ejr6+OVV15hdHSUSCTCnTt3+OUvf3ns0kFfx5PG9TisvfpgF96lKKLs\n7OxkamqK9957j1OnTmG1WrFYLAQCAba2tlCr1Wi1WlQqFVqtlnQ6XVPjr1araW5uprW1FYfDgcfj\nweVySW/M7XazsbHBwsICfr+fM2fO8MMf/pD29nYMBgMzMzNcu3ZNphOTyWRd8+cialIsFp+6H8Lh\nMIVCQRqxnp4eTp48ydLSEru7u0e2TvE96/V6nE4nTU1NjI+Pc+rUKc6cOYPNZiMSicgLgXAsKpUK\n//2//3c2NjY4ODggnU4f2ZrhUaje5/PR39/P0NAQi4uLLC0tEYvFvtF3rlKpaG9vp7W1lc3NzZqd\n6d8a418dDhKGVHgS4hAplUoUi0X29/e5e/cu9+7dq9uLqNVq6e/vl78KhQKRSIRMJiM9T4BkMsnq\n6ipra2scHh7KFzedTlMsFuVl4eLFi+zu7rK/v19zo9rd3c27776L3++noaEBq9WKVqulWCzS2NiI\n2+3GYDBQqVRoampCr9eTTCaBR99NR0cHpVKJUChEuVwmn8+TTqeP1DhVt+4ZDAbcbjcTExN4PB5K\npRKBQIC9vT02NjZYXl4mlUrVNdSv0+lwuVy0trbS1dXF0tISgUCAUqmEXq/HarXS3Nz8mLEyGo0k\nk0l2dna4ceMGOzs7ZLPZuoSb1Wq19KDFc9RoNAwNDdHa2opKpWJgYIDTp0/T1tZGuVwmHo+ztbXF\nw4cPOTw8RK1W4/V6GRkZYXx8nH/8x3+U6a5avMd6vZ5Lly7xyiuvYLfbZfTh4cOHJBIJrFYrg4OD\ndHd3Mzg4SH9/P16vF71e
T7FYJBgMsru7SzweJ5fL1e0C9rSCsyfXUSgUCIfD/O53vwNgamqKbDYL\nPPqeRP75KBgcHGRsbIy+vj4cDoeMSlQqFfL5PLlcTv5TrFGv16PRaDh58iQ/+clPWF9fZ2Fhgbm5\nORmpqTUNDQ38+Mc/ZmxsDIfDwc7ODvv7+/K7fx7EpWdkZISBgQFu3bpVs6j0t8L4P02VT/TvW61W\nTCYT5XKZSCTCzs4OyWSSDz/8kIcPH9ZpxY++5PHxca5evcqpU6cIh8MsLCywu7uLXq+ntbWVSCQi\nDZD4ZzabpVQqUSgUMBqNeL1eLl++zOTkJLdu3WJra0teEmqByGm+//77DA4O4nA4CIVC7O7uyhCt\n+C6KxSI7Ozvkcjn5AmxsbMgbvUhv5HI5tre3icfjZDKZmqxbILwKm80mC4UsFgvNzc1MTk6i1WqZ\nnZ1ldXWVSCTC6uoq4XC4ZqG350Gs8cSJE4yPjzM2NsYXX3yBw+Egm81is9nw+Xx0d3fT09NDb28v\nHo8Hk8lEOBxmaWmJYrGIy+UinU6TTqdJJpNEo9Ej+Vyi2M1ms2E0GmUBmVqtxuPx0NrailqtpqGh\nAaPRyP7+Pnt7e+Tzeaanp1lYWCAcDqPVauno6GBqaorvfe97cs/Vok5HpIHGxsZ4/fXXMRgMrK6u\ncvfu3cdC/Xa7ncbGRlpaWmhoaKBUKhGLxUilUqytrbG+vk4kEpGXrnrxdcanVCqRSCT48ssvMZlM\n8oLucDge+96O4vLS29vLW2+9xYkTJyiVSszPz7O5uUkgEJDpl2QyKaO61SkZk8nE+fPn6ejowGQy\nsb+/TyQSqXkUQKPR4HK5uHjxIiMjIyQSCVQqFZFI5Bt97zqdDpvNRmdnJ729vVgslpoJzr30xr86\nfwiPb/JCoUAqlSKbzbKxsYFOp+PBgwcEg0GuXbvG1tZW3das1+uZmprirbfewmq1EolE2N3d5fbt\n2+zt7fHBBx+QTCZlviifzz/mtYkK4r29PUZHR3E6nZw5c4ZMJsNHH31UkxydCLdarVaampqwWCyk\nUinu3LnD9evXuXHjhgyfh0IhWdEsBHFEnjaTyeDxeBgYGOD111+nUqnw8ccfMz09zdLS0gtfdzXi\n4jE0NITL5aJSqWAymXC5XHg8HnZ2dpiZmSEWi5HJZIjH43WvTRDh5z//8z/n9OnTWCwWent72d7e\nJiZLcAYAACAASURBVBKJ4Ha7GRgYwGg0YjQa5YFRKpUwmUxYrVb0ej2pVAqDwcDKygoPHjzgn/7p\nn9jf36/5+kWKymw2A8jLIMDc3Bw7OztkMhnZYSEKpMrlMuFwmIODAwqFAi0tLUxMTHDixAk6Ojrw\n+/00NTXJmoAXidjrZrNZPrtbt27x85//nN3dXfL5PB6PR3rFs7OzDA8P09fXBzwKoV+7do379+8T\nDAZl63A9PP/n/ZnFYpGtrS3m5uaYm5ujr6+PiYkJ7t69K9uha/0uqFQq/H4/ExMT6HQ6bt68yV/+\n5V8SDAbJZrOYzWbUarWMeup0OtRqNSaTCZvNxsmTJxkZGaGjo4NsNsva2hqLi4s1N/4GgwGr1YrZ\nbCadTjM7O8vh4eE3jrKZTCaampoei5Apxv8reNrDqVQqj+VoRVGcuJlvbm6SSqWOeqkAWCwWmpqa\naG1tpbGxUXq+N27cYHl5mcPDQyqVymOH5NMQVbvCoIkb+pM6AC8KEU0xGo3Ao0M8FApx69YtPvvs\nM+bm5qhUKmi1WhKJBLlc7rHqaJVKhdPppLOzk5MnT3L69GlOnTpFKBTi7t276HS6mqy7GpfLRXd3\nN5OTk1itVg4ODshkMsRiMe7fv8/W1hbb29uyc6JYLNa9+6OpqYmhoSGGh4fp6emhUqmQyWQoFArY\nbDYaGhpob28HkFXllUpF1mY4HA4GBgaoVCoYjUZaWlqwWq3MzMzIPG8tEWF/UVgmQraVSoW9vT1C\noZAM36rV6sfqQUSBmsFgkB0v3d3d6PV68vl8zUL+Op0Os9mMRqMhn88Ti8XY2tpifn6eZDIp0xKC\nZDLJ1tYWs7OzaLVa8vk8wWBQXg5eBiGocrks01sNDQ00NTXJdJPJZKq55LmoURFn9P7+Pnfu3GF2\ndpZ4PP5Y2qH6TBFRGpPJRE9PDz6fj+bmZg4PD2WLaK3xer10dnZiNpuJx+PMz88TDoef+zsXn6Gz\ns5PXXntNvs+i1a8WfCuM/7NyWZFIhGg0Kg9C8RKWy+W6tps5HA46OzuxWq2y/Ue0kn0TxIspCpFy\nudwfvCQvEhF+1ul0JBIJ1Go1+/v73Lp1izt37pDNZv/gmT753Tj/P3vv9dzmmZ7/f9B7IwACJAj2\n3kVRVLckS+6rdUtmk2w22Uwmk2QymZSznP2SPyOZyeYkyWQyznfXWXvjurJc1C1RIkVS7BUkCBIA\niUqAwO9A8zymFFmiHAEkvbxmPPZYInnzxfM+d7vu67bb6enp4bXXXuPUqVMoFAoikQixWKwocsql\npaV0dHTQ19dHLpcjFosxPz/P5OSk7M/utlHEuro6ent7sdls0omsra0RCoUkyzkcDgNIaWqtViv/\nvkKhoKSk5IH/VqvV/OIXv2B6errgzl/wOoR9Gxsb8kKORCJP/HpB3i0rK6Orq4uKigrS6TRLS0ss\nLCwUpHWh0+mw2Wzk83lisRirq6usrq7KStDW3wkgGo0yPDyMSqXCbDbjcDior6+noqJCMvxFgLOb\nkc/ncTgcHDp0CI/Hw8rKCjqdThItCwnRkguFQty+fVv+87jJIBEgikTJ4XDQ2tqK2Wzmzp07zM3N\nFWVSobKykra2NgwGg6yeiHdyO1CpVBiNRtra2vjxj3+M3W5ncnKyoMHLnnf+W7OKrRnAVjWwh7XY\nRZkom83uyAtpNBplf3NycpJ/+7d/45NPPnnq76PT6SShMZfLoVar5TRAoSAuPrVazcDAAJ9++imT\nk5PblkIuLS3l9OnTNDY2olQqpV7BzMxMwScVFAoFTqcTj8cjM/wvvviCUCgk+7SCULlbIMYlq6qq\n0Ov1LC4uMjw8zCeffEJ/fz9w/xyYTCaZWYuSqE6no62tjdraWsxms2REV1ZWYjabOXXqFJlMhs8/\n/7ygv4OoTImRsacJTgVfoKenhxMnTmC326XS4tLSUsF66aKkrNfrWV9fl+TCx50P8fx9Ph+HDh2i\ns7NTVjni8fi2Ap2dhCC/GgwGtFqtLK9vVforBtRqNfl8nqmpKaanp7f1c7fuaTEajWxubkpuSzF4\nLVuDu0gkwu3btwmFQtv+erfbzeuvv87zzz+P1+uVlY9C2v69cP5iVAgezDRFn1lcDuLfYjZdZDyZ\nTKaoTG6RFYiy83vvvcfIyMhTfQ9BWquoqMBisZBMJonH4wUrgwqI6Fyj0RAIBLh58yahUOiJz09o\nLXR1ddHb24vb7ZYiS4ODg8zPz8uJgEJAXA5Wq1UycQcHB7l79658Zk9y+o8SeYHCq
c2JwLa0tJTK\nykr0ej0TExN89tln/PrXv+bmzZsPTHZs7SkLW48dO0ZXVxdOpxOtVgvc/yz8fj99fX0EAgG+/PLL\ngvWjRbXI6/WSSCSemnlts9mkrZ2dnSiVSqamprh06VLBsn5AkkDFPSHGxx5nu3iGfr+f06dP09ra\nSjAY5NKlS6jVu/+q1Wq1NDQ00NDQgF6vJx6PEwgEWFxcJBKJFLz9JeS0DQYDGo2GUCgk++ZPgrjT\n7XY7KpWK6elp5ubm5FRUoSHuRaVSSSqVYmlpads8AyHidvbsWQ4dOoTBYJBjgoUkWe7+E/kYiMtR\nyK0+fEi+7ULTarW43W4AWT4tpohFTU0NL7zwAmVlZbLf+TSlV6VSicVioby8nLa2NiwWC4FAgIGB\nAUZGRgp2ISqVSmw2m2TqixncmZmZJ36t1WrlD/7gD3j11Vfx+XykUinm5uZ49913+eyzzwiFQgWN\nch+WNRXaCIIpvh3HL77+4b9fKM15cbY9Hg9+vx+tVsv8/Dy//vWvH1muf1T7a2BggLm5OZxOpxy1\n83g8UoWuuroarVZbkABYkOacTieNjY1EIhGpjrjdZ9Xe3s5v/dZv0dzcjMvlYnZ2lk8//ZT//M//\nZH5+/pnauxXV1dU8//zzlJeXo9FoaGlpYWZmhrGxsSd+bVVVFc899xw2m421tTWCwSDRaLRgtj4r\nWK1WfvrTn/Laa69htVq5fv06H330EfPz80VphYnpJafTic1me6oqptvt5uDBg/j9fmKxGL/85S/5\n5JNPiqZMKHg3RqNRBruCpP0kmEwmnE4nXq8XvV5PKBTiq6++4uOPP36q1sHT4nvh/EUvf7uwWq30\n9PRgNBqJRqPcuHFDlq4fF2U+aqLgu9hss9morKxEoVAQi8WeWvFLrVbT3d3N4cOHaW9vJ5FIcPHi\nRe7du1dQ6VmxAVH0QtfW1lheXn5ir16pVGI0GmltbaW1tRWj0cjk5CRXr17l9u3bTE9PP9Us7He1\nXcxfRyIR5ubmtq2JYLPZJDnTZDJJwRYxb5xMJolEIs+csyCcp8iEVlZWmJmZYXx8XHJZHod8Pi/5\nFOFwGIPBgF6vJxAIkEgk8Hq9WK1WtFptQYhFKpUKk8mEy+XC5/NhNBq3/RmLqZG6ujpOnjyJ0Wgk\nFApx4cIFPv/8c8bGxgo6zupyuWhqapIOfLtjqAqFQmouCBVMoSex26HRaKirq6O2tlZmz1euXNlW\nZe9ZwGAwUFZWhtPplMTl7Z4Xn8/HK6+8Qn19PZlMhqmpKebn54tW0RVTNoKwmE6nt/2z/X4/bW1t\ncmfL/Pw8o6OjjI+PF3RKYc86/62KeE9LRikpKZG9ldXVVWKxGMFg8InMbsGi/64iKVsXURgMBqLR\nKOFw+Kn7zDqdjnPnzvHyyy9jMpl4//33eeedd5iamiqooIWYZbVarWxsbDA7O8udO3eeeEDVajV6\nvV7ODOfzeUZGRvj444+ZnZ19JFHwWUM891QqRTAYZGpqisXFxW29oB6Ph1OnTtHZ2YnX62VkZIT5\n+XlCodADAdCzdv7CZqVSSTqdZnp6+gGxp+0im80SjUaJxWKo1WqCwSCRSASPxyMlmr+L/viToFar\npTKekGDd7lnXaDSUlJTg9/tpbm4mGo0yODjIf/3Xf0nhlkJAvKOisqbX65mbm5MVlCd9rQjYtFot\nuVwOlUolddqLhe+apAjbRSV1bm6O/v7+onEVjEYjPp8Ph8MhNTi2+ztUVlby1ltvYbPZmJ2dJZlM\nFlwzZCu2tgQTiQSLi4vbdtzNzc0cPnwYq9XK+vo6MzMzLCwssLKyUtDKxZ51/qI/tHU8bDsHRUiK\nut1uamtr8fv9bGxsUF5ezsWLFyU79GEH/yghoe+CrUpV8XiceDz+VM5ar9dTUlJCWVkZHo+HfD4v\n1a2gsNvO8vk88XicxcVF7t69y9LS0mN/nnhmZ8+e5c0335TjZvl8HrfbTWNjI0NDQ0UZgxITHnNz\nc8Tj8aeS3CwrK+P555+npqZGirosLi4SCAS4ePGiVPEqhM3inKysrDA+Ps7CwsJ3flYGg4GSkhKq\nq6upqqpCo9HI71+ILFqML8XjcSYmJlhbW9v2M6+oqOAP/uAPOHfunFSPTKVSpNPposyawzfPX0yp\nmEymx36dXq/H5XJJ/Qi4Hzi+8sor5HI5mUEXunf+XZy/0WiUTld8rSCPFosAq9Vq5VRKLBZ7qs9Z\nVICFkuSBAwcIBALb5gz8XyH2JWQyGclzEQucnoSysjLq6upkZay8vByHwyE3QO6z/R8BMVZjtVql\nIti3HVRRJRBKaB6PB4/HIw+MVqtldXUVg8FAOBx+JA9AONj/aykpnU6zvr7+nfZ5CyW6qqoqrFar\nHD0qxpawzc1NIpEIgUCA2dlZotHo//p5W2VzTSYTbrebM2fO8Pbbb2MymUilUkSjUTY3NyV3QFRU\nCimCks/f1zVfWVmRa0qfFLhotVopJ9vb24vT6ZTZrN1ux2w2c+3atae+qLYLEbBEo1GWl5eJRqPf\neTpFtF5KS0vxeDyUlJRIpchCVotECVS0t54UQIu1211dXZw/f57m5mbg/ihdKBQqmlJePB5naWkJ\ns9lMPp9Hp9PJbHSrreLMajQa3G43nZ2d+Hw+8vn72yuNRiM9PT2MjY1x48YNqX9RyPdUBN2Cn7Kd\n5+Xz+eju7sZut5NIJAgGg5KHU0ydi63PRdwjj3tWWq2W8vJyKioq0Gg08g6sqKigvLxcksELHcCs\nr6+zsrJCOp2WfulJPDKhQFtZWUllZaWswG1dJlXI6sWedv65XA6fz0dfXx+fffaZHCPK5/MPZOq5\nXA6NRoPZbObVV1+VpDOx1jKbzWI2m+nr66Ojo4N8Ps+vfvUrbt++/YBDValU/2eNgFwuRyQSYWZm\nBo/HIx3KdvHcc8/xV3/1V/h8PqnHvbS0xNLSUsHLXEKuV+jgw/8W29BqtbIi09bWxhtvvMGxY8ek\nMtf8/DxfffUVCwsLzM3NsbGxIYVQCrmlTXxuj/v+WzMmIT37ox/9iJdeegm3241Go5HOUpA0xYKj\nRCJRkBJdLpdjYWGB2dlZvF4vFRUVT12BEuVcEYwJgt9WEaNCXI6ZTIaVlRXZRx4cHHzggn4YQpTo\npZde4vz58/IdzWazjIyMyP3yha5u5XI5Jicn+fWvf43ZbAaQHA+RAAhbBVdCCBD96Ec/oqenh2w2\ny+rqKpFIRI5rNjY2MjY2xvLycsGdkWjnbNW/fxxOnz7Nn//5n1NTU8PMzAz/7//9P65du1bw5Ulb\nsba2xr179+jq6pIcERHAfBucTid/+Id/yMsvv4xOpyMWi7G8vEwkEiGVSn0rSfdZQwR3R44ckb7m\nSaJlXq+Xnp4empubcTgcbG5uyqktMU5dSK7InnX+4iX1eDxy/rezs1NmwsIRCcKNWNX63HPP0d3d\njcViAe5fUELNzWazUVFRgdPpJBKJoFQq
CQQCcgZ8u1H042wGmJmZ4YsvvuDs2bN4PB4OHz6MSqVi\ncnLygb8H919iIVdps9l4+eWXaWtrY3NzU4oYbW5uYjabicViBe2fC8GTUCjE3NwcJpOJ3t5eOZ9v\ntVplQJPL5aisrJR//u6770olwzt37hCLxWRPTAgUiTWchbJdlNCUSqVkuYtLXKfTYTQapVyoxWLB\n5/Nx7tw5WltbpS6EUHsLhUJSdrZQqorC5qmpKYaGhujr68PhcOBwOOSymCdBlEOFAppY+CMuxEJe\n7OIyy2az6HQ6jhw5QkNDgyx95/P5By44MdP//PPP09PTg8ViIZfLyT3tomxe6DaRUB68ceMGBw8e\npKqqis7OTlQqFaWlpZKMaLPZWF5eZn5+XpbMx8fHiUajXLp0iXg8Lrfk6XQ6zp49i9ls5u7duywu\nLha0nysmmkQL81F8JoVCQUVFhZQZb21tRa1WMzg4yM2bN6Wcb7HK/olEQvbrXS4XZ86cQa1WMzQ0\nJM+RWNOu0+mor6+nu7ub06dPU1NTI5cpzczMSFKraNUIAl6hfpfl5WXJx3E4HLz44ouMjo4yNzdH\nLBaTLaGNjQ05vltdXU1DQwO1tbUoFAo2NjZkpaWmpobOzk6uXbvG6upqQWzes84f7mdFbrebo0eP\n8txzz5FKpeThUSgUWCwWTCYTer0eu91OSUmJ/PAFiSqTyRCLxaSKlNVqpaGhgVQqhdVq5csvv5Q7\nrcVl/H85QPl8ntHRUTKZDE1NTXR1dfHmm2+iUCjklrat0Ol0vPTSS/zgBz+goaEBq9UqJVCF9rxe\nr6e6upp4PC7XhhYCW0vnQ0NDdHR0cPDgQQKBgGQKt7e309DQQDQalQf9P/7jP/i3f/s3IpEI8Xhc\n/n+xQtfj8bC6ulpw0SVhv9j+5ff70ev1WCwW7Ha7nOsWi5U8Ho90TEIvIpVKEQqFmJ+fZ3p6mng8\nLgOGQtibz+cZHx9Hp9PR2tqKyWSivLxcltOfBOH4hfO3Wq0yyCk00VI8b0GGPH/+PA0NDbLSlcvl\n5JpqkemInrNAIpGQ3Bjx+xbD+S8vL5PJZJidnaW5uZnz58/T3t7OyMiI3FVQUVFBf38/Fy9eRKlU\nEo1G+dnPfiaDWo1GIwPi1157jbfffhuj0SiXjBXK+efz9/dVlJeXs7q6KkWGHpW4dHR08A//8A9U\nVlai1Wolr2dubm5bEyXPEslkkkAgwPr6Ona7nZ/85CfU1tbyL//yLywuLpJKpTCbzdjtdhwOB7/3\ne7/Hq6++KoNJoUswPj4u5dttNhvwDUm7UHfj2tqa1ETo7e3lj//4jxkZGeHevXtMT0/LUcRoNIpG\no+Hw4cPSH4l3OZVKyeRETKONj48TDocL8jnsWecvHLGQRdRqtbJPIsq7IsMRzPQrV64wOjrK4uLi\nA6tkxXicx+PBbDZjNpvR6XRks1kCgQArKysPRMCP+iCe5kJKJpMsLy8TDodRKpW0trZis9k4ffr0\nA8GJ+P3a2tpwuVwEg0F5GO7evcvk5KRkmisUCsrLy1Gr1QUVPxFsVtG2qKyspLm5GbfbLfvJqVSK\njz/+mNu3b7OyssK9e/eYn5+Xsq5bszexyOP06dNcvXqVK1euFLT3L7J3cTn6fD6ZTbhcLtRqNclk\nkqmpKUZHR1lfX2d1dZXl5WWpdLa6uirFQxwOB83NzTKwKUR/VCiVGQwG+vr6KC8vJxQKsby8zNLS\nEtlsFr1ez71795idnZUXSTKZlIJMSqWSWCzG+Pg4c3NzNDQ0PDMS6+MgRkKnpqbo7u6WvXPxZyIQ\nSKfTLCwssLi4yNDQ0AOMbbEhb35+nlwuJ3X9C3lORMVhZGQEr9dLTU2NrC6KuyMQCDA8PMzt27cB\nZFVI8D+USiWJRIJoNEpXVxepVIrDhw+Tz+dl9atQsNlsdHZ24na7UalUsoKZy+VwOBxyFXdDQwNu\nt5tYLMbU1BTXr1/ns88+Y2Zmpui7T3K53AMtNYPBICtA4n0Td7qQRf/Hf/xHuU00n8+zurrKysqK\n3MSpVqupqalBrVYzPDz81Fv2tot8/v4umevXr6PVauno6JDjwV1dXXL2X5CNv/76ayYmJrh79668\nE7dWxLq6uqSQm8FgKMjI3551/vDNYVlbW8PpdMqtSoKlKlijc3Nz3Llzh5s3b3Lz5k2mp6elExBL\nRFwuF8eOHSOfv7/8RDi51dVVWTp7ErYbAKTTaSKRCGNjYzQ2NtLS0kJlZSWnTp0CvhF3WVtbk6OI\nYqXlwsICCwsLXL9+Xa4z9fv91NTUUFFRgU6nY3l5uaCCOel0mmAwyPz8vGyTWK1WHA4HqVSKmZkZ\nPvvsMy5cuMDCwsK3kuEUCgWpVAqNRkNVVRXj4+NyTrZQl7qQ4RRtB3FWhMqfqGyIC2R1dZXFxUVm\nZ2cle1gElg6Hg2PHjuHz+RgYGJCckGeNVCol18i2trbS2dkp2y9jY2OSeJlIJAiHw9IOwafYOoK0\ntLTE2toaGxsb6PV62cMuJKLRKGNjYywsLEhSrSDEiTMQCASYnJxkfHycK1eucO/ePUmOE39PZNxi\npevTimM9DcTzGx4exmKxoFQqZXIg2hCDg4MMDQ0xPT39rYFIIpEgEonI3725uVkGcoWqYIigymaz\n0dXVRU1NjVyLrFAoqKyspKamBpvNhlqtJpFIMDU1xeDgIB9++CE3b96Uo8/FhODlLC0tMTk5SU1N\nDT6fj8rKSvleCb2LsbExPv74Yy5cuCCd+sPPUmhMiI1/W9tdzxqiJXrz5k0ZcHu9XhwOB2azWXJX\n4vE4CwsLXL58mUuXLnH58mXp8LeOgXs8Hpqbm/F4PASDwX3nvxUi85+dneWzzz7j8OHD1NbWyuhJ\nlGiHh4f52c9+xp07d1haWmJ9fV2WO7eW8G02G4cPH6ahoYF8Ps/Kyooc4XrWL2g+nyeVSvH+++8T\njUb5yU9+QmVlJUajUZKz9Ho9AwMDXLhwgdu3bzMzMyNJR8IZiPK5RqORI0YGg4GBgYGCykKKQzw+\nPk4qlWJ0dJTW1laOHTvG3NwcN2/e5Nq1awSDwceW2bLZLPPz82SzWdbW1pibm5MzxoWC2MZ29epV\nxsbGsFgsD8xnp1IpuX9dOB5Rthaz8Pl8XlacLBYLVqtVCvEUokeay+VYWVnho48+kmx4sZ/CZrOx\nuLjIwMCAdEQioBEB8NaeuljJbLfb5dkpdBk9FApx8+ZNueRG6EQIfk4wGOTKlStyNloI6mwlI4o9\n5+3t7bJKILLZQiGbzcp+cyqVory8HIvFIncLXLt2TVYRn4RwOMz4+LgkbG5Vm3zWEBnw119/jcfj\noaKigvX1dRKJBE6nU7ZDxQrf27dv89VXX3HlyhU5VVJMufOtdgNcvnwZhULBb//2b1NfXy+fr+Bx\nXblyhX/6p39icnJSOsZHnV+h6ZFKpdDr9SQSiQek4J81EokEg4ODrKysMDw8jNfrxeVyYbfbpT2D\
ng4OSF7K1SgTfTMYIAnp1dTUdHR1EIpGCqFnuWecP9w/LzMwM//M//8PKygqNjY1YLBa56U6U3UZG\nRhgZGXngkDy8atZoNFJbW4vX60WhUEgiz9NkFk9zgW5ubjIzMyPHQoS4hc/no7S0FJvNxp07d/j4\n448ZHx9neXn5fzkWcYmIaYaGhgZKSkq4dOmS1FEvBESfSvQTxUiQ0+lkdHSU69evyx7d45DL5Ugk\nEpKj4XA4yGazLC4uEovFCpYViZJtKBRCrVbLvrjdbieZTD4gZ/ptNoiZ5KqqKurq6igtLWVubq4g\nExeiDzswMCDXxYrecTAYZGZmRraAtj430VYREG2uzc1N6fg1Go3M/gsVdIkg6saNG6yurkreSi6X\nk850cHDwsSz4XC6HwWCQZzyVSvHRRx9x586dgknP5vN5otEoU1NTMuiy2Wxsbm4SDAYlmWs7CAaD\n3L17l56eHjndUijnD0hthcuXL7O+vs78/LzcOS+IlcFgkNHRUb788kuuX7/OyMjIt+oQbF2gVuiR\n3Lm5Ob766iu54lan0+Hz+SgvL8dms7G0tER/f/8T11Fvbm6SSqVkO9jn8xGLxQgEAvIufZa/i1hF\nvLGxQSQSkbsGhHZBMpmUFbBv+5lbk1ebzUZfXx+hUEi2Q58l9rTzB5ienmZhYYHbt29TXV1NY2Mj\nXV1dHDx4EJfLJWclH3aEDwthaLVavF6v/KCWlpbkKNp2MqOn/WDy+ftrKEdGRhgdHZU6+WfPnqWr\nqwuv18vt27e5dOnSt04ZCEKVmPFvbGwklUrh9XoJh8Pb3rT3XSEyokgkglqtxufzMT4+zuTk5FP1\nCy0WC01NTTKjSyQSJBKJgo3niKADvhmDM5vN+P1+NBoNs7OzT3SEQoe8tbWV9vZ2Pv30U0ZHR59q\nh/fTIJVKybXDn3/+udzmKDQGFArFIzkHW5XHxOpfwUIX29s0Go2shBUCIli8ffs2Q0NDsvLgcrlo\na2uTI1GPe26bm5vodDrq6uo4cuQIFRUVhEIhxsfHn3pT4NPaHo1GuXXrFmazGZvNhtvtlpWV7WJx\ncZH+/n5ef/11GXAVkm8hWikffvghFy5cwGg00tDQQHl5uWwLiIDk6tWrzM3NPdaRivdE3EVb383H\niQp9F8GhSCRCNBrl3r17WK1WSktLeeGFFzh16hRqtVpOBm0nMcvn768orq2txel0srS0RDQafaCy\n9LT2PQmCqxIMBtHpdDgcDlQqFZlMhvX19W35knQ6jUaj4cSJEywsLDwz27Zizzt/8dKLkuHi4iLr\n6+uoVCp6e3upqqriz//8z6Uk661bt5iampKkM6VSSWVlJa2trdjtdlkGrampkUSuQgpdiEs3Ho/L\nTFqlUmEwGLDb7TidzseKu2zVIXc6nahUKs6fP4/BYODLL78s+MZCETkbDAZqamqkDK3gU2wHYs2v\n0WhkcXGRtbU1otFoQVsXW+1XqVTY7XYOHTokS4tTU1PfmokKcqVQ/TObzXR2djI3N8fc3FxBz8rG\nxobkoJjNZkwmE1arFZPJJLN5scY0l8tJ5y8u74fL/E6nk66uLiYnJ5+o2Pgs7BctI71ez+bmJhaL\nBZ1OJzX0H3XOxWdisVgoLS2lpKQEk8nEqVOnSCaTXLx4kWAwWLBgV3xP4XTEuN/T7KQQa36dTieb\nm5u0trYSj8eZnp5+5vbCN2OWIpNUq9UYjUbq6+vxeDzo9XpJSHtUciQglFTr6+s5ffq05I3MV+Q5\n8wAAIABJREFUzc0RDAbllM633THf9fMQAWM8HicYDBIOh0mn0/KslJSUyDHtJ+Hw4cOcP38ei8XC\nxMQEVquVwcFBpqamCjr/L36HjY0NSXTd7lSQGE03Go1SN+VJ8vNPiz3v/MWFIqLFQCAgR4jKy8vp\n7u7mlVdeoauri/HxcfmBiJJQOp2W42Zi1EV8vVjUUIzfYWNjg1gsJlsQVquVsrIyGhoamJiYeKDs\nv5UpD8iZdHGRtrW1cffu3YL2tx6232QyUV1dLYWA1tfXt10SFSxqp9NJIBDgvffeK7jAhcDW4Ems\n09Tr9dy8eZOhoSHJGhYlWkHMKSkpobOzE4/Hg0ajweFwYLPZCjLytxXi7IqgrqSkhIqKCqqqqohE\nIpKkGA6HJQlKjFAKZr2YYNDr9ej1esrKylhaWiqo3QKCSCcIUX6/H7vdTk9PDy6XS2ooiBFKYbtW\nq31AQEqj0cgA/euvv2ZlZeWZLN76Noh7I5vN4vV6sdlsTE5OymmGJ8Htdsupnng8LgnKhcLD2bkI\nFP1+Pw6HQ2r4A48lTgqiZVVVFS+99JLsZ2ezWWKxGGtrawUV5hI98PX1dZLJJCqVCqfTSUNDg7xr\nHle6VygUNDU18cILL6DVarFarYyNjTEzM1O0ZUtCN0QELNu5F0VrRqPRPLBvYd/5fwvERTc5Ocna\n2hp1dXX4/X5KSkqora2lvLycVCqF3+9HrVZLhqtSqWRtbY1wOCyJUAMDA9y8ebOo5BfhVMrLy/F4\nPHR3dxOPx/n000/JZDKEw2FZsdh6cLf+txDKKJY0p+AdmM1mKioq5D7txcVFgsHgtr6HIM+J0phQ\nnyt01i+eY29vL6+//jp9fX0yELl16xaXL1/mV7/6FYFAQBKGhGqYEI8SLPq7d+8yNjZWtLOSy+VQ\nKpV4vV6OHTvGyy+/LLOMZDLJ8PAwly5dkhsHJyYmiEQiJBIJhoaG8Pv9HDlyhGg0KtdKF/p5wzfP\nvLa2ltdff52enh68Xi99fX1SyOfnP/85g4ODOJ1OQqEQExMTUpdjaWlJ8gbm5+eZmppibW3tqTdj\nflfbtVotlZWV1NfXA3Dp0iWuXr36xK8VexV0Op0UySrkulYBEbAIcqjVapXKf4ODg9y9e1dOunwb\nNBoNer0eg8FALpeTSZYg0xVaPU+06UTFq7KykvPnzwPIe+5RP18IXBkMBkwmk5QnF3tJirGmWMg+\nt7S00NnZyfvvv7+tCpuQCxZidIVIKr5Xzh++GblIJpNcuHCBtbU1qcNuMBhkpChmu7cSnoQG+ebm\nphRsKLQO98MwGAyYzWZZljt48KAsrQnGpyjtZjIZOYIkGOChUIjR0dFtr6t9FhClQbvdTmlpKfl8\nnosXLzI1NSV794+CTqejsbGRjo4O6VxF1leMFxPuX+g+n4+Ojg48Hg9WqxWn0ykrK+vr68zOzqJU\nKpmbm2N5eRmTyURpaalcvLG+vs7CwkLRlojAN4Gu0A8vKyuT2vOZTEYqm62urspgUBArxbRJLBaT\nmyULKa70MMSOjQMHDlBZWYnT6cTj8RCPx4lEIoRCIWw2GyqVipGREWZnZ+U7KkYcdTqd5MvE4/Gi\nBV0KhQKn00lzczNWqxWVSkU6nWZmZuaRojiiFePxeKivr8dsNsvR0kKO4wqIisXa2hojIyP813/9\nF3a7nWw2y61bt7h9+zbJZPKx75qYyLhx4wZzc3OMjIyw
vLws2wqFbhUBUqdFqOOVlJRQWVlJdXU1\ni4uLku8iqgAqlYqamhoOHTpEa2srSqVStlZnZmak5kGh7xihFyH2iYjkbevvthUiONbpdHIktlB3\nyvfO+cM3F+Mvf/lL3nvvPeD+KJ84NHa7HYvFIvtIIrLN5XJS7U9kScW8zAHZlwWkgIyY3R4bG5Na\n8rOzs6ytraHRaKitraWqqgqVSsXy8jJ3795lYWGhaLaL7F8o9qnVaqqqqhgaGnrsIhaj0ci5c+c4\nc+YMOp1OagcIxmyxgi6bzYbX60Wn08n/53a76ejoQKPRsLS0RDwe5+rVq4yOjkrRF7PZLC/WcDhc\nsAmFb0M6nWZqaoqZmRnW19exWCxSQMflctHT08P8/Dz5/H1RGVHmdzqd2Gw2SdYU70ExIJxhSUkJ\nNTU1mEwmyb0R2glHjx6lrKyM2dlZlpeXZQlUkL3Egp+BgQGGh4cLJq70MMRFbLFYqKuro6GhAZPJ\nhFqt5r333nvk6ltR1fL7/XR0dMjvU8xzIsTKlpaW+PTTT+XCnK3M8m+DGJkeHx/nnXfekYJXQnSn\nGL9HPp9neHiYmZkZhoeH6e7u5siRI5SVldHe3i45B6IVls/fX8R06NAh/v7v/x6PxyMD5eXlZRYW\nFor2rm5sbLC0tCTle5eXlx875SHeBREQF3Ii5Hvp/LdCfMCCqBKJRDCbzZSUlJBMJolGo7JkaLPZ\nqK+vl2I1xZ51TSaT/PznP2dkZITq6mopNpROp3E4HLz22mtSmWt4eJjR0VEmJiYIBAJcuHCB8vJy\nyQgvpHrYVuTz98VaxsbG+NnPfsbZs2dpa2vj1KlTOBwO5ufn5SrX9vZ2jEYjS0tLcvb2xIkTlJWV\nEQ6H+fzzz/n5z3/O7Oxs0WzP5XL86le/YnFxUfafRX9NlNGdTie9vb2Ew2FJSBweHubixYv4fD6S\nySSrq6tF3R8O9y/1cDgsl1o5nU7sdrss8QoZ1KamJm7duiUd/cTEhNwRILLQYrYrNjY2+Pzzz/m7\nv/s7rFYrGo3mAfZ1JBKRQbmYHEkmk0xPT/PJJ58wNzdHdXU1CwsLRc36BQHtvffeI5fL8corr1BV\nVcWZM2eor68nGo0+IKcsJgIymQzd3d1yMkO0Nwoh3PI4iExXEEG3GzAJTQYxiVOMjP9hiHtwdnaW\nRCLB+Pg4mUwGrVbLD3/4Q/x+vxRgWl9fJxwOU19fj16vlw74v//7v/n444+LmlgIPpdoTwglwq2t\nWiFcFY/HpX5HJpMhFAphNpsL1gb93jt/ga3s861SrEIfXzBJ4/E49fX1MjIs9iG5evUqQ0NDVFVV\nYTQa0ev1NDY2cuDAAY4ePUp5eblUqhLKaePj46ysrFBdXc3KygrBYLCojmhzc5PZ2Vl++ctfygxO\nkKLi8bh0/seOHcNiscgSaTqdpqSkRCrVffbZZ1y8eLFo/We4/9wFua+yshKHwyGZuaJPK4IWrVYr\nBXcEEam2thaVSkUoFCp6i0g4o8HBQe7du4fT6ZSVrYaGBrq6unC5XJJcJvrmMzMzGAwGqqur5Uho\nsRyoKEPfvXuX0dFRLBYLKpVKtttEpqPT6fB4PFLJD+6/w+I8RaNRqSVRzLOSSqW4dOkS+Xwer9dL\ndXU1zc3NdHV1yeqEXq9Hq9WSTqdZXV2V0y+3bt2SqoGrq6tFbbVs/R2e5nmJz2urGmYxl/1sRTab\nJRQKEQqFuHfvntR9OHz4MKdOncLtdrO2tsbi4iK3b98mm80yNjYmF7R9+OGHfP311wVdqPQwtrbn\nHk7IxPpzoR8hBKOMRiPr6+sEAgFcLlfBgq3fGOe/FSJjgm8qA0IvX+yy3i5Z7VlDEMgmJiYkWUVI\ncQoFv3Q6zcTEBDdv3uTOnTskEgm5slVMPhSr5C8gREX+/d//ncuXL3Pu3Dl6e3tpb2+XzH3BhrdY\nLCwtLTE9Pc1HH33E0NAQS0tLUs+92LbDfccyNzfH4uKiJAqJiZFUKkU8Huf69etMTEzIqYz5+Xn6\n+/vlPoWdsBu+yaZDoRDRaBSdTodaraa2tlbqDojNlMlkksXFRfL5vPwcin2Zi+xTiKGIEvTWSRah\nPrfVNsEwn5yclBruxS6h5/N5QqEQX375JbOzs7z22mu89dZbmEwmKQMsyFmCO/TBBx8wPj5OMBiU\nwWOh9CAKBfF5iee907aLiqOQqTYYDADy/rt69SojIyOk02m5un1kZEQmIsW2/1HPTLSbKyoq8Hg8\ncg9EKpViYWGBkZER4vG43AHzrO+X30jnLw7y1tJLPn9/k5cgcO3ky7lVQCSVSjEyMkJpaSler5eN\njQ2mp6e5ePGiVEUTGcTWXl6xbRc2T01NEQqFZDY8PDyMz+fD6/XKiFZchIFAgGvXrjExMUE0GpVr\nN3cCW4V/4Btmt8iuA4EA8/PzUlVs6zkRpK+dOi/iYhESxGIx0bVr15iZmQHu7xsXWb4or9+6dYt4\nPP7M54efxuZv+7liz8LDXyOkoFOpVFHLt1ttEL3bcDgsF4CVlJRI9Txhv6iyXL58mfn5eTl6KSSj\nd9qBPi126t38Noi15p988olUMBTtiUuXLsnlREJQSehf7JbnLqoqoVCITCbD2toa8XicfD7P4OCg\nrCwWSo+gOIOOT48d+3QE4a6YpdAnQalUUl9fz+uvv87o6CgXL158YMXpbsXWUbojR47g8/mYnp7m\nP//zP+XY4m55Eb9vEBmoyKQfftYKhUKu0i32Apf/K7aqFu6G8yOSCEFmFEGNeOa7wcZngULvgPgu\nEGRj8U+xxoSfJR7F/tdqtZKvE4lEuHPnznf5nR7r3/ed/0MQF+ZOlIa+DQqFAqvVSnV1tZyxLcZc\n87OC2+3G4/FgMplYX19nYmJiT2Y+ewlbtSAe5YC2/vleOUdbsVsdkdB+gOKz+n9TIe5scW/vxfP8\nMFQqlVQz3NjY+K6V6H3nv4997GMf+9jHXsLDbenv8i0e94eF1SLdxz72sY997GMfT42CK5wW9Lt/\nd+xn/vvYxz72sY99fHfsZ/7FwMN6+3sJe9nuvWz7XsRefub7KD72z8vuxW79VPYz/33sYx/72Mc+\nvjv2M/997GMf+9jHPvbxDfad/z72sY997GMfv2HYd/772Mc+9rGPffyGYd/572Mf+9jHPvbxG4bf\nSG1/eLSk4l7AbpM2fRo8A9GKHcGT1PJ2M/aq7Q+zxPei7eI93UuKc0ItD9hTtovnLZYq7SWlP2G3\nUIcsluz5b6TzV6lUcmmLWPO7V6DVauW60Ewms2cuRLGnWqFQkEql9pSuv0ajwWw2k06n5fKhvWK7\nyWRCq9XKRTh7RcdfrVZjNpvlIp29ZLvBYJAbNtPpNKlUak84IqVSidVqlTsfEolEUVeD/1+g0+mw\nWq1yJfTa2tquWuLzOJjNZkwmExqNhmQyyerqKlD4BElV0O/+3fH/FfKb6/V6PB4ParV6zxxuuB8h\nlpSU4PP55Aa
3vQKNRkNVVRU2m41EIrGnInOn00lzczPAngtc6urqqKqqIpPJyBWoewFWq5W2tjYc\nDgebm5vS/r2Aqqoqurq60Ov15PN54vH4njjrOp2O9vZ26urqpKb8+vr6Tpu1LXi9Xnp7e/H7/djt\ndqLR6ANbOnczWltb6e7upqGhQa4Hf0aVrr9/3B/+xvX89Xo9ZWVlnDx5kpaWlp02Z9sQ++Wbm5t5\n4YUXKC8v3zPiGWq1GpvNxuHDhzl06JCsAOx2iM13lZWVvPTSSzQ0NMj1oLsdYsNcZ2cnp06dorS0\nFI1Gs9NmbQsqlQq3283p06c5cOAAFotlT9guNsw1NDTw8ssv09zcjN1ul2t+dzMUCgUmk4nDhw/z\n3HPPyQBgL0ChUFBRUcHLL7/MkSNHqK2txWAw7LRZ24JCoaCjo4MXX3yR3t5eKioqitbi+o1y/gqF\nAqPRSF1dHW+99RZ9fX174iKH+w7UYrFw6NAhfvd3f5fq6uqdNmnb0Ol0uN1uXnzxRc6cObOnXky9\nXk9TUxO/8zu/Q2tr6wM90d0MlUqFwWDgyJEjvPLKK3g8HtkP3e1Qq9WUlZXx2muvceTIEYxGI2r1\n7u9QKhQK1Go1bW1tvPXWW7S0tGC1WveE81cqlZjNZk6ePMm5c+eoqqrCYrHs+gqX6JdXVVXx5ptv\ncvToUSoqKtDpdHvG9p6eHs6fP09XVxdut3u/518IKBQKOjs7OX36NPX19dy9e3dXre59HJxOJ8eP\nH+fAgQO4XC7Zl9sLttfX13Py5Elqa2uZmZkhm82yubm502Y9EUajkYMHD9LT04PD4XiAr7DbUV5e\nTldXF7W1teh0OuLx+J4ogyqVShobG+np6cHpdJJOp5mfnycej++0aU+EzWajsbGR2tpaNBoN8/Pz\nTE5O7olWS1lZGR0dHbjdbtbW1rhy5QozMzM7bdYTodVqqayspLq6GqPRyPDwMJ988gkrKys7bdoT\nYbfbKSsrw+12E4vF+OCDD7h8+XLRfv7eSAWeAUQm1NnZybFjx/D5fJhMpp02a1tQq9V4PB5OnjxJ\nW1sbVqsVtVq96x2/yIQaGxs5c+YMfr8fvV6/J4IWQX7q6+vjwIEDmM1mFArFricRiWzC7/fzwgsv\nUFtbi1qt3hMcEVE2b29vp6+vD4fDQTabZWVlhXQ6vdPmPRYKhUIG6E1NTajValZXV1leXt71REWF\nQkFtbS3Hjx/H4/GQTCa5d+8ey8vLO23aYyEqud3d3bS3t6PX65mfn6e/v5+1tbWdNu+xUCgUeL1e\nnnvuOfx+P6lUiq+//prh4eGi3S+/Mc7fYDDgdrvp7Oykq6sLk8kkL8rdXMYVvbjKykqOHTtGXV2d\n/P+72W64T/KzWCy0tLRw4sQJXC6XvOB3u+1arRaXy8XRo0fp7u5Go9HIcZzdDIVCgU6no66ujvPn\nz1NdXb0nzjl8E6AfPnyY559/Xvac98JZV6vVVFRU8MYbb3DgwAFgbyxvUiqVqNVqent7+d3f/V38\nfj+wN0YrBQH69ddf59y5c2g0GjY3N3c9IVe8j+3t7fzlX/4lnZ2dcqqlmIHi977sr1Kp0Ol0tLS0\ncOzYMVpbWzEajcRiMeLx+K4+JGq1GpPJxNGjRzl9+jR+vx+lUkk4HN7VmZDI+P1+P0eOHOHgwYOU\nlJSQSCRYX1/f1S+ncPCiPdTY2IjBYCAajZJMJnet3XDfdofDQV9fH8ePH6e8vJxMJsP6+vquHwtV\nKpXU1tZy6NAhurq6KCkpIZVKkUwmd3WlSKFQoNVq6enp4cyZMzQ0NGA0GgkGg3IsdLdCZJ/d3d30\n9fXh9/vZ2NggkUiwsbGx6ycUWlpaOHnyJB0dHTidTlKp1J4YgbZYLHR2dnLixAkaGhrI5/Mkk0lp\ne7HwvXf+arUau93O8ePH+cu//EscDgfJZJJAIMDq6uquPuCCKPfmm2/y6quvYrFYCIfDsge6W20X\n2WdbWxt/8Rd/QV1dHZlMhlAoxMrKyq5+OZVKJTqdjrNnz/JHf/RHeDweEokEi4uLMnDZrVCpVJSX\nl/P7v//7nDhxAoBoNEooFCKZTO5a20Um1Nvby9/+7d/i9/vJZDKEw2HW1tZ29YifQqHAYDBw/vx5\n3njjDaxWK4lEgpWVFRKJxK4v+dfV1fFnf/ZndHV1sbm5yfr6Omtra3sicDl58iQ//elPqaioIJPJ\nEI1Gicfju7q9pVAocLvd/OhHP+L5559HqVQSi8VYX18nmUzuZ/7PChaLhbq6Ol555RV6e3tJp9Ms\nLCwQiUQYHh5mYmJip018JNRqNUajkaNHj/Liiy9SW1vL2toaa2trzMzMMDAwsGsJLVqtFofDwUsv\nvcSJEyfQaDSEQiEWFxcZGRlheHh4V17mYqyvpaWFc+fO0dXVRTKZZGFhgZWVFcbGxggEAjtt5iOh\nVCpRqVScPn2a5557DpfLJZ3mxMQEAwMDu5Ywp1QqKSsr47nnnuPYsWPkcjlWVlZYXFxkYmJi15LO\nRCuio6OD48ePU1NTQzqdZmlpiWAwyNDQEOFweKfNfCREr/zw4cMcP34ck8lELBZjbm6O6elpJiYm\ndm3QolAo8Pv9HDx4UI5qRyIRkskkMzMzBIPBHbbw2yHOS19fH6WlpWQyGYLBIAsLC9y7d6/ohNzv\nrfNXKpXU1dVx8uRJfvCDH2A2mwkEAvJhT01N7drL3Gq10traytmzZ3nttdcIh8NMTk4SCoWYm5tj\ncnJy14pvlJWV0dXVxauvvkpDQwNra2vMzs4SCoWYmZlhdnZ2V14sOp2O2tpaTp48yZtvvkk+nycQ\nCBCNRgkGg8zNzUnlrd2GkpISqqurOXfuHMeOHSOdTjM5OUksFpNnPZVK7bSZ/wviIu/p6eEHP/gB\nHo+H1dVV5ufniUQi8n0VMrm7CSaTiYqKCk6ePMkrr7yCSqUiEAgwNTXF8vIyMzMzxGKxXWm72+2m\nrq6Oc+fO0draSiaTYWpqimQySTAYJBAI7MqsX6vVUlZWRm9vLy+//DJOp5NIJEIoFGJ9fZ1gMEgk\nEtmVz9xms+HxeDh+/Dh9fX3o9XoCgQALCwuEQiFmZ2dJp9NFtf177fx/+MMf8tu//duUlpYyPDzM\n559/zsDAAMvLy5SUlJBOp6We8m46LLW1tfz1X/81HR0dWK1Wbt68yZUrV7hz5w46nQ6LxcLm5iZK\npXLXlf7Pnj3Ln/7pn+L1ellZWaG/v5/+/n4mJyfxer1sbGw8oHu+W+B0OvmTP/kTTp8+TVlZGTdu\n3ODy5csMDAyQzWbx+Xzyme+289Lb28vf/M3f4Pf7USgUfPXVV9y6dYuBgQF8Pp/8e7vtmatUKt5+\n+23eeOMN/H4/U1NTXL9+ncHBQaLRKFVVVWSzWXnOd5PtdXV1/MVf/AXd3d243W6uXr3KrVu3GBwc\nxG6343a7yeVyu+6ZA5w+fZof//jH1NTUkEwmuXHjBgMDA8zOzlJbW0sq
lZKtmN10vzgcDn784x9z\n6tQpamtrGR4e5saNGwwNDaFSqairq9u156Wjo4Of/OQnNDc3Y7FYuHPnDoODgwwNDVFbWyv/XjHP\ny+6mLn9HCJJfVVUVdXV1aDQaZmdn+fzzz7l16xYzMzPkcjmsVitlZWW7SnRGrVbjcDhobW2lrKyM\nTCbD0NAQly9fZnBwkNXVVYxGI16vF4/Hs2uUz4SiXEVFBe3t7dhsNlZWVrh8+TLXr19nbGyMbDaL\n0+mkvr4ep9O50yZLqFQqzGYzTU1NckZ7cnKSL7/8klu3brG0tITRaMTv99PY2LhrRkTF5ITH4+Hg\nwYN4vV5SqRS3bt3iypUrDAwMkEgkcLlctLa2UlVVtWsY6OK81NXVyfHVpaUlLl68yI0bN5ienpaS\n0IcOHcLlcu20yRKCWHngwAFqampQqVTcu3dPBl3hcJiSkhLa29tpa2tDr9fvtMnAN+elsrKSQ4cO\n4fV6WV9f5/Lly1y5coXh4WFyuRxVVVUcP36cysrKnTZZQkw9dXR00NbWhs1mY25ujs8//5zr16+z\nsLCAxWKhvb2do0eP4nA4dtpkCdHaOnHihHT0/f39fPXVV9y8eZNEIoHP5+PYsWO0tbUV7R39Xmb+\ngiWv1Wqltvb4+DgXL14EwOPxYDKZ5Ljf8PDwrhBAEUQ5g8GAVqslm80SDoe5desWN27cAO5nHCKr\nUKvVrK+vk81mdzzKVSqV6PV6dDqdtGtmZoaLFy8SCoWwWCyYzWbKy8uxWCxcu3aNlZWVHbcb7p8X\nvV6PXq9HoVAQi8UYHBzkiy++AO6XSUtLSzGbzTgcDj755JNdMSkimOZ6vR6j0Ug8HmdpaYlLly5x\n+/Zt4P6Ia11dHS6XC4PBwMzMzI7bDffPi1arxWQyodfrCYfD3Lt3j48++giFQkF9fT0ul4uGhgbq\n6up45513CIVCO267mGTR6/XYbDZUKhXhcJjr169z6dIlyR2pqamhtLSU0tJSAoEA6XR619husVgo\nKSkhHA4zPT3Nxx9/TCgUwul04nA4aG9vp7W1lXfeeYfp6ekdtxu+SejcbrckPvf39/Ppp59KeV+/\n3091dTWNjY380z/9E+FweMdtFwGXzWajpqaG9fV1FhcXuXjxInfu3EGhUGA2m+nq6qKlpYWSkhIG\nBwf3Ff6+K0pKSmhsbMRoNDI9Pc3Pf/5zfv3rX7O5uYlCocBms3H06FHsdjuhUAitVivZojtJRlOp\nVFRXV1NbW4tCoeDq1au8++67jIyMyBJiZWUlZ86cIRKJcPfuXWKxGNPT0zt+0LdWKyKRCO+++y7v\nv/++ZMhrtVpaWlro7u6WfAXRT99pR1pTU8PBgwexWCyMjIzwi1/8ghs3bsgSYklJCb29veTzeebm\n5iTZcmFhYUfLi1svDYCLFy/yy1/+UvZshdhPb28vyWQShULB/Py85GDs5DOvrKykp6eH8vJyFhcX\n+cUvfsEXX3wh7TYYDDQ2NuL3+0kkEszNzZFMJpmamiKRSOyY7Wq1mgMHDnD48GGMRiP9/f28++67\njI6OStudTift7e1sbGyg1+uZnp7mzp070pHulO1er5eenh6amppIJBJ8+OGHfPDBB6yvr5PL5VAq\nlfh8PlpbW8lms8zPzxMKhRgZGdnxQL2trY1Tp07hdruZmJjgv//7v+nv75fP3GQyUVdXh06nw263\nc/r0adRqNUNDQzs6XWSz2Th06BAHDx5EqVRy+fJl3nvvPRYXF8lms6hUKpxOJ3V1dWxubrK6usoP\nfvADeV4K2Xb5Xjp/o9FIaWkp6+vr3L59m//5n/9hYGCAfD6PXq/H7XbT3t6O3+8nGo3KsZyJiQki\nkciOXS5CJcxisbC4uMi1a9d4//33CYVCKJVKWXru7u4mmUxitVoJBAKoVComJibkmMtO2G4ymaiu\nrkatVjM5OcmFCxe4cuUKyWQSrVaL3W6ntraWzs5O0uk04XCYlZUV7t69y8LCArFYbMdIRh6PR7K1\nx8bGeO+995ienpasaI/HQ2NjI1qtltLSUmZnZ6Ugx07O/wsHWVpaSigU4urVq1y4cIHV1VU0Go0k\npTU2NpLNZkmlUiwsLKDT6djc3CQWi+3YxejxeOjr68NgMDA9Pc2nn37KnTt35DvqdDpl225jY4Pe\n3l7W1tbIZrMEAgFisdiOOFKVSkVTUxNNTU1sbGwwMDDABx98wPz8PCqVCpPJhMfjobq6ms3NTVQq\nlQwSE4kEa2trpFKpHXnmJSUlHD16lLKyMkKhEJcuXeLatWukUik5pVNWVkZlZSX5fJ77b35xAAAg\nAElEQVQDBw4QDofJZrPkcjnW1tZ2TKOjrq6Ovr4+dDodg4ODfPjhh4yPjwP373un00l5eTlGoxGL\nxcLRo0dJpVLyfhcJRrFtN5vNHDp0iMbGRpLJJP39/XzxxReEw2EpguZyufB6veTzedrb24lGo/K8\nRKPRglWNvpfOP5VKsbq6yoULF8hms8zOzpJKpWRkW19fj9vtxu12Y7PZOHv2LCUlJZJUJzLtnTjk\n6XSa6elp3nvvPe7cuUMkEiGTyWA2m6mtraWiogKr1YrRaKSpqYkf/vCHeL1erl69yuDgIIuLizvi\nRPP5PJubmwwPDzM5Ocn4+Lh84TweD83NzZSWlmK1Wtnc3KS3txe9Xo/X6+XGjRvcvn17x/aeK5VK\nNjY2uHHjBqOjoywvL5NKpdDr9dTW1lJTU4PZbEan06FQKDh9+jRWqxW73U5/fz9jY2M7cimKgDAS\nifDll18yNjZGNBplc3MTl8tFY2MjXq8Xo9FILpejsbFRqqJZLBZu3brFysrKjpwX4eAF41kEgEql\nksrKShobG7HZbBiNRrRaLQcOHECtVmM2m7l+/Tr9/f070u4SlSCDwcDY2Bjj4+OEQiE2NjawWq00\nNzdTWVkpF8t4vV6OHz+ORqPBYDDw9ddfy/0WxbZdJA+ZTIbh4WFmZ2eJRCLk83l8Ph+dnZ04nU5p\ne11dnWwviucej8d3ZFrH5XLh8/lYXV1lcnKSQCBAIpHAYDDQ1NREfX09er0erVaLxWKho6NDVgWu\nXbu2YxUAnU5HdXU1ZrOZyclJZmdnpeSzx+Ohra2NsrIy2aIuKyujr6+PTCaDVqvl6tWrBIPBgmgX\nfC+dfzablSI4arUalUqF1WrFZDJx8uRJTp06hcvlQqVSsbm5iclkwufz0d7ejkqlQqvVMjs7Szgc\nLqozEhKPmUxGzm6r1WpcLhfV1dWcOXOGjo4ONBoNmUwGhUKB1WqlpqaGbDaLTqdjdHSU6enpomcX\nuVxOVh3UarUs3Yqy1/PPP095eTlKpZJkMolarcbpdNLc3AzcdwYTExPMz88XPfDK5XLkcjlUKpXc\nwGaz2XC73Zw5c4be3l4MBgOZTEaWcn0+HwcOHECn02G1Wh9wvMWCmJoQ50T0dN1uNx0dHZw7d466\nujr5zHO5nCyPiv8eGRmRJet
innWVSoVGo5FnJZ/PYzQacTgcHDt2jBMnTmC32+W7LKpi4vzbbDZG\nRkZki6NY50XwLLRarTwr+Xweu91OTU0NL7zwAi0tLSiVSuLxOOl0Go1Gg9/vJ5vNYjabGRoaYnh4\nmFgsVjRHKs6GCKbEedBoNLhcLg4cOMDZs2fxer3kcjni8bi8G5uamlCpVNjtdoaGhhgfHy/aci5x\nxvV6PSaTSWbv2WwWk8mE0+nk5MmTHDhwAK1WSzqdlkmH2+2mp6cHo9GI2+3m7t27LC8vFy0IEPwP\nwWvZKj1st9tpaWnhlVdeobq6mnw+TyKRkElqVVUVuVwOm83G4OAgg4ODstr4rGz/Xjr/zc1NNjY2\nqKqqkgfWZrNRVVXFG2+8walTpzAajaytrbGysiKV/hoaGuSmpY8//phoNFr0UtHGxoZUx4vH4/T3\n90sm6I9//GMqKytRKBSsr69L2y0WCz09PXg8HkpKSlhZWWFjY6Oojmhzc5N0Oi3Z/FeuXCEajeLz\n+XjxxRd5++23MZlMpFIpVlZWCIVCxGIxfD4fDoeDuro6PvjgA5aWlooeoYsMobKyko2NDUwmk2QO\nv/XWW3R3d6PVaolEIgSDQYLBIEqlktbWVsrLy6mpqeGdd94pautCjARtbm5iNpslQc5qteL1ejl5\n8iQ/+clP5Fjo6uoqwWCQpaUlSkpKOHz4ME1NTXz22WeyMlYs5y9sz+VyOJ1OcrkcZrMZt9uNz+fj\n5Zdf5uzZs5hMJqLRKIuLiywtLbGxsSGrXwcPHuRf//VfWV1dlbYX+sxsHVHVarXyfTMYDJSWlnLw\n4EHefvttqqqqyOfzRCIRFhcXCQaDGI1Genp6aGlpob+//4F3tFiOSNguyNAmkwmbzYbL5eLkyZP8\n8Ic/xGw2k06n5TmPRCKUlZVRXl7OkSNHeOedd1hcXCQWixXtmYtdIEqlEovFgs1mQ6vVYjabaWlp\n4dVXX+XgwYNoNBpWV1cJBAKS09LS0iK3RP7zP/+zDLiK8cxFAieI3ILMLdoUhw4d4rd+67ew2Wxk\ns1lCoRDBYJDl5WVsNhuHDx+mt7eXTz/9lPn5eZlc7Dv/R0AwQsVL2NTUhMlkwuv1olAosNvtNDQ0\noFKpyOVypNNpYrEYsViM1dVVqYEu+kZms7lo0blOp6OkpIQXX3yR559/nrq6OrxeL62trTIgEeMr\nm5ubpFIpuaZV9BHhfsYqVqGur68X/JCLMbnW1lbefPNNWlpacDgc/PSnP2VtbQ2bzUZtba185plM\nhkQiQSKRIB6Pk8lk5OXvcrlwu92srq4WZfrCaDRit9vp6+vjhRdewOv14vV6sVqtchGUWHQieubJ\nZFI++3w+j0qlwuVyUV5ezsrKCsFgsOCXokKhwOVy0dLSwtGjR+nq6sLlcvHmm29y8OBBbDYblZWV\ncvNjJpMhmUySTCZl9iA+t/Lycmpra5mfny84qUtwKHw+H11dXfT09GCz2SgtLeVP//RPyWQy2O12\n2Z4QAWUikSCdTstqlli6VFNTI1XpCn1eFAoFZWVlNDc309nZSV1dHSaTiTNnzuB0OrHZbJSVlWG3\n2+UzF+clnU6jUqlQKpWYzWZZZczlckxNTRX0vAjH09TURN//z96bxkad33n+r1/d9+Eq2+WjfIPx\nhTHYgGnOhkY9TafP9HaiRCtNonmSkUYjrfbJPtrRaJSRVlppNbtPsruzk5nJTBIl6U7odKcb6AZD\nY2ODjc3hA9+3XXb5dp2u+j/g//12QYBcUFWGeksluhvafvP19/u5j/37qaioIDc3l62tLb7xjW/w\nyiuvYLPZKC8vR1EUec+FFxqJRIhGo1ImVlRUUFdXR19fHz6f75nfl7y8PGpra6mvryc/Px+tVsu+\nffv467/+a1m/VVhYKKOmiWeeaOyIFdcbGxvcunVLyp1nxVtRFOrr6+Uq86ysLNRqNa+99hqVlZU4\nHA7ZehuNRolGo2xubhIIBKTzo9Vqsdls7Nixg8OHD9PV1cXw8PBTc0ifG+WvVquxWq0UFRVx8OBB\nOQtfo9FQUlIilz4Eg0GGhoZQFIX5+XnGxsakdS4KpUwmkyzWedY9l8I69Hg8VFVVyTGnQkhWVVVJ\n3rOzs8zOzhKLxRgbG2N6epr5+Xk509put7OxsfFA+PpZQqPRYDQa2bFjB01NTRw+fJjc3Fw0Gg1Z\nWVnyEQYCAQYHB1Gr1aysrDA+Po7P58Pv98sUgN1uJxKJoNfr5Qa6ZykQtVqtzLk1NjZSX1+PVqvF\n4/FQWFhIKBQiFArh9/tZWFhAo9EwNTXF1NSUNE6i0ai02kXr2rO+LxqNBq1WS0VFBU1NTdTV1cle\n83379rFr1y7C4fADZx4OhxkfH2dhYYGlpSUZunY4HITDYUwmExrNsxUF4nvm5OTQ2NjI3r17Je+t\nrS1sNpt8o4FAgL6+PrRaLYuLi0xOTsrCJ5HTFW26RqPxmW9aFK19FRUVHDlyhMrKSjweDyqViqqq\nKgoKCgiHw4RCISYmJuT9nZycZH5+ntXVVelpO51OeeZ6vf6Z8hbhcpfLxb59+9i/fz8FBQVYrVbi\n8TjNzc2ySDjxzDc2NpicnJROhd1ux26343Q6pfct0kzPindiBf/JkyelE6QoCqWlpdjtdsLhMOFw\nWEYSdTodMzMzzM3Nsbm5iU6nIysrC6fTKVMYZrP5md4XceYWi4V9+/Zx7NgxeeaKorBnzx7Kysqk\nkdXf3y/busU9Fw6GiEhqtVqysrJkK/LTwnOh/IUQKCsr4/333+fQoUMytLW1tcXGxgZ9fX10dXUx\nOzsrK1gXFhaYmpqSu87FYavVapaXl5+515/Yd/vSSy/x3nvvUV1dLfOJkUiExcVFbty4QV9fH9PT\n09Jzm56exu/3y+1bwlIUj+FZ5/zFmYs1pidOnJB9z4qiEAgEuHfvHh0dHbK1DGBtbY2ZmRm5xEKc\ngU6nk4/4WXYsCMXvdDrZu3cv3/72t2WOVqVSEY1GWV1dlUWIgqtarWZ+fp6FhQV5JwTveDz+TMJy\nj+JuMplwu9288sornDlzhtzcXCksg8Eg4+PjtLe3y10EGo2GSCTC3NyczOPq9XoMBgNGoxG/3y89\n52fJW6VS4Xa7qamp4a233qK+vh64b/xGIhHW19fp6emho6OD6elpqTBXVlakhymq6c1ms6xvmZqa\neqbdFkIJFRYWcujQId58801Zma0oilxYJYZwiZHhGo0Gn88nR/yKcLXVamVtbY1bt26xsLDwzLx+\nceY5OTlUV1fz8ssv09TUhE6nA5A55r6+PlpbWxkfH2dxcRG9Xk84HJZhc61WS3Z2toxu3Llzh56e\nHlmR/iwg7vmOHTs4ePAgJ0+eJC8vT9ZwRaNR1tbW6OjooKurS8pFg8HA8vIya2trUmHm5+fjcrmI\nx+NcunSJwcHBZ1Y9nzh0q7a2lkOHDrF3717MZrPkHgqFGB8fp7W1Vb5RYQSKXRAGg4Gi
oiK8Xi85\nOTn09/fz6aefsri4mMn5Pw7iojqdTrnbeX5+ni+//JLOzk56enpYWlqSQnB1dVXOaxePGZAhx2ed\nAxU/RFHslJOTg8lkQlEUwuEw9+7dk1PmhoaGWFpaIhKJsLW1xdLSkhTYIsykKAqxWCxpOS3xve12\nOw6HA41GQywWY319nY6ODtrb2+ns7JTRCUVRCAaDLC0tydy48JKEEkhGvl+EzYSAMBqNMvw2MTHB\nl19+SUdHB3fu3JFnrlKpWFlZkcJcGAtqtZp4PC6NmWRxNxgMWK1W6TlvbW1x584drl69yvXr1xkf\nH2dpaUn+/srKijS2tFqtLBJMVnoIkLMqTCaTnHonlGdrayvXrl3jxo0bD0SEAoEAa2trkq9Op5P8\nl5aWZMvfs4KoTRCRQJPJJFNY4XCYoaEhrly5Qnt7OwMDA1KAq9VqmUbUarVyiJQoSpudnU1KW664\nGyJiBMgQ882bN+W5z83NSYNLtIFqNBr0er2sWRDz6J+l0SIgUoSxWAydTidH9kYiEaamprh8+TLX\nrl2jp6cHv99PNBqVhYyRSASj0YjP52N2dlYqXyFDn6VcF3U4YuW6iMAKfdLX18eVK1e4du0aY2Nj\n+P1+GTEMBoOyQHBtbY2pqSmsVitTU1NMTk4+dfnyXCj/xLymCJuICueRkRH+/d//na6uLhm+Fd7p\nw3mfVLQ8ifz92toafr9fWrjBYJD29nb+z//5P9Jr02q1sjI6cd+2qFZPJgQHUXi4srIixxH7fD4+\n/vhjzp8/z9zcnPTaVCoVW1tbDwxSSnZxn3icGxsbLC0tMTs7S05Ojtwf39vbyz//8z8zMjLC0tKS\nrEYX90VEhxKrnZPFPx6PEwqFWF5exufz4fP5pNEVCoVoaWnhpz/9KXNzcw/kmeGrcxZV6MmskBcK\nVCxfmZmZwev1yv0a4+PjfPDBB3R2djI3Nyd5i/sizll4fMIAShZ/UQAnWhLFJMtwOExXVxc/+tGP\nmJmZkUaKuC+JUcPEXG4yuIszF2k2MZzKarUSi8VYWFjg/PnzXLhw4YGhUOK+CEUTi8VYXl5+4N+T\nwT0cDsulVFNTU5hMJoxGowyV/+QnP2FkZAS/3/9A14swMEX0Ynp6Wt6hZ33nxfmIaZVjY2P4fD4Z\nbQmFQrS2tvLBBx8wMzMj67REhb/gFgwGGRkZkc5QNBp9JsPnnn1i+I/Df/1j/ifx4Ox2OxUVFQQC\nAcbGxvjss8+YmpoiEonIHFhtbS0Gg+GB1bip6OsX31fkZgsKCqQiunbtGp9//rm8JEajkaKiIqqq\nqggEAg+EalPFXRgdYisegN/v57PPPqO3t5dwOCwLV3bu3InL5XrA+k7lmQtrWxTshcNh7t69y29+\n8xuWl5eJxWIYDAZZGyD2nScK8FTwF15oPB7H6/ViMpmIRCJcuHCBjo4OubDKaDRSUlJCQUEB6+vr\n0tNMhbEoajgS27RE7nNycpLf/OY3TE5OyroPp9NJRUUFer1e9qInKp9kn7uYgBcKhWSVfzQapaOj\ngwsXLhAIBID7Idu8vDxZZyTeaKIRk2xjV/ystVotRUVFqNVqVldX5RsNhUJyJLrX68VqtcqBPiKS\nmIpzF7wDgYAsqBTes3ijIo2VlZVFUVGRjFoI3sLBS/aZi2jh1taWnLURj8f5/PPPaW9vl8uT9Ho9\nbrcbh8MhC3JFt5pw8P4Eo+VvnvSbz4XnD8hQeFdXl9z3rCgKm5ubqNVqnE4nOTk5chLUzp07uXv3\nLn19fSkT4oAUhuPj4yiKQk1NjSysiUajGAwGOaPA6XRSXFxMbm6ubH8SXyMV2NraYnNzkzt37uBy\nuWhoaMDtdsvIi8PhwO12yw6E4uJi/H4/g4ODKZtECF+d+ezsLK2trXKQjwh5CoWv1+vlKs6CggLp\nRaVa8cdiMQYGBjAajdTW1qLVatHr9XKGuMvlktXZubm5xONxxsfHWV1dTRlvIRAXFxfp6OggNzeX\n+vp6WVxrMBhwu91SCYkWtESlkyrFL858ZGQEgIqKCrKysmTxmMVike1nRqMRh8OByWR6YPxzqoaG\nibHl3d3d2Gw2WaAowulOpxOXyyXvkNPplOO3hdJJxQArEf0UC9ny8/Pxer3YbDZZb5SdnS2jFSaT\nCbvdjt/vf8DLF9GXZG7LE9767du30Wg0lJaWytkUosZLRGDi8bgsBkz8f5ORdn6uPH8RLhIWV6IV\ntXPnTo4cOcLrr7/O4cOH5VYoMVI0VYpIQPAUCkitVhMIBDAajTQ3N/Pyyy/zta99TQ70EQNOUr26\nUjywxAiEyGuWlJRw7NgxXnnlFY4dO4Zer2dubo7+/n5ZkJhK7rFYjEAgICeaabVa+asYTHTq1CmK\ni4tZWVlhdHRUCvRUY2tri2g0KtMSolre4/Fw7Ngxjh8/zrFjx9ja2mJiYoKJiQk5+CrV90W8UXHW\nBoMBlUpFXV0dJ06coLm5Ga/Xy/T0NBMTEw+0T6aSu0jRifSbw+GQ20EPHz7MwYMHqa+vl+9TpOtS\nLVuABwqDRadHJBKhpKSEw4cPs2vXLnJzc5menmZsbEzWiKT6zMUbFTl/m80mh0PV19dTX19PdnY2\nkUiEwcFBWeyc6nsO91MnoVBIFvvl5eWxubmJ0+mkrq4Oj8cD3C/0m56eltMTn+Jdf6Ln/1wpf0Ba\nTWtra3Lzlmijq6mpkStbb9++TU9PD0NDQym/JMADYapoNIrVasVms1FSUkJNTQ0VFRXk5+czPT1N\nR0cH/f39+P3+tOAuLG1Rl+BwOOTgnl27dsnxld3d3fT09DAxMZEWmwiF4SKEujhzERkqKSnBarUy\nPz9PW1sbo6OjMgSdaoj7IvKFLpcLo9GI1+uloqICh8PB1tYWt27d4tatW/h8PlmElEoknvnq6qqs\ngnc4HLLCORKJMD09TXd3N5OTk2mjQEX0QtR+OJ1O9Ho9OTk5FBQUyOK4gYEB+vr6ZMFfqiEiD9Fo\nlPX1dXQ6HTabTbbCORwOVlZWGBkZYXBwkPn5+bTYQghfFf5Fo1ECgQBmsxmtVovFYpFRo4mJCcbH\nx+WwqlTtCHkYwqETKSGTySSHuG1tbbGwsCDbnkUK4ymf+YsR9oevKqGnpqbw+Xw4nU68Xi8HDhwg\nNzcXg8HA+vo6w8PDXLx4kdu3b6eFFwdfFXOJPHleXh6HDh2ioaEBo9FINBplcXFRLhJJF8UP9x+o\n6NsXbVGNjY14vV7p7Q8NDXH16lU6OzvTRrCIkObQ0BCrq6vk5uY+MBI0HA7T39/P7du3uXz5ctKq\n4n8fiEUrnZ2dqNVqvF4vu3fvpqSkRC566u7upquri97e3rQRiOLMJyYmmJ2dxe1243K5qK6uxmw2\nE4lEaG1tpa2tjbt376Z8W2UixGCwu3fvEg6HycnJoaGhgYqKCjY3N5mamuLSpUvcuXOH6enpVNN9\nAFtbW/h8PhYWFtDr9VitViorK7HZbMzNzXH
79m2++OILlpaWZI1ROkDcF7F0TUxKrKioYGxsjMHB\nQTo6OhgfH08buSIgilx7enrkbH6xg2BkZIRbt25x9+5d6e0nG8+V8hcQFvrg4CB5eXnU19ezvLzM\n4uIiFy9epLW1lTt37rC8vJxqqr+FWCyG3++nvb0dj8dDbW0tk5OT9PX1cfXqVbq7u1laWnomix7+\nFCQq0suXL5OXl4eiKExMTNDV1SVnFaTbA4X73NfX17l27Ro6nY7s7Gx8Ph9DQ0P09PTQ29srK+TT\nCcKjGxsb46OPPmJjYwOfz8fo6Kj0PsfHx9MiBPooRKNRurq6ZOubKP7r7+9ndHQ0bTz+hxGPx5mf\nn+fChQuMjY1RXFzM/Pw8k5OT3Lt3T7b7pRvEWQ4MDBAMBrlx4wYajYbFxUXGx8dTvtL8SRBv9Pr1\n60xMTOB2u1laWsLn86VkJPgfgng8zszMDC0tLdy6dQuNRsPc3JycGZIq3s+t8o/FYszMzDA0NCSL\n4+7evcvZs2e5fv16ynNZj0M8HmdtbY2BgQG5pOfWrVu0tbVx7ty5tAk7PwrikiuKQl1dHQsLC9y8\neZP29nZ6enrSmncgEGBgYACLxUJRURGDg4NS8aerMIf73H0+HysrK5jNZmZnZ7l16xajo6PMzMyk\n9ZnH43GGh4dZXV1lc3OT9fV1hoaGZLtuuiIej8siutnZWQYGBpibm2NpaSktjdtExONxORnUZrMB\nyNXO6RIFfRxCoRBDQ0OMjY2h0+mSNhfkaWB5eZnl5WXZZp6q1ciJeLazSP94/MmnIlaXlpeXc/To\nUebm5mhra5PjNlN98E+CWFpRU1NDdXU1PT09DA8PywlP6QwRUty1axdqtVr24q6vr6ea2hMhht9k\nZWXh9XrlWN9UrTD9QyCGDYm8vxgAlW7RoUdBjC+12Wxyg1+qwqB/KBI3/IVCoW3DG76aLgpfVfWn\ns0xMhBiyldjGuF0gqvqTdNZP1O/PrfIHMJvNDwjzgYGBbXHJxYIiMZd6cnIyrT3+RIiVyKKVJXGa\nX7pDKCK9Xi9n+28HJA45AdKimPIPQeLGue3EWyCZbWQZZPAH4MVV/kKoANtOsCTZQswggwwyyOD5\nwour/CFjlWeQQQYZZPBC4sVW/hlkkEEGGWTwAuKJ+v3ZLsLOIIMMMsgggwzSDhnln0EGGWSQQQYv\nGDLKP4MMMsgggwxeMGSUfwYZZJBBBhm8YMgo/wwyyCCDDDJ4wZBR/hlkkEEGGWTwgiGj/DPIIIMM\nMsjgBUNG+WeQQQYZZJDBC4aM8s8ggwwyyCCDFwwZ5Z9BBhlkkEEGLxgyyj+DDDLIIIMMXjBoUk0g\ngwy2O7br8qjtujlyO68AVqnu+1vbjbuiKKhUKuLxOLFYLNV0/iCo1WoURSEWi22rc1epVM90TXfG\n83+OkLjCeLthO/PejtwF7+3IXyii7chdCPTtxl1RFDQajTz37QSNRoNOp5OG13aAuONGo1Fyf9rn\nrn6qX+3p4b+m6hs//Di3i5UI9y+5RqPZdh6dWq1Gr9fLc98uvAH0ej0Gg2HbnblGo8FsNqPVaiXn\n7cBdURRMJhMOh0N6c5D+3IXydDgcmM3mbeWFqlQqzGYzHo8HtVpNNBoFts+ZezwecnNziUajRKPR\nbRG50Gg02Gw2SktLMZvNbGxs/DFRl7950m9mlP9DsNlseDye31JC6X7R1Wo1ubm5OByObSVYAKxW\nK0VFRWg0mm0jWOC+cPF4POTl5bG1tcXW1ta2ECxCgZaWlmKxWAgGg/LOpDOEQe7xeNixYweRSIRQ\nKLQtDACVSoVOp6OiooLs7Gw2Nze3xX1RFAW1Wk1OTg61tbXE43HW1takfEnnM1er1eh0Onbu3ElF\nRQXLy8sEAgG2trZSTe13QqfT4Xa7aWhowGg0Mjs7SywW+0PfaUb5/75QFIWamhpeffVVotEogUAA\n2B75OaPRyKlTp6iurmZ5eVnmiNKdu6IoVFRU8M4776DVavH7/QBpn1sUiujEiRMcP36c5eVlNjY2\ntoXhpVKp8Hg8vPXWW+Tk5DA5OQmQ9tyFImpqauLrX/86S0tLLC4uSs7pfF/UajUWi4UzZ85QWVnJ\nxMTEtjC6FEVBq9Wya9cu3n33XUKhEOPj45J3up658Pr1ej0nTpzg4MGDjI6O4vf7095YVBQFo9FI\nQUEBr732Gnq9nrt370oH42kp/+e64E+lUsmPCIkn/rdgMMjm5iZw3+PfsWMHx48f59SpU3g8Hm7e\nvElXVxczMzNEIpGk8RaKRaQgtFotGo0GtVotQ+Orq6uEQiEAioqKqK6u5sSJE7hcLpxOJzdu3ODW\nrVvEYjFp6Sbjsgve4sx1Oh0ajQatVotarSYcDrO8vEwsFkOv11NZWcnhw4c5fvw4+fn5WK1Wbty4\nwcTEBJFIJKkPNJG7TqeTH61Wi1arZXl5mdXVVQA8Hg+7du3i2LFj1NXVEY/HMZvNdHV1sbW1JSMY\nyeItPkLgiY9Op2Nra4uFhQV5X3bt2kVTUxOHDx9meXmZUCjEzZs3GRoaSuo9f5i7TqfDYDBgNBox\nGo3o9XpWVlaYn58nHo/jcDioqanh6NGj7N+/H7/fTzQapaenh3A4nFTeidxFbtZoNGKxWDAajWg0\nGqanp1lZWSEej1NWVsaePXtobm7GYDAwMzNDZ2cnAwMDQPIVUeKZC95WqxWTycTm5ibj4+OEw2EM\nBgN1dXUcPXqUvXv3srm5yfT0NP39/fh8vqRyFrwBeeZmsxm73Y7VasVsNjM5OcnMzAzxeJz8/Hx2\n797NgQMHKC8vp7q6mpWVFQYHB1Om+IVMN5lM2O12srKysFqtbG1tMTQ0xNraGjiNJMAAACAASURB\nVLFYjOrqapqbm6mtrcVoNFJeXs7IyIh0SJ8GnmvlLxSnVqvFZDJhNpulItJoNPj9foLBoAwlvvXW\nW5w4cYLGxkZ2795NVVUVKysrrK6uSqGfjJy0EOJarRadTicvtsFgQK/Xo1KpGB4eZnFxEUVR2LNn\nD9/85jdpbGzE5XLR0NCA2WxmeHiYQCAghX4yuAulL87cbrdjsVgwmUwYjUaWl5cJBoOEw2Hsdjuv\nvvoqr776Knv37qWyspKKigpWVlaYnZ0FkicUhVcpjESHw4HdbpeCxWKx0NvbKx/fzp07+fM//3P2\n7t1LQUEB2dnZGI1G+vr6pEGZLAgFpNVqMRgMZGVlyY/NZiMcDnPjxg38fj/xeJxjx47x9ttvU1tb\ny+bmJrm5uWxubjI8PJxU3oK7OHe73Y7b7SYnJ4fs7GyysrK4d++eVKBFRUW8//77HDp0iPLyciKR\nCCqVipGREfl3SybvROPc5XKRl5dHYWEhubm5mM1mLly4wMDAALFYjKamJv7jf/yP7Nixg2g0Sjgc\nZmNjg97e3pQofsHdYrGQm5tLcXExJSUl5OXlMT09za9//WtWV1dxOp28/vrrnDx5kvLycgKBAGtr\na8
zPzzM7O5sSJSqcIKfTSWFhITt37qS0tBSv18tvfvMbWlpaiEQiVFdX853vfIedO3ei0+lobGxk\nYWGBe/fupSTikmgoejwedu7cSW1tLSUlJYTDYX784x9LA/zYsWO8+eabFBUVEY/H2bt3L6urq9Kw\neRp4LpW/Xq/HZrPx9ttv09jY+IBQX1tbIxgMolKp6O3tpa2tjbW1NbxeL7W1tXi9XgCGhobo7Oxk\nbGxMCh94tspI5GKrqqp45513yM7OlsIlFouxublJPB4nHA5z+fJl+vr6WFtbo6SkhKqqKpxOJysr\nK7S2tnLnzh38fj+hUCgpoVytVovRaOT06dOcOnVK8tbpdITDYUKhkBTUGo2GhYUFTCYTZWVleL1e\ndDodQ0NDfPHFF4yOjrK5uZm0kKLBYCA/P5933nmHHTt2yFyhRqOROWWVSoXNZgNgfX0dj8dDWVkZ\nWVlZBAIB2tra6OjoYGVlhXA4nBTBolKpMBgMHDx4kHfeeUdyNhgMxONxotEoiqLg8/nY2NhgZGSE\njY0NPB4P+fn5mEwmBgYGOHv2LIODgw/kz581hHH47rvv0tTUJIs+9Xq95CA8pOnpadbX17Hb7Xg8\nHhwOByqViu7ubq5cuSIjAMmA8Jarqqp49913yc7Olmeu1WqlUg2Hw4yOjrK8vCy5u91uTCYT4+Pj\nXLx4kd7e3j80jPsnQZzx6dOnOX36tLznJpNJRhbFOxZ1T3q9Xhq/Op2O4eFhLl26xPz8fFJD/lqt\nFo/Hw9e//nX5RoVDkRihE/djdXVV/t30ej3hcJju7m7u3r2b1KicMMoPHDjAu+++K3mazeYH+M/M\nzGC32zEajUSjUamztFoti4uLtLa2PnVj67lU/haLBa/Xy6uvvsrXvvY1FEVhY2ODlZUVRkZG8Pl8\nxONxFhYWpCBxuVwUFxfjdruJx+P09/dz48YNZmZmpNJ91lCpVGRlZbF7926++c1v4vV6icViMvQ5\nNjZGIBAgEAjgcDikMsrJyaGoqAij0cj4+DjXrl2ThkGycrh6vZ7c3FyOHj3Kd7/7XQBCoRDLy8tM\nTU0xMzPD1tYWi4uLWK1WgsEgZrOZvLw8KUBHR0e5evUq09PTBIPBpAlFq9VKWVkZb731FgcOHABg\nbW0Nv9/PxMQEy8vLRCIRzGYzFouFWCyGw+GgoKAAq9WKz+fj5s2b3L59m42NjaQVFGk0GrKysmhs\nbOQ73/kOWq2WaDQqIydTU1OEQiFWV1cxmUyYTCZisRgul4vs7Gz0ej1zc3O0tLQwNTX1THqJHwej\n0UhOTg6vvPIK77777gNvdGpqisXFRUKhEBqNRgpEs9lMdnY2NpuNeDzOvXv36O7ulqHSZEBRFGw2\nG7t27eKb3/ym9MxWVlZYWFhgZmaG9fV1wuEwKpVKKh6r1YrL5cJoNLK2tsbNmzcZHx9PavGZTqfD\n6XTy0ksv8d3vfhdFUeQbnZubY35+nmAwSCAQeED52Gw2rFarTGX09PSwvr6eVM/ZaDTi9Xp54403\nOHTokEx9Li0tMTc3x/Ly8gOOnUjd2e12DAYDGxsbDA8PyzNPFneVSoXFYqG+vp7vfOc76HQ6otEo\nq6urLCwsMDc3x8rKCj6f74GUm16vl5Hq5eVl7t69Kx25p4XnUvlnZ2fT0NCAy+UC7g9IGBoa4ssv\nv6StrY179+4RiURYXV1leXkZi8VCRUWFtMrC4TC3bt2ivb09qYJFpVJRXFxMeXm59IACgQDt7e1c\nvXqVa9eusby8TDgcxu/3Ew6HMZlMqFQq+evS0hIdHR2Mjo4mtXjLZrNRVVVFbm4uiqIQjUaZnJzk\n8uXLXLt2jVu3bslQp9/vR6vVUlxcLL29eDzO1NQUt2/flkZLspCbm8vOnTuxWq3E43G2trbo7u7m\nyy+/pKOjg+npacLhMEtLS2xsbGAwGFCpVFitVrRaLcFgkPHxcaanp5PqDRkMBkpLSykoKECtVrO1\ntYXP5+PKlSu0tbVx/fp1QqEQm5ubsjBOtCZarVYAVldXGR0dTaqxBeBwOKisrMTpdAIQiUTo7++n\npaWFzs5OhoeHCYVCrKys4Pf70ev1KIqC3W7HZDIRCoVkwV+yPTmPx0NhYSE6nU5G5L788ktaW1vp\n7OyUqa3FxUWCwaCsZXC73SiKQjAYxO/3s7GxkTTeAGazmeLiYlwul3yjY2NjXLx4kc7OTqlg1tbW\nWFhYQFEUzGYzTqcTp9Mp5dHa2lrSK+azsrLwer2YzWYAwuEwnZ2dXLlyhZs3bzI7OyvPdXV1VRqN\nhYWFmEwm6cQ9bQX6u6DT6fB4PLjd7gfe6MWLF+no6KC7u5tAIMDGxoY0ADQaDW63m4KCAuLxOKFQ\niFAo9NTP/LlU/qI/0mq1sri4KEOynZ2d9Pf3Mz09/UCbjSh00Wq1RCIRaYn5fL6kCnOVSkVOTg55\neXlotVp6e3u5du0a169fp6enh4GBAelZxmIxrFarzDWrVCoCgQBLS0v4fL6kW+Ymkwmv14vT6SQQ\nCNDa2kp7ezvXr1+nr6+P0dFRyTsej8vwnDBy1tfXWVpaksWAyURWVhaFhYUYDAbGx8flfenq6uLe\nvXsyrCyUp8vlwmazodVqpee0srJCIBBIiWARRu6NGzek0r979y737t0jGo1KT8flcpGTk4PZbEZR\nFNbX11lZWZFdCsmEaO80m80yrCnu+uDgIHNzc/LMFUXB6XTidrsxGo1EIhH8fj9ra2tJF+YiSpiT\nk4NWq6Wvr0/e9Vu3bjE0NCTfaDwex2Kx4Ha7cTqdaLVa6XAEg8GkK1CDwUBeXh42m41gMEhraytt\nbW1cu3aNe/fuMTEx8UAfvEgPiUiLOPNkRogEbDYbOTk56PV6xsfH5X0RhmJih5OIQorai2AwyMLC\nQkra/DQaDS6XS86luH79ujzz3t5eRkZGiEQikpeQRW63G61WK6MazyJa8VwqfxFONhqNjI2N8Q//\n8A/cuHGD5eXl3/qzIpfrdDpRq9Vsbm7KHGmyL4pKpcLpdJKVlYVKpeLy5cv83d/9nexPfRharZbc\n3FxsNptMDywtLREOh5MuzA0GA7m5uVgsFlZWVviXf/kXPv74Y3lxH4bNZiM/Px+DwUAwGJQGSyp6\ncIVg0el0dHV18bd/+7dMTU2xvr7+W39WzFMQXtzq6io+n0+2bSUTwkOw2+0AnD17lv/7f/+vrDtI\nhKIoMr1hs9mIRCIsLCxIYyvZwly8UZED/4d/+Ac6OztZWVn5rT+rVqvxeDx4vV60Wi3r6+tMT08n\nLR2XCGGIPPxGhfH38J81m82UlZWRnZ1NLBZjcXFRenjJ5q7X68nOzsZsNj/wRldWVn7r3SmKQk5O\njoyIBQIBpqamHqh/SiYsFgsulwuNRkNPTw9/93d/98g3KmoyysrKKCoqQq1Ws7S0JFOJyYZGo8Hp\ndMpI269//esnvtGsrCz27NmD2+0mHA4zNTXFwsLCs+H2TL5
qiuFyuairq8Nut8vQ26NamEReqL6+\nnoMHD2I0Guno6OAnP/kJd+7cSTpvtVpNSUkJ5eXlaDQaWSj3KIUoBOLJkyeprKxkY2ODX/7yl5w9\ne/aRAvRZw263s3v3bjweD1tbW4TD4UcWvomCqJqaGl555RWys7MZGBjg3/7t32hra0s6b0VR8Hq9\n1NXVYTab5eCYR4WSRY796NGjNDY2oigKFy9e5Oc//7nslU8mzGYztbW1lJaWEovFiEQijzT8RNFR\nRUUFb7/9NiUlJczOzvLv//7vXLhwISXCPC8vj0OHDuF2u5mcnCQcDj/2zE0mEwcOHODYsWNYLBY6\nOjr40Y9+RF9fX9J5azQaqqqqqKurQ6/Xy8r9RylPvV5PUVERZ86coaamhkAgwCeffMJHH33E2tpa\n0rm7XC6am5spLi6W9+VRRogImTc0NHDmzBlyc3MZHh7mhz/8IZ2dnUnnrSgKO3bsoLm5GafTKeXL\no+SiKN49deoU+/fvR6VS0dbWxk9/+lPZQZRMWK1W9u/fT3V1NcBjpwyKiZt1dXW89957lJWVsbCw\nwEcffcTVq1efyRt97pS/SqXCbrdTUlIiw/iPU/5GoxG32y0F6OzsLF9++SUffPBBStq1xCjKvLw8\nFEX5rQlmAiqVCofDQVlZGU1NTZjNZm7fvs25c+e4cuVK0rmLopbS0lIcDoeMPjxKsIjcZ01NDfX1\n9ayvr9PR0cEvf/lLZmZmkspb9DpnZ2dTUlKCwWAgGo0+1uByOByUlpayb98+PB4Pw8PDtLS0cP78\n+aTnbxVFwWAwUFxcjMfjIRaLSYPr4fsi0gM1NTUcOnSIUChET08Pn332Gd3d3UnlDV8VtlZVVWEy\nmRgbG3us8rfb7RQXF9PQ0EB5eTnz8/O0tbXx61//OmVv1Ov1UlJSgkajeaLBlZeXR21tLQcPHsRq\ntXL37l0uXrxIW1tbSrjb7Xaqq6vJyclhbW2NSCTyyBC+1WqltLSUvXv3Ul9fz+rqKp2dnXz22WdM\nT08nnbeiKBQUFFBdXY3ZbH6i8vd4PNTV1bF//348Hg9DQ0O0tramRC4mdm+VlJQAyPvyKLm4Y8cO\n9u3bR3NzMysrK9y+fZuWlhZ6e3ufCb/nSvkn9gyL9jhRLPEowZKfn09jYyNlZWXSc75y5Qrr6+tJ\nD+Em9gyr1Wqp+B8VTtZoNNTW1tLY2Ehubi49PT386le/ksUjqeh31mq1sigrHA5Lg+thLjk5ORw7\ndkxawr/85S+5cOECPp8v6YNaxH0RrUJCgT4uH1tdXc2pU6coLCxkZGSEn//857S1taUsRSRazMRY\n5MdFW+x2Oy+//DL79+/HZDLx6aef8vHHHzM0NJT0UKi4L2K4TKKR+/AbVRSFyspK3nzzTXbu3InP\n5+NnP/sZly5dSmohbiIfceaiVkU4Fw9z0ev1HDt2jNOnT+N0Orl27Rq/+MUv6OnpSfobha+iP2Kf\nw+Oic4qiUFxczDe/+U327t1LIBDgF7/4BZ999hnz8/MpGaYkWlpFrYo480fdlwMHDvDee+9RUFBA\nf38/P/7xj+UbTcV90Wq1WCwWDAaDPPNHOXRut5t33nmH48ePo1arOX/+PL/61a8YGRl5Zm/0uVP+\nQolqNBoCgYC0yhMvuNFoJC8vj8bGRo4fP47P56Orq4uWlhaGh4eTWj2cyD1xgp8IDz0sJLKzsyku\nLubw4cN4vV7a2tq4evUqbW1tLC4upiRnnnjmcN+6ffjMtVotXq+Xffv2cezYMUKhEB9//DEXL17k\nzp07BAKBlDzOxBkQid5nIvesrCyKioo4dOgQNTU19Pb2cvPmTVpaWpidnU3JfUkcpqRSqR555gAl\nJSXU19dz+PBh9Ho9H330EefPn5c1MMm+L4lnLtqeRDFTIner1Up5eTmHDh3i4MGDTE1N0dvby8WL\nFxkcHEzZmSfOgEgM3yZy93q9VFVVybTG5cuXuXjxIq2trSwsLKTkzBMNdJVK9ci9AmLi5pEjRzhw\n4ACrq6vcuHGDzz//PGVvNPHMhcH1qHsuJm6+9NJLlJaWcuvWLa5evSrfaKrkopi4qdFoHltbI6b5\niTbjs2fPcv78edk58qzO/LlS/vCgEhVjVsXlFxfAbrfT1NTEqVOnOH78OH//93/Phx9+KPu5U8U7\ncaOgEObi7yEuTUlJCSdPnuTUqVOsrKzw/e9/n76+vpTk+RO5C4Eej8flGSauoDQYDOzbt4/Tp09z\n4sQJ/vmf/5kf/OAH+P3+pzqy8g/lLc5crVZLwSL6hMV9ETO2jx8/TlZWFt///vdpaWlhcXExZbPN\nE89cGIvAA1vXAPbs2cOZM2dobm7m6tWrfP/732d+fl5OrEwFEg0XoUAfXnHrdrt59dVXOXHiBJWV\nlfziF7/g5z//ecq8T3jwzFUqlRTm4u6IN1pdXc37779PY2Mj09PT/O///b+5c+fOMyvc+n2QaHCJ\nbYjingvuJpOJU6dO8eqrr1JZWck//dM/8YMf/IDZ2dmkp7UEHjbQxbTSxJ8BQHl5OX/+539OfX09\narWan/3sZ1y8eJGZmZmUvdHEey7epfj7iDNXFIWjR4/yH/7Df6CyspKLFy/y93//90xPT8s9J88K\nz5XyF+EhYWUZjUby8/NlMZ/P5+PgwYPs27dPTvPT6XSEQqGUVZoLiMImoYgsFgs7duzg6NGjzMzM\nYDabOXjwIHv27KGmpobc3Fxu3rzJ6upqypQnfLWEQoRwdTodWVlZNDQ0sLq6yuzsLDU1Nezfv589\ne/ZQVlYm+7RTaWwBcoCJSFcYDAa8Xi9HjhxhdHSUcDhMc3MzjY2NNDQ0kJOTI/uIk91K+TBMJhM2\nm00KRZFbbG5uZmZmhsLCQnnXq6qqcDgchEIhFhYWUlL1LKDVauVaW7hfj5CTk8P+/fvR6XTyjR48\neJCmpiYKCgqA+0OXUhGpSITJZMLtdsv10waDgbKyMg4fPvzAGxXnbrPZGB0dZXFx8ZGdI8mCqIMS\nA800Gg1Wq5W6ujoWFhbkG21ububQoUOUlZWh0WhYX19nYWEhZcYW3HcasrOzsVgsUr7k5eXR3Nws\n2+QE7wMHDmC325mZmcHv97O8vJzSN+pwOGR7opiuuHPnTpqamuQbbW5u5ujRo+zYsQO9Xs/6+nrS\njK3nSvmLnJxKpSIajaLRaMjNzaW5uRmr1cr4+DivvfYahw8fJj8/X/auitx6Ki+KCA8J70ev11NR\nUcHJkycZHBzE4XDI3Gdubq7st93c3Ezp4xRKUwzqETPa9+3bB9wfk3zkyBFOnz5Nfn4+er1e9men\nolUrEUIIarVa4L4i8nq9HD9+nIGBAUKhEG+++SYNDQ3k5+cTCASYn59nc3Mz5ffFZDJhtVpRq9Vy\nklxNTQ0rKysMDQ3JXHlBQQE2m01OnUvFgJZEiNYni8Ui/93tdj/wRs+cOcNLL71EQUEBsViMpaUl\nNjc35U7zVMFkMp
GTk4PBYJD53LKyMl5++WWGh4ex2+28+eabVFZWyqI6MTQnlQaXWq3G5XKRlZX1\ngHPR0NBALBZjeHiYI0eO8Morr1BQUCDf6ObmJqurqyk/c6/Xi81mk/K9oKCAY8eOkZeXRzgc5o03\n3qChoYHCwkICgQBjY2OSfyq5u91uvF6vHAqmUqmoqqri+PHjD7xRr9eL3W6XnJeWlpISrXiulD8g\nw86BQACTyYTT6eTIkSNUVlaysLDAjh07yMrKQqvVsrGxIateU73eUYQ9Ra5frVZTXl6OxWJhfn4e\ntVotBxfF43E5hjPVKzXFgxTpCRFxaWpqori4mLm5OfLz86XHFI1G2djYSNr8+ydBeMwiUqRSqSgo\nKJCLhgKBgOyLF4WMYg97qrkLz1+lUgH3fw51dXXk5uYyPz+P1WolJycHo9EoJ7Ol2mAB5BIcq9Uq\n01pZWVkcP36c6upqFhYWZNeIWq2WUbl0eKM2m01OjIOvWtDsdjvz8/OoVCpKSkowm81yMlsq5j88\nDLVaTX5+Pvn5+TKtaDQaOXjwIGVlZfh8PnJycnC5XLIOQzgVqT5zu91OVVUVbrcbuH/mhYWFnDlz\nhqamJgKBAMXFxfKNhkIhNjY2UjKI6GGI2g9RqKgoCvX19Xg8Hubn57FYLNKYfPiNJoP7c6X8Y7EY\nwWCQ27dv8y//8i+y0EIYA8FgUFa8GgwGBgcHOXfuHOPj46mmTiQSYXl5mc8//5z5+XlZfR6JRFhf\nX8fpdGI0GmX+qL29nS+//DJluTgBMd60v7+fn/zkJ9jtdpnfCgaDrK+vs7GxIbn7fD4uXLhAf39/\nSnkDcr3wlStXCIfDMhcnhJ8QlKIH9/bt21y4cCGluVuB9fV1hoeH+fDDD2lvb5fRrnA4zPr6Onl5\neajVaoqLi1EUhZaWFm7evJlygRiJRFhcXKS9vV0WKyYa7IFAgEgkIiNKIyMjfPHFF0xMTKSUN9w/\n87GxMT755BN6e3tlXUgkEmFjY0PufYD7huWNGze4du1a0lvMHkYsFsPn89HZ2YlarcZoNALIqvmN\njQ1KS0vl6uHl5WUuXbrE4OBgSnkDbGxsMDo6yoULFxgfH0dRlAfOXCh8MU2xr69P1uOkGj6fj56e\nHqLRKA6HA3jwzLOzs4lGo5SXl6PVamlra+POnTspf6OpRvxP/SiKIj8qlSquUqniOp0u/r3vfS9+\n/vz5uN/vj//gBz+IZ2dnxw0Gw5/8/Z7W52HO4lNXVxf/7//9v8dv3LgR9/v98b/4i7+IZ2VlxdVq\ndco5C96JfNVqdVytVsdVKlX87bffjv/4xz+Oj42NxT/55JP4nj174jabLeWcxSeRbyJvj8cT/0//\n6T/Fz58/Hw8Gg/G//du/jefn56fNfVEUJa5Wq+MajeaBj0qlih86dCj+P/7H/4h3d3fHb9++HX/z\nzTfjbrc75ZwTeWu1WvnRaDTyv33rW9+K/+QnP4nPz8/H/+mf/ileUVGRFvdF3HGNRhPX6XRxnU73\nAPfKysr4f/kv/yV+6dKluN/vj//VX/1V3Ov1xrVabcq5C956vV5+dDqdvC+nT5+O/6//9b/iAwMD\n8XPnzsWPHDkSz87OTjlvcVcMBkPcbDbHjUZj3GAwSO7Z2dnxv/iLv4h/8MEH8c3Nzfh/+2//Lb5z\n58642WxOOXe1Wh3X6/Vxq9Uat9lscYvFEjcYDHGtVhtXq9Xxpqam+N/8zd/E29vb43fu3Il/61vf\nihcVFT1NDk/Ec+X5JyLRehL/LMZrirnhYq1oKgvmHsbjVgcHg0Hm5uZYW1uT08MMBkPKc7gCD4eq\nRO1CPB5/oHAosdc4XSCqzR+G2JAnprGJXuNUdlYkIh6PPzakHA6HWV1dJRKJoNPpsFgs0uNLNQTv\nR71R4IGUll6vx263P3I0d7Ih7vjD5y64iwFR0WhUDr6yWCwyNZNKiIVVD8sX8evW1paUI3q9HofD\ngcFgSA3ZBIizfnggUeLPIbFy3mg0YrPZmJubSzFzZBQxsR304Y9KpZL1IzabDZPJhKIombD/04Z4\nAInK8lFCPx2RKDC3A+eHL+/DrVzphsc9NlEgJf5MuoXkHsdHDNIR3NMNTzpL0Y+eyD2dzv1x3MWO\n+cTC3XRBopJ/FPR6PRaLRRrl6Xbej+MtihfNZrM0stKF+6P0TSLEkjBhlD9uhsGzQupN0iRCFIuU\nlZVhMBhk9XYqK3F/X1itVqqrq2WXwvLystw2l+7weDzU1tZit9sJBoPMzMykZLb5HwrRmlNWVoZa\nrWZlZUWuDk13uN1u9u3bR3Z2NpFIhJmZmbSoVfhdUKlUlJaWsmvXLoxGo+xe2A73xWazsXv3bgoL\nC4nFYkxPTzM5OZnSdtbfFwUFBezZsweHw8H6+jqDg4Pb4r7o9XqqqqqoqKhArVYzPz/P0NBQyuss\nfh9kZ2fT1NREbm4uoVCI4eHhpI5PfqGUv9jFXVRUJCvPU7Fa84+ByWSitLQUt9tNPB6XYyLTxcp9\nHBRFwe12U1paKudyi6KudIZIrRQUFODxeFCpVLLaP9XV278Loq+7vLwcu93O1taWbE9MZ4gBKLm5\nuRQUFKDT6R5IX6QzhAdaVFREVlYW8Xhc7mlP5/siJv+5XC68Xq9cmby6upr290WMzs3Pzyc7OxtF\nUQgEArINOl2hUqkwmUy4XC6KioqwWCxEo1HZ6pcsvFBhf0VR5GpQeHLONJ0gcller5esrCyWlpbS\nMgT9MESo3+Fw/NaZbwfuYmWyy+UCtsd9EVMLrVYrhYWFqNXqbXPmQhFlZWWRnZ39yPx6OkJ0hFgs\nFjmQZmlp6ZEjdNMNovrf4XBIxyIWi6VFO+vvgqgHycrKwmKxPMA9naHRaLDZbHL2gphUmOz78sIo\n/8bGRv7sz/6Muro6AoEAc3Nz+Hy+VNP6nTAYDLzxxhu8/vrrOJ1O5ufn6e/vT4sCqN+FXbt28fbb\nb/Pyyy8TDocZGhqiv78/7b04RVF47bXXePPNNykrK2N2dpbe3l7GxsZSTe2JUBSF/Px83nvvPV57\n7TVUKhW3bt3i0qVLLC0tpZreYyGMxMOHD/Pee+/R2NiIz+ejvb2dGzdupJreE6FSqTCbzXzjG9/g\n9ddfx+Vy0d3dzcWLFxkZGUk1vcdCRFl2797NN77xDU6cOMH6+jrnz5/n7NmzaR02F9Py3njjDd5+\n+23Kysq4c+cOH3/8cUrWgv8hEBHc999/n5MnT6IoCmfPnuXs2bNMTU0llctzr/x1Oh12u53Gxkbe\nf/99PB4Py8vLXL9+PS36WJ8E4b2dPn2akydPYjQaGRgYoKWlhfn5+VTTeyzUarXMOX/jG9+gqKiI\njY0NOjs76erqkvO50xHCe3v55Zd56623MJvN3Llzh88++4zh4eFU03ssNeHggwAAIABJREFUxGri\n+vp63nnnHfbs2UM0GqW7u5uWlpa0NRZFVMvj8XDs2DG+/e1vo9VqGR4e5sK
FC/T09KSa4mOhKAou\nl4uKigrOnDnDyZMnUalU3L59m1/96ldMTk6mmuIjIcbkFhYWcujQId5//31cLhcLCwt88cUXXLly\nJW1D/mIddElJCa+88gpnzpxha2uLL7/8kh//+Mdpa6CLCX9FRUUcPHiQN954g4qKClZXV2lpaeHs\n2bNJf6PPvfJ3Op0cPXqUAwcOkJubi1qtZnh4mP/3//5fWnsViqJQVVXF6dOnqaqqwmg0EgqFuHbt\nGv/4j/+Y1srfbDZz8uRJTp8+LUOJPp+PTz/9lM8//zxtBYuY2Pb222/T2NiITqcjEAjQ39/PBx98\nkPRd5r8vhOd84sQJ3njjDQoLC9na2mJ5eZkbN25w+fLltGlPfBiiCPdb3/qWXGe6sbHBxMQEbW1t\naWugi1B/c3Mz3/rWt9i1a5ccsDQ4OEhXV1faes9iquL777/P6dOnMZvNLC8vMzY2xr1795icnEzL\nnLkYY71nzx6++93vUldXJ1ugh4eHmZiYSOnCqidBtDi/9tprvPXWW+Tk5ODz+RgbG2NsbCwle06e\na+XvcDioqKjg+PHj1NbWolaruXXrFl988QW3bt1Ki17QR8FgMOB0Oqmvr+f48eN4PB4WFxflmsqR\nkZG0zcc5HA5KS0s5ePAgu3fvRqvV0t/fT1tbGz09PWlrtGi1WrKzs6mtreXo0aMUFhayurrKzZs3\nuXz5MmNjY2lrtNhsNnJzc9m7dy8NDQ0YjUZGRkbo7Oyku7s7bau2FUUhLy+PmpoaDhw4QHFxMcFg\nUK6RHRsbS+lCnCfBbDbj9XppaGhg3759mEwmpqamuHbtmlyXnK7Izc2ltraWffv2UV5eTjQa5fbt\n21y6dImRkZG0NVp0Oh3l5eXs2bOHhoYGOfq8paWFa9eusbq6mpZGC9xfC15ZWUl9fT1lZWVsbW3R\n19fH+fPnGRwcTMl+ludW+Yvivrq6Oo4fP05BQQGbm5v85je/4cMPP0xbgQj3BcuOHTvYu3cvBw8e\nJBQKcf36df7xH/+Rrq6utFX8cL+tr76+nr1791JaWkogEODy5cv88Ic/TIsxyo+DaOsTWxNVKhUj\nIyP89Kc/5dKlSyldnvS74Ha7aWpqora2lsLCQjY2Nujq6uJ//s//mRZjcR8HRVEoLy9n3759lJWV\nYbFYWF1d5ZNPPuGXv/xlWoxofRwcDgdNTU3U1NSQk5PD+vo6AwMD/PCHP+Tu3buppvdElJSUyL33\nop1SvNFnvUb2T4HJZKKhoYE9e/bgdDoJBoOMj4/z4Ycf0tramta1RHl5eRw/fpzS0lI55ryzs5N/\n/dd/TVlU7rlU/qJS+8SJE7z66qtsbm7yySef0NLSwo0bN5iZmUnbi6JSqSgsLOS9995j586d9PX1\ncenSJVpbW7l9+3baehSJiyveeust1Go1LS0tXLx4kY6ODqamptLWc1YUBZvNxssvv0xdXZ304K5c\nuUJHRwd+vz9tDS5FUSgtLeXtt9/GarXS0dHBpUuXuHbtGhMTE2nrOYuCsz179rBv3z78fj+tra1c\nuXKF69evs7i4mLZenGhfPXLkCG63m56eHi5fvkxraytDQ0Npe+Zwn3tJSQmNjY2sr69z4cIFLl++\nTEdHR8pXbD8JYtdDdXU12dnZ9PX10d7eztWrV7l7927KN/j9LrhcLurr6wkEArS0tEjZkso21udO\n+SuKInsoGxsbqa6u5ubNm5w7d44PPvhAbpRLR6hUKmw2m7TMQ6EQbW1tnD17luvXr6d1j7noud21\naxd79+6lu7ubL774gp/97GdpP0jJaDSSk5PD7t27cblcdHV18cknn/D555+zubmZtkpI7JQvLCyk\nsbGRvr4+Wltb+fDDDxkcHEzb8C0glyWVlZWRl5dHd3c3586d46OPPkr5muonQTgWTqeTnTt3EgqF\naG9v56OPPuLmzZtpsWnzcVCpVGg0GrKzsykoKKC7u5vLly/z61//msXFxbQuxFWpVBgMBnJycojF\nYrS1tfHZZ59x9epVQqFQ2rb3idZbsQ66v7+fjo4Ozp07x8TERErP/LlS/uKgvV4v+/fvJz8/n8XF\nRc6ePUtbWxtra2tp+zBFBe7OnTuprq7GYDBw48YNfvSjHzE8PEwgEEhby1alUuF0Otm1axd5eXls\nbm5y/vx5zp07h8/nS2tBDvdzoOXl5XLT47/+67/S398v1/emK3Q6HQUFBbjdblnx/OGHHzI+Pp7W\nglxRFCwWC3l5eeh0OiYnJ/nwww/p6upiY2Mjrc9cRImsViuhUIjOzk7+7d/+jenp6bRYmfwkaLVa\nuRJ8enqaTz/9lKtXr+L3+9PW4xfQ6XTo9XpWV1dZWFjgpz/9qVSe6SrT4Su5HgqFGBsb4+LFi1y+\nfJn5+fmUn/lzpfzhq/CQzWaTSrO7u5upqam09eDgq7C50WgkEAjQ2tpKa2srvb29aa+E4Ku55qOj\no3z66adcv36dsbGxbfE4NRoN0WiU9vZ2ZmZmZHol3e+LGIozPT3N2bNnZdg5nT0hAZVKhVqt5ubN\nm/T19dHd3c3MzExa7GH/XVCpVCwuLnLu3Dnu3r0rC7bS/czhPvf+/n5isRhdXV1MTU0RiUTS/swB\nNjc3aWtrk/UVIsqyHbhPTU3x2Wef0dnZyeTkZFoMgEqv7RNf4Y/6aQpBXllZSVNTE2tra8zMzHDr\n1i3W1tbS+pIktrFkZ2ezurrK2NgYo6Ojac0b7nPPzs6mqqqKaDTK8vKybLtJZ+7C4CoqKiI3N5e1\ntTX8fj8+ny/tBbkwFAsKCmR73NLSUlrnmwWE9+x2uwkGg2xubqb9SFYBlUqFw+FAq9USDoflBsLt\nANFuJrbNbQcjUUCn06HVaiX3VHvNvy9EmkhRlFRM8Xuifn/uPP9YLMb8/DydnZ0oisLm5ua2sGzj\n8TiRSISJiQmWl5eJx+NsbGykPW/4am3vyMgIKpVKPs505y74LS0tSb7bYV+CQCQSYXFxEY1GI4Xi\ndkA8HicYDMpq/u0wSlYgHo+zubmJRnNfdG4X5Qn3ZWMwGEzaytinCXHO6bYt8ffB1tZWWm41TS82\nX+GPvpnC0tLpdHIfdbqHcAVEGFfkt8RikHSHCEFrNBq5RnZjY2NbWOei6lytVkvPIp3rKxIhpoZp\nNBo0Gg2hUGhbnDk8yB0gHA6nPAz6+0JwV6vVbG1tbQvZAl9FugT/xF3z6Q7BXeyq2C5nDg+eu1jb\nm6xv/Uf/ZgrxJyn/h3ewJ8MLfRrWtOAu9lI/aRf008Sfyj3xcgvrNhnhLfG9/lTuibyftDv8aeJp\n3ZdE7skSLE+Le6I3lIzc7dPyeBN5J2vB1tM8c0gub/H9nsbXEdguZy6+jkCynIr/n/uLFfZP3GCW\n+NkuEAJ8O/EWPBMFeDK4P63vEYvF5EPfLmcOv71lMFncn9b32Y7vU2A78k7kux3vynZMV0DyzvoP\nxXPn+WeQQQYZZJBBBk/W76pksc
gggwwyyCCDDNIDGeWfQQYZZJBBBi8YMso/gwwyyCCDDF4wZJR/\nBhlkkEEGGbxgyCj/DDLIIIMMMnjBkFH+GWSQQQYZZPCCIaP8M8gggwwy2PZIt/G5vy9SNfo3o/wz\nyCCDDJ4yMooouXh4YuR2ghgVnWzu6qR+t98f//VZfWGVSoXVaqWiogKLxUIwGJTjgMVo3XS8QGJj\nodfrpbq6mlgsRiQSeWBWd7pCrVZjNpupra0lLy+Pzc1NIP0frNjFXVhYSHNzMxqN5re4pys0Gg0G\ng4H6+nqqq6sJhUJyXa7gnY78xbbC/Px8Dh06hMvlYm1t7YHfT1dotVqsViu1tbW89NJL22pRlNls\nJi8vj+bmZsrKyuS6XDFBMl3PXaPR4HK52LVrFy+//DJGo5FgMLgtVv1arVYKCgrYv38/u3fvluPc\nn+J+jr950m++cMpfo9FQUFDA6dOnMZvNzMzMyMUoWq1WKv90G9+pVqvR6/U0Nzfzta99jYWFBZaW\nln5rlwGk3zhJvV5PTk4O7733HpWVlYyMjMgLLoyWdOMsztVkMnHw4EG+973vEQgEGB8fTzuuD0Mo\nUIfDwbe//W3+7M/+jLGxMVZWVqTBmK5QqVQ4nU727t3LX/3VX+Fyuejt7f2tPRfp9jMQZ56Xl8fX\nv/51vve97zE/P8/s7CyhUCitlZGiKOTm5lJXV8df/uVfsnfvXoaHh9nc3JTritORuzjzHTt28Prr\nr/Of//N/lptRg8EgkUgkrRYXJb47RVEoKChg3759fOc73+HMmTPMzs7i9/uf5ir0Jyr/dJUCT/Wm\nKYqC2+3G6/Xy0ksvsXfvXioqKtjY2GBiYoKNjQ1CoRCKojA8PMzNmzcZGhrC5/M9TRp/FG+1Wk1e\nXh7V1dUcPXqU3bt3U1xczNDQENPT06ysrMjd3O3t7fT19TE7O5vy7W6KomCxWMjJyeGll17i/2Pv\nzJ7jsK70/ut9QaMbvQGNtRs7QBAkQZAENy0UZckybdnWlCf2TKacykv+mVTlJQ+pSlWSmoxrJpnx\n2BkvkmiKlilxBwECJPYd6AbQjd73vfPAuteguEi2sFHGV8UqlUgAB7fvvefcc77znTfeeIOjR4+i\nUCiYnp4mEAgQiUQol8usrq5y584dAoHAU6+8/bTdbrfj8Xh45513OHv2LMePH2dhYYHZ2Vk2NjZI\nJpOUSiWGh4d5/PjxgZliqNVqsdvtnDlzhvfee4+BgQHsdjvj4+MsLi7i9XrJ5/NsbW1x//59Njc3\nyWaz+202ACaTCYfDwfe+9z0uX77MwMAAwWCQsbExlpeXpSOdmZlhdHSUYrF4IEbqKpVKLBYLR48e\n5YMPPuDs2bN0d3czOjrKxMQEs7OzRCIR4vE4Y2Nj+Hy+AxMM6HQ6qqur+c53vsP3v/99BgYGqFQq\njI6O8vjxY+bm5kgkEni9Xqampshms3tq98sGAxmNRtxuN3/7t3/LpUuXOHbsGNPT0zx8+JCxsTFW\nV1eJRCIsLS2xvr6+Z4GAyAq+6AEpHhVvvfUWP/3pT+nv78disTA6OsrIyAgPHjxga2uLQCDA6uoq\n6XT6z13zv6zBPl+ESIvX1dVx6tQp3n//fc6dO4fBYJCp81gsRiaTQaFQ8PjxY6qrq8nlcnIU8H4e\nUrVajcfj4eLFi/zkJz+hoaEBrVZLV1cXqVSKUChEsVgkm81SXV2NWq0mmUwSj8f39WJUKBSYzWZ6\nenr41re+xQ9+8AP0ej0KhYK+vj62trYIhUJUKhVmZmZQKBSMjIwwOzt7IOa7O51Ojh8/zve+9z2O\nHz+OXq/H5XJx7NgxvF4viUSCUqlEdXU1ANPT04TD4X19aSgUCnQ6HU1NTZw9e5a//uu/pqqqCpVK\nhcvlwu/3s7y8TD6fZ319HaVSycOHD5mfn993Z6RQKLBYLLS2tnLp0iXefvttueZtbW3Mzc2xsbFB\nNpulrq6OcrnM8vIyW1tb+75XlEolDoeDo0eP8t577+HxeNBqtQwNDeHxeGhrayMcDhOJRNDpdIyN\njbG2trbrJYGvMghHr9dTV1fH4OAg3/rWt+S9aLPZaGxsxO12E4vFmJ+fR6PRsLy8TCAQ2HW7vwqq\nq6tpbm7m/PnznDhxAoPBQG9vLw6HA7vdzvLyMsFgEJvNhtFoxOfz7eqI9OdN2YRnAxe1Wk1NTQ0d\nHR28/vrrmEwmVCoVZ86cwWw2YzAYCAQCeL1ezGYza2trbGxs7Piaf+PT/mJm+LFjx/jOd77DkSNH\ncDqdqFQq6fwrlYqsm1utVjo7O1lZWZGvpP260JVKJXq9njfeeIOLFy/S1dWF0WhEoVBQKBQolUoo\nFAo0Gg16vR6Px0N1dTXT09Mkk0ny+fy+2C1sb25u5q233mJgYAC32/3UmpfLZTQaDQaDAafTycDA\nANFolJmZmT0ZB/wiiKj9xIkTXLx4kWPHjuFwOFCpVOTzefL5PEqlEoPBgMlkor29nfb2dhYXF9na\n2trXOeMKhYKamhpOnz7N6dOnOXr0KBqNhkqlQjabpVgsotFoqKmpweVyceLECVQqFaOjo89MCNwP\n291uN2fOnOH06dO0tLTINRcXttFoxOFw0NnZydDQED6fj5WVlV21+6twOzQaDUeOHOHUqVOcOnVK\nBoSpVIpMJoNSqZTO9Pjx49TW1jIxMUEqldo1J/pV+DQKhQKbzUZfXx9DQ0McPXoUlUpFoVAgGo1S\nKBTQ6/XU19fT1dXF+fPnSSQSTE9P76rz/2IZ80VoaWmhv7+f8+fPU19fj1KpJJFIEIvFKJfLWK1W\nPB4Px44do6uri8XFRYLB4K7ZrlQq0Wq1T42Tf97PMhgMuN1uTp06xfnz59FoNBSLRUKhEMlkEpVK\nRVNTEz09PZw+fRq1Ws3o6OifY9JL0/7f+Je/2Wymt7eXs2fP0t/fj9lsJp1Ok06nWV9fZ3l5mVQq\nJcl0jY2NtLa20tfXx8LCAhMTE8Tj8T2/1EVN6MiRI5w+fZqOjg5UKhWJRIJUKsXi4iKbm5skEgmc\nTictLS3U1dXR29vL0aNHyWazLC4u7tmM9+3QarW0tbXJi7yhoYFisUgqlSIYDLK4uEgkEiGfz+N2\nu2lsbKS7u5sjR47Q0dHB8vKyLAns9auupqaGlpYWTp8+zcDAAGazmVwuRzKZZGlpidXVVeLxONXV\n1Xg8Hmpra9Hr9XR1dbG+vs7a2tq+ZYvq6+vp6+vj7NmzdHZ2ApDJZEgkEszPz+P3+4nH47jdbpqa\nmmhoaGBpaQm3283m5qZc872GwWDA5XJx8uRJXnvtNVwuF+VymXQ6jc/nY2FhgVgshkqlwuPx4HK5\naG1tpb29nbGxMYLBIPl8flfW/Mu+Z01NDY2NjQwNDTEwMIDBYKBQKJDL5Zifn8fr9RKLxaitraWl\npYWmpibS6TQul4t4PE4ikdjVvfKi761SqbDZbPT29nLp0iXa29sByOVybG1tMT09TSgUIpf
L4fF4\nqK+vx2q1cufOHcxmM8lkUj6cdsPml417NhgMmM1muV/sdjvlcplsNovX62VhYYFoNIrJZMLj8VBT\nU4PVaqWuro6FhYVdK11sD6BfNJbdZDLR0tIiS7hqtZpisUg8HpfliXA4jMfjoaGhQb78jUajDOB3\nCt9o569QKHA6nXz3u9/lzTffpLu7m1wuRygUYm1tjZs3b3Lt2jUCgQBWq5X333+fixcvSifq9XrZ\n2Nggk8nsaSpaROy9vb386Ec/4ty5czQ3N1MoFAgGg6yurvLxxx8zMjLC5uYm586d48qVK5jNZhwO\nB6dOnSIcDuPz+eSluJfOyGAwcPbsWd59913OnTuHTqcjnU4TDAYZHx/no48+Ynl5mVwuxw9/+EP5\nunC73QwMDJBIJEgkErt2ubwICoWChoYG3n77bS5fvszg4CCVSoVkMkkkEuHWrVt8/vnnrK+v09HR\nwQ9+8AOqq6uprq6mu7ub1dVVmRLdj2Cxu7ubt956i8uXL9Pc3EypVCIej7O8vMy1a9eYmppiY2OD\n7373u1RXV+N0OqmtraWvr49CoSCJRnu95mazmRMnTnDp0iXee+89FAoFuVyORCLB5OQkn3zyCZub\nm5jNZq5cuYLVasXpdNLU1ERTUxOpVGpf6v8KhQKXy8Xp06d55513OHPmDEqlkkwmQyQSYWRkhIcP\nH7K5ucmpU6ew2+04nU6ZBQgEAqRSqV0Jcr/sc9RoNLS0tHDu3Dk++OAD7HY7pVKJVCrF6uoqt2/f\nZnV1lXw+zzvvvENtba28XxwOB8Vicdc4Li97sCgUCtmtdfnyZd5//320Wi2FQoF0Os3c3Bx37tzB\n7/fT0tJCbW0tLpcLh8OBzWbDZDLtWrmlUql8abZVZFo++OAD+vv7USqV8m6cmJhgbm6OQCCATqej\nubkZu92Ow+HAbDZTLpcPnf9XgVKp5K233uKNN97gwoUL5PN5fvnLXzIxMcHy8jLJZJLNzU18Pp88\nrL/97W+prq5mcHCQrq4uCoUCarWa4eFhHj16RCaTkczd3URjYyNvvPEGr732GidOnGBjY4M7d+7w\n4MEDmRpaW1sjGAySyWQYHh6mVCpRW1tLd3c3586dk4S7iYkJvF4v2Wx2T9Lpg4ODXLx4kYsXL9LY\n2Mjk5CSjo6OMjo6SzWZl4CWIfdevX0ej0dDV1UVnZyfvvPMOBoOBkZERZmZmJBlzt7MAZrOZc+fO\nceHCBS5cuIDRaOTOnTv84Q9/YGVlhWw2y/r6OhsbG6TTaVKplCxbnD59Wq65Tqdjfn5eBl7FYnHX\n17y9vZ2zZ89y/vx5+vv7yWaz/PrXv+b69etks1ni8Ther5dIJEI6nebOnTuy1uvxePje975HTU0N\n1dXV8rMRZaXdXHOlUsmZM2cYGhri3LlztLS0sLq6yrVr1xgfHyeXyxEIBPD5fKTTaWw2m7zAW1pa\nGBwcJJPJUF1dLTNhxWJxT9a8traWwcFBhoaGOHPmDHa7nfv37/Pxxx+ztbVFJpPB6/USDAZJp9No\nNBrp+Ovr6/n2t7+NyWRieHiYYDBIMpmUdu928NXd3c3AwADnzp3j6NGjFItF/u3f/o2bN2+SyWQI\nh8N4vV6SyaRsdXW5XHg8Hvr6+rhy5Qp37txhcXFRdpDshd1Go5H+/n5OnTols1tra2tcvXqVhYUF\nmc31+/1kMhmSySTt7e04nU7sdrv0Aw8ePJA8r7161Iks7tDQEIODg9jtdu7evcv169eJxWJEIhGZ\nJcpmszQ1NdHW1iZ5F++++y5jY2MsLCyQyWRkdvHr2P6NdP6i9jI0NMSVK1dobm7m9u3b/OY3v+Hu\n3bssLi4+8zUiBXfmzBlUKhWNjY2oVCpJ/Jufn6dQKJDP578Skebr2O5yubhy5QonT57E4XDwr//6\nr/z2t7/l+vXrxOPxZ75G2PXBBx/IMkculyOVSrG1tcXW1hb5fH5XX0YiW9HX18cPf/hD2tvbCYVC\nXL16ld/85jfcuHHjucSX8fFxPB4PpVKJxsZGNBoN8XicaDSK1+sll8vtOndBBEqvv/46b7/9Nj09\nPdy/f59r167x85//nPn5+WdsF/vgtddew2g00tvbSzabxe/3E4vF2NraeqY9bbds93g8fP/736e/\nvx+TycTdu3f58MMP+cd//MfnliDm5uZoamoim83icrk4f/484XBYthql02mKxeKu7nN4stcHBga4\ncuUK/f39BAIB7ty5w7/+67/y+eefP+NQUqkUCwsLDAwMoFar6e7ulqneSCRCKBSiXC4/xbbeLTid\nTi5fvsyFCxfo6enh0aNHfPrpp/zsZz9jc3PzGaficrlYX18nm83S0tLC0NCQZKKnUinS6fSe2A3Q\n1dXFlStXOH36NFqtlqmpKT788EN+/vOfPxP0WSwW/H6/DNZbW1s5e/YsPp8Pv98vu172olxUVVXF\n0NCQ7MDxer3cu3ePn//854yPj5PJZKRDFPwX0Q1VVVXFsWPHCAQCzM7OkslkyGazL+0m2EmI9vKL\nFy/idrtZXFzkD3/4A3//939PPB5/xvZgMEgulwOe7J2zZ88SCoXko2IngpZvpPMXYhsej4f29nbU\najU+n49PPvnkha1klUrlKRKdVqtFq9WiVCoplUpSJGU3I1zxcx0OB729vTQ2NsqX/c2bN6XAzBdR\nKpVkel+tVmMwGFCr1fL/53K5PXnFabVampubOXbsGDqdjkePHvHP//zPzM3NvfBni3SteDUbjUZZ\nv8tkMntCuBTCT319fXR3d6PT6bh37x7/9E//9EJm8/b6nmjd0Wg0kk+yF33G21tBz58/j9lsZmVl\nhf/7f/8vn3/++Qu5ByqVCrVaLQmlNTU1sryRTqdlxmK3Gd1qtZqenh4GBgaoqqrid7/7Hf/lv/wX\n2Zb1xZ8v1lmv18vPzGQykU6nSSQSsia6269QQZQ7d+4c3d3dlEolfvWrX/HLX/6SYDD43LNmNBqx\n2+3odDp0Oh0OhwO1Wk00GpVrvhevZ4VCQWdnJ5cuXaK6uprh4WH+63/9rzx+/Fimw7fbIEjQVqsV\nlUpFdXU1DoeDQqEgScV79Xquqqri7NmzDA4OYjQauXHjBn//93/P4uKirOMLOyqViuwWqaqqQqPR\n4HA4qKqqIpFIPPXq3wvbW1pa+N73vofdbicQCPCzn/2MGzduEA6Hn8lUVSoVWWIRd2JDQwMqlUq2\nFO/EXvlGOn+32y3TQoVCgXv37jE8PEwgEHjp14nNoFAoKJfLRKNRmWpJp9O7XoMWXQlDQ0O4XC42\nNzcZHh5mcnLypZoD5XL5qY1cKBTwer08fPgQv9+/JxeLqGX19PRgMBiYnJzk3r17knzzPHyR1FMs\nFkkkEiwsLLC4uChrubt9QJubmzlx4oR8DY+NjTE2NsbKysoLf+52NrUgp4lXhci07Paa6/V6mR6s\nra1leXmZe/fuMTEx8dLWIOH8RW1d8EO8Xu9Ta7
6bqK6uxuVyUV9fj0ajYWpqitHRUaanp1/oTESw\nIjoYYrEY6+vr+Hw+IpGIPJ+77fgFX6Kuro5kMsnU1BRjY2MsLS290HatVovJZEKj0cgS0vr6OqFQ\nSKrp7faaC9VHm82G1WplaWmJkZERxsfHCQQCzw1UlUolVVVVsssoFAqxtLREMBgklUrt2atfrVZj\nNBpxOp1UKhUePnzIyMgIk5OT8nHzRWg0GsxmMzqdjnw+z9raGmtra0+RFfci2FIqlZhMJurr6/H5\nfNIfLS0tydf9F6HX6zEajSiVSmKxGNPT02xtbe3oQ+4b6fyPHj3Kf/pP/4mOjg42Nzf53//7f/P5\n55+/9Gu+uJjZbJa1tTU++ugjJicnpdLVbkKr1fLWW2/xne98B7PZzGeffcZ//+//nfn5+a/8PQqF\nAolEgsePH/Pxxx+/8GDsNJqamvjhD39If38/mUyGq1evcvXq1Rfa5wuxAAAgAElEQVRmK56HVCrF\n+vo69+/fZ2xsbM9Y8319fbz99ts4nU6Wl5f5p3/6JyYmJl76s7f39BaLRba2tpibm+P+/fskk8k9\nS4MODAzQ09ODUqnkwYMHfPTRR1/a+77d+Yse7qmpKRYWFvasw8Jut3P06FFsNhvhcJirV68yMjLy\nUkKTUqlEo9GgVqupVCqsrKzw+PFjFhYWdrWF64s2OBwOamtr0Wg0zM/P86tf/YqVlZWX2i4yckql\nkmg0yoMHD5icnNz1nvnt0Gq1WK1WjEYjuVyOO3fucPPmTdnW9zyIjJxOp0OhULCwsMCNGze+rvjM\nn2W7wWBApVLh8/n41a9+xaNHj156v6hUKpkFTaVS3L17l5GREcmv2AuIDJda/cTVjoyM8Nvf/lau\n38tsF4qza2trfPjhh08po+4EvnHOX6TOTSYTCoWCVCrF5ubmC1+fAna7ncHBQfr7+2UKVKTk9kK5\nTTgTg8GATqeTfZ8rKytfKkxx4sQJvv3tb9Pa2koul8Pv9xONRvdUo0CtVkuxCsEY9vl8Lz1kJpOJ\n999/nytXrmAwGPD5fJJwtpfdFSK1ViqV2NraYmJi4kvVHTs6OvgP/+E/cOzYMXK5HKurq3i93j3t\nUBD7RaVSkU6nWVpaYnp6+ksDrtOnT/Pee+9ht9tZWVmR7XJ72VqpUqnQarWS1Pfo0SNWV1df+jVW\nq5WLFy/S29tLqVRidXWV+fl5WS/dKyiVSsrlMrFYjIWFBR48eEA4HH7hvxddJCdPnsRms7G8vMz0\n9DR+v39fWkKFYt/4+Dhzc3MvvN9EpkW0QKtUKtmqm0wm99T2SuXJnASv10u5XJbqlM+DcLg1NTW4\n3W7MZjPBYJCNjQ22trb2tJ1VZBcCgQDDw8MMDw8zPT39wjtdo9Gg0+lkJ44IXNbX13dcoOgb5fyF\nA1Wr1eh0OiqVimTyf9nCWa1WXnvtNXp7eymXy6yvr0syzl61EAnBHiH6EI/HCQQCXyq/2tXVxXe+\n8x0aGhpIJBLMzMywvr6+Zw5U1J6Fgl82myUQCMj654tgNBo5d+4cQ0NDaLVaNjY2mJyclCIdewUx\n16FUKhGLxaQO/osgNBjeffddamtrSSQSzM3Nsby8vKc9/mK/wJP+7M3NTalr/jwIIavu7m5OnjyJ\nXq8nGAzy8OHDPXs5b7ddCMoI4tvLAi6NRoPVaqWvr4/GxkbK5TJra2uyb3uvUSqVSCaTrK+vSwnc\n50Gk2oVSoVKplMTFvZYPF44ok8kQDAZZWlrC6/U+N0BXKBRSVMnlcmGz2VAoFEQiEdbW1vYkE/pF\n2wuFAuFwmEQiwezs7HMDLnEmbDYbLpeL2tpaqb0g5qHsdcBVqVRIJBIsLy+zuLjI6urqM3tWlBCr\nq6upr6/H5XJhNptRKBRkMhnZQbKT+EY5f/ijNKRgWot+1Jc5IdFrPDg4SFtbG6VSiVu3bnH16lWi\n0eiebhbRyymmO30ZS1+hUEglK5PJxMrKCr/+9a8ZHx/f88h8+5rn8/mXZkxECtdsNlNdXY1SqWRq\naoqPP/4Yv9+/Z3aLmr2wPZ/Py1aaF/17oahoNBplZC40/vey11zoCYg9nsvl5ESz50HUTUXdXKFQ\nEAgEuHfvHsFgcM/shj/u82KxKHvjX9TVIch9NTU10u5yuYzP55N6EXuJ7XtddNV8cb+Ie6iqqorm\n5mYcDof8OyFeFIlE9tRu+GMAUCqVyGQyz2RNtnNZmpqaOHbsGBaLRX5tMpmUokr7AZVKRaVSeSYj\nK2wWe+XkyZP09vaiVCrlORECafuRbREkdBF8bT+j2x+tQt66ublZflb5fF7yFHYS3xjnL14SBoMB\njUYjLxeVSoXD4cBischIUfxbsegul4vu7m4cDgcGgwF40uJis9lkrWa3bddoNBiNxqc6DIxGI3V1\ndYTDYZm5EPUj0Z7mcrlobm5Gq9XK7+NwOKTE6F7YLupxwhnCk/WrqakhHA5LNr9er5e2O51Oent7\nsVgscpObTCacTic+n29PbBcHzmQyYbFYZMeCw+GQIjOVSgWdTkdVVZWsfzY0NNDe3i7FSIRamsVi\nYWNjY89s1+v1OJ1OuYZVVVXU1NTIkg88+RyqqqqAJ9mtpqYmydYWZYO6ujpSqdRz20h3GuKSrqmp\nob29HbPZTLFYxGw2U1VVJZ2o+N1EZqOlpYUjR47IbAEgA4K9quGKdLLovdbr9RgMBiwWC/F4XLaO\nORwO6urqKJVKWCwWOjs7qa2tJZPJoNVqZUraaDTu2Qt6uzhOXV0dOp0Ok8kk2e/lchmtVovb7aa6\nuppSqURnZyc9PT1otVoymYys/ZvNZiKRyJ4KWdlsNjweDzabjUKhQHV1tXSiYs07OjqoVCoYDAb6\n+vqoq6uTLZTwRHxMr9fvaXZOZCHq6uqw2+1YLBaMRqO8z0X3RX19PaVSiba2Ntra2tBqtVJnQaF4\nMr3wq4gI/Sn4Rjl/jUaDyWRCr9fL15xGo6GtrU3W/SuVirw4Bani+PHjnDx5Eo1GI/WsT506RTQa\nZXR0dFdTReIyFIdKEFQqlQpWq5Wuri6mpqZkhC4ckdB/vnjxIh0dHWSzWTQaDbW1tbzzzjsEAoE/\nVw/6T4JYS3ERC75CQ0MDjY2NJJNJcrmcnHxWVVWFUqnk5MmTnD9/HovFQjabRafT0d3dzbe+9S0p\n1LHbEKUKu91OXV0dWq2WqqoqOc9cvBJMJhNNTU2oVCosFgtnz57lxIkTZLNZ8vk8VVVVnD59ms3N\nTebm5vbk9S+Clra2NhoaGuQF2NLS8lT7VUNDA01NTVQqFTweD4ODg9TX15NKpeTAn9dff51MJrNn\nGReVSkVDQwNDQ0MyuG1qamJ9fZ1kMimDg+PHj2MymSiXyxw9epS+vj7giWSxwWCgra2N3t5eKUO7\n2xCBbl9fHydOnMBsNmO1WmlubpaEP4VCQXt7O+fOnSObzWIymejo6KC+vp5YLEZNTQ0mk4nu7m45\n3GqvY
...(remainder of base64-encoded PNG image data omitted; no caption or axis information is recoverable from the embedded image)...\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Current loss: -10.885268\n",
+ "Current MNIST score: 6.105103\n",
+ "Training step: 2000\n",
+ "Time since start: 9.603889 m\n",
+ "Steps per min: 208.248977\n"
+ ]
+ },
+ {
+ "data": {
+        "image/png": "...(base64-encoded PNG image data omitted; no caption or axis information is recoverable from the embedded image)...",
Rx99hGAwCD6fj9raWrz0\n0kuw2WxsqBjplEL2xmILj8eDXq9HTU0NLl68iL6+PpjN5h2H0DidTszPz2NiYgJutxvxeJzNwyA9\nQkZNp9OhuroaDoeDjWMupkQiEUxMTOAHP/gBzpw5g4sXL8JsNkOtVoPP5yMajWJ5eZn10UejUdbd\ndOnSJZw6dWrHDiPKgBAAvL29HXfu3Nl1QNyjJJfLMQKxn//85/B6vfj617+O5uZmlJWVsTNL7deb\nm5v44IMPMDAwgHQ6ndfCqNfr8dxzz6GhoSEPf0HrBpBHKFWMDpcn3viTAdoJJEc80BMTExCJRLBY\nLAiFQvD5fFheXkYgEGC80alUCq2trYxljfq5gfs0seFw+ECkFpTOjkajiMVi8Hg8mJubYz30ZrOZ\n9V7TQJRCAMtOIBz6d26tnYiLqC4mFouRTCZhMBjyWo6KIUS3TCQ/SqUSqVQKS0tL+Oijj/Dpp5/C\nbrczlj4ej4fjx4+zQRdmsxkGg4FdFC4AzGAwoLa2FsC99rT91hR3E5qwdufOHYhEImxsbDB0MRkP\nupRU+yTDT1491X6p/kv16IaGBpSUlMBut2NiYoJhOIol2WyWRR2U3amqqoLdbsfc3NyuKGAaQlJf\nX88GJ3GF6/BQliAUChUtFU3nhNgIudwFXExCJpNhJEY8Ho+RqRC4dafarkwmg81mg81mg16vRzQa\nPTTpD1foDBC+iFjmdpNcLoexsTGMjY2Bx7tH1ywWi3HhwgXWIkdZLxrBGwqFik72QzqrtrYWFy5c\nwJtvvon29va8VDe963A4jOHhYfT39+POnTvw+XysxYwGR1FXCbW71dTUMIKp/WRC9yKpVIq1E3s8\nHvB4PDQ3N8NqtUIoFGJtbQ1jY2MYHBzE2NgYG/gjFouxtbXF5kYolUoolcq8NnCiW6+vr8eJEycY\nHwcR/+xXqI14YGAAXq8XRqORTUHldnYRP8U777yDDz74gDHQkl6sqKhgAF8q6RFnAZDfWXKQ9u2d\n5Ghn2R5c/m6vH6RNefHFF9HZ2ZlnFGOxGKampvDjH/8YAwMDmJ+fx61bt3Djxg1cvnwZIyMjmJub\nw9raGqvXqVQqhiSlSU3kvY2NjcFutzMH4KBCyG6n04np6Wk2/Y0iysLeaALFFEY/5FQA+e1TXCeB\n0LDEiDY3N1dUJSMSidDY2Aiz2Qy9Xo+RkRG88847uHLlChwOxwP0vPF4HOPj4xgdHcXCwgJMJhO0\nWm2eA0NZC71ej0QicSBA0W5CdbtEIgG/34+1tTVsbW1BKBQik8kgFovB5/OxMsnq6ipCoRDz1Mlg\nEYodyM8SlJaWwm63M4a/o6rlEjvh1tYWXC4X5ufnd0Vc63Q6vPjii7h06RK6u7sZ30RhliOTySAY\nDGJsbAw3b948dOaC3ifRTyuVSkilUkSjUTYuemhoCNeuXcPVq1dx+fJlfPrppxgcHMTi4iJzZKnt\nkX4nrZkbUZGjxeWaeBwkmUxiamoKOp0Op06dyrvDQqEQGo2GkWUV814S++BLL72Er3/964wQjAwh\nZUXn5+fx7rvv4oMPPsCVK1fgdrshFovR0tKCjo4O1NfXM9IpMmLEJ5FOp7G6usroio9CiP6chuHc\nvXsX169fx69//WvMzMww1r719XW2nrGxMaysrCCTybBx19yWPALlEfh6fX0dS0tLhz4zmUwG8/Pz\nuHbtGj766CN8+OGH+OCDD/Duu+/ivffew4cffsjasAvpfCk7Ozs7C7vdznhpuPo+m83C5/Ph8uXL\nWFlZ2Utp8f897B+f+MifjCRFDNwxoUSX2tLSAo/Hg1QqxfpFaUoYcahzJ8rtJtRPvB+QyG7rpb5i\nUoKLi4uYnJxkteOqqiqGoKeIjjxvckoymQyi0SjrASdQCGUAyDGSSqUM0V5sIQeEiCkcDgdu3LjB\nKE0Lhcosa2tr8Hg8kEgkWFlZQVNTE0wmE9RqNSQSCZvQ5fF4sLq6CofDURRqSzIQxAwXi8Wg1+vR\n0NDACF2452l1dZVxMdCgImottVqtjC+AEOsqlYpNk3xUKecwQpFKaWkp0uk0Q+lzOcOJMKenpweX\nLl1Ce3s7Y1gkR4bOCClF+kP7cFgh8OL6+jprF6X/JT4HoiqmfyMjRWlmUuy5XI5lxyhtLhQKodVq\n0dramsfKGQgEDuWgF0tofKvT6UQoFGJseTweD0ajER0dHdDr9RCJREWtn6vVapw8eRI9PT2oq6vL\no9QG7ne+UO3abrczfJBGo0FTUxP4fD6cTidKS0sZdS6Px2OYBXqn165dw+joaNHLFwQTrLItAAAg\nAElEQVSYttvtLJOhVqsRi8UY9qqQp39paQmBQAA+nw8rKyuw2+3QaDQwm804fvw4K00SjXFvby/c\nbjcuX758aEd9fX0dDocj707R+X+UI00OSDQaxdraGnp7e9Hd3Z3H4MotOR7GBpH8xhh/GhfKraUp\nFAp0dHSgoaEBExMTuH79OlwuFzY3N2GxWFD1fyQeFOWLRCLU1NQwQ1r4PVRPLNYh39raQjAYRCgU\nwsjICAQCAWpqatDT04OXXnoJBoMBIpGIMc+lUikolUpoNBrWa7u4uAij0Yj6+no2jYyUCx0aKgFw\nB2cUS8jRIEa61dVVNpDnYRKNRhGPx7G0tISBgQH87u/+Lvr6+lhKmnj0T548Cb/fj2g0WjRe662t\nLTZVjfqNiXWLIpxoNIrt7W2Ew2Gk02k2S2F7exvz8/MQi8VIpVKoqKhglK0EIqKMwKNkt+6OvQhF\n/tPT05BKpdDpdODxeMwRo/TmSy+9hK9+9auMCZLbEbC9vZ3HWkjYFkrz7tR2ul8hQCQh9KlH3+v1\nYmVlBRUVFWzMskajQTZ7bxiR0WhEZWUlpFIp0uk0VlZWsL6+ztKhZEQlEgm7vwaDgTmVRNX8uEgs\nFsPq6ioblAUAcrmcjSgWi8X7Ysh7lJhMJrzyyivo6upi47VJKNrkZsDW19cZP0pVVRULmPr7+5FK\npdDQ0MBIxeRyOZuDYDabEY1GMT4+fiRMhZSpKy0tRXl5Oerq6iAQCDA4OMiehSubm5sIh8MIh8MY\nHR3F//zP/8Bms6G3txcWi4XhXai82NzcjLGxsX21gD9KuBnZ/UokEmGYhkgkktcxwjX+xVjvb4Tx\n39zcxM9+9jPEYjG88cYbqK6uZi0sBMii0bdmsxk+n49F/TqdjkUgmUyGKUDu5lLqfWVlhbV5FPsZ\nKIVJPcbEXkYEPR6Phw2fkUqlbExqIBCAXC6HxWLByy+/jM7OzjywSzabRTAYxK1btw49tngn4fP5\nqKioYIqa6+nu5bmpn9hoNO7YahcKheB0OouOhuaCdSiND9zP7hDmIxKJIBQKMdDa+vo6RkZGEIlE\ncPXqVZw9exanT59GY2MjlEoleDxeHpfAUQkpcGote/HFF5kjyW1Vq6mpgVgsRi6XQygUQjgcxvj4\nOMLhMFpbW9mIZXp26lculiL
PZrNYWVnB1atXkU6n0dDQwBju6I9QKITBYGBdEsSzQKWxRCKB27dv\nY2lpCUKhEK2trTh58iTLFgFgREJETnRUaf+DOmyBQADT09OMj6Tw9xVbEokE7HY7Y0vkMtNxjV91\ndTUuXLgAPp+P8fFxlJSUYGVlBe+99x58Ph/8fj9GR0dhsVhQV1eHZ555BqdPnwaPd59JVSaTPQCY\nLbZYrVbGbGq1WpFMJjEwMMC4GHYSCgyJDrowsqcA6iiGKx1UaB0UrD777LN5jJzFlN8I47+9vY1b\nt24hGo1Co9Fgc3MTDQ0NTIFzeerp4AD3kcROpxNra2tIJBIsouCmbCltGQgE4Pf7j4SRi9sOR94e\neeYLCwuYmZnB+Pg4G+dK6dtEIgGlUolQKIRTp04xpUcXkbAPMzMzRWndKhQ+nw+DwcAIOQqpTh8m\nVMZQKpUwm807Gn9q+zkMgctOQk5jPB5/AHlLWAkihopEIohEIigpKWHjOolLn4B+pFi4EfRRKXYS\nQshbrVa89tpriMfjDPBGTIYmk4k9q8/nw9TUFK5fv45wOMyYA2mdVCIiZ6FYbXOhUAiTk5OsR1+p\nVLIpiRKJBEKhEJWVlXkpe2KkXF1dxcLCAq5cuYKpqSmWtbHZbKisrASQP3SFWD+PwvjrdDo2ayCd\nTrN6816CAepiKFwXrf2gEyx3k3g8jpmZGTQ2NqKpqYllpLj3UyKRwGKx4NSpUwzUSFMLV1ZWWBSa\nzWah1+uxvLyMmpoaZvyFQiFUKhWjyT5KZ9dkMqG9vR01NTUwm80AwM55PB7f8R3Q+ZVIJFAqlXl7\nTLgH4mh4XIw/cG/d0WgUgUAgbxAdANYKXYwS7hNv/Em2trZgt9vx/e9/HwsLC/jmN7+JiooK1k9P\nG0gpQ+B+uxy1xvF4PDZkoXBSHrWDHAXxDAnVaJ955hmcP38e29vbcLlcuHXrFhwOB6txUWTK5/Ox\nvb2NqqoqXLp0CbW1tYzshYQyGAeZ0b4X4fF4bGIZMaFJJJI9pYwFAgGqq6vR1NSEioqKPLwFZUMS\niQSCweCRpHDJqSPAErf3XSAQQKfT4dKlS5ibm8PU1BTi8TgbWEOZCIVCAYPBkGfsLRYLGhsbEQ6H\nH9oud9hzRLXyjY0NyGQylJWVMf5+nU7HMhHhcBilpaVwOBz49a9/DZ/PxxQIlzyFnEouMOwwQhEm\ncH+WBQAYDAaYTKa8dk4yJlKpFAqFgjEMrq6uYnh4mM1Apy6AioqKvF51Km3RvhxFeevZZ5/Ft771\nLUgkEiwsLOBHP/oRJicn4fF4HvnzNTU1uHTpUl7UT2ukKZbFHNcaDodx586dPM586tWn5+Hx7rFb\ntrW1QaPRoLm5GT/60Y8wNDSEWCz2AGUyZRS5nUVSqTQPZ3UUksvloFarUVlZifLycjZtlDpFBgcH\n4fP5dmxx5fF4OHXqFN58880HOCYI9FjMwVDFko6ODly4cCHvvAD3HJmmpiYsLS0dutf/N8b4k6GY\nn5/HlStXIBKJWHsNHXoCAXLZl4grmowpFzRHBohGkpKCOSpkq0AgQHNzM3p6elBZWYlkMolEIsFA\nf+R8UOqTIp/W1lYcP34cFotlx1Gw3FarYgpXCdCetra24rXXXsPly5d3PZw8Hg9WqxX19fU4efIk\n+vr62Cx6EsroUIvMUURytD9erxd2u51NYCOjJRaLYbFYWMcBlWEEAgEMBgNsNhsaGhqg0+kYViCX\ny0EoFObR1x6VUNmEzgah+Om8xmIxRKNRJBIJNimQqIhpJHRhPZjbIrpXpkKucCMrqlFS66ZGo2Fs\ncYQt4GaJCOTH5/PZDAlyCIRCIQO22mw2GI3GB6IfbsdIMQ0RAQttNhtOnjzJyofpdBqNjY2YnZ3F\n5OQk1tbW8roSqBzX2dmJCxcu5GVZADA2QQBFPytEfzw8PAyBQACLxcKGm2m1WsYqSr3jpB+Joppw\nIYQd0uv1OHbsGCsR0TNyQaJHYfxpP0kPAGBzXHp7e5HNZlFeXs5mDFB5QyQSwWq1orW1FZcuXcLx\n48cfMKRbW1vw+/0Ih8OPRXcIARGNRiOamppQWVn5wIwLAAyYfFj5jTH+XBkZGcH4+DgSiQQkEgm6\nu7t3BWFR5ErIejrI3Do8KVhS6kdRzyUj2tjYiM7OTha1lZeX5w1DAe4xhD333HN44YUXWD8rEeZw\n20co+qaugKPwcAsv/fnz52E2m+F2u3c0/qSY29ra8MYbb+C5555DdXV13me4e78XpOxhJJfLYWlp\nCcPDw6irq8urr1GGRS6XszGuyWQS5eXl0Gq1eOGFF3D8+HFotVoIBIK8UaR7iT4PA/ijn6P3G4/H\nWZmLW3+lFDihy3t7e1FXVwebzQalUpnHmEd/+Hw+ZDLZvqOinQw/IbTr6+tRXV0NnU7HyhC7PRNw\nHyhoNBrR0tICv9+PkpIStLa2orGxEVqtlinGwlbLYqfQqRxC8xR4PB6qqqrwrW99i52df/zHf4TX\n681bv0ajwblz5/DXf/3XsNlseTgi2gMqGxRbn1A78fDwMObn5yGXy9Hd3Y2vfe1r6OrqYiU2bpmL\nno+L+wDAGCxfffVV1NfX5+kZ+sxROrpUOlpZWYFarWYZn87OTlRXV6O2thb/+7//i3fffZc5jSqV\nCs8//zz+/M//HGVlZQ/Q7ZLjvLq6Cp/P97kbf3KyrFYrzp07h7q6ugc6zygDzc08HkYeS+NfDODI\n9vY2Ll++DJfLBa1Wi56eHpw7d46ha0lB0f9y672UoqSIiurN7e3tCAaDbAxlMQ8MIUQXFxcxPz+P\nzs5OyOVy2Gw2fO1rX0NPTw/W1tbY+NPOzk5YrVZG/ci9kJQh2NraQiQSYV0CUqm0KKxtXOEy4JHC\nttls+Mu//Et8+ctfZpPqXC4Xurq6YDab2Weqq6vzCGe46w+FQlhaWkImk4HZbGbv4yjEbrfj6tWr\naGpqYrS55AQKhUKYzWacPn2adVycP38eEokEFRUV0Ol0rC2OyHFCodCRZSsKJZfLYXFxEW+//TYu\nXryI1tZWFvnTZEdqY9VqtWx4DlFZA2BdDJRpEQqFqKurY62NexHKqhXeXS5vu8PhQDwex/z8PEON\nUzaCMB/crEBVVRVUKhUaGhpw7NgxpNNpaDQa2Gw25rQQHz99T2VlJZqbmxEIBIrm7JLTd+XKFfzD\nP/wDXn75ZbS0tIDH40Gr1eLYsWP4zne+g9/+7d9mZTka+GS1WtkZIUkmk6zdmDgkuBmLYjm6BGol\n2t6hoSGEw2HYbDbU1dXhpZdeYi2uwL0Om7feegsnT55ENBplbXQSiQQ2mw1V/0cLTI4WAVtpyudu\ns0CKIf39/djc3MRXvvIV9PT0wGKxsI6W5uZmKBQKHDt2jA3sosi/MANHOob4IGi2yn5pfospQqEQ\nFRUVePXVV9Hd3Y3a2lrU1dU98DluFroY+/xYGn8yxoWkNfuRXC6HyclJTE5OgsfjYWFh
AVtbW6iq\nqoLFYoFKpWIAJG5tkrzZWCzGkKAEpNre3oZCodj3QIi9ytbWFqanp1FbW4uGhgbGbtbX14fu7m4E\ng0GGlN5pJgB5s9ROFY/H4XQ6sbCwwC5FMYW+z+VyweVywWKxQCAQQKPR4JVXXmFKf3BwELOzs7hw\n4QLq6uoeGKtMex4MBtm4So/Hg5mZGaysrDAO7t3ksJeW0rW3b9+GRqOBRqNhKWmKXAuJZujP5uYm\nMpkMIpEI4yNYWFjIA+scdQZgdXUVn3zyCWpra9HU1MTeNaGxqZxBOAb6LjJq3HNO7GoEDtur0H7s\ndHeJPIkG5FBpgWr+KpUKtbW1bHKb0WiERqOBWq1m66XsDznn5GyRMiRDlM1m87JkxRAKBsbGxhCJ\nRCCTySASiVBRUYGysjKUlZWhqqoqbxreTtkHui9ra2sYGBhALBZDMpk8EGvoXoUY6IB77YYOhwMK\nhSKvO6eyspKBW8+cOYMTJ04wpwHAA0NwSMcEg0F4PB4Eg8GHTr4shszOziIYDDLgHrFyknNusVhw\n4sSJXX+ezgwNN6PM5NraWtHBxHsVujMWiwW9vb34yle+gs7Ozh25Zij74Xa74ff7ixLEPZbGnyaM\n0QsDDgeOyuVyGB8fx9raGpvAdu7cOZw9exYnTpzI8/q2t7cZMnpubg7T09OYnZ3F/Pw8cwgeFvXv\ndOH3KkQTqVQq8cILL0Cj0bByALcLYTfDv7m5iWg0CrfbzdZ+9+5dBpw6iuEhiUQC//mf/4l4PI4/\n+ZM/gVqtZv8mEAigUCjQ29uL1tZW1s9cuPatrS2kUil88skneP/99+H1ehEKhRCNRpkjs9sFLYyY\nDnpOotEo3nvvPQgEArS3t+eNM92thkyAIZ/Ph+HhYQwODuL27dtMIe4FRcwtm5CB2+8zxGIxLCws\nMGAklYsUCgXLDBVGP7Tndrsdd+/exejoKObn51lbFK1/r0K/s7CGT5PgAoEAI04qHKlMJFuEo3jm\nmWdw7NgxBurjtq1SJi6RSDBa3PX1dfb3jz76CIODg3vOWOxHYrEY7HY7/uVf/gULCwv4q7/6K4Y8\nJ8enEMdAQmsPBAIYHR3FT37yEzY/IRwOIx6PF32E9W5CJFc//OEP4XQ68Wd/9mcwmUxM71KHVOFa\nSDcmk0nMzc1hZGQE169fZ3iHo+ZViEajeOedd7C5uYna2lpYLBZGDf4wob0nXpGxsTFcv36djYMO\nh8MP3fujyghQhuj555/HG2+8gYaGBkil0h3Xv729jbt37+Ljjz/G4OAglpeXD/39j6Xxp4skFArz\nQEeHeQHUtsLj8eD3+yGTyRhhDtUOU6kUIpEI/H4/ZmZmMD8/D4fDwXimi8WpvJsQd/vk5CQ+/PBD\nnDt3Dq2trcwB4ALitra2kEgk4PP54PF42AjWaDQKr9eL5eVlLC0tMcaro4osNjY2MD09zYBZ58+f\nR3d3N4D7OAbq5+bK1tYWYrEYhoaG4HQ6kUwm0d/fj9u3byMcDiOVSh3pXu/0HE6nExMTExgbG0Nz\nczNMJtMDAC2KjGmeQSQSQSAQwOzsLObm5jA3N5fXOvgw4QImgfvjTff7nqgOSKOmiWyIO4CIOCTW\n1tZY/3Y4HIbL5cLc3ByjmKVoer81aHLWuXV/4F4tmAiJuDz+3OekfSB+i2QyiYWFBVgslry6PheD\nk06n2TNTFiAWi2F8fBw+n+9Izg7dudnZWfD5fDaquKenJ+85SCjd7vV6sbS0hMXFRUbMNDo6ypy1\nYrVV7uc5yIBLpVLYbDacPXuWcf9TeplYF9PpNEs1RyIR1jI6OTmJ8fFxVuY66mfY3NyE2+3G7du3\n8ZOf/AQvvPACTpw4kUfsBtw/K+FwmOlun8/HIn673Y7JyUksLi6yEuluwj3PxXo+uvMWiwVNTU04\ne/YsOjo6GOsg3SP67kgkArfbjf7+fty4cQNut/s3t+ZPiHbygujSH1bokiWTSTgcDmi1WlbzyeVy\nWFtbw8LCAqampjA7O8uIdg5auz3IYclms1hcXMS//du/YXt7G62trTv+3s3NTXi9XvT39+Pq1au4\nefMmSyOur6/nEe0c5aWkS3bt2jUMDAzgb/7mb/JGK+8klAp2u934p3/6J3zyySeMvvUw4LfDCEXx\nTqcTly9fZhP/uLK5uQm/34/Lly/j6tWrGBgYQCQSYaQyBwEmkiLgpsz3e95IWRPnellZWR7AldKd\ni4uLuHHjBm7fvo3x8XF4vd4dKVIPKoXvj8CD1EpJMyp2+zliZpufn2fDkihVzi0jkGIknUAGi9uO\ne1RC6xkfH8ff//3fIx6PM+Nf+DlCkw8MDOAXv/gF3n33XUbV/XnUlgvXR477P//zP2N7exv19fWM\nKyEajcJut8PhcCAYDLJe+uXlZTgcDtjtdjZb4rN+lvHxcUxOTkIsFqOzszOP8Y7rJC4tLbGZBaOj\no8yJOUiG7bClOa5Qhqi1tRW/93u/h66uLmi12ryplJTdJbt0/fp1XLlyBUNDQ0UDhz62xp9e0lEc\nLlLy6XQa/5+993pu+7zSxx90ohC9E+y9SmwWZVXTsmUrduxNdrKZeCa7uUhm9mJ3r/Zi7/b3L+zs\nxU42F5v5TjKTeOKi2JblJlmVokSZTewkAIIgCRKNIDoI4HfBOa8+oChXkPxQ4TPjiWNB0sHL9z39\nPGdqaopFF9FoFOFwGKFQCOFw+HtFy8WQlWqjAwMDqKioQGdnJ8rKyliNORKJ4PLly7hz5w4WFhbg\n9XqxurrK6nukXPbzUdLP7P3334fX62VUnCdOnIDRaIRUKoXT6cTExAQePnzIOPJHRkaK5mAV4/v6\nfD58+eWXaGxsxPHjxxnDIu0wp6Uzy8vLCAQCBcr8+9wVMtw/RLkoFArWyEoz19y/w+fz4aOPPsLg\n4CDGx8dZWeXbZii+L7jNp1Sr/y6/j1L3XGVHxp/+HQD7O/ZyKmQnyFn84IMPsL6+DqvVyqYnIpEI\nAoEAotEoQqEQi/z5Yvi5SCaTWF1dxfvvv4+FhQVG/xyLxbCxsYGNjQ3mVBIpDo2QHtR3oXdz5coV\nhEIh1NTUQCaTMariUCiETCaDQCCAxcVFrKyssM2D3/eOFPN7Ug+J0+nE1atXWXa7tLQUKpWK8dDQ\nJM/ExATefvttzM7OFvWO89b471SKxUQ6ncb6+jrW19cxMTHBaooHYTSfJl8gEMDw8DC7CJQqpTTi\n5cuXcePGDfYIDxqksO/fv4+vvvoKJpMJx48fRzweh8PhQElJCUZHRzEwMIAvvviiqJvXivnzCofD\nGBsbw8jICOrr6yESibC6uorJyUl89NFHuH379remL/42oCwON/L/riDWL0qv09gQpfofPXqEDz/8\nEA8ePMDS0lJR5P624J7VdzmzXC7HS/IVAjm7Dx8+xOjoKCorK9k2UGqE29zc3Jd0+A8B9Qndu3cP\nw8PDAB5nirhycx2ugx6LIzmGhoawsLCApqY
myGQyhMNhLC0tsbJPMXQ5OZvF/BmSrlxeXsbdu3ch\nlUoRj8dhMBhgtVoLAr1AIID79+/jzp07Rc9q7S3/6PeEQCB44qT38gHt7DznC2jtZHd3N8xmM6sv\nr62twePxIBgM8i6SAB6PfZWWlrLOeaIipgU9fJQbeEw0U19fj4qKChaB0iRCOBwuutw/pEkUAHQ6\nHWw2G7q6utDd3Y3nnnsOMpkMfr8fN27cwMDAAKamplimYr9RzJQpHyEQCNiIItFy70cJopjYLXVO\n/85X/UhTFcS9QKWvg+ij+D4gMi2aJtLpdKirq8OxY8fYeu2pqSlMTU1hcnLy+3ynr7XvvDb+fP/h\n7TWo6bGsrAwKhYKl3DY3N4safe4l+Ko4ngaqJxNtKZfkiY8KRSDY5mgnR9HhcKC5uRlisRihUIhN\ne2xubv7glaVHOAKfsLPBko/v8+vAXZ/N5YWoq6tDPB5nDbnBYPD7Tq8cPuMvFArzh+0HuZfRDY1E\ncbupD9v57AX28sz3I1otxt9B0RqVDUiZ0NgqRXH7WQ8/whEOAnsxkrfXeoBKdEQtT+up4/E4a9ym\nXq7vIcPhM/4CgSC/l4fNnevn/sNX7KRNJUfgm+pvB8VYtZscO4mUDoMx2nnuwDfLvt9nzo18uJwH\nwLasu8nN93MHHitFAjkwfMdud/0w6BigUL8Ah+edAoVlC+BwyU7Gn0YWiRuD27fwPfu6vta+87Lh\nby8Nv1QqZcssaF746/4+vtQriRCCCDiSySQjdDlo2b4OpMRpeUg+n2eNXE+70Hw5cwDsUUql0ic8\ncD7I97RpB8oAcOvQRIPLd1DfBd0XOvPDcNdJx0gkEnbefO1v4YLLrcJlUeTTXX8auDqGjCURTvFZ\nbqAwY0fTbXu9PZZQXL5XHoMUCq1grayshNFo/MZpAvoBkEe8lxSWTwMpFI1GA6vVypjPvo3sByUz\n8FgZlpSUsG1VNM/6daAHfFByA48Vilwuh16vf4JN7GkPkw/KhrgDSHbaOnhYlKFEIoFCoYDRaERp\naWlBFM1n0LkrlUro9XrGZsl3uYHH+pHWC3M3m/JdfgqMaDfEYbovAoGArUHnnvl+gJeRfzFByqS8\nvBxNTU0oKyuDyWSCUqnE0NAQWwDyTSRCB+EB06Wuq6tDTU0NKioq2Orbjz/+GOPj49+YuTioB0AU\ns+Xl5aipqUFlZSXUajWWlpbw6aefIhAIHBin9jdBINhe2GK329HU1AS73Q6NRoM7d+7gwYMHRR1T\nLDaI97y6uhqtra3Q6/VIJpO4efMm3G43NjY2DlrEp0Imk0GpVKKjowO1tbUwGAyYm5srYH3kI0jH\nWCwWtLe3M0fx/v37mJmZQTAY3Fe2yu8CyhDV1dWhsbERVqsVm5ubGB4exsrKCkKh0EGL+FRQYy5t\njHQ4HJibm8PMzAxjPOUrBAIBW1JVU1MDpVKJmZkZxh66H3r7mTf+IpEIJSUl6O7uxi9/+Us0NDTA\nZrNBLBbj97//PW7duoVsNvu1j/OgDKhQKIRSqcTZs2dx8eJFdHd3Q6vVIpFIYGVlBU6n8wcR5Owl\nhEIhrFYrXnzxRbz44ovo6+uDTCbDzZs3MTExwahevw4Hce7kjVdUVODMmTP4yU9+gubmZiiVSvzX\nf/0XO3M+ds5TlqWiogKvvPIKfvnLX0Kr1WJ5eZkRtvDZ+CsUCtjtdvz93/89XnnlFRiNRnzwwQdY\nWlr6ViW6g4RCoUBzczN+85vfoK2tDSqVCv/93//NJnT4bPxlMhlOnjyJX/ziF6irq8PCwgJ++9vf\nIp1OIxwOA+BnFC0UCtnq3gsXLqCrqwvvvvsu3nnnHUSjUV7fF4FAgPLycly8eBEXLlyAwWDAn//8\nZ1y7do2tht5r2Z95419aWor29na2KlGv10MsFrPNVHwegdLr9aitrUVrayvb7ywUCtn+dr5GoCKR\nCBqNBpWVleju7kZVVRXkcjny+e398sFgkLdeuUgkgkwmQ0NDA5577jm2gjifz7N5/71eYPJ9IRQK\noVarcfz4cbS2tkKn07HeFr/fz2vDLxAIYLFY0NPTg+rqauh0OohEIsTjcfh8Pl4rcpFIBJvNhtra\nWjgcDmg0GmSzWWxsbPCWi4MglUpZWau8vBwqlYqt1KbFVHyVnVL9NTU1cDgckEqlSKVSBeuI+Qiq\n8xuNRhw7dgxmsxn5fJ6tLd+v+/LM1/yVSiVaWlpY+pbqtqlUikWffPXKiSKXNljRrgOSfb92xn9X\nCIVCaDQaNnNus9lY8xY5LgdBNvNtIBaLoVAoUFVVhfb2dkZNnMvl2PIYvt4XkUgEtVqNtrY2NDQ0\noLS0FBKJhLG4RaPRgxZxV5AytFqt6OnpKdjml0gkDoyc6NuA6vzl5eWor6+HxWKBUqkEgIJV4HyF\nXC6H3W6H3W6HxWKBXC5HJpNhDjpfDSiwvWpYp9PB4XDAYrFAIpEgkUggFArt2SKzYoDKREajEc3N\nzaz3LBqNsjM/Mv5FgEwmQ1lZGSwWCxQKBVsCQkse+HpBAECj0aCiogJGo5EpQ6IV5XM0QcbfbDbD\nbDZDpVKxJhy+z5yLRCIoFAqYzWaUlZVBLpezUsDODmi+QSgUQqVSsQiUtvrREhw+GyGBQACr1Yre\n3l5YLBaIxWJ2379NT85BQiwWo7GxEW1tbczhEgqFbH0xHx10glqtRnNzMxwOBxQKBaRSKbLZLEKh\nEG97cghqtRo2mw16vR4qlQoSiYTx+/M1Owdsv1NqxqV+IgBYX19HMBg8avgrFmQyGaxWK/R6PduU\nlM1msbm5yfvLrdPpUFtby5r8SIkTzz9fIRaL4XA4UFFRwR4ljd/w2XgCgEqlQiagaZEAACAASURB\nVHV1NYvgxGIxG9fis9MCbGeKbDYbTCZTgcPF9zlz2k1ADIXcSRYuWREfQd395eXlqKioKFisRGNb\nfJVdJBJBr9ejo6MDZWVlEIvFbD7+aRsY+QBuX05HRwcMBgMkEgmA7V0FfNePcrmcNXDTZEU+n0c8\nHt/XDNczH/lLpVKYzWZotVqmUGiXPF/rzgS9Xo+Ghgao1Wr239Lp9L7vuv+uoAmFuro6tlqWjD+f\na3HAtsN17Ngx2Gw25izS/C2fIzgAMJvNqK6uhlqthli87dcfBrITiUQCk8kEo9EItVpdcGf47LQA\nKBjBNZvNzAjxXXZKPZvN5oKtoQD/HS4qtTQ0NODkyZMwGAxMdmre5rPsKpUKHR0daGhoYFkiACzI\n2C8805E/XXCdTse24wFg8/5UQ+cjaFSuoqKiIBKiGWhS7nwDcRLU1taitrYWEomEeepEmHOQ8/tf\nB4FAAJvNhv7+flRXV7OUOY1DcRnE+AaBQIDu7m5cvHgRJpOJcSnsZG3jG6jR780338SpU6cgk8mY\n08VnuYFt2Ts6OnDp0iW0tLRAoVCwzYx8vivA9nTCmTNn8NJLL6GysrJAPwL8pg8vKyvD8ePHcerU\nKT
Q2Nh6aTJFAIEBtbS26urrw8ssvo729nclNa373s1zBTwtSJJAhIvIHLvgcDRHBDKVCKZoAwFYd\n8zUKJdIKu93ORioB8D7tT2dOq4gtFkvBw+TzIiWiB21qakJvby80Gk2B7Hy+62KxGGazGefOnUN7\neztzDvkuN7EnNjc349KlS6isrCxgx+PrXSGdaDAY8Pzzz+P06dMwm82Qy+XsM3w8cy7pVm1tLS5e\nvIiuri7Y7Xa2y4KPVMpcIh+FQoHjx4/jhRdewHPPPYfy8vICIqVMJnMU+RcLYrEYEonkiagtHo9j\nfn4eKysrByjd00Gzt0QTyo1+QqEQFhYWeNu5LZFIIJfLUVJSUiB7NpuF3++H3+/npWIUCLa34ymV\nSiiVygKHi9Zr8rVHhBxcrVYLtVpdwKCYTCYRjUZ5WQMVCATMwbVarQXlLWrK5WOXv1AoRElJCUwm\nE6qqqlBbW1tgPLPZLBvZ4hPIgBqNRjQ0NKCzsxMNDQ1PlOb46KCLRCLWj9PX14eXX365wEGniJ9v\nDiNxb5SVlaGhoQGvvfYa+vv7odfrD1z2Z9L4U8rQbDbDbrdDJpM9sQBl5yIIvkAsFrMuf6vVeqD0\nvN8FNB/vcDhw7Ngx6HS6J+Tm06PkQqlUQqfToaamBs3NzQUUoQS+KRVg2+ibTCaUl5ejqqoKVVVV\nTziLfIygBQIB609wOBzo7u5mFMQEPvYqUBmxpaUFtbW1sFqtOHbsWAHtM8C/1DPpO7vdjs7OTpSX\nl6O6uhp1dXUoLS0t+CyfshbkrEilUnR0dKCjowMOhwNdXV1wOByQyWTss3y6L3TeCoUCFosFXV1d\nqK+vR0VFBbq7u2G329nngINb/vTMGn+JRILKykrU19cXeOXA4yZAGrHgC7hNOM899xxqamqeMKBy\nuRwGg6Hg4vMBEokEarUa7e3t6O/vh9lsLvh1ouKk0Tk+QavVoqGhAS+++CJOnDhREPUDjzMx37ST\nYL+hUCjQ0tKCvr4+9PX1obGx8YnPiEQiVkPnC6j2+eabb6KhoQF1dXXQ6/Xs17if45vccrmcsbJZ\nLJaC6HPnZ/kiO+mV5uZm/Ou//isj8+E66FxOeT4YUOBxCbG0tBQ//vGP8ctf/hJSqRQlJSW76j++\npPxpT4LJZEJ3dzf+7d/+DS0tLUyPUF8IV2b69/3EM2f8BQIB6urq0NPTg87OTnR1dRWkE4FthahU\nKnllQCk99PLLL+P5559HS0sLGhoanlhwI5VKoVKpeNPwR955Q0MD/u7v/g5tbW2MSREoXC5EpQy+\nKEXKVpw/fx6vv/46Kisr2Xw8PUT6fnwy/uRIVVVV4fXXX0d3dzesViu0Wu0TCoQUER/OnO64yWRC\nZ2cn+vv7odVqodFo2LgT977wpcmSZCkrK0NTUxM6OzvR2NgIuVwOhUJRoMBJXr40K1JJ6Pjx4zh/\n/jyqq6sZyylxnuRyuQJnhdtseVCGlJYMNTU14dSpU+jt7YVWq2XNtxTp53K5gjtCP6uDzAKoVCpY\nrVacP38e58+fh8PhYEuehEIhstksMpkMk5XkpBL1fk1y8cOCFAnUUV5dXY1Lly6hqakJlZWV7IHS\n5aZmOj51+9Os88mTJ/Hqq6/CarWyDlyuISJPeGd0ehCgy6xUKlFXV4fXXnsNlZWVUCqVjNGP+9mS\nkhLI5fIDVyzA470JRqMRzz33HC5evAipVPrENjNSNiUlJWw97kFGF9TcZ7PZ0NrayhxFMpSUcibF\nQqtxpVIpRCLRgdahS0pKYDab0dbWhs7OTnR0dAB4HO3TqBM5tmKxmC37SaVSBzbeKpPJoNVq0dLS\ngtOnT6O5uZmlbum8STYqu9BEkUajObCeC4FAwMpZZ8+excmTJxmLHxnPdDqNVCrFVm7TKFpZWRmS\nySTj9t9vuWUyGWpqatDX14cf/ehHaGhoYHqcRm8TiQQymQwUCgU7dyonLS8v7/soNzlQZrMZvb29\nOH/+PE6fPs1YQqmpL5lMYnNzk5V4ifSntrYWy8vLcLlc+yLvwbvVRQSXmvXkyZOorq5GaWkpi9jI\nG5RKpWxdKB+8c3pwVqsVFRUVcDgczMBzPdx8Ps+YoXb2MRyU3FTnr66uhtlsRmlp6RM7wSmyUCgU\nrJnuIGWnveV2u53dE2I3o73amUyGTVVQpkgulxeQuBwEpFIpdDodent7WXmFIgauciH2SsoUabVa\nRjt7EBAKhbBYLDh27BjLVnD7brLZLBKJBCKRCNLpdEFXelVV1RO16f0Cjdw+99xzeOmll3Dp0iU4\nHA6m6GlEi5YmbW1tsXpvS0sL2traDiTDSI55W1sbfvrTn6K/vx/Nzc0FsqTTaWxsbGB5eRmRSAT5\nfB5isRiVlZX4yU9+gtbW1n2XG3i8sOfSpUv48Y9/jJaWFpZJBLaJfDY3N+H1euF0OhGLxQBsO17n\nz5/HP/3TP6G8vHzf5eaWV371q1+hr6+PERBRwEByj4+Pw+VysQxAWVkZfv3rX+O1117bN934zET+\nRLZBhshqtTKjn0gkkEwmkUgkoNPpWCe6XC6HXC5HOp0+sKhCIpFAqVSioqIC7e3tsNvtTEknk0kk\nk0lsbGxAJBLBarVCIpFAJpNBo9FAo9Fgc3PzwKI5tVoNq9WK9vZ2NDQ0QKVSQSQSIZPJIBqNYmNj\nA4FAADabDWVlZWzhT3V1NVKpFPx+/4HILZVKUVlZifb2dtaAQ+m4aDSKQCAAv9+PTCbDFI9YLEZV\nVRU6OjowPT19YBERkbL09PSwGed8Po90Oo3V1VWsrq5ibW0NVqsVXV1d7MxPnDiBjY0NDA4O7nvm\nggxRc3Mzzpw5g/b2dlgsFgDbBigSibBVpuvr6+jr60N9fT1EIhFqamrw6quv4tNPP0UkEtn3dK5A\nsL169eTJk6zRTC6XY2trC6lUihmglZUVqNVqnDlzhjnnXV1dWFtbw+LiIra2tvZ1hpvOvKKiAn19\nfaiqqmI9TuRkzc7OwuVyYXl5GT09Pejr62PZmdOnT2NpaQlzc3MIh8P7OnVBvRWNjY1obGyERqOB\nSCRCOp3G5uYmPB4Ppqam4PF4sLW1hUuXLrHov6amBolEAhMTE0ilUvB4PPuqH0UiEQwGA1paWhhh\nFRn9YDCIR48eMdmbmppYmVGpVKK1tRVnzpzB7OwsJicn4fF49vSuPzPGXy6Xw2Kx4Pjx46itrYVM\nJmMPNBaLwe/3Y21tje2szufzrEmNPPaDADUftrS0sLScQCBgXrnf78f8/DxzbsRiMdsIZTKZDpTK\n0mg0oqWlBc899xxaW1shk8mQzWYRj8extLSE2dlZPHr0CKdOnYLNZmNpyK6uLmxubh6I8afyQ1tb\nG/r6+tDZ2QmLxcKWDq2urmJ8fBxjY2OIxWLQarWs1tjS0oL+/n4EAgFsbGzse+pfINimNH3ttddQ\nV1cHu90OqVTKNlSOj49jcHAQo6Oj6OvrQ3t7O4RCIfR6PS5
duoRYLIahoaED6UQXCoXo6enBSy+9\nBJ1Ox1LP8XgcHo8HX3zxBYaGhjAzMwOlUomqqioIBAI0NzdDo9HA4/Fgenp63/dxkPE/ffo06uvr\nWVYrlUohGAxiaGgIV69exfT0NKqrq9Hc3Ay1Wg2ZTIbe3l4kEgncvHmTZTT2EwKBgJWHqEyYy+UQ\niUTgdDrx4YcfYmBgAAsLC/j1r3+N48ePQyKRQKvVoru7G5OTkxgZGcH09PS+G39ifSRadhr9XFpa\nwp07d/Dee+/B7XZDoVCgqakJ5eXlUCgU0Ov1aGpqwvnz5xGPx7GysrJv+pEcLqlUyu4AjU+ura1h\nfHwcH3zwAW7fvg2/349XXnkFZ86cgUKhgEKhgEajQW9vL0pKSvDb3/4WXq93T51dfnQwPYn//C4f\nFolEuHjxIn7yk5/g3LlzaGlpgU6nQyAQwMLCAu7evYuBgQE8ePCAdejSOsiWlhYcP34cNpsNoVAI\nyWRyXy9LTU0NfvGLX+DFF19kcuRyOXg8HgwNDeGLL77A/fv3EQ6HUVdXB6VSybZZdXR04MyZM8wJ\nSKfT+6Jg6JL39/fjrbfewrFjx1BeXo7S0lKsrKxgbGwM169fx61btzAyMgKz2Yy6ujqWbbHZbDh+\n/Dief/551NTUoKSkBD6fb1/OXSgUwmQy4Wc/+xnOnTuHiooKaDQaZDIZjI6O4vbt2/j8888xNjaG\nQCCApqYmmEwmxl1A41KnTp1CX18f0uk0lpaW9lxuUog9PT34+c9/zlbHSqVSLC4u4tq1a/jyyy8x\nODgIl8sFjUaDpqYmSKVSNmVhs9nQ29uLl156CS0tLSzdu9cgVsrXX38dJ0+eZKWhXC6HW7du4erV\nq7h9+zZmZmawvr6OxsZG2O12qFQqyGQydu4nT57EK6+8ApVKhZmZGQB72yHNjUB/9KMfscwbACwu\nLuLy5cu4fv06RkZGsLq6CqlUiurqamg0Guh0OtZXUltbi/Pnz+PcuXOIx+MIBoN7TkFL+u38+fM4\ne/ZswW6Q27dv4y9/+QsePHgAl8uFcDgMm83GJqAog1daWoqmpiacO3cODQ0NbMHSXvNdqFQqOBwO\nvPLKK6iqqmJZubW1Nbz33nv47LPPMD09zZbg2Gw2tteC+lxogufUqVOorq6GXC5HNBpFMpncs3Pn\nOk4XLlxgTaypVApDQ0P4wx/+gLGxMayuriKRSDBOEYPBAJPJxJpzNRoNampq0Nvbi+PHj0Mul8Pj\n8QD4zvf9//u6Xzz0kT95Wt3d3bh06RJsNhvbv764uIgHDx7g7t27cLlcCAaD6OjoQFNTE1un2NjY\nCI/Hg4cPHyKbzcLpdLK97ZFIhLEu7cWFEYvFsFgs6O/vR2trK5tKWF5eZoZoYGCAGSGv1wu9Xg+z\n2YwTJ06gp6cHiUQCFRUVkEqlWFhYgNfrZes4U6nUnsztkiGqr6/HCy+8ALlczi662+3GrVu3cOfO\nHczNzWFjYwPHjx+Hz+eDTCaDyWSCyWRCLpdjkSg5PGtra2znQjqdZotFinn21FjZ3NzMIqJ8Po/1\n9XUMDw/jyy+/xNDQEDY2NmAwGOB2u9HQ0AC9Xg+bzcYMKG0Py+fziMViiMViiMfjiEajSKVSBUtR\niiE/3XO9Xo+6ujrGWLm1tQWPx4PPPvsMY2NjcLvdSCaTLGWrUqnYnTGbzejr60MymcTExARWV1cx\nNjaGeDyOeDyOWCzGVlwXszRAvTharZaRm1CNf2RkBJ9//jmcTierO7vdbrjdbjbBIJPJ8OKLL7Lm\nOq1WC7fbzeROJBKIRqOIxWIsUiqW7MTMRj0f+XweyWQSq6ur+OKLLzA2Noa1tTVsbW1Bp9Nhenoa\nFRUVqK6uZuWlyspKlvqNx+MQiUTsjVIKntZzFyvSo4VDcrmc8SdkMhmk02lMTU3hk08+QSAQQCwW\nQzabhdfrxeTkJKqqqmA2myEWi9Hc3Izm5mYAwMjICABgfHwci4uLjDxqp44shuzUp0JNqsD2PQ+H\nwxgcHMTg4CAikQiy2SyEQiFmZ2fR1NSErq4udteamprQ1NQEALh79y5sNhvkcjmTnVZcc5d2/VCQ\ns7ezqTmVSmFxcRFffvkl0w+kc0ZGRtDW1gbgMelVaWkpKisrcfbsWTidThiNRoRCIXbfNzY2mCP2\nQ2Q/9MZfIpFApVIx5UaNcOl0Grdv38bly5eZYsnlcnA6nfB4PKweQ3vEz5w5g6amJgQCAayvr+Pd\nd9/F3bt34ff7WVcpoRgXnJrl1Go1tFotm3/P5XJYXl7Ge++9h5GREaysrCCdTkOv12Nubo4tEKFu\nboVCwTiuNzY2MDExgStXrmBqagpLS0uIx+MFRrSYikWhUKCkpIR1Z2ezWYyPj+O9996Dz+fD5uYm\ntra2sLq6irm5ORgMBlZ3pKao48ePo6KiAi+//DLu3buHgYEBTE9Pw+v1IhQKsXJMMZU5RZRUQqF9\n9wMDAxgcHEQwGEQmk4FcLofL5YLX60VtbW0BX75CoYDNZsM//uM/4ty5c6xUMDY2Bo/Hw5gMi6XM\nqSOYCIhIsWxtbWF5eRm3b99mBiWXyyEYDGJiYgIVFRWorKx8Yly0vr4e//Ef/wG32w2n04lHjx5h\nYmICU1NTCIfDTKkUQymSQubueSDj73K5MDs7i1gsxprlVlZWsLi4iOPHjxf8OXTn+/v7UV9fj+Xl\nZSwuLmJ2dhYjIyMYHh5mjZrFiKyp6XBnkyfdF7fbzdbHUgnD7XYjEAg8MaJLd+att95Cf38/1tfX\n4Xa7MTc3h4GBAczPz7NGzWJs06OZcnqbpFsSiQQCgQBWV1cLVoOHw+Gv7ZCvra3Fb37zG/h8Pvh8\nPng8HoyNjeHmzZsIBoOIxWJFWzVOzKzc0UPqIwqFQkyvANtOQSgUwsbGxlPvamtrK6xWK06dOoXl\n5WX2XoaHhxGJRJjTW4z7QiOUBApyNjc3Cww/9elEIhFWUtl5Z2QyGaqqqvDGG2+go6MDPp8P8/Pz\nuHXrFtxuN4LBYMGf+V1x6I0/KRbyuLiz2PF4HOFwmKXzJRLJE4qBasA0f0xeYTabRXV1NQKBAGZm\nZvDo0SNsbGwULeVFxv9pVLjRaBSbm5uIxWIFo2e7zUIbjUYYDAZkMhmYTCbIZDLm5IyPj8PpdLIG\ntmKARt9kMhnbSkUypVIpRKNRFkFSiYC7XpY7ckmUtBUVFVAoFDCbzXC5XJiamsLIyAi8Xi8CgQD7\nvT8UND5Ghp8730xNoXROxC5G2YGdZyCXy9n6X5PJhIqKCjQ2NuLhw4fMCdjc3PzBMgOPDRHXgAJg\n0wk0CkdzzyUlJVCr1buOhAqFQlbystlsqKysZE6CRqPB5OQknE5nUeQGHp8jkQ3RnaAJBZJ754gc\n915xz4HOu6qqCjU1NaitrYVOp4NIJMLs7CxWV1eLKjuXJ4HeHOkZeo90H8xmM1QqVYF+IVDTqM1m\nQyQSQXV1Naqqqi
ASiSCRSFiTWjFAKWSu00LZI66OpO9Du+Upu7Gz41ylUkGlUsFms2FzcxM+nw96\nvR6ZTAbDw8OYmZkpWnDBlZ3koJFV7h2isozdbmcB0W5Qq9VQqVQwmUyorq7G+vo6gG09OzIyUjSq\ndLq/3PMlJ4x4Qui90pk3NjZCp9PteubUCFhRUQGTyYRgMAi73c4cH7/f/4PO/NAbf6on0rgT17Do\ndDoYjUZ4vV5sbW2hpKSERatkUHceODkCP/3pT/HGG28gkUjg8uXL+N3vfoeZmZmi1rsoqtipFKnj\nVqPRYH19nU0EUIZgN8VCxoEUSjqdRiAQwP/7f/8PV65cYem5YoCUyE7vXCgUQq1Ww2QyIRaLIZPJ\nsDqY2WwuoG/d+efRWFJzczPS6TSGh4fx5z//GTdu3GDGvxggRbvzrshkMuh0Omg0GsRiMQiFQjbv\nbDKZnnpfBAIB69RtampCJpPB1atXoVQqsbm5WVTjv5Otj8ovKpUKBoOBRY3ULNXa2gq9Xr+r3PT7\ndToddDod6uvr0dHRAb1eD6FQCKfTWbRsCylFriGi7BFNrZADUFJSgpqaGjQ0NLDy3W6yC4VCGAwG\n6HQ6NDQ0wGg0QqlUIhaLYXl5uWhy72b8S0pKUFpaCo1Gwzr/JRIJjEYjOjs7UVFR8VS5ge2Izmg0\nQqfToaKiAhKJBLlcDrOzs0Ut0e0kGqKSl1qtRmlpKSKRCJO9pqYGJ06cgE6n+9o/k4IVvV4PpVIJ\noVCIUCiER48eFU12cgy5ssvlcmi1WrakLZvNQiQSQa1Wo6uri3Fd7KYb6c+kANFsNiOVSiGVSmFm\nZqaoTaQ7z5zskE6ng0KhYAQ/xEB74cIFOBwO9vnd7o1EImFBrlwuZ2WnGzdu/CCdfuiNfyKRwPr6\nOlZXV7G+vg69Xs+U+6lTp6BWqzE6OopMJgO1Wo3nn3++oHFnJxENgftDjEajrEmjWJckn88jEolg\nfX0da2trMBgMjHfA4XDg5z//OVpbWzE7Owu1Ws1457VabcGfwc0C7PwOW1tbWFlZgdfrRSaTKZrs\n6XQawWAQgUAA4XC4oD7X19cHqVSKiYkJRKNR6HQ6dHd3o6ysjClzkp3k3elA5PN5bGxsYHJyEmtr\na0Wt4dLkRzgcRiKRYOUWg8GAf/iHf0BzczOmpqZQWlqK8vJy9PT0sH3hJAdFqTsjKGA7mnC5XBgZ\nGSnqRADVPEOhEHNOKOty7Ngx/Pu//ztmZmbg9/thNBrR2trKVrVSrZxk5xpikjuXyyEcDmNoaAjT\n09NF7XNJJBLw+Xys452cRolEgh//+MdwOByYnZ1l1NYnTpyAxWJhDHRc+bn7z0n2ra0tLCws4LPP\nPsPS0lLRSi10Jn6/n9XkKXqrq6vDv/zLv7CxLbPZjPr6erS2tkKj0TAZiDOCIlfuXc/lcojH4xgc\nHMTdu3cRjUaLduapVAo+n49lMcmYisVi9Pf3Q6lUYmFhAZlMBhaLBZ2dnbDZbGxKitLS2Wz2iZIN\nZWvcbjc++OADTE5OFtVp2djYwOrqKithUXBgNpvxq1/9Cj09PXC5XDAYDKisrERPTw90Oh0jLaKm\nRGqe42YKKEs2NDSEjz/+GGtra0WTPZPJYHV1FYFAgP386cx7e3vxn//5n3C5XIhEIrBarWhpaUFF\nRQUbN6dJqUgkAqPRWLCdk5o1fT4fPv74YwwODv7g0tahN/7pdBrhcBjz8/OYnZ1FR0cHM/7Nzc2w\nWq2orq5GPp9nUalWq2U0iqRUSBlxjT49AJr9TiaTRZObaoQ+nw/T09PQ6/XM+BsMBpw5cwaVlZVw\nuVzQ6/XQ6/UwGAws8qdIiRjRuI+TLko8Hsf6+jr8fn9RRxmpdOL1erGwsIDa2lqm2BobG9kGq0Qi\nwcZ1tFotxGJxQYMNkdBwIys681AoBLfbjVAoVNSGv2QyiVAohKWlJaytraG8vJxtDDt9+jRqamow\nPT0NnU4Hs9nMxtKAbSNDd4ZKH9wzpzo2nUssFiua7FwOgpWVFQiFQjaCWF1dDbvdzoy/zWaDwWBg\n95zuydbWFutl2EkSRQ7d9PQ0lpaWinrmqVQK4XAYwWAQkUgEWq2WRTO0w2J6epqtglar1VAoFIyR\nkGbkyTnkOozZbBaxWAyLi4sYGhoqqgHN5/Oszky1WYVCAaFQCJvNhtdeew3t7e1wuVwoKyuD2WyG\nWq1mTks2m0U6nUYymYRcLmcBB8lO93xiYgKTk5NF7UQnvbi5uYlkMsl6cwQCATo6OlBdXY2ZmRlk\nMhmUl5ezMxcKheyeUL8Q/ay4bzQWi8Hj8WBgYAArKytFNf40EUGlQ9LLWq0WFy5cQEtLC2ZnZ2Gx\nWBgvCpXm6LwjkQjL0HCddDrzyclJDA0NFTWgo/6DcDjMHD7K1tXX16OsrIzxJlRUVECn0zGeDiLn\nImdToVCwBnBuEOrxeHDv3j3moP8QHHrjT9713bt3oVKpUFFRwQ5NLBZDp9OxbkpuypeML6XD6bNU\nf+caIar1F3sULZ/Pw+fz4cMPP4RCoUB9fT2Ax3Wi8vJymEwm9vi44zrUOZzJZFgKkhwA+m5+v58p\nrWJ3/OfzeUxNTeHKlSv46U9/ytKFlFpsaWlBLpdjstOZp1IpFgHm83no9XooFAr23bLZLDY2Nlgz\nS7FKFVy5Y7EYHjx4AKvVysaDuKx/5KhwlR63qz+VSjGmRe59SiQSWFtbY9F5se8LTUQMDg7i+eef\nZ1kgiuTr6+tRVVXFehrICNF9IbIro9FYkM7mZqGok7iYoDe6srICl8uF5uZmZggpXU6Gh1sGozOl\nES16v9x+gVQqhbW1Nfj9ftY0WGzZE4kEVlZWYLfbGSU4lecqKythsVhY/wvJRU4JdfQD29sjub0C\nGxsbWFpaQjAYLPoIGskQjUYRDAZhNBpZI5pEIoFGo0Frayvy+Ty7LxQZE5kO6bydC9AymQxWVlaw\nvLzMynvFBGVLqHeI6zSVlJTA4XBAr9cX0FbT96WeI6Is5p4pnfn09DR8Ph/L5hQLFJSl02kkEomC\n/graVdDY2Iitra2CfikKKGhce7fMXD6fZ1wX4XC4KCPdh974A9sH43K5MDExgWAwCKvVyjp06YJw\nP0vRRDQahd/vRyqVeqKGnclksLa2hgcPHmBubo5RvRYbkUgEIyMj6OnpQSqVYoaSuoNJ2dAlpnEd\nGrMhI0qEEtS9vrCwgMHBQfh8vqI/ToLH48H9+/dx9uxZbG1tFaQWucuUuN2tqVQK8XicKTsaEyTD\nH4lEMD4+jsnJScTj8T2Z/U8kEhgdHUVNTQ3OnTvHHiEpdBqjI7mpTpdKpdiOeepA5qZBl5eXMTg4\niKWlpaKWWbigWl95eTnq6+sLGj+JApdbWqGogu4N/Tvx/9Ovz8zMYHR01v8inQAAIABJREFUlHX6\nFxu5XA5TU1N4+PAhHA4HlEoli+SpIYorOzkt9FYp/ctdRJPJZOD3+zE8PAy
n07knb5Q64R8+fAiD\nwcDIqujMaVabPkv3hZvd4ja70n/LZDJwu9148OAB1tfX92TuP5/Ps3HOrq4uVuKiu05GnXvm3FIi\nVxfRuWYyGYTDYcZUl0gk9uTMiZ1veXkZSqWS6QhqrKRsHLcUx5Wbu/mPfj2dTsPr9WJwcBBer3dP\nRrgpW7S0tMT6UoDHfU073yjwuPRJdX3qBaPPke6Znp5m5cSijCb+4D+BB8jn8wgGg6wz/GlNedyL\nkkqlGMvc2toai3pisRhbvOByufDJJ59gZGSEKaBiIx6Pw+l0wuv1IhqN7hq5cJU5ebc0PhKJRBCN\nRtlcfyKRwMbGBkZHR3HlypWi1kB3wufz4dGjR/D7/U/1RLmKhS5xKpVi1MWpVIqRE1GZ4vbt2xgc\nHGQ1v2KD5txJee1m7OiucI0/GU0yTFT+IGdsbm4OV65cwfz8/J6d+fLyMj7//HO43e6nGgyu7FSq\nIMXInZvOZrMsRfrVV1/h1q1bCIfDeyJ3Pp9n3BXhcHjXnyv3fdIZU52dnHPgsWMQj8fh9Xpx8+ZN\nVnfeC9mDwSBu3LiBqampp/4dO+8LlYaIJ4BIjbiR9cTEBK5du4bV1dU9kRsAnE4n7ty5wzrcv05u\nOnsAbKESlTHo1yi7df/+fYyMjHzvMbNvAjXjzc/PPzWbs1PubDbLGmD1ej1UKlWBcx6JRLCwsIAv\nv/xyz6hzBQIBNjY2MDU19bUly53nTg2JVG7kOjfJZBKBQABjY2N48OBB0ZqInwmGP0Imk4HX64XP\n50M+n2eeFDd1QqDUIZcnv7S0FIlEAh6PB9euXcMnn3yCBw8eYGVlZU9ZrShNT9MENMHATRESuB3I\ncrkcKpWKjXTRbPeVK1dw/fp1jI+Ps/rTXsm9tbXFGi5JQXO3Je525sQ6R923lGW5c+cOPvzwQ9y5\nc6foNfOdoPnbmZkZJJNJtkiJ2wjKjYLozGk8k2q4kUgELpcLn376KT799FMWyRU7dU6gSCAUCmFx\ncZHJQtmKnbLTmdNOCPosOcwPHz7E5cuXcePGDUxOTrIO8L1ANpvF5uYmFhYWEI1GYbPZClYOcyM4\nkp3KL5SREYvFrFfmiy++wEcffYR79+7B6/Xu2RY3uis+nw9TU1OMYZObniWjT8aTyi4UxVFPDDWy\n/vWvf8W1a9cwNja2p280k8lgfX0dCwsLCAQCsFqtBSNz9P3IYeHeF8qaisViRmd869YtvP/++7h7\n9y7cbveevVEqRbndboyPjzM+lp1jwyR3Pp9nb5TeMf18YrEY3G43rly5woK5UCi0Z2yo6XSa7R9Y\nWlqCwWBgrIkkN911GofeTXbKhA4NDeHPf/4zbt++DafT+V2Coq9l+HumjH88Hsfs7CxbukJpf5r/\nJ0VOh010s6QM19bWGGHIF198gbt378Lj8eypEQLA/u7JyckC2ch5ob+bUnZcpSKTyRCNRrG4uIjJ\nyUkMDg7ik08+wfj4OFZXV/dMqZDcqVQKTqcT6+vrjNmK6llUc+bKTkaIlOHa2hrm5+cxOTmJ69ev\n44svvsD8/DxCodCeRP2EXC6HUCjEFAulFkk++gzJzVWIEomEOYmzs7P46quvcPXqVdy/f58xiO2l\n3JlMBktLS1hcXIRarWZ75elucFOgpFTou0mlUgQCATidTkxPT+PWrVv48MMPMT09zdLPewXqoaFG\nM6pDc4midqZvucowk8lgeXkZc3NzePToET7++GPcvHkTLperaNHQbqCeCa/Xi5mZGWg0GqjVanZn\nuOUIrhHi/hMOh+F2uzEzM4OBgQF88MEHGB0d3fM3So4SEZ2ZTCbWCEdGlDIsFDBxe4zy+Tz7/ZOT\nk/j0009x9epVuFyuoqWfd8PW1hb8fj8WFxcxMzMDuVzOGp6ppEuGn0pYXLnJSVxeXsb8/Dy++uor\n/PWvf8X9+/exsrKyp7sKKGtMJFA0FsmdPKByFpVLub1RIpEIoVCI6Zdr167hnXfegdPpfGrW7Cn4\n2zH+wLZB2tzcxNzcHNxuNzY3N1FbWwuVSlUwNsKdq4/H45iensb//d//4a9//Stu3ryJmZkZBAKB\nPWmWexpyuRx8Ph+j6ZXL5airq2N/PzWmAYWjQp999hn+93//F59//jkGBwexuLjI6C/30mnhIhqN\nskaaWCwGi8UCrVbLUuo7yTkoIvnjH/+IP/3pT/jss88wOjoKn8+3J3XE3UAe+NraGqanp5FIJFhD\nEQB2X+jcuf0gQ0ND+J//+R9cuXIFN27cwOzsLEKh0J7V+neTPZFIYHZ2Fn6/nzEXarVaZkx2EhlR\npuadd97B73//e1y9epX1KOxFg+LTkM1m4ff78fDhQ7aiWK1WswkcUoBc2XO5HBYWFvD73/8e77zz\nDq5evcpKTnuVet4JcnYXFhawsrKCsrIyRn7DLVFwdQywfY+uX7+O3/3ud/joo4+Yw7Kfb3RrawuB\nQABDQ0OQSqWoq6tjDKdUFuLede7I7dtvv40//elP+Otf/4qHDx/uSbPc00AlWpfLhenpaVRVVcFi\nsUAqlbJyHFBIxkTn/ujRI/zhD3/A+++/jytXrmBmZoaxhu7HmWcyGQSDQQwPDyOdTqOrq4v1F1HJ\nkKa1uJmYfD6Pa9eu4S9/+Qv+8pe/4NatW/B6vd9HLz7b3P67gUulWFpaijfffBMACkhGADCFSLW3\ngYEBLC4uIh6PM8Own8hms/D5fIxnnbipd5NbIBAgGAzi/v37uHnzJu7du8c60ffLWSFQkwuXKau3\ntxe1tbUFaTo6bwBwuVy4d+8e7ty5g4cPH7J+h/0+czL+8XiccefTalPuaBmB1oXevXsXd+7cYfzo\ne7l962lyJxIJOJ1Olq0wmUyoqalhimSn7H6/H3Nzc4xGeWNjY9/vC70rv9+PYDAIh8MBtVrNuNd3\nM57EPT86Ooq7d+9iYmKCza/v55lTvd7tdkMqlaKmpgYSiQQWi4XJy73vAFiNfHR0FLdu3WJUuPu5\niZOcvmAwiGAwiIGBAdTU1ODkyZOw2+0sHb1TdmpcGxoawt27d7G+vr5njc9PA0X2tL737t270Gq1\nOHbsGMt0AYXBBU1auFwufPnll4zhdL/vCzVHEn/GvXv30NbWxtb4Ak8SGlFP1/j4OK5fv852zezF\nfXkmjT/wWKkvLy+zaGKnEaVHcf36dbz77rtwuVx7vrHq24Bq0Wtra0/MNRNyuRy8Xi/efvttDA4O\nHsh63N1AXa4bGxvsYnNBta7h4WG8/fbbGB8fRygUOiBpHyMej+Phw4eoq6tj3vhOUB3y+vXruHbt\nGuvqP2h4PB588MEHOHXqFF544YUnmBTJ4C4uLuKjjz7C8PBwUSlwvy/y+TwGBwchl8vR398Pq9W6\n633JZrOYnJzEwMAAZmdni8r4+H3h8/lw+fJlWCwWnDlzpoBDn0D3hfYmuN3ufXdudwPt36isrITD\n4WD3ZafsgUAAs7OzmJmZgdfrPX
DZo9Eobty4Ab1ej5aWFuYs7pSb5u3dbjdbzX3Qsrvdbly+fJmN\no3P7i3ZynGxubmJmZgZjY2N76rA8s8afwE0178T6+jpmZ2cZHzgfFDmB22y2U3bKEMzNzbG0L19A\nzUMk+04kk0kEg0HMzc1hampqT2u13wXUSPd1Ne9YLIaVlRWMjo5ifn5+X6O3rwPNRD/t/lLD2vz8\nPK5duwav17vPEu6OfH6bO2FjY+Op5761tYVYLIaHDx/izp07vLkvyWQSKysrBR3duxkhr9fL6vsH\nbYAIpPdoiyJQKDu94a+++grvvvsuLww/sH3m09PTmJubY/dlp47J57dHMz/55BPcvHmzqNS9PwQ+\nnw+3bt3C2bNnWS/RbrIvLCzg+vXrmJub2/NMxTNt/HfWDrmgzMDDhw/ZTms+XBIunua0UBp0bm4O\nXq+XNwoReBzZPw3xeByLi4twu91s1pYv4HYO74ZIJILl5WUsLCxgdXV138srTwMRhHBH+rjY2tpi\nhDJjY2N72uz0XZFIJFgafDfZM5kMi4SIBY8PoHourYXdTcdkMhn4fD4MDg7C5XIdjKC7gO7x5uYm\nY/DjghyX2dlZ3Lp1izfBBU1zLS0tIRwOQyqVFky5AI97vu7fv4+xsbE9bWD9LgiHw9jY2IDL5YLf\n74dWq32Cfyafz2NpaQmfffbZno0icvFMzPk/DTTatFu6n1bn3rt3j/HH8wXU6bxTbuDxw5ybm2NN\nanwxQt8ESoNOTU3B5/PxSm7ueNzTfj0QCGBxcXHfa7bfhG+SnchwaFUx386d2ym/E5QpopWxfHmn\ndN5bW1usb4IrG81nEyvkXo2VfR9QKSUSibBV51xQb0M0Gv3arMxBIJfLYWNjgzVk77wPVDP3+/1F\npwb/IaB7Pjk5iTt37uwqG323+fl5bGxs7LlMz2zkLxAIoFKpoNFo2Jgf1VSI6Wx1dRVOp7NoKx2L\nBe6eAXqYpCCpw5+Iifj0MIHCZSs0w8o1TIlEgu2dP4imyq8Dpf6j0WgB5Sl9J2InPIjGxG8COSdL\nS0vQarUFu9zJSO0VUdUPRSQSwf379yESiVBdXV2w74HuCJcxjy/I57cprt955x0cO3YMlZWVBRS+\n9Bkih+IT4vE4rl+/zvYr0F4CbvaCHBu+yb64uIh3330XoVAIvb29jBSHO9VCVNZ8w9jYGFuE1t7e\njrKysoKGaOJT2I/s3DNr/AFAo9HAYDAULDehNau0FnFtbY0XTX5ccDe2UTqUPFqiJd7Y2MDm5iYv\nlTkpvFQqxc6e5CR2Pz4aUGBbvkAgwIiIqAOaHuhu2Ri+wOPxYGJiAg0NDTAYDCgpKWEjWzRHDIB3\n5762tob33nsPyWQSarUaWq2W0f/SAiX6HnzD7du34XK58M///M+4ePEiHA4HW5rEnZXnm+ybm5v4\n4x//yKZFurq6oFKpWPmCS7nNN9lnZmbYghypVIre3t6C6RwaX+SjjhkZGcH6+jojWrJarczhomCP\nSnh7jWfa+FMnaDKZxOLiIjweD0ZHR+HxeFgKpph77osFShERq9j6+jpWVlbg8XgYyQylhvjmlZPh\n93g8GBkZYanPra0t2Gw2tmUwkUgUjP7xAfl8nnHnG41GqFQqlJSUwGg0wuFwIB6Ps9Qz32QHtru4\nZTIZGhoaUF1dDYfDwfZcAI/Hir6pL2O/QTwbtNq0q6sLDQ0N0Ol0bP6cuqP3i0fh2yKZTGJtbQ3X\nr1+HQCDAhQsXYLPZoFKpAGxzLSiVSshkMl5FopQNcrvdeO+995DL5RgZDTm4SqUSBoMB4XCYV30i\ndH9HR0dRWlrKOEVo46lEIkFZWRmsViuWl5d5pyMjkQhu374Nq9WKEydOsJXJ+XweJpMJvb29mJyc\nLPp2zZ14po0/1X4ePXqETCaD6elp1nwjFAqxubm5J1upfii4ncL37t1jDXLz8/OQy+VQq9WMyZBv\nF5uIUB49eoREIsEyLZlMBrW1tUgkEozghI/w+Xy4ceMGY3BTKBRsRbHL5YLX62WLffhkhABgfn6e\nUZnW1NSgrq4OdXV10Gq1WF5e5k3j1k4kk0nWUb6xscE24VVXV7PyXCKRYCRAfDr3TCaDjY0NPHz4\nEEKhEEajEZlMBmVlZazJixqP+QbaEnn79m04HA40NDTA4XBALpczqmSlUsk4U/iEfD4Pp9MJsViM\ns2fPwuFwwGw2s/KQVquFRqPBysrKQYv6BOLxOCYnJzE1NYVgMMic8mw2C5VKhbq6OiwvL+/5lMUz\na/zz+TxWVlZw8+ZNNhoSDocRiUSYB04MUXxSJsC2XFSTo+12iUQCiUSCEVvQmlM+Gv/NzU1cuXIF\ncrmcsVgBgEKhYE0txdy7Xkysra3hxo0bBZS4SqUSWq0Wm5ubCIVCWFtb41XkTAiHw0gkElhdXcXE\nxAQ0Gg0cDgdUKhV8Ph8WFxd5mcalXpyVlRVEIhEEg0GMjY2ho6MD8XgcCwsLmJyc5J3hBx43z62t\nrWF4eBgAsLKygu7ubiwsLODBgwe8zNARKEAaHh6G0WjE6dOnYTAYsLa2xjaC8vGuA49LdI8ePYLV\naoVSqUQ6nWbvgG/9UAS678FgEAsLC2xnCPWice/4XgYZz6zxB8DWrwYCAcalzDflsRtIoVDKf7dG\nJz5/D1pvSxeXW9PiW8p5J2jTIFC4S0EikbBNfnxV5LR5MBaLIRwOY3V1FT6fDzKZDJFIhDVZ8g1U\nE+dueQyFQvD7/Ugmk/D5fAgEAry9N9Rgtrq6iuHhYSQSCSwuLmJ1dZVRjPP1zuRyOSSTSczPz0Ms\nFiMYDEKj0SAcDmNiYoJ3ky1c0MTC/fv3kU6n4XQ62aKxubm5Ah4DPoF0oMfjwccff4zZ2VmWtZia\nmsL8/DzLjO6l/E8OkfMDRfnGu23zO8IRvgt241k4DPdpJ7kVyczH5rOd2EnvCzyeduGr7NyFYcQz\nT0qeome+Oi/U4EfLwqjZj7KN3G2FfAI553K5HEqlki0rIoY/bo8O30Byl5aWwmQysaU/wWAQy8vL\nxZL9a+37Mx35H+EIPxR8VBzfFjvnznf+Nz6Cu9qX/j/Af7mBxxk7rvz073yXn2b7uexz3G1/fAWl\n0Ol/aQkUH0uiO0EEVtlsls39p1KpAsKuvcRR5H+EIzxj2C1bARyOd/A02Ql8/Q475d4t48JXkKxP\ny3LxVfadDKg0pcAlvToMslOWa6fcRZD9ax/T34TxB/j9+HbDkewHBz528h/hCPuBw/p2D3N5bieK\nKPffbtqf22x2hP3HYXh8RzjCER7jsL7ZI7m/O/hqGQ/nT/IIRzjCEY5wBH7g8Ef+h7V2f1hTaMDh\nPXPg8Mp+WOU+whGOcPjAX6LyI/ACh61sctjk3Q2H9TscVrmPcIS/RfD1tR6FPkc4whGOcIQjfH8c\n/rT/EY5whCMcdhzWKZLDWo4iEiC+j/3tBiJa2ktyq6O0/xGOcIRDg52z3YcFOxkXDxN2Y1w
8DBAK\nhWx17mGSXSAQQCqVQi6XQywW79nZ/01H/vQYuTvaycMiz4tYrvhGb8l9kFzPnKg6aZcB3xi6uDu3\nRSIRO/tcLsf+fyqVYrLzCSQzLf2RSqWM+pQ2zhEdKh/PXCKRQCqVshWixIxGGy4TiQTvIiQ685KS\nEpSWlsJgMAAAY3BLJpMIBoOMnY5PoDuiUqlgt9tht9uRSCQQj8cRj8fZemu+3Rc6c7VaDaPRiKam\nJmi1WiSTSQQCAayvr8Pj8fByYZFYLGZruGtra3Hs2DFks1lsbm7C6/XC7XbD6XTybmERvU+9Xg+b\nzYbOzk5UVVUhlUphZWUFTqcTc3NzbEVxMe7L34Tx35m2ImWoUChQWloKmUzG/iEvUSaTsQ11fr8f\na2trvFCMtGhGq9VCpVIxPm6pVAoAEIlEkEqliEajCAaD8Pv9iMViBy438Z4rFAro9XqUlJRAJpMx\n7zafz7P/9fl88Pv9vFlZTMrQYDBAp9NBJpOxu0OOoUgkQjgchtvtRiQSQSKROGixAWwrQ7lcDrPZ\nDLVaDblcDoPBAIVCgVQqxRTg7OwslpaWEI/HeXPmYrEYer0eFosFKpUKer0eFRUVEAgEjPt8fX0d\no6OjCAaDvFk7KxKJIJPJYLfbYTKZoFar0dDQgIaGBsRiMba5cHR0FJFIBKlUihdnLhAIIJFIoNVq\nUVZWBpPJBLvdjpMnT8JsNiMej2NpaQkzMzNIJBKIRqO8CS5I79ntdpSVlcFut+PYsWM4e/Yscrkc\nwuEwHj16hLt372JpaYlXG/+kUinUajXKy8tRWVmJqqoqnD9/Hi0tLUin05idncX9+/cRi8Xg8/mK\ndubPtPHnptq4NJsikQhyuRz19fVob29nl7yyshJqtRolJSUQi8VsO9fVq1fx3nvvIZ1O79sjfdpS\nFplMBo1Gg76+PjQ0NECn08Fut8Nms0Emk0EikUAgEGBhYQEjIyO4cuUKJicn9zW62G3EUSAQQKlU\noqGhAWfPnoXJZIJWq4XdbodGo2FyR6NR3L59m/1D0ehByQ08fpznzp3DiRMnmANjtVqZ3NlsFmNj\nY/jwww8xNjYGt9u9r0rxafVkhUKB2tpavP7666iqqoJMJoPFYoFWq2WZrVQqhT/96U/47LPP4PF4\n2C73g5RdLBZDp9Ph3LlzeOONNyAWi6FSqWC1WiGVSpHP5xGNRjE+Po5oNIp0Os0b469UKmG32/Hz\nn/8cXV1dEAqFMBqNMJvNbANgMBiEVCrFzMwMstksL4y/SCSCXq9HX18f3nrrLWi1WshkMthsNqhU\nKuRyOfh8PpSXl2N2dhYejweZTOagxQawfc8tFgt+9rOf4eLFixCJRNBoNLDZbBAIBIjFYlAqlQgE\nArhx4wZb684HGI1GHDt2DG+99RZqa2shkUjYG6XgIhqN4sGDB0XVKc+s8RcIBLDZbHA4HKiqqkI+\nn2drEuVyOaxWK6qrq1FbWwutVguDwcB2QkulUgiFQsTjcZhMJqysrODRo0fwer0IhUJ7ni4i5VFe\nXg673c6Um1AohE6ng9VqRWtrK8rLy6FSqWA0GmE0GiGRSCASiQAAarUaW1tbGB0dxdLSEiKRyL4o\nGJPJhLKyMlRUVEAulyMej7NMitFoRE1NDY4fPw6NRgOlUgmj0QiVSsWi/kgkAr/fD6fTCblcjnQ6\nvS9eOkUNDocD5eXlyGQySKfTkEgkLN3c1dWFlpYWSKVSlJaWQqfTMeNPRvTBgwdYWFjYtyYpgUDA\nHECHwwGdTodUKsXqhjqdDhUVFTh16hQsFgtTiiqVim1ASyaTuHnzJhQKBct87YfjIhQK4XA4WKQm\nFouRyWRYaUKn06GrqwunTp0C8NgJ4zqKm5ubUCqVBaW7vYZAIEBpaSmTm9ax0pmr1WpYLBb09/ej\nrq4OuVyObZ6jspZWq4Ver9/XWjQFFBaLBWVlZbDZbFAqlSzrVlJSAp1Oh7a2Njz//PMoKSlBPp9n\nZSLKmAaDQZYZ3S8IhULIZDJ25larld1V7ht94YUX0N3dja2tLVYuop+LXq+HQqHY1wwunZler2dy\n63S6glXhRqMRdXV1OH36NEwmE7a2tiCVSiGRSABs2yupVMoaAIsl+zNt/BsaGnDhwgW8+eabyOfz\ncDqdAACtVouWlhZoNBqIRCJ2mDsVSElJCfR6Perq6tDb24utrS22I3ovL49Wq0Vvby8uXbqE/v5+\nLC4uIhKJQCqVMsNK9XGq8++MWktLS2E2m2GxWKDT6RCPx/clRVdVVYX+/n689tprsFqtWF5eZo/T\nbrdDrVazM6d6M8mey+Ugk8lQWlrKUtSxWGzPjb9AIIBCoUBnZydeeuklvPrqq4hGo4hEIlCr1dDp\ndDAYDJBIJBAKhWzzGXeRCCmnkpIS9rn9cLYEAgHsdjv6+/vx8ssvo62tDcFgkBl5crK4K2ap54J+\nP4A9bSx6mtwikQhtbW24cOECXnzxRSgUCkSjUWg0GqjVaiiVSubQ0lly7zopyP1OPVMkf/78eZw/\nfx59fX1IpVIQCoVQq9VQKBSshAiAOQZ0vtQ3AmBftrcR6O8mvXju3DmUl5cjm81CqVSyMiKtJeb2\ns+xc/rPf2QqhUMgynufPn8fZs2fZGVPplu4wyUe/j+5MLpdjvVD7eeZSqZSl8s+ePYu2tjZIJBLI\n5XLmhHOX+2Sz2YK3uLW1xQLAI+P/LSASidDZ2Ynz58/DarVCKBSitLQUACCTydihA7uTk2QyGUSj\nUayursLpdGJ6ehrhcHjPL41QKITFYsHLL7+Mzs5OaDQaVFdXI5PJQCgUMkX+tE1c1ATl8/mwsLCA\n5eVl1pizl7KTYmltbcWPfvQj1NTUQKVSsWiHojmug7XTaUkmk/D7/XC5XPB4PPti+AGw2vLZs2fx\n/PPPw2AwQK1Ww2QyQSKRFJRTdtsXsbW1hWAwiKWlJbhcLoTD4X1pJqLyVWNjI9544w3U1NTAYDBA\npVIxpUPKnLsilyv/5uYmlpeX4fV6EQwGkclk9kUxUpTZ19eHF154AWVlZZBIJNDpdExuroEkkFLM\n5XLweDyYmZlBIBDYtzSuUCiEwWBAa2srLl68iNbWVuj1+oLIXywWFxhPKjXSuYfDYXz11VdwOp37\nunq2tLQUFRUVOHPmDF555RWUlZWxvhWJRMKcVlotyw2KSPb5+XncunULa2tr++p0VVVVoaurC6++\n+io6OzthNptZAERyC4VCZDIZpuvIiQGAWCyGgYEBDA8P79sdB7YzsD09PTh79ixefPFFlJWVwWAw\nsMZsuuN05vRO6R7l83l4vV5WkjtK+38DqCbb2tqK9vZ2KJVKiEQi6HQ69hmKPLn7qwm5XA5erxcu\nlwtzc3MYHx+H0+lEJBLZ0wtPiqWurg49PT2ora1lzWVcuUnGnZEzsK3M5+fnMTw8jJGREXi9XrYz\nei+hVCphtVrR0dGB7u5u5pWr1eoC2XO5HLa2tt
hj5RrT5eVljI6OYnx8HG63m3XO7zXMZjNaW1vR\n09OD5uZmZui5cufzeTaBwJ1SALadlqmpKSb3fmSHgO10YG1tLTo7O3HixAmUlJRAJBJBpVKxz1C0\nk8lkmMLh3ve1tTXcu3cPCwsLCIVC+1LDFQgEMJvNaG9vR09PD9ra2p4Yx6K7QrV8bhoU2HZyZ2dn\nMTw8jEAgsC/1fkoxNzQ04MSJE+jt7WWBBRe5XA6ZTAaJxP/P3ZcGt3ldZz9YiX0hCIAgQRLcF1AU\nqX2zLEqyZSnK4sSeOk3SLE3aTJvJTCfT6a/OtH/aTpt+STqdSTqTpFlsx4ktRba8SLIVbRZFkRQp\nUtwXcQEJkgCxETtAAt8P9ly9ICmRskgQ8pnB2ANCwMHFvfdsz3lOGIlEgpURSTweD1paWjAyMpK2\nLgXKVuzduxf79+9HQ0PDiruDos5IJIJAIMAcX9ozyWQSY2NjuH1gBh1RAAAgAElEQVT7NlwuV9r0\npmxFY2Mj9u/fj6KiohVrvri4iGg0Cr/fj0gkArlcDqlUyv4eCoXQ1dWF/v7+tEX+lNI/ePAgjh49\nij179qzadhiLxRAOh+Hz+cDn86FQKFKCDafTidbWVgY63yj5VBr//Px87Nu3D8XFxSuiTeDBxUIp\nIPIQyUuMx+N4//33cfHiRUxMTGB2dhZut3vT289EIhH27duHo0ePshr+cqEDSm1aIpGIRRpU2vif\n//kf3Lt3DzabDV6vd9M3O4/HQ1FREb74xS9iz549yMrKWnXNFxYWEI/HEYlEWNsZd5N//PHH+M1v\nfoPx8XE4nU72HTdTbwA4ePAgXn75ZRQXF6dkVUjICIXDYcTjcUilUmZok8kk3G43zp49iytXrmBs\nbAyRSCQtGSKDwYAvf/nLOH78OCQSyap1bzJCfr8fQqEQKpWK/T7JZBL9/f34+c9/jvHxcQQCgU13\ntihD1NDQgL/9279FdXX1qhdiIpFAKBTC/Pw8fD4fsrOzWckIWMoG3L59G5cvX8bc3FxaMkTkWJ08\neRKf//znodVqV80aRqNRBAIBdllTfVokEiGZTGJubg7Xr1/HwMBAWqJnwtyUl5fjlVdegdVqXbXE\nk0gk4Pf74XQ6YbfbYTAYkJ+fn0KSMzU1hc7OzrRkQYGlO1GhUGDXrl04ceIEjEbjqvucWj5tNhv8\nfj8qKipYBxSwZGDtdjtDy6dDKNOyb9++h+7zZDIJr9cLu92O8fFxaLVaWK3WlOAhHA7D4XBseAfR\np8r4U0STl5eHffv2sUMHPIiUI5EIA5FRewgX5BQKheBwONDV1YWWlhZ4PJ5NN0LA0mUukUhQVVUF\nq9UKuVzOdCLjQ+1Ni4uLkEgkDMxC329+fh7j4+Nob2/H0NAQ/H5/Wi4WkUgEg8GA3bt3o6CgIKXu\ntrCwgGAwiEAgwNZcJpOlREOxWAyBQAAjIyPo7Oxk9a3NFmqDKy0tRX19PUPAU+QWiUTg8XgQCoUQ\nj8ehVCqhVCpT9gv1Pg8ODuL+/fsIhUJpuRTlcjlMJhNqa2thsVjYWhKAz+fzwel0sohfo9Ewh4W6\nEyKRCGZmZtDX14dQKJQ2YCVlt+rq6lgpLpFIIBKJIBQKYWZmhp07hULB8BZcwx8MBmG32zE5OZkW\nZ4vWsKSkBFVVVSgqKmKgt4WFBfj9frhcLtjtdlauysnJYaUjchRjsRjm5+cxNTUFn8+XNsNfXFwM\nq9WKiooKxpWwuLiIUCgEj8eD6elpzM7OIhQKQSwWw2AwQCqVppQC4vE4/H4/Kw9ttt4CgQAGgwE1\nNTWoqalhoFDCG1AL9vT0NDweDwKBAOuTJ+wNfc94PI5gMJiWvSIUCpGVlYWamhrs378fxcXFLANK\n5WS73Y6ZmRk4HA5WlqWWVm7wRJlSbklgw/Tc0HfLABEIBMjNzcXu3buh1+vZ83She71ezM/PIxKJ\nQK1Ww2AwsBR0IpGA2+3G4OAgRkdHMTs7m7baENVvLRYLSkpKmNdKhn9+fh4zMzMIBoNYXFxEUVFR\nSv0/Ho/D4XBgYmICMzMzaTH8wNIhlUqlyMnJQXFxMbKzswGAXRbhcBiTk5OYmppCPB6HTqdDdXV1\nSk03HA5jenoaDocjbREFsIT9oJY9o9HIWsgo6nQ6nejt7YXL5UIymcT27dthNBpTov5AIACn08mc\nhHStOSH89Xo95HI5gKW9Eo1G4fF40N/fjzt37iCZTEKv1+PIkSNQKpWQSCQAwPaU1+tNS8RPestk\nMlgsFpjNZqhUKlbXXFhYgM/ng91ux40bNzA8PAyBQID9+/ejurqaGSJgKYXr9XpZijcdBpTP5yM3\nNxf19fUwGAzIysoC8CDdPDk5ibt37+LKlSsIBoNQqVT48z//c5hMJpZ9JAcnGAwyIqvNFuIz2b59\nO7Zv384wOOSIOJ1O9PT04MaNG2htbYVAIMDu3bvxne98B7m5uSx1Ts5wugi4CD9RVFSEF154AeXl\n5cxxJV0GBgbQ3NyMa9euYWZmBkKhEN/+9rfR2NgIpVLJ7lAy/ulqdxaLxdBoNNi/fz9eeOEF5OTk\nsGCB7rrLly/j5s2baGlpAZ/PR2VlJf7+7/8epaWlrGxH2WkqTW+0fKqMP0WVFKGRMXe73eju7sad\nO3cQDochlUpRXV2N2tpaFBQUMEBULBbDnTt38Otf/xr9/f1pRYRKpVJkZ2dDoVBAIBAgEomwyKa1\ntRX3799HIBBAYWEhampqGECKItVAIIDz58/j/PnzaYkoSG9ipcrOzoZIJEI8HofL5cLY2Bj6+vrQ\n3t6Oubk5xONxVFZWYufOnaxMASxt8IGBAfz617/GrVu30lIrp6hCoVAw0NPi4iICgQBmZ2cxODiI\ne/fuYWBgAG63GwqFAsXFxSgvL09JmScSCVy+fBlnzpzBxMTEpupMQunjnJwcxjNABqW7uxvd3d3o\n6+uDzWaDw+GA2WzG9u3bAYAZz2QyiZmZGbz++uu4cOFC2oBbEokEGo0GJpOJtaJSpq29vR29vb0Y\nHByE3W5HLBaDyWRCXV0dpFIp2y/JZBJtbW1488030d3dnZb9IhKJoFarYTQaYTQaIRKJWBTZ09OD\ntrY2DA0NYXR0FFNTU5DJZCgqKgKw5GDSxR2JRNgZ9Xg8aVlzrVaLoqIiGI1GyGQydj5dLheam5vR\n1dWF4eFhTE5OwuVyQaPRIBqNpjhbwBLQ77333kNLS0ta0uZZWVmoqKhAZWUl1Go1eDwe/H4/fD4f\nBgYG0NTUxLJt09PTWFxchEqlSgFFk7S0tODixYuYmpradN2pfZUifj6fzzJsLpcLt27dYvuF1lwm\nk2FxcREymSwFp+D1enHr1i20tLRsCiPhp8b4k6eo0+mg0+nYhe52u9HT04M//elPuHTpEgQCAYqL\ni1FQUJACgIrH4wiFQujv78eFCxfSxgBFpQqNRoOioiJoNBrw+XwEAgEMDw/j9u3buHjxIgYGBsDn\n83H06
FFYrdYUpChlNJqbm3Hr1q20EW9Qe1thYSGKioogkUgQjUbhdrvR2tqKa9eu4erVqwiHwww9\nT3qTwxWNRjE2Nob33nsPdrs9rU5LTk4OampqoNfrkUwm2cVy6dIlNDc3o7e3FwBQU1ODsrKyFQj0\naDSKjo4OXLx4MW1kRFQDLSwsRFlZGWQyGSONaW5uxuXLl9HZ2QmPxwMAUCgUrE+YmzZ3OBz48MMP\ncefOnbRc5tQbbzKZUFFRgdzcXCQSCbhcLvT19eHChQvsUkwmkzAYDCgsLGQ95sCDFGh/fz/Onj2b\nNidXKpUiLy8PJSUlLDILhUKYmprCrVu38M4772BsbAwulwsAUFVVxXrKuboHg0HcvHkT165dQyAQ\nSIuTazQaYbVaUVJSwiJQh8OB3t5efPDBB2hpacH09DTi8TgkEglMJhOys7MZ8yawFDnbbDacP38e\nfX19aTGgMpkMNTU12LZtG/Ly8hhnyMjICG7cuIGzZ8/CbrczcG1ubi5MJhO0Wi3LbpGD3tXVhUuX\nLrEOhc0Ssif5+fl49tlnkZ+fz6L42dlZ3L17Fx988AFu3ryJ+fl51nVA2AqVSsWyFclkEh6PB9ev\nX8fdu3c3JTP3qTH+wFIdtLKyEhaLBXK5HB6PB2NjY7h06RI6Ojrg9XqRm5uLvLw81NfXo6ioiHnl\nsVgMDoeDIZ7T3TdssVjw7LPPwmw2g8fjwe124+7du3jvvfcwMTGBeDwOjUaD0tJSHDhwgKXXAbCI\n1e/3p7WHlXps9+/fj3379kGtVsPj8WB8fBw3b95Ea2sr5ufnIRQKoVQqsXv3bjQ0NLCUaSKRgMfj\ngdvtTlvNGXiQaSkrK8NnP/tZ1NTUsEuxp6cH165dg91ux8LCAkQiEYqKivC5z30OJSUl7D2i0Sij\nICYOhXTpbTAYsGfPHjQ2NsJgMMDlcmFgYAAdHR3o7OxkFyKfz8eePXtw6tQpVucFltqe3G43fD5f\nWmiIKW1uMBhQW1uLxsZGVvqx2+3o7OxEZ2cnJicnWY+z0WjEF77wBezbt4+9D3G0055JV9pcpVKh\nrq4OBw4cwOHDh6HVauFyuXDv3j10dXVhaGiIlXx4PB7q6+vx9a9/fcV+mZ+fZyWidAErS0pKcOTI\nEWzfvh0FBQWQy+Xo6OjAzZs30d/fD4fDwdZRoVDg1KlTeOGFF1gEyi0lURvrZgtxVOzcuRMHDx6E\n2WyGVCrF/Pw87t69i7a2Ntjt9pQ1Ly8vx7e//W3U19ez96GSjM1mw8DAwKYzV/L5fEilUuTn52PX\nrl0pmdx79+7h0qVL6O/vx/z8PLuneTweDh8+jJdffhkGg4G9Vzweh8fjYfitzZBPjfEn+k+z2Qyd\nTsfqa1TXXFxchEajQV1dHfbu3YuCggIGNkomk3C5XLh27Rp6enrSOvBBKBRCJpMhNzcXZWVlUKlU\nTHe/3w+v1wseb4mtsL6+HnV1dYypjXQfGhrCxYsXYbfb06o7tfcVFxezMgQByXw+H4LBIIRCIUpK\nSrBr1y5UVFQgJyeHgVnC4TBaW1vR2tqaVhpfsVgMk8nEIjlCbVMa2uv1IhaLQS6Xs+EgpaWlKW2L\nU1NTuHr1KoaHh9NK+azRaFBVVYWysjIWERG+goCViUQCOTk5KCwshNVqZeA0kp6eHnz88cdwuVxp\nq/ULBAJWsiJsCNXBCfAZjUYhEAiQn58Pq9UKq9WK3Nxc9j4ejwe3b99Gb29vWgChwANymerqalRU\nVMBkMjGWOxrkFAwGkUgk2HmoqalBbW0tw2IAwNjYGG7duoWpqam06E7970ajEZWVlcjPz4dGowHw\nAFwZDoeZLtnZ2SgtLcW2bdsYxSywFBR1d3ejq6uLnYvNFroTTSYT8vPzGRNiIBBgoFCah0DMfVRS\n5O6Xubk59Pf3Y2RkJC0YKFpzpVLJZjpkZWWx2j3N/KDMrFwuh0ajQW1tLbZt25bSomuz2dDd3c1a\ntTdDPjXGn9KhhJiMRqMsmqTndTodjh07hsbGRnb5AEuHYXp6Gm+99Rba29vTqjcN6TEYDIy3nAA1\n1JolFotRWlqKl156CQ0NDSm120Qigfb2dvzv//4vSzumQ8gQFRQUwGAwsI1LqS9irxIIBNi1axde\nfPFFFBUVpegeCARYS2W6BuFw0c8lJSVsrsDi4iIbNERMfWq1Gs8++yz27dvHem9J96GhIfziF7/A\n/fv306Y3j8dDTk4OGhoakJ+fz7o9aO9zaUAtFgtOnDiB8vJySKVSluFKJpMsbep0OtOmu1AoZIhz\njUbDgH5yuZxR9lK7bU1NDfbs2YPc3NyUFO7s7CzOnj2L5ubmtOlNUSgZUGrXI0IiIgvj8XjIzs7G\njh07UFlZuaIN8N69e/jd736HsbGxtOjOnRJnNpuZnslkkvGgcPEr+fn5LBtKXS/AErjy6tWruHHj\nRtpmJ9BgJCIJo71CvCHUCcXj8SCRSFBaWorKykqYzeYUThSbzYZ33nmHlZLSpTtNc6SSMpVH6S6n\ntdVqtSmOPLeFsbe3F01NTfD5fJum66fG+BO4r7m5mWUBlEolrFYr9Ho9m1ZWWVnJWofo33HTceme\nxkZe+MjICK5du4aDBw+iuLgYBoMBx48fR2VlJaLRKNRqNWpqalLS/dQ24vF44PP50hYNcT9/fn4e\no6OjMBqNKC8vh1KpRHV1Nb7xjW/g5MmTCIfDKC0tRUVFBYucic2KpptRmjodwi3z0ORDsVgMuVyO\n/Px8PPfcczCbzQgEAhAIBKiqqkJJSQmrf3K7LxwOR1qH4Cyn/IzH44y8ymq14pvf/CYaGxsRDoeh\n1+tRUVHBpuABYPwQbrcbDocjrdEzXYhkPAGwbACtOTH1lZaWsiwYV/dgMMiYCNMl1LbFnT5J3RYN\nDQ1QKpU4ePAgQqEQpFIpSkpKUFlZmeJsLS4uMhBsMBhMi95kcLjslPQoLy+HWCxGbW0t28Mmk4lh\noUjobhofH8fExETaynK05tRmSN9HpVLhyJEjsFgs+OxnP8umlRYUFLC+fm7rs8vlwt27d+FwONKi\nNznipDd3zauqqvCXf/mXOHXqFFwuF0KhEFQqFfLz81FbW5uyX5LJJO7fv4+uri4EAoFN0/dTY/yp\nVainp4dRye7fvx9VVVWoqKgAABbdEQc0gbYGBgbQ1dUFl8uVVgNKSP1IJILR0VEGWuTxeKisrERd\nXR0aGhqYTlzUM1HK9vT04P79+6wFMJ1CdLz37t2DUChELBZDUVERzGYzzGYz6zmny5Pa4whA1N7e\nzmp36SxXLC4uwuv1YmxsDHfv3oXFYkF+fj6brlVXV5fS209shdSqMzg4iP7+/rSPkaXPn5mZwfDw\nMOvH1mq1MJvNyMvLY04NpR+59KFOpxPDw8MYHR2Fz+dL61hTAjBNTExAJBLBZDJBp9OxoVq1tbWs\nbEHRHdfhGhsbQ3d3NyYnJzctDfowiUajsNvt0Gq1DHsjk8lQXF
zM6HLJgVWr1Skllvn5eYyNjWFg\nYADT09Np3y9erxcTExNsn8jlcta10NDQwJjluBkYMkQzMzOsG8DpdKbtfqE2xNnZWczMzECv1zNe\nE6vViqqqKjZnhQIjbncC1fl7enowODjIwK/pEOoampqagtFoZO2sNJAoFouxcjSNT+auudfrxdTU\nFHp6ejAyMrKpwahg0975yeSfHvcfkFGJRqNwuVwYHh5mNV0ucpXbZkbc/b/85S/xxhtvYGZmJu0j\nKqnHORgMssvZ7/dj27ZtbKQm8CCdRJ5wKBRCW1sbfvjDH+LmzZtp6+vnCpGbTE1NYXBwEL29vVCp\nVKiurk5ZZ2Ih5JLL/P73v8dPfvITjIyMpHW8JpVK/H4/Jicn0dnZidHRUXi9Xsa7Tf3EFIFwiXFs\nNhv+3//7fzh37lzaKE65uofDYdjtdvT19aG/vx+hUIjNjqe0OZfXn0u49NFHH+Ef//Ef0dXVldb9\nQmfT4/FgeHgYvb298Pl8jHeAykOUZieCFroQo9EofvrTn+IXv/gFJiYm0mpAifPBZrNhfHwcDoeD\npaQpkKC9LRaLU/ZLMplEV1cX/vVf/xVXrlyB2+1O237h4kBoTgZ1XHBZQemRlZXF9gvp/tZbb+HH\nP/4xenp64Pf706Y7FwdCGRWxWAypVMrofrm8/jSCnfb6zMwMfvrTn+LcuXOYmJhIC0kb8GDN6ZxS\niyiVtGh/c9ec+zcAuHXrFn70ox/h9u3bmJmZeVIH/Z8f9cdPjfEHUrmpfT4fi+bIc6TFp4Xu7u7G\n+fPncfHiRfT29qY1ElquNxHi+Hw+CAQCWCwWqNVqVjukTc/j8RCLxXDz5k28//77rIVlK4R7SH0+\nH1wuF/Lz81FaWgqJRMIuE+7EOLvdjuvXr+P8+fO4cePGlszV5urtcrlY2015eTn0ej2rhdKa06XS\n39+Pa9eu4dy5cxgcHEyr4SchljWPxwOXy4VAIAClUonKysoU4881SsFgEL29vbh48SLOnj3L0qXp\nlEQiwVoSnU4nAoEAK08Q0RalSLlGiNgT33zzTTQ1NaW9E4fOptfrhcvlYrTTxD7HZWPjOuhE49va\n2orXX38dNpstrZk5cnKJgZIY8JLJJONboHtw+X6JxWLwer04f/483n777bSRQHF1p4DI4/Fgbm4O\niUQCMpksZWomlWC4afZIJIKJiQm8/vrr6OjoSAsJFFdvYjX1er2MiIrmDHD3CmEYuPslFovh1q1b\n+NWvfgWHw7ERd+Mjjf+nJu3PFWLz6+jogMFgQFlZGRvfywXhtLe344c//GHa6nAPE9qcVAufnp5G\nU1MTI+ggvblkIefOncOZM2fS0nqzHr3D4TAikQiL7KRSacoccNJ9ZGQEP/vZzxhBy1ZJIpFg+2R8\nfJzpVlxczFjQuKNvAaC5uRlvvPEG7Hb7Vqmdonc4HIbL5UJOTg5OnDjBCICWj+f1er348MMPGUHL\nVq17NBpFNBqF1+uFx+PB4OAgioqKUFVVxaI67kUOLIG2rl69mhaClocJGX+v14v79+9jamoKHo8H\nlZWVDHzGrU1T58j4+DiGh4cZ3ijdQtwVPp8Po6Oj6O/vx/T0NPh8PhsLzg0suF04k5OTmJubSzsG\nivQm/o+JiYkUBr/lRDjcujqw1MY6NzcHj8ezJfc6AVNnZ2fR3t6OpqYmVoYmUDQ5ZtyAjqiWiSI6\nLUOTNv0TtkiSySSCwSCbxMfdIFzPLF1jY9crxBc/PT0Nn8+XojeAFUC5rboQlwsBVah0EQwGU6I5\nOtAejwc2m21TUayPK5TaHRkZwcTERMrMezqYROE6ODi45c4iCWUwHA4H7t27B5fLxS5zLlCO+oUH\nBga21OHiCg1iITBZLBZLycxRFDUwMIAzZ84wB22rJZFIwOFwYGxsDGNjY/B4PCnpXMLxBINBXLx4\nEefPn8f8/PxWq83amVtbW9Hb2wuHw4FoNJoSOQMPGDf/67/+Czdv3txirR9E052dnXjvvfdSZpYQ\niJSr+wcffIAf//jHGB0d3WLNl+zMzMwM3n77bVy+fJlhhAiMSfiuZHJpYNJ//dd/4c0330zbGf1U\nRv7AkkdIIJblk9r8fj/u3buH4eHhtNYP1ys05Gd51AwA09PT6OjogN1u3xKv/FFCHi3XIeGi6wkX\nQANEMkXoggmHw2w/cNfc7XZjZGQE/f39Wxr1LxdKjxInBAFDudmKiYkJdHR0oLu7G9PT01ul6goh\nToVAIMD65Ll6BwIBjI6Ooq2tDbdv395CTVOFuoOoXLTams/OzqKvrw9Xr17FnTt3tkrVFKHW2pGR\nEczMzLBx2dysBWXBmpqa8Pbbb6e1q+JhQkHF2NgYxGIxTp48yWr4hCkCwIKKDz/8EO+9994WavxA\nCFh8+/ZtmM1mRp7EZZYFlkaZt7S04K233kJ3d3fa9PvUGn8+n88GK2i12pS/TUxM4Cc/+Una+oUf\nR4iS8+TJk6irq0sxQgBw5coV/OhHP4LNZtsiDR8uPB4PFRUVOHbsGAwGQ4ruPp8PP//5z/Hee++l\nHa29lvB4SwNn6uvrUVNTk8ILDgAdHR34l3/5FwwMDGyRhg8X6v3fsWPHijUHgDfffBO//e1vMyZy\nJiGjk5ubi8LCwhSEPACMjo7i3/7t39DU1LRFGj5aJBIJjEYj453nyuXLl/Hf//3fGB4e3iLtHi1S\nqRRarTalNQ54cEbffffdTW0x+yRCToBUKmUlIq50dHTgRz/6Ebq6urZIw9WFGBIJs8AFspKcO3cO\nv/vd79J+p38qjT+fz2cToQh8Bjyox/T396O7uzujojgSIhUpLy+H0WhkzweDQUxPT7PhLZmSviUh\nYJ9er2d1cxKHw4Hu7m50dnbi/v37Gac7TVTMzc1NYSCMRqOYmprC3bt30d7ennEXIkUQSqUSeXl5\nKWvucrlYO2Vvb2/GrTmdUbVazUh/gKUzOj4+jjt37uDOnTtpG5i0XqE1JxIoulsAMM6L1tZWdHR0\nZFQ5EUjVXS6Xp0TONpsNd+/exe3btzE0NJT2rqe1hKs7d1hSOBzG8PAwmpqacPv27S3FQD1KKNXP\nXfPp6WkMDg7i5s2b6OrqSns29FNp/KldSKvVMmpIYOliGR4exr179zKmbssVqr/J5XI2GITE4/Gg\npaUlI40n8GBzK5XKFCQxsBTF3bx5E3NzcxmpOzESKpVKyGSyFHaztrY2dHV1pa1d6HGEDCiBiWgo\nCLB0mb///vsYHR3NGFwIV2jNCcHNJWfp6OjA9evX4fF4MlJ3kUjEuCu4k+8cDgcuXryIzs7OtBNu\nrUdovxD5D/eMdnd34+2338bY2FhG6k5gVmqvJPH7/fjwww9x9epVeL3ejHRaCMi63PgPDQ3hV7/6\nFdrb27ckG/qpNP41NTU4duwYysvLUzZ4IpHAzZs38cEHH2SkhygSiXDkyBGcOHEihd0MWKojnj9/\nHp2dnVuo4cOloKAAj
Y2NqK+vX5HW6unpwfnz5zMy0wIAO3fuxIkTJ2AymVJ0DwQCuH79OlpaWjIu\nigOW6EGPHj2KAwcOpFwqwFJp6913300bnezjSnl5OY4dO4aSkpIVZ7Srqwu3bt3KuPIQsHRGDxw4\ngKNHj0KpVKbo7nK5cOPGjU0bxPKkkp+fj8bGRmzbtm3FGR0dHUVTU1NaKcIfR7Zv344TJ07AaDSm\n6B4Oh9n47a3oqFhL1Go1jh07ht27d68oVTgcDrS0tGB2dnZLdPtUGv+SkhJ84QtfgMViYc+FQiE2\nKa+joyPjojhg6WLZvXs3Dh06xFK4yWQSfr8fY2NjaG5uzrjaLUlubi5OnjyJqqoq9hz10vf396Ot\nrS0j1xwArFYrjh8/Dr1ez54LhUKYnZ1lDGeZqLtKpcLhw4fR0NCQQuYTCoUwMTGBtra2jHRaAKC4\nuBinT59GYWEhey4Wi8Hn82FoaCijOhO4IhKJsHPnTuzfv59l5qhDh6ZCZhKwkitGoxHHjx9njKcA\nGGB0cnIS/f39GbnmwNKY5MbGxpTplMRHMDo6iqmpqYzUXaVS4cCBA6irq0vpSohGo3A6nRgaGtqy\nM/qpNP46nQ51dXUp/aA2m41N1crETQKAkfuUlpYyAFQ8Hse9e/fQ3t7ORlhmoqjValitVphMJvac\ny+VCR0cHxsfHM1ZvAMjLy4PVak0ps4yPj6OjoyNtM+M/iUilUlRUVKSMpg6FQujr62Pp/kzVXa/X\nY8eOHSlrPjc3h76+Prjd7ozVWyAQsEEyVO8n5kdKmWeq7mq1Gtu3b0deXh57LhAIYGxsLGNLciRm\nsxm1tbUr9sv4+HhG34tSqRRVVVUrxsdPTU3B6XQyIONWyKfK+FOtX6VSQa1Ws+cTiQRGRkZw/vz5\njETJA2C1fpVKlUIGEYvF0NraiqampoxqjyMhnIJMJoNGo0kBV9rtdrz//vvo7+/fYi1XFwIpKpXK\nlHG9yWQS9+7dw8WLF9M2+e5xhehBVSpVSgTq9Xpx+fLljAHLdF4AACAASURBVM20UE+8TCZL6cKh\nSYlnzpzJiB7t1YS7X2gcOLB0mTc1NeHq1asZBwolofZhjUbDgiLqL3/77bfT2mL2OEI1c7lcnnKn\nA0sI//Pnz29Z2nwtobuRZlWQeDweXLp0Cbdv395STMunyvgLhUI2ZpOEes+Hhobwxz/+MSMvRADI\nyspiPNAklE6kiyUThcfjsfab5bVbm82GM2fOYGZmZgs1fLgQYxgXQET7pbW1FWfPns3Y/bKc4AQA\no5R9++23M6o3nit0IS5vpyQe/J/97GcZu+YE3OLuczqj77//fsbeL4SUX85wCgDDw8P42c9+lrFn\nlHRfXi9PJpO4du0afvGLX2TkmgNIYfAjSSaTcDgc+M1vfrPlZ/RTY/x5PB4KCwvxzW9+E8ePH2fP\nj46O4re//S0uXbqUsZuEx+PhmWeewVe/+lVs27aNPf/hhx/id7/7XcYQhSwXHo8HlUqFr33tazh9\n+jSLnr1eL1599VW8++67GcFu9jCpq6vDl7/8ZRw+fJg9d+fOHbz22mu4cuVKRu+Xz372s3jxxRdT\nRrD+4Q9/yOjIGVgChr700ks4efIke+7+/ft47bXXcPHixYxe8wMHDuDzn/88rFYre/6jjz7CG2+8\ngfb29ozVXaFQ4PTp0zh9+jQLjHw+H15//fWMP6MVFRU4ceIEdu7cyZ5rb2/HG2+8gY8++ihj1xwA\nDh06hBdeeAEGg4E9d+bMGfzxj3/MWOxWJkjycR48Hi9psViSX//615N9fX3JxcXFJElTU1Oyrq7u\nsd4vnQ+lUpm0Wq3J//zP/0zGYrEU3f/93/89yePxtlzHhz1MJlPyueeeS3700UfJaDSaTCQSyWQy\nmZyamkqePn06Y3XPyspKlpWVJX/wgx8kZ2ZmkrFYjK35G2+8kdRoNBmre3Z2dnL79u3JX/7yl8n5\n+flkPB5nun/ve99L8vn8jNSdx+MlCwoKkq+88kry1q1byfn5eab3rVu3ktu3b89IvQEkFQpFsqqq\nKvlP//RPSZvNlgwGg0z3//iP/0gKhcKM1d1oNCYPHz6cfO2115KTk5Nsv9jt9uQXvvCFjNVdLBYn\nLRZL8lvf+lbyxo0byZmZGbbmf/jDH5JGozEpFAq3XM/VHhqNJlldXZ3853/+52RbW1vS5/Mx3f/u\n7/4uKZVKk3w+Px26PFKe6sif0ilCoRBf/epX8corr6CgoGBFaisTvUPSsbKyEv/wD/+A3bt3r6Ah\nBjJPd+4ksFOnTuG73/0uiouLV2WuylTd9Xo9vve97+H555+HVqtdkfbfShDOw4R037t3L37wgx+g\noqICMplsRdo/0/riuWf0lVdewcsvv4yysrIU4BbpnWlrDizpX15eju9///vYu3cv9Hr9iv2yuLiY\nsbofP34c3/jGN1BZWYmcnBzWEkp6Z6LuPB4POp0O3/rWt/Dcc8+xAUokNNMiE1v7eDweGhoa8Fd/\n9VewWq2wWCwpez0ejyMajWbEmj+1xl+pVKK0tBRmsxn5+fl4/vnnUV1dnXIZRqNRRCKRjLsQ8/Pz\nUVJSApPJhN27d+PgwYMp/as0SCbTyDbEYjFKS0tRWFgIk8mE06dPo76+PqWuFYvFEAqFMq7FTKfT\nobS0FPn5+aisrERjYyPKysoYxoLabzLlYHLFYrHAYrEgLy8PjY2N2Lt3LyQSCbvI6ULJtP2iUChQ\nXFyMgoICxm1eW1ubMto0HA4jGAxm3EVuNBpRWFiIwsJC7N69G4cPH0ZeXh7rwqEWuXSOjF2PCIVC\nmM1mFBYWwmKx4NSpU9i9ezejlgWW7kWaBZFJuqtUKphMJhQXF6Ourg7PPfccampqGJ8CjYWm/ZJJ\nuuv1euTn56O8vBzPPvssDh06lELURvdiJtmjp9L4k2d44sQJPP/883jmmWdSRlICS55tKBSC3+/P\nqIuFx+OhqqoKL7/8Mg4fPozy8vIVoJCFhQXMz89nHLpfKpXimWeewWc+8xkcPnwYSqVyBblMJBLJ\nOKYtwoO8+OKLOHr0KHbs2LFizROJBPx+P5t5nknS0NCAz3/+8zh69Cjy8vJWrHk0GoXL5dqI+d8b\nKkRCdPLkSRw9enTVM+r3++H1ejPqjAJAaWkpTp8+jZMnT6K2tnbFfonH43C5XBmH7s/KysKOHTvw\nmc98Bp/73Oeg1WpX7JdAIACn05lxQ830ej0OHjyIL33pSzh+/PgKoN/i4iLcbndGtt8WFxfj2LFj\n+OpXv4ry8vIU4Daw1II7MzOTUXf6U2v8s7KyoNPpoFQqV2yE5P9NOzt37hzOnTsHh8OxRZqmCrWt\nKBQK6PV6ZGVlrar7yMgIXn31VVy+fHmLNF0p1Oak0WhWHWZCqfIrV67g97//PQYHB7dI05XC4/Eg\nkUiQk5MDqVTKJshxaZ/n5ubw6quv4v3338+YS5EmOiqVSnaJ0zrTKNBkMon29n
...(base64-encoded PNG output image omitted)...",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Current loss: -2.954233\n",
+ "Current MNIST score: 7.171986\n",
+ "Training step: 3000\n",
+ "Time since start: 14.378457 m\n",
+ "Steps per min: 208.645476\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAf8AAAFBCAYAAAB5Maa7AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsfVlsnOd19jP7vi+c4Qw53Ia7uIiiVmqz5VWO46ROnNgp\nsgNBe91eNLkrWqA3BQq0CAIUrRGgdovWjZc4smMr8iJZomRKlLiL+zYkZ5/hrBxy5r/Qf44/0txE\n0baSzgGEKBY5837v975nfc5zgKIUpShFKUpRilKUohSlKEUpSlGKUpSiFKUoRSlKUYpSlKIUpShF\nKUpRilKUohSlKEUpSlGKUpSiFKUoRSlKUYpSlKIUpShFKUpRilKUohSlKEUpSlGKUpSiFKUoRSlK\nUYpSlKIUpShFKUpRilKUohSlKEUpSlGKUpSiFKUoRSlKUYpSlKIUpShFKUpRilKUohSlKEUpSlH+\nr4roq17AZiISiQqFQuGrXsaeRCQSYbu1C/9dLBZDJBLxfxP++TJFJBJBKpVCLBav+36pVAqVSgW9\nXo9cLodMJoN0Oo18Pg+JRAIAKBQKyOVyWFtbQz6f/1LXvd8iEt27Dg/r2aOzIpFI+Mzk83ne94dh\n3bRGOtv5fB5isRgSiYTPSD6f/8rWuvG+CUUsFvMf+nelUgmxWIzV1VX+81Xc0T9moT0H7p3Rrf7+\np7yntAf0vF/SHdjWvou/6G/fi/wpHgKh0hH+N4lEsqnh/bKlUChgbW0NuVwOuVwOq6urEIlE0Gq1\nKC8vh9VqhUwmY8NDCl0qlQLAH73hJxFe0IdNyElTKpWQSqWfM7QPw7rpTAvXRE6kTCaDWPzVqpzt\nHGyJRMJO7UYhp+WrdFz+2IX2nfZwo0H8Uxahrqe7+1WL9KtewJ+abFQMOx3uhyEaogtJfwfuRUEq\nlQpqtRpSqRQajQYmkwkAkEwmsba2xr+3trb2lax7v2SjEQXAz/ewKHpytmQyGeRyOVZWVpDP5zm6\n/qrfARl5Mvy0b3K5HACwsrKC1dXVr3ydwOfvKO2tXC7nKJ/efS6XY8f4YTkLf2xC+7bZ/v2p76lI\nJIJMJoNOp4PJZIJYLEYul4Pf7+cs6lcl/yeNP0VPMpkMUqkUy8vLyGQyX8h3kVEBPn/Qhcblq74E\nGzMSwgiO0p8mkwnZbBbZbJZT/X8qSlEYhYhEIsjlcn5+qVQKmUzGKeBgMIhUKoVcLvelro1S0sK/\n0xkip2WzMtKXtUbaL6Hxp3Q/nZeHyaEioXdMEZnQ2P+xnfGtAo3dBCX0M1/EGdrK8NMaKONCzqww\nM0BnCgA7kA/7+6B1y+VyaDQa2Gw2uFwuSCQSrKysQKFQIBQKIRqNfq7k+2U92x+F8d8uct5otHba\nOJFIBI1Gg8rKStjtdhgMBnz66aeYnJzct/UKv4sOgXBd+2H0v4z6dCaTQTKZRCqVgk6ng8FgQCwW\nw/LyMoDP6lYP+0XcSTamIsViMWw2GxwOB5RKJYxGI+x2O6qqqiCXy/H6669jYGAA4XD4gZ+dzuxm\nZSH6d/qTz+c5EqV/I0UoVJJSqRT5fJ6zA1/0+xGujwwmKetMJsPZrb2eld3c673Kxv0VvofV1dU/\nqjS/MNDYqDOFzyEsxwgDEzpLYrEYMpkM+Xyeje1+R6jCtYjFYigUCi4rUumRjKdKpYJCoYBIJEI4\nHEYymfzSHO+dZCOeQfjfNBoN9Ho9bDYbnE4nXC4XP2NLSwvGx8dx/fp1LrPS/35Zzs1DafyFdUGq\nkwgjsVwuh5WVFT4AFA0JL+xGKS0tRXV1NUpLS+F0OuF0OpFMJjE/Pw+ZTLYv6xYeBEojqtVqqFQq\npNNpZDIZZLPZXUdAFIEeOnQIlZWVUCgUn6uZFgoF3L59Gzdu3HigtQv3UCqVQqFQwGg0wmq1wuFw\nwGazQa1Ws2LX6XSIxWKIxWI7HlaJRAKNRoP29na43e51IKpQKISZmRkMDw8jm83u66G/HweJ3ofV\nakV1dTXa2tpQV1cHhUIBrVYLk8kEu93O6zYajbh27Rri8fies0YbMRTCSJ4izo3lGKGCEX6OQqFA\na2srampqoFAoEA6HMTU1hcnJSSwuLu5pfduJ8KwLgXK0RmFZaLcGlPahvr4eFRUVMJlMnIrf+C5X\nV1cxNDSE6elphMPhdU7RbtZNBk6lUkGj0fAdTafTWFlZWecQ7lYkEgkUCgXq6urQ2NjI75f0EoFm\n/X4/7t69i3g8jmw2u+vP3+xZxGIx1Go1zGYz7HY7jEYjIpEIn0kyKFR+yWaz8Hg8qKysZJ0qdNzI\nkaSMTSKRQE9PD2ZmZpBMJh84WCGh82IwGFBSUoKGhgbY7XbI5XIuY1HKnPQe6YtUKoV8Pg+FQsE/\nn81msby8jIGBAUxMTOybHhHeS6FzrVKpYLFYYDQaYTQasbCwgEgkArlcDp1OB7PZDIfDAbPZzJlT\nvV4Pi8UCs9kMjUaD+vp6lJWVYW1tDaurq+vA1FNTU5ifn0ckEkE6nf5CnJ2H2viLRCIoFAqo1Wro\ndDqoVCrI5XLEYjHE43EkEgk2kBQFpdPpTQ1rXV0dXnzxRZw8eRK1tbUAgHfffRcvv/zyvm0sXRoA\nfAjsdjtsNhuWlpYQCoXuWxlqtVq89NJL+MY3vgGLxcIAO+Azg/UP//AP+PTTTx/oYgpBh0qlEjqd\nDi6XCx6PB1VVVXA4HNBqtVhbW4NKpcLq6iqmpqaQSqVY2W8lUqkUdrsdP/zhD/HEE0/AarVCIpEg\nnU7j1q1b+O1vf4vZ2VlWug8iQiUj/PtuP9ftduOZZ57B448/jkOHDq0zOplMBplMBi6XCw6HAz6f\nD1NTU3t2WiiqEdbzaS/JUcxms3xe6B3RGaNIVaFQwGQy4Tvf+Q5eeOEFqNVqDA0N4b333sNvfvMb\nLC0t7btTJXRY6A8Zuo1p/vsRqVSKRx55BM899xyamppgMBggk8nWReT5fB6ZTAa/+tWv8Prrr6O/\nvx/xeHxX3yUEqlI61mazwWQywefzwe/3IxwO7ynil8lkMBqNePrpp/EXf/EX0Ol0UCgUKBQKSKVS\niMfjCAaD6O7uxr//+79jfHz8gc48GUer1YoDBw6gs7MTdXV16O/vh9/vh0gkQjweRzKZhNVqhUgk\nQjQaxTPPPIPnnnsOKpVqnT4hETo+i4uL+Lu/+ztcuHABmUwGq6ure1qnMHsjLLO43W50dnbixRdf\nREtLCzQazedwI8C9DFIqleJgQyqVwmQyQa/XY21tDaFQCFNTU/jnf/5nTE5O7tt5p3Xm83kOQtVq\nNex2O1pbW9HQ0IDa2lp89NFHGBgYgMFgQEVFBRobG9HQ0ACz2Yy7d+8iFAohk8mgoaEBVVVV0Ol0\nyOVyeOKJJyCRSJDP5xGPxyGRSKDX6/H666/jvffew+DgIPx+//8d469UKtelQcigk7FPp9PIZrN8\nEAmZvhkwRy6Xw2g0oqGhAV1dXXA4H
Mhms/D7/RgYGMCnn36KYDC4b+vW6XRYXl5GLpdDIpHgl5pI\nJHjdu02heb1eHD16FPX19TAYDOylk9Azd3V14a/+6q9w6dIlDA8PI5lM3leaTiqVoqKiAlKpFLOz\ns6y8yeDNzc2xcY7FYtBoNGhsbIRUKkU0GkU4HEY6nV63LolEgrq6Ojz55JNwuVxwOp04ePDguueQ\ny+WoqanBN77xDdTX1+O1117DhQsX7sto7KY+eT9RZ2lpKc6ePYuKigqOegYGBjA6Osoe+NraGlKp\nFAwGA773ve/BarXi0qVLGBoawuLi4q733mw2w2AwYHFxEclkcl3kLEy30vrJcaTsFmUH5HI51tbW\nOJIljIJGo4FUKt33FCI5HIXCPUAcOSGkoPYSNQNATU0NOjs72UHX6XTr1i/EoigUCpw5cwZra2u8\nf7vZd4PBAKvVilAohHQ6Db/fj3g8DoVCgUQigVQqxdm53YpYLIbVakVLSwueffZZHDlyBEajkSN/\nAFCr1fxO5HI5TCYT3njjDbz55ptYWVnZMxBSIpEgm81iZmYGy8vL6O7uRiAQQCaT4e+nCL+6uhpn\nzpxBS0sLVCrVOicym82ygU0mk0gkErDb7dDpdPjWt74FjUaDl19+GdFodE8lAKETLRKJYDAY4PF4\nUF1dDbfbjUKhgHQ6DY1Gsy6rtLy8DL/fjzt37mBqagoOhwOVlZWoq6tb5yioVCqYTCYolcp9Pe8K\nhQIKhQLJZJLfk1wuRyqVwujoKPx+P27evImFhQUUCgV0dnaiqakJlZWVKBQKmJ6extTUFORyOTwe\nD0KhEEZHRzExMYFgMIh4PM73VSKRoKKiAq2trfB6vaioqEAkEsGlS5fwyiuvbJnV3qs8lMafUk7C\nND4hcDemQ3cSnU6HtrY2dHR0oK6ujj3E4eFh9PT0YHx8fN/WTenyZDLJyjubzbK3er+RUHV1NZ58\n8klUVlZCJpMhnU4jHA6zs6JWq+FwOFBXVweDwcDe8sDAwDpjvJOIxWIYjUZIpVLMz89jZWUF2WwW\nyWQSwWAQoVAI8XgcsVgMSqUSHo+Hlcpm7VsymQxVVVU4ffo0vvvd76KyshImkwm5XA7Ly8uYmpqC\nwWCAxWKBzWZDSUkJOjs7sbi4iO7ubsRisftKh25VE76fvabshNfrxYEDB5BMJtHb24t3330XV65c\nwa1bt7h+LRKJUFJSgrq6Ohw+fBhdXV0ctYfDYa617ySE3M/lckilUpv+zEasCBkJ4Xmiv1MqWyQS\nIZvNrkv/7qcI08LC59yr0qXartfrxZNPPomWlhYYjUY+d8lkkhHTLpeLwboHDhxAIpHAG2+8gcXF\nxV1FR3K5HHq9HtFolA0erX0vgCsyPI2NjTh37hy+/e1vw2q1Ip/Pc3qaOmaEJaSqqir4fD589NFH\niEaju7qvW5Wx0uk0fD4fpqenuRxKzrVWq4XBYIBUKkVDQwPa29vhdDqxsrICn8+HeDzOa00kEkgk\nEojH44hGo+js7ERnZyeOHz+OaDSK//7v/+affxChtRmNRigUCmSzWYyNjSGdTqOkpIQdJSqRTE1N\n4ZNPPsH4+Dja2tqgVqvR3t7OuARK+ZNDt59C7Z9Ch5y+j4I7upMOh4Pfcz6fx8LCAmZnZzExMQGT\nyQSz2YyRkRH09fXhxo0bWFhYQCqV4oyCTqdDQ0MDlpeXcf78eXR0dHDW6L/+67/2vVPmoTT+5HkL\nAUQU5d/v5XS73fjZz36GI0eOIJ/P46OPPsL777+Pnp4eTE9P7/u6Y7EYp/IoAhJGprsVkUiEiooK\nnDlzBnq9HqlUCrOzs3jrrbfwxhtvIJ/Po6mpCT/60Y9QXV2N6upq/OAHP4DL5cLf//3f39clyOfz\niEQikEqlvNdra2uYmZlZl8olAxOJRDA9PY14PM71VuG6jUYjfvrTn+Kpp57iGv/a2hrC4TCuX7+O\n//zP/8SpU6dw/vx5WK1WqFQqiEQiuFwutLS04M6dO/D7/bte/354+jqdDo8++ihOnDgBmUyGCxcu\n4NVXX8XU1BRHUsL6eyQSQV9fH4LBIEwmE5599ll2vMiw7CSBQADxeJwzWzsJ3YftwHP079PT0/jd\n736372ecvoMi/v3Ye7lcDpvNBq/Xi0OHDkGn02F6ehrvvPMObt26hdHRUWi1WrS1teEv//IvUVFR\nwQ6IVqtFaWkpZmdnkUgkdvyuVCqFQCCAVCr1uRT2Xp5FqVTCbrfjm9/8Js6fPw+j0chZv4GBAeRy\nOdTX18NsNvM5J+NntVpRUVHBWaWdZOP6KDOXzWY3zX6trq6yEy+VShGJRDh1HovF8Mtf/pLLhaRv\n6e5T0NXc3AytVstO5V5lo8OQTCYxMTGB6elpiMViKJVKLu86HA7o9XrMzs7C7/cjFoshEokgn89D\nJpOhoqKC9VEmk8H8/Dx6e3vx4YcfYmBgYM9r3EwymQxnoGlvl5eX1znrlM0g+zIxMQGz2YxQKIRQ\nKIRIJAKRSISrV69iYWEBS0tLWF5eZjtBe53L5dDb24upqSnU19ejs7OTHXy6b/spD6XxV6vVnwPd\n3O+Di8VidHZ24vHHH0dHRwdsNhsSiQRu377NL+iL8BIVCsU6xrv7NfoA4HQ6cezYMZw6dQp2ux25\nXA4TExN466238O677+LGjRvI5/NIJpNoaGiASqVCR0cHqqurMT4+DqVSeV8IaZFIBLVaDbVajVQq\nhWg0imQyyQAfOtx0+cnJIcYz4ee0t7fj7NmzOHv2LOrq6iAWi+Hz+TA5OYm+vj5cu3YNH3/8MdLp\nNEQiER577DFUVlZCJBKhuroaXV1dWFhYQDAY3NU7p7oneeB72W+pVAqDwYBDhw7B4/FgamoKt27d\nwqeffrolspgirPHxcYyOjqK0tBRtbW04deoUbt68uauMEqUR78ej3/h8VKo4fPgwXC4XRwrBYBCT\nk5NYXl5el5nZDwVC6Gzh3dzr50okEjgcDjz++OM4efIk7HY7Z+UuXryIwcFBzM3NQa/XQyaTYXp6\nGnq9HgaDASKRCBaLBWfPnkUikcDs7OyO30ep93g8zgp7L0LZivb2djzyyCPo6upi8NbU1BSGhoYw\nMDAAlUqFkpISyGQyBt5Rur22thZPPvkkZ9h2ko2R/05YGwD877lcDktLS7h06RLEYjECgQA++OAD\nDA8Pb/m70Wh0HZBzryh0YZaQggkACIfD7GyQMyeXy2GxWKDVarkkk81mIZFIYLfbUVtbi8rKSgDA\n0tISpqamcOPGDdy6dQt9fX3w+/2cql9dXWVM0l6F8AlkoMlB2ug4UgtfKBTiDODy8jKSySTS6TRS\nqRRSqdTnHAfgs/dIThdlFQqFAuN+9rt0Bzykxt9sNvNLI68LuD/PXCwW4/nnn8dLL70Ei8WClZUV\nRKNRDA8P77t3SKJSqRjduba2hpWVlT19jtfrxc9//nPU19dDJBIhk8lgeHgYv/rVr9ZFcsFgEB
9+\n+CHcbjc6OjrWMUiJxeJdGxVSwBaLhdP45PFS5kIowkyMsFVNLBbj/Pnz+PnPf76uHDAyMoK33noL\nb7/9NkZHR1EoFBCPxzE7O4uqqiouI9TV1SGfz+Pjjz/GyMjIrhQzOVy0nr0oKLlcDrPZjObmZphM\nJly7dg1jY2OIRqNb/g4Z4f7+fly6dAnPPvssamtr8cILLyCVSu0KcUx7tlWb305CCrO+vh4/+tGP\nUFdXh7W1NSwvLyMajSIej2N1dZUV+H61ZkokEuh0Ov48YabrfkUul6OyshI/+clP0NTUhEKhgKtX\nr+I3v/kNBgYG1rWVUrRoNptZwTscDjz//PMIBoN45513dvw+Kh0kEgl27PaybqlUCp1OhyeeeAI/\n//nPGfOQSqXQ29uLN998Ez6fDx6PB0eOHIFSqUQymYRer2dcRmtrK5xOJ27evIm+vr5dnReh3M+6\nZTIZ5ufn8etf/xpLS0sIBAI7/j6lsAmQRkHN/YpMJoPBYIBWq4VCoUAmk+EzSvqFso6pVAorKysM\nRCTnXq/Xo6qqCs899xwOHz4MqVSKiYkJXLp0Ca+99hrGx8cZKU/dD0RH/iDnXalUQqVSrQNqb/w8\nWj/pvLW1NS790vrX1tYQiUR2BZik+0p3eSsQ+4PKQ2n8Dx48yOkpn8+HQCDArHK7EbfbjcbGRni9\nXq6FX7lyBb/+9a/R09Pzha27oqICJ0+exPz8POMKdgtEouj7a1/7Gp5++mm4XC6k02nMzMzgzTff\nxPvvv49QKLTud+LxOPr7+zEzM7POOz537hwuX76MgYGBXR0amUzG6NRAIIDr169zOrFQKKyr7QvT\npYRWF6Zsu7q6GKQ1OzuLmzdv4uLFi/jggw/Woc6FdbNMJsOXrLy8HN/5zndgNBpx4cKFLWvhJCaT\nCY2NjdDpdGyMFxcXd+0ti0QinD17FufPn4dSqcSNGzfwyiuv7NpBnJqaws2bN9HV1YXKyko0NDSg\noqICFosF8Xh8WwdQo9FArVYjFoshnU5zanGrdRISmAhDysvLGaNAuA8CsyYSCeh0OlitVlRVVeHI\nkSOwWCwAgLGxMQwMDKC7u/tzZ2o3otfrcfz4cW59vHHjBkZGRrC8vLxrNLhCoYBOp8Pzzz/P5aGZ\nmRn09fWhu7t7XWaOOnmCwSA++eQTTE1NwWQy4dFHH0V9fT20Wi3q6urw6KOPYnh4GAsLC1veOafT\niaNHj3Kb2Pz8PDKZzK4Vq1gshsViQXNzM77+9a/j5MmTEIlEWF1dxejoKN544w309fVhamqKW/uW\nlpYQjUY53W6z2bicZzKZ8MILL0Cn0+Htt99m0peNIuzIuV/80OrqKnw+H2QyGTsou/ldqkU/KM+C\n1WpFZ2cnHA4H1Go1pqenMTk5uQ6fsLGDhb6Pur3OnDmDRx99FFVVVchms5ibm8Ply5fx+9//HgsL\nC3zPKisr0dzcjI6ODkxMTODXv/71A+Fe7HY7HA4HFhYWEAqF1rU2i0QiqFQqGAwGlJeXo6qqCvX1\n9VhdXWWQNGFWhKyo24nFYoHH44HFYuEWaWqN3m8H4KE0/k1NTVheXkY4HOY+z2QyiXg8zoCT7ZSk\nx+PBE088gerqashkMsRiMfT09OCVV175QtInJA6HA52dnUyD6/P5+IURvelmXqNIJIJer4fL5cJT\nTz2FJ598EkajESMjI7hy5Qpee+01XL9+/XPfR87BzMwMlpaWYDabYbVacebMGcTjcczNzTFKdTuR\nSCRwOp3sLPn9fiwuLnIWQdiPHovFkEwmkclkIJVKodVq0dzcjJMnT+LZZ5+FwWBAoVDgLMvbb7+N\nq1evor+/f90zb2xtIyeDFOPy8jI+/PDDdbX2zUSn06G6uholJSUcfSkUCiwvL/PF2+qdE2vfoUOH\ncPbsWYyOjuKTTz7BRx99tKPTQbK0tITx8XFGjLvdbng8HrhcLk4PbrV+6g6hFiqRSMS8/VQSKBQK\nn6uHVlRUoK6uDk1NTWhvb4fNZuPsRzgcxtLSEnK5HMrKytDc3IwjR46wQykSidDb24tPPvkEIpEI\nAwMDmJubu6/UvUajQVNTE7xeL1QqFa8zGAwiHA5vacCAz4yY0+lEQ0MDvv71r6Orqws+nw83btzA\npUuXcOfOHXYU6X5Q5HTr1i0MDQ1BLpejoqIC1dXVUKlUqKmpwblz55DL5RCPx5FKpTYNFoxGI6qr\nq7G4uIjFxUUmahICurYTqVSKuro6PPbYY/jud78Lu92O1dVVzMzMoLu7m1sr6V1GIhEMDg4CuJfm\nXlxchMfjQWtrK7RaLbRaLc6cOQOxWIzx8XGMjIwgHA5v+t3CFjihIdhJp+Xz+S0/czuhlPeD6kyV\nSgW32w2Xy8UdKMA9/ZVIJNhRF85PoLNIBpB0jNFo5E6tnp4e9Pb2MuZBoVCgtrYWZ8+excmTJ3Ht\n2jX8z//8z54zFgDYgaaOLSHvAAF/vV4v6uvr4fV64Xa7+WytrKxwgEM9/NvdC5FIBLvdjs7OTpSU\nlEAikfDdV6vV3OG28V7s9f08lMa/rKwMfr8fyWQS1dXVqKmpgVKpxMjICD7++GM2pBuFFEtdXR2+\n9a1vwWw2Ix6P4+bNmxgbG/tCDT9wz0s1GAwA7l04s9nMtUqfz4doNLruZQnrXI2NjThz5gzq6+uh\n0+kAAFevXsU//uM/bkvSUijcaye5cuUKjh07hpKSEhw/fhyBQADT09MYGRlBIBDYce3ZbBahUAh3\n795lvERzc/O69Fk6nUYymcTi4iLGxsYQi8UgkUhQVlaGsrIyNubpdBr9/f344IMPcOnSJSwsLPD3\n0KEltLzVamXPFrh32c1mMywWC6cct/N4qfZKLThHjhxBS0sLIpEIenp60N/fv6XTpdfrUVlZidLS\nUqyuruKtt97CH/7wh/vqNCD0L11MqVSK8vJytLW1sfOxlVElRUKpQTJiVqsVPp8PiUQCa2trKCsr\nQ1VVFerq6uByuWCz2Zg/wmw2875nMhlEIhHMz89DLpfjzJkz6OrqQltbG/d5A/da6mw2G44fP47X\nX38d//RP/8Tp1t0IKWTCt7S2tqK8vBzRaBQ9PT24dOnSls6EVCqFXq/HqVOn8OMf/xjV1dWYnZ3F\nyy+/jGvXrjH5jfB9UfmPyhhEz724uIjl5WVuo3rssccQCAQQiUQYPb5RqHU4Go0ikUhApVIx10I4\nHN42KiZCneeeew7f/OY3YTQaed9ff/11vPXWW5icnEQ2m4VUKuXWXr/fz1keSiH7/X7O4hgMBjQ3\nN+Oll17Cm2++iffee2/L6F/IsUB780WyEJKheRBJJpMMnFWr1XC5XNwDPz09jbm5Oc4aUcmRsjHk\nQFO9nLI1ly9fxsTEBLdOU/B07NgxPPLII4wboJLvbsCgm0k2m0U8HmdyIWop1Ov1EIvFaGlpwdNP\nPw2HwwGZTIbJyUkMDg6iu7sbS0tLiMViu8JmkPNSU
VGBc+fOoaysjDkp9Ho93G43QqEQdxiQY0ak\nVHuRh9L4z8/PIxwOIxwOo6SkBHa7HXa7HSsrK+jt7cXy8vKmikWtVqOsrIw9MOBe9P3BBx+sizy/\nKInFYpicnMTs7Cyj52UyGc8QEILmgPWXOZfLIRqN4vbt25ibm0M4HMaFCxdw9+7dHb93fHwcv//9\n7+F2u+F0OmG329He3s5pqp2M/+rqKubn5yGVShl9azQaodFoIJPJEAgEEA6HuQUvEAggFApheXkZ\nEokE0WgUkUgEwWAQhUKBgUWffPIJ5ufnkU6nWXHabDZ4PB54PB7U1tZCo9HwgVYqlVAoFKyQt5qw\nJpRUKoX5+Xkkk0kYjUbU19ezUiaqUDISQuAetWgZDAaEQiH09vair6+Po+DdCvFR0O+IxWJWNjvV\nk1OpFDOXaTQaOBwOeDwemM1mAPd6nAuFApqamnDgwAFUVVXBarVCo9FAp9NBq9Wum6pItXeLxcKE\nWCqVCuPj47h27RoTo9TW1jLTZTQaxezsLO7cuYOJiYnP7dNmks1mMTs7y05VSUkJamtr2VEbGxtj\nRbXZ8xcKBYRCIfT392NgYAA+nw8XL17ExMQEIpHI534W+AzLQeAwpVKJWCyGRCLBbapEPrVdnTcY\nDDIJjkQZKw8UAAAgAElEQVQigcfj4Va8wcHBz1EiCz9Hq9XC5XLB6/XC4/FALBYjmUzC5/Ohv78f\ng4OD7LhQ/z2BL9VqNTQaDbRaLe/v2bNn0dXVxQEAvbOtREg/TJHnXkh3dhIyOhv59PeKTSHjT62O\nVHIRAuIo+0X3h84WGTpquaM2ViLNoYDPbrczj4jf72cCLrlcvimR0W4lEolw7X11dRVqtZp1uEKh\nYEZQvV6PeDyOqakpjIyMrMu6ku4nsiAhzoFImohsqr29nQcAUfbPbrfjzJkzCAQCWFxcxOTkJOvM\nB5GH0vh/8sknyGazSKfTMJlMMBgMcLlcWFhYgMFgYO5noSEtFAowGo3o6OiA1+tlJG8gEMC7776L\nO3fufOHrJo90amoK0WiUa+WbpWaEhp/q48lkEv39/chms+jv79916vnu3bsIBoM4c+YM98EeOHAA\ner0e3d3d6Ovr2/b3c7kcRkZGEI1GEY1GodVqIRaLMTMzg5WVFfT392N+fh6hUIi98Fgsxj3MIyMj\nTFsZi8UwMjKC3/3ud+jr6+OUnEQigcViQXt7O86fPw+bzcYMaIFAAOl0mqNZKjeQbKdsYrEYBgYG\nYLfb4fF40NLSgurqajidTga5Xb9+nVvB6OKRUyYWizE4OIihoSHMzc3dN92qkIOCntPn82FwcBDR\naHRb5RyJRBiNT5mQkpISKJVKpFIpJjFpbGzEgQMHYDAY1rFZkmKh7yY0eVNTE4B7Z+zy5cu4cuUK\nrl69CqVSicbGRnz/+9/n/Wlvb0dpaSlefvllvPXWW5iamtoWewDcc0p6enpgMpkgk8lw4sQJVFRU\nwOPxIJVKYWBgAAMDA9znLoweKVK+fPkyrl69yqQ6232f0AEghScSibgvnZg9g8EgpqenmZBqM5mb\nm8PHH3+MTCYDg8GApqYm2Gw2yOVyJqwSsu4JU+xmsxk1NTUc9dE7nJiY4Mwe7d3G955MJtkJHxoa\nwocffoi1tTUm3FleXsb09DSjxTcTofF/kE6FnYRa7+hskTEWUoDfj5DxV6vVfIbz+TxmZmY4A7NZ\nKxvtu0wmg1qt5pZDIaMkdU+43W6cPHkSmUwGFy9eZNrwQqHwOR6S+xEhMyvdPSpV2Gw2FAoF7gaI\nRqMYHBzE3bt3ufRFjhrtnZCSXqlUwmAwoK2tDW1tbUzmJiRbyufzcLvd+OY3v4lwOIzR0VG8/vrr\nGB8f37Rd9X7koTT+paWlrFRlMhnm5uYwNzeH8fFx5HI5TpEK2fLsdjunYFpbWxkdTNHTg6audiOU\niiE+ckITr6ysIJlMAgAjZzdGF4lEgg8Q9efu9pKRQv3kk09gtVpx8uRJKBQKjtx3urAUSZFSo5GT\nPp8PyWQS4XCY2c9SqRR3ApARra2t5XJFLpeDTqeD0+kEcK9zw+v1orq6GhqNBlarFW63G5lMBqFQ\nCNeuXcP8/DyWl5fxxBNP4IknnoBarebOhZ0urkQigUqlYuKXVCqFmZkZzM/PIxAIwGg04vDhwzCb\nzejp6eF0IkWKxLC3uLiIWCy2q/0WCikm4ocn2uJoNLpjBE2dChTJSyQSLC4ucv2/pKQE1dXV3PZE\nDqVUKoXL5YJOp1tH90vsccA9p2hhYYHr+wsLC5BIJFheXsaBAwdQVlbGhDl2ux2PPfYY5HI5/uM/\n/gOTk5M70jUTU55cLodSqeQ2p5WVFVgsFo6myQjT+aMMBT3jTo7GZkJkRjR8SS6XIxAI4Pbt2/D7\n/dsqRJPJhPr6eiiVSlitVni9XibJMZvNqK2txdraGqLRKIMhXS4Xzp49yzgHogcnchzidKfBVzs9\nE+m2dDrNEeXi4iJ6e3sxPz+/5e8JM4dC0N9+p/ybmprw4osv4pFHHsHKygqmpqawuLjIGbXV1VWs\nrKwwGdhuuhSEI5Pn5uaQTqcRiUR4QujG0gVxIbhcLpw4cQJer5fvSGVlJZ5//nmcOnWKMTfZbBY3\nb97kACaRSDDAcmVlZc+gRSoxUaQul8s5k2m1WlFSUgK9Xs/lCcLoEAi6pqYGZWVlSCQSbLCJBI0G\ny1GWVSQSYWlpCUtLSxgbG4PVamW9ShlBAnYLuQf2Kg+l8bfb7Xyo4/E4ZmZmmDCBFLdMJmPFQtSV\nR44cwZEjR1BaWopMJsP0iUajESaTaR23Pgn1ChOVKEWEwGe1xnA4DL/ff1+I4EKhwN4b1R5JUQuB\nRXToKfW1FwAHeee9vb1wOp04dOgQo0R30/ZHB5t6t0OhEPx+P+MuADDlL6WyCoUCI3GrqqrYuOfz\neTidTrS0tKCmpgbl5eVob29Hc3MzdwsEAgHMz89jeHgYFy9eZKQ48RvQhC8yZttF40Ljl06nMTc3\nx8x2MpkMJpMJDocDhUIBd+7cQSaTYXIYYt2am5vD5OTknlDBer0edrsdWq123ez63XSnEF6BDCU5\nDdlsFk6nEzabDY2NjZifn8fIyAiTLEmlUgSDQZSWlq4DY9K5kcvlWFpawsjICHp7e7ltktLtN27c\ngMPhwOHDh1FaWgq9Xo/29nbk83m89957OyLgSTFT+jadTmNpaQmRSAShUAgqlWodKczG+v2DRqzE\n0mexWGAymXg/+vr6EAqFtt13s9mMxsZGmM1mZpdcWFhAOBxGdXU1XC4XADBjnkKhQHNzM55//nk0\nNjbyWaJ2NcLVkFLfbZAhzChQG/Lk5OS23RfC3xHyWuyXyGQyjqBffPFFmEwmxGIx9PX14e7du0wD\nTkYoEAhgfHyc9dZWsrFMQcZ5M8eB/j/dawJoGo1G1t0mkwknTpxAoVBALBZDd3c3bt68iTt37nCg\nQg4KldR0
Ot26rBlNJ91p/yjQpDWRbidsT1lZGbRaLeO5CEdUVVWFjo4OtLW1wev1IhwOo7e3F6lU\nCgqFAi0tLSgpKYFWq2VHaHJyEjMzMxgaGkIul4PZbIZIJEJTUxMqKiq4REmlwj/JtL9er+c2mcXF\nRczNzbFnRbz/pMzISJw5cwbPPPMMzGYzK1GqPR4+fBgymQzXr1/nNCPwWQ/pyZMn0dnZCYvFwulr\nqrmEQiG8/fbbePXVV3cE1lAbjXAmARlKOswE8BJGJ/eD3N1K1tbWMDs7yxSfQk+Vak1bfTZ580R3\nGolEsLCwwJeaenA3Oic02crlcsHlckEsvjftz2azoaamBiLRvcFEZNwkEgn8fj9++9vfore3F8PD\nw8x2RRcyGo3CaDRCrVbD7XYzWcZ2z53NZhEMBpl5UC6XQy6Xo7W1lWdoE+c3AWuIV9zhcOwZDCoS\niVBeXo7W1lbo9fp1Udlu+seFZR9CqJNjQvTHZrMZly9fxocffojl5WXOutCYU2D9VD2lUgmTyYR4\nPI7p6Wn4/f7PZZmuXLnCCvjYsWM8UMVkMjFQartxwGtra8xJLpFIEIvFOB1L/eBCZPJ+Ce2X0Whk\nymgyuKFQCENDQztmb2w2G9MHU2aMulZoL6VSKXw+H2ZnZ+F2u1FVVcVDhuj5A4EAbt26hYsXL+Li\nxYsIBAK7ivoB8Hk0mUwoKSlhMNpGTJBQCO+wXb/5g4rBYMBPfvITPP3007BYLIjFYhgdHcWFCxfQ\n3d2NeDyOc+fO4Vvf+hYkEgm6u7vxL//yL1zf3kmIQpjApTsZL+r7D4fD8Pl8zI+g0+mYk0QqlcLj\n8TDugjKAlMFMJpNwOp1wu91wu93My//hhx+ip6dnR6p4wmJQdmJ1dRUSiQQmkwkdHR08vZEyoY2N\njThy5AgaGxsZGKhSqWC321FaWoq1tTXmySCKbOAeoRKVC3t6epDL5ZiWmaivicOA1vGg8lAaf+Go\n1+npacRiMQZBUNRMKdXy8nJ0dHSgo6MDpaWlSKfTDHLL5XIMpLDZbCgrK2PjLJfL2YOuq6tjABqN\nYaTDE41GMT09jZKSEsRisW3RwNTaEY1G2aBRWlgYDS0vL7PX+6BGf+O+CZUPIUUNBgOzaW0m+fy9\n4UPU+kTIVmE7nhDYRkqKlP/y8jJisRj0ev26trRoNIq5uTkEAgFOWc/OzuLy5csYHR2Fz+dbpwTC\n4TBmZ2dRUlLCCOilpaVtU6Grq6tcy6fns9lsqK2tZUPm8/kYjEhoXafTCa1Wy8ZCr9cz58BuLpZG\no4HZbEZHRweOHz/O5E5CMM9OaX/KLBGRh1QqRWVlJSorK+F2u5FKpfD+++9zHz1lXYR4l411ReLA\nX1lZ2bK9KxAIYHh4GNeuXeOJcHRWampquNyzVVdNLpdDMBjk7IZKpYJer0dpaSk0Gg0D3vaTi5yy\nDUajES0tLTh16hQjomkfiVhlO8nn8wxYpNZSjUYDt9vNe0rgU7vdDrfbDbvdDoVCwc71wsICZmZm\nMDo6ir6+PkxMTGw6xXAjuFfooBkMBr4v6XR63STE7dYO7I+u2CherxdHjhxBV1cXG5q+vj5cuHAB\nN2/eRDgchs1mQ0VFBRoaGtjxHB8fx5UrV9DX17clBS05hHRHd0NVSzrH7/ejp6cHi4uLKCkp4dbe\njo4OmEwmPhdarRZmsxmrq6vshCkUCrYPLS0tsNlskEgkSCaTnJWdnp7eNgNAaxeWC61WK2pqauDx\neJhbg0Cm5eXlcDgcaGhoYH2fz+fZwRRmUCl4pWegMl4wGGSMBd0lYTfUbsqhu5GH0vgnEglMTEzg\n448/RqFQgE6ng8FgYOYkqjeJxWI0Nzfjxz/+MTweD9dMFhcX4ff7UV5ejsrKSqa/pYMnFou5ThOP\nx3Hnzh1Os5jNZqjVagBg5HlFRQWqqqo4xbVVRLSyssJjO6kOqNVqOUVJXicZ2/304EWizyh6SemI\nxWLu/acLsZlQqp9Y9wqFAq+dsAvE6kfGhhCsuVwOk5OTcDgcqK6u5mhseXkZY2NjePfdd9Hb24uJ\niQlGYxN98MY0eyAQwNjYGBoaGmCxWNDZ2YmRkRHcvHlzy+cm45nNZrGyssL930R3rFAoMDExwYx7\nBoOBOyJEIhHjSGw2G1/U3aRTDQYDGhsbcerUKZw9exZKpZIVBdUJdwIPUjYlHo/z5Xe73Th48CCM\nRiNu3ryJV155BbFY7HN7tVUqXYgv2Uoo03Dz5k2ucVP7XktLCxYXFzE1NbVlhLGysoKlpSXG1RDa\nv7q6mqdaqtXqfY1O6Xx7PB6cPn0aL730EjtcdCZ3k3aPxWIYGxvD6OgoRCIRjh8/jrq6Ou4mopZW\nGttK9zaRSKC7uxtvvvkmbty4genp6V09H61HyJlBo74p20BR/U5z278Io09y7Ngx/OQnP0F9fT2X\n2j788EP88pe/xNraGhwOB7xeLywWC6+jqqoKP/jBD5DP5zExMYFEIrEp0JKe7X64JMj4E96LDKTT\n6cSpU6dgMBhQVVUFrVaLbDYLmUwGl8vF/CbBYBButxvV1dU4deoUurq6ANzTF4S1IKKx7Up01Lkj\nzMqVl5ejpaUFVqsVYrGYPyOfz8PhcDAQUDh1lkp8+XweiUQCU1NTkEqlcDgcAO7ZmsrKSjgcDv49\npVKJ8vJylJSUAPisREyl6fsFJ2+Uh9L4f/TRRwweEolEXMulSJx44QkBSpFOIBDA5cuXMTY2hqWl\nJdTV1aGqqgqlpaWwWq3cOaBWq7k3XKFQoLq6GtlslmveBEYCwPX4cDi842YTPiGZTLKnbzQa4fF4\nGKMQjUa3TanuRQj0Ri10xHVPHudOSnFtbQ1LS0sMVgM+O7C0/xspXAns5PP58Ic//AFDQ0PMt07R\nFUXywWCQ2wILhcK6ccxCSafTiMfjTNSzm/opKRbh5SWjurS0hNXVVczOziIWi3HkaDKZ2PkIBoM8\nSIpSvjvttUajQWtrK/78z/8cLS0tHOVS7S4ej+/KgaAsFp2FtbU1Tl/7/X5MTEzwfuyXUNZG2DpF\n7zWbzcLn82FxcZEZB7daN0VLlHlJp9PQarVQqVQIh8P7hkSnd+LxeNDQ0IDjx4+jo6ODwZXUfRIM\nBnnd28nk5CTeeecdRCIRiMViLC0toaqqCpWVlbBYLDAYDFCpVJzFCwaD8Pv9HO1PT09zFmk395d+\nhs4nOeTE6VEoFDA0NITbt29zYPBViF6vh9PphEKhgN/vx+3btzExMQEAXPZbWlrCO++8g4mJCaYm\npsmU1KGymdDZvl9ncKODm8vlEAgE0N/fj3feeYcZA+n9EKteKBTi4UpTU1P43e9+h6GhIVgsFm4F\nLysrw4EDB9Db28uDnjZbm3CwDwWfoVAIwWAQmUyGHUbChRFGIhAIMMEPAdMpWMpms0gkEqxLKNNJ\nOIqqqiqYTCZUV1fD6/XCZDJxUELOy157+4XyUBr/kZERToGJxWImy
iClTh6Yw+GATqdj5b+wsMBt\nRoQgp/SKUqmExWJh8AcZKblcjrKyMibmIOeCUsbAZ+hiSsNsJUI+fDKchEmgTANFqkKlu1chRS6X\ny6HRaODxeFBeXs7thYTGFbYubSaUiaC/A2AHiMofdHiFv0NI3Tt37mBwcJANCKXI7vfZMpkMYrHY\nfRk7Qo8D4FRYOp3G4uIiR7TUS0s1VpvNxqk3yojsRJlK6HuagXDy5EmcO3eOI/61tTUkEgn4/X5G\ncO/0/NQ7DXyG+6Ba/e3btxEIBB7YuxeKsBuF2AWVSiV/fyaTwezsLNMjb2WI8vn8OurdVCrFDGjU\nzUL/ttfzTehwq9WK0tJSNDY2orW1FceOHYPT6WQnjXq+iXtip/0iYCLtPaXyp6amcOjQIdTV1UGn\n0yEQCGBoaIgBkzTq+n5E+OzCe6XX63H48GFUVVVx1EyT/b5s40/EZCUlJeuyk/Pz88jn8ygpKeFs\naSgUQjgcxuDgIFKpFNra2lBWVsY4mq14OR5Ezwl/jxzNubk59Pb2Qq/XQyqVIh6PIxKJMLskTTGk\ntP7CwgJu3ryJ0tJSHDlyBA6HAyaTiZ0An8+3JTeEsJRK+p86Qaj8JhLdm78SCATYSbxz5w7jQMiR\nFrIXCmd6VFZWsuOl0+m4hE3ts0T6RsZ/q3Lc/cpDafypv5QekBS8WCxm4+1wOBi48+mnn3K6hNqu\npFIp13UIsSkkjhHWxY1GIxvKgYEBXLlyBY8//jhaWlqgVCrhcDjQ1tbGkd1WB5mcBAJ7USQnFot5\nel0kEsHVq1extLT0QANRyDBLpVKuP1dUVKCsrIypXmOxGPc9b2dQNwMakSGnLgCqYQuNGv2OkMaW\n/n0vz7TbFrmNa9+47mAwiO7ubhw+fBitra3weDwAwDiLXC6HSCTCl4iAdOTMbFw7gTXPnz+PP/uz\nP4PFYoHdbmdqXnJICW+yW0CTEPBH0Til7fd7hCcpaK1Wy46wx+OB3W5ng03lNp/Pt2NmSohVoTLA\nxx9/zBwLwgluuzkLGx0FpVIJp9OJp556Ci+88AJn+DQaDTukQgW/uLi4IxW0cO3CklsgEIBEIsFj\njz2GpqYmGI1GhMNhzMzMYGpqCgsLCw+sbOk7CW9ErV75fJ67JPYTI7FbcTgc6OrqQmNjI9fxdTod\nmpubsbq6CqPRiMnJSczPzyMYDHIANjY2xqUdQrBTbfqLdGCEnQPC7B0BWOk9UaaVypgikYg7Zurr\n61FWVsY2JBgMMp30Zmd1o44hqnkC7ul0OqbfHR8fx/j4OKampjhCFwLUN2YzCoUCfD4fIpEIVCoV\nTp8+jRdeeAEmkwk6nY7vEdnE/Ryl/VAaf4PBwMA88pKUSiU8Hg/PxrZarbBaray8CYF58OBBiET3\nOOgJHUqAO4lEwoh12lQ6SJFIBN3d3bh69Sp6e3tx4MABNDc3QyQSwWw2o66uDkNDQ9umosn4UxRJ\nSPR0Og2n04n29naOOIeGhhAKhbhWtNuXqdFoYDAY4PF4oFAosLS0xA4K9ZTKZDIe6Uoprd0YIhKh\nYtdqtaj4/4NqNBoN5ufnuZWOgDOUun7QvmOhUiYa2N2MJxZmZCgqDQQCUCqVqK2thd1uh0wmY+Ps\n9/uZoZCmu1EpgvaJzlxZWRmXU86fP4+jR49yTy6l/FKpFNf46dxuh9wWrpuUJQFDbTYbjEYjg+72\nAzGvUCi4rc3hcDCF8sGDB1FTU8N1y3A4zDPf7+c7KSKbnp6G2+2G1+vFqVOnoNPpMDAwwONzyTEg\n8KuQ9ZLeIb1/lUoFh8OBY8eO4dChQ+vKScJMzfLyMiYmJhiDsNtUvPC7iFTFYDCgtLSUEdWklB9k\nMMzG76VsJpUXaN+JQOvLEoVCgZqaGh57Xl9fz86nWCxmw0h3ELh3t9RqNaxWK5xOJyQSCYLBILe9\n6fV6BAIB+Hy+L2T+vEwmg8VigdPphMlkYhZYau+jrA85h+SYCjN6k5OTuH37NvfOHzp0CIlEgrFi\n203yBD7TjZTVJWpyAsFOTU1hbm6Oqag3crps/BwA3IFGzqvdbofZbIZKpUImk+GM5n63dz6Uxt9i\nsUAsFjM9KL30Y8eO4aWXXmIWOAKD5PN5RqISY9b09DRUKhXToFINmYB2Op2O6TQzmQwmJibwr//6\nr+jr62MCGuDeS6b6i7CdazOhSJwUFXDvhUkkEpSWlqKlpQUKhQKLi4u4du0ao0jp53bzUo1GI+rq\n6vDMM89Ap9Ph0qVLKBQ+o1h1u92QSqU8/GK7IStC2ey5pFIpbDYbDh06xGnKK1euYGhoCIFAAHNz\nc+uM1INediERCBFlEAJ2q/0hw0GXjICJuVwOJSUlaGlpYSxCMpmE3+9HOp1GKBRiToDNonTCa3R1\ndeGpp57C+fPn1+EQMpkMEokEZw2oNZRSt7uhJibjD4BJYrxeL0pLSzE/P89DlB5U1Go1GhsbmaRG\nmHqmYT/kFG01EGc7ISUbjUahUqlw6NAh1NfXo729ncdQi8ViPP300zh9+jQToxA4VQiGondBpT1y\n0oWIcYrkKLtFGJ/drpUMP5Wy6PN0Oh13FVGpcb8Mv1AIMEvnlmrIX2bkr1KpcPbsWTzzzDPo6uqC\nUqnkevLq6iq0Wi13P9EZJzCt1+uF1WrFysoK5ufnoVarmVWSKJp3aqHbi9D3ezwe2Gw2npRIXTr0\nHqlFcDN9EQgEcOPGDdTW1qK1tRVHjx6FSCTCwsICrl+/zlz8JMLsHJ0byiQTzwV1cM3Pz2NmZgaB\nQGBHXNfG/066i/AAlKGj4JR+Zj/39KE0/kajkQ+hTCZDWVkZnnnmGRw9ehQOh4NpOJeWliCVStHU\n1ITS0lIYDAb++YMHDzKqN5VKcX2zrKwM5eXlPIgkkUhgYGCAEbwrKyvsFBCQi9i3tqPeBO6lKo1G\nI9dlAKCyshJdXV0MxAPuUYKOj4/zeMjNDokw2qVIzev1wuv1orKykpXe5OQk2tvbcebMGZSWlrLR\nGRgYwP/+7/9y2ms7EYlETGQhRJrW1tbi4MGDOHr0KNxuN6eo7t69y3MDiGJyP7xROviU0jUajTAa\njdDpdJwK3yjCUgtdDJPJxKN1if+e9vzTTz/FzZs3MTAwwOlc4dopxV9XV4eOjg48+eSTaG9v53dH\nwL5UKoV0Os29t2tra5iYmMCbb76J4eHhHXEWAHgmfTqdhkajQWlpKaqqquDxeDAxMYFoNLqruevb\niUqlgtlsZhSyTqfjDAC1sOXzeVy8eBGvvfYaFhcX9/R9FNUS+x/dtQMHDsDtdnNWrqysjNtBKYNC\ne05RsbCkBXy250JSL1K4Pp8Pvb29jL7fSUhxU2lGJBKhoaEB586dQ2VlJRsPan3dCUD4IEJ3/H6Y\n8gBw/VmlUqG5uRlutxtarRZDQ0Po6+vjaHez3xeLxTh48CBOnjyJRx99lFvSotEolpaWMDk5yd0v\nlNGRyWRoa2vDiRMnuM4+
ODjIM0xKS0vhcDhw9+5d+P3++yp5CQ3bVs69SCRi7Ae9I5qBQgDqzVLq\nm0k0GkV/fz/fKxquRYRoG4V4UiirIJPJcPjwYZw/f57LNpTu7+vrQyQS2TLrIXx/QqGzLhaLMTo6\nin/7t3/DyZMn0dbWxm3nhUIBN27cwLvvvrujHdqtPJTGX3goLBYLGhoa8Oijj6KiogL5/D1OaFKO\nNEKVyB2USiVcLhc6Ojq4hk+T3cRiMY+FJGT5+Pg4Ll++jFu3biEUCkGj0aC8vJypLJPJJGZmZtDb\n24tgMLitNyeM/CmaqaysxOnTp1FaWgqxWMzIUAKk7caTo84GGtyj1WrR39+P8fFxprA9ePAgLBYL\no/Dv3r2La9eu7fqQkEIVi8VQKBSwWCxoa2vD0aNH0dTUBLFYjMXFRWY+I2DeflKMUiaE3pVSqYRS\nqWQWua1E6B1LJBJYrVY2NGTclpaW0N3djY8//hg3b95EJBL5HKKd6uJU8zx9+jQ6OzsZ1JRIJBhc\nRPVGmk+wvLyM0dFRXL16lVO4FIlstTcERlxZWYHRaITX60VVVRXKy8vh9XoxPT39QAOpyKkzGo0o\nKSlhsCNNMqSWrnA4jGvXrvEI5b3U6AGw8aaoiJw3kegei+bq6iqP6SZALXFFAPeUInUh0Dulshk5\nhUTgRNktAsvtZnIlsB68S6WWpqYmPPHEEzwMjNq7dps1ux8hR0cYTdKskt0Yf/pDLavHjx9Hc3Mz\n73UsFoPP59uUGItIxlpaWvDss8/yZD0hWJocV6fTyZGuyWSCx+NBZWUlMpkM5ufnMT8/j9HRUeRy\nOdjtdiiVSka8b/cshG1RKpXcpkeo+Y26lVLrarUa1dXVqK+vh9PpZKp3usNC2WkPCcxIA5go87ZV\n5pLuKM3NMJvNaG9vZ24PyhBNT09jenp6y0BoI75nY8BB551mqNC91Wq17JAODg7i2rVre6Ih30we\nSuM/PT3NQL2DBw/ikUceQXl5OZLJJO7cuYMbN25gdHQU2WyWmeSy2SzzLVOq2mq1wuPx4J133kEs\nFmMyD7fbDZVKhbGxMXR3d6+bx97S0oLHH38c1dXVyOVymJubw+DgIHp7e3mYzVZChp2UFJULOjo6\nYLFYkMlkmEp0p2h5oxebSCSY/jGTyfBnEAaAeNZjsRiGh4fh8/l2bZSp7kmH0Ol0orm5GWfPnkVb\nW3KTRasAACAASURBVBsMBgMzngnXv9+kIxQ90v9ms1kkk8ltyVuE6WAyPKWlpejs7OQZEYSxePXV\nVzEzM7MO4U9CqViLxYLa2locOnSIp4SRUhobG8PAwADPAchms6iqqoLNZsPc3Bz6+/vXTfPbaV8o\n8hSLxSgtLcWJEydQU1MDg8GAmpoaDA4O7qrdcTOhz6WZDVarlducTCYTUxH7fD709PTwudrtu9zs\n5+iZScGl02kMDAxgZGQEq6urMJvNsNvtKCsrw+HDh3H27Fl2noQEMDQQK5/P82hqMg4ymQxarRYa\njQa3bt1CT08Pd/TsRgiwVijc4w+pqalhfI9Op8Pq6io7dvudtibnktK5dM7pLO7G+FOAc+DAARw7\ndgxHjx5lLhP6GZoGuvHzCPvh8XhQU1MDrVbLmKn+/n68//77CIVC3EpttVrhcrlgt9uZuntqagq3\nb9/G7OwsMpkM1Go16zWfz4dwOLztsxBo2+12o6mpiQM5IfGZ8HmJzfHUqVNoa2tDLBbD0tLSjpwI\nW4mQZZX0BnEDCB0mobGm8hCxQ3q9XsYRUethIBDY1vAL8T0bzxV9B+m8tbU1TE5O4u7du6ipqUE+\nn+dugsXFxX1p8wMeUuNPBo34kQmMQgQchPYPBoMIh8Po6enh1j8CCwnHw1LUplKp0NraylE4RRGE\n3DaZTGhvb8exY8dgt9s5/Xe/gByihKTBKcFgkHugKWrZjbKiTILNZoPJZIJCoUAoFOLUrMvlQldX\nF5qamvhA00XcWLvaTsgbJ6Y2r9eLjo4OBvrRBaeZ8LtJad+viEQi1NXV4ZFHHmHClUAggHg8vm0a\nkVqVqAvE5XLhwIEDPByILl46ncbs7Ow6pkO6kC6XC263G2azGeXl5aitrUV7ezs7iRSdETiLGA1j\nsRjUajW/J8J1EPp8q/QrCc0bV6vVTJJDKfOamhpUVlZCp9OxQ7EboYjW5XKhvr4eDoeDsTAVFRWw\nWq08hWxlZYWpnAlbsNuoXwhCIkW5sY0JuBdpUf8zUTmXlJSs458gh4HWTtkzaldNJpNsLOk8CA24\n1Wrlues7OV10zjUaDex2O6fN6ayQE7gbzMb9CGXvTp8+jXPnzkGr1bLRIdDvTr8vjIStViscDgdj\nT2gIjEgkwvT0NM+gLxQKzGVSXV3NOpXS95RtoSFHdrudqaapDFVaWgqlUolQKMQc9g0NDYzLIJZP\nYdZT+B6Ea6esWn19PQ/8qqmpwczMDObm5jA/P8+ZP4PBgLq6Opw4cQJHjhyB1+vlIVw0lZI4NXaz\nf1KpFFVVVThx4gQaGho+R3suNNAAuCyg0WigVqvhdDrZaSIuEoVCAbPZzIyxQqGSAtkg0geEL6FS\nF4GECd9CwMvl5eV19PAmkwl2u50DjweVh9L4V1ZWctTe0dEBp9PJ3rjBYMCRI0d4dOjw8DC6u7tR\nX1+/DqlNKH2z2YzW1lYAn1Gh0obb7XZ4vV64XC6o1Wo0NzczPaNUKuVNpv79nRQjpbOIQrayshLZ\nbBbd3d3QarWoqalhsNNOQodVrVbD6/XC6XSyZ0hKj9gNaYSxEHhF3vtulblarYbZbIZer8eBAwfQ\n2dnJAEchP7/RaNzFG7w/Ieejq6sLP/3pTyGXy3mmQyQS2TZLolarecBGRUUF2traYDQaGQREZQyF\nQrGut5a+Uy6X4+DBgzh37hzTPLtcLk4NA1hXk1UqlcwsR8xfmUyGe4ZdLhcuXrzIJYLtykQ2m43H\n+B44cICNIiHjvV4vzGYzp4V3s4/U+nnw4EH88Ic/hN1u584YpVLJioVYzuLxOD/bRqW93feQsqMe\ncKLeJoVJyp4AnPQMcrkcdXV1qKmp4ZSzMN1PBklItkUKuFAosAORyWRQUlKC5uZm+Hw+zhKRs7XV\nM9D7I+bO1tZWWCwWBINBbt0iPvb9FKJP/v73v4/nnnsOMpkMY2NjGB4eZo6NnYTeLwURq6ur6O/v\nRzgcRmlpKcrKymCxWDA8PIxgMMh4HLPZjKeeegrnzp1jQDQ5Z2KxGFqtFocPH0ZjYyN3wkSjUXaK\nJRIJQqEQxsfHkU6nuROASkeTk5O4c+cOD0gjsJ0QnEnrdjqdePzxx9Ha2srDsLLZLHp6enDlyhXu\nNiEH9uDBg/ja176Giv8/2IaoyhsaGjh6px797c4tZV06OjrwN3/zN8xpkMlkeF4A1f2FrXmky2nC\nYHl5OQAgGAxyy2x7ezvsdvvnvo9S9xRIEh5AoVCgqqoKSqWSuTUikQjUajU7APSO6
E4pFAp4vV60\ntLRsWda5X3kojT9N0iNvnsBJwuEjtGkmk4kBL1ulSOm/C2suBAyk9CKhrfV6PfPZx+NxDA0NYX5+\nfldGlIb5UDsfjXF0OBxobGxEbW0t5HI5Tp8+jV/84he4cOEChoaGPtfmREx0R44cwenTp5mK1ufz\noba2FsePH+e2x7KyMsjlcmaOIgVI6a2d+MJJqG0qFovho48+gt/v59Q3HX6LxYLvfe97sNvtuHjx\nIs8+fxCkv1gsxoEDB/Dtb38bjz76KKNnZ2dn8eabb2JoaGjbvae9IoKmGzduQKPRrONrl8vlaG9v\nxy9+8QsmghGOnaVSENXYyAAXCgXmMADu4U+qq6s5KyGTyWA2m1FaWsq4k/Lycmi1WjidTty5c4dn\ntG+WCiVU+fDwMObn59HT04Oamho0NDTg0KFDOHToEP72b/+W2ctGRkagUqlQV1fHaGBhypDIewj3\n4vV6uf5OWaHV1VUMDQ1hZmaG11RWVsYsmPROtxNSTpSyFhpqIXGRy+XCz372M5w8eRJjY2Ow2+3s\noLlcLr6LwHowFH0ekZpQe24qlcLk5CTzT7S0tODw4cNwu90YGhpCf38/Zmdnt2UpJAruSCSCaDQK\nv9+PkpISuN1uvPTSSzh06BDEYjGOHz+Ov/7rv8Y777yDwcHBLVPZFOmSw0P7QWlip9OJ8vJy1NTU\ncAcE8R/cvn0br776Kqampra/JIJ9oW6M69evMzjv4MGD6Orq4kEyTz75JBobGxEMBgHcc3goyCEn\nit4XAc2IzlsikcBgMHCGVavVMrUtDadRq9UoKSmBRqMBAFitVlRVVaGtrW1dF8xGvUN6/dq1a8jn\n8zhy5Ajsdjv0ej2T7hw9epSzCx6PB1VVVSgpKWHn3WAwoL29HXq9HktLS5iamsIf/vAHjI6OIhqN\n/j/2riy4zes6fwCx7xsJgiQALiDBfZMoSqJEUbYsy4oUxXbiuE7aybSdNG0y04fO5KEvfehDn9I+\ndJJOZ9JkmjRp7MSxHXmRLduSSIkSKUokxX0BdxAECYDEShIggD6o5/onTVFcQJlOdGYyUkjrX+5/\n71m/8x2mQ7np9YKCApSVlaG+vh51dXUs+0WMlsQESt+X60gQtwuReLlcLuY0vvzyywzAeu7cOUil\nUszOzjI9QDTx8/PzjIMg9/+5WIgamDqPqBS5sLCA6elp9m+5ZHd2u511itHArr3IgTT+hIJfW1uD\nw+GAwWBAZWUl8vPzkZ2dzZTz2toaU3ThcBjDw8Pw+XzMgFCfP9WFubVYqrdRW9HGqIdaN3p6ejAz\nM7Ot5ybjT73IXq8XYrEYLpcLXq+XHTbihY7FYow/nxDk5IWaTCacO3cOly5dYumtsbEx5qkWFBTA\nYDAA+Iz5ie5LRkAsFrN+6K2EMgZU7wwEAgwzoNVqUVpaytomn332WSiVSqyursLlcrF5AFzWNEoB\nb8QtUIRI0SnNPTh9+jT+4i/+gjk5fr8fDocDra2tj117MhSBQICxexHbIR0ooVCIvLw8GI1Ghg4m\n4CRF9/Ttw+EwS8VSxgcAI5RZXV3F6uoqq0eScie2SYp81Wo1q2d3dnZuOjCIJig6nU5Wt66trUUk\nEoHdbmepWgIU3b17lzGAUXmAWzcmQ09thxSBkTIOBAIIBALo6OjA9PQ0e/b8/HyYzWY2YW47NUVq\nv+OeJW5LJoH8zp49i7KyMgwPD8NkMsFgMDC2S6IBJmZHrgNB7zQ3N8c6VkKhEBvIQ9z4hYWFsNvt\nyMvLg9lsxvDwMHp7e9fRsnKF0qlE/zw+Ps4oVgngyeM9HKNKtOE01Y1mxVPHgEqlYuUC7rROKn2Y\nzWbk5eWhuLgYlZWVKC8vXzf2eXR0lIEstyPUCklB0ejoKOPD8Hq9bPBNdXU1qqqq2HfgOlZ0xqkv\nnotMp3o8OQS0Z7jOLw0K45LO0HRPnU7HRqNvDMRIBxP9rUQiQX5+PhuclJmZifz8fFRVVbFZFgRg\nJqp3cm5yc3ORm5vL9kMikYBGo2HD3Gi6KmGAamtrcfz4cTz33HMM1BkIBDA/P4/h4WHWnrgZzoNq\n8FRzHx8fx+TkJFZWVvDss8/CarWCz+ejvr4eJpMJDocDMpkMNpsNoVCIsWUCD8sANpsNNpttHdcF\nt6xI7LYAWMcZF4tFo9NTQaZ0II3/yMgIq+ulpaXBaDSygTs0btPr9WJoaAgCgQDZ2dm4desWXn/9\ndWbQeTweSy3SWEgCZQDrFxYAjEYjTpw4gZqaGtjtdoyMjKCtrQ137tzZlmcOfIYY5Ro+MszkwVFr\nmNlsxne/+1288sorSCQSjGaUUrQZGRnIyMhgqSk+n4+cnBwG+KHRxtyBPVRfysnJYWmt7YxWJaPH\nVbzJZBL9/f2wWq3M4FPvO4GMyBP+6U9/itu3b7O2IKFQyIwdFzBDRl+j0aCsrAzV1dWora1FSUkJ\nDAYDI+Lp6upCZ2fntqJQv9+P3t7edc5cZmYm1Go1FhcX4ff72SxvbrRMDiKtCymnwcFBvPfee6yV\nKZlMQqVSITs7mylfmjJZXFzMsi9cml+DwYDq6mqWTSA++I2GiJxCoqdOJpNsSBJNKqRaHwEZKTVN\nipu+HykDIqxxuVwYGxsDn/9wMFN/fz9jHgsGg9Dr9axtlOaPDw8Pr3NEHiWEheHudUrbA2CtmqS0\nyFDTz2KxGJxOJ4aHh7GysoKpqSlcu3aNtV9xa8Xc+j79/Pjx46ioqGB8H8lkEnq9nuEajEYjJiYm\nNh2IRMaPe0a5LaZkMGl63Pe//31861vfQjQaxa1bt/Dpp5+ycmF9fT0bVsXFOVA2gMjFyHGmUhJ1\nWBD163aV+MZvQoY9EolgYmIC6enpzDBzia/IGV9bW2Ng4ZmZGfT09MDhcGB+fh48Hg8qlQqHDh2C\nXC7HzMwM5ubmEIlEUFZWxko1dEa5eo3OxMDAAIaGhtbNo6AzxiUCm5ubw9DQENrb22EwGBglOWUX\nqV06HA6js7MT77zzDmw2G3NqCHtEDu+5c+fQ2Ni4jrLb5XJBoVDAbrfDZDIhPT2dlevi8TgD6N69\nexednZ0YGBjYlFSLgiIKoCgDRfuFziiRoOXm5jI70NraiuvXrzPqZolEgs7OTpZhyc7ORklJCSMi\n4+5PrVYLrVbLxptzMTapwlsdSONP4DpCQ5Mh8vl8UKlUbBqT0+lkQBRS1twhCi6XCyqVik1Oo3QS\nsN74U4RL3rTVasXExAS75naJVmhjbfzZ2toaGwGamZnJACT5+fmsjq/X65nRUqvVrB3R6XRiYWEB\ni4uLCAaDDKxE0VwoFIJWq0VWVhYDqBGBEdVQH0UvufEZuULAwaGhIdy7dw/5+fmspqfX66HX6xGN\nRmGxWNgwCkr1raysoLe3F16vl/GzZ2RksANI3rvNZoPdbofBYEBaWhpr3fr0009x9+7dLadtkdDo\nWi5pi0AgwMTEBO7cuYPV1VVkZmayllCR
SIRoNIru7m7Mzc2tG5Mbj8fhcDiYwzc/Pw/gIQCVpv7R\n4af3J6wBRQ3kqHGBb4/q+yWuAO53oRHSNGtcp9NBq9UywOjY2Bg++eQTBAKBdcQmXKAVDbqZnZ0F\nn8/HysoKo6klMhzCQlB2gvYh/W4rJUNOEO0d7t53Op1obm5mZEU0SyAtLQ137tzB0NAQlpeX4Xa7\nMTMzg2g0ivn5edy/f5+lbYH1oEIuIFAkEsHlcmFubg42m409DznV1PXDLddsfPaNQu9OQ3xUKhVz\nFm0227qZIpSiNpvNrAWWUvFkJAhIzB3EQnqMz+djdnYWN2/exIMHD/ZM7EOOdn9/P/h8Pvx+PywW\nC9RqNYCHI3fJySLAtN/vh9frxejoKJxOJ3w+H3g8HuRyOebm5iCRSLCwsACfz4eVlRXMzs5ifHyc\nrTedF+rOIMR8V1fX58Z0b8z+AQ9T6fPz8+jv74fRaGRlE6p3ezweLCwswOPxoL+/H7du3cL4+Dhj\n59PpdAwPRfqIMm/cQTsKhQJ5eXnMeNJcgOHhYQwODmJoaAgOhwPT09PrSgYb98tmepNKBhkZGYyz\nQigUYn5+HqOjo+jv70drayu6u7uZ40+RPjnJ6enpzM5QJpjWmjKjNIQumUyyFvHtcEJsRw6k8edO\nh6Lae2dn57q2Lu4hfhSwbWpq6pH32NgfGgwGMTU1xUZvbrdWvh2JxWLo6elBRkYG6uvrGVAG+EyR\nUn2NFFkymWRz1x0OBxtLSwqbCC4SiQRqa2tx6tQppKenMxANpSMJVb3ddiKSRCKBpaUlDAwM4P33\n38ehQ4dQUVGB/Px8xgFO5EPf/va3mZJbXFzE5OQkfv7zn2NwcBBarRZ1dXU4duwYKioqGO87HSgu\nHzgZtnfffRd9fX3bet6NeyGZTMLv92NoaAhutxt37tyBzWbDsWPHcP78eSiVSvh8PvzmN7/BzZs3\nMTs7i0AgsG4/bLwnIfs3/szj8SArKwtGo5GlhinTkJaWhunpaUbmtFkGYzOlQhMsP/zwQ8zNzaGs\nrAxlZWXs+rdv38Y//uM/MiQwlVqIoIQbZW1cl433pjo1GeDZ2VkWxXKN+may2fVisRj6+/vxy1/+\nEq+++irL1tE9Xn/9dfzqV79ax8H+qHts5kiTER4fH0dLSwtsNhsbjkMlNxqsQoZrO0LOL30varfk\n8g8Qwr2iooL16lNWhWidl5eXWdmJujSWlpbg8Xig0WiQmZkJoVCIrq4u/PjHP4bD4djW8z1KeDwe\no5UmZ3ZwcBDPPfccCgoKkEwmceXKFfziF79gI7u5hmOzdR8cHFz3Ox6Ph+HhYSgUCqSnp7N5KRTF\ncwGWO6H0pZHf1J3AzWjev38fXV1d7Bsmk0kMDw/jxo0bbK+LxWLU1tbi0KFDKC8vR2FhIQuqqGxK\n5U/Cu8zOzuL69ev41a9+xaaNbgevtHGdCKU/MjICkUiE9PR0RoLW1taGy5cv480333ys/pqbm2M8\nHsQnQM4BYYgIMJtMPiRYczgcO6bffpQcSOPPrVHR32ljbbagqUqDAGDXT0WPL9UClUolvF4vOjs7\nGbiE0Jr0TlQfJH5pqqlPT08jFArB7/ezMaRCoXBdbZbSi8RNTko8KyuLtZw9KurfSpLJJBYXF1mN\nuKWlhbVRCoVCFBYWoqysDHl5eVCr1SzKlEqlyMvLg0ajQVFREfLy8mCxWBgZDrW2rKysYH5+ntXq\nOzo6cPfuXUbZvNvvSqxbS0tLDIA0OzuLO3furIuUKPLfTifHRllbW2Pp1u7ubmi1WlaWIpAoEdBs\n5/qUIice9dHRUXi9Xty/fx9yuZxF5w6Hgw0/or3KdX65mJZH3ZMUDXVBEEZBq9ViYWFh210iXCEQ\nbTKZZJFde3s72tra4Ha7EQ6H0dbWxrI5ezmz1FVDDi63U4DOE7fW/TihtSBwqUwmYyRgwGdrmp+f\nj+LiYjYXgtL3lOFKJBIIBoMM10K4kWg0ykhtaGQudSjsVShjsby8jKmpKUZqRpPgRkZG2L22w7y3\nmW6lbNf8/DyL9jfusZ3oFypD0Pnhsljy+XyW6dzYUkz32FhOXF5eZnMYCOhMU/LIsSXMTE9PDyYn\nJ7c9eZMr5PQRuLy9vR09PT3s7BLd8eTk5Lb3OLc8zF3DYDCIxcXFdZnszbIoe5EDafzpIFEUAmBP\n0+J2IptFHNsVbuqVlA+lAIk3vb+/n6Vkuf9daWkpKisrMTExgbm5uXWDVmgNuJ41999OTk4CAHQ6\n3boSAaW76KDsdO0SiQT8fj/z0mmDAw/xBYcPH4bb7UZlZSWMRiOAh2nrubk5xgxGqV9qIaJWRIqW\nJicn4Xa7EQgEMDg4iNHR0T3XtbgAz3A4zKh9U8lGSFGo0+nEwMAAA7JJpVIMDAwwRP3S0tJjWb8A\nMONFZZqZmRlMTEwwY0oR26Oi5J04xNQrLpfLWcsYKU3aazsR2vdpaWmIxWLwer3o6+vD8PAw3nnn\nHUxMTKQsWgEelmEyMzMhlUrXfWsqFVL7IUVojxOK4kdHR+FyuVg3B5VIuEDdw4cPMxa2SCQCr9eL\nxcVFhonh1o2pFMQFbRE2hPrTdyKbOTOxWIyVNGkfdHZ2plRXJpNJtq6puCaVzwjkTCWTjcRbjxLK\naoVCIXg8HvB4D5nx6HxLJBLk5eVBJBIhFAphamoKw8PDaGtrW8esup134a45BV7EOdPT04NQKMQY\nKPcyB4Jrd3g8HuOXoVZiwnYQRmQ3DvpGOZDGX61WM1pP2sSpBDrsl1A6ioueBR6WMYLBIHsHQuaT\n8Hg8dHV1YXR0lEUKdBhoQhzweSVPf+cOpzGZTJDL5Zifn2c90dsh5eHeg/6kA88V+n0kEsGdO3cw\nMDDAgHSEAKcaqkKhwNjYGCMcoUPOzeLQ+1J0sZvvvJlXzI1wuNdM1R6ia3o8HkY1nUg8HM/qdrtZ\nNLZVdEEpvmQyyfYOtRRRexsphccp82TyM6rSx72jXC5nY4kFAgFcLhebPU4K/nFoYq5S5HInLC0t\nob+/H+Pj40gkEvB4PDtiDtxK6B7Z2dk4duwYtFotS63ThEni5iCWwMcZLIpCKWqj9d5Y9ovH4xgY\nGMDMzMw6lDw5HaSgud+Kq7voXnQWd5NZJAwJNyW/2d7YLz2ZSmeCu76Py1Q9SojenEsKBXw2kZPP\n5zMdQzp4p0EkOW3JZJKVJqjrgzKY2+mo2q5Q1pJm1ywtLUGhUCAejzMKYcLC7VUOpPHX6XSMZepJ\nRPupEoqAuJ4lbfTH9cNTRLyZPE7p07XpMNAgHCL72Y5B3ez3jyNM8Xq9rIWRG/3Rn1KpFAsLCwgG\ngywC3gwIlArZKi22n8qQjL7H42E1f6oDb3fvksHhjqpN5dzujULgpJGREQYIu3//PtxuN4tWd3pf\nSsfSMJhUTHncTPh8PrxeL7q6ujA0NAQArG0tGo2yaY2b9Zk/SshZ3QoXk0wmWavkZrIV+yf3ensx\
nEtw2WpIvQ1C0UTamuncrNI6brrkf+4275tyW6s0cxFQI6QzCezU3N0OtViMUCuH+/fsYHh7eEZ31\nVrJ392Ef5MSJE0m3242xsTH2kvvxYVMtlOKnyGBjNP1UUi90OL9MTiIJOUpcuuHdKnNuCeFxZ6W4\nuBi5/8+YNjU1hY6ODuaw7uZeT2qfUy1dKpVCJpMxoCY3EiYcQKpS1AdFuKDOVE3RfCqPF2qb3Dio\na7/Xn/hecnNzEYvFGAX3DksLW9r3Axn5j4+Pb4vv+qAJbZDHRcxPJXWS6nT+k5SN5Yi97PedvD8N\nRhEIBAiFQjuOwHaCL0il0NkiFs2NQNYnke35ouSpXvlihMtl8SR1DQEiicRoq+Fmu5UDGfkLBILk\nxlalp/JUnkpqZLP08ZdFUgF0eipP5csgKciobWnfdw7tfQLy9HA/lZ1IKsAvfyyynRa3VNRHv6g1\nf6obnsqfiux3eeFApv2/jPXbP2XZq4dKAEG6xlZshE9l50LfRywWM+QwtaARD/pOEdep/DZPo/mn\nkmohADK151Gb8sZujD9lOZDG/8uYjnwqm8ujFDs3cqQeczL8hHjfLpL2T/0Qc2XjWnAzAUqlEhaL\nhbUgEukSKcUvAjm+MVPx9Fs+lb0Kl/hJKBQy1lOixObSnX+Rzwhsvt+flDN8UPOlTzXAl0ieRv4H\nW+j7EMscKb+9RP6pfr6n3/qppFKIE+FPPPLf0r4/Nf5P5alw5MvcOvhUnspTeSoc+fIB/p7KU/mi\nhKbdbTaT/Kk8laeyO3l6lg6eHMia/34LpZmpd/ZphPdUiERFIpFAJpPtS1/tU3kqB0WoLg48emxt\nKu/1lPDs4MmBjPx3MpFrN9cm7mcal5jKe31ZPdz9XPMvg9CeUKlU0Ol0bLreU3kqf4xCQ8fEYjFj\nRdzv+30Z5Y9ZLx7IyH8/AEByuRwajQb19fUwmUxwu90YHh5GX19fyu6xXx4uDYCh4RWpnE5HQkNh\nvmgU7BchQqEQ6enpOHPmDMRiMYaGhhAIBP7k1uFPQf6UMR1yuRzp6eloaGhAfX09hEIhALCZCAsL\nC5ibm4Pb7cb8/DybGUKjsVPBDbGXNX/UMKn9zN5yCbH+2PbLgTT+qRRCe2ZlZaG0tBTnzp1DZmYm\nmpub4XQ6v+jH21QoBS0SiSCRSKDT6aBUKtnEwNXVVbhcLiwtLW1rYt927sc1/qm4FveZFAoF1Go1\nlEol0tLSsLKywsYVHwQlbDQaUVFRgWeffRbhcBijo6MpW9P9GjjyxyIboyr6/1Sao9He3O8hEAgg\nFArXDeLZ7vfiGv9UCo/Hg0wmQ3p6OmQyGYRCIbxeLxv09EXvgbS0NKSnp+Po0aP42te+hvPnzyOZ\nTCIUCmF+fh4ejwcejwdzc3NsnHQ8Hkc0GsXw8DAbM/6k34O+l1QqhVQqZW17pCP5fD4ba+zxeFI+\nUGqvkf9OZm48aTmQxj9VBoFSuVqtFg0NDfjmN7+JrKwsOJ1OdHZ2YmRkJOUeXSqeXSgUQqlUwmQy\nITc3F4cOHUJJSQmys7MhFAoRDofx85//HNevX4fb7f7c2N3dPvdehcfjQSgUsgEktBbl5eVoaGjA\nkSNHIJfLMTU1hXfffRdXr179wgew8Hg8NDQ04MUXX4TFYkFHRwfGx8cfOWFxu9ck7oJEIpGykbZ/\njEJELMBnijItLQ0ikQgKhQLLy8tsciINVlEqldDr9QgEAmyi3xfBCcE1CkKhEAUFBfjGN76BIWkw\nIAAAIABJREFU0tJS6HQ6vPvuu2hubkZfXx8ikUjK7rub55RIJCgtLcX3vvc9FBcXQygUYmlpCYOD\ng7h69SrEYjGMRiOsVisKCgpQU1MDpVIJHo+H999/H62trejr60M0Gt3x/bntuzt9bnL0LBYL7HY7\nampqkJGRwcoWAoEA8Xgc9+7dw29/+1ssLS2tG5e+V9mLPueSDJEjdZDkwBr/vQhFXZmZmbBarSgv\nL0d9fT0sFgsmJibQ1tbGFHyqDf9evEQ+nw+dTgez2YyysjJYrVaYzWaUlJQgNzcX6enpEAqFWFlZ\nwdzcHKRSKbq6ujA5OYn5+fk9PTdNZ9vtekilUmg0GlgsFpjNZsjlcoaat9vtKC8vR0lJCSQSCaxW\nK/h8PgwGA8bHxzE5OQmn05kSJ2YnIpFIoFQqYbPZkJ+fj7GxMfT19cHn8+10etY64aYkn1S98MvW\nKy8UCiEWi6HVaqFSqSAWi6FWq1mWi/bP8PAw2tvbIZFIIJfLodfrIZVKIRAIMDo6yozqdt8/FcNZ\nSKHLZDJkZ2fDYrFAr9ejuLgYzzzzDHJzcyGRSNDd3Q2dTge1Wo1k8uHo7S8ifSyVSnHkyBE8++yz\nKC8vh0AgwMTEBO7cuYP29nZ0dHRALpcjIyMDmZmZUCgUSCaTaGhoQElJCfr6+jAwMMCctN3Ibgx/\nWloaVCoVsrKycPToURw5cgR2ux0ajQbJZBIikQhCoRDJZBIGgwE8Hg/Nzc3o7u7e9XM+6rl38vyU\ntZJIJAfW8AMH1PjvVSgCLSoqQlNTE7761a9CpVJhaWkJH330ET788EO4XK59QXPv5WCnpaXBarXi\n1KlTeOmll2CxWKBUKtkmp7q/TCbDhQsXYLPZcPnyZXzyySd7Mv7A3lJSFI0VFRXhmWeeQVNTE8xm\nM/R6PWQy2TpAUTweh1QqhdVqxfnz5/H+++/jgw8+YCm7J6kY5XI5LBYLTCYTeDwerl27huvXryMS\niex5Pfh8/jp8xn7Klw1NTZGoVqtFUVERLBYLtFotCgsLUVFRgezsbKjVaqytreHtt9+Gy+VCRkYG\nc+Sj0SicTic8Hg9cLteO77/XNUpLS4NcLkdmZiZOnz6NCxcuwG63w2g0QiQSsXQ6j8eDWCyGTqdD\nPB5nwcaT/EY8Hg8KhQKXLl3ChQsXoFQqMT09jXv37uEnP/kJ2tra2PPQHiLHRqVSoa6uDgqFAhKJ\n5Ik6smT809PTcejQIZw/fx5NTU3rurSEQiHTi1lZWairq8Pa2tq+GP+dCLULa7VaSCQSLC4uPjX+\n+yncVGt5eTmOHz+OyspKlJSUwGg0YnR0FDdu3EBXVxfm5uaeeJT5qGeWSqVIT09HVVUVjhw5gpyc\nHFitVuTk5EAoFLKU5vT0NLq6ujA1NYWlpSUcO3YM2dnZeOGFF6BSqaBQKNDb27srZUjPAmxvswuF\nQkilUpSWlqK8vBxGoxEZGRkwGo0wm83Iyclh0RzNqA8Gg2hubsbs7CxkMhkqKipgs9lw5MgR+Hw+\n3L17F9Fo9IkcEpFIhJycHNTV1eH8+fOw2WxYXl6G0+mE2+3edgp5M+FSi6Y6xUyZoZycHNhsNuTk\n5EAul2NpaQmTk5PIysqCTqdDNBpFIBCA1+tFd3c3xsfHv1BsBT27Wq2GyWRCeXk58vLyYDabIRKJ\nIBKJIJVK2R5aXV3F0NAQBgYG0NfXB7VajZKSEmRlZcHn
88HhcKCvrw8zMzP7vl/ojOr1euTl5aGg\noABarRYajQYqlQoFBQXIz8+HUqnE0tISRkdH0d3dja6uLgwNDcHj8TAj9STbikkXNjY24vnnn8eJ\nEycAAHfu3EFLSwuuXbuGsbGxTccgr62tIRKJIBaLIS0tDRqNBhqNZk+R/3aEjL5er4fFYsEzzzyD\n8vJyll0hXRKLxVhmjrIw4XAYCwsLLAsQCASeuMGVyWSwWCwoLS1FdXU1YrEYXC4Xrl27hkAg8ESf\nZTvypTL+YrEYcrkcSqUSEokEa2tr7H9paWmQSqUwGAw4efIkLl68CKvVCoVCAZ/Ph4GBAVy7dg3D\nw8N7quc+TjZLP5Lyo1qmUCiESCSCSqViEc3p06fx3HPPQSQSMYSt3+9HMBgEn8/HzMwMPvnkEwwM\nDGBhYQGhUAjnzp3D8ePHIRaLwePx4PV6MTc3t2tvdasxr6RMMjIyoNfrodFocPToUZw4cQIWiwVq\ntZohb2OxGObm5hid5traGnw+Hy5fvozx8XGo1WoIBAKmOCsqKpCfn49oNLrnDMbjhFKJNTU1OHv2\nLC5evIi5uTm0tbXB6XTC7/fvWkETSJMAX8vLy1hZWdlWSpr2CLVfKRQKyGQySCQSVjtMS0uDyWRC\nUVERqqurUVRUBLVaDbfbjd7eXthsNphMJqyursLr9cLpdEKlUkGj0SCRSCAajWJ1dRULCwvw+/27\nesftCL2LWCyGUqmEUqlke91ms+H48eOw2WzIzMyEy+WC1+vF6uoqQqEQJicn4fP5MDk5ibt372Jh\nYQFra2sQCARIJBIYHx9HX18fent7WQqdDGsqM0d8Ph8CgQAmk4kZn4qKClRVVUGn00GhULDvGg6H\n4ff7MTs7i7t376K5uRmtra1IJBJQKBTIycnZdd17J0KOp1arhU6nQ3p6Ol544QW8+OKLiEajGB8f\nR0tLCz7++GO0trY+8jq0V+LxOMNNkY7Zj2fmttmS4S8rK8Mrr7yCkpISiEQieL1eTE9PY2VlBcvL\nywiHw+wZhUIhIpEIFhYWIBQKUVdXh/n5eczNzcHlcu0pi0cOz1bOM4/HY0754cOHcezYMTQ2NmJu\nbg6dnZ24c+fOngKK/ZIDafwJJc1dbB6PB6PRiNLSUhw9ehRWqxWBQACLi4vw+XyQyWQwGAwoKSmB\n1WpFVlYWhEIh3G43rly5gk8++QTd3d0IhUL79ty0kbmHnLu51Wo1jEYjDAYDjEYjjhw5gpKSEmRk\nZMBgMECtVsPpdKK/vx8tLS0YHh7G9PQ0AGBlZQVLS0sIh8OIxWL4/e9/j2AwiLKyMthsNohEIty8\neRM9PT07VoKUniTk7EZENKX1s7Ky8Oqrr6Kurg4ajQZarRZqtRoSiQThcBhOpxNOpxMzMzPo7e3F\n5OQkS3mtrKxgYWEBKysrzPBXVlYiKysLZrMZ586dQzwe33fjL5VKkZOTg5deegmnTp2CQqFAR0cH\nXn/9dTidzj0pZ6lUCqPRCJ1OB7FYzAxbLBZ7rOIXCATrQFe1tbUoKipi9WORSMQcMJlMBoVCAalU\nirS0NOh0OmRlZUEul7P0LAGkDh06xMoYHo8Hk5OTeOONN3Dz5s1dv+dWQvtdLpezVGxtbS2qqqqg\nUCigUqmgVquRSCQQDocxMjLCouTFxUUsLy8jGo0iEokgGAyyDJ3P54NUKmUIeppNIBQKIZFIsLq6\nimAwCGDr7NXjWv3o9yKRCGq1Gq+99hpOnz4No9EItVoNuVzOIs9wOIyuri60trZiZmYGc3Nz8Hq9\nWFxcxNraGng8HmKxGPx+P5aXl/c96ifHtqmpCQ0NDTh06BDMZjMEAgH+8Ic/4Nq1a3jw4MG2zhg3\nExgKhRAOh/fcprdRn9MzEw6orq4Ozz//PHJycmA0GmE0GiEWiwEA9+/fx40bN7C4uMiCokAggHA4\nzK6XlpaGS5cu4eWXX4bf78enn36K//zP/9wTfictLQ18Pn/TYUC0V/h8Po4ePYozZ87gyJEjLBiK\nRqNQKBQADh7SHzigxl8qlbIIkiL69PR0VFRU4NixYygpKYHJZMLa2hr8fj88Hg9kMhn0ej0KCgqg\nVquZ4e/v78etW7fQ29vLWsv2S7jtXZtF/1Sr0ul0qKiowOHDh1FcXMxShiMjI7h16xbu3r2Lnp4e\nzMzMsFTWRpmZmUFHRwfeeecdNDY2Ii8vDw0NDfB6vbh///6O0MUbU9XUYkgGSS6Xw2QyobCwEA0N\nDaiuroZMJluX0nc4HLh69Sqmp6fhdrsxNjaGubk5BAKBz7Xf8Hg83Lt3DyaTCefOnYNer0dDQwMm\nJiYwNDSEYDCYUsQu8BkIp7i4GKdOnUJFRQWMRiPi8TgmJibw4MGDXafmBAIBNBoNbDYbampqoNfr\nkUgk0NbWhtXV1Udel6s8cnJyUFVVBZPJBIvFwpxYAmBJpdJ1UYjX68XU1BSmp6exvLwMPp+PSCSy\nrh+bx+Ox8gqxFubn52NhYYGl1lOVjuTWaQUCAZRKJXJyctDQ0IC6ujoUFRWxCG10dBQOhwMOhwND\nQ0MYHR3F1NQUM5KbKdloNAqBQIDV1VUkk0kIBALk5uYiKysLarUaLpcLw8PDDP2/V0OVm5uLuro6\nNDY24siRI1AoFODz+VhdXUVzczMGBgYQCAQwMDCA7u5uzM/Pw+/3r3O8KZNGDs1+C5Uoqqur0djY\nCLvdjqmpKdy6dQvXrl1De3v7trBO9B3pzBgMBuj1+j2n/UkHcsGwxKqpUqlgsVhQV1eHjIwMyOVy\nAMDi4iKmpqZw+/ZtXL9+HeFwGJFIBJFIBOFwmDlVpP81Gg1qamqwvLyMmZkZVjrd7X7gDgHa6hp2\nux2nTp1Cfn4+5HI5kskkRkZG0NrauiO7Q90NJSUl0Ov1mJycxMLCwufA6RsDtN2834E0/kqlkil/\nSjUfOnQIJ06cwMmTJ5FIJCAUCqHValnULJVKIZfLIZPJAACxWAxjY2Po6OjAvXv3MDU1te/eFzfy\n3+jpxuNxRCIRzM3NMfS+0WiEUqkEn8/H9PQ0rl27hl//+te4f//+tj6mw+HAv/7rvyIajeLv//7v\nceHCBaSlpa1DQe/02dPS0qBQKKDT6SCXy6HT6ZCZmYni4mKUl5fDZrMxb5bey+124/bt2/iP//gP\n5qxsdWCSySTu3buHQCDADk1dXR16enpYq10q+Au4QmnoY8eO4Zvf/CZycnLYN1lYWMDs7Oyury0S\niWCxWNDQ0ICXX36ZgUt9Ph/m5uYwOzv7yCiTjGVpaSn+6q/+CgaDASqVCjKZDAKBgClqblkmHo9j\nfHwct2/fxocffojFxUWo1WpMT0/D5XIhHA4zUNoPfvADvPLKK8jJyYFer4fBYMDFixchlUrx05/+\ndNfGnxsZcp0YisRVKhWys7NRV1eH4uJiSCQStlcIpEpgs8cpVkqt032oU6CmpgZHjhxBRkYGOjs7\nEQqF4HQ6t9w
7WylLLnCyqqoKf/mXf4mSkhKo1WoAD/d6OBzGb3/7W7z11luMAvpRtXzSAUSWs99C\nGcbi4mIUFxdDJBLh7t27+K//+i/09/djYWFh29ehHnqhUIjCwkLYbDZGCrQX4Rp+uo9YLGb7RqvV\nQiqVAngYLTudTly9ehUtLS3o6upa1zNPY6iBh2eQOkcUCgUjdaOJfrtNu29Gfbyx/JFMJmG1WlFR\nUcECqLW1NXz66af45S9/uaN2ZqFQCJVKhQsXLqCqqgqXL1/G3bt3EQgENs0ob3V2HnfPA2n8V1ZW\nEIvFEI/Hsbq6Co/Hg3v37sHpdOLGjRtIJpMsXUQHrKamBmVlZbBYLAiFQpiamkJzczPa2tpY3XC/\nhQhJuB+Jan1Un4pGo+ju7kYikUAikWARXFtbG1paWjA5ObntjULMXFTK0Ov1yMjIYH3225VkMonV\n1VW25olEApFIBEKhEE6nE2NjYxgeHkZPTw+SySSqq6shkUjg9XoxOTmJjz/+GDdv3oTf7193yLZ6\nDzr43IyIQqGA0WiE2+1GMBhM6Teje9Bhicfj6OnpwZtvvrll/fNxolQqkZeXhz/7sz/DsWPHYLVa\n4ff74Xa74XA4tqw5cp+pp6cHP/7xjyGVSlmKXyAQMMcgOzsbS0tLmJ+fh9PpxMLCAtxuN6anp7G6\nugqRSMSiIq5BevfddzEwMACtVovGxka88sorsFqtOHr0KMbGxiCRSNDf37+r9dz4d+4+n5+fx8jI\nCG7evAmXywW5XI6Ojg50dXVhcHAQMzMzbF12WqKqra3FuXPnUFVVBYvFgrS0NExNTW1piB/17Jv9\nnAw2gd7oOUdHR9He3g6Hw4FQKPTYufBcffAk0r5ra2vweDz42c9+hqtXr7Loc3h4mJVEtiOEU9Dr\n9VhbW4PL5cLc3Nye6tYb14jO4erqKst6rq2twel0ssjf7/djZmYGQ0NDmJiYYKUU7r8niUaj8Pv9\njFMjmUwyDE1aWtqun30z53SzdwE+cwqmpqZw7949jI6O7liHqVQq2Gw2WK1WFBcXAwDUajXLIIZC\nIbYnN9qZncqBNP6kQOgF6cM6HI7PeT4EsJLL5TCbzQiHwxgfH0dzczNu3ryJwcHBx5KAPK4OuF0h\nZf6on9PviGOgqKgI4XAY7e3taG9vR1dX147uR5EI1bQIWMU1qtt9bqqtkuIjh4JbAx0bG0NWVhaW\nl5ehUCiYgv/www/R39+/rfQaYTMyMzNRVFQEhUKBaDTKwHHce6ayd51raCk7Mz4+jt/97ne7ivop\n+iwsLMTRo0dx9uxZ2O12rK2tYWRkBD09PZiYmNgy5Uf7LZFIYGpqClNTUwDW10OFQiFqampgNpux\nsLCAqampdedgK4nH4+jo6MD9+/dZC9qlS5eg0+lQUlKCY8eOwePxYGBgYM/rzH0XAuFOTEygpaUF\nDocDcrkcV69exYMHD3ZN7CSXy5GTk4PGxkZ84xvfQEZGBsuaURp4r6A/+reLi4sYGxuD2WyGwWCA\nQCBgPyOA4nauxT33+y3xeByBQAAff/wxALB13ul66HQ61NfXw2w2IxaLYWZmBk6nc9177KW1dGNg\nFIvFEA6H4fV60d/fj4yMDCiVSvh8PiwtLSEUCq2L8jeTeDyO5eVl5gBGo1FWNt4LUHE37+d2u9HS\n0gKn07ljp49aG3Nzc2EymSASieB2u9Ha2sqwJlzbuBc5kMafm2rZyuOiRUgmkzCbzbDb7RCLxVhY\nWMCtW7cwMzOD1dXVLReJ0ogEztlPTABJLBZDMBjE7OwsYrEYuru7d5125jKk0XrtxnBuFZ0CDx2y\nhYUF/PrXv8YHH3wAuVwOkUiERCIBn8/HDtlW96Ra6qVLl1gJIScnB8FgECMjI7h37x46OztZBiGV\n34IUMbUpEoiHorudikqlQm5uLl599VVcvHgRWVlZiEaj8Pl8+Oijj/DGG2/A5XLtSTnSGvT19cHh\ncDDg5E6vSQ50KBSCz+djqVW73Y4HDx4wGt29Or8b7+n1etHc3MyQ4gSE2809+Hw+cnNz8YMf/ADH\njh2DxWIBj8fD+Pg43njjjZSyXQLA4OAgfve738FgMCAjIwMajQZGoxFVVVW4fv36gSVV2ujI7+YZ\n8/Ly8J3vfAd2ux2xWAwOhwPj4+PrIm/SOal0bGKxGMNsUMmLixfaLN1Oz0LYAT6fj2g0Co/Hw5y0\n/cy6cPERJMvLy5ibm2NgxO0Kn89HXl4eXnzxReTl5SEej8Pj8cDtdjMnaHl5OWW68cAa/8cpI24G\ngIg0dDodwuEwlpaWMDU1xTzGR4lWq4XJZILdbofBYEAkEmH1OWotikQiGB8fx9DQUEoVYzQaZZHh\n/Pz8rug/tVot7HY7cnNzAQCrq6ssIkllWw43RUdsfCKRCHK5HFKpFGtraxCLxevuzW1Ps1gsyM3N\nZSxojY2NMJvN0Gg0LPJsbW1FT08PvF7vY7383b4DRRmkEOhnO/muYrEYFosF5eXlOHr0KCM0Gh0d\nZSnW5uZmjI+P7/kd6PkCgcCeW/O4/dFra2sQiUTIzMxETk4OMjMz4ff7WVdAKvY57ReqM+/2mgQK\nO3HiBJ577jmcOnUKFosFQqEQ7e3taG5uxo0bNzAyMrInVPdGWVxcxMDAAG7cuAGZTIYTJ05Ao9Gg\nsLAQhw8fZmh/AscGg0EGQNtYQqMa8JMaEb3bfUeZrPr6ehQXFzMg5ezsLNxuN3g8HrKyshjTqN/v\nx/j4eMocrs3WiavHaC+IxWJWHqMMXHp6OouWuU6yXC5nGYFUO2uUESUis3g8zp6XMBPbFYVCgcLC\nQhw6dAh2ux0CgQBLS0twOByYnJxEIBBI+XscSOO/k2EdAoEAUqkUSqUSUqkUi4uLCAaDDHm7lWRl\nZeHkyZN47bXXUF1djcXFRZZGl0gkiMVimJ2dxZtvvonh4WEAqWNQSyQSGBsbg1Qq3fXgj8zMTHz1\nq19FdXU1gIceZyQS2feIhA5oLBbD8vIy1Go1o/Kl3mDiMhCLxTh58iRefPFFVFZWwmg0skiTIv4b\nN27g8uXLmJub21eWv2QyiUgkAr/fD6PRuKnX/jiRy+U4evQoLly4gEuXLkEsFsPtduPjjz/GBx98\ngJaWlpSCFfcj+0GYGY1Gg6ysLOTn52N8fHxfoqS9Pj+fz4dUKsV3vvMdfPvb32YRdyQSwVtvvYXX\nX3+dTaBLpRBXwpUrVxCPx1FRUQGz2Yz8/HycPXsWJpMJs7OzUKlUSE9Px/j4OOt0WVxcZCQzlIYm\nLMZBFplMhjNnzuDZZ5+FQqFgZUVCm/P5fNjtdhw5cgRisRgjIyMse7lfwg3yKNDTarXIzMyEWq1m\nraMFBQWorq6GxWJhQQh1iUUikZS3Wm58Hho0RaVorVbL2hS3I1qtFufOnUNjYyO0Wi38fj/m5+fR\n29uL0dFRBIPBlIOgD6Tx30m6qrCwEOfPn0dBQQHm5+fxxhtv4MaNG58z
IhSJZmVloaCgAEeOHEFR\nURHy8vJgNBqxuLgIt9sNkUgEnU6HZDIJn8+HtrY2jIyMpDzNR17u6uoqAzhuV4jpraioCPX19bBa\nrVhZWcEHH3yAy5cvs0Eo+y3ceh1F+9SBYbVaYTKZoNfrcfjwYZSXl0Oj0SAtLQ0AMDs7i/7+fly+\nfBl37tzB/Pz8vg/AIU+dCKJWVlZ2NBTGbrfj8OHDuHjxIg4dOgShUAiXy4UHDx6gtbUVAwMDT5yi\neDtCColIVMRiMdbW1tien5ubYwC2gyIE6G1sbMRLL72E+vp6dgZv3ryJ3//+97h58yYrJeyHJBIJ\n+P1+Roqk0+lY9CuVSpFIJCCVSiGTyVBVVcUCB2KqpDq03+/HJ598gg8//PDA7Q2SzMxMlJaWsqhf\nJBLh6tWrePvtt9Hb28uyQsPDw1hcXERaWlrKh+hsJVxMiVwuR35+Ppt9QrwAer2ete62t7ejtbUV\nTqdzzxwFm4lAIIBarYZGo4FOp4NUKmVRv0KhgNVqhUqleuw1hEIhmpqa0NjYiPr6euTm5iIWi6G9\nvR03btxAe3s7y66kWqd/6Y0/Re8ajQazs7O4fv06enp6PtdXTjWh8vJyNDU14dy5c4welQ641+uF\nRqNBRkYG5ufn4XA40NHRgYmJiZS/IzfqpOhgOyIQCCCTyVBWVoYjR46gtLQUUqkU09PTaGlpQWtr\nK0Kh0BNRMmT8AbAWF/KEbTYbysvLmYOVlZUF4LNvu7CwgP7+frS3t6O3t3ffDQ+VIChD4fP54PF4\ntnWoaMJcTU0Nnn/+eRw/fhx6vR4ejwfd3d1oaWlhuI2DqNxlMhlr11KpVMyoORwOjI2NwePx7Hme\nQaqEx+NBq9VCr9cjMzMT58+fx3e+8x3w+XyEw2FMTU3h6tWr+J//+R9WH94vSSaTrH2wq6sLgUAA\ncrmccc7T7AoCU9KZ5oLhlpeX4fV6EQgEcOfOHVZaPGiSn5+PpqYmxjURi8XQ2dmJt99+G36/H6ur\nq+DxeIzI64vY59zMlUAgQFZWFioqKlBYWMiGJxHR0r1799Dd3Q2fz7cvmQluIEGYlng8DoFAAIVC\ngdzcXCiVykf+ez6fj/T0dNhsNnzlK1/BmTNn2GyItbU1DA4O4ubNm3A4HJ/jj0iVHFjjv12hGtDC\nwgLGxsawsLDwufp5MpmERCKB0WjEV77yFVy4cIFFQNT/nZaWhkgkAj6fD5/Ph08//RQtLS3o7++H\nz+dLecqIuM5FItGOWO0UCgXMZjO+9a1v4ezZs9Bqtejr68P169cxNDT0uXa7/RRak1gsxsBAPp+P\nsTHabDZkZ2dDo9Gs+3eJRAKrq6vrKDr3W+hQSqVSxGIxPHjwAA8ePNiW02EwGFBZWYnnn38eZ8+e\nhUajgdPpxO3bt3HlyhW0tLRgfn7+QBp+4KGD/P3vfx+nTp1ijs/o6CiuXLmCtrY2hpI/CMLn81Ff\nX4+mpiYcOXIENpuNZYvGx8fxox/9CLdv30YgEHgi+2ZtbQ3Dw8P40Y9+hOzsbOTk5KCoqIhNgkxP\nT2e890SURUJp4YyMDBQUFKC0tBQjIyPweDwHaq/weDzU1dXhz//8z5Geno6VlRX4fD6W7t9t10Cq\nhTqRXC4Xbt++DbPZzNgvRSIRADBK7NXVVUQikR2VkHci8XicUYEnEgksLi4iFAoxgqv8/HzGD7FR\nKBhtaGjAD3/4Q5hMJuh0OvYOAoGAjbKmc7kfa38gjf92hYxNf38/0tPTEY1G2cCNpaUl8Hg8qNVq\n2O12FBQUoKCggCGFuSIWi6HRaNigEbFYzGouFFkQoCMVH4HLWkiG0eVyYX5+ftN7CIVCZGZmorq6\nmikg6ikHwEBzRCX7JA8pF3ErEomQnZ2N4uJi1NTUwG63M6pbKnMQAZJer0dhYSGysrIwPT2NYDC4\nb5Enj8djbWIikYiBZ4i9zOv1fi41SGQbxcXFqKioQE1NDQ4fPgyj0QgA8Hq9jIlxcnJyX547FWIy\nmVBWVobq6mpYrVZ2ZohN0el0fo6B8UkLOcN2ux319fU4evQoAz5x6VGDwSAGBgYwNTX1xAZzJZNJ\nBAIBDA4OYmVlBSKRiEXG9FzRaBRpaWmM339+fh6BQAAlJSUwm81QKpUoLy/HpUuX8NZbb7GMxRe9\n5gKBABaLBfX19XjmmWeYPiGnnLAhB8kxJKBqVVUVCgsLWRBHJSEaEFVZWQmPx8N0aqqFC6Al8rap\nqSnGz6HX65Gfn4/CwkK4XC6GOSDws9VqxZkzZ1BTU/O5bBGfz0dpaSkaGhrQ0dGBmZmxBBcIAAAg\nAElEQVSZfWGn/dIb/5mZGVy9ehVNTU0M+BEIBBAIBCAQCJCTk4NXX30VjY2NqKio2BTcJRKJkJGR\ngfT0dAbOUSgUjNqWgDupOqxCoRBKpRImk4nVxh88eIClpaXPsYHxeA/Hn1ZUVOCHP/whioqKoFKp\n1rFthUIhzM3NPRGw32ZC7ZIKhQK1tbW4ePEijh49CrPZzNY7kUiwmr5YLEZ+fj5UKhXa2towMTGx\nb2lnMiw0gY0iX7lcjuzsbNZSQ9kiWj+JRMIyLKdOnUJBQQFb82QyicXFRfT09Oz7LIK9is1mw+HD\nh6HX69nzezweTExMwO12P7ES0VZChuj06dP453/+Z8hkss+Bpbg13y/ieROJBCQSCTIyMhiTHHFq\nrK6uQigUYnZ2Fu+99x5u374Nh8OBv/mbv8ELL7wAqVSKmpoa5ObmYmxsDGNjY/tSw92JUFbi8OHD\n+Jd/+RdkZWWtwzXR7+Vy+RPlKthK+Hw+lEolampq8L3vfQ95eXlIT0+HQCBge4PO7QsvvIBoNIq7\nd+/uW0BEZYhYLIapqSkMDAxArVYjPT0dSqUSFRUVrBTtdruRTCZRW1uLCxcu4OzZs+vWfCM+7cyZ\nM4zCPhKJYHFxMeXP/6U1/oS2pJRsaWkpzGYzLBYLvvKVr8Dr9UIgEECr1aK4uBhZWVmsB5RSQhKJ\nhNHrbhw00dDQAIPBgIWFBTx48ADt7e3Mo9/roSWu/EOHDuHQoUMAgOeee27TNjcyrJmZmbDZbKyl\njowaj/dwiMuJEycwMzPzROvOlOI6fvw4KioqYLVaYbPZkJeXx0CTq6ur+Oijj/Dhhx9iZWWFfY+6\nujrYbDZ8/etfR0lJCatxdXR0pDxNx10rtVoNi8UChUKBkpIS1NfXw+fzMQY0AoaKxWKoVCo2R4LL\nY8Dj8VBYWIi//uu/xv/+7//i6tWriEajB6JmTkKTAWtra3H8+HFWE+UKd12exJ4RCASQy+XIyMhA\nbW0tTpw4wVLlfD4fxcXFkMvlrGtkZWWFdY7w+XxYrVb83d/9Hd599128//77rJ1uv0UoFEKtVuPw\n4cP4+te/Dp1OB61WC6VSia6uLrS0tMDr9cLtdmNychJutxuBQAC/+93v4Pf78b3vfQ9GoxFyuRyv\nvfYadDodfvnLX7Lulv0U0h8
GgwG5ubmorKxEZmYm9Ho9JBIJO6vkANMwJaFQyOrQo6Oj6OvrQ1dX\nF8Lh8BMbTUxOe2ZmJioqKthZzMvLg9VqBZ/PZ3MqSG8SPbbFYsGZM2cgkUjwwQcfsMl6qTyjXP6M\n/v5+yGQyNs5cq9XiyJEjMJlMeP7551lwYbFYYLVaodFoMDExgXv37mF4eBjLy8s4evQoSkpKkJ+f\nD5lMhvz8fHzrW9+CQqHAz3/+c1ZeTZV8qY0/Nx1OlIh2u515qlyjzt3c4XAYq6urkMvlLDIlXmxi\nVSspKYHRaMTs7CyWl5cxODgIv9//WD7l7Qil/AsLC1FbW8vu+6jefOI9iMfj8Pl88Pl8bAqgSCRC\nVlYWDh8+jNbWVkY9up+GiBSyTqdDbm4umpqacPLkSZSUlLCe19XVVUY/e/nyZfziF79ALBZDZmYm\namtroVKpUFZWhmPHjsFut8PhcGB5eRmjo6MIhUIpB0VRdKnRaBh1KQ0nIiPCnW/A/Rb0vcnQ8Pl8\ndqhnZ2fhcrkwNjb2xLosNnsvLtX12toaa+MrLS1FUVERpFIp27NisZhFKDQeer8cAIrW9Ho9VCoV\nDAYDrFYrnn32Wbz00ksQCATr6uTkMIbDYfh8PlZ+kcvlSE9Px4ULF5BMJuF2uzE+Ps6ou/fLGNGe\nKSoqQm1tLerr6xl9djQaxdDQEN58803Mzs5+bhz07du3kUgkmJOfnZ2NkydPQiAQoKurC/F4HC6X\na1+eWaFQID09nXUjEDiusbER+fn5DFzG4/GwtraG2dlZzMzMIBwOQy6XIzc3F1VVVSgtLUV/fz9y\ncnIgFApZxoh6zvdDqHfeYDAgJycHeXl5aGpqwrFjx2A0GiEQCBCJRDA1NQWn08kMq1QqhVAohF6v\nh16vR01NDXJychCJRNiI38dxv2xXKNNAGWGn0wmtVotgMIhk8uFwtIKCAthstnX/juyQ0+lEe3s7\nPvroI3R0dCAUCsHr9SKRSDAeC6PRCK1Wi8nJSYjFYmYD/qj7/LcjtMGtVivq6+vZPHngsyEoNAgl\nFothaWkJXq+X9QQbDAYkkw8HhoRCIYhEIuTl5TE0r0QiAQBMTk5iamoKPp8vJa1o5LRIpVI20IKb\n+uEOFwE+U4bUltbX14e+vj5UVVXBZrMhIyMDOp0OdrsdFRUVmJycZPXJ/RKxWAyDwYD6+no0NDTA\nbrdDo9Gs49FeWFhAc3Mzfvazn2FwcJCl3rxeL9ra2nDixAmsrKywCLu4uBhVVVXo7e3FwMAAPB5P\nSp6VDil3YJHBYFg3n5w49Enoe5DTRYeODp5IJAKfz4dCocDLL7+MvLw8/Nu//Rvu3bu34z2yF6PL\nba8khUHgI3KKMzMzAXw2CIXH47GaKeEs9sMAAZ85iVVVVbh48SIKCwuRnZ0NqVTKyhAb9zoZRKI7\nJqyG1WqFTqeDSqXCuXPnUFJSgn//93/HlStXUj4HgoTH4zHylRdffBFVVVWMICkWi2F+fh7T09OY\nnJx8JLX14OAg/umf/gnf/e538bd/+7fg8Xgwm8147bXXIBKJ8Ic//CGljgufz4dMJkN1dTWb46DV\naiESiaBUKtkQNNr/xK1P2Tm9Xs/4OGQyGWQyGWpqalBQUIDnnnsOly9fxrvvvsuc3f0QvV6PoqIi\nnD9/HsXFxVCpVMjJyYHJZIJYLEYwGITL5cLVq1dx8+ZNBjCmgI3OAumpc+fOQalU4vXXX8fAwMCO\nzujjaIxJbxOOS6PRsImE9HvuHg+HwxgcHMRPf/pTdHR0wOVysWzy5cuXIZPJ0NjYyBwZeg/K+P7R\n9/k/TqilRi6XQywWIx6PszGwEokEQqGQRROxWIyNhaRugHg8DrPZjGAwCKfTiVAoBLlcjsrKStaa\nRg5Dd3c3mzXOZYbbi4RCIUxPT+Pq1auYmJhYd720tDSo1WpYrVaEw2HmZROBz9jYGCYmJjA+Po7D\nhw/j3Llz0Gq10Gg0KCkpYVSc+2n8aTNzp6jFYjHo9XoAQCQSwY0bN/Dee++hvb2dzQkAPhtGdPPm\nTRiNRjQ1NbGe2IyMDGRlZWF8fDzlzyyVSmEymaBWq5GWlraOMEMoFK6bh0A0o8FgED6fD36/f91Q\nDfo+ZWVlyM7ORjQaRXZ2NoaHh5/IutPfJRIJcnJyUFJSgurqagQCAXg8HszMzEAqlbKxvQBQWVkJ\ng8HAoj3q+6cJavshCoUCxcXFOHnyJE6fPo2cnBzWK8+lpaYUPzmuExMTjDCHFHh+fj4DkmZmZrJ2\n0oyMDMZauB/CJaOiDCBRC4+OjqK/v39LrE0gEEBPTw8bPESZkOLiYuaYpVIkEgmbLd/Y2AiTyQS5\nXM66caiEAoBF/Hfv3kVLSwvu3r3L1lMikcBms8FsNiM9PZ0h0qmctx9CWVeNRgO1Wo3FxUUMDw8z\nToFIJMIGXNHa9/T0gMfjYXZ2lnE+BINBFBQUQKfTQSgUIjs7mw3HcjqdKTOiZPippdPpdOLjjz/G\n4ODguqwzVyKRCCYnJ3H9+nVMTk6uy3A6nU50dnbi/fffZ2VRrk76k+jz345QJBeLxTAxMcHoZg0G\nA2vpAh4am/n5eQwPDzPeeADw+XwYHBzE9evXEY1GWY3m/PnzMBqNiEajmJubQ3t7O/r7+z83T3m3\nQj3uXq8XDx48AI/H+5whstlsuHTpEmZmZtDW1oaVlRUsLy8z4xOPx3Ht2jU4HA4cPnyYAQBLSkow\nMTGBTz/9dM90sI8S6meNRCLo7u5mE+NOnjyJgoIChib/7W9/iytXrnwOlU2H5cqVK6xLg4CBMpls\nXctLqp6XIri8vDyo1WrEYjHG6Z9MJtkIUEI2r6ysYG5uDpOTkxgYGMDY2BimpqbYuFyz2YyzZ8+y\n2hylGjUaDcNtbFd2miXgvpNKpUJdXR2eeeYZNDU1sfZF4tUfHBzEyMgIAyJRPT0SiSAQCKybWpdq\nhU4dHWfOnMGZM2fYuFMqm3EVIznozc3N+O///m84nU74fL513SE5OTloampiNVU+nw+j0YicnBy4\n3e59IYgipT40NIShoSFIJBJoNBocP34cKpUKfX19GB8f3/K+XFDY6uoqy/ap1WpIJJKUP7NMJsPX\nvvY1XLx4kU34JONB+5Ic3eXlZYyMjOA3v/kNenp6sLCwgGAwiKWlJUxPT7POi5qaGmi1WjYbgurs\nqRbK5lKL3DvvvMMM+vHjx3H69GmcPn0a4XAYvb29mJmZQSAQYCXG4eFhjI2NYXBwEJcuXUJZWRkM\nBgNbb0qjpyqrCHymz/x+P3p6ejAwMMCwAI/bF5v9rL+/Hz/5yU/wD//wDygoKAAARqSW6g6XA2n8\nNw6q2Sg8Ho+l3W7cuIGpqSkG0tJqtSgqKkJ1dTVDlCuVShiNRpjNZszPz7M2Jy4nwMZDSuldrVYL\nuVwOn8+3rWffThqXWy/a+J40QvODDz5AOBxmaPJ4PL6u42BtbY2BF9fW1lgHgUqlWldD
3Yk8LsVF\nv1tbW2NsZoR27u3txXvvvYe8vDzw+XxmBB91LYr2XC4XXC4Xq+8dOnSIESvt1dPl8XiQyWSoqKhA\nQ0MDm0nucrkYa5ZOp2PvlUgkEAqFMD8/z1rMPB4PpqenMTo6ikAgwL7B4OAguru7kZeXB6lUiqam\nJjbQY7tUrttZb67Qf0d/BgIBtLe3Y2ZmBh9//DF8Ph/rNSaFwWWio5ZLGsU8PT2NQCAAsVic0pnz\nBIg8evQoTp48yXr1uZ02VIYIhUIYGRnBO++8g7t377K2TwJQ0rt6PB6MjIygvb2dgVxra2vh8/kw\nOTm5L8oRWD88jHRER0cHJBIJ/H7/tkflBgIBOJ1OZGZmstn13I6dVIjBYIDdbkd2djbUavU6PQp8\n1nFDzmMy+XAg2iuvvAKlUolPP/2U6RWfz4epqSnmLC8vL6OzsxPt7e2s9z/Vsra2hlAohPHxcbhc\nLkYulEwm0d3djaWlJQwNDcFkMkEmk8FqtSIYDGJiYoKVfmj07eDgIGvVTiQSWF5ehkQiQWZmJsu2\nble/bDf1z51Js1vdRRwL3AyiUCiEXC5nWLVUyYE0/lR/JTDKZnO/4/E4lpaWsLi4iN7eXlZ3USgU\nqKiogNvtRmVlJbKysrCysoL5+XksLy+zNie/388IZqRSKaRSKVQqFaM+TUtLg0gkglqthkwmS/k7\ncgFkXKGaLbV2EKCLIm7uWqysrMDj8bA0qEKhgEajgVQqZVOxtivcSOxxzhcXUMnn87GysgKHw4Fr\n167B5XJBp9PB7/dveQDo+QcHB5GXl8cAPqWlpUhPT4dEItk1HzellbldFXa7HQAY9/qDBw+wsrLC\nsjw0KnNpaQlutxt5eXnQ6/WYmJhgwKJYLMbqpMPDw7hz5w7rT6+pqYHf72eZgu22AG6WGtxKyXAl\nEolgZGQEIyMj7GdyuRxWqxUikYhhKgQCAcumhMNh1pNMfO0mkwkLCwsIBALbeuathMB9RNRD9WOK\n+MnB9nq9cLlc8Hq96O7uxltvvYWpqSm2tzY6jsFgEJOTk7h16xZUKhVMJhNsNhsCgQBaW1sRDAbh\ndru3/ZxkACkyfhQZDHcPE22vw+FgXQqJRGJbIGC/34/p6WmoVCpoNJrPAR33KtT5RCyOANb1oVPZ\nivgsRCIRzGYzK70tLCzA4XDA5XJhZWWFOStCoRALCwtwu91obm5OGfZps+cn8q/NvuPk5CQD1paW\nluL48eMwGo1IJBJYWlpiTg1x7A8PDyMWi7HSkVqtRm5uLux2O3w+H9xu9zrmPO7/Nj4XN+O20QHf\n+OdeA5ZYLMZsE91To9HAbrdjbGxsR3v8cXIgjT8x7olEIubtcBf1UYYzmUwiGAyiu7sbExMTePvt\ntxlwj2Y8ezweFsFR+pHavyorK2E2m5lSEIvFjL5zu8As7n+TCgT1ZhuTNmQwGMSDBw9Yz7FMJoNe\nr0dGRga8Xu+OADlcsCGXdnirlDDXy6VZ3IlEAtnZ2VheXgafz98yBR4Oh3H16lWkpaUhLy8PWq0W\nBoOBzQSYnZ3dMTKXxnsKhUKUlpaipqYGlZWVSCaTLJuytLSEwcFBLC4ugs/nr0PRUoalv78fCoUC\ns7OzWFpawvLyMrtHPB6Hw+HAu+++C5lMxpyvY8eO4f/YO4/guNPzzP865wR0ABo5Z4AACBAgGIYY\ncjicHCxZki1La68P3uSt2q29bG3tYav2tpc9+bKuDdbKXtmWZ6yRRpoRZ8Qw4BAgCSLn2N1ooBG6\nG0A3GuhG74H1fQLpCQwA2ZT0VE1phiKAD//+/m983ufV6XT84Ac/4KOPPvpahyCe94PG5aBBedT7\nc5DYZzQaZf9UqVQSiUQIBoNMTEzg8/kwGo2S4/L5558/0ZSI+D0qKyu5ePEip0+fpqGhQVZWRClU\nzCz/8pe/5G/+5m8kryIYDMos74t+b9Eu+/DDD3E4HBw/fhyj0UhJSQkvvfQSqVSKjz/++JGIXELc\nS4xrfd19P3iWg5MfwuF81V0Nh8PMzc1RXFwsxy4Py4GKd9btdlNaWiplkC0WCwsLC7IcLoJYuLc3\n/s/+7M9oa2vDZDJRU1NDR0cHV65cYWFhgZ2dHdlOSSaTRCIR1tfXj0Sa+GHt5N7eHqFQiImJCakE\neeLECclx0uv15OXl4XA4mJqakqtws7Oz5TRYS0sLTU1NfPLJJ9y8eVO2DURr5IsSJpFMwK8rhAdx\nmIGQqIge9E+VlZV873vf44c//OGhKolmpPM/uBBB7I0XazK/aqxHfDDhcFg6vgcd2oPECfFnCoUC\nl8sle86i/CLKT48K8TPFvz+JzOSDlQ/xfUKhEJ988gkWi4WCggLp/Ovq6ohEIo/MxhXfXwRewNca\nNXEWUYUwGAxUVFTItspXORRRZo/FYvJnCsllePS1xII34HK5KCwspLa2lqKiIoLBIIFAgOHhYWKx\nmFTkEmM/DwZsIrASweeDBkHcsenpaSn9rNPp8Hg81NfXk5+fj9lsli2Zr3p+4ncUmyk9Hg8Gg0H2\nvR9VhCeZTBIOh9FoNLjdbjweD3q9nhs3bpBIJFhYWJDrbzs7O6VIyuLiohSfeZw7Kqpu5eXlvPDC\nC1RVVeF2uwFke6q3t1cGXX19ffT29sr3+mGws7ODz+fD5/MRCoXIz8/HbrdTXV3NwMCADOTg4Qyy\nyBTFe/qwZVVhlAV5z2w2s7q6eh+x9UEYDAZpW0SGe5gkRdHOGRkZYW1tjaysLBm8zs7OEgwGWV5e\nZm1tjXQ6TXZ2NrW1teh0OhobG8nNzaW6upq7d+9KrkMymbwvUbJYLJJ/9CTjlQeD3kexien0vcmn\n1dVVxsbG8Hq92Gw2ysvLMRgMci317Ows8/PzskohKnNidXt+fj5zc3NMTEzI9/PLSIAHzyqY96JK\ncZA3dFhIJpP/ZIxS3PEHZdKfFBnp/LOzs9nZ2WFrawuz2YxCoSAajT5WX+9gP+aLIPalR6NR9Hq9\nVPYLhULMz88zPDz8SMI5B4VgxE5nQep73Jf9i86eTCbx+/0Eg0E8Hg/t7e3k5eWRnZ1Ne3s7gUCA\n0dHRx7qYYk3ywWf3dd9HGLSysjIuXbrE5OQkPp/vK5X7VCoVWVlZUhFL9KR3dnbY2dl55J6cSqXC\narVSWVnJ+fPn5T368Y9/zODgoNTKPtjDfRDiz7e3t9ne3pbfXzgW8TW7u7tEIhEpiCKkRy0WCw6H\ng6ysLMlQ/rrqhRgbLCoqkgztX/ziFwwNDX2lQ/kiCC6MmPPOz89HrVbzox/9SGb8Wq2W6upqSktL\nqaurQ6vVcu3aNSmu8zhz0IL0WFFRQVtbGxqNRgZWItv/4Q9/KOVtH+ddEHyTaDTK8vIy2dnZUinT\n4XB8bbvqIA6OcOp0Ojm+97A9VZHxu1wuvF6v1Cb4sp/rdrupq6vDZrPJHvRhZtHpdJqxsTHGx8fv\nW6ctpoQefIcTiQT/8A//wN7eHiUlJdjtdkp
KSmTidfDeajQaHA4HeXl5bGxsEAqF5PN7XIgKnai0\nPQq2trbY2tqiv78flUpFXV0d+fn53Lx5k/HxcRYXF+XdUygUNDc38/bbb8t3YWtrSwZHohXyVQQ9\n8f4fHFFNpVIygThMFr5Yl37wmQj1VNGiOixkpPMX29bEfLtgph7VNixRJhWl83Q6zfDwML29vY+8\nQORgdi6+Tmy8O2wcvJRif7TNZqO+vp7bt2+j0+keWQUtnU7LF0F87cO2O/b391lcXKS/vx+1Wo3T\n6WR5efkLX26tVovL5eKtt97i5ZdfxmQyEQ6HWVhYYGVl5ZEy3oP9ts3NTaanp0kmk3JBxszMjHTU\nX1ZWfpjf7SCEFoRoUYmMe25ujlAoJLPIh6maCCLW+vo6d+7cYXx8nMnJyceW9Eyn06yurtLf38/s\n7CxKpRK/3y/1wUUWKjQixOz6k/SgRfC3vr7O4uIiGo2GeDwuJyYGBwe5ffv2QwVDX/e7JRIJ2RcV\n0txutxubzfZIm/5EICGCt0exLyJbXF5eluJhXwSDwYDT6ZT9db1ej8/n4/Lly3IM80lx0OaIREOc\n7yBh8SCSySSLi4uMjY0xMTGBwWCQmifxeFx+zcGxTLEV02AwPDFBVOivPEn7Y2VlRUps7+zsMDc3\nRzgcljarsLCQ1tZWOjo6yMvLQ6vVMjY2xocffsj169eZmZmRxOWvqyYLu7i9vS2nNI56J8bB6shR\nqCpmpPPf3NyUxCBBpjmqB61Q3Fv6YrfbMZvNaDQa9vb2mJ6eZnh4+Il2QYsxpUctbz3MmUU/1+Px\nyIhdo9GQTqelUTcajY+laPV17ZUvQzqdZnZ2lp6eHjY3NzGbzTgcDqLRqJRqFWQt0ap48cUXaW5u\nZn9/n/n5eSl88TikIpFR+f1+VldXSSQS/4Q1/rg42BI5qD6Wk5MjN0IuLS0xPDwsF3k8zMZC8X0T\niQQbGxuSWxAKhZ6IWLW5uSmXF4l3yGKxUFRUhNvtpqGhgZycHMkiFkJHj9pqERABzNraGrOzs5jN\nZiKRCFeuXKGnp4fe3t4nfgdEK03oMojs0Waz4XQ6cbvdktvzMD9HjKwebAk+SsApyFlCIVH0/kXV\nLzc3l7y8PAoLC6VYTSKRIBAISOdzWDgYAIjf7auQSqUk+Xl8fByPx8POzs59d9ZoNGIymSSnyGQy\nyff4ce+JOONh2EJBKhaTRdFoVCZCJpOJ+vp6Lly4QGNjo2yB3L59m3/8x39kbm5OtkC+6izC+YsJ\nM9EGFP992D4pKyuLgoICnE6n/LPfqlG/g32WL4tcD/NnVVRUUF9fj1arlRFtKBRiZWXlsR74wdEQ\nUUY9zEsiCHJdXV28/fbbUlZXlFiXlpZIJpM4nU729vYeuWLyoCF5FIyPj7O8vIzVasVms9Hc3CxF\niXQ6He3t7VLrXAQAYnTwV7/6Ff/rf/0v5ufnnyjgOriI6TDvjkKhwGAwUF1dLYVUSktLAQgGgwwO\nDnLlyhVmZmZk9vSw31dkoUIe9DB6ieLnC23306dP884778i2hMfjkRmiaPU87JjiF/2seDzO8vIy\n4+PjlJSUsLm5yZ07d5ienj6U+y/mwPPz86mrq8Nut8vMNCsri/LycuLxuHyGD0PcE3fkccm54vey\n2+2oVCo2NzdRqVQ4HA7+5E/+hLNnz+JwOGRgIubRh4aGCAaDj/UcDhORSITh4WFSqZTkVwnZ69LS\nUqqqqigtLZXVo3A4TDQafWpbFb8KB8mtarUao9FIW1sbJ06coL6+noKCArKzszGbzayvr/OP//iP\nfPTRR1IC/WHuyEFfJJbsiAD9KBYGdXZ28u///b+nsrJS3kkhCre9vX0oJHKBjHT+wgkf9R7pgx+s\nMFQjIyNsbm4yNjYmKxCP+j0FDp79SX8HUdIvLy+npqaGkpISGhsbOXHihNw8ODs7KxUAFxcXnygy\nfZKMMxaLsbOzg9fr5dKlS0QiEZaWltBqtdTV1XHy5EkcDgepVEqOxolZdTGi8yj4omd+sOryJL8P\nIEflGhsbaWxspLy8nMrKSqqrq/H7/bJaMTIywtDQkFSRfNiziyDxYLXlSe+LxWLB7XZLlrPL5eLY\nsWOcPHlS7gGIRCL4/X6pfHkYgZK4p/v7+3It9mGMEApC7pkzZzhx4gQOh0MqNQp1yebmZtbW1lhd\nXX3o9sKTPu90Oo1araazs5OysjL29vYk9+T8+fM0Njai1+uZn5+nt7eX8fFxent75c6QZ42NjQ16\ne3tRqVRUV1dTVFQkM/68vDxyc3PR6/UsLS3JMdlnvY5YQGTgxcXF5OXlYTabaW5u5tixY5SUlGA2\nm0mlUty9e5eenh4++ugjBgcHZSv3UVt/B1u54s8OCyKwLSkpoa2tTSrXxuNxlpaWGB0dlbtlDiuh\nyUjnL9T5HhSkOMyoR0ChuKdINzIygs/nkzu5Rdn0US75V5X4n9QJGY1GiouL+da3vsX3v//9+0RC\nxFzutWvXuHXrliznHdVoztchnb7HFnY6nXzjG9/A4/Hc9/+LZxGJRPjwww/54IMPuH379mPN9X/Z\nMz8sxw/3glG73c67777L9773PcnkX1tb4+rVq/zkJz9hY2ODcDgs2f8Pe3b4p9McTwox893c3My3\nv/1tzp07JxUMBVN5c3NT9kxFCfpJjLrIvlwuF3V1dYRCITl/fRisdpVKRVFREX/6p39KXV2d1LEQ\nzt/lctHZ2cnk5KQc/Xpaa2j1ej1vvfUW7777rlxsdfD+Ady6dYu//Mu/ZHBwkEAgkBHOE2BtbY2e\nnh6pny9koKuqqiRnZnh4GJ/Px507d2RPPRMg2sKtra2cPXuWmpoaSeQDpH7H+1Rd8woAACAASURB\nVO+/z//4H/+DSCTy2K20L3pHHxRRehKIRT5ut1suehMcmrm5Ofr7+1lfX79vouVJkZHO3+l0srGx\nwebm5hcyrQ8T+/v7kqCl0WjkLPLe3t6hlnWe5PsolUqKi4v55//8n9PZ2Skvh+ixDg4O0tfXx507\nd5idnSUajRKJRIjH48/kRRWkrMHBQf7bf/tvdHd309HRgU6nw+/309PTI8dxhoeHWVhYeOg+7Rfh\ni5z8wez/SaBUKuUK2nQ6zbVr1/D5fKysrMjdD3Nzc5Jf8CiVqgfPexgQSpcNDQ38yZ/8idzod3Al\nsQgWRZB49+5dAoHAV05mfB1E5iL0BAYGBrh169ZDK+A9DETmtbq6KsW9xIIZIcAzMDDwyCTdJ8XO\nzg7/7//9P0KhEN3d3Xi9XjQaDdeuXWN4eJi1tTXGx8cZGRmRpMtMgTiLqLiJcTZBahPSy4e5Ee+w\nIOyMz+cjGAxSVVUlFTmFkuvY2BhXr149lFbFg5/bYX6Odruds2fP0tTUBPw62NBqtfdNRfzGb/U7\nSCg5imxfQBjqSCRyZFr4hwGRyZ06dUqqsW1ubrK2tiad6dWrVwkGgzLbelo7t78IwmjMzMzwox/9\nSE
pVGgwGpqenef/99xkaGpIym0f5+R7830eFyN6E0qPf72d+fp6hoSGp8x+JRB6bFHpYJf6DEKup\ni4qKeOGFFySRULCaE4mEXB86PT3N6Ogo09PTxGKxJ64SKRQKwuEww8PDsoX2JITZg0in741fTk1N\nsbS0hM/nu8+5KhT3Fh19kSjYUWN3d5erV6/Kc5SUlKDVannvvfe4ceOGXCaTSY7zINLpND6fj9XV\nVdRqtZxhF1wlYYMzpVohIJz/3Nwcw8PDZGVlEY/HWVtbY3FxkdHRUe7cucP6+vpjq4V+HRnwMCBs\njNvtRqlUMjY2JieINjY27tthcJifwZOlRUeE7OzstMjADxrITIqYnybUajWnTp3iv/yX/yLV5z7/\n/HOmp6fZ3NyUJeeDRLdMeFaCme10OsnOzpYjm2tra187YvMoOMoAUYxSim2AgoAjWNFfJ+LztKHV\nasnKyuI73/kO//k//2cmJibkGmghkSrKobFYTCq3HcbcttgUaLfbCYfDcuzuMJ6PENQpLi4G7nFL\nBNNetBWEouSzcFJCTlqUbRUKBaurq3IkMVPeyS+DGOc76OiFzT2s9tlRQK1W43A4sFqtmM1mksnk\nfbLGB5ehPS6O0r7AvWdvs9k4duwYTqdTkhdTqRQ+nw+/3y85Io8YoH+lf89I56/X69MHxUaOIkN6\nHiBaHmazmdraWt588002NzcZHR1lYGBArtPNdMMCR/sCHbXzFziKMv1hQqy5zs/Pp6uri9dee42h\noSGGh4eZmpoiGAyysbEhg0Rh3B/nd3nwmX+ZTPGTPifxfYVjEnr6R8G0PgxksqP8KjyP5xZ3Q+yC\nOerJsMOGSI7ERtqcnBzUarVsHyYSCdmSfsy26PPn/HU6XVqMcQgcVgDwPF1yoSImtOONRiPhcFjK\n5oooMBNLck8LR6GjcBAHs6Gj7PkdBvR6PVlZWVRXV8tKhVBaFGVPYVwOOs/H+T0OLrM56PQFDuN9\nFU5f8BWAjKpsfRG+6Bk8D3ie7CJwX0B4kDibyXfjQQhFWbFS3GQy3ef0hY7Bg4p/j4Dnz/mrVKr0\ng8b2eflADxMHFfzEdrbd3V2p/XxUYyfPG466qnAQmfycheqgzWZDp9MByG1u4q4Itv+TGskveuZH\n8awOjuMCz5Vx/x2OFg9OVDxvFWKR+QvRKiHSdlD+WfjBx0zunj/nDzwfn97v8Dv8Dr/D7/A7ZCa+\n0r9nJNv/WeHB2fDnJYI8WPqG5yfyhee7TPq8P/Pn6cwCz/vZn9dzw++e+dPGUZ/9d86fXzNdBZno\nSdZVPk0c7ImqVKov3UediRDPXJS6HnYZSyZAaMofVKJ8HiAIc1qtFoVC8VSFcJ4UB595MpmU46OZ\nDvGOitLuYYkePQ0cvC9CH+J5sIvw6/sitFuep3dU7IlQKpVHOraq/Pq/8psPccGFat7zcsEF01Wv\n10tlsecFYjTMaDTK0ajnAeKZi/E/sYb4eYDoMZrNZmw2m2RJPw8QG+XsdjtGo/FZH+ehIe6LyWQi\nKysLrVb7rI/00BD3xWKxSAGn5wEi4NLr9djtdvR6/bM+0kNDBFwmkwmLxXKk7+jz8/YfAVQqFWq1\nWu5oflZyuI8DtVqNxWIhJyeHra0tVldXn4uMQlQqcnNzyc7OlmI5z4MDFS9lSUmJXF/8PFQshAPy\ner1UVlaysLDA0tLSc5H1i6y5pqYGs9kspXufB6hUKpxOJ3V1dWxsbDA/P/9cvKPCeVZWVlJYWMj0\n9DTLy8vPxTsqRqMbGxtJp9NyvffzAKVSSVFREceOHWNmZobFxcXflf0PE0JNyWKxSBa9TqcjkUg8\nyUjFU4FCcW+Vr9lsxmAwyOgwHo9ntEE8qFdgMpkwGo1kZWWh1+ulVkGmGhZxX0wmk8yYPR4PGxsb\nGV/CVSgU8p5YrVa8Xi8ej0eO/mWy8xcVLYvFgt1up6CgAIVCweTkZMYH6KKKaLfb8Xq9FBYWkkgk\nHnrZ0LPCwbvucDgoKSkhNzeXubm5jH5H4deJnMPhIDc3l/LycsLhMKOjoxn9zOHXdz0rK4uKigpq\na2sJhUJHsjL4vp97ZN85QyGi8fb2djweD3q9nqtXrzI7O5vRWZzoYZWVldHU1ITD4WB9fZ2rV68S\nCoUy+sUUQUtTUxNlZWV4PB76+/vp6+uT61czESJocbvdNDQ0UFRUhMFg4Nq1a8zMzEiBpUyEOHtx\ncTFVVVVUVlYSDAbp6elhZWUlo3ktCoUCk8lEaWkptbW1VFdXc/PmTYaGhp66bv+jQqFQ4HA4KCoq\n4vjx4+h0Om7fvv3E+yueBsRippqaGk6ePInP52N4eJjV1dWMPruQds7OzubkyZNUVVUxNTUl1+Bm\ner/faDRSVFTEiy++iE6nY2FhgbW1tSfatfEw+K1y/gqFgsLCQhobG+nu7sZkMhGJROjp6Xls7een\nBbPZTFFRESdPnqSzs5O9vT2Gh4eJx+MZsRr0qyBWy549e5bi4mKpXy02hGXqc9dqteTl5XHs2DHO\nnj2L1WqVK02FAEemwmazkZOTw8mTJzl27BgOh4NYLEYgEMho4pbYY1FWVsapU6coKyvD6XRy48YN\nVlZWMtYJKRQKyUlobm6mvb2dkpISgsEgKysrUkI5EyGCLafTSUdHB01NTVRXVxMMBpmfn2draytj\nzy5acaWlpbS1tdHU1ITFYqGvr49AIJDRSpAajQa9Xk9LSwstLS0cP36cmZkZRkdHWV1dPfIg97fK\n+SuVSurr63nppZc4d+4c0WiU/v5+qdmeycjKyqKrq4sLFy7Q0dHB8PAwk5OTGT+SKBadnDt3jpdf\nfhm73c7ExIRcXJGpECXzxsZGzp8/zyuvvMLy8jK3b9+Wmv6ZCoVCgdvtpr29nZdffpmmpiaWlpbo\n6+s78mziSaFUKiksLKSzs1OuyF1eXiaZTLK1tfWsj/eVMJlMlJWV0d3dzZtvvin3JoTDYba3t5/1\n8b4Qgl3ucDiorq7mzTffpLa2VorMBAKBZ33EL4VwoNnZ2bS1tfHHf/zHaLVa5ubmWF5eJhgMZqxt\nFPbF6XRy4cIFLly4gEqlYmxsjDt37jyVd/S3yvnDvc1yer2evb09RkZG+Lu/+zvm5+ef9bG+FhqN\nBrvdjkajIRwOc+XKFT799NOMNSoHYTabcTqdpNNppqam+Nu//VsGBwef9bG+FhqNBpfLhdVqJR6P\n09PTw09/+lOWl5ef9dG+EgqFAovFQklJCQaDgUAgwPvvv8+1a9cy1hjCr5nOubm5ssc/MDDAz372\nMyYnJ5/18b4SSqUSq9VKZWUlLpeLra0tPv74Yz755JNDXWt82DhIBq2rq8NisTA3N8fly5fp7+9/\n1sf7UohzW61WqqurKS0tRa1W09fXx69+9SuWlpYy8q6LdpyoKjY1NZGbm0s0GqWnp4fr168/NRXL\n3yrnr1QqcTgcZGdns7+/z/z8PNevX8/oXj/8Okr0er2YzWa
2trYYGhpicHDwuSD6ZWVlUVBQgEql\nwu/3c/36dZaWlp718b4SQlY5Pz8fp9NJPB5ndHRUtogyFcKwuFwuqqqqMBqNrKys0NPTw8jISEYa\nRAFBxC0pKaGoqEgGix999FFGO1Dh+PPz82loaMDpdBKNRrl16xZ9fX0Ze1+EXcnOzqaqqor6+nr0\nej3j4+Ncvnw5Y5MiMTHkdDolO76oqIhkMsnw8LBcr5xpEIRKo9GI2+2mtraWtrY2srKyWFtb47PP\nPmN4ePipvaO/Nc5fZBXl5eXU1dVJ4YdM3rMtoFKpcDgcHDt2DI/HQygUkrvZM9mYK5VKdDodRUVF\nNDU1sbW1RTweJxaLZTwJRxCI6urqKCwsJBKJyDXEmXxfxOhqWVkZbW1tbG5uMjc3J9frZjIcDgfl\n5eU0NzdTXV3N+vo6kUiEaDSasW0WUXquqKigo6ODjo4O1Go1Pp9Plvszsc1yMOPv6uri7NmzNDc3\ns7a2RigUYm1tLSODFvG8TSYTra2tdHV10dLSIoPc5eVlNjY2Msq+HNxP4XA4KC4uprOzk4aGBqqr\nqwmHw0xPTxMMBolGo0/tXL81zt9ms+H1esnPz8dqtTI7O8vm5mZGE87gnuMvKiqipqaGvLw8UqkU\ny8vLGT82BJCdnU1tbS01NTVkZWXh9/sJhUIZzZIXqKys5OzZs5SWlqJSqZifn2djYyPjWfI2m42u\nri45zbK0tMTCwsJ9y30yDWKSpaKigldeeYW6ujoMBgM+n4+lpaWMJW3BvZaW2+2ms7OTrq4uCgoK\nmJubY2JigvX19Yw8uyAnlpaWcvz4cc6dO0dtbS02m43R0VHm5+czzr4cXN9bUFBAS0sLHR0dtLS0\nkJOTw8rKChMTEywvL7Ozs5Mx9kUomer1ekwmE+3t7bS1tdHQ0EBOTg5Wq5WZmRmmpqbkuu2nhd8a\n5+9yueRFSafTrKyssLGxkXEv5oPQaDQ0NDRw/PhxbDYbCwsLMpPLdHi9Xt566y2amppQKBT4/X4W\nFxczNosTUCgUnDhxgt///d+nsLCQ+fl5hoeHn4uRSpfLxTvvvENXVxc6nY5AIMDY2FhGCykplUo5\nCvpHf/RHmM1mee7FxcWMMeQP4iBR7sUXX6SrqwuNRkMoFOLu3bsZyfAXrTij0ciJEyd46aWXOH36\nNAaDgXA4zMzMDNPT00cqK/uoOChjbjAYqKur4/vf/76cBEkkEkxMTDAwMEAwGMy4c2s0Gpl8vvLK\nK1y6dAmDwcDOzo4UfxobG2Nzc/Opnv033vmLiLG2tpbvfve7lJeXs7Ozw+zsbMYTt0Tfua2tjc7O\nTkwmE+FwmPn5+Ywm+om5W6/XS2trK/n5+ezv77O2tsba2lpGZRQPwmAwYLfbKSoqori4GJPJRCKR\nYGVlJaOfOSDHWAsKCrDb7ahUKra2tmQGmqkQkyzHjx/HarWi0WhIpVJsbGxkbJAr7EpVVRVvvvkm\nZWVlGAwGqce+urqacW2Wg9mz2Wymrq5OkvyESM7m5ibRaDRjHCjcWygkuCx2ux2Px0NRUZGUS06n\n06RSKTkbnykTUOl0Wo5Rer1eGhoaZOVZqVSSSqXQarXE43HC4fBTrxL9xjt/Qdpqb2/nhRdeQKVS\nMTs7SygUeqr9lceB2+2murqalpYWKisrUalUcsY8k425VqulrKyMxsZGKisryc7OJhKJZLyCIoDT\n6aS5uZmqqipcLhdKpfKf7A3PNIjzVVZW0tHRQV5eHkajURp6sSQkEyGEZU6fPk1DQwMGg0H+udi1\nkYlQqVQYjUYqKio4deoUubm50hEJUlcm7to4SE6sqKigsLAQvV5POp1GrVbLXRuZBqVSiclkoqio\niNLSUjweDyaTSZ5bp9NhNpszbjeLOHd+fj7Hjh0jLy8Pg8FAMpmUVQGj0YjJZEKpVD7Vs2emRThE\nFBYW8q/+1b/i9ddfR61WSyMojHom49SpU/zH//gfaWxslOc2Go04nU50Ot0zPt2Xw2Kx8J3vfIfv\nfOc7WK1W4J6xzMvLIy8vLyONIty7EzU1Nfz5n/85J06ckCs1HQ4HTU1NeDyeZ33EL4QwIh0dHbz6\n6qu4XC4A9vf3KSsro6WlBYvF8oxP+cUwGAzk5ubS3NxMSUmJfCetViudnZ1UVVU94xP+U4hsrri4\nmJKSErxeLwaDgXQ6zf7+PqWlpVy4cIHc3NxnfdT7IMrQZWVlnDlzhry8PGlHRNDS3t5Oe3t7xtkX\njUZDVlYWZ86c4cSJEzJASaVSpFIpvF4vr732GhUVFRll11UqFVarlZKSEjo6OsjJyQEgkUjIBO7Y\nsWO8+OKLZGVlPdWz/0Zm/iLjOXHiBN3d3bzwwguUlJQAsL29zerqKgsLC6yvrz/jk/5TiDnn9vZ2\nXn/9dY4fPy51CeLxOMFgkIWFBWKxWEbtqhakrYaGBrq6ujh37hxlZWWoVCpisRjr6+ssLS2xuroq\ny2GZcnatVovD4aC1tZVXXnmFY8eOYTKZ2N3dJRaLsbKywtLSErFY7L7o/FmfX6ibCYb8qVOnyM/P\nR6FQsLW1xfb2NsvLy6ytrckMScwQP+uzW61WXC4XTU1NnD59WpbNY7EY29vbBAIBSd7S6/WkUin2\n9/ef2gz0l0EQcKuqqmhtbaW5uRmj0SgrcpubmwQCAdbX12WQkEwmn7kwlOCD1NfX09nZSWdnJzk5\nOaRSKXnu9fV1gsEgiUQCu92OQqFgb2+P3d3dZ3Z2MaVVW1tLV1cXHR0dFBcXo1KpiEQibGxsyGe+\nsrKCRqMhJyeH3d1ddnZ2ZBvgWUDohJw8eZL29nby8/MxGo3E43H8fj/r6+tsb2/j8/nY3NzE6XSS\nTCbZ29sjEokcuersb6TzFyNmf/AHf8Af/uEfYjAYUCgUJJNJNjY2mJubY2BggMXFxWd91Psggpba\n2lr+03/6T1RVVckS0dbWFsvLy4yMjHDjxg02Njae9XHvgyDjvP766/zbf/tvMRqNKJVKeZEXFhbo\n7e1leHiYVColI9xn7YTgXvZZVlbGn//5n9PV1YXBYGBvb49YLEYwGGR4eJhPP/0Un8+HWq2WjuhZ\nQ6PR4PF4uHjxIv/u3/07zGazDLZWV1dZWlri888/5+bNm8TjcbRarWy9POvn7nK5OH78ON/73vc4\ne/YsOp2OnZ0dotEoPp+PoaEhPv30U2ZmZjCbzezu7sp/nuXZNRoNLS0tXLp0iQsXLuByuWSwtbS0\nhM/n4+bNm/T09BCLxXA4HHK89Vk6f7Hn4Tvf+Q6tra2UlZWh0WjuO/fc3By9vb3Mz8/jcrkwGAxs\nbm6ysbHxzKaiRFXr7NmzfP/738flcmEymVAoFIRCISYmJggGg0xNTTE4OEgqlaK8vJzNzU1WV1ef\n6WiuXq+noKCAt99+m9bWVoxGI/v7+4TDYcbHx5mbm2N9fZ2JiQlWVlZwuVxkZ2ezvb3N9PT0kctw\n/0Y5f7Fn/fTp07
zzzjt0dnbK8tXY2Bg3btxgamqKyclJEokEtbW1eL1eAMLhsNSbfxZbwwwGA1lZ\nWbzzzjtcvHiRgoIC1Go1sViMa9eucffuXRYWFpifnyc3N5ezZ8/KPuPQ0BC9vb2Ew+H7ZnOfxsuq\nVCqx2Ww0NTXx1ltvSecJMDs7y6effsrc3BxLS0usra1x/Phx6urqMJvN7O3t8fnnnzMyMsLCwoI0\njk8ruzMajWRnZ/Pyyy9z6dIlampqUKvVJJNJrl27xo0bNwgGg6yurqLT6Xj77bcpLy9HrVazsrLC\n6OgoAwMDzMzMoFAo2N/ff2qONScnh5qaGl5//XVOnjwp+59+v5+PPvpIGpRoNEppaSlvv/02WVlZ\nKJVKAoEAk5OTXLt2jdXVVUlSexqEI4vFIvc8vPLKK1RVVaFUKonFYvT19fHxxx+zuroq7/Lp06dp\namoC7pVK19fX6e3t5cqVKwAkk0lp4I/67BUVFRw/fpyXX36ZtrY2bDYbu7u7rKys8POf/5z+/n7W\n19eJx+OYzWa6u7vJz88nnU7Ltdvis9nf32dnZ0dmd0d5dqPRyJkzZ3jxxRc5fvy4nHgSycTly5cJ\nBoNEIhFSqZRc7KPRaNjb2yMajTI6OspHH31ENBqVGwqPeusc3Bu5vXjxIufOncPtdqNWq1ldXSUQ\nCNDT08OdO3eIRqPs7e2h0+no6OigoaGBZDLJ9vY20WiUK1eu0NvbSywWk4HYUdsYhULBuXPneO21\n1ygrK0OtVrO9vc3s7Cyjo6PcuHGDQCDA9vY2arWakpISuru7cblc9yWpn3zyCXNzc0SjUSm5fFjP\n/DfG+QsiS3V1NZcuXeKf/bN/hkKhkKXynp4e3nvvPWZmZtje3qaoqIja2loaGhrY3d1lfn6e1dVV\n4vG4NOBPw4iLPpzX66W5uZl3332X06dPA7C2tsbc3Bwff/wx165dw+/3k5ubS11dHd3d3VRVVaHX\n6+We84OZ0dPITMU0QnV1NefPn+cP/uAPcDgcJJNJ/H4/n3/+Oe+99x5zc3PEYjEaGxvp6OjglVde\nITs7m3g8LiNh8eyfVllapVLhdrtpbW3ltdde49VXX5VnCQQCfPLJJ/zsZz8jHA7jdDppbGzkwoUL\nnDlzBp1Ox8TEBB999BHhcJjl5WVSqRR7e3tHnt2JueHKykrOnTvHG2+8QWFhIalUCr/fT39/Px98\n8AGjo6NsbW3R2NjI8ePHefvttykpKUGj0TA0NMT169flqlZRlj5qEqlarcbpdNLZ2cn58+fp7u6W\nDt3n83HlyhX+/u//np2dHSwWC8eOHaOrq4u3334blUpFPB5nYWEBpVLJ+Pi4LOvu7u4eaXYnss+q\nqipeeeUVOjs7ycvLk+XmyclJfvGLX3D79m329vYoKyujtbWVCxcu0NLSglKpJBKJsLi4SCgUYnNz\nk+3tbTY2No5cSEer1ZKdnc3p06fp7u6mvLycWCzG4uIiU1NT3Lhxg5///Odsbm6i1WppbGzk2LFj\nvP7661itVtLpNJubm1y/fh2/3y83zu3t7R3pfRHjiGKaorKyEqPRiM/nY3JykuHhYT777DMGBwcl\nn0i0HM+ePStbFvF4HJVKRTQaZWlpiZWVFZlRH5WNETa5o6OD8+fP43Q6iUQiTE9Pc+fOHe7evcvA\nwABbW1vodDpqampoamrihRdeoLCwEIVCQSKRYGZmhkQigVqtZnZ2lnQ6zd7e3qG1TH9jnL9Wq6W8\nvJx/82/+DR0dHSgUClKpFIuLi/zwhz/k6tWrDAwMsLe3R1FREZcuXaKxsRGv18vKygqpVAqr1Ype\nr5eqf08jAxUCEC+88AL/4l/8Cylrmkwm6enp4W/+5m8YGBjA7/ezv7/P6dOneffddykuLpaZh16v\nx2g0ytlRofx31GcXPa1vfetbXLp0CavVSjKZJBQK8dd//df88pe/ZGhoiL29PbKysjhx4gSnT5/G\n5XLJSBh+LeuaTqefiqqYeOZNTU3863/9ryWpLBaLcevWLX7wgx8wMDDAwsICarWaxsZG3nrrLaqr\nqyWjW/ABhBLg05rAEOzg7u5uvvnNb5KTk8POzg7r6+v8/d//PR9++CHj4+OyzN/S0sJLL72E2+2W\n78T29jbxeByXy0UkEiEUCh35ueHe4puSkhJeeeUVWltbUSgUBINBbt++zXvvvcfdu3fx+XwYjUZK\nS0u5dOkSLS0t0tjt7u4SDofRaDSUl5fj8/meiry1TqfD6XRSX1/PmTNncDgchMNh7ty5w9WrV7ly\n5YoUUrJYLDQ0NPB7v/d7FBUVAfcqWfF4nK2tLfLy8igvL2diYoJIJHLk7+nBHfFFRUWoVCpGRkb4\n5JNPuHPnDpOTkwQCAcxmM3l5eVy4cIHTp09LQt3+/r6858eOHQPu8aaOenpEp9NRXl5ObW0tJSUl\nWCwWNjY2+OCDD/jss8+YnJyUq4YLCwtpbm7mG9/4huR2CUcpnvnx48fp7e0lGo0e+TN3u900NzdT\nUVGB3W4HYHBwkP/5P/8nCwsLLC8vs7W1hcfjoaamhkuXLtHV1SX3n4i7rtFoaG5uJpFIEI1GZSXj\nsM7+G+H8lUolTU1NXLhwgePHj+P1ekkmk/T393Pt2jUuX77MxMQEGxsbFBUVUV9fT0tLC6WlpWi1\nWhKJBEajEY1Gg1qtRq1WP7Xs2el00trayunTp6mrq7vPIP785z/ns88+Y21tDZ1OR11dHS0tLdTV\n1cnxHJFtij66eCmfRtWivLyckydP0tbWRkFBAYCMyC9fvszAwADhcJjKykpOnDhBc3Oz1PgXF3pz\nc1M6/Kf1zI1Go3zm9fX1mEwm1tfX6enp4eOPP+b69eusra2hUqlobW3lzJkzch46kUgQj8dZWVnB\n5/PJ/fJ7e3tPpfTs9Xo5fvw4ra2tFBQUkE6nmZiY4OrVq1y+fJm7d+8Si8UoLS3l5MmTkiClVCpl\n6VC0viKRCIlEgp2dnSNvV4iNmmfOnKGyshKTycTa2ho3btzgl7/8JTdu3GB5eZn9/X2am5vp7u6m\nqakJp9MpSYB+v5/bt28zPT3N9va2lFs+6iA9OzubM2fO0NLSgtPpZGtri/HxcXnuwcFB9vf38Xq9\nnD17lrNnz1JRUYFSqWR9fZ2trS2ZrS4uLhKNRp+K3LJCoaCsrIxz585RXFxMOp1mdnaWvr4+Pv30\nU6anpwmHwyiVSmpra+nu7qa1tRWPx8P29rYUQpuampLj0YIceNT3Rbyjzc3NWCwWyb3p6enh9u3b\nLC8vo9FocLvdMtsXQfzS0hLhcJhQKITf75f/HYlEjpwAqFAo8Hq9nDt3jsLCQnZ2dpiYmODatWv0\n9fXJeX6r1UpFRQUXL16ktbWVnJwcyY3a2NhgY2OD9fX1+ySuD7sd/RvjjxOTBAAAIABJREFU/M+f\nP8/v//7vy3GsRCLBT3/6U370ox8xNzdHPB6XZLquri4qKysxm81yYYgY/RO
BAIFAoOKjZ1rO0traira2Nmi1WkxPT2N2drai5ebxeFCpVKivr0dvby+kUinS6TRW\nVlYOlIIuFUivVFdXo7u7G+3t7fB6vUilUjvWnisFlCqvr69HS0sLjh8/zpj2inV5JZ65UCiEVqtF\nZ2cnOjo60Nraimg0iqWlpUO9J0fGn8tFW1sbLly4gGPHjmFqagqxWAzpdPq5G6MqBQaDARcvXsS5\nc+fQ0NCAkZERxqx3WKmilwXV+FtaWvDOO++gvb0dLpcL4XAY0Wh0V7rOcoPWo547dw6nTp2CzWbD\n6OgoAoEAi4oqEVwuF2KxGA6HA++//z5UKhWTORKJVPQ9l0gkMJlMzNlaX1/H4uIiwuEwo6WtNNnp\njlssFnR0dODEiRNM7nw+z+5KpcptNBpx7tw5HDt2DBaLhUXNqVSqYt8nlW57e3vx1ltvQaPRYHl5\nmZVZKvmOS6VS1NXV4ac//SnsdjuL9OPxODvvw5D9L7rhj1YvtrW1oaOjA1wuF16vF6OjowiHwxXt\nnctkMpbCNRgMiEQiWFhYwNLSErLZbEXKDTxdsazX6+FwONDS0oJCoQCfz4fFxUWsra1VpDIn6PV6\nNDY2oqWlBRqNBh6PBx6Ph/GPVyLorrS1taGrqwsOhwOpVApOpxPBYLBia6FUA7Xb7bh8+TLa29sh\nk8ngdDqxvLxc0b0KXC4XEokEx44dw5kzZ2C1WpFMJtnyn0p1EjkcDsRiMSwWC86cOYOmpiZIpVL4\nfD6srKxUtAEVi8UwmUxMr+j1euRyOSwvLyMajVasLgeAqqoqNDc3o76+HkajESKRCMlkEuFw+FDP\n/C/a+KvVajQ2NqKtrQ11dXVsYcrCwgJisVjFXhZK4ba2tqK1tRVarRbJZBKBQACBQKBiHymHw4FK\npUJbWxtaW1tRU1MDDoeDtbU1BAIBVnuuNBQbop6eHjQ2NkKhUCAcDiMUCiEWi1XsmVND6LFjx9Dd\n3Q2r1YpcLgev11vRTYpUqrDZbDh//jwaGxshlUrh9/sRCAQqNgLlcDisqbKtrQ3d3d0wm83I5XJs\n0UulOrhU1jKbzejo6EBNTQ3kcjmi0ShzWipRbg6HA6lUCpPJBLvdDrvdDp1OBy6Xi2AwyEoWlQbS\nKwaDAbW1tTCbzdDpdFAoFNjY2Dj0bOJfZNqfGlqam5vx4x//GO3t7VAoFEyhHGaq5WVB3vnFixfx\n7rvvwmw2I51OM179Sm1QpJRibW0tfv7zn+PEiRNsLzrNDleq3DRG2dPTg48++gi1tbWIRCKsBlqJ\nRgj43oDqdDqcO3cOPT09UCqVjKylku84j8eDRCKB3W7HiRMnoFKp4PF4AJSfee15IL0iFouh1Wph\ns9lgt9shEokgFosrUmYC7YtQq9UwGAzQ6/VQq9XY2NiAUCgst3jPBd0VmUyGqqoqaLVaKJVKNklU\nvNOg0kBnrlQqodVqt0wQKRSKPbEnvgz+Ioz/9llyOtTa2lqcOXMGZrOZrXitNMNPkQTNk1MkRw1Q\ncrkcqVQKqVQKuVyuYgzR9jl4WopSVVWFrq4uWCwW8Hi8ZxYvVQLovtAfgUAAkUgEu92OhoYGKJVK\nRKNRtmp0r18TOPyGo+Iz5/F4kEqlMBgMsNlsbDJhO53vi8hltrOMHQaK55tpioUMEcktFArZAqm9\nnCfducPuUC++L2R0DAYD1Go1FAoFc2QkEgn4/N1VLp2DSCQCALYZ87Dkpu8nFAohl8uh0WigVqsh\nkUggFoshlUrZBsli0pmdQEu+FAoFkskkEonEochNstNdl0gkUCqVbJsr6UuFQgGVSsV6uICd7wvp\nJ7VaDaFQiGAwyDY1HsZ7pftCGy7pXnA4HIhEIqjVaiiVSsjl8h03FtLXoM+XyWQwGAxIJBKs4Xsv\nsv/FGH+RSMTmm6n7k1Ln1Ikbj8fZJakE0AUXi8WQSCQQiUTM+FutVpjNZggEAmQyGda4VUkgJS4S\nidhKUYPBgOrqaiiVSuRyOSSTSSSTyT0/ssOeMaZHJRAIIBaLIRKJIJFIIJVKodPpoFQqt8wP7zUt\ntxP39mHIToqcZFepVNDr9UwxFlP67vVr0tKQwzb+RLAlkUigUChgNBqh0WggFArB5/PZaloisnqR\n0yIQCCCRSNi64MNwwOjM6b7IZDLo9XqYTCbIZDLmFJCB2u58Pe8c1Go1AGB1dXXPyvwgIIdFJpOx\nteJKpXLLKmCNRgOlUskcqZ1kIYOmVqtRW1sLl8uFZDLJ/v5Vnzmdk1gshkqlglKphEQiYcEdlV4M\nBgPC4TBbjb797OnryGQyOBwOqNVqDA4ObimHvWrZKaCgmX6BQMCMPP2dWq2GWq1+pjm02GknW2Cz\n2dDb24vl5WV4vd49Z8Z+kMafHqPFYkFNTQ0aGxvB5XKRTCYRjUaRTCaxvr4OrVbLLgPtiN7vGA4p\nlGLWsYOCw+GwyMFqtaKmpgbV1dWIx+OIx+OIxWLI5/OQyWSQy+UAnl6GXC637xnzYrY04OXXqJKi\nra6uhs1mQ1VVFfh8PqLRKNszLxAIYDKZGCMelVj2SkhEslIEUvy7Oyg4HA7kcjm0Wi2sVitMJhN0\nOh1isRgikQgbFZLJZIxICXjKdpbL5bYYlb18r2I2wIOwdxV/LZFIBJ1OB7PZDIvFwhR0MBhEIpFA\nNpuFSqVCdXU1hELhljJFsRJ5EUgpFZdoDlr/pd+bTqdDdXU1LBYLdDodJBIJYrEY/H4/crkcBAIB\ni56B71P9xexnzzOi9He0KpgyBsUrmw8Ccqzq6+vZHZdIJOBwOPB4PIhEIsjn89Dr9axxi+QmB4DW\nGe+kK4rfYzqdZpwSxGdwUO4OundVVVVoa2tDVVUVNBoNuFwuotEo64rn8XjQ6XSQyWQAvncQKfrn\n8/k79loUZxAymQx8Ph8AwGg0IpPJIJvNHrgJmZyTrq4utLW1MfKezc1NLC0twePxoFAoQKPRMONP\n8lFzsdvtBp/P3xJFk9x0nwCwviO5XA4Oh4NUKoVMJnMgwjQq/xiNRsaVQJF8NBrF7Ows4vE4e8fF\nb5PD4UCtVsNkMiESibDV7tu/Pv2s0WgUo6OjyOfzqKurY0FsNBrdVcYflPGnS2g0GmG329Hc3IzW\n1lZ0dnYiHA5jYmICwFOlS2xn9Aiz2SyCweCBGv2KH+1BDSgplrq6OjQ0NKC1tRWNjY2wWCwYGxvD\n3NwccrkcM/5EW7m5uYlEIoFAmIWmzwAAIABJREFUILDnzu1ihV+sRA+qzPl8/pZO+KamJlgsFqRS\nKQwPD8Pr9QL4nqUNAFs/GYlEEI1GX8pxOSgond/Q0ICWlhY0NzfDarVCp9NhYmICExMTWFtbA/CU\nIIdqn5ubm8hkMgiHw8hkMns6t0Kh8IzcLxNJy+VyGI1GdHR0oKWlBQ0NDZDL5chkMnj06BFcLhei\n0SikUikzUKRY0uk0EonEnvos6I5RJEgyH6RHg5xbjUbDZrEdDgeqqqog
EAgwMzODgYEBphSlUil7\no7SrnMa2SLbdUGz8yek6qPEkhWw2m3H69Gl0dXWhtrYWIpEI2WwWt2/fxszMDJLJJCtRFJ85yb/X\nu5LJZFgWhOQ+yH0nvULltsuXL6Ourg5GoxG5XA5OpxPXr19HMBhELpfbkhkiJ3G7w7Wb3KRHpVIp\nRCIRc5L3CzJu5JRfunQJFy5cgF6vB5/PRzqdxvXr11nJEwDTj/Q7pqwppdR3kzufz7PmXYVCAbFY\njPX1dXYe+wFlTOx2Ozo7O/Hxxx+jqakJarUaiUQCi4uLbPSTSHyKy7ZU9iGHazfbsr6+znQoBSgU\nxL5Qzn3/ZBUMOrQ33ngDv/rVr1gTRT6fx+zsLL777jusra0hm81CIpGwDlbyxh4/fgyn07mvqKb4\n814mciYv9e/+7u9w5coVqNVq5PN5hMNhzM7O4u7du4hGoxAKhTCZTCwLwOPx4Ha7MTg4iGAwuKcI\nvvjvd0qF7Qc0+37q1Cn86le/gs1mY1HcwMAABgcH4fV6kc1moVar2RwrUcvOzMxgdnaW1f33Ijd9\n3ssYfw6HA4VCAZvNhp/97Gd4//33IZPJkM/nsba2hhs3buDRo0dIpVIQiUQwGAxYW1tjj2ptbQ3j\n4+MIBAJ7PvPtEfdBjD8pYIfDgTNnzuCjjz5CfX09pFIpQqEQJiYmMD8/j6mpKaTTaVRVVUGtVrPR\nOA6HA5/Ph6WlpT11QZPcpFhfxsnlcDjQ6/W4cOECLl26hIsXL0IikWBzcxOBQACPHz/G8PAwyxKl\n02n2Rnk8HlKpFNxuN0Kh0K6GlD5OhqBY7oOeuUAgQFdXF9555x2cOnUKDocDIpGIRc5erxdjY2PI\nZDIwmUzg8/mIx+PsvsRiMXi9XlaC2En24jtCGYriWvVBZBcIBNDr9fj4449x6dIldHV1QaFQgMvl\nwufzYWJiAlNTUwgEAuzM6urqWJaBdBDpzZ2cJ/pZiu9IJpNhGY6DOIqU0u7t7cVf/dVfobOzE7W1\nteDz+Ugmk/B6vQgGg5icnGS6pVAoIBKJMEcpk8lgbW0NiURiR0ZFOtNiY8nlchGPxwF8H6DsF8Sk\n+ZOf/ARXr16Fw+GASqUCj8eD1+tFPp/H8vIyZmZmWMZWr9ezMiKXy0U6nd6Vb4beJJUfORwOYrEY\nVldX91yi+0EZf/Jwm5qa0NPTA6FQiFQqhbm5OXg8HiwvLyOVSrFGKPLQASAajWJ8fBwrKyv7vqiv\noiak0WjQ1NSEzs5OtLe3g8vlwuVyYXV1FV6vF16vF+l0mtWIqKllY2MDPp8P4+PjrAN9v3K/TLlC\nIBCgpqYGHR0d6Orqglarxfr6OkKhEEKhEHw+H6tbUnc/1WwzmQyWl5fhcrn2FR28ivPmcDjQarVs\nBK65uRmFQgGBQAArKysIh8NYXV1FJpOBWq1mGQ5KtUUiETidTqytre3LUdxPun0nUAReX1+Pvr4+\ntLW1wWQybYlcQqEQy0potVr2bzicpxztq6ur8Hg8+8pabJf7IFG/WCyGwWBAd3c3urq6UFdXx7I/\n6XQakUgEoVAI2WwWMpmMlRsoy5VKpeD1evd05sV3mxRh8bbO/chNTVgOhwN9fX1obGyEXq/H+vo6\nVldXEQwGEQwGEQqFsL6+DoVCwSI4msKJRCJwu90v7G8pPutiJ3e/DjqltHU6HRwOB0ubV1VVYWNj\ng5VYSMeEw2FmdDY3N9l9yWazCAQCWF1d3TUoep4O2e89p4hfLpejsbERx44dw7Fjx1BdXQ2xWIx4\nPA6Px4Pp6WmsrKywMyfZC4UCa6pMJBJszv95xpDkfl4pYz/lX2ris9vt6O3tZWPBMpkMqVQK4XAY\nc3NzGB0dZU4s7QWhiS3SL6FQCB6Ph/0+9ip3sewvwg/K+EskEtTW1jIu6nw+j1gshrm5OSwvLyOR\nSCCXy0Eul0On00Gn07FUaTQaxdTUFNxu94G+98vW+o1GI7q6umAwGMDj8ZiSnp2dZd2nuVwOEokE\nRqMRKpUKQqEQmUwGgUAAMzMzB5LhVfQoNDc3o6WlhS1dSaVSjACHPFeiO6WaIj1Wt9sNn89XUtlJ\nMRoMBpw8eRIWiwXA9ym0lZUVth8hn8+Dz+dvGSFKpVKIRCJYWlra0tRUCtkpu1VXV4e2tjbIZDIW\nIQYCAUbFSlGYVCqFRqNhpaJUKsWcslKdOSkjqVQKo9GIpqYmmEwmbG5uIp1OY21tjSlDysQVCgXI\n5XLI5XKIxWKkUikkk0n4fD5EIpF9ywDsP+qne0K9LLW1tbDb7RCLxUxnuN1u5gRSXZ46zalskUql\nsLa2BpfLdeAzP4jhFwgEbGbfZrNBoVCwzJbL5cLIyAimp6cRj8eRzWYhEAhYEx1192cyGXi9Xvj9\n/kO/L8Ud7FqtFsePH0dXVxc0Gg04HA6i0SgWFhYwNjaGR48ewel0Ip1OsxFn0kfFvx+n03mgqYOD\nnLlAIGAlrXfeeQf19fXg8XjMCRkbG8OTJ08wOjoKn8/Hon6672KxmOl0r9eLxcXFA2V79ir7a2n8\nd/JqKDXb2dkJu93OGuESiQRcLhdCoRDzxo1GI9588010dXVhc3MTAwMD+O677/YdOR9U9uImQdoQ\nZ7PZ0NzcvEWZr62twefzsfQsn89HS0sL3n//fVgsFvj9fvT39+Px48cv3Wy4F7kBbKn/KRQKmM1m\nRqpBxBQUfUajUZYGlMlkOH36NE6ePAmxWIyRkRF8/fXXrGHnMEFeOTWtFbOBKZVKFplRmjkUCrEz\n53K5sFqtePfdd1FfX49EIoHr16/j7t27h07sQ3eFeiWoe9liscBkMiGRSMDj8bBtcYFAgFEkA0+b\npTo6OnD+/Hmo1WrMz8/jxo0bmJ2dPfQzJ+NDXftmsxlVVVWora1FLpfD3NwclpaW2DIn6lGgNKtO\np8P58+fR1taGQqGAR48e4fr164yt7bBQ3LVOd4Tut8lkwvT0NCKRCJLJJIuKXS7XlnSzw+HAm2++\nCbPZjGAwiLt372JiYuLQz5yWw9TV1cHhcKCpqYlNTKyuruLzzz9n5xyNRhEMBrGyssKiS7FYzCJW\nkUgEp9OJW7duIRgMHvqZ8/l8tu+Dzlyv16NQKOCrr76Cy+WC1+tFJBJBOBxGMBiEz+djJYXq6mpc\nvXoV9fX1yGQyGBkZweTk5KETE9GEyqlTp9DT08MaQQ0GAwYHB7G0tAS/38+yK0QMVrxoqKWlBWfO\nnIFarUYwGMT09DQCgcCh35fX0vgD33eB0vysXC5HfX096urqoNPpthwcpVMolaXRaNDd3Q2bzYZ0\nOo3BwUHcuXOH1XoOC8XKXCqVspWZCoUCNTU10Ol0rCuZavGUJqdubSI9UavVcDqd+OabbzA5OXno\nFwV4etGVSiUUCgXkcjlUKhVMJhO0Wi2b2QfA6lYku0gkglKpRHt7O5qamsDhcDAxMYGvv/4afr//\n0OXmcrmQSqXQ6/VQqVRMsVs
sFjYmKZFIWJ2V7gvJbrVacerUKZjNZsRiMdy9exdDQ0OHTtNKhsho\nNDKDX1VVxWrKS0tLiMViEIvF4HA4SCQSzNkSiUSQSqVobm5GZ2cnpFIplpaW8Kc//QmLi4uHKjfw\n9MzlcjkaGhrQ3NyM5uZmyOVyCAQCeDweZvhpzIkcLsoQmM1m9PT0oLa2FhsbG3jy5Anu3r1bkjfK\n4/EYdXZfXx9sNhs2NjawurqKhw8fYmlpiTWCUrNWPB5n6WqHw4ETJ05Ap9Nhfn4e/f39mJmZOVS5\nAbCplPb2dkbqlM/nsbq6irGxMYyPj2NsbIyNj1Fan5oidTodurq60NraCqFQiLm5Ody4cQOrq6uH\nKjdF7FarFZcvX0ZXVxeqq6vhcrkwOjqKu3fvYnx8HAsLC6zngLKjwNPAz2Kx4OzZs6zReHBwEGNj\nY4f+RmmK49ixY/jwww9ht9uRSCTgdDrx8OFD3Lp1i409UtMv8H0DsEAgYGVqpVKJ6elpXL9+fcu8\n/mHhtTX+IpEIZrMZ9fX1aGpqYkpdJpNhY2ODpessFgsuXLiAaDTKPEGaPV9fX4ff78fY2BjGxsZY\n1+hhw2AwMFphs9nMjDzVmElxNjc3g8PhYHl5GcFgEBsbG5DL5ZBIJEin03C5XHjy5AlcLtehy0wG\npb29HV1dXWhpaQHwdCQpnU5jaWkJNpsNSqWSKZFoNIrh4WGIxWLo9Xo2s51IJLCwsICRkZFDP3Ny\nuKqrq3Hx4kW0trbCbDYjGo3C7/ezmV4+nw+NRgODwYC2tjZMT09jfn6e3TO9Xg8ej4e1tTXWVHfY\nhEpkiI4dO4ZLly7B4XBgc3MTPp8Po6Oj8Pv9qKmpgd1uh81mY4Q+KpUKAFinNI3L+Xw+PHz48NAN\nKACWYbt69SqOHTuG2tpazM3NYWJiAgMDA0ilUmxpD9GZkuG3WCxoaWlBdXU15HI5crkcZmdnS/JG\nKWPR1NSE999/HzU1NSgUChgZGcGDBw/w8OFDFimLRCLWuLm+vg61Wo2GhgZW1hAKhQiHw3jw4AHm\n5+cPVW7gqXOuUCjQ2dmJ7u5uGAwGjI+P48GDB7h37x5LgRf3ElCDYVVVFTo6OlBfXw+9Xg8Oh4OZ\nmRn09/e/cGTsZUG6Ra/Xo7m5GTqdDul0Gg8fPsSdO3cwNDTESLWKmzkLhQIEAgGqqqpYtkOtVsPt\nduPmzZt4+PBhSYy/WCxmc/n5fB7T09O4du0aRkdH2a6VnYh6xGIxFAoFamtr4XA4IJFIMDc3h08+\n+eTQHS7gNTX+HA4HEokE9fX1rHlofX0diUQCKysrbKezXq+HWCxGbW0tOjs7MT09jcXFRaRSKTx5\n8gTz8/OIxWIYHR099AteLLtKpUJdXR1aWlqgVqvh8Xjg8/kQjUZZt6ndbodEImHNOlS+IGUSiUQw\nPDwMt9tdMmIi6uw3Go2oqqqCz+eD2+2Gx+PBwsIC4vE4GhoamCGiBim3243NzU34/X7cv3+fGaFS\nnTnJTiNf1G07NzeHhYUFhEIhuFwu2O12lslwOBwsDS2Xy+H1ejE8PIzh4WE4nc4D1/r3A2royWaz\niEaj8Hq9iEajmJmZweTkJAKBAIuibTYbq43W/v+98dRf4XK5MD09jVu3biEUCpUkS0Tjaj6fD9PT\n0/D5fJidncX09DQmJiaQyWQgFosRCAQYo1yhUEB9fT1btCUSiTAzM4ORkRGMj4+X5L5Qd3ogEMDY\n2BiWlpaQTqeZ3PPz86wcRPeJGr1qa2vxxhtvoLGxEYVCAUNDQ7h58+YzhDeHBSq7TU1NsTT+3Nwc\nxsfHWe9Q8cgp8L3xam1txZUrV2CxWBCLxTAzM4OxsTE2BXCYKBQKWF9fh9vtxo0bNyCXy5FOp/Ho\n0SNMT0/D7/c/MzFAP4NCocCZM2dw8uRJtq3yyZMnWFhYQDgcPnTZaW3w8PAwgKd6Zm5uDo8ePUIg\nENjVWbXb7Th37hxaW1tZRmx+fv7Atf794rU0/gAgk8nQ2NiIzs5OdHZ2soc5MjICv98PHo+Hrq4u\n1NTUQKPRoK2tDVevXsUXX3yB6elpfPbZZ6zWeBASh5eBXC5HdXU1q5PPzc1hcnISbrcbLpcLLpcL\nV69ehc1mg06nw4kTJ9glm5+fRyAQwMTEBDweT8kXs1DHeCQSwejoKG7dusU2IN65cwdvvPEGLl++\njLa2Ntjtdrz11lsYHh5mSujGjRu4ffs2S5seNsiAJpNJuN1urK+vg8Ph4N69e2w73NjYGGQyGZqa\nmnD69Gm89dZbaGxshEqlgs/nA5fLxZMnT/DVV1/h5s2bJYmcSfaNjQ3MzMwgkUhAIBBgdXUVTqeT\nzQQ7nU5W9jp58iTa2toY7TONFw0NDeG//uu/MDo6WhLDDzyNzHw+H/785z+Dz+cjl8shGo2yXgr6\n43a7WUmmtrYWXV1dOHnyJFpbW7G5uYkbN27g3/7t3xhXxGGDuAQePXqE8fFxVgraifabJoeIn6Ot\nrQ3vvPMO6urqEIvF8Omnn+JPf/pTyZzc9fV1BAIB/P73vweHw2GkNtvH84p/BmKa6+npwccffwyV\nSoWJiQn83//7f/H48eOSyE3vc3BwkPUv0VnvNl3A5XKh0Wjw4Ycf4vLly+Dz+RgfH8d3332HYDBY\nEtmJwOgPf/gDPv300y13ezdwOBy0tbXhn/7pn2C325FKpTA+Ps6cy1KA9+JPKQv+dbe/FAqFUCgU\nMBgMbD3p/fv38eTJEywtLcHn82F+fh6Tk5Pw+Xyorq6GRqOBTqdj0R7Nru6XGe9lQHVwoVDICDaG\nh4cxOjrKGnHi8Ti8Xi+mp6fB4/HgcDjYkof5+Xm2/jYUCu15VOtVgDztfD4Pv9+P6elpTE5OwuPx\nIJVKIZvNsvp5IBBg9enq6mpmeJ1OJ2ZmZhAMBtncbalkJzIkr9fLmMGo9kkdt6lUirEQUnc3lTRu\n3ryJqampsmw3y+fzSCQSCIVCWFtbQyqV2qIcaeaXmuvOnj2L2tpacLlcDAwM4Msvv8TU1NS+yZRe\nlexUGiqeEy8exaMouq+vD7/4xS9gsVgQiUTwxz/+Ed9++y3m5uZYV3epZaepiedxOVA/yc9+9jN8\n8MEHsNvtGB8fx+9+9zvcunULCwsLJd8qR07jXvgnmpub8fd///c4f/48tFotBgYG8NVXX6G/vx9u\nt5tx3JcKxYb/RTh79ix+9KMf4fTp0+DxeJiamsIXX3yBb775Bj6fr+RB3V6brhUKBXp6enDp0iWc\nPXsW6+vrmJ6exm9+8xvcuXMHoVDoVYn0f3b7y9cy8qfo0+12s9QKjQqRUZmenmbp2lOnTqGjowNa\nrRY1NTVQqVSMcKOUj5JAbIOkFIncoVAoIBaLwe12Y2xsDCaTCW+++SaMRiMcDgesVisbWzzs7v6d\nkMvl4HK54Ha7GcFEcTqOiCnC4TBOnz6Nzs5O1NTUwOVyQa1W4/79+yWplW9HoVBAMpnE
0tLSM/S2\nhHw+j0wmA4/Hg2AwiGPHjqGhoQGBQACPHj3CgwcPkEwmyyJ7LBZDPB5/7u+bxuPoPpPDODc3h5mZ\nGdy4caMsS5/IadkNdI85HA4sFgvOnTvHutC/+eYbTExMlKwXpxh0pi8CGf/e3l6cPn0a2WwWMzMz\nuHbtGvx+f8myRAQy/HsBj8eD1WrFRx99BIvFgmQyifv37+O7777D9PR0ybOKe9VpVLLo6urCe++9\nB6vVipWVFfT39+POnTsYHx8/bFGfwX50sUqlwtmzZ9Hb2wuVSoXR0VHcuXMH/f39cDqdhyjlVryW\nxn99fR3BYBDJZJJFddS1WqxM4vE41tbWEIvFsL6+DolEgsbGRnR0dGB5ebkkdbhiEGMU7ReglNz2\nFBd5v+QY0GKN1tZWTE9P48mTJyU3/CRj8barnR4rKR/io+ZyudBqtayHoRzOFp3nTk032z+Pmi8p\nQ0NnX+6tg3v53rScheaFaTPcYY8kviyIR4G4KyjDRON0lQxq9lIoFIwDIhqNwuPxlDzy3A+oyU6p\nVMJgMDDGU2LcrLQlYcWgN2qxWNDc3Awej4fl5WVcu3YNS0tL5RZvV3C5XKjValy8eBE9PT0QCATo\n7+/HJ598wnYilEyWkn63VwQy9pFIBGtra4hGoyx9T0qaUqFra2sYGxvD8vIyuFwuGhsb0dXVxZZX\nlEN2muPPZrM7phRJ/tXVVUxOTiIWi0GlUqGzsxNWq/WV8NofBGTYX5RSpHovNeoQsQstIyoH9lqL\ny2azrBxEJCkGg6Es571f0Igidc2bTKaK3yMPgG0xk8vlLJKWSqXsHVQyaKsajTESF385yhT7AZ/P\nR3V1NaxWK6RSKePATyaTiMfjFX3uKpUKra2tbLpIJBIxttBYLFZu8XaFzWZDZ2cn6urq2Ii03+9n\nhEWlxGtp/PeDSCSC27dvY3JyElwuF83NzTh27Bjb3FTJcLvduH//PoLBIJRKJeMmqHS5c7kcFhYW\nsLCwgHw+j6qqKrS3t7MRtEpGKpWCy+VCLBaDUCiEzWaD2Ww+0IKPUqOYLUwul8Nut0OhUJRbrBeC\nttdJpVI2DaPVahlNayVDo9GwMotAIGCrZYHDW938KiAUCtHU1ITGxkZGcy6VShmVciXDaDTi/Pnz\nsNvtbDSTqM4r2WkBgPb2dka6RdwzlCkt9blXvkZ7SVCJgJjBKGVE++UrGcQ4l81m2cw3pXOLV8NW\nIojqlMiKaByKZqMrFSQrGXtqRiOq1kqGVCqFUqlk95q2mpFSr1SIRCI2/QJ8T5VK1L6VDOKFIMeW\n+EWI06JSIRAI0NDQgIaGBnaviYJbrVZXtLOr0+nQ19cHs9nMPiaRSGA2m8uaXdwL6urq2HIlysjp\ndDpYLBZWpisVKvc3/IpAyrvY0FPXfSUrRACM+pfkJMdFLBZX9OMkkiKZTLaFFlgikTA2ukoE8UcY\njUZIpVL2caFQCK1WW7GGqHiJi9ls3mJ01Go1jEZjxRoiSvPX1tZCr9ezcpJYLIbNZoNery+3iDuC\n3qLRaERzc/MWZa7X69HS0lKxmS4+nw+ZTIa6ujoWPdMyH2JJrURHl5xZk8mEtrY2aLVaFu1rtVr0\n9PSgqqqqIvWLUCiEWq1GTU0N6uvrIRQKWbM0jbjSVFepUNmh7yuAWq1mRArFNety1c33Cg6Hg/r6\nely5cgUmk4mldMmIZjKZimwoIvpi2pZHdMWFwtOtfkqlcgvLWKWADKjZbMaFCxdgs9lYP4ZUKkVd\nXR0SiURF1hTJEDkcDnR3d7OlVpubm0yxJBKJktcU9wKBQACtVovW1lZUV1ezsoVKpcLFixcZuVWl\npaLJMa+qqoLD4YBYLEYul2Pz23/913+NX//614fOi38Q0IIlGoGmhlCxWIx3330XGxsbbOd8JclO\nPThWqxV6vR58Ph/pdJrd/V/96lfY2NhgE0WVIjutsm5sbITFYoFYLGZjr0KhEOfPnwefz4fL5dqy\nNvywUamh77++ii8ilUpRX1+PDz74gHnnRC17+/ZtBIPBiuwmJi/x3LlzuHr1KmNuSyQSGB8fZ+NP\nlSg7ebfvvPMO4wgnhrq5uTmsrq6+cB97qUHc4haLBX19fbh8+TKkUilyudyWneC05rdSGuiKuf9b\nW1tx6dIlNDU1IZlMsvFRajxzu91sq2UlgM/nQyKRoKmpCX19fTh58iQ4HA5WVlYY77xEIkEymYTf\n70c+n68IZ5fL5UKhUMBqteL48eM4ceIE6urqsLi4CK/Xy7YuajQaRCIRpFIptnOhnCA2vJqaGnR2\nduLEiRNob29HPp/H0NAQEokExGIx21yZzWaxvr5eMjKu3eTmcrlwOBzo7e1Fb28vOjs7UV1djamp\nKYyMjAB4OuliNpvZOZOzW853yuPxoNVqcfLkSZw+fRq9vb2ora1FNpvF/fv34fF4WJ+IRqNh/45I\n015Bw+iuc/4/WOPP4XCg0+nQ0tKC9957DzabDVwuF8FgEPPz8xgaGmJEOZUGmUwGu92Os2fP4syZ\nM+Dz+chkMgiHw3A6nVhZWWHjgpVghAgcDod1s545cwZms5nRLkejUUQiEUSjUaysrLB+gEoAKfSe\nnh6cOHECbW1trOufjKdKpYLH48Hy8nJZZuafJ7dQKERLSwveffddtLa2QiwWY2FhAZFIhCl8uVzO\n1hSXklJ5N9CI3Ntvv42LFy9Co9FgaWkJ9+/fB4/Hg1KphNlsRqFQQCKRwNraGtbW1sp+33k8Hqqr\nq3H8+HH85Cc/gcPhQDKZxLfffovJyUmo1WpotVq2f35zc5ORFJVTdtpvcfbsWVy9epUx4j158gT/\n8z//g/X1dVRVVbHSUVNTE4LBIMbHx8vq7FKJ9q233sIvfvELXLhwAdXV1VhdXcW1a9fw2WefsZ0c\n1dXVqK+vR0NDAyYnJ+H1ektOyFUMkUiE+vp6/OM//iM+/PBDdHV1MfbI//zP/8T8/DxMJhM0Gg3q\n6upw4sQJCIVCDA0NsZ6pl8RfnvGniMhisaCpqQldXV3Y2NiAy+XC4OAgBgYGMD4+jkgkUhHRRDGo\n89lut6OxsREmkwnz8/OYmJhgm7lIkZeS4e9FoGbE6upqto4zEong0aNHmJqawsLCAnw+H+OjrxQD\nCjw1RBqNBjU1NVCr1cjlchgaGsKDBw/g8XjY/viFhQV4vd49s6cdNoRCIQwGA2w2G6xWK6LRKKan\np3H79m22a0EgEIDL5WJ5eRmJRIKlcsuZeeFwODCZTGhubobNZkOhUGALfx49eoRsNgsulwu9Xo9M\nJsP2zRPjXrnOnqL648ePo62tDQqFAvPz87h16xYePnwIr9eLQqHAHJdkMon19XUWwVHZpdT3njJb\ndXV1uHLlCvR6PWKxGG7fvs0moYDv0+q0aGljY4NNLhSTHpXy7EUiEVQqFd544w309PQgnU5jfHwc\nn3/+OR4/fgy/388+x+FwgMPhoFAoQCgUQi6
Xs7Mu5oApBcjZamtrw/nz5yGRSOByudDf34/vvvuO\ncSlsbm7CarXCZrMBeHrHTCYTa1yk38MBswA/PIa/F4FSRQqFAiqVChsbG4jFYvD7/RgfH8f4+Dgj\n+KFml3IrcgJ1OiuVSgBPU0AulwvLy8tsN3ShUGBjIvuhwzxsuWmKQiAQYG1tDZFIZAs1K3X6SyQS\ntumqnJ45gdb38ng8xONxTE9PM2fFYDDAbrfD4XAwUhRS6NupaksNMkY8Hg/ZbJZxKywuLrJ7r9Pp\nYDAYoFarodPpWE2RJkiNpuB6AAAM10lEQVTKITsZI5VKhXw+D6/Xy5za1dVVRlPc0dEBsVgMg8EA\ns9mMcDjMCLvKMUNPQQVNIcRiMUxOTuLevXtYW1uDUqmE0+lkewlI9ubmZqZ/iscxSwl6mzKZDNls\nFqFQiAVBsViMLYhKJBIwGo0QCASw2+3o7e1lWbtkMlnyc6feCj6fz7awTkxMoL+/H2traxAIBIzp\nlZwTiUSCrq4uZLNZBINBpFKpkvbqUD+ZXC5nwUQgEMDc3BwePnyIgYEBZLNZiMVitpab9InJZMLF\nixdZyYUyvIeBH6TxJ4MoEAggFArZ/9dqtYzDnQwQcXdXyowoGXIej4fNzU1kMhmo1WrE43FMTU2x\niCiRSCCbzSKdTpdNGW6Xm+SgRR0KhQL19fVYWVmBz+eDRqOBVqtlv4ed2A3LgVwuh1QqhXQ6jXg8\njkwmA4VCAZlMxhYBicViqFQq6HQ6ZDIZtsug3HJHIhHG+R8KhZBOp2E2mxlbHjmJMpmMrYPm8/ll\nb3al7YS0gz4SiYDL5cJisUCr1bJpC7ofpFCL96GXGoVCAfl8HrOzs9jY2IDdbmdnLhaLodPpoNfr\nIZFIGJFXKpXa8j6LmSZLic3NTQQCAdy8eRMGgwFcLhexWIw1tOr1elRVVbEmukgkglgsxvQMRf6l\nPHdyTvP5PMtOcDgc+P1+lh2iJWl6vR4bGxtIpVIIhULMUaS3Wo7mxVwuh9XVVdy9e5cFROFwGJub\nm6wfoKmpCQqFAplMBslkEsFgkG14pY8dVnb6B2f8ORwOZDIZNBoNamtrUVtbC51OB5VKxZihkskk\nlpeXAaBiureLx7VsNhtqampgtVphNpuZF0mebygUYlSQ5Y6age9H+4xGI6xWKywWC+rq6qDX69m+\nbY/HAw6Hg6WlpT3T7R42KOI3mUyw2Wxb/peijWQyCR6PB6lUyiiX19fXy56xICpfShnKZDIYDAZI\npVLI5XJIpVKIRCIYDAYWRVDjYjn7LcRiMes0t1qt0Ol0kMvl0Ol0EIvFkMlkbCEUKcyFhQV4PB6s\nra1t2Y1eapjNZjgcDlgsFuh0OohEIjQ0NMBut0MsFkOv18PhcKC6upoxzlEWKRgMlnSJGIEc1+7u\nbjQ1NbHep3w+j1OnTqGvr4/9HM3NzZDL5VhbW8PU1BTm5uYwNzfH9qeUOsPI4XBgNptx9uxZ2O12\n6HQ6xONx1NTU4MMPP2Ql0s7OTtTX17PG1tnZWaysrGB2dhZer7fk00VEPNTc3IxTp05Bq9Uik8kg\nn8/jxIkT6OjogFAoZM2XRqMR8Xgc8/PzcLlc8Hq9mJ2dhcfjOVSmyB+k8Ver1WhqakJ3dze6urpg\nt9shk8lQKBRw8eJFGAwG3Lp1C4lEghmjchtRSvfb7XZ0dnaiq6sLHR0daGlpAfCUdrahoQEjIyN4\n8OABBAIBM0LlzFhQul+v16O7uxvt7e1obW3F8ePHUV1dDS6XyyKJ8fFx+P1+pFKpZyKicoDP50Mu\nlzPWR4PBgNbWVpw8eRJSqRRcLhfZbBaBQIBtZwsEAohEIshkMmU9d7VajYaGBpw6dQoGgwEAcOrU\nKbS3tzM+gs3NTXi9XoyOjmJpaYmVMspp/GUyGdrb23H8+HF0dnYiGAxCLBazBlGi9aXUtNfrxdDQ\nEFZWVpgRKlfKv7m5GT/60Y/YbPz8/Dzq6+vR09OzhZAokUiwzZcPHz7E4uIi20VSDj0jl8vx/vvv\n48KFC1Cr1VhcXITb7cbHH3+MmpoaKJVKxuwXj8cxPDyMgYEBTE9PY3l5GR6PZ9fFUocBKrG0tLTg\nH/7hH6DT6ZDP5zE+Pg6BQID29nZoNBpGCb2+vo5YLAan04l79+7B5XIxltFST7jweDxIJBJcuHAB\nf/M3fwMul8uWtTU0NLAzp5JdMpmEy+XC8PAwZmdn4ff7MTw8fOhL0H5Qxt9sNuPq1avo6upCQ0MD\nLBYL4zenFBLVXsijrZTmrbNnz+KDDz5gdLIGgwF6vZ6R+aRSKczPz2N0dBSPHz9GIBAoa/MTQalU\n4uc//zlOnz4Ns9kMtVrN6FmJKS8cDmNychJ37tzB4OAgwuEwi4LK2XTW3d2NX/7yl6ipqYFer2cj\nlsRzvr6+DrfbjeHhYdy5cwcjIyOIRCJbVtOWGsTf/9577+GDDz5gBD75fB5ms5kpFKrj3rt3D/fv\n38fU1BRbpVwu2cViMWpra/HTn/4UHR0d0Gg0iMfj4HK5MBgMEIvFW7jO7969i4GBAczPz7MUbjmc\nFmoI7ezsxOXLlyGTyVh9lpb6iEQi5HI5hEIhPH78GHfv3sXg4CCcTiei0WjZuv0tFgs6OzvR2dmJ\nxsZGCAQCiEQimM1mxognEAjYuuu7d+/i4cOHePz4MVZXV1n6udSQyWTo6+vDhQsXWPBG2TYejwed\nTgeJRAIOh4NIJIKpqSncvn0bIyMjmJmZQTQaRTQaLcuIZVNTE9555x309fVBrVajUCigurqaya1S\nqSASibC+vo5QKIR79+5hYGAAY2Nj8Pl8iMViCIVCh67bf1DGX6vV4sqVKzh79ixqamrYjK3T6WT1\n/aGhIQwPD8PpdFbMvDmHw0FLSwt+9rOfQafTQSAQIBaLIRqNwu12s3rdkydPMDg4iMnJSda4Ve7x\nIalUivPnz+ODDz6ATCZDLBbD2toaVlZWWIet0+nEyMgI7t+/j9nZWcTj8bJumqNsRW1tLX784x9D\np9MxxzAej7P6YiaTwczMDAYHB3H79m34fD4kEomy3hmaj+/u7sbVq1fB5XIRjUbh9XoRCATg9/vB\n4XCwurqKxcVF9Pf3Y3BwEIFAoOzLZuRyOaxWK/r6+tDU1ITNzU2Wzl9YWGA1/ZWVFYyNjeGbb76B\n0+nE6upqWXtyxGIxrFYrHA4HGhsbWQNxJBJBOBxm/QqpVAoejwe3b99Gf38//H4/YrFYWd+p2Wxm\nO0FUKhWrH2cyGZZi5nK5CIVCWFxcxJdffomRkRH4/X6Wqi4HqGmvo6ODsSdSj1M2m0UsFgOXy8XG\nxgZWV1cxODiIL774gq12L1dgxOFwYLVa8fbbb6OmpgaFQgGpVArxeByJRAKZTAZ+vx9cLheJRAKr\nq6v46quvcO/ePaZfStWf8IMx/qTQqdbJ4XAQCAQwNDSETz/9FCsrK2zffDweRzweZ+nPchtQ2sRG\nyi
0ej2NsbAw3btzAzZs3sb6+jnQ6jUQiwS5RJUT91ExG8onFYszNzeHu3bt48OABcwBI9lgsxshn\nyl2qoGxQJBJhzZ/ffPMNBgYGMDU1xUoq9HCj0SiLmst55lRb3tzcRDgchlQqxejoKH7/+99jZWWF\nRQzr6+vIZDKIRCJb7nq5QLwbVLcNhULgcDj47LPPcPPmTZaVoI2dyWSSZYjKTZAjk8nQ0NAAhULB\nCIempqbw29/+FisrKyydT0RE1B1f7t4QYpZzOBzI5/NwuVyIRqP4+uuv8fXXX7NGPgDsvlRCXwiV\nQIlmm/qcnE4nvvjiC7YwrHh7azweRzgcfma1e6nlppFnmUzG+E2Wlpbw8OFD9Pf3b9F9NLq6fYql\nVHL/oIx/JBLBrVu34Ha7oVAo4Ha7MTU1hTt37sDv91fcTH8xZmZmcO3aNcjlcmQyGczNzeHJkyd4\n8uRJRc3Eb0c6ncatW7cQDochk8kwNzfH+AiorFLuzMpO2NzcxOLiIn7/+99DrVZjfX0d9+/fx8TE\nREUR+WxHPp9HJpPBvXv3kEqlIBQKMTMzg1u3bsHv9yMSiVRED8tOSCaTWFhYwB//+EfGWnn9+nUM\nDw8zY1mJcmcyGXg8Hty5cwdutxv5fB7Ly8tMr1BKvxJl9/v9ePToEebn51kD6+DgIAYHB5nxr0S5\n0+k0RkZGkEgkIBKJWBb04cOH8Hg8TO5Kk71QKMDlcuHPf/4z63ei/o/h4eGKCNoIlUpuv++TKd4c\nR3Vyihoq4aCfB0p18vl8CAQCbG5uslngSjQ+xeByuc9kLkj+SgeXy2V/6DFWyqPcDSQzoRJGJfcK\n6gEpPufXQXZ6nyRr8Z9KB4/HY+tuXye5KfoHsEWHV7rspA/5fD7T4WU8813t+w/K+NPBEypldv9F\noDG/7Yao0lE8e/06KXPge9kBvDaKBcAz5w28HnIDz8p+JPf+ZDjI9y2W/XU5bwI5ueWS/WXOvNzk\ncf9f9r8c47/lC7xGF71Y9tdJbuDgD6TcKCdZzBGOsF8cvbPS4wdw5pVq349whCMc4QhHOMIRjnCE\nIxzhCEc4whGOcIQjHOEIRzjCEY5whCMc4QhHOMIRjnCEIxzhCEc4whGOcIQjHOEIRzjCEY5whCMc\n4f+1B4cEAAAAAIL+v3aGBQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA\nAGAS66L/TmkRvJQAAAAASUVORK5CYII=\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Current loss: -8.133733\n",
+ "Current MNIST score: 7.012982\n",
+ "Training step: 4000\n",
+ "Time since start: 19.170171 m\n",
+ "Steps per min: 208.657508\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAf8AAAFBCAYAAAB5Maa7AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsvWlwm+d1PX6w7ztAECBAkCAp7osoiqJEUhJpLZbsqFGd\nOM6eJp3JNJO2adM2M51p59/pTDvTfGknbTrNJP3QJmnqSRzb0WLJsqzNsiRKpLiI+76DAAFi34Hf\nB/7vY5BaTImkZKs4MxwnEgU87/s+73O3c88FssgiiyyyyCKLLLLIIossssgiiyyyyCKLLLLIIoss\nssgiiyyyyCKLLLLIIossssgiiyyyyCKLLLLIIossssgiiyyyyCKLLLLIIossssgiiyyyyCKLLLLI\nIossssgiiyyyyCKLLLLIIossssgiiyyyyCKLLLLIIossssgiiyyyyCKLLLLIIossssji/yo4z3oB\nD0H6WS9gq8DhcMDlcpFKpZBOf7oui8NZ3R6ftnVn8fTB4XDYfgFW98wnfd9krhfI7vMsnjs80r5z\nn9Yq/q+Cy+WCz+eDy+Xed9hkkcXzBHJ0Py37PNPYr3dessjieccndbdviwvO5XKxY8cOqFQqzM3N\nYWVlBYFAYDu+iiEb+WfxfwXrI/9PC7L7PItnBQ6Hs5377pH2nb9d3/osQV483VQOhwOtVguTyYSm\npiZoNBr09PRgaGho241/Op1GMpnc1u/YLvxfPgzJaZNIJBCLxRCLxQgGgwiFQojH40ilUs96iQ8F\nl7ua0HvaqfdP6375tK77SUHODu1xgUAAPp8PDoeDcDiMeDz+jFf4/IBsEf0IBAKIxWLIZDKkUimE\nw2GEQiFEo9GnvrbnzvhzOBzweDwWbQMAn89HS0sLvvGNb8BisSCVSiE/Px98Ph+Tk5Ofmpd/m73E\nLDLA5XIhk8lQXFyMwsJC2Gw29PX14d69e1heXkYkEnnWS3wguFwuhEIh0uk0EonEpzLjlMX2gQw+\nl8sFj8eDVCqFRqOBWq2GSCTC0NAQlpeXn/UynwvQvebxeBAIBBAIBDAajcjPz0dNTQ3C4TAGBwcx\nNDSEubm5p76+T6XxX+9NcTgcVFdXo6CgAFKpFIFAAE6nE0KhEHK5HEajES0tLdi3bx9kMhkikQgi\nkQiGhoYgk8kQjUaRSCSe6SEpl8uhUCiQTCYhFAqh0+lgNpthNpvX1CPT6TSi0SiuX7+OycnJT0RW\nITPdKxaLIZfLYbVakZubi5ycHPB4PESjUQwNDWFmZgZOp/OZr5teTD6fj1QqBQ6HA6lUivz8fFRX\nV0MqlUIqlSIvLw8mkwm5ubkoKytDfX09XC4XBgcHcfPmzaceJfH5fAiFQpSXl6OgoABisZhxSjIP\nmnA4DI/Hg7t372JiYuKprvFREAgEkMvl0Gq1MBgMMBqNUKlUEAgELGNBEdHS0hIWFxfhcDgQDoef\n+XtKB3gymQSPx4NMJoNOp4PBYEBubi4UCgULOqLRKLq7uzE1NYVgMPhMMkUcDgcymQwajQbl5eXI\nzc2FUCgEj8djQZJIJIJCoYBcLgePx0N3dzdzcqPR6DO513TeiUQiWK1W1NbWQiqVQiAQ3MfLiMVi\ncDqdmJycxNDQ0DM5VzIdKoFAAKvViqqqKshkMojFYvB4PPD5fPD5fGg0GhiNRhQVFSEajaKqqgqv\nv/46HA4HksnkU73fn1rjT9E93fDPfOYz+OxnP4ucnByMjY3hww8/hE6nQ35+Purr66FWq8Hj8QCs\nRkc2mw0WiwVqtRperxfBYBDAs0kBUlnCbrcjEolArVajpqYG7e3taGtrY9cKrB6Mbrcbf/qnf4rZ\n2dlPTGRHL6VCoUBBQQGOHj2KAwcOoLGxEWKxGG63G//5n/+JM2fOYGVl5Zkbf2D1MJfJZIjFYuBw\nOMjNzcXRo0fx53/+59BqtRAIBOx3KeuSSCTg9/vx+uuv4+7du0/d+AsEAiiVSpw4cQInT56ETqdj\nB6NQKASfv/pKLy0tYWJiAj/84Q8/Mcafw+FALBbDbDajpqYGu3fvxt69e1FWVsaMDwAkk0k4HA7c\nuHED165dwwcffACHwwG3241UKvXM9o5QKIRCoUAsFoNYLEZeXh5qa2uxZ88eNDc3o6ioCDweD4lE\nAsvLy/inf/onvP3224hGo4jFYk91rWRAtVotqqqq8O1vfxvNzc1QqVTsPq9HNBrFvXv3cOrUKUxO\nTrI1P43zJdOoczgc8Pl8KJVKNDc34y//8i+Rl5cHuVx+n/H3+/24efMm3nzzTYyPjz/1vUH3mZxy\nuVyOlpYW/Mmf/AnMZjM0Gs0DyaR0nqRSKQwNDeHWrVuIRqNPdf2fKuNPRt9ms6G4uBiVlZUQi8WI\nRCLg8/k4c+YMFhcXMT8/j7m5OWg0GlRWVkKr1aKkpARKpRLAavSkUqlQXl6Oo0ePAgCcTic6Ojrg\ncrm27UAnD1Eul0OlUkEqlYLH4yEWiyE/Px+lpaXQaDSwWCyorKxEQUEBq8XR5uFyuVAoFPjWt76F\ngwcPIhKJ4N1338Xp06e37SUlL1wmk8FisUCpVILD4cBut6OkpAQKhQJCoZAd7gqFAlarFSaTiXm+\nSqUSx44dQ2VlJZxOJxKJBHw+H37729+it7cXoVDoqUZHHA4HqVQKsVgMEokEeXl5+PznP4+DBw9C\nqVSy+77+31DEt3//fvzLv/wLfvWrX+HChQvbekCKRCLo9XrU19ejsrISpaWlqKqqQl5e3n2RP61Z\nqVTCbrezPf8swOPxIJFIUFNTg/3790On00Gj0UChUECn0yEnJwdGo5G9B7R2Ho8HjUaD+vp6mM1m\n7N+/H+FwGA6HA/39/ejs7ERnZ+eWr5fL5SIvLw8tLS2w2+3IyclBMplEMplEPB5nWRUy/mazGUaj\nETk5OcjNzWV7RiAQQK1W49VXX0VjYyP8fj/u3LmD69evY3Z2Fl6vd8vXTqDvLygoQEtLC8rLy7Fj\nxw5UV1dDJpOtuc/rIRAIYLPZcOzYMZhMJsRiMYTDYdy4cQN9fX2YmJjYlrNRLBajsLAQVVVVaGho\ngEwmg1AohEgkQn5+PkwmEyQSCQuAMiGRSFBeXg4ulwur1YozZ87ggw8+2Ba+SyaXjD5bIBBAIpHg\nyJEjaGpqgkqlQlFRESwWC6RS6SO7X8gefP7zn4fNZsPAwAAmJycxPz+PhYUFuN3uLV3/enzqjD+H\nw2Gp/OLiYkgkEvj9fgwPD2NgYAC9vb3weDxIJBJQKBSIRCI4ePAg8vLyoFAoAKy+5LThDh48CLFY\njOnpaczOziIYDG5rNMfhcCAUCiGTyZiRCYVCjFhmtVpRWVmJuro6yOVyAB8Rt+j6xWIxXnjhBRw8\neBDRaBQikQgzMzOYnp6Gx+PZ9Bppw6bTaRYdm81m2Gw2lJSUQK/Xg8PhoKamhmVVJBLJI1ulJBIJ\n6urqUFdXx/7M5XIhEAggHo+zNOPTd
ABSqRTi8Ti0Wi3y8/PR1taG3bt3g8fjMa88mUwiHA4jEAgg\nEAgglUrBZDKhtLQU5eXl8Hq9mJ+fx/T0NPx+/5aujwx6bm4uKioqcOzYMTQ1NaG6unrNM3oQy14i\nkUAikaCgoAA2mw0Oh+Op8xTIya6rq8OXv/xlmEwmqNXqh66dQCWYwsJCFBYWsj93uVy4ffs24vH4\nlht/LpcLjUaD0tJSHD9+HPX19bDZbEgkEohEIggEAmtKDjKZDEajEUKh8KHr37dvH/bu3YtkMonC\nwkIIBALcuHEDo6OjbC9tB7hcLrRaLWpra1FeXo68vDwIBAKEQiFWskin0/D7/YxsplAooFQqIZVK\nUVVVhcrKSnC5XIRCIVa6m5ub2/KzkRylhoYGHDlyBEeOHIFKpWLBxMNAe0cgEMBisUCr1aKsrIxl\n5nw+H7xeL9xu95Y4AeT4CwQCJBIJJBIJcDgcqNVqFBYW4siRI/jMZz4DpVIJkUjEggvaM4lEAvF4\nHF6vF4FAAOl0GkqlEjk5OWhsbERpaSnu3LmDgYEBjI6Ooq+vDyMjI1heXt42MuCD8z/PHv/fg/6Q\njGAgEMDMzAy6urpw/fp13LhxA729vewAznxJtVotGhsbkZuby4wpPRQOhwOlUgmTyQQOh4PR0VG4\n3W74fL4tv6BMhm0ikUAoFILH44HL5YLb7cbKygqWl5fB4/GgVqtht9shkUjYdWd+RuZn8ng8mM1m\n7Nq1C6Ojo0+c4s3s0RYIBGwDGwwG1NfX43Of+xy++c1vYt++fdi5cydqampQVFTEiEIP8so/DgKB\nAEVFRZDJZLh79y47nJ4GaC+l02nk5uaitLQU+/btY3shmUwiGo3C5/NheHgYFy9exJtvvomLFy/C\narXCYDCwg6ekpAT9/f1YWFjYsvVlOoltbW04ceIEmpubUVhYCLFYvKYMQUYklUox/gJBJBJBqVRi\nfHx8SxzDxwFlVPbu3YsjR46wyBO436HN/DO6/vUQCATQ6/WYmJjA+++/v2XrpHtdX1+PgwcPorW1\nFXl5eez9i0QicDqdzFhKJBLI5XLIZLKP3feZ6fcdO3YgmUwiGAxua4YxlUqxUuHY2Bj6+vrg8XjY\n+xWJROD1enH9+nW8++67OHXqFLxeL+NLpVIpKBQKVkrKzc1FLBbDlStXEA6Ht3StKpUKJSUl+OpX\nv4oDBw5Ap9Ot4X8AH51/me9s5p/T2SUWi1FQUIDdu3czYvfExMSmnKzMc1EikUCr1bKggMfjoba2\nFl/60pewZ88eWCwWCIVCcLlcpNNphMNhBINBxkGbmJjAW2+9hV/96le4ePEiVlZWYLfbIRAIIJVK\nodfrUVJSgoaGBuTm5kIkEmF2dnYzQcXfPeovP1WRPwBm/AOBABYXFx/5u4lEAm63G52dnRAIBIhG\noywNnUgkAKxuvlgshng8jlAotG21OTroKNVMLz5t4Gg0inA4DLlcDrPZjGAwCJFIhFAohK6uLszM\nzLBNLBAIUFVVhYKCAqjVapjNZojFYmi12i1ZI0W9wEcOBm1MPp/PUoeZB3emAYpGo5iamsLs7CwW\nFhYQj8fXREwGgwElJSUoKChAQUEB9u7dixMnTuDKlSvo6enZ1DU87vWmUimEQiGsrKzA7/djbm4O\ni4uLmJmZwcLCAoLBIObn5zE0NISxsTFEIhFotVr4/X4cOnQIFosFoVAIGo0GfD6f7avNILMtSCaT\nQSaTQSqVQi6XQyQSsXXH43EEg0H4/X54vV4sLS3B4/EglUpBo9HAbDZDpVKhra0NsVgMnZ2dGBwc\nxPLy8pZnKR4EKu0MDQ3h3LlzKCoqgkajYZGZ3++HXq+HXC5HMpmEy+XCwsICYrEYM2BWqxWlpaWs\nTKbVapGTk4OcnBz4fL4tM0apVAperxcjIyM4f/48lEoluFwuc9TdbjcrX+3Zs2eNEYhGo+jq6sLU\n1BQikQhSqRT4fD6Ki4tZ2lqr1bJyUTgcxuzsLBYXF7clqiNeUHd3N3MCRkZGYDKZoNPpIBKJWK15\namoKDocDS0tLmJ6ehkAgYJmmkpIStn6j0fhQrsBmkEwmEQqFMDs7i9HRUaysrLByg8/nQyAQQDAY\nfGDnSiZpNy8vD9XV1TAajTAYDNBoNPD5fLh27Rr8fv+mggo6F5PJJGKx2BpinlgsRk5ODiQSCRKJ\nBAKBAGZnZzE4OIiVlRW29kAgAK/Xi66uLoyNjSGVSmFlZQXxeBxKpRK5ubnYtWsXcnNzWYkjFovh\n6tWrmJ+f3/R9fhA+dcb/cUDEoXfeeQcrKyuIRCLQ6XSQy+VIp9OQyWRQqVTweDyYm5vD1NTUttZZ\n1nusmYhEIojFYhgZGYHFYoHX6wWXy8XCwgJ+8pOf4Ny5cwiHw0ilUpDL5fje976Hz372s5BKpWtS\n7k/aDpj5bxKJBHtZiHXtdrsRCAQgFosZ65YOlswoLR6Pw+Px4NKlSzh37hwuX768JsWZl5eHhoYG\nfPWrX0VBQQEAoKSkBH/8x3+MRCKB3t7ep96b7nK5MDY2hoWFBYTDYVy5cgUXL17EnTt3WBYpc03/\n+q//CrfbjdbWVhalCIVCxgTfivXTZ8rlcvh8PszOzsLn80Gj0YDL5SIejyMcDsPtdmN8fBwDAwO4\nefMmBgYGEAqFUFpaivb2djQ3N2Pnzp3YtWsXLl26hJ/+9Ke4e/fuUzH+kUgEU1NTOH36NLq7u/Hq\nq6+ioqICk5OTmJycxNTUFGpra5GXl4dIJIK7d+/i8uXL8Hq9rJx1/PhxfPOb30RpaSmkUilLtdrt\ndoyNjW2J8U+n04jH4+ju7kZvby9+/etfryn70OFvMBhQXFyM3NxcFBUVsUyL1+vFL3/5S7z99ttw\nOp2Ix+OQSqX40pe+hBMnTkCj0bAoes+ePYhGo/jwww/h9/u3LaXr8/lw79499v+7u7sfeu0AMDEx\ngXfeeQcAUFBQgNbWVnzhC19Afn7+AwlrW4VgMIjJyUn89re/xcjICKqrq+H1erGwsIDx8XFMT09j\nfn6eBWbrjT9xcA4dOoTvfe97KC4uRk5ODmprazE6OspKv09q/DPP7EgksqYDgsPhIBqNslS+z+eD\nw+HA+fPn8bOf/YxlizIzD5nrdzgceP/998HlclFRUYG/+7u/Q3NzMyQSCex2O7xeL8tWbweea+MP\ngKVGJRIJ83wlEgni8Tj4fD7S6TT6+/vR0dEBr9f7TFnoXC4Xer0eRqMRXC4XDocDvb29cDqdLBqi\nNqjTp08jlUrhu9/9LuML/N7v/R64XC7OnTv3xKSi9Q4KRZcrKyssAibSEkX01D6USqXg9/uxvLyM\nyclJzMzMMIeFPo/az9ra2hiJSiQSIScnB3a7HTt27MD8/PxTMU6ERCIBh8OBn//85xAIBJicnMTc\n3NwDDT/dm2QyyaIRiUSCqqoqzM3N4d69e5vaQ1R73r17N6qrq1FeXg6tVssiNp/Px8SGiCOh0+
lQ\nXV3NSll37tzB2NgYkskk7HY76urqwOPxUF1djT/6oz+Cy+XCwMAAfvnLX2JycnJbeRbpdBrBYBCz\ns7N48803ceXKFfj9fvj9fgQCATgcDiiVSuaou1wuxnqOx+MYGBjAqVOnIJVKkZubCwDYtWsX/uzP\n/gz/8R//gUuXLm1Zx0tmRuVBqWZytoDV5+RyuTA+Po67d+8yrhEJQFGGL9NYpFIpOJ1OzM/Pw+Px\nPFUOxkbuD/2Oz+fDyMgIXC7Xdi+LnWdjY2Nwu92M+xMKhVjkHwqF1pS31q83FAqhu7sbP/rRj/C1\nr30Nx48fB5fLRWlpKb75zW/i7NmzuHXr1qbXmnkPdTodXnjhBRw4cIA5dB0dHTh//jzu3LnDODYf\ntzdpzy0tLeG3v/0tkskkXnnlFXA4HJhMJnzrW9+C0WjEqVOnNr3+9XjujT+1AlJtn8gk9GKGQiEM\nDQ2hp6cHwWBw2xiimaprD9sQVFcSiUQIh8OYmZlBd3c3XC7XGh4DEZ7UajW+8pWvMMPQ2tqKQCCA\na9eubYpRnLm2eDzOIs+enh5cuXIF/f39WFpaYh41lVLS6TRWVlbg9Xofeh/JcLlcLsaaznw+NpuN\npeCfFqg8dOnSJVYTfZRBTKfTiMVi8Pl8bP0mkwk5OTkYGBjYlPGnuvaBAwewf/9+VFVVrel7p1JE\nIpGAQCCA2WxmqXAyUktLS1hZWWFRJWVq8vPzkZ+fDwBMcyGdTm+70BVFTCQekxlFLi4usvbERCLB\njCU57XNzc7h9+zYOHDiARCIBHo8Hu90Os9mMO3fuYHR0dEvT549S5KRaudvtxvT0NBYWFtDV1cVI\nfNQuTDwLlUrF2tOIP9Lf34++vj643e6n3v73OMjcD9u5N+hdWlxc/Ngy7oNATtbCwgJu3ryJQ4cO\nAQBj/3/mM5/B6Ojolhh/Wi856GT8CwoKcOXKFVy7dg1nzpzB1NTUY90zcpD7+/tRU1PDMk1arRbH\njx+H2+3GO++8s+U6AM+98ReJRMjNzYXZbGYiF1RPjUaj7EWemZnZlhQcMUSJpBWLxdjPehADd3Fx\nEePj4xgcHMTt27fhdrvXKBbS78ZiMbhcLiaUQqIpdJhuBejQvnHjBqvLLS0tsUwEgDXknI8TYaFU\nHaVaMwkyYrGYCb48TTVDOvDD4fAavsOj4Pf7MT4+zrQAQqEQIpHIptcslUphMplQX1+PiooKSKVS\nJJNJBAIB9PX1oaurCwMDA0x0iBj0QqEQZWVlUKlUUKvV4HA4rGX0QaQ0q9WKH/zgB7BarfiHf/gH\ndh+2Cw8zJrFYjGUsMsmKmUQrqgsHg0GmB8Dn89He3o5wOIzXX3/9iQzH44I4DNevX8fw8DA6Ozvh\ncDhY2pdgMBhQVVWF1tZW7Ny5E0KhEA6HA0NDQ/if//kfXLlyBcvLy58IrYsHwWQy4dixY9ixY8ez\nXsqGYTAYsG/fPlgsFvZnCoWCkZK3Enw+H3K5HIWFhcjLywOfz8ft27dx6tQpOJ3OJ3qPpFIpKioq\nWCkUWHXaDQYD9Ho9pFIpKw1v2XVs2Sd9wkAMXoPBgF27dmHHjh2sNu52u9HT04ORkRGMj4+jv79/\n06SQjwNlIMiIP8z4BwIBLC8vY3l5mbWEiEQiRtLJXCPxGdLpNFpaWiAWiyGVSlm/8VYc5qlUCpFI\nBDMzM3C73fB4PE9cZ+VyuTAYDCgtLUV+fj4ztJQJsVgsqK+vx/T0NCPLPE0H4HHSx1SuIJ0GIiZt\nxXqpRY4MXTgchtPpRHd3Nzo6OjA6Ogqz2YxEIgGLxQIej4f8/HyoVCqIRCJIJBLw+Xzo9Xrw+Xws\nLS1hbm4OEokEVqsVQqEQUqkU5eXl2LNnD/bv34+hoaEt7VbYKDI7FKjLJB6PI5lMMjLmwsIC3nvv\nPaRSKbS3t0Ov17O0LqWK0+k0HA7Htq6VHO7e3l4IBAKMjo4yh5HL5cJsNqOhoQElJSUoKiqC3W6H\ny+XCu+++yzgOt27dwszMzLauczPgcDjQ6XTYs2cP8vPzkUwm4fF4sLy8/ImcZyEWi2G329HY2IjD\nhw+jsLCQOfOUqdnqoI4ic6lUCrFYDGBVJ2ZqauqxvksqlTK9merqarZ3iDRMJShSakwkElnjvxHQ\nw7FarWhra2O90alUCnNzc/j5z3+O69evM1Wo7TIy1FZIBxwdbg8CpX9o2qBQKITJZGJyxFTjpbVO\nTU3hJz/5CaRSKVpaWpi8KzHyt4J5TilYr9e7qVICRfwFBQV49dVXUVNTAwDMoeFwOKybYGBgAPPz\n84hEIltyDY+zRmBjEbBMJkN+fj6USiUWFxdZfXKz++hBjGKfz4fp6Wl0dXXh3r17CAQC7O8lEgnS\n6TQMBgNkMhm0Wi30ej37vOXlZYyPj+PChQswGo148cUX16gXlpSU4Ctf+Qr++7//+5kY/8z7JRKJ\noNFoEAqFWCZlZWUFHo8Hk5OTGBgYQFlZGSPQkVLgvn37WFlku53FZDKJkZGR+zJEPB4PpaWl+MEP\nfoDS0lJIJBIsLy/j4sWL+Pu//3tMT08/cwnxjSBTI8BoNCIajWJmZgazs7NP9V3cKBQKBQ4ePIiX\nXnoJ7e3tEAqFzGi6XK5tmVWQyfeh5xmLxR4786dSqVBaWoo//MM/xEsvvQSxWMwcTOK8ELdHJpOx\nstJW4bk0/hRlNzU14ciRIyw68ng8OH36NN577z10dHQ8FT1l8kKJrUotHA8DZSzUajUUCgXi8Tjm\n5ubuM/wAmKAF9R8LhUKUlpbib/7mb/Cb3/wGv/vd756J/C8ptNlsNlRXVzOmbCKRQEFBAQoLCxEI\nBHDz5k2EQiEolUqUlJQw4ZQ9e/awaONpHDgkzUmOyHpW8YNA7TukRlZTU4PFxcU1wkVPgkgkApfL\nhbt37zK1sOHhYdy5cwdut5s5tcCqUzA1NYXR0VHY7XbYbDbWm+5yuTAxMYGLFy/i1q1bLPK/fPky\nvva1r6G9vR08Hg9GoxHNzc24ePEi+Hz+U9cXz4RGo0FJSQnGx8dZFoXWkkwmMT4+jh/+8Idob2/H\nwYMHmUrgkSNH4Pf72ZTO7WLQU+tlSUkJcnNzmcqf1WqFRCKBxWJBYWEhrl+/jtOnTyMUCrFZFp8G\nw69SqfDFL34RL7/8MpRKJctEXrp0CVevXv1EDbPicDioqqpCU1MTjh49isrKSvD5fAQCAczNzTFu\n0uzsLHp7e7e0jEglSyI3k5HeSPAgFAphNBqxf/9+VFdXo7i4GOXl5Uin0/D5fLh79y7Onz/PMkoq\nlQrBYBANDQ24e/cuIpHIGkLqZvDcGX8OhwOFQoGcnBzs3bsXLS0tMBgM8Hq9GB4exttvv413330X\nwWBwTQpdKBRCpVJBqVQy+VoyBm63m73Aj5v6ogOMhpLQxDWqc9NDpNYuk8mEwsJCmM1m5sESP2H9\nA6c6NREDS0pKkJeXh1dffRWhUAhjY2OYmZnBysrK5
m/sBkHcg6KiItTV1aG1tRWpVIodyiqVClar\nlalYzc/Pw2g0QiQSsT7w2tpaTE9P4+rVq+wl2A5kiiTl5uYinU7D7XZjamrqkS8YEbicTifrtKio\nqMDy8jJrUXqSFDTtuXA4zBwJm82GhYUFpjFAWZ3MOn4mSZEOoIWFBVy+fBlvv/02bty4seZ7LBYL\n8vLymE6ESqVCTU0Nenp6MDExseURxkaum8vlIicnB3V1dQgEAnC5XPcpFy4uLuLNN99EPB6H2WxG\nVVUVTCYTdu7cif7+fmg0GsTj8W0z/iqVCjabDS0tLSgrK4NMJkNRURFrQxQIBEin0xgZGcEvf/lL\ndlB/GpCbm4vq6mp87nOfQ0tLC+Mp9Pf349q1a+jp6fnEXAsp47W0tODw4cPYvXs3cnJykEqlMDMz\ng46ODrz99tusA0Or1aK6uhrAKuHY7XazQVFPcrbQviQRn0QiAZVKhfz8fFYWXe/ske4GcQVOnjyJ\n2tpamM1mrKysYHp6Gg6HAxcuXMDPf/5zhEIh8Pl82Gw2VFVVYe/evYw8PjU1hVAotOn7+FwZfw6H\nw8Q1XnjhBbS0tKC4uBhisRgXL17Ev//7v2NoaOg+HXliVu7btw+tra1obm4G8BHT/dy5c/jVr34F\nr9e7qb7YxgW4AAAgAElEQVRiPp8PtVrNasSkGw6sOh9arRYnTpzA4cOHUVBQAKfTCZfLxTzNh23U\ny5cvw+Px4Pvf/z4OHDgAPp+PtrY2yOVy/Nu//RuuX7/+xGt+XNTU1ODo0aNobGxEUVERVCoVADDH\niQxUR0cHurq6WP3aYrFAo9Ewx8FutzNy5nYZf1LWOnz4MI4ePQoul4sbN27gZz/7Gbxe7wOzDuQw\nBAIBDA0NQSgUwmq1oqCggGU8fv3rXzPiz0bXTi82Cdnk5uZCo9EwciTV/UjUR6lUwmw2o7W1lUn+\nErM8nU5jfHwcv/71rzE5OXnfd73xxhtwuVz4i7/4C5SVlYHD4eCVV16BXq/HD3/4QwwNDW321j4W\nKNtlt9tx5MgRJvazftAJCUgtLCygr6+POTGkh6DVarfV0S0pKcHx48fR2tqK4uJiNrtgvYY7KcFt\np4rfVuOll17Ct7/9baY4BwCXLl3CL37xC/T09GBlZeUTUfMnrscXv/hFNDQ0oLy8HEqlkmXiLl++\njDfeeIOJWSUSCbz44os4efIkeDwe+vv7cebMGday/CSZUcoUAqulKp1Oh0OHDkEoFOLq1auYmJiA\nx+NhWTQul4umpiZ85zvfYeOTicTH4XDQ3d2Nq1evorOzE6Ojo4xfQXNrjEYjjhw5gqqqKty+fRs/\n+clPMDY2tul7+VwZf4LFYsGBAwdQVFQEkUiElZUVjI6OorOzk3lqdNDK5XJoNBqUlZXhxRdfxJ49\ne5iXSOpedOi8//77GBwcfKy1ZLb5qVQqVFZWsghzZmaGKbKRNnVtbS3KysogEAgwPDyM/v5+rKys\nPHKDLi4uIpVKweVyIZlMgs/nw2Qyoba2Flqt9r5Oge0ASdFWV1fj0KFDKCkpuU9xkJwY8phnZmaY\netXU1BQqKiqYippSqXzkEJKtgEqlQnFxMerr69HU1AQOhwOPxwOFQsF6ix+EdDoNp9OJW7duQaVS\nwWw2s2mGpN8uEolY7e5RoOujA6WgoICNp1ar1UytbX5+npEgI5EIeDwek2ulMhFlk4hDQYTE9Zic\nnIRarYbD4WClAirR6PV6TE5Oblv0/CBIpVI2rMtqtWLfvn0AgFu3bmFhYYE5yuQ8Us8/lTioI2Az\nYi4bARGu6Jk/CJSO/spXvgKv18vqwPF4HPF4HLFYDH6/H06nEwsLC4yn8KxKAgaDARUVFWhtbWV6\nEG63G8PDw7h27Ro6OjqYEt2zBo/Hg0KhgN1uR3NzM+x2O+O3jI6O4urVq7h48SJ6e3vh9XpZJrW6\nuhpNTU3g8Xhs30xMTGB0dBQdHR1MVfLjkCnVTPK7fD6fkWej0ShGRkbgcDgQCARY9xUJbu3duxdy\nuZw5DhT8hUIhLC4uYnBwEPPz8+xeczirc2wMBgMsFgsUCgX8fj/UajUEAsGmn8lzafyNRiOampog\nlUoRjUbZ0B6Px8NeMtL1LygoQHl5OZqamnDixAkYjUb2OQKBACqVCvv378eOHTuwtLT0WMafokQy\nYgaDAU1NTRCLxXC5XHjvvfdYTzzJO9rtdqjVagQCAQwPD+P8+fNYXFy8TwM9E4lEAuFwmKWyqL1Q\nKpUyA7DdBwzVP6urq9HY2PjQgS18Ph9isRjRaBQulwuRSIRJ6Pp8PvY7D5qqt9XQ6/VoampCRUUF\nY89rtVqm+rg+y5N5/xcXF3H58mWUlJRg9+7djKEvEAiQk5PDWvQeZYwynynNua+srERzczOKi4uh\nVqtZr/vAwAA8Hg/jI4TDYXC5XIyOjrKaIZHQkskkY2xT62gm4vE40xv3er2QSCRsv5CM7tM0/gqF\nAvX19SguLoZIJEJ7eztsNhuSySQ6OzuZbC5lQMxmM/bt24ecnBwAYK2QTqdzW+vSi4uL6OzsRFtb\n2yN/r7W1FS0tLQA+4vyQApzP58PExARu3bqFK1euwO12P1M+QEFBAb7+9a9jz549LOKfmZnB66+/\njmvXrm17B8XjgM/nw2AwMEeRsorpdBoDAwP453/+Z9YpxOGsjujeuXMnbDYbVCoVc8wqKysRiUQw\nODiIv/3bv2XOzcc9AzrPc3NzUVhYyGY78Hg85OXlIRQKsVknlMlqbW3F17/+dRQVFd13nmXODJBI\nJIwcTiB+gE6ng1AoZN1cUqkUQqEwa/wzwePxIJVKIZPJ2LAZt9uNq1evYmhoCBKJBC0tLaitrWVp\nZoVCAY1Gg5ycHMjl8jWa9pm1nSc5WOjFp021srKCgYEB1NXVYdeuXejr68P4+DhisRjsdjva29th\nNBpZLR8A5HI56/0UiURYWFjA9PQ0i4Toe+i/9DM4OIjLly+z393Ow4XD4aC4uBjf+MY3sG/fvkca\nbVpfJmOWiFHkCNFL8CQci49bJ62Bw+HAaDSipaUFVquVvdjl5eX4zne+g4mJCSwuLiISibBMxcLC\nAhwOB6vpUTsmzWpYXFxER0cHBgYGWIbpUch8JqSkSFPOBAIBG17V19cHn8+3xkgQQejGjRtYWFjA\nu+++u6ZEsrS0hImJCczOzj7wu4PBILq6umA0Gply3vq5ExtFZsr7cdKoHA4HOTk5qK6uRltbG6qq\nqiAWi8HlcmG32/GFL3yBTTvr7u7G/Pw86wigA3FmZgZnzpzBO++8w1Qntwsmkwm7du1ak9GimRzh\ncJiVfdY7rnQuUX+4TCaDTqdDMplkin/bMUzsUSAnW6/Xo7q6Grm5uUgkEnA6nejv72e6Hp8U8Hg8\naLVaHDlyBPv372f69xQ5+3w+lqWlccBUjpmYmEBHRwdzGNYPktrofqVza3Z2FtevX4fX62VDt7xe\nLwYGBjA+
Ps4E0Hbu3Injx4/DYDA8NBgCgKqqKsjlcuzatQs9PT24c+cOK0lQaYC4CsvLyxs6WzaC\n58r4i0QimM1m6PV65sVGIhEsLy9DIpFg9+7dOH78OA4ePIji4mLIZDL2b2OxGNOU9vv9MBqN0Gg0\nEIlECAaD7NB/XNCGAcAmxJnNZuzYsQNqtZqN26yurkZdXR3zZoFVCcmqqiq2mSUSCW7evImZmRnm\nnJAhy/xJp1dV265evQqHw7GtKX8qMezatQsvv/wyzGYzi3hpRgC1blHbSiKRwOLiIjNmRJ6jVjn6\n91vZqZB5b6hWSylBiiA5HA6sVitOnDjBBq/QVC56diMjI0xlj2rVlHaen5/HpUuXGDlvI6Dro3tB\nNX2/34+JiQmcPXsWw8PDDyT4hMNhDA8PY3h4+D5xnI/TLQiHwxgdHcXs7Cz7HSKYPm7qnEYPE5/j\nUc+MOnG0Wi2MRiNsNhvq6+uxe/dumEwmFkkJhUJUVVUhFosxR8bv98NsNrPx3IlEAvPz8zh9+jRu\n3769JSSoR4HaaGnIVjqdRigUYlLFfD6fjfml50HXIxaLIZFIoFQqkZeXB5PJhIWFBYyMjCAYDD51\n408TNaurq1m2MRgMYmBgALdv38bQ0NCmWnu3ElSizc3NxZ49e1BTU8OI01Trp1IY9d7LZDI27r23\nt5cN77Lb7WwoFGUeN3o+kqOwvLwMn8/HhOHy8vKwsrKCwcFBuFwuNhypvr4eu3btAp/PZxk5elfo\nukh502KxoKGhARUVFdDpdBgaGoLP54PVaoVMJkMkEsHi4iKmp6fZeOnN4rky/iqVCnv37kV5eTmA\n1QOV+kDb2tqYDKpGo2HiDASv14ubN2/i6tWruHXrFl577TW0t7fDYrGw9pHNHi7hcBiTk5O4dOkS\n5ubmIBAIGJGjrq4OSqWSOS0ajQatra2oqqpimycajWJlZQUXLlxYc7DTgSoWi1nGw+fzbcmaPw5y\nuRyvvPIKXnrpJRYR0UQu+hkeHsa9e/fYfOpUKoXl5WXmuNBLTC8hGf+tzFbQi0u1cJvNhoKCAja6\nlCASidgEtsLCQubEJBIJuFwuTE1N4ebNm3C5XBCJRKisrIRarQafz8fy8jLu3r2LpaWlJ1pfMplE\nb28v+Hw+uFwuxsfHWWp4I/+eQMb7UfePUtHk0K7vW3+S9W8EJLx19OhRfPnLX4ZcLodKpUJOTs6a\nOeherxc9PT24ceMGbty4gZWVFZhMJrz44ouoq6tDKpVCMBiE0+lkJb3tRn9/P5xOJ373u99BJBKx\n9lXar+QMZnZiCAQCVqem2jqVN+x2O/bt24eJiYmnKvzD4awOR/qDP/gDvPzyyyzj6ff7cebMGTZE\njH73WbYokoHMyclBaWkprFbrGhIxlQ2pHESDr2QyGfx+P/r6+nDv3j1IpVKcPXsWzc3NeOWVVzA1\nNYWuri7Gk3ocZLb60UwYapMVCATYv38/jh07hoaGBrYXYrEYm9RKZ3RmxozD4bCuIZPJBK/Xy1q4\nVSoVy+beunVry8iXz53xb2xsRGlpKbuxMpmMteLodLr7omNS91tYWEBvby+6u7sxODiI6upqlJSU\nICcnh43LFYlEm1ofeanT09OIRqM4fvw4WlpaUF5eztrdADBSHAA2xtXpdKKjowPj4+P3RXQSiYQ5\nNLTZMieSbRcKCgpQW1uL1tZWFBUVwe/3s7GxmUM5pqenMTk5ySbTUfRNabv1IPIbDQzaShKXQqGA\nxWLBoUOH0NTUBIlEgqWlJfh8PshkMqbJTqQc4CPHYf2AHZFIhOLiYtaK6ff7NzWwhYiEAwMD0Gg0\nrMzwOF7+Rp83n8+HUqmEVCplzhew6nQajUaEQiHGIfm4eigNN9Lr9Uz3wOl0wuPxsHYqLpfL2NlG\noxF79+5FY2Mj48NklrBmZ2cxMDCAy5cvs/ezrKwM1dXVOHjwIOx2O+LxOHp7e5kj9jR08ulAJvns\njehBECGMRlxTO7HFYoHZbMbOnTtx9uzZp0LKpfG39M62tbWhqKgIAoGAGcOenh44HA7IZDLs3r0b\npaWlrIXu1q1b97VIbzfkcjlr296/fz9MJhNSqRQ8Hg8bvT06Ooqenh7Wc8/lclkAQnNRgNX2V6rZ\nkyNPc0Uo4/U4JYBEIoGlpSWEQiFmI6gtb9euXdDpdPD5fBgbG2OZxEzZ6kzHirq9CgsLmQMQDoex\nvLzMSjSUAaZ3ZrNn+3Nl/JVKJduwNGVOIpEwmdRMjzydTuPu3bv48Y9/zOq74XCYeW9zc3OYnp5G\nTU0NY+JnpuSfFFSrTafTKCkpQWtrK7RaLWNlU72fNMNDoRAUCgW6u7vxj//4j5ienr7vocvlcphM\nJib+QvPEpVLptszgBlY91draWrz22mvYuXMnpFIpxsfH8frrr+MXv/jFfep869ecWQOjzU0vBZU4\nSKZ2Kw8bvV6PmpoafOELX0BdXR2AVabwwMAA8vLyUFRUBIVCsWav0FqFQiEj0qXTaQgEAuac0MCi\nzb6QoVAI8/PzuHbtGqvtbbUDR/c4Ly8Per2e1fopeqL6qcPhYJmaRzkg1JZUXl4Om80GrVaLW7du\nob+/H263G8lkEiKRCJ///OfxxS9+ETqdbs38hkQiwabhpdNp9Pb24r333sPFixcRDoeh0Whw5MgR\nHDlyBAUFBeDz+QgGg7hy5QrOnj371HQsMktSGwXNA6BMnN1uR15eHoxGI3Jycpii3nrdj60GvWNS\nqRQnTpzA97///TVzNXp7e/Hmm29iYmKCRdqvvfYavvWtbyGZTOLs2bOYmJjYUAfLVkKj0aCurg4v\nv/wyDh06xJ69y+XCrVu3cP36ddy4cYNNEM28h+vvZTAYxOjoKM6dO8cmumo0Gua4r28t3Qji8Tib\n7aBUKtHS0oI9e/awDMTi4iLeeustXL9+fY0IVSY5F1i1XbW1tXjllVewY8cOpNNp8Pl86HQ6Vk7L\nz89HSUkJbt++zVrAN7NfnivjD6wl6lHbyp07dxCPxxnZxuPx4MqVK2xIis/nYwp6BD6fz9LBFNVs\nNvIn0IMn8SAaMexyufDBBx+gt7cXi4uLrN1MKBRieXkZLpfrgdGBVquF3W6HXC5nhym1E5EnvJVR\nBXnjYrEYk5OT6OnpgdvtZt0Q1HL1qI2Z+XehUAizs7MsncXlcpGXl4cXX3wRV69eRW9v75atXalU\nspHJg4OD+PDDD9l0OJlMBqvVisrKShgMBkilUiwtLWF5eRler5cZTZ1OB7vdjurqasbqdzqdLPLY\nSO37YaD6cE5ODrxe72NPCNvod4hEIqhUKkgkEiQSCQSDQQSDQaZUJ5FIMD8/j9nZWVZvf9TBqNVq\n0d7ejurqaqjValRWVqKrqwtnz56F1+uFQqFAOr06cZAO2VAohMnJSeZ4UytcT08P69NWq9UwmUxr\nuDzkmExNTWFubu4TPR0vE9QuLBaLUVlZyYSBioqKmLLhdnRZkOEvL
CzEwYMH0dDQwAz/0tISy6CQ\ncZdKpSyTSHu5oqICf/VXf4UPP/wQnZ2dmJ6ehs/nu+9cyaxpfxzvZCNQq9WoqqqC2WxGOp3GvXv3\ncPfuXWbwKTuWeX4/7PvS6VWZ8pGRERgMBqZiSBmRJ+G7ELhcLvx+P9577z10d3czw0xdW4uLi6yr\nILMsR2sl/s7//u//ore3FzweDzqdDo2NjUy8TavVsnIlOUBUdnoSPFfG/0GtFD6fj03RUqlUUKlU\ncDgceOONNx46CYzaAKmWy+fzWRS6VUgmk5ibm8PQ0BAbnjEyMoK3334bHR0da9JVHwe9Xo/y8nKo\nVCoWhVK0QSSTrYgqeDweZDIZcnJyUFhYCA6Hg3v37qGrq4upTj2JkxGNRln/M036MxgMaG5uxvT0\n9JYaf4lEAplMhlAohPHxcbz11lsYGBjA3NwcALCxvHl5eZDJZGx0q9PpZCqQdrsde/bsgcFggE6n\nAwDMzc0xFcjNgJ7R+kzVVoFqiyqViqXpSXQqHA5DoVAwglU4HIbH4/nYtksqr1VUVKChoQEajQZ2\nux0qlQqdnZ2Ix+OQSCSYnZ3F7du3IZFI2PPu7++Hw+FghiYSiWBgYAALCwtIp9OsrYvGcfN4PDaN\nkxyz9YJd6+/l42K7ovB4PI6+vj6YTCbGxREIBLBYLMjPz9+2yaLktObn5+PIkSMoKysDl8tFPB7H\n7OwsLl68iM7OTszOziIcDrN5EfPz8+jp6YFCoYBIJMLhw4chEAiYc09S5USEpLIj6aNQ6XIzDoBS\nqURpaSkMBgO7f++++y4uXLjAMrUbBRE05+fnIZPJYDAY2ARMqv0/SaaN9lwgEMDt27dZticej983\nA+BhyBxrfP36dQgEAthsNkZi1Ov1EIvF0Ov1yM/Ph8vlYtnGrPHHR3Vuih6B1dGrXV1drD+fx+Mh\nHo8/kkRFrUbl5eUsKifDtFVIJBLo6upCOByG1WrFxMQEOjs74XQ6mdHeKCwWC/bs2QOdTsfY0eSV\nb9UBxuVyWWq3sLAQOTk5GBkZQV9fH1ZWVhAOhzdNGAuHw/B6vVCr1ZDL5SgqKrpPKGgz4HA4TLVx\ncnISk5OTbPAH3SeKDEgZjwbMkFMSi8UYW91sNqOwsBAikQjDw8OYmZlZ033xJPc+FovB4/FgeHj4\nieVHH3X9XC4Xer0eBQUFKCgogEqlYoz1WCwGvV6PRCIBh8OB0dFRjI6ObmjiJTmd0WgUXC4Xcrkc\nSqUSwGqLq9PphNvtxvnz51kmig5GvV6P5uZmaDQaJBIJeDwerKysgMfjobCwEHv37kVubi4rYRHR\n60HZJaqHboT0+CAQeZYMwVYjs5WVSILEM9kuZ48MSE5ODkpKSlgqmfbZhQsX1rDIA4EAvF4vXC4X\nLl26hMrKSlRUVKCiogICgQDFxcXgcrkwGo3weDysddFmswEAZmdnMTIywjIJT+pIkVNJ3R3hcBjd\n3d24d+/efZnajYLuv0wmg81mQ21tLRYWFhAOhzExMcGyUo+7XirXkh4FORFPeu30eXNzc5iZmWEO\nYzqdhtlshsPhYAHik2a+nivjHwqFMDw8zIQgMjc+9WE/ChwOB3q9Hna7HUVFRTAYDODz+Sxicblc\nW7JOMgzz8/NIpVJwOp2Ynp7G0NDQY28WOjzMZvMaoQhKHVOdcrOz5indvWfPHpSXl4PL5WJmZgaL\ni4trNAc2AyLpyGQyiMVi5Obmskh00+SW/z8yIV7E9PQ0JiYm4PV62ctD3xONRplxof/SXiIBD9pr\npM7I4/FgMplQUVHBjJfT6Xzs9lDq6iB50K0AOSNCoZCVsIxGI5RKJWMeU0aHpIs7OztZqnIj0VAg\nEEBfXx/y8vKY5C7N2JDJZJibm4Pf71+T2chMee/atQsGgwHpdJoRKf1+P6qqqtjgHHK+qW1ULBZD\no9EwwSOxWMz4Gl6vF8Fg8LHvP6WZPR4PJiYmtnyyZGZZku6D0WiEyWRa03Wyld/H5XIhkUigUCjY\nM6eBYWNjY5ibm2MlNzJ8xKJfXFxk42qHh4fZ+bRjxw5UVVUhHo8z4RmBQMBmW5AR3UwAkk6vTrjz\n+XyMk0Lv35MaVYlEAp1Oh+LiYtTU1KCiooJl9DZT1iXHjd4VKiVkcrlisdiG7wc9g9nZWczPz7OO\nAso8Li4uoru7+4nXCzxnxt/r9eLDDz9ktRFgtT5dXl6O+fn5DRn/goICHD58GEVFRZDJZOBwOBgb\nG8OpU6e2tB0nnU7D7/cDWD3M3G73YxtQerFFIhFkMhmrWwkEAhgMBtjtdmZMNptOlEqlMJvN2L9/\nP3bu3ImlpSV0dHRsaXaB5m8bDAYoFApotVom1LHZ7xEKhUxXWyQSYWZmho1ZBT66l6R0x+FwWIsf\nACY7TAc1n8/H2NgYkskkbDYbS0ubTCbWj3vnzp3HdrooMqGI6WGqjhsFGRsyADQJj8b6klNEKUWT\nyYSJiQlMTEzA7XZvOKpYWVnBlStXYDKZ0NzcvEaKdXJykpVV6DpI8Gbnzp04fPgw9u/fz8RQamtr\nsbi4yIS5FAoFADBHLBAIIBAIMJldp9PJnFOz2Qw+n4+JiQksLCw89v03mUw4efIkRkZG4PV6t3yy\nJPX9E7mVx+OxtP92GH/6TnrGZIyi0SjGx8dZtEt7JDPqJe5Qb28v7t27x5TsysvL8dprr6GhoQF6\nvZ6x67u7uzE0NIShoSHG49gs/H4/RkdHGe+jqKgINpsNY2Njj/1cOBwO25P19fWMmLe0tMSc4CfN\nUBBhmUDkSvrccDjMZNo3+h2Zxj8UCkGv1zMnZWxs7IkIipl4roz/ysoKrl69ioKCArz44osAVrWr\nT548icbGxjX91/QA1tcyDQYDrFYr8vPzmbBCT0/PlkT+9IJR2yHV3zgcDnp7e3Hjxg24XC5WL8rc\nJOs3DYfDQUlJCU6ePInDhw9DoVAw48/hcJh3Ozc3tyURZHl5OY4fP47i4mJWK66trUVjYyOGh4c3\nNTOb6m82m42NSs2MtEkO+EkOYbFYjKKiIhQWFsJut6OyspJN39q5cyfa29uZl55OpyEWi6FUKhm/\ngwg6FHXQIRqJROB0OpGbmwuTyQS1Wg0ALIU+NjbGnuXHpe/X9/uS1n5NTQ2ampqYCFJvby+mp6fh\ndDoBfCTKQ/eqqKiIDUVSq9UQCoUIBAJsUqFYLIbVaoXJZILNZmN1RBpnHIvF4HQ6WbT3uEJFmSDG\n+Gc/+1k0NDQw1TMioIrFYsjlclitVlitVsavAcDuP3XpkDpeLBZjXICqqioYDAYcO3aMqeuRkI7T\n6cTFixeZYt1GkJlirq2tRUNDA/bt24dQKASn04nx8XHcu3cP/f397N3MnLAYDAY/dlS3TCbD5z73\nObz88svMaFJ690n5Mo/6PjJKSqUS7e3taG9vh1KpZEaKuDXV1dXM2abyDEnkptOrUtYSiQTl5eVo\nbGxEY2MjysrK2LuaGaWO
j4/D7XZvmcwyleni8TgUCgUOHDgAm82GY8eO4b333sOHH36IlZWVRwY3\nBoMBhYWFaGlpQWlpKZPnJTVV0osIBoOPHWRQxsxiscBqtcJisUCn07EWbYFAwHgQmZF/Zsb3xo0b\nGBwcRCqVYmcel8uF2WzG0aNH0dzcjKKiIta9JRaLodPpIBaLNzWB87ky/oFAAP39/RgYGMD8/Dwb\nV9rY2Mh+J7MNZH0KLvPv/H4/pqam0NPTg87OThblbQZk/A0GAyorK3HkyBE0NDQgHo+zF2lubg7L\ny8vMYNALuL5WKBAI0NDQgC996Uuw2+2MMEUysQBY5mIr+v1VKhXy8vKgUqmgUCigUChYvYzD4WB0\ndJRlGDIdl4d5uhwOh2UorFYrm2tNsq10cAkEAlZrfxKQLGhhYSHq6upQUVEBq9XK2ujIEJEBpAlx\nma2HANhQGXoGHo8H8/PzUKlU0Gq1bGAH8Qr0ej3eeustCIXCRx5MFAlSeyP1C9fU1ODw4cN45ZVX\nWO3vypUrGBwcZBmozBY96t+mH4PBALFYDI/Hw6bgcblcWCwWqFQqqNVqaDSaNYdTMBjE3NwcSzM+\nzn4n9j6x9ilT0tjYiIaGBibjHIvF1kSimU4Pgf5Oq9WuyYRQ94NOp0NFRQV27NixRo+B9vrAwAC6\nuroei6DL4awOUTEajSgsLERBQQFaWlpYL3dPTw8byUqHuEAgYPfP6/XC4/Ew+dVwOMzIlVqtlmWN\nTp48ifb29jVtuSQPvFXGf31ZQSaTYdeuXaivr2dZLdrLarUaDQ0NsFqtyMvLY+x5cpri8TgGBwch\nl8tx4MABtLS0YNeuXUyDA/hI4nh8fBzj4+PMWG8FotEolpeXEQ6HIZVKUVlZidLSUtaVAICV1x52\n1litVlRXV+PkyZMoKytj9yAajWJ+fh4ulwtLS0sIBoMfe06uz8QRGVyv12PHjh1oa2uD3W5n79aD\nhpNRKXZubg6Dg4OsHBmPx9cQJ61WK9ra2pj8MmUmhEIhu/bN4BNp/J+0xktEi5GREbz55ptoa2tj\nan+Zn/2g/535vclkEn19ffjggw9w4cIFDA4Obkm/NR3SpaWlePnll9nwFh6Ph4MHD2Lnzp3MeNLB\nu160h4RGaLIUTWWj6w8GgxgfH8fg4CBLXW5F5D8wMIDTp0+juLiY8SmqqqqgUqmg0+nYAJa5uTks\nLQGRWmsAACAASURBVC2xw4UOgQcRszQaDV555RW0traipKQEJpOJeb107ZlSmBu9x5mIx+OYnJyE\nQqGA1WplUaJer18TudGLRc9o/efQgKbM/luqcwqFwjVpv8zU7qOQKf1KRofGA5eWlqKsrAx8Ph8K\nhQKFhYVQq9Vob29nNe7MNCWlNOVyOXNEuFwuW6dWq0UikWCRtEQiYTXOaDQKn8+H+fl5DAwMYGZm\n5rH3eyQSYUI2Xq8XcrmcqWiSE0Xfm1mK+DhkpqN9Ph+GhoZgsVhQWFi4xnnIBPEPiIC5EXC5XDZN\nMnNdtFdIPOz3f//375v/weFwEIlEMDw8jFOnTqG7u5v1y9tsNrz00kuoq6tDWVkZ8vPz2T2g84om\n/m0VMoMGYNVAabVaxk2Jx+PMqAoEArS1tcFiscBgMLDZDpkdOMPDwxCJRKirq0Nubu59o7a9Xi9m\nZmbYrJKHjcJ+kvMzGAyytkJ61vSuHTt2DI2NjYxVnykLnrkvZDIZVCoVDAYDu/fA6r4fGxtjk/ge\nVqbIdFAz26aplW9ychKRSARcLhctLS0Qi8X3BQ/rP4+4HvRuU/aC3jvSqLHZbGzWBzm31KlBIlpP\nik+k8d8M0ulV5b6zZ88yHXCJRMI8sPWRBnmLFDFTeq+npwfd3d3o7e1dMw1ws2sjWVKXywWHwwG1\nWg2DwcDGPxJoA2euk6KrYDAIpVLJPERKu1Gm4v3338fk5OQaRuhm17+0tIS+vj709fWx+0r1p1gs\nBrPZjJmZGVYf9Hq9zBsnA5efnw+9Xg+5XM4mJra3t6OyshJGo5Gl3inKW1paYuM2N+rAZNbJgdW6\npdvtxujoKIRCIRwOBwYGBljXgsViYRFMZrYis1+fDG0sFmPZAjq0XS4X3G43Zmdn4ff7mdPjcDgw\nPT39yEOd9l4ikUA0GmX7AlgtHywvL2NhYQFGoxFSqZSVVmQyGUwmE2szXH/AUGqRSEaZ0x5J81wg\nECCVSmFqagq9vb1wuVyYm5tjB/jj7pdEIgGv14vOzk7813/9F1544QVUVVWtUTN7UCROsrLT09Nw\nu91MI5+mmBGzPxqNYm5uDv39/UwemiLvTIZ/b28vrl27hqmpKcap2Qgo2zcyMoI33niDRbjAR+pr\nlIlYD+KH0B6uqqpiGTGz2YympiaUlJTAYrGwzJDX62XdJn19fejp6dnSiYT0HtB+JHW4zI6XmZkZ\nzM7OIhKJYHJyEjKZjO15mlOg1WqRl5eHVCoFn88Hh8OBSCTCnMtkMonJyUn09fVheHiYzefYKlD3\nC3E3yDEjgq3JZGLXm0lYXJ/9yNQeyFRavXr16ppR7x93T9dfGzkei4uLuHfvHk6dOoXBwUEolUrU\n19ejpKRkjdBaZnaH9rhWq2XcLHJi6DmQI0HnImXn6Lk9dzX/zW4eYqE3NTWhuroaRqORRQmZ3lhm\nOj0cDmN2dhZvv/02fvrTnzJ1va0EGZWxsTGcP38ewCqLnoxh5iZZr8xHByhFJ/R5lIYOBAK4desW\nzp07h3PnzmFlZWVLlbgCgQBmZ2fR0dHBxkzK5XJIpVI0NDSgurqaaSooFAqWdRAIBJBIJMjJycHL\nL7/MUowUPWS+pPTyBgIBTE1NoaOjA8PDw49d11rPlQgGg2woD9WiX331VRw/fnxNCxn9PrH8o9Eo\n03kQCATw+/0Ih8MQCoWIRCLweDwYGhpCV1cX3nnnHdbalFnXe9RepoOIUstUNyaHwWKxMF6FyWRC\nX18feDwe8vPz0dLSAp1Od59qJa0/cx6Bw+HAzMwMGxRFUVgkEkFnZyd+/OMfMzW/J50YRofXlStX\ncP36dfzoRz9CWVkZi1gedv3JZBJLS0s4f/48ent7MTMzA6lUCqVSCYPBwCZqAqvZhaWlJUgkElaH\nph+Khs6fP4/f/OY3WFhYeKxoOplMYmFhgSnHffe732Xz7TPxsGvh8Xiw2WzIz89/YEYx85qpjHP2\n7Fn85je/wfT0NDwez5bL+2ZmFjINInWTjI2N4fr16zh9+jRbJ7VqymQyHDt2DIcOHWICNr29vRge\nHsb09DSbXhiLxdDf3886lR61licBaQZktgw+Kpp+lKIpnZXUU9/T04PTp0+ju7v7kWdl5tof9HuU\nAaCSs1wuh06nw1//9V/DZrMxp4v2e+aAn8zsEWUQM68zM0NEo89v3bqFwcHBDY0hfhQ+kcZ/s6Cb\n9dZbb+HevXss8gfWttqsj/QCgQDGx8fXtH9t9brw/9j70uA2r/Pqgx3ETgIgQBIE9
30VqYUyKVGb\ntdqO4jRKJmntxm0nTfOjadN2MmlnMml+tDNNl5lOmulM8qOT2E4bW4kd7/JGUTspStxXAARBYt8B\nYl++H/ruDUhTskQCJC3jzGhsQSRw8b73vc92nvPgHjFxbm6OEpgikQiamppQXl5Ofy5zMl4sFkMo\nFILP54PX64XX64XH46GSlC6XC1arFQaDAUajEcFgMCc64eFwGFeuXIHD4cDNmzdx8uRJ9Pf300hB\nLBajvb0dMpmMqt2RerpAIEBFRQWUSiU1CCRa8/v9sFgsmJycxMzMDAKBAGw2GxYXF6HX67O2/sw0\n5YcffgixWEyHb6RSKXi9XhrFLC0twW6306wJg8GgkT+R9CXdCSRqJhPCMvfVw4I83Jltkw6HA3fu\n3MHy8jKEQiHcbjdN78vlctTX11OGeDKZhNVqpdmfpaUlOJ1OWodfXV2FVqtFd3c3FAoFGAwG5ubm\nMDw8DIPBQPUMtuowkr370ksvYWRkZI2ML4fDgUqlwhNPPEGFlm7fvo2xsTHMzc3B6XRS7XzSlphI\nJGg6lpDjiKOuVCqh0WjQ2NhIX7t8+TKMRuOmn1+SXXvrrbdgt9tRWFiIxsZGnDlzBnK5fE2WKJNL\nsz7Fuz5j5/V6YbfbMTw8jKmpKSwtLUGn09H++mxGy5kgpTeSxudyufB6vVhaWoLVaqXPKdnjhNwa\niUQwMDAAnU5HswWrq6tUypZE/kQvIFfrD4VCtDZOeBwkYiZ6BB6Ph4oNyWQyqhNCUuNGoxFms5mO\n384k25EJjdkCcapdLhcdwFZfX08HVW0UVJKglM1mQygUQiaT0RIeIRd7vV4IBAIEAgFcvXoVU1NT\nW77mj6XxB+5t+qGhIQwNDW263pQLkEiUpHhJfTQQCMDn89G0T2YtMBKJwO/3w+VywW63w+FwwGaz\n0XSY1WqF2WzektTjw4CQf0wmE0ZHR8Hj8VBTU0PryiwWC5WVlSgtLaWsblLDJ/eARIhut5vWrJxO\nJ/R6Pa5du4bbt2+vGaObi2goHA5TpbXR0VE6HtZms2FychLXrl2DXq+nBp1Ewpn19cy/ZwMbacYT\nlUaDwUBfI07TsWPH0N3dDR6PR1tFFxcXMT09jStXrmBubg5ms5lGEYRAZLPZ6Ajj0dFR6iRk87uk\nUikMDg7i6tWrNNVPiEqVlZXw+/2QSqUIBAL46KOP6MjVh927LpcLMzMztPzR2dmJQCAAvV4Pq9VK\nZ2dsdu2xWAwjIyO4e/culEol9u/fD6lUCo1GQ4mh69XgSGp5PZGYBCJkkMylS5dw9+5dWK3WhxoM\ntFWk0/d0K+bn53H79m3w+XzodDqMjIxQg3i/6z49PY3p6emcru/TQIKb8fFxSKXSNXodREuDdJKw\n2WwUFxejuLgYUqkU0WgUdrsdU1NT0Ol0NKhjMBjw+/3U2c72PSDn9tDQEPx+Pzo7O2G32zE4OEhL\nW5nZwczOJlIGlsvlkMlkKCgoQDgchtPphFwuRzKZxMjICGw225bX+XAsqu3H7rDUOQYhQSmVSqjV\naiiVSjpUhdTqMlP7ZFOR1DCpS8disS0paT0qiEBLb28vTp8+jaeffhoNDQ0APmnESDRB1k9a4d5+\n+22899579CEk6n5ETS4XY33Xo6SkBG1tbTS1SNbm9/tpnTybOgbZBCFK8ng8zM/P00FAJDrLTJUS\nkDnnhCFP5qBnk2yWifVGkLD8FQoFzZ54vd41h+GjvDdxLonzFg6HaXYmWyBiRUTHoaCgAJFIhPJQ\nMtO2wMZcHfKMklJRZgS6HeBwONBqtTRVT/aIy+XKyjCqXIJcS6LRkcnBIWcKuZYkdU70KzK7n4gU\nMLnm23G+ECeFZCGIYNZGn5vJiyHpf0JITiaTVL+FlBgecv880L7njf8uAINxr7dbIBBQidTtOhi2\nAo1Gg/r6evT29q4pWazf2Jk16Gg0ilAohBs3bmBoaAihUChnxufTQMSRYrHYGqbtZwFkgh5RoBwf\nH9+1jkqukYtMzEYgpDnifGdmgkhma312CMCueJYzOyxIYJHHY4+88c8jt3jYNrxMfB6NVB555JHH\nNiJv/PPII4888sjjc4YH2vfsj5HKI4888sgjjzx2NfLGP4888sgjjzw+Z8gb/zzyyOMTgkt55JHH\n443Hts8/jzzyeDDuJ0qTRx6PI9YrLX7e8bk1/vnDLo/PO/J7P4/PE/L7fS12a54vJ3eJyWSipKSE\njjolkqZ55JFHHnnk8Zghz/YHQNXAnn32WTq0o6ioKF/nzCOPPPLI43OHz0Xan8FgoL6+Hj09Pejr\n61szIjGPPPLI4/MEQu4kQ4oypcTz+PzgcxH5MxgMHDx4EP/4j/+IPXv2IBKJYGVlBS6XK7/h88gj\nj88dSCaUzIvPHA2dx+cDj33kr1arcezYMTz55JOQSCSYnZ3F2NgYVldX84Y/DwC/j4Q4HA6kUimK\ni4vBZDIRi8Xo5LO8Fnp2QSY9slgsev1JBJrrgSufRzAYDJSXl6Onpwd1dXUoLy+n45aTySSdgPfq\nq69ienp61+73PFE7e3isjT+TyYRSqcSpU6ewb98+CAQC+P1+2Gy2z0TKn8Vigc1m0/nOAoGATqly\nuVxYXV3N2mcRz/9RZ9EzmUwIhUJIJBI67/xBPAoy5czhcMDpdCKRSOzoQcNgMCCXyyGXyyGRSKBW\nq1FeXk7Hfo6NjWF5eRmBQGDH1/o4gEz1k0gkEAqFNOokxp9cY+IABAIBeDweJBIJOr1yq/dgKyO+\nySROkUgEqVQKHo9Hxxav3/dkfLfNZtvRAVZk1O2ePXvw9NNPY+/evairq6PXHQAikQiWl5fhdDqR\nTCZhs9novSBDr3YCQqEQCoUCQqEQfD7/E9MSyTRHn88Ht9u9q55P4txyudw1+3u3DBB7rI0/h8OB\nTCZDTU0N1Go1mEwmiouLodFowOPxsv55mdO9gK15pwwGA3w+H0VFRTh27Bj279+PhoYGOBwOTE5O\n4vXXX8fExERWNjsZhZk5fvdh187hcFBfX4++vj6cO3cOUqn0gSlE8rC+8sor+M1vfgOPx7NjHRdk\nNOgTTzyBJ598ElVVVZDL5eDz+fTQLikpwcjICMbGxuD3+/PdIVuESqXCs88+i+7u7jUGiMlk0ql4\nmWNwh4eHcenSJTgcDjgcDlit1i3dg/ViRo86RpjD4aCmpgZ79uxBX18fysrKUFhYSPdSJuLxOCYm\nJvCLX/wCMzMzcDgcm173ViAQCHD69GmcOnUKBw8ehFwuX2P4gXsTC0tLS/Gtb30L7e3teOWVV+Dz\n+RCJRLC0tASfz7fpz99Kf311dTXOnz+PtrY2VFZWrjlfI5EIfD4fHWn95ptvIhqNbnqd2QaHw4FA\nIIBKpQKHw0EsFoPL5YLP59sV2a3H0viTw6SzsxP9/f0oLS1FQUEB0uk0HctJNn+ubsBm3pfFYqGs\nrAxlZWV0/nZRURG6urrQ3NwMrVYLj8cD
jUYDoVAIvV6PRCKBsbEx3L59e8trJg/pw3ZASCQSlJaW\n4tixYzhx4gR6enogFAofaPxJijESiYDNZuODDz6ATqdDNBrd9odBIpGgvLwcBw4cwJEjR6BSqZBK\npWC32xEMBmG32+H1ehEKhT7hEO2UYAgxXtXV1VCr1fR6hsNhBAIBRKNR8Hg8hMNhuN3uHT9gMiGR\nSFBZWYne3l7s3buXjoHONMjpdHpNlCQUCsHj8bCysoKZmRkMDAxsea+sH7/7sFAoFKisrER/fz96\nenrQ3t4OpVIJsVj8CWMK3HN0CwsLEQgE8O677+Ly5cuIx+NIJpObXvujgsViQSQSobOzE3v37oVa\nrQabzUYikYDH48Hq6ipisRhkMhkKCwtRV1dHMxnBYBAejwcXL16E3+/f9DUn9/RRwOVyoVar0dXV\nhWPHjqGhoQFqtXrNe8bjcQSDQZSXl6OoqAhSqRQ3b97E9PR0Vvf9o9oJJpMJFouFnp4edHZ2Umcr\nEonAbrfD7/eDw+HQ9U9NTWFpaWnbSZePrfHncDg4deoULly4AJVKRQ+UZDKJVCqVE+O/1Yifw+Gg\nra0NTz75JM6cOQO1Wg2BQLAmqhAKhdBoNOjt7UUymUQsFsOPf/zjLRt/svHIfx/m2igUCnR0dODZ\nZ59FT0/PQ30Oi8WCQCDAmTNn0NnZCa/XS8sw222oiouLcejQIRw4cABNTU0AgMXFRYyNjWF8fBxT\nU1NYWFiAzWaD3++nh3amk/Sw1ypbYDAYYLPZtHMlEonA4/HAarXCYDDA7XZDLpfDZrPB4/HsGuPP\nYDBQXFyMuro6tLe3Q6vVbmgQyGtMJhMcDgdNTU2oqKjAysoKBgcHMTo6umWnZrO/W1lZiVOnTuFL\nX/oSOjo6PtWgsVgsVFdX44UXXkAsFsPIyAgCgcC2Gn82mw2RSITa2lpUVVXR7F44HIbBYIDJZILP\n50NzczNaW1vB5/PR0NCAhoYGpNNpLC8vY3R0FHNzc1ta96Necz6fj/b2dvT29tKSbeZzRrKVRUVF\nKCoqQl1dHU6dOoUf/ehHmJ2dzZohXZ/NfRgQMuWXv/xl/Omf/il1tkKhENxuN2KxGIqKihAMBmEy\nmfBf//VfsFgs234GPpbGX6FQoKamBtXV1dTrItGRz+ejBK5c9Phv9uaJRCKo1Wr09/fjyJEjKC4u\nBpfLRSKRQDAYpNEOqb+VlJRALBaDy+VCIBBALBYjEolsqq6YTqeRSCSog/Rp34PNZoPH4+HgwYP4\nyle+Aq1Wu+a9UqkUQqEQTCYTRkdHYTQaYbfbIRAI0NLSgrNnz0IsFkMsFuMP/uAPwOVy8b//+78I\nBAKPfuG2ALVajaNHj6KqqgqpVAqBQABjY2P41a9+hZWVFbjdbvh8PoTD4TXllUyDv10ZADabDalU\niq6uLpw+fRqtra3QaDRIJpOIRCJYXV2Fz+dDNBoFn89HMBiEzWajJQu9Xo9gMJiz9X0aWCwW+vr6\n8PTTT6/R1yAOeSKRQDweRyKRAIvFomUAJpOJgoIClJaWorW1FYcPHwaHw8H09PSm1rEVB72mpgZn\nzpxBWVnZmvXH43EsLS1hZmYGQ0NDiEajEIvFOHToEBoaGuizXV9fj7m5uW1NTUulUpSWloLD4WB1\ndRXBYBDXrl3DlStXsLKyAo/HQyP/0tJStLS0oLOzE93d3eDz+RtmNHINLpcLhUKBw4cPY//+/Wtq\n5pFIBMlkkmZ3iTNMMhxHjhxBMBjEhx9+iOXl5S2v5VH3CYPBgFarRXd3N6qrqymnJR6PIxAIwG63\nIxAIIBwOQygUorKyEl/72tfQ0dEBk8mE8fFxDA0NbUtZ4LEy/sRLKy8vx5EjR1BbWwuhUIhIJAK/\n3w+HwwGTyQSHw7Fj5Jv7oaSkBAcOHMCBAwfQ0tICBoMBp9MJs9lMI0+y+VOpFPr7+yGRSMBms6mz\nYzQa4fF4NvX5j8IdEIlE0Gq1NF1OSiqpVApWqxUWiwU+nw+zs7MYGBigaS2JRIKjR4+iubkZVVVV\nEAqF6O3thdvtxltvvYVQKLRtUREh+u3ZswcqlQqRSATT09O4fv06BgYGEAwGP/XhI7wMDodDHa9s\ncDBI1FtYWAiVSgUmkwkulwu5XI7jx4/jj//4jyEUCsHlcgHcO6AyDwsGg0EJchUVFVAoFFAqlVhY\nWIDJZNp2UpREIoFKpUJPTw8OHDgAsVhM12ez2eB2uxEOhxGJRBCLxehhDgBlZWWoq6sDj8dDbW0t\njhw5Ap/Ph7m5uU1Hd5s50JlMJsrKytDV1QUOh0O5K06nExaLBdPT07h58ybeeecdRCIRyOVyiEQi\nFBcXU3KjVCql92y7QIyPwWAAk8mE1+vFG2+8gTfeeAN+v39NtCkQCLB//35Eo1GaBdhsiWQrEIvF\nKC8vR3t7O6qrq2nwFggEsLS0RIMEcl9IBkCj0aC7uxvpdBqzs7Ow2WxZOecfhfzMZrNRUlKCnp4e\nlJaW0ozt6uoqTCYTdDodHA4HxGIxVCoVysvL0dbWhtbWVhgMBqjVaiQSCRiNRjgcjtwGFDl75x0A\nSbe0t7fjueeeg1KpRDKZxMrKCmZnZzExMYGFhQXMzc0hFArtmpQoABw4cADf+c53UFlZCQCIxWK4\nfv06XnnlFWrUiYElkURVVRWYTCaamppw/vx5vPrqq5s2/g8L4lxduHABe/fupdEBOQxff/11vPji\niwiHwwgGg/B6vVhdXUU0GkU0GsXMzAzef/99HD16FHv27IFIJEJRURGUSiWCwSD8fn9O10++A5vN\nBp/Ph1gsBofDwcrKCn75y1/i0qVLCIfDD7U3CIFUJpNRhycb3AUOhwOVSoXjx4/j+eefB4/Ho6zh\nwsJCiMViahxJVEb+Tl5jsVjgcDg4dOgQ2tvbEQwG8Zvf/Ab//M//vO1dC01NTfjCF76Arq4uSCQS\nMJlMBAIB2Gw2/PrXv8bVq1cpXyEej9NDncPh4MKFC/irv/or6qydOHECy8vLeO+99zad6XpUsFgs\nFBQUoKCggEahhGj27rvv4q233oLP54PX64XL5aIti263mzqRPp8PS0tLWe3QeRi43W6Mjo7CarWC\nz+cjFovB6XTC6/V+IrqMRqMYHR1FZWUl7YbabsPPYDCg0WjQ2tpKu5sSiQTsdjtmZ2fxyiuvYGZm\nBrFYjO5jDoeDEydO4Lvf/S5UKhUaGxuh0Wig0+m2TcuFPHPkPFOr1SgoKKAZZrPZjIGBAUxPT9N9\nQM7xCxcu4NChQ9izZw9qampw+vRp/Nu//Rt++9vf5jQD8FgZf7FYjLa2NnR3d6OmpgapVArLy8u4\nfPkyxsbGoNPpqIEhXtpOt/yRQ5qk3NhsNux2O65du4Z3330XV65cgdPpXHNoiEQiXL16FUqlEq2t\nrVCpVGhvb8cHH3ywLWteXV2F0WiEwWBAcXExCgsL4XQ6cfPmTbz33nsYGhra0MAkEgnYbDYMDQ2h\
nrq4OnZ2dNK1L6tSBQCDnDyuPx0N1dTVqamrA5/MRiURgs9kwMzMDo9H4UG1NPB4PYrEYtbW1KC0t\nRSwWQygUykpKl8lkQiAQQKPRUAcrkxBHSH7AvRRpJn+FtGaR9yksLIRSqQRwj8BltVoxODi46bT5\nZhCJROB0OjE9PQ2fz4dQKASbzQaTyYSPPvoIExMTCIfDdM+QcgppUauoqEB3dzcqKiqgUqnQ2tqK\n/v5+jI6OYnl5Oef7RSKRoLGxEeXl5WAymTTD9dZbb+GDDz7A7du3P7Hf4/E4bQdMpVLw+/2w2+3b\n3i0SjUYRi8VoUPAgpy+ZTMLr9cLr9X7i57bTAQgEAlhcXMR7770HnU6HkpISGAwGjI+PY3BwEEtL\nS9QpIKUhjUYDl8sFtVqNwsJClJWVQaFQwOPxbGs2Ebh3NhQVFaGgoADAvXPParXi1q1bMBgMsFqt\nCAaD4HA4kEgkEIlEiMViOHLkCCorK6HVanH27FlEo1EMDQ3Rluhs47Ey/gqFAmfPnsX+/fvBZrMR\niURgNptx8eJFesAoFApwuVwUFBTQ1ONOgsViQSwWQyQSgcfjIZ1OQ6fT4T/+4z8wOTkJr9f7id+J\nRqN49913wWQyodVqIZFIoNFoIBAIcr7edDoNo9GIF198EX6/HzweDw0NDZiamsKPf/xj6PX6B15T\nn8+HiYkJWCwWyugmpQuhULgtIh4kvblnzx5wuVw4nU76QD7sQ8bn86FWq9Ha2oqqqioYjUbYbLas\nrI8QVrlcLjgcDo3qMw1/NBpFKpWCRCKhYi2EAU0Oby6XS0tDAHDw4EF0d3fjL//yL7fV+Ov1egQC\nAUxPT0MqlcJiscBkMsFoNN43dU++y9WrV7G0tIQf/OAHlCTY1NSEr3/961hdXcXKykrODZNcLseR\nI0coKTSVSsFoNOIXv/gFdDrdhr/DYrGg1Wqh1WoRi8UQCAR2RFH0USP3zJ/f7qiffKZer4fRaMTl\ny5eh1WrR3t6O6elpGvGvd0wIX8disUAkEtG2RZVKBZ1Ot23Gn7SnMhgMyGQy8Hg8SrC02+24c+cO\nXC4XQqEQACAcDsPv9+Oll17C9PQ0qqqqoFQqwWaz8cUvfhFarRY/+MEP4PP58sb/QWAwGBCLxbQl\nLpVKYWhoCO+//z6WlpYoW9vj8YDD4TwSqz2X0Gq1uHDhAvr7+2mq0Gw2r9kkG4EIABGew3YSczJ7\nsN1uNwoLC+HxeLC8vPypka9Go8Gzzz6LtrY2unaJRIL6+nosLS1hcXEx5/eDGP+Ojg5wOBzcuXMH\nr7/++iMZ78bGRnzjG99ARUUFvU/ZiupisRhWVlZw9epV/PznP0dHRwfUajUsFgumpqYwPDxMa7Wk\nbZUciMQ5IBFRbW0tWltbcejQIer4qlQqaDQaOByObSGfRSIRyrPhcrkIhUKUdPtp9zoUCsFisWB8\nfBw1NTWora2l2h1isTjnaweAwsJC9PT0oKamBpFIBBcvXsTrr78Ol8u14c93d3fj+PHjqK+vh8lk\nwmuvvYaBgYEdMabZwE6smfCbzGYzIpEIvF4v4vH4fdfC4/Egl8tpqUCj0UCtVm/bmUgymFqtFjU1\nNSgrK4NQKITH48HVq1dx5coVSq5cj3g8jnA4TMujAoGASi8Thz8XeKyMP5/PR3l5OeRyOZLJJCYn\nJ3H9+nXY7XaEw2EwGAyEQqE10RSLxaI1up1AcXExnnzySbS2tlLlu5WVFUrGuR8ye6NJCWO7NuWq\nXAAAIABJREFUNjrpDtDpdNDpdJ8arZMUrkgkQl1dHY4ePUojomg0imQyCbFYvCa9nUtwuVzU1NSg\nsrISLBYLer0eN2/e3DDLstF3YTKZqKiowFNPPQUGg4GZmRnadpmNgzKRSMDtdmNsbAzxeBw2mw1V\nVVXQ6/W4desWPvzwQxoJZLLON/rspqYmHDlyBC0tLVAqlWCxWKipqUFHRwdu3LixLcY/Ho9TtvOj\nIhqNwuv1YmVlBSsrK9BqtVQlkOyXXDvwBQUFNCqLRCK4ceMGLl++TEtxTCYTEokEEokEUqkUx44d\nwxe+8AXw+XyMjo7it7/9bdZ7z3MB8pxmniWZwkvbCZL58Xg8D+QxMZlM8Pl8SKVSGm2T7MB2nulE\n+bGurg61tbVQKBRIJpO01n/79m2EQqEN10QyoITzAmBNB0yuvsdjY/wzQWqfZrMZRqMRsViM9sqT\nQRbRaJTeMEIc2u4NTgwJn88Hi8VCPB6HyWSiNa0HIRKJ0BY0DoeDgoKCNaSv7cSnXTc2mw2JRIK9\ne/fi4MGDKCsrA5PJhMPhgNfrxfz8PKampuB0OrflMAfW9u/GYjHaQvRpIHLGIpEIBQUFWF1dRSAQ\nyIrs7Ho4nU6MjIxAp9NRboLf71+zNz6tdW1paQlTU1OUeMZgMNDf3w8mkwm9Xn/f6HW3gcPh0CwH\n6ZkmLV+5RiKRQCAQQCgUokEG6Tsn/eYdHR3o7e3F4cOHUVlZCbFYjLfffhuXLl3CwsLCtrexbgZM\nJhNSqZSqdBIDSs6p3SSdS8DlclFVVYWKigqqieL3+zEwMIBbt25tqyyxSCRCV1cXGhsbwWAwYLFY\nMDExgdu3b8NgMNz3+hHeF+kMIEGgyWTKaQfUY2n8yYEgEAggkUjoAUe8WuIIkDYRlUqFoqIiqFQq\npNNp2k4XDAaxvLxMVd6ybZAIo5nL5YLFYiGRSMBkMsFgMDwwIkulUvD5fJTMwmazUVBQsKb2u1Ng\nMpkQiUQ4ePAgNBoNjSaEQiEaGxtRX18PsVgMl8sFs9lMjb/JZKIqYtth+MnnEJbuw2ZOxGIxjh07\nhkOHDtHI7tq1aznRFSda9lvp4AiFQvQg0Wq1UCqVKC0tRXt7O23r0uv1WVx1dsHhcCAWi1FaWoqy\nsjJwOBzo9XoMDg7CarVuy153uVz46KOPkE6n0dbWhn379oHD4dCZA1wul4rkdHR0QCQSIRgMQq/X\nY3x8nKasdzsKCgpw6NAh9PX1gcfjUYb9bnVcGhsb0draiubmZvT09IDH41HiLhnItV3BhFAohEql\nQkNDA0pLS5FIJGAwGHD37l06G4SAnNdqtRqVlZUoLCxETU0NhEIhrFYrlpaWMDk5iaGhoS0pK34a\nHjvjn1kD12g0qK2thdvtpq1ShJVLHAEej4fW1lYcOHAAvb29AAC73Y5EIoGlpSW88847mJ2dpQ5B\ntm4EMYocDod6fbFYjLLoH5TyJ54hWSeJ/Mn7bKeC2HqwWCwolUp8+9vfxpkzZz6RjSDkp5WVFdy5\ncweBQICS5bZj0uL6dCYAKpT0MJkThUKBF154AUePHgWPx8PQ0BBeffVVWCyWXZvWDYfDmJ2dhVar\nhUKhAI/HQ2lpKU6dOkWV3nbr2sl8i7q6OjoLYGZmBi+//DJ0Ot22SKIuLy/jF7/4BVgsFtrb2/H0\n00/j/PnznxhitV7wyWq1YmVlZceG4jwqxGIxvvzlL+Pc
uXMQCAQYHx/H9evXd+XocwaDgcOHD+P5\n559HY2MjZDIZ0uk0FhYWcPPmTcTjcQiFQgSDwW3J6spkMpSXl6OqqgoKhYK2NQ8PD8Pr9a7R4OBw\nOFAoFDh48CDOnz+P2tpaFBUVwel0YmFhgZb3iJplrvDYGH9S984kvhFt6lAohHA4jHg8jqamJpSU\nlNCJf1qtFrW1taisrKTKXWVlZUgmk6iurkZ5eTk+/vhjvPPOO3A6nVlTSSMERZlMRnuHSTrzfrWh\nzN8lhJCNBqPsBMrLy9Hb24uGhgY0Njaivb19wwl/gUAAs7OzmJubw8LCAj0gifHfjsM8ky8B3DMw\nIpGIsuLvB+IsEqGWUCgEl8sFu92+qwaKZCKdTiMYDOLu3buoqKhAZ2cnGAwGRCIR9uzZg6mpqZ1e\n4oYg96ipqQmnT59GbW0tVbh0OBxYWFjIaVSUCdIbT9pQH4ZjIxAI8Nxzz6Gvr4+Ohtbr9bh69SoM\nBkPO1/yoqKysRFdXF8rKysBmsxEKhXDnzh28+eabu8KxJXr5XC4XGo0GLS0t6Ovro2JhJKgYGxvD\n8PAwCgoK0NLSgqqqKiwuLmJlZSVnPfMMBgM1NTXo7u6GTCajqqypVAoFBQVQKpUoKCgAk8lEY2Mj\nGhoaUFVVhYaGBjQ3N4PBYMBkMuGNN97AwsICfD4fVlZWYLVac9qN9lgYf9JaQdjMRLva4XDAZrMh\nHA7Ti1hfX4/9+/fTITp1dXVQKBRU7CWTQBWLxVBdXQ2/34+bN29mVYCGyWSitLSUEpgIwWV1dZUa\nwfU/T7IEfD6fyhdnqrxl/nc7wWKxoNFocO7cORw4cAB1dXX3/Vlyb1KpFFgsFrxeL+x2O21n2S7D\nv743fiOng3jpZFyxUChEbW0txGIxfUBJuWI3IxwOY2ZmBu3t7YjFYjRTVF1dTVXItnuoyIPA4/Ho\neOVDhw7h7NmzKCsro/oSxGncrvUmEgk60ZGUCzMNPzkrMrsvuFwu+vv7cfjwYaTTaRgMBty5cwdL\nS0u0xXG3gMFgUPXE0tJSmlmcnp7G7du3d3RfEIebaPhLJBI0NDTgiSeeQFdXF53bEgqF6JS/1dVV\nNDU1oaioiNbRSWdLrrKiJJDk8/mUwCcSiVBeXg4ej4dkMgkej4cDBw5gz549qKyspHbH5XJRtUuL\nxYJoNAq32w2/35/TrNFjY/xbW1vR09MDsVhMBybMzc1Br9cjHA6DxWKBz+ejq6sLTz31FP07SZdv\n5MkTDz+VSmF1dTWrN4LNZmPfvn04dOgQJBIJUqkUnT1ApHwzvx9JfyoUCmg0Gpw/fx69vb2QyWT3\nNV7bAUJYLCkpoZHDg1BYWIiuri6Ul5ejoaEBLBYLwWAQTqdzW9ZPnKhM4+9wOGAwGD7RWkn0B/bv\n349nnnkGKpUKSqUSGo0G09PTePnll3H37t2cr3mriMVisFgsdF6BTCajBFGSLdstbWgMBgMqlQp7\n9+7F1772NbS2ttJx3PPz8/jZz36GwcHBHdvrmcQs4Pfa/kS3Yv24cHKmlJaW0u/G5XJ3ZIrlRiCO\ncHt7O86fPw+5XA6v14upqSnYbLYdXyOR+j1y5AgOHjyI4uJiKBQKOsWPOPOEG9LR0QEul4uKigoo\nlUoIhUI4HA5MTU3RoW7A5oOkjfgD6XSa9uwTzX6hUIiWlhYUFhYiFotBKBSiuLgYcrkchYWFEAgE\nNOMrlUrR2tqKv/iLv8Dy8jKWl5dx8eJFuN3unAZEj43xb2howJ49eyAQCOB0OjE+Pg6z2UyJeo2N\njejr66MG6n598ZkXmvw76TPNVgqGbNaqqirU1dVRb5HD4aCjo4MKtGS23ZBoSCqV0noRaVUjUwpV\nKhVKS0tht9u3hWBEuiWOHDmCkydPoqysjAoNEWNCHrbMh7SwsBAFBQUQiUQIh8MQiUQQiUTQ6/U5\nj+iIwBPhRpCxsRqNBsXFxRCLxSgrK6OOoUwmQ1NTE/r6+qjRTKfTEIlEKCsrg1QqpVrvW1n3+kxE\nNpFMJuH3+zE7O4u33noLvb29aGxsBJPJRElJCXp7ezE3Nwez2fzQ75mZQSGH2MN2TKwHkc8lqo8l\nJSVoampCb28vlEolfW+iF7DdmRZyT3Q6Hd566y1KzvX5fFRmljivSqWSBhNVVVXQarXQaDSU4HXy\n5ElEo1Fcvnz5E07+doPFYqG8vBw9PT04fPgwLXuGQiHI5XI0NjZi//79MJlM8Hg82+6wEKnfp556\nCgcPHkR7ezukUintjsrk75A91NjYCIVCQWWw+Xw++vv7kUwmMT8/D6PRiOXl5U09r+TzNnIArFYr\n9Ho99u7di+LiYnq/RSIRgHtOjFwup+d6pv3hcDiQyWQQCAQoLS1FVVUVotEohEIhbty4AafTmZOM\nxWNj/Gtra9HW1gY+nw+LxUJlEQmbu6enB3//93+PwsLC+xK71qfOiQELh8NUszsbIBGESqVCSUkJ\nNR4ikQjPPPMMzp49S+eEk5aizDVm/iHtTiwWC7W1tWhoaNg2djEptzz//PM4d+4c+Hz+mnWSPlVi\nJMjAFgaDgYKCApSVleH8+fNobGxEcXEx3nzzTdjt9pxmMYiePznAI5EIysrKKNmzoaEBx48fh0ql\nWiMisz7N29jYCKVSSR1NMp1uMyDXh8z4zoXxj0QiGB0dhcfjgUwmo+1INTU1+OpXv4qXXnrpkY0/\nIayS1jCHw/HI14C0yhUVFeHcuXP4u7/7O9qOm9mVkU6nwWazoVQqIZFIHvUSbAnkObt58yZsNhvY\nbDb8fj/m5+fv2wUkEAjw7LPP4umnn6aHfkFBAZ5//nmUlZVhdnZ2w/LedoHcv46ODvzwhz+EVqul\nGSCSnSNO7ttvv42JiYltHTlLzrT6+np84xvfgFqtXnO+ZLL4yfPD4/EoMTRz35w7dw5dXV348MMP\ncenSJTgcjk0/r/frHDAajRgfH8fp06dpppZE+CR4I0FHZuaIgGRQ+Xw+DeKamppgtVqpYme2r/1n\n3vhn3nhyaNTV1eHLX/4yenp64PF4wGKx0NDQgMLCwjUpucxJf0RdTq1Wo62tDTU1NVAoFGsGesRi\nsax4YBwOh6aGCGmPvFZSUoJ0Og0+n0912zNhtVqxuLhIZUVbW1tRWlpKJ+bF43EsLCzQ0b+5AoPB\nQHd3N06cOIHa2lqw2WwqY2kwGHD58mUsLi6uEaMhpB02m41z587h1KlTYDAYKCkpwcmTJ9HQ0IDT\np0/jpZdewuTkZE683d7eXiqdSVpuuru7odFoANwbgapSqVBQUHDfzBB5uBUKBb761a+ipaUFRqMR\nIyMjGBgY2BS7mBx2mSWcbD7s6XQaq6ursFgsWF5ehtVqhVwup3V1m82GRCKByclJ+Hy+h3o/ci06\nOzvR1NREmfjXrl2jamXkZ4hk6YEDB+ghmPm9+Xw+6uvr10R1mQc4l8tF5f8ff1pZWYmBgQGMjY1l\nZWzrwyCdTsP
r9VJRq3g8TjuANkI0GsWNGzcQCoWwsrKC3t5e7N27FywWC62trfjhD3+I3/zmN/jd\n735HswfZQGZ0ymKx0NjYiLa2NsptIteW/LtWq4VKpQKHw1lzT9lsNmpqaiAQCNDc3Ay3241oNEqz\nL7/97W+puFUuHAKSuq+uroZEIgGXy6W8qEgkglAohIWFBRgMBvh8PsjlcjQ3N6O8vBwKhWINqZfH\n40GlUuHw4cMoKSlBZ2cn3nrrLdy8efOR1v4gXlUoFEIgEKB7mZwtxN6QMdUMBgOxWAwulwt37tzB\nRx99BODeVNdnnnkGFRUV1Ba0tLTge9/7HhYXF+FwOPDBBx/g7t27WQsQPvPGv6CggKZ4yPSzkpIS\nqNVqujEzU0TRaBQejwd2ux1msxnLy8uYm5vD9PQ0JicnUVVVBYfDAYFAgOLiYnC5XOpYbCW6ywR5\n8MiGALDGgQFAxWdWV1fh8XiwurqKeDwOvV6PyclJjI6OUlbugQMHoFAo0NraCrvdDqFQuKF3mS1w\nuVyIRCK0tLTg4MGDtMVmdXUVBoMBExMTuHjxIqampu5LXGQwGCguLqY1r+bmZtpDvbKyQsld2XZg\nmpub8eSTT6KoqIiWVaqrq1FdXb3m54giHbn2RMSHxWLR8axFRUW0lDQ3NwcOh4Pr169vOqWY+ScX\nB2o0GoXL5YLRaITRaIRQKERhYSFkMhm6urqwsrKCxcXFRzL+qVQKtbW1OH78OBQKBe7cuYNkMgmr\n1UpH9fL5fMjlcjzzzDN46qmnIBKJHthWSaRd/X7/mqipuLiYTlEUCARYXV2Fy+Wicw5yDULGfRgk\nEgnMz8/D7XbD4XAgHA6Dy+VCq9WivLwcX/nKVxCPx7G0tIT5+Xk4nc5NrSkzayQSiSCRSGgEyeVy\nsX//fjoxTqPRrJEEX5/NIpM5gXsZMqVSCZVKhc7OTgC/r20T7RMyejbb158Mtqqvr6ekZpK9Wl5e\nht1uh9vtxsjICMbHx+lAH4vFgvr6eqovQoKNkpISKBQKSKVSlJeXo6WlBUtLSxgZGXnkCZf3ey6j\n0ShCodCaDOf6PU74RWazGQaDAe+//z5+9atfAQCqq6shk8nQ3NwMPp8PjUZD/7hcLkoUHR8fz1pW\n9zNv/NVqNY3cSO0R+L2BJf9PDLfFYsHw8DBefPFFLC4u0psWCoWoobVYLGhpaUFHRweAe8ZOLBZT\nOdqtIhaLwe/3w+v1IhAI0MOQbFjg3sPodDqh0+lw6dIljI2NUQZoIBBAMBikYiJCoRAdHR10YyYS\niZz2+hcWFqKzsxMajQaBQIC2qOh0Ovh8PgSDwfvOoibe+8WLF3H9+nWwWCz09/fje9/7HjVEL7zw\nAuRyOf793//9oQzRw4JwFDLbK+8Hr9cLvV6P999/nw4iSiQSKCgoQFtbG2WhE+ew8v9P4+JyuQ+M\nCDfC+lLO/bBVYh4hpy0sLGBsbIwOhQLW1iQf1vkgksZSqZRGkIT4OTU1hbm5ORiNRlRUVKC3txfN\nzc2f6piS/WG323H79m2w2WxoNBpotVrI5XIA99rSzp49i4WFBZhMJqysrOzaVku/308zFO+//z6+\n//3v4/jx4wCA/v5+yGQy/PjHP8bAwMCm3p8YSrlcjj179mDfvn2oqamBSqWie10mk0EkElGne/2+\nJyW6UChElejKysogkUjWDAsjGa+ysjJ885vfREVFBX7yk5/AarVmdVohYfeToVmRSATBYBArKyt4\n5ZVXcPfuXaoOSvr4FxYWMD4+TqNtslaRSIQ/+ZM/wRe/+EWqv19aWkqdyGxxuQh/KDM7sN65SiaT\nuHTpEt5++23Mz89jZWWFiv/Mzs7iX/7lX2jX2d/8zd/gi1/8IoDfc2JIq2u2AoPPvPEnaajx8XHI\n5XLU19dDKBQCWCvoMjs7i5s3b2JlZQVTU1O4evXqht52IpEAg8GgHjDw+2Ep2fJuk8kkQqEQBgYG\nEAgE1kjzZhp/t9uNlZUVDA8PQ6fTIRAIrEn5hMNhzM/Pw+Fw0Pcm9clc1uaSySSCwSDm5+dht9sx\nPz9Pe2k/LeVNjBfRamcymSgqKoLRaKQTDhsaGlBfXw8Oh5P1tS8tLWF4eBgVFRW0DJRMJunUOYfD\ngUgkApfLheXlZQwPD8NgMFDSDZ/Pp339BQUFaG5uhkajgVAopEJBD3Iq7gdimMm9u5/jtBWQ9/X7\n/XA6nbSMRco1y8vLj3SIk71mMBgwOzsLhUIBlUpFD1atVgur1YqSkhK0tbVBKBTeV0shlUrB6/XC\narViZmaG/iF1/qqqKjQ2NqKrqwtisRgajQb9/f0IhUL48MMPYbFYHjgIa6dApix6vV643W4YDAa4\n3W5IpVKo1WoA95zpzYLcg2g0SsmppBRI2g7vV74ia1teXobRaMTS0hLMZjNSqRSdsikWi6ljq9Fo\nwOFwIBAI0NDQAL/fj7m5Ody4cQPT09MPnND4KCDn48LCAgBgcnKSai18/PHH0Ov1lCyZ+d7rz3Mi\n3tXS0kJHeEulUipEplar17SBbwWpVAoulwuvv/46JiYm1qjIEk5XJBLBhx9+iFu3bsFms6151uLx\nOGZmZmi2ggQbpB16enoaLpcrq0HdZ974m81mWK1W6rmS9g4CkpocHBzEP/zDPyAYDCISidx3Q5Ja\nTWbKJhKJUCnPbCESieDll1/Gyy+//Kk/e7+1xuNx+Hw+6qgQ7zKzpSXbYDAY8Pv9mJiYwMTEBE3R\nbjYVRQ792dlZiMViSCQSFBQU5GTITzqdxvDwMLhcLk6ePImmpibIZDJEo1EsLi7iZz/7GW7evEnT\n1RsRnOLxOMbGxuD3++FyufDcc8+hrKxszZTIzYDct09bfzYQj8fpdyO6C6SnmwjZPCzS6TSuX78O\nDoeDuro6yGQysNlsVFZWoqKigh5gmRoaG71HKpXC8vIyrl27hhdffBHXr19f0ylSWlqKo0ePQqPR\n0Ojo5MmTKCoqooY/HA7veGvag5BKpWjLpUAgoB0lW5nJQVqESTfEei7RRsi87waDAe+99x4uXbpE\nr3nmNZRIJDh//jyeeuopOm6WcALq6urw3HPPIZFIQK/X0wApM0NFnolHuS+xWAxWqxWvvfYadV4I\nF+ZRAhsij33r1i0UFRXhK1/5Cm0PlMvl0Gg0sFgsWZEvTqfTMJlM+Jd/+Rf6GovFAo/Hg0gkQiwW\no4PDHrR+8j3J2nk8HlZWVvD+++9nnd/ymTf+AGjN+fXXX8f8/DyOHj2KM2fOUB4AYcQ/jPY9YXtX\nVVVRjzoSieSk33Kr70c85FyqQK0HSRGSKYnkINkKlpaW8PLLL9Npe8A9r72wsBDBYHBNFmarWF5e\nxsDAAHQ63Sci/6mpKSoG8mn3m2i3r9f0z3XWJRuor69Hd3c3xGIxNbzksNmM0+jxeGAymaDT6SCT\nyWhES64DyWpkZuIIcUun08HlcoHD4WB5efm+UZ3X66VDioiuBJvNhlqt
Rn9/P1VVy7x3u+0+JBIJ\nLC4uQq/Xo6KigvJ7SIS4WYedfM/x8XG89NJLaGpqQkVFBSV0KpVKWs5JJpOU22E2mzE7O4s7d+7A\nYDBsuHfD4TCuX78Ot9uNO3fu4OTJkzh06BAA0I4dkhkghnT9vdvMfSDnSmZf/mbvqdFoxN27d3H6\n9Gn6+4WFhSgrK8PU1FRWO7ky15eZMX7UcyGTlyEUCumI4GzisTH+VqsVNpsNd+7coTPmm5ubqYKZ\nQqFAZ2cnvF4vnE4njRaIBCMhcfX09ODs2bPQaDQIhUIwGo2wWq27SpGLgJAE2Ww2PcS3Y53kMCf9\nzFs1/qFQCCaTac1IXZFIhKqqqqwbf5fLBZfLRWVt7+cMkg4PUssjNWVyzTcapMRmsyEQCBAKhXa1\nnntFRQVaWlogFAqRSqWog7vZeuLq6iqcTidVaiRETgBrHMRMMiNRs9TpdDAajSgsLKSEKUJ8JQdn\nOp1GKBSC2WzGrVu3IJFIKHNdqVTi4MGDlAEeDAbh8/mo0tuDsnwPA7Ieoh1POEKb5RiQ/QT83lCQ\nDpitaNCn0/dUBL1eLxYWFlBeXo7S0lI0NjaisbERRUVF4PP5NL08MjICm81GRWXuF/3G43FaWjSZ\nTKisrMTBgwep1K5CoaAaK8SJ3Oo1J8j2WZbpPMjlclRUVEAmk9EUfC6Cu0QisamzgKT/GYx7UtwV\nFRWQSCRZFeR6LIw/AbnY165dg8lkwje/+U2cPXsWxcXF6OvrQ01NDcLhMMbHx/Hf//3fmJubQygU\nQkVFBdrb23Ho0CF0dnaiubkZBQUFMJlMeOWVV3D9+vWd/mobQiwW01kFwMOljrMBQqaUyWQQi8Ww\n2WxbmnlQUVGBr3/965RVDADFxcU4duwYPfRzhfs9RGSKHDFspBbK5/NRXl6OtrY2PPHEE7RfHrh3\nP2pra9ewpncTiPEtLCyEWq0Gl8uF3++H2+2mUfNmDlxioAnLOZFIrJmeSQ4rksIlWThCzGKz2Sgp\nKYFMJgOTyYTH46F8F1IXZbFYcLlceOONNyAQCNDW1gaRSASxWIzm5mYolUocOnQILpcLCwsLGBwc\nxMzMDIxG46bLYGRtGo0GR48eRSKRwMrKCiYmJmA2mx/5AOZyuXSAWCaRjjg7W80uEv5TIBDAwsIC\nrXcvLy+jo6MDxcXFdNS51WqFUqlEYWEhFAoFJiYmPtG6lyk8VVBQgJKSEggEAsRiMdqGTCRrRSIR\nLb3m8nl9FJC9duTIEXz1q1+FRqOhe1GtVqOpqQlarRZ2ux12u31XZe0yS2VSqRR1dXVQKpV0tHfe\n+G+AdDoNl8sFr9eLDz/8EGKxGCdPnoRSqYRSqUQ8HkdRURF8Ph9MJhOi0Sg0Gg1qamrQ3t6OkpIS\nSKVSyvYeGhrC4uLiTn+tDVFcXIxTp06hpaUF6XR6zZjfXIKQWJqamrBv3z4EAgGq7jY5OYnJycmH\naqHhcDi07/aJJ55AeXk5/TeJRIKmpibcvHkzp9/lfiB9w3K5HNFoFFarlbKF1Wo1qqur0dzcTOWM\nU6kUJSt6PB54PJ5Np9FzBalUitLSUiqYwmAwYDab8fHHH8NgMCAcDm9q76TT98Zgm81mTExMYHl5\nGdXV1VCr1eBwOGCxWPTQJZFQJBJBOBymPc2klkxmXlRVVcHv969hURM2+t27d3Ht2jV0dHSgvLwc\nUqkUQqEQpaWldHz3wMAAzWRs9qBkMpmoqKjAvn37cPz4cXA4HLhcLnR1dcHpdK5pSRwbG3vg8JjG\nxkb09PRQI0wUJpPJJHWEPm2g16eBvF80GoXf7weLxUIymaQOLJG85fP56OjogEQiQTqdRklJCZRK\nJerr66lGPuEExGIxMJlMlJWVoaOjg449z8xaKBQKqpuxnSVIIjJFyhnks8nUPK1Wi7q6Opw5c4Z+\nXwKJREI7Isj+vB9y1Xq7EYjDnEkcDoVCsFqtCIfDawiEWz3nHzvjT5BKpfDRRx8hkUhg7969UCqV\nAECJSX/913/9id/JlFe1WCyYnZ39BJt+N6G0tBQXLlxATU0NkskkHWSU65Qz2YA9PT349re/DZFI\nBCaTiWg0iv/8z/+ETqejw3seBB6Ph/b2dhw8eBDNzc1UUY9EGkQ+dzsfPgKpVIrGxkacOHECKpUK\nwWCQErQy+3gz9wwx/iaTCcvLy/D7/Q88DDNr4NsBtVqNvr4+OtWScGVefPFFGI3Gh7pnGyGdvjc5\ncHJyEsvLy4jH4/jCF74AsVhM1f8YDAYtL5B++VAoBB6PB6lUSl+PRCKQSCSoq6vD0tIUZIs/AAAg\nAElEQVQSdUhISjkej2N8fByvvPIK7dsmEsNEIZHFYtFy3VaiOTabjdbWVhw7dgyHDx+mpQbynckz\np9Pp8K//+q9wOp33JR329fXhu9/9Li1DAveMdTweB5vN/tTW00cFcbRMJhOsViumpqZQV1eHnp4e\n9PX1obOzEx6Ph+rR9/T0gMPh0Ba6cDgMq9UKp9NJ2+MIlyCTSJjZnux2uz/BgckleDwehEIhWCzW\nGtIxn89HQ0MDzp49iz/8wz+EVCr9xPUVCATUGXgQRySXuhsbgezlzGtst9tx48YN2Gw2qohJHJ6t\nYFca/2wciuRAunv3Ln7wgx9QWV/ifVdWVkIoFK7peyXRs8ViwauvvopLly5RudlHWXuuNwqTyaST\n5shGiEaj+OCDD/Duu+9mhb36IBDBjcuXL4PFYuHChQtobm4Gj8ejA3A+/vhjjI6Owmg0rukEIAzY\nM2fO4MyZMygrK0NFRQUKCgooc3lubg7j4+MYGhrCnTt3diQVNz09jVgshsXFRezbtw8HDhygPf2Z\n7VOE7Hb58mWMjo5Cp9NhaWkp64OgsgEi5VtXV0cjQpvNhsXFxQ1Jdg8L0rExNDREldi4XC5CoRD6\n+vpgt9tx7do1LC0tweFwIB6P0wwAEVAi0RfZWz6fDzabbQ0PIVOxTSaTUVU68noikcDg4CDefPNN\n2ha1lb2TSCRw9+5d+Hw+DA0N4ezZs3jqqafWKOTJZDLU1dXhz//8z/Hkk0/C5XJhbGwMY2Nj4PP5\nqKysxOHDh7F//36acQmFQnA4HLh+/To++ugj3LhxA16vNycZO+IE+Hw+zM/Pw+/307boaDRK2z3J\neHEyeEYqldJafjKZpMaIyM1yOBz6vN65c4e2+21n6pzNZkMmk6G1tRXFxcXgcDgoKipCcXExysrK\nUFVVBalUSnlJZOyvx+OB2WzG5OQk1Sa535pzLby1Hk1NTThx4gRaWlroay6XC3fv3oXdbqeOSjb2\nyq40/kTmdKsXOxKJwGAwwGAw0Dqj2WyGy+VCa2srJVCsn/Cm1+vx3nvvrWk3ehisdyRyBdJm09jY\nSBnD0WgUd+/exe3bt7MquLERSBQ2Pj6OYDBIR1SKRCLs3bsXdXV1VBZZKBQiGo3SbAEhyz3zzDP4\noz/6I+rBxuNxmM1mLC0tYWh
oCENDQ7h58+aOZV2Wl5fhcDiwsrICj8cDoVAItVq95jAJh8OwWCyY\nnp7GxYsXMT09vcaI7pb6IZvNhkgkQm1tLfr6+sBms7G6ugq9Xg+dTrfleQrpdJqS94DfD64iUe3i\n4iJee+01LCwsbHpSXKaOOyGbEcNGUuirq6sYGhrCtWvX4PP5thyBEg0Dk8mE4eFhpNNpOqSH9MAX\nFBRApVLh5MmT1KgPDg5CJpNBKBSitbUVzz77LORyOZ0OurKygrt37+J3v/sd3nzzTYTD4S2R/R6E\nzLY+Es2Pjo6uMWaEVCaXy1FXV4eKigqUlJSgsrISEokE0WiUZmtIKYvcX7/fj8HBQVy/fn3by1zp\n9D31VpVKhba2Nmi1WlRVVUGj0axJ5xOD73A4YLVa6RCemZkZmEymB7aIbiSKlEuUlZXh5MmTa0qg\nPp8PCwsLtN2cZBq3il1p/DkcDjUI2XogyEExODiI8fHxNXVGLpdL03lkNKPNZnvkC7wdHiKDcW8o\nzvnz53H+/HnIZDLKRie9ztv1AIbDYdphQWr3PB4PPB4PR44cQW1tLUwmE4B77HkiHEIkN8lDRYhK\nr732Gt544w04nU7Kyt/OGuJ6xONxrKys4L333sOdO3eg1WppKSIYDGJpaQlOpxNOpxMOhwOrq6ub\naukBcussikQidHR0oK6uju5zl8uF1157DVeuXMn60BDCPHe5XLh58ybC4fADU+IP+57rJbqJ2iWL\nxaKGl4hNZXPfEMfizTffxMTEBKqqqtDc3Iz9+/ejtrYW5eXl9Lkkk/v2799PpaAz08uzs7MYGBjA\nxYsXodfr6XCf7XYU17fihUIhqhsyMTEBHo8HgUBAW2FJjTnTYSBdPz6fb0dGFIdCIVgsFoyMjFDp\nZJKdI0FFMpnE3NwchoeHcePGDZqV8/l8VHzpQevObMPcju9Hyijr22KJkiF57bEl/GXqVT+q9vKD\nQCRz1ytBZc7ozmzF2Qxy7SUKBAKoVCrU19ejqqoKDAYDY2NjuHz5MvR6fU7nP68H6Y+/du0aAoEA\nRkZGUF5ejrKyMmg0GrS1tdEaIZfLpeM4iYKixWLB+Pg4DAYDHA4Hrly5guHhYVrb3c7vshEyBVRs\nNhssFguKioogFArpa5kDbDaLXH5HFotFZxC0t7cDuBdJLC4u4vbt29DpdDn5/GAwSAcJZdux8Pl8\n0Ol0uHLlCux2O9hsNrxeLywWC6amprI+LY9Ez2azmSohkmFgVVVVqK6uRldXF9WULywshFwupzwY\nu90OvV6P2dlZzM3NYXR0lGbNyPvvNIihzMwaZmZEgU9K1u50douUjYxGIy3Hzc/PQ6FQ0LISkf4l\n81tcLhcSiQQdpvRp+2S7vyMZDMRkMqm2wq1btxAMBmlmJVvr2ZXGn6hEcbncbYlkt2P8bbYgkUhQ\nXl5OldRisRguXbqEf/qnf9rWqJ8gHA7j448/xsDAABgMBg4ePIhTp07hS1/6EpqamlBUVLTh7wUC\nAczOzuInP/kJBgcHKbN7N4IcAA6HI6tliO04WDgcDpRKJU6cOEHHtFqtVmqIbDZbzj47V9/N5XLR\nFLZEIgGbzYbD4aBkw1xxLUgUtrS0hKWlJQwODtK5Bt///vdRWFiIVCpFB0YRp2Rqagq//vWv8X//\n939Zk5PdDmx0luwGRyUTiUQCDocDHo8Ht2/fhkgkAo/HQyAQWKPSudl1b3cJjwxoYrFYMJvN+PnP\nf47BwcGc8Lh2pfEn+vrZjPq3A9mqxTwIwWAQZrMZDocDt2/fxmuvvYaBgYFNt2llC+QBIeOEJyYm\nIJPJ7psJCYfD8Hg8GBsb29G55o87iL45SYVGo1FcunQJr776ak4Nfy5ADmCSBk0mk3C5XGAymYhE\nIjtScw6FQlheXsbPfvYzvP322wDWpm5JiYJ0wOw2EuhnHeTMJWUJMnkzM3O41azcdhr/6elp/PSn\nPwWTyYTX68XIyAg8Hk9OPmtXGv/1ddOdaPXaDDLTY7lCKBSC3W7H5OQkFhcX8dJLL8FsNu+aQ8Vm\ns8Fms2F4eHinl5IHfj8J0OPxQK/Xw+Px4KOPPsLg4OBn0uHK1ArYDZP8YrEY3G433n///Z1eyucW\nmez3bJ+D2213jEYjnfYaCoVyyx/L2TtvDbvf0u8QCJNaq9XSfuadINvk8dlAQUEBtFotzp07h0gk\nglu3bsFoNO5a7Yo88vg8g8lk0m6iLGRyH2jf88b/MwjS7gR8NgbJ5LFzYLPZkEgkqK+vp2OLd5qo\nlUceeWwL8sY/jzw+78h2S+F2qxPmkUcej4y88X9UbERS+6wccuvX/llZdx555JFHHlnFA+37riT8\nbQcylZuITjsBYZBmzpLO/LedROa6M9P/RHQjs+Ngp9e6HvfTBCdp6PUEtN2yfrLWTIETQkLdbjbw\no2J9r3ZmxL6b1525P1Kp1CeuObB79sd6ZArDZGK3rzuzSyFz7bt93cDvR+BuNNZ8N68bAJ0Rsr6H\nP9fr/lwaf0KqIAeKRCKBQCCgMqGxWIyKu5D2kcxDcydZ0mT2dzqdphPRiAMQi8UQjUYRjUYpF2C9\nKMdOgRAVyaFIlAAJe5uIbmSjPSfbIPMIyMNJBm8QhbNM9bPdtG7gXqsfm82m7bNsNhupVIoSijY7\n7jbXYDKZEAgE9BpnHpCZZKjddr0ZDAYd0UueQ7JXMv/stnUD9645mXdCxsYSpbzdGlAAvz9bRCIR\nPf8AfMI5341rB+6RcsksDNKtQOxMLtf8uTL+TCYTzc3NqK6upgpciUQCQqEQAOD1emE0GqHT6e77\noO7UBlIqlWhra4NKpYJYLKZSqXw+nw5BIZPZdtNGZzAYaGxsREtLC8RiMdLpe1rb5CC3WCywWq2w\n2+27Zs0EYrEY3d3d0Gg0EIlE8Hg8VA7U5/PBarXC5/Otkd3cDWAwGKiursbevXshFAqRTqdhNpsR\nCAQQi8XgdDrh9Xpz3kr0qCCR5759+1BXVweBQEAle4l8NZk4t9sEoRgMBlQqFfbv3w+FQgE2m43Z\n2VlYLBY6sTAYDO5KR4vBYKCtrQ1tbW0QCARwu92YnJyE3++no5eJWM5ug1gsxr59+1BWVgaBQICp\nqSnMz89TJ2C3rhsAqqqq0N3dDaFQSAckORwOqnCa63P8c2X8WSwWjh8/jmeffRaNjY1gs9lUYtPt\ndmN8fBzvvPMOxsbG1oyIfBjkkgDFYDBQUVGBP/uzP0NXVxdKSkrgcDgQDAaRTqcxMTGB69evw+Fw\nrIn6dxoks9Lf349vfetbKC4upmNGg8EgXC4XBgcHMTQ0BIvFsuNyvplgMBhQKBR44YUX0N/fD5lM\nRkc8e71ezMzM4Pr169QQ7aZ1MxgM7Nu3Dz/60Y8gkUgQCoVw5coV6HQ62Gw2jI+PIxwO7zrjD9xT\nOLtw4QIuXLgAPp+PkZERvPvuu7DZbDCZTDS7tRuNaF1dHf72b/8WNTU1SCQS+PWvf41r
167BarXC\nbDYjFAoB2D1OIvD7/XLixAl85zvfAYvFwujoKP7nf/6Hym6TOSu7DZnP6BNPPAEmk4lf/vKXCIVC\ncLvdu9IxJ2AwGOju7sYPf/hDsNlsGAwG/PSnP8Xo6CgdSpVrB/dzY/yJcQ6Hw/D5fPTPwsICZmZm\noNfrYTabYTQaabp/t4A8oGQqmNVqxY0bNzA1NQWz2QyLxQKz2bzp+eW5ElEinAQyXcvr9WJxcRHv\nvvsurFYr3G431UvP1VSzzYL9/9h70+C2rvN8/LnY930hAQIEwX0TKVKkRO2SJcuObdmRnNhxM800\naTPTZTptZzLTfuu/M/1Sf6jbpMs0k6VNnDqJmzi2Zcfaaa2kuIj7Dm7gBgLEvhO4/w/6nRNSXpKI\nuCQt45nhaGxIwMuDc8/7nnd5HoEAYrEYAoEAiUSCyva2t7cjHA7D6/ViZWWFBmC/L7hcc4lEQrnO\nPR4PpqamcPHiRYyPjyMcDmNtbQ2hUGjXEEMRiEQiqFQqmgKdnp6msreRSAThcBihUGjX0eMyDAO5\nXA6FQgGBQID5+XlMTEzg9u3b6O3tRSwWQzQaRTqd3nVBi1AohEwmg1QqRTKZxNDQEG7duoX+/n5K\no0yCrd30fAKARCKBUqmEUCjE/Pw8lRaem5ujjI+7KTAn4PP5VPUUADo7O3H79m2MjY1RMbPtuAh9\nbpw/AFq/ItSyLpcLN2/exN27dzE2NvZI6cSNzXdcfVkbNaUjkQjW1tbQ2dmJmzdvwuVybaoV/b7v\nu/HPXNu/0fmTFH9fXx+uXr0Kt9tNI9xHdZ4EXKw7kR/m8/kIBAKYnp7G3bt3KZXyVh5OLvcMUZIj\nUs8zMzPo6OjA3bt3MTMzsytvcARSqRR6vR4ikQjBYBDd3d3o7OxEf3//rjzECXg8HjQaDXQ6HRiG\nwczMDK5fv47+/n5MT0/vWruBBw7UYrFAoVAgGo2ip6cHd+7cwfT0NOfS4FuFRqNBYWEhhEIhFhYW\ncPnyZQwMDMDr9e7qNReJRCgsLIROp0M2m8XAwACuX7+O+fl5RKPRbbPjc+X8AdCDcW1tDYODg7h6\n9SpWVlYeSRSHOOSNHbJcZQz4fD5kMhlCoRBcLhel943FYo/0mRsDChIU5RqkwYw0mQ0ODqK7uxsL\nCwsIh8Nbcvzb4fyJAuH8/DwuXLiA4eFhut5bdfwf1w2eCwgEAmi1WsjlcqRSKXR1deHixYvweDxb\nvulzveYqlQoOhwMSiQRLS0u4fPky7t+/nzPHz1WAzuPxYLfbYbfbkclkMDY2hqtXr+a0j4Ur2zUa\nDfbu3Quj0YhAIIDBwUGMjIzs6iCRoKSkBE1NTZBKpZibm8PAwABWV1d3teMHHkhsNzc3o6KiAslk\nEsvLy3C73dsebH2unP9GuU2/3w+fz4e5uTlaJ/99YDKZUFRUBKVSiVQqhampKQSDwZw7UeJApVIp\nNBoNQqEQlpeX4fP5HsmB8vl8mEwm6HQ6KBQKrK6uYnFxcRM/dq4gEomg1+uh1WohEolo3TYajT7S\nzVkikUClUkEul1MlN66aqCwWC2pra6HT6bC0tITR0VF4PJ5HdkQSiQRSqRQSiYRqoHPR0KNQKNDU\n1ITq6mqIRCLaMPeo9X3SSS0SiajKZq6lRcnn2Gw2nDx5EgUFBQgEAnC73Vu6xZHAnEz2kLR7Lu3m\n8XiQSCSor69HY2MjzVosLi5uyYGSAPeTxte2CvLeBQUFOHToEGw2G5LJJK2Vb/UseHiMN5fg8/kQ\nCoWorKxEa2srFAoFYrEY/H5/Thwol/1bIpEIBoMBLS0tKCsrQyqVomWh7Q5aPjfOnxwERqMRBQUF\ntPP5UdMsJSUlOHPmDOx2O3w+H15//XVaH8s1RCIRlEolDAYDYrEYgsHgI/clkIemtrYWRUVF6Ojo\ngNfrzRWX9CbIZDIUFRXBbDZDoVAgEonA7/c/sgOVy+UoKSmBxWKBRCJBZ2cnZw135eXlOHz4MKxW\nK5aWlrC6urqlBjmFQgGj0Qij0YhgMMiZkqFarcapU6ewf/9+yGSyTQp4jwJSn1SpVFCr1VheXs75\nrZA8mxUVFTh37hxYlkVPT8/v3XT7MHg8Hq1pMwyDcDicc2dERhJbWlpw4MABGoxutcucBP0ikYiO\nwgK5ZWgUi8UoKirCiRMnIBAIMDExQcdWt/rehOuAjAnmcs0FAgEUCgVqamrQ1taG5eXlTap+WwWX\nGS6ZTAaLxUInFObn53M63vz7ZIg+N85fKpXCYDBQ/e9IJIJ4PP57vw+JyE0mE+rr67G6uoq5uTmE\nw2FOUmUCgQC1tbVobGykWQa/3//ITU98Ph8WiwVqtRoulwtLS0ucSaFaLBY899xzqK+vh0AgQDKZ\npPPDjwKxWAydTodYLIbFxUWEQqGcO35ycDmdTjQ1NUGn0206xB4V5Pbp8XgQDoc5aehRKpUoKCiA\nzWaD0Wikn7nVg4VI03q93kcqj/02aDQaNDQ0oKmpCXq9HpFI5BOloH9fsCxLha+4CBIbGxtx6tQp\n1NXVQSqV0j2ZC5AgggtNeblcjrNnz+ILX/gCDAZDzkcoNxKO5XrNa2pq8OKLL+Lw4cM0G5XLmzNX\npEYMw+Cpp57CCy+8gJKSEsTjcayurj6SH/ok/D42f26cv0QioaluhmHg9/sRDocf6b3IgZpIJDA9\nPY2JiQl6oOcafD4fDocDFRUVkMvlSCaT8Pl8W8owrK+vIxAIYHx8nI7YceH89Xo9Dhw4AIfDQcmT\ntqJASHS719bWsLS0xMntmcfjQSQSoaCgAKWlpZRcZqvOP5PJIJFIIBwOIxqNcuKINBoNLBYLTCYT\nVCoVUqnUlmvF5ACPx+M500h/GGq1Gvv370ddXR2USuUmUp+tgNhJHCgXjqi8vBzPPfccSkpK6IRF\nLgIXsu5cddnLZDIcOXIER48ehVqtplmRXIErYh0ej4fi4mKcO3cOVquV7u9cngNc9YUIBALs27cP\nTz31FGQyGWZnZzm7eP0u+Nw4/41jRKlUCouLi1hbW3uk92JZFvfu3cP8/DyCwSBCoRDC4TAnTXM8\nHg9KpRIqlQp8Ph/xeBxer/eRnX8ikUB7ezuEQiEikQgtH3DxkJJeBRKdJxKJLd3819bW0NXVRbXc\nuQhaSJpbJBKBz+fT2uVWb+qBQACRSITazMXBaDab4XQ6IZPJKBPkVp0/GS8FuDvQFQoF6uvr4XA4\nAID2GOQicNmYjePiJqfT6VBaWkoJrBQKBWWu3KrtXNWASTlEr9dDp9PRZmKtVguhULjl9+dy6omw\nmmq1WkgkEqyvr0Or1UKtVucsW8QFRCIRFAoF7VkibIpFRUWUZG678blx/oWFhTh48CDMZjPW19cR\nDAYp6cbvA7KxvV4v1tbWOKdH5fP5KCwshNVqpYHLVrIMmUwGHo8HALcPKaHElcvlm6grt5IdSaVS\nnM94S6VSFBQUQKlUbuLF3+oNLJ1Oc95Bbbf
bUVVVtekwyYUT4pLzgsfj0TXXaDQ5fW9O2dH+X2Cr\nUqloKTGdTu9qB0Sg0Whgt9uh0WggEonoOj08RbPbIJFIUFFRgfLyckil0o/obexmWCwWtLa2ori4\neBPd806C99v/yuOBsrIynDt3DsXFxchkMohGo4/cGUoORK5JO0jTT3FxMZxOJ0QiEdLp9JZJiLim\njWQYBiKRCFKpFAqFAkKhkPM0Zq6gUChQUlICrVb7kY7l3Ww3wzBwOByor6+HQqHgrMadawgEAjrF\nIZFIdj3/PYFQKIRWq4VSqaQZlkwmw1kJLZcwm82oqamBWq0GAKqf8ChTT9sJuVyOtrY2NDc30zNl\nfX0diUSCk0brXKKiogJf//rXUV1dTZ/NRCKxo6RVj/3Nn9RwdTod5WhfWVmhLEq7GWq1Gna7HVqt\nNqf1RC7BMAykUimam5uxb98+GqGTbMDDKnO7BSRgKSkpwdNPP43S0lKwLEvtFggEtPFvt0Gr1cJi\nsaCkpARGo5HeLIjd5Fa629acz+fj6NGjOHPmDPR6PXWeLMtSu/l8/q57Tvl8PkpKSvDSSy/hyJEj\n9CAnJS2y7rtNNIlk41pbW/HlL38ZFouF6iWQKQVCyrXb1lylUqG4uBj79+9HTU0NAGBpaQkrKysI\nBoNIpVJ0v+ymZ1QqlaK4uBgNDQ2oqKiAUqlEIBDA5OQkZTlNp9NUfGs7n9HH1vkzDAOZTEZZoMrK\nyqiYD+mi3W0bnEAqlUKn08HhcKC2thYmk4mmuIgz2o1BAMMwMBqNKCkpwdGjR9HY2AihUEhvcaSW\n+yjlFi5BUs8lJSVoaWnBwYMHYTKZkEgkqNOXy+WcTXQ8Kkjt1m634+DBgygrK4NMJkMsFqNBL6FA\nJWyKuwWkxnz48GEcO3YMcrmcUm4T2lNCVhQMBnfY2t+Ax+OhsLAQTU1NePbZZ1FUVIRQKISlpSXE\nYjHIZDKIRCKYTCb4/f5dtdfVajXKysrQ1taGgwcPIh6PY35+HlNTU+DxeIjH41AoFNBoNHQkd7fA\nbrejtbUVjY2NMBgM8Hg8GBoawvT0NIRCIRKJBIxGI7LZ7CM3cnMBhUKBffv2oampCVarFaurq3C5\nXLh79y7W1taoIqter8fa2tq2ZgEea+dfUlKC1tZWfOELX0B9fT11QkSyd7dxhAMPDheLxYJnnnkG\nLS0tqKurg9lspp3WwIPaF9k0uwUkKDl27Bi+/OUvw+l0wmAw0DnzRCIBmUwGpVKJUCi0qxyRUCiE\n1WrFH//xH+PIkSOwWq1IJpNYXV2lLH82m42K4ewWCIVCmEwmtLW14Zvf/CYdg5ybmwOfz4fRaIRG\no4HT6cTo6OiuClyKiorQ1taGlpYWFBcXI5lMYmxsDH19faisrEQmk0F1dTWCwSD6+vp22lwKsViM\np556Cs899xxsNhtSqRTm5+dx/fp1BINBNDQ0QKFQ4NChQ+jo6MDs7OxOm0xRUVGBv/zLv8S+ffvA\nMAymp6fR0dGBK1euwGKx0PJiPB6nolW7BSdPnsSf/MmfwGazYWVlBTdv3sSHH36IsbEx2O12yGQy\nHDhwAD09PRgeHt5pcyn0ej3Onj2LI0eOgMfjobOzEx988AG6u7uRSqVo6aihoQFdXV3w+XzbZttj\n6fxJ1/OhQ4dw8OBBtLa2AgCGh4fBMAzcbjftGhUKhbsmPUdm+tva2nD69GlUVVXBZDJhcnISIyMj\nkMlkSCaTcDqdtM61k6MiG1FQUIDm5mY8+eSTOHDgAJUG7erqop3zRqMRVVVVSKfTdK4Y2HnFrYaG\nBpw8eRJHjhyB0+lEOp3G8PAwpqamUFhYiFgshtbWVgiFQjAMg0AgQFO8O2m7RqPBqVOncOrUKVRW\nVsLn88HlcuHWrVvg8/moqKiAwWDAkSNHIJVK4XK5Nok/7ZTtJDB/5plnUFlZCR6Ph6mpKdy7dw+3\nb99GPB6H3W5HY2MjbRqdmZmhCnM7ZTvpNq+pqUFDQwNUKhUGBgZw48YN3LlzB8lkEgqFAgaDASdP\nnqSvT0xMIBKJ7KjCHI/Hg1arRWNjI6xWK1KpFPr6+nDjxg3cv38f8XgcGo0GVVVVKCoqgslkos/A\ndgnNfBzIpcJisdC9Qrjwe3t7sby8DKFQiPr6ejQ0NMBqtcLpdGJ4eJjStu/UfhEIBJDJZLDZbDCZ\nTACAiYkJ3LlzB7Ozs3Syoq6uDg6HAw6HAyMjIxgaGkI0Gt3SWPTvZB9n77yDcDqdOH/+PE6fPo3a\n2lqwLIv+/n7cuHEDPB4PkUiEMuY9Kq8/FxCJRDh9+jReeOEF7NmzB1KpFJFIhAoPmc1mMAyDhoYG\nOl5I9OV32oGWlZXhz//8z7Fnzx4UFBQgkUhgbm4Ob7zxBpRKJRwOBwoLC9HS0oJwOIzp6elNgctO\n2n/q1Cn8xV/8BbRaLeVAuHHjBj744APs3bsXtbW1eOKJJyCTyZDNZjE8PIx0Ok3Tojt1KBqNRnz1\nq19FW1sbxGIxfD4fBgYG8M4770AgEODAgQNobW1FVVUVdDodbt68iUAgQMckd9L5O51OPP/88+Dx\neHC73eju7saNGzfQ0dEBgUAAuVyOI0eOoLKyEmVlZXjrrbcoMddONTIS52+xWGCz2QAAU1NTeO+9\n9zA1NQWJRAK73Y7S0lK0tbWhrq4Od+7cwf/8z/9gbm5uW2RaP8luPp8PiUQCtVoNiUSCYDCIu3fv\n4ubNm/D7/TS7WFdXB5vNhieffBKvv/46fvrTn8Lv9++Y7aS0JRKJaAlxeXkZ7e3tWF1dpbVyk8mE\nEydO4PTp05ibm8N3vvMd3L59exM3xXaDKIOSLG02m8X8/DxGR0fpiCKfz0dNTZabwhYAACAASURB\nVA3OnDkDPp+PS5cu4dvf/jYWFxdzzur4Efs4edcdglKphNPpxLFjx3D69GlYrVbE43HMzc2hvb0d\nb775Jt1MCoUCZrMZZrMZAoGAzv57vV4Eg0FUVlbCarVCJpNhenoa/f39lOiEC1itVtTU1KC5uRml\npaUQi8WYm5uj0fng4CBkMhnKyspQUVGBEydO4NixY8hkMggEArSWxLIsqqqqIBaLkUgk0NPTQ/UL\nuOKqrqioQEtLC5xOJ9RqNU0bEpUtlmUxNTWFkydPorS0FHa7HYlEAvF4HJFIBIuLixgeHkZhYSFt\nQpqZmUFPTw+djecCBQUFqKmpoZ3PfD4f4+PjePfdd2l0Tgh5KioqsH//ftTX1yMQCNBpkYmJCayu\nrsJut4PP52NtbQ29vb0YHx/ndBqkrq4Ox48fp/s3m82ip6cHH3zwAXU0oVAIJpMJxcXFOH78OPbs\n2YPz588jlUohGAzC5XJBIBDAZDLB4/FgdnYWPT098Hq9nK25Xq9Ha2sr9u3bR+0OBAK4f/8+JiYm\nEI1G0dfXB6FQiKqqKthsNhw/fhwOhwOrq6tIpVLwer1YXl6GSqUC8KDxa2hoCP39/Tmhp/0k1NXV\n4d
ixY3A6nZRDwOv1YmZmhvJ93Lx5E2azGfX19SgsLMTp06fhcDgQDoeRSCTo7yCVSrG2tkalaOfn\n57dMCfxJkMlkOHnyJJ566inI5XKsr68jFovRC0Q6naa18z179sBisaCwsBDnz59HY2MjJadaW1uD\nUCgEj8fD0tISxsbG0NPTw2kvTHl5Oc6cOYOmpiZks1nKg0+on9fX1+FyuTA5OQmv1wur1Yry8nJ8\n85vfxNmzZxGPx+nzKhaLEY1Gsbi4iPv372N8fJxT+fYTJ07g2WefRWFhIVKp1KbgNZvNIhKJYG5u\nDsvLy4hEIigsLMShQ4eg0WgQiUQQiUSoBohYLMbi4iKmp6fR09OD1dXVLXGmAI+Z81er1Whra8Ph\nw4exZ88eJBIJLCws4O7du/jwww9x584dAA86R8vLy2mqSKVS0U20sLAAr9eL5uZmlJeXQ6FQ4O7d\nu/B4PJzwmhM4HA488cQTqK+vh8FgQDQaxcjICC5duoSenh6Mj4+DYRga5ba1tcHpdNIbn9vtRl9f\nH1iWRWtrKyQSCcLhMJLJJCKRSE5U3T4OYrEYTU1NOHDgACwWCwBgYWEBN2/exI0bNzAzM4NEIoHl\n5WU4HA44nU4cOXIECoUCABAKhTA5OYmioiI4HA6UlpZSOVePxwO3241AIJBzuwHQw7mqqgoikQh+\nvx9DQ0N4//33qVP3er1QqVRobm7G/v37UV1dTb+HeDyO3t5euN1uVFVVgcfjYXl5GcADQiKfz8eZ\nUld9fT2OHz8Oo9FInUpPTw86Ozsp/fPy8jJGRkZQW1uLpqYmtLa2gsfjIZ1OU1VLgUAAu92Oubk5\n3L9/H6FQCOvr6/B6vZzYrdPpcOLECTQ0NFC64/HxcYyMjGBhYQHJZBIzMzMQiUQYGhqCyWTC3r17\nUVdXR0ekFhcXMTs7C71eTwNLlUoFn8+H1dVVzhq+ysvLcfbsWRQVFSEWi2F2dpaWUkgmKBQKob+/\nHw0NDWhpacGePXvQ0NBAg4XFxUXE43EolUosLS1hZGRk02tcnC8SiQSHDh3CoUOHIJVKsbKyguHh\nYXg8HuqMyPnQ398Pm82G1tZWHDhwAAcOHADLsgiHw1hZWYFIJAKPx4PL5aJ9AZOTk1hZWcm53QBg\ns9nwwgsvoLKyEslkEpOTk3C5XJtktVdWVjAxMYG+vj4olUrU1taisLAQwINbs9frRSgUgkwmQyAQ\nwNTUFA2CZmdnc85uSNDU1EQnWbxeL4aGhrCyskIzV4T4bHJyEpOTkygoKEBVVRWqqqoAgFJqsywL\nqVSKqakp9PX1gWEYKhdNfsdHwWPl/A0GA5577jm0traCZVl4PB50d3fjxz/+Mb2BAkAsFoPL5UJj\nYyOOHDkCjUYDsVhMuedTqRSUSiXVRE8mk5iamkIymUQoFOLE9pqaGrz00kswGAwIhUIYHR3F1atX\n8atf/Qp+vx/Agy95enoa2WwW+/btg81mg1QqRVFREaqrq+mDStjGgsEgpqen4fP5EAgEOHH+UqkU\nR48exbFjxyCVSjEyMoLbt2/jww8/xNDQEE1dxWIxtLe3QyaT4dChQ3Q+WqVSQafTobKyEhKJhH4P\nfD4fbrcbt2/f5sz5FxUV4dlnn4Xdbkc0GsWtW7dw7do1jI2N0Q5zlmUxOTmJ73//+5DL5aivrwef\nz4dIJIJYLMa+ffsor3s2m6VCQGtra+js7OTM+dfU1ODIkSNQq9UYGhrChQsX0NPTA7/fT79nlmVx\n/fp1rK+vo6ioCAaDgaaudTodmpqaADzI3sjlckilUni9XmSzWc6cv0qlQktLCyoqKpDNZnH16lW8\n/fbbmJ+fp7PaLMticXERP/nJT8Dn89HY2LhJOttkMtH9k8lkoFar6bx3e3s7xsbGOLGddPkrFApM\nTEzge9/7Htrb2zfd2FmWRVdXF+LxOLRaLex2O4DfcHaYTCZkMhmqBqhWq5FOpyEUCvHee+9xMtkg\nEAhoTVkgEODGjRv4wQ9+gNHR0U0llEgkgrfffhsA6J4mtstkMhQWFtIxXZJdlMvl+NWvfpVT+eKN\nUCqVKCsrg06nw+rqKn784x/jgw8+oLV84MGaT0xM4Lvf/S7kcjlqamroNBTDMFCpVFTcSSaTQa1W\nQywWw2Aw4Cc/+QlnwaJOp4PFYoFYLEZfXx9effVVjI+Pb8oGsiyLmzdvgmVZVFRUwGg00tfIc8qy\nLHg8HsrKyqBWq2EwGPDOO+9gZmYmf/MnEIvFsFqt0Ov1yGaz6OjowDvvvIPBwcFNhxnhKpdIJHA4\nHJDJZBCLxZtEUNbX1xEOh6miWyAQ4HQ6gMz0k4bE999/H7dv38bi4uKmvxeNRuHz+SAQCKBWq2kk\nDjyQGV5fX0c6naaZCp/Px5nsLfAbiWCj0QiGYTA2Nob33ntvkwMFHjALrq6u0tShSCSCQPBg+0kk\nEmi1WiSTSSrN6fF4aH2aK8hkMlitVnoL6+jowL179zY5UAAIh8NwuVwIhUKbuAr4fD7tE4jH4wiH\nw/B4PFhbW6PlAq5ADgHgQabl8uXLtI9iI1ZWVjA7O4tkMkkZ3AingUajoeWXUCiEtbU1hEIhTtec\nHGgKhQLr6+sYHR1Fd3f3RyZAEokEZmZm4PF4No21EtvJaFosFkM4HKaaCVxl5gh/hVarBQD4/X50\nd3fTYHwj/H4/pqamPuJUSOBFfj/yQ26AXDhPsk/lcjlkMhlYloXb7aYltY2fub6+jsXFRSwuLtJb\nNTkTyboTlk1yznBZCiWfqVAoIBAIEIvFMDY2BpfL9ZGLTDgcxtTUFL0oERDeBfL7kf8mWiNcnIsk\nUJVIJJBIJDSzcv/+/Y9VkV1dXcX8/DxisRiy2Swd6yZ8F2RKTSAQ0L6HXPS9PFbOf+Phls1m8etf\n/xqvv/76RxaJROFyuRw6ne4jr5FxQI/Hg3v37uHu3bvo6+tDJBLh1HbixBcWFvC///u/mJqa+ti/\nR5p3SGZiIwh74cTEBG7fvo3Ozk5MTk5yFrhsXHOWZTEwMIC33377EzcmuTUTx0/Asg+Ekki25ubN\nm+jq6sLq6ipndm/8MxqNorOzE/fv3//Yv5/NZunaP2w3mV6YnJxEX18f7t69i6GhIc5uFA/v85WV\nFdrc9DAIs+LD1K0khR6NRmnqvbe3F11dXZiZmeHcbuDBXl1YWIDL5fpYu8kBt/HfkP+fSqUQCASw\nvLyMmZkZdHV1cTYqtdFuwuQXi8WwsLDwsVmpT2rAJZcKQqxDFEHv3buH4eFhToKujecKWbtIJPKJ\nuiafRDtLuFGi0SgikQh8Ph/N8s3Pz3Oim7BRW4N8fiQS+dhx240U3MSBkn9HJJGTySSi0SjC4TAG\nBwfR1dVFG6ZzCdJXRpw4YWUlFOcPgzhz0gtAnmvy/8hPJBLB0tISuru7MTExseXA5bFy/uSh+zR6\nUIFAgLKyMrzyyis4derUR14nTWiDg4MYGxujtyaz2ZwTne7
fZjux/2GQh+HUqVN46aWXsGfPnk3/\nlpBbLC4uYmRkBFNTU1haWoJIJIJWq6XlDK7s/jRaVh6PB51Oh1deeQXPPPMMlEolfS2bzVKaS5fL\nhYmJCbhcLkSjUWi1WsRiMcRisZyv+caU4afRsvJ4POzduxdf+cpXcOTIkU3/PpvNIhgM0mZL0jMC\ngNrO5ZoTuz/uQCGH/vPPP48XXngBRUVFNEDLZDJIJpPUebrdbszPzyMQCEAikUAul3M6b0zoZD/p\nMGQYBsXFxXj55Zdx5syZTfoKhDciGAzC5/NheXkZy8vLiMfjkMvlHwkqcwliA8k4PLxfSHBw5MgR\nnD9/HrW1tfT3TaVSSCQSiEajmxwoYRvdDt6OdDqNcDj8sfP7DMNArVbj3LlzePbZZyGXy6njjMVi\nSCaTVJ8iHo/D5/PRnpbtmAKIRCKfKGXOMAxqamrw/PPPY9++fXS9SVMjj8ejxGixWAxLS0s0s8gl\n0Rtx/D6fjzbuPWw3wzA4fvw4nn76aZhMJpoFWltbQzKZhFqtpkJjwWAQCwsLmJube2RRuo14rJw/\ngE3Ox2q1orq6GolEAkKhkLJvNTQ04OWXX4bD4dgUkYnFYhpdkjq5QCCgtMBDQ0OYnZ1FMBjMeapr\no/OXy+UoLy8HwzBIJBJQKpWQSqUQCoV46qmn8PLLL9PmolgsBj6fT1X6QqEQvF4vUqkU1Go1Kisr\nIZPJqBQuF8QdG203GAyorKxENBoFj8ejpYnCwkKcO3cO+/fvRyaTocqEIpEImUwG4XAYwWAQ4XAY\nAoEABQUFlMyIdPfm+pDZaLdAIIDNZoPD4aCjoCqVCkKhECdOnMBXv/pVSCQSBAIBGoyIxWLalRuJ\nRJDJZKg2AGEyJIdmrrEx6FIqlSgvL6flCq1WS/f6M888g6effppmCCKRCEQiEYAH2Q5CdiUQCKDV\nalFRUUGbz3ItPrLxRkze22AwwG63IxQKUWZLkUiEPXv24MUXX4TD4cDa2hrC4TC1k6g6ptNpymRo\nNBpRXl6OQCBAR9NyafdGZDIZCIVCFBUV0UBAq9VCpVJBLBbjySefxIsvvgjgwSQCUfzcmPEgwRmx\n3Wg0UonXXAa6D695Op2GSqWC3W5HOBym8/8ymQwWi4U60Gg0SvdLJpOh2UZC/8swDMRiMTQaDe0N\nyDU22k6yQCaTCQUFBYhEIlAoFFAqlZDL5Th69Ci+9KUvQalUUsXWSCSCZDIJrVYLjUZDM6XkHCHZ\nAa5sJ3+m02k6Bkp8h0qlgkKhgEKhwJNPPoljx44hm83SBlLy3ZSWltKRQRK0Ew6AreKxc/5kw/D5\nfLz44ovYs2cPZmdnYTAYUFVVBYFAAKVSSR/ctbU1DA0NIRgMwm63Q6/XQ6VS0Q5j4Dda7NeuXUN7\neztu374Nv9/PyW2UYRiUlZXhb/7mbzA6Ooq5uTk0NzejuLiY6swzDAO/34/Z2VkMDQ1RlS6tVovC\nwkLodDoaBKXTaYyMjOAXv/gFent7MTk5yWlt8cSJE1Cr1RgYGIBEIsHhw4ehVquhUChgs9mQTCZp\ndmJ+fh5lZWWwWCzQ6/Worq5GWVnZJqGRX/ziF4hGo3C73Zyl0Unfwle/+lVUVlaiu7sb+/fvR2tr\nK2QyGYxGI1QqFZaWljA9PY2BgQEIBAJUVFTQcVGdTkfTc7FYDNPT04hGo4jFYlhbW+MkLUqcx969\ne/HXf/3X6OjoQCAQwJkzZ1BaWgqVSoWioiLaET80NIS+vj5UVVWhvLycsrrZ7XaamgyHw7hw4QKG\nh4dp8MIFhEIhlEolzpw5A5lMho6ODpSVleHMmTPQ6XQwm82wWq3wer0YHR1Fb28vgsEgampqUF5e\nDofDAb1eTymw/X4/FhcXEQqFsLCwwJkTJU1jlZWV+IM/+APcuXMHw8PDeOqpp9Dc3IyCggJYLBZI\npVI6ftjd3U0JrqqqqlBQUAChUAiz2YyysjKUlJRAqVRibGxsUyNbLm0HHvRE6XQ6HDp0CPF4HF1d\nXZBKpXjiiSdQWloKm80Gi8WCSCSCvr4+9Pb2YmZmBtXV1aipqUFdXR2tv6tUKqjVashkMvj9foyP\nj3Oy3uQ9VSoVSktL8fzzz0Ov16O7uxt79+5Fa2srysvLYbfbUVhYiIGBAXR3d+P+/fvg8XiorKzE\n3r17YbFYIJPJIJFIIJPJsLq6ipWVFbjd7o+9lW/VdhKskDP7wIEDdIrJ5/Nh//79aGhoQG1tLcxm\nM1iWRUdHB7Xd6XSipqaGTnSJxWKYTCY4HA7YbLaclOYeK+dPaq/RaJR2iRqNRtolSWaxfT4f2tvb\nMTs7C4/Hg5mZGQgEAhw9ehRyuRwWiwUajYbWnIgTnZmZwejoKGV6y+WGIWlQiUQCjUaD5uZmFBYW\nwuPxoLKyEmazGXw+H2NjY7hx4wYWFxcxPz+PmZkZ1NTUUK50rVZL+eh5PB7tgjabzZDJZDmzl4Bs\n9EwmQxUISXAlEomoylw6nUZvby9GR0cxMzNDZ6OBB81rNpuN3ixIzSudTsNsNnOWzt1YYpHJZNiz\nZw8NpGpqalBdXQ2xWIzl5WVcunQJ4+PjmJiYwPT0NKxWK7RaLUwmE+UIIDcJMoNMOtK5ANl7hA76\n2LFjMJlMCIfD2L9/Pz3shoeHMTg4iP7+foyNjWFqaoreWkmjpUAgoHs8FApBp9PRPc6l3SKRCLW1\ntZDJZLDb7bBarWhtbYVSqUQqlcLIyAh6e3tx7949TE5OQiQSwWq1gsfjbVLTW19fp4qXUql0Uw8K\nFyBd+4cOHYLRaMSePXuwf/9+VFRUQKfTYWVlBZ2dnbhx4wZ6enowMTGBffv2oaamBlKpdNOkglAo\nhM/ng0Qi4US4a+MNlKx5RUUFnQAQi8Vobm6GxWKBUqnE3Nwcuru7cfHiRYyOjiIcDqO4uBgymYw2\nDZK1Jc/rRrtz7UQJiIpia2srzVCVl5ejpqYGVqsV2WwWi4uL6OjowKVLlzA9PU21Oki3P2meI2c6\nV8qRG4OWjbohp0+fRnFxMUKhEGpqalBaWgqHw4FAIIDR0VF8+OGH6OrqwuzsLOx2OwoKCmj2kWS7\nCLNrLrKgj5XzJ7PNgUAAarUacrmcOvONmJubw7e//W3cu3cPXq8XDMPQzVRdXQ2VSrWp8580X5Dx\nLi7SRalUiqa8Sfrz4WZEALh58yZeffVVmhoijpKI6JB6+sbGFx6PRxWvuACpBZIbgUqlgsPh2PR3\n1tbW8MYbb+Ddd9+F2+1GJpOhI34NDQ2Qy+U0LUcc/8PNXrkGCVyy2SyEQiEKCgpQUFCAlpaWTZ85\nMjKC1157DcPDw1hcXATDMDh48CAOHTpEO6k3NumQWilXzofYTlTY1Go1FW0BNmubt7e344033sDA\nwACCwSAYhkFrayulHiUBITlc4vE45/Ko5Lni8Xiw2Wyw2Ww4cuTIJrvdbjfeeustXL58GV1dXeDx\neKipqYFYLIZKpY
JcLqffH/Ag4OJ6IofYTsbH6uvraXbw4f3y5ptv4urVq5iamgKfz0dTUxOKioqg\n0+noWCjJbq2trW2LkA6xvaioCEVFRbSHhdgeDodx7949vPPOO3j77beRSCRQUFAAg8EAm80GtVoN\nhmFo3d/v92NhYYGz8eeNdgMPxopJUE5AbJ+amsKdO3fw61//GpcvX4ZQKERlZSXq6+ths9kgk8mo\npovP56OKgFzt9YfPLJIhbGtr+8jry8vLuH//Pi5fvgyXywWxWAy73Y69e/dSUbdYLAaPx0NJgXKx\n5o+V819eXsZPfvITRKNRvPTSS7RO/jAIhSsZu9BqtXA4HKirq6PUtJOTkxgdHUV/fz/8fj8lDJqf\nn895mggAOjs78a//+q84f/489u7d+4nKfdFoFF6vF4lEgo5rlZWVoba2FgqFAsvLyxgeHsbQ0BDG\nx8eRSqXg8/kwOTmJxcXFnNsdjUbx7rvvAgDOnDlDA6eHkclk4PP5sLa2hkwmA5lMRkktiouLkc1m\n0dfXh4GBAYyNjVEmtImJCczMzHAiqDMzM4Of/vSnOHbsGPbs2fORCQCCSCRCez14PB5UKhVsNhvq\n6+uh1Wrh9Xppx7bL5UIikYDf70d/fz9nB+Pg4CBu3LiBxsZGOn72sN0sy2JpaQkul4tmlVQqFZxO\nJ5xOJwQCAaW9XlhYoKQvLpeLM9a2cDiM3t5e2tdC8LDt0WgUw8PDcLvdAEBn5svKyqDVahGJRHDl\nyhX09vbSUdxgMIihoSHO5Is9Hg8GBgZoie3j7AYeBC4dHR3wer2QSqUoKCiA0+mE3W6HQCDA8PAw\n3n//fczPz8Pv98Pn81EnysXoWTqdxsLCAhYWFlBQUECzUQ9PfxDSqvv37yOVSsFoNKK6uhrFxcV0\nLPTatWu4du0a7S1aWlqC2+3+xAmHrSISicDtdoPP51NGx4/b5/Pz83j77bcxPj4OsVhMswIWiwV8\nPh/T09P41a9+hZGREYRCIczPz2NxcfEj4465AmkG9nq9UKvV1A89vOYsy+L27dt4++23sbq6ioKC\nAtTX16OyshJqtRqJRAKdnZ345S9/SSdEZmZm8g1/D8Pv9+P69euQy+VwOp0wGo0wGAzQarX0xk7q\nMFarFcFgEMlkEjabjdJaknptd3c3Td15PB7ORRbGx8cRCoVgNBohEolo74FCodi0YZRKJWw2G6LR\nKCQSCYqKilBRUQG9Xo+lpSWMj4/jzp07uH37Nrq7uzmlOwUe3Lhu3boFHo8HvV6PkpIS6HQ6yGSy\nTeURhmGg0WhQUFAApVIJs9mM6upqOBwO8Hg8DA0N4datW/jwww/R3d2NxcVFzm9CCwsLePfdd+n8\ntcFggFKppJwPxHahUAi1Wk3ngm02G6qrq1FYWIhgMIiJiQm89957dMSPK+ezEUNDQ7h48SLW19dR\nXl5O9w1JhW9sZCTz6STILSkpoUI/hPaaNBpxbXcwGMTt27fBMAySySQtm5C9srGznzyrer0elZWV\nqKuro0JLo6OjuHDhAq5evYqlpSXOyJQ2YnZ2FteuXUNtbS2cTidMJhOkUikN1MmaE1ZNoVAIvV6P\nxsZGlJeX07T6jRs38MYbb8Dlcm2LXHEqlcLAwAD0ej1KS0tRUFAAvV6/KRtIyj6E3lwsFqO0tBQt\nLS0oKipCOp3G+Pg43nvvPfz85z/nlFOBgGUfsPPduXMHq6ursFqtMJlMtO9g417x+/0YGRlBIBCA\nSqVCQ0MD6urqoFarsbCwgM7OTrz55pvo6+vbtsyWy+VCR0cHTCYTzGYzTCYThELhpjVfX1/HxMQE\n7t+/j0QigcrKShw/fhxOpxMAMDY2hsuXL+NHP/pRzs/y3aUL+xv8/aP+Q0Imc+/ePUxPTyMSicBk\nMkEmk9FFl0gkqKioQENDA6qrq3H69Gk0NjYik8nggw8+wGuvvYabN29ieHiYaixzfSiSRrGpqSmM\njo4iFotBJBJRVjay0VUqFerq6lBeXo69e/fi6aefpnWj73//+3j99dfR1dWF+fl5TpqHHgbLPpjP\nJ/OngUCAppOlUikNuBiGgU6ng9PphMViwYkTJ3Dq1Cno9Xrcv38fr732Gq5cuYLR0dGPkOxwBZIV\nGR8fx9DQECQSCe0eJnVMklq3Wq2wWCxwOBx49tln0dTUBJFIhDfffBPf/e530dPTg4WFBc7IWh4G\nGY3s7e1FIBCAxWKBUCikEsQkJc4wDxjxNBoN9u3bh/Pnz8Nut2NxcRH/+Z//iQsXLmBychKhUGhb\nRrZSqRTcbjeGhobQ2dkJmUwGg8HwEQEUUmbTaDSQy+V45plncPToUajValy7dg2vvfYaenp66Kjc\ndqx5MBjE8PAw/a43NpEBvznMSae2XC5HWVkZXnzxRUpP+/3vfx8/+9nPMD09/bHjglwgk8nA7Xaj\nt7cXd+/eBfCANnejIyLnD6EY5vP5eOKJJ3D27FkYDAb09/fjn//5n3H37l34fL5tE/kJhUIYHBxE\nR0cHRkZGNl3mCIis8sTEBBjmgeDV+fPnsW/fPkilUvzsZz/D9773PUxNTW3LmUiwtLRElSrD4TDN\ntpEsACEH6+jowOTkJHg8HlpaWvDKK6/AZDJhcXER//7v/45Lly49atPw//dpLz52zp9lWVofSafT\nEIvFKCsro2M4hDXKYDDQsRGHw4FMJoP29nZcvnwZd+7cocx426UIRQ4Okr4kI1sOh2PTeA1JlxuN\nRqog5na7cenSJVy5cgWDg4ObJGe3A+vr65QshtyOTSYTbSYDHnTUkxu/2WxGVVUVzGYzBgYGcOXK\nFVy5cgVLS0t0tGg7kMlkqPAHOdCIatvGg5HcPk0mE+x2O+rq6rC+vo6Ojg5cvHgRd+/e5ZyN8GEk\nk0mEw2F4vV46sieXy6HX62ngQsYRjUYjCgsLacf23Nwcbt68iQ8++ACTk5OIxWLbuuZkZps8o+vr\n69DpdJBIJJTBjDT1mc1m2Gw2NDc3Q6vVYnh4GFeuXMHly5cRCAS2JctCQMo55HYcj8fB5/NhMBjo\nmq+vr4PH48FgMKCoqIiKdcViMfT29uLChQvo6+vbViVRwgHi8/ng8Xgos6BUKqWy5oQEh4zZlpaW\n4sCBAygvL8fS0hJNTa+srGxLYE6QTCbh8/ng9Xrh8/mQTCaRzWY3nS2BQACRSIQKnzU0NFAKcbfb\njXfffRfXr1/fdvXWSCSC1dVVeDweRCIROv5MMhehUIjO7Gs0GjQ0NODw4cNobW1FLBbD8PAw/u//\n/g8TExOP+nx+qvPfrWC3+sMwDGs2m9lnnnmGfeedd9i5uTk2mUyymUyG3YhsNstGIhH24sWLbHNz\nMyuRSLb82Vv9kUqlbFFREfutb32LHRsbY9fW1th0Os1ms9lNtqfTaTYchbkdFgAAIABJREFUDrOv\nvvoqazabd4XtCoWCra2tZX/wgx+w8/PzbCKRYDOZzEdsTyaT7NTUFPuNb3y
DLSoqYgUCwY7azefz\nWaVSyb700ktsb28v6/V62XQ6/RHbM5kMG4vF2Lfeeos9evQoW1BQsONrLhAIWJlMxv7DP/wDOz09\nzUYiETadTrPr6+vU9mw2y6bTaTYajbKvvvoqu2/fPlapVO647Xw+n21ubmZ/+ctfsm63m02n02wq\nlWJTqdQm25PJJNvb28v+2Z/9GVtXV7fjdjMMw/L5fPbrX/86Ozg4yPp8PjaRSLChUIiNx+NsNptl\ns9ksm8lk2FQqxb711lvsV77yFba4uHhX2G42m9l/+Zd/YScmJth4PM6ura2xbrebDYVC9KzJZDKs\nz+djv/e977Hnzp1jZTLZrrD9ySefZK9cucLOz8+zoVCIHRgYYPv7+9mVlRU2Go2ymUyGXV9fZ0dG\nRtj/+I//YE+ePLnjdhPb/+qv/ort7OxkfT4fOzw8zP7whz9kL1y4wPb29rKBQIBdX19nM5kM29XV\nxf7TP/0TW1paupXP/FQ8VjX/h0G65gsKCujoHgB6UyBjcNevX8fFixexsrLCeR3rdwFJ2YrFYuj1\nejoGtBHZbBY+nw9DQ0MYGxtDKBTaFbaTDmaRSETT/g+DZVmEQiEsLS1haWlpWzqdfxuy2SySyeRv\nTSGTDE0wGKRNgDsNwmhGGrBIiYtkizZOTDAMQ/n+t6NO/tuQzWaxtraG27dvQyQSoaWlBQKBgPL3\nk9+Bx+MhkUhgfHz8I3oXOwFSa+7v78ePfvQjHD16FKWlpWAYZlNamqw9KYvlolFrq2DZByp9pMms\nra2NjuyRcUTgge2pVAp9fX3o7+/nfJLidwHLshgdHcV3vvMd7NmzB3a7HV6vF0VFRZuaMBmGoT09\nk5OTO2z1A7Asi6tXr8Lj8aC0tBTJZBJzc3N0ZHQjFXN3dzd+/etfc7pfHmvnT5pYCI0p8JsZbPJa\nNBrFxYsX0d7eTjvRdxrkMPf5fHC5XBCJRJRTnDimTCaDpaUldHZ2YmRkhBPmvkcBqR263W6Mjo7S\nueuNDpVlWconv7Cw8LFiF9sN4tQJZ3kgEIBCoaDpXPJgksBlbGyMdsfvNEhz3OzsLOUrJ9wDIpGI\ndvnH43F4vV64XC4qFbrTYFkWgUAAnZ2d4PP5tMlPKpVCo9FAr9dDp9NhYWEBw8PDmJmZ+Yh4y06B\n/X/kSclkEuvr61haWqIcCk6nE3q9HsCDKaTx8XFMTU1ta9r505BIJNDR0UHFnKRSKWQy2SblxGg0\nivn5eYyMjGBmZmZb0/2fBrfbjZWVFUxPT8PhcCAej6O2thY2m42y4cViMczNzaGzs5MzVdBHweDg\nIFwuF4qKiih7qcFgQElJCZxOJx0BHRkZQXd3N6dn42Pr/FmWpWQbfD6fSiO6XC6srq5umt/3er20\nTr4bQMhW3nvvPQwODtJZ7GAwSAMAln0gPhQKhXbV5iYjff/1X/+FN998k6paPcydT7i3SVC2G0Bu\ncv/4j/8IiUSyiaCHYRja2R0KhegEyG4By7K4desWhoeHKaEJ6REpLCxEW1sblpeXceXKFczOzu4K\nx08QDofR19eH6elpXLhwAXw+HwqFAna7HSdPnsRTTz2F//7v/8bbb7+NpaWlXWe7y+XCz372M8r3\nUF5ejqamJpw9exYA8MMf/hAffvghZ6QyjwKiUTAxMYHV1VXweDzIZDK4XC488cQTeP755zEwMIAP\nPviA8nLsFpCL2+TkJNxuN7LZLB0NPnfuHFQqFSYnJ+mI624JWoDfrPvc3By90PX09ECj0VBSMZ/P\nB7/fz7ky6GPr/IEH0e3y8jLu3r0LqVQKln2gE86FklMuQTY3iXBJCpFsht1uezKZxNTUFGVO3ChB\nSTr/d+vvEAgEEAgEPpZciHSkJxKJXXWgEHi9XipdTWwnpSO/309laHOx9rn8DkmjKwliGYaBRCLB\n3NwcYrEYVldXcenSJYyMjGy5tJVrJjoirERuaAzDIBAIYG1tjdp6/fr1nARcubadNF8StVKi+cAw\nDORyOXp7e3Ht2jV4PJ4tfebGaaVc2E7eZ6PthF68qqoKGo0GFy9eREdHx5YnQTZO/eRy3Tfylrjd\nbkxMTMDn82F1dRXXrl3D6Ogo52cMN9RpW8fu9Ax55JFDcEGJul3YqDnO9efweLycBb2kfMPlLZw4\nIrlcDgA5m2DZOJbHBYjdFosF9fX1dOx4qyAMo4QGnAsIhUJIpVJ85StfQWNjI/7jP/4Dg4ODWy6z\nEGlekrnkYs8olUo0NDTg7/7u7zA8PIy///u/p6WkLeJT/ftjN+qXRx6fFXBFW7yd2I7fIZdBBpd0\n0RtByoq5JGbZ2H/C5e9AuAqIZO9WsbFxk0u7SSZjamoKLpcrJ/04QqGQaqVwiVQqhcXFRdy/fx+z\ns7O52jOfOuq3W0+fz95VKI88HgG7uQTyaXiYpvSzgo0OaDsyF7nEw0yCnxX7eTwehELhJq0UrjMv\nuXpvkUhEbd/Yu5RL20nGhc/n04a/XL31I7+4g9j9OzqPPPL4TOJxCLiAz07Q9XDAtfHP3Y6HFQu5\nsj/XfRHkbR/5xR3EZ2Nn5JFHHnnk8bnBZyxwzDv/PPLII4888vic4VP9O7ddDHnkkUceeeSRx65D\n3vnnkUceeeSRx+cMeeefRx555JFHHp8z5J1/HnnkkUceeXzOkHf+eeSRRx555PE5Q97555FHHnnk\nkcfnDHnnn0ceeeSRRx67CNtBQZ2f888jjzzyyOOxwuNAPw1gq8JEn+rfH2tJ3zzyyCOPnQRREQR+\nc5B/VpyRRCKBWCzG+vo60uk00un0Z8J2sViMwsJCiMVipNNp+Hw+BIPBnTbrt4JhGFitVtTU1AAA\ngsEgRkZGEAqFOPm8vPPPI488dj0+q/LHDMNsEobhStKWCygUCmi1WsRiMUQiEayvr+/69WcYBjKZ\nDI2NjdDpdAiFQujr6/vMOP+amhr86Z/+KTKZDCYnJ+H1evPOP4888vj8gaRBicAK14pwuQLDMBCL\nxZBKpVCpVEgkEohEIrnSaecUcrkcRqMRJSUlMJvNmJmZwfz8PCKRyE6b9okg+6SkpAS1tbU4ePAg\nhEIhpqenMTk5udPmfSp4PB60Wi2amppw4sQJOJ1OhEIhrKysgM/nc/a5eeefRx6/Az6rNcSNGvC7\nXQp24xoTGVgidUqkYInc6W66QZP1JT9SqRQKhQJyuRwymQwymQx+vx+pVCqXcq05AY/HA4/Hg0Ag\ngEgkgkajgclkQlFREYqLi6HRaBCJRLC6urotTWi/D4RCIcRiMWQyGeRyORQKBerr69HQ0IDa2lrE\nYjF4PB4IhcKdNnUTBAIBtVehUNAyxfHjx9Hc3AyDwYBUKsX5euedf47wGVN7yuN3APlONx7swO6u\n3T58YAiFQgiFQvB4PGSzWaRSKWQymV13eybrSxy9RCKBwWCATCaDQCBANBpFNBpFKBTaahNUTkHs\nFolEEAgE4PP5qKysRH19PSQSCT
KZDAKBALLZLLxe765yoKQkIZfLoVKpUFBQgBMnTqC0tBQSiQTp\ndBp+v59+J7sFZM01Gg2sViuqq6tRU1ODuro6SKVSiEQiiMViBAIBBAIBJJPJXXM+MwxDg5TGxkY0\nNTVBp9NBrVZDKpVCJpMhmUzC4/FgcXERqVSKM1s+N86fpA7FYjHEYjEEAgF0Oh0KCgrog8swDPx+\nP1ZWVuD3+xGNRpHJZOitY+N7kT833qK42GAkfahUKmE0GiEQCMDj8aBSqSAWi+khnslk4PF44PP5\nEAqFkE6nd/yAV6lUsFgsMJvN0Ol0SKVSyGaz4PF4SKVS1BGRm0U4HEYsFqNrvhMgB0tdXR0qKysh\nkUgAgDY9pVIpxGIxavvq6iq8Xi+i0Shd852CWCyGwWDAvn37oNfrN6XKE4kE4vE4tTMSicDtdiMY\nDCIej+9oNoA4+rKyMlRWVgJ4ELQolUpkMhnEYjH6E41GMT8/j6mpKSQSiR2/SVutVpSWlsJms0Gn\n04HH46GoqAh2ux3hcBjRaBSxWAwWiwUOhwM9PT2Ym5vb8eY5iUSC+vp6OJ1OFBYWQqlUQqfToaGh\nAXK5HCsrK2BZFkqlEq2trWAYBl6vF5FIZEf3OMMwsNlsaG5uhs1mg9VqRXFxMex2O+x2O+bm5uDx\neGA0GmEwGFBXV4eJiQka/O70udLS0oLm5mZUVlaioqICFRUVkEqlSCQSGB0dRTgchsPhgFqthtFo\nhFgs5ixw+dw4f5La0mg00Gg0kMlkaGhowOHDh6FSqSCXy8Hn8zE8PIxbt25haGgIbrebpuoedv4k\nmODaUTEMA7lcjuLiYrS0tEChUEAgEKCsrAxarRbr6+vIZDJIJBK4c+cO+vr6MDk5uckZ7cSGZxgG\nRqMRR48excGDB1FXV4dgMIh0Og2RSIRgMIhAIIBUKoXZ2Vncu3cPLpcLi4uLALBjAQDDMBAIBDh9\n+jT+6I/+CEajEQzDIBgMIhwOIxgMwuPx0PptZ2cnuru7sbCwsOO3UqlUiqqqKnzrW99CU1MThEIh\n4vE4QqEQlpaW4PP56E1oYWEBly9fxtjYGJLJ5I4ejHK5HLW1tfjSl76Er33ta/SwSyaTWFxcxPT0\nNG2Wy2Qy+PDDDxEMBuH1ene8Ca26uhovvfQSjh49itLSUjAMg/X1dcTjcQwMDMDv90OtVtPsxb/9\n27/B6/UiHA7vaOlCoVDg7NmzeOaZZ1BdXQ2xWAyWZZFOpzEzM4OhoSGYzWY4nU7U1NRAr9eju7sb\niUSC09vo74L6+nr87d/+LZxOJ/R6PYAHwXkikYDb7cbg4CCOHz8Op9OJqqoqDA4Oor29fceDFh6P\nh3PnzuEb3/gGVCoVLUdEIhEsLy/j+vXrMJlMcDqdcDqdSKVSUCqVeef/qCBO/8iRI/jiF78IhUIB\nqVQKgUAAg8GAgoICmhplGAYFBQWoqanB2toafD4f/H4/JiYmMDIyAoZhoFKpUFJSAj6fj0gkgo6O\nDrhcLtqIlEsoFAqYTCacO3cOBw4cgNlspnaq1eqP3PxLS0tx5swZ+P1+uFwujI2NYXR0FB6PB9ls\nltaYUqkUwuEwfD4fbT7K5eaSSCRobGzE4cOH8eSTT6K4uHjTzZ9hGHqLzmQy2Lt3Lw4ePIjV1VXM\nz8+jvb0do6Ojmw73h2dfuXgYyI3/7NmzOHHiBCwWC6RSKQBAJBLRaNxms9HAqrq6GqdOncLk5CS6\nurrQ3t6OeDy+rU1dPB4PfD4fZ8+exRe/+EU4nU6IRCKa6VIqlQAAnU5HU/6xWAy1tbXo6elBe3s7\nJiYmsLi4uK2OlOzj2tpavPTSS2hra6MNTmSfyOVyOrZFsl4GgwE1NTX4+c9/jrt37yISiWy7IyW3\nytOnT+PQoUMwm810pC+ZTCIQCIBlWSgUChQWFkImkyGbzeIP//APYbFY8Oabb8Lr9e6IQzp06BCe\nfPJJHD9+HHa7nTqhdDqN0dFRTExMwOPxwGw2Q6/XQyQSobGxEV/72tdw6dIldHZ27kjAZTAY8KUv\nfQmnT5+Gw+GAQqGgma3R0VG8//77mJmZoQG8RqOBRCJBTU0NWlpaMDIyAr/fv+12MwyDAwcO4OWX\nX8bhw4ehUCjA5/PpGfiLX/wCV65cwcLCAn1dIpFArVbDZDJBo9FgbW0t53Y9ts6fjHxoNBoUFhbi\nzJkz+PrXvw6hUAiB4MGvnUqlkEwmadqfx+PBarWirq4O2WwW8Xgcq6ur6Ovrg8ViAY/Hg06nQ319\nPRiGocHB6uoqIpFIzlKnEokEWq0WNpsNVVVVeOGFF9DW1rZpXjgUCiGTyUChUNCabkVFBbLZLNbX\n1zEyMoKenh6YTCbMzc0hm81CrVZDo9EgHA5jfn4eAwMDdIQnF2AYBgaDAcXFxTh58iSOHz+OQ4cO\nQSaTAcCmFL9KpYLVaqXlFuBBBLywsAA+nw+FQoGFhQUkEgkaAKTTaSQSCZpOzdV683g8SCQSWCwW\nHDt2DK+88gqsVitUKhVYlqVBSigUQjKZpGlSgUCAmv+fvTcNivu+8/xf9H3QzX00NwgQEpdAWKD7\nsC4rSizHHieeZDJTm92p2cnu7OyD2Zraqq39//fBVu2TSWprZis7NbNZJ5U4o9iKJ5FlW7Z1IgmE\nuO+bpqEbuoFujm5oGuh9QH2/bjCSkBAIefSuUpUEovvDt7+/z/15f3bvxuv10tHRgU6nY2hoCI/H\ng8/nY3Z2lkAgsKLG/izT66JmGxsbS2pqKufOnePcuXOyRjs/P8/Y2Bhutxufz0dkZCQpKSnyvhcX\nFxMdHS1LRRMTEzKTtNlGSaPRYDQa2b17N4cOHeLEiRMkJydLx3Rqagq32y0zLNnZ2SQmJqLT6UhN\nTaWgoICWlhaam5uZnZ3dMuMvas25ubmcOnWKw4cPs3PnTrxeLw6HA4/Hw9jYmDx38X+jo6PRaDQc\nP36chYUFPv30UyYmJrbU+BuNRmJjYzl69ChvvfUWKSkpqFQqHA4HMzMzTExMUFtby8DAAB6Ph8zM\nTIxGI3q9nh07dvDaa69htVq5f//+lskskJCQQHFxMW+++SYVFRXo9XpcLhc9PT34/X7u3bvH+++/\nz+LiIhkZGfL31el07Nixg6KiIoaGhrbc+Iv7euLECX74wx+i1Wrx+/1YrVb5XF6+fJmrV68SHh5O\ncXExOp0OnU4nPy+TyfTS+D8JxNhHRUUFFy5coKCgQNZPBNxuNzabjfj4eKKiojAYDDLyELX2uLg4\n9u3bR15eHrCstMLDw2UzzM6dOxkcHKSvr++ZGFGRfRARxb59+6TjAcsR+tzcHLW1tfh8Pvbs2UNM\nTIw0sCK9lJaWhslkoqSkhNnZWZaWlmTWwOl08uDBA8bHx1eM8GzEKIno89ixY3znO99h586dJCcn\no9VqgWWHxev10tzczG9/+1sqKip49dVXCQ8PR6PRAMtdsFFRUZw5c4aysjI5X+z1emUTTF9
fHy0t\nLXR3dz+zsoZarSYlJYU///M/5/jx46Slpa3IqrhcLlpaWrh16xYjIyP88R//MXv27MFsNsvz1uv1\n5Obm8uabbzIxMYHT6aSrqwun04nX65XlI5H9eBYOgFqtJiYmhtOnT/Nv/s2/ISsrC6VSKSP76elp\nrly5QmVlJQ6Hg/379/Onf/qnREVFSWdX3Oe4uDjcbjeTk5P4fD78fv+mRnexsbHk5eXxzjvvcPz4\nceLj45mbm8Pj8VBTU0NdXR1NTU2Mj4+ztLTEn//5n3P27Fk0Gg1KpVJ2p4uJgK1q6FKr1ZSWlnLy\n5EnOnz9PamoqCwsL9PT0UFNTw7Vr1xgeHmZ6eppAIEBpaSnZ2dlEREQQFhaGTqdDr9fL53krkZGR\nwR/8wR9w/Phx0tPT0Wg09Pb28tFHH9HQ0EBXV5fsAVlcXCQ7O1ueqV6vJykpCbPZvOVyA5w6dYp3\n3nmH3bt3yxLFJ598wqVLlxgfH8flcuFwOIBlPTY7Oyt/Ni4ujoyMDKkjtxIWi4V/9+/+Ha+++qrU\nhTabjb/5m7+hqamJubk57Ha71HEzMzOyV0elUsmS9Gbga2n8s7KyKCkpoaCggJKSEioqKoiMjGRh\nYQG73c7Q0JD8Mzo6SkpKCjt37pT/D5ZTd6KZKDw8HIPBgN1ux2q1Mj4+jtfrZXp6msHBQWlcNwqN\nRsO+ffuoqKjg0KFD5Ofnk5WVhc/no6enh/b2djweD1NTU7S0tMgGxb1795Kfnw8sZzMmJydZWlqS\n4yQej4e+vj4mJiaYnJzE7XbT39+Pz+d7JoYoLCyMlJQUDh48yGuvvcb+/fslOUhtbS0OhwOXyyV/\nj9u3b6NWq0lPTycnJ0ca//HxcUZGRlAoFMTHx6NQKGhvb8fhcMgyhXid0Eh6I3IDlJWVcfLkSY4f\nP05OTg5qtZqGhgba29ul09Hf309jYyN+v58TJ06wY8cOWY/z+/309vZit9vRaDSYTCbm5uZk9L1W\n9L9RhIWFER0dzZkzZzh//jxlZWUsLi4yPDxMdXW1VChVVVU0NzczMTFBQkICs7OzUoGLDFJ/fz9e\nr1eWvzZztlg4S8JROnDgAImJidjtdjo6OmhsbJSp597eXqkMQyO2paUl2eciznkrjL9KpSI8PJxX\nXnmFo0ePkpmZyejoKDdv3qS+vp66ujpqa2txu934/X5gOWIVJQxYzn4tLCxsaY+FcE7T09M5fPgw\nubm5AFRWVnLnzh1Z9hkaGpJ6LCwsjNnZWZmZC50U2Uro9XoiIiIoKCigrKwMk8nEwMAAd+/e5fe/\n/z13795lampK9iGEhYXh9XoBpFO4uLj4XJpxExIS2L17N2VlZWRmZrK4uEhVVRVffPEFN27c+ErA\nKO61cMw3eyz3a2n8y8rK+Ou//mvS0tKIjo6WNebZ2Vnq6ur47LPP+OSTT3A4HAQCAdLS0jhx4gS5\nublERkYSDAaZmZnB5/OtmDN+8OABlZWVNDQ0MDExwfz8PDMzMzK9u1HodDq++93v8tZbbxEdHY1K\npWJxcZHx8XGqq6v53//7f9PZ2Smjofj4eKxWKyqVShr/2dlZbDYbCoVCjkn19fXx/vvv09jYyMDA\ngKTqDH0gNnrB8vPz+S//5b+QkZGBXq9naWmJzs5OLl26xM2bN6mrq1thAC0WC9nZ2cTFxREVFUUw\nGMRms1FfX09cXJxsyrx//z6XLl1iYmJCdqU/SyOqVCo5f/48f/mXf4lGo5EG+/Lly/z0pz9lcnKS\n+fl5+X4pKSm4XC4mJyexWCwEg0Gmp6e5d+8e/f39REZGolAomJqaYmRkhNHR0U1ROgqFgqSkJP71\nv/7XlJeXo1KpGBsbo6Wlhf/xP/4HDQ0NX/lsV4/5LS0tMTIywr1791aMc22mIRWliuLiYv70T/8U\nhUKB0+mktraWS5cucfHixRUyA7J/QWSuhCGanJyUjm4ol8FmQa1WExkZyb59+ygvLwegtraWv//7\nv6e1tZWRkZEV7y9k0mq1kuVPZLO20hgJEpm0tDQKCgqIjY3F4XDws5/9jCtXruDxeNY06iqVCq1W\ni1KpZG5ujtHRUaanp7e03m8ymcjJySE9PZ24uDgCgQD379/nr/7qr2SZajXEHRO9DB6Ph5GREemQ\nbRWysrIoKyuTfWWzs7P88pe/5B/+4R++or+Ecyh61MR0gsjCbQa+VsZfqVSi0+nkCJ9Op8Pr9WK3\n26murubGjRsy4h8bGyMQCKBQKDh69Cjf/OY3ZXZgbm6O+vp67HY7FotFGqWenh4GBwcZGxuTHdKr\no7qnhVarxWw2Yzab0ev1BINBurq6aG5upqqqSr5/aEe5xWLhwoUL7NmzR9bEHQ4HlZWVKJVKzGYz\nbW1tdHR0SKdBjOqEMqVtNN1vNBoxmUwyVe5yubh9+za3b9+msrKSwcHBFV3ZSqWSrKwsKioqiImJ\nkWfe29tLVVUV8fHxMksj5J6bm1sxAfAsFJBohhPOyuTkJK2trXz++edcu3ZtxcgkfEnOkZaWhsVi\nQaFQMDMzw+joqBw/0+v1+Hw+mWHZLEUZHh5OZGQkKpVKdr5//vnnfPHFF9hsthX3UfQ0CAIXvV6P\n3+/H4XAwPDyM1+uVRtXv929qN7foURBjfB0dHdTW1nLlyhXq6+u/cl4Gg4GYmBgiIyNlunx4eJjm\n5mZGR0flndiKUUWz2UxKSgp6vR673U5VVRWffvopHR0dTE5Ornh/pVJJfHw8qampspwYCASora2l\nuroan8+3qbKGQqlUYrFYsFgshIWFUVlZyaeffkpTU9Oao3uRkZHyjovo2eFw8PHHH9PV1bVlcsOy\n8c/OziYyMhKXy8WHH37IlStXHtrkmZGRQWlpqczgBoNB+vv7qa2t3XKKX9EcrFarqamp4Z/+6Z+4\nd+/emnc1PDycvLw8du7cKZ3Y+fl5nE7npvUpfK2Mv16vJyMjg/T0dKKiogBkVHH58mXef//9FQcf\nExNDWloax44d48CBA5hMJvx+P263m9bWVtrb20lKSqK1tZXr168zMzPD3NzcpsgeGRnJjh07iImJ\nQa1WMzc3R2dnJ1euXOHWrVv09fWtMJ5paWky/ZiWlsbS0hLT09MMDQ1RV1fHwsICBoOBu3fv0tfX\nt2k1XKVSSVxcHPHx8Wg0Gvx+PyMjI1y/fp0vvviCnp6eFVmRyMhIMjIyKC4uJi8vD4VCgc/nY3R0\nlN7eXpqbm4mMjGRycpKOjg7m5uY2LULS6XQkJCRgMBgIBAK4XC4aGhr4zW9+w9DQ0Iq6IUBKSgp7\n9uwhIyODyMhIOfvc398vy0lKpRKPx7PpVKjC+MNyuaSjo4Pr16/z+eef4/P5VkTxRqNRKhYht8fj\nobu7G6vVuiKjstkRqahjajQaZmdn6e7uprq6mnv37smarUBYWBhJSUmUlpaSlJQkMzMOh4Oqqio5\nxbJVHAVarZaIiAhZXhFZwJGRkRWGSJD+5OfnU1
RUhNFoJCwsjPn5eVpaWmhoaJBnvhUQaX+VSiW5\n7j/99FNsNttXMpai5Hb48GGysrKkIRobG6OqqorBwcEtkVlABHR+v5/BwUE+//xzqqqqVqT5RdlH\nECwdPHiQ6OhoYDmV7nA46O7u3lKHSyAQCDA2NkZdXR0XL15kcnLyK3sqlEolUVFR7N+/n4KCghWO\n+Gbqkq+V8Y+Li+PChQscOHAApVIpU+C///3vqamp+YpSO3DgAP/23/5bdu3aJaOoiYkJBgcHGRoa\norW1lXv37jE2Nsbk5OSm1rtyc3NlA1EgEGBiYoL29nbu3LmDy+VaoSgMBgN/8id/woULF0hOTkaj\n0bC4uIjD4aC3t5ehoSEZ0YlZ+s1SNGq1ml27drFr1y55fn19ffT398vITCAsLIyioiL++q//msLC\nQllbdrvdNDQ00NfXh8vlYnh4mNnZ2U01/LBsQDMzMzGZTHi9Xvp5Mg2xAAAgAElEQVT7+xkYGJCp\n/lC5w8LCuHDhAj/84Q9JT0+XSqenp4d79+7JlGggENiSMT+DwSDHNkdGRujo6JDloFAIUpS/+Iu/\n4MiRI1LxTE5OUlVVRUtLi5yf34xx1dUQStrv9zM6OordbsflcrG0tIRSqZSRvGh4OnDgAP/pP/0n\nkpKSpIMyNDTEnTt3ZJp9q4y/KDWIKZTJyUkWFhZkelnIrlQqMZlMvP7667z++utERERI3oLBwUGs\nVuuWzsovLCwwOjpKd3c3LS0tOJ1O2eypVqtXOAAajYbc3Fx+8IMfkJOTQzAYZGFhAa/Xy+jo6Jbz\n+4sJhIiICNn/JIjaQnsQVCoVer2e/fv38/rrrxMbG8vi4iLz8/NSl2x1v0JrayvBYJCJiQmGhoak\n3KHP2tLSEjqdjpSUFM6cOcMrr7wiHcXQSafNwNfK+Ot0OtLS0oiPj5cdwUajkdTUVHp6eujv7we+\nnNnOy8ujoqICg8Ego4rp6WmGh4ex2+0MDw/jcrm25OKYTCaSk5MxGo2oVCp0Oh0Wi4WdO3fi8/nk\nZqfIyEgyMzPZs2cPeXl5sg4qjMDAwACjo6OMjIzI+txmKkaR9hfkQ3q9ntjYWHJzcxkZGaGlpQW/\n349arSY5OZni4mLJXy0M6PT0NN3d3ZIo51n1UDwOovlM1NliYmLIzMykuLiY1tZWrFYrAFFRUTKd\nuHv37hXNOCJjMTk5uaJJdLMhGseCwSBms5ns7GwKCgrkWYr7IsorpaWlpKamyqhCOMbCQdsqUqXF\nxUVmZ2dlHTMtLY3S0lLm5+dpa2ujt7dXzv/v2rWL8vJydu/eLeVeXFxkcnISm832TMc914O5uTnZ\nzGc2m3nllVckj39PTw8ulwuA9PR09u7dy549e0hNTQWQkx5TU1NbTga1tLSE2+1mfHwcv99PVlYW\nZ86cobu7m66uLjo7O5mbm0Or1VJcXMz+/ftl/5PIBs3NzeHz+bacUVEwOo6Pj7Nz5072799Pamoq\nNpuNzs5OrFYri4uLWCwWSktL2bNnDykpKcBy0/bc3JxcprTV3AROp1NOASUmJvLGG2/gdDoZHh6W\nzdsLCwvs3r1bNmKKjIVwWl4a/3VCRAuiTqXX68nMzOStt95ibm6OmpoaAPn11NRUWR4AZKOfMP5j\nY2ObHn2GQox4iHGgAwcOoNFocLvdDA0NAchLLtLsQu7FxUVGRkYYHBzE5XLh9Xq3TG5hPBQKBYmJ\niURERMgyg5jDFYqluLh4xaiTOHPhtAh6361AIBBgenqahYUFTCaTTNGq1WoZpQWDQZKTkzl37hw5\nOTkr5Ba87WJOeiuJfQTPvUKhICsri507d0rOB8EgFwwG2bt3L6dPn5bOFiAb5txut2Sa2yrFuLCw\nIFkGjUYjBw8eZNeuXZIkR5S34uLi+OY3vykjIUBGoWJD3mZmtNbC7OwsbrebsLAwsrOz2bt3L21t\nbaSkpPDBBx8wNjZGMBikqKiI733ve3LeHJBR6Pz8/JYbIjHVMTs7i8lkYu/evbz22mvU1tbK9P/8\n/DxGo5HTp09z8uRJSW4FXzouYkphK+H3+5mYmCAsLIz09HSOHTuG1+ulra2NX/3qV1IvZmdn84Mf\n/ICioiL5s8LRfF5Mp16vl5mZGXnPCwoKJE/C//k//0dywxw4cIALFy4QHx8vf1ZE/pv5bH6tjP/I\nyAi/+tWv8Hq9xMTEMDk5id1up6WlheHhYZl2ycvL40/+5E84dOgQ8OUlmZiYoLm5mXv37uF0Op9p\nc9nj0Nrayq9+9SvUarXs8BdEPRMTE3Lpw4EDB/ijP/ojsrKypDIcGxvDarVSW1tLR0fHlna1+v1+\n6uvrMZvNZGVlsbS0hN1up76+nsbGRhYXF4mIiCA1NZWTJ09y8OBBdDqdVOKimbK1tZWxsbEtkxtg\namqKzs5Oqqqq0Ov1sn7f2trKwMCAzBAVFhZy9uxZMjMzZWOliPibmpoYHh7e8k7i6elpent7+fjj\nj3nw4AFTU1N0dXXJWW3B1b537145IrWwsIDf76ezs5O7d+8yODi45WlcQfTU2NjIL37xC+bm5piY\nmKC3t5eenh7CwsKIj4+XrGzCgIpu88bGRmpra7fc2YJlx2VycpJr167JpkqHw8HAwADDw8MYDAYs\nFgtFRUUUFxdL+u3Z2VlZQuzr69typ0Xc2f7+fi5evCjJq4aHh+XIb3x8PHl5eRQVFZGZmYlKpZKf\nzd27d7lx48Zz4fUXOq62tha/3094eDizs7OMjIzQ1dWFSqUiLS2NwsJC8vPziYmJkZwiHR0d3Lp1\ni9bW1ufCpBgMBnG73dy4cYPe3l6io6OZmJiQWeWIiAhJKpeZmSl7G3w+H7dv3+bq1avSodwMfK2M\nv9vt5ubNm4SHh5OVlcXY2BjDw8MyPaRWq8nJyeHw4cOcPXuW9PR0lpaWcDqdOJ1OXC6XpMSdmpra\n0g1cNpsNt9tNVlYWHo8Hu91OT08PnZ2duN1uoqKiyM/P58iRIxw9epRgMChZxWw2Gz09PXR0dGC3\n21lYWNgy2QUXuNFoJDMzE6/Xy+DgID09PQwPD7OwsEBWVhb79+/nwIED7Ny5U6bLh4aG6O7uprm5\nGavVKlPVWwXBwldbW8vs7CxDQ0PY7XZGRkbwer3o9XoKCwupqKhg7969aLVa6bB0dXXR1NREd3c3\nbrebQCCwpZvDfD4fw8PD3Lx5U47LeTwe2XeQmZnJvn37KC0tZceOHXKrnNVq5cGDB9y/f5+RkZGv\nNDVuNsSSoa6uLgKBAB6PR+5NENvXBOfGrl27iI2NZWFhQXK2V1ZW0t7evqUZOQGxZOjBgwe0tbXJ\nTYMiPZuUlMS+ffsoKSmRusXj8dDb28u9e/e4fv26fCa2OgoVPUHXr1+XzWSi/LK4uCg5AMSZB4NB\nRkZGaGtrk6O6z4rP5EkgMmyCflpkrQQRkdlspri4mLKyMklc5PP56O/v5/79+1y9epX+/v7nEvmL\nrGZjY
yNtbW2EhYXJ3QiLi4tSn+/evZv4+HipF/v6+rhz5w737t2TNNGbga+V8RdeYmVlJX19fQQC\nAelJCSX37W9/m7fffpvExER5kW7dukVzczNarRar1UogEJB192fJyPYoCG/1/fff55NPPpEPp6iz\n7d27l//4H/8jJSUlwPLDPDAwwAcffCDrkOPj4wSDQZk92Cols7S0RF9fHz/72c8k+Uoote2RI0f4\n0Y9+RHJysoxCqqqq+OSTTwgEAisi563evhUMBmltbaW/v1+OuYmSRXx8PN///vc5efKkbKp0uVx8\n8MEHtLS0MDs7K7efPY9VrT6fT5I9ibSsSBMWFhbyV3/1V6SkpMh73tDQwC9+8QsmJydlaQj4Svfx\nVmBiYgKfzydlFndVrVZz5swZvvvd78qmLa/XyyeffML169cZHx+XZy5k3+oUutPplM2JoVTIKSkp\n/OAHP6C4uFje897eXt599106OzulU/m8IJosReNk6HNWVlbG97///RV68c6dO/z2t7/FZrPhdDqf\n6yIin88nJ61CG1PNZjPf+MY3OH78uGxedDqd/PM//zN3795lYGBgy7NboRB6PbTkJs5clCpSU1Pl\naF9bWxvvvfee3G2ymT0W2974i4a29dabxFjK6hRyUlISeXl5lJSUyO1bg4ODtLa2UllZSWtrKxqN\nhomJCVkH1Wg0Unn6/f4V608fB0FKIhpm1oPFxUVZwxJQqVTk5ORQVlZGUVERCQkJBAIBWltbuXv3\nLvfu3WNqaopgMIjT6ZRepV6vx2w2SwfoSSIllUqFSqWSxns9EN3yoYiOjiYnJ4fS0lI5v+p0OuWZ\nix4MMUK5uLgoex7EeQjjsB4Icg/giVamCrKYUGRkZFBeXk5RURHJycmEhYVJCteqqip6enpYWlqS\n/OzBYFA2Dq6lXB8HpVKJWq1mYWFh3elskYYOhdFoJDs7m7KyMjmBMTMzI9POtbW1kvBKsOMJ5+Vp\nWcXEboknGRMU9e9QiGe0uLhYTlTYbDZaW1upqamhqalJcv8DKxyA0H+vF+IZFfdsPRBd+6EQz2h5\neTm7du0iLi6OxcVFueyprq5uxQTLs4DobXqSMxfOSihiYmLIycmhpKSEzMxMyabY19dHQ0OD5AIQ\nzZUbhejLAp4oOFnr88nIyGDfvn3s3r2bxMREFAoFLpeL9vZ2mZXzeDzPLLslmsifRCeJgDQURqOR\nrKws9uzZQ05OjpxcEKWMxsZGnE7npm9+3NbGXxgCpVK54c7e7Oxsvv3tb5ORkSGVRXt7Oz//+c/p\n7OxkaGhI0sbCsmKIiYmR3d1iLloozMfJLahAN9rgo1arqaio4OjRo5JOdn5+nhs3bvDxxx/T2dmJ\nx+P5ilxRUVEkJiYyMzODx+OR44LrkV2r1aLX6zd8+ZKTk/nWt75Ffn6+PHObzcbFixflXnORMg09\nd5PJJFnF1tsAKEbIDAaDfOA2oqz27NnDuXPniI+Pl7ILFrr29nZGR0elQyXOVUw7CAO+XgdEzIWH\nh4dvuJYdGRnJ6dOnKS8vl42vbrebq1evcvPmTSl3aEZLyC5WVD+J0yKeUZVKxfT09IbSwjt37uSd\nd94hOztbnnlPTw8ffPABzc3NcvJm9WcbmnV5ErlVKhUGg0E+o08LjUbD4cOHefXVVyWHfyAQ4MGD\nB9y6dYvh4WEmJibWdMCfJnMh7otWq5V7I54WycnJfPvb36aoqEiO3rpcLlkrHxsbW9E0t1GI+yLS\n+Rt9Rr/xjW9IMiIAq9VKXV2dnBDwer3PzICK+yLKH08re0REBK+++irl5eXodDoUCoXs4ent7cXp\ndMpAU7BXbkZ2a9saf6HMDx8+TFpaGi0tLfT39zM8PPxUr5eSksKxY8ewWCwsLCzIulBNTQ2Tk5NS\n6YooSIwWGQwG8vLySElJoaenh9bWVqqrq3G73Q+9VAqFgpSUFM6ePcvExASNjY3Y7fanYpgS1L17\n9uxBp9MxNzfH2NgYHR0dtLW1yTl+IUsocUR0dDT79u1DoVDQ1dVFd3c3/f39D3UCRPRXVlZGeXk5\ntbW1dHV1MTIy8lTpp5iYGA4ePEhGRoakqhQkRCIdF5o2FdGzyLikpaXhcDhWTDA8DGI87Ny5c2g0\nGh48eMDQ0BDj4+NPLDfAjh07qKioICoqivn5eXw+H93d3dTX10uq4dVnLlboJicno9PpGB0dlRv+\nHvXwijr3a6+9RktLC62trTgcjqeKWMLDw1dkWnw+HyMjIzQ0NNDZ2SmV4Vqy63Q6SXgkpiAepTjF\nfTly5AhpaWk0NTVJwqOnQVJSEocOHSIhIUHW13t6eiTXhTjz1fdXUHCLSHi9WcLExEROnz7N5OQk\n7e3tciPjk0KpVJKXl0dhYaFs2pqYmKC+vp4HDx4wMTEhGUFXOy1Cz4ls0XqVfFFREXv37qWnp4e+\nvr6n5g6IjY2loqKC9PR0gsHlpWEDAwNcvXqV9vb2R46bPY3TFR4ezrFjxzCbzZIVc3W2cz0ICwtb\n8YwuLCwwPz/PgwcPuHz5MgMDAw/tUXgauQHZFyEayYX+fVK5xZjorl27pKPocDj43e9+R2Vlpewh\nEv//X1zNXzwYO3bsoKysTO5AdrlcTzRyotPpZCer4L/3eDy0tLTQ2NhIb2/vQ99fbFkKC1teVTs7\nO8v4+PhXtgOuBbPZTElJCV6vV6aKhOJd74cZHR3Njh072L17t+x67uvro7a2lra2NoaHhx/6WiLy\nFFuhxIV9HAQpzP79++VSD0Fw9CRlA4vFwu7du8nPzyc2NlYyFtbX18smubUg0uWCBlb0Mohxn0ed\nndjdHRMTAyw/2EL29Z55REQEFouF/Px8cnJyAHA4HLS2ttLU1MTAwMBjX0un02E2m5mcnJQpzodB\n3KPY2FjKyspkVCS2fD1JtGWxWCgoKCA/P5+kpCQABgYGZIPa4z5/4bwIg7TejEt6ejqlpaXyZwTz\n3nplNxgMJCcnk5+fL7dnejwe2traaGhooKOj4ytnHsrsJmQXuzBCv/8oGI1GcnJymJ+fR6VSMTs7\nK7MX670vcXFx5OTksHv3btLS0oDlM6+vr6ehoYHe3t6vlCdW/x5P0y8SExPDrl27MBgMsuHzSfSi\n4N0oKCiQTX5+v5+uri5qamp48ODBI+mphcxPapzUajVJSUmkpKQQFxcnpzieRL9ERkZK2cUzKrZo\nVlVVUVtbKz/DZ9mLI6i9g8EgsbGxcjT5SZyupKQkCgsLKSwslHwEVqtV0s+3t7fL8udmY9saf3ER\nxKz9wYMHZQQr1o6uB0lJSfzhH/4hx48fl1+zWq385Cc/oaqq6qE/FwwG8Xg8VFdX09bWRnh4uHQG\nJiYmHqkYRZOHzWZjx44dnD9/HrfbjcvleqJUenl5Od///vfZtWuX/NqNGzf48Y9/zNDQ0CMfOqfT\nyczMDD09PSiVSrlQ5FGpf5GKGx8fx+FwsHv3bjQaDQMDAzJbsh
6YTCbeeecdzp8/T3h4OLA8Vvd/\n/+//5fLly7JmuxYWFxcZGxvD6/XS29u7oiv5Ub+viFoGBweJjIzk+PHjTE9PY7Va8Xq9607rFhYW\n8md/9mfs27dPfq2xsZH/9t/+Gz09PY+UQczOz87OSha7x6X+xffGx8dpaWkhOjqaAwcOMDQ0xPT0\n9BNRkr7++uu88847klgG4MMPP+Tdd999bHQlon3R07KekoWIVvv6+oiNjZV77UWGYb1KMS0tjX//\n7/89R48elV+zWq38zd/8DVVVVWvKETqGKzIUq0sZj5Pd7XZTXV1NdnY2JSUlDA8P43Q6n4hr4ujR\no/zZn/2ZDCyCwSDXr1/nxz/+MTabbYUcq2UK7Q1Z6/uPkr2np4dbt26Rm5vLjh076OjoIBAIrPu+\nmM1m/tW/+lecP38ek8kELPe+/OM//iMfffTRY4mIQs//SeDz+aiqqqKgoIDi4mKSkpLo7++XJFnr\nQXFxMX/xF39BaWmplKGuro7//t//O11dXY/dAfK0kbTNZuOjjz5i3759JCYmYrFYmJqaeqJxvAsX\nLvCHf/iH0lEMBoN88MEH/OIXv8Bqta7IPm9U3sdh83Z3bgz/n/iLoKBMSUmRTHJivevjoNPpyMnJ\n4bvf/S6FhYUYjUZsNhvV1dVcvHhRMrg9DIIJzePx4HQ6mZiYeOhCidUQqTyDwSCjMIVCwdjY2GPn\nfNVqNSaTiZMnT/L2229LD7mzs5OPPvqIjz766LGNQ6LrXiyYEaQo64FoPDObzRgMBpnKXM8cvvCO\n3377bSoqKjAajQwPD/PgwQPef/99mpubH+vhC9mF8VtvvTF0s5vJZGJpaQmtVitf51FnrlKpiI2N\n5eDBg7zzzjskJSWxsLBAR0cHn332GZcuXXps2UYo81BqzvVGMwqFQn6mooEuGFymBn2cMRMLZ956\n6y1effVVwsPDGR0dpa6ujkuXLnHnzp3Hlm2E7KETA+uBUFR+v1/KbDQaJRHPoyC4+1955RW+853v\nkJOTg1KppKOjgxs3bnDp0qV1c8k/adpc/IxoNBUNi0qlUpY8HvVa4eHh7Nixg3PnzvH666/LTE9D\nQwOXL1/mk08+2bQ9IPBlgAHINbALCwvr2rqXnJxMSUkJb731FqWlpeh0OrkmV0yybFajmWiYnJ+f\nl6NvKpVKRtCPel50Oh07d+7k5MmTvPnmm8THx8vm56tXr3Lp0qVNHY0T5xsWFiZLZ4uLi7Lh+lHv\nm5iYSElJCW+88QZHjhzBaDTKZ/SDDz6gsrJyM/gf/v9HfXPbRv6wfFGamprwer0kJCRQWlrKj370\nIyYnJ7+yMGY1wsLCCA8PJyEhgZycHFlHFFvyHhV9PguMjY3xxRdf4Pf7iYiIoLy8nOTkZHp7ex/b\nGKXRaEhISCA1NZX09HQAent7uXr1quSL3kx0dnZit9uZm5ujvLycN998E4VCIaOLR71/dHQ0WVlZ\npKamyvR7S0sLv/vd7566FrxeiKjCbrczMTFBaWkpb7zxBuPj43IM8mGya7VasrKy5F0xGAw4nU4+\n++wzbt++vem0pg6HQ3I2FBQUUFFRQUREBL29vY9NocfFxbFv3z4yMzOJiIgAoLu7m3fffZf29vZN\nlRuWm/Lsdju9vb3s3buX8+fP4/f7cblcj8zYhIWFkZeXx759+7BYLBgMBhYWFrh16xa/+93vmJiY\n2FS5vV6v7Gnp7Ozk4MGDlJaWSrkf5SzHxsZy9OhRioqKMJvNANjtdi5evMj9+/c3/Rl1u91MTk7i\ncDjIysriwIEDAIyOjko624chLy+Pb3zjG2RlZWEwGACoq6vjvffek6yWm4XFxUU5USXuy65du+QC\nm0c5niaTiRMnTnD06FFiYmLQaDSMjo7y8ccfc/369U0nTxITKvfu3aOrq4vy8nKSkpJwOBySUfVh\n75+dnc33v/99SkpKZDa0u7ubn//857S3tz8XEqJtbfwBSaHa1NREYmIixcXF/OAHP6CiooLR0VEa\nGhq4c+fOiq76qKgoScl64sQJOR43MzNDdXU1N2/e3HRCGZFCHx4epra2VtZF//N//s+SPvjOnTuS\n/nZpaQmFQkFmZiYlJSWcPHmS8vJygsHlHeADAwNcv36dzs7OTZVbyD43N0d3dzeJiYmUl5fz+uuv\nU1BQIJfIVFVVrViAExsby+7duzl48CCHDh1ix44dzM/PyzGzO3fuPHXz3ZPK7vF46OjoIC8vj7Ky\nMn70ox9ht9txOBzU1dXR2tq6Yl9DUVERr7zyChUVFZSUlKDVapmammJwcJAHDx7Q2dm56TPOQmmM\njY3hcDhQqVQcPnyY3bt3MzIyQn9/Pw8ePJDKHZYnOg4ePCjJfPLy8uRSKPEZjY6ObqrcQvb5+XnG\nxsaYmZnBYDDwB3/wB5w5cwaHwyFr9+IZDQsLo7CwkGPHjlFUVER+fj6RkZFMTU0xOjpKU1OTbDbb\nCszNzeHxeAgEAuTm5vKXf/mXjI2NYbPZaG5uZmhoSJI46fV6Tp48SUVFBQUFBZJAyel00t7eTm1t\n7VM1sD0NgsEgs7OzkmjowIEDnDx5UjbStbW1yaY3jUZDdnY2J06coKioiF27dhEfH4/X68XlctHc\n3ExDQ8NTNTw+DUR2z+PxkJqayltvvcXCwoIk0BJZWa1WS3h4OOXl5VRUVJCfn09GRgZKpVI2PtfW\n1tLT07NlPASiBGq328nNzeU//If/gN1ux2q10tPTw8zMDEqlkpiYGFJTU9m7dy+FhYXs3r0bi8WC\n3+/H6XTS0tLCvXv3GBkZ2RK5V2PbG3+xqra9vV02Ax06dIgDBw5gs9mIi4vD7XbL5guTyURmZib5\n+flcuHCB8vJylpaW5DKF+/fv09bWtmWyu1wuWlpaOHbsGHv37iUvLw+32y1n4sPCwnC5XKhUKqKi\noigpKeHIkSN861vfIjIyUqb779+/T11d3ZZcFFHztdlsDA4O4vP5KCgo4PDhwwwODlJdXY3f78dm\nszE9PS1nyw8cOMDx48d55ZVX5K6BtrY2STu8FRDNclarlbGxMdRqNSdPnmRpaYnBwUE5iiVqmjqd\njuPHj3PixAn27t1LTEwMCwsL9Pb2UlVVRXNz81NPmDwNhAGcmpqipKSEs2fPYrfbaW1tRa1W09fX\nJ5tIU1JS+Na3vsW+ffvYuXMnCwsLjI+PSwpcUf/cCgjOAbfbzdTUFPv37ycpKQmr1UpVVZXcsrmw\nsIBKpeLIkSO8/fbbpKSkyF0QAwMD1NbWSoO7VRCBgcfjwWAwcObMGebm5ujp6SEqKorOzk5ZEjCb\nzbzxxhscPHiQ2NhYyeLX3NxMdXU1nZ2dW+LkwpfP6czMDGNjY/L56+vro6mpCbPZLPt89Ho9JSUl\nfPe73yU5OVmOrA0ODlJXV0d9fT0DAwNbIjcg+U88Hg9er5d9+/aRkJAgG1Tb29vlOuqoqCjOnTvH\n8ePHVzTEtre3c/fuX
...(base64-encoded PNG image data omitted)...",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Current loss: -4.360440\n",
+ "Current MNIST score: 7.883360\n",
+ "Training step: 5000\n",
+ "Time since start: 23.939930 m\n",
+ "Steps per min: 208.856087\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAf8AAAFBCAYAAAB5Maa7AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsvVmTW9d1Pb7uvcDFPM9AoxsN9NxNitRA2bQsyamoYidO\nZao8+iXv+Q7//0dJVfKUp6RcSdmyynZsy5REUZx6noHGPM8z+veg2ltompQ4NAa276piiZJI9LkX\n55w9rb02oECBAgUKFChQoECBAgUKFChQoECBAgUKFChQoECBAgUKFChQoECBAgUKFChQoECBAgUK\nFChQoECBAgUKFChQoECBAgUKFChQoECBAgUKFChQoECBAgUKFChQoECBAgUKFChQoECBAgUKFChQ\noECBAgUKFChQoECBAgUKFPy5Qpj0Ap6B88v8MEEQcH5+qR+pQIECBQoUTDO+1b6L41rFJCEIAgRh\nWv0cBQoUKFCgYLxQTXoBo4RKpYJGo4HRaESn00GlUsH5+Tn/UqBAgQIFCv4cceWNv9FoRDAYRKvV\ngkqlQrvdRrfbxWAwwPn5OQaDAQDw7xWnQIECBQoUXHVcaeMviiLMZjM2NjYwGAyQTCaRTqdRLpcx\nGAzQ6XTQ6XQwGAzQ6/XQbrfR7/enzgGgksW0revPCQpvRMG3QTmjo4dyBi8XV9r49/t9dLtdNBoN\nGI1GeDwe9Ho9qNVqaDQaqNVqiKIItVqNVquFk5MTFAoF1Gq1SS8dACBJEmRZhs/ng8lkwmAwQLPZ\nRL1eR61WQ6vVQq/Xm/iBIE6FIAhQqVQQRRGDweBCdoXWOOm1vgpE8WuKzFV4lmnCszg5kiQBwIW9\nNC0QBAGiKEKWZWg0Guh0OqhUKvT7fWi1Wmi1WrRaLdTrdRSLRXS73ala//NAkiSoVCqoVCoMBgO0\nWq2xPAO9W71eD4PBAIPBAFEU+a4+Pz9Hu91Gs9nk+7DT6bx273fSuNLGv9vtolQq4eDgADMzM/B6\nvTAYDFCpVPB6vXA6nbDb7bDb7ahUKvj1r3+N7e3tqTH+KpUKJpMJb731Fubn59Hr9ZBOp3F2doZo\nNIpsNjsVmQpRFCFJEiRJgkajgUajQafTQbfbRb/f51+vq9Ek40QX0DQao9cV9G4lSWIHYDAYQBRF\naDQanJ+fo9/vo9PpoN/vT3i134AcXZPJBIfDAbfbDb1ej3a7DYfDAZfLhXw+j1gshs3NTT4DrxPU\najX0ej10Oh263S663S56vd7If64gCJBlGU6nE4FAAMFgkB0Ri8WCfr+PQqGAbDaLVCqFs7Mz9Hq9\n1+79ThpX2vifn5+j1WohnU6zB5vJZNBoNNDpdKDVajE7O4vBYIBqtYpisYhGozHpZTM0Gg0sFgu8\nXi9mZmbQ7XYxOzuL73//+6jX6zg9PcWdO3cQj8dRLBbR6/WYwzAK0EVtsVgQDAZhtVqh0+lQr9ch\nCAIMBgNmZmbg9/svRAuUwSDiZSqVwubmJnZ2dlAul9FsNsdyqbwsyMgPBgNIkgSHwwGr1Qq73Y54\nPI50Os08EgUvBjKiWq2WMyu9Xg+yLMPr9UKr1QIA0uk0SqUSX/KTdnpp3W63G4uLizAYDDg/P0ex\nWITb7cbq6io6nQ6CwSD0ej2i0SgymQxardYFztE0Y25uDktLS1CpVEilUqhUKiM9p+Rgu1wu+P1+\nzM/PQ6vVolqtQqVSwWq1IhQKwWKxoNVqoVQqIZlM4sGDB0ilUpwJaDabaLVaijPwHbjyxr/b7aJY\nLEKWZciyjHw+j0ajgW63C7/fD1mW0W63UavV2BBNC3Q6HRwOB4xGI7RaLQRB4INhMBiQSqUgyzIe\nPnyI/f19frbLhiAI0Gq10Ol0UKvV8Pl8uHbtGmdSYrEYGo0GDAYDQqEQ5ufnYbVaodVqMRgMoNVq\nYTQa4XK50O12EYvFEAgEYLFYEIvFkEgkkEqlXgsHQBAE6PV6zMzMYGlpCQaDAQCQy+XQbDan/kKf\nNJ5Wt6VLn6L/8/NzaDQa2Gw2OJ1Ojq6z2SzK5TJKpRLK5TKAyWeRKDqWJImzXcDXZ5eyi2q1Gmaz\nGQcHB0in06hUKhcyYtMIQRAQDAZx69YtLpU+fPhwpD+Psod6vR4WiwUmkwmdTge5XA6SJGEwGKDR\naMBiscBgMECv18NoNKJarUKWZdRqNdTrdVQqFWQymbHd5cMlq1fdj/RZVPIapZMrjeRTXx3/32V9\n0HBKsdfroV6vs+ftdDoxOzuLRqOBdDqN7e1tVCqVqYjgBEGAz+dDJBKBRqNBvV7nyOH8/BwOhwOB\nQADLy8uw2+0QBAH5fB6lUunS1yGKImZmZrC+vo6lpSWsr69jZWUFwWAQNpsN0WgU0WgUZ2dnODw8\nxNbWFjsikiSh2+2i0+kA+PqytNvtmJubwxtvvIFQKARJknBwcMAX5zSD0tGBQAA3btyAw+GA2WxG\nPp9HvV6/9L0zzKe4qiDCLUX1g8GAswF+vx8bGxu4du0aVldXYTAY0Ol0kM1mJ96yOxgM0O12US6X\nkclkkM1mUSwWudZvs9ng8Xjg8/lgsVig0+mYq0O162l0eOnM3759Gx988AFUKhVyuRw2Nzf5HI/i\n56nVami1WvR6PRQKBUSjURwdHSGRSCCbzSKRSGB3dxebm5s4ODjg8my32+W/a7fbYTabUSwWx+KM\nk30RRfFSzujwe9BoNK+azf3/v+1/XunIn3B+fo5Op8MH7/z8nD3JwWCAcrmMfD4/dakiuhxKpRKa\nzSb6/T4sFgtUKhXOz8+hVqvh9XqxtLSEUqmEzc1NnJycXNrPp4jfYrEgHA5jZWUFjUYD9XodJycn\niMfj6PV6ODg4QCwWQ71eR7/fhyRJyGazODk5gc/nY0KU3+9HOBzG2toabDYbrFYrVCoVYrEYOziT\njuS+C2SoBoMBZFlGKBSCw+FAoVBAv99HMpl8ZQeAHAyDwQCz2cw171qtxsSm540aiacgSRLUajXU\najUEQUC/30e9Xp+48RnW3aDzKIoiOp0O6vU6AMBmsyEYDLITlM1msbm5eaEcM4l19/t9VCoVdDod\nJg8D37QY9/t9qNVqtNttdDodtFotAF9Hdefn51Pr0BGR0Wg0wmq1olarXeBkXDaGv8dhfseT7diC\nIKBQKECr1cJkMqFcLsPv9wMATCYTIpEI7HY7Op0OZ1ja7fZI7hTiJdjtdvj9fs4ep9PpV8o40Hs3\nGo0QBAHtdntkZ/TPxvjTpup2u+xp0mYuFApIp9Mj2ygvi0ajgXw+j16vB6PRyIJFHo8HAFCr1aDR\naGA2m7G8vAyz2Xyp7TBUxw+FQlhZWcHCwgLu3LmD/f199qwpynmSbZtOp/H48WPIsszGLBwO4/bt\n27Db7fB4PBcMHF2e05B1+S5QxNdqtRAKhRAOh/l9pNPpV34GSZJgMpkwMzODSCSCdruNUqmE09NT\nVKtV9Pt9tFqtCz/nad/5MCNdp9Pxu
xZFEa1Wi4lS04DhrhBRFNHtdjlLRxeiXq9HKBSC0+nkZ6O/\nO4lzS0FFt9uFSqVio07ZCbVajcFggNPTU2xvb2NrawvFYpG7kAaDwVS2CEqSBJ1OB41GA1EULxjg\nUYEcP+peetr7oHXUajX+dXZ2BrvdzpmhlZUVDAYDfPnll4jFYiPrAqDyRCQSwfvvv49isYhoNIpa\nrfZKxl+r1cLpdMJqteL8/JyD0lHgyht/o9GIxcVFzM3NIRAI4PDwEKVSCRaLBcvLy3C73bhz5w7O\nzs5GktJ6GQwTXyhKJuMfCAQgyzLXPY1GIwCwE2AymVCr1V75sNIa+v0+yuUy7t27h83NTSYXkkfa\n7/ef2W5IDhc5JK1W60JrjkajgSAIXDdVq9VTYYy+60Imx5EurH6/j2q1ikaj8coXjUqlgsfjwU9/\n+lOsrKzA6XSiVquhVCpxaaHVaqHRaKBWq6FSqXDGimqe7XYbwNcX1MrKCpaWlhAMBuFwOGAymdBs\nNpHL5bC9vY29vT0cHR2hWq2O7JJ5WQiCwE6Qw+HgC9dqtcLtdqNWq6HRaEycPCcIAjQaDTPj19fX\n8fbbb8NoNCKRSOCzzz7D/v4+4vE4Wq3Wn0S2r7L2y6w30+dFIhH89Kc/xfe//30YDAYcHR1hb29v\npGU5yoS8yPug0pDFYuGyCj2D1+uF2+0eSRlXFEXY7XZ88MEHePfdd/HWW28hlUrh0aNH2NnZQT6f\nf+Hvgu7Ba9eu4Z/+6Z+QTCaxv7+PaDTK5cTL3uNX2vhLkgSz2Yy1tTVcv34dCwsL8Pl8yGQy0Ov1\nsNvtaLVayOVyyGazU2F4gG82gsfjwdraGht+4Ov0FgAkk0k0Gg1mnhM72ufz4eTkhA3Aq66Dyg7D\nXRLPWxoZrmmKonjBWaDLjwgtFDlNGsPtfM86bNTFkM/nUSgUOEolR+dVoNFo4Ha7cevWLWxsbECl\nUqFcLqNaraLdbqPdbl8w/uVymaObfD6PfD6PXC4HQRBgNBrx3nvv4Z133sH8/Dzsdjt0Oh2q1Sqy\n2SxCoRB8Ph/0ej329vaQSCSmJgKlNjra+/R9CILArG8qsUySK0I1X7PZDJfLBY/Hg42NDaytraHZ\nbOLg4ACPHj1CPB5np3xa3vGTEAQBOp0O8/Pz+MlPfsLG8/j4GCcnJyO/H1/kvZDDZTKZYLVamXhL\nGRVZlpkkfdniQJIkwWq14tatW3jvvfewvLzMZU+9Xs9B04s+i8vlwrVr1/DRRx/h/v37aDabsNvt\naDQarOlymWXpK238NRoNHA4HlpeXEQ6H4XQ6cevWLb5A9/b28Itf/AKnp6dTJcJBNcOZmRmsra2h\n3W6jWq0in89z/XZzcxOpVAperxerq6uIRCJYX19n8tGrtp5RqaTZbLIK4qsyT1Uq1YVD2W63UalU\nUKlU0Gw2J074o4tclmV+5idBjOOzszMIgoBGowGv18vkRsoKvOx7MplMsNvtbMzr9To7FlQa0Wg0\nbHAikQjXCOv1OtLpNB48eABZlrGwsICNjQ0mVdI7lyQJLpcLFosFs7OzWFtbw3/8x38gm81O9BxQ\n1EcdFaurq9Dr9dja2oLZbObOHKvVinfffRf3799nsZdJcXVEUeSW4bW1NaysrGBmZgZGoxFffvkl\nPvvsM6RSqZFlKC7z80j/ZHZ2Fm63G/V6Hdvb2zg+PkYul5saPhQZdKPRyN1QAFAqlaDVarmjKJPJ\njGQvq9VqGI1G+Hw+OJ3OC2JPLxPAqFQqOBwOvPfee9jY2EC/34fJZEIgEMDCwgJEUUQmk0G1Wr1U\nEuOVNf7Ell9eXsbCwgI8Hg8TRQqFAh4+fIh79+7h0aNHyOfzU1NrpqjfYrFAq9VyOjeXyyEejyOR\nSEClUuHx48dMBHQ4HAgGg5iZmcHy8jLu3bvHKfZXAV1Wr2r0h+uzVE9UqVTodrucdZnkBf7kOsmI\nPw2UzWg2m0wUlWX50qIiKo9Eo1GUSiXmEgwzmnU6HavI0X93Op1wOBzw+/2wWCyQJIlZ5qVSCTs7\nO2i329Dr9fD7/XC5XDCZTJxK//jjj6FWqyfefkYXOwm57Ozs4Pj4GGazmclVKpWKnRedTjcR4hzt\nFZ/Ph4WFBdy4cQOLi4vweDzIZDL48ssv8eWXXzIrfRqUOL8LsixjcXERi4uL0Ov12NzcxP/93/8h\nFotNVRurLMswGAwIBAKYn5+H2WyGSqXC2dkZCoUCms0mzs7OUK1WR3KvU9aYdE6eHBb3PPuR9o9K\npUIwGMTi4iJ3btG9bTAY4Pf7cX5+DrPZjKOjo0ttX7yyxl8URUQiEbz11lsIh8MwmUw83CeXy+E/\n//M/sbe3h06nMzXpfuAbFqnFYsFgMEAikUA+n0cqlWJvNp/Po1wus3BRPp9HrVaD2+1GOByGx+Nh\nx+BVcRkHngwqRXWkF0CCP4lEYuKdFi/SUkfvhLouhqPlV43uWq0WMpkMHj58CFmW2fhTucVgMMDp\ndMLtdnO07/F4MBgMsLKyglAohLm5OSaVEfHy3//939FoNBCJRPDBBx/g5s2b/AzUS0/M9ElDFEXU\n63U8ePCAn8Nut/PaHQ4HT+wkUtq4QSI/i4uL+PGPf4xr167B6XSiWq3il7/8Jf7t3/6NHalpCSy+\nDZR6Xl9fx9raGtRqNba2tvC///u/PAtlGkD3h8fjQTgcxurqKmRZRjabxc7ODur1OsrlMvMrRrFu\nKvEYjUaWdB6eE/M855+CPJ1Oh+vXr+Ptt9+Gy+WCSqXiLgUSkdJqtQgEAiiXy0in00rk/20gUtDa\n2hqWlpYwGAywvb2Nhw8fol6vIxqNIpFITNUgHzI8RNzzeDyw2WyQZRmZTAZ7e3vM9qbWIdp8vV4P\n1WoVMzMzCAaD2NjYQKVSQS6Xm/Rj8SVpMBiwsLCAUCgEAGg2m8xcJy+e2nPGWRcdNvrkiVPG5WmG\nkJj4Op2OI2m3241GowGV6tWPU6vVQjabxePHjyFJEivC9Xo93h/ZbJZ5K/V6HYFAAFqtFjabjTs+\nMpkMDg8Psbe3h62tLRwfH0OWZVSrVcTjcdhsNoRCIVamazabE+92oe+Ahm4NM7/JgaRuHXIgJxH1\n08Cw+fl5zM/PQ6/X46uvvkKxWGTHjRzCabhbngeyLMNmsyESiSAQCHBJrlqtTk1wRPK+gUCAy7hU\nX6fvhBT+SFp8FKDSJTn+g8EApVKJ232pjRz40+CJOqgWFxeZgxMOh1kDgkAZjFAohHa7jUKhwGf1\nsnCljD/VbG02G8LhMJaWluD3+1GtVrG1tYVPPvkExWIRxWIR1WoVarWaXyhdOMMRHP0aByiSMZlM\ncLlc8Pl8sFqtEEWRe+bj8Ti63e5T60q0qSiNZLVax7LuZ4GMPrUner1erK2tYX5+HqIootFooFKp\nMHM2Eolw7/m4Ls3hFP9wL/yzeprJW7fb7TAYDBAEATabDX6/H7Vajf/bq4CMHik1Ds8RIOeEog
O9\nXo9OpwNZlpk7QTyBg4MD3Lt3DwcHBzg9PUWxWITNZkOr1UKxWEQul4Pb7YYsyxc6F+g5x2m0nmy9\nfbLnn2qsJpOJ2xYtFgtkWR7bGoFvOCt6vZ5FnujifvDgAR4/fox4PI5yucwtcsMtjNMI2vNOpxOh\nUIh1K0jtdFzDfJ6F4b2h0+m4Dk7GU5IkNrh0/40640JzBoZbE4mMS5omVLoiB4QCDNIGCIfDcLvd\nUKvVcLvd0Ol0KJVKHIw2Gg3IsozZ2VmWu6YSw2XhShn/YfLN7du3MT8/D0mSsL+/j0ePHmF7e5s9\ncq1WC5fLBa/Xi8FgwAp6FFlQJNRut0fuAJDhdrlccLvdmJ2dxezsLEwmE9fFSd2PJhIC4MhQp9Nh\nbm6O//ykI45hBrTP52NxHyrBCILA7WtGoxHhcJi99ng8Ppb0/3CLIXUaUBRB8s9PgjIYMzMzsNls\n6Pf7mJubQygU4hKNJEmvVN8lY0E/f/hz6EIj54CIoR6PBzMzMxAEAdFoFPfu3cPOzg4ODw9ZqMlu\ntzMxiozYsNOj0+mg1+svpU30RUE/fzAYcPaBvh+DwQCTyYRgMMjtuiQNrdfrx7rPjUYj3G43IpEI\nVlZWcO3aNZhMJo6MafiNWq2GzWbjMeHD6WdycKYljU734LVr1/Duu+8iEAiwYZsGp4UIuJSZoE6K\n2dnZC9m54T9HDu2oQB1Lwx0oOp0OdrsdgUAA9Xqd+/OpFEDrc7vdLHxGRG6fzwetVgu1Ws1BEQ00\nmpmZQb1eZwnjy3TMr5Txp7RQKBTCzZs3YbVaUS6XsbOzg6OjI1QqFRiNRvj9frz11luYm5uDy+Xi\nyzSXy/HFfXh4iKOjI5yeno60/1mlUkGn07GQjtlshsFgQKvVQjweR6fTYcMPfF1voh5nilDp0qTe\n/0kytmmimdvthsvlgtPphM/nw8zMDHw+HwaDAdLpNMrlMnq9HkwmE0RRRLlcHkvESY6J0WiE3W7n\naWyUYcnn81yOoD/vdruxtLTERKNIJMIT3EwmE/r9PjQaDQ+nedXn+Dbnjf5fq9XiWnelUkGpVOJR\nzw8ePMDp6Sm38zmdTnYcqc8/lUrBarVCrVbD4XBAp9NBp9ONdbAV9WMT5yCfzyMajTLBaWVlBTab\njdUhZ2Zm4Ha70e/3cXJywsNmxsUVoSiUpFfJUWw2myiVShytra2tYW1tDa1WC4VCAbFYjM8vDcI6\nPDycWFRNWTmr1YpgMIjV1VWsr69jdXUVAHhS3mVLhb/sOg0GA7PricBttVpZTrler7NQWyqV4iFn\nowJlzzKZDJP+ut0uDAYDlpaWYDabUavV+B42Go1Qq9UAwHtVlmV2HojE2+l00Gw2MRgMODChDMIo\nBoddGeM/nFIJhUK4du0aOp0Ojo+PsbW1hXg8DkEQ4HA48M477+Bf//VfsbCwwMIyJL9Jkdf//M//\n4Je//OUFwzuKNWs0GjidTqyvr+P9999nXWsa1EM99gCY5by0tIRoNIpWq8Vzw5PJJEvAjiNb8azn\nmZmZwa1bt3Djxg34fD52yIZr0dvb25wZMBgM6Ha7bHRHPR2P9gmRI4lhe35+jj/+8Y9sTIbXEIlE\n8LOf/YwPsNlsZqUxchooAhlHDZqMP33fRAJsNBo86axcLqPVasHlcmF1dZXLKZIkoVAocCRBA2go\n8h/nhS8IAhYXF7GxsQGLxYKtrS1uz5qZmcHf/d3fwev1snNFXJ79/X18+umn2N7e5vbEF/25LyPC\nQnoOpVIJqVQKg8EAOp0O/X4fiUQC1WoVFosFf/mXf4l/+Zd/Qa1Ww/HxMf7whz+gUqkAAJxOJ87O\nzpDJZCbGsaB7JxQK4e2338aHH34In88Hk8mEarWKZDLJQ4gmCTqrBoOBZ5ksLCwgGAxiMBggmUyy\nrsXZ2RlOTk5QKBRY+GpUoCzxyckJZFmG1WpFs9lk8t7GxgZnEvV6Pebn5yHLMhqNBr788kvs7Oyg\nVquh3+/z0DOq+TcaDRiNRiZIk5ZHpVK59FboK2P8RVGExWLhlhuNRsP1Trfbzepmq6urWF1dhdVq\nZa+RPDMymr1ej1/4qA0RKRAuLCwgEAiwalsul0MqlUK1WoVGo8H8/DxcLhdsNhv0ej1mZ2dZkMNk\nMqFer0Or1aJcLiOZTPJlM2oMp5WDwSA++OAD/PCHP4TL5eIauFqthkqlQrPZZDELABdSXdFoFIVC\nYWTvm2puGxsbuHHjBlZXV7ktTpZl1Ot1FItFliG2Wq1wOp1wuVyIRCK4fv06D5/R6/W8RyjaiMVi\nLIQyjgt9uDyQSCRw586dCxKpNA6apqN5vV5mx5MCWSgUgkaj4RbSXC43MjnUYQiCALPZDKfTiZs3\nb+LmzZt8FiiS8nq9XNdVq9Vc32+1Wqyal0qlXsqAvszzUV23WCxif38fyWSSSYcAWMSn1WrhV7/6\nFcrlMmw2G0tYW61WbrM0mUzw+XxMeh0XKOu1tLSEjY0NLC0tYWFhAV6vF4IgIJVK4dNPP8XOzg7S\n6TSOjo7GtrangeSTa7Uav0t6n1TjF0URuVwOiUSCJ/mNOvNZr9cRi8Xw29/+FicnJ/D7/VyiCoVC\ncLvdzLkigjCVf9xu9wU9EZ1OB5fLBVmWce3aNbTbbZ4C2W63sb29jf39fTx48ICzYpeFK2P8h1mU\nNOyBBs0sLS3BarXC6/ViY2MDHo8HlUoFp6eniEajrH5G9Zlut8tyraOOoKluS9rl9BxUf6Z0ks1m\n4wtckiR4vV54vV6srKyg2Wwin89zDSmbzV5gjo4SxLKdmZnBxsYG3nzzTbz55psQBIGHsxCJhWpj\nDoeDSTok8Uus9lHxFYgst76+jr/+67/GysoKrFYrG8xqtYpKpQKv1wsA8Hg8XF+m8cStVgvtdptF\ngERRRDqdRj6fx/HxMZLJ5FiNP/2cYrGIUql0gTRKBsdoNHKpiOrkTqcT3W4XDocDjUYD+/v7I+2L\nHgYJolitVszMzGBlZQXr6+vodrtwuVwIBoOs5e90OmE2m2E0GnkeAZXnDg4OXkht8jJAXRfDpZEn\nyzyNRgNfffUVTk5OuGYbCARgt9uh1+vhcrlYdrlWq43N+KtUKuYLrays4Ec/+hFCoRBsNhvUajWK\nxSISiQS+/PJLPHjwAOVyeSraPul9k8KdLMtM/KSpd+VyGYVCAeVymffDKM8gdeR0Oh0kEgkesU6l\nKepIIAedSrNqtZr1NUgKWqfT8V1PnVukrFooFHB4eIj79+/j7t27KBaLl/ocV8b4EzOYhsRQCxT1\nY5IwiyRJKJfLODw8xN27d3H37l28//77WF5e5hoqEXVGncI9Pz9HtVrF7u4uXC4XZmZm+KLe2NiA\nwWBAIpHg3tVmswmbzQa32w2Px8MbSRRF7uknB2BcOu3DqmAul4vfPfXwb29vM0N+YWGBZ3QTsdJk\nMsHj8WB5eRnpdBqxWIzfzWWC6t1LS0t45513OHog4
o5Go8H777/P+4hY9ZQxaLfbfPlTZwilgeny\nqVarE1XHI3axRqPhSJnSipQ2p8wRtUQ9fPgQn3zyCQ4ODsZC8CKOAfFBHA4HPB4PBEHgLhcqowwz\nqCkCHNavmESb37AKIYCnOh+kWJnP57GzswOdTsfkV5fLhfn5efz4xz9Gs9nE4eHhyNdM6WeLxQKr\n1YrV1VW8+eabTFakevnZ2RnXqqeB7Dec3crlckgmk0ymJFY/cS3GqQ5K559+diqVgkqlgs/ng9vt\nhsFg4KAG+MZBJDEisi3E6qe5JkR6lSSJJ25mMhlkMhn+Xi4TV8b4A9+w/Uk9zmg0sqdI86Apskmn\n01wfIuKG2WyGXq9nxuio+4ipdptOpzltRR6tx+NBvV5Ho9FAr9dDpVJBvV6HLMushU/GSa/Xw+Fw\noFQqoVQqsUjEOCBJEjweD0KhEGZnZyEIAk5PTxGLxXB4eIjd3V0AX8vWhkIhGI1GNJtNaLVa1mKw\nWq1YX1/H/v4+e7+XffHQJbK7u4uPP/6Yv2Nqm6S2GtJXoJQjDcoheWPiA7TbbTQaDSQSCezv7yOV\nSqFWq038wpRlGX6/H3a7nfXxKcIgtjDJEt+/fx+ff/45Hj9+jGKxOLaMRafTQaVSQSKRwM7ODvN0\n9Ho9j3bxHnO8AAAgAElEQVQmJ6vX611oTyRC16SN07f9bCoPNZtNJgkSmbJWq8FisWB9fR137twZ\n23qHWzmp/Zb4NhQskAM7Tfon5NRS1ofKiCR1PdwWOvx3Rg1yAOiOoEzn73//eySTSfh8Pm4JJNE2\nl8sFrVbL55BUQh0OB/OIqBxA2YJCoYBisTgSEbQrZfyJHUpCISSDSn3R5GUTkUSSJLjdbrTbbeTz\neRgMBvbSKbU06uii2+2iVCpxOx8JtZhMJpjN5guXNqU56fmIqEh/Pp1OI5PJjDVlR8Z/fn4e4XAY\ntVoNm5ub+MMf/oDt7W0eCNTr9ZglTdMJV1ZW4HK5EA6Hcf36dXzxxRcsMXvZoIP6X//1X/j44485\nCrLZbAC+JvH96Ec/wptvvgm9Xg8ALHTSaDT4vdP3QJryx8fHePDgAV+ak26xpG4Et9uNXq/HQ5+o\npbHdbqNer+Ps7Ay/+MUv8PjxY+Tz+bGlz7vdLl9o0WiUNR/+5m/+BsFgkC9HcmzJiFJWiJzk10U5\nb9h4URulXq9HMBjkvTdq0CAqymhRZo6CJbPZDAAs6kNS29Pwfp+llUB3+HCEPfznx+XIDv+zUCjg\n448/xq9+9SsAYBY/tWKvra0hEokwEbpUKiEWi3HQQZLdVP4dDAYoFovcGXXZz3RljD+l0Dc3NyHL\nMsLhME+So8lnVDOnL6XZbCKRSKBQKCAejyOZTDIxcHd3l1v/Rr1u0jE/PDy8cFlLksQqcjS1z263\nw2azwWAwsANDLSLb29vY3t7mdr9RYziqEQQB6XQau7u7ePjwIY6OjpBOpy9EcXQJ0XNRpmJlZYVr\noBqNZqTqXMTroOiTGM1U06cUm8PhgFarRbFY5FHPxLeglB9JLVM9ctKXJUUN5GSRk9hsNjllXiqV\nEI/Hsbu7i9PTU+5uGJfTMlwH7Xa7iEaj2NzcxPr6OhNHhycXkggNcTJisRhyudzE3/WLgvY+Tcgj\nkt24BJXIASGBMzL+VG+ms0CO7rS9Xyr7NJtNJrRS9wc5AOM0/N8G+vl019AwLUEQkM/n4fF4OOhM\nJpM4Pz/nrK7RaLzAwxjOPCrG/xmgF7i3twebzcaMcqqf9Pt9Tu2TsRJFEcViEaIocntIPB6HJEk4\nOTlBLpcbixE9Pz9HqVTCyckJwuEwfD7fBeleURRhMBi435VqupT+6nQ6yOfzODw8xPHxMT/vqDGs\nvtVut1Gr1bC7u4t79+4xR2F4HcO/r9frzNJNp9PQarVcDhhlrzml7okbQSASH/Xq0runw0dOmiRJ\nPGshHo8z0XIaBrcQSc7pdPKAEHIsiVmeTCZxeHiIg4MDnjY3bgzXzckpIVIXkT673S6KxSK3pZXL\nZZTLZaRSqanSmn9eUNmpUCggm83CYrGMlaw4LEXb6/UgSRI74el0GslkktuaJ11SeRqo7EZBBAnt\nDJ+7adsTw3cNyYbT2skxbzQaKBQKnBklPgCVv6h7YRTPdmWMPwCehuZ2uy8wL2mjk6dLRp6IGjQP\nWhRFpFIpriuOU9eaxrHS6FYqWYiiiEKhwLUuIjsNj5Ks1WrIZrPI5XLMeB3H4aXIpVqtsqM0POP+\n2zYsGVOKSoksp9VqX2os5qtCkiRuV1xcXIRWq2XPnWROKeog3YhyuTzxMbjDCAaDuHnzJq5fv87T\nK8/OznB2doZWq8XZimQyiXQ6PTZS6NNA7U5er5dnPrjdbo7oBEFAPB7HYDCAyWRi7XQqr7yOoLIM\nDWwaN0eE7guqP6tUKpyenuI3v/kNTzeddOnqWaComNqyScPl/v37bDynIep/GqjLye12IxgMwuv1\nQq/Xc2BJ9x1lpInPJYoiE4tHgStl/DudDnuxlF4h6dZyuYzj42P+c4VCAclkErIsc/98v99HsVhE\nOp1med9xeZOU1iKhGyLkqFQqNBoNtNtt6HQ6AGB5X+DrGuowM3rUIjlPgvqfO50OT7kblr78LpBn\nTII5FosFhUJhDCu/CFEUmWdBpEp6t3T4ut0uKpUK9vb2cHJywl77pCMlcsKoN55aXYvFIqvQkcEn\ndUUqx0xirZTKp1poJBKB3W6HKIqo1WqcbRFFkQlpJycnODk54fHPrxvomWlE97h7/IFvhvdQ2bDb\n7SIej+Pu3bs4OjpCqVQaK2v+RUClUHqH9Xod8Xgcx8fHzHGaRsMP4IKcPI3TprbhwWBw4a4nB41E\nt6jUOApcKeNPkWcul0OpVMJgMIAsy+h2u0gmk/jtb3/LQhDUCUDMer/fzzUYSj+O04hSTY5SoETi\nI/3+crnMYifUK0wXCW38cXu+FL1T2x6VVKiO+Lyg9Dq1gcXj8RGu+ukgyV9Zlpml3Ww2WetBq9Vy\na9/x8TF2d3eZmzDpS4cMqtvtxtzcHMtak1qfWq1GrVZDJpPh2QmTWjNlsKi2GQ6HeThOu93mDBYJ\nu1SrVRbW2dramvjo55cFaTDU63Vks1nmKI0TWq0WHo+HZZ1LpRLOzs7w4MEDngA66b38LAwPzCHu\nQjqdRjweZwd9WtcOfDNDgd5/PB5Ho9FAtVrltm3KflKWgwS6NBrNSHRbrpTxJzJRIpHAH//4R4TD\nYYRCIahUKh5X6XA4IEkS5ubmeDOR+tzx8TESiQSneyex9kajgXq9jsFgwHMKcrkcz3aem5vD8vIy\nX/KDwQBnZ2c4Pj4em7APQRRFaDQaBINBhMNhyLKMYDCI+fl51l1/1nskvgAJj1BXA5G8xg1qu6Sa\nG0VqWq2WDRARzig7Mw2GH/haL95ms2F5eRmrq6vQarU4OTnB7u4uaxBQ3X9SayYlSI/HgzfffBMe\njwdms5mV
Fmnmw+7uLmKxGKtbDhP9XmRe+mWBojCSp6bectJgf56/T1yMQCDARL9YLDbWDBc5h++9\n9x7C4TD6/T6i0ShisdgFwzkN+/lpoLtiWFSJBi3R90FnkvbIk90BwOSejzIVXq+XB8pRgEElXsr+\nbm9vIxKJYDAYwGazcTZUIfx9B87Pz5HP57G5uYmdnR34fD72oObm5uB0OmEwGHDt2rUL5YBYLIb9\n/X2k0+mJXJCULicyGcn6BoNBZLNZmM1meL1eRCIRbuUSBAHJZBLxeBxHR0djN/7AN8p5ZMT9fj/W\n1tY4K0CRM6WzqAODnC6dTgeDwcBZDarpjYsFTaD+c1obEaMoxZhOp5HNZhGLxViXe1oIRhaLBSsr\nK1haWoLP50MikUAsFsPOzg6q1SrrgzebzYkZflEUeajWjRs3EAwG2ShqNBqkUins7u7i/v37iEaj\nTOwbJj3RuRzXMwzPC1lYWAAAltGmMhW1fQ5rQBgMBpjNZhbXoeFWgUAAZrMZ1WoVR0dHHG2P4zmo\nrDU3NweLxcLzQFKp1FSQVb8Lw5nRWq0GQRDg9/uxsbGBYrHIzhi1P1OplPTzKQNMBNNx3/GNRgPx\neBzz8/Pw+XwAvimZ0t6msnMymWT9Fpp4SQRBRd73O9DtdlEul/Hzn/8c8Xgcf/u3fwuDwQCPx4NG\nowGDwcBKYicnJ0x4ocEKkzgIVGPe2dmBXq/H2toaS/reuHED3W4XHo+H20QkSUIikcCnn36Ke/fu\nYW9vD5VKZaxrp1HI9+7dQ6vVwvz8PBwOB9577z320KPRKHq9HtRqNQKBAKcbyRmg9G+/3+c57dSm\nNm42NL07SZJQrVZZN4EyK1QznwYxn2HMzs7ipz/9KRYWFtBut/Ho0SPcvXsX29vbPACIxs1OAvRd\n2+12OBwObrWkFrNEIoFHjx7h8ePH2Nra4l5zcr6GRVzG/d7VajW8Xi9++MMfco18ZmYGwNfaEOQQ\nVioVruVfv34d7777LvfQO51O7t45PDzE1tYWHj58iFQqNbbnoBZV6q4xm81MOpwWJ/bbQEEaaYGE\nw2HcunUL8/Pz3EpHMzeq1Sr29vaQyWRQr9eZTJrNZvHw4UP85je/GYli3reB5gGcnp5Co9GwuFwi\nkeAugFqthmQyiePjY+TzeWg0Gm41JkEjxfh/B+hyOT09vTDkhAhcsizzIBmqJZL63yQvddKYL5VK\n6HQ6HDUMG8der8dDWPb39/H555/zZqF+9HGBouWzszOWGF5fX0c4HMYHH3yA5eVlHr4CAFarFSqV\nisfLkuAPiboAQLVaZXnMcUaqsiwjEAjA4/EA+PqyoUlhNAOiVCpNhZjPMGjk8DvvvAO73Y5isYi7\nd+/i4cOHyOfzbEAnvV663BKJBLfS0uCnarWKw8NDnJ2dIZVKXeDbTHrd3W6XiZ16vR52ux12u53Z\n28RXaTQa3K4YCoWwvLzM/AYqLzabTdY1SCQSY8/UEaeCjD9NF3wdOBSdTgflchlHR0eYn5/H0tIS\n/H4/j3imVjoq33m9XhSLRbTbbZbcpS4u6rkf9/oHgwEODw85G0clLUEQuNxMHTmZTIbLHJ1OBzqd\njh35y8KVNP6EVquFo6MjVhKjmt3w8IdJXy5PA6V6nE4n3G43C0XUajXs7e3h0aNHePToEddHJ1XL\npVJFoVDglJXD4cAbb7yBmzdvwmq1cuqfWNwAWEmRPN5isYjj42MeLmIymVh0Z1wti1qtFqurq4hE\nIgC+ZsqfnJzg0aNHzDIfridOAyida7PZsLq6il6vh/39fXz22WfY3Nwcy4S+5wGdORLS2tzc5HYm\nyko8TaZ10iDt9Vgsht///vdYXFxEJBKBwWCA3W5HIBDg4T1kTGguSLPZZKeABIsKhQK2trYmQlwk\nkSHS8dfr9Tg6OkIikZhI18fLgIjbuVzuwlS/YQVAGpRmsVgAgI0nZThIRnfc+4x0Hvb29rC/v39h\nzU8bnUwcByqJmkwm9Pt9xfi/CKj2PBxJTKvRB76pA9GUKtLCLxQKOD09xfb2Nra2tnB0dMSiHJOM\n7MjTpn9ubW3xBDOSJabLner5JLksSRK63S5kWYbT6bzgEY8z6qdWOYriBEHA2dkZTk9PkU6nUa1W\nn0kkmiRUKhWP7ZUkiQVkSE55WtY5DCpvkUjLNL7XYZB4GKkhHh0d8TRKo9EIu90Ot9uNxcVFWK1W\nLh2S8adf1WoVhUIBBwcHI1Nsex6Qw0iZuknxQF4G7XYbBwcH+OSTT1CpVFgieZjwRzoRNDmPRrsf\nHh7yHA4iVI8bZIuG+UzPevfkLJAWB2WPLhNX3vgD06f89G0g45/NZpFKpfhSj8fj2NnZwf7+Pg4P\nD5HJZNgoTfLwDhPlms0mDg4OuJZfLpchyzIr4FFET8Q/AJwuJb3509NTTteN0/hTBiIej0OtVuP4\n+BjRaBT5fJ4lT6ftkiTJZ4fDwXPNT09Px6bw+DIYdsaH1zitZ3T4PBYKBUiSxGuVJInr+jdv3uR2\nLYr2SRiq0Whc0M2fhGM2LKpVLpeRz+d5+uC07etnodfrIZlMsm7F/Pw8XC4Xt4Z2Oh2eYul0Ovm+\nIS7JsHzxpPA0J/dp75/+HOldyLJ86Wfkz8L4v06gPlYqVRABhOrOmUyGSSDTYpCGU7aJRAKVSgX7\n+/s8P344PU0MVo1GA5VKhUKhwJrz1JJWLpfH+lwkT/z48WOkUimIoshtn2RIp+E9D4PYwEtLSwgG\ngwCAg4MD3L9/f+z94y+DYYnfaXu3z8KTGQpyYFqtFj799FMWoBkmKQ5Pg3wR8avLBJ3PWq2GaDSK\nfr+P/f19xOPxqe+PHwY5MCTFTiN+h1v8RFHkNl0SzaHsyyRauF8FVFalrIYy0veKgw7p0dERCoUC\n1Go14vE4stksCwBNE+GMQOshKVxaOzH3qUZHpQAaSlStVi8czklcRsSpiMViyGaznAWo1WpTMbDn\naaC55jdv3sTi4iIAIJvNIhqNTlS290Uxbfv42/Bk1EZGler804zz83M0m00uC2k0GuTz+alV9HsW\nqGxEgltXHeQAjCIAUYz/lIEY/4eHh5ymI8/vdQF56BQVEcbV1/yioPRaJpPhMsa01qAJpPR469Yt\nrK6uQhRFnjUw7q4PBdMPcnDb7TaKxeKkl6PgBTCqu0gx/lMI6lUl0se01m+vCiiCIwdr2g0/AOZO\nkEQrDYCaVqKfAgUKpguK8Z9CUHscgImxgv/cQNkK+v20gjoTJElCr9fDwcEB75VkMjk17X0KFCiY\nboxX6eD58Wd/e01ai1rBdILITAaDARaLBV6vl9XjTk9PEY/HmaugQIGCP2t8q32/0safesoNBgPa\n7TazyP/cDSoJHk2LipqC5wfNUqB5CsRqVqlUrIN/2axmasucRuKjAgWXBcqqPUnqfB3wjGDxW+37\nlU77a7VaOBwOBINB5PN5HsjyunyhowAN1CFRm+8Sm1AwXSDp6nGR+
qjE8DSmuwIFVwkkfvRkqXXa\n9zo5LQC4fZZ+/224ssaf+qBDoRB+8pOfYH9/H7lcjntE/1xBrSMU+VMPvsItUPAsDEf8w1kAZb8o\nuEoY1nEgR+B1CBafXB85A99FFBdHuahJw263Y3FxEbdv38b6+jrMZjNkWZ70siYOEh6hTTPuIRcK\nXh88KYdNmg0KFFw1DOvt01Cm12WvD59RCui+C1fW+AuCgMXFRdy8eRMejwdGoxEajWZkX+brakBp\no7yu61cwegxH+WT8lf2i4HkwnJJ+XUClLp1O99oY/2E8r4N+pY2/1WqF0+lkCchqtTrSWunrtsmB\n16OnXcHkMcwNUYh/rz/IKL+Od9aoQZH/6/p+nveMXtmaP/A121+WZXS7XZRKJSSTSUUw5wlQf7vi\nACj4LjxtKpmC1w9PEsQU/Cle53txeIrtt+HKGn9BEGA2m2EwGHh4zCi/zNdxkwAXIzoFir7Cd4FI\nosr7eTYo7Uqz5qdpeM649/e0PPeLotfr8Wjs1w3P+86vrPEHAJPJBIPBwOM0r0q68rIP8GUe0GH+\nwLCREEURJpMJAKZyYM4wSWaa1jVtuOwyERlJaid8WtQyraUpqg2Tkadfer0eBoMBnU6Hx1lP4/qf\nBuJzPDnBcJx48g6h34uiCI1Gw7M4aK+MYo00m+R1xfO8kytt/PV6PXQ6HbLZLJrN5qSXcykY7kUF\nJudZP83Ak4aALMtsSOkC0Wq1eOeddwAAX331FcrlMlqt1tRcioIgXLhYpmVdVx0ajQZGoxFGoxGd\nTod5OYPBAJIkod/vX5i5MC0gsSWdTge9Xo9er4der4dut4uZmRnMzc3h6OgIqVRq0ksF8HwBgyiK\nMBgMPOKanmmcoDtErVazc6VSqSBJErRaLWZmZtDtdnF2doZarYZms/napucnjStp/NVqNYxGI3Q6\nHdrtNh4+fIiTk5NJL+uFQYZUlmVoNBro9XpYLBaYTCak02nWLRjXARVFEWq1Gnq9HmazGWazmd+z\nSqXiA0jje3U6HWRZRr/fhyRJWFhYQKPRQL/fx8nJCZLJJFqt1sQnFg6vc5pStFcRZDT1ej2MRiPc\nbjecTiccDgdPnEskEigWizwqd1qiMMoOabVaqNVqiKLI55OcXZq/XigUUK/XJ763Cc+zp2VZxuLi\nItRqNY6Pj8d2t5CRN5lMcDqd8Hq9vB9oXXT3OBwOdLtd+Hw+pNNpZDIZHlOsZOxeDFfS+JOyn06n\nQ7lcxm9+8xs8evRo0st6IVAkajKZYLFY4HA44PV6sbCwgGAwiDt37uDBgwdj83zp4tPr9fD5fIhE\nIpifn8fc3By8Xi8MBgNffL1eDx6PB263mw9rs9lEo9FAIpGARqOBTqdDp9NBNpudeFrUaDTCbDYj\nnU4rg3FGCIrq9Ho9vF4vZmdnsbCwgLm5ObjdbnQ6HeTzeXzxxRfY3d1FqVRiQapp4BnQ+q1WKwwG\nA6ediQcBfJ0ujsfjiEajEy9XvEh5UBAEzs5pNBrOzI0jY0p3nc/nw82bN/H2229jYWEBhUIB3W4X\nsiwzg52yEYPBAEdHR9jZ2cH9+/fRbrcV4/+CuJLG32q1YmFhgVOJ9Xr9tVL1U6lU0Gg0cLlc8Hq9\nHG3LsgyLxYK5uTkYjUb4/X788pe/RDQaHfmMbrqshy9tq9UKjUaDXC6Ho6MjlEolVKtVNBoNaDQa\nOBwOzM7Oot1uo16vQ6vV8oUUCATQ7Xaxvb2NVCo19hIAXXZmsxmRSAQ2mw21Wg31en1sa/guULcK\nGRhy8sgQDv/+aRfftBhMyljZ7XYYDAZ0u11oNBoMBgPs7+/j6OiIMy+dTgfJZBKlUgmdTgeCIECW\nZXbKJlWDVqlUHJXq9XoMBgPk83ne7/1+n9P+k6yX03ppzbSu71oL/f/BYACHw4Ef/OAH+OKLL/Dg\nwYORr1en0+Htt9/GtWvXsLKygkgkArfbDUmSkEqlEIvF0Ol0eP9T7V+lUsHv90MQBBwfH2NnZ2di\nGaLhrBBJp0+ibPIiuFLGnzxzj8eDGzduwG63o9VqTUVq+VkY7iclooskSZBlGVqtFlqtFv1+H+12\nG9VqFf1+HyaTCbOzs3C73SiVShgMBiM1/oIgwOv14saNG/B6vfD5fHC73ej3+6hUKjg8PMTp6Sly\nuRyq1Srq9Tp6vR7MZjPm5ub4O/B4PPD5fAgEAgiFQrBarahUKqjVany4xwFJkmA0GmG1WuFwOBAI\nBFgEahpAl5vT6cTMzAyXIyj9Tb+63S46nc6Fi4Za8chhmNQFNDx0yGazwev1wuPxQJZlZLNZtNtt\nVCoVFAoFVKtVPp+iKPKz0lnQarVQqVRMoBtnhEc/32q1wu/3IxAI8L6nc0nGf9KSx3SHkGGUJInf\n1/OsjZxJp9OJQCCAdDo9cuNPzuEbb7yBW7duIRAIwGazQZZlHlZ1dHSEWq2GbrfLvASbzQaTyYS5\nuTm4XC4IgoDDw8OxfwfESaAsrd/vBwCUSiVks1mUy2X+s5N2xJ/ElTL+dKkvLy/jr/7qr+DxeHB8\nfDzVQg0U2dAvita63S4ymQyKxSIbRUmSYLFYsLKyAq/Xi42NDej1evT7fTx+/BjA5W8wckzcbjeW\nl5ehVqtRr9dx7949xONxxGIxxONxFItFJgmRh57NZpFIJDAYDCCKImq1GgwGA6sunp+fI5VKIZfL\ncSvmqCEIAnQ6HZaWljiKk2WZDdA0HFBZlhEIBPDee+/h7//+79HtdvniazabnJItl8uIRqPsFBYK\nBdRqNYiiiE6ng0ajwf9t3M9ls9lgs9mg1+vhdDrhdrtxfn6OcrmMWCzGKX0y9E8OUiFDRGRAjUbD\nWQGqBY8alI72er24fv06tFotut0u8vk8c27GtZbngUql4pKnWq1Gs9m80Gr4XXtg+HnD4TC++OKL\nka+Z7uxIJIJwOAxZltFut1EoFHB6eord3V08fPgQhUIBrVYLGo0Gbrcb8/PzXH6s1+soFArQaDS8\nn8YFjUYDm80Gn8+H5eVlfPTRR+j1erh//z5+97vfYXNzc+JO4bNwZYw/XRQLCwucNkomk3j8+DHq\n9frUvXwy+na7HXNzc2g0GshkMnwhEuP5yRTi9vY2HA4HPB4PVldXMTs7C5fLNbKNT9FXsVjE7u4u\n+v0+arUacrkc0uk0stksKpUKRxjD71kQBE6jEwmz1WrBaDTCYrFgMBhw/XRcDprZbEYgEMA777wD\nnU6Hk5MT5PN5FAoFNBqNF/68y267XFhYwNLSEpaWlnDz5k2srq6i0WigXq+j3++jXq9Dp9MhkUig\n0+mg2+1Cp9PBbrfD7/ezo9But9FsNtlJOD09RblcHss0wOGxw+fn56hWq+j1euh0OiiXy8jn85wd\n+jYIgoBut4terwe73Q6j0Yhyucyp7FFDEAQYDAaYzWZotVq0Wi3e88VikSP+J9vSJl2eoOFdlPF8\n3vX0ej1Eo1GkUim88cYb
0Ol0Iy0dkbOh0+lwfn6Oer2OarWKWq2GfD6P3d1d7O/vIx6P8x1DAZLV\nakWv12OyrsPhgM1mQ6/XG1vpjgifer0ewWAQq6urWFxcRLVaRTQa5e6hacWVMf6SJMFqteLmzZsI\nh8Po9Xq4c+cOfv3rX6NQKEwdGYQulvn5eXz00UfY29tDOp1Go9FAq9V65no3NzeRzWbxxhtvIBQK\n4fz8HLIsw2Aw8Mjiy16nJEnY399HNBplg0OX8reRDYcjuMFgwJcRHYpWq8Ups3EZf6fTibW1Nfzw\nhz9EqVTCH//4R5ycnCCbzb7Uu6ML/7K8+1u3buEf/uEfsLa2xpwVaoGrVCq8P7a2trC1tYVKpYJA\nIIBwOAyfzweDwYBSqcR7yGw2o1wu47//+7+xv78/cuM//D7I8Wg0Gpx9eN4UNPD1/qFn9/v9MBqN\nMJlMXOoY9XNIkgSz2czE4VQqhePjY97Hw3tfkiTWUyd+xrhBtX7KYr0oGbjdbuOrr76C2+3Ghx9+\nOFIlQPpsKm0Sb6jb7aJcLiOTyWB7exvHx8coFotcFqTgo1wuo1qtotVqQa/Xw2q1wuPxsKM8atD+\noLS/z+fD3NwcBEFgp5uIidPqAFwJ4y8IAmZnZ3Hjxg28/fbbMBgMuH//Pu7fv4+9vb2p6vGniN9k\nMuH9999HOBxGvV5HJpNBpVL5ztTzYDDgyI4uIZ1OB5/Ph1gs9tJpyGcdcvp5ZKzp8n5eUhMdcmJJ\nO51OWCwWZvtTbXLUxp8O6fXr1/Hhhx+i2+3i+PgYZ2dnHE2+yCF9cnjGqyolSpIEtVqNYDCIxcVF\nFkTqdrsoFAo4PDzEw4cPkcvl0O12kUwmkcvl0Gq1OMr/6KOPsLi4iKWlJQwGAzSbTRgMBgwGA8zO\nzuLzzz/HJ598wrXIUV5MjUaDHcRhY/2iUTEJ/wBfp1g9Hg9nQEaBYaNkNBrhdDpht9uh0WiYw6DV\naiFJEjtSKpUKc3NzkGUZ8XicyzDjRrfb5ezVizhZBCqJabVaAH860fGyIYoi2u02MpkMPv/8cxwc\nHAD4eu9Uq1XOxg0TXUmDgLJ1xWIR1WoV2WyWeQGjJrpSC7ZWq0UgEMDS0hJCoRDsdjvOz8+RSCRw\n584dJJPJ514HZRFepEzzys8x0k8fEwRBgMfjQTgchtfrRaPRwMHBAU5OTpBOp6ci6qcRkSaTib3U\n76CZKXsAACAASURBVH3ve3A4HPjDH/7w3EJEZIyr1SrK5TITfFwu158QTF50fcCfGi+6fF8lKh7u\n4zUajRAEgT9zWNBjFKCfTdoEq6ur2NjYwOPHj3FycoJcLse10Rf5TBJ4oWh2mO39MqA+Zmo7pIux\n1WqhVCohFosxz4LIfmRcq9Uqcrkcl7xcLhdUKhVEUYTb7YbVasXKygrsdjsqlQr29vZwenrKjHrg\nciM7chSHo7WXPYO0/yjD5XK5UK/XEY/HL229BCLMqdVq6HQ6mM1muN1u+Hw+qNVqVKtVqNVq7vOn\n8eAqlQrhcBjn5+cvXT66DDz5nl/0ndMdRY7NOFpwO50OSqUStre3eYoe7W0S3HryOWhWS7FYRC6X\ngyiKfO8Ni6CNau1EyDabzfD5fFhZWUEwGITVagUA5PN57OzsPPddPExsJbVIukuoewS4/OzLlTD+\nwNe9/efn5zg+PubWMqopTUMPqCiKcDgcuH79OsLhMObm5rC6uopSqYRoNIpsNvtcn0OeIZHtiEFt\ntVovEAZfBpdtgGlT0z/VajXa7TZOTk5gNpvR7/chyzKMRiP/ucve4CQr7Pf7EYlEMDs7C0mScHp6\nitPT0xe+4OhZKCokZ4wIdpVK5ZWeIZvN4ujoCE6nk+vl5XIZpVIJlUqFhVeGIzuKhn7xi19gf38f\nt2/fZlnrDz74ADdu3IBWq0UkEsE//uM/4v79+7h37x7u3r2LXC53aZfLsLNHn/eql/BwBEQZM7rw\nXyY6elZdngw/ldDIwPv9fqytrcFms2EwGODg4ABWqxVzc3N48803oVar0Wq1UKlUcHJyMlFBoifb\nP1/m71PGKJ/Po9FojDSKpr3R6XS4pAV8U0IhIzj8XZEz2Ol0UCwWkUwmoVKp0Ov14PV60Wq10Gg0\nRto9RARms9kMj8fDOhUGg4EzXq1W67l/PnE11Go17HY7XC4XKpUKisUi8vn8BcXRy/wuXnvjT5ex\n1WqFxWLhC7JYLHIbyZPtTmq1GmazGcFgEOFwGBqNhj3OUqmEXC6HaDSKfD7/yk4DXTYGgwHBYBA/\n+MEPEIlE4HQ6odfrkUwmkc1mUa1Wn/vzAKDZbKLZbLLqVSgU4pRwsVi84DG+yOe+KugSValUMBgM\nsFqtHCUvLCxwXy4dILvdztFpv9+/1BINGem1tTVOzXk8HnS7XWZrv+gFodPpsLy8jLm5Ofj9fhQK\nBWQyGZydnb2SlgS15R0fH+PBgwfY2NiA0WhEr9dDsVhEOp1GrVZjxTvgTy+CZDKJer2OTqcDrVaL\nZrPJJLv19XUYDAYsLi5iMBhwliEWizGJ7WWzRoTh1OywA/AqFxaJAs3Pz2NjY4Nr2jSt80W/v6cJ\nBg1H/OQoUpeB1WqFVquFRqPhdj+LxQK/34+5uTmIosgOGTG/AfD3RBHcOOu/L/szRFHktP9wT/0o\n1j38eYPBAO12m6P94c6np/0darGks+dwOGAymbC4uIh2u80lrVG9b8qQuFwuBINBhEIh7rAgzsF3\n3b/Dz0b3pN/vRzAYxOzsLOr1OkqlEhKJBFKpFNLpNJ/ny8Jrb/zJ0Ljdbng8HsTjce6xpA4AygIQ\ntFotgsEgfvKTn+Cf//mf4XQ6odFoUKvVsL29jbt37+LnP/85SqXSpdReyDmJRCL48MMPMTMzw21u\npVKJW7eeB3RR0efq9XoEAoELRMHt7e2XGmR0GUpqlBI3GAzwer2IRCIsTfz2228jGAwyE1wQBPh8\nPgSDQXg8HtYDuKwDS+/99u3brPtgMpmYcU7f74s8m9lsxl/8xV/g3XffRSAQwO7uLu7du8cG+mVB\nF+De3h4MBgP3WguCgEKhgHg8zgORvu0zSqUSPvvsM/5v0WgUOzs7+NnPfoaNjQ3Y7XaEQiFYLBa8\n8cYb2NvbwxdffIG7d+++svEnDBv/V4Usy7DZbHjrrbdw+/ZtNBoNZLNZaLXalyLWUbAwfK7pDiER\nrXA4DLfbzfLV1C0kyzIikQj/OUEQOOPT7XZZe95sNqPRaLAGAGnQEx9nWglgFM1aLBZO/79shuVJ\nPO1eGc5SDJcser0eO2NP7iXKBgwGAxQKBaTTaTgcDv6+yuUyEonEpRvKYZCYj8/nQygUwvz8PBv+\ncrn8XByi4Wci4baNjQ0sLS0hHA4D+Jr7EI1G8eDBA9y5cwe5XE4x/sPQ6XTc+uZwOJDNZi+ki0gU\nQhRF9Ho9vPvuu3jrrbdw7do1LC0tYWZmhmstKpUKq6urcLlcMJlMCAaD+N3vfvdK3
QJUk19aWsLK\nygpUKhVHLEdHR9jb23shXWqPx4PFxUUsLy/D6XSiXq+j2WxyettqtUKv16PZbD53/fllL+snhYnU\najWuX7+OGzduwGazsdoVKektLy/DYDCgWq1eEDMym80AcGmeOj3PzMwM1tfXWX5YpVLh5OQEe3t7\nfEE8z8+jSG92dhbLy8u4ffs2ZmZmIIoip3xfVRaY/m61WkUmk+FIP5FI4KuvvkI0Gn2pzEKlUsHp\n6Sm++OILSJKEGzduXHAU6TsYJi++CujzKLvzsk6AJEnQ6XTY2NjA7du3sby8DKPRyBmjl61HD+/Z\n4RY9OqeCICCdTqNQKHDGTq1Ws5BWLpcD8DX58M6dOxBFkRnog8EAOp0O3/ve93D79m0mO9brddRq\nNRSLRXz++efY2dlBJpOZOuExSZJgt9thtVovZPAuI/v5rAwC/fvwXqHyw3fxDvr9PlqtFt/5JDA2\nPFJ5FPVyo/H/sfdlv22m1/kP933fSVEUta+2xx5ZdmbNFEiKFkiQq6L3BXpZ9D/4/S76f/SiKBAU\nvSqapsE0aTKZjMce79oXiou47/su9sI4x5Rie2yLlD5p+ADGjBeRL1++33u25zxHC4/Hw51lx8fH\n3P68v7+PWCz2yvfr5z/1k5zv3r2LH/3oR8x5yGazrN/SaDRQqVT4jhkkLr3xp1QbXe70wFHaRa1W\nw263MxHkL//yL/HJJ59gYWGB63X0b9vtNgwGA9fRpVIpAoEAfwHvA4lEwsIZDocDnU4HpVIJjUYD\n4XAY0Wj0e79U8sIBwOfzYW1tDdPT09Dr9ahUKuw80MVLCl/vine9qOly6H/fhYUF/OQnP4HD4QAA\npFIpKJVKGAwGOJ1OJvH0ej3eG4VCMfARv2KxmIU3bDYbS8SGw2E8f/4c2Wz2jZ0VOp2OIz9i9M7O\nzmJmZgYejwcSieQE6aher595/dTrnEwmsb+/j16vh93dXRwcHJyozb8LSD9ic3MTdrsds7OznJqk\nWuugBtDQxUbSvcS8fpdsEjHOTSYTvF4v7t69ix//+Mew2WxMJOxvo3qfNfaPjD39d81mE4lEgjtq\nyIElnft+NT9K/1IJQqVSwe12Q6VSYWZm5sTQK0pTkzRwoVAQjPEnB51KcDqdDtVqdWACRnQuAPxZ\npqa/lk3fCWkn9JM9X/V6lD2ku7larXKWl77ns2YtTmcdgJezYyYmJmC321nq+fDwELu7u6+c5Eiv\nQ46JWq3m9S0tLeGjjz7ijGQ6nWbSZTqdRjab5U6GQeLSG38yPMViEdFolFPpRPhwuVz42c9+hunp\naZjNZlgsFuj1ekilUsTjcRweHrKH2Gg0WFLVZDJhYWEB165dQ71e58v4bVrb6N/QASSZzWKxyINs\nSCWs1Wr92ZjefojFYhbCEYvFuHHjBn7yk5+wSE65XGZ9cdLW73+Q3gbv29LTL0Msl8uZCa1Wq2E2\nm09MPCMCFem1E5OVFL1I4ncQoJShyWSCx+OBXq9Hp9NBMplEJBLhaYJvwvz8PO7evYsbN27A7/ez\n1nin00E6nUYmk0E6ncbu7i4ikQgL2ZwVtVoN0WgU//u//8uqfeTcve8l1mq1mONAxrNQKCAYDGJr\na4udobOgnzCnUqlOZLPe1fj7fD7cunULf/VXfwW/3w+9Xo9CoYB4PI5QKIRYLPbOnJbT73E6yqT+\ncspKUc2YHAUy+P3GiDKM9GfdbhehUAi//OUvce/ePayuruKDDz7AtWvXOKj47LPP0Ol0sLGxIYg5\nEvS9eTweLC0tYXp6GgqFAnt7e4jFYgMZb03ngoTLXhf903/7uRyvupvo9YxGIz+XdL/27+lZyI/9\n73U6a0EBD3E6RKIX0sL379/H5uYmksnkCae33/DTzxNHoN1uQ6/Xc6TfarW49JnL5fD8+XPs7+8P\nZeDYpTf+9PDSl0O9kvQQG41GLC4uYnFxEQaDgR/y3d1dbG9vY2triwfl0IhcmUwGg8FwouXlfUG1\n3EAggE6ng0AgwCktmmKlUqmws7ODg4ODE59LJHqhCeD3+2G1WlGpVOD1euHz+XB8fIxqtQq1Wo1G\no8EHtNvtvjay+b597P/v2362fq35Xq+HSCSC7777DolEAiKRCNFoFJ1OB3K5HF6vF2KxGIlEAjqd\njicVkhdNkdVZQQ9eLpfDzs4OstkspFIp1wFtNhuazSYkEskJcZ9er8dtXtevX8dPf/pTVovUarUs\nl0vCM0TKyefzzO496wNKKWS6KAdx+bbbbY4gWq0WNBoNdDodlxhSqdSZozyKgk9HW/T77/tZsViM\n8fFxLC4uYmpqCisrK7h58yZUKhUrvh0dHXE/9/s6Q/3R5GkG+eui0jfhdGtdp9NBMBhEIpFgKeOt\nrS3Mzc1henoaFosFMzMzsFqtKJfLAyW4ni7ffR/hjAKLsbExTExMYHp6GlarlRU9q9XqQLJx9F5k\n/L4Pp7/b0+TM/lJVf4mXOoc6nQ4KhQKX9c4a/fe/hkj0QsQnFovh97//PTY3NyEWi7G/v49QKMQZ\nIJVKxfdNvxNCZ42yV6RISEPFJBIJ3G73iT+n8z5oXHrjT95S/zxo0ponSVCz2cyeVaVSQSgUwm9/\n+1s8ePAA29vb+Ou//mv4/X6EQiHU63WuLebzee4Ff9sDdPrQkhf75MkTPH36lNOaNpsN//AP/4Cb\nN28iGAziP/7jP3gOATk0FDEvLS3B6/UiEAjAYrGwTO7x8TGnQyuVygmSzts+aK9a97vsPXmpFNE/\nffoUsVgMZrOZCZQ05W9tbQ0mkwmZTAaLi4tYXV2F3+/HxMQEVldXUSqVEIvF3nkdrwNFLzRcxuVy\nMQ+AWiOLxeIJA6tSqWC32/Hhhx/iiy++YGeKnCtKN5Ox63a7A20tosthkL3irVYLqVSKSx1UBiEt\ngUGULCgao7kCtBdk/F+3N/215Zs3b+Lv//7vodfreegSib1Eo1FEo1HmsrzvZU7fYf96Bh1RkQjR\nw4cP8fjxY4jFYnz++ef4+c9/js8++wwulwsul4vLRYNAv4GlzMWb7iw6v263G1988QUz5vV6/QkF\nw0GhP9PyNnjTv6PSEt3nxM2g8eFKpRKpVIolsc9ytl/FGSgUCqhWq1hfX+fAkzIRs7OzLAVNJWj6\neXoNEjai7ymRSCAej6NSqcBqtcLv97NdI77JMHDpjT/NiN/d3YVWq8X4+DjXBkk3v9PpYH19Hbu7\nu9jf38fBwQEODw+5xS4Wi0EikSCZTEIsFiOfz8NqtQIAt+mc5YLo//LIc0yn0/j3f/93rK+v87S7\nX/ziF9jc3MTR0RGazSY/zM+ePcP+/j6y2SwymQx2dnZgt9sxPj6OhYUFaDQaaLVariO9zwjj942i\nAJy4TClNTXXTWq3GBnNrawtyuRyNRgOZTAbxeBwGgwETExP47LPPcHh4iO+++24g0TNFzkSGLBaL\nyGQyyGazmJiYwMrKCu7cuYNqtYpQKMRj
kdVqNWsBPHz4ENVqFVKplGVe5XI5xsfHYTKZkM1m8fDh\nw3e+2N6EYTLBNRoNxsbGoNVqWWCJ3vOsddHTev46nY4dATqL/eI5wItLkIb+jI2N4c6dOxgfH0er\n1UK9XufaqV6v5/ILZSrO2st+XiAjvLe3h//+7//GzMwMnE4nfvzjH7NTNihQREx7TGUvp9MJkUiE\ncrmMo6MjLsmYTCZMTU1hYWGBCWgqlQoGgwGrq6sIBoN49uzZifbS90F/EHQW9Nf6qQRHswDIANO5\noNJjqVTiNbzPOX9ViaK/bbP/F80/oWCMuhZOvy+9Bv3/7373O+zt7aHdbuOjjz7CxMQEFAoF3+sK\nhWIoKrWX3viTktje3h70ej2cTiesVivPm7dYLMhmszg4OMAf/vAHrK+vIxwOs/CPVCpFKBRCtVrl\nvm8aQUpKamdJ+7/q8FD0QbryNOVubGwM4XCYOwIo2qRRlc1mE0dHR1hfX8fMzAxWV1e5HYwINZ1O\nhyPR80B/VEcZgFdpFnS7XRwdHfHvqbd8YWEBarWahTJkMtmZyX/0cPV/byKRiAfKHB8fY3FxEU6n\nk50mmhpH/dzZbBZfffUVCoUCpFIpzGYzxsbG4PF4mNRFGQW1Wv3ekfpZa5LvAupjV6lUvM+DJlr2\nvxcRslQqFZxOJ4xGI8xmM4sj1et1eDwe7pUeHx+HWq1GLpdDoVDgDhb6mV6vh2g0iocPH753//n7\nlLfOil6vx/Xzra0tGAwGXLt2DVtbWwN9H3LCSO57ZWUFKysrmJ6ehlgs5sCB5IepZu7xeJjQ2O8A\nzMzMYGZmBslk8rXDu94G/Rmzs6JfWpd0WXK5HGeviHdE7Yr9vCtay1mdkFeViIg30t9t8DrOwelM\n2MbGBjY2Nvgev3HjBrxeL3q9Ht9PhULhTGt+FS698QdebObBwQE7Ak6nk1P+7XYbDx8+xObmJh4/\nfoxUKsXKVXQBbWxsMPmCaorUjz4spSgyUMlkEn/84x85ZUqdAIT+3mA6uLQmmUwGn88Hg8GAer2O\nTCbDo3XPO7p5n5/JZrP4l3/5FwQCAfzt3/4tOp0O9Hr9UNpayPsuFArY3NxEKpWC0WiETqdDp9OB\nWq3Gp59+ColEwsOT8vk8Rw40MXJpaQl37tyB1WrlaXM0lfF9WvHOywjRWU+n07BarcwkzufzZ35t\nOsu1Wo1JnsTAlslkWFpawqeffoqFhQWMjY1xKrPdbkOn00EulzPhNhgM4uHDh0in0/B4PPB4PDz5\nTSaTQa1W8xjm93XMz9PwE6jL58svv0S9Xsft27d5fsMgQN8BdSbo9Xp89NFHWFhYgMvlQq/XQ71e\n5x5yEksirgoNkCJOi1arxbVr12A0GvH1119jfX2dZ4e8K7elvy5/FlBpidQ1U6kU0un0iTVR1oiy\nAe12mzMi5PAOe8rrmzgLb0Kv18Pjx4/xT//0T/j5z3+OiYkJliUfZDmUcGWMf6VSwdHRESQSCex2\nO8xmM1QqFafBw+EwYrEY12L6CXL94iYajQb1eh2bm5vMph+WIaUL77TRON2m0n94aM35fB7hcBjP\nnj2Dw+GARCJBJBLh/uGLuODeBfSgBoNBuN1u1Go1qFQqOBwOvoiGAXptInYqlUpIpVIYDAYWbtnd\n3WXD32g0uBWKWj5rtRp8Ph+sVusJuV2hg1LpIpGIRYnIuTkrKLXb76QCL4x/tVpFqVTiNDNxDegS\nJuMTi8Wwt7eHjY0NHthCipsul4v7z8nIXSaQASwWi6hUKlwiGSTI2FEplIi3ZrMZMpkMvd6LNlun\n0wmXywW5XI56vY5UKsWdQu12G1KplDVDLBYLZmdnWbK2f5Tx2575QWZb+vlMJLDUb2xpTdQW1+v1\nOBLXaDSsuzBsvO9nzeVyKBaLmJyc5PbQQbVcnsaVMP7ASwdgY2MDW1tb/HCRuly9XkepVOI0zOs8\nUZVKBaPRyFmCd1WBG9RneVO2gQxnOp3G06dPuZ3t8PAQ8Xj8UhiiflC6y2w2w+PxIJ1OD8wovQ60\nh+Rk5HI5BIPBE2siUAYoFoshl8thb28PMzMzWFtbQygUQiqVOrcyy/tCJBKxURCJRMhkMgM3/pQO\nbTabvH/NZhP7+/uo1+uYnZ3F1NQUZDIZisUivz+VW3Z2dvDNN99wS2I+n0coFIJWq4XX6+WSC40H\nFrqDexqkoKfX64dS7qHvoFKp4OnTp3j+/DnUajXLFBNJbm1tDXfv3oXZbEa1WsXh4SGy2Sw7hBT5\nT01NYXJyEl6vl8tmdHe+a3ZxkJ+1v7T5unvy9PuJxWKo1eqhzRAZFIhAGAwGORM2iOzcq3BljH8/\n6ICSDOfCwgISiQTy+fxrvVCSpAXACl+UAhMiyHjl83k4HA7odDomW102pNNp/OEPf4BKpcLU1BT2\n9/cHSoR6W7xu78jQNJtNJjcmk0kEAgHUajWo1Wpuc7uooS7fByKaJhIJWCyW9+oIeRf07yVl5f71\nX/8VX331FYtx1et1KBQK7sgJhULY3t5GpVJBt9uFWq1mJz6ZTCIWi2F8fJyHuVD0eVnOPLX6Wq1W\nTE5ODnX6H2VgKK1fr9e5Dv3s2TOIRCJ88sknrG6aSqWwu7uLUqnEA7domqTFYkGtVuPXeNW8lPMC\nrc1qtb61HDVxDs7K3zpPUBbNarWi2WyOav5vC7qsidBHdWT6u9OgcYoGgwEAuC1qmCn/QeH4+BgK\nhYIJLpcRxWIR6+vruHnzJiYnJ9kJu0j0i3TQWaLIikpJuVyOMxbEQBci6BnQ6XT8ufqlbYdpPKnz\nptFo4KuvvuIULJW2TCYTTzKjgVp0QZNWB/CyXEOkKqvVinw+z47CZXAAqH04kUggnU4PPf1MpRgq\nBQAvvu9AIAAAuHbtGmv5t1ot7jICwCWBXC4Hq9XK6ocATkjTnifo+ZPL5XC73Uin068VR+tH/9Q8\noTrnp0HETYPBwAO/Bv4eA39FAaHZbCKZTOK7777D4eEhzzjvh0j0YvKfwWDgeczpdPrMqmrDBk2s\n83q9cLvdMJvNUCgUF72s9wKlivV6PXw+H9Rq9UUviWcVEPeCui9UKhX3QxsMBni9Xvj9fkGs+XXQ\narW4fv067t69i9XVVWbOA+fPeqcOmlarhWaziUwmg0AggMePHyMQCPB0t/7UMk3h1Ov1aLfbMJlM\n+PDDD+FyuViP/zKApljSAJqLUPjr9XqshkriV8vLy3C73XzmqU1Op9NBIpGw0Aw9B/V6/UKMKL2/\nRqPB/Pw8HA7HK+/009BqtXA4HHC73bBYLBfmvLwLdDodxsfHsbq6iunp6aGs90pG/gQSS0kkEnzJ\niUQi2O12rK6uQq1Wc+2ciFu1Wk3Qhl8kejE1b3x8HD6fj9UJz9qLO8j19UfMbwPKvJC4S3+/7kWB\nyIAktkSDZuiXRqNBr9dDqVRiOWkhQi6Xw26346OPPsKtW7dgt9uRy+WYJX2eeJXDQY7fqwxhv4Jk\
nsVjk1imn04mxsTG4XC40Gg3E4/GBKCEOE5Qun52dxfT0NDtBF4F2u41yuYz9/X243W5cu3YNNpuN\nSYDk9NKEuv7vjaYTXkRGlJzHfD6P7e1tKBQKLC8v8zhqckronFBLIHFNdDodgsEgdnZ2BJ8toudi\n0CqQ/bjSxh94qbbVP9DD7/fjH//xH+HxeJDNZvGf//mf+PrrrxEIBFAsFgVdFxKJRFAoFFhZWcHC\nwgKUSiV6vR7S6fRAx+GeZX3vWvumsa3Ug07e/EU6MwqFAnq9/sT8Ao1Gwx0kNMIzGo3i6OjoxCUp\nJNCwmS+++ALT09OsZxEMBt96quFFgS57GoIFvCxTeL1e1uLI5/ND0T7/PrytuBOp6ZFY0dzcHEKh\n0IWWFJvNJp4/fw6LxYLl5WVYrVbMzMzAZrOxNHapVEI6nf4zQZv3CYwG5cwTGS6RSODTTz/Fj3/8\nYzx79gyHh4dIJBIcBEmlUm4JvHXrFj788EPOKhGBV6gQiUSoVquIxWJIJBIniMiDxJU3/v0gWdZC\noYAHDx7g/v372NnZwfb2NsLh8FD6ywcFYsRbLBZ4vV6srKxgamoK5XIZW1tbePLkCdLp9EUvEzqd\nDg6HA9euXYNMJsPBwQHC4TCSySR75CKRiMcS2+12TExMYG5uDmazGfl8XhCtimRM5ubmMDMzA5/P\nxxPEaCRwPB5HKpW6EG2Ft4FIJMLi4iI++ugjmM1mNBoNxGIxPHjwAH/605+G3lFxFtD3f5rUJxKJ\nEIlE+BLv108/T5BTSFMHASCfz2N3d5fluylDpNPpoFQqoVKpYLVakclk8Pz58xOiV+eNbreLVCqF\np0+fQqVSIRaLIZlMIhqNciYml8v92bP4Pl0Wg+aXkOBZOp2G2+3G2toafvKTn5woAVGGlCS89/b2\nmFAqxGeVQFnGbrfLztewyM8/GOPff+jy+Tzu3buHYrGIzc3NPxPWESIobehyubCwsIDJyUmYTCYk\nk0mEw2Fsbm4OlT38tpBKpTAajfjkk09gsVjw6NEjbG5u8vREYm/funULt2/fxvj4OBwOB0wmEw4P\nD7G3t/dKhcDzBrWC+v1+rK2tYWVlhfUiAoEACoUCjo6OeKKi0EB7PT09jWvXrkGtViOTyeDx48d4\n9OgRtra2hpZOHBReZ2hIzIokhC/KWRSLxRgbG8PHH38Mo9HILXbEBSFuiMlkglwuPzHjY3t7G8lk\n8tzXTKCMye7uLjtQVMIiSe6z6uIPC9QOl06nEYvFMDU1hQ8++ADXr1/n7gXilNTrdfzqV7/C5uYm\ntre3WXb8ooOL10EqlfJnoO6cYTD9gR+Q8SeQytvXX3/NNX4hp/mBl/O2bTYbJicnWc8/kUjgyy+/\nxPb2Ng+xuGjQBDaSMHU4HLh9+zbK5TJ0Oh3LVVosFr4USev/m2++wa9//WtEo9ELv3So1k+1ULPZ\nzCS0YDCIYDCIarUqmJnspyGXy6HT6WC321mN8NmzZ/jnf/5nJBIJHgx1GUHlAOoGuIjnl9roHj9+\nDIVCgZ/+9KeYn5/HF198Aa1WC61WC4lEwoO2jo+PUalUcP/+ffz+979njf2LArUK53I5bgekzoDT\nUw8H8V7DQCKRYNXKer2OxcVFJt5WKhWkUikkk0k8fPgQDx48QKlUEnSpi+55i8UCpVI59Pf7wRl/\n4EVKN5PJXPQy3go0dtPhcMDn88HtdkOhUCAUCiEQCGBvbw+ZTGYg8pmDAOltP3r0CBqNBj6fJ5TS\n9gAAIABJREFUDzabjbsT5HI5gJdkrlwuh8PDQ9y/fx/ffPMNtra2BBH5G41GuFwuqFQqlMtlHgr1\n9OlTRCIRJogKESRoYrfbYbFYIJfLEQqFsLW1hf39/YFPbLsIkJbHRb4/zav405/+hG63y/NFDAYD\nTwYFwFFcLpfDV199hZ2dHZRKpQt3HCl9TneHUEnOrwNNH2w2myxkpNVqWcGStFoePXqEVCol2ExG\nP2QyGQ/eos83mur3A4VYLIbb7T7BWK3Vanj+/Dm2traYtS2Uh/b4+BjFYhH/9m//hmQyib/7u7/D\nxMQEjEYjZDIZRxuFQgH5fB6pVAr37t3DL3/5SxSLRcEYJZfLhdnZWe6LTqfTePToEZ4/f45MJnPh\nF/frQNwQnU4Hj8cDi8WCXq+H58+fY3d3d6iXyQ8R+XwehUIB29vb3CFitVphtVrR7Xa5k6VUKvFU\nzoto8Xsd+tVEhXKHvAu63S4KhQK++uorfPvtt9yp0D+W+DJ9LuoqIklsysoM4zOMjL/AcXx8jGQy\niXa7jXg8zr38sVgMmUzmQshO3wdqzfruu+9QrVZ5LCWJbNRqNTSbTR7CkUwmedqeUEDEp52dHY7c\nqNYsVMNP6PV68Hg8+OlPf4rFxUWYTCbuohB6f/NlBEXN5FilUimUSiUmuEokEtbGP+/2yreB0O6P\n9wHNh6Bs0GVTfwRefA/lchnBYJBLMMPkJ4yMv8BBqmDlchmxWIzbFc9zbO+7otfroVar4eDgAIFA\ngC9Bkh++DN54NptFLpfj3wt9vf3o9XowmUxYWVnhCJQIXCMMB/2zDYg8N8L54aLLQINCvV4/NyLu\nyPhfApAX2263OXK7LBd5v0AIPZyXxZBelnW+ClQzJA2CYDCIbDYrqOzKCCOMcHEYGf9LgMs4wexV\nuAqfQeigNHM6ncbXX3/NHIy9vT3WWhh9DyOMMIJQC4Cj22mEEd4RpK6oUChYXIbSiOdRQxxhhKuG\nt1VxFCjeaN9Hxn+EEa4QKPIndTMavUrCKKO0/wgj/GAwMv6n5SUvkxfXP371Ml3c/aNjLxvr9rKv\nnSYRjs7L+eCyPqPA5T0vAJj8PDovr3+rN/3llR7pC7w83DTP+fvGPwoJFMXJZLJL16ZFe65UKiGT\nyS56OW8N2nOFQsFDfC4LaM9VKtWlGu9Mz6hcLodGo4FUenmoSLR2pVLJsqyXBdSBQ22glwW05zKZ\nDAqF4lLtOQA+6xe955fnG38PiMVi6HQ6jI2NodPpcMuc0DXN6XA7nU54PB5UKhUUCgWk02nB95iT\nROXU1BT0ej2azSbi8Tji8bjgvXOxWAy73Y7Z2VmIxWJUq1UcHBygUCgIOirqH5bk8XggFosRj8ex\nu7srGOXH10EkEsFgMGB2dhZ6vR5isRhbW1uIxWKCj+hEIhG8Xi/8fj8PkBGS1PaboFAoMDU1BZvN\nBoVCgWAwiMPDQ8Gr4IlEIlgsFkxOTkKtVqPb7WJnZwe5XO5StPpNTEzA6XRCpVIhnU5fqOLmlTb+\nUqkUVqsVH3/8MUqlEp49e8ZiG0K/VKRSKebm5vD5559jb28P29vbPG5YyGsXi8XQarX49NNP4fV6\nEY1G8e2337K8plDXTg7X5OQk/uZv/gatVgvhcBjFYhHlclnwF6JEIsGdO3fw6aefol6v4969ewgG\ng4LvfxaJRHA4HPjZz34Gp9OJZrPJE9uEKGBFoPOytLSE
X/ziFwDAHRWkyiZUiEQiqNVqfPrpp1hZ\nWYFMJsNvfvMbxGKxSzHzYXx8HD/72c9gNBpRLBZRrVZRqVQEfb8AL/b9+vXr+Oijj6BWq/H48WPE\n43Fe+3njcuVL3gEikQjj4+NYWlqCz+eDRqNBtVod2kU4yJS8VqvF/Pw8Zmdn4Xa7eSjIsB7KQa7d\n4XBgeXkZk5OT0Gq1yOVyqFargn4ogReDcCYnJzE3Nwev14ter4dkMjk06eRB7rnJZMLS0hKmpqZg\nsVhQqVRQLBYvReRM0tU+nw8qlQqZTEbwzjkAqNVq+P1+TE9PY3x8nMc9C905B16cl6mpKUxNTcHh\ncKDT6VyKLhCpVAqHwwG/34+5uTmYTCbmcAl97RqNBm63G9PT05ienoZOp4NEIrnQNV1Z4y8WizEx\nMYHr16/D7/dDr9ej1WoJ3qsFAL1ej6WlJSwsLMDtdkMkEqFer1+KtbtcLty4cQMTExPQarU8InQY\n6CfOnBUKhQLT09NYWFiAy+VCt9sV3NyE18FsNuPGjRuYmpqCwWBAsVjkUoWQ197voHu9XkgkEiQS\nCdRqNcFf6FqtFgsLC5iZmYHT6eSBVpfB+DscDnYWjUYjj6UW+nmRy+Xw+XyYnZ3F5OQklEolT2UV\n8roBwGAwYG5u7kRpjtZ9UWu/ksafyCDz8/NYW1vD3NwcbDbbUIkhg/oCxWIxTCYTVldXsbKywlP8\nhnnAB/W6IpEIPp8Pd+7cweTkJDQaDev4D2Ptg3xwVCoVlpeXsbKyApvNxhMHh5V6HuRrOp1OfPTR\nR5iamoJKpeJpZkK/zEUiEebm5rC6ugqXy4VOp4NgMIhSqSR4R9dkMmFtbQ1LS0vQ6/XI5/M8D0LI\new4Afr8fn3zyCfx+P8RiMUKhELLZrODT5mq1Gh988AFu3boFh8OBer2OcDjMc0GEvHaXy4UvvvgC\n8/Pz0Gq1POTpIjMuV9L4EywWC8bGxnisabPZFPwB7/V6UKlU8Hq9cLlc0Ol0LNUq9AMOvIhCJycn\nYbFYIJFITszQFuraRSIR5HI5vF4vxsbGoNFo0Gw2kc1mL8VlTlGF3W6HRCJBNptFPp8XdN0ZeLHv\nHo+HMxaNRgPhcJg5FkLed61Wi9nZWYyNjUGhUCCbzSIej1+K80KlOZvNhna7jWAwiEwmI/i7UaFQ\nYGZmBtPT09Bqtcjn8zg4OECpVBL8Wbdarbh16xbGxsYAAMFgEJFI5EKz0VfS+FPPqkqlgl6vh0ql\nQq/XQ6VSETxbHnhR2zKbzTAYDOy0CG3q3eug1Wpht9uh1WoBAMVikY2/kCGRSHgUq0KhQK1WuzTG\nX6PRYGxsDAaDAcCLoUSXJe1vsVjgdruhVqtRr9dxdHQ0VH7LoKBSqTA+Pg6bzcZyymT8hb52YsuT\nw0Ujq4Vu/Cnt7/V6IZfLkUwmsbOzI6hR4K+D2Wxmh6vVamF7exuBQIAD0ovAlWT7Ux+lTqeDWq1G\nrVZDtVq9FLPMqe/WYDBAJBJxzVzohBzqMddoNNBqtWg0Ghz1C5ltDoBnrut0OkilUmb4C535LBKJ\nIJPJoFarYTAY0G63kc/nUa1WBTvxkUDnnPr6C4UCCoWC4M9L/55rtVoetV0sFgXvKEokEtZSUCqV\nKBaLSKfTKJfLgg+K+ve81+shkUggm80Kvq2yX79Cq9WiUqkgGo0il8tdeMv5lYz8qeavUqmgVCrR\narXQbDYFTwyhFj+lUgmNRsNz5C/D2klTXqVSQaVSodPp8NhhIT+cwMuLRa1WQyKRoFqtotFoCN5Z\nlEgkbECp55mcFiEbUOBFFKfX66HRaCCRSFCpVFAulwVPyhWJRNBqtTAYDFCr1eh0OsjlcpfCQZfL\n5TCZTNDr9ZDJZCiXy8jlcpfivGi1WlgsFmg0GnS7XSSTSXa4hHxeKItrMpmgUqlQqVSQSCT4rF8k\nrqzxl0gkkEgk7AhcdFvF24KUCGntJGEpdJAhksvlvObLsG4AUCqV0Gq1rEQoZH5CPyQSCYxGI3Q6\nHcucCj3VD7w0oB6PBxqNBgDQ6XQE7yQCL4WgXC4XZDIZOp0O6vW64NdOe058HODF7HihZ7eAl1oQ\ns7OznG2hLJHQoVKpMDc3B5/PB7FYjEqlglwuJ4hMy5U0/jKZDHq9HlKplOv/vV5P8MaIHlCdTgex\nWIzj42OOJoS+doVCAZvNBo1Gg16vxxPkLgMMBgOcTicUCgWTK4V+mQMvIjm32w2bzQaRSIRWq3Vp\nWkKNRiNmZ2dhMBjQ7XZRrVbRbDYvellvBIkp9av6NZtNFIvFC4/i3gYGgwHLy8twuVw4Pj5GqVRC\nqVQStLNIQcT4+DhWVlag1+vRaDSQTCZRrVYvenlvBN3ny8vLmJqaglgsRqFQQCqVEsTdeCWNv1Kp\nhNls5qlmlDYXehQtFothNBphNpshFosv1WWuUqngdruh1+txfHyMRqMh+BoowWw2w+v1QqlUotPp\nXIqaOcko+3w+OBwOAC8iuXK5LPgULvCCdLaysgKTyYROp4NSqTQ0PYhBQiqVYnJyEjMzM1AoFKjX\n69wSKnSYTCbcvHkTHo8H3W6XeRZCd3RJs+XmzZswGAyo1WqsjCdkkLz8jRs3MDMzA5FIhFwuh0Qi\nIQhH90oa/1qthkQigXQ6zV65WCyGWq0W9HCf4+NjZDIZxGIx5PN5NvxE1JFIJIJ1XqrVKg4PD5FI\nJJhcCYBLGEJdNwAkk0ns7u4in89zDZGiPKGuu9froV6vY3d3F8FgkFnDtG4hO7q9Xg+pVAoPHjxA\nIpFgx1zo6wZelCcODg5Yw5/aROmsCHntpVIJm5ubiMfjOD4+hkwmuxRlul6vh0wmg1AohGq1CrFY\nDJVKBZlMJuh1A0C73UYmk2EnSyaTQalUCuK8XEm2P0VvtVqNh+I0m01IpVKuLdJ4XyGBLnRS3ALA\nPaxU/xdqPZrqcJVKBY1GA5VKhWtytG5gsOI2g0K1WkU2m0WtVuO1k0ECINg6ervd5p7+drvNuvi9\nXo/LRoAw95xYz5SpoOwcXYhCHTHb6/WQy+WQSqV4zdQiJ+TzIhKJOF1Od0p/OVTIawdePqPEUegf\nAS3k83J8fIxyucx3CoATztZFjiQWZgh8RsjlchiNRmaGBgIBRCIRjqSF6imKxWJoNBrodDrI5XIU\nCgXs7u4il8txak6IaxeJRFAoFCfYuLFYDEdHR5wOFWoULRKJoNPpYLVaIZfLUalUEAgEkM/nAbwk\njwoRMpkMVqsVBoPhRNao2Wwy0VWIWS6RSASNRgOXywWVSoVGo4F4PI5CocD7LdQR1mKxGAaDAUaj\nEQBQLpc5jStkcjFlKEg7pNVqIZ/Po1QqAYBgR56TgaTxvcfHx6jX68jn82i1WidGngsNpw18q9Xi\nIUS9Xg9SqRQKheLCzovwduyMEIlEMJvNWFxchNVqRbfbRTgcRjKZRLvdFtzhJlC6dmxsDH6/HxqN\nBrFYDIFAgOV
Ohbp2ANDpdJiZmYHdbgfwIpWeTCZPeLunHa+LjjDowbTZbCxHTEIzdClKJJI/iyqE\nkH2hmr/X64XT6YRYLEaxWGQyEeku9K9TCNERRZlGoxGTk5PQ6XRotVrIZrMol8t80VM0R5EpRdcX\nue/0jNpsNlZTrNVqnHmRSqVcAgBeZrz6xXMucv0qlQpOpxNarRbdbhe1Wg31ep0dA3K4KOvVnx24\nyLVTUGQ0Gjl7S6Rcao2mc033S//aL2rddF7UajUUCsUJ8jm1RiuVSuZ30ZnvP+vDXPuVNP4ulwtr\na2s8Ee/o6AipVOrCL77vg0wmw+zsLJaWlqDT6VCpVHB4eMieIl2EQjKgwIs9NxqNuHHjBrxeL8Ri\nMTKZDFKp1IlU7mlPWAggidnFxUUYDAYcHR0hFouhXC4DeGn8qVQEQBCEOhrLOjU1hbGxMYjFYpTL\nZWSzWbTbbUgkEr5w+vdcCL3o5KDPzc3BYDCg0+mgWCyiVqudqKHTmZdIJJxiJ1zUZ5BIJHA4HHC7\n3ZBKpcz273Q6LFxEeyyTyXB8fPxnsuLnvXb6/tVqNcbGxmA0GtHr9dBsNtnokEYHGX+JRIJWq8W6\nCxf1vNLaDQYD7HY7R/90FmQyGXcYAeAAibQLKGN6EeeFHFmj0ciKp+QQkNOi1+tPtKZTCez4+Hjo\na79Sxp/UlBwOBzOJ8/k8CoXCibYQ+nd0UNrtNjqdzoUaJLo4fD4fJiYmIJPJTkjMAi+jPYVCwZ75\nRQvp0AE3mUyYnp6GxWJBu93mNiKK9uVyOaxWKxN1aOwsTeUCzv8BJe/b4XBgYmICCoUCjUYDhUIB\njUaD07hGoxE2mw1KpRIAkMlkkM1mT5RjznvtpKZIsyuOj49RrVZZF58uxampKd73er2OTCaDo6Mj\n1nI/b14APXtGoxFerxcqlQqlUok7LChSstlscLvdXL4rFotIJBLY3d1FoVDggUvnue/0jFosFu7I\nIa4F8CKyNpvNcLvdcLlcMBqNEIvFKJVK2Nvbw87ODrc0nve+y+VyaLVamM1mqFSqEwZdoVBAo9HA\n6XTC7XbDarVyS102m8XGxgYikQg7Oee576QfotVqodVquXzYr8pps9ngcrng8Xig1+uhUChQrVYR\niUSwtbWFdDp9Qu76PPdco9HwnUeOLO032SqPxwO73Q6NRsOdUru7uzg4OGAC9TCkl6+U8SdDRMNl\nlEolqtUqS57Sv1EoFPwQyGQyFAoFlMtlJktRmvE8RVPoMDudTthsNm5/ojai/nSp0+lk5cJUKsUy\nuv1po/MSTaE9NxgM8Hq9UKvVLGRBxl8qlUKtVnNbmkqlQjKZRCwWY4UxijhIH+A81k4PosVigcPh\nQK/XQ6lUYmKRRCJhA7u8vMzSosFgEOFw+ISwDkVJw5oCeBoymYznKGg0Gia2FotFHB8fQ61Ww263\nY3l5GdPT02xAw+Ew16UpE9BsNlGv18+lXU0sFkOpVMJgMMBmswF4QWolUq5cLofNZsP09DSuX78O\nh8MBjUaDdDqNw8NDtFotZDIZNJtNNBoNNBoN1Gq1czkv5FAZjUaeRVAul1GtVvne8fv9WFxcxOLi\nIkwmE8RiMXK5HLRaLZrNJjs6tO7zaG8kdrxer+f5D0QqpiyMw+HA1NQUVlZW4PV6YTAYUK1WkUgk\n2AnOZDIsDFSpVM7lvNDdotPpoFQq0Wg02BjqdDqMjY1hfHwcc3NzWF5ehtFoZO7O7u4uRCIRIpEI\n0uk0/2y5XD6XZ5Rkt2m2TKlUQrvdZqMvl8tZv8Dv90OtVrPxNxqNUCgUUKvV3PnVTwQfBK6c8ad0\nilarRb1eRzqdZvlN4CVh59q1a/B6vTCbzdje3kYwGEQ6nebWOqqH0WzxYYOiUJKYTafTSKVSfClS\n9Oz3+7G2tgaTycQe4tHREdLpNCuOlctllEolLhcMG3K5nL1zqpmnUil+yMjZunnzJmZnZ6HRaBCJ\nRBAIBBCPx1Gv1yGTyVAsFpHJZJDJZM5l38nhohbQdDqNSCTCM+VlMhlsNhvm5uZw9+5dqFQqtFot\n1mKw2WxcV08mk4jH44jFYudiiCh6UKvVaDabSKVSiEQiyGaz6PV6sNlsWF5exvXr1zE5OQkAyOVy\nTDKamJiAXC5HqVRCPB5HIBBAMpk8lz3XaDSsL5/NZhEIBBCNRlGr1aDX67GwsICbN29ienqatRfa\n7TY8Hg/W1tbY6MTjcYRCIezt7aFarQ597UqlEiaTCRqNBp1OB6FQCOFwGNlslgmMd+/excTEBOx2\nOwtGSaVSjI+PcxmASIK7u7vY398HMNwMQD9JUaFQoFAoIJFIIJlMQiwWY35+HgsLC1haWuLBVuSA\nUynS4XBw21o0GsX6+jrS6fTQ95x4CjqdDp1OB+FwGKFQCMViES6XC3Nzc5idnYXH44HZbD7BBzAY\nDLh+/Trm5uZQrVaRTqext7eHJ0+eDF0unZwqt9sNlUqFYrGIUCiEZDIJmUyGmzdvwul0wu/3cyBK\n48+bzSYcDgdu3bqF5eVlFAoFpNNpPHz4EPv7+wNb95Uy/lKpFHq9HkqlEs1mkw0oGc5OpwONRgOD\nwQCNRsPtL6Qz3mg0OHJTqVTckkG1sWFCrVbDYrFwmjCZTKJSqbA3KBaLYTabYTQaIZPJOPKhGQZU\nryNyDGUshj0XQCKRwGQy8WAZEuAAwGk6m80Gr9cLjUbD7Wl0WVP9i5wfvV7PGZhhS4+q1Wp+OKvV\nKu851eHkcjmmp6fhcDjQ7XaRzWY5G9M/qZD2nKIleoiHeblQtNbr9ZDP5xGPxyESiZj85/f7sbCw\nwBF/Pp9HNptFOp1GqVTiGjWl4M1mM0dGwzwvRFI0GAwol8tIJpPI5/PQ6/WYnp6GWq3G8vIyPB4P\nnxXqk6bIp39YitVqRT6fh0gkGqqzS5f59PQ0j00mQitNVPR6vZibm4NUKmUnNpfLcXmr2WxCoVCw\nM2y3208MkRrG2ikzNz4+DqfTiUqlgmaziWQyCY1Gg+npaZjNZkxMTMDlcqFSqSCZTCKTyaBYLHLL\nLjnxJOKVzWY5OzmsaYCU6VxYWIBKpUI6nebAwOFwcPaI+Bf097lcjp9DuiOVSiXsdjuXvohgOqx1\ni8ViuN1u+Hw+NBoNHkRECpEUPFitVnZM6Nmk7Eq32+UsmVqt5s9O2dKzrv1KGX+qK8vlcuRyOY6I\nqX2u3W7DaDSyoQ8GgyiXy1xD75d3lcvlkMlkPABj2EQpnU7HBMVUKoVQKIRWqwWHw8HGkUZZJhIJ\nTsFJJBI+CPRfhULBD30ulxsqQU0qlcLpdMJsNnP6NpFIsHZ7oVDAxMQEpqamUK/XWWSEeBZ0GVGd\nmqIrkUjEez6sS1Gn02F6ehoKhQLpdBqJRALtdht+vx+NRgNKpRJLS0vQaDQ4PDxEOBxGLBbjYUsk\nHkW1YK1WC4fDgVQqNdT0Pxl5n8/HUX88HudIh0oVMzMziMVi2N3dxd7e
HtLpNI+GpvQ7DQYym83o\ndruIRqNDu8ypxW92dhYmkwmJRAKJRAKNRoNntFPdWSQSYWtri6NjKsv1Z2toaqfH4wHwQtxrGGU6\ncqqdTieuXbsGAIjFYkilUtBoNPjwww/hdrtht9thMpkQDAbx7Nkz7O/vIxqNolgscmuXwWDgX8SP\nOTw8HJqzSDyh2dlZjI2N8RS/crkMr9cLk8kEq9XKpLlQKITt7W0cHh4in8+jUqmwQJrJZILZbIZW\nq4XX60W32+X26WHsOQUON27cgEKhQDgcRrFYhFwux9LSEg8pkkgkiMfjePr0KYLBIKLRKI9vF4lE\nMJlMsFgssNlssNlsmJ+fZ5GmYa2dHK7JyUkea16tVuFwOGC326HT6birhfY8EomcyFLL5XKYzWY4\nnU54PB5MTk7i+PgYz58/5/vzLLhSxl8ikZwY+RiNRpFOp/nCIBJdMpnkVDMZAdJGp2lu9Fputxvl\nchmpVOrEbPpBHxiFQgGtVotyucw98tVqldfdbDYRj8dRKpV4mhjp6VP9kKI5emgUCgVisRgrHQ4j\noqPLBQB7txQR0f4mEglUKhVmnzcaDb4AydMFwFkMSlESH6O/22FQoKwDTdqKRCKIRqMoFArMP8hm\ns3j48CGkUikT6trtNkuMRqPREwQevV7PY4ElEglKpdLQLnXaW3I0qFTRbrc5ctve3mahq2KxCKlU\nCrfbzSUZ4MW5I1IjEdiIrzGMc06GtFarcZmCosdsNotsNov19XXuQ6fJbaRlkE6nOfrXarVQKBQw\nGAzsjFGUPSzHq9PpcLq7VCqxkdrb28P29jaazSYKhQLvoVgshsPhQKlUYsNFhDuKpG02G3q93lCe\nUepQabfbKBaLTJak96lWqwgGg3xGqPe/VqtBrVZDo9FwoEE8I8peGI1GuFyuE1H0INdOpOZKpcJB\nAmVlJRIJkskkk6LprJADaDabUavVkE6n+fkkh1yhUMBqtXJmadBZFwoIGo0GR/Pk/JVKJbRaLY7g\n6VmkLAVl4lKpFGq1GjQaDd/vvV6POzakUimX6d537VfO+PfXhjKZDFqtFg/LyefzJ1opyOA6nU4A\nQDabBfAyKrRYLHC73SwXTJkEIkcNcjiDQqGATqdDqVRCuVxGPp9Ht9uFVqtFoVBArVbjCEIul3OZ\nwOPxQCQScepTqVQyO91oNMJkMiEajeLo6IgP2SA7GyiaE4lETOArFAoAwKQy6rggQRryZilNTjwL\nSj8bDAa0Wi3o9XpEo1E2bHRpDWrdNACqXq8jEonwg9gvJBKPx5nhTSx0h8PBKVHgJctbp9PxmdLp\ndIjFYvzdDWrPqfVJqVRCLpcjk8mgUqkgm81y1oou43q9ziUZq9XKUd729jZisRifOZPJBAAsAKPR\naPhipfTjIC5HMpRUM6f9IbLqaaIo1ddp3URiJQNE892Js6FQKJBIJDh6GhTplRwWuVwOpVKJSqWC\narXKJGIi9eVyOWSz2T+Lko1GI46OjhAIBNjZou+P2OuUaaQSwKD2nAwR8Q3y+Ty/PhnUdDrNI3Kt\nViuMRiPcbjcMBgNkMhm2trZQKpW4HY3Ipt1u94QgU71eH9gYbOJvUdazWq3yvUJCOZlMBvF4HNFo\nFCKRCHa7HXq9Hnq9Hkajkc8BTesEThJlSXuEeFWDygL0C2zRmO12uw2FQsFtn9FoFJFIBJFIhLMB\nlDkym80AwGRLABzYUYa41+vx9/e+d8uVMv60OZRe1mq1sNlsLCQSj8chk8lgt9t5wAWljhqNBhQK\nBTqdDnQ6HUefpOTVaDSwvr6OnZ0dhEIhJuMN6lKkiJccFmK5SiQS5i0oFApMTU3hgw8+gN1uh8Vi\ngcFgwP7+PoxGI7e+UGmjnwy2vb2NQCCAYDDI7NFBrJtSmVKpFOFwmLkR9NBS3Z4GXCwsLPAAIOrf\nJRIM7bdSqeQMQTAYxMHBAbe9kGEYxNqpvYkmbfWn84n1T8JLn3/+OXw+H5xOJ1QqFY6OjmA0GjnD\nYbFYoFKpuPe7VCohHA4jEAhgb2/vRIbjrOuWSCQwm82wWCwIBAKcym2326hWq4jH42g0GgCA1dVV\n3L17l+vscrkc9+/fx+7uLjuXFouFu0QmJyf5Ut3b2+MM1CCcLjL8Ho+HU7Vk7DudDvL5PFKpFHq9\nHjweD+7evYvFxUV4vV7IZDJks1nY7XZkMhl0Oh1OV5NDX6/XkUqlcHR0hIODgxMp1LOAHEWLxQK/\n34/d3V3E43GkUinOFPZzKe7cuYO/+Iu/gN/vh8VigUQiwfPnz/Hdd9+h1WpxRwNxcuwYcjzGAAAg\nAElEQVR2OzweD+vXh8NhzuIMYs8pWnS73ZyByGazTD5MJpMnylyrq6uYnJyEQqFArVaDzWZDMBhE\nq9WCyWSC0WhEvV4/4VRaLBaEw+GB1aPJISKeRSwWQ6lU4tJVp9NBKpVCOp1Gs9nE3NwcPv/8c8zN\nzXFAdHBwwC2upEVCWWCNRgOtVgu5XI54PM4qjYPac5VKBY/Hg6mpKbTbbTb2pPJH5aB6vY47d+7g\ns88+4ymXVKrY2NjgZ1StVnP2gLoHms0mczPexwG4MsafUrX5fJ5JEj6fD2azGe12G263m+u7ZrMZ\nMzMzHLHKZDJm9pPWuF6vh1wuRzKZ5E0lT3eQ6S2K5Gq1GlKpFJRKJaxWK2cjGo0GXC4XarUalEol\nJicn+XNQtEBZiGq1yuUKigAlEglzA/o19gcFIv+QJLHVauW2pnK5DJ/PB7lcDpPJhOvXr8Pr9UKr\n1UIqlaLdbmNychIymQz5fJ5lOmneNXnmlOIeFPr3PBqNMmHPbrfzQ2W1WlGv16HVajEzM4O1tTX+\nbPQa1HtOrV6dTgeZTOaEStowpDsp00Nn02g0wmKxMGGPIjaDwYBPPvkEH3zwASwWC/caT05Ootls\ncsRE3BAaVEN7PmhFSeJxENmMsj10QVPWgZ7Pu3fv8vz5brcLtVqNdDoNALz2brfLCnsAWD1t0Oum\n6JOMBKW9q9UqZ1jMZjPMZjPu3r2L27dvc5sideBks1nm8vR6PU799nq9E4TiQaeg6aySw02ciVqt\nxtlSj8cDr9eLtbU13Lhxg++fUqmEiYkJVKtVhEIhbsWl9mnKxAw65U973ul0uN2ZBhFRRgoA9Ho9\nrFYrPvzwQ6ytrWF8fBxms5nbbn0+H4LBIEf33W6XW0X7MxWDWj855zKZDI1Gg1UrKRiie7HT6TAP\n4fbt21hdXYXH42Hp5bGxMe7gIeJ3JpPhs06O5ln0Iq6M8QdejDQNBoOwWCyYn5/H4uIinE4nOp0O\n19oo7Qa8SPMTwQkAxsfHOdKkSOPhw4ecOqIUTrFYHHidKJ1OY3t7Gz/60Y+wvLyMmZkZSCQStNtt\naLVabo/qr/VSrZSYxs+ePUO5XIZUKmUSCclH9rPsB0kAbLVa2Nvbg0wmw+3bt5lYRBEXseCpN5oc\nNErhE+N5d3eXH8jNzU0UCgW
oVCoe0tSvwzAoZLNZ3L9/HysrK1hcXGShHzLm1L5F0Q05iM1mk1nr\n1DJF9byNjQ3I5XKe9U5ZlkGTLgOBAMRiMaanp+H3+5mpTaQ4h8MBv9/PWRbqDqH6ObW4Ug11d3cX\nsVjsRFRB9clBdlyUy2U8fvwYPp+PRXxkMhlndGQyGRYWFjAzMwOfzweZTMbkJolEArvdjnA4jFQq\nxUTdra0tTquSSNOgzznwguT34MEDFvlxOBzMqSCezdLSEubn5+H3+7lFsdfrQaPRwOFwYHd3l0sD\nsViM7xoAzBfoFwEaBNrtNvb29rgESuW3XC7H52V+fh7Xr1/HysoKy6ITodVsNkOtVnO24Pj4mKdg\nUm27VqsxJ2YQUT8Z/0KhgGfPnjG/w2az8X4RkfvDDz/ErVu3cOvWLXYYKFNqtVqxv7+PRCLB4mLb\n29ts8Kl8M6hzTuumzJZGo2GdBXJK6vU6bDYbpqamsLa2ho8//hjz8/PodrtMUqQ7P5fLcVl0d3cX\nkUiEnUXqCnjfc36ljL9Go8HMzAyrmlHNWa1Wczsc1di63S5778lkEqFQCI8fP8bBwQGi0Si32KXT\n6RMzASjCPr3hJLDzvjAYDPD7/bDZbHwBUwaDeqIpcusn0+VyOayvr+Pp06c8B0Cj0TDznj4vRRqv\nuljOsnbqIaYLggRkSBGPHC65XM5rJ7JUPB7HkydPsL6+ju3tba5Nkqoh9dBTV8CgLnPy8iklSrrm\ncrmceRREaqK/p4eN+BgHBwe89nA4jK2tLTSbTWSzWW71IcGOQe55r9c7EanQxUCdBlR6oIuRItZK\npcIp32fPnuHJkyd4+vQp6vU61tfXkc1mUalUeM+Pj495zwflcJFBISeqXq/D4XDA5XJhfn6eM1lU\niusfhFIqlRCLxfD06VM8fvwYm5ubCIfDAMB8B4lEcuI9Ttf733fPScSpXC4jnU5z/d9sNsPn80Gv\n17OzaDaboVQqUSwWee1kcJ48eYKNjQ0WLKpUKsjn8yckpKkeP6jz0m63UalUkEgkIJVKuVzkcDgw\nNzfHSnP0571ej88CzS149uwZnj9/joODAya4ZjIZFuciud1BtebSXlSrVaRSKS5rGgwGVhJdXV1l\nIp/dbofVamWeC2VjDg4O8Pz5c+zu7uLw8BDJZJLnSNDzQw7aqzgW77PnVK4k/Ypms8klBr1ej1u3\nbnH5jbhPKpXqxJ43Gg3s7Oyw/kw/OZaIuv1Cbq/KWrzN2q+U8ZdKpXxxUJqF6i9KpZInP9HhopRn\nKpXCxsYGvv32W4RCIZ7odnpTySOlvxsE6HUo5dk/FISMPxknOqzkpVYqFRwcHODx48f49ttvkU6n\nmYR2HmsnkINBkblSqYTZbIZGo+E9J41z8lbj8Tg2NjZw//59bG1tIR6Ps3HvT2X1r/v02s/yecjA\n9a+LyhNWq5XPD6ngUXsUPdRPnz7FvXv3EAwGmZNxWhFyWHtOxohIoN1ul5UfSVWOPl+z2eRyViKR\nQCQSwaNHj/DkyRM2QP2KloTX7Tn93VmMaKlUQj6fh1qtxtzcHNejSVuDLjYiVVJ5ZX9/H48ePcLW\n1hYikQiv7fTFPYxnlHraE4kEADAheHp6GmNjY/zcdrtd7qMn9nw6nWYH/fDwEJlMhvdj2IJQnU6H\nO5yAF2VEm83GbWgWi4W7WfqZ9bT2RCKB9fV15jlQeeK0zO/rynLv+12Q40ylS7PZDJfLxWp+MzMz\n7OjSuSJOULFYRLFY5A6MSCTC2iP0OYfVDUJRebvdxtHREer1OmtYzM3NYWFhAT6fj886dRFls1nW\nfKhUKtjZ2UEgEODPBJwcVjQIXBnjT6nKeDzOpDEiBMnlcn4oAfCFTizpb775Br/73e+4P/R13isd\n9lddfmf5Qnq9HgqFAg4PD5m0ODc3x3X0ftUqakcslUoIBAL4zW9+w8azv7f8Tesb1NrpUkwmk0zS\n83q9zIynGle73WZjRZfL48eP8dvf/hahUIiH0bxq7cPc82q1inA4zOnm27dvc+RG56VfErRSqSAe\nj+Orr77inuJKpXLiAh/22uk1c7kcgBelLp/PB4/Hw44iSfZSTZmIRdvb27h37x4CgQAT+V43cOZ1\n6z7r2pvNJhKJBJ+Hzz//HOPj48yloGif1B4pGnr69Ck2NjZwcHCAdDr9xslnr1vfWdYNvChZHB0d\nMQnO7/fD5/Mxx4UyQyTY0mg0kEwmsb29jf39fdYVeZ+pbWd9RqnOXSwWcfv2bdy4cYOn5FFWo1Ao\n8GjlRqOBUCiEg4MDVn+kCPlVdeZB7znw4p4mx4VKcX6/HysrK8wDobp6MplkY0utu8FgEJFIhEtK\n71rbP+t5obJZsVjEwsICPv74YzidTuj1euaB5HI5HB4e8jNRKBQ4E51OpznQe9cZIm/z766M8QfA\nWveU1qKBN9SaRz36lUoF4XAY0WiU63ihUOitNcKH4TXSxUJylvV6nVOXZEA7nQ7i8Th7spubm1hf\nX2em+vetaxjr7na7KBQKHOGTFgGlYYmJTakrmpr34MEDZsG/TY1zGGtvNpvI5XIQiURcg3U6nTz1\nrNPpsCIYnZXDw0M8e/aMSURvE0UMY+31ep1V1sLhMI6OjpioSq2K1ANNgjQbGxvY3NxEJpNh0tGb\n9n0Y66YSDjngoVAIR0dHcLvd7Og2m02OhlKpFDKZDLa2trC/v899/uc1c4NAjku320W322V5aspu\nUfq42WxyUJHP5xGJRLCzs4NoNMpM+PNeO0XRlAkioz41NQWDwcC1ZtLsJ4cxGAxif38fyWTyBMHs\nPNdOnJ9Op4NEIoG9vT24XC5otVrOWJHjVa1WWYEzGAwiFAqxKt55D/WhDAAFbJFIBM+ePeOOFzr/\nNN+BnNx4PI7Dw0PuJiPjP4x1D56KPBj8v/f5IUqzUh+ny+Xi1j9KY1UqFQSDQfzqV7/Cr3/9a/zX\nf/0XIpHI0OQ13xaUuiKRh/HxcZYfpr7iSqWCp0+f4ssvv8SXX36J+/fvs3d4UWvv9Xoc2ZMmNZEs\n6c+oZ/7hw4f44x//iP/5n//h6WwXOWK2f7ANkQ+VSiWz9cn4x2IxLlHcu3cP4XCYL8OLWjtd2PV6\n/cRMCOrnJmeXSGX9dfJBEbPeF/0XH61bq9VyuaI/gg6FQtjc3MTe3h7i8Tj371/E2qnEQE4ACW0R\nQY4MaKFQQDab5ZbD/f19ZLPZC3Fa+tdOjhVNAyUdAnIO6I6hdD+1BlMm4KImh9Id02g0UCwWAQAT\nExMnuAFUKydBtoODA8TjcSYXX8TEViq9UdkwEAjAZrPB4/HwnhNPgZzGRCKBYDDIktZn1E34/2/6\ny8H1Tw0W7/V0UH3fZDLBbrfD7/fD4/HAaDSeUP/KZrMcRbxt9DZskDCEXq+H0+nEzMwM7HY7zzun\ndH8kEuG64UUbIAK1sigUCvj9fkxNTcFisZyIiqgTgyJ/arO56LUDL/vPp6en4fF4YLPZ
oFareXZ4\nsVjkVBwNoBmUmMlZQMxiq9WKiYkJnghJ0qGU5aJOkkQiwaNNhbL2sbExjI2NwePxwGKxQK/XAwCT\nyEKhEI9lHZZK5fusnTpBvF4vy+RS6Ysi0s3NTRwcHPB5F8LaAXBmdHJykgfiUIlHqVQin89ja2sL\nh4eHiEajJxQBLxLE6aIZCnQ/tttt5hnt7+9je3sbOzs7XHq5KIerH6TBMj8/j5mZGTgcDkgkEg6Y\nJBIJtre3sbW1hY2NDXYIzrj2N9r3K2X8gZcEE1LpM5lMcDqd6Ha7TCAiT1EIB/pVIEYtMXD7a/20\nfiEYn1ehf147GX+KOvr3Xoj7TiRLEtWgVjLa/0KhMDTZ27OChKJIZpi6Q6imG4lEONUvNJDwCimz\nKRQK/h7S6TQCgQCL6QgNpNZHjgt1K+h0Ok73n+aFCAFEdjYYDHA4HFAoFFCpVHC73Wg0Gtje3mZJ\nc6GddQqUyNml1taZmRns7++f2Hehrb1/TDgNPltZWYHJZGJiZSAQGJTD8sMy/oR+sQXSlCfCCv0S\n2sEgUFREg4XoIBDx43XtHUIAlV5I376f7du/90IFyaFSux5FRBSJCvnckJDT6fVTilGoa+/vjaZz\nQ5+FyhdCiN5ehf5nlfad2l2r1Sqn2YW4dtpzkhmmoIM6iQYppz1I0B1DraG0bq1Wy4TiiyxTvAl0\nXqj7jCYXKhQKLksPMDj6YRp/wukWFCE+hG/CWfUDhIKr8jkuE4bV2jlsvKnNUOi4rGvvz5gCEGRW\n8XUgg0qtc0J1FE+jP1AiIbYBr/2HbfxHGGGEEUYY4QeIN9r3wYpgjzDCCCOMMMIIgsfI+I8wwggj\njDDCDwwj4z/CCCOMMMIIPzCMjP8II4wwwggj/MAwMv4jjDDCCCOM8APDyPiPMMIII4wwVLxu6t8I\nF4eR8R9hhBFGGGGE1+CqOi5C/VTv1Od/WqDidWNthYr+9V+2tQOXV9hkhBFGGOEK44fT5y9Uydu3\nxWU0/MDlW+8II4wwwg8dVyLyH2GEEUYYYYQRTuCHE/mPMMIII4wwwgjfj5HxH2GEEUYYYYQfGEbG\nf4QRRhhhhBF+YJBe9ALOiss8sveyjlwdYYQRRhjhcuNSG3+ah3xZW+SAy9smd1nXPcIII4wwwiU2\n/v3Gh4z/ZTNCpx2XywJaN3D5DP/7rvuyOjuXITP2Kgf+tGMvxOeb1icWi9Hr9XB8fAwAEIvFEItf\nVFSPj49xfHwsqLXTuqVSKUQiEa8RAORyOa+93W6j3W4Lbu0SiQRyuZz3vNvtQi6XQ6vVQiqVotfr\noVwuo9FooNvtCmb9IpEIMpkMUukLs0v7brPZYDaboVarUa/XkcvlUC6XUa1WT3w3g8alNf79eFsj\nSgdHIpHwoe90Om+8WOghkUql0Gq1kMlkaDab/IsO1/scsP7L4/QXfPrSpgNPF0uz2USn03njxUKf\nVyaTQaVS8edtt9vf+7PfB7og6AGkC5vet/8CVygUUCgUEIvF6Ha7aDQaaLfb37t2Wj/tD+3zWS/T\n/rX3v06/U0Pfu1qthlwuh0QiQb1eR71eR6vVeqs1DNpBOu10nTaSdDYkEgmUSiXUajXvXa1WQ6PR\nQKPRuBBDetp5ot9LJBJ+vqRSKSQSCVQqFWQyGUQiEVqtFur1Omq1GprNpiCcgH7jqVQqoVQq+Tmj\nZ1QkEqHRaKBaraJcLp/5eRvk2ukuMRgMvHY6N3K5HADQarWQz+dRLBbR6XTQ7XaHZoTeZe0ymQw6\nnQ4WiwVSqZQdL6VSCYPBAJFIhGaziWg0ikKhgFqtxuu/SNA5t1qtMBqNfL6Pj48xNjYGh8MBpVKJ\nfD6PcDiMeDyO4+Nj1Ov1oZ35S2v8aTNoAwH8mRE9ffmKxWLodDqo1WrIZDLU63UUi0U2RP3/ln5e\nKpVCp9PBarXigw8+gMViQSQSwdHREY6OjvB/7J3XcxzWdf8/23vvWHRgARDsFCmKoiXLkuIkY/sh\nyXjGf87v95TnPHnykjx5HMeT2M6MFNu/UbEqRYIdvSzq9t777u+Bc68WsmyLDVjK+M5gpCEJ4ODi\n3lO/55xyuUyj0Xji7INQ2gqFQl5Q8WdCHqVSiclkwufzSe82Go2SzWap1+tfK7t4zEajEZ/PRygU\nol6vk8lkyGQyFAoFqtWq/J5PcrmEEyUUm/gz8XtQKpWo1WqGh4cZGRnBZDJRKpXY2toil8tRLpfl\n1/rquYvPNZvNGI1GGo0GzWaTZrMpI5IngTgXlUpFr9eTX0coFnHuOp0Om83G3NwcPp8Po9HI1tYW\nW1tbJBIJqtXq1yrzfufh6yZOPukjFmcizlecuZBbq9Wi1WrR6XRYLBbGx8eZn59HrVZTqVRYW1tj\ne3ubg4MDOp3OkUZE/Q4KIOUW99pkMmG1WnE4HLhcLqampvB4PLTbbQ4ODtjc3CQcDpNMJqUiP0rZ\nv+7PxDl7vV78fr/8bzAYBKBarbKzs8PW1hbr6+sUi0X5Vo/LiIrfgdFoxOVyMTs7y8jICB6PB4fD\ngcViodVqUSgUiMVirK6usrGxQalUOuT0HpfsarUam83G5OQkFy5cwOfz4XA4MBqNMprOZrPE43FW\nV1fZ29sjmUxK+Y/L8RL3xWq1cvnyZc6dO4fH48FkMqFWq9FoNPR6PXnuQg91Oh2SyeRzc1xeWOMv\nIKKIfsUtlJtWq8VgMGC1WrFYLJjNZgKBADabTSr+Wq1GLBajXC5jMBio1+sUCgUUCgVWq5WJiQn5\nOE6dOoVGo+Hu3btUKhX29/cPyfEkEFGakLvVaqFQKNDr9VitVux2O2azGY/Hw+joKCaTCaVSSSaT\nIRaLsbe3R7fbRaVSkc/naTabaDQavF4vgUAAs9mMz+djYmKCSCTC8vIytVqNfD7/VIZIyGg0GmU2\nodlsotVqMRqNOJ1OHA4HNpuN8fFxRkdHMRgMVKtVDg4OCIfD7O/v0+125ZmLDIXdbsdqtUolpdfr\nWV9fZ29vj0Qi8VSPWCgRq9WKSqWi3W7LDI7JZMLhcMg0nHCavF4vBoOBUChEOBxmeXmZZDJJrVaT\n6UURAep0OmmEjUYjhUKBnZ0dme14GiiVSvR6PRaLhV6vR6fTodFoyJTn0NAQQ0NDeDwexsfHCYVC\nqNVq6vU64+PjLC8v8+DBA4rFokwrKhQKtFrtHxnnTqcjo+2nRb+h1+l00tFtt9vY7XZ8Ph+Tk5MM\nDw8TCAQYHR3F6XTS6XRIJBKMj49z9+5dNjc3ZUq0VqsdUpL9H886UlKr1dK5UqvVtNttdDodbreb\nUCjEqVOnCAaD+P1+fD4fALVa7VBEd3BwQDqdplwu02q15NcRafXnZZhExCkyb71eD6fTycjICKdP\nn2ZmZga/34/NZsNoNNJsNikUCgSDQXQ6HSqVikgkQjqdJp/PA49+n61W67lH0yIAEL9nrVaL3+9n\namqKc+fOEQgEcDqd6HQ64FGZIpfLSf0ugpB
Op0Oz2ZQBWrvdfq5yC4ggQ6lUYrfbCQaDzM7Ocu7c\nOdxuN3q9HqVSKe2Q+Dl6vR7lcplcLkcul6PZbD6X0tcLb/wFhMIVF7Ner2MymfB6vczOzjI2Nsbw\n8DBTU1M4nU6q1apM233++efEYjH8fj+pVIrNzU1UKhXj4+P86Ec/YmhoCK1Wi16vJ5lMkkwmWVlZ\noVqtHsoaPAn6DVG325XRsM1mIxQKMT8/LyPn8fFxtFqtjJwikQgfffQR3W5XGsh8Po/FYuHKlSu8\n8sormM1mLBYLNpuNmzdvUiqV2NjYkFHI0zgAZrMZt9st01P5fB6j0YjX6+XixYvMzc0xOjoqDVJ/\nbfTmzZssLCzQ7XbJZDJsbGxIR2Vubk5GJF6vF4VCwX/9139JR+1plY5Go8HlcmEymeh2uzI96HA4\nmJ2d5eWXX2ZiYoLh4WFZixPKOpPJ8NFHH7GxsUE2m2VnZ4d0Oo3X68XlcuFwOGQU6/f7WVtb49e/\n/jXpdJpms/lUcoszDwaD0nHJZrNoNBqcTidXr17l0qVL0niKLBHApUuXmJ2dxel0yqxVJBJBqVTi\ndDpl9gmQ7ycWi0nF87RQqVTY7XYcDgd6vV5m3bxeL1NTU1y/fp1QKEQwGESv18v0eSgU4uLFi3g8\nHtxuNxsbG0QiEVKpFBaLRZbhRElDlMOeZalFo9Fgs9mwWq2YTCbK5TJqtZqhoSHm5+d59dVX8fl8\n2Gw2qcBFOnd4eBiz2czi4iJra2vEYjHq9TpGo5FqtUqpVHpumQwhu9FoxG63o9VqaTQaBAIBmRma\nm5vD5XLJNHSn08Fut+P1etFoNLJk1+v1qNfr0qAVi8XnavyFwyiCIKVSicFgYGRkhMnJSUKh0KE3\n3G63aTabUqeLDEu5XKZarVKtVuWbOYrMkTh7nU4ng7Hp6WkmJiYIBoMYDAZpq4T9MJlMuN1uFAoF\n8Xgcg8EgnQeRYT0x/n0QxkQYM4PBgE6nQ6/XMzY2xvT0NOfOncPhcKDRaKjX6zJ96HQ6sdlsnDp1\nirGxMfx+P61WS0bQOp0Og8EgvfVoNMr6+jqffvopGxsbNJvNp3oA3W6XRqNBNpvFaDRiMBgwGo1Y\nLBbGxsaYn59nfn4ejUZDp9Nhf39fesJerxePx8OlS5dkSun8+fMyHe1wOFCpVJTLZRKJBMlkkrt3\n77KwsEAqlXqqByCiTnFOIn1lsVhklmF6elpGQfF4nGQyidFoxGq14nK5GB0dRa1WYzAYaDabZDIZ\nmcEQSr1Wq/Hw4UOi0Sh3794lEok8dfQsHJVoNIrZbMZsNkvDFAwGGRkZYWhoCLVaTTqdJplMyhKA\n3W7HZDIxNzdHIBCg0+mQSqXI5/NoNBr5Ua1WKZfLLC4usrGxQbFYfGq5AXnm7XYbi8WCwWBArVZj\nt9sZGhrC5XKh0+koFAoUi0Xa7TY2mw2bzYbFYiEQCHDt2jWKxSLZbJZoNEqr1ZLkqXq9TiqVksb1\nWZG9RLSVyWRoNBrS2dXr9dIh0Ol0VKtVmckSKV5Rppubm8NsNhMKhYhGozI92m63SafTJBIJ6Rg+\ny8iu1+vJaFhkFhQKBQaDAZvNhlarlfdJZKVEGUNkY65cuYLL5SIYDLKzs0OxWESn07G3t8fW1tah\n0t2zhMhuikBH3BeRsWs0GiQSCTKZjMygiiyp3W5ndnYWi8Uifx6r1SqNajgcfmaO4Z+SvdvtSs6H\nXq+XHByFQkE2m6VYLEpdpNVqsVgsOBwOgsEgJpMJrVZ7KMjp9XpUKhXq9fpzkfmr8vdnoEVWU2Q+\nRTai0WhgNpulw+Xz+RgaGqJQKJBKpUin0/Jr9ZNKnwVeeOMPXz7QXq8n2ZQ6nU4+uEAggEKhIJ/P\nS8Mv0mButxu32y0NqkajodvtUiqVKBaLFItFCoUChUKBhw8fsrS0xMrKCul0+qmVjHicQskKwo3R\naMTj8eDxeLBareTzeVKpFKVSSV5ypVKJy+VifHxcRpoiYiuXy5RKJZk22t/fZ3l5mbW1NcLhsGSR\nPo3cnU5HPqRut4vJZJJkHLfbLVPTiURCRtY2m42hoSEZwZ46dUr+LELuftlTqRQbGxusr6+ztbVF\nJpN5aq9dKBTx8Hq9nszqCGOjUqkk2SmXy6FWq/F4PExMTMjU+ujoKFqtVt4NEXnW63V2dnZIJBJs\nbm6yt7cnSUdPA3HmIopptVpYrVbpDNpsNlnfTyQSFItF8vk8Ho+HQCDAxMQEFouFU6dOyd/dwcEB\npVJJlmyy2axMM5ZKpafOVHxV9nK5LB1mg8GAXq+XDlOtViMSiVCtVmU5wufzMTY2xvj4OH6/X973\nvb09dnZ2aLfblEolFAoF5XL5EE/mWUE4Lv1Rl3C2tVotzWZTckBqtRr1eh2Hw8HQ0JDkLkxPT8s3\nqtfricfjUh9pNJpDKelnDfFeKpUKvV4Pg8Eg/y6TyVCpVA6RKfvvitfrxW63A8goWtxx0SnwPCNo\nEdHXarVDka9wEsVdaTabWCwWgsEg8/PzeL1e6ahks1nS6TSpVIp2u029Xj+yDishv/gQNfxqtUqx\nWJQk3GAwyNTUFCMjIzidTgD29vbY3t5mc3OTYrEos73PEt8K4y8glJpQZqVSiVQqxfr6Otlslq2t\nLekFBwIBcrkc1WoVt9uNy+WS6bxut0sulyMSibC0tCTZl7u7u8TjcZkJeJZoNBrk83lZviiVSlIZ\nhsNhYrEYxWJRPsp8Pk8oFGJ0dFR+frPZlFHn9vY2a2tr7OzssLu7y/7+/h+R/J4FhKMkas+lUolS\nqcT29jbNZpPt7W2pZOx2OxMTE2QyGebm5piYmECtVkuuQ7PZpFKpsL29zfr6OqJXKdIAACAASURB\nVKurq0QiEflgnnWk0Wg0SKfTaLVaGSFlMhmWl5eJRCIcHByQzWZRKBTY7XbOnz/PxYsXmZ+fx2az\nycxKt9ul1WrJaPru3bssLy+TyWTkXXnWykYQTTUaDSaTiUajwf7+vpQ7mUySyWSwWq0Eg0GuXr3K\n+fPnmZ+fl79/lUpFq9WiUqmQyWTY39+XmRbxcz1rtNttCoUClUoFrVaLz+cjl8uxurpKPp8nEonI\nlLLD4eDcuXO8+uqrzM7O4vF40Gg0pNNp6Sxms1kZIeVyuWeeGu2H4IaIVGwgEGB/f594PC7r4uVy\nWWZZvve973H58mUmJiZwOp00Gg0WFxcl2TWTyfzFrpdnAZEZFb9rh8NBOp2WBkhwntrtNn6/n4sX\nL6LRaBgfH8dms+H3+2XGIpvNUqlUqNVqz03efojuIEFwTSQSdDoddnd3SSaTpNNpGo2GNP7dbleW\n3xwOB8PDw2g0GnK5nKytHxXxTwQanU6HTCYjdVm73SaZTEpneGxsjEwmw8jICDabDZPJhMvlwuVy\nyUylcCCeJb5Vxl8oYXhUc8lkMmg0GvL5PNlslr29PRqNhozsMpkMBwcHhEIh/H4/SqVSkrhqtRrp\ndJ
rNzU0SiQTZbFZe/OehFEUrjehRjcfjNBoN9Ho9e3t7pFIpqtUqRqORXC4n65xarZZ8Pk+j0SCZ\nTMoodH9/n3A4TDweJ5VKPTfFKLIXIh2Yy+Wk0azVatLpEM5Np9NBr9fjcDjw+XxkMhlSqRQ7OzvS\nk9/e3pZOSz6fl1HLs4Y4c5FmTqVSdLtdYrEYsVhMOnoqlYpCoSBJlyLDUiqVWF1dJRwOy06KaDTK\n5uYmkUhEKq3nARFJCGMaj8elgotEItLAiMyKKD2o1Wqq1SrJZJLl5WWi0ah0emKxGNFoVJYMngf6\nFWKn0yGbzUpimXCehGEpl8uMjo5K8hYgeR8rKyuUy2WKxSLpdJpsNvtc09Dw5X1RKpWUSiXJJO92\nu7L7ptFoYLPZUCqVNJtN2aEh2i3j8Tg7OztUKhX5+zoKBr2Qvd1uk8/nicViAJRKJWKxmDxz4dD2\nt9mKjFM6nZaOl8gUPG+IdL0w3KJ01O12SaVSMhvodrslSbS/Ti7KWcK5eZ7O4Z+SXWRNkskkvV6P\nRqNBKpWiVqvR6XTweDx/9ucW5MpnLfe3yvjDlynGVqslvSvBeBYXoNfrkc1mCYfD6HQ6Lly4QCAQ\noFAocHBwwN7enmwFq1Qq0uvsb2l7HnL3p9MFs12tVksPXaStKpUKhUKBWq2G3W6n2WzKqC2RSKBS\nqahUKlLhP+9LLy64qOs2m00ZVQoZBIM1nU6zs7MjMwDhcJgHDx7w6aefyn+Xy+VkNuF5P1Zx5vV6\nXSpw4NCQDZ1OJzMTjUZDRvlbW1u899573Lp1Syp34Yg977YocSadTkd2FIjUqOBiiH8jWj4FcSuT\nybC6usoHH3zA3t6e/D313/XnDcHCFsZEoVBQqVQko1z0n5tMJjweD3q9nkajQTweZ2lpiY8//liS\nvERkdFTGSERjsVhMZgpFaUt0k4jozeFwSGfh4OCA9fV11tbWDqWDj9IYARSLRSKRCIB0YgVp2Gw2\ny5ZLvV4vGfTxeJxYLEahUDiWwUXCEIosIiADHUEGFKUhk8mEQqGQA3OE/nyeXRV/CSI4AGSWsdPp\nyPJtMBiUJUfx96KT6HndkW+d8f/qYBu9Xi97w4XHKMgtwiNPJBLyoogotd+JEJ/zPJVi/1AbQcoB\nZI1NREtCFqVSSTKZZHV1lXK5LFuJRD24fwjR836o/T3owutuNpsyQhBeq0qlkgZyf38fg8HA2toa\nW1tbNBoNKpXKof8eFSu3f66CMCjCyAseCSDTpiIr8/DhQ0merFQqMh16FGf+VflF1ks4HiJyV6lU\nsgtGRJ6bm5uyLNPpdKjVat9oaNTzgJBbZFP650WIdk/RA12v19na2iKbzaLVamX24zimuPVzAb46\nd8JsNuP3+9HpdNTrdRKJBPv7++zv70uC5dfN5jgquYVeE62i/e3SXq9XdvAIhyYajZLP5+Ugpmdd\n8nwc2UVrrvg5+ueBCBa9yKrE43Gy2SztdluSQ49r2I/IeAm5++2UcLZE5rTRaMj/Cr16Evn/BfQr\ncsGCttlssu7aH9WIaWiiz1JE08KTFP/uqBW5UqlEp9NJ4ketVjuUsegnq4iUryAm9no9+TjFAz8K\nb7f/3I1GIw6Hg2azKfvJxWXvH3sqUs1iyJLJZJIGoL//+SggHpjJZEKv10uDUq1WASRTWijtUqkk\nuRWCld5utyXf5CgHofRHmiKl2D+BUDC1NRqNjOIEmc9sNmO1WuVsieOYQCc4CyI6FiRDvV5/iDha\nqVQkIVRES0IpiqmFRwnRdy56tbPZrLwvDoeDQCAgjX+73aZYLNJoNGTbXalUktHsUUPoGIPBIO+E\nmHPh8/lwOp3yngt+BjxqP+6fFnlU/fIC/fMiLBaLvA8ATqdTnnmtVpNEbeFsWSwWOajoWRFZH1d2\nhUIhO1vE1EeFQiHnyPSfuSA5GgwGDAaDDIaepfOiemZf6dni/zzJJ/Ubfq1Wy5kzZ5ifn5ctfGIo\nj1KpPMRMn5qaIhAIyD/X6XTY7XbZmnEUl1wYRjEC8sKFC3Lwiahp9k8W8/v9TExMMDMzg9PpxGq1\nYjAY5KAa4RgcldMiMhYTExOcPXuW6elpTCYT0WiUTqeDWq2WD3R8fJzx8XGCwaAktphMJsxmM3q9\nnmKxeCSEon6nRbTwzc/Pc+bMGUkGVKlUOBwOJicnZe+/y+XCZrPh9XplK5oYdpROp4/M+Is74/P5\nOHXqFGfOnGFiYkLWkjUaDaFQiNnZWYLBIFarVbZ5iZ50MZhI1M+PCkL28fFxTp06xdWrV7FYLJID\n4HK5OH/+POPj43IOgYjwRKeOKGOIEbRHKbuYonjx4kW+853vyLKXTqcjFApx9uxZ3G43Op1ORqgG\ng0E6wmq1mmazeWjK5VFAGJvJyUneeOMN5ubmZIu0zWbj/PnzjI2NYTabpRMuAo5arSadd5EZO0po\nNBp8Ph8XLlzgRz/6EV6vV5YqxsbGOHPmDHa7XZZ5hZEX6XMR1D2LwVWPCzGo7c033+TNN9+Ug8Qs\nFgtnz55lbGwMvV4vgwiR8RWtvcCTtLH+3z/3l9+qyB++rMl1Oh38fj9zc3OyT9dsNkuPXQyjEW1b\ngnQketLFQIijlFvIbjQamZmZwev10mq1CIfDlEolGZ3a7XbGxsYYHR1leHhYTjur1WrSSz9qiHSW\n1+vlwoUL0rESHQtisM7Q0BCTk5MEAgHpBYtoVSiUo4pAxfcQ7X5iJoTb7T6UZhMtisFgEI/HI++R\nGFok+AFHRYLql7/XezSxTQz3ETLFYjFarRZjY2NyAI0YGiIm1QleyPPoXvkmsisUCsbGxrhy5Qpz\nc3OMj49jsVioVCrodDqGhoawWCyyrQwelQPEey6Xy8cytrXX62GxWDh9+jRXrlxhZmYGi8XC9vY2\nxWJRyi3KRf2pduFkiSzRccDv93Pp0iVeeeUVOakwlUrRaDTw+Xyy3VkYG0GCFhmv4xrzq9VqmZiY\n4OLFi1y7do2JiQkmJydJpVIyawdfLiSqVqvkcjl53qK0dRyw2WzMzs5y6dIlzp49y8TEhCRjDw0N\nyQy00IO5XE52j/TrxWeJb5Xx7ydBtVotPB4PoVBIss6FsTGZTDidTsbGxpiampKpRTHFTUxjE/Xb\noyQStdtt9Ho909PTBINB6vU6ExMT1Go1OarY6XQyPT2N3+/HZDJJkpkge4m6+VESiYSy8Pl8XLp0\nSaY34/G4HJgkyFsTExOYTCapyEVrYjqdlp0WRwUhu0ajYW5ujsuXL8veZofDAXw5ZjQYDOLz+eTw\nkHq9zt7eHg8ePCCfz0sy1FGi2320FezatWsEg0FZQ+xnE4sZBkajEZVKRalUki2Nm5ubsqPiOBAK\nhXj11VdlBuD06dPyHjSbTblYSczyKBQK7O7ucu/ePanUjyONa7PZuHLlCi+//LIcX51KpUgkEpI4\nKfgvgOTlrK2tsbq6KlPQR41er8fo6CjXr1/n3Llz2O125uf
nSSaTsqNIOCuCjxGLxdja2mJtbe1Q\nCfWoodfrOXXqFBcvXmRmZkZOf4zFYrJtu3/UeC6XY319nc3NTXZ3d4+NHwLgcrm4ePEip06dYn5+\nnpmZGclLEC1/KpWKer1OrVbj4OCA1dVVdnd35YCpk5r/X4CIhkSaqtPpMDQ0xBtvvIHH45HsYoPB\nIBWOaP0TSkdMCjuK+dVfJ3+r1aJUKqFUKhkZGeEHP/gB586dI5vN0uv10Ol0cuGMGHCSSCSIRqNE\no1HZFXCUMvefeaFQwGazMTU1xQ9/+EMymYxkFAunS7QUiamJe3t7UuEfRxTabrdle5zX6yUUCmG3\n2yW7uNVqyYFQjUaDaDTKrVu3WFlZIRqNSoLlcSgWQcoSc/InJiawWq0kEgnq9fqhIUDVapXV1VUW\nFhYIh8OyT/o4lLnoukkkEnJ/ghgCJdjaYuqlmAFw69YtHjx4IB30o2TL96NerxOJREgkEng8Hrkl\nTxhPMYNBdB0tLy9z+/ZtDg4OjrSr4qsQ0/G2trZwuVyHuAuC3S8cFtGNc//+fdbX16WROq7lPq1W\nSy568nq9cmpr/6ZWQQjc2tpieXmZpaUlksnkkfGf/hSKxSIbGxuSl+B2u+Xd7Se9xuNxNjY2WF5e\nloOMnse+CvgWGn8B0d/ZarWw2Wxy4MPa2hqJRAKNRsPExAQTExNyiEwmk5HDQvpHQh6lzML4izqm\nxWLh3LlzBINBNjY2qNVqqFQqubQnHo+TSCTkACLhIBxHKrS/nU9MKDx79izRaJRIJEKz2cRut0uj\nKlqfNjc3SSaThxZYHDVarZYcE+tyueQ2QcFnEAQ5lUpFMplkbW2NO3fuEA6HZQvPUUOkzsVQJZFN\nsVgsWCwWisXioeUoYqPi/fv3uX//PrFY7NgifiF/IpFgfX0dnU5HIBDAbrcf2tcuRluLgVt37txh\ne3tbEuaOC5VKhXA4jM1mA8DtdsvyhFiL2+12KRaLrK6u8vDhQzng5ygd869DKpViaWkJjUZDsVgk\nEAhIh1uk/EW78YMHD1hZWWF/f/+5jSH+pmg2m+zt7XH//n2USiVjY2N4PB5Zwxe7C7LZLKurqywt\nLbG9vU2lUjn2lb6FQoGNjQ2MRiP1el1yovq3qoqW3Xv37rG9vS114vM6828V4U9AkLhOnz4tp2uJ\nYRWJREJu8LNYLOj1en7729/yzjvvcO/ePQ4ODiRr9ziUi0KhwGazMTExgd1ul0SycrksMxJis9vB\nwQH/+Z//yccff8za2hrZbPZIWfJflRsgEAjIoRXiYovhIGLsslar5cMPP+SXv/wlDx48kINdjlOx\niOU4APl8XqY2k8kkjUZDLujIZDL893//N++//z7hcFj+2+OCyAQJvko8Hpeth7lcThrSdrvN7du3\n+dnPfsbS0pJsbz1OAwpfDuNaWVmRQ1AEyUmQ5XK5HP/7v//Le++9x/7+vuwgOU6ILFc4HObevXuS\nod1qteTegnw+z9LSEu+88w4rKyuSiHncZy5Km6Lsk8vlJG9IkM4ODg747LPP+Pjjj+UGy+N8n/Dl\nmUejUR48eMD+/j7pdBpAlkTF0rXPPvuMjY0NyuXysaX6+yHGW+/t7bG2tsbu7i6lUgmz2Sx3o4gs\ny61bt/5oVscT4q+L8AdfEmzEYYpxm2JDlEiFVioVtra2ePDgAffv35c7t4/zoggP8P79+3IhhM/n\nk+1cgn0bi8XY3Nzkzp077O/vH2oHPC65ASKRCAsLC5KoIlqhxNCWcrnM+vo6Dx8+5OHDh3Ku+HEq\nc5El2tzcpNVqkUqlAKTBF9G/6Nd++PDhc13I8rjI5/MsLy9js9nIZrMy/S/2LYilVGIvRf8Mg+NE\nr9eTWTcR6YuFVRaLBa1WSzKZlPVmMZDouOWGL9P+8XhcOuNWqxW3243ZbJZvdHV1lZ2dHbLZ7LG0\nUn4dxAAt0UIptsnZbDb0ej3lclnOgojH4wMjtxho1Z/1MRqNzM7OytXnxWKRra0t4vG4nIk/CBAz\nNcS+EEFIF5nQcrksx0TncrkjyYB+a41/u93m7t27NBoNPB6P7D8Xq2Xr9TrpdJpoNMrW1pacGX3c\nl7zX65FOp/nDH/4gN+aJ1bk+n49SqUS5XCYcDvPw4UMODg7I5/MDccl7vZ4k7lUqFUlcFMpQDMjZ\n3NxkY2PjUHnluFGv1+XCplwuRyAQYGpqSvbwV6tVSTQT6fJBkBuQZEOXywUg97H7/X45ZGZ5eZn1\n9XU5tXAQZBdRfqlUwmAwkM1mKZVKTE5O4vP55FS027dvS+LoIMgNyO2fYnqi2Efg9/sxGo0UCgX2\n9vbY2NgYqDOHL9nwYpx5pVLBaDQyOjpKu91md3eXtbU1mUUaFAi9Do8yRiJoEJvw4NFbCIfDz203\nxZOiX3YxXMlgMDA9PY1Wq5Vl22QyeWRyfyuNv4Bom4AvN/8Vi0U2Nzf5/PPPicfjlEoldnd3B+px\nCgKI6HcW42ULhQILCwvcunWLUqkkjeygyA3I7VOBQICRkREMBgPFYpFYLMann37K+vq6nC0+SGcu\niDdWq5X5+XmGh4cxGo2Uy2VWVlb46KOPiMViJJNJOYJ2kNDr9fD5fIRCIbnStFar8dFHH3Hr1i0Z\npQ7SmcOXbZZ+v19u8NPr9ezv7/P+++/Luq3gsgwaxEyQ0dFRhoaG6PV63Lx5k48++oiVlRX29vYG\nymkRELMGRInRarWSyWT4wx/+wBdffMHy8jK5XO64xfxaiLKu0+mUBNGVlRVu3LjBjRs3iEajx86r\n+HMQ991qtVIul1laWmJhYeHIz/xbbfwFIcpoNGIwGA7VFz/++ONDynDQHic8GgwhiH1i5em9e/f4\n4IMPjmW+9jeFWq1mZGSE0dFRSUrc3t7mxo0brKysPDf26tOgfwKXGLphNBo5ODhgZWWF9957T7YS\nDZLcAqIz5OzZs7LfPx6Ps7CwwAcffPDcloM8LRQKBSaTidnZWU6dOsX4+DjVapXt7W0++OADuTRp\n0OSGR7I7nU5mZmaYmZnB5/NRqVR4+PAh7777ruwtH6QIVEClUuF2uxkfH2dubg6TyUQ8Hufjjz/m\nxo0bh4bLDBrESmXR59/r9VhfX+edd95hf39fZhQHEQaDAbfbTSgUwu12UyqVuH37Nu+99x67u7tH\nOvRJeWTf6Rgg2m+EV26z2dje3pY9wkc5uvdxIAhDIpLz+/10Oh3u3r3L9vb2scxg/6YQU9jGx8eZ\nmprC5XKRSqVYWFggnU5L4zlosguWthiAMj09jdFoZHNzk7W1tee6YONpISZTnj59mtdee43p6WlJ\n8BNtiINo+OGR7B6Ph7feeovXXntNTihcXV0lm81KXsWgyS6cxbm5Of7xH/+Ra9eu4Xa75TZKsSNk\nEI2QGBt75coV3n77ba5evYrBYJCrw4XhH7Qzhy8nFF67do3XXnuNS5cu0W63iUQi7O3tyVkbgyg7\nPCJEX7hwge9+97
tMTU2RzWbZ2dkhHA4feRb3Wxv5K5VKSXyyWq2y5WZpaYmdnZ2BjSbgkWfr9Xpx\nOp3o9Xqi0SiLi4usrKyQSCQGVm4Ai8WCz+fDbrfTbrfZ2tpiZWWFtbU1uX9gECHG+IrVoNVqlZ2d\nHRYXF9nd3R1YZQiPhp/Y7Xa8Xi92u518Pi/JoIlEYiANkIDRaMTtdjM6OorRaCSVSrGyssLy8rKc\nDTGIELX+oaEhQqEQarWaaDTKnTt32NnZOdYOkL8EjUaD1WplamqKkZEROp0Ou7u73L17l1QqNbCy\nCwfd6/Vy/vx5vF6vJOpubm4ey6TKbwox/nx8fJz5+XlJ8tvY2CAajR75mGf4lhp/URPy+XwEg0HU\najVbW1v87ne/4+7duwNtQIVXPjIygsPhoF6vc/v2bf7whz+wvr4+sHU4eCS73W5nZGQEnU5HIpHg\ns88+Y2FhQRrQQcRXN5opFArJC7l//z4HBwcDRXz6KoxGo1wJKoiLCwsL3L59+0inJT4uFAqFZMiL\nSZXhcJgvvvhCdoIMKsQkRTGfPRaLcffuXT755BN2d3ePW7w/C8FTCAaDGI1GwuEwCwsL3Lhx49hm\nVnwTiP0IPp+P+fl5lEolGxsb3Lx5k9XV1YHVL/ClsxgKhTh16hS1Wo3t7W1u3bpFMpk8Fpm+Vca/\nf1qV2WzmzTff5G//9m/x+Xzcu3ePDz/8kFgsNpBpZzGIRa/XMzc3x49//GMuXbqEWq1mcXGRW7du\nUS6XB05uQRwSsl+7do1/+Id/YGpqSu67F/vLB032/iVQgUCAH/7wh7z++uvYbDbW1tb4f//v/w0c\n41lARBJarZbz58/zT//0T1y4cIFms8mHH37IjRs35J77QYJwzMV9eeutt/ibv/kbRkZGuHPnDr/8\n5S9ZW1sbiL7yr0Jsf9RoNExOTvL3f//3XLt2DYvFwm9/+1vef/99ObNi0CDui0aj4erVq7zxxhuc\nPXuWcrnM73//e+7fvy9nWgwSxA4NrVaLw+Hg+vXrXLt2jbGxMb744gs++eQTlpeXBzLD1f9GJycn\nuXz5MleuXMFisfD555/LAUpiVsFR41tj/MXaVZfLhdPpxO12c+XKFebn5ymVSuzt7bG8vDxwhl8o\nFIvFgt1ux+l0cu7cOa5fv47dbpcT0La2tgbycosUopD98uXLXL16lXq9zsHBAffu3TvS9pVvAtG3\nLwaDOBwOpqenefXVV5mZmaFSqbCxscG9e/cGzmnp325ns9mw2+1cunSJt956C4VCwerqKnfu3GF9\nfX3gUqBiyJNYJex0Onn11Vd56aWXUCqVbG9v8+mnn8rJnIMCceZilazVauXChQu89dZbBAIBKpUK\nS0tL3L9/f+BazMSZC7nNZjNXr17lu9/9LgaDge3tbb744gv29vYGKrgQG0xNJhM2mw2z2czQ0BCv\nvfaaHMm+s7PDzZs3SafTAzGwSkCj0cgtiVarFaPRyLlz53jrrbdwu90UCgUWFxe5f/8+0Wj02O7L\nt8b4i8tx5swZJicnGRsbY2hoiHg8zs2bN3n48OFAPUoB0W4zOTnJ7Owsk5OTnDp1Cr1ez8bGhmxd\nGUTZ9Xq9ZDuHQiGmp6eZn5+nWq1y9+5dmXYelEcpoFQqMZvNDA8PS7lnZ2fx+/3E43EePHhAOBwe\nOMMPXzKdp6enCYVCTE1Ncfr0aTQaDcvLyzJ1O2gRPyD3xYt7PjExwdzcHPV6neXlZZaXl6lWqwN3\n10WKf2pqirm5OUZHR5mamsLv95NMJrl//z67u7sD13YLyPXfp0+flqvLQ6EQJpOJ1dVVWQYVOwkG\nBRqNRsp65swZfD4fPp+P8fFx6vW6bEfsHws+CBAdQ1NTU3K1udVqlSXo7e1t7t69y/r6OqlU6ljJ\nid8K469QKAgGg1y4cIFXXnmF4eFhbDYbq6urbG1tyclmgwiTycTU1BSXL1/m8uXLcp3s7373O9bW\n1lheXiYejx+3mF8Lt9vN3NwcV69eZXZ2Fq/XSyqVkgtvlpeXB5JYqdVqGRkZ4fz581y9epWhoSGM\nRiMLCwtymlw4HB44uQFp+K9evcq5c+fwer1UKhV+9atfsb6+LsfIDprsCoUCn8/HmTNnZNrW6XSy\nvb3Nzs4O6+vrLC0tDWSJxWQyMTk5yUsvvcSVK1dwOBx0Oh0+/vhjeV8ikcjAyS6Y8WfOnOGVV15h\ncnISq9Uq+QnivgzCuOR+iKhfrO8VqfJer8fy8jJbW1uS5Ddo/fyi/fP06dNcunSJUCiEUqkklUrx\n4Ycfyu4hsXTrOPGtMP5KpZLp6WmuX78u01n5fJ53332X3/zmN7Klb9AgCE/nzp3j9ddf5/r161Sr\nVT755BP+5V/+hZ2dnYGM4AA5yEe0C42OjlKr1XjnnXf413/9VznOctAgFMv09DTXrl3j+9//PgqF\ngnA4zM9+9jNu3LgxUJFEP4RiOX/+PN/97nc5f/48jUaDX//61/zzP//zse6I/0sQcwguX77Mm2++\nid1up1Kp8G//9m/85je/oV6vD+x9sVgszM/Pc/XqVa5fv069XufGjRv89Kc/ZXt7+9iV+NdBtCIG\nAgEuXbrE1atX8fv9lMtlfvOb3/Dv//7vA9u+KkjPYmXv+fPnqdfrbGxs8Itf/IKFhYWBGa3dD3Hm\nwvjPzs4yNDRENpvl4cOH/PSnP5WbEQfhzL81ff52ux232y23gIl92YPa3wyPhhDp9XqGh4dxOp1y\nZKVYEDJol/urcLlcTE5OSq+8VqsNrELph0ajYXR0lOHhYXQ6He12W+6FH+T7Ao/u+alTp3C5XHLt\nbbVafSHuy9DQEJOTkxgMBtrtNqVS6YW460ajUY4chkcrlMV8/EGdWyEg3qjBYKDRaJDP56WTOMj9\n8BqNBr/fj9vtptvtUiqVyGazksQ6qLKLoVWi40lsixWLegZpwNm3IvKHR6lcrVZLs9mU+80FgWUQ\nDvrrIFrMbDYbGo2GWq1GKpUilUoNjHf4dRAertg5oFQqKRaLRCIR8vn8wBtQjUaD2+3GbrfTarXI\nZDJEo9GBrDcL9J/58PAwBoOBSqVCLBaTw5MGVXZBanW73XLjYzabZXd3Vw7bGlQolUqMRiOBQACT\nyUS1WiUWixGLxQYy+hToH9/r9Xrp9XpkMhl2dnbIZDID/UZVKhU6nQ6n0ynHg0ejUTkBb1BlF/rc\nZDLhcrlot9tyumk8Hh84h+uFN/6idahSqZBMJtHpdMRiMe7fvz/Q/fzwSPZer0e1WpUK/O7duzx4\n8GAgiXL9UCqVdLtd6vU6yWSSXC7HwsIC29vbA+1wifYbQCry+/fv8/nnn5PJZAZamYstj0qlkkKh\nQCwW44svvpAjkwcVorNCq9XS6XRIJpOyPzsSiQyUQuyHKBHp9XrUajWl1Dom5gAAIABJREFUUolM\nJsPNmze5ffv2wDH7+yG6E3Q6Hd1ul3g8zu7uLrdu3ZKdQ4N45oDcvqrRaGT
<remainder of base64-encoded PNG image data omitted>",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Current loss: -13.686996\n",
+ "Current MNIST score: 7.210840\n",
+ "Training step: 6000\n",
+ "Time since start: 28.708190 m\n",
+ "Steps per min: 208.999591\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAf8AAAFBCAYAAAB5Maa7AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsfVdwnNd59rO994Zd7KL3TgAkCJAUxSJRkWRZsuKaZBI7\n7SYzSSaTzOQ+N7mIJ8mMM0ku7BlZEye2im1KMklJrCAJNpAoRO9lC7DY3ut/wf89WlCNpLiF1D4z\nGNMjlLPfd8552/M+L1BGGWWUUUYZZZRRRhlllFFGGWWUUUYZZZRRRhlllFFGGWWUUUYZZZRRRhll\nlFFGGWWUUUYZZZRRRhlllFFGGWWUUUYZZZRRRhlllFFGGWWUUUYZZZRRRhlllFFGGWWUUUYZZZRR\nRhlllFFGGWWUUUYZZZRRRhlllFFGGWWUUUYZZZRRRhlllFFGGV9XcIq9gM9BttgLKKOMMsooo4wn\nGF9o37mFWkUZZZRRRhlllFEa4Bd7AaUGmUwGhUIBDoeDZDIJn8+HVCpV7GWVUUYZZZRRxmND2fj/\nf4jFYmi1WlRWVsJqtSIUCmF7exuRSKRs/MsoowjgcrngcDjgcDjIZDLIZrPIZssVwWJAIBBAJpMh\nm80ilUohHo+X78UnHGXjD4DD4cBoNOKll17C4OAgOjo6cOPGDQwPD2NjYwORSKTYSyyjjK8VuFwu\nBAIBBAIB+Hw+4vE4EokEcwLKKBw4HA40Gg1aW1uRTqfh9/uxtrYGv99f7KUBuLc+AOV98ZD42ht/\nDocDuVyOqqoqHDx4EPv27YPFYmGGP5FIFHuJZZTxtYVMJoNOpwOfz0c6nUYgEEAwGEQ4HEY6nUYm\nkyn2Ep9qiMVidHR0oKurCz09Pbhz5w6uXr1aEs9dLBZDpVKhtrYWkUgE09PTSKVSZSfgAfG1N/4A\noNFoUF9fj/7+flRXVyMSieD69es4d+5csZdWRhlfS2SzWWQyGcjlctTU1MBisYDH42F1dRVra2tY\nX19HLBYrCSP0tILD4UChUOCVV17Biy++iLa2Nvz4xz/Gu+++i1gsVuzlQaFQoL6+Ht/61rfgcDiw\ntLRULkU8BL7Wxl8kEkGhUOC1117Dyy+/DKPRiFAohPX1dYRCoWIv71Og+ielRLlcLrLZLNLpNPsq\ntNdLKTdal1QqhVwuh0wmg1arhdFoRCwWg8/ng8fjgd/vRyQSQTabBZfLZRFdLBYryvofBRwOB0ql\nEkajkZFDl5eX4ff7kU6ni728pwbZbBZ6vR579uxBS0sL9Ho9gsEgXC4X1tfX8eGHH2JycjJv0R6d\nt/vTyg/DPaCfzf199/+OUt3zfD4fCoUCtbW1qKqqgkAgQCqVKrrTRc9RqVTCYrGgpqYGMpkMjY2N\nWF1dhcfjKdraSgUcDudL99UTZfx5PB74fD4ymQz7etSDw+VyUVFRgfb2dpw4cQKHDx8Gj8fDzMwM\nrl27hu3t7ce8+kcDbXQ+nw+RSAS5XA6lUgm1Wg2xWMyMfzweh9PphNfrRTQazevhFAgEkEqlkEql\nbA18Ph9SqRRarRY6nQ5qtRomkwnV1dWMPGm32+F2u+Hz+Zjxl0gkyGQyCIfDiEajCIfDcLvdCIfD\nJefFi8ViyOVyGAwGVFZWorq6GlqtFgAwNjaGxcVF2O12JBKJshPwFUHOpF6vR3t7O7q6ulBZWQkO\nhwO/3w+n08n2uc/nQygUQjgcRjKZ/ErPnsfjQSgUQiKRQCKRAABzTlOpFPvd9H10/pLJJHNCyDlX\nqVSQSCTMySX+AjntuV9EoguHwwiHw4jFYkgmk0U1sjqdDg0NDaiuroZOp2PPotjrAu7d31QSqqio\ngFqtxtDQENsPhSSHcjgcCAQC1iFW6GfD4XDA4/HA5XLB5X7Svf9l2ZknyviLxWKo1WokEgl2UB7l\noNPhPHjwIP76r/8aNTU14PF4SKfTGB0dxX/8x39gbW0tD5/g4UAvVSAQMC+3vb0d3d3daGxshNFo\nhFAoRCaTQSgUwttvv42LFy9ifn4+b2k5DocDtVqNlpYWtLS0oLq6GolEgvEm9Ho9NBoNRCIR+0om\nk4jH44hGo4hEIixy4HK5kMvlEAgESKfT2NrawsLCAn79619jfHwcfr+/6JdMLioqKtDd3Y1vfetb\naGtrg0wmY2t3OBw4d+4c3nzzTWxvbyMcDj+2Cyg3eizVKPFxg7JIRqMR9fX10Ol0kMlk4HA4EIlE\nUCqV+OEPf4gDBw5gYmICExMTmJycxPb2NkKh0CM9Jw6HA4lEgoqKCtTX16OhoQGJRAIejwfLy8sI\nBAJIJBLg8XhQKBSwWCzMcXW5XPD7/UgkEpBKpdDr9Th48CAaGxshFAqhUqmgVqshkUggEonA4/HY\ne02n0/D5fFhbW8Pk5CTGx8exvLwMt9uNRCJRlHfO4XCwZ88evPbaa7Barbs6LooNuhf5fD6EQiEE\nAgFqa2thNBoRj8cxNzdXMAec1kJ33tbWVsFKIrR/eDweZDIZ21v0njY2Nr7w558I40+ec2NjI/bv\n34+1tTWsrq5ieXn5kZj4RqMRBw8exIsvvoju7m7w+XxsbW1heHgYp06dwszMDJLJZB4+yYOBIn2L\nxYLq6mro9XqYTCZUVlaioaEBDQ0NqKyshEajgUAgQDabRTQahdfrRSaTQSKRgMvlYm2Kj8uAkrdt\nNpvR0tKCjo4O1NfXg8/nQ6lUwmw2Q6lUQiaTMQ80k8nsKkukUikkk0n2TsmAcjgc7OzswGg0wm63\nIxKJYGxsrOiESw6HA51Oh7q6OvT29mL//v145plnYLVawePxANy7vI1GIwAgHo9jfn4eKysrWFlZ\nQSAQeOQMhlgshkKhQFtbG2w2G+LxONxuN9bX15FMJlnGJR6PIxgMIh6Ps32bzWZZ9CwSiaDRaKDX\n62EwGCCVSsHlchEIBOD1erGzs4NAIMCyLcV2uPh8PgwGA/r6+tDf3w+LxQK5XA4+n7/LIW5sbIRW\nq2UZpoaGBty4cQOzs7Pwer0PdYa5XC7L7MhkMiSTSTidTiQSCfj9fgSDQUSjUSSTSZb+pmeVyWRg\nMBhgtVqhVqthMBhgNpvR19eH6upq1iYnl8uZsaIojcPhMCKj1WpFKpXC5uYmNjc38/iEvxiU2Wtu\nbsbAwAC0Wi2SySTLSBR7fwBgTpfX60UqlYJKpYJOp0NraysaGhqwsrKCYDCY1zVwOBxUVlaivb2d\nOYIfffQR7Hb7QztJFJDy+XzweDzodDrI5XLGu9Dr9ZBKpSzYiMfjiMVi7A5QKBRwu91YWFh4YMfn\niTH+UqkUfX19+Ku/+iucPXsW586dg9PpfGjjz+FwUFNTg7/7u79DT08PS90tLy/jn//5nzE+Pl50\ng8Pj8SASidDR0YGXXnqJRdg6nQ5isZhdgrk1RIlEgoMHD0IoFGJ7extcLpcZ0cd1WHk8HrRaLWw2\nGxobG2Gz2WA0GmE0GqFUKiEWi9mFB
tw7oHRJ5l6UAFhqlS5CyijU1tbiwIED8Pv9mJ6eLvq74HA4\nqK6uxmuvvYbjx4+jt7eXPXeK7DkcDmQyGfr7+9HZ2YkbN27g0qVLOHnyJKLR6CNzGaje+sd//Mc4\nceIEvF4vRkdHcfr0aYRCIaTTaUgkEni9XqysrMDr9SIYDLJnTOlItVqNtrY29PX1Yd++fTCZTBAI\nBFhaWsLdu3dx584dLC4uYnNzE6FQqOiXu0AgQHV1Nb773e9iYGAAJpNpVzoTuPfZhEIhTCYTKioq\nsGfPHng8HrzxxhuIx+PM8X2Q505OkkKhgEqlApfLxcLCAux2+649S98rEAgQiUQQj8cZMbG9vR09\nPT3o7+9nBEWFQgGRSPS5tf5cDo9SqWROj0AgAICilY5EIhEMBgPq6+vR2toKHo/HynGUVSlmBoDe\nidvtxsbGBqLRKAuY6uvr0dfXx85CPsHhcNDc3Iy/+Iu/QFVVFdxuN6ampuB0Oh/63ZGNk0qlkEgk\n6OrqQm1tLQCgrq4OfX19qKiogFKpZIHe9vb2rgzq2bNnMTMzw7KrX4YnwvhTZJlOp7G6uoqxsTFM\nTk4+kuE3m82ora2FSqViHvwHH3yADz74AHa7vSTqzEKhEHq9Hg0NDejt7YXJZIJarWYXSSqVQjAY\nZCQ5iipEIhEsFgv6+vrg8/mwsrLyWC/ydDqNnZ0dTExMwO12Q61WQy6XQ6FQQC6XQyqVQq1WQ61W\nQ6FQIB6PY2NjA8lkEjweD1arFVVVVaiqqoJOp4NUKgXwCbN7amoKN2/exLlz5zA6Oop4PP7Y1v4o\nkEgkqK+vx8GDB3H06FEm/rS0tIS1tTVsbGygvr4edXV10Ol04HK5SCaTcLlcLPJ4VDIal8tFU1MT\n/vAP/xDd3d0Qi8UQi8WorKzE0NAQxGIxJBIJZDIZ0uk0i8pCoRC2trYQCoWQSCRQWVkJi8UCo9EI\nk8kEo9HI6tAajQYWiwXNzc148803sbi4WHTDz+VyUV1djc7OTjQ1NUGv1wMA44N4vV7E43FkMhno\n9XoolUpwuVzEYjHm6JKj8DCkPA6Hg0gkwrJSFOHe/zuoPh+JRJBOp1kmYnNzE2azmZUkqJwF3Ds3\niUQCsVgM4XCYZVyAew51IpHA1tYWpqencefOHUxMTGBnZ6coBFgul4vm5mb80R/9EQ4ePAgej4dM\nJoPl5WX85je/wfj4eEmk/gEwJ48MLYfDgV6vR1VVFbtb8gUulwuhUMiygnw+/6EcTgJpzNTX1+PQ\noUOw2WyQy+UwGo1Qq9UAAKVSCYPBwIIlgUDA9lksFoPD4cCpU6cwPDyMnZ2dB87elbzxp4esVqvB\n4/Gwvb2NlZUVrK6uPlRajzz21tZW9Pb2QqVSIZvNIhQK4dKlSzh16hR2dnZK4vJTKBRobGxEa2sr\nGhsbIZFIGNGRiEHr6+vY3t5GPB6H1WpFfX09BAIBTCYT9uzZg4WFBVy7dm0XSemrgrgFoVAIa2tr\nuzoPxGIxq3UaDAZotVpEo1EsLS0xkZD9+/dDoVDAZrPtSntms1kkk0nMzc3hww8/xJUrV7CxsVHU\nS0YqlcJkMmH//v04ePAgurq6EI1Gsbq6iitXrmB8fBwLCws4evQou+zFYjEjhxH57FH2ExkUs9mM\nwcFB6PV6ZDIZ8Hg8GAwGpkapVqshlUohFArZJR2JRLCxsQGfz4doNIq6ujpYLBaIRKJPRc96vZ7V\nt0dGRnDz5k1Wty4G+Hw+xGIx2trasG/fPlRVVUGhUCCdTsPlcrFyXyqVglQqRXt7O6RSKePr0DOi\nMtKDgr4314H4or2XyWQQj8cRj8fZmslRVSqVUCgUEIvF4HA4iMfj8Pv92N7extbWFvs3ZRX4fD5i\nsRg2NzcxOjoKh8MBn89XlPq6UChEdXU1Dhw4gFdffRUVFRXIZrMIBAKYn5/Hhx9+iIWFhYKu6YuQ\nSqXYXiUHTqPRoLKykjm4+cpS8Hg8KJVKRnDe2trC1tbWF56d3JKVRCJhmaaamhp0dXXh5ZdfRm1t\nLcRiMYRCIcvwAp84ndlsFkKhkHWqUTbm8uXLuH37NkKh0K7s6xfhiTD+EokEBoMBKpWK1cge1sMS\nCARQKBR48cUX8e1vfxs6nQ7BYBCbm5tYX1+Hy+Uqap2fwOPxYLFY8NJLL6G/vx9yuRxcLpfVecLh\nMHw+H27duoXp6Wl4PB4cPnwYVqsVAoEAOp0OPT09mJychMVigcPhyNvnoog9kUiwaCgYDMLhcDBj\nFI/HUVlZCZPJhO7ubvT09MBisTCHBgBrH1peXsbt27fh9XqLZvjpErHZbNi3bx9ef/119Pb2QiAQ\nYGJiAsPDwzh9+jSWlpYQi8UwMDDADit55RUVFairq8PMzAwzTA8DqtMnk0k4HA6k02koFAoA90Rv\nNBoNxGIxRCIRuyToolMoFKipqWHGUCKRQCgU7moLpc/J4/FYGWnPnj1YXV3F1atX4Xa7i/L8ZTIZ\nKioqcPToUTz33HNQq9XIZDKIxWIYHR3F8PAwJicnYTAY0NPTg7q6Okb8kkqlSKfTLBt1v6PzeSDn\nM1c++GE+O4/Hg1QqRVNTE7q7u1FVVcXuqUwmg52dHYyOjuLmzZsYGxtDPB5HIBCAy+Vi7yKbzTLe\nxqOqGD5Ia9cXgcvlQqvV4s///M/x4osvwmg0QiAQIB6PY3Z2FuPj41hfX897Kv1hkGtMKctCLcZ0\nv1Am93HvZ6FQCIvFAo1Gg1gshvn5eYyPjyMcDn+m4aVgRy6XQ6/Xo7m5GX19fRgcHIRWq4VGo2Fi\nVhS5U4cI3bFEcFcqlczhdzgcmJubw+bmJgKBAHsuT7zx53K5MJlMaGhoQF9fH2pqaiAWi8Hj8R74\nZZLRb2trw8DAAA4ePAibzYZsNouxsTF88MEHmJ2dLQnRCj6fzwgke/bsgc1mY4QuSgva7XY4nU6M\nj4+z1LJarUZzczMjQFFdvqGhAaFQCMFgMG+XOV2WFOGSE5DNZtnBrK6uxjPPPIO2tjZG3iJjlEwm\nsb6+jlu3buHWrVtwOBxFeRd0kdhsNnR3d6OtrQ3d3d3o7OyEWCzGysoKrl69ynrLI5EIVCoVFAoF\nNBoN446kUin4fD7G+n2UyJ9+z8rKCk6ePMlIlNTaZDabIZPJmOEWCAQsO0bll9ysisfjwfXr1+Fy\nuRCLxcDj8aDX6zE0NASDwQChUIj29nY4HA7mUBay3kzkz7a2Njz//PPYt28fzGYzKwWNjIzg+vXr\nmJiYwPr6Ojo6OtiFLxAIwOPxEIvFEAgE4HQ6H9qRf5T+fYLVakVXVxcOHz6M/v5+aLVaFs3fuXOH\nfU1PT2NxcZFlhoizUSro6enBs88+iyNHjqCxsRECgQCxWAxOpxPDw8O4cuUKPB5P0Tk4wCfGrb29\nHc8++yyMRiPb7+RYJZPJh8r+PCzS6TRCoRDm5ubw3nvvYWZmBnfv3kUoFPqU8SXSnsFgQHd3Nz
o6\nOlBVVYXm5ma0t7eDw+HA5/Ph4sWL7IzSzxGhm3hTYrEYNTU17LNOTU1hbGwMLpeLvZsH3cMla/wp\npVxbW4vBwUEcPXoUMpkMfr+fpfUe5EOKRCJYrVa8+uqr+Nu//dtdnv7Nmzfx4x//uCQMPxFWmpqa\n0NfXh8bGRuh0OsbkX1xcxK9+9StMTExgbW2N1fyz2SyLNCkFRU5TW1sblpeXYbfbC/Y5co0daRP0\n9PTgtdde21XnB8A+2+TkJP7zP/8T09PTRYksclnxe/bswd///d+jvr4eer0e2WwWq6uruHnzJs6c\nOYPTp08D+CRlbjabYTAYwOfzWSvj/Pw8RkdHWf3tYUHR7tjYGMbHxwHc28c6nQ6NjY3o6uqCTqeD\nUqlk0btcLkdLSwsTPMnttlhfX8e//du/YWRkBIFAACKRCL29vfiXf/kXaLVaCAQCNDc3IxAI4Ne/\n/jXLNBUKRHYaGhrCP/zDPzBHKhaL4cKFC/inf/onRCIRJBIJdiGaTCZWV+dwOAiHw1hfX8fExASm\npqYe2Eh9FaEdInx9+9vfxv79+9mlTJycd955B++99x7sdjvrFMgXvopzz+FwcOLECfzjP/4ji5iz\n2SyCwSBWV1dx6tQpXLp0qWSkc8lRP3z4MP70T/8UVquVBRPr6+u7jHC+0v7xeBwrKyvY3t7GjRs3\nEIvFEIvFWPmH/jatlQz/j370Izz//PO7Wj13dnYwMzODf/3Xf8Xt27d3lZ7obqIMk9lsxt69exGN\nRjE1NQW73f6pTGkmk3myI38+nw+JRILa2lpYrVY4nU52CU9PT3/pC+VwOFCpVGhvb8f3vvc9HDp0\niG2QtbU1vPvuuzh16hQjzRUbEokERqMRhw4dwjPPPAOlUslqhqOjozh37hxu376Nzc1NBINB1upF\nl2QgEGA15nQ6Da/Xi9XVVYTD4aJ8Hi6XC6PRiN7eXrS3t0OtVkMoFLL/TmIc77zzDs6cOYP5+XmW\ntirkGvl8PsuS7N+/H/v27UNtbS0UCgUikQimpqZw+fJlnDlzBlNTU+xAGgwGDA4OoqqqitXmgsEg\nlpeXsbq6CqfT+VgIi7TPabz07Ows3G43E6ERi8UwmUysz9lqtSKRSGBzcxOzs7O4ffs2RkdHMTU1\nhWg0imw2C61WC7PZvKsuurOzg83NzV0EqkKAw+GgqqoK3/72t3HixAkIhUKmmPiLX/wCZ8+eRSgU\nYoZHIBDAbDajp6cHer2eOSqTk5P4n//5H8zOzrKz8UV/86saBLVajcbGRhw4cAD9/f2MmJjJZDAy\nMoJTp07h8uXLTIyoFO6Yz4LVasX+/fvR19fHunWoNHH27Fn87ne/w9LSUkmpb1ZWVmLPnj3o6+uD\nyWRive2ZTAaBQGDXNNZ8ZjwzmQx7t1SKzi0fUcTf3t6OoaEhHDlyBO3t7czwRyIR+P1+vPvuu/jg\ngw8wNzf3qe4g+j3UUSQUCtlQJbvdzvRE7l/bg6BkjT9dzAAYwez27du4dOnSlx5ugsViQW9vL37v\n934PtbW1yGaz8Pl8mJqawttvv42JiYmSqPMDYEQ4SjkLBAJ4PB6sra1heHgYw8PDWFhY+JSASW67\nEPBJe53H48H6+vpnbo58I1edrb+/H3V1dUygJZlMIhqNYmdnB/Pz83jvvfdw+fJleL3egl2QtD7q\nTmhtbcXg4CBeeeUVpluwtbWFpaUlXL58GRcvXsS5c+eYMafIs6+vj2nOp1IpuN1uVo4hlbHHBWL0\nh8NhOBwO1pduMBgYQ54uI7vdjrt37+Lq1au4cOECJicnmfKcUChEfX09urq6mIOZyWSwtLSEiYkJ\nBAKBgpJehUIhKisr8eKLL6Kjo4Oxly9fvoy33noLc3Nz7LlLpVJYLBbU1dWhrq4OAoEAyWQSXq8X\nk5OT+OijjxAIBPK+j/h8PtMK2bt3L+rr6wHcu6ecTicuXLiAt956Cw6Ho+AO7cOAup+ef/55tLa2\nshSz1+vFxMQEzp49i9OnTzMCYrFBUXRlZSWOHz+OtrY2KJVKlqWggGdjYwPhcDjva6Za/OdlmUgX\npaWlBYODgzh27Bi4XC5CoRDsdjtcLhe2t7dx8uRJnDlz5lNZCrqnqCtHr9dDIBCwn/0iGeMHuXtK\n0vhT6iwcDuPs2bMYGRlBIpFghJgH+WBcLhfd3d0YGhradcmNjo7i7NmzzDCWCrRaLZqamqBWqxmp\ncWRkBP/3f/+HiYkJrK6uslo6gQw/tZvIZDJkMhkWJVK6sRggVbOamhpG3AIAn8+HhYUFnDlzBh99\n9BEWFhYKruRHh9Jms6GzsxNdXV3o7u5m6yS1xLfeegsul2uX0ppAIIBGo4HVakVjYyPUajUjYy4v\nL+ODDz7A0tJS3h2ubDYLkUiEhoYGDA0N4bnnnoNOp4PT6cSbb76JmzdvYn19fVdKkKagnThxAr//\n+78Pk8nELrCLFy/i17/+Ndxud17XnQtiZ5tMJigUCkSjUaytreGNN97A6dOnsba2xpxzauM6ceIE\n+vr6WIbA5/Ph2rVrmJychM/ne6Ayy1dNkctkMtTX1+Oll15CR0cHiwIXFxfxxhtv4PLly1hfXy+J\n+vjngcqMOp0OXV1dMJvN7HPMzs7iJz/5CSYmJpiITimAFBhtNhsGBwdhsViQSqUQjUYxPz+PK1eu\n4Pz585ienn6s+iaPCiolKhQKSCQSFu3Pzs7iv/7rvxh3iDo/PuvnxWIx9u7di+9973tIp9NYW1vD\nO++881hsV0kafwDMiDmdTvb/Hzid8f9LBi0tLeju7oZMJoPP54PD4cCVK1cwMjICj8dTUpu6srIS\nAwMDMBgMSCQSrJ/+8uXLTLL0ftDmUKlU0Ov1LK2eWwf2+/2PJc35oKA2FqvVioaGBphMJgiFQoRC\nIUSjUSwsLGB4eBiXLl3C7du3EY/HC/oehEIhlEolWlpa0N7ejs7OTthsNtb9sbKygqmpKZw+fRq3\nbt1CPB7fpeculUqZ9KtWq4VQKGQpPxLfiMVieX/muW2WNFMhk8kwYiip3OXWHW02G/r7+xl5liJn\nmguxtrZWsHdBa+rt7cUzzzwDlUqFxcVFnD59GufOncPMzMyuC1EikcBsNqO3txe1tbXgcrnIZDLw\ner24ceMGk3TNJ+i8DQwM4LnnnkNLSws0Gg27qyjy39nZeSTl0UKBjGhPTw8OHDgAq9XKuiWoA+ru\n3btsTkWpQCqVYs+ePdi7dy9sNhukUimi0SjW19cxNjaG8+fPY25uDoFAoOjdQlwuFxaLBS0tLWhs\nbIRer0cikcCNGzfw8ccf4+LFi1hZWfnc80aCaj09PThy5AiGhobgdDqZdsjjKCmWpPH/KiQc4JOo\ns6mpCa2trQCAlZUVXL58GRcuXMDo6GhJkPyATy7xmpoaHD58GEajEeFwGLOzs5iZmcHKysrnPgc+\nnw+5XA6VSsUUwjice9rner0eNTU18Pv98Pv9BfssY
rEYer0e3d3d6O7uZrVQj8cDl8uFW7du4eTJ\nk1haWmI16EJCLBbDbDbj6NGjbIQzl8tFIpHA2toaLl68iDfffBNut3vXBU7GSqlUoqOjA83Nzcyb\nz23NyVV1y7fxB4BwOIxgMIhQKIRYLMYcxdyImdL9LS0t+IM/+AOW4qVsGMkCF7rWLxAIcPz4cbz+\n+uuQSCQ4efIk/v3f/52J+OR+r1KpRGVlJVpaWlBRUQEAjNty69YtLC0t5X3NpOn/6quv4tVXX2Xk\nWuo3z2azUKvVrMRVKjXy+0G1aBrVSx0KyWQSbrebiUSVSkmUoFAo8MILL+DYsWNMp4UItjdv3sTV\nq1cfe7ntYZHbftjU1IQDBw6gt7cXZrMZsVgMJ0+exBtvvMF4LJ8HgUAAi8WC119/nXWokVT44zqr\nJWn8HxX04IeGhvCDH/wAfX19rI4yMzOD//3f/8Xc3BxSqRSEQiG6u7vxzDPPQK/XM6W0hYUF3Llz\nB0tLS9ja2so7w5XWLJFIoFKpwOfzEQgEcPfu3U/pe5OkqUwmQ11dHerr6xn7W6FQsFQoscULIdgi\nEAhgtVrR1taGoaEh6HQ61tZiMpmYlCxd5p2dnUgkEvjoo49w5cqVgkb+HA4HBw8exAsvvMDaDpVK\nJYLBIHaaYKOhAAAgAElEQVR2djA1NYXl5WWmxU7986SoRe1/+/btQ3NzM2ObU62dGOmFEGihi295\neRkzMzNobW1FTU0N+vr68Dd/8zfY3t5GNBplhlQoFKKhoYFFqwBYK92pU6cYmbEQFyeHw0FPTw9O\nnDiBvXv3QqFQsNZQiUTCLjgO557kM9Wln332Wdbum0qlMDs7i1u3bmFzczOvJTzaDzQPZHBwkI1y\npudFJZjvf//7OHToEOx2O2vL3drawubmJux2e1EcrVxwOBx2Vvfs2YOKiopdAmLpdBoCgQB6vR6R\nSKRkOAuUQq+oqIBer2elUSInZrNZWK1WiEQiJnBVSOeFy+VCp9PhmWeeQVVVFeu+aWxsZJkVAOju\n7sbx48cxMTEBl8vF5LSJk0OiW52dnejp6cHevXuh0+kQiUQwPj6OkZER+P3+x3JOnxrjz+FwIJVK\nYbVacfjwYXz/+99nBJZkMomdnR3GptRoNDAajTh8+DB+8IMfwGKxQKVSsal+BoMBw8PDyGazrN6b\n77WT1j2JPJBSE+nmEzObxvp2dXWhvb0dra2t0Gg0zFjR5w2FQkwKNV8XOmUZ2tvb8cILL7BoiFq1\nCNSBQH3ZpE8dCASwuLhYELIfpeOam5tx7NgxJgRCkRuJEimVSnR1dbERrlQbFYlEGBgYwN69e1lb\nJanm0QVUyEuHhGFcLhfm5uZw48YN8Hg8dHR04NixYyySi0ajSKVSrDxAcrjkIM7Pz+O3v/3tF2aY\nHieoXenQoUP4zne+A5vNBolEgng8DpPJhN7eXng8HjaoyGq1or29HS+99BIGBgZYtiUej2N6eho3\nb97M+yQ1klU+cOAAvvvd77LhKuRgpVIpGI1G5vAmk0lEIhEsLi5ieXkZ6+vrWFhYwPz8PPx+P9xu\nN+sGKWRdmgKN5uZmPPfcc4xjRLLUJNYllUrR0NAAHo+Hra0tRKNR1sZWzKiaZp5QeZMMpkgkYiWh\nnZ0deDwe+Hw+lvXMncOQz7XpdDocOXKEKcgaDAb2fKk02NnZiUwmA41Gg7W1Nfh8PuYMcjgc1NTU\noLe3F0NDQ+jo6IBSqYTP58PMzAxGR0cxNjb2mSXgR8FTY/y5XC5sNht+9KMf4ciRIywKJu/QZDLh\n6NGjcLlckMlkOHbsGPr6+lBfX89GbJLQiF6vZxHshQsX4PF48npISXmNDKBGo0F/fz9rKTt06BAb\nDUqHlMRcSPOdxE6obkcXTD4vRQ6HA7lcjmPHjjHCGT33+0UuqFdVIBBALpfjO9/5Djo7O/GTn/wE\nIyMjBSsBUFRJFwl57GSUqPUzV+2NWikpk0FZFhKbSiaT2NzcxNraGtxud0E+C/3+VCqFhYUF1is8\nMDCAV199FbW1tZBIJJDL5Wzv0sxvAEgkEnC73VhbW8PCwkJByK90Rv/sz/4MBw8eZGePLvBDhw6h\nrq4ObrcbLpcL6+vrsFqtLG1Kzhqd6bGxMVy7du2RR/g+KFQqFbsreDweQqEQHA4HJicnsbGxgVAo\nhJdeegldXV1s8ppCoUBTUxNsNhuTew4Gg3C5XLh+/Tp+8Ytf7BK0KsTeJy5QdXU1uru7odVq2X6I\nx+MIhUKIx+PQ6/U4duwYtra24HQ6sbCwgOXlZWxsbBSNJ0XncGNjAw6HA0qlEgKBAFqtFv39/Whu\nbkY8Hmfj3kOhEGZnZzE8PIzp6Wmsra3lNSMnEAigUqlYNpYkeGnyJ4/HY46X1WrFoUOHWNYzHA6z\nYTwKhQIVFRVMx4PL5WJqago///nPcfv2bWxsbDy2mSdPlfEnzXNK4eemsqqrq/Hyyy/D5/NBLBaz\nzU+pWoqMVCoV650mEZV8Hkxi2AYCAdjtdpjNZkYqE4lETDaU5HtJfzwajSIUCsHj8TCddx6PB7fb\njZGREUxNTRWkbpdIJNh0uFQqxbIUJHmbO2SFhq7IZDLU1NRApVLB6XSioqICMzMzbF5BPkDvcGZm\nBufOncORI0dgs9nY+yVZXlLMI+eFIuRQKASRSMQMEPXHx2IxbG1tYXR0FJOTk/D7/QVLN9Jnopo/\nZYwAYHBwEP39/UwGOJdHQ62gN2/exOTkZEHa4zice9MR+/v7MTAwAJvNhkgkgvX1daTTaVgsFqjV\nauj1eoRCIfh8PrhcLmg0GiaeA9x7H8FgkBEUC0FKo3bCO3fusOFCDocDS0tLzNkLh8O4desWJBIJ\nTCYTc1Zo+BJlirxeL7hcLu7evYtMJoPV1dWCRtN0Du/v24/FYiw7arfbEQgEIJPJ0NHRgerqakxN\nTeHDDz9kkWqhkc1mEQ6HMTIywlQ1SeaajCU5haR3QoOfyIjSFNh8PG/SGaAWdZFIxFpzI5HILnI2\nTaKkPU2ZFVLfpLuTWtMXFhZw5coVbG1tPVYi6VNj/CnqdTqdcLvdbP55Op0Gn89HQ0MDOjo6WJ0u\nmUyyIUEikYhN+aNIhKZwFULVikRWZmdnWQRaWVmJqqqqXSNvgU80okOhEHZ2drC2tsaUEHk8Hux2\nO95++23cuHEj71PxMpkM/H4/fvnLX2JhYQHf/OY3UVtbC5PJBIPBsIuESM8cADOwZrMZf/mXf4mh\noSG89dZb+PDDD/Nm/IF7z3l4eBhOpxNGo5ENhcllzZOnTiASY+5oVjro1F+8traGCxcu4ObNm0XR\nVaDPZrfbsb29jcnJSaytrbEx0DQXHPhkKI3D4cBHH32EW7duFSSaozr/kSNHUF1djWw2i5WVFYyO\njiKRSODo0aOoqqqCSCRikyGrq6vZz+eOhfZ4PFhcXITL5SrI8/b5fLhx4wYmJibA5XI/c7TtjRs3\nWIZrYGAA
hw8fhslkgsVigc1m2zWtrampCYODg2wUc6FAXQk7OzvY2NiASqWCXC5HNntPadPlcuHa\ntWuYnp5GKBTCvn37WNaxrq4OExMTzLgWA4FAAKdOnUIqlUJraysbqkTT7sgJowi6ubkZTU1NkEql\nSKVSuHr1at5E3ZLJJHMMg8EgeDwec1BdLhf4fD70ej0aGxuZLDfwSemUOtQoA0zn1G63Y3l5GUtL\nS489+/zUGH/SWl5aWkJVVRVsNhtkMhm70Im8ReByuZiensbbb78NiUSCxsZGvP7662x2OI3Jvd8Y\nPG6QJz4zM4N33nkHUqkUGo2GGRsaW5nNZiGTyXbJQkokElRWVjK2LofDYSIvhWL4U9ZidnYWJ0+e\nRFNTExoaGth4UzoIEomElQWi0SjMZjMsFgsEAgFsNhteeeUV2O123Lp1K6/pOUrR/+xnP8P777/P\nnCmRSMSyPSKRCLW1taitrYXNZmPkrvs/dzqdhsfjYZFSqbRF5U5azHUe6bmGw2F4PB643e6CyClT\nNNPR0YE9e/YAAMbHx3HmzBkEg0E2fKi7uxtdXV0QiUS7JLzpWVNkRb3TVGLKt/GnUhqdQ5LVzv27\nuWWi5eVlJJNJNrlNo9HgG9/4Bl5//XVwOBxotVrs27cPU1NTuHjxYsGcRXqWRGDMzXARkZjumEAg\ngOXlZZbJS6VSqKyshM/nKxoJkLJWTqcTIyMjbJ4FnWEqfdJep6FwAwMDUKvVaGhowLVr1zAyMvLY\n+Rakl+Hz+bC9vY1kMonh4WHcvHkTiUQCFRUVaGtrQzgcxsTEBHw+H6xWK1NWJDtD2WpybMxmMyoq\nKqBUKhl34XHhqTH+mUyG6XtTrZvarnLV7+gA0Nz18fFxaLVaKBQK5tHSAY9GowUh5GSz9ySHY7EY\nOjs7UVFRgYqKCjabnVKFlN4idjT18ms0Gpb5iEajn6sLkK+1x2Ix2O12RjL0er0wGAyMMEkkv6am\nJgiFQjidTtTV1aG1tRXV1dVsGNHp06dZ726+olFKeZ89e3ZXTy6Rhsj4DwwMIJlMQqPRMONPET/1\n9IfDYca293q9D6w8mS9QFGE2m2EymVj9mQh+1JK2traGmZkZOByOguwTatNrampCRUUFNjc3MTY2\nhitXrqCiogIGgwFzc3NQKBSwWq1sP9Mzz01R5xIwc6cZ5pPUSt0cD/J3stksO7PAJ7oXzc3NbOQw\nCdVotdq8rPmLQJ0fCoWCTcIDPiHTyeVyiEQixGIxVicngiMRBosF4kZ5vV5MTU2hpqYG1dXVUKvV\nzPDTfRQOh6FSqVhnlMlkglqtRjKZxPj4+APPvH+YtREB1+l0gsPhMKInKYlyOBx4PB4EAgFsbm4i\nGo2ioaGBZRfpfEajUTapkqb9qVQqpsnxuPBUGf9oNMrYqWq1ms3UpouPLmdK0SiVShw+fBjt7e1o\na2tjgh3xeBzj4+O4fv16wYwoCfucO3cO0WgU/f39mJiYwMcff4xQKMRkIg8cOICBgQGkUinWHpib\nugsGgwVvc6FDmUgkWHskOVjRaJQZ/52dHWSzWUxPT0Oj0aClpQU//OEP0d3dDeAescpsNjNHIt9r\npoucUm0bGxvg8/ng8/lQq9WorKxEZ2cnE8qhmmIkEoHL5cLKygrOnTuHy5cvw+VyFV00isiLL7zw\nAg4cOMA6QCgqIQlUGlA0Pz9fkP1dX1+Pl19+GRaLBXa7HWfOnMHk5CSbJlhZWYm1tTVWxiINgM8a\nQ0znl4hy9PnyzcvJ/d8vA5FbczMw1E1CzHqPx1MUhVEyNFSSAz7JCAD3HDVqOXa5XEzwhzJ8xRQv\nImc9mUwiEAhgdXUVJpOJKW2Sgp7D4cD8/DyMRiOeffZZyOVyCIVCmEwmVu6LRCKP/Y6MRqOYmZlh\nQ79IjEqr1aK6uhqNjY2sfZKIwzRoi8vlMmKx3+9nZQAKTJRK5WPPuDw1xj+bzTK5Sp1Ot6v1jTwq\namVJJBKIRCIwGAw4ePAg6urqYDabwefz4XA4sLi4iJmZGdjt9oIZURr1ubCwAC6Xi0gkwlqZ4vE4\nNBoNE45IpVKMfEbpu1QqheXlZSwsLBQsY5ELuuiIRU7MYRpDGY1GodVqWTYgEomwQ5j7O3LLGvnE\n/Rd5Op1GJBJhl7VKpUJlZSUkEgn7fq/Xy+p48/PzmJ6exsTEBBYXF/POOH8QaDQa1NfXs6mQFNlF\no1Fsbm5iYWEB09PTuHz5MmZmZphaWL5Al7XZbMbAwABTnAyFQtBoNKisrERvby9MJhM0Gg3rYKFz\ne//vIsNvt9sxOjoKt9udN8NPGaHc9tkHBZG7FAoF+1KpVOy/E9Hxi7TZ8wGK3MVi8S5+S+6z9Xq9\n2N7eRjAYhN/vZ5nEYu9t4BOHPRaLsY6Qra0t2Gw2AJ8IX6VSKXi9XvB4PASDQVa2uXz5Mpual6+6\n/9bWFnuv9fX1sFqtUCqVLIKPxWJMH4Q+CxH96N/hcJjZKgo68jFY6akx/gBYdFxTUwOpVMqiBor6\nKW1CpBWqw5DnlUqlsLi4iFOnTmF+fr7g5K10Og232410Oo2tra1dNVmqeQKfKI1JpVJW94zH45iY\nmMCdO3cKrudPkZpMJmMZCFKco6iZeln5fD5kMhlL2+bWbEm2spia3NSm1dbWhsHBQWi1WpY92tra\nYunq0dFRTExMMFGfUoDZbGZZLIvFwtriQqEQpqen2aAWl8tVME4IEZ1aW1vh8XgQi8Wg0WhQV1eH\n9vZ2prHR2trK+rYlEskuJ5DSuVRumZubw+nTp7GyspI3ngW1pRLrOrcV90E+s0QigcViYelmo9HI\nPlM4HMb8/Dy2trYKer+Q0qNIJNpVDgU+Kd8tLi5ibm4OW1tbJafwB9zL8EYiEWxtbWFnZwfBYHAX\ny14oFEIoFLKMUCgUQigUwuLiIv77v/8bo6OjeTuv6XSaZUc4HA4aGhqgUqmYk0UBKp1Lyj7EYjGW\nEUomk4zcSmRzyug+7vfxVBl/EqewWCy7Lo772ZS5ojNU39rY2MDFixfZpe5wOAru7VK0kU6n4fP5\nEIlEmKdus9kwNDSEtrY2qNXqXQ5LMBiEw+FgjORCShdTNEFpQUr35w5gog6GgYEBVFdXI5FIQCqV\nQq/Xo7q6GpFIBF6vF3a7HTs7O0UlztEY6KqqKtbqGQgE4HK5cPbsWXz88cdYWVmB0+kseo0/FxwO\nB52dnTh8+DB0Oh2THXY6nZiamsL777+PW7duweVyFcw5VKlU6OzsRHt7O0QiEdNTMBqNkEgk0Ov1\nzIHNvZDpgswt121tbWF9fR1TU1MYGRlhGu75AnV/ZLPZBx4mRnwEm82Gjo4OdHV1oba2Flqtlk3+\nSyQSTOZ6bW0tb+v/LCgUCtTW1sJsNrM23HQ6jWQyiaWlJVy7dg3z8/PY3t4u2RHEANidx+PxoFKp\nWD893UXUMeL3+/Hxxx9jdXUVc3NzeW+rJB6Zx+Nha6F1UcbZ4XBge
3ubtQ8HAgHYbDY2A4XKp8Fg\nkDkEbrc7L2qtT5XxJw89Ho8jGAzuEr+hFGhuSpmcgM3NTVy7dg2/+c1vMDMzUzSvN7e2mMlkIBKJ\noNVqUVFRga6uLjY+lFq3SCBicXERY2NjuH37NmMaFwLURZGrdJfrsRJj1Wq1orOzE319fWhvb2dp\nR7FYjEwmA6fTiYmJCSwvLyMYDBbVoOr1euzduxe1tbVsSuLOzg4mJydx9epVDA8P75ovXwqgd9DS\n0sLUxegcLC4u4tq1a7hy5QoWFxfz3v6ZC6VSiX379qG1tZW1ZGk0GlRXV7OL8X5QpoieLQ1Lunv3\nLu7cuYPr16+zXvR8GigyIFSOo0v5s8oMdNGTQ9vZ2YmDBw8y408aHOFwGBsbGxgfH8fk5CS2trby\ntv7PgkajQV9fH2shBsAIZqurq7h9+zbW1tZYNF2qoHdC92MuW55KTdlsFk6nE8vLy5iammIKlvk8\ns6lUCjs7O1hfX8fc3Bx2dnbYXU0R/urqKra2tlgGw+/3I51OQygUQqPR7OKtiUQiRtDNBwn6qTL+\n29vbePfddxEIBMDlclFVVQWDwfApxTngE0chHA7jzJkzeO+993D37l34fL4H9vTzAVKfI51qoVCI\n/fv3o7e3Fx0dHaytjwiO6+vrOHPmDN5++22mHV6og0vEIblcjnQ6zQh9dPiUSiWsVitTAKTpVrmR\nndfrxejoKH7+859jfHw87wf0i8DhcGC1WvHKK6+gvr4eHA4HsVgMKysrOHv2LGZnZxEKhfJSf/sq\nkEqlMJlMsFqtqKiogFAoRDgchtvtxtWrV3HmzBk4HI6CO7RKpRK9vb1oaWlhsry5X4T7/53rBLvd\nbty9exenT5/G9evXYbfbGVchn+9AJpPBYrFAKpUiFothaWmJZePudwBI6bK+vh7PPPMM9u/fjz17\n9kCtVrPaOjG8f/nLX+LcuXPY2NgoaIaOy+XCarXim9/8Jjo7O9lzpjtwc3MTs7OzLHtX6qDhRHq9\nfpcGB5W5FhcXcefOHczOzrKAIt9nlgTBrl+/zsjDuU4u/ffcUiFJuxsMBkilUjidTqYXQo56voZW\nPVXGn6bhkQiOxWKBTqdjIhBcLpeRKcgbi0QiOH36NJtdXew6F5fLZURE2hBtbW1MsIVUn2ZnZzE/\nP89G5M7Ozu7iBeQbpEPd2dkJhULBUlo03Y7U2iwWC/bs2YP29nbWMhcKhbC9vc0unOvXr+PWrVvw\neDxFM6oUuZlMJqY+GIvFMDExgatXr+LWrVuw2+0lFfETbDYbjh07xgREAGBjYwMXLlzAyMgI468U\n+lKnNDill0kfnob25Lbs0fdzOBwm1EX7/M6dO7h9+zaWlpbyRta6H1qtlrWiisVi1hZJn4EIWfQZ\nyNHt6elBbW0t9Ho9wuEwnE4nHA4HVldXsbCwgPPnz2NmZqag74PS4zRmm+6RaDQKh8OB2dlZzM7O\nwm63F5wv9KjgcDhMR4GGa6VSKUxPT+PcuXNsvxAptBAgZ2pnZ4eN0wbwmRkuavvk8/m4ffs2ayt2\nOp0YHx9nxp/H4yESiUAsFjPH4XGhJI0/RQIPe8mSetXw8DAuX768a9Y5seI3NzdZK8UXpfKKAerT\nrqqqwje+8Q00NDTsmi6XzWbh9/uxsbGBd999F2fPnsXCwkLBLsTcdXK5XHR0dOA73/kO5HI54vE4\nNjY22EXT1NQEs9nMmK3klQcCATidTty4cQOXL1/G+fPnsbq6WvRog8fjQa/XQ6/XswFLbrcb58+f\nx+nTpzE5OckIjKWGpqYm/Mmf/AmqqqoAgIlG/fSnP8Xq6ip2dnaK8nzpPHq9XigUCsaEzhV+0mg0\nkEgkyGaz7DK8efMmK2FNT09jenq64GfUYDCgp6cHQ0NDqKmpYZEjyQsHg0GEw2HWbkuSrLmtlaur\nqxgbG8PFixcxNjaGubm5omS2eDwezGYzY55T377f78fS0hLOnz+PiYkJ7OzsFHRdXwXZbJZpKBD3\nKRQK4eLFi3jjjTcKMtH0i9aWex9/3tkj+3P9+nWMjY0xrhRF/ERiJCeH+FSPa/+UpPF/HB+O2MFk\nGH0+H4v8iVFJh7lULnRaczqdZhchGf5QKITNzU1cuXIFFy5cwNTUFBMGKsbFns1m4XA4MDU1hcHB\nQdbWwuPxWP2KDD+lFufm5nDnzh1cuXKFiTG53e6SeP4qlQovv/wyjh8/zrTwaeodscqL7aDcD6o1\ny+VyVFRUsLGhFNW5XK68aZk/CJxOJ37xi1/g/PnzUKlUTMOcpihSWyVNsqR12u12bG1tIRgMwufz\nFeWMrqys4MMPP2SZFKVSyXgq9G/qXiHyHOlc3Lp1CyMjI9jc3ITD4YDD4SiaAwbcMyJKpZIZEOon\n/+1vf4vLly/j7t272NjYKMraHgW5ZSNqUZyZmcHJkydx6dKlvLewPm4QV4pExAhkDyjoeNzl6JI0\n/o8L5IGXiuzqg4CYni6XCwaDAQKBAF6vF2tra5iYmMCZM2dw5swZtmGKgdxLenx8HK2trWhoaEBl\nZSVjt1KkTyNMNzc3cePGDQwPD+PcuXNFFQv5LEilUgwMDKCnpwcCgQDBYBB2ux2rq6twuVxFLwd9\nHkgEhFjPhHg8Dp/PV9QxrD6fD8PDw7vqnrmqirRXcp3eXCnfYsLpdCKRSECn0yEajUKv10OtVjNR\nLYVCwaJoANjZ2WHDfs6cOYNTp06xPvlig56vx+PB5OQkZDIZPB4Pzpw5gytXrrD24icFFLRtbW1h\nenoaAoEA165dw1tvvQW73V5QLsXjAJGk7wdliWhQ0eN+R0+18X/SQNHmysoKfvOb3yAQCKC3t5cd\n0unpaTYGtNiXYzZ7b5BMNptFQ0MDlEolY/ID9za02+3G4uIirl+/juvXr2N9fR0ul6ugjPMHBQmz\nkCrk7OwsLly4wBQLSx25Bj63Ra4U9slnGXOK3HIjuNyvYiMWi2F7exvvv/8+Ll68CJFIBJlMxjTi\n29vbsXfvXlY/P336NK5cuYKpqSnY7XZ4PJ6S2TcUGbtcLgwPD7Myy+LiIrxeb9H3yMOC+CMffvgh\nJiYmEI1GsbOzU5AJj8UAnaGyyM9TDhL6uXbtGqLRKJaWltilUgzG9heBCENXrlyBz+fD6OgoxGLx\nLq2C9fV1TE9PM9ZtqR5OYt1SJLe0tIQbN27sIu6UEkhYibotcklFubKypRBFl4pBfxjkTj90OBxM\nIEcqlWJjYwOrq6tYWlqCUqkEAFy7dg1TU1PY3NwsarfQZ4F4CuFwGA6HAwKBAHw+H5FIpGQclIcB\n1b7X1tawublZsOmrxcLDyks/KMrGvwQRCARw9+5dLCwsQCQS5UWH+nEgm80iEong/PnzuHDhAmNw\nk7YCXaClYIC+DGRMiZxIfc+FUsJ7GOQOIzIajaz9kxTbaCiIWq1mGYAyvhqoVh6LxeDxeHD37l28\n//77u0Yll7rxoZryk74f
KBIutdJhPpGPvVU2/iWKfNZ68gGK7oi4QpdiqfXEfx5o1O+dO3fg9/tx\n9+5deDyekrsoqWZOUSiNto3H48wpAAC5XI7a2lokEomCDaf6OuH+aOxJ2ONllJGLsvEvYTyMnnip\n4H7G6pMC6ul3u92MSFQKw3pykSuQQzPMgXtks7GxMYhEIpZ12d7eZtPMCjHz/uuI8jMt40lG/sen\nPRrKp6qMgoJGfgqFQiQSCfh8PjZUqVSQS5Ij6WSFQgGtVgu9Xs9a/5RKJSKRCJxOJzY3N1mbWdlY\nlVHG1wpfaN/Lxj8PyO1hpolwpM5UyAs4Vzb1Uf7uo4otFRuPuu5S+ryfF63TGqlVjpQrgU9Uw6RS\nKZNPDgaDiEajnyLd5bba0ffmsoof5BlQ9oEEboBPtOLLzkYZTws+a8R4ofb2V7yTvtC+l3Ta/0lM\nVxJxjHSnaTITTcoqVA08N0X8KGzrz5uHUOr4rBayJxGft376/7SXPqt74sum3ZHBp+4GEtmhyWMP\nypym36HT6aBQKMDlcuH3+5kuwpNWsiqjjFzk3n9cLvdTd2mh7nHC14rt/6gfliKS3OEVhTICRHoL\nh8PIZrNsZn2hiWMPs0E/y8l6Uo1m7mChBwGHw4FUKmXzH0gcp5i8hVzHLR8RNImkkKGnyJ/IpV/2\n93Idk1QqBZ/Ph1gstot4+FnR0lcF/V6BQAChUIhYLFZyhMxSRi4htJQ0FUoR96sI3n8fPOhzo64c\nALuErB52LTwe77F3TZW08X9U0ExnjUYDmUy2qwfa7/ezntd89bjSONVkMskmOxVj9vtX/Xv0809a\nBuZBHQBKfVutVlRVVUEqlSIYDGJ1dRVutxuBQGDX7+DxeGzGAtXdhUIhG77xJEW7ucTMR32/5JiE\nw2Fm/B/VqORetLmcBqVSCZVKxYiMpOMej8dht9tLshWzFEAy2/RuM5kMM0R0H1J/fDqd3pUlpL3x\nJJ35fOD+CZMPCz6fD5lMBrPZDJFIhFQqBYfDAY/H88C/g85DPpzpp874UxuUSqXC4cOH0dvbi4qK\nCjZS9NKlS7h+/TomJibyNr4y9wDRwSplT/vL1lSKa/4yPMiaBQIBZDIZnn32Wbz88svQ6XTY2dnB\nxMQEfve73+HKlSu7DCRpuyeTSQiFQiiVSlRWVkImkzEFtcehpZ8baRTi2T/s37j/+ykiIWf6Yfb5\n/c71pSEAACAASURBVOUp4N6lqVKpUFNTg/379+PIkSPQarVsdsHdu3cxPDyMS5culY3/Z4D2akVF\nBYsYo9Eo+Hw+C4iEQiFcLhcbU0z6HKQlT1MHi3X2v4zzUoh1PcoZzC07KhQK1NTU4Pjx40z6/Fe/\n+hUuXbr0wL8z15Y8bjx1xh+4N4u7srISvb29ePbZZ9nM50wmg42NDUxPT3/mmMXHiUf1GGk2uNls\nZtPC1tbWsL29XRKyvk8ThEIhtFot6uvr0dPTA7VaDa/XC4FAgNu3b7Mon555Op1mQ6E4HA7i8TjE\nYjGMRiMTHFlfX38samNPisP1VfY5j8dDVVUVLBYLm2MB3Du/Go0GZrMZHR0d2LNnDxQKBcRiMTKZ\nDDweT0kOWioVULlRLpdDp9NBqVQim81CqVTCZrOxORA+nw+hUIjxRuLxOKamprC8vFzUuybXIaS5\nECTAxefz2fyTfDonD1s+zF03OVK1tbXo7u5Ga2srTCYTuFwuVCrVI60lH3gqjb9SqURdXR06OzvR\n0dHBDH0ikWAjN0s1rcXlcqHT6TA4OIjm5mbo9Xq89957uHnzJra3t8sX3mMCh8OBWCyGXq+HTqeD\nWq2GWCyGXC6HxWKBUqlkURPwiZohGXkul4tgMAibzYaqqirU1dWBw+HA5XLlTZjpSScx5oIkiHt7\ne3HkyBHs2bMHKpUKXC4XBoOBPf/7a68A4PV6MTc396XExq8ziBths9nQ3NzMDH9HR8euIVBU+opG\no7Db7fjZz36GYDCIra2toq49l5RKYlZyuRwikQh+v78g3VOP8rup5CyVStHe3o6BgQGYTCaWMSwl\nPHXGn8PhoL6+Hq+99hpqa2uZ4aeI/+rVq5iamirqqNPPAtXj+vv7sXfvXuzdu5dt9Gg0ilAo9MQY\n/lLnCNDFIhaLoVarIZPJIBAIkM1mMTU1hZ/+9KcYGRn5Qp4GZQQoPXrw4EFYrVYsLS1he3v7sUuP\n0mVOJDtaw5MGutitViva2tpw5MgRDA4OwmQyMYNEZL5IJIKVlRXMz89DrVZDp9OhoqKC/bzL5Sro\n2nPbKyUSCWQyGcLhMJv/UCrvg2ZrBINByGQytLS0wGQywWAwQKvVsrZM4j5Fo1Gsrq5iamoKo6Oj\n2NzcLLpWvkAggEajQWNjI/bu3csclmg0iomJCWxsbJTM884Fl8uF0WhEW1sburq6UFtbC4lEgsXF\nRQwPD2NhYaFk1v1UGX8+nw+JRIK6ujocOnSIpViSySRWVlZw9uxZ3L59G+vr6yXzAugylEgk0Gq1\n2L9/P44ePYrGxkbs7OzA7/cjmUyy1FwpGVbqqqA0l1AohEwmY1PDYrEYgsEg4vE4EolESUn9Ul25\nrq4OOp2OzSJYXV3FyZMn4fF4vjB6p8ueyjRdXV1QqVSwWCy7MgSPA5RKlEql7LlSnZ3Y27kZilIE\npXCFQiHkcjna2tpw/PhxDAwMoLm5GUKhkLUuejweeDweeL1e3L59G1evXoXZbEZTUxOGhoagVqvR\n3d2NlZUVrK6u5n3tRD5Uq9XQarWQy+VQqVRQq9WstZGclUAggEgkUrRRvvSciSxpsVjQ0NAAk8kE\nhUIBPp/P9pDT6cTGxgZcLheWl5cxNzeHzc1NRCKRXRLdfP49M1EIh4DWL5PJUF9fj8HBQTz//PNQ\nqVTg8Xjw+/1Ip9O4ePEikslkSUXTlKkwm83Yu3cvmpqaUFFRgWw2C7/fj2vXrsHpdBZ7mQxPjfEn\nkkt1dTWqqqrYsJNUKgWv14vJyUl88MEH2NzcLKkLkgiKFRUVaGlpQX9/Pzo6OiCTyeD1eplAS27U\nVwqMclq3RCKBSqWCwWCAxWJBa2sr6uvrkU6nsbGxgRs3bmB1dRUOh+MLBxQVMqVNWZba2lq88MIL\naGxsBIfDYX3uDzMKt7KyEu3t7dDpdIhEImhpaYHX68X29vZjWy+xhg0GA7LZLDMwqVQKEomEjTgt\ndrT2eaBMi0wmg9FoREtLC44ePYpvfOMb0Ol0LOKPRqNwu924dOkSJicn4fP5sLy8jIWFBWg0GkQi\nERw6dAg9PT2ora3FxMQE7ty5k9eMGIfDgUwmY+TDAwcOwGg0sjJRKpVCNBrF1tYW5ufncfXqVTaU\nqxigclZNTQ1OnDiBgYEBVFVVsa4UukNCoRBGR0dx7do1zMzMsLNsMBiQSqWwurqKVCrFnORsNguP\nx5PX/UWBEPFojh8/jqGhIdTW1kIqlUIgELAZHDU1NVhaWsr7mh4GfD4fWq0WtbW16
O3thV6vZ89V\nrVZDKpUyR6oUUDoreUSQp2gwGNDQ0IADBw6gr6+PpbZcLhfOnDnz/9h7s+A2r/MM+MG+7yRAgFi4\ngeC+U6JEbbZsS7IUL3HiOOm0nUw7nelMm970vv9/1V71ou1FZpq02Vo3W9PYcW3FsSRrMWVJlESK\n+04QBLHvKwGQ/4X+9wiUJWsDKEj1M6OZRDLB8x2c77zb8z4vzp07h9XVVWQymae84tsg4ymXy1Fd\nXY2enh7s378fLS0t0Gg0zFt0Op3MCwbuKKj5/X4kk8kdn7lbhrOmpgbd3d3Q6/VQqVSQy+XQaDSo\nrq6G1WpFbW0ttra24Pf7YbPZ4PF4EAwGkUqlsLy8jCtXrjAdhOIoyuv1lj2VS1F0dXU16urq0NLS\nAp1Oh0wmg7GxMUxOTj5SLdFkMqGlpQVKpZJFh2TMSgW6DAcHB7G9vY2VlRWWEdJqtVCr1VCpVBgb\nG8Pi4mLFOAEUMTc2NqKzsxMKhQI6nQ51dXXo6OiAxWLB9vY2/H4/JiYmsLq6CrfbjbGxMSwvLyOd\nTiMcDiMUCiEWi8FkMmFra4s9b01NDVQqFeLxeMnbdil6HhwcRFtbGywWC9ra2ljNnNQ7qfwTj8dh\ns9lgNBrR3d2NxcVFjIyMYHl5ede+C4qY9+zZg+HhYRw6dAhNTU0QiUTIZrPwer2Ym5tDNBpFKBRi\no8LX19fZO0z1asoSyOVy6PV6JJNJxGKxso3kJqOvVqvR09ODPXv24NChQ2hsbIRSqWStnhwOB21t\nbfjGN74Br9eLQCCA9fV1OJ1OLC0tPbXAiMfjQavV4tChQxgeHobZbIZYLGYExUQigXg8XlGZimfe\n+BNxqKmpCS+++CK+853voK6ujr2Ya2tr+Nd//VeMjY1VDFueIiFqx+no6MArr7yCV199lekSbG5u\nwuv1Ynp6GhqNBiaTCRKJhF2I165dQzqd3lXVKVq3w+HA3/zN36C1tZWpGBYLKxGam5tx4MABtq5c\nLocPP/wQKysrrGZXU1ODxsZGOBwOXL58eVeMv0gkQm1tLWw2G/R6PaRSKbxeLz7++GNcuHDhkV5Q\nMv5yuZw5opSKLxWkUiksFguOHTsG4PbseL/fj1wuB4vFgs7OTgwMDOCf/umfWNr2aWeHiKOg0+nw\n8ssv43vf+x5rlSTtCw6Hg3Q6jaWlJfzgBz/A1atX4XK52GTIYuRyOUQiEfbdcLlcaLVaGAwGlvUo\nJfh8PtRqNf70T/8Ub7zxBpNMBu6MlM1kMqz0U1VVBaPRiL6+PhQKBQQCAXzve9/D6urqrpGLKfJ8\n4403cPz4cVgsFnC5XDar4vr16/jRj34Ep9OJSCSyI7MYDAbh8XhgtVpZCVKj0cBgMMBkMsHr9WJh\nYaFsxh8AFAoFGhoa8M477+Dtt99mfC3qrqE7pqOjA21tbUxR8syZM3jvvffgdDqf2rnn8/kwGo14\n++230d3dzbK1uVwOGxsb7M/dAdvTxDNr/OkwmM1mtLS04MCBAxgaGoJarWbKaOl0GvF4HKlUirUG\nPe2IqLj2WVVVhcbGRvT396OxsRFSqRQ8Hg/xeBzLy8u4desWbt26BR6Pxy5MEhCKRCJQqVSwWq1M\nQGJlZaVsaTC6yL/xjW/g6NGjaGtrg1qtZjVEqr+FQqEdEYZYLIZIJIJYLIZKpcL8/DwKhQIkEgmU\nSiVOnDiBjo4OSKVSrKyslHzdd4PKQ11dXWhtbYVQKMTW1hZSqRTm5+exsrLySBcInUMOh4NQKITr\n16+XpK7H4XCgUqmg1+tx4sQJvPDCC3A4HMhms5DL5ewSpotaJpOhvb0dgUAAfr8fHo8HGxsbj1TC\nKCVUKhWam5vxxhtvYHh4GCqVislcu1wuzM/PY2ZmBvF4HBsbG7h16xb8fv99sxYUYRPLm1p3S53l\nIO2HF154Aa+//jqGhoaY4SdxoUAgAJfLhdnZWbjdbgSDQUgkEphMJvT398NqtbIoVqPRIBKJlE1Q\nDLjDMD948CBeeeUVDA0NoaqqCgDw8ccf48MPP0QsFoPb7cbMzAxr7yvOcG1vb7OAg4SV9u3bh/37\n9yMcDuPq1asld2qB245tVVUVent70d3dja6uLnR2doLP56NQKCASicDj8SAWiyGXy8FgMEClUjFV\nTplMht7eXqysrODDDz9EoVAo617fCxwOBwcPHsSxY8fQ0NAAoVCIeDzOBLBmZmawtLSEdDr91J3y\nYjyzxh8AqyMaDAaYzWYYDAZWu6XLmIRXKindAtzJWJBQjFarBY/HQz6fh8/nw5UrVzA2NoaVlZUv\nMIkpqrLZbOjv74dGo0E+n0cqlWLM41KB2M1GoxFtbW148803MTw8DA6Hg2g0yrxZIrl5PB6srKxg\ndHQUgUAAEomEvaQ1NTXw+/3IZrOQyWQwm83Yv38/enp6kEqloNPpSrbuLwPV+61WK/h8PnK5HGKx\nGFwu1yO3U5JjBtxWj5ybm0M4HH6i9ZFefn19Pdra2nDy5EkcPnwYAJBMJll3ApEUaR0UDfn9fpYG\njcViiEQiCAaDu1ryEolE0Gq1sNvtkEgkmJubQyKRgNfrxczMDEZHR/H5558jHo+zCPpBoBZKcuI3\nNzeRzWZL5txwuVwoFAo0NjbihRdewDvvvAMOh4PNzU2WKifDPz8/jxs3bmBpaQlutxtCoRD19fUI\nBAI4fPgwWltbWetouVsSxWIxqqur0dvbi5deegkqlQrhcBgLCwv48MMP8ZOf/ATpdPqB3StkbIVC\nIRoaGtDd3Y3Dhw/j6tWrGB8fL6nKHHVOmEwmtLe3M35Ca2srALC7xel0Ynl5md0bVqsVVVVVzDHW\n6/UwmUzo7u7GwMAAJicn4XK5SrbOB4FUPnt6evDCCy8wXk4ul2OllsnJSSwvLz81R/x+eGaNPxFX\nNjY2cPXqVYjFYmxvb6O7uxu1tbVQq9VYWFjAxMRERaVailOGPp8Pa2trLN1GLX0zMzP4zW9+g8nJ\nyfu2EPF4PBgMBrzwwgtoaGjA9vY2ZmZmMDc3V7KOgGLyzYkTJ3Dq1Ck0NTVhc3MTyWQSIyMjOH/+\nPFZWVuD3+5FOp5HJZFjGJZfLsVQdRSf5fB7xeBwNDQ1wOBwwm82spkgKbuUEaTwIhUImRxuLxeDz\n+R7LkFCnA0Wl8Xj8iVOjKpUKL730Eg4dOoShoSEYjUZwuVwmGU3ZBpobQXvscDhYqjeXyyGdTmN+\nfh6jo6P4n//5Hzidzl3LfCWTSczOzuIHP/gBcxSz2SxjxcfjcVarf9g1EZuanK14PM7Y308KKgfZ\nbDa89tpr6OnpYZG+y+XC559/juvXr2N8fJytnRyXbDYLLpeL6elpeDwebG9vw2AwIJvNlr0FkMPh\nsDp5VVUV/H4/5ubmMD09jbNnz2JhYeGR1foUCgVjqvN4PEQiEYRCoZIaLgp8Dhw4gFdffZX9vq2t\nLXg8HszOzuKDDz7AxMQEAoEA63CRSCTs
DPT09GBoaAgvv/wy+vv78fd///f4x3/8R/zsZz8r2Tof\nBKlUCoPBAIPBAKVSuUMIzO12Y3p6GqOjo4/F/Sh3Z9cza/wBsLTK+vo6bt68CYFAwNTCOBwOa72p\npFQLcMcBSCQSCIVC8Pv9SCQS7KJIpVJwOp0IBoP3/fKpzUsulzP54traWmg0GoTD4ZI9Mw2Acbvd\nuHHjBnMuMpkMJiYmMDY2tiMt96BLhs/nQyQSwWq1oq+vD3q9HmKxGIVC4bGZsI/ykhQ7X3dfzo8r\n6kFdJY/aKXC/z1Mqlejv70dfXx9qa2vhdDpx5coVpFIplu3icrmQSCSoqamBUqmESCRiqV4a88vh\ncGAwGKDT6cDhcHDjxg3Mz88jk8mw5y/Xu5HNZpneARGennQwCWW86JxQirdUF+TW1hYSiQQjTY6N\njWFzcxN+vx9TU1OYn59nEdy99o3P5+/QArBYLLDZbAgEAmXNPG5ubiIQCOD69etwOp0Ih8NYXl7G\njRs3HkvPRKPRYHBwEBaLhSlZ0rtSKhCzP5VKsQCIFDWDwSDW1tZw+fJlOJ3OHSOii+VzE4kE8vk8\nBgYGYLVaodFoMDAwgPHxcaysrOyKCBTdJTdv3mTkVoPBsIPdT5nRR9k/Km2QPSjHszzTxh8Aq11N\nTEyAy+Xi0KFDbK45/SnHUIQnxfb2NrLZLGKxGAKBAJLJ5I4xqzSy9X4/SxrcZHS1Wi1MJhOMRiO7\naEsBSl29//77eO+9975wgB/1QiBxnaamJgwNDTHD9Ljf0b0U4B4ESm9Go1Fsb28zhvOjSj4Xy4+W\ninRGxr+1tRUmkwnhcBgff/wxPv30UySTSej1erS3t6NQKECpVGJoaIjxJ2j91KPN5XJhtVphMBjQ\n3t6OTz75BD/72c8QCAQQDAYRDAbLZvyJA5JIJEr2mdSpQVMI70UyfVxQqnZpaQlLS0v3jNgfdL5U\nKhV6e3vR1NQEjUaD7u5u+Hy+souKBYNBfP755xgZGSnJTIjq6moMDw/DYrGULXuxtbWFXC6Hmzdv\nYn5+Hi6Xi3WwEO7H/SAsLi6yzATdA11dXXjllVfw3//937ti/Mmw/+IXv8DFixfx2muv4eWXX0Zj\nYyN4PB5CoRCbK/Mo0Ov1MBqNLBPyf8b4kwf4KAeOxpMSwQy4Pdc8GAxWXL2/GOQ5FgoFRvYLh8MP\nnAJI7UVLS0tobGyEzWbDiRMnAADf//73S1Lfpd9fKgEZDocDi8WCI0eO4MCBA2ySns/nw9WrV7G8\nvPzEa34Y5PN5BINBRCIR5oQ9qsPU2NiIgwcPoqOjA7FYDGfPnsVnn332xA5APp/H2toa/vmf/xla\nrRa5XA7Ly8twu93I5/NwOp1YXFxkTsulS5egVquZ9j1FRIVCAVwuF/v370dnZyfkcjl6e3uh0+mQ\nzWaRTqcRjUZx8eJF/OpXv3rqRNiHAZU6iPhKDsbjjEi9n2F5lCwQh8NBc3Mzenp60NbWhvr6esY/\nom4ehUIBiUTCuCWlRvE7WiojTQ46jbhOp9Mld16Io+T1eiEQCFgU/yi/Y2tri7H9RSIRBgcH4XA4\nkEgkcOnSJaytre3K3V/Ml7h27RrMZjMOHToEiUQCqVS6Q6TtYXHkyBEcP34cXC4Xn3zyCb7//e+X\nXCStIo3/44DEb0QiEevxJ7LTgy51irapVkos2HJfiMRgJkPP4/GQSCQeas1ktKhdRygUoq+vD8Fg\nED/5yU8eeS33i55LtQeUrm5oaMArr7zCapSFQgEejweffvrpYxv/R11joVBgo50BsFr9oxhui8WC\n119/Hc3NzUilUrh27RomJiaeOJLe2tpCIBDABx98AODLn42ie4FAAJFIBIlEAi6Xy9jlRP6LxWKw\n2Wyoq6vDK6+8wgbokArhb37zm4pSX7wXKJ1Oxh+4k/YvFR7l+Umjo7u7G2+88QaGhoZgtVqZQfL5\nfAiHw0gmkyw7VC5DVMq7irJZxYTS4ixZqbC1tcXKCY8L4utcvHiRER5NJhM2Nzdht9vhdDqxsbGx\nKyQ7kvqenZ1lHUO0j/l8/qHPKd3DnZ2dOHnyJAQCATKZDH73u98hGAyyO6sUqEjj/7iH+W5CEBGL\nvuxCph5drVYLkUiEcDgMt9u9a7PZiwlbXC4X6XSaEXQeBGKFy2QyAGAp0LtHpD4Myn3x05Srnp4e\n9PX1sZRWJpOBy+XC+fPnH8v4P866iSlOTh5FwY9yOWu1WiZ05Pf7d2QSSoGH+RxyHinrRboPxRHU\nuXPnMDU1BYPBgGPHjuG73/0u5HI5hEIha8EUi8WMTFipoPUWt5vR8++200IaA729vThy5AgOHToE\ntVrNsniLi4sYHR3FlStXMDk5iUwmU5Glx3uB7iPSq8jn83C5XEzvv9KQTqcxNjaG5uZm1gKqUChw\n9OhRpNNpvP/++yXnK9wPNE+BFDgpmHwUe1Y8FVAgEIDD4cBms+H111/HmTNncOvWrZKtt2KN/+OA\nNo481mAwCK/Xe89Ljf5bkUiE6upqNDQ0oK6uDqlUCm63G1NTU3C5XIykVC7vUSgUQq/XQ6FQIJvN\n4tatW7hx48YDtcGLZUepRa5Y772SZgAQk7q1tRWdnZ1svX6/Hzdv3sSnn36K1dXVknq1XwYejweN\nRsNkS2dnZ3Hp0iWEQqGH+nl6Hq1Wy8iKlEnY7T2ni+V+59Pv9yMSicDn86GlpeWeo3B3I8v1JCA1\nTLFYzOr95Jzv9rqL53CYzWYIhUJ4PB5kMhkIhULWZSEQCCAWi9kZWV9fx/r6esW1exWjOBAB7pQk\nw+EwotFoRa57a2sLsViMdRdRuaumpgbV1dVl0Sa4H4q5WJRJDAaDj+R8SKVSpttBDqNAIIBCoWAZ\nu1KhIo3/46C4/YkuCK/Xi7W1tXumlugCV6vV0Ov1cDgcOHr0KCQSCUKhEP793/8dkUiEiYqU6+BL\npVLU1dVBq9UimUzi008/xR/+8IcHRmEcDgcKhQIOh4MNjyC2Oe1Fpbys1DLY3d3NRiyTkNF//dd/\n4dy5c4jFYru2XqFQCIvFwvbtypUr+PWvf/3QPIniS5LD4TCPv5JaSotB2Y10Os3KYySQ87h1892G\nWCz+gtjO42bmnsRhoPdOq9VCpVJhY2MDwWAQjY2NqKqqYqleh8MBADAajchms7hx4wYikQgT2KlU\nFBv/zc1NpFIpJBIJpgRYiSBSN51ncgCeFtmbFBXj8Tjcbvcj7R2pHCqVSvYzqVSKtVKXEs+N8Sex\nBardkyBIcXTA5XLR0tKCjo4O9PT0QC6XY3t7GwqFAjU1NWhqaoJYLIbZbMaf//mf49ixY9jc3EQ4\nHIbH48HZs2cxMzNT0nVLpVI0NDRAr9czZcIvi2io3amzsxNDQ0NsgNH29jYT4zly5Ag+++wzzMzM\nPNUXli6ShoYG9Pb2oqurC0ajEQCYk9Lc3Ayfz7fDey83hEIhzGYzjEbjjoE+DzKAHA6HteENDQ0x\
nT5zH46GqqgoajeahBWt2A8UZitbWVnR0dDDyGdV2RSIRG01biUaJSnkvvvgiTp06BY1Gg1wu91R0\n0vl8PhQKBV577TUcOHCAaSrk83koFAo20ZLuIK1WyybkDQ8P4+tf/zpmZ2cxNjaGkZGRknZClAIi\nkQi9vb3o6+uDWCze0aJZqWULuVyOgYEBDAwMsHNNrcQ9PT04dOgQ3G43QqEQkskkS8MTX6QcczAS\niQRWV1eh0Wig0+mgUCggFArvy2+giZ0GgwF79uzBsWPH0NLSwvZcqVSioaEBY2NjJV3nc2P8KS1I\n0+9ohCxFnRqNBkajEXv37sXw8DCOHDnCLr18Ps/ka4ksaDQamVhDIBDAysoKlpeXMTs7W9LDIhaL\nYTKZoNFoWK3/XkaIJIF1Oh1qa2tx8OBBJmdMHAc+n4+amhocOXKEXZChUKjks+WLQS+bUqlEVVXV\nDuEZIjp1dnay0a2kRkhs6Pb2diQSCayvr2Ntbe2hU+9Psl4a7KPValnkfi89efrvKYWrUqlgs9nw\n8ssvo6+vj+27WCxGc3MzK13QxL2nCYpATSYTGhsbsWfPHjY6lyI7aiu02WxYW1tDMBh8qmu+GxwO\nBxqNBmazGYcPH8bBgwehUCgYl2c395jD4aCmpgatra04duwYDh06BLFYjFwut6NbRyqVMt6RRqMB\nALbnuVwO4+PjMBqNyGQyTGO/EhRIqfy5d+9e9PX1QSQSsUwRtcKSPO3TlkmnoEIul8NqteLQoUPo\n6emBUChkJSKDwYCuri7EYjFG/IvFYixjlEwmEYlE4Ha7S5qxo2AikUjAaDSitrYWdrsd4XCYjZIn\naXe5XM4I6gqFAnV1dRgeHsaxY8egUCjYZ8pkMtTW1rL5IaXa++fC+FP9W6VSMXYlpZHp70+cOIE/\n/uM/hkajYZrbwJ2UUbFMKwDGHdje3mZZBa1Wy4Y1lOoL4PF4TLWKjPS9PlssFsNoNOLUqVN46623\nmPFSKpXsMudyuTAYDDh69CgUCgU0Gg0++ugjzM7OPnAdj9orT+Dz+aitrcXevXvx9a9/nTkjdAES\ny58m95FzoNFooFAoUFVVBZFIhEgkgkuXLpVcnvhuFHM9aIY8cO/nJidGq9WioaEB+/btw9DQENrb\n22EymRjrXKVSMWZuMBjE+vr6Q0v8loubIZFIUFVVhaNHj2Lfvn0wGo0wm82svk/PRi2iH330UUUZ\nf1pfT08P3n77bezbtw96vZ61nxFB83HKW4+653Q/vPDCC/irv/or2Gw2qFQqZmgkEgkLNqjuS61s\ndNapJOlwOBhZ9MKFCzh37hymp6fh8/kedYtKCr1ej66uLrz44osYHByESCRCoVCASqWCxWKB2WyG\nz+dDOp1mBNNH2fdSqo6SM97f34/h4WEcPXoUzc3NrNWV7AGVREnQikpcRGKcmJjAu+++i4WFhZK+\ngzSwzWQyQSqV4p133oFarcavf/1rNp/grbfeQm9vL6xWK7M9IpFoxz1JEAqFbFoonfdSrPe5MP7A\nncsCANuc7u5uJoM7PDyMgYEB5oFTKt/j8aCurg7V1dXY3NxkLNdikSDS6Kb0eqlBnixFCzqdDolE\ngo37pYNksViYl0vKW6urqzuyGxKJBGKxGDabDfX19ZDL5Q+1hsd9LrFYjP379+P48eMYHh6GQqH4\nQksWXYjUekmjR2UyGcRiMVpaWpi08cbGBpOCLSeKa5tGoxEtLS3w+XyQSqWw2+3QarVsrKlGLUUi\nlgAAIABJREFUo0FtbS3a2trgcDhQVVUFsVjMnk0gEKCurg4OhwMNDQ1IJBJPrO//JM8lEAhgMBjQ\n2tqKnp4etLe3MwOWzWZZdow6XXp7e3Ht2rVdI4kWp5Dp99EZlslkbISuRCJBX18fDh48CKPRyCS8\nqRSgUCigVCof2NFTClAQIJFIdugMUHZtbm4OXq93Rxo5k8mweQGdnZ3o7u6GRCKBxWKBTqdjzxuL\nxeD3+8u298V3YzG5k8vlorq6Gg6Hg5VCOzo6oFarWXZOo9FgeHgYIpEIU1NTWF1dxcbGBitnlBs0\nFlqhUECtVqOqqgo6nQ46nQ5dXV3o6upCU1MT1Go1m2BILd8ymYyVdinVT5nVqqoq5PP5kkbT5HiQ\n8qZarYZcLmf3tUwmQ6FQgFarxZEjR2C321m5l0DvZTHEYvGOO6dUeG6MP72MpG4mEAhw/PhxHD9+\nHGazmSm4bW/flktcXFzE5cuXcfnyZbz22mvo7+9n0aBAIIBEIoFIJGIXJR2ecjCM6QsXCoUsQnM6\nnbBarRgYGMDBgwfR2dnJxvryeDykUilsbGxgfHwcfD4fOp0ObW1tTE2KnuFRVeseFVKpFCdPnsTX\nvvY1xsa+G9SG5nQ6kUwmIZfLYTQaWf3ZYrGwsaHT09NYXFzcMXGsHKA95/P5aGlpwaFDhzAyMgKz\n2Yw/+qM/QldXF+rr6yEUCr+wh3RZ0B8qYej1etZf/CjrKCVI9tdqtWLfvn1oaWmBTqeD3+9HMplk\nZ5yEsFQqFex2O5uGWW7jT4aIfhcZEDKSFosFf/Inf4ITJ05Ao9GwVG7xfy8QCNi42aqqKpZ2f1g8\n6jPSu+9yuXD16lUMDg7CaDQimUxieXkZk5OT+MUvfoHR0dEvjA2n5/3Lv/xL2Gw2aLVapsi4Z88e\n2Gw2XL58GWNjY2U1/pRxK07b8/l8NDU14bvf/S727t2L5uZmlrZOp9Pg8XjQ6XQ4efIkHA4Hzp07\nhwsXLrD5FY9SqnjcZyPOSkNDA5qbm9Hb2wuHw4G6ujqoVCrIZDIWJdN0zkwmwzIzdOcTwRW47VAU\nZ4pLDYlEAoPBwEZYW61WmM1mnDhx4p6qlLQ3xeqMxTLGxAeQSCQl7cx5Low/sZmJNFYoFFgvMxlC\nGjZz/vx5XL9+Haurq3C5XNjY2EAgEMD777/P6qRSqRRvvfUWhoaGwOfzsbm5yQ58qV/QbDYLn88H\ng8EAuVyOU6dOobOzE4lEAkqlksnfzs3N4ac//SmrT2WzWSSTSQSDQQgEAuj1evzFX/wFa2+5desW\nfvnLXz6SIXocJJNJ/PKXv0QikcDJkyeh0WjYoJulpSV88sknWF1dhc/nQzQaRT6fZ06O3W7H8ePH\nmRTmwMAAkskkfvzjH5ctct7e3mYRWyqVgkwmQ2dnJxumI5PJYLfbodPpwOfzWd2TDAARQOnva2tr\nYTAYmLjO9evX4fV6y7L2hwGNsp6bm0MymcTnn3/Oogaj0Yi6ujq0tbWhrq6OtRNROrHchp96sI1G\nI5qbm9He3s40zElVTi6Xo729HWq1mvU5k6BV8RQ/Isra7XY2+KXcmJ6exk9+8hN89NFHLIqLRqMI\nBoNYWlpiayveR7qsz549i3Q6jZMnT6K/v58NzSHn8nFUTb8M5DBZrVY0NDTAbDZDqVQCABOFUigU\nsNls6O3thV6vx+bmJlZWVrCysoKlpSV0d3ezoKimpgaD
g4OsbZS6R8oNkUgEnU6H9vZ27N27l6ko\nkrYJEVWDwSBcLhebP0JZWyo7arVatLS0sGme9Nl3T8d8EhTrbsTjcZbZLDb4HA4HgUAAp0+fxtzc\nHHw+347sF9mgl19+GQMDA6wEUBy8lgrPhfEHbhvDeDzORpfqdDo2JW5rawt+vx8LCwv4wx/+gHPn\nziEYDCKVSiGbzWJpaYl9jkwmY8QX2mj6MktZ6yfEYjGMj49DIpGgubkZnZ2d6O/v33HpffbZZ7hy\n5Qp+/vOfIxAI7BhtCtyuCdXU1OD1118HcPsQrq6u4uLFi2UnEqXTaZw/fx48Hg+tra1oamqCQqFg\nz/WrX/0K09PT2NjYYD/D4XBYulGj0TDDRJHH73//e0xPT5eFibu9vY1MJoOFhQVYrVZ2mVit1h3e\neDQahdvtZgpharWaDX5ZWFhAOBzG1tYWq0VzuVw2kGk3NMW/7Pk2NzdZXzmPx4NcLkddXR3q6+sR\niUQgk8mgVqsZL4MGkJTL+EskEmb0zWYz6urqMDAwgOHhYVRXV+/grRQ/BwAWsQUCAcbjkcvlUKlU\naG5uhtPpxPj4eMkV6O4Fl8sFl8vFLnGqvz7IcGxvb2NiYgLLy8ssjU2SzBSFSqVSNtyrFKC6c29v\nLwYGBtDQ0ACdTscyQzKZDEqlknUoUL+8x+PB8vIy5ufnYbFYANyOlInvFAgEEAqFkEgkkM1myzqz\ngEpYNP6bJv/J5XJWwkokEvB4PFhaWsLk5CTOnz+PGzdusLuDy+Uy3sKhQ4eQzWYhl8shFouhVCph\ntVqxuroKt9tdMgcgFAphcnISfr8farWaZRk0Gg1isRhmZmbwu9/9DiMjI4wECNzhM9D0U7FYjN7e\nXigUirK8o8+N8acpeaurq6irq2O1FNqslZUVnD59GsvLyygUCpDJZExiErjjKVdVVaG9vZ1NyqM6\nUiwWK0sr1NraGv7t3/4NTqcTJ0+ehN1uh8FgAI/HQz6fRzgcxm9/+1u89957bDrY/SKEYg+ThFDK\nXZcjOdPl5WVcvnwZHA4H9fX1uH79Oi5duoSZmZkvEMm2t7cRiUQwNTWFH//4x/D7/fizP/szJnCh\n0WhYLbQcF0skEsHPf/5zrK+v49SpU7Db7aitrWWRAgCMj4/j/PnzGBsbg1arxdGjRyEQCBCNRnH+\n/Hn4/X6IxWI0NDSw1i662B93OmEpcPd+UQpRIpGw2QBWqxUmkwlarZZFFFQuKsd+19bWYnBwEC+9\n9BK6u7tZ54RWq93RfVC8ZloHCc2Mj49jfX0dRqMRDQ0NMBqN6O7uRjwex+9//3v4/f5da1UsLvk8\nCrLZLD7++GPweDx0dXUxfozJZILZbMbi4mLJuC51dXX41re+hb6+PtjtdtYJRZElnVnSqEgkEojH\n4xAIBGhpaYHD4YDNZmOEaIFAAJVKhaGhIej1erYPc3NzZdl3ioIpKi/Wo6DgJxKJwOVyMSXFhYUF\nOJ3OHfceye56vV5MTk5CLpdDr9ejrq4OJpMJb731FkQiEf7zP/+zZM8xMzOD73//+yxlb7fbMTQ0\nhJdeeglXrlzB2bNnMTY2Bq/X+4UsEelwnD59GslkknWCkSNUSv2W58b4b2/fHu+7sLAAu93OSE50\neXi9XszMzCAQCAAAY/uTKhtdgK2trXj11VfZwQduOw4ff/wx3G53ydedTCYxNzcHoVCITCYDi8UC\njUbDtMCJBV/sId6NYi1uADuILeWOhiiN7na7ce7cOXg8HtTU1GBmZgYTExMIhUL3fKlICGN8fBx1\ndXVIp9OQy+VsUl0gEMDFixfLks7NZrOYn59n8sJms5kJtFD6dXx8HOPj48yZtNvtLIM0OzuLcDgM\nhUKBRCLBHEdS4qJ20UoAZTNyuRxisRjC4TBWVlZYCYDKY/Ts5TgzFIU6HA50dnbuiJwBfOEyo7NL\nw6v8fj/m5+fhdrshlUrZ+GetVgu73Y6DBw+y6Ho38aj7VCgU4Ha72Zja4lZkCjRKvT6lUona2loA\nX5TupbuxuN1Qo9FAKpVCJBJBJBIhm80y8i6fz0dVVRUEAgEOHz7MUvKkIhkOh0uqc0GZz0gkgqWl\nJWi1WoTDYWg0GohEImxsbGBxcRHXrl3D7OwsvF4va5e+26hubm5iY2MD8/PzqK6uhkKhYEOZXC4X\nzp49y0ZQPylCoRDGxsZYC7nT6YTH48Ha2hqmp6cxMTGBjY2Nezp69C64XC7Mz88zyW6yTzKZDKlU\nqiQE1+fG+AO3DenMzAw6OzvZ31FGIBgMYmNjA/F4HBwOBzqdjo1LpN5LtVqN/v5+fOc734FEImGf\ncevWLfzwhz8sSw96oVBAOp3G6Ogobty4waJ3qh89zLQuEmsh40/e42714m5vb8Pj8eDjjz/GmTNn\nwOFwHqodqFAowO/3w+fzsXop8R4KhQIjUJUaZFRu3ry5Qzjj7nZHMugajQYbGxss0iDVyEKhsOMF\nLm7JqRRQ5BaPx5FOp1mZq6amBgMDA0xqmQxyOXC37kPxPhe3HhJIdyGXy8Hn82FhYQFLS0uIRCJo\naWlh3w+fz4fZbMY3v/lNNta7kkGOMnEXKDX9uJmEL0M4HMbo6CiamprQ29v7BZIllVNSqRRCoRC2\ntrYYUU2tVgO4fZ8mEgk2KZUiT4VCgSNHjsBms6GxsRFTU1OYmprC9PQ0Y9s/KSgKJuf65s2bCIfD\nqK+vh9FoRFVVFZxOJ2ZmZnDz5k2sr6/fl5NFzgs5EXw+H/X19Whra4PVaoXD4UBTUxNyuVxJjP/d\nA4sCgQDGx8fx7rvvsmd7mM8gB5HeD7lcDrVaXbI9fu6M/9TUFObn5xGJRHYw3wcHB6FWqxEMBrG2\ntoZbt25hY2ODRcxWqxUnTpzAwYMHGUs+l8shmUwiFAohHA6Xtf2suBXlbjb5g2AwGNDR0cG06kt9\nkTwM7l7/wzof1H2xvr6+Qw9ApVLtii733RFCMailTKlUwmAwMCEXAEz3obj9RqfTYWBgAIFAAC6X\nq+xrfxhQfdlkMiGfz8Pj8bDBIbS/xXLF5Yj8afokRYXF57s4Q1VM0iIHze12Y35+HrlcDiqVCiqV\nCmKxGFtbW6xltKGhATU1NSUnzZUDxY4LEbjKMTI3FothamoKMzMz6O3thUqlgkgkwtbWFoRCIcv2\nqFQqdqaJ/0H69J9++ikuXbqE7e1tyGQyGAwGVjIwGo2Qy+Xo7++Hw+HA8PAwrly5gqtXr+L69esl\nG6ZDDsDa2hoikQiWl5fZHaFUKh/osNK/U0fO9vY2E/yhO4o4XtFoFBsbG2U5P4/TYUIOGp31gYEB\nbGxs4IMPPtjBoXpcPFfGP5PJYG1tDYuLi1hYWIDFYmGRmMPhQFtbG6LRKObm5pDP51kai8/nw263\n49VXX0Vrayv4fD4KhQJCoRBmZ2exsLBQdilOOhyPc/BMJhMGBweh0+mQy+UQjUZ3hYl7LzxOJEPl\njXQ6DZVKxYx
TJUiKUm+3RqNBPB6HVCqFTCZj+hFEliIxl66uLoyOjj7tZTNQulCtViOTyTCDSfVm\numTKaTDT6TQCgQBT06S2MwA7HIDiljQATD0vlUpBoVBAoVCwjBxNcCM1NyqVVfJoYno+gUDAnpNq\n0g87yfNhQeTTyclJjI6Owmw2s3Q59b9TuUEqlbI9p4Foy8vLOHfuHH77299ic3MTcrmcDTKSy+Xo\n7e1FT08Puru7IZfLWSra6/VifHy8ZIEStVdnMhkEg0HmkMtkMgwODsJms0GpVN6zvEjZDhJiorbA\nTCazg7wtFAoZ8biSQEqEVBK12+3o7e3F+fPnvzL+98L29jaWlpbw/vvv4/jx4+ju7mYpcbqgOzs7\nWZ8uacxLJBLo9XpIpVL2Qk5NTeFf/uVfcO3ataf9WPcFh8OB3W7HsWPHYDQaEY/HMTU1VZLD8Sig\nl5IyAA+blqKIk4x9LpdDMBhEKBTalZHKDwJFqnw+H0qlEjU1NUgkEhAKhWhoaIBMJkM2m2VtatR/\nXCmg0gR1fYhEIuj1etTW1rJIkP69XIYzmUzC5XIhGAwikUiwS7i437nYYSRHgGRNi8cvk6NIKVDK\n7pFCGpWPKhEczu3ZEMUKbvl8ntXLS3neKW1+9epVhEIhdHR0oLm5GXX/v6BZVVUVqqurd2SAqBNm\nenoaP/3pT3Ht2jV4PB4WCPl8PiZhnEgkwOfz0dbWxpyYjY0NbGxslKVLh0CtdCSW09nZiWQyyYiL\ntIdEwKWyXXV1NXMC5HI5FAoFe26/34+RkRGsr69XlOOYzWbhcrng9/tZh0Mpg6LnzvgDwPr6Os6e\nPYtkMgmn04n+/n4mukB65pQiB+6krNPpNBtfOTExgYsXL+LKlSvweDxP+Yl2gtK0KpUKVqsVHR0d\nsNlskEgkWF9fx8LCAvx+/66uSSKRwGQyQafTQSQSsVaX+71MxQaVdK5JunVubg4LCwsVoY9PKf+6\nujrU1NQwolw+n2dRP5Ec6aIhmdFKuEi0Wi3q6upgtVpZuralpQUmk4lxE4oj0nJoWZAg1ZkzZxAM\nBiESicDn83dkeCi9ub19WzGxuroaXV1dLNUcjUaZKic58TKZDCKRCLlcjs29CAaDZW9vvZsb8jCg\nlr6hoSHs378fUqmUsdBJ0bKU+06OlM/nQyqVQiQSwcrKCkwmE9RqNdRqNQwGA/R6PRs+QyqmgUAA\ns7Oz2NjY2DGRjshnZPj1ej0WFxd3SO7S2S8n6Jy43W6sra3BbrdDLpdDIpEgl8ux9lY+n4+trS2Y\nzWZWtqMsEymfUvvu+vp6xQ1aKmb/A7cH/NAgt1LguTT+Ho8HXq8Xly9fRldXF/72b/8Wg4ODTI+9\nuLYI3GGVhsNhOJ1OLCws4N1338XIyAjS6XTFRRJ0WZtMJhw/fhx9fX1MpjKVSmFlZWXXddqVSiVa\nW1vR29sLg8GAH/zgB1/qgHC5XIhEItb7rVKpWMfDzZs3MT4+/tSHndAAoKamJnR3dzPjSCM2nU4n\nU4ukXlya6lYpxt9ms6G/vx92u53101OrHGXESA2S2gFLfd5TqRRSqRT+4z/+A++++y62trZYpogk\nqYlslUgkmKTs3/3d36GxsREajYYNfXI6nSgUCrBarexizGQyEAqFMJlMSKfTSCaTZd37u2XEHwQy\njlqtFl/72tdw8uRJKJVKuN1uLC0t7ag/lxpEPgsGgxgbG2MRMQ3yqaurQ2dnJ+rr62Gz2WCxWJgW\n/t1rov+9ubmJtbU1TE1N4fr162wseU1NDRMvKje2t7dx48YNpFIp/PVf/zX6+/thMplYOchsNoPH\n4yEajcJisUCv1yORSMDv9zN+EWWZNjc3WRa4kkAlO3LSdTrdDqf9SfFcGn/gTjTvdDrxwx/+EO+9\n9x5TNFMoFKivrwdwu9WPVNH8fj9CoRCCwSDm5+dL7o0/KTgcDvR6PSwWCzo6OpgCV0NDAwDsOMjZ\nbHZXDRA5JDTkR6FQYHx8HLdu3UIgEEA6nUZtbS1qa2thNpuZpy6Xy2GxWFBdXc1Y6S6XC16vdwf5\n8WlAKBTCZrOxWieRQzkcDjweD1wuF7LZLKRSKZRKJWv9k0gksNls8Hq992UP79ZzWa1W9Pf3w2az\nQS6Xs2loxbV1AExnvpwTCYulZSkCC4VCLEIjhn84HMb169fxD//wD1Cr1Szdn0qlEA6H2SAs+vm5\nuTm43e4dHS/3QikIgXw+H42NjWhoaEB9fT0ymQz8fj/T9ace8vr6eiZfTb9TKBRiz549TB1wZmYG\nH330UVlaiO+Fu0lkfr8fmUwGPp+PcSqUSiXC4TBWV1fvOe2OPgO43Y72+9//nilEUqsdZXSK5WrL\nEUBRpjYYDMJms+H48eMAwEoTVCJSKpXMuSWCMZfLRSwWY9lhuqMqBcRV0Ov1rPuCHLfizo0nwXNr\n/IHbhy4QCOCTTz7Z8fdqtRrt7e3Y3t5mSm2ZTIYR5cqtK/+44HA4MJvNOHjwIIaHh9HZ2ckME7UF\nZjIZNrpyN0F95DTkhsoRFy5cwPr6OpLJJJqammC322G323ewjykKjEQi8Hg8WF9fRzAYfOoZF4FA\ngNraWtTU1LA0NdVFaQ5BOp2GUChEOp2GTCZDJpNBOByGXC5nmhJPCxwOhw0kItEq4lUQijXP79Yc\nLzWKDW+xIbob+XweS0tLcLvdbE10eQuFQjQ3N2N7e5vJW1+5cgVzc3NMcvleKNVzUctVXV0djhw5\ngu3tbZYFyufzsNls6OzsREdHB9NQIFnozc1Nlj3y+/0YGxvDZ599VtahPnejmJND7+z6+jp7tmKt\nkPutiT4jEAjg2rVrTFJ8ZmaGCdcUa9OXC5ubm4hGo3A6nejo6MCePXsY74gGiVGKPJfLMUXCaDSK\n1dVVxGIxRuqORqO7JhL1MKA7keYX0N+V8h19ro3//ZBIJFhPMF2ExR5xpRp+LpeL1tZWvPnmmzCb\nzUx/nqIoio5ItGM3nyMcDuPatWvo6enB8PAwtFotrFYrTp06xUhYxCwuJnwV1+HW19cxPj4Ot9vN\nyDtP87ugbhCtVss87WQyiZs3b+LcuXM4c+YMq4mePXuWtTgSe/vLIolyPxedF61WC5PJxDoU6GIk\nxyqXyyGRSCAUCiEUClVM6pMccuD2s2QyGcTjcYjFYiYxGwgEMD8/jw8++ACLi4uIxWI7atTFeJJu\nmmLk83nMzc1BJBKhtbUVDocDDoeDdTEolUqWBSouLdKAsEwmg5WVFVy6dAkXL17E6urqfdd8Nx6H\na/AoIKP+sL8jn8+zeSmTk5OML0UEvN1AKpXClStXYDKZsHfvXpZxSKfT8Pl8mJubQyqVYi3by8vL\nGB8fZ+2Om5ub8Hg8FRfwkczv/XQxSoH/k8Y/n88jGo0+7WU8MuhLp/QPgB2KVhR1+ny+XSevbG5u\nIhQK4cqVK6iursaLL76IpqYmGI3GHWsvnrRF08Po5RwdHWVtLOWYo/Co
yOfz8Hq98Pl8rJyytLSE\nc+fO4dKlSzukRIudRhLMedqZC+BOOp+ml1FqncZXezweTE1NYW1t7QsT6Z4mig0RgWr8i4uL+OST\nT5BOp7G6usqUOx9kcEpxnra3txGPx9nsDIFAALPZzKSKATBHhfQK0uk0Iyym02ksLS3hwoULmJ6e\nRiKReOg934334VF+B517j8cDv9/POkZ2E5ubm1hdXcXs7CyWlpYgEAiQSCQwNzeHpaUl5lxlMhkk\nEgl4vV6mlkoO/ebmZsWc+3uBjD8NVSqVo/J/0vg/i6DL0O/3Y25ujnUuFEf4qVQKPp8PHo8H8Xj8\nqazzwoULWFhYgFqthl6vZ+QUKktQ2oqERAKBAKuZfvrppzh9+vSuZy3uh3Q6jVu3bsFms2Hv3r1w\nOp24cuUK/vd//xdTU1P3vTAqYe0AmHAOtXPl83nmcKXTaXC5XMzPz+O9997D0tLSU++ueBAouzU5\nOckUFxOJxK7MsLgbGxsb+O1vfwuBQIDW1lY2HTEejyMSiSAajYLH47GJc8vLy1heXkYul4PX68XE\nxMSO1rRnGaS7/zSQz+fZVMUbN26w0csffPAB5ubmmHNVqszPbuFe7dJerxcrKyslUz39yvg/YyCZ\n1OKILhqNwuv14sKFC7h48eJT7ZHf2tpCOBzGf/7nf2JsbAz19fVMrIXU2YDbHRnkpKRSKRYRkaRl\nJYBGnF65cgUGgwETExO4du1a2VTASom2tjYcPXoUXV1dO1TOSNktEongypUrGBkZwcjISMW1s94P\nRNgi/YJy9pQ/aB25XA5Xr15FMplkY4mJKb+5ucmyP5QJIGNP2a5KjjafFVDkPjMzgx/96EesNLe6\nurpDA7/S39e7QbLMCoWC/Z3b7cbi4mLJiIkVafyLCQ3P2pdWbqRSKQQCAfh8PnA4HCSTSTaG8+zZ\ns7h+/fojpRJLDbqcL1y4gKmpKcaITqfT0Ov1kEgkKBQKWFtbw9ra2o5U4eN+1+U6L6Qtf+vWLQgE\nAkxPT2N2drbi6oN3g8vlorGxEd/85jfR1NS0I72ZSCSY8tuFCxdw/fp1zM7OVvTzFIPacneb0Hqv\ndWxvb2N+fh6Li4usk6CcPJVn9V68m6BWaj2DfD7PRlg/DyDhNmrhJnLs2toaG/ZTClSk8S9u13mY\nedmVhHK3cC0vL+Ojjz7C6uoqRCIRFhYW4PP5mAIX9Qw/TdAlSDVO+g69Xi+7JOkCf1ICy70G0pRq\n/ym6c7lcjMj0LBh+gUAAnU6H5uZmKJVKdnm43W6Mj4/j9OnT+Pzzz+Hz+RCNRp+p96vSUNy2SP+/\nXCg+6+VqnysHqA24eK8q+R0qRrlJlneDGP0OhwMDAwNQKBQsc7q8vIzZ2dmSDB8CKtT4q1QqGAwG\nNDQ0QCQSIZ1OY25ujumD313DqZSDRAxrsVjMWO3JZJK1E5ZiraFQCFNTUwgGg+BwOFhfX0c8Hmdk\nrSc1pMV4ks+iSLO4faZUh7YYRH7U6XTQarWQy+Us7UetX0/yHFtbW6wsUe6IrnjgzeOAhJOqqqqw\nvb2NW7duIRaLIR6PI5fLsRGoo6OjmJ+ff2Ki093tXJX0Lj4sSuGs77aWBk3xJHIbCfJUKmjdxdMB\nNzc3EQ6HK7rDCrgz24NUECUSCdxuN3w+X0nXXjzmmsvlMv0Bj8eDWCwGr9fLpJrz+fzz2+dfXV2N\nffv24e2334Zer0cgEMCPfvQjjI6OwuVysVQx1dQq5eDTIddoNKipqYHFYsH6+jpSqRQbcfukoDYy\nEgYp1Utz90VOe1upLyWBDF5DQwN6enpQX18Pl8uF9957Dz6fjwmVVOIFX7zX9OI/btqYfl4qlcJm\nsyGRSODdd9/FxMQEFhcXd4j3FIvtPMnai9dNnws8GynpezkulQzaazJGVVVVUCqVTPmwkt9V4ilp\ntVro9XpoNBpEIhFMTU0hlUoxsmAlrp9ko3t7e9HQ0AC9Xo8zZ84wNcpSdiVR+zMAzM3NsSFLi4uL\nmJiYYMJcxXfFvfCwZ/rpj027B9Rq9XZNTQ0aGxshkUiQyWQwOzsLv9//TEX+FIUWv5yVstZ7oZy1\nuXKBIn+tVgudTgelUolkMomVlZWSRP67gVJG/kKhEGq1GlKpFEKhEKFQCLFYbEdvf6nO4bMe+T8r\nhp9AUSFF0UKh8JmI/Mn4k84H9dc/S5E/OVs0P4VaG0sZ+RePMK+pqYFSqUQ2m2ViRNQp9Qilky+1\n7xVp/AFU5kn4Cl/hK3yFr/AVng18qX3nftk/foWv8BW+wlf4Cl/h+UP5xy89Hv6fcn7FOZNKAAAg\nAElEQVS4WCxGTU0Nmwd+LzWxSgQN9qmvr2ejMytBCe9BIAGUpqYm1NTUALjTolPpoD3v6upi6cpK\nTlMSKI3Y1NTEtBaehhjO44DD4UClUqG1tRUymYypVVb6ngO3124wGNieV3p3CIEmDxoMBigUCmQy\nmWdi3cDttYvFYmi1WgCoGInqhwGtneaclHjP/98v+8f/c8afZoF3dXXBaDSyYSyVftjpMm9vb8eh\nQ4cgEAiYtnalOy5cLhdKpRKvvPIKmpubkUgkkMlkysL+LyVIiri9vR3f+ta3sL19e1BUJpOpeMeF\nasPHjh3D/v37GWu4koaX3A98Ph8WiwWvvfYapFIplpaWGEmxkkHvaGdnJw4ePIhAIIBgMFjR9wqB\ny+VCIpGgq6sLWq0WPp/vmTGidKc3NDQwLYtnAUTeU6vVkMvlyGQypb7LvzL+xZDL5XA4HHj77bdx\n8OBB9Pb2MiOaTCYr9lKXSqUwmUw4evQo3nzzTbS0tECr1WJ9fR3pdLqiX1Sr1YqBgQGcPHkSe/bs\ngcPhQDqdxsrKSkWTxORyOQYGBvDSSy/hlVdegdVqhV6vx/r6OiKRSMWum0RCXn31VRw7dgx9fX1s\nxsLc3FxFr5vP5+PIkSN47bXX8OKLL8JsNqO6uhqxWAx+v/9pL/G+4HK5sFqteP3113Hy5EkcOHAA\nCoUCfD6fDY6pRJCDOzQ0hG9+85s4fvw4HA4Hy3IFg8GnvcT7gsvlQqfT4dSpU3jzzTdx8uRJSKVS\nZLNZpNPppy4EdT/QOe/q6sK3v/1tnDp1CoODg8hms2wGQYnwpca/Ilv9yoViD3Hfvn2oq6tDNpvF\nzMwMxsfHd7AtKwk0RrS5uRk9PT3Ys2cPkskkOBwOTp8+zYb8VBrIs7XZbNi/fz/6+vpgNpths9kw\nPj6+Y2JVpYHL5UKhUGBwcBAHDhxAW1sb9Ho9AODjjz+u6HXzeDzU19fj1KlT6OnpQVVVFTgcDhYX\nF9l3UmkOQHEb2549e3DixAk0NzdjbW0NiUQCo6OjFbtuKmtZrVacPHkS/f390Ov1CIfDWF1dZQN/\nKg00M14qlaK/vx/f+MY3YLPZ4HK54HQ6sbS09LSXeE9QhkWhUKCurg7Hjh3DwYMHYbFYEIvFMDU1\nBZfL9bSXeU+QRoNOp8PAwAD
eeecd1NbWMv2W1dXVXVtLZVqNMoA8XIvFgubmZqhUKmxtbbF50E6n\ns6I9RZ1Oh6GhIdjtdvB4PKbnHwgEKjJ9Ti8on89HU1MTDhw4AL1ezyJ+n89XjjTXE4Muc4FAALVa\nje7ubjgcDggEArjdbkxNTSEajVbcuoE7qX6JRILa2lr09vaiqqoKiUQCN2/exOzsbEXyFYo1CnQ6\nHRoaGhivZW1tDWfOnIHb7a64dQN3FBWrqqpQV1cHh8MBvV6PQqGAiYkJ3Lhxo2RyrKVE8Rm32Wxw\nOBystToajeL69esVK5dLs02ampqwd+9etLe3w2g0gs/nIxAIYGlpqWLvRIlEAqPRiP379+Pw4cNs\nz2nAWTgc3rX1PPfGn4w+9ca2t7ejt7cXKpUKhUIBsVgMkUgE8Xi84mqKfD4fQqEQEokEFosFPT09\nMJvN4HA4iMViCAaDO8RbKgVUP1QoFFCr1bDb7WhoaIBcLofH48Ha2hrC4XBZddAfB0S+kcvlMBgM\n6OjoQENDA3Q6HXg8HgKBAJaXl5FMJitu3ZTV0uv1sFgs6O7uhsFggEgkQjAYxMrKSkUO7+FyuZDJ\nZKipqYHNZkNTUxOTJeZyuYhEIpifn0csFnvaS90BylQYDAbYbDbY7Xbs27cPNTU1kEgkiMfjbMJm\npb2fAoEAUqkUjY2NaG5uRktLC7q6uqBQKMDhcJDJZJ7qZND7gRzE2tpatLa2orOzE93d3bBYLJBK\npQBuq4hGIpGK23OpVAq1Wo329nZ0dXWhr68PnZ2dUCqVrHd/t0sVz73xJ+ETqVQKlUqFgYEB7Nu3\nD0qlEn6/v2JFMiiVqFAoUFVVhcbGRnR0dMBoNGJ7exvRaJQd8kozRHw+H2q1GmazGQ0NDWhuboZe\nrwePx0Mmk4Hb7UY0Gq2odQO31y6TyWC1WtHV1YU9e/awi4XD4SAUCmF9fb1kIzVLBXJwa2tr0dXV\nhX379mFwcBASiQQcDgebm5vweDwVST4jRcy+vj7s27ePleNovkcqlYLX6624Paf0rcPhwNGjR7Fn\nzx60tLRArVaz+RXJZPKpDtm6H4RCIaqrq/Hiiy/i0KFD6OzsRFVVFXg8HuvCSSaTFccjIjGvgYEB\nfPvb34bdbofJZGKdT1tbW8jn808sW10OqFQq2O12fP3rX8dLL72EqqoqSKVS8Pl8RmYtHnK2G3iu\njb9QKIRcLkdNTQ0aGhrQ0tKClpYWyGQy8Hg8rK+v4/z58xUXEVHq1mq1wm63o7W1FX19fVCr1eDx\neMhms5idncXk5GTFlSpkMhmqqqrQ39+Pzs5OOBwOtLS0MHWqaDSKyclJeL3ep73UHSAj1NHRgUOH\nDqG1tRV2ux1qtRrA7TazYDAIp9NZcYZILpdDr9fjwIEDOHjwIJqamlBbWwvgznCiSCRScSxoDocD\nk8mEnp4evPzyy+jo6IDFYmGTzGg+RCUaIqVSidbWVhw4cACHDx+GyWRi2YpCocDa/CrNOQeAxsZG\n5mi1trZCo9FAIBDsMJ7FipCVAplMhuHhYbzwwgtwOBzQarUQCASs5ZlGKVdaaYvD4aCurg5vvvkm\n4+CIxWJwuVw28pkyuLu57ufa+EskEpa+7enpQW9vL2w2G0QiEYDb85EvX75ccSxioVCIqqoqtLS0\nYGhoCF1dXWhqaoJcLmcHfWlpCfPz8xVn/FUqFRobG7Fnzx4MDAygsbERGo2GeeaxWAwLCwsVxyIW\nCAQscj5y5AhsNhuqq6tZ/+3W1hbC4TC8Xm/F7blGo0FbWxv27t2Lffv2QaPRQCKRAADy+TwymQxi\nsVjF1UHpUhwYGMCePXtgtVpZlqVQKCCTySCZTCKdTldcZk6lUqGvrw+Dg4Po7OyEQCBgk+tIkjWT\nyVSkDkddXR0OHjyIzs5OmM1mlmWhNjkaBFVJxp+ycn19fRgYGEBtbS0EAgGA2339lO6vxEwLh8OB\n0WjE4cOHYTaboVAoWIYll8shHA7D7/cjm83u6tqfW+PP4XCg1WrR0tKC48ePo6WlBUajERqNBsDt\naW0+nw9jY2OIRqNPebU7IZPJmAE9evQotFotVCoV+Hw+tra2kM1m4fF4KrKeaDKZ0N/fj/7+frS0\ntEAulzMDmsvlEI/H2YyGSoJEIkFnZycGBgZgt9uhVCohFAqZs0UGNB6PV9SeU/T84osvoq2tDVVV\nVSwa2traQiKRQDgcRiKRqLh2M9KtGBoaYrXy4vpnIBBAOBzG5uZmRRl/IuDu378fLS0tLO1MWZZg\nMIjV1VU2XruSjD+Hw4HZbGYdCQKBgDksyWQSa2trcLlcbEpopaC4o8JsNu8w/LFYDB6PhxGJK2nP\nicwql8thMplYAFcoFJBIJOD3+7G6uorJycld5509t8YfABQKBUwmE+x2O6xWK2QyGfh8PtLpNCKR\nCDY2NhAMBivqYgHuRP4mkwkWi4VFFYVCAdFoFKurqwgGg0in0xX1ggK3HRe9Xo/q6uod9c90Og2v\n1wuXy8WmYVUSeDweNBoNdDodVCoVS4PmcjmEQiG4XC4mfFJpey6RSFBTUwOVSgWRSMSi5mw2i+Xl\nZczMzFQkoZUETqqrqyEWiwEA2WyWOYizs7NYWVmpqMucQHVzImxRf3YwGMTS0hImJibg9/sras+J\nGCqVSqHValkvP42MdbvdWFxcxNTUFLLZbMXtOZFDyUkMBoPY2NiA0+nE6uoqnE4nXC5XxWmHFLex\n8ng8pFIprKysYHl5GcvLy9jY2MDCwgJisdhXaf9SgHrjdTodm8hEUUU8HsfCwgLcbnfFGSE6KDKZ\nDHK5HDKZjKWdC4UCNjY2MDExgVAoVHFOC61dJBJBKBSCx+OhUCigUCggHo9jdnYWCwsLFXex0KVI\nXSHksGxubiKTycDlcuHy5ctwuVwVdZkTqMWP1p3L5ZBIJBCNRjE+Po7R0dFdv1geFqRLQBmWeDyO\njY0NzM3N4dKlS5idna3IdRdPYaM7xeVyYX5+Hjdv3sT169exsbFRsWunTEUqlYLT6cTk5CRu3bqF\nxcXFinxHi0Fy7GtraxgdHcXo6ChWVlYQCoWwtrZWke8ogcqHIyMj+Pzzz3Hr1i2k02kkEomvjH+p\nwOVyMTAwgKNHj0Kv10MoFCKfz2N6ehpjY2O4dOkSRkdHn/Yy7wmdTofDhw+jra2NyfiGQiFMTEzg\n888/x8jIyK6KQTwKbDYbhoaGUF1dzchPc3NzuHHjBkZGRnDjxo2KI8wBt+c9tLS0oKmpiZ2VeDyO\nkZERXL16FdeuXatY0ROVSsUIUOTgLiws4MyZMxgfH8f09HTFtcoRqD1RKBQyhbPLly/j4sWLmJ2d\nrVixFj6fD5VKxTgKm5ubWF9fxx/+8AfMzMzA5XJV7J5T9xPdLdFoFDMzMxgZGUEgEKjIwILA5/Mh\nEAjA4/EQj8exvr6OhYUFrK6uIpVKVRypFbjtrFBgxOVykc/n4ff7sb6+jvX1dUYO3e1A
9Lk0/rTR\nDocDPT09kMlkSCaTCAQCuHbtGs6ePYtLly5VHNEPuDPUpLOzExaLBVwuF9FoFPPz8zh37hwuXbqE\n69evV1z9lqIJg8HA6qCZTAY+nw83btzA6dOncePGDbhcrorLtgC3L0Sz2Qyj0Qgej4dwOIyVlRVc\nuHABn332GWZmZirSaQFus/0tFgtkMhny+TyCwSAmJydx+vRprK6uwufzVRRPAbhzXkgLgs/nIxwO\nY2NjA6Ojo7h48SKCwWDF7jll58RiMSOyrq6uYnR0FKurq0gmkxVtQImnsLm5iVAohNXV1f+Pvfd6\nbvu80scf9A4QAFFIgF0kxSKREkWJapYVObHjksS7m9jJTLxJZrM7O7nIzF7lZi/3b9id2Wwm2SRf\nJ3YcO3GR1UVJlCiKpMQKsAMg0SvRO34X+p3XkNJsiQRILZ8ZTmIV6uDD9/Oe9pznwGKxIJPJ7MjW\nVrkDJUXTZDIJv98Pl8vF+E877ZwDn9nO5XJZdS4ejyMSiSASibDKLv25SmX/z6TzpxlcqVTKInO7\n3Y4bN27gk08+wcTExI4UggAetZ16chMTE7hy5QouXLgAp9O5I0tyVDYXCoUQiUTgcDjweDy4cuUK\nLl26hFu3biEaje5I9nO57UQkslgsuHjxIkZGRrC0tLQj+RXlKopCoRBcLhexWAx3795lAQvtq9hp\nzxz4rORP5XOXy8Uy/kAgsCPntQGwS5zGV/P5PBwOB5aWlnYsF4dANpM8dTqdhtfrRSQSqZjeCf3b\nX+RMks3lf5fG+8hm0iko/9oO27/o9y23vVxBlCoBFMxQ4FWpoPGZdP5ErqC+c7FYhMPhwJUrVzA/\nPw+fz7cjL0OKbAUCAWNsZ7NZzM3N4e7duyyj2Km283g89gUAwWAQo6OjmJub27HPHPjsMqfec6lU\ngs1mw8TEBNbX1xGLxXa07eX952QyicXFRaysrGBzc3PHzTwTHr8MS6USYrEYHA4HwuHwjt+ySSjX\nI6DRvmKxyJwUfbad9lnIHj6fD4VCAYlEAj6f/4gDzeVy2xI4Ps33K3+WtbW1aGlpgdVqRTKZZIFY\nNptFMpmsuGjOXwNxtkilsKurC4FAgPX5i8Uia1tEo1Hk8/ltt/2ZdP4kX0lZXKlUwvr6OoaHh3es\n8ySUO34AjKdAxJCdant5T4suvHA4jPv378PpdO5Yu4HPMujyrMLlcsFisexox08otzudTmN9fR1e\nr3fHOn7C444xnU5jc3OzopWtJ8lCCeUZplAoZMItVNqlP0NVgJ3ysyCbiPnf0tICvV7PWPT0Z1Kp\nFNNY2Am2kwMlJ9rc3Ixjx45hYWEBuVyOjdClUin4fD424bKVtj/J9yK7c7kcW0h0+vRpCAQCpFIp\ntlI+Ho/D6/Uin88jmUzuOf8nQT6fRyaTYdFTKpVCMpnckTr4jyOXy7HsIZ/Psxew0gIQXxTl8pRk\neyaTQTKZ3HH8hMdBLyf1DElLgZ750ziI7Ua5KhuxoKl3WF7JoL7iTgHZSupmVPFSq9XQaDSIRCIQ\nCATsUqRztR12fFHQ+QiHw4jFYuByuRCLxUytMJ/PQyQSsX0KO2URFD3zSCSCjY0NKJVKFAoFiMVi\n9Pf3s3YjcYxWVlawvr5ebbMBgGkRrK2tMQGuYrEIrVaLl156CSdOnIBQKITNZoPVakUqldoRuwno\nfHk8Hty6dQvNzc3QarUAgNbWVnz9618H8LBit7y8jImJCbhcropsDX0mnT85Irrwstksu8z/Esqz\nP7osq3HZkxOiFzWbzTK5zccPBNlXXjot//VKgy7ox50pl8tlqor0e4/bWE3HWu4cyZkCD0mAJO/L\n4/GQSCRYRF5ub7VsJ7vpWdP5lcvlqKure2TcMhgMIh6Ps+pRtUvRj5fLJRIJW9rS3d0NrVYLkUiE\ncDgMl8vF7C//OVUTNJaYSCQglUrZetlSqcQqj/fv38fc3BxWVlYY92K7gpjPC9oL4na7WbVOKpWy\nJVYKhQJ+vx8ajYa1YujurCZfh5w/bWClKotGo8GxY8cYiXFmZgYikYgpcZJCZDUTkFKpBL/fz6bL\naFlbfX09zGYz+Hw+kskkNBoNkskka9nR+dougvQz6fwf7yeW//pfAo/Hg0QieUSUZiew0stJXeXO\nvTyTIAYsXS7VzlApEOHxeJDJZEzsh8PhIJPJsEyI/gyAHUFKKz8fKpUKLS0tkMvlUCqVUCqVmJub\nw8LCAnshqcy4E7I64LOVoZ2dnTAajdBqtZDJZIjH4xgeHsbs7CzW1tbYOak2+fLxioRWq0VfXx8G\nBwchEAggFApht9sxPT2NmzdvsokLEjCqpt1UtaBzUFtbC7lcjoGBAda26+rqwvj4OM6fP89G/8iZ\nVvO5J5NJhEIhphxaU1PDlCH5fD7MZjN0Oh0TpfH5fOyrmpVTGpHz+Xwwm81QKpWQyWSPkOc6OzvB\n5/ORyWSg1+vhdrths9mqrrkQjUaxuroKk8mEuro6puFCKqJCoRAdHR2sUur1erG+vo7Z2Vk2EbDV\neCadP0WzfD6fKbTF4/FHNs61t7dDr9dDpVI9Uta12WyYm5urytwl2c7n88HhcJBKpRAKhZDL5dhs\nbnNzM9ra2tisKwCmWDg3N8cOeaUPennPv1gssiyZx+PBaDTCaDSyDXnUR8xkMuyQr66ustJ1pVEe\nJOZyOSQSCfB4POh0OqYOaTQa0d3dDYfDwTbN2e12rK+vw+/3V+1iIbupFJ3P56FUKqHRaNDR0QGF\nQoFcLgelUone3l44nU7Y7XbYbDbYbLaqzUXTM6cqXTqdZpmcwWBg43+kFqnT6djc/9LSEhP/qWYm\nWj6fTdkcyXCXSiWIRCL2ZbPZmCjN+vp6VR0RPXO63yQSCSQSCVNZpLtlaGgISqUSbrcbMzMzjDNV\nrcpFqVR6JMEpny4iYrdcLofJZMKJEyfQ1NQEr9eLS5cuIRgMVn2EkVqI5aRukmwHHt79DQ0NeP75\n5xEOh7G4uIhAIMBav1uNZ9L5C4VCKJVK1jN0Op1skYxIJEJzczO+/e1vY2hoCJ2dneByuchmswgG\ng3jnnXewvLzMHGs1bKcSeTQaZTrbtKToq1/9Kt588022ErJUKsFut2Nqagr//d//XbXonMZXSMQi\nFAohEokwLfGhoSG8+OKL6OjoYBK0m5ubuH79Oj799FOmtlgt508Vn3Q6jVAohHw+j5qaGvT29qKv\nrw8tLS2MQJrP5/HgwQN88MEHuHLlCgKBQFXtJpuoFE2lf71ez5jc7e3trKXx6aef4vz58wiHw1Ul\nwFLlhErK2WyWVbnoYjSZTKivr8exY8fg9XoxNjaGDz74AEtLS1WruFBlq5zXQlMuFIxxuVzU1NSg\nv78ffX19cDgcmJ6eRjAYZATYaga69OxLpRJj+QOfOSixWIyBgQEcPnwYHo8Hn376KRPoqpYDpfXs\nQqGQJUk0JkcVuEKhAKlUiqGhISbU5fF4cP/
+/aq2XAQCAasgKhQK1pIDHl2+Rat/c7kcmpubMTEx\nAY/Hs+f8Py8oe8vlcuDxeFCpVDCZTGyl7+HDh3H48GGYTCY2k14uOFJTU4N4PF6V5TN0iRcKBUgk\nEuj1evT09EAmk6G1tRUHDx5kW9uEQiFKpRJqa2vR2toKo9EIhUJRFbELKoUSYU4mk6H5/98eplar\nmQhN+Wy3QqFg2eitW7fYdq5qgC5zsqu9vZ1dLtFoFPF4nC0pEggEaGhowJe//GU4nU4sLi4ilUpV\nrSRKTp0Icw0NDUilUrBYLKipqYFGo2H6+UKhEN3d3YhEIpiamkI4HK76tj9quVG5fG1tDffv30cq\nlUJ9fT06OzshkUjYyuW5uTmo1WrEYrGqiQCRzdQWIl7FnTt34HQ6sbm5id7eXrS3t8NsNsNgMKCz\nsxMNDQ3QaDQswKwG5HI5DAYDawtxOBxMTk5iamoKLpcLXC4XjY2NOHz4MLq6uqDT6dDQ0ICmpiZk\nMhkEAoGq2C0UCtHY2IiWlhbGCYlEIrhx4wasViu8Xi+kUinMZjNefvllmEwm8Pl8NDU1oa2tDSsr\nK1VTXdTr9RgaGkJ7ezvUajX4fD6mpqZw+fJl+Hw+RKNRcDgcHD9+HG+++SYEAgG0Wi26u7vh8XgQ\nCoW23KZn1vmXk5uUSiWamprQ39+P/v5+9Pb2wmAwQCgUsgCBtkZJJBK2AKgaIOZ2NptlB2D//v1Q\nq9Uwm82MFEJRo1AohEwmg8FggEajgUwmQyQSqbjd5c4/n89DLBajrq4O/f39KBQK4PF4CAQC4PF4\nqKmpYSJG9DKr1WqEQqGqOCLKwgqFAht/MhqNrApA2ttGoxE6nQ4qlQparRaHDh3CtWvXIJfLGSmz\nGqDeOelbyOVy+Hw+WK1WKBQK6HQ67Nu3DyaTCQaDAWazGV1dXTAYDFhbW6uq86epBKoGpVKpRySJ\nOzs7kc/nsW/fPuh0OjQ1NaGhoQFarZaRBatls1gsZuVyv9+P1dVVXLp0CRaLBX6/H2fPnkWhUIDB\nYIBSqYTZbEZ9fT00Gg02Nzcrfl4oyZHJZNBqtWy1rNfrxd27d/HRRx9hZWUFIpGIJRkHDhyAUqmE\nwWBAU1MTfD5fVZw/VRbr6upQV1cHhUKBUCgEi8WC8+fP4/bt23A4HNDr9Th8+DCGhobQ3NwMgUCA\n+vp6NDc3w+VyVcX5czgPN8z29PSgvr4eAoEAbrcbd+7cwdtvvw2n04lYLAaVSgWFQsE+K61Ht1qt\nePDgwZbb9Uw7/2g0ilQqBalUio6ODuTzeUgkEmSzWdjtdoTDYSgUCigUCohEIuYEqt3TojEVIhE1\nNTUxtrnFYoHdbmdStAaDgZXtyvkC1bCdSrjpdJoxntVqNdsUNjY2BoPBgJ6eHnR0dDD5YnJYQqGw\n4naX206rYwUCAQqFAkKhEEZHR+HxeFAoFNDb24uBgQEcP34cBoMBYrEYCoUCSqUS0Wi0KiS08vIx\niZx4vV6Mj4/jwoULyOfzrMJy9uxZvPbaa0xopKamBnK5vGqZHPCZVjuHw4HP58P4+DguXryI+/fv\nI5vNYnp6GtPT0/j+97+P06dPg8fjQS6Xo7a2FuFwuGrruIldLhAIkMvlMDMzg6tXr+LKlStsVvvy\n5csolUo4ePAgE9HRaDTQaDRwOBxVsZuSHLlcDh6PB4fDgQsXLuDy5cus2kJJ0YkTJ5DNZhmHymQy\nYWFhoWp2kyCRTCZDqVRiAcudO3ewsbHBpIpdLhfi8Tjy+Tz4fD7UajXq6+sZp6HSdtMoKFUrfD4f\n3nnnHVy5cgWrq6uslRKPxxGPx5HJZBgJ0Gg0sp0dW90m4m7pd9shIEILjd3IZDKYzWb09/fDaDQC\nAHw+H1ZXV5mUaD6fZ2tdieVdLdvLySwymQwNDQ0wmUxMOMdut2N5eRkOh4P1bEUiEerq6lhkWQ2U\nSiVwuVzG1KatiqVSCR6PBxsbG1hbW8Py8jJ8Ph+y2Sw4HA6USiWam5uhVqurYjfwWWZBPUVSV/T5\nfGy8yGazweFwsKyNz+dDp9OhsbEREomkaraTIxIKhSgWiwgEAtjY2IDdbsfGxgacTifW19fh8/mQ\nTCZRLBZZeVSn01XFZroUqdomEAgQDocxOzuLlZUVuFwuhEIhBINBeL3eR9oTarWajaVVA8RLkMlk\nEIlEKBaLWFlZwczMDJxOJ6LRKLLZLBKJBDY3N5liIZ/Ph9FoRH19fVUqi+WOiJx/KBTCvXv3sLi4\niHA4zCSVqV/u9/uRzWZZEkKjr9WwnRIKcuJra2u4d+8e3G4364kT4djv9zOel0ajgclkqorzB8Cq\nRAqFAnw+H7FYDFNTU7BYLCxIoXszlUphdXUV0WgUAoGArUbfDjyTzh/4LMKVSqWQyWQwGo3Yv38/\nDAYDu2jW1tYwMzODYDDIHKjZbMbAwAATYqgG6GKhL7PZDJPJ9Mj+bXJIdOglEgn27dvHttI9LZ6k\nelCeVVA2r9frIRAI2MWdzWbZxU7OX61Wo6enB3q9fkvs/qK206VIZ4X6+hQAAGBTIfF4nEmHcrlc\n1NfXo6OjAzKZbEtsf5K/IxAI2DPncDiMbFksFhkDnZxULBZDPp+HVCpFW1sb6uvrn7pS9DTPnNps\nQqEQkUgEFosFkUiE/R61WCgzImGXrq4uqFSqp7L7aWwXCoWMTEmk25WVFWSzWXbZUxsuHA4jHo+z\nMbrGxsaqVLnKWyw0IheLxWCxWODz+djvS6VS6PV65HI5bGxsIJVKQS6Xo7W1FRqNpuJ2A58FXBKJ\nhFVp/X4/a1uR7TRC53K54HK5UCwWodFoYDabq5b5UyJHo5OZTAZOp5NNCVFVQ0XnTRwAACAASURB\nVKlUIplMYnJyEn6/f9ud/44s+/8lMZvPC4oQRSIRK4PT96BL3mw2swuGMjeKGOfn55+4b/7nLpLP\naz85T7FYzMbm6O/zeDwoFAo2hqNSqWA0Ghl5JJPJwO12M9b8Vtn+eVH+Yj6uLFdTU4O2tjbodDrU\n1taitrYWbW1tbOlSIpFgGVOl7QYeToDQswU+awOQc6dec319PRv7owsoFouxJTRPa/sXVRKkM0FE\nSrKbGM+NjY1oa2tDa2srTCYTurq62OfM5XKIRCKIx+NPVU580mcvEomgVqsfEX+iqhd9pv3796Oh\noYGJodDnzGaziMViVRnF5XK5kMvlUCgUTAefevcCgQAKhQL19fXYt28fqyJqNBp2XmjErhpMfyKE\nSqVSAGD8HApmqMes1+tZAKBUKh/R+weebLnN00Imk0GtVrOWHLXoKGOuqalhPX6tVguVSsWc/ePy\n3ZUEjZZTL5+WEQEP31+BQACz2QytVotCocA2RRKnq3yKZKuf+Y50/gT6wF/0g9PFQipQ5aAXVKlU\nQi6XM/IZ8HC0bn19HVar9al6ieWH7IvYTSItRDgs/z5CoRAajYax0bVaLevZ0k7u9fV1uFyuJyIS\n0b
/1pFK2JOhD0xP0vfh8Pmpra7Fv3z7Geq6trUVNTc0jTmgr9p8/qe2UCdELR+0ipVKJlpYWiEQi\ndHR0wGg0Qq/XQ6PRgMfjMdupNFppUCBLDpSCRJVKBbPZjFKphAMHDqCzsxMGgwF1dXWsOpDNZpnz\nrwaoHVcecFGVi7YrHjlyBA0NDaitrYXBYGCfM5PJbKnzf5LzQo6FRuN0Oh2am5uhUCiwb98+9PX1\nsT65SqWCQCBg5fRqCVqVT1VQYE7aIVwuF7lcDt3d3aw1SkEAOf8nqZJsFQQCAbvPyXa6VyKRCDQa\nDbq7u1lVS6lUsmkoqhqUK7hWCpTQ0c+/WCxCIBCgqakJwWAQ4XAYjY2N0Ov12NzcZNwnspX0AOhz\nb6XtO9r5P6lcbfl+83IpU3KcRJATi8Vs1GVzcxPz8/OYmZnB8vLyU12KTyr9Sv1vKjGXl4RIvIJm\ncGtqapgiYSQSwdraGuuVPs2l+KTPnJwl9Z0pm6ipqUFjYyOTQCXtdipDb25uwuPxwG63P/WUwpPa\nTktZyp+5Xq9Hd3c3Y0RrtVrodDpotVoIhULk83kkEgmmOPakzv9pZILpbJCEL42rHjlyBLW1tbDb\n7dDr9TAYDKivr4darQaPx2O2k/jV0+BpqwYkT8zhcNDR0YFvfetbWFpaQjweZ+2uuro69k6UE2K3\ngi3/JPaTEyedAalUinPnzqGurg4WiwUKhYJNhhAxtFwNslqEYhKCoskUPp+P9vZ2/OAHP8DMzAzW\n1tZgMpmg0WjYOCBxE8gRVUv/hDb1UdYskUhw7tw5aLVajI+PM3lcKp8TpwH4LHDYrgz6r4EEz2j0\nXCAQoLm5Gf/4j/+I7u5u3Llzh2X7lLSSncTPINu3WnJ+Rzt/4MleTir7E/mDIlbKlCiyomjQ4/HA\narXi2rVrmJyc3JJFHE/6QypXPisWi4zFL5VKUVtby15CsViMQqGAQCCAyclJ3Lp1C4uLi2yT1ZPY\n+6SZM/AZyZIuRXJKUqkUOp2OZaikJpbJZBAMBjE+Po7bt2/D5XJtycjZk9hOCnNUSiRWMRHSHu+d\nZ7NZbGxsYG5uDnNzc09V9n8aFItFtrQqnU4zMpTJZIJEImE9Z4VCwS7GTCbDRKFsNltVxkKBz7Tx\nY7EYUqkUJBIJamtrWcAYj8eZ9KxarQaXy0UikUA4HGZqedXQ4SDp72g0ilAoxM52a2srZDIZdDod\neDweC+KpzUIBy/r6OjweT1XGQvP5POLxOAKBAHw+HzQaDWpra3H48GFGoiSekUQi+ZPq3NO05p4W\nmUyGVQgDgQAMBgOam5vZRFE0GmX3OgmiicVi9pkjkUhV2i2FQgGJRIL5mKamJqhUKhw8eBAymQwa\njYYFhLlcDjqdjlXoKNDdrvXWO9r5P21W8XifirJ9Aj3wxcVFXL16Fe+++y42Nja2JHN+0r9LPSEq\nDZLd1E+n4KBUKiEajWJpaQkXLlzAu+++y8ZEqmE7sYNJ7IZ6VTQOpVKpmLgPh8NBIBDAysoKPvjg\nA4yMjDyVMuHTvhjJZJKxyUksh543TX1QMEOfc2ZmBr/5zW+Y86+G2lw+n0ckEkEoFEIikWAS0CqV\nCjKZjJEtKcglvYL79+/j6tWrWFhYYIzoSoMknkOhEGKxGKvC0QhioVB4RAUtmUwiEomw6tzS0lJV\nxvxKpRLi8Tj8fj9cLhfq6uqYnLJCoYDZbGYVGQBssVgwGMT6+josFgsjBlYauVwOm5ubcDqdWF1d\nhVgsZhr+Go0Gvb29jyQA5TsM3G43Zmdn4fV6q9KySKfTCAQCWFhYQGtrK5NmVygUMJlMTBiN/pdU\nAHO5HHw+H9bX17dFJe9voVgsIplMwm63Y2RkBCKRiMlVq1QqdHV1IZlMsupA+UggjS5Go9Ft2dmy\no53/k4IcJs29P76rnb6CwSDsdjsuXbqEa9euIRgMPvUl/rRlJYoCydmUO3v6DMTaXlhYwPvvv497\n9+6xkZFqgS456peX7zWnnwf9uUwmg6mpKVy4cAHz8/MIhUJVXUhEqorl1SDgs34d/X8q4c3NzWFy\nchIWiwWBQKBqGyABsPNaTgwqLxuW9zmTySQ8Hg/m5uYwNTXF9kZUC4+Xz+m8EDmunK9TLBYRDocx\nMTGB+fl5NpZWDZRKJXYx06gtOXwa+aR3lUr9oVAI8/PzWF5eZjoA1UIwGMTq6ioaGxsfqSaWc16o\nr57JZBgXanZ2Fj6fr2p2ZzIZWK1WtLe3o7+/n72rj/fUqdqby+WQSqXgcDhgtVqfmtz6NHC73Rge\nHkZ7ezt6enrYXSOVSlmluqamhk1j0Pu6sbGxbbtDnknnTyh3+nQoqNSYSCSwtraG2dlZ3Lp1CzMz\nM0+9WGYryDCPS/NSCT+fzyOZTCKZTCIWi8HlcuHu3bu4fv067HZ7VbeckZ1U9if7yQlRRE7Zm9fr\nxa1bt3Djxg3Y7faq6suT7cQg39zcZFk/kS5Jd97v98Nms+HOnTuYnJzE+vp6VTe00QVNo5+pVIox\ni8s3PaZSKfj9ftjtdiwtLWF6ehqrq6ss+66m7W63m21KLCdTAp/1S2kF7fT0NCYmJrC6ulrVixwA\n4vE4FhYWoFQqUSqVYDQaoVQqH8n40+k0Njc34fP5MDMzg/HxcaytrSEcDld1E6Tb7cb9+/ehVqtR\nKpVQX1/PSJaU7VNVyefzYXFxEdPT01haWtoWmdnPi3Q6jcXFRUxOTmLfvn1obm6GXq9ngS8FLblc\nDtFoFD6fDy6XC1NTU1hYWKgauRV4GHDdv38fk5OTaGxshMlkYi0h0umgoDKdTiMYDMJms2F6ehoO\nh2PLyX7ADnX+T5s9l7Nqy/spqVQKExMTmJqawuLiItbW1rC+vo5AILBl/aCn6ZuXl/1JKY+eRTgc\nxvLyMiwWCywWCxYWFrC2tga3270lEqdPYzf9vXw+j2g0Cr/f/8hGM+ozLi4u4v79+7h16xbsdjt8\nPt+W9LOe9rxQlD0zM8M4CvX19dBqtewSdLvd+PTTT3Ht2jW2KGorthBuhe0LCwv46U9/io6ODnR1\ndWFgYIBd7LFYDKurq/joo4/w4MEDrK2twefzIZVKPRXxbCuIU8ViERcuXMDS0hJaWlpw9uxZvPrq\nq6wCkMlkYLFYcPfuXdy7dw9zc3NMCrWajh8AXC4Xfve732FkZAQtLS343ve+h2PHjjEHWs6tuHLl\nCqxWK+x2+yPrrKuFlZUVJu7z/PPP44c//CGbpiAeSTgcxp07d3Dr1i3Mzs7CbrdXbaqFkM1msbq6\nis3NTYyNjeF73/sevvGNbzwyppvJZBAOhzE5OYm7d+9iZGSEZc/VKPsTUqkUvF4vfvGLX2BmZgY/\n/OEP0dfXx3QT6P4MBoPY2NjAyMgIxsbGsLCwAJfLtZf5f14kEgm2PhMA25hEWtDLy8vM6Uej0apf\nJOXIZDKw2Wz4wx/+wEpy1KtzOByw2+1wOBxseUg1V1SWgzK
Gqakp/OpXv2JKeUKhkGX86+vrWFpa\nwuzsLJLJZFWzn3JQH5eIhyqViolrUGbt9/sxNjaG6enpLVvis1VjU16vFyMjI1hbW4PVaoXFYmGs\nYRoBHRkZgc1mQzAY3JI2xVa8M6VSCevr6wiHw3A4HIjFYggGg5DL5eByuYhGo1heXsbs7CwWFxfh\ndrurvpaVkEqlmEPc2NiAQqHA2toadDode19tNhsWFhYwOTkJn8+HWCxWbbMBgC0t8/v9bLa8vb0d\nBoMByWSS8RNmZmYwOzvLxnCrfU9SJSiVSsHj8bBnvX//fqhUKhQKBSa2tLi4CKvVivn5+aou3SJQ\n+2dtbQ3pdBparRYejwf9/f0QiURIpVIsQCR+xdLSElvpux2oztDm3wCHwykBT8eY53K5GBgYQH19\nPWOqEplvOw/CVmREpNhGZed0Os3Wnm7Xxfe0mX/596F2C23wKx9x2o4LZCvHd+g5lO9JKOczbKX9\nW/XMy0GiJ6QXEY1GmaRvtS/vvwXS+a+vr4dIJILX60Uikaja5r7PC+r363Q69PT0IJFIwO12w+/3\nV72l9bdA7+jAwAAOHjwIr9cLm80Gq9W6JZWt7QSPx4Ner8fXvvY1mM1mpFIpXLt2DWNjYzv6vBMv\n58SJE/j2t78NpVIJv9+P//3f/8XCwgKSyeRW2f5X/fuOdP4AnvqTk/CGVCplOtskD7pTDwWBLvDH\nSUM7IeP5PCgX+Smfutjpz70cFMCUKxXuBvvLgy5iO1dLVOaLgmwnwSWavtgpFaK/BprJ1mg0jN+S\nTqerSqj8PKAzThoWxCkKh8M7/r4hUbSWlhbIZDLkcjm4XC74fL4df945HA7q6+uxf/9+trRtYWEB\nkUhkK5PT/5vOvxzVkKPcwx72sIfdgu2oQlUCFDAC2BVBYjlI838bVR/3nP8e9rCHPezh2cRuDlwe\n16PZ6n/iiX+zithdP8U97GEPe9jDHnYW/qp/f2ZX+u5hD3vYwx728DSo5jKj7cYzOeq3hz3sRuxx\nUyqD8ue82575445ot9j+5xzobrP9cZXY3Y69zH8Pe6gidmtW8bjdu/Vz7Cbs1mf8587KbvssjwcA\nzwJ26ifZ/WHVHvawhz3s4ZnBLiQW7hH+9rCHPexhD3v4P4Y9wt8e9rCHPexhD3v4DLuO8LdHeKks\nyvtzu43oUoE52m1D+XnZjbbvNpv3sPuxW89d+frtYrFYMbGiXeX8yy/z3YZd2C9i2K0BVzl267nZ\nQ+Wwm9/R/+so3ykC4E+kiXdyAiAWiyGTyaBUKgGA7XKhfS75fH5b5MV3lfPfyT/AvwWye7fZT7r2\n5f+9m7BbnzthN9q9G20Gdq/dW4ndGgBxuVy2RZTL5SKTyaBYLLLFXJRR7zQfwuFwoNfrsW/fPrbh\njzZxOp1OeDwebG5ushXcW2n7rnL+uxk76cD9X0GpVNq1Gf9utHkPuxtUWS3Xyy//PSpLkw79Tjmj\ntCVPKBRCrVajpqYGYrGYrUPncrnI5XKPONKdYDst31IqldDr9WhuboZOpwOXy4XT6WSbIe12O5aX\nlxEKhRCPx7fs399z/nv4m9gJL8qTYjfbvoc9VAqU8ZdvhCxv95FzzWQySKfTO2ZTJAUltA5ap9Oh\ntbUVbW1tEAqFiEajEIvFyGazGB0dxfLyMtLp9I6wncfjQSKRsC+RSIS6ujo0Njaiu7sbyWQSmUwG\nExMT+Pjjj2GxWPacf6XxtD3vSmeff0mIYicc+L+Ex4mFhN1gM11AlA1Rm2Sn2k7kIoFAwPqk29lb\nfBKU93DLHZNYLGZf+Xwe6XQaiUQCuVyu6lvdyGahUAiJRAKxWAyhUAgejweVSgW1Wg2pVIpisYjN\nzU1sbGxgY2ODlaOrCT6fD4lEAoPBAL1eD71ezxySTCaDXC6HQqFAKBTCxsYGRkdHme3VBD1bs9mM\n9vZ2tLW1sQxaLBajUCiwZ97Y2IibN2/iww8/RCaTqdozp3NiMBjQ0dGBQ4cOoaenBw0NDVCpVOBw\nOOx50/mnwMvpdG6Z3bvG+W+HA6VLRSQSQS6XQ6fTQSQSAQDy+TwymQzrtXA4HMhkMhSLRQSDQSST\nSWSz2aodILq8pVIpampqoFKpADx0OGR7Op0G8PDFFovFiEQiCAaDKBQKVdnVTQeZz+dDJpNBoVBA\nLpezPh315WgPeqlUAo/HQ7FYhN/vRzKZrMrzppdVJBJBLBZDoVBAKpVCJBKBx+OxEmkul3uErJNO\np5FOp6tSZqRnLRKJIJFIIJVKIZFIWF9UIBCw5w4AiUQC0WgUoVAIiUSiqpcjOSKtVguZTAaRSMTO\nu0wmY1+ZTAaxWAxerxderxd+v7+qwQufz4dcLkddXR1MJhMUCgUkEgn4fD50Oh2MRiPkcjkKhQL8\nfj8ePHiAQqHAnnm1wOFwoFarYTabsX//frS2tsJkMkEqlUIqlUKlUqGmpgZKpRKBQAALCwvw+XwI\nh8OIx+NVuUuAh89boVCgo6MDvb29OHToEMxmMwwGwyNBokQiAZfLhclkQiwWw+XLl9ka3WpAIBCg\npqYGHR0dOH36NPr6+tDS0sLuklQqBaVSye5GsVgMlUqFGzdusHtyK874rnH+2/FCkyMyGo04dOgQ\n3njjDZhMJnC5XASDQWxsbMBqtSKZTEIgEKCnpwe5XA4fffQRFhYW4PV6P5cj3Y6gRSgUQqfToaur\nC2fOnMHp06fB4/GQzWYRDodhs9lgs9kAADU1NWhra8Pw8DD++Mc/IhaLVSVw4XA4zHn29fXhyJEj\n6O/vR11dHXg8HlKpFDY3N2G32xEOh5HP56FQKJDP5/HOO+9gfn6+4pkGh8OBQCCASCSCyWRCa2sr\nBgcH0dHRAbPZDIFAAADIZDIIh8Nwu93wer3weDxYX1/H6uoqlpaWqvKs+Xw+zGYzWltb0dPTg9bW\nVjQ0NEAul7MLsVAoIJ/Pw+v1YmVlBTdv3oTVaoXD4WBEqUrbLZVK0draiq985Stob2+HwWBgpVHK\npKmPGw6Hsbq6iqtXr+L8+fOMIV1pcDgc5ojOnTuHF154AWKxGAKBADwejzkiHo+HQqGAZDKJlpYW\naLVaDA8PY2VlpSpBC5XMDxw4gBdeeAFHjhyByWRigSEFvRQ0ajQayGQyTE1NwefzYXFxsWqJhFwu\nR3NzM15//XUcO3YMjY2NjyQRIpEICoUCQqEQHA4HRqMRBoMBMpmMtS6qYbdCocDhw4dx9uxZvPTS\nS9BoNBAIBCygSqVSaG1thVqtZomSTqeDRCLZUnnhXeP8nxaUvbW2tsJsNkOr1UIul0MsFkOn06G9\nvR2nTp1CbW0tuFwufD4fDAYDVCoVCoUC5HI52traEIvFYLfbEY/HEQqF/oQNvx3gcrnQarXo6upC\nbW0tyyg0Gg2am5vR39+PgwcPgsvlIp1Ow+/3Q6vVoq6uDiKRCGq1Gg0NDQiFQnjw4AFzrtt92VD/\nsLOzE11dXZBIJK
yE2N7ejs7OTrS3t0Or1YLL5SIajSIcDkOv1yOZTLJqSzQaxdWrV1kVoBKXpFAo\nRE1NDQ4fPoz6+npIpVLodDqYTCZ0dXWhoaEBOp2O2RSPx7G5uYmmpiZEIhH4/X6srKyAx+NhdXW1\nYhckZZ9tbW3o7++HyWSC2WxGU1MT6uvrYTQaWTkaeFitSKVSMJvNMBqNrEqwubmJeDyOTCZTEbu5\nXC7UajWMRiN6e3vR39+Po0ePwmQyMQIXOX6qAlLmr9VqWdC4trYGv99fEZuBh2VnkUiEzs5OdHd3\n4+DBgzh69CgOHToE4E970sBnVcV4PI5oNAqLxYL19fWKBuQUiJvNZvT19eHEiRMYGhpCa2srRCIR\nIpEIgIdZqlwuZ1UugUCATCaDlpYWmEwmrK6uVjzYkslk0Gq16Ovrw9GjR3Hq1Cm0tbVBoVDAYrFg\ndXUVQqEQJpOJOVYOh8PuHzrf1UBzczN6e3tx9uxZDA4OoqmpCYVCAV6vF7du3YLf7weXy4VcLofZ\nbGZ3qEgk+hMextPi/5TzFwgEGBoawosvvsgyTuqxlPcWi8UixGIxamtrIRQKIZfLodVqIRKJ4PP5\n0NPTg42NDSwsLGx79EgZXEtLC9566y0cPnwYzc3NLAt6vFdOL6her4darYZWq2XRb3NzM9rb2xGJ\nRLC5ubntjF3q0b7wwgv44Q9/CJ1OB4VCAT6fz7IKAjHzeTwe6urqIBQKWc/L6XRCqVRCIBBU5KKh\ni7GlpQU/+tGPcOLECchkMmbznxM+KhQKEIvFaGhoQHNzM7LZLMxmMyKRCC5durTtNpPdIpEIRqMR\nL7/8Mv7t3/4NEokEPB6P/f7j5zyXy6FYLEKlUkGpVKKhoQE8Hg+zs7PMSVXCbi6XC7PZjNOnT+ON\nN97AsWPHGI+C2m7lX/RrVJE5cOAAfD4fkskkAoFAxZwoj8eDUqnEiy++iFdeeQUHDhyAXC4HACST\nSUaMo6CFPm+pVIJGo0FHRwdqa2shEolYq6sS4HK5UKlUGBwcxI9//GOWaZZKJQQCAdhsNvbZqHRO\n9vP5fJhMJphMJna2Komamhr09PTgjTfewCuvvAKxWAzgYVB19+5dXLx4EfX19Thx4gQ6OzsfOTc8\nHg9SqRR8fnVcX39/P1577TWcPXsWRqMRAOD3+7GwsIBf//rXcLvdqK+vR1dXFwYGBv5kCdKe8/8C\noB/44OAgXnrpJRw+fBidnZ2ora2FRCJBsVhEOByG3W7H+Pg4wuEwIz8JhULo9Xr09vairq4OwMMf\n1PDwMKanp5FKpbbV+fN4PMjlcrz22mv40pe+hKNHj8JgMEAqlaJUKiESicDr9bISfyaTQTabRT6f\nh06ng9lsZsSRdDoNh8OBqakp1vffrouGAq2enh584xvfwKlTp1BXV8de0kQiAa/XC7fbzcZXyMmI\nRCI0NTWhrq4Ocrkc0WgUTqcT0WgU2Wx2W+wtB/EjXn75Zbz22mvo7u6GVCoFj8dDJBJBIBDAxsYG\nkskkK5tTKU+tVkOn04HP52NzcxMzMzNYWlqqWNYvl8uxb98+fPOb38SZM2fY845Go1haWoLL5UIw\nGERdXR3UajW7CFUqFcRiMYrFIqu+hMPhijxv4KHIidFoxNDQEF5//XWWDSWTSczNzeHevXvI5/NQ\nKpXo6uqCXq+HQqFgmXSpVGJOn3gulYLZbMahQ4dYlaVQKMDhcMDpdGJ0dBROpxMikQgDAwMYGhpi\nQSyHw0EymYTX60U8Hq94CVoikeDEiRN47rnnoNPpkEgk4HQ6MT8/D4vFAovFAoFAgPr6evzd3/0d\nDhw4AIlEwiouFosFCwsLFc/6ORwOzGYzXnnlFXR0dCCRSODOnTtYXFzE6uoqZmZmYLfbsb6+DrVa\njS9/+cusYpTL5ZBIJBAIBJBKpSpuN4fDQUtLC3p7eyEUCnH37l18+umnCAQCWF9fx8rKCrLZLLhc\nLhKJBEss0uk0AoEACya3Cs+086eycWNjI86ePYvvfOc77OIAgFgshmAwiJWVFdy/fx8XLlxAIBAA\n8PBCqqurw8DAALLZLPh8PrLZLEKhEGZmZmCz2bb1cuRwOIwN+uqrr+Ls2bPswi4UCnC5XGz+c2pq\nCnNzcwAelquVSiW6u7thMBgAPLwcE4kEPB4PVldXt3VMh/gIra2teO655xiPghi3oVAIKysrWFxc\nxMrKCjweD3OkVFaniBgAIpEINjY2EI1Gt328iMPhQKvVor29HS+++CJee+01Ni+cz+dhs9kwPT2N\nxcVFxGIxCIVCRrhsbW1lGV+xWEQymcTy8jLrnW8nKHNuaWnByZMn8fLLL6Ojo4NdIk6nE7dv34bV\naoXb7UZnZyfrN9fX17PqF52TWCyGeDxekYudy+Wy9gqVnrPZLILBIPx+P0ZHR/HBBx+gUCjAaDQi\nm82iu7sbjY2NkMvlj3AuIpFIxQIWKse2tbXh7Nmz6Orqgkwmg8PhwOrqKqxWK86fP4+VlRXIZDLk\n83m0trayuW5q0RF5uJKMfyKcDQ4O4siRI1CpVFhcXMT4+Dju3LmDmZkZrKysQCKRoK2tDceOHUN7\ne/sjGbbD4cD6+npFgxaqJDY0NODkyZPQ6XSIRqMYGRnB8PAw5ubmkEwmUSwWEYlE4Ha7WWWLy+Ui\nn88jlUpVLJEoh0AggEQiQUNDA5qampDL5TA3N4df//rXCIfDjPApkUjY9Aqdh/JzspV3yTPt/Llc\nLpqbm/Gv//qvOHnyJOrr61mvs1gsYnl5GdeuXcP9+/dhtVrhcrkgFouh1+thNBrZGAYRSdLpNMtS\nt/PQU4/wzJkz+Kd/+ie0t7ejpqaG9ZcTiQQuXryImzdvwm63IxgMIpFIoKmpCc3Nzdi/fz+6urrQ\n0dEBtVqNfD6PQCBQEQfK4/Gg1Wrx1ltv4aWXXoLJZIJYLEapVEI2m8X8/Dx+9rOfweFwYHNzE/l8\nnlVXent70dPTA71ezzKkUCiEtbW1bWf6U7Xi0KFD+PGPf4zu7m7m+NPpNKLRKK5cuYI//OEPSCaT\nrCozODiIxsZGdHR0MGJaNptFoVBAOBxGNBrd9kudeoKvvfYa3nzzTTQ0NAB4WHa22+2YnJzExYsX\nYbfbWYsCABoaGmAwGKDVasHhcFgGShdmJZyRWCxGa2sr3nrrLQwMDIDP52NjYwOLi4uYmprC2NgY\nlpeXGWve6XSiqakJKpWKlaLJcWaz2YpVWQQCAVQqFSvjKpVKOBwOvPfee1hYWIDT6YTD4UAmk4FY\nLGZkVqPRyAIWChKphVEpKBQKNDQ0oLu7Gy0tLeBwOBgbG8PPf/5zdmYLhQJ4PB54PN4jrRdqe1Vj\nioUqEc3NzTCZTACAYDCIBw8eYG5uDolEAsVikTlaannRsy0UCoygXWly83hT2wAAIABJREFUpUKh\nQGNjI5tEoOA2GAyyqie1sVQqFZtw4XA4yGQyCIVCW96Ce2adP4/Hw4kTJ3D
u3DmcOXMGzc3NEIvF\nCAQCcLlcWFtbw+TkJO7evYu1tTVEIhGIxWJ0dnZiaGgItbW1aGhoQEdHB7RaLQqFAubn5zExMbHt\nF7rRaMTJkyfxyiuvYGBgABKJhF3kTqcTdrsdly9fxszMDDY3N9l40eDgIA4ePMjK5nq9Hnw+H+vr\n67hx4wZjnW+n7QcPHsTzzz+P559/Hh0dHRAKhVhdXcXi4iICgQAePHiAO3fuIBqNgs/no7W1Ffv3\n78exY8fQ0dGBxsZGSCQSAEAqlcLKygomJiYQDoe3zWbg4ct56tQpvPrqqxgcHIRcLkcikcDExATs\ndjtCoRCuXbsGq9UKsVjMHH5XVxc6OzvZGBeHw0EwGITL5YLL5aoIsbKtrQ2nT5/G2bNn0dHRgVKp\nhIWFBdy+fRsbGxtYXl6GxWJBJpNBbW0t9Ho92tra0NDQwLgs2WwWsVgMy8vL2NjYqIiIC5fLxdDQ\nEF566SUcOnQIWq0WyWQSd+/exc2bN7G6ugqHw4FYLAa1Wg2DwcAuUOrbUknU4/HA5/NVzCEZDAac\nO3cOp06dQn19PaLRKNbX1zE2NoaVlRXEYjGk02nIZDLWH6dWI7WLQqEQbDYbc7aVQnd3N15++WW0\nt7eDz+fD5XJhZWUFCwsL7NnRFBTdf2KxmCVA0WiUjYNWEjKZDCdPnsTQ0BAUCgUcDgdWVlbgcrkQ\njUYBgKn8HTx4EJ2dnWzCIpPJwGazwel0VkWboKGhAS+//DJaW1tZFXFjY4PJENN4K1VaGhoaGCfK\n4/Hg5s2bcLvdW2rTM+n8iST3xhtv4Lvf/S4jaxUKBayurmJ4eBgffPABlpaWEA6HweFwoFKpYDKZ\ncO7cOXz/+99ns9ACgQClUgmpVAq3bt3ChQsXGFluu2xvb2/HT37yE+zfv5+Vy30+Hy5duoTr169j\nbGwMsVgM+XweAoEALS0tOH78OF5//XX09fUx8RbgYTnUbrfjvffew9zc3LZnz1/+8pfxk5/8BDKZ\njLUo7t69i1/+8pdYWlqCz+dDOp2GWCyGwWDA4OAgXnjhBZw8eRJSqZSRvBKJBMLhMKanp3Hz5k0k\nk8lttbu2thb//M//jHPnzrEyrdfrxc9//nMMDw8jFAohl8uxEaN9+/bh9ddfR3d3N8xmMyub099b\nWFiAw+HYdufP4XBw5MgR/Md//AdUKhV4PB6SySRGR0fx7//+74yXUiwW2ajT8ePHce7cOcbuB4Bs\nNgu/34979+7BarVWpOTP4/Hw+uuv47vf/S7kcjlSqRRCoRA+/fRTvPfee+zP8fl8GAwGHDhwAKdP\nn0ZTUxMkEgnjBaytrWFpaQk2m61i8/LNzc340Y9+hK6uLnA4HMYbotYKVe+InHbgwAF0dHSw+ySX\ny8HlcmFmZgaBQKCiinknTpzAv/zLv0AulyMYDGJ5eRlutxupVIop+YnFYrS1teH48eNoamqCQqEA\nl8tFPB5HIBCoyriwSqXC1772NZw7dw5CoRBOpxOzs7MsGSPFPKPRiOeffx7Hjh2DTCYDn89HJBLB\ngwcPYLFYqiKq1N7ejrfeegsGgwHxeBwWiwU2mw2lUokJQ2k0Ghw+fBjf/e532dx/qVTC6uoqfve7\n38Hn822pTc+k86e+N43y0aKHzc1NXL16FZ988gm7KEqlEmQyGdrb2/Htb3/7T5jdALC2toYHDx5g\nfHyclfK2A8Qyl8vlkEqlEAqFTA1sZWWF9bWoPCuVSmE2m3Hq1Cl8/etfR0tLC5vPpYDlxo0buHjx\nItbW1hCLxbbFbuAz+U9SNiOxCr/fj6WlJVitVkQiEeZA9+3bh2PHjuG5555DT08PK9FRWXppaQm/\n//3vcffuXSa0tJ22kxgOtYVCoRDsdjvcbjcjv3E4HGg0Grzwwgs4e/Ysuru72ZgiXeiUuX788cfw\ner3b7vjLL2vSefB4PPB6vUilUsxukUiE/fv345vf/CYOHz7MBK3oeft8PlYhcLlc22Zzue3ly1io\n7eB2u5lwDL2bGo0Gx48fx5kzZ2AwGJgQVzqdhs/nw/z8POx2OxKJREV60OW2l7fiEokEcyzEH+nq\n6sK5c+fQ1dXFgvJ4PI6NjQ04nU4EAoGKCiqVjx7S1AS9W1TmV6lUaGlpweDgII4dO4ba2loW3Doc\nDoyPjyMQCFTFiVLlslQqsTl+0iEQCARob29no5aNjY3s55NMJlmAWA1dgkKhwNrFNLqt0WjYCCsF\n5adOnWLBFu1RSCaTiEQie2X/zwM+n8+cJ410kCOamppi7GEip7W0tODo0aN44YUX0NbWxi7FZDIJ\nn8+He/fu4fLly5ibm2OEwO0CKa/RRVFO1pufn8fGxgaKxSKbRBgYGMCJEydw9OjRR15ov9+PtbU1\nXLlyBTdu3IDX691WJjQpsPH5fNZny+fziMVirCxL5S25XI6enh6cPXsWhw4dYiUuOugkH/rRRx/B\n4XBsexZK5K3ylaA0h01KjsViETU1NWhpacGpU6cwNDQEs9nMLpd0Og2PxwObzYY7d+7g3r17FbnU\nHx8FIjY2jauWSiU2mtXf34+zZ8+ykUvgIS8gHA6zltbKysq2t1jK7S6/zOmyUygUqK2tRSqVYqXn\nI0eO4MCBA1AoFOyZezweLCwsYHp6Gna7vaJlaAr2CoUCOz9yuRyNjY3g8/koFAro7OzE4cOHMTAw\ngPr6elYJi0QimJmZYctaKl0+J30EPp/PVPIaGhrQ1dWFfD4Pg8GAvr4+HDp0CJ2dnRAKhezzrq6u\n4t69ewgGgxV3ovl8HqFQCJFIhE2pNDU1obu7myVsg4ODOH78OJvo4nK5SCaTCAaDWF1dhcvlqkrZ\nPxaLYW1tDVKpFEqlEmazGd3d3XC5XODz+aivr8eZM2fQ39/P7M7lcggGgwgEAttCwH0mnT9lM6T0\nRKVQr9fLonPg4VhZTU0Nvv71r+PVV199RLGNGK2//e1vcefOHczOzjLhi+0EXYA0skfZMGUTFLlr\nNBr09fXhO9/5Dnp7e9n4EMnjDg8P4ze/+Q2sViucTue2s1vp8s5ms0ilUhCLxeDz+VCpVEwkJJvN\nQqVSob29HcePH8fp06eh0WgeYZo7HA784he/wPXr12Gz2SoykkPl+mQyyfq0NTU1TCpUpVIhHo9j\n//79OHPmDA4ePMhUCYGHJXOn04nr16/j//2//1fR5SHk7CORCKtgkEzrgQMH4HQ6IRAI8JWvfAWn\nT59mhCPg4Vlzu924f/8+/vjHP+LmzZvw+/0Vs7tUKiEWiyESibBMiBQrORwOVlZW0N3djRdffBG9\nvb2slUTB1r1793DhwgXcvn0bHo9n220uRzqdhtPphMFggNFoRGNjI06dOgUul4uFhQV4PB4899xz\nGBgYQG1t7SPlfrvdjg8++ADj4+OIRqMVdaKlUgkejwdzc3M4cOAA64/L5XIMDAzA4/FAJpOhp6cH\nTU1N7F6hM2a1WjE2NlYV559MJj
E+Pg69Xs84WSqVCp2dncxB1tfXw2w2P3KvUFWLuAHVUFJcXV3F\nO++8g+985zs4ffo0Dh48CLPZjC996UtsmoxEicjuWCzGyOjbQWZ9Jp1/OchhisViaLVadHZ2stWO\npG71/PPPs/46ZaBTU1O4efMmrl27xrSsK4HHHT2NK7a0tOArX/kK1tbWkEgkGEnu4MGDLFKkbGh0\ndBQXLlzA3bt3EYlEKjL7XK50SBk0MaIHBwcRiUQQiUSgVCqxf/9+HDlyBAaDgTnQQqGA2dlZXL16\nFTdu3GCyypV0ROWQSCQwmUx46aWX0NTUhM3NTXR3d6Ovrw8NDQ2MaZ7L5RAKhTA8PIyLFy9icnJy\ny0dyvggoA+3u7sbf//3fw+fzgcvl4ujRo+js7GRnnPrli4uL+PjjjzE+Po719fWqlXGBz8hag4OD\nUKlU2NjYQGNjIw4fPoza2lpWxYvFYixomZiYgNvt3lZOyF+ymxIMLpfLRopp5DIQCLCRRCL5lY+L\nzszMwO12V0VilsSbqIIoFArR2dkJg8GAYDAIPp+Puro6lk2XSiXGB5mdnYXP56uK3dReoZ81qfWp\n1WqkUikkEgm296E8ibNarbh58yY8Hk/FR/wIpLyayWRYe7SmpgYNDQ2MO0EqfmR3MBjE6Ojotsma\nP9POv1yRTaPRQKFQMELXysoKjh49im9961uPjA1R9nrhwgX87ne/w9LSUkUvFnL8VD4n2dOBgQHo\ndDosLi5iY2MDp0+fRnt7O7tY6O+urq7iv/7rvzA9PV1RmVP696kHTbYLhUK88sorOH36NJxOJ3g8\nHpqbmxlbm6oV+Xwew8PD+M///E+Ew+GKirWQ8yGCJwUu9fX1+MEPfoBEIsGmKqjsXL7Mx+Px4Pe/\n/z0jJlbSgRJhiORX6ez09vaiu7sbmUwGhUKBtcDI7nw+zwiV7777bkXH5MpBexPoXeVyuejr68OB\nAwdYa44+F2VEoVAI8/PzuH//PiwWS1Xs5nK5f6KyqVKpoFKp0NXV9SeqhFShmZmZwdjYGJxOZ9WW\n+ZBqKZ11DocDpVLJytGPo1gsYn19HX/4wx8wMzNTNQfK5XLZsiG6HwEw/XutVvvIn6cK6uTkJM6f\nP7/tHJy/BlIslUgkjxCyaSyxHDS26vV6MTIywjRcthrPpPOn0vjjuvs8Ho+RtKLRKCvpCgQCJgBx\n7949XLx4Ebdv32Z625W2nRxiebRHsq1SqRTt7e1Mt5/D4SCRSCAYDOLSpUu4du0aE6GptN30zOnS\nJlBfkTIjUsyjHt7c3ByuXr2K4eFhbG5uVkU1rDwIKdfQpqoRySaTI9rc3ITH48GtW7cwPDwMq9Va\nlU14dImQ7USoJFBAQ0FwqVTC2toa5ufncfPmTdy+fZvpElQSdF7IdvqZl9te7vSBhyRMi8WC27dv\nY3h4GAsLC1Xb4keBSrFYRCaT+RPp5MdllBcWFjA+Po7Lly9jfHy8qhsqRSIRmyIiQiih/JlTpn37\n9m1cvXoV9+/fr1gF9M9BIBDAYDBAo9Gwc05nmngX9N/0zK9evcraWdUKWgCgtrYWg4OD0Gg07JmT\nncSVAsBa1bSoymazbVsi9Ew6f3qotO4zm82ytb1Go5GNZgFgxLRQKITFxUWcP38eb7/9NiKRSMVL\niYRMJoNAIMAkKWnZCu0EL3dW6XQaNpsNDx48wHvvvYfR0VHEYrGq7dmmeWdSqOLz+dBqtVCr1Y/o\ng2ezWWxubmJqagpXrlzBO++8wxjq1QCN6JEcL6mJ0fw+McwLhQJSqRRsNhvGxsbw4Ycf4saNG1su\nvflFEIvFsLq6yjI5Us0zGo2slEjZRCqVYs/84sWLFVdpK0epVGJjkeWVLr1eD41Gw6pa5XyQ69ev\n4+rVq7h9+3ZFx+PKweFwkEql4HA4WAWA/re2thZKpRIymQzAw8pQPB7H1NQUPvroI9Zeqdb7CTw8\nL+vr64jFYsx2hUIBlUrFnjsRnp1OJ4aHh3Ht2jWsrKxUXD65HCR163a7GXmYsn6NRgO9Xs9aWpub\nm5ibm8P777+PxcXFqvX6gc+mckQiEVwuF3P+fD6f7aagqkUqlUIkEsHt27dx6dIleDyebUuGnknn\nn8/nkUgksLi4iGvXrsFms8FoNKKvrw9NTU2PjK6Qtv/ExAQrlweDwapdiLlcDj6fD6Ojo0gmk7DZ\nbGhubsaBAwfw3HPPPSLFSiTGDz/8EL/61a/g8Xiqtl+bsgir1Yr3338fVqsViUQCGo0GL7/8Ml58\n8UWWfRaLRQQCAVitVvz85z/HyMgIfD5f1SJzkgO9cuUKRkdHsba2BoFAgMbGRnzve9/DwYMHmQMl\n3YTr16/jZz/7GVwuF1MWqzSo72yxWPA///M/8Pv9iEQiTB3yrbfegl6vh1wuR6lUYlLWH3/8MS5c\nuMC0C6oBIr9duXIF8/PzCIVCTLfizTffxLlz59DQ0ACxWIxCoQC73Y7R0VF8+OGHWFxcrOgSnMdR\nLBbhcDjwy1/+EgDYBsq2tja8+uqrGBgYQEdHBxvTtVgsGBkZwfXr11lgXi3bS6US7t27x/Q2aDfF\nkSNHcPr0aZw8eRJmsxmlUomJFo2OjmJhYaEqla1yRKNRXLhwASMjI+znL5fL0d/fjzNnzuCrX/0q\nhEIhkskkZmdnMT4+jvn5eUQikaraTdW23/3ud8jn88y3GAwG7Nu3D//wD/+AkydPAngo6EPcCrfb\nva2TIM+k86eMeGJiAk6nE263G62trSgWi1AoFNDpdOBwOAiFQnA6nZicnMStW7cwNjZW1bIWXeZe\nrxfXrl1DLBZjM9sikQhDQ0PMebpcLiwuLrIxRIvFUvUDTvP56XQadrsduVwOtbW1OHToEHP88Xgc\nLpcL4+PjGBkZwd27d+FwOKpmN9meSCTw4MED9mzFYjHi8TgSiQQrTwcCAaytreHGjRtM7a9azpNA\nJM/bt2+zJUkCgQBtbW1M84H6h5OTk7h69SrrOVcTFHivrKwwqedisQiRSIRXX32V8QDi8Th8Ph+G\nh4dx+fJlLC0tVWTq5m8hFothdnYW2WyWOX8ad6VRYWLHf/zxxxgbG6s4B+cvwev1ssomtej27dsH\ntVoNoVCIbDaLaDSKiYkJnD9/HsvLyxVvI/45kEofaZiUSiXodDoMDAwwEaJUKgWXy4Xh4WGMjo6y\noLLaiEQisFgsSKVSLIg6fPgwjh49yhbMpVIpLCwssAB3u6ugz7TzHxkZYWXDQCAADofD5FgBwO12\n48aNG3j77bcxOjpaNYZ2OQqFAtxuNwtCiNTS2trKovRcLgeLxYJPPvkEb7/9dsXGs/4aiLW9vLyM\nlZUVAGDlw3Q6zfrooVAI9+7dw29/+1t88sknVbcbALtMrFYr+zXq5ZLuAofDYWXnX/ziF7BarTvC\ndgDY3Nxk+8kpyBIKhUy6NxaLYWFhAZ9++il++tOf7ohzDoCR9wh01rVaLerq6iAQCOB2uzE
9PY33\n338fV69e3THPnJYPEXg8HmQyGbq6ulBXV8eCsrGxMbaqdacglUoxx0JtuKamJgwNDUEmkzERouHh\nYbz77rs75plTK5dAk1DHjh3DoUOHIBAIWHXrk0/+P/a+7LkN6zr/A7GvxA5iIQju+yKKm0jtsrUk\nXuJx7E6aTptp+9KH9qkz/QP63Ld2OtNpMpm2M3Ga/Gy5dhRZsiRrlyhSXATuG0iQIEAQ+w4Q4O+B\nc65B2klEbYBsnhmOlFgiDq/uPfv5vkt49OhR0eieSqW+kcVbrVa8++67sFqtyGazCIfDePLkCf73\nf//3lbzR76TzzxeCTezq6mIHTWXzxcVF/O53v8PKykrRXBIS4lg3Go24cOECzpw5A4VCwZz/48eP\n8eDBA0Sj0aLTnRCsmpqacOrUKfT29rI5BbfbjatXr2J2drZonBDJ9vY2xGIxFAoF+vr6cPLkyV0Y\n25OTk7h+/for5YvfjxC0aU9PDwYGBtigXygUwq1btzAyMlKwAbk/JrQVUl1djfb2dsZ+x+FwsLi4\niP/7v/8ryjcKfI0OWVtbi46ODpSWloLH4yGVSmFwcBB3794tyjcK7Ogul8thNBoZ8yaXy4Xb7cYX\nX3yxC+u/mIRsiUwmg06ng1qtZjMWs7OzuHPnDnw+X9HqTts5arUaJpMJEokE8Xgc4+PjmJ+f/9bV\n45ch30nnTwcsFouh0+lQV1eH7u5u9Pf3Qy6XM6jFlZUVDA4OMpjfYhAaBBGJRLBYLGhpacHJkyfR\n09MDiUSCra0tRKNRTE9PM7KWYhFakSNcgp6eHrz99tuoqKgAsDPP4PV6MTQ0VPCy814hyFO1Wg2b\nzYZjx47h7Nmz0Ov1rKWxuLiIkZERlmUXi5DuOp0ODQ0NePPNN3Ho0CE2pxAOhzE6OsoMSzEJl8uF\nSCSCVqtFW1sbzp8/zwIu6jvfuXOnoO24bxOyMTKZDBqNBocPH95Vfk6n07Db7RgbGyvokNy3CdkY\niUSC8vJydHZ2wmazMSQ/r9eLO3fuYGlpqdCqfkPorguFQlitVjQ1NUGr1bKB3IWFBQwODhZFayhf\n8mGVJRIJDAYDysvLoVKpwOPx4PV6MTo6ysjXXoV855x//hRlRUUFmpubcfz4cQYcQkxgLpcLbre7\noJPx3ybE9202mxn2fX19PeMAD4VCWF9fh9/vLyiYzF6hiVaVSsX4to8cOYLy8nLIZDLkcjmEw2H4\nfL5XBjz0tEJBi0qlQn19PQYGBtDZ2ckGzjKZDGKxGILBIKMhLhahVUS5XI7W1tZduhMyWyQSgc/n\nK4q+bb6Q7mVlZWhra8PRo0dx9OhRaLVatp0QDoexvr5e0DWtvUJ3nbjlW1pacObMGfT29rIVulQq\nBY/HA7fbXVT3BQAL0G02G3p7e/H222+jubmZzWGEQiHMzc3tamsUg1A7q7S0FHq9HmfOnMGZM2dg\nNBoBgHEPPHnypOB3narL9Hsej8c4Z8rLy3Hs2DH09PSwVlc4HMbg4OArnd36zjh/MuAymQxmsxlV\nVVWora1FQ0MD2tvbYbVaGepTMBjEw4cPMT09XbB1ob260yqi1WqFzWZDTU0N2tvb0d7eDp1Ox/ZA\nl5eXGb1joY0KRbMikYitllVVVaGurg6HDx9GQ0MD5HI5eDweEokE7HY7njx5sgtiuVBCw1nUGzcY\nDKiurkZzczMOHTqEyspKSKVScDgceDwejI2NMZ6BQt4X2mcmMh9io6yqqkJbWxva2tpgNptZGdTh\ncMButyMQCBR0OJEMN/FWiMVi5oBqamrQ2NiI9vZ21uuPxWJYWlrCysrKK+eN/za9CRSHjLhKpYLV\namV2pqWlBWVlZeDxeKzvvLm5+cpgnvcK3RO1Wg21Ws2SBA6Hg7KyMpSXl8NsNqOlpQUtLS2Mttzn\n8zEE1EJVFcViMbvXIpGIAVUR8qBOp4NKpUJfXx9aWlogl8uRTqcRi8Xg9/sRCAQKkhRRIGswGKDT\n6VifP5PJsBaFWCyGwWBgFObUxo1Go/B4PK+EV4PkO+P8CTzGbDbj6NGjOHfuHGw2G4xGI+RyORva\nojWzL774AqOjo0Xh+GnP1mw24/Tp0+jr60N9fT30ej0UCgULWra3tzE1NYWLFy/C6XQWVG/g64BL\nqVSirq4O/f396OjoQHNzMzQaDeRyOdM9Ho/j9u3buHv3bsGzfsrcxGIxVCoVWltbcejQIXR3d6O6\nuho6nW4XtKnT6WQTuIW+L9QvLC0thU6nQ01NDQ4fPoyjR4/CYDBAo9FAJpMx3e12O27dulUUZVDC\nqiDdzWYzjh07htbWVobtT+XbUCiEwcHBgp852ZXy8nKG0KZUKlFdXY1jx47BaDRCqVQyznsOh8Og\nh/1+f8F0p3tSXV2N1tZWFvjxeDxGNkTBo16vB4/HQzqdhtPpxOrq6iun680XhUKBmpoavPnmm9Bq\ntYz8SCgUMlhwDocDg8EArVYLLpeLcDhc0EouDR92dnair68P3d3dCAQCrOJmMBjQ1NTE1lnNZjOb\n4UqlUojFYkgmk680oftOOH/KnOVyOTQaDSQSCTKZzC6mOTKGRNW6tLT00hn6nkbICanVapSVlTH+\naaJ6JN3T6TSCwSAcDkdRrN7kn7ler2doiUSlTDCWABgewdzcHJaXlwtasaCAhfQ2mUywWq0oKyuD\nWq2GQqGAWCxmzGzxeBxOpxPDw8OvnDxmr940oKXT6WA0GmGxWFBZWQmbzQadTsdgqonaN5lMYnJy\nEo8fPy7ofeHz+axEXl5eDqPRCIPBAIPBgKqqql26AzuT0bSJMzU1VTC9CbLXarWyqgTBhOf3bKVS\nKaPf3trawuTkJD7//POCzrXI5XJUVlaira2NOXqyJyaTCXq9nmHME1lSKBTCV199hTt37hQk66dB\nPpvNhp6eHjQ2NqKsrIzN3PB4PBgMBlaRk0gkAHZwXRYXF/HZZ58V7L5wuVwoFArU19ejoaEBRqMR\ner0emUwGmUwGIpEIKpUKuVyOtYxI98ePH+PatWuvNOsHvgPOP7+8ZTabYbVaodFowOPxvgFtmsvl\nsLq6ypjuCu1AuVwuJBIJysrKUFFRgcrKSmg0GkilUoYbTpCVsVgMc3NzWFxcxPr6ekHL5nvPvKKi\ngjkgpVIJqVS6C2ve5/Nhfn4eS0tL2NjYKKju1BqqqKhARUUFa7OYTCZ29rRjTk5oYWEBc3NzBb0v\nRB5jMplQX18Pi8XCvioqKqBSqRgSId0Xl8vF7kwhS/5SqRQGgwGNjY1oaGiA2WyGVquFSqViDGwU\ncG1vbyMUCmFlZQXj4+MFrXBxuVyYTCY0NTWxOQqtVguxWAy5XA61Ws2ow/MD9NnZWQwODiIajRZE\n73zWz46ODrS2trKglhwPsW7SVkUymcTm5iZGRkYYfkEh9CaK9a6uLtTU1ECv1zNkUAqAyZ4T4ykN\nb9+8ebNgmCESiQQ6nQ61tbWoqqqCRqPZRSm/l7
CNAq5MJoPZ2VmMjo6+cvvy2jt/ypzr6+vR1taG\nioqKXVSsIpGIrWrlcjk8fvwYt2/fZqAihRLKnI1GI3p7e1FdXY3y8nKYTCaUlZVBoVAwQ04TuDdu\n3GAMT4UshRKkZlNTE3p6emA0GlFWVsb6XTKZjBmV7e1tzM3N4caNG3C73QUfUJRKpbBYLDh58iQa\nGxtZ/1CtVkOr1TLWO2CnYjE8PIzx8XEkk8mC6i4QCGAymdDV1YU333wTarWa0SXT7/PPnHgHVldX\nCzqnQOXZI0eO4MiRI2hqamKVISo704oZQVYvLi5iYmKioMOV9D47Ojpw8uRJNDc3M2dPfX/6Pd2X\naDTKiLfi8XjBys90V0hvk8m0q4pITp/0BoBAIICVlRVsbm4WDLFSKBRCpVKhtraWsZVSJZSE7gnw\nNQGO3+/HxsYGfD5fwVqKBoMB9fX1qKyshF6vZ/TTVHHOFxqsTCZHh/QOAAAgAElEQVSTiMViCIfD\nBZmDeu2dP63zVVRUoLa2lh2+Wq2GTCZj05TJZBKhUAjT09OYmJgoGG4/CYfDYTu2dXV1qK+vZ/zU\nKpWKBS25XA7xeByrq6sYHBzE8vJywfvOQqEQer0eVVVVaGlpgdFohEajgUKhYERJANgQzvT0NO7f\nv18Uu7dqtRo1NTVoampCU1MT4xyQSCS7DE08HofL5cLg4CDsdnvBB0NFIhE77+bmZtZ/psE/OvNM\nJoNIJILp6WlcvXoVKysrBQ9ytVotWlpamHEkR0RbOZQFEa754OAg7t+/X9AAnbJQi8WCuro6WCyW\nXUx41FIkIx6JRDA7O4svv/wSk5OTyGQyBdGdnL9KpYLNZmOJRD4TJWWfyWQS8XiccQ/cv38fa2tr\nBdusIEwWnU4HvV6/q6pCmTMBckUiEUQiEQSDQTidTkxMTBTU+SuVSphMJtYGonsCgJGdbW1tMWpz\nYi7NZrNwOBwIBAKvPNB97Z2/RCKB0WhkQBVUfs7PPnO5HKLRKNbW1jA3N4elpaWCOyEOZ4dm2GKx\noLy8nPVu83tzwA7in9/vx9LSEsbGxopiP14sFrNys81mg9lshkql2lXqp0fqdrsxMTGBoaGhgp85\nAOj1ejQ0NMBms8FisaC0tJSdNwWKxPewuLiI+/fvY2JiouAOVCwWsyCR+p4UINIXIVu6XC6MjIzg\n97//fUFxIMjJqFQq1gel4HAvTS/xly8sLOD69eu4ceNGQSsW5Py1Wi0MBgMLtuiO0M9Hk9oOhwMP\nHz7Eb37zG6ysrBSstUXrkzKZDKWlpWweIZ/Vkc47GAzC4/FgdXUVV69exZUrV+DxeAoW6FLQQvMf\n+S1b4Gs+CLKHTqcTKysrWFhYwNTUFBsMLITkz5vlU/bmZ/mEzzI9PY35+Xlks1koFApMT08XhHXw\ntXf+wM4B6/V6RtpDGVxJSQnC4TDm5+dx7949fPXVV0Ux4Q98vfspk8lgsVh2lYrIMC4vL2NychK3\nbt3CvXv3EAgEikJ30ptW5KhFQXonk0mMj49jaGgIDx48wPDw8CtDrfpjQkNCKpUKSqUSMpmM0fXS\nYyUCmdHRUYyOjhZNoEhZkUqlgkQi2XXewE7AQvdkdnaW9W0LqXv+HddqtbvW5fL1Xl1dxaVLlzAx\nMcH2tAtdaaH1vtLSUrYtlF/GJQrw27dv4/Lly9jY2IDT6YTH4ynofAWXy2UDoTQDQgEL6b22tobZ\n2Vncvn0bc3NzjImTyH4Kde5KpRKtra1s3TP/fsfjcbjdbly/fh1PnjzB6uoqwuEwotEowuEwW2Ut\nFMujxWJhq835jj8SiWBubg4jIyN48OABNjc34ff7EQqFsL29DYFAgI2NDYRCoYOy/36Fhic0Gg1M\nJtOu1TgiNLlz5w4uXbqEL774osDa7hZa2TIYDCxzBsCmzKenp3H58mUGtVksQhG6VquFRqNhw1rA\nTntlY2MD9+7dw+XLl3H79u2Ct1jyhRwR9ZrprmxtbSESiWBiYgKfffYZHj16hLm5uQJruyP5K5VK\npZINa1FbKBqNIhAI4ObNm/j000+xsLBQ8GFW0psQzai9kq830SgPDw/j4sWLGBsbKxokP3L+Mpls\nVzZHbJp+vx8rKyv48ssv8atf/QrRaLQogIgIWlun07GZBKp+ElHY2NgYHj58iMuXL2Nubq7gM0Qk\ncrkctbW1bH2PKitEoT06OopPP/0UIyMj8Hq9BccKAb6ublHrmTYQcrkcfD4fHA4H7t27hxs3buDq\n1atIp9MFn3siee2dPxkXhULBAGUAMDKfJ0+e4JNPPsH4+HiBNd0tJSUlUCgU0Gg0bDAxv0WxvLyM\nW7du4eOPP95FflJooey5vLycgVZQZpHNZrG6uoqRkRFcvXoVjx49KvhOPwk9UqVSCbPZzDAIOBwO\ntra2EA6HMTQ0hGvXruH27dtFsQZKQs6f1ijzHWg6ncbMzAyuX7+OmzdvYn5+/qWzgT2tkN5CoZCV\ncan0nEwmEQwG8dlnn+HKlSuw2+1FgUVAQmhsVBkivTOZDIMF/+yzzxiaXDE4ImDHrhCWQn4bK5lM\nYmZmBhcvXsTjx48xMzODzc3NonH81K6goVvga6rtxcVFXLx4EZcvX4bL5UI4HC4aBwp8bRNpxiyX\nyyGbzeLOnTvMDq6srBS8ErdXXnvnr9Fo0N7ezvZW8x/p4OAgLl26xPpBxSQlJSWw2Wyoq6tjlya/\nLHfp0iXcvXsXa2trRXVhgJ0Ivb6+niGacTgcNtw3OjqKzz//HBMTE0V35gDYeln+VH8gEMDc3Byu\nX7+Oe/fuwe12F5yqd6/QHjEB+AA7pdCZmRncvn0bV65cwezsbMHWy/6Q0F5zfhZK/OZ37tzBtWvX\n8PjxY/h8vqLInEm2trZY4Ep6Z7NZxGIx3L9/H1988QUePnyIzc3Noror29vbjIoaANPb4/FgcnIS\nd+7cwezsLCOnKibbkkwm4fV6GUUyba1cu3YNd+/exdTUVMHbQd8mtFobCoWg0WjA5/ORzWYxOzuL\nu3fvwuFwFN27BL4Dzt9iseD06dOwWCy7VvrS6TS++OILfPTRR0WTfeYLj8dDU1MTOjo6IJFI2GVP\npVJYWFjAf//3fxclEQuwg8DV3NwMo9HIenJEcXr37l38+te/Ljj08F6hcyTnT4AywA618/DwMC5f\nvgy73V5UWQWwc58JcEYmk7G7Eg6HcevWLdZeKZbsk4Qw7gGwCWiS8fFx/Nu//RvW1tZeObjJ00gi\nkUAgEGDgMuREI5EIrly5gk8//bTgUM/fJltbW3C5XHC5XMjlcgx/YGVlBXa7HRMTE0V53sAOquPE\nxASampqYXXE6nfjkk0+wtLRUVEHWXvF6vVhZWYHJZGJU5k6nE9PT00X3Lklee+dPWP5CoRDpdJoh\nbN29exd2ux2pVKrojDmVoIVCIQQCAbLZLBtcuXLlCq5cuVJwMJw/JKQ78PVsgs/nw/j4OC5fvoy7\nd+8Wb
[remainder of base64-encoded PNG image data elided]\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Current loss: -15.399969\n",
+ "Current MNIST score: 8.610293\n"
+ ]
+ }
+ ],
+ "source": [
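+ "# Train the GAN: run TF-GAN's sequential train step for 6001 iterations,\n",
+ "# record the returned loss at every step, and every 1,000 steps evaluate the\n",
+ "# MNIST score metric and visualize generated digits. This cell assumes\n",
+ "# gan_train_ops, eval_score, display_img and visualize_training_generator\n",
+ "# were defined in earlier cells.\n",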
+ "global_step = tf.train.get_or_create_global_step()\n",
+ "train_step_fn = tfgan.get_sequential_train_steps()\n",
+ "loss_values, mnist_score_values = [], []\n",
+ "\n",
+ "with tf.train.SingularMonitoredSession() as sess:\n",
+ " start_time = time.time()\n",
+ " for i in xrange(6001):\n",
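+ "    # Run one round of generator/discriminator updates and get the current loss.\n",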
+ " cur_loss, _ = train_step_fn(\n",
+ " sess, gan_train_ops, global_step, train_step_kwargs={})\n",
+ " loss_values.append((i, cur_loss))\n",
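+ "    # Every 1,000 steps, compute the MNIST score metric on generated samples\n",
+ "    # and display a grid of generated digits.\n",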
+ " if i % 1000 == 0:\n",
+ " mnist_score_np, display_img_np = sess.run([eval_score, display_img])\n",
+ " mnist_score_values.append((i, mnist_score_np))\n",
+ " visualize_training_generator(i, start_time, display_img_np)\n",
+ " print('Current loss: %f' % cur_loss)\n",
+ " print('Current MNIST score: %f' % mnist_score_values[-1][1])"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 34,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "[]"
+ ]
+ },
+ "execution_count": 34,
+ "metadata": {},
+ "output_type": "execute_result"
+ },
+ {
+ "data": {
+ "image/png": "[base64-encoded PNG image data elided]\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "image/png": "
cK\nzKoKfX12twUXDzHmoflQY/vuu+n1VXBbyJTKZVseXnzRLOn9rGrPNyvmgT8DrfS+knjYc0+zbKWR\nIJHBT9ni4cMArgfwDoDZAH4FYNDNSC/FA3dNSMtDdFs0FxofL3vsrSwaiGZbHg4+2Cz3398sqyoe\nsmIeKLZgyJDWei5IPBxyiFlqs4JGIo2i7MftagBzAIwCsCWAXQD8oOQyNJzeXj/LQ3RbNJ/Ro81y\nxRXT66toeZg7F3jssfLOf8wxZrn00mZZVfHginno7wdmzEi2a6X3lcQDBf22Utkig58yxcPaAHYA\n8F0ACwG8BOCnAL5SYhlKIbotqgNNgLTrrun1rSwaCGl5mDKlOcP1aFruqoqHLMvDtGnm89JLt9b7\nKsVDtGK2BtOmAa+/3uxSNJ4yxcNGAN4GwC/r0wBWBbB0ieVoONxPahMPHR3R8tAK2CwMrSQebM+I\ntDwceSSw+urllIlDM8RWXTzYYh6WWsp8/sQnWuu5IPFAiatiXdIajB4NjBrV7FI0njLFwwgA88S6\n+QPLpUosR8Phbouurhjz0MpUQTzYGmVpeRgxojlTvVddPPB3kuDigRrpoUNbq4Gm+Ba6/q1Utsjg\np8zR7fMAyHhg+j5X3+U4AGbavrFjzZpx48Zh3LhxDShecXC3hXRNVDXm4dBDzfTUN97Y7JIUS5Z4\naAUR0dOTuAY42miLsmfVBJKyVVUIuywP/f3pHn4rPA+EFA9Vvf6R+pgwYQImTJiQWveuHD7WAMoU\nD08CWB5mxAVlYt8QwHRYxcM5ADYDAEyc2Oji1c+HPgS8914iHtraagVCVWMexLM5aGjW9Z8715jD\nfcbm2wRBs0dbnHQS8ItfJD73qlsetIDJF18EHnnEfB42rLXeV7reJN5aeQbYDwoUXFsmWod6ypQp\nGDNmTEPPW6bb4nkA98EogqUAjAZwMoA/lliGhkIVDrktNOsCD86qkuVhsPLSS/r6RvYw5841wXe/\n/73f9rZGudmWhyFDzFwPlJ2zquLBFTB5wgnAVVeZz60qHsjycMQRzStLxCDnPnn66cGbNrzsoZoH\nwFg7XgLwEICbYUZcDCooYJKsCzHmoXX5+c/N8re/Ta9vpHiYM8cs77jDb3tNEPT3m79mT8nd3l59\n8eBKEsVpNbeFtDxEmosmLDfcEFhzzfLLUgZlZ/SfBeCgks9ZGjzIilseAFPptLVVN+bhg0YrNRIy\nDTGgB/m99Va5U2PTM07ioRnxFkXgsjxwWi1g8mtfM8soHlqDqVP19VXLkOxLzElWINJtQTEPQFLp\nVDXm4YPG5Mlm2UgR4Xvv5eyJgB7kR9u9/3595fLlxhvNmHYym9sqz1bHFfPAaTXLA1mwonhoDaTl\n7aGHGn/Od9+tTa1fFlE8FAifiY96ZVQhUWVfdbdF1cqbl09/2iwb0VjQM3DttX7ba+JBa/B23NEs\nNUtFIyCLA4mH/fYr57xF42t5kKnmWwVKsR5pLiTmiHfeafw5l13W/DWDKB4KxOW2oMqeTFjaSIwq\ncNhhzS5B9fFpgLho0cyeWoNHPVBNbDSCddcFPvWp5uSWKBJXzAMlYKJ1rWR5IOJsmq2BFO2D1V1B\nRPFQILbRFkBSQZ13nlnefXc1xQNFnkfyw8WDLUbhr39NPvtaHqgRL8Py0N8PzJ5tBEtHxae2c1ke\n+LWv4vsaKQ/ZKahqDJAvUTwUCHdb8NEWtA5I/NGLFsWYhw8q3Dd6zTX6NrNmJZ9dMQ+a5aEM8XDZ\nZcDttxt/a9Vnc8yaGItoNfFw6KFmWXXxNli4+eb09zffbE45yqLir31rIWMeeMCkZqquYswD0FoV\naJFoAX+NMFPzey4rHII3yL6WBxIPN91UX/l8uPdes5w8GVh5ZfN5r70af95G4BvzULZ4mDXL1CG2\nZ2TJJYEttjDJxpZdFth44/LKFqnl9NPT348+ujnlKIsoHgrEx21BjVFVYx6A8nzqZXP77WbZ6HvC\nLQ+2XiMXGL6WB3JbHHdcfeXzgYsqeu6rmrrcd7RF2WKfEpjdcov+/56eJGh1882BJ58sp1yRCBDF\nQ6H4BEzSrIfrrltd8VBFa4kPZO5vtK9SSxom4QLDZXnQ3BZlMNeSUL6KaEKsra1WQJTtZtTKxZk2\nLYmZue22UooUifyPKB4KxDVUkyod6il89rPVjXmoYpl9oAa7UZkSTzzRpDvmx7ddSy5gNLHWbPFQ\nxjC0stAsD0DyPm+wAfD3v5cv9m3lIu66C3j22fLK0yosXGjuzfXXN7skfvARO4OJKB4KhLstKGBS\nxjz8+MdmSf+vYi++imX2YaONzJI37kXGPJxxBnD22X6WBx70qF1vKhcXD2VWUiRUVlnFLI84IrGq\nVQ1bD5++b7aZmdW3WeIhJCB1sL6bnPfeM8t9921uOXyp+lBmG1E8FIgrYJIqglGjzLKzs3lui85O\n4Mgj8+8/WC0P1Ntv9BwNvILfe299myzxoDUsq61Wf9l8ofNTfMVSSyWza1YNWw+fri0lYWo18bDZ\nZoCYTBHz5jW2TJFwBmt9GcVDgfi4LU480SxHjmyeeOjtBS68MGyfNdZI7z8YabTbguDXb8QIfRsu\nHrTyaA1LmcmCZHrsVrCivf12vjJkWR64eKAJycqA7vFbb+n/7+w0Iy44UTy0HlE8RDLhLgoSD5TX\n4b//Ncu2NmD4cPO5SjEPPqb2KrLppsAhh5jPmuWhEQ0Fr+BtjR2PKfC1PJSJFA9tbcDzzzcviVhP\nD7D88sDnPhe+b5blgVw00orYaP7zH7P8wx/0//PRFldcYZZSPDz8MLDddoPrnW3FLJ8umi2qG0UU\nDwWiDdV84gmzjioAWg+0Rm/NF15OrcwnnVTNNLm9vYlPksRDo0db7LFH8tlWqXPLUCuKB9ngnnmm\nWUozelk8+qhZXn11+L42ywM9z2R5kFbERnP88e7/9/QkZaJcG08/nd7m+98H7r8/WiSayWASbpwo\nHgqEKpvTT68NmCS1LMVDVR6sLMuDTJBSFXp7k0BDzW3R6F6Oj3gMEQ+bbAKstVb95fItU6tkl6xH\nuGozlAK62wIo/5098EB9/YIFSZzJSiuZpRx9Qc/vl7/cmLI1g1tvbXYJwhisaapb5NUfHFClMmlS\nEjDJ4yBoWVXx8KlPJZ8HC319tZaHRsc8yPNn4RvzAADbbgsss0z95cpCNrinndb4c7qoZ6SJ7Vo2\n221B2Bqf+fMTF+hHPmKWZIEgSDzwuVKqTqvmtPjoR/X1VbTI+hDFQ4HwSsWWJKq3N1nX0VF/Q/zM\nM8Brr9V3DB96e5NKtCqCx4feXtOzbGsrL2BSnj/PNrYGr7OznPKTGZwaL9lolU0901KXZXl49dUk\na2QI11+vW8B6epKykXiaPz+9TdXiA3yQU1+3CttsU+75+L2dPbvccwNRPBSKJh6oQqIKqmjLwwYb\nlDNEjxpZ+jxY6Osz96izszmWB9u15FH0IeKhq6uc8
u++u1nuvLNZNrtCr+eZzEoSVZR4WG01YM01\n8+2rTXbGOyLt7UZALFiQ3mYwiocbbmh2CXRszyBNlFg0/D1vRtK2KB4KhD8gJBLk7HyNcFuU0Zhz\ny0PVxENfH/C73+mNKlXAXV3ljbbgPPaYvv4znzHDOJdZJtzyUIaPdfRoc27KW9LsRDj1/Ga6vtK8\nbLM8lPX8r7128lmKAioHFzxLLPHBEA+tirwfnEZYa5tdD0fxUCD8RfUJmKzaUE2qRF1lvv/+csoT\nwg03AF//OvDHP9b+jywPvMdeZoDTBRfo63t7zRC7YcPCYh5C3RYvvgj85S/+2xP0fBPNTsFLk5rl\nQQp9QoqHskdbbLll8tkmIHljNWyYSd3MieKhPCZNsjfojWjoo3jwYKmlml0CP6Tboq3NRL8DwHrr\nJeurOlTTx/Jw113llCcE6o3Zem/t7c1zW9igYXi2uBh61mSDF+q22HZb4OCDw8vHn2MgyTdQL9dd\nly+470Mfyn9Obv7ncJHPv5clHrKGR8ueriYeBjOt1i7MmmX/XxQPTaIq6nnkSLPcYoukct1uu2Qd\nUN3RFj09fgGT55xTTnlCoPKee26SrAsA3njDBLE99FC60W0F8TB/fhIz08iAyXffzVe+RomH/fYD\nDjoofD+9cpGGAAAgAElEQVQtJsAX2YMnSJjx95W2L4Os4dFS9CyxRK14yHt/y2DqVHON773XCHua\nsyILGna6/PKNK1tRNDJOLIqHQcSyy5rlQQfZ01NXUTzMnWsEHD2srofWlkq3FXjpJWCXXZLvzz1n\nlrfd1nqWhzvuACZONOUqI+bh7LPDnsVGiYe8yFEGIWRZHlpBPPhaHqR1rR5R1WgoJ8U99wDbb+8/\nxJgn42t1KA9Ho8VDMzrYUTwUCI9rsA3VrGLMA8UK3HijWbrKXPZwJR/4i8WHtfKKt9EBkyHH4dt2\ndITFPIS6LagiPuEE4M47/fcjtxxRZfFgszzYxENZjZZ0g2r/z3Jb8Oej1eoangOHMoT6QAKp1X6P\nRlnioRlUQjxUxW3BxYMMmLRZHpr9APhA5fVR/Esv3fjyhGJ7fqR4aGSGSX6cjTc2Uzz7bNtotwXn\nzTf9t5UBk82eUbMea5HN8kDPu3RflGl5cDU+PjEPWdaLZiKDyeVnG/QbW+33aETxEKlh7lzg8583\nS4IefJrbQk7J/f77xlTOA7CqoJ5lxeoq86JFjS1LHmSFRH5gaXbX5rYoKjscL8Pcue6eOr++ecRD\niNuC/76TT/bfT7otaGRA3jwG9VKP4PO1PJQ92sIVpEyze/J7sOSSZiIseQyiFdxxHCr7ffcl67Sg\nZgmJh1arO0eOBNZfP71u2jSzjOIh8j+uugq4/PL0RDw8Fax0W/T2AnvvDVx7bfViHmQDleehnTED\nuOmmYsoTimxMvvMds6Tf8bGP2d0WRcHv84IF/uLBZkkoarQF359mgPVBioellzZZJqdO9T9GkdQz\n62uW5aGZMQ+24dFaYqs5c2qFI9/vmmuKL2M90PW8445knY/wJYHR7MZTsuKKwOqr6/+LMQ9NohXd\nFjL5E//8+OPpseNtbeY75UCgfTs6kh5EK0Nj+Gk0SZ7Kc8cdgT33LKxIQcjrS6mVqaK69NJ0I80b\n3xkziikDv2aLFrnFA69AbZYH2kamZabf4ftMcfHwxht++wC14gEAZs70318j5PySesSDzfJA17BZ\n4uHZZ5MhqPIZ0FJqb7FFbb4Nvt/nP198GetBE2w+AZ6tanno67OnSY/iIfI/NPGgpacGai0MzZ5o\nJxSat+Css8wyz4tQxvwbNuSLRdebGuCll7ZbHlZZpfgycPGw6qq12/7qV8nnLPFAzxJBx/W9R3nd\nMjJgEgB23TXfsQia3CkP/J6FPp82ywPRLPGweHGSptwmHni5tRiqVuudc7RrXmXLA59rRNJo8dCM\nNiSKh5y4LA9AunKV4oFS+VZFPFD5aChVHrN+M2eWk+KBXjree9cCJpdcEthqq2LKoFkettoK2G23\n2m1ffjn5bBuqST00WVnRd997RFYYAPjc5/z2AWr97YAJBJU+3xDqGVbYSMuDdF9o8QeNaBwWLADW\nWScpI0dzW2ijt/j3MmZbDSGP5WHBAuDf/zafW63e7OlJ6nYZQBzFQ5NoRbN+lnjglasUD2RarIp4\noIeUZlDMU8m7enaNRl5falh5AywDJtvbgU03Le7Z42Xo7zcCoq1NPz6/Vrahmi63BWByV4RmGwz5\nrZrboohZYvPSiJgHIitg8tRTGzNUdf78JIuij9siy/LQ7BTiEq1DkVW3/OlPyedWszx0dyfXeP/9\nzXKffcyy0eLhjTfKTwhWCfGwcCFw7LHNLkUaevD5FLvaxFhA8lLT/6kHIGfcLJsDDvDbjspHJtQ8\nKXCbaXmQI0Ck22LIkFq3RWenvXHPgzzOFVf4HT+v22LsWOCII7LLxdM6hzyHmnhoZgAwF1hFWR6I\nLLfFKaeEnc/nmXr9ddOQUu4NeU5NPGiWB+4GaLXRFlqd4HJbdHebOWqIVut09fSY0Uanngr88pfm\nPsvg7CLhx9x11yRJYVlUQjwAJrVwK0GBdGeckayziQd6qUlVU2PWbMuDb/R1iHiwDdVrpuVBioeJ\nE83S5bbo7DRlboTlgbCJB588Dza3Be8B//nP2eXiFU6oeJCV/2C3PGS9r76Ns0/5aAg4xRvJ63rV\nVWZJWVIB/frz5Fl0rFbGZXngQzqB1rQ8dHUBP/whsMIKZl0jO4jN/v2VEQ+thjYm3mV54BUGvdDN\nFg++0ENKfjxNPJBokD1hgmeTKxtb7gkuHmR66q6uZJRMEUiR8L3v2a0xVIEecog95sHmtrAFbPmU\nq6cHePtt//1ayW3RTMsD4evO87lG9B7RPB9yn//8xyz5CBfN8rPaarX7tAradXRdQ1m3tFq9qQVM\n0nPVCKvPB0k8bAWgD8Bc9nd3iedvOPQwjxjhDpik3mFVxAOVr6vLvMCaeKBtbD31Zuajt42Y4A1w\ndzfwzDPmeyPcFvIer7iiWWrHJ7Gz1lrumIe2ttpGL9T3zsv1n/+YyYZuucVvv1YSD/VkUrRZHkKH\nahYpHvg7p51TE4/a9c8zyVhZaNfR5bZYbrn091Ya5t7XZyb2kllao+WhGLYAcBeAEexvxxLP3xB4\n3AA9yIsXuwMmqYJvdsyDL9y/apv211c8NMPvOmJE+jtdd94A33QT8OKLZn0Z4oGOf9llwHnnpf/3\nyU+a5Te/6Y55IOsIx9WD1uANHiV4klkKNWzioVlCuB63Bd1vG1mjLYgixQM9d7Y6ghpZXu6+PtN4\nTZ+e3nb11YF11wU239yvfGURanmQbgvbMTivvx6Wdj0vVHf84hfp9VE8FMMWAAKmP2kOM2b4+YoJ\n8m0B5oXv6jIvAO/NyChoLVf+E0+Y9XzK6FaBi4clltBTyLayeJAvGRcP1HPba6/k/93dxcc8yOOQ\neACA3/wm
/T8yz66wgjvmQXMRhYoHLTGTT6ZJW8BkK7gtQkcDyTkiJL7pqRshHkgc+FgeyHL2s58l\n68gC+rGPNT6gbscdwwKjtXfLdQ2vuCL5TB2CrGs5alS6jm4UNvEZxYMfwwCsbflbEkY8jAHwHIDX\nAVwFoKAUPMWx997A4Ydnb/epT5kl79X295sKvb/fVGYyYFIGE3LxQKr6X/+qr/w2dtkFuOiifPty\n8TB0qB5DkCUe6LfmEQ+nn17faA1XgBu99Ntua0z2fL0r5uGhh4Dnn89fBi4eeNAbYK43ZSZ1xTxo\n8Q1FDBn0iXto5YDJ0FwTWZYHX7eF73T0edwWcp9Ro8ySWxNsuSra20291Oh5Z+65J2z7ULcFr1tI\nOLeay/fCC9PfGykefOYBaSRFioetYYSB/HsWwKcBzABwC4yA2AhAP4BJBZehbubM8dtu553Nklc6\n/f3Jw3LjjcCkSeYzuS1otALBzaDSvwqYFylkvgEXd9xhhu7l6UnzbHb1ui3yvETnnx++D8dWwfBA\nOW5y93FbbLONMQXnLUNnp71cvCfsinnQxEOo5UHDx8zbygGTodNz2ywPoTEPRx7pdz6fBk9aHuR1\npWyc++6brKP65umn0+dqbzeiv54kXI0g1G3B30WbqOKQJaYMSPSst156fSPFQ1FtQ16KTG1yN9xC\n4Frx/RsAZgFYH4DFWH8cgCQt2tixwLhx4zBu3Lg6iunGd0ghPfgyv7jWgyHxIKeg5WZQbaKj3Xc3\n47yLDArK8xDznvCwYW7Lg61irMfyMGtWsm+enrUsEx/5obmW+GiLRrotbI0cL9df/qJv4+O2WGml\n8HICfubtVo55yLNvEaMtZKyB63xZ0DlsbouTTqrdh0QfLwdZiGwWw0Ywb15tR0mjCPHget6+/OXs\nMhSFbeh0I8VD8tsnDPyZNhIA3i0hY1QD8qKprAbgeAA/BEAJcSnfmcP4cg6Azf73jcbnNxJf8zjd\nuNNPB770JdML7etziwd5bF4ZaZYHShBTJNLUpTUCEt5DHzrUbXmg3o+knpgHqvSefBL4+MfD95cv\nLq+QpWuJylhGwOSUKfbyZlkQfCwPeafHXmON7G0aHfPg81xyenuBtdcGXngh/FxFuS0aGfMg93Hl\nWuHXjbstyrI8LLus37nqcVv4zOHywAPZZSgK29DpcsTDuIG/pI2cMmUKxowZU/xJGWW5DN4EcAiA\nnwMYCmAkgPMB3AbgJcd+paOlndbgDwONn+ZuCw5VqnRsys3OKyPN8tAIZKXj01OcPj0pa5bbgqwE\nkiICJvOa5OVv5BYfm+WhjIBJG0WJh7yVJ5/vwkajh2r6TJDE6emxT4ecRdb1do224J97eoDHHrOX\nT9vHRpZ40KDfz3v93G3RSMsDP7bvvQu1PPB5YHwsD2Viy/hajnhoDmWJhwUwcQ8bAngNJg7iHQAH\nl3R+b3x98/zGcX+/VglRr5YqHwqscsU8NKqXIIPhsh7A2bOB005LymMTD1mNbD1uC0JONuOL/I38\numuWBxptUWSSKM3y4No2q9d91lm1gZZAfoFFAcCA/2iLRgZMhoqH3t78z0c9oy34e/ruu2Y+FE1A\n3H9/+nxZZLktaDgvhzoxTzyRxD3QfXr9dWOVealBXTUa5huC9k647vvaa5vlpz4FHHec+dzsEQdE\ncy0PteUogzKDFZ8AsBuA5QAsD2A8gJKn8sjGNxOiTTz4xDwQWswDbUOzsxXNH/6Q/p71O6XYyLI8\nbLONfpwiLA/1TB/NsVkeynZb2ASCj+XBRt79Ntgg+exjeQgNmPzyl8Pu34gRYRP99PQkE7eFYnNb\n+ARMaiI/a9RFEZaH0aOB7bZLr+MjvyhtPt0nGs11++3u8/73v8Dll2eXT5Jn0i35TtAwdxt0DW66\nKZnK3lV/bbaZ/X8+PPdcOoOni2bEPGjHfPbZ4s9jo6VGOrQCvpYHLSmNy21h8xHT/lrMQyPYZRez\nzBqzTsj/Z+V5sFGE5SGvFUDeyz32SI7He5XSbdHogEmq7GWENhcPfCIgH4oQHb49Y0082LL+XXxx\neJn4qIEsenvzC+56Aia1xk47FhdOPtdXJoGic268sRFJL7+czH+hneOGG5L92tsTq4xLwN1/P7DR\nRsDnP59dPkme90S+z0OGuHvOtH1HR3bCLsCeWdaX9dbzP0aruC3KnEMoigdBIy0PxMEHJ+tp/7Ji\nHgjKhBYqHrIsD7aXvx7Lw4c/bJZ5X0D+G7beOu0vpZeb3yNyZzRyYqzOTpPOFqjtLfCGef31w3p1\nefM8FCEeik63HuKj7+21z6vis2+R4sEnXur994H99rPn1Pjxj82SXEh0T556yoj3u+8GHn/cfg6y\nfpDbwselwy0Zoc99nvdaEw8uywO3zvp0fvjz02j3Rqu4LaJ4aCK+lofZs5PP9KK5Rlv09ibHpB4n\nf7Ck26JR0ENOw7psAV5EqOXBJh7qsTxQjzJvo0T7XXBB2pog3Ra0jkRFs2IeeGOmJYlylanZloes\n/UN8siHiQZuUKGTfkPTU/Pprv0d7h/k+vb0mD8x119mz2T7xRFI2IH+uE3JbkHjwjaUKzSFQhHig\nOWZs8AkFfSwP/Pl5tMG5jaN4iHhZHq6+GvjTn5LvvgGTPNkSX3K3hWZ5+O1vi2vE6CWnER833eTe\nPtTy8Pjj+otSRMxDPZaHIUOMC8AmHnhPhhrvV14Bbr65GOtD6GgLKldnZ+01c10H/vyFxADw8viK\nB9vQ46LEQ8iz0tub3+oSanmQIywkcgInoFY8uN53fg4Szv/8p718LqTlwTcroe/sqkQZlodjjjFL\nPiGcq17k9VSjYsiIZuV5sFn/yiCKB4GP5UGmkPaNeZCuCc1tod38o44yvZQioJeczpfVW9MsD089\nZTJW2rYjfyunmeJBzjOSZXmgxoTyMOQZ4rb22iblNRE62oJbHvr70/u7xAx/fnwEZ1sb8LvfhVse\ntIDJBx80S1eQ2dlnZx+b8K0Iu7tNT/2RR/yPzbEJDxmHpDVY2vOc1SPs7a2dz0RCx6230aNnnHJ3\nuK4pd228807YecoQDxwfofqhDyWf8wR0htCsmIf29nRStygeGP/3f+Wez8fyIHsLvjEPriGDWQGT\n5B+34ds7podcVoi+x6WX8Oij0+u1GBBOMwMmeWPMXRFyqCatkz3R0PNOnWpm2aPGFMif54GW/Lq5\n7nVoufv7a6f2zuu2+OxnzfKPf7TvF5J7wjf+h/z7Dz3kf2xOT0/+mAftWmnXfZkkUS56e5PrYLPE\n8MZoyy2BffbJ9/zT/f3Od8z3TTbx2y80QV094oGCuLPcFhwfy8PixYloGqwxDzIuqygLtQ8tLR72\n2ksfz9xIfIK+5FAsX8uDdFtkpafWeOUVEzPx6qvp9b4PJ73kdO7Jk93by+OSeLAJKEAXQPVYHujl\nqMdtQWXyiXmoVzzce69eBk6I24LW+ZQn1PIAJBNw8fNnoYkHMomfd
pp9v5B76CuI63Ur1TO3ha/l\nwSY4bG4EaoS6usyskP39tdfu29/W9+XwDJPy3Nq2Icfm1CMeJk0y537lFeMS9sHH8rB4cZIwq9Gz\n+ZJ4kM9Ro4dqRvFgoa2tGDPMokX+PR6X26Knx5jz+Hhqvq20PJDVhAIm11nHfKdeO6+MfvSj9Dob\nt9xigpn+8Y/asvlA21FSoKyYB3lcqtRkClz+ANsEVEg5NfK8GHPmmIqQhrXZ3BYhlgffYamufULc\nFkB+y8NDD2Xf47yWBykgfVJDv/569jb8HD4UIR7ypqdulHg44QSz/MQnajsfIdB98umpyyHDITz5\nZPg+sjM1d67/8+E72oLifmzXrqenmMyblBtFvhM0J1Be8XD//Waqc+0Z1wR8FA8D8PG89fDd75qX\nMMv0D7jdFl//ugmG4r5sIHkw5GgLmvmOAiaXWALYaSczlhrIN7RNa0x4GbKg/caPD9ue4OZ/Qj7Y\nrnHuZcc83HxzbTnoevtaHkJGO/BjcTS3xfbb6/trlgd+3UIsD9tsA+y5p74tD9rj5ctrefAJWKRR\nBD7krXBDh2xmuS1801MTPuKBYhlsZe3qMvUFdaD6+mqTd/m4daRbynVNl1sOOOQQ85nEiy9yRlGf\nOq2eEWa+oy1IPNjqnR12KCYe4vjj7f+rJ/PqdtuZqc5tkxFGy4MFPiSnHih1qk8wjsvycOONZmnr\niUq3hcxeSC+y/L+vSRpINyahFT4dv6PDf0ifTTxw36Qsc6PEQ54XQ/ogs0ZbaJYHeW2zeo5apa5Z\nHmwZ8IqKeciCi4ciAiZ931XfFMl5LA9jxwK77uq3H1FPnoe8loejjjKft9hCP+f77wNLLWU+0zN7\nzTXpbVzigSaQo/vk01G56y7gqqvMhGp5h70S115r73G/845Jr07/841t2Wmn5LNvzEOW5aGMybPy\nigf+XLssD1E8KLS3F5M0iV4cH/Omy/JAJjXp0+ZuC5d4kL01rTeQVUbuA88K3NKgMuQVD3R+roTl\ntTr55Nrj1OO2qCfmQVaCvqMtZHQ8h/9ezczqa3mwNbZZbguXaTdEbNtcTXktD75zS/jO9mmbZE0i\nf0foc+LrtqBrxANh84qHrMZv0aLEOkH1h5wq3VU3jhqVHN/H8kB5X+h89frozznHLLUO24knmgDO\n555LzPo+rLJKYq0ryvLgwwEHuP/v0+HLc/6sTiW9gzzTaBQPAxRleaCXXQtkk/gM1bzttvR3ejBk\nzEOW5UGrQLJuPjWG3d3pSlN7ONdZB/jGN9LrqGHMKx4oUPPll+1l1obMuSwPrumFOXleDGkWtrkt\ntDwPhGz4+bPhiu/gaJaHrbfWy3zPPckkR1rA5M476/vZymOjHsuDJh6KHksvTeE25P0Ibfhsbgtb\nwOQFF5j/XXVVIqL5cxYS82ArK38Gqf6Q+SNcjS7v0PD7u99++vaU94Wfrx74OyYhq+XixWH1O0/m\nVVTMgw/S4iP573/d/583L3toroaPeyzGPFgoSjxQoGJIzMOcOXYrgFzPxQOvhPjLT5YCzfLga5IG\ngAsvTPbJsjy88ELtQ1uv5YGGevEePZXDlZSIfve4cWkf4yuvmB4ruYRc5KkAZM9Hui34PaJzZImH\nrJElvuLhoIOAU06pFTg8LkCzPEyfXnt817lt2MSDTy9JC5gsKpU3BXj6zLEwezZw7rnJ97yWhxC3\nBWA6JOPGAb/6lfnOhZNWgdsaAltlL59NLWDSdZ+4KzU08JxitOqBLBnavaCydHeHiV0uHnxHWxRh\necjir3/N3ibPrKNZcU78GXFt1yg+EOKBfKDLL5+9LVWIY8YkFYNEWhBsoy2y3Bahw/CAZFY8KW4a\nFfMgy0Pzcnzzm8k6Kse4cfbj8OvFXR7UEN51V3hZfJCBRtxtIZNH0TlCLA9amXzdFgCw0kruik0T\nmC7ypKeWjUvemAcKBObJeWxMmlS77sMfNtaGz3wGWH31JODYxZFHAr/8ZfK9rc3kKAjJZJmVJIqe\nXf4MU6pkSlOfJR6k0M+ycMpns6+v9hk47zx9X35cuk9ZzwWVZ8iQ+twW1Ll4/nmz1J5b/kzzZ4jE\n4pw5+rE1ywNZ6DQWLUqGajY6z0MjoGsIRMtDMPXEPOy3H/Dww+YzPXA+5nF+MyZM0LdZuDD9MlJg\nkxxtwU3imttC61X6KtR588qJedAexg020F0tY8b4HZNDFgyfxjFPBSDFQ5bbwsfywMvh26jbhmp2\ndibPhoZtdI2NIiwPed0WW24J7L47sPnm2fufdFLtuhEjEr++r5+YGnHACI5bbjGfQ2bx9E0SxaH3\nWMssGOK2sFX2/Bkka5m8Lz6TSJGFyPe5oA5b3kZI1tfas8SDrnm5aJ6Pp57Sj61ZHlyurcWLkzic\nRloestqobbcFvvCF8OMee2zyWbuOMjYLiOLhf/DkJiH095t0zpRPISTSnz8I776rb/P66+nKhh7o\nvAGTvFwnnOBn/r3ttrSbIKQRC7Ho0MNIAgmordhpmxVWML3OsWPtx5HwGA7fsoRw//3p7yETYxEu\nt4XLLGvbB0iLB3mcXXdNrGXa/100UzwAwMiR+r2UgataXgDuCgnJNkhccklSppCJnWxuCzlEU/sf\nNeAuy8OnP51kUaTzaZ858tnULA8ubDEPWfChoXmwuXTb2kzSPyDttvB1+9GxpOXBRU9PIh7+8hf3\ntryXH0rWyBRtcjsfeABytDwE0tmZmJ1C4I0DkH+YoOtGaJWtT8CkzW3Be+22HP18Jk+pzkMtD3RN\nsl5Cuga8EZZ+Zdqmvd0MP9Tuma18PuPP6X6+8Ya7rBoXXZT+nmeopiSr8idRp7kCVlzRLGUlyJ/N\nIUOSY7gsDzL6PhQuHkLdFjbxYGv0+/qMxYrQys6P6Wt54GJ/+PAknqZIt4VLPGhzUMh6QyZ04zkB\nDj3UXqasmAcXoW4LgoRGaGO3887GpfmZz+jlABJXlc3yQPiIhyyR3NtrfjvdFz6Roca666a/h8Tv\nbLyx+/95h2oefnjy2TXaQq4ri5YWD8OH5xtvzIOF9t4b+MEPzHefyojfDN/x9CFDNW1uC25et2U8\n4+s//en0/3wfzquvTmbM23dfYLfd3NvTNRg9OlnX0WEEzrXXmu9cPLz1FvDMM7XHsT3UPlHTVIYv\nf9ldVg3K6U/4DtX0tTy4hurxY1BjdvrpwP77JxY1WwZJek5c4sFn9BA/pmtdHvGgmWtt4qG3Nx0o\nu8ce+jGpHHksDx0d+QLkfEdbACZu6pOfTIbHacmesirw6dOz3ZNazENIA0QjzMpyW9A7c+CB5jvF\nvbhiHvKIB5n/xEa9E4uFiIcs63gReR5a0fKQcxLbcsg7xS73902cmKwP7cm4HqC5c40lYKWV/MTD\n9debz3yoFO/t8iAzrVKW46FvvTX9f9+Hk2aKBPx6d1wY8P0mTzaNIJ/xsa3NHsCUVb6sqOm8yAao\n6KGatpeaL4Fksqiddkr7
P7PEgytgUpv62YaMt6F1QHFJogC90afzcPHgypgH5LM8tLcn4iHU8qA1\nRtqzv8025pxUNvpfVsCkJGv4nzbaIm+OFJ9cCtxF88IL4YHqdA3lKAjtWaJn3jZU01bWd981uR6A\n7Onm5Rw+PnDLRkgjTNvasvbmFQ/8freieGhpy0PegElueeCEVkZZN2LkSGDppc2Nfe01kzmND6Oj\nB5dPUW2zPPCy2n6z61r09Jh5DHgAWRZ5xQN/IWfN0nvatuMQ9DLQelc56H9UcYSQNVRTWh7IbeGy\nQGXFPMjfBiTxM0svnd7WFtMgLQ+uwDMfXEK4yJiHrq7aa07Xgftwba6Nei0P1Ig3YrQFnYNbAciC\nEGJ58C0Tfza1gEkfbCLPRlubeVb/9a+w8/zzn2aUi0yy57I8TJqULhvFVWm/8733TC6Kyy833/ks\npRp//7tZUmyNayQY4cqa64K2Pf10/f8dHfmEXxQPTUDr+QHFWh4IUpVkQr777uR/WSY5Lh58cj24\nytPdbXpFcqpsF++9Bzz+uHsbGTcCpCva66/XBYZEPvzkOqF9XRUjVcyh+faB2obM5rbIsjwsXqyX\n1eW24Pdr333NNZTDGLNyfbjcFnmCI7V1jY55oOvBK/ws8ZDH8tDRUTsKwocQtwW973R8CswMtTxk\nwS0PZC3L0wC9/75fB+zRR82ynrTUM2Yk18qVFVazzgKJdVb7nXJejyzh/OyzZjlrFrDqqrUxDRp5\n0zxn1X9FWB5izEMgeS0PZJqXk/AUbXkAkgfju9813/nY9CzxIHu7WbjKQy8XvTQ+3HRT0ojbsFWg\nxNJLp1+eI45IknJxZNnp+zvvmKWrsg+tNN95J5lzwuW20PI82AImhw4FvvKVZBvC5baQ67RnuZ6Y\nh3otD2WJBzoeH7Gz6qruY/paHviMoVw8hAgrm9tCe/bJhcDLJi03/P77Zk91lSlPzANgrAH/+Ic7\nFwJx4olmmUc80HU6/PBay4NmCaXZhoH0tV1tNbPUnnWtE2DjlVeAU081n7u6/Hv+/L6FNMJ0X2xl\n6utLB7sTWUkLeZmpnpTH5a4iWlcWLS0eQvKec2bM0Nc30vLwyivm+89+lvzPx59HPSxeMdjO66o8\nSDwUleWPyHJbrLBCehtbkKssO+1zySVm6bKAaG4AF8stZ2ZRBdxui56epKw+MQ9U1iy3Bf8/VZ42\n88mk7AoAACAASURBVHE9MQ9FiocQt8UDDwB33KEn83FZHnjvfIUVavfNY3ngcPEQcm1CRltIywOV\nlcPvvy1ngU+ZtKGanZ3++VSoE2WrDzXyDI2n4Mz116+NeTjqKHedZLPESmjECokcV7tArg3API++\nQyWz3msbtK3tmZs0ycSIcSZPNpa4hx6yH5dfBy0DLz0jUTwohAbtELabWORoC34u/qBxvy4da/31\n9ePz/etNkkQ9nJCH59vfzt7GFjBJdHent7FFatvEw957m6Vr1AdP/80ZMwb42tf0fShXv2zIuNui\nu1tPdyvFg2wkfd0WQBIYGGJ5APxiHopyWwBh4oGEmZzuHHCLBx4wmWWGzRvzQGUPaQRtbgvt2dcs\nDy7xwAOUgdqYFxtawCQ9l75DdOXz9pOfpL/39NQKi69+Nfns2xGh4M8XX0yuFT3PDz/sjp/guXRc\n4oHq0C9+Mbs8/N6svLK/EK3X8hAiWCmvBLmLNHiZP/rR2v/PmmU6jTarV6NpafGQ1/Jgi8T1qYzy\nuC34dpo/j4+waJTlgXrYIRXuyJHZKbu1mAf+G3ksAI0n9zHly4Aq24v3+ONJ750f4+WXTcX8+9+7\ny+9yW3R3J5YH11BNOe12ltvib3+r/X+W5cF2/8tyW2jp1m24nhmt0afvXDxo53jnneTeTJ+eHinl\nQ0eHSXEtz5VFiNtCszy45heQMS5aI2ArkxYw2dkJbLihWX/ddbX78cZfWvOWWy4tqr7/feM+4s/G\nJz+ZLoMP1HEZNkyvr/msjxLu1nA967TORxTy+/Xtb7vFA3cz5xUPPgHjtjK6BBqP8dLuxZVXmnow\nWh4U8qamJn+qnLVQNgJZ53zjjSQrmg3ZWPIHSBuTLBsQerB9AiZdDwb1cENMvT69Oy3inI8oqdfy\nQOW29aI//vHasixaBKyxhrvcvHyASREL1LotZLKmIpJE8X3pftgsD7aAyTJiHoi2trRZNOuZkImA\nONozRVO0c3eWvG7k9jvtNLPMCuS1nfuyy8xn314zPYchoy3uvTdJpaztq1me5DGz0GIe/vY309M8\n80zgsceAffap3e/CC81EYV1dtSm65btJQd5ypBddQ1/xQPXq0KH6e+yaQp7jIx7ktab3msPLsMQS\nbvGw9trJ50bFPGhzJMnYEImts6UROlKqKFpePNQzVFM+aD7DGOX5tAl8OC7xIIPx5Ge+v89Nd22T\n1Qhr+IgHLTCOB2Bx8eDKTmcbqkkWE5/77DMyQ9LdDey5Z1JRSreFT8yDK5unVinxCon+v2iRHjxX\nT8xDkaMtPvWpZH1WXg1XRTZkSO0zdcEFZukSDxSzY5sUyYfllze965Ej/Rtpl+XLFjAJpLPAusSD\nvJYHHOBXrsWLkxgRspbdd5/5PmQI8LGP2fe1BZrzZ58jLV2uZ06DWx7kM9nenp4dlWbb1AgVD7vv\nbvLsSHgZhg1ziwebYKDPW2+d7SbKcltoGXe5pVPjttv0c2hEt4XC6NH5xAO9DPJFOf98E+ilMXmy\n6f1o53vkEfs8F7Kx1Coafkx5/LffNpVCUW4LOr5P5ZlXPHC6u9OV7IIFwLRp7sRKdFxebp/hWbYh\nuBxthM3w4bXD3uh/WswDXw+Y58b2W7KEEv0umemSqCfmwVZZbbMNcNhh6XXa80AjbdraEp/ykCH1\niQfXM6VN407ktTJyeE89pIfP9+VkBQvb1knLA3+WfLOkLlqUiAeyGHzrW+kU3zba2/V7KC0PdJ94\np2qppRIxYEudLaEhuGutVXsf+eROgB4oS9B10p4fTTxIlzEhh5W7xAN/rzTxMGxYtojKEg+ai4LW\nvfqqvo+8f1E8BLLZZvn2czUyZ5yh77P11sB66+mV2BZb6JM9AeHiQWuIb7zRz20RIh58euf0Urkq\nWpu5nZBuCzLRyeFhNjMcNWDaCyonN3KNHSekT/mpp9KVI3db2CwPCxak42ZCM0zy7bMqHpvlwfV/\nwibqHngAOOaY9DptyvO11jLLBQvSgYpZOQWyxIMtoZFrOGgR4oG/b6HiwXe0hStuhZCWBz7KxDeQ\nc/HiZFtq9LOEPGG7liSq6HdRMCcXxxtvnLyTEyf6XUdK3nbwwbXly8oEyQm1PPT0mNwQ0i2iWXd9\nLA8vvli7fsgQ/0R6Wcn9+Llo3dln6/u4BKlPWcqgpcUDUJ/bQruQrp72woX289mGXPm4LbLEA2BX\nwNo2v/xl7f/IbUHn0sYFS3ymw/axPGg9NM1NwSeQkePAbUmDtO8hL8jMmWnXE++V8
qGaVPbFi01Z\nXKmUQ0ZbZFU8PNaCyIp54HEgNmgCLsI1pvz999PiAbBbH3p63DMUumZJdflm84oHHpCYRzy43Ba+\nlofOTnugNbcgAPnFA6WB9xEPtm1sQXryXvDvPnkqeEyLPHdIAHeoeKAU/bJDqMWV2Toc/F7xXAwh\n4iErRsplebAxYkTtOXyI4qFOXI1M1oNgq8TksWhiKlf2MJ88DwR/ybJUspZHQVoeXPPcEz7TYWel\nt7WJBzl+ubc3mWYaqI1f0MpAwy15Wfg+Wbz0Uu067raYPz8RCfTy03XklRQFZVEsAz//gQcC//2v\nXk4gv+XBFfPw2GPuYwImoPTFFxOh4apU5s9PzkcNljb3xK231ooSSV7xILENwZVsuWXyOWTEiNzO\nFfOQJf6z3BZcPPiKpKlT0wmvirI8yPJp2/NrZ5ukT0OKKCAsgDtPwKS2zjaiTaOvD9h8c/OZp79v\nhHjQLA/0edas9D4y8N71PK+8cm25y6DlxUM9MQ/ahfznP5NJijRcWcJ4UBn5Htvb7fEK3M+edfys\nVKSA28QqLQ/aMC6Cxkr7iIe8loejjkrPrtnXly63FA/aCyqjlEMtD9LfCqR7pXPnJj1XKjtdC/7y\nrreeWVLmTPkin3VW+ju3+uQRD1ROWspnzJc113THTBALFiTnc1ke9tknOyOp65kaOTL5nNWA0Yip\nrJ6v9rtef702p0HW/iFJoiQut8UVV5h5b0KZMSMZ9UCC11c82OKzbBH+0hLB713IcycTFslj+e6v\nvTN0HO0+UWZKfhxOlnggwazVwUOGpCf/sx3DdV/of/w5kMebMCH9nZfFFttB3HorcNVV+nEbScuL\nhzzQBbQl4KA0wxo0oYqkv98MUaTKkcztUhVqpilXwCThMwkKrdcsD1I8uBg/Pn2cRrgtgHRDI7P4\nUWXosjzIcoVaHviEZAQXD7zHIC0P2nBL7u7QykVcdFHtvjayYh5oG/7/ceOAHXd0H5eg38GH10oW\nLqx1Wzz3nN/xJS7xMGyY+W3bbpvttqDyfP3r7vPdeWe+chKhoy1CxYM0P+ch1PLAe6LyOEC2S4c/\na3ffbQI1fanH8gCY3/jTn9rLpF1/nvZcK0OWeKBnVnMda8JCkmV5oPLsuadZdnfXJliTjT4/n6vz\nsMkm5n5TTF4UD4x6Yh6KpK/PTFUre6ah4kG+/BttZHpkWW4LHoSmqe+QIY/SBJhleQgJmOSVJZV3\nzpx0NkfAmNQ7OhLrgo94yBPzANRef008uCwP3/9++ngu8UKzLBL0G2yz+mXFPAC1lZ9vI0LHAoAf\n/ci9jbQ8yGRBvtfcJR7I3eKqDEksZHUAisLHbZEVMOlyW6y/PrDDDun/k8vTl9CYh9VXtx9Hlk+D\nZ8E84AA9T4GNeiwPhDYCoadHt2z4lKHZ4oHSZVPc3H771Vq/5XxA0vKgvS+rrmom3AP8722RNEI8\nDAfwIIAviPXrArgDwBwAMwB8z+dgoeKhvz/Jte5Cm6gkBJt44DdPEw8y18QOO5iHIMttwU1nPjEP\nrrHJdByXj5GfV76MfIy5FA9kPgOS60JpsLlfj7s06DgS+cJkWR5sPSoe/MhjHnhl7LI8+JYLAG6/\nPf0/urbLLquPzdfuwb33pvPzy2jxkGmWKeOii76+JGvkFlvoZXQlhuJkWR4AvTKka3jIIWZJlrSQ\nRFh5cAnykIBJbT/APE/yfc0KmpTvP7c8+NSHWUMGtfqloyMZqaS5+3wpQjxo8OBmYrnlzFK7Xpwi\nxIOrw8CzgWrIDoU2T4UcFuvjtpBZSHm5y6Bo8bARgHsBbAWAV+VdAG4EMBnAcgD2BHA0gMOLOvGk\nScCf/mR8jDTDpQufStUFDUUMtTxcemn6OJ2d6RTPQLbbwhXzQOckny8P1pLH13q92nnli8GHrcr0\n1PRC0/+AJNLfNeWtK0hK7mN7QWzrf/jD5LPNbaFZHniaaV7+ED8wd3m4hvm5BFw9lodTTsnepr/f\nmD6nTgWOP17f5h//8Iu812ImaMg1pdvVKkM53I3O1WjxEOq20IR7Z2e6tyzFgxQLWeKBni8axhfq\ntggVD1dcYYTDdtv5lY/zBdlFFIS6LVzHkfUeJYiSbujQoZr1Wh76+tzPqW04Jkem8PZxW2j1V1XF\nw04wloVLALwi/rcDgJUA/AhAD4DHAJwLwBF9YPC1POy1l/HlT5vmW1xDf3+SAS+LNddMPlPGwlDx\nIOnsrI1oznJbhMQ8aA8TmTXzigc+kkNaHvhLRFYWOr7m0uDHkYTGPNgsD1QpUhl5QK3L8sCjr13n\n5+eVZchKT+0rHmTF5isefOZ4oPs3erQuNkPQLA9tbWaqdsq0p1ke3nrLLMliduaZtcfRkCnoQ8lK\n8AOk75v27s2fnx5xwwPjeC4Rwlc8UAcnVDzYtrEFTD79tG4hyWKDDZKgTkIKzCItD1I88CHD8nkD\nkpFGrqGaTzyRlNlHPMyaVTvRV5bbgtc/vvi4LWbNqhW4rSoehgFY2/I3HEYQfATA+UhbHQBjkXgO\nRjgQTwPYJFepHYRevGnTgKOP9tvWluUvZLSFpLPTPfESkRUwaUsSpTWolE3QJxJfa/T4i+ISD3LU\nC3elyMbSZ6imy/Iwd246yYu2H5B2W2RZHmz3TV4vnkr2179O/6/Zlgcf+PVphHiQlavWk6IKXKby\nzQru056Frbf2Dyh1WfM0tHdPBpfS/ByA2/LAR59wZExS6GgLuQ3NA+MKmNREThbS9Qikf9Puu5vf\nsuWW/pk1bWjiYaONks+XXJJ8DgmYBJLpvjXxIIPKt98e2Gqr9P5Z4oEycIbAs+tq4oHqe4plypor\noxF4vjIAgK0BaLHN/QD2BeCaA28EAJFsGPMBLKVsmyI05iH04oUoY5uvMMvyIDMlcjo7a9W6LWuh\nSzxce61e1r6+dJn4g+yTw15r9Ph3bWIsCZW7vd3EeNxzT+0577nHXgbANGouy4NrqmOe0pe7LVyW\nB23Muu383OT49NPp//lOjJWVfjaveBg92iz5zK5AWrDy50OKB1vj/bGP6SMhfMSDVhmS5SzEZA7o\n7+TKK/vNYwMkw2x93SPau9fVZfJfkOuCx4d0d9vFA83z8MIL6flQpDWkXssDNXau3mmW5eG992pn\nCOVss41Z8pEPkyebYctjxtivr6/7WBMPHNekh9ddp9fB8jpomWOl5eGFF/Tj+L6PPpMzAsDhzKFP\n959jyzjbqpaHuwe2l38dcAsHwAgHmah0OADHZK2GUPEQOh4+ZHubeODrNfGgDT0iQsSDK2BSntNm\nedCCv0LdFjbLA02MRcycmRyDzk1TaIfep2WWMZVtWxvw8MNh+/IKKmu0BXdb2MSDLRZDI6/lgc/m\nWo/lYfhw07Btuml6PaUmprIRsqGzJQmaOFFPROYrHuQ1o+tO599779qyaWjXPiTDJPVYfcVDVsDk\nWmulM6n6BExKf7e0htQb88CTTQH6NdNE
Dkfm97j//vR3rVyU78TVM3cJEk6WeNBchxS8beu8uTJr\n0jUiMU3vn3aNsiwPnD/8wW87jia26f0699xkXSuLh3p4Ama0Bb/EGw6st3LcccfhwAPHAjB/Y8eO\nxQSZTUMQcvGmT/ffnhpvGlJD88D7WB5cdHbW9pLyBEwS0nzFc9nLMvm6LeTv4I2qy21B8yvwnhQd\nKzSQqqsLePJJ8/mKK8L25WSNttCGahLc3cFxXT8uHnyn5F5xxXTmxI6OdI8l1G2hmW15WXheAFk5\nuiaE09DEg6z4Fy6szahH497p/ORayyseQitRX7eFSwDSZ35ul+WBsMXJFGV5+MQn0utt4sGnUwKY\neVKkH9/VyaPG9cEHa+cI8g1+zhIPct8hQ5Lkbq5yAUm6fx/Lg1aGEPEQ2mn60Y+MCLvoIj1z7aqr\nAhMmTMDYsWPR2zsWF1xg2snjjjsu7EQ5KEs83AXgTQC/ADAUwMcBfAPAH107nXPOOfjb3ybCGDYm\nYuLEiRhnGzA/QEil8bOfhW3f12fMSV/5ShI0I8UDN9vljYzO47YgtImxbOLB1/KQleeBB+1Q4Js8\nBv1fWkZ8oQmbtH0peNUH6pWSGHQlibKl7ZXn97E82K6jllVPRm+/+CJwzjmJyCxCPFDw6pgxAK9n\n5PNos5qFiIfnnksHy952mzFpc8gi5RPdztGuPbcu+VKP5aGtLWlUpFVFszzI54B/X7QoSQEuj5lH\nPJx0UhLTVY/bgl9PLe07P+fkycZ9RyMxqHHdemvgc59L7yfL8qc/maW8/6Hiwec60TkozoYspbxc\nZHmg51l7h7OGanJcU5Jr/OQn5r2fOTPtMuEZN8eNG4eJEydi6NCJOOII006ec845YSfKQVnioRfA\nbjABkq8DuAHAOQAuc+2UB+3FOOssYLfdate3tflXUm1tyUtw0UXJC07iYehQE0yz/fbJPr6WB+Kr\nXzXLPAGTvJxtbcAPfpDeVytT3pgHl+VBmmDfey8JSOLHkueUWeIkHR32tOMyCQ9HmzSHW2PkiBie\nX0ALvALMfbBNhCShl9yVm0HGNNgqI7q2ecSDjO2he3/uubXBjBxpIZD7S6R4mDPHLH19vbT/7rub\nJVn4bBRlefAVD9p1f+utxHojg0HJ8vDLXyZDhkko8fLyY8ky1eO22H//dKpzIJ/bgu7nN7+ZHfy4\n5ZbGcrTJJiYWiffMpajTsqkCekC1Szy8/HLyWQr1n/9cn92TkjbRrJw8v4V0+1JMliYesoZq+pI1\nFJq7X+j68LZgsLgtRqNWGLwIYHcAywJYDcBZcieNIgIm29t131p/v332QNuxNf8/PWgHHug3jwWH\nvxCuhCS+MQ+2h5uXV362iZXzzjONqfwdPCJeigc+nBVIjwO/7LJaNwE/jgseMOmTb4CQybKoYdGG\n6HV0JD7E2bOTa0mJk+j+SBOv64Wl87gsONIyYDODcitGiHhYYgl7XI08j/xui3mwPYNSPNB5eYQ6\nfZ4xI1lHqXvpfdhlFyMKed4QjaIsD769Wn59uAn+mmuAK6+srcCpR3/88cCpp5p1rmBOPsyzCLcF\nFwT1jLagXu955+n/197fri6znj/PO+xg6g9KiiTvr008ZFkeeEdbXqchQ3RhdNNNZqllMaX3gwT7\nUUfZn6mi3BYXX+zel9cfmhtlsIiHwihKPGjDZfr73Y2WHO7lEg/aA6Q9UNLnx28+mciy3Baul8g1\n2kH+3xXzMGmS6WVceGHtMfmLKMUDjxo/+uh0cCOf+llOGZ41s2dnZzJ084EH7NtKtBn3uBDjv41/\n5hYU3mgDpvLnAoqyMwK14onHnrgsDzJYy+XOChUPI0bUWoRsGRzlceWkQ/yYGlI8aNYyeqcPPDBZ\nt8IKyeylxJAhdvEye7axivT1mUj/009PH79Rlge+Hb/XK61kUpBrbgvZcNF7Tjk4eKNEMzwC+QMm\ns8SDvDYbbZTttthjD7cg06yXXV3G4jRvXnLdVlrJ1AM//7n5Loer5hUPHPmu2ZIsuTpj9D9+/Y4+\nOt0ezZ5txPErr/iP7nGRNYOptPgC6bLz61oGLS8eQrGJB81t0durN1pLLWXGMN91V7LOllveJR60\nF11mZOMvhJbpjAhxW2j7amVyWR7oZVi4UD/m44+bHqQrYHLJJdPHPuWU5Px//nNtGV0Vvk1E2q4F\nVUo28WCzPPDt6JxSPCxcmB7SOH68fgzAz/IgxYPNbZHX8kA9QA75XldYIb1eHteW4MbXbUHXjG9P\n07VzH67WeA0bZq9QP/xh4zrs6zMBgTyr7JVXZg/9leRxW2j3QPb+tB49faf8C7beKLc89Pbmc1v4\niAc50gWoTbMOJAHLvtDQ1ddeq72+9F7KshQhHuR1svXIaZ0tCBJIdzovvjj9Dn/4w6YzeM01abeJ\nBuV/cc2wmnVvtYBJWfa33ipGyPjQ8uLBx/LwyivJZ+1FbG83E7zIbHRPPaVXTp/8ZG2kbk+P8d/K\nG0P+6v5+u3jgyUzkAyIjtelckunT/cSDRh7xQGWw5W746EdNDgGZnlqqY359R41KjiWtOrS9DduL\nZbsWVCHK//NenDyubYib9H9Koch/M7dC8B5PluWB36Oi3RZawCRda5nXQZ43z6gYfnye48O2LW0v\ne+jDhmW7qLTruu66/uUl8rgttOydmtvCNtqCjmUTzY2yPNhybPD7oWXupLkvfOEZIPlcLYD9t9vq\nQCnYCWmtomPK9zpUPND/eD0lY52AdII4FwccYJYUEJoHH/EA5JtMMg8tLx58oOAqQG8I6WLyih0w\nUcHahD+aeY4ao9/8Jr2+oyPpSdnEw4MPJue2BeEBSUKfk06qPf955+lui+uvr91Wkidgks7V1uZu\nuF1Tci9enMz6BgA77ZQ2X0qGDdOTsGjHJmyVPk/2xHFZHvg5XJYH2QvizwsJ2fXWS1sUbEM16bw+\nbouskRs2NPGgBVxRWTih6YV9LA/U6PJja5aHoUOzTblag7rvvibfQgh5LA9Dh5pAyEmT0v/PGqpJ\nv1OKU9nA8ZFAIeKBwwMFpZuSgjzpmbUJ4rzwY8ieOZXFFjAp6/G339bjX7T6T74fWW4Ll6uXX5Oe\nnmTocmhMjeu+2TKNSvg5NbdF2bS8ePB5iHlWP22eCnogQ3xmvvCHi45PIoDKPmJEEiDksjzssYf9\nPF/6UtKo8GNQMh2Cots5NvHginkgs+W8efnFQ3d3OgaCWzFsPUqZKZNwBRtq0PFl4+OKeeD3cuWV\n7eJBWgb488KHe3V3J8l1XBV/qNvivffCnlFttIWt8pHXOdTyQPdYigf+e2iKZz6zbV7Lg82VGFpu\n36HD/L4PHWoCIfl7awuY5FDCLpoojLa3iYdQywP/LTxgmN4Vuu6US4PgKZFDG0cNV90danlYvFi3\nPGjrtJgHTQST24iyY3JsFjPbiK8sbPftyiuTEXohbgtb50hu10haXjzQi8cT2eQ9xjyZINsCv/iy\
ncZYvhNb43H67sR5oqtAlHlwPDw15srkRCK2nZguYdLktyPS7+ur2SmDxYmO9cVke5EtG29h6lCee\nqK8PtTwQNPspP44r5oEm06GZIIFa8fDSS8nwLhv0DFHueZflgZtVZf4JDpXjiSdqzcAuXBnqNMvM\nGmskAZF5ZkXkMRZaJbzEErX72GIe8ogH10RINnwzHfL7og0v9rE8UKNNwtqWP4S7Lej/PuKB3zP+\nzFHZKWmUfDf5cNp65ziRyCHTWZYHTTxoIyZsydz4dbrtNnMeaf2gd/Mzn0mExHXXJcewHV8rdxa2\n44wbl3R4Q+bAOPlks3QFezaalhcPgBkmY8uBPn169v50gbUgIA3+YMiJemQSJC3gbs01zeQ4/MWl\nY9rcFltt5VbqFDApLQ8SV5ImwF888El5bOejBuyOO2qPDZgX3pYe2yYebC+Z7drY8hDYjscnGeLl\noc/8WtgsD1dfbWJfKNjQpyLxtTzQsbTrkHeGwvZ2YwHhKYZdZs/DDkt6rHnOycWDVgnb8gxobgsf\n8aBN3OYrekgk20aPSPjv0OZT4eKBghzl7zrtNNOgUa4Yvr12Lnpuurv9xIPtt0ihKO8Dj+fq7DRJ\n9OqBLLBA7bUKtTzcdZd57yT83lM+GfmuURK5qVPT+1LQcHt7IpzIZWwbykwUZXkAknqE/+b11we+\n9a30dryeoTrXFa/RaCohHly56rOSyADhfiF6CDXkQ+0KuOPIpESEr+XBVzxQhLN2bnkOV4Am7eMK\n9CPOP7/22EA6+yRB18jWKHzzm/r6UF+vPB+RNdqCroUW8yB/C/WEtGdTBkb5xjy4/LB5xUNbmxne\nymN+bJYHWkflyWN54GJIszzsumvtPlrPsh7LQ0+PyQJLY/ltbLihHvdkw5VQi9bRb6ZrrA3V3GWX\nWvFua0jpPD09fu8BiRJJlnhYffX0d2l11XCV529/s/8v1PIA6I0it5Bcc02ynYxf0s5FtLcngouu\nAX8PNau3rxWbn8MGJaTjv3nRouw05rSvJLotGCET3WgUGVQikw5pU3C7sImHyZPdloff/EaPefCB\n92h8k0SFiAdClt9lebA1CvIYO++sr/dFC2J1xTzwQFGb5YHg20pktLrvUE2XeOCBeSFoftvLLjOf\nbfNt0G+WAcI+aL+H34dVVzVLHuisWR6yplKm49vcFlttlSSfsuFK8KMFXfqIB/rN2igGuS3gb3nw\nFQ8A8N//1g6t5OJBGxp90EHp7xttlMz7YOOGG+z/c6XqD7U82NAsWvJd8xEP8lrzzsUnP1m7Dx9J\nElpOjf7+tCt00SK/GW6zhuY3kkqIB6B1xMPZZ6e/+4qHLLeF9j8J+TzrSZzFYwCyMkwC7h6zJMTy\nYMvsybdfaaXkxc1redB8+jxnhmwMuOWB4EGPHFvv/Sc/qS2H71BNl3h49ll9/yzksR59FLjvPvv2\nvNGeNq32/zbrEN/fZXkAjBn7n/9MvmuWBy1WQ1JvwKQtvgQwgv7FF9Pr+IRlWlwAFw+Uy8R2reX7\nV6R42GCD9BBxIFs8aHNuHH+8+zz1WgS/9jW9jPz++YwoA+yp4OmzrQ1pa7PfC9uzFBJzxMvggmIw\ngCgeCqPeYUOhyU1c55YxEKHiIcRtseGG6e8+bgsN28PkIx6mT/c/X0jMAzfB8xEyfHtesWvPgGtO\nC0L6f7NGW2huC14ejjbWurOzNvESkB0wqVVaBA1t/OxnkxTHviOHgNqZMbPmmXD1+OfPT0ZLK8nA\nawAAIABJREFU2ODX0RZ4NmdO2vSrWR7yioeQgElXEOLyy9dmDB01ylTsp5+um/W5eKAAPdvMpFmW\nBz4xFhAmHjSyxEMeshpmQhMm/f1mokGtjPz5o+HefL4eDR5no4kH2+9dfnljqQGSe8XrB81d6BLf\nRfDWW/nFQ3RbMPK4LY44Ivn8zDPFlcXlN8zjtuBBPFkJSIoWDy4TIQ9otJ3vzjvT3zXLg234Gbc8\n8KFS/Bryil0rg08DKmfxyxptobktCJkdjifS4utsQUw+AZOaqCGfa3c38OMfp8/tw2OPpb9njRRx\nNb5LLJH9/Pm6YThvvlk7eqEe8WC7Pg89lJ4VMmReAmLIEJPRUtuPW5FsuTQIaSKX7wqNAAkNmLTB\nn8ve3nSdKjO++kJ16yOP5C8Xx1UnrbKKe9+77zaTW9liHmz1ILd40bvB64cvfSmz2JmE3Lfvfc8s\nuWVOQh0ULWg3Wh4EoeKBB7nQbJW+EdWu88oKrl7LAx8+ZPMLEnnFw4036utdlgfb8E6O7JXltTxw\nhc0rON4b1Spgn0pfy2PgGm3hsjzIYFRuJgXcw7vqCZiUsRehyGf2kEPc2+fJkyD3z3JbSBYurI0n\n6uionQNFEuq22Gab9HObJ/GSC255oOfalgSInpNvf9ss5XtI9VUet4XrfECt5SFvg6MFaGv4Wo9d\n4sH2vlMHYeZMM7GdvKe/+IVZcosgH3nEoU4Nf5f5sO28yGyk3/2uPmUCkJRXJszjdSm5DuU7A0Tx\nkCKP24LvQ8FreQPOODJSnN9QV0/YFvOgNRKEfFnyBkzafNTUQGrigTfutvPJ9T7igc7JLQ9c+fPf\nvGBBkhNAEw+26z1mjL1MWaMt3ngjXU4XcjgiBYHSfuPHJ+lzsywPrpgHqrz4PdlgA3fZONL8mVW5\nUM+d37vvfz+5NllkBUxqaBYbco+43Cya26Gz078CzWN5cMHFA+VTyBpB9PzzSVm0/xclHlyWBzks\n0Bc+jXURuMQDXSeJTEcuAyY33rh2H5ltmCCLKxe9RTXG/Hqffjpw663u7V0u8r4+PUuv3K6RVEY8\n5E0HeuONyQOpqTSfc7u+1+u2OOwws9x5Z108/O53yXefJFGhDBmiBy/yhsrlq+f4uC1oO35Obbjr\nq6+a/SlgzTdBDAB8+tP2MmWNtiC030w9l/XXN71FaXH4wx/Mkiw9vAdcT8zDpZeaJa9QQ96HrEl7\nJCQeeK//5z+351rR9s+yPJx9djpZlCYeyGzvGq45Z447CDkLV8BkHnhjQ79dmwMD0C2LtmMCxbot\npOVByw/jg5YoS6MIy4PtmZfi4cwz05l2XTEPe+2V/i7FAxf2Lihpkw++75Er5sH13EbLgyBUPGh+\nro03Bm6+2aQDtc0WWE+58rgt1l/fROdfeaVufj3ySGPtWGcdu9vC9uBqDa7ENpae+9J8LQ/aUE2b\nS4QHkWlZ8CZPNksap6/9Flsj8eij7jK6RlvIbTlUoVFD99Ofmu/0jFHSGR7oxnvgeWMehg83ef15\nhRpSQfD7y6+7zWxK18TWO8uCiya6j/LdGDIk/Rt6emq3IfOtbWQOkeXuc+GbtdEX7oJy5dKgbWVZ\nANOb52naabtXXmlOwKRrhlLf8mgTWGm4xINNgPCp3QmePE6KB15nk9D//vfNkt4V/h6OGpVdbopF\nyuK11/RRU7vuWvs7pJudl9v13EbxwAh1W6y4on1s7+67
m4xjIYlhXPhaHmxui7Y2M0Tnwx+2uy3e\neMOY7G64QRcPNpNjXvHQ3m6y4PHvGlkVR3e3X9S7lomT/MQk8kLcFlouC34uX8uDvB/0W154wYw6\nOPlkI/7kuH7qaXJRUG+SKDk/Rd4Kgs9Ay4eGyXPVA//d5M+Xv72rKz0io7+/9rx0HbMmx7r77vR3\neRztGfz615P/NcrykBUwKc9Lo08OPTQ9oVxWbglfXG6Lgw+276flOSCkaV0ydaqxIsp5NLLKqImH\nvPWQzCmhvTvy9/PGeejQ7Hwnvs/QSivpaaj5O0Mp8i++OL1NtDzkINRtwVMq2/YrKnd7kbOr2XpQ\nNNb+qad08WA75vvvZ5dHi0znjavr+FkvzIIF4XMM0PYkHr78ZbPUfovt/FnWBFvMAx+VoImHnp6k\ngj/vvOT4dK2oh8ynXPYRDz55HuR9+spX9GNl4ROjU6R40M5L56AGjLaV56V39P773cd6+OHa83Nk\n6m8gcQc20m3hmjaZtuVQemg+E6bcLiuA1IXL8nDcce59N9mkdt2MGdkN9+jR2aMktDJq4sEmwrI6\nl1TGuXPNUquT5HTjvb2118tFvekEuLVuyy2BzTevdW/w2B+X5SHGPDBCxUN/fzpDogYNycnKQOdz\nLsLVUNrcFhyb5UFOL+sKuuT4qH3ZKGlBpa4es4t33vGL2ueVgq0Hrg239ZlpMyTmQR6bH3/VVU3Z\ntPTkNvFADeRPf2rmVQlxW8gGjffUAeALX9CPpcHdEzzokSc8kuWpBx/xQPe8p8feyNJ1PPTQdOUt\njyUn2pLHkZN0cRrhtqCyZk2bLK+zzzTpEybkL5scbcGvo0woJfnPf0y+nJ12StbVM1mhDZd4yCtq\nSZTtt59ZZnVo5s6tdaO5ZjwuorGWKeG130rzbgBui1m0PNTBoYcmQ3FsQZLkxwsZgZGVzctHPLgU\nqk08vPdeep2veNDS60qkeJABRK7j+7gtshISAelc/HIeCfqtmqXIdi154hnNTcRjHvhv4OZZud+W\nW6YbOsLVWFAjSu6BrIDJRYsSE73NbbH11kbI2CKtNXjA5F13JZ99xFcetDwRrjkMbOKB33OeUMqW\nO0Qem6Dj2yaNa6TlwTWxnLz+thiJosSNdFv4WBc5G22UTMjUKPLEPGRBzyI9Q/zZHDeudnuawZg/\nFz51aT1wwW0TDzyOI7otPAm1PJx9NvDFL5opb/mwPc4pp5hliHq2zR1P+JjoQ9wW2rbaAyO3o5Sv\ntihvjk82vrxui56e7ImN5HGkiZkqDO1luvJK/Xha4hSCKneth8975dLy0NVlyiazzWkBclI88G01\naLthw4AtttC37ew0I086O9O9Px94gFZW8CGdiyPnPMiCm2BpinWaflqeg8fFyOeJv2/S38vRMlNy\nqCwyxmPOHODf/zaJo4pCigeXEKP/0RwfNhFVlHjgx5Fui3rN7kXRCPFgi10CgN/+Vt9H69n/8If5\nzu8D78QtWKDfc74uui08yRPzsOGGJsjQNpSIXtjvfCfsuJJGui00fMQDJcHxFQ9ZrgXfnpNGVrCb\nZOJEs5TXK6Q37JrLhBpqLTukDDDlv4+GjFJcBE3uxBsLSidOlg+ZrMjl/qFjkE9bXvNnnzWzBj79\ndH2Nic/MnPJa779/2Dm4aFp9dXM8eU9C3BaA+z2T981meZAzIVKmyX//W/8deZCWKNezSPUUuRdt\nbo4i3SqEdFs04hx5oHJoWR3zllG2Hfz5sQV8auJhtdXynd8H/s789a/JNOIAcOGFZsmt6DImgxMt\nD4Ki1VR7u3lZjz221meqBQcB2W4L101rhHigvAPymFSZhoiHmTPt5beVy6e8WZYHOfESZVWjstA5\nKOiKGm0XWb29nh6TpAWwZ7rk5waS9Lt//atZUk4HHuw4YoT5Ts9PqOVBrtN4663wHthyyyWfn3gi\ne/us/B0++2cFinLLw1NPpdcR/H643jNtNk4OiQcpnBrRQ5PpqbNEL+8Y2UQUF6C24bWh5HFbEM8/\nb0RsGfB7lNfy4JpSwHZ/3n+/9n/yvTj//PDOkQ1XVtQvftEsv/GNZJ3r2YrigdHWZhoVnjCpCDo7\n9ah6OZ0yL4eEHu4DDvAzJ9drHuQvOeV30ILrAD/xQCmAV1kFOOus7HPKfbMg8WBLjLL66vp66bag\n38TVv21stesaU8N2++3mOzfj+5j0afgURcTLniZ/oaUo8EkSRYS4t7L49a+TzzSs8aij7NtrM5GG\nQL+nv99UsJq1g5und9xRP6+v20KKf/lc0vl9XGj1It0WPjP60m+zWR64wK43oyP16KXlIeQer722\n/9DLepHz++SBhj4C6XgnFz/7We1z9K9/pb93dvoNh/ehs9MuRLTAeS0vChHdFgy6aDQ2u2hkZWxT\ngC7Lw1ln+aWnrtc8SA9Mf3+S9lYekyofn+GonZ3JS0H+aYmtYvE5/qJFwNix/qmNgfRoGSke6GXd\nbz+/PA8SsjxoeUD4dOX83BpUHldjIV1CIZaHIsXDYYfV+vXPOMO+fb0+d4qjee65ZLZCCXdbEPJ6\n84rZZXmQY/BtPXfKWNlIZAyMj+WBsFke+Igwnj01D9SLlZaHVol54PT1JYnXAD2Ym7jiCvv/+G+7\n4ILk/mQlQZP1iOysFOnqeeABUw8//njt/7SEh5rbgt7xaHnwgIbg1IstwApIP3gu8ZD1INUjHvh8\nGlm9/XfeSSpsPmOnjc5Oe854op6AsoULw6PZuXig6yUnyLr22uzZCjU6OtLHdxEqHqSPW/YmfPI8\nEPW6tyTSXytzCcjy+JbFtv9//uOuxLjbgpDb+1oeZNCzrFSfftoIDDncrhE9tJCASSDttrBZHvh1\nqHdkCM+22NdnOgzPPlvfCJvzzitm3iBJd3fy/my5JfCxj9m3lTNQaqMoAGPBJvEgBccBB6S/y2s9\nalSSMRUoVjw895xZ2mbSlB0M7dnSREYjqYR4sFWW48cXc3z5ENiCylxuC98Kvd5JvlzTtAImexkl\n1ZkyJfvYWdMzA+F576dOTUzlCxeGV0w8Epx+O704PAA2T9IYKouWOCjkOJp4mDYtncSnszM9zNYn\nwyRfZ6OIoYU+18inLBo332wqQ4plcJ2DC3V5DWwxD1nxIfL72LHGX8yPscYa9rLVQ0jAJJEV81Bk\nI8WzLfb3G+uOnBsilGOOcedByEt3dxIkmDXixzcZXVubfb4VmXhNe8/WWSd9rKKhuAYZyC/rCE08\nZCVGLJpKiweeEbAeXG6LLL/gL35hpoTNyn9ej+UhT8/Pdz8f60Qoo0cnPtE8lodf/7pWPNAQRn6d\nfSpmCZXle98zy003tW8bKh7ksC/fqPmQgMmsctng0xFnUW/MA8HHpUvo2nCrlyt/g8vyIMtrE6v8\nGk+b1njLwx//WBsQLOHXljotWkp1ANhhh2LKByRui1YZZaHR05NcE23+Co7LksY55xz70OCsfCFA\nbcK+ojj11PT
30aPT33t60pN9ucRDtDww+EUDTLbB9darHXqVF/kQ2MxjWqTzWmsBl1+e3UCGWig4\nofvQ7ynTjyl9sfRgL17sJx4oHTUAnHBCkoyGfsNRR5kc+bwRtIkHV6NA5aIRCL4CRFZOmniQ+MZQ\nNDrmAQgTWvW6LQif+3Dffck61/PqEg+2xlZSVGS8C/48UKI6F3K0BQVxy2MWhbQ8tGKsA9HbC3z1\nq37byiH5rt9lEw+yMSZXgu24Rd4Xml3Zduz+fjN5Fz1TWsxDdFsozJyZ/n7kkebG8vW2ZB8+0I16\n5BEznp96pUAyQc2zzwJ//nP+c9QjHkIfUmqIZRR6Ixg3zvTeb7klvZ5ezEWL/MSDDKyjzJT02zs6\nzIgQPoLE1sOUeeo53NeedS/4/3/wg/T/NPGw117pUSWURyCLMsRDCI00m8tz+IxwAfSASRKssrxc\niHLkuRptedh55yQlsg1NPGjHBIopr4x5aHXLA40OycqxIM38ruBKX8uDa1+gWOGVNSyUoNgSbbRF\ndFso8Iv0yCPpBBraNqFQFsoxY0yCJX6s/fc3N2PddYubTKseZFCPBkUFa43omWcWW54rr9RjK+hl\nWLTIL+ZBmtXJJSVfUJ7UxdabzkoSBZiUzSHiQSaT4a4haiyWXDKdI0QO47L1CAareHBFs9M98hUP\nmuXh/9u78yC7yjKP49/uLMTQwQQkREACwgwimyRAkEEIixayBCEUoSeDDCADolMGKYZiRtDBqXIp\ntlLAUYYASk2LQoCwRBzZdCxETEwhEgSGUZaIhCURQoYBzfzx3Hfu26fPOfc9557lnu7fp6rrdp+7\nnNPvPctz3uV5Fy2y/Sv6fUenMnbciJ/TTrPvqRc6TPqS+kgUeZFywfe6dbadvVzz8Pbb1odr6607\nb2d0JE1Sh0mwCb0grJkiyj9OQxKuhcp6zKnZIpD/Ja9ZU/zn33hjWPKcbhTVbBEy3MwFOdEd8IIL\nrEmgCllrHgC+973273fdFf8adxE/9NB8fR7cAXflldmCh6ScGWkd5KLbl9Spq4o+D1l02+fh+uvt\nMe0E7tbhMnd2Wo+/P/h3jnHj7JP2i5tusvdcc40d726WxSJ1M9qiU81DEVwHxPXrbb29XPPgMsF2\nO9cKwKxZ7d/dfpan5sGv1brllu63ywkNHtL2LTVbxPALqYwT58AA7LZb8Z/r6yZ4yJoJzvXwj975\n+c0xZfNrHkKDh7j/LanmYe7c7jpMQrY7zzlzhv/tB2j+rJr+NkUvbFmCh7T9pOwTfp47Ml9Ixz63\njrvvbi+LSyR28MH2uHZte5lrPkwqh6TEPa+/Pvx/C5myPqusoy384GHFivgpt8vo89CUDpNFBQ8u\nQ6z7XMi3n/vHZbcJu9LWnfS9+JN7abRFAP9LTjqpVlVgeXUTPITkCoh7/ZQplg/BiesDUVamOLdj\nZ5m1MOR1LnjYsCHfScV/T1KE7lJhp3WO8lOD+0mB/ItFdChxUvAQN79I2kWn6maLrM11IVn34qp8\n46YIv/RSe5wyxYYML1jQnqAorTzjPPPM8GaSMm5E/EDwF7/I1px6443Fb0+UHzz0eofJIoMH//90\n8+dEg8esTd8hGXxDhQYPn/qU7cOjtdliMvAgcHJk+XnAW8Br3s8XQz7QL8i0qsZzz4Wrr86yqdXp\nJnjwhycmvf+d72x3IDvgAMs+ee657Q6fSZYvz749IfwDMfTgD7kougAoepefZ7uiXKdI1/kqpGd1\n9E7Tv3BG78DTah6iuTTSyqzq4CFrCt6Q72XnncM+ywWLZ51lVcZ+05bf5JFHaG6ALNz+cP75FsS7\nuVCShEz6577vImpHm1DzcOKJ9uiaLUIv6m7YtZ+LIc5VV9ljNMdN6Hlq4UJ7LPKGNXqMpX0vJ50E\nDzwwPPsmNL/ZYlfgx8AcIFq0+wAXAVO8n6BJTv2TeFKa440bLeVuNNFHrwjNROksW9b+/aqr4LDD\n0t+/dm17xMOECZYrYerUzuuJGx/t54LPKzrHQ4iQsvGHoeYJHtK4mVZdAqGQ4MHPDhmteYieEJLS\nno8bN3I4ctqJrOy7xbSpsUOEBBt9fWGJmsoMolwtRJHl2dcHzz5r+V+gc7+Kp59uB0HRsf3O+PGW\nttjVwnSjCTUPixbZY9aahxUrrLyjQ6STdBrymxSsdcrpk0f0GEvbt10AHR3h5jdbdMoaXIQig4dD\ngHuAa4FnYp7fG8h1n1vW2NoqZa152Gmn9u9TprQvbN0MSQ1VRBn7B2I3wUO0g6grv/7+fMFD2t3C\nAQfYSfqYY4avK/q7m3rbbUdS8BCt1kyreYgGFqO95iH0czvNkNoNFzwUGYR+4xvZeuG/+mp7krbd\nd08eYrjHHsWM9mpCzYPz4ovZmy0GBsKbE9woOye6Htd8GRU3L07Rukkm+Oqr7dqbMmXZxEnATgk/\nk4GVwHbAlYysdZjeeu504HngaeArQNDh4H9JSReipFkbe0XW4MFPawzlHuTRmoYi7kb8AzF6UCbd\nGcQN25sxI/61fX3DP/fWW8O2q9MBn5Qjwi9//w4xSwe5tOAh+r9Hy8xPITwa+jyEvi7twtFtmm5X\n5kW0qTt+x86s/vSnYlKPp4kGD71Y8+BGvl1ySXF9HuJEZ/SNln3SucI1W+y6a/Hb5IQc49HmGfdd\nVjF7LGQLHvYDnoj5+Q1wGPAKkDRqeyusOWMxsD1wBPBR4JKQFXcKHpYtg499LOST6pM1eIgmRSnz\nIL/33uF/J1WvZ5FW8+DnQuhmvf7nutqCTrLcLSTVePkT6hQVPETvVqMnTL//RNWjLbLWPIReAENn\nfe12PVFz59pjGcFDN6oMHt5+u3eHarpz+cKF5QYPnUZbJJ0r9tzTnnNZaovi5y+KbltcPw43q7JT\n9XeZZXX3t14f/RkHLO3w3l8Bc4HbsU6Tj2P9HxZk2lriD67DD+/NCNqXNXiIDrPsZsfYd9/0/A7T\npllVl8sfH5q4J02nPg8LFsC3vjV8WUhHH3+OkDwdg/IGD3198MEP2u9+U4rbjrvust7beUZJxJ0c\no8uuuAL22y/9czoJnZK622aLUCHVy2kX07QJnaLpfn0XX2yPbl6XXggeJk/O1jkwrybMbeE6yW7c\nGD8ksShJeR7e+972+qv0oQ+1f49+L27CLN93vzv876q/y6oOm4OA/YEvecsmAamTai9atIipU6fy\ny1+2lz300CAw6P1d5GaW57jjbNhk6Ak8umN3mk0zTUgZTZ3a7ghUxBwAnfo8RHd8CAsG/MmD8vSW\n76bm4f77R9aO9Pdbr+0jj7S/Q7JbhiyPu4DPnGnTo+cNlH/yk/TU3UnrLiuzakj69LQLx95751uv\nq9b96lftMe7EnNeOO+YbBbJhAzz8cLtvU5nGj+/tDpNuf3vzzXJrHqL/u1tPnZmE582zoaTRQCCu\nieSN1tVzaGiIoaGh/0/pb/t1F+1ngaqKVdYDX8Cu+v3YqIwLgG+mveny
yy9n6dKlHHroUqxyYylz\n5gxPWxc3NrwXLVxoB2s0zXGom26yx06zy3XD3WEWkXY1rc9DkpALu0uSMnlyvmmVs9RWRIOHiRNH\njk7p7x++3WUFD/5n573DCG2j7bbZIpTfP+Bzn4t/Td5mi7SLYvR9RXYuu+GG/O9dt678mgewsnn5\n5d6teejvt+/dBQ9VlAm01+P29zpyB7kMoNHvJa6p1x0bg4ODLF26lOuus2vkOecsBS4vczOB6oKH\nXwAnYrke1gE/AG5geE1EEP+uuJu78aZxO/Zmm5W3jlNPteaSpD4JWeQZbRE9WK+8cuRrXGAzcaIF\nD8uWZZtWPG/wlnQx6u8fXhvx+OPJn5EleIhb5qr5857w3f/gj+SJE/38boOHs8+OX+5n/ftiQsaX\naPCw1Vb26PotJJk3L2jTgGIvoNHh0aedlu39VVwo33rLEm31aodJsLv/omseon27otx63HFWR/Dg\ntiEpKV3cax2/SaoKZTVbxI1YvqX1k9kz3sBPf2c/4IA8n9ZMrqqxTLvsYolH/KGIeeXJ8xA9WOPe\nF50RL2s174EHhr82NEmUX1OTJ6103P8Z9zmumr/bi13WAKqbC0x/f+f8BGn9cZIuHJ226fjjw6vl\ni7xgR2ue5s/P9v6q7rLBjqVerHkACx4ee8yael3A2K2kHEGO21fc8XHSScWsNwt/wr1ODjlk+N9u\n+0Nn8+1Wj+46w913X/v3pOl2Rzu3Y1Rxp7BqVfefkafmIdqGHTdbZ9J0ulE/+EH8ZGdZyi8keJg4\ncfjBmnbCSdrm0FEmLnjIuw/UcZcZkrTNdWCM45f7QQd1l6nViSb5KfICGg0esnbqrTJ4gM5343XZ\nZBPrAwKdL/qhOs1l4moczjzT9rNoevkqJNU8xHGzJzvuPdHlZemBfsbZVFUl02uqDB6iQlMJ+/LU\nPETXE03ZDO3OTEnTLjsuVXc3QoKH6IiRPH0eXLrcTrpttnBC9qGddoKnnsrXr8RXZAKm8eMtcRDY\nlOp5dRrf341ug4eqawLKnk04r74+S9hWpYkT658jKUvNQ3Rfq3rbG1Hz4Ksqb3evqbNt8mc/y/4e\nP3iIq0EIceqpI5fNn29Z/NKG4nWSJ4V5aPmntc8m9TUIHd1SVLNFiCeftJNRt1WgadsaHafeiX/s\ndzuvhX/iLbPmIevNTtU1D3VfLJPEzS7arV7t3+HLEjxEVX1j3Yjgwc8yWEXazTrddhv88Icjl9dZ\n8xAyR0aUv/PnPSHGDZkaN86qFbs5ybr3dgpq8qRFT7rT/ulPk6dED61mrDJ4KEpap+as+3KRJ0e/\nn0qZwcM++2R7f9XBw3PPVbu+UGWc5/w8Cr3KnfM6BXVxwZWChxhJOcZHo3nz4MMfHrncpUQuc6hm\nWeIm3wpR1sFwwQU2dLbTLIV5godozcOSJfDgg7D//sm1EmecEfbZ3fZ5cM48s7v3h3C1CknpxSH7\n/1FE5tNrrrHH73ynvaysZovLLst+7qo6eChrGG63yqgR2Xbb9u+dmj3r4kbTpfXPeOGF+Bu6ojNe\ndtKI4EHgjjvgwgvh0EPr3pLs8iZdKeJiEWebbWw8fqf2+KSJsdJEP/PYY9uZIZOEXjBc8NBNHo6N\nG8ODlW6cfro9pgUPoVyZrlmT/b3HHAOXe0PeXZI2v9mrrJqHPIFA5VkCG9frLT//+4jOYtsr3P4Z\nndvIlzT6JO8w9LwUPDTEzJnwz/9cTbPFypXlryPE/vvXu/4iah5ChH626zBZRAbQsrk7x7TyOOEE\ne9xzz7DPyjPN8K23wmc+0/47rqyLvNvP01G4rG0JkbVZpSpl1DxMmGAzad58c/GfXZTttrPHOrNc\nhlLwICO4iV+efx6eeCL/56SN3w+Rp69FWcoMHkK5mocmBA+uxuGII5Jf42Zz7ZSU7M47i9mmJEVe\nsPv62rUPIZ970UXD/3700eK2JUQRNUNN0d9vicmOO67uLUl21FGWnroJzdONDR5Ge8fJXrD11vGz\nuYUKnenS52cdrFueZosy25CbFDxsuaX1WTn22OTXvOMdcPvtnYeqfuQjw2sPuhEX3BV9l+fPv9JJ\nNNHd3XcXuy2dnHtutesLVdW00r2mrw+OPjr+fPPYY3DPPdVvU5LGtnjFdSqU3pSlCnL2bBsFsbb8\neV06KnK0RRZJbZpNCh4grMyOOirss4pKyx53QS8yF0WndeV5TZny5HCReuyyi/30isbWPKRNxyu9\nIW//jL32goMPLnZb8ggJHq64Yvjf3V7Yly9P7nPStOChSCHTd4cInUekCCG5CuoOHpoOsk0dAAAL\nzklEQVSg0zwmUo/GBg+zZtW9BRKqVxPRdBLSbOFnK/zsZ7u/M5g1K7kdukkdJotWVF+SooKQECEz\nbNYZPFQx/Xe3TjoJfvSjurdC4jS22UIRe+9z31GVw8HclLZFC6mCv+SSctbtuLb5sRg8RI/3970v\n3+dUWWMZEjTXeR677bb61h3qsst0ru9VCh6kNPvua73JFy2qZn0rV8L06cV9Xp4+D2UayzUP/vH+\n1FM2dXwe0UmxytRpCCqM3K9OPrmcbYnTqwmifE3YxrGqkcHDV74ytpKbNFV/v2VzrErIyTqLkOCh\nyiaZsVzzsHp1+/cdd6xvO7IIOUdFb4J0sTR7720jr6psZpJseuB+KrumnDyk2UL6PLznPdVsC7RH\nHFR5d9orykpVXmZWvr326vyaaCCo4MG4IeK6SexdjQweNmyoewtkrEkKHlyioypMmGA1HU1IIFM0\ndwe6ZElxn7lsGTzySHGfF3XWWZ1fE51aXMGDufpqm823CTNh9qJOideK0Mi4rlfzksvoEpokauXK\nfPMuSDhX/kVOaFTWaINTTrEOnSEXvmjthIIHs+mmMGdO3VvRPGefbSnHd97ZcuaUqZHBwxtv1L0F\nMhaE3vUU3ddCRnIpqh9/HA47rN5t6WTx4vDXRptgy0pYJWPDpZfa44oV5a+rkcFDWbMtivhUZdo7\nXOKsp5/O9/7Fi+G114rbnqJEO+L2wqgekRCNDB56IfugiFRn2jTL2Jh3iOYppxS7PWWZP7/uLRAJ\n04g4178DHBhQW5iMPnfeaampJd7Xv26Pn/50vdtRtj32qHsLRMI0ruZBVckyGqVNXS2wcCEMDqpa\nvwgPPQTr1tW9FdJ0jQseRGRsGu2BQ1UJx/bdt5r1yOjWuMNRNQ8iMtoUmVZdpAqNq3lQ8CAio8ny\n5dXOuSFShMYFD2qrE5HRZNasurdAJLvGNVuIiIhIvRQ8iIiISCYKHkRERCSTRgQPO+xQ9xaIiIiI\n04jgYfFiuO8++33evHq3RUREZKxrxGiLgQGYOxd+/nPYbbe6t0ZERGRsa0Tw4OyzT91bICIiIo1o\nthAREZHeUWTwsD2wBHgRWAPc0lrm/CVwD/BH4Hng/ALXLSIiIhUpMni4FXgJmIkFDS8DS1vPTQDu\nAB4CNgeOBD4FfLzA9YuIiEg
FigoepgGrgQuADcB64GvAbsAWwEHADOBC4G1gZev5TxS0fhEREalI\nlg6Tk4BtE55bDRwRWXY88AzwCrAr8AQWODirUNOFiIhI42QJHvYD7o1ZvhE4lnYTBcAZwDnA0a3n\np2C1Eb43gIEM6xcREZEekCV4uJ/OzRwTgcuABVhNxAOt5euByZHXTgZeS/uwRYsWMXXq1GHLBgcH\nGRwcDNtiERGRUWxoaIihoaFhy9auXVv6evsK/Kx3AbdjnSPnA7/znjsMuBnrLPmn1rLzsADjoJjP\nmgUsX758ObM0X63UqK91hGzcWO92iIiEWrFiBbNnzwaYDawoYx1FdZicANwNrAUOYHjgAHAfNhLj\ny8AmwAeAvweuKWj9IiIiUpGiMkweDeyFjbRY4y3fCLwfeA74CHAl8ALwOnA58O2C1i8iIiIVKSp4\nWELnWoz/Ag4vaH0iIiJSE6WnFhERkUwUPIiIiEgmCh5EREQkEwUPIiIikomCBxEREclEwYOIiIhk\nouBBpIMZM+reAhGR3lJUngeRUemll2DChLq3QkSktyh4EEmxxRZ1b4GISO9Rs4WIiIhkouBBRERE\nMlHwICIiIpkoeBAREZFMFDyIiIhIJgoeREREJBMFDyIiIpKJggcRERHJRMGDiIiIZKLgQURERDJR\n8CAiIiKZKHgQERGRTBQ8iIiISCYKHkRERCQTBQ8iIiKSiYIHERERyUTBg4iIiGSi4EFEREQyUfAg\nIiIimSh4EBERkUwUPIiIiEgmCh5EREQkEwUPIiIikomCBxEREclEwcMoMjQ0VPcmNJLKLTuVWT4q\nt+xUZr2pyOBhe2AJ8CKwBriltcw5D3gLeM37+WKB6x/zdJDlo3LLTmWWj8otO5VZbyoyeLgVeAmY\niQUNLwNLvef3AS4Cpng/FxS4fhEREalAUcHDNGA1FgxsANYDXwN2A7ZovWZvYHlB6xMREZGajM/w\n2knAtgnPrQaOiCw7HngGq4GYDmwHnA5cDbwJfB+4sPW7iIiINESW4GE/4N6Y5RuBYxneRHEGcA5w\ndOvvrYAfA4uBE4Adge8BmwKfTlrhqlWrMmyerF27lhUrVtS9GY2jcstOZZaPyi07lVl2VVw7+wr+\nvInAZcACYD7wQMprjwe+AWwZ89y7gYeBbQrePhERkbHgeayv4e/L+PAsNQ+dvAu4HZgAzAZ+5z13\nELA/8CVv2STgjYTP+j32T7+7wO0TEREZK35PSYFDkSZgnSGXYUFB1N5Y34ZBrJPmrsBvgH+sagNF\nRESktxwH/BkbZeHncfgj7U6WxwIrW8ufxUZmFN1sIiIiIiIiIiIiIiIiIiIiIlKX6Viq61exOTIu\nA8bVukX12RJ4Chut4swBHgJeB54GTo2852TgSaz/ycNYfg5nHHAx8ALWH+VWYEYZG16TPYH/wBKT\nvQBcTzvDqcot3iFYuazDemZ/jXanZ5VZZ+OA+4FrvWUqt3gLgLcZ3i/u+tZzKrNkmwPfxqZ/eAWb\nQ2qr1nMqN899WEFNAnYAfoVlohxr/goLHP4MHNhaNg27MH4SG7VyMHbSd8HF3NbfH8R2jEXYDje1\n9fznsU6r22BziwwRn/irid6BZTr9PDYEeXPgDix52VRUbnG2xIZLf7z19wzgEeALaF8LdRF2QVzc\n+lvlluxi4JqY5SqzdPdhGZk3AwaAm7C0CDqveXbCLpZ+9HMCluZ6LPlb4LfY/+4HD58AHo+89ios\n2AK4AfjXyPOPAae0fn8OONF7bnrr83coYJvrtjNwJ8NH8MzDIvLTsKHBPpWb2bT12IfNRfMEcBa2\nr6nM0h2C3dzcSLvmQcdosgewfStKZZZsNhbgD3jLpmHpDmottyJn1SzCrli1zAveslXYcM/Natmi\neiwD3oul8Pbtip2sfKuA3VOef6z1/DuBrSPPv4iV9+4032+AI7F06c7xwApUbmnWtx6fxWodVgPX\nYWXySOS1KrO2rYB/A/4amwzQ7Xfa1+L1A7OwY/S32P72TewuWGWWbF/sf/07rPlhNXBJ67HWcuu1\n4GEK7ZOZ47JQDjB2/AGLAKOmMDIrpx+VDhBffgPea5KeH23+BTtRfZLk/Url1rYjVn35Z6xadADt\na0n6ge9gJ/FfMTxg1TEa711YIP994H1YxuG/wO6Ota8l2xzYA6uV/0DrZxusdqHWcuu14GE9MDmy\nzP39WsXb0oteJ758/tj6Pa78NsXKbr33+uj7R1PZbgbcjN0RHgj8muT9SuXW9ibWYfI84HBUZmnO\nx06yV7b+7qPdXKZyi/ci1hZ/HfA/WM3DPwAfxcpOZRbPzTq9CPtfXwT+CZvFutZy67Xg4VGsd/x0\nb9n7sR1tNOwI3XoUq4ryvb+13D2/W8Lza7GqLv/5GVhk+yijw45Yj+IBLCX6r1vLVW7x9seqOSd4\nyyYB/4tVb6rM4v0N1jnt1dbPIBasvoLVRKjcRtoD+HJk2SSspuvnqMySPIYFCZt4y9ycVCtRuQ3z\nY+DfsQvAWB5t4fgdJjfHTlCfwU740d61h7T+ntt6Ptq79iKsHXt7rHr1uzS4d23ENGwytmsYmfZc\n5RZvU6zMLsH+75nYsK8rUJllcS3t0RZboHKLsy12A3gudvHbDngQ+Bba19KMxzoxfx87XrcE7sGa\nFlVuEdOxjoJrsLb/rzK258Dwgwew3rf/ie0UT9IeZucsxO4mX8MOzn2858ZjM5s+i0WeS7C2yNHg\ns1hZvc7I+VVA5ZZkF+Bu7CT039gJxdVEqMzC+MEDqNySHAj8FCuXPwCXAxNbz6nMkr0bG0a5GjtO\nr6U9gEDlJiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIj3n\n/wCPS4FmpN8uiAAAAABJRU5ErkJggg==\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "# Plot the eval metrics over time.\n",
+ "plt.title('MNIST Score per step')\n",
+ "plt.plot(*zip(*mnist_score_values))\n",
+ "plt.figure()\n",
+ "plt.title('Training loss per step')\n",
+ "plt.plot(*zip(*loss_values))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Skip training and load from checkpoint\n",
+ "\n",
+ "Training a model to good results in a colab takes about 10 minutes. You can train a model below,\n",
+ "but for now let's load a pretrained model and inspect the output.\n",
+ "\n",
+ "The first two rows show the effect of the categorical noise. The second two rows\n",
+ "show the effect of the first continuous variable, and the last two rows show the effect\n",
+ "of the second continuous variable. Note that the categorical variable controls\n",
+ "the digit value, while the continuous variable controls line thickness and orientation."
+ ]
+ },
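+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "*(Added sketch, not part of the original notebook.)* The next cell shows one way the pretrained generator weights could be restored with a plain `tf.train.Saver`, assuming the generator graph has already been built earlier in the notebook, that its variables live under the `Generator` scope, and reusing the checkpoint path that appears in the restore logs further below. The notebook's own restore code runs in the following original cell, so treat this purely as an illustrative sketch."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Added illustrative sketch (assumptions: generator variables live under the\n",
+ "# 'Generator' scope; checkpoint path taken from the restore logs below).\n",
+ "import tensorflow as tf  # normally imported earlier in the notebook\n",
+ "\n",
+ "gen_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='Generator')\n",
+ "saver = tf.train.Saver(var_list=gen_vars)\n",
+ "with tf.Session() as sess:\n",
+ "  sess.run(tf.global_variables_initializer())\n",
+ "  saver.restore(sess, './mnist/data/infogan_model.ckpt')"
+ ]
+ },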
+ {
+ "cell_type": "code",
+ "execution_count": 20,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "INFO:tensorflow:Initialize variable Generator/Conv/weights:0 from checkpoint ./mnist/data/infogan_model.ckpt with Generator/Conv/weights\n",
+ "INFO:tensorflow:Initialize variable Generator/fully_connected/BatchNorm/moving_mean:0 from checkpoint ./mnist/data/infogan_model.ckpt with Generator/fully_connected/BatchNorm/moving_mean\n",
+ "INFO:tensorflow:Initialize variable Generator/fully_connected/BatchNorm/moving_variance:0 from checkpoint ./mnist/data/infogan_model.ckpt with Generator/fully_connected/BatchNorm/moving_variance\n",
+ "INFO:tensorflow:Initialize variable Generator/fully_connected_1/BatchNorm/beta:0 from checkpoint ./mnist/data/infogan_model.ckpt with Generator/fully_connected_1/BatchNorm/beta\n",
+ "INFO:tensorflow:Initialize variable Generator/fully_connected/weights:0 from checkpoint ./mnist/data/infogan_model.ckpt with Generator/fully_connected/weights\n",
+ "INFO:tensorflow:Initialize variable Generator/fully_connected/BatchNorm/beta:0 from checkpoint ./mnist/data/infogan_model.ckpt with Generator/fully_connected/BatchNorm/beta\n",
+ "INFO:tensorflow:Initialize variable Generator/fully_connected_1/weights:0 from checkpoint ./mnist/data/infogan_model.ckpt with Generator/fully_connected_1/weights\n",
+ "INFO:tensorflow:Initialize variable Generator/Conv2d_transpose_1/BatchNorm/moving_mean:0 from checkpoint ./mnist/data/infogan_model.ckpt with Generator/Conv2d_transpose_1/BatchNorm/moving_mean\n",
+ "INFO:tensorflow:Initialize variable Generator/fully_connected_1/BatchNorm/moving_variance:0 from checkpoint ./mnist/data/infogan_model.ckpt with Generator/fully_connected_1/BatchNorm/moving_variance\n",
+ "INFO:tensorflow:Initialize variable Generator/Conv2d_transpose/BatchNorm/moving_mean:0 from checkpoint ./mnist/data/infogan_model.ckpt with Generator/Conv2d_transpose/BatchNorm/moving_mean\n",
+ "INFO:tensorflow:Initialize variable Generator/Conv2d_transpose_1/weights:0 from checkpoint ./mnist/data/infogan_model.ckpt with Generator/Conv2d_transpose_1/weights\n",
+ "INFO:tensorflow:Initialize variable Generator/Conv/biases:0 from checkpoint ./mnist/data/infogan_model.ckpt with Generator/Conv/biases\n",
+ "INFO:tensorflow:Initialize variable Generator/Conv2d_transpose_1/BatchNorm/beta:0 from checkpoint ./mnist/data/infogan_model.ckpt with Generator/Conv2d_transpose_1/BatchNorm/beta\n",
+ "INFO:tensorflow:Initialize variable Generator/Conv2d_transpose/weights:0 from checkpoint ./mnist/data/infogan_model.ckpt with Generator/Conv2d_transpose/weights\n",
+ "INFO:tensorflow:Initialize variable Generator/Conv2d_transpose/BatchNorm/beta:0 from checkpoint ./mnist/data/infogan_model.ckpt with Generator/Conv2d_transpose/BatchNorm/beta\n",
+ "INFO:tensorflow:Initialize variable Generator/Conv2d_transpose/BatchNorm/moving_variance:0 from checkpoint ./mnist/data/infogan_model.ckpt with Generator/Conv2d_transpose/BatchNorm/moving_variance\n",
+ "INFO:tensorflow:Initialize variable Generator/Conv2d_transpose_1/BatchNorm/moving_variance:0 from checkpoint ./mnist/data/infogan_model.ckpt with Generator/Conv2d_transpose_1/BatchNorm/moving_variance\n",
+ "INFO:tensorflow:Initialize variable Generator/fully_connected_1/BatchNorm/moving_mean:0 from checkpoint ./mnist/data/infogan_model.ckpt with Generator/fully_connected_1/BatchNorm/moving_mean\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAf8AAAFBCAYAAAB5Maa7AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzsndlzW+d5/z8H+w4QJAgSJLiAFMWdIilqtRZLjmx5i51M\nG0+mHaeZyU2vOu3/0NvedKaZthdd0kzi1HFty/EiS7YsURK1UhIp7jtIAARJEAuxL78L/86J5HiR\nEosAnfOZ0XhkitKLw/e8z/M+y/cBGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZ\nGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZGRkZ\nGRkZGRkZGRkZGRkZGRkZGZk/V4RiL+ArKBR7ATIyMjIypY0gCKjVagCy2SyFQoFCQTYf/5+vte+q\n7VqFjIyMjIzMt4lCoUCr1Uq/Fx0AmW9GUewFyMjIyMjIPC5KpRKHw8GPfvQjXnzxRfR6PQqFbNIe\nFflJycjIyMjsOMrKymhpaeGZZ55hYGAAlUoOZD8OsvGXkZGRkdlxdHV1cfr0aTweD3q9nmw2Sy6X\nK/aydgyyqyQj8wBqtRqDwYDb7cZqtTI9Pc3GxgaZTKbYS5ORkQG0Wi0mk4muri4GBgZIpVKsrq7K\n+f7HRL75y8g8gF6vx+l0cvLkSX784x/T1NSEVqtFEEq1MUZG5s8Lg8FAbW0tPT09dHR04PV6mZqa\nIpvNFntpO4rv3M3/wUNa9gJlHhWVSkV5eTm9vb2cOnWKzs5OlEolZrO5JIuIFAoFSqUSjUaDSqVC\npVIRj8dJJpPyvt8mBEGQfuXzefm5P2EEQUChUNDQ0MArr7xCa2sriUSCa9eucfv2bdn4PybfGeOv\nUqmkkK3FYsFsNlMoFKQNUygUyOfz+Hw+Njc3yeVyJfuyCoKAUqnEaDRisVgwGAyk02lWVlZIp9Ml\nu274fO0qlQqbzYbZbEav16NUKsnn86yvrxOJREgkEuTz+WIv9SGUSiU2m422tjZeeOEFHA4HgUAA\no9GIUqks9vIkxP1sMBiwWq1UVVVhsVjQ6XREo1Hi8Tj5fJ5EIkE0GmVzc5N4PF7ye0apVKLT6aTP\npdPpSKfThMNhNjY2yOVyJWNgxfYys9mMyWTCYDCwublJNBpla2ur5M4WhUKBRqNBp9NhNBrRaDSo\n1WoUCgXpdFraI6lUqqTW/UXEd3T37t2cOHGCmpoaQqEQ4+PjzM3NldyZ8mUYjUbUajXxeJxMJlPU\n573jjb/oeRuNRiorK9m1axf79u3j8OHD5HI5FAoFOp2OQqFAPB7n3//93zl37hyxWKwkPUXReBqN\nRnp6ejhw4ACtra34fD7+5V/+hZWVlZIualEqlVitVk6cOMHAwAC7du3CYDCQSCQ4c+YMV65cYXJy\nklQqVeylPoRoVNVqtXRAioemWq1GEISSOBhVKhV6vZ76+no6Ojo4efIkjY2NWCyWhwROpqenGRoa\n4sKFC4yOjpaM4fwyVCoVJpOJhoYGWltbOXr0KPX19ayurvLZZ5/x3nvvEYvFSCQSQHEjeoIgoNFo\nqKqqore3lz179rBr1y6uXLnCzZs3GRkZIRaLlcw7Ku7hqqoqmpqa6OzspKqqCrvdjslkwuv18skn\nnzA+Ps7S0lJJ7xO9Xk9PTw/79u2jubmZVCqF3++XHK6dQHNzMxUVFYyMjLC+vl5UG7Tjjb9Y/LFn\nzx66u7vZvXs3nZ2ddHV1sbW1RTKZlP6cVqvF5/NRXl5OJBJhbGyMO3fuFPkT/B5BEDCZTNTV1TEw\nMCB9pvr6egKBAH6/H5/PRzweZ3h4mOXl5WIvWUJ0wlpaWujr6+PkyZP09PTgdrvRarWkUimy2Swq\nlUq6oRYKBWKxGKlUquhee6FQIJ1Ok0qlJI9cqVSiVqtLooVIdArr6uro7u6mra2Nzs5Oent7cTqd\naLVa0uk0uVwOjUZDeXk5NpsNnU6H2WxmZGSEcDhc7I/xEIIgUFlZSV1dHV1dXTQ3N9PY2MiePXtw\nOp2EQiEsFgt2u51wOEwwGGRpaYnl5WX8fn9R1qxQKKisrKSzs5OTJ0/S3d1NXV0dVquV+vp62tra\niEajpFIpAoGA9CuZTBbFQGm1Wux2O729vfT397Nnzx4cDocUXQkGg9hsNiYmJpiamsLr9eLz+QgG\ngyVnUNVqNTU1NdTW1mKxWJidnWVhYUE6S0rVaQHQaDTo9Xo6OztpbGwkFAqRTCaL+k4W/1T7EzEY\nDLhcLl588UVOnz5NbW0ter0egFgsRiQSYWNjg8rKSpqbm3n11Vc5deoUkUiE//qv/yop469QKCgv\nL2f//v38/d//PQ0NDeh0OgRBwOl0UlVVRSwWIxgM8o//+I8lZfzh8/Xv37+fn/70p+zevZvy8nLp\nawaDgWeffRa9Xs/U1BSRSIRcLsfS0hKhUKjoIbB8Ps/W1hbhcJhwOIzVapVy6aUQ9hdDze3t7bz+\n+uv09PRQV1eHIAhks1lSqRSbm5skEglsNhsOhwOXy4XL5aKmpobV1dWSM/4KhQKPxyMVV9bX16PR\naCRH0mq1UldXx+nTp4lEIkxPT/PRRx9x/vz5ohh/0QGrr69nYGCAU6dO4XK50Gq1VFdX89RTT7Gx\nsUE8HicajXLlyhUuX77M4OAga2trRTGm4vn41FNPcezYMVpaWtDpdFIqtKamhp6eHvx+P9PT05w9\ne5ZLly5JqdFSQqlUUlZWhs1mQ6VSsba2xuzsLFtbW8Ve2jei0+morKykr6+Pzs5OxsbGCAaDRCKR\nop17O9b4iy/ivn37+MlPfkJbWxtOpxONRiN5VG+99RZXr14lGo1it9tpaGhg//79NDc3YzKZMJvN\naLXakugPFW/Me/fupa+vD6fTidfrZW5uDovFgl6vR61Wo9PpcDgcGAwGFApF0W/M8PnPora2lgMH\nDnD8+HEaGxsB8Pv9hMNhNBoNGo2GWCzG4uIifr+fsrIyamtrqaqqwuv1cv/+fdLpdNE+Qz6fJ5VK\nsbGxwezsLBaLBY1GQ0VFBWVlZfh8vqI/a0EQsNlstLa2YjQaWVtbY3p6mpGREYaHh4lGo2QyGfR6\nPR0dHRw+fFhqW9Tr9SWRuhDz+42NjfT09HDs2DH6+/uxWq2Mj48zMjJCJBKRInapVIpkMsnAwABV\nVVUMDAxIzs7c3BzBYHDb1t7U1ERvby/Hjx9nYGAAh8MBIN300+k02WwWvV6PwWBg37592O12XC4X\nV69e5caNG+Tz+W3dR2q1Wjo7tra2mJiYYGtri2g0SjKZxGq10tTUhEqlwu1288wzz6BUKpmammJ9\nfb1kWlzFdIsY0RIEAZ/Px/j4OOFwuCRTuPD7dGJ9fT3Hjh2jo6ODpqYmXn31VYxGI++++y5bW1tF\nOft2rPHXarVUVFSwd+9efvCDH0gV2
XHizEopgE/urqaojF\nYvh8PvzqV7/CzZs36VhMtsBgMFBTU4P33nsPBQUFYDAYCIfDGBgYwP3796k4zss+g7D7z507h9ra\nWhQVFeHg4ACjo6P42c9+hkgkkvV1KxQKXL58GWfOnKHsdq/Xi5s3b6KrqysjxjkhC1ZWVuLChQs0\nQ9nf38fDhw/x8ccfZ/WyRfwkLBYL3nnnHZr1k6mC3/zmN5ibm8voWZG56ebmZtTU1FAToFAohDt3\n7mB8fDxr6yZr53K5qKqqQn19PWX4x2IxdHV1ZdxKIwGtsLAQZ8+eRXFxMeRyOQ1og4ODOalYaLVa\ntLS0wGq1gsFgYGdnBz6fD48ePcL8/HzGpFAej0d5IVqtlhKRV1ZWEAgEsrp2kulaLBbU1dVBoVDQ\nkUqipJhp0CYZf21tLQoKCugYbygUos6Q2QSbzYZEIqHEShaLRUchx8fHsbCwcCRiqM1mQ3V1NRQK\nBTY3NzEzM0MNxbJ1OX9tgz8pm5eWluLDDz+EzWZDNBrF7du3sbGxAQ6Hg8HBwa+Nc5CXkclk0tuV\n0+mEw+GA2WyG3W6nRKRbt25hbm4u64Gfy+VCLpejoqICTqcTfD4fHR0d+OSTT5BIJODz+RAOh792\nEJPDlFgNl5SUwOVyobi4GFarFYWFhfQF7+jooEYW2QKTyaRz4WSmf3NzE5988gm6uroQCATg8Xi+\n0RSGWLiaTCa43W643W7YbDbYbDYIBAKMj4/j0aNH8Hg8OSFAEaa10WiEQCDAyMgI/t//+3+YmJh4\nqWIYKX2Sg8npdMJut8NisSAej9MK0ebmZtarFUKhEAqFAhqNhpKfHj9+jM8//xz9/f0vLRsTcl95\neTkaGhqo1bNcLsfExATu3buH4eHhrEvMEpEshUIBtVoNPp+PcDiM27dv4+7duxmx84moTF1dHWpq\nalBVVUXNdG7cuIFPP/0060EI+EoiXKlUUi0FMq5FXO/+3KWaKDySvV5XV4fa2lqYTCbMzs7i448/\nxu3bt48kapQJyPkgkUig1Wqp+14sFsPy8jIGBgZeKolLCJo2mw0VFRVobW1FdXU1eDwebt26hY8/\n/vhIXIejgHBbCCmSSNx6vV5MTEwgHo//2X/P4/EglUpRXFyMxsZGXL58GQ6HA6FQCP/5n/+Jzz//\nPCetCrJ2iUQCiUTyjLV0MBjMqKJIJhxKSkpw7do1tLS0QKFQoKurC//+7/+O4eHhrD7z1zb4EwEe\nm82GlpYWrK2tYWJiAo8ePYLX6wWDwYDf76ejViTgE9EeMidcVVWFM2fOUJ9l0hvr7OzE7du34ff7\ns75ZCKmwpKQEOp0OwWAQ/f39uHnzJhWGOXwQkwyfeIBLpVJUV1ejoaEBZ8+ehdFohFwux+7uLp48\neYKbN29iYGAg6/1yFotFL0wFBQVgMpnweDxob2/H3bt3KdP88AYlNr9isRhCoRBGoxFutxvnzp1D\nY2Mjfcn39vYwNjaGL774AisrK1mvVhAvb71eD4VCgUQigdnZWeoK9rwwDinvE5ENoVCI0tJSVFVV\n4cqVK7Db7bT8vrKygvb2dvT09OTE+EkqlUKn0z3zrIaGhvDFF19gdXX1hZUtLpcLgUAAsVgMtVoN\nu92Os2fP4tq1azQoEHLib3/726wTK4Gv3lG5XE5Z+RsbG3j48CG6u7uxvLz8wn/H5XKpqBYZU7t6\n9SoaGhpoFre5uYlHjx7Ry362AxGXy4VEIoFUKoVIJAKTyYTP50NXVxcmJiawurr6te8ke4aoixYU\nFNAqS2VlJSwWC9LpNJ48eYLf//73OQmghyV45XI5FccJhULweDyYm5tDJBJ54b8jlzW1Wg2DwUAN\noxobG6HRaLC9vY2enh78/ve/z4l4FUnMBAIBDaDb29vY2tqC3+/H4uLiC880kggSLQaz2YyGhgY0\nNzejtbUVqVQKo6OjuHXrFrq6unIy3kdkpYkiLJnLJ3yFP3fJJZMZOp0OTqeTEmMdDgcikQhGRkbw\n6aefZv/9zOqnHSMEAgGsVitMJhOEQiFu3LiBzz77jIriAHhG65tIPNbU1KCiogImkwlWqxU2mw1q\ntZqO8GxtbSEUCmF+fv5bsTMzgclkwtmzZ6HT6RAOh/Hpp5+is7OTuq8dDqBkYxP/cNJTcjgcKCgo\ngEqloiM8pGowMDBwJLJgpuByuXC5XKiqqgKfz8fU1BTu3LmDhYWFFwZ+UsYrKSmB2+2mz9tutyMv\nLw8KhYLKViYSCSpP+7Lb/beBSqVCfX09jEYjdnZ2sLCwgLm5OYTD4RceZOTS4na7UV5eDrvdjpKS\nElgsFmg0GqqIl0wm6eUtF6QzAFRcRigUUpWvYDCIQCDwwksS6THbbDa43W5UVlbC4XBQoRcyfhSN\nRrG8vIzh4eGsl0GBpxmkVqul/fLd3V3E43HMzs7C5/O98N+QfjVxQ3O5XCgrK6OXHy6Xi83NTQQC\nAXi9XgQCgZwc5jweDzKZ7BkvD8IvIE6Cz4PI9rpcLjQ0NMDtdsNut9OeM3Ei9Hq9WFtby4nRFjnr\nuFwu2Gw2FVKamprC5OTkN15OydrLyspQV1eH1tZWqsQok8mwubmJ+fl5+P3+nDD8ydrJJYTFYtER\nTvK8XmTtTP6dXC5HYWEhLl26hHPnzlGtEDIKOzAwgPX19Zz5EJBq52Hp3Z2dHWxublL1ym+qhioU\nCpSUlOCNN95ATU0NSkpKoFQqEYvF0NHRgZGRkaxXiIDXOPiLRCKUlpbCZDJhb2+P/oIPz/oKhUIo\nlUqoVCrk5+ejoKAAVVVVNPhoNBqoVCowmUxsbGxgdHQUfr8fwWAQU1NTORGvAJ4e5nV1dVCpVAiH\nw+ju7n5mTIuYrSiVSmi1WuTn56O4uBjV1dVU/U6r1UIikYDJZGJpaQmzs7MIhULo6ek5sjBQpuBw\nOCgpKaEKhAsLC3j06BECgQB9qUiVgqzbaDTC6XTC6XTCYDAgPz+fvpS7u7uYmZnB8vIyVldXMTAw\ngHA4nPV1A08Z5y6XC3l5eUgmkxgbG8PExAS2t7fp2gmpTKVSoaCgABaLBU6nEyUlJfQwISzeWCwG\nr9eL5eVlDA0NwePx5OTSAjwl/5jNZggEAkoIfVHwIO2k0tJSOBwOyjy2WCyUrc5ms2ngnJ2dRX9/\nf8beC0cFj8dDfn4+lEolgKdTFfF4HBsbG19bOyEGkvXa7XY4nU6YzWYYDAa6X/x+P2ZmZugzz1UJ\nVygU0os1cTqMRCLweDxfOxcOj/BZrVY4nU5UVlbCbDZTH4DNzU34fD709fWhp6cHsVgsJ4GIZJ/E\nwIcE/8nJSUxOTn5NzpeUyUtKSlBWVgaHw4Hy8nI4nU4qVU2IvO3t7Zidnc1ZACXE68O+Equrq7hz\n5w7m5ua+xpYnvKGysjJ6Oa+trYXT6YRUKqWk2KGhIXR0dLy09P6qayfMfIKpqSl0dXUhGAw+c0kn\nFQ6tVouioiI4HA5UVlbizJkzdC
Jmc3MTy8vL6OrqwuTkZE7W/doGf7FYDKfTCZPJhO3tbaysrGBh\nYeH/s/dez21f6fn4g14JgACIRhJgr6JYJEpUL5bk3rbEySazk0x2cpXs35DfbSYXmVxkssluxjO7\n+do76914XVRsS6Ikir0T7AUECZAgSPTefxea9xiUZW8DTNrLZ4bjsUhRB5/POedtz/u87PukoU01\n/e7ubpw6dQpGoxEqlWqfZnwymYTNZsPPfvYzzM/Ps1aeYsFgMKCzsxMqlQqrq6uYmZnZF6mTDGpr\naytOnTqF7u5uJgcqEonYoaZ62Pj4ON555x04HA5sb28XXMkvf101NTWoqamBQCCA3W7H0NDQvsuQ\n0p7t7e04d+4cLl68CLPZzNoP6blTpqKvrw99fX1YWFiA3W4vyroBsOdJiofj4+OwWq1f4INQx8eN\nGzdw/vx5NmSJoigATEf/3r17GBkZwdTU1O+t3PXHIN/4BwIBLCwsPDONKBaLUV1djR/+8Ifo6elh\nuvf5Mx1yuRxWV1fR19eHBw8eYG5urmgXokgkQkVFBTQaDXK5HJsb8DQ5kS7DtrY2/N3f/R1aWlpg\nNpvZQCfa636/H3Nzc7h79y5u3779pWWDQoB0QyirRqOln1ViEAgErBx05coVtmfoswFgTPvf/va3\nGBgYKDiPiMDlciGXy5moDQCmjLewsPCFTJFMJoPFYsEbb7yB7373u0zIh4xYIpFgTsuvf/1rrK2t\nFWXdwP6BR/Tv22w2/PKXv4TD4fjCcxeJRCgvL8dbb72Fy5cvo7a2FkKhkH3uvb092O12DA4O4uHD\nh88sdxQKNIQsfw7J4OAg3n//fbhcri/sdz6fj/r6erz55ps4d+4cWlpa9t3tXq8XS0tLLDAsBr6x\nxh94Mot8YmIC77//PqxWK0sZnTp1Cq+88gq0Wi00Gg00Gg10Oh2LOIPBIFwuF1wuF7a3t7GysoL5\n+XlMTk4yBaViDaugzZFKpZiiXDgcZhv/5ZdfZizPsrIy6HQ6lJWVQaVSsVpnIBCAy+XC+vo6Zmdn\nMTU1BavVikgk8qWpsT8VfD4fIpEIQqEQyWQSm5ub8Pl84HK5KC0tRU1NDa5cuQKdTgelUgmDwQCj\n0Qij0chS5LFYjOn+LywsYHZ2likn+v3+olyIdNCojkip+kwmA5lMhtraWjZXnYhper0elZWVLFoG\nnlyCu7u7rOVoenoa4+Pj2NnZYaWDQoNSiKQKBzwhbm1tbbGSRGVlJcrLy2EymaBSqWA0GtmcBZFI\nxD4vjUGdm5vDyMgIZmZm4HQ6iyKHC3zOFNdqtSgpKWFRu9vtZhMbiQNC629ubkZ7eztKS0uZAYjF\nYgiFQrBarZicnMTw8DCWl5fhcDiKlpkDPtfE5/F4TGY7k8mgoaEB0WgUfD4fZWVlqK6uZmWthoYG\nNqkwl8shHo/D5XLBarVidHQUo6OjWFlZgcfjKeogH+KoZDIZ1iJWXl6O5uZmuFwultWwWCws4m9p\naWGkTFr76uoqZmdnMTg4iImJCdjt9qI+cwp6aG5AIpGAUCjEyZMnIRaLsbW1BYVCAa1WC4PBgMbG\nRrS0tKCjowMmk4mR7EKhEOx2O8bGxtDb28vu9WJNHwTA+DVcLpdp8FNbaigUgkwmY3uK5J0pQ2Q0\nGtnEVZ/PB7fbjTt37uDevXtYW1srShYX+IYafz6fzyIaj8eD0dFRcLlcJsxz48YN/O3f/i0UCgUT\n/fH7/Ww+u8fjwdLSEqvrT09P/8GT8/4YUFqLpD13dnbg9/thsVhQWVkJrVaL73znO7h06RJL6dP8\naqp57e7uYmtri43MJfGhYtRs8yGRSKBWqyGVSpHL5RAIBFBSUoK2tjYml/vWW2+hvLycGft0Os0M\nTyAQQCAQgNPpxNLSEoaHh1mav5hrJ+NZWlqKkpISFhUQuYbD4eD8+fM4c+YMysrK2IzvTCbDBJdi\nsRirm05PT2NoaIiNWS7mLHMaTa3RaFjdnIxqU1MTampq0NTUhLq6OtTW1jJCKF1A1O3h9/uxvLyM\nsbExDAwMYH5+nk0tLBaIMKfX66FUKtnzFIvF6OrqQiqVglwuR0NDA+rq6tjYVpp9Theo0+nE8vIy\n+vr6MDQ0hJmZmYLrVuQjX8eBxrHSWnQ6HS5duoRQKLRPza2jowN6vZ4x64mXsb6+jpmZGTx48ACj\no6OwWq2M01OstdNoaeIYkPxuS0sLuFwuNjc3UVJSAqPRiNbWVjQ3N6OxsRESiWTffeN0OllL4+Dg\nIDY2Noo2MZEgFAoZzyKdTiOZTKK0tBTnz5+HSqXC5uYmNBoNysvLUV1dzUpyVM6i0szm5iZGRkbw\n8OFDfPrpp0UbapYPkUjESi2UkaUWbDqL9NxrampYCZeUF0k8anV1FdPT07hz5w76+/sRiUSK5ih+\nI42/TCaDRqOBwWCAxWKBRCIBj8djLR5EhMsfMvPgwQO8//77bB600+lEMBhEPB5n7VnF3NjAk7Rs\neXk5uxAplX/jxg2UlJSw+c9kpFKpFEKhEO7fv4+JiQk4nU5sbW3B5XKx4RSFbuf7Muh0OjQ3N0Oj\n0TBZ2VdffRVXr16FVCqFUqmETqdjEVsmk4HP58Pq6irGxsYwNTUFr9fLhiNRpF9oVv/TEAqFqK6u\nRk1NDeRyOYsuvvvd7yKZTLJxpwqFAkKhkKlp0R7p7+9nrVETExOwWq0IBoNFm+6YD5plXlFRAbVa\nDYFAALPZjNdee43tV6lUCqlUCrFYzEbkJpNJptgXDAaxsbGBTz75BGtra/D7/b+3/v+funZiXmu1\nWta7XF5ejgsXLgD4fAgVffF4PGQyGQQCAXi9Xni9Xty7dw/vv/8+Y0wX+7lTi19FRQWam5uhVCoh\nk8lQX1+P8vJyXL58GZlMhjlhlGKnqJP2vcPhwLvvvou+vj44HA74/f6i3zE0l8RgMECn07FuFWpx\nfe655xCPxxmniDpZJBIJOBwOMpkMwuEwpqen8d5772FqaopNSiyUpOyXgbhCZPypU4SUOK9cuYJE\nIsEykBRpS6VStm8SiQTGxsZw7949PH78mIn5FNNBJ9DYdKFQyL4aGhpgMpmYjgsFf5SFFIvF+xwu\nWvutW7ews7PzB41D/2PwjTT+NE1Op9PBZDKxtAlNsBIIBEgmk5idncXa2hrzBB89esQ2SSGVkv4Q\n0AZQKpXg8/lMkjg/VUdGc3NzExsbGxgbG2PDKPx+P2OOFttZyQeXy2UsYvK0abAMEXRyuRzcbjeb\nqWC32+FwOLC0tMT0EsLh8NdiOAkcDgcSiQQymQxisRgikYi1kRGrOJfLsRGtdrudSeW63W5YrVZ4\nvV6mGPmHjob+UyAUClFaWsoieto7KpWKlZBo7fF4HJOTk1hbW0MkEoHL5YLdbkc0GoXb7cbk5GRR\nCU/5oGdOZRR61tT2R/VcMpahUAiLi4twuVxMNtfn88Hr9WJycpJpEHwde4bL5bLpgzT0iS5
2eu70\nDGkqIfXQ+3w+1vq5uLiI+/fvfyXDvlhrr62tRW1tLSQSCeNNKJVK9nN0d6RSKcRiMWxvbyMajSIY\nDLKyEE2gK1ZZ6Fmg4E2n07FuBQos8tedzWZZBiUajbKppWtra3j48CEeP36M5eXlgqv4fRVoT9B9\nmN/zbzAY2M9ks1lks1lGxAwEAuy+/+STT9Df388IfsU+q99I45/NZhlbsrGxcR/JAgCi0Sj8fj9u\n3bqFjz76CFarteBDP/4Y0GYFnmx0hUKxb+2kZrW+vo733nsPvb29GB8fZ987SCQSCeZ0EJs4H9ls\nlvWNf/bZZ/jNb36D+fn5r91JeRboohAIBBCLxV/YL1TjXFtbwwcffID/+q//2qfPf1DrpwiNvvJJ\nqrRuMkAejwfvvfce/u///o9F9xStHcT6KSomx4WyEk+vPZVKYXt7m/VgT05OIhqNFpW/8lUgJ5dG\n+JIRehp0kROX4s6dO1hcXASHw8Hi4iJmZmYO5LnT5MfW1lZWOnx63cCT8xoKhbC7u4vNzU243W6s\nr6/j/fffx9zcXNFT/PkgR1atVqOzsxOVlZUQiURfWDft/XQ6zebaU2loZGQEt2/fxsLCQlGJoF+G\ndDqNdDoNPp+/j2ALYJ/DmEwmkUqlwOVyEQwG4Xa7sbKygtHRUTbn4evCN9L4U+S+tbUFnU4HuVzO\nesWpTW9mZgb9/f1M2/+gDRDw5MWTlzcxMQGDwQC5XM7SbcRfGB0dxcjICDY3Nw/FugHA7/djdXUV\nQ0NDkEgkqKqqgkAgYEzo9fV1DA4OMvIhjQs96PUnk0lsbGxgcnISDx48QH19PfR6PTuQ2WwWY2Nj\nGBsbYwQnMjpfV3biyxAMBtmEQLVaDbPZDLlczr4fCAQwPT2NlZWVfcOnksnk1xYpPwu5XI6xlR8+\nfIhoNIqamhrmvHC5XEa0XV9fx+LiIiYmJrC5uQmPx8PEUQ5i/VR7nZmZwccff4zW1laYTCbGBQCe\nsMi3t7exsbGBubk5zMzMYGNjg0XJxRAd+n2QyWQQDAbx+PFjpmlCHBA+n8+4CF6vFzs7OxgfH8fq\n6ip2dnYQCoUQCAS+Fu7T06Bz6HA4cPPmTZbxIs0C0gKJx+Ns7U6nExsbG0xmmP7/68xU5CMUCmF7\nexsOhwNqtZpNbBUKhUin04y46vP54Pf7md7F1NQU9vb24HK5fu9haIXCN9L4J5NJ7O3tYXx8HLFY\nDEqlkh1au92O6elpjIyMYHt7u+DjSf8UkJGn+etVVVUoLS1FOp1mLPj79+9jamrqSwVcDgrhcBgO\nhwP9/f2MQERkylgshrm5Ody6dQs2mw1ut7vgU+3+WKTTaZa+//TTT7G1tQWLxbKvVfLu3bt48OAB\nXC4XAoHA1xr1fBUikQgcDgeGh4fB5XLR1NQEpVLJ1u7xePDw4UPMz88zw1ksrYE/FMFgEDabDQ8e\nPEAwGMTu7i5T2SRlyMnJSSwuLmJtbQ1Op7MoHRN/KDKZDCKRCObn53Hz5k14PB5UVVUxA8rhcOBw\nOGCz2bC4uAir1Yr5+flD4SzS/TIyMsLSysRAFwqFjPxJszj6+/uxsrKCQCCAVCr1tWdZ8pHL5bC7\nu4v+/n6UlZWxcgVl65LJJBN3cjgc2NjYwNLSEmw2GwKBQNGEh35fkMDa+Pg4QqEQ46DROGIy/B6P\nh/FZ5ufnMTo6ikQicSAlaM7v/pEDwe+8eSUSyT7hklwux9jl+f3EB/FQvwo0I5wERPLXTpmBcDiM\nVCp14JdJPvInCJaUlLBeYPLaI5EIGxVabHLQHwqSDFWpVKz2T8jlcvD5fGwAUjqdPtBLMB8UJSuV\nyn3p8/xafyAQQCQSQSKROPALPB9EiFMoFGy/UNqW2j7D4TCi0Sji8TgSicSh2e8klEPyvlQqovUn\nEgnE43FWnijGGOQ/FtTnr1AooFQqWd847Rm6Zyh7GovFGBHxoD8DGXutVssifzoD5KjT2uPxOGKx\n2KFZP/HQDAYDI1Dmj+OlsgBltahkEQqFiqLeR8v6o795gDgcJ+kIRzjCEY5whG8mvtK+c7/qm0c4\nwhGOcIQjHOHbhyPjf4QjHOEIRzjCnxmOjP8RjnCEIxzhCH9mODL+RzjCEY5whCP8meHI+B/hCEc4\nwhGO8GeGI+N/hCMc4QhHOMKfGY6M/xGOcIQjHOEIf2Y4Mv5HOMIRjnCEI/yZ4cj4H+EIRzjCEf7s\n8PSArz83fCO1/X8XTCYT6urqIBQKEYvFmP4zSYgeFhnRZ4E0oc1mM9PijkQiiMfjbODGQctwfhVI\njpPkLUUiEXQ6HVQqFTY2NuD1eos+17xQoMEiKpWKDWHi8/nwer3Y29vD7u5uMaU5CwI+nw+JRAKN\nRgODwYCKigoAYPPDD2oQyu8LLpfLxqMaDAa0trbC7XZjenoakUjkUM2/eBboPDQ0NKC1tRUikQg+\nnw/T09Pwer1syudhBZfLhVQqRXl5OVpbW8HlchEOh7GysoKdnR1Eo9FDfQY4HA40Gg3Kysqg1+sh\nl8shEAiQy+UQCoUwMzMDj8dzaGSx80EjmhUKBdRqNQwGA9RqNfh8PhwOB6anp5mcOiF/guDvwrfS\n+Dc2NuJv/uZvoFQq4Xa78atf/QrLy8vw+XyHSkP8aZDBNJlMeOGFFxAMBrGwsACn04m9vT2Ew+FD\nbzhpAAqPx4NUKkVpaSnOnj2LpqYmfPjhh0zT/TB/BgLNBKiursb58+dhsVgglUoxMzPDDOdhNz4i\nkQharRbt7e24ePEirl+/Di6Xi/n5efzzP//zoTf+NNa4vLwcFy9exD/90z+hv78f//qv/4qtra1D\n//zpTF+6dAn/+I//CI1Gg9nZWfzLv/wLpqenD73x5/F4UKvVOHfuHH784x+Dx+PB4XDgF7/4BQYG\nBtg7OIznmWYDWCwWnDp1CqdPn4bFYkFJSQnS6TRsNhv+7d/+DX6//1Aafz6fD6VSibq6OrS1teHC\nhQs4fvw4pFIpbt26hZ2dnX0D4PLnT/w+7+NbZfxLS0vR1taG5557Dt3d3fB6vXA6nfB6vfD7/Yfa\n8ItEIiiVSrz22ms4deoUdDodbDYbi/ppiMhhhUqlgsVigdlshslkYhGnQqFAXV0d9Ho90uk0SktL\nMTQ0xAYvHTbweDzw+XxotVpUVlbi9OnTOHbsGBoaGqBSqSAQCFBXVweZTIb5+XkEg0GkUqmDXvY+\ncLlclJSUQKvVoqenB11dXaivr4fFYkF5eTmboCaRSMDj8Q7dxcflclFaWorW1la0traipaUFWq0W\nFRUV0Ov1KC8vR3d3Nx4/fnyopnYSeDwehEIhdDod6uvr0dPTg/Pnz8NkMrFZ77FY7FDufwKHw0FT\nUxPa29tx4sQJdHZ2wmQywePxsGE6lEU9bIafIubKykq0trbi1KlTaG9vh06nYyOO7XY7Njc3EQqF\nDt3+53A4UKlUqKysxNmzZ3H8+HE0NTXBYDBAqV
Qim80iHo9jb29v3yRMeg9/dpG/SCSC0WjE5cuX\ncfbsWVgsFrhcLmb8D9PkrXzQtDyj0Yhjx47h9ddfR0dHB3Z2drC9vf0FZ+WwfQYulwuVSoW6ujr0\n9PSgsbERFRUVyGQy7BCq1WpIJBI0NTUhHA5jb28Pdrv90KTNxWIxpFIppFIp5HI5ZDIZKisr0dTU\nhOvXr6OxsRFqtZpd3BUVFbDZbBAIBIeibkipZZlMBplMhpKSEhiNRlRVVeHGjRvo6enZN22MMi/5\nkcJBg8bOSiQSKJVK1NTUsLPc3d0NsVjM0rVyuRx6vR4SieSgl70PlKVQqVTM8Hd3d+Pll19GRUUF\nSkpKEA6HEYlEEA6HD53xp7uInPaTJ0/i+eefR09PDyoqKsDhcGC32+FwOPaVUQ/6/OaDMo4WiwVd\nXV24fPky2tvbYTabEQqFEAgE4Pf7sbCwgNnZWQQCgUNl/KVSKRSfCydbAAAgAElEQVQKBerr69He\n3o5r166hvr4eOp0OoVAIbrcbgUAAm5ubCAQCX3j2f8i7+FYYfw6HA61Wi5aWFly7dg11dXWIRqN4\n/PgxPvnkE3i93kO1QfPB5/MhlUpx9epV/P3f/z0sFgtSqRTW19cxMjKCu3fvIhqNspGth+1ziMVi\nnDp1ClevXsX169chkUgQj8fZ5vT7/fB6veBwONja2oJYLMa1a9cwMDCAoaEhxOPxAz98JpMJjY2N\naGpqQlVVFcrLy6FSqaBWq2EymVBSUgIu9wk3NpvNIhwOIxQKsQjooMHn8yEWi9Ha2opjx46hra0N\nNTU1KC8vR1lZGVQqFUQiEYAn67fZbJienj406U4ulwuBQMC4Ot3d3ejo6EBTUxP0ej2L1miM8d7e\nHlZWVhAMBg966QwcDgcSiYRlJc6cOYP6+npUVVUxxyuTycDpdMJms7Hx14cJlDGqqanBqVOncOXK\nFZw9exalpaXgcrlIJBJYWFjAvXv3sLy8fGj2Tz7EYjEqKirw5ptv4vz582htbUVJSQkSiQTm5uYw\nMTEBq9UKm82Gzc1NeL3eg14yA4fDQUVFBXp6enDt2jWcPHkSWq0WkUgEy8vLmJqawtzcHOx2O5aX\nl/9k/te3wvgDgNlsRmtrKyorK5FOp7G4uIjZ2VnYbLZDd8jyIZFIYLFY0NLSgra2NiSTSdjtdvT1\n9WFqagoulwvA4Yv4AUAgEECpVKKjowOnT59GXV0dfD4ftra2MD4+jvX19X3Gkd6DSCQ6FBcHn8+H\nUChER0cHrl27hsrKSuj1epSWlkIqlUIikbBZ7ul0GsFgEG63G8PDwxgYGEAikTjwGeJ8Ph8WiwXH\njx9He3s7WltbWZlFrVYz8iUAuN1utrcePXp0KJxiuVwOnU6Huro61NfXo66uDsePH0ddXR20Wi1E\nIhG4XC4ymQx2d3cxMTGB3t5eLCwsIBAIHOjagc9T/FqtFrW1tTh16hQ6Oztx7Ngx6PV6KJVK8Pl8\nBINBbG9v4969e3j48OGhiTgpa6TT6VBRUYG6ujq0tLSgvb0dzc3N0Ov1SKVS2NjYwPT0NB4+fIiZ\nmRm43e4D3//5n0GhUMBgMKCxsRHt7e24evUqGhsbodVqsba2hpmZGbZ2m80Gn8+HUCh0KNZPhFba\nP5cvX0ZnZyf0ej177iMjI1hdXYXdbofb7UYwGPyT1/6tMP4cDge1tbU4duwYpFIpbDYbHj16hNXV\n1UNxQXwV5HI5mpubYTabIRQK4fF4sLCwgFu3bsFmsx2KzfllEIlE0Gg0aGtrQ2NjI/h8PjweD2Zn\nZ/Hb3/4Wk5OT7GcpLc3hcFjnwkFffuS8nDt3Dn/5l3+JbDaLTCaDdDoNoVAIPp/P6pqpVApOpxMT\nExP4yU9+gqmpqQO//LhcLkQiEdra2vCjH/0ItbW1MBgMEAgE4PP54POfHO9cLod0Oo21tTV8/PHH\nuHPnDiYmJg6c/0IZu/b2dnz3u99Fc3MzdDodc77yyUvJZBJra2v47//+b4yMjMDpdB7o2gkCgQAK\nhQKtra24cuUK/uIv/gJGoxFCoXDf+l0uF4aHh/HLX/4SQ0NDhyaLRw5kQ0MDrl69ikuXLqG5uRlq\ntRo8Hg/ZbBaRSATT09P4z//8T8zOzh6aZ0/gcDjQ6/U4f/48Xn31VVy8eBEymQxcLhfpdBqDg4N4\n7733MDw8DLfbfdDL/QIEAgEMBgNeeeUVXL16FadPnwaPx4PL5cL9+/dx8+ZN3Llzp+D75Rtv/Hk8\nHsRiMSwWC2prayESiRAKhbCxsYFwOHzQy/udEAgEUKvVkMlkyOVycLvdcDqdiEQihyKl/FUQCAQs\nQubz+YjH47Db7SylnJ+WyuVyLPInstBBgsPhMFKZyWSCWCxGPB7H1tYWrFYri8yIqLi9vY2VlRUs\nLCywjMZBG36dToeXXnoJly9fRlNTE6RSKVKpFGKxGCNcZjIZ7O3tYWhoCIODgxgYGMDGxsaB12qp\nxn/p0iU8//zzaG9vh0ajgVgsBp/PZ3snFoshEAjg008/xf3792G1Wg8FyU+pVKK2thYdHR3o7OyE\nwWCAxWJhrVgAkMlk4HK5MDIygsHBQYyMjGBlZeXA945QKIRUKoXJZEJFRQXMZjNOnDiBrq4uVFRU\nQC6XszLdysoKHj58iKGhIczPzx+KZ09cFYlEwvhGJ0+eZOdALBYjFovBarXi9u3bGBsbw+zsLEKh\n0EEvfR+ojdVsNqOrqwunT59GbW0tAODWrVvo7e3FxMQEVlZWirJfvvHGXyAQQCaTwWAwMDZtJBKB\n2+0+1Ol+4MkmFgqFKC0thUwmQzabhdfrxe7uLuLx+FcayD+kpaNYIANDrPFYLAaHw8FY8Plro3rt\nYSCYUapTq9Wis7MT5eXl4PF4SCQSsNvt6O3txfb2NmKxGMxmMzKZDBwOB1ZXV+FwOA7ccAJPykWV\nlZW4fv06zpw5A71ej729PbhcLni9XtbiR2WkO3fuYGRkBMvLywdufIAnxtNkMuHs2bO4ePEiVCoV\nOBwOi4gzmQyrkc/NzeHjjz/GgwcP4PP5Dry7gsvlQqPRoKenB8899xwuXLgAgUAAoVC4r0y0s7OD\niYkJfPTRRxgeHsbs7Oyh0OkQi8XQ6XTo6urC8ePHUV9fj+bmZtTU1LBo2e/3w2q14uHDh/j444+x\nuLh44JkugkQiQWlpKZRKJYxGI86ePYuenh6cPn0aEokEiUQCS0tLuHv3Ln7xi19gd3f30HRKcblc\n1g5dUlKC0tJSdHV1oaenB83NzeDxeJiensatW7dw8+ZN7OzsFM2OfeONv1AohEqlglwuZ6SmTCaD\nZDJ54Gnl34V8NjxF/vF4HLFY7Hca/nwC2kEdSIFAAIlEwlKciUQCu7u72NjY+EL/MjGJeTweuFwu\nUqnUgV3i5HSVl5fj0qVLsFgs7LK2Wq24d+8e4yTMzc2Bz+eDw+EwgtlhuMANBgOamppQU1MDtVqN\ndDqN2
dlZDA0NYW5uDkqlEm1tbYjH43A6nRgbG8PGxsahMPzURvbGG2+gs7MTSqUSmUwGkUgEgUAA\narWaZWL6+vrwP//zP1hfX4ff7z/wbBilyY1GI65du4b29nbI5XJ2Xum/0WgUH3/8MT755BOMj49j\nb2/vUOwbACxavnHjBjo6OiAWi1FaWsocc4/HA6vVig8++AC3b9/G7u7uoenl53K5qKiowIULF6BQ\nKFBWVoZz586htrYWcrkc0WgUKysr+MlPfoK+vr59ffCHAZQtValUaG5uRk9PDzo7O9HQ0ACpVIoH\nDx7gP/7jP2Cz2Yq+9m+88acUEG1c+u+XGU9q51KpVIhEItjd3T2wNDRdJFKpFEKhEACQTCa/8qCp\n1WpoNBqEw2HWNnRQh1Imk0Gr1UIikbAUbTAYhN/v/4Jh53A4EIvFEIlE4HA4iEajB2b8iX+g0+lQ\nU1MDpVKJRCKBzc1N2Gw2bG9vIxqNIpfLwe/3sxa0w3CJkBNFKWedTgc+n49YLIa9vT3YbDbMzc1B\nJBIhEokgFothd3cXW1tbiEQiB15uEQqFUKvVOHbsGC5fvsy4LrFYDNFoFF6vF9lsFolEAjMzM7h7\n9y7jVxx0xE8lxvr6epw6dYp1IwgEAqRSKWSzWZZpsVqtuH//PkZHR+FyuQ7F3iFy4rFjx/Dyyy+j\ns7MTZrMZ2WwWAoEAyWQSLpcL09PTuHfvHgYGBrC+vn4onBYOhwOpVMpaiq9fvw6pVAqZTIaamhqI\nRCJWshsYGEB/fz/W1tYO3FkkUKB04sQJNDU1QaFQoKqqCi0tLVAqlQiHwxgdHcWnn36K4eFhJJPJ\noq/9G2/8KcqnhyUWi1lq5WlwOBwolUqYzWY0NzfD4XAgEokgGo0eyOGkCJ4IWhwOB5lM5ksvOQ6H\ng/LycrS3t2NlZQUbGxu/szxQTCgUChiNRkilUmSzWXaBx2KxL2RduFwu66EnAtpBpeK4XC4UCgU0\nGg2rM3u9Xtb+kx8d53I5ZLNZ5iAe9CVIF3hTUxNOnTrFRD8SiQTLGoVCIezu7iIajbK2xFAodOCG\nH3jifNfW1uL48ePo6upiez6XyyGRSDAxrvX1dfzsZz/D/Pz8oeHuEEH0woULuHbtGkwmE9v7wOd3\n0dDQEP7f//t/sFqtcLlcB75nCJRqPn/+PH74wx+yTgoitYZCIUxPT+P27dt477334Pf7D8WeAT7X\nE7l+/Tqef/55nD17lnFDstkstra2MDc3h3fffRe3bt1iaqiHBWKxGFqtFt/73vfwve99DwKBgHXi\nrKysYGhoCD//+c8xNzf3tak+fuONfzKZRDAYRDAYRCQSYRFdV1cXfD4fvF4vkskkRCIRFAoF3njj\nDfT09EClUuHevXsYGxtjmYOv+5Bms1kWEQsEAvB4PKa/LpFIEAwGmRElT7Gnpwft7e1wuVwHaviB\nJ1mI6upqyGQy8Pl8JjBDYibkUOUrbvF4PKytrR0oH0MkEqG2thbV1dUsE8Hj8VBWVsba4wi0L6gT\n4KAdAKVSCYvFgoaGBpjNZlbq4nK58Hg82NzcZBF/KpVCIpE4NCUwUr07e/Ysmpub2XPOZDKIRqNY\nX1/HwMAAdnZ2YLfbsba2dmhqtcCTduLu7m7WP04iQ7lcDpFIBCsrK3jw4AEGBgYwNzf3TBGWgwA5\nuy0tLbhx4wYuXLgAkUgEHo/H9vbk5CQjhM7MzByKLBHweWa3tbUVPT09uHTpEhobG1mpMRwOY3p6\nGsPDw3j06BHm5+cRjUYPxX4HwGaDnDhxAi+88AJOnz6NkpIScDgcuN1urKys4N69e+jv74fNZkMs\nFvva1vaNN/4UQfr9fgQCAWg0Guj1epw8eRKbm5vY3d3F7u4ulEolqqurcenSJVy7dg3ZbBYbGxtQ\nKpVQq9Usco1EIiyVXuyDS4eLIn9S+quuroZOp2OpfRKuOH/+PE6dOoWamhrcu3cPAoEAWq2WRX1f\nZ/scseXNZjOkUilreSorK4PBYIDT6UQ6nQaXy4Ver4fZbEZjYyObV0Dkv4O4HIVCIWpqamCxWJgH\nzuPxoFKpoFQqGS+BygM8Hg/pdPpQ1MtVKhVaWlpQXV2NsrIylq4lTgKRRWlPkMNyGCCXy1FRUYET\nJ04wchllgXw+H2w2G0ZHR7G8vIytra1DI8dNzqHZbMb58+fR3t6OyspKtv5sNgu3242pqSl8+OGH\nWFpawu7u7kEvm0EgEDCp6jfeeAMWi4VlGpPJJKLRKKxWK27evInp6Wns7Owc+D4nUNDW1dWF69ev\no6OjA0ajkZ3JcDiMiYkJ3L17Fw8fPjwUomEEuj/q6+tx4cIFvPnmmyzTmM1m4fF4MDw8jN7eXoyM\njHzt5edvvPGnw7e3t4ednR2YzWYYjUacO3eOEYQePXqE8vJynDlzBmazGXK5HOl0GjqdDi0tLUwF\nanV1FbOzs7BarV+LIX2a+c7j8dj0rM7OTiSTSayursJsNuPcuXP467/+a5hMJiSTSZSVlaG6uhpV\nVVVwu91YW1vD3t7e18IBIG+chDWI9EdEoo6ODkSjUcTjcYjFYrz00kt48803EQqFMDw8jL6+PmZg\nD4JvIRKJUFVVxTIRxL4VCoWsv5+4GMTA3drawt7e3oHXbtVqNY4fP876+fOjZyJRkrEi5+qw1GzL\ny8vR0tKC+vp6lJWVAQDjijgcDiwtLWFubg5+v//QEMyAzyVjq6urmeoaPXfSIJiamkJ/fz/sdvuh\nVB7s6enBhQsX9rXyAU9mDGxubmJpaQkLCwsFEY8pJMrKylgbX09PDzQaDXv2qVSKTUikboTD4CwS\nhEIhjEYjXnjhBVy8eBE6nY5JVKfTaTidTvT19cHhcByI7sM33vgDn/eQU6qKjFFbWxsSiQST3Tx3\n7hwqKipYH25VVRWuX7+OhoYGlJSUoLGxEQ0NDWhubmYyipS+Lsamono/pf2J1FJeXo4TJ05AIpGg\nvr4eLS0tOHnyJJqbmyGVShEMBpnedmVlJfx+P5xOJ0uXLiwssNppMTYU9WiXlJRArVZDKBSyP1Or\n1bBYLHC73Yxb8cILL6C7uxsej4cNlaGBGrFYjNWqt7e34XK5ippaFwqFkMvlMJlMKCsrA4/HY9/L\nZDJspDLthdbWVvD5fGxvb2NiYgJzc3MIhUL7Diu1phXz8NJ4YZ1Oh+PHj6OsrGyf4U8kEgCeRNdK\npRIymQxKpRIbGxvY3t4+0AFERGxtaWlBT08PjEYjS5lns1lEo1E4HA7WnpvL5Vgt+jBAo9Hg5MmT\nOHnyJMxmM2QyGfteIpGA1+vFwsIC5ufnEQgEDpyYmA9Svevu7kZLSwtKSkpYrTydTmNrawuPHj3C\n7OwsPB7PgTu3BBKwosCnpaUFer2enddMJoOVlRUMDg5icXERe3t7RT+Dfwg4HA4MBgOOHTuGkydP\nor6+npXowuEwlpeXMTk5icXFRfh8vgNZ97fG+FNfMA0sIUa0Qq
FAW1sbdDodGhsb2d/hcDhoaGhg\npB2hUIiuri7E43H4fD68/fbbuH37NvPkC30oaI0ikWhfmyJNNCM2cTabRUtLC4xGI7sQlUolXnnl\nFaZER3++u7uLgYEB/PSnP8Xi4iJbc6E3Vv7wD6VSyQbeAE96cPV6PY4fPw6z2Yy33nqLCZ8olUpW\ns6YyRSAQYNoGvb29jKNRrKwLDV7Jl7+l7FEikWCqcy+++CJee+01VFZWQiAQIBQK4Ze//CVSqRRs\nNhuCwSBzUui/xcwU0Xhho9GI9vZ2KJVK9r1UKoVoNMp4CzU1NaitrUVtbS2rJxIh9iAuGTL+J06c\nwOXLl/fxKqjFz+l0IhAIQCqV7uNYHPRlzuFwYDKZ8NZbb+H06dP7Is9cLodwOIytrS0sLy/Dbrcz\nJ+ywoLa2lunEm81mdlYpYLLZbPjggw8wNzd34ByifHC5XDbg6dy5c6iurmZrpyzX+Pg4PvjgA6yu\nrh6qwW2UGa2pqWFOl16vB/Bk7V6vFw8ePEBfXx9rvz0IfCuMP/CE+Jc/ZYrU8sh4p1IpCAQCeL1e\n9kXsXY1GA4VCAZFIBD6fD7lcjhdffBF1dXWYm5tj6mjFSFET4z//QvF4PHjw4AH29vZYliKVSoHH\n48Hr9WJvbw9SqZRF3pQ5UKvV6O7uhlwux/j4OEZHRzE3N8emAxbqcFDGgtLkZEDJG6dhRJlMBm63\nG0KhEDKZjBlXnU7HPnc6nWb16WPHjuHGjRvo6+vD7OwsNjc3C54Oox5bqVS6r8MiEAigv78fQ0ND\n8Hq9yGQyEAqFLDPD5XJx5coVVFVVMSYxh8Nh0r8+nw8LCwsYHBwsmPZ2PgQCAcrKylBWVsY4FsCT\ny2R+fh6/+tWvMD09jXg8jrq6OnR2djJewMWLFxEIBJjDRVr5oVAIW1tbsNlsWF9fh8vlKkrqtKSk\nBAaDgY0kpT1NJbm+vj6MjIxAKpXi+9//PkpKSgAAOzs7jLPj8XgQjUb3lTpisRjC4TBzzotxNqVS\nKcrKylBVVbUv3R+JRLC9vY3x8XEMDw+Dz+fj9OnTiMfjcLlc2NraYmN7853C/JbkYpZkiMdSXV2N\nM2fOwGAw7NPjIAPU39+PYDDIShuHZViVTCbDmTNncO7cOVRWVu7rElpdXcXg4CAeP34Mh8Nx6MTc\nSMDq4sWLuHz5MkpLS9n9ODo6isHBQfT19WFtbe1AHdxvjfGPRqMIhUJs41LrELXx7ezswOfzYX19\nHQ6HAzs7OygpKYHJZIJWq4VKpWLjRDUaDerq6tDQ0IC6ujrE43GMj4+zlphCgYz+07V/qsN5vV7I\n5XI4nU4WZTidTmxtbUGj0UCr1UKj0UCpVEKhUDDCXXV1NSorK6FSqeD3+5lIRyHXLRKJIBKJ2GVM\nRtDtdsNms7Fon+ZO7+7uwu/3IxaLIZfLMYlRMmgikQgtLS04deoU5HI5ALBUXiEjalIHE4vFLIUY\nj8fh8XiwuLgIt9sNg8EAjUYDiUTCSEXRaJRpuEskEshkMigUCqbm5vF4MDQ0hGw2i9HRUSYlWqiD\nTSn/srIytnYiys3OzuL27dsQiUSoq6tDU1MTmpqaYDQaWR83tSsmk0nweDxkMhn4fD6srq5iZmYG\nQ0NDmJ6eZkarkBeSQqFATU0Nc1xI58Hr9aK/vx+PHj1CKpVCc3MzXnzxRej1egiFQmxtbcHhcMDh\ncGB7exuhUIg5mwAQCoWws7ODhYUFxncpJDgcDtRqNcrLy2E0GqFQKAA82S/b29sYHh7GyMgIZmdn\n0d7ejo6ODkYkXltbY/uG+EPZbHafk+/z+eDxeIrCLxIKhSxybm1thVKpZM725uYma+mbn59HKpWC\nVCoF8OSZUrkw/4ucCVp/IpFAOp0uSrZLJBKhrKwMp06dwsmTJ1FWVrZPPnxgYAAffvgh1tfX4Xa7\nGc+FghDgc+cKALtj8zt3iuF05Y9nP3PmDHp6etDW1gaBQMAGnj148AD379/H0tIS9vb2fmdQ9vTY\n7UKu+xtv/OmlBwKBfUpUHA4HRqORpWwnJibw6NEjrK2tMclESrsLBAL2ZTAYUFNTgx/84Afo6upC\nfX09KioqIBKJkEqlCuoV079JlzO9WJlMhqamJvD5fMb6J6IcXYI8Hg98Pp9FhBUVFWhra0NnZye6\nu7tRW1sLDoeDgYEBzM7OFjSCprR//jx76rqgNrqrV6+io6MDZWVlmJ2dZVOpXC4XW79Go8Grr76K\nkydPMu9epVLhypUriMViGBsbe6ZmwJ8CmUzGeAp0GXi9XjgcDmQyGbS1teGNN95AW1sblEolPB4P\nNjY2GIM7Ho/DZDKhqakJHR0dkMvl7B10dXUxZbr19fWCPnOBQAC9Xr8v7RwKhTA+Pg6r1YpgMIjX\nX38dL774ItPJp4uOjA5lMugyJJEmmm+gUqnw6aefYmtrq6DPvLS0FM3NzftU5DY2NvD48WOMjIwg\nGo3itddew5kzZ9De3s7ejVarRWNjIxKJBBKJBDKZzL7LMJVKYXl5Ge+++y7GxsawtrZW0MuRIueG\nhgbI5XLmNG1vb2NwcBA///nPkcvlUF1djcuXL6O1tRXAE8NIETRlJEn/gtrr4vE4ent7cfv2bZbV\nKCSUSiWOHz+O2tpalJaWgs/ns/LQzZs38fbbb8PtdoPH48FkMjHnIBwOs7WTcU+n0yxLKhaLAQCb\nm5vweDwFdxQBMCJ2V1cXamtr2YCh3d1dvP322+jt7YXNZmMOCPDkfHzZns0n86bTadYGW+gMB4/H\ng0wmw/Hjx/HDH/4QdXV1LEM3NTWFt99+G1arFevr64hGoyx7COwPEvKzQ/mdR+RAFipb8I03/k97\nqGREqWZEbNxkMond3V04nU5sb29/4eGRE+F0OuFyudDV1YWqqiqUlJSwFjA6yIUCpY3j8fi+yJzG\n5FIqbn5+Hh6PB0tLS8xxyb/kFAoF7HY7+57FYoFOp4PRaIRSqYRIJCpoaowMSb5ACBHSmpqawOPx\ncOLECRiNRkbws1qtmJ+fx/b2NiKRCDgcDut3XV9fh8ViwcmTJ3Hs2DGUl5ejvLx832S0QkEkEjFd\ngvwogcRbVCoVzpw5A5VKxbTlJycnMTAwwIy/VqvF7OwspqamUFVVherqajaCtqWlBVqttuDrJs4E\nMbUpjZhIJPZNBOvs7IRarWaRpdvthtvths/nY5eeRCKBRqNBc3MztFot1Go1WlpasL29jf7+/oKv\nXSqVsq4Q4AnJLxgMwul0orKyEi0tLTh37hwaGhqgUqnY3iZSKf2d/PdFDpBAIGCCXYWegknzHygT\nATxxOBYXF7GwsIBsNou6ujo2Ca+ysvILURoJ/1BpiwxZKpXC3t4eZmZmEIvFCm5E5XI56uvrWVdI\nLpfD9vY2hoaG8PjxY6yvr0Ov16O6uhrHjh1jEuPJZJIZRuoeSafTrBwqFAqRSCRYaW57e7vgRlSv\n16O5uRnl5eVMOnlhY
QFDQ0MYHh6Gw+GASCSCRCKBSCSCwWCAWq1mZyMej8PhcDBxKKPRyN6h2+3G\nwsIC3G53wSe+CoVCVFZWoqGhAS0tLZDJZIhEIrDb7RgeHsbQ0BBCoRDTdsmfjcLn85FMJhEKheD3\n+8HlctmYX8qEUsaaCMd/Kr4Vxj+bzUIsFrORsdlslkWn+f3OUqmUed7P+j2ZTAYejweRSARzc3Ps\nJdJlGQqFEA6HC3ZIqYYVDoeZU8HlcqFWq3HixAnE43EEg0EIhUL2s8/iHQSDQYRCIVbbP3PmDDuo\nJKlLhroQayePFHgS8VMWQiaTsSEbPB4P8Xgcfr8fwWAQe3t78Hg8+4RPPB4PPvroI9y/fx9lZWX4\n8Y9/jKampn0DUgoNgUDAhH1oHXRRHjt2DGKxGGKxmDmLdrudzZDPFy6iCPTs2bO4ceMG3nrrLVRV\nVbFyQaHB5XIhlUohEon2pTMVCgXOnTuHH/zgB+wCp3IATZQbGxvD1NQUyxqVlpbi2LFj+Id/+AeW\nrdBqtaisrGTE00KCUtC0j/PbEp9//nm0traivLycRZX5gkr0OSkKzeVyrBWT9lxVVRX0ej1L+xbq\nfFI7q0qlYkY7kUhgenoaa2traG9vx5UrV3Djxo1nqorSn5EsMAUj9D2LxQKz2cyGMRXS+NNzUavV\nAJ6c0+XlZfz0pz/F2toalEolTp8+jbNnz6K7uxsmkwmlpaWsRRT4XGqcImq6Oz0eDxKJBHMuC238\njUYjmpubWZklmUziwYMHePfdd+F0OiGTyWCxWFhnzunTp3Hs2DHU1dUxsas7d+5gY2MDAHD69Gkc\nP34cfD4fk5OT+M1vfsPIxoUEST8T0Rx4wlvp7e1Ff38/XC4XNBoNdDod4vE4pFIp1Go1TCYTSkpK\nmNbF/Pw8hEIhLBYLXn31VZhMJqTTady/fx8jIyOMu/On4htv/IEnl8XKygomJiZw7tw5aDQalm6h\nXkupVIpAIPB71b7T6TRWVlYwOzsLi8WCyspKXL58GXfu3Kjip1kAACAASURBVGHa44U4qDQj/s6d\nO8xrpMPH4/GYoaJLn4z/s0AZDq/Xi6WlJZSVlaGyshJNTU1YWVnB8PBwwUZaUorf4XBgcXER1dXV\nUKlU4PP5+54NtXHRONxn/fvEzfB6vcw5UCqVkEgkMBgMCAaDBc22BAIB1vpGyo/U7ZFf04zFYnC5\nXHj8+DHGxsZYmi5/3blcDna7HZOTk7hx4wbMZjP4fD6LSgopsEN8Ctp/lGJsaGhgWRSRSIRMJoNw\nOIzZ2Vncu3cPMzMzWFlZYalliuwWFhbw0UcfIZ1O4+WXX2YE0vzWx0IhlUqxaIX2dk1NDV599VXo\ndDqo1WqWtk0mk7DZbFhbW8PKygrjidCsBaVSia6uLjbYRSQSwWg0QqVSFbxFlNZKBEWK2K5du4bO\nzk7I5XKUl5ezPUPtxtRZQU5LOBxGSUkJNBoNc1qAJ4JNZrMZMzMzBU+dU5qeHFEul4u6ujr86Ec/\nYiOfTSYTDAYDdDodm8yZXxsnQmy+w0IdRnq9fl+7aaFADhfxgOgdnDx5kmUBxGIxFAoFy0DSHqIO\nGCJs032j0+mgUqmQSCSg0WhgNpths9kKum7gc1tDxNBcLoeSkhKcOHECOp0Ozz33HGQyGSQSCRsu\nRkGrQCBAIpFAKBSCz+djZbmqqirE43Fsbm5CKpUy8nEh8K0w/rlcDltbW5ifn8f8/DwbkQs8eSEa\njYaxjXd3d5lBefpizm+/o5RqLpeD0WhET08PJicnMT8/X5ALPZfLsdTfxMQEmpqaEAqFIJPJWLqb\n1lJaWgq1Wr0vPfT0uknMI5+bwOfzUVdXh8bGRkxPTxcsa5FOpxlTfHV1lREmKdKhbAzwxAEIh8OM\n2PT0v08tbDQgiKI7sVgMg8HAxHUKBSKJeb1eRKNRCIVCRl7MLx8lk0n4fD4sLi7CZrN94ZlTREfK\njPl/Tp0QhbwYiVRII22JdGkymQB8XjNMJBIIh8NYWVnBZ599BpvNBrfb/YW1x+NxRsIEPq+LFkN1\nMZlM7tOdoB5og8HAfobD4TClTiqzTE5OYmdnhyldSqVStLa2QqvVskE6HA6HdekUizWdX24QCAQ4\nfvw4AOx779FoFLu7u1hdXWWOJZEyQ6EQGhsboVAo2N+h853fLVMo0O+m1DKVSUhwJj/7BXxehnG7\n3ez90CAasVjMHGP62Vwut89RLuS6qcOCukLoszQ1NaG6upq1RT/tpOZnfCQSCct4EIj0XSwxHTLm\nWq0WpaWl7J3KZDI0Njaivr5+H0+L5Nzza/7568rngJFuC3Exjgh/T4GIOL/97W/B4XBQVVXFRshK\npVJ0d3ejtLQU7777Lu7du/eFFhE6INSWdOXKFVy7do2NS21sbGQvtZBrTqVSSCaT2Nvbw9raGqqr\nq6FWq1lvNM1tb2hoQE1NDaLR6BfkNykiaWxsxNmzZ3HhwgVUVVUxWU+anFaoNDqlzFwuFxwOB1pb\nW/eRsehZCgQCNkFRpVIhEAjsEx6ii7upqQmvv/46Ll++DK1WCz6fD7FYDKPRyGq+hUI8HofX68Xm\n5iaqq6u/cCETKPVMRv5po0j16FdffRXf+c53GDGpWMp0mUwGwWAQgUAAkUhkX0RKa6eUeigUYm2u\nz8q2mEwmdHd346/+6q/Q2dm5T+aVjGohQZMFo9Eoy1rQevNb3gKBAJaXl3H37l309vYyh4EuvMrK\nSrz44os4d+4cI236/X7Y7Xbs7u4WfN25XI5lHfJLEE/fAalUCi6XC4ODg3jnnXews7PD9g7t5e9/\n//vo6OhgZMtc7kkr8uzsbMHTz7T2/C4CMuhktPOJuolEAqOjoxgYGGCqnHq9HnV1daitrd3XpUGZ\npaWlpaJMzcvvgKJ7hMPhQC6Xs7Jt/jOkPU/ERDKaJJmeL2Nss9kwMjKCzz77DC6Xq6DrpvtaJpOx\n8hVlJp6+X/JJq/lZ0vyzl39GnE4n7t69i7m5OTbMrRD41hh/8l6tViuqqqrQ0dGBiooK1nKm0+kg\nk8kQCARgNBoZ6zwej+/zxmja29mzZ2GxWFj9l1KihbxgKOXt8XiwvLyMkZERpsxGL5+IXk1NTXjz\nzTfR2dnJZGaz2Szb4GKxmLX1EFGReA6UBSmUUaLRpQ6HA1NTU2huboZer2c1Q+BzT5hqizSAifgX\n+bXQuro6XLhwARaLBSKRiH02ktotJMgQWa1WGI3GfRK/tG7gCUmtoqICL730EhoaGhhJkSIRiUQC\nuVyOq1evor29nZF7tra24PP5Cq4JQXX87e1t2O12VFVV7YveCOTs1tTU4Nq1a8xwkRYEn89HZWUl\nGhsb0dXVxcRHqGxWDHnXWCzGCFbxeHxfihn43FD5fD5sbm4iFAqxsg85jiKRCBaLBRcuXEB1dTXE\nYjFyuScjlycmJoqSxqU7hUZUE5E4H9FolLV59vb2wmq1gsfjsbQ4tePS+6I7hF
QNSVK30KDMFQ2K\nyTem9NmopXh6ehqDg4OYmZmBXC6H2WxmHAxaM72rnZ0dzM7OYmlpCS6Xq+CtfrQXqLuD1p5/D5Ch\n9Pv9rF2SUuwKhQIlJSWsE4nWHY1GMTY2htHRUWxubhZFjIkcrPwyUf4zB8A6QKhuT2UUyiA+7fxk\ns1l4vV7MzMxgZ2enoO2s3yrjH4vFsLW1hdnZWTx+/BhXrlxhUTSldl9//XW8+uqrrD5OgjlSqZSl\nzfM9ZLoI8wl3hbwcE4kEm6EtEAhQV1eHqqqqfRGGWCxGc3MzmpqaWKaAJv7RAaUNRH+P+rpp7YWs\nP9Pvs9vtSKfT6OjogMVi+ULNmNprXnnlFbz00kusZSuRSLDUIx0W2vDkWNDhL7Qhisfj2NnZwcjI\nCPR6PU6fPs0IkQQOhwOVSgWFQoHm5mY2Jjc/kqPPll8jJZ3xYgylSaVS8Hg8WF9fx9zcHKshP238\nqe3w8uXLOH78OGsh0+l0KCkpYdLXT2v/j4yMME5LoUHZKp/Px9pB8/cJdbxQL7RKpUJ3dzfa2trQ\n2trKZGlJ3Chf293j8eDx48dYXFws+Lrpne7u7rL3+bTxDwaDsNlsuHPnDh49eoRoNIqOjg5cvHgR\nJ0+eRENDA/R6PUvB0/Pwer2w2+1YWloq+B6nVkJihlOGJR/ZbBZ+vx9jY2P4yU9+gvX1dSQSCdYm\n2t7eDrPZvK+1FADW19fZ8y5kOY7WTZLf4XD4Sx0L+hm73Y75+XnMzs6Cz+czaXZqy8znLITDYdZa\nWoz2RDpHlG3+MhD/xePxIBQKIRqNQqlUsvsmv4Wa1hgIBLCyslLwzNa3xvgDn0ekHo8Hq6urqKmp\ngVqthkKhYLU18sSEQiETe8lnD+enlciA+v1+Niq1WPB4PJibm8Pc3BzKy8thNpshkUj2RaTkAZNx\nyt9sT6sEkoOwurr6zJp1IUC13JWVFdTW1rLaP5Et89Nc+SkwIq3kP+v8tkcSnpmfny/KBUNiIVTP\np5aup1Nx9Lzz1QCftW6KVjY2NvDZZ59hZWWl4MafnFuHw4GhoSHWyklti4T8jItYLGYXaH7tNj+S\ni0ajTPyqWBERyfeSwJZIJNrnKFLEZLFYIBQKcerUKXA4HGg0GpSWlqK0tJRl5vLT1aTdsbe3VxQ9\n/Ww2y7omdnZ22NyE/PdOgjQ3btzA8ePHIZPJYDKZUFFRAZ1Ox94DfVYiif7yl7/EwMBA0XgKRDQk\nA5MfwRMnJ5vNsnGzp0+fhlarRU1NDaqrq2GxWCCXy/el2EnBs7+/vyhOIiEQCGBnZ4dlrfKfdzab\nRSAQgNPpZJotBoOBDY3SarWQy+XsvAJPMk8+nw+BQKAohh/4/M4lUu7T/waVJWZnZ7G8vAyfz4dM\nJgM+n4/29naUl5cz7goFnZFIBMvLywVvYSV8q4w/Red7e3uYn5+HWq1mE/AMBgPr9SQjTxH/s34P\nRZ+xWAxra2sYGxuDx+Mp2tqDwSDsdjvGxsagUChYP7lcLmfRGhl4qqc/a92pVIod+r29PUxPT2Nh\nYaEolzpdMAsLC2xUZXV1NZtDQA4VXSD5TOdnrZ0iv8nJSUxOTmJ1dZUR0goFekZutxtLS0sYGhpC\nIpFAXV0d00QA9tcev+x50++j9CnNRHc6nQU3RrQnXS4XxsbGYLFYUFZWBrPZDIVCsW99VMbKH0Dz\nrN+XzWbhcrkwOTnJRIyKYURJ5XFubg4mkwkCgQDl5eVQqVT71kwKhvm12meBuk3Gx8fR39/PyiyF\nBu3JtbU1jI+PI5fLoa6ujmUgcrkcI3n19PSAw+EwBcZ8RyX/98ViMayvr+POnTuYn58v+JoJsVgM\nTqcTS0tL0Ov1TMSJSkXEvdDr9ejp6YFOp2N7SiaT7Yv2yWH2+/1YXV3F/Px8UefOu91uzM3NoaKi\nAlwul3VvkchSMplEJBJhZc3KykrU1NSgvr5+X7qf9rjT6WRaKcUcXBSPx7GysgKLxcL2N7WfE4F8\ne3sba2trSCQSEIlEbDYKRfz03KnVeHh4GPPz80Ux/oXv6ykM/vlP+cvxeBxutxuLi4sYGRnB0NAQ\notEoG2zxrFopgTaM2+1mUrp3797F//7v/8JmsxVVRzqdTsPlcsFms7E0KaXIKXr4KsJhLpfD3t4e\npqamsLi4iLm5Ody+fRtjY2Ms/VdoUHRks9kwMzODdDqN0tJSRuzLT41/1e/IZDIYHBzEhx9+iP7+\nfszMzDBSZrGio0gkgpWVFUZGo5YbUt7Kj5C/DDQZ7Re/+AVu3ryJ5eXlfQSxQoPIlqFQCIFAYF8J\nIj9K+n2eeTQaRW9vL/793/8d8/PzRRvnSlHjzs4Otre3WYsTiREBnzPUn2ZBPwuhUAhOpxO//vWv\n0dvby4YsFQPUhkojY9Vq9b7uEGLsy+VyKBSKfQz7p0HSuiMjI/j000/h8XiKtm6KQjc2NrCyssLm\na5B+BmWIVCoVKisrUV5e/gXZa0Iul4PL5cLw8DAePHgAq9Va1AFAJGPudDoRDof3MfeJLySXy1FR\nUcGI0OR05T93Mrgffvgh3nnnHXY2iwUKKnZ3dxEIBFgHViqVYpNLw+Ewk+E+duwYOjs7YTKZGIeF\nSjS7u7uYnp7Gz3/+c4yOjv6x6/7/vuqb30rjTzUhv9+Pvb09uN1uNjq2tLSU1UrzPUQy+lSL6+vr\nQ19fH4vmpqamimqIgM+nhAUCAZb62t7eRjQaZan+Z10sVO7Y3NzExMQE7ty5w8bPWq1WuN3uoh1W\nigqCwSCT+/T5fFheXkYgEGA93M+K+MkokK7+Z599hs8++4yl82hEc7FApKhwOAyfz8fmIHi9XvD5\nfKZAmL9P8tdN8rKDg4P46KOPYLVa2ajiYoGEZigS8/v92NnZwc7ODhKJxDMN6NNrp9LYxMQEPvvs\nM9y5c4dxSIoFSmNGIhFW33e73djb22N9518V8VNWLxqNYn5+Hg8ePEBvby9WVlaKuu5MJsPWS6xy\nev7k5JHzlR+55a+bzjUJvvT29mJ2draohii/dh4KhVjZhFT8YrEY40KRU0BRc/60Sso2TUxM4ObN\nm6z9sphIJBJML4GY+xS0hUIhxGIxZLNZtnZqgSYCdSwWQyKRgMPhwOjoKG7duoWBgYGi73FyqPPb\nlYmTlUgk2NyTra0txONxVjKibi+r1Yrl5WVsbm5iaGgIDx8+RH9//5/yvP/8jD+BLul4PA6n04nx\n8XE0NzejsbHxC4IWxCC12Wzo7+/HO++8g1//+tcYGhqC3W7/WqcvJRIJ7OzsYHl5GTMzM/D5fODz\n+aivr2ctN/mfkdKgQ0NDuHXrFt577z2Mjo5iaWmpYMI+vwtUr19fX8fg4CCGhoYQDAYZWYvaX/LX\nTZfL6uoq7ty5g08//RSDg4Pw+/1Fd7T+//bO7Lnt6zz/D/Z930gQ3Pd9J0VRC7U5suMsbpzUTTLt\nRaftRTqdTvsX5LpXnVx0JpN22izt/
LK0ii3Ldi1RG0WKFDeRBDeAJAiCBAgQIABiB0j8LjTniFRk\nxWkACZbPZ4aTzCQiD7/84rznvO/zPi9ZAzl8rK2t0Q9fIBCATqeDxWI5kW053pZDNtDh4WFcu3YN\n09PTeWk3+yxIZ8Hjx4+pkQ+Px4NarYZCoTihcSFrJ888Go1ifX0dv/71r3Hv3j04nc6Xtu54PI6N\njQ0sLCxgamoKPp8PmUyGDtb6rJZU8n75/X7cvn0bP/vZz+go13xDPmOhUAgOhwOhUIh64HO5XJom\nP/6eHP86PDyEz+fD8vIyfv7zn+PmzZt5y8Q9b+0kTZ7JZKDX6xGLxRAOh2lJ6/iBhWgBSI06Eolg\nYWEBn376Kf7zP/8TLpcr72sGnmqciKpfqVRCo9HA5/NR4ehxrwfyNyLdGfF4HDMzM/j5z39OLYHz\nGfiPc1x4TWZyEC3T5OQkpqamYLfbkUgkoFKpEAgEYLPZcO3aNTx48ACLi4u4fv06bXf9I96TFwb/\n3Pun5oacfyqI4OzUqVPo6elBU1MT7RVeXV2Fy+WibWButxtra2v0xvyyNsbPWrfJZEJVVRU6OjrQ\n1NSEmpoaAKB11FAoRFuHtra24HA4XkrwfB7H2+HMZjPa29vR0tKChoYGmM1miEQixGIxrKyswGaz\n0VZHp9OJra2tnPfffl7I+0HGJFdVVaG+vh7d3d0oLS2FUqlELBajZRW3241kMknH4e7u7ua1DvpZ\nkNSiUqlEaWkpLBYLDAYDmpqa0NvbS7UABwcHWF9fx+PHjxGPx+mG43a786pl+aw1E3Mlk8mE4uJi\nlJSUUIOUlpYWlJaWgsvl0gye1WrF5uYmotEoHA4HVlZWEIlEXuo4V6FQCIlEQlv4yJdOp4NUKqXu\nccTBjRhK2e12am9NRmzny2zmWYj2RqFQwGQyUSEfuTUTIaDBYIDRaIRYLKZtlySj5PP54HQ6sbKy\nktea+bPrFolEtJxSXV0Ns9lMB31ls1lqTEVmXojFYgQCAQSDQWQyGXi9Xupu+bIuQcATca1cLofZ\nbEZ5eTnq6+sRj8fp5y0YDFJBq8ViAfB0kmsikYBAIKBr/iMvnS+M71+a4E8gwqKenh76sk9OTsJu\nt9P52/kQPf2xEHFIS0sL2tvbkc0+cTUcGxtDIBDIW2vcHwuHw0FdXR2am5tRVVUFqVSKcDiMyclJ\nzM7O0nW/ygPW8+DxeNBqtRgaGkJtbS00Gg2tRd69exdbW1s0LVlIz5x4ELS3t+PKlSu0Nh0MBjE3\nN4d79+5Rv/ZCWPdxYahUKqWzCurq6sDj8XBwcACfz4exsTHYbLaXFjR/H8SASywW0xZho9GI+vp6\nqpInJcTHjx/nvRz0eSA3fZKOPp4ZMpvNKCkpoWWB3d1dbG1tYWdn55W+4+QyQbQtx7MS5NavVCoh\nl8shk8kQCASwv79PvUJeJcQy22g0IhqNwul0nvjcPavLycNzZsH/OOTWQYwguFzuiTpSrv3BcwVp\nMSMvOfCkZh0KhU44WxUixGjo+Bx6MiSpUNdNuhPUajVt9SP13uNWtYW2drLBy+XyE46UJIVL0oiF\nsm6y+ZE0KZ/Ph1KpPOEoR9pWyWe0EDieMiclRKFQSF3ogCfPPJFI0J71Qnjmz6b6yfMnmRhiQkS6\nnfLlWPmH8lllFZJlJF/EOKcQ3nHyPpO5FcfdTQnPthfnGBb8GQwGg8H4kvHC+J7byQwMBoPBYDAK\nHhb8GQwGg8H4ksGCP4PBYDAYXzJY8GcwGAwG40sGC/4MBoPBYHzJYMGfwWAwGIwvGa/VVL/PA/GJ\nFgqFSKVSefXXzjXPerW/6j5WBoPBYOSXZ2fQ5IovVfDncDgwGo3Q6XSQSCR00MIXAeITrdFooFar\n4Xa78zaFLR/weDwolUrqOBeJRF6KL3suIGYdZOpfOBx+rmFHofLshMJCdFT8fRDjnOMGL18k8rWB\nvwyIMRAZ+vNFe3fIpMVEIlEwZkufBx6PR50YyUCvXI6ufq0H+xyHOLZdvnwZV69eRWdnJ9LpNBYX\nF3P9o/KCSCSCVqvF5cuX8ad/+qd03PAXAeI4d+rUKXR3d6O6upqOHP0iIBaLodPp0NHRgZaWFjpY\n5IuwCZL3njhDSiQSHB0d5XQTyTfE3vX42NMvygYOPP0bkDkLX7S1k0wpef6v2qb4D4HD4aCkpITO\nBSgUx8LPg1QqRXV1NfR6Pc1UJ5PJP+RbvHCwz5fm5i+Xy2EymdDT04O+vj54vV5IJJJXvazPBY/H\nQ3FxMS5cuICLFy+iubkZH3zwwRdmE1SpVKioqMD58+dRXl4Ov98Pj8eD+fl5AIW7GZIbT3l5Oc6f\nP4+WlhaIxWJsb28jGAwWfOaFw+FAoVCgpKQEfX190Gg0yGQyGBkZgdVqpTaohQjJVlRXV6OtrQ1q\ntRrZbBabm5tYX1/H+vp6wa4deHLgJc++tbUVcrkc6XQajx49eqWDtz4vHA4H1dXVqK2thdFoBJfL\nxf7+PhYXF7G6uvqql/dCyIGlqqoK/f39sFgs4HA4uHnzJlZXVxEMBgv64M7hcNDS0oKWlhY0NjaC\nw+HA5/PRzy2ZZvjH8qUJ/nq9Hq2trejq6kJNTQ2cTicikcirXtbvhdwaqqur8dd//deoq6tDMpmk\n6fMvAiaTCZ2dnbh8+TIsFgusVisePnx4Ys58ocLn89Hc3Iy//du/RUlJCYLBIB48eID19fWXNpb1\n/wJJMxsMBvT29uIf//EfUVFRgYODA/zwhz+EzWajQ1IKEbKBnz59Gv/wD/+A4uJixGIx3L59G7/9\n7W9pua5Qnz+Px4PJZMLQ0BD+/u//HkajEfv7+/jhD38In89HhysVIuTQ29fXh+9///tob2/H4eEh\npqam8NOf/rTggz8ZXnTmzBn80z/9E+RyOVwuFw4ODhAOhxEOhws2+JPP7ZUrV/Dd734XtbW1SKfT\ncLvdiEQicDgcOVv/ax/8ScqztbUVX//611FWVoaDgwPMz89jc3PzVS/v98Ln89HY2Iiuri4YjUb6\nIsTj8YLd+AhisRgqlQqDg4N45513UFJSgu3tbVy7dg1Wq7Xg1y8SiVBVVYXa2lrodDpkMhkEAgFa\ni3ve3PlCgcfjQSAQYGhoCN/85jdhMpmwt7eH2dlZeL3egl47ACiVSjQ0NKC5uRlmsxlCoRA7OzuY\nm5vD5uZmQWe9JBIJ9Ho93nnnHVy9ehUGgwGrq6u4e/cubDZbQX92uVwudDodamtr0dfXh4aGBnC5\nXGxtbcFqtRZ8qY7P58NkMuFrX/sa3nzzTYjFYiQSCfj9fjgcDng8noIO/FqtFmazGQ0NDSgrK4NQ\nKITD4cDw8DDW19dzOtzqtQ7+HA4HUqkUFRUV6OjowJkzZyAUCrGysoLFxcWCr5mLRCJoNBp0dXWh\np6cHKpUKPp8PVqsVwWDwVS/vhZBRm/X19ejr68PAwACOjo7gcDhw//59OByOgt7Aibiys7MTTU
1N\nEIvFcDgcmJ2dxf7+fkELhzgcDlQqFSwWCwYHB9HX14d4PI6FhQXcu3cP29vbBTtNEXgSPIuLizEw\nMIDm5mZIJBJ4PB4sLS1hfn4e29vbBbl2cmMuLi5GW1sbLl++jO7ubsRiMTx+/BiffPIJHA5HwYpF\niTanqqoKFy9eRGdnJwwGA2w2G2ZnZzEzMwOPx/Oql/lciCi3pKQE7e3tePvtt6muy+VyYWlpCZub\nm9jf3y/IZy8UCqFWq1FbW4vOzk40NDRArVYjGo1ibW0Nd+7cwebmJpLJJAv+nwdyCrxw4QL6+/th\nNBoxPT2NkZERrK6uIhAIvOolvhCTyYSWlhZcunQJfX19EAqFmJ+fx29+8xtsbW296uV9JqReazab\nceXKFbS0tIDP52N5eRkLCwvY3d1FLBYryA8hQaPRoKGhAd/4xjfQ0dGBcDiM69ev49e//jU8Hk/B\n1vvJs29ubsZ7772H7u5uJBIJfPjhhxgZGcHk5CT29vYKtubM4XBgNpvR09ODb37zmygvL4fX68X1\n69dx8+ZNzM/PF+znlsfjQSqV4vTp0/je976H+vp6BINBjI+PY2RkBHNzczg4OCjYg6NAIEBNTQ3O\nnTuHd999F0VFRdjb28O1a9cwPDwMu92OUCj0qpf5XEQiEVQqFb7xjW/gnXfeQU1NDbhcLjY2NnDr\n1i3cunULOzs7BfncORwO1Go1zp49i6GhIVy8eBFGoxGRSAQ2mw2Tk5MYHR2lI9BzxWsb/MkJvLm5\nGQMDA6itrQWfz8fq6irdAFOp1Kte5nPh8XgQiUSora3FhQsX0NzcDKFQiOnpaYyNjWF2drZgN0Dg\nySZSWVmJnp4eDAwMwGQyYX9/H+Pj45icnMTBwUHBqs35fD7EYjE6Ojpw+fJltLa24ujoCHfv3sXD\nhw+xtLRU0K1yCoWCrv3s2bMQCoWw2+0YGRnBo0ePsLW1hUwmU5DrLy4uRk1NDdra2tDb24vq6mrs\n7u5iZGQEw8PDmJmZgd/vL8jPLYfDgUajQWtrK06dOoWWlhaarbh79y79zBZq4FcoFDCbzRgaGsK5\nc+dQUlKCnZ0dzMzMYGxsDFarFaFQqODeG6KJKisrw8DAAM6fP4+GhgYEg0Gsrq7SPWdhYQHhcPhV\nL/e5lJWVobW1FZcvX0ZPTw9KS0vhdruxtraGsbExjI6OYm9vL+fvzWsZ/I8rhfv7+9Hb24vi4mIk\nk0ksLS1henq6oHvMBQIBVCoV2tra8NZbb8FsNsPlcuG3v/0tbt++DYfD8aqX+EJEIhF6e3tx6dIl\ndHd3I51Ow2az4fbt2xgdHUUikXjVS/xMSKnl4sWL+N73vgeVSoUHDx7gv/7rv7CwsIB0Ov2ql/hC\n9Ho9vv3tb+PChQuor6/H7OwspqamMDk5ibW1tYIMPIT6+nq899576OrqQlVVFQQCAW7cuIEf/ehH\n8Hq9Bbt5A0/2nKKiIrz11ls4deoUVCoV3n//fVy/fh0TExMIhUIFe+AFAIPBgPb2dprpisfjmJiY\nwH//939jfn4e+/v7r3qJz4XD4UAkEqG5uRl/9Vd/2MB9ZgAAIABJREFUhaqqKvB4PCwsLODDDz/E\nL3/5S8Tj8YIWV7a0tOCtt97C1atXodPpaJno1q1buH79Ora3t/Pys1+74E/ShnV1dbh8+TJOnz4N\nlUoFp9OJmZkZ2Gy2nKdPcgkxIhoaGkJfXx+MRiOOjo6wu7uLlZWVghfc6PV61NTUoLe3Fw0NDRAK\nhbTeuba2VrDpflKvtVgsOHfuHFpaWiAQCLC8vIzJyUmsrKwU7AZIsFgsaGtro/XCQCCAu3fv4oMP\nPoDP5yvI504gYqeGhgYYjUZEo1FMT0/j4cOH8Hq9iMfjr3qJnwmfz0dLSwvOnj2Lvr4+yGQy2O12\nTE9Pw2q1IhqNFmzwEYlEkMvlOH36NL7+9a+jpKQENpsN77//Ph49eoSFhYWCTfULBAJotVqcPn0a\nly9fRllZGRwOB+bn5zE8PIzZ2dmc1shzCenEKS8vx9DQEE6dOgWJRILZ2Vl88sknsFqtWFlZyau2\n67UK/sSFra6uDpcuXcLg4CAaGxshEongdDoxPDxMe2wL7YUg6Su1Wo36+npcuHABbW1tUCgUcDgc\nWF1dhcPhKNgARFpUSkpK0N3djba2NlgsFmQyGayuruL+/fvY2dn5Q00qXhpcLhdKpRK1tbW4dOkS\namtrkc1msbCwgJmZmYJeO4/HA5/PR21tLXp7e1FRUYFsNoulpSU8fPgQExMTBZ2xIGIni8WCiooK\niEQibG1t4d69e5idnS3oGz+fz6ellsHBQdTV1cHr9WJ2dhaLi4vY2toq2EMXj8eDSqVCdXU1+vr6\nMDg4iEwmg+XlZVy7do22lRUqcrmcBs++vj4oFArY7XZ88sknGBsbK1hlP2lFJN4nfX19qKiogNvt\nxsTEBH7zm9/A4/HkXdT92gR/DocDuVyOuro6nDt3Du+88w618U0mk9jc3MT4+Di8Xm9BfhgFAgEU\nCgUGBgaoQNFsNiOVSmFkZAQ3b97E3t5ewW7iXC4XQqEQTU1NuHLlCiorK8HlcrGzs4ONjQ04HI6C\nLrWIxWLU19ejt7eXGuL4/X5MTU1hYWGhoFO2EokEWq0Wg4ODuHLlCnQ6Hebm5vCLX/wCi4uLOTMF\nyRdqtRqnT59GV1cXdDodNjY28PjxY0xOTsLpdL7q5b0QhUKBsrIy9Pb2oqOjAyKRCKurq/joo48K\ntisBePJ5lUgkqKqqwle/+lV0dnZCLBZjZmYGs7OzcLvdBT33hMPhoLS0FN3d3ejp6UFxcTG8Xi+s\nVitmZmYK2shHKBTCbDbj1KlT+P73vw+DwQC/348bN27g5s2b2NraeikXjdci+HM4HBQXF9PAPzg4\niLKyMnA4HESjUbhcLtjtdmxtbdG0c6F8KEm6ubq6Gt3d3ejv70dXVxeKi4vB5/Oxv78Pq9WK+fl5\nqhQuJDgcDoRCIUwmE+rq6nDq1Ck0NzdDo9FgZ2cHH330EaanpxEIBApSqAUAMpkMZrMZAwMD6O3t\nhdFoxMbGBsbHxzE3Nwe3212wGwkAaLVatLe3o6WlBaWlpYhEIlTsVKi3H4JarUZNTQ0uXLiAxsZG\nHB0dYXZ2Frdv38bGxkbB3jyJ10N9fT0GBwfR0tICHo+H0dFRmrEo1Cwdj8eDRCJBQ0MD+vv7cfr0\naYjFYszNzeHWrVsYHx8vWFEuueTpdDqcOnUKFy9epIH/zp07mJqagsfjKdgsnVQqRUlJCS5cuIBz\n586hqqqKitDv3LmDxcXFl3ZJei2CP5fLRW1tLS5fvox3330XFosFfD4fyWQSgUAAs7OzWFlZoWrV\nQtoMiZNZT08P/uZv/gYWiwVarRYikQgHBwfw+XxYW1uDw+EoSDtW0htcX1+Pb3/72+jv70d5eTmO\njo5gt9vxb//2b9jY2CjoWwTpr7148SJ6enogEokwM
TGBX/ziFwVd8wSeCs3Onj2L+vp6SKVSWK1W\nLC8vw2azFeyBC3iydpPJRJXOJSUlODg4wL1793Djxg1EIpGCDEDAk0ydWq1Gb28v3nvvPVgsFjgc\nDvziF7/A+Pg41tbWXvUSPxOS7u/v78elS5fQ1dWFqakpfPTRR/jggw+wurpacPsMgWhDOjo68MYb\nb+ArX/kKYrEY7t27hx//+MfY3t4u6L2GmFf92Z/9Gdrb2yEQCDA6Oopf/epXmJube6n+LV/44E+c\nymQyGVQqFcRiMWKxGE3Zzs7OYm5ujtqZFtpLLZFIUFJSgoqKCpSUlEAmkyGdTiMUCmFiYgI3b97E\nyspKQbYI8Xg8Wqo4d+4cOjo6YDAYEI/Hsbq6StubCjUAkVp5Y2Mjzp8/j7KyMvD5fIRCIWxubsJu\ntxf0RiIWi2E2m9HZ2YmBgQEYjUb4/X787//+L8bHxws63S8Siei7MzQ0BK1WS81MlpaWEIvFCi7L\nReDxeDAYDHRQldlsxtbWFh49egS73V6wN37gqSC6ra2NahSi0ShWVlYwOTmJQCBQcPsMQSgUoqio\nCAMDA/jWt76F5uZmhMNhDA8P49atW9jd3S1YYahMJoNGo8Gbb76Jy5cvo6qqCtvb23j06BHu3LkD\nm8320veaL3zwJ3C5XPB4PHC5XITDYdhsNty5cwcjIyNwOp0Fu4kLhUJotVoYDAZoNBocHh5ib28P\ny8vLuHXrFj788EP4/f6C3MSFQiF1wevr60NlZSUODw/hdDrx8OFDTE1NIRwOF6xOgdiwtra2ore3\nF1qtFsFgEHa7HSsrK/B4PAV78+Tz+VCpVGhtbaWOYJFIBAsLCxgZGcHS0lLBrp3c3mpqatDf34+2\ntjak02ksLCzgxo0b2NjYKNgDI5fLhUajQU1NDc6cOYOGhgaIRCIqrnQ6nTg4OHjVy3wuAoEAUqkU\ndXV1GBwcRFtbG+RyORYXF/H48WMsLS0V9NrVajVaW1tx5swZXLhwAdFoFIuLixgeHsb4+DhCoVDB\nHRhJWddoNKK1tRWXLl3C+fPncXh4iKWlJfz2t7/F48eP4Xa7X/ravvDBn9Tvg8EgfD4f4vE4AoEA\nTX06nc6C7isnIzJJcA+FQpidncVPfvITLC8vw+fzFWzwlMlkKCoqQnl5OYqLiyEQCLC6uoqRkRF8\n/PHHWFhYKNiTOAAYjUacOXMGfX19qKqqwuHhIR4+fEiffSGWWQhyuRwVFRW4cuUKenp6IBQKcfv2\nbVy/fh1LS0sFWysnHhz19fV499130dPTA7FYjKmpKYyMjGBmZqaghaF8Ph9tbW24dOkShoaGIJfL\nqYvc7du3sb+/X7CHLqVSSTuhvv71r0Oj0WB+fh7/8i//grm5uYJeu0qlQl1dHb75zW/izJkzkEql\nuH37Nq5du4ZHjx7B5XIV5AWJCCtbWlrw53/+52hra0M2m8XDhw/x6aefYmxs7JV9Vr/wwR94EkDT\n6TSdlJVKpRCNRhGJRAr2xk/gcrkQiUR01nc4HIbL5cLy8jL1YC9URCIRlEoltFotHVm6srKC27dv\nY2lpCT6f71Uv8bmQtkqDwYDW1laUl5dTB8UHDx5gZmam4Cf2ET+FhoYGSKVSLC4uYmJiApOTk/D7\n/QW7iZMbXG1tLfr7+yGTyeB0OnHnzh1MT08XrPc68DTT1dbWhq6uLmi1Wmpe9fjxY+zu7hbk55Xo\nioqKitDf34/W1lZotVrMzc3h5s2bmJychNfrLchsy3H/DSJqlUgkmJ+fp+59Ozs7BbvPC4VClJSU\noL6+Hh0dHZDJZPB6vZicnMTc3Bz29vZe2TvzWgR/4GmfOfCkJicUCsHlcl/xqn4/pE9YIpEAeHLz\nDwQCSKfTBbsJEgQCAWQyGRQKBQQCASKRCJaWlnD//v2CVdsCT13ByPQyvV5Px8Xev3+/YJXOwNP3\nnHS3EJEcsb8t9NY4kUiEkpISVFdXo66uDqurq5iensbHH3+M9fX1gn7npVIpTCYT2tvb0dDQgEwm\ng9HRUfz4xz8uSOtbAhlwVlZWRhXm0WgU165dw8cffwy3212w2UUulwuBQICqqir09PSgqKiItsWR\nMkshHlqAJ89dIpGguroatbW1MJvN8Pl82NjYwOTkJOx2+yt9Z16b4L+/vw+v14tMJkO98Xk83mf+\n/4+PNCWnS/LvRCIRUqkUUqkU0ul0XsV2qVQKfr8f0WgUHA4HiUSCip3+0J/57JjWfG+kyWQSkUgE\n2WwWqVQK29vbVOBXqBsh8LQ9Ua1Wo6KiAnw+H263GxsbGwXfGsfn8yESidDe3o7BwUFIJBJYrVY6\nra+Q4fP5KCoqwhtvvIHOzk5qovTw4UM6KbFQ4XA4qK2txZUrV1BbW4tMJoOFhQWsrq7i4OCgYIMn\n8OTA1dXVhbNnz6Kurg7pdBp2ux0bGxvw+/0F/dz1ej1tp+zs7EQmk4HNZsP4+Di2trYKeq+xWCxo\nbW3FN77xDXR1dSEej+P+/fv4+OOPYbfbX3l567UJ/uFwmG4gxPSEzFwnQZDD4UAmk0GpVEKj0UAo\nFNKASVLBJPgnEgkcHBxgd3cX4XA4b3+oTCaDYDCIWCz2whnrZD67UqmESqWCUCj8ncPN0dERDg8P\ncXh4iFAoRGt4+fpwpNNp6ptweHiI/f19ems+fvAgqnqZTAapVAqpVAo+nw8+/8nrR/59KpVCIpHA\n/v5+TudWPwv5W8vlchiNRiQSCfh8Png8Huzv75/4uWRUqFAopGsXi8Xg8/ngcDjIZDK0zES+8ukj\nQSbHVVdXo6mpCZlMBjs7O3j8+PGJNqHjB1qBQACRSASJRAKxWAwul0sPbPF4HNFoFOl0Ou8aB6FQ\nSFXyNTU1SCQSWFlZwePHj3FwcECfO5fLpV/k3REKhRAIBOByuTg6OkImk0E8Hkcymcx7JwwxsCJ+\nBGazGaFQCFNTU7DZbIjH48hms/SZk+zM8b8B+X04HA4ODw+RTCZfyoAlPp8PpVKJzs5OOuPEarVS\nA6VwOIyjo6MT6z3+34+vm8vl0ktRvjunyAG9pKQEg4OD6OrqgsViwerqKhYXF7G4uAifz0cPLs8+\n82czv9lslr4r+X7m5DNHhJWDg4PQaDRwuVx4+PAh7t69C5/P9ztaNKKHIe85iQfpdJpmgnP5zF+L\n4E82MpJq1ul0aGpqgkajORH8iR9AX18frly5AqPRCC6XSwPm0dERDUBHR0fY2dnB3bt3MTc3B7vd\nnpeX/fDwEIlEgmYsNBoNdDrdicBO0tRGoxGnT5/G+fPnodfroVAo6At+dHREdQ4HBwd48OABFSDl\nS3R3eHhIbzzHDy7PPieRSAStVouWlhY0NTWhoaEBOp0OCoUCPB4P6XQa0WgUOzs7WFtbw61bt2C3\n2/M6dvb4By0YDNLndDzdT2w41Wo1ioqK0NjYiMbGRpSXl0OtVkMgEGB/fx9utxvz8/NYWFjA/Pw8\nUqlUXoO/
SCSCTCaDSCSC3+/H3t4ewuHwifQn+f8plUro9XqUlZWhsrISVVVVEIvFODo6gsfjwerq\nKh2zTFzR8rV2sVgMjUYDs9kMkUgEr9cLp9OJ7e3tE2s/flghmpKioiLq2JlIJOjUNpfLRf0A8rVu\n0pFTUVGBpqYmAIDL5cLIyAhsNtuJ/UUkEtGDikAggFgshkwmowcvPp+PSCRCAy85OOQLhUKB0tJS\ndHZ2oq6uDgBgtVppup/87GcPWXw+nx52hUIh/drd3YXH46GflXytnWhympubqWNoJBKhdtV+v5/u\n9+SzzOfz6TM3mUyQSqX0+5HMZDgczntJUiKRQKfTob+/HxcuXIBer4fL5cLdu3cxPz8Pr9f7O5ki\n8jsoFApotVr63DOZDPb397G7u4tMJpPTLM1rEfyBp4Ho8PCQTsUjIjoAKC8vR0tLC7q7u+kXl8tF\nIBBAMBhEJBJBKBTCwcEBYrEYjEYjtFotmpubEYvFEAgEEIlE8vbikFt7JpM5EYBIH3pdXR1qamrQ\n0dGB9vZ28Pl8pNNpBINBhEIh+lLzeDyaborFYlhYWIDL5crLgAuirSDlCq/Xe6JVSKfToaamBpWV\nlaioqEBVVRVKS0tRXFxMT+KBQACJRIKKejQaDVKpFGQyGe33znVKlcvlQiaTQSaTgcvlIhgMwuVy\nnRg6VFpaioqKClRWVqK4uBg6nQ6lpaUwm82Qy+U4PDxEOByGXq+nAcpsNsNkMmFpaQmbm5t5uZFK\nJBKYTCYoFAocHR3B5XLB4/HQA4dYLEZRUREqKytRV1dHs1xGoxFFRUUwmUw0y1JdXY3q6mo0NzfD\nbrfDZrPRToFcr5uYEVVWVkKlUiESiWBxcZEGkmw2Sw8qjY2NKCsrg0wmg1wup89XoVCAz+fTTMv2\n9jb1Y1hfX8/btEulUonW1lbU1tZCqVTSVlCn04lQKETb/0pKStDY2AitVguZTEaDqUQioUGVx+Mh\nHo9jd3cXq6urWFlZwc7OTl4U3xwOB2VlZTh16hSqqqogFArhcDioAdTBwQG1Fa+srERlZeWJtYvF\nYohEIpqlEwgECAQC2NnZwcLCAjY2NuD1enNeNiAZ2vb2dvT19aG2thZHR0fY2NjA9PQ0lpeXkUgk\nqNufyWSCxWJBWVkZ5HI55HI5NBoNRCIR/Z6pVAo+nw82m436j+Q6m0sCuMViQX9/P7q7u1FaWopE\nIkFbtzc2NmgMIQdDnU4Ho9GIsrIyGAwGqNVq+txJRnV3dxezs7NYW1vL2ZTC1yb4H08dHx4egsfj\nnbj1t7e34wc/+AEaGxtRVFSEw8ND2Gw2zMzMYGNjAxsbG9jc3EQwGEQ6nUZ/fz+amppQU1NDN5p8\neC6TFDSHw0EqlUIoFDrRryoSiXDp0iU6alMmk+Ho6AhbW1twOp2Yn5/H6uoqNjc3IRAIUFpaijNn\nzqC6uhp6vR7pdBrhcDgvXgHEqIXH4+Hg4AAbGxsIBAL0f7dYLHj33Xdx7tw5tLe307RhKBTC6uoq\n9eGORCIwmUzo6upCfX091Go1tFot9vb28iJGIhkWpVIJDocDr9eL9fV1qhgmYza/9rWv4fLlyygv\nLwePx6NBx+VywWazYW5uDqWlpaiqqsLp06fR3d2N3t5e/OxnP4Pb7aZpxlxCNmqlUolEIoH19fUT\nXSFyuRydnZ346le/im9961uQSCTg8/knOmJ2d3cRi8WoJz1RT9+5cwd+vz8vnQ5cLheVlZVobGyE\nVCqFw+HAxMQEvF4vjo6OaC90V1cXvvvd7+Ls2bOQy+W/Uxoimz6fzweXy4Xb7cbw8DDef/99bG5u\n5uXQQuYmkJuz3W6n8+0zmQz4fD5KS0tx/vx5vPfee6iuroZWqz0hQiaHe/I9AeDmzZv4zW9+gzt3\n7uQ8+JOfXV9fjzfeeAMWiwWRSASzs7Ow2WzUzEetVqOsrAxXr17FV7/6VVRVVUGr1Z7IPB7PBmWz\nWfj9fvz0pz/FjRs38qLV4HK5UCgUOHv2LE6fPg2dTgebzYbFxUXMzc3RvzPJJPX29mJoaAiXLl2C\nwWCAQqE48dzJfx4eHmJ4eBg/+tGP6LTFXEK6Kmpra/Gd73wHTU1NkMlk1PDs008/pRc7cnApLS2l\nhxzydxIIBDTFn81mEY/HEQqF8M///M8IBoPY3d1lwZ9A6kN8Ph8HBwdIJBIQCoV0o6yrq8PQ0BBq\namrA4XCwtLSEsbExzM3N0ZvOwcEBDg4OaD1rYmICbrcbFRUVUCgUeOutt/Dxxx/TGlmuNhmBQEBP\nqalUivpuS6VSnDt3DhcvXsTp06dRXl6OZDKJpaUlzM/PY25ujgp2QqEQIpEIuFwuPB4PotEoiouL\noVKp6KmZqNhziVQqhUajAZ/Pp89ELBajtLQUX/nKV6iRiMFgQCgUwuLiIqxWK22LCgQC2NvbQyaT\ngUQiQSAQwO7uLuRyOcRiMfr6+uhJP5fw+XxotVqo1WoAoBkjtVqNyspKXLx4ES0tLairq4NarYbT\n6cTCwgJsNhscDgeCwSACgQD29/chl8tRVFREa5IKhQLFxcUoLy/H1tYWIpFIztZNJg9WVVVBqVRS\nJ8hsNguLxYK+vj709PSgoaEB1dXVEAgEWFpawtraGlwuFy0NJBIJqo0ht3GJRILi4mIYDAaqc8kV\nJENUWlqKyspK6qLocrkgEonQ1taG9vZ2tLa2orm5GXV1dUilUnSugtfrhUgkwtHREfb395FKpegB\nTa/X04yMQCDIaR2d7CvE1Een0yEej2NjYwNbW1tQqVSoqalBfX09urq60NbWhtLSUkSjUWxtbdGs\nxvF3+/DwEEVFRTh16hRUKhW6u7sxPz+f804HgUAAuVwOi8Vy4tY/MjICj8cDk8kEs9mMpqYmDA4O\norm5GSUlJYhGo/B4PPRQlk6n4XA4EAqFwOFw6KjuxsZGbG5uYmZmBslkMqdrVygUKCkpQWVlJYxG\nIzKZDGZmZvDRRx8hEAhALpdDIpGgo6MD3d3d6OzsREVFBQQCAXw+H5xOJ/b397G1tYXV1VUIBAI6\nJt1sNuPq1auIxWJwuVw5XbdQKERxcTGqqqpQV1cHuVwOn8+HDz/8EKOjo1RbIZfLUVNTg5aWFnR1\ndaGsrAwlJSUQCARwOp3Y3d2lpTiDwYCGhgZ0dXWhu7sbXq8Xn376KTwezx+93tci+BNIKjaVSkGl\nUqGyshKxWAw9PT1ob2+HQqHA+vo6pqam8MEHH9CU+PMgqf54PI4zZ85gYGAA8/PzWF5ezmlNlwwI\n4fF4ODw8hEKhgMViQUtLC/r6+vCd73wHarUa6XQay8vLGB0dpTqE57lC+f1+hMNh1NXVobGxEdXV\n1dREJRqN5vT2TxzDSH3ZYDCgtrYWBoMBf/Inf0JvlR6PB4uLixgZGcHo6CgmJiYQi8Xoh4GQSCQQ\niUTQ1NQEg8GA3t5e+Hy+nAd/UluTyWS0DcpkMqGlpQWNjY343ve+R
/UibrcbMzMzGB4exuPHj2G3\n2+mNnohyFAoF/H4/Tp06hTNnzsBsNqOyshJ7e3s5C/7Ha4LEBpoEJ7PZDKlUirfffhtnzpyBVqtF\nMpnExsYGxsbGMD4+DpvNRrNax8VcpaWlaGxsxLlz52AwGKDX6yGXy3Me/KVSKYqLi2mQJofFyspK\nFBUV4cKFC2hoaEBJSQn29/dhs9kwOTmJlZUVbG5uQiaT0Vsn0YL4fD709fWhuroaRqORft9cvuNS\nqRQ6nQ4WiwVKpRKpVArhcBgcDgfNzc1oampCT08PmpubodVqcXBwALvdTg9d4XAYcrkcu7u72Nra\nQiaTQV1dHRQKBcxmMxobG6FSqU5kKXMB+TwWFxejuLiYHpw2NjYgFAqpK2RnZycGBwepFsHhcGBj\nYwPr6+tUzGq1WuH3+8HhcHB0dASz2YyysjJ6wMw1KpUKFosFJSUlUCgUiMfjsNvtmJ+fh0gkQm1t\nLYqKinDu3Dn09/ejqKgIALCzs0O/PB4PVlZWMD09DZFIhLq6OtTX16OyshIDAwN4+PBhztctEAho\nCaKkpATxeBw7OzuYmJjA+vo6VCoVlEolzGYzurq60NnZic7OTtrm7XQ66fj20dFROJ1OlJaW4mtf\n+xoGBgbQ2NgIr9eLsbGxnKz3tQj+2WwW0WgUgUCApuKUSiXefPNN9PT0QKVSgc/nw26343/+539w\n9+5depr9LNLpNBKJBJLJJD2pGY1GSKXSnN4uMpkMDg4OkEwmIRAIUFZWBo1GQ0Vxer0eoVAIjx8/\nxi9/+UssLi5ie3v7M4NKIpHA9vY2DAYDOBwOKisrae00EAjkdGMkqmUAKCoqooNxDg8PUV1dDT6f\nj0AggE8++QTXrl2D0+mE1+s9oeQ/vuFtb28jk8nAbDZTjcPc3FxON0YS+Egdk7jNKRQKZLNZaLVa\nGI1GJJNJbG1t4Ve/+hXGxsZgs9kQDofpoeXZ5zA9PQ2tVotLly6hvLwce3t7OTU6IilFiUQCuVxO\nSy5dXV1obGykh0atVotEIoFHjx7h2rVrNHhGIhHq9X+8w2V/fx8ej4cGMp1OB7lcnpM1E0h6VqVS\n0Xoyuf1WVlairKyMTrEMhUL4+OOPMTIyguXlZezt7eHg4ICmocnvQFT0AOjfTyAQ5LRExOVyodPp\nUFxcTL9/MpmE2WymrWdlZWUwGo3IZrNYX1/HzZs36SyRYDBIdThEkJzNZqk/wPnz51FTUwOJRPLC\nTp//C6Sv32AwUA8OLpeL4uJiVFdXo6+vDyUlJbS+PDs7i4cPH2JxcRFOp5Mq6Y+OjhCLxZDJZMDl\ncrG2tobNzU1aHsj1uoEnWqHy8nIoFAqk02l4vV4kEglq79ve3k7niIhEIjgcDqytrcFut2N5eRkO\nh4N2sRBdg1QqRSgUAo/Ho4eKXJeIjmcUeTwevF4vHWVOhK79/f1ob2+ngsTDw0OqXSEe/7u7uwiF\nQlTY2traisPDQ5rlEovFuVlvTr5LAZBKpbC/v4/FxUVYLBbU1dWhoqICRUVF9Aa3traGxcVFrKys\nIBKJvLBuQm4QRMErl8vB4/HyooY+3v6jUCigVCqh0+lo0HM4HJiZmcH09DRcLtcL1fukLgo8sd9V\nq9VIJBJUA5FL+Hw+rSlLpVKYzWbo9XpkMhkIBAJ4vV46rnJ6evr3CiYTiQSi0SjtDjCZTDkPRMCT\n5y0Wi2nbGyldkINBJBKBy+XC48eP8fDhQzpt67P+9plMhm6QRNl9vI00V2smhxaiGheJRFS/Qurj\nfr8fDocD4+PjGBkZgc/ne2GpSigUIpVK0e9NWrlyCWm9Im20pP2srKwMpaWl0Ov1ODw8pF4LDx48\nwNjYGLXrftZwidT8s9ks/b48Hi/nrXMkK0TSzKQWq9PpIBQKUVZWBrVajUwmA4fDQceyrq2tUce8\nZ/cYPp9PdQIikehE62UuIaJn0lZLsi8kK1VTUwOlUolYLAar1YoHDx7gzp072N7eht/v/51WW5Ld\nI8+efE8g934iMpmMTjYlz0atVqO6uhr9/f3o6OhATU0NHcB1//59WK1WbG1tYXt7Gz6f73fedyK+\nJcLwfJh4CYVCKhQnezq5UBQXF2NgYADt7e2RSEV1AAASMElEQVSwWCy0NLS6ugqPx4PNzU3Mzs7S\nsi0AKvgjX8FgEH6/P2cH3Ncm+B8dHSEQCGB8fBwVFRW4cOECTe1ms1lsbW3B4XBgb2/vc6e/yaZF\nPvTJZDLn7Wekxp/NZnFwcACZTAaBQEAFgOFwGIuLi5idnaUn4BdBgoREIqFTDkk7Y64FfxKJhLa8\nkUPS8b739fV13Lt3D1ar9XPZtpINRqfTQafT0bXnErIG0sZEbtRCoRDRaBTxeByJRALT09P0xv/7\nJp2R561QKCCXy3F0dPR7D5f/V4jjGdmAZTIZveEkk0l4vV5MTExgbGwMa2trLzyskuBmNptp98De\n3l7ORxgT8RlprSTr1mq19GcS9f/4+DjGx8exsbHxwmfO4/FQVlaGmpoaeoPKhzcECdKk/ZAcFomA\nkrRZPnjwAKOjo3SY1fPWTp4B0QoYDAZks9m8jBk/vneRA6NarYbZbKY300gkgvX1dQwPD+PevXuY\nnJz8zF5ycsDX6XQwmUy0RTcfrX6klZW0S8pkMpSVleHw8BA9PT1Uw0C0W9evX8fKyspztQekTZq0\nxqZSKbjd7pxqccjPITV/vV5Pnz8pP1dVVeG9996DVCpFLBbD+vo6RkZGcOPGDbjd7t/ZH8m/12g0\nkMvlyGQyWF9fx8LCQs6Eiq9N8M9ms4jFYtjY2MDo6CiMRiMaGxupepK00RmNRpSWlmJnZ+e5t1Cy\nuZaVlaGxsfGEIQkx5sjlCx+LxeBwOHDv3j2k02m88cYbaGxsBJfLRSaToe184XCY9hA/b93kJTeZ\nTGhsbMTZs2fR1dVF+1uJyCuXeL1eWK1WrK6uQi6XQyaTUfMV0vNutVoRCAToAeR5iMViKJVKNDU1\nobOzEx0dHRAIBJifn4fH48n5BpNOp+F0OrG5uUlFkOFwGOvr61haWqJ6Crfb/ZmBkNTMNRoNSktL\ncfr0aQwODkIkEsHtdmN1dTWnamIivopGo/D7/dQFMpVKYXl5GWNjY/B6vfD5fPB6vXC73b8T+EkQ\nJhsq6UW+evUqLBYLAoEAwuFwzgdhkaxWMpmkAePo6AiJRAIPHjygbWd7e3vwer3wer2/s+7jWQ+T\nyYSKigoMDQ2htrYWdrsdDocjL1m5Z01uuFwuYrEYlpaWcOvWLRwcHCAajcLtdtORssdNxci6Sfaw\ntrYWAwMDaG1thdvtxvXr1/PSoigWi6nol9zWU6kUtra2YLVa8eGHHyKVSiEQCGB7e5v2/BOzIhLM\nSDuaxWJBc3MzHcR0/fp1fPrppzn31OdwODCZTKirq6OtuHw+H16vF+Pj43A6nZBIJEin09jd3cX2\n9jZcLhd9r8j+rVKpoFar
qedLZ2cnSktLYbPZ8B//8R+Ynp7O6bpJ+3BNTQ0sFgs4HA7ttFpfX4fN\nZoPL5cLh4SFisRjtHiNzCcjBUC6XQ6fTUcFjc3Mz6uvrsbu7i9HRUQwPD+dsZPRrFfyTySRVSiqV\nSmSzWeqdTxT9YrEYWq2Wpk+Of1BJakypVKKtrQ39/f3o7++HWq2mJjC57t0ma+ZwOEgmk6ioqIDB\nYIBUKqVqbtKTSj6Mx0/nx29URCtw6dIldHd3o6qqClarFRsbG3kxzNnf38f6+jrW19dhNBqh1+vB\n4/EQi8Wws7MDh8MBt9uNRCIBqVRKbzjkQ0rqthqNBhaLBWfOnMGZM2dQXl4Ov9+PqampzxRk/jGk\n02m43W64XC7s7+9DIBAgGo1id3cXNpsNjx49ol0jAGiGgDx3kikQi8WorKxEa2sr3nzzTVRUVCAe\nj8Plcp1oHcwVmUyGiiITiQRNX7rdbupzvre3R9f5bOaEBE+VSgWDwYDKykqcP38eV65cwebmJtbW\n1hAMBnOe9ieHlkQi8Tti2YWFBdy+fRvhcBiZTOZE5ur42mUyGVQqFZ3H0NLSgp6eHigUCty8eTNv\nJlykO4Ic+smafD7fiRnyxw8HRARLyhIqlQp6vR5FRUXo7Oyk/d/z8/O4ceMGdnZ2cr5ukUgEvV4P\nmUwGALRkeXBwgNnZWWxsbNCSyvG973h5UK1Ww2AwoKioCA0NDejp6UFNTQ2SySTNdOSy9Zn8XK1W\nC4vFQoVwXC4XoVCIiv7i8fgJkyHyvhAfBYlEQn06Kioq0NXVhbq6OmSzWfrMc93mRwzByM0fAM3c\nkr1maWmJrp34rpCMKfGwKC4uRk1NDRobG9Hc3IyWlhbEYjHMzs5icnISCwsLOVvzaxP8AdAbxf7+\nPux2O1QqFWKxGEwmE2w2G22NS6VSNEVNUkXEOpe0xgwNDaG7u5sGIqLczccGQ3o59/b2YLPZqKNZ\nKpWC1+vF3t4eEokEXSM5GABP7Ts1Gg3Ky8vR0dGB8+fPo7i4GBwOBysrK1hYWMiLOVEikaCmH8QD\ngXxgye/gcDigUqnohxN4EgxIvZO0I7W2tqK7uxuNjY1IpVJwOp24desW1tbWcr5uUj/zer3weDxU\nWCQSiVBaWoqBgQG4XK4TTlwCgYAGLuJDoNfr0dbWhra2NtTU1CAej8NqtcJms31mZun/Cnm3yc2M\nBKVsNksdxYholGRYiMkMEZyp1WpYLBZqXlRbW4uysjJIpVI64S0fU8aSySSCwSA9UB0eHtLZCsTJ\njKTWiY6ElGWIlW9NTQ3q6urQ1NSE8vJyKlZbW1vD8PBwTjdFwtHREXZ3d+FyuWgbsFAohE6ng8Fg\noDoRYoBD0styuRxSqRSpVApKpRINDQ1UUGkymWjXAPGLyHULLoATwZwEeJJGJnsfAPrsidaIuIam\nUinqy9Da2oqKigrodDqEQiFqTJQvPwixWAypVHrCyvm4vTAJnOSZE1c8opOSyWRoa2tDU1MT9Q1J\nJpO4efMmPbDk+h0/7jJIdAqxWAzhcJiKx8m7T3xoyLtCtFJ8Ph+VlZXo6upCdXU1HZM+OzuLf/3X\nf8XKykpO1/xaBX/gySZJUi2pVAobGxtQq9Xw+XzUHSmVStGbMo/HoyINuVyO8vJy+mEtKipCIpHA\n4uIiPvnkk7yc0AnEX358fBy7u7vQarXIZDIIh8NYXl6mnvPHNx6SulWpVLQ9hjjSkXICMfXIx+Sr\nw8NDepMIBoNQq9XUUzsUCmFnZwc+nw8ajQa1tbUwGo0QiURIp9OQy+V0mqHRaKQZD1IyON6elo91\nB4NBWK1WXLt2DXq9HiKRCD6fD36/nz5ro9GI+vp6ardMMj/keSsUCqr2DgaDWFlZwb1792C32/Ni\n25rNZmn6MxAIoLy8HDqdDul0GuXl5VCpVDTgEL0LCVDxeJy6oen1euj1emg0GkSjUdy7dw9jY2NY\nWFigg5pyydHREZLJJKampqBUKtHR0QGNRgOJRIK2tjbw+XzqzXHcClckEtGSAWmfKi4upulgolC3\nWq3Y29vL6ZoBUB2OzWbDBx98gHPnzqGjowNarRatra0IBoNUCErWTb7IfBByCzWbzTAYDMhkMnA6\nnVheXsbU1BT997lmf38fjx49orqEhoYGqNVq9PT0QC6Xo76+HsDTdl3ieEkscROJBEwmE0pKSmgN\n2+l0YnFxkQqP82Fjnc1msba2hgcPHqChoQEGgwFCoRDt7e00jU5+7vGyBDm8HB0dUdMllUqFdDpN\n/Tnu3LkDq9WaF7Ef0fnYbDbo9XqUlJRAKpWisbERb7/9Nvb29k6YPBEbYqlUSq3aiZi0rKwM8Xgc\nS0tL2Nvbw/379zE3N5d7nUJOv1vuyNkb9TzPebFYTNPUer0eAwMDqK6upiKiuro6KgBbWVnBr371\nK/z7v/87NYPJN8+qxLPZLE3td3Z2YmBggLZ9EDMJUucSCoVIp9OYmprCnTt3cP36dSwuLubVPxx4\n6g1OUvvH669DQ0N444030NzcTIOVRqOBVqs9URPd29uD3W7Hr3/9a4yOjsJut+d1WhqpbZJUHUnH\nJZNJGAwGtLS04Ac/+AFaW1vp7IRUKkVT0uT3DIVCmJycxN27d/HJJ58gEAjkdb44ud0QM6WysjIa\nKGUyGZqamuihhGgFiHUxaYcjg6smJibwwQcfYG5uLm91cwLZlN9++2309/ejra3txM9Tq9W0X5/c\nnojIlvhgxGIx+P1+uN1u/L//9/9w584deDyevPq1E8Owv/iLv8Df/d3f0QwWqdXy+XxoNBp6WyVZ\nmuMOj0TT4HA48OjRI7z//vvY2trKubiSQGrf5Cb5l3/5l+jq6gIAWn45PnvguCMqKS8RUaPL5cLK\nygpmZmYwPj6O2dnZvNhuE8xmM+rr63H16lUMDAygpaUFQqGQrgd4ahxF3hXg6SGTPHfSgTE7O4v5\n+XmsrKz8zuCuXGIwGPDGG2/gzJkz6O/vh8FgoC6ixGGQlDqJtwt55kSwSv42H330Ee7du4fV1VVs\nbW0919Plc/DC+P7a3fyf5XkbGTEDIvUgtVpNFdMkAHk8Huzs7MDlcmF1dZX2F7+qNZNN3O12w2q1\n4vTp01QIs7KyAp/PB4FAgHg8Do/HQ806dnZ28h74yfpIDe5ZxbDdbqctdDU1NdBqtYjFYlTwQuyM\nPR4Ptra2sLy8DLfbnfeDFllnMBikdd3jA56I6JAI5I7XGslGHolEqPZhY2ODOtDle93JZBIejwfD\nw8OQy+U06EgkEpSXl2NwcBBXr14FADpDIRaL0WwY0QdsbGxgeXmZ2j/n8105PDyE3+/H/fv36Q3p\neBdKS0sLzp49S70uyFATMgHN7/fTkorH48HS0hL8fn9ebnLPrjsWi2FychI/+clPqJNgMBikxktD\nQ0NobGyk9tuJRAI7OztUr0NmEOzs7GBzcxM7Ozt5G7gFgAZx8m5OTU1RbQJ5fnK5H
BUVFeju7qat\nzORd2d7epnqj5eVl2t9PXAvz+dkMh8PUPZBkN2UyGXUbjEajVJRtNpupmDiRSMDpdFItz/r6OmZn\nZ7GzswOv10snbuaLaDSK2dlZehFqb2+nbbnRaJR6fpCOJtI6SQzl1tbWsL29Da/XS51N9/f3c37j\nJ7z2wf95kA9zOp0Gj8fD3t4ekskk9vb2aG8usUQlm/nLCKAvgpwO9/b26MAOUrsjbYxHR0cIh8N0\n4Ei+J4Y9b33Pg7TWcDgcBINBlJSUUPGR3W6nLYw+n4/qG17mjHEyivc4sVgMXC4XExMTiEajtG6Y\nSCSoNeji4iKdxUDcIPMdiAikCyQYDJ74G5OaNFkz8DT4k6FVi4uLsNvt8Pl8CIVCed8UCSSNbrVa\nsby8TDtaiPHQ5uYmUqkU6uvrYTAYkE6n6S2f2M4uLCzQoEpEj/mGHAbJZnx8qBaPx4NaraaHSKVS\nSW9xTqcTHo8H4XAYa2trsNls8Pv9iEajeR+fTLIPBwcHcLlc1K6cy+XC4XBga2sLCoUC9fX1SCaT\n1AgtHo/D5/NhfX0dfr8fPp/vxPChfI9PBp589ohwm4zSlslkSKVS1I6deOhXVFSAy+UinU5TJ8D1\n9XVq80tKvS/jPSGzNo53VwSDQQgEAloGzWazdEIrMeuJRCLwer1YWFigB8Pt7W1aKsjX837t0/6f\nBZfLpdOfgKc3a3IbJCnglzFz+w/huLKf+AOQoAQ8VYQTYUkhcNyaViqVQiQSnUh1kXkKZG51vm+g\nnxeBQAC9Xk+NUsiGSurQZBMnxiGFsG5STyQpdOBpVoa8y8TPgHQLvOz35PjAG+BpposIt4hJElk3\nSeMS/4hUKvXSnzd5rsSDg9ysyUZ/fEQ18NRsizxjkjUipcOXtW6S/lcqlfSmSdqWiSiQBH6SniaB\nlHweScYo3wcWAhH3icViKBQK6kvwrNsg0YYATw87x5/zcZHdy3zeZBQ10TRxOBzqf0J+P6KNAnDi\n3SZp/+NtsX8EL4zvX9rgz2AwGAzGa8wL43tu7dMYDAaDwWAUPCz4MxgMBoPxJYMFfwaDwWAwvmSw\n4M9gMBgMxpcMFvwZDAaDwfiSwYI/g8FgMBhfMljwZzAYDAbjSwYL/gwGg8FgMBgMBoPBYDAYDAaD\nwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPB\nYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8Fg\nMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAwGAwGg8FgMBgMBoPBYDAYDAaDwWAw\nGAwGg8FgMBgMBoPxmvP/AYtuX2nX3FZBAAAAAElFTkSuQmCC\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "# ADAM variables are causing the checkpoint reload to choke, so omit them when \n",
+ "# doing variable remapping.\n",
+ "var_dict = {x.op.name: x for x in \n",
+ " tf.contrib.framework.get_variables('Generator/') \n",
+ " if 'Adam' not in x.name}\n",
+ "tf.contrib.framework.init_from_checkpoint(\n",
+ " './mnist/data/infogan_model.ckpt', var_dict)\n",
+ "with tf.Session() as sess:\n",
+ " sess.run(tf.global_variables_initializer())\n",
+ " display_img_np = sess.run(display_img)\n",
+ "plt.axis('off')\n",
+ "plt.imshow(np.squeeze(display_img_np), cmap='gray')\n",
+ "plt.show()"
+ ]
+ }
+ ],
+ "metadata": {
+ "celltoolbar": "Edit Metadata",
+ "kernelspec": {
+ "display_name": "Python 2",
+ "language": "python",
+ "name": "python2"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 2
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython2",
+ "version": "2.7.6"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/global_objectives/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/global_objectives/README.md"
new file mode 100644
index 00000000..ae4a303f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/global_objectives/README.md"
@@ -0,0 +1,148 @@
+# Global Objectives
+The Global Objectives library provides TensorFlow loss functions that optimize
+directly for a variety of objectives including AUC, recall at precision, and
+more. The global objectives losses can be used as drop-in replacements for
+TensorFlow's standard multilabel loss functions:
+`tf.nn.sigmoid_cross_entropy_with_logits` and `tf.losses.sigmoid_cross_entropy`.
+
+Many machine learning classification models are optimized for classification
+accuracy, even though the objective the user actually cares about is
+different, such as precision at a fixed recall, precision-recall AUC, or ROC
+AUC. These are referred to as "global objectives" because they depend on how
+the model classifies the dataset as a whole and do not decompose across
+individual data points the way accuracy does.
+
+Because these objectives are combinatorial, discontinuous, and essentially
+intractable to optimize directly, the functions in this library approximate
+their corresponding objectives. This approximation approach follows the same
+pattern as optimizing for accuracy, where a surrogate objective such as
+cross-entropy or the hinge loss is used as an upper bound on the error rate.
+
+## Getting Started
+For a full example of how to use the loss functions in practice, see
+loss_layers_example.py.
+
+Briefly, global objective losses can be used to replace
+`tf.nn.sigmoid_cross_entropy_with_logits` by providing the relevant
+additional arguments. For example,
+
+``` python
+tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
+```
+
+could be replaced with
+
+``` python
+global_objectives.recall_at_precision_loss(
+ labels=labels,
+ logits=logits,
+ target_precision=0.95)[0]
+```
+
+Just as minimizing the cross-entropy loss will maximize accuracy, the loss
+functions in loss_layers.py were written so that minimizing the loss will
+maximize the corresponding objective.
+
+The global objective losses have two return values -- the loss tensor and
+additional quantities for debugging and customization -- which is why the first
+value is used above. For more information, see
+[Visualization & Debugging](#visualization-debugging).
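+
+Both return values can also be unpacked explicitly. A minimal sketch with toy
+tensors (the values below are illustrative only):
+
+``` python
+import tensorflow as tf
+
+from global_objectives import loss_layers
+
+# Toy multilabel data: 3 examples, 2 labels.
+labels = tf.constant([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
+logits = tf.constant([[2.0, -1.0], [-1.0, 3.0], [0.5, 0.5]])
+
+loss, other_outputs = loss_layers.recall_at_precision_loss(
+    labels=labels, logits=logits, target_precision=0.95)
+# The first value is the tensor to minimize; the second holds debugging
+# tensors such as the Lagrange multipliers under the key 'lambdas'.
+total_loss = tf.reduce_mean(loss)
+```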
+
+## Binary Label Format
+Binary classification problems can be represented as a multi-class problem with
+two classes, or as a multi-label problem with one label. (Recall that multiclass
+problems have mutually exclusive classes, e.g. 'cat xor dog', while multilabel
+problems have classes which are not mutually exclusive, e.g. an image can
+contain a cat, a dog, both, or neither.) The softmax loss
+(`tf.nn.softmax_cross_entropy_with_logits`) is used for multi-class problems,
+while the sigmoid loss (`tf.nn.sigmoid_cross_entropy_with_logits`) is used for
+multi-label problems.
+
+A multiclass label format for binary classification might represent positives
+with the label [1, 0] and negatives with the label [0, 1], while the multilabel
+format for the same problem would use [1] and [0], respectively.
+
+All global objectives loss functions assume that the multilabel format is used.
+Accordingly, if your current loss function is softmax, the labels will have to
+be reformatted for the loss to work properly.
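+
+As an illustrative sketch (not part of the library), one-hot softmax-style
+labels for a binary problem can be converted to the multilabel format by
+keeping only the positive-class column:
+
+``` python
+import tensorflow as tf
+
+# Hypothetical one-hot labels of shape [batch_size, 2]:
+# [1, 0] = positive, [0, 1] = negative.
+multiclass_labels = tf.constant([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
+
+# Keep only the positive-class column, giving multilabel labels of shape
+# [batch_size, 1]: [1], [0], [1].
+multilabel_labels = multiclass_labels[:, 0:1]
+```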
+
+## Dual Variables
+Global objectives losses (except for `roc_auc_loss`) use internal variables
+called dual variables or Lagrange multipliers to enforce the desired constraint
+(e.g. if optimizing for recall at precision, the constraint is on precision).
+
+These dual variables are created and initialized internally by the loss
+functions, and are updated during training by the same optimizer used for the
+model's other variables. To initialize the dual variables to a particular value,
+use the `lambdas_initializer` argument. The dual variables can be found under
+the key `lambdas` in the `other_outputs` dictionary returned by the losses.
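+
+For example, a sketch of initializing the dual variables to a non-default value
+and reading them back (placeholder shapes are illustrative only):
+
+``` python
+import tensorflow as tf
+
+from global_objectives import loss_layers
+
+labels = tf.placeholder(tf.float32, shape=[None, 4])
+logits = tf.placeholder(tf.float32, shape=[None, 4])
+
+loss, other_outputs = loss_layers.recall_at_precision_loss(
+    labels=labels,
+    logits=logits,
+    target_precision=0.95,
+    lambdas_initializer=tf.constant_initializer(2.0))
+# The dual variables (Lagrange multipliers), one per label.
+lambdas = other_outputs['lambdas']
+```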
+
+## Loss Function Arguments
+The following arguments are common to all loss functions in the library, and are
+either required or very important.
+
+* `labels`: Corresponds directly to the `labels` argument of
+ `tf.nn.sigmoid_cross_entropy_with_logits`.
+* `logits`: Corresponds directly to the `logits` argument of
+ `tf.nn.sigmoid_cross_entropy_with_logits`.
+* `dual_rate_factor`: A floating point value which controls the step size for
+ the Lagrange multipliers. Setting this value less than 1.0 will cause the
+ constraint to be enforced more gradually and will result in more stable
+ training.
+
+In addition, the objectives with a single constraint (e.g.
+`recall_at_precision_loss`) have an argument (e.g. `target_precision`) used to
+specify the value of the constraint. The optional `precision_range` argument to
+`precision_recall_auc_loss` is used to specify the range of precision values
+over which to optimize the AUC, and defaults to the interval [0, 1].
+
+Optional arguments:
+
+* `weights`: A tensor which acts as coefficients for the loss. If a weight of x
+ is provided for a datapoint and that datapoint is a true (false) positive
+ (negative), it will be counted as x true (false) positives (negatives).
+ Defaults to 1.0.
+* `label_priors`: A tensor specifying the fraction of positive datapoints for
+ each label. If not provided, it will be computed inside the loss function.
+* `surrogate_type`: Either 'xent' or 'hinge', specifying which upper bound
+ should be used for indicator functions.
+* `lambdas_initializer`: An initializer for the dual variables (Lagrange
+ multipliers). See also the Dual Variables section.
+* `num_anchors` (precision_recall_auc_loss only): The number of grid points used
+ when approximating the AUC as a Riemann sum.
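+
+Putting several of these arguments together, a usage sketch for
+`precision_recall_auc_loss` (example values only):
+
+``` python
+import tensorflow as tf
+
+from global_objectives import loss_layers
+
+labels = tf.placeholder(tf.float32, shape=[None, 10])
+logits = tf.placeholder(tf.float32, shape=[None, 10])
+
+loss, other_outputs = loss_layers.precision_recall_auc_loss(
+    labels=labels,
+    logits=logits,
+    precision_range=(0.5, 1.0),  # optimize AUC over precisions in [0.5, 1.0]
+    num_anchors=20,              # grid points for the Riemann sum
+    dual_rate_factor=0.1,        # smaller values give more gradual enforcement
+    surrogate_type='xent')       # cross-entropy upper bound on indicators
+```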
+
+## Hyperparameters
+While the functional form of the global objectives losses allows them to be
+easily substituted in place of `sigmoid_cross_entropy_with_logits`, model
+hyperparameters such as learning rate, weight decay, etc. may need to be
+fine-tuned to the new loss. Fortunately, the amount of hyperparameter re-tuning
+is usually minor.
+
+The most important hyperparameters to modify are the learning rate and
+`dual_rate_factor` (see the section on Loss Function Arguments, above).
+
+## Visualization & Debugging
+The global objectives losses return two values. The first is a tensor
+representing the numerical value of the loss, which can be passed to an
+optimizer. The second is a dictionary of tensors created by the loss function
+which are not necessary for optimization but useful in debugging. These vary
+depending on the loss function, but usually include `lambdas` (the Lagrange
+multipliers) as well as the lower bound on true positives and upper bound on
+false positives.
+
+When visualizing the loss during training, note that the global objectives
+losses differ from standard losses in some important ways:
+
+* The global losses may be negative. This is because the value returned by the
+ loss includes terms involving the Lagrange multipliers, which may be negative.
+* The global losses may not decrease over the course of training. To enforce the
+ constraints in the objective, the loss changes over time and may increase.
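+
+One way (not specific to this library) to monitor these quantities is to attach
+scalar summaries for TensorBoard, assuming `loss` and `other_outputs` come from
+a loss function above that exposes `lambdas`:
+
+``` python
+import tensorflow as tf
+
+tf.summary.scalar('global_objectives/loss', tf.reduce_mean(loss))
+tf.summary.scalar('global_objectives/lambdas',
+                  tf.reduce_mean(other_outputs['lambdas']))
+tf.summary.scalar(
+    'global_objectives/true_positives_lower_bound',
+    tf.reduce_mean(other_outputs['true_positives_lower_bound']))
+summary_op = tf.summary.merge_all()
+```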
+
+## More Info
+For more details, see the [Global Objectives paper](https://arxiv.org/abs/1608.04802).
+
+## Maintainers
+
+* Mariano Schain
+* Elad Eban
+* [Alan Mackey](https://github.com/mackeya-google)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/global_objectives/loss_layers.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/global_objectives/loss_layers.py"
new file mode 100644
index 00000000..eaea0539
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/global_objectives/loss_layers.py"
@@ -0,0 +1,930 @@
+# Copyright 2018 The TensorFlow Global Objectives Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Loss functions for learning global objectives.
+
+These functions have two return values: a Tensor with the value of
+the loss, and a dictionary of internal quantities for customizability.
+"""
+
+# Dependency imports
+import numpy
+import tensorflow as tf
+
+from global_objectives import util
+
+
+def precision_recall_auc_loss(
+ labels,
+ logits,
+ precision_range=(0.0, 1.0),
+ num_anchors=20,
+ weights=1.0,
+ dual_rate_factor=0.1,
+ label_priors=None,
+ surrogate_type='xent',
+ lambdas_initializer=tf.constant_initializer(1.0),
+ reuse=None,
+ variables_collections=None,
+ trainable=True,
+ scope=None):
+ """Computes precision-recall AUC loss.
+
+ The loss is based on a sum of losses for recall at a range of
+ precision values (anchor points). This sum is a Riemann sum that
+ approximates the area under the precision-recall curve.
+
+ The per-example `weights` argument changes not only the coefficients of
+ individual training examples, but how the examples are counted toward the
+ constraint. If `label_priors` is given, it MUST take `weights` into account.
+ That is,
+ label_priors = P / (P + N)
+ where
+ P = sum_i (wt_i on positives)
+ N = sum_i (wt_i on negatives).
+
+ Args:
+ labels: A `Tensor` of shape [batch_size] or [batch_size, num_labels].
+ logits: A `Tensor` with the same shape as `labels`.
+ precision_range: A length-two tuple, the range of precision values over
+ which to compute AUC. The entries must be nonnegative, increasing, and
+ less than or equal to 1.0.
+ num_anchors: The number of grid points used to approximate the Riemann sum.
+ weights: Coefficients for the loss. Must be a scalar or `Tensor` of shape
+ [batch_size] or [batch_size, num_labels].
+ dual_rate_factor: A floating point value which controls the step size for
+ the Lagrange multipliers.
+ label_priors: None, or a floating point `Tensor` of shape [num_labels]
+ containing the prior probability of each label (i.e. the fraction of the
+ training data consisting of positive examples). If None, the label
+ priors are computed from `labels` with a moving average. See the notes
+ above regarding the interaction with `weights` and do not set this unless
+ you have a good reason to do so.
+ surrogate_type: Either 'xent' or 'hinge', specifying which upper bound
+ should be used for indicator functions.
+ lambdas_initializer: An initializer for the Lagrange multipliers.
+ reuse: Whether or not the layer and its variables should be reused. To be
+ able to reuse the layer scope must be given.
+ variables_collections: Optional list of collections for the variables.
+ trainable: If `True` also add variables to the graph collection
+ `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
+ scope: Optional scope for `variable_scope`.
+
+ Returns:
+ loss: A `Tensor` of the same shape as `logits` with the component-wise
+ loss.
+ other_outputs: A dictionary of useful internal quantities for debugging. For
+ more details, see http://arxiv.org/pdf/1608.04802.pdf.
+ lambdas: A Tensor of shape [1, num_labels, num_anchors] consisting of the
+ Lagrange multipliers.
+ biases: A Tensor of shape [1, num_labels, num_anchors] consisting of the
+ learned bias term for each.
+ label_priors: A Tensor of shape [1, num_labels, 1] consisting of the prior
+ probability of each label learned by the loss, if not provided.
+ true_positives_lower_bound: Lower bound on the number of true positives
+ given `labels` and `logits`. This is the same lower bound which is used
+ in the loss expression to be optimized.
+ false_positives_upper_bound: Upper bound on the number of false positives
+ given `labels` and `logits`. This is the same upper bound which is used
+ in the loss expression to be optimized.
+
+ Raises:
+ ValueError: If `surrogate_type` is not `xent` or `hinge`.
+ """
+ with tf.variable_scope(scope,
+ 'precision_recall_auc',
+ [labels, logits, label_priors],
+ reuse=reuse):
+ labels, logits, weights, original_shape = _prepare_labels_logits_weights(
+ labels, logits, weights)
+ num_labels = util.get_num_labels(logits)
+
+ # Convert other inputs to tensors and standardize dtypes.
+ dual_rate_factor = util.convert_and_cast(
+ dual_rate_factor, 'dual_rate_factor', logits.dtype)
+
+ # Create Tensor of anchor points and distance between anchors.
+ precision_values, delta = _range_to_anchors_and_delta(
+ precision_range, num_anchors, logits.dtype)
+ # Create lambdas with shape [1, num_labels, num_anchors].
+ lambdas, lambdas_variable = _create_dual_variable(
+ 'lambdas',
+ shape=[1, num_labels, num_anchors],
+ dtype=logits.dtype,
+ initializer=lambdas_initializer,
+ collections=variables_collections,
+ trainable=trainable,
+ dual_rate_factor=dual_rate_factor)
+ # Create biases with shape [1, num_labels, num_anchors].
+ biases = tf.contrib.framework.model_variable(
+ name='biases',
+ shape=[1, num_labels, num_anchors],
+ dtype=logits.dtype,
+ initializer=tf.zeros_initializer(),
+ collections=variables_collections,
+ trainable=trainable)
+ # Maybe create label_priors.
+ label_priors = maybe_create_label_priors(
+ label_priors, labels, weights, variables_collections)
+ label_priors = tf.reshape(label_priors, [1, num_labels, 1])
+
+ # Expand logits, labels, and weights to shape [batch_size, num_labels, 1].
+ logits = tf.expand_dims(logits, 2)
+ labels = tf.expand_dims(labels, 2)
+ weights = tf.expand_dims(weights, 2)
+
+ # Calculate weighted loss and other outputs. The log(2.0) term corrects for
+ # logloss not being an upper bound on the indicator function.
+ loss = weights * util.weighted_surrogate_loss(
+ labels,
+ logits + biases,
+ surrogate_type=surrogate_type,
+ positive_weights=1.0 + lambdas * (1.0 - precision_values),
+ negative_weights=lambdas * precision_values)
+ maybe_log2 = tf.log(2.0) if surrogate_type == 'xent' else 1.0
+ maybe_log2 = tf.cast(maybe_log2, logits.dtype.base_dtype)
+ lambda_term = lambdas * (1.0 - precision_values) * label_priors * maybe_log2
+ per_anchor_loss = loss - lambda_term
+ per_label_loss = delta * tf.reduce_sum(per_anchor_loss, 2)
+ # Normalize the AUC such that a perfect score function will have AUC 1.0.
+ # Because precision_range is discretized into num_anchors + 1 intervals
+ # but only num_anchors terms are included in the Riemann sum, the
+ # effective length of the integration interval is `delta` less than the
+ # length of precision_range.
+ scaled_loss = tf.div(per_label_loss,
+ precision_range[1] - precision_range[0] - delta,
+ name='AUC_Normalize')
+ scaled_loss = tf.reshape(scaled_loss, original_shape)
+
+ other_outputs = {
+ 'lambdas': lambdas_variable,
+ 'biases': biases,
+ 'label_priors': label_priors,
+ 'true_positives_lower_bound': true_positives_lower_bound(
+ labels, logits, weights, surrogate_type),
+ 'false_positives_upper_bound': false_positives_upper_bound(
+ labels, logits, weights, surrogate_type)}
+
+ return scaled_loss, other_outputs
+
+
+def roc_auc_loss(
+ labels,
+ logits,
+ weights=1.0,
+ surrogate_type='xent',
+ scope=None):
+ """Computes ROC AUC loss.
+
+ The area under the ROC curve is the probability p that a randomly chosen
+ positive example will be scored higher than a randomly chosen negative
+ example. This loss approximates 1-p by using a surrogate (either hinge loss or
+ cross entropy) for the indicator function. Specifically, the loss is:
+
+ sum_i sum_j w_i*w_j*loss(logit_i - logit_j)
+
+ where i ranges over the positive datapoints, j ranges over the negative
+ datapoints, logit_k denotes the logit (or score) of the k-th datapoint, and
+ loss is either the hinge or log loss given a positive label.
+
+ Args:
+ labels: A `Tensor` of shape [batch_size] or [batch_size, num_labels].
+ logits: A `Tensor` with the same shape and dtype as `labels`.
+ weights: Coefficients for the loss. Must be a scalar or `Tensor` of shape
+ [batch_size] or [batch_size, num_labels].
+ surrogate_type: Either 'xent' or 'hinge', specifying which upper bound
+ should be used for the indicator function.
+ scope: Optional scope for `name_scope`.
+
+ Returns:
+ loss: A `Tensor` of the same shape as `logits` with the component-wise loss.
+ other_outputs: An empty dictionary, for consistency.
+
+ Raises:
+ ValueError: If `surrogate_type` is not `xent` or `hinge`.
+ """
+ with tf.name_scope(scope, 'roc_auc', [labels, logits, weights]):
+ # Convert inputs to tensors and standardize dtypes.
+ labels, logits, weights, original_shape = _prepare_labels_logits_weights(
+ labels, logits, weights)
+
+ # Create tensors of pairwise differences for logits and labels, and
+ # pairwise products of weights. These have shape
+ # [batch_size, batch_size, num_labels].
+ logits_difference = tf.expand_dims(logits, 0) - tf.expand_dims(logits, 1)
+ labels_difference = tf.expand_dims(labels, 0) - tf.expand_dims(labels, 1)
+ weights_product = tf.expand_dims(weights, 0) * tf.expand_dims(weights, 1)
+
+ signed_logits_difference = labels_difference * logits_difference
+ raw_loss = util.weighted_surrogate_loss(
+ labels=tf.ones_like(signed_logits_difference),
+ logits=signed_logits_difference,
+ surrogate_type=surrogate_type)
+ weighted_loss = weights_product * raw_loss
+
+    # Zero out entries of the loss where labels_difference is zero (so loss is
+    # only computed on pairs with different labels).
+ loss = tf.reduce_mean(tf.abs(labels_difference) * weighted_loss, 0) * 0.5
+ loss = tf.reshape(loss, original_shape)
+ return loss, {}
+
+
+def recall_at_precision_loss(
+ labels,
+ logits,
+ target_precision,
+ weights=1.0,
+ dual_rate_factor=0.1,
+ label_priors=None,
+ surrogate_type='xent',
+ lambdas_initializer=tf.constant_initializer(1.0),
+ reuse=None,
+ variables_collections=None,
+ trainable=True,
+ scope=None):
+ """Computes recall at precision loss.
+
+ The loss is based on a surrogate of the form
+ wt * w(+) * loss(+) + wt * w(-) * loss(-) - c * pi,
+ where:
+ - w(+) = 1 + lambdas * (1 - target_precision)
+ - loss(+) is the cross-entropy loss on the positive examples
+ - w(-) = lambdas * target_precision
+ - loss(-) is the cross-entropy loss on the negative examples
+ - wt is a scalar or tensor of per-example weights
+ - c = lambdas * (1 - target_precision)
+ - pi is the label_priors.
+
+ The per-example weights change not only the coefficients of individual
+ training examples, but how the examples are counted toward the constraint.
+ If `label_priors` is given, it MUST take `weights` into account. That is,
+ label_priors = P / (P + N)
+ where
+ P = sum_i (wt_i on positives)
+ N = sum_i (wt_i on negatives).
+
+ Args:
+ labels: A `Tensor` of shape [batch_size] or [batch_size, num_labels].
+ logits: A `Tensor` with the same shape as `labels`.
+ target_precision: The precision at which to compute the loss. Can be a
+ floating point value between 0 and 1 for a single precision value, or a
+ `Tensor` of shape [num_labels], holding each label's target precision
+ value.
+ weights: Coefficients for the loss. Must be a scalar or `Tensor` of shape
+ [batch_size] or [batch_size, num_labels].
+ dual_rate_factor: A floating point value which controls the step size for
+ the Lagrange multipliers.
+ label_priors: None, or a floating point `Tensor` of shape [num_labels]
+ containing the prior probability of each label (i.e. the fraction of the
+ training data consisting of positive examples). If None, the label
+ priors are computed from `labels` with a moving average. See the notes
+ above regarding the interaction with `weights` and do not set this unless
+ you have a good reason to do so.
+ surrogate_type: Either 'xent' or 'hinge', specifying which upper bound
+ should be used for indicator functions.
+ lambdas_initializer: An initializer for the Lagrange multipliers.
+ reuse: Whether or not the layer and its variables should be reused. To be
+ able to reuse the layer scope must be given.
+ variables_collections: Optional list of collections for the variables.
+ trainable: If `True` also add variables to the graph collection
+ `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
+ scope: Optional scope for `variable_scope`.
+
+ Returns:
+ loss: A `Tensor` of the same shape as `logits` with the component-wise
+ loss.
+ other_outputs: A dictionary of useful internal quantities for debugging. For
+ more details, see http://arxiv.org/pdf/1608.04802.pdf.
+ lambdas: A Tensor of shape [num_labels] consisting of the Lagrange
+ multipliers.
+ label_priors: A Tensor of shape [num_labels] consisting of the prior
+ probability of each label learned by the loss, if not provided.
+ true_positives_lower_bound: Lower bound on the number of true positives
+ given `labels` and `logits`. This is the same lower bound which is used
+ in the loss expression to be optimized.
+ false_positives_upper_bound: Upper bound on the number of false positives
+ given `labels` and `logits`. This is the same upper bound which is used
+ in the loss expression to be optimized.
+
+ Raises:
+ ValueError: If `logits` and `labels` do not have the same shape.
+ """
+ with tf.variable_scope(scope,
+ 'recall_at_precision',
+ [logits, labels, label_priors],
+ reuse=reuse):
+ labels, logits, weights, original_shape = _prepare_labels_logits_weights(
+ labels, logits, weights)
+ num_labels = util.get_num_labels(logits)
+
+ # Convert other inputs to tensors and standardize dtypes.
+ target_precision = util.convert_and_cast(
+ target_precision, 'target_precision', logits.dtype)
+ dual_rate_factor = util.convert_and_cast(
+ dual_rate_factor, 'dual_rate_factor', logits.dtype)
+
+ # Create lambdas.
+ lambdas, lambdas_variable = _create_dual_variable(
+ 'lambdas',
+ shape=[num_labels],
+ dtype=logits.dtype,
+ initializer=lambdas_initializer,
+ collections=variables_collections,
+ trainable=trainable,
+ dual_rate_factor=dual_rate_factor)
+ # Maybe create label_priors.
+ label_priors = maybe_create_label_priors(
+ label_priors, labels, weights, variables_collections)
+
+ # Calculate weighted loss and other outputs. The log(2.0) term corrects for
+ # logloss not being an upper bound on the indicator function.
+ weighted_loss = weights * util.weighted_surrogate_loss(
+ labels,
+ logits,
+ surrogate_type=surrogate_type,
+ positive_weights=1.0 + lambdas * (1.0 - target_precision),
+ negative_weights=lambdas * target_precision)
+ maybe_log2 = tf.log(2.0) if surrogate_type == 'xent' else 1.0
+ maybe_log2 = tf.cast(maybe_log2, logits.dtype.base_dtype)
+ lambda_term = lambdas * (1.0 - target_precision) * label_priors * maybe_log2
+ loss = tf.reshape(weighted_loss - lambda_term, original_shape)
+ other_outputs = {
+ 'lambdas': lambdas_variable,
+ 'label_priors': label_priors,
+ 'true_positives_lower_bound': true_positives_lower_bound(
+ labels, logits, weights, surrogate_type),
+ 'false_positives_upper_bound': false_positives_upper_bound(
+ labels, logits, weights, surrogate_type)}
+
+ return loss, other_outputs
+
+
+def precision_at_recall_loss(
+ labels,
+ logits,
+ target_recall,
+ weights=1.0,
+ dual_rate_factor=0.1,
+ label_priors=None,
+ surrogate_type='xent',
+ lambdas_initializer=tf.constant_initializer(1.0),
+ reuse=None,
+ variables_collections=None,
+ trainable=True,
+ scope=None):
+ """Computes precision at recall loss.
+
+ The loss is based on a surrogate of the form
+ wt * loss(-) + lambdas * (pi * (b - 1) + wt * loss(+))
+ where:
+ - loss(-) is the cross-entropy loss on the negative examples
+ - loss(+) is the cross-entropy loss on the positive examples
+ - wt is a scalar or tensor of per-example weights
+ - b is the target recall
+ - pi is the label_priors.
+
+ The per-example weights change not only the coefficients of individual
+ training examples, but how the examples are counted toward the constraint.
+ If `label_priors` is given, it MUST take `weights` into account. That is,
+ label_priors = P / (P + N)
+ where
+ P = sum_i (wt_i on positives)
+ N = sum_i (wt_i on negatives).
+
+ Args:
+ labels: A `Tensor` of shape [batch_size] or [batch_size, num_labels].
+ logits: A `Tensor` with the same shape as `labels`.
+ target_recall: The recall at which to compute the loss. Can be a floating
+ point value between 0 and 1 for a single target recall value, or a
+ `Tensor` of shape [num_labels] holding each label's target recall value.
+ weights: Coefficients for the loss. Must be a scalar or `Tensor` of shape
+ [batch_size] or [batch_size, num_labels].
+ dual_rate_factor: A floating point value which controls the step size for
+ the Lagrange multipliers.
+ label_priors: None, or a floating point `Tensor` of shape [num_labels]
+ containing the prior probability of each label (i.e. the fraction of the
+ training data consisting of positive examples). If None, the label
+ priors are computed from `labels` with a moving average. See the notes
+ above regarding the interaction with `weights` and do not set this unless
+ you have a good reason to do so.
+ surrogate_type: Either 'xent' or 'hinge', specifying which upper bound
+ should be used for indicator functions.
+ lambdas_initializer: An initializer for the Lagrange multipliers.
+ reuse: Whether or not the layer and its variables should be reused. To be
+ able to reuse the layer scope must be given.
+ variables_collections: Optional list of collections for the variables.
+ trainable: If `True` also add variables to the graph collection
+ `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
+ scope: Optional scope for `variable_scope`.
+
+ Returns:
+ loss: A `Tensor` of the same shape as `logits` with the component-wise
+ loss.
+ other_outputs: A dictionary of useful internal quantities for debugging. For
+ more details, see http://arxiv.org/pdf/1608.04802.pdf.
+ lambdas: A Tensor of shape [num_labels] consisting of the Lagrange
+ multipliers.
+ label_priors: A Tensor of shape [num_labels] consisting of the prior
+ probability of each label learned by the loss, if not provided.
+ true_positives_lower_bound: Lower bound on the number of true positives
+ given `labels` and `logits`. This is the same lower bound which is used
+ in the loss expression to be optimized.
+ false_positives_upper_bound: Upper bound on the number of false positives
+ given `labels` and `logits`. This is the same upper bound which is used
+ in the loss expression to be optimized.
+ """
+ with tf.variable_scope(scope,
+ 'precision_at_recall',
+ [logits, labels, label_priors],
+ reuse=reuse):
+ labels, logits, weights, original_shape = _prepare_labels_logits_weights(
+ labels, logits, weights)
+ num_labels = util.get_num_labels(logits)
+
+ # Convert other inputs to tensors and standardize dtypes.
+ target_recall = util.convert_and_cast(
+ target_recall, 'target_recall', logits.dtype)
+ dual_rate_factor = util.convert_and_cast(
+ dual_rate_factor, 'dual_rate_factor', logits.dtype)
+
+ # Create lambdas.
+ lambdas, lambdas_variable = _create_dual_variable(
+ 'lambdas',
+ shape=[num_labels],
+ dtype=logits.dtype,
+ initializer=lambdas_initializer,
+ collections=variables_collections,
+ trainable=trainable,
+ dual_rate_factor=dual_rate_factor)
+ # Maybe create label_priors.
+ label_priors = maybe_create_label_priors(
+ label_priors, labels, weights, variables_collections)
+
+ # Calculate weighted loss and other outputs. The log(2.0) term corrects for
+ # logloss not being an upper bound on the indicator function.
+ weighted_loss = weights * util.weighted_surrogate_loss(
+ labels,
+ logits,
+ surrogate_type,
+ positive_weights=lambdas,
+ negative_weights=1.0)
+ maybe_log2 = tf.log(2.0) if surrogate_type == 'xent' else 1.0
+ maybe_log2 = tf.cast(maybe_log2, logits.dtype.base_dtype)
+ lambda_term = lambdas * label_priors * (target_recall - 1.0) * maybe_log2
+ loss = tf.reshape(weighted_loss + lambda_term, original_shape)
+ other_outputs = {
+ 'lambdas': lambdas_variable,
+ 'label_priors': label_priors,
+ 'true_positives_lower_bound': true_positives_lower_bound(
+ labels, logits, weights, surrogate_type),
+ 'false_positives_upper_bound': false_positives_upper_bound(
+ labels, logits, weights, surrogate_type)}
+
+ return loss, other_outputs
+
+
+def false_positive_rate_at_true_positive_rate_loss(
+ labels,
+ logits,
+ target_rate,
+ weights=1.0,
+ dual_rate_factor=0.1,
+ label_priors=None,
+ surrogate_type='xent',
+ lambdas_initializer=tf.constant_initializer(1.0),
+ reuse=None,
+ variables_collections=None,
+ trainable=True,
+ scope=None):
+ """Computes false positive rate at true positive rate loss.
+
+ Note that `true positive rate` is a synonym for Recall, and that minimizing
+ the false positive rate and maximizing precision are equivalent for a fixed
+ Recall. Therefore, this function is identical to precision_at_recall_loss.
+
+ The per-example weights change not only the coefficients of individual
+ training examples, but how the examples are counted toward the constraint.
+ If `label_priors` is given, it MUST take `weights` into account. That is,
+ label_priors = P / (P + N)
+ where
+ P = sum_i (wt_i on positives)
+ N = sum_i (wt_i on negatives).
+
+ Args:
+ labels: A `Tensor` of shape [batch_size] or [batch_size, num_labels].
+ logits: A `Tensor` with the same shape as `labels`.
+ target_rate: The true positive rate at which to compute the loss. Can be a
+ floating point value between 0 and 1 for a single true positive rate, or
+ a `Tensor` of shape [num_labels] holding each label's true positive rate.
+ weights: Coefficients for the loss. Must be a scalar or `Tensor` of shape
+ [batch_size] or [batch_size, num_labels].
+ dual_rate_factor: A floating point value which controls the step size for
+ the Lagrange multipliers.
+ label_priors: None, or a floating point `Tensor` of shape [num_labels]
+ containing the prior probability of each label (i.e. the fraction of the
+ training data consisting of positive examples). If None, the label
+ priors are computed from `labels` with a moving average. See the notes
+ above regarding the interaction with `weights` and do not set this unless
+ you have a good reason to do so.
+ surrogate_type: Either 'xent' or 'hinge', specifying which upper bound
+ should be used for indicator functions. 'xent' will use the cross-entropy
+ loss surrogate, and 'hinge' will use the hinge loss.
+ lambdas_initializer: An initializer op for the Lagrange multipliers.
+ reuse: Whether or not the layer and its variables should be reused. To be
+ able to reuse the layer scope must be given.
+ variables_collections: Optional list of collections for the variables.
+ trainable: If `True` also add variables to the graph collection
+ `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
+ scope: Optional scope for `variable_scope`.
+
+ Returns:
+ loss: A `Tensor` of the same shape as `logits` with the component-wise
+ loss.
+ other_outputs: A dictionary of useful internal quantities for debugging. For
+ more details, see http://arxiv.org/pdf/1608.04802.pdf.
+ lambdas: A Tensor of shape [num_labels] consisting of the Lagrange
+ multipliers.
+ label_priors: A Tensor of shape [num_labels] consisting of the prior
+ probability of each label learned by the loss, if not provided.
+ true_positives_lower_bound: Lower bound on the number of true positives
+ given `labels` and `logits`. This is the same lower bound which is used
+ in the loss expression to be optimized.
+ false_positives_upper_bound: Upper bound on the number of false positives
+ given `labels` and `logits`. This is the same upper bound which is used
+ in the loss expression to be optimized.
+
+ Raises:
+ ValueError: If `surrogate_type` is not `xent` or `hinge`.
+ """
+ return precision_at_recall_loss(labels=labels,
+ logits=logits,
+ target_recall=target_rate,
+ weights=weights,
+ dual_rate_factor=dual_rate_factor,
+ label_priors=label_priors,
+ surrogate_type=surrogate_type,
+ lambdas_initializer=lambdas_initializer,
+ reuse=reuse,
+ variables_collections=variables_collections,
+ trainable=trainable,
+ scope=scope)
+
+
+def true_positive_rate_at_false_positive_rate_loss(
+ labels,
+ logits,
+ target_rate,
+ weights=1.0,
+ dual_rate_factor=0.1,
+ label_priors=None,
+ surrogate_type='xent',
+ lambdas_initializer=tf.constant_initializer(1.0),
+ reuse=None,
+ variables_collections=None,
+ trainable=True,
+ scope=None):
+ """Computes true positive rate at false positive rate loss.
+
+ The loss is based on a surrogate of the form
+ wt * loss(+) + lambdas * (wt * loss(-) - r * (1 - pi))
+ where:
+ - loss(-) is the loss on the negative examples
+ - loss(+) is the loss on the positive examples
+ - wt is a scalar or tensor of per-example weights
+ - r is the target rate
+ - pi is the label_priors.
+
+ The per-example weights change not only the coefficients of individual
+ training examples, but how the examples are counted toward the constraint.
+ If `label_priors` is given, it MUST take `weights` into account. That is,
+ label_priors = P / (P + N)
+ where
+ P = sum_i (wt_i on positives)
+ N = sum_i (wt_i on negatives).
+
+ Args:
+ labels: A `Tensor` of shape [batch_size] or [batch_size, num_labels].
+ logits: A `Tensor` with the same shape as `labels`.
+ target_rate: The false positive rate at which to compute the loss. Can be a
+ floating point value between 0 and 1 for a single false positive rate, or
+ a `Tensor` of shape [num_labels] holding each label's false positive rate.
+ weights: Coefficients for the loss. Must be a scalar or `Tensor` of shape
+ [batch_size] or [batch_size, num_labels].
+ dual_rate_factor: A floating point value which controls the step size for
+ the Lagrange multipliers.
+ label_priors: None, or a floating point `Tensor` of shape [num_labels]
+ containing the prior probability of each label (i.e. the fraction of the
+ training data consisting of positive examples). If None, the label
+ priors are computed from `labels` with a moving average. See the notes
+ above regarding the interaction with `weights` and do not set this unless
+ you have a good reason to do so.
+ surrogate_type: Either 'xent' or 'hinge', specifying which upper bound
+ should be used for indicator functions. 'xent' will use the cross-entropy
+ loss surrogate, and 'hinge' will use the hinge loss.
+ lambdas_initializer: An initializer op for the Lagrange multipliers.
+ reuse: Whether or not the layer and its variables should be reused. To be
+ able to reuse the layer scope must be given.
+ variables_collections: Optional list of collections for the variables.
+ trainable: If `True` also add variables to the graph collection
+ `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
+ scope: Optional scope for `variable_scope`.
+
+ Returns:
+ loss: A `Tensor` of the same shape as `logits` with the component-wise
+ loss.
+ other_outputs: A dictionary of useful internal quantities for debugging. For
+ more details, see http://arxiv.org/pdf/1608.04802.pdf.
+ lambdas: A Tensor of shape [num_labels] consisting of the Lagrange
+ multipliers.
+ label_priors: A Tensor of shape [num_labels] consisting of the prior
+ probability of each label learned by the loss, if not provided.
+ true_positives_lower_bound: Lower bound on the number of true positives
+ given `labels` and `logits`. This is the same lower bound which is used
+ in the loss expression to be optimized.
+ false_positives_upper_bound: Upper bound on the number of false positives
+ given `labels` and `logits`. This is the same upper bound which is used
+ in the loss expression to be optimized.
+
+ Raises:
+ ValueError: If `surrogate_type` is not `xent` or `hinge`.
+ """
+ with tf.variable_scope(scope,
+ 'tpr_at_fpr',
+ [labels, logits, label_priors],
+ reuse=reuse):
+ labels, logits, weights, original_shape = _prepare_labels_logits_weights(
+ labels, logits, weights)
+ num_labels = util.get_num_labels(logits)
+
+ # Convert other inputs to tensors and standardize dtypes.
+ target_rate = util.convert_and_cast(
+ target_rate, 'target_rate', logits.dtype)
+ dual_rate_factor = util.convert_and_cast(
+ dual_rate_factor, 'dual_rate_factor', logits.dtype)
+
+ # Create lambdas.
+ lambdas, lambdas_variable = _create_dual_variable(
+ 'lambdas',
+ shape=[num_labels],
+ dtype=logits.dtype,
+ initializer=lambdas_initializer,
+ collections=variables_collections,
+ trainable=trainable,
+ dual_rate_factor=dual_rate_factor)
+ # Maybe create label_priors.
+ label_priors = maybe_create_label_priors(
+ label_priors, labels, weights, variables_collections)
+
+ # Loss op and other outputs. The log(2.0) term corrects for
+ # logloss not being an upper bound on the indicator function.
+ weighted_loss = weights * util.weighted_surrogate_loss(
+ labels,
+ logits,
+ surrogate_type=surrogate_type,
+ positive_weights=1.0,
+ negative_weights=lambdas)
+ maybe_log2 = tf.log(2.0) if surrogate_type == 'xent' else 1.0
+ maybe_log2 = tf.cast(maybe_log2, logits.dtype.base_dtype)
+ lambda_term = lambdas * target_rate * (1.0 - label_priors) * maybe_log2
+ loss = tf.reshape(weighted_loss - lambda_term, original_shape)
+ other_outputs = {
+ 'lambdas': lambdas_variable,
+ 'label_priors': label_priors,
+ 'true_positives_lower_bound': true_positives_lower_bound(
+ labels, logits, weights, surrogate_type),
+ 'false_positives_upper_bound': false_positives_upper_bound(
+ labels, logits, weights, surrogate_type)}
+
+ return loss, other_outputs
+
+
+def _prepare_labels_logits_weights(labels, logits, weights):
+ """Validates labels, logits, and weights.
+
+ Converts inputs to tensors, checks shape compatibility, and casts dtype if
+ necessary.
+
+ Args:
+ labels: A `Tensor` of shape [batch_size] or [batch_size, num_labels].
+ logits: A `Tensor` with the same shape as `labels`.
+ weights: Either `None` or a `Tensor` with shape broadcastable to `logits`.
+
+ Returns:
+ labels: Same as `labels` arg after possible conversion to tensor, cast, and
+ reshape.
+ logits: Same as `logits` arg after possible conversion to tensor and
+ reshape.
+ weights: Same as `weights` arg after possible conversion, cast, and reshape.
+ original_shape: Shape of `labels` and `logits` before reshape.
+
+ Raises:
+ ValueError: If `labels` and `logits` do not have the same shape.
+ """
+ # Convert `labels` and `logits` to Tensors and standardize dtypes.
+ logits = tf.convert_to_tensor(logits, name='logits')
+ labels = util.convert_and_cast(labels, 'labels', logits.dtype.base_dtype)
+ weights = util.convert_and_cast(weights, 'weights', logits.dtype.base_dtype)
+
+ try:
+ labels.get_shape().merge_with(logits.get_shape())
+ except ValueError:
+ raise ValueError('logits and labels must have the same shape (%s vs %s)' %
+ (logits.get_shape(), labels.get_shape()))
+
+ original_shape = labels.get_shape().as_list()
+ if labels.get_shape().ndims > 0:
+ original_shape[0] = -1
+ if labels.get_shape().ndims <= 1:
+ labels = tf.reshape(labels, [-1, 1])
+ logits = tf.reshape(logits, [-1, 1])
+
+ if weights.get_shape().ndims == 1:
+ # Weights has shape [batch_size]. Reshape to [batch_size, 1].
+ weights = tf.reshape(weights, [-1, 1])
+ if weights.get_shape().ndims == 0:
+ # Weights is a scalar. Change shape of weights to match logits.
+ weights *= tf.ones_like(logits)
+
+ return labels, logits, weights, original_shape
+
+
+def _range_to_anchors_and_delta(precision_range, num_anchors, dtype):
+ """Calculates anchor points from precision range.
+
+ Args:
+ precision_range: As required in precision_recall_auc_loss.
+ num_anchors: int, number of equally spaced anchor points.
+ dtype: Data type of returned tensors.
+
+ Returns:
+ precision_values: A `Tensor` of data type dtype with equally spaced values
+ in the interval precision_range.
+ delta: The spacing between the values in precision_values.
+
+ Raises:
+ ValueError: If precision_range is invalid.
+ """
+ # Validate precision_range.
+ if not 0 <= precision_range[0] <= precision_range[-1] <= 1:
+ raise ValueError('precision values must obey 0 <= %f <= %f <= 1' %
+ (precision_range[0], precision_range[-1]))
+ if not 0 < len(precision_range) < 3:
+ raise ValueError('length of precision_range (%d) must be 1 or 2' %
+ len(precision_range))
+
+ # Sets precision_values uniformly between min_precision and max_precision.
+ values = numpy.linspace(start=precision_range[0],
+ stop=precision_range[1],
+ num=num_anchors+2)[1:-1]
+ precision_values = util.convert_and_cast(
+ values, 'precision_values', dtype)
+ delta = util.convert_and_cast(
+ values[0] - precision_range[0], 'delta', dtype)
+ # Makes precision_values [1, 1, num_anchors].
+ precision_values = util.expand_outer(precision_values, 3)
+ return precision_values, delta
+
+
+def _create_dual_variable(name, shape, dtype, initializer, collections,
+ trainable, dual_rate_factor):
+ """Creates a new dual variable.
+
+ Dual variables are required to be nonnegative. If trainable, their gradient
+ is reversed so that they are maximized (rather than minimized) by the
+ optimizer.
+
+ Args:
+ name: A string, the name for the new variable.
+ shape: Shape of the new variable.
+ dtype: Data type for the new variable.
+ initializer: Initializer for the new variable.
+ collections: List of graph collections keys. The new variable is added to
+ these collections. Defaults to `[GraphKeys.GLOBAL_VARIABLES]`.
+ trainable: If `True`, the default, also adds the variable to the graph
+ collection `GraphKeys.TRAINABLE_VARIABLES`. This collection is used as
+ the default list of variables to use by the `Optimizer` classes.
+ dual_rate_factor: A floating point value or `Tensor`. The learning rate for
+ the dual variable is scaled by this factor.
+
+ Returns:
+ dual_value: An op that computes the absolute value of the dual variable
+ and reverses its gradient.
+ dual_variable: The underlying variable itself.
+ """
+ # We disable partitioning while constructing dual variables because they will
+ # be updated with assign, which is not available for partitioned variables.
+ partitioner = tf.get_variable_scope().partitioner
+ try:
+ tf.get_variable_scope().set_partitioner(None)
+ dual_variable = tf.contrib.framework.model_variable(
+ name=name,
+ shape=shape,
+ dtype=dtype,
+ initializer=initializer,
+ collections=collections,
+ trainable=trainable)
+ finally:
+ tf.get_variable_scope().set_partitioner(partitioner)
+ # Using the absolute value enforces nonnegativity.
+ dual_value = tf.abs(dual_variable)
+
+ if trainable:
+ # To reverse the gradient on the dual variable, multiply the gradient by
+ # -dual_rate_factor
+ dual_value = (tf.stop_gradient((1.0 + dual_rate_factor) * dual_value)
+ - dual_rate_factor * dual_value)
+ return dual_value, dual_variable
+
+
+def maybe_create_label_priors(label_priors,
+ labels,
+ weights,
+ variables_collections):
+ """Creates moving average ops to track label priors, if necessary.
+
+ Args:
+ label_priors: As required in e.g. precision_recall_auc_loss.
+ labels: A `Tensor` of shape [batch_size] or [batch_size, num_labels].
+ weights: As required in e.g. precision_recall_auc_loss.
+ variables_collections: Optional list of collections for the variables, if
+ any must be created.
+
+ Returns:
+ label_priors: A Tensor of shape [num_labels] consisting of the
+ weighted label priors, after updating with moving average ops if created.
+ """
+ if label_priors is not None:
+ label_priors = util.convert_and_cast(
+ label_priors, name='label_priors', dtype=labels.dtype.base_dtype)
+ return tf.squeeze(label_priors)
+
+ label_priors = util.build_label_priors(
+ labels,
+ weights,
+ variables_collections=variables_collections)
+ return label_priors
+
+
+def true_positives_lower_bound(labels, logits, weights, surrogate_type):
+ """Calculate a lower bound on the number of true positives.
+
+ This lower bound on the number of true positives given `logits` and `labels`
+ is the same one used in the global objectives loss functions.
+
+ Args:
+ labels: A `Tensor` of shape [batch_size] or [batch_size, num_labels].
+ logits: A `Tensor` of shape [batch_size, num_labels] or
+ [batch_size, num_labels, num_anchors]. If the third dimension is present,
+ the lower bound is computed on each slice [:, :, k] independently.
+ weights: Per-example loss coefficients, with shape broadcast-compatible with
+ that of `labels`.
+ surrogate_type: Either 'xent' or 'hinge', specifying which upper bound
+ should be used for indicator functions.
+
+ Returns:
+ A `Tensor` of shape [num_labels] or [num_labels, num_anchors].
+ """
+ maybe_log2 = tf.log(2.0) if surrogate_type == 'xent' else 1.0
+ maybe_log2 = tf.cast(maybe_log2, logits.dtype.base_dtype)
+ if logits.get_shape().ndims == 3 and labels.get_shape().ndims < 3:
+ labels = tf.expand_dims(labels, 2)
+ loss_on_positives = util.weighted_surrogate_loss(
+ labels, logits, surrogate_type, negative_weights=0.0) / maybe_log2
+ return tf.reduce_sum(weights * (labels - loss_on_positives), 0)
+
+
+def false_positives_upper_bound(labels, logits, weights, surrogate_type):
+ """Calculate an upper bound on the number of false positives.
+
+ This upper bound on the number of false positives given `logits` and `labels`
+ is the same one used in the global objectives loss functions.
+
+ Args:
+ labels: A `Tensor` of shape [batch_size, num_labels]
+ logits: A `Tensor` of shape [batch_size, num_labels] or
+ [batch_size, num_labels, num_anchors]. If the third dimension is present,
+ the lower bound is computed on each slice [:, :, k] independently.
+ weights: Per-example loss coefficients, with shape broadcast-compatible with
+ that of `labels`.
+ surrogate_type: Either 'xent' or 'hinge', specifying which upper bound
+ should be used for indicator functions.
+
+ Returns:
+ A `Tensor` of shape [num_labels] or [num_labels, num_anchors].
+ """
+ maybe_log2 = tf.log(2.0) if surrogate_type == 'xent' else 1.0
+ maybe_log2 = tf.cast(maybe_log2, logits.dtype.base_dtype)
+ loss_on_negatives = util.weighted_surrogate_loss(
+ labels, logits, surrogate_type, positive_weights=0.0) / maybe_log2
+ return tf.reduce_sum(weights * loss_on_negatives, 0)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/global_objectives/loss_layers_example.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/global_objectives/loss_layers_example.py"
new file mode 100644
index 00000000..2323cb07
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/global_objectives/loss_layers_example.py"
@@ -0,0 +1,211 @@
+# Copyright 2018 The TensorFlow Global Objectives Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Example for using global objectives.
+
+Illustrates, using synthetic data, how using the precision_at_recall loss
+significantly improves the performance of a linear classifier.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# Dependency imports
+import numpy as np
+from sklearn.metrics import precision_score
+import tensorflow as tf
+from global_objectives import loss_layers
+
+# When optimizing using global_objectives, if set to True then the saddle point
+# optimization steps are performed internally by the Tensorflow optimizer,
+# otherwise by dedicated saddle-point steps as part of the optimization loop.
+USE_GO_SADDLE_POINT_OPT = False
+
+TARGET_RECALL = 0.98
+TRAIN_ITERATIONS = 150
+LEARNING_RATE = 1.0
+GO_DUAL_RATE_FACTOR = 15.0
+NUM_CHECKPOINTS = 6
+
+EXPERIMENT_DATA_CONFIG = {
+ 'positives_centers': [[0, 1.0], [1, -0.5]],
+ 'negatives_centers': [[0, -0.5], [1, 1.0]],
+ 'positives_variances': [0.15, 0.1],
+ 'negatives_variances': [0.15, 0.1],
+ 'positives_counts': [500, 50],
+ 'negatives_counts': [3000, 100]
+}
+
+
+def create_training_and_eval_data_for_experiment(**data_config):
+ """Creates train and eval data sets.
+
+ Note: The synthesized binary-labeled data is a mixture of four Gaussians - two
+ positives and two negatives. The centers, variances, and sizes for each of
+ the two positives and negatives mixtures are passed in the respective keys
+ of data_config:
+
+ Args:
+ **data_config: Dictionary with Array entries as follows:
+ positives_centers - float [2,2] two centers of positives data sets.
+ negatives_centers - float [2,2] two centers of negatives data sets.
+ positives_variances - float [2] Variances for the positives sets.
+ negatives_variances - float [2] Variances for the negatives sets.
+ positives_counts - int [2] Counts for each of the two positives sets.
+ negatives_counts - int [2] Counts for each of the two negatives sets.
+
+ Returns:
+ A dictionary with two shuffled data sets created - one for training and one
+ for eval. The dictionary keys are 'train_data', 'train_labels', 'eval_data',
+    and 'eval_labels'. The data points are two-dimensional floats, and the
+ labels are in {0,1}.
+ """
+ def data_points(is_positives, index):
+ variance = data_config['positives_variances'
+ if is_positives else 'negatives_variances'][index]
+ center = data_config['positives_centers'
+ if is_positives else 'negatives_centers'][index]
+ count = data_config['positives_counts'
+ if is_positives else 'negatives_counts'][index]
+ return variance*np.random.randn(count, 2) + np.array([center])
+
+ def create_data():
+ return np.concatenate([data_points(False, 0),
+ data_points(True, 0),
+ data_points(True, 1),
+ data_points(False, 1)], axis=0)
+
+ def create_labels():
+ """Creates an array of 0.0 or 1.0 labels for the data_config batches."""
+ return np.array([0.0]*data_config['negatives_counts'][0] +
+ [1.0]*data_config['positives_counts'][0] +
+ [1.0]*data_config['positives_counts'][1] +
+ [0.0]*data_config['negatives_counts'][1])
+
+ permutation = np.random.permutation(
+ sum(data_config['positives_counts'] + data_config['negatives_counts']))
+
+ train_data = create_data()[permutation, :]
+ eval_data = create_data()[permutation, :]
+ train_labels = create_labels()[permutation]
+ eval_labels = create_labels()[permutation]
+
+ return {
+ 'train_data': train_data,
+ 'train_labels': train_labels,
+ 'eval_data': eval_data,
+ 'eval_labels': eval_labels
+ }
+
+
+def train_model(data, use_global_objectives):
+ """Trains a linear model for maximal accuracy or precision at given recall."""
+
+ def precision_at_recall(scores, labels, target_recall):
+ """Computes precision - at target recall - over data."""
+ positive_scores = scores[labels == 1.0]
+ threshold = np.percentile(positive_scores, 100 - target_recall*100)
+ predicted = scores >= threshold
+ return precision_score(labels, predicted)
+
+ w = tf.Variable(tf.constant([-1.0, -1.0], shape=[2, 1]), trainable=True,
+ name='weights', dtype=tf.float32)
+ b = tf.Variable(tf.zeros([1]), trainable=True, name='biases',
+ dtype=tf.float32)
+
+ logits = tf.matmul(tf.cast(data['train_data'], tf.float32), w) + b
+
+ labels = tf.constant(
+ data['train_labels'],
+ shape=[len(data['train_labels']), 1],
+ dtype=tf.float32)
+
+ if use_global_objectives:
+ loss, other_outputs = loss_layers.precision_at_recall_loss(
+ labels, logits,
+ TARGET_RECALL,
+ dual_rate_factor=GO_DUAL_RATE_FACTOR)
+ loss = tf.reduce_mean(loss)
+ else:
+ loss = tf.reduce_mean(
+ tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits))
+
+ global_step = tf.Variable(0, trainable=False)
+
+ learning_rate = tf.train.polynomial_decay(
+ LEARNING_RATE,
+ global_step,
+ TRAIN_ITERATIONS, (LEARNING_RATE / TRAIN_ITERATIONS),
+ power=1.0,
+ cycle=False,
+ name='learning_rate')
+
+ optimizer = tf.train.GradientDescentOptimizer(learning_rate)
+
+ if (not use_global_objectives) or USE_GO_SADDLE_POINT_OPT:
+ training_op = optimizer.minimize(loss, global_step=global_step)
+ else:
+ lambdas = other_outputs['lambdas']
+ primal_update_op = optimizer.minimize(loss, var_list=[w, b])
+ dual_update_op = optimizer.minimize(
+ loss, global_step=global_step, var_list=[lambdas])
+
+ # Training loop:
+ with tf.Session() as sess:
+ checkpoint_step = TRAIN_ITERATIONS // NUM_CHECKPOINTS
+ sess.run(tf.global_variables_initializer())
+ step = sess.run(global_step)
+
+ while step <= TRAIN_ITERATIONS:
+ if (not use_global_objectives) or USE_GO_SADDLE_POINT_OPT:
+ _, step, loss_value, w_value, b_value = sess.run(
+ [training_op, global_step, loss, w, b])
+ else:
+ _, w_value, b_value = sess.run([primal_update_op, w, b])
+ _, loss_value, step = sess.run([dual_update_op, loss, global_step])
+
+ if use_global_objectives:
+ go_outputs = sess.run(other_outputs.values())
+
+ if step % checkpoint_step == 0:
+ precision = precision_at_recall(
+ np.dot(data['train_data'], w_value) + b_value,
+ data['train_labels'], TARGET_RECALL)
+
+ tf.logging.info('Loss = %f Precision = %f', loss_value, precision)
+ if use_global_objectives:
+ for i, output_name in enumerate(other_outputs.keys()):
+ tf.logging.info('\t%s = %f', output_name, go_outputs[i])
+
+ w_value, b_value = sess.run([w, b])
+ return precision_at_recall(np.dot(data['eval_data'], w_value) + b_value,
+ data['eval_labels'],
+ TARGET_RECALL)
+
+
+def main(unused_argv):
+ del unused_argv
+ experiment_data = create_training_and_eval_data_for_experiment(
+ **EXPERIMENT_DATA_CONFIG)
+ global_objectives_loss_precision = train_model(experiment_data, True)
+ tf.logging.info('global_objectives precision at requested recall is %f',
+ global_objectives_loss_precision)
+ cross_entropy_loss_precision = train_model(experiment_data, False)
+ tf.logging.info('cross_entropy precision at requested recall is %f',
+ cross_entropy_loss_precision)
+
+
+if __name__ == '__main__':
+ tf.logging.set_verbosity(tf.logging.INFO)
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/global_objectives/loss_layers_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/global_objectives/loss_layers_test.py"
new file mode 100644
index 00000000..3f91c80d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/global_objectives/loss_layers_test.py"
@@ -0,0 +1,1379 @@
+# Copyright 2018 The TensorFlow Global Objectives Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for global objectives loss layers."""
+
+# Dependency imports
+from absl.testing import parameterized
+import numpy
+import tensorflow as tf
+
+from global_objectives import loss_layers
+from global_objectives import util
+
+
+# TODO: Include weights in the lagrange multiplier update tests.
+class PrecisionRecallAUCLossTest(parameterized.TestCase, tf.test.TestCase):
+
+ @parameterized.named_parameters(
+ ('_xent', 'xent', 0.7),
+ ('_hinge', 'hinge', 0.7),
+ ('_hinge_2', 'hinge', 0.5)
+ )
+ def testSinglePointAUC(self, surrogate_type, target_precision):
+ # Tests a case with only one anchor point, where the loss should equal
+ # recall_at_precision_loss
+ batch_shape = [10, 2]
+ logits = tf.Variable(tf.random_normal(batch_shape))
+ labels = tf.Variable(
+ tf.to_float(tf.greater(tf.random_uniform(batch_shape), 0.4)))
+
+ auc_loss, _ = loss_layers.precision_recall_auc_loss(
+ labels,
+ logits,
+ precision_range=(target_precision - 0.01, target_precision + 0.01),
+ num_anchors=1,
+ surrogate_type=surrogate_type)
+ point_loss, _ = loss_layers.recall_at_precision_loss(
+ labels, logits, target_precision=target_precision,
+ surrogate_type=surrogate_type)
+
+ with self.test_session():
+ tf.global_variables_initializer().run()
+ self.assertAllClose(auc_loss.eval(), point_loss.eval())
+
+ def testThreePointAUC(self):
+ # Tests a case with three anchor points against a weighted sum of recall
+ # at precision losses.
+ batch_shape = [11, 3]
+ logits = tf.Variable(tf.random_normal(batch_shape))
+ labels = tf.Variable(
+ tf.to_float(tf.greater(tf.random_uniform(batch_shape), 0.4)))
+
+    # TODO: Place the hinge/xent loss in a for loop.
+ auc_loss, _ = loss_layers.precision_recall_auc_loss(
+ labels, logits, num_anchors=1)
+ first_point_loss, _ = loss_layers.recall_at_precision_loss(
+ labels, logits, target_precision=0.25)
+ second_point_loss, _ = loss_layers.recall_at_precision_loss(
+ labels, logits, target_precision=0.5)
+ third_point_loss, _ = loss_layers.recall_at_precision_loss(
+ labels, logits, target_precision=0.75)
+ expected_loss = (first_point_loss + second_point_loss +
+ third_point_loss) / 3
+
+ auc_loss_hinge, _ = loss_layers.precision_recall_auc_loss(
+ labels, logits, num_anchors=1, surrogate_type='hinge')
+ first_point_hinge, _ = loss_layers.recall_at_precision_loss(
+ labels, logits, target_precision=0.25, surrogate_type='hinge')
+ second_point_hinge, _ = loss_layers.recall_at_precision_loss(
+ labels, logits, target_precision=0.5, surrogate_type='hinge')
+ third_point_hinge, _ = loss_layers.recall_at_precision_loss(
+ labels, logits, target_precision=0.75, surrogate_type='hinge')
+ expected_hinge = (first_point_hinge + second_point_hinge +
+ third_point_hinge) / 3
+
+ with self.test_session():
+ tf.global_variables_initializer().run()
+ self.assertAllClose(auc_loss.eval(), expected_loss.eval())
+ self.assertAllClose(auc_loss_hinge.eval(), expected_hinge.eval())
+
+ def testLagrangeMultiplierUpdateDirection(self):
+ for target_precision in [0.35, 0.65]:
+ precision_range = (target_precision - 0.01, target_precision + 0.01)
+
+ for surrogate_type in ['xent', 'hinge']:
+ kwargs = {'precision_range': precision_range,
+ 'num_anchors': 1,
+ 'surrogate_type': surrogate_type,
+ 'scope': 'pr-auc_{}_{}'.format(target_precision,
+ surrogate_type)}
+ run_lagrange_multiplier_test(
+ global_objective=loss_layers.precision_recall_auc_loss,
+ objective_kwargs=kwargs,
+ data_builder=_multilabel_data,
+ test_object=self)
+ kwargs['scope'] = 'other-' + kwargs['scope']
+ run_lagrange_multiplier_test(
+ global_objective=loss_layers.precision_recall_auc_loss,
+ objective_kwargs=kwargs,
+ data_builder=_other_multilabel_data(surrogate_type),
+ test_object=self)
+
+
+class ROCAUCLossTest(parameterized.TestCase, tf.test.TestCase):
+
+ def testSimpleScores(self):
+ # Tests the loss on data with only one negative example with score zero.
+ # In this case, the loss should equal the surrogate loss on the scores with
+ # positive labels.
+ num_positives = 10
+ scores_positives = tf.constant(3.0 * numpy.random.randn(num_positives),
+ shape=[num_positives, 1])
+ labels = tf.constant([0.0] + [1.0] * num_positives,
+ shape=[num_positives + 1, 1])
+ scores = tf.concat([[[0.0]], scores_positives], 0)
+
+ loss = tf.reduce_sum(
+ loss_layers.roc_auc_loss(labels, scores, surrogate_type='hinge')[0])
+ expected_loss = tf.reduce_sum(
+ tf.maximum(1.0 - scores_positives, 0)) / (num_positives + 1)
+ with self.test_session():
+ self.assertAllClose(expected_loss.eval(), loss.eval())
+
+ def testRandomROCLoss(self):
+    # Checks that random Bernoulli scores and labels have ~25% swaps.
+ shape = [1000, 30]
+ scores = tf.constant(
+ numpy.random.randint(0, 2, size=shape), shape=shape, dtype=tf.float32)
+ labels = tf.constant(
+ numpy.random.randint(0, 2, size=shape), shape=shape, dtype=tf.float32)
+ loss = tf.reduce_mean(loss_layers.roc_auc_loss(
+ labels, scores, surrogate_type='hinge')[0])
+ with self.test_session():
+ self.assertAllClose(0.25, loss.eval(), 1e-2)
+
+ @parameterized.named_parameters(
+ ('_zero_hinge', 'xent',
+ [0.0, 0.0, 0.0, 1.0, 1.0, 1.0],
+ [-5.0, -7.0, -9.0, 8.0, 10.0, 14.0],
+ 0.0),
+ ('_zero_xent', 'hinge',
+ [0.0, 0.0, 0.0, 1.0, 1.0, 1.0],
+ [-0.2, 0, -0.1, 1.0, 1.1, 1.0],
+ 0.0),
+ ('_xent', 'xent',
+ [0.0, 0.0, 0.0, 1.0, 1.0, 1.0],
+ [0.0, -17.0, -19.0, 1.0, 14.0, 14.0],
+ numpy.log(1.0 + numpy.exp(-1.0)) / 6),
+ ('_hinge', 'hinge',
+ [0.0, 0.0, 0.0, 1.0, 1.0, 1.0],
+ [-0.2, -0.05, 0.0, 0.95, 0.8, 1.0],
+ 0.4 / 6)
+ )
+ def testManualROCLoss(self, surrogate_type, labels, logits, expected_value):
+ labels = tf.constant(labels)
+ logits = tf.constant(logits)
+ loss, _ = loss_layers.roc_auc_loss(
+ labels=labels, logits=logits, surrogate_type=surrogate_type)
+
+ with self.test_session():
+ self.assertAllClose(expected_value, tf.reduce_sum(loss).eval())
+
+ def testMultiLabelROCLoss(self):
+ # Tests the loss on multi-label data against manually computed loss.
+ targets = numpy.array([[0.0, 0.0, 1.0, 1.0], [0.0, 0.0, 1.0, 1.0]])
+ scores = numpy.array([[0.1, 1.0, 1.1, 1.0], [1.0, 0.0, 1.3, 1.1]])
+ class_1_auc = tf.reduce_sum(
+ loss_layers.roc_auc_loss(targets[0], scores[0])[0])
+ class_2_auc = tf.reduce_sum(
+ loss_layers.roc_auc_loss(targets[1], scores[1])[0])
+ total_auc = tf.reduce_sum(loss_layers.roc_auc_loss(
+ targets.transpose(), scores.transpose())[0])
+
+ with self.test_session():
+ self.assertAllClose(total_auc.eval(),
+ class_1_auc.eval() + class_2_auc.eval())
+
+ def testWeights(self):
+ # Test the loss with per-example weights.
+ # The logits_negatives below are repeated, so that setting half their
+ # weights to 2 and the other half to 0 should leave the loss unchanged.
+ logits_positives = tf.constant([2.54321, -0.26, 3.334334], shape=[3, 1])
+ logits_negatives = tf.constant([-0.6, 1, -1.3, -1.3, -0.6, 1], shape=[6, 1])
+ logits = tf.concat([logits_positives, logits_negatives], 0)
+ targets = tf.constant([1, 1, 1, 0, 0, 0, 0, 0, 0],
+ shape=[9, 1], dtype=tf.float32)
+ weights = tf.constant([1, 1, 1, 0, 0, 0, 2, 2, 2],
+ shape=[9, 1], dtype=tf.float32)
+
+ loss = tf.reduce_sum(loss_layers.roc_auc_loss(targets, logits)[0])
+ weighted_loss = tf.reduce_sum(
+ loss_layers.roc_auc_loss(targets, logits, weights)[0])
+
+ with self.test_session():
+ self.assertAllClose(loss.eval(), weighted_loss.eval())
+
+
+class RecallAtPrecisionTest(tf.test.TestCase):
+
+ def testEqualWeightLoss(self):
+ # Tests a special case where the loss should equal cross entropy loss.
+ target_precision = 1.0
+ num_labels = 5
+ batch_shape = [20, num_labels]
+ logits = tf.Variable(tf.random_normal(batch_shape))
+ targets = tf.Variable(
+ tf.to_float(tf.greater(tf.random_uniform(batch_shape), 0.7)))
+ label_priors = tf.constant(0.34, shape=[num_labels])
+
+ loss, _ = loss_layers.recall_at_precision_loss(
+ targets, logits, target_precision, label_priors=label_priors)
+ expected_loss = (
+ tf.contrib.nn.deprecated_flipped_sigmoid_cross_entropy_with_logits(
+ logits, targets))
+
+ with self.test_session() as session:
+ tf.global_variables_initializer().run()
+ loss_val, expected_val = session.run([loss, expected_loss])
+ self.assertAllClose(loss_val, expected_val)
+
+ def testEqualWeightLossWithMultiplePrecisions(self):
+ """Tests a case where the loss equals xent loss with multiple precisions."""
+ target_precision = [1.0, 1.0]
+ num_labels = 2
+ batch_size = 20
+ target_shape = [batch_size, num_labels]
+ logits = tf.Variable(tf.random_normal(target_shape))
+ targets = tf.Variable(
+ tf.to_float(tf.greater(tf.random_uniform(target_shape), 0.7)))
+ label_priors = tf.constant([0.34], shape=[num_labels])
+
+ loss, _ = loss_layers.recall_at_precision_loss(
+ targets,
+ logits,
+ target_precision,
+ label_priors=label_priors,
+ surrogate_type='xent',
+ )
+
+ expected_loss = (
+ tf.contrib.nn.deprecated_flipped_sigmoid_cross_entropy_with_logits(
+ logits, targets))
+
+ with self.test_session() as session:
+ tf.global_variables_initializer().run()
+ loss_val, expected_val = session.run([loss, expected_loss])
+ self.assertAllClose(loss_val, expected_val)
+
+ def testPositivesOnlyLoss(self):
+ # Tests a special case where the loss should equal cross entropy loss
+ # on the negatives only.
+ target_precision = 1.0
+ num_labels = 3
+ batch_shape = [30, num_labels]
+ logits = tf.Variable(tf.random_normal(batch_shape))
+ targets = tf.Variable(
+ tf.to_float(tf.greater(tf.random_uniform(batch_shape), 0.4)))
+ label_priors = tf.constant(0.45, shape=[num_labels])
+
+ loss, _ = loss_layers.recall_at_precision_loss(
+ targets, logits, target_precision, label_priors=label_priors,
+ lambdas_initializer=tf.zeros_initializer())
+ expected_loss = util.weighted_sigmoid_cross_entropy_with_logits(
+ targets,
+ logits,
+ positive_weights=1.0,
+ negative_weights=0.0)
+
+ with self.test_session() as session:
+ tf.global_variables_initializer().run()
+ loss_val, expected_val = session.run([loss, expected_loss])
+ self.assertAllClose(loss_val, expected_val)
+
+ def testEquivalenceBetweenSingleAndMultiplePrecisions(self):
+ """Checks recall at precision with different precision values.
+
+ Runs recall at precision with multiple precision values, and runs each label
+    separately with its own precision value as a scalar. Validates that the
+ returned loss values are the same.
+ """
+ target_precision = [0.2, 0.9, 0.4]
+ num_labels = 3
+ batch_shape = [30, num_labels]
+ logits = tf.Variable(tf.random_normal(batch_shape))
+ targets = tf.Variable(
+ tf.to_float(tf.greater(tf.random_uniform(batch_shape), 0.4)))
+ label_priors = tf.constant([0.45, 0.8, 0.3], shape=[num_labels])
+
+ multi_label_loss, _ = loss_layers.recall_at_precision_loss(
+ targets, logits, target_precision, label_priors=label_priors,
+ )
+
+ single_label_losses = [
+ loss_layers.recall_at_precision_loss(
+ tf.expand_dims(targets[:, i], -1),
+ tf.expand_dims(logits[:, i], -1),
+ target_precision[i],
+ label_priors=label_priors[i])[0]
+ for i in range(num_labels)
+ ]
+
+ single_label_losses = tf.concat(single_label_losses, 1)
+
+ with self.test_session() as session:
+ tf.global_variables_initializer().run()
+ multi_label_loss_val, single_label_loss_val = session.run(
+ [multi_label_loss, single_label_losses])
+ self.assertAllClose(multi_label_loss_val, single_label_loss_val)
+
+ def testEquivalenceBetweenSingleAndEqualMultiplePrecisions(self):
+ """Compares single and multiple target precisions with the same value.
+
+ Checks that using a single target precision and multiple target precisions
+ with the same value would result in the same loss value.
+ """
+ num_labels = 2
+ target_shape = [20, num_labels]
+ logits = tf.Variable(tf.random_normal(target_shape))
+ targets = tf.Variable(
+ tf.to_float(tf.greater(tf.random_uniform(target_shape), 0.7)))
+ label_priors = tf.constant([0.34], shape=[num_labels])
+
+ multi_precision_loss, _ = loss_layers.recall_at_precision_loss(
+ targets,
+ logits,
+ [0.75, 0.75],
+ label_priors=label_priors,
+ surrogate_type='xent',
+ )
+
+ single_precision_loss, _ = loss_layers.recall_at_precision_loss(
+ targets,
+ logits,
+ 0.75,
+ label_priors=label_priors,
+ surrogate_type='xent',
+ )
+
+ with self.test_session() as session:
+ tf.global_variables_initializer().run()
+ multi_precision_loss_val, single_precision_loss_val = session.run(
+ [multi_precision_loss, single_precision_loss])
+ self.assertAllClose(multi_precision_loss_val, single_precision_loss_val)
+
+ def testLagrangeMultiplierUpdateDirection(self):
+ for target_precision in [0.35, 0.65]:
+ for surrogate_type in ['xent', 'hinge']:
+ kwargs = {'target_precision': target_precision,
+ 'surrogate_type': surrogate_type,
+ 'scope': 'r-at-p_{}_{}'.format(target_precision,
+ surrogate_type)}
+ run_lagrange_multiplier_test(
+ global_objective=loss_layers.recall_at_precision_loss,
+ objective_kwargs=kwargs,
+ data_builder=_multilabel_data,
+ test_object=self)
+ kwargs['scope'] = 'other-' + kwargs['scope']
+ run_lagrange_multiplier_test(
+ global_objective=loss_layers.recall_at_precision_loss,
+ objective_kwargs=kwargs,
+ data_builder=_other_multilabel_data(surrogate_type),
+ test_object=self)
+
+ def testLagrangeMultiplierUpdateDirectionWithMultiplePrecisions(self):
+ """Runs Lagrange multiplier test with multiple precision values."""
+ target_precision = [0.65, 0.35]
+
+ for surrogate_type in ['xent', 'hinge']:
+ scope_str = 'r-at-p_{}_{}'.format(
+ '_'.join([str(precision) for precision in target_precision]),
+ surrogate_type)
+ kwargs = {
+ 'target_precision': target_precision,
+ 'surrogate_type': surrogate_type,
+ 'scope': scope_str,
+ }
+ run_lagrange_multiplier_test(
+ global_objective=loss_layers.recall_at_precision_loss,
+ objective_kwargs=kwargs,
+ data_builder=_multilabel_data,
+ test_object=self)
+ kwargs['scope'] = 'other-' + kwargs['scope']
+ run_lagrange_multiplier_test(
+ global_objective=loss_layers.recall_at_precision_loss,
+ objective_kwargs=kwargs,
+ data_builder=_other_multilabel_data(surrogate_type),
+ test_object=self)
+
+
+class PrecisionAtRecallTest(tf.test.TestCase):
+
+ def testCrossEntropyEquivalence(self):
+ # Checks a special case where the loss should equal cross-entropy loss.
+ target_recall = 1.0
+ num_labels = 3
+ batch_shape = [10, num_labels]
+ logits = tf.Variable(tf.random_normal(batch_shape))
+ targets = tf.Variable(
+ tf.to_float(tf.greater(tf.random_uniform(batch_shape), 0.4)))
+
+ loss, _ = loss_layers.precision_at_recall_loss(
+ targets, logits, target_recall,
+ lambdas_initializer=tf.constant_initializer(1.0))
+ expected_loss = util.weighted_sigmoid_cross_entropy_with_logits(
+ targets, logits)
+
+ with self.test_session():
+ tf.global_variables_initializer().run()
+ self.assertAllClose(loss.eval(), expected_loss.eval())
+
+ def testNegativesOnlyLoss(self):
+ # Checks a special case where the loss should equal the loss on
+ # the negative examples only.
+ target_recall = 0.61828
+ num_labels = 4
+ batch_shape = [8, num_labels]
+ logits = tf.Variable(tf.random_normal(batch_shape))
+ targets = tf.Variable(
+ tf.to_float(tf.greater(tf.random_uniform(batch_shape), 0.6)))
+
+ loss, _ = loss_layers.precision_at_recall_loss(
+ targets,
+ logits,
+ target_recall,
+ surrogate_type='hinge',
+ lambdas_initializer=tf.constant_initializer(0.0),
+ scope='negatives_only_test')
+ expected_loss = util.weighted_hinge_loss(
+ targets, logits, positive_weights=0.0, negative_weights=1.0)
+
+ with self.test_session():
+ tf.global_variables_initializer().run()
+ self.assertAllClose(expected_loss.eval(), loss.eval())
+
+ def testLagrangeMultiplierUpdateDirection(self):
+ for target_recall in [0.34, 0.66]:
+ for surrogate_type in ['xent', 'hinge']:
+ kwargs = {'target_recall': target_recall,
+ 'dual_rate_factor': 1.0,
+ 'surrogate_type': surrogate_type,
+ 'scope': 'p-at-r_{}_{}'.format(target_recall, surrogate_type)}
+
+ run_lagrange_multiplier_test(
+ global_objective=loss_layers.precision_at_recall_loss,
+ objective_kwargs=kwargs,
+ data_builder=_multilabel_data,
+ test_object=self)
+ kwargs['scope'] = 'other-' + kwargs['scope']
+ run_lagrange_multiplier_test(
+ global_objective=loss_layers.precision_at_recall_loss,
+ objective_kwargs=kwargs,
+ data_builder=_other_multilabel_data(surrogate_type),
+ test_object=self)
+
+ def testCrossEntropyEquivalenceWithMultipleRecalls(self):
+ """Checks a case where the loss equals xent loss with multiple recalls."""
+ num_labels = 3
+ target_recall = [1.0] * num_labels
+ batch_shape = [10, num_labels]
+ logits = tf.Variable(tf.random_normal(batch_shape))
+ targets = tf.Variable(
+ tf.to_float(tf.greater(tf.random_uniform(batch_shape), 0.4)))
+
+ loss, _ = loss_layers.precision_at_recall_loss(
+ targets, logits, target_recall,
+ lambdas_initializer=tf.constant_initializer(1.0))
+ expected_loss = util.weighted_sigmoid_cross_entropy_with_logits(
+ targets, logits)
+
+ with self.test_session():
+ tf.global_variables_initializer().run()
+ self.assertAllClose(loss.eval(), expected_loss.eval())
+
+ def testNegativesOnlyLossWithMultipleRecalls(self):
+ """Tests a case where the loss equals the loss on the negative examples.
+
+ Checks this special case using multiple target recall values.
+ """
+ num_labels = 4
+ target_recall = [0.61828] * num_labels
+ batch_shape = [8, num_labels]
+ logits = tf.Variable(tf.random_normal(batch_shape))
+ targets = tf.Variable(
+ tf.to_float(tf.greater(tf.random_uniform(batch_shape), 0.6)))
+
+ loss, _ = loss_layers.precision_at_recall_loss(
+ targets,
+ logits,
+ target_recall,
+ surrogate_type='hinge',
+ lambdas_initializer=tf.constant_initializer(0.0),
+ scope='negatives_only_test')
+ expected_loss = util.weighted_hinge_loss(
+ targets, logits, positive_weights=0.0, negative_weights=1.0)
+
+ with self.test_session():
+ tf.global_variables_initializer().run()
+ self.assertAllClose(expected_loss.eval(), loss.eval())
+
+ def testLagrangeMultiplierUpdateDirectionWithMultipleRecalls(self):
+ """Runs Lagrange multiplier test with multiple recall values."""
+ target_recall = [0.34, 0.66]
+ for surrogate_type in ['xent', 'hinge']:
+ scope_str = 'p-at-r_{}_{}'.format(
+ '_'.join([str(recall) for recall in target_recall]),
+ surrogate_type)
+ kwargs = {'target_recall': target_recall,
+ 'dual_rate_factor': 1.0,
+ 'surrogate_type': surrogate_type,
+ 'scope': scope_str}
+
+ run_lagrange_multiplier_test(
+ global_objective=loss_layers.precision_at_recall_loss,
+ objective_kwargs=kwargs,
+ data_builder=_multilabel_data,
+ test_object=self)
+ kwargs['scope'] = 'other-' + kwargs['scope']
+ run_lagrange_multiplier_test(
+ global_objective=loss_layers.precision_at_recall_loss,
+ objective_kwargs=kwargs,
+ data_builder=_other_multilabel_data(surrogate_type),
+ test_object=self)
+
+ def testEquivalenceBetweenSingleAndMultipleRecalls(self):
+ """Checks precision at recall with multiple different recall values.
+
+ Runs precision at recall with multiple recall values, and runs each label
+    separately with its own recall value as a scalar. Validates that the
+ returned loss values are the same.
+ """
+    target_recall = [0.7, 0.9, 0.4]
+ num_labels = 3
+ batch_shape = [30, num_labels]
+ logits = tf.Variable(tf.random_normal(batch_shape))
+ targets = tf.Variable(
+ tf.to_float(tf.greater(tf.random_uniform(batch_shape), 0.4)))
+ label_priors = tf.constant(0.45, shape=[num_labels])
+
+ multi_label_loss, _ = loss_layers.precision_at_recall_loss(
+        targets, logits, target_recall, label_priors=label_priors
+ )
+
+ single_label_losses = [
+ loss_layers.precision_at_recall_loss(
+ tf.expand_dims(targets[:, i], -1),
+ tf.expand_dims(logits[:, i], -1),
+            target_recall[i],
+ label_priors=label_priors[i])[0]
+ for i in range(num_labels)
+ ]
+
+ single_label_losses = tf.concat(single_label_losses, 1)
+
+ with self.test_session() as session:
+ tf.global_variables_initializer().run()
+ multi_label_loss_val, single_label_loss_val = session.run(
+ [multi_label_loss, single_label_losses])
+ self.assertAllClose(multi_label_loss_val, single_label_loss_val)
+
+ def testEquivalenceBetweenSingleAndEqualMultipleRecalls(self):
+ """Compares single and multiple target recalls of the same value.
+
+ Checks that using a single target recall and multiple recalls with the
+ same value would result in the same loss value.
+ """
+ num_labels = 2
+ target_shape = [20, num_labels]
+ logits = tf.Variable(tf.random_normal(target_shape))
+ targets = tf.Variable(
+ tf.to_float(tf.greater(tf.random_uniform(target_shape), 0.7)))
+ label_priors = tf.constant([0.34], shape=[num_labels])
+
+ multi_precision_loss, _ = loss_layers.precision_at_recall_loss(
+ targets,
+ logits,
+ [0.75, 0.75],
+ label_priors=label_priors,
+ surrogate_type='xent',
+ )
+
+ single_precision_loss, _ = loss_layers.precision_at_recall_loss(
+ targets,
+ logits,
+ 0.75,
+ label_priors=label_priors,
+ surrogate_type='xent',
+ )
+
+ with self.test_session() as session:
+ tf.global_variables_initializer().run()
+ multi_precision_loss_val, single_precision_loss_val = session.run(
+ [multi_precision_loss, single_precision_loss])
+ self.assertAllClose(multi_precision_loss_val, single_precision_loss_val)
+
+
+class FalsePositiveRateAtTruePositiveRateTest(tf.test.TestCase):
+
+ def testNegativesOnlyLoss(self):
+ # Checks a special case where the loss returned should be the loss on the
+ # negative examples.
+ target_recall = 0.6
+ num_labels = 3
+ batch_shape = [3, num_labels]
+ logits = tf.Variable(tf.random_normal(batch_shape))
+ targets = tf.Variable(
+ tf.to_float(tf.greater(tf.random_uniform(batch_shape), 0.4)))
+ label_priors = tf.constant(numpy.random.uniform(size=[num_labels]),
+ dtype=tf.float32)
+
+ xent_loss, _ = loss_layers.false_positive_rate_at_true_positive_rate_loss(
+ targets, logits, target_recall, label_priors=label_priors,
+ lambdas_initializer=tf.constant_initializer(0.0))
+ xent_expected = util.weighted_sigmoid_cross_entropy_with_logits(
+ targets,
+ logits,
+ positive_weights=0.0,
+ negative_weights=1.0)
+ hinge_loss, _ = loss_layers.false_positive_rate_at_true_positive_rate_loss(
+ targets, logits, target_recall, label_priors=label_priors,
+ lambdas_initializer=tf.constant_initializer(0.0),
+ surrogate_type='hinge')
+ hinge_expected = util.weighted_hinge_loss(
+ targets,
+ logits,
+ positive_weights=0.0,
+ negative_weights=1.0)
+
+ with self.test_session() as session:
+ tf.global_variables_initializer().run()
+ xent_val, xent_expected = session.run([xent_loss, xent_expected])
+ self.assertAllClose(xent_val, xent_expected)
+ hinge_val, hinge_expected = session.run([hinge_loss, hinge_expected])
+ self.assertAllClose(hinge_val, hinge_expected)
+
+ def testPositivesOnlyLoss(self):
+ # Checks a special case where the loss returned should be the loss on the
+ # positive examples only.
+ target_recall = 1.0
+ num_labels = 5
+ batch_shape = [5, num_labels]
+ logits = tf.Variable(tf.random_normal(batch_shape))
+ targets = tf.ones_like(logits)
+ label_priors = tf.constant(numpy.random.uniform(size=[num_labels]),
+ dtype=tf.float32)
+
+ loss, _ = loss_layers.false_positive_rate_at_true_positive_rate_loss(
+ targets, logits, target_recall, label_priors=label_priors)
+ expected_loss = tf.nn.sigmoid_cross_entropy_with_logits(
+ labels=targets, logits=logits)
+ hinge_loss, _ = loss_layers.false_positive_rate_at_true_positive_rate_loss(
+ targets, logits, target_recall, label_priors=label_priors,
+ surrogate_type='hinge')
+ expected_hinge = util.weighted_hinge_loss(
+ targets, logits)
+
+ with self.test_session():
+ tf.global_variables_initializer().run()
+ self.assertAllClose(loss.eval(), expected_loss.eval())
+ self.assertAllClose(hinge_loss.eval(), expected_hinge.eval())
+
+ def testEqualWeightLoss(self):
+ # Checks a special case where the loss returned should be proportional to
+ # the ordinary loss.
+ target_recall = 1.0
+ num_labels = 4
+ batch_shape = [40, num_labels]
+ logits = tf.Variable(tf.random_normal(batch_shape))
+ targets = tf.Variable(
+ tf.to_float(tf.greater(tf.random_uniform(batch_shape), 0.6)))
+ label_priors = tf.constant(0.5, shape=[num_labels])
+
+ loss, _ = loss_layers.false_positive_rate_at_true_positive_rate_loss(
+ targets, logits, target_recall, label_priors=label_priors)
+ expected_loss = tf.nn.sigmoid_cross_entropy_with_logits(
+ labels=targets, logits=logits)
+
+ with self.test_session():
+ tf.global_variables_initializer().run()
+ self.assertAllClose(loss.eval(), expected_loss.eval())
+
+ def testLagrangeMultiplierUpdateDirection(self):
+ for target_rate in [0.35, 0.65]:
+ for surrogate_type in ['xent', 'hinge']:
+ kwargs = {'target_rate': target_rate,
+ 'surrogate_type': surrogate_type,
+ 'scope': 'fpr-at-tpr_{}_{}'.format(target_rate,
+ surrogate_type)}
+ # True positive rate is a synonym for recall, so we use the
+ # recall constraint data.
+ run_lagrange_multiplier_test(
+ global_objective=(
+ loss_layers.false_positive_rate_at_true_positive_rate_loss),
+ objective_kwargs=kwargs,
+ data_builder=_multilabel_data,
+ test_object=self)
+ kwargs['scope'] = 'other-' + kwargs['scope']
+ run_lagrange_multiplier_test(
+ global_objective=(
+ loss_layers.false_positive_rate_at_true_positive_rate_loss),
+ objective_kwargs=kwargs,
+ data_builder=_other_multilabel_data(surrogate_type),
+ test_object=self)
+
+ def testLagrangeMultiplierUpdateDirectionWithMultipleRates(self):
+ """Runs Lagrange multiplier test with multiple target rates."""
+ target_rate = [0.35, 0.65]
+ for surrogate_type in ['xent', 'hinge']:
+ kwargs = {'target_rate': target_rate,
+ 'surrogate_type': surrogate_type,
+ 'scope': 'fpr-at-tpr_{}_{}'.format(
+ '_'.join([str(target) for target in target_rate]),
+ surrogate_type)}
+ # True positive rate is a synonym for recall, so we use the
+ # recall constraint data.
+ run_lagrange_multiplier_test(
+ global_objective=(
+ loss_layers.false_positive_rate_at_true_positive_rate_loss),
+ objective_kwargs=kwargs,
+ data_builder=_multilabel_data,
+ test_object=self)
+ kwargs['scope'] = 'other-' + kwargs['scope']
+ run_lagrange_multiplier_test(
+ global_objective=(
+ loss_layers.false_positive_rate_at_true_positive_rate_loss),
+ objective_kwargs=kwargs,
+ data_builder=_other_multilabel_data(surrogate_type),
+ test_object=self)
+
+ def testEquivalenceBetweenSingleAndEqualMultipleRates(self):
+ """Compares single and multiple target rates of the same value.
+
+ Checks that using a single target rate and multiple rates with the
+ same value would result in the same loss value.
+ """
+ num_labels = 2
+ target_shape = [20, num_labels]
+ logits = tf.Variable(tf.random_normal(target_shape))
+ targets = tf.Variable(
+ tf.to_float(tf.greater(tf.random_uniform(target_shape), 0.7)))
+ label_priors = tf.constant([0.34], shape=[num_labels])
+
+ multi_label_loss, _ = (
+ loss_layers.false_positive_rate_at_true_positive_rate_loss(
+ targets, logits, [0.75, 0.75], label_priors=label_priors))
+
+ single_label_loss, _ = (
+ loss_layers.false_positive_rate_at_true_positive_rate_loss(
+ targets, logits, 0.75, label_priors=label_priors))
+
+ with self.test_session() as session:
+ tf.global_variables_initializer().run()
+ multi_label_loss_val, single_label_loss_val = session.run(
+ [multi_label_loss, single_label_loss])
+ self.assertAllClose(multi_label_loss_val, single_label_loss_val)
+
+ def testEquivalenceBetweenSingleAndMultipleRates(self):
+ """Compares single and multiple target rates of different values.
+
+ Runs false_positive_rate_at_true_positive_rate_loss with multiple target
+    rates, and runs each label separately with its own target rate as a
+ scalar. Validates that the returned loss values are the same.
+ """
+    target_rate = [0.7, 0.9, 0.4]
+ num_labels = 3
+ batch_shape = [30, num_labels]
+ logits = tf.Variable(tf.random_normal(batch_shape))
+ targets = tf.Variable(
+ tf.to_float(tf.greater(tf.random_uniform(batch_shape), 0.4)))
+ label_priors = tf.constant(0.45, shape=[num_labels])
+
+ multi_label_loss, _ = (
+ loss_layers.false_positive_rate_at_true_positive_rate_loss(
+            targets, logits, target_rate, label_priors=label_priors))
+
+ single_label_losses = [
+ loss_layers.false_positive_rate_at_true_positive_rate_loss(
+ tf.expand_dims(targets[:, i], -1),
+ tf.expand_dims(logits[:, i], -1),
+            target_rate[i],
+ label_priors=label_priors[i])[0]
+ for i in range(num_labels)
+ ]
+
+ single_label_losses = tf.concat(single_label_losses, 1)
+
+ with self.test_session() as session:
+ tf.global_variables_initializer().run()
+ multi_label_loss_val, single_label_loss_val = session.run(
+ [multi_label_loss, single_label_losses])
+ self.assertAllClose(multi_label_loss_val, single_label_loss_val)
+
+
+class TruePositiveRateAtFalsePositiveRateTest(tf.test.TestCase):
+
+ def testPositivesOnlyLoss(self):
+ # A special case where the loss should equal the loss on the positive
+ # examples.
+ target_rate = numpy.random.uniform()
+ num_labels = 3
+ batch_shape = [20, num_labels]
+ logits = tf.Variable(tf.random_normal(batch_shape))
+ targets = tf.Variable(
+ tf.to_float(tf.greater(tf.random_uniform(batch_shape), 0.6)))
+ label_priors = tf.constant(numpy.random.uniform(size=[num_labels]),
+ dtype=tf.float32)
+
+ xent_loss, _ = loss_layers.true_positive_rate_at_false_positive_rate_loss(
+ targets, logits, target_rate, label_priors=label_priors,
+ lambdas_initializer=tf.constant_initializer(0.0))
+ xent_expected = util.weighted_sigmoid_cross_entropy_with_logits(
+ targets,
+ logits,
+ positive_weights=1.0,
+ negative_weights=0.0)
+ hinge_loss, _ = loss_layers.true_positive_rate_at_false_positive_rate_loss(
+ targets, logits, target_rate, label_priors=label_priors,
+ lambdas_initializer=tf.constant_initializer(0.0),
+ surrogate_type='hinge')
+ hinge_expected = util.weighted_hinge_loss(
+ targets,
+ logits,
+ positive_weights=1.0,
+ negative_weights=0.0)
+
+ with self.test_session():
+ tf.global_variables_initializer().run()
+ self.assertAllClose(xent_expected.eval(), xent_loss.eval())
+ self.assertAllClose(hinge_expected.eval(), hinge_loss.eval())
+
+ def testNegativesOnlyLoss(self):
+ # A special case where the loss should equal the loss on the negative
+ # examples, minus target_rate * (1 - label_priors) * maybe_log2.
+ target_rate = numpy.random.uniform()
+ num_labels = 3
+ batch_shape = [25, num_labels]
+ logits = tf.Variable(tf.random_normal(batch_shape))
+ targets = tf.zeros_like(logits)
+ label_priors = tf.constant(numpy.random.uniform(size=[num_labels]),
+ dtype=tf.float32)
+
+ xent_loss, _ = loss_layers.true_positive_rate_at_false_positive_rate_loss(
+ targets, logits, target_rate, label_priors=label_priors)
+ xent_expected = tf.subtract(
+ util.weighted_sigmoid_cross_entropy_with_logits(targets,
+ logits,
+ positive_weights=0.0,
+ negative_weights=1.0),
+ target_rate * (1.0 - label_priors) * numpy.log(2))
+ hinge_loss, _ = loss_layers.true_positive_rate_at_false_positive_rate_loss(
+ targets, logits, target_rate, label_priors=label_priors,
+ surrogate_type='hinge')
+ hinge_expected = util.weighted_hinge_loss(
+ targets, logits) - target_rate * (1.0 - label_priors)
+
+ with self.test_session():
+ tf.global_variables_initializer().run()
+ self.assertAllClose(xent_expected.eval(), xent_loss.eval())
+ self.assertAllClose(hinge_expected.eval(), hinge_loss.eval())
+
+ def testLagrangeMultiplierUpdateDirection(self):
+ for target_rate in [0.35, 0.65]:
+ for surrogate_type in ['xent', 'hinge']:
+ kwargs = {'target_rate': target_rate,
+ 'surrogate_type': surrogate_type,
+ 'scope': 'tpr-at-fpr_{}_{}'.format(target_rate,
+ surrogate_type)}
+ run_lagrange_multiplier_test(
+ global_objective=(
+ loss_layers.true_positive_rate_at_false_positive_rate_loss),
+ objective_kwargs=kwargs,
+ data_builder=_multilabel_data,
+ test_object=self)
+ kwargs['scope'] = 'other-' + kwargs['scope']
+ run_lagrange_multiplier_test(
+ global_objective=(
+ loss_layers.true_positive_rate_at_false_positive_rate_loss),
+ objective_kwargs=kwargs,
+ data_builder=_other_multilabel_data(surrogate_type),
+ test_object=self)
+
+ def testLagrangeMultiplierUpdateDirectionWithMultipleRates(self):
+ """Runs Lagrange multiplier test with multiple target rates."""
+ target_rate = [0.35, 0.65]
+ for surrogate_type in ['xent', 'hinge']:
+ kwargs = {'target_rate': target_rate,
+ 'surrogate_type': surrogate_type,
+ 'scope': 'tpr-at-fpr_{}_{}'.format(
+ '_'.join([str(target) for target in target_rate]),
+ surrogate_type)}
+ run_lagrange_multiplier_test(
+ global_objective=(
+ loss_layers.true_positive_rate_at_false_positive_rate_loss),
+ objective_kwargs=kwargs,
+ data_builder=_multilabel_data,
+ test_object=self)
+ kwargs['scope'] = 'other-' + kwargs['scope']
+ run_lagrange_multiplier_test(
+ global_objective=(
+ loss_layers.true_positive_rate_at_false_positive_rate_loss),
+ objective_kwargs=kwargs,
+ data_builder=_other_multilabel_data(surrogate_type),
+ test_object=self)
+
+ def testEquivalenceBetweenSingleAndEqualMultipleRates(self):
+ """Compares single and multiple target rates of the same value.
+
+ Checks that using a single target rate and multiple rates with the
+ same value would result in the same loss value.
+ """
+ num_labels = 2
+ target_shape = [20, num_labels]
+ logits = tf.Variable(tf.random_normal(target_shape))
+ targets = tf.Variable(
+ tf.to_float(tf.greater(tf.random_uniform(target_shape), 0.7)))
+ label_priors = tf.constant([0.34], shape=[num_labels])
+
+ multi_label_loss, _ = (
+ loss_layers.true_positive_rate_at_false_positive_rate_loss(
+ targets, logits, [0.75, 0.75], label_priors=label_priors))
+
+ single_label_loss, _ = (
+ loss_layers.true_positive_rate_at_false_positive_rate_loss(
+ targets, logits, 0.75, label_priors=label_priors))
+
+ with self.test_session() as session:
+ tf.global_variables_initializer().run()
+ multi_label_loss_val, single_label_loss_val = session.run(
+ [multi_label_loss, single_label_loss])
+ self.assertAllClose(multi_label_loss_val, single_label_loss_val)
+
+ def testEquivalenceBetweenSingleAndMultipleRates(self):
+ """Compares single and multiple target rates of different values.
+
+ Runs true_positive_rate_at_false_positive_rate_loss with multiple target
+    rates, and runs each label separately with its own target rate as a
+ scalar. Validates that the returned loss values are the same.
+ """
+    target_rate = [0.7, 0.9, 0.4]
+ num_labels = 3
+ batch_shape = [30, num_labels]
+ logits = tf.Variable(tf.random_normal(batch_shape))
+ targets = tf.Variable(
+ tf.to_float(tf.greater(tf.random_uniform(batch_shape), 0.4)))
+ label_priors = tf.constant(0.45, shape=[num_labels])
+
+ multi_label_loss, _ = (
+ loss_layers.true_positive_rate_at_false_positive_rate_loss(
+            targets, logits, target_rate, label_priors=label_priors))
+
+ single_label_losses = [
+ loss_layers.true_positive_rate_at_false_positive_rate_loss(
+ tf.expand_dims(targets[:, i], -1),
+ tf.expand_dims(logits[:, i], -1),
+            target_rate[i],
+ label_priors=label_priors[i])[0]
+ for i in range(num_labels)
+ ]
+
+ single_label_losses = tf.concat(single_label_losses, 1)
+
+ with self.test_session() as session:
+ tf.global_variables_initializer().run()
+ multi_label_loss_val, single_label_loss_val = session.run(
+ [multi_label_loss, single_label_losses])
+ self.assertAllClose(multi_label_loss_val, single_label_loss_val)
+
+
+class UtilityFunctionsTest(tf.test.TestCase):
+
+ def testTrainableDualVariable(self):
+ # Confirm correct behavior of a trainable dual variable.
+ x = tf.get_variable('primal', dtype=tf.float32, initializer=2.0)
+ y_value, y = loss_layers._create_dual_variable(
+ 'dual', shape=None, dtype=tf.float32, initializer=1.0, collections=None,
+ trainable=True, dual_rate_factor=0.3)
+ optimizer = tf.train.GradientDescentOptimizer(learning_rate=1.0)
+ update = optimizer.minimize(0.5 * tf.square(x - y_value))
+
+ with self.test_session():
+ tf.global_variables_initializer().run()
+ update.run()
+ self.assertAllClose(0.7, y.eval())
+
+ def testUntrainableDualVariable(self):
+ # Confirm correct behavior of dual variable which is not trainable.
+ x = tf.get_variable('primal', dtype=tf.float32, initializer=-2.0)
+ y_value, y = loss_layers._create_dual_variable(
+ 'dual', shape=None, dtype=tf.float32, initializer=1.0, collections=None,
+ trainable=False, dual_rate_factor=0.8)
+ optimizer = tf.train.GradientDescentOptimizer(learning_rate=1.0)
+ update = optimizer.minimize(tf.square(x) * y_value + tf.exp(y_value))
+
+ with self.test_session():
+ tf.global_variables_initializer().run()
+ update.run()
+ self.assertAllClose(1.0, y.eval())
+
+
+class BoundTest(parameterized.TestCase, tf.test.TestCase):
+
+ @parameterized.named_parameters(
+ ('_xent', 'xent', 1.0, [2.0, 1.0]),
+ ('_xent_weighted', 'xent',
+ numpy.array([0, 2, 0.5, 1, 2, 3]).reshape(6, 1), [2.5, 0]),
+ ('_hinge', 'hinge', 1.0, [2.0, 1.0]),
+ ('_hinge_weighted', 'hinge',
+ numpy.array([1.0, 2, 3, 4, 5, 6]).reshape(6, 1), [5.0, 1]))
+ def testLowerBoundMultilabel(self, surrogate_type, weights, expected):
+ labels, logits, _ = _multilabel_data()
+ lower_bound = loss_layers.true_positives_lower_bound(
+ labels, logits, weights, surrogate_type)
+
+ with self.test_session():
+ self.assertAllClose(lower_bound.eval(), expected)
+
+ @parameterized.named_parameters(
+ ('_xent', 'xent'), ('_hinge', 'hinge'))
+ def testLowerBoundOtherMultilabel(self, surrogate_type):
+ labels, logits, _ = _other_multilabel_data(surrogate_type)()
+ lower_bound = loss_layers.true_positives_lower_bound(
+ labels, logits, 1.0, surrogate_type)
+
+ with self.test_session():
+ self.assertAllClose(lower_bound.eval(), [4.0, 2.0], atol=1e-5)
+
+ @parameterized.named_parameters(
+ ('_xent', 'xent', 1.0, [1.0, 2.0]),
+ ('_xent_weighted', 'xent',
+ numpy.array([3.0, 2, 1, 0, 1, 2]).reshape(6, 1), [2.0, 1.0]),
+ ('_hinge', 'hinge', 1.0, [1.0, 2.0]),
+ ('_hinge_weighted', 'hinge',
+ numpy.array([13, 12, 11, 0.5, 0, 0.5]).reshape(6, 1), [0.5, 0.5]))
+ def testUpperBoundMultilabel(self, surrogate_type, weights, expected):
+ labels, logits, _ = _multilabel_data()
+ upper_bound = loss_layers.false_positives_upper_bound(
+ labels, logits, weights, surrogate_type)
+
+ with self.test_session():
+ self.assertAllClose(upper_bound.eval(), expected)
+
+ @parameterized.named_parameters(
+ ('_xent', 'xent'), ('_hinge', 'hinge'))
+ def testUpperBoundOtherMultilabel(self, surrogate_type):
+ labels, logits, _ = _other_multilabel_data(surrogate_type)()
+ upper_bound = loss_layers.false_positives_upper_bound(
+ labels, logits, 1.0, surrogate_type)
+
+ with self.test_session():
+ self.assertAllClose(upper_bound.eval(), [2.0, 4.0], atol=1e-5)
+
+ @parameterized.named_parameters(
+ ('_lower', 'lower'), ('_upper', 'upper'))
+ def testThreeDimensionalLogits(self, bound):
+ bound_function = loss_layers.false_positives_upper_bound
+ if bound == 'lower':
+ bound_function = loss_layers.true_positives_lower_bound
+ random_labels = numpy.float32(numpy.random.uniform(size=[2, 3]) > 0.5)
+ random_logits = numpy.float32(numpy.random.randn(2, 3, 2))
+ first_slice_logits = random_logits[:, :, 0].reshape(2, 3)
+ second_slice_logits = random_logits[:, :, 1].reshape(2, 3)
+
+ full_bound = bound_function(
+ tf.constant(random_labels), tf.constant(random_logits), 1.0, 'xent')
+ first_slice_bound = bound_function(tf.constant(random_labels),
+ tf.constant(first_slice_logits),
+ 1.0,
+ 'xent')
+ second_slice_bound = bound_function(tf.constant(random_labels),
+ tf.constant(second_slice_logits),
+ 1.0,
+ 'xent')
+ stacked_bound = tf.stack([first_slice_bound, second_slice_bound], axis=1)
+
+ with self.test_session():
+ self.assertAllClose(full_bound.eval(), stacked_bound.eval())
+
+
+def run_lagrange_multiplier_test(global_objective,
+ objective_kwargs,
+ data_builder,
+ test_object):
+ """Runs a test for the Lagrange multiplier update of `global_objective`.
+
+ The test checks that the constraint for `global_objective` is satisfied on
+ the first label of the data produced by `data_builder` but not the second.
+
+ Args:
+ global_objective: One of the global objectives.
+ objective_kwargs: A dictionary of keyword arguments to pass to
+ `global_objective`. Must contain an entry for the constraint argument
+ of `global_objective`, e.g. 'target_rate' or 'target_precision'.
+ data_builder: A function which returns tensors corresponding to labels,
+ logits, and label priors.
+ test_object: An instance of tf.test.TestCase.
+ """
+ # Construct global objective kwargs from a copy of `objective_kwargs`.
+ kwargs = dict(objective_kwargs)
+ targets, logits, priors = data_builder()
+ kwargs['labels'] = targets
+ kwargs['logits'] = logits
+ kwargs['label_priors'] = priors
+
+ loss, output_dict = global_objective(**kwargs)
+ lambdas = tf.squeeze(output_dict['lambdas'])
+ opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
+ update_op = opt.minimize(loss, var_list=[output_dict['lambdas']])
+
+ with test_object.test_session() as session:
+ tf.global_variables_initializer().run()
+ lambdas_before = session.run(lambdas)
+ session.run(update_op)
+ lambdas_after = session.run(lambdas)
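+    # The data builder satisfies the constraint on the first label but
+    # violates it on the second, so one gradient step on the dual variables
+    # should decrease lambdas[0] and increase lambdas[1].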
+ test_object.assertLess(lambdas_after[0], lambdas_before[0])
+ test_object.assertGreater(lambdas_after[1], lambdas_before[1])
+
+
+class CrossFunctionTest(parameterized.TestCase, tf.test.TestCase):
+
+ @parameterized.named_parameters(
+ ('_auc01xent', loss_layers.precision_recall_auc_loss, {
+ 'precision_range': (0.0, 1.0), 'surrogate_type': 'xent'
+ }),
+ ('_auc051xent', loss_layers.precision_recall_auc_loss, {
+ 'precision_range': (0.5, 1.0), 'surrogate_type': 'xent'
+ }),
+    ('_auc01hinge', loss_layers.precision_recall_auc_loss, {
+ 'precision_range': (0.0, 1.0), 'surrogate_type': 'hinge'
+ }),
+ ('_ratp04', loss_layers.recall_at_precision_loss, {
+ 'target_precision': 0.4, 'surrogate_type': 'xent'
+ }),
+ ('_ratp066', loss_layers.recall_at_precision_loss, {
+ 'target_precision': 0.66, 'surrogate_type': 'xent'
+ }),
+ ('_ratp07_hinge', loss_layers.recall_at_precision_loss, {
+ 'target_precision': 0.7, 'surrogate_type': 'hinge'
+ }),
+ ('_fpattp066', loss_layers.false_positive_rate_at_true_positive_rate_loss,
+ {'target_rate': 0.66, 'surrogate_type': 'xent'}),
+ ('_fpattp046', loss_layers.false_positive_rate_at_true_positive_rate_loss,
+ {
+ 'target_rate': 0.46, 'surrogate_type': 'xent'
+ }),
+ ('_fpattp076_hinge',
+ loss_layers.false_positive_rate_at_true_positive_rate_loss, {
+ 'target_rate': 0.76, 'surrogate_type': 'hinge'
+ }),
+ ('_fpattp036_hinge',
+ loss_layers.false_positive_rate_at_true_positive_rate_loss, {
+ 'target_rate': 0.36, 'surrogate_type': 'hinge'
+ }),
+ )
+  def testWeightedGlobalObjective(self,
+ global_objective,
+ objective_kwargs):
+ """Runs a test of `global_objective` with per-example weights.
+
+ Args:
+ global_objective: One of the global objectives.
+ objective_kwargs: A dictionary of keyword arguments to pass to
+ `global_objective`. Must contain keys 'surrogate_type', and the keyword
+ for the constraint argument of `global_objective`, e.g. 'target_rate' or
+ 'target_precision'.
+ """
+ logits_positives = tf.constant([1, -0.5, 3], shape=[3, 1])
+ logits_negatives = tf.constant([-0.5, 1, -1, -1, -0.5, 1], shape=[6, 1])
+
+ # Dummy tensor is used to compute the gradients.
+ dummy = tf.constant(1.0)
+ logits = tf.concat([logits_positives, logits_negatives], 0)
+ logits = tf.multiply(logits, dummy)
+ targets = tf.constant([1, 1, 1, 0, 0, 0, 0, 0, 0],
+ shape=[9, 1], dtype=tf.float32)
+ priors = tf.constant(1.0/3.0, shape=[1])
+ weights = tf.constant([1, 1, 1, 0, 0, 0, 2, 2, 2],
+ shape=[9, 1], dtype=tf.float32)
+
+ # Construct global objective kwargs.
+ objective_kwargs['labels'] = targets
+ objective_kwargs['logits'] = logits
+ objective_kwargs['label_priors'] = priors
+
+ scope = 'weighted_test'
+ # Unweighted loss.
+ objective_kwargs['scope'] = scope + '_plain'
+ raw_loss, update = global_objective(**objective_kwargs)
+ loss = tf.reduce_sum(raw_loss)
+
+ # Weighted loss.
+ objective_kwargs['weights'] = weights
+ objective_kwargs['scope'] = scope + '_weighted'
+ raw_weighted_loss, weighted_update = global_objective(**objective_kwargs)
+ weighted_loss = tf.reduce_sum(raw_weighted_loss)
+
+ lambdas = tf.contrib.framework.get_unique_variable(scope + '_plain/lambdas')
+ weighted_lambdas = tf.contrib.framework.get_unique_variable(
+ scope + '_weighted/lambdas')
+ logits_gradient = tf.gradients(loss, dummy)
+ weighted_logits_gradient = tf.gradients(weighted_loss, dummy)
+
+ with self.test_session() as session:
+ tf.global_variables_initializer().run()
+ self.assertAllClose(loss.eval(), weighted_loss.eval())
+
+ logits_grad, weighted_logits_grad = session.run(
+ [logits_gradient, weighted_logits_gradient])
+ self.assertAllClose(logits_grad, weighted_logits_grad)
+
+ session.run([update, weighted_update])
+ lambdas_value, weighted_lambdas_value = session.run(
+ [lambdas, weighted_lambdas])
+ self.assertAllClose(lambdas_value, weighted_lambdas_value)
+
+ @parameterized.named_parameters(
+ ('_prauc051xent', loss_layers.precision_recall_auc_loss, {
+ 'precision_range': (0.5, 1.0), 'surrogate_type': 'xent'
+ }),
+ ('_prauc01hinge', loss_layers.precision_recall_auc_loss, {
+ 'precision_range': (0.0, 1.0), 'surrogate_type': 'hinge'
+ }),
+ ('_rocxent', loss_layers.roc_auc_loss, {'surrogate_type': 'xent'}),
+      ('_rochinge', loss_layers.roc_auc_loss, {'surrogate_type': 'hinge'}),
+ ('_ratp04', loss_layers.recall_at_precision_loss, {
+ 'target_precision': 0.4, 'surrogate_type': 'xent'
+ }),
+ ('_ratp07_hinge', loss_layers.recall_at_precision_loss, {
+ 'target_precision': 0.7, 'surrogate_type': 'hinge'
+ }),
+      ('_patr04', loss_layers.precision_at_recall_loss, {
+ 'target_recall': 0.4, 'surrogate_type': 'xent'
+ }),
+      ('_patr07_hinge', loss_layers.precision_at_recall_loss, {
+ 'target_recall': 0.7, 'surrogate_type': 'hinge'
+ }),
+ ('_fpattp046', loss_layers.false_positive_rate_at_true_positive_rate_loss,
+ {
+ 'target_rate': 0.46, 'surrogate_type': 'xent'
+ }),
+ ('_fpattp036_hinge',
+ loss_layers.false_positive_rate_at_true_positive_rate_loss, {
+ 'target_rate': 0.36, 'surrogate_type': 'hinge'
+ }),
+ ('_tpatfp076', loss_layers.true_positive_rate_at_false_positive_rate_loss,
+ {
+ 'target_rate': 0.76, 'surrogate_type': 'xent'
+ }),
+ ('_tpatfp036_hinge',
+ loss_layers.true_positive_rate_at_false_positive_rate_loss, {
+ 'target_rate': 0.36, 'surrogate_type': 'hinge'
+ }),
+ )
+ def testVectorAndMatrixLabelEquivalence(self,
+ global_objective,
+ objective_kwargs):
+ """Tests equivalence between label shape [batch_size] or [batch_size, 1]."""
+ vector_labels = tf.constant([1.0, 1.0, 0.0, 0.0], shape=[4])
+ vector_logits = tf.constant([1.0, 0.1, 0.1, -1.0], shape=[4])
+
+ # Construct vector global objective kwargs and loss.
+ vector_kwargs = objective_kwargs.copy()
+ vector_kwargs['labels'] = vector_labels
+ vector_kwargs['logits'] = vector_logits
+ vector_loss, _ = global_objective(**vector_kwargs)
+ vector_loss_sum = tf.reduce_sum(vector_loss)
+
+ # Construct matrix global objective kwargs and loss.
+ matrix_kwargs = objective_kwargs.copy()
+ matrix_kwargs['labels'] = tf.expand_dims(vector_labels, 1)
+ matrix_kwargs['logits'] = tf.expand_dims(vector_logits, 1)
+ matrix_loss, _ = global_objective(**matrix_kwargs)
+ matrix_loss_sum = tf.reduce_sum(matrix_loss)
+
+ self.assertEqual(1, vector_loss.get_shape().ndims)
+ self.assertEqual(2, matrix_loss.get_shape().ndims)
+
+ with self.test_session():
+ tf.global_variables_initializer().run()
+ self.assertAllClose(vector_loss_sum.eval(), matrix_loss_sum.eval())
+
+ @parameterized.named_parameters(
+ ('_prauc', loss_layers.precision_recall_auc_loss, None),
+ ('_roc', loss_layers.roc_auc_loss, None),
+ ('_rap', loss_layers.recall_at_precision_loss, {'target_precision': 0.8}),
+ ('_patr', loss_layers.precision_at_recall_loss, {'target_recall': 0.7}),
+ ('_fpattp', loss_layers.false_positive_rate_at_true_positive_rate_loss,
+ {'target_rate': 0.9}),
+ ('_tpatfp', loss_layers.true_positive_rate_at_false_positive_rate_loss,
+ {'target_rate': 0.1})
+ )
+ def testUnknownBatchSize(self, global_objective, objective_kwargs):
+ # Tests that there are no errors when the batch size is not known.
+ batch_shape = [5, 2]
+ logits = tf.placeholder(tf.float32)
+ logits_feed = numpy.random.randn(*batch_shape)
+ labels = tf.placeholder(tf.float32)
+ labels_feed = logits_feed > 0.1
+ logits.set_shape([None, 2])
+ labels.set_shape([None, 2])
+
+ if objective_kwargs is None:
+ objective_kwargs = {}
+
+ placeholder_kwargs = objective_kwargs.copy()
+ placeholder_kwargs['labels'] = labels
+ placeholder_kwargs['logits'] = logits
+ placeholder_loss, _ = global_objective(**placeholder_kwargs)
+
+ kwargs = objective_kwargs.copy()
+ kwargs['labels'] = labels_feed
+ kwargs['logits'] = logits_feed
+ loss, _ = global_objective(**kwargs)
+
+ with self.test_session() as session:
+ tf.global_variables_initializer().run()
+ feed_loss_val = session.run(placeholder_loss,
+ feed_dict={logits: logits_feed,
+ labels: labels_feed})
+ loss_val = session.run(loss)
+ self.assertAllClose(feed_loss_val, loss_val)
+
+
+# Both sets of logits below are designed so that the surrogate precision and
+# recall (true positive rate) of class 1 are ~ 2/3, and the same surrogates for
+# class 2 are ~ 1/3. The false positive rate surrogates are ~ 1/3 and 2/3.
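+# As a rough sanity check (illustrative arithmetic): for class 1, the positive
+# logits [0, 16, 14] contribute surrogate true positives of about 0 + 1 + 1 = 2
+# out of 3 (recall ~ 2/3), and the negative logits [-17, -15, 0] contribute
+# about one surrogate false positive (precision ~ 2/3, FPR ~ 1/3); class 2 is
+# the mirror image, consistent with the [2, 1] and [1, 2] bounds asserted in
+# BoundTest above.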
+def _multilabel_data():
+ targets = tf.constant([1.0, 1.0, 1.0, 0.0, 0.0, 0.0], shape=[6, 1])
+ targets = tf.concat([targets, targets], 1)
+ logits_positives = tf.constant([[0.0, 15],
+ [16, 0.0],
+ [14, 0.0]], shape=[3, 2])
+ logits_negatives = tf.constant([[-17, 0.0],
+ [-15, 0.0],
+ [0.0, -101]], shape=[3, 2])
+ logits = tf.concat([logits_positives, logits_negatives], 0)
+ priors = tf.constant(0.5, shape=[2])
+
+ return targets, logits, priors
+
+
+def _other_multilabel_data(surrogate_type):
+ targets = tf.constant(
+ [1.0] * 6 + [0.0] * 6, shape=[12, 1])
+ targets = tf.concat([targets, targets], 1)
+ logits_positives = tf.constant([[0.0, 13],
+ [12, 0.0],
+ [15, 0.0],
+ [0.0, 30],
+ [13, 0.0],
+ [18, 0.0]], shape=[6, 2])
+ # A score of cost_2 incurs a loss of ~2.0.
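+  # (Illustrative arithmetic: for 'hinge' that score is 1.0, since
+  # max(0, 1 + 1.0) = 2.0; for 'xent' it is 1.09861229 ~= ln(3), since the
+  # cross-entropy surrogate scaled by 1/log(2), as used by the bound helpers,
+  # gives log(1 + exp(ln 3)) / log(2) = log(4) / log(2) = 2.)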
+ cost_2 = 1.0 if surrogate_type == 'hinge' else 1.09861229
+ logits_negatives = tf.constant([[-16, cost_2],
+ [-15, cost_2],
+ [cost_2, -111],
+ [-133, -14,],
+ [-14.0100101, -16,],
+ [-19.888828882, -101]], shape=[6, 2])
+ logits = tf.concat([logits_positives, logits_negatives], 0)
+ priors = tf.constant(0.5, shape=[2])
+
+ def builder():
+ return targets, logits, priors
+
+ return builder
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/global_objectives/test_all.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/global_objectives/test_all.py"
new file mode 100644
index 00000000..d7e439e2
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/global_objectives/test_all.py"
@@ -0,0 +1,37 @@
+# Copyright 2018 The TensorFlow Global Objectives Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Runs all unit tests in the Global Objectives package.
+
+Requires that TensorFlow and abseil (https://github.com/abseil/abseil-py) be
+installed on your machine. Command to run the tests:
+python test_all.py
+
+"""
+
+import os
+import sys
+import unittest
+
+this_file = os.path.realpath(__file__)
+start_dir = os.path.dirname(this_file)
+parent_dir = os.path.dirname(start_dir)
+
+sys.path.append(parent_dir)
+loader = unittest.TestLoader()
+suite = loader.discover(start_dir, pattern='*_test.py')
+
+runner = unittest.TextTestRunner(verbosity=2)
+runner.run(suite)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/global_objectives/util.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/global_objectives/util.py"
new file mode 100644
index 00000000..e2b287a9
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/global_objectives/util.py"
@@ -0,0 +1,348 @@
+# Copyright 2018 The TensorFlow Global Objectives Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Contains utility functions for the global objectives library."""
+
+# Dependency imports
+import tensorflow as tf
+
+
+def weighted_sigmoid_cross_entropy_with_logits(labels,
+ logits,
+ positive_weights=1.0,
+ negative_weights=1.0,
+ name=None):
+ """Computes a weighting of sigmoid cross entropy given `logits`.
+
+ Measures the weighted probability error in discrete classification tasks in
+ which classes are independent and not mutually exclusive. For instance, one
+ could perform multilabel classification where a picture can contain both an
+ elephant and a dog at the same time. The class weight multiplies the
+ different types of errors.
+ For brevity, let `x = logits`, `z = labels`, `c = positive_weights`,
+  `d = negative_weights`. The
+  weighted logistic loss is
+
+ ```
+ c * z * -log(sigmoid(x)) + d * (1 - z) * -log(1 - sigmoid(x))
+ = c * z * -log(1 / (1 + exp(-x))) - d * (1 - z) * log(exp(-x) / (1 + exp(-x)))
+ = c * z * log(1 + exp(-x)) + d * (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
+ = c * z * log(1 + exp(-x)) + d * (1 - z) * (x + log(1 + exp(-x)))
+    = (1 - z) * x * d + (d * (1 - z) + c * z) * log(1 + exp(-x))
+ = - d * x * z + d * x + (d - d * z + c * z ) * log(1 + exp(-x))
+ ```
+
+ To ensure stability and avoid overflow, the implementation uses the identity
+ log(1 + exp(-x)) = max(0,-x) + log(1 + exp(-abs(x)))
+ and the result is computed as
+
+ ```
+ = -d * x * z + d * x
+ + (d - d * z + c * z ) * (max(0,-x) + log(1 + exp(-abs(x))))
+ ```
+
+ Note that the loss is NOT an upper bound on the 0-1 loss, unless it is divided
+ by log(2).
+
+ Args:
+ labels: A `Tensor` of type `float32` or `float64`. `labels` can be a 2D
+ tensor with shape [batch_size, num_labels] or a 3D tensor with shape
+ [batch_size, num_labels, K].
+ logits: A `Tensor` of the same type and shape as `labels`. If `logits` has
+ shape [batch_size, num_labels, K], the loss is computed separately on each
+ slice [:, :, k] of `logits`.
+ positive_weights: A `Tensor` that holds positive weights and has the
+ following semantics according to its shape:
+ scalar - A global positive weight.
+ 1D tensor - must be of size K, a weight for each 'attempt'
+ 2D tensor - of size [num_labels, K'] where K' is either K or 1.
+ The `positive_weights` will be expanded to the left to match the
+ dimensions of logits and labels.
+    negative_weights: A `Tensor` that holds negative weights and has semantics
+      identical to positive_weights.
+ name: A name for the operation (optional).
+
+ Returns:
+ A `Tensor` of the same shape as `logits` with the componentwise
+ weighted logistic losses.
+ """
+ with tf.name_scope(
+ name,
+ 'weighted_logistic_loss',
+ [logits, labels, positive_weights, negative_weights]) as name:
+ labels, logits, positive_weights, negative_weights = prepare_loss_args(
+ labels, logits, positive_weights, negative_weights)
+
+ softplus_term = tf.add(tf.maximum(-logits, 0.0),
+ tf.log(1.0 + tf.exp(-tf.abs(logits))))
+ weight_dependent_factor = (
+ negative_weights + (positive_weights - negative_weights) * labels)
+ return (negative_weights * (logits - labels * logits) +
+ weight_dependent_factor * softplus_term)
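+
+
+# A small worked example of the return value above (illustrative only): for a
+# single positive label z = 1 with logit x = 0, c = positive_weights = 2 and
+# d = negative_weights = 1, the expression evaluates to
+#   d * (x - z * x) + (d + (c - d) * z) * log(1 + exp(-x))
+#   = 0 + 2 * log(2) ~= 1.386,
+# i.e. twice the unweighted sigmoid cross entropy at x = 0, as expected.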
+
+
+def weighted_hinge_loss(labels,
+ logits,
+ positive_weights=1.0,
+ negative_weights=1.0,
+ name=None):
+ """Computes weighted hinge loss given logits `logits`.
+
+ The loss applies to multi-label classification tasks where labels are
+ independent and not mutually exclusive. See also
+ `weighted_sigmoid_cross_entropy_with_logits`.
+
+ Args:
+ labels: A `Tensor` of type `float32` or `float64`. Each entry must be
+ either 0 or 1. `labels` can be a 2D tensor with shape
+ [batch_size, num_labels] or a 3D tensor with shape
+ [batch_size, num_labels, K].
+ logits: A `Tensor` of the same type and shape as `labels`. If `logits` has
+ shape [batch_size, num_labels, K], the loss is computed separately on each
+ slice [:, :, k] of `logits`.
+ positive_weights: A `Tensor` that holds positive weights and has the
+ following semantics according to its shape:
+ scalar - A global positive weight.
+ 1D tensor - must be of size K, a weight for each 'attempt'
+ 2D tensor - of size [num_labels, K'] where K' is either K or 1.
+ The `positive_weights` will be expanded to the left to match the
+ dimensions of logits and labels.
+    negative_weights: A `Tensor` that holds negative weights and has semantics
+      identical to positive_weights.
+ name: A name for the operation (optional).
+
+ Returns:
+ A `Tensor` of the same shape as `logits` with the componentwise
+ weighted hinge loss.
+ """
+ with tf.name_scope(
+ name, 'weighted_hinge_loss',
+ [logits, labels, positive_weights, negative_weights]) as name:
+ labels, logits, positive_weights, negative_weights = prepare_loss_args(
+ labels, logits, positive_weights, negative_weights)
+
+ positives_term = positive_weights * labels * tf.maximum(1.0 - logits, 0)
+ negatives_term = (negative_weights * (1.0 - labels)
+ * tf.maximum(1.0 + logits, 0))
+ return positives_term + negatives_term
+
+
+def weighted_surrogate_loss(labels,
+ logits,
+ surrogate_type='xent',
+ positive_weights=1.0,
+ negative_weights=1.0,
+ name=None):
+ """Returns either weighted cross-entropy or hinge loss.
+
+  For example, `surrogate_type='xent'` returns the weighted cross-entropy
+  loss.
+
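+  A minimal usage sketch (shapes and values are illustrative only):
+
+  ```
+  loss = weighted_surrogate_loss(labels, logits, surrogate_type='hinge',
+                                 positive_weights=2.0, negative_weights=1.0)
+  ```
+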
+ Args:
+ labels: A `Tensor` of type `float32` or `float64`. Each entry must be
+ between 0 and 1. `labels` can be a 2D tensor with shape
+ [batch_size, num_labels] or a 3D tensor with shape
+ [batch_size, num_labels, K].
+ logits: A `Tensor` of the same type and shape as `labels`. If `logits` has
+ shape [batch_size, num_labels, K], each slice [:, :, k] represents an
+ 'attempt' to predict `labels` and the loss is computed per slice.
+ surrogate_type: A string that determines which loss to return, supports
+ 'xent' for cross-entropy and 'hinge' for hinge loss.
+ positive_weights: A `Tensor` that holds positive weights and has the
+ following semantics according to its shape:
+ scalar - A global positive weight.
+ 1D tensor - must be of size K, a weight for each 'attempt'
+ 2D tensor - of size [num_labels, K'] where K' is either K or 1.
+ The `positive_weights` will be expanded to the left to match the
+ dimensions of logits and labels.
+    negative_weights: A `Tensor` that holds positive weights and has semantics
+      identical to positive_weights.
+ name: A name for the operation (optional).
+
+ Returns:
+    The weighted loss.
+
+ Raises:
+ ValueError: If value of `surrogate_type` is not supported.
+ """
+ with tf.name_scope(
+ name, 'weighted_loss',
+ [logits, labels, surrogate_type, positive_weights,
+ negative_weights]) as name:
+ if surrogate_type == 'xent':
+ return weighted_sigmoid_cross_entropy_with_logits(
+ logits=logits,
+ labels=labels,
+ positive_weights=positive_weights,
+ negative_weights=negative_weights,
+ name=name)
+ elif surrogate_type == 'hinge':
+ return weighted_hinge_loss(
+ logits=logits,
+ labels=labels,
+ positive_weights=positive_weights,
+ negative_weights=negative_weights,
+ name=name)
+ raise ValueError('surrogate_type %s not supported.' % surrogate_type)
+
+
+def expand_outer(tensor, rank):
+ """Expands the given `Tensor` outwards to a target rank.
+
+  For example, if rank = 3 and tensor.shape is [3, 4], this function will
+  expand the tensor so that the resulting shape is [1, 3, 4].
+
+ Args:
+ tensor: The tensor to expand.
+ rank: The target dimension.
+
+ Returns:
+ The expanded tensor.
+
+ Raises:
+ ValueError: If rank of `tensor` is unknown, or if `rank` is smaller than
+ the rank of `tensor`.
+ """
+ if tensor.get_shape().ndims is None:
+ raise ValueError('tensor dimension must be known.')
+ if len(tensor.get_shape()) > rank:
+ raise ValueError(
+ '`rank` must be at least the current tensor dimension: (%s vs %s).' %
+ (rank, len(tensor.get_shape())))
+ while len(tensor.get_shape()) < rank:
+ tensor = tf.expand_dims(tensor, 0)
+ return tensor
+
+
+def build_label_priors(labels,
+ weights=None,
+ positive_pseudocount=1.0,
+ negative_pseudocount=1.0,
+ variables_collections=None):
+ """Creates an op to maintain and update label prior probabilities.
+
+ For each label, the label priors are estimated as
+ (P + sum_i w_i y_i) / (P + N + sum_i w_i),
+ where y_i is the ith label, w_i is the ith weight, P is a pseudo-count of
+ positive labels, and N is a pseudo-count of negative labels. The index i
+ ranges over all labels observed during all evaluations of the returned op.
+
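+  For example, with the default pseudo-counts P = N = 1 and unit weights, a
+  single batch whose column for some label is [1, 0, 1] gives that label a
+  prior of (1 + 2) / (2 + 3) = 0.6.
+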
+ Args:
+ labels: A `Tensor` with shape [batch_size, num_labels]. Entries should be
+ in [0, 1].
+ weights: Coefficients representing the weight of each label. Must be either
+ a Tensor of shape [batch_size, num_labels] or `None`, in which case each
+ weight is treated as 1.0.
+ positive_pseudocount: Number of positive labels used to initialize the label
+ priors.
+ negative_pseudocount: Number of negative labels used to initialize the label
+ priors.
+ variables_collections: Optional list of collections for created variables.
+
+ Returns:
+ label_priors: An op to update the weighted label_priors. Gives the
+ current value of the label priors when evaluated.
+ """
+ dtype = labels.dtype.base_dtype
+ num_labels = get_num_labels(labels)
+
+ if weights is None:
+ weights = tf.ones_like(labels)
+
+ # We disable partitioning while constructing dual variables because they will
+ # be updated with assign, which is not available for partitioned variables.
+ partitioner = tf.get_variable_scope().partitioner
+ try:
+ tf.get_variable_scope().set_partitioner(None)
+ # Create variable and update op for weighted label counts.
+ weighted_label_counts = tf.contrib.framework.model_variable(
+ name='weighted_label_counts',
+ shape=[num_labels],
+ dtype=dtype,
+ initializer=tf.constant_initializer(
+ [positive_pseudocount] * num_labels, dtype=dtype),
+ collections=variables_collections,
+ trainable=False)
+ weighted_label_counts_update = weighted_label_counts.assign_add(
+ tf.reduce_sum(weights * labels, 0))
+
+ # Create variable and update op for the sum of the weights.
+ weight_sum = tf.contrib.framework.model_variable(
+ name='weight_sum',
+ shape=[num_labels],
+ dtype=dtype,
+ initializer=tf.constant_initializer(
+ [positive_pseudocount + negative_pseudocount] * num_labels,
+ dtype=dtype),
+ collections=variables_collections,
+ trainable=False)
+ weight_sum_update = weight_sum.assign_add(tf.reduce_sum(weights, 0))
+
+ finally:
+ tf.get_variable_scope().set_partitioner(partitioner)
+
+ label_priors = tf.div(
+ weighted_label_counts_update,
+ weight_sum_update)
+ return label_priors
+
+
+def convert_and_cast(value, name, dtype):
+ """Convert input to tensor and cast to dtype.
+
+ Args:
+ value: An object whose type has a registered Tensor conversion function,
+ e.g. python numerical type or numpy array.
+ name: Name to use for the new Tensor, if one is created.
+ dtype: Optional element type for the returned tensor.
+
+ Returns:
+ A tensor.
+ """
+ return tf.cast(tf.convert_to_tensor(value, name=name), dtype=dtype)
+
+
+def prepare_loss_args(labels, logits, positive_weights, negative_weights):
+ """Prepare arguments for weighted loss functions.
+
+ If needed, will convert given arguments to appropriate type and shape.
+
+ Args:
+    labels: Labels or targets of the loss function.
+ logits: Logits of the loss function.
+ positive_weights: Weight on the positive examples.
+ negative_weights: Weight on the negative examples.
+
+ Returns:
+ Converted labels, logits, positive_weights, negative_weights.
+ """
+ logits = tf.convert_to_tensor(logits, name='logits')
+ labels = convert_and_cast(labels, 'labels', logits.dtype)
+ if len(labels.get_shape()) == 2 and len(logits.get_shape()) == 3:
+ labels = tf.expand_dims(labels, [2])
+
+ positive_weights = convert_and_cast(positive_weights, 'positive_weights',
+ logits.dtype)
+ positive_weights = expand_outer(positive_weights, logits.get_shape().ndims)
+ negative_weights = convert_and_cast(negative_weights, 'negative_weights',
+ logits.dtype)
+ negative_weights = expand_outer(negative_weights, logits.get_shape().ndims)
+ return labels, logits, positive_weights, negative_weights
+
+
+def get_num_labels(labels_or_logits):
+ """Returns the number of labels inferred from labels_or_logits."""
+ if labels_or_logits.get_shape().ndims <= 1:
+ return 1
+ return labels_or_logits.get_shape()[1].value
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/global_objectives/util_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/global_objectives/util_test.py"
new file mode 100644
index 00000000..195252a5
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/global_objectives/util_test.py"
@@ -0,0 +1,333 @@
+# Copyright 2018 The TensorFlow Global Objectives Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for global objectives util functions."""
+
+# Dependency imports
+from absl.testing import parameterized
+import numpy as np
+import tensorflow as tf
+
+from global_objectives import util
+
+
+def weighted_sigmoid_cross_entropy(targets, logits, weight):
+ return (weight * targets * np.log(1.0 + np.exp(-logits)) + (
+ (1.0 - targets) * np.log(1.0 + 1.0 / np.exp(-logits))))
+
+
+def hinge_loss(labels, logits):
+ # Mostly copied from tensorflow.python.ops.losses but with loss per datapoint.
+ labels = tf.to_float(labels)
+ all_ones = tf.ones_like(labels)
+ labels = tf.subtract(2 * labels, all_ones)
+ return tf.nn.relu(tf.subtract(all_ones, tf.multiply(labels, logits)))
+
+
+class WeightedSigmoidCrossEntropyTest(parameterized.TestCase, tf.test.TestCase):
+
+ def testTrivialCompatibilityWithSigmoidCrossEntropy(self):
+ """Tests compatibility with unweighted function with weight 1.0."""
+ x_shape = [300, 10]
+ targets = np.random.random_sample(x_shape).astype(np.float32)
+ logits = np.random.randn(*x_shape).astype(np.float32)
+ weighted_loss = util.weighted_sigmoid_cross_entropy_with_logits(
+ targets,
+ logits)
+ expected_loss = (
+ tf.contrib.nn.deprecated_flipped_sigmoid_cross_entropy_with_logits(
+ logits, targets))
+ with self.test_session():
+ self.assertAllClose(expected_loss.eval(),
+ weighted_loss.eval(),
+ atol=0.000001)
+
+ def testNonTrivialCompatibilityWithSigmoidCrossEntropy(self):
+ """Tests use of an arbitrary weight (4.12)."""
+ x_shape = [300, 10]
+ targets = np.random.random_sample(x_shape).astype(np.float32)
+ logits = np.random.randn(*x_shape).astype(np.float32)
+ weight = 4.12
+ weighted_loss = util.weighted_sigmoid_cross_entropy_with_logits(
+ targets,
+ logits,
+ weight,
+ weight)
+ expected_loss = (
+ weight *
+ tf.contrib.nn.deprecated_flipped_sigmoid_cross_entropy_with_logits(
+ logits, targets))
+ with self.test_session():
+ self.assertAllClose(expected_loss.eval(),
+ weighted_loss.eval(),
+ atol=0.000001)
+
+ def testDifferentSizeWeightedSigmoidCrossEntropy(self):
+ """Tests correctness on 3D tensors.
+
+ Tests that the function works as expected when logits is a 3D tensor and
+ targets is a 2D tensor.
+ """
+ targets_shape = [30, 4]
+ logits_shape = [targets_shape[0], targets_shape[1], 3]
+ targets = np.random.random_sample(targets_shape).astype(np.float32)
+ logits = np.random.randn(*logits_shape).astype(np.float32)
+
+ weight_vector = [2.0, 3.0, 13.0]
+ loss = util.weighted_sigmoid_cross_entropy_with_logits(targets,
+ logits,
+ weight_vector)
+
+ with self.test_session():
+ loss = loss.eval()
+ for i in range(0, len(weight_vector)):
+ expected = weighted_sigmoid_cross_entropy(targets, logits[:, :, i],
+ weight_vector[i])
+ self.assertAllClose(loss[:, :, i], expected, atol=0.000001)
+
+ @parameterized.parameters((300, 10, 0.3), (20, 4, 2.0), (30, 4, 3.9))
+ def testWeightedSigmoidCrossEntropy(self, batch_size, num_labels, weight):
+    """Tests that the tf and numpy functions agree on many instances."""
+ x_shape = [batch_size, num_labels]
+ targets = np.random.random_sample(x_shape).astype(np.float32)
+ logits = np.random.randn(*x_shape).astype(np.float32)
+
+ with self.test_session():
+ loss = util.weighted_sigmoid_cross_entropy_with_logits(
+ targets,
+ logits,
+ weight,
+ 1.0,
+ name='weighted-loss')
+ expected = weighted_sigmoid_cross_entropy(targets, logits, weight)
+ self.assertAllClose(expected, loss.eval(), atol=0.000001)
+
+ def testGradients(self):
+ """Tests that weighted loss gradients behave as expected."""
+ dummy_tensor = tf.constant(1.0)
+
+ positives_shape = [10, 1]
+ positives_logits = dummy_tensor * tf.Variable(
+ tf.random_normal(positives_shape) + 1.0)
+ positives_targets = tf.ones(positives_shape)
+ positives_weight = 4.6
+ positives_loss = (
+ tf.contrib.nn.deprecated_flipped_sigmoid_cross_entropy_with_logits(
+ positives_logits, positives_targets) * positives_weight)
+
+ negatives_shape = [190, 1]
+ negatives_logits = dummy_tensor * tf.Variable(
+ tf.random_normal(negatives_shape))
+ negatives_targets = tf.zeros(negatives_shape)
+ negatives_weight = 0.9
+ negatives_loss = (
+ tf.contrib.nn.deprecated_flipped_sigmoid_cross_entropy_with_logits(
+ negatives_logits, negatives_targets) * negatives_weight)
+
+ all_logits = tf.concat([positives_logits, negatives_logits], 0)
+ all_targets = tf.concat([positives_targets, negatives_targets], 0)
+ weighted_loss = tf.reduce_sum(
+ util.weighted_sigmoid_cross_entropy_with_logits(
+ all_targets, all_logits, positives_weight, negatives_weight))
+ weighted_gradients = tf.gradients(weighted_loss, dummy_tensor)
+
+ expected_loss = tf.add(
+ tf.reduce_sum(positives_loss),
+ tf.reduce_sum(negatives_loss))
+ expected_gradients = tf.gradients(expected_loss, dummy_tensor)
+
+ with tf.Session() as session:
+ tf.global_variables_initializer().run()
+ grad, expected_grad = session.run(
+ [weighted_gradients, expected_gradients])
+ self.assertAllClose(grad, expected_grad)
+
+ def testDtypeFlexibility(self):
+ """Tests the loss on inputs of varying data types."""
+ shape = [20, 3]
+ logits = np.random.randn(*shape)
+ targets = tf.truncated_normal(shape)
+ positive_weights = tf.constant(3, dtype=tf.int64)
+ negative_weights = 1
+
+ loss = util.weighted_sigmoid_cross_entropy_with_logits(
+ targets, logits, positive_weights, negative_weights)
+
+ with self.test_session():
+ self.assertEqual(loss.eval().dtype, np.float)
+
+
+class WeightedHingeLossTest(tf.test.TestCase):
+
+ def testTrivialCompatibilityWithHinge(self):
+ # Tests compatibility with unweighted hinge loss.
+ x_shape = [55, 10]
+ logits = tf.constant(np.random.randn(*x_shape).astype(np.float32))
+ targets = tf.to_float(tf.constant(np.random.random_sample(x_shape) > 0.3))
+ weighted_loss = util.weighted_hinge_loss(targets, logits)
+ expected_loss = hinge_loss(targets, logits)
+ with self.test_session():
+ self.assertAllClose(expected_loss.eval(), weighted_loss.eval())
+
+ def testLessTrivialCompatibilityWithHinge(self):
+ # Tests compatibility with a constant weight for positives and negatives.
+ x_shape = [56, 11]
+ logits = tf.constant(np.random.randn(*x_shape).astype(np.float32))
+ targets = tf.to_float(tf.constant(np.random.random_sample(x_shape) > 0.7))
+ weight = 1.0 + 1.0/2 + 1.0/3 + 1.0/4 + 1.0/5 + 1.0/6 + 1.0/7
+ weighted_loss = util.weighted_hinge_loss(targets, logits, weight, weight)
+ expected_loss = hinge_loss(targets, logits) * weight
+ with self.test_session():
+ self.assertAllClose(expected_loss.eval(), weighted_loss.eval())
+
+ def testNontrivialCompatibilityWithHinge(self):
+ # Tests compatibility with different positive and negative weights.
+ x_shape = [23, 8]
+ logits_positives = tf.constant(np.random.randn(*x_shape).astype(np.float32))
+ logits_negatives = tf.constant(np.random.randn(*x_shape).astype(np.float32))
+ targets_positives = tf.ones(x_shape)
+ targets_negatives = tf.zeros(x_shape)
+ logits = tf.concat([logits_positives, logits_negatives], 0)
+ targets = tf.concat([targets_positives, targets_negatives], 0)
+
+ raw_loss = util.weighted_hinge_loss(targets,
+ logits,
+ positive_weights=3.4,
+ negative_weights=1.2)
+ loss = tf.reduce_sum(raw_loss, 0)
+ positives_hinge = hinge_loss(targets_positives, logits_positives)
+ negatives_hinge = hinge_loss(targets_negatives, logits_negatives)
+ expected = tf.add(tf.reduce_sum(3.4 * positives_hinge, 0),
+ tf.reduce_sum(1.2 * negatives_hinge, 0))
+
+ with self.test_session():
+ self.assertAllClose(loss.eval(), expected.eval())
+
+ def test3DLogitsAndTargets(self):
+ # Tests correctness when logits is 3D and targets is 2D.
+ targets_shape = [30, 4]
+ logits_shape = [targets_shape[0], targets_shape[1], 3]
+ targets = tf.to_float(
+ tf.constant(np.random.random_sample(targets_shape) > 0.7))
+ logits = tf.constant(np.random.randn(*logits_shape).astype(np.float32))
+ weight_vector = [1.0, 1.0, 1.0]
+ loss = util.weighted_hinge_loss(targets, logits, weight_vector)
+
+ with self.test_session():
+ loss_value = loss.eval()
+ for i in range(len(weight_vector)):
+ expected = hinge_loss(targets, logits[:, :, i]).eval()
+ self.assertAllClose(loss_value[:, :, i], expected)
+
+
+class BuildLabelPriorsTest(tf.test.TestCase):
+
+ def testLabelPriorConsistency(self):
+ # Checks that, with zero pseudocounts, the returned label priors reproduce
+ # label frequencies in the batch.
+ batch_shape = [4, 10]
+ labels = tf.Variable(
+ tf.to_float(tf.greater(tf.random_uniform(batch_shape), 0.678)))
+
+ label_priors_update = util.build_label_priors(
+ labels=labels, positive_pseudocount=0, negative_pseudocount=0)
+ expected_priors = tf.reduce_mean(labels, 0)
+
+ with self.test_session():
+ tf.global_variables_initializer().run()
+ self.assertAllClose(label_priors_update.eval(), expected_priors.eval())
+
+ def testLabelPriorsUpdate(self):
+ # Checks that the update of label priors behaves as expected.
+ batch_shape = [1, 5]
+ labels = tf.Variable(
+ tf.to_float(tf.greater(tf.random_uniform(batch_shape), 0.4)))
+ label_priors_update = util.build_label_priors(labels)
+
+ label_sum = np.ones(shape=batch_shape)
+ weight_sum = 2.0 * np.ones(shape=batch_shape)
+
+ with self.test_session() as session:
+ tf.global_variables_initializer().run()
+
+ for _ in range(3):
+ label_sum += labels.eval()
+ weight_sum += np.ones(shape=batch_shape)
+ expected_posteriors = label_sum / weight_sum
+ label_priors = label_priors_update.eval().reshape(batch_shape)
+ self.assertAllClose(label_priors, expected_posteriors)
+
+ # Re-initialize labels to get a new random sample.
+ session.run(labels.initializer)
+
+ def testLabelPriorsUpdateWithWeights(self):
+ # Checks the update of label priors with per-example weights.
+ batch_size = 6
+ num_labels = 5
+ batch_shape = [batch_size, num_labels]
+ labels = tf.Variable(
+ tf.to_float(tf.greater(tf.random_uniform(batch_shape), 0.6)))
+ weights = tf.Variable(tf.random_uniform(batch_shape) * 6.2)
+
+ update_op = util.build_label_priors(labels, weights=weights)
+
+ expected_weighted_label_counts = 1.0 + tf.reduce_sum(weights * labels, 0)
+ expected_weight_sum = 2.0 + tf.reduce_sum(weights, 0)
+ expected_label_posteriors = tf.divide(expected_weighted_label_counts,
+ expected_weight_sum)
+
+ with self.test_session() as session:
+ tf.global_variables_initializer().run()
+
+ updated_priors, expected_posteriors = session.run(
+ [update_op, expected_label_posteriors])
+ self.assertAllClose(updated_priors, expected_posteriors)
+
+
+class WeightedSurrogateLossTest(parameterized.TestCase, tf.test.TestCase):
+
+ @parameterized.parameters(
+ ('hinge', util.weighted_hinge_loss),
+ ('xent', util.weighted_sigmoid_cross_entropy_with_logits))
+ def testCompatibilityLoss(self, loss_name, loss_fn):
+ x_shape = [28, 4]
+ logits = tf.constant(np.random.randn(*x_shape).astype(np.float32))
+ targets = tf.to_float(tf.constant(np.random.random_sample(x_shape) > 0.5))
+ positive_weights = 0.66
+ negative_weights = 11.1
+ expected_loss = loss_fn(
+ targets,
+ logits,
+ positive_weights=positive_weights,
+ negative_weights=negative_weights)
+ computed_loss = util.weighted_surrogate_loss(
+ targets,
+ logits,
+ loss_name,
+ positive_weights=positive_weights,
+ negative_weights=negative_weights)
+ with self.test_session():
+ self.assertAllClose(expected_loss.eval(), computed_loss.eval())
+
+  def testSurrogateError(self):
+ x_shape = [7, 3]
+ logits = tf.constant(np.random.randn(*x_shape).astype(np.float32))
+ targets = tf.to_float(tf.constant(np.random.random_sample(x_shape) > 0.5))
+
+ with self.assertRaises(ValueError):
+ util.weighted_surrogate_loss(logits, targets, 'bug')
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/.gitignore" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/.gitignore"
new file mode 100644
index 00000000..fb46913c
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/.gitignore"
@@ -0,0 +1,7 @@
+/bazel-bin
+/bazel-ci_build-cache
+/bazel-genfiles
+/bazel-out
+/bazel-im2txt
+/bazel-testlogs
+/bazel-tf
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/README.md"
new file mode 100644
index 00000000..03e6656f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/README.md"
@@ -0,0 +1,338 @@
+# Show and Tell: A Neural Image Caption Generator
+
+A TensorFlow implementation of the image-to-text model described in the paper:
+
+"Show and Tell: Lessons learned from the 2015 MSCOCO Image Captioning
+Challenge."
+
+Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan.
+
+*IEEE transactions on pattern analysis and machine intelligence (2016).*
+
+Full text available at: http://arxiv.org/abs/1609.06647
+
+## Contact
+***Author:*** Chris Shallue
+
+***Pull requests and issues:*** @cshallue
+
+## Contents
+* [Model Overview](#model-overview)
+ * [Introduction](#introduction)
+ * [Architecture](#architecture)
+* [Getting Started](#getting-started)
+ * [A Note on Hardware and Training Time](#a-note-on-hardware-and-training-time)
+ * [Install Required Packages](#install-required-packages)
+ * [Prepare the Training Data](#prepare-the-training-data)
+ * [Download the Inception v3 Checkpoint](#download-the-inception-v3-checkpoint)
+* [Training a Model](#training-a-model)
+ * [Initial Training](#initial-training)
+ * [Fine Tune the Inception v3 Model](#fine-tune-the-inception-v3-model)
+* [Generating Captions](#generating-captions)
+
+## Model Overview
+
+### Introduction
+
+The *Show and Tell* model is a deep neural network that learns how to describe
+the content of images. For example:
+
+![Example captions](g3doc/example_captions.jpg)
+
+### Architecture
+
+The *Show and Tell* model is an example of an *encoder-decoder* neural network.
+It works by first "encoding" an image into a fixed-length vector representation,
+and then "decoding" the representation into a natural language description.
+
+The image encoder is a deep convolutional neural network. This type of
+network is widely used for image tasks and is currently state-of-the-art for
+object recognition and detection. Our particular choice of network is the
+[*Inception v3*](http://arxiv.org/abs/1512.00567) image recognition model
+pretrained on the
+[ILSVRC-2012-CLS](http://www.image-net.org/challenges/LSVRC/2012/) image
+classification dataset.
+
+The decoder is a long short-term memory (LSTM) network. This type of network is
+commonly used for sequence modeling tasks such as language modeling and machine
+translation. In the *Show and Tell* model, the LSTM network is trained as a
+language model conditioned on the image encoding.
+
+Words in the captions are represented with an embedding model. Each word in the
+vocabulary is associated with a fixed-length vector representation that is
+learned during training.
+
+The following diagram illustrates the model architecture.
+
+![Show and Tell Architecture](g3doc/show_and_tell_architecture.png)
+
+In this diagram, \{*s*<sub>0</sub>, *s*<sub>1</sub>, ..., *s*<sub>*N*-1</sub>\}
+are the words of the caption and \{*w*<sub>*e*</sub>*s*<sub>0</sub>,
+*w*<sub>*e*</sub>*s*<sub>1</sub>, ..., *w*<sub>*e*</sub>*s*<sub>*N*-1</sub>\}
+are their corresponding word embedding vectors. The outputs \{*p*<sub>1</sub>,
+*p*<sub>2</sub>, ..., *p*<sub>*N*</sub>\} of the LSTM are probability
+distributions generated by the model for the next word in the sentence. The
+terms \{log *p*<sub>1</sub>(*s*<sub>1</sub>),
+log *p*<sub>2</sub>(*s*<sub>2</sub>), ...,
+log *p*<sub>*N*</sub>(*s*<sub>*N*</sub>)\} are the log-likelihoods of the
+correct word at each step; the negated sum of these terms is the minimization
+objective of the model.
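+
+As a rough illustration of this objective (a minimal sketch, not the actual
+code in `im2txt/show_and_tell_model.py`; the tensor names below are made up):
+
+```python
+import tensorflow as tf
+
+def caption_loss(logits, target_ids, mask):
+  """logits: [batch, steps, vocab]; target_ids, mask: [batch, steps]."""
+  # -log p_t(s_t) at every step of every caption.
+  losses = tf.nn.sparse_softmax_cross_entropy_with_logits(
+      labels=target_ids, logits=logits)
+  # Negated sum of log-likelihoods, averaged over the unpadded words.
+  return tf.reduce_sum(losses * mask) / tf.reduce_sum(mask)
+```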
+
+During the first phase of training the parameters of the *Inception v3* model
+are kept fixed: it is simply a static image encoder function. A single trainable
+layer is added on top of the *Inception v3* model to transform the image
+embedding into the word embedding vector space. The model is trained with
+respect to the parameters of the word embeddings, the parameters of the layer on
+top of *Inception v3* and the parameters of the LSTM. In the second phase of
+training, all parameters - including the parameters of *Inception v3* - are
+trained to jointly fine-tune the image encoder and the LSTM.
+
+Given a trained model and an image we use *beam search* to generate captions for
+that image. Captions are generated word-by-word, where at each step *t* we use
+the set of sentences already generated with length *t* - 1 to generate a new set
+of sentences with length *t*. We keep only the top *k* candidates at each step,
+where the hyperparameter *k* is called the *beam size*. We have found the best
+performance with *k* = 3.
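+
+As a sketch of this procedure (simplified Python, not the repo's
+`caption_generator` implementation; `step_fn`, `start_id` and `end_id` are
+placeholders for a model-specific step function and the sentence boundary ids):
+
+```python
+import heapq
+import math
+
+def beam_search(step_fn, start_id, end_id, beam_size=3, max_len=20):
+  """step_fn(seq) returns a list of (word_id, probability) for the next word."""
+  beams = [(0.0, [start_id])]  # (log probability, partial sentence)
+  complete = []
+  for _ in range(max_len):
+    candidates = []
+    for log_prob, seq in beams:
+      for word_id, prob in step_fn(seq):
+        candidates.append((log_prob + math.log(prob), seq + [word_id]))
+    # Keep only the top-k candidates, where k is the beam size.
+    best = heapq.nlargest(beam_size, candidates, key=lambda c: c[0])
+    complete.extend(b for b in best if b[1][-1] == end_id)
+    beams = [b for b in best if b[1][-1] != end_id]
+    if not beams:
+      break
+  return max(complete + beams, key=lambda c: c[0])[1]
+```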
+
+## Getting Started
+
+### A Note on Hardware and Training Time
+
+The time required to train the *Show and Tell* model depends on your specific
+hardware and computational capacity. In this guide we assume you will be running
+training on a single machine with a GPU. In our experience on an NVIDIA Tesla
+K20m GPU the initial training phase takes 1-2 weeks. The second training phase
+may take several additional weeks to achieve peak performance (but you can stop
+this phase early and still get reasonable results).
+
+It is possible to achieve a speed-up by implementing distributed training across
+a cluster of machines with GPUs, but that is not covered in this guide.
+
+Whilst it is possible to run this code on a CPU, beware that this may be
+approximately 10 times slower.
+
+### Install Required Packages
+First ensure that you have installed the following required packages:
+
+* **Bazel** ([instructions](http://bazel.io/docs/install.html))
+* **Python 2.7**
+* **TensorFlow** 1.0 or greater ([instructions](https://www.tensorflow.org/install/))
+* **NumPy** ([instructions](http://www.scipy.org/install.html))
+* **Natural Language Toolkit (NLTK)**:
+ * First install NLTK ([instructions](http://www.nltk.org/install.html))
+ * Then install the NLTK data package "punkt" ([instructions](http://www.nltk.org/data.html))
+* **Unzip**
+
+### Prepare the Training Data
+
+To train the model you will need to provide training data in native TFRecord
+format. The TFRecord format consists of a set of sharded files containing
+serialized `tf.SequenceExample` protocol buffers. Each `tf.SequenceExample`
+proto contains an image (JPEG format), a caption and metadata such as the image
+id.
+
+Each caption is a list of words. During preprocessing, a dictionary is created
+that assigns each word in the vocabulary to an integer-valued id. Each caption
+is encoded as a list of integer word ids in the `tf.SequenceExample` protos.
+
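+As a reference for what these records contain, they could be parsed roughly as
+follows (a minimal sketch; the feature names match the defaults in
+`im2txt/configuration.py`, but this is not the repo's input pipeline):
+
+```python
+import tensorflow as tf
+
+def parse_caption_example(serialized):
+  context, sequence = tf.parse_single_sequence_example(
+      serialized,
+      context_features={
+          "image/data": tf.FixedLenFeature([], dtype=tf.string),
+      },
+      sequence_features={
+          "image/caption_ids": tf.FixedLenSequenceFeature([], dtype=tf.int64),
+      })
+  return context["image/data"], sequence["image/caption_ids"]
+```
+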
+We have provided a script to download and preprocess the [MSCOCO](http://mscoco.org/) image captioning data set into this format. Downloading
+and preprocessing the data may take several hours depending on your network and
+computer speed. Please be patient.
+
+Before running the script, ensure that your hard disk has at least 150GB of
+available space for storing the downloaded and processed data.
+
+```shell
+# Location to save the MSCOCO data.
+MSCOCO_DIR="${HOME}/im2txt/data/mscoco"
+
+# Build the preprocessing script.
+cd research/im2txt
+bazel build //im2txt:download_and_preprocess_mscoco
+
+# Run the preprocessing script.
+bazel-bin/im2txt/download_and_preprocess_mscoco "${MSCOCO_DIR}"
+```
+
+The final line of the output should read:
+
+```
+2016-09-01 16:47:47.296630: Finished processing all 20267 image-caption pairs in data set 'test'.
+```
+
+When the script finishes you will find 256 training, 4 validation and 8 testing
+files in `${MSCOCO_DIR}`. The files will match the patterns
+`train-?????-of-00256`, `val-?????-of-00004` and `test-?????-of-00008`,
+respectively.
+
+### Download the Inception v3 Checkpoint
+
+The *Show and Tell* model requires a pretrained *Inception v3* checkpoint file
+to initialize the parameters of its image encoder submodel.
+
+This checkpoint file is provided by the
+[TensorFlow-Slim image classification library](https://github.com/tensorflow/models/tree/master/research/slim#tensorflow-slim-image-classification-library)
+which provides a suite of pre-trained image classification models. You can read
+more about the models provided by the library
+[here](https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models).
+
+
+Run the following commands to download the *Inception v3* checkpoint.
+
+```shell
+# Location to save the Inception v3 checkpoint.
+INCEPTION_DIR="${HOME}/im2txt/data"
+mkdir -p ${INCEPTION_DIR}
+
+wget "http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz"
+tar -xvf "inception_v3_2016_08_28.tar.gz" -C ${INCEPTION_DIR}
+rm "inception_v3_2016_08_28.tar.gz"
+```
+
+Note that the *Inception v3* checkpoint will only be used for initializing the
+parameters of the *Show and Tell* model. Once the *Show and Tell* model starts
+training it will save its own checkpoint files containing the values of all its
+parameters (including copies of the *Inception v3* parameters). If training is
+stopped and restarted, the parameter values will be restored from the latest
+*Show and Tell* checkpoint and the *Inception v3* checkpoint will be ignored. In
+other words, the *Inception v3* checkpoint is only used in the 0-th global step
+(initialization) of training the *Show and Tell* model.
+
+## Training a Model
+
+### Initial Training
+
+Run the training script.
+
+```shell
+# Directory containing preprocessed MSCOCO data.
+MSCOCO_DIR="${HOME}/im2txt/data/mscoco"
+
+# Inception v3 checkpoint file.
+INCEPTION_CHECKPOINT="${HOME}/im2txt/data/inception_v3.ckpt"
+
+# Directory to save the model.
+MODEL_DIR="${HOME}/im2txt/model"
+
+# Build the model.
+cd research/im2txt
+bazel build -c opt //im2txt/...
+
+# Run the training script.
+bazel-bin/im2txt/train \
+ --input_file_pattern="${MSCOCO_DIR}/train-?????-of-00256" \
+ --inception_checkpoint_file="${INCEPTION_CHECKPOINT}" \
+ --train_dir="${MODEL_DIR}/train" \
+ --train_inception=false \
+ --number_of_steps=1000000
+```
+
+Run the evaluation script in a separate process. This will log evaluation
+metrics to TensorBoard which allows training progress to be monitored in
+real-time.
+
+Note that you may run out of memory if you run the evaluation script on the same
+GPU as the training script. You can run the command
+`export CUDA_VISIBLE_DEVICES=""` to force the evaluation script to run on CPU.
+If evaluation runs too slowly on CPU, you can decrease the value of
+`--num_eval_examples`.
+
+```shell
+MSCOCO_DIR="${HOME}/im2txt/data/mscoco"
+MODEL_DIR="${HOME}/im2txt/model"
+
+# Ignore GPU devices (only necessary if your GPU is currently memory
+# constrained, for example, by running the training script).
+export CUDA_VISIBLE_DEVICES=""
+
+# Run the evaluation script. This will run in a loop, periodically loading the
+# latest model checkpoint file and computing evaluation metrics.
+bazel-bin/im2txt/evaluate \
+ --input_file_pattern="${MSCOCO_DIR}/val-?????-of-00004" \
+ --checkpoint_dir="${MODEL_DIR}/train" \
+ --eval_dir="${MODEL_DIR}/eval"
+```
+
+Run a TensorBoard server in a separate process for real-time monitoring of
+training progress and evaluation metrics.
+
+```shell
+MODEL_DIR="${HOME}/im2txt/model"
+
+# Run a TensorBoard server.
+tensorboard --logdir="${MODEL_DIR}"
+```
+
+### Fine Tune the Inception v3 Model
+
+Your model will already be able to generate reasonable captions after the first
+phase of training. Try it out! (See [Generating Captions](#generating-captions)).
+
+You can further improve the performance of the model by running a
+second training phase to jointly fine-tune the parameters of the *Inception v3*
+image submodel and the LSTM.
+
+```shell
+# Restart the training script with --train_inception=true.
+bazel-bin/im2txt/train \
+ --input_file_pattern="${MSCOCO_DIR}/train-?????-of-00256" \
+ --train_dir="${MODEL_DIR}/train" \
+ --train_inception=true \
+ --number_of_steps=3000000 # Additional 2M steps (assuming 1M in initial training).
+```
+
+Note that training will proceed much slower now, and the model will continue to
+improve by a small amount for a long time. We have found that it will improve
+slowly for an additional 2-2.5 million steps before it begins to overfit. This
+may take several weeks on a single GPU. If you don't care about absolutely
+optimal performance then feel free to halt training sooner by stopping the
+training script or passing a smaller value to the flag `--number_of_steps`. Your
+model will still work reasonably well.
+
+## Generating Captions
+
+Your trained *Show and Tell* model can generate captions for any JPEG image! The
+following command line will generate captions for an image from the test set.
+
+```shell
+# Path to checkpoint file or a directory containing checkpoint files. Passing
+# a directory will only work if there is also a file named 'checkpoint' which
+# lists the available checkpoints in the directory. It will not work if you
+# point to a directory with just a copy of a model checkpoint: in that case,
+# you will need to pass the checkpoint path explicitly.
+CHECKPOINT_PATH="${HOME}/im2txt/model/train"
+
+# Vocabulary file generated by the preprocessing script.
+VOCAB_FILE="${HOME}/im2txt/data/mscoco/word_counts.txt"
+
+# JPEG image file to caption.
+IMAGE_FILE="${HOME}/im2txt/data/mscoco/raw-data/val2014/COCO_val2014_000000224477.jpg"
+
+# Build the inference binary.
+cd research/im2txt
+bazel build -c opt //im2txt:run_inference
+
+# Ignore GPU devices (only necessary if your GPU is currently memory
+# constrained, for example, by running the training script).
+export CUDA_VISIBLE_DEVICES=""
+
+# Run inference to generate captions.
+bazel-bin/im2txt/run_inference \
+ --checkpoint_path=${CHECKPOINT_PATH} \
+ --vocab_file=${VOCAB_FILE} \
+ --input_files=${IMAGE_FILE}
+```
+
+Example output:
+
+```
+Captions for image COCO_val2014_000000224477.jpg:
+ 0) a man riding a wave on top of a surfboard . (p=0.040413)
+ 1) a person riding a surf board on a wave (p=0.017452)
+ 2) a man riding a wave on a surfboard in the ocean . (p=0.005743)
+```
+
+Note: you may get different results. Some variation between different models is
+expected.
+
+Here is the image:
+
+![Surfer riding a wave](g3doc/COCO_val2014_000000224477.jpg)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/WORKSPACE" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/WORKSPACE"
new file mode 100644
index 00000000..22da718b
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/WORKSPACE"
@@ -0,0 +1 @@
+workspace(name = "im2txt")
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/conda-env/ubuntu-18-04-environment.yaml" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/conda-env/ubuntu-18-04-environment.yaml"
new file mode 100644
index 00000000..332ff2a4
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/conda-env/ubuntu-18-04-environment.yaml"
@@ -0,0 +1,142 @@
+name: im2txt
+channels:
+ - defaults
+dependencies:
+ - _tflow_select=2.3.0=mkl
+ - absl-py=0.5.0=py27_0
+ - astor=0.7.1=py27_0
+ - backports=1.0=py27_1
+ - backports.functools_lru_cache=1.5=py27_1
+ - backports.shutil_get_terminal_size=1.0.0=py27_2
+ - backports.weakref=1.0.post1=py27_0
+ - backports_abc=0.5=py27_0
+ - blas=1.0=mkl
+ - bleach=3.0.2=py27_0
+ - ca-certificates=2018.03.07=0
+ - certifi=2018.10.15=py27_0
+ - configparser=3.5.0=py27_0
+ - cycler=0.10.0=py27_0
+ - dbus=1.13.2=h714fa37_1
+ - decorator=4.3.0=py27_0
+ - entrypoints=0.2.3=py27_2
+ - enum34=1.1.6=py27_1
+ - expat=2.2.6=he6710b0_0
+ - fastcache=1.0.2=py27h14c3975_2
+ - fontconfig=2.13.0=h9420a91_0
+ - freetype=2.9.1=h8a8886c_1
+ - funcsigs=1.0.2=py27_0
+ - functools32=3.2.3.2=py27_1
+ - futures=3.2.0=py27_0
+ - gast=0.2.0=py27_0
+ - glib=2.56.2=hd408876_0
+ - gmp=6.1.2=h6c8ec71_1
+ - gmpy2=2.0.8=py27h10f8cd9_2
+ - grpcio=1.12.1=py27hdbcaa40_0
+ - gst-plugins-base=1.14.0=hbbd80ab_1
+ - gstreamer=1.14.0=hb453b48_1
+ - h5py=2.8.0=py27h989c5e5_3
+ - hdf5=1.10.2=hba1933b_1
+ - icu=58.2=h9c2bf20_1
+ - intel-openmp=2019.0=118
+ - ipaddress=1.0.22=py27_0
+ - ipykernel=4.10.0=py27_0
+ - ipython=5.8.0=py27_0
+ - ipython_genutils=0.2.0=py27_0
+ - ipywidgets=7.4.2=py27_0
+ - jinja2=2.10=py27_0
+ - jpeg=9b=h024ee3a_2
+ - jsonschema=2.6.0=py27_0
+ - jupyter=1.0.0=py27_7
+ - jupyter_client=5.2.3=py27_0
+ - jupyter_console=5.2.0=py27_1
+ - jupyter_core=4.4.0=py27_0
+ - keras-applications=1.0.6=py27_0
+ - keras-preprocessing=1.0.5=py27_0
+ - kiwisolver=1.0.1=py27hf484d3e_0
+ - libedit=3.1.20170329=h6b74fdf_2
+ - libffi=3.2.1=hd88cf55_4
+ - libgcc-ng=8.2.0=hdf63c60_1
+ - libgfortran-ng=7.3.0=hdf63c60_0
+ - libpng=1.6.35=hbc83047_0
+ - libprotobuf=3.6.0=hdbcaa40_0
+ - libsodium=1.0.16=h1bed415_0
+ - libstdcxx-ng=8.2.0=hdf63c60_1
+ - libuuid=1.0.3=h1bed415_2
+ - libxcb=1.13=h1bed415_1
+ - libxml2=2.9.8=h26e45fe_1
+ - linecache2=1.0.0=py27_0
+ - markdown=3.0.1=py27_0
+ - markupsafe=1.0=py27h14c3975_1
+ - matplotlib=2.2.3=py27hb69df0a_0
+ - mistune=0.8.4=py27h7b6447c_0
+ - mkl=2019.0=118
+ - mkl_fft=1.0.6=py27h7dd41cf_0
+ - mkl_random=1.0.1=py27h4414c95_1
+ - mock=2.0.0=py27_0
+ - mpc=1.1.0=h10f8cd9_1
+ - mpfr=4.0.1=hdf1c602_3
+ - mpmath=1.0.0=py27_2
+ - nbconvert=5.3.1=py27_0
+ - nbformat=4.4.0=py27_0
+ - ncurses=6.1=hf484d3e_0
+ - nltk=3.3.0=py27_0
+ - nose=1.3.7=py27_2
+ - notebook=5.7.0=py27_0
+ - numpy=1.15.3=py27h1d66e8a_0
+ - numpy-base=1.15.3=py27h81de0dd_0
+ - openssl=1.0.2p=h14c3975_0
+ - pandas=0.23.4=py27h04863e7_0
+ - pandoc=2.2.3.2=0
+ - pandocfilters=1.4.2=py27_1
+ - pathlib2=2.3.2=py27_0
+ - pbr=4.3.0=py27_0
+ - pcre=8.42=h439df22_0
+ - pexpect=4.6.0=py27_0
+ - pickleshare=0.7.5=py27_0
+ - pip=10.0.1=py27_0
+ - prometheus_client=0.4.2=py27_0
+ - prompt_toolkit=1.0.15=py27_0
+ - protobuf=3.6.0=py27hf484d3e_0
+ - ptyprocess=0.6.0=py27_0
+ - pygments=2.2.0=py27_0
+ - pyparsing=2.2.2=py27_0
+ - pyqt=5.9.2=py27h05f1152_2
+ - python=2.7.15=h77bded6_2
+ - python-dateutil=2.7.3=py27_0
+ - pytz=2018.5=py27_0
+ - pyzmq=17.1.2=py27h14c3975_0
+ - qt=5.9.6=h8703b6f_2
+ - qtconsole=4.4.2=py27_0
+ - readline=7.0=h7b6447c_5
+ - scandir=1.9.0=py27h14c3975_0
+ - scipy=1.1.0=py27hfa4b5c9_1
+ - send2trash=1.5.0=py27_0
+ - setuptools=40.4.3=py27_0
+ - simplegeneric=0.8.1=py27_2
+ - singledispatch=3.4.0.3=py27_0
+ - sip=4.19.8=py27hf484d3e_0
+ - six=1.11.0=py27_1
+ - sqlite=3.25.2=h7b6447c_0
+ - subprocess32=3.5.3=py27h7b6447c_0
+ - sympy=1.3=py27_0
+ - tensorboard=1.11.0=py27hf484d3e_0
+ - tensorflow=1.11.0=mkl_py27h25e0b76_0
+ - tensorflow-base=1.11.0=mkl_py27h3c3e929_0
+ - termcolor=1.1.0=py27_1
+ - terminado=0.8.1=py27_1
+ - testpath=0.4.2=py27_0
+ - tk=8.6.8=hbc83047_0
+ - tornado=5.1.1=py27h7b6447c_0
+ - traceback2=1.4.0=py27_0
+ - traitlets=4.3.2=py27_0
+ - unittest2=1.1.0=py27_0
+ - wcwidth=0.1.7=py27_0
+ - webencodings=0.5.1=py27_1
+ - werkzeug=0.14.1=py27_0
+ - wheel=0.32.2=py27_0
+ - widgetsnbextension=3.4.2=py27_0
+ - xz=5.2.4=h14c3975_4
+ - zeromq=4.2.5=hf484d3e_1
+ - zlib=1.2.11=ha838bed_2
+prefix: /home/arinto_murdopo/anaconda3/envs/im2txt
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/g3doc/COCO_val2014_000000224477.jpg" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/g3doc/COCO_val2014_000000224477.jpg"
new file mode 100644
index 00000000..8976fa84
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/g3doc/COCO_val2014_000000224477.jpg" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/g3doc/example_captions.jpg" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/g3doc/example_captions.jpg"
new file mode 100644
index 00000000..b3a8f432
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/g3doc/example_captions.jpg" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/g3doc/show_and_tell_architecture.png" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/g3doc/show_and_tell_architecture.png"
new file mode 100644
index 00000000..984590d5
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/g3doc/show_and_tell_architecture.png" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/BUILD" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/BUILD"
new file mode 100644
index 00000000..8c403171
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/BUILD"
@@ -0,0 +1,96 @@
+package(default_visibility = [":internal"])
+
+licenses(["notice"]) # Apache 2.0
+
+exports_files(["LICENSE"])
+
+package_group(
+ name = "internal",
+ packages = [
+ "//im2txt/...",
+ ],
+)
+
+py_binary(
+ name = "build_mscoco_data",
+ srcs = [
+ "data/build_mscoco_data.py",
+ ],
+)
+
+sh_binary(
+ name = "download_and_preprocess_mscoco",
+ srcs = ["data/download_and_preprocess_mscoco.sh"],
+ data = [
+ ":build_mscoco_data",
+ ],
+)
+
+py_library(
+ name = "configuration",
+ srcs = ["configuration.py"],
+ srcs_version = "PY2AND3",
+)
+
+py_library(
+ name = "show_and_tell_model",
+ srcs = ["show_and_tell_model.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ "//im2txt/ops:image_embedding",
+ "//im2txt/ops:image_processing",
+ "//im2txt/ops:inputs",
+ ],
+)
+
+py_test(
+ name = "show_and_tell_model_test",
+ size = "large",
+ srcs = ["show_and_tell_model_test.py"],
+ deps = [
+ ":configuration",
+ ":show_and_tell_model",
+ ],
+)
+
+py_library(
+ name = "inference_wrapper",
+ srcs = ["inference_wrapper.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":show_and_tell_model",
+ "//im2txt/inference_utils:inference_wrapper_base",
+ ],
+)
+
+py_binary(
+ name = "train",
+ srcs = ["train.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":configuration",
+ ":show_and_tell_model",
+ ],
+)
+
+py_binary(
+ name = "evaluate",
+ srcs = ["evaluate.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":configuration",
+ ":show_and_tell_model",
+ ],
+)
+
+py_binary(
+ name = "run_inference",
+ srcs = ["run_inference.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":configuration",
+ ":inference_wrapper",
+ "//im2txt/inference_utils:caption_generator",
+ "//im2txt/inference_utils:vocabulary",
+ ],
+)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/configuration.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/configuration.py"
new file mode 100644
index 00000000..3b664eb9
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/configuration.py"
@@ -0,0 +1,104 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Image-to-text model and training configurations."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+class ModelConfig(object):
+ """Wrapper class for model hyperparameters."""
+
+ def __init__(self):
+ """Sets the default model hyperparameters."""
+ # File pattern of sharded TFRecord file containing SequenceExample protos.
+ # Must be provided in training and evaluation modes.
+ self.input_file_pattern = None
+
+ # Image format ("jpeg" or "png").
+ self.image_format = "jpeg"
+
+ # Approximate number of values per input shard. Used to ensure sufficient
+ # mixing between shards in training.
+ self.values_per_input_shard = 2300
+ # Minimum number of shards to keep in the input queue.
+ self.input_queue_capacity_factor = 2
+ # Number of threads for prefetching SequenceExample protos.
+ self.num_input_reader_threads = 1
+
+ # Name of the SequenceExample context feature containing image data.
+ self.image_feature_name = "image/data"
+ # Name of the SequenceExample feature list containing integer captions.
+ self.caption_feature_name = "image/caption_ids"
+
+    # Number of unique words in the vocab (plus 1, for <UNK>).
+ # The default value is larger than the expected actual vocab size to allow
+ # for differences between tokenizer versions used in preprocessing. There is
+ # no harm in using a value greater than the actual vocab size, but using a
+ # value less than the actual vocab size will result in an error.
+ self.vocab_size = 12000
+
+ # Number of threads for image preprocessing. Should be a multiple of 2.
+ self.num_preprocess_threads = 4
+
+ # Batch size.
+ self.batch_size = 32
+
+ # File containing an Inception v3 checkpoint to initialize the variables
+ # of the Inception model. Must be provided when starting training for the
+ # first time.
+ self.inception_checkpoint_file = None
+
+ # Dimensions of Inception v3 input images.
+ self.image_height = 299
+ self.image_width = 299
+
+ # Scale used to initialize model variables.
+ self.initializer_scale = 0.08
+
+ # LSTM input and output dimensionality, respectively.
+ self.embedding_size = 512
+ self.num_lstm_units = 512
+
+ # If < 1.0, the dropout keep probability applied to LSTM variables.
+ self.lstm_dropout_keep_prob = 0.7
+
+
+class TrainingConfig(object):
+ """Wrapper class for training hyperparameters."""
+
+ def __init__(self):
+ """Sets the default training hyperparameters."""
+ # Number of examples per epoch of training data.
+ self.num_examples_per_epoch = 586363
+
+ # Optimizer for training the model.
+ self.optimizer = "SGD"
+
+ # Learning rate for the initial phase of training.
+ self.initial_learning_rate = 2.0
+ self.learning_rate_decay_factor = 0.5
+ self.num_epochs_per_decay = 8.0
+
+ # Learning rate when fine tuning the Inception v3 parameters.
+ self.train_inception_learning_rate = 0.0005
+
+ # If not None, clip gradients to this value.
+ self.clip_gradients = 5.0
+
+ # How many model checkpoints to keep.
+ self.max_checkpoints_to_keep = 5
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/data/build_mscoco_data.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/data/build_mscoco_data.py"
new file mode 100644
index 00000000..2c3e9d97
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/data/build_mscoco_data.py"
@@ -0,0 +1,483 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Converts MSCOCO data to TFRecord file format with SequenceExample protos.
+
+The MSCOCO images are expected to reside in JPEG files located in the following
+directory structure:
+
+ train_image_dir/COCO_train2014_000000000151.jpg
+ train_image_dir/COCO_train2014_000000000260.jpg
+ ...
+
+and
+
+ val_image_dir/COCO_val2014_000000000042.jpg
+ val_image_dir/COCO_val2014_000000000073.jpg
+ ...
+
+The MSCOCO annotations JSON files are expected to reside in train_captions_file
+and val_captions_file respectively.
+
+This script converts the combined MSCOCO data into sharded data files consisting
+of 256, 4 and 8 TFRecord files, respectively:
+
+ output_dir/train-00000-of-00256
+ output_dir/train-00001-of-00256
+ ...
+ output_dir/train-00255-of-00256
+
+and
+
+ output_dir/val-00000-of-00004
+ ...
+ output_dir/val-00003-of-00004
+
+and
+
+ output_dir/test-00000-of-00008
+ ...
+ output_dir/test-00007-of-00008
+
+Each TFRecord file contains ~2300 records. Each record within the TFRecord file
+is a serialized SequenceExample proto consisting of precisely one image-caption
+pair. Note that each image has multiple captions (usually 5) and therefore each
+image is replicated multiple times in the TFRecord files.
+
+The SequenceExample proto contains the following fields:
+
+ context:
+ image/image_id: integer MSCOCO image identifier
+ image/data: string containing JPEG encoded image in RGB colorspace
+
+ feature_lists:
+ image/caption: list of strings containing the (tokenized) caption words
+ image/caption_ids: list of integer ids corresponding to the caption words
+
+The captions are tokenized using the NLTK (http://www.nltk.org/) word tokenizer.
+The vocabulary of word identifiers is constructed from the sorted list (by
+descending frequency) of word tokens in the training set. Only tokens appearing
+at least 4 times are considered; all other words get the "unknown" word id.
+
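+For example, a caption such as "a dog runs ." is tokenized into the tokens
+["a", "dog", "runs", "."], and each token is then mapped to its integer id;
+tokens seen fewer than 4 times in the training set map to the "unknown" id.
+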
+NOTE: This script will consume around 100GB of disk space because each image
+in the MSCOCO dataset is replicated ~5 times (once per caption) in the output.
+This is done for two reasons:
+ 1. In order to better shuffle the training data.
+ 2. It makes it easier to perform asynchronous preprocessing of each image in
+ TensorFlow.
+
+Running this script using 16 threads may take around 1 hour on an HP Z420.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from collections import Counter
+from collections import namedtuple
+from datetime import datetime
+import json
+import os.path
+import random
+import sys
+import threading
+
+
+
+import nltk.tokenize
+import numpy as np
+from six.moves import xrange
+import tensorflow as tf
+
+tf.flags.DEFINE_string("train_image_dir", "/tmp/train2014/",
+ "Training image directory.")
+tf.flags.DEFINE_string("val_image_dir", "/tmp/val2014",
+ "Validation image directory.")
+
+tf.flags.DEFINE_string("train_captions_file", "/tmp/captions_train2014.json",
+ "Training captions JSON file.")
+tf.flags.DEFINE_string("val_captions_file", "/tmp/captions_val2014.json",
+ "Validation captions JSON file.")
+
+tf.flags.DEFINE_string("output_dir", "/tmp/", "Output data directory.")
+
+tf.flags.DEFINE_integer("train_shards", 256,
+ "Number of shards in training TFRecord files.")
+tf.flags.DEFINE_integer("val_shards", 4,
+ "Number of shards in validation TFRecord files.")
+tf.flags.DEFINE_integer("test_shards", 8,
+ "Number of shards in testing TFRecord files.")
+
+tf.flags.DEFINE_string("start_word", "",
+ "Special word added to the beginning of each sentence.")
+tf.flags.DEFINE_string("end_word", "",
+ "Special word added to the end of each sentence.")
+tf.flags.DEFINE_string("unknown_word", "",
+ "Special word meaning 'unknown'.")
+tf.flags.DEFINE_integer("min_word_count", 4,
+ "The minimum number of occurrences of each word in the "
+ "training set for inclusion in the vocabulary.")
+tf.flags.DEFINE_string("word_counts_output_file", "/tmp/word_counts.txt",
+ "Output vocabulary file of word counts.")
+
+tf.flags.DEFINE_integer("num_threads", 8,
+ "Number of threads to preprocess the images.")
+
+FLAGS = tf.flags.FLAGS
+
+ImageMetadata = namedtuple("ImageMetadata",
+ ["image_id", "filename", "captions"])
+
+
+class Vocabulary(object):
+ """Simple vocabulary wrapper."""
+
+ def __init__(self, vocab, unk_id):
+ """Initializes the vocabulary.
+
+ Args:
+ vocab: A dictionary of word to word_id.
+ unk_id: Id of the special 'unknown' word.
+ """
+ self._vocab = vocab
+ self._unk_id = unk_id
+
+ def word_to_id(self, word):
+ """Returns the integer id of a word string."""
+ if word in self._vocab:
+ return self._vocab[word]
+ else:
+ return self._unk_id
+
+
+class ImageDecoder(object):
+ """Helper class for decoding images in TensorFlow."""
+
+ def __init__(self):
+ # Create a single TensorFlow Session for all image decoding calls.
+ self._sess = tf.Session()
+
+ # TensorFlow ops for JPEG decoding.
+ self._encoded_jpeg = tf.placeholder(dtype=tf.string)
+ self._decode_jpeg = tf.image.decode_jpeg(self._encoded_jpeg, channels=3)
+
+ def decode_jpeg(self, encoded_jpeg):
+ image = self._sess.run(self._decode_jpeg,
+ feed_dict={self._encoded_jpeg: encoded_jpeg})
+ assert len(image.shape) == 3
+ assert image.shape[2] == 3
+ return image
+
+
+def _int64_feature(value):
+ """Wrapper for inserting an int64 Feature into a SequenceExample proto."""
+ return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
+
+
+def _bytes_feature(value):
+ """Wrapper for inserting a bytes Feature into a SequenceExample proto."""
+ return tf.train.Feature(bytes_list=tf.train.BytesList(value=[str(value)]))
+
+
+def _int64_feature_list(values):
+ """Wrapper for inserting an int64 FeatureList into a SequenceExample proto."""
+ return tf.train.FeatureList(feature=[_int64_feature(v) for v in values])
+
+
+def _bytes_feature_list(values):
+ """Wrapper for inserting a bytes FeatureList into a SequenceExample proto."""
+ return tf.train.FeatureList(feature=[_bytes_feature(v) for v in values])
+
+
+def _to_sequence_example(image, decoder, vocab):
+ """Builds a SequenceExample proto for an image-caption pair.
+
+ Args:
+ image: An ImageMetadata object.
+ decoder: An ImageDecoder object.
+ vocab: A Vocabulary object.
+
+ Returns:
+ A SequenceExample proto.
+ """
+ with tf.gfile.FastGFile(image.filename, "r") as f:
+ encoded_image = f.read()
+
+ try:
+ decoder.decode_jpeg(encoded_image)
+ except (tf.errors.InvalidArgumentError, AssertionError):
+ print("Skipping file with invalid JPEG data: %s" % image.filename)
+ return
+
+ context = tf.train.Features(feature={
+ "image/image_id": _int64_feature(image.image_id),
+ "image/data": _bytes_feature(encoded_image),
+ })
+
+ assert len(image.captions) == 1
+ caption = image.captions[0]
+ caption_ids = [vocab.word_to_id(word) for word in caption]
+ feature_lists = tf.train.FeatureLists(feature_list={
+ "image/caption": _bytes_feature_list(caption),
+ "image/caption_ids": _int64_feature_list(caption_ids)
+ })
+ sequence_example = tf.train.SequenceExample(
+ context=context, feature_lists=feature_lists)
+
+ return sequence_example
+
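+# Illustrative only (not part of the original script): a SequenceExample
+# written above could later be parsed back with something like the snippet
+# below; the feature keys match the ones used in _to_sequence_example.
+#
+#   context, sequence = tf.parse_single_sequence_example(
+#       serialized,
+#       context_features={
+#           "image/image_id": tf.FixedLenFeature([], dtype=tf.int64),
+#           "image/data": tf.FixedLenFeature([], dtype=tf.string),
+#       },
+#       sequence_features={
+#           "image/caption_ids": tf.FixedLenSequenceFeature([], dtype=tf.int64),
+#       })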
+
+def _process_image_files(thread_index, ranges, name, images, decoder, vocab,
+ num_shards):
+ """Processes and saves a subset of images as TFRecord files in one thread.
+
+ Args:
+ thread_index: Integer thread identifier within [0, len(ranges)].
+ ranges: A list of pairs of integers specifying the ranges of the dataset to
+ process in parallel.
+ name: Unique identifier specifying the dataset.
+ images: List of ImageMetadata.
+ decoder: An ImageDecoder object.
+ vocab: A Vocabulary object.
+ num_shards: Integer number of shards for the output files.
+ """
+ # Each thread produces N shards where N = num_shards / num_threads. For
+ # instance, if num_shards = 128, and num_threads = 2, then the first thread
+ # would produce shards [0, 64).
+ num_threads = len(ranges)
+ assert not num_shards % num_threads
+ num_shards_per_batch = int(num_shards / num_threads)
+
+ shard_ranges = np.linspace(ranges[thread_index][0], ranges[thread_index][1],
+ num_shards_per_batch + 1).astype(int)
+ num_images_in_thread = ranges[thread_index][1] - ranges[thread_index][0]
+
+ counter = 0
+ for s in xrange(num_shards_per_batch):
+ # Generate a sharded version of the file name, e.g. 'train-00002-of-00010'
+ shard = thread_index * num_shards_per_batch + s
+ output_filename = "%s-%.5d-of-%.5d" % (name, shard, num_shards)
+ output_file = os.path.join(FLAGS.output_dir, output_filename)
+ writer = tf.python_io.TFRecordWriter(output_file)
+
+ shard_counter = 0
+ images_in_shard = np.arange(shard_ranges[s], shard_ranges[s + 1], dtype=int)
+ for i in images_in_shard:
+ image = images[i]
+
+ sequence_example = _to_sequence_example(image, decoder, vocab)
+ if sequence_example is not None:
+ writer.write(sequence_example.SerializeToString())
+ shard_counter += 1
+ counter += 1
+
+ if not counter % 1000:
+ print("%s [thread %d]: Processed %d of %d items in thread batch." %
+ (datetime.now(), thread_index, counter, num_images_in_thread))
+ sys.stdout.flush()
+
+ writer.close()
+ print("%s [thread %d]: Wrote %d image-caption pairs to %s" %
+ (datetime.now(), thread_index, shard_counter, output_file))
+ sys.stdout.flush()
+ shard_counter = 0
+ print("%s [thread %d]: Wrote %d image-caption pairs to %d shards." %
+ (datetime.now(), thread_index, counter, num_shards_per_batch))
+ sys.stdout.flush()
+
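+# Worked example (illustrative): with the default --train_shards=256 and
+# --num_threads=8, each thread writes 256 / 8 = 32 shards, so thread 0 owns
+# shards [0, 32), thread 1 owns [32, 64), and so on.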
+
+def _process_dataset(name, images, vocab, num_shards):
+ """Processes a complete data set and saves it as a TFRecord.
+
+ Args:
+ name: Unique identifier specifying the dataset.
+ images: List of ImageMetadata.
+ vocab: A Vocabulary object.
+ num_shards: Integer number of shards for the output files.
+ """
+ # Break up each image into a separate entity for each caption.
+ images = [ImageMetadata(image.image_id, image.filename, [caption])
+ for image in images for caption in image.captions]
+
+ # Shuffle the ordering of images. Make the randomization repeatable.
+ random.seed(12345)
+ random.shuffle(images)
+
+ # Break the images into num_threads batches. Batch i is defined as
+ # images[ranges[i][0]:ranges[i][1]].
+ num_threads = min(num_shards, FLAGS.num_threads)
+ spacing = np.linspace(0, len(images), num_threads + 1).astype(int)
+ ranges = []
+ threads = []
+ for i in xrange(len(spacing) - 1):
+ ranges.append([spacing[i], spacing[i + 1]])
+
+ # Create a mechanism for monitoring when all threads are finished.
+ coord = tf.train.Coordinator()
+
+ # Create a utility for decoding JPEG images to run sanity checks.
+ decoder = ImageDecoder()
+
+ # Launch a thread for each batch.
+ print("Launching %d threads for spacings: %s" % (num_threads, ranges))
+ for thread_index in xrange(len(ranges)):
+ args = (thread_index, ranges, name, images, decoder, vocab, num_shards)
+ t = threading.Thread(target=_process_image_files, args=args)
+ t.start()
+ threads.append(t)
+
+ # Wait for all the threads to terminate.
+ coord.join(threads)
+ print("%s: Finished processing all %d image-caption pairs in data set '%s'." %
+ (datetime.now(), len(images), name))
+
+
+def _create_vocab(captions):
+ """Creates the vocabulary of word to word_id.
+
+ The vocabulary is saved to disk in a text file of word counts. The id of each
+ word in the file is its corresponding 0-based line number.
+
+ Args:
+ captions: A list of lists of strings.
+
+ Returns:
+ A Vocabulary object.
+ """
+ print("Creating vocabulary.")
+ counter = Counter()
+ for c in captions:
+ counter.update(c)
+ print("Total words:", len(counter))
+
+ # Filter uncommon words and sort by descending count.
+ word_counts = [x for x in counter.items() if x[1] >= FLAGS.min_word_count]
+ word_counts.sort(key=lambda x: x[1], reverse=True)
+ print("Words in vocabulary:", len(word_counts))
+
+ # Write out the word counts file.
+ with tf.gfile.FastGFile(FLAGS.word_counts_output_file, "w") as f:
+ f.write("\n".join(["%s %d" % (w, c) for w, c in word_counts]))
+ print("Wrote vocabulary file:", FLAGS.word_counts_output_file)
+
+ # Create the vocabulary dictionary.
+ reverse_vocab = [x[0] for x in word_counts]
+ unk_id = len(reverse_vocab)
+ vocab_dict = dict([(x, y) for (y, x) in enumerate(reverse_vocab)])
+ vocab = Vocabulary(vocab_dict, unk_id)
+
+ return vocab
+
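+# Illustrative only: each line of the word counts file written above is
+# "<word> <count>", sorted by descending count (hypothetical values):
+#
+#   a 717654
+#   on 213612
+#
+# A word's id is its 0-based line number; the unknown-word id is one past
+# the last line (len(reverse_vocab)), matching unk_id above.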
+
+def _process_caption(caption):
+ """Processes a caption string into a list of tonenized words.
+
+ Args:
+ caption: A string caption.
+
+ Returns:
+ A list of strings; the tokenized caption.
+ """
+ tokenized_caption = [FLAGS.start_word]
+ tokenized_caption.extend(nltk.tokenize.word_tokenize(caption.lower()))
+ tokenized_caption.append(FLAGS.end_word)
+ return tokenized_caption
+
+
+def _load_and_process_metadata(captions_file, image_dir):
+ """Loads image metadata from a JSON file and processes the captions.
+
+ Args:
+ captions_file: JSON file containing caption annotations.
+ image_dir: Directory containing the image files.
+
+ Returns:
+ A list of ImageMetadata.
+ """
+ with tf.gfile.FastGFile(captions_file, "r") as f:
+ caption_data = json.load(f)
+
+ # Extract the filenames.
+ id_to_filename = [(x["id"], x["file_name"]) for x in caption_data["images"]]
+
+ # Extract the captions. Each image_id is associated with multiple captions.
+ id_to_captions = {}
+ for annotation in caption_data["annotations"]:
+ image_id = annotation["image_id"]
+ caption = annotation["caption"]
+ id_to_captions.setdefault(image_id, [])
+ id_to_captions[image_id].append(caption)
+
+ assert len(id_to_filename) == len(id_to_captions)
+ assert set([x[0] for x in id_to_filename]) == set(id_to_captions.keys())
+ print("Loaded caption metadata for %d images from %s" %
+ (len(id_to_filename), captions_file))
+
+ # Process the captions and combine the data into a list of ImageMetadata.
+ print("Processing captions.")
+ image_metadata = []
+ num_captions = 0
+ for image_id, base_filename in id_to_filename:
+ filename = os.path.join(image_dir, base_filename)
+ captions = [_process_caption(c) for c in id_to_captions[image_id]]
+ image_metadata.append(ImageMetadata(image_id, filename, captions))
+ num_captions += len(captions)
+ print("Finished processing %d captions for %d images in %s" %
+ (num_captions, len(id_to_filename), captions_file))
+
+ return image_metadata
+
+
+def main(unused_argv):
+ def _is_valid_num_shards(num_shards):
+ """Returns True if num_shards is compatible with FLAGS.num_threads."""
+ return num_shards < FLAGS.num_threads or not num_shards % FLAGS.num_threads
+
+ assert _is_valid_num_shards(FLAGS.train_shards), (
+ "Please make the FLAGS.num_threads commensurate with FLAGS.train_shards")
+ assert _is_valid_num_shards(FLAGS.val_shards), (
+ "Please make the FLAGS.num_threads commensurate with FLAGS.val_shards")
+ assert _is_valid_num_shards(FLAGS.test_shards), (
+ "Please make the FLAGS.num_threads commensurate with FLAGS.test_shards")
+
+ if not tf.gfile.IsDirectory(FLAGS.output_dir):
+ tf.gfile.MakeDirs(FLAGS.output_dir)
+
+ # Load image metadata from caption files.
+ mscoco_train_dataset = _load_and_process_metadata(FLAGS.train_captions_file,
+ FLAGS.train_image_dir)
+ mscoco_val_dataset = _load_and_process_metadata(FLAGS.val_captions_file,
+ FLAGS.val_image_dir)
+
+ # Redistribute the MSCOCO data as follows:
+ # train_dataset = 100% of mscoco_train_dataset + 85% of mscoco_val_dataset.
+ # val_dataset = 5% of mscoco_val_dataset (for validation during training).
+ # test_dataset = 10% of mscoco_val_dataset (for final evaluation).
+ train_cutoff = int(0.85 * len(mscoco_val_dataset))
+ val_cutoff = int(0.90 * len(mscoco_val_dataset))
+ train_dataset = mscoco_train_dataset + mscoco_val_dataset[0:train_cutoff]
+ val_dataset = mscoco_val_dataset[train_cutoff:val_cutoff]
+ test_dataset = mscoco_val_dataset[val_cutoff:]
+
+ # Create vocabulary from the training captions.
+ train_captions = [c for image in train_dataset for c in image.captions]
+ vocab = _create_vocab(train_captions)
+
+ _process_dataset("train", train_dataset, vocab, FLAGS.train_shards)
+ _process_dataset("val", val_dataset, vocab, FLAGS.val_shards)
+ _process_dataset("test", test_dataset, vocab, FLAGS.test_shards)
+
+
+if __name__ == "__main__":
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/data/download_and_preprocess_mscoco.sh" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/data/download_and_preprocess_mscoco.sh"
new file mode 100644
index 00000000..ab3ff28d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/data/download_and_preprocess_mscoco.sh"
@@ -0,0 +1,90 @@
+#!/bin/bash
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+# Script to download and preprocess the MSCOCO data set.
+#
+# The outputs of this script are sharded TFRecord files containing serialized
+# SequenceExample protocol buffers. See build_mscoco_data.py for details of how
+# the SequenceExample protocol buffers are constructed.
+#
+# usage:
+# ./download_and_preprocess_mscoco.sh [data-dir]
+set -e
+
+if [ -z "$1" ]; then
+ echo "usage download_and_preproces_mscoco.sh [data dir]"
+ exit
+fi
+
+if [ "$(uname)" == "Darwin" ]; then
+ UNZIP="tar -xf"
+else
+ UNZIP="unzip -nq"
+fi
+
+# Create the output directories.
+OUTPUT_DIR="${1%/}"
+SCRATCH_DIR="${OUTPUT_DIR}/raw-data"
+mkdir -p "${OUTPUT_DIR}"
+mkdir -p "${SCRATCH_DIR}"
+CURRENT_DIR=$(pwd)
+WORK_DIR="$0.runfiles/im2txt/im2txt"
+
+# Helper function to download and unpack a .zip file.
+function download_and_unzip() {
+ local BASE_URL=${1}
+ local FILENAME=${2}
+
+ if [ ! -f ${FILENAME} ]; then
+ echo "Downloading ${FILENAME} to $(pwd)"
+ wget -nd -c "${BASE_URL}/${FILENAME}"
+ else
+ echo "Skipping download of ${FILENAME}"
+ fi
+ echo "Unzipping ${FILENAME}"
+ ${UNZIP} ${FILENAME}
+}
+
+cd ${SCRATCH_DIR}
+
+# Download the images.
+BASE_IMAGE_URL="http://msvocds.blob.core.windows.net/coco2014"
+
+TRAIN_IMAGE_FILE="train2014.zip"
+download_and_unzip ${BASE_IMAGE_URL} ${TRAIN_IMAGE_FILE}
+TRAIN_IMAGE_DIR="${SCRATCH_DIR}/train2014"
+
+VAL_IMAGE_FILE="val2014.zip"
+download_and_unzip ${BASE_IMAGE_URL} ${VAL_IMAGE_FILE}
+VAL_IMAGE_DIR="${SCRATCH_DIR}/val2014"
+
+# Download the captions.
+BASE_CAPTIONS_URL="http://msvocds.blob.core.windows.net/annotations-1-0-3"
+CAPTIONS_FILE="captions_train-val2014.zip"
+download_and_unzip ${BASE_CAPTIONS_URL} ${CAPTIONS_FILE}
+TRAIN_CAPTIONS_FILE="${SCRATCH_DIR}/annotations/captions_train2014.json"
+VAL_CAPTIONS_FILE="${SCRATCH_DIR}/annotations/captions_val2014.json"
+
+# Build TFRecords of the image data.
+cd "${CURRENT_DIR}"
+BUILD_SCRIPT="${WORK_DIR}/build_mscoco_data"
+"${BUILD_SCRIPT}" \
+ --train_image_dir="${TRAIN_IMAGE_DIR}" \
+ --val_image_dir="${VAL_IMAGE_DIR}" \
+ --train_captions_file="${TRAIN_CAPTIONS_FILE}" \
+ --val_captions_file="${VAL_CAPTIONS_FILE}" \
+ --output_dir="${OUTPUT_DIR}" \
+ --word_counts_output_file="${OUTPUT_DIR}/word_counts.txt"
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/evaluate.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/evaluate.py"
new file mode 100644
index 00000000..0c81a59d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/evaluate.py"
@@ -0,0 +1,198 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Evaluate the model.
+
+This script should be run concurrently with training so that summaries show up
+in TensorBoard.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import math
+import os.path
+import time
+
+
+import numpy as np
+import tensorflow as tf
+
+from im2txt import configuration
+from im2txt import show_and_tell_model
+
+FLAGS = tf.flags.FLAGS
+
+tf.flags.DEFINE_string("input_file_pattern", "",
+ "File pattern of sharded TFRecord input files.")
+tf.flags.DEFINE_string("checkpoint_dir", "",
+ "Directory containing model checkpoints.")
+tf.flags.DEFINE_string("eval_dir", "", "Directory to write event logs.")
+
+tf.flags.DEFINE_integer("eval_interval_secs", 600,
+ "Interval between evaluation runs.")
+tf.flags.DEFINE_integer("num_eval_examples", 10132,
+ "Number of examples for evaluation.")
+
+tf.flags.DEFINE_integer("min_global_step", 5000,
+ "Minimum global step to run evaluation.")
+
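+# Example invocation (paths are illustrative; run concurrently with training):
+#
+#   python evaluate.py \
+#     --input_file_pattern="${MSCOCO_DIR}/val-?????-of-00004" \
+#     --checkpoint_dir="${MODEL_DIR}/train" \
+#     --eval_dir="${MODEL_DIR}/eval"
+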
+tf.logging.set_verbosity(tf.logging.INFO)
+
+
+def evaluate_model(sess, model, global_step, summary_writer, summary_op):
+ """Computes perplexity-per-word over the evaluation dataset.
+
+ Summaries and perplexity-per-word are written out to the eval directory.
+
+ Args:
+ sess: Session object.
+ model: Instance of ShowAndTellModel; the model to evaluate.
+ global_step: Integer; global step of the model checkpoint.
+ summary_writer: Instance of FileWriter.
+ summary_op: Op for generating model summaries.
+ """
+ # Log model summaries on a single batch.
+ summary_str = sess.run(summary_op)
+ summary_writer.add_summary(summary_str, global_step)
+
+ # Compute perplexity over the entire dataset.
+ num_eval_batches = int(
+ math.ceil(FLAGS.num_eval_examples / model.config.batch_size))
+
+ start_time = time.time()
+ sum_losses = 0.
+ sum_weights = 0.
+ for i in range(num_eval_batches):
+ cross_entropy_losses, weights = sess.run([
+ model.target_cross_entropy_losses,
+ model.target_cross_entropy_loss_weights
+ ])
+ sum_losses += np.sum(cross_entropy_losses * weights)
+ sum_weights += np.sum(weights)
+ if not i % 100:
+ tf.logging.info("Computed losses for %d of %d batches.", i + 1,
+ num_eval_batches)
+ eval_time = time.time() - start_time
+
+ perplexity = math.exp(sum_losses / sum_weights)
+ tf.logging.info("Perplexity = %f (%.2g sec)", perplexity, eval_time)
+
+ # Log perplexity to the FileWriter.
+ summary = tf.Summary()
+ value = summary.value.add()
+ value.simple_value = perplexity
+ value.tag = "Perplexity"
+ summary_writer.add_summary(summary, global_step)
+
+ # Write the Events file to the eval directory.
+ summary_writer.flush()
+ tf.logging.info("Finished processing evaluation at global step %d.",
+ global_step)
+
+
+def run_once(model, saver, summary_writer, summary_op):
+ """Evaluates the latest model checkpoint.
+
+ Args:
+ model: Instance of ShowAndTellModel; the model to evaluate.
+ saver: Instance of tf.train.Saver for restoring model Variables.
+ summary_writer: Instance of FileWriter.
+ summary_op: Op for generating model summaries.
+ """
+ model_path = tf.train.latest_checkpoint(FLAGS.checkpoint_dir)
+ if not model_path:
+ tf.logging.info("Skipping evaluation. No checkpoint found in: %s",
+ FLAGS.checkpoint_dir)
+ return
+
+ with tf.Session() as sess:
+ # Load model from checkpoint.
+ tf.logging.info("Loading model from checkpoint: %s", model_path)
+ saver.restore(sess, model_path)
+ global_step = tf.train.global_step(sess, model.global_step.name)
+ tf.logging.info("Successfully loaded %s at global step = %d.",
+ os.path.basename(model_path), global_step)
+ if global_step < FLAGS.min_global_step:
+ tf.logging.info("Skipping evaluation. Global step = %d < %d", global_step,
+ FLAGS.min_global_step)
+ return
+
+ # Start the queue runners.
+ coord = tf.train.Coordinator()
+ threads = tf.train.start_queue_runners(coord=coord)
+
+ # Run evaluation on the latest checkpoint.
+ try:
+ evaluate_model(
+ sess=sess,
+ model=model,
+ global_step=global_step,
+ summary_writer=summary_writer,
+ summary_op=summary_op)
+ except Exception as e: # pylint: disable=broad-except
+ tf.logging.error("Evaluation failed.")
+ coord.request_stop(e)
+
+ coord.request_stop()
+ coord.join(threads, stop_grace_period_secs=10)
+
+
+def run():
+ """Runs evaluation in a loop, and logs summaries to TensorBoard."""
+ # Create the evaluation directory if it doesn't exist.
+ eval_dir = FLAGS.eval_dir
+ if not tf.gfile.IsDirectory(eval_dir):
+ tf.logging.info("Creating eval directory: %s", eval_dir)
+ tf.gfile.MakeDirs(eval_dir)
+
+ g = tf.Graph()
+ with g.as_default():
+ # Build the model for evaluation.
+ model_config = configuration.ModelConfig()
+ model_config.input_file_pattern = FLAGS.input_file_pattern
+ model = show_and_tell_model.ShowAndTellModel(model_config, mode="eval")
+ model.build()
+
+ # Create the Saver to restore model Variables.
+ saver = tf.train.Saver()
+
+ # Create the summary operation and the summary writer.
+ summary_op = tf.summary.merge_all()
+ summary_writer = tf.summary.FileWriter(eval_dir)
+
+ g.finalize()
+
+ # Run a new evaluation run every eval_interval_secs.
+ while True:
+ start = time.time()
+ tf.logging.info("Starting evaluation at " + time.strftime(
+ "%Y-%m-%d-%H:%M:%S", time.localtime()))
+ run_once(model, saver, summary_writer, summary_op)
+ time_to_next_eval = start + FLAGS.eval_interval_secs - time.time()
+ if time_to_next_eval > 0:
+ time.sleep(time_to_next_eval)
+
+
+def main(unused_argv):
+ assert FLAGS.input_file_pattern, "--input_file_pattern is required"
+ assert FLAGS.checkpoint_dir, "--checkpoint_dir is required"
+ assert FLAGS.eval_dir, "--eval_dir is required"
+ run()
+
+
+if __name__ == "__main__":
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/inference_utils/BUILD" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/inference_utils/BUILD"
new file mode 100644
index 00000000..82a15fd3
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/inference_utils/BUILD"
@@ -0,0 +1,31 @@
+package(default_visibility = ["//im2txt:internal"])
+
+licenses(["notice"]) # Apache 2.0
+
+exports_files(["LICENSE"])
+
+py_library(
+ name = "inference_wrapper_base",
+ srcs = ["inference_wrapper_base.py"],
+ srcs_version = "PY2AND3",
+)
+
+py_library(
+ name = "vocabulary",
+ srcs = ["vocabulary.py"],
+ srcs_version = "PY2AND3",
+)
+
+py_library(
+ name = "caption_generator",
+ srcs = ["caption_generator.py"],
+ srcs_version = "PY2AND3",
+)
+
+py_test(
+ name = "caption_generator_test",
+ srcs = ["caption_generator_test.py"],
+ deps = [
+ ":caption_generator",
+ ],
+)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/inference_utils/caption_generator.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/inference_utils/caption_generator.py"
new file mode 100644
index 00000000..ef2c48e2
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/inference_utils/caption_generator.py"
@@ -0,0 +1,211 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Class for generating captions from an image-to-text model."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import heapq
+import math
+
+
+import numpy as np
+
+
+class Caption(object):
+ """Represents a complete or partial caption."""
+
+ def __init__(self, sentence, state, logprob, score, metadata=None):
+ """Initializes the Caption.
+
+ Args:
+ sentence: List of word ids in the caption.
+ state: Model state after generating the previous word.
+ logprob: Log-probability of the caption.
+ score: Score of the caption.
+ metadata: Optional metadata associated with the partial sentence. If not
+ None, a list of strings with the same length as 'sentence'.
+ """
+ self.sentence = sentence
+ self.state = state
+ self.logprob = logprob
+ self.score = score
+ self.metadata = metadata
+
+ def __cmp__(self, other):
+ """Compares Captions by score."""
+ assert isinstance(other, Caption)
+ if self.score == other.score:
+ return 0
+ elif self.score < other.score:
+ return -1
+ else:
+ return 1
+
+ # For Python 3 compatibility (__cmp__ is deprecated).
+ def __lt__(self, other):
+ assert isinstance(other, Caption)
+ return self.score < other.score
+
+ # Also for Python 3 compatibility.
+ def __eq__(self, other):
+ assert isinstance(other, Caption)
+ return self.score == other.score
+
+
+class TopN(object):
+ """Maintains the top n elements of an incrementally provided set."""
+
+ def __init__(self, n):
+ self._n = n
+ self._data = []
+
+ def size(self):
+ assert self._data is not None
+ return len(self._data)
+
+ def push(self, x):
+ """Pushes a new element."""
+ assert self._data is not None
+ if len(self._data) < self._n:
+ heapq.heappush(self._data, x)
+ else:
+ heapq.heappushpop(self._data, x)
+
+ def extract(self, sort=False):
+ """Extracts all elements from the TopN. This is a destructive operation.
+
+ The only method that can be called immediately after extract() is reset().
+
+ Args:
+ sort: Whether to return the elements in descending sorted order.
+
+ Returns:
+ A list of data; the top n elements provided to the set.
+ """
+ assert self._data is not None
+ data = self._data
+ self._data = None
+ if sort:
+ data.sort(reverse=True)
+ return data
+
+ def reset(self):
+ """Returns the TopN to an empty state."""
+ self._data = []
+
+
+class CaptionGenerator(object):
+ """Class to generate captions from an image-to-text model."""
+
+ def __init__(self,
+ model,
+ vocab,
+ beam_size=3,
+ max_caption_length=20,
+ length_normalization_factor=0.0):
+ """Initializes the generator.
+
+ Args:
+ model: Object encapsulating a trained image-to-text model. Must have
+ methods feed_image() and inference_step(). For example, an instance of
+ InferenceWrapperBase.
+ vocab: A Vocabulary object.
+ beam_size: Beam size to use when generating captions.
+ max_caption_length: The maximum caption length before stopping the search.
+ length_normalization_factor: If != 0, a number x such that captions are
+ scored by logprob/length^x, rather than logprob. This changes the
+ relative scores of captions depending on their lengths. For example, if
+ x > 0 then longer captions will be favored.
+ """
+ self.vocab = vocab
+ self.model = model
+
+ self.beam_size = beam_size
+ self.max_caption_length = max_caption_length
+ self.length_normalization_factor = length_normalization_factor
+
+ def beam_search(self, sess, encoded_image):
+ """Runs beam search caption generation on a single image.
+
+ Args:
+ sess: TensorFlow Session object.
+ encoded_image: An encoded image string.
+
+ Returns:
+ A list of Caption sorted by descending score.
+ """
+ # Feed in the image to get the initial state.
+ initial_state = self.model.feed_image(sess, encoded_image)
+
+ initial_beam = Caption(
+ sentence=[self.vocab.start_id],
+ state=initial_state[0],
+ logprob=0.0,
+ score=0.0,
+ metadata=[""])
+ partial_captions = TopN(self.beam_size)
+ partial_captions.push(initial_beam)
+ complete_captions = TopN(self.beam_size)
+
+ # Run beam search.
+ for _ in range(self.max_caption_length - 1):
+ partial_captions_list = partial_captions.extract()
+ partial_captions.reset()
+ input_feed = np.array([c.sentence[-1] for c in partial_captions_list])
+ state_feed = np.array([c.state for c in partial_captions_list])
+
+ softmax, new_states, metadata = self.model.inference_step(sess,
+ input_feed,
+ state_feed)
+
+ for i, partial_caption in enumerate(partial_captions_list):
+ word_probabilities = softmax[i]
+ state = new_states[i]
+ # For this partial caption, get the beam_size most probable next words.
+ words_and_probs = list(enumerate(word_probabilities))
+ words_and_probs.sort(key=lambda x: -x[1])
+ words_and_probs = words_and_probs[0:self.beam_size]
+ # Each next word gives a new partial caption.
+ for w, p in words_and_probs:
+ if p < 1e-12:
+ continue # Avoid log(0).
+ sentence = partial_caption.sentence + [w]
+ logprob = partial_caption.logprob + math.log(p)
+ score = logprob
+ if metadata:
+ metadata_list = partial_caption.metadata + [metadata[i]]
+ else:
+ metadata_list = None
+ if w == self.vocab.end_id:
+ if self.length_normalization_factor > 0:
+ score /= len(sentence)**self.length_normalization_factor
+ beam = Caption(sentence, state, logprob, score, metadata_list)
+ complete_captions.push(beam)
+ else:
+ beam = Caption(sentence, state, logprob, score, metadata_list)
+ partial_captions.push(beam)
+ if partial_captions.size() == 0:
+ # We have run out of partial candidates; happens when beam_size = 1.
+ break
+
+ # If we have no complete captions then fall back to the partial captions.
+ # But never output a mixture of complete and partial captions because a
+ # partial caption could have a higher score than all the complete captions.
+ if not complete_captions.size():
+ complete_captions = partial_captions
+
+ return complete_captions.extract(sort=True)
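+
+
+# Minimal usage sketch (illustrative; assumes a restored model, a session,
+# and a Vocabulary with start_id/end_id/id_to_word, as in inference_utils):
+#
+#   generator = CaptionGenerator(model, vocab, beam_size=3)
+#   captions = generator.beam_search(sess, encoded_image)
+#   for caption in captions:
+#       words = [vocab.id_to_word(w) for w in caption.sentence[1:-1]]
+#       print(" ".join(words), math.exp(caption.logprob))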
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/inference_utils/caption_generator_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/inference_utils/caption_generator_test.py"
new file mode 100644
index 00000000..bbd06931
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/inference_utils/caption_generator_test.py"
@@ -0,0 +1,178 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Unit tests for CaptionGenerator."""
+
+import math
+
+
+
+import numpy as np
+import tensorflow as tf
+
+from im2txt.inference_utils import caption_generator
+
+
+class FakeVocab(object):
+ """Fake Vocabulary for testing purposes."""
+
+ def __init__(self):
+ self.start_id = 0 # Word id denoting sentence start.
+ self.end_id = 1 # Word id denoting sentence end.
+
+
+class FakeModel(object):
+ """Fake model for testing purposes."""
+
+ def __init__(self):
+ # Number of words in the vocab.
+ self._vocab_size = 12
+
+ # Dimensionality of the nominal model state.
+ self._state_size = 1
+
+ # Map of previous word to the probability distribution of the next word.
+ self._probabilities = {
+ 0: {1: 0.1,
+ 2: 0.2,
+ 3: 0.3,
+ 4: 0.4},
+ 2: {5: 0.1,
+ 6: 0.9},
+ 3: {1: 0.1,
+ 7: 0.4,
+ 8: 0.5},
+ 4: {1: 0.3,
+ 9: 0.3,
+ 10: 0.4},
+ 5: {1: 1.0},
+ 6: {1: 1.0},
+ 7: {1: 1.0},
+ 8: {1: 1.0},
+ 9: {1: 0.5,
+ 11: 0.5},
+ 10: {1: 1.0},
+ 11: {1: 1.0},
+ }
+
+ # pylint: disable=unused-argument
+
+ def feed_image(self, sess, encoded_image):
+ # Return a nominal model state.
+ return np.zeros([1, self._state_size])
+
+ def inference_step(self, sess, input_feed, state_feed):
+ # Compute the matrix of softmax distributions for the next batch of words.
+ batch_size = input_feed.shape[0]
+ softmax_output = np.zeros([batch_size, self._vocab_size])
+ for batch_index, word_id in enumerate(input_feed):
+ for next_word, probability in self._probabilities[word_id].items():
+ softmax_output[batch_index, next_word] = probability
+
+ # Nominal state and metadata.
+ new_state = np.zeros([batch_size, self._state_size])
+ metadata = None
+
+ return softmax_output, new_state, metadata
+
+ # pylint: enable=unused-argument
+
+
+class CaptionGeneratorTest(tf.test.TestCase):
+
+ def _assertExpectedCaptions(self,
+ expected_captions,
+ beam_size=3,
+ max_caption_length=20,
+ length_normalization_factor=0):
+ """Tests that beam search generates the expected captions.
+
+ Args:
+ expected_captions: A sequence of pairs (sentence, probability), where
+ sentence is a list of integer ids and probability is a float in [0, 1].
+ beam_size: Parameter passed to beam_search().
+ max_caption_length: Parameter passed to beam_search().
+ length_normalization_factor: Parameter passed to beam_search().
+ """
+ expected_sentences = [c[0] for c in expected_captions]
+ expected_probabilities = [c[1] for c in expected_captions]
+
+ # Generate captions.
+ generator = caption_generator.CaptionGenerator(
+ model=FakeModel(),
+ vocab=FakeVocab(),
+ beam_size=beam_size,
+ max_caption_length=max_caption_length,
+ length_normalization_factor=length_normalization_factor)
+ actual_captions = generator.beam_search(sess=None, encoded_image=None)
+
+ actual_sentences = [c.sentence for c in actual_captions]
+ actual_probabilities = [math.exp(c.logprob) for c in actual_captions]
+
+ self.assertEqual(expected_sentences, actual_sentences)
+ self.assertAllClose(expected_probabilities, actual_probabilities)
+
+ def testBeamSize(self):
+ # Beam size = 1.
+ expected = [([0, 4, 10, 1], 0.16)]
+ self._assertExpectedCaptions(expected, beam_size=1)
+
+ # Beam size = 2.
+ expected = [([0, 4, 10, 1], 0.16), ([0, 3, 8, 1], 0.15)]
+ self._assertExpectedCaptions(expected, beam_size=2)
+
+ # Beam size = 3.
+ expected = [
+ ([0, 2, 6, 1], 0.18), ([0, 4, 10, 1], 0.16), ([0, 3, 8, 1], 0.15)
+ ]
+ self._assertExpectedCaptions(expected, beam_size=3)
+
+ def testMaxLength(self):
+ # Max length = 1.
+ expected = [([0], 1.0)]
+ self._assertExpectedCaptions(expected, max_caption_length=1)
+
+ # Max length = 2.
+ # There are no complete sentences, so partial sentences are returned.
+ expected = [([0, 4], 0.4), ([0, 3], 0.3), ([0, 2], 0.2)]
+ self._assertExpectedCaptions(expected, max_caption_length=2)
+
+ # Max length = 3.
+ # There is at least one complete sentence, so only complete sentences are
+ # returned.
+ expected = [([0, 4, 1], 0.12), ([0, 3, 1], 0.03)]
+ self._assertExpectedCaptions(expected, max_caption_length=3)
+
+ # Max length = 4.
+ expected = [
+ ([0, 2, 6, 1], 0.18), ([0, 4, 10, 1], 0.16), ([0, 3, 8, 1], 0.15)
+ ]
+ self._assertExpectedCaptions(expected, max_caption_length=4)
+
+ def testLengthNormalization(self):
+ # Length normalization factor = 3.
+ # The longest caption is returned first, despite having low probability,
+ # because it has the highest log(probability)/length**3.
+ expected = [
+ ([0, 4, 9, 11, 1], 0.06),
+ ([0, 2, 6, 1], 0.18),
+ ([0, 4, 10, 1], 0.16),
+ ([0, 3, 8, 1], 0.15),
+ ]
+ self._assertExpectedCaptions(
+ expected, beam_size=4, length_normalization_factor=3)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/inference_utils/inference_wrapper_base.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/inference_utils/inference_wrapper_base.py"
new file mode 100644
index 00000000..e94cd6af
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/inference_utils/inference_wrapper_base.py"
@@ -0,0 +1,181 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Base wrapper class for performing inference with an image-to-text model.
+
+Subclasses must implement the following methods:
+
+ build_model():
+ Builds the model for inference and returns the model object.
+
+ feed_image():
+ Takes an encoded image and returns the initial model state, where "state"
+ is a numpy array whose specifics are defined by the subclass, e.g.
+ concatenated LSTM state. It's assumed that feed_image() will be called
+ precisely once at the start of inference for each image. Subclasses may
+ compute and/or save per-image internal context in this method.
+
+ inference_step():
+ Takes a batch of inputs and states at a single time-step. Returns the
+ softmax output corresponding to the inputs, and the new states of the batch.
+ Optionally also returns metadata about the current inference step, e.g. a
+ serialized numpy array containing activations from a particular model layer.
+
+Client usage:
+ 1. Build the model inference graph via build_graph_from_config() or
+ build_graph_from_proto().
+ 2. Call the resulting restore_fn to load the model checkpoint.
+ 3. For each image in a batch of images:
+ a) Call feed_image() once to get the initial state.
+ b) For each step of caption generation, call inference_step().
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os.path
+
+
+import tensorflow as tf
+
+# pylint: disable=unused-argument
+
+
+class InferenceWrapperBase(object):
+ """Base wrapper class for performing inference with an image-to-text model."""
+
+ def __init__(self):
+ pass
+
+ def build_model(self, model_config):
+ """Builds the model for inference.
+
+ Args:
+ model_config: Object containing configuration for building the model.
+
+ Returns:
+ model: The model object.
+ """
+ tf.logging.fatal("Please implement build_model in subclass")
+
+ def _create_restore_fn(self, checkpoint_path, saver):
+ """Creates a function that restores a model from checkpoint.
+
+ Args:
+ checkpoint_path: Checkpoint file or a directory containing a checkpoint
+ file.
+ saver: Saver for restoring variables from the checkpoint file.
+
+ Returns:
+ restore_fn: A function such that restore_fn(sess) loads model variables
+ from the checkpoint file.
+
+ Raises:
+ ValueError: If checkpoint_path does not refer to a checkpoint file or a
+ directory containing a checkpoint file.
+ """
+ if tf.gfile.IsDirectory(checkpoint_path):
+ checkpoint_path = tf.train.latest_checkpoint(checkpoint_path)
+ if not checkpoint_path:
+ raise ValueError("No checkpoint file found in: %s" % checkpoint_path)
+
+ def _restore_fn(sess):
+ tf.logging.info("Loading model from checkpoint: %s", checkpoint_path)
+ saver.restore(sess, checkpoint_path)
+ tf.logging.info("Successfully loaded checkpoint: %s",
+ os.path.basename(checkpoint_path))
+
+ return _restore_fn
+
+ def build_graph_from_config(self, model_config, checkpoint_path):
+ """Builds the inference graph from a configuration object.
+
+ Args:
+ model_config: Object containing configuration for building the model.
+ checkpoint_path: Checkpoint file or a directory containing a checkpoint
+ file.
+
+ Returns:
+ restore_fn: A function such that restore_fn(sess) loads model variables
+ from the checkpoint file.
+ """
+ tf.logging.info("Building model.")
+ self.build_model(model_config)
+ saver = tf.train.Saver()
+
+ return self._create_restore_fn(checkpoint_path, saver)
+
+ def build_graph_from_proto(self, graph_def_file, saver_def_file,
+ checkpoint_path):
+ """Builds the inference graph from serialized GraphDef and SaverDef protos.
+
+ Args:
+ graph_def_file: File containing a serialized GraphDef proto.
+ saver_def_file: File containing a serialized SaverDef proto.
+ checkpoint_path: Checkpoint file or a directory containing a checkpoint
+ file.
+
+ Returns:
+ restore_fn: A function such that restore_fn(sess) loads model variables
+ from the checkpoint file.
+ """
+ # Load the Graph.
+ tf.logging.info("Loading GraphDef from file: %s", graph_def_file)
+ graph_def = tf.GraphDef()
+ with tf.gfile.FastGFile(graph_def_file, "rb") as f:
+ graph_def.ParseFromString(f.read())
+ tf.import_graph_def(graph_def, name="")
+
+ # Load the Saver.
+ tf.logging.info("Loading SaverDef from file: %s", saver_def_file)
+ saver_def = tf.train.SaverDef()
+ with tf.gfile.FastGFile(saver_def_file, "rb") as f:
+ saver_def.ParseFromString(f.read())
+ saver = tf.train.Saver(saver_def=saver_def)
+
+ return self._create_restore_fn(checkpoint_path, saver)
+
+ def feed_image(self, sess, encoded_image):
+ """Feeds an image and returns the initial model state.
+
+ See comments at the top of file.
+
+ Args:
+ sess: TensorFlow Session object.
+ encoded_image: An encoded image string.
+
+ Returns:
+ state: A numpy array of shape [1, state_size].
+ """
+ tf.logging.fatal("Please implement feed_image in subclass")
+
+ def inference_step(self, sess, input_feed, state_feed):
+ """Runs one step of inference.
+
+ Args:
+ sess: TensorFlow Session object.
+ input_feed: A numpy array of shape [batch_size].
+ state_feed: A numpy array of shape [batch_size, state_size].
+
+ Returns:
+ softmax_output: A numpy array of shape [batch_size, vocab_size].
+ new_state: A numpy array of shape [batch_size, state_size].
+ metadata: Optional. If not None, a string containing metadata about the
+ current inference step (e.g. serialized numpy array containing
+ activations from a particular model layer.).
+ """
+ tf.logging.fatal("Please implement inference_step in subclass")
+
+# pylint: enable=unused-argument
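+
+# Client-usage sketch (illustrative; mirrors the steps in the module
+# docstring, using the concrete InferenceWrapper subclass defined elsewhere
+# in this package):
+#
+#   wrapper = inference_wrapper.InferenceWrapper()
+#   restore_fn = wrapper.build_graph_from_config(configuration.ModelConfig(),
+#                                                checkpoint_path)
+#   with tf.Session() as sess:
+#     restore_fn(sess)
+#     state = wrapper.feed_image(sess, encoded_image)
+#     softmax, state, _ = wrapper.inference_step(sess, input_feed, state)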
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/inference_utils/vocabulary.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/inference_utils/vocabulary.py"
new file mode 100644
index 00000000..ecf0ada9
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/inference_utils/vocabulary.py"
@@ -0,0 +1,78 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Vocabulary class for an image-to-text model."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import tensorflow as tf
+
+
+class Vocabulary(object):
+ """Vocabulary class for an image-to-text model."""
+
+ def __init__(self,
+ vocab_file,
+ start_word="",
+ end_word="",
+ unk_word=""):
+ """Initializes the vocabulary.
+
+ Args:
+ vocab_file: File containing the vocabulary, where the words are the first
+ whitespace-separated token on each line (other tokens are ignored) and
+ the word ids are the corresponding line numbers.
+ start_word: Special word denoting sentence start.
+ end_word: Special word denoting sentence end.
+ unk_word: Special word denoting unknown words.
+ """
+ if not tf.gfile.Exists(vocab_file):
+ tf.logging.fatal("Vocab file %s not found.", vocab_file)
+ tf.logging.info("Initializing vocabulary from file: %s", vocab_file)
+
+ with tf.gfile.GFile(vocab_file, mode="r") as f:
+ reverse_vocab = list(f.readlines())
+ reverse_vocab = [line.split()[0] for line in reverse_vocab]
+ assert start_word in reverse_vocab
+ assert end_word in reverse_vocab
+ if unk_word not in reverse_vocab:
+ reverse_vocab.append(unk_word)
+ vocab = dict([(x, y) for (y, x) in enumerate(reverse_vocab)])
+
+ tf.logging.info("Created vocabulary with %d words" % len(vocab))
+
+ self.vocab = vocab # vocab[word] = id
+ self.reverse_vocab = reverse_vocab # reverse_vocab[id] = word
+
+ # Save special word ids.
+ self.start_id = vocab[start_word]
+ self.end_id = vocab[end_word]
+ self.unk_id = vocab[unk_word]
+
+ def word_to_id(self, word):
+ """Returns the integer word id of a word string."""
+ if word in self.vocab:
+ return self.vocab[word]
+ else:
+ return self.unk_id
+
+ def id_to_word(self, word_id):
+ """Returns the word string of an integer word id."""
+ if word_id >= len(self.reverse_vocab):
+ return self.reverse_vocab[self.unk_id]
+ else:
+ return self.reverse_vocab[word_id]
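+
+
+# Illustrative usage (file path and word are examples only):
+#
+#   vocab = Vocabulary("word_counts.txt")
+#   token_id = vocab.word_to_id("surfboard")
+#   assert vocab.id_to_word(token_id) == "surfboard"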
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/inference_wrapper.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/inference_wrapper.py"
new file mode 100644
index 00000000..a047a9c8
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/inference_wrapper.py"
@@ -0,0 +1,51 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Model wrapper class for performing inference with a ShowAndTellModel."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+
+from im2txt import show_and_tell_model
+from im2txt.inference_utils import inference_wrapper_base
+
+
+class InferenceWrapper(inference_wrapper_base.InferenceWrapperBase):
+ """Model wrapper class for performing inference with a ShowAndTellModel."""
+
+ def __init__(self):
+ super(InferenceWrapper, self).__init__()
+
+ def build_model(self, model_config):
+ model = show_and_tell_model.ShowAndTellModel(model_config, mode="inference")
+ model.build()
+ return model
+
+ def feed_image(self, sess, encoded_image):
+ initial_state = sess.run(fetches="lstm/initial_state:0",
+ feed_dict={"image_feed:0": encoded_image})
+ return initial_state
+
+ def inference_step(self, sess, input_feed, state_feed):
+ softmax_output, state_output = sess.run(
+ fetches=["softmax:0", "lstm/state:0"],
+ feed_dict={
+ "input_feed:0": input_feed,
+ "lstm/state_feed:0": state_feed,
+ })
+ return softmax_output, state_output, None
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/ops/BUILD" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/ops/BUILD"
new file mode 100644
index 00000000..7d48bf39
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/ops/BUILD"
@@ -0,0 +1,32 @@
+package(default_visibility = ["//im2txt:internal"])
+
+licenses(["notice"]) # Apache 2.0
+
+exports_files(["LICENSE"])
+
+py_library(
+ name = "image_processing",
+ srcs = ["image_processing.py"],
+ srcs_version = "PY2AND3",
+)
+
+py_library(
+ name = "image_embedding",
+ srcs = ["image_embedding.py"],
+ srcs_version = "PY2AND3",
+)
+
+py_test(
+ name = "image_embedding_test",
+ size = "small",
+ srcs = ["image_embedding_test.py"],
+ deps = [
+ ":image_embedding",
+ ],
+)
+
+py_library(
+ name = "inputs",
+ srcs = ["inputs.py"],
+ srcs_version = "PY2AND3",
+)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/ops/image_embedding.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/ops/image_embedding.py"
new file mode 100644
index 00000000..58e3ddaa
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/ops/image_embedding.py"
@@ -0,0 +1,114 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Image embedding ops."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import tensorflow as tf
+
+from tensorflow.contrib.slim.python.slim.nets.inception_v3 import inception_v3_base
+
+slim = tf.contrib.slim
+
+
+def inception_v3(images,
+ trainable=True,
+ is_training=True,
+ weight_decay=0.00004,
+ stddev=0.1,
+ dropout_keep_prob=0.8,
+ use_batch_norm=True,
+ batch_norm_params=None,
+ add_summaries=True,
+ scope="InceptionV3"):
+ """Builds an Inception V3 subgraph for image embeddings.
+
+ Args:
+ images: A float32 Tensor of shape [batch, height, width, channels].
+ trainable: Whether the inception submodel should be trainable or not.
+ is_training: Boolean indicating training mode or not.
+ weight_decay: Coefficient for weight regularization.
+ stddev: The standard deviation of the truncated normal weight initializer.
+ dropout_keep_prob: Dropout keep probability.
+ use_batch_norm: Whether to use batch normalization.
+ batch_norm_params: Parameters for batch normalization. See
+ tf.contrib.layers.batch_norm for details.
+ add_summaries: Whether to add activation summaries.
+ scope: Optional Variable scope.
+
+ Returns:
+ end_points: A dictionary of activations from inception_v3 layers.
+ """
+ # Only consider the inception model to be in training mode if it's trainable.
+ is_inception_model_training = trainable and is_training
+
+ if use_batch_norm:
+ # Default parameters for batch normalization.
+ if not batch_norm_params:
+ batch_norm_params = {
+ "is_training": is_inception_model_training,
+ "trainable": trainable,
+ # Decay for the moving averages.
+ "decay": 0.9997,
+ # Epsilon to prevent 0s in variance.
+ "epsilon": 0.001,
+ # Collection containing the moving mean and moving variance.
+ "variables_collections": {
+ "beta": None,
+ "gamma": None,
+ "moving_mean": ["moving_vars"],
+ "moving_variance": ["moving_vars"],
+ }
+ }
+ else:
+ batch_norm_params = None
+
+ if trainable:
+ weights_regularizer = tf.contrib.layers.l2_regularizer(weight_decay)
+ else:
+ weights_regularizer = None
+
+ with tf.variable_scope(scope, "InceptionV3", [images]) as scope:
+ with slim.arg_scope(
+ [slim.conv2d, slim.fully_connected],
+ weights_regularizer=weights_regularizer,
+ trainable=trainable):
+ with slim.arg_scope(
+ [slim.conv2d],
+ weights_initializer=tf.truncated_normal_initializer(stddev=stddev),
+ activation_fn=tf.nn.relu,
+ normalizer_fn=slim.batch_norm,
+ normalizer_params=batch_norm_params):
+ net, end_points = inception_v3_base(images, scope=scope)
+ with tf.variable_scope("logits"):
+ shape = net.get_shape()
+ net = slim.avg_pool2d(net, shape[1:3], padding="VALID", scope="pool")
+ net = slim.dropout(
+ net,
+ keep_prob=dropout_keep_prob,
+ is_training=is_inception_model_training,
+ scope="dropout")
+ net = slim.flatten(net, scope="flatten")
+
+ # Add summaries.
+ if add_summaries:
+ for v in end_points.values():
+ tf.contrib.layers.summaries.summarize_activation(v)
+
+ return net
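+
+
+# Illustrative usage: for a [batch, 299, 299, 3] float input, the returned
+# embedding has shape [batch, 2048] (see image_embedding_test.py below).
+#
+#   images = tf.placeholder(tf.float32, [None, 299, 299, 3])
+#   embeddings = inception_v3(images, trainable=False, is_training=False)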
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/ops/image_embedding_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/ops/image_embedding_test.py"
new file mode 100644
index 00000000..66324d68
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/ops/image_embedding_test.py"
@@ -0,0 +1,136 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for tensorflow_models.im2txt.ops.image_embedding."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import tensorflow as tf
+
+from im2txt.ops import image_embedding
+
+
+class InceptionV3Test(tf.test.TestCase):
+
+ def setUp(self):
+ super(InceptionV3Test, self).setUp()
+
+ batch_size = 4
+ height = 299
+ width = 299
+ num_channels = 3
+ self._images = tf.placeholder(tf.float32,
+ [batch_size, height, width, num_channels])
+ self._batch_size = batch_size
+
+ def _countInceptionParameters(self):
+ """Counts the number of parameters in the inception model at top scope."""
+ counter = {}
+ for v in tf.global_variables():
+ name_tokens = v.op.name.split("/")
+ if name_tokens[0] == "InceptionV3":
+ name = "InceptionV3/" + name_tokens[1]
+ num_params = v.get_shape().num_elements()
+ assert num_params
+ counter[name] = counter.get(name, 0) + num_params
+ return counter
+
+ def _verifyParameterCounts(self):
+ """Verifies the number of parameters in the inception model."""
+ param_counts = self._countInceptionParameters()
+ expected_param_counts = {
+ "InceptionV3/Conv2d_1a_3x3": 960,
+ "InceptionV3/Conv2d_2a_3x3": 9312,
+ "InceptionV3/Conv2d_2b_3x3": 18624,
+ "InceptionV3/Conv2d_3b_1x1": 5360,
+ "InceptionV3/Conv2d_4a_3x3": 138816,
+ "InceptionV3/Mixed_5b": 256368,
+ "InceptionV3/Mixed_5c": 277968,
+ "InceptionV3/Mixed_5d": 285648,
+ "InceptionV3/Mixed_6a": 1153920,
+ "InceptionV3/Mixed_6b": 1298944,
+ "InceptionV3/Mixed_6c": 1692736,
+ "InceptionV3/Mixed_6d": 1692736,
+ "InceptionV3/Mixed_6e": 2143872,
+ "InceptionV3/Mixed_7a": 1699584,
+ "InceptionV3/Mixed_7b": 5047872,
+ "InceptionV3/Mixed_7c": 6080064,
+ }
+ self.assertDictEqual(expected_param_counts, param_counts)
+
+ def _assertCollectionSize(self, expected_size, collection):
+ actual_size = len(tf.get_collection(collection))
+ if expected_size != actual_size:
+ self.fail("Found %d items in collection %s (expected %d)." %
+ (actual_size, collection, expected_size))
+
+ def testTrainableTrueIsTrainingTrue(self):
+ embeddings = image_embedding.inception_v3(
+ self._images, trainable=True, is_training=True)
+ self.assertEqual([self._batch_size, 2048], embeddings.get_shape().as_list())
+
+ self._verifyParameterCounts()
+ self._assertCollectionSize(376, tf.GraphKeys.GLOBAL_VARIABLES)
+ self._assertCollectionSize(188, tf.GraphKeys.TRAINABLE_VARIABLES)
+ self._assertCollectionSize(188, tf.GraphKeys.UPDATE_OPS)
+ self._assertCollectionSize(94, tf.GraphKeys.REGULARIZATION_LOSSES)
+ self._assertCollectionSize(0, tf.GraphKeys.LOSSES)
+ self._assertCollectionSize(23, tf.GraphKeys.SUMMARIES)
+
+ def testTrainableTrueIsTrainingFalse(self):
+ embeddings = image_embedding.inception_v3(
+ self._images, trainable=True, is_training=False)
+ self.assertEqual([self._batch_size, 2048], embeddings.get_shape().as_list())
+
+ self._verifyParameterCounts()
+ self._assertCollectionSize(376, tf.GraphKeys.GLOBAL_VARIABLES)
+ self._assertCollectionSize(188, tf.GraphKeys.TRAINABLE_VARIABLES)
+ self._assertCollectionSize(0, tf.GraphKeys.UPDATE_OPS)
+ self._assertCollectionSize(94, tf.GraphKeys.REGULARIZATION_LOSSES)
+ self._assertCollectionSize(0, tf.GraphKeys.LOSSES)
+ self._assertCollectionSize(23, tf.GraphKeys.SUMMARIES)
+
+ def testTrainableFalseIsTrainingTrue(self):
+ embeddings = image_embedding.inception_v3(
+ self._images, trainable=False, is_training=True)
+ self.assertEqual([self._batch_size, 2048], embeddings.get_shape().as_list())
+
+ self._verifyParameterCounts()
+ self._assertCollectionSize(376, tf.GraphKeys.GLOBAL_VARIABLES)
+ self._assertCollectionSize(0, tf.GraphKeys.TRAINABLE_VARIABLES)
+ self._assertCollectionSize(0, tf.GraphKeys.UPDATE_OPS)
+ self._assertCollectionSize(0, tf.GraphKeys.REGULARIZATION_LOSSES)
+ self._assertCollectionSize(0, tf.GraphKeys.LOSSES)
+ self._assertCollectionSize(23, tf.GraphKeys.SUMMARIES)
+
+ def testTrainableFalseIsTrainingFalse(self):
+ embeddings = image_embedding.inception_v3(
+ self._images, trainable=False, is_training=False)
+ self.assertEqual([self._batch_size, 2048], embeddings.get_shape().as_list())
+
+ self._verifyParameterCounts()
+ self._assertCollectionSize(376, tf.GraphKeys.GLOBAL_VARIABLES)
+ self._assertCollectionSize(0, tf.GraphKeys.TRAINABLE_VARIABLES)
+ self._assertCollectionSize(0, tf.GraphKeys.UPDATE_OPS)
+ self._assertCollectionSize(0, tf.GraphKeys.REGULARIZATION_LOSSES)
+ self._assertCollectionSize(0, tf.GraphKeys.LOSSES)
+ self._assertCollectionSize(23, tf.GraphKeys.SUMMARIES)
+
+
+if __name__ == "__main__":
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/ops/image_processing.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/ops/image_processing.py"
new file mode 100644
index 00000000..6a754554
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/ops/image_processing.py"
@@ -0,0 +1,133 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Helper functions for image preprocessing."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import tensorflow as tf
+
+
+def distort_image(image, thread_id):
+ """Perform random distortions on an image.
+
+ Args:
+ image: A float32 Tensor of shape [height, width, 3] with values in [0, 1).
+ thread_id: Preprocessing thread id used to select the ordering of color
+ distortions. There should be a multiple of 2 preprocessing threads.
+
+ Returns:
+ distorted_image: A float32 Tensor of shape [height, width, 3] with values in
+ [0, 1].
+ """
+ # Randomly flip horizontally.
+ with tf.name_scope("flip_horizontal", values=[image]):
+ image = tf.image.random_flip_left_right(image)
+
+ # Randomly distort the colors based on thread id.
+ color_ordering = thread_id % 2
+ with tf.name_scope("distort_color", values=[image]):
+ if color_ordering == 0:
+ image = tf.image.random_brightness(image, max_delta=32. / 255.)
+ image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
+ image = tf.image.random_hue(image, max_delta=0.032)
+ image = tf.image.random_contrast(image, lower=0.5, upper=1.5)
+ elif color_ordering == 1:
+ image = tf.image.random_brightness(image, max_delta=32. / 255.)
+ image = tf.image.random_contrast(image, lower=0.5, upper=1.5)
+ image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
+ image = tf.image.random_hue(image, max_delta=0.032)
+
+ # The random_* ops do not necessarily clamp.
+ image = tf.clip_by_value(image, 0.0, 1.0)
+
+ return image
+
+
+def process_image(encoded_image,
+ is_training,
+ height,
+ width,
+ resize_height=346,
+ resize_width=346,
+ thread_id=0,
+ image_format="jpeg"):
+ """Decode an image, resize and apply random distortions.
+
+ In training, images are distorted slightly differently depending on thread_id.
+
+ Args:
+ encoded_image: String Tensor containing the image.
+ is_training: Boolean; whether preprocessing for training or eval.
+ height: Height of the output image.
+ width: Width of the output image.
+ resize_height: If > 0, resize height before crop to final dimensions.
+ resize_width: If > 0, resize width before crop to final dimensions.
+ thread_id: Preprocessing thread id used to select the ordering of color
+ distortions. There should be a multiple of 2 preprocessing threads.
+ image_format: "jpeg" or "png".
+
+ Returns:
+ A float32 Tensor of shape [height, width, 3] with values in [-1, 1].
+
+ Raises:
+ ValueError: If image_format is invalid.
+ """
+ # Helper function to log an image summary to the visualizer. Summaries are
+ # only logged in thread 0.
+ def image_summary(name, image):
+ if not thread_id:
+ tf.summary.image(name, tf.expand_dims(image, 0))
+
+ # Decode image into a float32 Tensor of shape [?, ?, 3] with values in [0, 1).
+ with tf.name_scope("decode", values=[encoded_image]):
+ if image_format == "jpeg":
+ image = tf.image.decode_jpeg(encoded_image, channels=3)
+ elif image_format == "png":
+ image = tf.image.decode_png(encoded_image, channels=3)
+ else:
+ raise ValueError("Invalid image format: %s" % image_format)
+ image = tf.image.convert_image_dtype(image, dtype=tf.float32)
+ image_summary("original_image", image)
+
+ # Resize image.
+ assert (resize_height > 0) == (resize_width > 0)
+ if resize_height:
+ image = tf.image.resize_images(image,
+ size=[resize_height, resize_width],
+ method=tf.image.ResizeMethod.BILINEAR)
+
+ # Crop to final dimensions.
+ if is_training:
+ image = tf.random_crop(image, [height, width, 3])
+ else:
+ # Central crop, assuming resize_height > height, resize_width > width.
+ image = tf.image.resize_image_with_crop_or_pad(image, height, width)
+
+ image_summary("resized_image", image)
+
+ # Randomly distort the image.
+ if is_training:
+ image = distort_image(image, thread_id)
+
+ image_summary("final_image", image)
+
+ # Rescale to [-1, 1] instead of [0, 1].
+ image = tf.subtract(image, 0.5)
+ image = tf.multiply(image, 2.0)
+ return image
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/ops/inputs.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/ops/inputs.py"
new file mode 100644
index 00000000..5dc90c0c
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/ops/inputs.py"
@@ -0,0 +1,204 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Input ops."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import tensorflow as tf
+
+
+def parse_sequence_example(serialized, image_feature, caption_feature):
+ """Parses a tensorflow.SequenceExample into an image and caption.
+
+ Args:
+ serialized: A scalar string Tensor; a single serialized SequenceExample.
+ image_feature: Name of SequenceExample context feature containing image
+ data.
+ caption_feature: Name of SequenceExample feature list containing integer
+ captions.
+
+ Returns:
+ encoded_image: A scalar string Tensor containing a JPEG encoded image.
+ caption: A 1-D uint64 Tensor with dynamically specified length.
+ """
+ context, sequence = tf.parse_single_sequence_example(
+ serialized,
+ context_features={
+ image_feature: tf.FixedLenFeature([], dtype=tf.string)
+ },
+ sequence_features={
+ caption_feature: tf.FixedLenSequenceFeature([], dtype=tf.int64),
+ })
+
+ encoded_image = context[image_feature]
+ caption = sequence[caption_feature]
+ return encoded_image, caption
+
+
+def prefetch_input_data(reader,
+ file_pattern,
+ is_training,
+ batch_size,
+ values_per_shard,
+ input_queue_capacity_factor=16,
+ num_reader_threads=1,
+ shard_queue_name="filename_queue",
+ value_queue_name="input_queue"):
+ """Prefetches string values from disk into an input queue.
+
+ In training the capacity of the queue is important because a larger queue
+ means better mixing of training examples between shards. The minimum number of
+ values kept in the queue is values_per_shard * input_queue_capacity_factor,
+ where input_queue_capacity_factor should be chosen to trade off better mixing
+ with memory usage.
+
+ Args:
+ reader: Instance of tf.ReaderBase.
+ file_pattern: Comma-separated list of file patterns (e.g.
+ /tmp/train_data-?????-of-00100).
+ is_training: Boolean; whether prefetching for training or eval.
+ batch_size: Model batch size used to determine queue capacity.
+ values_per_shard: Approximate number of values per shard.
+ input_queue_capacity_factor: Minimum number of values to keep in the queue
+ in multiples of values_per_shard. See comments above.
+ num_reader_threads: Number of reader threads to fill the queue.
+ shard_queue_name: Name for the shards filename queue.
+ value_queue_name: Name for the values input queue.
+
+ Returns:
+ A Queue containing prefetched string values.
+ """
+ data_files = []
+ for pattern in file_pattern.split(","):
+ data_files.extend(tf.gfile.Glob(pattern))
+ if not data_files:
+ tf.logging.fatal("Found no input files matching %s", file_pattern)
+ else:
+ tf.logging.info("Prefetching values from %d files matching %s",
+ len(data_files), file_pattern)
+
+ if is_training:
+ filename_queue = tf.train.string_input_producer(
+ data_files, shuffle=True, capacity=16, name=shard_queue_name)
+ min_queue_examples = values_per_shard * input_queue_capacity_factor
+ capacity = min_queue_examples + 100 * batch_size
+ values_queue = tf.RandomShuffleQueue(
+ capacity=capacity,
+ min_after_dequeue=min_queue_examples,
+ dtypes=[tf.string],
+ name="random_" + value_queue_name)
+ else:
+ filename_queue = tf.train.string_input_producer(
+ data_files, shuffle=False, capacity=1, name=shard_queue_name)
+ capacity = values_per_shard + 3 * batch_size
+ values_queue = tf.FIFOQueue(
+ capacity=capacity, dtypes=[tf.string], name="fifo_" + value_queue_name)
+
+ enqueue_ops = []
+ for _ in range(num_reader_threads):
+ _, value = reader.read(filename_queue)
+ enqueue_ops.append(values_queue.enqueue([value]))
+ tf.train.queue_runner.add_queue_runner(tf.train.queue_runner.QueueRunner(
+ values_queue, enqueue_ops))
+ tf.summary.scalar(
+ "queue/%s/fraction_of_%d_full" % (values_queue.name, capacity),
+ tf.cast(values_queue.size(), tf.float32) * (1. / capacity))
+
+ return values_queue
+
+
+def batch_with_dynamic_pad(images_and_captions,
+ batch_size,
+ queue_capacity,
+ add_summaries=True):
+ """Batches input images and captions.
+
+ This function splits the caption into an input sequence and a target sequence,
+ where the target sequence is the input sequence right-shifted by 1. Input and
+ target sequences are batched and padded up to the maximum length of sequences
+ in the batch. A mask is created to distinguish real words from padding words.
+
+ Example:
+ Actual captions in the batch ('-' denotes padded character):
+ [
+ [ 1 2 3 4 5 ],
+ [ 1 2 3 4 - ],
+ [ 1 2 3 - - ],
+ ]
+
+ input_seqs:
+ [
+ [ 1 2 3 4 ],
+ [ 1 2 3 - ],
+ [ 1 2 - - ],
+ ]
+
+ target_seqs:
+ [
+ [ 2 3 4 5 ],
+ [ 2 3 4 - ],
+ [ 2 3 - - ],
+ ]
+
+ mask:
+ [
+ [ 1 1 1 1 ],
+ [ 1 1 1 0 ],
+ [ 1 1 0 0 ],
+ ]
+
+ Args:
+ images_and_captions: A list of pairs [image, caption], where image is a
+ Tensor of shape [height, width, channels] and caption is a 1-D Tensor of
+ any length. Each pair will be processed and added to the queue in a
+ separate thread.
+ batch_size: Batch size.
+ queue_capacity: Queue capacity.
+ add_summaries: If true, add caption length summaries.
+
+ Returns:
+ images: A Tensor of shape [batch_size, height, width, channels].
+ input_seqs: An int32 Tensor of shape [batch_size, padded_length].
+ target_seqs: An int32 Tensor of shape [batch_size, padded_length].
+ mask: An int32 0/1 Tensor of shape [batch_size, padded_length].
+ """
+ enqueue_list = []
+ for image, caption in images_and_captions:
+ caption_length = tf.shape(caption)[0]
+ input_length = tf.expand_dims(tf.subtract(caption_length, 1), 0)
+
+ input_seq = tf.slice(caption, [0], input_length)
+ target_seq = tf.slice(caption, [1], input_length)
+ indicator = tf.ones(input_length, dtype=tf.int32)
+ enqueue_list.append([image, input_seq, target_seq, indicator])
+
+ images, input_seqs, target_seqs, mask = tf.train.batch_join(
+ enqueue_list,
+ batch_size=batch_size,
+ capacity=queue_capacity,
+ dynamic_pad=True,
+ name="batch_and_pad")
+
+ if add_summaries:
+ lengths = tf.add(tf.reduce_sum(mask, 1), 1)
+ tf.summary.scalar("caption_length/batch_min", tf.reduce_min(lengths))
+ tf.summary.scalar("caption_length/batch_max", tf.reduce_max(lengths))
+ tf.summary.scalar("caption_length/batch_mean", tf.reduce_mean(lengths))
+
+ return images, input_seqs, target_seqs, mask
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/run_inference.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/run_inference.py"
new file mode 100644
index 00000000..9848522d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/run_inference.py"
@@ -0,0 +1,85 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+r"""Generate captions for images using default beam search parameters."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import math
+import os
+
+
+import tensorflow as tf
+
+from im2txt import configuration
+from im2txt import inference_wrapper
+from im2txt.inference_utils import caption_generator
+from im2txt.inference_utils import vocabulary
+
+FLAGS = tf.flags.FLAGS
+
+tf.flags.DEFINE_string("checkpoint_path", "",
+ "Model checkpoint file or directory containing a "
+ "model checkpoint file.")
+tf.flags.DEFINE_string("vocab_file", "", "Text file containing the vocabulary.")
+tf.flags.DEFINE_string("input_files", "",
+ "File pattern or comma-separated list of file patterns "
+ "of image files.")
+
+tf.logging.set_verbosity(tf.logging.INFO)
+
+
+def main(_):
+ # Build the inference graph.
+ g = tf.Graph()
+ with g.as_default():
+ model = inference_wrapper.InferenceWrapper()
+ restore_fn = model.build_graph_from_config(configuration.ModelConfig(),
+ FLAGS.checkpoint_path)
+ g.finalize()
+
+ # Create the vocabulary.
+ vocab = vocabulary.Vocabulary(FLAGS.vocab_file)
+
+ filenames = []
+ for file_pattern in FLAGS.input_files.split(","):
+ filenames.extend(tf.gfile.Glob(file_pattern))
+ tf.logging.info("Running caption generation on %d files matching %s",
+ len(filenames), FLAGS.input_files)
+
+ with tf.Session(graph=g) as sess:
+ # Load the model from checkpoint.
+ restore_fn(sess)
+
+ # Prepare the caption generator. Here we are implicitly using the default
+ # beam search parameters. See caption_generator.py for a description of the
+ # available beam search parameters.
+ generator = caption_generator.CaptionGenerator(model, vocab)
+
+ for filename in filenames:
+ with tf.gfile.GFile(filename, "rb") as f:
+ image = f.read()
+ captions = generator.beam_search(sess, image)
+ print("Captions for image %s:" % os.path.basename(filename))
+ for i, caption in enumerate(captions):
+ # Ignore begin and end words.
+ sentence = [vocab.id_to_word(w) for w in caption.sentence[1:-1]]
+ sentence = " ".join(sentence)
+ print(" %d) %s (p=%f)" % (i, sentence, math.exp(caption.logprob)))
+
+
+if __name__ == "__main__":
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/show_and_tell_model.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/show_and_tell_model.py"
new file mode 100644
index 00000000..0ac29e7f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/show_and_tell_model.py"
@@ -0,0 +1,358 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Image-to-text implementation based on http://arxiv.org/abs/1411.4555.
+
+"Show and Tell: A Neural Image Caption Generator"
+Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import tensorflow as tf
+
+from im2txt.ops import image_embedding
+from im2txt.ops import image_processing
+from im2txt.ops import inputs as input_ops
+
+
+class ShowAndTellModel(object):
+ """Image-to-text implementation based on http://arxiv.org/abs/1411.4555.
+
+ "Show and Tell: A Neural Image Caption Generator"
+ Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan
+ """
+
+ def __init__(self, config, mode, train_inception=False):
+ """Basic setup.
+
+ Args:
+ config: Object containing configuration parameters.
+ mode: "train", "eval" or "inference".
+ train_inception: Whether the inception submodel variables are trainable.
+ """
+ assert mode in ["train", "eval", "inference"]
+ self.config = config
+ self.mode = mode
+ self.train_inception = train_inception
+
+ # Reader for the input data.
+ self.reader = tf.TFRecordReader()
+
+ # To match the "Show and Tell" paper we initialize all variables with a
+ # random uniform initializer.
+ self.initializer = tf.random_uniform_initializer(
+ minval=-self.config.initializer_scale,
+ maxval=self.config.initializer_scale)
+
+ # A float32 Tensor with shape [batch_size, height, width, channels].
+ self.images = None
+
+ # An int32 Tensor with shape [batch_size, padded_length].
+ self.input_seqs = None
+
+ # An int32 Tensor with shape [batch_size, padded_length].
+ self.target_seqs = None
+
+ # An int32 0/1 Tensor with shape [batch_size, padded_length].
+ self.input_mask = None
+
+ # A float32 Tensor with shape [batch_size, embedding_size].
+ self.image_embeddings = None
+
+ # A float32 Tensor with shape [batch_size, padded_length, embedding_size].
+ self.seq_embeddings = None
+
+ # A float32 scalar Tensor; the total loss for the trainer to optimize.
+ self.total_loss = None
+
+ # A float32 Tensor with shape [batch_size * padded_length].
+ self.target_cross_entropy_losses = None
+
+ # A float32 Tensor with shape [batch_size * padded_length].
+ self.target_cross_entropy_loss_weights = None
+
+ # Collection of variables from the inception submodel.
+ self.inception_variables = []
+
+ # Function to restore the inception submodel from checkpoint.
+ self.init_fn = None
+
+ # Global step Tensor.
+ self.global_step = None
+
+ def is_training(self):
+ """Returns true if the model is built for training mode."""
+ return self.mode == "train"
+
+ def process_image(self, encoded_image, thread_id=0):
+ """Decodes and processes an image string.
+
+ Args:
+ encoded_image: A scalar string Tensor; the encoded image.
+ thread_id: Preprocessing thread id used to select the ordering of color
+ distortions.
+
+ Returns:
+ A float32 Tensor of shape [height, width, 3]; the processed image.
+ """
+ return image_processing.process_image(encoded_image,
+ is_training=self.is_training(),
+ height=self.config.image_height,
+ width=self.config.image_width,
+ thread_id=thread_id,
+ image_format=self.config.image_format)
+
+ def build_inputs(self):
+ """Input prefetching, preprocessing and batching.
+
+ Outputs:
+ self.images
+ self.input_seqs
+ self.target_seqs (training and eval only)
+ self.input_mask (training and eval only)
+ """
+ if self.mode == "inference":
+ # In inference mode, images and inputs are fed via placeholders.
+ image_feed = tf.placeholder(dtype=tf.string, shape=[], name="image_feed")
+ input_feed = tf.placeholder(dtype=tf.int64,
+ shape=[None], # batch_size
+ name="input_feed")
+
+ # Process image and insert batch dimensions.
+ images = tf.expand_dims(self.process_image(image_feed), 0)
+ input_seqs = tf.expand_dims(input_feed, 1)
+
+ # No target sequences or input mask in inference mode.
+ target_seqs = None
+ input_mask = None
+ else:
+ # Prefetch serialized SequenceExample protos.
+ input_queue = input_ops.prefetch_input_data(
+ self.reader,
+ self.config.input_file_pattern,
+ is_training=self.is_training(),
+ batch_size=self.config.batch_size,
+ values_per_shard=self.config.values_per_input_shard,
+ input_queue_capacity_factor=self.config.input_queue_capacity_factor,
+ num_reader_threads=self.config.num_input_reader_threads)
+
+ # Image processing and random distortion. Split across multiple threads
+ # with each thread applying a slightly different distortion.
+ assert self.config.num_preprocess_threads % 2 == 0
+ images_and_captions = []
+ for thread_id in range(self.config.num_preprocess_threads):
+ serialized_sequence_example = input_queue.dequeue()
+ encoded_image, caption = input_ops.parse_sequence_example(
+ serialized_sequence_example,
+ image_feature=self.config.image_feature_name,
+ caption_feature=self.config.caption_feature_name)
+ image = self.process_image(encoded_image, thread_id=thread_id)
+ images_and_captions.append([image, caption])
+
+ # Batch inputs.
+ queue_capacity = (2 * self.config.num_preprocess_threads *
+ self.config.batch_size)
+ images, input_seqs, target_seqs, input_mask = (
+ input_ops.batch_with_dynamic_pad(images_and_captions,
+ batch_size=self.config.batch_size,
+ queue_capacity=queue_capacity))
+
+ self.images = images
+ self.input_seqs = input_seqs
+ self.target_seqs = target_seqs
+ self.input_mask = input_mask
+
+ def build_image_embeddings(self):
+ """Builds the image model subgraph and generates image embeddings.
+
+ Inputs:
+ self.images
+
+ Outputs:
+ self.image_embeddings
+ """
+ inception_output = image_embedding.inception_v3(
+ self.images,
+ trainable=self.train_inception,
+ is_training=self.is_training())
+ self.inception_variables = tf.get_collection(
+ tf.GraphKeys.GLOBAL_VARIABLES, scope="InceptionV3")
+
+ # Map inception output into embedding space.
+ with tf.variable_scope("image_embedding") as scope:
+ image_embeddings = tf.contrib.layers.fully_connected(
+ inputs=inception_output,
+ num_outputs=self.config.embedding_size,
+ activation_fn=None,
+ weights_initializer=self.initializer,
+ biases_initializer=None,
+ scope=scope)
+
+ # Save the embedding size in the graph.
+ tf.constant(self.config.embedding_size, name="embedding_size")
+
+ self.image_embeddings = image_embeddings
+
+ def build_seq_embeddings(self):
+ """Builds the input sequence embeddings.
+
+ Inputs:
+ self.input_seqs
+
+ Outputs:
+ self.seq_embeddings
+ """
+ with tf.variable_scope("seq_embedding"), tf.device("/cpu:0"):
+ embedding_map = tf.get_variable(
+ name="map",
+ shape=[self.config.vocab_size, self.config.embedding_size],
+ initializer=self.initializer)
+ seq_embeddings = tf.nn.embedding_lookup(embedding_map, self.input_seqs)
+
+ self.seq_embeddings = seq_embeddings
+
+ def build_model(self):
+ """Builds the model.
+
+ Inputs:
+ self.image_embeddings
+ self.seq_embeddings
+ self.target_seqs (training and eval only)
+ self.input_mask (training and eval only)
+
+ Outputs:
+ self.total_loss (training and eval only)
+ self.target_cross_entropy_losses (training and eval only)
+ self.target_cross_entropy_loss_weights (training and eval only)
+ """
+ # This LSTM cell has biases and outputs tanh(new_c) * sigmoid(o), but the
+ # modified LSTM in the "Show and Tell" paper has no biases and outputs
+ # new_c * sigmoid(o).
+ lstm_cell = tf.contrib.rnn.BasicLSTMCell(
+ num_units=self.config.num_lstm_units, state_is_tuple=True)
+ if self.mode == "train":
+ lstm_cell = tf.contrib.rnn.DropoutWrapper(
+ lstm_cell,
+ input_keep_prob=self.config.lstm_dropout_keep_prob,
+ output_keep_prob=self.config.lstm_dropout_keep_prob)
+
+ with tf.variable_scope("lstm", initializer=self.initializer) as lstm_scope:
+ # Feed the image embeddings to set the initial LSTM state.
+ zero_state = lstm_cell.zero_state(
+ batch_size=self.image_embeddings.get_shape()[0], dtype=tf.float32)
+ _, initial_state = lstm_cell(self.image_embeddings, zero_state)
+
+ # Allow the LSTM variables to be reused.
+ lstm_scope.reuse_variables()
+
+ if self.mode == "inference":
+ # In inference mode, use concatenated states for convenient feeding and
+ # fetching.
+ tf.concat(axis=1, values=initial_state, name="initial_state")
+
+ # Placeholder for feeding a batch of concatenated states.
+ state_feed = tf.placeholder(dtype=tf.float32,
+ shape=[None, sum(lstm_cell.state_size)],
+ name="state_feed")
+ state_tuple = tf.split(value=state_feed, num_or_size_splits=2, axis=1)
+
+ # Run a single LSTM step.
+ lstm_outputs, state_tuple = lstm_cell(
+ inputs=tf.squeeze(self.seq_embeddings, axis=[1]),
+ state=state_tuple)
+
+ # Concatenate the resulting state.
+ tf.concat(axis=1, values=state_tuple, name="state")
+ else:
+ # Run the batch of sequence embeddings through the LSTM.
+ sequence_length = tf.reduce_sum(self.input_mask, 1)
+ lstm_outputs, _ = tf.nn.dynamic_rnn(cell=lstm_cell,
+ inputs=self.seq_embeddings,
+ sequence_length=sequence_length,
+ initial_state=initial_state,
+ dtype=tf.float32,
+ scope=lstm_scope)
+
+ # Stack batches vertically.
+ lstm_outputs = tf.reshape(lstm_outputs, [-1, lstm_cell.output_size])
+
+ with tf.variable_scope("logits") as logits_scope:
+ logits = tf.contrib.layers.fully_connected(
+ inputs=lstm_outputs,
+ num_outputs=self.config.vocab_size,
+ activation_fn=None,
+ weights_initializer=self.initializer,
+ scope=logits_scope)
+
+ if self.mode == "inference":
+ tf.nn.softmax(logits, name="softmax")
+ else:
+ targets = tf.reshape(self.target_seqs, [-1])
+ weights = tf.to_float(tf.reshape(self.input_mask, [-1]))
+
+ # Compute losses.
+ losses = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=targets,
+ logits=logits)
+ batch_loss = tf.div(tf.reduce_sum(tf.multiply(losses, weights)),
+ tf.reduce_sum(weights),
+ name="batch_loss")
+ tf.losses.add_loss(batch_loss)
+ total_loss = tf.losses.get_total_loss()
+
+ # Add summaries.
+ tf.summary.scalar("losses/batch_loss", batch_loss)
+ tf.summary.scalar("losses/total_loss", total_loss)
+ for var in tf.trainable_variables():
+ tf.summary.histogram("parameters/" + var.op.name, var)
+
+ self.total_loss = total_loss
+ self.target_cross_entropy_losses = losses # Used in evaluation.
+ self.target_cross_entropy_loss_weights = weights # Used in evaluation.
+
+ def setup_inception_initializer(self):
+ """Sets up the function to restore inception variables from checkpoint."""
+ if self.mode != "inference":
+ # Restore inception variables only.
+ saver = tf.train.Saver(self.inception_variables)
+
+ def restore_fn(sess):
+ tf.logging.info("Restoring Inception variables from checkpoint file %s",
+ self.config.inception_checkpoint_file)
+ saver.restore(sess, self.config.inception_checkpoint_file)
+
+ self.init_fn = restore_fn
+
+ def setup_global_step(self):
+ """Sets up the global step Tensor."""
+ global_step = tf.Variable(
+ initial_value=0,
+ name="global_step",
+ trainable=False,
+ collections=[tf.GraphKeys.GLOBAL_STEP, tf.GraphKeys.GLOBAL_VARIABLES])
+
+ self.global_step = global_step
+
+ def build(self):
+ """Creates all ops for training and evaluation."""
+ self.build_inputs()
+ self.build_image_embeddings()
+ self.build_seq_embeddings()
+ self.build_model()
+ self.setup_inception_initializer()
+ self.setup_global_step()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/show_and_tell_model_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/show_and_tell_model_test.py"
new file mode 100644
index 00000000..0bdfb6e1
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/show_and_tell_model_test.py"
@@ -0,0 +1,200 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for tensorflow_models.im2txt.show_and_tell_model."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import numpy as np
+import tensorflow as tf
+
+from im2txt import configuration
+from im2txt import show_and_tell_model
+
+
+class ShowAndTellModel(show_and_tell_model.ShowAndTellModel):
+ """Subclass of ShowAndTellModel without the disk I/O."""
+
+ def build_inputs(self):
+ if self.mode == "inference":
+ # Inference mode doesn't read from disk, so defer to parent.
+ return super(ShowAndTellModel, self).build_inputs()
+ else:
+ # Replace disk I/O with random Tensors.
+ self.images = tf.random_uniform(
+ shape=[self.config.batch_size, self.config.image_height,
+ self.config.image_width, 3],
+ minval=-1,
+ maxval=1)
+ self.input_seqs = tf.random_uniform(
+ [self.config.batch_size, 15],
+ minval=0,
+ maxval=self.config.vocab_size,
+ dtype=tf.int64)
+ self.target_seqs = tf.random_uniform(
+ [self.config.batch_size, 15],
+ minval=0,
+ maxval=self.config.vocab_size,
+ dtype=tf.int64)
+ self.input_mask = tf.ones_like(self.input_seqs)
+
+
+class ShowAndTellModelTest(tf.test.TestCase):
+
+ def setUp(self):
+ super(ShowAndTellModelTest, self).setUp()
+ self._model_config = configuration.ModelConfig()
+
+ def _countModelParameters(self):
+ """Counts the number of parameters in the model at top level scope."""
+ counter = {}
+ for v in tf.global_variables():
+ name = v.op.name.split("/")[0]
+ num_params = v.get_shape().num_elements()
+ assert num_params
+ counter[name] = counter.get(name, 0) + num_params
+ return counter
+
+ def _checkModelParameters(self):
+ """Verifies the number of parameters in the model."""
+ param_counts = self._countModelParameters()
+ expected_param_counts = {
+ "InceptionV3": 21802784,
+ # inception_output_size * embedding_size
+ "image_embedding": 1048576,
+ # vocab_size * embedding_size
+ "seq_embedding": 6144000,
+ # (embedding_size + num_lstm_units + 1) * 4 * num_lstm_units
+ "lstm": 2099200,
+ # (num_lstm_units + 1) * vocab_size
+ "logits": 6156000,
+ "global_step": 1,
+ }
+ self.assertDictEqual(expected_param_counts, param_counts)
+
+ def _checkOutputs(self, expected_shapes, feed_dict=None):
+ """Verifies that the model produces expected outputs.
+
+ Args:
+ expected_shapes: A dict mapping Tensor or Tensor name to expected output
+ shape.
+ feed_dict: Values of Tensors to feed into Session.run().
+ """
+ fetches = list(expected_shapes.keys())
+
+ with self.test_session() as sess:
+ sess.run(tf.global_variables_initializer())
+ outputs = sess.run(fetches, feed_dict)
+
+ for index, output in enumerate(outputs):
+ tensor = fetches[index]
+ expected = expected_shapes[tensor]
+ actual = output.shape
+ if expected != actual:
+ self.fail("Tensor %s has shape %s (expected %s)." %
+ (tensor, actual, expected))
+
+ def testBuildForTraining(self):
+ model = ShowAndTellModel(self._model_config, mode="train")
+ model.build()
+
+ self._checkModelParameters()
+
+ expected_shapes = {
+ # [batch_size, image_height, image_width, 3]
+ model.images: (32, 299, 299, 3),
+ # [batch_size, sequence_length]
+ model.input_seqs: (32, 15),
+ # [batch_size, sequence_length]
+ model.target_seqs: (32, 15),
+ # [batch_size, sequence_length]
+ model.input_mask: (32, 15),
+ # [batch_size, embedding_size]
+ model.image_embeddings: (32, 512),
+ # [batch_size, sequence_length, embedding_size]
+ model.seq_embeddings: (32, 15, 512),
+ # Scalar
+ model.total_loss: (),
+ # [batch_size * sequence_length]
+ model.target_cross_entropy_losses: (480,),
+ # [batch_size * sequence_length]
+ model.target_cross_entropy_loss_weights: (480,),
+ }
+ self._checkOutputs(expected_shapes)
+
+ def testBuildForEval(self):
+ model = ShowAndTellModel(self._model_config, mode="eval")
+ model.build()
+
+ self._checkModelParameters()
+
+ expected_shapes = {
+ # [batch_size, image_height, image_width, 3]
+ model.images: (32, 299, 299, 3),
+ # [batch_size, sequence_length]
+ model.input_seqs: (32, 15),
+ # [batch_size, sequence_length]
+ model.target_seqs: (32, 15),
+ # [batch_size, sequence_length]
+ model.input_mask: (32, 15),
+ # [batch_size, embedding_size]
+ model.image_embeddings: (32, 512),
+ # [batch_size, sequence_length, embedding_size]
+ model.seq_embeddings: (32, 15, 512),
+ # Scalar
+ model.total_loss: (),
+ # [batch_size * sequence_length]
+ model.target_cross_entropy_losses: (480,),
+ # [batch_size * sequence_length]
+ model.target_cross_entropy_loss_weights: (480,),
+ }
+ self._checkOutputs(expected_shapes)
+
+ def testBuildForInference(self):
+ model = ShowAndTellModel(self._model_config, mode="inference")
+ model.build()
+
+ self._checkModelParameters()
+
+ # Test feeding an image to get the initial LSTM state.
+ images_feed = np.random.rand(1, 299, 299, 3)
+ feed_dict = {model.images: images_feed}
+ expected_shapes = {
+ # [batch_size, embedding_size]
+ model.image_embeddings: (1, 512),
+ # [batch_size, 2 * num_lstm_units]
+ "lstm/initial_state:0": (1, 1024),
+ }
+ self._checkOutputs(expected_shapes, feed_dict)
+
+ # Test feeding a batch of inputs and LSTM states to get softmax output and
+ # LSTM states.
+ input_feed = np.random.randint(0, 10, size=3)
+ state_feed = np.random.rand(3, 1024)
+ feed_dict = {"input_feed:0": input_feed, "lstm/state_feed:0": state_feed}
+ expected_shapes = {
+ # [batch_size, 2 * num_lstm_units]
+ "lstm/state:0": (3, 1024),
+ # [batch_size, vocab_size]
+ "softmax:0": (3, 12000),
+ }
+ self._checkOutputs(expected_shapes, feed_dict)
+
+
+if __name__ == "__main__":
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/train.py"
new file mode 100644
index 00000000..db602735
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/im2txt/im2txt/train.py"
@@ -0,0 +1,114 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Train the model."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import tensorflow as tf
+
+from im2txt import configuration
+from im2txt import show_and_tell_model
+
+FLAGS = tf.app.flags.FLAGS
+
+tf.flags.DEFINE_string("input_file_pattern", "",
+ "File pattern of sharded TFRecord input files.")
+tf.flags.DEFINE_string("inception_checkpoint_file", "",
+ "Path to a pretrained inception_v3 model.")
+tf.flags.DEFINE_string("train_dir", "",
+ "Directory for saving and loading model checkpoints.")
+tf.flags.DEFINE_boolean("train_inception", False,
+ "Whether to train inception submodel variables.")
+tf.flags.DEFINE_integer("number_of_steps", 1000000, "Number of training steps.")
+tf.flags.DEFINE_integer("log_every_n_steps", 1,
+ "Frequency at which loss and global step are logged.")
+
+tf.logging.set_verbosity(tf.logging.INFO)
+
+
+def main(unused_argv):
+ assert FLAGS.input_file_pattern, "--input_file_pattern is required"
+ assert FLAGS.train_dir, "--train_dir is required"
+
+ model_config = configuration.ModelConfig()
+ model_config.input_file_pattern = FLAGS.input_file_pattern
+ model_config.inception_checkpoint_file = FLAGS.inception_checkpoint_file
+ training_config = configuration.TrainingConfig()
+
+ # Create training directory.
+ train_dir = FLAGS.train_dir
+ if not tf.gfile.IsDirectory(train_dir):
+ tf.logging.info("Creating training directory: %s", train_dir)
+ tf.gfile.MakeDirs(train_dir)
+
+ # Build the TensorFlow graph.
+ g = tf.Graph()
+ with g.as_default():
+ # Build the model.
+ model = show_and_tell_model.ShowAndTellModel(
+ model_config, mode="train", train_inception=FLAGS.train_inception)
+ model.build()
+
+ # Set up the learning rate.
+ learning_rate_decay_fn = None
+ if FLAGS.train_inception:
+ learning_rate = tf.constant(training_config.train_inception_learning_rate)
+ else:
+ learning_rate = tf.constant(training_config.initial_learning_rate)
+ if training_config.learning_rate_decay_factor > 0:
+ num_batches_per_epoch = (training_config.num_examples_per_epoch /
+ model_config.batch_size)
+ decay_steps = int(num_batches_per_epoch *
+ training_config.num_epochs_per_decay)
+
+ def _learning_rate_decay_fn(learning_rate, global_step):
+ return tf.train.exponential_decay(
+ learning_rate,
+ global_step,
+ decay_steps=decay_steps,
+ decay_rate=training_config.learning_rate_decay_factor,
+ staircase=True)
+
+ learning_rate_decay_fn = _learning_rate_decay_fn
+
+ # Set up the training ops.
+ train_op = tf.contrib.layers.optimize_loss(
+ loss=model.total_loss,
+ global_step=model.global_step,
+ learning_rate=learning_rate,
+ optimizer=training_config.optimizer,
+ clip_gradients=training_config.clip_gradients,
+ learning_rate_decay_fn=learning_rate_decay_fn)
+
+ # Set up the Saver for saving and restoring model checkpoints.
+ saver = tf.train.Saver(max_to_keep=training_config.max_checkpoints_to_keep)
+
+ # Run training.
+ tf.contrib.slim.learning.train(
+ train_op,
+ train_dir,
+ log_every_n_steps=FLAGS.log_every_n_steps,
+ graph=g,
+ global_step=model.global_step,
+ number_of_steps=FLAGS.number_of_steps,
+ init_fn=model.init_fn,
+ saver=saver)
+
+
+if __name__ == "__main__":
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/.gitignore" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/.gitignore"
new file mode 100644
index 00000000..58cbf2f4
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/.gitignore"
@@ -0,0 +1,7 @@
+/bazel-bin
+/bazel-ci_build-cache
+/bazel-genfiles
+/bazel-out
+/bazel-inception
+/bazel-testlogs
+/bazel-tf
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/README.md"
new file mode 100644
index 00000000..6ee79e31
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/README.md"
@@ -0,0 +1,854 @@
+**NOTE: For the most part, you will find a newer version of this code at [models/research/slim](https://github.com/tensorflow/models/tree/master/research/slim).** In particular:
+
+* `inception_train.py` and `imagenet_train.py` should no longer be used. The slim editions for running on multiple GPUs are the current best examples.
+* `inception_distributed_train.py` and `imagenet_distributed_train.py` are still valid examples of distributed training.
+
+For performance benchmarking, please see https://www.tensorflow.org/performance/benchmarks.
+
+---
+
+# Inception in TensorFlow
+
+[ImageNet](http://www.image-net.org/) is a common academic data set in machine
+learning for training an image recognition system. Code in this directory
+demonstrates how to use TensorFlow to train and evaluate a type of convolutional
+neural network (CNN) on this academic data set. In particular, we demonstrate
+how to train the Inception v3 architecture as specified in:
+
+_Rethinking the Inception Architecture for Computer Vision_
+
+Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Zbigniew
+Wojna
+
+http://arxiv.org/abs/1512.00567
+
+This network achieves 21.2% top-1 and 5.6% top-5 error for single frame
+evaluation, with a computational cost of 5 billion multiply-adds per inference
+and fewer than 25 million parameters. Below is a visualization of the
+model architecture.
+
+
+
+## Description of Code
+
+The code base provides three core binaries for:
+
+* Training an Inception v3 network from scratch across multiple GPUs and/or
+ multiple machines using the ImageNet 2012 Challenge training data set.
+* Evaluating an Inception v3 network using the ImageNet 2012 Challenge
+ validation data set.
+* Retraining an Inception v3 network on a novel task and back-propagating the
+ errors to fine tune the network weights.
+
+The training procedure employs synchronous stochastic gradient descent across
+multiple GPUs. The user may specify the number of GPUs they wish to harness. The
+synchronous training performs *batch-splitting* by dividing a given batch across
+multiple GPUs.
+
+The training setup is nearly identical to the section [Training a Model Using
+Multiple GPU Cards](https://www.tensorflow.org/tutorials/deep_cnn/index.html#launching_and_training_the_model_on_multiple_gpu_cards)
+where we have substituted the CIFAR-10 model architecture with Inception v3. The
+primary differences from that setup are:
+
+* Calculate and update the batch-norm statistics during training so that they
+ may be substituted in during evaluation.
+* Specify the model architecture using a (still experimental) higher level
+ language called TensorFlow-Slim.
+
+For more details about TensorFlow-Slim, please see the [Slim README](inception/slim/README.md). Please note that this higher-level language is still
+*experimental* and the API may change over time depending on usage and
+subsequent research.
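+
+For orientation, here is a minimal sketch of the Slim style of model definition.
+It is illustrative only and uses the `tf.contrib.slim` incarnation of the API
+(the bundled `inception/slim` package differs in some details); the layer names
+and shapes are arbitrary.
+
+```python
+import tensorflow as tf
+
+slim = tf.contrib.slim
+
+def tiny_net(images):
+  # Shared layer defaults are declared once via arg_scope instead of being
+  # repeated on every op, which is the main convenience Slim provides.
+  with slim.arg_scope([slim.conv2d], activation_fn=tf.nn.relu):
+    net = slim.conv2d(images, 32, [3, 3], scope="conv1")
+    net = slim.max_pool2d(net, [2, 2], scope="pool1")
+    net = slim.flatten(net)
+    return slim.fully_connected(net, 10, activation_fn=None, scope="logits")
+```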
+
+## Getting Started
+
+Before you run the training script for the first time, you will need to download
+and convert the ImageNet data to native TFRecord format. The TFRecord format
+consists of a set of sharded files where each entry is a serialized `tf.Example`
+proto. Each `tf.Example` proto contains the ImageNet image (JPEG encoded) as
+well as metadata such as label and bounding box information. See
+[`parse_example_proto`](inception/image_processing.py) for details.
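+
+If you want to sanity-check a converted shard, a minimal sketch along these
+lines can be used (the shard path is hypothetical, and the feature keys
+`image/encoded` and `image/class/label` are assumptions based on the usual
+layout produced by the conversion script):
+
+```python
+import tensorflow as tf
+
+shard = "/tmp/imagenet-data/train-00000-of-01024"  # hypothetical path
+for record in tf.python_io.tf_record_iterator(shard):
+  example = tf.train.Example.FromString(record)
+  features = example.features.feature
+  # Each record carries the JPEG-encoded image plus label metadata.
+  print("label:", list(features["image/class/label"].int64_list.value))
+  print("jpeg bytes:", len(features["image/encoded"].bytes_list.value[0]))
+  break  # inspect only the first record
+```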
+
+We provide a single [script](inception/data/download_and_preprocess_imagenet.sh) for
+downloading and converting ImageNet data to TFRecord format. Downloading and
+preprocessing the data may take several hours (up to half a day) depending on
+your network and computer speed. Please be patient.
+
+To begin, you will need to sign up for an account with [ImageNet](http://image-net.org) to gain access to the data. Look for the sign up page,
+create an account and request an access key to download the data.
+
+After you have `USERNAME` and `PASSWORD`, you are ready to run our script. Make
+sure that your hard disk has at least 500 GB of free space for downloading and
+storing the data. Here we select `DATA_DIR=$HOME/imagenet-data` as such a
+location but feel free to edit accordingly.
+
+When you run the below script, please enter *USERNAME* and *PASSWORD* when
+prompted. This will occur at the very beginning. Once these values are entered,
+you will not need to interact with the script again.
+
+```shell
+# location of where to place the ImageNet data
+DATA_DIR=$HOME/imagenet-data
+
+# build the preprocessing script.
+cd tensorflow-models/inception
+bazel build //inception:download_and_preprocess_imagenet
+
+# run it
+bazel-bin/inception/download_and_preprocess_imagenet "${DATA_DIR}"
+```
+
+The final line of the script's output should read:
+
+```shell
+2016-02-17 14:30:17.287989: Finished writing all 1281167 images in data set.
+```
+
+When the script finishes, you will find 1024 training files and 128 validation
+files in the `DATA_DIR`. The files will match the patterns
+`train-?????-of-01024` and `validation-?????-of-00128`, respectively.
+
+[Congratulations!](https://www.youtube.com/watch?v=9bZkp7q19f0) You are now
+ready to train or evaluate with the ImageNet data set.
+
+## How to Train from Scratch
+
+**WARNING** Training an Inception v3 network from scratch is a computationally
+intensive task and depending on your compute setup may take several days or even
+weeks.
+
+*Before proceeding* please read the [Convolutional Neural Networks](https://www.tensorflow.org/tutorials/deep_cnn/index.html) tutorial; in
+particular, focus on [Training a Model Using Multiple GPU Cards](https://www.tensorflow.org/tutorials/deep_cnn/index.html#launching_and_training_the_model_on_multiple_gpu_cards). The model training method is nearly identical to that described in the
+CIFAR-10 multi-GPU model training. Briefly, the model training:
+
+* Places an individual model replica on each GPU.
+* Splits the batch across the GPUs.
+* Updates model parameters synchronously by waiting for all GPUs to finish
+ processing a batch of data.
+
+The training procedure is encapsulated by this diagram of how operations and
+variables are placed on CPU and GPUs respectively.
+
+
+
+
+
+Each tower computes the gradients for a portion of the batch and the gradients
+are combined and averaged across the multiple towers in order to provide a
+single update of the Variables stored on the CPU.
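+
+The following is a rough sketch of that tower/averaging pattern (illustrative
+only; the real implementation lives in `inception_train.py`). Here `make_loss`
+is a hypothetical function that builds the per-tower loss from a slice of the
+batch, and `optimizer` is any `tf.train.Optimizer`:
+
+```python
+import tensorflow as tf
+
+def train_op_for_towers(images, labels, num_gpus, make_loss, optimizer):
+  tower_grads = []
+  image_splits = tf.split(images, num_gpus)
+  label_splits = tf.split(labels, num_gpus)
+  for i in range(num_gpus):
+    # One model replica (tower) per GPU, with variables shared across towers.
+    with tf.device("/gpu:%d" % i), tf.variable_scope(
+        tf.get_variable_scope(), reuse=(i > 0)):
+      loss = make_loss(image_splits[i], label_splits[i])
+      tower_grads.append(optimizer.compute_gradients(loss))
+  # Average each variable's gradient across towers to form a single update.
+  averaged = []
+  for grads_and_var in zip(*tower_grads):
+    grads = [g for g, _ in grads_and_var]
+    averaged.append((tf.reduce_mean(tf.stack(grads), 0), grads_and_var[0][1]))
+  return optimizer.apply_gradients(averaged)
+```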
+
+A crucial aspect of training a network of this size is *training speed* in terms
+of wall-clock time. The training speed is dictated by many factors -- most
+importantly the batch size and the learning rate schedule. Both of these
+parameters are heavily coupled to the hardware setup.
+
+Generally speaking, the batch size is a difficult parameter to tune as it
+requires balancing the memory demands of the model, the memory available on the
+GPU, and the speed of computation. Employing larger batch sizes generally leads
+to more efficient computation and potentially more efficient training steps.
+
+We have tested several hardware setups for training this model from scratch but
+we emphasize that depending on your hardware setup, you may need to adapt the
+batch size and learning rate schedule.
+
+Please see the comments in `inception_train.py` for a few selected learning rate
+plans based on some selected hardware setups.
+
+To train this model, you simply need to specify the following:
+
+```shell
+# Build the model. Note that we need to make sure the TensorFlow is ready to
+# use before this as this command will not build TensorFlow.
+cd tensorflow-models/inception
+bazel build //inception:imagenet_train
+
+# run it
+bazel-bin/inception/imagenet_train --num_gpus=1 --batch_size=32 --train_dir=/tmp/imagenet_train --data_dir=/tmp/imagenet_data
+```
+
+The model reads in the ImageNet training data from `--data_dir`. If you followed
+the instructions in [Getting Started](#getting-started), then set
+`--data_dir="${DATA_DIR}"`. The script assumes that there exists a set of
+sharded TFRecord files containing the ImageNet data. If you have not created
+TFRecord files, please refer to [Getting Started](#getting-started).
+
+Here is the output of the above command line when running on a Tesla K40c:
+
+```shell
+2016-03-07 12:24:59.922898: step 0, loss = 13.11 (5.3 examples/sec; 6.064 sec/batch)
+2016-03-07 12:25:55.206783: step 10, loss = 13.71 (9.4 examples/sec; 3.394 sec/batch)
+2016-03-07 12:26:28.905231: step 20, loss = 14.81 (9.5 examples/sec; 3.380 sec/batch)
+2016-03-07 12:27:02.699719: step 30, loss = 14.45 (9.5 examples/sec; 3.378 sec/batch)
+2016-03-07 12:27:36.515699: step 40, loss = 13.98 (9.5 examples/sec; 3.376 sec/batch)
+2016-03-07 12:28:10.220956: step 50, loss = 13.92 (9.6 examples/sec; 3.327 sec/batch)
+2016-03-07 12:28:43.658223: step 60, loss = 13.28 (9.6 examples/sec; 3.350 sec/batch)
+...
+```
+
+In this example, a log entry is printed every 10 steps, and each line includes
+the total loss (starting around 13.0-14.0) and the speed of processing in terms
+of throughput (examples/sec) and batch speed (sec/batch).
+
+The number of GPU devices is specified by `--num_gpus` (which defaults to 1).
+Specifying `--num_gpus` greater than 1 splits the batch evenly across the
+GPU cards.
+
+```shell
+# Build the model. Note that we need to make sure the TensorFlow is ready to
+# use before this as this command will not build TensorFlow.
+cd tensorflow-models/inception
+bazel build //inception:imagenet_train
+
+# run it
+bazel-bin/inception/imagenet_train --num_gpus=2 --batch_size=64 --train_dir=/tmp/imagenet_train
+```
+
+This model splits the batch of 64 images across 2 GPUs and calculates the
+average gradient by waiting for both GPUs to finish calculating the gradients
+from their respective data (see the diagram above). Generally speaking, using
+more GPUs leads to higher throughput as well as the opportunity to use larger
+batch sizes. In turn, larger batch sizes imply better estimates of the gradient,
+enabling the use of higher learning rates. In summary, using more GPUs simply
+results in faster training.
+
+As noted above, the batch size is a difficult parameter to tune since it requires
+balancing the memory demands of the model, the memory available on the GPU, and
+the speed of computation; larger batch sizes generally lead to more efficient
+computation and potentially more efficient training steps.
+
+Note that there is considerable noise in the loss function on individual steps
+in the previous log. Because of this noise, it is difficult to discern how well
+a model is learning. The solution is to launch TensorBoard pointing to the
+directory containing the event logs.
+
+```shell
+tensorboard --logdir=/tmp/imagenet_train
+```
+
+TensorBoard has access to the many summaries produced by the model, which track
+statistics describing the model's behavior and the quality of the learned model.
+In particular, TensorBoard tracks an exponentially smoothed version of the loss.
+In practice, it is far easier to judge how well a model is learning by monitoring
+this smoothed version of the loss.
+
+## How to Train from Scratch in a Distributed Setting
+
+**NOTE** Distributed TensorFlow requires version 0.8 or later.
+
+Distributed TensorFlow lets us use multiple machines to train a model faster.
+This is quite different from the training with multiple GPU towers on a single
+machine where all parameters and gradients computation are in the same place. We
+coordinate the computation across multiple machines by employing a centralized
+repository for parameters that maintains a unified, single copy of model
+parameters. Each individual machine sends gradient updates to the centralized
+parameter repository which coordinates these updates and sends back updated
+parameters to the individual machines running the model training.
+
+We term each machine that runs a copy of the training a `worker` or `replica`.
+We term each machine that maintains model parameters a `ps`, short for
+`parameter server`. Note that we might have more than one machine acting as a
+`ps` as the model parameters may be sharded across multiple machines.
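+
+In TensorFlow terms, the flags described below map onto a cluster specification
+and an in-process server. A minimal sketch, using the same placeholder hostnames
+as the commands below (the real wiring lives in
+`inception_distributed_train.py`):
+
+```python
+import tensorflow as tf
+
+cluster = tf.train.ClusterSpec({
+    "ps": ["ps0.example.com:2222"],
+    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
+})
+server = tf.train.Server(cluster, job_name="worker", task_index=0)
+
+# Pin variables to the ps task(s) and computation to this worker.
+with tf.device(tf.train.replica_device_setter(
+    worker_device="/job:worker/task:0", cluster=cluster)):
+  pass  # build the model here
+```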
+
+Variables may be updated with synchronous or asynchronous gradient updates. One
+may construct an [`Optimizer`](https://www.tensorflow.org/api_docs/python/train.html#optimizers) in TensorFlow
+that constructs the necessary graph for either case diagrammed below from the
+TensorFlow [Whitepaper](http://download.tensorflow.org/paper/whitepaper2015.pdf):
+
+
+
+
+
+In [a recent paper](https://arxiv.org/abs/1604.00981), synchronous gradient
+updates have been demonstrated to reach higher accuracy in a shorter amount of time.
+In this distributed Inception example we employ synchronous gradient updates.
+
+Note that in this example each replica has a single tower that uses one GPU.
+
+The command-line flags `worker_hosts` and `ps_hosts` specify available servers.
+The same binary will be used for both the `worker` jobs and the `ps` jobs.
+Command line flag `job_name` will be used to specify what role a task will be
+playing and `task_id` will be used to identify which one of the jobs it is
+running. Several things to note here:
+
+* The numbers of `ps` and `worker` tasks are inferred from the lists of hosts
+ specified in the flags. The `task_id` should be within the range `[0,
+ num_ps_tasks)` for `ps` tasks and `[0, num_worker_tasks)` for `worker`
+ tasks.
+* `ps` and `worker` tasks can run on the same machine, as long as that machine
+ has sufficient resources to handle both tasks. Note that the `ps` task does
+ not benefit from a GPU, so it should not attempt to use one (see below).
+* Multiple `worker` tasks can run on the same machine with multiple GPUs so
+ machine_A with 2 GPUs may have 2 workers while machine_B with 1 GPU just has
+ 1 worker.
+* The default learning rate schedule works well for a wide range of replica
+ counts [25, 50, 100], but feel free to tune it for even better results.
+* The command line of both `ps` and `worker` tasks should include the complete
+ list of `ps_hosts` and `worker_hosts`.
+* There is a chief `worker` among all workers which defaults to `worker` 0.
+ The chief will be in charge of initializing all the parameters, writing out
+ the summaries and the checkpoint. The checkpoint and summary will be in the
+ `train_dir` of the host for `worker` 0.
+* Each worker processes a batch_size number of examples but each gradient
+ update is computed from all replicas. Hence, the effective batch size of
+ this model is batch_size * num_workers.
+
+```shell
+# Build the model. Note that TensorFlow must already be installed and working,
+# as this command will not build TensorFlow itself.
+cd tensorflow-models/inception
+bazel build //inception:imagenet_distributed_train
+
+# To start worker 0, go to the worker0 host and run the following (Note that
+# task_id should be in the range [0, num_worker_tasks):
+bazel-bin/inception/imagenet_distributed_train \
+--batch_size=32 \
+--data_dir=$HOME/imagenet-data \
+--job_name='worker' \
+--task_id=0 \
+--ps_hosts='ps0.example.com:2222' \
+--worker_hosts='worker0.example.com:2222,worker1.example.com:2222'
+
+# To start worker 1, go to the worker1 host and run the following (Note that
+# task_id should be in the range [0, num_worker_tasks):
+bazel-bin/inception/imagenet_distributed_train \
+--batch_size=32 \
+--data_dir=$HOME/imagenet-data \
+--job_name='worker' \
+--task_id=1 \
+--ps_hosts='ps0.example.com:2222' \
+--worker_hosts='worker0.example.com:2222,worker1.example.com:2222'
+
+# To start the parameter server (ps), go to the ps host and run the following (Note
+# that task_id should be in the range [0, num_ps_tasks):
+bazel-bin/inception/imagenet_distributed_train \
+--job_name='ps' \
+--task_id=0 \
+--ps_hosts='ps0.example.com:2222' \
+--worker_hosts='worker0.example.com:2222,worker1.example.com:2222'
+```
+
+If you have installed a GPU-compatible version of TensorFlow, the `ps` task
+will also try to allocate GPU memory even though it does not benefit from it.
+This could crash a worker running on the same machine, since it would be left
+with little or no GPU memory to allocate. To avoid this, prepend
+`CUDA_VISIBLE_DEVICES=''` to the command that starts the `ps` task:
+
+```shell
+CUDA_VISIBLE_DEVICES='' bazel-bin/inception/imagenet_distributed_train \
+--job_name='ps' \
+--task_id=0 \
+--ps_hosts='ps0.example.com:2222' \
+--worker_hosts='worker0.example.com:2222,worker1.example.com:2222'
+```
+
+If you have run everything correctly, you should see a log in each `worker` job
+that looks like the following. Note that the training speed varies depending on
+your hardware, and the first several steps could take much longer.
+
+```shell
+INFO:tensorflow:PS hosts are: ['ps0.example.com:2222', 'ps1.example.com:2222']
+INFO:tensorflow:Worker hosts are: ['worker0.example.com:2222', 'worker1.example.com:2222']
+I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:206] Initialize HostPortsGrpcChannelCache for job ps -> {ps0.example.com:2222, ps1.example.com:2222}
+I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:206] Initialize HostPortsGrpcChannelCache for job worker -> {localhost:2222, worker1.example.com:2222}
+I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:202] Started server with target: grpc://localhost:2222
+INFO:tensorflow:Created variable global_step:0 with shape () and init
+
+...
+
+INFO:tensorflow:Created variable logits/logits/biases:0 with shape (1001,) and init
+INFO:tensorflow:SyncReplicas enabled: replicas_to_aggregate=2; total_num_replicas=2
+INFO:tensorflow:2016-04-13 01:56:26.405639 Supervisor
+INFO:tensorflow:Started 2 queues for processing input data.
+INFO:tensorflow:global_step/sec: 0
+INFO:tensorflow:Worker 0: 2016-04-13 01:58:40.342404: step 0, loss = 12.97(0.0 examples/sec; 65.428 sec/batch)
+INFO:tensorflow:global_step/sec: 0.0172907
+...
+```
+
+and a log in each `ps` job that looks like the following:
+
+```shell
+INFO:tensorflow:PS hosts are: ['ps0.example.com:2222', 'ps1.example.com:2222']
+INFO:tensorflow:Worker hosts are: ['worker0.example.com:2222', 'worker1.example.com:2222']
+I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:206] Initialize HostPortsGrpcChannelCache for job ps -> {localhost:2222, ps1.example.com:2222}
+I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:206] Initialize HostPortsGrpcChannelCache for job worker -> {worker0.example.com:2222, worker1.example.com:2222}
+I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:202] Started server with target: grpc://localhost:2222
+```
+
+If you compiled TensorFlow (v1.1-rc3 or later) with VERBS support and you have
+the required hardware and InfiniBand verbs software stack, you can specify
+`--protocol='grpc+verbs'` in order to use Verbs RDMA for tensor passing between
+workers and `ps` tasks. The `--protocol` flag must be added to all tasks (both
+`ps` and workers). The default protocol is grpc.
+
+
+[Congratulations!](https://www.youtube.com/watch?v=9bZkp7q19f0) You are now
+training Inception in a distributed manner.
+
+## How to Evaluate
+
+Evaluating an Inception v3 model on the ImageNet 2012 validation data set
+requires running a separate binary.
+
+The evaluation procedure is nearly identical to [Evaluating a Model](https://www.tensorflow.org/tutorials/deep_cnn/index.html#evaluating_a_model)
+described in the [Convolutional Neural Network](https://www.tensorflow.org/tutorials/deep_cnn/index.html) tutorial.
+
+**WARNING** Be careful not to run the evaluation and training binary on the same
+GPU or else you might run out of memory. Consider running the evaluation on a
+separate GPU if available or suspending the training binary while running the
+evaluation on the same GPU.
+
+Briefly, one can evaluate the model by running:
+
+```shell
+# Build the model. Note that TensorFlow must already be installed and working,
+# as this command will not build TensorFlow itself.
+cd tensorflow-models/inception
+bazel build //inception:imagenet_eval
+
+# run it
+bazel-bin/inception/imagenet_eval --checkpoint_dir=/tmp/imagenet_train --eval_dir=/tmp/imagenet_eval
+```
+
+Note that we point `--checkpoint_dir` to the location of the checkpoints saved
+by `inception_train.py` above. Running the above command results in the
+following output:
+
+```shell
+2016-02-17 22:32:50.391206: precision @ 1 = 0.735
+...
+```
+
+The script periodically calculates the precision @ 1 over the entire validation
+data. Precision @ 1 measures how often the model's highest scoring prediction
+matches the ImageNet label -- in this case, 73.5%. If you wish to run the eval
+just once rather than periodically, append the `--run_once` option.
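+
+For reference, precision @ 1 is simply top-1 accuracy; a minimal sketch with
+made-up numbers:
+
+```python
+import numpy as np
+
+logits = np.array([[0.1, 2.3, 0.5],   # predicted class 1
+                   [1.9, 0.2, 0.7],   # predicted class 0
+                   [0.3, 0.1, 4.2]])  # predicted class 2
+labels = np.array([1, 2, 2])
+
+precision_at_1 = np.mean(np.argmax(logits, axis=1) == labels)
+print(precision_at_1)  # 2 of 3 top predictions match -> 0.666...
+```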
+
+Much like the training script, `imagenet_eval.py` also exports summaries that
+may be visualized in TensorBoard. These summaries calculate additional
+statistics on the predictions (e.g. recall @ 5) as well as monitor the
+statistics of the model activations and weights during evaluation.
+
+## How to Fine-Tune a Pre-Trained Model on a New Task
+
+### Getting Started
+
+Much like training the ImageNet model, we must first convert a new data set to
+the sharded TFRecord format in which each entry is a serialized `tf.Example`
+proto.
+
+We have provided a script demonstrating how to do this for a small data set of
+a few thousand flower images spread across 5 labels:
+
+```shell
+daisy, dandelion, roses, sunflowers, tulips
+```
+
+There is a single automated script that downloads the data set and converts it
+to the TFRecord format. Much like the ImageNet data set, each record in the
+TFRecord format is a serialized `tf.Example` proto whose entries include a
+JPEG-encoded string and an integer label. Please see [`parse_example_proto`](inception/image_processing.py) for details.
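+
+As a rough illustration of how such a record is consumed (the canonical parsing
+code is `parse_example_proto` in [`image_processing.py`](inception/image_processing.py);
+the feature keys below follow the format described above):
+
+```python
+import tensorflow as tf
+
+def parse_record(serialized_example):
+  """Decode one serialized tf.Example produced by the conversion script."""
+  features = tf.parse_single_example(
+      serialized_example,
+      features={
+          'image/encoded': tf.FixedLenFeature([], tf.string),
+          'image/class/label': tf.FixedLenFeature([], tf.int64),
+      })
+  image = tf.image.decode_jpeg(features['image/encoded'], channels=3)
+  return image, features['image/class/label']
+```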
+
+The script takes just a few minutes to run, depending on your network
+connection speed for downloading and processing the images. You will need about
+200MB of free disk space. Here we select `DATA_DIR=/tmp/flowers-data/` as such
+a location, but feel free to edit accordingly.
+
+```shell
+# location of where to place the flowers data
+FLOWERS_DATA_DIR=/tmp/flowers-data/
+
+# build the preprocessing script.
+cd tensorflow-models/inception
+bazel build //inception:download_and_preprocess_flowers
+
+# run it
+bazel-bin/inception/download_and_preprocess_flowers "${FLOWERS_DATA_DIR}"
+```
+
+If the script runs successfully, the final line of the terminal output should
+look like:
+
+```shell
+2016-02-24 20:42:25.067551: Finished writing all 3170 images in data set.
+```
+
+When the script finishes you will find 2 shards for the training and validation
+files in the `DATA_DIR`. The files will match the patterns `train-?????-of-00002`
+and `validation-?????-of-00002`, respectively.
+
+**NOTE** If you wish to prepare a custom image data set for transfer learning,
+you will need to invoke [`build_image_data.py`](inception/data/build_image_data.py) on
+your custom data set. Please read the comments in
+[`build_image_data.py`](inception/data/build_image_data.py) for the associated
+options and assumptions behind this script. Also, if your custom data has a
+different number of examples or classes, you need to change the appropriate
+values in [`imagenet_data.py`](inception/imagenet_data.py).
+
+The second piece you will need is a trained Inception v3 image model. You have
+the option of either training one yourself (see [How to Train from Scratch](#how-to-train-from-scratch) for details) or downloading a pre-trained
+model like so:
+
+```shell
+# location of where to place the Inception v3 model
+INCEPTION_MODEL_DIR=$HOME/inception-v3-model
+mkdir -p ${INCEPTION_MODEL_DIR}
+cd ${INCEPTION_MODEL_DIR}
+
+# download the Inception v3 model
+curl -O http://download.tensorflow.org/models/image/imagenet/inception-v3-2016-03-01.tar.gz
+tar xzf inception-v3-2016-03-01.tar.gz
+
+# this will create a directory called inception-v3 which contains the following files.
+> ls inception-v3
+README.txt
+checkpoint
+model.ckpt-157585
+```
+
+[Congratulations!](https://www.youtube.com/watch?v=9bZkp7q19f0) You are now
+ready to fine-tune your pre-trained Inception v3 model with the flower data set.
+
+### How to Retrain a Trained Model on the Flowers Data
+
+We are now ready to fine-tune a pre-trained Inception-v3 model on the flowers
+data set. This requires two distinct changes to our training procedure:
+
+1. Build the exact same model as previously except we change the number of
+ labels in the final classification layer.
+
+2. Restore all weights from the pre-trained Inception-v3 except for the final
+ classification layer; this will get randomly initialized instead.
+
+We can perform these two operations by specifying two flags:
+`--pretrained_model_checkpoint_path` and `--fine_tune`. The first flag is a
+string that points to the path of a pre-trained Inception-v3 model. If this flag
+is specified, it will load the entire model from the checkpoint before the
+script begins training.
+
+The second flag `--fine_tune` is a boolean that indicates whether the last
+classification layer should be randomly initialized or restored. You may set
+this flag to false if you wish to continue training a pre-trained model from a
+checkpoint. If you set this flag to true, you can train a new classification
+layer from scratch.
+
+In order to understand how `--fine_tune` works, please see the discussion on
+`Variables` in the TensorFlow-Slim [`README.md`](inception/slim/README.md).
+
+Putting this all together you can retrain a pre-trained Inception-v3 model on
+the flowers data set with the following command.
+
+```shell
+# Build the model. Note that TensorFlow must already be installed and working,
+# as this command will not build TensorFlow itself.
+cd tensorflow-models/inception
+bazel build //inception:flowers_train
+
+# Path to the downloaded Inception-v3 model.
+MODEL_PATH="${INCEPTION_MODEL_DIR}/inception-v3/model.ckpt-157585"
+
+# Directory where the flowers data resides.
+FLOWERS_DATA_DIR=/tmp/flowers-data/
+
+# Directory where to save the checkpoint and events files.
+TRAIN_DIR=/tmp/flowers_train/
+
+# Run the fine-tuning on the flowers data set starting from the pre-trained
+# Inception-v3 model.
+bazel-bin/inception/flowers_train \
+ --train_dir="${TRAIN_DIR}" \
+ --data_dir="${FLOWERS_DATA_DIR}" \
+ --pretrained_model_checkpoint_path="${MODEL_PATH}" \
+ --fine_tune=True \
+ --initial_learning_rate=0.001 \
+ --input_queue_memory_factor=1
+```
+
+We have added a few extra options to the training procedure.
+
+* Fine-tuning a model on a separate data set requires significantly lowering
+  the initial learning rate. We set the initial learning rate to 0.001.
+* The flowers data set is quite small so we shrink the size of the shuffling
+ queue of examples. See [Adjusting Memory Demands](#adjusting-memory-demands)
+ for more details.
+
+The training script only reports the loss. To evaluate the quality of the
+fine-tuned model, you will need to run `flowers_eval`:
+
+```shell
+# Build the model. Note that TensorFlow must already be installed and working,
+# as this command will not build TensorFlow itself.
+cd tensorflow-models/inception
+bazel build //inception:flowers_eval
+
+# Directory where we saved the fine-tuned checkpoint and events files.
+TRAIN_DIR=/tmp/flowers_train/
+
+# Directory where the flowers data resides.
+FLOWERS_DATA_DIR=/tmp/flowers-data/
+
+# Directory where to save the evaluation events files.
+EVAL_DIR=/tmp/flowers_eval/
+
+# Evaluate the fine-tuned model on a hold-out of the flower data set.
+bazel-bin/inception/flowers_eval \
+ --eval_dir="${EVAL_DIR}" \
+ --data_dir="${FLOWERS_DATA_DIR}" \
+ --subset=validation \
+ --num_examples=500 \
+ --checkpoint_dir="${TRAIN_DIR}" \
+ --input_queue_memory_factor=1 \
+ --run_once
+```
+
+We find that the evaluation arrives at roughly 93.4% precision@1 after the model
+has been running for 2000 steps.
+
+```shell
+Successfully loaded model from /tmp/flowers/model.ckpt-1999 at step=1999.
+2016-03-01 16:52:51.761219: starting evaluation on (validation).
+2016-03-01 16:53:05.450419: [20 batches out of 20] (36.5 examples/sec; 0.684sec/batch)
+2016-03-01 16:53:05.450471: precision @ 1 = 0.9340 recall @ 5 = 0.9960 [500 examples]
+```
+
+## How to Construct a New Dataset for Retraining
+
+One can use the existing scripts supplied with this model to build a new dataset
+for training or fine-tuning. The main script to employ is
+[`build_image_data.py`](inception/data/build_image_data.py). Briefly, this script takes a
+structured directory of images and converts it to a sharded `TFRecord` that can
+be read by the Inception model.
+
+In particular, you will need to create a directory of training images that
+reside within `$TRAIN_DIR` and `$VALIDATION_DIR` arranged as such:
+
+```shell
+ $TRAIN_DIR/dog/image0.jpeg
+ $TRAIN_DIR/dog/image1.jpg
+ $TRAIN_DIR/dog/image2.png
+ ...
+ $TRAIN_DIR/cat/weird-image.jpeg
+ $TRAIN_DIR/cat/my-image.jpeg
+ $TRAIN_DIR/cat/my-image.JPG
+ ...
+ $VALIDATION_DIR/dog/imageA.jpeg
+ $VALIDATION_DIR/dog/imageB.jpg
+ $VALIDATION_DIR/dog/imageC.png
+ ...
+ $VALIDATION_DIR/cat/weird-image.PNG
+ $VALIDATION_DIR/cat/that-image.jpg
+ $VALIDATION_DIR/cat/cat.JPG
+ ...
+```
+**NOTE**: This script will append an extra background class indexed at 0, so
+your class labels will range from 0 to num_labels. Using the example above, the
+corresponding class labels generated from `build_image_data.py` will be as
+follows:
+```shell
+0
+1 dog
+2 cat
+```
+
+Each sub-directory in `$TRAIN_DIR` and `$VALIDATION_DIR` corresponds to a
+unique label for the images that reside within that sub-directory. The images
+may be JPEG or PNG images. Other image types are not currently supported.
+
+Once the data is arranged in this directory structure, we can run
+`build_image_data.py` on the data to generate the sharded `TFRecord` dataset.
+Each entry of the `TFRecord` is a serialized `tf.Example` protocol buffer. A
+complete list of information contained in the `tf.Example` is described in the
+comments of `build_image_data.py`.
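+
+As a quick sanity check after conversion, you can read one record back directly
+and print a few of its features; a minimal sketch (the shard path below is
+illustrative):
+
+```python
+import tensorflow as tf
+
+# Read the first record of one shard produced by build_image_data.py.
+path = '/tmp/flowers-data/train-00000-of-00002'  # illustrative path
+record = next(tf.python_io.tf_record_iterator(path))
+example = tf.train.Example.FromString(record)
+
+features = example.features.feature
+print(features['image/class/text'].bytes_list.value[0])   # e.g. b'daisy'
+print(features['image/class/label'].int64_list.value[0])  # e.g. 1
+print(features['image/height'].int64_list.value[0],
+      features['image/width'].int64_list.value[0])
+```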
+
+To run `build_image_data.py`, you can run the following command line:
+
+```shell
+# location to where to save the TFRecord data.
+OUTPUT_DIRECTORY=$HOME/my-custom-data/
+
+# build the preprocessing script.
+cd tensorflow-models/inception
+bazel build //inception:build_image_data
+
+# convert the data.
+bazel-bin/inception/build_image_data \
+ --train_directory="${TRAIN_DIR}" \
+ --validation_directory="${VALIDATION_DIR}" \
+ --output_directory="${OUTPUT_DIRECTORY}" \
+ --labels_file="${LABELS_FILE}" \
+ --train_shards=128 \
+ --validation_shards=24 \
+ --num_threads=8
+```
+
+where `$OUTPUT_DIRECTORY` is the location of the sharded `TFRecords`.
+`$LABELS_FILE` is a text file read by the script that provides a list of all of
+the labels. For instance, in the case of the flowers data set, the
+`$LABELS_FILE` contained the following data:
+
+```shell
+daisy
+dandelion
+roses
+sunflowers
+tulips
+```
+
+Note that each row of the labels file corresponds to an entry in the final
+classifier of the model. That is, `daisy` corresponds to classifier entry `1`,
+`dandelion` to entry `2`, and so on. Label `0` is skipped and reserved as a
+background class.
+
+Running this script produces files that look like the following:
+
+```shell
+ $TRAIN_DIR/train-00000-of-00128
+ $TRAIN_DIR/train-00001-of-00128
+ ...
+ $TRAIN_DIR/train-00127-of-00128
+
+and
+
+ $VALIDATION_DIR/validation-00000-of-00024
+ $VALIDATION_DIR/validation-00001-of-00024
+ ...
+ $VALIDATION_DIR/validation-00023-of-00024
+```
+
+where 128 and 24 are the numbers of shards specified for each dataset,
+respectively. Generally speaking, we aim to select the number of shards such
+that roughly 1024 images reside in each shard. Once this data set is built, you
+are ready to train or fine-tune an Inception model on it.
+
+Note: if you are piggybacking on the flowers retraining scripts, be sure to
+update `num_classes()` and `num_examples_per_epoch()` in `flowers_data.py` to
+correspond with your data, as sketched below.
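+
+A minimal sketch of the kind of edit this refers to, assuming a hypothetical
+custom data set with 10 classes, 8000 training images and 1000 validation
+images (the method names mirror those in `flowers_data.py`; the class and all
+numbers below are placeholders):
+
+```python
+class MyData(object):
+  """Placeholder mirroring the methods you would edit in flowers_data.py."""
+
+  def __init__(self, subset):
+    self.subset = subset
+
+  def num_classes(self):
+    """Returns the number of classes in the data set."""
+    return 10        # placeholder: label count of your data set
+
+  def num_examples_per_epoch(self):
+    """Returns the number of examples in the data subset."""
+    if self.subset == 'train':
+      return 8000    # placeholder: number of training images
+    if self.subset == 'validation':
+      return 1000    # placeholder: number of validation images
+```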
+
+## Practical Considerations for Training a Model
+
+The model architecture and training procedure are heavily dependent on the
+hardware used to train the model. If you wish to train or fine-tune this model
+on your machine **you will need to adjust and empirically determine a good set
+of training hyper-parameters for your setup**. What follows are some general
+considerations for novices.
+
+### Finding Good Hyperparameters
+
+Roughly 5-10 hyper-parameters govern the speed at which a network is trained. In
+addition to `--batch_size` and `--num_gpus`, there are several constants defined
+in [inception_train.py](inception/inception_train.py) which dictate the learning
+schedule.
+
+```shell
+RMSPROP_DECAY = 0.9 # Decay term for RMSProp.
+MOMENTUM = 0.9 # Momentum in RMSProp.
+RMSPROP_EPSILON = 1.0 # Epsilon term for RMSProp.
+INITIAL_LEARNING_RATE = 0.1 # Initial learning rate.
+NUM_EPOCHS_PER_DECAY = 30.0 # Epochs after which learning rate decays.
+LEARNING_RATE_DECAY_FACTOR = 0.16 # Learning rate decay factor.
+```
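+
+These constants feed a standard staircase exponential-decay schedule; a minimal
+sketch of the relationship, with illustrative batch and data set sizes (the
+exact construction lives in `inception_train.py`):
+
+```python
+import tensorflow as tf
+
+INITIAL_LEARNING_RATE = 0.1
+LEARNING_RATE_DECAY_FACTOR = 0.16
+NUM_EPOCHS_PER_DECAY = 30.0
+
+batch_size = 32                   # illustrative
+num_examples_per_epoch = 1281167  # ImageNet training set size
+num_batches_per_epoch = num_examples_per_epoch / batch_size
+decay_steps = int(num_batches_per_epoch * NUM_EPOCHS_PER_DECAY)
+
+# Multiply the learning rate by 0.16 every 30 epochs worth of steps.
+global_step = tf.train.get_or_create_global_step()
+lr = tf.train.exponential_decay(INITIAL_LEARNING_RATE, global_step,
+                                decay_steps, LEARNING_RATE_DECAY_FACTOR,
+                                staircase=True)
+```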
+
+There are many papers that discuss the various tricks and trade-offs associated
+with training a model with stochastic gradient descent. For those new to the
+field, some great references are:
+
+* Y Bengio, [Practical recommendations for gradient-based training of deep
+ architectures](http://arxiv.org/abs/1206.5533)
+* I Goodfellow, Y Bengio and A Courville,
+  [Deep Learning](http://www.deeplearningbook.org/)
+
+What follows is a summary of some general advice for identifying appropriate
+model hyper-parameters in the context of this particular model training setup.
+Namely, this library provides *synchronous* updates to model parameters based on
+batch-splitting the model across multiple GPUs.
+
+* Higher learning rates lead to faster training. Too high a learning rate
+  leads to instability and will cause the model parameters to diverge to
+  infinity or NaN.
+
+* Larger batch sizes lead to higher quality estimates of the gradient and
+ permit training the model with higher learning rates.
+
+* Often the GPU memory is a bottleneck that prevents employing larger batch
+ sizes. Employing more GPUs allows one to use larger batch sizes because
+ this model splits the batch across the GPUs.
+
+**NOTE** If one wishes to train this model with *asynchronous* gradient updates,
+one will need to substantially alter this model and new considerations need to
+be factored into hyperparameter tuning. See [Large Scale Distributed Deep
+Networks](http://research.google.com/archive/large_deep_networks_nips2012.html)
+for a discussion in this domain.
+
+### Adjusting Memory Demands
+
+Training this model has large memory demands in terms of the CPU and GPU. Let's
+discuss each item in turn.
+
+GPU memory is relatively small compared to CPU memory. Two items dictate the
+amount of GPU memory employed -- model architecture and batch size. Assuming
+that you keep the model architecture fixed, the sole parameter governing the GPU
+demand is the batch size. A good rule of thumb is to employ as large a batch
+size as will fit on the GPU.
+
+If you run out of GPU memory, either lower the `--batch_size` or employ more
+GPUs on your desktop. The model performs batch-splitting across GPUs, thus N
+GPUs can handle N times the batch size of 1 GPU.
+
+The model requires a large amount of CPU memory as well. We have tuned the
+model to employ roughly 20GB of CPU memory. Thus, having access to about 40GB
+of CPU memory would be ideal.
+
+If that is not possible, you can reduce the memory demands of the model by
+lowering `--input_queue_memory_factor`. Images are preprocessed asynchronously
+with respect to the main training across `--num_preprocess_threads` threads.
+The preprocessed images are stored in a shuffling queue from which each GPU
+performs a dequeue operation to receive a `batch_size` worth of images.
+
+In order to guarantee good shuffling across the data, we maintain a large
+shuffling queue of 1024 x `input_queue_memory_factor` images. For the current
+model architecture, this corresponds to about 4GB of CPU memory. You may lower
+`input_queue_memory_factor` in order to decrease the memory footprint. Keep in
+mind though that lowering this value drastically may result in a model with
+slightly lower predictive accuracy when training from scratch. Please see
+comments in [`image_processing.py`](inception/image_processing.py) for more details.
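+
+A minimal sketch of the relationship between the flag and the queue size,
+assuming a default factor of 16 (the memory figure above is approximate):
+
+```python
+examples_per_shard = 1024
+input_queue_memory_factor = 16  # assumed default; lower this to save memory
+
+min_queue_examples = examples_per_shard * input_queue_memory_factor
+print('shuffling queue holds ~%d preprocessed images' % min_queue_examples)
+```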
+
+## Troubleshooting
+
+#### The model runs out of CPU memory.
+
+In lieu of buying more CPU memory, an easy fix is to decrease
+`--input_queue_memory_factor`. See [Adjusting Memory Demands](#adjusting-memory-demands).
+
+#### The model runs out of GPU memory.
+
+The data is not able to fit on the GPU card. The simplest solution is to
+decrease the batch size of the model. Otherwise, you will need to think about a
+more sophisticated method for specifying the training which cuts up the model
+across multiple `session.run()` calls or partitions the model across multiple
+GPUs. See [Using GPUs](https://www.tensorflow.org/how_tos/using_gpu/index.html)
+and [Adjusting Memory Demands](#adjusting-memory-demands) for more information.
+
+#### The model training results in NaN's.
+
+The learning rate of the model is too high. Turn down your learning rate.
+
+#### I wish to train a model with a different image size.
+
+The simplest solution is to artificially resize your images to `299x299`
+pixels. See the [Images](https://www.tensorflow.org/api_docs/python/image.html)
+section for many resizing, cropping and padding methods. Note that the entire
+model architecture is predicated on a `299x299` input image, so if you wish to
+change the input image size, you may need to redesign the entire model
+architecture.
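+
+For example, a minimal sketch of resizing an already-decoded image tensor with
+the TF 1.x image ops:
+
+```python
+import tensorflow as tf
+
+def resize_to_299(image):
+  """Resize a decoded [height, width, 3] image tensor to 299x299."""
+  resized = tf.image.resize_images(image, [299, 299])
+  return tf.cast(resized, tf.uint8)
+```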
+
+#### What hardware specification are these hyper-parameters targeted for?
+
+We targeted a desktop with 128GB of CPU RAM connected to 8 NVIDIA Tesla K40 GPU
+cards, but we have also run this on desktops with 32GB of CPU RAM and 1 NVIDIA
+Tesla K40. You can get a sense of the various training configurations we tested
+by reading the comments in [`inception_train.py`](inception/inception_train.py).
+
+#### How do I continue training from a checkpoint in distributed setting?
+
+You only need to make sure that the checkpoint is in a location that can be
+reached by all of the `ps` tasks. By specifying the checkpoint location with
+`--train_dir`, the `ps` servers will load the checkpoint before commencing
+training.
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/WORKSPACE" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/WORKSPACE"
new file mode 100644
index 00000000..2d7b4fb2
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/WORKSPACE"
@@ -0,0 +1 @@
+workspace(name = "inception")
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/g3doc/inception_v3_architecture.png" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/g3doc/inception_v3_architecture.png"
new file mode 100644
index 00000000..91fb734a
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/g3doc/inception_v3_architecture.png" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/BUILD" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/BUILD"
new file mode 100644
index 00000000..21fc27aa
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/BUILD"
@@ -0,0 +1,198 @@
+# Description:
+# Example TensorFlow models for ImageNet.
+
+package(default_visibility = [":internal"])
+
+licenses(["notice"]) # Apache 2.0
+
+exports_files(["LICENSE"])
+
+package_group(
+ name = "internal",
+ packages = ["//inception/..."],
+)
+
+py_library(
+ name = "dataset",
+ srcs = [
+ "dataset.py",
+ ],
+)
+
+py_library(
+ name = "imagenet_data",
+ srcs = [
+ "imagenet_data.py",
+ ],
+ deps = [
+ ":dataset",
+ ],
+)
+
+py_library(
+ name = "flowers_data",
+ srcs = [
+ "flowers_data.py",
+ ],
+ deps = [
+ ":dataset",
+ ],
+)
+
+py_library(
+ name = "image_processing",
+ srcs = [
+ "image_processing.py",
+ ],
+)
+
+py_library(
+ name = "inception",
+ srcs = [
+ "inception_model.py",
+ ],
+ visibility = ["//visibility:public"],
+ deps = [
+ ":dataset",
+ "//inception/slim",
+ ],
+)
+
+py_binary(
+ name = "imagenet_eval",
+ srcs = [
+ "imagenet_eval.py",
+ ],
+ deps = [
+ ":imagenet_data",
+ ":inception_eval",
+ ],
+)
+
+py_binary(
+ name = "flowers_eval",
+ srcs = [
+ "flowers_eval.py",
+ ],
+ deps = [
+ ":flowers_data",
+ ":inception_eval",
+ ],
+)
+
+py_library(
+ name = "inception_eval",
+ srcs = [
+ "inception_eval.py",
+ ],
+ deps = [
+ ":image_processing",
+ ":inception",
+ ],
+)
+
+py_binary(
+ name = "imagenet_train",
+ srcs = [
+ "imagenet_train.py",
+ ],
+ deps = [
+ ":imagenet_data",
+ ":inception_train",
+ ],
+)
+
+py_binary(
+ name = "imagenet_distributed_train",
+ srcs = [
+ "imagenet_distributed_train.py",
+ ],
+ deps = [
+ ":imagenet_data",
+ ":inception_distributed_train",
+ ],
+)
+
+py_binary(
+ name = "flowers_train",
+ srcs = [
+ "flowers_train.py",
+ ],
+ deps = [
+ ":flowers_data",
+ ":inception_train",
+ ],
+)
+
+py_library(
+ name = "inception_train",
+ srcs = [
+ "inception_train.py",
+ ],
+ deps = [
+ ":image_processing",
+ ":inception",
+ ],
+)
+
+py_library(
+ name = "inception_distributed_train",
+ srcs = [
+ "inception_distributed_train.py",
+ ],
+ deps = [
+ ":image_processing",
+ ":inception",
+ ],
+)
+
+py_binary(
+ name = "build_image_data",
+ srcs = ["data/build_image_data.py"],
+)
+
+sh_binary(
+ name = "download_and_preprocess_flowers",
+ srcs = ["data/download_and_preprocess_flowers.sh"],
+ data = [
+ ":build_image_data",
+ ],
+)
+
+sh_binary(
+ name = "download_and_preprocess_imagenet",
+ srcs = ["data/download_and_preprocess_imagenet.sh"],
+ data = [
+ "data/download_imagenet.sh",
+ "data/imagenet_2012_validation_synset_labels.txt",
+ "data/imagenet_lsvrc_2015_synsets.txt",
+ "data/imagenet_metadata.txt",
+ "data/preprocess_imagenet_validation_data.py",
+ "data/process_bounding_boxes.py",
+ ":build_imagenet_data",
+ ],
+)
+
+py_binary(
+ name = "build_imagenet_data",
+ srcs = ["data/build_imagenet_data.py"],
+)
+
+filegroup(
+ name = "srcs",
+ srcs = glob(
+ [
+ "**/*.py",
+ "BUILD",
+ ],
+ ),
+)
+
+filegroup(
+ name = "imagenet_metadata",
+ srcs = [
+ "data/imagenet_lsvrc_2015_synsets.txt",
+ "data/imagenet_metadata.txt",
+ ],
+ visibility = ["//visibility:public"],
+)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/data/build_image_data.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/data/build_image_data.py"
new file mode 100644
index 00000000..894388b7
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/data/build_image_data.py"
@@ -0,0 +1,436 @@
+#!/usr/bin/python
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Converts image data to TFRecords file format with Example protos.
+
+The image data set is expected to reside in JPEG files located in the
+following directory structure.
+
+ data_dir/label_0/image0.jpeg
+ data_dir/label_0/image1.jpg
+ ...
+ data_dir/label_1/weird-image.jpeg
+ data_dir/label_1/my-image.jpeg
+ ...
+
+where the sub-directory is the unique label associated with these images.
+
+This TensorFlow script converts the training and evaluation data into
+a sharded data set consisting of TFRecord files
+
+ train_directory/train-00000-of-01024
+ train_directory/train-00001-of-01024
+ ...
+ train_directory/train-01023-of-01024
+
+and
+
+ validation_directory/validation-00000-of-00128
+ validation_directory/validation-00001-of-00128
+ ...
+ validation_directory/validation-00127-of-00128
+
+where we have selected 1024 and 128 shards for each data set. Each record
+within the TFRecord file is a serialized Example proto. The Example proto
+contains the following fields:
+
+ image/encoded: string containing JPEG encoded image in RGB colorspace
+ image/height: integer, image height in pixels
+ image/width: integer, image width in pixels
+ image/colorspace: string, specifying the colorspace, always 'RGB'
+ image/channels: integer, specifying the number of channels, always 3
+ image/format: string, specifying the format, always 'JPEG'
+
+ image/filename: string containing the basename of the image file
+ e.g. 'n01440764_10026.JPEG' or 'ILSVRC2012_val_00000293.JPEG'
+ image/class/label: integer specifying the index in a classification layer.
+ The label ranges from [0, num_labels] where 0 is unused and left as
+ the background class.
+ image/class/text: string specifying the human-readable version of the label
+ e.g. 'dog'
+
+If your data set involves bounding boxes, please look at build_imagenet_data.py.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from datetime import datetime
+import os
+import random
+import sys
+import threading
+
+import numpy as np
+import tensorflow as tf
+
+tf.app.flags.DEFINE_string('train_directory', '/tmp/',
+ 'Training data directory')
+tf.app.flags.DEFINE_string('validation_directory', '/tmp/',
+ 'Validation data directory')
+tf.app.flags.DEFINE_string('output_directory', '/tmp/',
+ 'Output data directory')
+
+tf.app.flags.DEFINE_integer('train_shards', 2,
+ 'Number of shards in training TFRecord files.')
+tf.app.flags.DEFINE_integer('validation_shards', 2,
+ 'Number of shards in validation TFRecord files.')
+
+tf.app.flags.DEFINE_integer('num_threads', 2,
+ 'Number of threads to preprocess the images.')
+
+# The labels file contains the list of valid labels, one label per line.
+# Assumes that the file contains entries as such:
+# dog
+# cat
+# flower
+# where each line corresponds to a label. We map each label contained in
+# the file to an integer corresponding to the line number starting from 0.
+tf.app.flags.DEFINE_string('labels_file', '', 'Labels file')
+
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def _int64_feature(value):
+ """Wrapper for inserting int64 features into Example proto."""
+ if not isinstance(value, list):
+ value = [value]
+ return tf.train.Feature(int64_list=tf.train.Int64List(value=value))
+
+
+def _bytes_feature(value):
+ """Wrapper for inserting bytes features into Example proto."""
+ return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
+
+
+def _convert_to_example(filename, image_buffer, label, text, height, width):
+ """Build an Example proto for an example.
+
+ Args:
+ filename: string, path to an image file, e.g., '/path/to/example.JPG'
+ image_buffer: string, JPEG encoding of RGB image
+ label: integer, identifier for the ground truth for the network
+ text: string, unique human-readable, e.g. 'dog'
+ height: integer, image height in pixels
+ width: integer, image width in pixels
+ Returns:
+ Example proto
+ """
+
+ colorspace = 'RGB'
+ channels = 3
+ image_format = 'JPEG'
+
+ example = tf.train.Example(features=tf.train.Features(feature={
+ 'image/height': _int64_feature(height),
+ 'image/width': _int64_feature(width),
+ 'image/colorspace': _bytes_feature(tf.compat.as_bytes(colorspace)),
+ 'image/channels': _int64_feature(channels),
+ 'image/class/label': _int64_feature(label),
+ 'image/class/text': _bytes_feature(tf.compat.as_bytes(text)),
+ 'image/format': _bytes_feature(tf.compat.as_bytes(image_format)),
+ 'image/filename': _bytes_feature(tf.compat.as_bytes(os.path.basename(filename))),
+ 'image/encoded': _bytes_feature(tf.compat.as_bytes(image_buffer))}))
+ return example
+
+
+class ImageCoder(object):
+ """Helper class that provides TensorFlow image coding utilities."""
+
+ def __init__(self):
+ # Create a single Session to run all image coding calls.
+ self._sess = tf.Session()
+
+ # Initializes function that converts PNG to JPEG data.
+ self._png_data = tf.placeholder(dtype=tf.string)
+ image = tf.image.decode_png(self._png_data, channels=3)
+ self._png_to_jpeg = tf.image.encode_jpeg(image, format='rgb', quality=100)
+
+ # Initializes function that decodes RGB JPEG data.
+ self._decode_jpeg_data = tf.placeholder(dtype=tf.string)
+ self._decode_jpeg = tf.image.decode_jpeg(self._decode_jpeg_data, channels=3)
+
+ def png_to_jpeg(self, image_data):
+ return self._sess.run(self._png_to_jpeg,
+ feed_dict={self._png_data: image_data})
+
+ def decode_jpeg(self, image_data):
+ image = self._sess.run(self._decode_jpeg,
+ feed_dict={self._decode_jpeg_data: image_data})
+ assert len(image.shape) == 3
+ assert image.shape[2] == 3
+ return image
+
+
+def _is_png(filename):
+ """Determine if a file contains a PNG format image.
+
+ Args:
+ filename: string, path of the image file.
+
+ Returns:
+ boolean indicating if the image is a PNG.
+ """
+ return filename.endswith('.png')
+
+
+def _process_image(filename, coder):
+ """Process a single image file.
+
+ Args:
+ filename: string, path to an image file e.g., '/path/to/example.JPG'.
+ coder: instance of ImageCoder to provide TensorFlow image coding utils.
+ Returns:
+ image_buffer: string, JPEG encoding of RGB image.
+ height: integer, image height in pixels.
+ width: integer, image width in pixels.
+ """
+ # Read the image file.
+ with tf.gfile.FastGFile(filename, 'rb') as f:
+ image_data = f.read()
+
+ # Convert any PNG to JPEG's for consistency.
+ if _is_png(filename):
+ print('Converting PNG to JPEG for %s' % filename)
+ image_data = coder.png_to_jpeg(image_data)
+
+ # Decode the RGB JPEG.
+ image = coder.decode_jpeg(image_data)
+
+ # Check that image converted to RGB
+ assert len(image.shape) == 3
+ height = image.shape[0]
+ width = image.shape[1]
+ assert image.shape[2] == 3
+
+ return image_data, height, width
+
+
+def _process_image_files_batch(coder, thread_index, ranges, name, filenames,
+ texts, labels, num_shards):
+ """Processes and saves list of images as TFRecord in 1 thread.
+
+ Args:
+ coder: instance of ImageCoder to provide TensorFlow image coding utils.
+    thread_index: integer, unique index of the batch to run; lies within
+      [0, len(ranges)).
+    ranges: list of pairs of integers specifying the range of each batch to
+      analyze in parallel.
+ name: string, unique identifier specifying the data set
+ filenames: list of strings; each string is a path to an image file
+ texts: list of strings; each string is human readable, e.g. 'dog'
+ labels: list of integer; each integer identifies the ground truth
+ num_shards: integer number of shards for this data set.
+ """
+ # Each thread produces N shards where N = int(num_shards / num_threads).
+ # For instance, if num_shards = 128, and the num_threads = 2, then the first
+ # thread would produce shards [0, 64).
+ num_threads = len(ranges)
+ assert not num_shards % num_threads
+ num_shards_per_batch = int(num_shards / num_threads)
+
+ shard_ranges = np.linspace(ranges[thread_index][0],
+ ranges[thread_index][1],
+ num_shards_per_batch + 1).astype(int)
+ num_files_in_thread = ranges[thread_index][1] - ranges[thread_index][0]
+
+ counter = 0
+ for s in range(num_shards_per_batch):
+ # Generate a sharded version of the file name, e.g. 'train-00002-of-00010'
+ shard = thread_index * num_shards_per_batch + s
+ output_filename = '%s-%.5d-of-%.5d' % (name, shard, num_shards)
+ output_file = os.path.join(FLAGS.output_directory, output_filename)
+ writer = tf.python_io.TFRecordWriter(output_file)
+
+ shard_counter = 0
+ files_in_shard = np.arange(shard_ranges[s], shard_ranges[s + 1], dtype=int)
+ for i in files_in_shard:
+ filename = filenames[i]
+ label = labels[i]
+ text = texts[i]
+
+ try:
+ image_buffer, height, width = _process_image(filename, coder)
+ except Exception as e:
+ print(e)
+ print('SKIPPED: Unexpected error while decoding %s.' % filename)
+ continue
+
+ example = _convert_to_example(filename, image_buffer, label,
+ text, height, width)
+ writer.write(example.SerializeToString())
+ shard_counter += 1
+ counter += 1
+
+ if not counter % 1000:
+ print('%s [thread %d]: Processed %d of %d images in thread batch.' %
+ (datetime.now(), thread_index, counter, num_files_in_thread))
+ sys.stdout.flush()
+
+ writer.close()
+ print('%s [thread %d]: Wrote %d images to %s' %
+ (datetime.now(), thread_index, shard_counter, output_file))
+ sys.stdout.flush()
+ shard_counter = 0
+ print('%s [thread %d]: Wrote %d images to %d shards.' %
+ (datetime.now(), thread_index, counter, num_files_in_thread))
+ sys.stdout.flush()
+
+
+def _process_image_files(name, filenames, texts, labels, num_shards):
+ """Process and save list of images as TFRecord of Example protos.
+
+ Args:
+ name: string, unique identifier specifying the data set
+ filenames: list of strings; each string is a path to an image file
+ texts: list of strings; each string is human readable, e.g. 'dog'
+ labels: list of integer; each integer identifies the ground truth
+ num_shards: integer number of shards for this data set.
+ """
+ assert len(filenames) == len(texts)
+ assert len(filenames) == len(labels)
+
+ # Break all images into batches with a [ranges[i][0], ranges[i][1]].
+ spacing = np.linspace(0, len(filenames), FLAGS.num_threads + 1).astype(np.int)
+ ranges = []
+ for i in range(len(spacing) - 1):
+ ranges.append([spacing[i], spacing[i + 1]])
+
+ # Launch a thread for each batch.
+ print('Launching %d threads for spacings: %s' % (FLAGS.num_threads, ranges))
+ sys.stdout.flush()
+
+ # Create a mechanism for monitoring when all threads are finished.
+ coord = tf.train.Coordinator()
+
+ # Create a generic TensorFlow-based utility for converting all image codings.
+ coder = ImageCoder()
+
+ threads = []
+ for thread_index in range(len(ranges)):
+ args = (coder, thread_index, ranges, name, filenames,
+ texts, labels, num_shards)
+ t = threading.Thread(target=_process_image_files_batch, args=args)
+ t.start()
+ threads.append(t)
+
+ # Wait for all the threads to terminate.
+ coord.join(threads)
+ print('%s: Finished writing all %d images in data set.' %
+ (datetime.now(), len(filenames)))
+ sys.stdout.flush()
+
+
+def _find_image_files(data_dir, labels_file):
+ """Build a list of all images files and labels in the data set.
+
+ Args:
+ data_dir: string, path to the root directory of images.
+
+ Assumes that the image data set resides in JPEG files located in
+ the following directory structure.
+
+ data_dir/dog/another-image.JPEG
+ data_dir/dog/my-image.jpg
+
+ where 'dog' is the label associated with these images.
+
+ labels_file: string, path to the labels file.
+
+ The list of valid labels are held in this file. Assumes that the file
+ contains entries as such:
+ dog
+ cat
+ flower
+ where each line corresponds to a label. We map each label contained in
+ the file to an integer starting with the integer 0 corresponding to the
+ label contained in the first line.
+
+ Returns:
+ filenames: list of strings; each string is a path to an image file.
+ texts: list of strings; each string is the class, e.g. 'dog'
+ labels: list of integer; each integer identifies the ground truth.
+ """
+ print('Determining list of input files and labels from %s.' % data_dir)
+ unique_labels = [l.strip() for l in tf.gfile.FastGFile(
+ labels_file, 'r').readlines()]
+
+ labels = []
+ filenames = []
+ texts = []
+
+ # Leave label index 0 empty as a background class.
+ label_index = 1
+
+ # Construct the list of JPEG files and labels.
+ for text in unique_labels:
+ jpeg_file_path = '%s/%s/*' % (data_dir, text)
+ matching_files = tf.gfile.Glob(jpeg_file_path)
+
+ labels.extend([label_index] * len(matching_files))
+ texts.extend([text] * len(matching_files))
+ filenames.extend(matching_files)
+
+ if not label_index % 100:
+      print('Finished finding files in %d of %d classes.' % (
+          label_index, len(unique_labels)))
+ label_index += 1
+
+ # Shuffle the ordering of all image files in order to guarantee
+ # random ordering of the images with respect to label in the
+ # saved TFRecord files. Make the randomization repeatable.
+ shuffled_index = list(range(len(filenames)))
+ random.seed(12345)
+ random.shuffle(shuffled_index)
+
+ filenames = [filenames[i] for i in shuffled_index]
+ texts = [texts[i] for i in shuffled_index]
+ labels = [labels[i] for i in shuffled_index]
+
+ print('Found %d JPEG files across %d labels inside %s.' %
+ (len(filenames), len(unique_labels), data_dir))
+ return filenames, texts, labels
+
+
+def _process_dataset(name, directory, num_shards, labels_file):
+ """Process a complete data set and save it as a TFRecord.
+
+ Args:
+ name: string, unique identifier specifying the data set.
+ directory: string, root path to the data set.
+ num_shards: integer number of shards for this data set.
+ labels_file: string, path to the labels file.
+ """
+ filenames, texts, labels = _find_image_files(directory, labels_file)
+ _process_image_files(name, filenames, texts, labels, num_shards)
+
+
+def main(unused_argv):
+ assert not FLAGS.train_shards % FLAGS.num_threads, (
+ 'Please make the FLAGS.num_threads commensurate with FLAGS.train_shards')
+ assert not FLAGS.validation_shards % FLAGS.num_threads, (
+ 'Please make the FLAGS.num_threads commensurate with '
+ 'FLAGS.validation_shards')
+ print('Saving results to %s' % FLAGS.output_directory)
+
+ # Run it!
+ _process_dataset('validation', FLAGS.validation_directory,
+ FLAGS.validation_shards, FLAGS.labels_file)
+ _process_dataset('train', FLAGS.train_directory,
+ FLAGS.train_shards, FLAGS.labels_file)
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/data/build_imagenet_data.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/data/build_imagenet_data.py"
new file mode 100644
index 00000000..c054735e
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/data/build_imagenet_data.py"
@@ -0,0 +1,707 @@
+#!/usr/bin/python
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Converts ImageNet data to TFRecords file format with Example protos.
+
+The raw ImageNet data set is expected to reside in JPEG files located in the
+following directory structure.
+
+ data_dir/n01440764/ILSVRC2012_val_00000293.JPEG
+ data_dir/n01440764/ILSVRC2012_val_00000543.JPEG
+ ...
+
+where 'n01440764' is the unique synset label associated with
+these images.
+
+The training data set consists of 1000 sub-directories (i.e. labels)
+each containing 1200 JPEG images for a total of 1.2M JPEG images.
+
+The evaluation data set consists of 1000 sub-directories (i.e. labels)
+each containing 50 JPEG images for a total of 50K JPEG images.
+
+This TensorFlow script converts the training and evaluation data into
+a sharded data set consisting of 1024 and 128 TFRecord files, respectively.
+
+ train_directory/train-00000-of-01024
+ train_directory/train-00001-of-01024
+ ...
+ train_directory/train-01023-of-01024
+
+and
+
+ validation_directory/validation-00000-of-00128
+ validation_directory/validation-00001-of-00128
+ ...
+ validation_directory/validation-00127-of-00128
+
+Each validation TFRecord file contains ~390 records. Each training TFRecord
+file contains ~1250 records. Each record within the TFRecord file is a
+serialized Example proto. The Example proto contains the following fields:
+
+ image/encoded: string containing JPEG encoded image in RGB colorspace
+ image/height: integer, image height in pixels
+ image/width: integer, image width in pixels
+ image/colorspace: string, specifying the colorspace, always 'RGB'
+ image/channels: integer, specifying the number of channels, always 3
+ image/format: string, specifying the format, always 'JPEG'
+
+ image/filename: string containing the basename of the image file
+ e.g. 'n01440764_10026.JPEG' or 'ILSVRC2012_val_00000293.JPEG'
+ image/class/label: integer specifying the index in a classification layer.
+ The label ranges from [1, 1000] where 0 is not used.
+ image/class/synset: string specifying the unique ID of the label,
+ e.g. 'n01440764'
+ image/class/text: string specifying the human-readable version of the label
+ e.g. 'red fox, Vulpes vulpes'
+
+ image/object/bbox/xmin: list of integers specifying the 0+ human annotated
+ bounding boxes
+ image/object/bbox/xmax: list of integers specifying the 0+ human annotated
+ bounding boxes
+ image/object/bbox/ymin: list of integers specifying the 0+ human annotated
+ bounding boxes
+ image/object/bbox/ymax: list of integers specifying the 0+ human annotated
+ bounding boxes
+ image/object/bbox/label: integer specifying the index in a classification
+ layer. The label ranges from [1, 1000] where 0 is not used. Note this is
+ always identical to the image label.
+
+Note that the length of xmin is identical to the length of xmax, ymin and ymax
+for each example.
+
+Running this script using 16 threads may take around ~2.5 hours on an HP Z420.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from datetime import datetime
+import os
+import random
+import sys
+import threading
+
+import numpy as np
+import six
+import tensorflow as tf
+
+tf.app.flags.DEFINE_string('train_directory', '/tmp/',
+ 'Training data directory')
+tf.app.flags.DEFINE_string('validation_directory', '/tmp/',
+ 'Validation data directory')
+tf.app.flags.DEFINE_string('output_directory', '/tmp/',
+ 'Output data directory')
+
+tf.app.flags.DEFINE_integer('train_shards', 1024,
+ 'Number of shards in training TFRecord files.')
+tf.app.flags.DEFINE_integer('validation_shards', 128,
+ 'Number of shards in validation TFRecord files.')
+
+tf.app.flags.DEFINE_integer('num_threads', 8,
+ 'Number of threads to preprocess the images.')
+
+# The labels file contains the list of valid labels, one label per line.
+# Assumes that the file contains entries as such:
+# n01440764
+# n01443537
+# n01484850
+# where each line corresponds to a label expressed as a synset. We map
+# each synset contained in the file to an integer (based on the alphabetical
+# ordering). See below for details.
+tf.app.flags.DEFINE_string('labels_file',
+ 'imagenet_lsvrc_2015_synsets.txt',
+ 'Labels file')
+
+# This file contains the mapping from synset to human-readable label.
+# Assumes each line of the file looks like:
+#
+# n02119247 black fox
+# n02119359 silver fox
+# n02119477 red fox, Vulpes fulva
+#
+# where each line corresponds to a unique mapping. Note that each line is
+# formatted as: <synset>\t<human readable label>.
+tf.app.flags.DEFINE_string('imagenet_metadata_file',
+ 'imagenet_metadata.txt',
+ 'ImageNet metadata file')
+
+# This file is the output of process_bounding_boxes.py
+# Assumes each line of the file looks like:
+#
+# n00007846_64193.JPEG,0.0060,0.2620,0.7545,0.9940
+#
+# where each line corresponds to one bounding box annotation associated
+# with an image. Each line can be parsed as:
+#
+#   <JPEG file name>, <xmin>, <ymin>, <xmax>, <ymax>
+#
+# Note that there might exist multiple bounding box annotations associated
+# with an image file.
+tf.app.flags.DEFINE_string('bounding_box_file',
+ './imagenet_2012_bounding_boxes.csv',
+ 'Bounding box file')
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def _int64_feature(value):
+ """Wrapper for inserting int64 features into Example proto."""
+ if not isinstance(value, list):
+ value = [value]
+ return tf.train.Feature(int64_list=tf.train.Int64List(value=value))
+
+
+def _float_feature(value):
+ """Wrapper for inserting float features into Example proto."""
+ if not isinstance(value, list):
+ value = [value]
+ return tf.train.Feature(float_list=tf.train.FloatList(value=value))
+
+
+def _bytes_feature(value):
+ """Wrapper for inserting bytes features into Example proto."""
+ if six.PY3 and isinstance(value, six.text_type):
+ value = six.binary_type(value, encoding='utf-8')
+ return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
+
+
+def _convert_to_example(filename, image_buffer, label, synset, human, bbox,
+ height, width):
+ """Build an Example proto for an example.
+
+ Args:
+ filename: string, path to an image file, e.g., '/path/to/example.JPG'
+ image_buffer: string, JPEG encoding of RGB image
+ label: integer, identifier for the ground truth for the network
+ synset: string, unique WordNet ID specifying the label, e.g., 'n02323233'
+ human: string, human-readable label, e.g., 'red fox, Vulpes vulpes'
+ bbox: list of bounding boxes; each box is a list of integers
+ specifying [xmin, ymin, xmax, ymax]. All boxes are assumed to belong to
+ the same label as the image label.
+ height: integer, image height in pixels
+ width: integer, image width in pixels
+ Returns:
+ Example proto
+ """
+ xmin = []
+ ymin = []
+ xmax = []
+ ymax = []
+ for b in bbox:
+ assert len(b) == 4
+ # pylint: disable=expression-not-assigned
+ [l.append(point) for l, point in zip([xmin, ymin, xmax, ymax], b)]
+ # pylint: enable=expression-not-assigned
+
+ colorspace = 'RGB'
+ channels = 3
+ image_format = 'JPEG'
+
+ example = tf.train.Example(features=tf.train.Features(feature={
+ 'image/height': _int64_feature(height),
+ 'image/width': _int64_feature(width),
+ 'image/colorspace': _bytes_feature(colorspace),
+ 'image/channels': _int64_feature(channels),
+ 'image/class/label': _int64_feature(label),
+ 'image/class/synset': _bytes_feature(synset),
+ 'image/class/text': _bytes_feature(human),
+ 'image/object/bbox/xmin': _float_feature(xmin),
+ 'image/object/bbox/xmax': _float_feature(xmax),
+ 'image/object/bbox/ymin': _float_feature(ymin),
+ 'image/object/bbox/ymax': _float_feature(ymax),
+ 'image/object/bbox/label': _int64_feature([label] * len(xmin)),
+ 'image/format': _bytes_feature(image_format),
+ 'image/filename': _bytes_feature(os.path.basename(filename)),
+ 'image/encoded': _bytes_feature(image_buffer)}))
+ return example
+
+
+class ImageCoder(object):
+ """Helper class that provides TensorFlow image coding utilities."""
+
+ def __init__(self):
+ # Create a single Session to run all image coding calls.
+ self._sess = tf.Session()
+
+ # Initializes function that converts PNG to JPEG data.
+ self._png_data = tf.placeholder(dtype=tf.string)
+ image = tf.image.decode_png(self._png_data, channels=3)
+ self._png_to_jpeg = tf.image.encode_jpeg(image, format='rgb', quality=100)
+
+ # Initializes function that converts CMYK JPEG data to RGB JPEG data.
+ self._cmyk_data = tf.placeholder(dtype=tf.string)
+ image = tf.image.decode_jpeg(self._cmyk_data, channels=0)
+ self._cmyk_to_rgb = tf.image.encode_jpeg(image, format='rgb', quality=100)
+
+ # Initializes function that decodes RGB JPEG data.
+ self._decode_jpeg_data = tf.placeholder(dtype=tf.string)
+ self._decode_jpeg = tf.image.decode_jpeg(self._decode_jpeg_data, channels=3)
+
+ def png_to_jpeg(self, image_data):
+ return self._sess.run(self._png_to_jpeg,
+ feed_dict={self._png_data: image_data})
+
+ def cmyk_to_rgb(self, image_data):
+ return self._sess.run(self._cmyk_to_rgb,
+ feed_dict={self._cmyk_data: image_data})
+
+ def decode_jpeg(self, image_data):
+ image = self._sess.run(self._decode_jpeg,
+ feed_dict={self._decode_jpeg_data: image_data})
+ assert len(image.shape) == 3
+ assert image.shape[2] == 3
+ return image
+
+
+def _is_png(filename):
+ """Determine if a file contains a PNG format image.
+
+ Args:
+ filename: string, path of the image file.
+
+ Returns:
+ boolean indicating if the image is a PNG.
+ """
+ # File list from:
+ # https://groups.google.com/forum/embed/?place=forum/torch7#!topic/torch7/fOSTXHIESSU
+ return 'n02105855_2933.JPEG' in filename
+
+
+def _is_cmyk(filename):
+ """Determine if file contains a CMYK JPEG format image.
+
+ Args:
+ filename: string, path of the image file.
+
+ Returns:
+ boolean indicating if the image is a JPEG encoded with CMYK color space.
+ """
+ # File list from:
+ # https://github.com/cytsai/ilsvrc-cmyk-image-list
+ blacklist = ['n01739381_1309.JPEG', 'n02077923_14822.JPEG',
+ 'n02447366_23489.JPEG', 'n02492035_15739.JPEG',
+ 'n02747177_10752.JPEG', 'n03018349_4028.JPEG',
+ 'n03062245_4620.JPEG', 'n03347037_9675.JPEG',
+ 'n03467068_12171.JPEG', 'n03529860_11437.JPEG',
+ 'n03544143_17228.JPEG', 'n03633091_5218.JPEG',
+ 'n03710637_5125.JPEG', 'n03961711_5286.JPEG',
+ 'n04033995_2932.JPEG', 'n04258138_17003.JPEG',
+ 'n04264628_27969.JPEG', 'n04336792_7448.JPEG',
+ 'n04371774_5854.JPEG', 'n04596742_4225.JPEG',
+ 'n07583066_647.JPEG', 'n13037406_4650.JPEG']
+ return filename.split('/')[-1] in blacklist
+
+
+def _process_image(filename, coder):
+ """Process a single image file.
+
+ Args:
+ filename: string, path to an image file e.g., '/path/to/example.JPG'.
+ coder: instance of ImageCoder to provide TensorFlow image coding utils.
+ Returns:
+ image_buffer: string, JPEG encoding of RGB image.
+ height: integer, image height in pixels.
+ width: integer, image width in pixels.
+ """
+ # Read the image file.
+ with tf.gfile.FastGFile(filename, 'rb') as f:
+ image_data = f.read()
+
+ # Clean the dirty data.
+ if _is_png(filename):
+ # 1 image is a PNG.
+ print('Converting PNG to JPEG for %s' % filename)
+ image_data = coder.png_to_jpeg(image_data)
+ elif _is_cmyk(filename):
+ # 22 JPEG images are in CMYK colorspace.
+ print('Converting CMYK to RGB for %s' % filename)
+ image_data = coder.cmyk_to_rgb(image_data)
+
+ # Decode the RGB JPEG.
+ image = coder.decode_jpeg(image_data)
+
+ # Check that image converted to RGB
+ assert len(image.shape) == 3
+ height = image.shape[0]
+ width = image.shape[1]
+ assert image.shape[2] == 3
+
+ return image_data, height, width
+
+
+def _process_image_files_batch(coder, thread_index, ranges, name, filenames,
+ synsets, labels, humans, bboxes, num_shards):
+ """Processes and saves list of images as TFRecord in 1 thread.
+
+ Args:
+ coder: instance of ImageCoder to provide TensorFlow image coding utils.
+    thread_index: integer, unique index of this thread's batch, within
+      [0, len(ranges)).
+    ranges: list of pairs of integers specifying the ranges of each batch to
+      analyze in parallel.
+ name: string, unique identifier specifying the data set
+ filenames: list of strings; each string is a path to an image file
+ synsets: list of strings; each string is a unique WordNet ID
+ labels: list of integer; each integer identifies the ground truth
+ humans: list of strings; each string is a human-readable label
+    bboxes: list of bounding boxes for each image. Note that each entry in
+      this list may contain zero or more bounding boxes, one per bounding box
+      annotation for the image.
+ num_shards: integer number of shards for this data set.
+ """
+ # Each thread produces N shards where N = int(num_shards / num_threads).
+ # For instance, if num_shards = 128, and the num_threads = 2, then the first
+ # thread would produce shards [0, 64).
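+  # (The second thread would then produce shards [64, 128); within a thread,
+  # each shard covers a contiguous slice of its file range via np.linspace.)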
+ num_threads = len(ranges)
+ assert not num_shards % num_threads
+ num_shards_per_batch = int(num_shards / num_threads)
+
+ shard_ranges = np.linspace(ranges[thread_index][0],
+ ranges[thread_index][1],
+ num_shards_per_batch + 1).astype(int)
+ num_files_in_thread = ranges[thread_index][1] - ranges[thread_index][0]
+
+ counter = 0
+ for s in range(num_shards_per_batch):
+ # Generate a sharded version of the file name, e.g. 'train-00002-of-00010'
+ shard = thread_index * num_shards_per_batch + s
+ output_filename = '%s-%.5d-of-%.5d' % (name, shard, num_shards)
+ output_file = os.path.join(FLAGS.output_directory, output_filename)
+ writer = tf.python_io.TFRecordWriter(output_file)
+
+ shard_counter = 0
+ files_in_shard = np.arange(shard_ranges[s], shard_ranges[s + 1], dtype=int)
+ for i in files_in_shard:
+ filename = filenames[i]
+ label = labels[i]
+ synset = synsets[i]
+ human = humans[i]
+ bbox = bboxes[i]
+
+ image_buffer, height, width = _process_image(filename, coder)
+
+ example = _convert_to_example(filename, image_buffer, label,
+ synset, human, bbox,
+ height, width)
+ writer.write(example.SerializeToString())
+ shard_counter += 1
+ counter += 1
+
+ if not counter % 1000:
+ print('%s [thread %d]: Processed %d of %d images in thread batch.' %
+ (datetime.now(), thread_index, counter, num_files_in_thread))
+ sys.stdout.flush()
+
+ writer.close()
+ print('%s [thread %d]: Wrote %d images to %s' %
+ (datetime.now(), thread_index, shard_counter, output_file))
+ sys.stdout.flush()
+ shard_counter = 0
+ print('%s [thread %d]: Wrote %d images to %d shards.' %
+ (datetime.now(), thread_index, counter, num_files_in_thread))
+ sys.stdout.flush()
+
+
+def _process_image_files(name, filenames, synsets, labels, humans,
+ bboxes, num_shards):
+ """Process and save list of images as TFRecord of Example protos.
+
+ Args:
+ name: string, unique identifier specifying the data set
+ filenames: list of strings; each string is a path to an image file
+ synsets: list of strings; each string is a unique WordNet ID
+ labels: list of integer; each integer identifies the ground truth
+ humans: list of strings; each string is a human-readable label
+    bboxes: list of bounding boxes for each image. Note that each entry in
+      this list may contain zero or more bounding boxes, one per bounding box
+      annotation for the image.
+ num_shards: integer number of shards for this data set.
+ """
+ assert len(filenames) == len(synsets)
+ assert len(filenames) == len(labels)
+ assert len(filenames) == len(humans)
+ assert len(filenames) == len(bboxes)
+
+ # Break all images into batches with a [ranges[i][0], ranges[i][1]].
+ spacing = np.linspace(0, len(filenames), FLAGS.num_threads + 1).astype(np.int)
+ ranges = []
+ threads = []
+ for i in range(len(spacing) - 1):
+ ranges.append([spacing[i], spacing[i + 1]])
+
+ # Launch a thread for each batch.
+ print('Launching %d threads for spacings: %s' % (FLAGS.num_threads, ranges))
+ sys.stdout.flush()
+
+ # Create a mechanism for monitoring when all threads are finished.
+ coord = tf.train.Coordinator()
+
+ # Create a generic TensorFlow-based utility for converting all image codings.
+ coder = ImageCoder()
+
+ threads = []
+ for thread_index in range(len(ranges)):
+ args = (coder, thread_index, ranges, name, filenames,
+ synsets, labels, humans, bboxes, num_shards)
+ t = threading.Thread(target=_process_image_files_batch, args=args)
+ t.start()
+ threads.append(t)
+
+ # Wait for all the threads to terminate.
+ coord.join(threads)
+ print('%s: Finished writing all %d images in data set.' %
+ (datetime.now(), len(filenames)))
+ sys.stdout.flush()
+
+
+def _find_image_files(data_dir, labels_file):
+ """Build a list of all images files and labels in the data set.
+
+ Args:
+ data_dir: string, path to the root directory of images.
+
+ Assumes that the ImageNet data set resides in JPEG files located in
+ the following directory structure.
+
+ data_dir/n01440764/ILSVRC2012_val_00000293.JPEG
+ data_dir/n01440764/ILSVRC2012_val_00000543.JPEG
+
+ where 'n01440764' is the unique synset label associated with these images.
+
+ labels_file: string, path to the labels file.
+
+ The list of valid labels are held in this file. Assumes that the file
+ contains entries as such:
+ n01440764
+ n01443537
+ n01484850
+ where each line corresponds to a label expressed as a synset. We map
+ each synset contained in the file to an integer (based on the alphabetical
+ ordering) starting with the integer 1 corresponding to the synset
+ contained in the first line.
+
+ The reason we start the integer labels at 1 is to reserve label 0 as an
+ unused background class.
+
+ Returns:
+ filenames: list of strings; each string is a path to an image file.
+ synsets: list of strings; each string is a unique WordNet ID.
+ labels: list of integer; each integer identifies the ground truth.
+ """
+ print('Determining list of input files and labels from %s.' % data_dir)
+ challenge_synsets = [l.strip() for l in
+ tf.gfile.FastGFile(labels_file, 'r').readlines()]
+
+ labels = []
+ filenames = []
+ synsets = []
+
+ # Leave label index 0 empty as a background class.
+ label_index = 1
+
+ # Construct the list of JPEG files and labels.
+ for synset in challenge_synsets:
+ jpeg_file_path = '%s/%s/*.JPEG' % (data_dir, synset)
+ matching_files = tf.gfile.Glob(jpeg_file_path)
+
+ labels.extend([label_index] * len(matching_files))
+ synsets.extend([synset] * len(matching_files))
+ filenames.extend(matching_files)
+
+ if not label_index % 100:
+ print('Finished finding files in %d of %d classes.' % (
+ label_index, len(challenge_synsets)))
+ label_index += 1
+
+ # Shuffle the ordering of all image files in order to guarantee
+ # random ordering of the images with respect to label in the
+ # saved TFRecord files. Make the randomization repeatable.
+ shuffled_index = list(range(len(filenames)))
+ random.seed(12345)
+ random.shuffle(shuffled_index)
+
+ filenames = [filenames[i] for i in shuffled_index]
+ synsets = [synsets[i] for i in shuffled_index]
+ labels = [labels[i] for i in shuffled_index]
+
+ print('Found %d JPEG files across %d labels inside %s.' %
+ (len(filenames), len(challenge_synsets), data_dir))
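+  # (For the full ILSVRC-2012 training split this is roughly 1.28 million JPEGs
+  # across 1,000 synsets; the validation split contains 50,000 images.)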
+ return filenames, synsets, labels
+
+
+def _find_human_readable_labels(synsets, synset_to_human):
+ """Build a list of human-readable labels.
+
+ Args:
+ synsets: list of strings; each string is a unique WordNet ID.
+ synset_to_human: dict of synset to human labels, e.g.,
+ 'n02119022' --> 'red fox, Vulpes vulpes'
+
+ Returns:
+ List of human-readable strings corresponding to each synset.
+ """
+ humans = []
+ for s in synsets:
+ assert s in synset_to_human, ('Failed to find: %s' % s)
+ humans.append(synset_to_human[s])
+ return humans
+
+
+def _find_image_bounding_boxes(filenames, image_to_bboxes):
+ """Find the bounding boxes for a given image file.
+
+ Args:
+ filenames: list of strings; each string is a path to an image file.
+ image_to_bboxes: dictionary mapping image file names to a list of
+ bounding boxes. This list contains 0+ bounding boxes.
+ Returns:
+    List of bounding boxes for each image. Note that each entry in this list
+    may contain zero or more bounding boxes, one per bounding box annotation
+    for the image.
+ """
+ num_image_bbox = 0
+ bboxes = []
+ for f in filenames:
+ basename = os.path.basename(f)
+ if basename in image_to_bboxes:
+ bboxes.append(image_to_bboxes[basename])
+ num_image_bbox += 1
+ else:
+ bboxes.append([])
+ print('Found %d images with bboxes out of %d images' % (
+ num_image_bbox, len(filenames)))
+ return bboxes
+
+
+def _process_dataset(name, directory, num_shards, synset_to_human,
+ image_to_bboxes):
+ """Process a complete data set and save it as a TFRecord.
+
+ Args:
+ name: string, unique identifier specifying the data set.
+ directory: string, root path to the data set.
+ num_shards: integer number of shards for this data set.
+ synset_to_human: dict of synset to human labels, e.g.,
+ 'n02119022' --> 'red fox, Vulpes vulpes'
+ image_to_bboxes: dictionary mapping image file names to a list of
+ bounding boxes. This list contains 0+ bounding boxes.
+ """
+ filenames, synsets, labels = _find_image_files(directory, FLAGS.labels_file)
+ humans = _find_human_readable_labels(synsets, synset_to_human)
+ bboxes = _find_image_bounding_boxes(filenames, image_to_bboxes)
+ _process_image_files(name, filenames, synsets, labels,
+ humans, bboxes, num_shards)
+
+
+def _build_synset_lookup(imagenet_metadata_file):
+ """Build lookup for synset to human-readable label.
+
+ Args:
+ imagenet_metadata_file: string, path to file containing mapping from
+ synset to human-readable label.
+
+ Assumes each line of the file looks like:
+
+ n02119247 black fox
+ n02119359 silver fox
+ n02119477 red fox, Vulpes fulva
+
+ where each line corresponds to a unique mapping. Note that each line is
+  formatted as <synset>\t<human readable label>.
+
+ Returns:
+ Dictionary of synset to human labels, such as:
+ 'n02119022' --> 'red fox, Vulpes vulpes'
+ """
+ lines = tf.gfile.FastGFile(imagenet_metadata_file, 'r').readlines()
+ synset_to_human = {}
+ for l in lines:
+ if l:
+ parts = l.strip().split('\t')
+ assert len(parts) == 2
+ synset = parts[0]
+ human = parts[1]
+ synset_to_human[synset] = human
+ return synset_to_human
+
+
+def _build_bounding_box_lookup(bounding_box_file):
+ """Build a lookup from image file to bounding boxes.
+
+ Args:
+ bounding_box_file: string, path to file with bounding boxes annotations.
+
+ Assumes each line of the file looks like:
+
+ n00007846_64193.JPEG,0.0060,0.2620,0.7545,0.9940
+
+ where each line corresponds to one bounding box annotation associated
+ with an image. Each line can be parsed as:
+
+  <filename>, <xmin>, <ymin>, <xmax>, <ymax>
+
+  Note that there might exist multiple bounding box annotations associated
+ with an image file. This file is the output of process_bounding_boxes.py.
+
+ Returns:
+ Dictionary mapping image file names to a list of bounding boxes. This list
+ contains 0+ bounding boxes.
+ """
+ lines = tf.gfile.FastGFile(bounding_box_file, 'r').readlines()
+ images_to_bboxes = {}
+ num_bbox = 0
+ num_image = 0
+ for l in lines:
+ if l:
+ parts = l.split(',')
+ assert len(parts) == 5, ('Failed to parse: %s' % l)
+ filename = parts[0]
+ xmin = float(parts[1])
+ ymin = float(parts[2])
+ xmax = float(parts[3])
+ ymax = float(parts[4])
+ box = [xmin, ymin, xmax, ymax]
+
+ if filename not in images_to_bboxes:
+ images_to_bboxes[filename] = []
+ num_image += 1
+ images_to_bboxes[filename].append(box)
+ num_bbox += 1
+
+ print('Successfully read %d bounding boxes '
+ 'across %d images.' % (num_bbox, num_image))
+ return images_to_bboxes
+
+
+def main(unused_argv):
+ assert not FLAGS.train_shards % FLAGS.num_threads, (
+ 'Please make the FLAGS.num_threads commensurate with FLAGS.train_shards')
+ assert not FLAGS.validation_shards % FLAGS.num_threads, (
+ 'Please make the FLAGS.num_threads commensurate with '
+ 'FLAGS.validation_shards')
+ print('Saving results to %s' % FLAGS.output_directory)
+
+ # Build a map from synset to human-readable label.
+ synset_to_human = _build_synset_lookup(FLAGS.imagenet_metadata_file)
+ image_to_bboxes = _build_bounding_box_lookup(FLAGS.bounding_box_file)
+
+ # Run it!
+ _process_dataset('validation', FLAGS.validation_directory,
+ FLAGS.validation_shards, synset_to_human, image_to_bboxes)
+ _process_dataset('train', FLAGS.train_directory, FLAGS.train_shards,
+ synset_to_human, image_to_bboxes)
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/data/download_and_preprocess_flowers.sh" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/data/download_and_preprocess_flowers.sh"
new file mode 100644
index 00000000..ab8d451c
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/data/download_and_preprocess_flowers.sh"
@@ -0,0 +1,96 @@
+#!/bin/bash
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+# Script to download and preprocess the flowers data set. This data set
+# provides a demonstration for how to perform fine-tuning (i.e. transfer
+# learning) from one model to a new data set.
+#
+# This script provides a demonstration for how to prepare an arbitrary
+# data set for training an Inception v3 model.
+#
+# We demonstrate this with the flowers data set, which consists of labeled
+# flower images from 5 classes:
+#
+# daisy, dandelion, roses, sunflowers, tulips
+#
+# The final output of this script is a set of sharded TFRecord files containing
+# serialized Example protocol buffers. See build_image_data.py for
+# details of how the Example protocol buffer contains image data.
+#
+# usage:
+# ./download_and_preprocess_flowers.sh [data-dir]
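+#
+# example (illustrative output directory):
+#   ./download_and_preprocess_flowers.sh "${HOME}/flowers-data"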
+set -e
+
+if [ -z "$1" ]; then
+ echo "Usage: download_and_preprocess_flowers.sh [data dir]"
+ exit
+fi
+
+# Create the output and temporary directories.
+DATA_DIR="${1%/}"
+SCRATCH_DIR="${DATA_DIR}/raw-data"
+mkdir -p "${DATA_DIR}"
+mkdir -p "${SCRATCH_DIR}"
+WORK_DIR="$0.runfiles/inception/inception"
+
+# Download the flowers data.
+DATA_URL="http://download.tensorflow.org/example_images/flower_photos.tgz"
+CURRENT_DIR=$(pwd)
+cd "${DATA_DIR}"
+TARBALL="flower_photos.tgz"
+if [ ! -f ${TARBALL} ]; then
+ echo "Downloading flower data set."
+ curl -o ${TARBALL} "${DATA_URL}"
+else
+ echo "Skipping download of flower data."
+fi
+
+# Note the locations of the train and validation data.
+TRAIN_DIRECTORY="${SCRATCH_DIR}/train"
+VALIDATION_DIRECTORY="${SCRATCH_DIR}/validation"
+
+# Expand the data into the flower_photos/ directory and rename it as the
+# train directory.
+tar xf flower_photos.tgz
+rm -rf "${TRAIN_DIRECTORY}" "${VALIDATION_DIRECTORY}"
+mv flower_photos "${TRAIN_DIRECTORY}"
+
+# Generate a list of 5 labels: daisy, dandelion, roses, sunflowers, tulips
+LABELS_FILE="${SCRATCH_DIR}/labels.txt"
+ls -1 "${TRAIN_DIRECTORY}" | grep -v 'LICENSE' | sed 's/\///' | sort > "${LABELS_FILE}"
+
+# Generate the validation data set.
+while read LABEL; do
+ VALIDATION_DIR_FOR_LABEL="${VALIDATION_DIRECTORY}/${LABEL}"
+ TRAIN_DIR_FOR_LABEL="${TRAIN_DIRECTORY}/${LABEL}"
+
+ # Move the first randomly selected 100 images to the validation set.
+ mkdir -p "${VALIDATION_DIR_FOR_LABEL}"
+ VALIDATION_IMAGES=$(ls -1 "${TRAIN_DIR_FOR_LABEL}" | shuf | head -100)
+ for IMAGE in ${VALIDATION_IMAGES}; do
+ mv -f "${TRAIN_DIRECTORY}/${LABEL}/${IMAGE}" "${VALIDATION_DIR_FOR_LABEL}"
+ done
+done < "${LABELS_FILE}"
+
+# Build the TFRecords version of the image data.
+cd "${CURRENT_DIR}"
+BUILD_SCRIPT="${WORK_DIR}/build_image_data"
+OUTPUT_DIRECTORY="${DATA_DIR}"
+"${BUILD_SCRIPT}" \
+ --train_directory="${TRAIN_DIRECTORY}" \
+ --validation_directory="${VALIDATION_DIRECTORY}" \
+ --output_directory="${OUTPUT_DIRECTORY}" \
+ --labels_file="${LABELS_FILE}"
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/data/download_and_preprocess_flowers_mac.sh" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/data/download_and_preprocess_flowers_mac.sh"
new file mode 100644
index 00000000..15490563
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/data/download_and_preprocess_flowers_mac.sh"
@@ -0,0 +1,96 @@
+#!/bin/bash
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+# Script to download and preprocess the flowers data set. This data set
+# provides a demonstration for how to perform fine-tuning (i.e. transfer
+# learning) from one model to a new data set.
+#
+# This script provides a demonstration for how to prepare an arbitrary
+# data set for training an Inception v3 model.
+#
+# We demonstrate this with the flowers data set, which consists of labeled
+# flower images from 5 classes:
+#
+# daisy, dandelion, roses, sunflowers, tulips
+#
+# The final output of this script is a set of sharded TFRecord files containing
+# serialized Example protocol buffers. See build_image_data.py for
+# details of how the Example protocol buffer contains image data.
+#
+# usage:
+# ./download_and_preprocess_flowers.sh [data-dir]
+set -e
+
+if [ -z "$1" ]; then
+ echo "Usage: download_and_preprocess_flowers.sh [data dir]"
+ exit
+fi
+
+# Create the output and temporary directories.
+DATA_DIR="${1%/}"
+SCRATCH_DIR="${DATA_DIR}/raw-data/"
+mkdir -p "${DATA_DIR}"
+mkdir -p "${SCRATCH_DIR}"
+WORK_DIR="$0.runfiles/inception/inception"
+
+# Download the flowers data.
+DATA_URL="http://download.tensorflow.org/example_images/flower_photos.tgz"
+CURRENT_DIR=$(pwd)
+cd "${DATA_DIR}"
+TARBALL="flower_photos.tgz"
+if [ ! -f ${TARBALL} ]; then
+ echo "Downloading flower data set."
+ curl -o ${TARBALL} "${DATA_URL}"
+else
+ echo "Skipping download of flower data."
+fi
+
+# Note the locations of the train and validation data.
+TRAIN_DIRECTORY="${SCRATCH_DIR}train/"
+VALIDATION_DIRECTORY="${SCRATCH_DIR}validation/"
+
+# Expand the data into the flower_photos/ directory and rename it as the
+# train directory.
+tar xf flower_photos.tgz
+rm -rf "${TRAIN_DIRECTORY}" "${VALIDATION_DIRECTORY}"
+mv flower_photos "${TRAIN_DIRECTORY}"
+
+# Generate a list of 5 labels: daisy, dandelion, roses, sunflowers, tulips
+LABELS_FILE="${SCRATCH_DIR}/labels.txt"
+ls -1 "${TRAIN_DIRECTORY}" | grep -v 'LICENSE' | sed 's/\///' | sort > "${LABELS_FILE}"
+
+# Generate the validation data set.
+while read LABEL; do
+ VALIDATION_DIR_FOR_LABEL="${VALIDATION_DIRECTORY}${LABEL}"
+ TRAIN_DIR_FOR_LABEL="${TRAIN_DIRECTORY}${LABEL}"
+
+ # Move the first randomly selected 100 images to the validation set.
+ mkdir -p "${VALIDATION_DIR_FOR_LABEL}"
+ VALIDATION_IMAGES=$(ls -1 "${TRAIN_DIR_FOR_LABEL}" | gshuf | head -100)
+ for IMAGE in ${VALIDATION_IMAGES}; do
+ mv -f "${TRAIN_DIRECTORY}${LABEL}/${IMAGE}" "${VALIDATION_DIR_FOR_LABEL}"
+ done
+done < "${LABELS_FILE}"
+
+# Build the TFRecords version of the image data.
+cd "${CURRENT_DIR}"
+BUILD_SCRIPT="${WORK_DIR}/build_image_data"
+OUTPUT_DIRECTORY="${DATA_DIR}"
+"${BUILD_SCRIPT}" \
+ --train_directory="${TRAIN_DIRECTORY}" \
+ --validation_directory="${VALIDATION_DIRECTORY}" \
+ --output_directory="${OUTPUT_DIRECTORY}" \
+ --labels_file="${LABELS_FILE}"
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/data/download_and_preprocess_imagenet.sh" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/data/download_and_preprocess_imagenet.sh"
new file mode 100644
index 00000000..6faae831
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/data/download_and_preprocess_imagenet.sh"
@@ -0,0 +1,101 @@
+#!/bin/bash
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+# Script to download and preprocess ImageNet Challenge 2012
+# training and validation data set.
+#
+# The final output of this script is a set of sharded TFRecord files containing
+# serialized Example protocol buffers. See build_imagenet_data.py for
+# details of how the Example protocol buffers contain the ImageNet data.
+#
+# The final output of this script appears as such:
+#
+# data_dir/train-00000-of-01024
+# data_dir/train-00001-of-01024
+# ...
+# data_dir/train-01023-of-01024
+#
+# and
+#
+# data_dir/validation-00000-of-00128
+# data_dir/validation-00001-of-00128
+# ...
+# data_dir/validation-00127-of-00128
+#
+# Note that this script may take several hours to run to completion. The
+# conversion of the ImageNet data to TFRecords alone takes 2-3 hours depending
+# on the speed of your machine. Please be patient.
+#
+# **IMPORTANT**
+# To download the raw images, the user must create an account with image-net.org
+# and generate a username and access_key. The latter two are required for
+# downloading the raw images.
+#
+# usage:
+# ./download_and_preprocess_imagenet.sh [data-dir]
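+#
+# example (illustrative; the credentials are placeholders):
+#   IMAGENET_USERNAME=my_user IMAGENET_ACCESS_KEY=my_key \
+#     ./download_and_preprocess_imagenet.sh "${HOME}/imagenet-data"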
+set -e
+
+if [ -z "$1" ]; then
+ echo "Usage: download_and_preprocess_imagenet.sh [data dir]"
+ exit
+fi
+
+# Create the output and temporary directories.
+DATA_DIR="${1%/}"
+SCRATCH_DIR="${DATA_DIR}/raw-data/"
+mkdir -p "${DATA_DIR}"
+mkdir -p "${SCRATCH_DIR}"
+WORK_DIR="$0.runfiles/inception/inception"
+
+# Download the ImageNet data.
+LABELS_FILE="${WORK_DIR}/data/imagenet_lsvrc_2015_synsets.txt"
+DOWNLOAD_SCRIPT="${WORK_DIR}/data/download_imagenet.sh"
+"${DOWNLOAD_SCRIPT}" "${SCRATCH_DIR}" "${LABELS_FILE}"
+
+# Note the locations of the train and validation data.
+TRAIN_DIRECTORY="${SCRATCH_DIR}train/"
+VALIDATION_DIRECTORY="${SCRATCH_DIR}validation/"
+
+# Preprocess the validation data by moving the images into the appropriate
+# sub-directory based on the label (synset) of the image.
+echo "Organizing the validation data into sub-directories."
+PREPROCESS_VAL_SCRIPT="${WORK_DIR}/data/preprocess_imagenet_validation_data.py"
+VAL_LABELS_FILE="${WORK_DIR}/data/imagenet_2012_validation_synset_labels.txt"
+
+"${PREPROCESS_VAL_SCRIPT}" "${VALIDATION_DIRECTORY}" "${VAL_LABELS_FILE}"
+
+# Convert the XML files for bounding box annotations into a single CSV.
+echo "Extracting bounding box information from XML."
+BOUNDING_BOX_SCRIPT="${WORK_DIR}/data/process_bounding_boxes.py"
+BOUNDING_BOX_FILE="${SCRATCH_DIR}/imagenet_2012_bounding_boxes.csv"
+BOUNDING_BOX_DIR="${SCRATCH_DIR}bounding_boxes/"
+
+"${BOUNDING_BOX_SCRIPT}" "${BOUNDING_BOX_DIR}" "${LABELS_FILE}" \
+ | sort > "${BOUNDING_BOX_FILE}"
+echo "Finished downloading and preprocessing the ImageNet data."
+
+# Build the TFRecords version of the ImageNet data.
+BUILD_SCRIPT="${WORK_DIR}/build_imagenet_data"
+OUTPUT_DIRECTORY="${DATA_DIR}"
+IMAGENET_METADATA_FILE="${WORK_DIR}/data/imagenet_metadata.txt"
+
+"${BUILD_SCRIPT}" \
+ --train_directory="${TRAIN_DIRECTORY}" \
+ --validation_directory="${VALIDATION_DIRECTORY}" \
+ --output_directory="${OUTPUT_DIRECTORY}" \
+ --imagenet_metadata_file="${IMAGENET_METADATA_FILE}" \
+ --labels_file="${LABELS_FILE}" \
+ --bounding_box_file="${BOUNDING_BOX_FILE}"
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/data/download_imagenet.sh" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/data/download_imagenet.sh"
new file mode 100644
index 00000000..f6c77781
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/data/download_imagenet.sh"
@@ -0,0 +1,104 @@
+#!/bin/bash
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+# Script to download ImageNet Challenge 2012 training and validation data set.
+#
+# Downloads and decompresses raw images and bounding boxes.
+#
+# **IMPORTANT**
+# To download the raw images, the user must create an account with image-net.org
+# and generate a username and access_key. The latter two are required for
+# downloading the raw images.
+#
+# usage:
+# ./download_imagenet.sh [dir name] [synsets file]
+set -e
+
+if [ "x$IMAGENET_ACCESS_KEY" == x -o "x$IMAGENET_USERNAME" == x ]; then
+  cat <<END
+ sys.exit(-1)
+ data_dir = sys.argv[1]
+ validation_labels_file = sys.argv[2]
+
+ # Read in the 50000 synsets associated with the validation data set.
+ labels = [l.strip() for l in open(validation_labels_file).readlines()]
+ unique_labels = set(labels)
+
+ # Make all sub-directories in the validation data dir.
+ for label in unique_labels:
+ labeled_data_dir = os.path.join(data_dir, label)
+ # Catch error if sub-directory exists
+ try:
+ os.makedirs(labeled_data_dir)
+ except OSError as e:
+ # Raise all errors but 'EEXIST'
+ if e.errno != errno.EEXIST:
+ raise
+
+ # Move all of the image to the appropriate sub-directory.
+ for i in range(len(labels)):
+ basename = 'ILSVRC2012_val_000%.5d.JPEG' % (i + 1)
+ original_filename = os.path.join(data_dir, basename)
+ if not os.path.exists(original_filename):
+ print('Failed to find: %s' % original_filename)
+ sys.exit(-1)
+ new_filename = os.path.join(data_dir, labels[i], basename)
+ os.rename(original_filename, new_filename)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/data/process_bounding_boxes.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/data/process_bounding_boxes.py"
new file mode 100644
index 00000000..5e9fd786
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/data/process_bounding_boxes.py"
@@ -0,0 +1,254 @@
+#!/usr/bin/python
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Process the ImageNet Challenge bounding boxes for TensorFlow model training.
+
+This script is called as
+
+process_bounding_boxes.py <dir> [synsets-file]
+
+Where <dir> is a directory containing the downloaded and unpacked bounding box
+data. If [synsets-file] is supplied, then only the bounding boxes whose
+synsets are contained within this file are returned. Note that the
+[synsets-file] file contains synset ids, one per line.
+
+The script dumps out a CSV text file in which each line contains an entry.
+ n00007846_64193.JPEG,0.0060,0.2620,0.7545,0.9940
+
+The entry can be read as:
+  <filename>, <xmin>, <ymin>, <xmax>, <ymax>
+
+The bounding box for <filename> contains two points (xmin, ymin) and
+(xmax, ymax) specifying the lower-left corner and upper-right corner of a
+bounding box in *relative* coordinates.
+
+The user supplies a directory where the XML files reside. The directory
+structure in the directory <dir> is assumed to look like this:
+
+<dir>/nXXXXXXXX/nXXXXXXXX_YYYY.xml
+
+Each XML file contains a bounding box annotation. The script:
+
+ (1) Parses the XML file and extracts the filename, label and bounding box info.
+
+ (2) The bounding box is specified in the XML files as integer (xmin, ymin) and
+ (xmax, ymax) *relative* to image size displayed to the human annotator. The
+ size of the image displayed to the human annotator is stored in the XML file
+ as integer (height, width).
+
+ Note that the displayed size will differ from the actual size of the image
+ downloaded from image-net.org. To make the bounding box annotation useable,
+ we convert bounding box to floating point numbers relative to displayed
+ height and width of the image.
+
+ Note that each XML file might contain N bounding box annotations.
+
+ Note that the points are all clamped at a range of [0.0, 1.0] because some
+ human annotations extend outside the range of the supplied image.
+
+ See details here: http://image-net.org/download-bboxes
+
+(3) By default, the script outputs all valid bounding boxes. If a
+ [synsets-file] is supplied, only the subset of bounding boxes associated
+ with those synsets are outputted. Importantly, one can supply a list of
+ synsets in the ImageNet Challenge and output the list of bounding boxes
+ associated with the training images of the ILSVRC.
+
+ We use these bounding boxes to inform the random distortion of images
+ supplied to the network.
+
+If you run this script successfully, you will see the following output
+to stderr:
+> Finished processing 544546 XML files.
+> Skipped 0 XML files not in ImageNet Challenge.
+> Skipped 0 bounding boxes not in ImageNet Challenge.
+> Wrote 615299 bounding boxes from 544546 annotated images.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import glob
+import os.path
+import sys
+import xml.etree.ElementTree as ET
+
+
+class BoundingBox(object):
+ pass
+
+
+def GetItem(name, root, index=0):
+ count = 0
+ for item in root.iter(name):
+ if count == index:
+ return item.text
+ count += 1
+ # Failed to find "index" occurrence of item.
+ return -1
+
+
+def GetInt(name, root, index=0):
+ # In some XML annotation files, the point values are not integers, but floats.
+ # So we add a float function to avoid ValueError.
+ return int(float(GetItem(name, root, index)))
+
+
+def FindNumberBoundingBoxes(root):
+ index = 0
+ while True:
+ if GetInt('xmin', root, index) == -1:
+ break
+ index += 1
+ return index
+
+
+def ProcessXMLAnnotation(xml_file):
+ """Process a single XML file containing a bounding box."""
+ # pylint: disable=broad-except
+ try:
+ tree = ET.parse(xml_file)
+ except Exception:
+ print('Failed to parse: ' + xml_file, file=sys.stderr)
+ return None
+ # pylint: enable=broad-except
+ root = tree.getroot()
+
+ num_boxes = FindNumberBoundingBoxes(root)
+ boxes = []
+
+ for index in range(num_boxes):
+ box = BoundingBox()
+ # Grab the 'index' annotation.
+ box.xmin = GetInt('xmin', root, index)
+ box.ymin = GetInt('ymin', root, index)
+ box.xmax = GetInt('xmax', root, index)
+ box.ymax = GetInt('ymax', root, index)
+
+ box.width = GetInt('width', root)
+ box.height = GetInt('height', root)
+ box.filename = GetItem('filename', root) + '.JPEG'
+ box.label = GetItem('name', root)
+
+ xmin = float(box.xmin) / float(box.width)
+ xmax = float(box.xmax) / float(box.width)
+ ymin = float(box.ymin) / float(box.height)
+ ymax = float(box.ymax) / float(box.height)
+
+ # Some images contain bounding box annotations that
+ # extend outside of the supplied image. See, e.g.
+ # n03127925/n03127925_147.xml
+ # Additionally, for some bounding boxes, the min > max
+ # or the box is entirely outside of the image.
+ min_x = min(xmin, xmax)
+ max_x = max(xmin, xmax)
+ box.xmin_scaled = min(max(min_x, 0.0), 1.0)
+ box.xmax_scaled = min(max(max_x, 0.0), 1.0)
+
+ min_y = min(ymin, ymax)
+ max_y = max(ymin, ymax)
+ box.ymin_scaled = min(max(min_y, 0.0), 1.0)
+ box.ymax_scaled = min(max(max_y, 0.0), 1.0)
+
+ boxes.append(box)
+
+ return boxes
+
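+# Example (illustrative path): ProcessXMLAnnotation('n02084071/n02084071_1.xml')
+# returns a list of BoundingBox objects whose *_scaled fields are clamped to
+# [0.0, 1.0] relative coordinates, or None if the XML fails to parse.
+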
+if __name__ == '__main__':
+ if len(sys.argv) < 2 or len(sys.argv) > 3:
+ print('Invalid usage\n'
+          'usage: process_bounding_boxes.py <dir> [synsets-file]',
+ file=sys.stderr)
+ sys.exit(-1)
+
+ xml_files = glob.glob(sys.argv[1] + '/*/*.xml')
+ print('Identified %d XML files in %s' % (len(xml_files), sys.argv[1]),
+ file=sys.stderr)
+
+ if len(sys.argv) == 3:
+ labels = set([l.strip() for l in open(sys.argv[2]).readlines()])
+ print('Identified %d synset IDs in %s' % (len(labels), sys.argv[2]),
+ file=sys.stderr)
+ else:
+ labels = None
+
+ skipped_boxes = 0
+ skipped_files = 0
+ saved_boxes = 0
+ saved_files = 0
+ for file_index, one_file in enumerate(xml_files):
+ # Example: <...>/n06470073/n00141669_6790.xml
+ label = os.path.basename(os.path.dirname(one_file))
+
+ # Determine if the annotation is from an ImageNet Challenge label.
+ if labels is not None and label not in labels:
+ skipped_files += 1
+ continue
+
+ bboxes = ProcessXMLAnnotation(one_file)
+ assert bboxes is not None, 'No bounding boxes found in ' + one_file
+
+ found_box = False
+ for bbox in bboxes:
+ if labels is not None:
+ if bbox.label != label:
+ # Note: There is a slight bug in the bounding box annotation data.
+ # Many of the dog labels have the human label 'Scottish_deerhound'
+ # instead of the synset ID 'n02092002' in the bbox.label field. As a
+ # simple hack to overcome this issue, we only exclude bbox labels
+ # *which are synset ID's* that do not match original synset label for
+ # the XML file.
+ if bbox.label in labels:
+ skipped_boxes += 1
+ continue
+
+ # Guard against improperly specified boxes.
+ if (bbox.xmin_scaled >= bbox.xmax_scaled or
+ bbox.ymin_scaled >= bbox.ymax_scaled):
+ skipped_boxes += 1
+ continue
+
+ # Note bbox.filename occasionally contains '%s' in the name. This is
+ # data set noise that is fixed by just using the basename of the XML file.
+ image_filename = os.path.splitext(os.path.basename(one_file))[0]
+ print('%s.JPEG,%.4f,%.4f,%.4f,%.4f' %
+ (image_filename,
+ bbox.xmin_scaled, bbox.ymin_scaled,
+ bbox.xmax_scaled, bbox.ymax_scaled))
+
+ saved_boxes += 1
+ found_box = True
+ if found_box:
+ saved_files += 1
+ else:
+ skipped_files += 1
+
+ if not file_index % 5000:
+ print('--> processed %d of %d XML files.' %
+ (file_index + 1, len(xml_files)),
+ file=sys.stderr)
+ print('--> skipped %d boxes and %d XML files.' %
+ (skipped_boxes, skipped_files), file=sys.stderr)
+
+ print('Finished processing %d XML files.' % len(xml_files), file=sys.stderr)
+ print('Skipped %d XML files not in ImageNet Challenge.' % skipped_files,
+ file=sys.stderr)
+ print('Skipped %d bounding boxes not in ImageNet Challenge.' % skipped_boxes,
+ file=sys.stderr)
+ print('Wrote %d bounding boxes from %d annotated images.' %
+ (saved_boxes, saved_files),
+ file=sys.stderr)
+ print('Finished.', file=sys.stderr)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/dataset.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/dataset.py"
new file mode 100644
index 00000000..752c97e0
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/dataset.py"
@@ -0,0 +1,103 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Small library that points to a data set.
+
+Methods of Data class:
+ data_files: Returns a python list of all (sharded) data set files.
+ num_examples_per_epoch: Returns the number of examples in the data set.
+ num_classes: Returns the number of classes in the data set.
+ reader: Return a reader for a single entry from the data set.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from abc import ABCMeta
+from abc import abstractmethod
+import os
+
+
+import tensorflow as tf
+
+FLAGS = tf.app.flags.FLAGS
+
+# Basic model parameters.
+tf.app.flags.DEFINE_string('data_dir', '/tmp/mydata',
+ """Path to the processed data, i.e. """
+ """TFRecord of Example protos.""")
+
+
+class Dataset(object):
+ """A simple class for handling data sets."""
+ __metaclass__ = ABCMeta
+
+ def __init__(self, name, subset):
+ """Initialize dataset using a subset and the path to the data."""
+ assert subset in self.available_subsets(), self.available_subsets()
+ self.name = name
+ self.subset = subset
+
+ @abstractmethod
+ def num_classes(self):
+ """Returns the number of classes in the data set."""
+ pass
+ # return 10
+
+ @abstractmethod
+ def num_examples_per_epoch(self):
+ """Returns the number of examples in the data subset."""
+ pass
+ # if self.subset == 'train':
+ # return 10000
+ # if self.subset == 'validation':
+ # return 1000
+
+ @abstractmethod
+ def download_message(self):
+ """Prints a download message for the Dataset."""
+ pass
+
+ def available_subsets(self):
+ """Returns the list of available subsets."""
+ return ['train', 'validation']
+
+ def data_files(self):
+ """Returns a python list of all (sharded) data subset files.
+
+ Returns:
+ python list of all (sharded) data set files.
+ Raises:
+ ValueError: if there are not data_files matching the subset.
+ """
+ tf_record_pattern = os.path.join(FLAGS.data_dir, '%s-*' % self.subset)
+ data_files = tf.gfile.Glob(tf_record_pattern)
+ if not data_files:
+ print('No files found for dataset %s/%s at %s' % (self.name,
+ self.subset,
+ FLAGS.data_dir))
+
+ self.download_message()
+ exit(-1)
+ return data_files
+
+ def reader(self):
+ """Return a reader for a single entry from the data set.
+
+ See io_ops.py for details of Reader class.
+
+ Returns:
+ Reader object that reads the data set.
+ """
+ return tf.TFRecordReader()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/flowers_data.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/flowers_data.py"
new file mode 100644
index 00000000..022b5234
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/flowers_data.py"
@@ -0,0 +1,52 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Small library that points to the flowers data set.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+
+from inception.dataset import Dataset
+
+
+class FlowersData(Dataset):
+ """Flowers data set."""
+
+ def __init__(self, subset):
+ super(FlowersData, self).__init__('Flowers', subset)
+
+ def num_classes(self):
+ """Returns the number of classes in the data set."""
+ return 5
+
+ def num_examples_per_epoch(self):
+ """Returns the number of examples in the data subset."""
+ if self.subset == 'train':
+ return 3170
+ if self.subset == 'validation':
+ return 500
+
+ def download_message(self):
+ """Instruction to download and extract the tarball from Flowers website."""
+
+ print('Failed to find any Flowers %s files'% self.subset)
+ print('')
+ print('If you have already downloaded and processed the data, then make '
+ 'sure to set --data_dir to point to the directory containing the '
+ 'location of the sharded TFRecords.\n')
+ print('Please see README.md for instructions on how to build '
+ 'the flowers dataset using download_and_preprocess_flowers.\n')
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/flowers_eval.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/flowers_eval.py"
new file mode 100644
index 00000000..ae3e9dc1
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/flowers_eval.py"
@@ -0,0 +1,40 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""A binary to evaluate Inception on the flowers data set.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import tensorflow as tf
+
+from inception import inception_eval
+from inception.flowers_data import FlowersData
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def main(unused_argv=None):
+ dataset = FlowersData(subset=FLAGS.subset)
+ assert dataset.data_files()
+ if tf.gfile.Exists(FLAGS.eval_dir):
+ tf.gfile.DeleteRecursively(FLAGS.eval_dir)
+ tf.gfile.MakeDirs(FLAGS.eval_dir)
+ inception_eval.evaluate(dataset)
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/flowers_train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/flowers_train.py"
new file mode 100644
index 00000000..1f044a53
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/flowers_train.py"
@@ -0,0 +1,41 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""A binary to train Inception on the flowers data set.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+
+import tensorflow as tf
+
+from inception import inception_train
+from inception.flowers_data import FlowersData
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def main(_):
+ dataset = FlowersData(subset=FLAGS.subset)
+ assert dataset.data_files()
+ if tf.gfile.Exists(FLAGS.train_dir):
+ tf.gfile.DeleteRecursively(FLAGS.train_dir)
+ tf.gfile.MakeDirs(FLAGS.train_dir)
+ inception_train.train(dataset)
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/image_processing.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/image_processing.py"
new file mode 100644
index 00000000..fe74f1b3
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/image_processing.py"
@@ -0,0 +1,513 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Read and preprocess image data.
+
+  Image processing occurs on a single image at a time. Images are read and
+ preprocessed in parallel across multiple threads. The resulting images
+ are concatenated together to form a single batch for training or evaluation.
+
+ -- Provide processed image data for a network:
+ inputs: Construct batches of evaluation examples of images.
+ distorted_inputs: Construct batches of training examples of images.
+ batch_inputs: Construct batches of training or evaluation examples of images.
+
+ -- Data processing:
+ parse_example_proto: Parses an Example proto containing a training example
+ of an image.
+
+ -- Image decoding:
+ decode_jpeg: Decode a JPEG encoded string into a 3-D float32 Tensor.
+
+ -- Image preprocessing:
+ image_preprocessing: Decode and preprocess one image for evaluation or training
+ distort_image: Distort one image for training a network.
+ eval_image: Prepare one image for evaluation.
+ distort_color: Distort the color in one image for training.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+FLAGS = tf.app.flags.FLAGS
+
+tf.app.flags.DEFINE_integer('batch_size', 32,
+ """Number of images to process in a batch.""")
+tf.app.flags.DEFINE_integer('image_size', 299,
+ """Provide square images of this size.""")
+tf.app.flags.DEFINE_integer('num_preprocess_threads', 4,
+ """Number of preprocessing threads per tower. """
+ """Please make this a multiple of 4.""")
+tf.app.flags.DEFINE_integer('num_readers', 4,
+ """Number of parallel readers during train.""")
+
+# Images are preprocessed asynchronously using multiple threads specified by
+# --num_preprocess_threads and the resulting processed images are stored in a
+# random shuffling queue. The shuffling queue dequeues --batch_size images
+# for processing on a given Inception tower. A larger shuffling queue guarantees
+# better mixing across examples within a batch and results in slightly higher
+# predictive performance in a trained model. Empirically,
+# --input_queue_memory_factor=16 works well. A value of 16 implies a queue size
+# of 1024*16 images. Assuming RGB 299x299 images, this implies a queue size of
+# 16GB. If the machine is memory limited, then decrease this factor to
+# decrease the CPU memory footprint, accordingly.
+tf.app.flags.DEFINE_integer('input_queue_memory_factor', 16,
+ """Size of the queue of preprocessed images. """
+ """Default is ideal but try smaller values, e.g. """
+ """4, 2 or 1, if host memory is constrained. See """
+ """comments in code for more details.""")
+
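+# (Back-of-the-envelope: 16 * 1024 images * 299 * 299 pixels * 3 channels
+# * 4 bytes per float is roughly 17.6 GB, which is where the ~16GB figure
+# quoted above comes from.)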
+
+def inputs(dataset, batch_size=None, num_preprocess_threads=None):
+ """Generate batches of ImageNet images for evaluation.
+
+ Use this function as the inputs for evaluating a network.
+
+ Note that some (minimal) image preprocessing occurs during evaluation
+ including central cropping and resizing of the image to fit the network.
+
+ Args:
+ dataset: instance of Dataset class specifying the dataset.
+ batch_size: integer, number of examples in batch
+ num_preprocess_threads: integer, total number of preprocessing threads but
+ None defaults to FLAGS.num_preprocess_threads.
+
+ Returns:
+    images: Images. 4D tensor of size [batch_size, FLAGS.image_size,
+                                       FLAGS.image_size, 3].
+ labels: 1-D integer Tensor of [FLAGS.batch_size].
+ """
+ if not batch_size:
+ batch_size = FLAGS.batch_size
+
+ # Force all input processing onto CPU in order to reserve the GPU for
+ # the forward inference and back-propagation.
+ with tf.device('/cpu:0'):
+ images, labels = batch_inputs(
+ dataset, batch_size, train=False,
+ num_preprocess_threads=num_preprocess_threads,
+ num_readers=1)
+
+ return images, labels
+
+
+def distorted_inputs(dataset, batch_size=None, num_preprocess_threads=None):
+ """Generate batches of distorted versions of ImageNet images.
+
+ Use this function as the inputs for training a network.
+
+ Distorting images provides a useful technique for augmenting the data
+ set during training in order to make the network invariant to aspects
+  of the image that do not affect the label.
+
+ Args:
+ dataset: instance of Dataset class specifying the dataset.
+ batch_size: integer, number of examples in batch
+ num_preprocess_threads: integer, total number of preprocessing threads but
+ None defaults to FLAGS.num_preprocess_threads.
+
+ Returns:
+ images: Images. 4D tensor of size [batch_size, FLAGS.image_size,
+ FLAGS.image_size, 3].
+ labels: 1-D integer Tensor of [batch_size].
+ """
+ if not batch_size:
+ batch_size = FLAGS.batch_size
+
+ # Force all input processing onto CPU in order to reserve the GPU for
+ # the forward inference and back-propagation.
+ with tf.device('/cpu:0'):
+ images, labels = batch_inputs(
+ dataset, batch_size, train=True,
+ num_preprocess_threads=num_preprocess_threads,
+ num_readers=FLAGS.num_readers)
+ return images, labels
+
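+# Typical wiring (sketch; mirrors how flowers_train.py passes a Dataset into
+# training):
+#   dataset = FlowersData(subset='train')
+#   images, labels = distorted_inputs(dataset)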
+
+def decode_jpeg(image_buffer, scope=None):
+ """Decode a JPEG string into one 3-D float image Tensor.
+
+ Args:
+ image_buffer: scalar string Tensor.
+ scope: Optional scope for name_scope.
+ Returns:
+ 3-D float Tensor with values ranging from [0, 1).
+ """
+ with tf.name_scope(values=[image_buffer], name=scope,
+ default_name='decode_jpeg'):
+ # Decode the string as an RGB JPEG.
+ # Note that the resulting image contains an unknown height and width
+ # that is set dynamically by decode_jpeg. In other words, the height
+ # and width of image is unknown at compile-time.
+ image = tf.image.decode_jpeg(image_buffer, channels=3)
+
+ # After this point, all image pixels reside in [0,1)
+ # until the very end, when they're rescaled to (-1, 1). The various
+ # adjust_* ops all require this range for dtype float.
+ image = tf.image.convert_image_dtype(image, dtype=tf.float32)
+ return image
+
+
+def distort_color(image, thread_id=0, scope=None):
+ """Distort the color of the image.
+
+ Each color distortion is non-commutative and thus ordering of the color ops
+ matters. Ideally we would randomly permute the ordering of the color ops.
+ Rather than adding that level of complication, we select a distinct ordering
+ of color ops for each preprocessing thread.
+
+ Args:
+ image: Tensor containing single image.
+ thread_id: preprocessing thread ID.
+ scope: Optional scope for name_scope.
+ Returns:
+ color-distorted image
+ """
+ with tf.name_scope(values=[image], name=scope, default_name='distort_color'):
+ color_ordering = thread_id % 2
+
+ if color_ordering == 0:
+ image = tf.image.random_brightness(image, max_delta=32. / 255.)
+ image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
+ image = tf.image.random_hue(image, max_delta=0.2)
+ image = tf.image.random_contrast(image, lower=0.5, upper=1.5)
+ elif color_ordering == 1:
+ image = tf.image.random_brightness(image, max_delta=32. / 255.)
+ image = tf.image.random_contrast(image, lower=0.5, upper=1.5)
+ image = tf.image.random_saturation(image, lower=0.5, upper=1.5)
+ image = tf.image.random_hue(image, max_delta=0.2)
+
+ # The random_* ops do not necessarily clamp.
+ image = tf.clip_by_value(image, 0.0, 1.0)
+ return image
+
+
+def distort_image(image, height, width, bbox, thread_id=0, scope=None):
+ """Distort one image for training a network.
+
+ Distorting images provides a useful technique for augmenting the data
+ set during training in order to make the network invariant to aspects
+ of the image that do not affect the label.
+
+ Args:
+ image: 3-D float Tensor of image
+ height: integer
+ width: integer
+ bbox: 3-D float Tensor of bounding boxes arranged [1, num_boxes, coords]
+ where each coordinate is [0, 1) and the coordinates are arranged
+ as [ymin, xmin, ymax, xmax].
+ thread_id: integer indicating the preprocessing thread.
+ scope: Optional scope for name_scope.
+ Returns:
+ 3-D float Tensor of distorted image used for training.
+ """
+ with tf.name_scope(values=[image, height, width, bbox], name=scope,
+ default_name='distort_image'):
+ # Each bounding box has shape [1, num_boxes, box coords] and
+ # the coordinates are ordered [ymin, xmin, ymax, xmax].
+
+ # Display the bounding box in the first thread only.
+ if not thread_id:
+ image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0),
+ bbox)
+ tf.summary.image('image_with_bounding_boxes', image_with_box)
+
+ # A large fraction of image datasets contain a human-annotated bounding
+ # box delineating the region of the image containing the object of interest.
+ # We choose to create a new bounding box for the object which is a randomly
+ # distorted version of the human-annotated bounding box that obeys an allowed
+ # range of aspect ratios, sizes and overlap with the human-annotated
+ # bounding box. If no box is supplied, then we assume the bounding box is
+ # the entire image.
+ sample_distorted_bounding_box = tf.image.sample_distorted_bounding_box(
+ tf.shape(image),
+ bounding_boxes=bbox,
+ min_object_covered=0.1,
+ aspect_ratio_range=[0.75, 1.33],
+ area_range=[0.05, 1.0],
+ max_attempts=100,
+ use_image_if_no_bounding_boxes=True)
+ bbox_begin, bbox_size, distort_bbox = sample_distorted_bounding_box
+ if not thread_id:
+ image_with_distorted_box = tf.image.draw_bounding_boxes(
+ tf.expand_dims(image, 0), distort_bbox)
+ tf.summary.image('images_with_distorted_bounding_box',
+ image_with_distorted_box)
+
+ # Crop the image to the specified bounding box.
+ distorted_image = tf.slice(image, bbox_begin, bbox_size)
+
+ # This resizing operation may distort the images because the aspect
+ # ratio is not respected. We select a resize method in a round robin
+ # fashion based on the thread number.
+ # Note that ResizeMethod contains 4 enumerated resizing methods.
+ resize_method = thread_id % 4
+ distorted_image = tf.image.resize_images(distorted_image, [height, width],
+ method=resize_method)
+ # Restore the shape since the dynamic slice based upon the bbox_size loses
+ # the third dimension.
+ distorted_image.set_shape([height, width, 3])
+ if not thread_id:
+ tf.summary.image('cropped_resized_image',
+ tf.expand_dims(distorted_image, 0))
+
+ # Randomly flip the image horizontally.
+ distorted_image = tf.image.random_flip_left_right(distorted_image)
+
+ # Randomly distort the colors.
+ distorted_image = distort_color(distorted_image, thread_id)
+
+ if not thread_id:
+ tf.summary.image('final_distorted_image',
+ tf.expand_dims(distorted_image, 0))
+ return distorted_image
+
+
+def eval_image(image, height, width, scope=None):
+ """Prepare one image for evaluation.
+
+ Args:
+ image: 3-D float Tensor
+ height: integer
+ width: integer
+ scope: Optional scope for name_scope.
+ Returns:
+ 3-D float Tensor of prepared image.
+ """
+ with tf.name_scope(values=[image, height, width], name=scope,
+ default_name='eval_image'):
+ # Crop the central region of the image, keeping 87.5% of each spatial
+ # dimension of the original image.
+ image = tf.image.central_crop(image, central_fraction=0.875)
+
+ # Resize the image to the original height and width.
+ image = tf.expand_dims(image, 0)
+ image = tf.image.resize_bilinear(image, [height, width],
+ align_corners=False)
+ image = tf.squeeze(image, [0])
+ return image
+
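+# Illustrative numbers (an assumption, not taken from the original code): for
+# a 640x480 source image, central_crop(central_fraction=0.875) keeps roughly
+# the middle 560x420 pixels, and the bilinear resize then maps that crop to
+# the requested [height, width] (FLAGS.image_size square during evaluation).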
+
+def image_preprocessing(image_buffer, bbox, train, thread_id=0):
+ """Decode and preprocess one image for evaluation or training.
+
+ Args:
+ image_buffer: JPEG encoded string Tensor
+ bbox: 3-D float Tensor of bounding boxes arranged [1, num_boxes, coords]
+ where each coordinate is [0, 1) and the coordinates are arranged as
+ [ymin, xmin, ymax, xmax].
+ train: boolean
+ thread_id: integer indicating preprocessing thread
+
+ Returns:
+ 3-D float Tensor containing an appropriately scaled image
+
+ Raises:
+ ValueError: if user does not provide bounding box
+ """
+ if bbox is None:
+ raise ValueError('Please supply a bounding box.')
+
+ image = decode_jpeg(image_buffer)
+ height = FLAGS.image_size
+ width = FLAGS.image_size
+
+ if train:
+ image = distort_image(image, height, width, bbox, thread_id)
+ else:
+ image = eval_image(image, height, width)
+
+ # Finally, rescale to [-1,1] instead of [0, 1)
+ image = tf.subtract(image, 0.5)
+ image = tf.multiply(image, 2.0)
+ return image
+
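+# Minimal usage sketch (illustrative only; `serialized` would come from a
+# TFRecord reader, as in batch_inputs below):
+#
+#   image_buffer, label, bbox, _ = parse_example_proto(serialized)
+#   train_img = image_preprocessing(image_buffer, bbox, train=True)
+#   eval_img = image_preprocessing(image_buffer, bbox, train=False)
+#   # Both are [FLAGS.image_size, FLAGS.image_size, 3] floats in [-1, 1].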
+
+def parse_example_proto(example_serialized):
+ """Parses an Example proto containing a training example of an image.
+
+ The output of the build_image_data.py image preprocessing script is a dataset
+ containing serialized Example protocol buffers. Each Example proto contains
+ the following fields:
+
+ image/height: 462
+ image/width: 581
+ image/colorspace: 'RGB'
+ image/channels: 3
+ image/class/label: 615
+ image/class/synset: 'n03623198'
+ image/class/text: 'knee pad'
+ image/object/bbox/xmin: 0.1
+ image/object/bbox/xmax: 0.9
+ image/object/bbox/ymin: 0.2
+ image/object/bbox/ymax: 0.6
+ image/object/bbox/label: 615
+ image/format: 'JPEG'
+ image/filename: 'ILSVRC2012_val_00041207.JPEG'
+ image/encoded:
+
+ Args:
+ example_serialized: scalar Tensor tf.string containing a serialized
+ Example protocol buffer.
+
+ Returns:
+ image_buffer: Tensor tf.string containing the contents of a JPEG file.
+ label: Tensor tf.int32 containing the label.
+ bbox: 3-D float Tensor of bounding boxes arranged [1, num_boxes, coords]
+ where each coordinate is [0, 1) and the coordinates are arranged as
+ [ymin, xmin, ymax, xmax].
+ text: Tensor tf.string containing the human-readable label.
+ """
+ # Dense features in Example proto.
+ feature_map = {
+ 'image/encoded': tf.FixedLenFeature([], dtype=tf.string,
+ default_value=''),
+ 'image/class/label': tf.FixedLenFeature([1], dtype=tf.int64,
+ default_value=-1),
+ 'image/class/text': tf.FixedLenFeature([], dtype=tf.string,
+ default_value=''),
+ }
+ sparse_float32 = tf.VarLenFeature(dtype=tf.float32)
+ # Sparse features in Example proto.
+ feature_map.update(
+ {k: sparse_float32 for k in ['image/object/bbox/xmin',
+ 'image/object/bbox/ymin',
+ 'image/object/bbox/xmax',
+ 'image/object/bbox/ymax']})
+
+ features = tf.parse_single_example(example_serialized, feature_map)
+ label = tf.cast(features['image/class/label'], dtype=tf.int32)
+
+ xmin = tf.expand_dims(features['image/object/bbox/xmin'].values, 0)
+ ymin = tf.expand_dims(features['image/object/bbox/ymin'].values, 0)
+ xmax = tf.expand_dims(features['image/object/bbox/xmax'].values, 0)
+ ymax = tf.expand_dims(features['image/object/bbox/ymax'].values, 0)
+
+ # Note that we impose an ordering of (y, x) just to make life difficult.
+ bbox = tf.concat(axis=0, values=[ymin, xmin, ymax, xmax])
+
+ # Force the variable number of bounding boxes into the shape
+ # [1, num_boxes, coords].
+ bbox = tf.expand_dims(bbox, 0)
+ bbox = tf.transpose(bbox, [0, 2, 1])
+
+ return features['image/encoded'], label, bbox, features['image/class/text']
+
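+# Shape bookkeeping for the bbox returned above, sketched for the single box
+# in the docstring example (xmin=0.1, xmax=0.9, ymin=0.2, ymax=0.6):
+#
+#   ymin/xmin/ymax/xmax          -> each [1, num_boxes], e.g. ymin = [[0.2]]
+#   tf.concat(axis=0, ...)       -> [4, num_boxes]:  [[0.2], [0.1], [0.6], [0.9]]
+#   tf.expand_dims(bbox, 0)      -> [1, 4, num_boxes]
+#   tf.transpose(..., [0, 2, 1]) -> [1, num_boxes, 4]: [[[0.2, 0.1, 0.6, 0.9]]]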
+
+def batch_inputs(dataset, batch_size, train, num_preprocess_threads=None,
+ num_readers=1):
+ """Contruct batches of training or evaluation examples from the image dataset.
+
+ Args:
+ dataset: instance of Dataset class specifying the dataset.
+ See dataset.py for details.
+ batch_size: integer
+ train: boolean
+ num_preprocess_threads: integer, total number of preprocessing threads
+ num_readers: integer, number of parallel readers
+
+ Returns:
+ images: 4-D float Tensor of a batch of images
+ labels: 1-D integer Tensor of [batch_size].
+
+ Raises:
+ ValueError: if data is not found
+ """
+ with tf.name_scope('batch_processing'):
+ data_files = dataset.data_files()
+ if data_files is None:
+ raise ValueError('No data files found for this dataset')
+
+ # Create filename_queue
+ if train:
+ filename_queue = tf.train.string_input_producer(data_files,
+ shuffle=True,
+ capacity=16)
+ else:
+ filename_queue = tf.train.string_input_producer(data_files,
+ shuffle=False,
+ capacity=1)
+ if num_preprocess_threads is None:
+ num_preprocess_threads = FLAGS.num_preprocess_threads
+
+ if num_preprocess_threads % 4:
+ raise ValueError('Please make num_preprocess_threads a multiple '
+ 'of 4 (%d %% 4 != 0).' % num_preprocess_threads)
+
+ if num_readers is None:
+ num_readers = FLAGS.num_readers
+
+ if num_readers < 1:
+ raise ValueError('Please make num_readers at least 1')
+
+ # Approximate number of examples per shard.
+ examples_per_shard = 1024
+ # Size the random shuffle queue to balance between good global
+ # mixing (more examples) and memory use (fewer examples).
+ # 1 image uses 299*299*3*4 bytes ~= 1.07MB
+ # The default input_queue_memory_factor is 16, implying a shuffling queue
+ # size of examples_per_shard * 16 * 1.07MB ~= 17.6GB
+ min_queue_examples = examples_per_shard * FLAGS.input_queue_memory_factor
+ if train:
+ examples_queue = tf.RandomShuffleQueue(
+ capacity=min_queue_examples + 3 * batch_size,
+ min_after_dequeue=min_queue_examples,
+ dtypes=[tf.string])
+ else:
+ examples_queue = tf.FIFOQueue(
+ capacity=examples_per_shard + 3 * batch_size,
+ dtypes=[tf.string])
+
+ # Create multiple readers to populate the queue of examples.
+ if num_readers > 1:
+ enqueue_ops = []
+ for _ in range(num_readers):
+ reader = dataset.reader()
+ _, value = reader.read(filename_queue)
+ enqueue_ops.append(examples_queue.enqueue([value]))
+
+ tf.train.queue_runner.add_queue_runner(
+ tf.train.queue_runner.QueueRunner(examples_queue, enqueue_ops))
+ example_serialized = examples_queue.dequeue()
+ else:
+ reader = dataset.reader()
+ _, example_serialized = reader.read(filename_queue)
+
+ images_and_labels = []
+ for thread_id in range(num_preprocess_threads):
+ # Parse a serialized Example proto to extract the image and metadata.
+ image_buffer, label_index, bbox, _ = parse_example_proto(
+ example_serialized)
+ image = image_preprocessing(image_buffer, bbox, train, thread_id)
+ images_and_labels.append([image, label_index])
+
+ images, label_index_batch = tf.train.batch_join(
+ images_and_labels,
+ batch_size=batch_size,
+ capacity=2 * num_preprocess_threads * batch_size)
+
+ # Reshape images into these desired dimensions.
+ height = FLAGS.image_size
+ width = FLAGS.image_size
+ depth = 3
+
+ images = tf.cast(images, tf.float32)
+ images = tf.reshape(images, shape=[batch_size, height, width, depth])
+
+ # Display the training images in the visualizer.
+ tf.summary.image('images', images)
+
+ return images, tf.reshape(label_index_batch, [batch_size])
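+# End-to-end usage sketch (illustrative; ImagenetData is defined in
+# imagenet_data.py and batch_size=32 is simply an example value):
+#
+#   from inception.imagenet_data import ImagenetData
+#   dataset = ImagenetData(subset='train')
+#   images, labels = distorted_inputs(dataset, batch_size=32)
+#   # images: [32, FLAGS.image_size, FLAGS.image_size, 3]; labels: [32]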
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/imagenet_data.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/imagenet_data.py"
new file mode 100644
index 00000000..0a6d22e1
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/imagenet_data.py"
@@ -0,0 +1,59 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Small library that points to the ImageNet data set.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+
+from inception.dataset import Dataset
+
+
+class ImagenetData(Dataset):
+ """ImageNet data set."""
+
+ def __init__(self, subset):
+ super(ImagenetData, self).__init__('ImageNet', subset)
+
+ def num_classes(self):
+ """Returns the number of classes in the data set."""
+ return 1000
+
+ def num_examples_per_epoch(self):
+ """Returns the number of examples in the data set."""
+ # Bounding box data consists of 615299 bounding boxes for 544546 images.
+ if self.subset == 'train':
+ return 1281167
+ if self.subset == 'validation':
+ return 50000
+
+ def download_message(self):
+ """Instruction to download and extract the tarball from Flowers website."""
+
+ print('Failed to find any ImageNet %s files' % self.subset)
+ print('')
+ print('If you have already downloaded and processed the data, then make '
+ 'sure to set --data_dir to point to the directory containing the '
+ 'location of the sharded TFRecords.\n')
+ print('If you have not downloaded and prepared the ImageNet data in the '
+ 'TFRecord format, you will need to do this at least once. This '
+ 'process could take several hours depending on the speed of your '
+ 'computer and network connection\n')
+ print('Please see README.md for instructions on how to build '
+ 'the ImageNet dataset using download_and_preprocess_imagenet.\n')
+ print('Note that the raw data size is 300 GB and the processed data size '
+ 'is 150 GB. Please ensure you have at least 500GB disk space.')
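+# Minimal usage sketch (illustrative):
+#
+#   dataset = ImagenetData(subset='validation')
+#   dataset.num_classes()             # 1000
+#   dataset.num_examples_per_epoch()  # 50000 for the validation split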
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/imagenet_distributed_train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/imagenet_distributed_train.py"
new file mode 100644
index 00000000..f3615e01
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/imagenet_distributed_train.py"
@@ -0,0 +1,66 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+# pylint: disable=line-too-long
+"""A binary to train Inception in a distributed manner using multiple systems.
+
+Please see accompanying README.md for details and instructions.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from inception import inception_distributed_train
+from inception.imagenet_data import ImagenetData
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def main(unused_args):
+ assert FLAGS.job_name in ['ps', 'worker'], 'job_name must be ps or worker'
+
+ # Extract all the hostnames for the ps and worker jobs to construct the
+ # cluster spec.
+ ps_hosts = FLAGS.ps_hosts.split(',')
+ worker_hosts = FLAGS.worker_hosts.split(',')
+ tf.logging.info('PS hosts are: %s' % ps_hosts)
+ tf.logging.info('Worker hosts are: %s' % worker_hosts)
+
+ cluster_spec = tf.train.ClusterSpec({'ps': ps_hosts,
+ 'worker': worker_hosts})
+ server = tf.train.Server(
+ cluster_spec,
+ job_name=FLAGS.job_name,
+ task_index=FLAGS.task_id,
+ protocol=FLAGS.protocol)
+
+ if FLAGS.job_name == 'ps':
+ # `ps` jobs wait for incoming connections from the workers.
+ server.join()
+ else:
+ # `worker` jobs will actually do the work.
+ dataset = ImagenetData(subset=FLAGS.subset)
+ assert dataset.data_files()
+ # Only the chief checks for or creates train_dir.
+ if FLAGS.task_id == 0:
+ if not tf.gfile.Exists(FLAGS.train_dir):
+ tf.gfile.MakeDirs(FLAGS.train_dir)
+ inception_distributed_train.train(server.target, dataset, cluster_spec)
+
+if __name__ == '__main__':
+ tf.logging.set_verbosity(tf.logging.INFO)
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/imagenet_eval.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/imagenet_eval.py"
new file mode 100644
index 00000000..e6f8bac2
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/imagenet_eval.py"
@@ -0,0 +1,46 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""A binary to evaluate Inception on the ImageNet data set.
+
+Note that using the supplied pre-trained inception checkpoint, the eval should
+achieve:
+ precision @ 1 = 0.7874 recall @ 5 = 0.9436 [50000 examples]
+
+See the README.md for more details.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import tensorflow as tf
+
+from inception import inception_eval
+from inception.imagenet_data import ImagenetData
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def main(unused_argv=None):
+ dataset = ImagenetData(subset=FLAGS.subset)
+ assert dataset.data_files()
+ if tf.gfile.Exists(FLAGS.eval_dir):
+ tf.gfile.DeleteRecursively(FLAGS.eval_dir)
+ tf.gfile.MakeDirs(FLAGS.eval_dir)
+ inception_eval.evaluate(dataset)
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/imagenet_train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/imagenet_train.py"
new file mode 100644
index 00000000..3ffb55ee
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/imagenet_train.py"
@@ -0,0 +1,41 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""A binary to train Inception on the ImageNet data set.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+
+import tensorflow as tf
+
+from inception import inception_train
+from inception.imagenet_data import ImagenetData
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def main(_):
+ dataset = ImagenetData(subset=FLAGS.subset)
+ assert dataset.data_files()
+ if tf.gfile.Exists(FLAGS.train_dir):
+ tf.gfile.DeleteRecursively(FLAGS.train_dir)
+ tf.gfile.MakeDirs(FLAGS.train_dir)
+ inception_train.train(dataset)
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/inception_distributed_train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/inception_distributed_train.py"
new file mode 100644
index 00000000..c1a589ac
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/inception_distributed_train.py"
@@ -0,0 +1,314 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""A library to train Inception using multiple replicas with synchronous update.
+
+Please see accompanying README.md for details and instructions.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from datetime import datetime
+import os.path
+import time
+
+import numpy as np
+import tensorflow as tf
+
+from inception import image_processing
+from inception import inception_model as inception
+from inception.slim import slim
+
+FLAGS = tf.app.flags.FLAGS
+
+tf.app.flags.DEFINE_string('job_name', '', 'One of "ps", "worker"')
+tf.app.flags.DEFINE_string('ps_hosts', '',
+ """Comma-separated list of hostname:port for the """
+ """parameter server jobs. e.g. """
+ """'machine1:2222,machine2:1111,machine2:2222'""")
+tf.app.flags.DEFINE_string('worker_hosts', '',
+ """Comma-separated list of hostname:port for the """
+ """worker jobs. e.g. """
+ """'machine1:2222,machine2:1111,machine2:2222'""")
+tf.app.flags.DEFINE_string('protocol', 'grpc',
+ """Communication protocol to use in distributed """
+ """execution (default grpc) """)
+
+tf.app.flags.DEFINE_string('train_dir', '/tmp/imagenet_train',
+ """Directory where to write event logs """
+ """and checkpoint.""")
+tf.app.flags.DEFINE_integer('max_steps', 1000000, 'Number of batches to run.')
+tf.app.flags.DEFINE_string('subset', 'train', 'Either "train" or "validation".')
+tf.app.flags.DEFINE_boolean('log_device_placement', False,
+ 'Whether to log device placement.')
+
+# Task ID is used to select the chief and also to access the local_step for
+# each replica to check staleness of the gradients in SyncReplicasOptimizer.
+tf.app.flags.DEFINE_integer(
+ 'task_id', 0, 'Task ID of the worker/replica running the training.')
+
+# More details can be found in the SyncReplicasOptimizer class:
+# tensorflow/python/training/sync_replicas_optimizer.py
+tf.app.flags.DEFINE_integer('num_replicas_to_aggregate', -1,
+ """Number of gradients to collect before """
+ """updating the parameters.""")
+tf.app.flags.DEFINE_integer('save_interval_secs', 10 * 60,
+ 'Save interval seconds.')
+tf.app.flags.DEFINE_integer('save_summaries_secs', 180,
+ 'Save summaries interval seconds.')
+
+# **IMPORTANT**
+# Please note that this learning rate schedule is heavily dependent on the
+# hardware architecture, batch size and any changes to the model architecture
+# specification. Selecting a finely tuned learning rate schedule is an
+# empirical process that requires some experimentation. Please see README.md
+# for more guidance and discussion.
+#
+# Learning rate decay factor selected from https://arxiv.org/abs/1604.00981
+tf.app.flags.DEFINE_float('initial_learning_rate', 0.045,
+ 'Initial learning rate.')
+tf.app.flags.DEFINE_float('num_epochs_per_decay', 2.0,
+ 'Epochs after which learning rate decays.')
+tf.app.flags.DEFINE_float('learning_rate_decay_factor', 0.94,
+ 'Learning rate decay factor.')
+
+# Constants dictating the learning rate schedule.
+RMSPROP_DECAY = 0.9 # Decay term for RMSProp.
+RMSPROP_MOMENTUM = 0.9 # Momentum in RMSProp.
+RMSPROP_EPSILON = 1.0 # Epsilon term for RMSProp.
+
+
+def train(target, dataset, cluster_spec):
+ """Train Inception on a dataset for a number of steps."""
+ # The numbers of workers and parameter servers are inferred from the worker
+ # and ps host strings.
+ num_workers = len(cluster_spec.as_dict()['worker'])
+ num_parameter_servers = len(cluster_spec.as_dict()['ps'])
+ # If no value is given, num_replicas_to_aggregate defaults to the number of
+ # workers.
+ if FLAGS.num_replicas_to_aggregate == -1:
+ num_replicas_to_aggregate = num_workers
+ else:
+ num_replicas_to_aggregate = FLAGS.num_replicas_to_aggregate
+
+ # Both must be greater than 0 for distributed training.
+ assert num_workers > 0 and num_parameter_servers > 0, (' num_workers and '
+ 'num_parameter_servers'
+ ' must be > 0.')
+
+ # Choose worker 0 as the chief. Note that any worker could be the chief
+ # but there should be only one chief.
+ is_chief = (FLAGS.task_id == 0)
+
+ # Ops are assigned to worker by default.
+ with tf.device('/job:worker/task:%d' % FLAGS.task_id):
+ # Variables and their related init/assign ops are assigned to ps.
+ with slim.scopes.arg_scope(
+ [slim.variables.variable, slim.variables.global_step],
+ device=slim.variables.VariableDeviceChooser(num_parameter_servers)):
+ # Create a variable to count the number of train() calls. This equals the
+ # number of updates applied to the variables.
+ global_step = slim.variables.global_step()
+
+ # Calculate the learning rate schedule.
+ num_batches_per_epoch = (dataset.num_examples_per_epoch() /
+ FLAGS.batch_size)
+ # Decay steps need to be divided by the number of replicas to aggregate.
+ decay_steps = int(num_batches_per_epoch * FLAGS.num_epochs_per_decay /
+ num_replicas_to_aggregate)
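+ # Worked example of the schedule above (illustrative numbers only): with
+ # the 1281167-example ImageNet train split, batch_size=32 and the default
+ # num_epochs_per_decay=2.0, num_batches_per_epoch ~= 40036, so aggregating
+ # over 8 replicas gives decay_steps = int(40036 * 2.0 / 8) = 10009.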
+
+ # Decay the learning rate exponentially based on the number of steps.
+ lr = tf.train.exponential_decay(FLAGS.initial_learning_rate,
+ global_step,
+ decay_steps,
+ FLAGS.learning_rate_decay_factor,
+ staircase=True)
+ # Add a summary to track the learning rate.
+ tf.summary.scalar('learning_rate', lr)
+
+ # Create an optimizer that performs gradient descent.
+ opt = tf.train.RMSPropOptimizer(lr,
+ RMSPROP_DECAY,
+ momentum=RMSPROP_MOMENTUM,
+ epsilon=RMSPROP_EPSILON)
+
+ images, labels = image_processing.distorted_inputs(
+ dataset,
+ batch_size=FLAGS.batch_size,
+ num_preprocess_threads=FLAGS.num_preprocess_threads)
+
+ # Number of classes in the Dataset label set plus 1.
+ # Label 0 is reserved for an (unused) background class.
+ num_classes = dataset.num_classes() + 1
+ logits = inception.inference(images, num_classes, for_training=True)
+ # Add classification loss.
+ inception.loss(logits, labels)
+
+ # Gather all of the losses including regularization losses.
+ losses = tf.get_collection(slim.losses.LOSSES_COLLECTION)
+ losses += tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
+
+ total_loss = tf.add_n(losses, name='total_loss')
+
+ if is_chief:
+ # Compute the moving average of all individual losses and the
+ # total loss.
+ loss_averages = tf.train.ExponentialMovingAverage(0.9, name='avg')
+ loss_averages_op = loss_averages.apply(losses + [total_loss])
+
+ # Attach a scalar summary to all individual losses and the total loss;
+ # do the same for the averaged version of the losses.
+ for l in losses + [total_loss]:
+ loss_name = l.op.name
+ # Name each loss as '(raw)' and name the moving average version of the
+ # loss as the original loss name.
+ tf.summary.scalar(loss_name + ' (raw)', l)
+ tf.summary.scalar(loss_name, loss_averages.average(l))
+
+ # Add dependency to compute loss_averages.
+ with tf.control_dependencies([loss_averages_op]):
+ total_loss = tf.identity(total_loss)
+
+ # Track the moving averages of all trainable variables.
+ # Note that we maintain a 'double-average' of the BatchNormalization
+ # global statistics.
+ # This is not needed when the number of replicas is small, but it is important
+ # for synchronous distributed training with tens of workers/replicas.
+ exp_moving_averager = tf.train.ExponentialMovingAverage(
+ inception.MOVING_AVERAGE_DECAY, global_step)
+
+ variables_to_average = (
+ tf.trainable_variables() + tf.moving_average_variables())
+
+ # Add histograms for model variables.
+ for var in variables_to_average:
+ tf.summary.histogram(var.op.name, var)
+
+ # Create synchronous replica optimizer.
+ opt = tf.train.SyncReplicasOptimizer(
+ opt,
+ replicas_to_aggregate=num_replicas_to_aggregate,
+ total_num_replicas=num_workers,
+ variable_averages=exp_moving_averager,
+ variables_to_average=variables_to_average)
+
+ batchnorm_updates = tf.get_collection(slim.ops.UPDATE_OPS_COLLECTION)
+ assert batchnorm_updates, 'Batchnorm updates are missing'
+ batchnorm_updates_op = tf.group(*batchnorm_updates)
+ # Add dependency to compute batchnorm_updates.
+ with tf.control_dependencies([batchnorm_updates_op]):
+ total_loss = tf.identity(total_loss)
+
+ # Compute gradients with respect to the loss.
+ grads = opt.compute_gradients(total_loss)
+
+ # Add histograms for gradients.
+ for grad, var in grads:
+ if grad is not None:
+ tf.summary.histogram(var.op.name + '/gradients', grad)
+
+ apply_gradients_op = opt.apply_gradients(grads, global_step=global_step)
+
+ with tf.control_dependencies([apply_gradients_op]):
+ train_op = tf.identity(total_loss, name='train_op')
+
+ # Get chief queue_runners and init_tokens, which is used to synchronize
+ # replicas. More details can be found in SyncReplicasOptimizer.
+ chief_queue_runners = [opt.get_chief_queue_runner()]
+ init_tokens_op = opt.get_init_tokens_op()
+
+ # Create a saver.
+ saver = tf.train.Saver()
+
+ # Build the summary operation based on the TF collection of Summaries.
+ summary_op = tf.summary.merge_all()
+
+ # Build an initialization operation to run below.
+ init_op = tf.global_variables_initializer()
+
+ # We run the summaries in the same thread as the training operations by
+ # passing in None for summary_op to avoid a summary_thread being started.
+ # Running summaries and training operations in parallel could run out of
+ # GPU memory.
+ sv = tf.train.Supervisor(is_chief=is_chief,
+ logdir=FLAGS.train_dir,
+ init_op=init_op,
+ summary_op=None,
+ global_step=global_step,
+ saver=saver,
+ save_model_secs=FLAGS.save_interval_secs)
+
+ tf.logging.info('%s Supervisor' % datetime.now())
+
+ sess_config = tf.ConfigProto(
+ allow_soft_placement=True,
+ log_device_placement=FLAGS.log_device_placement)
+
+ # Get a session.
+ sess = sv.prepare_or_wait_for_session(target, config=sess_config)
+
+ # Start the queue runners.
+ queue_runners = tf.get_collection(tf.GraphKeys.QUEUE_RUNNERS)
+ sv.start_queue_runners(sess, queue_runners)
+ tf.logging.info('Started %d queues for processing input data.',
+ len(queue_runners))
+
+ if is_chief:
+ sv.start_queue_runners(sess, chief_queue_runners)
+ sess.run(init_tokens_op)
+
+ # Train, checking for Nans. Concurrently run the summary operation at a
+ # specified interval. Note that the summary_op and train_op never run
+ # simultaneously in order to prevent running out of GPU memory.
+ next_summary_time = time.time() + FLAGS.save_summaries_secs
+ while not sv.should_stop():
+ try:
+ start_time = time.time()
+ loss_value, step = sess.run([train_op, global_step])
+ assert not np.isnan(loss_value), 'Model diverged with loss = NaN'
+ if step > FLAGS.max_steps:
+ break
+ duration = time.time() - start_time
+
+ if step % 30 == 0:
+ examples_per_sec = FLAGS.batch_size / float(duration)
+ format_str = ('Worker %d: %s: step %d, loss = %.2f '
+ '(%.1f examples/sec; %.3f sec/batch)')
+ tf.logging.info(format_str %
+ (FLAGS.task_id, datetime.now(), step, loss_value,
+ examples_per_sec, duration))
+
+ # Determine if the summary_op should be run on the chief worker.
+ if is_chief and next_summary_time < time.time():
+ tf.logging.info('Running Summary operation on the chief.')
+ summary_str = sess.run(summary_op)
+ sv.summary_computed(sess, summary_str)
+ tf.logging.info('Finished running Summary operation.')
+
+ # Determine the next time for running the summary.
+ next_summary_time += FLAGS.save_summaries_secs
+ except:
+ if is_chief:
+ tf.logging.info('Chief got exception while running!')
+ raise
+
+ # Stop the supervisor. This also waits for service threads to finish.
+ sv.stop()
+
+ # Save after the training ends.
+ if is_chief:
+ saver.save(sess,
+ os.path.join(FLAGS.train_dir, 'model.ckpt'),
+ global_step=global_step)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/inception_eval.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/inception_eval.py"
new file mode 100644
index 00000000..e7cfc3c3
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/inception_eval.py"
@@ -0,0 +1,171 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""A library to evaluate Inception on a single GPU.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from datetime import datetime
+import math
+import os.path
+import time
+
+
+import numpy as np
+import tensorflow as tf
+
+from inception import image_processing
+from inception import inception_model as inception
+
+
+FLAGS = tf.app.flags.FLAGS
+
+tf.app.flags.DEFINE_string('eval_dir', '/tmp/imagenet_eval',
+ """Directory where to write event logs.""")
+tf.app.flags.DEFINE_string('checkpoint_dir', '/tmp/imagenet_train',
+ """Directory where to read model checkpoints.""")
+
+# Flags governing the frequency of the eval.
+tf.app.flags.DEFINE_integer('eval_interval_secs', 60 * 5,
+ """How often to run the eval.""")
+tf.app.flags.DEFINE_boolean('run_once', False,
+ """Whether to run eval only once.""")
+
+# Flags governing the data used for the eval.
+tf.app.flags.DEFINE_integer('num_examples', 50000,
+ """Number of examples to run. Note that the eval """
+ """ImageNet dataset contains 50000 examples.""")
+tf.app.flags.DEFINE_string('subset', 'validation',
+ """Either 'validation' or 'train'.""")
+
+
+def _eval_once(saver, summary_writer, top_1_op, top_5_op, summary_op):
+ """Runs Eval once.
+
+ Args:
+ saver: Saver.
+ summary_writer: Summary writer.
+ top_1_op: Top 1 op.
+ top_5_op: Top 5 op.
+ summary_op: Summary op.
+ """
+ with tf.Session() as sess:
+ ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir)
+ if ckpt and ckpt.model_checkpoint_path:
+ if os.path.isabs(ckpt.model_checkpoint_path):
+ # Restores from checkpoint with absolute path.
+ saver.restore(sess, ckpt.model_checkpoint_path)
+ else:
+ # Restores from checkpoint with relative path.
+ saver.restore(sess, os.path.join(FLAGS.checkpoint_dir,
+ ckpt.model_checkpoint_path))
+
+ # Assuming model_checkpoint_path looks something like:
+ # /my-favorite-path/imagenet_train/model.ckpt-0,
+ # extract global_step from it.
+ global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1]
+ print('Successfully loaded model from %s at step=%s.' %
+ (ckpt.model_checkpoint_path, global_step))
+ else:
+ print('No checkpoint file found')
+ return
+
+ # Start the queue runners.
+ coord = tf.train.Coordinator()
+ try:
+ threads = []
+ for qr in tf.get_collection(tf.GraphKeys.QUEUE_RUNNERS):
+ threads.extend(qr.create_threads(sess, coord=coord, daemon=True,
+ start=True))
+
+ num_iter = int(math.ceil(FLAGS.num_examples / FLAGS.batch_size))
+ # Counts the number of correct predictions.
+ count_top_1 = 0.0
+ count_top_5 = 0.0
+ total_sample_count = num_iter * FLAGS.batch_size
+ step = 0
+
+ print('%s: starting evaluation on (%s).' % (datetime.now(), FLAGS.subset))
+ start_time = time.time()
+ while step < num_iter and not coord.should_stop():
+ top_1, top_5 = sess.run([top_1_op, top_5_op])
+ count_top_1 += np.sum(top_1)
+ count_top_5 += np.sum(top_5)
+ step += 1
+ if step % 20 == 0:
+ duration = time.time() - start_time
+ sec_per_batch = duration / 20.0
+ examples_per_sec = FLAGS.batch_size / sec_per_batch
+ print('%s: [%d batches out of %d] (%.1f examples/sec; %.3f '
+ 'sec/batch)' % (datetime.now(), step, num_iter,
+ examples_per_sec, sec_per_batch))
+ start_time = time.time()
+
+ # Compute precision @ 1.
+ precision_at_1 = count_top_1 / total_sample_count
+ recall_at_5 = count_top_5 / total_sample_count
+ print('%s: precision @ 1 = %.4f recall @ 5 = %.4f [%d examples]' %
+ (datetime.now(), precision_at_1, recall_at_5, total_sample_count))
+
+ summary = tf.Summary()
+ summary.ParseFromString(sess.run(summary_op))
+ summary.value.add(tag='Precision @ 1', simple_value=precision_at_1)
+ summary.value.add(tag='Recall @ 5', simple_value=recall_at_5)
+ summary_writer.add_summary(summary, global_step)
+
+ except Exception as e: # pylint: disable=broad-except
+ coord.request_stop(e)
+
+ coord.request_stop()
+ coord.join(threads, stop_grace_period_secs=10)
+
+
+def evaluate(dataset):
+ """Evaluate model on Dataset for a number of steps."""
+ with tf.Graph().as_default():
+ # Get images and labels from the dataset.
+ images, labels = image_processing.inputs(dataset)
+
+ # Number of classes in the Dataset label set plus 1.
+ # Label 0 is reserved for an (unused) background class.
+ num_classes = dataset.num_classes() + 1
+
+ # Build a Graph that computes the logits predictions from the
+ # inference model.
+ logits, _ = inception.inference(images, num_classes)
+
+ # Calculate predictions.
+ top_1_op = tf.nn.in_top_k(logits, labels, 1)
+ top_5_op = tf.nn.in_top_k(logits, labels, 5)
+
+ # Restore the moving average version of the learned variables for eval.
+ variable_averages = tf.train.ExponentialMovingAverage(
+ inception.MOVING_AVERAGE_DECAY)
+ variables_to_restore = variable_averages.variables_to_restore()
+ saver = tf.train.Saver(variables_to_restore)
+
+ # Build the summary operation based on the TF collection of Summaries.
+ summary_op = tf.summary.merge_all()
+
+ graph_def = tf.get_default_graph().as_graph_def()
+ summary_writer = tf.summary.FileWriter(FLAGS.eval_dir,
+ graph_def=graph_def)
+
+ while True:
+ _eval_once(saver, summary_writer, top_1_op, top_5_op, summary_op)
+ if FLAGS.run_once:
+ break
+ time.sleep(FLAGS.eval_interval_secs)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/inception_model.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/inception_model.py"
new file mode 100644
index 00000000..fedae13a
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/inception_model.py"
@@ -0,0 +1,157 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Build the Inception v3 network on ImageNet data set.
+
+The Inception v3 architecture is described in http://arxiv.org/abs/1512.00567
+
+Summary of available functions:
+ inference: Compute inference on the model inputs to make a prediction
+ loss: Compute the loss of the prediction with respect to the labels
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import re
+
+import tensorflow as tf
+
+from inception.slim import slim
+
+FLAGS = tf.app.flags.FLAGS
+
+# If a model is trained using multiple GPUs, prefix all Op names with tower_name
+# to differentiate the operations. Note that this prefix is removed from the
+# names of the summaries when visualizing a model.
+TOWER_NAME = 'tower'
+
+# Batch normalization. Constant governing the exponential moving average of
+# the 'global' mean and variance for all activations.
+BATCHNORM_MOVING_AVERAGE_DECAY = 0.9997
+
+# The decay to use for the moving average.
+MOVING_AVERAGE_DECAY = 0.9999
+
+
+def inference(images, num_classes, for_training=False, restore_logits=True,
+ scope=None):
+ """Build Inception v3 model architecture.
+
+ See here for reference: http://arxiv.org/abs/1512.00567
+
+ Args:
+ images: Images returned from inputs() or distorted_inputs().
+ num_classes: number of classes
+ for_training: If set to `True`, build the inference model for training.
+ Kernels that operate differently for inference during training
+ e.g. dropout, are appropriately configured.
+ restore_logits: whether or not the logits layers should be restored.
+ Useful for fine-tuning a model with different num_classes.
+ scope: optional prefix string identifying the ImageNet tower.
+
+ Returns:
+ Logits. 2-D float Tensor.
+ Auxiliary Logits. 2-D float Tensor of side-head. Used for training only.
+ """
+ # Parameters for BatchNorm.
+ batch_norm_params = {
+ # Decay for the moving averages.
+ 'decay': BATCHNORM_MOVING_AVERAGE_DECAY,
+ # epsilon to prevent 0s in variance.
+ 'epsilon': 0.001,
+ }
+ # Set weight_decay for weights in Conv and FC layers.
+ with slim.arg_scope([slim.ops.conv2d, slim.ops.fc], weight_decay=0.00004):
+ with slim.arg_scope([slim.ops.conv2d],
+ stddev=0.1,
+ activation=tf.nn.relu,
+ batch_norm_params=batch_norm_params):
+ logits, endpoints = slim.inception.inception_v3(
+ images,
+ dropout_keep_prob=0.8,
+ num_classes=num_classes,
+ is_training=for_training,
+ restore_logits=restore_logits,
+ scope=scope)
+
+ # Add summaries for viewing model statistics on TensorBoard.
+ _activation_summaries(endpoints)
+
+ # Grab the logits associated with the side head. Employed during training.
+ auxiliary_logits = endpoints['aux_logits']
+
+ return logits, auxiliary_logits
+
+
+def loss(logits, labels, batch_size=None):
+ """Adds all losses for the model.
+
+ Note the final loss is not returned. Instead, the list of losses is collected
+ by slim.losses. The losses are accumulated in tower_loss() and summed to
+ calculate the total loss.
+
+ Args:
+ logits: List of logits from inference(). Each entry is a 2-D float Tensor.
+ labels: Labels from distorted_inputs or inputs(). 1-D tensor
+ of shape [batch_size]
+ batch_size: integer
+ """
+ if not batch_size:
+ batch_size = FLAGS.batch_size
+
+ # Reshape the labels into a dense Tensor of
+ # shape [FLAGS.batch_size, num_classes].
+ sparse_labels = tf.reshape(labels, [batch_size, 1])
+ indices = tf.reshape(tf.range(batch_size), [batch_size, 1])
+ concated = tf.concat(axis=1, values=[indices, sparse_labels])
+ num_classes = logits[0].get_shape()[-1].value
+ dense_labels = tf.sparse_to_dense(concated,
+ [batch_size, num_classes],
+ 1.0, 0.0)
+
+ # Cross entropy loss for the main softmax prediction.
+ slim.losses.cross_entropy_loss(logits[0],
+ dense_labels,
+ label_smoothing=0.1,
+ weight=1.0)
+
+ # Cross entropy loss for the auxiliary softmax head.
+ slim.losses.cross_entropy_loss(logits[1],
+ dense_labels,
+ label_smoothing=0.1,
+ weight=0.4,
+ scope='aux_loss')
+
+
+def _activation_summary(x):
+ """Helper to create summaries for activations.
+
+ Creates a summary that provides a histogram of activations.
+ Creates a summary that measures the sparsity of activations.
+
+ Args:
+ x: Tensor
+ """
+ # Remove 'tower_[0-9]/' from the name in case this is a multi-GPU training
+ # session. This helps the clarity of presentation on tensorboard.
+ tensor_name = re.sub('%s_[0-9]*/' % TOWER_NAME, '', x.op.name)
+ tf.summary.histogram(tensor_name + '/activations', x)
+ tf.summary.scalar(tensor_name + '/sparsity', tf.nn.zero_fraction(x))
+
+
+def _activation_summaries(endpoints):
+ with tf.name_scope('summaries'):
+ for act in endpoints.values():
+ _activation_summary(act)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/inception_train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/inception_train.py"
new file mode 100644
index 00000000..e1c32713
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/inception_train.py"
@@ -0,0 +1,357 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""A library to train Inception using multiple GPUs with synchronous updates.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import copy
+from datetime import datetime
+import os.path
+import re
+import time
+
+import numpy as np
+import tensorflow as tf
+
+from inception import image_processing
+from inception import inception_model as inception
+from inception.slim import slim
+
+FLAGS = tf.app.flags.FLAGS
+
+tf.app.flags.DEFINE_string('train_dir', '/tmp/imagenet_train',
+ """Directory where to write event logs """
+ """and checkpoint.""")
+tf.app.flags.DEFINE_integer('max_steps', 10000000,
+ """Number of batches to run.""")
+tf.app.flags.DEFINE_string('subset', 'train',
+ """Either 'train' or 'validation'.""")
+
+# Flags governing the hardware employed for running TensorFlow.
+tf.app.flags.DEFINE_integer('num_gpus', 1,
+ """How many GPUs to use.""")
+tf.app.flags.DEFINE_boolean('log_device_placement', False,
+ """Whether to log device placement.""")
+
+# Flags governing the type of training.
+tf.app.flags.DEFINE_boolean('fine_tune', False,
+ """If set, randomly initialize the final layer """
+ """of weights in order to train the network on a """
+ """new task.""")
+tf.app.flags.DEFINE_string('pretrained_model_checkpoint_path', '',
+ """If specified, restore this pretrained model """
+ """before beginning any training.""")
+
+# **IMPORTANT**
+# Please note that this learning rate schedule is heavily dependent on the
+# hardware architecture, batch size and any changes to the model architecture
+# specification. Selecting a finely tuned learning rate schedule is an
+# empirical process that requires some experimentation. Please see README.md
+# for more guidance and discussion.
+#
+# With 8 Tesla K40's and a batch size = 256, the following setup achieves
+# precision@1 = 73.5% after 100 hours and 100K steps (20 epochs).
+# Learning rate decay factor selected from http://arxiv.org/abs/1404.5997.
+tf.app.flags.DEFINE_float('initial_learning_rate', 0.1,
+ """Initial learning rate.""")
+tf.app.flags.DEFINE_float('num_epochs_per_decay', 30.0,
+ """Epochs after which learning rate decays.""")
+tf.app.flags.DEFINE_float('learning_rate_decay_factor', 0.16,
+ """Learning rate decay factor.""")
+
+# Constants dictating the learning rate schedule.
+RMSPROP_DECAY = 0.9 # Decay term for RMSProp.
+RMSPROP_MOMENTUM = 0.9 # Momentum in RMSProp.
+RMSPROP_EPSILON = 1.0 # Epsilon term for RMSProp.
+
+
+def _tower_loss(images, labels, num_classes, scope, reuse_variables=None):
+ """Calculate the total loss on a single tower running the ImageNet model.
+
+ We perform 'batch splitting'. This means that we cut up a batch across
+ multiple GPUs. For instance, if the batch size = 32 and num_gpus = 2,
+ then each tower will operate on a batch of 16 images.
+
+ Args:
+ images: Images. 4D tensor of size [batch_size, FLAGS.image_size,
+ FLAGS.image_size, 3].
+ labels: 1-D integer Tensor of [batch_size].
+ num_classes: number of classes
+ scope: unique prefix string identifying the ImageNet tower, e.g.
+ 'tower_0'.
+
+ Returns:
+ Tensor of shape [] containing the total loss for a batch of data
+ """
+ # When fine-tuning a model, we do not restore the logits but instead we
+ # randomly initialize the logits. The number of classes in the output of the
+ # logits is the number of classes in the specified Dataset.
+ restore_logits = not FLAGS.fine_tune
+
+ # Build inference Graph.
+ with tf.variable_scope(tf.get_variable_scope(), reuse=reuse_variables):
+ logits = inception.inference(images, num_classes, for_training=True,
+ restore_logits=restore_logits,
+ scope=scope)
+
+ # Build the portion of the Graph calculating the losses. Note that we will
+ # assemble the total_loss using a custom function below.
+ split_batch_size = images.get_shape().as_list()[0]
+ inception.loss(logits, labels, batch_size=split_batch_size)
+
+ # Assemble all of the losses for the current tower only.
+ losses = tf.get_collection(slim.losses.LOSSES_COLLECTION, scope)
+
+ # Calculate the total loss for the current tower.
+ regularization_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
+ total_loss = tf.add_n(losses + regularization_losses, name='total_loss')
+
+ # Compute the moving average of all individual losses and the total loss.
+ loss_averages = tf.train.ExponentialMovingAverage(0.9, name='avg')
+ loss_averages_op = loss_averages.apply(losses + [total_loss])
+
+ # Attach a scalar summary to all individual losses and the total loss; do the
+ # same for the averaged version of the losses.
+ for l in losses + [total_loss]:
+ # Remove 'tower_[0-9]/' from the name in case this is a multi-GPU training
+ # session. This helps the clarity of presentation on TensorBoard.
+ loss_name = re.sub('%s_[0-9]*/' % inception.TOWER_NAME, '', l.op.name)
+ # Name each loss as '(raw)' and name the moving average version of the loss
+ # as the original loss name.
+ tf.summary.scalar(loss_name + ' (raw)', l)
+ tf.summary.scalar(loss_name, loss_averages.average(l))
+
+ with tf.control_dependencies([loss_averages_op]):
+ total_loss = tf.identity(total_loss)
+ return total_loss
+
+
+def _average_gradients(tower_grads):
+ """Calculate the average gradient for each shared variable across all towers.
+
+ Note that this function provides a synchronization point across all towers.
+
+ Args:
+ tower_grads: List of lists of (gradient, variable) tuples. The outer list
+ is over individual gradients. The inner list is over the gradient
+ calculation for each tower.
+ Returns:
+ List of pairs of (gradient, variable) where the gradient has been averaged
+ across all towers.
+ """
+ average_grads = []
+ for grad_and_vars in zip(*tower_grads):
+ # Note that each grad_and_vars looks like the following:
+ # ((grad0_gpu0, var0_gpu0), ... , (grad0_gpuN, var0_gpuN))
+ grads = []
+ for g, _ in grad_and_vars:
+ # Add 0 dimension to the gradients to represent the tower.
+ expanded_g = tf.expand_dims(g, 0)
+
+ # Append on a 'tower' dimension which we will average over below.
+ grads.append(expanded_g)
+
+ # Average over the 'tower' dimension.
+ grad = tf.concat(axis=0, values=grads)
+ grad = tf.reduce_mean(grad, 0)
+
+ # Keep in mind that the Variables are redundant because they are shared
+ # across towers. So .. we will just return the first tower's pointer to
+ # the Variable.
+ v = grad_and_vars[0][1]
+ grad_and_var = (grad, v)
+ average_grads.append(grad_and_var)
+ return average_grads
+
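+# Shape sketch for the averaging above (illustrative, two towers and two
+# variables): if tower_grads = [[(g0_a, v0), (g1_a, v1)],
+#                               [(g0_b, v0), (g1_b, v1)]],
+# then zip(*tower_grads) yields ((g0_a, v0), (g0_b, v0)) and
+# ((g1_a, v1), (g1_b, v1)); each gradient pair is stacked along a new leading
+# 'tower' axis, averaged with tf.reduce_mean over that axis, and paired with
+# the first tower's variable, giving [(mean of g0, v0), (mean of g1, v1)].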
+
+def train(dataset):
+ """Train on dataset for a number of steps."""
+ with tf.Graph().as_default(), tf.device('/cpu:0'):
+ # Create a variable to count the number of train() calls. This equals the
+ # number of batches processed * FLAGS.num_gpus.
+ global_step = tf.get_variable(
+ 'global_step', [],
+ initializer=tf.constant_initializer(0), trainable=False)
+
+ # Calculate the learning rate schedule.
+ num_batches_per_epoch = (dataset.num_examples_per_epoch() /
+ FLAGS.batch_size)
+ decay_steps = int(num_batches_per_epoch * FLAGS.num_epochs_per_decay)
+
+ # Decay the learning rate exponentially based on the number of steps.
+ lr = tf.train.exponential_decay(FLAGS.initial_learning_rate,
+ global_step,
+ decay_steps,
+ FLAGS.learning_rate_decay_factor,
+ staircase=True)
+
+ # Create an optimizer that performs gradient descent.
+ opt = tf.train.RMSPropOptimizer(lr, RMSPROP_DECAY,
+ momentum=RMSPROP_MOMENTUM,
+ epsilon=RMSPROP_EPSILON)
+
+ # Get images and labels for ImageNet and split the batch across GPUs.
+ assert FLAGS.batch_size % FLAGS.num_gpus == 0, (
+ 'Batch size must be divisible by number of GPUs')
+ split_batch_size = int(FLAGS.batch_size / FLAGS.num_gpus)
+
+ # Override the number of preprocessing threads to account for the increased
+ # number of GPU towers.
+ num_preprocess_threads = FLAGS.num_preprocess_threads * FLAGS.num_gpus
+ images, labels = image_processing.distorted_inputs(
+ dataset,
+ num_preprocess_threads=num_preprocess_threads)
+
+ input_summaries = copy.copy(tf.get_collection(tf.GraphKeys.SUMMARIES))
+
+ # Number of classes in the Dataset label set plus 1.
+ # Label 0 is reserved for an (unused) background class.
+ num_classes = dataset.num_classes() + 1
+
+ # Split the batch of images and labels for towers.
+ images_splits = tf.split(axis=0, num_or_size_splits=FLAGS.num_gpus, value=images)
+ labels_splits = tf.split(axis=0, num_or_size_splits=FLAGS.num_gpus, value=labels)
+
+ # Calculate the gradients for each model tower.
+ tower_grads = []
+ reuse_variables = None
+ for i in range(FLAGS.num_gpus):
+ with tf.device('/gpu:%d' % i):
+ with tf.name_scope('%s_%d' % (inception.TOWER_NAME, i)) as scope:
+ # Force all Variables to reside on the CPU.
+ with slim.arg_scope([slim.variables.variable], device='/cpu:0'):
+ # Calculate the loss for one tower of the ImageNet model. This
+ # function constructs the entire ImageNet model but shares the
+ # variables across all towers.
+ loss = _tower_loss(images_splits[i], labels_splits[i], num_classes,
+ scope, reuse_variables)
+
+ # Reuse variables for the next tower.
+ reuse_variables = True
+
+ # Retain the summaries from the final tower.
+ summaries = tf.get_collection(tf.GraphKeys.SUMMARIES, scope)
+
+ # Retain the Batch Normalization update operations only from the
+ # final tower. Ideally, we should grab the updates from all towers,
+ # but these statistics accumulate extremely fast, so we can ignore
+ # the statistics from the other towers without significant detriment.
+ batchnorm_updates = tf.get_collection(slim.ops.UPDATE_OPS_COLLECTION,
+ scope)
+
+ # Calculate the gradients for the batch of data on this ImageNet
+ # tower.
+ grads = opt.compute_gradients(loss)
+
+ # Keep track of the gradients across all towers.
+ tower_grads.append(grads)
+
+ # We must calculate the mean of each gradient. Note that this is the
+ # synchronization point across all towers.
+ grads = _average_gradients(tower_grads)
+
+ # Add summaries for the input processing and global_step.
+ summaries.extend(input_summaries)
+
+ # Add a summary to track the learning rate.
+ summaries.append(tf.summary.scalar('learning_rate', lr))
+
+ # Add histograms for gradients.
+ for grad, var in grads:
+ if grad is not None:
+ summaries.append(
+ tf.summary.histogram(var.op.name + '/gradients', grad))
+
+ # Apply the gradients to adjust the shared variables.
+ apply_gradient_op = opt.apply_gradients(grads, global_step=global_step)
+
+ # Add histograms for trainable variables.
+ for var in tf.trainable_variables():
+ summaries.append(tf.summary.histogram(var.op.name, var))
+
+ # Track the moving averages of all trainable variables.
+ # Note that we maintain a "double-average" of the BatchNormalization
+ # global statistics. This is more complicated than need be, but we employ
+ # this for backward-compatibility with our previous models.
+ variable_averages = tf.train.ExponentialMovingAverage(
+ inception.MOVING_AVERAGE_DECAY, global_step)
+
+ # Another possibility is to use tf.slim.get_variables().
+ variables_to_average = (tf.trainable_variables() +
+ tf.moving_average_variables())
+ variables_averages_op = variable_averages.apply(variables_to_average)
+
+ # Group all updates into a single train op.
+ batchnorm_updates_op = tf.group(*batchnorm_updates)
+ train_op = tf.group(apply_gradient_op, variables_averages_op,
+ batchnorm_updates_op)
+
+ # Create a saver.
+ saver = tf.train.Saver(tf.global_variables())
+
+ # Build the summary operation from the last tower summaries.
+ summary_op = tf.summary.merge(summaries)
+
+ # Build an initialization operation to run below.
+ init = tf.global_variables_initializer()
+
+ # Start running operations on the Graph. allow_soft_placement must be set to
+ # True to build towers on GPU, as some of the ops do not have GPU
+ # implementations.
+ sess = tf.Session(config=tf.ConfigProto(
+ allow_soft_placement=True,
+ log_device_placement=FLAGS.log_device_placement))
+ sess.run(init)
+
+ if FLAGS.pretrained_model_checkpoint_path:
+ assert tf.gfile.Exists(FLAGS.pretrained_model_checkpoint_path)
+ variables_to_restore = tf.get_collection(
+ slim.variables.VARIABLES_TO_RESTORE)
+ restorer = tf.train.Saver(variables_to_restore)
+ restorer.restore(sess, FLAGS.pretrained_model_checkpoint_path)
+ print('%s: Pre-trained model restored from %s' %
+ (datetime.now(), FLAGS.pretrained_model_checkpoint_path))
+
+ # Start the queue runners.
+ tf.train.start_queue_runners(sess=sess)
+
+ summary_writer = tf.summary.FileWriter(
+ FLAGS.train_dir,
+ graph=sess.graph)
+
+ for step in range(FLAGS.max_steps):
+ start_time = time.time()
+ _, loss_value = sess.run([train_op, loss])
+ duration = time.time() - start_time
+
+ assert not np.isnan(loss_value), 'Model diverged with loss = NaN'
+
+ if step % 10 == 0:
+ examples_per_sec = FLAGS.batch_size / float(duration)
+ format_str = ('%s: step %d, loss = %.2f (%.1f examples/sec; %.3f '
+ 'sec/batch)')
+ print(format_str % (datetime.now(), step, loss_value,
+ examples_per_sec, duration))
+
+ if step % 100 == 0:
+ summary_str = sess.run(summary_op)
+ summary_writer.add_summary(summary_str, step)
+
+ # Save the model checkpoint periodically.
+ if step % 5000 == 0 or (step + 1) == FLAGS.max_steps:
+ checkpoint_path = os.path.join(FLAGS.train_dir, 'model.ckpt')
+ saver.save(sess, checkpoint_path, global_step=step)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/BUILD" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/BUILD"
new file mode 100644
index 00000000..174e77d5
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/BUILD"
@@ -0,0 +1,112 @@
+# Description:
+# Contains the operations and nets for building TensorFlow-Slim models.
+
+package(default_visibility = ["//inception:internal"])
+
+licenses(["notice"]) # Apache 2.0
+
+exports_files(["LICENSE"])
+
+py_library(
+ name = "scopes",
+ srcs = ["scopes.py"],
+)
+
+py_test(
+ name = "scopes_test",
+ size = "small",
+ srcs = ["scopes_test.py"],
+ deps = [
+ ":scopes",
+ ],
+)
+
+py_library(
+ name = "variables",
+ srcs = ["variables.py"],
+ deps = [
+ ":scopes",
+ ],
+)
+
+py_test(
+ name = "variables_test",
+ size = "small",
+ srcs = ["variables_test.py"],
+ deps = [
+ ":variables",
+ ],
+)
+
+py_library(
+ name = "losses",
+ srcs = ["losses.py"],
+)
+
+py_test(
+ name = "losses_test",
+ size = "small",
+ srcs = ["losses_test.py"],
+ deps = [
+ ":losses",
+ ],
+)
+
+py_library(
+ name = "ops",
+ srcs = ["ops.py"],
+ deps = [
+ ":losses",
+ ":scopes",
+ ":variables",
+ ],
+)
+
+py_test(
+ name = "ops_test",
+ size = "small",
+ srcs = ["ops_test.py"],
+ deps = [
+ ":ops",
+ ":variables",
+ ],
+)
+
+py_library(
+ name = "inception",
+ srcs = ["inception_model.py"],
+ deps = [
+ ":ops",
+ ":scopes",
+ ],
+)
+
+py_test(
+ name = "inception_test",
+ size = "medium",
+ srcs = ["inception_test.py"],
+ deps = [
+ ":inception",
+ ],
+)
+
+py_library(
+ name = "slim",
+ srcs = ["slim.py"],
+ deps = [
+ ":inception",
+ ":losses",
+ ":ops",
+ ":scopes",
+ ":variables",
+ ],
+)
+
+py_test(
+ name = "collections_test",
+ size = "small",
+ srcs = ["collections_test.py"],
+ deps = [
+ ":slim",
+ ],
+)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/README.md"
new file mode 100644
index 00000000..36d8b7eb
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/README.md"
@@ -0,0 +1,621 @@
+# TensorFlow-Slim
+
+TF-Slim is a lightweight library for defining, training and evaluating models in
+TensorFlow. It enables defining complex networks quickly and concisely while
+keeping a model's architecture transparent and its hyperparameters explicit.
+
+[TOC]
+
+## Teaser
+
+As a demonstration of the simplicity of using TF-Slim, compare the code
+necessary for defining the entire [VGG](http://www.robots.ox.ac.uk/~vgg/research/very_deep/) network using TF-Slim to
+the lengthy and verbose code needed to define just the first three layers (out
+of 16) in native TensorFlow:
+
+```python{.good}
+# VGG16 in TF-Slim.
+def vgg16(inputs):
+ with slim.arg_scope([slim.ops.conv2d, slim.ops.fc], stddev=0.01, weight_decay=0.0005):
+ net = slim.ops.repeat_op(2, inputs, slim.ops.conv2d, 64, [3, 3], scope='conv1')
+ net = slim.ops.max_pool(net, [2, 2], scope='pool1')
+ net = slim.ops.repeat_op(2, net, slim.ops.conv2d, 128, [3, 3], scope='conv2')
+ net = slim.ops.max_pool(net, [2, 2], scope='pool2')
+ net = slim.ops.repeat_op(3, net, slim.ops.conv2d, 256, [3, 3], scope='conv3')
+ net = slim.ops.max_pool(net, [2, 2], scope='pool3')
+ net = slim.ops.repeat_op(3, net, slim.ops.conv2d, 512, [3, 3], scope='conv4')
+ net = slim.ops.max_pool(net, [2, 2], scope='pool4')
+ net = slim.ops.repeat_op(3, net, slim.ops.conv2d, 512, [3, 3], scope='conv5')
+ net = slim.ops.max_pool(net, [2, 2], scope='pool5')
+ net = slim.ops.flatten(net, scope='flatten5')
+ net = slim.ops.fc(net, 4096, scope='fc6')
+ net = slim.ops.dropout(net, 0.5, scope='dropout6')
+ net = slim.ops.fc(net, 4096, scope='fc7')
+ net = slim.ops.dropout(net, 0.5, scope='dropout7')
+ net = slim.ops.fc(net, 1000, activation=None, scope='fc8')
+ return net
+```
+
+```python{.bad}
+# Layers 1-3 (out of 16) of VGG16 in native tensorflow.
+def vgg16(inputs):
+ with tf.name_scope('conv1_1') as scope:
+ kernel = tf.Variable(tf.truncated_normal([3, 3, 3, 64], dtype=tf.float32, stddev=1e-1), name='weights')
+ conv = tf.nn.conv2d(inputs, kernel, [1, 1, 1, 1], padding='SAME')
+ biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32), trainable=True, name='biases')
+ bias = tf.nn.bias_add(conv, biases)
+ conv1 = tf.nn.relu(bias, name=scope)
+ with tf.name_scope('conv1_2') as scope:
+ kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 64], dtype=tf.float32, stddev=1e-1), name='weights')
+ conv = tf.nn.conv2d(conv1, kernel, [1, 1, 1, 1], padding='SAME')
+ biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32), trainable=True, name='biases')
+ bias = tf.nn.bias_add(conv, biases)
+ conv1 = tf.nn.relu(bias, name=scope)
+ with tf.name_scope('pool1'):
+ pool1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID', name='pool1')
+```
+
+## Why TF-Slim?
+
+TF-Slim offers several advantages over just the built-in tensorflow libraries:
+
+* Allows one to define models much more compactly by eliminating boilerplate
+ code. This is accomplished through the use of [argument scoping](./scopes.py)
+ and numerous high level [operations](./ops.py). These tools increase
+ readability and maintainability, reduce the likelihood of errors from
+ copy-and-pasting hyperparameter values, and simplify hyperparameter tuning.
+* Makes developing models simple by providing commonly used [loss functions](./losses.py).
+* Provides a concise [definition](./inception_model.py) of the [Inception v3](http://arxiv.org/abs/1512.00567) network architecture, ready to be used
+ out-of-the-box or subsumed into new models.
+
+Additionally, TF-Slim was designed with several principles in mind:
+
+* The various modules of TF-Slim (scopes, variables, ops, losses) are
+ independent. This flexibility allows users to pick and choose components of
+ TF-Slim completely à la carte.
+* TF-Slim is written using a Functional Programming style. That means it's
+ super-lightweight and can be used right alongside any of TensorFlow's native
+ operations.
+* Makes re-using network architectures easy. This allows users to build new
+ networks on top of existing ones as well as fine-tuning pre-trained models
+ on new tasks.
+
+## What are the various components of TF-Slim?
+
+TF-Slim is composed of several parts which were designed to exist independently.
+These include:
+
+* [scopes.py](./scopes.py): provides a new scope named `arg_scope` that allows
+ a user to define default arguments for specific operations within that
+ scope.
+* [variables.py](./variables.py): provides convenience wrappers for variable
+ creation and manipulation.
+* [ops.py](./ops.py): provides high level operations for building models using
+ tensorflow.
+* [losses.py](./losses.py): contains commonly used loss functions.
+
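+To make the relationship between these pieces concrete, here is a minimal,
+illustrative sketch (not taken from the library documentation) of how they
+typically fit together; the helper `tiny_classifier` and its one-hot label
+input are assumptions of this example, and the exact op signatures are covered
+in the sections below:
+
+```python
+import tensorflow as tf
+
+from inception.slim import losses
+from inception.slim import ops
+from inception.slim import scopes
+from inception.slim import variables
+
+
+def tiny_classifier(images, one_hot_labels, num_classes=10):
+  # scopes: share default hyperparameters across all conv/fc layers below.
+  with scopes.arg_scope([ops.conv2d, ops.fc], stddev=0.01, weight_decay=0.0005):
+    # ops: layer-level building blocks instead of raw TensorFlow operations.
+    net = ops.conv2d(images, 32, [3, 3], scope='conv1')
+    net = ops.max_pool(net, [2, 2], scope='pool1')
+    net = ops.flatten(net, scope='flatten1')
+    logits = ops.fc(net, num_classes, activation=None, scope='logits')
+  # losses: registered in a collection so the total can be computed later.
+  loss = losses.cross_entropy_loss(logits, one_hot_labels)
+  # variables: every variable created by the ops above is tracked by TF-Slim.
+  model_variables = variables.get_variables()
+  return loss, model_variables
+```
+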
+## Defining Models
+
+Models can be succinctly defined using TF-Slim by combining its variables,
+operations and scopes. Each of these elements is defined below.
+
+### Variables
+
+Creating [`Variables`](https://www.tensorflow.org/how_tos/variables/index.html)
+in native TensorFlow requires either a predefined value or an initialization
+mechanism (e.g., random sampling from a normal distribution). Furthermore, if a
+variable needs to be created on a specific device, such as a GPU, the
+specification must be [made
+explicit](https://www.tensorflow.org/how_tos/using_gpu/index.html). To reduce
+the code required for variable creation, TF-Slim provides a set of thin wrapper
+functions in [variables.py](./variables.py) which allow callers to easily define
+variables.
+
+For example, to create a `weight` variable, initialize it using a truncated
+normal distribution, regularize it with an `l2_loss` and place it on the `CPU`,
+one need only declare the following:
+
+```python
+weights = variables.variable('weights',
+ shape=[10, 10, 3 , 3],
+ initializer=tf.truncated_normal_initializer(stddev=0.1),
+ regularizer=lambda t: losses.l2_loss(t, weight=0.05),
+ device='/cpu:0')
+```
+
+In addition to the functionality provided by `tf.Variable`, `slim.variables`
+keeps track of the variables created by `slim.ops` to define a model, which
+allows one to distinguish variables that belong to the model versus other
+variables.
+
+```python
+# Get all the variables defined by the model.
+model_variables = slim.variables.get_variables()
+
+# Get all the variables with a given name, e.g. 'weights' or 'biases'.
+weights = slim.variables.get_variables_by_name('weights')
+biases = slim.variables.get_variables_by_name('biases')
+
+# Get all the variables in VARIABLES_TO_RESTORE collection.
+variables_to_restore = tf.get_collection(slim.variables.VARIABLES_TO_RESTORE)
+
+
+weights = variables.variable('weights',
+ shape=[10, 10, 3 , 3],
+ initializer=tf.truncated_normal_initializer(stddev=0.1),
+ regularizer=lambda t: losses.l2_loss(t, weight=0.05),
+ device='/cpu:0')
+```
+
+### Operations (Layers)
+
+While the set of TensorFlow operations is quite extensive, builders of neural
+networks typically think of models in terms of "layers". A layer, such as a
+Convolutional Layer, a Fully Connected Layer or a BatchNorm Layer, is more
+abstract than a single TensorFlow operation and typically involves many such
+operations. For example, a Convolutional Layer in a neural network is built
+using several steps:
+
+1. Creating the weight variables
+2. Creating the bias variables
+3. Convolving the weights with the input from the previous layer
+4. Adding the biases to the result of the convolution.
+
+In Python code this can be rather laborious:
+
+```python
+input = ...
+with tf.name_scope('conv1_1') as scope:
+ kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128], dtype=tf.float32,
+ stddev=1e-1), name='weights')
+ conv = tf.nn.conv2d(input, kernel, [1, 1, 1, 1], padding='SAME')
+ biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32),
+ trainable=True, name='biases')
+ bias = tf.nn.bias_add(conv, biases)
+ conv1 = tf.nn.relu(bias, name=scope)
+```
+
+To alleviate the need to duplicate this code repeatedly, TF-Slim provides a
+number of convenient operations defined at the (more abstract) level of neural
+network layers. For example, compare the code above to an invocation of the
+TF-Slim code:
+
+```python
+input = ...
+net = slim.ops.conv2d(input, 128, [3, 3], scope='conv1_1')
+```
+
+TF-Slim provides numerous operations used in building neural networks which
+roughly correspond to such layers. These include:
+
+Layer | TF-Slim Op
+--------------------- | ------------------------
+Convolutional Layer | [ops.conv2d](./ops.py)
+Fully Connected Layer | [ops.fc](./ops.py)
+BatchNorm layer | [ops.batch_norm](./ops.py)
+Max Pooling Layer | [ops.max_pool](./ops.py)
+Avg Pooling Layer | [ops.avg_pool](./ops.py)
+Dropout Layer | [ops.dropout](./ops.py)
+
+[ops.py](./ops.py) also includes operations that are not really "layers" per se,
+but are often used to manipulate hidden unit representations during inference:
+
+Operation | TF-Slim Op
+--------- | ---------------------
+Flatten | [ops.flatten](./ops.py)
+
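+For instance, `ops.flatten` is commonly used to bridge a convolutional stack
+into a fully connected classifier head. A short illustrative sketch (the layer
+sizes here are arbitrary, not taken from the library documentation):
+
+```python
+# Collapse the [batch, height, width, channels] activation into
+# [batch, features] so that it can feed a fully connected layer.
+net = slim.ops.conv2d(images, 64, [3, 3], scope='conv1')
+net = slim.ops.max_pool(net, [2, 2], scope='pool1')
+net = slim.ops.flatten(net, scope='flatten1')
+logits = slim.ops.fc(net, 10, activation=None, scope='logits')
+```
+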
+TF-Slim also provides a meta-operation called `repeat_op` that allows one to
+repeatedly perform the same operation. Consider the following snippet from the
+[VGG](https://www.robots.ox.ac.uk/~vgg/research/very_deep/) network whose layers
+perform several convolutions in a row between pooling layers:
+
+```python
+net = ...
+net = slim.ops.conv2d(net, 256, [3, 3], scope='conv3_1')
+net = slim.ops.conv2d(net, 256, [3, 3], scope='conv3_2')
+net = slim.ops.conv2d(net, 256, [3, 3], scope='conv3_3')
+net = slim.ops.max_pool(net, [2, 2], scope='pool3')
+```
+
+This clear duplication of code can be removed via a standard loop:
+
+```python
+net = ...
+for i in range(3):
+ net = slim.ops.conv2d(net, 256, [3, 3], scope='conv3_%d' % (i+1))
+net = slim.ops.max_pool(net, [2, 2], scope='pool3')
+```
+
+While this does reduce the amount of duplication, it can be made even cleaner by
+using `repeat_op`:
+
+```python
+net = slim.ops.repeat_op(3, net, slim.ops.conv2d, 256, [3, 3], scope='conv3')
+net = slim.ops.max_pool(net, [2, 2], scope='pool3')
+```
+
+Notice that `repeat_op` not only applies the same arguments in-line, it is also
+smart enough to unroll the scopes such that the scope assigned to each
+subsequent call of `ops.conv2d` is appended with an underscore and the
+iteration number. More concretely, the scopes in the example above would be
+'conv3_1', 'conv3_2' and 'conv3_3'.
+
+### Scopes
+
+In addition to the scope mechanisms in TensorFlow ([name_scope](https://www.tensorflow.org/api_docs/python/framework.html#name_scope),
+[variable_scope](https://www.tensorflow.org/api_docs/python/state_ops.html#variable_scope)),
+TF-Slim adds a new scoping mechanism called "argument scope" or [arg_scope](./scopes.py). This new scope allows a user to specify one or more operations and
+a set of arguments which will be passed to each of the operations defined in the
+`arg_scope`. This functionality is best illustrated by example. Consider the
+following code snippet:
+
+```python
+net = slim.ops.conv2d(inputs, 64, [11, 11], 4, padding='SAME', stddev=0.01, weight_decay=0.0005, scope='conv1')
+net = slim.ops.conv2d(net, 128, [11, 11], padding='VALID', stddev=0.01, weight_decay=0.0005, scope='conv2')
+net = slim.ops.conv2d(net, 256, [11, 11], padding='SAME', stddev=0.01, weight_decay=0.0005, scope='conv3')
+```
+
+It should be clear that these three convolution layers share many of the same
+hyperparameters. Two have the same padding, and all three have the same
+weight_decay and weight standard deviation. Not only do the duplicated values
+make the code more difficult to read, they also place an additional burden on
+the writer, who must double-check that all of the values are identical in each
+step. One
+solution would be to specify default values using variables:
+
+```python
+padding='SAME'
+stddev=0.01
+weight_decay=0.0005
+net = slim.ops.conv2d(inputs, 64, [11, 11], 4, padding=padding, stddev=stddev, weight_decay=weight_decay, scope='conv1')
+net = slim.ops.conv2d(net, 128, [11, 11], padding='VALID', stddev=stddev, weight_decay=weight_decay, scope='conv2')
+net = slim.ops.conv2d(net, 256, [11, 11], padding=padding, stddev=stddev, weight_decay=weight_decay, scope='conv3')
+
+```
+
+This solution ensures that all three convolutions share the exact same
+hyperparameter values but doesn't reduce the code clutter. By using an
+`arg_scope`, we can both
+ensure that each layer uses the same values and simplify the code:
+
+```python
+ with slim.arg_scope([slim.ops.conv2d], padding='SAME', stddev=0.01, weight_decay=0.0005):
+ net = slim.ops.conv2d(inputs, 64, [11, 11], scope='conv1')
+ net = slim.ops.conv2d(net, 128, [11, 11], padding='VALID', scope='conv2')
+ net = slim.ops.conv2d(net, 256, [11, 11], scope='conv3')
+```
+
+As the example illustrates, the use of arg_scope makes the code cleaner, simpler
+and easier to maintain. Notice that while argument values are specified in the
+arg_scope, they can be overridden locally. In particular, while the padding
+argument has been set to 'SAME', the second convolution overrides it with the
+value of 'VALID'.
+
+One can also nest `arg_scope`s and use multiple operations in the same scope.
+For example:
+
+```python
+with arg_scope([slim.ops.conv2d, slim.ops.fc], stddev=0.01, weight_decay=0.0005):
+ with arg_scope([slim.ops.conv2d], padding='SAME'), slim.arg_scope([slim.ops.fc], bias=1.0):
+ net = slim.ops.conv2d(inputs, 64, [11, 11], 4, padding='VALID', scope='conv1')
+ net = slim.ops.conv2d(net, 256, [5, 5], stddev=0.03, scope='conv2')
+ net = slim.ops.flatten(net)
+ net = slim.ops.fc(net, 1000, activation=None, scope='fc')
+```
+
+In this example, the first `arg_scope` applies the same `stddev` and
+`weight_decay` arguments to the `conv2d` and `fc` ops in its scope. In the
+nested `arg_scope`s, additional defaults are specified for `conv2d`
+(`padding='SAME'`) and for `fc` (`bias=1.0`).
+
+In addition to `arg_scope`, TF-Slim provides several decorators that wrap the
+use of TensorFlow scopes. These include `@AddArgScope`, `@AddNameScope`,
+`@AddVariableScope`, `@AddOpScope` and `@AddVariableOpScope`. To illustrate
+their use, consider the following example.
+
+```python
+def MyNewOp(inputs):
+ varA = ...
+ varB = ...
+ outputs = tf.multiply(varA, inputs) + varB
+ return outputs
+
+```
+
+In this example, the user has created a new op which creates two variables. To
+ensure that these variables exist within a certain variable scope (to avoid
+collisions with variables with the same name), in standard TF, the op must be
+called within a variable scope:
+
+```python
+inputs = ...
+with tf.variable_scope('layer1'):
+ outputs = MyNewOp(inputs)
+```
+
+As an alternative, one can use TF-Slim's decorators to decorate the function and
+simplify the call:
+
+```python
+@AddVariableScope
+def MyNewOp(inputs):
+ ...
+ return outputs
+
+
+inputs = ...
+outputs = MyNewOp('layer1')
+```
+
+The `@AddVariableScope` decorator simply applies the `tf.variable_scope` scoping
+to the called function taking "layer1" as its argument. This allows the code to
+be written more concisely.
+
+### Losses
+
+The loss function defines a quantity that we want to minimize. For
+classification problems, this is typically the cross entropy between the true
+(one-hot) distribution and the predicted probability distribution across
+classes. For regression problems, this is often the sum of squared differences
+between the predicted and true values.
+
+Certain models, such as multi-task learning models, require the use of multiple
+loss functions simultaneously. In other words, the loss function ultimately being
+minimized is the sum of various other loss functions. For example, consider a
+model that predicts both the type of scene in an image as well as the depth from
+the camera of each pixel. This model's loss function would be the sum of the
+classification loss and depth prediction loss.
+
+TF-Slim provides an easy-to-use mechanism for defining and keeping track of loss
+functions via the [losses.py](./losses.py) module. Consider the simple case
+where we want to train the VGG network:
+
+```python
+# Load the images and labels.
+images, labels = ...
+
+# Create the model.
+predictions = ...
+
+# Define the loss functions and get the total loss.
+loss = losses.cross_entropy_loss(predictions, labels)
+```
+
+In this example, we start by creating the model (using TF-Slim's VGG
+implementation), and add the standard classification loss. Now, let's turn to the
+case where we have a multi-task model that produces multiple outputs:
+
+```python
+# Load the images and labels.
+images, scene_labels, depth_labels = ...
+
+# Create the model.
+scene_predictions, depth_predictions = CreateMultiTaskModel(images)
+
+# Define the loss functions and get the total loss.
+classification_loss = slim.losses.cross_entropy_loss(scene_predictions, scene_labels)
+sum_of_squares_loss = slim.losses.l2loss(depth_predictions - depth_labels)
+
+# The following two lines have the same effect:
+total_loss1 = classification_loss + sum_of_squares_loss
+total_loss2 = losses.GetTotalLoss()
+```
+
+In this example, we have two losses which we create by calling
+`losses.cross_entropy_loss` and `losses.l2loss`. We can obtain the
+total loss by adding them together (`total_loss1`) or by calling
+`losses.GetTotalLoss()`. How did this work? When you create a loss function via
+TF-Slim, TF-Slim adds the loss to a special TensorFlow collection of loss
+functions. This enables you to either manage the total loss manually, or allow
+TF-Slim to manage them for you.
+
+What if you want to let TF-Slim manage the losses for you but have a custom loss
+function? [losses.py](./losses.py) also has a function that adds this loss to
+TF-Slim's collection. For example:
+
+```python
+# Load the images and labels.
+images, scene_labels, depth_labels, pose_labels = ...
+
+# Create the model.
+scene_predictions, depth_predictions, pose_predictions = CreateMultiTaskModel(images)
+
+# Define the loss functions and get the total loss.
+classification_loss = slim.losses.cross_entropy_loss(scene_predictions, scene_labels)
+sum_of_squares_loss = slim.losses.l2loss(depth_predictions - depth_labels)
+pose_loss = MyCustomLossFunction(pose_predictions, pose_labels)
+tf.add_to_collection(slim.losses.LOSSES_COLLECTION, pose_loss) # Letting TF-Slim know about the additional loss.
+
+# The following two lines have the same effect:
+total_loss1 = classification_loss + sum_of_squares_loss + pose_loss
+total_loss2 = losses.GetTotalLoss()
+```
+
+In this example, we can again either produce the total loss function manually or
+let TF-Slim know about the additional loss and let TF-Slim handle the losses.
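+
+Note that the task losses created this way are kept separate from the weight
+regularization losses that the ops' `weight_decay` argument adds to
+`tf.GraphKeys.REGULARIZATION_LOSSES` (see `collections_test.py`). A minimal
+sketch, assuming both collections have been populated as above, of folding
+everything into one training objective:
+
+```python
+# Illustrative sketch: combine the task losses tracked by TF-Slim with the
+# weight-decay regularizers tracked by TensorFlow into a single scalar.
+task_losses = tf.get_collection(slim.losses.LOSSES_COLLECTION)
+reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
+total_loss = tf.add_n(task_losses + reg_losses, name='total_loss')
+```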
+
+## Putting the Pieces Together
+
+By combining TF-Slim variables, operations and scopes, we can write a normally
+very complex network in very few lines of code. For example, the entire [VGG](https://www.robots.ox.ac.uk/~vgg/research/very_deep/) architecture can be
+defined with just the following snippet:
+
+```python
+with arg_scope([slim.ops.conv2d, slim.ops.fc], stddev=0.01, weight_decay=0.0005):
+ net = slim.ops.repeat_op(2, inputs, slim.ops.conv2d, 64, [3, 3], scope='conv1')
+ net = slim.ops.max_pool(net, [2, 2], scope='pool1')
+ net = slim.ops.repeat_op(2, net, slim.ops.conv2d, 128, [3, 3], scope='conv2')
+ net = slim.ops.max_pool(net, [2, 2], scope='pool2')
+ net = slim.ops.repeat_op(3, net, slim.ops.conv2d, 256, [3, 3], scope='conv3')
+ net = slim.ops.max_pool(net, [2, 2], scope='pool3')
+ net = slim.ops.repeat_op(3, net, slim.ops.conv2d, 512, [3, 3], scope='conv4')
+ net = slim.ops.max_pool(net, [2, 2], scope='pool4')
+ net = slim.ops.repeat_op(3, net, slim.ops.conv2d, 512, [3, 3], scope='conv5')
+ net = slim.ops.max_pool(net, [2, 2], scope='pool5')
+ net = slim.ops.flatten(net, scope='flatten5')
+ net = slim.ops.fc(net, 4096, scope='fc6')
+ net = slim.ops.dropout(net, 0.5, scope='dropout6')
+ net = slim.ops.fc(net, 4096, scope='fc7')
+ net = slim.ops.dropout(net, 0.5, scope='dropout7')
+ net = slim.ops.fc(net, 1000, activation=None, scope='fc8')
+```
+
+## Re-using previously defined network architectures and pre-trained models
+
+### Brief Recap on Restoring Variables from a Checkpoint
+
+After a model has been trained, it can be restored using `tf.train.Saver()`
+which restores `Variables` from a given checkpoint. For many cases,
+`tf.train.Saver()` provides a simple mechanism to restore all or just a few
+variables.
+
+```python
+# Create some variables.
+v1 = tf.Variable(..., name="v1")
+v2 = tf.Variable(..., name="v2")
+...
+# Add ops to restore all the variables.
+restorer = tf.train.Saver()
+
+# Add ops to restore some variables.
+restorer = tf.train.Saver([v1, v2])
+
+# Later, launch the model, use the saver to restore variables from disk, and
+# do some work with the model.
+with tf.Session() as sess:
+ # Restore variables from disk.
+ restorer.restore(sess, "/tmp/model.ckpt")
+ print("Model restored.")
+ # Do some work with the model
+ ...
+```
+
+See [Restoring Variables](https://www.tensorflow.org/versions/r0.7/how_tos/variables/index.html#restoring-variables)
+and [Choosing which Variables to Save and Restore](https://www.tensorflow.org/versions/r0.7/how_tos/variables/index.html#choosing-which-variables-to-save-and-restore)
+sections of the [Variables](https://www.tensorflow.org/versions/r0.7/how_tos/variables/index.html) page for
+more details.
+
+### Using slim.variables to Track which Variables need to be Restored
+
+It is often desirable to fine-tune a pre-trained model on an entirely new
+dataset or even a new task. In these situations, one must specify which layers
+of the model should be reused (and consequently loaded from a checkpoint) and
+which layers are new. Indicating which variables or layers should be restored is
+a process that quickly becomes cumbersome when done manually.
+
+To help keep track of which variables to restore, `slim.variables` provides a
+`restore` argument when creating each Variable. By default, all variables are
+marked as `restore=True`, which results in all variables defined by the model
+being restored.
+
+```python
+# Create some variables.
+v1 = slim.variables.variable(name="v1", ..., restore=False)
+v2 = slim.variables.variable(name="v2", ...) # By default restore=True
+...
+# Get list of variables to restore (which contains only 'v2')
+variables_to_restore = tf.get_collection(slim.variables.VARIABLES_TO_RESTORE)
+restorer = tf.train.Saver(variables_to_restore)
+with tf.Session() as sess:
+ # Restore variables from disk.
+ restorer.restore(sess, "/tmp/model.ckpt")
+ print("Model restored.")
+ # Do some work with the model
+ ...
+```
+
+Additionally, every layer in `slim.ops` that creates slim.variables (such as
+`slim.ops.conv2d`, `slim.ops.fc`, `slim.ops.batch_norm`) also has a `restore`
+argument which controls whether the variables created by that layer should be
+restored or not.
+
+```python
+# Create a small network.
+net = slim.ops.conv2d(images, 32, [7, 7], stride=2, scope='conv1')
+net = slim.ops.conv2d(net, 64, [3, 3], scope='conv2')
+net = slim.ops.conv2d(net, 128, [3, 3], scope='conv3')
+net = slim.ops.max_pool(net, [3, 3], stride=2, scope='pool3')
+net = slim.ops.flatten(net)
+net = slim.ops.fc(net, 10, scope='logits', restore=False)
+...
+
+# VARIABLES_TO_RESTORE would contain the 'weights' and 'bias' defined by 'conv1',
+# 'conv2' and 'conv3', but not the ones defined by 'logits'.
+variables_to_restore = tf.get_collection(slim.variables.VARIABLES_TO_RESTORE)
+
+# Create a restorer that would restore only the needed variables.
+restorer = tf.train.Saver(variables_to_restore)
+
+# Create a saver that would save all the variables (including 'logits').
+saver = tf.train.Saver()
+with tf.Session() as sess:
+ # Restore variables from disk.
+ restorer.restore(sess, "/tmp/model.ckpt")
+ print("Model restored.")
+
+ # Do some work with the model
+ ...
+ saver.save(sess, "/tmp/new_model.ckpt")
+```
+
+Note: When restoring variables from a checkpoint, the `Saver` locates the
+variable names in a checkpoint file and maps them to variables in the current
+graph. Above, we created a saver by passing to it a list of variables. In this
+case, the names of the variables to locate in the checkpoint file were
+implicitly obtained from each provided variable's `var.op.name`.
+
+This works well when the variable names in the checkpoint file match those in
+the graph. However, sometimes we want to restore a model from a checkpoint
+whose variables have different names than those in the current graph. In this
+case, we must provide the `Saver` a dictionary that maps each checkpoint
+variable name to a graph variable. Consider the following example where the
+checkpoint variable names are obtained via a simple function:
+
+```python
+# Assuming that 'conv1/weights' should be restored from 'vgg16/conv1/weights'
+def name_in_checkpoint(var):
+ return 'vgg16/' + var.op.name
+
+# Assuming that 'conv1/weights' and 'conv1/bias' should be restored from 'conv1/params1' and 'conv1/params2'
+def name_in_checkpoint(var):
+ if "weights" in var.op.name:
+ return var.op.name.replace("weights", "params1")
+ if "bias" in var.op.name:
+ return var.op.name.replace("bias", "params2")
+
+variables_to_restore = tf.get_collection(slim.variables.VARIABLES_TO_RESTORE)
+variables_to_restore = {name_in_checkpoint(var):var for var in variables_to_restore}
+restorer = tf.train.Saver(variables_to_restore)
+with tf.Session() as sess:
+ # Restore variables from disk.
+ restorer.restore(sess, "/tmp/model.ckpt")
+```
+
+### Reusing the VGG16 network defined in TF-Slim on a different task, e.g. PASCAL-VOC
+
+Assuming one already has a pre-trained VGG16 model, one need only replace
+the last layer `fc8` with a new layer `fc8_pascal` and use `restore=False`.
+
+```python
+def vgg16_pascal(inputs):
+ with slim.arg_scope([slim.ops.conv2d, slim.ops.fc], stddev=0.01, weight_decay=0.0005):
+ net = slim.ops.repeat_op(2, inputs, slim.ops.conv2d, 64, [3, 3], scope='conv1')
+ net = slim.ops.max_pool(net, [2, 2], scope='pool1')
+ net = slim.ops.repeat_op(2, net, slim.ops.conv2d, 128, [3, 3], scope='conv2')
+ net = slim.ops.max_pool(net, [2, 2], scope='pool2')
+ net = slim.ops.repeat_op(3, net, slim.ops.conv2d, 256, [3, 3], scope='conv3')
+ net = slim.ops.max_pool(net, [2, 2], scope='pool3')
+ net = slim.ops.repeat_op(3, net, slim.ops.conv2d, 512, [3, 3], scope='conv4')
+ net = slim.ops.max_pool(net, [2, 2], scope='pool4')
+ net = slim.ops.repeat_op(3, net, slim.ops.conv2d, 512, [3, 3], scope='conv5')
+ net = slim.ops.max_pool(net, [2, 2], scope='pool5')
+ net = slim.ops.flatten(net, scope='flatten5')
+ net = slim.ops.fc(net, 4096, scope='fc6')
+ net = slim.ops.dropout(net, 0.5, scope='dropout6')
+ net = slim.ops.fc(net, 4096, scope='fc7')
+ net = slim.ops.dropout(net, 0.5, scope='dropout7')
+ # To reuse vgg16 on PASCAL-VOC, just change the last layer.
+ net = slim.ops.fc(net, 21, activation=None, scope='fc8_pascal', restore=False)
+ return net
+```
+
+## Authors
+
+Sergio Guadarrama and Nathan Silberman
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/collections_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/collections_test.py"
new file mode 100644
index 00000000..2a1f170e
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/collections_test.py"
@@ -0,0 +1,181 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for inception."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from inception.slim import slim
+
+
+def get_variables(scope=None):
+ return slim.variables.get_variables(scope)
+
+
+def get_variables_by_name(name):
+ return slim.variables.get_variables_by_name(name)
+
+
+class CollectionsTest(tf.test.TestCase):
+
+ def testVariables(self):
+ batch_size = 5
+ height, width = 299, 299
+ with self.test_session():
+ inputs = tf.random_uniform((batch_size, height, width, 3))
+ with slim.arg_scope([slim.ops.conv2d],
+ batch_norm_params={'decay': 0.9997}):
+ slim.inception.inception_v3(inputs)
+ self.assertEqual(len(get_variables()), 388)
+ self.assertEqual(len(get_variables_by_name('weights')), 98)
+ self.assertEqual(len(get_variables_by_name('biases')), 2)
+ self.assertEqual(len(get_variables_by_name('beta')), 96)
+ self.assertEqual(len(get_variables_by_name('gamma')), 0)
+ self.assertEqual(len(get_variables_by_name('moving_mean')), 96)
+ self.assertEqual(len(get_variables_by_name('moving_variance')), 96)
+
+ def testVariablesWithoutBatchNorm(self):
+ batch_size = 5
+ height, width = 299, 299
+ with self.test_session():
+ inputs = tf.random_uniform((batch_size, height, width, 3))
+ with slim.arg_scope([slim.ops.conv2d],
+ batch_norm_params=None):
+ slim.inception.inception_v3(inputs)
+ self.assertEqual(len(get_variables()), 196)
+ self.assertEqual(len(get_variables_by_name('weights')), 98)
+ self.assertEqual(len(get_variables_by_name('biases')), 98)
+ self.assertEqual(len(get_variables_by_name('beta')), 0)
+ self.assertEqual(len(get_variables_by_name('gamma')), 0)
+ self.assertEqual(len(get_variables_by_name('moving_mean')), 0)
+ self.assertEqual(len(get_variables_by_name('moving_variance')), 0)
+
+ def testVariablesByLayer(self):
+ batch_size = 5
+ height, width = 299, 299
+ with self.test_session():
+ inputs = tf.random_uniform((batch_size, height, width, 3))
+ with slim.arg_scope([slim.ops.conv2d],
+ batch_norm_params={'decay': 0.9997}):
+ slim.inception.inception_v3(inputs)
+ self.assertEqual(len(get_variables()), 388)
+ self.assertEqual(len(get_variables('conv0')), 4)
+ self.assertEqual(len(get_variables('conv1')), 4)
+ self.assertEqual(len(get_variables('conv2')), 4)
+ self.assertEqual(len(get_variables('conv3')), 4)
+ self.assertEqual(len(get_variables('conv4')), 4)
+ self.assertEqual(len(get_variables('mixed_35x35x256a')), 28)
+ self.assertEqual(len(get_variables('mixed_35x35x288a')), 28)
+ self.assertEqual(len(get_variables('mixed_35x35x288b')), 28)
+ self.assertEqual(len(get_variables('mixed_17x17x768a')), 16)
+ self.assertEqual(len(get_variables('mixed_17x17x768b')), 40)
+ self.assertEqual(len(get_variables('mixed_17x17x768c')), 40)
+ self.assertEqual(len(get_variables('mixed_17x17x768d')), 40)
+ self.assertEqual(len(get_variables('mixed_17x17x768e')), 40)
+ self.assertEqual(len(get_variables('mixed_8x8x2048a')), 36)
+ self.assertEqual(len(get_variables('mixed_8x8x2048b')), 36)
+ self.assertEqual(len(get_variables('logits')), 2)
+ self.assertEqual(len(get_variables('aux_logits')), 10)
+
+ def testVariablesToRestore(self):
+ batch_size = 5
+ height, width = 299, 299
+ with self.test_session():
+ inputs = tf.random_uniform((batch_size, height, width, 3))
+ with slim.arg_scope([slim.ops.conv2d],
+ batch_norm_params={'decay': 0.9997}):
+ slim.inception.inception_v3(inputs)
+ variables_to_restore = tf.get_collection(
+ slim.variables.VARIABLES_TO_RESTORE)
+ self.assertEqual(len(variables_to_restore), 388)
+ self.assertListEqual(variables_to_restore, get_variables())
+
+ def testVariablesToRestoreWithoutLogits(self):
+ batch_size = 5
+ height, width = 299, 299
+ with self.test_session():
+ inputs = tf.random_uniform((batch_size, height, width, 3))
+ with slim.arg_scope([slim.ops.conv2d],
+ batch_norm_params={'decay': 0.9997}):
+ slim.inception.inception_v3(inputs, restore_logits=False)
+ variables_to_restore = tf.get_collection(
+ slim.variables.VARIABLES_TO_RESTORE)
+ self.assertEqual(len(variables_to_restore), 384)
+
+ def testRegularizationLosses(self):
+ batch_size = 5
+ height, width = 299, 299
+ with self.test_session():
+ inputs = tf.random_uniform((batch_size, height, width, 3))
+ with slim.arg_scope([slim.ops.conv2d, slim.ops.fc], weight_decay=0.00004):
+ slim.inception.inception_v3(inputs)
+ losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
+ self.assertEqual(len(losses), len(get_variables_by_name('weights')))
+
+ def testTotalLossWithoutRegularization(self):
+ batch_size = 5
+ height, width = 299, 299
+ num_classes = 1001
+ with self.test_session():
+ inputs = tf.random_uniform((batch_size, height, width, 3))
+ dense_labels = tf.random_uniform((batch_size, num_classes))
+ with slim.arg_scope([slim.ops.conv2d, slim.ops.fc], weight_decay=0):
+ logits, end_points = slim.inception.inception_v3(
+ inputs,
+ num_classes=num_classes)
+ # Cross entropy loss for the main softmax prediction.
+ slim.losses.cross_entropy_loss(logits,
+ dense_labels,
+ label_smoothing=0.1,
+ weight=1.0)
+ # Cross entropy loss for the auxiliary softmax head.
+ slim.losses.cross_entropy_loss(end_points['aux_logits'],
+ dense_labels,
+ label_smoothing=0.1,
+ weight=0.4,
+ scope='aux_loss')
+ losses = tf.get_collection(slim.losses.LOSSES_COLLECTION)
+ self.assertEqual(len(losses), 2)
+
+ def testTotalLossWithRegularization(self):
+ batch_size = 5
+ height, width = 299, 299
+ num_classes = 1000
+ with self.test_session():
+ inputs = tf.random_uniform((batch_size, height, width, 3))
+ dense_labels = tf.random_uniform((batch_size, num_classes))
+ with slim.arg_scope([slim.ops.conv2d, slim.ops.fc], weight_decay=0.00004):
+ logits, end_points = slim.inception.inception_v3(inputs, num_classes)
+ # Cross entropy loss for the main softmax prediction.
+ slim.losses.cross_entropy_loss(logits,
+ dense_labels,
+ label_smoothing=0.1,
+ weight=1.0)
+ # Cross entropy loss for the auxiliary softmax head.
+ slim.losses.cross_entropy_loss(end_points['aux_logits'],
+ dense_labels,
+ label_smoothing=0.1,
+ weight=0.4,
+ scope='aux_loss')
+ losses = tf.get_collection(slim.losses.LOSSES_COLLECTION)
+ self.assertEqual(len(losses), 2)
+ reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
+ self.assertEqual(len(reg_losses), 98)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/inception_model.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/inception_model.py"
new file mode 100644
index 00000000..6136ab1b
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/inception_model.py"
@@ -0,0 +1,356 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Inception-v3 expressed in TensorFlow-Slim.
+
+ Usage:
+
+ # Parameters for BatchNorm.
+ batch_norm_params = {
+ # Decay for the batch_norm moving averages.
+ 'decay': BATCHNORM_MOVING_AVERAGE_DECAY,
+ # epsilon to prevent 0s in variance.
+ 'epsilon': 0.001,
+ }
+ # Set weight_decay for weights in Conv and FC layers.
+ with slim.arg_scope([slim.ops.conv2d, slim.ops.fc], weight_decay=0.00004):
+ with slim.arg_scope([slim.ops.conv2d],
+ stddev=0.1,
+ activation=tf.nn.relu,
+ batch_norm_params=batch_norm_params):
+ # Force all Variables to reside on the CPU.
+ with slim.arg_scope([slim.variables.variable], device='/cpu:0'):
+ logits, endpoints = slim.inception.inception_v3(
+ images,
+ dropout_keep_prob=0.8,
+ num_classes=num_classes,
+ is_training=for_training,
+ restore_logits=restore_logits,
+ scope=scope)
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from inception.slim import ops
+from inception.slim import scopes
+
+
+def inception_v3(inputs,
+ dropout_keep_prob=0.8,
+ num_classes=1000,
+ is_training=True,
+ restore_logits=True,
+ scope=''):
+ """Latest Inception from http://arxiv.org/abs/1512.00567.
+
+ "Rethinking the Inception Architecture for Computer Vision"
+
+ Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens,
+ Zbigniew Wojna
+
+ Args:
+ inputs: a tensor of size [batch_size, height, width, channels].
+ dropout_keep_prob: dropout keep_prob.
+ num_classes: number of predicted classes.
+ is_training: whether is training or not.
+ restore_logits: whether or not the logits layers should be restored.
+ Useful for fine-tuning a model with different num_classes.
+ scope: Optional scope for name_scope.
+
+ Returns:
+ logits: the pre-softmax output Tensor of shape [batch_size, num_classes].
+ end_points: a dictionary of relevant activations (including 'aux_logits'
+ and 'predictions') for external use.
+ """
+ # end_points will collect relevant activations for external use, for example
+ # summaries or losses.
+ end_points = {}
+ with tf.name_scope(scope, 'inception_v3', [inputs]):
+ with scopes.arg_scope([ops.conv2d, ops.fc, ops.batch_norm, ops.dropout],
+ is_training=is_training):
+ with scopes.arg_scope([ops.conv2d, ops.max_pool, ops.avg_pool],
+ stride=1, padding='VALID'):
+ # 299 x 299 x 3
+ end_points['conv0'] = ops.conv2d(inputs, 32, [3, 3], stride=2,
+ scope='conv0')
+ # 149 x 149 x 32
+ end_points['conv1'] = ops.conv2d(end_points['conv0'], 32, [3, 3],
+ scope='conv1')
+ # 147 x 147 x 32
+ end_points['conv2'] = ops.conv2d(end_points['conv1'], 64, [3, 3],
+ padding='SAME', scope='conv2')
+ # 147 x 147 x 64
+ end_points['pool1'] = ops.max_pool(end_points['conv2'], [3, 3],
+ stride=2, scope='pool1')
+ # 73 x 73 x 64
+ end_points['conv3'] = ops.conv2d(end_points['pool1'], 80, [1, 1],
+ scope='conv3')
+ # 73 x 73 x 80.
+ end_points['conv4'] = ops.conv2d(end_points['conv3'], 192, [3, 3],
+ scope='conv4')
+ # 71 x 71 x 192.
+ end_points['pool2'] = ops.max_pool(end_points['conv4'], [3, 3],
+ stride=2, scope='pool2')
+ # 35 x 35 x 192.
+ net = end_points['pool2']
+ # Inception blocks
+ with scopes.arg_scope([ops.conv2d, ops.max_pool, ops.avg_pool],
+ stride=1, padding='SAME'):
+ # mixed: 35 x 35 x 256.
+ with tf.variable_scope('mixed_35x35x256a'):
+ with tf.variable_scope('branch1x1'):
+ branch1x1 = ops.conv2d(net, 64, [1, 1])
+ with tf.variable_scope('branch5x5'):
+ branch5x5 = ops.conv2d(net, 48, [1, 1])
+ branch5x5 = ops.conv2d(branch5x5, 64, [5, 5])
+ with tf.variable_scope('branch3x3dbl'):
+ branch3x3dbl = ops.conv2d(net, 64, [1, 1])
+ branch3x3dbl = ops.conv2d(branch3x3dbl, 96, [3, 3])
+ branch3x3dbl = ops.conv2d(branch3x3dbl, 96, [3, 3])
+ with tf.variable_scope('branch_pool'):
+ branch_pool = ops.avg_pool(net, [3, 3])
+ branch_pool = ops.conv2d(branch_pool, 32, [1, 1])
+ net = tf.concat(axis=3, values=[branch1x1, branch5x5, branch3x3dbl, branch_pool])
+ end_points['mixed_35x35x256a'] = net
+ # mixed_1: 35 x 35 x 288.
+ with tf.variable_scope('mixed_35x35x288a'):
+ with tf.variable_scope('branch1x1'):
+ branch1x1 = ops.conv2d(net, 64, [1, 1])
+ with tf.variable_scope('branch5x5'):
+ branch5x5 = ops.conv2d(net, 48, [1, 1])
+ branch5x5 = ops.conv2d(branch5x5, 64, [5, 5])
+ with tf.variable_scope('branch3x3dbl'):
+ branch3x3dbl = ops.conv2d(net, 64, [1, 1])
+ branch3x3dbl = ops.conv2d(branch3x3dbl, 96, [3, 3])
+ branch3x3dbl = ops.conv2d(branch3x3dbl, 96, [3, 3])
+ with tf.variable_scope('branch_pool'):
+ branch_pool = ops.avg_pool(net, [3, 3])
+ branch_pool = ops.conv2d(branch_pool, 64, [1, 1])
+ net = tf.concat(axis=3, values=[branch1x1, branch5x5, branch3x3dbl, branch_pool])
+ end_points['mixed_35x35x288a'] = net
+ # mixed_2: 35 x 35 x 288.
+ with tf.variable_scope('mixed_35x35x288b'):
+ with tf.variable_scope('branch1x1'):
+ branch1x1 = ops.conv2d(net, 64, [1, 1])
+ with tf.variable_scope('branch5x5'):
+ branch5x5 = ops.conv2d(net, 48, [1, 1])
+ branch5x5 = ops.conv2d(branch5x5, 64, [5, 5])
+ with tf.variable_scope('branch3x3dbl'):
+ branch3x3dbl = ops.conv2d(net, 64, [1, 1])
+ branch3x3dbl = ops.conv2d(branch3x3dbl, 96, [3, 3])
+ branch3x3dbl = ops.conv2d(branch3x3dbl, 96, [3, 3])
+ with tf.variable_scope('branch_pool'):
+ branch_pool = ops.avg_pool(net, [3, 3])
+ branch_pool = ops.conv2d(branch_pool, 64, [1, 1])
+ net = tf.concat(axis=3, values=[branch1x1, branch5x5, branch3x3dbl, branch_pool])
+ end_points['mixed_35x35x288b'] = net
+ # mixed_3: 17 x 17 x 768.
+ with tf.variable_scope('mixed_17x17x768a'):
+ with tf.variable_scope('branch3x3'):
+ branch3x3 = ops.conv2d(net, 384, [3, 3], stride=2, padding='VALID')
+ with tf.variable_scope('branch3x3dbl'):
+ branch3x3dbl = ops.conv2d(net, 64, [1, 1])
+ branch3x3dbl = ops.conv2d(branch3x3dbl, 96, [3, 3])
+ branch3x3dbl = ops.conv2d(branch3x3dbl, 96, [3, 3],
+ stride=2, padding='VALID')
+ with tf.variable_scope('branch_pool'):
+ branch_pool = ops.max_pool(net, [3, 3], stride=2, padding='VALID')
+ net = tf.concat(axis=3, values=[branch3x3, branch3x3dbl, branch_pool])
+ end_points['mixed_17x17x768a'] = net
+ # mixed4: 17 x 17 x 768.
+ with tf.variable_scope('mixed_17x17x768b'):
+ with tf.variable_scope('branch1x1'):
+ branch1x1 = ops.conv2d(net, 192, [1, 1])
+ with tf.variable_scope('branch7x7'):
+ branch7x7 = ops.conv2d(net, 128, [1, 1])
+ branch7x7 = ops.conv2d(branch7x7, 128, [1, 7])
+ branch7x7 = ops.conv2d(branch7x7, 192, [7, 1])
+ with tf.variable_scope('branch7x7dbl'):
+ branch7x7dbl = ops.conv2d(net, 128, [1, 1])
+ branch7x7dbl = ops.conv2d(branch7x7dbl, 128, [7, 1])
+ branch7x7dbl = ops.conv2d(branch7x7dbl, 128, [1, 7])
+ branch7x7dbl = ops.conv2d(branch7x7dbl, 128, [7, 1])
+ branch7x7dbl = ops.conv2d(branch7x7dbl, 192, [1, 7])
+ with tf.variable_scope('branch_pool'):
+ branch_pool = ops.avg_pool(net, [3, 3])
+ branch_pool = ops.conv2d(branch_pool, 192, [1, 1])
+ net = tf.concat(axis=3, values=[branch1x1, branch7x7, branch7x7dbl, branch_pool])
+ end_points['mixed_17x17x768b'] = net
+ # mixed_5: 17 x 17 x 768.
+ with tf.variable_scope('mixed_17x17x768c'):
+ with tf.variable_scope('branch1x1'):
+ branch1x1 = ops.conv2d(net, 192, [1, 1])
+ with tf.variable_scope('branch7x7'):
+ branch7x7 = ops.conv2d(net, 160, [1, 1])
+ branch7x7 = ops.conv2d(branch7x7, 160, [1, 7])
+ branch7x7 = ops.conv2d(branch7x7, 192, [7, 1])
+ with tf.variable_scope('branch7x7dbl'):
+ branch7x7dbl = ops.conv2d(net, 160, [1, 1])
+ branch7x7dbl = ops.conv2d(branch7x7dbl, 160, [7, 1])
+ branch7x7dbl = ops.conv2d(branch7x7dbl, 160, [1, 7])
+ branch7x7dbl = ops.conv2d(branch7x7dbl, 160, [7, 1])
+ branch7x7dbl = ops.conv2d(branch7x7dbl, 192, [1, 7])
+ with tf.variable_scope('branch_pool'):
+ branch_pool = ops.avg_pool(net, [3, 3])
+ branch_pool = ops.conv2d(branch_pool, 192, [1, 1])
+ net = tf.concat(axis=3, values=[branch1x1, branch7x7, branch7x7dbl, branch_pool])
+ end_points['mixed_17x17x768c'] = net
+ # mixed_6: 17 x 17 x 768.
+ with tf.variable_scope('mixed_17x17x768d'):
+ with tf.variable_scope('branch1x1'):
+ branch1x1 = ops.conv2d(net, 192, [1, 1])
+ with tf.variable_scope('branch7x7'):
+ branch7x7 = ops.conv2d(net, 160, [1, 1])
+ branch7x7 = ops.conv2d(branch7x7, 160, [1, 7])
+ branch7x7 = ops.conv2d(branch7x7, 192, [7, 1])
+ with tf.variable_scope('branch7x7dbl'):
+ branch7x7dbl = ops.conv2d(net, 160, [1, 1])
+ branch7x7dbl = ops.conv2d(branch7x7dbl, 160, [7, 1])
+ branch7x7dbl = ops.conv2d(branch7x7dbl, 160, [1, 7])
+ branch7x7dbl = ops.conv2d(branch7x7dbl, 160, [7, 1])
+ branch7x7dbl = ops.conv2d(branch7x7dbl, 192, [1, 7])
+ with tf.variable_scope('branch_pool'):
+ branch_pool = ops.avg_pool(net, [3, 3])
+ branch_pool = ops.conv2d(branch_pool, 192, [1, 1])
+ net = tf.concat(axis=3, values=[branch1x1, branch7x7, branch7x7dbl, branch_pool])
+ end_points['mixed_17x17x768d'] = net
+ # mixed_7: 17 x 17 x 768.
+ with tf.variable_scope('mixed_17x17x768e'):
+ with tf.variable_scope('branch1x1'):
+ branch1x1 = ops.conv2d(net, 192, [1, 1])
+ with tf.variable_scope('branch7x7'):
+ branch7x7 = ops.conv2d(net, 192, [1, 1])
+ branch7x7 = ops.conv2d(branch7x7, 192, [1, 7])
+ branch7x7 = ops.conv2d(branch7x7, 192, [7, 1])
+ with tf.variable_scope('branch7x7dbl'):
+ branch7x7dbl = ops.conv2d(net, 192, [1, 1])
+ branch7x7dbl = ops.conv2d(branch7x7dbl, 192, [7, 1])
+ branch7x7dbl = ops.conv2d(branch7x7dbl, 192, [1, 7])
+ branch7x7dbl = ops.conv2d(branch7x7dbl, 192, [7, 1])
+ branch7x7dbl = ops.conv2d(branch7x7dbl, 192, [1, 7])
+ with tf.variable_scope('branch_pool'):
+ branch_pool = ops.avg_pool(net, [3, 3])
+ branch_pool = ops.conv2d(branch_pool, 192, [1, 1])
+ net = tf.concat(axis=3, values=[branch1x1, branch7x7, branch7x7dbl, branch_pool])
+ end_points['mixed_17x17x768e'] = net
+ # Auxiliary Head logits
+ aux_logits = tf.identity(end_points['mixed_17x17x768e'])
+ with tf.variable_scope('aux_logits'):
+ aux_logits = ops.avg_pool(aux_logits, [5, 5], stride=3,
+ padding='VALID')
+ aux_logits = ops.conv2d(aux_logits, 128, [1, 1], scope='proj')
+ # Shape of feature map before the final layer.
+ shape = aux_logits.get_shape()
+ aux_logits = ops.conv2d(aux_logits, 768, shape[1:3], stddev=0.01,
+ padding='VALID')
+ aux_logits = ops.flatten(aux_logits)
+ aux_logits = ops.fc(aux_logits, num_classes, activation=None,
+ stddev=0.001, restore=restore_logits)
+ end_points['aux_logits'] = aux_logits
+ # mixed_8: 8 x 8 x 1280.
+ # Note that the scope below is deliberately left unchanged so as not to
+ # invalidate previously trained checkpoints.
+ # (TODO) Fix the scope when appropriate.
+ with tf.variable_scope('mixed_17x17x1280a'):
+ with tf.variable_scope('branch3x3'):
+ branch3x3 = ops.conv2d(net, 192, [1, 1])
+ branch3x3 = ops.conv2d(branch3x3, 320, [3, 3], stride=2,
+ padding='VALID')
+ with tf.variable_scope('branch7x7x3'):
+ branch7x7x3 = ops.conv2d(net, 192, [1, 1])
+ branch7x7x3 = ops.conv2d(branch7x7x3, 192, [1, 7])
+ branch7x7x3 = ops.conv2d(branch7x7x3, 192, [7, 1])
+ branch7x7x3 = ops.conv2d(branch7x7x3, 192, [3, 3],
+ stride=2, padding='VALID')
+ with tf.variable_scope('branch_pool'):
+ branch_pool = ops.max_pool(net, [3, 3], stride=2, padding='VALID')
+ net = tf.concat(axis=3, values=[branch3x3, branch7x7x3, branch_pool])
+ end_points['mixed_17x17x1280a'] = net
+ # mixed_9: 8 x 8 x 2048.
+ with tf.variable_scope('mixed_8x8x2048a'):
+ with tf.variable_scope('branch1x1'):
+ branch1x1 = ops.conv2d(net, 320, [1, 1])
+ with tf.variable_scope('branch3x3'):
+ branch3x3 = ops.conv2d(net, 384, [1, 1])
+ branch3x3 = tf.concat(axis=3, values=[ops.conv2d(branch3x3, 384, [1, 3]),
+ ops.conv2d(branch3x3, 384, [3, 1])])
+ with tf.variable_scope('branch3x3dbl'):
+ branch3x3dbl = ops.conv2d(net, 448, [1, 1])
+ branch3x3dbl = ops.conv2d(branch3x3dbl, 384, [3, 3])
+ branch3x3dbl = tf.concat(axis=3, values=[ops.conv2d(branch3x3dbl, 384, [1, 3]),
+ ops.conv2d(branch3x3dbl, 384, [3, 1])])
+ with tf.variable_scope('branch_pool'):
+ branch_pool = ops.avg_pool(net, [3, 3])
+ branch_pool = ops.conv2d(branch_pool, 192, [1, 1])
+ net = tf.concat(axis=3, values=[branch1x1, branch3x3, branch3x3dbl, branch_pool])
+ end_points['mixed_8x8x2048a'] = net
+ # mixed_10: 8 x 8 x 2048.
+ with tf.variable_scope('mixed_8x8x2048b'):
+ with tf.variable_scope('branch1x1'):
+ branch1x1 = ops.conv2d(net, 320, [1, 1])
+ with tf.variable_scope('branch3x3'):
+ branch3x3 = ops.conv2d(net, 384, [1, 1])
+ branch3x3 = tf.concat(axis=3, values=[ops.conv2d(branch3x3, 384, [1, 3]),
+ ops.conv2d(branch3x3, 384, [3, 1])])
+ with tf.variable_scope('branch3x3dbl'):
+ branch3x3dbl = ops.conv2d(net, 448, [1, 1])
+ branch3x3dbl = ops.conv2d(branch3x3dbl, 384, [3, 3])
+ branch3x3dbl = tf.concat(axis=3, values=[ops.conv2d(branch3x3dbl, 384, [1, 3]),
+ ops.conv2d(branch3x3dbl, 384, [3, 1])])
+ with tf.variable_scope('branch_pool'):
+ branch_pool = ops.avg_pool(net, [3, 3])
+ branch_pool = ops.conv2d(branch_pool, 192, [1, 1])
+ net = tf.concat(axis=3, values=[branch1x1, branch3x3, branch3x3dbl, branch_pool])
+ end_points['mixed_8x8x2048b'] = net
+ # Final pooling and prediction
+ with tf.variable_scope('logits'):
+ shape = net.get_shape()
+ net = ops.avg_pool(net, shape[1:3], padding='VALID', scope='pool')
+ # 1 x 1 x 2048
+ net = ops.dropout(net, dropout_keep_prob, scope='dropout')
+ net = ops.flatten(net, scope='flatten')
+ # 2048
+ logits = ops.fc(net, num_classes, activation=None, scope='logits',
+ restore=restore_logits)
+ # 1000
+ end_points['logits'] = logits
+ end_points['predictions'] = tf.nn.softmax(logits, name='predictions')
+ return logits, end_points
+
+
+def inception_v3_parameters(weight_decay=0.00004, stddev=0.1,
+ batch_norm_decay=0.9997, batch_norm_epsilon=0.001):
+ """Yields the scope with the default parameters for inception_v3.
+
+ Args:
+ weight_decay: the weight decay for weights variables.
+ stddev: standard deviation of the truncated Gaussian weight distribution.
+ batch_norm_decay: decay for the moving averages of the batch_norm statistics.
+ batch_norm_epsilon: small float added to variance to avoid dividing by zero.
+
+ Yields:
+ an arg_scope with the parameters needed for inception_v3.
+ """
+ # Set weight_decay for weights in Conv and FC layers.
+ with scopes.arg_scope([ops.conv2d, ops.fc],
+ weight_decay=weight_decay):
+ # Set stddev, activation and parameters for batch_norm.
+ with scopes.arg_scope([ops.conv2d],
+ stddev=stddev,
+ activation=tf.nn.relu,
+ batch_norm_params={
+ 'decay': batch_norm_decay,
+ 'epsilon': batch_norm_epsilon}) as arg_scope:
+ yield arg_scope
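
A minimal usage sketch (not part of the diff; batch size, input resolution, and class count are illustrative) showing how a model could be built with the defaults that `inception_v3_parameters` sets up, by mirroring its body with `scopes.arg_scope` directly:

```python
import tensorflow as tf

from inception.slim import inception_model as inception
from inception.slim import ops
from inception.slim import scopes

# Illustrative inputs: a batch of 8 RGB images at the 299x299 training resolution.
images = tf.random_uniform((8, 299, 299, 3))

# Apply the same defaults that inception_v3_parameters() yields.
with scopes.arg_scope([ops.conv2d, ops.fc], weight_decay=0.00004):
  with scopes.arg_scope([ops.conv2d],
                        stddev=0.1,
                        activation=tf.nn.relu,
                        batch_norm_params={'decay': 0.9997, 'epsilon': 0.001}):
    logits, end_points = inception.inception_v3(images, 1000)

# end_points exposes intermediate tensors, e.g. the final 8x8x2048 feature map.
pre_pool = end_points['mixed_8x8x2048b']
```
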
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/inception_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/inception_test.py"
new file mode 100644
index 00000000..231dea29
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/inception_test.py"
@@ -0,0 +1,134 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for slim.inception."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from inception.slim import inception_model as inception
+
+
+class InceptionTest(tf.test.TestCase):
+
+ def testBuildLogits(self):
+ batch_size = 5
+ height, width = 299, 299
+ num_classes = 1000
+ with self.test_session():
+ inputs = tf.random_uniform((batch_size, height, width, 3))
+ logits, _ = inception.inception_v3(inputs, num_classes)
+ self.assertTrue(logits.op.name.startswith('logits'))
+ self.assertListEqual(logits.get_shape().as_list(),
+ [batch_size, num_classes])
+
+ def testBuildEndPoints(self):
+ batch_size = 5
+ height, width = 299, 299
+ num_classes = 1000
+ with self.test_session():
+ inputs = tf.random_uniform((batch_size, height, width, 3))
+ _, end_points = inception.inception_v3(inputs, num_classes)
+ self.assertTrue('logits' in end_points)
+ logits = end_points['logits']
+ self.assertListEqual(logits.get_shape().as_list(),
+ [batch_size, num_classes])
+ self.assertTrue('aux_logits' in end_points)
+ aux_logits = end_points['aux_logits']
+ self.assertListEqual(aux_logits.get_shape().as_list(),
+ [batch_size, num_classes])
+ pre_pool = end_points['mixed_8x8x2048b']
+ self.assertListEqual(pre_pool.get_shape().as_list(),
+ [batch_size, 8, 8, 2048])
+
+ def testVariablesSetDevice(self):
+ batch_size = 5
+ height, width = 299, 299
+ num_classes = 1000
+ with self.test_session():
+ inputs = tf.random_uniform((batch_size, height, width, 3))
+ # Force all Variables to reside on the device.
+ with tf.variable_scope('on_cpu'), tf.device('/cpu:0'):
+ inception.inception_v3(inputs, num_classes)
+ with tf.variable_scope('on_gpu'), tf.device('/gpu:0'):
+ inception.inception_v3(inputs, num_classes)
+ for v in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='on_cpu'):
+ self.assertDeviceEqual(v.device, '/cpu:0')
+ for v in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='on_gpu'):
+ self.assertDeviceEqual(v.device, '/gpu:0')
+
+ def testHalfSizeImages(self):
+ batch_size = 5
+ height, width = 150, 150
+ num_classes = 1000
+ with self.test_session():
+ inputs = tf.random_uniform((batch_size, height, width, 3))
+ logits, end_points = inception.inception_v3(inputs, num_classes)
+ self.assertTrue(logits.op.name.startswith('logits'))
+ self.assertListEqual(logits.get_shape().as_list(),
+ [batch_size, num_classes])
+ pre_pool = end_points['mixed_8x8x2048b']
+ self.assertListEqual(pre_pool.get_shape().as_list(),
+ [batch_size, 3, 3, 2048])
+
+ def testUnknownBatchSize(self):
+ batch_size = 1
+ height, width = 299, 299
+ num_classes = 1000
+ with self.test_session() as sess:
+ inputs = tf.placeholder(tf.float32, (None, height, width, 3))
+ logits, _ = inception.inception_v3(inputs, num_classes)
+ self.assertTrue(logits.op.name.startswith('logits'))
+ self.assertListEqual(logits.get_shape().as_list(),
+ [None, num_classes])
+ images = tf.random_uniform((batch_size, height, width, 3))
+ sess.run(tf.global_variables_initializer())
+ output = sess.run(logits, {inputs: images.eval()})
+ self.assertEquals(output.shape, (batch_size, num_classes))
+
+ def testEvaluation(self):
+ batch_size = 2
+ height, width = 299, 299
+ num_classes = 1000
+ with self.test_session() as sess:
+ eval_inputs = tf.random_uniform((batch_size, height, width, 3))
+ logits, _ = inception.inception_v3(eval_inputs, num_classes,
+ is_training=False)
+ predictions = tf.argmax(logits, 1)
+ sess.run(tf.global_variables_initializer())
+ output = sess.run(predictions)
+ self.assertEquals(output.shape, (batch_size,))
+
+ def testTrainEvalWithReuse(self):
+ train_batch_size = 5
+ eval_batch_size = 2
+ height, width = 150, 150
+ num_classes = 1000
+ with self.test_session() as sess:
+ train_inputs = tf.random_uniform((train_batch_size, height, width, 3))
+ inception.inception_v3(train_inputs, num_classes)
+ tf.get_variable_scope().reuse_variables()
+ eval_inputs = tf.random_uniform((eval_batch_size, height, width, 3))
+ logits, _ = inception.inception_v3(eval_inputs, num_classes,
+ is_training=False)
+ predictions = tf.argmax(logits, 1)
+ sess.run(tf.global_variables_initializer())
+ output = sess.run(predictions)
+ self.assertEquals(output.shape, (eval_batch_size,))
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/losses.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/losses.py"
new file mode 100644
index 00000000..78298d09
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/losses.py"
@@ -0,0 +1,174 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Contains convenience wrappers for various Neural Network TensorFlow losses.
+
+ All the losses defined here add themselves to the LOSSES_COLLECTION
+ collection.
+
+ l1_loss: Define an L1 Loss, useful for regularization, i.e. lasso.
+ l2_loss: Define an L2 Loss, useful for regularization, i.e. weight decay.
+ cross_entropy_loss: Define a cross entropy loss using
+ softmax_cross_entropy_with_logits. Useful for classification.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+# In order to gather all losses in a network, the user should use this
+# key for get_collection, i.e:
+# losses = tf.get_collection(slim.losses.LOSSES_COLLECTION)
+LOSSES_COLLECTION = '_losses'
+
+
+def l1_regularizer(weight=1.0, scope=None):
+ """Define a L1 regularizer.
+
+ Args:
+ weight: scale the loss by this factor.
+ scope: Optional scope for name_scope.
+
+ Returns:
+ a regularizer function.
+ """
+ def regularizer(tensor):
+ with tf.name_scope(scope, 'L1Regularizer', [tensor]):
+ l1_weight = tf.convert_to_tensor(weight,
+ dtype=tensor.dtype.base_dtype,
+ name='weight')
+ return tf.multiply(l1_weight, tf.reduce_sum(tf.abs(tensor)), name='value')
+ return regularizer
+
+
+def l2_regularizer(weight=1.0, scope=None):
+ """Define a L2 regularizer.
+
+ Args:
+ weight: scale the loss by this factor.
+ scope: Optional scope for name_scope.
+
+ Returns:
+ a regularizer function.
+ """
+ def regularizer(tensor):
+ with tf.name_scope(scope, 'L2Regularizer', [tensor]):
+ l2_weight = tf.convert_to_tensor(weight,
+ dtype=tensor.dtype.base_dtype,
+ name='weight')
+ return tf.multiply(l2_weight, tf.nn.l2_loss(tensor), name='value')
+ return regularizer
+
+
+def l1_l2_regularizer(weight_l1=1.0, weight_l2=1.0, scope=None):
+ """Define a L1L2 regularizer.
+
+ Args:
+ weight_l1: scale the L1 loss by this factor.
+ weight_l2: scale the L2 loss by this factor.
+ scope: Optional scope for name_scope.
+
+ Returns:
+ a regularizer function.
+ """
+ def regularizer(tensor):
+ with tf.name_scope(scope, 'L1L2Regularizer', [tensor]):
+ weight_l1_t = tf.convert_to_tensor(weight_l1,
+ dtype=tensor.dtype.base_dtype,
+ name='weight_l1')
+ weight_l2_t = tf.convert_to_tensor(weight_l2,
+ dtype=tensor.dtype.base_dtype,
+ name='weight_l2')
+ reg_l1 = tf.multiply(weight_l1_t, tf.reduce_sum(tf.abs(tensor)),
+ name='value_l1')
+ reg_l2 = tf.multiply(weight_l2_t, tf.nn.l2_loss(tensor),
+ name='value_l2')
+ return tf.add(reg_l1, reg_l2, name='value')
+ return regularizer
+
+
+def l1_loss(tensor, weight=1.0, scope=None):
+ """Define a L1Loss, useful for regularize, i.e. lasso.
+
+ Args:
+ tensor: tensor to regularize.
+ weight: scale the loss by this factor.
+ scope: Optional scope for name_scope.
+
+ Returns:
+ the L1 loss op.
+ """
+ with tf.name_scope(scope, 'L1Loss', [tensor]):
+ weight = tf.convert_to_tensor(weight,
+ dtype=tensor.dtype.base_dtype,
+ name='loss_weight')
+ loss = tf.multiply(weight, tf.reduce_sum(tf.abs(tensor)), name='value')
+ tf.add_to_collection(LOSSES_COLLECTION, loss)
+ return loss
+
+
+def l2_loss(tensor, weight=1.0, scope=None):
+ """Define a L2Loss, useful for regularize, i.e. weight decay.
+
+ Args:
+ tensor: tensor to regularize.
+ weight: an optional weight to modulate the loss.
+ scope: Optional scope for name_scope.
+
+ Returns:
+ the L2 loss op.
+ """
+ with tf.name_scope(scope, 'L2Loss', [tensor]):
+ weight = tf.convert_to_tensor(weight,
+ dtype=tensor.dtype.base_dtype,
+ name='loss_weight')
+ loss = tf.multiply(weight, tf.nn.l2_loss(tensor), name='value')
+ tf.add_to_collection(LOSSES_COLLECTION, loss)
+ return loss
+
+
+def cross_entropy_loss(logits, one_hot_labels, label_smoothing=0,
+ weight=1.0, scope=None):
+ """Define a Cross Entropy loss using softmax_cross_entropy_with_logits.
+
+ It can scale the loss by weight factor, and smooth the labels.
+
+ Args:
+ logits: [batch_size, num_classes] logits outputs of the network.
+ one_hot_labels: [batch_size, num_classes] target one_hot_encoded labels.
+ label_smoothing: if greater than 0 then smooth the labels.
+ weight: scale the loss by this factor.
+ scope: Optional scope for name_scope.
+
+ Returns:
+ A tensor with the softmax_cross_entropy loss.
+ """
+ logits.get_shape().assert_is_compatible_with(one_hot_labels.get_shape())
+ with tf.name_scope(scope, 'CrossEntropyLoss', [logits, one_hot_labels]):
+ num_classes = one_hot_labels.get_shape()[-1].value
+ one_hot_labels = tf.cast(one_hot_labels, logits.dtype)
+ if label_smoothing > 0:
+ smooth_positives = 1.0 - label_smoothing
+ smooth_negatives = label_smoothing / num_classes
+ one_hot_labels = one_hot_labels * smooth_positives + smooth_negatives
+ cross_entropy = tf.contrib.nn.deprecated_flipped_softmax_cross_entropy_with_logits(
+ logits, one_hot_labels, name='xentropy')
+
+ weight = tf.convert_to_tensor(weight,
+ dtype=logits.dtype.base_dtype,
+ name='loss_weight')
+ loss = tf.multiply(weight, tf.reduce_mean(cross_entropy), name='value')
+ tf.add_to_collection(LOSSES_COLLECTION, loss)
+ return loss
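
A short sketch (assumptions: `logits` and `one_hot_labels` tensors already exist) of how the losses defined above accumulate in `LOSSES_COLLECTION` and can be combined with the regularization losses that `ops.conv2d`/`ops.fc` register via these regularizers:

```python
import tensorflow as tf

from inception.slim import losses

# Assumed to exist: logits of shape [batch_size, num_classes] and matching
# one_hot_labels. The call below also adds the loss to LOSSES_COLLECTION.
losses.cross_entropy_loss(logits, one_hot_labels, label_smoothing=0.1)

# Gather everything added to the collections and sum into a single scalar.
classification_losses = tf.get_collection(losses.LOSSES_COLLECTION)
regularization_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
total_loss = tf.add_n(classification_losses + list(regularization_losses),
                      name='total_loss')
```
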
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/losses_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/losses_test.py"
new file mode 100644
index 00000000..e267f652
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/losses_test.py"
@@ -0,0 +1,177 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for slim.losses."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import tensorflow as tf
+
+from inception.slim import losses
+
+
+class LossesTest(tf.test.TestCase):
+
+ def testL1Loss(self):
+ with self.test_session():
+ shape = [5, 5, 5]
+ num_elem = 5 * 5 * 5
+ weights = tf.constant(1.0, shape=shape)
+ wd = 0.01
+ loss = losses.l1_loss(weights, wd)
+ self.assertEquals(loss.op.name, 'L1Loss/value')
+ self.assertAlmostEqual(loss.eval(), num_elem * wd, 5)
+
+ def testL2Loss(self):
+ with self.test_session():
+ shape = [5, 5, 5]
+ num_elem = 5 * 5 * 5
+ weights = tf.constant(1.0, shape=shape)
+ wd = 0.01
+ loss = losses.l2_loss(weights, wd)
+ self.assertEquals(loss.op.name, 'L2Loss/value')
+ self.assertAlmostEqual(loss.eval(), num_elem * wd / 2, 5)
+
+
+class RegularizersTest(tf.test.TestCase):
+
+ def testL1Regularizer(self):
+ with self.test_session():
+ shape = [5, 5, 5]
+ num_elem = 5 * 5 * 5
+ tensor = tf.constant(1.0, shape=shape)
+ loss = losses.l1_regularizer()(tensor)
+ self.assertEquals(loss.op.name, 'L1Regularizer/value')
+ self.assertAlmostEqual(loss.eval(), num_elem, 5)
+
+ def testL1RegularizerWithScope(self):
+ with self.test_session():
+ shape = [5, 5, 5]
+ num_elem = 5 * 5 * 5
+ tensor = tf.constant(1.0, shape=shape)
+ loss = losses.l1_regularizer(scope='L1')(tensor)
+ self.assertEquals(loss.op.name, 'L1/value')
+ self.assertAlmostEqual(loss.eval(), num_elem, 5)
+
+ def testL1RegularizerWithWeight(self):
+ with self.test_session():
+ shape = [5, 5, 5]
+ num_elem = 5 * 5 * 5
+ tensor = tf.constant(1.0, shape=shape)
+ weight = 0.01
+ loss = losses.l1_regularizer(weight)(tensor)
+ self.assertEquals(loss.op.name, 'L1Regularizer/value')
+ self.assertAlmostEqual(loss.eval(), num_elem * weight, 5)
+
+ def testL2Regularizer(self):
+ with self.test_session():
+ shape = [5, 5, 5]
+ num_elem = 5 * 5 * 5
+ tensor = tf.constant(1.0, shape=shape)
+ loss = losses.l2_regularizer()(tensor)
+ self.assertEquals(loss.op.name, 'L2Regularizer/value')
+ self.assertAlmostEqual(loss.eval(), num_elem / 2, 5)
+
+ def testL2RegularizerWithScope(self):
+ with self.test_session():
+ shape = [5, 5, 5]
+ num_elem = 5 * 5 * 5
+ tensor = tf.constant(1.0, shape=shape)
+ loss = losses.l2_regularizer(scope='L2')(tensor)
+ self.assertEquals(loss.op.name, 'L2/value')
+ self.assertAlmostEqual(loss.eval(), num_elem / 2, 5)
+
+ def testL2RegularizerWithWeight(self):
+ with self.test_session():
+ shape = [5, 5, 5]
+ num_elem = 5 * 5 * 5
+ tensor = tf.constant(1.0, shape=shape)
+ weight = 0.01
+ loss = losses.l2_regularizer(weight)(tensor)
+ self.assertEquals(loss.op.name, 'L2Regularizer/value')
+ self.assertAlmostEqual(loss.eval(), num_elem * weight / 2, 5)
+
+ def testL1L2Regularizer(self):
+ with self.test_session():
+ shape = [5, 5, 5]
+ num_elem = 5 * 5 * 5
+ tensor = tf.constant(1.0, shape=shape)
+ loss = losses.l1_l2_regularizer()(tensor)
+ self.assertEquals(loss.op.name, 'L1L2Regularizer/value')
+ self.assertAlmostEqual(loss.eval(), num_elem + num_elem / 2, 5)
+
+ def testL1L2RegularizerWithScope(self):
+ with self.test_session():
+ shape = [5, 5, 5]
+ num_elem = 5 * 5 * 5
+ tensor = tf.constant(1.0, shape=shape)
+ loss = losses.l1_l2_regularizer(scope='L1L2')(tensor)
+ self.assertEquals(loss.op.name, 'L1L2/value')
+ self.assertAlmostEqual(loss.eval(), num_elem + num_elem / 2, 5)
+
+ def testL1L2RegularizerWithWeights(self):
+ with self.test_session():
+ shape = [5, 5, 5]
+ num_elem = 5 * 5 * 5
+ tensor = tf.constant(1.0, shape=shape)
+ weight_l1 = 0.01
+ weight_l2 = 0.05
+ loss = losses.l1_l2_regularizer(weight_l1, weight_l2)(tensor)
+ self.assertEquals(loss.op.name, 'L1L2Regularizer/value')
+ self.assertAlmostEqual(loss.eval(),
+ num_elem * weight_l1 + num_elem * weight_l2 / 2, 5)
+
+
+class CrossEntropyLossTest(tf.test.TestCase):
+
+ def testCrossEntropyLossAllCorrect(self):
+ with self.test_session():
+ logits = tf.constant([[10.0, 0.0, 0.0],
+ [0.0, 10.0, 0.0],
+ [0.0, 0.0, 10.0]])
+ labels = tf.constant([[1, 0, 0],
+ [0, 1, 0],
+ [0, 0, 1]])
+ loss = losses.cross_entropy_loss(logits, labels)
+ self.assertEquals(loss.op.name, 'CrossEntropyLoss/value')
+ self.assertAlmostEqual(loss.eval(), 0.0, 3)
+
+ def testCrossEntropyLossAllWrong(self):
+ with self.test_session():
+ logits = tf.constant([[10.0, 0.0, 0.0],
+ [0.0, 10.0, 0.0],
+ [0.0, 0.0, 10.0]])
+ labels = tf.constant([[0, 0, 1],
+ [1, 0, 0],
+ [0, 1, 0]])
+ loss = losses.cross_entropy_loss(logits, labels)
+ self.assertEquals(loss.op.name, 'CrossEntropyLoss/value')
+ self.assertAlmostEqual(loss.eval(), 10.0, 3)
+
+ def testCrossEntropyLossAllWrongWithWeight(self):
+ with self.test_session():
+ logits = tf.constant([[10.0, 0.0, 0.0],
+ [0.0, 10.0, 0.0],
+ [0.0, 0.0, 10.0]])
+ labels = tf.constant([[0, 0, 1],
+ [1, 0, 0],
+ [0, 1, 0]])
+ loss = losses.cross_entropy_loss(logits, labels, weight=0.5)
+ self.assertEquals(loss.op.name, 'CrossEntropyLoss/value')
+ self.assertAlmostEqual(loss.eval(), 5.0, 3)
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/ops.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/ops.py"
new file mode 100644
index 00000000..54fda4eb
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/ops.py"
@@ -0,0 +1,473 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Contains convenience wrappers for typical Neural Network TensorFlow layers.
+
+ Additionally it maintains a collection with update_ops that need to be
+ run after the ops have been computed, for example to update moving means
+ and moving variances of batch_norm.
+
+ Ops that have different behavior during training or eval have an is_training
+ parameter. Additionally Ops that contain variables.variable have a trainable
+ parameter, which controls whether the op's variables are trainable.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import tensorflow as tf
+
+from tensorflow.python.training import moving_averages
+
+from inception.slim import losses
+from inception.slim import scopes
+from inception.slim import variables
+
+# Used to keep the update ops done by batch_norm.
+UPDATE_OPS_COLLECTION = '_update_ops_'
+
+
+@scopes.add_arg_scope
+def batch_norm(inputs,
+ decay=0.999,
+ center=True,
+ scale=False,
+ epsilon=0.001,
+ moving_vars='moving_vars',
+ activation=None,
+ is_training=True,
+ trainable=True,
+ restore=True,
+ scope=None,
+ reuse=None):
+ """Adds a Batch Normalization layer.
+
+ Args:
+ inputs: a tensor of size [batch_size, height, width, channels]
+ or [batch_size, channels].
+ decay: decay for the moving average.
+ center: If True, subtract beta. If False, beta is not created and ignored.
+ scale: If True, multiply by gamma. If False, gamma is
+ not used. When the next layer is linear (also e.g. ReLU), this can be
+ disabled since the scaling can be done by the next layer.
+ epsilon: small float added to variance to avoid dividing by zero.
+ moving_vars: collection to store the moving_mean and moving_variance.
+ activation: activation function.
+ is_training: whether or not the model is in training mode.
+ trainable: whether or not the variables should be trainable or not.
+ restore: whether or not the variables should be marked for restore.
+ scope: Optional scope for variable_scope.
+ reuse: whether or not the layer and its variables should be reused. To be
+ able to reuse the layer scope must be given.
+
+ Returns:
+ a tensor representing the output of the operation.
+
+ """
+ inputs_shape = inputs.get_shape()
+ with tf.variable_scope(scope, 'BatchNorm', [inputs], reuse=reuse):
+ axis = list(range(len(inputs_shape) - 1))
+ params_shape = inputs_shape[-1:]
+ # Allocate parameters for the beta and gamma of the normalization.
+ beta, gamma = None, None
+ if center:
+ beta = variables.variable('beta',
+ params_shape,
+ initializer=tf.zeros_initializer(),
+ trainable=trainable,
+ restore=restore)
+ if scale:
+ gamma = variables.variable('gamma',
+ params_shape,
+ initializer=tf.ones_initializer(),
+ trainable=trainable,
+ restore=restore)
+ # Create moving_mean and moving_variance add them to
+ # GraphKeys.MOVING_AVERAGE_VARIABLES collections.
+ moving_collections = [moving_vars, tf.GraphKeys.MOVING_AVERAGE_VARIABLES]
+ moving_mean = variables.variable('moving_mean',
+ params_shape,
+ initializer=tf.zeros_initializer(),
+ trainable=False,
+ restore=restore,
+ collections=moving_collections)
+ moving_variance = variables.variable('moving_variance',
+ params_shape,
+ initializer=tf.ones_initializer(),
+ trainable=False,
+ restore=restore,
+ collections=moving_collections)
+ if is_training:
+ # Calculate the moments based on the individual batch.
+ mean, variance = tf.nn.moments(inputs, axis)
+
+ update_moving_mean = moving_averages.assign_moving_average(
+ moving_mean, mean, decay)
+ tf.add_to_collection(UPDATE_OPS_COLLECTION, update_moving_mean)
+ update_moving_variance = moving_averages.assign_moving_average(
+ moving_variance, variance, decay)
+ tf.add_to_collection(UPDATE_OPS_COLLECTION, update_moving_variance)
+ else:
+ # Just use the moving_mean and moving_variance.
+ mean = moving_mean
+ variance = moving_variance
+ # Normalize the activations.
+ outputs = tf.nn.batch_normalization(
+ inputs, mean, variance, beta, gamma, epsilon)
+ outputs.set_shape(inputs.get_shape())
+ if activation:
+ outputs = activation(outputs)
+ return outputs
+
+
+def _two_element_tuple(int_or_tuple):
+ """Converts `int_or_tuple` to height, width.
+
+ Several of the functions that follow accept arguments as either
+ a tuple of 2 integers or a single integer. A single integer
+ indicates that the 2 values of the tuple are the same.
+
+ This function normalizes the input value by always returning a tuple.
+
+ Args:
+ int_or_tuple: A list of 2 ints, a single int or a tf.TensorShape.
+
+ Returns:
+ A tuple with 2 values.
+
+ Raises:
+ ValueError: If `int_or_tuple` is not well formed.
+ """
+ if isinstance(int_or_tuple, (list, tuple)):
+ if len(int_or_tuple) != 2:
+ raise ValueError('Must be a list with 2 elements: %s' % int_or_tuple)
+ return int(int_or_tuple[0]), int(int_or_tuple[1])
+ if isinstance(int_or_tuple, int):
+ return int(int_or_tuple), int(int_or_tuple)
+ if isinstance(int_or_tuple, tf.TensorShape):
+ if len(int_or_tuple) == 2:
+ return int_or_tuple[0], int_or_tuple[1]
+ raise ValueError('Must be an int, a list with 2 elements or a TensorShape of '
+ 'length 2')
+
+
+@scopes.add_arg_scope
+def conv2d(inputs,
+ num_filters_out,
+ kernel_size,
+ stride=1,
+ padding='SAME',
+ activation=tf.nn.relu,
+ stddev=0.01,
+ bias=0.0,
+ weight_decay=0,
+ batch_norm_params=None,
+ is_training=True,
+ trainable=True,
+ restore=True,
+ scope=None,
+ reuse=None):
+ """Adds a 2D convolution followed by an optional batch_norm layer.
+
+ conv2d creates a variable called 'weights', representing the convolutional
+ kernel, that is convolved with the input. If `batch_norm_params` is None, a
+ second variable called 'biases' is added to the result of the convolution
+ operation.
+
+ Args:
+ inputs: a tensor of size [batch_size, height, width, channels].
+ num_filters_out: the number of output filters.
+ kernel_size: a list of length 2: [kernel_height, kernel_width] of
+ the filters. Can be an int if both values are the same.
+ stride: a list of length 2: [stride_height, stride_width].
+ Can be an int if both strides are the same. Note that presently
+ both strides must have the same value.
+ padding: one of 'VALID' or 'SAME'.
+ activation: activation function.
+ stddev: standard deviation of the truncated gaussian weight distribution.
+ bias: the initial value of the biases.
+ weight_decay: the weight decay.
+ batch_norm_params: parameters for the batch_norm. If None, don't use it.
+ is_training: whether or not the model is in training mode.
+ trainable: whether or not the variables should be trainable or not.
+ restore: whether or not the variables should be marked for restore.
+ scope: Optional scope for variable_scope.
+ reuse: whether or not the layer and its variables should be reused. To be
+ able to reuse the layer scope must be given.
+ Returns:
+ a tensor representing the output of the operation.
+
+ """
+ with tf.variable_scope(scope, 'Conv', [inputs], reuse=reuse):
+ kernel_h, kernel_w = _two_element_tuple(kernel_size)
+ stride_h, stride_w = _two_element_tuple(stride)
+ num_filters_in = inputs.get_shape()[-1]
+ weights_shape = [kernel_h, kernel_w,
+ num_filters_in, num_filters_out]
+ weights_initializer = tf.truncated_normal_initializer(stddev=stddev)
+ l2_regularizer = None
+ if weight_decay and weight_decay > 0:
+ l2_regularizer = losses.l2_regularizer(weight_decay)
+ weights = variables.variable('weights',
+ shape=weights_shape,
+ initializer=weights_initializer,
+ regularizer=l2_regularizer,
+ trainable=trainable,
+ restore=restore)
+ conv = tf.nn.conv2d(inputs, weights, [1, stride_h, stride_w, 1],
+ padding=padding)
+ if batch_norm_params is not None:
+ with scopes.arg_scope([batch_norm], is_training=is_training,
+ trainable=trainable, restore=restore):
+ outputs = batch_norm(conv, **batch_norm_params)
+ else:
+ bias_shape = [num_filters_out,]
+ bias_initializer = tf.constant_initializer(bias)
+ biases = variables.variable('biases',
+ shape=bias_shape,
+ initializer=bias_initializer,
+ trainable=trainable,
+ restore=restore)
+ outputs = tf.nn.bias_add(conv, biases)
+ if activation:
+ outputs = activation(outputs)
+ return outputs
+
+
+@scopes.add_arg_scope
+def fc(inputs,
+ num_units_out,
+ activation=tf.nn.relu,
+ stddev=0.01,
+ bias=0.0,
+ weight_decay=0,
+ batch_norm_params=None,
+ is_training=True,
+ trainable=True,
+ restore=True,
+ scope=None,
+ reuse=None):
+ """Adds a fully connected layer followed by an optional batch_norm layer.
+
+ FC creates a variable called 'weights', representing the fully connected
+ weight matrix, that is multiplied by the input. If `batch_norm` is None, a
+ second variable called 'biases' is added to the result of the initial
+ vector-matrix multiplication.
+
+ Args:
+ inputs: a [B x N] tensor where B is the batch size and N is the number of
+ input units in the layer.
+ num_units_out: the number of output units in the layer.
+ activation: activation function.
+ stddev: the standard deviation for the weights.
+ bias: the initial value of the biases.
+ weight_decay: the weight decay.
+ batch_norm_params: parameters for the batch_norm. If None, don't use it.
+ is_training: whether or not the model is in training mode.
+ trainable: whether or not the variables should be trainable or not.
+ restore: whether or not the variables should be marked for restore.
+ scope: Optional scope for variable_scope.
+ reuse: whether or not the layer and its variables should be reused. To be
+ able to reuse the layer scope must be given.
+
+ Returns:
+ the tensor variable representing the result of the series of operations.
+ """
+ with tf.variable_scope(scope, 'FC', [inputs], reuse=reuse):
+ num_units_in = inputs.get_shape()[1]
+ weights_shape = [num_units_in, num_units_out]
+ weights_initializer = tf.truncated_normal_initializer(stddev=stddev)
+ l2_regularizer = None
+ if weight_decay and weight_decay > 0:
+ l2_regularizer = losses.l2_regularizer(weight_decay)
+ weights = variables.variable('weights',
+ shape=weights_shape,
+ initializer=weights_initializer,
+ regularizer=l2_regularizer,
+ trainable=trainable,
+ restore=restore)
+ if batch_norm_params is not None:
+ outputs = tf.matmul(inputs, weights)
+ with scopes.arg_scope([batch_norm], is_training=is_training,
+ trainable=trainable, restore=restore):
+ outputs = batch_norm(outputs, **batch_norm_params)
+ else:
+ bias_shape = [num_units_out,]
+ bias_initializer = tf.constant_initializer(bias)
+ biases = variables.variable('biases',
+ shape=bias_shape,
+ initializer=bias_initializer,
+ trainable=trainable,
+ restore=restore)
+ outputs = tf.nn.xw_plus_b(inputs, weights, biases)
+ if activation:
+ outputs = activation(outputs)
+ return outputs
+
+
+def one_hot_encoding(labels, num_classes, scope=None):
+ """Transform numeric labels into onehot_labels.
+
+ Args:
+ labels: [batch_size] target labels.
+ num_classes: total number of classes.
+ scope: Optional scope for name_scope.
+ Returns:
+ one hot encoding of the labels.
+ """
+ with tf.name_scope(scope, 'OneHotEncoding', [labels]):
+ batch_size = labels.get_shape()[0]
+ indices = tf.expand_dims(tf.range(0, batch_size), 1)
+ labels = tf.cast(tf.expand_dims(labels, 1), indices.dtype)
+ concated = tf.concat(axis=1, values=[indices, labels])
+ onehot_labels = tf.sparse_to_dense(
+ concated, tf.stack([batch_size, num_classes]), 1.0, 0.0)
+ onehot_labels.set_shape([batch_size, num_classes])
+ return onehot_labels
+
+
+@scopes.add_arg_scope
+def max_pool(inputs, kernel_size, stride=2, padding='VALID', scope=None):
+ """Adds a Max Pooling layer.
+
+ It is assumed by the wrapper that the pooling is only done per image and not
+ in depth or batch.
+
+ Args:
+ inputs: a tensor of size [batch_size, height, width, depth].
+ kernel_size: a list of length 2: [kernel_height, kernel_width] of the
+ pooling kernel over which the op is computed. Can be an int if both
+ values are the same.
+ stride: a list of length 2: [stride_height, stride_width].
+ Can be an int if both strides are the same. Note that presently
+ both strides must have the same value.
+ padding: the padding method, either 'VALID' or 'SAME'.
+ scope: Optional scope for name_scope.
+
+ Returns:
+ a tensor representing the results of the pooling operation.
+ Raises:
+ ValueError: if 'kernel_size' is not a 2-D list
+ """
+ with tf.name_scope(scope, 'MaxPool', [inputs]):
+ kernel_h, kernel_w = _two_element_tuple(kernel_size)
+ stride_h, stride_w = _two_element_tuple(stride)
+ return tf.nn.max_pool(inputs,
+ ksize=[1, kernel_h, kernel_w, 1],
+ strides=[1, stride_h, stride_w, 1],
+ padding=padding)
+
+
+@scopes.add_arg_scope
+def avg_pool(inputs, kernel_size, stride=2, padding='VALID', scope=None):
+ """Adds a Avg Pooling layer.
+
+ It is assumed by the wrapper that the pooling is only done per image and not
+ in depth or batch.
+
+ Args:
+ inputs: a tensor of size [batch_size, height, width, depth].
+ kernel_size: a list of length 2: [kernel_height, kernel_width] of the
+ pooling kernel over which the op is computed. Can be an int if both
+ values are the same.
+ stride: a list of length 2: [stride_height, stride_width].
+ Can be an int if both strides are the same. Note that presently
+ both strides must have the same value.
+ padding: the padding method, either 'VALID' or 'SAME'.
+ scope: Optional scope for name_scope.
+
+ Returns:
+ a tensor representing the results of the pooling operation.
+ """
+ with tf.name_scope(scope, 'AvgPool', [inputs]):
+ kernel_h, kernel_w = _two_element_tuple(kernel_size)
+ stride_h, stride_w = _two_element_tuple(stride)
+ return tf.nn.avg_pool(inputs,
+ ksize=[1, kernel_h, kernel_w, 1],
+ strides=[1, stride_h, stride_w, 1],
+ padding=padding)
+
+
+@scopes.add_arg_scope
+def dropout(inputs, keep_prob=0.5, is_training=True, scope=None):
+ """Returns a dropout layer applied to the input.
+
+ Args:
+ inputs: the tensor to pass to the Dropout layer.
+ keep_prob: the probability of keeping each input unit.
+ is_training: whether or not the model is in training mode. If so, dropout is
+ applied and values scaled. Otherwise, the inputs are returned unchanged.
+ scope: Optional scope for name_scope.
+
+ Returns:
+ a tensor representing the output of the operation.
+ """
+ if is_training and keep_prob > 0:
+ with tf.name_scope(scope, 'Dropout', [inputs]):
+ return tf.nn.dropout(inputs, keep_prob)
+ else:
+ return inputs
+
+
+def flatten(inputs, scope=None):
+ """Flattens the input while maintaining the batch_size.
+
+ Assumes that the first dimension represents the batch.
+
+ Args:
+ inputs: a tensor of size [batch_size, ...].
+ scope: Optional scope for name_scope.
+
+ Returns:
+ a flattened tensor with shape [batch_size, k].
+ Raises:
+ ValueError: if inputs.shape is wrong.
+ """
+ if len(inputs.get_shape()) < 2:
+ raise ValueError('Inputs must have at least 2 dimensions')
+ dims = inputs.get_shape()[1:]
+ k = dims.num_elements()
+ with tf.name_scope(scope, 'Flatten', [inputs]):
+ return tf.reshape(inputs, [-1, k])
+
+
+def repeat_op(repetitions, inputs, op, *args, **kwargs):
+ """Build a sequential Tower starting from inputs by using an op repeatedly.
+
+ It creates new scopes for each operation by increasing the counter.
+ Example: given repeat_op(3, _, ops.conv2d, 64, [3, 3], scope='conv1')
+ it will repeat the given op under the following variable_scopes:
+ conv1/Conv
+ conv1/Conv_1
+ conv1/Conv_2
+
+ Args:
+ repetitions: number of repetitions.
+ inputs: a tensor of size [batch_size, height, width, channels].
+ op: an operation.
+ *args: args for the op.
+ **kwargs: kwargs for the op.
+
+ Returns:
+ a tensor resulting from applying the operation op, repetitions times.
+ Raises:
+ ValueError: if the op is unknown or wrong.
+ """
+ scope = kwargs.pop('scope', None)
+ with tf.variable_scope(scope, 'RepeatOp', [inputs]):
+ tower = inputs
+ for _ in range(repetitions):
+ tower = op(tower, *args, **kwargs)
+ return tower
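
A minimal sketch (hypothetical layer sizes, not taken from the repository) of stacking these wrappers into a small classifier, including how the batch_norm update ops collected in `UPDATE_OPS_COLLECTION` are attached to the forward pass before training, as the tests below also do:

```python
import tensorflow as tf

from inception.slim import ops
from inception.slim import scopes

images = tf.random_uniform((16, 32, 32, 3))

# Give every conv2d a batch_norm layer via arg_scope, as in ConvTest below.
with scopes.arg_scope([ops.conv2d], batch_norm_params={'decay': 0.99}):
  net = ops.conv2d(images, 32, [3, 3], scope='conv1')
  net = ops.repeat_op(2, net, ops.conv2d, 64, [3, 3], scope='conv2')
  net = ops.max_pool(net, [2, 2], scope='pool1')
  net = ops.flatten(net, scope='flatten')
  net = ops.dropout(net, 0.8, scope='dropout')
  logits = ops.fc(net, 10, activation=None, scope='logits')

# batch_norm queued its moving-average updates; run them with the forward pass.
update_ops = tf.get_collection(ops.UPDATE_OPS_COLLECTION)
with tf.control_dependencies(update_ops):
  logits = tf.identity(logits)
```
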
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/ops_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/ops_test.py"
new file mode 100644
index 00000000..13dc5d9a
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/ops_test.py"
@@ -0,0 +1,687 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for slim.ops."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import numpy as np
+import tensorflow as tf
+
+from inception.slim import ops
+from inception.slim import scopes
+from inception.slim import variables
+
+
+class ConvTest(tf.test.TestCase):
+
+ def testCreateConv(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ output = ops.conv2d(images, 32, [3, 3])
+ self.assertEquals(output.op.name, 'Conv/Relu')
+ self.assertListEqual(output.get_shape().as_list(), [5, height, width, 32])
+
+ def testCreateSquareConv(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ output = ops.conv2d(images, 32, 3)
+ self.assertEquals(output.op.name, 'Conv/Relu')
+ self.assertListEqual(output.get_shape().as_list(), [5, height, width, 32])
+
+ def testCreateConvWithTensorShape(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ output = ops.conv2d(images, 32, images.get_shape()[1:3])
+ self.assertEquals(output.op.name, 'Conv/Relu')
+ self.assertListEqual(output.get_shape().as_list(), [5, height, width, 32])
+
+ def testCreateFullyConv(self):
+ height, width = 6, 6
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 32), seed=1)
+ output = ops.conv2d(images, 64, images.get_shape()[1:3], padding='VALID')
+ self.assertEquals(output.op.name, 'Conv/Relu')
+ self.assertListEqual(output.get_shape().as_list(), [5, 1, 1, 64])
+
+ def testCreateVerticalConv(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ output = ops.conv2d(images, 32, [3, 1])
+ self.assertEquals(output.op.name, 'Conv/Relu')
+ self.assertListEqual(output.get_shape().as_list(),
+ [5, height, width, 32])
+
+ def testCreateHorizontalConv(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ output = ops.conv2d(images, 32, [1, 3])
+ self.assertEquals(output.op.name, 'Conv/Relu')
+ self.assertListEqual(output.get_shape().as_list(),
+ [5, height, width, 32])
+
+ def testCreateConvWithStride(self):
+ height, width = 6, 6
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ output = ops.conv2d(images, 32, [3, 3], stride=2)
+ self.assertEquals(output.op.name, 'Conv/Relu')
+ self.assertListEqual(output.get_shape().as_list(),
+ [5, height/2, width/2, 32])
+
+ def testCreateConvCreatesWeightsAndBiasesVars(self):
+ height, width = 3, 3
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ with self.test_session():
+ self.assertFalse(variables.get_variables('conv1/weights'))
+ self.assertFalse(variables.get_variables('conv1/biases'))
+ ops.conv2d(images, 32, [3, 3], scope='conv1')
+ self.assertTrue(variables.get_variables('conv1/weights'))
+ self.assertTrue(variables.get_variables('conv1/biases'))
+
+ def testCreateConvWithScope(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ output = ops.conv2d(images, 32, [3, 3], scope='conv1')
+ self.assertEquals(output.op.name, 'conv1/Relu')
+
+ def testCreateConvWithoutActivation(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ output = ops.conv2d(images, 32, [3, 3], activation=None)
+ self.assertEquals(output.op.name, 'Conv/BiasAdd')
+
+ def testCreateConvValid(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ output = ops.conv2d(images, 32, [3, 3], padding='VALID')
+ self.assertListEqual(output.get_shape().as_list(), [5, 1, 1, 32])
+
+ def testCreateConvWithWD(self):
+ height, width = 3, 3
+ with self.test_session() as sess:
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ ops.conv2d(images, 32, [3, 3], weight_decay=0.01)
+ wd = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)[0]
+ self.assertEquals(wd.op.name,
+ 'Conv/weights/Regularizer/L2Regularizer/value')
+ sess.run(tf.global_variables_initializer())
+ self.assertTrue(sess.run(wd) <= 0.01)
+
+ def testCreateConvWithoutWD(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ ops.conv2d(images, 32, [3, 3], weight_decay=0)
+ self.assertEquals(
+ tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES), [])
+
+ def testReuseVars(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ ops.conv2d(images, 32, [3, 3], scope='conv1')
+ self.assertEquals(len(variables.get_variables()), 2)
+ ops.conv2d(images, 32, [3, 3], scope='conv1', reuse=True)
+ self.assertEquals(len(variables.get_variables()), 2)
+
+ def testNonReuseVars(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ ops.conv2d(images, 32, [3, 3])
+ self.assertEquals(len(variables.get_variables()), 2)
+ ops.conv2d(images, 32, [3, 3])
+ self.assertEquals(len(variables.get_variables()), 4)
+
+ def testReuseConvWithWD(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ ops.conv2d(images, 32, [3, 3], weight_decay=0.01, scope='conv1')
+ self.assertEquals(len(variables.get_variables()), 2)
+ self.assertEquals(
+ len(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)), 1)
+ ops.conv2d(images, 32, [3, 3], weight_decay=0.01, scope='conv1',
+ reuse=True)
+ self.assertEquals(len(variables.get_variables()), 2)
+ self.assertEquals(
+ len(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)), 1)
+
+ def testConvWithBatchNorm(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 32), seed=1)
+ with scopes.arg_scope([ops.conv2d], batch_norm_params={'decay': 0.9}):
+ net = ops.conv2d(images, 32, [3, 3])
+ net = ops.conv2d(net, 32, [3, 3])
+ self.assertEquals(len(variables.get_variables()), 8)
+ self.assertEquals(len(variables.get_variables('Conv/BatchNorm')), 3)
+ self.assertEquals(len(variables.get_variables('Conv_1/BatchNorm')), 3)
+
+ def testReuseConvWithBatchNorm(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 32), seed=1)
+ with scopes.arg_scope([ops.conv2d], batch_norm_params={'decay': 0.9}):
+ net = ops.conv2d(images, 32, [3, 3], scope='Conv')
+ net = ops.conv2d(net, 32, [3, 3], scope='Conv', reuse=True)
+ self.assertEquals(len(variables.get_variables()), 4)
+ self.assertEquals(len(variables.get_variables('Conv/BatchNorm')), 3)
+ self.assertEquals(len(variables.get_variables('Conv_1/BatchNorm')), 0)
+
+
+class FCTest(tf.test.TestCase):
+
+ def testCreateFC(self):
+ height, width = 3, 3
+ with self.test_session():
+ inputs = tf.random_uniform((5, height * width * 3), seed=1)
+ output = ops.fc(inputs, 32)
+ self.assertEquals(output.op.name, 'FC/Relu')
+ self.assertListEqual(output.get_shape().as_list(), [5, 32])
+
+ def testCreateFCWithScope(self):
+ height, width = 3, 3
+ with self.test_session():
+ inputs = tf.random_uniform((5, height * width * 3), seed=1)
+ output = ops.fc(inputs, 32, scope='fc1')
+ self.assertEquals(output.op.name, 'fc1/Relu')
+
+ def testCreateFcCreatesWeightsAndBiasesVars(self):
+ height, width = 3, 3
+ inputs = tf.random_uniform((5, height * width * 3), seed=1)
+ with self.test_session():
+ self.assertFalse(variables.get_variables('fc1/weights'))
+ self.assertFalse(variables.get_variables('fc1/biases'))
+ ops.fc(inputs, 32, scope='fc1')
+ self.assertTrue(variables.get_variables('fc1/weights'))
+ self.assertTrue(variables.get_variables('fc1/biases'))
+
+ def testReuseVars(self):
+ height, width = 3, 3
+ inputs = tf.random_uniform((5, height * width * 3), seed=1)
+ with self.test_session():
+ ops.fc(inputs, 32, scope='fc1')
+ self.assertEquals(len(variables.get_variables('fc1')), 2)
+ ops.fc(inputs, 32, scope='fc1', reuse=True)
+ self.assertEquals(len(variables.get_variables('fc1')), 2)
+
+ def testNonReuseVars(self):
+ height, width = 3, 3
+ inputs = tf.random_uniform((5, height * width * 3), seed=1)
+ with self.test_session():
+ ops.fc(inputs, 32)
+ self.assertEquals(len(variables.get_variables('FC')), 2)
+ ops.fc(inputs, 32)
+ self.assertEquals(len(variables.get_variables('FC')), 4)
+
+ def testCreateFCWithoutActivation(self):
+ height, width = 3, 3
+ with self.test_session():
+ inputs = tf.random_uniform((5, height * width * 3), seed=1)
+ output = ops.fc(inputs, 32, activation=None)
+ self.assertEquals(output.op.name, 'FC/xw_plus_b')
+
+ def testCreateFCWithWD(self):
+ height, width = 3, 3
+ with self.test_session() as sess:
+ inputs = tf.random_uniform((5, height * width * 3), seed=1)
+ ops.fc(inputs, 32, weight_decay=0.01)
+ wd = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)[0]
+ self.assertEquals(wd.op.name,
+ 'FC/weights/Regularizer/L2Regularizer/value')
+ sess.run(tf.global_variables_initializer())
+ self.assertTrue(sess.run(wd) <= 0.01)
+
+ def testCreateFCWithoutWD(self):
+ height, width = 3, 3
+ with self.test_session():
+ inputs = tf.random_uniform((5, height * width * 3), seed=1)
+ ops.fc(inputs, 32, weight_decay=0)
+ self.assertEquals(
+ tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES), [])
+
+ def testReuseFCWithWD(self):
+ height, width = 3, 3
+ with self.test_session():
+ inputs = tf.random_uniform((5, height * width * 3), seed=1)
+ ops.fc(inputs, 32, weight_decay=0.01, scope='fc')
+ self.assertEquals(len(variables.get_variables()), 2)
+ self.assertEquals(
+ len(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)), 1)
+ ops.fc(inputs, 32, weight_decay=0.01, scope='fc', reuse=True)
+ self.assertEquals(len(variables.get_variables()), 2)
+ self.assertEquals(
+ len(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)), 1)
+
+ def testFCWithBatchNorm(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height * width * 3), seed=1)
+ with scopes.arg_scope([ops.fc], batch_norm_params={}):
+ net = ops.fc(images, 27)
+ net = ops.fc(net, 27)
+ self.assertEquals(len(variables.get_variables()), 8)
+ self.assertEquals(len(variables.get_variables('FC/BatchNorm')), 3)
+ self.assertEquals(len(variables.get_variables('FC_1/BatchNorm')), 3)
+
+ def testReuseFCWithBatchNorm(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height * width * 3), seed=1)
+ with scopes.arg_scope([ops.fc], batch_norm_params={'decay': 0.9}):
+ net = ops.fc(images, 27, scope='fc1')
+ net = ops.fc(net, 27, scope='fc1', reuse=True)
+ self.assertEquals(len(variables.get_variables()), 4)
+ self.assertEquals(len(variables.get_variables('fc1/BatchNorm')), 3)
+
+
+class MaxPoolTest(tf.test.TestCase):
+
+ def testCreateMaxPool(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ output = ops.max_pool(images, [3, 3])
+ self.assertEquals(output.op.name, 'MaxPool/MaxPool')
+ self.assertListEqual(output.get_shape().as_list(), [5, 1, 1, 3])
+
+ def testCreateSquareMaxPool(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ output = ops.max_pool(images, 3)
+ self.assertEquals(output.op.name, 'MaxPool/MaxPool')
+ self.assertListEqual(output.get_shape().as_list(), [5, 1, 1, 3])
+
+ def testCreateMaxPoolWithScope(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ output = ops.max_pool(images, [3, 3], scope='pool1')
+ self.assertEquals(output.op.name, 'pool1/MaxPool')
+
+ def testCreateMaxPoolSAME(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ output = ops.max_pool(images, [3, 3], padding='SAME')
+ self.assertListEqual(output.get_shape().as_list(), [5, 2, 2, 3])
+
+ def testCreateMaxPoolStrideSAME(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ output = ops.max_pool(images, [3, 3], stride=1, padding='SAME')
+ self.assertListEqual(output.get_shape().as_list(), [5, height, width, 3])
+
+ def testGlobalMaxPool(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ output = ops.max_pool(images, images.get_shape()[1:3], stride=1)
+ self.assertListEqual(output.get_shape().as_list(), [5, 1, 1, 3])
+
+
+class AvgPoolTest(tf.test.TestCase):
+
+ def testCreateAvgPool(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ output = ops.avg_pool(images, [3, 3])
+ self.assertEquals(output.op.name, 'AvgPool/AvgPool')
+ self.assertListEqual(output.get_shape().as_list(), [5, 1, 1, 3])
+
+ def testCreateSquareAvgPool(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ output = ops.avg_pool(images, 3)
+ self.assertEquals(output.op.name, 'AvgPool/AvgPool')
+ self.assertListEqual(output.get_shape().as_list(), [5, 1, 1, 3])
+
+ def testCreateAvgPoolWithScope(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ output = ops.avg_pool(images, [3, 3], scope='pool1')
+ self.assertEquals(output.op.name, 'pool1/AvgPool')
+
+ def testCreateAvgPoolSAME(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ output = ops.avg_pool(images, [3, 3], padding='SAME')
+ self.assertListEqual(output.get_shape().as_list(), [5, 2, 2, 3])
+
+ def testCreateAvgPoolStrideSAME(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ output = ops.avg_pool(images, [3, 3], stride=1, padding='SAME')
+ self.assertListEqual(output.get_shape().as_list(), [5, height, width, 3])
+
+ def testGlobalAvgPool(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ output = ops.avg_pool(images, images.get_shape()[1:3], stride=1)
+ self.assertListEqual(output.get_shape().as_list(), [5, 1, 1, 3])
+
+
+class OneHotEncodingTest(tf.test.TestCase):
+
+ def testOneHotEncodingCreate(self):
+ with self.test_session():
+ labels = tf.constant([0, 1, 2])
+ output = ops.one_hot_encoding(labels, num_classes=3)
+ self.assertEquals(output.op.name, 'OneHotEncoding/SparseToDense')
+ self.assertListEqual(output.get_shape().as_list(), [3, 3])
+
+ def testOneHotEncoding(self):
+ with self.test_session():
+ labels = tf.constant([0, 1, 2])
+ one_hot_labels = tf.constant([[1, 0, 0],
+ [0, 1, 0],
+ [0, 0, 1]])
+ output = ops.one_hot_encoding(labels, num_classes=3)
+ self.assertAllClose(output.eval(), one_hot_labels.eval())
+
+
+class DropoutTest(tf.test.TestCase):
+
+ def testCreateDropout(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ output = ops.dropout(images)
+ self.assertEquals(output.op.name, 'Dropout/dropout/mul')
+ output.get_shape().assert_is_compatible_with(images.get_shape())
+
+ def testCreateDropoutNoTraining(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1, name='images')
+ output = ops.dropout(images, is_training=False)
+ self.assertEquals(output, images)
+
+
+class FlattenTest(tf.test.TestCase):
+
+ def testFlatten4D(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1, name='images')
+ output = ops.flatten(images)
+ self.assertEquals(output.get_shape().num_elements(),
+ images.get_shape().num_elements())
+ self.assertEqual(output.get_shape()[0], images.get_shape()[0])
+
+ def testFlatten3D(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width), seed=1, name='images')
+ output = ops.flatten(images)
+ self.assertEquals(output.get_shape().num_elements(),
+ images.get_shape().num_elements())
+ self.assertEqual(output.get_shape()[0], images.get_shape()[0])
+
+ def testFlattenBatchSize(self):
+ height, width = 3, 3
+ with self.test_session() as sess:
+ images = tf.random_uniform((5, height, width, 3), seed=1, name='images')
+ inputs = tf.placeholder(tf.int32, (None, height, width, 3))
+ output = ops.flatten(inputs)
+ self.assertEquals(output.get_shape().as_list(),
+ [None, height * width * 3])
+ output = sess.run(output, {inputs: images.eval()})
+ self.assertEquals(output.size,
+ images.get_shape().num_elements())
+ self.assertEqual(output.shape[0], images.get_shape()[0])
+
+
+class BatchNormTest(tf.test.TestCase):
+
+ def testCreateOp(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ output = ops.batch_norm(images)
+ self.assertTrue(output.op.name.startswith('BatchNorm/batchnorm'))
+ self.assertListEqual(output.get_shape().as_list(), [5, height, width, 3])
+
+ def testCreateVariables(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ ops.batch_norm(images)
+ beta = variables.get_variables_by_name('beta')[0]
+ self.assertEquals(beta.op.name, 'BatchNorm/beta')
+ gamma = variables.get_variables_by_name('gamma')
+ self.assertEquals(gamma, [])
+ moving_mean = tf.moving_average_variables()[0]
+ moving_variance = tf.moving_average_variables()[1]
+ self.assertEquals(moving_mean.op.name, 'BatchNorm/moving_mean')
+ self.assertEquals(moving_variance.op.name, 'BatchNorm/moving_variance')
+
+ def testCreateVariablesWithScale(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ ops.batch_norm(images, scale=True)
+ beta = variables.get_variables_by_name('beta')[0]
+ gamma = variables.get_variables_by_name('gamma')[0]
+ self.assertEquals(beta.op.name, 'BatchNorm/beta')
+ self.assertEquals(gamma.op.name, 'BatchNorm/gamma')
+ moving_mean = tf.moving_average_variables()[0]
+ moving_variance = tf.moving_average_variables()[1]
+ self.assertEquals(moving_mean.op.name, 'BatchNorm/moving_mean')
+ self.assertEquals(moving_variance.op.name, 'BatchNorm/moving_variance')
+
+ def testCreateVariablesWithoutCenterWithScale(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ ops.batch_norm(images, center=False, scale=True)
+ beta = variables.get_variables_by_name('beta')
+ self.assertEquals(beta, [])
+ gamma = variables.get_variables_by_name('gamma')[0]
+ self.assertEquals(gamma.op.name, 'BatchNorm/gamma')
+ moving_mean = tf.moving_average_variables()[0]
+ moving_variance = tf.moving_average_variables()[1]
+ self.assertEquals(moving_mean.op.name, 'BatchNorm/moving_mean')
+ self.assertEquals(moving_variance.op.name, 'BatchNorm/moving_variance')
+
+ def testCreateVariablesWithoutCenterWithoutScale(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ ops.batch_norm(images, center=False, scale=False)
+ beta = variables.get_variables_by_name('beta')
+ self.assertEquals(beta, [])
+ gamma = variables.get_variables_by_name('gamma')
+ self.assertEquals(gamma, [])
+ moving_mean = tf.moving_average_variables()[0]
+ moving_variance = tf.moving_average_variables()[1]
+ self.assertEquals(moving_mean.op.name, 'BatchNorm/moving_mean')
+ self.assertEquals(moving_variance.op.name, 'BatchNorm/moving_variance')
+
+ def testMovingAverageVariables(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ ops.batch_norm(images, scale=True)
+ moving_mean = tf.moving_average_variables()[0]
+ moving_variance = tf.moving_average_variables()[1]
+ self.assertEquals(moving_mean.op.name, 'BatchNorm/moving_mean')
+ self.assertEquals(moving_variance.op.name, 'BatchNorm/moving_variance')
+
+ def testUpdateOps(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ ops.batch_norm(images)
+ update_ops = tf.get_collection(ops.UPDATE_OPS_COLLECTION)
+ update_moving_mean = update_ops[0]
+ update_moving_variance = update_ops[1]
+ self.assertEquals(update_moving_mean.op.name,
+ 'BatchNorm/AssignMovingAvg')
+ self.assertEquals(update_moving_variance.op.name,
+ 'BatchNorm/AssignMovingAvg_1')
+
+ def testReuseVariables(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ ops.batch_norm(images, scale=True, scope='bn')
+ ops.batch_norm(images, scale=True, scope='bn', reuse=True)
+ beta = variables.get_variables_by_name('beta')
+ gamma = variables.get_variables_by_name('gamma')
+ self.assertEquals(len(beta), 1)
+ self.assertEquals(len(gamma), 1)
+ moving_vars = tf.get_collection('moving_vars')
+ self.assertEquals(len(moving_vars), 2)
+
+ def testReuseUpdateOps(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ ops.batch_norm(images, scope='bn')
+ self.assertEquals(len(tf.get_collection(ops.UPDATE_OPS_COLLECTION)), 2)
+ ops.batch_norm(images, scope='bn', reuse=True)
+ self.assertEquals(len(tf.get_collection(ops.UPDATE_OPS_COLLECTION)), 4)
+
+ def testCreateMovingVars(self):
+ height, width = 3, 3
+ with self.test_session():
+ images = tf.random_uniform((5, height, width, 3), seed=1)
+ _ = ops.batch_norm(images, moving_vars='moving_vars')
+ moving_mean = tf.get_collection('moving_vars',
+ 'BatchNorm/moving_mean')
+ self.assertEquals(len(moving_mean), 1)
+ self.assertEquals(moving_mean[0].op.name, 'BatchNorm/moving_mean')
+ moving_variance = tf.get_collection('moving_vars',
+ 'BatchNorm/moving_variance')
+ self.assertEquals(len(moving_variance), 1)
+ self.assertEquals(moving_variance[0].op.name, 'BatchNorm/moving_variance')
+
+ def testComputeMovingVars(self):
+ height, width = 3, 3
+ with self.test_session() as sess:
+ image_shape = (10, height, width, 3)
+ image_values = np.random.rand(*image_shape)
+ expected_mean = np.mean(image_values, axis=(0, 1, 2))
+ expected_var = np.var(image_values, axis=(0, 1, 2))
+ images = tf.constant(image_values, shape=image_shape, dtype=tf.float32)
+ output = ops.batch_norm(images, decay=0.1)
+ update_ops = tf.get_collection(ops.UPDATE_OPS_COLLECTION)
+ with tf.control_dependencies(update_ops):
+ output = tf.identity(output)
+ # Initialize all variables
+ sess.run(tf.global_variables_initializer())
+ moving_mean = variables.get_variables('BatchNorm/moving_mean')[0]
+ moving_variance = variables.get_variables('BatchNorm/moving_variance')[0]
+ mean, variance = sess.run([moving_mean, moving_variance])
+ # After initialization moving_mean == 0 and moving_variance == 1.
+ self.assertAllClose(mean, [0] * 3)
+ self.assertAllClose(variance, [1] * 3)
+ for _ in range(10):
+ sess.run([output])
+ mean = moving_mean.eval()
+ variance = moving_variance.eval()
+ # After 10 updates with decay 0.1 moving_mean == expected_mean and
+ # moving_variance == expected_var.
+ self.assertAllClose(mean, expected_mean)
+ self.assertAllClose(variance, expected_var)
+
+ def testEvalMovingVars(self):
+ height, width = 3, 3
+ with self.test_session() as sess:
+ image_shape = (10, height, width, 3)
+ image_values = np.random.rand(*image_shape)
+ expected_mean = np.mean(image_values, axis=(0, 1, 2))
+ expected_var = np.var(image_values, axis=(0, 1, 2))
+ images = tf.constant(image_values, shape=image_shape, dtype=tf.float32)
+ output = ops.batch_norm(images, decay=0.1, is_training=False)
+ update_ops = tf.get_collection(ops.UPDATE_OPS_COLLECTION)
+ with tf.control_dependencies(update_ops):
+ output = tf.identity(output)
+ # Initialize all variables
+ sess.run(tf.global_variables_initializer())
+ moving_mean = variables.get_variables('BatchNorm/moving_mean')[0]
+ moving_variance = variables.get_variables('BatchNorm/moving_variance')[0]
+ mean, variance = sess.run([moving_mean, moving_variance])
+ # After initialization moving_mean == 0 and moving_variance == 1.
+ self.assertAllClose(mean, [0] * 3)
+ self.assertAllClose(variance, [1] * 3)
+ # Simulate assignment from saver restore.
+ init_assigns = [tf.assign(moving_mean, expected_mean),
+ tf.assign(moving_variance, expected_var)]
+ sess.run(init_assigns)
+ for _ in range(10):
+ sess.run([output], {images: np.random.rand(*image_shape)})
+ mean = moving_mean.eval()
+ variance = moving_variance.eval()
+ # Although we feed different images, the moving_mean and moving_variance
+ # shouldn't change.
+ self.assertAllClose(mean, expected_mean)
+ self.assertAllClose(variance, expected_var)
+
+ def testReuseVars(self):
+ height, width = 3, 3
+ with self.test_session() as sess:
+ image_shape = (10, height, width, 3)
+ image_values = np.random.rand(*image_shape)
+ expected_mean = np.mean(image_values, axis=(0, 1, 2))
+ expected_var = np.var(image_values, axis=(0, 1, 2))
+ images = tf.constant(image_values, shape=image_shape, dtype=tf.float32)
+ output = ops.batch_norm(images, decay=0.1, is_training=False)
+ update_ops = tf.get_collection(ops.UPDATE_OPS_COLLECTION)
+ with tf.control_dependencies(update_ops):
+ output = tf.identity(output)
+ # Initialize all variables
+ sess.run(tf.global_variables_initializer())
+ moving_mean = variables.get_variables('BatchNorm/moving_mean')[0]
+ moving_variance = variables.get_variables('BatchNorm/moving_variance')[0]
+ mean, variance = sess.run([moving_mean, moving_variance])
+ # After initialization moving_mean == 0 and moving_variance == 1.
+ self.assertAllClose(mean, [0] * 3)
+ self.assertAllClose(variance, [1] * 3)
+ # Simulate assignment from saver restore.
+ init_assigns = [tf.assign(moving_mean, expected_mean),
+ tf.assign(moving_variance, expected_var)]
+ sess.run(init_assigns)
+ for _ in range(10):
+ sess.run([output], {images: np.random.rand(*image_shape)})
+ mean = moving_mean.eval()
+ variance = moving_variance.eval()
+ # Although we feed different images, the moving_mean and moving_variance
+ # shouldn't change.
+ self.assertAllClose(mean, expected_mean)
+ self.assertAllClose(variance, expected_var)
+
+if __name__ == '__main__':
+ tf.test.main()
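
The testComputeMovingVars and testEvalMovingVars cases above exercise the contract this slim batch_norm imposes on callers: the moving-mean and moving-variance assign ops are collected under ops.UPDATE_OPS_COLLECTION and only run if the training step is made to depend on them. A minimal sketch of that wiring in a training graph follows; the loss is a stand-in for illustration, not part of the library:

    import tensorflow as tf
    from inception.slim import ops

    images = tf.random_uniform((8, 32, 32, 3))
    net = ops.batch_norm(images, decay=0.99)
    loss = tf.reduce_mean(net)  # illustrative stand-in loss

    # The moving-average updates do not run on their own; group them with the
    # gradient step so the statistics are refreshed every iteration.
    batchnorm_updates = tf.get_collection(ops.UPDATE_OPS_COLLECTION)
    grad_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
    train_op = tf.group(grad_step, *batchnorm_updates)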
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/scopes.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/scopes.py"
new file mode 100644
index 00000000..2c2fb0a2
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/scopes.py"
@@ -0,0 +1,170 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Contains the new arg_scope used for TF-Slim ops.
+
+ Allows one to define models much more compactly by eliminating boilerplate
+ code. This is accomplished through the use of argument scoping (arg_scope).
+
+ Example of how to use scopes.arg_scope:
+
+ with scopes.arg_scope(ops.conv2d, padding='SAME',
+ stddev=0.01, weight_decay=0.0005):
+ net = ops.conv2d(inputs, 64, [11, 11], 4, padding='VALID', scope='conv1')
+ net = ops.conv2d(net, 256, [5, 5], scope='conv2')
+
+ The first call to conv2d will overwrite padding:
+ ops.conv2d(inputs, 64, [11, 11], 4, padding='VALID',
+ stddev=0.01, weight_decay=0.0005, scope='conv1')
+
+ The second call to Conv will use predefined args:
+ ops.conv2d(inputs, 256, [5, 5], padding='SAME',
+ stddev=0.01, weight_decay=0.0005, scope='conv2')
+
+ Example of how to reuse an arg_scope:
+ with scopes.arg_scope(ops.conv2d, padding='SAME',
+ stddev=0.01, weight_decay=0.0005) as conv2d_arg_scope:
+ net = ops.conv2d(net, 256, [5, 5], scope='conv1')
+ ....
+
+ with scopes.arg_scope(conv2d_arg_scope):
+ net = ops.conv2d(net, 256, [5, 5], scope='conv2')
+
+ Example of how to use scopes.add_arg_scope:
+
+ @scopes.add_arg_scope
+ def conv2d(*args, **kwargs):
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import contextlib
+import functools
+
+from tensorflow.python.framework import ops
+
+_ARGSTACK_KEY = ("__arg_stack",)
+
+_DECORATED_OPS = set()
+
+
+def _get_arg_stack():
+ stack = ops.get_collection(_ARGSTACK_KEY)
+ if stack:
+ return stack[0]
+ else:
+ stack = [{}]
+ ops.add_to_collection(_ARGSTACK_KEY, stack)
+ return stack
+
+
+def _current_arg_scope():
+ stack = _get_arg_stack()
+ return stack[-1]
+
+
+def _add_op(op):
+ key_op = (op.__module__, op.__name__)
+ if key_op not in _DECORATED_OPS:
+ _DECORATED_OPS.add(key_op)
+
+
+@contextlib.contextmanager
+def arg_scope(list_ops_or_scope, **kwargs):
+ """Stores the default arguments for the given set of list_ops.
+
+ For usage, please see examples at top of the file.
+
+ Args:
+ list_ops_or_scope: List or tuple of operations to set argument scope for or
+ a dictionary containing the current scope. When list_ops_or_scope is a dict,
+ kwargs must be empty. When list_ops_or_scope is a list or tuple, then
+ every op in it needs to be decorated with @add_arg_scope to work.
+ **kwargs: keyword=value that will define the defaults for each op in
+ list_ops. All the ops need to accept the given set of arguments.
+
+ Yields:
+ the current_scope, which is a dictionary of {op: {arg: value}}
+ Raises:
+ TypeError: if list_ops is not a list or a tuple.
+ ValueError: if any op in list_ops has not been decorated with @add_arg_scope.
+ """
+ if isinstance(list_ops_or_scope, dict):
+ # Assumes that list_ops_or_scope is a scope that is being reused.
+ if kwargs:
+ raise ValueError("When attempting to re-use a scope by suppling a"
+ "dictionary, kwargs must be empty.")
+ current_scope = list_ops_or_scope.copy()
+ try:
+ _get_arg_stack().append(current_scope)
+ yield current_scope
+ finally:
+ _get_arg_stack().pop()
+ else:
+ # Assumes that list_ops_or_scope is a list/tuple of ops with kwargs.
+ if not isinstance(list_ops_or_scope, (list, tuple)):
+ raise TypeError("list_ops_or_scope must either be a list/tuple or reused"
+ "scope (i.e. dict)")
+ try:
+ current_scope = _current_arg_scope().copy()
+ for op in list_ops_or_scope:
+ key_op = (op.__module__, op.__name__)
+ if not has_arg_scope(op):
+ raise ValueError("%s is not decorated with @add_arg_scope", key_op)
+ if key_op in current_scope:
+ current_kwargs = current_scope[key_op].copy()
+ current_kwargs.update(kwargs)
+ current_scope[key_op] = current_kwargs
+ else:
+ current_scope[key_op] = kwargs.copy()
+ _get_arg_stack().append(current_scope)
+ yield current_scope
+ finally:
+ _get_arg_stack().pop()
+
+
+def add_arg_scope(func):
+ """Decorates a function with args so it can be used within an arg_scope.
+
+ Args:
+ func: function to decorate.
+
+ Returns:
+ The decorated function func_with_args().
+ """
+ @functools.wraps(func)
+ def func_with_args(*args, **kwargs):
+ current_scope = _current_arg_scope()
+ current_args = kwargs
+ key_func = (func.__module__, func.__name__)
+ if key_func in current_scope:
+ current_args = current_scope[key_func].copy()
+ current_args.update(kwargs)
+ return func(*args, **current_args)
+ _add_op(func)
+ return func_with_args
+
+
+def has_arg_scope(func):
+ """Checks whether a func has been decorated with @add_arg_scope or not.
+
+ Args:
+ func: function to check.
+
+ Returns:
+ a boolean.
+ """
+ key_op = (func.__module__, func.__name__)
+ return key_op in _DECORATED_OPS
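
To make the precedence rules described in the docstring concrete, here is a small hedged sketch of nested arg_scopes around the slim layer wrappers defined elsewhere in this diff (ops.conv2d, ops.fc, ops.flatten); the layer sizes are arbitrary. Call-site kwargs beat inner-scope defaults, which beat outer-scope defaults:

    import tensorflow as tf
    from inception.slim import ops, scopes

    inputs = tf.random_uniform((1, 224, 224, 3))

    # Outer scope: defaults shared by conv2d and fc. Inner scope: a narrower
    # default for conv2d only. 'conv2' overrides padding at the call site.
    with scopes.arg_scope([ops.conv2d, ops.fc], stddev=0.01, weight_decay=0.0005):
      with scopes.arg_scope([ops.conv2d], padding='VALID'):
        net = ops.conv2d(inputs, 64, [7, 7], stride=2, scope='conv1')
        net = ops.conv2d(net, 128, [3, 3], padding='SAME', scope='conv2')
      net = ops.flatten(net)
      net = ops.fc(net, 10, scope='logits')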
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/scopes_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/scopes_test.py"
new file mode 100644
index 00000000..cd349399
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/scopes_test.py"
@@ -0,0 +1,162 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests slim.scopes."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import tensorflow as tf
+from inception.slim import scopes
+
+
+@scopes.add_arg_scope
+def func1(*args, **kwargs):
+ return (args, kwargs)
+
+
+@scopes.add_arg_scope
+def func2(*args, **kwargs):
+ return (args, kwargs)
+
+
+class ArgScopeTest(tf.test.TestCase):
+
+ def testEmptyArgScope(self):
+ with self.test_session():
+ self.assertEqual(scopes._current_arg_scope(), {})
+
+ def testCurrentArgScope(self):
+ func1_kwargs = {'a': 1, 'b': None, 'c': [1]}
+ key_op = (func1.__module__, func1.__name__)
+ current_scope = {key_op: func1_kwargs.copy()}
+ with self.test_session():
+ with scopes.arg_scope([func1], a=1, b=None, c=[1]) as scope:
+ self.assertDictEqual(scope, current_scope)
+
+ def testCurrentArgScopeNested(self):
+ func1_kwargs = {'a': 1, 'b': None, 'c': [1]}
+ func2_kwargs = {'b': 2, 'd': [2]}
+ key = lambda f: (f.__module__, f.__name__)
+ current_scope = {key(func1): func1_kwargs.copy(),
+ key(func2): func2_kwargs.copy()}
+ with self.test_session():
+ with scopes.arg_scope([func1], a=1, b=None, c=[1]):
+ with scopes.arg_scope([func2], b=2, d=[2]) as scope:
+ self.assertDictEqual(scope, current_scope)
+
+ def testReuseArgScope(self):
+ func1_kwargs = {'a': 1, 'b': None, 'c': [1]}
+ key_op = (func1.__module__, func1.__name__)
+ current_scope = {key_op: func1_kwargs.copy()}
+ with self.test_session():
+ with scopes.arg_scope([func1], a=1, b=None, c=[1]) as scope1:
+ pass
+ with scopes.arg_scope(scope1) as scope:
+ self.assertDictEqual(scope, current_scope)
+
+ def testReuseArgScopeNested(self):
+ func1_kwargs = {'a': 1, 'b': None, 'c': [1]}
+ func2_kwargs = {'b': 2, 'd': [2]}
+ key = lambda f: (f.__module__, f.__name__)
+ current_scope1 = {key(func1): func1_kwargs.copy()}
+ current_scope2 = {key(func1): func1_kwargs.copy(),
+ key(func2): func2_kwargs.copy()}
+ with self.test_session():
+ with scopes.arg_scope([func1], a=1, b=None, c=[1]) as scope1:
+ with scopes.arg_scope([func2], b=2, d=[2]) as scope2:
+ pass
+ with scopes.arg_scope(scope1):
+ self.assertDictEqual(scopes._current_arg_scope(), current_scope1)
+ with scopes.arg_scope(scope2):
+ self.assertDictEqual(scopes._current_arg_scope(), current_scope2)
+
+ def testSimpleArgScope(self):
+ func1_args = (0,)
+ func1_kwargs = {'a': 1, 'b': None, 'c': [1]}
+ with self.test_session():
+ with scopes.arg_scope([func1], a=1, b=None, c=[1]):
+ args, kwargs = func1(0)
+ self.assertTupleEqual(args, func1_args)
+ self.assertDictEqual(kwargs, func1_kwargs)
+
+ def testSimpleArgScopeWithTuple(self):
+ func1_args = (0,)
+ func1_kwargs = {'a': 1, 'b': None, 'c': [1]}
+ with self.test_session():
+ with scopes.arg_scope((func1,), a=1, b=None, c=[1]):
+ args, kwargs = func1(0)
+ self.assertTupleEqual(args, func1_args)
+ self.assertDictEqual(kwargs, func1_kwargs)
+
+ def testOverwriteArgScope(self):
+ func1_args = (0,)
+ func1_kwargs = {'a': 1, 'b': 2, 'c': [1]}
+ with scopes.arg_scope([func1], a=1, b=None, c=[1]):
+ args, kwargs = func1(0, b=2)
+ self.assertTupleEqual(args, func1_args)
+ self.assertDictEqual(kwargs, func1_kwargs)
+
+ def testNestedArgScope(self):
+ func1_args = (0,)
+ func1_kwargs = {'a': 1, 'b': None, 'c': [1]}
+ with scopes.arg_scope([func1], a=1, b=None, c=[1]):
+ args, kwargs = func1(0)
+ self.assertTupleEqual(args, func1_args)
+ self.assertDictEqual(kwargs, func1_kwargs)
+ func1_kwargs['b'] = 2
+ with scopes.arg_scope([func1], b=2):
+ args, kwargs = func1(0)
+ self.assertTupleEqual(args, func1_args)
+ self.assertDictEqual(kwargs, func1_kwargs)
+
+ def testSharedArgScope(self):
+ func1_args = (0,)
+ func1_kwargs = {'a': 1, 'b': None, 'c': [1]}
+ with scopes.arg_scope([func1, func2], a=1, b=None, c=[1]):
+ args, kwargs = func1(0)
+ self.assertTupleEqual(args, func1_args)
+ self.assertDictEqual(kwargs, func1_kwargs)
+ args, kwargs = func2(0)
+ self.assertTupleEqual(args, func1_args)
+ self.assertDictEqual(kwargs, func1_kwargs)
+
+ def testSharedArgScopeTuple(self):
+ func1_args = (0,)
+ func1_kwargs = {'a': 1, 'b': None, 'c': [1]}
+ with scopes.arg_scope((func1, func2), a=1, b=None, c=[1]):
+ args, kwargs = func1(0)
+ self.assertTupleEqual(args, func1_args)
+ self.assertDictEqual(kwargs, func1_kwargs)
+ args, kwargs = func2(0)
+ self.assertTupleEqual(args, func1_args)
+ self.assertDictEqual(kwargs, func1_kwargs)
+
+ def testPartiallySharedArgScope(self):
+ func1_args = (0,)
+ func1_kwargs = {'a': 1, 'b': None, 'c': [1]}
+ func2_args = (1,)
+ func2_kwargs = {'a': 1, 'b': None, 'd': [2]}
+ with scopes.arg_scope([func1, func2], a=1, b=None):
+ with scopes.arg_scope([func1], c=[1]), scopes.arg_scope([func2], d=[2]):
+ args, kwargs = func1(0)
+ self.assertTupleEqual(args, func1_args)
+ self.assertDictEqual(kwargs, func1_kwargs)
+ args, kwargs = func2(1)
+ self.assertTupleEqual(args, func2_args)
+ self.assertDictEqual(kwargs, func2_kwargs)
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/slim.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/slim.py"
new file mode 100644
index 00000000..b7a5c0f8
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/slim.py"
@@ -0,0 +1,24 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""TF-Slim grouped API. Please see README.md for details and usage."""
+# pylint: disable=unused-import
+
+# Collapse tf-slim into a single namespace.
+from inception.slim import inception_model as inception
+from inception.slim import losses
+from inception.slim import ops
+from inception.slim import scopes
+from inception.slim import variables
+from inception.slim.scopes import arg_scope
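
A short sketch of the intended consumption pattern for this grouped namespace, assuming the rest of the package is on the import path; the layer name and sizes are illustrative:

    import tensorflow as tf
    from inception.slim import slim

    images = tf.random_uniform((1, 224, 224, 3))
    with slim.arg_scope([slim.ops.conv2d], stddev=0.01, weight_decay=0.0005):
      net = slim.ops.conv2d(images, 64, [3, 3], scope='conv1')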
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/variables.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/variables.py"
new file mode 100644
index 00000000..1d967b79
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/variables.py"
@@ -0,0 +1,289 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Contains convenience wrappers for creating variables in TF-Slim.
+
+The variables module is typically used for defining model variables from the
+ops routines (see slim.ops). Such variables are used for training, evaluation
+and inference of models.
+
+All variables created through this module are added to the MODEL_VARIABLES
+collection. If you create a model variable outside slim, it can be added with
+slim.variables.add_variable(external_variable, reuse).
+
+Usage:
+ weights_initializer = tf.truncated_normal_initializer(stddev=0.01)
+ l2_regularizer = lambda t: losses.l2_loss(t, weight=0.0005)
+ weights = variables.variable('weights',
+ shape=[100, 100],
+ initializer=weights_initializer,
+ regularizer=l2_regularizer,
+ device='/cpu:0')
+
+ biases = variables.variable('biases',
+ shape=[100],
+ initializer=tf.zeros_initializer(),
+ device='/cpu:0')
+
+ # More complex example.
+
+ net = slim.ops.conv2d(input, 32, [3, 3], scope='conv1')
+ net = slim.ops.conv2d(net, 64, [3, 3], scope='conv2')
+ with slim.arg_scope([variables.variable], restore=False):
+ net = slim.ops.conv2d(net, 64, [3, 3], scope='conv3')
+
+ # Get all model variables from all the layers.
+ model_variables = slim.variables.get_variables()
+
+ # Get all model variables from a specific layer, e.g. 'conv1'.
+ conv1_variables = slim.variables.get_variables('conv1')
+
+ # Get all weights from all the layers.
+ weights = slim.variables.get_variables_by_name('weights')
+
+ # Get all biases from all the layers.
+ biases = slim.variables.get_variables_by_name('biases')
+
+ # Get all variables to restore.
+ # (i.e. only those created by 'conv1' and 'conv2')
+ variables_to_restore = slim.variables.get_variables_to_restore()
+
+************************************************
+* Initializing model variables from a checkpoint
+************************************************
+
+# Create some variables.
+v1 = slim.variables.variable(name="v1", ..., restore=False)
+v2 = slim.variables.variable(name="v2", ...) # By default restore=True
+...
+# The list of variables to restore should only contain 'v2'.
+variables_to_restore = slim.variables.get_variables_to_restore()
+restorer = tf.train.Saver(variables_to_restore)
+with tf.Session() as sess:
+ # Restore variables from disk.
+ restorer.restore(sess, "/tmp/model.ckpt")
+ print("Model restored.")
+ # Do some work with the model
+ ...
+
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from inception.slim import scopes
+
+# Collection containing all the variables created using slim.variables
+MODEL_VARIABLES = '_model_variables_'
+
+# Collection containing the slim.variables that are created with restore=True.
+VARIABLES_TO_RESTORE = '_variables_to_restore_'
+
+
+def add_variable(var, restore=True):
+ """Adds a variable to the MODEL_VARIABLES collection.
+
+ Optionally it will add the variable to the VARIABLES_TO_RESTORE collection.
+ Args:
+ var: a variable.
+ restore: whether the variable should be added to the
+ VARIABLES_TO_RESTORE collection.
+
+ """
+ collections = [MODEL_VARIABLES]
+ if restore:
+ collections.append(VARIABLES_TO_RESTORE)
+ for collection in collections:
+ if var not in tf.get_collection(collection):
+ tf.add_to_collection(collection, var)
+
+
+def get_variables(scope=None, suffix=None):
+ """Gets the list of variables, filtered by scope and/or suffix.
+
+ Args:
+ scope: an optional scope for filtering the variables to return.
+ suffix: an optional suffix for filtering the variables to return.
+
+ Returns:
+ a copied list of variables with scope and suffix.
+ """
+ candidates = tf.get_collection(MODEL_VARIABLES, scope)[:]
+ if suffix is not None:
+ candidates = [var for var in candidates if var.op.name.endswith(suffix)]
+ return candidates
+
+
+def get_variables_to_restore():
+ """Gets the list of variables to restore.
+
+ Returns:
+ a copied list of variables.
+ """
+ return tf.get_collection(VARIABLES_TO_RESTORE)[:]
+
+
+def get_variables_by_name(given_name, scope=None):
+ """Gets the list of variables that were given that name.
+
+ Args:
+ given_name: name given to the variable without scope.
+ scope: an optional scope for filtering the variables to return.
+
+ Returns:
+ a copied list of variables with the given name and prefix.
+ """
+ return get_variables(scope=scope, suffix=given_name)
+
+
+def get_unique_variable(name):
+ """Gets the variable uniquely identified by that name.
+
+ Args:
+ name: a name that uniquely identifies the variable.
+
+ Returns:
+ a tensorflow variable.
+
+ Raises:
+ ValueError: if no variable uniquely identified by the name exists.
+ """
+ candidates = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, name)
+ if not candidates:
+ raise ValueError("Couldn't find variable %s" % name)
+
+ for candidate in candidates:
+ if candidate.op.name == name:
+ return candidate
+ raise ValueError('Variable %s does not uniquely identify a variable' % name)
+
+
+class VariableDeviceChooser(object):
+ """Slim device chooser for variables.
+
+ When parameter servers are used, variables are assigned to them in a
+ round-robin fashion; otherwise the variable is placed on `placement` (CPU:0 by default, or GPU:0).
+ """
+
+ def __init__(self,
+ num_parameter_servers=0,
+ ps_device='/job:ps',
+ placement='CPU:0'):
+ """Initialize VariableDeviceChooser.
+
+ Args:
+ num_parameter_servers: number of parameter servers.
+ ps_device: string representing the parameter server device.
+ placement: string representing the placement of the variable, either CPU:0
+ or GPU:0. When using parameter servers, placement is forced to CPU:0.
+ """
+ self._num_ps = num_parameter_servers
+ self._ps_device = ps_device
+ self._placement = placement if num_parameter_servers == 0 else 'CPU:0'
+ self._next_task_id = 0
+
+ def __call__(self, op):
+ device_string = ''
+ if self._num_ps > 0:
+ task_id = self._next_task_id
+ self._next_task_id = (self._next_task_id + 1) % self._num_ps
+ device_string = '%s/task:%d' % (self._ps_device, task_id)
+ device_string += '/%s' % self._placement
+ return device_string
+
+
+# TODO(sguada) Remove once get_variable is able to colocate op.devices.
+def variable_device(device, name):
+ """Fix the variable device to colocate its ops."""
+ if callable(device):
+ var_name = tf.get_variable_scope().name + '/' + name
+ var_def = tf.NodeDef(name=var_name, op='Variable')
+ device = device(var_def)
+ if device is None:
+ device = ''
+ return device
+
+
+@scopes.add_arg_scope
+def global_step(device=''):
+ """Returns the global step variable.
+
+ Args:
+ device: Optional device to place the variable. It can be a string or a
+ function that is called to get the device for the variable.
+
+ Returns:
+ the tensor representing the global step variable.
+ """
+ global_step_ref = tf.get_collection(tf.GraphKeys.GLOBAL_STEP)
+ if global_step_ref:
+ return global_step_ref[0]
+ else:
+ collections = [
+ VARIABLES_TO_RESTORE,
+ tf.GraphKeys.GLOBAL_VARIABLES,
+ tf.GraphKeys.GLOBAL_STEP,
+ ]
+ # Get the device for the variable.
+ with tf.device(variable_device(device, 'global_step')):
+ return tf.get_variable('global_step', shape=[], dtype=tf.int64,
+ initializer=tf.zeros_initializer(),
+ trainable=False, collections=collections)
+
+
+@scopes.add_arg_scope
+def variable(name, shape=None, dtype=tf.float32, initializer=None,
+ regularizer=None, trainable=True, collections=None, device='',
+ restore=True):
+ """Gets an existing variable with these parameters or creates a new one.
+
+ It also adds itself to a group with its name.
+
+ Args:
+ name: the name of the new or existing variable.
+ shape: shape of the new or existing variable.
+ dtype: type of the new or existing variable (defaults to `DT_FLOAT`).
+ initializer: initializer for the variable if one is created.
+ regularizer: a (Tensor -> Tensor or None) function; the result of
+ applying it on a newly created variable will be added to the collection
+ GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
+ trainable: If `True` also add the variable to the graph collection
+ `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
+ collections: A list of collection names to which the Variable will be added.
+ Note that the variable is always also added to the tf.GraphKeys.GLOBAL_VARIABLES
+ and MODEL_VARIABLES collections.
+ device: Optional device to place the variable. It can be a string or a
+ function that is called to get the device for the variable.
+ restore: whether the variable should be added to the
+ VARIABLES_TO_RESTORE collection.
+
+ Returns:
+ The created or existing variable.
+ """
+ collections = list(collections or [])
+
+ # Make sure variables are added to tf.GraphKeys.GLOBAL_VARIABLES and MODEL_VARIABLES
+ collections += [tf.GraphKeys.GLOBAL_VARIABLES, MODEL_VARIABLES]
+ # Add to VARIABLES_TO_RESTORE if necessary
+ if restore:
+ collections.append(VARIABLES_TO_RESTORE)
+ # Remove duplicates
+ collections = set(collections)
+ # Get the device for the variable.
+ with tf.device(variable_device(device, name)):
+ return tf.get_variable(name, shape=shape, dtype=dtype,
+ initializer=initializer, regularizer=regularizer,
+ trainable=trainable, collections=collections)
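
Beyond the restore example in the module docstring, a hedged sketch of combining the device chooser and the restore flag through arg_scope; the shapes and initializers here are arbitrary:

    import tensorflow as tf
    from inception.slim import scopes, variables

    # Round-robin placement across two parameter-server tasks; the chooser
    # forces CPU placement whenever num_parameter_servers > 0.
    device_fn = variables.VariableDeviceChooser(num_parameter_servers=2)
    with scopes.arg_scope([variables.variable], device=device_fn):
      weights = variables.variable(
          'weights', shape=[3, 3, 3, 16],
          initializer=tf.truncated_normal_initializer(stddev=0.01))
      biases = variables.variable(
          'biases', shape=[16], initializer=tf.zeros_initializer(),
          restore=False)

    # Only variables created with restore=True (the default) reach the saver.
    saver = tf.train.Saver(variables.get_variables_to_restore())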
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/variables_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/variables_test.py"
new file mode 100644
index 00000000..b8c1944d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/inception/inception/slim/variables_test.py"
@@ -0,0 +1,392 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for slim.variables."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from inception.slim import scopes
+from inception.slim import variables
+
+
+class VariablesTest(tf.test.TestCase):
+
+ def testCreateVariable(self):
+ with self.test_session():
+ with tf.variable_scope('A'):
+ a = variables.variable('a', [5])
+ self.assertEquals(a.op.name, 'A/a')
+ self.assertListEqual(a.get_shape().as_list(), [5])
+
+ def testGetVariables(self):
+ with self.test_session():
+ with tf.variable_scope('A'):
+ a = variables.variable('a', [5])
+ with tf.variable_scope('B'):
+ b = variables.variable('a', [5])
+ self.assertEquals([a, b], variables.get_variables())
+ self.assertEquals([a], variables.get_variables('A'))
+ self.assertEquals([b], variables.get_variables('B'))
+
+ def testGetVariablesSuffix(self):
+ with self.test_session():
+ with tf.variable_scope('A'):
+ a = variables.variable('a', [5])
+ with tf.variable_scope('A'):
+ b = variables.variable('b', [5])
+ self.assertEquals([a], variables.get_variables(suffix='a'))
+ self.assertEquals([b], variables.get_variables(suffix='b'))
+
+ def testGetVariableWithSingleVar(self):
+ with self.test_session():
+ with tf.variable_scope('parent'):
+ a = variables.variable('child', [5])
+ self.assertEquals(a, variables.get_unique_variable('parent/child'))
+
+ def testGetVariableWithDistractors(self):
+ with self.test_session():
+ with tf.variable_scope('parent'):
+ a = variables.variable('child', [5])
+ with tf.variable_scope('child'):
+ variables.variable('grandchild1', [7])
+ variables.variable('grandchild2', [9])
+ self.assertEquals(a, variables.get_unique_variable('parent/child'))
+
+ def testGetVariableThrowsExceptionWithNoMatch(self):
+ var_name = 'cant_find_me'
+ with self.test_session():
+ with self.assertRaises(ValueError):
+ variables.get_unique_variable(var_name)
+
+ def testGetThrowsExceptionWithChildrenButNoMatch(self):
+ var_name = 'parent/child'
+ with self.test_session():
+ with tf.variable_scope(var_name):
+ variables.variable('grandchild1', [7])
+ variables.variable('grandchild2', [9])
+ with self.assertRaises(ValueError):
+ variables.get_unique_variable(var_name)
+
+ def testGetVariablesToRestore(self):
+ with self.test_session():
+ with tf.variable_scope('A'):
+ a = variables.variable('a', [5])
+ with tf.variable_scope('B'):
+ b = variables.variable('a', [5])
+ self.assertEquals([a, b], variables.get_variables_to_restore())
+
+ def testNoneGetVariablesToRestore(self):
+ with self.test_session():
+ with tf.variable_scope('A'):
+ a = variables.variable('a', [5], restore=False)
+ with tf.variable_scope('B'):
+ b = variables.variable('a', [5], restore=False)
+ self.assertEquals([], variables.get_variables_to_restore())
+ self.assertEquals([a, b], variables.get_variables())
+
+ def testGetMixedVariablesToRestore(self):
+ with self.test_session():
+ with tf.variable_scope('A'):
+ a = variables.variable('a', [5])
+ b = variables.variable('b', [5], restore=False)
+ with tf.variable_scope('B'):
+ c = variables.variable('c', [5])
+ d = variables.variable('d', [5], restore=False)
+ self.assertEquals([a, b, c, d], variables.get_variables())
+ self.assertEquals([a, c], variables.get_variables_to_restore())
+
+ def testReuseVariable(self):
+ with self.test_session():
+ with tf.variable_scope('A'):
+ a = variables.variable('a', [])
+ with tf.variable_scope('A', reuse=True):
+ b = variables.variable('a', [])
+ self.assertEquals(a, b)
+ self.assertListEqual([a], variables.get_variables())
+
+ def testVariableWithDevice(self):
+ with self.test_session():
+ with tf.variable_scope('A'):
+ a = variables.variable('a', [], device='cpu:0')
+ b = variables.variable('b', [], device='cpu:1')
+ self.assertDeviceEqual(a.device, 'cpu:0')
+ self.assertDeviceEqual(b.device, 'cpu:1')
+
+ def testVariableWithDeviceFromScope(self):
+ with self.test_session():
+ with tf.device('/cpu:0'):
+ a = variables.variable('a', [])
+ b = variables.variable('b', [], device='cpu:1')
+ self.assertDeviceEqual(a.device, 'cpu:0')
+ self.assertDeviceEqual(b.device, 'cpu:1')
+
+ def testVariableWithDeviceFunction(self):
+ class DevFn(object):
+
+ def __init__(self):
+ self.counter = -1
+
+ def __call__(self, op):
+ self.counter += 1
+ return 'cpu:%d' % self.counter
+
+ with self.test_session():
+ with scopes.arg_scope([variables.variable], device=DevFn()):
+ a = variables.variable('a', [])
+ b = variables.variable('b', [])
+ c = variables.variable('c', [], device='cpu:12')
+ d = variables.variable('d', [])
+ with tf.device('cpu:99'):
+ e_init = tf.constant(12)
+ e = variables.variable('e', initializer=e_init)
+ self.assertDeviceEqual(a.device, 'cpu:0')
+ self.assertDeviceEqual(a.initial_value.device, 'cpu:0')
+ self.assertDeviceEqual(b.device, 'cpu:1')
+ self.assertDeviceEqual(b.initial_value.device, 'cpu:1')
+ self.assertDeviceEqual(c.device, 'cpu:12')
+ self.assertDeviceEqual(c.initial_value.device, 'cpu:12')
+ self.assertDeviceEqual(d.device, 'cpu:2')
+ self.assertDeviceEqual(d.initial_value.device, 'cpu:2')
+ self.assertDeviceEqual(e.device, 'cpu:3')
+ self.assertDeviceEqual(e.initial_value.device, 'cpu:99')
+
+ def testVariableWithReplicaDeviceSetter(self):
+ with self.test_session():
+ with tf.device(tf.train.replica_device_setter(ps_tasks=2)):
+ a = variables.variable('a', [])
+ b = variables.variable('b', [])
+ c = variables.variable('c', [], device='cpu:12')
+ d = variables.variable('d', [])
+ with tf.device('cpu:99'):
+ e_init = tf.constant(12)
+ e = variables.variable('e', initializer=e_init)
+ # The values below highlight how the replica_device_setter puts initial
+ # values on the worker job, and how it merges explicit devices.
+ self.assertDeviceEqual(a.device, '/job:ps/task:0/cpu:0')
+ self.assertDeviceEqual(a.initial_value.device, '/job:worker/cpu:0')
+ self.assertDeviceEqual(b.device, '/job:ps/task:1/cpu:0')
+ self.assertDeviceEqual(b.initial_value.device, '/job:worker/cpu:0')
+ self.assertDeviceEqual(c.device, '/job:ps/task:0/cpu:12')
+ self.assertDeviceEqual(c.initial_value.device, '/job:worker/cpu:12')
+ self.assertDeviceEqual(d.device, '/job:ps/task:1/cpu:0')
+ self.assertDeviceEqual(d.initial_value.device, '/job:worker/cpu:0')
+ self.assertDeviceEqual(e.device, '/job:ps/task:0/cpu:0')
+ self.assertDeviceEqual(e.initial_value.device, '/job:worker/cpu:99')
+
+ def testVariableWithVariableDeviceChooser(self):
+
+ with tf.Graph().as_default():
+ device_fn = variables.VariableDeviceChooser(num_parameter_servers=2)
+ with scopes.arg_scope([variables.variable], device=device_fn):
+ a = variables.variable('a', [])
+ b = variables.variable('b', [])
+ c = variables.variable('c', [], device='cpu:12')
+ d = variables.variable('d', [])
+ with tf.device('cpu:99'):
+ e_init = tf.constant(12)
+ e = variables.variable('e', initializer=e_init)
+ # The values below highlight how the VariableDeviceChooser puts initial
+ # values on the same device as the variable job.
+ self.assertDeviceEqual(a.device, '/job:ps/task:0/cpu:0')
+ self.assertDeviceEqual(a.initial_value.device, a.device)
+ self.assertDeviceEqual(b.device, '/job:ps/task:1/cpu:0')
+ self.assertDeviceEqual(b.initial_value.device, b.device)
+ self.assertDeviceEqual(c.device, '/cpu:12')
+ self.assertDeviceEqual(c.initial_value.device, c.device)
+ self.assertDeviceEqual(d.device, '/job:ps/task:0/cpu:0')
+ self.assertDeviceEqual(d.initial_value.device, d.device)
+ self.assertDeviceEqual(e.device, '/job:ps/task:1/cpu:0')
+ self.assertDeviceEqual(e.initial_value.device, '/cpu:99')
+
+ def testVariableGPUPlacement(self):
+
+ with tf.Graph().as_default():
+ device_fn = variables.VariableDeviceChooser(placement='gpu:0')
+ with scopes.arg_scope([variables.variable], device=device_fn):
+ a = variables.variable('a', [])
+ b = variables.variable('b', [])
+ c = variables.variable('c', [], device='cpu:12')
+ d = variables.variable('d', [])
+ with tf.device('cpu:99'):
+ e_init = tf.constant(12)
+ e = variables.variable('e', initializer=e_init)
+ # The values below highlight how the VariableDeviceChooser puts initial
+ # values on the same device as the variable job.
+ self.assertDeviceEqual(a.device, '/gpu:0')
+ self.assertDeviceEqual(a.initial_value.device, a.device)
+ self.assertDeviceEqual(b.device, '/gpu:0')
+ self.assertDeviceEqual(b.initial_value.device, b.device)
+ self.assertDeviceEqual(c.device, '/cpu:12')
+ self.assertDeviceEqual(c.initial_value.device, c.device)
+ self.assertDeviceEqual(d.device, '/gpu:0')
+ self.assertDeviceEqual(d.initial_value.device, d.device)
+ self.assertDeviceEqual(e.device, '/gpu:0')
+ self.assertDeviceEqual(e.initial_value.device, '/cpu:99')
+
+ def testVariableCollection(self):
+ with self.test_session():
+ a = variables.variable('a', [], collections='A')
+ b = variables.variable('b', [], collections='B')
+ self.assertEquals(a, tf.get_collection('A')[0])
+ self.assertEquals(b, tf.get_collection('B')[0])
+
+ def testVariableCollections(self):
+ with self.test_session():
+ a = variables.variable('a', [], collections=['A', 'C'])
+ b = variables.variable('b', [], collections=['B', 'C'])
+ self.assertEquals(a, tf.get_collection('A')[0])
+ self.assertEquals(b, tf.get_collection('B')[0])
+
+ def testVariableCollectionsWithArgScope(self):
+ with self.test_session():
+ with scopes.arg_scope([variables.variable], collections='A'):
+ a = variables.variable('a', [])
+ b = variables.variable('b', [])
+ self.assertListEqual([a, b], tf.get_collection('A'))
+
+ def testVariableCollectionsWithArgScopeNested(self):
+ with self.test_session():
+ with scopes.arg_scope([variables.variable], collections='A'):
+ a = variables.variable('a', [])
+ with scopes.arg_scope([variables.variable], collections='B'):
+ b = variables.variable('b', [])
+ self.assertEquals(a, tf.get_collection('A')[0])
+ self.assertEquals(b, tf.get_collection('B')[0])
+
+ def testVariableCollectionsWithArgScopeNonNested(self):
+ with self.test_session():
+ with scopes.arg_scope([variables.variable], collections='A'):
+ a = variables.variable('a', [])
+ with scopes.arg_scope([variables.variable], collections='B'):
+ b = variables.variable('b', [])
+ variables.variable('c', [])
+ self.assertListEqual([a], tf.get_collection('A'))
+ self.assertListEqual([b], tf.get_collection('B'))
+
+ def testVariableRestoreWithArgScopeNested(self):
+ with self.test_session():
+ with scopes.arg_scope([variables.variable], restore=True):
+ a = variables.variable('a', [])
+ with scopes.arg_scope([variables.variable],
+ trainable=False,
+ collections=['A', 'B']):
+ b = variables.variable('b', [])
+ c = variables.variable('c', [])
+ self.assertListEqual([a, b, c], variables.get_variables_to_restore())
+ self.assertListEqual([a, c], tf.trainable_variables())
+ self.assertListEqual([b], tf.get_collection('A'))
+ self.assertListEqual([b], tf.get_collection('B'))
+
+
+class GetVariablesByNameTest(tf.test.TestCase):
+
+ def testGetVariableGivenNameScoped(self):
+ with self.test_session():
+ with tf.variable_scope('A'):
+ a = variables.variable('a', [5])
+ b = variables.variable('b', [5])
+ self.assertEquals([a], variables.get_variables_by_name('a'))
+ self.assertEquals([b], variables.get_variables_by_name('b'))
+
+ def testGetVariablesByNameReturnsByValueWithScope(self):
+ with self.test_session():
+ with tf.variable_scope('A'):
+ a = variables.variable('a', [5])
+ matched_variables = variables.get_variables_by_name('a')
+
+ # If variables.get_variables_by_name returns the list by reference, the
+ # following append should persist, and be returned, in subsequent calls
+ # to variables.get_variables_by_name('a').
+ matched_variables.append(4)
+
+ matched_variables = variables.get_variables_by_name('a')
+ self.assertEquals([a], matched_variables)
+
+ def testGetVariablesByNameReturnsByValueWithoutScope(self):
+ with self.test_session():
+ a = variables.variable('a', [5])
+ matched_variables = variables.get_variables_by_name('a')
+
+ # If variables.get_variables_by_name returns the list by reference, the
+ # following append should persist, and be returned, in subsequent calls
+ # to variables.get_variables_by_name('a').
+ matched_variables.append(4)
+
+ matched_variables = variables.get_variables_by_name('a')
+ self.assertEquals([a], matched_variables)
+
+
+class GlobalStepTest(tf.test.TestCase):
+
+ def testStable(self):
+ with tf.Graph().as_default():
+ gs = variables.global_step()
+ gs2 = variables.global_step()
+ self.assertTrue(gs is gs2)
+
+ def testDevice(self):
+ with tf.Graph().as_default():
+ with scopes.arg_scope([variables.global_step], device='/gpu:0'):
+ gs = variables.global_step()
+ self.assertDeviceEqual(gs.device, '/gpu:0')
+
+ def testDeviceFn(self):
+ class DevFn(object):
+
+ def __init__(self):
+ self.counter = -1
+
+ def __call__(self, op):
+ self.counter += 1
+ return '/cpu:%d' % self.counter
+
+ with tf.Graph().as_default():
+ with scopes.arg_scope([variables.global_step], device=DevFn()):
+ gs = variables.global_step()
+ gs2 = variables.global_step()
+ self.assertDeviceEqual(gs.device, '/cpu:0')
+ self.assertEquals(gs, gs2)
+ self.assertDeviceEqual(gs2.device, '/cpu:0')
+
+ def testReplicaDeviceSetter(self):
+ device_fn = tf.train.replica_device_setter(2)
+ with tf.Graph().as_default():
+ with scopes.arg_scope([variables.global_step], device=device_fn):
+ gs = variables.global_step()
+ gs2 = variables.global_step()
+ self.assertEquals(gs, gs2)
+ self.assertDeviceEqual(gs.device, '/job:ps/task:0')
+ self.assertDeviceEqual(gs.initial_value.device, '/job:ps/task:0')
+ self.assertDeviceEqual(gs2.device, '/job:ps/task:0')
+ self.assertDeviceEqual(gs2.initial_value.device, '/job:ps/task:0')
+
+ def testVariableWithVariableDeviceChooser(self):
+
+ with tf.Graph().as_default():
+ device_fn = variables.VariableDeviceChooser()
+ with scopes.arg_scope([variables.global_step], device=device_fn):
+ gs = variables.global_step()
+ gs2 = variables.global_step()
+ self.assertEquals(gs, gs2)
+ self.assertDeviceEqual(gs.device, 'cpu:0')
+ self.assertDeviceEqual(gs.initial_value.device, gs.device)
+ self.assertDeviceEqual(gs2.device, 'cpu:0')
+ self.assertDeviceEqual(gs2.initial_value.device, gs2.device)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/any.proto" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/any.proto"
new file mode 100644
index 00000000..81dcf46c
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/any.proto"
@@ -0,0 +1,140 @@
+// Protocol Buffers - Google's data interchange format
+// Copyright 2008 Google Inc. All rights reserved.
+// https://developers.google.com/protocol-buffers/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+syntax = "proto3";
+
+package google.protobuf;
+
+option csharp_namespace = "Google.Protobuf.WellKnownTypes";
+option go_package = "github.com/golang/protobuf/ptypes/any";
+option java_package = "com.google.protobuf";
+option java_outer_classname = "AnyProto";
+option java_multiple_files = true;
+option java_generate_equals_and_hash = true;
+option objc_class_prefix = "GPB";
+
+// `Any` contains an arbitrary serialized protocol buffer message along with a
+// URL that describes the type of the serialized message.
+//
+// Protobuf library provides support to pack/unpack Any values in the form
+// of utility functions or additional generated methods of the Any type.
+//
+// Example 1: Pack and unpack a message in C++.
+//
+// Foo foo = ...;
+// Any any;
+// any.PackFrom(foo);
+// ...
+// if (any.UnpackTo(&foo)) {
+// ...
+// }
+//
+// Example 2: Pack and unpack a message in Java.
+//
+// Foo foo = ...;
+// Any any = Any.pack(foo);
+// ...
+// if (any.is(Foo.class)) {
+// foo = any.unpack(Foo.class);
+// }
+//
+// Example 3: Pack and unpack a message in Python.
+//
+// foo = Foo(...)
+// any = Any()
+// any.Pack(foo)
+// ...
+// if any.Is(Foo.DESCRIPTOR):
+// any.Unpack(foo)
+// ...
+//
+// The pack methods provided by protobuf library will by default use
+// 'type.googleapis.com/full.type.name' as the type URL and the unpack
+// methods only use the fully qualified type name after the last '/'
+// in the type URL, for example "foo.bar.com/x/y.z" will yield type
+// name "y.z".
+//
+//
+// JSON
+// ====
+// The JSON representation of an `Any` value uses the regular
+// representation of the deserialized, embedded message, with an
+// additional field `@type` which contains the type URL. Example:
+//
+// package google.profile;
+// message Person {
+// string first_name = 1;
+// string last_name = 2;
+// }
+//
+// {
+// "@type": "type.googleapis.com/google.profile.Person",
+//      "firstName": <string>,
+//      "lastName": <string>
+// }
+//
+// If the embedded message type is well-known and has a custom JSON
+// representation, that representation will be embedded adding a field
+// `value` which holds the custom JSON in addition to the `@type`
+// field. Example (for message [google.protobuf.Duration][]):
+//
+// {
+// "@type": "type.googleapis.com/google.protobuf.Duration",
+// "value": "1.212s"
+// }
+//
+message Any {
+ // A URL/resource name whose content describes the type of the
+ // serialized protocol buffer message.
+ //
+ // For URLs which use the scheme `http`, `https`, or no scheme, the
+ // following restrictions and interpretations apply:
+ //
+ // * If no scheme is provided, `https` is assumed.
+ // * The last segment of the URL's path must represent the fully
+ // qualified name of the type (as in `path/google.protobuf.Duration`).
+ // The name should be in a canonical form (e.g., leading "." is
+ // not accepted).
+ // * An HTTP GET on the URL must yield a [google.protobuf.Type][]
+ // value in binary format, or produce an error.
+ // * Applications are allowed to cache lookup results based on the
+ // URL, or have them precompiled into a binary to avoid any
+ // lookup. Therefore, binary compatibility needs to be preserved
+ // on changes to types. (Use versioned type names to manage
+ // breaking changes.)
+ //
+ // Schemes other than `http`, `https` (or the empty scheme) might be
+ // used with implementation specific semantics.
+ //
+ string type_url = 1;
+
+ // Must be a valid serialized protocol buffer of the above specified type.
+ bytes value = 2;
+}
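
A minimal Python sketch of the pack/check/unpack cycle described in the comments above, using the generated well-known-type modules; Duration is used only as a convenient payload type:

    from google.protobuf import any_pb2, duration_pb2

    payload = duration_pb2.Duration(seconds=1, nanos=212000000)

    any_msg = any_pb2.Any()
    any_msg.Pack(payload)  # type_url becomes type.googleapis.com/google.protobuf.Duration

    if any_msg.Is(duration_pb2.Duration.DESCRIPTOR):
        restored = duration_pb2.Duration()
        any_msg.Unpack(restored)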
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/api.proto" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/api.proto"
new file mode 100644
index 00000000..dbe87b8f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/api.proto"
@@ -0,0 +1,202 @@
+// Protocol Buffers - Google's data interchange format
+// Copyright 2008 Google Inc. All rights reserved.
+// https://developers.google.com/protocol-buffers/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+syntax = "proto3";
+
+package google.protobuf;
+
+import "google/protobuf/source_context.proto";
+import "google/protobuf/type.proto";
+
+option csharp_namespace = "Google.Protobuf.WellKnownTypes";
+option java_package = "com.google.protobuf";
+option java_outer_classname = "ApiProto";
+option java_multiple_files = true;
+option java_generate_equals_and_hash = true;
+option objc_class_prefix = "GPB";
+
+// Api is a light-weight descriptor for a protocol buffer service.
+message Api {
+
+ // The fully qualified name of this api, including package name
+ // followed by the api's simple name.
+ string name = 1;
+
+ // The methods of this api, in unspecified order.
+ repeated Method methods = 2;
+
+ // Any metadata attached to the API.
+ repeated Option options = 3;
+
+ // A version string for this api. If specified, must have the form
+ // `major-version.minor-version`, as in `1.10`. If the minor version
+ // is omitted, it defaults to zero. If the entire version field is
+ // empty, the major version is derived from the package name, as
+ // outlined below. If the field is not empty, the version in the
+ // package name will be verified to be consistent with what is
+ // provided here.
+ //
+ // The versioning schema uses [semantic
+ // versioning](http://semver.org) where the major version number
+ // indicates a breaking change and the minor version an additive,
+ // non-breaking change. Both version numbers are signals to users
+ // what to expect from different versions, and should be carefully
+ // chosen based on the product plan.
+ //
+ // The major version is also reflected in the package name of the
+ // API, which must end in `v<major-version>`, as in
+ // `google.feature.v1`. For major versions 0 and 1, the suffix can
+ // be omitted. Zero major versions must only be used for
+ // experimental, non-GA APIs.
+ //
+ //
+ string version = 4;
+
+ // Source context for the protocol buffer service represented by this
+ // message.
+ SourceContext source_context = 5;
+
+ // Included APIs. See [Mixin][].
+ repeated Mixin mixins = 6;
+
+ // The source syntax of the service.
+ Syntax syntax = 7;
+}
+
+// Method represents a method of an api.
+message Method {
+
+ // The simple name of this method.
+ string name = 1;
+
+ // A URL of the input message type.
+ string request_type_url = 2;
+
+ // If true, the request is streamed.
+ bool request_streaming = 3;
+
+ // The URL of the output message type.
+ string response_type_url = 4;
+
+ // If true, the response is streamed.
+ bool response_streaming = 5;
+
+ // Any metadata attached to the method.
+ repeated Option options = 6;
+
+ // The source syntax of this method.
+ Syntax syntax = 7;
+}
+
+// Declares an API to be included in this API. The including API must
+// redeclare all the methods from the included API, but documentation
+// and options are inherited as follows:
+//
+// - If after comment and whitespace stripping, the documentation
+// string of the redeclared method is empty, it will be inherited
+// from the original method.
+//
+// - Each annotation belonging to the service config (http,
+// visibility) which is not set in the redeclared method will be
+// inherited.
+//
+// - If an http annotation is inherited, the path pattern will be
+// modified as follows. Any version prefix will be replaced by the
+// version of the including API plus the [root][] path if specified.
+//
+// Example of a simple mixin:
+//
+// package google.acl.v1;
+// service AccessControl {
+// // Get the underlying ACL object.
+// rpc GetAcl(GetAclRequest) returns (Acl) {
+// option (google.api.http).get = "/v1/{resource=**}:getAcl";
+// }
+// }
+//
+// package google.storage.v2;
+// service Storage {
+// rpc GetAcl(GetAclRequest) returns (Acl);
+//
+// // Get a data record.
+// rpc GetData(GetDataRequest) returns (Data) {
+// option (google.api.http).get = "/v2/{resource=**}";
+// }
+// }
+//
+// Example of a mixin configuration:
+//
+// apis:
+// - name: google.storage.v2.Storage
+// mixins:
+// - name: google.acl.v1.AccessControl
+//
+// The mixin construct implies that all methods in `AccessControl` are
+// also declared with same name and request/response types in
+// `Storage`. A documentation generator or annotation processor will
+// see the effective `Storage.GetAcl` method after inheriting
+// documentation and annotations as follows:
+//
+// service Storage {
+// // Get the underlying ACL object.
+// rpc GetAcl(GetAclRequest) returns (Acl) {
+// option (google.api.http).get = "/v2/{resource=**}:getAcl";
+// }
+// ...
+// }
+//
+// Note how the version in the path pattern changed from `v1` to `v2`.
+//
+// If the `root` field in the mixin is specified, it should be a
+// relative path under which inherited HTTP paths are placed. Example:
+//
+// apis:
+// - name: google.storage.v2.Storage
+// mixins:
+// - name: google.acl.v1.AccessControl
+// root: acls
+//
+// This implies the following inherited HTTP annotation:
+//
+// service Storage {
+// // Get the underlying ACL object.
+// rpc GetAcl(GetAclRequest) returns (Acl) {
+// option (google.api.http).get = "/v2/acls/{resource=**}:getAcl";
+// }
+// ...
+// }
+message Mixin {
+ // The fully qualified name of the API which is included.
+ string name = 1;
+
+  // If non-empty, specifies a path under which inherited HTTP paths
+  // are rooted.
+ string root = 2;
+}
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/compiler/plugin.proto" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/compiler/plugin.proto"
new file mode 100644
index 00000000..acaee1f4
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/compiler/plugin.proto"
@@ -0,0 +1,150 @@
+// Protocol Buffers - Google's data interchange format
+// Copyright 2008 Google Inc. All rights reserved.
+// https://developers.google.com/protocol-buffers/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+// Author: kenton@google.com (Kenton Varda)
+//
+// WARNING: The plugin interface is currently EXPERIMENTAL and is subject to
+// change.
+//
+// protoc (aka the Protocol Compiler) can be extended via plugins. A plugin is
+// just a program that reads a CodeGeneratorRequest from stdin and writes a
+// CodeGeneratorResponse to stdout.
+//
+// Plugins written using C++ can use google/protobuf/compiler/plugin.h instead
+// of dealing with the raw protocol defined here.
+//
+// A plugin executable needs only to be placed somewhere in the path. The
+// plugin should be named "protoc-gen-$NAME", and will then be used when the
+// flag "--${NAME}_out" is passed to protoc.
+
+syntax = "proto2";
+package google.protobuf.compiler;
+option java_package = "com.google.protobuf.compiler";
+option java_outer_classname = "PluginProtos";
+
+option go_package = "plugin_go";
+
+import "google/protobuf/descriptor.proto";
+
+// An encoded CodeGeneratorRequest is written to the plugin's stdin.
+message CodeGeneratorRequest {
+ // The .proto files that were explicitly listed on the command-line. The
+ // code generator should generate code only for these files. Each file's
+ // descriptor will be included in proto_file, below.
+ repeated string file_to_generate = 1;
+
+ // The generator parameter passed on the command-line.
+ optional string parameter = 2;
+
+ // FileDescriptorProtos for all files in files_to_generate and everything
+ // they import. The files will appear in topological order, so each file
+ // appears before any file that imports it.
+ //
+ // protoc guarantees that all proto_files will be written after
+ // the fields above, even though this is not technically guaranteed by the
+ // protobuf wire format. This theoretically could allow a plugin to stream
+ // in the FileDescriptorProtos and handle them one by one rather than read
+ // the entire set into memory at once. However, as of this writing, this
+ // is not similarly optimized on protoc's end -- it will store all fields in
+ // memory at once before sending them to the plugin.
+ repeated FileDescriptorProto proto_file = 15;
+}
+
+// The plugin writes an encoded CodeGeneratorResponse to stdout.
+message CodeGeneratorResponse {
+ // Error message. If non-empty, code generation failed. The plugin process
+ // should exit with status code zero even if it reports an error in this way.
+ //
+ // This should be used to indicate errors in .proto files which prevent the
+ // code generator from generating correct code. Errors which indicate a
+ // problem in protoc itself -- such as the input CodeGeneratorRequest being
+ // unparseable -- should be reported by writing a message to stderr and
+ // exiting with a non-zero status code.
+ optional string error = 1;
+
+ // Represents a single generated file.
+ message File {
+ // The file name, relative to the output directory. The name must not
+    // contain "." or ".." components and must be relative, not absolute (so,
+ // the file cannot lie outside the output directory). "/" must be used as
+ // the path separator, not "\".
+ //
+ // If the name is omitted, the content will be appended to the previous
+ // file. This allows the generator to break large files into small chunks,
+ // and allows the generated text to be streamed back to protoc so that large
+ // files need not reside completely in memory at one time. Note that as of
+ // this writing protoc does not optimize for this -- it will read the entire
+ // CodeGeneratorResponse before writing files to disk.
+ optional string name = 1;
+
+ // If non-empty, indicates that the named file should already exist, and the
+ // content here is to be inserted into that file at a defined insertion
+ // point. This feature allows a code generator to extend the output
+ // produced by another code generator. The original generator may provide
+ // insertion points by placing special annotations in the file that look
+ // like:
+ // @@protoc_insertion_point(NAME)
+ // The annotation can have arbitrary text before and after it on the line,
+ // which allows it to be placed in a comment. NAME should be replaced with
+ // an identifier naming the point -- this is what other generators will use
+ // as the insertion_point. Code inserted at this point will be placed
+ // immediately above the line containing the insertion point (thus multiple
+ // insertions to the same point will come out in the order they were added).
+ // The double-@ is intended to make it unlikely that the generated code
+ // could contain things that look like insertion points by accident.
+ //
+ // For example, the C++ code generator places the following line in the
+ // .pb.h files that it generates:
+ // // @@protoc_insertion_point(namespace_scope)
+ // This line appears within the scope of the file's package namespace, but
+ // outside of any particular class. Another plugin can then specify the
+ // insertion_point "namespace_scope" to generate additional classes or
+ // other declarations that should be placed in this scope.
+ //
+ // Note that if the line containing the insertion point begins with
+ // whitespace, the same whitespace will be added to every line of the
+ // inserted text. This is useful for languages like Python, where
+ // indentation matters. In these languages, the insertion point comment
+ // should be indented the same amount as any inserted code will need to be
+ // in order to work correctly in that context.
+ //
+ // The code generator that generates the initial file and the one which
+ // inserts into it must both run as part of a single invocation of protoc.
+ // Code generators are executed in the order in which they appear on the
+ // command line.
+ //
+ // If |insertion_point| is present, |name| must also be present.
+ optional string insertion_point = 2;
+
+ // The file contents.
+ optional string content = 15;
+ }
+ repeated File file = 15;
+}
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/descriptor.proto" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/descriptor.proto"
new file mode 100644
index 00000000..28410d4a
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/descriptor.proto"
@@ -0,0 +1,813 @@
+// Protocol Buffers - Google's data interchange format
+// Copyright 2008 Google Inc. All rights reserved.
+// https://developers.google.com/protocol-buffers/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+// Author: kenton@google.com (Kenton Varda)
+// Based on original Protocol Buffers design by
+// Sanjay Ghemawat, Jeff Dean, and others.
+//
+// The messages in this file describe the definitions found in .proto files.
+// A valid .proto file can be translated directly to a FileDescriptorProto
+// without any other information (e.g. without reading its imports).
+
+
+syntax = "proto2";
+
+package google.protobuf;
+option go_package = "descriptor";
+option java_package = "com.google.protobuf";
+option java_outer_classname = "DescriptorProtos";
+option csharp_namespace = "Google.Protobuf.Reflection";
+option objc_class_prefix = "GPB";
+option java_generate_equals_and_hash = true;
+
+// descriptor.proto must be optimized for speed because reflection-based
+// algorithms don't work during bootstrapping.
+option optimize_for = SPEED;
+
+// The protocol compiler can output a FileDescriptorSet containing the .proto
+// files it parses.
+message FileDescriptorSet {
+ repeated FileDescriptorProto file = 1;
+}
+
+// Describes a complete .proto file.
+message FileDescriptorProto {
+ optional string name = 1; // file name, relative to root of source tree
+ optional string package = 2; // e.g. "foo", "foo.bar", etc.
+
+ // Names of files imported by this file.
+ repeated string dependency = 3;
+ // Indexes of the public imported files in the dependency list above.
+ repeated int32 public_dependency = 10;
+ // Indexes of the weak imported files in the dependency list.
+ // For Google-internal migration only. Do not use.
+ repeated int32 weak_dependency = 11;
+
+ // All top-level definitions in this file.
+ repeated DescriptorProto message_type = 4;
+ repeated EnumDescriptorProto enum_type = 5;
+ repeated ServiceDescriptorProto service = 6;
+ repeated FieldDescriptorProto extension = 7;
+
+ optional FileOptions options = 8;
+
+ // This field contains optional information about the original source code.
+ // You may safely remove this entire field without harming runtime
+ // functionality of the descriptors -- the information is needed only by
+ // development tools.
+ optional SourceCodeInfo source_code_info = 9;
+
+ // The syntax of the proto file.
+ // The supported values are "proto2" and "proto3".
+ optional string syntax = 12;
+}
+
+// Describes a message type.
+message DescriptorProto {
+ optional string name = 1;
+
+ repeated FieldDescriptorProto field = 2;
+ repeated FieldDescriptorProto extension = 6;
+
+ repeated DescriptorProto nested_type = 3;
+ repeated EnumDescriptorProto enum_type = 4;
+
+ message ExtensionRange {
+ optional int32 start = 1;
+ optional int32 end = 2;
+ }
+ repeated ExtensionRange extension_range = 5;
+
+ repeated OneofDescriptorProto oneof_decl = 8;
+
+ optional MessageOptions options = 7;
+
+ // Range of reserved tag numbers. Reserved tag numbers may not be used by
+ // fields or extension ranges in the same message. Reserved ranges may
+ // not overlap.
+ message ReservedRange {
+ optional int32 start = 1; // Inclusive.
+ optional int32 end = 2; // Exclusive.
+ }
+ repeated ReservedRange reserved_range = 9;
+ // Reserved field names, which may not be used by fields in the same message.
+ // A given name may only be reserved once.
+ repeated string reserved_name = 10;
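+
+  // Illustrative note (not part of the original comment): in a .proto file
+  // these entries come from declarations such as
+  //   reserved 9 to 11, 15;
+  //   reserved "foo", "bar";
+  // which the compiler records as ReservedRange entries and reserved_name
+  // strings.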
+}
+
+// Describes a field within a message.
+message FieldDescriptorProto {
+ enum Type {
+ // 0 is reserved for errors.
+ // Order is weird for historical reasons.
+ TYPE_DOUBLE = 1;
+ TYPE_FLOAT = 2;
+ // Not ZigZag encoded. Negative numbers take 10 bytes. Use TYPE_SINT64 if
+ // negative values are likely.
+ TYPE_INT64 = 3;
+ TYPE_UINT64 = 4;
+ // Not ZigZag encoded. Negative numbers take 10 bytes. Use TYPE_SINT32 if
+ // negative values are likely.
+ TYPE_INT32 = 5;
+ TYPE_FIXED64 = 6;
+ TYPE_FIXED32 = 7;
+ TYPE_BOOL = 8;
+ TYPE_STRING = 9;
+ TYPE_GROUP = 10; // Tag-delimited aggregate.
+ TYPE_MESSAGE = 11; // Length-delimited aggregate.
+
+ // New in version 2.
+ TYPE_BYTES = 12;
+ TYPE_UINT32 = 13;
+ TYPE_ENUM = 14;
+ TYPE_SFIXED32 = 15;
+ TYPE_SFIXED64 = 16;
+ TYPE_SINT32 = 17; // Uses ZigZag encoding.
+ TYPE_SINT64 = 18; // Uses ZigZag encoding.
+ };
+
+ enum Label {
+ // 0 is reserved for errors
+ LABEL_OPTIONAL = 1;
+ LABEL_REQUIRED = 2;
+ LABEL_REPEATED = 3;
+ // TODO(sanjay): Should we add LABEL_MAP?
+ };
+
+ optional string name = 1;
+ optional int32 number = 3;
+ optional Label label = 4;
+
+ // If type_name is set, this need not be set. If both this and type_name
+ // are set, this must be one of TYPE_ENUM, TYPE_MESSAGE or TYPE_GROUP.
+ optional Type type = 5;
+
+ // For message and enum types, this is the name of the type. If the name
+ // starts with a '.', it is fully-qualified. Otherwise, C++-like scoping
+ // rules are used to find the type (i.e. first the nested types within this
+ // message are searched, then within the parent, on up to the root
+ // namespace).
+ optional string type_name = 6;
+
+ // For extensions, this is the name of the type being extended. It is
+ // resolved in the same manner as type_name.
+ optional string extendee = 2;
+
+ // For numeric types, contains the original text representation of the value.
+ // For booleans, "true" or "false".
+ // For strings, contains the default text contents (not escaped in any way).
+ // For bytes, contains the C escaped value. All bytes >= 128 are escaped.
+ // TODO(kenton): Base-64 encode?
+ optional string default_value = 7;
+
+ // If set, gives the index of a oneof in the containing type's oneof_decl
+ // list. This field is a member of that oneof.
+ optional int32 oneof_index = 9;
+
+ // JSON name of this field. The value is set by protocol compiler. If the
+ // user has set a "json_name" option on this field, that option's value
+ // will be used. Otherwise, it's deduced from the field's name by converting
+ // it to camelCase.
+ optional string json_name = 10;
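+
+  // Illustrative example (not part of the original comment): a field declared
+  // as "display_name" is deduced to have json_name "displayName" unless a
+  // "json_name" option overrides it.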
+
+ optional FieldOptions options = 8;
+}
+
+// Describes a oneof.
+message OneofDescriptorProto {
+ optional string name = 1;
+ optional OneofOptions options = 2;
+}
+
+// Describes an enum type.
+message EnumDescriptorProto {
+ optional string name = 1;
+
+ repeated EnumValueDescriptorProto value = 2;
+
+ optional EnumOptions options = 3;
+}
+
+// Describes a value within an enum.
+message EnumValueDescriptorProto {
+ optional string name = 1;
+ optional int32 number = 2;
+
+ optional EnumValueOptions options = 3;
+}
+
+// Describes a service.
+message ServiceDescriptorProto {
+ optional string name = 1;
+ repeated MethodDescriptorProto method = 2;
+
+ optional ServiceOptions options = 3;
+}
+
+// Describes a method of a service.
+message MethodDescriptorProto {
+ optional string name = 1;
+
+ // Input and output type names. These are resolved in the same way as
+ // FieldDescriptorProto.type_name, but must refer to a message type.
+ optional string input_type = 2;
+ optional string output_type = 3;
+
+ optional MethodOptions options = 4;
+
+ // Identifies if client streams multiple client messages
+ optional bool client_streaming = 5 [default=false];
+ // Identifies if server streams multiple server messages
+ optional bool server_streaming = 6 [default=false];
+}
+
+
+// ===================================================================
+// Options
+
+// Each of the definitions above may have "options" attached. These are
+// just annotations which may cause code to be generated slightly differently
+// or may contain hints for code that manipulates protocol messages.
+//
+// Clients may define custom options as extensions of the *Options messages.
+// These extensions may not yet be known at parsing time, so the parser cannot
+// store the values in them. Instead it stores them in a field in the *Options
+// message called uninterpreted_option. This field must have the same name
+// across all *Options messages. We then use this field to populate the
+// extensions when we build a descriptor, at which point all protos have been
+// parsed and so all extensions are known.
+//
+// Extension numbers for custom options may be chosen as follows:
+// * For options which will only be used within a single application or
+// organization, or for experimental options, use field numbers 50000
+// through 99999. It is up to you to ensure that you do not use the
+// same number for multiple options.
+// * For options which will be published and used publicly by multiple
+// independent entities, e-mail protobuf-global-extension-registry@google.com
+// to reserve extension numbers. Simply provide your project name (e.g.
+// Objective-C plugin) and your project website (if available) -- there's no
+// need to explain how you intend to use them. Usually you only need one
+// extension number. You can declare multiple options with only one extension
+// number by putting them in a sub-message. See the Custom Options section of
+// the docs for examples:
+// https://developers.google.com/protocol-buffers/docs/proto#options
+// If this turns out to be popular, a web service will be set up
+// to automatically assign option numbers.
+
+
+message FileOptions {
+
+ // Sets the Java package where classes generated from this .proto will be
+ // placed. By default, the proto package is used, but this is often
+ // inappropriate because proto packages do not normally start with backwards
+ // domain names.
+ optional string java_package = 1;
+
+
+ // If set, all the classes from the .proto file are wrapped in a single
+ // outer class with the given name. This applies to both Proto1
+ // (equivalent to the old "--one_java_file" option) and Proto2 (where
+ // a .proto always translates to a single class, but you may want to
+ // explicitly choose the class name).
+ optional string java_outer_classname = 8;
+
+ // If set true, then the Java code generator will generate a separate .java
+ // file for each top-level message, enum, and service defined in the .proto
+ // file. Thus, these types will *not* be nested inside the outer class
+ // named by java_outer_classname. However, the outer class will still be
+ // generated to contain the file's getDescriptor() method as well as any
+ // top-level extensions defined in the file.
+ optional bool java_multiple_files = 10 [default=false];
+
+ // If set true, then the Java code generator will generate equals() and
+ // hashCode() methods for all messages defined in the .proto file.
+ // This increases generated code size, potentially substantially for large
+ // protos, which may harm a memory-constrained application.
+ // - In the full runtime this is a speed optimization, as the
+ // AbstractMessage base class includes reflection-based implementations of
+ // these methods.
+ // - In the lite runtime, setting this option changes the semantics of
+ // equals() and hashCode() to more closely match those of the full runtime;
+ // the generated methods compute their results based on field values rather
+ // than object identity. (Implementations should not assume that hashcodes
+ // will be consistent across runtimes or versions of the protocol compiler.)
+ optional bool java_generate_equals_and_hash = 20 [default=false];
+
+ // If set true, then the Java2 code generator will generate code that
+ // throws an exception whenever an attempt is made to assign a non-UTF-8
+ // byte sequence to a string field.
+ // Message reflection will do the same.
+ // However, an extension field still accepts non-UTF-8 byte sequences.
+  // This option has no effect when used with the lite runtime.
+ optional bool java_string_check_utf8 = 27 [default=false];
+
+
+ // Generated classes can be optimized for speed or code size.
+ enum OptimizeMode {
+ SPEED = 1; // Generate complete code for parsing, serialization,
+ // etc.
+ CODE_SIZE = 2; // Use ReflectionOps to implement these methods.
+ LITE_RUNTIME = 3; // Generate code using MessageLite and the lite runtime.
+ }
+ optional OptimizeMode optimize_for = 9 [default=SPEED];
+
+ // Sets the Go package where structs generated from this .proto will be
+ // placed. If omitted, the Go package will be derived from the following:
+ // - The basename of the package import path, if provided.
+ // - Otherwise, the package statement in the .proto file, if present.
+ // - Otherwise, the basename of the .proto file, without extension.
+ optional string go_package = 11;
+
+
+
+ // Should generic services be generated in each language? "Generic" services
+ // are not specific to any particular RPC system. They are generated by the
+ // main code generators in each language (without additional plugins).
+ // Generic services were the only kind of service generation supported by
+ // early versions of google.protobuf.
+ //
+ // Generic services are now considered deprecated in favor of using plugins
+ // that generate code specific to your particular RPC system. Therefore,
+ // these default to false. Old code which depends on generic services should
+ // explicitly set them to true.
+ optional bool cc_generic_services = 16 [default=false];
+ optional bool java_generic_services = 17 [default=false];
+ optional bool py_generic_services = 18 [default=false];
+
+ // Is this file deprecated?
+ // Depending on the target platform, this can emit Deprecated annotations
+ // for everything in the file, or it will be completely ignored; in the very
+ // least, this is a formalization for deprecating files.
+ optional bool deprecated = 23 [default=false];
+
+ // Enables the use of arenas for the proto messages in this file. This applies
+ // only to generated classes for C++.
+ optional bool cc_enable_arenas = 31 [default=false];
+
+
+ // Sets the objective c class prefix which is prepended to all objective c
+ // generated classes from this .proto. There is no default.
+ optional string objc_class_prefix = 36;
+
+ // Namespace for generated classes; defaults to the package.
+ optional string csharp_namespace = 37;
+
+ // The parser stores options it doesn't recognize here. See above.
+ repeated UninterpretedOption uninterpreted_option = 999;
+
+ // Clients can define custom options in extensions of this message. See above.
+ extensions 1000 to max;
+
+ reserved 38;
+}
+
+message MessageOptions {
+ // Set true to use the old proto1 MessageSet wire format for extensions.
+ // This is provided for backwards-compatibility with the MessageSet wire
+ // format. You should not use this for any other reason: It's less
+ // efficient, has fewer features, and is more complicated.
+ //
+ // The message must be defined exactly as follows:
+ // message Foo {
+ // option message_set_wire_format = true;
+ // extensions 4 to max;
+ // }
+ // Note that the message cannot have any defined fields; MessageSets only
+ // have extensions.
+ //
+ // All extensions of your type must be singular messages; e.g. they cannot
+ // be int32s, enums, or repeated messages.
+ //
+ // Because this is an option, the above two restrictions are not enforced by
+ // the protocol compiler.
+ optional bool message_set_wire_format = 1 [default=false];
+
+ // Disables the generation of the standard "descriptor()" accessor, which can
+ // conflict with a field of the same name. This is meant to make migration
+ // from proto1 easier; new code should avoid fields named "descriptor".
+ optional bool no_standard_descriptor_accessor = 2 [default=false];
+
+ // Is this message deprecated?
+ // Depending on the target platform, this can emit Deprecated annotations
+ // for the message, or it will be completely ignored; in the very least,
+ // this is a formalization for deprecating messages.
+ optional bool deprecated = 3 [default=false];
+
+ // Whether the message is an automatically generated map entry type for the
+ // maps field.
+ //
+ // For maps fields:
+  //     map<KeyType, ValueType> map_field = 1;
+ // The parsed descriptor looks like:
+ // message MapFieldEntry {
+ // option map_entry = true;
+ // optional KeyType key = 1;
+ // optional ValueType value = 2;
+ // }
+ // repeated MapFieldEntry map_field = 1;
+ //
+ // Implementations may choose not to generate the map_entry=true message, but
+ // use a native map in the target language to hold the keys and values.
+  // The reflection APIs in such implementations still need to work as
+ // if the field is a repeated message field.
+ //
+ // NOTE: Do not set the option in .proto files. Always use the maps syntax
+ // instead. The option should only be implicitly set by the proto compiler
+ // parser.
+ optional bool map_entry = 7;
+
+ // The parser stores options it doesn't recognize here. See above.
+ repeated UninterpretedOption uninterpreted_option = 999;
+
+ // Clients can define custom options in extensions of this message. See above.
+ extensions 1000 to max;
+}
+
+message FieldOptions {
+ // The ctype option instructs the C++ code generator to use a different
+ // representation of the field than it normally would. See the specific
+ // options below. This option is not yet implemented in the open source
+ // release -- sorry, we'll try to include it in a future version!
+ optional CType ctype = 1 [default = STRING];
+ enum CType {
+ // Default mode.
+ STRING = 0;
+
+ CORD = 1;
+
+ STRING_PIECE = 2;
+ }
+ // The packed option can be enabled for repeated primitive fields to enable
+ // a more efficient representation on the wire. Rather than repeatedly
+ // writing the tag and type for each element, the entire array is encoded as
+  // a single length-delimited blob. In proto3, only explicitly setting it to
+ // false will avoid using packed encoding.
+ optional bool packed = 2;
+
+
+ // The jstype option determines the JavaScript type used for values of the
+ // field. The option is permitted only for 64 bit integral and fixed types
+ // (int64, uint64, sint64, fixed64, sfixed64). By default these types are
+ // represented as JavaScript strings. This avoids loss of precision that can
+ // happen when a large value is converted to a floating point JavaScript
+  // number. Specifying JS_NUMBER for the jstype causes the generated
+ // JavaScript code to use the JavaScript "number" type instead of strings.
+ // This option is an enum to permit additional types to be added,
+ // e.g. goog.math.Integer.
+ optional JSType jstype = 6 [default = JS_NORMAL];
+ enum JSType {
+ // Use the default type.
+ JS_NORMAL = 0;
+
+ // Use JavaScript strings.
+ JS_STRING = 1;
+
+ // Use JavaScript numbers.
+ JS_NUMBER = 2;
+ }
+
+ // Should this field be parsed lazily? Lazy applies only to message-type
+ // fields. It means that when the outer message is initially parsed, the
+ // inner message's contents will not be parsed but instead stored in encoded
+ // form. The inner message will actually be parsed when it is first accessed.
+ //
+ // This is only a hint. Implementations are free to choose whether to use
+ // eager or lazy parsing regardless of the value of this option. However,
+ // setting this option true suggests that the protocol author believes that
+ // using lazy parsing on this field is worth the additional bookkeeping
+ // overhead typically needed to implement it.
+ //
+ // This option does not affect the public interface of any generated code;
+ // all method signatures remain the same. Furthermore, thread-safety of the
+ // interface is not affected by this option; const methods remain safe to
+ // call from multiple threads concurrently, while non-const methods continue
+ // to require exclusive access.
+ //
+ //
+ // Note that implementations may choose not to check required fields within
+  // a lazy sub-message. That is, calling IsInitialized() on the outer message
+ // may return true even if the inner message has missing required fields.
+ // This is necessary because otherwise the inner message would have to be
+ // parsed in order to perform the check, defeating the purpose of lazy
+ // parsing. An implementation which chooses not to check required fields
+ // must be consistent about it. That is, for any particular sub-message, the
+ // implementation must either *always* check its required fields, or *never*
+ // check its required fields, regardless of whether or not the message has
+ // been parsed.
+ optional bool lazy = 5 [default=false];
+
+ // Is this field deprecated?
+ // Depending on the target platform, this can emit Deprecated annotations
+ // for accessors, or it will be completely ignored; in the very least, this
+ // is a formalization for deprecating fields.
+ optional bool deprecated = 3 [default=false];
+
+ // For Google-internal migration only. Do not use.
+ optional bool weak = 10 [default=false];
+
+
+ // The parser stores options it doesn't recognize here. See above.
+ repeated UninterpretedOption uninterpreted_option = 999;
+
+ // Clients can define custom options in extensions of this message. See above.
+ extensions 1000 to max;
+}
+
+message OneofOptions {
+ // The parser stores options it doesn't recognize here. See above.
+ repeated UninterpretedOption uninterpreted_option = 999;
+
+ // Clients can define custom options in extensions of this message. See above.
+ extensions 1000 to max;
+}
+
+message EnumOptions {
+
+ // Set this option to true to allow mapping different tag names to the same
+ // value.
+ optional bool allow_alias = 2;
+
+ // Is this enum deprecated?
+ // Depending on the target platform, this can emit Deprecated annotations
+ // for the enum, or it will be completely ignored; in the very least, this
+ // is a formalization for deprecating enums.
+ optional bool deprecated = 3 [default=false];
+
+ // The parser stores options it doesn't recognize here. See above.
+ repeated UninterpretedOption uninterpreted_option = 999;
+
+ // Clients can define custom options in extensions of this message. See above.
+ extensions 1000 to max;
+}
+
+message EnumValueOptions {
+ // Is this enum value deprecated?
+ // Depending on the target platform, this can emit Deprecated annotations
+ // for the enum value, or it will be completely ignored; in the very least,
+ // this is a formalization for deprecating enum values.
+ optional bool deprecated = 1 [default=false];
+
+ // The parser stores options it doesn't recognize here. See above.
+ repeated UninterpretedOption uninterpreted_option = 999;
+
+ // Clients can define custom options in extensions of this message. See above.
+ extensions 1000 to max;
+}
+
+message ServiceOptions {
+
+ // Note: Field numbers 1 through 32 are reserved for Google's internal RPC
+ // framework. We apologize for hoarding these numbers to ourselves, but
+ // we were already using them long before we decided to release Protocol
+ // Buffers.
+
+ // Is this service deprecated?
+ // Depending on the target platform, this can emit Deprecated annotations
+ // for the service, or it will be completely ignored; in the very least,
+ // this is a formalization for deprecating services.
+ optional bool deprecated = 33 [default=false];
+
+ // The parser stores options it doesn't recognize here. See above.
+ repeated UninterpretedOption uninterpreted_option = 999;
+
+ // Clients can define custom options in extensions of this message. See above.
+ extensions 1000 to max;
+}
+
+message MethodOptions {
+
+ // Note: Field numbers 1 through 32 are reserved for Google's internal RPC
+ // framework. We apologize for hoarding these numbers to ourselves, but
+ // we were already using them long before we decided to release Protocol
+ // Buffers.
+
+ // Is this method deprecated?
+ // Depending on the target platform, this can emit Deprecated annotations
+ // for the method, or it will be completely ignored; in the very least,
+ // this is a formalization for deprecating methods.
+ optional bool deprecated = 33 [default=false];
+
+ // The parser stores options it doesn't recognize here. See above.
+ repeated UninterpretedOption uninterpreted_option = 999;
+
+ // Clients can define custom options in extensions of this message. See above.
+ extensions 1000 to max;
+}
+
+
+// A message representing an option the parser does not recognize. This only
+// appears in options protos created by the compiler::Parser class.
+// DescriptorPool resolves these when building Descriptor objects. Therefore,
+// options protos in descriptor objects (e.g. returned by Descriptor::options(),
+// or produced by Descriptor::CopyTo()) will never have UninterpretedOptions
+// in them.
+message UninterpretedOption {
+ // The name of the uninterpreted option. Each string represents a segment in
+ // a dot-separated name. is_extension is true iff a segment represents an
+ // extension (denoted with parentheses in options specs in .proto files).
+ // E.g.,{ ["foo", false], ["bar.baz", true], ["qux", false] } represents
+ // "foo.(bar.baz).qux".
+ message NamePart {
+ required string name_part = 1;
+ required bool is_extension = 2;
+ }
+ repeated NamePart name = 2;
+
+ // The value of the uninterpreted option, in whatever type the tokenizer
+ // identified it as during parsing. Exactly one of these should be set.
+ optional string identifier_value = 3;
+ optional uint64 positive_int_value = 4;
+ optional int64 negative_int_value = 5;
+ optional double double_value = 6;
+ optional bytes string_value = 7;
+ optional string aggregate_value = 8;
+}
+
+// ===================================================================
+// Optional source code info
+
+// Encapsulates information about the original source file from which a
+// FileDescriptorProto was generated.
+message SourceCodeInfo {
+ // A Location identifies a piece of source code in a .proto file which
+ // corresponds to a particular definition. This information is intended
+ // to be useful to IDEs, code indexers, documentation generators, and similar
+ // tools.
+ //
+ // For example, say we have a file like:
+  //   message Foo {
+  //     optional string foo = 1;
+  //   }
+  // Let's look at just the field definition:
+  //   optional string foo = 1;
+  //   ^       ^^     ^^  ^  ^^^
+  //   a       bc     de  f  ghi
+  // We have the following locations:
+  //   span   path               represents
+  //   [a,i)  [ 4, 0, 2, 0 ]     The whole field definition.
+  //   [a,b)  [ 4, 0, 2, 0, 4 ]  The label (optional).
+  //   [c,d)  [ 4, 0, 2, 0, 5 ]  The type (string).
+  //   [e,f)  [ 4, 0, 2, 0, 1 ]  The name (foo).
+  //   [g,h)  [ 4, 0, 2, 0, 3 ]  The number (1).
+ //
+ // Notes:
+ // - A location may refer to a repeated field itself (i.e. not to any
+ // particular index within it). This is used whenever a set of elements are
+ // logically enclosed in a single code segment. For example, an entire
+ // extend block (possibly containing multiple extension definitions) will
+ // have an outer location whose path refers to the "extensions" repeated
+ // field without an index.
+ // - Multiple locations may have the same path. This happens when a single
+ // logical declaration is spread out across multiple places. The most
+ // obvious example is the "extend" block again -- there may be multiple
+ // extend blocks in the same scope, each of which will have the same path.
+ // - A location's span is not always a subset of its parent's span. For
+ // example, the "extendee" of an extension declaration appears at the
+ // beginning of the "extend" block and is shared by all extensions within
+ // the block.
+ // - Just because a location's span is a subset of some other location's span
+ // does not mean that it is a descendent. For example, a "group" defines
+ // both a type and a field in a single declaration. Thus, the locations
+ // corresponding to the type and field and their components will overlap.
+ // - Code which tries to interpret locations should probably be designed to
+ // ignore those that it doesn't understand, as more types of locations could
+ // be recorded in the future.
+ repeated Location location = 1;
+ message Location {
+ // Identifies which part of the FileDescriptorProto was defined at this
+ // location.
+ //
+ // Each element is a field number or an index. They form a path from
+    // the root FileDescriptorProto to the place where the definition appears. For
+ // example, this path:
+ // [ 4, 3, 2, 7, 1 ]
+ // refers to:
+ // file.message_type(3) // 4, 3
+ // .field(7) // 2, 7
+ // .name() // 1
+ // This is because FileDescriptorProto.message_type has field number 4:
+ // repeated DescriptorProto message_type = 4;
+ // and DescriptorProto.field has field number 2:
+ // repeated FieldDescriptorProto field = 2;
+ // and FieldDescriptorProto.name has field number 1:
+ // optional string name = 1;
+ //
+ // Thus, the above path gives the location of a field name. If we removed
+ // the last element:
+ // [ 4, 3, 2, 7 ]
+ // this path refers to the whole field declaration (from the beginning
+ // of the label to the terminating semicolon).
+ repeated int32 path = 1 [packed=true];
+
+ // Always has exactly three or four elements: start line, start column,
+ // end line (optional, otherwise assumed same as start line), end column.
+ // These are packed into a single field for efficiency. Note that line
+ // and column numbers are zero-based -- typically you will want to add
+ // 1 to each before displaying to a user.
+ repeated int32 span = 2 [packed=true];
+
+ // If this SourceCodeInfo represents a complete declaration, these are any
+ // comments appearing before and after the declaration which appear to be
+ // attached to the declaration.
+ //
+ // A series of line comments appearing on consecutive lines, with no other
+ // tokens appearing on those lines, will be treated as a single comment.
+ //
+ // leading_detached_comments will keep paragraphs of comments that appear
+ // before (but not connected to) the current element. Each paragraph,
+ // separated by empty lines, will be one comment element in the repeated
+ // field.
+ //
+ // Only the comment content is provided; comment markers (e.g. //) are
+ // stripped out. For block comments, leading whitespace and an asterisk
+ // will be stripped from the beginning of each line other than the first.
+ // Newlines are included in the output.
+ //
+ // Examples:
+ //
+ // optional int32 foo = 1; // Comment attached to foo.
+ // // Comment attached to bar.
+ // optional int32 bar = 2;
+ //
+ // optional string baz = 3;
+ // // Comment attached to baz.
+ // // Another line attached to baz.
+ //
+ // // Comment attached to qux.
+ // //
+ // // Another line attached to qux.
+ // optional double qux = 4;
+ //
+ // // Detached comment for corge. This is not leading or trailing comments
+ // // to qux or corge because there are blank lines separating it from
+ // // both.
+ //
+ // // Detached comment for corge paragraph 2.
+ //
+ // optional string corge = 5;
+ // /* Block comment attached
+ // * to corge. Leading asterisks
+ // * will be removed. */
+ // /* Block comment attached to
+ // * grault. */
+ // optional int32 grault = 6;
+ //
+ // // ignored detached comments.
+ optional string leading_comments = 3;
+ optional string trailing_comments = 4;
+ repeated string leading_detached_comments = 6;
+ }
+}
+
+// Describes the relationship between generated code and its original source
+// file. A GeneratedCodeInfo message is associated with only one generated
+// source file, but may contain references to different source .proto files.
+message GeneratedCodeInfo {
+ // An Annotation connects some span of text in generated code to an element
+ // of its generating .proto file.
+ repeated Annotation annotation = 1;
+ message Annotation {
+ // Identifies the element in the original source .proto file. This field
+ // is formatted the same as SourceCodeInfo.Location.path.
+ repeated int32 path = 1 [packed=true];
+
+ // Identifies the filesystem path to the original source .proto.
+ optional string source_file = 2;
+
+ // Identifies the starting offset in bytes in the generated code
+ // that relates to the identified object.
+ optional int32 begin = 3;
+
+ // Identifies the ending offset in bytes in the generated code that
+ // relates to the identified offset. The end offset should be one past
+ // the last relevant byte (so the length of the text = end - begin).
+ optional int32 end = 4;
+ }
+}
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/duration.proto" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/duration.proto"
new file mode 100644
index 00000000..96c1796d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/duration.proto"
@@ -0,0 +1,98 @@
+// Protocol Buffers - Google's data interchange format
+// Copyright 2008 Google Inc. All rights reserved.
+// https://developers.google.com/protocol-buffers/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+syntax = "proto3";
+
+package google.protobuf;
+
+option csharp_namespace = "Google.Protobuf.WellKnownTypes";
+option go_package = "github.com/golang/protobuf/ptypes/duration";
+option java_package = "com.google.protobuf";
+option java_outer_classname = "DurationProto";
+option java_multiple_files = true;
+option java_generate_equals_and_hash = true;
+option objc_class_prefix = "GPB";
+
+// A Duration represents a signed, fixed-length span of time represented
+// as a count of seconds and fractions of seconds at nanosecond
+// resolution. It is independent of any calendar and concepts like "day"
+// or "month". It is related to Timestamp in that the difference between
+// two Timestamp values is a Duration and it can be added or subtracted
+// from a Timestamp. Range is approximately +-10,000 years.
+//
+// Example 1: Compute Duration from two Timestamps in pseudo code.
+//
+// Timestamp start = ...;
+// Timestamp end = ...;
+// Duration duration = ...;
+//
+// duration.seconds = end.seconds - start.seconds;
+// duration.nanos = end.nanos - start.nanos;
+//
+// if (duration.seconds < 0 && duration.nanos > 0) {
+// duration.seconds += 1;
+// duration.nanos -= 1000000000;
+// } else if (duration.seconds > 0 && duration.nanos < 0) {
+// duration.seconds -= 1;
+// duration.nanos += 1000000000;
+// }
+//
+// Example 2: Compute Timestamp from Timestamp + Duration in pseudo code.
+//
+// Timestamp start = ...;
+// Duration duration = ...;
+// Timestamp end = ...;
+//
+// end.seconds = start.seconds + duration.seconds;
+// end.nanos = start.nanos + duration.nanos;
+//
+// if (end.nanos < 0) {
+// end.seconds -= 1;
+// end.nanos += 1000000000;
+// } else if (end.nanos >= 1000000000) {
+// end.seconds += 1;
+// end.nanos -= 1000000000;
+// }
+//
+//
+message Duration {
+
+ // Signed seconds of the span of time. Must be from -315,576,000,000
+ // to +315,576,000,000 inclusive.
+ int64 seconds = 1;
+
+ // Signed fractions of a second at nanosecond resolution of the span
+ // of time. Durations less than one second are represented with a 0
+ // `seconds` field and a positive or negative `nanos` field. For durations
+ // of one second or more, a non-zero value for the `nanos` field must be
+ // of the same sign as the `seconds` field. Must be from -999,999,999
+ // to +999,999,999 inclusive.
+ int32 nanos = 2;
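+
+  // Illustrative values (not part of the original comment): a span of
+  // 3.5 seconds would be encoded as seconds = 3, nanos = 500000000, and a
+  // span of -3.5 seconds as seconds = -3, nanos = -500000000.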
+}
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/empty.proto" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/empty.proto"
new file mode 100644
index 00000000..37f4cd10
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/empty.proto"
@@ -0,0 +1,53 @@
+// Protocol Buffers - Google's data interchange format
+// Copyright 2008 Google Inc. All rights reserved.
+// https://developers.google.com/protocol-buffers/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+syntax = "proto3";
+
+package google.protobuf;
+
+option csharp_namespace = "Google.Protobuf.WellKnownTypes";
+option go_package = "github.com/golang/protobuf/ptypes/empty";
+option java_package = "com.google.protobuf";
+option java_outer_classname = "EmptyProto";
+option java_multiple_files = true;
+option java_generate_equals_and_hash = true;
+option objc_class_prefix = "GPB";
+option cc_enable_arenas = true;
+
+// A generic empty message that you can re-use to avoid defining duplicated
+// empty messages in your APIs. A typical example is to use it as the request
+// or the response type of an API method. For instance:
+//
+// service Foo {
+// rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
+// }
+//
+// The JSON representation for `Empty` is an empty JSON object `{}`.
+message Empty {}
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/field_mask.proto" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/field_mask.proto"
new file mode 100644
index 00000000..c51de09a
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/field_mask.proto"
@@ -0,0 +1,246 @@
+// Protocol Buffers - Google's data interchange format
+// Copyright 2008 Google Inc. All rights reserved.
+// https://developers.google.com/protocol-buffers/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+syntax = "proto3";
+
+package google.protobuf;
+
+option csharp_namespace = "Google.Protobuf.WellKnownTypes";
+option java_package = "com.google.protobuf";
+option java_outer_classname = "FieldMaskProto";
+option java_multiple_files = true;
+option objc_class_prefix = "GPB";
+option java_generate_equals_and_hash = true;
+
+// `FieldMask` represents a set of symbolic field paths, for example:
+//
+// paths: "f.a"
+// paths: "f.b.d"
+//
+// Here `f` represents a field in some root message, `a` and `b`
+// fields in the message found in `f`, and `d` a field found in the
+// message in `f.b`.
+//
+// Field masks are used to specify a subset of fields that should be
+// returned by a get operation or modified by an update operation.
+// Field masks also have a custom JSON encoding (see below).
+//
+// # Field Masks in Projections
+//
+// When used in the context of a projection, a response message or
+// sub-message is filtered by the API to only contain those fields as
+// specified in the mask. For example, if the mask in the previous
+// example is applied to a response message as follows:
+//
+// f {
+// a : 22
+// b {
+// d : 1
+// x : 2
+// }
+// y : 13
+// }
+// z: 8
+//
+// The result will not contain specific values for fields x, y and z
+// (their value will be set to the default, and omitted in proto text
+// output):
+//
+//
+// f {
+// a : 22
+// b {
+// d : 1
+// }
+// }
+//
+// A repeated field is not allowed except at the last position of a
+// field mask.
+//
+// If a FieldMask object is not present in a get operation, the
+// operation applies to all fields (as if a FieldMask of all fields
+// had been specified).
+//
+// Note that a field mask does not necessarily apply to the
+// top-level response message. In case of a REST get operation, the
+// field mask applies directly to the response, but in case of a REST
+// list operation, the mask instead applies to each individual message
+// in the returned resource list. In case of a REST custom method,
+// other definitions may be used. Where the mask applies will be
+// clearly documented together with its declaration in the API. In
+// any case, the effect on the returned resource/resources is required
+// behavior for APIs.
+//
+// # Field Masks in Update Operations
+//
+// A field mask in update operations specifies which fields of the
+// targeted resource are going to be updated. The API is required
+// to only change the values of the fields as specified in the mask
+// and leave the others untouched. If a resource is passed in to
+// describe the updated values, the API ignores the values of all
+// fields not covered by the mask.
+//
+// If a repeated field is specified for an update operation, the existing
+// repeated values in the target resource will be overwritten by the new values.
+// Note that a repeated field is only allowed in the last position of a field
+// mask.
+//
+// If a sub-message is specified in the last position of the field mask for an
+// update operation, then the existing sub-message in the target resource is
+// overwritten. Given the target message:
+//
+// f {
+// b {
+// d : 1
+// x : 2
+// }
+// c : 1
+// }
+//
+// And an update message:
+//
+// f {
+// b {
+// d : 10
+// }
+// }
+//
+// then if the field mask is:
+//
+// paths: "f.b"
+//
+// then the result will be:
+//
+// f {
+// b {
+// d : 10
+// }
+// c : 1
+// }
+//
+// However, if the update mask was:
+//
+// paths: "f.b.d"
+//
+// then the result would be:
+//
+// f {
+// b {
+// d : 10
+// x : 2
+// }
+// c : 1
+// }
+//
+// In order to reset a field's value to the default, the field must
+// be in the mask and set to the default value in the provided resource.
+// Hence, in order to reset all fields of a resource, provide a default
+// instance of the resource and set all fields in the mask, or do
+// not provide a mask as described below.
+//
+// If a field mask is not present on update, the operation applies to
+// all fields (as if a field mask of all fields has been specified).
+// Note that in the presence of schema evolution, this may mean that
+// fields the client does not know and has therefore not filled into
+// the request will be reset to their default. If this is unwanted
+// behavior, a specific service may require a client to always specify
+// a field mask, producing an error if not.
+//
+// As with get operations, the location of the resource which
+// describes the updated values in the request message depends on the
+// operation kind. In any case, the effect of the field mask is
+// required to be honored by the API.
+//
+// ## Considerations for HTTP REST
+//
+// The HTTP kind of an update operation which uses a field mask must
+// be set to PATCH instead of PUT in order to satisfy HTTP semantics
+// (PUT must only be used for full updates).
+//
+// # JSON Encoding of Field Masks
+//
+// In JSON, a field mask is encoded as a single string where paths are
+// separated by a comma. Field names in each path are converted
+// to/from lower-camel naming conventions.
+//
+// As an example, consider the following message declarations:
+//
+// message Profile {
+// User user = 1;
+// Photo photo = 2;
+// }
+// message User {
+// string display_name = 1;
+// string address = 2;
+// }
+//
+// In proto, a field mask for `Profile` may look as follows:
+//
+// mask {
+// paths: "user.display_name"
+// paths: "photo"
+// }
+//
+// In JSON, the same mask is represented as below:
+//
+// {
+// mask: "user.displayName,photo"
+// }
+//
+// # Field Masks and Oneof Fields
+//
+// Field masks treat fields in oneofs just as regular fields. Consider the
+// following message:
+//
+// message SampleMessage {
+// oneof test_oneof {
+// string name = 4;
+// SubMessage sub_message = 9;
+// }
+// }
+//
+// The field mask can be:
+//
+// mask {
+// paths: "name"
+// }
+//
+// Or:
+//
+// mask {
+// paths: "sub_message"
+// }
+//
+// Note that oneof type names ("test_oneof" in this case) cannot be used in
+// paths.
+message FieldMask {
+ // The set of field mask paths.
+ repeated string paths = 1;
+}
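+
+// A minimal Python sketch of the JSON encoding described above, assuming the
+// standard `google.protobuf` runtime (illustrative only):
+//
+//     from google.protobuf import field_mask_pb2
+//
+//     mask = field_mask_pb2.FieldMask(paths=["user.display_name", "photo"])
+//     mask.ToJsonString()                    # -> 'user.displayName,photo'
+//     mask.FromJsonString("user.displayName,photo")  # restores the two paths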
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/source_context.proto" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/source_context.proto"
new file mode 100644
index 00000000..a2c08e2b
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/source_context.proto"
@@ -0,0 +1,48 @@
+// Protocol Buffers - Google's data interchange format
+// Copyright 2008 Google Inc. All rights reserved.
+// https://developers.google.com/protocol-buffers/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+syntax = "proto3";
+
+package google.protobuf;
+
+option csharp_namespace = "Google.Protobuf.WellKnownTypes";
+option java_package = "com.google.protobuf";
+option java_outer_classname = "SourceContextProto";
+option java_multiple_files = true;
+option java_generate_equals_and_hash = true;
+option objc_class_prefix = "GPB";
+
+// `SourceContext` represents information about the source of a
+// protobuf element, like the file in which it is defined.
+message SourceContext {
+ // The path-qualified name of the .proto file that contained the associated
+ // protobuf element. For example: `"google/protobuf/source_context.proto"`.
+ string file_name = 1;
+}
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/struct.proto" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/struct.proto"
new file mode 100644
index 00000000..beeba811
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/struct.proto"
@@ -0,0 +1,96 @@
+// Protocol Buffers - Google's data interchange format
+// Copyright 2008 Google Inc. All rights reserved.
+// https://developers.google.com/protocol-buffers/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+syntax = "proto3";
+
+package google.protobuf;
+
+option csharp_namespace = "Google.Protobuf.WellKnownTypes";
+option go_package = "github.com/golang/protobuf/ptypes/struct;structpb";
+option java_package = "com.google.protobuf";
+option java_outer_classname = "StructProto";
+option java_multiple_files = true;
+option java_generate_equals_and_hash = true;
+option objc_class_prefix = "GPB";
+
+
+// `Struct` represents a structured data value, consisting of fields
+// which map to dynamically typed values. In some languages, `Struct`
+// might be supported by a native representation. For example, in
+// scripting languages like JS a struct is represented as an
+// object. The details of that representation are described together
+// with the proto support for the language.
+//
+// The JSON representation for `Struct` is JSON object.
+message Struct {
+ // Unordered map of dynamically typed values.
+  map<string, Value> fields = 1;
+}
+
+// `Value` represents a dynamically typed value which can be either
+// null, a number, a string, a boolean, a recursive struct value, or a
+// list of values. A producer of value is expected to set one of these
+// variants; absence of any variant indicates an error.
+//
+// The JSON representation for `Value` is JSON value.
+message Value {
+ // The kind of value.
+ oneof kind {
+ // Represents a null value.
+ NullValue null_value = 1;
+ // Represents a double value.
+ double number_value = 2;
+ // Represents a string value.
+ string string_value = 3;
+ // Represents a boolean value.
+ bool bool_value = 4;
+ // Represents a structured value.
+ Struct struct_value = 5;
+ // Represents a repeated `Value`.
+ ListValue list_value = 6;
+ }
+}
+
+// `NullValue` is a singleton enumeration to represent the null value for the
+// `Value` type union.
+//
+// The JSON representation for `NullValue` is JSON `null`.
+enum NullValue {
+ // Null value.
+ NULL_VALUE = 0;
+}
+
+// `ListValue` is a wrapper around a repeated field of values.
+//
+// The JSON representation for `ListValue` is JSON array.
+message ListValue {
+ // Repeated field of dynamically typed values.
+ repeated Value values = 1;
+}
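+
+// A minimal Python sketch of the mapping above, assuming the standard
+// `google.protobuf` runtime (illustrative only): `Struct` behaves like a
+// dictionary of dynamically typed values.
+//
+//     from google.protobuf import struct_pb2
+//
+//     s = struct_pb2.Struct()
+//     s.update({"name": "Rex", "found": False, "age": 3})
+//     s["name"]   # -> 'Rex'
+//     s["age"]    # -> 3.0 (numbers are stored as doubles)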
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/timestamp.proto" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/timestamp.proto"
new file mode 100644
index 00000000..7992a858
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/timestamp.proto"
@@ -0,0 +1,111 @@
+// Protocol Buffers - Google's data interchange format
+// Copyright 2008 Google Inc. All rights reserved.
+// https://developers.google.com/protocol-buffers/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+syntax = "proto3";
+
+package google.protobuf;
+
+option csharp_namespace = "Google.Protobuf.WellKnownTypes";
+option cc_enable_arenas = true;
+option go_package = "github.com/golang/protobuf/ptypes/timestamp";
+option java_package = "com.google.protobuf";
+option java_outer_classname = "TimestampProto";
+option java_multiple_files = true;
+option java_generate_equals_and_hash = true;
+option objc_class_prefix = "GPB";
+
+// A Timestamp represents a point in time independent of any time zone
+// or calendar, represented as seconds and fractions of seconds at
+// nanosecond resolution in UTC Epoch time. It is encoded using the
+// Proleptic Gregorian Calendar which extends the Gregorian calendar
+// backwards to year one. It is encoded assuming all minutes are 60
+// seconds long, i.e. leap seconds are "smeared" so that no leap second
+// table is needed for interpretation. Range is from
+// 0001-01-01T00:00:00Z to 9999-12-31T23:59:59.999999999Z.
+// By restricting to that range, we ensure that we can convert to
+// and from RFC 3339 date strings.
+// See [https://www.ietf.org/rfc/rfc3339.txt](https://www.ietf.org/rfc/rfc3339.txt).
+//
+// Example 1: Compute Timestamp from POSIX `time()`.
+//
+// Timestamp timestamp;
+// timestamp.set_seconds(time(NULL));
+// timestamp.set_nanos(0);
+//
+// Example 2: Compute Timestamp from POSIX `gettimeofday()`.
+//
+// struct timeval tv;
+// gettimeofday(&tv, NULL);
+//
+// Timestamp timestamp;
+// timestamp.set_seconds(tv.tv_sec);
+// timestamp.set_nanos(tv.tv_usec * 1000);
+//
+// Example 3: Compute Timestamp from Win32 `GetSystemTimeAsFileTime()`.
+//
+// FILETIME ft;
+// GetSystemTimeAsFileTime(&ft);
+// UINT64 ticks = (((UINT64)ft.dwHighDateTime) << 32) | ft.dwLowDateTime;
+//
+// // A Windows tick is 100 nanoseconds. Windows epoch 1601-01-01T00:00:00Z
+// // is 11644473600 seconds before Unix epoch 1970-01-01T00:00:00Z.
+// Timestamp timestamp;
+// timestamp.set_seconds((INT64) ((ticks / 10000000) - 11644473600LL));
+// timestamp.set_nanos((INT32) ((ticks % 10000000) * 100));
+//
+// Example 4: Compute Timestamp from Java `System.currentTimeMillis()`.
+//
+// long millis = System.currentTimeMillis();
+//
+// Timestamp timestamp = Timestamp.newBuilder().setSeconds(millis / 1000)
+// .setNanos((int) ((millis % 1000) * 1000000)).build();
+//
+//
+// Example 5: Compute Timestamp from current time in Python.
+//
+// now = time.time()
+// seconds = int(now)
+// nanos = int((now - seconds) * 10**9)
+// timestamp = Timestamp(seconds=seconds, nanos=nanos)
+//
+//
+message Timestamp {
+
+ // Represents seconds of UTC time since Unix epoch
+  // 1970-01-01T00:00:00Z. Must be from 0001-01-01T00:00:00Z to
+ // 9999-12-31T23:59:59Z inclusive.
+ int64 seconds = 1;
+
+ // Non-negative fractions of a second at nanosecond resolution. Negative
+ // second values with fractions must still have non-negative nanos values
+ // that count forward in time. Must be from 0 to 999,999,999
+ // inclusive.
+ int32 nanos = 2;
+}
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/type.proto" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/type.proto"
new file mode 100644
index 00000000..1c9cf53d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/type.proto"
@@ -0,0 +1,180 @@
+// Protocol Buffers - Google's data interchange format
+// Copyright 2008 Google Inc. All rights reserved.
+// https://developers.google.com/protocol-buffers/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+syntax = "proto3";
+
+package google.protobuf;
+
+import "google/protobuf/any.proto";
+import "google/protobuf/source_context.proto";
+
+option csharp_namespace = "Google.Protobuf.WellKnownTypes";
+option java_package = "com.google.protobuf";
+option java_outer_classname = "TypeProto";
+option java_multiple_files = true;
+option java_generate_equals_and_hash = true;
+option objc_class_prefix = "GPB";
+
+// A protocol buffer message type.
+message Type {
+ // The fully qualified message name.
+ string name = 1;
+ // The list of fields.
+ repeated Field fields = 2;
+ // The list of types appearing in `oneof` definitions in this type.
+ repeated string oneofs = 3;
+ // The protocol buffer options.
+ repeated Option options = 4;
+ // The source context.
+ SourceContext source_context = 5;
+ // The source syntax.
+ Syntax syntax = 6;
+}
+
+// A single field of a message type.
+message Field {
+ // Basic field types.
+ enum Kind {
+ // Field type unknown.
+ TYPE_UNKNOWN = 0;
+ // Field type double.
+ TYPE_DOUBLE = 1;
+ // Field type float.
+ TYPE_FLOAT = 2;
+ // Field type int64.
+ TYPE_INT64 = 3;
+ // Field type uint64.
+ TYPE_UINT64 = 4;
+ // Field type int32.
+ TYPE_INT32 = 5;
+ // Field type fixed64.
+ TYPE_FIXED64 = 6;
+ // Field type fixed32.
+ TYPE_FIXED32 = 7;
+ // Field type bool.
+ TYPE_BOOL = 8;
+ // Field type string.
+ TYPE_STRING = 9;
+ // Field type group. Proto2 syntax only, and deprecated.
+ TYPE_GROUP = 10;
+ // Field type message.
+ TYPE_MESSAGE = 11;
+ // Field type bytes.
+ TYPE_BYTES = 12;
+ // Field type uint32.
+ TYPE_UINT32 = 13;
+ // Field type enum.
+ TYPE_ENUM = 14;
+ // Field type sfixed32.
+ TYPE_SFIXED32 = 15;
+ // Field type sfixed64.
+ TYPE_SFIXED64 = 16;
+ // Field type sint32.
+ TYPE_SINT32 = 17;
+ // Field type sint64.
+ TYPE_SINT64 = 18;
+ };
+
+ // Whether a field is optional, required, or repeated.
+ enum Cardinality {
+ // For fields with unknown cardinality.
+ CARDINALITY_UNKNOWN = 0;
+ // For optional fields.
+ CARDINALITY_OPTIONAL = 1;
+ // For required fields. Proto2 syntax only.
+ CARDINALITY_REQUIRED = 2;
+ // For repeated fields.
+ CARDINALITY_REPEATED = 3;
+ };
+
+ // The field type.
+ Kind kind = 1;
+ // The field cardinality.
+ Cardinality cardinality = 2;
+ // The field number.
+ int32 number = 3;
+ // The field name.
+ string name = 4;
+ // The field type URL, without the scheme, for message or enumeration
+ // types. Example: `"type.googleapis.com/google.protobuf.Timestamp"`.
+ string type_url = 6;
+ // The index of the field type in `Type.oneofs`, for message or enumeration
+ // types. The first type has index 1; zero means the type is not in the list.
+ int32 oneof_index = 7;
+ // Whether to use alternative packed wire representation.
+ bool packed = 8;
+ // The protocol buffer options.
+ repeated Option options = 9;
+ // The field JSON name.
+ string json_name = 10;
+ // The string value of the default value of this field. Proto2 syntax only.
+ string default_value = 11;
+}
+
+// Enum type definition.
+message Enum {
+ // Enum type name.
+ string name = 1;
+ // Enum value definitions.
+ repeated EnumValue enumvalue = 2;
+ // Protocol buffer options.
+ repeated Option options = 3;
+ // The source context.
+ SourceContext source_context = 4;
+ // The source syntax.
+ Syntax syntax = 5;
+}
+
+// Enum value definition.
+message EnumValue {
+ // Enum value name.
+ string name = 1;
+ // Enum value number.
+ int32 number = 2;
+ // Protocol buffer options.
+ repeated Option options = 3;
+}
+
+// A protocol buffer option, which can be attached to a message, field,
+// enumeration, etc.
+message Option {
+ // The option's name. For example, `"java_package"`.
+ string name = 1;
+ // The option's value. For example, `"com.google.protobuf"`.
+ Any value = 2;
+}
+
+// The syntax in which a protocol buffer element is defined.
+enum Syntax {
+ // Syntax `proto2`.
+ SYNTAX_PROTO2 = 0;
+ // Syntax `proto3`.
+ SYNTAX_PROTO3 = 1;
+}
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/wrappers.proto" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/wrappers.proto"
new file mode 100644
index 00000000..4828ad9a
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/include/google/protobuf/wrappers.proto"
@@ -0,0 +1,119 @@
+// Protocol Buffers - Google's data interchange format
+// Copyright 2008 Google Inc. All rights reserved.
+// https://developers.google.com/protocol-buffers/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+// Wrappers for primitive (non-message) types. These types are useful
+// for embedding primitives in the `google.protobuf.Any` type and for places
+// where we need to distinguish between the absence of a primitive
+// typed field and its default value.
+
+syntax = "proto3";
+
+package google.protobuf;
+
+option csharp_namespace = "Google.Protobuf.WellKnownTypes";
+option cc_enable_arenas = true;
+option go_package = "github.com/golang/protobuf/ptypes/wrappers";
+option java_package = "com.google.protobuf";
+option java_outer_classname = "WrappersProto";
+option java_multiple_files = true;
+option java_generate_equals_and_hash = true;
+option objc_class_prefix = "GPB";
+
+// Wrapper message for `double`.
+//
+// The JSON representation for `DoubleValue` is JSON number.
+message DoubleValue {
+ // The double value.
+ double value = 1;
+}
+
+// Wrapper message for `float`.
+//
+// The JSON representation for `FloatValue` is JSON number.
+message FloatValue {
+ // The float value.
+ float value = 1;
+}
+
+// Wrapper message for `int64`.
+//
+// The JSON representation for `Int64Value` is JSON string.
+message Int64Value {
+ // The int64 value.
+ int64 value = 1;
+}
+
+// Wrapper message for `uint64`.
+//
+// The JSON representation for `UInt64Value` is JSON string.
+message UInt64Value {
+ // The uint64 value.
+ uint64 value = 1;
+}
+
+// Wrapper message for `int32`.
+//
+// The JSON representation for `Int32Value` is JSON number.
+message Int32Value {
+ // The int32 value.
+ int32 value = 1;
+}
+
+// Wrapper message for `uint32`.
+//
+// The JSON representation for `UInt32Value` is JSON number.
+message UInt32Value {
+ // The uint32 value.
+ uint32 value = 1;
+}
+
+// Wrapper message for `bool`.
+//
+// The JSON representation for `BoolValue` is JSON `true` and `false`.
+message BoolValue {
+ // The bool value.
+ bool value = 1;
+}
+
+// Wrapper message for `string`.
+//
+// The JSON representation for `StringValue` is JSON string.
+message StringValue {
+ // The string value.
+ string value = 1;
+}
+
+// Wrapper message for `bytes`.
+//
+// The JSON representation for `BytesValue` is JSON string.
+message BytesValue {
+ // The bytes value.
+ bytes value = 1;
+}
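+
+// A minimal Python sketch, assuming the standard `google.protobuf` runtime
+// (illustrative only): because each wrapper is a message, a wrapper-typed
+// field has presence in proto3, so "unset" stays distinguishable from the
+// default value.
+//
+//     from google.protobuf import wrappers_pb2
+//
+//     v = wrappers_pb2.Int32Value(value=0)   # an explicit zero
+//     # Leaving an Int32Value field unset in a containing message means
+//     # "no value provided", which a plain int32 field cannot express.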
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/keypointnet/CONTRIBUTING.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/keypointnet/CONTRIBUTING.md"
new file mode 100644
index 00000000..939e5341
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/keypointnet/CONTRIBUTING.md"
@@ -0,0 +1,28 @@
+# How to Contribute
+
+We'd love to accept your patches and contributions to this project. There are
+just a few small guidelines you need to follow.
+
+## Contributor License Agreement
+
+Contributions to this project must be accompanied by a Contributor License
+Agreement. You (or your employer) retain the copyright to your contribution;
+this simply gives us permission to use and redistribute your contributions as
+part of the project. Head over to <https://cla.developers.google.com/> to see
+your current agreements on file or to sign a new one.
+
+You generally only need to submit a CLA once, so if you've already submitted one
+(even if it was for a different project), you probably don't need to do it
+again.
+
+## Code reviews
+
+All submissions, including submissions by project members, require review. We
+use GitHub pull requests for this purpose. Consult
+[GitHub Help](https://help.github.com/articles/about-pull-requests/) for more
+information on using pull requests.
+
+## Community Guidelines
+
+This project follows [Google's Open Source Community
+Guidelines](https://opensource.google.com/conduct/).
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/keypointnet/LICENSE" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/keypointnet/LICENSE"
new file mode 100644
index 00000000..d6456956
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/keypointnet/LICENSE"
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/keypointnet/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/keypointnet/README.md"
new file mode 100644
index 00000000..fd4039da
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/keypointnet/README.md"
@@ -0,0 +1,42 @@
+# KeypointNet
+This is an implementation of the keypoint network proposed in "Discovery of
+Latent 3D Keypoints via End-to-end Geometric Reasoning"
+[[pdf](https://arxiv.org/pdf/1807.03146.pdf)]. Given a single 2D image of a
+known class, this network can predict a set of 3D keypoints that are consistent
+across viewing angles of the same object and across object instances. These
+keypoints and their detectors are discovered and learned automatically without
+keypoint location supervision [[demo](https://keypointnet.github.io)].
+
+## Datasets:
+ ShapeNet's rendering for
+ [Cars](https://storage.googleapis.com/discovery-3dkeypoints-data/cars_with_keypoints.zip),
+ [Planes](https://storage.googleapis.com/discovery-3dkeypoints-data/planes_with_keypoints.zip),
+ [Chairs](https://storage.googleapis.com/discovery-3dkeypoints-data/chairs_with_keypoints.zip).
+
+ Each set contains:
+1. tfrecords
+2. train.txt, a list of tfrecords used for training.
+3. dev.txt, a list of tfrecords used for validation.
+4. test.txt, a list of tfrecords used for testing.
+5. projection.txt, storing the global 4x4 camera projection matrix.
+6. job.txt, storing ShapeNet's object IDs in each tfrecord.
+
+## Training:
+ Run `main.py --model_dir=MODEL_DIR --dset=DSET`
+
+ where MODEL_DIR is a folder for storing model checkpoints (see [tf.estimator](https://www.tensorflow.org/api_docs/python/tf/estimator/Estimator)), and DSET should point to the folder containing the tfrecords (downloaded above).
+
+## Inference:
+ Run `main.py --model_dir=MODEL_DIR --input=INPUT --predict`
+
+ where MODEL_DIR is the model checkpoint folder, and INPUT is a folder containing png or jpeg test images.
+ We trained the network using a total batch size of 256 (8 x 32 replicas). You may have to tune the learning rate if your batch size is different.
+
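+ For example, with hypothetical paths (train first, then run inference on a
+ folder of test images):
+
+     main.py --model_dir=/tmp/keypointnet --dset=/data/cars_with_keypoints/
+     main.py --model_dir=/tmp/keypointnet --input=/data/test_images/ --predict
+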
+## Code credit:
+ Supasorn Suwajanakorn
+
+## Contact:
+ supasorn@gmail.com, [snavely,tompson,mnorouzi]@google.com
+
+
+(This is not an officially supported Google product)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/keypointnet/main.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/keypointnet/main.py"
new file mode 100644
index 00000000..04b30159
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/keypointnet/main.py"
@@ -0,0 +1,697 @@
+# Copyright 2018 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# =============================================================================
+"""KeypointNet!!
+
+A reimplementation of 'Discovery of Latent 3D Keypoints via End-to-end
+Geometric Reasoning' keypoint network. Given a single 2D image of a known class,
+this network can predict a set of 3D keypoints that are consistent across
+viewing angles of the same object and across object instances. These keypoints
+and their detectors are discovered and learned automatically without
+keypoint location supervision.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import math
+import matplotlib.pyplot as plt
+import numpy as np
+import os
+from scipy import misc
+import sys
+import tensorflow as tf
+import tensorflow.contrib.slim as slim
+import utils
+
+FLAGS = tf.app.flags.FLAGS
+
+tf.app.flags.DEFINE_boolean("predict", False, "Running inference if true")
+tf.app.flags.DEFINE_string(
+ "input",
+ "",
+ "Input folder containing images")
+tf.app.flags.DEFINE_string("model_dir", None, "Estimator model_dir")
+tf.app.flags.DEFINE_string(
+ "dset",
+ "",
+ "Path to the directory containing the dataset.")
+tf.app.flags.DEFINE_integer("steps", 200000, "Training steps")
+tf.app.flags.DEFINE_integer("batch_size", 8, "Size of mini-batch.")
+tf.app.flags.DEFINE_string(
+ "hparams", "",
+ "A comma-separated list of `name=value` hyperparameter values. This flag "
+ "is used to override hyperparameter settings either when manually "
+ "selecting hyperparameters or when using Vizier.")
+tf.app.flags.DEFINE_integer(
+ "sync_replicas", -1,
+ "If > 0, use SyncReplicasOptimizer and use this many replicas per sync.")
+
+# Fixed input size 128 x 128.
+vw = vh = 128
+
+
+def create_input_fn(split, batch_size):
+ """Returns input_fn for tf.estimator.Estimator.
+
+  Reads tfrecords and constructs input_fn for either training or eval. All
+  tfrecords not in test.txt or dev.txt will be assigned to the training set.
+
+ Args:
+ split: A string indicating the split. Can be either 'train' or 'validation'.
+ batch_size: The batch size!
+
+ Returns:
+ input_fn for tf.estimator.Estimator.
+
+ Raises:
+ IOError: If test.txt or dev.txt are not found.
+ """
+
+ if (not os.path.exists(os.path.join(FLAGS.dset, "test.txt")) or
+ not os.path.exists(os.path.join(FLAGS.dset, "dev.txt"))):
+ raise IOError("test.txt or dev.txt not found")
+
+ with open(os.path.join(FLAGS.dset, "test.txt"), "r") as f:
+ testset = [x.strip() for x in f.readlines()]
+
+ with open(os.path.join(FLAGS.dset, "dev.txt"), "r") as f:
+ validset = [x.strip() for x in f.readlines()]
+
+ files = os.listdir(FLAGS.dset)
+ filenames = []
+ for f in files:
+ sp = os.path.splitext(f)
+ if sp[1] != ".tfrecord" or sp[0] in testset:
+ continue
+
+ if ((split == "validation" and sp[0] in validset) or
+ (split == "train" and sp[0] not in validset)):
+ filenames.append(os.path.join(FLAGS.dset, f))
+
+ def input_fn():
+ """input_fn for tf.estimator.Estimator."""
+
+ def parser(serialized_example):
+ """Parses a single tf.Example into image and label tensors."""
+ fs = tf.parse_single_example(
+ serialized_example,
+ features={
+ "img0": tf.FixedLenFeature([], tf.string),
+ "img1": tf.FixedLenFeature([], tf.string),
+ "mv0": tf.FixedLenFeature([16], tf.float32),
+ "mvi0": tf.FixedLenFeature([16], tf.float32),
+ "mv1": tf.FixedLenFeature([16], tf.float32),
+ "mvi1": tf.FixedLenFeature([16], tf.float32),
+ })
+
+ fs["img0"] = tf.div(tf.to_float(tf.image.decode_png(fs["img0"], 4)), 255)
+ fs["img1"] = tf.div(tf.to_float(tf.image.decode_png(fs["img1"], 4)), 255)
+
+ fs["img0"].set_shape([vh, vw, 4])
+ fs["img1"].set_shape([vh, vw, 4])
+
+ # fs["lr0"] = [fs["mv0"][0]]
+ # fs["lr1"] = [fs["mv1"][0]]
+
+ fs["lr0"] = tf.convert_to_tensor([fs["mv0"][0]])
+ fs["lr1"] = tf.convert_to_tensor([fs["mv1"][0]])
+
+ return fs
+
+ np.random.shuffle(filenames)
+ dataset = tf.data.TFRecordDataset(filenames)
+ dataset = dataset.map(parser, num_parallel_calls=4)
+ dataset = dataset.shuffle(400).repeat().batch(batch_size)
+ dataset = dataset.prefetch(buffer_size=256)
+
+ return dataset.make_one_shot_iterator().get_next(), None
+
+ return input_fn
+
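+# A minimal usage sketch for create_input_fn (illustrative only; FLAGS.dset
+# must point at a folder of tfrecords that also contains dev.txt and test.txt):
+#
+#   train_input_fn = create_input_fn("train", FLAGS.batch_size)
+#   features, labels = train_input_fn()  # dict of batched tensors; labels is None
+#   # tf.estimator.Estimator accepts the callable itself as its input_fn.
+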
+
+class Transformer(object):
+ """A utility for projecting 3D points to 2D coordinates and vice versa.
+
+ 3D points are represented in 4D-homogeneous world coordinates. The pixel
+ coordinates are represented in normalized device coordinates [-1, 1].
+ See https://learnopengl.com/Getting-started/Coordinate-Systems.
+ """
+
+ def __get_matrix(self, lines):
+ return np.array([[float(y) for y in x.strip().split(" ")] for x in lines])
+
+ def __read_projection_matrix(self, filename):
+ if not os.path.exists(filename):
+ filename = "/cns/vz-d/home/supasorn/datasets/cars/projection.txt"
+ with open(filename, "r") as f:
+ lines = f.readlines()
+ return self.__get_matrix(lines)
+
+ def __init__(self, w, h, dataset_dir):
+ self.w = w
+ self.h = h
+ p = self.__read_projection_matrix(dataset_dir + "projection.txt")
+
+    # Transpose of the inverse projection matrix.
+ self.pinv_t = tf.constant([[1.0 / p[0, 0], 0, 0,
+ 0], [0, 1.0 / p[1, 1], 0, 0], [0, 0, 1, 0],
+ [0, 0, 0, 1]])
+ self.f = p[0, 0]
+
+ def project(self, xyzw):
+ """Projects homogeneous 3D coordinates to normalized device coordinates."""
+
+ z = xyzw[:, :, 2:3] + 1e-8
+ return tf.concat([-self.f * xyzw[:, :, :2] / z, z], axis=2)
+
+ def unproject(self, xyz):
+ """Unprojects normalized device coordinates with depth to 3D coordinates."""
+
+ z = xyz[:, :, 2:]
+ xy = -xyz * z
+
+ def batch_matmul(a, b):
+ return tf.reshape(
+ tf.matmul(tf.reshape(a, [-1, a.shape[2].value]), b),
+ [-1, a.shape[1].value, a.shape[2].value])
+
+ return batch_matmul(
+ tf.concat([xy[:, :, :2], z, tf.ones_like(z)], axis=2), self.pinv_t)
+
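+# The class above is a pinhole model: `project` maps a homogeneous point
+# (x, y, z, w) to NDC as (-f * x / z, -f * y / z, z), and `unproject` inverts
+# it, so unproject(project(p)) recovers (x, y, z, 1) when the projection has
+# equal focal lengths. Illustrative sketch (assumes projection.txt exists
+# under FLAGS.dset):
+#
+#   t = Transformer(vw, vh, FLAGS.dset)
+#   p = tf.constant([[[0.5, -0.2, -30.0, 1.0]]])
+#   roundtrip = t.unproject(t.project(p))
+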
+
+def meshgrid(h):
+ """Returns a meshgrid ranging from [-1, 1] in x, y axes."""
+
+ r = np.arange(0.5, h, 1) / (h / 2) - 1
+ ranx, rany = tf.meshgrid(r, -r)
+ return tf.to_float(ranx), tf.to_float(rany)
+
+
+def estimate_rotation(xyz0, xyz1, pconf, noise):
+ """Estimates the rotation between two sets of keypoints.
+
+  The rotation is estimated by first subtracting the weighted mean from each
+  set of keypoints and then computing the SVD of the weighted covariance matrix.
+
+ Args:
+ xyz0: [batch, num_kp, 3] The first set of keypoints.
+ xyz1: [batch, num_kp, 3] The second set of keypoints.
+ pconf: [batch, num_kp] The weights used to compute the rotation estimate.
+ noise: A number indicating the noise added to the keypoints.
+
+ Returns:
+ [batch, 3, 3] A batch of transposed 3 x 3 rotation matrices.
+ """
+
+ xyz0 += tf.random_normal(tf.shape(xyz0), mean=0, stddev=noise)
+ xyz1 += tf.random_normal(tf.shape(xyz1), mean=0, stddev=noise)
+
+ pconf2 = tf.expand_dims(pconf, 2)
+ cen0 = tf.reduce_sum(xyz0 * pconf2, 1, keepdims=True)
+ cen1 = tf.reduce_sum(xyz1 * pconf2, 1, keepdims=True)
+
+ x = xyz0 - cen0
+ y = xyz1 - cen1
+
+ cov = tf.matmul(tf.matmul(x, tf.matrix_diag(pconf), transpose_a=True), y)
+ _, u, v = tf.svd(cov, full_matrices=True)
+
+ d = tf.matrix_determinant(tf.matmul(v, u, transpose_b=True))
+ ud = tf.concat(
+ [u[:, :, :-1], u[:, :, -1:] * tf.expand_dims(tf.expand_dims(d, 1), 1)],
+ axis=2)
+ return tf.matmul(ud, v, transpose_b=True)
+
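+
+def _np_weighted_kabsch(x0, x1, weights):
+  """NumPy reference for the weighted Procrustes/Kabsch estimate above.
+
+  An illustrative sketch only (not used by the model): takes a single,
+  unbatched pair of [num_kp, 3] point sets and weights summing to 1, and
+  returns the transposed rotation, matching estimate_rotation (minus the
+  added noise).
+  """
+  w = weights[:, None]
+  x = x0 - (w * x0).sum(axis=0)  # subtract weighted centroids
+  y = x1 - (w * x1).sum(axis=0)
+  u, _, vt = np.linalg.svd(x.T.dot(w * y))
+  d = np.sign(np.linalg.det(vt.T.dot(u.T)))  # guard against reflections
+  u[:, -1] *= d
+  return u.dot(vt)
+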
+
+def relative_pose_loss(xyz0, xyz1, rot, pconf, noise):
+ """Computes the relative pose loss (chordal, angular).
+
+ Args:
+ xyz0: [batch, num_kp, 3] The first set of keypoints.
+ xyz1: [batch, num_kp, 3] The second set of keypoints.
+ rot: [batch, 4, 4] The ground-truth rotation matrices.
+ pconf: [batch, num_kp] The weights used to compute the rotation estimate.
+ noise: A number indicating the noise added to the keypoints.
+
+ Returns:
+ A tuple (chordal loss, angular loss).
+ """
+
+ r_transposed = estimate_rotation(xyz0, xyz1, pconf, noise)
+ rotation = rot[:, :3, :3]
+ frob_sqr = tf.reduce_sum(tf.square(r_transposed - rotation), axis=[1, 2])
+ frob = tf.sqrt(frob_sqr)
+
+ return tf.reduce_mean(frob_sqr), \
+ 2.0 * tf.reduce_mean(tf.asin(tf.minimum(1.0, frob / (2 * math.sqrt(2)))))
+
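+# Note on the two values returned above: for rotations R and R_gt,
+# ||R - R_gt||_F = 2 * sqrt(2) * sin(theta / 2), where theta is the geodesic
+# (angular) error. The first term is therefore the mean squared chordal
+# distance, and the second recovers the mean angular error in radians via
+# theta = 2 * asin(frob / (2 * sqrt(2))).
+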
+
+def separation_loss(xyz, delta):
+ """Computes the separation loss.
+
+ Args:
+ xyz: [batch, num_kp, 3] Input keypoints.
+    delta: A separation threshold. Incurs 0 cost if the squared distance >= delta.
+
+ Returns:
+    The separation loss.
+ """
+
+ num_kp = tf.shape(xyz)[1]
+ t1 = tf.tile(xyz, [1, num_kp, 1])
+
+ t2 = tf.reshape(tf.tile(xyz, [1, 1, num_kp]), tf.shape(t1))
+ diffsq = tf.square(t1 - t2)
+
+ # -> [batch, num_kp ^ 2]
+ lensqr = tf.reduce_sum(diffsq, axis=2)
+
+ return (tf.reduce_sum(tf.maximum(-lensqr + delta, 0.0)) / tf.to_float(
+ num_kp * FLAGS.batch_size * 2))
+
+
+def consistency_loss(uv0, uv1, pconf):
+ """Computes multi-view consistency loss between two sets of keypoints.
+
+ Args:
+ uv0: [batch, num_kp, 2] The first set of keypoint 2D coordinates.
+ uv1: [batch, num_kp, 2] The second set of keypoint 2D coordinates.
+ pconf: [batch, num_kp] The weights used to compute the rotation estimate.
+
+ Returns:
+ The consistency loss.
+ """
+
+ # [batch, num_kp, 2]
+ wd = tf.square(uv0 - uv1) * tf.expand_dims(pconf, 2)
+ wd = tf.reduce_sum(wd, axis=[1, 2])
+ return tf.reduce_mean(wd)
+
+
+def variance_loss(probmap, ranx, rany, uv):
+ """Computes the variance loss as part of Sillhouette consistency.
+
+ Args:
+ probmap: [batch, num_kp, h, w] The distribution map of keypoint locations.
+ ranx: X-axis meshgrid.
+ rany: Y-axis meshgrid.
+ uv: [batch, num_kp, 2] Keypoint locations (in NDC).
+
+ Returns:
+ The variance loss.
+ """
+
+ ran = tf.stack([ranx, rany], axis=2)
+
+ sh = tf.shape(ran)
+ # [batch, num_kp, vh, vw, 2]
+ ran = tf.reshape(ran, [1, 1, sh[0], sh[1], 2])
+
+ sh = tf.shape(uv)
+ uv = tf.reshape(uv, [sh[0], sh[1], 1, 1, 2])
+
+ diff = tf.reduce_sum(tf.square(uv - ran), axis=4)
+ diff *= probmap
+
+ return tf.reduce_mean(tf.reduce_sum(diff, axis=[2, 3]))
+
+
+def dilated_cnn(images, num_filters, is_training):
+ """Constructs a base dilated convolutional network.
+
+ Args:
+ images: [batch, h, w, 3] Input RGB images.
+ num_filters: The number of filters for all layers.
+ is_training: True if this function is called during training.
+
+ Returns:
+ Output of this dilated CNN.
+ """
+
+ net = images
+
+ with slim.arg_scope(
+ [slim.conv2d, slim.fully_connected],
+ normalizer_fn=slim.batch_norm,
+ activation_fn=lambda x: tf.nn.leaky_relu(x, alpha=0.1),
+ normalizer_params={"is_training": is_training}):
+ for i, r in enumerate([1, 1, 2, 4, 8, 16, 1, 2, 4, 8, 16, 1]):
+ net = slim.conv2d(net, num_filters, [3, 3], rate=r, scope="dconv%d" % i)
+
+ return net
+
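+# The dilation schedule above (rates 1, 1, 2, 4, 8, 16, 1, 2, 4, 8, 16, 1 with
+# 3x3 kernels, stride 1) sums to 64, so the stack's receptive field is
+# 1 + 2 * 64 = 129 pixels, which is enough to cover the whole 128 x 128 input
+# while keeping full spatial resolution for the per-pixel probability maps.
+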
+
+def orientation_network(images, num_filters, is_training):
+ """Constructs a network that infers the orientation of an object.
+
+ Args:
+ images: [batch, h, w, 3] Input RGB images.
+ num_filters: The number of filters for all layers.
+ is_training: True if this function is called during training.
+
+ Returns:
+ Output of the orientation network.
+ """
+
+ with tf.variable_scope("OrientationNetwork"):
+ net = dilated_cnn(images, num_filters, is_training)
+
+ modules = 2
+ prob = slim.conv2d(net, 2, [3, 3], rate=1, activation_fn=None)
+ prob = tf.transpose(prob, [0, 3, 1, 2])
+
+ prob = tf.reshape(prob, [-1, modules, vh * vw])
+ prob = tf.nn.softmax(prob)
+ ranx, rany = meshgrid(vh)
+
+ prob = tf.reshape(prob, [-1, 2, vh, vw])
+
+ sx = tf.reduce_sum(prob * ranx, axis=[2, 3])
+ sy = tf.reduce_sum(prob * rany, axis=[2, 3]) # -> batch x modules
+
+ out_xy = tf.reshape(tf.stack([sx, sy], -1), [-1, modules, 2])
+
+ return out_xy
+
+
+def keypoint_network(rgba,
+ num_filters,
+ num_kp,
+ is_training,
+ lr_gt=None,
+ anneal=1):
+ """Constructs our main keypoint network that predicts 3D keypoints.
+
+ Args:
+ rgba: [batch, h, w, 4] Input RGB images with alpha channel.
+ num_filters: The number of filters for all layers.
+ num_kp: The number of keypoints.
+ is_training: True if this function is called during training.
+ lr_gt: The groundtruth orientation flag used at the beginning of training.
+ Then we linearly anneal in the prediction.
+ anneal: A number between [0, 1] where 1 means using the ground-truth
+ orientation and 0 means using our estimate.
+
+ Returns:
+ uv: [batch, num_kp, 2] 2D locations of keypoints.
+ z: [batch, num_kp] The depth of keypoints.
+ orient: [batch, 2, 2] Two 2D coordinates that correspond to [1, 0, 0] and
+ [-1, 0, 0] in object space.
+    sill: The silhouette loss.
+ variance: The variance loss.
+ prob_viz: A visualization of all predicted keypoints.
+ prob_vizs: A list of visualizations of each keypoint.
+
+ """
+
+ images = rgba[:, :, :, :3]
+
+  # [batch, 2, 2]
+ orient = orientation_network(images, num_filters * 0.5, is_training)
+
+ # [batch, 1]
+ lr_estimated = tf.maximum(0.0, tf.sign(orient[:, 0, :1] - orient[:, 1, :1]))
+
+ if lr_gt is None:
+ lr = lr_estimated
+ else:
+ lr_gt = tf.maximum(0.0, tf.sign(lr_gt[:, :1]))
+ lr = tf.round(lr_gt * anneal + lr_estimated * (1 - anneal))
+
+ lrtiled = tf.tile(
+ tf.expand_dims(tf.expand_dims(lr, 1), 1),
+ [1, images.shape[1], images.shape[2], 1])
+
+ images = tf.concat([images, lrtiled], axis=3)
+
+ mask = rgba[:, :, :, 3]
+ mask = tf.cast(tf.greater(mask, tf.zeros_like(mask)), dtype=tf.float32)
+
+ net = dilated_cnn(images, num_filters, is_training)
+
+ # The probability distribution map.
+ prob = slim.conv2d(
+ net, num_kp, [3, 3], rate=1, scope="conv_xy", activation_fn=None)
+
+ # We added the fixed camera distance as a bias.
+ z = -30 + slim.conv2d(
+ net, num_kp, [3, 3], rate=1, scope="conv_z", activation_fn=None)
+
+ prob = tf.transpose(prob, [0, 3, 1, 2])
+ z = tf.transpose(z, [0, 3, 1, 2])
+
+ prob = tf.reshape(prob, [-1, num_kp, vh * vw])
+ prob = tf.nn.softmax(prob, name="softmax")
+
+ ranx, rany = meshgrid(vh)
+ prob = tf.reshape(prob, [-1, num_kp, vh, vw])
+
+ # These are for visualizing the distribution maps.
+ prob_viz = tf.expand_dims(tf.reduce_sum(prob, 1), 3)
+ prob_vizs = [tf.expand_dims(prob[:, i, :, :], 3) for i in range(num_kp)]
+
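+ # Soft-argmax: take the expected keypoint location under each probability
+ # map over the NDC meshgrid (ranx, rany).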
+ sx = tf.reduce_sum(prob * ranx, axis=[2, 3])
+ sy = tf.reduce_sum(prob * rany, axis=[2, 3]) # -> batch x num_kp
+
+ # [batch, num_kp]
+ sill = tf.reduce_sum(prob * tf.expand_dims(mask, 1), axis=[2, 3])
+ sill = tf.reduce_mean(-tf.log(sill + 1e-12))
+
+ z = tf.reduce_sum(prob * z, axis=[2, 3])
+ uv = tf.reshape(tf.stack([sx, sy], -1), [-1, num_kp, 2])
+
+ variance = variance_loss(prob, ranx, rany, uv)
+
+ return uv, z, orient, sill, variance, prob_viz, prob_vizs
+
+
+def model_fn(features, labels, mode, hparams):
+ """Returns model_fn for tf.estimator.Estimator."""
+
+ del labels
+
+ is_training = (mode == tf.estimator.ModeKeys.TRAIN)
+ t = Transformer(vw, vh, FLAGS.dset)
+
+ def func1(x):
+ return tf.transpose(tf.reshape(features[x], [-1, 4, 4]), [0, 2, 1])
+
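+ # mv[i] / mvi[i] hold the 4x4 model-view matrix (and its inverse) for each of
+ # the two views; they are stored row-major in the tfrecord and transposed here
+ # so that [batch, num_kp, 4] point rows can be right-multiplied by them.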
+ mv = [func1("mv%d" % i) for i in range(2)]
+ mvi = [func1("mvi%d" % i) for i in range(2)]
+
+ uvz = [None] * 2
+ uvz_proj = [None] * 2 # uvz coordinates projected onto the other view.
+ viz = [None] * 2
+ vizs = [None] * 2
+
+ loss_sill = 0
+ loss_variance = 0
+ loss_con = 0
+ loss_sep = 0
+ loss_lr = 0
+
+ for i in range(2):
+ with tf.variable_scope("KeypointNetwork", reuse=i > 0):
+ # anneal: 1 = using the ground-truth orientation, 0 = using our estimate.
+ anneal = tf.to_float(hparams.lr_anneal_end - tf.train.get_global_step())
+ anneal = tf.clip_by_value(
+ anneal / (hparams.lr_anneal_end - hparams.lr_anneal_start), 0.0, 1.0)
+
+ uv, z, orient, sill, variance, viz[i], vizs[i] = keypoint_network(
+ features["img%d" % i],
+ hparams.num_filters,
+ hparams.num_kp,
+ is_training,
+ lr_gt=features["lr%d" % i],
+ anneal=anneal)
+
+ # x-positive/negative axes (dominant direction).
+ xp_axis = tf.tile(
+ tf.constant([[[1.0, 0, 0, 1], [-1.0, 0, 0, 1]]]),
+ [tf.shape(orient)[0], 1, 1])
+
+ # [batch, 2, 4] = [batch, 2, 4] x [batch, 4, 4]
+ xp = tf.matmul(xp_axis, mv[i])
+
+ # [batch, 2, 3]
+ xp = t.project(xp)
+
+ loss_lr += tf.losses.mean_squared_error(orient[:, :, :2], xp[:, :, :2])
+ loss_variance += variance
+ loss_sill += sill
+
+ uv = tf.reshape(uv, [-1, hparams.num_kp, 2])
+ z = tf.reshape(z, [-1, hparams.num_kp, 1])
+
+ # [batch, num_kp, 3]
+ uvz[i] = tf.concat([uv, z], axis=2)
+
+ world_coords = tf.matmul(t.unproject(uvz[i]), mvi[i])
+
+ # [batch, num_kp, 3]
+ uvz_proj[i] = t.project(tf.matmul(world_coords, mv[1 - i]))
+
+ pconf = tf.ones(
+ [tf.shape(uv)[0], tf.shape(uv)[1]], dtype=tf.float32) / hparams.num_kp
+
+ for i in range(2):
+ loss_con += consistency_loss(uvz_proj[i][:, :, :2], uvz[1 - i][:, :, :2],
+ pconf)
+ loss_sep += separation_loss(
+ t.unproject(uvz[i])[:, :, :3], hparams.sep_delta)
+
+ chordal, angular = relative_pose_loss(
+ t.unproject(uvz[0])[:, :, :3],
+ t.unproject(uvz[1])[:, :, :3], tf.matmul(mvi[0], mv[1]), pconf,
+ hparams.noise)
+
+ loss = (
+ hparams.loss_pose * angular +
+ hparams.loss_con * loss_con +
+ hparams.loss_sep * loss_sep +
+ hparams.loss_sill * loss_sill +
+ hparams.loss_lr * loss_lr +
+ hparams.loss_variance * loss_variance
+ )
+
+ def touint8(img):
+ return tf.cast(img * 255.0, tf.uint8)
+
+ with tf.variable_scope("output"):
+ tf.summary.image("0_img0", touint8(features["img0"][:, :, :, :3]))
+ tf.summary.image("1_combined", viz[0])
+ for i in range(hparams.num_kp):
+ tf.summary.image("2_f%02d" % i, vizs[0][i])
+
+ with tf.variable_scope("stats"):
+ tf.summary.scalar("anneal", anneal)
+ tf.summary.scalar("closs", loss_con)
+ tf.summary.scalar("seploss", loss_sep)
+ tf.summary.scalar("angular", angular)
+ tf.summary.scalar("chordal", chordal)
+ tf.summary.scalar("lrloss", loss_lr)
+ tf.summary.scalar("sill", loss_sill)
+ tf.summary.scalar("vloss", loss_variance)
+
+ return {
+ "loss": loss,
+ "predictions": {
+ "img0": features["img0"],
+ "img1": features["img1"],
+ "uvz0": uvz[0],
+ "uvz1": uvz[1]
+ },
+ "eval_metric_ops": {
+ "closs": tf.metrics.mean(loss_con),
+ "angular_loss": tf.metrics.mean(angular),
+ "chordal_loss": tf.metrics.mean(chordal),
+ }
+ }
+
+
+def predict(input_folder, hparams):
+ """Predicts keypoints on all images in input_folder."""
+
+ cols = plt.cm.get_cmap("rainbow")(
+ np.linspace(0, 1.0, hparams.num_kp))[:, :4]
+
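+ # Inference expects 128x128 RGBA inputs; 3-channel images get an all-ones
+ # alpha channel appended before being fed to the network.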
+ img = tf.placeholder(tf.float32, shape=(1, 128, 128, 4))
+
+ with tf.variable_scope("KeypointNetwork"):
+ ret = keypoint_network(
+ img, hparams.num_filters, hparams.num_kp, False)
+
+ uv = tf.reshape(ret[0], [-1, hparams.num_kp, 2])
+ z = tf.reshape(ret[1], [-1, hparams.num_kp, 1])
+ uvz = tf.concat([uv, z], axis=2)
+
+ sess = tf.Session()
+ saver = tf.train.Saver()
+ ckpt = tf.train.get_checkpoint_state(FLAGS.model_dir)
+
+ print("loading model: ", ckpt.model_checkpoint_path)
+ saver.restore(sess, ckpt.model_checkpoint_path)
+
+ files = [x for x in os.listdir(input_folder)
+ if x[-3:] in ["jpg", "png"]]
+
+ output_folder = os.path.join(input_folder, "output")
+ if not os.path.exists(output_folder):
+ os.mkdir(output_folder)
+
+ for f in files:
+ orig = misc.imread(os.path.join(input_folder, f)).astype(float) / 255
+ if orig.shape[2] == 3:
+ orig = np.concatenate((orig, np.ones_like(orig[:, :, :1])), axis=2)
+
+ uv_ret = sess.run(uvz, feed_dict={img: np.expand_dims(orig, 0)})
+
+ utils.draw_ndc_points(orig, uv_ret.reshape(hparams.num_kp, 3), cols)
+ misc.imsave(os.path.join(output_folder, f), orig)
+
+
+def _default_hparams():
+ """Returns default or overridden user-specified hyperparameters."""
+
+ hparams = tf.contrib.training.HParams(
+ num_filters=64, # Number of filters.
+ num_kp=10, # Number of keypoints.
+
+ loss_pose=0.2, # Pose Loss.
+ loss_con=1.0, # Multiview consistency Loss.
+ loss_sep=1.0, # Separation Loss.
+ loss_sill=1.0, # Silhouette Loss.
+ loss_lr=1.0, # Orientation Loss.
+ loss_variance=0.5, # Variance Loss (part of Silhouette loss).
+
+ sep_delta=0.05, # Separation threshold.
+ noise=0.1, # Noise added during estimating rotation.
+
+ learning_rate=1.0e-3,
+ lr_anneal_start=30000, # When to anneal in the orientation prediction.
+ lr_anneal_end=60000, # When to use the prediction completely.
+ )
+ if FLAGS.hparams:
+ hparams = hparams.parse(FLAGS.hparams)
+ return hparams
+
+
+def main(argv):
+ del argv
+
+ hparams = _default_hparams()
+
+ if FLAGS.predict:
+ predict(FLAGS.input, hparams)
+ else:
+ utils.train_and_eval(
+ model_dir=FLAGS.model_dir,
+ model_fn=model_fn,
+ input_fn=create_input_fn,
+ hparams=hparams,
+ steps=FLAGS.steps,
+ batch_size=FLAGS.batch_size,
+ save_checkpoints_secs=600,
+ eval_throttle_secs=1800,
+ eval_steps=5,
+ sync_replicas=FLAGS.sync_replicas,
+ )
+
+
+if __name__ == "__main__":
+ sys.excepthook = utils.colored_hook(
+ os.path.dirname(os.path.realpath(__file__)))
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/keypointnet/tools/gen_tfrecords.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/keypointnet/tools/gen_tfrecords.py"
new file mode 100644
index 00000000..2f973b7f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/keypointnet/tools/gen_tfrecords.py"
@@ -0,0 +1,99 @@
+# Copyright 2018 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# =============================================================================
+"""An example script to generate a tfrecord file from a folder containing the
+renderings.
+
+Example usage:
+ python gen_tfrecords.py --input=FOLDER --output=output.tfrecord
+
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+import os
+from scipy import misc
+import tensorflow as tf
+
+FLAGS = tf.app.flags.FLAGS
+tf.app.flags.DEFINE_string("input", "", "Input folder containing images")
+tf.app.flags.DEFINE_string("output", "", "Output tfrecord.")
+
+
+def get_matrix(lines):
+ return np.array([[float(y) for y in x.strip().split(" ")] for x in lines])
+
+
+def read_model_view_matrices(filename):
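+ # Each %06d.txt produced by tools/render.py holds two stacked 4x4 model-view
+ # matrices (one per rendered view), written as eight lines of four floats.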
+ with open(filename, "r") as f:
+ lines = f.readlines()
+ return get_matrix(lines[:4]), get_matrix(lines[4:])
+
+
+def bytes_feature(values):
+ return tf.train.Feature(bytes_list=tf.train.BytesList(value=[values]))
+
+
+def generate():
+ with tf.python_io.TFRecordWriter(FLAGS.output) as tfrecord_writer:
+ with tf.Graph().as_default():
+ im0 = tf.placeholder(dtype=tf.uint8)
+ im1 = tf.placeholder(dtype=tf.uint8)
+ encoded0 = tf.image.encode_png(im0)
+ encoded1 = tf.image.encode_png(im1)
+
+ with tf.Session() as sess:
+ count = 0
+ indir = FLAGS.input + "/"
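+ # Images come in view pairs: %06d.png for 2*count and 2*count + 1 share the
+ # model-view matrices stored in %06d.txt (see tools/render.py).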
+ while tf.gfile.Exists(indir + "%06d.txt" % count):
+ print("saving %06d" % count)
+ image0 = misc.imread(indir + "%06d.png" % (count * 2))
+ image1 = misc.imread(indir + "%06d.png" % (count * 2 + 1))
+
+ mat0, mat1 = read_model_view_matrices(indir + "%06d.txt" % count)
+
+ mati0 = np.linalg.inv(mat0).flatten()
+ mati1 = np.linalg.inv(mat1).flatten()
+ mat0 = mat0.flatten()
+ mat1 = mat1.flatten()
+
+ st0, st1 = sess.run([encoded0, encoded1],
+ feed_dict={im0: image0, im1: image1})
+
+ example = tf.train.Example(features=tf.train.Features(feature={
+ 'img0': bytes_feature(st0),
+ 'img1': bytes_feature(st1),
+ 'mv0': tf.train.Feature(
+ float_list=tf.train.FloatList(value=mat0)),
+ 'mvi0': tf.train.Feature(
+ float_list=tf.train.FloatList(value=mati0)),
+ 'mv1': tf.train.Feature(
+ float_list=tf.train.FloatList(value=mat1)),
+ 'mvi1': tf.train.Feature(
+ float_list=tf.train.FloatList(value=mati1)),
+ }))
+
+ tfrecord_writer.write(example.SerializeToString())
+ count += 1
+
+
+def main(argv):
+ del argv
+ generate()
+
+
+if __name__ == "__main__":
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/keypointnet/tools/render.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/keypointnet/tools/render.py"
new file mode 100644
index 00000000..3a887267
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/keypointnet/tools/render.py"
@@ -0,0 +1,310 @@
+# Copyright 2018 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# =============================================================================
+"""Script to render object views from ShapeNet obj models.
+
+Example usage:
+ blender -b --python render.py -- -m model.obj -o output/ -s 128 -n 120 -fov 5
+
+"""
+from __future__ import print_function
+
+import argparse
+import itertools
+import json
+from math import pi
+import os
+import random
+import sys
+from mathutils import Vector
+import math
+import mathutils
+import time
+import copy
+
+import bpy
+
+sys.path.append(os.path.dirname(__file__))
+
+BG_LUMINANCE = 0
+
+
+def look_at(obj_camera, point):
+ loc_camera = obj_camera.location
+ direction = point - loc_camera
+ # point the camera's '-Z' and use its 'Y' as up
+ rot_quat = direction.to_track_quat('-Z', 'Y')
+
+ obj_camera.rotation_euler = rot_quat.to_euler()
+
+
+def roll_camera(obj_camera):
+ roll_rotate = mathutils.Euler(
+ (0, 0, random.random() * math.pi - math.pi * 0.5), 'XYZ')
+ obj_camera.rotation_euler = (obj_camera.rotation_euler.to_matrix() *
+ roll_rotate.to_matrix()).to_euler()
+
+
+def norm(x):
+ return math.sqrt(x[0] * x[0] + x[1] * x[1] + x[2] * x[2])
+
+
+def normalize(x):
+ n = norm(x)
+ x[0] /= n
+ x[1] /= n
+ x[2] /= n
+
+
+def random_top_sphere():
+ xyz = [random.normalvariate(0, 1) for x in range(3)]
+ normalize(xyz)
+
+ if xyz[2] < 0:
+ xyz[2] *= -1
+ return xyz
+
+
+def perturb_sphere(loc, size):
+ while True:
+ xyz = [random.normalvariate(0, 1) for x in range(3)]
+ normalize(xyz)
+
+ nloc = [loc[i] + xyz[i] * random.random() * size for i in range(3)]
+ normalize(nloc)
+
+ if nloc[2] >= 0:
+ return nloc
+
+
+def perturb(loc, size):
+ while True:
+ nloc = [loc[i] + random.random() * size * 2 - size for i in range(3)]
+ if nloc[2] >= 0:
+ return nloc
+
+ bpy.ops.object.mode_set()
+
+
+def delete_all_objects():
+ bpy.ops.object.select_by_type(type="MESH")
+ bpy.ops.object.delete(use_global=False)
+
+
+def set_scene(render_size, fov, alpha=False):
+ """Set up default scene properties."""
+ delete_all_objects()
+
+ cam = bpy.data.cameras["Camera"]
+ cam.angle = fov * pi / 180
+
+ light = bpy.data.objects["Lamp"]
+ light.location = (0, 0, 1)
+ look_at(light, Vector((0.0, 0, 0)))
+ bpy.data.lamps['Lamp'].type = "HEMI"
+ bpy.data.lamps['Lamp'].energy = 1
+ bpy.data.lamps['Lamp'].use_specular = False
+ bpy.data.lamps['Lamp'].use_diffuse = True
+
+ bpy.context.scene.world.horizon_color = (
+ BG_LUMINANCE, BG_LUMINANCE, BG_LUMINANCE)
+
+ bpy.context.scene.render.resolution_x = render_size
+ bpy.context.scene.render.resolution_y = render_size
+ bpy.context.scene.render.resolution_percentage = 100
+
+ bpy.context.scene.render.use_antialiasing = True
+ bpy.context.scene.render.antialiasing_samples = '5'
+
+
+def get_modelview_matrix():
+ cam = bpy.data.objects["Camera"]
+ bpy.context.scene.update()
+
+ # when apply to object with CV coordinate i.e. to_blender * obj
+ # this gives object in blender coordinate
+ to_blender = mathutils.Matrix(
+ ((1., 0., 0., 0.),
+ (0., 0., -1., 0.),
+ (0., 1., 0., 0.),
+ (0., 0., 0., 1.)))
+ return cam.matrix_world.inverted() * to_blender
+
+
+def print_matrix(f, mat):
+ for i in range(4):
+ for j in range(4):
+ f.write("%lf " % mat[i][j])
+ f.write("\n")
+
+
+def mul(loc, v):
+ return [loc[i] * v for i in range(3)]
+
+
+def merge_all():
+ bpy.ops.object.select_by_type(type="MESH")
+ bpy.context.scene.objects.active = bpy.context.selected_objects[0]
+ bpy.ops.object.join()
+ obj = bpy.context.scene.objects.active
+ bpy.ops.object.origin_set(type="ORIGIN_CENTER_OF_MASS")
+ return obj
+
+
+def insert_frame(obj, frame_number):
+ obj.keyframe_insert(data_path="location", frame=frame_number)
+ obj.keyframe_insert(data_path="rotation_euler", frame=frame_number)
+ obj.keyframe_insert(data_path="scale", frame=frame_number)
+
+
+def render(output_prefix):
+ bpy.context.scene.render.filepath = output_prefix
+ bpy.context.scene.render.image_settings.file_format = "PNG"
+ bpy.context.scene.render.alpha_mode = "TRANSPARENT"
+ bpy.context.scene.render.image_settings.color_mode = "RGBA"
+ bpy.ops.render.render(write_still=True, animation=True)
+
+
+def render_obj(
+ obj_fn, save_dir, n, perturb_size, rotate=False, roll=False, scale=1.0):
+
+ # Load object.
+ bpy.ops.import_scene.obj(filepath=obj_fn)
+ cur_obj = merge_all()
+
+ scale = 2.0 / max(cur_obj.dimensions) * scale
+ cur_obj.scale = (scale, scale, scale)
+ # Using the center of mass as the origin doesn't really work, because Blender
+ # assumes the object is a solid shell. This seems to generate better-looking
+ # rotations.
+
+ bpy.ops.object.origin_set(type='ORIGIN_GEOMETRY', center='BOUNDS')
+
+ # bpy.ops.mesh.primitive_cube_add(location=(0, 0, 1))
+ # cube = bpy.data.objects["Cube"]
+ # cube.scale = (0.2, 0.2, 0.2)
+
+ for polygon in cur_obj.data.polygons:
+ polygon.use_smooth = True
+
+ bpy.ops.object.select_all(action="DESELECT")
+
+ camera = bpy.data.objects["Camera"]
+
+ # os.system("mkdir " + save_dir)
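+ # Each index i produces a pair of frames (2*i and 2*i + 1) rendered from two
+ # camera poses; both model-view matrices are written to save_dir/%06d.txt.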
+ for i in range(n):
+ fo = open(save_dir + "/%06d.txt" % i, "w")
+ d = 30
+ shift = 0.2
+ if rotate:
+ t = 1.0 * i / (n-1) * 2 * math.pi
+ loc = [math.sin(t), math.cos(t), 1]
+
+ normalize(loc)
+ camera.location = mul(loc, d)
+ look_at(camera, Vector((0.0, 0, 0)))
+
+ print_matrix(fo, get_modelview_matrix())
+ print_matrix(fo, get_modelview_matrix())
+
+ insert_frame(camera, 2 * i)
+ insert_frame(camera, 2 * i + 1)
+
+ else:
+ loc = random_top_sphere()
+
+ camera.location = mul(loc, d)
+ look_at(camera, Vector((0.0, 0, 0)))
+
+ if roll:
+ roll_camera(camera)
+ camera.location = perturb(mul(loc, d), shift)
+
+ print_matrix(fo, get_modelview_matrix())
+ insert_frame(camera, 2 * i)
+
+ if perturb_size > 0:
+ loc = perturb_sphere(loc, perturb_size)
+ else:
+ loc = random_top_sphere()
+
+ camera.location = mul(loc, d)
+ look_at(camera, Vector((0.0, 0, 0)))
+ if roll:
+ roll_camera(camera)
+ camera.location = perturb(mul(loc, d), shift)
+
+ print_matrix(fo, get_modelview_matrix())
+ insert_frame(camera, 2 * i + 1)
+
+ fo.close()
+
+ # Create a bunch of views of the object
+ bpy.context.scene.frame_start = 0
+ bpy.context.scene.frame_end = 2 * n - 1
+
+ stem = os.path.join(save_dir, '######')
+ render(stem)
+
+
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument('-m', '--model', dest='model',
+ required=True,
+ help='Path to model obj file.')
+ parser.add_argument('-o', '--output_dir', dest='output_dir',
+ required=True,
+ help='Where to output files.')
+ parser.add_argument('-s', '--output_size', dest='output_size',
+ required=True,
+ help='Width and height of the square output in pixels, e.g. 128.')
+ parser.add_argument('-n', '--num_frames', dest='n', type=int,
+ required=True,
+ help='Number of frames to generate per clip.')
+
+ parser.add_argument('-scale', '--scale', dest='scale', type=float,
+ help='object scaling', default=1)
+
+ parser.add_argument('-perturb', '--perturb', dest='perturb', type=float,
+ help='sphere perturbation', default=0)
+
+ parser.add_argument('-rotate', '--rotate', dest='rotate', action='store_true',
+ help='render rotating test set')
+
+ parser.add_argument('-roll', '--roll', dest='roll', action='store_true',
+ help='add roll')
+
+ parser.add_argument(
+ '-fov', '--fov', dest='fov', type=float, required=True,
+ help='field of view')
+
+ if '--' not in sys.argv:
+ parser.print_help()
+ exit(1)
+
+ argv = sys.argv[sys.argv.index('--') + 1:]
+ args, _ = parser.parse_known_args(argv)
+
+ random.seed(args.model + str(time.time()) + str(os.getpid()))
+ # random.seed(0)
+
+ set_scene(int(args.output_size), args.fov)
+ render_obj(
+ args.model, args.output_dir, args.n, args.perturb, args.rotate,
+ args.roll, args.scale)
+ exit()
+
+
+if __name__ == '__main__':
+ main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/keypointnet/utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/keypointnet/utils.py"
new file mode 100644
index 00000000..148b7a3e
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/keypointnet/utils.py"
@@ -0,0 +1,307 @@
+# Copyright 2018 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# =============================================================================
+"""Utility functions for KeypointNet.
+
+These are helper / TensorFlow-related functions. The actual implementation and
+algorithm are in main.py.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import math
+import numpy as np
+import os
+import re
+import tensorflow as tf
+import tensorflow.contrib.slim as slim
+import time
+import traceback
+
+
+class TrainingHook(tf.train.SessionRunHook):
+ """A utility for displaying training information such as the loss, percent
+ completed, estimated finish date and time."""
+
+ def __init__(self, steps):
+ self.steps = steps
+
+ self.last_time = time.time()
+ self.last_est = self.last_time
+
+ self.eta_interval = int(math.ceil(0.1 * self.steps))
+ self.current_interval = 0
+
+ def before_run(self, run_context):
+ graph = tf.get_default_graph()
+ return tf.train.SessionRunArgs(
+ {"loss": graph.get_collection("total_loss")[0]})
+
+ def after_run(self, run_context, run_values):
+ step = run_context.session.run(tf.train.get_global_step())
+ now = time.time()
+
+ if self.current_interval < self.eta_interval:
+ self.duration = now - self.last_est
+ self.current_interval += 1
+ if step % self.eta_interval == 0:
+ self.duration = now - self.last_est
+ self.last_est = now
+
+ eta_time = float(self.steps - step) / self.current_interval * \
+ self.duration
+ m, s = divmod(eta_time, 60)
+ h, m = divmod(m, 60)
+ eta = "%d:%02d:%02d" % (h, m, s)
+
+ print("%.2f%% (%d/%d): %.3e t %.3f @ %s (%s)" % (
+ step * 100.0 / self.steps,
+ step,
+ self.steps,
+ run_values.results["loss"],
+ now - self.last_time,
+ time.strftime("%a %d %H:%M:%S", time.localtime(time.time() + eta_time)),
+ eta))
+
+ self.last_time = now
+
+
+def standard_model_fn(
+ func, steps, run_config=None, sync_replicas=0, optimizer_fn=None):
+ """Creates model_fn for tf.Estimator.
+
+ Args:
+ func: A model_fn with prototype model_fn(features, labels, mode, hparams).
+ steps: Training steps.
+ run_config: tf.estimator.RunConfig (usually passed in from TF_CONFIG).
+ sync_replicas: The number of replicas used to compute gradient for
+ synchronous training.
+ optimizer_fn: The optimizer instance to use. Defaults to Adam.
+
+ Returns:
+ model_fn for tf.estimator.Estimator.
+ """
+
+ def fn(features, labels, mode, params):
+ """Returns model_fn for tf.estimator.Estimator."""
+
+ is_training = (mode == tf.estimator.ModeKeys.TRAIN)
+ ret = func(features, labels, mode, params)
+
+ tf.add_to_collection("total_loss", ret["loss"])
+ train_op = None
+
+ training_hooks = []
+ if is_training:
+ training_hooks.append(TrainingHook(steps))
+
+ if optimizer_fn is None:
+ optimizer = tf.train.AdamOptimizer(params.learning_rate)
+ else:
+ optimizer = optimizer_fn
+
+ if run_config is not None and run_config.num_worker_replicas > 1:
+ sr = sync_replicas
+ if sr <= 0:
+ sr = run_config.num_worker_replicas
+
+ optimizer = tf.train.SyncReplicasOptimizer(
+ optimizer,
+ replicas_to_aggregate=sr,
+ total_num_replicas=run_config.num_worker_replicas)
+
+ training_hooks.append(
+ optimizer.make_session_run_hook(
+ run_config.is_chief, num_tokens=run_config.num_worker_replicas))
+
+ optimizer = tf.contrib.estimator.clip_gradients_by_norm(optimizer, 5)
+ train_op = slim.learning.create_train_op(ret["loss"], optimizer)
+
+ if "eval_metric_ops" not in ret:
+ ret["eval_metric_ops"] = {}
+
+ return tf.estimator.EstimatorSpec(
+ mode=mode,
+ predictions=ret["predictions"],
+ loss=ret["loss"],
+ train_op=train_op,
+ eval_metric_ops=ret["eval_metric_ops"],
+ training_hooks=training_hooks)
+ return fn
+
+
+def train_and_eval(
+ model_dir,
+ steps,
+ batch_size,
+ model_fn,
+ input_fn,
+ hparams,
+ keep_checkpoint_every_n_hours=0.5,
+ save_checkpoints_secs=180,
+ save_summary_steps=50,
+ eval_steps=20,
+ eval_start_delay_secs=10,
+ eval_throttle_secs=300,
+ sync_replicas=0):
+ """Trains and evaluates our model. Supports local and distributed training.
+
+ Args:
+ model_dir: The output directory for trained parameters, checkpoints, etc.
+ steps: Training steps.
+ batch_size: Batch size.
+ model_fn: A func with prototype model_fn(features, labels, mode, hparams).
+ input_fn: An input function for the tf.estimator.Estimator.
+ hparams: tf.HParams containing a set of hyperparameters.
+ keep_checkpoint_every_n_hours: Number of hours between each checkpoint
+ to be saved.
+ save_checkpoints_secs: Save checkpoints every this many seconds.
+ save_summary_steps: Save summaries every this many steps.
+ eval_steps: Number of steps to evaluate model.
+ eval_start_delay_secs: Start evaluating after waiting for this many seconds.
+ eval_throttle_secs: Do not re-evaluate unless the last evaluation was
+ started at least this many seconds ago.
+ sync_replicas: Number of synchronous replicas for distributed training.
+
+ Returns:
+ None
+ """
+
+ run_config = tf.estimator.RunConfig(
+ keep_checkpoint_every_n_hours=keep_checkpoint_every_n_hours,
+ save_checkpoints_secs=save_checkpoints_secs,
+ save_summary_steps=save_summary_steps)
+
+ estimator = tf.estimator.Estimator(
+ model_dir=model_dir,
+ model_fn=standard_model_fn(
+ model_fn,
+ steps,
+ run_config,
+ sync_replicas=sync_replicas),
+ params=hparams, config=run_config)
+
+ train_spec = tf.estimator.TrainSpec(
+ input_fn=input_fn(split="train", batch_size=batch_size),
+ max_steps=steps)
+
+ eval_spec = tf.estimator.EvalSpec(
+ input_fn=input_fn(split="validation", batch_size=batch_size),
+ steps=eval_steps,
+ start_delay_secs=eval_start_delay_secs,
+ throttle_secs=eval_throttle_secs)
+
+ tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
+
+
+def draw_circle(rgb, u, v, col, r):
+ """Draws a simple anti-aliasing circle in-place.
+
+ Args:
+ rgb: Input image to be modified.
+ u: Horizontal coordinate.
+ v: Vertical coordinate.
+ col: Color.
+ r: Radius.
+ """
+
+ ir = int(math.ceil(r))
+ for i in range(-ir-1, ir+2):
+ for j in range(-ir-1, ir+2):
+ nu = int(round(u + i))
+ nv = int(round(v + j))
+ if nu < 0 or nu >= rgb.shape[1] or nv < 0 or nv >= rgb.shape[0]:
+ continue
+
+ du = abs(nu - u)
+ dv = abs(nv - v)
+
+ # need sqrt to keep scale
+ t = math.sqrt(du * du + dv * dv) - math.sqrt(r * r)
+ if t < 0:
+ rgb[nv, nu, :] = col
+ else:
+ t = 1 - t
+ if t > 0:
+ # t = t ** 0.3
+ rgb[nv, nu, :] = col * t + rgb[nv, nu, :] * (1-t)
+
+
+def draw_ndc_points(rgb, xy, cols):
+ """Draws keypoints onto an input image.
+
+ Args:
+ rgb: Input image to be modified.
+ xy: [n x 2] matrix of 2D locations.
+ cols: A list of colors for the keypoints.
+ """
+
+ vh, vw = rgb.shape[0], rgb.shape[1]
+
+ for j in range(len(cols)):
+ x, y = xy[j, :2]
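+ # Map NDC coordinates in [-1, 1] to pixel coordinates; the image y axis
+ # points down, hence the flip.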
+ x = (min(max(x, -1), 1) * vw / 2 + vw / 2) - 0.5
+ y = vh - 0.5 - (min(max(y, -1), 1) * vh / 2 + vh / 2)
+
+ x = int(round(x))
+ y = int(round(y))
+ if x < 0 or y < 0 or x >= vw or y >= vh:
+ continue
+
+ rad = 1.5
+ rad *= rgb.shape[0] / 128.0
+ draw_circle(rgb, x, y, np.array([0.0, 0.0, 0.0, 1.0]), rad * 1.5)
+ draw_circle(rgb, x, y, cols[j], rad)
+
+
+def colored_hook(home_dir):
+ """Colorizes python's error message.
+
+ Args:
+ home_dir: directory where code resides (to highlight your own files).
+ Returns:
+ The traceback hook.
+ """
+
+ def hook(type_, value, tb):
+ def colorize(text, color, own=0):
+ """Returns colorized text."""
+ endcolor = "\x1b[0m"
+ codes = {
+ "green": "\x1b[0;32m",
+ "green_own": "\x1b[1;32;40m",
+ "red": "\x1b[0;31m",
+ "red_own": "\x1b[1;31m",
+ "yellow": "\x1b[0;33m",
+ "yellow_own": "\x1b[1;33m",
+ "black": "\x1b[0;90m",
+ "black_own": "\x1b[1;90m",
+ "cyan": "\033[1;36m",
+ }
+ return codes[color + ("_own" if own else "")] + text + endcolor
+
+ for filename, line_num, func, text in traceback.extract_tb(tb):
+ basename = os.path.basename(filename)
+ own = (home_dir in filename) or ("/" not in filename)
+
+ print(colorize("\"" + basename + '"', "green", own) + " in " + func)
+ print("%s: %s" % (
+ colorize("%5d" % line_num, "red", own),
+ colorize(text, "yellow", own)))
+ print(" %s" % colorize(filename, "black", own))
+
+ print(colorize("%s: %s" % (type_.__name__, value), "cyan"))
+ return hook
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/.gitignore" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/.gitignore"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/BUILD" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/BUILD"
new file mode 100644
index 00000000..629c9a06
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/BUILD"
@@ -0,0 +1,33 @@
+# Learning to Optimize Learning (LOL)
+
+package(default_visibility = ["//visibility:public"])
+
+# Libraries
+# =========
+
+py_library(
+ name = "metaopt",
+ srcs = ["metaopt.py"],
+ deps = [
+ "//learned_optimizer/problems:datasets",
+ "//learned_optimizer/problems:problem_generator",
+ ],
+)
+
+# Binaries
+# ========
+py_binary(
+ name = "metarun",
+ srcs = ["metarun.py"],
+ deps = [
+ ":metaopt",
+ "//learned_optimizer/optimizer:coordinatewise_rnn",
+ "//learned_optimizer/optimizer:global_learning_rate",
+ "//learned_optimizer/optimizer:hierarchical_rnn",
+ "//learned_optimizer/optimizer:learning_rate_schedule",
+ "//learned_optimizer/optimizer:trainable_adam",
+ "//learned_optimizer/problems:problem_sets",
+ "//learned_optimizer/problems:problem_spec",
+ ],
+)
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/README.md"
new file mode 100644
index 00000000..5ddab1f6
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/README.md"
@@ -0,0 +1,42 @@
+# Learned Optimizer
+
+Code for [Learned Optimizers that Scale and Generalize](https://arxiv.org/abs/1703.04813).
+
+## Requirements
+
+* Bazel ([install](https://bazel.build/versions/master/docs/install.html))
+* TensorFlow >= v1.3
+
+## Training a Learned Optimizer
+
+## Code Overview
+In the top-level directory, ```metaopt.py``` contains the code to train and test a learned optimizer. ```metarun.py``` packages the actual training procedure into a
+single file, defining and exposing many flags to tune the procedure, from selecting the optimizer type and problem set to more fine-grained hyperparameter settings.
+There is no testing binary; testing can be done ad-hoc via ```metaopt.test_optimizer``` by passing an optimizer object and a directory with a checkpoint.
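+
+A minimal sketch of such an ad hoc test (```opt``` and ```problem``` are assumed to be an
+already-constructed optimizer and a ```problem_generator.Problem```; building them depends on
+modules not shown here):
+
+```python
+from learned_optimizer import metaopt
+
+# Run `opt` on `problem` for 1000 steps, restoring the meta-trained optimizer
+# parameters from the latest checkpoint under /tmp/lol/ (the default train_dir).
+objective_values, final_params, records = metaopt.test_optimizer(
+    opt,                 # trainable optimizer (or any tf.train.Optimizer)
+    problem,             # the optimization problem to solve
+    num_iter=1000,       # number of optimization steps to run
+    logdir="/tmp/lol/",  # directory containing the optimizer checkpoint
+    record_every=100)    # store parameters/gradients every 100 iterations
+```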
+
+The ```optimizer``` directory contains a base ```trainable_optimizer.py``` class and a number of extensions, including the ```hierarchical_rnn``` optimizer used in
+the paper, a ```coordinatewise_rnn``` optimizer that more closely matches previous work, and a number of simpler optimizers to demonstrate the basic mechanics of
+a learnable optimizer.
+
+The ```problems``` directory contains the code to build the problems that were used in the meta-training set.
+
+### Binaries
+```metarun.py```: meta-training of a learned optimizer
+
+### Command-Line Flags
+The flags most relevant to meta-training are defined in ```metarun.py```. The default values will meta-train a HierarchicalRNN optimizer with the hyperparameter
+settings used in the paper.
+
+### Using a Learned Optimizer as a Black Box
+The ```trainable_optimizer``` inherits from ```tf.train.Optimizer```, so a properly instantiated version can be used to train any model in any API that accepts
+this class. There are just two caveats:
+
+1. If using the Hierarchical RNN optimizer, the ```apply_gradients``` return type must be changed (see the inline comments for exactly what must be removed).
+
+2. Care must be taken to restore the optimizer's variables without overriding them. Optimizer variables should be loaded manually from a pretrained checkpoint
+using a ```tf.train.Saver``` restricted to only the optimizer variables. Then, when constructing the session, ensure that any automatic variable initialization does not
+re-initialize the loaded optimizer variables (a minimal sketch follows below).
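+
+This mirrors the pattern used in ```metaopt.test_optimizer```; the ```OPTIMIZER_SCOPE``` name and
+the checkpoint path below are assumptions to adapt to your own setup:
+
+```python
+import tensorflow as tf
+
+OPTIMIZER_SCOPE = "LOL"  # variable scope the optimizer was constructed under
+
+# Restore only the optimizer's variables from the meta-training checkpoint.
+opt_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=OPTIMIZER_SCOPE)
+saver = tf.train.Saver(var_list=opt_vars)
+
+# Initialize everything else *except* the restored optimizer variables.
+other_vars = list(set(tf.global_variables()) - set(opt_vars))
+
+with tf.Session() as sess:
+  saver.restore(sess, tf.train.latest_checkpoint("/path/to/optimizer/checkpoints"))
+  sess.run(tf.variables_initializer(other_vars))
+```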
+
+## Contact for Issues
+
+* Olga Wichrowska (@olganw), Niru Maheswaranathan (@nirum)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/metaopt.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/metaopt.py"
new file mode 100644
index 00000000..62c06272
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/metaopt.py"
@@ -0,0 +1,639 @@
+# Copyright 2017 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Helper utilities for training and testing optimizers."""
+
+from collections import defaultdict
+import random
+import sys
+import time
+
+import numpy as np
+from six.moves import xrange
+import tensorflow as tf
+
+from learned_optimizer.optimizer import trainable_optimizer
+from learned_optimizer.optimizer import utils
+from learned_optimizer.problems import datasets
+from learned_optimizer.problems import problem_generator
+
+tf.app.flags.DEFINE_integer("ps_tasks", 0,
+ """Number of tasks in the ps job.
+ If 0 no ps job is used.""")
+tf.app.flags.DEFINE_float("nan_l2_reg", 1e-2,
+ """Strength of l2-reg when NaNs are encountered.""")
+tf.app.flags.DEFINE_float("l2_reg", 0.,
+ """Lambda value for parameter regularization.""")
+# Default is 0.9
+tf.app.flags.DEFINE_float("rms_decay", 0.9,
+ """Decay value for the RMSProp metaoptimizer.""")
+# tf.train.RMSPropOptimizer's own default epsilon is 1e-10; a smaller value is used here.
+tf.app.flags.DEFINE_float("rms_epsilon", 1e-20,
+ """Epsilon value for the RMSProp metaoptimizer.""")
+tf.app.flags.DEFINE_boolean("set_profiling", False,
+ """Enable memory usage and computation time """
+ """tracing for tensorflow nodes (available in """
+ """TensorBoard).""")
+tf.app.flags.DEFINE_boolean("reset_rnn_params", True,
+ """Reset the parameters of the optimizer
+ from one meta-iteration to the next.""")
+
+FLAGS = tf.app.flags.FLAGS
+OPTIMIZER_SCOPE = "LOL"
+OPT_SUM_COLLECTION = "LOL_summaries"
+
+
+def sigmoid_weights(n, slope=0.1, offset=5):
+ """Generates a sigmoid, scaled to sum to 1.
+
+ This function is used to generate weights that serve to mask out
+ the early objective values of an optimization problem such that
+ initial variation in the objective is phased out (hence the sigmoid
+ starts at zero and ramps up to the maximum value, and the total
+ weight is normalized to sum to one)
+
+ Args:
+ n: the number of samples
+ slope: slope of the sigmoid (Default: 0.1)
+ offset: threshold of the sigmoid (Default: 5)
+
+ Returns:
+ A numpy array of length n containing the sigmoid weights, normalized to sum to one.
+ """
+ x = np.arange(n)
+ y = 1. / (1. + np.exp(-slope * (x-offset)))
+ y_normalized = y / np.sum(y)
+ return y_normalized
+
+
+def sample_numiter(scale, min_steps=50):
+ """Samples a number of iterations from an exponential distribution.
+
+ Args:
+ scale: parameter for the exponential distribution
+ min_steps: minimum number of steps to run (additive)
+
+ Returns:
+ num_steps: An integer equal to a rounded sample from the exponential
+ distribution + the value of min_steps.
+ """
+ return int(np.round(np.random.exponential(scale=scale)) + min_steps)
+
+
+def train_optimizer(logdir,
+ optimizer_spec,
+ problems_and_data,
+ num_problems,
+ num_meta_iterations,
+ num_unroll_func,
+ num_partial_unroll_itrs_func,
+ learning_rate=1e-4,
+ gradient_clip=5.,
+ is_chief=False,
+ select_random_problems=True,
+ callbacks=None,
+ obj_train_max_multiplier=-1,
+ out=sys.stdout):
+ """Trains the meta-parameters of this optimizer.
+
+ Args:
+ logdir: a directory filepath for storing model checkpoints (must exist)
+ optimizer_spec: specification for an Optimizer (see utils.Spec)
+ problems_and_data: a list of tuples containing three elements: a problem
+ specification (see utils.Spec), a dataset (see datasets.Dataset), and
+ a batch_size (int) for generating a problem and corresponding dataset. If
+ the problem doesn't have data, set dataset to None.
+ num_problems: the number of problems to sample during meta-training
+ num_meta_iterations: the number of iterations (steps) to run the
+ meta-optimizer for on each subproblem.
+ num_unroll_func: called once per meta iteration and returns the number of
+ unrolls to do for that meta iteration.
+ num_partial_unroll_itrs_func: called once per unroll and returns the number
+ of iterations to do for that unroll.
+ learning_rate: learning rate of the RMSProp meta-optimizer (Default: 1e-4)
+ gradient_clip: value to clip gradients at (Default: 5.0)
+ is_chief: whether this is the chief task (Default: False)
+ select_random_problems: whether to select training problems randomly
+ (Default: True)
+ callbacks: a list of callback functions that are run after every random
+ problem draw.
+ obj_train_max_multiplier: the maximum increase in the objective value over
+ a single training run. Ignored if < 0.
+ out: where to write output to, e.g. a file handle (Default: sys.stdout)
+
+ Raises:
+ ValueError: If one of the subproblems has a negative objective value.
+ """
+
+ if select_random_problems:
+ # iterate over random draws of problem / dataset pairs
+ sampler = (random.choice(problems_and_data) for _ in range(num_problems))
+ else:
+ # iterate over a random shuffle of problems, looping if necessary
+ num_repeats = (num_problems // len(problems_and_data)) + 1
+ random.shuffle(problems_and_data)
+ sampler = (problems_and_data * num_repeats)[:num_problems]
+
+ for problem_itr, (problem_spec, dataset, batch_size) in enumerate(sampler):
+
+ # timer used to time how long it takes to initialize a problem
+ problem_start_time = time.time()
+
+ # if dataset is None, use the EMPTY_DATASET
+ if dataset is None:
+ dataset = datasets.EMPTY_DATASET
+ batch_size = dataset.size
+
+ # build a new graph for this problem
+ graph = tf.Graph()
+ real_device_setter = tf.train.replica_device_setter(FLAGS.ps_tasks)
+
+ def custom_device_setter(op):
+ # Places the local variables onto the workers.
+ if trainable_optimizer.is_local_state_variable(op):
+ return "/job:worker"
+ else:
+ return real_device_setter(op)
+
+ if real_device_setter:
+ device_setter = custom_device_setter
+ else:
+ device_setter = None
+
+ with graph.as_default(), graph.device(device_setter):
+
+ # initialize a problem
+ problem = problem_spec.build()
+
+ # build the optimizer
+ opt = optimizer_spec.build()
+
+ # get the meta-objective for training the optimizer
+ train_output = opt.train(problem, dataset)
+
+ state_keys = opt.state_keys
+ for key, val in zip(state_keys, train_output.output_state[0]):
+ finite_val = utils.make_finite(val, replacement=tf.zeros_like(val))
+ tf.summary.histogram("State/{}".format(key), finite_val,
+ collections=[OPT_SUM_COLLECTION])
+
+ tf.summary.scalar("MetaObjective", train_output.metaobj,
+ collections=[OPT_SUM_COLLECTION])
+
+ # Per-problem meta-objective
+ tf.summary.scalar(problem_spec.callable.__name__ + "_MetaObjective",
+ train_output.metaobj,
+ collections=[OPT_SUM_COLLECTION])
+
+ # create the meta-train_op
+ global_step = tf.Variable(0, name="global_step", trainable=False)
+ meta_parameters = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,
+ scope=OPTIMIZER_SCOPE)
+ # parameter regularization
+ reg_l2 = FLAGS.l2_reg * sum([tf.reduce_sum(param ** 2)
+ for param in meta_parameters])
+
+ # compute the meta-gradients
+ meta_opt = tf.train.RMSPropOptimizer(learning_rate, decay=FLAGS.rms_decay,
+ use_locking=True,
+ epsilon=FLAGS.rms_epsilon)
+ grads_and_vars = meta_opt.compute_gradients(train_output.metaobj + reg_l2,
+ meta_parameters)
+
+ # clip the gradients
+ clipped_grads_and_vars = []
+ for grad, var in grads_and_vars:
+ clipped_grad = tf.clip_by_value(
+ utils.make_finite(grad, replacement=tf.zeros_like(var)),
+ -gradient_clip, gradient_clip)
+ clipped_grads_and_vars.append((clipped_grad, var))
+
+ # histogram summary of grads and vars
+ for grad, var in grads_and_vars:
+ tf.summary.histogram(
+ var.name + "_rawgrad",
+ utils.make_finite(
+ grad, replacement=tf.zeros_like(grad)),
+ collections=[OPT_SUM_COLLECTION])
+ for grad, var in clipped_grads_and_vars:
+ tf.summary.histogram(var.name + "_var", var,
+ collections=[OPT_SUM_COLLECTION])
+ tf.summary.histogram(var.name + "_grad", grad,
+ collections=[OPT_SUM_COLLECTION])
+
+ # builds the train and summary operations
+ train_op = meta_opt.apply_gradients(clipped_grads_and_vars,
+ global_step=global_step)
+
+ # only grab summaries defined for LOL, not inside the problem
+ summary_op = tf.summary.merge_all(key=OPT_SUM_COLLECTION)
+
+ # make sure the state gets propagated after the gradients and summaries
+ # were computed.
+ with tf.control_dependencies([train_op, summary_op]):
+ propagate_loop_state_ops = []
+ for dest, src in zip(
+ train_output.init_loop_vars, train_output.output_loop_vars):
+ propagate_loop_state_ops.append(dest.assign(src))
+ propagate_loop_state_op = tf.group(*propagate_loop_state_ops)
+
+ # create the supervisor
+ sv = tf.train.Supervisor(
+ graph=graph,
+ is_chief=is_chief,
+ logdir=logdir,
+ summary_op=None,
+ save_model_secs=0, # we save checkpoints manually
+ global_step=global_step,
+ )
+
+ with sv.managed_session() as sess:
+
+ init_time = time.time() - problem_start_time
+ out.write("--------- Problem #{} ---------\n".format(problem_itr))
+ out.write("{callable.__name__}{args}{kwargs}\n".format(
+ **problem_spec.__dict__))
+ out.write("Took {} seconds to initialize.\n".format(init_time))
+ out.flush()
+
+ # For profiling summaries
+ if FLAGS.set_profiling:
+ summary_writer = tf.summary.FileWriter(logdir, graph=sess.graph)
+
+ # used to store information during training
+ metadata = defaultdict(list)
+
+ for k in range(num_meta_iterations):
+
+ if sv.should_stop():
+ break
+
+ problem.init_fn(sess)
+
+ # set run options (for profiling)
+ full_trace_opt = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
+ run_options = full_trace_opt if FLAGS.set_profiling else None
+ run_metadata = tf.RunMetadata() if FLAGS.set_profiling else None
+
+ num_unrolls = num_unroll_func()
+ partial_unroll_iters = [
+ num_partial_unroll_itrs_func() for _ in xrange(num_unrolls)
+ ]
+ total_num_iter = sum(partial_unroll_iters)
+
+ objective_weights = [np.ones(num) / float(num)
+ for num in partial_unroll_iters]
+ db = dataset.batch_indices(total_num_iter, batch_size)
+ dataset_batches = []
+ last_index = 0
+ for num in partial_unroll_iters:
+ dataset_batches.append(db[last_index:last_index + num])
+ last_index += num
+
+ train_start_time = time.time()
+
+ unroll_itr = 0
+ additional_log_info = ""
+
+ for unroll_itr in range(num_unrolls):
+ first_unroll = unroll_itr == 0
+ if FLAGS.reset_rnn_params:
+ reset_state = first_unroll and k == 0
+ else:
+ reset_state = first_unroll
+
+ feed = {
+ train_output.obj_weights: objective_weights[unroll_itr],
+ train_output.batches: dataset_batches[unroll_itr],
+ train_output.first_unroll: first_unroll,
+ train_output.reset_state: reset_state,
+ }
+
+ # run the train and summary ops
+ # when a "save_diagnostics" flag is turned on
+ fetches_list = [
+ train_output.metaobj,
+ train_output.problem_objectives,
+ train_output.initial_obj,
+ summary_op,
+ clipped_grads_and_vars,
+ train_op
+ ]
+ if unroll_itr + 1 < num_unrolls:
+ fetches_list += [propagate_loop_state_op]
+
+ fetched = sess.run(fetches_list, feed_dict=feed,
+ options=run_options, run_metadata=run_metadata)
+ meta_obj = fetched[0]
+ sub_obj = fetched[1]
+ init_obj = fetched[2]
+ summ = fetched[3]
+ meta_grads_and_params = fetched[4]
+
+ # assert that the subproblem objectives are non-negative
+ # (this is so that we can rescale the objective by the initial value
+ # and not worry about rescaling by a negative value)
+ if np.any(sub_obj < 0):
+ raise ValueError(
+ "Training problem objectives must be nonnegative.")
+ # If the objective has increased more than we want, exit this
+ # training run and start over on another meta iteration.
+ if obj_train_max_multiplier > 0 and (
+ sub_obj[-1] > (init_obj +
+ abs(init_obj) * (obj_train_max_multiplier - 1))):
+ msg = " Broke early at {} out of {} unrolls. ".format(
+ unroll_itr + 1, num_unrolls)
+ additional_log_info += msg
+ break
+
+ # only the chief task is allowed to write the summary
+ if is_chief:
+ sv.summary_computed(sess, summ)
+
+ metadata["subproblem_objs"].append(sub_obj)
+ # store training metadata to pass to the callback
+ metadata["meta_objs"].append(meta_obj)
+ metadata["meta_grads_and_params"].append(meta_grads_and_params)
+
+ optimization_time = time.time() - train_start_time
+
+ if FLAGS.set_profiling:
+ summary_name = "%02d_iter%04d_%02d" % (FLAGS.task, problem_itr, k)
+ summary_writer.add_run_metadata(run_metadata, summary_name)
+
+ metadata["global_step"].append(sess.run(global_step))
+ metadata["runtimes"].append(optimization_time)
+
+ # write a diagnostic message to the output
+ args = (k, meta_obj, optimization_time,
+ sum(partial_unroll_iters[:unroll_itr+1]))
+ out.write(" [{:02}] {}, {} seconds, {} iters ".format(*args))
+ out.write("(unrolled {} steps)".format(
+ ", ".join([str(s) for s in partial_unroll_iters[:unroll_itr+1]])))
+ out.write("{}\n".format(additional_log_info))
+ out.flush()
+
+ if FLAGS.set_profiling:
+ summary_writer.close()
+
+ # force a checkpoint save before we load a new problem
+ # only the chief task has the save_path and can write the checkpoint
+ if is_chief:
+ sv.saver.save(sess, sv.save_path, global_step=global_step)
+
+ # run the callbacks on the chief
+ if is_chief and callbacks is not None:
+ for callback in callbacks:
+ if hasattr(callback, "__call__"):
+ problem_name = problem_spec.callable.__name__
+ callback(problem_name, problem_itr, logdir, metadata)
+
+
+def test_optimizer(optimizer,
+ problem,
+ num_iter,
+ dataset=datasets.EMPTY_DATASET,
+ batch_size=None,
+ seed=None,
+ graph=None,
+ logdir=None,
+ record_every=None):
+ """Tests an optimization algorithm on a given problem.
+
+ Args:
+ optimizer: Either a tf.train.Optimizer instance, or an Optimizer instance
+ inheriting from trainable_optimizer.py
+ problem: A Problem instance that defines an optimization problem to solve
+ num_iter: The number of iterations of the optimizer to run
+ dataset: The dataset to train the problem against
+ batch_size: The number of samples per batch. If None (default), the
+ batch size is set to the full batch (dataset.size)
+ seed: A random seed used for drawing the initial parameters, or a list of
+ numpy arrays used to explicitly initialize the parameters.
+ graph: The tensorflow graph to execute (if None, uses the default graph)
+ logdir: A directory containing model checkpoints. If given, then the
+ parameters of the optimizer are loaded from the latest checkpoint
+ in this folder.
+ record_every: if an integer, stores the parameters, objective, and gradient
+ every record_every iterations. If None, nothing is stored.
+
+ Returns:
+ objective_values: A list of the objective values during optimization
+ parameters: The parameters obtained after training
+ records: A dictionary containing lists of the parameters and gradients
+ during optimization saved every record_every iterations (empty if
+ record_every is set to None)
+ """
+
+ if dataset is None:
+ dataset = datasets.EMPTY_DATASET
+ batch_size = dataset.size
+ else:
+ # default batch size is the entire dataset
+ batch_size = dataset.size if batch_size is None else batch_size
+
+ graph = tf.get_default_graph() if graph is None else graph
+ with graph.as_default():
+
+ # define the parameters of the optimization problem
+ if isinstance(seed, (list, tuple)):
+ # seed is a list of arrays
+ params = problem_generator.init_fixed_variables(seed)
+ else:
+ # seed is an int or None
+ params = problem.init_variables(seed)
+
+ data_placeholder = tf.placeholder(tf.float32)
+ labels_placeholder = tf.placeholder(tf.int32)
+
+ # get the problem objective and gradient(s)
+ obj = problem.objective(params, data_placeholder, labels_placeholder)
+ gradients = problem.gradients(obj, params)
+
+ vars_to_preinitialize = params
+
+ with tf.Session(graph=graph) as sess:
+ # initialize the parameter scope variables; necessary for apply_gradients
+ sess.run(tf.variables_initializer(vars_to_preinitialize))
+ coord = tf.train.Coordinator()
+ threads = tf.train.start_queue_runners(sess=sess, coord=coord)
+
+ # create the train operation and training variables
+ try:
+ train_op, real_params = optimizer.apply_gradients(zip(gradients, params))
+ obj = problem.objective(real_params, data_placeholder, labels_placeholder)
+ except TypeError:
+ # If all goes well, this exception should only be thrown when we are using
+ # a non-hrnn optimizer.
+ train_op = optimizer.apply_gradients(zip(gradients, params))
+
+ vars_to_restore = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,
+ scope=OPTIMIZER_SCOPE)
+ vars_to_initialize = list(
+ set(tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)) -
+ set(vars_to_restore) - set(vars_to_preinitialize))
+ # load or initialize optimizer variables
+ if logdir is not None:
+ restorer = tf.train.Saver(var_list=vars_to_restore)
+ ckpt = tf.train.latest_checkpoint(logdir)
+ restorer.restore(sess, ckpt)
+ else:
+ sess.run(tf.variables_initializer(vars_to_restore))
+ # initialize all the other variables
+ sess.run(tf.variables_initializer(vars_to_initialize))
+
+ problem.init_fn(sess)
+
+ # generate the minibatch indices
+ batch_inds = dataset.batch_indices(num_iter, batch_size)
+
+ # run the train operation for n iterations and save the objectives
+ records = defaultdict(list)
+ objective_values = []
+ for itr, batch in enumerate(batch_inds):
+
+ # data to feed in
+ feed = {data_placeholder: dataset.data[batch],
+ labels_placeholder: dataset.labels[batch]}
+ full_feed = {data_placeholder: dataset.data,
+ labels_placeholder: dataset.labels}
+
+ # record stuff
+ if record_every is not None and (itr % record_every) == 0:
+ def grad_value(g):
+ if isinstance(g, tf.IndexedSlices):
+ return g.values
+ else:
+ return g
+
+ records_fetch = {}
+ for p in params:
+ for key in optimizer.get_slot_names():
+ v = optimizer.get_slot(p, key)
+ records_fetch[p.name + "_" + key] = v
+ gav_fetch = [(grad_value(g), v) for g, v in zip(gradients, params)]
+
+ _, gav_eval, records_eval = sess.run(
+ (obj, gav_fetch, records_fetch), feed_dict=feed)
+ full_obj_eval = sess.run([obj], feed_dict=full_feed)
+
+ records["objective"].append(full_obj_eval)
+ records["grad_norm"].append([np.linalg.norm(g.ravel())
+ for g, _ in gav_eval])
+ records["param_norm"].append([np.linalg.norm(v.ravel())
+ for _, v in gav_eval])
+ records["grad"].append([g for g, _ in gav_eval])
+ records["param"].append([v for _, v in gav_eval])
+ records["iter"].append(itr)
+
+ for k, v in records_eval.items():
+ records[k].append(v)
+
+ # run the optimization train operation
+ objective_values.append(sess.run([train_op, obj], feed_dict=feed)[1])
+
+ # final parameters
+ parameters = [sess.run(p) for p in params]
+ coord.request_stop()
+ coord.join(threads)
+
+ return objective_values, parameters, records
+
+
+def run_wall_clock_test(optimizer,
+ problem,
+ num_steps,
+ dataset=datasets.EMPTY_DATASET,
+ seed=None,
+ logdir=None,
+ batch_size=None):
+ """Runs optimization with the given parameters and return average iter time.
+
+ Args:
+ optimizer: The tf.train.Optimizer instance
+ problem: The problem to optimize (a problem_generator.Problem)
+ num_steps: The number of steps to run optimization for
+ dataset: The dataset to train the problem against
+ seed: The seed used for drawing the initial parameters, or a list of
+ numpy arrays used to explicitly initialize the parameters
+ logdir: A directory containing model checkpoints. If given, then the
+ parameters of the optimizer are loaded from the latest checkpoint
+ in this folder.
+ batch_size: The number of samples per batch.
+
+ Returns:
+ The median time in seconds for a single optimization iteration.
+ """
+ if dataset is None:
+ dataset = datasets.EMPTY_DATASET
+ batch_size = dataset.size
+ else:
+ # default batch size is the entire dataset
+ batch_size = dataset.size if batch_size is None else batch_size
+
+ # define the parameters of the optimization problem
+ if isinstance(seed, (list, tuple)):
+ # seed is a list of arrays
+ params = problem_generator.init_fixed_variables(seed)
+ else:
+ # seed is an int or None
+ params = problem.init_variables(seed)
+
+ data_placeholder = tf.placeholder(tf.float32)
+ labels_placeholder = tf.placeholder(tf.int32)
+
+ obj = problem.objective(params, data_placeholder, labels_placeholder)
+ gradients = problem.gradients(obj, params)
+ vars_to_preinitialize = params
+
+ with tf.Session(graph=tf.get_default_graph()) as sess:
+ # initialize the parameter scope variables; necessary for apply_gradients
+ sess.run(tf.variables_initializer(vars_to_preinitialize))
+ train_op = optimizer.apply_gradients(zip(gradients, params))
+ if isinstance(train_op, tuple) or isinstance(train_op, list):
+ # LOL apply_gradients returns a tuple. Regular optimizers do not.
+ train_op = train_op[0]
+ vars_to_restore = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,
+ scope=OPTIMIZER_SCOPE)
+ vars_to_initialize = list(
+ set(tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)) -
+ set(vars_to_restore) - set(vars_to_preinitialize))
+ # load or initialize optimizer variables
+ if logdir is not None:
+ restorer = tf.train.Saver(var_list=vars_to_restore)
+ ckpt = tf.train.latest_checkpoint(logdir)
+ restorer.restore(sess, ckpt)
+ else:
+ sess.run(tf.variables_initializer(vars_to_restore))
+ # initialize all the other variables
+ sess.run(tf.variables_initializer(vars_to_initialize))
+
+ problem.init_fn(sess)
+
+ # generate the minibatch indices
+ batch_inds = dataset.batch_indices(num_steps, batch_size)
+
+ avg_iter_time = []
+ for batch in batch_inds:
+ # data to feed in
+ feed = {data_placeholder: dataset.data[batch],
+ labels_placeholder: dataset.labels[batch]}
+
+ # run the optimization train operation
+ start = time.time()
+ sess.run([train_op], feed_dict=feed)
+ avg_iter_time.append(time.time() - start)
+
+ return np.median(np.array(avg_iter_time))
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/metarun.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/metarun.py"
new file mode 100644
index 00000000..45a29623
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/metarun.py"
@@ -0,0 +1,394 @@
+# Copyright 2017 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Scripts for meta-optimization."""
+
+from __future__ import print_function
+
+import os
+
+import tensorflow as tf
+
+import metaopt
+from learned_optimizer.optimizer import coordinatewise_rnn
+from learned_optimizer.optimizer import global_learning_rate
+from learned_optimizer.optimizer import hierarchical_rnn
+from learned_optimizer.optimizer import learning_rate_schedule
+from learned_optimizer.optimizer import trainable_adam
+from learned_optimizer.problems import problem_sets as ps
+from learned_optimizer.problems import problem_spec
+
+tf.app.flags.DEFINE_string("train_dir", "/tmp/lol/",
+ """Directory to store parameters and results.""")
+
+tf.app.flags.DEFINE_integer("task", 0,
+ """Task id of the replica running the training.""")
+tf.app.flags.DEFINE_integer("worker_tasks", 1,
+ """Number of tasks in the worker job.""")
+
+tf.app.flags.DEFINE_integer("num_problems", 1000,
+ """Number of sub-problems to run.""")
+tf.app.flags.DEFINE_integer("num_meta_iterations", 5,
+ """Number of meta-iterations to optimize.""")
+tf.app.flags.DEFINE_integer("num_unroll_scale", 40,
+ """The scale parameter of the exponential
+ distribution from which the number of partial
+ unrolls is drawn""")
+tf.app.flags.DEFINE_integer("min_num_unrolls", 1,
+ """The minimum number of unrolls per problem.""")
+tf.app.flags.DEFINE_integer("num_partial_unroll_itr_scale", 200,
+ """The scale parameter of the exponential
+ distribution from which the number of iterations
+ per unroll is drawn.""")
+tf.app.flags.DEFINE_integer("min_num_itr_partial_unroll", 50,
+ """The minimum number of iterations for one
+ unroll.""")
+
+tf.app.flags.DEFINE_string("optimizer", "HierarchicalRNN",
+ """Which meta-optimizer to train.""")
+
+# CoordinatewiseRNN-specific flags
+tf.app.flags.DEFINE_integer("cell_size", 20,
+ """Size of the RNN hidden state in each layer.""")
+tf.app.flags.DEFINE_integer("num_cells", 2,
+ """Number of RNN layers.""")
+tf.app.flags.DEFINE_string("cell_cls", "GRUCell",
+ """Type of RNN cell to use.""")
+
+# Metaoptimization parameters
+tf.app.flags.DEFINE_float("meta_learning_rate", 1e-6,
+ """The learning rate for the meta-optimizer.""")
+tf.app.flags.DEFINE_float("gradient_clip_level", 1e4,
+ """The level to clip gradients to.""")
+
+# Training set selection
+tf.app.flags.DEFINE_boolean("include_quadratic_problems", False,
+ """Include non-noisy quadratic problems.""")
+tf.app.flags.DEFINE_boolean("include_noisy_quadratic_problems", True,
+ """Include noisy quadratic problems.""")
+tf.app.flags.DEFINE_boolean("include_large_quadratic_problems", True,
+ """Include very large quadratic problems.""")
+tf.app.flags.DEFINE_boolean("include_bowl_problems", True,
+ """Include 2D bowl problems.""")
+tf.app.flags.DEFINE_boolean("include_softmax_2_class_problems", True,
+ """Include 2-class logistic regression problems.""")
+tf.app.flags.DEFINE_boolean("include_noisy_softmax_2_class_problems", True,
+ """Include noisy 2-class logistic regression
+ problems.""")
+tf.app.flags.DEFINE_boolean("include_optimization_test_problems", True,
+ """Include non-noisy versions of classic
+ optimization test problems, e.g. Rosenbrock.""")
+tf.app.flags.DEFINE_boolean("include_noisy_optimization_test_problems", True,
+ """Include gradient-noise versions of classic
+ optimization test problems, e.g. Rosenbrock""")
+tf.app.flags.DEFINE_boolean("include_fully_connected_random_2_class_problems",
+ True, """Include MLP problems for 2 classes.""")
+tf.app.flags.DEFINE_boolean("include_matmul_problems", True,
+ """Include matrix multiplication problems.""")
+tf.app.flags.DEFINE_boolean("include_log_objective_problems", True,
+ """Include problems where the objective is the log
+ objective of another problem, e.g. Bowl.""")
+tf.app.flags.DEFINE_boolean("include_rescale_problems", True,
+ """Include problems where the parameters are scaled
+ version of the original parameters.""")
+tf.app.flags.DEFINE_boolean("include_norm_problems", True,
+ """Include problems where the objective is the
+ N-norm of another problem, e.g. Quadratic.""")
+tf.app.flags.DEFINE_boolean("include_sum_problems", True,
+ """Include problems where the objective is the sum
+ of the objectives of the subproblems that make
+ up the problem parameters. Per-problem tensors
+ are still independent of each other.""")
+tf.app.flags.DEFINE_boolean("include_sparse_gradient_problems", True,
+ """Include problems where the gradient is set to 0
+ with some high probability.""")
+tf.app.flags.DEFINE_boolean("include_sparse_softmax_problems", False,
+ """Include sparse softmax problems.""")
+tf.app.flags.DEFINE_boolean("include_one_hot_sparse_softmax_problems", False,
+ """Include one-hot sparse softmax problems.""")
+tf.app.flags.DEFINE_boolean("include_noisy_bowl_problems", True,
+ """Include noisy bowl problems.""")
+tf.app.flags.DEFINE_boolean("include_noisy_norm_problems", True,
+ """Include noisy norm problems.""")
+tf.app.flags.DEFINE_boolean("include_noisy_sum_problems", True,
+ """Include noisy sum problems.""")
+tf.app.flags.DEFINE_boolean("include_sum_of_quadratics_problems", False,
+ """Include sum of quadratics problems.""")
+tf.app.flags.DEFINE_boolean("include_projection_quadratic_problems", False,
+ """Include projection quadratic problems.""")
+tf.app.flags.DEFINE_boolean("include_outward_snake_problems", False,
+ """Include outward snake problems.""")
+tf.app.flags.DEFINE_boolean("include_dependency_chain_problems", False,
+ """Include dependency chain problems.""")
+tf.app.flags.DEFINE_boolean("include_min_max_well_problems", False,
+ """Include min-max well problems.""")
+
+# Optimizer parameters: initialization and scale values
+tf.app.flags.DEFINE_float("min_lr", 1e-6,
+ """The minimum initial learning rate.""")
+tf.app.flags.DEFINE_float("max_lr", 1e-2,
+ """The maximum initial learning rate.""")
+
+# Optimizer parameters: small features.
+tf.app.flags.DEFINE_boolean("zero_init_lr_weights", True,
+ """Whether to initialize the learning rate weights
+ to 0 rather than the scaled random initialization
+ used for other RNN variables.""")
+tf.app.flags.DEFINE_boolean("use_relative_lr", True,
+ """Whether to use the relative learning rate as an
+ input during training. Can only be used if
+ learnable_decay is also True.""")
+tf.app.flags.DEFINE_boolean("use_extreme_indicator", False,
+ """Whether to use the extreme indicator for learning
+ rates as an input during training. Can only be
+ used if learnable_decay is also True.""")
+tf.app.flags.DEFINE_boolean("use_log_means_squared", True,
+ """Whether to track the log of the mean squared
+ grads instead of the means squared grads.""")
+tf.app.flags.DEFINE_boolean("use_problem_lr_mean", True,
+ """Whether to use the mean over all learning rates
+ in the problem when calculating the relative
+ learning rate.""")
+
+# Optimizer parameters: major features
+tf.app.flags.DEFINE_boolean("learnable_decay", True,
+ """Whether to learn weights that dynamically
+ modulate the input scale via RMS decay.""")
+tf.app.flags.DEFINE_boolean("dynamic_output_scale", True,
+ """Whether to learn weights that dynamically
+ modulate the output scale.""")
+tf.app.flags.DEFINE_boolean("use_log_objective", True,
+ """Whether to use the log of the scaled objective
+ rather than just the scaled obj for training.""")
+tf.app.flags.DEFINE_boolean("use_attention", False,
+ """Whether to learn where to attend.""")
+tf.app.flags.DEFINE_boolean("use_second_derivatives", True,
+ """Whether to use second derivatives.""")
+tf.app.flags.DEFINE_integer("num_gradient_scales", 4,
+ """How many different timescales to keep for
+ gradient history. If > 1, also learns a scale
+ factor for gradient history.""")
+tf.app.flags.DEFINE_float("max_log_lr", 33,
+ """The maximum log learning rate allowed.""")
+tf.app.flags.DEFINE_float("objective_training_max_multiplier", -1,
+ """How much the objective can grow before training on
+ this problem / param pair is terminated. Sets a max
+ on the objective value when multiplied by the
+ initial objective. If <= 0, not used.""")
+tf.app.flags.DEFINE_boolean("use_gradient_shortcut", True,
+ """Whether to add a learned affine projection of the
+ gradient to the update delta in addition to the
+ gradient function computed by the RNN.""")
+tf.app.flags.DEFINE_boolean("use_lr_shortcut", False,
+ """Whether to add the difference between the current
+ learning rate and the desired learning rate to
+ the RNN input.""")
+tf.app.flags.DEFINE_boolean("use_grad_products", True,
+ """Whether to use gradient products in the input to
+ the RNN. Only applicable when num_gradient_scales
+ > 1.""")
+tf.app.flags.DEFINE_boolean("use_multiple_scale_decays", False,
+ """Whether to use many-timescale scale decays.""")
+tf.app.flags.DEFINE_boolean("use_numerator_epsilon", False,
+ """Whether to use epsilon in the numerator of the
+ log objective.""")
+tf.app.flags.DEFINE_boolean("learnable_inp_decay", True,
+ """Whether to learn input decay weight and bias.""")
+tf.app.flags.DEFINE_boolean("learnable_rnn_init", True,
+ """Whether to learn RNN state initialization.""")
+
+FLAGS = tf.app.flags.FLAGS
+
+# The size of the RNN hidden state in each layer:
+# [PerParam, PerTensor, Global]. The length of this list must be 1, 2, or 3.
+# If less than 3, the Global and/or PerTensor RNNs will not be created.
+
+HRNN_CELL_SIZES = [10, 20, 20]
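+# For example (illustrative only): HRNN_CELL_SIZES = [10] builds only the
+# per-parameter RNN, [10, 20] adds the per-tensor RNN, and the default
+# [10, 20, 20] enables all three levels including the global RNN.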
+
+
+
+def register_optimizers():
+ opts = {}
+ opts["CoordinatewiseRNN"] = coordinatewise_rnn.CoordinatewiseRNN
+ opts["GlobalLearningRate"] = global_learning_rate.GlobalLearningRate
+ opts["HierarchicalRNN"] = hierarchical_rnn.HierarchicalRNN
+ opts["LearningRateSchedule"] = learning_rate_schedule.LearningRateSchedule
+ opts["TrainableAdam"] = trainable_adam.TrainableAdam
+ return opts
+
+
+def main(unused_argv):
+ """Runs the main script."""
+
+ opts = register_optimizers()
+
+ # Choose a set of problems to optimize. Each include_* flag below toggles one
+ # problem family; with the default flag values this covers noisy quadratics,
+ # 2-dimensional bowls, 2-class softmax problems, MLPs, and the classic
+ # optimization test problems (e.g. Rosenbrock, Beale), among others.
+ problems_and_data = []
+
+ if FLAGS.include_sparse_softmax_problems:
+ problems_and_data.extend(ps.sparse_softmax_2_class_sparse_problems())
+
+ if FLAGS.include_one_hot_sparse_softmax_problems:
+ problems_and_data.extend(
+ ps.one_hot_sparse_softmax_2_class_sparse_problems())
+
+ if FLAGS.include_quadratic_problems:
+ problems_and_data.extend(ps.quadratic_problems())
+
+ if FLAGS.include_noisy_quadratic_problems:
+ problems_and_data.extend(ps.quadratic_problems_noisy())
+
+ if FLAGS.include_large_quadratic_problems:
+ problems_and_data.extend(ps.quadratic_problems_large())
+
+ if FLAGS.include_bowl_problems:
+ problems_and_data.extend(ps.bowl_problems())
+
+ if FLAGS.include_noisy_bowl_problems:
+ problems_and_data.extend(ps.bowl_problems_noisy())
+
+ if FLAGS.include_softmax_2_class_problems:
+ problems_and_data.extend(ps.softmax_2_class_problems())
+
+ if FLAGS.include_noisy_softmax_2_class_problems:
+ problems_and_data.extend(ps.softmax_2_class_problems_noisy())
+
+ if FLAGS.include_optimization_test_problems:
+ problems_and_data.extend(ps.optimization_test_problems())
+
+ if FLAGS.include_noisy_optimization_test_problems:
+ problems_and_data.extend(ps.optimization_test_problems_noisy())
+
+ if FLAGS.include_fully_connected_random_2_class_problems:
+ problems_and_data.extend(ps.fully_connected_random_2_class_problems())
+
+ if FLAGS.include_matmul_problems:
+ problems_and_data.extend(ps.matmul_problems())
+
+ if FLAGS.include_log_objective_problems:
+ problems_and_data.extend(ps.log_objective_problems())
+
+ if FLAGS.include_rescale_problems:
+ problems_and_data.extend(ps.rescale_problems())
+
+ if FLAGS.include_norm_problems:
+ problems_and_data.extend(ps.norm_problems())
+
+ if FLAGS.include_noisy_norm_problems:
+ problems_and_data.extend(ps.norm_problems_noisy())
+
+ if FLAGS.include_sum_problems:
+ problems_and_data.extend(ps.sum_problems())
+
+ if FLAGS.include_noisy_sum_problems:
+ problems_and_data.extend(ps.sum_problems_noisy())
+
+ if FLAGS.include_sparse_gradient_problems:
+ problems_and_data.extend(ps.sparse_gradient_problems())
+ if FLAGS.include_fully_connected_random_2_class_problems:
+ problems_and_data.extend(ps.sparse_gradient_problems_mlp())
+
+ if FLAGS.include_min_max_well_problems:
+ problems_and_data.extend(ps.min_max_well_problems())
+
+ if FLAGS.include_sum_of_quadratics_problems:
+ problems_and_data.extend(ps.sum_of_quadratics_problems())
+
+ if FLAGS.include_projection_quadratic_problems:
+ problems_and_data.extend(ps.projection_quadratic_problems())
+
+ if FLAGS.include_outward_snake_problems:
+ problems_and_data.extend(ps.outward_snake_problems())
+
+ if FLAGS.include_dependency_chain_problems:
+ problems_and_data.extend(ps.dependency_chain_problems())
+
+ # log directory
+ logdir = os.path.join(FLAGS.train_dir,
+ "{}_{}_{}_{}".format(FLAGS.optimizer,
+ FLAGS.cell_cls,
+ FLAGS.cell_size,
+ FLAGS.num_cells))
+
+ # get the optimizer class and arguments
+ optimizer_cls = opts[FLAGS.optimizer]
+
+ assert len(HRNN_CELL_SIZES) in [1, 2, 3]
+ optimizer_args = (HRNN_CELL_SIZES,)
+
+ optimizer_kwargs = {
+ "init_lr_range": (FLAGS.min_lr, FLAGS.max_lr),
+ "learnable_decay": FLAGS.learnable_decay,
+ "dynamic_output_scale": FLAGS.dynamic_output_scale,
+ "cell_cls": getattr(tf.contrib.rnn, FLAGS.cell_cls),
+ "use_attention": FLAGS.use_attention,
+ "use_log_objective": FLAGS.use_log_objective,
+ "num_gradient_scales": FLAGS.num_gradient_scales,
+ "zero_init_lr_weights": FLAGS.zero_init_lr_weights,
+ "use_log_means_squared": FLAGS.use_log_means_squared,
+ "use_relative_lr": FLAGS.use_relative_lr,
+ "use_extreme_indicator": FLAGS.use_extreme_indicator,
+ "max_log_lr": FLAGS.max_log_lr,
+ "obj_train_max_multiplier": FLAGS.objective_training_max_multiplier,
+ "use_problem_lr_mean": FLAGS.use_problem_lr_mean,
+ "use_gradient_shortcut": FLAGS.use_gradient_shortcut,
+ "use_second_derivatives": FLAGS.use_second_derivatives,
+ "use_lr_shortcut": FLAGS.use_lr_shortcut,
+ "use_grad_products": FLAGS.use_grad_products,
+ "use_multiple_scale_decays": FLAGS.use_multiple_scale_decays,
+ "use_numerator_epsilon": FLAGS.use_numerator_epsilon,
+ "learnable_inp_decay": FLAGS.learnable_inp_decay,
+ "learnable_rnn_init": FLAGS.learnable_rnn_init,
+ }
+ optimizer_spec = problem_spec.Spec(
+ optimizer_cls, optimizer_args, optimizer_kwargs)
+
+ # make log directory
+ tf.gfile.MakeDirs(logdir)
+
+ is_chief = FLAGS.task == 0
+ # if this is a distributed run, make the chief run through problems in order
+ select_random_problems = FLAGS.worker_tasks == 1 or not is_chief
+
+ def num_unrolls():
+ return metaopt.sample_numiter(FLAGS.num_unroll_scale, FLAGS.min_num_unrolls)
+
+ def num_partial_unroll_itrs():
+ return metaopt.sample_numiter(FLAGS.num_partial_unroll_itr_scale,
+ FLAGS.min_num_itr_partial_unroll)
+
+ # run it
+ metaopt.train_optimizer(
+ logdir,
+ optimizer_spec,
+ problems_and_data,
+ FLAGS.num_problems,
+ FLAGS.num_meta_iterations,
+ num_unrolls,
+ num_partial_unroll_itrs,
+ learning_rate=FLAGS.meta_learning_rate,
+ gradient_clip=FLAGS.gradient_clip_level,
+ is_chief=is_chief,
+ select_random_problems=select_random_problems,
+ obj_train_max_multiplier=FLAGS.objective_training_max_multiplier,
+ callbacks=[])
+
+ return 0
+
+
+if __name__ == "__main__":
+ tf.app.run()
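+
+# Rough invocation sketch (values are illustrative; every flag shown is
+# defined above in this file):
+#
+# python metarun.py \
+#     --train_dir=/tmp/learned_optimizer \
+#     --optimizer=HierarchicalRNN \
+#     --num_problems=1000 \
+#     --num_meta_iterations=5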
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/BUILD" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/BUILD"
new file mode 100644
index 00000000..8953e759
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/BUILD"
@@ -0,0 +1,69 @@
+package(default_visibility = ["//visibility:public"])
+
+# Libraries
+# =========
+py_library(
+ name = "coordinatewise_rnn",
+ srcs = ["coordinatewise_rnn.py"],
+ deps = [
+ ":trainable_optimizer",
+ ":utils",
+ ],
+)
+
+py_library(
+ name = "global_learning_rate",
+ srcs = ["global_learning_rate.py"],
+ deps = [
+ ":trainable_optimizer",
+ ],
+)
+
+py_library(
+ name = "hierarchical_rnn",
+ srcs = ["hierarchical_rnn.py"],
+ deps = [
+ ":rnn_cells",
+ ":trainable_optimizer",
+ ":utils",
+ ],
+)
+
+py_library(
+ name = "learning_rate_schedule",
+ srcs = ["learning_rate_schedule.py"],
+ deps = [
+ ":trainable_optimizer",
+ ],
+)
+
+py_library(
+ name = "rnn_cells",
+ srcs = ["rnn_cells.py"],
+ deps = [
+ ":utils",
+ ],
+)
+
+py_library(
+ name = "trainable_adam",
+ srcs = ["trainable_adam.py"],
+ deps = [
+ ":trainable_optimizer",
+ ":utils",
+ ],
+)
+
+py_library(
+ name = "trainable_optimizer",
+ srcs = ["trainable_optimizer.py"],
+ deps = [
+ ],
+)
+
+py_library(
+ name = "utils",
+ srcs = ["utils.py"],
+ deps = [
+ ],
+)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/coordinatewise_rnn.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/coordinatewise_rnn.py"
new file mode 100644
index 00000000..3d699504
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/coordinatewise_rnn.py"
@@ -0,0 +1,316 @@
+# Copyright 2017 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Collection of trainable optimizers for meta-optimization."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import math
+
+import numpy as np
+import tensorflow as tf
+
+from learned_optimizer.optimizer import utils
+from learned_optimizer.optimizer import trainable_optimizer as opt
+
+
+# Default was 1e-3
+tf.app.flags.DEFINE_float("crnn_rnn_readout_scale", 0.5,
+ """The initialization scale for the RNN readouts.""")
+tf.app.flags.DEFINE_float("crnn_default_decay_var_init", 2.2,
+ """The default initializer value for any decay/
+ momentum style variables and constants.
+ sigmoid(2.2) ~ 0.9, sigmoid(-2.2) ~ 0.1.""")
+
+FLAGS = tf.flags.FLAGS
+
+
+class CoordinatewiseRNN(opt.TrainableOptimizer):
+ """RNN that operates on each coordinate of the problem independently."""
+
+ def __init__(self,
+ cell_sizes,
+ cell_cls,
+ init_lr_range=(1., 1.),
+ dynamic_output_scale=True,
+ learnable_decay=True,
+ zero_init_lr_weights=False,
+ **kwargs):
+ """Initializes the RNN per-parameter optimizer.
+
+ Args:
+ cell_sizes: List of hidden state sizes for each RNN cell in the network
+ cell_cls: tf.contrib.rnn class for specifying the RNN cell type
+ init_lr_range: the range in which to initialize the learning rates.
+ dynamic_output_scale: whether to learn weights that dynamically modulate
+ the output scale (default: True)
+ learnable_decay: whether to learn weights that dynamically modulate the
+ input scale via RMS style decay (default: True)
+ zero_init_lr_weights: whether to initialize the lr weights to zero
+ **kwargs: args passed to TrainableOptimizer's constructor
+
+ Raises:
+ ValueError: If the init lr range is not of length 2.
+ ValueError: If the init lr range is not a valid range (min > max).
+ """
+ if len(init_lr_range) != 2:
+ raise ValueError(
+ "Initial LR range must be len 2, was {}".format(len(init_lr_range)))
+ if init_lr_range[0] > init_lr_range[1]:
+ raise ValueError("Initial LR range min is greater than max.")
+ self.init_lr_range = init_lr_range
+
+ self.zero_init_lr_weights = zero_init_lr_weights
+ self.reuse_vars = False
+
+ # create the RNN cell
+ with tf.variable_scope(opt.OPTIMIZER_SCOPE):
+ self.component_cells = [cell_cls(sz) for sz in cell_sizes]
+ self.cell = tf.contrib.rnn.MultiRNNCell(self.component_cells)
+
+ # random normal initialization scaled by the output size
+ scale_factor = FLAGS.crnn_rnn_readout_scale / math.sqrt(cell_sizes[-1])
+ scaled_init = tf.random_normal_initializer(0., scale_factor)
+
+ # weights for projecting the hidden state to a parameter update
+ self.update_weights = tf.get_variable("update_weights",
+ shape=(cell_sizes[-1], 1),
+ initializer=scaled_init)
+
+ self._initialize_decay(learnable_decay, (cell_sizes[-1], 1), scaled_init)
+
+ self._initialize_lr(dynamic_output_scale, (cell_sizes[-1], 1),
+ scaled_init)
+
+ state_size = sum(sum(cell_state_size) for cell_state_size in self.cell.state_size)
+ self._init_vector = tf.get_variable(
+ "init_vector", shape=[1, state_size],
+ initializer=tf.random_uniform_initializer(-1., 1.))
+
+ state_keys = ["rms", "rnn", "learning_rate", "decay"]
+ super(CoordinatewiseRNN, self).__init__("cRNN", state_keys, **kwargs)
+
+ def _initialize_decay(
+ self, learnable_decay, weights_tensor_shape, scaled_init):
+ """Initializes the decay weights and bias variables or tensors.
+
+ Args:
+ learnable_decay: Whether to use learnable decay.
+ weights_tensor_shape: The shape the weight tensor should take.
+ scaled_init: The scaled initialization for the weights tensor.
+ """
+ if learnable_decay:
+
+ # weights for projecting the hidden state to the RMS decay term
+ self.decay_weights = tf.get_variable("decay_weights",
+ shape=weights_tensor_shape,
+ initializer=scaled_init)
+ self.decay_bias = tf.get_variable(
+ "decay_bias", shape=(1,),
+ initializer=tf.constant_initializer(
+ FLAGS.crnn_default_decay_var_init))
+ else:
+ self.decay_weights = tf.zeros_like(self.update_weights)
+ self.decay_bias = tf.constant(FLAGS.crnn_default_decay_var_init)
+
+ def _initialize_lr(
+ self, dynamic_output_scale, weights_tensor_shape, scaled_init):
+ """Initializes the learning rate weights and bias variables or tensors.
+
+ Args:
+ dynamic_output_scale: Whether to use a dynamic output scale.
+ weights_tensor_shape: The shape the weight tensor should take.
+ scaled_init: The scaled initialization for the weights tensor.
+ """
+ if dynamic_output_scale:
+ zero_init = tf.constant_initializer(0.)
+ wt_init = zero_init if self.zero_init_lr_weights else scaled_init
+ self.lr_weights = tf.get_variable("learning_rate_weights",
+ shape=weights_tensor_shape,
+ initializer=wt_init)
+ self.lr_bias = tf.get_variable("learning_rate_bias", shape=(1,),
+ initializer=zero_init)
+ else:
+ self.lr_weights = tf.zeros_like(self.update_weights)
+ self.lr_bias = tf.zeros([1, 1])
+
+ def _initialize_state(self, var):
+ """Return a dictionary mapping names of state variables to their values."""
+ vectorized_shape = [var.get_shape().num_elements(), 1]
+
+ min_lr = self.init_lr_range[0]
+ max_lr = self.init_lr_range[1]
+ if min_lr == max_lr:
+ init_lr = tf.constant(min_lr, shape=vectorized_shape)
+ else:
+ actual_vals = tf.random_uniform(vectorized_shape,
+ np.log(min_lr),
+ np.log(max_lr))
+ init_lr = tf.exp(actual_vals)
+
+ ones = tf.ones(vectorized_shape)
+ rnn_init = ones * self._init_vector
+
+ return {
+ "rms": tf.ones(vectorized_shape),
+ "learning_rate": init_lr,
+ "rnn": rnn_init,
+ "decay": tf.ones(vectorized_shape),
+ }
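+ # Initialization sketch: when min_lr != max_lr, the per-coordinate initial
+ # learning rates above are drawn log-uniformly, i.e.
+ # lr = exp(Uniform(log(min_lr), log(max_lr))),
+ # so with a (1e-6, 1e-2) range each decade is (roughly) equally likely.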
+
+ def _compute_update(self, param, grad, state):
+ """Update parameters given the gradient and state.
+
+ Args:
+ param: tensor of parameters
+ grad: tensor of gradients with the same shape as param
+ state: a dictionary containing any state for the optimizer
+
+ Returns:
+ updated_param: updated parameters
+ updated_state: updated state variables in a dictionary
+ """
+
+ with tf.variable_scope(opt.OPTIMIZER_SCOPE) as scope:
+
+ if self.reuse_vars:
+ scope.reuse_variables()
+ else:
+ self.reuse_vars = True
+
+ param_shape = tf.shape(param)
+
+ (grad_values, decay_state, rms_state, rnn_state, learning_rate_state,
+ grad_indices) = self._extract_gradients_and_internal_state(
+ grad, state, param_shape)
+
+ # Vectorize and scale the gradients.
+ grad_scaled, rms = utils.rms_scaling(grad_values, decay_state, rms_state)
+
+ # Apply the RNN update.
+ rnn_state_tuples = self._unpack_rnn_state_into_tuples(rnn_state)
+ rnn_output, rnn_state_tuples = self.cell(grad_scaled, rnn_state_tuples)
+ rnn_state = self._pack_tuples_into_rnn_state(rnn_state_tuples)
+
+ # Compute the update direction (a linear projection of the RNN output).
+ delta = utils.project(rnn_output, self.update_weights)
+
+ # The updated decay is an affine projection of the hidden state
+ decay = utils.project(rnn_output, self.decay_weights,
+ bias=self.decay_bias, activation=tf.nn.sigmoid)
+
+ # Compute the change in learning rate (an affine projection of the RNN
+ # state, passed through a 2x sigmoid, so the change is bounded).
+ learning_rate_change = 2. * utils.project(rnn_output, self.lr_weights,
+ bias=self.lr_bias,
+ activation=tf.nn.sigmoid)
+
+ # Update the learning rate.
+ new_learning_rate = learning_rate_change * learning_rate_state
+
+ # Apply the update to the parameters.
+ update = tf.reshape(new_learning_rate * delta, tf.shape(grad_values))
+
+ if isinstance(grad, tf.IndexedSlices):
+ update = utils.stack_tensor(update, grad_indices, param,
+ param_shape[:1])
+ rms = utils.update_slices(rms, grad_indices, state["rms"], param_shape)
+ new_learning_rate = utils.update_slices(new_learning_rate, grad_indices,
+ state["learning_rate"],
+ param_shape)
+ rnn_state = utils.update_slices(rnn_state, grad_indices, state["rnn"],
+ param_shape)
+ decay = utils.update_slices(decay, grad_indices, state["decay"],
+ param_shape)
+
+ new_param = param - update
+
+ # Collect the update and new state.
+ new_state = {
+ "rms": rms,
+ "learning_rate": new_learning_rate,
+ "rnn": rnn_state,
+ "decay": decay,
+ }
+
+ return new_param, new_state
+
+ def _extract_gradients_and_internal_state(self, grad, state, param_shape):
+ """Extracts the gradients and relevant internal state.
+
+ If the gradient is sparse, extracts the appropriate slices from the state.
+
+ Args:
+ grad: The current gradient.
+ state: The current state.
+ param_shape: The shape of the parameter (used if gradient is sparse).
+
+ Returns:
+ grad_values: The gradient value tensor.
+ decay_state: The current decay state.
+ rms_state: The current rms state.
+ rnn_state: The current state of the internal rnns.
+ learning_rate_state: The current learning rate state.
+ grad_indices: The indices for the gradient tensor, if sparse.
+ None otherwise.
+ """
+ if isinstance(grad, tf.IndexedSlices):
+ grad_indices, grad_values = utils.accumulate_sparse_gradients(grad)
+ decay_state = utils.slice_tensor(state["decay"], grad_indices,
+ param_shape)
+ rms_state = utils.slice_tensor(state["rms"], grad_indices, param_shape)
+ rnn_state = utils.slice_tensor(state["rnn"], grad_indices, param_shape)
+ learning_rate_state = utils.slice_tensor(state["learning_rate"],
+ grad_indices, param_shape)
+ decay_state.set_shape([None, 1])
+ rms_state.set_shape([None, 1])
+ else:
+ grad_values = grad
+ grad_indices = None
+
+ decay_state = state["decay"]
+ rms_state = state["rms"]
+ rnn_state = state["rnn"]
+ learning_rate_state = state["learning_rate"]
+ return (grad_values, decay_state, rms_state, rnn_state, learning_rate_state,
+ grad_indices)
+
+ def _unpack_rnn_state_into_tuples(self, rnn_state):
+ """Creates state tuples from the rnn state vector."""
+ rnn_state_tuples = []
+ cur_state_pos = 0
+ for cell in self.component_cells:
+ total_state_size = sum(cell.state_size)
+ cur_state = tf.slice(rnn_state, [0, cur_state_pos],
+ [-1, total_state_size])
+ cur_state_tuple = tf.split(value=cur_state, num_or_size_splits=2,
+ axis=1)
+ rnn_state_tuples.append(cur_state_tuple)
+ cur_state_pos += total_state_size
+ return rnn_state_tuples
+
+ def _pack_tuples_into_rnn_state(self, rnn_state_tuples):
+ """Creates a single state vector concatenated along column axis."""
+ rnn_state = None
+ for new_state_tuple in rnn_state_tuples:
+ new_c, new_h = new_state_tuple
+ if rnn_state is None:
+ rnn_state = tf.concat([new_c, new_h], axis=1)
+ else:
+ rnn_state = tf.concat([rnn_state, tf.concat([new_c, new_h], 1)], axis=1)
+ return rnn_state
+
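+# State layout sketch (illustrative): the pack/unpack helpers above assume each
+# component cell exposes a two-part (c, h) state, e.g. an LSTM-style cell. With
+# two cells of 20 units each, the flattened per-parameter state row is
+#   [c0 (20 cols) | h0 (20 cols) | c1 (20 cols) | h1 (20 cols)]  -> 80 columns,
+# matching state_size = sum(sum(s) for s in self.cell.state_size) in __init__.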
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/global_learning_rate.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/global_learning_rate.py"
new file mode 100644
index 00000000..bcf102ff
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/global_learning_rate.py"
@@ -0,0 +1,40 @@
+# Copyright 2017 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""A trainable optimizer that learns a single global learning rate."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from learned_optimizer.optimizer import trainable_optimizer
+
+
+class GlobalLearningRate(trainable_optimizer.TrainableOptimizer):
+ """Optimizes for a single global learning rate."""
+
+ def __init__(self, initial_rate=1e-3, **kwargs):
+ """Initializes the global learning rate."""
+ with tf.variable_scope(trainable_optimizer.OPTIMIZER_SCOPE):
+ initializer = tf.constant_initializer(initial_rate)
+ self.learning_rate = tf.get_variable("global_learning_rate", shape=(),
+ initializer=initializer)
+ super(GlobalLearningRate, self).__init__("GLR", [], **kwargs)
+
+ def _compute_update(self, param, grad, state):
+ return param - tf.scalar_mul(self.learning_rate, grad), state
+
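+# The update above is plain gradient descent with one learned scalar:
+# new_param = param - learning_rate * grad. A rough equivalence sketch
+# (illustrative): GlobalLearningRate(initial_rate=1e-3) behaves like
+# tf.train.GradientDescentOptimizer(1e-3), except that the scalar rate lives
+# under OPTIMIZER_SCOPE and can itself be trained by the meta-optimizer.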
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/hierarchical_rnn.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/hierarchical_rnn.py"
new file mode 100644
index 00000000..953b72b5
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/hierarchical_rnn.py"
@@ -0,0 +1,792 @@
+# Copyright 2017 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Collection of trainable optimizers for meta-optimization."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import math
+
+import numpy as np
+import tensorflow as tf
+
+from tensorflow.python.ops import state_ops
+from learned_optimizer.optimizer import rnn_cells
+from learned_optimizer.optimizer import trainable_optimizer as opt
+from learned_optimizer.optimizer import utils
+
+# Default was 0.1
+tf.app.flags.DEFINE_float("biasgrucell_scale", 0.5,
+ """The scale for the internal BiasGRUCell vars.""")
+# Default was 0
+tf.app.flags.DEFINE_float("biasgrucell_gate_bias_init", 2.2,
+ """The bias for the internal BiasGRUCell reset and
+ update gate variables.""")
+# Default was 1e-3
+tf.app.flags.DEFINE_float("hrnn_rnn_readout_scale", 0.5,
+ """The initialization scale for the RNN readouts.""")
+tf.app.flags.DEFINE_float("hrnn_default_decay_var_init", 2.2,
+ """The default initializer value for any decay/
+ momentum style variables and constants.
+ sigmoid(2.2) ~ 0.9, sigmoid(-2.2) ~ 0.1.""")
+# Default was 2.2
+tf.app.flags.DEFINE_float("scale_decay_bias_init", 3.2,
+ """The initialization for the scale decay bias. This
+ is the initial bias for the timescale for the
+ exponential avg of the mean square gradients.""")
+tf.app.flags.DEFINE_float("learning_rate_momentum_logit_init", 3.2,
+ """Initialization for the learning rate momentum.""")
+# Default was 0.1
+tf.app.flags.DEFINE_float("hrnn_affine_scale", 0.5,
+ """The initialization scale for the weight matrix of
+ the bias variables in layer0 and 1 of the hrnn.""")
+
+FLAGS = tf.flags.FLAGS
+
+
+class HierarchicalRNN(opt.TrainableOptimizer):
+ """3 level hierarchical RNN.
+
+ Optionally uses second order gradient information and has decoupled evaluation
+ and update locations.
+ """
+
+ def __init__(self, level_sizes, init_lr_range=(1e-6, 1e-2),
+ learnable_decay=True, dynamic_output_scale=True,
+ use_attention=False, use_log_objective=True,
+ num_gradient_scales=4, zero_init_lr_weights=True,
+ use_log_means_squared=True, use_relative_lr=True,
+ use_extreme_indicator=False, max_log_lr=33,
+ obj_train_max_multiplier=-1, use_problem_lr_mean=False,
+ use_gradient_shortcut=False, use_lr_shortcut=False,
+ use_grad_products=False, use_multiple_scale_decays=False,
+ learnable_inp_decay=True, learnable_rnn_init=True,
+ random_seed=None, **kwargs):
+ """Initializes the RNN per-parameter optimizer.
+
+ The hierarchy consists of up to three levels:
+ Level 0: per parameter RNN
+ Level 1: per tensor RNN
+ Level 2: global RNN
+
+ Args:
+ level_sizes: list or tuple with 1, 2, or 3 integers, the number of units
+ in each RNN in the hierarchy (level0, level1, level2).
+ length 1: only coordinatewise rnn's will be used
+ length 2: coordinatewise and tensor-level rnn's will be used
+ length 3: a single global-level rnn will be used in addition to
+ coordinatewise and tensor-level
+ init_lr_range: the range in which to initialize the learning rates
+ learnable_decay: whether to learn weights that dynamically modulate the
+ input scale via RMS style decay
+ dynamic_output_scale: whether to learn weights that dynamically modulate
+ the output scale
+ use_attention: whether to use attention to train the optimizer
+ use_log_objective: whether to train on the log of the objective
+ num_gradient_scales: the number of scales to use for gradient history
+ zero_init_lr_weights: whether to initialize the lr weights to zero
+ use_log_means_squared: whether to track the log of the means_squared,
+ used as a measure of signal vs. noise in gradient.
+ use_relative_lr: whether to use the relative learning rate as an
+ input during training (requires learnable_decay=True)
+ use_extreme_indicator: whether to use the extreme indicator for learning
+ rates as an input during training (requires learnable_decay=True)
+ max_log_lr: the maximum log learning rate allowed during train or test
+ obj_train_max_multiplier: max objective increase during a training run
+ use_problem_lr_mean: whether to use the mean over all learning rates in
+ the problem when calculating the relative learning rate as opposed to
+ the per-tensor mean
+ use_gradient_shortcut: Whether to add a learned affine projection of the
+ gradient to the update delta in addition to the gradient function
+ computed by the RNN
+ use_lr_shortcut: Whether to add as input the difference between the log lr
+ and the desired log lr (1e-3)
+ use_grad_products: Whether to use gradient products in the rnn input.
+ Only applicable if num_gradient_scales > 1
+ use_multiple_scale_decays: Whether to use multiple scales for the scale
+ decay, as with input decay
+ learnable_inp_decay: Whether to learn the input decay weights and bias.
+ learnable_rnn_init: Whether to learn the RNN state initialization.
+ random_seed: Random seed for random variable initializers. (Default: None)
+ **kwargs: args passed to TrainableOptimizer's constructor
+
+ Raises:
+ ValueError: If level_sizes is not a length 1, 2, or 3 list.
+ ValueError: If there are any non-integer sizes in level_sizes.
+ ValueError: If the init lr range is not of length 2.
+ ValueError: If the init lr range is not a valid range (min > max).
+ """
+ if len(level_sizes) not in [1, 2, 3]:
+ raise ValueError("HierarchicalRNN only supports 1, 2, or 3 levels in the "
+ "hierarchy, but {} were requested.".format(
+ len(level_sizes)))
+ if any(not isinstance(level, int) for level in level_sizes):
+ raise ValueError("Level sizes must be integer values, were {}".format(
+ level_sizes))
+ if len(init_lr_range) != 2:
+ raise ValueError(
+ "Initial LR range must be len 2, was {}".format(len(init_lr_range)))
+ if init_lr_range[0] > init_lr_range[1]:
+ raise ValueError("Initial LR range min is greater than max.")
+
+ self.learnable_decay = learnable_decay
+ self.dynamic_output_scale = dynamic_output_scale
+ self.use_attention = use_attention
+ self.use_log_objective = use_log_objective
+ self.num_gradient_scales = num_gradient_scales
+ self.zero_init_lr_weights = zero_init_lr_weights
+ self.use_log_means_squared = use_log_means_squared
+ self.use_relative_lr = use_relative_lr
+ self.use_extreme_indicator = use_extreme_indicator
+ self.max_log_lr = max_log_lr
+ self.use_problem_lr_mean = use_problem_lr_mean
+ self.use_gradient_shortcut = use_gradient_shortcut
+ self.use_lr_shortcut = use_lr_shortcut
+ self.use_grad_products = use_grad_products
+ self.use_multiple_scale_decays = use_multiple_scale_decays
+ self.learnable_inp_decay = learnable_inp_decay
+ self.learnable_rnn_init = learnable_rnn_init
+
+ self.random_seed = random_seed
+
+ self.num_layers = len(level_sizes)
+ self.init_lr_range = init_lr_range
+
+ self.reuse_vars = None
+ self.reuse_global_state = None
+ self.cells = []
+ self.init_vectors = []
+
+ with tf.variable_scope(opt.OPTIMIZER_SCOPE):
+
+ self._initialize_rnn_cells(level_sizes)
+
+ # get the cell size for the per-parameter RNN (Level 0)
+ cell_size = level_sizes[0]
+
+ # Random normal initialization scaled by the output size. This is the
+ # scale for the RNN *readouts*. RNN internal weight scale is set in the
+ # BiasGRUCell call.
+ scale_factor = FLAGS.hrnn_rnn_readout_scale / math.sqrt(cell_size)
+ scaled_init = tf.random_normal_initializer(0., scale_factor,
+ seed=self.random_seed)
+
+ # weights for projecting the hidden state to a parameter update
+ self.update_weights = tf.get_variable("update_weights",
+ shape=(cell_size, 1),
+ initializer=scaled_init)
+
+ if self.use_attention:
+ # weights for projecting the hidden state to the location at which the
+ # gradient is attended
+ self.attention_weights = tf.get_variable(
+ "attention_weights",
+ initializer=self.update_weights.initialized_value())
+
+ # weights for projecting the hidden state to the RMS decay term
+ self._initialize_scale_decay((cell_size, 1), scaled_init)
+ self._initialize_input_decay((cell_size, 1), scaled_init)
+
+ self._initialize_lr((cell_size, 1), scaled_init)
+
+ state_keys = ["parameter", "layer", "scl_decay", "inp_decay", "true_param"]
+
+ if self.dynamic_output_scale:
+ state_keys.append("log_learning_rate")
+
+ for i in range(self.num_gradient_scales):
+ state_keys.append("grad_accum{}".format(i + 1))
+ state_keys.append("ms{}".format(i + 1))
+
+ super(HierarchicalRNN, self).__init__(
+ "hRNN", state_keys, use_attention=use_attention,
+ use_log_objective=use_log_objective,
+ obj_train_max_multiplier=obj_train_max_multiplier, **kwargs)
+
+ def _initialize_rnn_cells(self, level_sizes):
+ """Initializes the RNN cells to use in the hierarchical RNN."""
+
+ # RNN Cell layers (0 -> lowest, 1 -> middle, 2 -> global)
+ for level in range(self.num_layers):
+ scope = "Level{}_RNN".format(level)
+ with tf.variable_scope(scope):
+ hcell = rnn_cells.BiasGRUCell(
+ level_sizes[level],
+ scale=FLAGS.biasgrucell_scale,
+ gate_bias_init=FLAGS.biasgrucell_gate_bias_init,
+ random_seed=self.random_seed)
+ self.cells.append(hcell)
+ if self.learnable_rnn_init:
+ self.init_vectors.append(tf.Variable(
+ tf.random_uniform([1, hcell.state_size], -1., 1.,
+ seed=self.random_seed),
+ name="init_vector"))
+ else:
+ self.init_vectors.append(
+ tf.random_uniform([1, hcell.state_size], -1., 1.,
+ seed=self.random_seed))
+
+ def _initialize_scale_decay(self, weights_tensor_shape, scaled_init):
+ """Initializes the scale decay weights and bias variables or tensors.
+
+ Args:
+ weights_tensor_shape: The shape the weight tensor should take.
+ scaled_init: The scaled initialization for the weights tensor.
+ """
+ if self.learnable_decay:
+ self.scl_decay_weights = tf.get_variable("scl_decay_weights",
+ shape=weights_tensor_shape,
+ initializer=scaled_init)
+ scl_decay_bias_init = tf.constant_initializer(
+ FLAGS.scale_decay_bias_init)
+ self.scl_decay_bias = tf.get_variable("scl_decay_bias",
+ shape=(1,),
+ initializer=scl_decay_bias_init)
+ else:
+ self.scl_decay_weights = tf.zeros_like(self.update_weights)
+ self.scl_decay_bias = tf.log(0.93 / (1. - 0.93))
+
+ def _initialize_input_decay(self, weights_tensor_shape, scaled_init):
+ """Initializes the input scale decay weights and bias variables or tensors.
+
+ Args:
+ weights_tensor_shape: The shape the weight tensor should take.
+ scaled_init: The scaled initialization for the weights tensor.
+ """
+ if (self.learnable_decay and self.num_gradient_scales > 1 and
+ self.learnable_inp_decay):
+ self.inp_decay_weights = tf.get_variable("inp_decay_weights",
+ shape=weights_tensor_shape,
+ initializer=scaled_init)
+ inp_decay_bias_init = tf.constant_initializer(
+ FLAGS.hrnn_default_decay_var_init)
+ self.inp_decay_bias = tf.get_variable("inp_decay_bias",
+ shape=(1,),
+ initializer=inp_decay_bias_init)
+ else:
+ self.inp_decay_weights = tf.zeros_like(self.update_weights)
+ self.inp_decay_bias = tf.log(0.89 / (1. - 0.89))
+
+ def _initialize_lr(self, weights_tensor_shape, scaled_init):
+ """Initializes the learning rate weights and bias variables or tensors.
+
+ Args:
+ weights_tensor_shape: The shape the weight tensor should take.
+ scaled_init: The scaled initialization for the weights tensor.
+ """
+ if self.dynamic_output_scale:
+ zero_init = tf.constant_initializer(0.)
+ wt_init = zero_init if self.zero_init_lr_weights else scaled_init
+ self.lr_weights = tf.get_variable("learning_rate_weights",
+ shape=weights_tensor_shape,
+ initializer=wt_init)
+ self.lr_bias = tf.get_variable("learning_rate_bias", shape=(1,),
+ initializer=zero_init)
+ else:
+ self.lr_weights = tf.zeros_like(self.update_weights)
+ self.lr_bias = tf.zeros([1, 1])
+
+ def _initialize_state(self, var):
+ """Return a dictionary mapping names of state variables to their values."""
+ var_vectorized = tf.reshape(var, [-1, 1])
+ ndim = var_vectorized.get_shape().as_list()[0]
+
+ state = {
+ # parameter init tensor is [var_ndim x layer0_cell_size]
+ "parameter": tf.ones([ndim, 1]) * self.init_vectors[0],
+ "scl_decay": tf.zeros_like(var_vectorized),
+ "inp_decay": tf.zeros_like(var_vectorized),
+ "true_param": var,
+ }
+
+ if self.num_layers > 1:
+ # layer init tensor is [1 x layer1_cell_size]
+ state["layer"] = tf.ones([1, 1]) * self.init_vectors[1]
+
+ if self.dynamic_output_scale:
+ min_lr = self.init_lr_range[0]
+ max_lr = self.init_lr_range[1]
+ if min_lr == max_lr:
+ log_init_lr = tf.log(min_lr * tf.ones_like(var_vectorized))
+ else:
+ # Use a random offset to increase the likelihood that the average of the
+ # LRs for this variable is different from the LRs for other variables.
+ actual_vals = tf.random_uniform(var_vectorized.get_shape().as_list(),
+ np.log(min_lr) / 2.,
+ np.log(max_lr) / 2.,
+ seed=self.random_seed)
+ offset = tf.random_uniform((), np.log(min_lr) / 2., np.log(max_lr) / 2.,
+ seed=self.random_seed)
+ log_init_lr = actual_vals + offset
+ # Clip the log learning rate to the flag at the top end, and to
+ # (log(min int32) - 1) at the bottom
+ clipped = tf.clip_by_value(log_init_lr, -33, self.max_log_lr)
+ state["log_learning_rate"] = clipped
+
+ for i in range(self.num_gradient_scales):
+ state["grad_accum{}".format(i + 1)] = tf.zeros_like(var_vectorized)
+ state["ms{}".format(i + 1)] = tf.zeros_like(var_vectorized)
+
+ return state
+
+ def _initialize_global_state(self):
+ if self.num_layers < 3:
+ return []
+ rnn_global_init = tf.ones([1, 1]) * self.init_vectors[2]
+ return [rnn_global_init]
+
+ def _compute_updates(self, params, grads, states, global_state):
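+ """Computes updated parameters and states for every tensor in the problem.
+
+ Args:
+ params: List of the problem's parameter tensors.
+ grads: List of gradients, one per parameter tensor, with matching shapes.
+ states: List of per-tensor optimizer state dictionaries.
+ global_state: List holding the global (level 2) RNN state; empty when the
+ hierarchy has fewer than 3 levels.
+
+ Returns:
+ A tuple of (updated parameters, updated per-tensor states, a list with the
+ updated global state, updated attention locations).
+ """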
+ # Store the updated parameters and states.
+ updated_params = []
+ updated_attention = []
+ updated_states = []
+
+ with tf.variable_scope(opt.OPTIMIZER_SCOPE):
+
+ mean_log_lr = self._compute_mean_log_lr(states)
+
+ # Iterate over the layers.
+ for param, grad_unflat, state in zip(params, grads, states):
+
+ with tf.variable_scope("PerTensor", reuse=self.reuse_vars):
+ self.reuse_vars = True
+ grad = tf.reshape(grad_unflat, [-1, 1])
+
+ # Create the RNN input. We will optionally extend it with additional
+ # features such as curvature and gradient signal vs. noise.
+ (grads_scaled, mean_squared_gradients,
+ grads_accum) = self._compute_scaled_and_ms_grads(grad, state)
+ rnn_input = [g for g in grads_scaled]
+
+ self._extend_rnn_input(rnn_input, state, grads_scaled,
+ mean_squared_gradients, mean_log_lr)
+
+ # Concatenate any features we've collected.
+ rnn_input_tensor = tf.concat(rnn_input, 1)
+
+ layer_state, new_param_state = self._update_rnn_cells(
+ state, global_state, rnn_input_tensor,
+ len(rnn_input) != len(grads_scaled))
+
+ (scl_decay, inp_decay, new_log_lr, update_step, lr_attend,
+ attention_delta) = self._compute_rnn_state_projections(
+ state, new_param_state, grads_scaled)
+
+ # Apply updates and store state variables.
+ if self.use_attention:
+ truth = state["true_param"]
+ updated_param = truth - update_step
+ attention_step = tf.reshape(lr_attend * attention_delta,
+ truth.get_shape())
+ updated_attention.append(truth - attention_step)
+ else:
+ updated_param = param - update_step
+ updated_attention.append(updated_param)
+ updated_params.append(updated_param)
+
+ # Collect the new state.
+ new_state = {
+ "parameter": new_param_state,
+ "scl_decay": scl_decay,
+ "inp_decay": inp_decay,
+ "true_param": updated_param,
+ }
+ if layer_state is not None:
+ new_state["layer"] = layer_state
+
+ if self.dynamic_output_scale:
+ new_state["log_learning_rate"] = new_log_lr
+
+ for i in range(self.num_gradient_scales):
+ new_state["grad_accum{}".format(i + 1)] = grads_accum[i]
+ new_state["ms{}".format(i + 1)] = mean_squared_gradients[i]
+ updated_states.append(new_state)
+
+ updated_global_state = self._compute_updated_global_state([layer_state],
+ global_state)
+
+ return (updated_params, updated_states, [updated_global_state],
+ updated_attention)
+
+ def _compute_mean_log_lr(self, states):
+ """Computes the mean log learning rate across all variables."""
+ if self.use_problem_lr_mean and self.use_relative_lr:
+
+ sum_log_lr = 0.
+ count_log_lr = 0.
+ for state in states:
+ sum_log_lr += tf.reduce_sum(state["log_learning_rate"])
+ # Note: get_shape().num_elements() is the number of elements in the original tensor.
+ count_log_lr += state["log_learning_rate"].get_shape().num_elements()
+ return sum_log_lr / count_log_lr
+
+ def _compute_scaled_and_ms_grads(self, grad, state):
+ """Computes the scaled gradient and the mean squared gradients.
+
+ Gradients are also accumulated across different timescales if appropriate.
+
+ Args:
+ grad: The gradient tensor for this layer.
+ state: The optimizer state for this layer.
+
+ Returns:
+ The scaled gradients, mean squared gradients, and accumulated gradients.
+ """
+ input_decays = [state["inp_decay"]]
+ scale_decays = [state["scl_decay"]]
+ if self.use_multiple_scale_decays and self.num_gradient_scales > 1:
+ for i in range(self.num_gradient_scales - 1):
+ scale_decays.append(tf.sqrt(scale_decays[i]))
+
+ for i in range(self.num_gradient_scales - 1):
+ # Each accumulator on twice the timescale of the one before.
+ input_decays.append(tf.sqrt(input_decays[i]))
+ grads_accum = []
+ grads_scaled = []
+ mean_squared_gradients = []
+
+ # populate the scaled gradients and associated mean_squared values
+ if self.num_gradient_scales > 0:
+ for i, decay in enumerate(input_decays):
+ if self.num_gradient_scales == 1:
+ # We don't accumulate if no scales, just take the current gradient.
+ grad_accum = grad
+ else:
+ # The state vars are 1-indexed.
+ old_accum = state["grad_accum{}".format(i + 1)]
+ grad_accum = grad * (1. - decay) + old_accum * decay
+
+ grads_accum.append(grad_accum)
+
+ sd = scale_decays[i if self.use_multiple_scale_decays else 0]
+ grad_scaled, ms = utils.rms_scaling(grad_accum, sd,
+ state["ms{}".format(i + 1)],
+ update_ms=True)
+ grads_scaled.append(grad_scaled)
+ mean_squared_gradients.append(ms)
+
+ return grads_scaled, mean_squared_gradients, grads_accum
+
+ def _extend_rnn_input(self, rnn_input, state, grads_scaled,
+ mean_squared_gradients, mean_log_lr):
+ """Computes additional rnn inputs and adds them to the rnn_input list."""
+ if self.num_gradient_scales > 1 and self.use_grad_products:
+ # This gives a measure of curvature relative to input averaging
+ # lengthscale and to the learning rate
+ grad_products = [a * b for a, b in
+ zip(grads_scaled[:-1], grads_scaled[1:])]
+ rnn_input.extend([g for g in grad_products])
+
+ if self.use_log_means_squared:
+ log_means_squared = [tf.log(ms + 1e-16)
+ for ms in mean_squared_gradients]
+
+ avg = tf.reduce_mean(log_means_squared, axis=0)
+ # This gives a measure of the signal vs. noise contribution to the
+ # gradient, at the current averaging lengthscale. If all the noise
+ # is averaged out, and if updates are small, these will be 0.
+ mean_log_means_squared = [m - avg for m in log_means_squared]
+
+ rnn_input.extend([m for m in mean_log_means_squared])
+
+ if self.use_relative_lr or self.use_extreme_indicator:
+ if not self.dynamic_output_scale:
+ raise Exception("Relative LR and Extreme Indicator features "
+ "require dynamic_output_scale to be set to True.")
+ log_lr_vec = tf.reshape(state["log_learning_rate"], [-1, 1])
+ if self.use_relative_lr:
+ if self.use_problem_lr_mean:
+ # Learning rate of this dimension vs. rest of target problem.
+ relative_lr = log_lr_vec - mean_log_lr
+ else:
+ # Learning rate of this dimension vs. rest of tensor.
+ relative_lr = log_lr_vec - tf.reduce_mean(log_lr_vec)
+ rnn_input.append(relative_lr)
+ if self.use_extreme_indicator:
+ # Indicator of extremely large or extremely small learning rate.
+ extreme_indicator = (tf.nn.relu(log_lr_vec - tf.log(1.)) -
+ tf.nn.relu(tf.log(1e-6) - log_lr_vec))
+ rnn_input.append(extreme_indicator)
+
+ if self.use_lr_shortcut:
+ log_lr_vec = tf.reshape(state["log_learning_rate"], [-1, 1])
+ rnn_input.append(log_lr_vec - tf.log(1e-3))
+
+ def _update_rnn_cells(self, state, global_state, rnn_input_tensor,
+ use_additional_features):
+ """Updates the component RNN cells with the given state and tensor.
+
+ Args:
+ state: The current state of the optimizer.
+ global_state: The current global RNN state.
+ rnn_input_tensor: The input tensor to the RNN.
+ use_additional_features: Whether the rnn input tensor contains additional
+ features beyond the scaled gradients (affects whether the rnn input
+ tensor is used as input to the RNN.)
+
+ Returns:
+ layer_state: The new state of the per-tensor RNN.
+ new_param_state: The new state of the per-parameter RNN.
+ """
+ # lowest level (per parameter)
+ # input -> gradient for this parameter
+ # bias -> output from the layer RNN
+ with tf.variable_scope("Layer0_RNN"):
+ total_bias = None
+ if self.num_layers > 1:
+ sz = 3 * self.cells[0].state_size # size of the concatenated bias
+ param_bias = utils.affine([state["layer"]], sz,
+ scope="Param/Affine",
+ scale=FLAGS.hrnn_affine_scale,
+ random_seed=self.random_seed)
+ total_bias = param_bias
+ if self.num_layers == 3:
+ global_bias = utils.affine(global_state, sz,
+ scope="Global/Affine",
+ scale=FLAGS.hrnn_affine_scale,
+ random_seed=self.random_seed)
+ total_bias += global_bias
+
+ new_param_state, _ = self.cells[0](
+ rnn_input_tensor, state["parameter"], bias=total_bias)
+
+ if self.num_layers > 1:
+ # middle level (per layer)
+ # input -> average hidden state from each parameter in this layer
+ # bias -> output from the RNN at the global level
+ with tf.variable_scope("Layer1_RNN"):
+ if not use_additional_features:
+ # Restore old behavior and only add the mean of the new params.
+ layer_input = tf.reduce_mean(new_param_state, 0, keep_dims=True)
+ else:
+ layer_input = tf.reduce_mean(
+ tf.concat((new_param_state, rnn_input_tensor), 1), 0,
+ keep_dims=True)
+ if self.num_layers == 3:
+ sz = 3 * self.cells[1].state_size
+ layer_bias = utils.affine(global_state, sz,
+ scale=FLAGS.hrnn_affine_scale,
+ random_seed=self.random_seed)
+ layer_state, _ = self.cells[1](
+ layer_input, state["layer"], bias=layer_bias)
+ else:
+ layer_state, _ = self.cells[1](layer_input, state["layer"])
+ else:
+ layer_state = None
+
+ return layer_state, new_param_state
+
+ def _compute_rnn_state_projections(self, state, new_param_state,
+ grads_scaled):
+ """Computes the RNN state-based updates to parameters and update steps."""
+ # Compute the update direction (a linear projection of the RNN output).
+ update_weights = self.update_weights
+
+ update_delta = utils.project(new_param_state, update_weights)
+ if self.use_gradient_shortcut:
+ # Include an affine projection of just the direction of the gradient
+ # so that RNN hidden states are freed up to store more complex
+ # functions of the gradient and other parameters.
+ grads_scaled_tensor = tf.concat([g for g in grads_scaled], 1)
+ update_delta += utils.affine(grads_scaled_tensor, 1,
+ scope="GradsToDelta",
+ include_bias=False,
+ vec_mean=1. / len(grads_scaled),
+ random_seed=self.random_seed)
+ if self.dynamic_output_scale:
+ denom = tf.sqrt(tf.reduce_mean(update_delta ** 2) + 1e-16)
+
+ update_delta /= denom
+
+ if self.use_attention:
+ attention_weights = self.attention_weights
+ attention_delta = utils.project(new_param_state,
+ attention_weights)
+ if self.use_gradient_shortcut:
+ attention_delta += utils.affine(grads_scaled_tensor, 1,
+ scope="GradsToAttnDelta",
+ include_bias=False,
+ vec_mean=1. / len(grads_scaled),
+ random_seed=self.random_seed)
+ if self.dynamic_output_scale:
+ attention_delta /= tf.sqrt(
+ tf.reduce_mean(attention_delta ** 2) + 1e-16)
+ else:
+ attention_delta = None
+
+ # The updated decay is an affine projection of the hidden state.
+ scl_decay = utils.project(new_param_state, self.scl_decay_weights,
+ bias=self.scl_decay_bias,
+ activation=tf.nn.sigmoid)
+ # This is only used if learnable_decay and num_gradient_scales > 1
+ inp_decay = utils.project(new_param_state, self.inp_decay_weights,
+ bias=self.inp_decay_bias,
+ activation=tf.nn.sigmoid)
+
+ # Also update the learning rate.
+ lr_param, lr_attend, new_log_lr = self._compute_new_learning_rate(
+ state, new_param_state)
+
+ update_step = tf.reshape(lr_param * update_delta,
+ state["true_param"].get_shape())
+
+ return (scl_decay, inp_decay, new_log_lr, update_step, lr_attend,
+ attention_delta)
+
+ def _compute_new_learning_rate(self, state, new_param_state):
+ if self.dynamic_output_scale:
+ # Compute the change in learning rate (an affine projection of the
+ # RNN state, passed through a sigmoid or log depending on flags).
+ # Update the learning rate, w/ momentum.
+ lr_change = utils.project(new_param_state, self.lr_weights,
+ bias=self.lr_bias)
+ step_log_lr = state["log_learning_rate"] + lr_change
+
+ # Clip the log learning rate to the flag at the top end, and to
+ # (log(min int32) - 1) at the bottom
+
+ # Check out this hack: we want to be able to compute the gradient
+ # of the downstream result w.r.t lr weights and bias, even if the
+ # value of step_log_lr is outside the clip range. So we clip,
+ # subtract off step_log_lr, and wrap all that in a stop_gradient so
+ # TF never tries to take the gradient of the clip... or the
+ # subtraction. Then we add BACK step_log_lr so that downstream still
+ # receives the clipped value. But the GRADIENT of step_log_lr will
+ # be the gradient of the unclipped value, which we added back in
+ # after stop_gradients.
+ step_log_lr += tf.stop_gradient(
+ tf.clip_by_value(step_log_lr, -33, self.max_log_lr)
+ - step_log_lr)
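+      # Net effect: downstream ops see the clipped value in the forward pass,
+      # while the gradient is that of the unclipped step_log_lr, because the
+      # stop_gradient term contributes zero gradient.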
+
+ lr_momentum_logit = tf.get_variable(
+ "learning_rate_momentum_logit",
+ initializer=FLAGS.learning_rate_momentum_logit_init)
+ lrm = tf.nn.sigmoid(lr_momentum_logit)
+ new_log_lr = (lrm * state["log_learning_rate"] +
+ (1. - lrm) * step_log_lr)
+ param_stepsize_offset = tf.get_variable("param_stepsize_offset",
+ initializer=-1.)
+ lr_param = tf.exp(step_log_lr + param_stepsize_offset)
+ lr_attend = tf.exp(step_log_lr) if self.use_attention else lr_param
+ else:
+      # Dynamic output scale is off; the learning rate is a sigmoid projection
+      # of the RNN state, scaled to lie in (0, 2).
+ lr_param = 2. * utils.project(new_param_state, self.lr_weights,
+ bias=self.lr_bias,
+ activation=tf.nn.sigmoid)
+ new_log_lr = None
+ lr_attend = lr_param
+
+ return lr_param, lr_attend, new_log_lr
+
+ def _compute_updated_global_state(self, layer_states, global_state):
+    """Computes the new global state given the layer states and the old state.
+
+ Args:
+ layer_states: The current layer states.
+ global_state: The old global state.
+
+ Returns:
+ The updated global state.
+ """
+ updated_global_state = []
+ if self.num_layers == 3:
+ # highest (global) layer
+ # input -> average hidden state from each layer-specific RNN
+ # bias -> None
+ with tf.variable_scope("Layer2_RNN", reuse=self.reuse_global_state):
+ self.reuse_global_state = True
+ global_input = tf.reduce_mean(tf.concat(layer_states, 0), 0,
+ keep_dims=True)
+ updated_global_state, _ = self.cells[2](global_input, global_state[0])
+ return updated_global_state
+
+ def apply_gradients(self, grads_and_vars, global_step=None, name=None):
+ """Overwrites the tf.train.Optimizer interface for applying gradients."""
+
+ # Pull out the variables.
+ grads_and_vars = tuple(grads_and_vars) # Make sure repeat iteration works.
+ for g, v in grads_and_vars:
+ if not isinstance(g, (tf.Tensor, tf.IndexedSlices, type(None))):
+ raise TypeError(
+ "Gradient must be a Tensor, IndexedSlices, or None: %s" % g)
+ if not isinstance(v, tf.Variable):
+ raise TypeError(
+ "Variable must be a tf.Variable: %s" % v)
+ if g is not None:
+ self._assert_valid_dtypes([g, v])
+ var_list = [v for g, v in grads_and_vars if g is not None]
+ if not var_list:
+ raise ValueError("No gradients provided for any variable: %s" %
+ (grads_and_vars,))
+
+ # Create slots for the variables.
+ with tf.control_dependencies(None):
+ self._create_slots(var_list)
+
+ # Store update ops in this list.
+ with tf.op_scope([], name, self._name) as name:
+
+ # Prepare the global state.
+ with tf.variable_scope(self._name, reuse=self.reuse_global_state):
+ gs = self._initialize_global_state()
+ if gs:
+ global_state = [tf.get_variable("global_state", initializer=gs[0])]
+ else:
+ global_state = []
+
+ # Get the states for each variable in the list.
+ states = [{key: self.get_slot(var, key) for key in self.get_slot_names()}
+ for var in var_list]
+
+ # Compute updated values.
+ grads, params = zip(*grads_and_vars)
+ args = (params, grads, states, global_state)
+ updates = self._compute_updates(*args)
+ new_params, new_states, new_global_state, new_attention = updates
+ # Assign op for new global state.
+ update_ops = [tf.assign(gs, ngs)
+ for gs, ngs in zip(global_state, new_global_state)]
+
+ # Create the assign ops for the params and state variables.
+ args = (params, states, new_params, new_attention, new_states)
+ for var, state, new_var, new_var_attend, new_state in zip(*args):
+ # Assign updates to the state variables.
+ state_assign_ops = [tf.assign(state_var, new_state[key])
+ for key, state_var in state.items()]
+
+ # Update the parameter.
+ with tf.control_dependencies(state_assign_ops):
+ if self.use_attention:
+ # Assign to the attended location, rather than the actual location
+ # so that the gradients are computed where attention is.
+ param_update_op = var.assign(new_var_attend)
+ else:
+ param_update_op = var.assign(new_var)
+
+ with tf.name_scope("update_" + var.op.name): #, tf.colocate_with(var):
+ update_ops.append(param_update_op)
+
+ real_params = [self.get_slot(var, "true_param") for var in var_list]
+
+ if global_step is None:
+ # NOTE: if using the optimizer in a non-test-optimizer setting (e.g.
+ # on Inception), remove the real_params return value. Otherwise
+ # the code will throw an error.
+ return self._finish(update_ops, name), real_params
+ else:
+ with tf.control_dependencies([self._finish(update_ops, "update")]):
+ return state_ops.assign_add(global_step, 1, name=name).op, real_params
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/learning_rate_schedule.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/learning_rate_schedule.py"
new file mode 100644
index 00000000..53db8add
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/learning_rate_schedule.py"
@@ -0,0 +1,60 @@
+# Copyright 2017 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""A trainable optimizer that learns a learning rate schedule."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from learned_optimizer.optimizer import trainable_optimizer
+
+
+class LearningRateSchedule(trainable_optimizer.TrainableOptimizer):
+ """Learns a learning rate schedule over a fixed number of iterations."""
+
+ def __init__(self, initial_rate=0.0, n_steps=1000, **kwargs):
+ """Initializes the learning rates."""
+ self.max_index = tf.constant(n_steps-1, dtype=tf.int32)
+
+ with tf.variable_scope(trainable_optimizer.OPTIMIZER_SCOPE):
+ initializer = tf.constant_initializer(initial_rate)
+ self.learning_rates = tf.get_variable("learning_rates",
+ shape=([n_steps,]),
+ initializer=initializer)
+
+ super(LearningRateSchedule, self).__init__("LRS", ["itr"], **kwargs)
+
+ def _initialize_state(self, var):
+ """Return a dictionary mapping names of state variables to their values."""
+ return {
+ "itr": tf.constant(0, dtype=tf.int32),
+ }
+
+ def _compute_update(self, param, grad, state):
+ """Compute updates of parameters."""
+
+ # get the learning rate at the current index, if the index
+ # is greater than the number of available learning rates,
+ # use the last one
+ index = tf.minimum(state["itr"], self.max_index)
+ learning_rate = tf.gather(self.learning_rates, index)
+
+ # update the parameters: parameter - learning_rate * gradient
+ updated_param = param - tf.scalar_mul(learning_rate, grad)
+
+ return updated_param, {"itr": state["itr"] + 1}
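+
+# Illustrative usage sketch (grads and params below stand in for caller-provided
+# tensors and are not defined in this file):
+#   opt = LearningRateSchedule(initial_rate=0.01, n_steps=500)
+#   train_op = opt.apply_gradients(zip(grads, params))
+# The per-step entries of the learning_rates variable are what meta-training
+# learns.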
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/rnn_cells.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/rnn_cells.py"
new file mode 100644
index 00000000..3d68de04
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/rnn_cells.py"
@@ -0,0 +1,68 @@
+# Copyright 2017 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Custom RNN cells for hierarchical RNNs."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from learned_optimizer.optimizer import utils
+
+
+class BiasGRUCell(tf.contrib.rnn.RNNCell):
+ """GRU cell (cf. http://arxiv.org/abs/1406.1078) with an additional bias."""
+
+ def __init__(self, num_units, activation=tf.tanh, scale=0.1,
+ gate_bias_init=0., random_seed=None):
+ self._num_units = num_units
+ self._activation = activation
+ self._scale = scale
+ self._gate_bias_init = gate_bias_init
+ self._random_seed = random_seed
+
+ @property
+ def state_size(self):
+ return self._num_units
+
+ @property
+ def output_size(self):
+ return self._num_units
+
+ def __call__(self, inputs, state, bias=None):
+ # Split the injected bias vector into a bias for the r, u, and c updates.
+ if bias is None:
+ bias = tf.zeros((1, 3))
+
+ r_bias, u_bias, c_bias = tf.split(bias, 3, 1)
+
+ with tf.variable_scope(type(self).__name__): # "BiasGRUCell"
+ with tf.variable_scope("gates"): # Reset gate and update gate.
+ proj = utils.affine([inputs, state], 2 * self._num_units,
+ scale=self._scale, bias_init=self._gate_bias_init,
+ random_seed=self._random_seed)
+ r_lin, u_lin = tf.split(proj, 2, 1)
+ r, u = tf.nn.sigmoid(r_lin + r_bias), tf.nn.sigmoid(u_lin + u_bias)
+
+ with tf.variable_scope("candidate"):
+ proj = utils.affine([inputs, r * state], self._num_units,
+ scale=self._scale, random_seed=self._random_seed)
+ c = self._activation(proj + c_bias)
+
+ new_h = u * state + (1 - u) * c
+
+ return new_h, new_h
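+
+# Illustrative usage sketch (sizes are arbitrary assumptions): the bias argument
+# is a single concatenated vector that is split into per-gate (r, u, c) biases,
+# so it can be produced by a higher-level RNN as in the hierarchical optimizer.
+#   cell = BiasGRUCell(num_units=8)
+#   inputs = tf.zeros((1, 4))
+#   state = tf.zeros((1, 8))
+#   gate_bias = tf.zeros((1, 3 * 8))
+#   new_state, _ = cell(inputs, state, bias=gate_bias)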
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/trainable_adam.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/trainable_adam.py"
new file mode 100644
index 00000000..638217f1
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/trainable_adam.py"
@@ -0,0 +1,210 @@
+# Copyright 2017 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""A trainable ADAM optimizer that learns its internal variables."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+import tensorflow as tf
+
+from learned_optimizer.optimizer import trainable_optimizer as opt
+from learned_optimizer.optimizer import utils
+
+
+class TrainableAdam(opt.TrainableOptimizer):
+ """Adam optimizer with learnable scalar parameters.
+
+  See Kingma et al., 2014 for the algorithm (http://arxiv.org/abs/1412.6980).
+ """
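+  # The standard Adam recurrences whose scalar constants are learned here
+  # (Kingma & Ba, 2014):
+  #   m_t = beta1 * m_{t-1} + (1 - beta1) * g_t
+  #   v_t = beta2 * v_{t-1} + (1 - beta2) * g_t^2
+  #   m_hat = m_t / (1 - beta1^t),   v_hat = v_t / (1 - beta2^t)
+  #   theta_t = theta_{t-1} - learning_rate * m_hat / (sqrt(v_hat) + epsilon)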
+
+ def __init__(self,
+ learning_rate=1e-3,
+ beta1=0.9,
+ beta2=0.999,
+ epsilon=1e-8,
+ **kwargs):
+ """Initializes the TrainableAdam optimizer with the given initial values.
+
+ Args:
+ learning_rate: The learning rate (default: 1e-3).
+ beta1: The exponential decay rate for the 1st moment estimates.
+ beta2: The exponential decay rate for the 2nd moment estimates.
+ epsilon: A small constant for numerical stability.
+ **kwargs: Any additional keyword arguments for TrainableOptimizer.
+
+ Raises:
+ ValueError: if the learning rate or epsilon is not positive
+ ValueError: if beta1 or beta2 is not in (0, 1).
+ """
+ if learning_rate <= 0:
+ raise ValueError("Learning rate must be positive.")
+ if epsilon <= 0:
+ raise ValueError("Epsilon must be positive.")
+ if not 0 < beta1 < 1 or not 0 < beta2 < 1:
+ raise ValueError("Beta values must be between 0 and 1, exclusive.")
+
+ self._reuse_vars = False
+
+ with tf.variable_scope(opt.OPTIMIZER_SCOPE):
+ def inv_sigmoid(x):
+ return np.log(x / (1.0 - x))
+
+ self.log_learning_rate = tf.get_variable(
+ "log_learning_rate",
+ shape=[],
+ initializer=tf.constant_initializer(np.log(learning_rate)))
+ self.beta1_logit = tf.get_variable(
+ "beta1_logit",
+ shape=[],
+ initializer=tf.constant_initializer(inv_sigmoid(beta1)))
+ self.beta2_logit = tf.get_variable(
+ "beta2_logit",
+ shape=[],
+ initializer=tf.constant_initializer(inv_sigmoid(beta2)))
+ self.log_epsilon = tf.get_variable(
+ "log_epsilon",
+ shape=[],
+ initializer=tf.constant_initializer(np.log(epsilon)))
+
+ # Key names are derived from Algorithm 1 described in
+ # https://arxiv.org/pdf/1412.6980.pdf
+ state_keys = ["m", "v", "t"]
+ super(TrainableAdam, self).__init__("Adam", state_keys, **kwargs)
+
+ def _initialize_state(self, var):
+ """Returns a dictionary mapping names of state variables to their values."""
+ vectorized_shape = var.get_shape().num_elements(), 1
+
+ return {key: tf.zeros(vectorized_shape) for key in self.state_keys}
+
+ def _compute_update(self, param, grad, state):
+ """Calculates the new internal state and parameters.
+
+ If the gradient is sparse, updates the appropriate slices in the internal
+ state and stacks the update tensor.
+
+ Args:
+ param: A tensor of parameters.
+ grad: A tensor of gradients with the same shape as param.
+ state: A dictionary containing any state for the optimizer.
+
+ Returns:
+ updated_param: The updated parameters.
+ updated_state: The updated state variables in a dictionary.
+ """
+
+ with tf.variable_scope(opt.OPTIMIZER_SCOPE) as scope:
+
+ if self._reuse_vars:
+ scope.reuse_variables()
+ else:
+ self._reuse_vars = True
+
+ (grad_values, first_moment, second_moment, timestep, grad_indices
+ ) = self._extract_gradients_and_internal_state(
+ grad, state, tf.shape(param))
+
+ beta1 = tf.nn.sigmoid(self.beta1_logit)
+ beta2 = tf.nn.sigmoid(self.beta2_logit)
+ epsilon = tf.exp(self.log_epsilon) + 1e-10
+ learning_rate = tf.exp(self.log_learning_rate)
+
+ old_grad_shape = tf.shape(grad_values)
+ grad_values = tf.reshape(grad_values, [-1, 1])
+
+ new_timestep = timestep + 1
+ new_first_moment = self._update_adam_estimate(
+ first_moment, grad_values, beta1)
+      new_second_moment = self._update_adam_estimate(
+          second_moment, tf.square(grad_values), beta2)
+
+ debiased_first_moment = self._debias_adam_estimate(
+ new_first_moment, beta1, new_timestep)
+ debiased_second_moment = self._debias_adam_estimate(
+ new_second_moment, beta2, new_timestep)
+
+ # Propagating through the square root of 0 is very bad for stability.
+ update = (learning_rate * debiased_first_moment /
+ (tf.sqrt(debiased_second_moment + 1e-10) + epsilon))
+
+ update = tf.reshape(update, old_grad_shape)
+
+ if grad_indices is not None:
+ param_shape = tf.shape(param)
+ update = utils.stack_tensor(
+ update, grad_indices, param, param_shape[:1])
+ new_first_moment = utils.update_slices(
+ new_first_moment, grad_indices, state["m"], param_shape)
+ new_second_moment = utils.update_slices(
+ new_second_moment, grad_indices, state["v"], param_shape)
+ new_timestep = utils.update_slices(
+ new_timestep, grad_indices, state["t"], param_shape)
+
+ new_param = param - update
+
+ # collect the update and new state
+ new_state = {
+ "m": new_first_moment,
+ "v": new_second_moment,
+ "t": new_timestep
+ }
+
+ return new_param, new_state
+
+ def _update_adam_estimate(self, estimate, value, beta):
+ """Returns a beta-weighted average of estimate and value."""
+ return (beta * estimate) + ((1 - beta) * value)
+
+ def _debias_adam_estimate(self, estimate, beta, t_step):
+ """Returns a debiased estimate based on beta and the timestep."""
+ return estimate / (1 - tf.pow(beta, t_step))
+
+ def _extract_gradients_and_internal_state(self, grad, state, param_shape):
+ """Extracts the gradients and relevant internal state.
+
+ If the gradient is sparse, extracts the appropriate slices from the state.
+
+ Args:
+ grad: The current gradient.
+ state: The current state.
+ param_shape: The shape of the parameter (used if gradient is sparse).
+
+ Returns:
+ grad_values: The gradient value tensor.
+ first_moment: The first moment tensor (internal state).
+ second_moment: The second moment tensor (internal state).
+ timestep: The current timestep (internal state).
+ grad_indices: The indices for the gradient tensor, if sparse.
+ None otherwise.
+ """
+ grad_values = grad
+ grad_indices = None
+ first_moment = state["m"]
+ second_moment = state["v"]
+ timestep = state["t"]
+
+ if isinstance(grad, tf.IndexedSlices):
+ grad_indices, grad_values = utils.accumulate_sparse_gradients(grad)
+ first_moment = utils.slice_tensor(
+ first_moment, grad_indices, param_shape)
+ second_moment = utils.slice_tensor(
+ second_moment, grad_indices, param_shape)
+ timestep = utils.slice_tensor(timestep, grad_indices, param_shape)
+
+ return grad_values, first_moment, second_moment, timestep, grad_indices
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/trainable_optimizer.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/trainable_optimizer.py"
new file mode 100644
index 00000000..955112a9
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/trainable_optimizer.py"
@@ -0,0 +1,574 @@
+# Copyright 2017 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""A base class definition for trainable optimizers."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import collections
+import itertools
+
+import tensorflow as tf
+
+from tensorflow.python.framework import tensor_shape
+
+OPTIMIZER_SCOPE = "LOL"
+_LOCAL_VARIABLE_PREFIX = "local_state_"
+_LOCAL_STATE_VARIABLE_COLLECTION = "local_state_collection"
+EPSILON = 1e-6
+
+
+class TrainableOptimizer(tf.train.Optimizer):
+ """Base class for trainable optimizers.
+
+ A trainable optimizer is an optimizer that has parameters that can themselves
+ be learned (meta-optimized).
+
+ Subclasses must implement:
+ _compute_update(self, param, grad, state)
+ """
+
+ def __init__(self, name, state_keys, use_attention=False,
+ use_log_objective=False, obj_train_max_multiplier=-1,
+ use_second_derivatives=True, use_numerator_epsilon=False,
+ **kwargs):
+ """Initializes the optimizer with the given name and settings.
+
+ Args:
+ name: The name string for this optimizer.
+ state_keys: The names of any required state variables (list)
+      use_attention: Whether this optimizer uses attention (Default: False)
+ use_log_objective: Whether this optimizer uses the logarithm of the
+ objective when computing the loss (Default: False)
+ obj_train_max_multiplier: The maximum multiplier for the increase in the
+ objective before meta-training is stopped. If <= 0, meta-training is
+ not stopped early. (Default: -1)
+ use_second_derivatives: Whether this optimizer uses second derivatives in
+ meta-training. This should be set to False if some second derivatives
+ in the meta-training problem set are not defined in Tensorflow.
+ (Default: True)
+ use_numerator_epsilon: Whether to use epsilon in the numerator when
+ scaling the problem objective during meta-training. (Default: False)
+ **kwargs: Any additional keyword arguments.
+ """
+ self.use_second_derivatives = use_second_derivatives
+ self.state_keys = sorted(state_keys)
+ self.use_attention = use_attention
+ self.use_log_objective = use_log_objective
+ self.obj_train_max_multiplier = obj_train_max_multiplier
+ self.use_numerator_epsilon = use_numerator_epsilon
+
+ use_locking = False
+ super(TrainableOptimizer, self).__init__(use_locking, name)
+
+ def _create_slots(self, var_list):
+ """Creates all slots needed by the variables.
+
+ Args:
+ var_list: A list of `Variable` objects.
+ """
+ for var in var_list:
+ init_states = self._initialize_state(var)
+ for slot_name in sorted(init_states):
+ slot_var_name = "{}_{}".format(self.get_name(), slot_name)
+ value = init_states[slot_name]
+ self._get_or_make_slot(var, value, slot_name, slot_var_name)
+
+ def _initialize_state(self, var):
+ """Initializes any state required for this variable.
+
+ Args:
+ var: a tensor containing parameters to be optimized
+
+ Returns:
+ state: a dictionary mapping state keys to initial state values (tensors)
+ """
+ return {}
+
+ def _initialize_global_state(self):
+ """Initializes any global state values."""
+ return []
+
+ def _apply_common(self, grad, var):
+ """Applies the optimizer updates to the variables.
+
+ Note: this should only get called via _apply_dense or _apply_sparse when
+ using the optimizer via optimizer.minimize or optimizer.apply_gradients.
+ During meta-training, the optimizer.train function should be used to
+ construct an optimization path that is differentiable.
+
+ Args:
+ grad: A tensor representing the gradient.
+ var: A tf.Variable with the same shape as grad.
+
+ Returns:
+ update_op: A tensorflow op that assigns new values to the variable, and
+ also defines dependencies that update the state variables for the
+ optimizer.
+ """
+ state = {key: self.get_slot(var, key) for key in self.get_slot_names()}
+ new_var, new_state = self._compute_update(var, grad, state)
+ state_assign_ops = [tf.assign(state_var, new_state[key])
+ for key, state_var in state.items()]
+ with tf.control_dependencies(state_assign_ops):
+ update_op = var.assign(new_var)
+
+ return update_op
+
+ def _apply_dense(self, grad, var):
+ """Adds ops to apply dense gradients to 'var'."""
+ return self._apply_common(grad, var)
+
+ def _apply_sparse(self, grad, var):
+ """Adds ops to apply sparse gradients to 'var'."""
+ return self._apply_common(grad, var)
+
+ def _compute_update(self, param, grad, state):
+ """Computes the update step for optimization.
+
+ Args:
+ param: A tensor of parameters to optimize.
+ grad: The gradient tensor of the objective with respect to the parameters.
+ (It has the same shape as param.)
+ state: A dictionary containing any extra state required by the optimizer.
+
+ Returns:
+ updated_params: The updated parameters.
+ updated_state: The dictionary of updated state variable(s).
+ """
+ raise NotImplementedError
+
+ def _compute_updates(self, params, grads, states, global_state):
+ """Maps the compute update functions for each parameter.
+
+    This function can be overridden by a subclass if the subclass wants to
+ combine information across the different parameters in the list.
+
+ Args:
+ params: A list of parameter tensors.
+ grads: A list of gradients corresponding to each parameter.
+ states: A list of state variables corresponding to each parameter.
+ global_state: A list of global state variables for the problem.
+
+ Returns:
+ new_params: The updated parameters.
+ new_states: The updated states.
+ new_global_state: The updated global state.
+ attention_params: A list of attention parameters. This is the same as
+ new_params if the optimizer does not use attention.
+ """
+ # Zip up the arguments to _compute_update.
+ args = zip(params, grads, states)
+
+ # Call compute_update on each set of parameter/gradient/state args.
+ new_params, new_states = zip(*list(
+ itertools.starmap(self._compute_update, args)))
+
+ # Global state is unused in the basic case, just pass it through.
+ return list(new_params), list(new_states), global_state, list(new_params)
+
+ def train(self, problem, dataset):
+ """Creates graph operations to train the optimizer.
+
+ Args:
+ problem: A problem_generator.Problem instance to train on.
+ dataset: A datasets.Dataset tuple to use when training.
+
+ Returns:
+ meta_objective: A tensorflow operation for computing the meta-objective
+ obj_weights: A tensor placeholder for feeding in the objective weights
+ obj_values: The subproblem objective values during optimization
+ batches: The batch indexes tensor for overriding with feed_dict
+ first_unroll: A placeholder signifying if this is a first unroll
+ (this will propagate the gradients slightly differently).
+ reset_state: A placeholder signifying that the rnn state should be reset.
+ output_state: The final state of the optimizer
+ init_loop_vars_to_override: Local variables that can be assigned to
+ propagate the optimizer and problem state for unrolling
+ final_loop_vals: Final values of the loop variables that can be
+ assigned to init_loop_vars_to_override.
+ """
+
+ # Placeholder for the objective weights
+ obj_weights = tf.placeholder(tf.float32)
+ num_iter = tf.shape(obj_weights)[0]
+
+ # Unpack the dataset and generate the minibatches for training
+ data, labels = dataset
+ # Convert the ndarrays to tensors so we can pass them back in via feed_dict
+ data = tf.constant(data)
+ labels = tf.constant(labels)
+ batches = tf.placeholder(tf.int32)
+ first_unroll = tf.placeholder_with_default(False, [])
+ reset_state = tf.placeholder_with_default(False, [])
+
+ training_output = collections.namedtuple("TrainingOutput",
+ ["metaobj",
+ "obj_weights",
+ "problem_objectives",
+ "initial_obj",
+ "batches",
+ "first_unroll",
+ "reset_state",
+ "output_state",
+ "init_loop_vars",
+ "output_loop_vars"])
+
+ def loop_body(itr, obj_accum, params, attend_params, flattened_states,
+ global_state, all_obj, unused_init_obj, data,
+ labels, batches):
+ """Body of the meta-training while loop for optimizing a sub-problem.
+
+ Args:
+ itr: The current meta-training iteration.
+ obj_accum: The accumulated objective over all training steps so far.
+ params: The parameters of the sub-problem.
+ attend_params: The parameters of the sub-problems at the attended
+ location.
+ flattened_states: The states of the trainable optimizer, sorted and
+ flattened into a list (since a while loop can't handle nested lists
+ or dictionaries).
+ global_state: The global state of the optimizer.
+ all_obj: The list of all objective values in the training process.
+ unused_init_obj: The initial objective (unused here, but needed in the
+ variable list because it's used in a stopping condition in the
+ loop_cond.)
+ data: The data for this problem.
+ labels: The labels corresponding to the data.
+ batches: The batch indexes needed for shuffled minibatch creation.
+
+ Returns:
+ itr: The updated meta-training iteration.
+ obj_accum: The updated accumulated objective.
+ params: The new parameters of the sub-problem.
+ attend_params: The new parameters of the sub-problems at the attended
+ location.
+ flattened_states: The new states of the trainable optimizer.
+ global_state: The updated global state.
+        all_obj: The updated list of all objective values.
+ unused_init_obj: The initial objective.
+ data: The data for this problem.
+ labels: The labels corresponding to the data.
+ batches: The batch indexes needed for shuffled minibatch creation.
+ """
+ batch_indices = tf.gather(batches, itr)
+ batch_data = tf.gather(data, batch_indices)
+ batch_labels = tf.gather(labels, batch_indices)
+
+ # Compute the objective over the entire dataset (full batch).
+ obj = problem.objective(params, data, labels)
+
+ # Compute the gradients on just the current batch
+ if self.use_attention:
+ current_obj = problem.objective(attend_params, batch_data, batch_labels)
+ grads = problem.gradients(current_obj, attend_params)
+ else:
+ current_obj = problem.objective(params, batch_data, batch_labels)
+ grads = problem.gradients(current_obj, params)
+
+ if not self.use_second_derivatives:
+ new_grads = []
+ for grad in grads:
+ if isinstance(grad, tf.IndexedSlices):
+ new_grads.append(
+ tf.IndexedSlices(tf.stop_gradient(grad.values), grad.indices))
+ else:
+ new_grads.append(tf.stop_gradient(grad))
+ grads = new_grads
+
+ # store the objective value for the entire problem at each iteration
+ all_obj = tf.concat([all_obj, tf.reshape(obj, (1,))], 0)
+
+ # accumulate the weighted objective for the entire dataset
+ acc = tf.gather(obj_weights, itr) * obj
+
+ obj_accum = tf.add(obj_accum, acc)
+ # Set the shape to keep the shape invariant for obj_accum. Without this,
+ # the graph builder thinks the tensor shape is unknown on the 2nd iter.
+ obj_accum.set_shape([])
+
+ # convert flattened_states to dictionaries
+ dict_states = [dict(zip(self.state_keys, flat_state))
+ for flat_state in flattened_states]
+
+ # compute the new parameters and states
+ args = (params, grads, dict_states, global_state)
+ updates = self._compute_updates(*args)
+ new_params, new_states, new_global_state, new_attend_params = updates
+
+ # flatten the states
+ new_flattened_states = map(flatten_and_sort, new_states)
+
+ return [itr + 1, obj_accum, new_params, new_attend_params,
+ new_flattened_states, new_global_state, all_obj, unused_init_obj,
+ data, labels, batches]
+
+ def loop_cond(itr, obj_accum, unused_params, unused_attend_params,
+ unused_flattened_states, unused_global_state, all_obj,
+ init_obj, *args):
+ """Termination conditions of the sub-problem optimization loop."""
+ del args # unused
+
+ cond1 = tf.less(itr, num_iter) # We've run < num_iter times
+ cond2 = tf.is_finite(obj_accum) # The objective is still finite
+
+ if self.obj_train_max_multiplier > 0:
+ current_obj = tf.gather(all_obj, itr)
+ # Account for negative init_obj too
+ max_diff = (self.obj_train_max_multiplier - 1) * tf.abs(init_obj)
+ max_obj = init_obj + max_diff
+ # The objective is a reasonable multiplier of the original objective
+ cond3 = tf.less(current_obj, max_obj)
+
+ return tf.logical_and(tf.logical_and(cond1, cond2), cond3,
+ name="training_loop_cond")
+ else:
+ return tf.logical_and(cond1, cond2, name="training_loop_cond")
+
+ init = self._initialize_training_loop_parameters(
+ problem, data, labels, batches, first_unroll, reset_state)
+ loop_vars, invariants, initial_obj, init_loop_vars_to_override = init
+
+ loop_output = tf.while_loop(loop_cond, loop_body, loop_vars,
+ swap_memory=True, shape_invariants=invariants)
+ meta_obj, problem_objectives = loop_output[1], loop_output[6]
+
+ # The meta objective is normalized by the initial objective at the start of
+ # the series of partial unrolls.
+ scaled_meta_objective = self.scale_objective(
+ meta_obj, problem_objectives, initial_obj)
+
+ final_loop_vals = (
+ [initial_obj] + loop_output[2] + loop_output[3] + loop_output[5])
+ final_loop_vals.extend(itertools.chain(*loop_output[4]))
+
+ return training_output(scaled_meta_objective,
+ obj_weights,
+ problem_objectives,
+ initial_obj,
+ batches,
+ first_unroll,
+ reset_state,
+ loop_output[4],
+ init_loop_vars_to_override,
+ final_loop_vals)
+
+ def _initialize_training_loop_parameters(
+ self, problem, data, labels, batches, first_unroll, reset_state):
+ """Initializes the vars and params needed for the training process.
+
+ Args:
+ problem: The problem being optimized.
+ data: The data for the problem.
+ labels: The corresponding labels for the data.
+ batches: The indexes needed to create shuffled batches of the data.
+ first_unroll: Whether this is the first unroll in a partial unrolling.
+ reset_state: Whether RNN state variables should be reset.
+
+ Returns:
+ loop_vars: The while loop variables for training.
+ invariants: The corresponding variable shapes (required by while loop).
+ initial_obj: The initial objective (used later for scaling).
+ init_loop_vars_to_override: The loop vars that can be overridden when
+ performing training via partial unrolls.
+ """
+ # Extract these separately so we don't have to make inter-variable
+ # dependencies.
+ initial_tensors = problem.init_tensors()
+
+ return_initial_tensor_values = first_unroll
+ initial_params_vars, initial_params = local_state_variables(
+ initial_tensors, return_initial_tensor_values)
+ initial_attend_params_vars, initial_attend_params = local_state_variables(
+ initial_tensors, return_initial_tensor_values)
+    # Recalculate the initial objective on each partial unroll using the new
+    # initial_params; initial_obj holds the value from the very first unroll.
+ initial_obj_init = problem.objective(initial_params, data, labels)
+ return_initial_obj_init = first_unroll
+ [initial_obj_var], [initial_obj] = local_state_variables(
+ [initial_obj_init], return_initial_obj_init)
+
+ # Initialize the loop variables.
+ initial_itr = tf.constant(0, dtype=tf.int32)
+ initial_meta_obj = tf.constant(0, dtype=tf.float32)
+ # N.B. the use of initial_obj_init here rather than initial_obj
+ initial_problem_objectives = tf.reshape(initial_obj_init, (1,))
+
+ # Initialize the extra state.
+ initial_state_vars = []
+ initial_state = []
+ state_shapes = []
+ return_initial_state_values = reset_state
+ for param in initial_tensors:
+ param_state_vars, param_state = local_state_variables(
+ flatten_and_sort(self._initialize_state(param)),
+ return_initial_state_values)
+
+ initial_state_vars.append(param_state_vars)
+ initial_state.append(param_state)
+ state_shapes.append([f.get_shape() for f in param_state])
+
+ # Initialize any global (problem-level) state.
+ initial_global_state_vars, initial_global_state = local_state_variables(
+ self._initialize_global_state(), return_initial_state_values)
+
+ global_shapes = []
+ for item in initial_global_state:
+ global_shapes.append(item.get_shape())
+
+ # build the list of loop variables:
+ loop_vars = [
+ initial_itr,
+ initial_meta_obj,
+ initial_params, # Local variables.
+ initial_attend_params, # Local variables.
+ initial_state, # Local variables.
+ initial_global_state, # Local variables.
+ initial_problem_objectives,
+ initial_obj, # Local variable.
+ data,
+ labels,
+ batches,
+ ]
+
+ invariants = [
+ initial_itr.get_shape(),
+ initial_meta_obj.get_shape(),
+ [t.get_shape() for t in initial_params],
+ [t.get_shape() for t in initial_attend_params],
+ state_shapes,
+ global_shapes,
+ tensor_shape.TensorShape([None]), # The problem objectives list grows
+ initial_obj.get_shape(),
+ tensor_shape.unknown_shape(), # Placeholder shapes are unknown
+ tensor_shape.unknown_shape(),
+ tensor_shape.unknown_shape(),
+ ]
+
+ # Initialize local variables that we will override with final tensors at the
+ # next iter.
+ init_loop_vars_to_override = (
+ [initial_obj_var] + initial_params_vars + initial_attend_params_vars +
+ initial_global_state_vars)
+ init_loop_vars_to_override.extend(itertools.chain(*initial_state_vars))
+
+ return loop_vars, invariants, initial_obj, init_loop_vars_to_override
+
+ def scale_objective(self, total_obj, all_objs, initial_obj,
+ obj_scale_eps=1e-6):
+ """Normalizes the objective based on the initial objective value.
+
+ Args:
+ total_obj: The total accumulated objective over the training run.
+ all_objs: A list of all the individual objectives over the training run.
+ initial_obj: The initial objective value.
+ obj_scale_eps: The epsilon value to use in computations for stability.
+
+ Returns:
+ The scaled objective as a single value.
+ """
+ if self.use_log_objective:
+ if self.use_numerator_epsilon:
+ scaled_problem_obj = ((all_objs + obj_scale_eps) /
+ (initial_obj + obj_scale_eps))
+ log_scaled_problem_obj = tf.log(scaled_problem_obj)
+ else:
+ scaled_problem_obj = all_objs / (initial_obj + obj_scale_eps)
+ log_scaled_problem_obj = tf.log(scaled_problem_obj + obj_scale_eps)
+ return tf.reduce_mean(log_scaled_problem_obj)
+ else:
+ return total_obj / (initial_obj + obj_scale_eps)
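+
+  # In symbols: with log scaling this is mean_t(log((f_t + eps) / (f_0 + eps)))
+  # (or mean_t(log(f_t / (f_0 + eps) + eps)) without the numerator epsilon);
+  # otherwise it is the accumulated weighted objective divided by (f_0 + eps).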
+
+
+def local_state_variables(init_values, return_init_values):
+ """Create local variables initialized from init_values.
+
+ This will create local variables from a list of init_values. Each variable
+ will be named based on the value's shape and dtype.
+
+  As a convenience, a boolean tensor controls whether the returned values come
+  from the created local variables or from the original init values.
+
+ Args:
+ init_values: iterable of tensors
+ return_init_values: boolean tensor
+
+ Returns:
+ local_vars: list of the created local variables.
+ vals: if return_init_values is true, then this returns the values of
+ init_values. Otherwise it returns the values of the local_vars.
+ """
+ if not init_values:
+ return [], []
+
+ # This generates a harmless warning when saving the metagraph.
+ variable_use_count = tf.get_collection_ref(_LOCAL_STATE_VARIABLE_COLLECTION)
+ if not variable_use_count:
+ variable_use_count.append(collections.defaultdict(int))
+ variable_use_count = variable_use_count[0]
+
+ local_vars = []
+ with tf.variable_scope(OPTIMIZER_SCOPE):
+    # We can't use init_value as an initializer because init_value may itself
+    # depend on some problem variables. This would create inter-variable
+    # initialization order dependencies, which TensorFlow does not handle well.
+ for init_value in init_values:
+ name = create_local_state_variable_name(init_value)
+ unique_name = name + "_" + str(variable_use_count[name])
+ variable_use_count[name] += 1
+ # The overarching idea here is to be able to reuse variables between
+ # different sessions on the same TensorFlow master without errors. By
+ # uniquifying based on the type and name we mirror the checks made inside
+ # TensorFlow, while still allowing some memory reuse. Ultimately this is a
+ # hack due to the broken Session.reset().
+ local_vars.append(
+ tf.get_local_variable(
+ unique_name,
+ initializer=tf.zeros(
+ init_value.get_shape(), dtype=init_value.dtype)))
+
+ # It makes things a lot simpler if we use the init_value the first
+ # iteration, instead of the variable itself. It allows us to propagate
+ # gradients through it as well as simplifying initialization. The variable
+ # ends up assigned to after the first iteration.
+ vals = tf.cond(return_init_values, lambda: init_values, lambda: local_vars)
+ if len(init_values) == 1:
+ # tf.cond extracts elements from singleton lists.
+ vals = [vals]
+ return local_vars, vals
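+
+# Behaviour sketch: on the first unroll the caller feeds return_init_values=True,
+# so the returned vals are the fresh init_values and gradients flow through them;
+# on later unrolls the locally stored variables (assigned at the end of the
+# previous unroll) are returned instead.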
+
+
+def create_local_state_variable_name(tensor):
+ """Create a name of the variable based on its type and shape."""
+ if not tensor.get_shape().is_fully_defined():
+ raise ValueError("Need a fully specified shape to create a local variable.")
+
+ return (_LOCAL_VARIABLE_PREFIX + "_".join(
+ map(str, tensor.get_shape().as_list())) + "_" + tensor.dtype.name)
+
+
+def is_local_state_variable(op):
+ """Returns if this op is a local state variable created for training."""
+ return op.node_def.op in ["Variable", "VariableV2"] and op.name.startswith(
+ OPTIMIZER_SCOPE + "/" + _LOCAL_VARIABLE_PREFIX)
+
+
+def flatten_and_sort(dictionary):
+ """Flattens a dictionary into a list of values sorted by the keys."""
+ return [dictionary[k] for k in sorted(dictionary.keys())]
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/utils.py"
new file mode 100644
index 00000000..58744f4c
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/optimizer/utils.py"
@@ -0,0 +1,278 @@
+# Copyright 2017 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Utilities and helper functions."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+import tensorflow as tf
+
+
+def make_finite(t, replacement):
+ """Replaces non-finite tensor values with the replacement value."""
+ return tf.where(tf.is_finite(t), t, replacement)
+
+
+def asinh(x):
+ """Computes the inverse hyperbolic sine function (in tensorflow)."""
+ return tf.log(x + tf.sqrt(1. + x ** 2))
+
+
+def affine(inputs, output_size, scope="Affine", scale=0.1, vec_mean=0.,
+ include_bias=True, bias_init=0., random_seed=None):
+ """Computes an affine function of the inputs.
+
+ Creates or recalls tensorflow variables "Matrix" and "Bias"
+ to generate an affine operation on the input.
+
+ If the inputs are a list of tensors, they are concatenated together.
+
+ Initial weights for the matrix are drawn from a Gaussian with zero
+ mean and standard deviation that is the given scale divided by the
+ square root of the input dimension. Initial weights for the bias are
+ set to zero.
+
+ Args:
+ inputs: List of tensors with shape (batch_size, input_size)
+ output_size: Size (dimension) of the output
+ scope: Variable scope for these parameters (default: "Affine")
+ scale: Initial weight scale for the matrix parameters (default: 0.1),
+ this constant is divided by the sqrt of the input size to get the
+ std. deviation of the initial weights
+ vec_mean: The mean for the random initializer
+ include_bias: Whether to include the bias term
+ bias_init: The initializer bias (default 0.)
+ random_seed: Random seed for random initializers. (Default: None)
+
+ Returns:
+ output: Tensor with shape (batch_size, output_size)
+ """
+
+ # Concatenate the input arguments.
+ x = tf.concat(inputs, 1)
+
+ with tf.variable_scope(scope):
+ input_size = x.get_shape().as_list()[1]
+
+ sigma = scale / np.sqrt(input_size)
+ rand_init = tf.random_normal_initializer(mean=vec_mean, stddev=sigma,
+ seed=random_seed)
+
+ matrix = tf.get_variable("Matrix", [input_size, output_size],
+ dtype=tf.float32, initializer=rand_init)
+
+ if include_bias:
+ bias = tf.get_variable("Bias", [output_size], dtype=tf.float32,
+ initializer=tf.constant_initializer(bias_init,
+ tf.float32))
+ else:
+ bias = 0.
+ output = tf.matmul(x, matrix) + bias
+
+ return output
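+
+# Example (illustrative): affine([tf.ones((4, 3))], 2) returns a (4, 2) tensor
+# computed as x @ Matrix + Bias, where Matrix is drawn from a normal with mean
+# vec_mean and stddev scale / sqrt(3), and Bias is initialized to bias_init.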
+
+
+def project(inputs, weights, bias=0., activation=tf.identity):
+ """Computes an affine or linear projection of the inputs.
+
+ Projects the inputs onto the given weight vector and (optionally)
+ adds a bias and passes the result through an activation function.
+
+ Args:
+ inputs: matrix of inputs with shape [batch_size, dim]
+ weights: weight matrix with shape [dim, output_dim]
+ bias: bias vector with shape [output_dim] (default: 0)
+ activation: nonlinear activation function (default: tf.identity)
+
+ Returns:
+ outputs: an op which computes activation(inputs @ weights + bias)
+ """
+ return activation(tf.matmul(inputs, weights) + bias)
+
+
+def new_mean_squared(grad_vec, decay, ms):
+ """Calculates the new accumulated mean squared of the gradient.
+
+ Args:
+ grad_vec: the vector for the current gradient
+ decay: the decay term
+ ms: the previous mean_squared value
+
+ Returns:
+ the new mean_squared value
+ """
+ decay_size = decay.get_shape().num_elements()
+ decay_check_ops = [
+ tf.assert_less_equal(decay, 1., summarize=decay_size),
+ tf.assert_greater_equal(decay, 0., summarize=decay_size)]
+
+ with tf.control_dependencies(decay_check_ops):
+ grad_squared = tf.square(grad_vec)
+
+ # If the previous mean_squared is the 0 vector, don't use the decay and just
+ # return the full grad_squared. This should only happen on the first timestep.
+ decay = tf.cond(tf.reduce_all(tf.equal(ms, 0.)),
+ lambda: tf.zeros_like(decay, dtype=tf.float32), lambda: decay)
+
+ # Update the running average of squared gradients.
+ epsilon = 1e-12
+ return (1. - decay) * (grad_squared + epsilon) + decay * ms
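+
+# In closed form: ms_new = decay * ms + (1 - decay) * (grad**2 + 1e-12), with
+# decay forced to 0 whenever ms is still the zero vector (i.e. the first step).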
+
+
+def rms_scaling(gradient, decay, ms, update_ms=True):
+ """Vectorizes and scales a tensor of gradients.
+
+ Args:
+ gradient: the current gradient
+ decay: the current decay value.
+ ms: the previous mean squared value
+ update_ms: Whether to update the mean squared value (default: True)
+
+ Returns:
+ The scaled gradient and the new ms value if update_ms is True,
+ the old ms value otherwise.
+ """
+
+ # Vectorize the gradients and compute the squared gradients.
+ grad_vec = tf.reshape(gradient, [-1, 1])
+
+ if update_ms:
+ ms = new_mean_squared(grad_vec, decay, ms)
+
+ # Scale the current gradients by the RMS, squashed by the asinh function.
+ scaled_gradient = asinh(grad_vec / tf.sqrt(ms + 1e-16))
+
+ return scaled_gradient, ms
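+
+# asinh behaves like the identity near zero and grows only logarithmically for
+# large inputs, so normalizing by the RMS and then squashing compresses very
+# large gradients while preserving their sign.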
+
+
+def accumulate_sparse_gradients(grad):
+ """Accumulates repeated indices of a sparse gradient update.
+
+ Args:
+ grad: a tf.IndexedSlices gradient
+
+ Returns:
+ grad_indices: unique indices
+ grad_values: gradient values corresponding to the indices
+ """
+
+ grad_indices, grad_segments = tf.unique(grad.indices)
+ grad_values = tf.unsorted_segment_sum(grad.values, grad_segments,
+ tf.shape(grad_indices)[0])
+ return grad_indices, grad_values
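+
+# Example (illustrative): for an IndexedSlices with indices [2, 0, 2] and values
+# [[1.], [3.], [4.]], this returns indices [2, 0] and summed values [[5.], [3.]].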
+
+
+def slice_tensor(dense_tensor, indices, head_dims):
+ """Extracts slices from a partially flattened dense tensor.
+
+ indices is assumed to index into the first dimension of head_dims.
+ dense_tensor is assumed to have a shape [D_0, D_1, ...] such that
+  prod(head_dims) == D_0. This function will extract slices along the
+  first dimension of head_dims.
+
+ Example:
+
+ Consider a tensor with shape head_dims = [100, 2] and a dense_tensor with
+ shape [200, 3]. Note that the first dimension of dense_tensor equals the
+ product of head_dims. This function will reshape dense_tensor such that
+  its shape is now [100, 2, 3] (i.e. the first dimension became head_dims)
+ and then slice it along the first dimension. After slicing, the slices will
+ have their initial dimensions flattened just as they were in dense_tensor
+ (e.g. if there are 4 indices, the return value will have a shape of [4, 3]).
+
+ Args:
+ dense_tensor: a N-D dense tensor. Shape: [D_0, D_1, ...]
+ indices: a 1-D integer tensor. Shape: [K]
+ head_dims: True dimensions of the dense_tensor's first dimension.
+
+ Returns:
+ Extracted slices. Shape [K, D_1, ...]
+ """
+
+ tail_dims = tf.shape(dense_tensor)[1:]
+ dense_tensor = tf.reshape(dense_tensor,
+ tf.concat([head_dims, tail_dims], 0))
+
+ slices = tf.gather(dense_tensor, indices)
+ # NOTE(siege): This kills the shape annotation.
+ return tf.reshape(slices, tf.concat([[-1], tail_dims], 0))
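+
+# Concrete sketch: with head_dims = [4] and a dense_tensor of shape [4, 3],
+# slice_tensor(dense_tensor, [0, 2], [4]) gathers rows 0 and 2 and returns a
+# tensor of shape [2, 3].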
+
+
+def stack_tensor(slices, indices, dense_tensor, head_dims):
+  """Reconstitutes a tensor from slices and corresponding indices.
+
+ This is an inverse operation to slice_tensor. Missing slices are set to 0.
+
+ Args:
+ slices: a tensor. Shape [K, D_1, ...]
+ indices: a 1-D integer tensor. Shape: [K]
+ dense_tensor: the original tensor the slices were taken
+ from. Shape: [D_0, D_1, ...]
+ head_dims: True dimensions of the dense_tensor's first dimension.
+
+ Returns:
+    Reconstituted tensor. Shape: [D_0, D_1, ...]
+ """
+ # NOTE(siege): This cast shouldn't be necessary.
+ indices = tf.cast(indices, tf.int32)
+
+ tail_dims = tf.shape(dense_tensor)[1:]
+ dense_shape = tf.concat([head_dims, tail_dims], 0)
+
+ slices = tf.reshape(slices, tf.concat([[-1], dense_shape[1:]], 0))
+ indices = tf.expand_dims(indices, -1)
+
+ return tf.reshape(tf.scatter_nd(indices, slices, dense_shape),
+ tf.shape(dense_tensor))
+
+
+def update_slices(slices, indices, dense_tensor, head_dims):
+ """Reconstitutes a tensor from slices and corresponding indices.
+
+  Like stack_tensor, but instead of setting missing slices to 0, sets them to
+ what they were in the original tensor. The return value is reshaped to be
+ the same as dense_tensor.
+
+ Args:
+ slices: a tensor. Shape [K, D_1, ...]
+ indices: a 1-D integer tensor. Shape: [K]
+ dense_tensor: the original tensor the slices were taken
+ from. Shape: [D_0, D_1, ...]
+ head_dims: True dimensions of the dense_tensor's first dimension.
+
+ Returns:
+    Reconstituted tensor. Shape: [D_0, D_1, ...]
+ """
+ # NOTE(siege): This cast shouldn't be necessary.
+ indices = tf.cast(indices, tf.int32)
+
+ tail_dims = tf.shape(dense_tensor)[1:]
+ dense_shape = tf.concat([head_dims, tail_dims], 0)
+
+ update_mask_vals = tf.fill(tf.shape(indices), 1)
+ reshaped_indices = tf.expand_dims(indices, -1)
+ update_mask = tf.equal(
+ tf.scatter_nd(reshaped_indices, update_mask_vals, head_dims[:1]), 1)
+
+ reshaped_dense_slices = tf.reshape(
+ stack_tensor(slices, indices, dense_tensor, head_dims), dense_shape)
+ reshaped_dense_tensor = tf.reshape(dense_tensor, dense_shape)
+
+ return tf.reshape(
+ tf.where(update_mask, reshaped_dense_slices, reshaped_dense_tensor),
+ tf.shape(dense_tensor))
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/problems/BUILD" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/problems/BUILD"
new file mode 100644
index 00000000..c7046188
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/problems/BUILD"
@@ -0,0 +1,43 @@
+package(default_visibility = ["//visibility:public"])
+
+# Libraries
+# =====
+
+py_library(
+ name = "datasets",
+ srcs = ["datasets.py"],
+ deps = [
+ ],
+)
+
+py_library(
+ name = "model_adapter",
+ srcs = ["model_adapter.py"],
+ deps = [
+ ":problem_generator",
+ ],
+)
+
+py_library(
+ name = "problem_generator",
+ srcs = ["problem_generator.py"],
+ deps = [
+ ":problem_spec",
+ ],
+)
+
+py_library(
+ name = "problem_sets",
+ srcs = ["problem_sets.py"],
+ deps = [
+ ":datasets",
+ ":model_adapter",
+ ":problem_generator",
+ ],
+)
+
+py_library(
+ name = "problem_spec",
+ srcs = ["problem_spec.py"],
+ deps = [],
+)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/problems/datasets.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/problems/datasets.py"
new file mode 100644
index 00000000..edf3df65
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/problems/datasets.py"
@@ -0,0 +1,218 @@
+# Copyright 2017 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Functions to generate or load datasets for supervised learning."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from collections import namedtuple
+
+import numpy as np
+from sklearn.datasets import make_classification
+
+MAX_SEED = 4294967295
+
+
+class Dataset(namedtuple("Dataset", "data labels")):
+ """Helper class for managing a supervised learning dataset.
+
+ Args:
+ data: an array of type float32 with N samples, each of which is the set
+ of features for that sample. (Shape (N, D_i), where N is the number of
+ samples and D_i is the number of features for that sample.)
+ labels: an array of type int32 or int64 with N elements, indicating the
+ class label for the corresponding set of features in data.
+ """
+ # Since this is an immutable object, we don't need to reserve slots.
+ __slots__ = ()
+
+ @property
+ def size(self):
+ """Dataset size (number of samples)."""
+ return len(self.data)
+
+ def batch_indices(self, num_batches, batch_size):
+ """Creates indices of shuffled minibatches.
+
+ Args:
+ num_batches: the number of batches to generate
+ batch_size: the size of each batch
+
+ Returns:
+ batch_indices: a list of minibatch indices, arranged so that the dataset
+ is randomly shuffled.
+
+ Raises:
+ ValueError: if the data and labels have different lengths
+ """
+ if len(self.data) != len(self.labels):
+ raise ValueError("Labels and data must have the same number of samples.")
+
+ batch_indices = []
+
+ # Follows logic in mnist.py to ensure we cover the entire dataset.
+ index_in_epoch = 0
+ dataset_size = len(self.data)
+ dataset_indices = np.arange(dataset_size)
+ np.random.shuffle(dataset_indices)
+
+ for _ in range(num_batches):
+ start = index_in_epoch
+ index_in_epoch += batch_size
+ if index_in_epoch > dataset_size:
+
+ # Finished epoch, reshuffle.
+ np.random.shuffle(dataset_indices)
+
+ # Start next epoch.
+ start = 0
+ index_in_epoch = batch_size
+
+ end = index_in_epoch
+ batch_indices.append(dataset_indices[start:end].tolist())
+
+ return batch_indices
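+
+  # Example (illustrative): with 10 samples and batch_size=4, successive batches
+  # walk through a shuffled permutation, e.g. [[3, 7, 0, 9], [1, 5, 2, 8], ...];
+  # whenever a batch would run past the end of the epoch, the indices are
+  # reshuffled and the next batch starts at the beginning of the new permutation.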
+
+
+def noisy_parity_class(n_samples,
+ n_classes=2,
+ n_context_ids=5,
+ noise_prob=0.25,
+ random_seed=None):
+ """Returns a randomly generated sparse-to-sparse dataset.
+
+ The label is a parity class of a set of context classes.
+
+ Args:
+ n_samples: number of samples (data points)
+ n_classes: number of class labels (default: 2)
+ n_context_ids: how many classes to take the parity of (default: 5).
+ noise_prob: how often to corrupt the label (default: 0.25)
+ random_seed: seed used for drawing the random data (default: None)
+ Returns:
+ dataset: A Dataset namedtuple containing the generated data and labels
+ """
+ np.random.seed(random_seed)
+ x = np.random.randint(0, n_classes, [n_samples, n_context_ids])
+ noise = np.random.binomial(1, noise_prob, [n_samples])
+ y = (np.sum(x, 1) + noise) % n_classes
+ return Dataset(x.astype("float32"), y.astype("int32"))
+
+
+def random(n_features, n_samples, n_classes=2, sep=1.0, random_seed=None):
+ """Returns a randomly generated classification dataset.
+
+ Args:
+ n_features: number of features (dependent variables)
+ n_samples: number of samples (data points)
+ n_classes: number of class labels (default: 2)
+    sep: separation between the classes; a higher value corresponds to
+      an easier classification problem (default: 1.0)
+ random_seed: seed used for drawing the random data (default: None)
+
+ Returns:
+ dataset: A Dataset namedtuple containing the generated data and labels
+ """
+ # Generate the problem data.
+ x, y = make_classification(n_samples=n_samples,
+ n_features=n_features,
+ n_informative=n_features,
+ n_redundant=0,
+ n_classes=n_classes,
+ class_sep=sep,
+ random_state=random_seed)
+
+ return Dataset(x.astype("float32"), y.astype("int32"))
+
+
+def random_binary(n_features, n_samples, random_seed=None):
+ """Returns a randomly generated dataset of binary values.
+
+ Args:
+ n_features: number of features (dependent variables)
+ n_samples: number of samples (data points)
+ random_seed: seed used for drawing the random data (default: None)
+
+ Returns:
+ dataset: A Dataset namedtuple containing the generated data and labels
+ """
+ random_seed = (np.random.randint(MAX_SEED) if random_seed is None
+ else random_seed)
+ np.random.seed(random_seed)
+
+ x = np.random.randint(2, size=(n_samples, n_features))
+ y = np.zeros((n_samples, 1))
+
+ return Dataset(x.astype("float32"), y.astype("int32"))
+
+
+def random_symmetric(n_features, n_samples, random_seed=None):
+ """Returns a randomly generated dataset of values and their negatives.
+
+ Args:
+ n_features: number of features (dependent variables)
+ n_samples: number of samples (data points)
+ random_seed: seed used for drawing the random data (default: None)
+
+ Returns:
+ dataset: A Dataset namedtuple containing the generated data and labels
+ """
+ random_seed = (np.random.randint(MAX_SEED) if random_seed is None
+ else random_seed)
+ np.random.seed(random_seed)
+
+ x1 = np.random.normal(size=(int(n_samples/2), n_features))
+ x = np.concatenate((x1, -x1), axis=0)
+ y = np.zeros((n_samples, 1))
+
+ return Dataset(x.astype("float32"), y.astype("int32"))
+
+
+def random_mlp(n_features, n_samples, random_seed=None, n_layers=6, width=20):
+ """Returns a generated output of an MLP with random weights.
+
+ Args:
+ n_features: number of features (dependent variables)
+ n_samples: number of samples (data points)
+ random_seed: seed used for drawing the random data (default: None)
+ n_layers: number of layers in random MLP
+ width: width of the layers in random MLP
+
+ Returns:
+ dataset: A Dataset namedtuple containing the generated data and labels
+ """
+ random_seed = (np.random.randint(MAX_SEED) if random_seed is None
+ else random_seed)
+ np.random.seed(random_seed)
+
+ x = np.random.normal(size=(n_samples, n_features))
+ y = x
+ n_in = n_features
+ scale_factor = np.sqrt(2.) / np.sqrt(n_features)
+ for _ in range(n_layers):
+ weights = np.random.normal(size=(n_in, width)) * scale_factor
+ y = np.dot(y, weights).clip(min=0)
+ n_in = width
+
+ y = y[:, 0]
+ y[y > 0] = 1
+
+ return Dataset(x.astype("float32"), y.astype("int32"))
+
+
+EMPTY_DATASET = Dataset(np.array([], dtype="float32"),
+ np.array([], dtype="int32"))
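+
+
+# A minimal usage sketch (illustrative only; all values below are arbitrary):
+# build a small random dataset and draw shuffled minibatch indices from it.
+if __name__ == "__main__":
+  example_dataset = random(n_features=4, n_samples=10, random_seed=0)
+  for batch in example_dataset.batch_indices(num_batches=3, batch_size=4):
+    # Index into the data and labels arrays with the minibatch indices.
+    print(example_dataset.data[batch].shape, example_dataset.labels[batch].shape)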
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/problems/model_adapter.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/problems/model_adapter.py"
new file mode 100644
index 00000000..84559923
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/problems/model_adapter.py"
@@ -0,0 +1,190 @@
+# Copyright 2017 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Implementation of the ModelAdapter class."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import mock
+import tensorflow as tf
+
+from learned_optimizer.problems import problem_generator as pg
+
+
+class ModelAdapter(pg.Problem):
+ """Adapts Tensorflow models/graphs into a form suitable for meta-training.
+
+ This class adapts an existing TensorFlow graph into a form suitable for
+ meta-training a learned optimizer.
+ """
+
+ def __init__(self, make_loss_and_init_fn):
+ """Wraps a model in the Problem interface.
+
+    The make_loss_and_init_fn argument is a callable that returns a tuple of
+    two other callables, as follows.
+
+    The first will construct most of the graph and return the problem loss. It
+    is essential that this graph contains all of the model's variables, but
+    none of its queues.
+
+    The second will construct the model initialization graph given a list of
+    parameters and return a callable that is passed an instance of tf.Session
+    and initializes the model's parameters.
+
+    An example of such a make_loss_and_init_fn callable:
+
+ ```python
+ def make_loss_and_init_fn():
+ inputs = queued_reader()
+
+ def make_loss():
+ return create_model_with_variables(inputs)
+
+ def make_init_fn(parameters):
+      saver = tf.train.Saver(parameters)
+ def init_fn(sess):
+        saver.restore(sess, ...)
+ return init_fn
+
+ return make_loss, make_init_fn
+ ```
+
+ Args:
+      make_loss_and_init_fn: a callable, as described above
+ """
+ make_loss_fn, make_init_fn = make_loss_and_init_fn()
+
+ self.make_loss_fn = make_loss_fn
+ self.parameters, self.constants = _get_variables(make_loss_fn)
+
+ if make_init_fn is not None:
+ init_fn = make_init_fn(self.parameters + self.constants)
+ else:
+ init_op = tf.initialize_variables(self.parameters + self.constants)
+ init_fn = lambda sess: sess.run(init_op)
+
+ tf.logging.info("ModelAdapter parameters: %s",
+ [op.name for op in self.parameters])
+ tf.logging.info("ModelAdapter constants: %s",
+ [op.name for op in self.constants])
+
+ super(ModelAdapter, self).__init__(
+ [], random_seed=None, noise_stdev=0.0, init_fn=init_fn)
+
+ def init_tensors(self, seed=None):
+ """Returns a list of tensors with the given shape."""
+ return self.parameters
+
+ def init_variables(self, seed=None):
+ """Returns a list of variables with the given shape."""
+ # NOTE(siege): This is awkward, as these are not set as trainable.
+ return self.parameters
+
+ def objective(self, parameters, data=None, labels=None):
+ """Computes the objective given a list of parameters.
+
+ Args:
+ parameters: The parameters to optimize (as a list of tensors)
+ data: An optional batch of data for calculating objectives
+ labels: An optional batch of corresponding labels
+
+ Returns:
+ A scalar tensor representing the objective value
+ """
+ # We need to set up a mapping based on the original parameter names, because
+ # the parameters passed can be arbitrary tensors.
+ parameter_mapping = {
+ old_p.name: p
+ for old_p, p in zip(self.parameters, parameters)
+ }
+
+ with tf.variable_scope(tf.get_variable_scope(), reuse=True):
+ return _make_with_custom_variables(self.make_loss_fn, parameter_mapping)
+
+
+def _get_variables(func):
+ """Calls func, returning any variables created.
+
+ The created variables are modified to not be trainable, and are placed into
+ the LOCAL_VARIABLES collection.
+
+ Args:
+ func: Function to be called.
+
+ Returns:
+ A tuple (variables, constants) where the first element is a list of
+ trainable variables and the second is the non-trainable variables.
+ """
+ variables = []
+ constants = []
+
+ # We need to create these variables like normal, so grab the original
+ # constructor before we mock it.
+ original_init = tf.Variable.__init__
+
+ def custom_init(self, *args, **kwargs):
+ trainable = kwargs["trainable"]
+ kwargs["trainable"] = False
+    # Placing these variables in the LOCAL_VARIABLES collection keeps them out
+    # of the optimizer's checkpoints, since savers only save global variables
+    # by default.
+ kwargs["collections"] = [tf.GraphKeys.LOCAL_VARIABLES]
+ original_init(self, *args, **kwargs)
+ if trainable:
+ variables.append(self)
+ else:
+ constants.append(self)
+
+ # This name-scope is just a nicety for TensorBoard.
+ with tf.name_scope("unused_graph"):
+ with mock.patch.object(tf.Variable, "__init__", custom_init):
+ func()
+
+ return variables, constants
+
+
+def _make_with_custom_variables(func, variable_mapping):
+ """Calls func and replaces the value of some variables created in it.
+
+ Args:
+ func: Function to be called.
+ variable_mapping: A mapping of variable name to the replacement tensor or
+ tf.Variable.
+
+ Returns:
+ The return value of func is returned.
+ """
+ original_value = tf.Variable.value
+
+ def custom_value(self):
+ if self.name in variable_mapping:
+ replacement = variable_mapping[self.name]
+ tf.logging.info("Replaced %s with %s" % (self.name, replacement))
+
+      # The value() method needs to return a tensor, so if the replacement is
+      # a tf.Variable we call the original value() on it. This has to be done
+      # manually; calling value() on the replacement would go through the
+      # mocked method again and could loop forever.
+ if isinstance(replacement, tf.Variable):
+ replacement = original_value(replacement)
+
+ return replacement
+ else:
+ return original_value(self)
+
+ with mock.patch.object(tf.Variable, "value", custom_value):
+ with mock.patch.object(tf.Variable, "_AsTensor", custom_value):
+ return func()
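+
+
+# Illustrative usage sketch (the loss below is an arbitrary stand-in, not part
+# of this module): wrap a plain graph-building function in the Problem
+# interface via ModelAdapter.
+#
+#   def make_loss_and_init_fn():
+#     def make_loss():
+#       w = tf.get_variable("w", [10, 1])
+#       return tf.reduce_sum(tf.square(w))
+#     # Returning None for the init fn falls back to the default initializer.
+#     return make_loss, None
+#
+#   problem = ModelAdapter(make_loss_and_init_fn)
+#   params = problem.init_variables()
+#   loss = problem.objective(params)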
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/problems/problem_generator.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/problems/problem_generator.py"
new file mode 100644
index 00000000..abe1008f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/problems/problem_generator.py"
@@ -0,0 +1,1016 @@
+# Copyright 2017 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Generates toy optimization problems.
+
+This module contains a base class, Problem, that defines a minimal interface
+for optimization problems, and a few specific problem types that subclass it.
+
+Test functions for optimization: http://www.sfu.ca/~ssurjano/optimization.html
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+import tensorflow as tf
+
+from learned_optimizer.problems import problem_spec as prob_spec
+
+tf.app.flags.DEFINE_float("l2_reg_scale", 1e-3,
+ """Scaling factor for parameter value regularization
+ in softmax classifier problems.""")
+FLAGS = tf.app.flags.FLAGS
+
+EPSILON = 1e-6
+MAX_SEED = 4294967295
+PARAMETER_SCOPE = "parameters"
+
+_Spec = prob_spec.Spec
+
+
+class Problem(object):
+ """Base class for optimization problems.
+
+ This defines an interface for optimization problems, including objective and
+ gradients functions and a feed_generator function that yields data to pass to
+ feed_dict in tensorflow.
+
+ Subclasses of Problem must (at the minimum) override the objective method,
+ which computes the objective/loss/cost to minimize, and specify the desired
+ shape of the parameters in a list in the param_shapes attribute.
+ """
+
+ def __init__(self, param_shapes, random_seed, noise_stdev, init_fn=None):
+ """Initializes a global random seed for the problem.
+
+ Args:
+ param_shapes: A list of tuples defining the expected shapes of the
+ parameters for this problem
+      random_seed: An integer, or None, in which case the seed is drawn
+        randomly
+ noise_stdev: Strength (standard deviation) of added gradient noise
+ init_fn: A function taking a tf.Session object that is used to
+ initialize the problem's variables.
+
+ Raises:
+ ValueError: If the random_seed is not an integer and not None
+ """
+ if random_seed is not None and not isinstance(random_seed, int):
+ raise ValueError("random_seed must be an integer or None")
+
+ # Pick a random seed.
+ self.random_seed = (np.random.randint(MAX_SEED) if random_seed is None
+ else random_seed)
+
+ # Store the noise level.
+ self.noise_stdev = noise_stdev
+
+ # Set the random seed to ensure any random data in the problem is the same.
+ np.random.seed(self.random_seed)
+
+ # Store the parameter shapes.
+ self.param_shapes = param_shapes
+
+ if init_fn is not None:
+ self.init_fn = init_fn
+ else:
+ self.init_fn = lambda _: None
+
+ def init_tensors(self, seed=None):
+ """Returns a list of tensors with the given shape."""
+ return [tf.random_normal(shape, seed=seed) for shape in self.param_shapes]
+
+ def init_variables(self, seed=None):
+ """Returns a list of variables with the given shape."""
+ with tf.variable_scope(PARAMETER_SCOPE):
+ params = [tf.Variable(param) for param in self.init_tensors(seed)]
+ return params
+
+ def objective(self, parameters, data=None, labels=None):
+ """Computes the objective given a list of parameters.
+
+ Args:
+ parameters: The parameters to optimize (as a list of tensors)
+ data: An optional batch of data for calculating objectives
+ labels: An optional batch of corresponding labels
+
+ Returns:
+ A scalar tensor representing the objective value
+ """
+ raise NotImplementedError
+
+ def gradients(self, objective, parameters):
+ """Compute gradients of the objective with respect to the parameters.
+
+ Args:
+ objective: The objective op (e.g. output of self.objective())
+ parameters: A list of tensors (the parameters to optimize)
+
+ Returns:
+ A list of tensors representing the gradient for each parameter,
+ returned in the same order as the given list
+ """
+ grads = tf.gradients(objective, list(parameters))
+ noisy_grads = []
+
+ for grad in grads:
+ if isinstance(grad, tf.IndexedSlices):
+ noise = self.noise_stdev * tf.random_normal(tf.shape(grad.values))
+ new_grad = tf.IndexedSlices(grad.values + noise, grad.indices)
+ else:
+ new_grad = grad + self.noise_stdev * tf.random_normal(grad.get_shape())
+ noisy_grads.append(new_grad)
+
+ return noisy_grads
+
+
+class Quadratic(Problem):
+ """Optimizes a random quadratic function.
+
+ The objective is: f(x) = (1/2) ||Wx - y||_2^2
+ where W is a random Gaussian matrix and y is a random Gaussian vector.
+ """
+
+ def __init__(self, ndim, random_seed=None, noise_stdev=0.0):
+ """Initializes a random quadratic problem."""
+ param_shapes = [(ndim, 1)]
+ super(Quadratic, self).__init__(param_shapes, random_seed, noise_stdev)
+
+ # Generate a random problem instance.
+ self.w = np.random.randn(ndim, ndim).astype("float32")
+ self.y = np.random.randn(ndim, 1).astype("float32")
+
+ def objective(self, params, data=None, labels=None):
+ """Quadratic objective (see base class for details)."""
+ return tf.nn.l2_loss(tf.matmul(self.w, params[0]) - self.y)
+
+
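+# A minimal usage sketch of the Problem interface (illustrative only; the
+# problem type and hyperparameter values below are arbitrary):
+#
+#   problem = Quadratic(ndim=10, random_seed=0, noise_stdev=0.1)
+#   params = problem.init_variables()
+#   loss = problem.objective(params)
+#   grads = problem.gradients(loss, params)  # gradients with added noise
+
+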
+class SoftmaxClassifier(Problem):
+ """Helper functions for supervised softmax classification problems."""
+
+ def init_tensors(self, seed=None):
+ """Returns a list of tensors with the given shape."""
+ return [tf.random_normal(shape, seed=seed) * 1.2 / np.sqrt(shape[0])
+ for shape in self.param_shapes]
+
+ def inference(self, params, data):
+ """Computes logits given parameters and data.
+
+ Args:
+ params: List of parameter tensors or variables
+ data: Batch of features with samples along the first dimension
+
+ Returns:
+ logits: Un-normalized logits with shape (num_samples, num_classes)
+ """
+ raise NotImplementedError
+
+ def objective(self, params, data, labels):
+ """Computes the softmax cross entropy.
+
+ Args:
+ params: List of parameter tensors or variables
+ data: Batch of features with samples along the first dimension
+ labels: Vector of labels with the same number of samples as the data
+
+ Returns:
+ loss: Softmax cross entropy loss averaged over the samples in the batch
+
+ Raises:
+      ValueError: If the objective is to be computed over more than 2 classes,
+        because this operation is currently broken in tensorflow.
+ """
+ # Forward pass.
+ logits = self.inference(params, data)
+
+ # Compute the loss.
+ l2reg = [tf.reduce_sum(param ** 2) for param in params]
+ if int(logits.get_shape()[1]) == 2:
+ labels = tf.cast(labels, tf.float32)
+ losses = tf.nn.sigmoid_cross_entropy_with_logits(
+ labels=labels, logits=logits[:, 0])
+ else:
+ raise ValueError("Unable to compute softmax cross entropy for more than"
+ " 2 classes.")
+
+ return tf.reduce_mean(losses) + tf.reduce_mean(l2reg) * FLAGS.l2_reg_scale
+
+ def argmax(self, logits):
+ """Samples the most likely class label given the logits.
+
+ Args:
+ logits: Un-normalized logits with shape (num_samples, num_classes)
+
+ Returns:
+ predictions: Predicted class labels, has shape (num_samples,)
+ """
+ return tf.cast(tf.argmax(tf.nn.softmax(logits), 1), tf.int32)
+
+ def accuracy(self, params, data, labels):
+ """Computes the accuracy (fraction of correct classifications).
+
+ Args:
+ params: List of parameter tensors or variables
+ data: Batch of features with samples along the first dimension
+ labels: Vector of labels with the same number of samples as the data
+
+ Returns:
+ accuracy: Fraction of correct classifications across the batch
+ """
+ predictions = self.argmax(self.inference(params, data))
+ return tf.contrib.metrics.accuracy(predictions, tf.cast(labels, tf.int32))
+
+
+class SoftmaxRegression(SoftmaxClassifier):
+ """Builds a softmax regression problem."""
+
+ def __init__(self, n_features, n_classes, activation=tf.identity,
+ random_seed=None, noise_stdev=0.0):
+ self.activation = activation
+ self.n_features = n_features
+ param_shapes = [(n_features, n_classes), (n_classes,)]
+ super(SoftmaxRegression, self).__init__(param_shapes,
+ random_seed,
+ noise_stdev)
+
+ def inference(self, params, data):
+ features = tf.reshape(data, (-1, self.n_features))
+ return tf.matmul(features, params[0]) + params[1]
+
+
+class SparseSoftmaxRegression(SoftmaxClassifier):
+ """Builds a sparse input softmax regression problem."""
+
+ def __init__(self,
+ n_features,
+ n_classes,
+ activation=tf.identity,
+ random_seed=None,
+ noise_stdev=0.0):
+ self.activation = activation
+ self.n_features = n_features
+    param_shapes = [(n_classes, n_features), (n_features, n_classes),
+                    (n_classes,)]
+ super(SparseSoftmaxRegression, self).__init__(param_shapes, random_seed,
+ noise_stdev)
+
+ def inference(self, params, data):
+ all_embeddings, softmax_weights, softmax_bias = params
+ embeddings = tf.nn.embedding_lookup(all_embeddings, tf.cast(data, tf.int32))
+ embeddings = tf.reduce_sum(embeddings, 1)
+ return tf.matmul(embeddings, softmax_weights) + softmax_bias
+
+
+class OneHotSparseSoftmaxRegression(SoftmaxClassifier):
+ """Builds a sparse input softmax regression problem.
+
+ This is identical to SparseSoftmaxRegression, but without using embedding
+ ops.
+ """
+
+ def __init__(self,
+ n_features,
+ n_classes,
+ activation=tf.identity,
+ random_seed=None,
+ noise_stdev=0.0):
+ self.activation = activation
+ self.n_features = n_features
+ self.n_classes = n_classes
+    param_shapes = [(n_classes, n_features), (n_features, n_classes),
+                    (n_classes,)]
+ super(OneHotSparseSoftmaxRegression, self).__init__(param_shapes,
+ random_seed,
+ noise_stdev)
+
+ def inference(self, params, data):
+ all_embeddings, softmax_weights, softmax_bias = params
+ num_ids = tf.shape(data)[1]
+ one_hot_embeddings = tf.one_hot(tf.cast(data, tf.int32), self.n_classes)
+ one_hot_embeddings = tf.reshape(one_hot_embeddings, [-1, self.n_classes])
+ embeddings = tf.matmul(one_hot_embeddings, all_embeddings)
+ embeddings = tf.reshape(embeddings, [-1, num_ids, self.n_features])
+ embeddings = tf.reduce_sum(embeddings, 1)
+ return tf.matmul(embeddings, softmax_weights) + softmax_bias
+
+
+class FullyConnected(SoftmaxClassifier):
+ """Builds a multi-layer perceptron classifier."""
+
+ def __init__(self, n_features, n_classes, hidden_sizes=(32, 64),
+ activation=tf.nn.sigmoid, random_seed=None, noise_stdev=0.0):
+ """Initializes an multi-layer perceptron classification problem."""
+ # Store the number of features and activation function.
+ self.n_features = n_features
+ self.activation = activation
+
+ # Define the network as a list of weight + bias shapes for each layer.
+ param_shapes = []
+ for ix, sz in enumerate(hidden_sizes + (n_classes,)):
+
+      # The previous layer's size (n_features for the input layer).
+ prev_size = n_features if ix == 0 else hidden_sizes[ix - 1]
+
+ # Weight shape for this layer.
+ param_shapes.append((prev_size, sz))
+
+ # Bias shape for this layer.
+ param_shapes.append((sz,))
+
+ super(FullyConnected, self).__init__(param_shapes, random_seed, noise_stdev)
+
+ def inference(self, params, data):
+ # Flatten the features into a vector.
+ features = tf.reshape(data, (-1, self.n_features))
+
+ # Pass the data through the network.
+ preactivations = tf.matmul(features, params[0]) + params[1]
+
+ for layer in range(2, len(self.param_shapes), 2):
+ net = self.activation(preactivations)
+ preactivations = tf.matmul(net, params[layer]) + params[layer + 1]
+
+ return preactivations
+
+ def accuracy(self, params, data, labels):
+ """Computes the accuracy (fraction of correct classifications).
+
+ Args:
+ params: List of parameter tensors or variables
+ data: Batch of features with samples along the first dimension
+ labels: Vector of labels with the same number of samples as the data
+
+ Returns:
+ accuracy: Fraction of correct classifications across the batch
+ """
+ predictions = self.argmax(self.activation(self.inference(params, data)))
+ return tf.contrib.metrics.accuracy(predictions, tf.cast(labels, tf.int32))
+
+
+class ConvNet(SoftmaxClassifier):
+ """Builds an N-layer convnet for image classification."""
+
+ def __init__(self,
+ image_shape,
+ n_classes,
+ filter_list,
+ activation=tf.nn.relu,
+ random_seed=None,
+ noise_stdev=0.0):
+ # Number of channels, number of pixels in x- and y- dimensions.
+ n_channels, px, py = image_shape
+
+ # Store the activation.
+ self.activation = activation
+
+ param_shapes = []
+ input_size = n_channels
+ for fltr in filter_list:
+ # Add conv2d filters.
+ param_shapes.append((fltr[0], fltr[1], input_size, fltr[2]))
+ input_size = fltr[2]
+
+ # Number of units in the final (dense) layer.
+ self.affine_size = input_size * px * py
+
+ param_shapes.append((self.affine_size, n_classes)) # affine weights
+ param_shapes.append((n_classes,)) # affine bias
+
+ super(ConvNet, self).__init__(param_shapes, random_seed, noise_stdev)
+
+ def init_tensors(self, seed=None):
+ """Returns a list of tensors with the given shape."""
+ return [tf.random_normal(shape, mean=0., stddev=0.01, seed=seed)
+ for shape in self.param_shapes]
+
+ def inference(self, params, data):
+
+ # Unpack.
+ w_conv_list = params[:-2]
+ output_w, output_b = params[-2:]
+
+ conv_input = data
+ for w_conv in w_conv_list:
+ layer = tf.nn.conv2d(conv_input, w_conv, strides=[1] * 4, padding="SAME")
+ output = self.activation(layer)
+ conv_input = output
+
+ # Flatten.
+ flattened = tf.reshape(conv_input, (-1, self.affine_size))
+
+ # Fully connected layer.
+ return tf.matmul(flattened, output_w) + output_b
+
+
+class Bowl(Problem):
+ """A 2D quadratic bowl."""
+
+ def __init__(self, condition_number, angle=0.0,
+ random_seed=None, noise_stdev=0.0):
+ assert condition_number > 0, "Condition number must be positive."
+
+ # Define parameter shapes.
+ param_shapes = [(2, 1)]
+ super(Bowl, self).__init__(param_shapes, random_seed, noise_stdev)
+
+ self.condition_number = condition_number
+ self.angle = angle
+ self._build_matrix(condition_number, angle)
+
+ def _build_matrix(self, condition_number, angle):
+ """Builds the Hessian matrix."""
+ hessian = np.array([[condition_number, 0.], [0., 1.]], dtype="float32")
+
+ # Build the rotation matrix.
+ rotation_matrix = np.array([
+ [np.cos(angle), -np.sin(angle)],
+ [np.sin(angle), np.cos(angle)]
+ ])
+
+ # The objective is 0.5 * || Ax ||_2^2
+ # where the data matrix (A) is: sqrt(Hessian).dot(rotation_matrix).
+ self.matrix = np.sqrt(hessian).dot(rotation_matrix)
+
+ def objective(self, params, data=None, labels=None):
+ mtx = tf.constant(self.matrix, dtype=tf.float32)
+ return tf.nn.l2_loss(tf.matmul(mtx, params[0]))
+
+ def surface(self, xlim=5, ylim=5, n=50):
+ xm, ym = _mesh(xlim, ylim, n)
+ pts = np.vstack([xm.ravel(), ym.ravel()])
+ zm = 0.5 * np.linalg.norm(self.matrix.dot(pts), axis=0) ** 2
+ return xm, ym, zm.reshape(n, n)
+
+
+class Problem2D(Problem):
+
+ def __init__(self, random_seed=None, noise_stdev=0.0):
+ param_shapes = [(2,)]
+ super(Problem2D, self).__init__(param_shapes, random_seed, noise_stdev)
+
+ def surface(self, n=50, xlim=5, ylim=5):
+ """Computes the objective surface over a 2d mesh."""
+
+ # Create a mesh over the given coordinate ranges.
+ xm, ym = _mesh(xlim, ylim, n)
+
+ with tf.Graph().as_default(), tf.Session() as sess:
+
+ # Ops to compute the objective at every (x, y) point.
+ x = tf.placeholder(tf.float32, shape=xm.shape)
+ y = tf.placeholder(tf.float32, shape=ym.shape)
+ obj = self.objective([[x, y]])
+
+ # Run the computation.
+ zm = sess.run(obj, feed_dict={x: xm, y: ym})
+
+ return xm, ym, zm
+
+
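+# Usage sketch (illustrative only): evaluate a 2D problem's loss surface for
+# plotting, e.g.
+#
+#   xm, ym, zm = Rosenbrock().surface(n=100, xlim=2, ylim=3)
+#   # zm[i, j] is the objective value at the point (xm[i, j], ym[i, j]), so
+#   # the three arrays can be fed directly to a contour or surface plot.
+
+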
+class Rosenbrock(Problem2D):
+ """See https://en.wikipedia.org/wiki/Rosenbrock_function.
+
+  This function has a single global minimum at [1, 1].
+ The objective value at this point is zero.
+ """
+
+ def init_tensors(self, seed=None):
+ """Returns a list of tensors with the given shape."""
+ return [tf.random_uniform(shape, minval=-5., maxval=10., seed=seed)
+ for shape in self.param_shapes]
+
+ def objective(self, params, data=None, labels=None):
+ x, y = tf.split(params[0], 2, axis=0)
+ obj = (1 - x)**2 + 100 * (y - x**2)**2
+ return tf.squeeze(obj)
+
+
+def make_rosenbrock_loss_and_init(device=None):
+ """A variable-backed version of Rosenbrock problem.
+
+ See the Rosenbrock class for details.
+
+ Args:
+ device: Where to place the ops of this problem.
+
+ Returns:
+ A tuple of two callables, first of which creates the loss and the second
+ creates the parameter initializer function.
+ """
+ def make_rosenbrock_loss():
+ with tf.name_scope("optimizee"):
+ with tf.device(device):
+ x = tf.get_variable("x", [1])
+ y = tf.get_variable("y", [1])
+ c = tf.get_variable(
+ "c", [1],
+ initializer=tf.constant_initializer(100.0),
+ trainable=False)
+ obj = (1 - x)**2 + c * (y - x**2)**2
+ return tf.squeeze(obj)
+
+ def make_init_fn(parameters):
+ with tf.device(device):
+ init_op = tf.variables_initializer(parameters)
+ def init_fn(sess):
+ tf.logging.info("Initializing model parameters.")
+ sess.run(init_op)
+ return init_fn
+
+ return make_rosenbrock_loss, make_init_fn
+
+
+class Saddle(Problem2D):
+ """Loss surface around a saddle point."""
+
+ def objective(self, params, data=None, labels=None):
+ x, y = tf.split(params[0], 2, axis=0)
+ obj = x ** 2 - y ** 2
+ return tf.squeeze(obj)
+
+
+class LogSumExp(Problem2D):
+ """2D function defined by the log of the sum of exponentials."""
+
+ def objective(self, params, data=None, labels=None):
+ x, y = tf.split(params[0], 2, axis=0)
+ obj = tf.log(tf.exp(x + 3. * y - 0.1) +
+ tf.exp(x - 3. * y - 0.1) +
+ tf.exp(-x - 0.1) + 1.0)
+ return tf.squeeze(obj)
+
+
+class Ackley(Problem2D):
+ """Ackley's function (contains many local minima)."""
+
+ def init_tensors(self, seed=None):
+ """Returns a list of tensors with the given shape."""
+ return [tf.random_uniform(shape, minval=-32.768, maxval=32.768, seed=seed)
+ for shape in self.param_shapes]
+
+ def objective(self, params, data=None, labels=None):
+ x, y = tf.split(params[0], 2, axis=0)
+ obj = (-20 * tf.exp(-0.2 * tf.sqrt(0.5 * (x ** 2 + y ** 2))) -
+ tf.exp(0.5 * (tf.cos(2 * np.pi * x) + tf.cos(2 * np.pi * y))) +
+ tf.exp(1.0) + 20.)
+ return tf.squeeze(obj)
+
+
+class Beale(Problem2D):
+ """Beale function (a multimodal function with sharp peaks)."""
+
+ def init_tensors(self, seed=None):
+ """Returns a list of tensors with the given shape."""
+ return [tf.random_uniform(shape, minval=-4.5, maxval=4.5, seed=seed)
+ for shape in self.param_shapes]
+
+ def objective(self, params, data=None, labels=None):
+ x, y = tf.split(params[0], 2, axis=0)
+ obj = ((1.5 - x + x * y) ** 2 +
+ (2.25 - x + x * y ** 2) ** 2 +
+ (2.625 - x + x * y ** 3) ** 2)
+ return tf.squeeze(obj)
+
+
+class Booth(Problem2D):
+ """Booth's function (has a long valley along one dimension)."""
+
+ def init_tensors(self, seed=None):
+ """Returns a list of tensors with the given shape."""
+ return [tf.random_uniform(shape, minval=-10., maxval=10., seed=seed)
+ for shape in self.param_shapes]
+
+ def objective(self, params, data=None, labels=None):
+ x, y = tf.split(params[0], 2, axis=0)
+ obj = (x + 2 * y - 7) ** 2 + (2 * x + y - 5) ** 2
+ return tf.squeeze(obj)
+
+
+class StyblinskiTang(Problem2D):
+ """Styblinski-Tang function (a bumpy function in two dimensions)."""
+
+ def init_tensors(self, seed=None):
+ """Returns a list of tensors with the given shape."""
+ return [tf.random_uniform(shape, minval=-5., maxval=5., seed=seed)
+ for shape in self.param_shapes]
+
+ def objective(self, params, data=None, labels=None):
+ params = tf.split(params[0], 2, axis=0)
+ obj = 0.5 * tf.reduce_sum([x ** 4 - 16 * x ** 2 + 5 * x
+ for x in params], 0) + 80.
+ return tf.squeeze(obj)
+
+
+class Matyas(Problem2D):
+ """Matyas function (a function with a single global minimum in a valley)."""
+
+ def init_tensors(self, seed=None):
+ """Returns a list of tensors with the given shape."""
+ return [tf.random_uniform(shape, minval=-10, maxval=10, seed=seed)
+ for shape in self.param_shapes]
+
+ def objective(self, params, data=None, labels=None):
+ x, y = tf.split(params[0], 2, axis=0)
+ obj = 0.26 * (x ** 2 + y ** 2) - 0.48 * x * y
+ return tf.squeeze(obj)
+
+
+class Branin(Problem2D):
+ """Branin function (a function with three global minima)."""
+
+ def init_tensors(self, seed=None):
+ """Returns a list of tensors with the given shape."""
+ x1 = tf.random_uniform((1,), minval=-5., maxval=10.,
+ seed=seed)
+ x2 = tf.random_uniform((1,), minval=0., maxval=15.,
+ seed=seed)
+ return [tf.concat([x1, x2], 0)]
+
+ def objective(self, params, data=None, labels=None):
+ x, y = tf.split(params[0], 2, axis=0)
+
+ # Define some constants.
+ a = 1.
+ b = 5.1 / (4. * np.pi ** 2)
+ c = 5 / np.pi
+ r = 6.
+ s = 10.
+ t = 1 / (8. * np.pi)
+
+ # Evaluate the function.
+ obj = a * (y - b * x ** 2 + c * x - r) ** 2 + s * (1 - t) * tf.cos(x) + s
+ return tf.squeeze(obj)
+
+
+class Michalewicz(Problem2D):
+ """Michalewicz function (has steep ridges and valleys)."""
+
+ def init_tensors(self, seed=None):
+ """Returns a list of tensors with the given shape."""
+ return [tf.random_uniform(shape, minval=0., maxval=np.pi, seed=seed)
+ for shape in self.param_shapes]
+
+ def objective(self, params, data=None, labels=None):
+ x, y = tf.split(params[0], 2, axis=0)
+ m = 5 # Defines how steep the ridges are (larger m => steeper ridges).
+ obj = 2. - (tf.sin(x) * tf.sin(x ** 2 / np.pi) ** (2 * m) +
+ tf.sin(y) * tf.sin(2 * y ** 2 / np.pi) ** (2 * m))
+ return tf.squeeze(obj)
+
+
+class Rescale(Problem):
+ """Takes an existing problem, and rescales all the parameters."""
+
+ def __init__(self, problem_spec, scale=10., noise_stdev=0.0):
+ self.problem = problem_spec.build()
+ self.param_shapes = self.problem.param_shapes
+ self.scale = scale
+
+ super(Rescale, self).__init__(self.param_shapes, random_seed=None,
+ noise_stdev=noise_stdev)
+
+ def init_tensors(self, seed=None):
+ params_raw = self.problem.init_tensors(seed=seed)
+ params = [t * self.scale for t in params_raw]
+ return params
+
+ def objective(self, params, data=None, labels=None):
+ params_raw = [t/self.scale for t in params]
+
+ problem_obj = self.problem.objective(params_raw, data, labels)
+ return problem_obj
+
+
+class SumTask(Problem):
+ """Takes a list of problems and modifies the objective to be their sum."""
+
+ def __init__(self, problem_specs, noise_stdev=0.0):
+ self.problems = [ps.build() for ps in problem_specs]
+ self.param_shapes = []
+ for prob in self.problems:
+ self.param_shapes += prob.param_shapes
+
+ super(SumTask, self).__init__(self.param_shapes, random_seed=None,
+ noise_stdev=noise_stdev)
+
+ def init_tensors(self, seed=None):
+ tensors = []
+ for prob in self.problems:
+ tensors += prob.init_tensors(seed=seed)
+ return tensors
+
+ def objective(self, params, data=None, labels=None):
+ obj = 0.
+ index = 0
+ for prob in self.problems:
+ num_params = len(prob.param_shapes)
+ obj += prob.objective(params[index:index + num_params])
+ index += num_params
+ return obj
+
+
+class IsotropicQuadratic(Problem):
+ """An isotropic quadratic problem."""
+
+ def objective(self, params, data=None, labels=None):
+ return sum([tf.reduce_sum(param ** 2) for param in params])
+
+
+class Norm(Problem):
+ """Takes an existing problem and modifies the objective to be its N-norm."""
+
+ def __init__(self, ndim, random_seed=None, noise_stdev=0.0, norm_power=2.):
+ param_shapes = [(ndim, 1)]
+ super(Norm, self).__init__(param_shapes, random_seed, noise_stdev)
+
+ # Generate a random problem instance.
+ self.w = np.random.randn(ndim, ndim).astype("float32")
+ self.y = np.random.randn(ndim, 1).astype("float32")
+ self.norm_power = norm_power
+
+ def objective(self, params, data=None, labels=None):
+ diff = tf.matmul(self.w, params[0]) - self.y
+ exp = 1. / self.norm_power
+ loss = tf.reduce_sum((tf.abs(diff) + EPSILON) ** self.norm_power) ** exp
+ return loss
+
+
+class LogObjective(Problem):
+ """Takes an existing problem and modifies the objective to be its log."""
+
+ def __init__(self, problem_spec):
+ self.problem = problem_spec.build()
+ self.param_shapes = self.problem.param_shapes
+
+ super(LogObjective, self).__init__(self.param_shapes,
+ random_seed=None,
+ noise_stdev=0.0)
+
+ def objective(self, params, data=None, labels=None):
+ problem_obj = self.problem.objective(params, data, labels)
+ return tf.log(problem_obj + EPSILON) - tf.log(EPSILON)
+
+
+class SparseProblem(Problem):
+ """Takes a problem and sets gradients to 0 with the given probability."""
+
+ def __init__(self,
+ problem_spec,
+ zero_probability=0.99,
+ random_seed=None,
+ noise_stdev=0.0):
+ self.problem = problem_spec.build()
+ self.param_shapes = self.problem.param_shapes
+ self.zero_prob = zero_probability
+
+ super(SparseProblem, self).__init__(self.param_shapes,
+ random_seed=random_seed,
+ noise_stdev=noise_stdev)
+
+ def objective(self, parameters, data=None, labels=None):
+ return self.problem.objective(parameters, data, labels)
+
+ def gradients(self, objective, parameters):
+ grads = tf.gradients(objective, list(parameters))
+
+ new_grads = []
+ for grad in grads:
+ mask = tf.greater(self.zero_prob, tf.random_uniform(grad.get_shape()))
+ zero_grad = tf.zeros_like(grad, dtype=tf.float32)
+ noisy_grad = grad + self.noise_stdev * tf.random_normal(grad.get_shape())
+ new_grads.append(tf.where(mask, zero_grad, noisy_grad))
+ return new_grads
+
+
+class DependencyChain(Problem):
+ """A problem in which parameters must be optimized in order.
+
+ A sequence of parameters which all need to be brought to 0, but where each
+ parameter in the sequence can't be brought to 0 until the preceding one
+ has been. This should take a long time to optimize, with steady
+ (or accelerating) progress throughout the entire process.
+ """
+
+ def __init__(self, ndim, random_seed=None, noise_stdev=0.):
+ param_shapes = [(ndim + 1,)]
+ self.ndim = ndim
+ super(DependencyChain, self).__init__(
+ param_shapes, random_seed, noise_stdev)
+
+ def objective(self, params, data=None, labels=None):
+ terms = params[0][0]**2 + params[0][1:]**2 / (params[0][:-1]**2 + EPSILON)
+ return tf.reduce_sum(terms)
+
+
+class MinMaxWell(Problem):
+ """Problem with global min when both the min and max (absolute) params are 1.
+
+ The gradient for all but two parameters (the min and max) is zero. This
+  should therefore encourage the optimizer to behave sensibly even when
+  parameters have zero gradients, as is common, e.g., in some deep neural nets.
+ """
+
+ def __init__(self, ndim, random_seed=None, noise_stdev=0.):
+ param_shapes = [(ndim,)]
+ self.ndim = ndim
+ super(MinMaxWell, self).__init__(param_shapes, random_seed, noise_stdev)
+
+ def objective(self, params, data=None, labels=None):
+ params_sqr = params[0]**2
+ min_sqr = tf.reduce_min(params_sqr)
+ max_sqr = tf.reduce_max(params_sqr)
+ epsilon = 1e-12
+
+ return max_sqr + 1./min_sqr - 2. + epsilon
+
+
+class OutwardSnake(Problem):
+ """A winding path out to infinity.
+
+ Ideal step length stays constant along the entire path.
+ """
+
+ def __init__(self, ndim, random_seed=None, noise_stdev=0.):
+ param_shapes = [(ndim,)]
+ self.ndim = ndim
+ super(OutwardSnake, self).__init__(param_shapes, random_seed, noise_stdev)
+
+ def objective(self, params, data, labels=None):
+ radius = tf.sqrt(tf.reduce_sum(params[0]**2))
+ rad_loss = tf.reduce_sum(1. / (radius + 1e-6) * data[:, 0])
+
+ sin_dist = params[0][1:] - tf.cos(params[0][:-1]) * np.pi
+ sin_loss = tf.reduce_sum((sin_dist * data[:, 1:])**2)
+
+ return rad_loss + sin_loss
+
+
+class ProjectionQuadratic(Problem):
+ """Dataset consists of different directions to probe. Global min is at 0."""
+
+ def __init__(self, ndim, random_seed=None, noise_stdev=0.):
+ param_shapes = [(1, ndim)]
+ super(ProjectionQuadratic, self).__init__(
+ param_shapes, random_seed, noise_stdev)
+
+ def objective(self, params, data, labels=None):
+ return tf.reduce_sum((params[0] * data)**2)
+
+
+class SumOfQuadratics(Problem):
+
+ def __init__(self, ndim, random_seed=None, noise_stdev=0.):
+ param_shapes = [(1, ndim)]
+ super(SumOfQuadratics, self).__init__(
+ param_shapes, random_seed, noise_stdev)
+
+ def objective(self, params, data, labels=None):
+ epsilon = 1e-12
+ # Assume dataset is designed so that the global minimum is at params=0.
+ # Subtract loss at params=0, so that global minimum has objective value
+ # epsilon (added to avoid floating point issues).
+ return (tf.reduce_sum((params[0] - data)**2) - tf.reduce_sum(data**2) +
+ epsilon)
+
+
+class MatMulAlgorithm(Problem):
+ """A 6-th order polynomial optimization problem.
+
+ This problem is parametrized by n and k. A solution to this problem with
+ objective value exactly zero defines a matrix multiplication algorithm of
+ n x n matrices using k multiplications between matrices. When applied
+ recursively, such an algorithm has complexity O(n^(log_n(k))).
+
+ Given n, it is not known in general which values of k in [n^2, n^3] have a
+ solution. There is always a solution with k = n^3 (this is the naive
+ algorithm).
+
+ In the special case n = 2, it is known that there are solutions for k = {7, 8}
+ but not for k <= 6. For n = 3, it is known that there are exact solutions for
+ 23 <= k <= 27, and there are asymptotic solutions for k = {21, 22}, but the
+ other cases are unknown.
+
+ For a given n and k, if one solution exists then infinitely many solutions
+ exist due to permutation and scaling symmetries in the parameters.
+
+ This is a very hard problem for some values of n and k (e.g. n = 3, k = 21),
+ but very easy for other values (e.g. n = 2, k = 7).
+
+ For a given n and k, the specific formulation of this problem is as follows.
+ Let theta_a, theta_b, theta_c be parameter matrices with respective dimensions
+ [n**2, k], [n**2, k], [k, n**2]. Then for any matrices a, b with shape [n, n],
+ we can form the matrix c with shape [n, n] via the operation:
+ ((vec(a) * theta_a) .* (vec(b) * theta_b)) * theta_c = vec(c), (#)
+ where vec(x) is the operator that flattens a matrix with shape [n, n] into a
+ row vector with shape [1, n**2], * denotes matrix multiplication and .*
+ denotes elementwise multiplication.
+
+ This operation, parameterized by theta_a, theta_b, theta_c, is a matrix
+ multiplication algorithm iff c = a*b for all [n, n] matrices a and b. But
+ actually it suffices to verify all combinations of one-hot matrices a and b,
+ of which there are n**4 such combinations. This gives a batch of n**4 matrix
+ triplets (a, b, c) such that equation (#) must hold for each triplet. We solve
+ for theta_a, theta_b, theta_c by minimizing the sum of squares of errors
+ across this batch.
+
+ Finally, theta_c can be computed from theta_a and theta_b. Therefore it
+ suffices to learn theta_a and theta_b, from which theta_c and therefore the
+ objective value can be computed.
+ """
+
+ def __init__(self, n, k):
+ assert isinstance(n, int), "n must be an integer"
+ assert isinstance(k, int), "k must be an integer"
+ assert n >= 2, "Must have n >= 2"
+ assert k >= n**2 and k <= n**3, "Must have n**2 <= k <= n**3"
+
+ param_shapes = [(n**2, k), (n**2, k)] # theta_a, theta_b
+ super(MatMulAlgorithm, self).__init__(
+ param_shapes, random_seed=None, noise_stdev=0.0)
+
+ self.n = n
+ self.k = k
+
+ # Build a batch of all combinations of one-hot matrices a, b, and their
+ # respective products c. Correctness on this batch is a necessary and
+ # sufficient condition for the algorithm to be valid. The number of matrices
+ # in {a, b, c}_3d is n**4 and each matrix is n x n.
+ onehots = np.identity(n**2).reshape(n**2, n, n)
+ a_3d = np.repeat(onehots, n**2, axis=0)
+ b_3d = np.tile(onehots, [n**2, 1, 1])
+ c_3d = np.matmul(a_3d, b_3d)
+
+ # Convert the batch to 2D Tensors.
+ self.a = tf.constant(a_3d.reshape(n**4, n**2), tf.float32, name="a")
+ self.b = tf.constant(b_3d.reshape(n**4, n**2), tf.float32, name="b")
+ self.c = tf.constant(c_3d.reshape(n**4, n**2), tf.float32, name="c")
+
+ def init_tensors(self, seed=None):
+ # Initialize params such that the columns of theta_a and theta_b have L2
+ # norm 1.
+ def _param_initializer(shape, seed=None):
+ x = tf.random_normal(shape, dtype=tf.float32, seed=seed)
+ return tf.transpose(tf.nn.l2_normalize(tf.transpose(x), 1))
+
+ return [_param_initializer(shape, seed) for shape in self.param_shapes]
+
+ def objective(self, parameters, data=None, labels=None):
+ theta_a = parameters[0]
+ theta_b = parameters[1]
+
+ # Compute theta_c from theta_a and theta_b.
+ p = tf.matmul(self.a, theta_a) * tf.matmul(self.b, theta_b)
+ p_trans = tf.transpose(p, name="p_trans")
+ p_inv = tf.matmul(
+ tf.matrix_inverse(tf.matmul(p_trans, p)), p_trans, name="p_inv")
+ theta_c = tf.matmul(p_inv, self.c, name="theta_c")
+
+ # Compute the "predicted" value of c.
+ c_hat = tf.matmul(p, theta_c, name="c_hat")
+
+ # Compute the loss (sum of squared errors).
+ loss = tf.reduce_sum((c_hat - self.c)**2, name="loss")
+
+ return loss
+
+
+def matmul_problem_sequence(n, k_min, k_max):
+ """Helper to generate a sequence of matrix multiplication problems."""
+ return [(_Spec(MatMulAlgorithm, (n, k), {}), None, None)
+ for k in range(k_min, k_max + 1)]
+
+
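+# Illustrative example (not used elsewhere in the code): Strassen's algorithm
+# corresponds to an exact zero of the n = 2, k = 7 instance, i.e. multiplying
+# 2 x 2 matrices with 7 scalar multiplications rather than 8.
+#
+#   strassen_like = MatMulAlgorithm(n=2, k=7)
+#   params = strassen_like.init_tensors()
+#   loss = strassen_like.objective(params)  # zero iff params encode a valid algorithm
+#
+#   # Or enumerate the 2 x 2 problems for k = 5 through 8:
+#   problems_2x2 = matmul_problem_sequence(2, 5, 8)
+
+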
+def init_fixed_variables(arrays):
+ with tf.variable_scope(PARAMETER_SCOPE):
+ params = [tf.Variable(arr.astype("float32")) for arr in arrays]
+ return params
+
+
+def _mesh(xlim, ylim, n):
+ """Creates a 2D meshgrid covering the given ranges.
+
+ Args:
+ xlim: int that defines the desired x-range (-xlim, xlim)
+ ylim: int that defines the desired y-range (-ylim, ylim)
+ n: number of points in each dimension of the mesh
+
+ Returns:
+ xm: 2D array of x-values in the mesh
+ ym: 2D array of y-values in the mesh
+ """
+ return np.meshgrid(np.linspace(-xlim, xlim, n),
+ np.linspace(-ylim, ylim, n))
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/problems/problem_sets.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/problems/problem_sets.py"
new file mode 100644
index 00000000..eaf9273b
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/problems/problem_sets.py"
@@ -0,0 +1,561 @@
+# Copyright 2017 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Groups of problems of different types for optimizer training."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+import tensorflow as tf
+
+from learned_optimizer.problems import datasets
+from learned_optimizer.problems import model_adapter
+from learned_optimizer.problems import problem_generator as pg
+from learned_optimizer.problems import problem_spec
+
+_Spec = problem_spec.Spec
+
+
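+# Each problem-set function below returns a list of (problem Spec, dataset,
+# batch_size) tuples; dataset and batch_size are None for problems that do not
+# consume any data (e.g. the 2D test functions and plain quadratics), and the
+# integer is presumably the minibatch size used when a dataset is present.
+
+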
+def quadratic_problems():
+ return [
+ (_Spec(pg.Quadratic, (20,), {}), None, None),
+ (_Spec(pg.Quadratic, (25,), {}), None, None),
+ (_Spec(pg.Quadratic, (50,), {}), None, None),
+ (_Spec(pg.Quadratic, (100,), {}), None, None),
+ ]
+
+
+# Note: this group contains one non-noisy problem for historical reasons. The
+# original training set before the refactor included this set of quadratics.
+def quadratic_problems_noisy():
+ return [
+ (_Spec(pg.Quadratic, (20,), {"noise_stdev": 0.5}), None, None),
+ (_Spec(pg.Quadratic, (25,), {"noise_stdev": 0.0}), None, None),
+ (_Spec(pg.Quadratic, (50,), {"noise_stdev": 1.0}), None, None),
+ (_Spec(pg.Quadratic, (100,), {"noise_stdev": 2.0}), None, None),
+ ]
+
+
+def quadratic_problems_large():
+ return [
+ (_Spec(pg.Quadratic, (784,), {}), None, None),
+ (_Spec(pg.Quadratic, (1024,), {}), None, None),
+ (_Spec(pg.Quadratic, (2048,), {}), None, None),
+ ]
+
+
+def bowl_problems():
+ return [
+ (_Spec(pg.Bowl, (0.1,), {"noise_stdev": 0.0}), None, None),
+ (_Spec(pg.Bowl, (1.0,), {"noise_stdev": 0.0}), None, None),
+ (_Spec(pg.Bowl, (5.0,), {"noise_stdev": 0.0}), None, None),
+ (_Spec(pg.Bowl, (5.0,), {"noise_stdev": 0.0, "angle": np.pi / 4.}),
+ None, None),
+ ]
+
+
+def bowl_problems_noisy():
+ return [
+ (_Spec(pg.Bowl, (0.1,), {"noise_stdev": 0.1}), None, None),
+ (_Spec(pg.Bowl, (1.0,), {"noise_stdev": 0.1}), None, None),
+ (_Spec(pg.Bowl, (5.0,), {"noise_stdev": 0.1}), None, None),
+ (_Spec(pg.Bowl, (5.0,), {"noise_stdev": 0.1, "angle": np.pi / 4.}),
+ None, None),
+ ]
+
+
+def sparse_softmax_2_class_sparse_problems():
+ return [(_Spec(pg.SparseSoftmaxRegression, (5, 2), {"noise_stdev": 0.0}),
+ datasets.noisy_parity_class(5, random_seed=123), 23),]
+
+
+def one_hot_sparse_softmax_2_class_sparse_problems():
+ return [
+ (_Spec(pg.OneHotSparseSoftmaxRegression, (5, 2), {"noise_stdev": 0.0}),
+ datasets.noisy_parity_class(5, random_seed=123), 23),
+ ]
+
+
+def softmax_2_class_problems():
+ return [
+ (_Spec(pg.SoftmaxRegression, (10, 2), {}), datasets.random(
+ 10, 1000, random_seed=123, sep=2.0), 100),
+ (_Spec(pg.SoftmaxRegression, (100, 2), {}), datasets.random(
+ 100, 1000, random_seed=123), 50),
+ (_Spec(pg.SoftmaxRegression, (200, 2), {}), datasets.random(
+ 200, 1000, random_seed=123, sep=1.5), 20),
+ (_Spec(pg.SoftmaxRegression, (256, 2), {}), datasets.random(
+ 256, 1000, random_seed=123, sep=1.5), 100),
+ ]
+
+
+def softmax_2_class_problems_noisy():
+ return [
+ (_Spec(pg.SoftmaxRegression, (10, 2), {"noise_stdev": 0.5}),
+ datasets.random(10, 1000, random_seed=123, sep=2.0), 100),
+ (_Spec(pg.SoftmaxRegression, (100, 2), {"noise_stdev": 0.1}),
+ datasets.random(100, 1000, random_seed=123), 50),
+ (_Spec(pg.SoftmaxRegression, (200, 2), {"noise_stdev": 0.1}),
+ datasets.random(200, 1000, random_seed=123, sep=1.5), 20),
+ (_Spec(pg.SoftmaxRegression, (256, 2), {"noise_stdev": 0.5}),
+ datasets.random(256, 1000, random_seed=123, sep=1.5), 100),
+ ]
+
+
+def optimization_test_problems():
+ return [
+ (_Spec(pg.Ackley, (), {}), None, None),
+ (_Spec(pg.Beale, (), {}), None, None),
+ (_Spec(pg.Booth, (), {}), None, None),
+ (_Spec(pg.Branin, (), {}), None, None),
+ (_Spec(pg.LogSumExp, (), {}), None, None),
+ (_Spec(pg.Matyas, (), {}), None, None),
+ (_Spec(pg.Michalewicz, (), {}), None, None),
+ (_Spec(pg.Rosenbrock, (), {}), None, None),
+ (_Spec(pg.StyblinskiTang, (), {}), None, None),
+ ]
+
+
+def optimization_test_problems_noisy():
+ return [
+ (_Spec(pg.Ackley, (), {"noise_stdev": 1.}), None, None),
+ (_Spec(pg.Beale, (), {"noise_stdev": 1.}), None, None),
+ (_Spec(pg.Booth, (), {"noise_stdev": 1.}), None, None),
+ (_Spec(pg.Branin, (), {"noise_stdev": 1.}), None, None),
+ (_Spec(pg.LogSumExp, (), {"noise_stdev": 1.}), None, None),
+ (_Spec(pg.Matyas, (), {"noise_stdev": 1.}), None, None),
+ (_Spec(pg.Michalewicz, (), {"noise_stdev": 1.}), None, None),
+ (_Spec(pg.Rosenbrock, (), {"noise_stdev": 1.}), None, None),
+ (_Spec(pg.StyblinskiTang, (), {"noise_stdev": 1.}), None, None),
+ ]
+
+
+def fully_connected_random_2_class_problems():
+ return [
+ (_Spec(pg.FullyConnected, (8, 2),
+ {"hidden_sizes": (8, 5,), "activation": tf.nn.sigmoid}),
+ datasets.random_mlp(8, 1000), 10),
+ (_Spec(pg.FullyConnected, (12, 2),
+ {"hidden_sizes": (8, 5, 3), "activation": tf.nn.sigmoid}),
+ datasets.random_mlp(12, 1000), 200),
+ (_Spec(pg.FullyConnected, (5, 2),
+ {"hidden_sizes": (4, 4, 4, 4,), "activation": tf.nn.sigmoid}),
+ datasets.random_mlp(5, 1000), 100),
+ (_Spec(pg.FullyConnected, (11, 2),
+ {"hidden_sizes": (4, 5, 6,), "activation": tf.nn.sigmoid}),
+ datasets.random_mlp(11, 1000), 64),
+ (_Spec(pg.FullyConnected, (9, 2),
+ {"hidden_sizes": (8,), "activation": tf.nn.sigmoid}),
+ datasets.random_mlp(9, 1000), 128),
+ (_Spec(pg.FullyConnected, (7, 2),
+ {"hidden_sizes": (8, 5,), "activation": tf.nn.sigmoid}),
+ datasets.random_mlp(7, 1000), 16),
+ (_Spec(pg.FullyConnected, (8, 2),
+ {"hidden_sizes": (32, 64,), "activation": tf.nn.sigmoid}),
+ datasets.random_mlp(8, 1000), 10),
+ (_Spec(pg.FullyConnected, (12, 2),
+ {"hidden_sizes": (16, 8, 3), "activation": tf.nn.sigmoid}),
+ datasets.random_mlp(12, 1000), 200),
+ (_Spec(pg.FullyConnected, (5, 2),
+ {"hidden_sizes": (8, 8, 8, 8,), "activation": tf.nn.sigmoid}),
+ datasets.random_mlp(5, 1000), 100),
+ (_Spec(pg.FullyConnected, (11, 2),
+ {"hidden_sizes": (10, 12, 12,), "activation": tf.nn.sigmoid}),
+ datasets.random_mlp(11, 1000), 64),
+ (_Spec(pg.FullyConnected, (9, 2),
+ {"hidden_sizes": (32,), "activation": tf.nn.sigmoid}),
+ datasets.random_mlp(9, 1000), 128),
+ (_Spec(pg.FullyConnected, (7, 2),
+ {"hidden_sizes": (32, 64,), "activation": tf.nn.sigmoid}),
+ datasets.random_mlp(7, 1000), 16),
+ ]
+
+
+def matmul_problems():
+ return sum([
+ pg.matmul_problem_sequence(2, 5, 8),
+ pg.matmul_problem_sequence(3, 19, 24)], [])
+
+
+def log_objective_problems():
+ return [
+ (_Spec(pg.LogObjective, [_Spec(pg.Quadratic, (20,), {})], {}),
+ None, None),
+ (_Spec(pg.LogObjective, [_Spec(pg.Quadratic, (50,), {})], {}),
+ None, None),
+ (_Spec(pg.LogObjective, [_Spec(pg.Quadratic, (100,), {})], {}),
+ None, None),
+ (_Spec(pg.LogObjective, [_Spec(pg.Bowl, (0.1,), {})], {}), None, None),
+ (_Spec(pg.LogObjective, [_Spec(pg.Bowl, (1.0,), {})], {}), None, None),
+ (_Spec(pg.LogObjective, [_Spec(pg.Bowl, (5.0,), {})], {}), None, None),
+ ]
+
+
+def sparse_gradient_problems():
+ return [
+ (_Spec(pg.SparseProblem, [_Spec(pg.Quadratic, (20,), {})], {}),
+ None, None),
+ (_Spec(pg.SparseProblem, [_Spec(pg.Quadratic, (50,), {})], {}),
+ None, None),
+ (_Spec(pg.SparseProblem, [_Spec(pg.Quadratic, (100,), {})], {}),
+ None, None),
+ (_Spec(pg.SparseProblem, [_Spec(pg.Bowl, (0.1,), {})], {}), None, None),
+ (_Spec(pg.SparseProblem, [_Spec(pg.Bowl, (1.0,), {})], {}), None, None),
+ (_Spec(pg.SparseProblem, [_Spec(pg.Bowl, (5.0,), {})], {}), None, None),
+ ]
+
+
+def sparse_gradient_problems_mlp():
+ return [
+ (_Spec(pg.SparseProblem, [
+ _Spec(pg.FullyConnected, (8, 2), {
+ "hidden_sizes": (8, 5,),
+ "activation": tf.nn.sigmoid
+ })
+ ], {}), datasets.random_mlp(8, 1000), 10),
+ (_Spec(pg.SparseProblem, [
+ _Spec(pg.FullyConnected, (12, 2), {
+ "hidden_sizes": (8, 5, 3),
+ "activation": tf.nn.sigmoid
+ })
+ ], {}), datasets.random_mlp(12, 1000), 200),
+ (_Spec(pg.SparseProblem, [
+ _Spec(pg.FullyConnected, (5, 2), {
+ "hidden_sizes": (4, 4, 4, 4,),
+ "activation": tf.nn.sigmoid
+ })
+ ], {}), datasets.random_mlp(5, 1000), 100),
+ ]
+
+
+def rescale_problems():
+ return [
+ (_Spec(pg.Rescale, [_Spec(pg.Norm, (18,), {"norm_power": 2.5})],
+ {"scale": 0.123}), None, None),
+ (_Spec(pg.Rescale, [_Spec(pg.Norm, (18,), {"norm_power": 1.5})],
+ {"scale": 8}), None, None),
+ (_Spec(pg.Rescale, [_Spec(pg.Norm, (18,), {"norm_power": 2.})],
+ {"scale": 50}), None, None),
+ (_Spec(pg.Rescale, [_Spec(pg.Norm, (18,), {"norm_power": 3.})],
+ {"scale": 200}), None, None),
+ (_Spec(pg.Rescale, [_Spec(pg.Norm, (18,), {"norm_power": 1.})],
+ {"scale": 1000}), None, None),
+ (_Spec(pg.Rescale, [_Spec(pg.Quadratic, (20,), {})], {"scale": 0.1}),
+ None, None),
+ (_Spec(pg.Rescale, [_Spec(pg.Quadratic, (25,), {})], {"scale": 10.}),
+ None, None),
+ (_Spec(pg.Rescale, [_Spec(pg.Quadratic, (50,), {})], {"scale": 350.}),
+ None, None),
+ (_Spec(pg.Rescale, [_Spec(pg.Quadratic, (100,), {})], {"scale": 132}),
+ None, None),
+ ]
+
+
+def norm_problems():
+ return [
+      # Norm powers < 1 cause NaN gradients early in training.
+ (_Spec(pg.Norm, (27,), {"norm_power": 1.}), None, None),
+ (_Spec(pg.Norm, (25,), {"norm_power": 2.}), None, None),
+ (_Spec(pg.Norm, (22,), {"norm_power": 3.}), None, None),
+ ]
+
+
+def norm_problems_noisy():
+ return [
+      # Norm powers < 1 cause NaN gradients early in training.
+ (_Spec(pg.Norm, (19,), {"noise_stdev": .1, "norm_power": 1.}),
+ None, None),
+ (_Spec(pg.Norm, (26,), {"noise_stdev": .1, "norm_power": 2.}),
+ None, None),
+ (_Spec(pg.Norm, (23,), {"noise_stdev": .1, "norm_power": 3.}),
+ None, None),
+ ]
+
+
+def sum_problems():
+ return [
+ (_Spec(pg.SumTask, [[
+ _Spec(pg.Quadratic, (11,), {}),
+ _Spec(pg.Quadratic, (3,), {}),
+ _Spec(pg.Quadratic, (9,), {}),
+ _Spec(pg.Quadratic, (7,), {}),
+ _Spec(pg.Quadratic, (5,), {}),
+ _Spec(pg.Quadratic, (13,), {}),
+ _Spec(pg.Quadratic, (12,), {})
+ ]], {}), None, None),
+ (_Spec(pg.SumTask, [[
+ _Spec(pg.Norm, (18,), {"norm_power": 3}),
+ _Spec(pg.Quadratic, (25,), {}),
+ _Spec(pg.Rosenbrock, (), {})
+ ]], {}), None, None),
+ (_Spec(pg.SumTask, [[
+ _Spec(pg.Rosenbrock, (), {}),
+ _Spec(pg.LogSumExp, (), {}),
+ _Spec(pg.Ackley, (), {}),
+ _Spec(pg.Beale, (), {}),
+ _Spec(pg.Booth, (), {}),
+ _Spec(pg.StyblinskiTang, (), {}),
+ _Spec(pg.Matyas, (), {}),
+ _Spec(pg.Branin, (), {}),
+ _Spec(pg.Michalewicz, (), {})
+ ]], {}), None, None),
+ (_Spec(pg.SumTask, [[
+ _Spec(pg.Rosenbrock, (), {}),
+ _Spec(pg.LogSumExp, (), {}),
+ _Spec(pg.Ackley, (), {}),
+ _Spec(pg.Beale, (), {}),
+ _Spec(pg.Booth, (), {}),
+ _Spec(pg.StyblinskiTang, (), {}),
+ _Spec(pg.Matyas, (), {}),
+ _Spec(pg.Branin, (), {}),
+ _Spec(pg.Michalewicz, (), {}),
+ _Spec(pg.Quadratic, (5,), {}),
+ _Spec(pg.Quadratic, (13,), {})
+ ]], {}), None, None),
+ (_Spec(pg.SumTask, [[
+ _Spec(pg.Quadratic, (11,), {}),
+ _Spec(pg.Quadratic, (3,), {})
+ ]], {}), None, None),
+ (_Spec(pg.SumTask, [[
+ _Spec(pg.Rosenbrock, (), {}),
+ _Spec(pg.LogSumExp, (), {}),
+ _Spec(pg.Ackley, (), {})
+ ]], {}), None, None),
+ ]
+
+
+def sum_problems_noisy():
+ return [
+ (_Spec(pg.SumTask, [[
+ _Spec(pg.Quadratic, (11,), {"noise_stdev": 0.1}),
+ _Spec(pg.Quadratic, (3,), {"noise_stdev": 0.1}),
+ _Spec(pg.Quadratic, (9,), {"noise_stdev": 0.1}),
+ _Spec(pg.Quadratic, (7,), {"noise_stdev": 0.1}),
+ _Spec(pg.Quadratic, (5,), {"noise_stdev": 0.1}),
+ _Spec(pg.Quadratic, (13,), {"noise_stdev": 0.1}),
+ _Spec(pg.Quadratic, (12,), {"noise_stdev": 0.1})
+ ]], {}), None, None),
+ (_Spec(pg.SumTask, [[
+ _Spec(pg.Rosenbrock, (), {}),
+ _Spec(pg.LogSumExp, (), {}),
+ _Spec(pg.Ackley, (), {}),
+ _Spec(pg.Beale, (), {}),
+ _Spec(pg.Booth, (), {}),
+ _Spec(pg.StyblinskiTang, (), {}),
+ _Spec(pg.Matyas, (), {}),
+ _Spec(pg.Branin, (), {}),
+ _Spec(pg.Michalewicz, (), {}),
+ _Spec(pg.Quadratic, (5,), {}),
+ _Spec(pg.Quadratic, (13,), {"noise_stdev": 0.5})
+ ]], {}), None, None),
+ ]
+
+
+def dependency_chain_problems():
+ return [
+ (_Spec(pg.DependencyChain, (20,), {}), datasets.random_binary(
+ 20, 1000), 100),
+ (_Spec(pg.DependencyChain, (12,), {}), datasets.random_binary(
+ 12, 200), 10),
+ (_Spec(pg.DependencyChain, (56,), {}), datasets.random_binary(
+ 56, 5000), 100),
+ (_Spec(pg.DependencyChain, (64,), {}), datasets.random_binary(
+ 64, 1000), 50),
+ (_Spec(pg.DependencyChain, (13,), {}), datasets.random_binary(
+ 13, 10000), 50),
+ (_Spec(pg.DependencyChain, (20,), {}), datasets.random_binary(
+ 20, 1000), 128),
+ (_Spec(pg.DependencyChain, (12,), {}), datasets.random_binary(
+ 12, 300), 16),
+ (_Spec(pg.DependencyChain, (56,), {}), datasets.random_binary(
+ 56, 5000), 128),
+ (_Spec(pg.DependencyChain, (64,), {}), datasets.random_binary(
+ 64, 1000), 64),
+ (_Spec(pg.DependencyChain, (13,), {}), datasets.random_binary(
+ 13, 10000), 32),
+ ]
+
+
+def outward_snake_problems():
+ return [
+ (_Spec(pg.OutwardSnake, (20,), {}), datasets.random_binary(
+ 20, 1000), 100),
+ (_Spec(pg.OutwardSnake, (12,), {}), datasets.random_binary(
+ 12, 200), 10),
+ (_Spec(pg.OutwardSnake, (56,), {}), datasets.random_binary(
+ 56, 5000), 100),
+ (_Spec(pg.OutwardSnake, (64,), {}), datasets.random_binary(
+ 64, 1000), 50),
+ (_Spec(pg.OutwardSnake, (13,), {}), datasets.random_binary(
+ 13, 10000), 50),
+ (_Spec(pg.OutwardSnake, (20,), {}), datasets.random_binary(
+ 20, 1000), 128),
+ (_Spec(pg.OutwardSnake, (12,), {}), datasets.random_binary(
+ 12, 300), 16),
+ (_Spec(pg.OutwardSnake, (56,), {}), datasets.random_binary(
+ 56, 5000), 128),
+ (_Spec(pg.OutwardSnake, (64,), {}), datasets.random_binary(
+ 64, 1000), 64),
+ (_Spec(pg.OutwardSnake, (13,), {}), datasets.random_binary(
+ 13, 10000), 32),
+ ]
+
+
+def min_max_well_problems():
+ return [
+ (_Spec(pg.MinMaxWell, (20,), {}), None, None),
+ (_Spec(pg.MinMaxWell, (12,), {}), None, None),
+ (_Spec(pg.MinMaxWell, (56,), {}), None, None),
+ (_Spec(pg.MinMaxWell, (64,), {}), None, None),
+ (_Spec(pg.MinMaxWell, (13,), {}), None, None),
+ ]
+
+
+def sum_of_quadratics_problems():
+ return [
+ (_Spec(pg.SumOfQuadratics, (20,), {}),
+ datasets.random_symmetric(20, 1000), 100),
+ (_Spec(pg.SumOfQuadratics, (12,), {}),
+ datasets.random_symmetric(12, 100), 10),
+ (_Spec(pg.SumOfQuadratics, (56,), {}),
+ datasets.random_symmetric(56, 5000), 100),
+ (_Spec(pg.SumOfQuadratics, (64,), {}),
+ datasets.random_symmetric(64, 1000), 50),
+ (_Spec(pg.SumOfQuadratics, (13,), {}),
+ datasets.random_symmetric(13, 10000), 50),
+ (_Spec(pg.SumOfQuadratics, (20,), {}),
+ datasets.random_symmetric(20, 1000), 128),
+ (_Spec(pg.SumOfQuadratics, (12,), {}),
+ datasets.random_symmetric(12, 100), 16),
+ (_Spec(pg.SumOfQuadratics, (56,), {}),
+ datasets.random_symmetric(56, 5000), 128),
+ (_Spec(pg.SumOfQuadratics, (64,), {}),
+ datasets.random_symmetric(64, 1000), 64),
+ (_Spec(pg.SumOfQuadratics, (13,), {}),
+ datasets.random_symmetric(13, 10000), 32),
+ ]
+
+
+def projection_quadratic_problems():
+ return [
+ (_Spec(pg.ProjectionQuadratic, (20,), {}),
+ datasets.random_symmetric(20, 1000), 100),
+ (_Spec(pg.ProjectionQuadratic, (12,), {}),
+ datasets.random_symmetric(12, 100), 10),
+ (_Spec(pg.ProjectionQuadratic, (56,), {}),
+ datasets.random_symmetric(56, 5000), 100),
+ (_Spec(pg.ProjectionQuadratic, (64,), {}),
+ datasets.random_symmetric(64, 1000), 50),
+ (_Spec(pg.ProjectionQuadratic, (13,), {}),
+ datasets.random_symmetric(13, 10000), 50),
+ (_Spec(pg.ProjectionQuadratic, (20,), {}),
+ datasets.random_symmetric(20, 1000), 128),
+ (_Spec(pg.ProjectionQuadratic, (12,), {}),
+ datasets.random_symmetric(12, 100), 16),
+ (_Spec(pg.ProjectionQuadratic, (56,), {}),
+ datasets.random_symmetric(56, 5000), 128),
+ (_Spec(pg.ProjectionQuadratic, (64,), {}),
+ datasets.random_symmetric(64, 1000), 64),
+ (_Spec(pg.ProjectionQuadratic, (13,), {}),
+ datasets.random_symmetric(13, 10000), 32),
+ ]
+
+
+def adapter_rosenbrock_local():
+ return [(_Spec(model_adapter.ModelAdapter,
+ (pg.make_rosenbrock_loss_and_init,), {}), None, None),]
+
+
+def adapter_rosenbrock_worker():
+ return [(_Spec(model_adapter.ModelAdapter,
+ (pg.make_rosenbrock_loss_and_init,),
+ {"device": "/job:worker"}), None, None),]
+
+
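+# The helpers below build fixed initial parameters for the MLP test problems:
+# weight matrices use He-style scaling (stddev sqrt(2 / fan_in)) and biases
+# are drawn with a small stddev of 0.1.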
+def _test_problem_mlp_scaled_init_small():
+ return [
+ np.random.randn(10, 32) * np.sqrt(2./10),
+ np.random.randn(32,) * 0.1,
+ np.random.randn(32, 64) * np.sqrt(2./32.),
+ np.random.randn(64,) * 0.1,
+ np.random.randn(64, 2) * np.sqrt(2./64.),
+ np.random.randn(2,) * 0.1
+ ]
+
+
+def _test_problem_mlp_scaled_init_large():
+ return [
+ np.random.randn(20, 32) * np.sqrt(2./20),
+ np.random.randn(32,) * 0.1,
+ np.random.randn(32, 64) * np.sqrt(2./32.),
+ np.random.randn(64,) * 0.1,
+ np.random.randn(64, 10) * np.sqrt(2./64.),
+ np.random.randn(10,) * 0.1
+ ]
+
+
+def _test_problem_mlp_scaled_init_mnist():
+ return [
+ np.random.randn(784, 64) * np.sqrt(2./784.),
+ np.random.randn(64,) * 0.1,
+      np.random.randn(64, 10) * np.sqrt(2./64.),
+ np.random.randn(10,) * 0.1,
+ ]
+
+
+# Wrap this construction in a function to avoid UnparsedFlagAccessError
+def test_problems():
+ """Test problems for visualizations."""
+  # Unlike the training problem sets, these test problems are made up of
+  # length-5 tuples. The final two items in each tuple are the name of the
+  # problem and either an initialization random seed or explicit initial
+  # parameter values (None where neither applies), used for testing
+  # consistency.
+ tp = [
+ (_Spec(pg.Quadratic, (20,), {"random_seed": 1234}), None, None,
+ "quad_problem", 5678),
+ (_Spec(pg.Quadratic, (20,), {"noise_stdev": 1.0, "random_seed": 1234}),
+ None, None, "quad_problem_noise", 5678),
+ (_Spec(pg.Rosenbrock, (), {"random_seed": 1234}), None, None,
+ "rosenbrock", 5678),
+ (_Spec(pg.Rosenbrock, (), {"random_seed": 1234, "noise_stdev": 1.0}),
+ None, None, "rosenbrock_noise", 5678),
+ (_Spec(pg.SoftmaxRegression, (10, 2), {}), datasets.random(
+ 10, 10000, random_seed=1234), 100, "softmax", 5678),
+ (_Spec(pg.SoftmaxRegression, (10, 2), {"noise_stdev": 1.0}),
+ datasets.random(10, 10000, random_seed=1234), 100, "softmax_noise",
+ 5678),
+ (_Spec(pg.FullyConnected, (10, 2), {}), datasets.random(
+ 10, 10000, random_seed=1234), 100, "mlp_small",
+ _test_problem_mlp_scaled_init_small()),
+ (_Spec(pg.FullyConnected, (20, 10), {}), datasets.random(
+ 20, 10000, n_classes=10, random_seed=1234), 100, "mlp_large",
+ _test_problem_mlp_scaled_init_large()),
+ (_Spec(pg.FullyConnected, (784, 10),
+ {"hidden_sizes": (64,), "activation": tf.nn.sigmoid}),
+ datasets.mnist(), 64, "mlp_mnist_sigmoid",
+ _test_problem_mlp_scaled_init_mnist()),
+ (_Spec(pg.FullyConnected, (784, 10),
+ {"hidden_sizes": (64,), "activation": tf.nn.relu}),
+ datasets.mnist(), 64, "mlp_mnist_relu",
+ _test_problem_mlp_scaled_init_mnist()),
+ (_Spec(pg.ConvNet, ((1, 28, 28), 10, [(3, 3, 8), (5, 5, 8)]),
+ {"activation": tf.nn.sigmoid}), datasets.mnist(), 64,
+ "convnet_mnist_sigmoid", None),
+ (_Spec(pg.ConvNet, ((1, 28, 28), 10, [(3, 3, 8), (5, 5, 8)]),
+ {"activation": tf.nn.relu}), datasets.mnist(), 64,
+ "convnet_mnist_relu", None),
+ ]
+ return tp
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/problems/problem_spec.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/problems/problem_spec.py"
new file mode 100644
index 00000000..e30c47b2
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learned_optimizer/problems/problem_spec.py"
@@ -0,0 +1,33 @@
+# Copyright 2017 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Wrapper around a training problem."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from collections import namedtuple
+
+
+class Spec(namedtuple("Spec", "callable args kwargs")):
+ """Syntactic sugar for keeping track of a function/class + args."""
+
+ # Since this is an immutable object, we don't need to reserve slots.
+ __slots__ = ()
+
+ def build(self):
+ """Returns the output of the callable."""
+ return self.callable(*self.args, **self.kwargs)
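+
+
+# A rough usage sketch, following how the problems module stores specs
+# (the Quadratic problem below is just illustrative):
+#
+#   spec = Spec(problem_generator.Quadratic, (20,), {"random_seed": 1234})
+#   problem = spec.build()  # same as Quadratic(20, random_seed=1234)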
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_to_remember_rare_events/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_to_remember_rare_events/README.md"
new file mode 100644
index 00000000..f06c191a
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_to_remember_rare_events/README.md"
@@ -0,0 +1,55 @@
+Code for the Memory Module as described
+in "Learning to Remember Rare Events" by
+Lukasz Kaiser, Ofir Nachum, Aurko Roy, and Samy Bengio,
+published as a conference paper at ICLR 2017.
+
+Requirements:
+* TensorFlow (see tensorflow.org for how to install)
+* Some basic command-line utilities (git, unzip).
+
+Description:
+
+The general memory module is located in memory.py.
+Some code is provided to see the memory module in
+action on the standard Omniglot dataset.
+Download and set up the dataset using data_utils.py,
+then run the training script train.py
+(see example commands below).
+
+Note that the structure and parameters of the model
+are optimized for the data preparation as provided.
+
+Quick Start:
+
+First, download and set up the Omniglot data by running
+
+```
+python data_utils.py
+```
+
+Then run the training script:
+
+```
+python train.py --memory_size=8192 \
+ --batch_size=16 --validation_length=50 \
+ --episode_width=5 --episode_length=30
+```
+
+The first validation batch may look like this (although it is noisy):
+```
+0-shot: 0.040, 1-shot: 0.404, 2-shot: 0.516, 3-shot: 0.604,
+ 4-shot: 0.656, 5-shot: 0.684
+```
+At step 500 you may see something like this:
+```
+0-shot: 0.036, 1-shot: 0.836, 2-shot: 0.900, 3-shot: 0.940,
+ 4-shot: 0.944, 5-shot: 0.916
+```
+At step 4000 you may see something like this:
+```
+0-shot: 0.044, 1-shot: 0.960, 2-shot: 1.000, 3-shot: 0.988,
+ 4-shot: 0.972, 5-shot: 0.992
+```
+
+Maintained by Ofir Nachum (ofirnachum) and
+Lukasz Kaiser (lukaszkaiser).
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_to_remember_rare_events/data_utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_to_remember_rare_events/data_utils.py"
new file mode 100644
index 00000000..03d5dafb
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_to_remember_rare_events/data_utils.py"
@@ -0,0 +1,243 @@
+# Copyright 2017 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# ==============================================================================
+"""Data loading and other utilities.
+
+Use this file to first copy over and pre-process the Omniglot dataset.
+Simply call
+ python data_utils.py
+"""
+
+import logging
+import os
+import subprocess
+from six.moves import cPickle as pickle
+
+import numpy as np
+from scipy.misc import imresize
+from scipy.misc import imrotate
+from scipy.ndimage import imread
+from six.moves import xrange
+import tensorflow as tf
+
+
+MAIN_DIR = ''
+REPO_LOCATION = 'https://github.com/brendenlake/omniglot.git'
+REPO_DIR = os.path.join(MAIN_DIR, 'omniglot')
+DATA_DIR = os.path.join(REPO_DIR, 'python')
+TRAIN_DIR = os.path.join(DATA_DIR, 'images_background')
+TEST_DIR = os.path.join(DATA_DIR, 'images_evaluation')
+DATA_FILE_FORMAT = os.path.join(MAIN_DIR, '%s_omni.pkl')
+
+TRAIN_ROTATIONS = True # augment training data with rotations
+TEST_ROTATIONS = False # augment testing data with rotations
+IMAGE_ORIGINAL_SIZE = 105
+IMAGE_NEW_SIZE = 28
+
+
+def get_data():
+ """Get data in form suitable for episodic training.
+
+ Returns:
+ Train and test data as dictionaries mapping
+ label to list of examples.
+ """
+ with tf.gfile.GFile(DATA_FILE_FORMAT % 'train', 'rb') as f:
+ processed_train_data = pickle.load(f)
+ with tf.gfile.GFile(DATA_FILE_FORMAT % 'test', 'rb') as f:
+ processed_test_data = pickle.load(f)
+
+ train_data = {}
+ test_data = {}
+
+ for data, processed_data in zip([train_data, test_data],
+ [processed_train_data, processed_test_data]):
+ for image, label in zip(processed_data['images'],
+ processed_data['labels']):
+ if label not in data:
+ data[label] = []
+ data[label].append(image.reshape([-1]).astype('float32'))
+
+ intersection = set(train_data.keys()) & set(test_data.keys())
+ assert not intersection, 'Train and test data intersect.'
+ ok_num_examples = [len(ll) == 20 for _, ll in train_data.items()]
+ assert all(ok_num_examples), 'Bad number of examples in train data.'
+ ok_num_examples = [len(ll) == 20 for _, ll in test_data.items()]
+ assert all(ok_num_examples), 'Bad number of examples in test data.'
+
+ logging.info('Number of labels in train data: %d.', len(train_data))
+ logging.info('Number of labels in test data: %d.', len(test_data))
+
+ return train_data, test_data
+
+
+def crawl_directory(directory, augment_with_rotations=False,
+ first_label=0):
+ """Crawls data directory and returns stuff."""
+ label_idx = first_label
+ images = []
+ labels = []
+ info = []
+
+ # traverse root directory
+ for root, _, files in os.walk(directory):
+ logging.info('Reading files from %s', root)
+ fileflag = 0
+ for file_name in files:
+ full_file_name = os.path.join(root, file_name)
+ img = imread(full_file_name, flatten=True)
+ for i, angle in enumerate([0, 90, 180, 270]):
+ if not augment_with_rotations and i > 0:
+ break
+
+ images.append(imrotate(img, angle))
+ labels.append(label_idx + i)
+ info.append(full_file_name)
+
+ fileflag = 1
+
+ if fileflag:
+ label_idx += 4 if augment_with_rotations else 1
+
+ return images, labels, info
+
+
+def resize_images(images, new_width, new_height):
+ """Resize images to new dimensions."""
+ resized_images = np.zeros([images.shape[0], new_width, new_height],
+ dtype=np.float32)
+
+ for i in range(images.shape[0]):
+ resized_images[i, :, :] = imresize(images[i, :, :],
+ [new_width, new_height],
+ interp='bilinear',
+ mode=None)
+ return resized_images
+
+
+def write_datafiles(directory, write_file,
+ resize=True, rotate=False,
+ new_width=IMAGE_NEW_SIZE, new_height=IMAGE_NEW_SIZE,
+ first_label=0):
+ """Load and preprocess images from a directory and write them to a file.
+
+ Args:
+ directory: Directory of alphabet sub-directories.
+ write_file: Filename to write to.
+ resize: Whether to resize the images.
+ rotate: Whether to augment the dataset with rotations.
+ new_width: New resize width.
+ new_height: New resize height.
+ first_label: Label to start with.
+
+ Returns:
+ Number of new labels created.
+ """
+
+ # these are the default sizes for Omniglot:
+ imgwidth = IMAGE_ORIGINAL_SIZE
+ imgheight = IMAGE_ORIGINAL_SIZE
+
+ logging.info('Reading the data.')
+ images, labels, info = crawl_directory(directory,
+ augment_with_rotations=rotate,
+ first_label=first_label)
+
+ images_np = np.zeros([len(images), imgwidth, imgheight], dtype=np.bool)
+ labels_np = np.zeros([len(labels)], dtype=np.uint32)
+ for i in xrange(len(images)):
+ images_np[i, :, :] = images[i]
+ labels_np[i] = labels[i]
+
+ if resize:
+ logging.info('Resizing images.')
+ resized_images = resize_images(images_np, new_width, new_height)
+
+ logging.info('Writing resized data in float32 format.')
+ data = {'images': resized_images,
+ 'labels': labels_np,
+ 'info': info}
+ with tf.gfile.GFile(write_file, 'w') as f:
+ pickle.dump(data, f)
+ else:
+ logging.info('Writing original sized data in boolean format.')
+ data = {'images': images_np,
+ 'labels': labels_np,
+ 'info': info}
+ with tf.gfile.GFile(write_file, 'w') as f:
+ pickle.dump(data, f)
+
+ return len(np.unique(labels_np))
+
+
+def maybe_download_data():
+ """Download Omniglot repo if it does not exist."""
+ if os.path.exists(REPO_DIR):
+ logging.info('It appears that Git repo already exists.')
+ else:
+ logging.info('It appears that Git repo does not exist.')
+ logging.info('Cloning now.')
+
+ subprocess.check_output('git clone %s' % REPO_LOCATION, shell=True)
+
+ if os.path.exists(TRAIN_DIR):
+ logging.info('It appears that train data has already been unzipped.')
+ else:
+ logging.info('It appears that train data has not been unzipped.')
+ logging.info('Unzipping now.')
+
+ subprocess.check_output('unzip %s.zip -d %s' % (TRAIN_DIR, DATA_DIR),
+ shell=True)
+
+ if os.path.exists(TEST_DIR):
+ logging.info('It appears that test data has already been unzipped.')
+ else:
+ logging.info('It appears that test data has not been unzipped.')
+ logging.info('Unzipping now.')
+
+ subprocess.check_output('unzip %s.zip -d %s' % (TEST_DIR, DATA_DIR),
+ shell=True)
+
+
+def preprocess_omniglot():
+ """Download and prepare raw Omniglot data.
+
+ Downloads the data from GitHub if it does not exist.
+ Then load the images, augment with rotations if desired.
+ Resize the images and write them to a pickle file.
+ """
+
+ maybe_download_data()
+
+ directory = TRAIN_DIR
+ write_file = DATA_FILE_FORMAT % 'train'
+ num_labels = write_datafiles(
+ directory, write_file, resize=True, rotate=TRAIN_ROTATIONS,
+ new_width=IMAGE_NEW_SIZE, new_height=IMAGE_NEW_SIZE)
+
+ directory = TEST_DIR
+ write_file = DATA_FILE_FORMAT % 'test'
+ write_datafiles(directory, write_file, resize=True, rotate=TEST_ROTATIONS,
+ new_width=IMAGE_NEW_SIZE, new_height=IMAGE_NEW_SIZE,
+ first_label=num_labels)
+
+
+def main(unused_argv):
+ logging.basicConfig(level=logging.INFO)
+ preprocess_omniglot()
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_to_remember_rare_events/memory.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_to_remember_rare_events/memory.py"
new file mode 100644
index 00000000..2f40ff57
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_to_remember_rare_events/memory.py"
@@ -0,0 +1,392 @@
+# Copyright 2017 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# ==============================================================================
+"""Memory module for storing "nearest neighbors".
+
+Implements a key-value memory for generalized one-shot learning
+as described in the paper
+"Learning to Remember Rare Events"
+by Lukasz Kaiser, Ofir Nachum, Aurko Roy, Samy Bengio,
+published as a conference paper at ICLR 2017.
+"""
+
+import numpy as np
+from six.moves import xrange
+import tensorflow as tf
+
+
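+# Rough usage sketch (the values are illustrative, roughly matching the
+# defaults used by train.py in this directory):
+#   mem = Memory(key_dim=128, memory_size=8192, vocab_size=80)
+#   result, mask, teacher_loss = mem.query(query_vec, intended_output)
+# Here query_vec is a [batch, key_dim] float32 Tensor and intended_output is
+# a [batch] int32 Tensor of labels.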
+class Memory(object):
+ """Memory module."""
+
+ def __init__(self, key_dim, memory_size, vocab_size,
+ choose_k=256, alpha=0.1, correct_in_top=1, age_noise=8.0,
+ var_cache_device='', nn_device=''):
+ self.key_dim = key_dim
+ self.memory_size = memory_size
+ self.vocab_size = vocab_size
+ self.choose_k = min(choose_k, memory_size)
+ self.alpha = alpha
+ self.correct_in_top = correct_in_top
+ self.age_noise = age_noise
+ self.var_cache_device = var_cache_device # Variables are cached here.
+ self.nn_device = nn_device # Device to perform nearest neighbour matmul.
+
+ caching_device = var_cache_device if var_cache_device else None
+ self.update_memory = tf.constant(True) # Can be fed "false" if needed.
+ self.mem_keys = tf.get_variable(
+ 'memkeys', [self.memory_size, self.key_dim], trainable=False,
+ initializer=tf.random_uniform_initializer(-0.0, 0.0),
+ caching_device=caching_device)
+ self.mem_vals = tf.get_variable(
+ 'memvals', [self.memory_size], dtype=tf.int32, trainable=False,
+ initializer=tf.constant_initializer(0, tf.int32),
+ caching_device=caching_device)
+ self.mem_age = tf.get_variable(
+ 'memage', [self.memory_size], dtype=tf.float32, trainable=False,
+ initializer=tf.constant_initializer(0.0), caching_device=caching_device)
+ self.recent_idx = tf.get_variable(
+ 'recent_idx', [self.vocab_size], dtype=tf.int32, trainable=False,
+ initializer=tf.constant_initializer(0, tf.int32))
+
+ # variable for projecting query vector into memory key
+ self.query_proj = tf.get_variable(
+ 'memory_query_proj', [self.key_dim, self.key_dim], dtype=tf.float32,
+ initializer=tf.truncated_normal_initializer(0, 0.01),
+ caching_device=caching_device)
+
+ def get(self):
+ return self.mem_keys, self.mem_vals, self.mem_age, self.recent_idx
+
+ def set(self, k, v, a, r=None):
+ return tf.group(
+ self.mem_keys.assign(k),
+ self.mem_vals.assign(v),
+ self.mem_age.assign(a),
+ (self.recent_idx.assign(r) if r is not None else tf.group()))
+
+ def clear(self):
+ return tf.variables_initializer([self.mem_keys, self.mem_vals, self.mem_age,
+ self.recent_idx])
+
+ def get_hint_pool_idxs(self, normalized_query):
+ """Get small set of idxs to compute nearest neighbor queries on.
+
+ This is an expensive look-up on the whole memory that is used to
+ avoid more expensive operations later on.
+
+ Args:
+ normalized_query: A Tensor of shape [None, key_dim].
+
+ Returns:
+ A Tensor of shape [None, choose_k] of indices in memory
+ that are closest to the queries.
+
+ """
+ # look up in large memory, no gradients
+ with tf.device(self.nn_device):
+ similarities = tf.matmul(tf.stop_gradient(normalized_query),
+ self.mem_keys, transpose_b=True, name='nn_mmul')
+ _, hint_pool_idxs = tf.nn.top_k(
+ tf.stop_gradient(similarities), k=self.choose_k, name='nn_topk')
+ return hint_pool_idxs
+
+ def make_update_op(self, upd_idxs, upd_keys, upd_vals,
+ batch_size, use_recent_idx, intended_output):
+ """Function that creates all the update ops."""
+ mem_age_incr = self.mem_age.assign_add(tf.ones([self.memory_size],
+ dtype=tf.float32))
+ with tf.control_dependencies([mem_age_incr]):
+ mem_age_upd = tf.scatter_update(
+ self.mem_age, upd_idxs, tf.zeros([batch_size], dtype=tf.float32))
+
+ mem_key_upd = tf.scatter_update(
+ self.mem_keys, upd_idxs, upd_keys)
+ mem_val_upd = tf.scatter_update(
+ self.mem_vals, upd_idxs, upd_vals)
+
+ if use_recent_idx:
+ recent_idx_upd = tf.scatter_update(
+ self.recent_idx, intended_output, upd_idxs)
+ else:
+ recent_idx_upd = tf.group()
+
+ return tf.group(mem_age_upd, mem_key_upd, mem_val_upd, recent_idx_upd)
+
+ def query(self, query_vec, intended_output, use_recent_idx=True):
+ """Queries memory for nearest neighbor.
+
+ Args:
+ query_vec: A batch of vectors to query (embedding of input to model).
+ intended_output: The values that would be the correct output of the
+ memory.
+ use_recent_idx: Whether to always insert at least one instance of a
+ correct memory fetch.
+
+ Returns:
+ A tuple (result, mask, teacher_loss).
+ result: The result of the memory look up.
+ mask: The affinity of the query to the result.
+ teacher_loss: The loss for training the memory module.
+ """
+
+ batch_size = tf.shape(query_vec)[0]
+ output_given = intended_output is not None
+
+ # prepare query for memory lookup
+ query_vec = tf.matmul(query_vec, self.query_proj)
+ normalized_query = tf.nn.l2_normalize(query_vec, dim=1)
+
+ hint_pool_idxs = self.get_hint_pool_idxs(normalized_query)
+
+ if output_given and use_recent_idx: # add at least one correct memory
+ most_recent_hint_idx = tf.gather(self.recent_idx, intended_output)
+ hint_pool_idxs = tf.concat(
+ axis=1,
+ values=[hint_pool_idxs, tf.expand_dims(most_recent_hint_idx, 1)])
+ choose_k = tf.shape(hint_pool_idxs)[1]
+
+ with tf.device(self.var_cache_device):
+ # create small memory and look up with gradients
+ my_mem_keys = tf.stop_gradient(tf.gather(self.mem_keys, hint_pool_idxs,
+ name='my_mem_keys_gather'))
+ similarities = tf.matmul(tf.expand_dims(normalized_query, 1),
+ my_mem_keys, adjoint_b=True, name='batch_mmul')
+ hint_pool_sims = tf.squeeze(similarities, [1], name='hint_pool_sims')
+ hint_pool_mem_vals = tf.gather(self.mem_vals, hint_pool_idxs,
+ name='hint_pool_mem_vals')
+ # Calculate softmax mask on the top-k if requested.
+ # Softmax temperature. Say we have K elements at dist x and one at (x+a).
+      # Softmax of the last is e^(tm*(x+a)) / (K*e^(tm*x) + e^(tm*(x+a)))
+      #   = e^(tm*a) / (K + e^(tm*a)).
+      # To make that ~20% we'd need e^(tm*a) ~= 0.2*K, so tm = log(0.2*K)/a.
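+      # For the defaults (choose_k=256, alpha=0.1) this gives
+      # log(0.2 * 256) / 0.1 ~= 39.4.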
+ softmax_temp = max(1.0, np.log(0.2 * self.choose_k) / self.alpha)
+ mask = tf.nn.softmax(hint_pool_sims[:, :choose_k - 1] * softmax_temp)
+
+ # prepare returned values
+ nearest_neighbor = tf.to_int32(
+ tf.argmax(hint_pool_sims[:, :choose_k - 1], 1))
+
+ no_teacher_idxs = tf.gather(
+ tf.reshape(hint_pool_idxs, [-1]),
+ nearest_neighbor + choose_k * tf.range(batch_size))
+
+ with tf.device(self.var_cache_device):
+ result = tf.gather(self.mem_vals, tf.reshape(no_teacher_idxs, [-1]))
+
+ if not output_given:
+ teacher_loss = None
+ return result, mask, teacher_loss
+
+ # prepare hints from the teacher on hint pool
+ teacher_hints = tf.to_float(
+ tf.abs(tf.expand_dims(intended_output, 1) - hint_pool_mem_vals))
+ teacher_hints = 1.0 - tf.minimum(1.0, teacher_hints)
+
+ teacher_vals, teacher_hint_idxs = tf.nn.top_k(
+ hint_pool_sims * teacher_hints, k=1)
+ neg_teacher_vals, _ = tf.nn.top_k(
+ hint_pool_sims * (1 - teacher_hints), k=1)
+
+ # bring back idxs to full memory
+ teacher_idxs = tf.gather(
+ tf.reshape(hint_pool_idxs, [-1]),
+ teacher_hint_idxs[:, 0] + choose_k * tf.range(batch_size))
+
+ # zero-out teacher_vals if there are no hints
+ teacher_vals *= (
+ 1 - tf.to_float(tf.equal(0.0, tf.reduce_sum(teacher_hints, 1))))
+
+ # we'll determine whether to do an update to memory based on whether
+ # memory was queried correctly
+ sliced_hints = tf.slice(teacher_hints, [0, 0], [-1, self.correct_in_top])
+ incorrect_memory_lookup = tf.equal(0.0, tf.reduce_sum(sliced_hints, 1))
+
+ # loss based on triplet loss
+ teacher_loss = (tf.nn.relu(neg_teacher_vals - teacher_vals + self.alpha)
+ - self.alpha)
+
+ # prepare memory updates
+ update_keys = normalized_query
+ update_vals = intended_output
+
+ fetched_idxs = teacher_idxs # correctly fetched from memory
+ with tf.device(self.var_cache_device):
+ fetched_keys = tf.gather(self.mem_keys, fetched_idxs, name='fetched_keys')
+ fetched_vals = tf.gather(self.mem_vals, fetched_idxs, name='fetched_vals')
+
+ # do memory updates here
+ fetched_keys_upd = update_keys + fetched_keys # Momentum-like update
+ fetched_keys_upd = tf.nn.l2_normalize(fetched_keys_upd, dim=1)
+ # Randomize age a bit, e.g., to select different ones in parallel workers.
+ mem_age_with_noise = self.mem_age + tf.random_uniform(
+ [self.memory_size], - self.age_noise, self.age_noise)
+
+ _, oldest_idxs = tf.nn.top_k(mem_age_with_noise, k=batch_size, sorted=False)
+
+ with tf.control_dependencies([result]):
+ upd_idxs = tf.where(incorrect_memory_lookup,
+ oldest_idxs,
+ fetched_idxs)
+ # upd_idxs = tf.Print(upd_idxs, [upd_idxs], "UPD IDX", summarize=8)
+ upd_keys = tf.where(incorrect_memory_lookup,
+ update_keys,
+ fetched_keys_upd)
+ upd_vals = tf.where(incorrect_memory_lookup,
+ update_vals,
+ fetched_vals)
+
+ def make_update_op():
+ return self.make_update_op(upd_idxs, upd_keys, upd_vals,
+ batch_size, use_recent_idx, intended_output)
+
+ update_op = tf.cond(self.update_memory, make_update_op, tf.no_op)
+
+ with tf.control_dependencies([update_op]):
+ result = tf.identity(result)
+ mask = tf.identity(mask)
+ teacher_loss = tf.identity(teacher_loss)
+
+ return result, mask, tf.reduce_mean(teacher_loss)
+
+
+class LSHMemory(Memory):
+ """Memory employing locality sensitive hashing.
+
+ Note: Not fully tested.
+ """
+
+ def __init__(self, key_dim, memory_size, vocab_size,
+ choose_k=256, alpha=0.1, correct_in_top=1, age_noise=8.0,
+ var_cache_device='', nn_device='',
+ num_hashes=None, num_libraries=None):
+ super(LSHMemory, self).__init__(
+ key_dim, memory_size, vocab_size,
+        choose_k=choose_k, alpha=alpha, correct_in_top=correct_in_top,
+        age_noise=age_noise,
+ var_cache_device=var_cache_device, nn_device=nn_device)
+
+ self.num_libraries = num_libraries or int(self.choose_k ** 0.5)
+ self.num_per_hash_slot = max(1, self.choose_k // self.num_libraries)
+ self.num_hashes = (num_hashes or
+ int(np.log2(self.memory_size / self.num_per_hash_slot)))
+ self.num_hashes = min(max(self.num_hashes, 1), 20)
+ self.num_hash_slots = 2 ** self.num_hashes
+
+ # hashing vectors
+ self.hash_vecs = [
+ tf.get_variable(
+ 'hash_vecs%d' % i, [self.num_hashes, self.key_dim],
+ dtype=tf.float32, trainable=False,
+ initializer=tf.truncated_normal_initializer(0, 1))
+ for i in xrange(self.num_libraries)]
+
+ # map representing which hash slots map to which mem keys
+ self.hash_slots = [
+ tf.get_variable(
+ 'hash_slots%d' % i, [self.num_hash_slots, self.num_per_hash_slot],
+ dtype=tf.int32, trainable=False,
+ initializer=tf.random_uniform_initializer(maxval=self.memory_size,
+ dtype=tf.int32))
+ for i in xrange(self.num_libraries)]
+
+ def get(self): # not implemented
+ return self.mem_keys, self.mem_vals, self.mem_age, self.recent_idx
+
+ def set(self, k, v, a, r=None): # not implemented
+ return tf.group(
+ self.mem_keys.assign(k),
+ self.mem_vals.assign(v),
+ self.mem_age.assign(a),
+ (self.recent_idx.assign(r) if r is not None else tf.group()))
+
+ def clear(self):
+ return tf.variables_initializer([self.mem_keys, self.mem_vals, self.mem_age,
+ self.recent_idx] + self.hash_slots)
+
+ def get_hash_slots(self, query):
+ """Gets hashed-to buckets for batch of queries.
+
+ Args:
+ query: 2-d Tensor of query vectors.
+
+ Returns:
+ A list of hashed-to buckets for each hash function.
+ """
+
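+    # Each library hashes a query by taking the sign of its dot product with
+    # num_hashes random vectors; the resulting bit pattern indexes one of
+    # 2**num_hashes buckets.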
+ binary_hash = [
+ tf.less(tf.matmul(query, self.hash_vecs[i], transpose_b=True), 0)
+ for i in xrange(self.num_libraries)]
+ hash_slot_idxs = [
+ tf.reduce_sum(
+ tf.to_int32(binary_hash[i]) *
+ tf.constant([[2 ** i for i in xrange(self.num_hashes)]],
+ dtype=tf.int32), 1)
+ for i in xrange(self.num_libraries)]
+ return hash_slot_idxs
+
+ def get_hint_pool_idxs(self, normalized_query):
+ """Get small set of idxs to compute nearest neighbor queries on.
+
+    Unlike the base class, this uses locality-sensitive hash buckets rather
+    than an exhaustive similarity search over the whole memory.
+
+ Args:
+ normalized_query: A Tensor of shape [None, key_dim].
+
+ Returns:
+ A Tensor of shape [None, choose_k] of indices in memory
+ that are closest to the queries.
+
+ """
+ # get hash of query vecs
+ hash_slot_idxs = self.get_hash_slots(normalized_query)
+
+ # grab mem idxs in the hash slots
+ hint_pool_idxs = [
+ tf.maximum(tf.minimum(
+ tf.gather(self.hash_slots[i], idxs),
+ self.memory_size - 1), 0)
+ for i, idxs in enumerate(hash_slot_idxs)]
+
+ return tf.concat(axis=1, values=hint_pool_idxs)
+
+ def make_update_op(self, upd_idxs, upd_keys, upd_vals,
+ batch_size, use_recent_idx, intended_output):
+ """Function that creates all the update ops."""
+ base_update_op = super(LSHMemory, self).make_update_op(
+ upd_idxs, upd_keys, upd_vals,
+ batch_size, use_recent_idx, intended_output)
+
+ # compute hash slots to be updated
+ hash_slot_idxs = self.get_hash_slots(upd_keys)
+
+ # make updates
+ update_ops = []
+ with tf.control_dependencies([base_update_op]):
+ for i, slot_idxs in enumerate(hash_slot_idxs):
+ # for each slot, choose which entry to replace
+ entry_idx = tf.random_uniform([batch_size],
+ maxval=self.num_per_hash_slot,
+ dtype=tf.int32)
+ entry_mul = 1 - tf.one_hot(entry_idx, self.num_per_hash_slot,
+ dtype=tf.int32)
+ entry_add = (tf.expand_dims(upd_idxs, 1) *
+ tf.one_hot(entry_idx, self.num_per_hash_slot,
+ dtype=tf.int32))
+
+ mul_op = tf.scatter_mul(self.hash_slots[i], slot_idxs, entry_mul)
+ with tf.control_dependencies([mul_op]):
+ add_op = tf.scatter_add(self.hash_slots[i], slot_idxs, entry_add)
+ update_ops.append(add_op)
+
+ return tf.group(*update_ops)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_to_remember_rare_events/model.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_to_remember_rare_events/model.py"
new file mode 100644
index 00000000..7a6b4600
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_to_remember_rare_events/model.py"
@@ -0,0 +1,302 @@
+# Copyright 2017 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# ==============================================================================
+"""Model using memory component.
+
+The model embeds images using a standard CNN architecture.
+These embeddings are used as keys to the memory component,
+which returns nearest neighbors.
+"""
+
+import tensorflow as tf
+
+import memory
+
+FLAGS = tf.flags.FLAGS
+
+
+class BasicClassifier(object):
+
+ def __init__(self, output_dim):
+ self.output_dim = output_dim
+
+ def core_builder(self, memory_val, x, y):
+ del x, y
+ y_pred = memory_val
+ loss = 0.0
+
+ return loss, y_pred
+
+
+class LeNet(object):
+ """Standard CNN architecture."""
+
+ def __init__(self, image_size, num_channels, hidden_dim):
+ self.image_size = image_size
+ self.num_channels = num_channels
+ self.hidden_dim = hidden_dim
+ self.matrix_init = tf.truncated_normal_initializer(stddev=0.1)
+ self.vector_init = tf.constant_initializer(0.0)
+
+ def core_builder(self, x):
+ """Embeds x using standard CNN architecture.
+
+ Args:
+ x: Batch of images as a 2-d Tensor [batch_size, -1].
+
+ Returns:
+ A 2-d Tensor [batch_size, hidden_dim] of embedded images.
+ """
+
+ ch1 = 32 * 2 # number of channels in 1st layer
+ ch2 = 64 * 2 # number of channels in 2nd layer
+ conv1_weights = tf.get_variable('conv1_w',
+ [3, 3, self.num_channels, ch1],
+ initializer=self.matrix_init)
+ conv1_biases = tf.get_variable('conv1_b', [ch1],
+ initializer=self.vector_init)
+ conv1a_weights = tf.get_variable('conv1a_w',
+ [3, 3, ch1, ch1],
+ initializer=self.matrix_init)
+ conv1a_biases = tf.get_variable('conv1a_b', [ch1],
+ initializer=self.vector_init)
+
+ conv2_weights = tf.get_variable('conv2_w', [3, 3, ch1, ch2],
+ initializer=self.matrix_init)
+ conv2_biases = tf.get_variable('conv2_b', [ch2],
+ initializer=self.vector_init)
+ conv2a_weights = tf.get_variable('conv2a_w', [3, 3, ch2, ch2],
+ initializer=self.matrix_init)
+ conv2a_biases = tf.get_variable('conv2a_b', [ch2],
+ initializer=self.vector_init)
+
+ # fully connected
+ fc1_weights = tf.get_variable(
+ 'fc1_w', [self.image_size // 4 * self.image_size // 4 * ch2,
+ self.hidden_dim], initializer=self.matrix_init)
+ fc1_biases = tf.get_variable('fc1_b', [self.hidden_dim],
+ initializer=self.vector_init)
+
+ # define model
+ x = tf.reshape(x,
+ [-1, self.image_size, self.image_size, self.num_channels])
+ batch_size = tf.shape(x)[0]
+
+ conv1 = tf.nn.conv2d(x, conv1_weights,
+ strides=[1, 1, 1, 1], padding='SAME')
+ relu1 = tf.nn.relu(tf.nn.bias_add(conv1, conv1_biases))
+ conv1 = tf.nn.conv2d(relu1, conv1a_weights,
+ strides=[1, 1, 1, 1], padding='SAME')
+ relu1 = tf.nn.relu(tf.nn.bias_add(conv1, conv1a_biases))
+
+ pool1 = tf.nn.max_pool(relu1, ksize=[1, 2, 2, 1],
+ strides=[1, 2, 2, 1], padding='SAME')
+
+ conv2 = tf.nn.conv2d(pool1, conv2_weights,
+ strides=[1, 1, 1, 1], padding='SAME')
+ relu2 = tf.nn.relu(tf.nn.bias_add(conv2, conv2_biases))
+ conv2 = tf.nn.conv2d(relu2, conv2a_weights,
+ strides=[1, 1, 1, 1], padding='SAME')
+ relu2 = tf.nn.relu(tf.nn.bias_add(conv2, conv2a_biases))
+
+ pool2 = tf.nn.max_pool(relu2, ksize=[1, 2, 2, 1],
+ strides=[1, 2, 2, 1], padding='SAME')
+
+ reshape = tf.reshape(pool2, [batch_size, -1])
+ hidden = tf.matmul(reshape, fc1_weights) + fc1_biases
+
+ return hidden
+
+
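+# Rough usage sketch (mirroring how the Trainer in train.py drives this class):
+#   m = Model(input_dim, output_dim, rep_dim, memory_size, vocab_size)
+#   m.setup()  # builds the train and eval graphs
+#   losses = m.episode_step(sess, x_batches, y_batches, clear_memory=True)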
+class Model(object):
+ """Model for coordinating between CNN embedder and Memory module."""
+
+ def __init__(self, input_dim, output_dim, rep_dim, memory_size, vocab_size,
+ learning_rate=0.0001, use_lsh=False):
+ self.input_dim = input_dim
+ self.output_dim = output_dim
+ self.rep_dim = rep_dim
+ self.memory_size = memory_size
+ self.vocab_size = vocab_size
+ self.learning_rate = learning_rate
+ self.use_lsh = use_lsh
+
+ self.embedder = self.get_embedder()
+ self.memory = self.get_memory()
+ self.classifier = self.get_classifier()
+
+ self.global_step = tf.train.get_or_create_global_step()
+
+ def get_embedder(self):
+ return LeNet(int(self.input_dim ** 0.5), 1, self.rep_dim)
+
+ def get_memory(self):
+ cls = memory.LSHMemory if self.use_lsh else memory.Memory
+ return cls(self.rep_dim, self.memory_size, self.vocab_size)
+
+ def get_classifier(self):
+ return BasicClassifier(self.output_dim)
+
+ def core_builder(self, x, y, keep_prob, use_recent_idx=True):
+ embeddings = self.embedder.core_builder(x)
+ if keep_prob < 1.0:
+ embeddings = tf.nn.dropout(embeddings, keep_prob)
+ memory_val, _, teacher_loss = self.memory.query(
+ embeddings, y, use_recent_idx=use_recent_idx)
+ loss, y_pred = self.classifier.core_builder(memory_val, x, y)
+
+ return loss + teacher_loss, y_pred
+
+ def train(self, x, y):
+ loss, _ = self.core_builder(x, y, keep_prob=0.3)
+ gradient_ops = self.training_ops(loss)
+ return loss, gradient_ops
+
+ def eval(self, x, y):
+ _, y_preds = self.core_builder(x, y, keep_prob=1.0,
+ use_recent_idx=False)
+ return y_preds
+
+ def get_xy_placeholders(self):
+ return (tf.placeholder(tf.float32, [None, self.input_dim]),
+ tf.placeholder(tf.int32, [None]))
+
+ def setup(self):
+ """Sets up all components of the computation graph."""
+
+ self.x, self.y = self.get_xy_placeholders()
+
+ # This context creates variables
+ with tf.variable_scope('core', reuse=None):
+ self.loss, self.gradient_ops = self.train(self.x, self.y)
+ # And this one re-uses them (thus the `reuse=True`)
+ with tf.variable_scope('core', reuse=True):
+ self.y_preds = self.eval(self.x, self.y)
+
+ def training_ops(self, loss):
+ opt = self.get_optimizer()
+ params = tf.trainable_variables()
+ gradients = tf.gradients(loss, params)
+ clipped_gradients, _ = tf.clip_by_global_norm(gradients, 5.0)
+ return opt.apply_gradients(zip(clipped_gradients, params),
+ global_step=self.global_step)
+
+ def get_optimizer(self):
+ return tf.train.AdamOptimizer(learning_rate=self.learning_rate,
+ epsilon=1e-4)
+
+ def one_step(self, sess, x, y):
+ outputs = [self.loss, self.gradient_ops]
+ return sess.run(outputs, feed_dict={self.x: x, self.y: y})
+
+ def episode_step(self, sess, x, y, clear_memory=False):
+ """Performs training steps on episodic input.
+
+ Args:
+ sess: A Tensorflow Session.
+ x: A list of batches of images defining the episode.
+ y: A list of batches of labels corresponding to x.
+ clear_memory: Whether to clear the memory before the episode.
+
+ Returns:
+ List of losses the same length as the episode.
+ """
+
+ outputs = [self.loss, self.gradient_ops]
+
+ if clear_memory:
+ self.clear_memory(sess)
+
+ losses = []
+ for xx, yy in zip(x, y):
+ out = sess.run(outputs, feed_dict={self.x: xx, self.y: yy})
+ loss = out[0]
+ losses.append(loss)
+
+ return losses
+
+ def predict(self, sess, x, y=None):
+ """Predict the labels on a single batch of examples.
+
+ Args:
+ sess: A Tensorflow Session.
+ x: A batch of images.
+ y: The labels for the images in x.
+ This allows for updating the memory.
+
+ Returns:
+ Predicted y.
+ """
+
+ # Storing current memory state to restore it after prediction
+ mem_keys, mem_vals, mem_age, _ = self.memory.get()
+ cur_memory = (
+ tf.identity(mem_keys),
+ tf.identity(mem_vals),
+ tf.identity(mem_age),
+ None,
+ )
+
+ outputs = [self.y_preds]
+ if y is None:
+ ret = sess.run(outputs, feed_dict={self.x: x})
+ else:
+ ret = sess.run(outputs, feed_dict={self.x: x, self.y: y})
+
+ # Restoring memory state
+ self.memory.set(*cur_memory)
+
+ return ret
+
+ def episode_predict(self, sess, x, y, clear_memory=False):
+ """Predict the labels on an episode of examples.
+
+ Args:
+ sess: A Tensorflow Session.
+ x: A list of batches of images.
+ y: A list of labels for the images in x.
+ This allows for updating the memory.
+ clear_memory: Whether to clear the memory before the episode.
+
+ Returns:
+ List of predicted y.
+ """
+
+ # Storing current memory state to restore it after prediction
+ mem_keys, mem_vals, mem_age, _ = self.memory.get()
+ cur_memory = (
+ tf.identity(mem_keys),
+ tf.identity(mem_vals),
+ tf.identity(mem_age),
+ None,
+ )
+
+ if clear_memory:
+ self.clear_memory(sess)
+
+ outputs = [self.y_preds]
+ y_preds = []
+ for xx, yy in zip(x, y):
+ out = sess.run(outputs, feed_dict={self.x: xx, self.y: yy})
+ y_pred = out[0]
+ y_preds.append(y_pred)
+
+ # Restoring memory state
+ self.memory.set(*cur_memory)
+
+ return y_preds
+
+ def clear_memory(self, sess):
+ sess.run([self.memory.clear()])
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_to_remember_rare_events/train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_to_remember_rare_events/train.py"
new file mode 100644
index 00000000..c5c6d06b
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_to_remember_rare_events/train.py"
@@ -0,0 +1,242 @@
+# Copyright 2017 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# ==============================================================================
+r"""Script for training model.
+
+Simple command to get up and running:
+ python train.py --memory_size=8192 \
+ --batch_size=16 --validation_length=50 \
+ --episode_width=5 --episode_length=30
+"""
+
+import logging
+import os
+import random
+
+import numpy as np
+from six.moves import xrange
+import tensorflow as tf
+
+import data_utils
+import model
+
+FLAGS = tf.flags.FLAGS
+
+tf.flags.DEFINE_integer('rep_dim', 128,
+ 'dimension of keys to use in memory')
+tf.flags.DEFINE_integer('episode_length', 100, 'length of episode')
+tf.flags.DEFINE_integer('episode_width', 5,
+ 'number of distinct labels in a single episode')
+tf.flags.DEFINE_integer('memory_size', None, 'number of slots in memory. '
+ 'Leave as None to default to episode length')
+tf.flags.DEFINE_integer('batch_size', 16, 'batch size')
+tf.flags.DEFINE_integer('num_episodes', 100000, 'number of training episodes')
+tf.flags.DEFINE_integer('validation_frequency', 20,
+ 'every so many training episodes, '
+ 'assess validation accuracy')
+tf.flags.DEFINE_integer('validation_length', 10,
+ 'number of episodes to use to compute '
+ 'validation accuracy')
+tf.flags.DEFINE_integer('seed', 888, 'random seed for training sampling')
+tf.flags.DEFINE_string('save_dir', '', 'directory to save model to')
+tf.flags.DEFINE_bool('use_lsh', False,
+ 'use locality-sensitive hashing '
+ '(NOTE: not fully tested)')
+
+
+class Trainer(object):
+ """Class that takes care of training, validating, and checkpointing model."""
+
+ def __init__(self, train_data, valid_data, input_dim, output_dim=None):
+ self.train_data = train_data
+ self.valid_data = valid_data
+ self.input_dim = input_dim
+
+ self.rep_dim = FLAGS.rep_dim
+ self.episode_length = FLAGS.episode_length
+ self.episode_width = FLAGS.episode_width
+ self.batch_size = FLAGS.batch_size
+ self.memory_size = (self.episode_length * self.batch_size
+ if FLAGS.memory_size is None else FLAGS.memory_size)
+ self.use_lsh = FLAGS.use_lsh
+
+ self.output_dim = (output_dim if output_dim is not None
+ else self.episode_width)
+
+ def get_model(self):
+ # vocab size is the number of distinct values that
+ # could go into the memory key-value storage
+ vocab_size = self.episode_width * self.batch_size
+ return model.Model(
+ self.input_dim, self.output_dim, self.rep_dim, self.memory_size,
+ vocab_size, use_lsh=self.use_lsh)
+
+ def sample_episode_batch(self, data,
+ episode_length, episode_width, batch_size):
+ """Generates a random batch for training or validation.
+
+ Structures each element of the batch as an 'episode'.
+ Each episode contains episode_length examples and
+ episode_width distinct labels.
+
+ Args:
+ data: A dictionary mapping label to list of examples.
+ episode_length: Number of examples in each episode.
+ episode_width: Distinct number of labels in each episode.
+ batch_size: Batch size (number of episodes).
+
+ Returns:
+ A tuple (x, y) where x is a list of batches of examples
+ with size episode_length and y is a list of batches of labels.
+ """
+
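+    # For example, with the quick-start flags (episode_length=30,
+    # episode_width=5), each of the 5 sampled labels contributes 6 examples
+    # to every episode.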
+ episodes_x = [[] for _ in xrange(episode_length)]
+ episodes_y = [[] for _ in xrange(episode_length)]
+ assert len(data) >= episode_width
+ keys = data.keys()
+ for b in xrange(batch_size):
+ episode_labels = random.sample(keys, episode_width)
+ remainder = episode_length % episode_width
+ remainders = [0] * (episode_width - remainder) + [1] * remainder
+ episode_x = [
+ random.sample(data[lab],
+ r + (episode_length - remainder) // episode_width)
+ for lab, r in zip(episode_labels, remainders)]
+ episode = sum([[(x, i, ii) for ii, x in enumerate(xx)]
+ for i, xx in enumerate(episode_x)], [])
+ random.shuffle(episode)
+      # Arrange the episode so that every distinct label is seen once before
+      # any label gets its second showing.
+ episode.sort(key=lambda elem: elem[2])
+ assert len(episode) == episode_length
+ for i in xrange(episode_length):
+ episodes_x[i].append(episode[i][0])
+ episodes_y[i].append(episode[i][1] + b * episode_width)
+
+ return ([np.array(xx).astype('float32') for xx in episodes_x],
+ [np.array(yy).astype('int32') for yy in episodes_y])
+
+ def compute_correct(self, ys, y_preds):
+ return np.mean(np.equal(y_preds, np.array(ys)))
+
+ def individual_compute_correct(self, y, y_pred):
+ return y_pred == y
+
+ def run(self):
+ """Performs training.
+
+ Trains a model using episodic training.
+ Every so often, runs some evaluations on validation data.
+ """
+
+ train_data, valid_data = self.train_data, self.valid_data
+ input_dim, output_dim = self.input_dim, self.output_dim
+ rep_dim, episode_length = self.rep_dim, self.episode_length
+ episode_width, memory_size = self.episode_width, self.memory_size
+ batch_size = self.batch_size
+
+ train_size = len(train_data)
+ valid_size = len(valid_data)
+ logging.info('train_size (number of labels) %d', train_size)
+ logging.info('valid_size (number of labels) %d', valid_size)
+ logging.info('input_dim %d', input_dim)
+ logging.info('output_dim %d', output_dim)
+ logging.info('rep_dim %d', rep_dim)
+ logging.info('episode_length %d', episode_length)
+ logging.info('episode_width %d', episode_width)
+ logging.info('memory_size %d', memory_size)
+ logging.info('batch_size %d', batch_size)
+
+ assert all(len(v) >= float(episode_length) / episode_width
+ for v in train_data.values())
+ assert all(len(v) >= float(episode_length) / episode_width
+ for v in valid_data.values())
+
+ output_dim = episode_width
+ self.model = self.get_model()
+ self.model.setup()
+
+ sess = tf.Session()
+ sess.run(tf.global_variables_initializer())
+
+ saver = tf.train.Saver(max_to_keep=10)
+ ckpt = None
+ if FLAGS.save_dir:
+ ckpt = tf.train.get_checkpoint_state(FLAGS.save_dir)
+ if ckpt and ckpt.model_checkpoint_path:
+ logging.info('restoring from %s', ckpt.model_checkpoint_path)
+ saver.restore(sess, ckpt.model_checkpoint_path)
+
+ logging.info('starting now')
+ losses = []
+ random.seed(FLAGS.seed)
+ np.random.seed(FLAGS.seed)
+ for i in xrange(FLAGS.num_episodes):
+ x, y = self.sample_episode_batch(
+ train_data, episode_length, episode_width, batch_size)
+ outputs = self.model.episode_step(sess, x, y, clear_memory=True)
+ loss = outputs
+ losses.append(loss)
+
+ if i % FLAGS.validation_frequency == 0:
+ logging.info('episode batch %d, avg train loss %f',
+ i, np.mean(losses))
+ losses = []
+
+ # validation
+ correct = []
+ num_shots = episode_length // episode_width
+ correct_by_shot = dict((k, []) for k in xrange(num_shots))
+ for _ in xrange(FLAGS.validation_length):
+ x, y = self.sample_episode_batch(
+ valid_data, episode_length, episode_width, 1)
+ outputs = self.model.episode_predict(
+ sess, x, y, clear_memory=True)
+ y_preds = outputs
+ correct.append(self.compute_correct(np.array(y), y_preds))
+
+ # compute per-shot accuracies
+ seen_counts = [0] * episode_width
+ # loop over episode steps
+ for yy, yy_preds in zip(y, y_preds):
+ # loop over batch examples
+ yyy, yyy_preds = int(yy[0]), int(yy_preds[0])
+ count = seen_counts[yyy % episode_width]
+ if count in correct_by_shot:
+ correct_by_shot[count].append(
+ self.individual_compute_correct(yyy, yyy_preds))
+ seen_counts[yyy % episode_width] = count + 1
+
+ logging.info('validation overall accuracy %f', np.mean(correct))
+ logging.info('%d-shot: %.3f, ' * num_shots,
+ *sum([[k, np.mean(correct_by_shot[k])]
+ for k in xrange(num_shots)], []))
+
+ if saver and FLAGS.save_dir:
+ saved_file = saver.save(sess,
+ os.path.join(FLAGS.save_dir, 'model.ckpt'),
+ global_step=self.model.global_step)
+ logging.info('saved model to %s', saved_file)
+
+
+def main(unused_argv):
+ train_data, valid_data = data_utils.get_data()
+ trainer = Trainer(train_data, valid_data, data_utils.IMAGE_NEW_SIZE ** 2)
+ trainer.run()
+
+
+if __name__ == '__main__':
+ logging.basicConfig(level=logging.INFO)
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/.gitignore" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/.gitignore"
new file mode 100644
index 00000000..0d20b648
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/.gitignore"
@@ -0,0 +1 @@
+*.pyc
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/README.md"
new file mode 100644
index 00000000..6c5e8547
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/README.md"
@@ -0,0 +1,37 @@
+# Learning Unsupervised Learning Rules
+This repository contains code and weights for the learned update rule
+presented in "Learning Unsupervised Learning Rules." At this time, this
+code cannot meta-train the update rule.
+
+
+### Structure
+`run_eval.py` contains the main training loop. This constructs an op
+that runs one iteration of the learned update rule and assigns the
+results to variables. Additionally, it loads the weights from our
+pre-trained model.
+
+The base model and the update rule architecture definition can be found in
+`architectures/more_local_weight_update.py`. For a complete description
+of the model, see our [paper](https://arxiv.org/abs/1804.00222).
+
+### Dependencies
+[absl](https://github.com/abseil/abseil-py), [tensorflow](https://tensorflow.org), [sonnet](https://github.com/deepmind/sonnet)
+
+### Usage
+
+First, download the [pre-trained optimizer model weights](https://storage.googleapis.com/learning_unsupervised_learning/200_tf_graph.zip) and extract the archive.
+
+```bash
+# move to the folder above this folder
+cd path_to/research/learning_unsupervised_learning/../
+
+# launch the eval script
+python -m learning_unsupervised_learning.run_eval \
+--train_log_dir="/tmp/learning_unsupervised_learning" \
+--checkpoint_dir="/path/to/downloaded/model/tf_graph_data.ckpt"
+```
+
+### Contact
+Luke Metz, Niru Maheswaranathan, Github: @lukemetz, @nirum. Email: {lmetz, nirum}@google.com
+
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/architectures/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/architectures/__init__.py"
new file mode 100644
index 00000000..af9545f2
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/architectures/__init__.py"
@@ -0,0 +1,17 @@
+# Copyright 2018 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+
+from learning_unsupervised_learning.architectures import more_local_weight_update
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/architectures/common.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/architectures/common.py"
new file mode 100644
index 00000000..43a2d4f8
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/architectures/common.py"
@@ -0,0 +1,153 @@
+# Copyright 2018 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import sonnet as snt
+import tensorflow as tf
+import numpy as np
+import collections
+from learning_unsupervised_learning import utils
+
+from tensorflow.python.util import nest
+
+from learning_unsupervised_learning import variable_replace
+
+
+class LinearBatchNorm(snt.AbstractModule):
+ """Module that does a Linear layer then a BatchNorm followed by an activation fn"""
+ def __init__(self, size, activation_fn=tf.nn.relu, name="LinearBatchNorm"):
+ self.size = size
+ self.activation_fn = activation_fn
+ super(LinearBatchNorm, self).__init__(name=name)
+
+ def _build(self, x):
+ x = tf.to_float(x)
+ initializers={"w": tf.truncated_normal_initializer(stddev=0.01)}
+ lin = snt.Linear(self.size, use_bias=False, initializers=initializers)
+ z = lin(x)
+
+ scale = tf.constant(1., dtype=tf.float32)
+ offset = tf.get_variable(
+ "b",
+ shape=[1, z.shape.as_list()[1]],
+ initializer=tf.truncated_normal_initializer(stddev=0.1),
+ dtype=tf.float32
+ )
+
+ mean, var = tf.nn.moments(z, [0], keep_dims=True)
+ z = ((z - mean) * tf.rsqrt(var + 1e-6)) * scale + offset
+
+ x_p = self.activation_fn(z)
+
+ return z, x_p
+
+ # This needs to work by string name sadly due to how the variable replace
+  # works and would also work even if the custom getter approach was used.
+  # This is verbose, but it should at least be clear as to what is going on.
+ # TODO(lmetz) a better way to do this (the next 3 functions:
+ # _raw_name, w(), b() )
+ def _raw_name(self, var_name):
+ """Return just the name of the variable, not the scopes."""
+ return var_name.split("/")[-1].split(":")[0]
+
+
+ @property
+ def w(self):
+ var_list = snt.get_variables_in_module(self)
+ w = [x for x in var_list if self._raw_name(x.name) == "w"]
+ assert len(w) == 1
+ return w[0]
+
+ @property
+ def b(self):
+ var_list = snt.get_variables_in_module(self)
+ b = [x for x in var_list if self._raw_name(x.name) == "b"]
+ assert len(b) == 1
+ return b[0]
+
+
+
+class Linear(snt.AbstractModule):
+ def __init__(self, size, use_bias=True, init_const_mag=True):
+ self.size = size
+ self.use_bias = use_bias
+ self.init_const_mag = init_const_mag
+ super(Linear, self).__init__(name="commonLinear")
+
+ def _build(self, x):
+ if self.init_const_mag:
+ initializers={"w": tf.truncated_normal_initializer(stddev=0.01)}
+ else:
+ initializers={}
+ lin = snt.Linear(self.size, use_bias=self.use_bias, initializers=initializers)
+ z = lin(x)
+ return z
+
+ # This needs to work by string name sadly due to how the variable replace
+  # works and would also work even if the custom getter approach was used.
+  # This is verbose, but it should at least be clear as to what is going on.
+ # TODO(lmetz) a better way to do this (the next 3 functions:
+ # _raw_name, w(), b() )
+ def _raw_name(self, var_name):
+ """Return just the name of the variable, not the scopes."""
+ return var_name.split("/")[-1].split(":")[0]
+
+ @property
+ def w(self):
+ var_list = snt.get_variables_in_module(self)
+ if self.use_bias:
+ assert len(var_list) == 2, "Found not 2 but %d" % len(var_list)
+ else:
+ assert len(var_list) == 1, "Found not 1 but %d" % len(var_list)
+ w = [x for x in var_list if self._raw_name(x.name) == "w"]
+ assert len(w) == 1
+ return w[0]
+
+ @property
+ def b(self):
+ var_list = snt.get_variables_in_module(self)
+ assert len(var_list) == 2, "Found not 2 but %d" % len(var_list)
+ b = [x for x in var_list if self._raw_name(x.name) == "b"]
+ assert len(b) == 1
+ return b[0]
+
+
+def transformer_at_state(base_model, new_variables):
+ """Get the base_model that has been transformed to use the variables
+ in final_state.
+ Args:
+ base_model: snt.Module
+ Goes from batch to features
+ new_variables: list
+ New list of variables to use
+ Returns:
+ func: callable of same api as base_model.
+ """
+ assert not variable_replace.in_variable_replace_scope()
+
+ def _feature_transformer(input_data):
+ """Feature transformer at the end of training."""
+ initial_variables = base_model.get_variables()
+ replacement = collections.OrderedDict(
+ utils.eqzip(initial_variables, new_variables))
+ with variable_replace.variable_replace(replacement):
+ features = base_model(input_data)
+ return features
+
+ return _feature_transformer
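
The helper above is easiest to understand from a call site. Below is a minimal usage sketch (not part of the diff); `base_model`, `new_vars`, and `batch` are hypothetical placeholders, and `new_vars` is assumed to be ordered exactly like `base_model.get_variables()`:

```python
# Illustrative sketch: run a Sonnet module with a swapped-in set of variable
# values, without mutating the module's own variables.
from learning_unsupervised_learning.architectures import common

transformed_model = common.transformer_at_state(base_model, new_vars)
features = transformed_model(batch)  # forward pass that reads new_vars
```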
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/architectures/more_local_weight_update.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/architectures/more_local_weight_update.py"
new file mode 100644
index 00000000..117549af
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/architectures/more_local_weight_update.py"
@@ -0,0 +1,861 @@
+# Copyright 2018 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import collections
+import numpy as np
+import sonnet as snt
+import tensorflow as tf
+
+from learning_unsupervised_learning.architectures import common
+from learning_unsupervised_learning import optimizers
+from learning_unsupervised_learning import utils
+from learning_unsupervised_learning import summary_utils
+
+OptState = collections.namedtuple('OptState',
+ ['variables', 'opt_state', 'index'])
+
+BaseModelOutputs = collections.namedtuple(
+ 'BaseModelOutputs', ['xs', 'zs', 'mods', 'batch', 'backward_mods'])
+
+
+class GradChannelReadout(snt.AbstractModule):
+ """Perform a linear readout and reshape from input 3 tensor."""
+
+ def __init__(self,
+ num_grad_channels,
+ device,
+ perm=(2, 0, 1),
+ name='GradChannelReadout'):
+ """Args:
+
+ num_grad_channels: int
+ number of channels to readout to.
+ device: str or callable
+ devicwe to place weights.
+ perm: list or tuple
+ transpose applied.
+ """
+
+ self.num_grad_channels = num_grad_channels
+ self.device = device
+ self.perm = perm
+ super(GradChannelReadout, self).__init__(name=name)
+
+ def _build(self, h):
+ with tf.device(self.device):
+ mod = snt.Linear(self.num_grad_channels)
+ ret = snt.BatchApply(mod)(h)
+ # return as [num_grad_channels] x [bs] x [num units]
+ return tf.transpose(ret, perm=self.perm)
+
+
+def get_weight_stats(x, axis):
+ """ Compute weight statistics over the given axis.
+
+ Args:
+ x: tf.Tensor
+ a batch of activations.
+ axis: int
+ axis to perform statistics over.
+ Returns:
+ tf.Tensor
+ a 3-D tensor with statistics.
+ """
+ if x is None:
+ return []
+
+ stats = []
+ l1 = tf.reduce_mean(tf.abs(x), axis=axis)
+ l2 = tf.sqrt(tf.reduce_mean(x**2, axis=axis) + 1e-6)
+
+ mean, var = tf.nn.moments(x, [axis])
+ stats.extend([l1, l2, mean, tf.sqrt(var + 1e-8)])
+
+ stats = [tf.reshape(s, [-1, 1, 1]) for s in stats]
+
+ return stats
+
+
+class AddUnitBatchStatistics(snt.AbstractModule):
+ """Compute some number of statistics over units and concat them on."""
+
+ def __init__(self, name='AddUnitBatchStatistics'):
+ super(AddUnitBatchStatistics, self).__init__(name=name)
+
+ def _build(self, x):
+ # [channel, bs, 1]
+ output = x
+ for d in [0, 1]:
+ stats = []
+ l1 = tf.reduce_mean(tf.abs(x), axis=d, keepdims=True)
+ l2 = tf.sqrt(tf.reduce_mean(x**2, axis=d, keepdims=True) + 1e-6)
+
+ mean, var = tf.nn.moments(x, [d], keepdims=True)
+ stats.extend([l1, l2, mean, tf.sqrt(var + 1e-8)])
+
+ to_add = tf.concat(stats, axis=2) # [channels/1, units/1, stats]
+ output += snt.BatchApply(snt.Linear(x.shape.as_list()[2]))(to_add)
+ return output
+
+
+class ConcatUnitConv(snt.AbstractModule):
+ """Do a small number of convolutions over units and concat / add them on."""
+
+ def __init__(self, add=True):
+ self.add = add
+ super(ConcatUnitConv, self).__init__(name='ConcatUnitConv')
+
+ def _build(self, x):
+ # x is [units, bs, 1]
+ net = tf.transpose(x, [1, 0, 2]) # now [bs x units x 1]
+ channels = x.shape.as_list()[2]
+ mod = snt.Conv1D(output_channels=channels, kernel_shape=[3])
+ net = mod(net)
+ net = snt.BatchNorm(axis=[0, 1])(net, is_training=False)
+ net = tf.nn.relu(net)
+ mod = snt.Conv1D(output_channels=channels, kernel_shape=[3])
+ net = mod(net)
+ net = snt.BatchNorm(axis=[0, 1])(net, is_training=False)
+ net = tf.nn.relu(net)
+ to_concat = tf.transpose(net, [1, 0, 2])
+ if self.add:
+ return x + to_concat
+ else:
+ return tf.concat([x, to_concat], 2)
+
+
+class MoreLocalWeightUpdateProcess(snt.AbstractModule):
+
+ def __init__(
+ self,
+ remote_device,
+ local_device,
+ top_delta_size=64,
+ top_delta_layers=2,
+ compute_h_size=64,
+ compute_h_layers=1,
+ delta_dim=32,
+ num_grad_channels=4,
+ normalize_epsilon=1.,
+ ):
+ self.local_device = local_device
+ self.remote_device = remote_device
+ self.top_delta_size = top_delta_size
+ self.top_delta_layers = top_delta_layers
+ self.compute_h_size = compute_h_size
+ self.compute_h_layers = compute_h_layers
+ self.delta_dim = delta_dim
+ self.num_grad_channels = num_grad_channels
+    self.normalize_epsilon = normalize_epsilon
+
+ with tf.device(local_device):
+ self.opt = optimizers.UnrollableGradientDescentRollingOptimizer(
+ learning_rate=1e-4)
+
+ # lazily initialized for readouts
+ self.readout_mods = {}
+
+ super(MoreLocalWeightUpdateProcess,
+ self).__init__(name='MoreLocalWeightUpdateProcess')
+
+ with tf.device(remote_device):
+ self()
+
+ def normalize(self, change_w, normalize_epsilon=None):
+ if normalize_epsilon is None:
+ normalize_epsilon = self.normalize_epsilon
+
+ # normalize the weights per receptive-field, rather than per-matrix
+ var = tf.reduce_mean(tf.square(change_w), axis=0, keepdims=True)
+ change_w = (change_w) / tf.sqrt(normalize_epsilon + var)
+ return change_w
+
+ def _build(self):
+ pass
+
+ @snt.reuse_variables
+ def compute_top_delta(self, z):
+ """ parameterization of topD. This converts the top level activation
+ to an error signal.
+ Args:
+ z: tf.Tensor
+ batch of final layer post activations
+ Returns
+ delta: tf.Tensor
+ the error signal
+ """
+ s_idx = 0
+ with tf.variable_scope('compute_top_delta'), tf.device(self.remote_device):
+ # typically this takes [BS, length, input_channels],
+ # We are applying this such that we convolve over the batch dimension.
+ act = tf.expand_dims(tf.transpose(z, [1, 0]), 2) # [channels, BS, 1]
+
+ mod = snt.Conv1D(output_channels=self.top_delta_size, kernel_shape=[5])
+ act = mod(act)
+
+ act = snt.BatchNorm(axis=[0, 1])(act, is_training=False)
+ act = tf.nn.relu(act)
+
+ bs = act.shape.as_list()[0]
+ act = tf.transpose(act, [2, 1, 0])
+ act = snt.Conv1D(output_channels=bs, kernel_shape=[3])(act)
+ act = snt.BatchNorm(axis=[0, 1])(act, is_training=False)
+ act = tf.nn.relu(act)
+ act = snt.Conv1D(output_channels=bs, kernel_shape=[3])(act)
+ act = snt.BatchNorm(axis=[0, 1])(act, is_training=False)
+ act = tf.nn.relu(act)
+ act = tf.transpose(act, [2, 1, 0])
+
+ prev_act = act
+ for i in range(self.top_delta_layers):
+ mod = snt.Conv1D(output_channels=self.top_delta_size, kernel_shape=[3])
+ act = mod(act)
+
+ act = snt.BatchNorm(axis=[0, 1])(act, is_training=False)
+ act = tf.nn.relu(act)
+
+ prev_act = act
+
+ mod = snt.Conv1D(output_channels=self.delta_dim, kernel_shape=[3])
+ act = mod(act)
+
+ # [bs, feature_channels, delta_channels]
+ act = tf.transpose(act, [1, 0, 2])
+ return act
+
+ @snt.reuse_variables
+ def compute_h(self,
+ x,
+ z,
+ d,
+ bias,
+ W_bot,
+ W_top,
+ compute_perc=1.0,
+ compute_units=None):
+ """z = [BS, n_units] a = [BS, n_units] b = [BS, n_units] d = [BS, n_units, delta_channels]
+
+ """
+
+ s_idx = 0
+ if compute_perc != 1.0:
+ assert compute_units is None
+
+ with tf.device(self.remote_device):
+ inp_feat = [x, z]
+ inp_feat = [tf.transpose(f, [1, 0]) for f in inp_feat]
+
+ units = x.shape.as_list()[1]
+ bs = x.shape.as_list()[0]
+
+ # add unit ID, to help the network differentiate units
+ id_theta = tf.linspace(0., (4) * np.pi, units)
+ assert bs is not None
+ id_theta_bs = tf.reshape(id_theta, [-1, 1]) * tf.ones([1, bs])
+ inp_feat += [tf.sin(id_theta_bs), tf.cos(id_theta_bs)]
+
+ # list of [units, BS, 1]
+ inp_feat = [tf.expand_dims(f, 2) for f in inp_feat]
+
+ d_trans = tf.transpose(d, [1, 0, 2])
+
+ if compute_perc != 1.0:
+ compute_units = int(compute_perc * inp_feat.shape.as_list()[0])
+
+ # add weight matrix statistics, both from above and below
+ w_stats_bot = get_weight_stats(W_bot, 0)
+ w_stats_top = get_weight_stats(W_top, 1)
+ w_stats = w_stats_bot + w_stats_top
+ if W_bot is None or W_top is None:
+ # if it's an edge layer (top or bottom), just duplicate the stats for
+ # the weight matrix that does exist
+ w_stats = w_stats + w_stats
+ w_stats = [tf.ones([1, x.shape[0], 1]) * ww for ww in w_stats]
+ # w_stats is a list, with entries with shape UNITS x 1 x channels
+
+ if compute_units is None:
+ inp_feat_in = inp_feat
+ d_trans_in = d_trans
+ w_stats_in = w_stats
+ bias_in = tf.transpose(bias)
+ else:
+ # only run on a subset of the activations.
+ mask = tf.random_uniform(
+ minval=0,
+ maxval=1,
+ dtype=tf.float32,
+ shape=inp_feat[0].shape.as_list()[0:1])
+ _, ind = tf.nn.top_k(mask, k=compute_units)
+ ind = tf.reshape(ind, [-1, 1])
+
+ inp_feat_in = [tf.gather_nd(xx, ind) for xx in inp_feat]
+ w_stats_in = [tf.gather_nd(xx, ind) for xx in w_stats]
+ d_trans_in = tf.gather_nd(d_trans, ind)
+ bias_in = tf.gather_nd(tf.transpose(bias), ind)
+
+ w_stats_in = tf.concat(w_stats_in, 2)
+ w_stats_in_norm = w_stats_in * tf.rsqrt(
+ tf.reduce_mean(w_stats_in**2) + 1e-6)
+
+ act = tf.concat(inp_feat_in + [d_trans_in], 2)
+ act = snt.BatchNorm(axis=[0, 1])(act, is_training=True)
+
+ bias_dense = tf.reshape(bias_in, [-1, 1, 1]) * tf.ones([1, bs, 1])
+ act = tf.concat([w_stats_in_norm, bias_dense, act], 2)
+
+ mod = snt.Conv1D(output_channels=self.compute_h_size, kernel_shape=[3])
+ act = mod(act)
+
+ act = snt.BatchNorm(axis=[0, 1])(act, is_training=True)
+ act = tf.nn.relu(act)
+
+ act2 = ConcatUnitConv()(act)
+ act = act2
+
+ prev_act = act
+ for i in range(self.compute_h_layers):
+ mod = snt.Conv1D(output_channels=self.compute_h_size, kernel_shape=[3])
+ act = mod(act)
+
+ act = snt.BatchNorm(axis=[0, 1])(act, is_training=True)
+ act = tf.nn.relu(act)
+
+ act = ConcatUnitConv()(act)
+
+ prev_act = act
+
+ h = act
+ if compute_units is not None:
+ shape = inp_feat[0].shape.as_list()[:1] + h.shape.as_list()[1:]
+ h = tf.scatter_nd(ind, h, shape=shape)
+
+ h = tf.transpose(h, [1, 0, 2]) # [bs, units, channels]
+
+ return h
+
+ ## wrappers to allow forward and backward to have different variables
+ @snt.reuse_variables
+ def merge_change_w_forward(self, change_w_terms, global_prefix='', prefix=''):
+ return self.merge_change_w(
+ change_w_terms, global_prefix=global_prefix, prefix=prefix)
+
+ @snt.reuse_variables
+ def merge_change_w_backward(self, change_w_terms, global_prefix='',
+ prefix=''):
+ return self.merge_change_w(
+ change_w_terms, global_prefix=global_prefix, prefix=prefix)
+
+ def merge_change_w(self, change_w_terms, global_prefix='', prefix=''):
+ with tf.device(
+ self.remote_device), tf.name_scope(global_prefix + '_merge_change_w'):
+ w_base = change_w_terms['w_base']
+
+ for kk in sorted(change_w_terms.keys()):
+ name = global_prefix + 'change_w_plane_%s' % kk
+ delta_w = change_w_terms[kk]
+ mean, var = tf.nn.moments(delta_w, [0, 1])
+ root_mean_square = tf.sqrt(tf.reduce_mean(delta_w**2) + 1e-6)
+
+ for kk in sorted(change_w_terms.keys()):
+ change_w_terms[kk] = self.normalize(change_w_terms[kk])
+
+ initializers = {
+ 'w': tf.constant_initializer(0.1),
+ 'b': tf.zeros_initializer()
+ }
+ mod = snt.Linear(
+ 1,
+ name=global_prefix + '_weight_readout_coeffs',
+ initializers=initializers)
+
+ change_w_terms_list = [
+ change_w_terms[kk] for kk in sorted(change_w_terms.keys())
+ ]
+ stack_terms = tf.stack(change_w_terms_list, axis=-1)
+ change_w = tf.squeeze(
+ snt.BatchApply(mod)(stack_terms), axis=-1) / len(change_w_terms)
+
+ # only allow perpendicular updates, or updates which grow length. don't
+ # allow length to decay towards zero.
+ ip = tf.reduce_mean(change_w * w_base)
+ # zero out any updates that shrink length
+ ip = tf.nn.relu(ip)
+ change_w -= w_base * ip
+ change_w /= tf.sqrt(len(change_w_terms) * 1.)
+
+ change_w = self.normalize(change_w)
+
+ # encourage the receptive field to not collapse to 0
+ change_w -= w_base / 7. # This is an arbitrary scale choice
+
+ return tf.identity(change_w)
+
+ @snt.reuse_variables
+ def bias_readout(self, h):
+ with tf.device(self.remote_device):
+ mod = snt.Linear(1, name='bias_readout')
+ ret = snt.BatchApply(mod)(h)
+ return tf.squeeze(ret, 2)
+
+ @snt.reuse_variables
+ def next_delta(self, z, h, d):
+ with tf.device(self.remote_device):
+ return d * tf.expand_dims(tf.nn.sigmoid(z), 2) + self.to_delta_size(h)
+
+ @utils.create_variables_in_class_scope
+ def get_readout_mod(self, name):
+ if name not in self.readout_mods:
+ self.readout_mods[name] = GradChannelReadout(
+ self.num_grad_channels, device=self.remote_device, name=name)
+
+ return self.readout_mods[name]
+
+ @utils.create_variables_in_class_scope
+ def low_rank_readout(self, name, h1, h2, psd=False):
+ BS = h1.shape.as_list()[0]
+ r_t = self.get_readout_mod(name + '_top')(h1)
+ if psd:
+ r_b = r_t
+ else:
+ r_b = self.get_readout_mod(name + '_bottom')(h2)
+ return tf.reduce_mean(tf.matmul(r_b, r_t, transpose_a=True), axis=0) / BS
+
+ @snt.reuse_variables
+ def to_delta_size(self, h):
+ with tf.device(self.remote_device):
+ mod = snt.Linear(self.delta_dim)
+ return snt.BatchApply(mod)(h)
+
+ @snt.reuse_variables
+ def initial_state(self, variables):
+ """The inner optimization state.
+
+ Args:
+ variables: list of tf.Variable
+ list of variables to get the initial state of.
+ Returns:
+ opt_state: OptState
+ """
+
+ with tf.device(self.local_device):
+ initial_opt_state = self.opt.get_state(variables)
+
+ return OptState(
+ variables=variables, opt_state=initial_opt_state, index=tf.constant(0))
+
+ @snt.reuse_variables
+ def compute_next_state(self, grads, learning_rate, cur_state,
+ cur_transformer):
+
+ summaries = []
+ with tf.device(self.local_device):
+ with tf.control_dependencies(summaries):
+ new_vars, new_state = self.opt.compute_updates(
+ cur_state.variables, grads, learning_rate, cur_state.opt_state)
+ pass
+
+ return OptState(
+ variables=tuple(new_vars),
+ opt_state=new_state,
+ index=cur_state.index + 1)
+
+ def assign_state(self, base_model, next_state):
+ var_ups = [
+ v.assign(nv) for v, nv in utils.eqzip(base_model.get_variables(),
+ next_state.variables)
+ ]
+
+ opt_ups = self.opt.assign_state(next_state.opt_state)
+
+ return tf.group(opt_ups, *var_ups)
+
+ def local_variables(self):
+ return list(self.opt.get_variables())
+
+ def remote_variables(self):
+ train = list(
+ snt.get_variables_in_module(self, tf.GraphKeys.TRAINABLE_VARIABLES))
+ train += list(
+ snt.get_variables_in_module(self,
+ tf.GraphKeys.MOVING_AVERAGE_VARIABLES))
+ return train
+
+
+class MoreLocalWeightUpdateWLearner(snt.AbstractModule):
+ """The BaseModel that the UnsupervisedUpdateRule acts on.
+ """
+
+ def __init__(self,
+ remote_device,
+ local_device,
+ inner_size=128,
+ output_size=32,
+ n_layers=4,
+ shuffle_input=True,
+ activation_fn=tf.nn.relu,
+ identical_updates=True,
+ **kwargs):
+ self.local_device = local_device
+ self.remote_device = remote_device
+ self.inner_size = inner_size
+ self.n_layers = n_layers
+ self.shuffle_input = shuffle_input
+ self.activation_fn = activation_fn
+ self.identical_updates = identical_updates
+
+ self.output_size = output_size
+    if output_size is None:
+ self.output_size = inner_size
+
+ self.shuffle_ind = None
+
+ super(MoreLocalWeightUpdateWLearner, self).__init__(
+ name='LocalWeightUpdateWLearner', **kwargs)
+
+ @snt.reuse_variables
+ def get_shuffle_ind(self, size):
+ if self.shuffle_ind is None:
+ # put the shuffle in tf memory to make the eval jobs
+ # re-entrant.
+ shuffle_ind_val = np.random.permutation(size)
+ shuffle_ind = tf.get_variable(
+ name='shuffle_ind', dtype=tf.int64, initializer=shuffle_ind_val)
+ unshuffle_ind = tf.scatter_nd(
+ tf.reshape(shuffle_ind, [-1, 1]), tf.range(size), [size])
+
+ return shuffle_ind, unshuffle_ind
+
+ def _build(self, batch):
+ image = batch.image
+ x0 = snt.BatchFlatten()(image)
+ if self.shuffle_input:
+ size = x0.shape.as_list()[1]
+ shuffle_ind, unshuffle_ind = self.get_shuffle_ind(size)
+ x0 = tf.gather(x0, shuffle_ind, axis=1)
+
+ xs = [x0]
+ mods = []
+ zs = []
+ init = {}
+
+ for i in range(self.n_layers):
+ mod = common.LinearBatchNorm(
+ self.inner_size, activation_fn=self.activation_fn)
+ z, x = mod(xs[i])
+ xs.append(x)
+ zs.append(z)
+ mods.append(mod)
+
+ mod = common.LinearBatchNorm(
+ self.output_size, activation_fn=self.activation_fn)
+ z, x = mod(xs[-1])
+ mods.append(mod)
+
+ xs.append(x)
+ zs.append(z)
+
+ embedding_x = xs[-1]
+
+ # make a random set of backward mods
+ backward_mods = []
+ for i, (x, x_p1) in enumerate(zip(xs[0:-1], xs[1:])):
+ m = common.LinearBatchNorm(
+ x_p1.shape.as_list()[1], activation_fn=tf.identity)
+ _ = m(x)
+ backward_mods.append(m)
+
+ shape = image.shape.as_list()[1:4]
+
+ for mods_p, prefix in [(mods, 'forward'), (backward_mods, 'backward')]:
+ if self.shuffle_input:
+ unshuf_w = tf.gather(mods_p[0].w, unshuffle_ind, axis=0)
+ else:
+ unshuf_w = mods_p[0].w
+ img = summary_utils.first_layer_weight_image(unshuf_w, shape)
+ tf.summary.image(prefix + '_w0_receptive_field', img)
+
+ for i, m in enumerate(mods_p[0:]):
+ img = summary_utils.inner_layer_weight_image(m.w)
+ tf.summary.image(prefix + '_w%d' % (i + 1), img)
+
+ img = summary_utils.sorted_images(image, batch.label_onehot)
+ tf.summary.image('inputs', img)
+
+ # log out pre-activations and activations
+ for all_vis, base_name in [(xs, 'x'), (zs, 'z')]:
+ for i, x_vis in enumerate(all_vis):
+ img = summary_utils.activation_image(x_vis, batch.label_onehot)
+ tf.summary.image('%s%d' % (base_name, i), img)
+
+ embedding_x = tf.identity(embedding_x)
+
+ outputs = BaseModelOutputs(
+ xs=xs, zs=zs, mods=mods, batch=batch, backward_mods=backward_mods)
+
+ return embedding_x, outputs
+
+ def compute_next_h_d(self, meta_opt, w_bot, w_top, bias, x, z, d, backward_w):
+ """ Propogate error back down the network while computing hidden state.
+ """
+ if z is None:
+ z = x
+
+ h = meta_opt.compute_h(x, z, d, bias, w_bot,
+ w_top) # [bs x 60 x h_channels]
+
+ # compute the next d
+ delta = meta_opt.next_delta(z, h, d)
+
+ if backward_w is not None:
+
+ def delta_matmul(w, delta):
+ d = tf.transpose(delta, [0, 2, 1]) # [bs x delta_channels x n_units)
+ d = snt.BatchApply(lambda x: tf.matmul(x, w, transpose_b=True))(d)
+ d = tf.transpose(d, [0, 2, 1])
+ return d
+
+ # replace the "backward pass" with a random matrix.
+ d = delta_matmul(backward_w, delta) # [bs x 60 x delta_channels]
+ var = tf.reduce_mean(tf.square(d), [2], keepdims=True)
+ d = d * tf.rsqrt(1e-6 + var)
+
+ return h, d
+
+ def weight_change_for_layer(self, meta_opt, l_idx, w_base, b_base, upper_h,
+ lower_h, upper_x, lower_x, prefix, include_bias):
+ """Compute the change in weights for each layer.
+ This computes something roughly analagous to a gradient.
+ """
+ reduce_upper_h = upper_h
+ reduce_lower_h = lower_h
+
+ BS = lower_x.shape.as_list()[0]
+
+ change_w_terms = dict()
+
+ # initial weight value normalized
+ # normalize the weights per receptive-field, rather than per-matrix
+ weight_scale = tf.rsqrt(
+ tf.reduce_mean(w_base**2, axis=0, keepdims=True) + 1e-6)
+ w_base *= weight_scale
+
+ change_w_terms['w_base'] = w_base
+
+ # this will act to decay larger weights towards zero
+ change_w_terms['large_decay'] = w_base**2 * tf.sign(w_base)
+
+ # term based on activations
+ ux0 = upper_x - tf.reduce_mean(upper_x, axis=0, keepdims=True)
+ uxs0 = ux0 * tf.rsqrt(tf.reduce_mean(ux0**2, axis=0, keepdims=True) + 1e-6)
+ change_U = tf.matmul(uxs0, uxs0, transpose_a=True) / BS
+ change_U /= tf.sqrt(float(change_U.shape.as_list()[0]))
+
+ cw = tf.matmul(w_base, change_U)
+ cw_scale = tf.rsqrt(tf.reduce_mean(cw**2 + 1e-8))
+ cw *= cw_scale
+ change_w_terms['decorr_x'] = cw
+
+ # hebbian term
+ lx0 = lower_x - tf.reduce_mean(lower_x, axis=0, keepdims=True)
+ lxs0 = lx0 * tf.rsqrt(tf.reduce_mean(lx0**2, axis=0, keepdims=True) + 1e-6)
+ cw = tf.matmul(lxs0, uxs0, transpose_a=True) / BS
+ change_w_terms['hebb'] = -cw
+
+ # 0th order term
+ w_term = meta_opt.low_rank_readout(prefix + 'weight_readout_0', upper_h,
+ lower_h)
+ change_w_terms['0_order'] = w_term
+
+ # # rbf term (weight update scaled by distance from 0)
+ w_term = meta_opt.low_rank_readout(prefix + 'weight_readout_rbf',
+ reduce_upper_h, reduce_lower_h)
+ change_w_terms['rbf'] = tf.exp(-w_base**2) * w_term
+
+ # 1st order term (weight dependent update to weights)
+ w_term = meta_opt.low_rank_readout(prefix + 'weight_readout_1',
+ reduce_upper_h, reduce_lower_h)
+ change_w_terms['1_order'] = w_base * w_term
+
+ # more terms based on single layer readouts.
+ for update_type in ['lin', 'sqr']:
+ for h_source, h_source_name in [(reduce_upper_h, 'upper'),
+ (reduce_lower_h, 'lower')]:
+ structures = ['symm']
+ if update_type == 'lin' and h_source_name == 'upper':
+ structures += ['psd']
+ for structure in structures:
+ name = update_type + '_' + h_source_name + '_' + structure
+ if structure == 'symm':
+ change_U = meta_opt.low_rank_readout(prefix + name, h_source,
+ h_source)
+ change_U = (change_U + tf.transpose(change_U)) / tf.sqrt(2.)
+ change_U = tf.matrix_set_diag(change_U,
+ tf.zeros(
+ [change_U.shape.as_list()[0]]))
+ elif structure == 'psd':
+ change_U = meta_opt.low_rank_readout(
+ prefix + name, h_source, None, psd=True)
+ else:
+ assert False
+ change_U /= tf.sqrt(float(change_U.shape.as_list()[0]))
+
+ if update_type == 'lin':
+ sign_multiplier = tf.ones_like(w_base)
+ w_base_l = w_base
+ elif update_type == 'sqr':
+ sign_multiplier = tf.sign(w_base)
+ w_base_l = tf.sqrt(1. + w_base**2) - 1.
+
+ if h_source_name == 'upper':
+ cw = tf.matmul(w_base_l, change_U) # [N^l-1 x N^l]
+ elif h_source_name == 'lower':
+ cw = tf.matmul(change_U, w_base_l)
+ change_w_terms[name] = cw * sign_multiplier
+
+
+ if prefix == 'forward':
+ change_w = meta_opt.merge_change_w_forward(
+ change_w_terms, global_prefix=prefix, prefix='l%d' % l_idx)
+ elif prefix == 'backward':
+ change_w = meta_opt.merge_change_w_backward(
+ change_w_terms, global_prefix=prefix, prefix='l%d' % l_idx)
+ else:
+ assert (False)
+
+ if not include_bias:
+ return change_w
+
+ change_b = tf.reduce_mean(meta_opt.bias_readout(upper_h), [0])
+
+ # force nonlinearities to be exercised -- biases can't all be increased without bound
+ change_b_mean = tf.reduce_mean(change_b)
+ offset = -tf.nn.relu(-change_b_mean)
+ change_b -= offset
+
+ var = tf.reduce_mean(tf.square(change_b), [0], keepdims=True)
+ change_b = (change_b) / tf.sqrt(0.5 + var)
+ return change_w, change_b
+
+ def compute_next_state(self, outputs, meta_opt, previous_state):
+ zs = outputs.zs
+ xs = outputs.xs
+ batch = outputs.batch
+ mods = outputs.mods
+ backward_mods = outputs.backward_mods
+ variables = self.get_variables()
+
+ rev_mods = mods[::-1]
+ rev_backward_mods = backward_mods[::-1]
+ rev_xs = xs[::-1]
+ rev_zs = zs[::-1] + [None]
+
+ to_top = xs[-1]
+
+ # variables that change in the loop
+ hs = []
+ d = meta_opt.compute_top_delta(to_top) # [bs x 32 x delta_channels]
+
+ iterator = utils.eqzip(rev_backward_mods + [None], rev_mods + [None],
+ [None] + rev_mods, rev_xs, rev_zs)
+ for (backward_mod, lower_mod, upper_mod, x, z) in iterator:
+ w_bot = None
+ if not lower_mod is None:
+ w_bot = previous_state.variables[variables.index(lower_mod.w)]
+ w_top = None
+ if not upper_mod is None:
+ w_top = previous_state.variables[variables.index(upper_mod.w)]
+ backward_w = None
+ if backward_mod is not None:
+ backward_w = previous_state.variables[variables.index(backward_mod.w)]
+ if lower_mod is not None:
+ bias = previous_state.variables[variables.index(lower_mod.b)]
+ else:
+ bias = tf.zeros([x.shape[1]])
+
+ h, d = self.compute_next_h_d(
+ meta_opt=meta_opt,
+ w_bot=w_bot,
+ w_top=w_top,
+ bias=bias,
+ backward_w=backward_w,
+ x=x,
+ z=z,
+ d=d)
+ hs.append(h)
+
+ w_forward_var_idx = [variables.index(mod.w) for mod in rev_mods]
+ w_backward_var_idx = [variables.index(mod.w) for mod in rev_backward_mods]
+ b_var_idx = [variables.index(mod.b) for mod in rev_mods]
+
+ # storage location for outputs of below loop
+ grads = [None for _ in previous_state.variables]
+
+ # over-ride learning rate for perturbation variables
+ learning_rate = [None for _ in previous_state.variables]
+
+ # This is a map -- no state is shared cross loop
+ for l_idx, w_forward_idx, w_backward_idx, b_idx, upper_h, lower_h, lower_x, upper_x in utils.eqzip(
+ range(len(w_forward_var_idx)), w_forward_var_idx, w_backward_var_idx,
+ b_var_idx, hs[:-1], hs[1:], xs[::-1][1:], xs[::-1][:-1]):
+
+ b_base = previous_state.variables[b_idx]
+ change_w_forward, change_b = self.weight_change_for_layer(
+ meta_opt=meta_opt,
+ l_idx=l_idx,
+ w_base=previous_state.variables[w_forward_idx],
+ b_base=b_base,
+ upper_h=upper_h,
+ lower_h=lower_h,
+ upper_x=upper_x,
+ lower_x=lower_x,
+ prefix='forward',
+ include_bias=True)
+
+ if self.identical_updates:
+ change_w_backward = change_w_forward
+ else:
+ change_w_backward = self.weight_change_for_layer(
+ meta_opt=meta_opt,
+ l_idx=l_idx,
+ w_base=previous_state.variables[w_backward_idx],
+ b_base=b_base,
+ upper_h=upper_h,
+ lower_h=lower_h,
+ upper_x=upper_x,
+ lower_x=lower_x,
+ prefix='backward',
+ include_bias=False)
+
+ grads[w_forward_idx] = change_w_forward
+
+ grads[w_backward_idx] = change_w_backward
+
+ grads[b_idx] = change_b
+
+ cur_transformer = common.transformer_at_state(self,
+ previous_state.variables)
+ next_state = meta_opt.compute_next_state(
+ grads,
+ learning_rate=learning_rate,
+ cur_state=previous_state,
+ cur_transformer=lambda x: cur_transformer(x)[0])
+ return next_state
+
+ def initial_state(self, meta_opt):
+ return meta_opt.initial_state(self.get_variables())
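
To summarize how these pieces fit together, here is a rough sketch of one inner step of the learned update rule, mirroring the graph construction in `evaluation.py` below; `base_model`, `meta_opt`, and `batch` are placeholder names and this is not a drop-in replacement for the actual code:

```python
# Hypothetical outline of a single unsupervised update step:
# forward pass -> synthesized error signal -> per-layer weight changes
# -> inner-optimizer update written back into the base model.
embedding, outputs = base_model(batch)        # collects xs, zs, mods
state = base_model.initial_state(meta_opt)    # wraps current weights + opt state
next_state = base_model.compute_next_state(outputs, meta_opt, state)
update_op = meta_opt.assign_state(base_model, next_state)  # tf.group of assigns
```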
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/datasets/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/datasets/__init__.py"
new file mode 100644
index 00000000..9949cd96
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/datasets/__init__.py"
@@ -0,0 +1,16 @@
+# Copyright 2018 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+from learning_unsupervised_learning.datasets import mnist
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/datasets/common.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/datasets/common.py"
new file mode 100644
index 00000000..11f65cea
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/datasets/common.py"
@@ -0,0 +1,29 @@
+# Copyright 2018 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import collections
+
+import tensorflow as tf
+import numpy as np
+
+ImageLabelOnehot = collections.namedtuple('ImageLabelOnehot',
+ ['image', 'label', 'label_onehot'])
+ImageLabelOnehotRegression = collections.namedtuple(
+ "ImageLabelOnehotRegression",
+ ["image", "label", "label_onehot", "regression_target"])
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/datasets/mnist.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/datasets/mnist.py"
new file mode 100644
index 00000000..6ee595d9
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/datasets/mnist.py"
@@ -0,0 +1,74 @@
+# Copyright 2018 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+
+import sonnet as snt
+import tensorflow as tf
+from tensorflow.python.keras.datasets import mnist
+from learning_unsupervised_learning.datasets import common
+
+class Mnist(snt.AbstractModule):
+ def __init__(self, device, batch_size=128, name="Mnist"):
+ self.device = device
+ self.batch_size = batch_size
+
+ self._make_dataset()
+ self.iterator = None
+
+ super(Mnist, self).__init__(name=name)
+
+ def _make_dataset(self):
+ (x_train, y_train), (x_test, y_test) = mnist.load_data()
+
+ x_train = x_train.reshape(60000, 784)
+ x_test = x_test.reshape(10000, 784)
+
+ dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
+ dataset = dataset.repeat()
+ dataset = dataset.shuffle(self.batch_size * 3)
+ dataset = dataset.batch(self.batch_size)
+ def _map_fn(image, label):
+ image = tf.to_float(image) / 255.
+ label.set_shape([self.batch_size])
+ label = tf.cast(label, dtype=tf.int32)
+ label_onehot = tf.one_hot(label, 10)
+ image = tf.reshape(image, [self.batch_size, 28, 28, 1])
+ return common.ImageLabelOnehot(
+ image=image, label=label, label_onehot=label_onehot)
+
+ self.dataset = dataset.map(_map_fn)
+
+ def _build(self):
+ if self.iterator is None:
+ self.iterator = self.dataset.make_one_shot_iterator()
+ batch = self.iterator.get_next()
+ [b.set_shape([self.batch_size] + b.shape.as_list()[1:]) for b in batch]
+ return batch
+
+
+class TinyMnist(Mnist):
+ def __init__(self, *args, **kwargs):
+ kwargs.setdefault("name", "TinyMnist")
+ super(TinyMnist, self).__init__(*args, **kwargs)
+
+ def _make_dataset(self):
+ super(TinyMnist, self)._make_dataset()
+
+ def _map_fn(batch):
+ new_img = tf.image.resize_images(batch.image, [14, 14])
+ return common.ImageLabelOnehot(
+ image=new_img, label=batch.label, label_onehot=batch.label_onehot)
+
+ self.dataset = self.dataset.map(_map_fn)
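
A short usage sketch for these dataset wrappers (illustrative only; it assumes a TF1 runtime and that the Keras MNIST download is available):

```python
# Illustrative only: pull one half-resolution MNIST batch.
import tensorflow as tf
from learning_unsupervised_learning.datasets import mnist

dataset = mnist.TinyMnist(device="", batch_size=128)
batch = dataset()  # ImageLabelOnehot: image [128,14,14,1], label [128], label_onehot [128,10]
with tf.Session() as sess:
    images, labels, onehot = sess.run(batch)
```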
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/evaluation.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/evaluation.py"
new file mode 100644
index 00000000..2ec40e99
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/evaluation.py"
@@ -0,0 +1,76 @@
+# Copyright 2018 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+
+"""Evaluation job.
+
+This sits on the side and performs evaluation on a saved model.
+This is a separate process for ease of use and stability of numbers.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from learning_unsupervised_learning import utils
+
+
+def construct_evaluation_graph(theta_process_fn=None,
+ w_learner_fn=None,
+ dataset_fn=None,
+ meta_objectives=None,
+ ):
+ """Construct the evaluation graph.
+ """
+ if meta_objectives is None:
+ meta_objectives = []
+
+ tf.train.create_global_step()
+
+ local_device = ""
+ remote_device = ""
+
+ meta_opt = theta_process_fn(
+ remote_device=remote_device, local_device=local_device)
+
+ base_model = w_learner_fn(
+ remote_device=remote_device, local_device=local_device)
+
+ train_dataset = dataset_fn(device=local_device)
+
+ # construct variables
+ x, outputs = base_model(train_dataset())
+ initial_state = base_model.initial_state(meta_opt, max_steps=10)
+ next_state = base_model.compute_next_state(outputs, meta_opt, initial_state)
+ with utils.state_barrier_context(next_state):
+ train_one_step_op = meta_opt.assign_state(base_model, next_state)
+
+ meta_objs = []
+ for meta_obj_fn in meta_objectives:
+ meta_obj = meta_obj_fn(local_device="", remote_device="")
+ meta_objs.append(meta_obj)
+ J = meta_obj(train_dataset, lambda x: base_model(x)[0])
+ tf.summary.scalar(str(meta_obj.__class__.__name__)+"_J", tf.reduce_mean(J))
+
+ # TODO(lmetz) this is kinda error prone.
+ # We should share the construction of the global variables across train and
+ # make sure both sets of savable variables are the same
+ checkpoint_vars = meta_opt.remote_variables() + [tf.train.get_global_step()]
+ for meta_obj in meta_objs:
+ checkpoint_vars.extend(meta_obj.remote_variables())
+
+ return checkpoint_vars, train_one_step_op, (base_model, train_dataset)
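
For orientation, here is a rough sketch of how the returned values could be consumed; the real driver is `run_eval.py` (not shown in this excerpt), and `theta_process_fn`, `w_learner_fn`, `dataset_fn`, the step count, and the checkpoint path are stand-ins:

```python
# Hypothetical driver loop around construct_evaluation_graph.
import tensorflow as tf

checkpoint_vars, train_op, _ = construct_evaluation_graph(
    theta_process_fn=theta_process_fn,
    w_learner_fn=w_learner_fn,
    dataset_fn=dataset_fn,
    meta_objectives=[])

saver = tf.train.Saver(checkpoint_vars)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.restore(sess, "/path/to/downloaded/model/tf_graph_data.ckpt")
    for _ in range(1000):  # arbitrary number of inner steps
        sess.run(train_op)
```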
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/meta_objective/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/meta_objective/__init__.py"
new file mode 100644
index 00000000..54c46145
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/meta_objective/__init__.py"
@@ -0,0 +1,18 @@
+# Copyright 2018 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+
+from learning_unsupervised_learning.meta_objective import sklearn
+from learning_unsupervised_learning.meta_objective import linear_regression
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/meta_objective/linear_regression.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/meta_objective/linear_regression.py"
new file mode 100644
index 00000000..b49fc252
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/meta_objective/linear_regression.py"
@@ -0,0 +1,258 @@
+# Copyright 2018 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+
+
+"""Closed form linear regression.
+
+Can be differentiated through.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import collections
+import numpy as np
+import sonnet as snt
+import tensorflow as tf
+
+from learning_unsupervised_learning import utils
+from learning_unsupervised_learning import variable_replace
+
+
+def solve_ridge(x, y, ridge_factor):
+ with tf.name_scope("solve_ridge"):
+ # Added a column of ones to the end of the feature matrix for bias
+ A = tf.concat([x, tf.ones((x.shape.as_list()[0], 1))], axis=1)
+
+ # Analytic solution for the ridge regression loss
+ inv_target = tf.matmul(A, A, transpose_a=True)
+ np_diag_penalty = ridge_factor * np.ones(
+ A.shape.as_list()[1], dtype="float32")
+ # Remove penalty on bias component of weights
+ np_diag_penalty[-1] = 0.
+ diag_penalty = tf.constant(np_diag_penalty)
+ inv_target += tf.diag(diag_penalty)
+
+ inv = tf.matrix_inverse(inv_target)
+ w = tf.matmul(inv, tf.matmul(A, y, transpose_a=True))
+ return w
+
+
+class LinearRegressionMetaObjective(snt.AbstractModule):
+ """A meta objective based on training Ridge Regression with analytic solution.
+
+ This is used to evaluate the performance of a given feature set trained in
+ some other manner.
+ """
+
+ def __init__(self,
+ local_device=None,
+ remote_device=None,
+ zero_one_labels=True,
+ normalize_y_hat=True,
+ normalize_act=False,
+ averages=1,
+ ridge_factor=0.1,
+ center_y=True,
+ hinge_loss=False,
+ samples_per_class=10,
+ test_train_scalar=1.0,
+ ):
+ self._local_device = local_device
+ self._remote_device = remote_device
+ self.zero_one_labels = zero_one_labels
+ self.normalize_y_hat = normalize_y_hat
+ self.normalize_act = normalize_act
+ self.ridge_factor = ridge_factor
+ self.averages = averages
+ self.samples_per_class = samples_per_class
+    self.center_y = center_y
+    self.test_train_scalar = test_train_scalar
+ self.hinge_loss = hinge_loss
+
+ self.dataset_map = {}
+
+ super(LinearRegressionMetaObjective,
+ self).__init__(name="LinearRegressionMetaObjective")
+
+ def _build(self, dataset, feature_transformer):
+ if self.samples_per_class is not None:
+ if dataset not in self.dataset_map:
+        # dataset construction must live outside of any while_loop frames
+ with tf.control_dependencies(None):
+ self.dataset_map[dataset] = utils.sample_n_per_class(
+ dataset, self.samples_per_class)
+
+ dataset = self.dataset_map[dataset]
+
+ stats = collections.defaultdict(list)
+ losses = []
+ # TODO(lmetz) move this to ingraph control flow?
+ for _ in xrange(self.averages):
+ loss, stat = self._build_once(dataset, feature_transformer)
+ losses.append(loss)
+ for k, v in stat.items():
+ stats[k].append(v)
+ stats = {k: tf.add_n(v) / float(len(v)) for k, v in stats.items()}
+
+ summary_updates = []
+ for k, v in stats.items():
+ tf.summary.scalar(k, v)
+
+ with tf.control_dependencies(summary_updates):
+ return tf.add_n(losses) / float(len(losses))
+
+ def _build_once(self, dataset, feature_transformer):
+ with tf.device(self._local_device):
+ batch = dataset()
+ num_classes = batch.label_onehot.shape.as_list()[1]
+
+ regression_mod = snt.Linear(num_classes)
+
+ if self.normalize_act:
+
+ def normalize_transformer(x):
+ unnorm_x = feature_transformer(x)
+ return tf.nn.l2_normalize(unnorm_x, 0)
+
+ feature_transformer_wrap = normalize_transformer
+ else:
+ feature_transformer_wrap = feature_transformer
+
+ # construct the variables of the right shape in the sonnet module by
+ # calling a forward pass through the regressor.
+ with utils.assert_no_new_variables():
+ dummy_features = feature_transformer_wrap(batch)
+ regression_mod(dummy_features)
+ reg_w = regression_mod.w
+ reg_b = regression_mod.b
+
+ batch_test = dataset()
+ all_batch = utils.structure_map_multi(lambda x: tf.concat(x, 0), [batch, batch_test])
+ #all_batch = tf.concat([batch, batch_test], 0)
+ # Grab a new batch of data from the dataset.
+ features = feature_transformer_wrap(all_batch)
+ features, features_test = utils.structure_map_split(lambda x: tf.split(x, 2, axis=0), features)
+
+ def center_y(y):
+ y -= tf.reduce_mean(y)
+ y *= tf.rsqrt(tf.reduce_mean(tf.reduce_sum(y**2, axis=[1], keep_dims=True)))
+ return y
+ def get_y_vec(batch):
+ y_pieces = []
+ if hasattr(batch, "label_onehot"):
+ if self.zero_one_labels:
+ y_pieces += [batch.label_onehot]
+ else:
+ y_pieces += [2. * batch.label_onehot - 1.]
+ if hasattr(batch, "regression_target"):
+ y_pieces += [batch.regression_target]
+ y = tf.concat(y_pieces, 1)
+ if self.center_y:
+ y = center_y(y)
+ return y
+
+ y_train = get_y_vec(batch)
+
+ w = solve_ridge(features, y_train, self.ridge_factor)
+
+ # Generate features from another batch to evaluate loss on the validation
+      # set. This provides a less overfit signal to the learned optimizer.
+ y_test = get_y_vec(batch_test)
+
+ def compute_logit(features):
+ # We have updated the classifier mod in previous steps, we need to
+ # substitute out those variables to get new values.
+ replacement = collections.OrderedDict([(reg_w, w[:-1]), (reg_b, w[-1])])
+ with variable_replace.variable_replace(replacement):
+ logits = regression_mod(features)
+
+ return logits
+
+ batch_size = y_train.shape.as_list()[0]
+
+ logit_train = compute_logit(features)
+ logit_test_unnorm = compute_logit(features_test)
+ if self.normalize_y_hat:
+ logit_test = logit_test_unnorm / tf.sqrt(
+ tf.reduce_sum(logit_test_unnorm**2, axis=[1], keep_dims=True))
+ else:
+ logit_test = logit_test_unnorm
+
+ stats = {}
+
+ if self.hinge_loss:
+ # slightly closer to the true classification loss
+ # any distance smaller than 1 is guaranteed to map to the correct class
+ mse_test = tf.reduce_sum(tf.nn.relu(tf.reduce_sum(tf.square(logit_test - y_test), axis=1)-1.)) / batch_size
+ else:
+ mse_test = tf.reduce_sum(tf.square(logit_test - y_test)) / batch_size
+
+ stats["mse_test"] = mse_test
+
+ mse_train = tf.reduce_sum(tf.square(logit_train - y_train)) / batch_size
+ stats["mse_train"] = mse_train
+
+ is_correct_test = tf.equal(tf.argmax(logit_test, 1), tf.argmax(y_test, 1))
+ accuracy_test = tf.reduce_mean(tf.cast(is_correct_test, tf.float32))
+ stats["accuracy_test"] = accuracy_test
+
+ def test_confusion_fn():
+ test_confusion = tf.confusion_matrix(tf.argmax(y_test, 1), tf.argmax(logit_test, 1))
+ test_confusion = tf.to_float(test_confusion) / tf.constant((logit_test.shape.as_list()[0] / float(logit_test.shape.as_list()[1])), dtype=tf.float32)
+ test_confusion = tf.expand_dims(tf.expand_dims(test_confusion, 0), 3)
+ return test_confusion
+ tf.summary.image("test_confusion", test_confusion_fn())
+
+ def train_confusion_fn():
+ train_confusion = tf.confusion_matrix(tf.argmax(y_train, 1), tf.argmax(logit_train, 1))
+ train_confusion = tf.to_float(train_confusion) / tf.constant((logit_train.shape.as_list()[0] / float(logit_train.shape.as_list()[1])), dtype=tf.float32)
+ train_confusion = tf.expand_dims(tf.expand_dims(train_confusion, 0), 3)
+ return train_confusion
+ tf.summary.image("train_confusion", train_confusion_fn())
+
+ is_correct = tf.equal(tf.argmax(logit_train, 1), tf.argmax(y_train, 1))
+ accuracy_train = tf.reduce_mean(tf.cast(is_correct, tf.float32))
+ stats["accuracy_train"] = accuracy_train
+
+ reg = self.ridge_factor * tf.reduce_sum(tf.square(w[:-1])) / batch_size
+ stats["ridge_component"] = reg
+
+ stats["total_loss"] = mse_test + reg
+
+      loss_to_train_at = (reg + mse_test) * self.test_train_scalar + (mse_train + reg) * (1 - self.test_train_scalar)
+
+ loss_to_train_at = tf.identity(loss_to_train_at)
+
+      # Minimizing the test loss should not require regularization because the
+      # meta-objective is solved for the training loss.
+ return loss_to_train_at, stats
+
+ def local_variables(self):
+ """List of variables that need to be updated for each evaluation.
+
+ These variables should not be stored on a parameter server and
+ should be reset every computation of a meta_objective loss.
+
+ Returns:
+ vars: list of tf.Variable
+ """
+ return list(
+ snt.get_variables_in_module(self, tf.GraphKeys.TRAINABLE_VARIABLES))
+
+ def remote_variables(self):
+ return []
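
For reference, `solve_ridge` above implements the standard closed-form ridge solution w = (AᵀA + λD)⁻¹Aᵀy, where A is the feature matrix with an appended ones column and D zeroes out the penalty on the bias. A small NumPy sanity-check sketch (shapes and `ridge_factor` are arbitrary illustrative choices):

```python
# NumPy sketch of the closed form used by solve_ridge.
import numpy as np

x = np.random.randn(32, 8).astype(np.float32)    # features
y = np.random.randn(32, 10).astype(np.float32)   # (centered) targets
ridge_factor = 0.1

A = np.concatenate([x, np.ones((x.shape[0], 1), np.float32)], axis=1)
penalty = ridge_factor * np.ones(A.shape[1], np.float32)
penalty[-1] = 0.                                  # no penalty on the bias column
w = np.linalg.solve(A.T @ A + np.diag(penalty), A.T @ y)
# w[:-1] are the weights and w[-1] the bias, matching compute_logit above.
```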
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/meta_objective/sklearn.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/meta_objective/sklearn.py"
new file mode 100644
index 00000000..4f1f2d59
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/meta_objective/sklearn.py"
@@ -0,0 +1,167 @@
+# Copyright 2018 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+
+"""
+
+Can NOT be differentiated through.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import collections
+import numpy as np
+import sonnet as snt
+import tensorflow as tf
+from tensorflow.python.framework import function
+
+from learning_unsupervised_learning import utils
+
+from learning_unsupervised_learning.meta_objective import utils as meta_obj_utils
+
+from sklearn import svm
+from sklearn import linear_model
+
+
+def build_fit(device, model_fn, num_classes, probs=True):
+
+ def _py_fit_predict(trX, trY, teX):
+ assert len(np.unique(trY)) == num_classes
+ model = model_fn()
+ model.fit(trX, trY)
+ trP = model.predict(trX)
+ teP = model.predict(teX)
+ if probs:
+ teP_probs = model.predict_log_proba(teX)
+ return trP.astype(np.int64), teP.astype(np.int64), teP_probs.astype(
+ np.float32)
+ else:
+ teP = model.predict(teX)
+ return trP.astype(np.int64), teP.astype(np.int64)
+
+ def return_fn(trX, trY, teX):
+ with tf.device(device):
+ with tf.device("/cpu:0"):
+ if probs:
+ return tf.py_func(
+ _py_fit_predict,
+ [tf.identity(trX),
+ tf.identity(trY),
+ tf.identity(teX)], [tf.int64, tf.int64, tf.float32])
+ else:
+ return tf.py_func(
+ _py_fit_predict,
+ [tf.identity(trX),
+ tf.identity(trY),
+ tf.identity(teX)], [tf.int64, tf.int64])
+
+ return return_fn
+
+
+class SKLearn(meta_obj_utils.MultiTrialMetaObjective):
+
+ def __init__(
+ self,
+ local_device=None,
+ remote_device=None,
+ averages=1,
+ samples_per_class=10,
+ probs=False,
+ stddev=0.01,
+ n_samples=10,
+ name="SKLearn",
+ ):
+ self._local_device = local_device
+ self._remote_device = remote_device
+ self.name = name
+ self.probs = probs
+ self.n_samples = n_samples
+ self.stddev = stddev
+
+ super(SKLearn, self).__init__(
+ name=name, samples_per_class=samples_per_class, averages=averages)
+
+ def _get_model(self):
+    raise NotImplementedError()
+
+ def _build_once(self, dataset, feature_transformer):
+ with tf.device(self._local_device):
+ tr_batch = dataset()
+ te_batch = dataset()
+ num_classes = tr_batch.label_onehot.shape.as_list()[1]
+ all_batch = utils.structure_map_multi(lambda x: tf.concat(x, 0),
+ [tr_batch, te_batch])
+ features = feature_transformer(all_batch)
+ trX, teX = utils.structure_map_split(lambda x: tf.split(x, 2, axis=0),
+ features)
+ trY = tf.to_int64(tr_batch.label)
+ trY_onehot = tf.to_int32(tr_batch.label_onehot)
+ teY = tf.to_int64(te_batch.label)
+ teY_shape = teY.shape.as_list()
+
+ def blackbox((trX, trY, teX, teY)):
+ trY = tf.to_int32(tf.rint(trY))
+ teY = tf.to_int32(tf.rint(teY))
+ tf_fn = build_fit(
+ self._local_device,
+ self._get_model,
+ num_classes=num_classes,
+ probs=self.probs)
+ if self.probs:
+ trP, teP, teP_probs = tf_fn(trX, trY, teX)
+ else:
+ trP, teP = tf_fn(trX, trY, teX)
+
+ teY.set_shape(teY_shape)
+ if self.probs:
+ onehot = tf.one_hot(teY, num_classes)
+ crossent = -tf.reduce_sum(onehot * teP_probs, [1])
+ return tf.reduce_mean(crossent)
+ else:
+        # Use the error rate as the loss if no surrogate is available.
+ return 1 - tf.reduce_mean(
+ tf.to_float(tf.equal(teY, tf.to_int32(teP))))
+
+ test_loss = blackbox((trX, tf.to_float(trY), teX, tf.to_float(teY)))
+
+ stats = {}
+
+ tf_fn = build_fit(
+ self._local_device,
+ self._get_model,
+ num_classes=num_classes,
+ probs=self.probs)
+ if self.probs:
+ trP, teP, teP_probs = tf_fn(trX, trY, teX)
+ else:
+ trP, teP = tf_fn(trX, trY, teX)
+ stats["%s/accuracy_train" % self.name] = tf.reduce_mean(
+ tf.to_float(tf.equal(tf.to_int32(trY), tf.to_int32(trP))))
+ stats["%s/accuracy_test" % self.name] = tf.reduce_mean(
+ tf.to_float(tf.equal(tf.to_int32(teY), tf.to_int32(teP))))
+ stats["%s/test_loss" % self.name] = test_loss
+ return test_loss, stats
+
+
+class LogisticRegression(SKLearn):
+
+ def __init__(self, C=1.0, name="LogisticRegression", probs=True, **kwargs):
+ self.C = C
+ super(LogisticRegression, self).__init__(name=name, probs=probs, **kwargs)
+
+ def _get_model(self):
+ return linear_model.LogisticRegression(C=self.C)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/meta_objective/utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/meta_objective/utils.py"
new file mode 100644
index 00000000..a29197d1
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/meta_objective/utils.py"
@@ -0,0 +1,78 @@
+# Copyright 2018 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import collections
+import numpy as np
+import sonnet as snt
+import tensorflow as tf
+
+from learning_unsupervised_learning import optimizers
+from learning_unsupervised_learning import utils
+from learning_unsupervised_learning import summary_utils
+from learning_unsupervised_learning import variable_replace
+
+class MultiTrialMetaObjective(snt.AbstractModule):
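+  """Base class that averages a meta-objective over several sampled trials.
+
+  Subclasses implement `_build_once`; `_build` evaluates it `averages` times
+  on (optionally class-balanced) batches, returns the mean loss, and emits
+  the averaged statistics as scalar summaries.
+  """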
+ def __init__(self, samples_per_class, averages, **kwargs):
+ self.samples_per_class = samples_per_class
+ self.averages = averages
+ self.dataset_map = {}
+
+ super(MultiTrialMetaObjective,
+ self).__init__(**kwargs)
+
+ def _build(self, dataset, feature_transformer):
+ if self.samples_per_class is not None:
+ if dataset not in self.dataset_map:
+        # Datasets must be created outside of while-loop frames.
+ with tf.control_dependencies(None):
+ self.dataset_map[dataset] = utils.sample_n_per_class(
+ dataset, self.samples_per_class)
+
+ dataset = self.dataset_map[dataset]
+
+ stats = collections.defaultdict(list)
+ losses = []
+ # TODO(lmetz) move this to ingraph control flow?
+ for _ in xrange(self.averages):
+ loss, stat = self._build_once(dataset, feature_transformer)
+ losses.append(loss)
+ for k, v in stat.items():
+ stats[k].append(v)
+ stats = {k: tf.add_n(v) / float(len(v)) for k, v in stats.items()}
+
+ for k, v in stats.items():
+ tf.summary.scalar(k, v)
+
+ return tf.add_n(losses) / float(len(losses))
+
+ def local_variables(self):
+ """List of variables that need to be updated for each evaluation.
+
+ These variables should not be stored on a parameter server and
+ should be reset every computation of a meta_objective loss.
+
+ Returns:
+ vars: list of tf.Variable
+ """
+ return list(
+ snt.get_variables_in_module(self, tf.GraphKeys.TRAINABLE_VARIABLES))
+
+ def remote_variables(self):
+ return []
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/optimizers.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/optimizers.py"
new file mode 100644
index 00000000..02c6106b
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/optimizers.py"
@@ -0,0 +1,133 @@
+# Copyright 2018 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+
+
+"""Optimizers for use in unrolled optimization.
+
+Each optimizer exposes a compute_updates function and keeps track of its own
+internal state. These functions can be used inside a tf.while_loop to perform
+multiple training steps per sess.run.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import abc
+import collections
+import tensorflow as tf
+import sonnet as snt
+
+from learning_unsupervised_learning import utils
+
+from tensorflow.python.framework import ops
+from tensorflow.python.ops import math_ops
+from tensorflow.python.ops import resource_variable_ops
+from tensorflow.python.training import optimizer
+from tensorflow.python.training import training_ops
+
+
+class UnrollableOptimizer(snt.AbstractModule):
+ """Interface for optimizers that can be used in unrolled computation.
+ apply_gradients is derrived from compute_update and assign_state.
+ """
+
+ def __init__(self, *args, **kwargs):
+ super(UnrollableOptimizer, self).__init__(*args, **kwargs)
+ self()
+
+ @abc.abstractmethod
+ def compute_updates(self, xs, gs, state=None):
+ """Compute next step updates for a given variable list and state.
+
+ Args:
+ xs: list of tensors
+ The "variables" to perform an update on.
+ Note these must match the same order for which get_state was originally
+ called.
+ gs: list of tensors
+ Gradients of `xs` with respect to some loss.
+ state: Any
+ Optimizer specific state to keep track of accumulators such as momentum
+ terms
+ """
+ raise NotImplementedError()
+
+ def _build(self):
+ pass
+
+ @abc.abstractmethod
+ def get_state(self, var_list):
+ """Get the state value associated with a list of tf.Variables.
+
+ This state is commonly going to be a NamedTuple that contains some
+ mapping between variables and the state associated with those variables.
+ This state could be a moving momentum variable tracked by the optimizer.
+
+ Args:
+ var_list: list of tf.Variable
+ Returns:
+ state: Any
+ Optimizer specific state
+ """
+ raise NotImplementedError()
+
+ def assign_state(self, state):
+ """Assigns the state to the optimizers internal variables.
+
+ Args:
+ state: Any
+ Returns:
+ op: tf.Operation
+ The operation that performs the assignment.
+ """
+ raise NotImplementedError()
+
+ def apply_gradients(self, grad_vars):
+ gradients, variables = zip(*grad_vars)
+ state = self.get_state(variables)
+ new_vars, new_state = self.compute_updates(variables, gradients, state)
+ assign_op = self.assign_state(new_state)
+ op = utils.assign_variables(variables, new_vars)
+ return tf.group(assign_op, op, name="apply_gradients")
+
+
+class UnrollableGradientDescentRollingOptimizer(UnrollableOptimizer):
+
+ def __init__(self,
+ learning_rate,
+ name="UnrollableGradientDescentRollingOptimizer"):
+ self.learning_rate = learning_rate
+ super(UnrollableGradientDescentRollingOptimizer, self).__init__(name=name)
+
+
+ def compute_updates(self, xs, gs, learning_rates, state):
+ new_vars = []
+ for x, g, lr in utils.eqzip(xs, gs, learning_rates):
+ if lr is None:
+ lr = self.learning_rate
+ if g is not None:
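+        # Shrink the parameter by (1 - lr) and take a gradient step; the
+        # shrinkage acts like a rolled-in weight decay.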
+ new_vars.append((x * (1 - lr) - g * lr))
+ else:
+ new_vars.append(x)
+ return new_vars, state
+
+ def get_state(self, var_list):
+ return tf.constant(0.0)
+
+ def assign_state(self, state, var_list=None):
+ return tf.no_op()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/run_eval.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/run_eval.py"
new file mode 100644
index 00000000..dcb2529d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/run_eval.py"
@@ -0,0 +1,122 @@
+# Copyright 2018 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+""" Script that iteratively applies the unsupervised update rule and evaluates the
+
+meta-objective performance.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from absl import flags
+from absl import app
+
+from learning_unsupervised_learning import evaluation
+from learning_unsupervised_learning import datasets
+from learning_unsupervised_learning import architectures
+from learning_unsupervised_learning import summary_utils
+from learning_unsupervised_learning import meta_objective
+
+import tensorflow as tf
+import sonnet as snt
+
+from tensorflow.contrib.framework.python.framework import checkpoint_utils
+
+flags.DEFINE_string("checkpoint_dir", None, "Dir to load pretrained update rule from")
+flags.DEFINE_string("train_log_dir", None, "Training log directory")
+
+FLAGS = flags.FLAGS
+
+
+def train(train_log_dir, checkpoint_dir, eval_every_n_steps=10, num_steps=3000):
+ dataset_fn = datasets.mnist.TinyMnist
+ w_learner_fn = architectures.more_local_weight_update.MoreLocalWeightUpdateWLearner
+ theta_process_fn = architectures.more_local_weight_update.MoreLocalWeightUpdateProcess
+
+ meta_objectives = []
+ meta_objectives.append(
+ meta_objective.linear_regression.LinearRegressionMetaObjective)
+ meta_objectives.append(meta_objective.sklearn.LogisticRegression)
+
+ checkpoint_vars, train_one_step_op, (
+ base_model, dataset) = evaluation.construct_evaluation_graph(
+ theta_process_fn=theta_process_fn,
+ w_learner_fn=w_learner_fn,
+ dataset_fn=dataset_fn,
+ meta_objectives=meta_objectives)
+ batch = dataset()
+ pre_logit, outputs = base_model(batch)
+
+ global_step = tf.train.get_or_create_global_step()
+ var_list = list(
+ snt.get_variables_in_module(base_model, tf.GraphKeys.TRAINABLE_VARIABLES))
+
+ tf.logging.info("all vars")
+ for v in tf.all_variables():
+ tf.logging.info(" %s" % str(v))
+ global_step = tf.train.get_global_step()
+ accumulate_global_step = global_step.assign_add(1)
+ reset_global_step = global_step.assign(0)
+
+ train_op = tf.group(
+ train_one_step_op, accumulate_global_step, name="train_op")
+
+ summary_op = tf.summary.merge_all()
+
+ file_writer = summary_utils.LoggingFileWriter(train_log_dir, regexes=[".*"])
+ if checkpoint_dir:
+ str_var_list = checkpoint_utils.list_variables(checkpoint_dir)
+ name_to_v_map = {v.op.name: v for v in tf.all_variables()}
+ var_list = [
+ name_to_v_map[vn] for vn, _ in str_var_list if vn in name_to_v_map
+ ]
+ saver = tf.train.Saver(var_list)
+ missed_variables = [
+ v.op.name for v in set(
+ snt.get_variables_in_scope("LocalWeightUpdateProcess",
+ tf.GraphKeys.GLOBAL_VARIABLES)) -
+ set(var_list)
+ ]
+ assert len(missed_variables) == 0, "Missed a theta variable."
+
+ hooks = []
+
+ with tf.train.SingularMonitoredSession(master="", hooks=hooks) as sess:
+
+    # The global step is restored from the eval job's checkpoint, or is zero
+    # for a fresh run.
+ step = sess.run(global_step)
+
+ if step == 0 and checkpoint_dir:
+ tf.logging.info("force restore")
+ saver.restore(sess, checkpoint_dir)
+ tf.logging.info("force restore done")
+ sess.run(reset_global_step)
+ step = sess.run(global_step)
+
+ while step < num_steps:
+ if step % eval_every_n_steps == 0:
+ s, _, step = sess.run([summary_op, train_op, global_step])
+ file_writer.add_summary(s, step)
+ else:
+ _, step = sess.run([train_op, global_step])
+
+
+def main(argv):
+ train(FLAGS.train_log_dir, FLAGS.checkpoint_dir)
+
+
+if __name__ == "__main__":
+ app.run(main)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/summary_utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/summary_utils.py"
new file mode 100644
index 00000000..d5c0fdd9
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/summary_utils.py"
@@ -0,0 +1,181 @@
+# Copyright 2018 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+
+
+import collections
+import functools
+import threading
+import tensorflow as tf
+import matplotlib
+import numpy as np
+import time
+import re
+import math
+matplotlib.use("Agg")
+
+import matplotlib.pyplot as plt
+import scipy.signal
+
+from tensorflow.python.util import tf_should_use
+from tensorflow.contrib.summary import summary_ops
+from tensorflow.python.ops import summary_op_util
+from tensorflow.contrib.summary import gen_summary_ops
+
+_DEBUG_DISABLE_SUMMARIES=False
+
+class LoggingFileWriter(tf.summary.FileWriter):
+ """A FileWriter that also logs things out.
+
+ This is entirely for ease of debugging / not having to open up Tensorboard
+ a lot.
+ """
+
+ def __init__(self, logdir, regexes=[], **kwargs):
+ self.regexes = regexes
+ super(LoggingFileWriter, self).__init__(logdir, **kwargs)
+
+ def add_summary(self, summary, global_step):
+ if type(summary) != tf.Summary:
+ summary_p = tf.Summary()
+ summary_p.ParseFromString(summary)
+ summary = summary_p
+ for s in summary.value:
+ for exists in [re.match(p, s.tag) for p in self.regexes]:
+ if exists is not None:
+ tf.logging.info("%d ] %s : %f", global_step, s.tag, s.simple_value)
+ break
+ super(LoggingFileWriter, self).add_summary(summary, global_step)
+
+
+def image_grid(images, max_grid_size=4, border=1):
+ """Given images and N, return first N^2 images as an NxN image grid.
+
+ Args:
+ images: a `Tensor` of size [batch_size, height, width, channels]
+    max_grid_size: Maximum image grid height/width
+    border: Number of zero-padding pixels added below and to the right of
+      each image.
+
+ Returns:
+ Single image batch, of dim [1, h*n, w*n, c]
+ """
+ batch_size = images.shape.as_list()[0]
+ to_pad = int((np.ceil(np.sqrt(batch_size)))**2 - batch_size)
+ images = tf.pad(images, [[0, to_pad], [0, border], [0, border], [0, 0]])
+
+ batch_size = images.shape.as_list()[0]
+ grid_size = min(int(np.sqrt(batch_size)), max_grid_size)
+ assert images.shape.as_list()[0] >= grid_size * grid_size
+
+ # If we have a depth channel
+ if images.shape.as_list()[-1] == 4:
+ images = images[:grid_size * grid_size, :, :, 0:3]
+ depth = tf.image.grayscale_to_rgb(images[:grid_size * grid_size, :, :, 3:4])
+
+ images = tf.reshape(images, [-1, images.shape.as_list()[2], 3])
+ split = tf.split(images, grid_size, axis=0)
+ depth = tf.reshape(depth, [-1, images.shape.as_list()[2], 3])
+ depth_split = tf.split(depth, grid_size, axis=0)
+ grid = tf.concat(split + depth_split, 1)
+ return tf.expand_dims(grid, 0)
+ else:
+ images = images[:grid_size * grid_size, :, :, :]
+ images = tf.reshape(
+ images, [-1, images.shape.as_list()[2],
+ images.shape.as_list()[3]])
+ split = tf.split(value=images, num_or_size_splits=grid_size, axis=0)
+ grid = tf.concat(split, 1)
+ return tf.expand_dims(grid, 0)
+
+
+def first_layer_weight_image(weight, shape):
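+  """Visualize first-layer weights as an image grid of input-shaped filters.
+
+  Args:
+    weight: 2D weight tensor of shape [prod(shape), n_out].
+    shape: spatial shape of a single input, e.g. [height, width, channels].
+
+  Returns:
+    A single-image batch (shape [1, H, W, C]) suitable for tf.summary.image.
+  """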
+ weight_image = tf.reshape(weight,
+ shape + [tf.identity(weight).shape.as_list()[1]])
+ # [winx, winy, wout]
+ mean, var = tf.nn.moments(weight_image, [0,1,2], keep_dims=True)
+ #mean, var = tf.nn.moments(weight_image, [0,1], keep_dims=True)
+ weight_image = (weight_image - mean) / tf.sqrt(var + 1e-5)
+ weight_image = (weight_image + 1.0) / 2.0
+ weight_image = tf.clip_by_value(weight_image, 0, 1)
+ weight_image = tf.transpose(weight_image, (3, 0, 1, 2))
+ grid = image_grid(weight_image, max_grid_size=10)
+ return grid
+
+def inner_layer_weight_image(weight):
+ """Visualize a weight matrix of an inner layer.
+ Add padding to make it square, then visualize as a gray scale image
+ """
+ weight = tf.identity(weight) # turn into a tensor
+ weight = weight / (tf.reduce_max(tf.abs(weight), [0], keep_dims=True))
+ weight = tf.reshape(weight, [1]+weight.shape.as_list() + [1])
+ return weight
+
+
+def activation_image(activations, label_onehot):
+ """Make a row sorted by class for each activation. Put a black line around the activations."""
+ labels = tf.argmax(label_onehot, axis=1)
+ _, n_classes = label_onehot.shape.as_list()
+ mean, var = tf.nn.moments(activations, [0, 1])
+ activations = (activations - mean)/tf.sqrt(var+1e-5)
+
+ activations = tf.clip_by_value(activations, -1, 1)
+ activations = (activations + 1.0) / 2.0 # shift to [0, 1]
+
+ canvas = []
+ for i in xrange(n_classes):
+ inds = tf.where(tf.equal(labels, i))
+
+ def _gather():
+ return tf.squeeze(tf.gather(activations, inds), 1)
+
+ def _empty():
+ return tf.zeros([0, activations.shape.as_list()[1]], dtype=tf.float32)
+
+ assert inds.shape.as_list()[0] is None
+ x = tf.cond(tf.equal(tf.shape(inds)[0], 0), _empty, _gather)
+ canvas.append(x)
+ canvas.append(tf.zeros([1, activations.shape.as_list()[1]]))
+ canvas = tf.concat(canvas, 0)
+ canvas = tf.reshape(canvas, [1, activations.shape.as_list()[0]+n_classes, canvas.shape.as_list()[1], 1])
+ return canvas
+
+
+def sorted_images(images, label_onehot):
+ # images is [bs, x, y, c]
+ labels = tf.argmax(label_onehot, axis=1)
+ _, n_classes = label_onehot.shape.as_list()
+ to_stack = []
+ for i in xrange(n_classes):
+ inds = tf.where(tf.equal(labels, i))
+
+ def _gather():
+ return tf.squeeze(tf.gather(images, inds), 1)
+
+ def _empty():
+ return tf.zeros([0] + images.shape.as_list()[1:], dtype=tf.float32)
+
+ assert inds.shape.as_list()[0] is None
+ x = tf.cond(tf.equal(tf.shape(inds)[0], 0), _empty, _gather)
+ to_stack.append(x)
+ # pad / trim all up to 10.
+ padded = []
+ for t in to_stack:
+ n_found = tf.shape(t)[0]
+ pad = tf.pad(t[0:10], tf.stack([tf.stack([0,tf.maximum(0, 10-n_found)]), [0,0], [0,0], [0,0]]))
+ padded.append(pad)
+
+ xs = [tf.concat(tf.split(p, 10), axis=1) for p in padded]
+ ys = tf.concat(xs, axis=2)
+ ys = tf.cast(tf.clip_by_value(ys, 0., 1.) * 255., tf.uint8)
+ return ys
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/utils.py"
new file mode 100644
index 00000000..ca56ca93
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/utils.py"
@@ -0,0 +1,287 @@
+# Copyright 2018 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Utilities.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import contextlib
+import tensorflow as tf
+import sonnet as snt
+import itertools
+import functools
+
+from tensorflow.core.framework import node_def_pb2
+from tensorflow.python.framework import device as pydev
+from tensorflow.python.framework import errors
+from tensorflow.python.ops import variable_scope as variable_scope_ops
+from sonnet.python.modules import util as snt_util
+
+from tensorflow.python.util import nest
+
+
+def eqzip(*args):
+ """Zip but raises error if lengths don't match.
+
+ Args:
+ *args: list of lists or tuples
+ Returns:
+ list: the result of zip
+ Raises:
+ ValueError: when the lengths don't match
+ """
+
+ sizes = [len(x) for x in args]
+ if not all([sizes[0] == x for x in sizes]):
+ raise ValueError("Lists are of different sizes. \n %s"%str(sizes))
+ return zip(*args)
+
+
+@contextlib.contextmanager
+def assert_no_new_variables():
+ """Ensure that no tf.Variables are constructed inside the context.
+
+ Yields:
+ None
+ Raises:
+ ValueError: if there is a variable created.
+ """
+ num_vars = len(tf.global_variables())
+ old_variables = tf.global_variables()
+ yield
+ if len(tf.global_variables()) != num_vars:
+ new_vars = set(tf.global_variables()) - set(old_variables)
+ tf.logging.error("NEW VARIABLES CREATED")
+ tf.logging.error(10*"=")
+ for v in new_vars:
+ tf.logging.error(v)
+
+ raise ValueError("Variables created inside an "
+ "assert_no_new_variables context")
+ if old_variables != tf.global_variables():
+ raise ValueError("Variables somehow changed inside an "
+ "assert_no_new_variables context."
+ "This means something modified the tf.global_variables()")
+
+
+def get_variables_in_modules(module_list):
+ var_list = []
+ for m in module_list:
+ var_list.extend(snt.get_variables_in_module(m))
+ return var_list
+
+
+def state_barrier_context(state):
+ """Return a context manager that prevents interior ops from running
+ unless the whole state has been computed.
+
+ This is to prevent assign race conditions.
+ """
+ tensors = [x for x in nest.flatten(state) if type(x) == tf.Tensor]
+ tarray = [x.flow for x in nest.flatten(state) if hasattr(x, "flow")]
+ return tf.control_dependencies(tensors + tarray)
+
+
+def _identity_fn(tf_entity):
+ if hasattr(tf_entity, "identity"):
+ return tf_entity.identity()
+ else:
+ return tf.identity(tf_entity)
+
+
+def state_barrier_result(state):
+ """Return the same state, but with a control dependency to prevent it from
+ being partially computed
+ """
+ with state_barrier_context(state):
+ return nest.map_structure(_identity_fn, state)
+
+
+def train_iterator(num_iterations):
+ """Iterator that returns an index of the current step.
+ This iterator runs forever if num_iterations is None
+ otherwise it runs for some fixed amount of steps.
+ """
+ if num_iterations is None:
+ return itertools.count()
+ else:
+ return xrange(num_iterations)
+
+
+def print_op(op, msg):
+ """Print a string and return an op wrapped in a control dependency to make
+ sure it ran."""
+ print_op = tf.Print(tf.constant(0), [tf.constant(0)], msg)
+ return tf.group(op, print_op)
+
+
+class MultiQueueRunner(tf.train.QueueRunner):
+ """A QueueRunner with multiple queues """
+ def __init__(self, queues, enqueue_ops):
+ close_op = tf.group(* [q.close() for q in queues])
+ cancel_op = tf.group(
+ * [q.close(cancel_pending_enqueues=True) for q in queues])
+ queue_closed_exception_types = (errors.OutOfRangeError,)
+
+ enqueue_op = tf.group(*enqueue_ops, name="multi_enqueue")
+
+ super(MultiQueueRunner, self).__init__(
+ queues[0],
+ enqueue_ops=[enqueue_op],
+ close_op=close_op,
+ cancel_op=cancel_op,
+ queue_closed_exception_types=queue_closed_exception_types)
+
+
+# This function is not elegant, but of the many other approaches tried, this
+# is the only one that did not incur significant overhead or hit obscure
+# TensorFlow bugs.
+def sample_n_per_class(dataset, samples_per_class):
+ """Create a new callable / dataset object that returns batches of each with
+ samples_per_class per label.
+
+ Args:
+ dataset: fn
+ samples_per_class: int
+ Returns:
+ function, [] -> batch where batch is the same type as the return of
+ dataset().
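+
+  Example (illustrative; assumes `dataset` yields batches with `.label` and
+  `.label_onehot` fields, as the datasets used here do):
+    balanced_fn = sample_n_per_class(dataset, samples_per_class=10)
+    batch = balanced_fn()  # num_classes * 10 examples, evenly split per class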
+ """
+
+ with tf.control_dependencies(None), tf.name_scope(None):
+ with tf.name_scope("queue_runner/sample_n_per_class"):
+ batch = dataset()
+ num_classes = batch.label_onehot.shape.as_list()[1]
+ batch_size = num_classes * samples_per_class
+
+ flatten = nest.flatten(batch)
+ queues = []
+ enqueue_ops = []
+ capacity = samples_per_class * 20
+ for i in xrange(num_classes):
+ queue = tf.FIFOQueue(
+ capacity=capacity,
+ shapes=[f.shape.as_list()[1:] for f in flatten],
+ dtypes=[f.dtype for f in flatten])
+ queues.append(queue)
+
+ idx = tf.where(tf.equal(batch.label, i))
+ sub_batch = []
+ to_enqueue = []
+ for elem in batch:
+ new_e = tf.gather(elem, idx)
+ new_e = tf.squeeze(new_e, 1)
+ to_enqueue.append(new_e)
+
+ remaining = (capacity - queue.size())
+ to_add = tf.minimum(tf.shape(idx)[0], remaining)
+
+ def _enqueue():
+ return queue.enqueue_many([t[:to_add] for t in to_enqueue])
+
+ enqueue_op = tf.cond(
+ tf.equal(to_add, 0), tf.no_op, _enqueue)
+ enqueue_ops.append(enqueue_op)
+
+      # This has caused many deadlocks / issues. This is some logging to at
+      # least shed light on what is going on.
+ print_lam = lambda: tf.Print(tf.constant(0.0), [q.size() for q in queues], "MultiQueueRunner queues status. Has capacity %d"%capacity)
+ some_percent_of_time = tf.less(tf.random_uniform([]), 0.0005)
+ maybe_print = tf.cond(some_percent_of_time, print_lam, lambda: tf.constant(0.0))
+ with tf.control_dependencies([maybe_print]):
+ enqueue_ops = [tf.group(e) for e in enqueue_ops]
+ qr = MultiQueueRunner(queues=queues, enqueue_ops=enqueue_ops)
+ tf.train.add_queue_runner(qr)
+
+ def dequeue_batch():
+ with tf.name_scope("sample_n_per_batch/dequeue/"):
+ entries = []
+ for q in queues:
+ entries.append(q.dequeue_many(samples_per_class))
+
+ flat_batch = [tf.concat(x, 0) for x in zip(*entries)]
+ idx = tf.random_shuffle(tf.range(batch_size))
+ flat_batch = [tf.gather(f, idx, axis=0) for f in flat_batch]
+ return nest.pack_sequence_as(batch, flat_batch)
+
+ return dequeue_batch
+
+def structure_map_multi(func, values):
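+  """Apply `func` to tuples of corresponding leaves of identically-structured
+  nests, packing the results into the structure of `values[0]`.
+  """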
+ all_values = [nest.flatten(v) for v in values]
+ rets = []
+ for pair in zip(*all_values):
+ rets.append(func(pair))
+ return nest.pack_sequence_as(values[0], rets)
+
+def structure_map_split(func, value):
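+  """Apply `func` (which returns an n-tuple per leaf) to a nest.
+
+  Returns a list of n nests, each with the same structure as `value`.
+  """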
+ vv = nest.flatten(value)
+ rets = []
+ for v in vv:
+ rets.append(func(v))
+ return [nest.pack_sequence_as(value, r) for r in zip(*rets)]
+
+def assign_variables(targets, values):
+ return tf.group(*[t.assign(v) for t,v in eqzip(targets, values)],
+ name="assign_variables")
+
+
+def create_variables_in_class_scope(method):
+ """Force the variables constructed in this class to live in the sonnet module.
+ Wraps a method on a sonnet module.
+
+ For example the following will create two different variables.
+ ```
+ class Mod(snt.AbstractModule):
+ @create_variables_in_class_scope
+ def dynamic_thing(self, input, name):
+      return snt.Linear(16, name=name)(input)
+ mod.dynamic_thing(x, name="module_nameA")
+ mod.dynamic_thing(x, name="module_nameB")
+ # reuse
+ mod.dynamic_thing(y, name="module_nameA")
+ ```
+ """
+ @functools.wraps(method)
+ def wrapper(obj, *args, **kwargs):
+ def default_context_manager(reuse=None):
+ variable_scope = obj.variable_scope
+ return tf.variable_scope(variable_scope, reuse=reuse)
+
+ variable_scope_context_manager = getattr(obj, "_enter_variable_scope",
+ default_context_manager)
+ graph = tf.get_default_graph()
+
+ # Temporarily enter the variable scope to capture it
+ with variable_scope_context_manager() as tmp_variable_scope:
+ variable_scope = tmp_variable_scope
+
+ with variable_scope_ops._pure_variable_scope(
+ variable_scope, reuse=tf.AUTO_REUSE) as pure_variable_scope:
+
+ name_scope = variable_scope.original_name_scope
+ if name_scope[-1] != "/":
+ name_scope += "/"
+
+ with tf.name_scope(name_scope):
+ sub_scope = snt_util.to_snake_case(method.__name__)
+ with tf.name_scope(sub_scope) as scope:
+ out_ops = method(obj, *args, **kwargs)
+ return out_ops
+
+ return wrapper
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/variable_replace.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/variable_replace.py"
new file mode 100644
index 00000000..ebfbeadc
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/learning_unsupervised_learning/variable_replace.py"
@@ -0,0 +1,112 @@
+# Copyright 2018 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+
+from __future__ import absolute_import
+from __future__ import division
+
+import tensorflow as tf
+from contextlib import contextmanager
+
+from tensorflow.python.ops import variable_scope
+
+# sanity global state to ensure non recursive.
+_is_variable_replacing = [False]
+
+def in_variable_replace_scope():
+ return _is_variable_replacing[0]
+
+@contextmanager
+def variable_replace(replacements, no_new=True):
+ """ A context manager that replaces variables.
+
+ This is a context manager that replaces all calls to
+ get_variable with the variable in replacements.
+ This function does not support recursive application.
+
+ Args:
+ replacements: dict
+ dictionary mapping a variable to replace (the key), with
+ the variable one wants to replace this variable with (the value).
+ no_new: bool
+ raise an error if variables were created.
+ This is for sanity checking.
+ Raises:
+    ValueError: if a new variable is created or not all replacements are used.
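+
+  Example (illustrative; `loss_fn`, `var` and `new_tensor` are assumptions,
+  not part of this module):
+    with variable_replace({var: new_tensor}):
+      # get_variable calls that would return `var` now return `new_tensor`.
+      replaced_loss = loss_fn()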
+ """
+ # TODO(lmetz) This function is a bit scary, as it relies on monkey patching
+ # the call to get_variable. Ideally this can be done with variable_scope's
+ # custom_getter attribute, but when initially writing this that was not
+  # available.
+
+ replacements = {k: v for k, v in replacements.items() if not k == v}
+
+ init_vars = tf.trainable_variables()
+ old_get_variable = variable_scope.get_variable
+ old_tf_get_variable = tf.get_variable
+
+ names_replace = {}
+ has_replaced_names = []
+ tf.logging.vlog(2, "Trying to replace")
+ for k, v in replacements.items():
+ tf.logging.vlog(2, k.name + " >> " + v.name)
+ tf.logging.vlog(2, "===")
+
+ for k, v in replacements.items():
+ strip_name = k.name.replace("/read:0", "")
+ strip_name = strip_name.replace(":0", "")
+ names_replace[strip_name] = v
+ # TODO(lmetz) is there a cleaner way to do this?
+ def new_get_variable(name, *args, **kwargs):
+ #print "Monkeypatch get variable run with name:", name
+ n = tf.get_variable_scope().name + "/" + name
+ #print "Monkeypatch get variable run with name:", n
+ if n in names_replace:
+ has_replaced_names.append(n)
+ return names_replace[n]
+ else:
+ return old_get_variable(name, *args, **kwargs)
+
+ # perform the monkey patch
+ if _is_variable_replacing[0] == True:
+ raise ValueError("No recursive calling to variable replace allowed.")
+
+ variable_scope.get_variable = new_get_variable
+ tf.get_variable = new_get_variable
+
+ _is_variable_replacing[0] = True
+
+ yield
+
+ if set(has_replaced_names) != set(names_replace.keys()):
+ print "Didn't use all replacements"
+ print "replaced variables that are not requested??"
+ print "==="
+ for n in list(set(has_replaced_names) - set(names_replace.keys())):
+ print n
+ print "Missed replacing variables"
+ print "==="
+ for n in list(set(names_replace.keys()) - set(has_replaced_names)):
+ print n, "==>", names_replace[n].name
+ raise ValueError("Fix this -- see stderr")
+
+ # undo the monkey patch
+ tf.get_variable = old_tf_get_variable
+ variable_scope.get_variable = old_get_variable
+
+ _is_variable_replacing[0] = False
+
+ final_vars = tf.trainable_variables()
+ assert set(init_vars) == set(final_vars), "trainable variables changed"
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lexnet_nc/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lexnet_nc/README.md"
new file mode 100644
index 00000000..9d6fbd57
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lexnet_nc/README.md"
@@ -0,0 +1,138 @@
+# LexNET for Noun Compound Relation Classification
+
+This is a [Tensorflow](http://www.tensorflow.org/) implementation of the LexNET
+algorithm for relation classification, applied here to classifying the
+relationships that hold between noun compounds:
+
+* *olive oil* is oil that is *made from* olives
+* *cooking oil* is oil that is *used for* cooking
+* *motor oil* is oil that is *contained in* a motor
+
+The model is a supervised classifier that predicts the relationship that holds
+between the constituents of a two-word noun compound using:
+
+1. A neural "paraphrase" of each syntactic dependency path that connects the
+ constituents in a large corpus. For example, given a sentence like *This fine
+ oil is made from first-press olives*, the dependency path is something like
+ `oil from POBJ> olive`.
+2. The distributional information provided by the individual words; i.e., the
+   word embeddings of the two constituents.
+3. The distributional signal provided by the compound itself; i.e., the
+ embedding of the noun compound in context.
+
+The model includes several variants: the *path-based model* uses (1) alone, the
+*distributional model* uses (2) alone, and the *integrated model* uses (1) and
+(2). The *distributional-nc model* and the *integrated-nc* model each add (3).
+
+Training a model requires the following:
+
+1. A collection of noun compounds that have been labeled using a *relation
+ inventory*. The inventory describes the specific relationships that you'd
+ like the model to differentiate (e.g. *part of* versus *composed of* versus
+ *purpose*), and generally may consist of tens of classes.
+ You can download the dataset used in the paper from [here](https://vered1986.github.io/papers/Tratz2011_Dataset.tar.gz).
+2. You'll need a collection of word embeddings: the path-based model uses the
+ word embeddings as part of the path representation, and the distributional
+ models use the word embeddings directly as prediction features.
+3. The path-based model requires a collection of syntactic dependency parses
+ that connect the constituents for each noun compound.
+
+At the moment, this repository does not contain the tools for generating this
+data, but we will provide references to existing datasets and plan to add tools
+to generate the data in the future.
+
+# Contents
+
+The following source code is included here:
+
+* `learn_path_embeddings.py` is a script that trains and evaluates a path-based
+ model to predict a noun-compound relationship given labeled noun-compounds and
+ dependency parse paths.
+* `learn_classifier.py` is a script that trains and evaluates a classifier based
+ on any combination of paths, word embeddings, and noun-compound embeddings.
+* `get_indicative_paths.py` is a script that generates the most indicative
+ syntactic dependency paths for a particular relationship.
+
+# Dependencies
+
+* [TensorFlow](http://www.tensorflow.org/): see detailed installation
+ instructions at that site.
+* [SciKit Learn](http://scikit-learn.org/): you can probably just install this
+ with `pip install sklearn`.
+
+# Creating the Model
+
+This section describes the necessary steps that you must follow to reproduce the
+results in the paper.
+
+## Generate/Download Path Data
+
+TBD! Our plan is to make the aggregate path data available that was used to
+train path embeddings and classifiers; however, this will be released
+separately.
+
+## Generate/Download Embedding Data
+
+TBD! While we used the standard Glove vectors for the relata embeddings, the NC
+embeddings were generated separately. Our plan is to make that data available,
+but it will be released separately.
+
+## Create Path Embeddings
+
+Create the path embeddings using `learn_path_embeddings.py`. This shell script
+fragment will iterate through each dataset, split, and corpus to generate path
+embeddings for each.
+
+    for DATASET in tratz/fine_grained tratz/coarse_grained ; do
+      for SPLIT in random lexical_head lexical_mod lexical_full ; do
+        for CORPUS in wiki_gigawords ; do
+          python learn_path_embeddings.py \
+            --dataset_dir ~/lexnet/datasets \
+            --dataset "${DATASET}" \
+            --corpus "${SPLIT}/${CORPUS}" \
+            --embeddings_base_path ~/lexnet/embeddings \
+            --logdir /tmp/learn_path_embeddings
+        done
+      done
+    done
+
+The path embeddings will be placed in the directory specified by
+`--embeddings_base_path`.
+
+## Train classifiers
+
+Train classifiers and evaluate on the validation and test data using the
+`learn_classifier.py` script. This shell script fragment will iterate through
+each dataset, split, corpus, and model type to train and evaluate classifiers.
+
+    LOGDIR=/tmp/learn_classifier
+    for DATASET in tratz/fine_grained tratz/coarse_grained ; do
+      for SPLIT in random lexical_head lexical_mod lexical_full ; do
+        for CORPUS in wiki_gigawords ; do
+          for MODEL in dist dist-nc path integrated integrated-nc ; do
+            # Filename for the log that will contain the classifier results.
+            LOGFILE=$(echo "${DATASET}.${SPLIT}.${CORPUS}.${MODEL}.log" | sed -e "s,/,.,g")
+            python learn_classifier.py \
+              --dataset_dir ~/lexnet/datasets \
+              --dataset "${DATASET}" \
+              --corpus "${SPLIT}/${CORPUS}" \
+              --embeddings_base_path ~/lexnet/embeddings \
+              --logdir ${LOGDIR} \
+              --input "${MODEL}" > "${LOGDIR}/${LOGFILE}"
+          done
+        done
+      done
+    done
+
+The log file will contain the final performance (precision, recall, F1) on the
+train, dev, and test sets, and will include a confusion matrix for each.
+
+# Contact
+
+If you have any questions, issues, or suggestions, feel free to contact either
+@vered1986 or @waterson.
+
+If you use this code for any published research, please include the following citation:
+
+Olive Oil Is Made of Olives, Baby Oil Is Made for Babies: Interpreting Noun Compounds Using Paraphrases in a Neural Model.
+Vered Shwartz and Chris Waterson. NAACL 2018. [link](https://arxiv.org/pdf/1803.08073.pdf).
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lexnet_nc/get_indicative_paths.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lexnet_nc/get_indicative_paths.py"
new file mode 100644
index 00000000..f8b34cca
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lexnet_nc/get_indicative_paths.py"
@@ -0,0 +1,111 @@
+#!/usr/bin/env python
+# Copyright 2017, 2018 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Extracts paths that are indicative of each relation."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+
+import tensorflow as tf
+
+from . import path_model
+from . import lexnet_common
+
+tf.flags.DEFINE_string(
+ 'dataset_dir', 'datasets',
+ 'Dataset base directory')
+
+tf.flags.DEFINE_string(
+ 'dataset',
+ 'tratz/fine_grained',
+ 'Subdirectory containing the corpus directories: '
+ 'subdirectory of dataset_dir')
+
+tf.flags.DEFINE_string(
+ 'corpus', 'random/wiki',
+ 'Subdirectory containing the corpus and split: '
+ 'subdirectory of dataset_dir/dataset')
+
+tf.flags.DEFINE_string(
+ 'embeddings_base_path', 'embeddings',
+ 'Embeddings base directory')
+
+tf.flags.DEFINE_string(
+ 'logdir', 'logdir',
+ 'Directory of model output files')
+
+tf.flags.DEFINE_integer(
+ 'top_k', 20, 'Number of top paths to extract')
+
+tf.flags.DEFINE_float(
+ 'threshold', 0.8, 'Threshold above which to consider paths as indicative')
+
+FLAGS = tf.flags.FLAGS
+
+
+def main(_):
+ hparams = path_model.PathBasedModel.default_hparams()
+
+ # First things first. Load the path data.
+ path_embeddings_file = 'path_embeddings/{dataset}/{corpus}'.format(
+ dataset=FLAGS.dataset,
+ corpus=FLAGS.corpus)
+
+ path_dim = (hparams.lemma_dim + hparams.pos_dim +
+ hparams.dep_dim + hparams.dir_dim)
+
+ path_embeddings, path_to_index = path_model.load_path_embeddings(
+ os.path.join(FLAGS.embeddings_base_path, path_embeddings_file),
+ path_dim)
+
+ # Load and count the classes so we can correctly instantiate the model.
+ classes_filename = os.path.join(
+ FLAGS.dataset_dir, FLAGS.dataset, 'classes.txt')
+
+ with open(classes_filename) as f_in:
+ classes = f_in.read().splitlines()
+
+ hparams.num_classes = len(classes)
+
+ # We need the word embeddings to instantiate the model, too.
+ print('Loading word embeddings...')
+ lemma_embeddings = lexnet_common.load_word_embeddings(
+ FLAGS.embeddings_base_path, hparams.lemma_embeddings_file)
+
+ # Instantiate the model.
+ with tf.Graph().as_default():
+ with tf.variable_scope('lexnet'):
+ instance = tf.placeholder(dtype=tf.string)
+ model = path_model.PathBasedModel(
+ hparams, lemma_embeddings, instance)
+
+ with tf.Session() as session:
+ model_dir = '{logdir}/results/{dataset}/path/{corpus}'.format(
+ logdir=FLAGS.logdir,
+ dataset=FLAGS.dataset,
+ corpus=FLAGS.corpus)
+
+ saver = tf.train.Saver()
+ saver.restore(session, os.path.join(model_dir, 'best.ckpt'))
+
+ path_model.get_indicative_paths(
+ model, session, path_to_index, path_embeddings, classes,
+ model_dir, FLAGS.top_k, FLAGS.threshold)
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lexnet_nc/learn_classifier.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lexnet_nc/learn_classifier.py"
new file mode 100644
index 00000000..ec284029
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lexnet_nc/learn_classifier.py"
@@ -0,0 +1,223 @@
+#!/usr/bin/env python
+# Copyright 2017, 2018 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Trains the integrated LexNET classifier."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+
+import lexnet_common
+import lexnet_model
+import path_model
+from sklearn import metrics
+import tensorflow as tf
+
+tf.flags.DEFINE_string(
+ 'dataset_dir', 'datasets',
+ 'Dataset base directory')
+
+tf.flags.DEFINE_string(
+ 'dataset', 'tratz/fine_grained',
+ 'Subdirectory containing the corpus directories: '
+ 'subdirectory of dataset_dir')
+
+tf.flags.DEFINE_string(
+ 'corpus', 'wiki/random',
+ 'Subdirectory containing the corpus and split: '
+ 'subdirectory of dataset_dir/dataset')
+
+tf.flags.DEFINE_string(
+ 'embeddings_base_path', 'embeddings',
+ 'Embeddings base directory')
+
+tf.flags.DEFINE_string(
+ 'logdir', 'logdir',
+ 'Directory of model output files')
+
+tf.flags.DEFINE_string('hparams', '', 'Hyper-parameters')
+
+tf.flags.DEFINE_string(
+ 'input', 'integrated',
+    'The model (dist/dist-nc/path/integrated/integrated-nc)')
+
+FLAGS = tf.flags.FLAGS
+
+
+def main(_):
+ # Pick up any one-off hyper-parameters.
+ hparams = lexnet_model.LexNETModel.default_hparams()
+ hparams.corpus = FLAGS.corpus
+ hparams.input = FLAGS.input
+ hparams.path_embeddings_file = 'path_embeddings/%s/%s' % (
+ FLAGS.dataset, FLAGS.corpus)
+
+ input_dir = hparams.input if hparams.input != 'path' else 'path_classifier'
+
+ # Set the number of classes
+ classes_filename = os.path.join(
+ FLAGS.dataset_dir, FLAGS.dataset, 'classes.txt')
+ with open(classes_filename) as f_in:
+ classes = f_in.read().splitlines()
+
+ hparams.num_classes = len(classes)
+ print('Model will predict into %d classes' % hparams.num_classes)
+
+ # Get the datasets
+ train_set, val_set, test_set = (
+ os.path.join(
+ FLAGS.dataset_dir, FLAGS.dataset, FLAGS.corpus,
+ filename + '.tfrecs.gz')
+ for filename in ['train', 'val', 'test'])
+
+ print('Running with hyper-parameters: {}'.format(hparams))
+
+ # Load the instances
+ print('Loading instances...')
+ opts = tf.python_io.TFRecordOptions(
+ compression_type=tf.python_io.TFRecordCompressionType.GZIP)
+ train_instances = list(tf.python_io.tf_record_iterator(train_set, opts))
+ val_instances = list(tf.python_io.tf_record_iterator(val_set, opts))
+ test_instances = list(tf.python_io.tf_record_iterator(test_set, opts))
+
+ # Load the word embeddings
+ print('Loading word embeddings...')
+ relata_embeddings, path_embeddings, nc_embeddings, path_to_index = (
+ None, None, None, None)
+ if hparams.input in ['dist', 'dist-nc', 'integrated', 'integrated-nc']:
+ relata_embeddings = lexnet_common.load_word_embeddings(
+ FLAGS.embeddings_base_path, hparams.relata_embeddings_file)
+
+ if hparams.input in ['path', 'integrated', 'integrated-nc']:
+ path_embeddings, path_to_index = path_model.load_path_embeddings(
+ os.path.join(FLAGS.embeddings_base_path, hparams.path_embeddings_file),
+ hparams.path_dim)
+
+ if hparams.input in ['dist-nc', 'integrated-nc']:
+ nc_embeddings = lexnet_common.load_word_embeddings(
+ FLAGS.embeddings_base_path, hparams.nc_embeddings_file)
+
+ # Define the graph and the model
+ with tf.Graph().as_default():
+ model = lexnet_model.LexNETModel(
+ hparams, relata_embeddings, path_embeddings,
+ nc_embeddings, path_to_index)
+
+ # Initialize a session and start training
+ session = tf.Session()
+ session.run(tf.global_variables_initializer())
+
+    # Initialize the path mapping
+ if hparams.input in ['path', 'integrated', 'integrated-nc']:
+ session.run(tf.tables_initializer())
+ session.run(model.initialize_path_op, {
+ model.path_initial_value_t: path_embeddings
+ })
+
+ # Initialize the NC embeddings
+ if hparams.input in ['dist-nc', 'integrated-nc']:
+ session.run(model.initialize_nc_op, {
+ model.nc_initial_value_t: nc_embeddings
+ })
+
+ # Load the labels
+ print('Loading labels...')
+ train_labels = model.load_labels(session, train_instances)
+ val_labels = model.load_labels(session, val_instances)
+ test_labels = model.load_labels(session, test_instances)
+
+ save_path = '{logdir}/results/{dataset}/{input}/{corpus}'.format(
+ logdir=FLAGS.logdir, dataset=FLAGS.dataset,
+ corpus=model.hparams.corpus, input=input_dir)
+
+ if not os.path.exists(save_path):
+ os.makedirs(save_path)
+
+ # Train the model
+ print('Training the model...')
+ model.fit(session, train_instances, epoch_completed,
+ val_instances, val_labels, save_path)
+
+ # Print the best performance on the validation set
+ print('Best performance on the validation set: F1=%.3f' %
+ epoch_completed.best_f1)
+
+ # Evaluate on the train and validation sets
+ lexnet_common.full_evaluation(model, session, train_instances, train_labels,
+ 'Train', classes)
+ lexnet_common.full_evaluation(model, session, val_instances, val_labels,
+ 'Validation', classes)
+ test_predictions = lexnet_common.full_evaluation(
+ model, session, test_instances, test_labels, 'Test', classes)
+
+ # Write the test predictions to a file
+ predictions_file = os.path.join(save_path, 'test_predictions.tsv')
+ print('Saving test predictions to %s' % save_path)
+ test_pairs = model.load_pairs(session, test_instances)
+ lexnet_common.write_predictions(test_pairs, test_labels, test_predictions,
+ classes, predictions_file)
+
+
+def epoch_completed(model, session, epoch, epoch_loss,
+ val_instances, val_labels, save_path):
+ """Runs every time an epoch completes.
+
+ Print the performance on the validation set, and update the saved model if
+ its performance is better on the previous ones. If the performance dropped,
+ tell the training to stop.
+
+ Args:
+ model: The currently trained path-based model.
+ session: The current TensorFlow session.
+ epoch: The epoch number.
+ epoch_loss: The current epoch loss.
+ val_instances: The validation set instances (evaluation between epochs).
+ val_labels: The validation set labels (for evaluation between epochs).
+ save_path: Where to save the model.
+
+ Returns:
+ whether the training should stop.
+ """
+ stop_training = False
+
+ # Evaluate on the validation set
+ val_pred = model.predict(session, val_instances)
+ precision, recall, f1, _ = metrics.precision_recall_fscore_support(
+ val_labels, val_pred, average='weighted')
+ print(
+ 'Epoch: %d/%d, Loss: %f, validation set: P: %.3f, R: %.3f, F1: %.3f\n' % (
+ epoch + 1, model.hparams.num_epochs, epoch_loss,
+ precision, recall, f1))
+
+  # If the F1 is much smaller than the best seen so far, stop training. Else,
+  # if it improves on the best, save the model.
+ if f1 < epoch_completed.best_f1 - 0.08:
+ stop_training = True
+
+ if f1 > epoch_completed.best_f1:
+ saver = tf.train.Saver()
+ checkpoint_filename = os.path.join(save_path, 'best.ckpt')
+ print('Saving model in: %s' % checkpoint_filename)
+ saver.save(session, checkpoint_filename)
+ print('Model saved in file: %s' % checkpoint_filename)
+ epoch_completed.best_f1 = f1
+
+ return stop_training
+
+epoch_completed.best_f1 = 0
+
+if __name__ == '__main__':
+ tf.app.run(main)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lexnet_nc/learn_path_embeddings.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lexnet_nc/learn_path_embeddings.py"
new file mode 100644
index 00000000..6d005990
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lexnet_nc/learn_path_embeddings.py"
@@ -0,0 +1,225 @@
+#!/usr/bin/env python
+# Copyright 2017, 2018 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Trains the LexNET path-based model."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+
+import lexnet_common
+import path_model
+from sklearn import metrics
+import tensorflow as tf
+
+tf.flags.DEFINE_string(
+ 'dataset_dir', 'datasets',
+ 'Dataset base directory')
+
+tf.flags.DEFINE_string(
+ 'dataset',
+ 'tratz/fine_grained',
+ 'Subdirectory containing the corpus directories: '
+ 'subdirectory of dataset_dir')
+
+tf.flags.DEFINE_string(
+ 'corpus', 'random/wiki_gigawords',
+ 'Subdirectory containing the corpus and split: '
+ 'subdirectory of dataset_dir/dataset')
+
+tf.flags.DEFINE_string(
+ 'embeddings_base_path', 'embeddings',
+ 'Embeddings base directory')
+
+tf.flags.DEFINE_string(
+ 'logdir', 'logdir',
+ 'Directory of model output files')
+
+FLAGS = tf.flags.FLAGS
+
+
+def main(_):
+ # Pick up any one-off hyper-parameters.
+ hparams = path_model.PathBasedModel.default_hparams()
+
+ # Set the number of classes
+ classes_filename = os.path.join(
+ FLAGS.dataset_dir, FLAGS.dataset, 'classes.txt')
+
+ with open(classes_filename) as f_in:
+ classes = f_in.read().splitlines()
+
+ hparams.num_classes = len(classes)
+ print('Model will predict into %d classes' % hparams.num_classes)
+
+ # Get the datasets
+ train_set, val_set, test_set = (
+ os.path.join(
+ FLAGS.dataset_dir, FLAGS.dataset, FLAGS.corpus,
+ filename + '.tfrecs.gz')
+ for filename in ['train', 'val', 'test'])
+
+ print('Running with hyper-parameters: {}'.format(hparams))
+
+ # Load the instances
+ print('Loading instances...')
+ opts = tf.python_io.TFRecordOptions(
+ compression_type=tf.python_io.TFRecordCompressionType.GZIP)
+ train_instances = list(tf.python_io.tf_record_iterator(train_set, opts))
+ val_instances = list(tf.python_io.tf_record_iterator(val_set, opts))
+ test_instances = list(tf.python_io.tf_record_iterator(test_set, opts))
+
+ # Load the word embeddings
+ print('Loading word embeddings...')
+ lemma_embeddings = lexnet_common.load_word_embeddings(
+ FLAGS.embeddings_base_path, hparams.lemma_embeddings_file)
+
+ # Define the graph and the model
+ with tf.Graph().as_default():
+ with tf.variable_scope('lexnet'):
+ options = tf.python_io.TFRecordOptions(
+ compression_type=tf.python_io.TFRecordCompressionType.GZIP)
+ reader = tf.TFRecordReader(options=options)
+ _, train_instance = reader.read(
+ tf.train.string_input_producer([train_set]))
+ shuffled_train_instance = tf.train.shuffle_batch(
+ [train_instance],
+ batch_size=1,
+ num_threads=1,
+ capacity=len(train_instances),
+ min_after_dequeue=100,
+ )[0]
+
+ train_model = path_model.PathBasedModel(
+ hparams, lemma_embeddings, shuffled_train_instance)
+
+ with tf.variable_scope('lexnet', reuse=True):
+ val_instance = tf.placeholder(dtype=tf.string)
+ val_model = path_model.PathBasedModel(
+ hparams, lemma_embeddings, val_instance)
+
+ # Initialize a session and start training
+ logdir = (
+ '{logdir}/results/{dataset}/path/{corpus}/supervisor.logdir'.format(
+ logdir=FLAGS.logdir, dataset=FLAGS.dataset, corpus=FLAGS.corpus))
+
+ best_model_saver = tf.train.Saver()
+ f1_t = tf.placeholder(tf.float32)
+ best_f1_t = tf.Variable(0.0, trainable=False, name='best_f1')
+ assign_best_f1_op = tf.assign(best_f1_t, f1_t)
+
+ supervisor = tf.train.Supervisor(
+ logdir=logdir,
+ global_step=train_model.global_step)
+
+ with supervisor.managed_session() as session:
+ # Load the labels
+ print('Loading labels...')
+ val_labels = train_model.load_labels(session, val_instances)
+
+ save_path = '{logdir}/results/{dataset}/path/{corpus}/'.format(
+ logdir=FLAGS.logdir,
+ dataset=FLAGS.dataset,
+ corpus=FLAGS.corpus)
+
+ # Train the model
+ print('Training the model...')
+
+ while True:
+ step = session.run(train_model.global_step)
+ epoch = (step + len(train_instances) - 1) // len(train_instances)
+ if epoch > hparams.num_epochs:
+ break
+
+ print('Starting epoch %d (step %d)...' % (1 + epoch, step))
+
+ epoch_loss = train_model.run_one_epoch(session, len(train_instances))
+
+ best_f1 = session.run(best_f1_t)
+ f1 = epoch_completed(val_model, session, epoch, epoch_loss,
+ val_instances, val_labels, best_model_saver,
+ save_path, best_f1)
+
+ if f1 > best_f1:
+ session.run(assign_best_f1_op, {f1_t: f1})
+
+ if f1 < best_f1 - 0.08:
+        tf.logging.info('Stopping training after %d epochs.\n' % epoch)
+ break
+
+ # Print the best performance on the validation set
+ best_f1 = session.run(best_f1_t)
+ print('Best performance on the validation set: F1=%.3f' % best_f1)
+
+ # Save the path embeddings
+ print('Computing the path embeddings...')
+ instances = train_instances + val_instances + test_instances
+ path_index, path_vectors = path_model.compute_path_embeddings(
+ val_model, session, instances)
+ path_emb_dir = '{dir}/path_embeddings/{dataset}/{corpus}/'.format(
+ dir=FLAGS.embeddings_base_path,
+ dataset=FLAGS.dataset,
+ corpus=FLAGS.corpus)
+
+ if not os.path.exists(path_emb_dir):
+ os.makedirs(path_emb_dir)
+
+ path_model.save_path_embeddings(
+ val_model, path_vectors, path_index, path_emb_dir)
+
+
+def epoch_completed(model, session, epoch, epoch_loss,
+ val_instances, val_labels, saver, save_path, best_f1):
+ """Runs every time an epoch completes.
+
+  Prints the performance on the validation set, and updates the saved model if
+  its performance is better than the previous best. The returned F1 lets the
+  caller decide whether to stop training after a large drop in performance.
+
+ Args:
+ model: The currently trained path-based model.
+ session: The current TensorFlow session.
+ epoch: The epoch number.
+ epoch_loss: The current epoch loss.
+ val_instances: The validation set instances (evaluation between epochs).
+ val_labels: The validation set labels (for evaluation between epochs).
+    saver: a tf.train.Saver object used to save the best model.
+ save_path: Where to save the model.
+ best_f1: the best F1 achieved so far.
+
+ Returns:
+    The F1 achieved on the validation set.
+ """
+ # Evaluate on the validation set
+ val_pred = model.predict(session, val_instances)
+ precision, recall, f1, _ = metrics.precision_recall_fscore_support(
+ val_labels, val_pred, average='weighted')
+ print(
+ 'Epoch: %d/%d, Loss: %f, validation set: P: %.3f, R: %.3f, F1: %.3f\n' % (
+ epoch + 1, model.hparams.num_epochs, epoch_loss,
+ precision, recall, f1))
+
+ if f1 > best_f1:
+ print('Saving model in: %s' % (save_path + 'best.ckpt'))
+ saver.save(session, save_path + 'best.ckpt')
+ print('Model saved in file: %s' % (save_path + 'best.ckpt'))
+
+ return f1
+
+
+if __name__ == '__main__':
+ tf.app.run(main)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lexnet_nc/lexnet_common.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lexnet_nc/lexnet_common.py"
new file mode 100644
index 00000000..c4a576b4
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lexnet_nc/lexnet_common.py"
@@ -0,0 +1,209 @@
+# Copyright 2017, 2018 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Common stuff used with LexNET."""
+# pylint: disable=bad-whitespace
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import numpy as np
+from sklearn import metrics
+import tensorflow as tf
+
+# Part of speech tags used in the paths.
+POSTAGS = [
+ 'PAD', 'VERB', 'CONJ', 'NOUN', 'PUNCT',
+ 'ADP', 'ADJ', 'DET', 'ADV', 'PART',
+ 'NUM', 'X', 'INTJ', 'SYM',
+]
+
+POSTAG_TO_ID = {tag: tid for tid, tag in enumerate(POSTAGS)}
+
+# Dependency labels used in the paths.
+DEPLABELS = [
+ 'PAD', 'UNK', 'ROOT', 'abbrev', 'acomp', 'advcl',
+ 'advmod', 'agent', 'amod', 'appos', 'attr', 'aux',
+ 'auxpass', 'cc', 'ccomp', 'complm', 'conj', 'cop',
+ 'csubj', 'csubjpass', 'dep', 'det', 'dobj', 'expl',
+ 'infmod', 'iobj', 'mark', 'mwe', 'nc', 'neg',
+ 'nn', 'npadvmod', 'nsubj', 'nsubjpass', 'num', 'number',
+ 'p', 'parataxis', 'partmod', 'pcomp', 'pobj', 'poss',
+ 'preconj', 'predet', 'prep', 'prepc', 'prt', 'ps',
+ 'purpcl', 'quantmod', 'rcmod', 'ref', 'rel', 'suffix',
+ 'title', 'tmod', 'xcomp', 'xsubj',
+]
+
+DEPLABEL_TO_ID = {label: lid for lid, label in enumerate(DEPLABELS)}
+
+# Direction codes used in the paths.
+DIRS = '_^V<>'
+DIR_TO_ID = {dir: did for did, dir in enumerate(DIRS)}
+
+
+def load_word_embeddings(word_embeddings_dir, word_embeddings_file):
+ """Loads pretrained word embeddings from a binary file and returns the matrix.
+
+ Args:
+ word_embeddings_dir: The directory for the word embeddings.
+ word_embeddings_file: The pretrained word embeddings text file.
+
+ Returns:
+ The word embeddings matrix
+ """
+ embedding_file = os.path.join(word_embeddings_dir, word_embeddings_file)
+ vocab_file = os.path.join(
+ word_embeddings_dir, os.path.dirname(word_embeddings_file), 'vocab.txt')
+
+ with open(vocab_file) as f_in:
+ vocab = [line.strip() for line in f_in]
+
+ vocab_size = len(vocab)
+
+ print('Embedding file "%s" has %d tokens' % (embedding_file, vocab_size))
+
+ with open(embedding_file) as f_in:
+ embeddings = np.load(f_in)
+
+ dim = embeddings.shape[1]
+
+  # Four initially random vectors for the special tokens.
+ special_embeddings = np.random.normal(0, 0.1, (4, dim))
+ embeddings = np.vstack((special_embeddings, embeddings))
+ embeddings = embeddings.astype(np.float32)
+
+ return embeddings
+
+
+def full_evaluation(model, session, instances, labels, set_name, classes):
+ """Prints a full evaluation on the current set.
+
+  Prints performance (precision, recall and F1), a classification report
+  (per-class performance), and a confusion matrix.
+
+ Args:
+ model: The currently trained path-based model.
+ session: The current TensorFlow session.
+ instances: The current set instances.
+ labels: The current set labels.
+ set_name: The current set name (train/validation/test).
+ classes: The class label names.
+
+ Returns:
+ The model's prediction for the given instances.
+ """
+
+ # Predict the labels
+ pred = model.predict(session, instances)
+
+ # Print the performance
+ precision, recall, f1, _ = metrics.precision_recall_fscore_support(
+ labels, pred, average='weighted')
+
+ print('%s set: Precision: %.3f, Recall: %.3f, F1: %.3f' % (
+ set_name, precision, recall, f1))
+
+ # Print a classification report
+ print('%s classification report:' % set_name)
+ print(metrics.classification_report(labels, pred, target_names=classes))
+
+ # Print the confusion matrix
+ print('%s confusion matrix:' % set_name)
+ cm = metrics.confusion_matrix(labels, pred, labels=range(len(classes)))
+ cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] * 100
+ print_cm(cm, labels=classes)
+ return pred
+
+
+def print_cm(cm, labels):
+ """Pretty print for confusion matrices.
+
+ From: https://gist.github.com/zachguo/10296432.
+
+ Args:
+ cm: The confusion matrix.
+ labels: The class names.
+ """
+ columnwidth = 10
+ empty_cell = ' ' * columnwidth
+ short_labels = [label[:12].rjust(10, ' ') for label in labels]
+
+ # Print header
+ header = empty_cell + ' '
+ header += ''.join([' %{0}s '.format(columnwidth) % label
+ for label in short_labels])
+
+ print(header)
+
+ # Print rows
+ for i, label1 in enumerate(short_labels):
+ row = '%{0}s '.format(columnwidth) % label1[:10]
+ for j in range(len(short_labels)):
+ value = int(cm[i, j]) if not np.isnan(cm[i, j]) else 0
+ cell = ' %{0}d '.format(10) % value
+ row += cell + ' '
+ print(row)
+
+
+def load_all_labels(records):
+ """Reads TensorFlow examples from a RecordReader and returns only the labels.
+
+ Args:
+ records: a record list with TensorFlow examples.
+
+ Returns:
+ The labels
+ """
+ curr_features = tf.parse_example(records, {
+ 'rel_id': tf.FixedLenFeature([1], dtype=tf.int64),
+ })
+
+ labels = tf.squeeze(curr_features['rel_id'], [-1])
+ return labels
+
+
+def load_all_pairs(records):
+ """Reads TensorFlow examples from a RecordReader and returns the word pairs.
+
+ Args:
+ records: a record list with TensorFlow examples.
+
+ Returns:
+ The word pairs
+ """
+ curr_features = tf.parse_example(records, {
+ 'pair': tf.FixedLenFeature([1], dtype=tf.string)
+ })
+
+ word_pairs = curr_features['pair']
+ return word_pairs
+
+
+def write_predictions(pairs, labels, predictions, classes, predictions_file):
+ """Write the predictions to a file.
+
+ Args:
+ pairs: the word pairs (list of tuple of two strings).
+ labels: the gold-standard labels for these pairs (array of rel ID).
+ predictions: the predicted labels for these pairs (array of rel ID).
+ classes: a list of relation names.
+ predictions_file: where to save the predictions.
+ """
+ with open(predictions_file, 'w') as f_out:
+ for pair, label, pred in zip(pairs, labels, predictions):
+ w1, w2 = pair
+ f_out.write('\t'.join([w1, w2, classes[label], classes[pred]]) + '\n')
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lexnet_nc/lexnet_model.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lexnet_nc/lexnet_model.py"
new file mode 100644
index 00000000..b0f16b03
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lexnet_nc/lexnet_model.py"
@@ -0,0 +1,438 @@
+# Copyright 2017, 2018 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""The integrated LexNET model."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import lexnet_common
+import numpy as np
+import tensorflow as tf
+from six.moves import xrange
+
+
+class LexNETModel(object):
+ """The LexNET model for classifying relationships between noun compounds."""
+
+ @classmethod
+ def default_hparams(cls):
+ """Returns the default hyper-parameters."""
+ return tf.contrib.training.HParams(
+ batch_size=10,
+ num_classes=37,
+ num_epochs=30,
+ input_keep_prob=0.9,
+ input='integrated', # dist/ dist-nc/ path/ integrated/ integrated-nc
+ learn_relata=False,
+ corpus='wiki_gigawords',
+ random_seed=133, # zero means no random seed
+ relata_embeddings_file='glove/glove.6B.300d.bin',
+ nc_embeddings_file='nc_glove/vecs.6B.300d.bin',
+ path_embeddings_file='path_embeddings/tratz/fine_grained/wiki',
+ hidden_layers=1,
+ path_dim=60)
+
+ def __init__(self, hparams, relata_embeddings, path_embeddings, nc_embeddings,
+ path_to_index):
+ """Initialize the LexNET classifier.
+
+ Args:
+ hparams: the hyper-parameters.
+ relata_embeddings: word embeddings for the distributional component.
+ path_embeddings: embeddings for the paths.
+ nc_embeddings: noun compound embeddings.
+ path_to_index: a mapping from string path to an index in the path
+ embeddings matrix.
+ """
+ self.hparams = hparams
+
+ self.path_embeddings = path_embeddings
+ self.relata_embeddings = relata_embeddings
+ self.nc_embeddings = nc_embeddings
+
+ self.vocab_size, self.relata_dim = 0, 0
+ self.path_to_index = None
+ self.path_dim = 0
+
+ # Set the random seed
+ if hparams.random_seed > 0:
+ tf.set_random_seed(hparams.random_seed)
+
+ # Get the vocabulary size and relata dim
+ if self.hparams.input in ['dist', 'dist-nc', 'integrated', 'integrated-nc']:
+ self.vocab_size, self.relata_dim = self.relata_embeddings.shape
+
+ # Create the mapping from string path to an index in the embeddings matrix
+ if self.hparams.input in ['path', 'integrated', 'integrated-nc']:
+ self.path_to_index = tf.contrib.lookup.HashTable(
+ tf.contrib.lookup.KeyValueTensorInitializer(
+ tf.constant(path_to_index.keys()),
+ tf.constant(path_to_index.values()),
+ key_dtype=tf.string, value_dtype=tf.int32), 0)
+
+ self.path_dim = self.path_embeddings.shape[1]
+
+ # Create the network
+ self.__create_computation_graph__()
+
+ def __create_computation_graph__(self):
+ """Initialize the model and define the graph."""
+ network_input = 0
+
+ # Define the network inputs
+ # Distributional x and y
+ if self.hparams.input in ['dist', 'dist-nc', 'integrated', 'integrated-nc']:
+ network_input += 2 * self.relata_dim
+ self.relata_lookup = tf.get_variable(
+ 'relata_lookup',
+ initializer=self.relata_embeddings,
+ dtype=tf.float32,
+ trainable=self.hparams.learn_relata)
+
+ # Path-based
+ if self.hparams.input in ['path', 'integrated', 'integrated-nc']:
+ network_input += self.path_dim
+
+ self.path_initial_value_t = tf.placeholder(tf.float32, None)
+
+ self.path_lookup = tf.get_variable(
+ name='path_lookup',
+ dtype=tf.float32,
+ trainable=False,
+ shape=self.path_embeddings.shape)
+
+ self.initialize_path_op = tf.assign(
+ self.path_lookup, self.path_initial_value_t, validate_shape=False)
+
+ # Distributional noun compound
+ if self.hparams.input in ['dist-nc', 'integrated-nc']:
+ network_input += self.relata_dim
+
+ self.nc_initial_value_t = tf.placeholder(tf.float32, None)
+
+ self.nc_lookup = tf.get_variable(
+ name='nc_lookup',
+ dtype=tf.float32,
+ trainable=False,
+ shape=self.nc_embeddings.shape)
+
+ self.initialize_nc_op = tf.assign(
+ self.nc_lookup, self.nc_initial_value_t, validate_shape=False)
+
+ hidden_dim = network_input // 2
+
+ # Define the MLP
+ if self.hparams.hidden_layers == 0:
+ self.weights1 = tf.get_variable(
+ 'W1',
+ shape=[network_input, self.hparams.num_classes],
+ dtype=tf.float32)
+ self.bias1 = tf.get_variable(
+ 'b1',
+ shape=[self.hparams.num_classes],
+ dtype=tf.float32)
+
+ elif self.hparams.hidden_layers == 1:
+
+ self.weights1 = tf.get_variable(
+ 'W1',
+ shape=[network_input, hidden_dim],
+ dtype=tf.float32)
+ self.bias1 = tf.get_variable(
+ 'b1',
+ shape=[hidden_dim],
+ dtype=tf.float32)
+
+ self.weights2 = tf.get_variable(
+ 'W2',
+ shape=[hidden_dim, self.hparams.num_classes],
+ dtype=tf.float32)
+ self.bias2 = tf.get_variable(
+ 'b2',
+ shape=[self.hparams.num_classes],
+ dtype=tf.float32)
+
+ else:
+ raise ValueError('Only 0 or 1 hidden layers are supported')
+
+ # Define the variables
+ self.instances = tf.placeholder(dtype=tf.string,
+ shape=[self.hparams.batch_size])
+
+ (self.x_embedding_id,
+ self.y_embedding_id,
+ self.nc_embedding_id,
+ self.path_embedding_id,
+ self.path_counts,
+ self.labels) = parse_tensorflow_examples(
+ self.instances, self.hparams.batch_size, self.path_to_index)
+
+ # Create the MLP
+ self.__mlp__()
+
+ self.instances_to_load = tf.placeholder(dtype=tf.string, shape=[None])
+ self.labels_to_load = lexnet_common.load_all_labels(self.instances_to_load)
+ self.pairs_to_load = lexnet_common.load_all_pairs(self.instances_to_load)
+
+ def load_labels(self, session, instances):
+ """Loads the labels for these instances.
+
+ Args:
+ session: The current TensorFlow session,
+ instances: The instances for which to load the labels.
+
+ Returns:
+ the labels of these instances.
+ """
+ return session.run(self.labels_to_load,
+ feed_dict={self.instances_to_load: instances})
+
+ def load_pairs(self, session, instances):
+ """Loads the word pairs for these instances.
+
+ Args:
+ session: The current TensorFlow session,
+ instances: The instances for which to load the labels.
+
+ Returns:
+ the word pairs of these instances.
+ """
+ word_pairs = session.run(self.pairs_to_load,
+ feed_dict={self.instances_to_load: instances})
+ return [pair[0].split('::') for pair in word_pairs]
+
+ def __train_single_batch__(self, session, batch_instances):
+ """Train a single batch.
+
+ Args:
+ session: The current TensorFlow session.
+      batch_instances: TensorFlow examples containing the training instances.
+
+ Returns:
+ The cost for the current batch.
+ """
+ cost, _ = session.run([self.cost, self.train_op],
+ feed_dict={self.instances: batch_instances})
+
+ return cost
+
+ def fit(self, session, inputs, on_epoch_completed, val_instances, val_labels,
+ save_path):
+ """Train the model.
+
+ Args:
+ session: The current TensorFlow session.
+      inputs: the training set instances (serialized TensorFlow examples).
+ on_epoch_completed: A method to call after each epoch.
+ val_instances: The validation set instances (evaluation between epochs).
+ val_labels: The validation set labels (for evaluation between epochs).
+ save_path: Where to save the model.
+ """
+ for epoch in range(self.hparams.num_epochs):
+
+ losses = []
+ epoch_indices = list(np.random.permutation(len(inputs)))
+
+ # If the number of instances doesn't divide by batch_size, enlarge it
+ # by duplicating training examples
+ mod = len(epoch_indices) % self.hparams.batch_size
+ if mod > 0:
+ epoch_indices.extend([np.random.randint(0, high=len(inputs))] * mod)
+
+ # Define the batches
+ n_batches = len(epoch_indices) // self.hparams.batch_size
+
+ for minibatch in range(n_batches):
+
+ batch_indices = epoch_indices[minibatch * self.hparams.batch_size:(
+ minibatch + 1) * self.hparams.batch_size]
+ batch_instances = [inputs[i] for i in batch_indices]
+
+ loss = self.__train_single_batch__(session, batch_instances)
+ losses.append(loss)
+
+ epoch_loss = np.nanmean(losses)
+
+ if on_epoch_completed:
+ should_stop = on_epoch_completed(self, session, epoch, epoch_loss,
+ val_instances, val_labels, save_path)
+ if should_stop:
+ print('Stopping training after %d epochs.' % epoch)
+ return
+
+ def predict(self, session, inputs):
+ """Predict the classification of the test set.
+
+ Args:
+ session: The current TensorFlow session.
+ inputs: the train paths, x, y and/or nc vectors
+
+ Returns:
+ The test predictions.
+ """
+ predictions, _ = zip(*self.predict_with_score(session, inputs))
+ return np.array(predictions)
+
+ def predict_with_score(self, session, inputs):
+ """Predict the classification of the test set.
+
+ Args:
+ session: The current TensorFlow session.
+ inputs: the test paths, x, y and/or nc vectors
+
+ Returns:
+ The test predictions along with their scores.
+ """
+ test_pred = [0] * len(inputs)
+
+ for chunk in xrange(0, len(test_pred), self.hparams.batch_size):
+
+ # Initialize the variables with the current batch data
+ batch_indices = list(
+ range(chunk, min(chunk + self.hparams.batch_size, len(test_pred))))
+
+ # If the batch is too small, add a few other examples
+ if len(batch_indices) < self.hparams.batch_size:
+ batch_indices += [0] * (self.hparams.batch_size-len(batch_indices))
+
+ batch_instances = [inputs[i] for i in batch_indices]
+
+ predictions, scores = session.run(
+ [self.predictions, self.scores],
+ feed_dict={self.instances: batch_instances})
+
+ for index_in_batch, index_in_dataset in enumerate(batch_indices):
+ prediction = predictions[index_in_batch]
+ score = scores[index_in_batch][prediction]
+ test_pred[index_in_dataset] = (prediction, score)
+
+ return test_pred
+
+ def __mlp__(self):
+ """Performs the MLP operations.
+
+ Returns: the prediction object to be computed in a Session
+ """
+ # Define the operations
+
+ # Network input
+ vec_inputs = []
+
+ # Distributional component
+ if self.hparams.input in ['dist', 'dist-nc', 'integrated', 'integrated-nc']:
+ for emb_id in [self.x_embedding_id, self.y_embedding_id]:
+ vec_inputs.append(tf.nn.embedding_lookup(self.relata_lookup, emb_id))
+
+ # Noun compound component
+ if self.hparams.input in ['dist-nc', 'integrated-nc']:
+ vec = tf.nn.embedding_lookup(self.nc_lookup, self.nc_embedding_id)
+ vec_inputs.append(vec)
+
+ # Path-based component
+ if self.hparams.input in ['path', 'integrated', 'integrated-nc']:
+
+ # Get the current paths for each batch instance
+ self.path_embeddings = tf.nn.embedding_lookup(self.path_lookup,
+ self.path_embedding_id)
+
+ # self.path_embeddings is of shape
+ # [batch_size, max_path_per_instance, output_dim]
+ # We need to multiply it by path counts
+ # ([batch_size, max_path_per_instance]).
+ # Start by duplicating path_counts along the output_dim axis.
+ self.path_freq = tf.tile(tf.expand_dims(self.path_counts, -1),
+ [1, 1, self.path_dim])
+
+ # Compute the averaged path vector for each instance.
+ # First, multiply the path embeddings and frequencies element-wise.
+ self.weighted = tf.multiply(self.path_freq, self.path_embeddings)
+
+ # Second, take the sum to get a tensor of shape [batch_size, output_dim].
+ self.pair_path_embeddings = tf.reduce_sum(self.weighted, 1)
+
+ # Finally, divide by the total number of paths.
+ # The number of paths for each pair has a shape [batch_size, 1],
+ # We duplicate it output_dim times along the second axis.
+ self.num_paths = tf.clip_by_value(
+ tf.reduce_sum(self.path_counts, 1), 1, np.inf)
+ self.num_paths = tf.tile(tf.expand_dims(self.num_paths, -1),
+ [1, self.path_dim])
+
+ # And finally, divide pair_path_embeddings by num_paths element-wise.
+ self.pair_path_embeddings = tf.div(
+ self.pair_path_embeddings, self.num_paths)
+ vec_inputs.append(self.pair_path_embeddings)
+
+ # Concatenate the inputs and feed to the MLP
+ self.input_vec = tf.nn.dropout(
+ tf.concat(vec_inputs, 1),
+ keep_prob=self.hparams.input_keep_prob)
+
+ h = tf.matmul(self.input_vec, self.weights1)
+ self.output = h
+
+ if self.hparams.hidden_layers == 1:
+ self.output = tf.matmul(tf.nn.tanh(h), self.weights2)
+
+ self.scores = self.output
+ self.predictions = tf.argmax(self.scores, axis=1)
+
+ # Define the loss function and the optimization algorithm
+ self.cross_entropies = tf.nn.sparse_softmax_cross_entropy_with_logits(
+ logits=self.scores, labels=self.labels)
+ self.cost = tf.reduce_sum(self.cross_entropies, name='cost')
+ self.global_step = tf.Variable(0, name='global_step', trainable=False)
+ self.optimizer = tf.train.AdamOptimizer()
+ self.train_op = self.optimizer.minimize(
+ self.cost, global_step=self.global_step)
+
+
+def parse_tensorflow_examples(record, batch_size, path_to_index):
+ """Reads TensorFlow examples from a RecordReader.
+
+ Args:
+ record: a record with TensorFlow examples.
+ batch_size: the number of instances in a minibatch
+ path_to_index: mapping from string path to index in the embeddings matrix.
+
+ Returns:
+ The word embeddings IDs, paths and counts
+ """
+ features = tf.parse_example(
+ record, {
+ 'x_embedding_id': tf.FixedLenFeature([1], dtype=tf.int64),
+ 'y_embedding_id': tf.FixedLenFeature([1], dtype=tf.int64),
+ 'nc_embedding_id': tf.FixedLenFeature([1], dtype=tf.int64),
+ 'reprs': tf.FixedLenSequenceFeature(
+ shape=(), dtype=tf.string, allow_missing=True),
+ 'counts': tf.FixedLenSequenceFeature(
+ shape=(), dtype=tf.int64, allow_missing=True),
+ 'rel_id': tf.FixedLenFeature([1], dtype=tf.int64)
+ })
+
+ x_embedding_id = tf.squeeze(features['x_embedding_id'], [-1])
+ y_embedding_id = tf.squeeze(features['y_embedding_id'], [-1])
+ nc_embedding_id = tf.squeeze(features['nc_embedding_id'], [-1])
+ labels = tf.squeeze(features['rel_id'], [-1])
+ path_counts = tf.to_float(tf.reshape(features['counts'], [batch_size, -1]))
+
+ path_embedding_id = None
+ if path_to_index:
+ path_embedding_id = path_to_index.lookup(features['reprs'])
+
+ return (
+ x_embedding_id, y_embedding_id, nc_embedding_id,
+ path_embedding_id, path_counts, labels)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lexnet_nc/path_model.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lexnet_nc/path_model.py"
new file mode 100644
index 00000000..a6fef9bf
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lexnet_nc/path_model.py"
@@ -0,0 +1,547 @@
+# Copyright 2017, 2018 Google, Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""LexNET Path-based Model."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import collections
+import itertools
+import os
+
+import lexnet_common
+import numpy as np
+import tensorflow as tf
+
+
+class PathBasedModel(object):
+ """The LexNET path-based model for classifying semantic relations."""
+
+ @classmethod
+ def default_hparams(cls):
+ """Returns the default hyper-parameters."""
+ return tf.contrib.training.HParams(
+ max_path_len=8,
+ num_classes=37,
+ num_epochs=30,
+ input_keep_prob=0.9,
+ learning_rate=0.001,
+ learn_lemmas=False,
+ random_seed=133, # zero means no random seed
+ lemma_embeddings_file='glove/glove.6B.50d.bin',
+ num_pos=len(lexnet_common.POSTAGS),
+ num_dep=len(lexnet_common.DEPLABELS),
+ num_directions=len(lexnet_common.DIRS),
+ lemma_dim=50,
+ pos_dim=4,
+ dep_dim=5,
+ dir_dim=1)
+
+ def __init__(self, hparams, lemma_embeddings, instance):
+ """Initialize the LexNET classifier.
+
+ Args:
+ hparams: the hyper-parameters.
+ lemma_embeddings: word embeddings for the path-based component.
+ instance: string tensor containing the input instance
+ """
+ self.hparams = hparams
+ self.lemma_embeddings = lemma_embeddings
+ self.instance = instance
+ self.vocab_size, self.lemma_dim = self.lemma_embeddings.shape
+
+ # Set the random seed
+ if hparams.random_seed > 0:
+ tf.set_random_seed(hparams.random_seed)
+
+ # Create the network
+ self.__create_computation_graph__()
+
+ def __create_computation_graph__(self):
+ """Initialize the model and define the graph."""
+ self.lstm_input_dim = sum([self.hparams.lemma_dim, self.hparams.pos_dim,
+ self.hparams.dep_dim, self.hparams.dir_dim])
+ self.lstm_output_dim = self.lstm_input_dim
+
+ network_input = self.lstm_output_dim
+ self.lemma_lookup = tf.get_variable(
+ 'lemma_lookup',
+ initializer=self.lemma_embeddings,
+ dtype=tf.float32,
+ trainable=self.hparams.learn_lemmas)
+ self.pos_lookup = tf.get_variable(
+ 'pos_lookup',
+ shape=[self.hparams.num_pos, self.hparams.pos_dim],
+ dtype=tf.float32)
+ self.dep_lookup = tf.get_variable(
+ 'dep_lookup',
+ shape=[self.hparams.num_dep, self.hparams.dep_dim],
+ dtype=tf.float32)
+ self.dir_lookup = tf.get_variable(
+ 'dir_lookup',
+ shape=[self.hparams.num_directions, self.hparams.dir_dim],
+ dtype=tf.float32)
+
+ self.weights1 = tf.get_variable(
+ 'W1',
+ shape=[network_input, self.hparams.num_classes],
+ dtype=tf.float32)
+ self.bias1 = tf.get_variable(
+ 'b1',
+ shape=[self.hparams.num_classes],
+ dtype=tf.float32)
+
+ # Define the variables
+ (self.batch_paths,
+ self.path_counts,
+ self.seq_lengths,
+ self.path_strings,
+ self.batch_labels) = _parse_tensorflow_example(
+ self.instance, self.hparams.max_path_len, self.hparams.input_keep_prob)
+
+ # Create the LSTM
+ self.__lstm__()
+
+ # Create the MLP
+ self.__mlp__()
+
+ self.instances_to_load = tf.placeholder(dtype=tf.string, shape=[None])
+ self.labels_to_load = lexnet_common.load_all_labels(self.instances_to_load)
+
+ def load_labels(self, session, batch_instances):
+ """Loads the labels of the current instances.
+
+ Args:
+ session: the current TensorFlow session.
+ batch_instances: the dataset instances.
+
+ Returns:
+ the labels.
+ """
+ return session.run(self.labels_to_load,
+ feed_dict={self.instances_to_load: batch_instances})
+
+ def run_one_epoch(self, session, num_steps):
+ """Train the model.
+
+ Args:
+ session: The current TensorFlow session.
+ num_steps: The number of steps in each epoch.
+
+ Returns:
+ The mean loss for the epoch.
+
+ Raises:
+ ArithmeticError: if the loss becomes non-finite.
+ """
+ losses = []
+
+ for step in range(num_steps):
+ curr_loss, _ = session.run([self.cost, self.train_op])
+ if not np.isfinite(curr_loss):
+ raise ArithmeticError('nan loss at step %d' % step)
+
+ losses.append(curr_loss)
+
+ return np.mean(losses)
+
+ def predict(self, session, inputs):
+ """Predict the classification of the test set.
+
+ Args:
+ session: The current TensorFlow session.
+ inputs: the train paths, x, y and/or nc vectors
+
+ Returns:
+ The test predictions.
+ """
+ predictions, _ = zip(*self.predict_with_score(session, inputs))
+ return np.array(predictions)
+
+ def predict_with_score(self, session, inputs):
+ """Predict the classification of the test set.
+
+ Args:
+ session: The current TensorFlow session.
+ inputs: the test paths, x, y and/or nc vectors
+
+ Returns:
+ The test predictions along with their scores.
+ """
+ test_pred = [0] * len(inputs)
+
+ for index, instance in enumerate(inputs):
+
+ prediction, scores = session.run(
+ [self.predictions, self.scores],
+ feed_dict={self.instance: instance})
+
+ test_pred[index] = (prediction, scores[prediction])
+
+ return test_pred
+
+ def __mlp__(self):
+ """Performs the MLP operations.
+
+ Returns: the prediction object to be computed in a Session
+ """
+ # Feed the paths to the MLP: path_embeddings is
+ # [num_batch_paths, output_dim], and when we multiply it by W
+ # ([output_dim, num_classes]), we get a matrix of class distributions:
+ # [num_batch_paths, num_classes].
+ self.distributions = tf.matmul(self.path_embeddings, self.weights1)
+
+ # Now, compute weighted average on the class distributions, using the path
+ # frequency as weights.
+
+ # First, reshape path_freq to the same shape of distributions
+ self.path_freq = tf.tile(tf.expand_dims(self.path_counts, -1),
+ [1, self.hparams.num_classes])
+
+ # Second, multiply the distributions and frequencies element-wise.
+ self.weighted = tf.multiply(self.path_freq, self.distributions)
+
+ # Finally, take the average to get a tensor of shape [1, num_classes].
+ self.weighted_sum = tf.reduce_sum(self.weighted, 0)
+ self.num_paths = tf.clip_by_value(tf.reduce_sum(self.path_counts),
+ 1, np.inf)
+ self.num_paths = tf.tile(tf.expand_dims(self.num_paths, -1),
+ [self.hparams.num_classes])
+ self.scores = tf.div(self.weighted_sum, self.num_paths)
+ self.predictions = tf.argmax(self.scores)
+
+ # Define the loss function and the optimization algorithm
+ self.cross_entropies = tf.nn.sparse_softmax_cross_entropy_with_logits(
+ logits=self.scores, labels=tf.reduce_mean(self.batch_labels))
+ self.cost = tf.reduce_sum(self.cross_entropies, name='cost')
+ self.global_step = tf.Variable(0, name='global_step', trainable=False)
+ self.optimizer = tf.train.AdamOptimizer()
+ self.train_op = self.optimizer.minimize(self.cost,
+ global_step=self.global_step)
+
+ def __lstm__(self):
+ """Defines the LSTM operations.
+
+ Returns:
+ A matrix of path embeddings.
+ """
+ lookup_tables = [self.lemma_lookup, self.pos_lookup,
+ self.dep_lookup, self.dir_lookup]
+
+ # Split the edges to components: list of 4 tensors
+ # [num_batch_paths, max_path_len, 1]
+ self.edge_components = tf.split(self.batch_paths, 4, axis=2)
+
+ # Look up the components embeddings and concatenate them back together
+ self.path_matrix = tf.concat([
+ tf.squeeze(tf.nn.embedding_lookup(lookup_table, component), 2)
+ for lookup_table, component in
+ zip(lookup_tables, self.edge_components)
+ ], axis=2)
+
+ self.sequence_lengths = tf.reshape(self.seq_lengths, [-1])
+
+ # Define the LSTM.
+ # The input is [num_batch_paths, max_path_len, input_dim].
+ lstm_cell = tf.contrib.rnn.BasicLSTMCell(self.lstm_output_dim)
+
+ # The output is [num_batch_paths, max_path_len, output_dim].
+ self.lstm_outputs, _ = tf.nn.dynamic_rnn(
+ lstm_cell, self.path_matrix, dtype=tf.float32,
+ sequence_length=self.sequence_lengths)
+
+ # Slice the last *relevant* output for each instance ->
+ # [num_batch_paths, output_dim]
+ self.path_embeddings = _extract_last_relevant(self.lstm_outputs,
+ self.sequence_lengths)
+
+
+def _parse_tensorflow_example(record, max_path_len, input_keep_prob):
+ """Reads TensorFlow examples from a RecordReader.
+
+ Args:
+ record: a record with TensorFlow example.
+ max_path_len: the maximum path length.
+ input_keep_prob: 1 - the word dropout probability
+
+ Returns:
+ The paths and counts
+ """
+ features = tf.parse_single_example(record, {
+ 'lemmas':
+ tf.FixedLenSequenceFeature(
+ shape=(), dtype=tf.int64, allow_missing=True),
+ 'postags':
+ tf.FixedLenSequenceFeature(
+ shape=(), dtype=tf.int64, allow_missing=True),
+ 'deplabels':
+ tf.FixedLenSequenceFeature(
+ shape=(), dtype=tf.int64, allow_missing=True),
+ 'dirs':
+ tf.FixedLenSequenceFeature(
+ shape=(), dtype=tf.int64, allow_missing=True),
+ 'counts':
+ tf.FixedLenSequenceFeature(
+ shape=(), dtype=tf.int64, allow_missing=True),
+ 'pathlens':
+ tf.FixedLenSequenceFeature(
+ shape=(), dtype=tf.int64, allow_missing=True),
+ 'reprs':
+ tf.FixedLenSequenceFeature(
+ shape=(), dtype=tf.string, allow_missing=True),
+ 'rel_id':
+ tf.FixedLenFeature([], dtype=tf.int64)
+ })
+
+ path_counts = tf.to_float(features['counts'])
+ seq_lengths = features['pathlens']
+
+ # Concatenate the edge components to create a path tensor:
+ # [max_paths_per_ins, max_path_length, 4]
+ lemmas = _word_dropout(
+ tf.reshape(features['lemmas'], [-1, max_path_len]), input_keep_prob)
+
+ paths = tf.stack(
+ [lemmas] + [
+ tf.reshape(features[f], [-1, max_path_len])
+ for f in ('postags', 'deplabels', 'dirs')
+ ],
+ axis=-1)
+
+ path_strings = features['reprs']
+
+ # Add an empty path to pairs with no paths
+ paths = tf.cond(
+ tf.shape(paths)[0] > 0,
+ lambda: paths,
+ lambda: tf.zeros([1, max_path_len, 4], dtype=tf.int64))
+
+ # Paths are left-padded. We reverse them to make them right-padded.
+ paths = tf.reverse(paths, axis=[1])
+
+ path_counts = tf.cond(
+ tf.shape(path_counts)[0] > 0,
+ lambda: path_counts,
+ lambda: tf.constant([1.0], dtype=tf.float32))
+
+ seq_lengths = tf.cond(
+ tf.shape(seq_lengths)[0] > 0,
+ lambda: seq_lengths,
+ lambda: tf.constant([1], dtype=tf.int64))
+
+ # Duplicate the label for each path
+ labels = tf.ones_like(path_counts, dtype=tf.int64) * features['rel_id']
+
+ return paths, path_counts, seq_lengths, path_strings, labels
+
+
+def _extract_last_relevant(output, seq_lengths):
+ """Get the last relevant LSTM output cell for each batch instance.
+
+ Args:
+    output: the LSTM outputs - a tensor with shape
+      [num_paths, max_path_len, output_dim]
+ seq_lengths: the sequences length per instance
+
+ Returns:
+ The last relevant LSTM output cell for each batch instance.
+ """
+ max_length = int(output.get_shape()[1])
+ path_lengths = tf.clip_by_value(seq_lengths - 1, 0, max_length)
+ relevant = tf.reduce_sum(tf.multiply(output, tf.expand_dims(
+ tf.one_hot(path_lengths, max_length), -1)), 1)
+ return relevant
+
+
+def _word_dropout(words, input_keep_prob):
+ """Drops words with probability 1 - input_keep_prob.
+
+ Args:
+ words: a list of lemmas from the paths.
+ input_keep_prob: the probability to keep the word.
+
+ Returns:
+    The revised list where some of the words are replaced by the unknown-word token.
+ """
+ # Create the mask: (-1) to drop, 1 to keep
+ prob = tf.random_uniform(tf.shape(words), 0, 1)
+ condition = tf.less(prob, (1 - input_keep_prob))
+ mask = tf.where(condition,
+ tf.negative(tf.ones_like(words)), tf.ones_like(words))
+
+  # We need to keep zeros (the padding token), and change other numbers to 1
+  # (the unknown-word token) if their mask is -1. First, we multiply the mask
+  # and the words.
+ # Zeros will stay zeros, and words to drop will become negative.
+ # Then, we change negative values to 1.
+ masked_words = tf.multiply(mask, words)
+ condition = tf.less(masked_words, 0)
+ dropped_words = tf.where(condition, tf.ones_like(words), words)
+ return dropped_words
+
+
+def compute_path_embeddings(model, session, instances):
+ """Compute the path embeddings for all the distinct paths.
+
+ Args:
+ model: The trained path-based model.
+ session: The current TensorFlow session.
+ instances: All the train, test and validation instances.
+
+ Returns:
+ The path to ID index and the path embeddings.
+ """
+ # Get an index for each distinct path
+ path_index = collections.defaultdict(itertools.count(0).next)
+ path_vectors = {}
+
+ for instance in instances:
+ curr_path_embeddings, curr_path_strings = session.run(
+ [model.path_embeddings, model.path_strings],
+ feed_dict={model.instance: instance})
+
+ for i, path in enumerate(curr_path_strings):
+ if not path:
+ continue
+
+ # Set a new/existing index for the path
+ index = path_index[path]
+
+ # Save its vector
+ path_vectors[index] = curr_path_embeddings[i, :]
+
+ print('Number of distinct paths: %d' % len(path_index))
+ return path_index, path_vectors
+
+
+def save_path_embeddings(model, path_vectors, path_index, embeddings_base_path):
+ """Saves the path embeddings.
+
+ Args:
+ model: The trained path-based model.
+ path_vectors: The path embeddings.
+ path_index: A map from path to ID.
+ embeddings_base_path: The base directory where the embeddings are.
+ """
+ index_range = range(max(path_index.values()) + 1)
+ path_matrix = [path_vectors[i] for i in index_range]
+ path_matrix = np.vstack(path_matrix)
+
+ # Save the path embeddings
+ path_vector_filename = os.path.join(
+ embeddings_base_path, '%d_path_vectors' % model.lstm_output_dim)
+ with open(path_vector_filename, 'w') as f_out:
+ np.save(f_out, path_matrix)
+
+ index_to_path = {i: p for p, i in path_index.iteritems()}
+ path_vocab = [index_to_path[i] for i in index_range]
+
+ # Save the path vocabulary
+ path_vocab_filename = os.path.join(
+ embeddings_base_path, '%d_path_vocab' % model.lstm_output_dim)
+ with open(path_vocab_filename, 'w') as f_out:
+ f_out.write('\n'.join(path_vocab))
+ f_out.write('\n')
+
+ print('Saved path embeddings.')
+
+
+def load_path_embeddings(path_embeddings_dir, path_dim):
+ """Loads pretrained path embeddings from a binary file and returns the matrix.
+
+ Args:
+ path_embeddings_dir: The directory for the path embeddings.
+ path_dim: The dimension of the path embeddings, used as prefix to the
+ path_vocab and path_vectors files.
+
+ Returns:
+ The path embeddings matrix and the path_to_index dictionary.
+ """
+ prefix = path_embeddings_dir + '/%d' % path_dim + '_'
+ with open(prefix + 'path_vocab') as f_in:
+ vocab = f_in.read().splitlines()
+
+ vocab_size = len(vocab)
+ embedding_file = prefix + 'path_vectors'
+
+ print('Embedding file "%s" has %d paths' % (embedding_file, vocab_size))
+
+ with open(embedding_file) as f_in:
+ embeddings = np.load(f_in)
+
+ path_to_index = {p: i for i, p in enumerate(vocab)}
+ return embeddings, path_to_index
+
+
+def get_indicative_paths(model, session, path_index, path_vectors, classes,
+ save_dir, k=20, threshold=0.8):
+ """Gets the most indicative paths for each class.
+
+ Args:
+ model: The trained path-based model.
+ session: The current TensorFlow session.
+ path_index: A map from path to ID.
+ path_vectors: The path embeddings.
+ classes: The class label names.
+ save_dir: Where to save the paths.
+ k: The k for top-k paths.
+ threshold: The threshold above which to consider paths as indicative.
+ """
+ # Define graph variables for this operation
+ p_path_embedding = tf.placeholder(dtype=tf.float32,
+ shape=[1, model.lstm_output_dim])
+ p_distributions = tf.nn.softmax(tf.matmul(p_path_embedding, model.weights1))
+
+ # Treat each path as a pair instance with a single path, and get the
+ # relation distribution for it. Then, take the top paths for each relation.
+
+ # This dictionary contains a relation as a key, and the value is a list of
+ # tuples of path index and score. A relation r will contain (p, s) if the
+ # path p is classified to r with a confidence of s.
+ prediction_per_relation = collections.defaultdict(list)
+
+ index_to_path = {i: p for p, i in path_index.iteritems()}
+
+ # Predict all the paths
+ for index in range(len(path_index)):
+ curr_path_vector = path_vectors[index]
+
+ distribution = session.run(p_distributions,
+ feed_dict={
+ p_path_embedding: np.reshape(
+ curr_path_vector,
+ [1, model.lstm_output_dim])})
+
+ distribution = distribution[0, :]
+ prediction = np.argmax(distribution)
+ prediction_per_relation[prediction].append(
+ (index, distribution[prediction]))
+
+ if index % 10000 == 0:
+ print('Classified %d/%d (%3.2f%%) of the paths' % (
+ index, len(path_index), 100 * index / len(path_index)))
+
+ # Retrieve k-best scoring paths for each relation
+ for relation_index, relation in enumerate(classes):
+ curr_paths = sorted(prediction_per_relation[relation_index],
+ key=lambda item: item[1], reverse=True)
+ above_t = [(p, s) for (p, s) in curr_paths if s >= threshold]
+    top_k = curr_paths[:k]
+ relation_paths = above_t if len(above_t) > len(top_k) else top_k
+
+ paths_filename = os.path.join(save_dir, '%s.paths' % relation)
+ with open(paths_filename, 'w') as f_out:
+ for index, score in relation_paths:
+ print('\t'.join([index_to_path[index], str(score)]), file=f_out)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/README.md"
new file mode 100644
index 00000000..fc4a0b5a
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/README.md"
@@ -0,0 +1,221 @@
+# LFADS - Latent Factor Analysis via Dynamical Systems
+
+This code implements the model from the paper "[LFADS - Latent Factor Analysis via Dynamical Systems](http://biorxiv.org/content/early/2017/06/20/152884)". It is a sequential variational auto-encoder designed specifically for investigating neuroscience data, but can be applied widely to any time series data. In an unsupervised setting, LFADS is able to decompose time series data into various factors, such as an initial condition, a generative dynamical system, control inputs to that generator, and a low-dimensional description of the observed data, called the factors. Additionally, the observation model is a loss on a probability distribution, so when LFADS processes a dataset, a denoised version of the dataset is also created. For example, if the dataset is raw spike counts, then under a Poisson negative log-likelihood loss the denoised data would be the inferred Poisson rates.
+
+
+## Prerequisites
+
+The code is written in Python 2.7.6. You will also need the following (a minimal install sketch is given after this list):
+
+* **TensorFlow** version 1.5 ([install](https://www.tensorflow.org/install/))
+* **NumPy, SciPy, Matplotlib** ([install SciPy stack](https://www.scipy.org/install.html), which contains all of them)
+* **h5py** ([install](https://pypi.python.org/pypi/h5py))
+
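+A minimal install sketch, assuming a `pip`-based setup for Python 2.7 (adapt the package versions to your environment):
+
+```sh
+# Assumed example: install the dependencies listed above with pip.
+$ pip install tensorflow==1.5.0 numpy scipy matplotlib h5py
+```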
+
+## Getting started
+
+Before starting, run the following:
+
+
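+A sketch of the setup command, assuming the usual approach of adding the repository to `PYTHONPATH`:
+
+```sh
+# Assumed command: make the LFADS modules importable from the nested directories.
+$ export PYTHONPATH=$PYTHONPATH:"path/to/your/directory"
+```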
+
+where "path/to/your/directory" is replaced with the path to the LFADS repository (you can get this path by using the `pwd` command). This allows the nested directories to access modules from their parent directory.
+
+## Generate synthetic data
+
+To generate the synthetic datasets, first run the following from the top-level lfads directory:
+
+```sh
+$ cd synth_data
+$ ./run_generate_synth_data.sh
+$ cd ..
+```
+
+These synthetic datasets are provided 1. to give insight into how the LFADS algorithm operates, and 2. to provide reasonable starting points for analyses you might want to run on your own data.
+
+## Train an LFADS model
+
+Now that we have our example datasets, we can train some models! To spin up an LFADS model on the synthetic data, run any of the following commands. For the examples that are in the paper, the important hyperparameters are roughly replicated. Most hyperparameters are insensitive to small changes or won't ever be changed unless you want a very fine level of control. In the first example, all hyperparameter flags are enumerated for easy copy-pasting, but for the rest of the examples only the most important flags (~the first 9) are specified for brevity. For a full list of flags, their descriptions, and their default values, refer to the top of `run_lfads.py`. Please see Table 1 in the Online Methods of the associated paper for definitions of the most important hyperparameters.
+
+```sh
+# Run LFADS on chaotic rnn data with no input pulses (g = 1.5) with spiking noise
+$ python run_lfads.py --kind=train \
+--data_dir=/tmp/rnn_synth_data_v1.0/ \
+--data_filename_stem=chaotic_rnn_no_inputs \
+--lfads_save_dir=/tmp/lfads_chaotic_rnn_no_inputs \
+--co_dim=0 \
+--factors_dim=20 \
+--ext_input_dim=0 \
+--controller_input_lag=1 \
+--output_dist=poisson \
+--do_causal_controller=false \
+--batch_size=128 \
+--learning_rate_init=0.01 \
+--learning_rate_stop=1e-05 \
+--learning_rate_decay_factor=0.95 \
+--learning_rate_n_to_compare=6 \
+--do_reset_learning_rate=false \
+--keep_prob=0.95 \
+--con_dim=128 \
+--gen_dim=200 \
+--ci_enc_dim=128 \
+--ic_dim=64 \
+--ic_enc_dim=128 \
+--ic_prior_var_min=0.1 \
+--gen_cell_input_weight_scale=1.0 \
+--cell_weight_scale=1.0 \
+--do_feed_factors_to_controller=true \
+--kl_start_step=0 \
+--kl_increase_steps=2000 \
+--kl_ic_weight=1.0 \
+--l2_con_scale=0.0 \
+--l2_gen_scale=2000.0 \
+--l2_start_step=0 \
+--l2_increase_steps=2000 \
+--ic_prior_var_scale=0.1 \
+--ic_post_var_min=0.0001 \
+--kl_co_weight=1.0 \
+--prior_ar_nvar=0.1 \
+--cell_clip_value=5.0 \
+--max_ckpt_to_keep_lve=5 \
+--do_train_prior_ar_atau=true \
+--co_prior_var_scale=0.1 \
+--csv_log=fitlog \
+--feedback_factors_or_rates=factors \
+--do_train_prior_ar_nvar=true \
+--max_grad_norm=200.0 \
+--device=gpu:0 \
+--num_steps_for_gen_ic=100000000 \
+--ps_nexamples_to_process=100000000 \
+--checkpoint_name=lfads_vae \
+--temporal_spike_jitter_width=0 \
+--checkpoint_pb_load_name=checkpoint \
+--inject_ext_input_to_gen=false \
+--co_mean_corr_scale=0.0 \
+--gen_cell_rec_weight_scale=1.0 \
+--max_ckpt_to_keep=5 \
+--output_filename_stem="" \
+--ic_prior_var_max=0.1 \
+--prior_ar_atau=10.0 \
+--do_train_io_only=false \
+--do_train_encoder_only=false
+
+# Run LFADS on chaotic rnn data with no input pulses (g = 1.5) with Gaussian noise
+$ python run_lfads.py --kind=train \
+--data_dir=/tmp/rnn_synth_data_v1.0/ \
+--data_filename_stem=gaussian_chaotic_rnn_no_inputs \
+--lfads_save_dir=/tmp/lfads_chaotic_rnn_inputs_g2p5 \
+--co_dim=1 \
+--factors_dim=20 \
+--output_dist=gaussian
+
+
+# Run LFADS on chaotic rnn data with input pulses (g = 2.5)
+$ python run_lfads.py --kind=train \
+--data_dir=/tmp/rnn_synth_data_v1.0/ \
+--data_filename_stem=chaotic_rnn_inputs_g2p5 \
+--lfads_save_dir=/tmp/lfads_chaotic_rnn_inputs_g2p5 \
+--co_dim=1 \
+--factors_dim=20 \
+--output_dist=poisson
+
+# Run LFADS on multi-session RNN data
+$ python run_lfads.py --kind=train \
+--data_dir=/tmp/rnn_synth_data_v1.0/ \
+--data_filename_stem=chaotic_rnn_multisession \
+--lfads_save_dir=/tmp/lfads_chaotic_rnn_multisession \
+--factors_dim=10 \
+--output_dist=poisson
+
+# Run LFADS on integration to bound model data
+$ python run_lfads.py --kind=train \
+--data_dir=/tmp/rnn_synth_data_v1.0/ \
+--data_filename_stem=itb_rnn \
+--lfads_save_dir=/tmp/lfads_itb_rnn \
+--co_dim=1 \
+--factors_dim=20 \
+--controller_input_lag=0 \
+--output_dist=poisson
+
+# Run LFADS on chaotic RNN data with labels
+$ python run_lfads.py --kind=train \
+--data_dir=/tmp/rnn_synth_data_v1.0/ \
+--data_filename_stem=chaotic_rnns_labeled \
+--lfads_save_dir=/tmp/lfads_chaotic_rnns_labeled \
+--co_dim=0 \
+--factors_dim=20 \
+--controller_input_lag=0 \
+--ext_input_dim=1 \
+--output_dist=poisson
+
+# Run LFADS on chaotic rnn data with no input pulses (g = 1.5) with Gaussian noise
+$ python run_lfads.py --kind=train \
+--data_dir=/tmp/rnn_synth_data_v1.0/ \
+--data_filename_stem=chaotic_rnn_no_inputs \
+--lfads_save_dir=/tmp/lfads_chaotic_rnn_no_inputs \
+--co_dim=0 \
+--factors_dim=20 \
+--ext_input_dim=0 \
+--controller_input_lag=1 \
+--output_dist=gaussian
+
+
+```
+
+**Tip**: If you are running LFADS on GPU and would like to run more than one model concurrently, set the `--allow_gpu_growth=True` flag on each job; otherwise, for performance reasons, the first model will allocate the entire GPU memory. Also, the TensorFlow libraries must be installed with GPU support.
+
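+For example, two concurrent training jobs on the same GPU might be launched like this (the save directories `lfads_run_a` and `lfads_run_b` are placeholders):
+
+```sh
+# Sketch: two LFADS jobs sharing gpu:0, each with --allow_gpu_growth enabled.
+$ python run_lfads.py --kind=train \
+--data_dir=/tmp/rnn_synth_data_v1.0/ \
+--data_filename_stem=chaotic_rnn_inputs_g2p5 \
+--lfads_save_dir=/tmp/lfads_run_a \
+--co_dim=1 --factors_dim=20 --output_dist=poisson \
+--allow_gpu_growth=True --device=gpu:0 &
+
+$ python run_lfads.py --kind=train \
+--data_dir=/tmp/rnn_synth_data_v1.0/ \
+--data_filename_stem=chaotic_rnn_inputs_g2p5 \
+--lfads_save_dir=/tmp/lfads_run_b \
+--co_dim=1 --factors_dim=20 --output_dist=poisson \
+--allow_gpu_growth=True --device=gpu:0 &
+```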
+
+## Visualize a training model
+
+To visualize training curves and various other metrics while training an LFADS model, run the following command on your model directory. For example, to launch TensorBoard on the chaotic RNN data with input pulses:
+
+```sh
+tensorboard --logdir=/tmp/lfads_chaotic_rnn_inputs_g2p5
+```
+
+## Evaluate a trained model
+
+Once your model is finished training, there are multiple ways you can evaluate
+it. Below are some sample commands to evaluate an LFADS model trained on the
+chaotic rnn data with input pulses (g = 2.5). The key differences here are
+setting the `--kind` flag to the appropriate mode, as well as the
+`--checkpoint_pb_load_name` flag to `checkpoint_lve` and the `--batch_size` flag
+(if you'd like to make it larger or smaller). All other flags should be the
+same as used in training, so that the same model architecture is built.
+
+```sh
+# Take samples from posterior then average (denoising operation)
+$ python run_lfads.py --kind=posterior_sample_and_average \
+--data_dir=/tmp/rnn_synth_data_v1.0/ \
+--data_filename_stem=chaotic_rnn_inputs_g2p5 \
+--lfads_save_dir=/tmp/lfads_chaotic_rnn_inputs_g2p5 \
+--co_dim=1 \
+--factors_dim=20 \
+--batch_size=1024 \
+--checkpoint_pb_load_name=checkpoint_lve
+
+# Sample from prior (generation of completely new samples)
+$ python run_lfads.py --kind=prior_sample \
+--data_dir=/tmp/rnn_synth_data_v1.0/ \
+--data_filename_stem=chaotic_rnn_inputs_g2p5 \
+--lfads_save_dir=/tmp/lfads_chaotic_rnn_inputs_g2p5 \
+--co_dim=1 \
+--factors_dim=20 \
+--batch_size=50 \
+--checkpoint_pb_load_name=checkpoint_lve
+
+# Write down model parameters
+$ python run_lfads.py --kind=write_model_params \
+--data_dir=/tmp/rnn_synth_data_v1.0/ \
+--data_filename_stem=chaotic_rnn_inputs_g2p5 \
+--lfads_save_dir=/tmp/lfads_chaotic_rnn_inputs_g2p5 \
+--co_dim=1 \
+--factors_dim=20 \
+--checkpoint_pb_load_name=checkpoint_lve
+```
+
+## Contact
+
+File any issues with the [issue tracker](https://github.com/tensorflow/models/issues). For any questions or problems, this code is maintained by [@sussillo](https://github.com/sussillo) and [@jazcollins](https://github.com/jazcollins).
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/distributions.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/distributions.py"
new file mode 100644
index 00000000..351d019a
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/distributions.py"
@@ -0,0 +1,493 @@
+# Copyright 2017 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# ==============================================================================
+import numpy as np
+import tensorflow as tf
+from utils import linear, log_sum_exp
+
+class Poisson(object):
+ """Poisson distributon
+
+ Computes the log probability under the model.
+
+ """
+ def __init__(self, log_rates):
+ """ Create Poisson distributions with log_rates parameters.
+
+ Args:
+ log_rates: a tensor-like list of log rates underlying the Poisson dist.
+ """
+ self.logr = log_rates
+
+ def logp(self, bin_counts):
+ """Compute the log probability for the counts in the bin, under the model.
+
+ Args:
+ bin_counts: array-like integer counts
+
+ Returns:
+ The log-probability under the Poisson models for each element of
+ bin_counts.
+ """
+ k = tf.to_float(bin_counts)
+ # log poisson(k, r) = log(r^k * e^(-r) / k!) = k log(r) - r - log k!
+ # log poisson(k, r=exp(x)) = k * x - exp(x) - lgamma(k + 1)
+ return k * self.logr - tf.exp(self.logr) - tf.lgamma(k + 1)
+
+
+def diag_gaussian_log_likelihood(z, mu=0.0, logvar=0.0):
+ """Log-likelihood under a Gaussian distribution with diagonal covariance.
+ Returns the log-likelihood for each dimension. One should sum the
+ results for the log-likelihood under the full multidimensional model.
+
+ Args:
+ z: The value to compute the log-likelihood.
+ mu: The mean of the Gaussian
+ logvar: The log variance of the Gaussian.
+
+ Returns:
+ The log-likelihood under the Gaussian model.
+ """
+
+ return -0.5 * (logvar + np.log(2*np.pi) + \
+ tf.square((z-mu)/tf.exp(0.5*logvar)))
+
+
+def gaussian_pos_log_likelihood(unused_mean, logvar, noise):
+ """Gaussian log-likelihood function for a posterior in VAE
+
+  Note: This function is specialized for a posterior distribution that has the
+  form z = mean + sigma * noise.
+
+ Args:
+ unused_mean: ignore
+ logvar: The log variance of the distribution
+ noise: The noise used in the sampling of the posterior.
+
+ Returns:
+ The log-likelihood under the Gaussian model.
+ """
+ # ln N(z; mean, sigma) = - ln(sigma) - 0.5 ln 2pi - noise^2 / 2
+ return - 0.5 * (logvar + np.log(2 * np.pi) + tf.square(noise))
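+
+# Note (added for clarity): substituting z = mean + exp(0.5*logvar) * noise into
+# diag_gaussian_log_likelihood gives (z - mean)^2 / exp(logvar) = noise^2, which
+# is why this specialized form only needs logvar and the noise sample.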
+
+
+class Gaussian(object):
+ """Base class for Gaussian distribution classes."""
+ pass
+
+
+class DiagonalGaussian(Gaussian):
+ """Diagonal Gaussian with different constant mean and variances in each
+ dimension.
+ """
+
+ def __init__(self, batch_size, z_size, mean, logvar):
+ """Create a diagonal gaussian distribution.
+
+ Args:
+ batch_size: The size of the batch, i.e. 0th dim in 2D tensor of samples.
+ z_size: The dimension of the distribution, i.e. 1st dim in 2D tensor.
+ mean: The N-D mean of the distribution.
+ logvar: The N-D log variance of the diagonal distribution.
+ """
+ size__xz = [None, z_size]
+ self.mean = mean # bxn already
+ self.logvar = logvar # bxn already
+ self.noise = noise = tf.random_normal(tf.shape(logvar))
+ self.sample = mean + tf.exp(0.5 * logvar) * noise
+ mean.set_shape(size__xz)
+ logvar.set_shape(size__xz)
+ self.sample.set_shape(size__xz)
+
+ def logp(self, z=None):
+ """Compute the log-likelihood under the distribution.
+
+ Args:
+ z (optional): value to compute likelihood for, if None, use sample.
+
+ Returns:
+ The likelihood of z under the model.
+ """
+ if z is None:
+ z = self.sample
+
+ # This is needed to make sure that the gradients are simple.
+ # The value of the function shouldn't change.
+ if z == self.sample:
+ return gaussian_pos_log_likelihood(self.mean, self.logvar, self.noise)
+
+ return diag_gaussian_log_likelihood(z, self.mean, self.logvar)
+
+
+class LearnableDiagonalGaussian(Gaussian):
+ """Diagonal Gaussian whose mean and variance are learned parameters."""
+
+ def __init__(self, batch_size, z_size, name, mean_init=0.0,
+ var_init=1.0, var_min=0.0, var_max=1000000.0):
+ """Create a learnable diagonal gaussian distribution.
+
+ Args:
+ batch_size: The size of the batch, i.e. 0th dim in 2D tensor of samples.
+ z_size: The dimension of the distribution, i.e. 1st dim in 2D tensor.
+      name: prefix name for the mean and log-variance TF variables.
+ mean_init (optional): The N-D mean initialization of the distribution.
+ var_init (optional): The N-D variance initialization of the diagonal
+ distribution.
+ var_min (optional): The minimum value the learned variance can take in any
+ dimension.
+ var_max (optional): The maximum value the learned variance can take in any
+ dimension.
+ """
+
+ size_1xn = [1, z_size]
+ size__xn = [None, z_size]
+ size_bx1 = tf.stack([batch_size, 1])
+ assert var_init > 0.0, "Problems"
+ assert var_max >= var_min, "Problems"
+ assert var_init >= var_min, "Problems"
+ assert var_max >= var_init, "Problems"
+
+
+ z_mean_1xn = tf.get_variable(name=name+"/mean", shape=size_1xn,
+ initializer=tf.constant_initializer(mean_init))
+ self.mean_bxn = mean_bxn = tf.tile(z_mean_1xn, size_bx1)
+ mean_bxn.set_shape(size__xn) # tile loses shape
+
+ log_var_init = np.log(var_init)
+ if var_max > var_min:
+ var_is_trainable = True
+ else:
+ var_is_trainable = False
+
+ z_logvar_1xn = \
+ tf.get_variable(name=(name+"/logvar"), shape=size_1xn,
+ initializer=tf.constant_initializer(log_var_init),
+ trainable=var_is_trainable)
+
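+    # Keep the learned variance inside [var_min, var_max]: the raw variable is
+    # exponentiated, squashed through a sigmoid onto [var_min, var_max], and
+    # then converted back to a log-variance for the sampling below.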
+ if var_is_trainable:
+ z_logit_var_1xn = tf.exp(z_logvar_1xn)
+ z_var_1xn = tf.nn.sigmoid(z_logit_var_1xn)*(var_max-var_min) + var_min
+ z_logvar_1xn = tf.log(z_var_1xn)
+
+ logvar_bxn = tf.tile(z_logvar_1xn, size_bx1)
+ self.logvar_bxn = logvar_bxn
+ self.noise_bxn = noise_bxn = tf.random_normal(tf.shape(logvar_bxn))
+ self.sample_bxn = mean_bxn + tf.exp(0.5 * logvar_bxn) * noise_bxn
+
+ def logp(self, z=None):
+ """Compute the log-likelihood under the distribution.
+
+ Args:
+ z (optional): value to compute likelihood for, if None, use sample.
+
+ Returns:
+ The likelihood of z under the model.
+ """
+ if z is None:
+ z = self.sample
+
+ # This is needed to make sure that the gradients are simple.
+ # The value of the function shouldn't change.
+ if z == self.sample_bxn:
+ return gaussian_pos_log_likelihood(self.mean_bxn, self.logvar_bxn,
+ self.noise_bxn)
+
+ return diag_gaussian_log_likelihood(z, self.mean_bxn, self.logvar_bxn)
+
+ @property
+ def mean(self):
+ return self.mean_bxn
+
+ @property
+ def logvar(self):
+ return self.logvar_bxn
+
+ @property
+ def sample(self):
+ return self.sample_bxn
+
+
+class DiagonalGaussianFromInput(Gaussian):
+ """Diagonal Gaussian whose mean and variance are conditioned on other
+ variables.
+
+ Note: the parameters to convert from input to the learned mean and log
+ variance are held in this class.
+ """
+
+ def __init__(self, x_bxu, z_size, name, var_min=0.0):
+ """Create an input dependent diagonal Gaussian distribution.
+
+ Args:
+      x_bxu: The input tensor from which the mean and variance are computed,
+ via a linear transformation of x. I.e.
+ mu = Wx + b, log(var) = Mx + c
+ z_size: The size of the distribution.
+ name: The name to prefix to learned variables.
+ var_min (optional): Minimal variance allowed. This is an additional
+ way to control the amount of information getting through the stochastic
+ layer.
+ """
+ size_bxn = tf.stack([tf.shape(x_bxu)[0], z_size])
+ self.mean_bxn = mean_bxn = linear(x_bxu, z_size, name=(name+"/mean"))
+ logvar_bxn = linear(x_bxu, z_size, name=(name+"/logvar"))
+ if var_min > 0.0:
+ logvar_bxn = tf.log(tf.exp(logvar_bxn) + var_min)
+ self.logvar_bxn = logvar_bxn
+
+ self.noise_bxn = noise_bxn = tf.random_normal(size_bxn)
+ self.noise_bxn.set_shape([None, z_size])
+ self.sample_bxn = mean_bxn + tf.exp(0.5 * logvar_bxn) * noise_bxn
+
+ def logp(self, z=None):
+ """Compute the log-likelihood under the distribution.
+
+ Args:
+ z (optional): value to compute likelihood for, if None, use sample.
+
+ Returns:
+ The likelihood of z under the model.
+ """
+
+ if z is None:
+ z = self.sample
+
+ # This is needed to make sure that the gradients are simple.
+ # The value of the function shouldn't change.
+ if z == self.sample_bxn:
+ return gaussian_pos_log_likelihood(self.mean_bxn,
+ self.logvar_bxn, self.noise_bxn)
+
+ return diag_gaussian_log_likelihood(z, self.mean_bxn, self.logvar_bxn)
+
+ @property
+ def mean(self):
+ return self.mean_bxn
+
+ @property
+ def logvar(self):
+ return self.logvar_bxn
+
+ @property
+ def sample(self):
+ return self.sample_bxn
+
+
+class GaussianProcess:
+ """Base class for Gaussian processes."""
+ pass
+
+
+class LearnableAutoRegressive1Prior(GaussianProcess):
+ """AR(1) model where autocorrelation and process variance are learned
+ parameters. Assumed zero mean.
+
+ """
+
+ def __init__(self, batch_size, z_size,
+ autocorrelation_taus, noise_variances,
+ do_train_prior_ar_atau, do_train_prior_ar_nvar,
+ num_steps, name):
+ """Create a learnable autoregressive (1) process.
+
+ Args:
+ batch_size: The size of the batch, i.e. 0th dim in 2D tensor of samples.
+ z_size: The dimension of the distribution, i.e. 1st dim in 2D tensor.
+ autocorrelation_taus: The auto correlation time constant of the AR(1)
+ process.
+ A value of 0 is uncorrelated gaussian noise.
+ noise_variances: The variance of the additive noise, *not* the process
+ variance.
+ do_train_prior_ar_atau: Train or leave as constant, the autocorrelation?
+ do_train_prior_ar_nvar: Train or leave as constant, the noise variance?
+ num_steps: Number of steps to run the process.
+ name: The name to prefix to learned TF variables.
+ """
+
+    # Note the use of the plural in all of these quantities.  This is intended
+    # to mark that even though a sample z_t from the posterior is thought of as
+    # a single sample of a multidimensional gaussian, the prior is actually
+    # thought of as U AR(1) processes, where U is the dimension of the inferred
+    # input.
+ size_bx1 = tf.stack([batch_size, 1])
+ size__xu = [None, z_size]
+ # process variance, the variance at time t over all instantiations of AR(1)
+ # with these parameters.
+ log_evar_inits_1xu = tf.expand_dims(tf.log(noise_variances), 0)
+ self.logevars_1xu = logevars_1xu = \
+ tf.Variable(log_evar_inits_1xu, name=name+"/logevars", dtype=tf.float32,
+ trainable=do_train_prior_ar_nvar)
+ self.logevars_bxu = logevars_bxu = tf.tile(logevars_1xu, size_bx1)
+ logevars_bxu.set_shape(size__xu) # tile loses shape
+
+ # \tau, which is the autocorrelation time constant of the AR(1) process
+ log_atau_inits_1xu = tf.expand_dims(tf.log(autocorrelation_taus), 0)
+ self.logataus_1xu = logataus_1xu = \
+ tf.Variable(log_atau_inits_1xu, name=name+"/logatau", dtype=tf.float32,
+ trainable=do_train_prior_ar_atau)
+
+ # phi in x_t = \mu + phi x_tm1 + \eps
+ # phi = exp(-1/tau)
+ # phi = exp(-1/exp(logtau))
+ # phi = exp(-exp(-logtau))
+ phis_1xu = tf.exp(-tf.exp(-logataus_1xu))
+ self.phis_bxu = phis_bxu = tf.tile(phis_1xu, size_bx1)
+ phis_bxu.set_shape(size__xu)
+
+ # process noise
+ # pvar = evar / (1- phi^2)
+ # logpvar = log ( exp(logevar) / (1 - phi^2) )
+ # logpvar = logevar - log(1-phi^2)
+ # logpvar = logevar - (log(1-phi) + log(1+phi))
+ self.logpvars_1xu = \
+ logevars_1xu - tf.log(1.0-phis_1xu) - tf.log(1.0+phis_1xu)
+ self.logpvars_bxu = logpvars_bxu = tf.tile(self.logpvars_1xu, size_bx1)
+ logpvars_bxu.set_shape(size__xu)
+
+    # process mean (zero, but included for completeness)
+ self.pmeans_bxu = pmeans_bxu = tf.zeros_like(phis_bxu)
+
+ # For sampling from the prior during de-novo generation.
+ self.means_t = means_t = [None] * num_steps
+ self.logvars_t = logvars_t = [None] * num_steps
+ self.samples_t = samples_t = [None] * num_steps
+ self.gaussians_t = gaussians_t = [None] * num_steps
+ sample_bxu = tf.zeros_like(phis_bxu)
+ for t in range(num_steps):
+ # process variance used here to make process completely stationary
+ if t == 0:
+ logvar_pt_bxu = self.logpvars_bxu
+ else:
+ logvar_pt_bxu = self.logevars_bxu
+
+ z_mean_pt_bxu = pmeans_bxu + phis_bxu * sample_bxu
+ gaussians_t[t] = DiagonalGaussian(batch_size, z_size,
+ mean=z_mean_pt_bxu,
+ logvar=logvar_pt_bxu)
+ sample_bxu = gaussians_t[t].sample
+ samples_t[t] = sample_bxu
+ logvars_t[t] = logvar_pt_bxu
+ means_t[t] = z_mean_pt_bxu
+
+ def logp_t(self, z_t_bxu, z_tm1_bxu=None):
+ """Compute the log-likelihood under the distribution for a given time t,
+ not the whole sequence.
+
+ Args:
+ z_t_bxu: sample to compute likelihood for at time t.
+ z_tm1_bxu (optional): sample condition probability of z_t upon.
+
+ Returns:
+ The likelihood of p_t under the model at time t. i.e.
+ p(z_t|z_tm1_bxu) = N(z_tm1_bxu * phis, eps^2)
+
+ """
+ if z_tm1_bxu is None:
+ return diag_gaussian_log_likelihood(z_t_bxu, self.pmeans_bxu,
+ self.logpvars_bxu)
+ else:
+ means_t_bxu = self.pmeans_bxu + self.phis_bxu * z_tm1_bxu
+ logp_tgtm1_bxu = diag_gaussian_log_likelihood(z_t_bxu,
+ means_t_bxu,
+ self.logevars_bxu)
+ return logp_tgtm1_bxu
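+
+# Note (added for clarity): the full-sequence log-likelihood under this prior
+# factorizes as log p(z_0) + sum_{t>0} log p(z_t | z_{t-1});
+# KLCost_GaussianGaussianProcessSampled below accumulates exactly this chain
+# via repeated calls to logp_t.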
+
+
+class KLCost_GaussianGaussian(object):
+ """log p(x|z) + KL(q||p) terms for Gaussian posterior and Gaussian prior. See
+ eqn 10 and Appendix B in VAE for latter term,
+ http://arxiv.org/abs/1312.6114
+
+ The log p(x|z) term is the reconstruction error under the model.
+ The KL term represents the penalty for passing information from the encoder
+ to the decoder.
+ To sample KL(q||p), we simply sample
+ ln q - ln p
+ by drawing samples from q and averaging.
+ """
+
+ def __init__(self, zs, prior_zs):
+ """Create a lower bound in three parts, normalized reconstruction
+ cost, normalized KL divergence cost, and their sum.
+
+    E_q[ ln p(z_i | z_{i+1}) / q(z_i | x) ]
+ \int q(z) ln p(z) dz = - 0.5 ln(2pi) - 0.5 \sum (ln(sigma_p^2) + \
+ sigma_q^2 / sigma_p^2 + (mean_p - mean_q)^2 / sigma_p^2)
+
+ \int q(z) ln q(z) dz = - 0.5 ln(2pi) - 0.5 \sum (ln(sigma_q^2) + 1)
+
+ Args:
+ zs: posterior z ~ q(z|x)
+ prior_zs: prior zs
+ """
+ # L = -KL + log p(x|z), to maximize bound on likelihood
+ # -L = KL - log p(x|z), to minimize bound on NLL
+    # so 'KL cost' is positive KL divergence
+ kl_b = 0.0
+ for z, prior_z in zip(zs, prior_zs):
+ assert isinstance(z, Gaussian)
+ assert isinstance(prior_z, Gaussian)
+ # ln(2pi) terms cancel
+ kl_b += 0.5 * tf.reduce_sum(
+ prior_z.logvar - z.logvar
+ + tf.exp(z.logvar - prior_z.logvar)
+ + tf.square((z.mean - prior_z.mean) / tf.exp(0.5 * prior_z.logvar))
+ - 1.0, [1])
+
+ self.kl_cost_b = kl_b
+ self.kl_cost = tf.reduce_mean(kl_b)
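+
+# Commented-out sanity check (illustrative numbers only): for scalar Gaussians
+# q = N(0, 1) and p = N(1, 2^2), the closed form
+#   KL(q||p) = log(s_p/s_q) + (s_q^2 + (m_q - m_p)^2) / (2*s_p^2) - 0.5
+#            = log(2) + 2/8 - 0.5 ~= 0.443,
+# which matches the per-dimension expression summed above:
+#   0.5 * (log(4) - log(1) + exp(log(1) - log(4)) + (0 - 1)^2 / 4 - 1) ~= 0.443.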
+
+
+class KLCost_GaussianGaussianProcessSampled(object):
+ """ log p(x|z) + KL(q||p) terms for Gaussian posterior and Gaussian process
+ prior via sampling.
+
+ The log p(x|z) term is the reconstruction error under the model.
+ The KL term represents the penalty for passing information from the encoder
+ to the decoder.
+ To sample KL(q||p), we simply sample
+ ln q - ln p
+ by drawing samples from q and averaging.
+ """
+
+ def __init__(self, post_zs, prior_z_process):
+ """Create a lower bound in three parts, normalized reconstruction
+ cost, normalized KL divergence cost, and their sum.
+
+ Args:
+ post_zs: posterior z ~ q(z|x)
+ prior_z_process: prior AR(1) process
+ """
+ assert len(post_zs) > 1, "GP is for time, need more than 1 time step."
+ assert isinstance(prior_z_process, GaussianProcess), "Must use GP."
+
+ # L = -KL + log p(x|z), to maximize bound on likelihood
+ # -L = KL - log p(x|z), to minimize bound on NLL
+    # so 'KL cost' is positive KL divergence
+ z0_bxu = post_zs[0].sample
+ logq_bxu = post_zs[0].logp(z0_bxu)
+ logp_bxu = prior_z_process.logp_t(z0_bxu)
+ z_tm1_bxu = z0_bxu
+ for z_t in post_zs[1:]:
+ # posterior is independent in time, prior is not
+ z_t_bxu = z_t.sample
+ logq_bxu += z_t.logp(z_t_bxu)
+ logp_bxu += prior_z_process.logp_t(z_t_bxu, z_tm1_bxu)
+ z_tm1_bxu = z_t_bxu
+
+ kl_bxu = logq_bxu - logp_bxu
+ kl_b = tf.reduce_sum(kl_bxu, [1])
+ self.kl_cost_b = kl_b
+ self.kl_cost = tf.reduce_mean(kl_b)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/lfads.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/lfads.py"
new file mode 100644
index 00000000..0bbb0a3b
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/lfads.py"
@@ -0,0 +1,2170 @@
+# Copyright 2017 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# ==============================================================================
+"""
+LFADS - Latent Factor Analysis via Dynamical Systems.
+
+LFADS is an unsupervised method to decompose time series data into
+various factors, such as an initial condition, a generative
+dynamical system, control inputs to that generator, and a low
+dimensional description of the observed data, called the factors.
+Additionally, the observations have a noise model (in this case
+Poisson), so a denoised version of the observations is also created
+(e.g. underlying rates of a Poisson distribution given the observed
+event counts).
+
+The main data structure being passed around is a dataset. This is a dictionary
+of data dictionaries.
+
+DATASET: The top level dictionary is simply name (string -> dictionary).
+The nested dictionary is the DATA DICTIONARY, which has the following keys:
+ 'train_data' and 'valid_data', whose values are the corresponding training
+ and validation data with shape
+ ExTxD, E - # examples, T - # time steps, D - # dimensions in data.
+ The data dictionary also has a few more keys:
+  'train_ext_input' and 'valid_ext_input', if there are known external inputs
+ to the system being modeled, these take on dimensions:
+ ExTxI, E - # examples, T - # time steps, I = # dimensions in input.
+  'alignment_matrix_cxf' - If you are using multiple days of data, it's possible
+    that one can align the channels (see manuscript). If so, each dataset will
+ contain this matrix, which will be used for both the input adapter and the
+ output adapter for each dataset. These matrices, if provided, must be of
+ size [data_dim x factors] where data_dim is the number of neurons recorded
+ on that day, and factors is chosen and set through the '--factors' flag.
+  'alignment_bias_c' - See alignment_matrix_cxf. This bias is used as the
+    offset for the alignment transformation: the bias is *subtracted* from the
+    data, so PCA-style inits can align factors across sessions.
+
+
+  If one runs LFADS on data where the true rates are known for some trials
+  (say simulated, testing data, as in the example shipped with the paper), then
+  one can add three more fields for plotting purposes.  These are 'train_truth',
+  'valid_truth', and 'conversion_factor'.  The first two have the same
+  dimensions as 'train_data' and 'valid_data', but represent the underlying
+  rates of the observations.  Finally, if one needs to convert scale for
+  plotting the true underlying firing rates, there is the 'conversion_factor'
+  key.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import numpy as np
+import os
+import tensorflow as tf
+from distributions import LearnableDiagonalGaussian, DiagonalGaussianFromInput
+from distributions import diag_gaussian_log_likelihood
+from distributions import KLCost_GaussianGaussian, Poisson
+from distributions import LearnableAutoRegressive1Prior
+from distributions import KLCost_GaussianGaussianProcessSampled
+
+from utils import init_linear, linear, list_t_bxn_to_tensor_bxtxn, write_data
+from utils import log_sum_exp, flatten
+from plot_lfads import plot_lfads
+
+
+class GRU(object):
+ """Gated Recurrent Unit cell (cf. http://arxiv.org/abs/1406.1078).
+
+ """
+ def __init__(self, num_units, forget_bias=1.0, weight_scale=1.0,
+ clip_value=np.inf, collections=None):
+ """Create a GRU object.
+
+ Args:
+ num_units: Number of units in the GRU
+ forget_bias (optional): Hack to help learning.
+ weight_scale (optional): weights are scaled by ws/sqrt(#inputs), with
+ ws being the weight scale.
+ clip_value (optional): if the recurrent values grow above this value,
+ clip them.
+      collections (optional): List of additional collections variables should
+ belong to.
+ """
+ self._num_units = num_units
+ self._forget_bias = forget_bias
+ self._weight_scale = weight_scale
+ self._clip_value = clip_value
+ self._collections = collections
+
+ @property
+ def state_size(self):
+ return self._num_units
+
+ @property
+ def output_size(self):
+ return self._num_units
+
+ @property
+ def state_multiplier(self):
+ return 1
+
+ def output_from_state(self, state):
+ """Return the output portion of the state."""
+ return state
+
+ def __call__(self, inputs, state, scope=None):
+ """Gated recurrent unit (GRU) function.
+
+ Args:
+ inputs: A 2D batch x input_dim tensor of inputs.
+ state: The previous state from the last time step.
+ scope (optional): TF variable scope for defined GRU variables.
+
+ Returns:
+ A tuple (state, state), where state is the newly computed state at time t.
+ It is returned twice to respect an interface that works for LSTMs.
+ """
+
+ x = inputs
+ h = state
+ if inputs is not None:
+ xh = tf.concat(axis=1, values=[x, h])
+ else:
+ xh = h
+
+ with tf.variable_scope(scope or type(self).__name__): # "GRU"
+ with tf.variable_scope("Gates"): # Reset gate and update gate.
+ # We start with bias of 1.0 to not reset and not update.
+ r, u = tf.split(axis=1, num_or_size_splits=2, value=linear(xh,
+ 2 * self._num_units,
+ alpha=self._weight_scale,
+ name="xh_2_ru",
+ collections=self._collections))
+ r, u = tf.sigmoid(r), tf.sigmoid(u + self._forget_bias)
+ with tf.variable_scope("Candidate"):
+ xrh = tf.concat(axis=1, values=[x, r * h])
+ c = tf.tanh(linear(xrh, self._num_units, name="xrh_2_c",
+ collections=self._collections))
+ new_h = u * h + (1 - u) * c
+ new_h = tf.clip_by_value(new_h, -self._clip_value, self._clip_value)
+
+ return new_h, new_h
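+
+# Commented-out usage sketch (illustrative only, not exercised by this module):
+# a single GRU step on a hypothetical batch of 16 examples with 10-dimensional
+# inputs and 64 hidden units.
+#
+#   cell = GRU(num_units=64)
+#   x_bxd = tf.zeros([16, 10])
+#   h_bxn = tf.zeros([16, cell.state_size])
+#   out_bxn, new_h_bxn = cell(x_bxd, h_bxn, scope="example_gru")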
+
+
+class GenGRU(object):
+ """Gated Recurrent Unit cell (cf. http://arxiv.org/abs/1406.1078).
+
+  This version is specialized for the generator, but isn't as fast, so
+  we have two. Note this allows for l2 regularization on the recurrent
+  weights, but also implicitly rescales the inputs to larger magnitude via
+  the 1/sqrt(#inputs) scaling in the linear helper routine, if there are
+  fewer inputs than recurrent state units.
+
+ """
+ def __init__(self, num_units, forget_bias=1.0,
+ input_weight_scale=1.0, rec_weight_scale=1.0, clip_value=np.inf,
+ input_collections=None, recurrent_collections=None):
+ """Create a GRU object.
+
+ Args:
+ num_units: Number of units in the GRU
+ forget_bias (optional): Hack to help learning.
+ input_weight_scale (optional): weights are scaled ws/sqrt(#inputs), with
+ ws being the weight scale.
+ rec_weight_scale (optional): weights are scaled ws/sqrt(#inputs),
+ with ws being the weight scale.
+ clip_value (optional): if the recurrent values grow above this value,
+ clip them.
+      input_collections (optional): List of additional collections variables
+        that input->rec weights should belong to.
+      recurrent_collections (optional): List of additional collections variables
+        that rec->rec weights should belong to.
+ """
+ self._num_units = num_units
+ self._forget_bias = forget_bias
+ self._input_weight_scale = input_weight_scale
+ self._rec_weight_scale = rec_weight_scale
+ self._clip_value = clip_value
+ self._input_collections = input_collections
+ self._rec_collections = recurrent_collections
+
+ @property
+ def state_size(self):
+ return self._num_units
+
+ @property
+ def output_size(self):
+ return self._num_units
+
+ @property
+ def state_multiplier(self):
+ return 1
+
+ def output_from_state(self, state):
+ """Return the output portion of the state."""
+ return state
+
+ def __call__(self, inputs, state, scope=None):
+ """Gated recurrent unit (GRU) function.
+
+ Args:
+ inputs: A 2D batch x input_dim tensor of inputs.
+ state: The previous state from the last time step.
+ scope (optional): TF variable scope for defined GRU variables.
+
+ Returns:
+ A tuple (state, state), where state is the newly computed state at time t.
+ It is returned twice to respect an interface that works for LSTMs.
+ """
+
+ x = inputs
+ h = state
+ with tf.variable_scope(scope or type(self).__name__): # "GRU"
+ with tf.variable_scope("Gates"): # Reset gate and update gate.
+ # We start with bias of 1.0 to not reset and not update.
+ r_x = u_x = 0.0
+ if x is not None:
+ r_x, u_x = tf.split(axis=1, num_or_size_splits=2, value=linear(x,
+ 2 * self._num_units,
+ alpha=self._input_weight_scale,
+ do_bias=False,
+ name="x_2_ru",
+ normalized=False,
+ collections=self._input_collections))
+
+ r_h, u_h = tf.split(axis=1, num_or_size_splits=2, value=linear(h,
+ 2 * self._num_units,
+ do_bias=True,
+ alpha=self._rec_weight_scale,
+ name="h_2_ru",
+ collections=self._rec_collections))
+ r = r_x + r_h
+ u = u_x + u_h
+ r, u = tf.sigmoid(r), tf.sigmoid(u + self._forget_bias)
+
+ with tf.variable_scope("Candidate"):
+ c_x = 0.0
+ if x is not None:
+ c_x = linear(x, self._num_units, name="x_2_c", do_bias=False,
+ alpha=self._input_weight_scale,
+ normalized=False,
+ collections=self._input_collections)
+ c_rh = linear(r*h, self._num_units, name="rh_2_c", do_bias=True,
+ alpha=self._rec_weight_scale,
+ collections=self._rec_collections)
+ c = tf.tanh(c_x + c_rh)
+
+ new_h = u * h + (1 - u) * c
+ new_h = tf.clip_by_value(new_h, -self._clip_value, self._clip_value)
+
+ return new_h, new_h
+
+
+class LFADS(object):
+ """LFADS - Latent Factor Analysis via Dynamical Systems.
+
+ LFADS is an unsupervised method to decompose time series data into
+ various factors, such as an initial condition, a generative
+ dynamical system, inferred inputs to that generator, and a low
+ dimensional description of the observed data, called the factors.
+  Additionally, the observations have a noise model (in this case
+ Poisson), so a denoised version of the observations is also created
+ (e.g. underlying rates of a Poisson distribution given the observed
+ event counts).
+ """
+
+ def __init__(self, hps, kind="train", datasets=None):
+ """Create an LFADS model.
+
+ train - a model for training, sampling of posteriors is used
+ posterior_sample_and_average - sample from the posterior, this is used
+ for evaluating the expected value of the outputs of LFADS, given a
+ specific input, by averaging over multiple samples from the approx
+ posterior. Also used for the lower bound on the negative
+      log-likelihood using IWAE error (Importance Weighted Auto-encoder).
+ This is the denoising operation.
+ prior_sample - a model for generation - sampling from priors is used
+
+ Args:
+ hps: The dictionary of hyper parameters.
+ kind: the type of model to build (see above).
+ datasets: a dictionary of named data_dictionaries, see top of lfads.py
+ """
+ print("Building graph...")
+ all_kinds = ['train', 'posterior_sample_and_average', 'posterior_push_mean',
+ 'prior_sample']
+ assert kind in all_kinds, 'Wrong kind'
+ if hps.feedback_factors_or_rates == "rates":
+ assert len(hps.dataset_names) == 1, \
+ "Multiple datasets not supported for rate feedback."
+ num_steps = hps.num_steps
+ ic_dim = hps.ic_dim
+ co_dim = hps.co_dim
+ ext_input_dim = hps.ext_input_dim
+ cell_class = GRU
+ gen_cell_class = GenGRU
+
+ def makelambda(v): # Used with tf.case
+ return lambda: v
+
+ # Define the data placeholder, and deal with all parts of the graph
+ # that are dataset dependent.
+ self.dataName = tf.placeholder(tf.string, shape=())
+    # The batch_size will be inferred from the data, as usual.
+ # Additionally, the data_dim will be inferred as well, allowing for a
+ # single placeholder for all datasets, regardless of data dimension.
+ if hps.output_dist == 'poisson':
+ # Enforce correct dtype
+ assert np.issubdtype(
+ datasets[hps.dataset_names[0]]['train_data'].dtype, int), \
+ "Data dtype must be int for poisson output distribution"
+ data_dtype = tf.int32
+ elif hps.output_dist == 'gaussian':
+ assert np.issubdtype(
+ datasets[hps.dataset_names[0]]['train_data'].dtype, float), \
+        "Data dtype must be float for gaussian output distribution"
+ data_dtype = tf.float32
+ else:
+ assert False, "NIY"
+ self.dataset_ph = dataset_ph = tf.placeholder(data_dtype,
+ [None, num_steps, None],
+ name="data")
+ self.train_step = tf.get_variable("global_step", [], tf.int64,
+ tf.zeros_initializer(),
+ trainable=False)
+ self.hps = hps
+ ndatasets = hps.ndatasets
+ factors_dim = hps.factors_dim
+ self.preds = preds = [None] * ndatasets
+ self.fns_in_fac_Ws = fns_in_fac_Ws = [None] * ndatasets
+ self.fns_in_fatcor_bs = fns_in_fac_bs = [None] * ndatasets
+ self.fns_out_fac_Ws = fns_out_fac_Ws = [None] * ndatasets
+ self.fns_out_fac_bs = fns_out_fac_bs = [None] * ndatasets
+ self.datasetNames = dataset_names = hps.dataset_names
+ self.ext_inputs = ext_inputs = None
+
+ if len(dataset_names) == 1: # single session
+ if 'alignment_matrix_cxf' in datasets[dataset_names[0]].keys():
+ used_in_factors_dim = factors_dim
+ in_identity_if_poss = False
+ else:
+ used_in_factors_dim = hps.dataset_dims[dataset_names[0]]
+ in_identity_if_poss = True
+ else: # multisession
+ used_in_factors_dim = factors_dim
+ in_identity_if_poss = False
+
+ for d, name in enumerate(dataset_names):
+ data_dim = hps.dataset_dims[name]
+ in_mat_cxf = None
+ in_bias_1xf = None
+ align_bias_1xc = None
+
+ if datasets and 'alignment_matrix_cxf' in datasets[name].keys():
+ dataset = datasets[name]
+ if hps.do_train_readin:
+ print("Initializing trainable readin matrix with alignment matrix" \
+ " provided for dataset:", name)
+ else:
+ print("Setting non-trainable readin matrix to alignment matrix" \
+ " provided for dataset:", name)
+ in_mat_cxf = dataset['alignment_matrix_cxf'].astype(np.float32)
+ if in_mat_cxf.shape != (data_dim, factors_dim):
+ raise ValueError("""Alignment matrix must have dimensions %d x %d
+ (data_dim x factors_dim), but currently has %d x %d."""%
+ (data_dim, factors_dim, in_mat_cxf.shape[0],
+ in_mat_cxf.shape[1]))
+ if datasets and 'alignment_bias_c' in datasets[name].keys():
+ dataset = datasets[name]
+ if hps.do_train_readin:
+ print("Initializing trainable readin bias with alignment bias " \
+ "provided for dataset:", name)
+ else:
+ print("Setting non-trainable readin bias to alignment bias " \
+ "provided for dataset:", name)
+ align_bias_c = dataset['alignment_bias_c'].astype(np.float32)
+ align_bias_1xc = np.expand_dims(align_bias_c, axis=0)
+ if align_bias_1xc.shape[1] != data_dim:
+ raise ValueError("""Alignment bias must have dimensions %d
+ (data_dim), but currently has %d."""%
+                           (data_dim, align_bias_1xc.shape[1]))
+ if in_mat_cxf is not None and align_bias_1xc is not None:
+ # (data - alignment_bias) * W_in
+ # data * W_in - alignment_bias * W_in
+ # So b = -alignment_bias * W_in to accommodate PCA style offset.
+ in_bias_1xf = -np.dot(align_bias_1xc, in_mat_cxf)
+
+ if hps.do_train_readin:
+        # Add to the IO_transformations collection only if we want it to be
+        # learnable, because the IO_transformations collection will be trained
+        # when do_train_io_only.
+ collections_readin=['IO_transformations']
+ else:
+ collections_readin=None
+
+ in_fac_lin = init_linear(data_dim, used_in_factors_dim,
+ do_bias=True,
+ mat_init_value=in_mat_cxf,
+ bias_init_value=in_bias_1xf,
+ identity_if_possible=in_identity_if_poss,
+ normalized=False, name="x_2_infac_"+name,
+ collections=collections_readin,
+ trainable=hps.do_train_readin)
+ in_fac_W, in_fac_b = in_fac_lin
+ fns_in_fac_Ws[d] = makelambda(in_fac_W)
+ fns_in_fac_bs[d] = makelambda(in_fac_b)
+
+ with tf.variable_scope("glm"):
+ out_identity_if_poss = False
+ if len(dataset_names) == 1 and \
+ factors_dim == hps.dataset_dims[dataset_names[0]]:
+ out_identity_if_poss = True
+ for d, name in enumerate(dataset_names):
+ data_dim = hps.dataset_dims[name]
+ in_mat_cxf = None
+ if datasets and 'alignment_matrix_cxf' in datasets[name].keys():
+ dataset = datasets[name]
+ in_mat_cxf = dataset['alignment_matrix_cxf'].astype(np.float32)
+
+ if datasets and 'alignment_bias_c' in datasets[name].keys():
+ dataset = datasets[name]
+ align_bias_c = dataset['alignment_bias_c'].astype(np.float32)
+ align_bias_1xc = np.expand_dims(align_bias_c, axis=0)
+
+ out_mat_fxc = None
+ out_bias_1xc = None
+ if in_mat_cxf is not None:
+ out_mat_fxc = in_mat_cxf.T
+ if align_bias_1xc is not None:
+ out_bias_1xc = align_bias_1xc
+
+ if hps.output_dist == 'poisson':
+ out_fac_lin = init_linear(factors_dim, data_dim, do_bias=True,
+ mat_init_value=out_mat_fxc,
+ bias_init_value=out_bias_1xc,
+ identity_if_possible=out_identity_if_poss,
+ normalized=False,
+ name="fac_2_logrates_"+name,
+ collections=['IO_transformations'])
+ out_fac_W, out_fac_b = out_fac_lin
+
+ elif hps.output_dist == 'gaussian':
+ out_fac_lin_mean = \
+ init_linear(factors_dim, data_dim, do_bias=True,
+ mat_init_value=out_mat_fxc,
+ bias_init_value=out_bias_1xc,
+ normalized=False,
+ name="fac_2_means_"+name,
+ collections=['IO_transformations'])
+ out_fac_W_mean, out_fac_b_mean = out_fac_lin_mean
+
+ mat_init_value = np.zeros([factors_dim, data_dim]).astype(np.float32)
+ bias_init_value = np.ones([1, data_dim]).astype(np.float32)
+ out_fac_lin_logvar = \
+ init_linear(factors_dim, data_dim, do_bias=True,
+ mat_init_value=mat_init_value,
+ bias_init_value=bias_init_value,
+ normalized=False,
+ name="fac_2_logvars_"+name,
+ collections=['IO_transformations'])
+ out_fac_W_mean, out_fac_b_mean = out_fac_lin_mean
+ out_fac_W_logvar, out_fac_b_logvar = out_fac_lin_logvar
+ out_fac_W = tf.concat(
+ axis=1, values=[out_fac_W_mean, out_fac_W_logvar])
+ out_fac_b = tf.concat(
+ axis=1, values=[out_fac_b_mean, out_fac_b_logvar])
+ else:
+ assert False, "NIY"
+
+ preds[d] = tf.equal(tf.constant(name), self.dataName)
+ data_dim = hps.dataset_dims[name]
+ fns_out_fac_Ws[d] = makelambda(out_fac_W)
+ fns_out_fac_bs[d] = makelambda(out_fac_b)
+
+ pf_pairs_in_fac_Ws = zip(preds, fns_in_fac_Ws)
+ pf_pairs_in_fac_bs = zip(preds, fns_in_fac_bs)
+ pf_pairs_out_fac_Ws = zip(preds, fns_out_fac_Ws)
+ pf_pairs_out_fac_bs = zip(preds, fns_out_fac_bs)
+
+ this_in_fac_W = tf.case(pf_pairs_in_fac_Ws, exclusive=True)
+ this_in_fac_b = tf.case(pf_pairs_in_fac_bs, exclusive=True)
+ this_out_fac_W = tf.case(pf_pairs_out_fac_Ws, exclusive=True)
+ this_out_fac_b = tf.case(pf_pairs_out_fac_bs, exclusive=True)
+
+ # External inputs (not changing by dataset, by definition).
+ if hps.ext_input_dim > 0:
+ self.ext_input = tf.placeholder(tf.float32,
+ [None, num_steps, ext_input_dim],
+ name="ext_input")
+ else:
+ self.ext_input = None
+ ext_input_bxtxi = self.ext_input
+
+ self.keep_prob = keep_prob = tf.placeholder(tf.float32, [], "keep_prob")
+ self.batch_size = batch_size = int(hps.batch_size)
+ self.learning_rate = tf.Variable(float(hps.learning_rate_init),
+ trainable=False, name="learning_rate")
+ self.learning_rate_decay_op = self.learning_rate.assign(
+ self.learning_rate * hps.learning_rate_decay_factor)
+
+ # Dropout the data.
+ dataset_do_bxtxd = tf.nn.dropout(tf.to_float(dataset_ph), keep_prob)
+ if hps.ext_input_dim > 0:
+ ext_input_do_bxtxi = tf.nn.dropout(ext_input_bxtxi, keep_prob)
+ else:
+ ext_input_do_bxtxi = None
+
+ # ENCODERS
+ def encode_data(dataset_bxtxd, enc_cell, name, forward_or_reverse,
+ num_steps_to_encode):
+ """Encode data for LFADS
+ Args:
+        dataset_bxtxd - the data to encode, as a 3-tensor, with dims
+          batch x time x data dims.
+ enc_cell: encoder cell
+ name: name of encoder
+ forward_or_reverse: string, encode in forward or reverse direction
+ num_steps_to_encode: number of steps to encode, 0:num_steps_to_encode
+ Returns:
+ encoded data as a list with num_steps_to_encode items, in order
+ """
+ if forward_or_reverse == "forward":
+ dstr = "_fwd"
+ time_fwd_or_rev = range(num_steps_to_encode)
+ else:
+ dstr = "_rev"
+ time_fwd_or_rev = reversed(range(num_steps_to_encode))
+
+ with tf.variable_scope(name+"_enc"+dstr, reuse=False):
+ enc_state = tf.tile(
+ tf.Variable(tf.zeros([1, enc_cell.state_size]),
+ name=name+"_enc_t0"+dstr), tf.stack([batch_size, 1]))
+ enc_state.set_shape([None, enc_cell.state_size]) # tile loses shape
+
+ enc_outs = [None] * num_steps_to_encode
+ for i, t in enumerate(time_fwd_or_rev):
+ with tf.variable_scope(name+"_enc"+dstr, reuse=True if i > 0 else None):
+ dataset_t_bxd = dataset_bxtxd[:,t,:]
+ in_fac_t_bxf = tf.matmul(dataset_t_bxd, this_in_fac_W) + this_in_fac_b
+ in_fac_t_bxf.set_shape([None, used_in_factors_dim])
+ if ext_input_dim > 0 and not hps.inject_ext_input_to_gen:
+ ext_input_t_bxi = ext_input_do_bxtxi[:,t,:]
+ enc_input_t_bxfpe = tf.concat(
+ axis=1, values=[in_fac_t_bxf, ext_input_t_bxi])
+ else:
+ enc_input_t_bxfpe = in_fac_t_bxf
+ enc_out, enc_state = enc_cell(enc_input_t_bxfpe, enc_state)
+ enc_outs[t] = enc_out
+
+ return enc_outs
+
+ # Encode initial condition means and variances
+ # ([x_T, x_T-1, ... x_0] and [x_0, x_1, ... x_T] -> g0/c0)
+ self.ic_enc_fwd = [None] * num_steps
+ self.ic_enc_rev = [None] * num_steps
+ if ic_dim > 0:
+ enc_ic_cell = cell_class(hps.ic_enc_dim,
+ weight_scale=hps.cell_weight_scale,
+ clip_value=hps.cell_clip_value)
+ ic_enc_fwd = encode_data(dataset_do_bxtxd, enc_ic_cell,
+ "ic", "forward",
+ hps.num_steps_for_gen_ic)
+ ic_enc_rev = encode_data(dataset_do_bxtxd, enc_ic_cell,
+ "ic", "reverse",
+ hps.num_steps_for_gen_ic)
+ self.ic_enc_fwd = ic_enc_fwd
+ self.ic_enc_rev = ic_enc_rev
+
+ # Encoder control input means and variances, bi-directional encoding so:
+ # ([x_T, x_T-1, ..., x_0] and [x_0, x_1 ... x_T] -> u_t)
+ self.ci_enc_fwd = [None] * num_steps
+ self.ci_enc_rev = [None] * num_steps
+ if co_dim > 0:
+ enc_ci_cell = cell_class(hps.ci_enc_dim,
+ weight_scale=hps.cell_weight_scale,
+ clip_value=hps.cell_clip_value)
+ ci_enc_fwd = encode_data(dataset_do_bxtxd, enc_ci_cell,
+ "ci", "forward",
+ hps.num_steps)
+ if hps.do_causal_controller:
+ ci_enc_rev = None
+ else:
+ ci_enc_rev = encode_data(dataset_do_bxtxd, enc_ci_cell,
+ "ci", "reverse",
+ hps.num_steps)
+ self.ci_enc_fwd = ci_enc_fwd
+ self.ci_enc_rev = ci_enc_rev
+
+ # STOCHASTIC LATENT VARIABLES, priors and posteriors
+ # (initial conditions g0, and control inputs, u_t)
+ # Note that zs represent all the stochastic latent variables.
+ with tf.variable_scope("z", reuse=False):
+ self.prior_zs_g0 = None
+ self.posterior_zs_g0 = None
+ self.g0s_val = None
+ if ic_dim > 0:
+ self.prior_zs_g0 = \
+ LearnableDiagonalGaussian(batch_size, ic_dim, name="prior_g0",
+ mean_init=0.0,
+ var_min=hps.ic_prior_var_min,
+ var_init=hps.ic_prior_var_scale,
+ var_max=hps.ic_prior_var_max)
+ ic_enc = tf.concat(axis=1, values=[ic_enc_fwd[-1], ic_enc_rev[0]])
+ ic_enc = tf.nn.dropout(ic_enc, keep_prob)
+ self.posterior_zs_g0 = \
+ DiagonalGaussianFromInput(ic_enc, ic_dim, "ic_enc_2_post_g0",
+ var_min=hps.ic_post_var_min)
+ if kind in ["train", "posterior_sample_and_average",
+ "posterior_push_mean"]:
+ zs_g0 = self.posterior_zs_g0
+ else:
+ zs_g0 = self.prior_zs_g0
+ if kind in ["train", "posterior_sample_and_average", "prior_sample"]:
+ self.g0s_val = zs_g0.sample
+ else:
+ self.g0s_val = zs_g0.mean
+
+ # Priors for controller, 'co' for controller output
+ self.prior_zs_co = prior_zs_co = [None] * num_steps
+ self.posterior_zs_co = posterior_zs_co = [None] * num_steps
+ self.zs_co = zs_co = [None] * num_steps
+ self.prior_zs_ar_con = None
+ if co_dim > 0:
+ # Controller outputs
+ autocorrelation_taus = [hps.prior_ar_atau for x in range(hps.co_dim)]
+ noise_variances = [hps.prior_ar_nvar for x in range(hps.co_dim)]
+ self.prior_zs_ar_con = prior_zs_ar_con = \
+ LearnableAutoRegressive1Prior(batch_size, hps.co_dim,
+ autocorrelation_taus,
+ noise_variances,
+ hps.do_train_prior_ar_atau,
+ hps.do_train_prior_ar_nvar,
+ num_steps, "u_prior_ar1")
+
+ # CONTROLLER -> GENERATOR -> RATES
+ # (u(t) -> gen(t) -> factors(t) -> rates(t) -> p(x_t|z_t) )
+ self.controller_outputs = u_t = [None] * num_steps
+ self.con_ics = con_state = None
+ self.con_states = con_states = [None] * num_steps
+ self.con_outs = con_outs = [None] * num_steps
+ self.gen_inputs = gen_inputs = [None] * num_steps
+ if co_dim > 0:
+ # gen_cell_class here for l2 penalty recurrent weights
+ # didn't split the cell_weight scale here, because I doubt it matters
+ con_cell = gen_cell_class(hps.con_dim,
+ input_weight_scale=hps.cell_weight_scale,
+ rec_weight_scale=hps.cell_weight_scale,
+ clip_value=hps.cell_clip_value,
+ recurrent_collections=['l2_con_reg'])
+ with tf.variable_scope("con", reuse=False):
+ self.con_ics = tf.tile(
+ tf.Variable(tf.zeros([1, hps.con_dim*con_cell.state_multiplier]),
+ name="c0"),
+ tf.stack([batch_size, 1]))
+ self.con_ics.set_shape([None, con_cell.state_size]) # tile loses shape
+ con_states[-1] = self.con_ics
+
+ gen_cell = gen_cell_class(hps.gen_dim,
+ input_weight_scale=hps.gen_cell_input_weight_scale,
+ rec_weight_scale=hps.gen_cell_rec_weight_scale,
+ clip_value=hps.cell_clip_value,
+ recurrent_collections=['l2_gen_reg'])
+ with tf.variable_scope("gen", reuse=False):
+ if ic_dim == 0:
+ self.gen_ics = tf.tile(
+ tf.Variable(tf.zeros([1, gen_cell.state_size]), name="g0"),
+ tf.stack([batch_size, 1]))
+ else:
+ self.gen_ics = linear(self.g0s_val, gen_cell.state_size,
+ identity_if_possible=True,
+ name="g0_2_gen_ic")
+
+ self.gen_states = gen_states = [None] * num_steps
+ self.gen_outs = gen_outs = [None] * num_steps
+ gen_states[-1] = self.gen_ics
+ gen_outs[-1] = gen_cell.output_from_state(gen_states[-1])
+ self.factors = factors = [None] * num_steps
+ factors[-1] = linear(gen_outs[-1], factors_dim, do_bias=False,
+ normalized=True, name="gen_2_fac")
+
+ self.rates = rates = [None] * num_steps
+ # rates[-1] is collected to potentially feed back to controller
+ with tf.variable_scope("glm", reuse=False):
+ if hps.output_dist == 'poisson':
+ log_rates_t0 = tf.matmul(factors[-1], this_out_fac_W) + this_out_fac_b
+ log_rates_t0.set_shape([None, None])
+ rates[-1] = tf.exp(log_rates_t0) # rate
+ rates[-1].set_shape([None, hps.dataset_dims[hps.dataset_names[0]]])
+ elif hps.output_dist == 'gaussian':
+ mean_n_logvars = tf.matmul(factors[-1],this_out_fac_W) + this_out_fac_b
+ mean_n_logvars.set_shape([None, None])
+ means_t_bxd, logvars_t_bxd = tf.split(axis=1, num_or_size_splits=2,
+ value=mean_n_logvars)
+ rates[-1] = means_t_bxd
+ else:
+ assert False, "NIY"
+
+ # We support multiple output distributions, for example Poisson, and also
+ # Gaussian. In these two cases respectively, there are one and two
+ # parameters (rates vs. mean and variance). So the output_dist_params
+    # tensor will have variable sizes via tf.concat and tf.split, along the 1st
+ # dimension. So in the case of gaussian, for example, it'll be
+ # batch x (D+D), where each D dims is the mean, and then variances,
+ # respectively. For a distribution with 3 parameters, it would be
+ # batch x (D+D+D).
+ self.output_dist_params = dist_params = [None] * num_steps
+ self.log_p_xgz_b = log_p_xgz_b = 0.0 # log P(x|z)
+ for t in range(num_steps):
+ # Controller
+ if co_dim > 0:
+ # Build inputs for controller
+ tlag = t - hps.controller_input_lag
+ if tlag < 0:
+ con_in_f_t = tf.zeros_like(ci_enc_fwd[0])
+ else:
+ con_in_f_t = ci_enc_fwd[tlag]
+ if hps.do_causal_controller:
+          # If the controller is causal (w.r.t. the data generation process), then it
+ # cannot see future data. Thus, excluding ci_enc_rev[t] is obvious.
+ # Less obvious is the need to exclude factors[t-1]. This arises
+ # because information flows from g0 through factors to the controller
+ # input. The g0 encoding is backwards, so we must necessarily exclude
+ # the factors in order to keep the controller input purely from a
+ # forward encoding (however unlikely it is that
+ # g0->factors->controller channel might actually be used in this way).
+ con_in_list_t = [con_in_f_t]
+ else:
+ tlag_rev = t + hps.controller_input_lag
+ if tlag_rev >= num_steps:
+ # better than zeros
+ con_in_r_t = tf.zeros_like(ci_enc_rev[0])
+ else:
+ con_in_r_t = ci_enc_rev[tlag_rev]
+ con_in_list_t = [con_in_f_t, con_in_r_t]
+
+ if hps.do_feed_factors_to_controller:
+ if hps.feedback_factors_or_rates == "factors":
+ con_in_list_t.append(factors[t-1])
+ elif hps.feedback_factors_or_rates == "rates":
+ con_in_list_t.append(rates[t-1])
+ else:
+ assert False, "NIY"
+
+ con_in_t = tf.concat(axis=1, values=con_in_list_t)
+ con_in_t = tf.nn.dropout(con_in_t, keep_prob)
+ with tf.variable_scope("con", reuse=True if t > 0 else None):
+ con_outs[t], con_states[t] = con_cell(con_in_t, con_states[t-1])
+ posterior_zs_co[t] = \
+ DiagonalGaussianFromInput(con_outs[t], co_dim,
+ name="con_to_post_co")
+ if kind == "train":
+ u_t[t] = posterior_zs_co[t].sample
+ elif kind == "posterior_sample_and_average":
+ u_t[t] = posterior_zs_co[t].sample
+ elif kind == "posterior_push_mean":
+ u_t[t] = posterior_zs_co[t].mean
+ else:
+ u_t[t] = prior_zs_ar_con.samples_t[t]
+
+ # Inputs to the generator (controller output + external input)
+ if ext_input_dim > 0 and hps.inject_ext_input_to_gen:
+ ext_input_t_bxi = ext_input_do_bxtxi[:,t,:]
+ if co_dim > 0:
+ gen_inputs[t] = tf.concat(axis=1, values=[u_t[t], ext_input_t_bxi])
+ else:
+ gen_inputs[t] = ext_input_t_bxi
+ else:
+ gen_inputs[t] = u_t[t]
+
+ # Generator
+ data_t_bxd = dataset_ph[:,t,:]
+ with tf.variable_scope("gen", reuse=True if t > 0 else None):
+ gen_outs[t], gen_states[t] = gen_cell(gen_inputs[t], gen_states[t-1])
+ gen_outs[t] = tf.nn.dropout(gen_outs[t], keep_prob)
+ with tf.variable_scope("gen", reuse=True): # ic defined it above
+ factors[t] = linear(gen_outs[t], factors_dim, do_bias=False,
+ normalized=True, name="gen_2_fac")
+ with tf.variable_scope("glm", reuse=True if t > 0 else None):
+ if hps.output_dist == 'poisson':
+ log_rates_t = tf.matmul(factors[t], this_out_fac_W) + this_out_fac_b
+ log_rates_t.set_shape([None, None])
+ rates[t] = dist_params[t] = tf.exp(log_rates_t) # rates feed back
+ rates[t].set_shape([None, hps.dataset_dims[hps.dataset_names[0]]])
+ loglikelihood_t = Poisson(log_rates_t).logp(data_t_bxd)
+
+ elif hps.output_dist == 'gaussian':
+ mean_n_logvars = tf.matmul(factors[t],this_out_fac_W) + this_out_fac_b
+ mean_n_logvars.set_shape([None, None])
+ means_t_bxd, logvars_t_bxd = tf.split(axis=1, num_or_size_splits=2,
+ value=mean_n_logvars)
+ rates[t] = means_t_bxd # rates feed back to controller
+ dist_params[t] = tf.concat(
+ axis=1, values=[means_t_bxd, tf.exp(logvars_t_bxd)])
+ loglikelihood_t = \
+ diag_gaussian_log_likelihood(data_t_bxd,
+ means_t_bxd, logvars_t_bxd)
+ else:
+ assert False, "NIY"
+
+ log_p_xgz_b += tf.reduce_sum(loglikelihood_t, [1])
+
+ # Correlation of inferred inputs cost.
+ self.corr_cost = tf.constant(0.0)
+ if hps.co_mean_corr_scale > 0.0:
+ all_sum_corr = []
+ for i in range(hps.co_dim):
+ for j in range(i+1, hps.co_dim):
+ sum_corr_ij = tf.constant(0.0)
+ for t in range(num_steps):
+ u_mean_t = posterior_zs_co[t].mean
+ sum_corr_ij += u_mean_t[:,i]*u_mean_t[:,j]
+ all_sum_corr.append(0.5 * tf.square(sum_corr_ij))
+ self.corr_cost = tf.reduce_mean(all_sum_corr) # div by batch and by n*(n-1)/2 pairs
+
+ # Variational Lower Bound on posterior, p(z|x), plus reconstruction cost.
+ # KL and reconstruction costs are normalized only by batch size, not by
+ # dimension, or by time steps.
+ kl_cost_g0_b = tf.zeros_like(batch_size, dtype=tf.float32)
+ kl_cost_co_b = tf.zeros_like(batch_size, dtype=tf.float32)
+ self.kl_cost = tf.constant(0.0) # VAE KL cost
+ self.recon_cost = tf.constant(0.0) # VAE reconstruction cost
+ self.nll_bound_vae = tf.constant(0.0)
+ self.nll_bound_iwae = tf.constant(0.0) # for eval with IWAE cost.
+ if kind in ["train", "posterior_sample_and_average", "posterior_push_mean"]:
+ kl_cost_g0_b = 0.0
+ kl_cost_co_b = 0.0
+ if ic_dim > 0:
+ g0_priors = [self.prior_zs_g0]
+ g0_posts = [self.posterior_zs_g0]
+ kl_cost_g0_b = KLCost_GaussianGaussian(g0_posts, g0_priors).kl_cost_b
+ kl_cost_g0_b = hps.kl_ic_weight * kl_cost_g0_b
+ if co_dim > 0:
+ kl_cost_co_b = \
+ KLCost_GaussianGaussianProcessSampled(
+ posterior_zs_co, prior_zs_ar_con).kl_cost_b
+ kl_cost_co_b = hps.kl_co_weight * kl_cost_co_b
+
+ # L = -KL + log p(x|z), to maximize bound on likelihood
+ # -L = KL - log p(x|z), to minimize bound on NLL
+ # so 'reconstruction cost' is negative log likelihood
+ self.recon_cost = - tf.reduce_mean(log_p_xgz_b)
+ self.kl_cost = tf.reduce_mean(kl_cost_g0_b + kl_cost_co_b)
+
+ lb_on_ll_b = log_p_xgz_b - kl_cost_g0_b - kl_cost_co_b
+
+ # VAE error averages outside the log
+ self.nll_bound_vae = -tf.reduce_mean(lb_on_ll_b)
+
+ # IWAE error averages inside the log
+ k = tf.cast(tf.shape(log_p_xgz_b)[0], tf.float32)
+ iwae_lb_on_ll = -tf.log(k) + log_sum_exp(lb_on_ll_b)
+ self.nll_bound_iwae = -iwae_lb_on_ll
+
+ # L2 regularization on the generator, normalized by number of parameters.
+ self.l2_cost = tf.constant(0.0)
+ if self.hps.l2_gen_scale > 0.0 or self.hps.l2_con_scale > 0.0:
+ l2_costs = []
+ l2_numels = []
+ l2_reg_var_lists = [tf.get_collection('l2_gen_reg'),
+ tf.get_collection('l2_con_reg')]
+ l2_reg_scales = [self.hps.l2_gen_scale, self.hps.l2_con_scale]
+ for l2_reg_vars, l2_scale in zip(l2_reg_var_lists, l2_reg_scales):
+ for v in l2_reg_vars:
+ numel = tf.reduce_prod(tf.concat(axis=0, values=tf.shape(v)))
+ numel_f = tf.cast(numel, tf.float32)
+ l2_numels.append(numel_f)
+ v_l2 = tf.reduce_sum(v*v)
+ l2_costs.append(0.5 * l2_scale * v_l2)
+ self.l2_cost = tf.add_n(l2_costs) / tf.add_n(l2_numels)
+
+ # Compute the cost for training, part of the graph regardless.
+ # The KL cost can be problematic at the beginning of optimization,
+    # so we allow a gradual, linear ramp-up of the KL weight from 0
+    # to 1 over hps.kl_increase_steps steps (and similarly for the L2 weight).
+ self.kl_decay_step = tf.maximum(self.train_step - hps.kl_start_step, 0)
+ self.l2_decay_step = tf.maximum(self.train_step - hps.l2_start_step, 0)
+ kl_decay_step_f = tf.cast(self.kl_decay_step, tf.float32)
+ l2_decay_step_f = tf.cast(self.l2_decay_step, tf.float32)
+ kl_increase_steps_f = tf.cast(hps.kl_increase_steps, tf.float32)
+ l2_increase_steps_f = tf.cast(hps.l2_increase_steps, tf.float32)
+ self.kl_weight = kl_weight = \
+ tf.minimum(kl_decay_step_f / kl_increase_steps_f, 1.0)
+ self.l2_weight = l2_weight = \
+ tf.minimum(l2_decay_step_f / l2_increase_steps_f, 1.0)
+
+ self.timed_kl_cost = kl_weight * self.kl_cost
+ self.timed_l2_cost = l2_weight * self.l2_cost
+ self.weight_corr_cost = hps.co_mean_corr_scale * self.corr_cost
+ self.cost = self.recon_cost + self.timed_kl_cost + \
+ self.timed_l2_cost + self.weight_corr_cost
+
+ if kind != "train":
+ # save every so often
+ self.seso_saver = tf.train.Saver(tf.global_variables(),
+ max_to_keep=hps.max_ckpt_to_keep)
+ # lowest validation error
+ self.lve_saver = tf.train.Saver(tf.global_variables(),
+ max_to_keep=hps.max_ckpt_to_keep_lve)
+
+ return
+
+ # OPTIMIZATION
+ # train the io matrices only
+ if self.hps.do_train_io_only:
+ self.train_vars = tvars = \
+ tf.get_collection('IO_transformations',
+ scope=tf.get_variable_scope().name)
+ # train the encoder only
+ elif self.hps.do_train_encoder_only:
+ tvars1 = \
+ tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
+ scope='LFADS/ic_enc_*')
+ tvars2 = \
+ tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
+ scope='LFADS/z/ic_enc_*')
+
+ self.train_vars = tvars = tvars1 + tvars2
+ # train all variables
+ else:
+ self.train_vars = tvars = \
+ tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
+ scope=tf.get_variable_scope().name)
+ print("done.")
+ print("Model Variables (to be optimized): ")
+ total_params = 0
+ for i in range(len(tvars)):
+ shape = tvars[i].get_shape().as_list()
+ print(" ", i, tvars[i].name, shape)
+ total_params += np.prod(shape)
+ print("Total model parameters: ", total_params)
+
+ grads = tf.gradients(self.cost, tvars)
+ grads, grad_global_norm = tf.clip_by_global_norm(grads, hps.max_grad_norm)
+ opt = tf.train.AdamOptimizer(self.learning_rate, beta1=0.9, beta2=0.999,
+ epsilon=1e-01)
+ self.grads = grads
+ self.grad_global_norm = grad_global_norm
+ self.train_op = opt.apply_gradients(
+ zip(grads, tvars), global_step=self.train_step)
+
+ self.seso_saver = tf.train.Saver(tf.global_variables(),
+ max_to_keep=hps.max_ckpt_to_keep)
+
+ # lowest validation error
+ self.lve_saver = tf.train.Saver(tf.global_variables(),
+                                     max_to_keep=hps.max_ckpt_to_keep_lve)
+
+ # SUMMARIES, used only during training.
+ # example summary
+ self.example_image = tf.placeholder(tf.float32, shape=[1,None,None,3],
+ name='image_tensor')
+ self.example_summ = tf.summary.image("LFADS example", self.example_image,
+ collections=["example_summaries"])
+
+ # general training summaries
+ self.lr_summ = tf.summary.scalar("Learning rate", self.learning_rate)
+ self.kl_weight_summ = tf.summary.scalar("KL weight", self.kl_weight)
+ self.l2_weight_summ = tf.summary.scalar("L2 weight", self.l2_weight)
+ self.corr_cost_summ = tf.summary.scalar("Corr cost", self.weight_corr_cost)
+ self.grad_global_norm_summ = tf.summary.scalar("Gradient global norm",
+ self.grad_global_norm)
+ if hps.co_dim > 0:
+ self.atau_summ = [None] * hps.co_dim
+ self.pvar_summ = [None] * hps.co_dim
+ for c in range(hps.co_dim):
+ self.atau_summ[c] = \
+ tf.summary.scalar("AR Autocorrelation taus " + str(c),
+ tf.exp(self.prior_zs_ar_con.logataus_1xu[0,c]))
+ self.pvar_summ[c] = \
+ tf.summary.scalar("AR Variances " + str(c),
+ tf.exp(self.prior_zs_ar_con.logpvars_1xu[0,c]))
+
+ # cost summaries, separated into different collections for
+ # training vs validation. We make placeholders for these, because
+ # even though the graph computes these costs on a per-batch basis,
+ # we want to report the more reliable metric of per-epoch cost.
+ kl_cost_ph = tf.placeholder(tf.float32, shape=[], name='kl_cost_ph')
+ self.kl_t_cost_summ = tf.summary.scalar("KL cost (train)", kl_cost_ph,
+ collections=["train_summaries"])
+ self.kl_v_cost_summ = tf.summary.scalar("KL cost (valid)", kl_cost_ph,
+ collections=["valid_summaries"])
+ l2_cost_ph = tf.placeholder(tf.float32, shape=[], name='l2_cost_ph')
+ self.l2_cost_summ = tf.summary.scalar("L2 cost", l2_cost_ph,
+ collections=["train_summaries"])
+
+ recon_cost_ph = tf.placeholder(tf.float32, shape=[], name='recon_cost_ph')
+ self.recon_t_cost_summ = tf.summary.scalar("Reconstruction cost (train)",
+ recon_cost_ph,
+ collections=["train_summaries"])
+ self.recon_v_cost_summ = tf.summary.scalar("Reconstruction cost (valid)",
+ recon_cost_ph,
+ collections=["valid_summaries"])
+
+ total_cost_ph = tf.placeholder(tf.float32, shape=[], name='total_cost_ph')
+ self.cost_t_summ = tf.summary.scalar("Total cost (train)", total_cost_ph,
+ collections=["train_summaries"])
+ self.cost_v_summ = tf.summary.scalar("Total cost (valid)", total_cost_ph,
+ collections=["valid_summaries"])
+
+ self.kl_cost_ph = kl_cost_ph
+ self.l2_cost_ph = l2_cost_ph
+ self.recon_cost_ph = recon_cost_ph
+ self.total_cost_ph = total_cost_ph
+
+ # Merged summaries, for easy coding later.
+ self.merged_examples = tf.summary.merge_all(key="example_summaries")
+ self.merged_generic = tf.summary.merge_all() # default key is 'summaries'
+ self.merged_train = tf.summary.merge_all(key="train_summaries")
+ self.merged_valid = tf.summary.merge_all(key="valid_summaries")
+
+ session = tf.get_default_session()
+ self.logfile = os.path.join(hps.lfads_save_dir, "lfads_log")
+ self.writer = tf.summary.FileWriter(self.logfile)
+
+ def build_feed_dict(self, train_name, data_bxtxd, ext_input_bxtxi=None,
+ keep_prob=None):
+ """Build the feed dictionary, handles cases where there is no value defined.
+
+ Args:
+ train_name: The key into the datasets, to set the tf.case statement for
+ the proper readin / readout matrices.
+ data_bxtxd: The data tensor
+ ext_input_bxtxi (optional): The external input tensor
+ keep_prob: The drop out keep probability.
+
+ Returns:
+ The feed dictionary with TF tensors as keys and data as values, for use
+ with tf.Session.run()
+
+ """
+ feed_dict = {}
+ B, T, _ = data_bxtxd.shape
+ feed_dict[self.dataName] = train_name
+ feed_dict[self.dataset_ph] = data_bxtxd
+
+ if self.ext_input is not None and ext_input_bxtxi is not None:
+ feed_dict[self.ext_input] = ext_input_bxtxi
+
+ if keep_prob is None:
+ feed_dict[self.keep_prob] = self.hps.keep_prob
+ else:
+ feed_dict[self.keep_prob] = keep_prob
+
+ return feed_dict
+
+ @staticmethod
+ def get_batch(data_extxd, ext_input_extxi=None, batch_size=None,
+ example_idxs=None):
+ """Get a batch of data, either randomly chosen, or specified directly.
+
+ Args:
+ data_extxd: The data to model, numpy tensors with shape:
+ # examples x # time steps x # dimensions
+ ext_input_extxi (optional): The external inputs, numpy tensor with shape:
+ # examples x # time steps x # external input dimensions
+ batch_size: The size of the batch to return
+ example_idxs (optional): The example indices used to select examples.
+
+ Returns:
+ A tuple with two parts:
+ 1. Batched data numpy tensor with shape:
+ batch_size x # time steps x # dimensions
+ 2. Batched external input numpy tensor with shape:
+ batch_size x # time steps x # external input dims
+ """
+ assert batch_size is not None or example_idxs is not None, "Problems"
+ E, T, D = data_extxd.shape
+ if example_idxs is None:
+ example_idxs = np.random.choice(E, batch_size)
+
+ ext_input_bxtxi = None
+ if ext_input_extxi is not None:
+ ext_input_bxtxi = ext_input_extxi[example_idxs,:,:]
+
+ return data_extxd[example_idxs,:,:], ext_input_bxtxi
+
+ @staticmethod
+ def example_idxs_mod_batch_size(nexamples, batch_size):
+ """Given a number of examples, E, and a batch_size, B, generate indices
+ [0, 1, 2, ... B-1;
+ [B, B+1, ... 2*B-1;
+ ...
+ ]
+    returning those indices as a 2-dim tensor shaped like E/B x B.  Note that
+    shape is only correct if E % B == 0. If not, then an extra row is generated
+    so that the remainder of examples is included. The extra slots in that row
+    are filled with randomly chosen examples (see
+    randomize_example_idxs_mod_batch_size for fully randomized behavior).
+
+ Args:
+ nexamples: The number of examples to batch up.
+ batch_size: The size of the batch.
+ Returns:
+ 2-dim tensor as described above.
+ """
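+    # Worked example (hypothetical numbers): nexamples=5, batch_size=2 gives
+    # bmrem=1, so one randomly chosen index r is appended and the result is the
+    # 3 x 2 array [[0, 1], [2, 3], [4, r]], returned together with bmrem.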
+ bmrem = batch_size - (nexamples % batch_size)
+ bmrem_examples = []
+ if bmrem < batch_size:
+ #bmrem_examples = np.zeros(bmrem, dtype=np.int32)
+ ridxs = np.random.permutation(nexamples)[0:bmrem].astype(np.int32)
+ bmrem_examples = np.sort(ridxs)
+    example_idxs = list(range(nexamples)) + list(bmrem_examples)
+ example_idxs_e_x_edivb = np.reshape(example_idxs, [-1, batch_size])
+ return example_idxs_e_x_edivb, bmrem
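+    # Worked example: nexamples=10, batch_size=4 gives bmrem=2, so two extra
+    # indices are drawn at random (say [3, 8]) and appended, and the 12 indices
+    # are reshaped to a 3 x 4 array such as [[0,1,2,3],[4,5,6,7],[8,9,3,8]];
+    # bmrem=2 is also returned so callers can account for duplicated trials.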
+
+ @staticmethod
+ def randomize_example_idxs_mod_batch_size(nexamples, batch_size):
+    """Indices 0:nexamples, randomized, in 2D form of
+    shape = (nexamples / batch_size) x batch_size. The remainder
+    is managed by drawing randomly from 0:nexamples.
+
+ Args:
+ nexamples: number of examples to randomize
+ batch_size: number of elements in batch
+
+ Returns:
+      The randomized, properly shaped indices.
+ """
+    assert nexamples > batch_size, "nexamples must be greater than batch_size"
+ bmrem = batch_size - nexamples % batch_size
+ bmrem_examples = []
+ if bmrem < batch_size:
+ bmrem_examples = np.random.choice(range(nexamples),
+ size=bmrem, replace=False)
+    example_idxs = list(range(nexamples)) + list(bmrem_examples)
+ mixed_example_idxs = np.random.permutation(example_idxs)
+ example_idxs_e_x_edivb = np.reshape(mixed_example_idxs, [-1, batch_size])
+ return example_idxs_e_x_edivb, bmrem
+
+ def shuffle_spikes_in_time(self, data_bxtxd):
+ """Shuffle the spikes in the temporal dimension. This is useful to
+ help the LFADS system avoid overfitting to individual spikes or fast
+ oscillations found in the data that are irrelevant to behavior. A
+ pure 'tabula rasa' approach would avoid this, but LFADS is sensitive
+ enough to pick up dynamics that you may not want.
+
+ Args:
+ data_bxtxd: numpy array of spike count data to be shuffled.
+ Returns:
+ S_bxtxd, a numpy array with the same dimensions and contents as
+ data_bxtxd, but shuffled appropriately.
+
+ """
+
+ B, T, N = data_bxtxd.shape
+ w = self.hps.temporal_spike_jitter_width
+
+ if w == 0:
+ return data_bxtxd
+
+ max_counts = np.max(data_bxtxd)
+ S_bxtxd = np.zeros([B,T,N])
+
+    # Intuitively, shuffle spike occurrences (0 or 1), but since we have
+    # counts, do it over and over again, up to the max count.
+ for mc in range(1,max_counts+1):
+ idxs = np.nonzero(data_bxtxd >= mc)
+
+ data_ones = np.zeros_like(data_bxtxd)
+ data_ones[data_bxtxd >= mc] = 1
+
+ nfound = len(idxs[0])
+ shuffles_incrs_in_time = np.random.randint(-w, w, size=nfound)
+
+ shuffle_tidxs = idxs[1].copy()
+ shuffle_tidxs += shuffles_incrs_in_time
+
+ # Reflect on the boundaries to not lose mass.
+ shuffle_tidxs[shuffle_tidxs < 0] = -shuffle_tidxs[shuffle_tidxs < 0]
+ shuffle_tidxs[shuffle_tidxs > T-1] = \
+ (T-1)-(shuffle_tidxs[shuffle_tidxs > T-1] -(T-1))
+
+ for iii in zip(idxs[0], shuffle_tidxs, idxs[2]):
+ S_bxtxd[iii] += 1
+
+ return S_bxtxd
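+    # Example of the boundary reflection above: with T=100 and jitter width
+    # w=4, a spike at t=1 shifted by -3 to t=-2 is reflected to t=2, and a
+    # spike at t=98 shifted to t=101 is reflected to t=97, so no spike counts
+    # fall off the edges.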
+
+ def shuffle_and_flatten_datasets(self, datasets, kind='train'):
+ """Since LFADS supports multiple datasets in the same dynamical model,
+ we have to be careful to use all the data in a single training epoch. But
+    since the datasets may have different data dimensionality, we cannot batch
+ examples from data dictionaries together. Instead, we generate random
+ batches within each data dictionary, and then randomize these batches
+ while holding onto the dataname, so that when it's time to feed
+ the graph, the correct in/out matrices can be selected, per batch.
+
+ Args:
+ datasets: A dict of data dicts. The dataset dict is simply a
+ name(string)-> data dictionary mapping (See top of lfads.py).
+ kind: 'train' or 'valid'
+
+ Returns:
+ A flat list, in which each element is a pair ('name', indices).
+ """
+ batch_size = self.hps.batch_size
+ ndatasets = len(datasets)
+ random_example_idxs = {}
+ epoch_idxs = {}
+ all_name_example_idx_pairs = []
+ kind_data = kind + '_data'
+ for name, data_dict in datasets.items():
+ nexamples, ntime, data_dim = data_dict[kind_data].shape
+ epoch_idxs[name] = 0
+ random_example_idxs, _ = \
+ self.randomize_example_idxs_mod_batch_size(nexamples, batch_size)
+
+ epoch_size = random_example_idxs.shape[0]
+ names = [name] * epoch_size
+ all_name_example_idx_pairs += zip(names, random_example_idxs)
+
+ np.random.shuffle(all_name_example_idx_pairs) # shuffle in place
+
+ return all_name_example_idx_pairs
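+    # Example of the returned structure (hypothetical dataset names): two
+    # datasets 'day1' and 'day2' contributing 3 and 2 batches per epoch give a
+    # shuffled list such as
+    #   [('day2', idxs_b), ('day1', idxs_a), ('day1', idxs_c), ...]
+    # where each idxs_* is a length-batch_size index array into that dataset.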
+
+ def train_epoch(self, datasets, batch_size=None, do_save_ckpt=True):
+ """Train the model through the entire dataset once.
+
+ Args:
+ datasets: A dict of data dicts. The dataset dict is simply a
+ name(string)-> data dictionary mapping (See top of lfads.py).
+ batch_size (optional): The batch_size to use
+ do_save_ckpt (optional): Should the routine save a checkpoint on this
+ training epoch?
+
+ Returns:
+ A tuple with 6 float values:
+ (total cost of the epoch, epoch reconstruction cost,
+ epoch kl cost, KL weight used this training epoch,
+ total l2 cost on generator, and the corresponding weight).
+ """
+ ops_to_eval = [self.cost, self.recon_cost,
+ self.kl_cost, self.kl_weight,
+ self.l2_cost, self.l2_weight,
+ self.train_op]
+ collected_op_values = self.run_epoch(datasets, ops_to_eval, kind="train")
+
+ total_cost = total_recon_cost = total_kl_cost = 0.0
+ # normalizing by batch done in distributions.py
+ epoch_size = len(collected_op_values)
+ for op_values in collected_op_values:
+ total_cost += op_values[0]
+ total_recon_cost += op_values[1]
+ total_kl_cost += op_values[2]
+
+ kl_weight = collected_op_values[-1][3]
+ l2_cost = collected_op_values[-1][4]
+ l2_weight = collected_op_values[-1][5]
+
+ epoch_total_cost = total_cost / epoch_size
+ epoch_recon_cost = total_recon_cost / epoch_size
+ epoch_kl_cost = total_kl_cost / epoch_size
+
+ if do_save_ckpt:
+ session = tf.get_default_session()
+ checkpoint_path = os.path.join(self.hps.lfads_save_dir,
+ self.hps.checkpoint_name + '.ckpt')
+ self.seso_saver.save(session, checkpoint_path,
+ global_step=self.train_step)
+
+ return epoch_total_cost, epoch_recon_cost, epoch_kl_cost, \
+ kl_weight, l2_cost, l2_weight
+
+
+ def run_epoch(self, datasets, ops_to_eval, kind="train", batch_size=None,
+ do_collect=True, keep_prob=None):
+ """Run the model through the entire dataset once.
+
+ Args:
+ datasets: A dict of data dicts. The dataset dict is simply a
+ name(string)-> data dictionary mapping (See top of lfads.py).
+ ops_to_eval: A list of tensorflow operations that will be evaluated in
+ the tf.session.run() call.
+ batch_size (optional): The batch_size to use
+ do_collect (optional): Should the routine collect all session.run
+ output as a list, and return it?
+ keep_prob (optional): The dropout keep probability.
+
+ Returns:
+ A list of lists, the internal list is the return for the ops for each
+ session.run() call. The outer list collects over the epoch.
+ """
+ hps = self.hps
+ all_name_example_idx_pairs = \
+ self.shuffle_and_flatten_datasets(datasets, kind)
+
+ kind_data = kind + '_data'
+ kind_ext_input = kind + '_ext_input'
+
+ total_cost = total_recon_cost = total_kl_cost = 0.0
+ session = tf.get_default_session()
+ epoch_size = len(all_name_example_idx_pairs)
+ evaled_ops_list = []
+ for name, example_idxs in all_name_example_idx_pairs:
+ data_dict = datasets[name]
+ data_extxd = data_dict[kind_data]
+ if hps.output_dist == 'poisson' and hps.temporal_spike_jitter_width > 0:
+ data_extxd = self.shuffle_spikes_in_time(data_extxd)
+
+ ext_input_extxi = data_dict[kind_ext_input]
+ data_bxtxd, ext_input_bxtxi = self.get_batch(data_extxd, ext_input_extxi,
+ example_idxs=example_idxs)
+
+ feed_dict = self.build_feed_dict(name, data_bxtxd, ext_input_bxtxi,
+ keep_prob=keep_prob)
+ evaled_ops_np = session.run(ops_to_eval, feed_dict=feed_dict)
+ if do_collect:
+ evaled_ops_list.append(evaled_ops_np)
+
+ return evaled_ops_list
+
+ def summarize_all(self, datasets, summary_values):
+ """Plot and summarize stuff in tensorboard.
+
+    Note that everything done in this function, other than the summary_values
+    that are passed in, is computed on a single, randomly selected dataset.
+
+    Args:
+      datasets: The dictionary of datasets used in the study.
+ summary_values: These summary values are created from the training loop,
+ and so summarize the entire set of datasets.
+ """
+ hps = self.hps
+ tr_kl_cost = summary_values['tr_kl_cost']
+ tr_recon_cost = summary_values['tr_recon_cost']
+ tr_total_cost = summary_values['tr_total_cost']
+ kl_weight = summary_values['kl_weight']
+ l2_weight = summary_values['l2_weight']
+ l2_cost = summary_values['l2_cost']
+ has_any_valid_set = summary_values['has_any_valid_set']
+ i = summary_values['nepochs']
+
+ session = tf.get_default_session()
+ train_summ, train_step = session.run([self.merged_train,
+ self.train_step],
+ feed_dict={self.l2_cost_ph:l2_cost,
+ self.kl_cost_ph:tr_kl_cost,
+ self.recon_cost_ph:tr_recon_cost,
+ self.total_cost_ph:tr_total_cost})
+ self.writer.add_summary(train_summ, train_step)
+ if has_any_valid_set:
+ ev_kl_cost = summary_values['ev_kl_cost']
+ ev_recon_cost = summary_values['ev_recon_cost']
+ ev_total_cost = summary_values['ev_total_cost']
+ eval_summ = session.run(self.merged_valid,
+ feed_dict={self.kl_cost_ph:ev_kl_cost,
+ self.recon_cost_ph:ev_recon_cost,
+ self.total_cost_ph:ev_total_cost})
+ self.writer.add_summary(eval_summ, train_step)
+ print("Epoch:%d, step:%d (TRAIN, VALID): total: %.2f, %.2f\
+ recon: %.2f, %.2f, kl: %.2f, %.2f, l2: %.5f,\
+ kl weight: %.2f, l2 weight: %.2f" % \
+ (i, train_step, tr_total_cost, ev_total_cost,
+ tr_recon_cost, ev_recon_cost, tr_kl_cost, ev_kl_cost,
+ l2_cost, kl_weight, l2_weight))
+
+ csv_outstr = "epoch,%d, step,%d, total,%.2f,%.2f, \
+ recon,%.2f,%.2f, kl,%.2f,%.2f, l2,%.5f, \
+ klweight,%.2f, l2weight,%.2f\n"% \
+ (i, train_step, tr_total_cost, ev_total_cost,
+ tr_recon_cost, ev_recon_cost, tr_kl_cost, ev_kl_cost,
+ l2_cost, kl_weight, l2_weight)
+
+ else:
+ print("Epoch:%d, step:%d TRAIN: total: %.2f recon: %.2f, kl: %.2f,\
+ l2: %.5f, kl weight: %.2f, l2 weight: %.2f" % \
+ (i, train_step, tr_total_cost, tr_recon_cost, tr_kl_cost,
+ l2_cost, kl_weight, l2_weight))
+ csv_outstr = "epoch,%d, step,%d, total,%.2f, recon,%.2f, kl,%.2f, \
+ l2,%.5f, klweight,%.2f, l2weight,%.2f\n"% \
+ (i, train_step, tr_total_cost, tr_recon_cost,
+ tr_kl_cost, l2_cost, kl_weight, l2_weight)
+
+ if self.hps.csv_log:
+ csv_file = os.path.join(self.hps.lfads_save_dir, self.hps.csv_log+'.csv')
+ with open(csv_file, "a") as myfile:
+ myfile.write(csv_outstr)
+
+
+ def plot_single_example(self, datasets):
+    """Plot an image relating to a randomly chosen, specific example. We use
+    posterior sampling and averaging: take one example, fill a whole batch
+    with that example, sample from the posterior, and then average the
+    resulting quantities.
+
+ """
+ hps = self.hps
+    all_data_names = list(datasets.keys())
+ data_name = np.random.permutation(all_data_names)[0]
+ data_dict = datasets[data_name]
+    has_valid_set = data_dict['valid_data'] is not None
+    cf = 1.0  # conversion factor, used only for plotting
+
+ # posterior sample and average here
+ E, _, _ = data_dict['train_data'].shape
+ eidx = np.random.choice(E)
+ example_idxs = eidx * np.ones(hps.batch_size, dtype=np.int32)
+
+ train_data_bxtxd, train_ext_input_bxtxi = \
+ self.get_batch(data_dict['train_data'], data_dict['train_ext_input'],
+ example_idxs=example_idxs)
+
+ truth_train_data_bxtxd = None
+ if 'train_truth' in data_dict and data_dict['train_truth'] is not None:
+ truth_train_data_bxtxd, _ = self.get_batch(data_dict['train_truth'],
+ example_idxs=example_idxs)
+ cf = data_dict['conversion_factor']
+
+ # plotter does averaging
+ train_model_values = self.eval_model_runs_batch(data_name,
+ train_data_bxtxd,
+ train_ext_input_bxtxi,
+ do_average_batch=False)
+
+ train_step = train_model_values['train_steps']
+ feed_dict = self.build_feed_dict(data_name, train_data_bxtxd,
+ train_ext_input_bxtxi, keep_prob=1.0)
+
+ session = tf.get_default_session()
+ generic_summ = session.run(self.merged_generic, feed_dict=feed_dict)
+ self.writer.add_summary(generic_summ, train_step)
+
+ valid_data_bxtxd = valid_model_values = valid_ext_input_bxtxi = None
+ truth_valid_data_bxtxd = None
+ if has_valid_set:
+ E, _, _ = data_dict['valid_data'].shape
+ eidx = np.random.choice(E)
+ example_idxs = eidx * np.ones(hps.batch_size, dtype=np.int32)
+ valid_data_bxtxd, valid_ext_input_bxtxi = \
+ self.get_batch(data_dict['valid_data'],
+ data_dict['valid_ext_input'],
+ example_idxs=example_idxs)
+ if 'valid_truth' in data_dict and data_dict['valid_truth'] is not None:
+ truth_valid_data_bxtxd, _ = self.get_batch(data_dict['valid_truth'],
+ example_idxs=example_idxs)
+ else:
+ truth_valid_data_bxtxd = None
+
+ # plotter does averaging
+ valid_model_values = self.eval_model_runs_batch(data_name,
+ valid_data_bxtxd,
+ valid_ext_input_bxtxi,
+ do_average_batch=False)
+
+ example_image = plot_lfads(train_bxtxd=train_data_bxtxd,
+ train_model_vals=train_model_values,
+ train_ext_input_bxtxi=train_ext_input_bxtxi,
+ train_truth_bxtxd=truth_train_data_bxtxd,
+ valid_bxtxd=valid_data_bxtxd,
+ valid_model_vals=valid_model_values,
+ valid_ext_input_bxtxi=valid_ext_input_bxtxi,
+ valid_truth_bxtxd=truth_valid_data_bxtxd,
+ bidx=None, cf=cf, output_dist=hps.output_dist)
+ example_image = np.expand_dims(example_image, axis=0)
+ example_summ = session.run(self.merged_examples,
+ feed_dict={self.example_image : example_image})
+ self.writer.add_summary(example_summ)
+
+ def train_model(self, datasets):
+ """Train the model, print per-epoch information, and save checkpoints.
+
+ Loop over training epochs. The function that actually does the
+ training is train_epoch. This function iterates over the training
+ data, one epoch at a time. The learning rate schedule is such
+ that it will stay the same until the cost goes up in comparison to
+ the last few values, then it will drop.
+
+ Args:
+ datasets: A dict of data dicts. The dataset dict is simply a
+ name(string)-> data dictionary mapping (See top of lfads.py).
+ """
+ hps = self.hps
+ has_any_valid_set = False
+ for data_dict in datasets.values():
+ if data_dict['valid_data'] is not None:
+ has_any_valid_set = True
+ break
+
+ session = tf.get_default_session()
+ lr = session.run(self.learning_rate)
+ lr_stop = hps.learning_rate_stop
+ i = -1
+ train_costs = []
+ valid_costs = []
+ ev_total_cost = ev_recon_cost = ev_kl_cost = 0.0
+ lowest_ev_cost = np.Inf
+ while True:
+ i += 1
+      do_save_ckpt = (i % 10 == 0)
+ tr_total_cost, tr_recon_cost, tr_kl_cost, kl_weight, l2_cost, l2_weight = \
+ self.train_epoch(datasets, do_save_ckpt=do_save_ckpt)
+
+ # Evaluate the validation cost, and potentially save. Note that this
+ # routine will not save a validation checkpoint until the kl weight and
+ # l2 weights are equal to 1.0.
+ if has_any_valid_set:
+ ev_total_cost, ev_recon_cost, ev_kl_cost = \
+ self.eval_cost_epoch(datasets, kind='valid')
+ valid_costs.append(ev_total_cost)
+
+        # n_lve > 1 may give more consistent results, but not the actual lowest
+        # validation error; n_lve == 1 gives the lowest validation error seen
+        # so far.
+ n_lve = 1
+ run_avg_lve = np.mean(valid_costs[-n_lve:])
+
+ # conditions for saving checkpoints:
+ # KL weight must have finished stepping (>=1.0), AND
+ # L2 weight must have finished stepping OR L2 is not being used, AND
+ # the current run has a lower LVE than previous runs AND
+        #    there are more than n_lve validation costs, so the running
+        #    average over the last n_lve values is meaningful.
+ if kl_weight >= 1.0 and \
+ (l2_weight >= 1.0 or \
+ (self.hps.l2_gen_scale == 0.0 and self.hps.l2_con_scale == 0.0)) \
+ and (len(valid_costs) > n_lve and run_avg_lve < lowest_ev_cost):
+
+ lowest_ev_cost = run_avg_lve
+ checkpoint_path = os.path.join(self.hps.lfads_save_dir,
+ self.hps.checkpoint_name + '_lve.ckpt')
+ self.lve_saver.save(session, checkpoint_path,
+ global_step=self.train_step,
+ latest_filename='checkpoint_lve')
+
+ # Plot and summarize.
+ values = {'nepochs':i, 'has_any_valid_set': has_any_valid_set,
+ 'tr_total_cost':tr_total_cost, 'ev_total_cost':ev_total_cost,
+ 'tr_recon_cost':tr_recon_cost, 'ev_recon_cost':ev_recon_cost,
+ 'tr_kl_cost':tr_kl_cost, 'ev_kl_cost':ev_kl_cost,
+ 'l2_weight':l2_weight, 'kl_weight':kl_weight,
+ 'l2_cost':l2_cost}
+ self.summarize_all(datasets, values)
+ self.plot_single_example(datasets)
+
+ # Manage learning rate.
+ train_res = tr_total_cost
+ n_lr = hps.learning_rate_n_to_compare
+ if len(train_costs) > n_lr and train_res > np.max(train_costs[-n_lr:]):
+ _ = session.run(self.learning_rate_decay_op)
+ lr = session.run(self.learning_rate)
+ print(" Decreasing learning rate to %f." % lr)
+ # Force the system to run n_lr times while at this lr.
+ train_costs.append(np.inf)
+ else:
+ train_costs.append(train_res)
+
+ if lr < lr_stop:
+ print("Stopping optimization based on learning rate criteria.")
+ break
+
+ def eval_cost_epoch(self, datasets, kind='train', ext_input_extxi=None,
+ batch_size=None):
+ """Evaluate the cost of the epoch.
+
+    Args:
+      datasets: A dict of data dicts. The dataset dict is simply a
+        name(string)-> data dictionary mapping (See top of lfads.py).
+      kind: 'train' or 'valid', selecting which part of each dataset is
+        evaluated.
+
+ Returns:
+ a 3 tuple of costs:
+ (epoch total cost, epoch reconstruction cost, epoch KL cost)
+ """
+ ops_to_eval = [self.cost, self.recon_cost, self.kl_cost]
+ collected_op_values = self.run_epoch(datasets, ops_to_eval, kind=kind,
+ keep_prob=1.0)
+
+ total_cost = total_recon_cost = total_kl_cost = 0.0
+ # normalizing by batch done in distributions.py
+ epoch_size = len(collected_op_values)
+ for op_values in collected_op_values:
+ total_cost += op_values[0]
+ total_recon_cost += op_values[1]
+ total_kl_cost += op_values[2]
+
+ epoch_total_cost = total_cost / epoch_size
+ epoch_recon_cost = total_recon_cost / epoch_size
+ epoch_kl_cost = total_kl_cost / epoch_size
+
+ return epoch_total_cost, epoch_recon_cost, epoch_kl_cost
+
+ def eval_model_runs_batch(self, data_name, data_bxtxd, ext_input_bxtxi=None,
+ do_eval_cost=False, do_average_batch=False):
+ """Returns all the goodies for the entire model, per batch.
+
+    data_bxtxd and ext_input_bxtxi may have fewer than batch_size examples
+    along the first dimension, in which case this routine handles the padding
+    and truncation automatically.
+
+ Args:
+ data_name: The name of the data dict, to select which in/out matrices
+ to use.
+ data_bxtxd: Numpy array training data with shape:
+ batch_size x # time steps x # dimensions
+ ext_input_bxtxi: Numpy array training external input with shape:
+ batch_size x # time steps x # external input dims
+      do_eval_cost (optional): If true, evaluate the IWAE (Importance Weighted
+         Autoencoder) log likelihood bound, instead of the VAE version.
+ do_average_batch (optional): average over the batch, useful for getting
+ good IWAE costs, and model outputs for a single data point.
+
+ Returns:
+ A dictionary with the outputs of the model decoder, namely:
+        prior g0 mean, prior g0 variance, approx. posterior mean, approx.
+        posterior variance, the generator initial conditions, the control inputs (if
+ enabled), the state of the generator, the factors, and the rates.
+ """
+ session = tf.get_default_session()
+
+ # if fewer than batch_size provided, pad to batch_size
+ hps = self.hps
+ batch_size = hps.batch_size
+ E, _, _ = data_bxtxd.shape
+ if E < hps.batch_size:
+ data_bxtxd = np.pad(data_bxtxd, ((0, hps.batch_size-E), (0, 0), (0, 0)),
+ mode='constant', constant_values=0)
+ if ext_input_bxtxi is not None:
+ ext_input_bxtxi = np.pad(ext_input_bxtxi,
+ ((0, hps.batch_size-E), (0, 0), (0, 0)),
+ mode='constant', constant_values=0)
+
+ feed_dict = self.build_feed_dict(data_name, data_bxtxd,
+ ext_input_bxtxi, keep_prob=1.0)
+
+ # Non-temporal signals will be batch x dim.
+ # Temporal signals are list length T with elements batch x dim.
+ tf_vals = [self.gen_ics, self.gen_states, self.factors,
+ self.output_dist_params]
+ tf_vals.append(self.cost)
+ tf_vals.append(self.nll_bound_vae)
+ tf_vals.append(self.nll_bound_iwae)
+ tf_vals.append(self.train_step) # not train_op!
+ if self.hps.ic_dim > 0:
+ tf_vals += [self.prior_zs_g0.mean, self.prior_zs_g0.logvar,
+ self.posterior_zs_g0.mean, self.posterior_zs_g0.logvar]
+ if self.hps.co_dim > 0:
+ tf_vals.append(self.controller_outputs)
+ tf_vals_flat, fidxs = flatten(tf_vals)
+
+ np_vals_flat = session.run(tf_vals_flat, feed_dict=feed_dict)
+
+ ff = 0
+ gen_ics = [np_vals_flat[f] for f in fidxs[ff]]; ff += 1
+ gen_states = [np_vals_flat[f] for f in fidxs[ff]]; ff += 1
+ factors = [np_vals_flat[f] for f in fidxs[ff]]; ff += 1
+ out_dist_params = [np_vals_flat[f] for f in fidxs[ff]]; ff += 1
+ costs = [np_vals_flat[f] for f in fidxs[ff]]; ff += 1
+ nll_bound_vaes = [np_vals_flat[f] for f in fidxs[ff]]; ff += 1
+ nll_bound_iwaes = [np_vals_flat[f] for f in fidxs[ff]]; ff +=1
+ train_steps = [np_vals_flat[f] for f in fidxs[ff]]; ff +=1
+ if self.hps.ic_dim > 0:
+ prior_g0_mean = [np_vals_flat[f] for f in fidxs[ff]]; ff +=1
+ prior_g0_logvar = [np_vals_flat[f] for f in fidxs[ff]]; ff += 1
+ post_g0_mean = [np_vals_flat[f] for f in fidxs[ff]]; ff += 1
+ post_g0_logvar = [np_vals_flat[f] for f in fidxs[ff]]; ff += 1
+ if self.hps.co_dim > 0:
+ controller_outputs = [np_vals_flat[f] for f in fidxs[ff]]; ff += 1
+
+ # [0] are to take out the non-temporal items from lists
+ gen_ics = gen_ics[0]
+ costs = costs[0]
+ nll_bound_vaes = nll_bound_vaes[0]
+ nll_bound_iwaes = nll_bound_iwaes[0]
+ train_steps = train_steps[0]
+
+ # Convert to full tensors, not lists of tensors in time dim.
+ gen_states = list_t_bxn_to_tensor_bxtxn(gen_states)
+ factors = list_t_bxn_to_tensor_bxtxn(factors)
+ out_dist_params = list_t_bxn_to_tensor_bxtxn(out_dist_params)
+ if self.hps.ic_dim > 0:
+ # select first time point
+ prior_g0_mean = prior_g0_mean[0]
+ prior_g0_logvar = prior_g0_logvar[0]
+ post_g0_mean = post_g0_mean[0]
+ post_g0_logvar = post_g0_logvar[0]
+ if self.hps.co_dim > 0:
+ controller_outputs = list_t_bxn_to_tensor_bxtxn(controller_outputs)
+
+ # slice out the trials in case < batch_size provided
+ if E < hps.batch_size:
+ idx = np.arange(E)
+ gen_ics = gen_ics[idx, :]
+ gen_states = gen_states[idx, :]
+ factors = factors[idx, :, :]
+ out_dist_params = out_dist_params[idx, :, :]
+ if self.hps.ic_dim > 0:
+ prior_g0_mean = prior_g0_mean[idx, :]
+ prior_g0_logvar = prior_g0_logvar[idx, :]
+ post_g0_mean = post_g0_mean[idx, :]
+ post_g0_logvar = post_g0_logvar[idx, :]
+ if self.hps.co_dim > 0:
+ controller_outputs = controller_outputs[idx, :, :]
+
+ if do_average_batch:
+ gen_ics = np.mean(gen_ics, axis=0)
+ gen_states = np.mean(gen_states, axis=0)
+ factors = np.mean(factors, axis=0)
+ out_dist_params = np.mean(out_dist_params, axis=0)
+ if self.hps.ic_dim > 0:
+ prior_g0_mean = np.mean(prior_g0_mean, axis=0)
+ prior_g0_logvar = np.mean(prior_g0_logvar, axis=0)
+ post_g0_mean = np.mean(post_g0_mean, axis=0)
+ post_g0_logvar = np.mean(post_g0_logvar, axis=0)
+ if self.hps.co_dim > 0:
+ controller_outputs = np.mean(controller_outputs, axis=0)
+
+ model_vals = {}
+ model_vals['gen_ics'] = gen_ics
+ model_vals['gen_states'] = gen_states
+ model_vals['factors'] = factors
+ model_vals['output_dist_params'] = out_dist_params
+ model_vals['costs'] = costs
+ model_vals['nll_bound_vaes'] = nll_bound_vaes
+ model_vals['nll_bound_iwaes'] = nll_bound_iwaes
+ model_vals['train_steps'] = train_steps
+ if self.hps.ic_dim > 0:
+ model_vals['prior_g0_mean'] = prior_g0_mean
+ model_vals['prior_g0_logvar'] = prior_g0_logvar
+ model_vals['post_g0_mean'] = post_g0_mean
+ model_vals['post_g0_logvar'] = post_g0_logvar
+ if self.hps.co_dim > 0:
+ model_vals['controller_outputs'] = controller_outputs
+
+ return model_vals
+
+ def eval_model_runs_avg_epoch(self, data_name, data_extxd,
+ ext_input_extxi=None):
+ """Returns all the expected value for goodies for the entire model.
+
+ The expected value is taken over hidden (z) variables, namely the initial
+ conditions and the control inputs. The expected value is approximate, and
+    accomplished via sampling batch_size samples for every example.
+
+ Args:
+ data_name: The name of the data dict, to select which in/out matrices
+ to use.
+ data_extxd: Numpy array training data with shape:
+ # examples x # time steps x # dimensions
+ ext_input_extxi (optional): Numpy array training external input with
+ shape: # examples x # time steps x # external input dims
+
+ Returns:
+ A dictionary with the averaged outputs of the model decoder, namely:
+        prior g0 mean, prior g0 variance, approx. posterior mean, approx.
+        posterior variance, the generator initial conditions, the control inputs (if
+ enabled), the state of the generator, the factors, and the output
+ distribution parameters, e.g. (rates or mean and variances).
+ """
+ hps = self.hps
+ batch_size = hps.batch_size
+ E, T, D = data_extxd.shape
+ E_to_process = hps.ps_nexamples_to_process
+ if E_to_process > E:
+ E_to_process = E
+
+ if hps.ic_dim > 0:
+ prior_g0_mean = np.zeros([E_to_process, hps.ic_dim])
+ prior_g0_logvar = np.zeros([E_to_process, hps.ic_dim])
+ post_g0_mean = np.zeros([E_to_process, hps.ic_dim])
+ post_g0_logvar = np.zeros([E_to_process, hps.ic_dim])
+
+ if hps.co_dim > 0:
+ controller_outputs = np.zeros([E_to_process, T, hps.co_dim])
+ gen_ics = np.zeros([E_to_process, hps.gen_dim])
+ gen_states = np.zeros([E_to_process, T, hps.gen_dim])
+ factors = np.zeros([E_to_process, T, hps.factors_dim])
+
+ if hps.output_dist == 'poisson':
+ out_dist_params = np.zeros([E_to_process, T, D])
+ elif hps.output_dist == 'gaussian':
+ out_dist_params = np.zeros([E_to_process, T, D+D])
+ else:
+ assert False, "NIY"
+
+ costs = np.zeros(E_to_process)
+ nll_bound_vaes = np.zeros(E_to_process)
+ nll_bound_iwaes = np.zeros(E_to_process)
+ train_steps = np.zeros(E_to_process)
+ for es_idx in range(E_to_process):
+ print("Running %d of %d." % (es_idx+1, E_to_process))
+ example_idxs = es_idx * np.ones(batch_size, dtype=np.int32)
+ data_bxtxd, ext_input_bxtxi = self.get_batch(data_extxd,
+ ext_input_extxi,
+ batch_size=batch_size,
+ example_idxs=example_idxs)
+ model_values = self.eval_model_runs_batch(data_name, data_bxtxd,
+ ext_input_bxtxi,
+ do_eval_cost=True,
+ do_average_batch=True)
+
+ if self.hps.ic_dim > 0:
+ prior_g0_mean[es_idx,:] = model_values['prior_g0_mean']
+ prior_g0_logvar[es_idx,:] = model_values['prior_g0_logvar']
+ post_g0_mean[es_idx,:] = model_values['post_g0_mean']
+ post_g0_logvar[es_idx,:] = model_values['post_g0_logvar']
+ gen_ics[es_idx,:] = model_values['gen_ics']
+
+ if self.hps.co_dim > 0:
+ controller_outputs[es_idx,:,:] = model_values['controller_outputs']
+ gen_states[es_idx,:,:] = model_values['gen_states']
+ factors[es_idx,:,:] = model_values['factors']
+ out_dist_params[es_idx,:,:] = model_values['output_dist_params']
+ costs[es_idx] = model_values['costs']
+ nll_bound_vaes[es_idx] = model_values['nll_bound_vaes']
+ nll_bound_iwaes[es_idx] = model_values['nll_bound_iwaes']
+ train_steps[es_idx] = model_values['train_steps']
+ print('bound nll(vae): %.3f, bound nll(iwae): %.3f' \
+ % (nll_bound_vaes[es_idx], nll_bound_iwaes[es_idx]))
+
+ model_runs = {}
+ if self.hps.ic_dim > 0:
+ model_runs['prior_g0_mean'] = prior_g0_mean
+ model_runs['prior_g0_logvar'] = prior_g0_logvar
+ model_runs['post_g0_mean'] = post_g0_mean
+ model_runs['post_g0_logvar'] = post_g0_logvar
+ model_runs['gen_ics'] = gen_ics
+
+ if self.hps.co_dim > 0:
+ model_runs['controller_outputs'] = controller_outputs
+ model_runs['gen_states'] = gen_states
+ model_runs['factors'] = factors
+ model_runs['output_dist_params'] = out_dist_params
+ model_runs['costs'] = costs
+ model_runs['nll_bound_vaes'] = nll_bound_vaes
+ model_runs['nll_bound_iwaes'] = nll_bound_iwaes
+ model_runs['train_steps'] = train_steps
+ return model_runs
+
+ def eval_model_runs_push_mean(self, data_name, data_extxd,
+ ext_input_extxi=None):
+ """Returns values of interest for the model by pushing the means through
+
+ The mean values for both initial conditions and the control inputs are
+ pushed through the model instead of sampling (as is done in
+ eval_model_runs_avg_epoch).
+ This is a quick and approximate version of estimating these values instead
+ of sampling from the posterior many times and then averaging those values of
+ interest.
+
+ Internally, a total of batch_size trials are run through the model at once.
+
+ Args:
+ data_name: The name of the data dict, to select which in/out matrices
+ to use.
+ data_extxd: Numpy array training data with shape:
+ # examples x # time steps x # dimensions
+ ext_input_extxi (optional): Numpy array training external input with
+ shape: # examples x # time steps x # external input dims
+
+ Returns:
+ A dictionary with the estimated outputs of the model decoder, namely:
+        prior g0 mean, prior g0 variance, approx. posterior mean, approx.
+        posterior variance, the generator initial conditions, the control inputs (if
+ enabled), the state of the generator, the factors, and the output
+ distribution parameters, e.g. (rates or mean and variances).
+ """
+ hps = self.hps
+ batch_size = hps.batch_size
+ E, T, D = data_extxd.shape
+ E_to_process = hps.ps_nexamples_to_process
+ if E_to_process > E:
+      print("Setting number of posterior samples to process to:", E)
+ E_to_process = E
+
+ if hps.ic_dim > 0:
+ prior_g0_mean = np.zeros([E_to_process, hps.ic_dim])
+ prior_g0_logvar = np.zeros([E_to_process, hps.ic_dim])
+ post_g0_mean = np.zeros([E_to_process, hps.ic_dim])
+ post_g0_logvar = np.zeros([E_to_process, hps.ic_dim])
+
+ if hps.co_dim > 0:
+ controller_outputs = np.zeros([E_to_process, T, hps.co_dim])
+ gen_ics = np.zeros([E_to_process, hps.gen_dim])
+ gen_states = np.zeros([E_to_process, T, hps.gen_dim])
+ factors = np.zeros([E_to_process, T, hps.factors_dim])
+
+ if hps.output_dist == 'poisson':
+ out_dist_params = np.zeros([E_to_process, T, D])
+ elif hps.output_dist == 'gaussian':
+ out_dist_params = np.zeros([E_to_process, T, D+D])
+ else:
+ assert False, "NIY"
+
+ costs = np.zeros(E_to_process)
+ nll_bound_vaes = np.zeros(E_to_process)
+ nll_bound_iwaes = np.zeros(E_to_process)
+ train_steps = np.zeros(E_to_process)
+
+    # Generator that will yield 0:N in groups of per items, e.g.
+    # (0:per-1), (per:2*per-1), ..., with the last group containing <= per
+    # items. This will be used to feed per=batch_size trials into the model at
+    # a time.
+    def trial_batches(N, per):
+      for i in range(0, N, per):
+        yield np.arange(i, min(i+per, N), dtype=np.int32)
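+    # Example: trial_batches(10, 4) yields [0,1,2,3], [4,5,6,7], [8,9]; the
+    # final, possibly smaller batch is padded back up to batch_size inside
+    # eval_model_runs_batch before being fed to the graph.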
+
+    for batch_idx, es_idx in enumerate(trial_batches(E_to_process,
+                                                     hps.batch_size)):
+      print("Running trial batch %d with %d trials" % (batch_idx+1,
+                                                       len(es_idx)))
+      data_bxtxd, ext_input_bxtxi = self.get_batch(data_extxd,
+                                                   ext_input_extxi,
+                                                   batch_size=batch_size,
+                                                   example_idxs=es_idx)
+      model_values = self.eval_model_runs_batch(data_name, data_bxtxd,
+                                                ext_input_bxtxi,
+                                                do_eval_cost=True,
+                                                do_average_batch=False)
+
+      if self.hps.ic_dim > 0:
+        prior_g0_mean[es_idx,:] = model_values['prior_g0_mean']
+        prior_g0_logvar[es_idx,:] = model_values['prior_g0_logvar']
+        post_g0_mean[es_idx,:] = model_values['post_g0_mean']
+        post_g0_logvar[es_idx,:] = model_values['post_g0_logvar']
+        gen_ics[es_idx,:] = model_values['gen_ics']
+
+      if self.hps.co_dim > 0:
+        controller_outputs[es_idx,:,:] = model_values['controller_outputs']
+      gen_states[es_idx,:,:] = model_values['gen_states']
+      factors[es_idx,:,:] = model_values['factors']
+      out_dist_params[es_idx,:,:] = model_values['output_dist_params']
+
+      # TODO
+      # model_values['costs'] and other costs come out as scalars, summed over
+      # all the trials in the batch. what we want is the per-trial costs
+      costs[es_idx] = model_values['costs']
+      nll_bound_vaes[es_idx] = model_values['nll_bound_vaes']
+      nll_bound_iwaes[es_idx] = model_values['nll_bound_iwaes']
+
+      train_steps[es_idx] = model_values['train_steps']
+
+ model_runs = {}
+ if self.hps.ic_dim > 0:
+ model_runs['prior_g0_mean'] = prior_g0_mean
+ model_runs['prior_g0_logvar'] = prior_g0_logvar
+ model_runs['post_g0_mean'] = post_g0_mean
+ model_runs['post_g0_logvar'] = post_g0_logvar
+ model_runs['gen_ics'] = gen_ics
+
+ if self.hps.co_dim > 0:
+ model_runs['controller_outputs'] = controller_outputs
+ model_runs['gen_states'] = gen_states
+ model_runs['factors'] = factors
+ model_runs['output_dist_params'] = out_dist_params
+
+ # You probably do not want the LL associated values when pushing the mean
+ # instead of sampling.
+ model_runs['costs'] = costs
+ model_runs['nll_bound_vaes'] = nll_bound_vaes
+ model_runs['nll_bound_iwaes'] = nll_bound_iwaes
+ model_runs['train_steps'] = train_steps
+ return model_runs
+
+ def write_model_runs(self, datasets, output_fname=None, push_mean=False):
+ """Run the model on the data in data_dict, and save the computed values.
+
+    LFADS generates a number of outputs for each example, and these are all
+ saved. They are:
+ The mean and variance of the prior of g0.
+ The mean and variance of approximate posterior of g0.
+ The control inputs (if enabled)
+ The initial conditions, g0, for all examples.
+ The generator states for all time.
+ The factors for all time.
+ The output distribution parameters (e.g. rates) for all time.
+
+ Args:
+ datasets: a dictionary of named data_dictionaries, see top of lfads.py
+ output_fname: a file name stem for the output files.
+ push_mean: if False (default), generates batch_size samples for each trial
+ and averages the results. if True, runs each trial once without noise,
+ pushing the posterior mean initial conditions and control inputs through
+ the trained model. False is used for posterior_sample_and_average, True
+ is used for posterior_push_mean.
+ """
+ hps = self.hps
+ kind = hps.kind
+
+ for data_name, data_dict in datasets.items():
+ data_tuple = [('train', data_dict['train_data'],
+ data_dict['train_ext_input']),
+ ('valid', data_dict['valid_data'],
+ data_dict['valid_ext_input'])]
+ for data_kind, data_extxd, ext_input_extxi in data_tuple:
+ if not output_fname:
+ fname = "model_runs_" + data_name + '_' + data_kind + '_' + kind
+ else:
+ fname = output_fname + data_name + '_' + data_kind + '_' + kind
+
+ print("Writing data for %s data and kind %s." % (data_name, data_kind))
+ if push_mean:
+ model_runs = self.eval_model_runs_push_mean(data_name, data_extxd,
+ ext_input_extxi)
+ else:
+ model_runs = self.eval_model_runs_avg_epoch(data_name, data_extxd,
+ ext_input_extxi)
+ full_fname = os.path.join(hps.lfads_save_dir, fname)
+ write_data(full_fname, model_runs, compression='gzip')
+ print("Done.")
+
+ def write_model_samples(self, dataset_name, output_fname=None):
+ """Use the prior distribution to generate batch_size number of samples
+ from the model.
+
+ LFADS generates a number of outputs for each sample, and these are all
+ saved. They are:
+ The mean and variance of the prior of g0.
+ The control inputs (if enabled)
+ The initial conditions, g0, for all examples.
+ The generator states for all time.
+ The factors for all time.
+ The output distribution parameters (e.g. rates) for all time.
+
+ Args:
+ dataset_name: The name of the dataset to grab the factors -> rates
+ alignment matrices from.
+ output_fname: The name of the file in which to save the generated
+ samples.
+ """
+ hps = self.hps
+ batch_size = hps.batch_size
+
+ print("Generating %d samples" % (batch_size))
+ tf_vals = [self.factors, self.gen_states, self.gen_ics,
+ self.cost, self.output_dist_params]
+ if hps.ic_dim > 0:
+ tf_vals += [self.prior_zs_g0.mean, self.prior_zs_g0.logvar]
+ if hps.co_dim > 0:
+ tf_vals += [self.prior_zs_ar_con.samples_t]
+ tf_vals_flat, fidxs = flatten(tf_vals)
+
+ session = tf.get_default_session()
+ feed_dict = {}
+ feed_dict[self.dataName] = dataset_name
+ feed_dict[self.keep_prob] = 1.0
+
+ np_vals_flat = session.run(tf_vals_flat, feed_dict=feed_dict)
+
+ ff = 0
+ factors = [np_vals_flat[f] for f in fidxs[ff]]; ff += 1
+ gen_states = [np_vals_flat[f] for f in fidxs[ff]]; ff += 1
+ gen_ics = [np_vals_flat[f] for f in fidxs[ff]]; ff += 1
+ costs = [np_vals_flat[f] for f in fidxs[ff]]; ff += 1
+ output_dist_params = [np_vals_flat[f] for f in fidxs[ff]]; ff += 1
+ if hps.ic_dim > 0:
+ prior_g0_mean = [np_vals_flat[f] for f in fidxs[ff]]; ff += 1
+ prior_g0_logvar = [np_vals_flat[f] for f in fidxs[ff]]; ff += 1
+ if hps.co_dim > 0:
+ prior_zs_ar_con = [np_vals_flat[f] for f in fidxs[ff]]; ff += 1
+
+ # [0] are to take out the non-temporal items from lists
+ gen_ics = gen_ics[0]
+ costs = costs[0]
+
+ # Convert to full tensors, not lists of tensors in time dim.
+ gen_states = list_t_bxn_to_tensor_bxtxn(gen_states)
+ factors = list_t_bxn_to_tensor_bxtxn(factors)
+ output_dist_params = list_t_bxn_to_tensor_bxtxn(output_dist_params)
+ if hps.ic_dim > 0:
+ prior_g0_mean = prior_g0_mean[0]
+ prior_g0_logvar = prior_g0_logvar[0]
+ if hps.co_dim > 0:
+ prior_zs_ar_con = list_t_bxn_to_tensor_bxtxn(prior_zs_ar_con)
+
+ model_vals = {}
+ model_vals['gen_ics'] = gen_ics
+ model_vals['gen_states'] = gen_states
+ model_vals['factors'] = factors
+ model_vals['output_dist_params'] = output_dist_params
+ model_vals['costs'] = costs.reshape(1)
+ if hps.ic_dim > 0:
+ model_vals['prior_g0_mean'] = prior_g0_mean
+ model_vals['prior_g0_logvar'] = prior_g0_logvar
+ if hps.co_dim > 0:
+ model_vals['prior_zs_ar_con'] = prior_zs_ar_con
+
+ full_fname = os.path.join(hps.lfads_save_dir, output_fname)
+ write_data(full_fname, model_vals, compression='gzip')
+ print("Done.")
+
+ @staticmethod
+ def eval_model_parameters(use_nested=True, include_strs=None):
+ """Evaluate and return all of the TF variables in the model.
+
+ Args:
+      use_nested (optional): For returning values, use a nested dictionary, based
+ on variable scoping, or return all variables in a flat dictionary.
+ include_strs (optional): A list of strings to use as a filter, to reduce the
+ number of variables returned. A variable name must contain at least one
+ string in include_strs as a sub-string in order to be returned.
+
+ Returns:
+ The parameters of the model. This can be in a flat
+ dictionary, or a nested dictionary, where the nesting is by variable
+ scope.
+ """
+ all_tf_vars = tf.global_variables()
+ session = tf.get_default_session()
+ all_tf_vars_eval = session.run(all_tf_vars)
+ vars_dict = {}
+ strs = ["LFADS"]
+ if include_strs:
+ strs += include_strs
+
+ for i, (var, var_eval) in enumerate(zip(all_tf_vars, all_tf_vars_eval)):
+      if any(s in var.name for s in strs):
+ if not isinstance(var_eval, np.ndarray): # for H5PY
+        print(var.name, " is not a numpy array, saving as a numpy array "
+              "with value: ", var_eval, type(var_eval))
+ e = np.array(var_eval)
+ print(e, type(e))
+ else:
+ e = var_eval
+ vars_dict[var.name] = e
+
+ if not use_nested:
+ return vars_dict
+
+ var_names = vars_dict.keys()
+ nested_vars_dict = {}
+ current_dict = nested_vars_dict
+ for v, var_name in enumerate(var_names):
+ var_split_name_list = var_name.split('/')
+ split_name_list_len = len(var_split_name_list)
+ current_dict = nested_vars_dict
+ for p, part in enumerate(var_split_name_list):
+ if p < split_name_list_len - 1:
+ if part in current_dict:
+ current_dict = current_dict[part]
+ else:
+ current_dict[part] = {}
+ current_dict = current_dict[part]
+ else:
+ current_dict[part] = vars_dict[var_name]
+
+ return nested_vars_dict
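+    # Example of the nested form (hypothetical variable name): a variable named
+    # 'LFADS/gen/gen_rnn/W:0' is stored as
+    #   nested_vars_dict['LFADS']['gen']['gen_rnn']['W:0'],
+    # i.e. one dictionary level per '/'-separated scope, with the evaluated
+    # numpy array at the leaf.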
+
+ @staticmethod
+ def spikify_rates(rates_bxtxd):
+    """Randomly spikify underlying rates according to a Poisson distribution.
+
+ Args:
+      rates_bxtxd: a numpy tensor of firing rates with shape:
+        batch_size x # time steps x # dimensions
+
+ Returns:
+ A numpy array with the same shape as rates_bxtxd, but with the event
+ counts.
+ """
+
+ B,T,N = rates_bxtxd.shape
+    assert all([B > 0, N > 0]), "rates_bxtxd must have nonzero batch and data dims"
+
+    # Because the rate changes at every (trial, time, dimension) entry, draw
+    # each count with a nested loop.
+ spikes_bxtxd = np.zeros([B,T,N], dtype=np.int32)
+ for b in range(B):
+ for t in range(T):
+ for n in range(N):
+ rate = rates_bxtxd[b,t,n]
+ count = np.random.poisson(rate)
+ spikes_bxtxd[b,t,n] = count
+
+ return spikes_bxtxd
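+    # Note: since each count is an independent Poisson draw, the triple loop
+    # above is statistically equivalent to the vectorized call
+    #   np.random.poisson(rates_bxtxd).astype(np.int32)
+    # which may be preferable for large tensors.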
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/plot_lfads.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/plot_lfads.py"
new file mode 100644
index 00000000..c4e1a033
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/plot_lfads.py"
@@ -0,0 +1,181 @@
+# Copyright 2017 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# ==============================================================================
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import matplotlib
+matplotlib.use('Agg')
+from matplotlib import pyplot as plt
+import numpy as np
+import tensorflow as tf
+
+def _plot_item(W, name, full_name, nspaces):
+ plt.figure()
+ if W.shape == ():
+ print(name, ": ", W)
+ elif W.shape[0] == 1:
+ plt.stem(W.T)
+ plt.title(full_name)
+ elif W.shape[1] == 1:
+ plt.stem(W)
+ plt.title(full_name)
+ else:
+    plt.imshow(np.abs(W), interpolation='nearest', cmap='jet')
+ plt.colorbar()
+ plt.title(full_name)
+
+
+def all_plot(d, full_name="", exclude="", nspaces=0):
+ """Recursively plot all the LFADS model parameters in the nested
+ dictionary."""
+  for k, v in d.items():
+ this_name = full_name+"/"+k
+ if isinstance(v, dict):
+ all_plot(v, full_name=this_name, exclude=exclude, nspaces=nspaces+4)
+ else:
+ if exclude == "" or exclude not in this_name:
+ _plot_item(v, name=k, full_name=full_name+"/"+k, nspaces=nspaces+4)
+
+
+
+def plot_time_series(vals_bxtxn, bidx=None, n_to_plot=np.inf, scale=1.0,
+ color='r', title=None):
+
+ if bidx is None:
+ vals_txn = np.mean(vals_bxtxn, axis=0)
+ else:
+ vals_txn = vals_bxtxn[bidx,:,:]
+
+ T, N = vals_txn.shape
+ if n_to_plot > N:
+ n_to_plot = N
+
+ plt.plot(vals_txn[:,0:n_to_plot] + scale*np.array(range(n_to_plot)),
+ color=color, lw=1.0)
+ plt.axis('tight')
+ if title:
+ plt.title(title)
+
+
+def plot_lfads_timeseries(data_bxtxn, model_vals, ext_input_bxtxi=None,
+ truth_bxtxn=None, bidx=None, output_dist="poisson",
+ conversion_factor=1.0, subplot_cidx=0,
+ col_title=None):
+
+ n_to_plot = 10
+ scale = 1.0
+ nrows = 7
+ plt.subplot(nrows,2,1+subplot_cidx)
+
+ if output_dist == 'poisson':
+ rates = means = conversion_factor * model_vals['output_dist_params']
+ plot_time_series(rates, bidx, n_to_plot=n_to_plot, scale=scale,
+ title=col_title + " rates (LFADS - red, Truth - black)")
+ elif output_dist == 'gaussian':
+ means_vars = model_vals['output_dist_params']
+ means, vars = np.split(means_vars,2, axis=2) # bxtxn
+ stds = np.sqrt(vars)
+ plot_time_series(means, bidx, n_to_plot=n_to_plot, scale=scale,
+ title=col_title + " means (LFADS - red, Truth - black)")
+ plot_time_series(means+stds, bidx, n_to_plot=n_to_plot, scale=scale,
+ color='c')
+ plot_time_series(means-stds, bidx, n_to_plot=n_to_plot, scale=scale,
+ color='c')
+ else:
+    assert False, 'NIY'
+
+
+ if truth_bxtxn is not None:
+ plot_time_series(truth_bxtxn, bidx, n_to_plot=n_to_plot, color='k',
+ scale=scale)
+
+ input_title = ""
+ if "controller_outputs" in model_vals.keys():
+ input_title += " Controller Output"
+ plt.subplot(nrows,2,3+subplot_cidx)
+ u_t = model_vals['controller_outputs'][0:-1]
+ plot_time_series(u_t, bidx, n_to_plot=n_to_plot, color='c', scale=1.0,
+ title=col_title + input_title)
+
+ if ext_input_bxtxi is not None:
+ input_title += " External Input"
+ plot_time_series(ext_input_bxtxi, n_to_plot=n_to_plot, color='b',
+ scale=scale, title=col_title + input_title)
+
+ plt.subplot(nrows,2,5+subplot_cidx)
+ plot_time_series(means, bidx,
+ n_to_plot=n_to_plot, scale=1.0,
+ title=col_title + " Spikes (LFADS - red, Spikes - black)")
+ plot_time_series(data_bxtxn, bidx, n_to_plot=n_to_plot, color='k', scale=1.0)
+
+ plt.subplot(nrows,2,7+subplot_cidx)
+ plot_time_series(model_vals['factors'], bidx, n_to_plot=n_to_plot, color='b',
+ scale=2.0, title=col_title + " Factors")
+
+ plt.subplot(nrows,2,9+subplot_cidx)
+ plot_time_series(model_vals['gen_states'], bidx, n_to_plot=n_to_plot,
+ color='g', scale=1.0, title=col_title + " Generator State")
+
+ if bidx is not None:
+ data_nxt = data_bxtxn[bidx,:,:].T
+ params_nxt = model_vals['output_dist_params'][bidx,:,:].T
+ else:
+ data_nxt = np.mean(data_bxtxn, axis=0).T
+ params_nxt = np.mean(model_vals['output_dist_params'], axis=0).T
+ if output_dist == 'poisson':
+ means_nxt = params_nxt
+ elif output_dist == 'gaussian': # (means+vars) x time
+ means_nxt = np.vsplit(params_nxt,2)[0] # get means
+ else:
+    assert False, "NIY"
+
+ plt.subplot(nrows,2,11+subplot_cidx)
+ plt.imshow(data_nxt, aspect='auto', interpolation='nearest')
+ plt.title(col_title + ' Data')
+
+ plt.subplot(nrows,2,13+subplot_cidx)
+ plt.imshow(means_nxt, aspect='auto', interpolation='nearest')
+ plt.title(col_title + ' Means')
+
+
+def plot_lfads(train_bxtxd, train_model_vals,
+ train_ext_input_bxtxi=None, train_truth_bxtxd=None,
+ valid_bxtxd=None, valid_model_vals=None,
+ valid_ext_input_bxtxi=None, valid_truth_bxtxd=None,
+ bidx=None, cf=1.0, output_dist='poisson'):
+
+ # Plotting
+ f = plt.figure(figsize=(18,20), tight_layout=True)
+ plot_lfads_timeseries(train_bxtxd, train_model_vals,
+ train_ext_input_bxtxi,
+ truth_bxtxn=train_truth_bxtxd,
+ conversion_factor=cf, bidx=bidx,
+ output_dist=output_dist, col_title='Train')
+ plot_lfads_timeseries(valid_bxtxd, valid_model_vals,
+ valid_ext_input_bxtxi,
+ truth_bxtxn=valid_truth_bxtxd,
+ conversion_factor=cf, bidx=bidx,
+ output_dist=output_dist,
+ subplot_cidx=1, col_title='Valid')
+
+  # Convert from figure to a numpy array of width x height x 3 (last for RGB)
+  f.canvas.draw()
+  data = np.frombuffer(f.canvas.tostring_rgb(), dtype=np.uint8)
+ data_wxhx3 = data.reshape(f.canvas.get_width_height()[::-1] + (3,))
+ plt.close()
+
+ return data_wxhx3
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/run_lfads.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/run_lfads.py"
new file mode 100644
index 00000000..6e81b0bc
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/run_lfads.py"
@@ -0,0 +1,814 @@
+# Copyright 2017 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# ==============================================================================
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from lfads import LFADS
+import numpy as np
+import os
+import tensorflow as tf
+import re
+import utils
+import sys
+MAX_INT = sys.maxsize
+
+# Lots of hyperparameters, but most are pretty insensitive. The
+# explanation of these hyperparameters is found below, in the flags
+# section.
+
+CHECKPOINT_PB_LOAD_NAME = "checkpoint"
+CHECKPOINT_NAME = "lfads_vae"
+CSV_LOG = "fitlog"
+OUTPUT_FILENAME_STEM = ""
+DEVICE = "gpu:0" # "cpu:0", or other gpus, e.g. "gpu:1"
+MAX_CKPT_TO_KEEP = 5
+MAX_CKPT_TO_KEEP_LVE = 5
+PS_NEXAMPLES_TO_PROCESS = MAX_INT # if larger than number of examples, process all
+EXT_INPUT_DIM = 0
+IC_DIM = 64
+FACTORS_DIM = 50
+IC_ENC_DIM = 128
+GEN_DIM = 200
+GEN_CELL_INPUT_WEIGHT_SCALE = 1.0
+GEN_CELL_REC_WEIGHT_SCALE = 1.0
+CELL_WEIGHT_SCALE = 1.0
+BATCH_SIZE = 128
+LEARNING_RATE_INIT = 0.01
+LEARNING_RATE_DECAY_FACTOR = 0.95
+LEARNING_RATE_STOP = 0.00001
+LEARNING_RATE_N_TO_COMPARE = 6
+INJECT_EXT_INPUT_TO_GEN = False
+DO_TRAIN_IO_ONLY = False
+DO_TRAIN_ENCODER_ONLY = False
+DO_RESET_LEARNING_RATE = False
+FEEDBACK_FACTORS_OR_RATES = "factors"
+DO_TRAIN_READIN = True
+
+# Calibrated just above the average value for the rnn synthetic data.
+MAX_GRAD_NORM = 200.0
+CELL_CLIP_VALUE = 5.0
+KEEP_PROB = 0.95
+TEMPORAL_SPIKE_JITTER_WIDTH = 0
+OUTPUT_DISTRIBUTION = 'poisson' # 'poisson' or 'gaussian'
+NUM_STEPS_FOR_GEN_IC = MAX_INT # set to num_steps if greater than num_steps
+
+DATA_DIR = "/tmp/rnn_synth_data_v1.0/"
+DATA_FILENAME_STEM = "chaotic_rnn_inputs_g1p5"
+LFADS_SAVE_DIR = "/tmp/lfads_chaotic_rnn_inputs_g1p5/"
+CO_DIM = 1
+DO_CAUSAL_CONTROLLER = False
+DO_FEED_FACTORS_TO_CONTROLLER = True
+CONTROLLER_INPUT_LAG = 1
+PRIOR_AR_AUTOCORRELATION = 10.0
+PRIOR_AR_PROCESS_VAR = 0.1
+DO_TRAIN_PRIOR_AR_ATAU = True
+DO_TRAIN_PRIOR_AR_NVAR = True
+CI_ENC_DIM = 128
+CON_DIM = 128
+CO_PRIOR_VAR_SCALE = 0.1
+KL_INCREASE_STEPS = 2000
+L2_INCREASE_STEPS = 2000
+L2_GEN_SCALE = 2000.0
+L2_CON_SCALE = 0.0
+# scale of regularizer on time correlation of inferred inputs
+CO_MEAN_CORR_SCALE = 0.0
+KL_IC_WEIGHT = 1.0
+KL_CO_WEIGHT = 1.0
+KL_START_STEP = 0
+L2_START_STEP = 0
+IC_PRIOR_VAR_MIN = 0.1
+IC_PRIOR_VAR_SCALE = 0.1
+IC_PRIOR_VAR_MAX = 0.1
+IC_POST_VAR_MIN = 0.0001 # protection from KL blowing up
+
+flags = tf.app.flags
+flags.DEFINE_string("kind", "train",
+ "Type of model to build {train, \
+ posterior_sample_and_average, \
+ posterior_push_mean, \
+                    prior_sample, write_model_params}")
+flags.DEFINE_string("output_dist", OUTPUT_DISTRIBUTION,
+ "Type of output distribution, 'poisson' or 'gaussian'")
+flags.DEFINE_boolean("allow_gpu_growth", False,
+ "If true, only allocate amount of memory needed for \
+ Session. Otherwise, use full GPU memory.")
+
+# DATA
+flags.DEFINE_string("data_dir", DATA_DIR, "Data for training")
+flags.DEFINE_string("data_filename_stem", DATA_FILENAME_STEM,
+ "Filename stem for data dictionaries.")
+flags.DEFINE_string("lfads_save_dir", LFADS_SAVE_DIR, "model save dir")
+flags.DEFINE_string("checkpoint_pb_load_name", CHECKPOINT_PB_LOAD_NAME,
+ "Name of checkpoint files, use 'checkpoint_lve' for best \
+ error")
+flags.DEFINE_string("checkpoint_name", CHECKPOINT_NAME,
+ "Name of checkpoint files (.ckpt appended)")
+flags.DEFINE_string("output_filename_stem", OUTPUT_FILENAME_STEM,
+ "Name of output file (postfix will be added)")
+flags.DEFINE_string("device", DEVICE,
+ "Which device to use (default: \"gpu:0\", can also be \
+ \"cpu:0\", \"gpu:1\", etc)")
+flags.DEFINE_string("csv_log", CSV_LOG,
+ "Name of file to keep running log of fit likelihoods, \
+ etc (.csv appended)")
+flags.DEFINE_integer("max_ckpt_to_keep", MAX_CKPT_TO_KEEP,
+ "Max # of checkpoints to keep (rolling)")
+flags.DEFINE_integer("ps_nexamples_to_process", PS_NEXAMPLES_TO_PROCESS,
+ "Number of examples to process for posterior sample and \
+ average (not number of samples to average over).")
+flags.DEFINE_integer("max_ckpt_to_keep_lve", MAX_CKPT_TO_KEEP_LVE,
+ "Max # of checkpoints to keep for lowest validation error \
+ models (rolling)")
+flags.DEFINE_integer("ext_input_dim", EXT_INPUT_DIM, "Dimension of external \
+inputs")
+flags.DEFINE_integer("num_steps_for_gen_ic", NUM_STEPS_FOR_GEN_IC,
+                     "Number of steps to train the generator initial condition.")
+
+
+# If there are observed inputs, there are two ways to add that observed
+# input to the model. The first is to treat it as something to be
+# inferred, and thus encode the observed input via the encoders, and then
+# feed it to the generator via the "inferred inputs" channel. Second, one
+# can inject the input directly into the generator. This has the downside
+# of making the generation process strictly dependent on knowing the
+# observed input for any generated trial.
+flags.DEFINE_boolean("inject_ext_input_to_gen",
+ INJECT_EXT_INPUT_TO_GEN,
+ "Should observed inputs be input to model via encoders, \
+ or injected directly into generator?")
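+# Example invocation (hypothetical paths and dimensions): to feed a
+# 2-dimensional observed input directly into the generator, one might run
+# something like
+#   python run_lfads.py --kind=train --data_dir=/tmp/my_data \
+#     --ext_input_dim=2 --inject_ext_input_to_gen=true
+# whereas leaving inject_ext_input_to_gen false routes the same signal through
+# the encoders as an inferred input instead.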
+
+# CELL
+
+# The combined recurrent and input weights of the encoder and
+# controller cells are by default set to scale at ws/sqrt(#inputs),
+# with ws=1.0. You can change this scaling with this parameter.
+flags.DEFINE_float("cell_weight_scale", CELL_WEIGHT_SCALE,
+ "Input scaling for input weights in generator.")
+
+
+# GENERATION
+
+# Note that the dimension of the initial conditions is separated from the
+# dimensions of the generator initial conditions (and a linear matrix will
+# adapt the shapes if necessary). This is just another way to control
+# complexity. In all likelihood, setting the ic dims to the size of the
+# generator hidden state is just fine.
+flags.DEFINE_integer("ic_dim", IC_DIM, "Dimension of h0")
+# Setting the dimensions of the factors to something smaller than the data
+# dimension is a way to get a reduced dimensionality representation of your
+# data.
+flags.DEFINE_integer("factors_dim", FACTORS_DIM,
+ "Number of factors from generator")
+flags.DEFINE_integer("ic_enc_dim", IC_ENC_DIM,
+ "Cell hidden size, encoder of h0")
+
+# Controlling the size of the generator is one way to control complexity of
+# the dynamics (there is also l2, which will squeeze out unnecessary
+# dynamics also). The modern deep learning approach is to make these cells
+# as large as tolerable (from a training-time perspective), and then regularize
+# them to death with dropout or whatever. I don't know if this is correct
+# for the LFADS application or not.
+flags.DEFINE_integer("gen_dim", GEN_DIM,
+ "Cell hidden size, generator.")
+# The weights of the generator cell by default set to scale at
+# ws/sqrt(#inputs), with ws=1.0. You can change ws for
+# the input weights or the recurrent weights with these hyperparameters.
+flags.DEFINE_float("gen_cell_input_weight_scale", GEN_CELL_INPUT_WEIGHT_SCALE,
+ "Input scaling for input weights in generator.")
+flags.DEFINE_float("gen_cell_rec_weight_scale", GEN_CELL_REC_WEIGHT_SCALE,
+ "Input scaling for rec weights in generator.")
+
+# KL DISTRIBUTIONS
+# If you don't know what you are doing here, please leave these alone; the
+# defaults should be fine for most cases, regardless of other parameters.
+#
+# If you don't want the prior variance to be learned, set the
+# following values to the same thing: ic_prior_var_min,
+# ic_prior_var_scale, ic_prior_var_max. The prior mean will be
+# learned regardless.
+flags.DEFINE_float("ic_prior_var_min", IC_PRIOR_VAR_MIN,
+ "Minimum variance in posterior h0 codes.")
+flags.DEFINE_float("ic_prior_var_scale", IC_PRIOR_VAR_SCALE,
+ "Variance of ic prior distribution")
+flags.DEFINE_float("ic_prior_var_max", IC_PRIOR_VAR_MAX,
+ "Maximum variance of IC prior distribution.")
+# If you really want to limit the information from encoder to decoder,
+# Increase ic_post_var_min above 0.0.
+flags.DEFINE_float("ic_post_var_min", IC_POST_VAR_MIN,
+ "Minimum variance of IC posterior distribution.")
+flags.DEFINE_float("co_prior_var_scale", CO_PRIOR_VAR_SCALE,
+ "Variance of control input prior distribution.")
+
+
+flags.DEFINE_float("prior_ar_atau", PRIOR_AR_AUTOCORRELATION,
+ "Initial autocorrelation of AR(1) priors.")
+flags.DEFINE_float("prior_ar_nvar", PRIOR_AR_PROCESS_VAR,
+ "Initial noise variance for AR(1) priors.")
+flags.DEFINE_boolean("do_train_prior_ar_atau", DO_TRAIN_PRIOR_AR_ATAU,
+ "Is the value for atau an init, or the constant value?")
+flags.DEFINE_boolean("do_train_prior_ar_nvar", DO_TRAIN_PRIOR_AR_NVAR,
+ "Is the value for noise variance an init, or the constant \
+ value?")
+
+# CONTROLLER
+# This parameter critically controls whether or not there is a controller
+# (along with controller encoders) placed into the LFADS graph. If CO_DIM >=
+# 1, the controller produces CO_DIM-dimensional outputs; if it is equal to 0,
+# then there is no controller.
+flags.DEFINE_integer("co_dim", CO_DIM,
+ "Number of control net outputs (>0 builds that graph).")
+
+# The controller will be more powerful if it can see the encoding of the entire
+# trial. However, this allows the controller to create inferred inputs that are
+# acausal with respect to the actual data generation process. E.g. the data
+# generator could have an input at time t, but the controller, after seeing the
+# entirety of the trial could infer that the input is coming a little before
+# time t, because there are no restrictions on the data the controller sees.
+# One can force the controller to be causal (with respect to perturbations in
+# the data generator) so that it only sees forward encodings of the data at time
+# t that originate at times before or at time t. One can also control the data
+# the controller sees by using an input lag (forward encoding at time [t-tlag]
+# for controller input at time t). The same can be done in the reverse direction
+# (controller input at time t from reverse encoding at time [t+tlag], in the
+# case of an acausal controller). Setting this lag > 0 (even lag=1) can be a
+# powerful way of avoiding very spiky decodes. Finally, one can manually control
+# whether the factors at time t-1 are fed to the controller at time t.
+#
+# If you don't care about any of this, and just want to smooth your data, set
+#   do_causal_controller = False
+#   do_feed_factors_to_controller = True
+#   controller_input_lag = 0
+# (an illustrative command line with these settings follows the flag
+# definitions below).
+flags.DEFINE_boolean("do_causal_controller",
+ DO_CAUSAL_CONTROLLER,
+ "Restrict the controller create only causal inferred \
+ inputs?")
+# Strictly speaking, feeding either the factors or the rates to the controller
+# violates causality, since the g0 gets to see all the data. This may or may not
+# be only a theoretical concern.
+flags.DEFINE_boolean("do_feed_factors_to_controller",
+ DO_FEED_FACTORS_TO_CONTROLLER,
+ "Should factors[t-1] be input to controller at time t?")
+flags.DEFINE_string("feedback_factors_or_rates", FEEDBACK_FACTORS_OR_RATES,
+ "Feedback the factors or the rates to the controller? \
+ Acceptable values: 'factors' or 'rates'.")
+flags.DEFINE_integer("controller_input_lag", CONTROLLER_INPUT_LAG,
+ "Time lag on the encoding to controller t-lag for \
+ forward, t+lag for reverse.")
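+# As an illustration of the smoothing configuration described above (example
+# flag values only, not a prescription), one might pass on the command line:
+#   --do_causal_controller=false --do_feed_factors_to_controller=true \
+#   --controller_input_lag=0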
+
+flags.DEFINE_integer("ci_enc_dim", CI_ENC_DIM,
+ "Cell hidden size, encoder of control inputs")
+flags.DEFINE_integer("con_dim", CON_DIM,
+ "Cell hidden size, controller")
+
+
+# OPTIMIZATION
+flags.DEFINE_integer("batch_size", BATCH_SIZE,
+ "Batch size to use during training.")
+flags.DEFINE_float("learning_rate_init", LEARNING_RATE_INIT,
+ "Learning rate initial value")
+flags.DEFINE_float("learning_rate_decay_factor", LEARNING_RATE_DECAY_FACTOR,
+ "Learning rate decay, decay by this fraction every so \
+ often.")
+flags.DEFINE_float("learning_rate_stop", LEARNING_RATE_STOP,
+ "The lr is adaptively reduced, stop training at this value.")
+# Rather than putting the learning rate on an exponentially decreasing
+# schedule, the current algorithm monitors the training cost, and if it
+# isn't regularly decreasing, it will decrease the learning rate. So far,
+# this works fine, though it is not perfect.
+flags.DEFINE_integer("learning_rate_n_to_compare", LEARNING_RATE_N_TO_COMPARE,
+ "Number of previous costs current cost has to be worse \
+ than, to lower learning rate.")
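+# A minimal sketch of the adaptive rule implied by these flags (hypothetical
+# helper for illustration; the real logic lives in the training loop):
+#
+#   def should_lower_lr(recent_costs, current_cost, n_to_compare):
+#     # Lower the learning rate only when the current cost is worse than each
+#     # of the last n_to_compare costs.
+#     return (len(recent_costs) >= n_to_compare and
+#             all(current_cost > c for c in recent_costs[-n_to_compare:]))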
+
+# This sets a value above which the gradients will be clipped. This hp
+# is extremely useful to avoid an infrequent, but highly pathological,
+# problem whereby the gradient is so large that it destroys the
+# optimization by setting parameters too large, leading to a vicious cycle
+# that ends in NaNs. If it's too large, it's useless; if it's too small,
+# it essentially becomes the learning rate. It's pretty insensitive, though.
+flags.DEFINE_float("max_grad_norm", MAX_GRAD_NORM,
+ "Max norm of gradient before clipping.")
+
+# If your optimizations start "NaN-ing out", reduce this value so that
+# the values of the network don't grow out of control. Typically, once
+# this parameter is set to a reasonable value, one stops having numerical
+# problems.
+flags.DEFINE_float("cell_clip_value", CELL_CLIP_VALUE,
+ "Max value recurrent cell can take before being clipped.")
+
+# This flag is used for an experiment where one sees if a model trained on
+# many days' data can be used to learn the dynamics of a held-out day's data.
+# If you don't care about that particular experiment, this flag should always be
+# false.
+flags.DEFINE_boolean("do_train_io_only", DO_TRAIN_IO_ONLY,
+ "Train only the input (readin) and output (readout) \
+ affine functions.")
+
+# This flag is used for an experiment where one wants to know if the dynamics
+# learned by the generator generalize across conditions. In that case, you might
+# train up a model on one set of data, and then only further train the encoder
+# on another set of data (the conditions to be tested) so that the model is
+# forced to use the same dynamics to describe that data. If you don't care about
+# that particular experiment, this flag should always be false.
+flags.DEFINE_boolean("do_train_encoder_only", DO_TRAIN_ENCODER_ONLY,
+ "Train only the encoder weights.")
+
+flags.DEFINE_boolean("do_reset_learning_rate", DO_RESET_LEARNING_RATE,
+ "Reset the learning rate to initial value.")
+
+
+# For multi-session "stitching" models, the per-session readin matrices map from
+# neurons to input factors which are fed into the shared encoder. These are
+# initialized by alignment_matrix_cxf and alignment_bias_c in the input .h5
+# files. They can be fixed or made trainable.
+flags.DEFINE_boolean("do_train_readin", DO_TRAIN_READIN, "Whether to train the \
+ readin matrices and bias vectors. False leaves them fixed \
+ at their initial values specified by the alignment \
+ matrices and vectors.")
+
+
+# OVERFITTING
+# Dropout is done on the input data, on controller inputs (from
+# encoder), on outputs from generator to factors.
+flags.DEFINE_float("keep_prob", KEEP_PROB, "Dropout keep probability.")
+# It appears that the system will happily fit spikes (blessing or
+# curse, depending). You may not want this. Jittering the spikes a
+# bit will help (-/+ bin size, as specified here).
+flags.DEFINE_integer("temporal_spike_jitter_width",
+ TEMPORAL_SPIKE_JITTER_WIDTH,
+ "Shuffle spikes around this window.")
+
+# General note about helping ascribe controller inputs vs dynamics:
+#
+# If controller is heavily penalized, then it won't have any output.
+# If dynamics are heavily penalized, then generator won't make
+# dynamics. Note this l2 penalty is only on the recurrent portion of
+# the RNNs, as dropout is also available, penalizing the feed-forward
+# connections.
+flags.DEFINE_float("l2_gen_scale", L2_GEN_SCALE,
+ "L2 regularization cost for the generator only.")
+flags.DEFINE_float("l2_con_scale", L2_CON_SCALE,
+ "L2 regularization cost for the controller only.")
+flags.DEFINE_float("co_mean_corr_scale", CO_MEAN_CORR_SCALE,
+ "Cost of correlation (thru time)in the means of \
+ controller output.")
+
+# UNDERFITTING
+# If the primary task of LFADS is "filtering" of data and not
+# generation, then it is possible that the KL penalty is too strong.
+# Empirically, we have found this to be the case. So we add a
+# hyperparameter in front of the two KL terms (one for the initial
+# conditions to the generator, the other for the controller outputs).
+# You should always think of the default values as 1.0, and that
+# leads to a standard VAE formulation whereby the numbers that are
+# optimized are a lower-bound on the log-likelihood of the data. When
+# these 2 HPs deviate from 1.0, one cannot make any statement about
+# what those LL lower bounds mean anymore, and they cannot be compared
+# (AFAIK).
+flags.DEFINE_float("kl_ic_weight", KL_IC_WEIGHT,
+ "Strength of KL weight on initial conditions KL penatly.")
+flags.DEFINE_float("kl_co_weight", KL_CO_WEIGHT,
+ "Strength of KL weight on controller output KL penalty.")
+
+# Sometimes the task can be sufficiently hard to learn that the
+# optimizer takes the 'easy route', and simply minimizes the KL
+# divergence, setting it to near zero, and the optimization gets
+# stuck. These two parameters will help avoid that by getting the
+# optimization to 'latch' on to the main objective first, and only
+# turning on the regularizers later.
+flags.DEFINE_integer("kl_start_step", KL_START_STEP,
+ "Start increasing weight after this many steps.")
+# training passes, not epochs, increase by 0.5 every kl_increase_steps
+flags.DEFINE_integer("kl_increase_steps", KL_INCREASE_STEPS,
+ "Increase weight of kl cost to avoid local minimum.")
+# Same story for l2 regularizer. One wants a simple generator, for scientific
+# reasons, but not at the expense of hosing the optimization.
+flags.DEFINE_integer("l2_start_step", L2_START_STEP,
+ "Start increasing l2 weight after this many steps.")
+flags.DEFINE_integer("l2_increase_steps", L2_INCREASE_STEPS,
+ "Increase weight of l2 cost to avoid local minimum.")
+
+FLAGS = flags.FLAGS
+
+
+def build_model(hps, kind="train", datasets=None):
+ """Builds a model from either random initialization, or saved parameters.
+
+ Args:
+ hps: The hyper parameters for the model.
+ kind: (optional) The kind of model to build. Training vs inference require
+ different graphs.
+ datasets: The datasets structure (see top of lfads.py).
+
+ Returns:
+ an LFADS model.
+ """
+
+ build_kind = kind
+ if build_kind == "write_model_params":
+ build_kind = "train"
+ with tf.variable_scope("LFADS", reuse=None):
+ model = LFADS(hps, kind=build_kind, datasets=datasets)
+
+ if not os.path.exists(hps.lfads_save_dir):
+ print("Save directory %s does not exist, creating it." % hps.lfads_save_dir)
+ os.makedirs(hps.lfads_save_dir)
+
+ cp_pb_ln = hps.checkpoint_pb_load_name
+ cp_pb_ln = 'checkpoint' if cp_pb_ln == "" else cp_pb_ln
+ if cp_pb_ln == 'checkpoint':
+ print("Loading latest training checkpoint in: ", hps.lfads_save_dir)
+ saver = model.seso_saver
+ elif cp_pb_ln == 'checkpoint_lve':
+ print("Loading lowest validation checkpoint in: ", hps.lfads_save_dir)
+ saver = model.lve_saver
+ else:
+ print("Loading checkpoint: ", cp_pb_ln, ", in: ", hps.lfads_save_dir)
+ saver = model.seso_saver
+
+ ckpt = tf.train.get_checkpoint_state(hps.lfads_save_dir,
+ latest_filename=cp_pb_ln)
+
+ session = tf.get_default_session()
+ print("ckpt: ", ckpt)
+ if ckpt and tf.train.checkpoint_exists(ckpt.model_checkpoint_path):
+ print("Reading model parameters from %s" % ckpt.model_checkpoint_path)
+ saver.restore(session, ckpt.model_checkpoint_path)
+ else:
+ print("Created model with fresh parameters.")
+ if kind in ["posterior_sample_and_average", "posterior_push_mean",
+ "prior_sample", "write_model_params"]:
+ print("Possible error!!! You are running ", kind, " on a newly \
+ initialized model!")
+ # cannot print ckpt.model_checkpoint_path if there is no ckpt
+ print("Are you sure a checkpoint in ", hps.lfads_save_dir,
+ " exists?")
+
+ tf.global_variables_initializer().run()
+
+ if ckpt:
+ train_step_str = re.search('-[0-9]+$', ckpt.model_checkpoint_path).group()
+ else:
+ train_step_str = '-0'
+
+ fname = 'hyperparameters' + train_step_str + '.txt'
+ hp_fname = os.path.join(hps.lfads_save_dir, fname)
+ hps_for_saving = jsonify_dict(hps)
+ utils.write_data(hp_fname, hps_for_saving, use_json=True)
+
+ return model
+
+
+def jsonify_dict(d):
+ """Turns python booleans into strings so hps dict can be written in json.
+ Creates a shallow-copied dictionary first, then accomplishes string
+ conversion.
+
+ Args:
+ d: hyperparameter dictionary
+
+ Returns: hyperparameter dictionary with bool's as strings
+ """
+
+ d2 = d.copy() # shallow copy is fine by assumption of d being shallow
+ def jsonify_bool(boolean_value):
+ if boolean_value:
+ return "true"
+ else:
+ return "false"
+
+ for key in d2.keys():
+ if isinstance(d2[key], bool):
+ d2[key] = jsonify_bool(d2[key])
+ return d2
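+# For illustration: jsonify_dict({'do_train_readin': True, 'keep_prob': 0.95})
+# returns {'do_train_readin': 'true', 'keep_prob': 0.95}.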
+
+
+def build_hyperparameter_dict(flags):
+ """Simple script for saving hyper parameters. Under the hood the
+ flags structure isn't a dictionary, so it has to be simplified since we
+ want to be able to view the file as text.
+
+ Args:
+ flags: From tf.app.flags
+
+ Returns:
+ dictionary of hyper parameters (ignoring other flag types).
+ """
+ d = {}
+ # Data
+ d['output_dist'] = flags.output_dist
+ d['data_dir'] = flags.data_dir
+ d['lfads_save_dir'] = flags.lfads_save_dir
+ d['checkpoint_pb_load_name'] = flags.checkpoint_pb_load_name
+ d['checkpoint_name'] = flags.checkpoint_name
+ d['output_filename_stem'] = flags.output_filename_stem
+ d['max_ckpt_to_keep'] = flags.max_ckpt_to_keep
+ d['max_ckpt_to_keep_lve'] = flags.max_ckpt_to_keep_lve
+ d['ps_nexamples_to_process'] = flags.ps_nexamples_to_process
+ d['ext_input_dim'] = flags.ext_input_dim
+ d['data_filename_stem'] = flags.data_filename_stem
+ d['device'] = flags.device
+ d['csv_log'] = flags.csv_log
+ d['num_steps_for_gen_ic'] = flags.num_steps_for_gen_ic
+ d['inject_ext_input_to_gen'] = flags.inject_ext_input_to_gen
+ # Cell
+ d['cell_weight_scale'] = flags.cell_weight_scale
+ # Generation
+ d['ic_dim'] = flags.ic_dim
+ d['factors_dim'] = flags.factors_dim
+ d['ic_enc_dim'] = flags.ic_enc_dim
+ d['gen_dim'] = flags.gen_dim
+ d['gen_cell_input_weight_scale'] = flags.gen_cell_input_weight_scale
+ d['gen_cell_rec_weight_scale'] = flags.gen_cell_rec_weight_scale
+ # KL distributions
+ d['ic_prior_var_min'] = flags.ic_prior_var_min
+ d['ic_prior_var_scale'] = flags.ic_prior_var_scale
+ d['ic_prior_var_max'] = flags.ic_prior_var_max
+ d['ic_post_var_min'] = flags.ic_post_var_min
+ d['co_prior_var_scale'] = flags.co_prior_var_scale
+ d['prior_ar_atau'] = flags.prior_ar_atau
+ d['prior_ar_nvar'] = flags.prior_ar_nvar
+ d['do_train_prior_ar_atau'] = flags.do_train_prior_ar_atau
+ d['do_train_prior_ar_nvar'] = flags.do_train_prior_ar_nvar
+ # Controller
+ d['do_causal_controller'] = flags.do_causal_controller
+ d['controller_input_lag'] = flags.controller_input_lag
+ d['do_feed_factors_to_controller'] = flags.do_feed_factors_to_controller
+ d['feedback_factors_or_rates'] = flags.feedback_factors_or_rates
+ d['co_dim'] = flags.co_dim
+ d['ci_enc_dim'] = flags.ci_enc_dim
+ d['con_dim'] = flags.con_dim
+ d['co_mean_corr_scale'] = flags.co_mean_corr_scale
+ # Optimization
+ d['batch_size'] = flags.batch_size
+ d['learning_rate_init'] = flags.learning_rate_init
+ d['learning_rate_decay_factor'] = flags.learning_rate_decay_factor
+ d['learning_rate_stop'] = flags.learning_rate_stop
+ d['learning_rate_n_to_compare'] = flags.learning_rate_n_to_compare
+ d['max_grad_norm'] = flags.max_grad_norm
+ d['cell_clip_value'] = flags.cell_clip_value
+ d['do_train_io_only'] = flags.do_train_io_only
+ d['do_train_encoder_only'] = flags.do_train_encoder_only
+ d['do_reset_learning_rate'] = flags.do_reset_learning_rate
+ d['do_train_readin'] = flags.do_train_readin
+
+ # Overfitting
+ d['keep_prob'] = flags.keep_prob
+ d['temporal_spike_jitter_width'] = flags.temporal_spike_jitter_width
+ d['l2_gen_scale'] = flags.l2_gen_scale
+ d['l2_con_scale'] = flags.l2_con_scale
+ # Underfitting
+ d['kl_ic_weight'] = flags.kl_ic_weight
+ d['kl_co_weight'] = flags.kl_co_weight
+ d['kl_start_step'] = flags.kl_start_step
+ d['kl_increase_steps'] = flags.kl_increase_steps
+ d['l2_start_step'] = flags.l2_start_step
+ d['l2_increase_steps'] = flags.l2_increase_steps
+
+ return d
+
+
+class hps_dict_to_obj(dict):
+ """Helper class allowing us to access hps dictionary more easily."""
+
+ def __getattr__(self, key):
+ if key in self:
+ return self[key]
+ else:
+ assert False, ("%s does not exist." % key)
+ def __setattr__(self, key, value):
+ self[key] = value
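+# For illustration, attribute and item access are interchangeable:
+#   hps = hps_dict_to_obj({'batch_size': 128})
+#   hps.batch_size     # -> 128
+#   hps['batch_size']  # -> 128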
+
+
+def train(hps, datasets):
+ """Train the LFADS model.
+
+ Args:
+ hps: The dictionary of hyperparameters.
+ datasets: A dictionary of data dictionaries. The dataset dict is simply a
+ name(string)-> data dictionary mapping (See top of lfads.py).
+ """
+ model = build_model(hps, kind="train", datasets=datasets)
+ if hps.do_reset_learning_rate:
+ sess = tf.get_default_session()
+ sess.run(model.learning_rate.initializer)
+
+ model.train_model(datasets)
+
+
+def write_model_runs(hps, datasets, output_fname=None, push_mean=False):
+ """Run the model on the data in data_dict, and save the computed values.
+
+ LFADS generates a number of outputs for each example, and these are all
+ saved. They are:
+ The mean and variance of the prior of g0.
+ The mean and variance of approximate posterior of g0.
+ The control inputs (if enabled)
+ The initial conditions, g0, for all examples.
+ The generator states for all time.
+ The factors for all time.
+ The rates for all time.
+
+ Args:
+ hps: The dictionary of hyperparameters.
+ datasets: A dictionary of data dictionaries. The dataset dict is simply a
+ name(string)-> data dictionary mapping (See top of lfads.py).
+ output_fname (optional): output filename stem to write the model runs.
+ push_mean: if False (default), generates batch_size samples for each trial
+ and averages the results. if True, runs each trial once without noise,
+ pushing the posterior mean initial conditions and control inputs through
+ the trained model. False is used for posterior_sample_and_average, True
+ is used for posterior_push_mean.
+ """
+ model = build_model(hps, kind=hps.kind, datasets=datasets)
+ model.write_model_runs(datasets, output_fname, push_mean)
+
+
+def write_model_samples(hps, datasets, dataset_name=None, output_fname=None):
+ """Use the prior distribution to generate samples from the model.
+ Generates batch_size number of samples (set through FLAGS).
+
+ LFADS generates a number of outputs for each example, and these are all
+ saved. They are:
+ The mean and variance of the prior of g0.
+ The control inputs (if enabled)
+ The initial conditions, g0, for all examples.
+ The generator states for all time.
+ The factors for all time.
+ The output distribution parameters (e.g. rates) for all time.
+
+ Args:
+ hps: The dictionary of hyperparameters.
+ datasets: A dictionary of data dictionaries. The dataset dict is simply a
+ name(string)-> data dictionary mapping (See top of lfads.py).
+ dataset_name: The name of the dataset to grab the factors -> rates
+ alignment matrices from. Only a concern with models trained on
+ multi-session data. By default, uses the first dataset in the data dict.
+ output_fname: The name prefix of the file in which to save the generated
+ samples.
+ """
+ if not output_fname:
+ output_fname = "model_runs_" + hps.kind
+ else:
+ output_fname = output_fname + "model_runs_" + hps.kind
+ if not dataset_name:
+ dataset_name = datasets.keys()[0]
+ else:
+ if dataset_name not in datasets.keys():
+ raise ValueError("Invalid dataset name '%s'."%(dataset_name))
+ model = build_model(hps, kind=hps.kind, datasets=datasets)
+ model.write_model_samples(dataset_name, output_fname)
+
+
+def write_model_parameters(hps, output_fname=None, datasets=None):
+ """Save all the model parameters
+
+ Save all the parameters to hps.lfads_save_dir.
+
+ Args:
+ hps: The dictionary of hyperparameters.
+ output_fname: The prefix of the file in which to save the generated
+ samples.
+ datasets: A dictionary of data dictionaries. The dataset dict is simply a
+ name(string)-> data dictionary mapping (See top of lfads.py).
+ """
+ if not output_fname:
+ output_fname = "model_params"
+ else:
+ output_fname = output_fname + "_model_params"
+ fname = os.path.join(hps.lfads_save_dir, output_fname)
+ print("Writing model parameters to: ", fname)
+ # save the optimizer params as well
+ model = build_model(hps, kind="write_model_params", datasets=datasets)
+ model_params = model.eval_model_parameters(use_nested=False,
+ include_strs="LFADS")
+ utils.write_data(fname, model_params, compression=None)
+ print("Done.")
+
+
+def clean_data_dict(data_dict):
+ """Add some key/value pairs to the data dict, if they are missing.
+ Args:
+ data_dict - dictionary containing data for LFADS
+ Returns:
+ data_dict with some keys filled in, if they are absent.
+ """
+
+ keys = ['train_truth', 'train_ext_input', 'valid_data',
+ 'valid_truth', 'valid_ext_input', 'valid_train']
+ for k in keys:
+ if k not in data_dict:
+ data_dict[k] = None
+
+ return data_dict
+
+
+def load_datasets(data_dir, data_filename_stem):
+ """Load the datasets from a specified directory.
+
+ Example files look like
+ >data_dir/my_dataset_first_day
+ >data_dir/my_dataset_second_day
+
+ If my_dataset (filename) stem is in the directory, the read routine will try
+ and load it. The datasets dictionary will then look like
+ dataset['first_day'] -> (first day data dictionary)
+ dataset['second_day'] -> (second day data dictionary)
+
+ Args:
+ data_dir: The directory from which to load the datasets.
+ data_filename_stem: The stem of the filename for the datasets.
+
+ Returns:
+ datasets: a dataset dictionary, with one name->data dictionary pair for
+ each dataset file.
+ """
+ print("Reading data from ", data_dir)
+ datasets = utils.read_datasets(data_dir, data_filename_stem)
+ for k, data_dict in datasets.items():
+ datasets[k] = clean_data_dict(data_dict)
+
+ train_total_size = len(data_dict['train_data'])
+ if train_total_size == 0:
+ print("Did not load training set.")
+ else:
+ print("Found training set with number examples: ", train_total_size)
+
+ valid_total_size = len(data_dict['valid_data'])
+ if valid_total_size == 0:
+ print("Did not load validation set.")
+ else:
+ print("Found validation set with number examples: ", valid_total_size)
+
+ return datasets
+
+
+def main(_):
+ """Get this whole shindig off the ground."""
+ d = build_hyperparameter_dict(FLAGS)
+ hps = hps_dict_to_obj(d) # hyper parameters
+ kind = FLAGS.kind
+
+ # Read the data, if necessary.
+ train_set = valid_set = None
+ if kind in ["train", "posterior_sample_and_average", "posterior_push_mean",
+ "prior_sample", "write_model_params"]:
+ datasets = load_datasets(hps.data_dir, hps.data_filename_stem)
+ else:
+ raise ValueError('Kind {} is not supported.'.format(kind))
+
+ # infer the dataset names and dataset dimensions from the loaded files
+ hps.kind = kind # needs to be added here, because it is not saved as a hyperparam
+ hps.dataset_names = []
+ hps.dataset_dims = {}
+ for key in datasets:
+ hps.dataset_names.append(key)
+ hps.dataset_dims[key] = datasets[key]['data_dim']
+
+ # also store down the dimensionality of the data
+ # - just pull from one set, required to be same for all sets
+ hps.num_steps = datasets.values()[0]['num_steps']
+ hps.ndatasets = len(hps.dataset_names)
+
+ if hps.num_steps_for_gen_ic > hps.num_steps:
+ hps.num_steps_for_gen_ic = hps.num_steps
+
+ # Build and run the model, for varying purposes.
+ config = tf.ConfigProto(allow_soft_placement=True,
+ log_device_placement=False)
+ if FLAGS.allow_gpu_growth:
+ config.gpu_options.allow_growth = True
+ sess = tf.Session(config=config)
+ with sess.as_default():
+ with tf.device(hps.device):
+ if kind == "train":
+ train(hps, datasets)
+ elif kind == "posterior_sample_and_average":
+ write_model_runs(hps, datasets, hps.output_filename_stem,
+ push_mean=False)
+ elif kind == "posterior_push_mean":
+ write_model_runs(hps, datasets, hps.output_filename_stem,
+ push_mean=True)
+ elif kind == "prior_sample":
+ write_model_samples(hps, datasets, hps.output_filename_stem)
+ elif kind == "write_model_params":
+ write_model_parameters(hps, hps.output_filename_stem, datasets)
+ else:
+ assert False, ("Kind %s is not implemented. " % kind)
+
+
+if __name__ == "__main__":
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/synth_data/generate_chaotic_rnn_data.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/synth_data/generate_chaotic_rnn_data.py"
new file mode 100644
index 00000000..3de72e58
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/synth_data/generate_chaotic_rnn_data.py"
@@ -0,0 +1,200 @@
+# Copyright 2017 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# ==============================================================================
+from __future__ import print_function
+
+import h5py
+import numpy as np
+import os
+import tensorflow as tf # used for flags here
+
+from utils import write_datasets
+from synthetic_data_utils import add_alignment_projections, generate_data
+from synthetic_data_utils import generate_rnn, get_train_n_valid_inds
+from synthetic_data_utils import nparray_and_transpose
+from synthetic_data_utils import spikify_data, gaussify_data, split_list_by_inds
+import matplotlib
+import matplotlib.pyplot as plt
+import scipy.signal
+
+matplotlib.rcParams['image.interpolation'] = 'nearest'
+DATA_DIR = "rnn_synth_data_v1.0"
+
+flags = tf.app.flags
+flags.DEFINE_string("save_dir", "/tmp/" + DATA_DIR + "/",
+ "Directory for saving data.")
+flags.DEFINE_string("datafile_name", "thits_data",
+ "Name of data file for input case.")
+flags.DEFINE_string("noise_type", "poisson", "Noise type for data.")
+flags.DEFINE_integer("synth_data_seed", 5, "Random seed for RNN generation.")
+flags.DEFINE_float("T", 1.0, "Time in seconds to generate.")
+flags.DEFINE_integer("C", 100, "Number of conditions")
+flags.DEFINE_integer("N", 50, "Number of units for the RNN")
+flags.DEFINE_integer("S", 50, "Number of sampled units from RNN")
+flags.DEFINE_integer("npcs", 10, "Number of PCS for multi-session case.")
+flags.DEFINE_float("train_percentage", 4.0/5.0,
+ "Percentage of train vs validation trials")
+flags.DEFINE_integer("nreplications", 40,
+ "Number of noise replications of the same underlying rates.")
+flags.DEFINE_float("g", 1.5, "Complexity of dynamics")
+flags.DEFINE_float("x0_std", 1.0,
+ "Volume from which to pull initial conditions (affects diversity of dynamics.")
+flags.DEFINE_float("tau", 0.025, "Time constant of RNN")
+flags.DEFINE_float("dt", 0.010, "Time bin")
+flags.DEFINE_float("input_magnitude", 20.0,
+ "For the input case, what is the value of the input?")
+flags.DEFINE_float("max_firing_rate", 30.0, "Map 1.0 of RNN to a spikes per second")
+FLAGS = flags.FLAGS
+
+
+# Note that with N small (as it is above, N=50), finite-size effects
+# will have a pretty dramatic impact on the dynamics of the random RNN.
+# If you want more complex dynamics, you'll have to run the script a
+# lot, or increase N (or g).
+
+# Getting hard vs. easy data can be a little stochastic, so we set the seed.
+
+# Pull out some commonly used parameters.
+# These are user parameters (configuration)
+rng = np.random.RandomState(seed=FLAGS.synth_data_seed)
+T = FLAGS.T
+C = FLAGS.C
+N = FLAGS.N
+S = FLAGS.S
+input_magnitude = FLAGS.input_magnitude
+nreplications = FLAGS.nreplications
+E = nreplications * C # total number of trials
+# S is the number of measurements in each dataset, with each
+# dataset having a different set of observations.
+ndatasets = N/S # ok if rounded down
+train_percentage = FLAGS.train_percentage
+ntime_steps = int(T / FLAGS.dt)
+# End of user parameters
+
+rnn = generate_rnn(rng, N, FLAGS.g, FLAGS.tau, FLAGS.dt, FLAGS.max_firing_rate)
+
+# Check to make sure the RNN is the one we used in the paper.
+if N == 50:
+ assert abs(rnn['W'][0,0] - 0.06239899) < 1e-8, 'Error in random seed?'
+ rem_check = nreplications * train_percentage
+ assert abs(rem_check - int(rem_check)) < 1e-8, \
+ 'Train percentage * nreplications should be integral number.'
+
+
+# Initial condition generation, and condition label generation. This
+# happens outside of the dataset loop, so that all datasets have the
+# same conditions, which is similar to a neurophys setup.
+condition_number = 0
+x0s = []
+condition_labels = []
+for c in range(C):
+ x0 = FLAGS.x0_std * rng.randn(N, 1)
+ x0s.append(np.tile(x0, nreplications)) # replicate x0 nreplications times
+ # replicate the condition label nreplications times
+ for ns in range(nreplications):
+ condition_labels.append(condition_number)
+ condition_number += 1
+x0s = np.concatenate(x0s, axis=1)
+
+# Containers for storing data across datasets.
+datasets = {}
+for n in range(ndatasets):
+ print(n+1, " of ", ndatasets)
+
+ # First generate all firing rates. In the next loop, generate all
+ # replications; this allows the random state for rate generation to be
+ # independent of nreplications.
+ dataset_name = 'dataset_N' + str(N) + '_S' + str(S)
+ if S < N:
+ dataset_name += '_n' + str(n+1)
+
+ # Sample neuron subsets. The assumption is the PC axes of the RNN
+ # are not unit aligned, so sampling units is adequate to sample all
+ # the high-variance PCs.
+ P_sxn = np.eye(S,N)
+ for m in range(n):
+ P_sxn = np.roll(P_sxn, S, axis=1)
+
+ if input_magnitude > 0.0:
+ # time of "hits" randomly chosen between [1/4 and 3/4] of total time
+ input_times = rng.choice(int(ntime_steps/2), size=[E]) + int(ntime_steps/4)
+ else:
+ input_times = None
+
+ rates, x0s, inputs = \
+ generate_data(rnn, T=T, E=E, x0s=x0s, P_sxn=P_sxn,
+ input_magnitude=input_magnitude,
+ input_times=input_times)
+
+ if FLAGS.noise_type == "poisson":
+ noisy_data = spikify_data(rates, rng, rnn['dt'], rnn['max_firing_rate'])
+ elif FLAGS.noise_type == "gaussian":
+ noisy_data = gaussify_data(rates, rng, rnn['dt'], rnn['max_firing_rate'])
+ else:
+ raise ValueError("Only noise types supported are poisson or gaussian")
+
+ # split into train and validation sets
+ train_inds, valid_inds = get_train_n_valid_inds(E, train_percentage,
+ nreplications)
+
+ # Split the data, inputs, labels and times into train vs. validation.
+ rates_train, rates_valid = \
+ split_list_by_inds(rates, train_inds, valid_inds)
+ noisy_data_train, noisy_data_valid = \
+ split_list_by_inds(noisy_data, train_inds, valid_inds)
+ input_train, inputs_valid = \
+ split_list_by_inds(inputs, train_inds, valid_inds)
+ condition_labels_train, condition_labels_valid = \
+ split_list_by_inds(condition_labels, train_inds, valid_inds)
+ input_times_train, input_times_valid = \
+ split_list_by_inds(input_times, train_inds, valid_inds)
+
+ # Turn rates, noisy_data, and input into numpy arrays.
+ rates_train = nparray_and_transpose(rates_train)
+ rates_valid = nparray_and_transpose(rates_valid)
+ noisy_data_train = nparray_and_transpose(noisy_data_train)
+ noisy_data_valid = nparray_and_transpose(noisy_data_valid)
+ input_train = nparray_and_transpose(input_train)
+ inputs_valid = nparray_and_transpose(inputs_valid)
+
+ # Note that we put these 'truth' rates and inputs into this
+ # structure, but the only data used by LFADS are the noisy
+ # data, e.g. spike trains. The rest is either for printing or posterity.
+ data = {'train_truth': rates_train,
+ 'valid_truth': rates_valid,
+ 'input_train_truth' : input_train,
+ 'input_valid_truth' : inputs_valid,
+ 'train_data' : noisy_data_train,
+ 'valid_data' : noisy_data_valid,
+ 'train_percentage' : train_percentage,
+ 'nreplications' : nreplications,
+ 'dt' : rnn['dt'],
+ 'input_magnitude' : input_magnitude,
+ 'input_times_train' : input_times_train,
+ 'input_times_valid' : input_times_valid,
+ 'P_sxn' : P_sxn,
+ 'condition_labels_train' : condition_labels_train,
+ 'condition_labels_valid' : condition_labels_valid,
+ 'conversion_factor': 1.0 / rnn['conversion_factor']}
+ datasets[dataset_name] = data
+
+if S < N:
+ # Note that this isn't necessary for this synthetic example, but
+ # it's useful to see how the input factor matrices were initialized
+ # for actual neurophysiology data.
+ datasets = add_alignment_projections(datasets, npcs=FLAGS.npcs)
+
+# Write out the datasets.
+write_datasets(FLAGS.save_dir, FLAGS.datafile_name, datasets)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/synth_data/generate_itb_data.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/synth_data/generate_itb_data.py"
new file mode 100644
index 00000000..66bc45d0
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/synth_data/generate_itb_data.py"
@@ -0,0 +1,209 @@
+# Copyright 2017 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# ==============================================================================
+from __future__ import print_function
+
+import h5py
+import numpy as np
+import os
+from six.moves import xrange
+import tensorflow as tf
+
+from utils import write_datasets
+from synthetic_data_utils import normalize_rates
+from synthetic_data_utils import get_train_n_valid_inds, nparray_and_transpose
+from synthetic_data_utils import spikify_data, split_list_by_inds
+
+DATA_DIR = "rnn_synth_data_v1.0"
+
+flags = tf.app.flags
+flags.DEFINE_string("save_dir", "/tmp/" + DATA_DIR + "/",
+ "Directory for saving data.")
+flags.DEFINE_string("datafile_name", "itb_rnn",
+ "Name of data file for input case.")
+flags.DEFINE_integer("synth_data_seed", 5, "Random seed for RNN generation.")
+flags.DEFINE_float("T", 1.0, "Time in seconds to generate.")
+flags.DEFINE_integer("C", 800, "Number of conditions")
+flags.DEFINE_integer("N", 50, "Number of units for the RNN")
+flags.DEFINE_float("train_percentage", 4.0/5.0,
+ "Percentage of train vs validation trials")
+flags.DEFINE_integer("nreplications", 5,
+ "Number of spikifications of the same underlying rates.")
+flags.DEFINE_float("tau", 0.025, "Time constant of RNN")
+flags.DEFINE_float("dt", 0.010, "Time bin")
+flags.DEFINE_float("max_firing_rate", 30.0,
+ "Map 1.0 of RNN to a spikes per second")
+flags.DEFINE_float("u_std", 0.25,
+ "Std dev of input to integration to bound model")
+flags.DEFINE_string("checkpoint_path", "SAMPLE_CHECKPOINT",
+ """Path to directory with checkpoints of model
+ trained on integration to bound task. Currently this
+ is a placeholder which tells the code to grab the
+ checkpoint that is provided with the code
+ (in /trained_itb/..). If you have your own checkpoint
+ you would like to restore, you would point it to
+ that path.""")
+FLAGS = flags.FLAGS
+
+
+class IntegrationToBoundModel:
+ def __init__(self, N):
+ scale = 0.8 / float(N**0.5)
+ self.N = N
+ self.Wh_nxn = tf.Variable(tf.random_normal([N, N], stddev=scale))
+ self.b_1xn = tf.Variable(tf.zeros([1, N]))
+ self.Bu_1xn = tf.Variable(tf.zeros([1, N]))
+ self.Wro_nxo = tf.Variable(tf.random_normal([N, 1], stddev=scale))
+ self.bro_o = tf.Variable(tf.zeros([1]))
+
+ def call(self, h_tm1_bxn, u_bx1):
+ act_t_bxn = tf.matmul(h_tm1_bxn, self.Wh_nxn) + self.b_1xn + u_bx1 * self.Bu_1xn
+ h_t_bxn = tf.nn.tanh(act_t_bxn)
+ z_t = tf.nn.xw_plus_b(h_t_bxn, self.Wro_nxo, self.bro_o)
+ return z_t, h_t_bxn
+
+def get_data_batch(batch_size, T, rng, u_std):
+ u_bxt = rng.randn(batch_size, T) * u_std
+ running_sum_b = np.zeros([batch_size])
+ labels_bxt = np.zeros([batch_size, T])
+ for t in xrange(T):
+ running_sum_b += u_bxt[:, t]
+ labels_bxt[:, t] += running_sum_b
+ labels_bxt = np.clip(labels_bxt, -1, 1)
+ return u_bxt, labels_bxt
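+# A toy illustration of the labels this produces: with u = [0.6, 0.7, -0.2] for
+# one trial, the running sum is [0.6, 1.3, 1.1], and after clipping to [-1, 1]
+# the labels are [0.6, 1.0, 1.0], i.e. the target saturates at the bound.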
+
+
+rng = np.random.RandomState(seed=FLAGS.synth_data_seed)
+u_rng = np.random.RandomState(seed=FLAGS.synth_data_seed+1)
+T = FLAGS.T
+C = FLAGS.C
+N = FLAGS.N # must be same N as in trained model (provided example is N = 50)
+nreplications = FLAGS.nreplications
+E = nreplications * C # total number of trials
+train_percentage = FLAGS.train_percentage
+ntimesteps = int(T / FLAGS.dt)
+batch_size = 1 # gives one example per trial
+
+model = IntegrationToBoundModel(N)
+inputs_ph_t = [tf.placeholder(tf.float32,
+ shape=[None, 1]) for _ in range(ntimesteps)]
+state = tf.zeros([batch_size, N])
+saver = tf.train.Saver()
+
+P_nxn = rng.randn(N,N) / np.sqrt(N) # random projections
+
+# unroll RNN for T timesteps
+outputs_t = []
+states_t = []
+
+for inp in inputs_ph_t:
+ output, state = model.call(state, inp)
+ outputs_t.append(output)
+ states_t.append(state)
+
+with tf.Session() as sess:
+ # restore the latest model ckpt
+ if FLAGS.checkpoint_path == "SAMPLE_CHECKPOINT":
+ dir_path = os.path.dirname(os.path.realpath(__file__))
+ model_checkpoint_path = os.path.join(dir_path, "trained_itb/model-65000")
+ else:
+ model_checkpoint_path = FLAGS.checkpoint_path
+ try:
+ saver.restore(sess, model_checkpoint_path)
+ print ('Model restored from', model_checkpoint_path)
+ except:
+ assert False, ("No checkpoints to restore from, is the path %s correct?"
+ %model_checkpoint_path)
+
+ # generate data for trials
+ data_e = []
+ u_e = []
+ outs_e = []
+ for c in range(C):
+ u_1xt, outs_1xt = get_data_batch(batch_size, ntimesteps, u_rng, FLAGS.u_std)
+
+ feed_dict = {}
+ for t in xrange(ntimesteps):
+ feed_dict[inputs_ph_t[t]] = np.reshape(u_1xt[:,t], (batch_size,-1))
+
+ states_t_bxn, outputs_t_bxn = sess.run([states_t, outputs_t],
+ feed_dict=feed_dict)
+ states_nxt = np.transpose(np.squeeze(np.asarray(states_t_bxn)))
+ outputs_t_bxn = np.squeeze(np.asarray(outputs_t_bxn))
+ r_sxt = np.dot(P_nxn, states_nxt)
+
+ for s in xrange(nreplications):
+ data_e.append(r_sxt)
+ u_e.append(u_1xt)
+ outs_e.append(outputs_t_bxn)
+
+ truth_data_e = normalize_rates(data_e, E, N)
+
+spiking_data_e = spikify_data(truth_data_e, rng, dt=FLAGS.dt,
+ max_firing_rate=FLAGS.max_firing_rate)
+train_inds, valid_inds = get_train_n_valid_inds(E, train_percentage,
+ nreplications)
+
+data_train_truth, data_valid_truth = split_list_by_inds(truth_data_e,
+ train_inds,
+ valid_inds)
+data_train_spiking, data_valid_spiking = split_list_by_inds(spiking_data_e,
+ train_inds,
+ valid_inds)
+
+data_train_truth = nparray_and_transpose(data_train_truth)
+data_valid_truth = nparray_and_transpose(data_valid_truth)
+data_train_spiking = nparray_and_transpose(data_train_spiking)
+data_valid_spiking = nparray_and_transpose(data_valid_spiking)
+
+# save down the inputs used to generate this data
+train_inputs_u, valid_inputs_u = split_list_by_inds(u_e,
+ train_inds,
+ valid_inds)
+train_inputs_u = nparray_and_transpose(train_inputs_u)
+valid_inputs_u = nparray_and_transpose(valid_inputs_u)
+
+# save down the network outputs (may be useful later)
+train_outputs_u, valid_outputs_u = split_list_by_inds(outs_e,
+ train_inds,
+ valid_inds)
+train_outputs_u = np.array(train_outputs_u)
+valid_outputs_u = np.array(valid_outputs_u)
+
+
+data = { 'train_truth': data_train_truth,
+ 'valid_truth': data_valid_truth,
+ 'train_data' : data_train_spiking,
+ 'valid_data' : data_valid_spiking,
+ 'train_percentage' : train_percentage,
+ 'nreplications' : nreplications,
+ 'dt' : FLAGS.dt,
+ 'u_std' : FLAGS.u_std,
+ 'max_firing_rate': FLAGS.max_firing_rate,
+ 'train_inputs_u': train_inputs_u,
+ 'valid_inputs_u': valid_inputs_u,
+ 'train_outputs_u': train_outputs_u,
+ 'valid_outputs_u': valid_outputs_u,
+ 'conversion_factor' : FLAGS.max_firing_rate/(1.0/FLAGS.dt) }
+
+# just one dataset here
+datasets = {}
+dataset_name = 'dataset_N' + str(N)
+datasets[dataset_name] = data
+
+# write out the dataset
+write_datasets(FLAGS.save_dir, FLAGS.datafile_name, datasets)
+print ('Saved to ', os.path.join(FLAGS.save_dir,
+ FLAGS.datafile_name + '_' + dataset_name))
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/synth_data/generate_labeled_rnn_data.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/synth_data/generate_labeled_rnn_data.py"
new file mode 100644
index 00000000..06955854
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/synth_data/generate_labeled_rnn_data.py"
@@ -0,0 +1,147 @@
+# Copyright 2017 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# ==============================================================================
+from __future__ import print_function
+
+import os
+import h5py
+import numpy as np
+from six.moves import xrange
+
+from synthetic_data_utils import generate_data, generate_rnn
+from synthetic_data_utils import get_train_n_valid_inds
+from synthetic_data_utils import nparray_and_transpose
+from synthetic_data_utils import spikify_data, split_list_by_inds
+import tensorflow as tf
+from utils import write_datasets
+
+DATA_DIR = "rnn_synth_data_v1.0"
+
+flags = tf.app.flags
+flags.DEFINE_string("save_dir", "/tmp/" + DATA_DIR + "/",
+ "Directory for saving data.")
+flags.DEFINE_string("datafile_name", "conditioned_rnn_data",
+ "Name of data file for input case.")
+flags.DEFINE_integer("synth_data_seed", 5, "Random seed for RNN generation.")
+flags.DEFINE_float("T", 1.0, "Time in seconds to generate.")
+flags.DEFINE_integer("C", 400, "Number of conditions")
+flags.DEFINE_integer("N", 50, "Number of units for the RNN")
+flags.DEFINE_float("train_percentage", 4.0/5.0,
+ "Percentage of train vs validation trials")
+flags.DEFINE_integer("nreplications", 10,
+ "Number of spikifications of the same underlying rates.")
+flags.DEFINE_float("g", 1.5, "Complexity of dynamics")
+flags.DEFINE_float("x0_std", 1.0,
+ "Volume from which to pull initial conditions (affects diversity of dynamics.")
+flags.DEFINE_float("tau", 0.025, "Time constant of RNN")
+flags.DEFINE_float("dt", 0.010, "Time bin")
+flags.DEFINE_float("max_firing_rate", 30.0, "Map 1.0 of RNN to a spikes per second")
+FLAGS = flags.FLAGS
+
+rng = np.random.RandomState(seed=FLAGS.synth_data_seed)
+rnn_rngs = [np.random.RandomState(seed=FLAGS.synth_data_seed+1),
+ np.random.RandomState(seed=FLAGS.synth_data_seed+2)]
+T = FLAGS.T
+C = FLAGS.C
+N = FLAGS.N
+nreplications = FLAGS.nreplications
+E = nreplications * C
+train_percentage = FLAGS.train_percentage
+ntimesteps = int(T / FLAGS.dt)
+
+rnn_a = generate_rnn(rnn_rngs[0], N, FLAGS.g, FLAGS.tau, FLAGS.dt,
+ FLAGS.max_firing_rate)
+rnn_b = generate_rnn(rnn_rngs[1], N, FLAGS.g, FLAGS.tau, FLAGS.dt,
+ FLAGS.max_firing_rate)
+rnns = [rnn_a, rnn_b]
+
+# pick which RNN is used on each trial
+rnn_to_use = rng.randint(2, size=E)
+ext_input = np.repeat(np.expand_dims(rnn_to_use, axis=1), ntimesteps, axis=1)
+ext_input = np.expand_dims(ext_input, axis=2) # these are "a's" in the paper
+
+x0s = []
+condition_labels = []
+condition_number = 0
+for c in range(C):
+ x0 = FLAGS.x0_std * rng.randn(N, 1)
+ x0s.append(np.tile(x0, nreplications))
+ for ns in range(nreplications):
+ condition_labels.append(condition_number)
+ condition_number += 1
+x0s = np.concatenate(x0s, axis=1)
+
+P_nxn = rng.randn(N, N) / np.sqrt(N)
+
+# generate trials for both RNNs
+rates_a, x0s_a, _ = generate_data(rnn_a, T=T, E=E, x0s=x0s, P_sxn=P_nxn,
+ input_magnitude=0.0, input_times=None)
+spikes_a = spikify_data(rates_a, rng, rnn_a['dt'], rnn_a['max_firing_rate'])
+
+rates_b, x0s_b, _ = generate_data(rnn_b, T=T, E=E, x0s=x0s, P_sxn=P_nxn,
+ input_magnitude=0.0, input_times=None)
+spikes_b = spikify_data(rates_b, rng, rnn_b['dt'], rnn_b['max_firing_rate'])
+
+# not the best way to do this but E is small enough
+rates = []
+spikes = []
+for trial in xrange(E):
+ if rnn_to_use[trial] == 0:
+ rates.append(rates_a[trial])
+ spikes.append(spikes_a[trial])
+ else:
+ rates.append(rates_b[trial])
+ spikes.append(spikes_b[trial])
+
+# split into train and validation sets
+train_inds, valid_inds = get_train_n_valid_inds(E, train_percentage,
+ nreplications)
+
+rates_train, rates_valid = split_list_by_inds(rates, train_inds, valid_inds)
+spikes_train, spikes_valid = split_list_by_inds(spikes, train_inds, valid_inds)
+condition_labels_train, condition_labels_valid = split_list_by_inds(
+ condition_labels, train_inds, valid_inds)
+ext_input_train, ext_input_valid = split_list_by_inds(
+ ext_input, train_inds, valid_inds)
+
+rates_train = nparray_and_transpose(rates_train)
+rates_valid = nparray_and_transpose(rates_valid)
+spikes_train = nparray_and_transpose(spikes_train)
+spikes_valid = nparray_and_transpose(spikes_valid)
+
+# add train_ext_input and valid_ext input
+data = {'train_truth': rates_train,
+ 'valid_truth': rates_valid,
+ 'train_data' : spikes_train,
+ 'valid_data' : spikes_valid,
+ 'train_ext_input' : np.array(ext_input_train),
+ 'valid_ext_input': np.array(ext_input_valid),
+ 'train_percentage' : train_percentage,
+ 'nreplications' : nreplications,
+ 'dt' : FLAGS.dt,
+ 'P_sxn' : P_nxn,
+ 'condition_labels_train' : condition_labels_train,
+ 'condition_labels_valid' : condition_labels_valid,
+ 'conversion_factor': 1.0 / rnn_a['conversion_factor']}
+
+# just one dataset here
+datasets = {}
+dataset_name = 'dataset_N' + str(N)
+datasets[dataset_name] = data
+
+# write out the dataset
+write_datasets(FLAGS.save_dir, FLAGS.datafile_name, datasets)
+print ('Saved to ', os.path.join(FLAGS.save_dir,
+ FLAGS.datafile_name + '_' + dataset_name))
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/synth_data/run_generate_synth_data.sh" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/synth_data/run_generate_synth_data.sh"
new file mode 100644
index 00000000..9ebc8ce2
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/synth_data/run_generate_synth_data.sh"
@@ -0,0 +1,40 @@
+#!/bin/bash
+
+# Copyright 2017 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# ==============================================================================
+
+SYNTH_PATH=/tmp/rnn_synth_data_v1.0/
+
+ echo "Generating chaotic rnn data with no input pulses (g=1.5) with spiking noise"
+ python generate_chaotic_rnn_data.py --save_dir=$SYNTH_PATH --datafile_name=chaotic_rnn_no_inputs --synth_data_seed=5 --T=1.0 --C=400 --N=50 --S=50 --train_percentage=0.8 --nreplications=10 --g=1.5 --x0_std=1.0 --tau=0.025 --dt=0.01 --input_magnitude=0.0 --max_firing_rate=30.0 --noise_type='poisson'
+
+echo "Generating chaotic rnn data with no input pulses (g=1.5) with Gaussian noise"
+python generate_chaotic_rnn_data.py --save_dir=$SYNTH_PATH --datafile_name=gaussian_chaotic_rnn_no_inputs --synth_data_seed=5 --T=1.0 --C=400 --N=50 --S=50 --train_percentage=0.8 --nreplications=10 --g=1.5 --x0_std=1.0 --tau=0.025 --dt=0.01 --input_magnitude=0.0 --max_firing_rate=30.0 --noise_type='gaussian'
+
+ echo "Generating chaotic rnn data with input pulses (g=1.5)"
+ python generate_chaotic_rnn_data.py --save_dir=$SYNTH_PATH --datafile_name=chaotic_rnn_inputs_g1p5 --synth_data_seed=5 --T=1.0 --C=400 --N=50 --S=50 --train_percentage=0.8 --nreplications=10 --g=1.5 --x0_std=1.0 --tau=0.025 --dt=0.01 --input_magnitude=20.0 --max_firing_rate=30.0 --noise_type='poisson'
+
+ echo "Generating chaotic rnn data with input pulses (g=2.5)"
+ python generate_chaotic_rnn_data.py --save_dir=$SYNTH_PATH --datafile_name=chaotic_rnn_inputs_g2p5 --synth_data_seed=5 --T=1.0 --C=400 --N=50 --S=50 --train_percentage=0.8 --nreplications=10 --g=2.5 --x0_std=1.0 --tau=0.025 --dt=0.01 --input_magnitude=20.0 --max_firing_rate=30.0 --noise_type='poisson'
+
+ echo "Generate the multi-session RNN data (no multi-session synth example in paper)"
+ python generate_chaotic_rnn_data.py --save_dir=$SYNTH_PATH --datafile_name=chaotic_rnn_multisession --synth_data_seed=5 --T=1.0 --C=150 --N=100 --S=20 --npcs=10 --train_percentage=0.8 --nreplications=40 --g=1.5 --x0_std=1.0 --tau=0.025 --dt=0.01 --input_magnitude=0.0 --max_firing_rate=30.0 --noise_type='poisson'
+
+ echo "Generating Integration-to-bound RNN data"
+ python generate_itb_data.py --save_dir=$SYNTH_PATH --datafile_name=itb_rnn --u_std=0.25 --checkpoint_path=SAMPLE_CHECKPOINT --synth_data_seed=5 --T=1.0 --C=800 --N=50 --train_percentage=0.8 --nreplications=5 --tau=0.025 --dt=0.01 --max_firing_rate=30.0
+
+ echo "Generating chaotic rnn data with external input labels (no external input labels example in paper)"
+ python generate_labeled_rnn_data.py --save_dir=$SYNTH_PATH --datafile_name=chaotic_rnns_labeled --synth_data_seed=5 --T=1.0 --C=400 --N=50 --train_percentage=0.8 --nreplications=10 --g=1.5 --x0_std=1.0 --tau=0.025 --dt=0.01 --max_firing_rate=30.0
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/synth_data/synthetic_data_utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/synth_data/synthetic_data_utils.py"
new file mode 100644
index 00000000..cc264ee4
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/synth_data/synthetic_data_utils.py"
@@ -0,0 +1,348 @@
+# Copyright 2017 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# ==============================================================================
+from __future__ import print_function
+
+import h5py
+import numpy as np
+import os
+
+from utils import write_datasets
+import matplotlib
+import matplotlib.pyplot as plt
+import scipy.signal
+
+
+def generate_rnn(rng, N, g, tau, dt, max_firing_rate):
+ """Create a (vanilla) RNN with a bunch of hyper parameters for generating
+chaotic data.
+ Args:
+ rng: numpy random number generator
+ N: number of hidden units
+ g: scaling of recurrent weight matrix in g W, with W ~ N(0,1/N)
+ tau: time scale of individual unit dynamics
+ dt: time step for equation updates
+ max_firing_rate: how to rescale the -1,1 firing rates
+ Returns:
+ the dictionary of these parameters, plus some others.
+"""
+ rnn = {}
+ rnn['N'] = N
+ rnn['W'] = rng.randn(N,N)/np.sqrt(N)
+ rnn['Bin'] = rng.randn(N)/np.sqrt(1.0)
+ rnn['Bin2'] = rng.randn(N)/np.sqrt(1.0)
+ rnn['b'] = np.zeros(N)
+ rnn['g'] = g
+ rnn['tau'] = tau
+ rnn['dt'] = dt
+ rnn['max_firing_rate'] = max_firing_rate
+ mfr = rnn['max_firing_rate'] # spikes / sec
+ nbins_per_sec = 1.0/rnn['dt'] # bins / sec
+ # Used for plotting in LFADS
+ rnn['conversion_factor'] = mfr / nbins_per_sec # spikes / bin
+ return rnn
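+# For example: with max_firing_rate=30 spikes/s and dt=0.01 s, nbins_per_sec is
+# 100, so the stored conversion_factor is 30 / 100 = 0.3 spikes per bin.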
+
+
+def generate_data(rnn, T, E, x0s=None, P_sxn=None, input_magnitude=0.0,
+ input_times=None):
+ """ Generates data from an randomly initialized RNN.
+ Args:
+ rnn: the rnn
+ T: Time in seconds to run (divided by rnn['dt'] to get steps, rounded down.
+ E: total number of examples
+ S: number of samples (subsampling N)
+ Returns:
+ A list of length E of NxT tensors of the network being run.
+ """
+ N = rnn['N']
+ def run_rnn(rnn, x0, ntime_steps, input_time=None):
+ rs = np.zeros([N,ntime_steps])
+ x_tm1 = x0
+ r_tm1 = np.tanh(x0)
+ tau = rnn['tau']
+ dt = rnn['dt']
+ alpha = (1.0-dt/tau)
+ W = dt/tau*rnn['W']*rnn['g']
+ Bin = dt/tau*rnn['Bin']
+ Bin2 = dt/tau*rnn['Bin2']
+ b = dt/tau*rnn['b']
+
+ us = np.zeros([1, ntime_steps])
+ for t in range(ntime_steps):
+ x_t = alpha*x_tm1 + np.dot(W,r_tm1) + b
+ if input_time is not None and t == input_time:
+ us[0,t] = input_magnitude
+ x_t += Bin * us[0,t] # DCS is this what was used?
+ r_t = np.tanh(x_t)
+ x_tm1 = x_t
+ r_tm1 = r_t
+ rs[:,t] = r_t
+ return rs, us
+
+ if P_sxn is None:
+ P_sxn = np.eye(N)
+ ntime_steps = int(T / rnn['dt'])
+ data_e = []
+ inputs_e = []
+ for e in range(E):
+ input_time = input_times[e] if input_times is not None else None
+ r_nxt, u_uxt = run_rnn(rnn, x0s[:,e], ntime_steps, input_time)
+ r_sxt = np.dot(P_sxn, r_nxt)
+ inputs_e.append(u_uxt)
+ data_e.append(r_sxt)
+
+ S = P_sxn.shape[0]
+ data_e = normalize_rates(data_e, E, S)
+
+ return data_e, x0s, inputs_e
+
+
+def normalize_rates(data_e, E, S):
+ # Normalization, made more complex because of the P matrices.
+ # Normalize by min and max in each channel. This normalization will
+ # cause offset differences between otherwise identical rnn runs that
+ # have different input hit times.
+ for e in range(E):
+ r_sxt = data_e[e]
+ for i in range(S):
+ rmin = np.min(r_sxt[i,:])
+ rmax = np.max(r_sxt[i,:])
+ assert rmax - rmin != 0, 'Something wrong'
+ r_sxt[i,:] = (r_sxt[i,:] - rmin)/(rmax-rmin)
+ data_e[e] = r_sxt
+ return data_e
+
+
+def spikify_data(data_e, rng, dt=1.0, max_firing_rate=100):
+ """ Apply spikes to a continuous dataset whose values are between 0.0 and 1.0
+ Args:
+ data_e: nexamples length list of NxT trials
+ dt: how often the data are sampled
+ max_firing_rate: the firing rate that is associated with a value of 1.0
+ Returns:
+ spikified_e: a list of length E of the data represented as spikes,
+ sampled from the underlying Poisson process.
+ """
+
+ E = len(data_e)
+ spikes_e = []
+ for e in range(E):
+ data = data_e[e]
+ N,T = data.shape
+ data_s = np.zeros([N,T]).astype(np.int)
+ for n in range(N):
+ f = data[n,:]
+ s = rng.poisson(f*max_firing_rate*dt, size=T)
+ data_s[n,:] = s
+ spikes_e.append(data_s)
+
+ return spikes_e
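+# For example: a normalized rate of 1.0 with max_firing_rate=100 spikes/s and
+# dt=0.01 s gives a Poisson mean of 100 * 0.01 = 1.0 spike per bin.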
+
+
+def gaussify_data(data_e, rng, dt=1.0, max_firing_rate=100):
+ """ Apply gaussian noise to a continuous dataset whose values are between
+ 0.0 and 1.0
+
+ Args:
+ data_e: nexamples length list of NxT trials
+ dt: how often the data are sampled
+ max_firing_rate: the firing rate that is associated with a value of 1.0
+ Returns:
+ gauss_e: a list of length E of the data with noise.
+ """
+
+ E = len(data_e)
+ mfr = max_firing_rate
+ gauss_e = []
+ for e in range(E):
+ data = data_e[e]
+ N,T = data.shape
+ # use the passed-in rng (rather than the global np.random) so the noise is seeded
+ noisy_data = data * mfr + rng.randn(N,T) * (5.0*mfr) * np.sqrt(dt)
+ gauss_e.append(noisy_data)
+
+ return gauss_e
+
+
+
+def get_train_n_valid_inds(num_trials, train_fraction, nreplications):
+ """Split the numbers between 0 and num_trials-1 into two portions for
+ training and validation, based on the train fraction.
+ Args:
+ num_trials: the number of trials
+ train_fraction: (e.g. .80)
+ nreplications: the number of spiking trials per initial condition
+ Returns:
+ a 2-tuple of two lists: the training indices and validation indices
+ """
+ train_inds = []
+ valid_inds = []
+ for i in range(num_trials):
+ # This line divides up the trials so that within one initial condition,
+ # the randomness of spikifying the condition is shared among both
+ # training and validation data splits.
+ if (i % nreplications)+1 > train_fraction * nreplications:
+ valid_inds.append(i)
+ else:
+ train_inds.append(i)
+
+ return train_inds, valid_inds
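+# For example (illustrative numbers): with num_trials=10, train_fraction=0.8
+# and nreplications=10, trials 0..7 go to training and trials 8..9 to
+# validation, since (i % 10) + 1 exceeds 0.8 * 10 = 8 only for i = 8 and 9.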
+
+
+def split_list_by_inds(data, inds1, inds2):
+ """Take the data, a list, and split it up based on the indices in inds1 and
+ inds2.
+ Args:
+ data: the list of data to split
+ inds1, the first list of indices
+ inds2, the second list of indices
+ Returns: a 2-tuple of two lists.
+ """
+ if data is None or len(data) == 0:
+ return [], []
+ else:
+ dout1 = [data[i] for i in inds1]
+ dout2 = [data[i] for i in inds2]
+ return dout1, dout2
+
+
+def nparray_and_transpose(data_a_b_c):
+ """Convert the list of items in data to a numpy array, and transpose it
+ Args:
+ data_a_b_c: a nested, nested list of length a, with sublist length
+ b, with sublist length c.
+ Returns:
+ a numpy 3-tensor with dimensions a x c x b
+"""
+ data_axbxc = np.array([datum_b_c for datum_b_c in data_a_b_c])
+ data_axcxb = np.transpose(data_axbxc, axes=[0,2,1])
+ return data_axcxb
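+# For example: a list of E trials, each an N x T array, becomes an E x T x N
+# numpy array, e.g. nparray_and_transpose([np.zeros((5, 7))] * 3).shape == (3, 7, 5).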
+
+
+def add_alignment_projections(datasets, npcs, ntime=None, nsamples=None):
+ """Create a matrix that aligns the datasets a bit, under
+ the assumption that each dataset is observing the same underlying dynamical
+ system.
+
+ Args:
+ datasets: The dictionary of dataset structures.
+ npcs: The number of pcs for each, basically like lfads factors.
+ nsamples (optional): Number of samples to take for each dataset.
+ ntime (optional): Number of time steps to take in each sample.
+
+ Returns:
+ The dataset structures, with the field alignment_matrix_cxf added.
+ This matrix has dimensions (# channels) x npcs.
+"""
+ nchannels_all = 0
+ channel_idxs = {}
+ conditions_all = {}
+ nconditions_all = 0
+ for name, dataset in datasets.items():
+ cidxs = np.where(dataset['P_sxn'])[1] # non-zero entries in columns
+ channel_idxs[name] = [cidxs[0], cidxs[-1]+1]
+ nchannels_all += cidxs[-1]+1 - cidxs[0]
+ conditions_all[name] = np.unique(dataset['condition_labels_train'])
+
+ all_conditions_list = \
+ np.unique(np.ndarray.flatten(np.array(conditions_all.values())))
+ nconditions_all = all_conditions_list.shape[0]
+
+ if ntime is None:
+ ntime = dataset['train_data'].shape[1]
+ if nsamples is None:
+ nsamples = dataset['train_data'].shape[0]
+
+ # In the data workup in the paper, Chethan did intra condition
+ # averaging, so let's do that here.
+ avg_data_all = {}
+ for name, conditions in conditions_all.items():
+ dataset = datasets[name]
+ avg_data_all[name] = {}
+ for cname in conditions:
+ td_idxs = np.argwhere(np.array(dataset['condition_labels_train'])==cname)
+ data = np.squeeze(dataset['train_data'][td_idxs,:,:], axis=1)
+ avg_data = np.mean(data, axis=0)
+ avg_data_all[name][cname] = avg_data
+
+ # Visualize this in the morning.
+ all_data_nxtc = np.zeros([nchannels_all, ntime * nconditions_all])
+ for name, dataset in datasets.items():
+ cidx_s = channel_idxs[name][0]
+ cidx_f = channel_idxs[name][1]
+ for cname in conditions_all[name]:
+ cidxs = np.argwhere(all_conditions_list == cname)
+ if cidxs.shape[0] > 0:
+ cidx = cidxs[0][0]
+ all_tidxs = np.arange(0, ntime+1) + cidx*ntime
+ all_data_nxtc[cidx_s:cidx_f, all_tidxs[0]:all_tidxs[-1]] = \
+ avg_data_all[name][cname].T
+
+ # A bit of filtering. We don't care about spectral properties, or
+ # filtering artifacts, simply correlate time steps a bit.
+ filt_len = 6
+ bc_filt = np.ones([filt_len])/float(filt_len)
+ for c in range(nchannels_all):
+ all_data_nxtc[c,:] = scipy.signal.filtfilt(bc_filt, [1.0], all_data_nxtc[c,:])
+
+ # Compute the PCs.
+ all_data_mean_nx1 = np.mean(all_data_nxtc, axis=1, keepdims=True)
+ all_data_zm_nxtc = all_data_nxtc - all_data_mean_nx1
+ corr_mat_nxn = np.dot(all_data_zm_nxtc, all_data_zm_nxtc.T)
+ evals_n, evecs_nxn = np.linalg.eigh(corr_mat_nxn)
+ sidxs = np.flipud(np.argsort(evals_n)) # sort such that 0th is highest
+ evals_n = evals_n[sidxs]
+ evecs_nxn = evecs_nxn[:,sidxs]
+
+ # Project all the channels data onto the low-D PCA basis, where
+ # low-d is the npcs parameter.
+ all_data_pca_pxtc = np.dot(evecs_nxn[:, 0:npcs].T, all_data_zm_nxtc)
+
+ # Now for each dataset, we regress the channel data onto the top
+ # pcs, and this will be our alignment matrix for that dataset.
+ # |B - A*W|^2
+ for name, dataset in datasets.items():
+ cidx_s = channel_idxs[name][0]
+ cidx_f = channel_idxs[name][1]
+ all_data_zm_chxtc = all_data_zm_nxtc[cidx_s:cidx_f,:] # ch for channel
+ W_chxp, _, _, _ = \
+ np.linalg.lstsq(all_data_zm_chxtc.T, all_data_pca_pxtc.T)
+ dataset['alignment_matrix_cxf'] = W_chxp
+ alignment_bias_cx1 = all_data_mean_nx1[cidx_s:cidx_f]
+ dataset['alignment_bias_c'] = np.squeeze(alignment_bias_cx1, axis=1)
+
+ do_debug_plot = False
+ if do_debug_plot:
+ pc_vecs = evecs_nxn[:,0:npcs]
+ ntoplot = 400
+
+ plt.figure()
+ plt.plot(np.log10(evals_n), '-x')
+ plt.figure()
+ plt.subplot(311)
+ plt.imshow(all_data_pca_pxtc)
+ plt.colorbar()
+
+ plt.subplot(312)
+ plt.imshow(np.dot(W_chxp.T, all_data_zm_chxtc))
+ plt.colorbar()
+
+ plt.subplot(313)
+ plt.imshow(np.dot(all_data_zm_chxtc.T, W_chxp).T - all_data_pca_pxtc)
+ plt.colorbar()
+
+ import pdb
+ pdb.set_trace()
+
+ return datasets
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/synth_data/trained_itb/model-65000.data-00000-of-00001" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/synth_data/trained_itb/model-65000.data-00000-of-00001"
new file mode 100644
index 00000000..9459a2a1
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/synth_data/trained_itb/model-65000.data-00000-of-00001" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/synth_data/trained_itb/model-65000.index" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/synth_data/trained_itb/model-65000.index"
new file mode 100644
index 00000000..dd9c793a
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/synth_data/trained_itb/model-65000.index" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/synth_data/trained_itb/model-65000.meta" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/synth_data/trained_itb/model-65000.meta"
new file mode 100644
index 00000000..07bd2b96
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/synth_data/trained_itb/model-65000.meta" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/utils.py"
new file mode 100644
index 00000000..b9d3f7f9
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lfads/utils.py"
@@ -0,0 +1,367 @@
+# Copyright 2017 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# ==============================================================================
+from __future__ import print_function
+
+import os
+import h5py
+import json
+
+import numpy as np
+import tensorflow as tf
+
+
+def log_sum_exp(x_k):
+ """Computes log \sum exp in a numerically stable way.
+ log ( sum_i exp(x_i) )
+ log ( sum_i exp(x_i - m + m) ), with m = max(x_i)
+ log ( sum_i exp(x_i - m)*exp(m) )
+ log ( sum_i exp(x_i - m) + m
+
+ Args:
+ x_k - k -dimensional list of arguments to log_sum_exp.
+
+ Returns:
+ log_sum_exp of the arguments.
+ """
+ m = tf.reduce_max(x_k)
+ x1_k = x_k - m
+ u_k = tf.exp(x1_k)
+ z = tf.reduce_sum(u_k)
+ return tf.log(z) + m
+
+
+def linear(x, out_size, do_bias=True, alpha=1.0, identity_if_possible=False,
+ normalized=False, name=None, collections=None):
+ """Linear (affine) transformation, y = x W + b, for a variety of
+ configurations.
+
+ Args:
+ x: The input tensor to transform.
+ out_size: The integer size of non-batch output dimension.
+ do_bias (optional): Add a learnable bias vector to the operation.
+ alpha (optional): A multiplicative scaling for the weight initialization
+ of the matrix, in the form \alpha * 1/\sqrt{x.shape[1]}.
+ identity_if_possible (optional): just return identity,
+ if x.shape[1] == out_size.
+ normalized (optional): Option to divide out by the norms of the rows of W.
+ name (optional): The name prefix to add to variables.
+ collections (optional): List of additional collections. (Placed in
+ tf.GraphKeys.GLOBAL_VARIABLES already, so no need for that.)
+
+ Returns:
+ In the equation, y = x W + b, returns the tensorflow op that yields y.
+ """
+ in_size = int(x.get_shape()[1]) # from Dimension(10) -> 10
+ stddev = alpha/np.sqrt(float(in_size))
+ mat_init = tf.random_normal_initializer(0.0, stddev)
+ wname = (name + "/W") if name else "/W"
+
+ if identity_if_possible and in_size == out_size:
+ # Sometimes linear layers are nothing more than size adapters.
+ return tf.identity(x, name=(wname+'_ident'))
+
+ W,b = init_linear(in_size, out_size, do_bias=do_bias, alpha=alpha,
+ normalized=normalized, name=name, collections=collections)
+
+ if do_bias:
+ return tf.matmul(x, W) + b
+ else:
+ return tf.matmul(x, W)
+
+
+def init_linear(in_size, out_size, do_bias=True, mat_init_value=None,
+ bias_init_value=None, alpha=1.0, identity_if_possible=False,
+ normalized=False, name=None, collections=None, trainable=True):
+ """Linear (affine) transformation, y = x W + b, for a variety of
+ configurations.
+
+ Args:
+ in_size: The integer size of the non-batch input dimension. [(x),y]
+ out_size: The integer size of non-batch output dimension. [x,(y)]
+ do_bias (optional): Add a (learnable) bias vector to the operation,
+ if false, b will be None
+ mat_init_value (optional): numpy constant for matrix initialization; if
+ None, the matrix is initialized randomly using alpha below.
+ bias_init_value (optional): numpy constant for bias initialization; if
+ None, the bias is initialized to zeros.
+ alpha (optional): A multiplicative scaling for the weight initialization
+ of the matrix, in the form \alpha * 1/\sqrt{x.shape[1]}.
+ identity_if_possible (optional): just return identity,
+ if x.shape[1] == out_size.
+ normalized (optional): Option to divide out by the norms of the rows of W.
+ name (optional): The name prefix to add to variables.
+ collections (optional): List of additional collections. (Placed in
+ tf.GraphKeys.GLOBAL_VARIABLES already, so no need for that.)
+
+ Returns:
+ In the equation, y = x W + b, returns the pair (W, b).
+ """
+
+ if mat_init_value is not None and mat_init_value.shape != (in_size, out_size):
+ raise ValueError(
+ 'Provided mat_init_value must have shape [%d, %d].'%(in_size, out_size))
+ if bias_init_value is not None and bias_init_value.shape != (1,out_size):
+ raise ValueError(
+ 'Provided bias_init_value must have shape [1,%d].'%(out_size,))
+
+ if mat_init_value is None:
+ stddev = alpha/np.sqrt(float(in_size))
+ mat_init = tf.random_normal_initializer(0.0, stddev)
+
+ wname = (name + "/W") if name else "/W"
+
+ if identity_if_possible and in_size == out_size:
+ return (tf.constant(np.eye(in_size).astype(np.float32)),
+ tf.zeros(in_size))
+
+ # Note the use of get_variable vs. tf.Variable: get_variable does not allow
+ # initializing the variable with a concrete value, so tf.Variable is used
+ # whenever an init value is provided.
+ if normalized:
+ w_collections = [tf.GraphKeys.GLOBAL_VARIABLES, "norm-variables"]
+ if collections:
+ w_collections += collections
+ if mat_init_value is not None:
+ w = tf.Variable(mat_init_value, name=wname, collections=w_collections,
+ trainable=trainable)
+ else:
+ w = tf.get_variable(wname, [in_size, out_size], initializer=mat_init,
+ collections=w_collections, trainable=trainable)
+ w = tf.nn.l2_normalize(w, dim=0) # x W, so xW_j = \sum_i x_bi W_ij
+ else:
+ w_collections = [tf.GraphKeys.GLOBAL_VARIABLES]
+ if collections:
+ w_collections += collections
+ if mat_init_value is not None:
+ w = tf.Variable(mat_init_value, name=wname, collections=w_collections,
+ trainable=trainable)
+ else:
+ w = tf.get_variable(wname, [in_size, out_size], initializer=mat_init,
+ collections=w_collections, trainable=trainable)
+ b = None
+ if do_bias:
+ b_collections = [tf.GraphKeys.GLOBAL_VARIABLES]
+ if collections:
+ b_collections += collections
+ bname = (name + "/b") if name else "/b"
+ if bias_init_value is None:
+ b = tf.get_variable(bname, [1, out_size],
+ initializer=tf.zeros_initializer(),
+ collections=b_collections,
+ trainable=trainable)
+ else:
+ b = tf.Variable(bias_init_value, name=bname,
+ collections=b_collections,
+ trainable=trainable)
+
+ return (w, b)
+
+
+def write_data(data_fname, data_dict, use_json=False, compression=None):
+ """Write data in HD5F format.
+
+ Args:
+ data_fname: The filename of the file in which to write the data.
+ data_dict: The dictionary of data to write. The keys are strings
+ and the values are numpy arrays.
+ use_json (optional): human readable format for simple items
+ compression (optional): The compression to use for h5py (disabled by
+ default because the library borks on scalars, otherwise try 'gzip').
+ """
+
+ dir_name = os.path.dirname(data_fname)
+ if not os.path.exists(dir_name):
+ os.makedirs(dir_name)
+
+ if use_json:
+ the_file = open(data_fname,'w')
+ json.dump(data_dict, the_file)
+ the_file.close()
+ else:
+ try:
+ with h5py.File(data_fname, 'w') as hf:
+ for k, v in data_dict.items():
+ clean_k = k.replace('/', '_')
+ if clean_k != k:
+ print('Warning: saving variable with name: ', k, ' as ', clean_k)
+ else:
+ print('Saving variable with name: ', clean_k)
+ hf.create_dataset(clean_k, data=v, compression=compression)
+ except IOError:
+ print("Cannot open %s for writing.", data_fname)
+ raise
+
+
+def read_data(data_fname):
+ """ Read saved data in HDF5 format.
+
+ Args:
+ data_fname: The filename of the file from which to read the data.
+ Returns:
+ A dictionary whose keys will vary depending on dataset (but should
+ always contain the keys 'train_data' and 'valid_data') and whose
+ values are numpy arrays.
+ """
+
+ try:
+ with h5py.File(data_fname, 'r') as hf:
+ data_dict = {k: np.array(v) for k, v in hf.items()}
+ return data_dict
+ except IOError:
+ print("Cannot open %s for reading." % data_fname)
+ raise
+
+
+def write_datasets(data_path, data_fname_stem, dataset_dict, compression=None):
+ """Write datasets in HD5F format.
+
+ This function assumes the dataset_dict is a mapping ( string ->
+ to data_dict ). It calls write_data for each data dictionary,
+ post-fixing the data filename with the key of the dataset.
+
+ Args:
+ data_path: The path to the save directory.
+ data_fname_stem: The filename stem of the file in which to write the data.
+ dataset_dict: The dictionary of datasets. The keys are strings
+ and the values data dictionaries (str -> numpy arrays) associations.
+ compression (optional): The compression to use for h5py (disabled by
+ default because the library borks on scalars, otherwise try 'gzip').
+ """
+
+ full_name_stem = os.path.join(data_path, data_fname_stem)
+ for s, data_dict in dataset_dict.items():
+ write_data(full_name_stem + "_" + s, data_dict, compression=compression)
+
+
+def read_datasets(data_path, data_fname_stem):
+ """Read dataset sin HD5F format.
+
+ This function assumes the dataset_dict is a mapping ( string ->
+ to data_dict ). It calls write_data for each data dictionary,
+ post-fixing the data filename with the key of the dataset.
+
+ Args:
+ data_path: The path to the save directory.
+ data_fname_stem: The filename stem of the file in which to write the data.
+ """
+
+ dataset_dict = {}
+ fnames = os.listdir(data_path)
+
+ print ('loading data from ' + data_path + ' with stem ' + data_fname_stem)
+ for fname in fnames:
+ if fname.startswith(data_fname_stem):
+ data_dict = read_data(os.path.join(data_path,fname))
+ idx = len(data_fname_stem) + 1
+ key = fname[idx:]
+ data_dict['data_dim'] = data_dict['train_data'].shape[2]
+ data_dict['num_steps'] = data_dict['train_data'].shape[1]
+ dataset_dict[key] = data_dict
+
+ if len(dataset_dict) == 0:
+ raise ValueError("Failed to load any datasets, are you sure that the "
+ "'--data_dir' and '--data_filename_stem' flag values "
+ "are correct?")
+
+ print (str(len(dataset_dict)) + ' datasets loaded')
+ return dataset_dict
+
+
+# NUMPY utility functions
+def list_t_bxn_to_list_b_txn(values_t_bxn):
+ """Convert a length T list of BxN numpy tensors of length B list of TxN numpy
+ tensors.
+
+ Args:
+ values_t_bxn: The length T list of BxN numpy tensors.
+
+ Returns:
+ The length B list of TxN numpy tensors.
+ """
+ T = len(values_t_bxn)
+ B, N = values_t_bxn[0].shape
+ values_b_txn = []
+ for b in range(B):
+ values_pb_txn = np.zeros([T,N])
+ for t in range(T):
+ values_pb_txn[t,:] = values_t_bxn[t][b,:]
+ values_b_txn.append(values_pb_txn)
+
+ return values_b_txn
+
+
+def list_t_bxn_to_tensor_bxtxn(values_t_bxn):
+ """Convert a length T list of BxN numpy tensors to single numpy tensor with
+ shape BxTxN.
+
+ Args:
+ values_t_bxn: The length T list of BxN numpy tensors.
+
+ Returns:
+ values_bxtxn: The BxTxN numpy tensor.
+ """
+
+ T = len(values_t_bxn)
+ B, N = values_t_bxn[0].shape
+ values_bxtxn = np.zeros([B,T,N])
+ for t in range(T):
+ values_bxtxn[:,t,:] = values_t_bxn[t]
+
+ return values_bxtxn
+
+
+def tensor_bxtxn_to_list_t_bxn(tensor_bxtxn):
+ """Convert a numpy tensor with shape BxTxN to a length T list of numpy tensors
+ with shape BxT.
+
+ Args:
+ tensor_bxtxn: The BxTxN numpy tensor.
+
+ Returns:
+ A length T list of numpy tensors with shape BxN.
+ """
+
+ values_t_bxn = []
+ B, T, N = tensor_bxtxn.shape
+ for t in range(T):
+ values_t_bxn.append(np.squeeze(tensor_bxtxn[:,t,:]))
+
+ return values_t_bxn
+
+
+def flatten(list_of_lists):
+ """Takes a list of lists and returns a list of the elements.
+
+ Args:
+ list_of_lists: List of lists.
+
+ Returns:
+ flat_list: Flattened list.
+ flat_list_idxs: Flattened list indices.
+ """
+ flat_list = []
+ flat_list_idxs = []
+ start_idx = 0
+ for item in list_of_lists:
+ if isinstance(item, list):
+ flat_list += item
+ l = len(item)
+ idxs = range(start_idx, start_idx+l)
+ start_idx = start_idx+l
+ else: # a value
+ flat_list.append(item)
+ idxs = [start_idx]
+ start_idx += 1
+ flat_list_idxs.append(idxs)
+
+ return flat_list, flat_list_idxs
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lm_1b/BUILD" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lm_1b/BUILD"
new file mode 100644
index 00000000..ca5bc1f6
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lm_1b/BUILD"
@@ -0,0 +1,27 @@
+package(default_visibility = [":internal"])
+
+licenses(["notice"]) # Apache 2.0
+
+exports_files(["LICENSE"])
+
+package_group(
+ name = "internal",
+ packages = [
+ "//lm_1b/...",
+ ],
+)
+
+py_library(
+ name = "data_utils",
+ srcs = ["data_utils.py"],
+)
+
+py_binary(
+ name = "lm_1b_eval",
+ srcs = [
+ "lm_1b_eval.py",
+ ],
+ deps = [
+ ":data_utils",
+ ],
+)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lm_1b/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lm_1b/README.md"
new file mode 100644
index 00000000..88bdedc1
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lm_1b/README.md"
@@ -0,0 +1,194 @@
+Language Model on One Billion Word Benchmark
+
+Authors:
+
+Oriol Vinyals (vinyals@google.com, github: OriolVinyals),
+Xin Pan
+
+Paper Authors:
+
+Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
+
+TL;DR
+
+This is a model pretrained on the One Billion Word Benchmark.
+If you use this model in your publication, please cite the original paper:
+
+@article{jozefowicz2016exploring,
+ title={Exploring the Limits of Language Modeling},
+ author={Jozefowicz, Rafal and Vinyals, Oriol and Schuster, Mike
+ and Shazeer, Noam and Wu, Yonghui},
+ journal={arXiv preprint arXiv:1602.02410},
+ year={2016}
+}
+
+Introduction
+
+In this release, we open source a model trained on the One Billion Word
+Benchmark (http://arxiv.org/abs/1312.3005), a large language corpus in English
+which was released in 2013. This dataset contains about one billion words, and
+has a vocabulary size of about 800K words. It contains mostly news data. Since
+sentences in the training set are shuffled, models can ignore the context and
+focus on sentence level language modeling.
+
+In the original release and subsequent work, models trained on this dataset
+have been evaluated on the same test set, making it a standard benchmark for
+language modeling.
+Recently, we wrote an article (http://arxiv.org/abs/1602.02410) describing a
+hybrid model combining a character-level CNN, a large and deep LSTM, and a
+specific Softmax architecture, which allowed us to train the best model on
+this dataset thus far, almost halving the best perplexity previously obtained
+by others.
+
+Code Release
+
+The open-sourced components include:
+
+* TensorFlow GraphDef proto buffer text file.
+* TensorFlow pre-trained checkpoint shards.
+* Code used to evaluate the pre-trained model.
+* Vocabulary file.
+* Test set from LM-1B evaluation.
+
+The code supports 4 evaluation modes:
+
+* Given provided dataset, calculate the model's perplexity.
+* Given a prefix sentence, predict the next words.
+* Dump the softmax embedding, character-level CNN word embeddings.
+* Given a sentence, dump the embedding from the LSTM state.
+
+Results
+
+Model | Test Perplexity | Number of Params [billions]
+------|-----------------|----------------------------
+Sigmoid-RNN-2048 [Blackout] | 68.3 | 4.1
+Interpolated KN 5-gram, 1.1B n-grams [chelba2013one] | 67.6 | 1.76
+Sparse Non-Negative Matrix LM [shazeer2015sparse] | 52.9 | 33
+RNN-1024 + MaxEnt 9-gram features [chelba2013one] | 51.3 | 20
+LSTM-512-512 | 54.1 | 0.82
+LSTM-1024-512 | 48.2 | 0.82
+LSTM-2048-512 | 43.7 | 0.83
+LSTM-8192-2048 (No Dropout) | 37.9 | 3.3
+LSTM-8192-2048 (50\% Dropout) | 32.2 | 3.3
+2-Layer LSTM-8192-1024 (BIG LSTM) | 30.6 | 1.8
+(THIS RELEASE) BIG LSTM+CNN Inputs | 30.0 | 1.04
+
+How To Run
+
+Prerequisites:
+
+* Install TensorFlow.
+* Install Bazel.
+* Download the data files:
+ * Model GraphDef file:
+ [link](http://download.tensorflow.org/models/LM_LSTM_CNN/graph-2016-09-10.pbtxt)
+ * Model Checkpoint sharded file:
+ [1](http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-base)
+ [2](http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-char-embedding)
+ [3](http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-lstm)
+ [4](http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax0)
+ [5](http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax1)
+ [6](http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax2)
+ [7](http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax3)
+ [8](http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax4)
+ [9](http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax5)
+ [10](http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax6)
+ [11](http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax7)
+ [12](http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax8)
+ * Vocabulary file:
+ [link](http://download.tensorflow.org/models/LM_LSTM_CNN/vocab-2016-09-10.txt)
+ * Test dataset:
+ [link](http://download.tensorflow.org/models/LM_LSTM_CNN/test/news.en.heldout-00000-of-00050)
+* It is recommended to run on a modern desktop instead of a laptop.
+
+```shell
+# 1. Clone the code to your workspace.
+# 2. Download the data to your workspace.
+# 3. Create an empty WORKSPACE file in your workspace.
+# 4. Create an empty output directory in your workspace.
+# Example directory structure below:
+$ ls -R
+.:
+data lm_1b output WORKSPACE
+
+./data:
+ckpt-base ckpt-lstm ckpt-softmax1 ckpt-softmax3 ckpt-softmax5
+ckpt-softmax7 graph-2016-09-10.pbtxt vocab-2016-09-10.txt
+ckpt-char-embedding ckpt-softmax0 ckpt-softmax2 ckpt-softmax4 ckpt-softmax6
+ckpt-softmax8 news.en.heldout-00000-of-00050
+
+./lm_1b:
+BUILD data_utils.py lm_1b_eval.py README.md
+
+./output:
+
+# Build the code.
+$ bazel build -c opt lm_1b/...
+# Run sample mode:
+$ bazel-bin/lm_1b/lm_1b_eval --mode sample \
+ --prefix "I love that I" \
+ --pbtxt data/graph-2016-09-10.pbtxt \
+ --vocab_file data/vocab-2016-09-10.txt \
+ --ckpt 'data/ckpt-*'
+...(omitted some TensorFlow output)
+I love
+I love that
+I love that I
+I love that I find
+I love that I find that
+I love that I find that amazing
+...(omitted)
+
+# Run eval mode:
+$ bazel-bin/lm_1b/lm_1b_eval --mode eval \
+ --pbtxt data/graph-2016-09-10.pbtxt \
+ --vocab_file data/vocab-2016-09-10.txt \
+ --input_data data/news.en.heldout-00000-of-00050 \
+ --ckpt 'data/ckpt-*'
+...(omitted some TensorFlow output)
+Loaded step 14108582.
+# perplexity is high initially because words without context are harder to
+# predict.
+Eval Step: 0, Average Perplexity: 2045.512297.
+Eval Step: 1, Average Perplexity: 229.478699.
+Eval Step: 2, Average Perplexity: 208.116787.
+Eval Step: 3, Average Perplexity: 338.870601.
+Eval Step: 4, Average Perplexity: 228.950107.
+Eval Step: 5, Average Perplexity: 197.685857.
+Eval Step: 6, Average Perplexity: 156.287063.
+Eval Step: 7, Average Perplexity: 124.866189.
+Eval Step: 8, Average Perplexity: 147.204975.
+Eval Step: 9, Average Perplexity: 90.124864.
+Eval Step: 10, Average Perplexity: 59.897914.
+Eval Step: 11, Average Perplexity: 42.591137.
+...(omitted)
+Eval Step: 4529, Average Perplexity: 29.243668.
+Eval Step: 4530, Average Perplexity: 29.302362.
+Eval Step: 4531, Average Perplexity: 29.285674.
+...(omitted. At convergence, it should be around 30.)
+
+# Run dump_emb mode:
+$ bazel-bin/lm_1b/lm_1b_eval --mode dump_emb \
+ --pbtxt data/graph-2016-09-10.pbtxt \
+ --vocab_file data/vocab-2016-09-10.txt \
+ --ckpt 'data/ckpt-*' \
+ --save_dir output
+...(omitted some TensorFlow output)
+Finished softmax weights
+Finished word embedding 0/793471
+Finished word embedding 1/793471
+Finished word embedding 2/793471
+...(omitted)
+$ ls output/
+embeddings_softmax.npy ...
+
+# Run dump_lstm_emb mode:
+$ bazel-bin/lm_1b/lm_1b_eval --mode dump_lstm_emb \
+ --pbtxt data/graph-2016-09-10.pbtxt \
+ --vocab_file data/vocab-2016-09-10.txt \
+ --ckpt 'data/ckpt-*' \
+ --sentence "I love who I am ." \
+ --save_dir output
+$ ls output/
+lstm_emb_step_0.npy lstm_emb_step_2.npy lstm_emb_step_4.npy
+lstm_emb_step_6.npy lstm_emb_step_1.npy lstm_emb_step_3.npy
+lstm_emb_step_5.npy
+```
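+
+For reference, the `Average Perplexity` printed in eval mode is the
+exponential of the running, weight-averaged log perplexity over the batches
+seen so far. Below is a minimal NumPy sketch of that aggregation; it is an
+illustration only, not the released `lm_1b_eval.py`, and `batches` is assumed
+to yield the per-batch `log_perp` and target `weights` produced by the model.
+
+```python
+import numpy as np
+
+def running_perplexity(batches):
+  """Aggregate per-batch log perplexities into a running average perplexity.
+
+  batches: iterable of (log_perp, weights) pairs, where log_perp is the
+  model's mean log perplexity for the batch and weights marks which target
+  positions are real words (1.0) rather than padding (0.0).
+  """
+  sum_num, sum_den = 0.0, 0.0
+  for log_perp, weights in batches:
+    if np.isnan(log_perp):
+      continue  # skip degenerate batches
+    sum_num += log_perp * weights.mean()
+    sum_den += weights.mean()
+    yield np.exp(sum_num / sum_den)  # the "Average Perplexity" reported above
+```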
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lm_1b/data_utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lm_1b/data_utils.py"
new file mode 100644
index 00000000..ad8d3391
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lm_1b/data_utils.py"
@@ -0,0 +1,279 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""A library for loading 1B word benchmark dataset."""
+
+import random
+
+import numpy as np
+import tensorflow as tf
+
+
+class Vocabulary(object):
+ """Class that holds a vocabulary for the dataset."""
+
+ def __init__(self, filename):
+ """Initialize vocabulary.
+
+ Args:
+ filename: Vocabulary file name.
+ """
+
+ self._id_to_word = []
+ self._word_to_id = {}
+ self._unk = -1
+ self._bos = -1
+ self._eos = -1
+
+ with tf.gfile.Open(filename) as f:
+ idx = 0
+ for line in f:
+ word_name = line.strip()
+ if word_name == '<S>':
+ self._bos = idx
+ elif word_name == '</S>':
+ self._eos = idx
+ elif word_name == '<UNK>':
+ self._unk = idx
+ if word_name == '!!!MAXTERMID':
+ continue
+
+ self._id_to_word.append(word_name)
+ self._word_to_id[word_name] = idx
+ idx += 1
+
+ @property
+ def bos(self):
+ return self._bos
+
+ @property
+ def eos(self):
+ return self._eos
+
+ @property
+ def unk(self):
+ return self._unk
+
+ @property
+ def size(self):
+ return len(self._id_to_word)
+
+ def word_to_id(self, word):
+ if word in self._word_to_id:
+ return self._word_to_id[word]
+ return self.unk
+
+ def id_to_word(self, cur_id):
+ if cur_id < self.size:
+ return self._id_to_word[cur_id]
+ return 'ERROR'
+
+ def decode(self, cur_ids):
+ """Convert a list of ids to a sentence, with space inserted."""
+ return ' '.join([self.id_to_word(cur_id) for cur_id in cur_ids])
+
+ def encode(self, sentence):
+ """Convert a sentence to a list of ids, with special tokens added."""
+ word_ids = [self.word_to_id(cur_word) for cur_word in sentence.split()]
+ return np.array([self.bos] + word_ids + [self.eos], dtype=np.int32)
+
+
+class CharsVocabulary(Vocabulary):
+ """Vocabulary containing character-level information."""
+
+ def __init__(self, filename, max_word_length):
+ super(CharsVocabulary, self).__init__(filename)
+ self._max_word_length = max_word_length
+ chars_set = set()
+
+ for word in self._id_to_word:
+ chars_set |= set(word)
+
+ free_ids = []
+ for i in range(256):
+ if chr(i) in chars_set:
+ continue
+ free_ids.append(chr(i))
+
+ if len(free_ids) < 5:
+ raise ValueError('Not enough free char ids: %d' % len(free_ids))
+
+ self.bos_char = free_ids[0] # <begin sentence>
+ self.eos_char = free_ids[1] # <end sentence>
+ self.bow_char = free_ids[2] # <begin word>
+ self.eow_char = free_ids[3] # <end word>
+ self.pad_char = free_ids[4] # <padding>
+
+ chars_set |= {self.bos_char, self.eos_char, self.bow_char, self.eow_char,
+ self.pad_char}
+
+ self._char_set = chars_set
+ num_words = len(self._id_to_word)
+
+ self._word_char_ids = np.zeros([num_words, max_word_length], dtype=np.int32)
+
+ self.bos_chars = self._convert_word_to_char_ids(self.bos_char)
+ self.eos_chars = self._convert_word_to_char_ids(self.eos_char)
+
+ for i, word in enumerate(self._id_to_word):
+ self._word_char_ids[i] = self._convert_word_to_char_ids(word)
+
+ @property
+ def word_char_ids(self):
+ return self._word_char_ids
+
+ @property
+ def max_word_length(self):
+ return self._max_word_length
+
+ def _convert_word_to_char_ids(self, word):
+ code = np.zeros([self.max_word_length], dtype=np.int32)
+ code[:] = ord(self.pad_char)
+
+ if len(word) > self.max_word_length - 2:
+ word = word[:self.max_word_length-2]
+ cur_word = self.bow_char + word + self.eow_char
+ for j in range(len(cur_word)):
+ code[j] = ord(cur_word[j])
+ return code
+
+ def word_to_char_ids(self, word):
+ if word in self._word_to_id:
+ return self._word_char_ids[self._word_to_id[word]]
+ else:
+ return self._convert_word_to_char_ids(word)
+
+ def encode_chars(self, sentence):
+ chars_ids = [self.word_to_char_ids(cur_word)
+ for cur_word in sentence.split()]
+ return np.vstack([self.bos_chars] + chars_ids + [self.eos_chars])
+
+
+def get_batch(generator, batch_size, num_steps, max_word_length, pad=False):
+ """Read batches of input."""
+ cur_stream = [None] * batch_size
+
+ inputs = np.zeros([batch_size, num_steps], np.int32)
+ char_inputs = np.zeros([batch_size, num_steps, max_word_length], np.int32)
+ global_word_ids = np.zeros([batch_size, num_steps], np.int32)
+ targets = np.zeros([batch_size, num_steps], np.int32)
+ weights = np.ones([batch_size, num_steps], np.float32)
+
+ no_more_data = False
+ while True:
+ inputs[:] = 0
+ char_inputs[:] = 0
+ global_word_ids[:] = 0
+ targets[:] = 0
+ weights[:] = 0.0
+
+ for i in range(batch_size):
+ cur_pos = 0
+
+ while cur_pos < num_steps:
+ if cur_stream[i] is None or len(cur_stream[i][0]) <= 1:
+ try:
+ cur_stream[i] = list(generator.next())
+ except StopIteration:
+ # No more data, exhaust current streams and quit
+ no_more_data = True
+ break
+
+ how_many = min(len(cur_stream[i][0]) - 1, num_steps - cur_pos)
+ next_pos = cur_pos + how_many
+
+ inputs[i, cur_pos:next_pos] = cur_stream[i][0][:how_many]
+ char_inputs[i, cur_pos:next_pos] = cur_stream[i][1][:how_many]
+ global_word_ids[i, cur_pos:next_pos] = cur_stream[i][2][:how_many]
+ targets[i, cur_pos:next_pos] = cur_stream[i][0][1:how_many+1]
+ weights[i, cur_pos:next_pos] = 1.0
+
+ cur_pos = next_pos
+ cur_stream[i][0] = cur_stream[i][0][how_many:]
+ cur_stream[i][1] = cur_stream[i][1][how_many:]
+ cur_stream[i][2] = cur_stream[i][2][how_many:]
+
+ if pad:
+ break
+
+ if no_more_data and np.sum(weights) == 0:
+ # There is no more data and this is an empty batch. Done!
+ break
+ yield inputs, char_inputs, global_word_ids, targets, weights
+
+
+class LM1BDataset(object):
+ """Utility class for 1B word benchmark dataset.
+
+ The current implementation reads the data from the tokenized text files.
+ """
+
+ def __init__(self, filepattern, vocab):
+ """Initialize LM1BDataset reader.
+
+ Args:
+ filepattern: Dataset file pattern.
+ vocab: Vocabulary.
+ """
+ self._vocab = vocab
+ self._all_shards = tf.gfile.Glob(filepattern)
+ tf.logging.info('Found %d shards at %s', len(self._all_shards), filepattern)
+
+ def _load_random_shard(self):
+ """Randomly select a file and read it."""
+ return self._load_shard(random.choice(self._all_shards))
+
+ def _load_shard(self, shard_name):
+ """Read one file and convert to ids.
+
+ Args:
+ shard_name: file path.
+
+ Returns:
+ list of (id, char_id, global_word_id) tuples.
+ """
+ tf.logging.info('Loading data from: %s', shard_name)
+ with tf.gfile.Open(shard_name) as f:
+ sentences = f.readlines()
+ chars_ids = [self.vocab.encode_chars(sentence) for sentence in sentences]
+ ids = [self.vocab.encode(sentence) for sentence in sentences]
+
+ global_word_ids = []
+ current_idx = 0
+ for word_ids in ids:
+ current_size = len(word_ids) - 1 # without the <S> symbol
+ cur_ids = np.arange(current_idx, current_idx + current_size)
+ global_word_ids.append(cur_ids)
+ current_idx += current_size
+
+ tf.logging.info('Loaded %d words.', current_idx)
+ tf.logging.info('Finished loading')
+ return zip(ids, chars_ids, global_word_ids)
+
+ def _get_sentence(self, forever=True):
+ while True:
+ ids = self._load_random_shard()
+ for current_ids in ids:
+ yield current_ids
+ if not forever:
+ break
+
+ def get_batch(self, batch_size, num_steps, pad=False, forever=True):
+ return get_batch(self._get_sentence(forever), batch_size, num_steps,
+ self.vocab.max_word_length, pad=pad)
+
+ @property
+ def vocab(self):
+ return self._vocab
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lm_1b/lm_1b_eval.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lm_1b/lm_1b_eval.py"
new file mode 100644
index 00000000..ce863475
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lm_1b/lm_1b_eval.py"
@@ -0,0 +1,308 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Eval pre-trained 1 billion word language model.
+"""
+import os
+import sys
+
+import numpy as np
+from six.moves import xrange
+import tensorflow as tf
+
+from google.protobuf import text_format
+import data_utils
+
+FLAGS = tf.flags.FLAGS
+# General flags.
+tf.flags.DEFINE_string('mode', 'eval',
+ 'One of [sample, eval, dump_emb, dump_lstm_emb]. '
+ '"sample" mode samples future word predictions, using '
+ 'FLAGS.prefix as prefix (prefix could be left empty). '
+ '"eval" mode calculates perplexity of the '
+ 'FLAGS.input_data. '
+ '"dump_emb" mode dumps word and softmax embeddings to '
+ 'FLAGS.save_dir. embeddings are dumped in the same '
+ 'order as words in vocabulary. All words in vocabulary '
+ 'are dumped.'
+ 'dump_lstm_emb dumps lstm embeddings of FLAGS.sentence '
+ 'to FLAGS.save_dir.')
+tf.flags.DEFINE_string('pbtxt', '',
+ 'GraphDef proto text file used to construct model '
+ 'structure.')
+tf.flags.DEFINE_string('ckpt', '',
+ 'Checkpoint directory used to fill model values.')
+tf.flags.DEFINE_string('vocab_file', '', 'Vocabulary file.')
+tf.flags.DEFINE_string('save_dir', '',
+ 'Used for "dump_emb" mode to save word embeddings.')
+# sample mode flags.
+tf.flags.DEFINE_string('prefix', '',
+ 'Used for "sample" mode to predict next words.')
+tf.flags.DEFINE_integer('max_sample_words', 100,
+ 'Sampling stops either when </S> is met or this number '
+ 'of steps has passed.')
+tf.flags.DEFINE_integer('num_samples', 3,
+ 'Number of samples to generate for the prefix.')
+# dump_lstm_emb mode flags.
+tf.flags.DEFINE_string('sentence', '',
+ 'Used as input for "dump_lstm_emb" mode.')
+# eval mode flags.
+tf.flags.DEFINE_string('input_data', '',
+ 'Input data files for eval model.')
+tf.flags.DEFINE_integer('max_eval_steps', 1000000,
+ 'Maximum number of steps to run "eval" mode.')
+
+
+# For saving demo resources, use batch size 1 and step 1.
+BATCH_SIZE = 1
+NUM_TIMESTEPS = 1
+MAX_WORD_LEN = 50
+
+
+def _LoadModel(gd_file, ckpt_file):
+ """Load the model from GraphDef and Checkpoint.
+
+ Args:
+ gd_file: GraphDef proto text file.
+ ckpt_file: TensorFlow Checkpoint file.
+
+ Returns:
+ TensorFlow session and tensors dict.
+ """
+ with tf.Graph().as_default():
+ sys.stderr.write('Recovering graph.\n')
+ with tf.gfile.FastGFile(gd_file, 'r') as f:
+ s = f.read().decode()
+ gd = tf.GraphDef()
+ text_format.Merge(s, gd)
+
+ tf.logging.info('Recovering Graph %s', gd_file)
+ t = {}
+ [t['states_init'], t['lstm/lstm_0/control_dependency'],
+ t['lstm/lstm_1/control_dependency'], t['softmax_out'], t['class_ids_out'],
+ t['class_weights_out'], t['log_perplexity_out'], t['inputs_in'],
+ t['targets_in'], t['target_weights_in'], t['char_inputs_in'],
+ t['all_embs'], t['softmax_weights'], t['global_step']
+ ] = tf.import_graph_def(gd, {}, ['states_init',
+ 'lstm/lstm_0/control_dependency:0',
+ 'lstm/lstm_1/control_dependency:0',
+ 'softmax_out:0',
+ 'class_ids_out:0',
+ 'class_weights_out:0',
+ 'log_perplexity_out:0',
+ 'inputs_in:0',
+ 'targets_in:0',
+ 'target_weights_in:0',
+ 'char_inputs_in:0',
+ 'all_embs_out:0',
+ 'Reshape_3:0',
+ 'global_step:0'], name='')
+
+ sys.stderr.write('Recovering checkpoint %s\n' % ckpt_file)
+ sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
+ sess.run('save/restore_all', {'save/Const:0': ckpt_file})
+ sess.run(t['states_init'])
+
+ return sess, t
+
+
+def _EvalModel(dataset):
+ """Evaluate model perplexity using provided dataset.
+
+ Args:
+ dataset: LM1BDataset object.
+ """
+ sess, t = _LoadModel(FLAGS.pbtxt, FLAGS.ckpt)
+
+ current_step = t['global_step'].eval(session=sess)
+ sys.stderr.write('Loaded step %d.\n' % current_step)
+
+ data_gen = dataset.get_batch(BATCH_SIZE, NUM_TIMESTEPS, forever=False)
+ sum_num = 0.0
+ sum_den = 0.0
+ perplexity = 0.0
+ for i, (inputs, char_inputs, _, targets, weights) in enumerate(data_gen):
+ input_dict = {t['inputs_in']: inputs,
+ t['targets_in']: targets,
+ t['target_weights_in']: weights}
+ if 'char_inputs_in' in t:
+ input_dict[t['char_inputs_in']] = char_inputs
+ log_perp = sess.run(t['log_perplexity_out'], feed_dict=input_dict)
+
+ if np.isnan(log_perp):
+ sys.stderr.write('log_perplexity is NaN.\n')
+ else:
+ sum_num += log_perp * weights.mean()
+ sum_den += weights.mean()
+ if sum_den > 0:
+ perplexity = np.exp(sum_num / sum_den)
+
+ sys.stderr.write('Eval Step: %d, Average Perplexity: %f.\n' %
+ (i, perplexity))
+
+ if i > FLAGS.max_eval_steps:
+ break
+
+
+def _SampleSoftmax(softmax):
+ return min(np.sum(np.cumsum(softmax) < np.random.rand()), len(softmax) - 1)
+
+
+def _SampleModel(prefix_words, vocab):
+ """Predict next words using the given prefix words.
+
+ Args:
+ prefix_words: Prefix words.
+ vocab: Vocabulary. Contains the max word char-id length and converts between
+ words and ids.
+ """
+ targets = np.zeros([BATCH_SIZE, NUM_TIMESTEPS], np.int32)
+ weights = np.ones([BATCH_SIZE, NUM_TIMESTEPS], np.float32)
+
+ sess, t = _LoadModel(FLAGS.pbtxt, FLAGS.ckpt)
+
+ if prefix_words.find('<S>') != 0:
+ prefix_words = '<S> ' + prefix_words
+
+ prefix = [vocab.word_to_id(w) for w in prefix_words.split()]
+ prefix_char_ids = [vocab.word_to_char_ids(w) for w in prefix_words.split()]
+ for _ in xrange(FLAGS.num_samples):
+ inputs = np.zeros([BATCH_SIZE, NUM_TIMESTEPS], np.int32)
+ char_ids_inputs = np.zeros(
+ [BATCH_SIZE, NUM_TIMESTEPS, vocab.max_word_length], np.int32)
+ samples = prefix[:]
+ char_ids_samples = prefix_char_ids[:]
+ sent = ''
+ while True:
+ inputs[0, 0] = samples[0]
+ char_ids_inputs[0, 0, :] = char_ids_samples[0]
+ samples = samples[1:]
+ char_ids_samples = char_ids_samples[1:]
+
+ softmax = sess.run(t['softmax_out'],
+ feed_dict={t['char_inputs_in']: char_ids_inputs,
+ t['inputs_in']: inputs,
+ t['targets_in']: targets,
+ t['target_weights_in']: weights})
+
+ sample = _SampleSoftmax(softmax[0])
+ sample_char_ids = vocab.word_to_char_ids(vocab.id_to_word(sample))
+
+ if not samples:
+ samples = [sample]
+ char_ids_samples = [sample_char_ids]
+ sent += vocab.id_to_word(samples[0]) + ' '
+ sys.stderr.write('%s\n' % sent)
+
+ if (vocab.id_to_word(samples[0]) == '</S>' or
+ len(sent) > FLAGS.max_sample_words):
+ break
+
+
+def _DumpEmb(vocab):
+ """Dump the softmax weights and word embeddings to files.
+
+ Args:
+ vocab: Vocabulary. Contains vocabulary size and converts word to ids.
+ """
+ assert FLAGS.save_dir, 'Must specify FLAGS.save_dir for dump_emb.'
+ inputs = np.zeros([BATCH_SIZE, NUM_TIMESTEPS], np.int32)
+ targets = np.zeros([BATCH_SIZE, NUM_TIMESTEPS], np.int32)
+ weights = np.ones([BATCH_SIZE, NUM_TIMESTEPS], np.float32)
+
+ sess, t = _LoadModel(FLAGS.pbtxt, FLAGS.ckpt)
+
+ softmax_weights = sess.run(t['softmax_weights'])
+ fname = FLAGS.save_dir + '/embeddings_softmax.npy'
+ with tf.gfile.Open(fname, mode='w') as f:
+ np.save(f, softmax_weights)
+ sys.stderr.write('Finished softmax weights\n')
+
+ all_embs = np.zeros([vocab.size, 1024])
+ for i in xrange(vocab.size):
+ input_dict = {t['inputs_in']: inputs,
+ t['targets_in']: targets,
+ t['target_weights_in']: weights}
+ if 'char_inputs_in' in t:
+ input_dict[t['char_inputs_in']] = (
+ vocab.word_char_ids[i].reshape([-1, 1, MAX_WORD_LEN]))
+ embs = sess.run(t['all_embs'], input_dict)
+ all_embs[i, :] = embs
+ sys.stderr.write('Finished word embedding %d/%d\n' % (i, vocab.size))
+
+ fname = FLAGS.save_dir + '/embeddings_char_cnn.npy'
+ with tf.gfile.Open(fname, mode='w') as f:
+ np.save(f, all_embs)
+ sys.stderr.write('Embedding file saved\n')
+
+
+def _DumpSentenceEmbedding(sentence, vocab):
+ """Predict next words using the given prefix words.
+
+ Args:
+ sentence: Sentence words.
+ vocab: Vocabulary. Contains the max word char-id length and converts between
+ words and ids.
+ """
+ targets = np.zeros([BATCH_SIZE, NUM_TIMESTEPS], np.int32)
+ weights = np.ones([BATCH_SIZE, NUM_TIMESTEPS], np.float32)
+
+ sess, t = _LoadModel(FLAGS.pbtxt, FLAGS.ckpt)
+
+ if sentence.find('<S>') != 0:
+ sentence = '<S> ' + sentence
+
+ word_ids = [vocab.word_to_id(w) for w in sentence.split()]
+ char_ids = [vocab.word_to_char_ids(w) for w in sentence.split()]
+
+ inputs = np.zeros([BATCH_SIZE, NUM_TIMESTEPS], np.int32)
+ char_ids_inputs = np.zeros(
+ [BATCH_SIZE, NUM_TIMESTEPS, vocab.max_word_length], np.int32)
+ for i in xrange(len(word_ids)):
+ inputs[0, 0] = word_ids[i]
+ char_ids_inputs[0, 0, :] = char_ids[i]
+
+ # Add 'lstm/lstm_0/control_dependency' if you want to dump previous layer
+ # LSTM.
+ lstm_emb = sess.run(t['lstm/lstm_1/control_dependency'],
+ feed_dict={t['char_inputs_in']: char_ids_inputs,
+ t['inputs_in']: inputs,
+ t['targets_in']: targets,
+ t['target_weights_in']: weights})
+
+ fname = os.path.join(FLAGS.save_dir, 'lstm_emb_step_%d.npy' % i)
+ with tf.gfile.Open(fname, mode='w') as f:
+ np.save(f, lstm_emb)
+ sys.stderr.write('LSTM embedding step %d file saved\n' % i)
+
+
+def main(unused_argv):
+ vocab = data_utils.CharsVocabulary(FLAGS.vocab_file, MAX_WORD_LEN)
+
+ if FLAGS.mode == 'eval':
+ dataset = data_utils.LM1BDataset(FLAGS.input_data, vocab)
+ _EvalModel(dataset)
+ elif FLAGS.mode == 'sample':
+ _SampleModel(FLAGS.prefix, vocab)
+ elif FLAGS.mode == 'dump_emb':
+ _DumpEmb(vocab)
+ elif FLAGS.mode == 'dump_lstm_emb':
+ _DumpSentenceEmbedding(FLAGS.sentence, vocab)
+ else:
+ raise Exception('Mode not supported.')
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lm_commonsense/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lm_commonsense/README.md"
new file mode 100644
index 00000000..8e37f2e0
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lm_commonsense/README.md"
@@ -0,0 +1,166 @@
+# A Simple Method for Commonsense Reasoning
+
+This repository contains code to reproduce results from [*A Simple Method for Commonsense Reasoning*](https://arxiv.org/abs/1806.02847).
+
+Authors and contact:
+
+* Trieu H. Trinh (thtrieu@google.com, github: thtrieu)
+* Quoc V. Le (qvl@google.com)
+
+## TL;DR
+
+Commonsense reasoning is a long-standing challenge for deep learning. For example,
+it is difficult to use neural networks to tackle the Winograd Schema dataset, a hard subset of Pronoun Disambiguation Problems. In this work, we use language models to score substituted sentences to decide the correct referent of the ambiguous pronoun (see Figure below for an example).
+
+![A language model scores each candidate substitution of the ambiguous pronoun](method.jpg)
+
+This simple unsupervised method achieves new state-of-the-art (*as of June 1st, 2018*) results on both the PDP-60 and WSC-273 benchmarks (see Table below), without using rule-based reasoning or expensive annotated knowledge bases.
+
+| Commonsense-reasoning test | Previous best result | Ours |
+| ----------------------------|:----------------------:|:-----:|
+| Pronoun Disambiguation | 66.7% | 70% |
+| Winograd Schema Challenge | 52.8% | 63.7% |
+
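+The scoring idea can be summarized in a few lines. The sketch below is an
+illustration only, not the released `eval.py`: `lm_word_probs` is a
+hypothetical helper that returns one probability per word under some language
+model, and `[PRONOUN]` is a placeholder we introduce for the ambiguous pronoun.
+Each candidate antecedent is substituted into the sentence, the joint
+probability of the resulting words is computed, and the higher-scoring
+substitution wins.
+
+```python
+import numpy as np
+
+def choose_antecedent(sentence_template, candidates, lm_word_probs):
+  """Pick the candidate whose substituted sentence the LM finds most probable.
+
+  sentence_template: a sentence with a '[PRONOUN]' placeholder, e.g.
+    "The trophy doesn't fit in the suitcase because [PRONOUN] is too big."
+  candidates: possible antecedents, e.g. ["the trophy", "the suitcase"].
+  lm_word_probs: hypothetical helper returning a probability per word.
+  """
+  scores = []
+  for cand in candidates:
+    words = sentence_template.replace('[PRONOUN]', cand).split()
+    # Joint probability of the substituted sentence under the language model.
+    scores.append(np.prod(lm_word_probs(words), dtype=np.float64))
+  return candidates[int(np.argmax(scores))]
+```
+
+The released `eval.py` follows the same idea, using full sentence probabilities
+for PDP-60 and a partial scoring mode for WSC-273 (see
+`utils.compare_substitutions`), and ensembles the scores of up to 14 language
+models.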
+
+
+## Citation
+
+If you use our released models below in your publication, please cite the original paper:
+
+@article{TBD}
+
+
+## Requirements
+* Python >=2.6
+* Tensorflow >= v1.4
+* Numpy >= 1.12.1
+
+## Details of this release
+
+The open-sourced components include:
+
+* Test sets from Pronoun Disambiguation Problem (PDP-60) and Winograd Schema Challenges (WSC-273).
+* Tensorflow metagraph and checkpoints of 14 language models (See Appendix A in the paper).
+* A vocabulary file.
+* Code to reproduce results from the original paper.
+
+## How to run
+
+### 1. Download data files
+
+Download all files from the [Google Cloud Storage of this project](https://console.cloud.google.com/storage/browser/commonsense-reasoning/). The easiest way is to install and use `gsutil cp` command-line tool (See [install gsutil](https://cloud.google.com/storage/docs/gsutil_install)).
+
+
+```shell
+# Download everything from the project gs://commonsense-reasoning
+$ gsutil cp -R gs://commonsense-reasoning/* .
+Copying gs://commonsense-reasoning/reproduce/vocab.txt...
+Copying gs://commonsense-reasoning/reproduce/commonsense_test/pdp60.json...
+Copying gs://commonsense-reasoning/reproduce/commonsense_test/wsc273.json...
+
+...(omitted)
+```
+
+All downloaded content should be in `./reproduce/`. This includes the two test sets `pdp60.json` and `wsc273.json`, a vocabulary file `vocab.txt`, and checkpoints for all 14 language models, each consisting of three files (`.data`, `.index` and `.meta`). All checkpoint names start with `ckpt-best` since they were saved at the best perplexity on a held-out text corpus.
+
+```shell
+# Check for the content
+$ ls reproduce/*
+reproduce/vocab.txt
+
+reproduce/commonsense_test:
+pdp60.json wsc273.json
+
+reproduce/lm01:
+ckpt-best.data-00000-of-00001 ckpt-best.index ckpt-best.meta
+
+reproduce/lm02:
+ckpt-best.data-00000-of-00001 ckpt-best.index ckpt-best.meta
+
+reproduce/lm03:
+ckpt-best.data-00000-of-00001 ckpt-best.index ckpt-best.meta
+
+reproduce/lm04:
+ckpt-best.data-00000-of-00001 ckpt-best.index ckpt-best.meta
+
+reproduce/lm05:
+ckpt-best.data-00000-of-00001 ckpt-best.index ckpt-best.meta
+
+reproduce/lm06:
+ckpt-best.data-00000-of-00001 ckpt-best.index ckpt-best.meta
+
+reproduce/lm07:
+ckpt-best.data-00000-of-00001 ckpt-best.index ckpt-best.meta
+
+reproduce/lm08:
+ckpt-best.data-00000-of-00001 ckpt-best.index ckpt-best.meta
+
+reproduce/lm09:
+ckpt-best.data-00000-of-00001 ckpt-best.index ckpt-best.meta
+
+reproduce/lm10:
+ckpt-best.data-00000-of-00001 ckpt-best.index ckpt-best.meta
+
+reproduce/lm11:
+ckpt-best.data-00000-of-00001 ckpt-best.index ckpt-best.meta
+
+reproduce/lm12:
+ckpt-best.data-00000-of-00001 ckpt-best.index ckpt-best.meta
+
+reproduce/lm13:
+ckpt-best.data-00000-of-00001 ckpt-best.index ckpt-best.meta
+
+reproduce/lm14:
+ckpt-best.data-00000-of-00001 ckpt-best.index ckpt-best.meta
+```
+
+### 2. Run evaluation code
+
+To reproduce the results from the paper, simply run the `eval.py` script.
+
+```shell
+$ python eval.py --data_dir=reproduce
+
+Restored from ./reproduce/lm01
+Reset RNN states.
+Processing patch (1, 1) / (2, 4)
+Probs for
+[['Then' 'Dad' 'figured' ..., 'man' "'s" 'board-bill']
+ ['Then' 'Dad' 'figured' ..., 'man' "'s" 'board-bill']
+ ['Always' 'before' ',' ..., 'now' ',' 'for']
+ ...,
+ ['Mark' 'was' 'close' ..., 'promising' 'him' ',']
+ ['Mark' 'was' 'close' ..., 'promising' 'him' ',']
+ ['Mark' 'was' 'close' ..., 'promising' 'him' ',']]
+=
+[[ 1.64250596e-05 1.77780055e-06 4.14267970e-06 ..., 1.87315454e-03
+ 1.57723188e-01 6.31845817e-02]
+ [ 1.64250596e-05 1.77780055e-06 4.14267970e-06 ..., 1.87315454e-03
+ 1.57723188e-01 6.31845817e-02]
+ [ 1.28243030e-07 3.80435935e-03 1.12383246e-01 ..., 9.67682712e-03
+ 2.17407525e-01 1.08243264e-01]
+ ...,
+ [ 1.15557734e-04 2.92792241e-03 3.46455898e-04 ..., 2.72328052e-05
+ 3.37066874e-02 7.89367408e-02]
+ [ 1.15557734e-04 2.92792241e-03 3.46455898e-04 ..., 2.72328052e-05
+ 3.37066874e-02 7.89367408e-02]
+ [ 1.15557734e-04 2.92792241e-03 3.46455898e-04 ..., 2.72328052e-05
+ 3.37066874e-02 7.89367408e-02]]
+Processing patch (1, 2) / (2, 4)
+
+...(omitted)
+
+Accuracy of 1 LM(s) on pdp60 = 0.6
+
+...(omitted)
+
+Accuracy of 5 LM(s) on pdp60 = 0.7
+
+...(omitted)
+
+Accuracy of 10 LM(s) on wsc273 = 0.615
+
+...(omitted)
+
+Accuracy of 14 LM(s) on wsc273 = 0.637
+```
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lm_commonsense/eval.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lm_commonsense/eval.py"
new file mode 100644
index 00000000..e5b7ff98
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lm_commonsense/eval.py"
@@ -0,0 +1,190 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import pickle as pkl
+import numpy as np
+import tensorflow as tf
+import utils
+
+tf.app.flags.DEFINE_string(
+ 'data_dir', 'reproduce',
+ 'Path to directory containing data and model checkpoints.')
+
+
+FLAGS = tf.app.flags.FLAGS
+
+
+class EnsembleLM(object):
+ """Ensemble of language models."""
+
+ def __init__(self, test_data_name='wsc273'):
+ vocab_file = os.path.join(FLAGS.data_dir, 'vocab.txt')
+ self.vocab = utils.CharsVocabulary(vocab_file, 50)
+ assert test_data_name in ['pdp60', 'wsc273'], (
+ 'Test data must be pdp60 or wsc273, got {}'.format(test_data_name))
+ self.test_data_name = test_data_name
+
+ test_data = utils.parse_commonsense_reasoning_test(test_data_name)
+ self.question_ids, self.sentences, self.labels = test_data
+ self.all_probs = [] # aggregate single-model prediction here.
+
+ def add_single_model(self, model_name='lm1'):
+ """Add a single model into the current ensemble."""
+ # Create single LM
+ single_lm = SingleRecurrentLanguageModel(self.vocab, model_name)
+
+ # Add the single LM prediction.
+ probs = single_lm.assign_probs(self.sentences, self.test_data_name)
+ self.all_probs.append(probs)
+ print('Done adding {}'.format(model_name))
+
+ def evaluate(self):
+ """Evaluate the current ensemble."""
+ # Attach word probabilities and correctness label to each substitution
+ ensembled_probs = sum(self.all_probs) / len(self.all_probs)
+ scorings = []
+ for i, sentence in enumerate(self.sentences):
+ correctness = self.labels[i]
+ word_probs = ensembled_probs[i, :len(sentence)]
+ joint_prob = np.prod(word_probs, dtype=np.float64)
+
+ scorings.append(dict(
+ correctness=correctness,
+ sentence=sentence,
+ joint_prob=joint_prob,
+ word_probs=word_probs))
+ scoring_mode = 'full' if self.test_data_name == 'pdp60' else 'partial'
+ return utils.compare_substitutions(
+ self.question_ids, scorings, scoring_mode)
+
+
+class SingleRecurrentLanguageModel(object):
+ """Single Recurrent Language Model."""
+
+ def __init__(self, vocab, model_name='lm01'):
+ self.vocab = vocab
+ self.log_dir = os.path.join(FLAGS.data_dir, model_name)
+
+ def reset(self):
+ self.sess.run(self.tensors['states_init'])
+
+ def _score(self, word_patch):
+ """Score a matrix of shape (batch_size, num_timesteps+1) str tokens."""
+ word_ids = np.array(
+ [[self.vocab.word_to_id(word) for word in row]
+ for row in word_patch])
+ char_ids = np.array(
+ [[self.vocab.word_to_char_ids(word) for word in row]
+ for row in word_patch])
+ print('Probs for \n{}\n='.format(np.array(word_patch)[:, 1:]))
+
+ input_ids, target_ids = word_ids[:, :-1], word_ids[:, 1:]
+ input_char_ids = char_ids[:, :-1, :]
+
+ softmax = self.sess.run(self.tensors['softmax_out'], feed_dict={
+ self.tensors['inputs_in']: input_ids,
+ self.tensors['char_inputs_in']: input_char_ids
+ })
+
+ batch_size, num_timesteps = self.shape
+ softmax = softmax.reshape((num_timesteps, batch_size, -1))
+ softmax = np.transpose(softmax, [1, 0, 2])
+ probs = np.array([[softmax[row, col, target_ids[row, col]]
+ for col in range(num_timesteps)]
+ for row in range(batch_size)])
+ print(probs)
+ return probs
+
+ def _score_patches(self, word_patches):
+ """Score a 2D matrix of word_patches and stitch results together."""
+ batch_size, num_timesteps = self.shape
+ nrow, ncol = len(word_patches), len(word_patches[0])
+ max_len = num_timesteps * ncol
+ probs = np.zeros([0, max_len]) # accumulate results into this.
+
+ # Loop through the 2D matrix of word_patches and score each.
+ for i, row in enumerate(word_patches):
+ print('Reset RNN states.')
+ self.reset() # reset states before processing each row.
+ row_probs = np.zeros([batch_size, 0])
+ for j, word_patch in enumerate(row):
+ print('Processing patch '
+ '({}, {}) / ({}, {})'.format(i+1, j+1, nrow, ncol))
+ patch_probs = (self._score(word_patch) if word_patch else
+ np.zeros([batch_size, num_timesteps]))
+ row_probs = np.concatenate([row_probs, patch_probs], 1)
+ probs = np.concatenate([probs, row_probs], 0)
+ return probs
+
+ def assign_probs(self, sentences, test_data_name='wsc273'):
+ """Return prediction accuracy using this LM for a test."""
+
+ probs_cache = os.path.join(self.log_dir, '{}.probs'.format(test_data_name))
+ if os.path.exists(probs_cache):
+ print('Reading cached result from {}'.format(probs_cache))
+ with tf.gfile.Open(probs_cache, 'r') as f:
+ probs = pkl.load(f)
+ else:
+ tf.reset_default_graph()
+ self.sess = tf.Session()
+ # Build the graph.
+ saver = tf.train.import_meta_graph(
+ os.path.join(self.log_dir, 'ckpt-best.meta'))
+ saver.restore(self.sess, os.path.join(self.log_dir, 'ckpt-best'))
+ print('Restored from {}'.format(self.log_dir))
+ graph = tf.get_default_graph()
+ self.tensors = dict(
+ inputs_in=graph.get_tensor_by_name('test_inputs_in:0'),
+ char_inputs_in=graph.get_tensor_by_name('test_char_inputs_in:0'),
+ softmax_out=graph.get_tensor_by_name('SotaRNN_1/softmax_out:0'),
+ states_init=graph.get_operation_by_name('SotaRNN_1/states_init'))
+ self.shape = self.tensors['inputs_in'].shape.as_list()
+
+ # Cut sentences into patches of shape processable by the LM.
+ batch_size, num_timesteps = self.shape
+ word_patches = utils.cut_to_patches(sentences, batch_size, num_timesteps)
+ probs = self._score_patches(word_patches)
+
+ # Cache the probs since they are expensive to evaluate
+ with tf.gfile.Open(probs_cache, 'w') as f:
+ pkl.dump(probs, f)
+ return probs
+
+
+def evaluate_ensemble(test_data_name, number_of_lms):
+ ensemble = EnsembleLM(test_data_name)
+ model_list = ['lm{:02d}'.format(i+1) for i in range(number_of_lms)]
+ for model_name in model_list:
+ ensemble.add_single_model(model_name)
+ accuracy = ensemble.evaluate()
+ print('Accuracy of {} LM(s) on {} = {}'.format(
+ number_of_lms, test_data_name, accuracy))
+
+
+def main(_):
+ evaluate_ensemble('pdp60', 1) # 60%
+ evaluate_ensemble('pdp60', 5) # 70%
+ evaluate_ensemble('wsc273', 10) # 61.5%
+ evaluate_ensemble('wsc273', 14) # 63.7%
+
+
+if __name__ == '__main__':
+ tf.app.run(main)
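+
+# Example invocation (assumptions: the file is run as a script, and --data_dir
+# contains vocab.txt, the commonsense_test/*.json files and checkpoints named
+# lm01 .. lm14, as the code above expects):
+#
+#   python <this_script>.py --data_dir=reproduce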
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lm_commonsense/method.jpg" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lm_commonsense/method.jpg"
new file mode 100644
index 00000000..ee8a5506
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lm_commonsense/method.jpg" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lm_commonsense/utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lm_commonsense/utils.py"
new file mode 100644
index 00000000..d75f2b0f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lm_commonsense/utils.py"
@@ -0,0 +1,368 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import json
+import os
+import numpy as np
+import tensorflow as tf
+
+FLAGS = tf.flags.FLAGS
+
+
+class Vocabulary(object):
+ """Class that holds a vocabulary for the dataset."""
+
+ def __init__(self, filename):
+
+ self._id_to_word = []
+ self._word_to_id = {}
+ self._unk = -1
+ self._bos = -1
+ self._eos = -1
+
+ with tf.gfile.Open(filename) as f:
+ idx = 0
+ for line in f:
+ word_name = line.strip()
+ if word_name == '<S>':
+ self._bos = idx
+ elif word_name == '</S>':
+ self._eos = idx
+ elif word_name == '<UNK>':
+ self._unk = idx
+ if word_name == '!!!MAXTERMID':
+ continue
+
+ self._id_to_word.append(word_name)
+ self._word_to_id[word_name] = idx
+ idx += 1
+
+ @property
+ def bos(self):
+ return self._bos
+
+ @property
+ def eos(self):
+ return self._eos
+
+ @property
+ def unk(self):
+ return self._unk
+
+ @property
+ def size(self):
+ return len(self._id_to_word)
+
+ def word_to_id(self, word):
+ if word in self._word_to_id:
+ return self._word_to_id[word]
+ else:
+ if word.lower() in self._word_to_id:
+ return self._word_to_id[word.lower()]
+ return self.unk
+
+ def id_to_word(self, cur_id):
+ if cur_id < self.size:
+ return self._id_to_word[int(cur_id)]
+ return ''
+
+ def decode(self, cur_ids):
+ return ' '.join([self.id_to_word(cur_id) for cur_id in cur_ids])
+
+ def encode(self, sentence):
+ word_ids = [self.word_to_id(cur_word) for cur_word in sentence.split()]
+ return np.array([self.bos] + word_ids + [self.eos], dtype=np.int32)
+
+
+class CharsVocabulary(Vocabulary):
+ """Vocabulary containing character-level information."""
+
+ def __init__(self, filename, max_word_length):
+ super(CharsVocabulary, self).__init__(filename)
+
+ self._max_word_length = max_word_length
+ chars_set = set()
+
+ for word in self._id_to_word:
+ chars_set |= set(word)
+
+ free_ids = []
+ for i in range(256):
+ if chr(i) in chars_set:
+ continue
+ free_ids.append(chr(i))
+
+ if len(free_ids) < 5:
+ raise ValueError('Not enough free char ids: %d' % len(free_ids))
+
+ self.bos_char = free_ids[0] # <begin sentence>
+ self.eos_char = free_ids[1] # <end sentence>
+ self.bow_char = free_ids[2] # <begin word>
+ self.eow_char = free_ids[3] # <end word>
+ self.pad_char = free_ids[4] # <padding>
+
+ chars_set |= {self.bos_char, self.eos_char, self.bow_char, self.eow_char,
+ self.pad_char}
+
+ self._char_set = chars_set
+ num_words = len(self._id_to_word)
+
+ self._word_char_ids = np.zeros([num_words, max_word_length], dtype=np.int32)
+
+ self.bos_chars = self._convert_word_to_char_ids(self.bos_char)
+ self.eos_chars = self._convert_word_to_char_ids(self.eos_char)
+
+ for i, word in enumerate(self._id_to_word):
+ if i == self.bos:
+ self._word_char_ids[i] = self.bos_chars
+ elif i == self.eos:
+ self._word_char_ids[i] = self.eos_chars
+ else:
+ self._word_char_ids[i] = self._convert_word_to_char_ids(word)
+
+ @property
+ def max_word_length(self):
+ return self._max_word_length
+
+ def _convert_word_to_char_ids(self, word):
+ code = np.zeros([self.max_word_length], dtype=np.int32)
+ code[:] = ord(self.pad_char)
+
+ if len(word) > self.max_word_length - 2:
+ word = word[:self.max_word_length-2]
+ cur_word = self.bow_char + word + self.eow_char
+ for j in range(len(cur_word)):
+ code[j] = ord(cur_word[j])
+ return code
+
+ def word_to_char_ids(self, word):
+ if word in self._word_to_id:
+ return self._word_char_ids[self._word_to_id[word]]
+ else:
+ return self._convert_word_to_char_ids(word)
+
+ def encode_chars(self, sentence):
+ chars_ids = [self.word_to_char_ids(cur_word)
+ for cur_word in sentence.split()]
+ return np.vstack([self.bos_chars] + chars_ids + [self.eos_chars])
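+
+
+# Standalone sketch (hypothetical marker characters) of the padding scheme used
+# by CharsVocabulary._convert_word_to_char_ids above: a word is truncated to
+# max_word_length - 2 characters, wrapped in begin-of-word / end-of-word
+# markers, and right-padded with the padding character.
+def _char_ids_sketch(word='cat', bow='{', eow='}', pad='_', max_word_length=8):
+  word = word[:max_word_length - 2]
+  wrapped = bow + word + eow
+  codes = [ord(c) for c in wrapped]
+  codes += [ord(pad)] * (max_word_length - len(wrapped))
+  return codes  # [123, 99, 97, 116, 125, 95, 95, 95] for the defaults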
+
+
+_SPECIAL_CHAR_MAP = {
+ '\xe2\x80\x98': '\'',
+ '\xe2\x80\x99': '\'',
+ '\xe2\x80\x9c': '"',
+ '\xe2\x80\x9d': '"',
+ '\xe2\x80\x93': '-',
+ '\xe2\x80\x94': '-',
+ '\xe2\x88\x92': '-',
+ '\xce\x84': '\'',
+ '\xc2\xb4': '\'',
+ '`': '\''
+}
+
+_START_SPECIAL_CHARS = ['.', ',', '?', '!', ';', ':', '[', ']', '\'', '+', '/',
+ '\xc2\xa3', '$', '~', '*', '%', '{', '}', '#', '&', '-',
+ '"', '(', ')', '='] + list(_SPECIAL_CHAR_MAP.keys())
+_SPECIAL_CHARS = _START_SPECIAL_CHARS + [
+ '\'s', '\'m', '\'t', '\'re', '\'d', '\'ve', '\'ll']
+
+
+def tokenize(sentence):
+ """Tokenize a sentence."""
+ sentence = str(sentence)
+ words = sentence.strip().split()
+ tokenized = [] # return this
+
+ for word in words:
+ if word.lower() in ['mr.', 'ms.']:
+ tokenized.append(word)
+ continue
+
+ # Split special chars at the start of word
+ will_split = True
+ while will_split:
+ will_split = False
+ for char in _START_SPECIAL_CHARS:
+ if word.startswith(char):
+ tokenized.append(char)
+ word = word[len(char):]
+ will_split = True
+
+ # Split special chars at the end of word
+ special_end_tokens = []
+ will_split = True
+ while will_split:
+ will_split = False
+ for char in _SPECIAL_CHARS:
+ if word.endswith(char):
+ special_end_tokens = [char] + special_end_tokens
+ word = word[:-len(char)]
+ will_split = True
+
+ if word:
+ tokenized.append(word)
+ tokenized += special_end_tokens
+
+ # Add necessary end of sentence token.
+ if tokenized[-1] not in ['.', '!', '?']:
+ tokenized += ['.']
+ return tokenized
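+
+
+# Hand-worked example of the tokenizer above (derived from the rules in
+# tokenize(), not taken from an external source):
+#
+#   tokenize("I can't believe it!")
+#   == ['I', 'can', "'t", 'believe', 'it', '!']
+#
+# Contractions are split into separate tokens, and a final '.' is appended only
+# when the sentence does not already end in '.', '!' or '?'.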
+
+
+def parse_commonsense_reasoning_test(test_data_name):
+ """Read JSON test data."""
+ with tf.gfile.Open(os.path.join(
+ FLAGS.data_dir, 'commonsense_test',
+ '{}.json'.format(test_data_name)), 'r') as f:
+ data = json.load(f)
+
+ question_ids = [d['question_id'] for d in data]
+ sentences = [tokenize(d['substitution']) for d in data]
+ labels = [d['correctness'] for d in data]
+
+ return question_ids, sentences, labels
+
+
+PAD = '<padding>'
+
+
+def cut_to_patches(sentences, batch_size, num_timesteps):
+ """Cut sentences into patches of shape (batch_size, num_timesteps).
+
+ Args:
+ sentences: a list of sentences, each being a list of str tokens.
+ batch_size: batch size.
+ num_timesteps: number of backprop steps.
+
+ Returns:
+ patches: A 2D matrix,
+ each entry is a matrix of shape (batch_size, num_timesteps + 1).
+ """
+ preprocessed = [['<S>']+sentence+['</S>'] for sentence in sentences]
+ max_len = max([len(sent) for sent in preprocessed])
+
+ # Pad to shape [height, width]
+ # where height is a multiple of batch_size
+ # and width is a multiple of num_timesteps
+ nrow = int(np.ceil(len(preprocessed) * 1.0 / batch_size))
+ ncol = int(np.ceil(max_len * 1.0 / num_timesteps))
+ height, width = nrow * batch_size, ncol * num_timesteps + 1
+ preprocessed = [sent + [PAD] * (width - len(sent)) for sent in preprocessed]
+ preprocessed += [[PAD] * width] * (height - len(preprocessed))
+
+ # Cut preprocessed into patches of shape [batch_size, num_timesteps]
+ patches = []
+ for row in range(nrow):
+ patches.append([])
+ for col in range(ncol):
+ patch = [sent[col * num_timesteps:
+ (col+1) * num_timesteps + 1]
+ for sent in preprocessed[row * batch_size:
+ (row+1) * batch_size]]
+ if np.all(np.array(patch)[:, 1:] == PAD):
+ patch = None # no need to process this patch.
+ patches[-1].append(patch)
+ return patches
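+
+
+# Worked example (illustrative sizes): with three sentences whose longest
+# padded length is 7 tokens, batch_size=2 and num_timesteps=5, we get nrow=2,
+# ncol=2, height=4 and width=11. Each patch then covers num_timesteps + 1 = 6
+# columns, so consecutive patches overlap by one token and every position has
+# both an input word and a next-word target.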
+
+
+def _substitution_mask(sent1, sent2):
+ """Binary mask identifying substituted part in two sentences.
+
+ Example sentences and their masks:
+ First sentence = "I like the cat 's color"
+ mask = 0 0 0 1 0 0
+ Second sentence = "I like the yellow dog 's color"
+ mask = 0 0 0 1 1 0 0
+
+ Args:
+ sent1: first sentence
+ sent2: second sentence
+
+ Returns:
+ mask1: mask for first sentence
+ mask2: mask for second sentence
+ """
+ mask1_start, mask2_start = [], []
+ while sent1[0] == sent2[0]:
+ sent1 = sent1[1:]
+ sent2 = sent2[1:]
+ mask1_start.append(0.)
+ mask2_start.append(0.)
+
+ mask1_end, mask2_end = [], []
+ while sent1[-1] == sent2[-1]:
+ if (len(sent1) == 1) or (len(sent2) == 1):
+ break
+ sent1 = sent1[:-1]
+ sent2 = sent2[:-1]
+ mask1_end = [0.] + mask1_end
+ mask2_end = [0.] + mask2_end
+
+ assert sent1 or sent2, 'Two sentences are identical.'
+ return (mask1_start + [1.] * len(sent1) + mask1_end,
+ mask2_start + [1.] * len(sent2) + mask2_end)
+
+
+def _convert_to_partial(scoring1, scoring2):
+ """Convert full scoring into partial scoring."""
+ mask1, mask2 = _substitution_mask(
+ scoring1['sentence'], scoring2['sentence'])
+
+ def _partial_score(scoring, mask):
+ word_probs = [max(_) for _ in zip(scoring['word_probs'], mask)]
+ scoring.update(word_probs=word_probs,
+ joint_prob=np.prod(word_probs))
+
+ _partial_score(scoring1, mask1)
+ _partial_score(scoring2, mask2)
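+
+
+# Illustration of _partial_score with hypothetical numbers: for
+# word_probs = [0.2, 0.5, 0.9] and mask = [0., 1., 0.], the masked position
+# becomes max(0.5, 1.) = 1., so joint_prob = 0.2 * 1. * 0.9 = 0.18. The
+# substituted words themselves drop out of the product and only the shared
+# context contributes to the comparison.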
+
+
+def compare_substitutions(question_ids, scorings, mode='full'):
+ """Return accuracy by comparing two consecutive scorings."""
+ prediction_correctness = []
+ # Compare two consecutive substitutions
+ for i in range(len(scorings) // 2):
+ scoring1, scoring2 = scorings[2*i: 2*i+2]
+ if mode == 'partial': # fix joint prob into partial prob
+ _convert_to_partial(scoring1, scoring2)
+
+ prediction_correctness.append(
+ (scoring2['joint_prob'] > scoring1['joint_prob']) ==
+ scoring2['correctness'])
+
+ # Two consecutive substitutions always belong to the same question
+ question_ids = [qid for i, qid in enumerate(question_ids) if i % 2 == 0]
+ assert len(question_ids) == len(prediction_correctness)
+ num_questions = len(set(question_ids))
+
+ # Question is correctly answered only if
+ # all predictions of the same question_id is correct
+ num_correct_answer = 0
+ previous_qid = None
+ correctly_answered = False
+ for predict, qid in zip(prediction_correctness, question_ids):
+ if qid != previous_qid:
+ previous_qid = qid
+ num_correct_answer += int(correctly_answered)
+ correctly_answered = True
+ correctly_answered = correctly_answered and predict
+ num_correct_answer += int(correctly_answered)
+
+ return num_correct_answer / num_questions
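+
+
+# Worked example (hypothetical values): if the per-pair prediction_correctness
+# comes out as [True, False, True] and the corresponding deduplicated
+# question_ids are [1, 1, 2], then question 1 counts as wrong because one of
+# its pairs was mispredicted, question 2 counts as right, and the returned
+# accuracy is 1 / 2 = 0.5.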
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/README" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/README"
new file mode 100644
index 00000000..3c859166
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/README"
@@ -0,0 +1,16 @@
+A TensorFlow implementation of mobile video object detection, as proposed in the following paper:
+Mobile Video Object Detection with Temporally-Aware Feature Maps (CVPR 2018).
+
+http://openaccess.thecvf.com/content_cvpr_2018/papers/Liu_Mobile_Video_Object_CVPR_2018_paper.pdf
+
+@article{liu2017mobile,
+ title={Mobile Video Object Detection with Temporally-Aware Feature Maps},
+ author={Liu, Mason and Zhu, Menglong},
+ journal={CVPR},
+ year={2018}
+}
+
+If you have any questions regarding this codebase, please contact us:
+masonliuw@gmail.com
+yinxiao@google.com
+menglong@google.com
\ No newline at end of file
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/configs/lstm_ssd_mobilenet_v1_imagenet.config" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/configs/lstm_ssd_mobilenet_v1_imagenet.config"
new file mode 100644
index 00000000..2b716a5e
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/configs/lstm_ssd_mobilenet_v1_imagenet.config"
@@ -0,0 +1,232 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+# For training on Imagenet Video with LSTM Mobilenet V1
+
+[object_detection.protos.lstm_model] {
+ train_unroll_length: 4
+ eval_unroll_length: 4
+}
+
+model {
+ ssd {
+ num_classes: 30
+ box_coder {
+ faster_rcnn_box_coder {
+ y_scale: 10.0
+ x_scale: 10.0
+ height_scale: 5.0
+ width_scale: 5.0
+ }
+ }
+ matcher {
+ argmax_matcher {
+ matched_threshold: 0.5
+ unmatched_threshold: 0.5
+ ignore_thresholds: false
+ negatives_lower_than_unmatched: true
+ force_match_for_each_row: true
+ }
+ }
+ similarity_calculator {
+ iou_similarity {
+ }
+ }
+ anchor_generator {
+ ssd_anchor_generator {
+ num_layers: 5
+ min_scale: 0.2
+ max_scale: 0.95
+ aspect_ratios: 1.0
+ aspect_ratios: 2.0
+ aspect_ratios: 0.5
+ aspect_ratios: 3.0
+ aspect_ratios: 0.3333
+ }
+ }
+ image_resizer {
+ fixed_shape_resizer {
+ height: 256
+ width: 256
+ }
+ }
+ box_predictor {
+ convolutional_box_predictor {
+ min_depth: 0
+ max_depth: 0
+ num_layers_before_predictor: 3
+ use_dropout: false
+ dropout_keep_probability: 0.8
+ kernel_size: 3
+ box_code_size: 4
+ apply_sigmoid_to_scores: false
+ use_depthwise: true
+ conv_hyperparams {
+ activation: RELU_6,
+ regularizer {
+ l2_regularizer {
+ weight: 0.00004
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ stddev: 0.03
+ mean: 0.0
+ }
+ }
+ batch_norm {
+ train: true,
+ scale: true,
+ center: true,
+ decay: 0.9997,
+ epsilon: 0.001,
+ }
+ }
+ }
+ }
+ feature_extractor {
+ type: 'lstm_mobilenet_v1'
+ min_depth: 16
+ depth_multiplier: 1.0
+ use_depthwise: true
+ conv_hyperparams {
+ activation: RELU_6,
+ regularizer {
+ l2_regularizer {
+ weight: 0.00004
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ stddev: 0.03
+ mean: 0.0
+ }
+ }
+ batch_norm {
+ train: true,
+ scale: true,
+ center: true,
+ decay: 0.9997,
+ epsilon: 0.001,
+ }
+ }
+ }
+ loss {
+ classification_loss {
+ weighted_sigmoid {
+ }
+ }
+ localization_loss {
+ weighted_smooth_l1 {
+ }
+ }
+ hard_example_miner {
+ num_hard_examples: 3000
+ iou_threshold: 0.99
+ loss_type: CLASSIFICATION
+ max_negatives_per_positive: 3
+ min_negatives_per_image: 0
+ }
+ classification_weight: 1.0
+ localization_weight: 4.0
+ }
+ normalize_loss_by_num_matches: true
+ post_processing {
+ batch_non_max_suppression {
+ score_threshold: -20.0
+ iou_threshold: 0.5
+ max_detections_per_class: 100
+ max_total_detections: 100
+ }
+ score_converter: SIGMOID
+ }
+ }
+}
+
+train_config: {
+ batch_size: 8
+ data_augmentation_options {
+ random_horizontal_flip {
+ }
+ }
+ data_augmentation_options {
+ ssd_random_crop {
+ }
+ }
+ optimizer {
+ use_moving_average: false
+ rms_prop_optimizer: {
+ learning_rate: {
+ exponential_decay_learning_rate {
+ initial_learning_rate: 0.002
+ decay_steps: 200000
+ decay_factor: 0.95
+ }
+ }
+ momentum_optimizer_value: 0.9
+ decay: 0.9
+ epsilon: 1.0
+ }
+ }
+
+ from_detection_checkpoint: true
+ gradient_clipping_by_norm: 10.0
+ batch_queue_capacity: 12
+ prefetch_queue_capacity: 4
+ fine_tune_checkpoint: "/path/to/checkpoint/"
+ fine_tune_checkpoint_type: "detection"
+}
+
+
+train_input_reader: {
+ shuffle_buffer_size: 32
+ queue_capacity: 12
+ prefetch_size: 12
+ min_after_dequeue: 4
+ label_map_path: "path/to/label_map"
+ external_input_reader {
+ [lstm_object_detection.input_readers.GoogleInputReader.google_input_reader] {
+ tf_record_video_input_reader: {
+ input_path: "your/cns/path"
+ data_type: TF_SEQUENCE_EXAMPLE
+ video_length: 4
+ }
+ }
+ }
+}
+
+eval_config: {
+ metrics_set: "coco_evaluation_last_frame"
+ use_moving_averages: true
+ min_score_threshold: 0.5
+ max_num_boxes_to_visualize: 300
+ visualize_groundtruth_boxes: true
+ groundtruth_box_visualization_color: "red"
+}
+
+eval_input_reader: {
+ label_map_path: "path/to/label_map"
+ external_input_reader {
+ [lstm_object_detection.input_readers.GoogleInputReader.google_input_reader] {
+ tf_record_video_input_reader: {
+ input_path: "your/cns/path"
+ data_type: TF_SEQUENCE_EXAMPLE
+ video_length: 4
+ }
+ }
+ }
+ shuffle: true
+ num_readers: 1
+}
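+
+# Note (derived from lstm_object_detection/inputs/seq_dataset_builder.py in
+# this patch, not from the config schema itself): video_length is expected to
+# be a multiple of the unroll length, since the input pipeline computes
+#   num_steps = video_length / unroll_length   (here 4 / 4 = 1)
+# and initializes the LSTM state with shape
+#   [height / 32, width / 32, lstm_state_depth]   (here 256 / 32 = 8).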
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/eval.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/eval.py"
new file mode 100644
index 00000000..e46383e3
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/eval.py"
@@ -0,0 +1,110 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+r"""Evaluation executable for detection models.
+
+This executable is used to evaluate DetectionModels. Example usage:
+ ./eval \
+ --logtostderr \
+ --checkpoint_dir=path/to/checkpoint_dir \
+ --eval_dir=path/to/eval_dir \
+ --pipeline_config_path=pipeline_config.pbtxt
+"""
+
+import functools
+import os
+import tensorflow as tf
+from google.protobuf import text_format
+from google3.pyglib import app
+from google3.pyglib import flags
+from lstm_object_detection import evaluator
+from lstm_object_detection import model_builder
+from lstm_object_detection.inputs import seq_dataset_builder
+from lstm_object_detection.utils import config_util
+from object_detection.utils import label_map_util
+
+tf.logging.set_verbosity(tf.logging.INFO)
+flags = tf.app.flags
+flags.DEFINE_boolean('eval_training_data', False,
+ 'If training data should be evaluated for this job.')
+flags.DEFINE_string('checkpoint_dir', '',
+ 'Directory containing checkpoints to evaluate, typically '
+ 'set to `train_dir` used in the training job.')
+flags.DEFINE_string('eval_dir', '', 'Directory to write eval summaries to.')
+flags.DEFINE_string('pipeline_config_path', '',
+ 'Path to a pipeline_pb2.TrainEvalPipelineConfig config '
+ 'file. If provided, other configs are ignored')
+flags.DEFINE_boolean('run_once', False, 'Option to only run a single pass of '
+ 'evaluation. Overrides the `max_evals` parameter in the '
+ 'provided config.')
+FLAGS = flags.FLAGS
+
+
+def main(unused_argv):
+ assert FLAGS.checkpoint_dir, '`checkpoint_dir` is missing.'
+ assert FLAGS.eval_dir, '`eval_dir` is missing.'
+ if FLAGS.pipeline_config_path:
+ configs = config_util.get_configs_from_pipeline_file(
+ FLAGS.pipeline_config_path)
+ else:
+ configs = config_util.get_configs_from_multiple_files(
+ model_config_path=FLAGS.model_config_path,
+ eval_config_path=FLAGS.eval_config_path,
+ eval_input_config_path=FLAGS.input_config_path)
+
+ pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)
+ config_text = text_format.MessageToString(pipeline_proto)
+ tf.gfile.MakeDirs(FLAGS.eval_dir)
+ with tf.gfile.Open(os.path.join(FLAGS.eval_dir, 'pipeline.config'),
+ 'wb') as f:
+ f.write(config_text)
+
+ model_config = configs['model']
+ lstm_config = configs['lstm_model']
+ eval_config = configs['eval_config']
+ input_config = configs['eval_input_config']
+
+ if FLAGS.eval_training_data:
+ input_config.external_input_reader.CopyFrom(
+ configs['train_input_config'].external_input_reader)
+ lstm_config.eval_unroll_length = lstm_config.train_unroll_length
+
+ model_fn = functools.partial(
+ model_builder.build,
+ model_config=model_config,
+ lstm_config=lstm_config,
+ is_training=False)
+
+ def get_next(config, model_config, lstm_config, unroll_length):
+ return seq_dataset_builder.build(config, model_config, lstm_config,
+ unroll_length)
+
+ create_input_dict_fn = functools.partial(get_next, input_config, model_config,
+ lstm_config,
+ lstm_config.eval_unroll_length)
+
+ label_map = label_map_util.load_labelmap(input_config.label_map_path)
+ max_num_classes = max([item.id for item in label_map.item])
+ categories = label_map_util.convert_label_map_to_categories(
+ label_map, max_num_classes)
+
+ if FLAGS.run_once:
+ eval_config.max_evals = 1
+
+ evaluator.evaluate(create_input_dict_fn, model_fn, eval_config, categories,
+ FLAGS.checkpoint_dir, FLAGS.eval_dir)
+
+if __name__ == '__main__':
+ app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/evaluator.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/evaluator.py"
new file mode 100644
index 00000000..28d4e043
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/evaluator.py"
@@ -0,0 +1,337 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Detection model evaluator.
+
+This file provides a generic evaluation method that can be used to evaluate a
+DetectionModel.
+
+"""
+
+import logging
+import tensorflow as tf
+from lstm_object_detection.metrics import coco_evaluation_all_frames
+from object_detection import eval_util
+from object_detection.core import prefetcher
+from object_detection.core import standard_fields as fields
+from object_detection.metrics import coco_evaluation
+from object_detection.utils import object_detection_evaluation
+
+
+# A dictionary of metric names to classes that implement the metric. The classes
+# in the dictionary must implement the
+# utils.object_detection_evaluation.DetectionEvaluator interface.
+EVAL_METRICS_CLASS_DICT = {
+ 'pascal_voc_detection_metrics':
+ object_detection_evaluation.PascalDetectionEvaluator,
+ 'weighted_pascal_voc_detection_metrics':
+ object_detection_evaluation.WeightedPascalDetectionEvaluator,
+ 'pascal_voc_instance_segmentation_metrics':
+ object_detection_evaluation.PascalInstanceSegmentationEvaluator,
+ 'weighted_pascal_voc_instance_segmentation_metrics':
+ object_detection_evaluation.WeightedPascalInstanceSegmentationEvaluator,
+ 'open_images_detection_metrics':
+ object_detection_evaluation.OpenImagesDetectionEvaluator,
+ 'coco_detection_metrics':
+ coco_evaluation.CocoDetectionEvaluator,
+ 'coco_mask_metrics':
+ coco_evaluation.CocoMaskEvaluator,
+ 'coco_evaluation_all_frames':
+ coco_evaluation_all_frames.CocoEvaluationAllFrames,
+}
+
+EVAL_DEFAULT_METRIC = 'pascal_voc_detection_metrics'
+
+
+def _create_detection_op(model, input_dict, batch):
+ """Create detection ops.
+
+ Args:
+ model: model to perform predictions with.
+ input_dict: A dict that holds the input data.
+ batch: batch size for evaluation.
+
+ Returns:
+ Detection tensor ops.
+ """
+ video_tensor = tf.stack(list(input_dict[fields.InputDataFields.image]))
+ preprocessed_video, true_image_shapes = model.preprocess(
+ tf.to_float(video_tensor))
+ if batch is not None:
+ prediction_dict = model.predict(preprocessed_video, true_image_shapes,
+ batch)
+ else:
+ prediction_dict = model.predict(preprocessed_video, true_image_shapes)
+
+ return model.postprocess(prediction_dict, true_image_shapes)
+
+
+def _extract_prediction_tensors(model,
+ create_input_dict_fn,
+ ignore_groundtruth=False):
+ """Restores the model in a tensorflow session.
+
+ Args:
+ model: model to perform predictions with.
+ create_input_dict_fn: function to create input tensor dictionaries.
+ ignore_groundtruth: whether groundtruth should be ignored.
+
+
+ Returns:
+ A list of tensor dictionaries with evaluation results, one per frame.
+ """
+ input_dict = create_input_dict_fn()
+ batch = None
+ if 'batch' in input_dict:
+ batch = input_dict.pop('batch')
+ else:
+ prefetch_queue = prefetcher.prefetch(input_dict, capacity=500)
+ input_dict = prefetch_queue.dequeue()
+ # consistent format for images and videos
+ for key, value in input_dict.iteritems():
+ input_dict[key] = (value,)
+
+ detections = _create_detection_op(model, input_dict, batch)
+
+ # Print out analysis of the model.
+ tf.contrib.tfprof.model_analyzer.print_model_analysis(
+ tf.get_default_graph(),
+ tfprof_options=tf.contrib.tfprof.model_analyzer.
+ TRAINABLE_VARS_PARAMS_STAT_OPTIONS)
+ tf.contrib.tfprof.model_analyzer.print_model_analysis(
+ tf.get_default_graph(),
+ tfprof_options=tf.contrib.tfprof.model_analyzer.FLOAT_OPS_OPTIONS)
+
+ num_frames = len(input_dict[fields.InputDataFields.image])
+ ret = []
+ for i in range(num_frames):
+ original_image = tf.expand_dims(input_dict[fields.InputDataFields.image][i],
+ 0)
+ groundtruth = None
+ if not ignore_groundtruth:
+ groundtruth = {
+ fields.InputDataFields.groundtruth_boxes:
+ input_dict[fields.InputDataFields.groundtruth_boxes][i],
+ fields.InputDataFields.groundtruth_classes:
+ input_dict[fields.InputDataFields.groundtruth_classes][i],
+ }
+ optional_keys = (
+ fields.InputDataFields.groundtruth_area,
+ fields.InputDataFields.groundtruth_is_crowd,
+ fields.InputDataFields.groundtruth_difficult,
+ fields.InputDataFields.groundtruth_group_of,
+ )
+ for opt_key in optional_keys:
+ if opt_key in input_dict:
+ groundtruth[opt_key] = input_dict[opt_key][i]
+ if fields.DetectionResultFields.detection_masks in detections:
+ groundtruth[fields.InputDataFields.groundtruth_instance_masks] = (
+ input_dict[fields.InputDataFields.groundtruth_instance_masks][i])
+
+ detections_frame = {
+ key: tf.expand_dims(value[i], 0)
+ for key, value in detections.iteritems()
+ }
+
+ source_id = (
+ batch.key[0] if batch is not None else
+ input_dict[fields.InputDataFields.source_id][i])
+ ret.append(
+ eval_util.result_dict_for_single_example(
+ original_image,
+ source_id,
+ detections_frame,
+ groundtruth,
+ class_agnostic=(fields.DetectionResultFields.detection_classes
+ not in detections),
+ scale_to_absolute=True))
+ return ret
+
+
+def get_evaluators(eval_config, categories):
+ """Returns the evaluator class according to eval_config, valid for categories.
+
+ Args:
+ eval_config: evaluation configurations.
+ categories: a list of categories to evaluate.
+ Returns:
+ A list of instances of DetectionEvaluator.
+
+ Raises:
+ ValueError: if metric is not in the metric class dictionary.
+ """
+ eval_metric_fn_keys = eval_config.metrics_set
+ if not eval_metric_fn_keys:
+ eval_metric_fn_keys = [EVAL_DEFAULT_METRIC]
+ evaluators_list = []
+ for eval_metric_fn_key in eval_metric_fn_keys:
+ if eval_metric_fn_key not in EVAL_METRICS_CLASS_DICT:
+ raise ValueError('Metric not found: {}'.format(eval_metric_fn_key))
+ else:
+ evaluators_list.append(
+ EVAL_METRICS_CLASS_DICT[eval_metric_fn_key](categories=categories))
+ return evaluators_list
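+
+
+# Example (illustrative): with eval_config.metrics_set set to
+# ['coco_evaluation_all_frames'], this returns
+# [coco_evaluation_all_frames.CocoEvaluationAllFrames(categories=categories)];
+# an unrecognized metric name raises ValueError.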
+
+
+def evaluate(create_input_dict_fn,
+ create_model_fn,
+ eval_config,
+ categories,
+ checkpoint_dir,
+ eval_dir,
+ graph_hook_fn=None):
+ """Evaluation function for detection models.
+
+ Args:
+ create_input_dict_fn: a function to create a tensor input dictionary.
+ create_model_fn: a function that creates a DetectionModel.
+ eval_config: an eval_pb2.EvalConfig protobuf.
+ categories: a list of category dictionaries. Each dict in the list should
+ have an integer 'id' field and string 'name' field.
+ checkpoint_dir: directory to load the checkpoints to evaluate from.
+ eval_dir: directory to write evaluation metrics summary to.
+ graph_hook_fn: Optional function that is called after the training graph is
+ completely built. This is helpful to perform additional changes to the
+ training graph such as optimizing batchnorm. The function should modify
+ the default graph.
+
+ Returns:
+ metrics: A dictionary containing metric names and values from the latest
+ run.
+ """
+
+ model = create_model_fn()
+
+ if eval_config.ignore_groundtruth and not eval_config.export_path:
+ logging.fatal('If ignore_groundtruth=True then an export_path is '
+ 'required. Aborting!!!')
+
+ tensor_dicts = _extract_prediction_tensors(
+ model=model,
+ create_input_dict_fn=create_input_dict_fn,
+ ignore_groundtruth=eval_config.ignore_groundtruth)
+
+ def _process_batch(tensor_dicts,
+ sess,
+ batch_index,
+ counters,
+ losses_dict=None):
+ """Evaluates tensors in tensor_dicts, visualizing the first K examples.
+
+ This function calls sess.run on tensor_dicts, evaluating the original_image
+ tensor only on the first K examples and visualizing detections overlaid
+ on this original_image.
+
+ Args:
+ tensor_dicts: a dictionary of tensors
+ sess: tensorflow session
+ batch_index: the index of the batch amongst all batches in the run.
+ counters: a dictionary holding 'success' and 'skipped' fields which can
+ be updated to keep track of number of successful and failed runs,
+ respectively. If these fields are not updated, then the success/skipped
+ counter values shown at the end of evaluation will be incorrect.
+ losses_dict: Optional dictionary of scalar loss tensors. Necessary only
+ for matching function signature in third_party eval_util.py.
+
+ Returns:
+ result_dict: a dictionary of numpy arrays
+ result_losses_dict: a dictionary of scalar losses. This is empty if input
+ losses_dict is None. Necessary only for matching function signature in
+ third_party eval_util.py.
+ """
+ if batch_index % 10 == 0:
+ logging.info('Running eval ops batch %d', batch_index)
+ if not losses_dict:
+ losses_dict = {}
+ try:
+ result_dicts, result_losses_dict = sess.run([tensor_dicts, losses_dict])
+ counters['success'] += 1
+ except tf.errors.InvalidArgumentError:
+ logging.info('Skipping image')
+ counters['skipped'] += 1
+ return {}
+ num_images = len(tensor_dicts)
+ for i in range(num_images):
+ result_dict = result_dicts[i]
+ global_step = tf.train.global_step(sess, tf.train.get_global_step())
+ tag = 'image-%d' % (batch_index * num_images + i)
+ if batch_index < eval_config.num_visualizations / num_images:
+ eval_util.visualize_detection_results(
+ result_dict,
+ tag,
+ global_step,
+ categories=categories,
+ summary_dir=eval_dir,
+ export_dir=eval_config.visualization_export_dir,
+ show_groundtruth=eval_config.visualize_groundtruth_boxes,
+ groundtruth_box_visualization_color=eval_config.
+ groundtruth_box_visualization_color,
+ min_score_thresh=eval_config.min_score_threshold,
+ max_num_predictions=eval_config.max_num_boxes_to_visualize,
+ skip_scores=eval_config.skip_scores,
+ skip_labels=eval_config.skip_labels,
+ keep_image_id_for_visualization_export=eval_config.
+ keep_image_id_for_visualization_export)
+ if num_images > 1:
+ return result_dicts, result_losses_dict
+ else:
+ return result_dicts[0], result_losses_dict
+
+ variables_to_restore = tf.global_variables()
+ global_step = tf.train.get_or_create_global_step()
+ variables_to_restore.append(global_step)
+
+ if graph_hook_fn:
+ graph_hook_fn()
+
+ if eval_config.use_moving_averages:
+ variable_averages = tf.train.ExponentialMovingAverage(0.0)
+ variables_to_restore = variable_averages.variables_to_restore()
+ for key in variables_to_restore.keys():
+ if 'moving_mean' in key:
+ variables_to_restore[key.replace(
+ 'moving_mean', 'moving_mean/ExponentialMovingAverage')] = (
+ variables_to_restore[key])
+ del variables_to_restore[key]
+ if 'moving_variance' in key:
+ variables_to_restore[key.replace(
+ 'moving_variance', 'moving_variance/ExponentialMovingAverage')] = (
+ variables_to_restore[key])
+ del variables_to_restore[key]
+
+ saver = tf.train.Saver(variables_to_restore)
+
+ def _restore_latest_checkpoint(sess):
+ latest_checkpoint = tf.train.latest_checkpoint(checkpoint_dir)
+ saver.restore(sess, latest_checkpoint)
+
+ metrics = eval_util.repeated_checkpoint_run(
+ tensor_dict=tensor_dicts,
+ summary_dir=eval_dir,
+ evaluators=get_evaluators(eval_config, categories),
+ batch_processor=_process_batch,
+ checkpoint_dirs=[checkpoint_dir],
+ variables_to_restore=None,
+ restore_fn=_restore_latest_checkpoint,
+ num_batches=eval_config.num_examples,
+ eval_interval_secs=eval_config.eval_interval_secs,
+ max_number_of_evaluations=(1 if eval_config.ignore_groundtruth else
+ eval_config.max_evals
+ if eval_config.max_evals else None),
+ master=eval_config.eval_master,
+ save_graph=eval_config.save_graph,
+ save_graph_dir=(eval_dir if eval_config.save_graph else ''))
+
+ return metrics
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/inputs/seq_dataset_builder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/inputs/seq_dataset_builder.py"
new file mode 100644
index 00000000..0446ada6
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/inputs/seq_dataset_builder.py"
@@ -0,0 +1,241 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+r"""tf.data.Dataset builder.
+
+Creates data sources for DetectionModels from an InputReader config. See
+input_reader.proto for options.
+
+Note: If users wish to also use their own InputReaders with the Object
+Detection configuration framework, they should define their own builder function
+that wraps the build function.
+"""
+import tensorflow as tf
+import tensorflow.google as google_tf
+from tensorflow.contrib.training.python.training import sequence_queueing_state_saver as sqss
+from lstm_object_detection.inputs import tf_sequence_example_decoder
+from lstm_object_detection.protos import input_reader_google_pb2
+from object_detection.core import preprocessor
+from object_detection.core import preprocessor_cache
+from object_detection.core import standard_fields as fields
+from object_detection.protos import input_reader_pb2
+from object_detection.utils import ops as util_ops
+
+parallel_reader = tf.contrib.slim.parallel_reader
+# TODO(yinxiao): Make the following variable into configurable proto.
+# Padding size for the labeled objects in each frame. Here we assume each
+# frame has a total number of objects less than _PADDING_SIZE.
+_PADDING_SIZE = 30
+
+
+def _build_training_batch_dict(batch_sequences_with_states, unroll_length,
+ batch_size):
+ """Builds training batch samples.
+
+ Args:
+ batch_sequences_with_states: A batch_sequences_with_states object.
+ unroll_length: Unrolled length for LSTM training.
+ batch_size: Batch size for queue outputs.
+
+ Returns:
+ A dictionary of tensors based on items in input_reader_config.
+ """
+ seq_tensors_dict = {
+ fields.InputDataFields.image: [],
+ fields.InputDataFields.groundtruth_boxes: [],
+ fields.InputDataFields.groundtruth_classes: [],
+ 'batch': batch_sequences_with_states,
+ }
+ for i in range(unroll_length):
+ for j in range(batch_size):
+ filtered_dict = util_ops.filter_groundtruth_with_nan_box_coordinates({
+ fields.InputDataFields.groundtruth_boxes: (
+ batch_sequences_with_states.sequences['groundtruth_boxes'][j][i]),
+ fields.InputDataFields.groundtruth_classes: (
+ batch_sequences_with_states.sequences['groundtruth_classes'][j][i]
+ ),
+ })
+ filtered_dict = util_ops.retain_groundtruth_with_positive_classes(
+ filtered_dict)
+ seq_tensors_dict[fields.InputDataFields.image].append(
+ batch_sequences_with_states.sequences['image'][j][i])
+ seq_tensors_dict[fields.InputDataFields.groundtruth_boxes].append(
+ filtered_dict[fields.InputDataFields.groundtruth_boxes])
+ seq_tensors_dict[fields.InputDataFields.groundtruth_classes].append(
+ filtered_dict[fields.InputDataFields.groundtruth_classes])
+ seq_tensors_dict[fields.InputDataFields.image] = tuple(
+ seq_tensors_dict[fields.InputDataFields.image])
+ seq_tensors_dict[fields.InputDataFields.groundtruth_boxes] = tuple(
+ seq_tensors_dict[fields.InputDataFields.groundtruth_boxes])
+ seq_tensors_dict[fields.InputDataFields.groundtruth_classes] = tuple(
+ seq_tensors_dict[fields.InputDataFields.groundtruth_classes])
+
+ return seq_tensors_dict
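+
+
+# Layout note (based on the loops above): the returned tuples are time-major.
+# For unroll_length T and batch_size B, the image tuple holds T * B tensors
+# ordered (t=0, b=0), (t=0, b=1), ..., (t=1, b=0), ..., and the groundtruth
+# tuples follow the same ordering.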
+
+
+def build(input_reader_config,
+ model_config,
+ lstm_config,
+ unroll_length,
+ data_augmentation_options=None,
+ batch_size=1):
+ """Builds a tensor dictionary based on the InputReader config.
+
+ Args:
+ input_reader_config: An input_reader_builder.InputReader object.
+ model_config: A model.proto object containing the config for the desired
+ DetectionModel.
+ lstm_config: LSTM specific configs.
+ unroll_length: Unrolled length for LSTM training.
+ data_augmentation_options: A list of tuples, where each tuple contains a
+ data augmentation function and a dictionary containing arguments and their
+ values (see preprocessor.py).
+ batch_size: Batch size for queue outputs.
+
+ Returns:
+ A dictionary of tensors based on items in the input_reader_config.
+
+ Raises:
+ ValueError: On invalid input reader proto.
+ ValueError: If no input paths are specified.
+ """
+ if not isinstance(input_reader_config, input_reader_pb2.InputReader):
+ raise ValueError('input_reader_config not of type '
+ 'input_reader_pb2.InputReader.')
+
+ external_reader_config = input_reader_config.external_input_reader
+ google_input_reader_config = external_reader_config.Extensions[
+ input_reader_google_pb2.GoogleInputReader.google_input_reader]
+ input_reader_type = google_input_reader_config.WhichOneof('input_reader')
+
+ if input_reader_type == 'tf_record_video_input_reader':
+ config = google_input_reader_config.tf_record_video_input_reader
+ reader_type_class = tf.TFRecordReader
+ else:
+ raise ValueError(
+ 'Unsupported reader in input_reader_config: %s' % input_reader_type)
+
+ if not config.input_path:
+ raise ValueError('At least one input path must be specified in '
+ '`input_reader_config`.')
+ key, value = parallel_reader.parallel_read(
+ config.input_path[:], # Convert `RepeatedScalarContainer` to list.
+ reader_class=reader_type_class,
+ num_epochs=(input_reader_config.num_epochs
+ if input_reader_config.num_epochs else None),
+ num_readers=input_reader_config.num_readers,
+ shuffle=input_reader_config.shuffle,
+ dtypes=[tf.string, tf.string],
+ capacity=input_reader_config.queue_capacity,
+ min_after_dequeue=input_reader_config.min_after_dequeue)
+
+ # TODO(yinxiao): Add loading instance mask option.
+ decoder = tf_sequence_example_decoder.TFSequenceExampleDecoder()
+
+ keys_to_decode = [
+ fields.InputDataFields.image, fields.InputDataFields.groundtruth_boxes,
+ fields.InputDataFields.groundtruth_classes
+ ]
+ tensor_dict = decoder.decode(value, items=keys_to_decode)
+
+ tensor_dict['image'].set_shape([None, None, None, 3])
+ tensor_dict['groundtruth_boxes'].set_shape([None, None, 4])
+
+ height = model_config.ssd.image_resizer.fixed_shape_resizer.height
+ width = model_config.ssd.image_resizer.fixed_shape_resizer.width
+
+ # If data augmentation is specified in the config file, the preprocessor
+ # will be called here to augment the data as specified. Most common
+ # augmentations include horizontal flip and cropping.
+ if data_augmentation_options:
+ images_pre = tf.split(tensor_dict['image'], config.video_length, axis=0)
+ bboxes_pre = tf.split(
+ tensor_dict['groundtruth_boxes'], config.video_length, axis=0)
+ labels_pre = tf.split(
+ tensor_dict['groundtruth_classes'], config.video_length, axis=0)
+ images_proc, bboxes_proc, labels_proc = [], [], []
+ cache = preprocessor_cache.PreprocessorCache()
+
+ for i, _ in enumerate(images_pre):
+ image_dict = {
+ fields.InputDataFields.image:
+ images_pre[i],
+ fields.InputDataFields.groundtruth_boxes:
+ tf.squeeze(bboxes_pre[i], axis=0),
+ fields.InputDataFields.groundtruth_classes:
+ tf.squeeze(labels_pre[i], axis=0),
+ }
+ image_dict = preprocessor.preprocess(
+ image_dict,
+ data_augmentation_options,
+ func_arg_map=preprocessor.get_default_func_arg_map(),
+ preprocess_vars_cache=cache)
+ # Pads detection count to _PADDING_SIZE.
+ image_dict[fields.InputDataFields.groundtruth_boxes] = tf.pad(
+ image_dict[fields.InputDataFields.groundtruth_boxes],
+ [[0, _PADDING_SIZE], [0, 0]])
+ image_dict[fields.InputDataFields.groundtruth_boxes] = tf.slice(
+ image_dict[fields.InputDataFields.groundtruth_boxes], [0, 0],
+ [_PADDING_SIZE, -1])
+ image_dict[fields.InputDataFields.groundtruth_classes] = tf.pad(
+ image_dict[fields.InputDataFields.groundtruth_classes],
+ [[0, _PADDING_SIZE]])
+ image_dict[fields.InputDataFields.groundtruth_classes] = tf.slice(
+ image_dict[fields.InputDataFields.groundtruth_classes], [0],
+ [_PADDING_SIZE])
+ images_proc.append(image_dict[fields.InputDataFields.image])
+ bboxes_proc.append(image_dict[fields.InputDataFields.groundtruth_boxes])
+ labels_proc.append(image_dict[fields.InputDataFields.groundtruth_classes])
+ tensor_dict['image'] = tf.concat(images_proc, axis=0)
+ tensor_dict['groundtruth_boxes'] = tf.stack(bboxes_proc, axis=0)
+ tensor_dict['groundtruth_classes'] = tf.stack(labels_proc, axis=0)
+ else:
+ # Pads detection count to _PADDING_SIZE per frame.
+ tensor_dict['groundtruth_boxes'] = tf.pad(
+ tensor_dict['groundtruth_boxes'], [[0, 0], [0, _PADDING_SIZE], [0, 0]])
+ tensor_dict['groundtruth_boxes'] = tf.slice(
+ tensor_dict['groundtruth_boxes'], [0, 0, 0], [-1, _PADDING_SIZE, -1])
+ tensor_dict['groundtruth_classes'] = tf.pad(
+ tensor_dict['groundtruth_classes'], [[0, 0], [0, _PADDING_SIZE]])
+ tensor_dict['groundtruth_classes'] = tf.slice(
+ tensor_dict['groundtruth_classes'], [0, 0], [-1, _PADDING_SIZE])
+
+ tensor_dict['image'], _ = preprocessor.resize_image(
+ tensor_dict['image'], new_height=height, new_width=width)
+
+ num_steps = config.video_length / unroll_length
+
+ init_states = {
+ 'lstm_state_c':
+ tf.zeros([height / 32, width / 32, lstm_config.lstm_state_depth]),
+ 'lstm_state_h':
+ tf.zeros([height / 32, width / 32, lstm_config.lstm_state_depth]),
+ 'lstm_state_step':
+ tf.constant(num_steps, shape=[]),
+ }
+
+ batch = sqss.batch_sequences_with_states(
+ input_key=key,
+ input_sequences=tensor_dict,
+ input_context={},
+ input_length=None,
+ initial_states=init_states,
+ num_unroll=unroll_length,
+ batch_size=batch_size,
+ num_threads=batch_size,
+ make_keys_unique=True,
+ capacity=batch_size * batch_size)
+
+ return _build_training_batch_dict(batch, unroll_length, batch_size)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/inputs/seq_dataset_builder_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/inputs/seq_dataset_builder_test.py"
new file mode 100644
index 00000000..fe9267bf
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/inputs/seq_dataset_builder_test.py"
@@ -0,0 +1,283 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for dataset_builder."""
+
+import os
+import numpy as np
+import tensorflow as tf
+
+from google.protobuf import text_format
+from google3.testing.pybase import parameterized
+from tensorflow.core.example import example_pb2
+from tensorflow.core.example import feature_pb2
+from lstm_object_detection.inputs import seq_dataset_builder
+from lstm_object_detection.protos import pipeline_pb2 as internal_pipeline_pb2
+from object_detection.builders import preprocessor_builder
+from object_detection.core import standard_fields as fields
+from object_detection.protos import input_reader_pb2
+from object_detection.protos import pipeline_pb2
+from object_detection.protos import preprocessor_pb2
+
+
+class DatasetBuilderTest(parameterized.TestCase):
+
+ def _create_tf_record(self):
+ path = os.path.join(self.get_temp_dir(), 'tfrecord')
+ writer = tf.python_io.TFRecordWriter(path)
+
+ image_tensor = np.random.randint(255, size=(16, 16, 3)).astype(np.uint8)
+ with self.test_session():
+ encoded_jpeg = tf.image.encode_jpeg(tf.constant(image_tensor)).eval()
+
+ sequence_example = example_pb2.SequenceExample(
+ context=feature_pb2.Features(
+ feature={
+ 'image/format':
+ feature_pb2.Feature(
+ bytes_list=feature_pb2.BytesList(
+ value=['jpeg'.encode('utf-8')])),
+ 'image/height':
+ feature_pb2.Feature(
+ int64_list=feature_pb2.Int64List(value=[16])),
+ 'image/width':
+ feature_pb2.Feature(
+ int64_list=feature_pb2.Int64List(value=[16])),
+ }),
+ feature_lists=feature_pb2.FeatureLists(
+ feature_list={
+ 'image/encoded':
+ feature_pb2.FeatureList(feature=[
+ feature_pb2.Feature(
+ bytes_list=feature_pb2.BytesList(
+ value=[encoded_jpeg])),
+ ]),
+ 'image/object/bbox/xmin':
+ feature_pb2.FeatureList(feature=[
+ feature_pb2.Feature(
+ float_list=feature_pb2.FloatList(value=[0.0])),
+ ]),
+ 'image/object/bbox/xmax':
+ feature_pb2.FeatureList(feature=[
+ feature_pb2.Feature(
+ float_list=feature_pb2.FloatList(value=[1.0]))
+ ]),
+ 'image/object/bbox/ymin':
+ feature_pb2.FeatureList(feature=[
+ feature_pb2.Feature(
+ float_list=feature_pb2.FloatList(value=[0.0])),
+ ]),
+ 'image/object/bbox/ymax':
+ feature_pb2.FeatureList(feature=[
+ feature_pb2.Feature(
+ float_list=feature_pb2.FloatList(value=[1.0]))
+ ]),
+ 'image/object/class/label':
+ feature_pb2.FeatureList(feature=[
+ feature_pb2.Feature(
+ int64_list=feature_pb2.Int64List(value=[2]))
+ ]),
+ }))
+
+ writer.write(sequence_example.SerializeToString())
+ writer.close()
+
+ return path
+
+ def _get_model_configs_from_proto(self):
+ """Creates a model text proto for testing.
+
+ Returns:
+ A dictionary of model configs.
+ """
+
+ model_text_proto = """
+ [object_detection.protos.lstm_model] {
+ train_unroll_length: 4
+ eval_unroll_length: 4
+ }
+ model {
+ ssd {
+ feature_extractor {
+ type: 'lstm_mobilenet_v1_fpn'
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ negative_class_weight: 2.0
+ box_coder {
+ faster_rcnn_box_coder {
+ }
+ }
+ matcher {
+ argmax_matcher {
+ }
+ }
+ similarity_calculator {
+ iou_similarity {
+ }
+ }
+ anchor_generator {
+ ssd_anchor_generator {
+ aspect_ratios: 1.0
+ }
+ }
+ image_resizer {
+ fixed_shape_resizer {
+ height: 32
+ width: 32
+ }
+ }
+ box_predictor {
+ convolutional_box_predictor {
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ }
+ normalize_loc_loss_by_codesize: true
+ loss {
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ localization_loss {
+ weighted_smooth_l1 {
+ }
+ }
+ }
+ }
+ }"""
+
+ pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
+ text_format.Merge(model_text_proto, pipeline_config)
+ configs = {}
+ configs['model'] = pipeline_config.model
+ configs['lstm_model'] = pipeline_config.Extensions[
+ internal_pipeline_pb2.lstm_model]
+
+ return configs
+
+ def _get_data_augmentation_preprocessor_proto(self):
+ preprocessor_text_proto = """
+ random_horizontal_flip {
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ return preprocessor_proto
+
+ def _create_training_dict(self, tensor_dict):
+ image_dict = {}
+ all_dict = {}
+ all_dict['batch'] = tensor_dict.pop('batch')
+ for i, _ in enumerate(tensor_dict[fields.InputDataFields.image]):
+ for key, val in tensor_dict.items():
+ image_dict[key] = val[i]
+
+ image_dict[fields.InputDataFields.image] = tf.to_float(
+ tf.expand_dims(image_dict[fields.InputDataFields.image], 0))
+ suffix = str(i)
+ for key, val in image_dict.items():
+ all_dict[key + suffix] = val
+ return all_dict
+
+ def _get_input_proto(self, input_reader):
+ return """
+ external_input_reader {
+ [lstm_object_detection.input_readers.GoogleInputReader.google_input_reader] {
+ %s: {
+ input_path: '{0}'
+ data_type: TF_SEQUENCE_EXAMPLE
+ video_length: 4
+ }
+ }
+ }
+ """ % input_reader
+
+ @parameterized.named_parameters(('tf_record', 'tf_record_video_input_reader'))
+ def test_video_input_reader(self, video_input_type):
+ input_reader_proto = input_reader_pb2.InputReader()
+ text_format.Merge(
+ self._get_input_proto(video_input_type), input_reader_proto)
+
+ configs = self._get_model_configs_from_proto()
+ tensor_dict = seq_dataset_builder.build(
+ input_reader_proto,
+ configs['model'],
+ configs['lstm_model'],
+ unroll_length=1)
+
+ all_dict = self._create_training_dict(tensor_dict)
+
+ self.assertEqual((1, 32, 32, 3), all_dict['image0'].shape)
+ self.assertEqual(4, all_dict['groundtruth_boxes0'].shape[1])
+
+ def test_build_with_data_augmentation(self):
+ input_reader_proto = input_reader_pb2.InputReader()
+ text_format.Merge(
+ self._get_input_proto('tf_record_video_input_reader'),
+ input_reader_proto)
+
+ configs = self._get_model_configs_from_proto()
+ data_augmentation_options = [
+ preprocessor_builder.build(
+ self._get_data_augmentation_preprocessor_proto())
+ ]
+ tensor_dict = seq_dataset_builder.build(
+ input_reader_proto,
+ configs['model'],
+ configs['lstm_model'],
+ unroll_length=1,
+ data_augmentation_options=data_augmentation_options)
+
+ all_dict = self._create_training_dict(tensor_dict)
+ self.assertEqual((1, 32, 32, 3), all_dict['image0'].shape)
+ self.assertEqual(4, all_dict['groundtruth_boxes0'].shape[1])
+
+ def test_raises_error_without_input_paths(self):
+ input_reader_text_proto = """
+ shuffle: false
+ num_readers: 1
+ load_instance_masks: true
+ """
+ input_reader_proto = input_reader_pb2.InputReader()
+ text_format.Merge(input_reader_text_proto, input_reader_proto)
+
+ configs = self._get_model_configs_from_proto()
+ with self.assertRaises(ValueError):
+ _ = seq_dataset_builder.build(
+ input_reader_proto,
+ configs['model'],
+ configs['lstm_model'],
+ unroll_length=1)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/inputs/tf_sequence_example_decoder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/inputs/tf_sequence_example_decoder.py"
new file mode 100644
index 00000000..18e4f472
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/inputs/tf_sequence_example_decoder.py"
@@ -0,0 +1,264 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tensorflow Sequence Example proto decoder.
+
+A decoder to decode string tensors containing serialized
+tensorflow.SequenceExample protos.
+TODO(yinxiao): When TensorFlow object detection API officially supports
+tensorflow.SequenceExample, merge this decoder.
+"""
+import tensorflow as tf
+from object_detection.core import data_decoder
+from object_detection.core import standard_fields as fields
+
+tfexample_decoder = tf.contrib.slim.tfexample_decoder
+
+
+class BoundingBoxSequence(tfexample_decoder.ItemHandler):
+ """An ItemHandler that concatenates SparseTensors to Bounding Boxes.
+ """
+
+ def __init__(self, keys=None, prefix=None, return_dense=True,
+ default_value=-1.0):
+ """Initialize the bounding box handler.
+
+ Args:
+ keys: A list of four key names representing the ymin, xmin, ymax, xmax
+ in the Example or SequenceExample.
+ prefix: An optional prefix for each of the bounding box keys in the
+ Example or SequenceExample. If provided, `prefix` is prepended to each
+ key in `keys`.
+      return_dense: if True, returns a dense tensor; if False, returns a
+        SparseTensor.
+      default_value: The value used to fill empty positions when converting
+        the result to a dense tensor.
+
+ Raises:
+ ValueError: if keys is not `None` and also not a list of exactly 4 keys
+ """
+ if keys is None:
+ keys = ['ymin', 'xmin', 'ymax', 'xmax']
+ elif len(keys) != 4:
+ raise ValueError('BoundingBoxSequence expects 4 keys but got {}'.format(
+ len(keys)))
+ self._prefix = prefix
+ self._keys = keys
+ self._full_keys = [prefix + k for k in keys]
+ self._return_dense = return_dense
+ self._default_value = default_value
+ super(BoundingBoxSequence, self).__init__(self._full_keys)
+
+ def tensors_to_item(self, keys_to_tensors):
+ """Maps the given dictionary of tensors to a concatenated list of bboxes.
+
+ Args:
+ keys_to_tensors: a mapping of TF-Example keys to parsed tensors.
+
+ Returns:
+ [time, num_boxes, 4] tensor of bounding box coordinates, in order
+ [y_min, x_min, y_max, x_max]. Whether the tensor is a SparseTensor
+ or a dense Tensor is determined by the return_dense parameter. Empty
+ positions in the sparse tensor are filled with -1.0 values.
+ """
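+    # Illustrative example (assumed input sizes, not from the original code):
+    # with 2 frames and at most 2 boxes per frame, each of the four VarLen
+    # coordinate features parses to a SparseTensor of dense shape [2, 2];
+    # sparse_reshape turns each side into [2, 2, 1], and sparse_concat along
+    # axis 2 yields a [2, 2, 4] tensor of [ymin, xmin, ymax, xmax] rows.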
+ sides = []
+ for key in self._full_keys:
+ value = keys_to_tensors[key]
+ expanded_dims = tf.concat(
+ [tf.to_int64(tf.shape(value)),
+ tf.constant([1], dtype=tf.int64)], 0)
+ side = tf.sparse_reshape(value, expanded_dims)
+ sides.append(side)
+ bounding_boxes = tf.sparse_concat(2, sides)
+ if self._return_dense:
+ bounding_boxes = tf.sparse_tensor_to_dense(
+ bounding_boxes, default_value=self._default_value)
+ return bounding_boxes
+
+
+class TFSequenceExampleDecoder(data_decoder.DataDecoder):
+ """Tensorflow Sequence Example proto decoder."""
+
+ def __init__(self):
+ """Constructor sets keys_to_features and items_to_handlers."""
+ self.keys_to_context_features = {
+ 'image/format':
+ tf.FixedLenFeature((), tf.string, default_value='jpeg'),
+ 'image/filename':
+ tf.FixedLenFeature((), tf.string, default_value=''),
+ 'image/key/sha256':
+ tf.FixedLenFeature((), tf.string, default_value=''),
+ 'image/source_id':
+ tf.FixedLenFeature((), tf.string, default_value=''),
+ 'image/height':
+ tf.FixedLenFeature((), tf.int64, 1),
+ 'image/width':
+ tf.FixedLenFeature((), tf.int64, 1),
+ }
+ self.keys_to_features = {
+ 'image/encoded': tf.FixedLenSequenceFeature((), tf.string),
+ 'bbox/xmin': tf.VarLenFeature(dtype=tf.float32),
+ 'bbox/xmax': tf.VarLenFeature(dtype=tf.float32),
+ 'bbox/ymin': tf.VarLenFeature(dtype=tf.float32),
+ 'bbox/ymax': tf.VarLenFeature(dtype=tf.float32),
+ 'bbox/label/index': tf.VarLenFeature(dtype=tf.int64),
+ 'bbox/label/string': tf.VarLenFeature(tf.string),
+ 'area': tf.VarLenFeature(tf.float32),
+ 'is_crowd': tf.VarLenFeature(tf.int64),
+ 'difficult': tf.VarLenFeature(tf.int64),
+ 'group_of': tf.VarLenFeature(tf.int64),
+ }
+ self.items_to_handlers = {
+ fields.InputDataFields.image:
+ tfexample_decoder.Image(
+ image_key='image/encoded',
+ format_key='image/format',
+ channels=3,
+ repeated=True),
+ fields.InputDataFields.source_id: (
+ tfexample_decoder.Tensor('image/source_id')),
+ fields.InputDataFields.key: (
+ tfexample_decoder.Tensor('image/key/sha256')),
+ fields.InputDataFields.filename: (
+ tfexample_decoder.Tensor('image/filename')),
+ # Object boxes and classes.
+ fields.InputDataFields.groundtruth_boxes:
+ BoundingBoxSequence(prefix='bbox/'),
+ fields.InputDataFields.groundtruth_classes: (
+ tfexample_decoder.Tensor('bbox/label/index')),
+ fields.InputDataFields.groundtruth_area:
+ tfexample_decoder.Tensor('area'),
+ fields.InputDataFields.groundtruth_is_crowd: (
+ tfexample_decoder.Tensor('is_crowd')),
+ fields.InputDataFields.groundtruth_difficult: (
+ tfexample_decoder.Tensor('difficult')),
+ fields.InputDataFields.groundtruth_group_of: (
+ tfexample_decoder.Tensor('group_of'))
+ }
+
+ def decode(self, tf_seq_example_string_tensor, items=None):
+ """Decodes serialized tf.SequenceExample and returns a tensor dictionary.
+
+ Args:
+ tf_seq_example_string_tensor: A string tensor holding a serialized
+        tensorflow.SequenceExample proto.
+ items: The list of items to decode. These must be a subset of the item
+ keys in self._items_to_handlers. If `items` is left as None, then all
+ of the items in self._items_to_handlers are decoded.
+
+ Returns:
+ A dictionary of the following tensors.
+      fields.InputDataFields.image - 4D uint8 tensor of shape
+        [seq, None, None, 3] containing the decoded frames.
+ fields.InputDataFields.source_id - string tensor containing original
+ image id.
+ fields.InputDataFields.key - string tensor with unique sha256 hash key.
+ fields.InputDataFields.filename - string tensor with original dataset
+ filename.
+      fields.InputDataFields.groundtruth_boxes - 3D float32 tensor of shape
+        [seq, None, 4] containing box corners for each frame.
+ fields.InputDataFields.groundtruth_classes - 1D int64 tensor of shape
+ [None] containing classes for the boxes.
+ fields.InputDataFields.groundtruth_area - 1D float32 tensor of shape
+ [None] containing object mask area in pixel squared.
+ fields.InputDataFields.groundtruth_is_crowd - 1D bool tensor of shape
+ [None] indicating if the boxes enclose a crowd.
+ fields.InputDataFields.groundtruth_difficult - 1D bool tensor of shape
+ [None] indicating if the boxes represent `difficult` instances.
+ """
+ serialized_example = tf.reshape(tf_seq_example_string_tensor, shape=[])
+ decoder = TFSequenceExampleDecoderHelper(self.keys_to_context_features,
+ self.keys_to_features,
+ self.items_to_handlers)
+ if not items:
+ items = decoder.list_items()
+ tensors = decoder.decode(serialized_example, items=items)
+ tensor_dict = dict(zip(items, tensors))
+
+ return tensor_dict
+
+
+class TFSequenceExampleDecoderHelper(data_decoder.DataDecoder):
+ """A decoder helper class for TensorFlow SequenceExamples.
+
+  To perform this decoding operation, a SequenceExampleDecoder is given a list
+  of ItemHandlers. Each ItemHandler indicates the set of features it depends
+  on and how to decode them into the final item tensor.
+ """
+
+ def __init__(self, keys_to_context_features, keys_to_sequence_features,
+ items_to_handlers):
+ """Constructs the decoder.
+
+ Args:
+ keys_to_context_features: A dictionary from TF-SequenceExample context
+ keys to either tf.VarLenFeature or tf.FixedLenFeature instances.
+ See tensorflow's parsing_ops.py.
+ keys_to_sequence_features: A dictionary from TF-SequenceExample sequence
+ keys to either tf.VarLenFeature or tf.FixedLenSequenceFeature instances.
+ items_to_handlers: A dictionary from items (strings) to ItemHandler
+        instances. Note that the ItemHandlers are provided the keys that they
+        use to produce the final item tensors.
+ Raises:
+ ValueError: If the same key is present for context features and sequence
+ features.
+ """
+ unique_keys = set()
+ unique_keys.update(keys_to_context_features)
+ unique_keys.update(keys_to_sequence_features)
+ if len(unique_keys) != (
+ len(keys_to_context_features) + len(keys_to_sequence_features)):
+ # This situation is ambiguous in the decoder's keys_to_tensors variable.
+ raise ValueError('Context and sequence keys are not unique. \n'
+ ' Context keys: %s \n Sequence keys: %s' %
+ (list(keys_to_context_features.keys()),
+ list(keys_to_sequence_features.keys())))
+ self._keys_to_context_features = keys_to_context_features
+ self._keys_to_sequence_features = keys_to_sequence_features
+ self._items_to_handlers = items_to_handlers
+
+ def list_items(self):
+ """Returns keys of items."""
+ return self._items_to_handlers.keys()
+
+ def decode(self, serialized_example, items=None):
+ """Decodes the given serialized TF-SequenceExample.
+
+ Args:
+ serialized_example: A serialized TF-SequenceExample tensor.
+ items: The list of items to decode. These must be a subset of the item
+ keys in self._items_to_handlers. If `items` is left as None, then all
+ of the items in self._items_to_handlers are decoded.
+ Returns:
+      The decoded items, a list of tensors.
+ """
+ context, feature_list = tf.parse_single_sequence_example(
+ serialized_example, self._keys_to_context_features,
+ self._keys_to_sequence_features)
+ # Reshape non-sparse elements just once:
+ for k in self._keys_to_context_features:
+ v = self._keys_to_context_features[k]
+ if isinstance(v, tf.FixedLenFeature):
+ context[k] = tf.reshape(context[k], v.shape)
+ if not items:
+ items = self._items_to_handlers.keys()
+ outputs = []
+ for item in items:
+ handler = self._items_to_handlers[item]
+ keys_to_tensors = {
+ key: context[key] if key in context else feature_list[key]
+ for key in handler.keys
+ }
+ outputs.append(handler.tensors_to_item(keys_to_tensors))
+ return outputs
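+
+
+# Usage sketch (illustrative only; `serialized` is assumed to be a scalar
+# string tensor holding one serialized tf.SequenceExample, e.g. read from a
+# TFRecord of video annotations):
+#
+#   decoder = TFSequenceExampleDecoder()
+#   tensor_dict = decoder.decode(serialized)
+#   images = tensor_dict[fields.InputDataFields.image]
+#   boxes = tensor_dict[fields.InputDataFields.groundtruth_boxes]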
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/inputs/tf_sequence_example_decoder_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/inputs/tf_sequence_example_decoder_test.py"
new file mode 100644
index 00000000..d418573d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/inputs/tf_sequence_example_decoder_test.py"
@@ -0,0 +1,113 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for lstm_object_detection.inputs.tf_sequence_example_decoder."""
+
+import numpy as np
+import tensorflow as tf
+from tensorflow.core.example import example_pb2
+from tensorflow.core.example import feature_pb2
+from tensorflow.python.framework import dtypes
+from tensorflow.python.ops import parsing_ops
+from lstm_object_detection.inputs import tf_sequence_example_decoder
+from object_detection.core import standard_fields as fields
+
+
+class TFSequenceExampleDecoderTest(tf.test.TestCase):
+ """Tests for sequence example decoder."""
+
+ def _EncodeImage(self, image_tensor, encoding_type='jpeg'):
+ with self.test_session():
+ if encoding_type == 'jpeg':
+ image_encoded = tf.image.encode_jpeg(tf.constant(image_tensor)).eval()
+ else:
+ raise ValueError('Invalid encoding type.')
+ return image_encoded
+
+ def _DecodeImage(self, image_encoded, encoding_type='jpeg'):
+ with self.test_session():
+ if encoding_type == 'jpeg':
+ image_decoded = tf.image.decode_jpeg(tf.constant(image_encoded)).eval()
+ else:
+ raise ValueError('Invalid encoding type.')
+ return image_decoded
+
+ def testDecodeJpegImageAndBoundingBox(self):
+ """Test if the decoder can correctly decode the image and bounding box.
+
+    A random image (represented as an image tensor) is first JPEG-encoded and
+    decoded back to serve as the groundtruth image. The encoded bytes are then
+    wrapped in a SequenceExample and decoded by the decoder under test. The
+    groundtruth image and the decoded image are expected to be equal. A
+    similar check is applied to labels such as the bounding box.
+ """
+ image_tensor = np.random.randint(256, size=(256, 256, 3)).astype(np.uint8)
+ encoded_jpeg = self._EncodeImage(image_tensor)
+ decoded_jpeg = self._DecodeImage(encoded_jpeg)
+
+ sequence_example = example_pb2.SequenceExample(
+ feature_lists=feature_pb2.FeatureLists(
+ feature_list={
+ 'image/encoded':
+ feature_pb2.FeatureList(feature=[
+ feature_pb2.Feature(
+ bytes_list=feature_pb2.BytesList(
+ value=[encoded_jpeg])),
+ ]),
+ 'bbox/xmin':
+ feature_pb2.FeatureList(feature=[
+ feature_pb2.Feature(
+ float_list=feature_pb2.FloatList(value=[0.0])),
+ ]),
+ 'bbox/xmax':
+ feature_pb2.FeatureList(feature=[
+ feature_pb2.Feature(
+ float_list=feature_pb2.FloatList(value=[1.0]))
+ ]),
+ 'bbox/ymin':
+ feature_pb2.FeatureList(feature=[
+ feature_pb2.Feature(
+ float_list=feature_pb2.FloatList(value=[0.0])),
+ ]),
+ 'bbox/ymax':
+ feature_pb2.FeatureList(feature=[
+ feature_pb2.Feature(
+ float_list=feature_pb2.FloatList(value=[1.0]))
+ ]),
+ })).SerializeToString()
+
+ example_decoder = tf_sequence_example_decoder.TFSequenceExampleDecoder()
+ tensor_dict = example_decoder.decode(tf.convert_to_tensor(sequence_example))
+
+ # Test tensor dict image dimension.
+ self.assertAllEqual(
+ (tensor_dict[fields.InputDataFields.image].get_shape().as_list()),
+ [None, None, None, 3])
+ with self.test_session() as sess:
+ tensor_dict[fields.InputDataFields.image] = tf.squeeze(
+ tensor_dict[fields.InputDataFields.image])
+ tensor_dict[fields.InputDataFields.groundtruth_boxes] = tf.squeeze(
+ tensor_dict[fields.InputDataFields.groundtruth_boxes])
+ tensor_dict = sess.run(tensor_dict)
+
+ # Test decoded image.
+ self.assertAllEqual(decoded_jpeg, tensor_dict[fields.InputDataFields.image])
+ # Test decoded bounding box.
+ self.assertAllEqual([0.0, 0.0, 1.0, 1.0],
+ tensor_dict[fields.InputDataFields.groundtruth_boxes])
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/lstm/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/lstm/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/lstm/lstm_cells.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/lstm/lstm_cells.py"
new file mode 100644
index 00000000..b66ab271
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/lstm/lstm_cells.py"
@@ -0,0 +1,199 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""BottleneckConvLSTMCell implementation."""
+
+import tensorflow as tf
+from tensorflow.contrib.framework.python.ops import variables
+
+slim = tf.contrib.slim
+
+_batch_norm = tf.contrib.layers.batch_norm
+
+
+class BottleneckConvLSTMCell(tf.contrib.rnn.RNNCell):
+ """Basic LSTM recurrent network cell using separable convolutions.
+
+ The implementation is based on:
+ Mobile Video Object Detection with Temporally-Aware Feature Maps
+ https://arxiv.org/abs/1711.06368.
+
+  We add forget_bias (default: 1) to the biases of the forget gate in order to
+  reduce the scale of forgetting at the beginning of training.
+
+ This LSTM first projects inputs to the size of the output before doing gate
+ computations. This saves params unless the input is less than a third of the
+ state size channel-wise.
+ """
+
+ def __init__(self,
+ filter_size,
+ output_size,
+ num_units,
+ forget_bias=1.0,
+ activation=tf.tanh,
+ flattened_state=False,
+ clip_state=False,
+ output_bottleneck=False,
+ visualize_gates=True):
+ """Initializes the basic LSTM cell.
+
+ Args:
+ filter_size: collection, conv filter size.
+ output_size: collection, the width/height dimensions of the cell/output.
+ num_units: int, The number of channels in the LSTM cell.
+ forget_bias: float, The bias added to forget gates (see above).
+ activation: Activation function of the inner states.
+ flattened_state: if True, state tensor will be flattened and stored as
+ a 2-d tensor. Use for exporting the model to tfmini.
+ clip_state: if True, clip state between [-6, 6].
+ output_bottleneck: if True, the cell bottleneck will be concatenated
+ to the cell output.
+ visualize_gates: if True, add histogram summaries of all gates
+ and outputs to tensorboard.
+ """
+ self._filter_size = list(filter_size)
+ self._output_size = list(output_size)
+ self._num_units = num_units
+ self._forget_bias = forget_bias
+ self._activation = activation
+ self._viz_gates = visualize_gates
+ self._flattened_state = flattened_state
+ self._clip_state = clip_state
+ self._output_bottleneck = output_bottleneck
+ self._param_count = self._num_units
+ for dim in self._output_size:
+ self._param_count *= dim
+
+ @property
+ def state_size(self):
+ return tf.contrib.rnn.LSTMStateTuple(self._output_size + [self._num_units],
+ self._output_size + [self._num_units])
+
+ @property
+ def state_size_flat(self):
+ return tf.contrib.rnn.LSTMStateTuple([self._param_count],
+ [self._param_count])
+
+ @property
+ def output_size(self):
+ return self._output_size + [self._num_units]
+
+ def __call__(self, inputs, state, scope=None):
+ """Long short-term memory cell (LSTM) with bottlenecking.
+
+ Args:
+ inputs: Input tensor at the current timestep.
+ state: Tuple of tensors, the state and output at the previous timestep.
+ scope: Optional scope.
+ Returns:
+      A tuple where the first element is the LSTM output and the second is
+      an LSTMStateTuple holding the state at the current timestep.
+ """
+ scope = scope or 'conv_lstm_cell'
+ with tf.variable_scope(scope):
+ c, h = state
+
+ # unflatten state if necessary
+ if self._flattened_state:
+ c = tf.reshape(c, [-1] + self.output_size)
+ h = tf.reshape(h, [-1] + self.output_size)
+
+ # summary of input passed into cell
+ if self._viz_gates:
+ slim.summaries.add_histogram_summary(inputs, 'cell_input')
+
+ bottleneck = tf.contrib.layers.separable_conv2d(
+ tf.concat([inputs, h], 3),
+ self._num_units,
+ self._filter_size,
+ depth_multiplier=1,
+ activation_fn=self._activation,
+ normalizer_fn=None,
+ scope='bottleneck')
+
+ if self._viz_gates:
+ slim.summaries.add_histogram_summary(bottleneck, 'bottleneck')
+
+ concat = tf.contrib.layers.separable_conv2d(
+ bottleneck,
+ 4 * self._num_units,
+ self._filter_size,
+ depth_multiplier=1,
+ activation_fn=None,
+ normalizer_fn=None,
+ scope='gates')
+
+ i, j, f, o = tf.split(concat, 4, 3)
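+      # i: input gate, j: new input (cell candidate), f: forget gate,
+      # o: output gate; each with self._num_units channels.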
+
+ new_c = (
+ c * tf.sigmoid(f + self._forget_bias) +
+ tf.sigmoid(i) * self._activation(j))
+ if self._clip_state:
+ new_c = tf.clip_by_value(new_c, -6, 6)
+ new_h = self._activation(new_c) * tf.sigmoid(o)
+ # summary of cell output and new state
+ if self._viz_gates:
+ slim.summaries.add_histogram_summary(new_h, 'cell_output')
+ slim.summaries.add_histogram_summary(new_c, 'cell_state')
+
+ output = new_h
+ if self._output_bottleneck:
+ output = tf.concat([new_h, bottleneck], axis=3)
+
+ # reflatten state to store it
+ if self._flattened_state:
+ new_c = tf.reshape(new_c, [-1, self._param_count])
+ new_h = tf.reshape(new_h, [-1, self._param_count])
+
+ return output, tf.contrib.rnn.LSTMStateTuple(new_c, new_h)
+
+ def init_state(self, state_name, batch_size, dtype, learned_state=False):
+ """Creates an initial state compatible with this cell.
+
+ Args:
+ state_name: name of the state tensor
+ batch_size: model batch size
+ dtype: dtype for the tensor values i.e. tf.float32
+ learned_state: whether the initial state should be learnable. If false,
+ the initial state is set to all 0's
+
+ Returns:
+ The created initial state.
+ """
+ state_size = (
+ self.state_size_flat if self._flattened_state else self.state_size)
+    # List of two zero tensors or learnable variables, depending on whether
+    # learned_state is True.
+ ret_flat = [(variables.model_variable(
+ state_name + str(i),
+ shape=s,
+ dtype=dtype,
+ initializer=tf.truncated_normal_initializer(stddev=0.03))
+ if learned_state else tf.zeros(
+ [batch_size] + s, dtype=dtype, name=state_name))
+ for i, s in enumerate(state_size)]
+
+ # duplicates initial state across the batch axis if it's learned
+ if learned_state:
+      ret_flat = [
+          tf.stack([tensor] * int(batch_size)) for tensor in ret_flat
+      ]
+ for s, r in zip(state_size, ret_flat):
+ r.set_shape([None] + s)
+ return tf.contrib.framework.nest.pack_sequence_as(
+ structure=[1, 1], flat_sequence=ret_flat)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/lstm/lstm_cells_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/lstm/lstm_cells_test.py"
new file mode 100644
index 00000000..df3c7377
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/lstm/lstm_cells_test.py"
@@ -0,0 +1,143 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for lstm_object_detection.lstm.lstm_cells."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+import tensorflow as tf
+
+from lstm_object_detection.lstm import lstm_cells
+
+
+class BottleneckConvLstmCellsTest(tf.test.TestCase):
+
+ def test_run_lstm_cell(self):
+ filter_size = [3, 3]
+ output_size = [10, 10]
+ num_units = 15
+ state_name = 'lstm_state'
+ batch_size = 4
+ dtype = tf.float32
+ learned_state = False
+
+ inputs = tf.zeros([4, 10, 10, 3], dtype=tf.float32)
+ cell = lstm_cells.BottleneckConvLSTMCell(
+ filter_size=filter_size,
+ output_size=output_size,
+ num_units=num_units)
+ init_state = cell.init_state(
+ state_name, batch_size, dtype, learned_state)
+ output, state_tuple = cell(inputs, init_state)
+ self.assertAllEqual([4, 10, 10, 15], output.shape.as_list())
+ self.assertAllEqual([4, 10, 10, 15], state_tuple[0].shape.as_list())
+ self.assertAllEqual([4, 10, 10, 15], state_tuple[1].shape.as_list())
+
+ def test_run_lstm_cell_with_flattened_state(self):
+ filter_size = [3, 3]
+ output_dim = 10
+ output_size = [output_dim] * 2
+ num_units = 15
+ state_name = 'lstm_state'
+ batch_size = 4
+ dtype = tf.float32
+ learned_state = False
+
+ inputs = tf.zeros([batch_size, output_dim, output_dim, 3], dtype=tf.float32)
+ cell = lstm_cells.BottleneckConvLSTMCell(
+ filter_size=filter_size,
+ output_size=output_size,
+ num_units=num_units,
+ flattened_state=True)
+ init_state = cell.init_state(
+ state_name, batch_size, dtype, learned_state)
+ output, state_tuple = cell(inputs, init_state)
+ self.assertAllEqual([4, 10, 10, 15], output.shape.as_list())
+ self.assertAllEqual([4, 1500], state_tuple[0].shape.as_list())
+ self.assertAllEqual([4, 1500], state_tuple[1].shape.as_list())
+
+ def test_run_lstm_cell_with_output_bottleneck(self):
+ filter_size = [3, 3]
+ output_dim = 10
+ output_size = [output_dim] * 2
+ num_units = 15
+ state_name = 'lstm_state'
+ batch_size = 4
+ dtype = tf.float32
+ learned_state = False
+
+ inputs = tf.zeros([batch_size, output_dim, output_dim, 3], dtype=tf.float32)
+ cell = lstm_cells.BottleneckConvLSTMCell(
+ filter_size=filter_size,
+ output_size=output_size,
+ num_units=num_units,
+ output_bottleneck=True)
+ init_state = cell.init_state(
+ state_name, batch_size, dtype, learned_state)
+ output, state_tuple = cell(inputs, init_state)
+ self.assertAllEqual([4, 10, 10, 30], output.shape.as_list())
+ self.assertAllEqual([4, 10, 10, 15], state_tuple[0].shape.as_list())
+ self.assertAllEqual([4, 10, 10, 15], state_tuple[1].shape.as_list())
+
+ def test_get_init_state(self):
+ filter_size = [3, 3]
+ output_dim = 10
+ output_size = [output_dim] * 2
+ num_units = 15
+ state_name = 'lstm_state'
+ batch_size = 4
+ dtype = tf.float32
+ learned_state = False
+
+ cell = lstm_cells.BottleneckConvLSTMCell(
+ filter_size=filter_size,
+ output_size=output_size,
+ num_units=num_units)
+ init_c, init_h = cell.init_state(
+ state_name, batch_size, dtype, learned_state)
+
+ self.assertEqual(tf.float32, init_c.dtype)
+ self.assertEqual(tf.float32, init_h.dtype)
+ with self.test_session() as sess:
+ init_c_res, init_h_res = sess.run([init_c, init_h])
+ self.assertAllClose(np.zeros((4, 10, 10, 15)), init_c_res)
+ self.assertAllClose(np.zeros((4, 10, 10, 15)), init_h_res)
+
+ def test_get_init_learned_state(self):
+ filter_size = [3, 3]
+ output_size = [10, 10]
+ num_units = 15
+ state_name = 'lstm_state'
+ batch_size = 4
+ dtype = tf.float32
+ learned_state = True
+
+ cell = lstm_cells.BottleneckConvLSTMCell(
+ filter_size=filter_size,
+ output_size=output_size,
+ num_units=num_units)
+ init_c, init_h = cell.init_state(
+ state_name, batch_size, dtype, learned_state)
+
+ self.assertEqual(tf.float32, init_c.dtype)
+ self.assertEqual(tf.float32, init_h.dtype)
+ self.assertAllEqual([4, 10, 10, 15], init_c.shape.as_list())
+ self.assertAllEqual([4, 10, 10, 15], init_h.shape.as_list())
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/lstm/lstm_meta_arch.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/lstm/lstm_meta_arch.py"
new file mode 100644
index 00000000..a8fdaf26
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/lstm/lstm_meta_arch.py"
@@ -0,0 +1,334 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""LSTM Meta-architecture definition.
+
+General tensorflow implementation of convolutional Multibox/SSD detection
+models with LSTM states, for use on video data.
+
+See https://arxiv.org/abs/1711.06368 for details.
+"""
+import re
+import tensorflow as tf
+
+from object_detection.core import box_list_ops
+from object_detection.core import standard_fields as fields
+from object_detection.meta_architectures import ssd_meta_arch
+from object_detection.utils import ops
+from object_detection.utils import shape_utils
+
+slim = tf.contrib.slim
+
+
+class LSTMMetaArch(ssd_meta_arch.SSDMetaArch):
+ """LSTM Meta-architecture definition."""
+
+ def __init__(self,
+ is_training,
+ anchor_generator,
+ box_predictor,
+ box_coder,
+ feature_extractor,
+ encode_background_as_zeros,
+ image_resizer_fn,
+ non_max_suppression_fn,
+ score_conversion_fn,
+ classification_loss,
+ localization_loss,
+ classification_loss_weight,
+ localization_loss_weight,
+ normalize_loss_by_num_matches,
+ hard_example_miner,
+ unroll_length,
+ target_assigner_instance,
+ add_summaries=True):
+ super(LSTMMetaArch, self).__init__(
+ is_training=is_training,
+ anchor_generator=anchor_generator,
+ box_predictor=box_predictor,
+ box_coder=box_coder,
+ feature_extractor=feature_extractor,
+ encode_background_as_zeros=encode_background_as_zeros,
+ image_resizer_fn=image_resizer_fn,
+ non_max_suppression_fn=non_max_suppression_fn,
+ score_conversion_fn=score_conversion_fn,
+ classification_loss=classification_loss,
+ localization_loss=localization_loss,
+ classification_loss_weight=classification_loss_weight,
+ localization_loss_weight=localization_loss_weight,
+ normalize_loss_by_num_matches=normalize_loss_by_num_matches,
+ hard_example_miner=hard_example_miner,
+ target_assigner_instance=target_assigner_instance,
+ add_summaries=add_summaries)
+ self._unroll_length = unroll_length
+
+ @property
+ def unroll_length(self):
+ return self._unroll_length
+
+ @unroll_length.setter
+ def unroll_length(self, unroll_length):
+ self._unroll_length = unroll_length
+
+ def predict(self, preprocessed_inputs, true_image_shapes, states=None,
+ state_name='lstm_state', feature_scope=None):
+ with tf.variable_scope(self._extract_features_scope,
+ values=[preprocessed_inputs], reuse=tf.AUTO_REUSE):
+ feature_maps = self._feature_extractor.extract_features(
+ preprocessed_inputs, states, state_name,
+ unroll_length=self._unroll_length, scope=feature_scope)
+ feature_map_spatial_dims = self._get_feature_map_spatial_dims(feature_maps)
+ image_shape = shape_utils.combined_static_and_dynamic_shape(
+ preprocessed_inputs)
+ self._batch_size = preprocessed_inputs.shape[0].value / self._unroll_length
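+    # Frames of a video batch arrive stacked along the batch axis, so the
+    # effective video batch size is (total frames) / unroll_length.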
+ self._states = states
+ self._anchors = box_list_ops.concatenate(
+ self._anchor_generator.generate(
+ feature_map_spatial_dims,
+ im_height=image_shape[1],
+ im_width=image_shape[2]))
+ prediction_dict = self._box_predictor.predict(
+ feature_maps, self._anchor_generator.num_anchors_per_location())
+
+    # The multiscale anchor generator currently produces box_encodings with a
+    # different rank than the ssd anchor generator. As a workaround, check the
+    # rank of the box_encodings tensor: if it is not 3 (the
+    # multiscale_anchor_generator case), squeeze out the 3rd dimension.
+ # TODO(yinxiao): Remove this check once the anchor generator has unified
+ # dimension.
+ if len(prediction_dict['box_encodings'][0].get_shape().as_list()) == 3:
+ box_encodings = tf.concat(prediction_dict['box_encodings'], axis=1)
+ else:
+ box_encodings = tf.squeeze(
+ tf.concat(prediction_dict['box_encodings'], axis=1), axis=2)
+ class_predictions_with_background = tf.concat(
+ prediction_dict['class_predictions_with_background'], axis=1)
+ predictions_dict = {
+ 'preprocessed_inputs': preprocessed_inputs,
+ 'box_encodings': box_encodings,
+ 'class_predictions_with_background': class_predictions_with_background,
+ 'feature_maps': feature_maps,
+ 'anchors': self._anchors.get(),
+ 'states_and_outputs': self._feature_extractor.states_and_outputs,
+ }
+    # In cases such as exporting the model, the state is always zero, so the
+    # step should be ignored.
+ if states is not None:
+ predictions_dict['step'] = self._feature_extractor.step
+ return predictions_dict
+
+ def loss(self, prediction_dict, true_image_shapes, scope=None):
+ """Computes scalar loss tensors with respect to provided groundtruth.
+
+ Calling this function requires that groundtruth tensors have been
+ provided via the provide_groundtruth function.
+
+ Args:
+ prediction_dict: a dictionary holding prediction tensors with
+ 1) box_encodings: 3-D float tensor of shape [batch_size, num_anchors,
+ box_code_dimension] containing predicted boxes.
+ 2) class_predictions_with_background: 3-D float tensor of shape
+ [batch_size, num_anchors, num_classes+1] containing class predictions
+ (logits) for each of the anchors. Note that this tensor *includes*
+ background class predictions.
+ true_image_shapes: int32 tensor of shape [batch, 3] where each row is
+ of the form [height, width, channels] indicating the shapes
+ of true images in the resized images, as resized images can be padded
+ with zeros.
+ scope: Optional scope name.
+
+ Returns:
+ a dictionary mapping loss keys (`localization_loss` and
+ `classification_loss`) to scalar tensors representing corresponding loss
+ values.
+ """
+ with tf.name_scope(scope, 'Loss', prediction_dict.values()):
+ keypoints = None
+ if self.groundtruth_has_field(fields.BoxListFields.keypoints):
+ keypoints = self.groundtruth_lists(fields.BoxListFields.keypoints)
+ weights = None
+ if self.groundtruth_has_field(fields.BoxListFields.weights):
+ weights = self.groundtruth_lists(fields.BoxListFields.weights)
+ (batch_cls_targets, batch_cls_weights, batch_reg_targets,
+ batch_reg_weights, match_list) = self._assign_targets(
+ self.groundtruth_lists(fields.BoxListFields.boxes),
+ self.groundtruth_lists(fields.BoxListFields.classes),
+ keypoints, weights)
+ if self._add_summaries:
+ self._summarize_target_assignment(
+ self.groundtruth_lists(fields.BoxListFields.boxes), match_list)
+ location_losses = self._localization_loss(
+ prediction_dict['box_encodings'],
+ batch_reg_targets,
+ ignore_nan_targets=True,
+ weights=batch_reg_weights)
+ cls_losses = ops.reduce_sum_trailing_dimensions(
+ self._classification_loss(
+ prediction_dict['class_predictions_with_background'],
+ batch_cls_targets,
+ weights=batch_cls_weights),
+ ndims=2)
+
+ if self._hard_example_miner:
+ (loc_loss_list, cls_loss_list) = self._apply_hard_mining(
+ location_losses, cls_losses, prediction_dict, match_list)
+ localization_loss = tf.reduce_sum(tf.stack(loc_loss_list))
+ classification_loss = tf.reduce_sum(tf.stack(cls_loss_list))
+
+ if self._add_summaries:
+ self._hard_example_miner.summarize()
+ else:
+ if self._add_summaries:
+ class_ids = tf.argmax(batch_cls_targets, axis=2)
+ flattened_class_ids = tf.reshape(class_ids, [-1])
+ flattened_classification_losses = tf.reshape(cls_losses, [-1])
+ self._summarize_anchor_classification_loss(
+ flattened_class_ids, flattened_classification_losses)
+ localization_loss = tf.reduce_sum(location_losses)
+ classification_loss = tf.reduce_sum(cls_losses)
+
+ # Optionally normalize by number of positive matches
+ normalizer = tf.constant(1.0, dtype=tf.float32)
+ if self._normalize_loss_by_num_matches:
+ normalizer = tf.maximum(tf.to_float(tf.reduce_sum(batch_reg_weights)),
+ 1.0)
+
+ with tf.name_scope('localization_loss'):
+ localization_loss_normalizer = normalizer
+ if self._normalize_loc_loss_by_codesize:
+ localization_loss_normalizer *= self._box_coder.code_size
+ localization_loss = ((self._localization_loss_weight / (
+ localization_loss_normalizer)) * localization_loss)
+ with tf.name_scope('classification_loss'):
+ classification_loss = ((self._classification_loss_weight / normalizer) *
+ classification_loss)
+
+ loss_dict = {
+ 'localization_loss': localization_loss,
+ 'classification_loss': classification_loss
+ }
+ return loss_dict
+
+ def restore_map(self, fine_tune_checkpoint_type='lstm'):
+ """Returns a map of variables to load from a foreign checkpoint.
+
+ See parent class for details.
+
+ Args:
+      fine_tune_checkpoint_type: the type of checkpoint to restore from: either
+        an SSD/LSTM detection checkpoint (with compatible variable names) or a
+        classification checkpoint for initialization prior to training.
+        Available options: `classification`, `detection`, and `lstm`.
+
+ Returns:
+ A dict mapping variable names (to load from a checkpoint) to variables in
+ the model graph.
+ Raises:
+      ValueError: if fine_tune_checkpoint_type is not among
+        `classification`/`detection`/`lstm`.
+ """
+ if fine_tune_checkpoint_type not in [
+ 'classification', 'detection', 'lstm'
+ ]:
+ raise ValueError('Not supported fine_tune_checkpoint_type: {}'.format(
+ fine_tune_checkpoint_type))
+
+ variables_to_restore = {}
+ for variable in tf.global_variables():
+ var_name = variable.op.name
+ if 'global_step' in var_name:
+ continue
+
+ # Remove FeatureExtractor prefix for classification checkpoints.
+ if fine_tune_checkpoint_type == 'classification':
+ var_name = (
+ re.split('^' + self._extract_features_scope + '/', var_name)[-1])
+
+ # When loading from single frame detection checkpoints, we need to
+ # remap FeatureMaps variable names.
+ if ('FeatureMaps' in var_name and
+ fine_tune_checkpoint_type == 'detection'):
+ var_name = var_name.replace('FeatureMaps',
+ self.get_base_network_scope())
+ variables_to_restore[var_name] = variable
+
+ return variables_to_restore
+
+ def get_base_network_scope(self):
+ """Returns the variable scope of the base network.
+
+ Returns:
+ The variable scope of the feature extractor base network, e.g. MobilenetV1
+ """
+ return self._feature_extractor.get_base_network_scope()
+
+
+class LSTMFeatureExtractor(ssd_meta_arch.SSDFeatureExtractor):
+ """LSTM Meta-architecture Feature Extractor definition."""
+
+ @property
+ def depth_multipliers(self):
+ return self._depth_multipliers
+
+ @depth_multipliers.setter
+ def depth_multipliers(self, depth_multipliers):
+ self._depth_multipliers = depth_multipliers
+
+ @property
+ def lstm_state_depth(self):
+ return self._lstm_state_depth
+
+ @lstm_state_depth.setter
+ def lstm_state_depth(self, lstm_state_depth):
+ self._lstm_state_depth = lstm_state_depth
+
+ @property
+ def states_and_outputs(self):
+ """LSTM states and outputs.
+
+ This variable includes both LSTM states {C_t} and outputs {h_t}.
+
+ Returns:
+ states_and_outputs: A list of 4-D float tensors, including the lstm state
+ and output at each timestep.
+ """
+ return self._states_out
+
+ @property
+ def step(self):
+ return self._step
+
+ def preprocess(self, resized_inputs):
+ """SSD preprocessing.
+
+ Maps pixel values to the range [-1, 1].
+
+ Args:
+ resized_inputs: a [batch, height, width, channels] float tensor
+ representing a batch of images.
+
+ Returns:
+ preprocessed_inputs: a [batch, height, width, channels] float tensor
+ representing a batch of images.
+ """
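+    # Linear rescaling: pixel 0 -> -1.0, 127.5 -> 0.0, 255 -> 1.0.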
+ return (2.0 / 255.0) * resized_inputs - 1.0
+
+ def get_base_network_scope(self):
+ """Returns the variable scope of the base network.
+
+ Returns:
+ The variable scope of the base network, e.g. MobilenetV1
+ """
+ return self._base_network_scope
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/lstm/rnn_decoder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/lstm/rnn_decoder.py"
new file mode 100644
index 00000000..df1f5e77
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/lstm/rnn_decoder.py"
@@ -0,0 +1,66 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Custom RNN decoder."""
+
+from tensorflow.python.ops import variable_scope
+
+
+def rnn_decoder(decoder_inputs,
+ initial_state,
+ cell,
+ loop_function=None,
+ scope=None):
+ """RNN decoder for the sequence-to-sequence model.
+
+ This decoder returns a list of all states, rather than only the final state.
+ Args:
+    decoder_inputs: A list of input Tensors, one per timestep; for
+      convolutional cells these are 4D tensors of shape
+      [batch_size, height, width, depth].
+ initial_state: 2D Tensor with shape [batch_size x cell.state_size].
+ cell: rnn_cell.RNNCell defining the cell function and size.
+ loop_function: If not None, this function will be applied to the i-th output
+ in order to generate the i+1-st input, and decoder_inputs will be ignored,
+ except for the first element ("GO" symbol). This can be used for decoding,
+ but also for training to emulate http://arxiv.org/abs/1506.03099.
+ Signature -- loop_function(prev, i) = next
+ * prev is a 2D Tensor of shape [batch_size x output_size],
+ * i is an integer, the step number (when advanced control is needed),
+ * next is a 2D Tensor of shape [batch_size x input_size].
+ scope: VariableScope for the created subgraph; defaults to "rnn_decoder".
+ Returns:
+    A tuple of the form (outputs, states), where:
+      outputs: A list of the same length as decoder_inputs containing the
+        generated cell outputs, one per timestep.
+      states: A list of the same length as decoder_inputs containing the cell
+        state after each timestep.
+ """
+ with variable_scope.variable_scope(scope or 'rnn_decoder'):
+ state = initial_state
+ outputs = []
+ states = []
+ prev = None
+ for i, decoder_input in enumerate(decoder_inputs):
+ if loop_function is not None and prev is not None:
+ with variable_scope.variable_scope('loop_function', reuse=True):
+ decoder_input = loop_function(prev, i)
+ if i > 0:
+ variable_scope.get_variable_scope().reuse_variables()
+ output, state = cell(decoder_input, state)
+ outputs.append(output)
+ states.append(state)
+ if loop_function is not None:
+ prev = output
+ return outputs, states
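+
+
+# Usage sketch (illustrative; assumes `cell` is a convolutional cell such as
+# lstm_cells.BottleneckConvLSTMCell and `feature_maps` is a list of
+# per-timestep input tensors):
+#
+#   init_state = cell.init_state('lstm_state', batch_size, tf.float32)
+#   outputs, states = rnn_decoder(
+#       decoder_inputs=feature_maps, initial_state=init_state, cell=cell)
+#   # outputs[t] is the cell output at timestep t; states[t] is the cell
+#   # state after processing timestep t.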
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/metrics/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/metrics/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/metrics/coco_evaluation_all_frames.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/metrics/coco_evaluation_all_frames.py"
new file mode 100644
index 00000000..63ae076a
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/metrics/coco_evaluation_all_frames.py"
@@ -0,0 +1,124 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Class for evaluating video object detections with COCO metrics."""
+
+import tensorflow as tf
+
+from object_detection.core import standard_fields
+from object_detection.metrics import coco_evaluation
+from object_detection.metrics import coco_tools
+
+
+class CocoEvaluationAllFrames(coco_evaluation.CocoDetectionEvaluator):
+ """Class to evaluate COCO detection metrics for frame sequences.
+
+ The class overrides two functions: add_single_ground_truth_image_info and
+ add_single_detected_image_info.
+
+  For video object detection evaluation, the evaluator iterates through the
+  entire groundtruth_dict, so every unrolled frame of one LSTM training sample
+  is considered. Both groundtruth and detection results of all frames are
+  therefore added to the evaluation. This is used when all the frames are
+  labeled in the video object detection training job.
+ """
+
+ def add_single_ground_truth_image_info(self, image_id, groundtruth_dict):
+ """Add groundtruth results of all frames to the eval pipeline.
+
+ This method overrides the function defined in the base class.
+
+ Args:
+ image_id: A unique string/integer identifier for the image.
+      groundtruth_dict: A list of dictionaries, one per frame, each containing -
+ InputDataFields.groundtruth_boxes: float32 numpy array of shape
+ [num_boxes, 4] containing `num_boxes` groundtruth boxes of the format
+ [ymin, xmin, ymax, xmax] in absolute image coordinates.
+ InputDataFields.groundtruth_classes: integer numpy array of shape
+ [num_boxes] containing 1-indexed groundtruth classes for the boxes.
+ InputDataFields.groundtruth_is_crowd (optional): integer numpy array of
+ shape [num_boxes] containing iscrowd flag for groundtruth boxes.
+ """
+ for idx, gt in enumerate(groundtruth_dict):
+ if not gt:
+ continue
+
+ image_frame_id = '{}_{}'.format(image_id, idx)
+ if image_frame_id in self._image_ids:
+ tf.logging.warning(
+ 'Ignoring ground truth with image id %s since it was '
+ 'previously added', image_frame_id)
+ continue
+
+ self._groundtruth_list.extend(
+ coco_tools.ExportSingleImageGroundtruthToCoco(
+ image_id=image_frame_id,
+ next_annotation_id=self._annotation_id,
+ category_id_set=self._category_id_set,
+ groundtruth_boxes=gt[
+ standard_fields.InputDataFields.groundtruth_boxes],
+ groundtruth_classes=gt[
+ standard_fields.InputDataFields.groundtruth_classes]))
+ self._annotation_id += (
+ gt[standard_fields.InputDataFields.groundtruth_boxes].shape[0])
+
+ # Boolean to indicate whether a detection has been added for this image.
+ self._image_ids[image_frame_id] = False
+
+ def add_single_detected_image_info(self, image_id, detections_dict):
+ """Add detection results of all frames to the eval pipeline.
+
+ This method overrides the function defined in the base class.
+
+ Args:
+ image_id: A unique string/integer identifier for the image.
+      detections_dict: A list of dictionaries, one per frame, each containing -
+ DetectionResultFields.detection_boxes: float32 numpy array of shape
+ [num_boxes, 4] containing `num_boxes` detection boxes of the format
+ [ymin, xmin, ymax, xmax] in absolute image coordinates.
+ DetectionResultFields.detection_scores: float32 numpy array of shape
+ [num_boxes] containing detection scores for the boxes.
+ DetectionResultFields.detection_classes: integer numpy array of shape
+ [num_boxes] containing 1-indexed detection classes for the boxes.
+
+ Raises:
+ ValueError: If groundtruth for the image_id is not available.
+ """
+ for idx, det in enumerate(detections_dict):
+ if not det:
+ continue
+
+ image_frame_id = '{}_{}'.format(image_id, idx)
+ if image_frame_id not in self._image_ids:
+ raise ValueError(
+ 'Missing groundtruth for image-frame id: {}'.format(image_frame_id))
+
+ if self._image_ids[image_frame_id]:
+ tf.logging.warning(
+ 'Ignoring detection with image id %s since it was '
+ 'previously added', image_frame_id)
+ continue
+
+ self._detection_boxes_list.extend(
+ coco_tools.ExportSingleImageDetectionBoxesToCoco(
+ image_id=image_frame_id,
+ category_id_set=self._category_id_set,
+ detection_boxes=det[
+ standard_fields.DetectionResultFields.detection_boxes],
+ detection_scores=det[
+ standard_fields.DetectionResultFields.detection_scores],
+ detection_classes=det[
+ standard_fields.DetectionResultFields.detection_classes]))
+ self._image_ids[image_frame_id] = True
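+
+
+# Usage sketch (illustrative; `categories`, `per_frame_groundtruth` and
+# `per_frame_detections` are placeholders, one list entry per unrolled frame):
+#
+#   evaluator = CocoEvaluationAllFrames(categories)
+#   evaluator.add_single_ground_truth_image_info(
+#       image_id='video_0', groundtruth_dict=per_frame_groundtruth)
+#   evaluator.add_single_detected_image_info(
+#       image_id='video_0', detections_dict=per_frame_detections)
+#   metrics = evaluator.evaluate()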
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/metrics/coco_evaluation_all_frames_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/metrics/coco_evaluation_all_frames_test.py"
new file mode 100644
index 00000000..c6bf4f03
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/metrics/coco_evaluation_all_frames_test.py"
@@ -0,0 +1,156 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for lstm_object_detection.metrics.coco_evaluation_all_frames."""
+
+import numpy as np
+import tensorflow as tf
+from lstm_object_detection.metrics import coco_evaluation_all_frames
+from object_detection.core import standard_fields
+
+
+class CocoEvaluationAllFramesTest(tf.test.TestCase):
+
+ def testGroundtruthAndDetectionsDisagreeOnAllFrames(self):
+ """Tests that mAP is calculated on several different frame results."""
+ category_list = [{'id': 0, 'name': 'dog'}, {'id': 1, 'name': 'cat'}]
+ video_evaluator = coco_evaluation_all_frames.CocoEvaluationAllFrames(
+ category_list)
+ video_evaluator.add_single_ground_truth_image_info(
+ image_id='image1',
+ groundtruth_dict=[{
+ standard_fields.InputDataFields.groundtruth_boxes:
+ np.array([[50., 50., 200., 200.]]),
+ standard_fields.InputDataFields.groundtruth_classes:
+ np.array([1])
+ }, {
+ standard_fields.InputDataFields.groundtruth_boxes:
+ np.array([[50., 50., 100., 100.]]),
+ standard_fields.InputDataFields.groundtruth_classes:
+ np.array([1])
+ }])
+ video_evaluator.add_single_detected_image_info(
+ image_id='image1',
+        # The detection boxes differ from the groundtruth on every frame
+        # except the last one.
+ detections_dict=[{
+ standard_fields.DetectionResultFields.detection_boxes:
+ np.array([[100., 100., 200., 200.]]),
+ standard_fields.DetectionResultFields.detection_scores:
+ np.array([.8]),
+ standard_fields.DetectionResultFields.detection_classes:
+ np.array([1])
+ }, {
+ standard_fields.DetectionResultFields.detection_boxes:
+ np.array([[50., 50., 100., 100.]]),
+ standard_fields.DetectionResultFields.detection_scores:
+ np.array([.8]),
+ standard_fields.DetectionResultFields.detection_classes:
+ np.array([1])
+ }])
+
+ metrics = video_evaluator.evaluate()
+ self.assertNotEqual(metrics['DetectionBoxes_Precision/mAP'], 1.0)
+
+ def testGroundtruthAndDetections(self):
+ """Tests that mAP is calculated correctly on GT and Detections."""
+ category_list = [{'id': 0, 'name': 'dog'}, {'id': 1, 'name': 'cat'}]
+ video_evaluator = coco_evaluation_all_frames.CocoEvaluationAllFrames(
+ category_list)
+ video_evaluator.add_single_ground_truth_image_info(
+ image_id='image1',
+ groundtruth_dict=[{
+ standard_fields.InputDataFields.groundtruth_boxes:
+ np.array([[100., 100., 200., 200.]]),
+ standard_fields.InputDataFields.groundtruth_classes:
+ np.array([1])
+ }])
+ video_evaluator.add_single_ground_truth_image_info(
+ image_id='image2',
+ groundtruth_dict=[{
+ standard_fields.InputDataFields.groundtruth_boxes:
+ np.array([[50., 50., 100., 100.]]),
+ standard_fields.InputDataFields.groundtruth_classes:
+ np.array([1])
+ }])
+ video_evaluator.add_single_ground_truth_image_info(
+ image_id='image3',
+ groundtruth_dict=[{
+ standard_fields.InputDataFields.groundtruth_boxes:
+ np.array([[50., 100., 100., 120.]]),
+ standard_fields.InputDataFields.groundtruth_classes:
+ np.array([1])
+ }])
+ video_evaluator.add_single_detected_image_info(
+ image_id='image1',
+ detections_dict=[{
+ standard_fields.DetectionResultFields.detection_boxes:
+ np.array([[100., 100., 200., 200.]]),
+ standard_fields.DetectionResultFields.detection_scores:
+ np.array([.8]),
+ standard_fields.DetectionResultFields.detection_classes:
+ np.array([1])
+ }])
+ video_evaluator.add_single_detected_image_info(
+ image_id='image2',
+ detections_dict=[{
+ standard_fields.DetectionResultFields.detection_boxes:
+ np.array([[50., 50., 100., 100.]]),
+ standard_fields.DetectionResultFields.detection_scores:
+ np.array([.8]),
+ standard_fields.DetectionResultFields.detection_classes:
+ np.array([1])
+ }])
+ video_evaluator.add_single_detected_image_info(
+ image_id='image3',
+ detections_dict=[{
+ standard_fields.DetectionResultFields.detection_boxes:
+ np.array([[50., 100., 100., 120.]]),
+ standard_fields.DetectionResultFields.detection_scores:
+ np.array([.8]),
+ standard_fields.DetectionResultFields.detection_classes:
+ np.array([1])
+ }])
+ metrics = video_evaluator.evaluate()
+ self.assertAlmostEqual(metrics['DetectionBoxes_Precision/mAP'], 1.0)
+
+ def testMissingDetectionResults(self):
+    """Tests that a ValueError is raised when groundtruth is missing."""
+ category_list = [{'id': 0, 'name': 'dog'}]
+ video_evaluator = coco_evaluation_all_frames.CocoEvaluationAllFrames(
+ category_list)
+ video_evaluator.add_single_ground_truth_image_info(
+ image_id='image1',
+ groundtruth_dict=[{
+ standard_fields.InputDataFields.groundtruth_boxes:
+ np.array([[100., 100., 200., 200.]]),
+ standard_fields.InputDataFields.groundtruth_classes:
+ np.array([1])
+ }])
+ with self.assertRaisesRegexp(ValueError,
+ r'Missing groundtruth for image-frame id:.*'):
+ video_evaluator.add_single_detected_image_info(
+ image_id='image3',
+ detections_dict=[{
+ standard_fields.DetectionResultFields.detection_boxes:
+ np.array([[100., 100., 200., 200.]]),
+ standard_fields.DetectionResultFields.detection_scores:
+ np.array([.8]),
+ standard_fields.DetectionResultFields.detection_classes:
+ np.array([1])
+ }])
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/model_builder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/model_builder.py"
new file mode 100644
index 00000000..86f26b9d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/model_builder.py"
@@ -0,0 +1,169 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""A function to build a DetectionModel from configuration."""
+from lstm_object_detection.lstm import lstm_meta_arch
+from lstm_object_detection.models.lstm_ssd_mobilenet_v1_feature_extractor import LSTMMobileNetV1FeatureExtractor
+from object_detection.builders import anchor_generator_builder
+from object_detection.builders import box_coder_builder
+from object_detection.builders import box_predictor_builder
+from object_detection.builders import hyperparams_builder
+from object_detection.builders import image_resizer_builder
+from object_detection.builders import losses_builder
+from object_detection.builders import matcher_builder
+from object_detection.builders import model_builder
+from object_detection.builders import post_processing_builder
+from object_detection.builders import region_similarity_calculator_builder as sim_calc
+from object_detection.core import target_assigner
+
+model_builder.SSD_FEATURE_EXTRACTOR_CLASS_MAP.update({
+ 'lstm_mobilenet_v1': LSTMMobileNetV1FeatureExtractor,
+})
+SSD_FEATURE_EXTRACTOR_CLASS_MAP = model_builder.SSD_FEATURE_EXTRACTOR_CLASS_MAP
+
+
+def build(model_config, lstm_config, is_training):
+ """Builds a DetectionModel based on the model config.
+
+ Args:
+ model_config: A model.proto object containing the config for the desired
+ DetectionModel.
+ lstm_config: LstmModel config proto that specifies LSTM train/eval configs.
+ is_training: True if this model is being built for training purposes.
+
+ Returns:
+ DetectionModel based on the config.
+
+ Raises:
+ ValueError: On invalid meta architecture or model.
+ """
+ return _build_lstm_model(model_config.ssd, lstm_config, is_training)
+
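+# Example usage (a minimal sketch): assuming `configs` was produced by
+# lstm_object_detection.utils.config_util.get_configs_from_pipeline_file(),
+# an LSTM detection model can be constructed as:
+#
+#   detection_model = build(configs['model'], configs['lstm_model'],
+#                           is_training=True)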
+
+def _build_lstm_feature_extractor(feature_extractor_config,
+ is_training,
+ lstm_state_depth,
+ reuse_weights=None):
+ """Builds a ssd_meta_arch.SSDFeatureExtractor based on config.
+
+ Args:
+    feature_extractor_config: An SSDFeatureExtractor proto config from
+      ssd.proto.
+ is_training: True if this feature extractor is being built for training.
+ lstm_state_depth: An integer of the depth of the lstm state.
+ reuse_weights: If the feature extractor should reuse weights.
+
+ Returns:
+ ssd_meta_arch.SSDFeatureExtractor based on config.
+
+ Raises:
+ ValueError: On invalid feature extractor type.
+ """
+
+ feature_type = feature_extractor_config.type
+ depth_multiplier = feature_extractor_config.depth_multiplier
+ min_depth = feature_extractor_config.min_depth
+ pad_to_multiple = feature_extractor_config.pad_to_multiple
+ use_explicit_padding = feature_extractor_config.use_explicit_padding
+ use_depthwise = feature_extractor_config.use_depthwise
+ conv_hyperparams = hyperparams_builder.build(
+ feature_extractor_config.conv_hyperparams, is_training)
+ override_base_feature_extractor_hyperparams = (
+ feature_extractor_config.override_base_feature_extractor_hyperparams)
+
+ if feature_type not in SSD_FEATURE_EXTRACTOR_CLASS_MAP:
+ raise ValueError('Unknown ssd feature_extractor: {}'.format(feature_type))
+
+ feature_extractor_class = SSD_FEATURE_EXTRACTOR_CLASS_MAP[feature_type]
+ return feature_extractor_class(
+ is_training, depth_multiplier, min_depth, pad_to_multiple,
+ conv_hyperparams, reuse_weights, use_explicit_padding, use_depthwise,
+ override_base_feature_extractor_hyperparams, lstm_state_depth)
+
+
+def _build_lstm_model(ssd_config, lstm_config, is_training):
+ """Builds an LSTM detection model based on the model config.
+
+ Args:
+ ssd_config: A ssd.proto object containing the config for the desired
+ LSTMMetaArch.
+ lstm_config: LstmModel config proto that specifies LSTM train/eval configs.
+ is_training: True if this model is being built for training purposes.
+
+ Returns:
+ LSTMMetaArch based on the config.
+ Raises:
+ ValueError: If ssd_config.type is not recognized (i.e. not registered in
+ model_class_map), or if lstm_config.interleave_strategy is not recognized.
+ ValueError: If unroll_length is not specified in the config file.
+ """
+ feature_extractor = _build_lstm_feature_extractor(
+ ssd_config.feature_extractor, is_training, lstm_config.lstm_state_depth)
+
+ box_coder = box_coder_builder.build(ssd_config.box_coder)
+ matcher = matcher_builder.build(ssd_config.matcher)
+ region_similarity_calculator = sim_calc.build(
+ ssd_config.similarity_calculator)
+
+ num_classes = ssd_config.num_classes
+ ssd_box_predictor = box_predictor_builder.build(hyperparams_builder.build,
+ ssd_config.box_predictor,
+ is_training, num_classes)
+ anchor_generator = anchor_generator_builder.build(ssd_config.anchor_generator)
+ image_resizer_fn = image_resizer_builder.build(ssd_config.image_resizer)
+ non_max_suppression_fn, score_conversion_fn = post_processing_builder.build(
+ ssd_config.post_processing)
+ (classification_loss, localization_loss, classification_weight,
+ localization_weight, miner, _, _) = losses_builder.build(ssd_config.loss)
+
+ normalize_loss_by_num_matches = ssd_config.normalize_loss_by_num_matches
+ encode_background_as_zeros = ssd_config.encode_background_as_zeros
+ negative_class_weight = ssd_config.negative_class_weight
+
+ # Extra configs for lstm unroll length.
+ unroll_length = None
+ if 'lstm' in ssd_config.feature_extractor.type:
+ if is_training:
+ unroll_length = lstm_config.train_unroll_length
+ else:
+ unroll_length = lstm_config.eval_unroll_length
+ if unroll_length is None:
+ raise ValueError('No unroll length found in the config file')
+
+ target_assigner_instance = target_assigner.TargetAssigner(
+ region_similarity_calculator,
+ matcher,
+ box_coder,
+ negative_class_weight=negative_class_weight)
+
+ lstm_model = lstm_meta_arch.LSTMMetaArch(
+ is_training=is_training,
+ anchor_generator=anchor_generator,
+ box_predictor=ssd_box_predictor,
+ box_coder=box_coder,
+ feature_extractor=feature_extractor,
+ encode_background_as_zeros=encode_background_as_zeros,
+ image_resizer_fn=image_resizer_fn,
+ non_max_suppression_fn=non_max_suppression_fn,
+ score_conversion_fn=score_conversion_fn,
+ classification_loss=classification_loss,
+ localization_loss=localization_loss,
+ classification_loss_weight=classification_weight,
+ localization_loss_weight=localization_weight,
+ normalize_loss_by_num_matches=normalize_loss_by_num_matches,
+ hard_example_miner=miner,
+ unroll_length=unroll_length,
+ target_assigner_instance=target_assigner_instance)
+
+ return lstm_model
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/model_builder_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/model_builder_test.py"
new file mode 100644
index 00000000..445a26b4
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/model_builder_test.py"
@@ -0,0 +1,158 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for video_object_detection.tensorflow.model_builder."""
+
+import tensorflow as tf
+from google.protobuf import text_format
+from lstm_object_detection import model_builder
+from lstm_object_detection.lstm import lstm_meta_arch
+from lstm_object_detection.protos import pipeline_pb2 as internal_pipeline_pb2
+from object_detection.protos import pipeline_pb2
+
+
+class ModelBuilderTest(tf.test.TestCase):
+
+ def create_model(self, model_config, lstm_config):
+ """Builds a DetectionModel based on the model config.
+
+ Args:
+ model_config: A model.proto object containing the config for the desired
+ DetectionModel.
+ lstm_config: LstmModel config proto that specifies LSTM train/eval
+ configs.
+
+ Returns:
+ DetectionModel based on the config.
+ """
+ return model_builder.build(model_config, lstm_config, is_training=True)
+
+ def get_model_configs_from_proto(self):
+ """Creates a model text proto for testing.
+
+ Returns:
+ A dictionary of model configs.
+ """
+
+ model_text_proto = """
+ [object_detection.protos.lstm_model] {
+ train_unroll_length: 4
+ eval_unroll_length: 4
+ }
+ model {
+ ssd {
+ feature_extractor {
+ type: 'lstm_mobilenet_v1'
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ negative_class_weight: 2.0
+ box_coder {
+ faster_rcnn_box_coder {
+ }
+ }
+ matcher {
+ argmax_matcher {
+ }
+ }
+ similarity_calculator {
+ iou_similarity {
+ }
+ }
+ anchor_generator {
+ ssd_anchor_generator {
+ aspect_ratios: 1.0
+ }
+ }
+ image_resizer {
+ fixed_shape_resizer {
+ height: 320
+ width: 320
+ }
+ }
+ box_predictor {
+ convolutional_box_predictor {
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ }
+ normalize_loc_loss_by_codesize: true
+ loss {
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ localization_loss {
+ weighted_smooth_l1 {
+ }
+ }
+ }
+ }
+ }"""
+
+ pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
+ text_format.Merge(model_text_proto, pipeline_config)
+
+ configs = {}
+ configs['model'] = pipeline_config.model
+ configs['lstm_model'] = pipeline_config.Extensions[
+ internal_pipeline_pb2.lstm_model]
+
+ return configs
+
+ def test_model_creation_from_valid_configs(self):
+ configs = self.get_model_configs_from_proto()
+ # Test model properties.
+ self.assertEqual(configs['model'].ssd.negative_class_weight, 2.0)
+ self.assertTrue(configs['model'].ssd.normalize_loc_loss_by_codesize)
+ self.assertEqual(configs['model'].ssd.feature_extractor.type,
+ 'lstm_mobilenet_v1')
+
+ model = self.create_model(configs['model'], configs['lstm_model'])
+    # Test architecture type.
+ self.assertIsInstance(model, lstm_meta_arch.LSTMMetaArch)
+ # Test LSTM unroll length.
+ self.assertEqual(model.unroll_length, 4)
+
+ def test_model_creation_from_invalid_configs(self):
+ configs = self.get_model_configs_from_proto()
+ # Test model build failure with wrong input configs.
+ with self.assertRaises(AttributeError):
+ _ = self.create_model(configs['model'], configs['model'])
+
+ # Test model builder failure with missing configs.
+ with self.assertRaises(TypeError):
+ # pylint: disable=no-value-for-parameter
+ _ = self.create_model(configs['lstm_model'])
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/models/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/models/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/models/lstm_ssd_mobilenet_v1_feature_extractor.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/models/lstm_ssd_mobilenet_v1_feature_extractor.py"
new file mode 100644
index 00000000..d753f9b2
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/models/lstm_ssd_mobilenet_v1_feature_extractor.py"
@@ -0,0 +1,178 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""LSTMFeatureExtractor for MobilenetV1 features."""
+
+import tensorflow as tf
+from tensorflow.python.framework import ops as tf_ops
+from lstm_object_detection.lstm import lstm_cells
+from lstm_object_detection.lstm import lstm_meta_arch
+from lstm_object_detection.lstm import rnn_decoder
+from object_detection.models import feature_map_generators
+from object_detection.utils import context_manager
+from object_detection.utils import ops
+from object_detection.utils import shape_utils
+from nets import mobilenet_v1
+
+slim = tf.contrib.slim
+
+
+class LSTMMobileNetV1FeatureExtractor(lstm_meta_arch.LSTMFeatureExtractor):
+ """LSTM Feature Extractor using MobilenetV1 features."""
+
+ def __init__(self,
+ is_training,
+ depth_multiplier,
+ min_depth,
+ pad_to_multiple,
+ conv_hyperparams,
+ reuse_weights=None,
+ use_explicit_padding=False,
+ use_depthwise=True,
+ override_base_feature_extractor_hyperparams=False,
+ lstm_state_depth=256):
+ """Initializes instance of MobileNetV1 Feature Extractor for LSTM Models.
+
+ Args:
+ is_training: A boolean whether the network is in training mode.
+ depth_multiplier: A float depth multiplier for feature extractor.
+ min_depth: A number representing minimum feature extractor depth.
+ pad_to_multiple: The nearest multiple to zero pad the input height and
+ width dimensions to.
+ conv_hyperparams: A function to construct tf slim arg_scope for conv2d
+ and separable_conv2d ops in the layers that are added on top of the
+ base feature extractor.
+ reuse_weights: Whether to reuse variables. Default is None.
+ use_explicit_padding: Whether to use explicit padding when extracting
+ features. Default is False.
+ use_depthwise: Whether to use depthwise convolutions. Default is True.
+ override_base_feature_extractor_hyperparams: Whether to override
+ hyperparameters of the base feature extractor with the one from
+ `conv_hyperparams_fn`.
+      lstm_state_depth: An integer of the depth of the lstm state.
+ """
+ super(LSTMMobileNetV1FeatureExtractor, self).__init__(
+ is_training, depth_multiplier, min_depth, pad_to_multiple,
+ conv_hyperparams, reuse_weights, use_explicit_padding, use_depthwise,
+ override_base_feature_extractor_hyperparams)
+ self._feature_map_layout = {
+ 'from_layer': ['Conv2d_13_pointwise_lstm', '', '', '', ''],
+ 'layer_depth': [-1, 512, 256, 256, 128],
+ 'use_explicit_padding': self._use_explicit_padding,
+ 'use_depthwise': self._use_depthwise,
+ }
+ self._base_network_scope = 'MobilenetV1'
+ self._lstm_state_depth = lstm_state_depth
+
+ def extract_features(self,
+ preprocessed_inputs,
+ state_saver=None,
+ state_name='lstm_state',
+ unroll_length=5,
+ scope=None):
+ """Extracts features from preprocessed inputs.
+
+ The features include the base network features, lstm features and SSD
+ features, organized in the following name scope:
+
+ /MobilenetV1/...
+ /LSTM/...
+ /FeatureMaps/...
+
+ Args:
+ preprocessed_inputs: A [batch, height, width, channels] float tensor
+ representing a batch of consecutive frames from video clips.
+ state_saver: A state saver object with methods `state` and `save_state`.
+ state_name: A python string for the name to use with the state_saver.
+ unroll_length: The number of steps to unroll the lstm.
+ scope: The scope for the base network of the feature extractor.
+
+ Returns:
+ A list of tensors where the ith tensor has shape [batch, height_i,
+ width_i, depth_i]
+ """
+ preprocessed_inputs = shape_utils.check_min_image_dim(
+ 33, preprocessed_inputs)
+ with slim.arg_scope(
+ mobilenet_v1.mobilenet_v1_arg_scope(is_training=self._is_training)):
+ with (slim.arg_scope(self._conv_hyperparams_fn())
+ if self._override_base_feature_extractor_hyperparams else
+ context_manager.IdentityContextManager()):
+ with slim.arg_scope([slim.batch_norm], fused=False):
+ # Base network.
+ with tf.variable_scope(
+ scope, self._base_network_scope,
+ reuse=self._reuse_weights) as scope:
+ net, image_features = mobilenet_v1.mobilenet_v1_base(
+ ops.pad_to_multiple(preprocessed_inputs, self._pad_to_multiple),
+ final_endpoint='Conv2d_13_pointwise',
+ min_depth=self._min_depth,
+ depth_multiplier=self._depth_multiplier,
+ scope=scope)
+
+ with slim.arg_scope(self._conv_hyperparams_fn()):
+ with slim.arg_scope(
+ [slim.batch_norm], fused=False, is_training=self._is_training):
+ # ConvLSTM layers.
+ with tf.variable_scope('LSTM', reuse=self._reuse_weights) as lstm_scope:
+ lstm_cell = lstm_cells.BottleneckConvLSTMCell(
+ filter_size=(3, 3),
+ output_size=(net.shape[1].value, net.shape[2].value),
+ num_units=max(self._min_depth, self._lstm_state_depth),
+ activation=tf.nn.relu6,
+ visualize_gates=True)
+
+ net_seq = list(tf.split(net, unroll_length))
+ if state_saver is None:
+ init_state = lstm_cell.init_state(
+                state_name, net.shape[0].value // unroll_length, tf.float32)
+ else:
+ c = state_saver.state('%s_c' % state_name)
+ h = state_saver.state('%s_h' % state_name)
+ init_state = (c, h)
+
+          # Identities added for inputting state tensors externally.
+ c_ident = tf.identity(init_state[0], name='lstm_state_in_c')
+ h_ident = tf.identity(init_state[1], name='lstm_state_in_h')
+ init_state = (c_ident, h_ident)
+
+ net_seq, states_out = rnn_decoder.rnn_decoder(
+ net_seq, init_state, lstm_cell, scope=lstm_scope)
+ batcher_ops = None
+ self._states_out = states_out
+ if state_saver is not None:
+ self._step = state_saver.state('%s_step' % state_name)
+ batcher_ops = [
+ state_saver.save_state('%s_c' % state_name, states_out[-1][0]),
+ state_saver.save_state('%s_h' % state_name, states_out[-1][1]),
+ state_saver.save_state('%s_step' % state_name, self._step - 1)
+ ]
+ with tf_ops.control_dependencies(batcher_ops):
+ image_features['Conv2d_13_pointwise_lstm'] = tf.concat(net_seq, 0)
+
+ # Identities added for reading output states, to be reused externally.
+ tf.identity(states_out[-1][0], name='lstm_state_out_c')
+ tf.identity(states_out[-1][1], name='lstm_state_out_h')
+
+ # SSD layers.
+ with tf.variable_scope('FeatureMaps', reuse=self._reuse_weights):
+ feature_maps = feature_map_generators.multi_resolution_feature_maps(
+ feature_map_layout=self._feature_map_layout,
+ depth_multiplier=(self._depth_multiplier),
+ min_depth=self._min_depth,
+ insert_1x1_conv=True,
+ image_features=image_features)
+
+ return feature_maps.values()
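+
+
+# Example usage (a sketch): `conv_hyperparams_fn` and `images` below are
+# assumed to be provided by the caller, as in the accompanying unit test.
+#
+#   extractor = LSTMMobileNetV1FeatureExtractor(
+#       is_training=True, depth_multiplier=1.0, min_depth=32,
+#       pad_to_multiple=1, conv_hyperparams=conv_hyperparams_fn)
+#   feature_maps = extractor.extract_features(images, unroll_length=4)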
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/models/lstm_ssd_mobilenet_v1_feature_extractor_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/models/lstm_ssd_mobilenet_v1_feature_extractor_test.py"
new file mode 100644
index 00000000..9471b147
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/models/lstm_ssd_mobilenet_v1_feature_extractor_test.py"
@@ -0,0 +1,139 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for models.lstm_ssd_mobilenet_v1_feature_extractor."""
+
+import numpy as np
+import tensorflow as tf
+
+from lstm_object_detection.models import lstm_ssd_mobilenet_v1_feature_extractor as feature_extractor_module
+from object_detection.models import ssd_feature_extractor_test
+
+slim = tf.contrib.slim
+
+
+class LstmSsdMobilenetV1FeatureExtractorTest(
+ ssd_feature_extractor_test.SsdFeatureExtractorTestBase):
+
+ def _create_feature_extractor(self,
+ depth_multiplier=1.0,
+ pad_to_multiple=1,
+ is_training=True,
+ use_explicit_padding=False):
+ """Constructs a new feature extractor.
+
+ Args:
+ depth_multiplier: A float depth multiplier for feature extractor.
+ pad_to_multiple: The nearest multiple to zero pad the input height and
+ width dimensions to.
+ is_training: A boolean whether the network is in training mode.
+ use_explicit_padding: A boolean whether to use explicit padding.
+
+ Returns:
+ An lstm_ssd_meta_arch.LSTMMobileNetV1FeatureExtractor object.
+ """
+ min_depth = 32
+ extractor = (
+        feature_extractor_module.LSTMMobileNetV1FeatureExtractor(
+ is_training,
+ depth_multiplier,
+ min_depth,
+ pad_to_multiple,
+ self.conv_hyperparams_fn,
+ use_explicit_padding=use_explicit_padding))
+ extractor.lstm_state_depth = int(256 * depth_multiplier)
+ return extractor
+
+ def test_extract_features_returns_correct_shapes_256(self):
+ image_height = 256
+ image_width = 256
+ depth_multiplier = 1.0
+ pad_to_multiple = 1
+ batch_size = 5
+ expected_feature_map_shape = [(batch_size, 8, 8, 256), (batch_size, 4, 4,
+ 512),
+ (batch_size, 2, 2, 256), (batch_size, 1, 1,
+ 256)]
+ self.check_extract_features_returns_correct_shape(
+ batch_size,
+ image_height,
+ image_width,
+ depth_multiplier,
+ pad_to_multiple,
+ expected_feature_map_shape,
+ use_explicit_padding=False)
+ self.check_extract_features_returns_correct_shape(
+ batch_size,
+ image_height,
+ image_width,
+ depth_multiplier,
+ pad_to_multiple,
+ expected_feature_map_shape,
+ use_explicit_padding=True)
+
+ def test_preprocess_returns_correct_value_range(self):
+ test_image = np.random.rand(5, 128, 128, 3)
+ feature_extractor = self._create_feature_extractor()
+ preprocessed_image = feature_extractor.preprocess(test_image)
+ self.assertTrue(np.all(np.less_equal(np.abs(preprocessed_image), 1.0)))
+
+ def test_variables_only_created_in_scope(self):
+ scope_name = 'MobilenetV1'
+ g = tf.Graph()
+ with g.as_default():
+ preprocessed_inputs = tf.placeholder(tf.float32, (5, 256, 256, 3))
+ feature_extractor = self._create_feature_extractor()
+ feature_extractor.extract_features(preprocessed_inputs)
+ variables = g.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
+ find_scope = False
+ for variable in variables:
+ if scope_name in variable.name:
+ find_scope = True
+ break
+ self.assertTrue(find_scope)
+
+ def test_lstm_non_zero_state(self):
+ init_state = {
+ 'lstm_state_c': tf.zeros([8, 8, 256]),
+ 'lstm_state_h': tf.zeros([8, 8, 256]),
+ 'lstm_state_step': tf.zeros([1])
+ }
+ seq = {'test': tf.random_uniform([3, 1, 1, 1])}
+ stateful_reader = tf.contrib.training.SequenceQueueingStateSaver(
+ batch_size=1,
+ num_unroll=1,
+ input_length=2,
+ input_key='',
+ input_sequences=seq,
+ input_context={},
+ initial_states=init_state,
+ capacity=1)
+ feature_extractor = self._create_feature_extractor()
+ image = tf.random_uniform([5, 256, 256, 3])
+ with tf.variable_scope('zero_state'):
+ feature_map = feature_extractor.extract_features(
+ image, stateful_reader.next_batch)
+ with tf.Session() as sess:
+ sess.run(tf.global_variables_initializer())
+ sess.run([stateful_reader.prefetch_op])
+ _ = sess.run([feature_map])
+ # Update states with the next batch.
+ state = sess.run(stateful_reader.next_batch.state('lstm_state_c'))
+ # State should no longer be zero after update.
+ self.assertTrue(state.any())
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/protos/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/protos/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/protos/input_reader_google.proto" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/protos/input_reader_google.proto"
new file mode 100644
index 00000000..11f18cfc
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/protos/input_reader_google.proto"
@@ -0,0 +1,33 @@
+syntax = "proto2";
+
+package lstm_object_detection.input_readers;
+
+import "object_detection/protos/input_reader.proto";
+
+message GoogleInputReader {
+ extend object_detection.protos.ExternalInputReader {
+ optional GoogleInputReader google_input_reader = 444;
+ }
+
+ oneof input_reader {
+ TFRecordVideoInputReader tf_record_video_input_reader = 1;
+ }
+}
+
+message TFRecordVideoInputReader {
+ // Path(s) to tfrecords of input data.
+ repeated string input_path = 1;
+
+ enum DataType {
+ UNSPECIFIED = 0;
+ ANNOTATED_IMAGE = 1;
+ TF_EXAMPLE = 2;
+ TF_SEQUENCE_EXAMPLE = 3;
+ }
+ optional DataType data_type = 2 [default=TF_SEQUENCE_EXAMPLE];
+
+  // Length of the video sequence. All input video sequences should have the
+  // same length in frames, e.g. 5 frames.
+ optional int32 video_length = 3;
+}
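+
+// Example usage (a sketch): inside an InputReader's external_input_reader
+// extension, the video reader might be configured as:
+//
+//   external_input_reader {
+//     [lstm_object_detection.input_readers.google_input_reader] {
+//       tf_record_video_input_reader {
+//         input_path: "/path/to/sequence_examples"
+//         data_type: TF_SEQUENCE_EXAMPLE
+//         video_length: 5
+//       }
+//     }
+//   }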
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/protos/pipeline.proto" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/protos/pipeline.proto"
new file mode 100644
index 00000000..910852e6
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/protos/pipeline.proto"
@@ -0,0 +1,21 @@
+syntax = "proto2";
+
+package object_detection.protos;
+
+import "object_detection/protos/pipeline.proto";
+
+extend TrainEvalPipelineConfig {
+ optional LstmModel lstm_model = 205743444;
+}
+
+// Message for extra fields needed for configuring LSTM model.
+message LstmModel {
+ // Unroll length for training LSTMs.
+ optional int32 train_unroll_length = 1;
+
+ // Unroll length for evaluating LSTMs.
+ optional int32 eval_unroll_length = 2;
+
+ // Depth of the lstm feature map.
+ optional int32 lstm_state_depth = 3 [default = 256];
+}
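+
+// Example usage (a sketch, mirroring the model builder test): in a
+// TrainEvalPipelineConfig text proto this extension can be populated as:
+//
+//   [object_detection.protos.lstm_model] {
+//     train_unroll_length: 4
+//     eval_unroll_length: 4
+//   }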
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/train.py"
new file mode 100644
index 00000000..c8659740
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/train.py"
@@ -0,0 +1,185 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+r"""Training executable for detection models.
+
+This executable is used to train DetectionModels. There are two ways of
+configuring the training job:
+
+1) A single pipeline_pb2.TrainEvalPipelineConfig configuration file
+can be specified by --pipeline_config_path.
+
+Example usage:
+ ./train \
+ --logtostderr \
+ --train_dir=path/to/train_dir \
+ --pipeline_config_path=pipeline_config.pbtxt
+
+2) Three configuration files can be provided: a model_pb2.DetectionModel
+configuration file to define what type of DetectionModel is being trained, an
+input_reader_pb2.InputReader file to specify what training data will be used and
+a train_pb2.TrainConfig file to configure training parameters.
+
+Example usage:
+ ./train \
+ --logtostderr \
+ --train_dir=path/to/train_dir \
+ --model_config_path=model_config.pbtxt \
+ --train_config_path=train_config.pbtxt \
+ --input_config_path=train_input_config.pbtxt
+
+"""
+
+import functools
+import json
+import os
+from absl import flags
+import tensorflow as tf
+from lstm_object_detection import model_builder
+from lstm_object_detection import trainer
+from lstm_object_detection.inputs import seq_dataset_builder
+from lstm_object_detection.utils import config_util
+from object_detection.builders import preprocessor_builder
+
+flags.DEFINE_string('master', '', 'Name of the TensorFlow master to use.')
+flags.DEFINE_integer('task', 0, 'task id')
+flags.DEFINE_integer('num_clones', 1, 'Number of clones to deploy per worker.')
+flags.DEFINE_boolean(
+ 'clone_on_cpu', False,
+ 'Force clones to be deployed on CPU. Note that even if '
+ 'set to False (allowing ops to run on gpu), some ops may '
+ 'still be run on the CPU if they have no GPU kernel.')
+flags.DEFINE_integer('worker_replicas', 1, 'Number of worker+trainer '
+ 'replicas.')
+flags.DEFINE_integer(
+ 'ps_tasks', 0, 'Number of parameter server tasks. If None, does not use '
+ 'a parameter server.')
+flags.DEFINE_string(
+ 'train_dir', '',
+ 'Directory to save the checkpoints and training summaries.')
+
+flags.DEFINE_string(
+ 'pipeline_config_path', '',
+ 'Path to a pipeline_pb2.TrainEvalPipelineConfig config '
+ 'file. If provided, other configs are ignored')
+
+flags.DEFINE_string('train_config_path', '',
+ 'Path to a train_pb2.TrainConfig config file.')
+flags.DEFINE_string('input_config_path', '',
+ 'Path to an input_reader_pb2.InputReader config file.')
+flags.DEFINE_string('model_config_path', '',
+ 'Path to a model_pb2.DetectionModel config file.')
+
+FLAGS = flags.FLAGS
+
+
+def main(_):
+ assert FLAGS.train_dir, '`train_dir` is missing.'
+ if FLAGS.task == 0:
+ tf.gfile.MakeDirs(FLAGS.train_dir)
+ if FLAGS.pipeline_config_path:
+ configs = config_util.get_configs_from_pipeline_file(
+ FLAGS.pipeline_config_path)
+ if FLAGS.task == 0:
+ tf.gfile.Copy(
+ FLAGS.pipeline_config_path,
+ os.path.join(FLAGS.train_dir, 'pipeline.config'),
+ overwrite=True)
+ else:
+ configs = config_util.get_configs_from_multiple_files(
+ model_config_path=FLAGS.model_config_path,
+ train_config_path=FLAGS.train_config_path,
+ train_input_config_path=FLAGS.input_config_path)
+ if FLAGS.task == 0:
+ for name, config in [('model.config', FLAGS.model_config_path),
+ ('train.config', FLAGS.train_config_path),
+ ('input.config', FLAGS.input_config_path)]:
+ tf.gfile.Copy(
+ config, os.path.join(FLAGS.train_dir, name), overwrite=True)
+
+ model_config = configs['model']
+ lstm_config = configs['lstm_model']
+ train_config = configs['train_config']
+ input_config = configs['train_input_config']
+
+ model_fn = functools.partial(
+ model_builder.build,
+ model_config=model_config,
+ lstm_config=lstm_config,
+ is_training=True)
+
+ def get_next(config, model_config, lstm_config, unroll_length):
+ data_augmentation_options = [
+ preprocessor_builder.build(step)
+ for step in train_config.data_augmentation_options
+ ]
+ return seq_dataset_builder.build(
+ config,
+ model_config,
+ lstm_config,
+ unroll_length,
+ data_augmentation_options,
+ batch_size=train_config.batch_size)
+
+ create_input_dict_fn = functools.partial(get_next, input_config, model_config,
+ lstm_config,
+ lstm_config.train_unroll_length)
+
+ env = json.loads(os.environ.get('TF_CONFIG', '{}'))
+ cluster_data = env.get('cluster', None)
+ cluster = tf.train.ClusterSpec(cluster_data) if cluster_data else None
+ task_data = env.get('task', None) or {'type': 'master', 'index': 0}
+ task_info = type('TaskSpec', (object,), task_data)
+
+ # Parameters for a single worker.
+ ps_tasks = 0
+ worker_replicas = 1
+ worker_job_name = 'lonely_worker'
+ task = 0
+ is_chief = True
+ master = ''
+
+ if cluster_data and 'worker' in cluster_data:
+    # Total number of worker replicas includes "worker"s and the "master".
+ worker_replicas = len(cluster_data['worker']) + 1
+ if cluster_data and 'ps' in cluster_data:
+ ps_tasks = len(cluster_data['ps'])
+
+ if worker_replicas > 1 and ps_tasks < 1:
+ raise ValueError('At least 1 ps task is needed for distributed training.')
+
+ if worker_replicas >= 1 and ps_tasks > 0:
+ # Set up distributed training.
+ server = tf.train.Server(
+ tf.train.ClusterSpec(cluster),
+ protocol='grpc',
+ job_name=task_info.type,
+ task_index=task_info.index)
+ if task_info.type == 'ps':
+ server.join()
+ return
+
+ worker_job_name = '%s/task:%d' % (task_info.type, task_info.index)
+ task = task_info.index
+ is_chief = (task_info.type == 'master')
+ master = server.target
+
+ trainer.train(create_input_dict_fn, model_fn, train_config, master, task,
+ FLAGS.num_clones, worker_replicas, FLAGS.clone_on_cpu, ps_tasks,
+ worker_job_name, is_chief, FLAGS.train_dir)
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/trainer.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/trainer.py"
new file mode 100644
index 00000000..e73e32c1
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/trainer.py"
@@ -0,0 +1,410 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Detection model trainer.
+
+This file provides a generic training method that can be used to train a
+DetectionModel.
+"""
+
+import functools
+import tensorflow as tf
+from absl import logging
+
+from object_detection.builders import optimizer_builder
+from object_detection.core import standard_fields as fields
+from object_detection.utils import ops as util_ops
+from object_detection.utils import variables_helper
+from deployment import model_deploy
+
+slim = tf.contrib.slim
+
+
+def create_input_queue(create_tensor_dict_fn):
+  """Sets up input tensors and returns them in a dictionary.
+
+ Args:
+ create_tensor_dict_fn: function to create tensor dictionary.
+
+ Returns:
+    all_dict: A dictionary holding tensors for images, boxes, and targets.
+ """
+ tensor_dict = create_tensor_dict_fn()
+ all_dict = {}
+
+ num_images = len(tensor_dict[fields.InputDataFields.image])
+ all_dict['batch'] = tensor_dict['batch']
+ del tensor_dict['batch']
+
+ for i in range(num_images):
+ suffix = str(i)
+ for key, val in tensor_dict.items():
+ all_dict[key + suffix] = val[i]
+
+ all_dict[fields.InputDataFields.image + suffix] = tf.to_float(
+ tf.expand_dims(all_dict[fields.InputDataFields.image + suffix], 0))
+
+ return all_dict
+
+
+def get_inputs(input_queue, num_classes, merge_multiple_label_boxes=False):
+ """Dequeues batch and constructs inputs to object detection model.
+
+ Args:
+ input_queue: BatchQueue object holding enqueued tensor_dicts.
+ num_classes: Number of classes.
+ merge_multiple_label_boxes: Whether to merge boxes with multiple labels
+ or not. Defaults to false. Merged boxes are represented with a single
+ box and a k-hot encoding of the multiple labels associated with the
+ boxes.
+
+ Returns:
+ images: a list of 3-D float tensor of images.
+ image_keys: a list of string keys for the images.
+ locations: a list of tensors of shape [num_boxes, 4] containing the corners
+ of the groundtruth boxes.
+ classes: a list of padded one-hot tensors containing target classes.
+ masks: a list of 3-D float tensors of shape [num_boxes, image_height,
+ image_width] containing instance masks for objects if present in the
+ input_queue. Else returns None.
+ keypoints: a list of 3-D float tensors of shape [num_boxes, num_keypoints,
+ 2] containing keypoints for objects if present in the
+ input queue. Else returns None.
+ """
+ read_data_list = input_queue
+ label_id_offset = 1
+
+ def extract_images_and_targets(read_data):
+ """Extract images and targets from the input dict."""
+ suffix = 0
+
+ images = []
+ keys = []
+ locations = []
+ classes = []
+ masks = []
+ keypoints = []
+
+ while fields.InputDataFields.image + str(suffix) in read_data:
+ image = read_data[fields.InputDataFields.image + str(suffix)]
+ key = ''
+ if fields.InputDataFields.source_id in read_data:
+ key = read_data[fields.InputDataFields.source_id + str(suffix)]
+ location_gt = (
+ read_data[fields.InputDataFields.groundtruth_boxes + str(suffix)])
+ classes_gt = tf.cast(
+ read_data[fields.InputDataFields.groundtruth_classes + str(suffix)],
+ tf.int32)
+ classes_gt -= label_id_offset
+ masks_gt = read_data.get(
+ fields.InputDataFields.groundtruth_instance_masks + str(suffix))
+ keypoints_gt = read_data.get(
+ fields.InputDataFields.groundtruth_keypoints + str(suffix))
+
+ if merge_multiple_label_boxes:
+ location_gt, classes_gt, _ = util_ops.merge_boxes_with_multiple_labels(
+ location_gt, classes_gt, num_classes)
+ else:
+ classes_gt = util_ops.padded_one_hot_encoding(
+ indices=classes_gt, depth=num_classes, left_pad=0)
+
+      # Batch read input data and groundtruth. Images, locations, and classes
+      # should by default have the same number of items.
+ images.append(image)
+ keys.append(key)
+ locations.append(location_gt)
+ classes.append(classes_gt)
+ masks.append(masks_gt)
+ keypoints.append(keypoints_gt)
+
+ suffix += 1
+
+ return (images, keys, locations, classes, masks, keypoints)
+
+ return extract_images_and_targets(read_data_list)
+
+
+def _create_losses(input_queue, create_model_fn, train_config):
+ """Creates loss function for a DetectionModel.
+
+ Args:
+ input_queue: BatchQueue object holding enqueued tensor_dicts.
+ create_model_fn: A function to create the DetectionModel.
+ train_config: a train_pb2.TrainConfig protobuf.
+ """
+
+ detection_model = create_model_fn()
+ (images, _, groundtruth_boxes_list, groundtruth_classes_list,
+ groundtruth_masks_list, groundtruth_keypoints_list) = get_inputs(
+ input_queue, detection_model.num_classes,
+ train_config.merge_multiple_label_boxes)
+
+ preprocessed_images = []
+ true_image_shapes = []
+ for image in images:
+ resized_image, true_image_shape = detection_model.preprocess(image)
+ preprocessed_images.append(resized_image)
+ true_image_shapes.append(true_image_shape)
+
+ images = tf.concat(preprocessed_images, 0)
+ true_image_shapes = tf.concat(true_image_shapes, 0)
+
+ if any(mask is None for mask in groundtruth_masks_list):
+ groundtruth_masks_list = None
+ if any(keypoints is None for keypoints in groundtruth_keypoints_list):
+ groundtruth_keypoints_list = None
+
+ detection_model.provide_groundtruth(
+ groundtruth_boxes_list, groundtruth_classes_list, groundtruth_masks_list,
+ groundtruth_keypoints_list)
+ prediction_dict = detection_model.predict(images, true_image_shapes,
+ input_queue['batch'])
+
+ losses_dict = detection_model.loss(prediction_dict, true_image_shapes)
+ for loss_tensor in losses_dict.values():
+ tf.losses.add_loss(loss_tensor)
+
+
+def get_restore_checkpoint_ops(restore_checkpoints, detection_model,
+ train_config):
+  """Creates savers that restore variables from the given checkpoints.
+
+ Args:
+    restore_checkpoints: A list of checkpoint paths to restore from.
+ detection_model: Object detection model built from config file.
+ train_config: a train_pb2.TrainConfig protobuf.
+
+ Returns:
+    restorers: A list of savers used to init the model from checkpoints.
+
+ """
+ restorers = []
+ vars_restored = []
+ for restore_checkpoint in restore_checkpoints:
+ var_map = detection_model.restore_map(
+ fine_tune_checkpoint_type=train_config.fine_tune_checkpoint_type)
+ available_var_map = (
+ variables_helper.get_variables_available_in_checkpoint(
+ var_map, restore_checkpoint))
+    # Iterate over a copy because entries may be deleted from the map below.
+    for var_name, var in list(available_var_map.items()):
+ if var in vars_restored:
+ logging.info('Variable %s contained in multiple checkpoints',
+ var.op.name)
+ del available_var_map[var_name]
+ else:
+ vars_restored.append(var)
+
+ # Initialize from ExponentialMovingAverages if possible.
+ available_ema_var_map = {}
+ ckpt_reader = tf.train.NewCheckpointReader(restore_checkpoint)
+ ckpt_vars_to_shape_map = ckpt_reader.get_variable_to_shape_map()
+    for var_name, var in available_var_map.items():
+ var_name_ema = var_name + '/ExponentialMovingAverage'
+ if var_name_ema in ckpt_vars_to_shape_map:
+ available_ema_var_map[var_name_ema] = var
+ else:
+ available_ema_var_map[var_name] = var
+ available_var_map = available_ema_var_map
+ init_saver = tf.train.Saver(available_var_map)
+ if available_var_map.keys():
+ restorers.append(init_saver)
+ else:
+ logging.info('WARNING: Checkpoint %s has no restorable variables',
+ restore_checkpoint)
+
+ return restorers
+
+
+def train(create_tensor_dict_fn,
+ create_model_fn,
+ train_config,
+ master,
+ task,
+ num_clones,
+ worker_replicas,
+ clone_on_cpu,
+ ps_tasks,
+ worker_job_name,
+ is_chief,
+ train_dir,
+ graph_hook_fn=None):
+ """Training function for detection models.
+
+ Args:
+ create_tensor_dict_fn: a function to create a tensor input dictionary.
+ create_model_fn: a function that creates a DetectionModel and generates
+ losses.
+ train_config: a train_pb2.TrainConfig protobuf.
+ master: BNS name of the TensorFlow master to use.
+ task: The task id of this training instance.
+ num_clones: The number of clones to run per machine.
+ worker_replicas: The number of work replicas to train with.
+ clone_on_cpu: True if clones should be forced to run on CPU.
+ ps_tasks: Number of parameter server tasks.
+ worker_job_name: Name of the worker job.
+ is_chief: Whether this replica is the chief replica.
+ train_dir: Directory to write checkpoints and training summaries to.
+ graph_hook_fn: Optional function that is called after the training graph is
+ completely built. This is helpful to perform additional changes to the
+ training graph such as optimizing batchnorm. The function should modify
+ the default graph.
+ """
+
+ detection_model = create_model_fn()
+
+ with tf.Graph().as_default():
+ # Build a configuration specifying multi-GPU and multi-replicas.
+ deploy_config = model_deploy.DeploymentConfig(
+ num_clones=num_clones,
+ clone_on_cpu=clone_on_cpu,
+ replica_id=task,
+ num_replicas=worker_replicas,
+ num_ps_tasks=ps_tasks,
+ worker_job_name=worker_job_name)
+
+ # Place the global step on the device storing the variables.
+ with tf.device(deploy_config.variables_device()):
+ global_step = slim.create_global_step()
+
+ with tf.device(deploy_config.inputs_device()):
+ input_queue = create_input_queue(create_tensor_dict_fn)
+
+ # Gather initial summaries.
+ # TODO(rathodv): See if summaries can be added/extracted from global tf
+ # collections so that they don't have to be passed around.
+ summaries = set(tf.get_collection(tf.GraphKeys.SUMMARIES))
+ global_summaries = set([])
+
+ model_fn = functools.partial(
+ _create_losses,
+ create_model_fn=create_model_fn,
+ train_config=train_config)
+ clones = model_deploy.create_clones(deploy_config, model_fn, [input_queue])
+ first_clone_scope = clones[0].scope
+
+ # Gather update_ops from the first clone. These contain, for example,
+ # the updates for the batch_norm variables created by model_fn.
+ update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS, first_clone_scope)
+
+ with tf.device(deploy_config.optimizer_device()):
+ training_optimizer, optimizer_summary_vars = optimizer_builder.build(
+ train_config.optimizer)
+ for var in optimizer_summary_vars:
+ tf.summary.scalar(var.op.name, var)
+
+ sync_optimizer = None
+ if train_config.sync_replicas:
+ training_optimizer = tf.train.SyncReplicasOptimizer(
+ training_optimizer,
+ replicas_to_aggregate=train_config.replicas_to_aggregate,
+ total_num_replicas=train_config.worker_replicas)
+ sync_optimizer = training_optimizer
+
+ # Create ops required to initialize the model from a given checkpoint.
+ init_fn = None
+ if train_config.fine_tune_checkpoint:
+ restore_checkpoints = [
+ path.strip() for path in train_config.fine_tune_checkpoint.split(',')
+ ]
+
+ restorers = get_restore_checkpoint_ops(restore_checkpoints,
+ detection_model, train_config)
+
+ def initializer_fn(sess):
+ for i, restorer in enumerate(restorers):
+ restorer.restore(sess, restore_checkpoints[i])
+
+ init_fn = initializer_fn
+
+ with tf.device(deploy_config.optimizer_device()):
+ regularization_losses = (
+ None if train_config.add_regularization_loss else [])
+ total_loss, grads_and_vars = model_deploy.optimize_clones(
+ clones,
+ training_optimizer,
+ regularization_losses=regularization_losses)
+ total_loss = tf.check_numerics(total_loss, 'LossTensor is inf or nan.')
+
+ # Optionally multiply bias gradients by train_config.bias_grad_multiplier.
+ if train_config.bias_grad_multiplier:
+ biases_regex_list = ['.*/biases']
+ grads_and_vars = variables_helper.multiply_gradients_matching_regex(
+ grads_and_vars,
+ biases_regex_list,
+ multiplier=train_config.bias_grad_multiplier)
+
+ # Optionally clip gradients
+ if train_config.gradient_clipping_by_norm > 0:
+ with tf.name_scope('clip_grads'):
+ grads_and_vars = slim.learning.clip_gradient_norms(
+ grads_and_vars, train_config.gradient_clipping_by_norm)
+
+ moving_average_variables = slim.get_model_variables()
+ variable_averages = tf.train.ExponentialMovingAverage(0.9999, global_step)
+ update_ops.append(variable_averages.apply(moving_average_variables))
+
+ # Create gradient updates.
+ grad_updates = training_optimizer.apply_gradients(
+ grads_and_vars, global_step=global_step)
+ update_ops.append(grad_updates)
+ update_op = tf.group(*update_ops, name='update_barrier')
+ with tf.control_dependencies([update_op]):
+ train_tensor = tf.identity(total_loss, name='train_op')
+
+ if graph_hook_fn:
+ with tf.device(deploy_config.variables_device()):
+ graph_hook_fn()
+
+ # Add summaries.
+ for model_var in slim.get_model_variables():
+ global_summaries.add(tf.summary.histogram(model_var.op.name, model_var))
+ for loss_tensor in tf.losses.get_losses():
+ global_summaries.add(tf.summary.scalar(loss_tensor.op.name, loss_tensor))
+ global_summaries.add(
+ tf.summary.scalar('TotalLoss', tf.losses.get_total_loss()))
+
+ # Add the summaries from the first clone. These contain the summaries
+ # created by model_fn and either optimize_clones() or _gather_clone_loss().
+ summaries |= set(
+ tf.get_collection(tf.GraphKeys.SUMMARIES, first_clone_scope))
+ summaries |= set(tf.get_collection(tf.GraphKeys.SUMMARIES, 'critic_loss'))
+ summaries |= global_summaries
+
+ # Merge all summaries together.
+ summary_op = tf.summary.merge(list(summaries), name='summary_op')
+
+ # Soft placement allows placing on CPU ops without GPU implementation.
+ session_config = tf.ConfigProto(
+ allow_soft_placement=True, log_device_placement=False)
+
+ # Save checkpoints regularly.
+ keep_checkpoint_every_n_hours = train_config.keep_checkpoint_every_n_hours
+ saver = tf.train.Saver(
+ keep_checkpoint_every_n_hours=keep_checkpoint_every_n_hours)
+
+ slim.learning.train(
+ train_tensor,
+ logdir=train_dir,
+ master=master,
+ is_chief=is_chief,
+ session_config=session_config,
+ startup_delay_steps=train_config.startup_delay_steps,
+ init_fn=init_fn,
+ summary_op=summary_op,
+ number_of_steps=(train_config.num_steps
+ if train_config.num_steps else None),
+ save_summaries_secs=120,
+ sync_optimizer=sync_optimizer,
+ saver=saver)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/utils/config_util.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/utils/config_util.py"
new file mode 100644
index 00000000..2bf67b8b
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/utils/config_util.py"
@@ -0,0 +1,106 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Added functionality to load from pipeline config for lstm framework."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from google.protobuf import text_format
+from lstm_object_detection.protos import input_reader_google_pb2 # pylint: disable=unused-import
+from lstm_object_detection.protos import pipeline_pb2 as internal_pipeline_pb2
+from object_detection.protos import pipeline_pb2
+from object_detection.utils import config_util
+
+
+def get_configs_from_pipeline_file(pipeline_config_path):
+ """Reads configuration from a pipeline_pb2.TrainEvalPipelineConfig.
+
+ Args:
+ pipeline_config_path: Path to pipeline_pb2.TrainEvalPipelineConfig text
+ proto.
+
+ Returns:
+ Dictionary of configuration objects. Keys are `model`, `train_config`,
+      `train_input_config`, `eval_config`, `eval_input_config`, `lstm_model`.
+      Values are the corresponding config objects.
+ """
+ pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
+ with tf.gfile.GFile(pipeline_config_path, "r") as f:
+ proto_str = f.read()
+ text_format.Merge(proto_str, pipeline_config)
+ configs = config_util.get_configs_from_pipeline_file(pipeline_config_path)
+ if pipeline_config.HasExtension(internal_pipeline_pb2.lstm_model):
+ configs["lstm_model"] = pipeline_config.Extensions[
+ internal_pipeline_pb2.lstm_model]
+ return configs
+
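+# Example usage (a sketch): loading a pipeline config that carries the
+# lstm_model extension and reading the train unroll length from it:
+#
+#   configs = get_configs_from_pipeline_file("path/to/pipeline.config")
+#   unroll_length = configs["lstm_model"].train_unroll_length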
+
+def create_pipeline_proto_from_configs(configs):
+ """Creates a pipeline_pb2.TrainEvalPipelineConfig from configs dictionary.
+
+ This function nearly performs the inverse operation of
+ get_configs_from_pipeline_file(). Instead of returning a file path, it returns
+ a `TrainEvalPipelineConfig` object.
+
+ Args:
+ configs: Dictionary of configs. See get_configs_from_pipeline_file().
+
+ Returns:
+ A fully populated pipeline_pb2.TrainEvalPipelineConfig.
+ """
+ pipeline_config = config_util.create_pipeline_proto_from_configs(configs)
+ if "lstm_model" in configs:
+ pipeline_config.Extensions[internal_pipeline_pb2.lstm_model].CopyFrom(
+ configs["lstm_model"])
+ return pipeline_config
+
+
+def get_configs_from_multiple_files(model_config_path="",
+ train_config_path="",
+ train_input_config_path="",
+ eval_config_path="",
+ eval_input_config_path="",
+ lstm_config_path=""):
+ """Reads training configuration from multiple config files.
+
+ Args:
+ model_config_path: Path to model_pb2.DetectionModel.
+ train_config_path: Path to train_pb2.TrainConfig.
+ train_input_config_path: Path to input_reader_pb2.InputReader.
+ eval_config_path: Path to eval_pb2.EvalConfig.
+ eval_input_config_path: Path to input_reader_pb2.InputReader.
+ lstm_config_path: Path to pipeline_pb2.LstmModel.
+
+ Returns:
+ Dictionary of configuration objects. Keys are `model`, `train_config`,
+ `train_input_config`, `eval_config`, `eval_input_config`, `lstm_model`.
+ Key/Values are returned only for valid (non-empty) strings.
+ """
+ configs = config_util.get_configs_from_multiple_files(
+ model_config_path=model_config_path,
+ train_config_path=train_config_path,
+ train_input_config_path=train_input_config_path,
+ eval_config_path=eval_config_path,
+ eval_input_config_path=eval_input_config_path)
+ if lstm_config_path:
+ lstm_config = internal_pipeline_pb2.LstmModel()
+ with tf.gfile.GFile(lstm_config_path, "r") as f:
+ text_format.Merge(f.read(), lstm_config)
+ configs["lstm_model"] = lstm_config
+ return configs
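+
+
+# Example usage (a sketch): loading split config files, including an optional
+# pipeline_pb2.LstmModel text proto for the LSTM settings:
+#
+#   configs = get_configs_from_multiple_files(
+#       model_config_path="model.config",
+#       train_config_path="train.config",
+#       train_input_config_path="input.config",
+#       lstm_config_path="lstm.config")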
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/utils/config_util_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/utils/config_util_test.py"
new file mode 100644
index 00000000..9919bc61
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/lstm_object_detection/utils/config_util_test.py"
@@ -0,0 +1,94 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for object_detection.utils.config_util."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import tensorflow as tf
+
+from google.protobuf import text_format
+from lstm_object_detection.protos import pipeline_pb2 as internal_pipeline_pb2
+from lstm_object_detection.utils import config_util
+from object_detection.protos import pipeline_pb2
+
+
+def _write_config(config, config_path):
+ """Writes a config object to disk."""
+ config_text = text_format.MessageToString(config)
+ with tf.gfile.Open(config_path, "wb") as f:
+ f.write(config_text)
+
+
+class ConfigUtilTest(tf.test.TestCase):
+
+ def test_get_configs_from_pipeline_file(self):
+ """Test that proto configs can be read from pipeline config file."""
+ pipeline_config_path = os.path.join(self.get_temp_dir(), "pipeline.config")
+ pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
+ pipeline_config.model.ssd.num_classes = 10
+ pipeline_config.train_config.batch_size = 32
+ pipeline_config.train_input_reader.label_map_path = "path/to/label_map"
+ pipeline_config.eval_config.num_examples = 20
+ pipeline_config.eval_input_reader.add().queue_capacity = 100
+
+ pipeline_config.Extensions[
+ internal_pipeline_pb2.lstm_model].train_unroll_length = 5
+ pipeline_config.Extensions[
+ internal_pipeline_pb2.lstm_model].eval_unroll_length = 10
+
+ _write_config(pipeline_config, pipeline_config_path)
+
+ configs = config_util.get_configs_from_pipeline_file(pipeline_config_path)
+ self.assertProtoEquals(pipeline_config.model, configs["model"])
+ self.assertProtoEquals(pipeline_config.train_config,
+ configs["train_config"])
+ self.assertProtoEquals(pipeline_config.train_input_reader,
+ configs["train_input_config"])
+ self.assertProtoEquals(pipeline_config.eval_config, configs["eval_config"])
+ self.assertProtoEquals(pipeline_config.eval_input_reader,
+ configs["eval_input_configs"])
+ self.assertProtoEquals(
+ pipeline_config.Extensions[internal_pipeline_pb2.lstm_model],
+ configs["lstm_model"])
+
+ def test_create_pipeline_proto_from_configs(self):
+ """Tests that proto can be reconstructed from configs dictionary."""
+ pipeline_config_path = os.path.join(self.get_temp_dir(), "pipeline.config")
+
+ pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
+ pipeline_config.model.ssd.num_classes = 10
+ pipeline_config.train_config.batch_size = 32
+ pipeline_config.train_input_reader.label_map_path = "path/to/label_map"
+ pipeline_config.eval_config.num_examples = 20
+ pipeline_config.eval_input_reader.add().queue_capacity = 100
+
+ pipeline_config.Extensions[
+ internal_pipeline_pb2.lstm_model].train_unroll_length = 5
+ pipeline_config.Extensions[
+ internal_pipeline_pb2.lstm_model].eval_unroll_length = 10
+ _write_config(pipeline_config, pipeline_config_path)
+
+ configs = config_util.get_configs_from_pipeline_file(pipeline_config_path)
+ pipeline_config_reconstructed = (
+ config_util.create_pipeline_proto_from_configs(configs))
+ self.assertEqual(pipeline_config, pipeline_config_reconstructed)
+
+
+if __name__ == "__main__":
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/marco/Automated_Marco.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/marco/Automated_Marco.py"
new file mode 100644
index 00000000..d0b50d33
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/marco/Automated_Marco.py"
@@ -0,0 +1,72 @@
+#!/usr/bin/python
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+import tensorflow as tf
+import csv
+import os
+import argparse
+
+
+"""
+usage:
+Processes all .jpg, .png, .bmp and .gif files found in the specified directory and its subdirectories.
+ --PATH ( Path to directory of images or path to directory with subdirectory of images). e.g Path/To/Directory/
+ --Model_PATH path to the tensorflow model
+"""
+
+
+parser = argparse.ArgumentParser(description='Crystal Detection Program')
+
+
+parser.add_argument('--PATH', type=str, help='path to the image directory; all image files in the directory and its subdirectories are processed')
+parser.add_argument('--MODEL_PATH', type=str, default='./savedmodel', help='path to the TensorFlow SavedModel directory')
+args = vars(parser.parse_args())
+PATH = args['PATH']
+model_path = args['MODEL_PATH']
+
+
+crystal_images = [os.path.join(dp, f) for dp, dn, filenames in os.walk(PATH) for f in filenames if os.path.splitext(f)[1].lower() in ['.jpg', '.png', '.bmp', '.gif']]
+size = len(crystal_images)
+
+def load_images(file_list):
+  """Yields ({"image_bytes": [raw bytes]}, path) pairs for the predictor."""
+  for path in file_list:
+    with open(path, 'rb') as f:  # close each file after reading
+      yield {"image_bytes": [f.read()]}, path
+
+
+
+iterator = load_images(crystal_images)
+
+with open(PATH +'results.csv', 'w') as csvfile:
+ Writer = csv.writer(csvfile, delimiter=' ',quotechar=' ', quoting=csv.QUOTE_MINIMAL)
+
+ predicter= tf.contrib.predictor.from_saved_model(model_path)
+ dic = {}
+
+
+ k = 0
+ for _ in range(size):
+
+ data,name = next(iterator)
+ results = predicter(data)
+
+ vals = results['scores'][0]
+ classes = results['classes'][0]
+ dictionary = dict(zip(classes, vals))
+
+ fields = ['Image path: ' + name,
+           'Crystal: ' + str(dictionary[b'Crystals']),
+           'Other: ' + str(dictionary[b'Other']),
+           'Precipitate: ' + str(dictionary[b'Precipitate']),
+           'Clear: ' + str(dictionary[b'Clear'])]
+ print(' '.join(fields))
+ Writer.writerow(fields)
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/marco/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/marco/README.md"
new file mode 100644
index 00000000..84f0b24c
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/marco/README.md"
@@ -0,0 +1,78 @@
+Automating the Evaluation of Crystallization Experiments
+========================================================
+
+This is a pretrained model described in the paper:
+
+[Classification of crystallization outcomes using deep convolutional neural networks](https://arxiv.org/abs/1803.10342).
+
+This model takes images of crystallization experiments as input and classifies
+each image as belonging to one of four categories: crystals, precipitate,
+clear, or 'other'.
+
+The model is a variant of [Inception-v3](https://arxiv.org/abs/1512.00567) trained on data from the [MARCO](http://marco.ccr.buffalo.edu) repository.
+
+Model
+-----
+
+The model can be downloaded from:
+
+https://storage.googleapis.com/marco-168219-model/savedmodel.zip
+
+Example
+-------
+
+1. Install TensorFlow and the [Google Cloud SDK](https://cloud.google.com/sdk/gcloud/).
+
+2. Download and unzip the model:
+
+ ```bash
+ # after fetching savedmodel.zip from the URL above
+ unzip savedmodel.zip
+ ```
+
+3. A sample image can be downloaded from:
+
+ https://storage.googleapis.com/marco-168219-model/002s_C6_ImagerDefaults_9.jpg
+
+ Convert your image into a JSON request using:
+
+ ```bash
+ python jpeg2json.py 002s_C6_ImagerDefaults_9.jpg > request.json
+ ```
+
+4. To issue a prediction, run:
+
+ ```bash
+ gcloud ml-engine local predict --model-dir=savedmodel --json-instances=request.json
+ ```
+
+The request should return normalized scores for each of the four classes
+(Crystals, Other, Precipitate, Clear).
+
+
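+For a quick local sanity check without `gcloud`, a minimal TF 1.x sketch
+(mirroring `Automated_Marco.py`; the file name is the sample image from step 3)
+might look like this:
+
+```python
+import tensorflow as tf
+
+# Load the unzipped SavedModel and score one image.
+predict_fn = tf.contrib.predictor.from_saved_model('savedmodel')
+with open('002s_C6_ImagerDefaults_9.jpg', 'rb') as f:
+  response = predict_fn({'image_bytes': [f.read()]})
+
+# 'classes' holds the four class names, 'scores' their normalized scores.
+print(dict(zip(response['classes'][0], response['scores'][0])))
+```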
+
+CloudML Endpoint
+----------------
+
+The model can also be accessed on [Google CloudML](https://cloud.google.com/ml-engine/) by issuing:
+
+```bash
+gcloud ml-engine predict --model marco_168219_model --json-instances request.json
+```
+
+Ask the author for access privileges to the CloudML instance.
+
+Note
+----
+
+`002s_C6_ImagerDefaults_9.jpg` is a sample from the
+[MARCO](http://marco.ccr.buffalo.edu) repository, contributed to the dataset under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
+
+Author
+------
+
+[Vincent Vanhoucke](mailto:vanhoucke@google.com) (github: vincentvanhoucke)
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/marco/jpeg2json.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/marco/jpeg2json.py"
new file mode 100644
index 00000000..db795e05
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/marco/jpeg2json.py"
@@ -0,0 +1,35 @@
+#!/usr/bin/python
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""jpeg2json.py: Converts a JPEG image into a json request to CloudML.
+
+Usage:
+python jpeg2json.py 002s_C6_ImagerDefaults_9.jpg > request.json
+
+See:
+https://cloud.google.com/ml-engine/docs/concepts/prediction-overview#online_prediction_input_data
+"""
+
+import base64
+import sys
+
+
+def to_json(data):
+  # b64encode returns bytes on Python 3; decode so the request is a plain string.
+  return '{"image_bytes":{"b64": "%s"}}' % base64.b64encode(data).decode('ascii')
+
+
+if __name__ == '__main__':
+  # Read the image in binary mode so b64encode receives raw bytes.
+  infile = open(sys.argv[1], 'rb') if len(sys.argv) > 1 else sys.stdin
+  print(to_json(infile.read()))
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/marco/request.json" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/marco/request.json"
new file mode 100644
index 00000000..338a0091
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/marco/request.json"
@@ -0,0 +1 @@
+{"image_bytes":{"b64": "/9j/4AAQSkZJRgABAQEAYABgAAD/2wBDAAYEBQYFBAYGBQYHBwYIChAKCgkJChQODwwQFxQYGBcUFhYaHSUfGhsjHBYWICwgIyYnKSopGR8tMC0oMCUoKSj/2wBDAQcHBwoIChMKChMoGhYaKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCj/wAARCAPABQADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD0iPg4J47VKB0FNAPUVIORius9ERRxyacvTJpB1GOpqVMY4GKBEeOfalGQcdqd04xR1IzQFhe3P05pyg9zSOMAHH40hOF60mIjcAjIJyPSoMke1TnGPfrVWd8nAHNBLEkKsKpk4JHbPFOkl29etUpJsE0bEMsTOFBBP41kXdwFPX9addXqYIDVy+t6rFbq21gz46Vk5Ethrd6gyOpPbJrjdRmVhnHP1NOmupLiYtIevYVk6tKYzhazb6Gb1M6+mUSYUc/U1QlYNyP50yV2eUk9KbCQ0h3HpTdiSRVJHp9KrXG3POcj1NPaf5GORWdJKSxJ5qQFlK+p/CqzkHuabJIc9aibt3qhEsTHouMUr4yRn9ahDYHFG4kmkAjDrg/gKapKt9KCSBTQOeRTAs7t67s8+lMbGOaaWwAM0EjPJpANNGfWkY/NTW60wJPwFHT0pVGQKNtACYzwOab04xzUpG1eKjYE9aAFUkGnn14FRrhakA3dKAG9u1AznpSlSOv5VLEny5P50ARnI+tHIOR+NSbcHmmyN2HSgBhHpxSoSTwMmgDJzUiDHSgAPGPyp/PFSwQruy3FEyqw+WlcCDODUhOVyeKjVR3qQ4K8UANQEngA+5qxnKYYkH271FGQvXmnPJxjpQ9QIm4anAjA5UfSnou8+op4RBgcUXERBuepp8JO7jH9aawA6U6MEjgc0AEhBBHNMGfxqXyj1OcUgUhccYoAhbrTkJ3U9YsnFSGMLQBE0meO9KGyQDTXC9e9M3fMaaVwLGflFIzcVEpOPanhC3AzVqArjWPNNCk9quwWbP2NatppDnkIfyqrJDSbMOK3duxq5BYO3YgV00Gk7BjGDVxLDHAFK/YfL3OYh0w8Zq7Fp6jrg10Caa7dBV630SVv4OKYaHNx2qqen6VZS3JPyrxXWQaC3939KuxaCwx8tK6A4tbORj6e9TJpjNjiu5TRD/dq0mjYAyBRzIRwqaVzkjr1qzFpA4yufrXdR6SBjip00tRxtpc4jh49I4GFqZdI5AC813C6ZjAxUi6cMdKXMBw6aSOygfSpRo/qK7VdPA7U4aeoHI5o5hHFDSM4yOPT0p40oD+H9K7NbFcdKcLIADgYouwOJ/skenvTRpQzytdwbJT2Gab9g44x60cwjhzpI/ujApG0kf3a7c2A/u0n2AY6UXA4VtJGBxUb6QM/cGK7s2IxyM9qjOnjuPei4HANpAPbpVZtIB7HmvQzp4OcioTp2ewp8wjzt9IA6D8KryaS46DBr0Z9LB/hFV30r/Z4o5wPOH06RfUiq0ltIvVc/hXo8mk/7NVJdHPZc/hT5kFzzuS3z1WqstkrenNd9Nouc/J+lZ9xorA5Cn8qNBcxw02nHsKoy2jp2/Su7fTmXqBioH01HGMU72HzHAtEynkU3B712N1opwSq8fWse60t4zyho0ZXMYzdeaZjrVua2dSeDiq7Ls+lJxKTGg4OKeaiJ5pwbjmpcRpjh6UPmkBNKzVIxADjFGDmlGQOKkXnrRcBpBKj0pD0qRWwp3VGxz1HWgGKjAHqalEg9W+lVs84NSBcDOaGgJi47k/jVctuYnrQxHU1GWXPBzRYCYtx1OaZk5pVUuM5pPLbBNAg5xkDNM2nHSpAfWl3fJ0oATado4GKYVPXAp2444pMnJ7ZoGQ9zQc+macFO4daklgYDd2oERA47nNMY+oyfanYPenYOc4oAg7d8mm/yqaZR2qHtgdqAHD1pyimjpThkCgY/PcHPtUTEk96cp60hKkDPWgCP64NKPSlYYPHWgZ4oAeAe/SnA00df60uMcmgCeNsfWrVtKFPP86oKakVuc00xHSWl2Aa3bS5BwR3rhopyMd61LO+KHrT3Gd9azKVAz+tbFnMoI964axvwSOea3bS9GRzUONxpncWU4Dr6D3robWRSuQRiuEsLrewyRXTabOSMcGs3oWtTdc5BqOQ7h8tAIMfXmk5I/rW0JESiV3BBpIyM89ae/Qk1AWweCK6E7madmXIyM9cVcixxisyN898mr0L5HH5VLRpuaCVIPwqvCxIIJqb6UjKcRSee1SIwBqNsYOKaCQaGrnHONjRicZFXY5FHB9KxkkIPJq3FJyCT+Fc04GXNY2Y5BjipVcY68VlpN71MswzyawaNYTL+7J60hb8qqrLxmneZ7mhM6IzJWbgZP4UjNxnrUTPxxmkL8YqkzVSJSenPvSFzj0NQh/em7/enc1UiUn8qRicfyqNn6CmbwenWnctSKAPTFSbiOOxqDqeDUkeTXedZLuwPTFOjPy8jmoic8cD6UKWB5oEWFGTj1p/CkdKihkGeuKkd1bAyM9aQxW75qF+vBqYsAPpUEyjII6UhMgkDMcKelRsmIycc09uM7eBVO+vQIWUHDYxxSbIehQubgvL5aYJ/lWRrt6tlaEs3zHoB60+S5WC3lmBweTXF6xdNITcXUh4PyL2qW9TKT0Hy3spjZiTuP6Vzc08MAkluW3OScKamvdVkeQRL8oIrAvY1dnadiD2PrWUnrqZk41SISB2GKx9Tv8AzZdxOF7VT1CZdmF7VXmX7Sq7PugUcq3JuSvcKSpH3aglIiBbPJ9Kqu5AESjJBpkrMh+fqe1FhAZy5KgUE/u+BUWRjvzTgHSNj296bEQYJbnvSmNgM0+AgNlzwKLiYOcKcCgCDvRtPamjlvapVYZ5FAEZHSjOOtSSJjk8VD1OO9
ADkAJyTmnhC3OKjGFq1FINtDArsmGo206Zhu460zdnrQA9FPYUvRgD1pnnYXC0hbGCTzQA98s2OwpGHFNDkngcU8DjBoAWOIkbiODUqDjillcLEq/rSL/qyaQD0iLHP60jnGBnjNILn5dq8VCWyffvQA6VsHj6VGCc89aU53DPU0pOBx1pgTMu0AnFNVvmzSEtKADQo2MDnOaALABK5NNHPy0s0pChTj6iohLjvSAPLO8imhGJwM8UGQ7s8k1KW2x5BwTTAiV9jYpWYkA0zcGPrSnJwMHFFgLFtISNvelnBQ5PApYk8o7qbNKZm29KXURD5nzVcjkULx371CLUsm5WG3NJ5L8heQKNGBaW4XgMeKYzBxhartFtAOas2cqRklkBOOKEuwhqkxgnGD703fnk5qa4kMrDA/KprKwnuHwkZNaKHVgURGXORzQLeRm4Fdnp3heR8Gf5V71v2WhWNtjcvmOPaqukUoNnA2GjXNwRhD+VdJY+FmwGnYKPeu0trORgFt4Qi+o4rQt9DlmwZWOKLtlcqRytvpVrbgADefYVfgs3k4iiIHrXZWegxIclAc9a1oNMRMDaKWgORxFtoc0mN/Falt4fQY3LmuxjslGPlAqwlsB0FK/Ym5zMGjImML+lXYtLUHoK6FLX25qVLb1oEYSaeqgkj9KmSxGeBW2IAM5FL5IBGBzSEZAshxwPyp4s1GcitgQc4A5p3k4PSgZkC0XI6ipBaj0zWr5OBwKTyjnpRYVjMFqPSni2HoK0TFxnB5oSPPGMe9FgM42xA5Wj7N7VppESMn8qGQqBxxRYLGZ9nGOB1oW2xjitDy+c0KpzTsBn+Rg9DQbYHAI6VoPF6fWlEYwKLBYzTbDHTrSfZxjGK0RGR9KDHmiwGYbYelM+zAnpWqyYPPSlaMelFgsYxtwcjGBTTakHkcetaphzT/IxwB0osFjEa0yegqJrMf3elbxhC9qa0C4ziiwrGA1kOPlqNrJf7oroGtxjPWo2txjkUuULHOSacvYCq8mmqQcLXTmAY6cGo2t+vHNFibHHz6RE+dyCs+50BGyVGDXdyW4weKryWwIzijUTiecXOiyJ9zms24s9pImh49a9Qlsweo4qlcaeHGCgNHqS4voeU3WjW0+dvDHtWDf+GJcloxmvXbvQ4pM4XB9RWTcaNPHkwsfpTTa2YczR4vdabNA2HQjHtVF0ZTXsVxbN927tg49axbzw/YXIPlfu29MfSq5u6KVTuebFio5pm/mun1Lwvcw5MYDr7H6Vz81nJE2HRh+FHKnqjRSTIVc5q0hxg+tVvL21NEwwM8YqHEpMtyRqY92KgRMnmjzcnGcipFXIzgjHtU7FELxjfgU18/d9Kmd4wMgnfTQruVIXNAio5bJxSRxFiTirs0iBSpQbvWo4pQoIwDRcBsY2Ec1K5d14AxVV92SxzyakjlcDBHFKwDZCcBR19qVVdR8wp8bhW981NJFI8e4MMfWncCHeBUZUu/HSnxKrEiT8MVC29GOATQIl3hWwetSeePL2nkVVYEjcahYk8A0rDuWHcHIXvSlsrVcH3qVH28GiwDCf/wBVMZKc+S2QKcpH8XamIYvSnbqdJgKCOagyQaBj+pOaAoPIpAafuFAiNgRiinZyOaQDJ9TQAozgUMaUnt0pNuevWgBVIx7U4MDimY5xS9MUDHhiD1qxC/qaqdWPrUidRzQBr285TkH6Vq2t5Iu05rn4mOMLVj7SwAAGMdcU0wO/0u9GzJbB+tdVo2pgMuTkGvH7e+YsDk4rqtI1NQoG/kUpRuioux7BBdo7YzitCFlcHB6V53pWqFnG45ArttOkBhD7hj0zWXwGi94szJ6VQmUqSSa2ZNvljbzWdcpkY71vTncxnCxTinweTgVoW8w3DniseePac96La4IYA9a3eqIjKz1OrgfPSrac4xWNYT7sDtWzDzg9jWexrox+PYUx+BU2B25psiZHrVrUwqQuV1apVlIOaikGOv51EW9OlEoXR59SLRoxzEnkipklrLWQnFSrLgjmuWcDNSsayy8cHmpBL2rNWXsakWT3rBo2jMvmTjkmmtJz6VUEh9aaZKSN4zLZfnGcUm+qqyZPNL5naqRqplnzPzpN/Wqxcg88+lG+maKQ1epp4OCaaD3pTn6V6B6Y8YBGelKXHIpmegPWhid3bJoAlAzj0p+QeMVEucECkjDBuelIRZUjBBOapXk/lR1ZOOtZmsMqpljjFJCZVur9hGQoO7Fc7NemR/K3jOck1YuLwMr4HbGaw/JjhEk8zkKMk+9DaijBu5Hrt6kVuEzwOTXGT34u78B8eSgzxVrxBe+ajEHCPwM9q5We5W1tikRDO3U+lYbkNliS/S61QL0jU4ArJ1zUDJdkRj5E4ApiIdplDfN1qndyKrAcE9SfWpdrkFW5uN2cj8altBItu0zcIOnvVGd90megqR71jGIh9wVT2sIWKQIxlxk9hVaWQO+WOTSFzLJhQcVNcWvlxg96AIUYZBxmpHkZj83Sm2ZEZZ5OcVFLIWbd2pCFYbnOKYyEDJpY2457808tujx2pgQgD15oyQacSFBxUbdBQBIWLdaiTh6DnoKApGKAHhNzZJwKXocLTA3r6U9GHagAdQo68mmA5FSMhkHyn8ab5e3A/OgBhAyKc+CcY4ApGB61JAuASeTQA+BgFI21IgBbntRFHlck4GaDIiqQM0gGz/PyBwKUofLHX6UzJ2/WrDcBRntTAqgFiT6VNEF3DI5p4cZxildkB+UCgCKdVViRzUflkhWGaldw5wB1qaMbmWPgAUAOs7Xz2LE4VetSTwRgYU8imzSbMpF931pS67QvUmlqBnyYwOmaEwelTNbky7QRmr1tYKFLOR9TTvYCpawhnUscCtu70iGO1Ry/LDOPSqIgVblAzYTGatTSrcR7UY7R0FKTfQRiyqsTYABwauQRrIgfuBmqdxCyOec02O4ZOBnAoeuwE805U7AOtRxgtk5xUUkpZt2PpSxSNnnkU0gJPOKkDPy1MtzjIAHPFRbPNcYBrY0vQbm8YbYzt9SK0VPuG+xlsrOeh57Vpabol1dOCqHb713Gm+F7e0Aa5YM4/hrobOxdwFt4gq/TFUn2LVPucnp/heCEBrj5m9K6Oz08IoW1gA9yK6Wx0Mkgycmt+00pUUYXAqX5lXS2OTtdGllx5rH6Vt2WhRpglRn1rpoLEAD5eaux2mByBRclyMO301U6KMVfislAHH4VrJAM9KnSDp2FIkzI7TGPl/Op1teuRzWmkIHXrThEcikBnrbcc1PHCAPu9quGMZB9KcE4NAioIz/CKf5PfpU4U1JszwevWmwKgiwDnmpo0ABwKk24PTil2UAV2XnKgD0oKEqOanKkCnIvXkU2IgKYWjbn+lTlOlNK5JpAM25HAphj/A1OuQOOBQy5wKLWAhVc9qHXIx+NTBcGmlfamtR2K4TJ/pQFw2alCknkUEcCgBm3oaXYAc4/+tT+Dg07kigCApgjvSFO44FTuOOKbj5aaCxCQCvvQU9OlSYpBw1MLEIUdKcmD1qQAbs4696VVweBigdiF1556
96jK4PQ1aZQeQBnvTMA9RQ9AsQBMUeWe9WSOKawyMgc0bisVjGD2ppi9quBRg8fhSsmRz3pWCxnND3qAwDHvWmy89s1BIvPToaLCsZ7QDHAqCS39K1dmaYUyelKwrGLLbDnpVR7QHtXQvBkVWeDtgUWFY5m409G+8tYt7oMUvO3B9a7prfIqtLb/KBj8aVmtiXFM8zuNJurbiI709DWPd2ltMSt1b4PrivWJ7Pd1H4Vk3mkJIpDIM/Slp1I5WtjxzUvC6uu60bcPSuYvdNntTiRG/KvbLrQWQkxZU+1Yt9ZkDbcwhl9cVab9RqbW54/kxtg9qsrfMq4IHpXaX/hmC4DPakBv7prldR0i4tWO+JsD2otGRqplGTym+YZ3UqXAVNoJDVXcMvUYNM6npzUOBdyYRSTyYXkmpms5oWBkX5T3pltIyMCCRVm4vHdQpYkd6nW49CKYDygByahVSODV2PY0Y3HBqHPmSADAA4BoTANoCsSMDFQyTADAHtU8u7oG4NVZYGCjHNCERqST157U4uysM0oj7dKa55Cn86AJA/mDGBSG1KnJ6GpEi+XcMU52Y4254pDIhZv97bxUTpjK8Zq1vmA2hjg9s0xrKXy2kyPpmn6iKh3cjNAHqOtIGIPekLZwDQBNEgc4Y0jxbGIzkUsKhj15qN9ysQ3agY5gAlMUZNICSDxxSxvg+9AD3j+XIqFW2nFTO5PtULKepOOaBE0bKeGpSnXbzUA61KhI4FAxrgg9KOanZjgZA4quwOScYoESBMcg0v3SKjDcYNKMk+1AywhOcjipVm2ocgEmoFYYA70Ow3AUCJY5Pm61o2MzeYOcVkqOOas2p+cc1SYHfaVdlGXdj8a7TT9RfA8tsg9RXmOnXKjBcnAre0/VBHIRGeO1TJcxadj1Wwvy4CSHg1cxvyc8elcjpeoiZAv8eOtdLpRkYAtgjNZp8pe4XMYYYA6VnSwlWyBXSPAoGetZ91ECSAOldEJ3MJxsVbG4ZWUd/5V0llNuUVyoQxvuArUsbjGMGnIISOoQgjP60/Gc5/OqVvLuTBPNWlOQM0QkWxkiDBx1qlIhWtVBkc4JqvPH1xW0Wc1WFzNPBGaVXO4U+VCDxUByDRKFzgnGxaWXJx1qRZOnNUd/wClOD8e1csqZCdjQEnv9KUv61QWQ5wPyp5k/wAmsXA1jItiXHFLuqkr+pp6ydOaLG0ZFrd0yaN4qtv9+fegvnvRY0UjSUDPPSncZ4pinP5UAH8K7j3BW65xmkyep60EEinBcjANADo85B7d6V3+YAjGKapwTzjHFQyPzjt61IhZLjbkHgVn38sbx4kIINVb+5ZXIBFZl3KVQySOFA6ZpSsjJsztVuY4BIw6CuV1OWSdY98mxC2duetW/EV8rRBItpGcsa5bXbtjJDHGTuCj8Kyk2zFlXX5PtF1FaRD0yao61bwWNosatumIy1RKsk138smHB+8TVHWWYAgnew75zUbNK5FyjJOzQ4BwtU2dSQoOTnmmszlMH1qLYRkk4qrE3JpkTI5471EAgOeq0juGU81CWO3AFAE1o6ifc33RU15deaMc4qkq4HJpG4/GjcBQ55FIwJPFAG0k0rOAPc0AMyUOO9KJOmaYRwCc80DGMgfSgQrEknvR2FJn5QfzpV5UkUAKeg9aUAtj0qNietOjlOMd6AHFMtxSJGWfCmg7gDxVmwdFcMw+tAEbBosD1prOMD1qa6cSzZUcVAwABoGPjUbcn9amUBUzUUbbyM8Cp5U3uqpznigRFuLE4zxUbx4Gea0GhECEEfNjNQMVcqvagCJGAQcc1MFDR7j1PFJOQABtAApJJFKYHGKQEewKDzmlg24IahAGQ881GEI5zx60wLYUyfLEvI7gUxUeJsyA5q7pdwkMeMZPUmkvLhLhnZVCmlfWwFSBWlkzjCjrUlzGUAI7jtTrcs37peM9TTZXwwUnPFPqA21jZyW5+tXTIqwlZGJPYiqskhVAqqRUIilnb5fypPUCSSbnk5pYCdpKnBqCaAwthup61IJcAIqjFP0ESSkGPGMk96gCIfT3qy8LGMEAgfzpbHTbi7lCxIWJ9BVxhcCqLfPI79BWppWg3V84EcbbfUiux0bwqkKrLft/wGuqsrVyoisYNqj+LFWnbYtU+rOc0rwrbWYV7ohn9K6mzs3lwlpD5aHvitvTfD3zbpyWb3rqbPS1TAC4HtSfmXdLY5mw0AZ3SfOx9a6K00tEAwuK2oLQLjAq7FbcDjAobZLZmQ2K8YA4q7Fa4GAPzq8kIAHFTpEO9IkpLDjnFSrEM9M1bEYB9KURjjFJiI0iGRkCn+WKlRSBg04oTyOuKBMhI54FOC5I4pwGCc88Up4ziiwr2G7Rt4NLtwcVKo+XB/KlIA4xxUhcgKHrzQmOanK8VGyBTnPHencBrD5qaBg9eKZ9ohMhQPk/WpsfKMUxBt3AdqRU6mpFI4OaXI7UuoyJhUZGD71K3DdKaykjIqmAwdSD0p+OOKaRkZGOOtOXp1osA0A9O9GeMGn/AMXNNbkgigZH2NNBHIqXA6A0wjB9qdrDsJt6c8U5R8vvTkXOacRjpzQgI26YHNJgDtxTj3ozjrTeobDdmDnOKYVwfTNTE5HTmk60tSiMKCoxS44HNOUdqD931ppCIyDimbSM55qU/dqMigGJjjmgcCnDkCjpxTsISgk8c4pp56ikz19aqwmI3Uc1GeakOev4Uw9e5+tKwxgHak2+1TdQcDFIR6Ac0rARlO3rUTxjcatEZP4daaQuPehhYoPFzxULRdBWmyDpTWi+vFSKxkvDySR9arSQde9bM0XAqsYuOR+VKxNjEltQVII5rNu9NWRSGTNdS0XFVngzn+tIlpM891HQActBlG9q56/s5ovku4fMT+9jNesz2oOeKy7zTldSCoI+lHqQ4djxXU/D1vcgvanDelcpfaXNZuQ8Z+uK9s1Tw8Mlocox9OK5y+smVTHeRBl6BsU9fUFNrc8qZtoxRGUOS1dZq3htXVpbQj1xXMXFlLbyFZEK/hRZS2NVO4yVgV+XNRCQADrzV2BFxtk4zwKhazfcxjG4ZzUWsWMhR5ThTn2qSSGWI4bNRo0kMvHBBqzNdtJGd5B4pO6ApNnOeaSUqwGBzVqKBnRmBBx2qrM+RsAwc8mgCzAR5XFR/wAX+FRxblI+YYrXtkhWHdJtI/Wlew9zN3AMF5yKhmmkCkZYD09auzCIy7oiDVa9bd94L+FMRWhQOeankijVMnB9KrLwwwcZq/FFGytuPTpikxlBTtOQacxBYknJqxLbKBlTmodgFMQhQbOOtQopBzVjoD/KmoR+H8qBipEZB+tDwnHXOO1OJMbZB4qMykuTnrSERD0p4yRxS5Xj3p3ABPQdqYwdyF60zzPlI4+tMk5XINMB6Ciwh4IPFPDYB9aaqgjOOaXac9RQA6LJPSpGGMEiokO1sGpd27g8igBU3dQOKlUlO1NBwn3T9aYzswHNAF+1uMMA5OK1dPu1SXnpXNgkHaetXLWQ5GTxTQ7npej30ZOQ2CK7fSdQ2oAhPPUntXlWjeWQHMwBHbNd
ro98g/d7h061nNJlxbPSIJ45bZTuyaSWLgkDmuf0u58sbWOR2ro4JDMhJ+UelKDsOSujLuIxycc4qpG5RsE1qTiNdwzlqzJFBLEcYrpT0MHoblhPkLzWvE+celclYzFGwTj0zW/aTgkcis3oy07mrEfm4FTum4HjtVSJuRVxTn6VpGYmZ88eCaoSLg5rcmjz07VnzxYzxXRF3OSrAzW4NM3YNTyofpVZwRnH4mqcLnHKNh+/v3pwYZ5xmoM8UoJ696xlSEThvm+tO3cmqykhvapQ3fNYuBaZNvOSTRv/ABqEtgCjfnFLlNFI3I+CR608tgD0pMYIoxn2Fbn0g4kbaFYAZPSmkcZ7U0PkhcdqTEDncc5wKpXsywQu5OMD9atSsMHtj1rmtUuDJKYx90etLciTsRQSPIGdsnPIFcn4n1GQSNASQK37u+jsLN5HOSBwK4iSQ3Mkl1d4KtyKzb1cjGT6GVqMk32eOKLJldsj6VQhtLmS9Ac5bGGz6VpWF15t/NLIFxGPkBqIX/lSSO2zdITjiolJozsjn9WuBZSukZyzZGfSsO4uw5A/n3qzrkvn3RIGOayQheXAznPFEV1Zm2XJZFMWcAfhUccDS5Z/lQdzTJQVID8ACo/OZxtGSoo9AGzICcRDI9aMCNckc+9SPIq44OaqsS3XikAkjZIAFHAGOppo5z6UsPDAnGBTEK/3feoS3t+dSv8AMS2cYqPpnrQA6M+YSD0FLwFz2pigAZ701s8CgCSMbzzTipx8o49aWCPIIzk+1SM5SPaRgnrQMquDnH501eHBParOwFOeD1qArxmgRKWLj1pAxXsc09BsT5sZpQm8EgGgB8UmxSMAseKHjyO3HJpmAjLzSzMc8Hg+lAyNSVbj86uRTFFDdxVdkXC7c5PXNWYkDr8/HFADHuGl3Zp9kmSzEZI5qeyhRi/uKRpUhBVOfei/RCK8qtLMOB70jqoyF5PanGQs2eQKYjKJCe/oaAIwCjH+VAYkAEdKtR2xmyxOD6UpjRY1UDLZ5NAFUsy84OKVJMZyetWr1N6BY1JA74qkUKNz+dCAuLvyG9fSpfLjlbAzkdKhQu0ee3algDeZknGOaQhZ3EbFetPs5zGzEYBPemXCo/ziolViOnWqUeYCW5zKwYNkmnW9o7SJsG4ntV7R9JuL11Eanb/eNegaNoVvYqrSKHlrRRURxi5GFo3hqa7CyXfyRj1rr7CwgtF8qxhDt/exWxp+kzXZXcpSL0AxXX6ZosUAUKn4kU/U2UVE5vTfD8k7B7rJ/wBmutsdKSIAKoA9hWvbWSr2+laUVqAOmPwovclsyo7cRn5Y8mr1vaMcFyfwrSit1B4A5qxHAMVJJUSHHQfjVlYMirCxgDAFSx8elJiuVkhAHOCMU7YAcgVaUfNjikaPIBHWjYVyvsDH+VJswamxhqcFz9aTYrkGORSgEmpWXYKaOmR1pITZFKmTx170gX1qVuM+tN70xXGA5Ge1OJyPUCnYFIQeRxQmAo+YVDcRNIpUHGe9TIPTin/lQBj2mlJBL5jMWY+taBXYRipiBnFIRmi7e4ETjutKD9M0/wBqaAMniqQDG55OMU1SehqQYxSFM0dRkQyp5FOCgHNKw4yOlOZc89TR1GR8mlz044pwAHXj2pxA2imAwAdT0pCuG9qeQSB6UDleO1IYijbyfrQ2OSe9OHHFNY8AkfhTQEZGQfXrTQDmpMEN2+tMIxzVDDikxgUoOKTPPNUA0YDc0ueBSkAHpQPzoWwWEFBTqT3pw+8aMcYPNIZGAaTbTsc0rYByM0kBGwBppUetPPFNOe4ouA3GTj8aay5OcVIehHSjt+FNhYjP3cDFKBtABzTyvGPWnIo70XHYjIGaTb+FTAZJpjD0oFYjxg88+tI2AAOOafjpgDmlwM9KdhWInUcDvUJi44/GrJ69eKbjHpj0qbBYqNFzULwg8dK0QMrheM1HKig8ikS0Zbw9qqyQjnI5rXaMc8VBJEPSlYVjCuLMEcDrWPfaXHKhBQH6iuteIHjFVZYOpA496WxLjc8y1HQZIHZoM467a53UdOiuQ0dxEEkHQ4r2Ce1VwcjIPrWDqejRzKcrj3Ao33M3FrY8R1XRZ7Ukou6POQQM+tYxmkgJAzketeuanpcttv8Al8yLngjOK5TVNEgulZ4Rsk5yPz9qfqVGocSV887icGoZ4zGcHn6VfvbGa1lwynA9qpsTn56lxNU7iQucY3EeuDULoSxI71YkMOwbMhj15qJWJGD+dQMQowIwCR3qzBD56kBirfWlUErxj3qu0jwyddoNLcAkgeJzuJqFYzKxwwB96tSzeapGOfWolXy1+YcnvTAYItqBn5z0qRSqjC9asu1vIQCGUYqk6YJ2NgUtwHxsQcSECmS4HeomLM3PFHzAc5/GmA9fmYVNIi+X8mNxqBC3b8qA7A9ce1KwxGRxw2eKaUHXvUrSbuD1okgaOPeelAiueuKVmOAPWmFjn1qRAGxupgR9uelSIoIxjinFRnpx7UPhcbaBgF25OelMByeaTnPSnYIoENqRCFPPWomBPFKVOaALqSDaVOMVX3DPApijkZp4jGM/lQAu7PTtU8OV57VXCY5BpyykDbxmgDbtLoRAVu6dqb7ly2K4uJz1GfzrZ06cAqrHAHrVDR6lpGoSOU2tnHOD6V31nexzwDkBsc4rxvTroxSKUbr3zXd+Hr3dKFYggj1rCUbamsX0OqMAkUsG5qp9mIc56VNbmTcdn3c4qd2JXB9K1jLoZyiZcy7GyBV6zuOmTyPSq065YgkfQ1VR2jlHpnpWjVzNHX2k27HOT2rThOQMEVzNhcdOePWt61kBGQc1jezKL5OQarTJyeKnXPHpSsnHNb05Gc1cx5ojg4qm6cmtmdPUY9qz5ose1dsHc5ZRM5lx1poHHFWJFqLArRwuYuIwcHOKcTTee1BrKVMmwuc0Bs+1Jnjikbr1rNwGdKDkc8EU4EYOar52nrkVJvyPSsrn04rEgZ7VU8xmlznAqdmJG3pUE2I1JXrRcTKl/dAkoueOprC1J0jjLMSOO1Wrycbi7EDmuR1S9lnmkQOAg6A96iXkZSkZ2pXLX1xyT5a8AHvXOarcFnNvASW7Cr2sXYtLYmM5lbjFc5b3Lxb5GGZm6VjdswbLNxiyswkrZmc8gelYVzeHzTk89quXc/Pm3BDP2FYNyWlkLqO/QULXclsluHTYA2S55NVrZxHMGI+lQPuHLHJJwKmMQWEOxwT0xT2J3JL6VXztx0rPBcfd60kpxxnNKuRii1gGkkHJP50wMQeeRUkiHHXmoCCCSfyFAh2/t+tOzhQTTApYc08Y3AUABB2fKOtQnk1bY/LtXrURRe45oAYvABPJHpTgC/zKc06OLecZwBT+Nm0daBjYWKAnvTXO9qU8cCkX3oEORS5CrktS7SjENww9alicRhm4yelJkHLN1NAyMlnGcfpUiuAnApY1VlJ6Y61G4Gfl6UAI6lyMkcmppYjkBfXFRRsC6g+tSTEq/HTNADGUq2CeasW0TO2AxHue1IuGcBRnjrWoWiitAuPnPehuwJF
E/uA2W5PFU5G3OfappgXOc1XKknAGaEhE0YXC5bn61YgjQNlxkVWSJ1wSDinyy7BtFAFiW4IJ2DGeOKbbMcjPTuarpvkyQDj1q6XxAFCjNGwFhJfPPloASe+Ky7jKOwOD26VYjuPKB2nDHimvEsiFy3PakBFETnr8uO9P3ggY5NQHOeOlXtOsZryRVjUnn0rRRvqIgijaRgqgk5rr9A8NSTkTXXyp1wc1r6D4ditFWS4XMnXBrtdL0eW7K71KxDooq/Q0jDqzN03TwuIbOLA7tiut0jw8NytKCW681uaToyRKoCYA9RXRWtkFwMc0Ghn2enhNvy4xWvBa46gCrcNtgHIq3HFgYxSJZBHAB0FWUi9c1MseADipUQZ4oJZGsXA9KeEAPIzUyL0z1pzLUktjGUDOOajxg8dDUpHHBqJuDg00Sxx4P0pd2M80wHGCOlOIyBjvzQSKw3Dgc03aQcUq4z7U85HNQ0FxvDA5PNRMhHuKmcdx1puc8YzRsgTIsA9RTQMdRU7AdqY6+1FwGKOOKULnrSgcZpwPpQA3ZjntRj8akbGcdqb1zihAMII6ik6CpCM/jQADnmquBFj5qQ5z9acynHtTR074pDEIGOBRtpRTiuRz0oQyIDJ6UFSO1SAClx+VMCEjvR+FPIAGDSEdu1UgE600rhifypxHHHWgdMHqKYxmex60u4Z6dutDLzntQ4wOOlPcAYegqPB/On4GAcn6Ug6Akd6BoiHB5FB4/GnOpJ6YppzimgAdKRjzxR0Xikp2AVSQM96d1HPWmc5NOQYNFrjAgd6M5OKCPrTtpoGRnHemnoakK84PQ0mM9KQDQvAzSAcEU8D3pSuT70DsNWlHUGjGOMc08ZCnHemkA3gnimEZ6U9RnPakHNJqwxhXAxSAHPSpMZz6UBaBWI9o5NIU460/vRwRTERgYHFRScHPX61OenFQtjHI70MViFgPzqN1yDnpUxpm3IxUtCsQlBgdKhki5PpnirrDp6Uwp8vtSaJsZkkPOcVUltwQeK2XQYz1qB4e4GR1pNCOavbBZEOVBz7VyWseHjuaSAYbngcDNekyxd6o3FsrdR+NJaEygmeMalpwdWjuogD0ziuO1jQniLPHll/P+le86ro8cysCnHsK4rVNHkg3YXcnuKa8jPWG541LE6nB60ioRyTXbatoaybnhXDdxXLXNpJCxUg5HtQ0maxlcgRuPTFRSN5ko4qTCn5W4zUR2qeOfes7WLLkBSNQdvHfPNQSkPnjp2pPMBXC0x1kHIBwalAOeMlQQetRuOR7VMhyRkUSYC/dwc0wIwh8vOO9IzjGG5xU6uFU549KgdEKkjrSAhGN2CTj2obAbg5pwBHXHSlSJ2bpk0wECgkEnmnylmXb2qN1KHHfNKruhPFAETRkDgdKcuSvT2qYHcvIGKEAycEdaAIQDnBqQDGQRSTAhj0/CmuWHI70DAAE4px4XJxjpTVDAf0pWz359qACPAzjFJtYEHqKZtPPYUbm6E0CHZ59vSnoQwOTUTZ6ikVuTQMlYkdM4poGCCach7cUrEbh7UCH/dxjrU8Ltu461TZ8sfbinxyYPWhAdLpt228K7cCuy0W8MTgh/mx615taz4bIzXTaXeElfmwKbVykz1vQNVZWPmHg+prpgodMnqa8x0e8wyHOK9A0e8EybJThh0rBvldzVaqxPNGuCBj3rKu0PRcnHoK2JVzJx0qndx966IswmitYz4IXdXRafcAgAnkda5I5imH90mtqxmIwR26jNTUXUlM62J90eKsKfWsqzmBA5HSr6NkCs4SsKQ+RC3FVJo+vHH0q+Dmo5EyOK7qUzJoxpo++OKqsnNa80fJzVSSLArvg7kOJmlCDSEEDpVl0x2phT1qmjJxKxHU44o7ZqYrg5z+FMK+1ZuJm0a24kjI/GjzOgpmQM+9I3XPpXnn0orykP7VUupw7EZpbidQvNYWsah5aBY8B36GlchuxnazfIboW8Z6dTXJeJroQzIUHbBxWrqfl2cJurhvmPT3Ned6/eXF1IZBkJWT956GE3YW4k81zJkmqktykUJbPznIFWoI86SXc4ftWKys/LdM1Oj0M2QQsZ5iXJIzSyuoD469qgJMbPtPuKhdyEO48GmyQZQSMnrSTuZDgZwKiBOcn8KfGw35OM0ARtHgYPNIn3gM0sjF2wDT1AXnPSgCJ2O44HA71Fgs9W4yNhAAOagY7WIweKEA4jahApsQy3HU05XGwr/EaZuMZJzzQAu3bk5zUbtl+OKcXyoFJt56c0AAcquB+NK52r71KigQgtjNRuAWIoAkgxs3NyDTHGOneljXMZJP4UojJYAmgBQAIwT1PQUqJu5NMkXLYU5AqUH7qL+NAE3lKsRO76CqwBJOBxjvT+Q2CcilcnGR0oArxnbJ8wqby2kG4dKikUhqt2kihTG4yM5oAWLEUR4+Y8fSpkR5NoAOO5qCNx52X6elW5btdp8sYyOKAIpEA4Xn1pitGhzjLe9PX5Eyx5YZxQsCMm5jy3QUAKkhuZdgAG7inXNpFHJgPu4pGha3BZOGxnNRSE43ZOcUegFhAsMJAwSajIG0cn3qFGcgkg4pzSgjpgUWERybSxwMAU0M2AAcgUpXexCium8NeG5r11kmUpEO571pGPViWuiKWhaLNqEq4UhO5xXoelaTFZIsUKBpT3q9p1gq4tbKMYBwWFdvoGgLEFZxk+tV6m0YWM7Q9BZysk/J4/Cu507TkjUALirdjYhQMDA+lbEFvjGBijcor29oF7VfjgxgGp4ouBxVhIvxNBLZGkYz0/SpQnP1qZU/KnmPI5pCZXVNp56VIoz1p+BmlK8+lJsliBe/eg84pQSMd6UDPTgUEXGjBHPWmmPPWptuQSOKaR0zUXFch2bRjqKQL3WpSeoph74OB6UCbIySG4pQ4/E08DjHeonXnmncVhw5yB0puCM4pV+7mnE4FDAapGOfwpxAbrQV6YpyjHHakMiYY980wgZ4Jqf2qE8ZA9aaAXkcmjOad2pSAcYouFhmd1JjBpxGMelIRnmmAnt2pmOakx1xTCPXrQMQDBPpTu3JpgOKkxkZotcY3rnA5oGSeaTdg4pQ3NVYQ08mk9fWnMMUwHJ9KBoPT1pD9KWg4xmgYhw3FB4NIBS9eO9MZGvDfWjOOD0p+Oc96GXcBRcYEZGKhZOBU5PGaY6+lF7DREVz1pCncVMvIwevtTdpJHpVXCxDt9cinKARnmpNgNMwRnFCY7CqOtLkEdcYpQ2evajHNMAVcjrTW4py8HAoPXkA0JDGKOTnFAJpGyelA9sUAKw44pOce3SnEcDAyaQ5zTQWGH5c5pRnginZBx60vFK47AVB/rRgY5pccUY55osBE64P0pCCBUj9Mn8qZ169aNhWuRnnikI4xUhAB96GAIxSCxUZOcUiipyAM+tNxjoOtJMGiIrz9KMEkdakK4x3pSvQmhk2K5T8aYy9ABVph6cCoyvPvSJaKMsXPH0qBo859K0XTJqJk54FKwrGRNACp71kX+nLIrZHX2rp5IsD1NVpYc5yKVhWvoeYavoxiYvGvy1yGra
Ok6uUUB69rvbRZAVx19q5LWdDY5khGD6etPcxlBx1R4RqlhLA5DLj3xWSysODmvWtU0uO5DRTLskHQkVwGsaTLYSHcpK56099OpUJ3MUIf4etXIWkHG3I+lVxtD89qDcEE4PaspI1THxuVkLHAI6UskokJ3Y98VBuDHnpR5ZUbs8UrAPcqGAXmnNGWAOQMVAXDNUkZZmOc4oAbj97zzUsrEMCgIpqxl3OOgqzA6Iu2UZ9DSApuJN24jrQ2ecitCWQFNq49uKqptJIencCoH5xihBkkE4FTShQDgD2quv3s+lAEnltkjOSKbnA+cUrHCk/nSK27/wCvQA+BlHQcmknO0ntSjCLuWmSSecwyBQBFkkd6cCCcnt0p6oBgHjPemyL83SgY1gD3GKFUnkClxngU4ZTB70CJYoyw+brUB4PXpUpkLLx+OKYwIoQEXc81Io4BxSIhLcVY2iNVJ6+lADolfnA4rSsrhkcAnGPeqEbnIINSFz5mTTTA7fSLw/Lk++a7nSb1jhi3I9DXlGmXRUjdkD612mi3v3STxUTiXFnp1nfCRgCMnFWJPmDHGc1zumXKkq3YmuoAVrfevQ1nGXK7FNXVzEuI8k8HiktJtkgGfapr0MCx7Vl7ijj0roaujnOrsLjoM1t2824cHmuMs7jGDk10VhPkAZrmkrA9Ub0TAgd6l6iqcLZq2nNbU5kEUkeRVaWHHpWiVFRvHXo0pjSuZEsXPSq5THatWWLHIqs0fNdaZMolBk46UwqQcYq40Z9ue1RMnPTmnYxlAcD7UxslCSalIyRmq92+IsAc9K8ls90xr6Vg2T9wd65Sd3v9TVx/qouOK3temMFmwzy3SvP9U1WTT7f93neeTUu72MJu24/xbOJrhFZ/lTgCsHUnhitkRSC5+ZqqxyXGoNNcS5wDms6VlKl5HJOelZNW0uZ3vqNuNQkZDHtAQdMVnm8wePypl7IRkrwDVJAXPc45NNJENlidyBuJ5aq0jHPNOZi7YPQUyRwc4/OgQ2Q7gAKAjDvRngAdaeZCA30oAgZtrZ6mnq+Y/XNQu2T2+tNVgGBNAicOyjjg0m4596crgryMmm5PagYK3zHNKF3Hr1pkalnJ7Chjg8HvQApXbyBScuaex+QZpig47gCgCRNzfeIwKiJOSRzmpApZTjpTkwqYNAECu2fap1y7fLURA3VPD8pG3uKACJeSMcChyAwK/nU4wq5z1qPaNhLde1AERJzkHr1qcD5MfxVHaqHmAY4FXZAkQLDpjihgiuI8quepp08SxhdvpyaYZN2MZpXBYAmgBqrmRQxwKt3CRKo8rqeKjaIJGuep5pMhJFJOQOTRuACOV3CkVM6CNcfeYU+Kblifv0yRkUhmb5qAsKzN5W0/efrUXy9DzikaUEE55qN1YKG9aBEjy/LtX7tMWMu2ADkninW8bSsAoLMeAK7vwz4cESi4uwPUCtFFLVgk2VPDHhrewuLsEJ1wa7/T7B7orDbrshHUgVPpWmyXsiKEKwjpXoWiaQkESgLVebNoxKmg6HHbIo2c/qa6yzslUcDjtUtrbBMACtKKLJAFFu5Y23gAxV6KLoMYFLHBwOM1bij9qCWRrH6VKiY6/jTyoHsaUYPHamRcbt7CgZyCKsRqDxxTJo9rZHSovqSxgAJ9KULkYoQgn3qQqV5BpMTZAyjI703BU4FTSDpxg0wDg0iRq9eaVsfiKMcAg80E4PPGe9BLQxvaowQCQanI4JxULLg0BYF605kyAQM4pq8nB61MnGQRUsHoVynUgc07HBB6ipsHdTXGDwaPIZGeBn9KQjj2pXPB/pUZY+4HpVCHN17gVG3IOKkAOPakwBkAc0Ahh9CcUq9CCaAOxHWlYY5HXNCAaceuaOo460q9eaGBGcUxjeOmaTqOtKRkdaTkCgBjDAzTgflxSn2H4U2mgEYelIOtKe56UgGef0pjAHdmmlTnPanYGelLgk9KBjTnrRjrTsde1Jj3pjsMwMDrTu/NJSDntTGJ9cUr5HI9aDzxRj9KAAkUvYegqMZ79KfkYBNFhjcc8UvBHFBOc5oXHQUWuMUcGmPy3v0p/J4pp9OelO1gTGhefrSn9aO1KP1oGNAwRR6g0HuAeKaT2qkAwgfhR0U44p3agc0kMFPqaUjnIo6daOADigBv8We1NHt3oY+gNKAMD1p3Acp9e9OPXHr3pjcHOKdklR60BsDEY5qM9RxTweucUwvjOKTQCHrTTjr+lKefpTc460CDgmkPFIWweBkUu4Y56UrANPHvS4yOtHvS/WiwEbHB4ppHJ5p5+lN75JosKw0rnOOvpTNoqRl+uaMcdKViStIvJNQumRV1lz+NRsh7YpCaM6SP2qhc2wYZxW0yEgiq8kfbGRSsI4bXNDS5jJUbZOxFcFqdkObe9j47MRXtNxbhl/pXN63o0d5EwK/MOho30ZlOn1R8+6/oUlozSRZMZ7j8Kw4FUON44zzXrWoWL2cjQ3S7oz3xXF+INBa3PnQDKH0/Ch9mKE+jMjyYWVtowe1ULtWUbRT/ADjHkEHB/SoTKWPXPpWbi0bXuQxo2Ru4z0q+kagYk4HrVUP83sKfJvIHPymkwJ1eNW4Hy0jMkjHHA7VHHjacjJqaUReSuF2sOvvSAjIBx6dqSTCfMDTFlCsCO1STXAlUjAoAqOxY5PNQknPFSMcHHYU0kk5IyKYCDkYJODUsQGfm6VGOWA7VIACnoc0DGyfNnHSmgbTTlBBqxGgIG6i9hEBJZQKeFORnt2pkqGNjjpSbs4yaAFdgCMcetIQM8U4oOMflUZbHXpQAoOBxTlIZsntTVOelGCOgoGTOfk44NMPzDLUCQ4x1zU0Me8gA8mjYQ1cjGOaexwwJNTtbvGeegqJyGYZ4oQFi1kJIBrq9FmKsoY5rkYFHmjJwK3tNlEcqgninuho9K0uUYXnk13Gnyq9uFJzxXl+k3Y454rt9JvU+UHpiueaNovoat5GCp9KwbtQGY1vzOsi5XpWNeJyeOoreErowmrEFpMVPoK6LT7nlckYNcdvKSAH8s1uadPkDntioqIz8zt7aQMAxNX4nz1rA0+bgDOfati3c8ZNYxlZkM0FOVp5UY4zmooiKsx4bNdtOZcCpLHkHHSoGi/OtJ0zg1A8eBXfTnc05bma8eByPaozFnrWi8fPp7VH5RJ5FdCZlKJkSPgZHX+VUbmUDOT71NK/yk1harMwUgdTXk7npt2MLxBc/aZtqnCR9TXF36C73luEHGT3rotauDaWh3D55D1rh9S1LGyBBx3rFy10MJPuV768W3t2ghIBY1ivFsYM7ZXGcVNqePNDD7xGcVRuWd1BPQDGKSM2xsp8zLfwjiqrNsXCjGaaZSqkGnAGYZxwKqxJET1C9fWliUYOeppsgaMHA4NQkkDJ4NAFiNDJIVXrUUwIJA60RStGDtGM8ZpRmQjA+tAFdlI70RrubJqaXGcAdutRLmgAVecZ4pw4IUHinIoGDTuACetAAX2oVGADUQ+Y5FOOWJzTkwqkY5NACbs4HSpHPyADgetRICSS3SnyuCMCgBYuUI79aVVzktTUO1
SB3p5VsEnpQAoC46U9CBknsKhjXOe1WIYlHzufwoAf5B8rzG4HvVV2yOOlWnmaXCgYA6CqsisjZYYoAdGrBgTVt2LgKvSqyvuAHQ1aDqke1RljQBDHG5bPYd6JHZjgCrDtt2oBwRzU0MUZkO77oGaYWKf7wx/OOe1PAAVVYck8mrspjIAwAo6Gq8kZlJkBwo6UrjsV2kVVIHWhAZevAqOZfnAyOavlUjtiRy2MUySpJEu3IbJ9KfEjzsqICT0xTYYnmfYmSSa9B8MaBFZxi5uxuc8gEVpFW1YJXegeGPDyWsS3N2Pm7Ka7vRtLkvpFZ1IjHQU7Q9JlvpFlmGIh0WvRtK01YkUBeKfXU3hAi0jSxCihUArpLaALhMfSnWlv8oHvWpDBwOKdiyOCDjGKuJFjkVNHHhSMYqVVxgdqdhDYxtGKkUgEcUpUHpxSYxSsZtkpUEZHWmFPm600OU6GnBgcenvTtoRckjzjg8fSldty4brSRuVbGeKeygjNQvMTsVxwePyqQZP8AhSlDzjinIcHB61LENPK46n3qLP51Oy5B9ag570rEgqnsKc6DoOfehCVGOxqUjcCFFILkG30NQsCCcirDAg0x+e1MRCeCCDUi4znNGPlJ/OmjqKSAcenHSkIwhOM0KcH2pGYDNNITGHnNRPktz0qXnmgjPtQURKSfoKc33s4zQoIzxxTsdSPpQFhpHcUA+nT1pwH40jDr7UIBhxk5zQOBz+lKeQSRzRxt9aYDe3rimkHNOA+XnOaD056dqYxOCBjrUTdakIKmmtn8KYDT05HNC8UpwTzzQeBzSGLgH60Ac0q8gHFAAHWmhicZzxQAMU7HcUZwKY7jCvfrim9sVIw44PFJgDHFAyFhg/WlA/8Ar045FNHWmA0rhjSHAHapDyeOlMxnrQAmefWlI9OtCjn1pGGfrTAWmnjn8Kcppr4Ix3pjQd6QdSaYM5zmnAjPtQMUnIppyRxUhA9OKZxnigaGkY47UdCOaCOxoC9c/hQANyM+lITilIAxmkPOeadxCcHBNI3XNKB04pxAI78Uhjd2UpVYBT0NAGAcdfSmk4JA+lNBYG6+55pp4PFKwx3oJ6Y9elFwGFsEg0h+tK5GRmkIBIx/+qlcAIz3oOSM0DpTjx0ouA3HQ0oHrRnt3FKKQDSM8Gkx34p/UHtSL6Gl1ERkenSggg1KQCfalIwBRcLEDJz0/GgrjJIqcrmmMvBFK4miqyc9MVFImRxVorg5pjDIoJaM94wKqTQBicitWReeBUEiUWEcjrGkRXkbLIozivONSsZNNmMVwm6BuhNe0zwgnBGMc1gaxpcd3EY5Fz6UJ9GZzhfVHz94j0LywZ7UZT0/KuUEZ3YI5r2LVdPl0ycxyDdCTxXF+IdEA/0i1GV9PyoatoyITtozlPJCkMatAEx5UDiqku/OOmKdC+Fwf51lKNjYVJPmJxTZiXXjoPSmOcM2BgVJCE2jccGkA2KFXxuOKRrUhuD0qcqudoaq7ZOQG4FADMLu2kZNXEt43hILYfqKgtygY7hmlnkBdQhxSYEZjVWO7tTHAByDRyzYJ4zUzKhgwD8+f0pgV9wLAkdKsR4Zc5waj2DHJx61Ecr0/CluBPne2CB71FcIqj5PxpEBLZqSZWxnj6UAVQ7KBT0IOcjrTjCzLkHp2oXG3OKYAABgj8qe+NnHJqI54pYz60DF8plGW6VNb8HOaYz/ACADoBUe89AcUAXjcMwIc5zUDcc1DuJNPDBhg8GjYC1GQFznpVu1my6gn6VlZYfSrNqSWApoDutIlDAZbkV2ujEO20NzXmOnT+Xtwea63Sr5o5EbJFZzTLi0ehicxBY2NR3rEqMDrVC1uVuVVs/NmtVgGgHpjrURdhyV0c9drhs4NWNOmw4HNLerubgcAVRjcJL6HOcVvLVHOdppk+cEnFdJaycLg8+tcRptwCFP511NhLkAA1yS0ZDN6Ft1aEJGBz7VkW7Z68VpWzdj17VpCQQepbUZHTimtHmpI+g9aeUwCK7aczqjqigyf/rprRke1XSlNKe1dsZilE4G6kIXqPzrn9QnCrvboK1L+QKnPWuR1662xlc9R0rzW9Dokzl/GGpGQLsP3elcTcs5Ifv61sa6XILnkE1gm6LBs8gDpUJWRzyd3qNExeVi54A6GqUkpZ+R71PDkgsQck1Dc/7P44oJKrAvJhsdavxhYrdiOc8CqJUqMnvUok/dBe5psENuJRjjHSq4G9Qx4GaeYt74HNOZAGVO1AEeN3HYU7ftJ46inMEjJHU9sVA5OcdKAHAA5yfzqFiAacctwvT1ocbOM5oAejAKexpjZxmo2bOOtSA71A9KAFTnj86eBwR+VKAFTpUaNlwD2oAcRtbHNI2CamdBjJPNQugGOeT1oAljGSO6npViTDYB4A4qAfJjH3RTxIOSaAGyEKcACnwrJOdq9qiLbiMda0rPbbwSM33jwBRsCJtMt0yWcg4NVNQdXlfABA4FPhmxGw7dcVSYlmYAdaLa3G3pYYkbE5XpVuzA88eaflzzUWCMKCOatFQsIBxz3oEizNscsykcfd4o0vyyzmc/L/OqudpOPSoojuZieg6UdLDvqWbp0aRynAzgYpk9wqRrGvQDNVJFJkAFNmI6A5A65osK4rnOGHNWYEebCAEk9qrQxmQgfpXd+FNEEarcTjqOBWsVbUW+hY8MaGlrGs9woL9QCK9B0HSHvJVlnB2DotR6FpLXcgeRfk7LXpekacI1GF5A+lM3hANM05Y1UKuAK6G1tsD9KdaW4GOK04IcYNNI1EghwOnIq0g5zSKPbFTxgZHrT2JHr27VIOnHSkVAfbFKFK5wKdyWhQOQRTSMZJ7U5DngcVMAGGCBUt2IZWABzSbecDrU8keOR1pig+/FJsWwAZ78075l4PJpwXJBAGKfjJ4x9DSbIYiNuGBx2pDke5pwUZ470jKRipExrNxxjNRdep5qU8cgVG69MdP5UxIQAdMnHpUm4AdMHFNAyuAcgUZzgdMUkhAecdD71Ew5z271Jj+9wKUgEZHOKQiEfNxjimkY4Ap+O4HBoAznmgYwAGmkc88fSngYzTWJz7UIBmMHJpMYGAPwpzDgEHil789aAGD249qCMH2pxGD7elIw70IAIzzim05Tk+1L14pgMxzwKUAelL0oPHTp2oQxmCc5NJjI+tPPsfwpMEHrx60xjCMkimyAdRUrfTHrUbcHnNMCLjr3FLgnjrSsMc0oweaQxE6+uKepyeRTQAG4pwIHtTQBjFN/lUgw4HtTWGDVDGfSkJ607v8ASo25oAUncMgUzrTwe3WmnigaE7ZowPpThSMSCOKAExTQMDnpTvqaDgmgZGAR9KQjHPrT8dqRhkYNMCNhzzQc+lOyNvTPpTQaChV59aCMHpSrkZA5pGIxTAaw5x+VKB70EAKKM85BpAGBTcetKTnij86YxMnPtS+3ekz1JNA6jFACDjI/GmtjPvUhpp6jNIBnf09qaxw3+eakxye9BxjnFO4EB5+o9aco47UoBH86Mc9qLisJ3xT+MZpCPxo7frQA3HNGD1B4NIT2GOKF
PP1oQCg8e1A7Z60p5owM8UuoAc0vJxQe2elJkDFFhC5wfWmvhh6U4jP/ANakIP5UPQViE+gzkU1jkkVMQB16moyMnsKSYNERXdmoivPFWD/k0zGRQKxVkjz16VUngBA4z71pleP6VC65+lKwjkda0uO7hZXXOf8A61eaapYSaZcmORd0LdD+Ve23MII5/Sub1zSo72JlkUdODQn0exnUp31R4B4k0Xyc3FsMxnrXKMPLbpXr+o2T6fM0E4zE3ANcR4g0cQTGROYm5+n60mraMiEujOXYZB9abGr7sAZqaYeWSOo9aIJPLcMpxWbVjVBLu2424NLFblgMnk1PdXCTAcfN61CGVVHODUgRSwmGTDcj0FOeJXhVkxuzjGaWb5ud2TUHmMMD0pgMcFGwevpSM2T84pWJJB70o/2uc0ATQ9PXioZAd/y1LgKflPGKSXEbAk5pICEsVOccVLHKrN8wOKjJ3tz0FG7jA6U2ArMFc7SalWMeWSfyqADdyDz9cUbyM+gpAPEDsm4UiR5bHQ1LBcGM8njHSonlxIT6+lPUY2RNjlSaiUHdmpGPmHPc0gQ/lQIM460AA85pUHOTyalmRPL3Kee9ADA3btUicEZ71CV+XI605GIxuoGa1jLhhzXVadIzbQpJNcPBLtYZ4rpdJvNpB/Kh7DR3GnzMhAJPFdhpzedb5LZrzuzuzIwOMV1Wj3mxsE8HisZ+RcX0NS+j27iM4x0rEmyrFuOa37h/Mj46GsW9Xg8Yx6VrB3RhNWZY0mbDYJ5z0rsNLnztyeK89s3KzZrstKm4U9vasqsSWjr7eTpWpbP0xWHayZArVtm4GOaiLsQtGbUPJFWgPUVRt26c81fTketbxnY64O6IylJsGas7T2HNIE5rqhU0NLHiGr3GDgcYrhdeuWkcgGug1q6xuJauK1G4DliOa576Eyeph6xNJIojGSB1NY5QQodx5JrWu5MNtHzHuayLyQMwLcn09KSfQzYrv8m2PntVVQeWbqPzqeylVXaRhwB0qGeQNlwMA0CK8rmRximS4U8Hkd6ecgZUgVGowCTjFMAjYoCe5pxIIyMe5qB33HHvT8HGOM9aAE5bkfhUbsQTnrUwBAwKZjLZbigBigr8x707G7lqOS2OwqTIUHI4oAiC5bAo27ODT93IweaTnJ6E0AI/IwBzREh5JOKE+VuQKnOWwBQAwtn5efpQrcE1IUC4PU03YD15oAhLkngcetWIYt4/xpFjUPj1p7OYyAOnpQAGEowIHXpT2c+WFPNPXe6bwMD3qOKEu5LHAzQBLCpYrGBjPU1cMEcSMeMjvVQv5c2I+T0zUc8sgJU/zo3HsRSPh89fenks+1agzx75qzbqGQs3JHTNMRLLE0cYO/k9qjZQkIIOXPahd8zYJIA71FKhQjJyKQDt5RSCTk/pUYUu2eaMFz9K6Dw1orX1wCynyxyTirjHqxF7wnopncTzriJecGvTNE01rqVTsxEv3Riquj6d5zxwW64jXr716ZoOmLCqYUccYq/M1pwuWdF04RquABXU20AGBio7S2AAAA4rVgj7YppHSlYkgixjgfSrgj24x+RpIkxgdvWp1HP9apEsQJntxRtOKnC+nNGzJzQyRFb2qUIMcH8KaEYHPIFSodvcVIpERBU8inqAQM05lHJPWkUAdCfxpMjccAP60wkA+h9qcoHfj0NIyDuT7UrEsQkZPPFSZx/T3qEEng8iphjA6/jSe5LDv0/I0xycHkg/nT146ce1IQCOePxoRLRGp4HPB9qXHBB6UjrxjHFNBIxk9KQAAw+7/OmHr1+bNSlsjOeKY6g5IoRI0DIPSkB2kjHNNORyPypdx64/KiwWuDdM80zgZzUit2PIpkgG7HagLC4Bz+maawB4FCn5e9BPbOAfWkAzGBTSOTin9STmmtyx60wE9PalxwaD04HtQPSkMYPrTueOcUjf/qpVP6UwBsd+KAeoob3ppOKYDj2xSZ7HigHP1pCeOlFgAjA4PSm4z1IxS4JxnNIcjB6j1pjGN3zzTV46VIDx6+tNORRYYHkA8YpCenFBAIxTR15HNMBQfSlbOaQ8c96Qn/a4qkAm7tTTz0oPXqaAxBAycUhjl4GSabnBGKO3WkPTigYYyeBR9T3oB7UHp6UAIetHXignHWjn6igYD0Oaae+TmlOeeeKTsKaAjbPIPTNKAPXinHHqDTSMdKBjgcf4imv69aBwOxobp2zQgG59KRaPbvSrgUIoTGT9KBg0p6Hn6U3tzQIU9DikHTr+VGfloXrzTGBOCOaG9aVufrTVIB+tACZyaRjlsU7JA9qaTnJ9PWkA09PpSjke9KSenNIOmDzQAh6elGR0Uc0hz2pyjH/16LgN245o2ggYwKU5JOR+dJjrSAG5ApwXjrSA8cGggk4qhAzAgZpQhY8U0ZLYNLvI6VIDtgGcdfamYPSlzg+1JnnrQDYEc+/SmFePepCcfWm9SaQETgg9Mg00g4x0FSnHQcijGPrQmJkJXJNNZQBzjmrGABz3qIqc+1ArFV0wCOc1SuYAw6dK1nj+Wq0iYGKLAcR4h0dLyBlKjcORXm1/aNAz2tyMr0Fe43UAZcVxvinRluYy6j5xzTWqszGpDqjwDxBpTWs5Kj92en+c1ipleCOa9SvrMTxvbzrhl6Zrz/VrF7KdlbIx0NS09mTGRmScNkip4YfNzgjOMjNRIjSnrSgtG2RyelZNWNCPy2EhXnOetTGApy/ekLHO5uallmMkAA42/wAqGwK8xXACjp6VCWPOT07U9mUxYx8wNEODyQCaAIzkHhuaVlIxv61ZuEi2K6gg9CKhDbuvJH6UAMbAwKgY5Per8cKsMN1NU2Qq5GelCYCbDwcnAoXGfmqXeFjIK81FuAPHFAEu4Yzjio2AJG09aQMQeKdwrAigZJGAEwTyaWYhPlBNNdeAQaY+W69BQIA2akxgc8ioVFTMT0oGWLdlz8wBX3qC7ZTIdgwBUTSYIApM5NCEOQ4Na+myHcBWTGPWtGxbawx+FMaOtsZSpGBXR6c+8qR1rj7OYnAJrpdKfCcN06is5FI7K0nJh2tyRxVS7Od27mmWoJiDKc06R8rjHNKGmxM0Ze4pICOK6TRrnKAE9K5q6O1icYq1pF2ElAz3qpq6M90ekWEgIHNblq/I965HS7gHbzjPauktJBxXNszNnRWrcCtOJuBzxWLavyK1rdgR/Wq5jopl1Onv60uPamockYBqUelbwmdMT5V1y4LIctz9a469uFJ2qfetXWLrO7B4FcpLIdxZqq2hg3qO3Z3E9aybkkEk8k1fkmUR44BNUCu9iQeKSEyBZNikE9ajZzkDPSlkXYeDTQuVJNMkcW3EDvUcxycDoKbkinBMgluPegYxOME09mbrjrT4gAOeTUtyqgAd8ZouBBHIBgt1pHffyDULH5vXFSqMqO2O1ADA2DzmpWO4diKhZOM1NBEXQkdutAEJbjjFShSFyaRY/mx6dakGWYKKAI36+1TwtwKV0DDaORSRLgjNAE7gBcnrUMR+YkjtUz4L801ABuJHAoAYGywB788U94yWGBkVEhw
+TUkUpjbJoGTtIyoFxwarmQr0p7yGVgelJMAoAAGRQgCF8yqXHeprkBnynSmIiDbk8mtG6CGAFQAQOtDBIy9g3AGpHwnyKOKhPLgnNPkOOM0CJN22EY4OarPIXb5qezMV4NOs7d551RASScVUY3E2X9C02S/u1jReCeTivUdOsFtY47S3XLn7xql4e0xdNs0CqGuHrvPDmkMp8yVcu36Vpv6Fwjc2PDOkrDGoIyx6mu4sbbaBxiqGl2gjVcCuitYadjrikkS28WAO1aMSDA6CoYl2gADNWEIBHvV2Bkyp26U8DB4pFJ4qZcNTJYijHTNPA9P0ppyDg08AEnFSyRwQ846+lKUwwz26UhPHfil35GGPzVNiWxCM9F6ZzSZw3I4pwbcCOnvTWHzZoJHZzx296XovI4pm71HWngggnmlYkAAfp1xijO0e1KSKThu2anqJjW46GkOcZIpSCue9NVjyGFMhiFiQPbtSggjk4pNmeaMZGPx6UnYdgX7uB0+tMYHbjp9KcDjPp6UknTIzQTYh5XjqPXNJ2OfyqQYI5zTGXD0MaQ7BxTX6U8DoM5FNcHoeKSFYjBOcGl4zg0nfI70rYHSgBrAjketNbk0/JNNGcnFACHoaQHk7acO5prD2oCwHkEGjFIKeRjNA7DTyMGmkAnFOPSkIA6cUwEFIeBmg5GCTSigAyTzmkPTGf1oJ+bnpTSQQT0NMYmD6cUnIz604Hgdc0nbPrQAxe5zn+lHuP/10YORnpSnrTAb0xjmhvwzSnr7+tJj1psdhpPbrSdeD0p/+cU3HpSGN/nR+NKeo44pDjFACZpT1pM9uaXk0wGkUAngd6U03kHrQMOaXgdDzSnp0pMCgYnT2zSY5zk0p4pvc0wQnTg896TPrwKUnBwKCDnmgY08HmgkdeKUgCmMcmgBTz3prHBpcUEe/FFwGkk0uOaCOeeaOlADmxtqP2JpxPGfypOpBOKLjFBPIBxQ2Ce9J1ApdvuaAEwDTiOKbQXAxxTAMDPI5FIeD3pSRg4701hkAj86lAPBA696jPFGCAfWkA7ZpgL2pueacMFj6U3HzUXAXtTfenZ7Gk6jJ5obEHJNLyVpQc0vGMUXCwyj8etLt465puKTCwYzTgpHSmrknjpUqD2pCIyo/Gm445qdh6f8A6qjbjk//AK6OoMjcflio2QEZ6jNTHJPpRt+X3piKEsecis27tgykECtuRKqTKO4yKLAeW+LNDIzPEPmFefa1Yrf2zcYlUV77f2qyKwYdR0ry/wATaQ1jcmWNf3TH8qfxepz1IcuqPFLiJ7WZlYYIqMgyHcOK7TxJpYljM8S89xXGOhR2zxUNX1HF3AptTOOOpNM3gpjFDM20+hqNMK/qvesrFiovcjHc/SlUr5hwMVYlmQxrxyOKgPIBpIBJG42kcdqjTg9PxqeSLdjaaiwVyCM0wJt24DB5FVmRsliaduK8jimNKTketCQDpU+UMCDmqzL7VKhbpjNWFVRGcjmjYCosbnBPAHepTEUAIwakI/d4HQVG75VVHSgBrFht6YNIVbJ9KmyCi+lJIwXDLzxQMhyT0pVb1pN2X46U4jc+BxQIa3XI4pVOB/Wl8snO0HNMYFeGwDQBIGxxVi2fDiqa5IqeLg5BxQM3rKTJGOtdHpkhQ4JPNcnp8m1x04robWXLgg1LGjtLKVoFUqcgitRSJE3dzzXN2VyXRUPT1rchcLGKzRTsylqK5rPhby5VPvWxeruTsKwZmKsc1undGGzO70SbcF56119i/AwT715t4euuzHkV32mybkBrnqKzJaOqtHyBWvamsCybOOa2LU9BmsjSBqxtjFWFGQKqRtjFTLJge5qos6os+LLpizHd35rFvCApA5rYvXxGT0rnLliSTXTe5gVZX45NMR9q/Wmyc89KaoLUySVQHyzDiq8jnPHTtTpXYKFzgVCxyRg0gEJyc04vkYqJj83rU0Cc7mHFMBAGRgevenEsSSx60+4YZG3pUJbK8jpQMHwH4FG4bTTGByPej2oEKx45q5BJiHavGTzUCwnHPNSINqN2J9aBhI2M4qMOd2c4pyjIOe1KyAD3NAD0fsMZNBBDZzUUfD80/OTigByk78nOKkeTIIpEQFSSTQiBnOMYoAjXPy/rVgKCcHpio4gDIDzgU525IDGgY+OAupbsKbLEcZ61NCzLbn3pI28xgg70AQxxlyOcYrWLRraYGC3vVeWJYMgfMcVQ3t5oLcUbhsXIwuDkDNQzRYUv2J4pC3zYB4PXNLLL5mFUfKtCQNkKAthR613ngzR1jT7ZcLkg/KDWH4V0lr+7UsP3ack16pptmJHjjRcRrjGPWtdtEEVfU0fD+nGebzpBz2HpXoem2iqi+tZejWQjC5ArrLKEccc/yqkdcI2LVpCAorWhXAGRUNrF8oq8inr2q0rFgv8A+qpUHGSCKao5FWE5FDE7CxYPFTBcYx+tMVF7HBqYKQPpSuSxp5HNLHnBBzinlc8k0xiQaNyGObj3zTe+en4UuQeBximE4PSggkTkEZxSgdR2pgPUdQRTkbceevekxMcwBAGKQHHHalYHBx2qNlxyeKRI9zwaQOCB/Kq0pJ4ByadGuBu5pdbE2LIOaTo3HNNTGOMkUvQnFILADjPPeg8ckU1/vZHHenZDLxSEMY5BI7UzfnP8qVzt/wDrVASPxp6CsTE/Lg9e1IQSOetJG+evIqXhl7ikA1Rjp0ND9OtD/d47dqaWGPSkgIsc46UrAEdaU88ikU/lRYCPkGnAZB9KTaCaATkg/lQAYI6UdeD0pQOvehlJ57UAMKkE+lHO7tTnOTzSNmhIBpHXFIO+TSn0FIf8mqAQ84NNHv3o+6Tim9D1oGh5+71GKbj1zijcfrSn7xIoAbjA9KAccUp65pPXvTGJ7HikORSnOeaTBwaAEJJ+tIDS9CD6UhGBxzQAHoKQ0Z9fwpG65oGGMDimn0BpevJ5pDzzQAYyD6+lNxTh9aQjpTQxDjBxQMHt0o/ClGCefSkMQ5xSEDPWnY980jdRimIOKb2I5p3Y80hPWmMYcY+lIPanA5HHFAXrnFK4xjHIx3+tIAO9PIHOP1pv86BiAdeuaD05NKOF96ME5pgMI3Z9aQ9OKf7U3t9aAsIfu4OMGgDK4oIz9DQfXoaQhwPHsaO3WmbqUcN1NMdx2OD71GyjbnOR61Jnt1oPT2FAEK8dakAyvY5o2gilGccdPWkloA0gjtSAZPGKc3UUnoScUAHGfWmcknFPzjntSMcDFADeMY70Uh460ZoAUmgD1oHA56e9LkZ4oAQjgYoGMc07OaUgdD0pdQGgjOMYAp275eO9Iy88dKMVWgrAWz0603GQadjj0p+BgfyqLjI8Yx796Q47YqVgcVGQefWqFYY4zxUMiZGCO3FTBSKYwO7NCEZ0ycEGsHW9OS7t3Rx1rqJo6pXMWVPFHmFk1Znh+rWL2Nw8Ug/dk8E+lcD4k0vynMsQ+Q88V794p0hL2Anb84HFeYXlqQZLedfUc0S/mRytODseWFioKckU4bMdMGtLXdPa0mbA+U8jFYxU884qJK+paY8jc+
B0pzkxkAimKSnBHFSBDOfp61mUNWUZGani2ty3SotqKG9RTg4VQMcd6QDmWNjjpVcwFn2gfiKfIQSSB1psUhRuetMC1bwEMqnqKryOxd1GCKnky2ZEP3RVFS27dzk0ICZIzn5jgUyaBcAoalcgxgk8ioAx+b0NICJwyqMHr70wbsDdUzMCNucn1pECs3zHAFMCPgdKlQbmyvU0skKqgYEYNLCyq+c8UDL0duQm4sBiqNyn7481LJcMrED7pqA5ZtxpLQGKuFOCBS7hmkbP8X6U3jOe1MRetXG9cV0FlJ8o9a5a3fD981u2Mu4A0MaOt09wcc8V0enSK7FG6VxljMcjB4rfsZGJBB5rJotM37qMFTjpXOX6YkPrXQqxeIZPasfUowDkVdNmU1ZkOkTGKdee9ekaJcB1XB4IryeKQpLXfeGLnfGvPNFWOlxbnotjIMD0rbtnP1Fc1p7cDPNb9s3ArlY4mvGSVzjJqwvYYzVODt1xV1Me9JOx0I+Hr24yCKxZmyTz1q1dPknnmqbjucgV2JGBC65/ChCvTn3prMOMU3d2pgOlQEccmq7qU4AqZZME+tRyHceec0DIlGXBqwuTkdKhGNwAqfzAiketAiFxtJySaYctT/vHpTW6HtmgB2dwzwcUvC9uTTR90cVIVwoPc+tAx8b8ZJ5xQzg8CmsoC49qbEnyseaADJAO08U6IktyaVBhcYpu4hsigCQgqc4/OlBBfnrULs3cn601X+brQBfZlA9j2pgkCucDjGKjQbgDnr0pGTrk80AWEbGdoyTUcpwSTiiJ9i89aAA4O7pQA+CYlNgGalt42DgrgHOaigjAHXGf0p6eYvzA8UDL1zNGny9Wxkn3qiMTNg4qCQkud2fxoBKcjgGhIG7j5xtchegqfTbV7mdY0BJY9qgjy5wOSa9B8GaQIY1uJR8xPyg1a91XFa7N/QNOWytUiVcMRljiu60Ky2qGxk1jaZbb5Bkd67XSrbAUAU4rudFOJsabD93I610NpFuIxVCyhxjity0iAHpWqR0JE8SbcDNWY+vPNNRefSpQhHI/OrCw/aD0poDKRTx154qTrS2JEB7kc1MmCDjr6UxAOg/OnhRjrUktBnmj7w/zzSuCe3WmqTmmRYDlTQP73alY5+tAbOQKGSLtBXIGaAuOmOKUEA8flSlSTxUiY0sV5zmnAg8Dk/Sk2kZGOlIq4b1pMmw0Lhzx171JtXbgijHA45BpW4HFIVhgHTHJpT196HGORQBkdcHFAhhJI70inBweM08A4weKR1G3gUhWIJgxIPamBVJz09qnxleaQKFJzQ1fcBqjBx2p6kg9eKQqDz37iggZ4HGMUMQ4nJwRUVL0J65pCeBn8KewkIScEDmk6H096UtlqTIxUjFxzxQcCkU8Gn8EnJ4pAMBAbHFLjBxSOuM/zpRwuKaQDHXByOlITx605j1YHtTR0zQAw8D1o7U5vUU0dxQAxhgdKQ4OPSnnk4pMcHnNAyP1FG4enNL/AJ5pP50wFz05oyfQYpMYOKU9KYCDk5o68HNJj3xRyKAAjj3pOlHXrmjrx3oGNwKafzFOI4pMUAN6YoJ5pQPzpPxoGID60lHej+dAA3BpAfpS/wA6TH50DFHPUUHg+1IDk07tTAb04Pem9j0pSfmoH60MENXjgmjODinYpp4A6EUDF69KTHB5py8Ckz+VADV4IyAaQjmnkYz70Y7UDIyKTBp7DnpSdxQAwjAIpuMk8kmnsOe+KaRxzQAhI6gc0h/zmnbTTec0CFHH1oBySKaeuaMn8aYDsgdaTOBikPqen86QjHHrQA4nPOaTtQeOgpR0waQXEOM0Dp9aXikJBOBQMYeeBSheTS4PbjFOxkUgGgZ696MUvvS4yeKAEB9OlO49cGkHv1oI4GKYCsc9O9GcAYppXAHY0o6ikAvSnDpTcYHYnvSqcdadhDz+lRt1p+TkjtTT9KWwxm3rzSHg8U8jFIRk07EleRc/Sq8iZ/GrzgYxUMi4XjmgZiXtuGUjbmvOfFuk/OZo1wQea9WnQkdK53V7MTIwYA5prQmpHmR4PrFmLmCRSPnXOK4O6t2hmZWGCDXsXiXTTZ3TOq/KTk1wfiawBX7RGvB6/rUuPKzmi7HLFgw2t1HNPUqijk+1V2ba54pu4nkmspRNVqPz8/BPJqw0n7kLtBOetPsoY3OHOCaiuY2jkYA5Uc1N7sLWEbGzJWmQxGSRQvU1LHhhnPShW2NuTNMBZEeJyh71TdHXk5GavzEmIODkmqjTB1CkZPrQgIQxYHJpCcgKO1K+Am3+IUxR05/GgAIwcd6B7VKyFQDgHNAiy/BwD60XAjbcVx2oRc9xwKnKYXA9ahdW6cAGgBm8k8/SnB8Ag0qR9c9aa6hT70DFd+KaTk+9AGTz1p+AOn50CHRtyMYzWxp3IHIxWQowc1o2DY5zQNHS2I2mui0ztXK2coyOfrXR6fOigelZyHE6dBlB6Vm6mvymr0MqNENp/WqWoNlDgUQ3JqI5yVsOcGui8JXZW5VSeDXMXTYkIq3oly0d0o962kroUT2/TJQyqc10dm5IFcVoU4eJOe1ddZNkCuKaGlqbluTwM4rQhbgVlW5461oQnAxWTZtE+EXUk5HaoJiCwB6VK7+lQEE43d69AxIWT1NN2cc1JJkVAz9T1oAY/wArcChVO0k0INz85xT5NqjAPAoAiAABJHJpFGSSe1IAzHjpUqYTIzQBESQ2FyKcVyRnrQepPakD85xQA51wMAjNPxhAWPNFuuXwabNnpnIoGSoQQelSRoFByeKiiXC55z6U4v8AL0oAeq5VtuB+FVzwx7g8VKzHZgdD3pwwEGcZzQBG6fLgdajEfzDdxUxOG5PXtUYJZhjmgCQcY5FSCIv+J4piJuJboBVuBCBuz06ZoHYjuIFjIGee9Vy2FxU9yCXJZhk81UxuYdaEJl2AN5YJHWpQ+35ABkCoVcoioegpU3HLDpQMhlPzHik3b+CRx2p0rcUWsJmkVFBJNVFXEbnhbTDe3is3+rXk8V6hZRKgVVHA4ArH8P6eLCwjBxvYZNdLpcW9wcHihPmfkaRidBoluflz3rsrCHAHBFY+kQfKDjmuqsIeF5rVHVBGnYwcjjFbEUeAMVWtE2ge9aMY6Yq0agi7mwamAxxjilAH0pRlT7UyWG31FLsPY08HjmkPQEUEMFyKcMfnSBsmlI5wAc0iWPAPBpCucFaRTg+gp4PHvikySMg9DTQTnkVKVJzUbAg+1BI7qRj6U8DAGfzqNQSeOg71IM9DzSYmPGSc9fakIB6dc801TnkGnNkge1TYkRvTvQ3QZoBIxTvegQwYOcilxz1HNB75oVuvSlYQjgH8KavTFSbRyKjHymgQx1HUHvTVORjtUj9elMPy9qBCcrmm7jnjpTzyMjGabgAkUANY55oOR9aVh2/Gmng0AIeR70mOppkMflh8MW3Etg1I1FhDcY5p27pj8qQc0o6cdKEAsjBmx0ph9OTSvluaYchsUwsObOOKYfqc07PNBANIBpIB6U0+3WnMMGkP3aBjGXnI7UAjHNKcd/rTcDdk/
lQAMOmORTeg9qU9PrTR1oAcCCATR1UYNMPUU5en1pjEPByaQkk+1OPOKT2HX1oAQ9PrR7ijr6YoI5x70ANOB60g7E07pnNNX3oAQcDmkIx2607HajqRQMYfWk7e1ObtTe1ABwetGPekX3o6UDExyTR+dBPtRQAdc00/Q0v1pSMfSgBB0HpQTxSUEfLQAY5FJg5+tLjikNMBSePWk6d/ypCcmlz1HegYpPJpMetAoY5xQAh/HPao+3NPOcZJoP5UDGHOcikOM07HFIRz9KAGHg96Mj8aeRxTMZODzSEKoz0oYevWk6D0FKT6UwG+vPFAPJ6/jQcY96MeppDQY7UEYNOUetDdMCgLCDrnpS/qRTe3NOzjFAC8fl60p5PHX1ppPHTmlIxg/pQABckd6XBHalXH50DpQwsNzkc4pBxz1p23v0pH44xQAhb5cDApQP5Uz8OacueD6elMB+AOnagigYxz1px9KkY3jpTTjpinkccc4pmOvNVcVhGGe+KicZyDwKkIJpvXntRcCs6cEGs67iBU8VsEZHFVJ4sqeRTA898Vad59u/y8gHtXmF5bf623k+62QPY17nqVtvVs/lXl3irTvIuiQMKxz+NO3MrHPVhZ8x41rVi1rduuOMkg49zVOGNXzuwPSu58SWIubYyKv7xBz+GfauFkLRuQeDk4rLdExY4bkOATVmB1Y4l/HNUvMIyepoL7nBPGf1rNos0LmJNn7gZHfFOjCfYzu4b3qCKcLGRjmq0ryI3zHrSXYCwZB5eMDjtVWbKvnGBTox5ufmxSXCFCAxyPWmBG3znIpjDaQO1KvHOaQDPrQBIp+U5PTtUjvgKRj1zUWxgOBwR1oXdxgZoAUyfPkn6Uu8MRxyKibLDBGMdxT4lXBzkY6ZoAHJWQHtTZD8+aViH47im8buRQhilx2GaUtkAUFlZcAANQgGcEUAP9DVu2foDVFuGx2qeBsHvQBt2j9Md627WTgckVz9q42qffpWtA2cbc1LA6vT5OBhs+2asXjfIaydNduvatO4P7kk1K0YS1RzF99/GKZZzbJgR1pdRb5jVCOTa4OelbkRPavC05kgjbtXeae2QK8o8BXm5Nmf8APNeo6adyjBriqblvc6C3OK0IW4zWXbMOM1oxHjisWVFnwnMoHQ1CSQSeatmLJz1AqC4xjC16BmVJXYj3qMe9SDoSaacbAOhoGM3kA4FRZLVMRkEDFQ4wc+tAEy4UCkxnLCo9xBHqas4HlgAjd1NAyHIA75NRnG72qYKAM9agYEscUCJoD++yOvpTmAMh3flSQ/Icnr0o8wZLHBoGTKu84A4pHjKjpUayspyKkRvNYZ5AoAVFBPOQBTXwD8tMbdk7etKmTndigBZCCAB1qLDAfL3qxGgEZZup6UzeMDA5oAWOTOE5q60mwDBBI9qopkc/xdhU8Tc/McmhjQ2YtIeRjNLEgx15FJIS7Niot7bcZxQBPC4JI70pk52/wiq6Ern1oY46GiwDm+Y5Ga6/wPpXnTm5mH7tOa5fT7Z7m4SJckk16va2y6dpsVsgw5GWqpOysuoRV3ctq3myjHQ8D6V1OiW/Q45Nc1pURklBx0ru9Jt8hdoqoqysbwV9ToNMgwF4rp7BNoHGMVmabCNqgAZrobSHaMfnWqR1RRbjGFHGKsoce1RIMcVOgyO9VcbRMjA8HrTxjrUIGCM1IASvNVoQ9BxfmlDbutQ4z1py9cigm1yXbg1IBke9NFOBx7elQJi7d3UU/HrSBscmnkc5pEsYQc80NSkg8UDA46CkSNUYzzRgDGMmnMMDqMU0H5c9qCWIOp6U44PWkQndR0NAmCk4waeDkZzyKacHkUgOGpCHkE9Kjx+lSbs9Kb3PFIQEn049aQrnBPNL0HHI9KTPGT+VAhjdx+VRkHHANTP1yBTSfbjvSERjgYpMDmnZGBjrmm+uRTAQ/wA6bjIwOtOJB6DtSHI6daQhvQ45ocd+9KTkmkz9aYCDA70p5HBpre1IvA9qQDjxSEdQTTTwevWgnDUABODxSAn6ZpW700c8AUAKxHr+NNJ4pxqMk/WgAY8YxTScHp0pew9e9N+tAwPfB/A0zt1zTj14pM57UAHejJFJgZG3rRimAE0u8fWmmjHp3oAdnjik7cYz6UCk5B44oAdxgUnegcik5K9qAEJ9OtH1peSOtNJweeaBiEkGmntSjvR05oAaeDQf50uaTIz6DNIYnb1pRkUNjOKO3WmITqaKM0gpjCl+tHejsaQCYHWkPrilyce1NPWgA4/GjvmkPb29KCfT/wDVQMOho6Z7+9JnAyaQngCgBxPI7A0w9aU56ntQDznNMYDgc0nUk5pwGQc0hAyPWkA3rxTcDt1p+CR6UFcc0AR9qXGfrTyvAA4oOBxQBHj6GndBxShaXoKAGAcHPFJg1IcdPWkA46UXCwwEZ5pxXP0pCpHenDpigBPQU7BH0pAeTSn6nigBe+FpMcc9aUf5NBwDjvQxjcHt1poHFS/Wm8H6UMBnuKcpwp4pcDtRimmKwv8AnFKQe9GO9Kd3egdhp79s0h+makAyOeDTdvpxSGMxntSMPlxjFSYAFIASM0bCIW+U1FKvpmrJXPOcU0qTyapBYxL+MFfTNcT4n09bi2faPnAJBx9favRLyIFTxnjjiuZ1WDcp44poUldWPF7uPazI445Vga8/8Sad9nuiyg7DyD+ftXrniax8q5aRBhG6/rXG63Zi6s3+UeYgJHHNRLR37nJZp2PPUUd+tKI8n5TzSXMbRykHjnFIm4Hk8e1ZyRaY4ZQ4PH4UXD+awOOBUmMgFxz05p0VuGBPbtUjIwcL8uAabMSyZIpGR0PIwKeWTb/SgCsgyQKtLGCp6cVW6yHAI9KtRRllBLUMAkB8g4xxUEZK5JB/GrLxsAf7p64quSWQkUAOkO9QQMAelAChQevtUBY5HOaljTJ+bIFADWYB+B+NNkI35HQ0k0ZWQgUD5uowaAE5BPH0oLFT0waNxDYpUyc5oAGfcQe9SxZ3DjIqNULNgVNHlHwaBmpbEFQFzWvbZIUjqKyLMA8nrWxbAKoqWCNmxcrx1rUZiYTngYrM0+RcjpW5MF+z54PFQ3qU1ocjqfDGskt83tWtqhy7VjSHnr0rfoYo7PwNe+VfRpnvj+de46VIQBg9RXzboU5hvo3zwDzX0RocgeCNgc5Fctbc0e1zqrY9Oa0YWxisu2IwMdO1aMJ6YrAEz4maIhWqpIhIAxWmSDUckQPSu65JhygioJGzjmt/+zzKp4rIvLfYxGKdxlQNikbnBPSjbjNBPH6UANDfNkflUm4gf0qNRlqkdicdOPagCRWAjII60xPmf26U1AWY56U9FHOScUDCQg5xxioc4Izk1Mo+VjigJu68UACrnHuanU7GIHpTFXa2TnApehB65NAATj+tC8ZK0jqXfriprGIPI6t0xxQBWkLbwueKVF+dQOtOIwzA9RTo4sYbkgdaAJCgWUDHTvTmAJG0Utuu9ixPyjrQ7BAQBzQMgJIBqMgkDIqwE3Rlu+abOy8AY4FADHOFHY00Lls0xiScY4rQ0q1a6u44
lH3jVRXUk67wFpQy15KPlTpXSyuZ589cnj6UqRLY2EVrGADjLYqSwi3yjA4qU+Z8xslZWOg0K3wASD1rvdIg+6eg61zWiwYCV2+lQbQpxWyR0QRuafFtANbUA+UCqVnGQg6VpR4GB3rVLQ3sSqB2qZeO1RKBjjpTsnHU0WAm+lKp5oTpTwvHvTIYhUnpSKvenqcHFDAhutIkFyKeCCOlM5HUUucUrEscD2pynkgg8U337U4kkD1oJZIelJjtTVY/WlJ9ue9SSHPTtTVYA7TTv6UOAeSBQITo39aXjPWkI+U+lAxj3oEHQ/SkJ5zjmnAZHzfnTScnjtSJDGRnt7U4AN1OKQZFJkKeQaAA8ZpgPTv6U58c4NMyc8CgQpzhsZFMzjkjNPDAcjmgEHknn0pAM9PSk7YxyalIzjt70zgPx+dIQwrzxxRyG5p3qelNbnimDGngikPXig/ewD2oPrQITqOlMPGfSn96aehpANPzAYowaOAcHijPP9aAFzxTSOaA3PSndv6UABGR6VEwP1qTI6Z+tNPH40wGEHHtTD0/xqZxkZFR49RQMZjnFB704qBx3FGO/PtmkBGMilY5oPNIODQIQke1KPpQAOwI780e9ADsYGaaaM4Oadj0FAxlKuKUZxzSenp7VQCd+aaRzTuSKa2ePSkAnWjrRmgH16UDGmm49acD+FB6A4pAJ/Kig+tJQAHikycUp5pDxx3p3AB1obp70nOeKOlAAOTQQAc0E88de9ITxzQMTjjvQeuOtGSelNyetABikIzwM5p2OOtJ0HTvSGGMUdOKUcUg9qYC8d6RsZoUcmlPf2oAQUY79qUD8zQDjrQAhH5UEelKc/nSE9hQAdDxQRxRig9fagBKUDH0pTTSSxoAccY60gHOcU3GKeGPGKAE+tIDn0pTz1pMDigBQM9OfpSA4zSr19qCaAADPFBAFGfSg460DG5NOHXnpRSgd89KBCqckj1px7E0wEYGKd6etMY4HA+tJuJH0pQcjB4/CgDnFIBrZJ5pME+tSAZ6U8DHvQO5AEORxShOuasEDtxTH4GOKrYRRmXPGOKwNVhI3cda6SRc9BWZqMIkiJPahDPM/Edp5kTgD8a4C7jw5z2OGFesazDlW4wB/wDXrzvWbby7okjCtxRJXVjCrHqeZeJ9P8ifzUHyNz/OsCNvmwfWvR9Ysxc2boV+YZxXns0XkzMpGCCayWqMo9ia4IZAFxmoPNZECngClQlm+lLMoAHINRtoWNaVnTB60sEQYjdzk1CMhqn34+YdaALF9bxxTbIz2FVyGjFTIfMcFs4qGR/m2kULsBLE+VOTweuap3H7tiB0qUOqkgiorltzLihAQE+lTRuwHPP1pmNy4HFIMD5TQA5n3HpzTZF2hTzzQRsAJpSwcjNADc5OKcv1GO9IygHABxQwwKBksRwcjjFSA7ucc1WV8DFSxkE4HWgDVtOV9K0oC2Bisaz3etasBZSM85pMDVst25ea6Qk/ZhzkY5rlreUg8HArcgkLW5GSazkUtjF1RhuOKw5SNxrY1TIc1iSnnrW62MkWLWXY4I7V718PNQ+2aRESeV4P5mvn2JhnrXrXwhvwVmtyemMDP1rCsvduX0Pa7VvlFaEbDgg8Vk2bZUdq04TxjtXKiUz42RMmpQnSokOCaljbnJ6V3MC1GdiE96xb23Llz0A61q9RVe9kxblV61KBHKyqUY0zaDzVuWJi7FhgGq7ptqxkaHBzUhA2n3pirg59KXOevFADckLingHbkH600/KpPrTonzgdqBgWwuKfGMLnOaV0APNMCMAOe9ACs52465p0XHJoX/VkdTTo1AVt3pQAwt83FTwEowC9TUaIN3JoyTIdtACvky4A4qYHbEy460sKqvLcmhF80sT2oGMiYIhGevao2Ys3NSPGEan2kAmlA6DrR5iHYBU88DtUMkeY92KsbNspDH5eoqKeUMhUHrSGVApJxmu88DadsU3ko4XpmuU0qza7u44lHOa9QSEWtpFbR4G1ctVzdlYIK7uNmYu5Y9TWzo0HzLWNAnmTCut0e3wV4JpxVjaCuzqNFt8hcCuz02LgZ61g6RCFC4HaursYwFGRzWsUdUUacC4UCrcY54NVohxU8YPrzV2NCyMCnK35YqIN69adkZ4NUkSyRScjBJqUEd6hwcAAcU5O+etIlkjDnrT1ORTFcYwcinL1oJJFGep4oI4GaTd+VKeQMn6VJLE9jThnk5pABjBoIwQP1oEPI+XgUHnmjPAzS9uOtIkRecg0+PBODUbcfSmB8du9FrkslZduRyeaZnDfWpCRnvyKYcA8/nSELu/woz2pF6Y5pCexH40CFznj0ppwaUDHShhwD+tAhCuOeopuD3xTxnpnOaQ9CCMUmA0Lk4FNYEZx+tSDkcdv1pykNxilcREpOOaDgnNOIHfj0ph4zmgBT+lMYjPFO6Y7CmEYP60xDWxnI603dnofwp+SMZphXk4xSAUnrimd+M8045x3pp9qBBjpR7etLzzSHrnIpDG4wSRyKPp1p46E01uc4piDtnFNPT2pynilK/nSGN4xUfQU8sBkDimnkcCgAPfNNI4yO1KeV4FIelADSvOaQjHNOJOOOc03AA70IYuOfpSH3pSeKQE4piDjoaFOB60vc4pvJJAoQCnr7UmST7UhOORSA4z1oAXHFN4xzyacTzTCcUABH1Bph69KcDnjrQTn/GkMaccZpcUfjRnjHSgBvSk/Cndu5o5NADRx2pTyeaPrQf0oAZj0o6ilI4FNoAOw4NNP3uKeelN6ZouMTjGR1pG7D0pexFGPWgAOeO9GcmmmnY7jjigYgGR9KTuSaft4oIoAO1NLc80uOeaae9AhwOKQZz/jSKeM07PTn6cU0MPX1pCOcUoAwCaQkn60ALx3pCO56Ug5oIIpIBeCOKAOcetNHNLTAVvTI+lNBwelOpR785pANAOcmgDmlz+VISc4pgGcHGKP5Uo569qVRzxz7UhhjtSFSOfWnYPBo68mmA0DA5NIw/CnhcjJ700jNFwEX0zT/amY6gGnAZNMEKuT07elSAHcf0xTFGO9PBPSgA57YpSSKCR60Y3CmmAm70phP51IVx1pu3npQBG31qtcIXVhiruzvimsuOaaHucTrNvjdkZFcB4it8q5x06V6xrUCujEDmvPvEEBIfHbNNEyWh51OuMMBx0NcR4psvJuTIowrciu/uU/fMhBGawtdtPtOnupGXTJFc8vdmczVjzpWIyKfywHrTZkKSMpHINKFOBzRJDQ+OJmbIqxBEkm7edpHSpLGIOMt09agu0a3kK561G+gy1aGME7xuI6VWuXQy8DGO1MhlK5x1psifvQ/rT6iHXDIzDjBA7VXfDJ1HFWZfLIGOeKogHPFAxV9aMgN1z3p444qQxoI8MCG7UAQP8AvGzSDj1p4U4xjOaDGfSgBATjpzQeVPtT1wMZpz4wdvQ0DK4PpxUsfvTePxpVOTigDUsjkAZrSRtvANZFqTuBHGK04iTSYF+3yMZrWgfahyeayLc9PStJCdhyOKhjM7U3yxrElPzHFauokBs5rHk9a1W
4waVaRSHHmkyBn1oYZ4FAHNBaYpHT0pGPc9qCcDNIpLD2plEYZ3OEGB61IsIXluT6mnH6UpIOBQUKjDpipARjNMCgjgU7IXFOxSZBPxPAT05zVkcmq1x1RvQ1ZQ5OTTGhy/N160AYPPSlHC4FJk556Uy0OjIGRk07qOKYPf9KcDwaaGM3E5A4p3QUhPpxRjp3oKQY+tAPvilGe9J196ZQH6Um7NOZSeKaRzTHYM5OO9NPHfmlA+akkcIpZugpoBk8mxMAZY9BRDH5YJP3m6mmxqcmRxyeg9qlP5UxoUjngmkOOc0pPFNPHvQUH0opATikY+tADiQahkBVlkHbrTyeOKTJ+62MUx2AsD3PPtTH4GRzjtSKcfKTyKY7YU1DLRHK3B5qjNJznFWJn4x2rMuZeSM9aklsinfqfX0rKupsAjJ5FWbiXGfWsi6my2OOKRm2VbubYhJwT6Vz93OWcnJwOlXL6fjBPSufvJwXbHAH60GUmV72cLnFc9fT5yf51cvZcknP51gX1x1GRnFNIwkzKuJS87fXFfR/wL8Pf2foTX8yETXBwuf7vBBrwjwnpD6xr9naquRLKFb6V9f6TaJZ6fBbRLhIowg49Bis6r5pKCN8NCyc2W+hFTIBxUJHPJ61KowT7V0RVjckAIpjAYpT9cE0xuKYwJA4oxkdcU3BPJpyKfWhdwAgZx0pG7VIU4zUZU5OcUANJqIsQcinmmEdcc0ABPBPrTA3Jz+VO3Y69KrTZzkHNDuA5znINZmpXC28TP6cVNLIyAknArg/Guui3t5EQ4PTrSvdmVSfKjj/G2sGadkDZwa4UEyy7jyB3qbUbh7iYsT1plvGSQAfrUVp8qsjiSbZOzBIzI+Aij864PxFqJvLk4PyjitrxRqYjT7NC3Trg1yG0u57k1nTXKuZkTfM7IRMZJPT1qJs5O3pUzDsPpT4otwGalu4iBDgggcetW0bKnNMkjCrjvURViNwzgUtxgW5IHWmNgAY60+NRgt1qM5JyPyoEIpyak4Lciq54J605CQcmgY6Q/Pz0ozleO9NdtzZxQzDAI4oESE+WRilmk3lQewqA5YZpgJU0WGTd+O1PX5uO5qEvTkJFACyIA5A7VGoKt14qRmIJzyaaAOcigB4yTg0N96hSOgzUgjJ6jigAUgnrWpYrlgMVmRJhiPStayDKy8UwR0NhHyPlxXXaZEAgO3Fcvp+4jJA4FdLppdtoJIFZNXNYm+qKIhjr61XmjA6805pCEULwaaTleTWtJWJqshVfSrlqvzAkVXUdu1XLbrWkjkZsWC4HNbtiBnp1rGsh+lbVn1FRY55m1ajgetaMI96z7U1ow+uM0hRMe6TcCBXIeILbep4rtJweufrWJqdruViO4pNHpni2swGGUh1+UdK5m8tzJPmPnI5r0XxJZF5SCM4/+tXHLbhJ2DcA96jzJOTMflTNkZqvcH5uK19UjVJWBHU1jTgB6a11EyNiUGcZqOR9y8fSpGJ2H9KgU4+9TAj7/AEpznPIp8YyzdKHA5GOaYFYZJJqfIMQ5ppX5AaY2Vx70AKBk9aenEnTio8kH3NKGOcGgBzHg/pUkPzR5NVznPtUkblc+9AD2UlvapbdhGxyPxqCQ9xxSZJAGfpQBdmUBMgcnvTbeYp1PFETF1CMPpUcseF4o8gJnYSZCmrWnyAMU9RVa3j2455PWpmTyn30eQBcKFlwBmmSMCQcCh5gSWI7YqFTkZ9KYFmUhQrAZpqXBZsgYxT4lDqfaq0yhPumlvoBJc3HnLt4B71Uwyn6Gn243TZ6ip7iIIee9NPl0Dc6Hwvqo4t5TweBn8K7XT7loZQpPymvI4pDFIHGQQcjFd7oOoreWqqT+8Uf4Upx6o5pxtqd6ds0WeOeait5mgmGDjniqOmXOV8tzz61buVwNw+tRF2dhxdzu9CvzKqYPzjt612Wn3IeMd68f0e/aCRcEg16DpN8JVR1OD/EKtqwSR2CkHrTsnIqnbTBgOasg56VLRBN70o7Gowe2acGKg5qBMexB5ppHvz2pC3y8Cmg7uvAqkSQXtvHd2rwSKGRxXiOo2kmmahNaSD5ozgH14r3JlwARXB/ErSvMhTUYF+aPiTHfOBSi7St3KepxcchIGDVyGUjnPFYsU46k1dhlxQ1bQzsbcMnc1egkxisWCQd6uwy4PPeoJNxXyBircUmeBWRG+e/FXYWGB1piNVHxgirSPu61nRvxVyJu1IRaTGKkXioFapARVDuSgnuRS9OKi3Clz71SQ7jwTjml3Yx3Paow2frUiKSeaewxwy2PWpB8vXHpScAetKvJyetBSHKM81IBimj16UFuKNy0KetMk+430p45NIxBGMUyiO0fdCM9qsA5+lUNPk3GZVPCNg1fH6U7WZdx49qVTkYPApo/Sne46GgaHfj0pc8k4pp6Hin8EH9KZaEyTnnFSA4FRdT/ADpxODjPFMpEvQHvTHyeRxQxwvWonnVFyTSKRJ64yTTlyOhrLa9eRysCk+4rQt43UFm7jvTtbctE5JGCOaAOOOtJ1HWjBHIoGPxjjr9KG/GmhjTlOOT1qykIMEfMOO1Kr/h3prnkDtQevFJFEiybulOJ4HNV4wy8YzUqtyM9apaopEvagMcA/hTA2BjvSb8dRQhkh4OaVTzmolY5wafkZxmqZSH5HU9aXgjjpUfU89acPlpNFpjjnk5pucGlDA/1pWAx0oGhrDP1pjKGwHGcGnZwT1IoI4poY3OeKrajJLHayPCPnAyKsY7ikYBlKsAR0NUOxDZTGa2SRxgkcips+lJsCKAv3R2oPIJpFAeOlNJyORQDuXpTXJzzn6UwFb2PNMJP1pW4PpUZ79qYIc3JzVaZvkIp7NjJzxVWeTPFQyiCd8Dk1l3Eg5zVi5kxWVcycHFSQ2V7ubA681jXc5A6jNWbqXdn0FYV9PtGM8mkZSZQvJ8g85rBvrj0/OrV7PhiAa52+ueGyce1NIwlIgvLrjr061hzS735p13OTwDUmg6fLqurW1pCNzSuEH41SajqzKzm7I9p+AfhziXWJ14H7uPI6Hg5r3NRhaxvDGlR6RolrZxAL5cahsd2xya2l4AJPFY0U5NzfU9KyilFdByjd2pW4OaQEEEqaQnHXrXSJBnJxTjkim/xdaXc2MEUwFLY60bgMg0pIxSEce9K5Vh+7K5HpUTSA/WkOV4BqJ/mIx2pbsBzHg1ETjgdaepwcHrUcnqPxqgGSHP4dqgeQKOaWVqydSu1t42djjHSpbIk7alDxDqa21s7E4bBrxDxJqjXdw/zZGa6DxprzTSvGjVwjNuLMeaL8iuzhnPnYirubPc9BTdWu10+yOG/esKnDrbQtcTYGOlcPrWoNeXDMWOM8CuZLnldkzfKrIz7ydpp2ZiTk0sJwD9OtV2IJGKnjXgk9MVU5XM0hTja2ajExUfLRP8AKgx61CORUWAcZS55qfzgYggqKOMnoCaUQ5lCng0aDGEkd+KaW5yKkkX5ioNMkUjAXpQIj4OfSkBy1Ozgcim54GetAx74C5prfdBGOe1K7ZGBUbbgOaAJB90HPaox8zem
aF4HzH8KdkHnGBQAEAH1pAdpwTxS54yORSdTk0AKzktxThyfw5pV29aVCN1ACsu3HPWpN/alYDihV3HnpQBJb4LZP41tWPLr7dKyoI+duOtbWmR/MBigaR0enRlmUIPauvs7cRxKD1NYejw/dAH412lnaEqjMO3NYyep0QjoVDAFXkVXbitm5QbOOtZk6Y61001oY1Ssoq5arluarLz7VbtRnGDVSORm3ZDnjrW3aLyKxbPpyMVt2nUVG5zyNi2HA960YemR2qhbdBmtGIHGc5osKO5lyZPBH61RuVBUk1ot8ynOarzJgYODQ1c9JHCeIbLduMYzxXnt8kUwdSNkqV65qURJwe9ebeLLF4pmaIc98VlbWxTPO9WySd3BHFZjwlotw7Vvanay+aBIuARmsqc7QUTPpmqIMp22rgjioCNx4HNXjCWJGKpkGNzimIRRhsd6bLz9RTycneAc1G+D689aAFjfIwe1MkA6jmkUEEY+lOYYzigZGSc4H4Gn44z3ojTcevNKw6gGgQ3aTyKeD2PX1pyYZQPSmMOcgUAOIwgJ60sZDOAKa7AgAA01MrzQMtFtkwx1FFw+6XAPWoSxJ3HqamxlVJ+8aBDYy6SAZ6Vblc5+Y9etVZARgt1NSxDcp396AIZX9OlSWgDEg0yVduBTYw6nKnimBftGxOR2amX8RBPpTYJEVt7feqS5uVmQAde9T1DoRWQUg56ipWOUOe1V1kVVIXrVi3KsDk8H1oYFGXjpVvS7t7WdXB71FIqmRh2BpXj2qD2q4slq6PRNNvVniWVDzgZH5V0VtN5sYHcivKtE1BrWYKT8h4P6V3lhdAorqcqelZzjY5muVmyxMUuRXR+H9RKuoJ4rmwfMjBHXFJbzNFJkGqi7qxondHsmn3auoIOK3beTcvavM9A1XKqp6mu3066DADNDRDRubqN2OKhjcMOKf1/nUEihj9KUEZ5NMIz0PSj6daTuA9snoeagurdZ4HhlGVdcYNTDFObOefrUyV0B4J4g099F1aW1fO0H5GPcYH+NV7ebBHNepfEbQ/7T00XFuoNzB09xxmvHUmIOD+RrRPnjfqJnRW8wxg9far8EowK5u3n6c1q282QKholo3oJOmDWjBJ61g28mDwa0beTPOSaRDNu3fJx2q/E1Y8MmQO1X4pB0zigRoo2B161IrcVSRyTVhG4qloIsAg+1OFRqcVIOnpVIEOA5qccCokqQD3NA0OHzZp4GDSLgCgHnFItDyfypBzTcfN06U4cDpTRaY7PzY9qOmOtN6cnFOwccUMtGfp48u+u06ZfcPyrSBBOOayfMEet7Cf8AWJn8q1x7VbZQ9e/pS0g9utHbNBQ4dyBR0Pak3cZNRySN/AuSe1MpFgleckiq7TqMhSCfQVGIHl/1r/gDVmGFIx8qjPSjRFIhkEswwgKj3piWOeJHJ9s1fHHQUADHYGldl6EcEKRLhVAqx0HPWowwNKOvNBQ5elKCB70hIFJ05HSqRSYrLk8cGmbmU4apA3vTZMMuD1NMpAvTPWng9ARzUKkp7iplYNzxTKQ8H26U0nLcd6F45pCdwB6YplAchs5GO9Y2qS6jDIxtVVkPAz1raHGe/PNRyAkgDpTTsO1xlj5gtoxOR5uOcetWelMAxwOvrS5yOKZSHryc9D0o3Z6c+1R7ivJpVIJBHWgpEg+U0/cO1REEse1CnDc0MpEhwSeaaeD7UjONvagncD3NMoOmeKT9c9aUggZ/SkYcEc0XGJ3pp68daaFwxOcnFK2O3WgYdOKaDS7vzppznjigoR+Rz+lMbp3pze3Wo2Ye9DAgmfGQDiqE0p2nirN05I4x9ay7hyM9ai4mVrqTOfSsm7lxkd6tXcuFPOax7iQDn+dIybKd3LtVjXN6hcDJOeK072bqCa5bUrnGcdqZjJlDUrnDMQa5jULrLEd6ualcnDAGuduJSzcmqOaTHF979a9s+AHhvzrmXV7hf3UfyR5H8frXkfhfSZta1i2s4FJMrhAcdMmvsLwzpEWh6NbWUAH7tQGPq2OTWFV3tBdTqwsLe+zaiTapqQ45qMHsfSnEnp61vBWR0sXp7UooXg5oyc4NWAuaUnnmmE56UZx1pgSHgAimucc55pC9NIDZ5pMZHI5HTk0qtx05proQ2aRiTjsaa2B6ivgg9eKrs2DjvT3k59RVO5lA5HFJuwmyK7nCrnOAK818beINgdEbHatrxVrS2sDgNzXjer373lw5ZiRmiK+0zjrVL+6indztPKXdutNiXOWbhRUYXc2OazNf1NbeEwxn5u5Fc9STm7Iy0irsoeJdVMshhiPyDjiubPzfWmyyF3JJ60gJpv3VZGW+rAjDdKlRzjB6U0Lk4PWpBGACWOAKzGExUgAdajJVXHcUxxtGfWm4z1oAuCQKvWo1kxlu/aoMkmnMCFxRYADZJPepdwCkkjcars3A9qM7vcUAK2M5qJ2yR+lTMny+9QuMnihAKhwPenKoI5pjDAAHWlDcgfjQA9488iom44q15gKAfnUPBY5/GgCIknjtQOT9al2bug4qMjLYGKBD40z9KXBDcVJF1CmpJ1UYwecUDIxkkc1Mj8YqE9RUioRzjNAF2CTn3xW1pAZ5AM9TXP2ykuOOc11egW7M4Jzinsio6nceH4uVB5rubWMGEZ44HFYHh6z8uMFRknvXVwxYT5uvasN2dcVZGXfJgHHpWLOM9a3tQHp0rEnXJrshsclXcroM9RxVy1A3DjiqwUj6VatsbhxSZzSNuzxj2rXtF+YVlWK/Litm2GGGelQjnma9scjpWjD0ArOt+2K0YsY4602StzNz+dNfB9jT9tMfjjimegjKvog+Sa4XxFZvJLhR1PNejTQl+g5rC1ayVlJHDAVD0L3PLdVt45LZyF+ZPlIri9Rsfs7Ag53DIr0m4s3aecgfKTXP6/p6Pag52yLWT91huefO21ypOCBVOUF92a0biBjOy+hqK7i8qHnrVkGYRsTHSkjQNk05/mGe/akiVgTimA1kGSelNB4GKlwdrE81GeOQKAGhgh6cmmM/YUp5570gXJBOaAJFJXBHAp5be1MbgDHNCgAZ7UAOVRkimY5FOXLtwaftXaSTzQArHauMdqQOfyoB3gseopUXKk44oAk3HaAe1Sr8yAgc1CjDBBp8XzSKIyeKAHyx8qDjnvVhI1WMlevpVS5diwx1HekVyvLHrStoArqpJGKj+4pxzSmYKcYp0jqQD7U9QIkwCfSpBnscCnW8YkiY55HSnJEVGetFwGBQW/nVvyw1vlR8w6io43RQcjBPSiF8OQT8pHekBXIKc+9dB4e1TYywSN8p4Ht0rCvCB93oTUMLMrAgcirWqsyJRuj1qxuQCBnr0q5J83K81xHh/U/NAhlPPQV1lpcgqFfv3rJpxZz3cXY09PuzBJ1xg16BoepCQKpNeZv8uSv1rV0i/aFwCcAVonzI03PZbK434FaQOPeuJ0XUg6KCeK621mEiD1pMzasWuaUcdeTTQc9D+FOGD2qWTYXvTh055NN7elAHc1Ihsg+UrgYPBrw74i6E2jawbiAH7LcEkY6
KeBjrXursGBx+NYviXSYtY0qa2l5LDKn3HTuKlS9nK/QaPn+Cb5uDWra3HHX86wtQgl0zUJbScYkjbacVPa3PI5reURtHV2824A1p20vTBxXNWk44HrWtbyj16Vk0ZtHQwyj1NXoJNx61gQSE4IrUt5AQCKRLRsxP6kZq2jHOKzIH4GccVdRsAHNMRfVumKlT2qnE2WBOKux9PU072ETJ6dqePamjpTx60hoVT6/hR3oPWkzmmix3JB/lThwKaeD7Ucnn0plJinJBp2cj6UyQ4QnPWnLycmmUjD1xzBqtlL7EfqK6FegrF8QWvnxRuDjyznn8K07JxJbRuMEMM1WjSNFsWQ3PBo75/Km5GacvWkNC4zin4x05poFBOMYHA96opMcWx6D6UobABPfjimnnFKy/KCD0oLRLRkU0HIJzims3JGeMetIpDlxyR0pcbQcc96a/CDaOtB+X/wDXVFocr7j/AEp4x1qNBkU9TjH8qZSFBwcd6UdetNJ78/jSAgYyaBokXBHNMI28pyPSnAg+tOAoLG7t33e9KnA78U106svWgv2b8aZSHhsfSmF8nHanjAHA4pnygk96pFIcSAOaE6nPTtSEcZ7elJF05/KgpDyBnBo54xSHBwQelLj1pjHMeBjr601vUYzTc4z/ADpRn+LFMY/gj0NLnFRgH1yKU9MigslzkdqY+enY02NzyDTz1pFEeMLikxzz+FPwQvA4pjdePzoGhMnPGPxpCfzpG+tNJx35pjFqCVvl4HIqRzge1VJ2wp5qWMqXL4zxWRcvye4q9dvwT1zWPdSYUipM5MpXcgBIGKxbuXBPIq7eOOeRWHqMu3PNBlJmZqU4HcVympT9cmtbVJs59q5TVZyRwcDFUjmmzI1C5y5FZ2SzU+Z9zk1r+EdGk1vW7S0jUnzXAJHYZok0tWZRi5ysj2r9n/wssNm+s3SfvGPlxgj8c17gntjArN0LTY9L0y2soQAsCBR25rVC8HuaxormfO+p6llFcq6DsAknPWpljGOmaiUdeKkDsD7V1aIQ50xUTbSOTzTgxHPamyMCfalcpIjJIpG6Gms2OvSkzlaAEZj0H60iyEHgcU0n65qMnAJoAshgR7e9QyBsZWofMPUHFMlnKjg0XAY85A+YYNc7r+qraROSwDH36VZ1jUUtoi7sM4ryHxbrrXEzrGxI+tKK5mc1WpyqyMzxPrD3ty6hjtrAVdxyevagZZtx61Df3aWkRPG4jgZqKtS/uxOZK2rK+qXy2sJAPz1xV5M80rMxJzVm8uzcysXPFQOV2YHNQvdRm3zO5VCg1IF2KM0qgZOaJXLduB2qQHHJQY60wqwODnmrEEiqvIyccCmGQGQFh+FICMQF1JPAAqILjIq1JMXUj+ECqqkZoAawwwwenWrSKrQ5PWqrDc/FPyRwDQBFNjJAzQp4/lT5F+QnGKgB6ZPFAhzsxB7Ck7e9KDzx0ppILGgYpXjNNXIIJp+/gcYAoP8Aq/rQAqMpb5ulDnL5HSogfX0pQfloEPDgjBNIODnHNIAfWnY2qO1AyRG5zSlmZ6ijNSrjGT2oAkUZYDvV7CsoRce9UlG5cirlqh3jvQCNPTbT5xx2ru9AtI44csOSa5bTVUBWPfgV23h5PNaPOQM81LV0bw8js9JGIkAXAxitPawzkkUWUYSNAAeB6VbOGB6UoRN2Yt5k5FZEw59xW7ejINY1wPmOK6kcdQrD8OKs2681CAOg61at8g81Mkc8jYsQQBj862LbtisqzGMZ59K1rbrnNQjnma9vjj+VaMXQGs625FaEPIpkR3KANJtBHFKef5UqnGMjBpnfEix2z17mqWq2/wC4LDritI8kelMuMFCvWpZqjgAikyAjOOK5TXYxtaMfezzXpV7poUvJGPvda4TXbUpc5bHrms5JN3QbHlWqp5dwxAztODWdqDieEFBjHBNdl4lskEbMh5fnP+TXI7QIXVuaSd1cl6GakYCEnHtQioZBin+WxYgdBRDGFclulWSRTL+AqsAc89BzV59ufm6daicKm4/wt0pAVHVSARSNjAFPzjgdKRv85pgNPfIzS5Gwr3pGYY44NRMSc4oAmU7ScdaGzkkjApi545yalYfhQAsYABOeTR5jpwBxTS+0Db070pJIGKAIz82cGrFg5iYueg4pgVRwAOamQbFZSOKGA8lHO7bxVeduwHB7U5JgpIAyDSSDADdc9aAGqgK8c57UbGJ29KIsMSCfpVxEAhOetDYFYqYgCO9OMzBeD14ps+7hR0xmiNxtwR0oEI5c84qXeXRc8Ed6SRwUAXqKYrBYyD160APWMtLhhwankiCDBUDFRxbn5Y9qSWV/utzRrcBscpikDL97PFdjoOqLcxLHIf3g6ZrkDFiMN2NMgmaGYMpwQarSSsZzhc9Vt7jI2Meam3FGznPeuZ0XU1uogrnEg71vQy8YbrWabi9TGLcXZnU6HqZicBmrv9F1QMoDHj0rxxZDE+5a6fQtV8sgFua0tc0auexW0ocA9qtDAxmuQ0rUt+05zXTWs3mqCP1rNozaLgI6Cjg9KbkgmkDcZotYkkLYB5+oqM4NGOOmcGgc9TzUSV0CPNPi14ZN7bjU7Nf38Qw6juO5OT/SvHYJirYPrX1VcRiSNlbJBGDzXz78SvDL6JqrT2y/6NMdwx/D7dTVUJ/8u5fIpFCxucDngelblrKGAOQK4y1lIIFbtlcdMVco2E0dTbygYB61p28nQg8GudtZemK1LaXBx296zsZs6KCT5eKuxSehBrDglwQAeDWlavTIZswN8uavxNxyayoJM4z0q9C/FSybF9DnHen5AqBGAANOX5jzTRaJAcn6U8HHpTBgL2oA3ds1QDsknjpTicAY60DAHHWkHOeKZSI7gEJuyfpVlB61Dd8WxxUq5IA5o6FpkOoLutmA6ngAVDonyWIhJy8R2NzV7GF68U1ESMsVADPyfemtrFoePwxTs01cAU7PH1plIcOcUYI9KQAAfSl+8M0FoN2DzwD3qRGB5J/+vSAcYodV57GnuUgdw30HvTCik+mOaYQV78deaYLhfMwM5+lNIpFkg4ADHH0pN5B5Gfxpm52XgY/GnCPco3sTz0oLQhnA4YEfQU8SE8qOPU0mwBcjinOyLjuTTLQcs4BOcdRUi4GBUFuS5ZhwM9alaLfyx5FBSHGQbsA804kn61HgBgcVIpzxk0yh5OQAKY4DDDde3tTjwO2f501uvGCe+e1MaGoSCFPPvUg7j34piEmjPzDFMoGJLYHalb26eooc8DIpCwoLQvOOMYFAYE4xTSxB4FNOetNDHk44Ip33uM4xTONtG7rmmUkP3DpijHWos5APSnhtvPWmNDuAflGaVX7YFMDDHoMUm4n6UikTKRzmmP7AcUxTgc9aUsNo5oKGE9sUx6eevtUch9etK4+gx275qjcPgEg1PK+MjNZ1zL161LFcqXknB5xWJeSZ5wMVevHJJ9BWPdy880iGZ93KACcGuf1CXr6VqX0gGe2K53UZeDmgwkYupzYUnNclqUpJIzW5qswJIXpXMXrkuavY5psqgFmxX0L+z/
4WNvBJrN3Fh3BjiBHb1rxnwToUuv69bWUakhmG/wBlzzX2Nomnxabp1vZ26gRQoEUDjpWFV3agup1YWnZc7NCMYxwPxqXP401QMHPWlBxzW8UkrHQOzSgg/SmA8kUhGM4xVjHPj8KiOCead2pjUAGPpTCQpPP4UrPj6VE5B5yM0n5AIXz6VDI2Bk0jyCMc9PaoJJ125Xmi5LYszjAJPFZl/eJCjszAClvbpUUszfKO1eceMPEQUMkbcmkldmFWpyoqeMPEHmO6Ix56V59PKZXyxJJp9xM9xMSSTmmXEkdjCZZcb/Q0qk7e6jlSv7zIry4S0ty8hG7sK4jU9Qa5mY5qbV9RkvJic8VjvxWaXLqyJS5h6Z5yM08uFTGKIF34HFEycnaDgVDd2FhquTx2xTk5+lSQJleTSbMMQKQWHZRFB6mkQeZzSNH+7J6elMUtt47UASzoqgAnH0qqeQdtPkLFsE84pGwOM0ANIC555qPkcsMmn8E5JpzY25H4UAQs7MAO1IMHjp704c5P8qaSM8GgQ4oMcmoyvPFDOSMDGKeDgADrQMMhV/2qXBYd8VE3DZpwkx1oARwo6dab79v505hvwelNYHj0FAEqH0pZOTjPFMiBzkdakUE80AKkZJGalEfpigAlhipSHWgAUYIBrStyiRncPmNVbePLh3qxw0nygYoWo0bekRvLOmAdleueGdPCxRFlHJzivNvCysHX5cnPXFev+HoH2I756VMzqoo3VtzswBgY4pJICi8CtCHG3kdBzUN0MLn+VaQT2Ln3OfvuRyeax5xzWxfhuSCax5AcknoO9bI45kIGAc1YgHIPBqEj2zU1v196hmDNmzPQVr246YrIsz0rYt+oPT6UrHLM1bYAgelX4TwPWs+3z74rQh/P3pNExKWBzig89BQtLj0pHchOVX1qoz/Nz1q43C4zWZNw4+tOKuaXsTP8wx+lcrr1kJhIAvuK6o5ABX0rMulaRyMketKxVzyfW7EvA0RGG6iuDnt/LuHjHTOCa9j8R6XJ5vmR5wRXE32kKLjDDLNyfWs9hNHGNCqShf4TVS7i8qUrj5fatXXLWSKUgKVI4HuKzGWRocPzihdySjJHuGagZWCnP4VPuZJscgUtyv8AF2psRQ55WkYE9T0qVlwc9qTCkZHamBEFyM8ZpSgVR696erAdsYochxuFAETNzThLkHNM2ZyDSEH0AoAcf1qUZVFJFMhXdJz0qWUhhtU/hQAxT849Kndt4x0amRqGDBjjA4ogXDZbmgBJY+m3r3pu4hSpFTXBIPy4ANREfJz1oAZCoZ+nPvU8rlRjnIqugw+d1SuzHk9qAGSFiop0IHcZzQo3ocdqI2KH5gM0ATmLbIOOMVHOg6U+SUtHk8mopJC4A60gJYXIwD17UpJMm5sUioCo9RTmTeMjORTEXvka0LAfhWVcLzkVPFM8asgPympUjWWAufvUl7o9yra3L28gZTXc6LqiXcQRzhxXn8g2vxU9pctAyujYIq2lJGU4XPUVfs3SpopTE4weveuf0TVUu0EcpAcVs57H9KzTcXZmSbi7M7PQdVKgKWNegaRqe4LzXiME7ROCDxXX6Fq/K5J/zmtWr6lvU9ghuFYdf1qZSTXLaffrIF55rftp1fHzYNZszaLgo6jk4oyOMdKcMdMcUibDDyAaxvEukw61pktrKMnGVPv2rc6elRPjPHArKceqGfLeuadNpGpy20y4KN19afZTevSvZfiT4WTV7JruBf8ASYlz16gZ968RdHtZTGwKspxg10wmqkb9SjprOfAHJrXgmOBzwa5OynJUcjIrcsps4z2qGiGjpbWXcBmtSCXGMVz1tL0rUtnHBqWZ2Ogt5DitCKTvmsS3fB571fiftU2FY1oX6VaRuQRWbExO05q4j596aAsryeB+NSg44qFDgD1qUZzmqQC/Tn1qVRjqelMTjJpSw709xojueYmqWP7gqNgSDnFODbY+TTLiPLHnA+tCk55pgNP3HbgYplkg/WnDjk0wYB+tO+8QPSgtCjnNOwc4zxQBjmlGOc0yhC3ovIpG3NxwPpUhPOKT1oRaI2jGeSWp4QAY4x1peM4600gknHFMpCMoB4OKPMYDgbsU7aM880/AxxwBTLRHHmRcnIz2p8oCISoGR3pVbHGOtJMRgKO9HUsW3UrEoIxUqA9CelIvYYoyecUFIGHGSckU1CAQeaC3WgDdyeD2plIkAGCfWmHPHpTkPGAefSmHriqKQquM8daTdzxikAA7YzTSBnAGPagY7JJJJP0p3bp1pqbsk9qXk54yKCkLkkHnGOlKp4pucDBpucGmWh+MZ7Gm9vcUOcgf0pGIpoYp5GaXdzkimE4pQ3NMpC560meopG6nsKbnrz9aBkmcGjOCcUzcCOaTOOo6UikPJ65qGQkZ6Upb14qvIwA7+9SxkM54PNZV1Jg8VbuXOSBWZdMOeeakllK5b5T61iXcmPpWjdyY9axbx/U0GTZlX8nOK5vU5tqnnJrV1CfBbnk9K5u9f5Tk9KDCTMXUZNoJJ4rAcmST6960tTkDHArS8CaBJr3iC2tkQsm8F/Ze9XoldmMYucuVHsXwC8L/AGKwk1a6TbNN8kRPXb617Oi7R14qnpdlFYWUNrAAsUShQB6VfUDAyOKxpLmbm+p6duVcq6C/TkUHg5HJoDAHGKecduldNtRDVbGTjmkJ4OKdjg4qM8UMEN3dc5o3jHrSNgjFQsdpNIZI7emPaq8jYBIod/mGelVppByCc0mJsZPJuB9Kz7q5WJMsQAKbe3axKSWrz/xP4hADpG34UJX2MKlRRRJ4p8RBEZEb8jXl9/dyXU5IJZmNSaheSXUhAySarSyxadCZJiC/XFEpqCstzltze9LYkeWLT4DJMRvxwK43VdRe9lZ3J2+lR6vqUl7KSW+XsKzcsQfSskravciUub0I5D81M344IpS2T0pSuTn1qW7iJIhjkc1MZSw2AYFRBgqgYoD4bOOO9SBIF8vt1pobYcHrTppwygAdKriTe2SKAJyN3XgY6U5ACdqECqkkjZ44pysdhOeaAuSXIwMdTVULg4NSqS2Qac6lVVvWjYCI+nagneQB0prsBgDqTTUJHI60ASzAJHgHmq6ghuTjNOkJY8mmtQISQgvxRnHvR1pyJxzxQAxjzzTTnGae4J5PQU3Gef0oGSRncRipX2jjNV0+XkVLyVoAci8e9WoICxGAKrx4wM8VopeLHDtRRmk/IBsICTHd0okfzZCEqFfmYsxqe0UbyR9adrBuWIYumT83pWpZ2OWGR1qrbIpkBb1rrtLsi6K7ABe1S3Y0irm94T00SPGcbQvWvV9ItwkIBHauL8NWqxBd2BnpXoFkQIxgYxx0pqLudkLJFlYwqkiqdzyCDWg0gZBt61nXfetkiZmHfdxiseUHBxWxe8k1lyjB4qzkmU8EVNAOR0qM9Tnipohz71LOdmtZjpzWvbE59KyrTjAI5rWtx+dTc55mnb9sDrWhCeKzoK0YD2oIjuViB680qAZOaPwpSRj3qTsQyRAQRVZoQRz+FWge56elBxinsWiDy8ID1
4qnNAu7Pc1edii57dqz72bMYK/eJ7Uldl3M7UkXZ90EDivPvEVs1rdNOoO3GelejOpa3ycbvQ1i6zAsts6uoOR0qZRsO9zyrWmS8CTEbWHy1gXcASJ2HUDoK7vUdMje0cIuHUk1yWpWjCU7D0HI/OostkJnMCJZcnow5xVOdsBlPUVo+Q4uGIBHJ4qjdRkyEkfWq6kFRXJzn8KbEw3kHjNSlOeOtQFfmpgLIuG9qQEbD3IqYkPFgj5hUflbQSaAI1IOc8UYGSfwBpyjcDx9KjJxwTQA5SYzkY5pYziXLDI9KYW5GOlPMg3dOaAHygE8cVJ91KrmQA8fSpdw2CgCdIDImV5x1pCAFIJ/Go4LtoSQOQfSnSOGUkDFKzArFSXIXnFPLMFw1PgKiT5uKSQgk98UwHQMu3aepprheSfWnRqrEEdaSYFWxnNICJjg4HenxxnZu6mpHRdo9SKcnyoe1MCwsRW2DADk4pI5lAIYc9KktpM27K3IFUZMh8DuaSAmIDMfQ1JJIYkK8YIpiuCm3uKiuTujAB5o3EV8GRiQKcYip5qS2IClTTm5kx0p3ALaV4pA6Egiux0bV1uUEcpw471yjRARqVxnvVZJmim3IcGq0mrMznC56aGGMHpVi2uGgbKng1y+i6ysyiKcjPqa6ANwSpBFQm4OzMU3F6nb6LrOGAZsf5NdxpephlBYgn3rxaGZonyp4FdTour/ADAFsVo0mro03PZrWcSAE1bBNcVo+rBgBurqLa6EigA/rWdiWi7270hGVOOuaajZJ559Kf34pMixBIMjaw4ryD4qeFGik/tKxj+Rj86r68nNezMob3qpd2y3NvJDMoZHBBBrNXpy5kNM+XrdijcituymBAq9498NyaLqLvGh+zSNuUgcDJPFYVm5Ugdq6XaS5kD1OrtJQwB6VrW8o4ORzXM2kvQ/hWxby8DmsmiGjoYJd3Gea0LduetYNs5PIPNatq/TOakho24X9TxVyJ6yYn4FXYWz60CNSN/SrKngDOfeqEcgXA5NONwQOF69KaAvu4AzUe4Ek9qoG5Gc80onBxzT5iki+XGeKjD5qt5gLc5NShgadiywrd6kBqFB1J5H8qlT2OaEUPSQbsGp4xk57VVeEyHKnBp/mPEoEgOOmRVehaLR54HSl6AepqJHDDKnOafnPPWnYY4HJpGP50ZwDTe/egtCmlHtSH5uPSlHAPpTuWhepNOz8o5xTQTjjvSE8e1BaHr8o46+9G0HDc5FMBJI/lUpOMCmUh6n1oXJJ7elImCOtB57cUFiADcfWjJxj9aO3WjkDpTKHJ0wODmo2JzTlIC5brUZJJODVIYoOeKU9fSmoADzT2x1ploVT1pN2OlIpxSMOPegpDt2aQjue1MBIapDkjNBSE6Dr1pGHH68UmcZ9KQ5pjE6cZzS9D7UmR0pGboBTKRIMHvTGHWkB60p5xSKtcapAPqKcx4PBqN84PagsccGhjQjN3FVpm4zmpJTVOd8DGfpUFMqyvway7hxyT1q7cP1xWRdvwcUjJlG8kHJFc/fTehrVvHABrn72TAJP50GUmY+oSAZLVzWoTHJrW1GYkt6Vzl/LnjvTSOaTKDIZ5gFBr6I+DHhgaTpQv50H2iYZUkcgGvLvhh4ZfW9ZjaVCYYiGY44+lfTNlarFGiIAkajAX0FZVm21TXzOrDQ5Vzsso4AzjOasK3PFQgAD5f1oX5ffmt4KysbvXcnA6mgtioy5UHPSk35GMjFNvsCHF9tKzDGQeKhZgRz1FReYV4PIoTBkrnGearSSjnJ5pJ5evNZ88wU8k/WjYlsnnnwP8KytQv0hiLZGP1qpqOpLAjF2ArzrxL4kLllRqSTkc9Sryl7xN4i5Koxx7GvPr27e5lJYk1FcXMlzIeSRVG/vYrKI8gy0Snb3YnPa/vSJ7q7isoizYLkcCuL1e/kupSXJx6U3UNQeeUl2yKz3bce9ZrTVkSlzAGyRmnk4zjmmKucYp4+9UN3EMZdvzHpTM5NSuc9f/100RZOVPFIBshzgA9KUbj1pJVKgelPUEKCTxQAbgCB19adEFVmZunYVEuPM5JNOY5Y+hoAjkIJJz1pm7jHannHI7VCx+bApgTKw69qW4lJwvaolJAAwDQwLLk96QDfvfhT1HHXmmIDjNByrd6AFkGCKCBjmgtkZo3CgBF6UrH5iOopmcZI6ZpcZ/nQAbs8cYpO3FBUYFOjXLdc0ACcCnKdx9qUqFBpF696AHqpIPHFOB4wBUqLlcetEkRXaRQARke+TVu2B3be9PsYEX55O3apo08yfIGKL3HY1LC2DvGX65Brv9Fs5braiDCevtWD4Z07zWUyivTdIsxCiqg5qL3eh0wh3NLTLBIRGM9P1rqLSI8Z44FZFnbSM4bHGa6CMbI8V0JWRuhkiqueOnSqM4H4Vdmbt3qhcg446U9TOZjX3X3rKlPJrUvRWVNwDVHJMrsMsealh7cZqMgbsGpIeWqGYs1rTtWvb9qybMdM9a17ftipOeZpW+Soz1q/EOnrVC36Dpmr0PFJma3IgOaDwfSjI7daByOlCOtMjc+tM3E5HYU5wQDxzUCA59qZaC6YmIqvWqcEBbhx0rQMJZc96fGm0Ypc1kWigbYZPPSs+4sfO3HkDpWvu/eMveoZH2Z3dM9qWpSscNd6a0UsiOPvZwa43WNKkiu1Crwzdfzr1vUbcSruxlu1cpqsDMSzL9zNQ73A8p1ixa1vMuMIR/jXPX4xIcD5O1ena/HFf2JQAeapz+HNcPqmn/ZoyX5z0/WpT7ia7HLTfIcCodwwatSxsWJquYiGOR8prQgjLcjNOd8rsxyKHXHIxikQF2APfpQBHu2nnvTGG7nFWDGd20imYEbnPSgCLbhcZpjZ65qQrgkDvSeX8vOcUAIV3YNTgp5W09aRVxFmo2bJNADtvGR0pytgGmhyY8Ht6VGWz9KACRyTxUkHXnkU0xkfN2oU4NAFoEK3YcVFI53DoaiBJPXNKvr3oSAnyeOKkUBupqASFWGRUiyqDjHvQBYjKrGVyCc1A/ykng0qkcg8ClZDKQFOMUCGwnb82OBTZJAzkAdalkZQuwdaYqDDZ+9SGLbBeckdabMy7vlphHJ5xTQhHJpgSM52dSDUUg6EUA/OBipmUAj+7RsIjikZDnkHrXTaLrew+XOfl9T+NcuRuPHUU9VZRwarSWjJlFM9NjaOZd8TZHtSrM0TgrwRXEaRq8tpIAWyvcfn/jXY29zBexbo2APpmou4ehztOLOr0XWSGAJ5HFd9o2r7iuWGMeteL5eIjqPcVt6RrjW7Ksh49a2spq6KUkz3a0uhIM5rQVsr2PuK850TXY5Qu1x9M119pehsc1k1YrlNkHPSlfpVeOZelS7w2OetS1fcjlMrX9Jh1iwkt7hByDtJ7HnFeH+I9Dl0a+kiZW2AnafUZP8AhX0KeTzgGue8WaEmsWLrtXzQPlb3wf8AGohL2b8gPErd+Qc8VsWkmStZupadNpl5JFKpBBI6cHk1NZyHA9utbyXVEs6O1f14rVtpOOv0rBtXPGa1bd849ayIsbduxOM1
pwEYrGtmxz2rSjf5aBGgjc8VaiUGqUP51eiHSgRYEKY+YYNRmzjbkinqSeBzUy56CmkVcgWzVTx+dSCAKeefpUwx3pw+76U7DuRiPC89aeqjrjmlGSefwFSBeaZSFRdo96bJgg55Bp+femN1oSLIhDtH7vK57VYUEDnrTV7804EAf41RSFIycCk9c84oHQnNJ7DvTLQgzjNLgn3pQMj2pygdjnmlY0QHsKXvSE8k+lGQBziqRSBPvHmpB0zxUa4JOadyTgUFocFwCetKSc4pD2x2pCeeaZQ8MAOlMkfaBzik3EDOOtMc5PzflTHcep3DpxScD6U4HPPrUYB4zVIocGHelJyKYc5pQ2BSLQoOOD1pc8YpvakAyKChWPNKG44600kKaaWAOe1A0yQnGSaaWBGT2qNpo0++4A9zUD3cQyF+fPZeae5SZaBBX3oz2xVSOZ26Iw+tWUPtTKQpOOfelJ7+nSmE5GO9NOfYkUFIkbHPrULnAOO9PJNRSnipKI5GwDmqMxzU8z8GqFwxwT+VJhcrXT4BFY13JyQe1aFyxIOTWReMTnFSZyMy+l71zOpTZBHGa2r9+uDXN38nLHjiqRzyZiahIADzzWdYWUl/fRxICxdgBUt6xaTaBnJr1j4R+FdxGo3ScD7oIqpP2ceYzpw9pKx3PgPw/HoWjRx4HnOAzketdZEwB5GcVEBgYxilPHIrClF35md78i0pDcAe/wBKXpnI+tV1fbyKeZck7jjNdJKYrMOR2phbA5NMZuc96glmC9Dk+lJiuTNJwearST4ORmqstxtzWbd6gsYJZuaVyXNF64uMBsnGO9c9q+spBGxLcj3rD13xJHAGCvnHHWvOdZ12a7kYITtNUoX1ZzTrX0ibHiHxI9wzqrfL061yru87MWJx71Dy2XmOFrG1bVwAYrc4HrUyqX92BnZR1luW9T1aO1UxwkF/WuVvL1pSWc5NV55WkYknJqEgkeorPSJm25biuxY57U3Oc46UuzPTmlCHGf0qGwJEcgbRj0q0Ihs5IzVLaQal3kDaRmlYBHBzkjihZAsZBA570+SUbACKjZQUyOlADXfcgXHFO3ERbaYF+XJ6UqyDac9ulAEYGG54okJb7vNP3ZxxTGIxgCgBvBXnNNOD3p2Mgdu+aFXnnGKAI+/0p4cFcE0uAQahC5PHagCQHAPpTXYHAFObIT0qIj169qAAn5cY5pM44p3oaRcZHegQZz2pxyBRjBI5zTwBjOaBjSfWnRHnmmkZ6VJCpYjHSgAfJHShFOKtmDAyadBHl8HFFwsOtl2g7uvar1valyGY5HpUDYDgJg1q2q7rfAHIHJqWUiosW6Ty0HOa3LCzCSKpTc/oBWfpsLPdcA59a7jSbWO3dHPzSHtQ+xcVc3PDWmOEjeYbR2Fd5YQKHGR0rO0OzklUO/yg9BXUWtpgHJBxzxVU49zsjGyJII2RgUPBq2d2zkU+GIKAQDVgAEYYYrcqxnyIRyRxVK6Iwa1LgAA4rLuSCCM0GEzFvDycVkzHDHNat7jcaypF5pnLIh7k9qmhGcdKixg1PCOR3qGYs1bPoOBWrbg4FZVnzWtbnkVJzzNGHBAq5CeKpW56VciNBktyBG9eKkVhgD0rNhmBfrmrqOD06dKk6kyRxUYGDz0qTcM80u8dMdaZaGO4RfeoBPuHANLdqdvAPNV4gxOMYxQloXcVQTOc9alWAMTn8qd5ZDBqlXnpQ2UmV5rQKvv0waybzT0ljkVhyQf610T4PXBxWe7BpmBGBU2bLPL9X0lra9KjoehH41zniTTgbUkoxXGM46HmvVde04XOSn3xypHbr7VyOo2kz2k0UifMASOKiavZhY8hnsgQ3l8YrHvkaNMDmu1u7IW85MmQWJ4Nc5qUQVnBXOeR7U+pHQ5856c1LAjD5lycU4RsQx2mngFY1wOKokimmH0aq5bcQamnj5zUSoCQaAFlXgEUw5JxUqjrnpUbKcnBH1oAaJSYyhBo24xnPtTWwpBxxU8bBh79qAK5JHFKq/OCakP3iTxTHHAPX3oAWVznA70i9PmpoUk0/wBwaABGHPvSrgNkUirwT3p2QFx3NACSEnJHNMBOR6mg/dxTh0FAE27KAHr60qSMr/LnmoEyWx+VWoUAwz9QcUANkDFwcYJpJWKHJ61NKwJG0dO9QT5bn+dAhNytjH41ZKgqCDkVDHD8nI49aVVKn5vu0MBjArJuwDSSMrKoAOae4BUgGmqm0/MMehoASNdrDNTNtx6YpGG3bx9KZw5xg8mgCPdhsg8Vcs7+S2kBjYgDsTSw2XmEkcheoqvNCY+cECqUlsJq6O50zWIbyMJIQG9zVuaMjkEY9q85inaN8qSD7HFdPpGujiO55Hrn/wCvU8ri7xMJU7bHR2WpT2T5jLY9K7nw74vRmVJpArD1P/164IBJo98Lbh6CqksTA5UlW9RWsakamkhKTR9Eabq0VwimORWJ962YpwwBUivmrTfE19pTqHLFAeoz/jXoPhv4g2s+2O4cIT1yR7e9OVGS1jqWmnuevJIGHOOlPOCeeBXOafrMF2oaGVGUjsc1rR3Csg55HrXO49GOxheMfDseqWpkiQCZe+OvB9vevLbmxlsLhopFKsDjnvXu5dCBnvXM+KNBj1AGWFQJQPTr19qVOXL7r2IlHqeeWxOQCa1rY9wM1Sls5LWZo5lKsOnGM1atcjk/pVSRmzZtXDHHStGI54BrIgwehrRt9wxj8aRJrQNhcZq7Cx6DrWbC3A6/SrsRPFArF+LjvU64wPWqsZIPNWEYnJ7U0MlGc9qOv0pO3WnAA/4VQ0PUfWpB04NMX3+lKx20FoHbt3po64pOetOA6Uy0Lxmg80neng5GcUykI/AxSKOM5oY8UowF/pQWhSfanqABkVGDk09jxgdKCkBNIcEc1m3uqxWkojmDD0NLaalDcttUEVXK9yk0zRQ89akB5qFMZ4/WnIfmyTzSNESZ554NIenrUeT6UoPHcYplJgucnJ4o703cS2e9Ln6UxoeAAPrUYPOKeOlR00V1F98013VTzSjnio5IwTk/lQUHnDGScj2pjXDEnYjHHtUixrtHyjj2py8KeKCiq32iQjaFA96Ps8j/AH5GH+6cVa7c4oAzxTuxpIgS0jxhgW/3ualWKNT8igeuBT+c+lKuOvekzRDT17YpAcmnN1zTT1OCaaGhcZJPemnqachHrzTGHU9zQWJnuKjkJxinNx0qGQ9c8UmMrXBI44rPuHwDVyckg1mznk5pEso3Lkk88VlXLEkgVo3DZzgVQmGVJqWQzDvUwGJrlNSkwzBB3rrdSRmyFHHTisD7C9xdLGqkljxgVUFcwmr6EPhDw9Jq+qx7lJQHLelfQ2l2sdhaRwQgBEGOKwvB2hx6VYruUCVhycV0a4UYY8VnNupLyR0QioRsSkAc5+lRseppu8A+38qid89Oa1SC5IZCDj+VBkI5/SqjyANz1xVS5vljGWbkVXMQ2aE1xzwD0rPuLhV5Y4rntY8T2tnGxaZQfc151r3xADs0dqCxPfPFOMJS2MZ14xPQ9Y8QQWqMWkUEH1r
zvXPGDTMy25P1rjrrUbu/YvO7YPbNRovy5PCj1pvkp+bMPeqblqe6numJkZsGq800Nom6QjPpmqd7qsdspSPlsdc1zV/ePO2WY1lKcpvXYG1HSJe1TWHnJVDhB2FZG8seT1qA5bPJp8Le2ahu2xnvqyUQk8npSOu0E08yY4HU024+bhOfWo3GRRPirLYEfAyfaqqIQamL4BA6n1oYETNtOc81JG696gc7eWOTSKcigETO6s1NZzgjoKj3ADPJozuBoAcM4pgHNKvoafIuEPvQAwkZOMVGO+abyDR60ASM4Apmcn2puMjoacCAMUAA7c/nRyDwRjNNYYxQORQA9uRj0pHwMADPvSZGaM89aAEOO1Nzz8oxSk/pQ3txQIMnPPNOJ47VGF9c1IF55oGIoJ+mKsW6ktjdgCogOP6U5CVBxkZ60AXJ5iAFXpjFRwuwORSRoXYKe9TCHEu1SaFZAWbdMuNxrobSIyQhQAqDqayhayQqhKk7j1rrNFst8aCUn1wKiUtLmkUT6LpeZd+PkBzn1rudH06LzBLMuSPugVR0m2UNl+ABhR0rq9FtA0gdiSOwpwTerN4RNnS0IA+Qity0BJwQMVBbxDACr+VacUYC55FdEUrHSkAOMjrSngc0j464qGWQ568VdhNjJ3ABA7VnTAMDirEs2Tjiq0n3TipehhJ3Me+GDWXKBnj8q1b5c9azZF9elHQ5ZlVuTUsI5A71GRg1ND1zUsyZqWuep6VqQZ4xWZa5yDitSEcUjmmXosEVdiHHSqUI9eKvQHHrQzJbnKQzfNnPNalvcAKK5mGbDda0beb1NS0bJm+smeTUqOCetZ9tJ5i9eavQAcUI1THyDIBxxUcijgrxU7EHioyMEccUkWNZcJkg4qMOQuTwKlc4HHSoyA69xjpTRSY4McDALZ/Sq8vyOdwxn2qZFKHt9TUd2pLAjnjmi+pRnyyABsjOeKyrvy3kJKDOMdK2xACOQTn1rK1O2kCkxDlecYptJ6DTOU8Q6FBPF5oQbh8wIA/wrzfXNLZXO2MkE46dK9oglSaIpKvzdGBrmfEumbnJiT5D3x0/SueScWVpLU8YMX2aYo4z6ioZ4tsm6MZXuK6nVtHbezbfmHU461XsLSOWGWOQgNjg1TkrXIsczOmEJx1rPIIPrWzdIyM0TjofSsx1AyO9UiWVHbLU0H34qSSPaoIqIjn60wCXGRg0gO0Zyc9qac4GalUAx0AM39j3qQdMHpUW0kmnZO3BFACk4z6UisMYNNLZHFCYJ5oAVyQeOlGSV9/SmMcU4dM9qAFDcc/yp8ZUnBOKj47807ZxmgCeLAcYxg1ZkX5cY+lUYwd/Bq7vGBk/WhgRlsqcDB96jL88809sEnb0NMUgEg0CHQT7flbke9XEYSDGAR61msOeDzViEkDg5FDQD3IVzkflULMScKOlLI5yT3qSNNybgMUANZiV+br2qKFmD5XtU5cFsECmMNjEDnNCAvWd6YXbKgA9QRVe9mE8nGOew4qMZIwaSKLL5J6UrK9wEeDaDkfjUCkp3xV1ssAjVG8Chcg4pqVgLum6vNauBuJX0/ya6q01C3vUHzBZPQ4rgGyD0x71NBPJEwKMePQ05RUjOUL7HezQAqdygg+1Zd3YMh8y3Zgw9KraZrxUKk/I9f8AJroIJYbld0TAg+/NKNSdLczs0Z2meJtT0iRR5sm1T0YnHb39q7/w/wDE5SES9ODxk/l7/WuHv9PjmVuMH6VgXemywElckdutdMa1Op8SGvI+mdI8YaffxqI7iPkDjIz2/wAa6OC9ilQFHXBr4/tb+6snBilkVh/tH/H2rqdG+IOoWRRZHLqpGc5Pp7+1TLDRl8DKu1ufQmtabDexlkA8wcggf/WrlGtHt5Crqc544rn9F+KcEm1bsBSfp7e9djZa/pWsoPKmj3kdMisnTnDdaGckuhFbofl9q1IAABjrTI7Zc/uyCD+NXIYDuGRWZFixboO47VdiTAHrUUSkdRVuIcc9adhEiD0FTKKYg9KkwfxNNILDhzx/KnqppFXjng1IoplJCgYXPao+rewp0pGCBTV9qaRQoPJ7Uv0pp5OO9OUbjgdKZSHAY57mlYDp+NAzu9qRzljikaIbj5jTm5IHekGfr9KdgA5NMpACM+2cUp659aaOM8UoGSM0FIr3VtDO2ZIlY+6imw2cUbblUA+1WT97mhjjHB/CjUtAp47CpF74FRL61MDgD0plIj53U5sY4600EZzR3ODTKQ3nOe9L2pO1HYU0NEmeARUefU0uTtOeaZ2pooePyo4+tNBO6jp0oKQq9CBQOmfwoB60LwM0FIUnB96b3yKRh60oyBz1pooXdnjilXpgHp1zUbeoGPwp+TtoLQrHAphH508/d+tNP6Uihh+vSl3DA56UEDJIGB7UmMEkUykNcnmq8pwPSp36Hio5Bn1pMdihNyDWVdZJPPFatyCD1rMuRnNSJmZM2MgVRlJdsCrdwPmOKqJxKAoyP5U0rmRdi0ky25cJkn0FWfDGhJHcvNMv3emRWtpOViBYYFWpLyKJf4VB/CpbaVipcq1NRZFAA7jtio5psqQRg5rmb3xLY2gYy3EYI7bq5TWfiXptsGEcquw7Ag1cKb6IwnXit2ekvdKoJ3Csu81i3gRi8yg9eteH6x8U7iYsLVNo9/8A9dcXqXivU74nfOwU9lJH9a29j/M7HPLEt/Cj3XXPH1jaBv3ylvQEV5xrnxFubosloSAfT/8AXXnqie6OZGdvcmtK1sQOvJ9aTlTpke/U3ZJcXl9qDlriVyD2yasW9uF5Pf1pwRYl+bAHqaoX2qJECIyCaxnXlPRFqEYbmnLPDbplyM9hWHqGsNIdqHAFZdzeSTMSScVUYlsdazStqyZTciZpHdySc/WmyA7eeKWGMueuKmng2oDnPvQ5E2K8ZH8VIfvfL1pzLgcHNMLc4AqQFDYPXmrKAADPWq6DHPFSSOWjGB0oAkkZf4SKrscnA4pyKWOSaa5AyB16UAQuOcHpUiYIoCZFNI280AOcD0phO3vSs3HFRM+ScUASq3NSjDjLHiqwYcc1IXG3AoAZIQGOOlMGc0H/ADmkzTAVs4wf0pq5PFDHI5py8dOtIAJwwNB/PNI1KPl680AL05P40pXd0/Kmsx3elSBhjFAEWMHFKQT2qTbkA05kYLntQBCoIP8AQ1Io3HnNJtOelOQMOgoAk24Az1NWYLYuOgquvLD1q9ayMDgcZ9aTuCJIbZsMRwR3qzpECy3PzsAB1NW32xWeFALt1NXNJsFWMNI3WpvoWlqaLKkrDy1DgcDAro9Js2hiDclz2qPw/p8eN2MovrXW6Zai4m+RMKvc0oroaxRJplk3yNLjJ7Yrr7G38oqVXg1Rs7QqybscV0dtGABxXQkdEFYmhA3DOPwq6gDKcVUA/wDrVPCGySa1SNESFRjmq0sG5uDirJbPB61HI204GKNRNGfLbhSSeapy9CM1oSk4P51n3Jz2qXqYT8jLvCMc1nP0PNXbw8ms5zyc0WOWTISKlh61F1NTQfexUszZq2nStSE8d6zLXtWnAeKRzT
L0Q6Yq5C2R6VThPpVyPihoyW55bHOwbJrRtLn5uuazZYiDxzUSzGNxTTUth7HZWM2CPSt6BhtB71x+lXIYgZznvXU2swYDGDxUS3Noss9j78im9QeeaXORUO47gMjaaNzRMB1Pv605yAB7d6CoXJHWoXyQKW5SYM25sg9DSK+9SD16VGAeamgQKwx0qikx8I3AqR0qG6jUZy3OKsswV/eqtyp3Bj0qUrsq5g3dkyzb4lIJ61TvFwhWUdeK6kKCMlQao6rZq6Bgv4AU3roxo821qBYslV3Z9q5e7tRLETbLtkHJr1S70QXMLlfv44rhRplxZ3kizjg+1YuKV7D9TzbUIpC754esOfcX54Ndzq1gDfSLztzXO6lYmOT7vB71UWQ0ZGCwx6VEYW6nrV/ywnQZpsu1owR24piKHl84IwPek2E8CrD9vSm8HgUwIDkdRUTkY4zVp05x2qN4+elFwIUwc80jcdKcw2n3pp5oAaWzjNSA4AFM2HPH508ggUAAIJIP6VYEgEe0jIFV14HvS7sdaLASAgH0FSMDjFRRkE1MzYHI5FADBwT14pjEk09nBB9ahY4P1oAcmS49KsfdBzTIV+bP6VKWx8uKAExnnv6U4EgEUbTyR1qWFQcbuvegQ1oS3OPm7mmBypG8fXitQNGIyGIBNZ90UBIHakmBJI8ZAK4FVpJMPkcVCpI9805TzzzTsBZgnJbpk0+dlYDAwarx/uznIpzuWxnrSsBVkBLEYp4X5cA8+9TCLq3FRNwfrTuBECQ3oau2l7NbsCjnr61AqBvr60MuAOearmBpM6vT9bSTak5w3rWruinTjayn0rz4MVI61cs9RlgwN5xnkGs5U09UZuHY6i80uKYFogM+341g3mmyxMSvIrUs9XSQDecN39K0PNSZeQGz3FJTnASbRxbq8bdCKu6dqU9nKrxTMpHYMRW5c2EUoJA59qx7jTHQkpzXTTr33Hoz1rwJ45SXZBeOA3ADE/8A169c0+6guEDRMDx2r5CiaW2kBUspHevSPBHjqS1kSG8clRwCT/8AXq6lBT96BlKNj6FWMHkdKmQZxgcetc9omuw30CssqlSOxrfibzMbcEe1c22jIsTBTj0A71Ii4ALf/qpUHAJp3DHGeKdykKPWgnaM0nGcZprksRjkU0hiZ3EUvT2JoA780ZxVFJBn881JGODzzTAOOTTx0yOlDKSHDABpvbihunH1pGByAPrSLQq5zQx5INKvLEmmnqcUF9BfTNL6mmpk53Up+UUxoQctyaDwSTRSMeKCkgXpink4WmL36U9jxjrTKQwE+1KT06fSk7mgGgpAevSjtQ3UdKTqQPTvTLQ8HjHao9x3Y7UoYEY9ec0wcnPpTSGSdMZo70gABpO3oaCh3NCmmjOME8etCdDQUhT1/WlHPPemsM9acuD0/GmNASB7UqnKnnimOelOU8cUFoVe4PGeBSMOMZoztIwMipMZ6UilqM7/AEpG5zmn4GMdKXblSOlBaIGXk1Gcen4VZZOvtUMi8HI/KgZSuEyM/pWTdR46VuSr06Vl3rbVJOBUNgzAukI6nisi+1qy0iJnuWG4DgU3xZr0Om27szjdjge9eF+Idfn1G6eSRjszwK2p07rmexwVq/K+WO56Jq3xUeNTHaxHjp09/euO1P4g6pdk4l2D/wDX71xc8zOSc8VGuSenNN1Ix2RyO8t2ad5rV3dMWlnkOfc1SaZ3bqSfenQWjyc4wDWnbWAGABk1nLEMqNIz4YHl9RWlbWQHJHNX4rYIMtiia5hizlhx2rB1JS2NVBR3HRRgDgDHrTbi+it165YVlXWpO5Ii4FZkheQksSaFC+rE6n8pdu9RknYjOB6VRJLA560ir+lJ6jNVe2xkI3Wnqh+tA2jgmrChVTI61LYyMbUYVK8ofAPSoQhLE4NNYlevWiw7jpyo4TpURUAZJ5puTnrzSck80CFPSng9KjcnApQfl96AJgyoDzmmBSx46UzHembjnGeKAJzgdaglfrjpTS5OaaBk80ALnt60ypcccU11AAoAYpJ4pTSjjpSgZoAYevWmk88cVK4xUY4JoEJjHel+7R1OT1p2OKBiDnqaeVGaaOMelP3Y5NAAUz1oC4PpTg49KOT2oAlgUF+elWZSrKFjHSoIojjPOKswAIdx+9SArmPa3PWniLAG6niTfLk1Y2k/NjP4UAVCQ0nAxWhZWjO24D5V71Xji3TZrdsYpLkrBCOvWlJ2HFEVnBLcTqqoWFdvpWjl3G/lF7CmQWsemWaoiBpcYPFdRosEi2ymUYZvbpU3ctUbRSWhJptkJnMaLtQd667SYo4YxFjkfrVDS7cRrhV575rodPhAIIArZRN4IfHCAd3Q9cVfhBOCKUQAkHHFTRIRkAfjWqNErEsaAsTnmraAbTkc1UhBGSe/aplIHGaqxSFZDuz/ACqGZB19KsZJPvSTKNpz6U2D2MyZtoIxWbdvkEAdav3JIYgVl3CkE1NjmmZd0etUG5/pV+6HX+lUR1ORQc0txgHPJqWJeRjqKZ0bAqeIYOahmcjRtumD1rSh+6DWdbDAHrWnCOBxQjmmXIO5q7Fgjk1Sizjp1q7ERjmmzJbnmjMGNULxQDkcA96keQbzjiqN1Mc4rCndMto09KuMMMkV2WnyblHI5rzvTZCZhg8HtXfaPzGOK2qaIFozXRietKvpnj6Ug+6KQkqeKyTNEySQHbxUW7I57VITkYPWoUHzkHp60y0SIoZcnpT1+UheophG08GlQ7mz3oLTCUHdu7EYqM/OCD1qyuDRsVeelNMaIY4yODSXIyCMDAqZ+nXBqIgsPc0y7lNotoytcvqlsJpHJUBh04rtJYsr8vasDUodkgZh8vrU8qY7nl2uWYS9Ylcetc1rtnugEkXOOtep67piXO1k+9XKapprOrqOMcY9axasG55WwIkwRxVa5BQkrx9K6PU7Aw3GwptJNZlxaEcAZrRNMmxkxqzREkU3Zz6cVq20aBSrjk1TuF2MVHQUX1EUXyCMUNICOetD4L/SmOozkd+1MBjtkZHNEY54pQnegDt3oAcRimtyB60pY96iJP4UASDAprnjim8+tGD3oAdHwasHGMnmq+CBxT9xAxzQA75Ryabs55oPIFSxkk80ANycZGTiph80ZY9aacAYxg5oDcYFAChypp6NhuKRgCB0zURJ4HSgRYZ9w5PNRSqcZ60Rjdk56U53ZVxQBX3HGBzQpwcHrUgQjkjg05EDHk0DGgFjipFToTTVJV808SHccdu1AhzOMHjioiBjk1NHhjgiiSMdDSAjQ8HGOlMKNnPehuuVPWnqRjDdaYERDHtzSFSvWp2I2cdaikcEc0XGMWQg5U81etNSkhIGfzqgq5HWmHiqvclq51tnqkcgUOcGtHcsi8YYe1cIjsv3c+1Xra/lhIwxFQ6fYlxOmmto3HIBqlJYFSTF1os9ZjYATfnWpG8M4zGwqoVJU2TYt+GfEN3pE6q5Yx59a9p8MeLbe8iTDj35rwl48HDCrFhdS2MgaGQrz0rqbhWWu5DVj6ktbxJ1XDe9W9/TuOleJ+G/GLLtS
djn/wDXXo+m67FcLy4Oa55U5QZSszpOp9qcBjBzmq0NwrruBBFWUcHr3pXuWoigUhBHNSLg0bc00x8omM/jTgML7fWnAcUEZGKCrDByfxpD98+gFO6CgDHNMpICMVExznHapWqLjnFCHYUAFSeeaG/u0uOOtNHLfSmNLUUnHFI3Q05uGxUchz6UyrDgflzmnk8DAqLoBT1PA96CkhR6Gm55pNw9T70dScGmMQsNxPUU4HnHamLyR+tPHQcUFJB1HXimDrxTm+7jPApg4YetCKHZ+bpRnnijPcnAoI5wKChPrTo/p+NM6fWnqO1MYhb0pyE8/Wmjng05OM+1BSFYdTTlA9eKXaOx4oApFIRhzkdKVeBkdKf1oA4I70FAenAoHAoA6UHAAo0GmNY8n2qJjkH9BT5GUHkis3UNQigQlnAx0oE5Jai3cwijLEgYrzzxj4mj0+CQs43dhVfxl40itEdIn3Pjsa8T1zU7rVbhnkZiCfWtYUftTOKtiG/dgQ+I9dn1S6dnY7ewzWAQ7nitBbYZ+bmrMVrgcACoq1uiMIU3uZkNkzY3Vft7JRjirqxxQgGRsEVDPqcUYIiArku5bGtox3LUVqqjLnApJruC3XAPNYk2oSSE8kCqU8pJ5P1pqn3E6vY1LvVGkGE4FZskrMSSSfrVcMSeaUscYAq9FsZu73J0Oe1SMVVeBzUEBIqWQ4+719aQEb8jik6Yx0pxHAFIQAOvNIAUZcccVYQY57VDEQCcjipVZduB3oAVpCFOOlV3cu1WTHlCScDFMt4Qx5IA70XArZweachBOB1qW6Vd2EqNVIXPegAlXiouegqRjximhsL70ARkkkCkxzQDycUpxn3oAY2e/wCVPUVG780gY49qAHlueKM5xmmN0FANADiM9KBkd+acMd6QnngGgBpb160gB/AUcnr3py9xQAw8HjpSg5OaU+tIAe1ADuPSjOee1PC8c9aOAfagAVQOtTRgZ56VF+lSKMDJNAF5JI1UAdBTpZRJhVAHvVBclutXbVctgDmi1tQWoyOMCTkVswJGIhhTTbSyM0qjHfmtm5WIbYolBZfapcruxSVkYi2TyXA2dTXX6NaNbsscSFnI5NR6HY+STNKMueme1dvoFtGy+YVyalts0hETStKMr75xnHIyK6azs8oMDn+VSwRoEAHUdK1LNAXGFrSMXubpIgtLVkODWtZqEUetSwwjbnGKRkLMB3raOpdrbGhGw2DipI/aoIw3lqDUynC4qki7kgUZJA5oVTv5pgyelSrV7D3FyFIpHYYI7U8gcnIqGTgHmpYnoUrk5J45rLuQMYrQnJ5OKz5sYyaTOeZl3QBOapMoUnHFX7j5m57dqqSDAxSOeRUPX3qxCO2KhYZfAqeHg1JkzSthnFaUQxj1rMgPygmtOE8ZoRzTLcRzirsPQCqUQ7irkdNmKPIZM7TiqhUuTxWqYCQKfFZkkHHJ71C0NHIqaZbHzAcd673S12xD1rCsLLDA/jXRWybAMVMpX0JvqXAMHk0kg5FIDwBSZ7VKZaYqn5h2qXHXHSoenTrS7j7iqLTHN3wTTYuCSaUZ9cmhVOeOlO5aJI+GOelPkJKkjp0qMDnmgkkY6UFpkbuc4p65HOODTCpJ6VMOV5GOKbGmDHCk54NZ9zEJ1ZW6GtDGQSelQSx5PTApIq5zZtWSVk6j1rJ8QaPL5HmwcsOvFdm1uNxOOaJY9yYIGBSn3GjxLWrKS7QEJ+8WuRZdkrRzrtf0r3a80XdK0irjPbFeeeJ9AYXxdkPPQ4/z6VktHYGjz29j53L2rMn+YZHXvXU6ppkiE4GBXPXVuUPIzVolmaqgEkimPHzxVlojmkK4HTNMCAIAMEUyRQM1YKg8g1G45pAVSpzTSMnjpUzLxTStMCPHNOBFKBQRigBQcUcYpO1HQ0ALyKduxgjr60zNKBk0AKSWbJNSL1qMCnduetAEj/d6/hULNmnl8jB61EwBoQDopCD1q2F8wgnpVFRzViNscUMC4ANhDVUm4Jx0qUykgdDUMrZNJCGAkgE5yKdznNNUcc9KlTkEUxjgxVs5qQyBl5qGnr1x0FAhpQjmjdzUu0YqMrnFACt82OOtRuuOcYxVpIyE5pGRTSApoMNyKJEwferAjweBmnOuRnFFxlVAAfanMV2j1odCB7UzBJA5pgOA461NFdSwt8rEU1SPoaY6mqTE0bVnrbqMSjPvWtb31vN3AJri+lSpIVPUimkuhDid3HnO6J81tabrdzZuOSR6V5za6hLEepNa1tq+7Ac1tGb2epDiez6T40QYEjAf5Ndhpvi21lABkBNfO8V6rjINTR37xnKsR+NPkpyGnJH1Ha6vbTr8sig9+aupdRN0dTmvluHW7lPu3DCtiw8WX0RH+kEjpS+r9mUqj6o+k1YHnr9KdkH3JrxXSfHk6lRI+6uw07xtFIAHIqXSnE0U4s7np060E5GDWFb+IraX+Ic1ej1K3kHD1G25orPYuN/KkAHXqaYs8bgYINP3DrTuilED2HYUKM5zS8etKPXNFx2EI5FQyKQTirBwaaxBoRVisd/HAxT4t2BnpUhIx2pFO3ApphYaDjnvQOjHt0pWIINIOnJp3Q7CJkY4qQdKQemacpGMUmy0hj8DH4U0DmpG+Y4zTR256U7jSDaM/rQxx3pwI654pHwetIdhg9KevTgUi9qcOvp+NNsaAc59aXbj3pNwHpSNKqnIIxSchomHSlPoe1U5L+CLJaVfzrL1DxTp9qp3yihXewnOMdzfyByc9Kgub6C3XdI6gCvMtb+JcESsttkmvNde8bX+oOQspRfTNaxoyfxaGMsSvsnuGseOtN08Nl1Jrgta+Lx3stmmfevHri5eZiZHZj7moC+BnOK0/dU/Mwc6k92d1d/ErXLhv3cmwH2rKu/F+rXQIuLknPYDFcxu3d6XzETqc1DxKWyFyd2W555bhyWJJPUmoCn96q0t8q8LVKW8Zz14rGdWUxrljsaTzRxgliDVOfUyAVjFZ0khYnJqIjNZcqBzbJZriSX7zVD35opfr0p3JHDkZFMKgmlzj6UHpSuA3AzUgXPSmDHrTlOKQDtoA461JGB1JqPPHHSlUHAoAHbDfSoict7U6Xjp+tMHSgB/TpSFiDkUKeKawA5zQBIZmK4ycUqMV6GocipA4AoAeSc+9Rk4PWkZyRmoy/egB7HAqJmNBORn1pKYC5/OkLZ9qO9GN2cUgGnnNA4FL3wKUYzigBB156Up9qTmgHuaAFz8tAFAwaXGKAFzgUh6ilHuaQ8daAALxmnYwB60A8U4DLUAIFJFPWMnipAtTQoQTxSuBCUwBgUuw5/pU5xu4p0SFnyBmgCJV2/WtGygY4K8miG182YZ/Gt6ytFiyU6ngUmykhtqhRRn71bOn6c0zglfmNWNJ0uV2V2UkHpXZ6bprRBfkAPtS9DSMOpnWeiOVVSeWrr9H0xbW2C/xU6wt2D5K8it62g+TvmrjF7G0Yle0tsEZ5atOGEKc4wakiiCAcc1YVSW6ZFbJGiQKvPWniP5snFOAz2p6rxTSKQ5fQ0pAwfWlC4PtSt1
xVoYi8HnpT93OMUzvxwKUn1psB+eKq3DkZ5qZjxxVW455pEyZVnNULjNXJeRVScfnUmEjOlGT3zVeQfLzVyQc+lVpQcHPNIwkUGHOBU8XOO1ROOaljwGGelJmLNC3JAGa0Ys4HFZ1vjbWlD0H0pHNMuQnjHvV2KqMJq7F+lNmUdzhkthgVZit1AHSrSwc4P86nEQHbJrmchEUEQVs4xir0XSo1GMYp4yBx+VG40SA8UmSBSE9+9A5+lUjRD0OTS9TUecEU4c9eKopMeuMj0p45IxUaAnmnL1pFpj2J6GgUMOh6CjtyKEVccNoGB1oP3cd6Ewe2MUuATnnJp3LQoI2gCkYdMUjfex2pc8D3p7DuIVPHFMZfUdam/iwORSHHU9qRSKckQPGKxtVsFljIKA/hXQuAScdKryIG68iplEo8t1vRVKttFcFq2l+Wx4r3bUrESKSMH/ACPeuL1fSyysSox/n3pJktHj01sQTkc1QkhIzxmu11TTmR2GP1rnbm3weasRiuuM1G3TmtGaEDvVV4xnrSApsPTpTSBjFTMpzimMKAGAetMbrzUmPemjnrQBGQeOtIcmnkY70nagBnsacO2aMc/Wlxg0AAznFO9j0pCKKAA03GTmloxQAgHSnq2etJ9etLjIzjFAD88U1uT1pCeaXvQAoPt+FSZ/CmDOcelOIyPpQAmTnnFOVsHqKaPzoIxj3oAsA56U0Zz82KjBIIwadk4yaBFpXUjjrQSB9aq7iKeHOM0WAsooPNJKhHNQq5FTK24YOealoZXlqIDODU064J54quTg8VSEOI5waaTz605GJ4NBHPFIZG3PJB5oUY4pSOc04c5p3AQHnGeakRyOQaZjHHalxj6VSkKxaiumQjDHHpVyK9GMGsjvTlYnNVzCsbizA/danCV1OQxx9axBIR3OalS5YVSqNCsbkd5IpGCePQ1oW+ryR4+Y1zK3We3NTC4X1rVVu4uU7ix8Syxn/WEYrdsvGMiYzIeK8tWbPQ1IJmB61XNF7is0e0WvjZxj94PwrUg8csMDf+teFJdSKBz+tTLfyA4JNHLBlKUke/Q+Nl/ibNXYvGKHHIOK+fU1WQdGNWo9blUEhjU+yiUqkkfQS+L4zyePxqQeKovWvA08QPg5Y1YTxCwOcmp9kUqrPdG8SxMeCBTT4jjz1NeLL4hPqfapB4g56nn60ezH7U9l/wCEjj9eKd/wkkY968a/4SD1Y4pTr/X5jR7Maqnso8Rx/jR/wkae2a8ZbX+eGPNNbxAR/EcUeyH7Y9mPiNR0PFJ/wkyDqR9K8XbxFgHDGo28RNz8xo9kHtme1jxRGCeRxUUni2Jf4q8Rk8RsecmqsviGRs8mmqQe3Z7hJ4yTPDA1Tn8a44DAV4jJrkv941XfWJXB+Ymn7KIvbSPZLrxxIucSCsO98dzkELJ+teXSajIwwSfzqs107dz+dUowRDnJnc33i+5lB/fEVz95rEsxJaVjz61gPMe5qJpfeh1EthcvcvTXe7r1qs8x9s1VeYCommrKVRspaFsyn1phmweTVJ5ielRF2JrJu4+YuPcHtVd5iT1JqHJpeeM80hCliaQHNCjmnChsQjAU085wKU00ZBIpXGN9aO49KdjvTggNIBmOeKCOhqZV60w/eoAYBSgHNOC54py0AJgdTR5nFOYflTT34oAjY5PNM705854pg5PvQA/IxxTCaOe1IRigAJNJk96OtNPBoAUnNGOODTcUvI6UwE7AdKUdaQg7qeOg70gF6dqOKKCSeo70AMoxznmnH0pO/NADD70o60EZGaVQfxoAUZ7Zp2MjmkApQOOKAEIoIp2KMZNAAOnSpFGDSDtUirwaAJY+RU7EhcUyEDb71OsYbn8qVgGQRb26Vq2toSen6UafbBsciun0uwVsA4oZaRnWWnFnG1evpXWaR4fLbWcVqaPpaggkCuusLNRjOKTZrGCKOn6UEQDb0rbt7PC/1q7BCqrwPwq9CgHQfhVRia2K9pahCMj3rRij65HHaiMDPGKmXjGRWti0AXHHfFPQelJt5BzUiD86pIaDpTwaQ9aVOfoKuwx4+gNAznnpQDnPvS9qaHcQjn2pp6c04/rSelMBjdM1Xl6HFWSOKrTcqRipZEilLiqkwOKtyfSqk2fTNIxkVH7ZNVpjVhz6iqkpOKlowkVH4bHrzT4+ue1MkxnJHFSR/hQzBmhb8gVoRHKjms63NX4SR0qTnkXoj2OfpVyI5z6VRgIBOTV2LmqZmkf/2Q=="}}
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/README.md"
new file mode 100644
index 00000000..17a5763f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/README.md"
@@ -0,0 +1,91 @@
+# MaskGAN: Better Text Generation via Filling in the ______
+
+Code for [*MaskGAN: Better Text Generation via Filling in the
+______*](https://arxiv.org/abs/1801.07736) published at ICLR 2018.
+
+## Requirements
+
+* TensorFlow >= v1.5
+
+## Instructions
+
+Warning: The open-source version of this code is still in the process of being
+tested. Pretraining may not work correctly.
+
+For training on PTB:
+
+1. Pretrain a language model (LM) on PTB and store the checkpoint in
+`/tmp/pretrain-lm/`. Instructions are still a work in progress (WIP).
+
+2. Run MaskGAN in MLE pretraining mode. If step 1 was not run, set
+`language_model_ckpt_dir` to empty.
+
+```bash
+python train_mask_gan.py \
+ --data_dir='/tmp/ptb' \
+ --batch_size=20 \
+ --sequence_length=20 \
+ --base_directory='/tmp/maskGAN' \
+ --hparams="gen_rnn_size=650,dis_rnn_size=650,gen_num_layers=2,dis_num_layers=2,gen_learning_rate=0.00074876,dis_learning_rate=5e-4,baseline_decay=0.99,dis_train_iterations=1,gen_learning_rate_decay=0.95" \
+ --mode='TRAIN' \
+ --max_steps=100000 \
+ --language_model_ckpt_dir=/tmp/pretrain-lm/ \
+ --generator_model='seq2seq_vd' \
+ --discriminator_model='rnn_zaremba' \
+ --is_present_rate=0.5 \
+ --summaries_every=10 \
+ --print_every=250 \
+ --max_num_to_print=3 \
+ --gen_training_strategy=cross_entropy \
+ --seq2seq_share_embedding
+```
+
+3. Run MaskGAN in GAN mode. If step 2 was not run, set `maskgan_ckpt` to empty.
+```bash
+python train_mask_gan.py \
+ --data_dir='/tmp/ptb' \
+ --batch_size=128 \
+ --sequence_length=20 \
+ --base_directory='/tmp/maskGAN' \
+ --mask_strategy=contiguous \
+ --maskgan_ckpt='/tmp/maskGAN' \
+ --hparams="gen_rnn_size=650,dis_rnn_size=650,gen_num_layers=2,dis_num_layers=2,gen_learning_rate=0.000038877,gen_learning_rate_decay=1.0,gen_full_learning_rate_steps=2000000,gen_vd_keep_prob=0.33971,rl_discount_rate=0.89072,dis_learning_rate=5e-4,baseline_decay=0.99,dis_train_iterations=2,dis_pretrain_learning_rate=0.005,critic_learning_rate=5.1761e-7,dis_vd_keep_prob=0.71940" \
+ --mode='TRAIN' \
+ --max_steps=100000 \
+ --generator_model='seq2seq_vd' \
+ --discriminator_model='seq2seq_vd' \
+ --is_present_rate=0.5 \
+ --summaries_every=250 \
+ --print_every=250 \
+ --max_num_to_print=3 \
+ --gen_training_strategy='reinforce' \
+ --seq2seq_share_embedding=true \
+ --baseline_method=critic \
+ --attention_option=luong
+```
+
+4. Generate samples:
+```bash
+python generate_samples.py \
+ --data_dir /tmp/ptb/ \
+ --data_set=ptb \
+ --batch_size=256 \
+ --sequence_length=20 \
+ --base_directory /tmp/imdbsample/ \
+ --hparams="gen_rnn_size=650,dis_rnn_size=650,gen_num_layers=2,gen_vd_keep_prob=0.33971" \
+ --generator_model=seq2seq_vd \
+ --discriminator_model=seq2seq_vd \
+ --is_present_rate=0.0 \
+ --maskgan_ckpt=/tmp/maskGAN \
+ --seq2seq_share_embedding=True \
+ --dis_share_embedding=True \
+ --attention_option=luong \
+ --mask_strategy=contiguous \
+ --baseline_method=critic \
+ --number_epochs=4
+```
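+
+Each generated sample is written as one line of text to `reviews.txt` under
+`--output_path` (default `/tmp`); see `generate_samples.py`. A minimal sketch
+for inspecting the first few samples, assuming the default output path:
+
+```python
+# Minimal sketch: print the first few generated samples, assuming the
+# default --output_path of /tmp used by generate_samples.py.
+with open('/tmp/reviews.txt') as f:
+  for i, line in enumerate(f):
+    print('%d: %s' % (i, line.strip()))
+    if i >= 4:
+      break
+```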
+
+## Contact for Issues
+
+* Liam Fedus, @liamb315
+* Andrew M. Dai, @a-dai
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/data/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/data/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/data/imdb_loader.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/data/imdb_loader.py"
new file mode 100644
index 00000000..8169b333
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/data/imdb_loader.py"
@@ -0,0 +1,136 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""IMDB data loader and helpers."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+# Dependency imports
+import numpy as np
+
+import tensorflow as tf
+
+FLAGS = tf.app.flags.FLAGS
+tf.app.flags.DEFINE_boolean(
+    'prefix_label', True, 'Whether to prefix reviews with the class label.')
+
+np.set_printoptions(precision=3)
+np.set_printoptions(suppress=True)
+
+EOS_INDEX = 88892
+
+
+def _read_words(filename, use_prefix=True):
+ all_words = []
+ sequence_example = tf.train.SequenceExample()
+ for r in tf.python_io.tf_record_iterator(filename):
+ sequence_example.ParseFromString(r)
+
+ if FLAGS.prefix_label and use_prefix:
+ label = sequence_example.context.feature['class'].int64_list.value[0]
+ review_words = [EOS_INDEX + 1 + label]
+ else:
+ review_words = []
+ review_words.extend([
+ f.int64_list.value[0]
+ for f in sequence_example.feature_lists.feature_list['token_id'].feature
+ ])
+ all_words.append(review_words)
+ return all_words
+
+
+def build_vocab(vocab_file):
+ word_to_id = {}
+
+ with tf.gfile.GFile(vocab_file, 'r') as f:
+ index = 0
+ for word in f:
+ word_to_id[word.strip()] = index
+ index += 1
+    word_to_id['<eos>'] = EOS_INDEX
+
+ return word_to_id
+
+
+def imdb_raw_data(data_path=None):
+  """Load IMDB raw data from data directory "data_path".
+  Reads IMDB tf record files of integer token ids and splits them into
+  train and validation sets.
+  Args:
+    data_path: string path to the directory containing the IMDB tfrecord
+      files train_lm.tfrecords and test_lm.tfrecords.
+  Returns:
+    tuple (train_data, valid_data)
+    where each of the data objects can be passed to imdb_iterator.
+ """
+
+ train_path = os.path.join(data_path, 'train_lm.tfrecords')
+ valid_path = os.path.join(data_path, 'test_lm.tfrecords')
+
+ train_data = _read_words(train_path)
+ valid_data = _read_words(valid_path)
+ return train_data, valid_data
+
+
+def imdb_iterator(raw_data, batch_size, num_steps, epoch_size_override=None):
+ """Iterate on the raw IMDB data.
+
+ This generates batch_size pointers into the raw IMDB data, and allows
+ minibatch iteration along these pointers.
+
+ Args:
+ raw_data: one of the raw data outputs from imdb_raw_data.
+ batch_size: int, the batch size.
+ num_steps: int, the number of unrolls.
+
+ Yields:
+    Tuples (inputs, targets, weights) of the batched data, each a matrix of
+    shape [batch_size, num_steps]. The targets are the inputs time-shifted to
+    the right by one. The weights are 1 where a word was present and 0 where
+    the sequence was padded with EOS_INDEX.
+
+ Raises:
+ ValueError: if batch_size or num_steps are too high.
+ """
+ del epoch_size_override
+ data_len = len(raw_data)
+ num_batches = data_len // batch_size - 1
+
+ for batch in range(num_batches):
+ x = np.zeros([batch_size, num_steps], dtype=np.int32)
+ y = np.zeros([batch_size, num_steps], dtype=np.int32)
+ w = np.zeros([batch_size, num_steps], dtype=np.float)
+
+ for i in range(batch_size):
+ data_index = batch * batch_size + i
+ example = raw_data[data_index]
+
+ if len(example) > num_steps:
+ final_x = example[:num_steps]
+ final_y = example[1:(num_steps + 1)]
+ w[i] = 1
+
+ else:
+ to_fill_in = num_steps - len(example)
+ final_x = example + [EOS_INDEX] * to_fill_in
+ final_y = final_x[1:] + [EOS_INDEX]
+ w[i] = [1] * len(example) + [0] * to_fill_in
+
+ x[i] = final_x
+ y[i] = final_y
+
+ yield (x, y, w)
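The loader above expects `train_lm.tfrecords` and `test_lm.tfrecords` to hold
`tf.train.SequenceExample` records with an int64 `class` context feature and a
`token_id` feature list, as parsed by `_read_words`. A minimal sketch of
writing one such record, purely to document the format; the token ids, label,
and output path are made-up placeholders:

```python
# Sketch of one record in the format _read_words expects. The ids, label and
# output path are placeholders, not part of the released data pipeline.
import tensorflow as tf

def make_sequence_example(token_ids, label):
  example = tf.train.SequenceExample()
  example.context.feature['class'].int64_list.value.append(label)
  token_list = example.feature_lists.feature_list['token_id']
  for token_id in token_ids:
    token_list.feature.add().int64_list.value.append(token_id)
  return example

writer = tf.python_io.TFRecordWriter('/tmp/train_lm.tfrecords')
writer.write(make_sequence_example([12, 7, 431], label=1).SerializeToString())
writer.close()
```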
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/data/ptb_loader.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/data/ptb_loader.py"
new file mode 100644
index 00000000..43105952
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/data/ptb_loader.py"
@@ -0,0 +1,123 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""PTB data loader and helpers."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import collections
+import os
+# Dependency imports
+import numpy as np
+
+import tensorflow as tf
+
+EOS_INDEX = 0
+
+
+def _read_words(filename):
+ with tf.gfile.GFile(filename, "r") as f:
+    return f.read().decode("utf-8").replace("\n", "<eos>").split()
+
+
+def build_vocab(filename):
+ data = _read_words(filename)
+
+ counter = collections.Counter(data)
+ count_pairs = sorted(counter.items(), key=lambda x: (-x[1], x[0]))
+
+ words, _ = list(zip(*count_pairs))
+ word_to_id = dict(zip(words, range(len(words))))
+  print("<eos>:", word_to_id["<eos>"])
+  global EOS_INDEX
+  EOS_INDEX = word_to_id["<eos>"]
+
+ return word_to_id
+
+
+def _file_to_word_ids(filename, word_to_id):
+ data = _read_words(filename)
+ return [word_to_id[word] for word in data if word in word_to_id]
+
+
+def ptb_raw_data(data_path=None):
+ """Load PTB raw data from data directory "data_path".
+ Reads PTB text files, converts strings to integer ids,
+ and performs mini-batching of the inputs.
+ The PTB dataset comes from Tomas Mikolov's webpage:
+ http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz
+ Args:
+ data_path: string path to the directory where simple-examples.tgz has
+ been extracted.
+ Returns:
+ tuple (train_data, valid_data, test_data, vocabulary)
+ where each of the data objects can be passed to PTBIterator.
+ """
+
+ train_path = os.path.join(data_path, "ptb.train.txt")
+ valid_path = os.path.join(data_path, "ptb.valid.txt")
+ test_path = os.path.join(data_path, "ptb.test.txt")
+
+ word_to_id = build_vocab(train_path)
+ train_data = _file_to_word_ids(train_path, word_to_id)
+ valid_data = _file_to_word_ids(valid_path, word_to_id)
+ test_data = _file_to_word_ids(test_path, word_to_id)
+ vocabulary = len(word_to_id)
+ return train_data, valid_data, test_data, vocabulary
+
+
+def ptb_iterator(raw_data, batch_size, num_steps, epoch_size_override=None):
+ """Iterate on the raw PTB data.
+
+ This generates batch_size pointers into the raw PTB data, and allows
+ minibatch iteration along these pointers.
+
+ Args:
+ raw_data: one of the raw data outputs from ptb_raw_data.
+ batch_size: int, the batch size.
+ num_steps: int, the number of unrolls.
+
+ Yields:
+    Tuples (inputs, targets, weights) of the batched data, each a matrix of
+    shape [batch_size, num_steps]. The targets are the inputs time-shifted to
+    the right by one and the weights are all ones.
+
+ Raises:
+ ValueError: if batch_size or num_steps are too high.
+ """
+ raw_data = np.array(raw_data, dtype=np.int32)
+
+ data_len = len(raw_data)
+ batch_len = data_len // batch_size
+ data = np.full([batch_size, batch_len], EOS_INDEX, dtype=np.int32)
+ for i in range(batch_size):
+ data[i] = raw_data[batch_len * i:batch_len * (i + 1)]
+
+ if epoch_size_override:
+ epoch_size = epoch_size_override
+ else:
+ epoch_size = (batch_len - 1) // num_steps
+
+ if epoch_size == 0:
+ raise ValueError("epoch_size == 0, decrease batch_size or num_steps")
+
+ # print("Number of batches per epoch: %d" % epoch_size)
+ for i in range(epoch_size):
+ x = data[:, i * num_steps:(i + 1) * num_steps]
+ y = data[:, i * num_steps + 1:(i + 1) * num_steps + 1]
+ w = np.ones_like(x)
+ yield (x, y, w)
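A minimal usage sketch for the PTB loader above, assuming the standard PTB
text files (`ptb.train.txt`, `ptb.valid.txt`, `ptb.test.txt`) sit in `/tmp/ptb`
as in the README, and that it is run from the `maskgan/` directory so the
`data` package import mirrors `generate_samples.py`:

```python
# Sketch: load PTB with the loader above and take one batch.
from data import ptb_loader

train_data, valid_data, test_data, vocab_size = ptb_loader.ptb_raw_data('/tmp/ptb')
print('vocabulary size: %d' % vocab_size)

# ptb_iterator yields (inputs, targets, weights), all [batch_size, num_steps];
# targets are the inputs shifted right by one and weights are all ones.
for x, y, w in ptb_loader.ptb_iterator(valid_data, batch_size=20, num_steps=20):
  print(x.shape, y.shape, w.shape)
  break
```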
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/generate_samples.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/generate_samples.py"
new file mode 100644
index 00000000..d4215ebc
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/generate_samples.py"
@@ -0,0 +1,281 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Generate samples from the MaskGAN.
+
+Launch command:
+ python generate_samples.py
+ --data_dir=/tmp/data/imdb --data_set=imdb
+ --batch_size=256 --sequence_length=20 --base_directory=/tmp/imdb
+ --hparams="gen_rnn_size=650,dis_rnn_size=650,gen_num_layers=2,
+ gen_vd_keep_prob=1.0" --generator_model=seq2seq_vd
+ --discriminator_model=seq2seq_vd --is_present_rate=0.5
+ --maskgan_ckpt=/tmp/model.ckpt-45494
+ --seq2seq_share_embedding=True --dis_share_embedding=True
+ --attention_option=luong --mask_strategy=contiguous --baseline_method=critic
+ --number_epochs=4
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from functools import partial
+import os
+# Dependency imports
+
+import numpy as np
+from six.moves import xrange
+import tensorflow as tf
+
+import train_mask_gan
+from data import imdb_loader
+from data import ptb_loader
+
+# Data.
+from model_utils import helper
+from model_utils import model_utils
+
+SAMPLE_TRAIN = 'TRAIN'
+SAMPLE_VALIDATION = 'VALIDATION'
+
+## Sample Generation.
+## Binary and setup FLAGS.
+tf.app.flags.DEFINE_enum('sample_mode', 'TRAIN',
+ [SAMPLE_TRAIN, SAMPLE_VALIDATION],
+ 'Dataset to sample from.')
+tf.app.flags.DEFINE_string('output_path', '/tmp', 'Model output directory.')
+tf.app.flags.DEFINE_boolean(
+ 'output_masked_logs', False,
+ 'Whether to display for human evaluation (show masking).')
+tf.app.flags.DEFINE_integer('number_epochs', 1,
+ 'The number of epochs to produce.')
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def get_iterator(data):
+ """Return the data iterator."""
+ if FLAGS.data_set == 'ptb':
+ iterator = ptb_loader.ptb_iterator(data, FLAGS.batch_size,
+ FLAGS.sequence_length,
+ FLAGS.epoch_size_override)
+ elif FLAGS.data_set == 'imdb':
+ iterator = imdb_loader.imdb_iterator(data, FLAGS.batch_size,
+ FLAGS.sequence_length)
+ return iterator
+
+
+def convert_to_human_readable(id_to_word, arr, p, max_num_to_print):
+  """Convert a np.array of indices into words using the id_to_word dictionary.
+  Tokens at positions where p == 0 (i.e. imputed by the generator) are
+  prefixed with '*'. Returns at most max_num_to_print sequences.
+  """
+
+ assert arr.ndim == 2
+
+ samples = []
+ for sequence_id in xrange(min(len(arr), max_num_to_print)):
+ sample = []
+ for i, index in enumerate(arr[sequence_id, :]):
+ if p[sequence_id, i] == 1:
+ sample.append(str(id_to_word[index]))
+ else:
+ sample.append('*' + str(id_to_word[index]))
+ buffer_str = ' '.join(sample)
+ samples.append(buffer_str)
+ return samples
+
+
+def write_unmasked_log(log, id_to_word, sequence_eval):
+ """Helper function for logging evaluated sequences without mask."""
+ indices_arr = np.asarray(sequence_eval)
+ samples = helper.convert_to_human_readable(id_to_word, indices_arr,
+ FLAGS.batch_size)
+ for sample in samples:
+ log.write(sample + '\n')
+ log.flush()
+ return samples
+
+
+def write_masked_log(log, id_to_word, sequence_eval, present_eval):
+ indices_arr = np.asarray(sequence_eval)
+ samples = convert_to_human_readable(id_to_word, indices_arr, present_eval,
+ FLAGS.batch_size)
+ for sample in samples:
+ log.write(sample + '\n')
+ log.flush()
+ return samples
+
+
+def generate_logs(sess, model, log, id_to_word, feed):
+  """Impute sequences using the model for a particular feed and write them
+  to the log.
+  """
+ # Impute Sequences.
+ [p, inputs_eval, sequence_eval] = sess.run(
+ [model.present, model.inputs, model.fake_sequence], feed_dict=feed)
+
+ # Add the 0th time-step for coherence.
+ first_token = np.expand_dims(inputs_eval[:, 0], axis=1)
+ sequence_eval = np.concatenate((first_token, sequence_eval), axis=1)
+
+ # 0th token always present.
+ p = np.concatenate((np.ones((FLAGS.batch_size, 1)), p), axis=1)
+
+ if FLAGS.output_masked_logs:
+ samples = write_masked_log(log, id_to_word, sequence_eval, p)
+ else:
+ samples = write_unmasked_log(log, id_to_word, sequence_eval)
+ return samples
+
+
+def generate_samples(hparams, data, id_to_word, log_dir, output_file):
+  """Generate samples.
+
+ Args:
+ hparams: Hyperparameters for the MaskGAN.
+ data: Data to evaluate.
+ id_to_word: Dictionary of indices to words.
+ log_dir: Log directory.
+ output_file: Output file for the samples.
+ """
+ # Boolean indicating operational mode.
+ is_training = False
+
+ # Set a random seed to keep fixed mask.
+ np.random.seed(0)
+
+ with tf.Graph().as_default():
+ # Construct the model.
+ model = train_mask_gan.create_MaskGAN(hparams, is_training)
+
+ ## Retrieve the initial savers.
+ init_savers = model_utils.retrieve_init_savers(hparams)
+
+ ## Initial saver function to supervisor.
+ init_fn = partial(model_utils.init_fn, init_savers)
+
+ is_chief = FLAGS.task == 0
+
+ # Create the supervisor. It will take care of initialization, summaries,
+ # checkpoints, and recovery.
+    sv = tf.train.Supervisor(
+ logdir=log_dir,
+ is_chief=is_chief,
+ saver=model.saver,
+ global_step=model.global_step,
+ recovery_wait_secs=30,
+ summary_op=None,
+ init_fn=init_fn)
+
+ # Get an initialized, and possibly recovered session. Launch the
+ # services: Checkpointing, Summaries, step counting.
+ #
+ # When multiple replicas of this program are running the services are
+ # only launched by the 'chief' replica.
+ with sv.managed_session(
+ FLAGS.master, start_standard_services=False) as sess:
+
+ # Generator statefulness over the epoch.
+ [gen_initial_state_eval, fake_gen_initial_state_eval] = sess.run(
+ [model.eval_initial_state, model.fake_gen_initial_state])
+
+ for n in xrange(FLAGS.number_epochs):
+ print('Epoch number: %d' % n)
+ # print('Percent done: %.2f' % float(n) / float(FLAGS.number_epochs))
+ iterator = get_iterator(data)
+ for x, y, _ in iterator:
+ if FLAGS.eval_language_model:
+ is_present_rate = 0.
+ else:
+ is_present_rate = FLAGS.is_present_rate
+ tf.logging.info(
+ 'Evaluating on is_present_rate=%.3f.' % is_present_rate)
+
+ model_utils.assign_percent_real(sess, model.percent_real_update,
+ model.new_rate, is_present_rate)
+
+ # Randomly mask out tokens.
+ p = model_utils.generate_mask()
+
+ eval_feed = {model.inputs: x, model.targets: y, model.present: p}
+
+ if FLAGS.data_set == 'ptb':
+ # Statefulness for *evaluation* Generator.
+ for i, (c, h) in enumerate(model.eval_initial_state):
+ eval_feed[c] = gen_initial_state_eval[i].c
+ eval_feed[h] = gen_initial_state_eval[i].h
+
+ # Statefulness for the Generator.
+ for i, (c, h) in enumerate(model.fake_gen_initial_state):
+ eval_feed[c] = fake_gen_initial_state_eval[i].c
+ eval_feed[h] = fake_gen_initial_state_eval[i].h
+
+ [gen_initial_state_eval, fake_gen_initial_state_eval, _] = sess.run(
+ [
+ model.eval_final_state, model.fake_gen_final_state,
+ model.global_step
+ ],
+ feed_dict=eval_feed)
+
+ generate_logs(sess, model, output_file, id_to_word, eval_feed)
+ output_file.close()
+ print('Closing output_file.')
+ return
+
+
+def main(_):
+ hparams = train_mask_gan.create_hparams()
+ log_dir = FLAGS.base_directory
+
+ tf.gfile.MakeDirs(FLAGS.output_path)
+ output_file = tf.gfile.GFile(
+ os.path.join(FLAGS.output_path, 'reviews.txt'), mode='w')
+
+ # Load data set.
+ if FLAGS.data_set == 'ptb':
+ raw_data = ptb_loader.ptb_raw_data(FLAGS.data_dir)
+ train_data, valid_data, _, _ = raw_data
+ elif FLAGS.data_set == 'imdb':
+ raw_data = imdb_loader.imdb_raw_data(FLAGS.data_dir)
+ train_data, valid_data = raw_data
+ else:
+ raise NotImplementedError
+
+ # Generating more data on train set.
+ if FLAGS.sample_mode == SAMPLE_TRAIN:
+ data_set = train_data
+ elif FLAGS.sample_mode == SAMPLE_VALIDATION:
+ data_set = valid_data
+ else:
+ raise NotImplementedError
+
+  # Dictionary and reverse dictionary.
+ if FLAGS.data_set == 'ptb':
+ word_to_id = ptb_loader.build_vocab(
+ os.path.join(FLAGS.data_dir, 'ptb.train.txt'))
+ elif FLAGS.data_set == 'imdb':
+ word_to_id = imdb_loader.build_vocab(
+ os.path.join(FLAGS.data_dir, 'vocab.txt'))
+  id_to_word = {v: k for k, v in word_to_id.items()}
+
+ FLAGS.vocab_size = len(id_to_word)
+ print('Vocab size: %d' % FLAGS.vocab_size)
+
+ generate_samples(hparams, data_set, id_to_word, log_dir, output_file)
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/losses/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/losses/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/losses/losses.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/losses/losses.py"
new file mode 100644
index 00000000..38d0e7b4
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/losses/losses.py"
@@ -0,0 +1,186 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Losses for Generator and Discriminator."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+
+def discriminator_loss(predictions, labels, missing_tokens):
+ """Discriminator loss based on predictions and labels.
+
+ Args:
+ predictions: Discriminator linear predictions Tensor of shape [batch_size,
+ sequence_length]
+ labels: Labels for predictions, Tensor of shape [batch_size,
+ sequence_length]
+ missing_tokens: Indicator for the missing tokens. Evaluate the loss only
+ on the tokens that were missing.
+
+ Returns:
+ loss: Scalar tf.float32 loss.
+
+ """
+ loss = tf.losses.sigmoid_cross_entropy(labels,
+ predictions,
+ weights=missing_tokens)
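+  # tf.Print is an identity op that logs the listed tensors as a side effect;
+  # here it reports the loss, labels and missing-token mask for the first 25
+  # calls as a debugging aid.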
+ loss = tf.Print(
+ loss, [loss, labels, missing_tokens],
+ message='loss, labels, missing_tokens',
+ summarize=25,
+ first_n=25)
+ return loss
+
+
+def cross_entropy_loss_matrix(gen_labels, gen_logits):
+ """Computes the cross entropy loss for G.
+
+ Args:
+ gen_labels: Labels for the correct token.
+ gen_logits: Generator logits.
+
+ Returns:
+ loss_matrix: Loss matrix of shape [batch_size, sequence_length].
+ """
+ cross_entropy_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
+ labels=gen_labels, logits=gen_logits)
+ return cross_entropy_loss
+
+
+def GAN_loss_matrix(dis_predictions):
+ """Computes the cross entropy loss for G.
+
+ Args:
+ dis_predictions: Discriminator predictions.
+
+ Returns:
+ loss_matrix: Loss matrix of shape [batch_size, sequence_length].
+ """
+ eps = tf.constant(1e-7, tf.float32)
+ gan_loss_matrix = -tf.log(dis_predictions + eps)
+ return gan_loss_matrix
+
+
+def generator_GAN_loss(predictions):
+ """Generator GAN loss based on Discriminator predictions."""
+ return -tf.log(tf.reduce_mean(predictions))
+
+
+def generator_blended_forward_loss(gen_logits, gen_labels, dis_predictions,
+ is_real_input):
+ """Computes the masked-loss for G. This will be a blend of cross-entropy
+ loss where the true label is known and GAN loss where the true label has been
+ masked.
+
+ Args:
+ gen_logits: Generator logits.
+ gen_labels: Labels for the correct token.
+ dis_predictions: Discriminator predictions.
+ is_real_input: Tensor indicating whether the label is present.
+
+ Returns:
+ loss: Scalar tf.float32 total loss.
+ """
+ cross_entropy_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
+ labels=gen_labels, logits=gen_logits)
+ gan_loss = -tf.log(dis_predictions)
+ loss_matrix = tf.where(is_real_input, cross_entropy_loss, gan_loss)
+ return tf.reduce_mean(loss_matrix)
+
+
+def wasserstein_generator_loss(gen_logits, gen_labels, dis_values,
+ is_real_input):
+ """Computes the masked-loss for G. This will be a blend of cross-entropy
+ loss where the true label is known and GAN loss where the true label is
+ missing.
+
+ Args:
+ gen_logits: Generator logits.
+ gen_labels: Labels for the correct token.
+ dis_values: Discriminator values Tensor of shape [batch_size,
+ sequence_length].
+ is_real_input: Tensor indicating whether the label is present.
+
+ Returns:
+ loss: Scalar tf.float32 total loss.
+ """
+ cross_entropy_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
+ labels=gen_labels, logits=gen_logits)
+ # Maximize the dis_values (minimize the negative)
+ gan_loss = -dis_values
+ loss_matrix = tf.where(is_real_input, cross_entropy_loss, gan_loss)
+ loss = tf.reduce_mean(loss_matrix)
+ return loss
+
+
+def wasserstein_discriminator_loss(real_values, fake_values):
+ """Wasserstein discriminator loss.
+
+ Args:
+ real_values: Value given by the Wasserstein Discriminator to real data.
+ fake_values: Value given by the Wasserstein Discriminator to fake data.
+
+ Returns:
+ loss: Scalar tf.float32 loss.
+
+ """
+ real_avg = tf.reduce_mean(real_values)
+ fake_avg = tf.reduce_mean(fake_values)
+
+ wasserstein_loss = real_avg - fake_avg
+ return wasserstein_loss
+
+
+def wasserstein_discriminator_loss_intrabatch(values, is_real_input):
+ """Wasserstein discriminator loss. This is an odd variant where the value
+ difference is between the real tokens and the fake tokens within a single
+ batch.
+
+ Args:
+ values: Value given by the Wasserstein Discriminator of shape [batch_size,
+ sequence_length] to an imputed batch (real and fake).
+ is_real_input: tf.bool Tensor of shape [batch_size, sequence_length]. If
+ true, it indicates that the label is known.
+
+ Returns:
+ wasserstein_loss: Scalar tf.float32 loss.
+
+ """
+ zero_tensor = tf.constant(0., dtype=tf.float32, shape=[])
+
+ present = tf.cast(is_real_input, tf.float32)
+ missing = tf.cast(1 - present, tf.float32)
+
+ # Counts for real and fake tokens.
+ real_count = tf.reduce_sum(present)
+ fake_count = tf.reduce_sum(missing)
+
+ # Averages for real and fake token values.
+  real = tf.multiply(values, present)
+  fake = tf.multiply(values, missing)
+ real_avg = tf.reduce_sum(real) / real_count
+ fake_avg = tf.reduce_sum(fake) / fake_count
+
+ # If there are no real or fake entries in the batch, we assign an average
+ # value of zero.
+ real_avg = tf.where(tf.equal(real_count, 0), zero_tensor, real_avg)
+ fake_avg = tf.where(tf.equal(fake_count, 0), zero_tensor, fake_avg)
+
+ wasserstein_loss = real_avg - fake_avg
+ return wasserstein_loss
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/model_utils/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/model_utils/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/model_utils/helper.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/model_utils/helper.py"
new file mode 100644
index 00000000..36115b48
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/model_utils/helper.py"
@@ -0,0 +1,158 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Random helper functions for converting between indices and one-hot encodings
+as well as printing/logging helpers.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+from six.moves import xrange
+import tensorflow as tf
+
+
+def variable_summaries(var, name):
+ """Attach a lot of summaries to a Tensor."""
+ mean = tf.reduce_mean(var)
+ tf.summary.scalar('mean/' + name, mean)
+ with tf.name_scope('stddev'):
+    stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
+  tf.summary.scalar('stddev/' + name, stddev)
+ tf.summary.scalar('max/' + name, tf.reduce_max(var))
+ tf.summary.scalar('min/' + name, tf.reduce_min(var))
+ tf.summary.histogram(name, var)
+
+
+def zip_seq_pred_crossent(id_to_word, sequences, predictions, cross_entropy):
+ """Zip together the sequences, predictions, cross entropy."""
+ indices = convert_to_indices(sequences)
+
+ batch_of_metrics = []
+
+ for ind_batch, pred_batch, crossent_batch in zip(indices, predictions,
+ cross_entropy):
+ metrics = []
+
+ for index, pred, crossent in zip(ind_batch, pred_batch, crossent_batch):
+ metrics.append([str(id_to_word[index]), pred, crossent])
+
+ batch_of_metrics.append(metrics)
+ return batch_of_metrics
+
+
+def print_and_log(log, id_to_word, sequence_eval, max_num_to_print=5):
+ """Helper function for printing and logging evaluated sequences."""
+ indices_eval = convert_to_indices(sequence_eval)
+ indices_arr = np.asarray(indices_eval)
+ samples = convert_to_human_readable(id_to_word, indices_arr, max_num_to_print)
+
+ for i, sample in enumerate(samples):
+ print('Sample', i, '. ', sample)
+ log.write('\nSample ' + str(i) + '. ' + sample)
+ log.write('\n')
+ print('\n')
+ log.flush()
+
+
+def convert_to_human_readable(id_to_word, arr, max_num_to_print):
+ """Convert a np.array of indices into words using id_to_word dictionary.
+ Return max_num_to_print results.
+ """
+ assert arr.ndim == 2
+
+ samples = []
+ for sequence_id in xrange(min(len(arr), max_num_to_print)):
+ buffer_str = ' '.join(
+ [str(id_to_word[index]) for index in arr[sequence_id, :]])
+ samples.append(buffer_str)
+ return samples
+
+
+def index_to_vocab_array(indices, vocab_size, sequence_length):
+ """Convert the indices into an array with vocab_size one-hot encoding."""
+
+ # Extract properties of the indices.
+ num_batches = len(indices)
+ shape = list(indices.shape)
+ shape.append(vocab_size)
+
+ # Construct the vocab_size array.
+ new_arr = np.zeros(shape)
+
+ for n in xrange(num_batches):
+ indices_batch = indices[n]
+ new_arr_batch = new_arr[n]
+
+ # We map all indices greater than the vocabulary size to an unknown
+ # character.
+ indices_batch = np.where(indices_batch < vocab_size, indices_batch,
+ vocab_size - 1)
+
+ # Convert indices to vocab_size dimensions.
+ new_arr_batch[np.arange(sequence_length), indices_batch] = 1
+ return new_arr
+
+
+def convert_to_indices(sequences):
+ """Convert a list of size [batch_size, sequence_length, vocab_size] to
+ a list of size [batch_size, sequence_length] where the vocab element is
+ denoted by the index.
+ """
+ batch_of_indices = []
+
+ for sequence in sequences:
+ indices = []
+ for embedding in sequence:
+ indices.append(np.argmax(embedding))
+ batch_of_indices.append(indices)
+ return batch_of_indices
+
+
+def convert_and_zip(id_to_word, sequences, predictions):
+ """Helper function for printing or logging. Retrieves list of sequences
+ and predictions and zips them together.
+ """
+ indices = convert_to_indices(sequences)
+
+ batch_of_indices_predictions = []
+
+ for index_batch, pred_batch in zip(indices, predictions):
+ indices_predictions = []
+
+ for index, pred in zip(index_batch, pred_batch):
+ indices_predictions.append([str(id_to_word[index]), pred])
+ batch_of_indices_predictions.append(indices_predictions)
+ return batch_of_indices_predictions
+
+
+def recursive_length(item):
+ """Recursively determine the total number of elements in nested list."""
+ if type(item) == list:
+ return sum(recursive_length(subitem) for subitem in item)
+ else:
+ return 1.
+
+
+def percent_correct(real_sequence, fake_sequences):
+ """Determine the percent of tokens correctly generated within a batch."""
+ identical = 0.
+ for fake_sequence in fake_sequences:
+ for real, fake in zip(real_sequence, fake_sequence):
+ if real == fake:
+ identical += 1.
+ return identical / recursive_length(fake_sequences)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/model_utils/model_construction.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/model_utils/model_construction.py"
new file mode 100644
index 00000000..8dfa1df3
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/model_utils/model_construction.py"
@@ -0,0 +1,234 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Model construction."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# Dependency imports
+
+import tensorflow as tf
+from models import bidirectional
+from models import bidirectional_vd
+
+from models import bidirectional_zaremba
+from models import cnn
+from models import critic_vd
+from models import feedforward
+from models import rnn
+from models import rnn_nas
+from models import rnn_vd
+from models import rnn_zaremba
+from models import seq2seq
+from models import seq2seq_nas
+from models import seq2seq_vd
+from models import seq2seq_zaremba
+
+FLAGS = tf.app.flags.FLAGS
+
+
+# TODO(adai): IMDB labels placeholder to model.
+def create_generator(hparams,
+ inputs,
+ targets,
+ present,
+ is_training,
+ is_validating,
+ reuse=None):
+ """Create the Generator model specified by the FLAGS and hparams.
+
+  Args:
+    hparams: Hyperparameters for the MaskGAN.
+    inputs: tf.int32 Tensor of the sequence input of shape [batch_size,
+      sequence_length].
+    targets: tf.int32 Tensor of the sequence target of shape [batch_size,
+      sequence_length].
+ present: tf.bool Tensor indicating the presence or absence of the token
+ of shape [batch_size, sequence_length].
+ is_training: Whether the model is training.
+ is_validating: Whether the model is being run in validation mode for
+ calculating the perplexity.
+ reuse (Optional): Whether to reuse the model.
+
+  Returns:
+    Tuple of (sequence, logits, log_probs, initial_state, final_state,
+    encoder_states) of the Generator. Sequence and logits have shape
+    [batch_size, sequence_length, vocab_size]. The log_probs have shape
+    [batch_size, sequence_length] and correspond to the log probability of
+    selecting each word.
+  """
+  # Only the 'seq2seq_vd' Generator produces encoder states; default to None
+  # so the return value below is well-defined for the other model variants.
+  encoder_states = None
+  if FLAGS.generator_model == 'rnn':
+ (sequence, logits, log_probs, initial_state, final_state) = rnn.generator(
+ hparams,
+ inputs,
+ targets,
+ present,
+ is_training=is_training,
+ is_validating=is_validating,
+ reuse=reuse)
+ elif FLAGS.generator_model == 'rnn_zaremba':
+ (sequence, logits, log_probs, initial_state,
+ final_state) = rnn_zaremba.generator(
+ hparams,
+ inputs,
+ targets,
+ present,
+ is_training=is_training,
+ is_validating=is_validating,
+ reuse=reuse)
+ elif FLAGS.generator_model == 'seq2seq':
+ (sequence, logits, log_probs, initial_state,
+ final_state) = seq2seq.generator(
+ hparams,
+ inputs,
+ targets,
+ present,
+ is_training=is_training,
+ is_validating=is_validating,
+ reuse=reuse)
+ elif FLAGS.generator_model == 'seq2seq_zaremba':
+ (sequence, logits, log_probs, initial_state,
+ final_state) = seq2seq_zaremba.generator(
+ hparams,
+ inputs,
+ targets,
+ present,
+ is_training=is_training,
+ is_validating=is_validating,
+ reuse=reuse)
+ elif FLAGS.generator_model == 'rnn_nas':
+ (sequence, logits, log_probs, initial_state,
+ final_state) = rnn_nas.generator(
+ hparams,
+ inputs,
+ targets,
+ present,
+ is_training=is_training,
+ is_validating=is_validating,
+ reuse=reuse)
+ elif FLAGS.generator_model == 'seq2seq_nas':
+ (sequence, logits, log_probs, initial_state,
+ final_state) = seq2seq_nas.generator(
+ hparams,
+ inputs,
+ targets,
+ present,
+ is_training=is_training,
+ is_validating=is_validating,
+ reuse=reuse)
+ elif FLAGS.generator_model == 'seq2seq_vd':
+ (sequence, logits, log_probs, initial_state, final_state,
+ encoder_states) = seq2seq_vd.generator(
+ hparams,
+ inputs,
+ targets,
+ present,
+ is_training=is_training,
+ is_validating=is_validating,
+ reuse=reuse)
+ else:
+ raise NotImplementedError
+ return (sequence, logits, log_probs, initial_state, final_state,
+ encoder_states)
+
+
+def create_discriminator(hparams,
+ sequence,
+ is_training,
+ reuse=None,
+ initial_state=None,
+ inputs=None,
+ present=None):
+ """Create the Discriminator model specified by the FLAGS and hparams.
+
+ Args:
+ hparams: Hyperparameters for the MaskGAN.
+    sequence: tf.int32 Tensor sequence of shape [batch_size, sequence_length].
+    is_training: Whether the model is training.
+    reuse (Optional): Whether to reuse the model.
+    initial_state (Optional): Initial state for the recurrent Discriminator
+      variants ('rnn_vd', 'bidirectional_vd').
+    inputs (Optional): tf.int32 Tensor of the sequence input, used by the
+      'seq2seq_vd' Discriminator.
+    present (Optional): tf.bool Tensor indicating which tokens are present,
+      used by the 'seq2seq_vd' Discriminator.
+
+ Returns:
+ predictions: tf.float32 Tensor of predictions of shape [batch_size,
+ sequence_length]
+ """
+ if FLAGS.discriminator_model == 'cnn':
+ predictions = cnn.discriminator(
+ hparams, sequence, is_training=is_training, reuse=reuse)
+ elif FLAGS.discriminator_model == 'fnn':
+ predictions = feedforward.discriminator(
+ hparams, sequence, is_training=is_training, reuse=reuse)
+ elif FLAGS.discriminator_model == 'rnn':
+ predictions = rnn.discriminator(
+ hparams, sequence, is_training=is_training, reuse=reuse)
+ elif FLAGS.discriminator_model == 'bidirectional':
+ predictions = bidirectional.discriminator(
+ hparams, sequence, is_training=is_training, reuse=reuse)
+ elif FLAGS.discriminator_model == 'bidirectional_zaremba':
+ predictions = bidirectional_zaremba.discriminator(
+ hparams, sequence, is_training=is_training, reuse=reuse)
+ elif FLAGS.discriminator_model == 'seq2seq_vd':
+ predictions = seq2seq_vd.discriminator(
+ hparams,
+ inputs,
+ present,
+ sequence,
+ is_training=is_training,
+ reuse=reuse)
+ elif FLAGS.discriminator_model == 'rnn_zaremba':
+ predictions = rnn_zaremba.discriminator(
+ hparams, sequence, is_training=is_training, reuse=reuse)
+ elif FLAGS.discriminator_model == 'rnn_nas':
+ predictions = rnn_nas.discriminator(
+ hparams, sequence, is_training=is_training, reuse=reuse)
+ elif FLAGS.discriminator_model == 'rnn_vd':
+ predictions = rnn_vd.discriminator(
+ hparams,
+ sequence,
+ is_training=is_training,
+ reuse=reuse,
+ initial_state=initial_state)
+ elif FLAGS.discriminator_model == 'bidirectional_vd':
+ predictions = bidirectional_vd.discriminator(
+ hparams,
+ sequence,
+ is_training=is_training,
+ reuse=reuse,
+ initial_state=initial_state)
+ else:
+ raise NotImplementedError
+ return predictions
+
+
+def create_critic(hparams, sequence, is_training, reuse=None):
+ """Create the Critic model specified by the FLAGS and hparams.
+
+ Args:
+ hparams: Hyperparameters for the MaskGAN.
+ sequence: tf.int32 Tensor sequence of shape [batch_size, sequence_length]
+ is_training: Whether the model is training.
+ reuse (Optional): Whether to reuse the model.
+
+ Returns:
+ values: tf.float32 Tensor of predictions of shape [batch_size,
+ sequence_length]
+ """
+ if FLAGS.baseline_method == 'critic':
+ if FLAGS.discriminator_model == 'seq2seq_vd':
+ values = critic_vd.critic_seq2seq_vd_derivative(
+ hparams, sequence, is_training, reuse=reuse)
+ else:
+ raise NotImplementedError
+ else:
+ raise NotImplementedError
+ return values
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/model_utils/model_losses.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/model_utils/model_losses.py"
new file mode 100644
index 00000000..c8f337dc
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/model_utils/model_losses.py"
@@ -0,0 +1,327 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Model loss construction."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# Dependency imports
+import numpy as np
+from six.moves import xrange
+import tensorflow as tf
+
+# Useful for REINFORCE baseline.
+from losses import losses
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def create_dis_loss(fake_predictions, real_predictions, targets_present):
+ """Compute Discriminator loss across real/fake."""
+
+ missing = tf.cast(targets_present, tf.int32)
+ missing = 1 - missing
+ missing = tf.cast(missing, tf.bool)
+
+ real_labels = tf.ones([FLAGS.batch_size, FLAGS.sequence_length])
+ dis_loss_real = tf.losses.sigmoid_cross_entropy(
+ real_labels, real_predictions, weights=missing)
+ dis_loss_fake = tf.losses.sigmoid_cross_entropy(
+ targets_present, fake_predictions, weights=missing)
+
+ dis_loss = (dis_loss_fake + dis_loss_real) / 2.
+ return dis_loss, dis_loss_fake, dis_loss_real
+
+
+def create_critic_loss(cumulative_rewards, estimated_values, present):
+ """Compute Critic loss in estimating the value function. This should be an
+ estimate only for the missing elements."""
+ missing = tf.cast(present, tf.int32)
+ missing = 1 - missing
+ missing = tf.cast(missing, tf.bool)
+
+ loss = tf.losses.mean_squared_error(
+ labels=cumulative_rewards, predictions=estimated_values, weights=missing)
+ return loss
+
+
+def create_masked_cross_entropy_loss(targets, present, logits):
+ """Calculate the cross entropy loss matrices for the masked tokens."""
+ cross_entropy_losses = losses.cross_entropy_loss_matrix(targets, logits)
+
+ # Zeros matrix.
+ zeros_losses = tf.zeros(
+ shape=[FLAGS.batch_size, FLAGS.sequence_length], dtype=tf.float32)
+
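+  # Keep the cross-entropy only where the token was masked; positions where
+  # the token is present contribute zero loss.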
+ missing_ce_loss = tf.where(present, zeros_losses, cross_entropy_losses)
+
+ return missing_ce_loss
+
+
+def calculate_reinforce_objective(hparams,
+ log_probs,
+ dis_predictions,
+ present,
+ estimated_values=None):
+ """Calculate the REINFORCE objectives. The REINFORCE objective should
+ only be on the tokens that were missing. Specifically, the final Generator
+ reward should be based on the Discriminator predictions on missing tokens.
+  The log probabilities should be only for missing tokens and the baseline
+ should be calculated only on the missing tokens.
+
+  For this model, the reward we optimize is the log of the *conditional*
+  probability the Discriminator assigns to the token. Specifically, for
+ a Discriminator D which outputs probability of real, given the past context,
+
+ r_t = log D(x_t|x_0,x_1,...x_{t-1})
+
+  And the policy for Generator G is the log-probability of taking action x_t
+  given the past context.
+
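+  The cumulative discounted return used as the REINFORCE target is
+    Q_t = \sum_{s=t}^{T} gamma^{s-t} r_s
+  and the advantage subtracts a baseline from this return (the critic's value
+  estimate, a within-batch average, or an exponential moving average of the
+  rewards) before clipping.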
+
+ Args:
+ hparams: MaskGAN hyperparameters.
+    log_probs: tf.float32 Tensor of log probabilities of the tokens selected by
+ the Generator. Shape [batch_size, sequence_length].
+ dis_predictions: tf.float32 Tensor of the predictions from the
+ Discriminator. Shape [batch_size, sequence_length].
+ present: tf.bool Tensor indicating which tokens are present. Shape
+ [batch_size, sequence_length].
+ estimated_values: tf.float32 Tensor of estimated state values of tokens.
+ Shape [batch_size, sequence_length]
+
+ Returns:
+ final_gen_objective: Final REINFORCE objective for the sequence.
+ rewards: tf.float32 Tensor of rewards for sequence of shape [batch_size,
+ sequence_length]
+ advantages: tf.float32 Tensor of advantages for sequence of shape
+ [batch_size, sequence_length]
+ baselines: tf.float32 Tensor of baselines for sequence of shape
+ [batch_size, sequence_length]
+ maintain_averages_op: ExponentialMovingAverage apply average op to
+ maintain the baseline.
+ """
+ # Final Generator objective.
+ final_gen_objective = 0.
+ gamma = hparams.rl_discount_rate
+ eps = 1e-7
+
+ # Generator rewards are log-probabilities.
+ eps = tf.constant(1e-7, tf.float32)
+ dis_predictions = tf.nn.sigmoid(dis_predictions)
+ rewards = tf.log(dis_predictions + eps)
+
+ # Apply only for missing elements.
+ zeros = tf.zeros_like(present, dtype=tf.float32)
+ log_probs = tf.where(present, zeros, log_probs)
+ rewards = tf.where(present, zeros, rewards)
+
+ # Unstack Tensors into lists.
+ rewards_list = tf.unstack(rewards, axis=1)
+ log_probs_list = tf.unstack(log_probs, axis=1)
+ missing = 1. - tf.cast(present, tf.float32)
+ missing_list = tf.unstack(missing, axis=1)
+
+ # Cumulative Discounted Returns. The true value function V*(s).
+ cumulative_rewards = []
+ for t in xrange(FLAGS.sequence_length):
+ cum_value = tf.zeros(shape=[FLAGS.batch_size])
+ for s in xrange(t, FLAGS.sequence_length):
+ cum_value += missing_list[s] * np.power(gamma, (s - t)) * rewards_list[s]
+ cumulative_rewards.append(cum_value)
+ cumulative_rewards = tf.stack(cumulative_rewards, axis=1)
+
+ ## REINFORCE with different baselines.
+ # We create a separate critic functionality for the Discriminator. This
+ # will need to operate unidirectionally and it may take in the past context.
+ if FLAGS.baseline_method == 'critic':
+
+ # Critic loss calculated from the estimated value function \hat{V}(s)
+ # versus the true value function V*(s).
+ critic_loss = create_critic_loss(cumulative_rewards, estimated_values,
+ present)
+
+ # Baselines are coming from the critic's estimated state values.
+ baselines = tf.unstack(estimated_values, axis=1)
+
+ ## Calculate the Advantages, A(s,a) = Q(s,a) - \hat{V}(s).
+ advantages = []
+ for t in xrange(FLAGS.sequence_length):
+ log_probability = log_probs_list[t]
+ cum_advantage = tf.zeros(shape=[FLAGS.batch_size])
+
+ for s in xrange(t, FLAGS.sequence_length):
+ cum_advantage += missing_list[s] * np.power(gamma,
+ (s - t)) * rewards_list[s]
+ cum_advantage -= baselines[t]
+ # Clip advantages.
+ cum_advantage = tf.clip_by_value(cum_advantage, -FLAGS.advantage_clipping,
+ FLAGS.advantage_clipping)
+ advantages.append(missing_list[t] * cum_advantage)
+ final_gen_objective += tf.multiply(
+ log_probability, missing_list[t] * tf.stop_gradient(cum_advantage))
+
+ maintain_averages_op = None
+ baselines = tf.stack(baselines, axis=1)
+ advantages = tf.stack(advantages, axis=1)
+
+ # Split the batch into half. Use half for MC estimates for REINFORCE.
+ # Use the other half to establish a baseline.
+ elif FLAGS.baseline_method == 'dis_batch':
+ # TODO(liamfedus): Recheck.
+ [rewards_half, baseline_half] = tf.split(
+ rewards, num_or_size_splits=2, axis=0)
+ [log_probs_half, _] = tf.split(log_probs, num_or_size_splits=2, axis=0)
+ [reward_present_half, baseline_present_half] = tf.split(
+ present, num_or_size_splits=2, axis=0)
+
+ # Unstack to lists.
+ baseline_list = tf.unstack(baseline_half, axis=1)
+ baseline_missing = 1. - tf.cast(baseline_present_half, tf.float32)
+ baseline_missing_list = tf.unstack(baseline_missing, axis=1)
+
+ baselines = []
+ for t in xrange(FLAGS.sequence_length):
+ # Calculate baseline only for missing tokens.
+ num_missing = tf.reduce_sum(baseline_missing_list[t])
+
+ avg_baseline = tf.reduce_sum(
+ baseline_missing_list[t] * baseline_list[t], keep_dims=True) / (
+ num_missing + eps)
+      baseline = tf.tile(avg_baseline, multiples=[FLAGS.batch_size // 2])
+ baselines.append(baseline)
+
+ # Unstack to lists.
+ rewards_list = tf.unstack(rewards_half, axis=1)
+ log_probs_list = tf.unstack(log_probs_half, axis=1)
+ reward_missing = 1. - tf.cast(reward_present_half, tf.float32)
+ reward_missing_list = tf.unstack(reward_missing, axis=1)
+
+ ## Calculate the Advantages, A(s,a) = Q(s,a) - \hat{V}(s).
+ advantages = []
+ for t in xrange(FLAGS.sequence_length):
+ log_probability = log_probs_list[t]
+      cum_advantage = tf.zeros(shape=[FLAGS.batch_size // 2])
+
+ for s in xrange(t, FLAGS.sequence_length):
+ cum_advantage += reward_missing_list[s] * np.power(gamma, (s - t)) * (
+ rewards_list[s] - baselines[s])
+ # Clip advantages.
+ cum_advantage = tf.clip_by_value(cum_advantage, -FLAGS.advantage_clipping,
+ FLAGS.advantage_clipping)
+ advantages.append(reward_missing_list[t] * cum_advantage)
+ final_gen_objective += tf.multiply(
+ log_probability,
+ reward_missing_list[t] * tf.stop_gradient(cum_advantage))
+
+ # Cumulative Discounted Returns. The true value function V*(s).
+ cumulative_rewards = []
+ for t in xrange(FLAGS.sequence_length):
+      cum_value = tf.zeros(shape=[FLAGS.batch_size // 2])
+ for s in xrange(t, FLAGS.sequence_length):
+ cum_value += reward_missing_list[s] * np.power(gamma, (
+ s - t)) * rewards_list[s]
+ cumulative_rewards.append(cum_value)
+ cumulative_rewards = tf.stack(cumulative_rewards, axis=1)
+
+ rewards = rewards_half
+ critic_loss = None
+ maintain_averages_op = None
+ baselines = tf.stack(baselines, axis=1)
+ advantages = tf.stack(advantages, axis=1)
+
+ # Exponential Moving Average baseline.
+ elif FLAGS.baseline_method == 'ema':
+ # TODO(liamfedus): Recheck.
+    # Maintain an exponential moving average of the per-timestep rewards to
+    # use as the baseline.
+ ema = tf.train.ExponentialMovingAverage(decay=hparams.baseline_decay)
+ maintain_averages_op = ema.apply(rewards_list)
+
+ baselines = []
+ for r in rewards_list:
+ baselines.append(ema.average(r))
+
+ ## Calculate the Advantages, A(s,a) = Q(s,a) - \hat{V}(s).
+ advantages = []
+ for t in xrange(FLAGS.sequence_length):
+ log_probability = log_probs_list[t]
+
+ # Calculate the forward advantage only on the missing tokens.
+ cum_advantage = tf.zeros(shape=[FLAGS.batch_size])
+ for s in xrange(t, FLAGS.sequence_length):
+ cum_advantage += missing_list[s] * np.power(gamma, (s - t)) * (
+ rewards_list[s] - baselines[s])
+ # Clip advantages.
+ cum_advantage = tf.clip_by_value(cum_advantage, -FLAGS.advantage_clipping,
+ FLAGS.advantage_clipping)
+ advantages.append(missing_list[t] * cum_advantage)
+ final_gen_objective += tf.multiply(
+ log_probability, missing_list[t] * tf.stop_gradient(cum_advantage))
+
+ critic_loss = None
+ baselines = tf.stack(baselines, axis=1)
+ advantages = tf.stack(advantages, axis=1)
+
+ elif FLAGS.baseline_method is None:
+ num_missing = tf.reduce_sum(missing)
+ final_gen_objective += tf.reduce_sum(rewards) / (num_missing + eps)
+ baselines = tf.zeros_like(rewards)
+ critic_loss = None
+ maintain_averages_op = None
+ advantages = cumulative_rewards
+
+ else:
+ raise NotImplementedError
+
+ return [
+ final_gen_objective, log_probs, rewards, advantages, baselines,
+ maintain_averages_op, critic_loss, cumulative_rewards
+ ]
+
+
+def calculate_log_perplexity(logits, targets, present):
+ """Calculate the average log perplexity per *missing* token.
+
+ Args:
+ logits: tf.float32 Tensor of the logits of shape [batch_size,
+ sequence_length, vocab_size].
+ targets: tf.int32 Tensor of the sequence target of shape [batch_size,
+ sequence_length].
+ present: tf.bool Tensor indicating the presence or absence of the token
+ of shape [batch_size, sequence_length].
+
+ Returns:
+ avg_log_perplexity: Scalar indicating the average log perplexity per
+ missing token in the batch.
+ """
+ # logits = tf.Print(logits, [logits], message='logits:', summarize=50)
+ # targets = tf.Print(targets, [targets], message='targets:', summarize=50)
+ eps = 1e-12
+ logits = tf.reshape(logits, [-1, FLAGS.vocab_size])
+
+ # Only calculate log-perplexity on missing tokens.
+ weights = tf.cast(present, tf.float32)
+ weights = 1. - weights
+ weights = tf.reshape(weights, [-1])
+ num_missing = tf.reduce_sum(weights)
+
+ log_perplexity = tf.contrib.legacy_seq2seq.sequence_loss_by_example(
+ [logits], [tf.reshape(targets, [-1])], [weights])
+
+ avg_log_perplexity = tf.reduce_sum(log_perplexity) / (num_missing + eps)
+ return avg_log_perplexity
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/model_utils/model_optimization.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/model_utils/model_optimization.py"
new file mode 100644
index 00000000..caae271f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/model_utils/model_optimization.py"
@@ -0,0 +1,194 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Model optimization."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# Dependency imports
+
+import tensorflow as tf
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def create_dis_pretrain_op(hparams, dis_loss, global_step):
+ """Create a train op for pretraining."""
+  with tf.name_scope('pretrain_discriminator'):
+ optimizer = tf.train.AdamOptimizer(hparams.dis_pretrain_learning_rate)
+ dis_vars = [
+ v for v in tf.trainable_variables() if v.op.name.startswith('dis')
+ ]
+ if FLAGS.dis_update_share_embedding and FLAGS.dis_share_embedding:
+ shared_embedding = [
+ v for v in tf.trainable_variables()
+ if v.op.name == 'gen/decoder/rnn/embedding'
+ ][0]
+ dis_vars.append(shared_embedding)
+ dis_grads = tf.gradients(dis_loss, dis_vars)
+ dis_grads_clipped, _ = tf.clip_by_global_norm(dis_grads,
+ FLAGS.grad_clipping)
+ dis_pretrain_op = optimizer.apply_gradients(
+ zip(dis_grads_clipped, dis_vars), global_step=global_step)
+ return dis_pretrain_op
+
+
+def create_gen_pretrain_op(hparams, cross_entropy_loss, global_step):
+ """Create a train op for pretraining."""
+ with tf.name_scope('pretrain_generator'):
+ optimizer = tf.train.AdamOptimizer(hparams.gen_pretrain_learning_rate)
+ gen_vars = [
+ v for v in tf.trainable_variables() if v.op.name.startswith('gen')
+ ]
+ gen_grads = tf.gradients(cross_entropy_loss, gen_vars)
+ gen_grads_clipped, _ = tf.clip_by_global_norm(gen_grads,
+ FLAGS.grad_clipping)
+ gen_pretrain_op = optimizer.apply_gradients(
+ zip(gen_grads_clipped, gen_vars), global_step=global_step)
+ return gen_pretrain_op
+
+
+def create_gen_train_op(hparams, learning_rate, gen_loss, global_step, mode):
+ """Create Generator train op."""
+ del hparams
+ with tf.name_scope('train_generator'):
+ if FLAGS.generator_optimizer == 'sgd':
+ gen_optimizer = tf.train.GradientDescentOptimizer(learning_rate)
+ elif FLAGS.generator_optimizer == 'adam':
+ gen_optimizer = tf.train.AdamOptimizer(learning_rate)
+ else:
+ raise NotImplementedError
+ gen_vars = [
+ v for v in tf.trainable_variables() if v.op.name.startswith('gen')
+ ]
+ print('Optimizing Generator vars.')
+ for v in gen_vars:
+ print(v)
+ if mode == 'MINIMIZE':
+ gen_grads = tf.gradients(gen_loss, gen_vars)
+ elif mode == 'MAXIMIZE':
+ gen_grads = tf.gradients(-gen_loss, gen_vars)
+ else:
+ raise ValueError("Must be one of 'MINIMIZE' or 'MAXIMIZE'")
+ gen_grads_clipped, _ = tf.clip_by_global_norm(gen_grads,
+ FLAGS.grad_clipping)
+ gen_train_op = gen_optimizer.apply_gradients(
+ zip(gen_grads_clipped, gen_vars), global_step=global_step)
+ return gen_train_op, gen_grads_clipped, gen_vars
+
+
+def create_reinforce_gen_train_op(hparams, learning_rate, final_gen_reward,
+ averages_op, global_step):
+ """Create the Generator train_op when using REINFORCE.
+
+ Args:
+ hparams: MaskGAN hyperparameters.
+ learning_rate: tf.Variable scalar learning rate.
+    final_gen_reward: Scalar final REINFORCE reward for the sequence.
+ averages_op: ExponentialMovingAverage apply average op to
+ maintain the baseline.
+ global_step: global_step tf.Variable.
+
+ Returns:
+ gen_train_op: Generator training op.
+ """
+ del hparams
+ with tf.name_scope('train_generator'):
+ if FLAGS.generator_optimizer == 'sgd':
+ gen_optimizer = tf.train.GradientDescentOptimizer(learning_rate)
+ elif FLAGS.generator_optimizer == 'adam':
+ gen_optimizer = tf.train.AdamOptimizer(learning_rate)
+ else:
+ raise NotImplementedError
+ gen_vars = [
+ v for v in tf.trainable_variables() if v.op.name.startswith('gen')
+ ]
+ print('\nOptimizing Generator vars:')
+ for v in gen_vars:
+ print(v)
+
+ # Maximize reward.
+ gen_grads = tf.gradients(-final_gen_reward, gen_vars)
+ gen_grads_clipped, _ = tf.clip_by_global_norm(gen_grads,
+ FLAGS.grad_clipping)
+ maximize_op = gen_optimizer.apply_gradients(
+ zip(gen_grads_clipped, gen_vars), global_step=global_step)
+
+ # Group maintain averages op.
+ if averages_op:
+ gen_train_op = tf.group(maximize_op, averages_op)
+ else:
+ gen_train_op = maximize_op
+
+ return [gen_train_op, gen_grads, gen_vars]
+
+
+def create_dis_train_op(hparams, dis_loss, global_step):
+ """Create Discriminator train op."""
+ with tf.name_scope('train_discriminator'):
+ dis_optimizer = tf.train.AdamOptimizer(hparams.dis_learning_rate)
+ dis_vars = [
+ v for v in tf.trainable_variables() if v.op.name.startswith('dis')
+ ]
+ if FLAGS.dis_update_share_embedding and FLAGS.dis_share_embedding:
+ shared_embedding = [
+ v for v in tf.trainable_variables()
+ if v.op.name == 'gen/decoder/rnn/embedding'
+ ][0]
+ dis_vars.append(shared_embedding)
+ print('\nOptimizing Discriminator vars:')
+ for v in dis_vars:
+ print(v)
+ dis_grads = tf.gradients(dis_loss, dis_vars)
+ dis_grads_clipped, _ = tf.clip_by_global_norm(dis_grads,
+ FLAGS.grad_clipping)
+ dis_train_op = dis_optimizer.apply_gradients(
+ zip(dis_grads_clipped, dis_vars), global_step=global_step)
+ return dis_train_op, dis_grads_clipped, dis_vars
+
+
+def create_critic_train_op(hparams, critic_loss, global_step):
+ """Create Discriminator train op."""
+ with tf.name_scope('train_critic'):
+ critic_optimizer = tf.train.AdamOptimizer(hparams.critic_learning_rate)
+ output_vars = [
+ v for v in tf.trainable_variables() if v.op.name.startswith('critic')
+ ]
+
+ if FLAGS.critic_update_dis_vars:
+ if FLAGS.discriminator_model == 'bidirectional_vd':
+ critic_vars = [
+ v for v in tf.trainable_variables()
+ if v.op.name.startswith('dis/rnn')
+ ]
+ elif FLAGS.discriminator_model == 'seq2seq_vd':
+ critic_vars = [
+ v for v in tf.trainable_variables()
+ if v.op.name.startswith('dis/decoder/rnn/multi_rnn_cell')
+ ]
+ critic_vars.extend(output_vars)
+ else:
+ critic_vars = output_vars
+ print('\nOptimizing Critic vars:')
+ for v in critic_vars:
+ print(v)
+ critic_grads = tf.gradients(critic_loss, critic_vars)
+ critic_grads_clipped, _ = tf.clip_by_global_norm(critic_grads,
+ FLAGS.grad_clipping)
+ critic_train_op = critic_optimizer.apply_gradients(
+ zip(critic_grads_clipped, critic_vars), global_step=global_step)
+ return critic_train_op, critic_grads_clipped, critic_vars
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/model_utils/model_utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/model_utils/model_utils.py"
new file mode 100644
index 00000000..0e318358
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/model_utils/model_utils.py"
@@ -0,0 +1,291 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Model utilities."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# Dependency imports
+import numpy as np
+
+import tensorflow as tf
+from model_utils import variable_mapping
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def generate_mask():
+ """Generate the mask to be fed into the model."""
+ if FLAGS.mask_strategy == 'random':
+ p = np.random.choice(
+ [True, False],
+ size=[FLAGS.batch_size, FLAGS.sequence_length],
+ p=[FLAGS.is_present_rate, 1. - FLAGS.is_present_rate])
+
+ elif FLAGS.mask_strategy == 'contiguous':
+ masked_length = int((1 - FLAGS.is_present_rate) * FLAGS.sequence_length) - 1
+ # Determine location to start masking.
+ start_mask = np.random.randint(
+ 1, FLAGS.sequence_length - masked_length + 1, size=FLAGS.batch_size)
+ p = np.full([FLAGS.batch_size, FLAGS.sequence_length], True, dtype=bool)
+
+ # Create contiguous masked section to be False.
+ for i, index in enumerate(start_mask):
+ p[i, index:index + masked_length] = False
+
+ else:
+ raise NotImplementedError
+
+ return p
+
+
+def assign_percent_real(session, percent_real_update, new_rate, current_rate):
+ """Run assign operation where the we load the current_rate of percent
+ real into a Tensorflow variable.
+
+ Args:
+ session: Current tf.Session.
+ percent_real_update: tf.assign operation.
+ new_rate: tf.placeholder for the new rate.
+ current_rate: Percent of tokens that are currently real. Fake tokens
+ are the ones being imputed by the Generator.
+ """
+ session.run(percent_real_update, feed_dict={new_rate: current_rate})
+
+
+def assign_learning_rate(session, lr_update, lr_placeholder, new_lr):
+ """Run assign operation where the we load the current_rate of percent
+ real into a Tensorflow variable.
+
+ Args:
+ session: Current tf.Session.
+ lr_update: tf.assign operation.
+ lr_placeholder: tf.placeholder for the new learning rate.
+ new_lr: New learning rate to use.
+ """
+ session.run(lr_update, feed_dict={lr_placeholder: new_lr})
+
+
+def clip_weights(variables, c_lower, c_upper):
+ """Clip a list of weights to be within a certain range.
+
+ Args:
+ variables: List of tf.Variable weights.
+ c_lower: Lower bound for weights.
+ c_upper: Upper bound for weights.
+ """
+ clip_ops = []
+
+ for var in variables:
+ clipped_var = tf.clip_by_value(var, c_lower, c_upper)
+
+ clip_ops.append(tf.assign(var, clipped_var))
+ return tf.group(*clip_ops)
+
+
+def retrieve_init_savers(hparams):
+ """Retrieve a dictionary of all the initial savers for the models.
+
+ Args:
+ hparams: MaskGAN hyperparameters.
+ """
+ ## Dictionary of init savers.
+ init_savers = {}
+
+ ## Load Generator weights from MaskGAN checkpoint.
+ if FLAGS.maskgan_ckpt:
+ gen_vars = [
+ v for v in tf.trainable_variables() if v.op.name.startswith('gen')
+ ]
+ init_saver = tf.train.Saver(var_list=gen_vars)
+ init_savers['init_saver'] = init_saver
+
+ ## Load the Discriminator weights from the MaskGAN checkpoint if
+ # the weights are compatible.
+ if FLAGS.discriminator_model == 'seq2seq_vd':
+ dis_variable_maps = variable_mapping.dis_seq2seq_vd(hparams)
+ dis_init_saver = tf.train.Saver(var_list=dis_variable_maps)
+ init_savers['dis_init_saver'] = dis_init_saver
+
+ ## Load weights from language model checkpoint.
+ if FLAGS.language_model_ckpt_dir:
+ if FLAGS.maskgan_ckpt is None:
+ ## Generator Variables/Savers.
+ if FLAGS.generator_model == 'rnn_nas':
+ gen_variable_maps = variable_mapping.rnn_nas(hparams, model='gen')
+ gen_init_saver = tf.train.Saver(var_list=gen_variable_maps)
+ init_savers['gen_init_saver'] = gen_init_saver
+
+ elif FLAGS.generator_model == 'seq2seq_nas':
+ # Encoder.
+ gen_encoder_variable_maps = variable_mapping.gen_encoder_seq2seq_nas(
+ hparams)
+ gen_encoder_init_saver = tf.train.Saver(
+ var_list=gen_encoder_variable_maps)
+ # Decoder.
+ gen_decoder_variable_maps = variable_mapping.gen_decoder_seq2seq_nas(
+ hparams)
+ gen_decoder_init_saver = tf.train.Saver(
+ var_list=gen_decoder_variable_maps)
+ init_savers['gen_encoder_init_saver'] = gen_encoder_init_saver
+ init_savers['gen_decoder_init_saver'] = gen_decoder_init_saver
+
+ # seq2seq_vd derived from the same code base as seq2seq_zaremba.
+ elif (FLAGS.generator_model == 'seq2seq_zaremba' or
+ FLAGS.generator_model == 'seq2seq_vd'):
+ # Encoder.
+ gen_encoder_variable_maps = variable_mapping.gen_encoder_seq2seq(
+ hparams)
+ gen_encoder_init_saver = tf.train.Saver(
+ var_list=gen_encoder_variable_maps)
+ # Decoder.
+ gen_decoder_variable_maps = variable_mapping.gen_decoder_seq2seq(
+ hparams)
+ gen_decoder_init_saver = tf.train.Saver(
+ var_list=gen_decoder_variable_maps)
+ init_savers['gen_encoder_init_saver'] = gen_encoder_init_saver
+ init_savers['gen_decoder_init_saver'] = gen_decoder_init_saver
+
+ else:
+ raise NotImplementedError
+
+ ## Discriminator Variables/Savers.
+ if FLAGS.discriminator_model == 'rnn_nas':
+ dis_variable_maps = variable_mapping.rnn_nas(hparams, model='dis')
+ dis_init_saver = tf.train.Saver(var_list=dis_variable_maps)
+ init_savers['dis_init_saver'] = dis_init_saver
+
+ # rnn_vd derived from the same code base as rnn_zaremba.
+ elif (FLAGS.discriminator_model == 'rnn_zaremba' or
+ FLAGS.discriminator_model == 'rnn_vd'):
+ dis_variable_maps = variable_mapping.rnn_zaremba(hparams, model='dis')
+ dis_init_saver = tf.train.Saver(var_list=dis_variable_maps)
+ init_savers['dis_init_saver'] = dis_init_saver
+
+ elif (FLAGS.discriminator_model == 'bidirectional_zaremba' or
+ FLAGS.discriminator_model == 'bidirectional_vd'):
+ dis_fwd_variable_maps = variable_mapping.dis_fwd_bidirectional(hparams)
+ dis_bwd_variable_maps = variable_mapping.dis_bwd_bidirectional(hparams)
+ # Savers for the forward/backward Discriminator components.
+ dis_fwd_init_saver = tf.train.Saver(var_list=dis_fwd_variable_maps)
+ dis_bwd_init_saver = tf.train.Saver(var_list=dis_bwd_variable_maps)
+ init_savers['dis_fwd_init_saver'] = dis_fwd_init_saver
+ init_savers['dis_bwd_init_saver'] = dis_bwd_init_saver
+
+ elif FLAGS.discriminator_model == 'cnn':
+ dis_variable_maps = variable_mapping.cnn()
+ dis_init_saver = tf.train.Saver(var_list=dis_variable_maps)
+ init_savers['dis_init_saver'] = dis_init_saver
+
+ elif FLAGS.discriminator_model == 'seq2seq_vd':
+ # Encoder.
+ dis_encoder_variable_maps = variable_mapping.dis_encoder_seq2seq(hparams)
+ dis_encoder_init_saver = tf.train.Saver(
+ var_list=dis_encoder_variable_maps)
+ # Decoder.
+ dis_decoder_variable_maps = variable_mapping.dis_decoder_seq2seq(hparams)
+ dis_decoder_init_saver = tf.train.Saver(
+ var_list=dis_decoder_variable_maps)
+ init_savers['dis_encoder_init_saver'] = dis_encoder_init_saver
+ init_savers['dis_decoder_init_saver'] = dis_decoder_init_saver
+
+ return init_savers
+
+
+def init_fn(init_savers, sess):
+ """The init_fn to be passed to the Supervisor.
+
+ Args:
+ init_savers: Dictionary of init_savers. 'init_saver_name': init_saver.
+ sess: tf.Session.
+ """
+ ## Load Generator weights from MaskGAN checkpoint.
+ if FLAGS.maskgan_ckpt:
+ print('Restoring Generator from %s.' % FLAGS.maskgan_ckpt)
+ tf.logging.info('Restoring Generator from %s.' % FLAGS.maskgan_ckpt)
+ print('Asserting Generator is a seq2seq-variant.')
+ tf.logging.info('Asserting Generator is a seq2seq-variant.')
+ assert FLAGS.generator_model.startswith('seq2seq')
+ init_saver = init_savers['init_saver']
+ init_saver.restore(sess, FLAGS.maskgan_ckpt)
+
+ ## Load the Discriminator weights from the MaskGAN checkpoint if
+ # the weights are compatible.
+ if FLAGS.discriminator_model == 'seq2seq_vd':
+ print('Restoring Discriminator from %s.' % FLAGS.maskgan_ckpt)
+ tf.logging.info('Restoring Discriminator from %s.' % FLAGS.maskgan_ckpt)
+ dis_init_saver = init_savers['dis_init_saver']
+ dis_init_saver.restore(sess, FLAGS.maskgan_ckpt)
+
+ ## Load weights from language model checkpoint.
+ if FLAGS.language_model_ckpt_dir:
+ if FLAGS.maskgan_ckpt is None:
+ ## Generator Models.
+ if FLAGS.generator_model == 'rnn_nas':
+ load_ckpt = tf.train.latest_checkpoint(FLAGS.language_model_ckpt_dir)
+ print('Restoring Generator from %s.' % load_ckpt)
+ tf.logging.info('Restoring Generator from %s.' % load_ckpt)
+ gen_init_saver = init_savers['gen_init_saver']
+ gen_init_saver.restore(sess, load_ckpt)
+
+ elif FLAGS.generator_model.startswith('seq2seq'):
+ load_ckpt = tf.train.latest_checkpoint(FLAGS.language_model_ckpt_dir)
+ print('Restoring Generator from %s.' % load_ckpt)
+ tf.logging.info('Restoring Generator from %s.' % load_ckpt)
+ gen_encoder_init_saver = init_savers['gen_encoder_init_saver']
+ gen_decoder_init_saver = init_savers['gen_decoder_init_saver']
+ gen_encoder_init_saver.restore(sess, load_ckpt)
+ gen_decoder_init_saver.restore(sess, load_ckpt)
+
+ ## Discriminator Models.
+ if (FLAGS.discriminator_model == 'rnn_nas' or
+ FLAGS.discriminator_model == 'rnn_zaremba' or
+ FLAGS.discriminator_model == 'rnn_vd' or
+ FLAGS.discriminator_model == 'cnn'):
+ load_ckpt = tf.train.latest_checkpoint(FLAGS.language_model_ckpt_dir)
+ print('Restoring Discriminator from %s.' % load_ckpt)
+ tf.logging.info('Restoring Discriminator from %s.' % load_ckpt)
+ dis_init_saver = init_savers['dis_init_saver']
+ dis_init_saver.restore(sess, load_ckpt)
+
+ elif (FLAGS.discriminator_model == 'bidirectional_zaremba' or
+ FLAGS.discriminator_model == 'bidirectional_vd'):
+ assert FLAGS.language_model_ckpt_dir_reversed is not None, (
+ 'Need a reversed directory to fill in the backward components.')
+ load_fwd_ckpt = tf.train.latest_checkpoint(FLAGS.language_model_ckpt_dir)
+ load_bwd_ckpt = tf.train.latest_checkpoint(
+ FLAGS.language_model_ckpt_dir_reversed)
+ print('Restoring Discriminator from %s and %s.' % (load_fwd_ckpt,
+ load_bwd_ckpt))
+ tf.logging.info('Restoring Discriminator from %s and %s.' %
+ (load_fwd_ckpt, load_bwd_ckpt))
+ dis_fwd_init_saver = init_savers['dis_fwd_init_saver']
+ dis_bwd_init_saver = init_savers['dis_bwd_init_saver']
+ dis_fwd_init_saver.restore(sess, load_fwd_ckpt)
+ dis_bwd_init_saver.restore(sess, load_bwd_ckpt)
+
+ elif FLAGS.discriminator_model == 'seq2seq_vd':
+ load_ckpt = tf.train.latest_checkpoint(FLAGS.language_model_ckpt_dir)
+ print('Restoring Discriminator from %s.' % load_ckpt)
+ tf.logging.info('Restoring Discriminator from %s.' % load_ckpt)
+ dis_encoder_init_saver = init_savers['dis_encoder_init_saver']
+ dis_decoder_init_saver = init_savers['dis_decoder_init_saver']
+ dis_encoder_init_saver.restore(sess, load_ckpt)
+ dis_decoder_init_saver.restore(sess, load_ckpt)
+
+ else:
+ return
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/model_utils/n_gram.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/model_utils/n_gram.py"
new file mode 100644
index 00000000..b889dde8
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/model_utils/n_gram.py"
@@ -0,0 +1,66 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""We calculate n-Grams from the training text. We will use this as an
+evaluation metric."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from six.moves import xrange
+
+
+def hash_function(input_tuple):
+ """Hash function for a tuple."""
+ return hash(input_tuple)
+
+
+def find_all_ngrams(dataset, n):
+ """Generate a list of all ngrams."""
+ return zip(*[dataset[i:] for i in xrange(n)])
+
+
+def construct_ngrams_dict(ngrams_list):
+  """Construct an n-gram dictionary mapping the hash of each n-gram tuple to
+  the number of times it appears in the text."""
+ counts = {}
+
+ for t in ngrams_list:
+ key = hash_function(t)
+ if key in counts:
+ counts[key] += 1
+ else:
+ counts[key] = 1
+ return counts
+
+
+def percent_unique_ngrams_in_train(train_ngrams_dict, gen_ngrams_dict):
+  """Compute the number of distinct n-grams generated by the model that also
+  appear in the training text, as a fraction of the total number of n-grams
+  produced."""
+
+ # *Total* number of n-grams produced by the generator.
+ total_ngrams_produced = 0
+
+  for _, value in gen_ngrams_dict.items():
+ total_ngrams_produced += value
+
+ # The unique ngrams in the training set.
+ unique_ngrams_in_train = 0.
+
+  for key, _ in gen_ngrams_dict.items():
+ if key in train_ngrams_dict:
+ unique_ngrams_in_train += 1
+ return float(unique_ngrams_in_train) / float(total_ngrams_produced)
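+
+
+# Illustrative usage sketch (hypothetical token-id lists; the real evaluation
+# pipeline lives elsewhere):
+#
+#   train_ngrams = construct_ngrams_dict(find_all_ngrams(train_tokens, 4))
+#   gen_ngrams = construct_ngrams_dict(find_all_ngrams(generated_tokens, 4))
+#   overlap = percent_unique_ngrams_in_train(train_ngrams, gen_ngrams)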
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/model_utils/variable_mapping.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/model_utils/variable_mapping.py"
new file mode 100644
index 00000000..0301b969
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/model_utils/variable_mapping.py"
@@ -0,0 +1,745 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# Dependency imports
+
+import tensorflow as tf
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def rnn_nas(hparams, model):
+ assert model == 'gen' or model == 'dis'
+
+  # This logic is only valid for rnn_nas
+ if model == 'gen':
+ assert FLAGS.generator_model == 'rnn_nas'
+ assert hparams.gen_num_layers == 2
+
+ if model == 'dis':
+ assert FLAGS.discriminator_model == 'rnn_nas'
+ assert hparams.dis_num_layers == 2
+
+  # Output variables are restored only for the Generator. The Discriminator
+  # output biases start from random initialization.
+ if model == 'gen':
+ softmax_b = [
+ v for v in tf.trainable_variables() if v.op.name == 'gen/rnn/softmax_b'
+ ][0]
+
+ # Common elements to Generator and Discriminator.
+ embedding = [
+ v for v in tf.trainable_variables()
+ if v.op.name == str(model) + '/rnn/embedding'
+ ][0]
+ lstm_w_0 = [
+ v for v in tf.trainable_variables()
+ if v.op.name ==
+ str(model) + '/rnn/GenericMultiRNNCell/Cell0/Alien/rnn_builder/big_h_mat'
+ ][0]
+ lstm_b_0 = [
+ v for v in tf.trainable_variables()
+ if v.op.name == str(model) +
+ '/rnn/GenericMultiRNNCell/Cell0/Alien/rnn_builder/big_inputs_mat'
+ ][0]
+ lstm_w_1 = [
+ v for v in tf.trainable_variables()
+ if v.op.name ==
+ str(model) + '/rnn/GenericMultiRNNCell/Cell1/Alien/rnn_builder/big_h_mat'
+ ][0]
+ lstm_b_1 = [
+ v for v in tf.trainable_variables()
+ if v.op.name == str(model) +
+ '/rnn/GenericMultiRNNCell/Cell1/Alien/rnn_builder/big_inputs_mat'
+ ][0]
+
+ # Dictionary mapping.
+ if model == 'gen':
+ variable_mapping = {
+ 'Model/embeddings/input_embedding':
+ embedding,
+ 'Model/RNN/GenericMultiRNNCell/Cell0/Alien/rnn_builder/big_h_mat':
+ lstm_w_0,
+ 'Model/RNN/GenericMultiRNNCell/Cell0/Alien/rnn_builder/big_inputs_mat':
+ lstm_b_0,
+ 'Model/RNN/GenericMultiRNNCell/Cell1/Alien/rnn_builder/big_h_mat':
+ lstm_w_1,
+ 'Model/RNN/GenericMultiRNNCell/Cell1/Alien/rnn_builder/big_inputs_mat':
+ lstm_b_1,
+ 'Model/softmax_b':
+ softmax_b
+ }
+ else:
+ variable_mapping = {
+ 'Model/embeddings/input_embedding':
+ embedding,
+ 'Model/RNN/GenericMultiRNNCell/Cell0/Alien/rnn_builder/big_h_mat':
+ lstm_w_0,
+ 'Model/RNN/GenericMultiRNNCell/Cell0/Alien/rnn_builder/big_inputs_mat':
+ lstm_b_0,
+ 'Model/RNN/GenericMultiRNNCell/Cell1/Alien/rnn_builder/big_h_mat':
+ lstm_w_1,
+ 'Model/RNN/GenericMultiRNNCell/Cell1/Alien/rnn_builder/big_inputs_mat':
+ lstm_b_1
+ }
+
+ return variable_mapping
+
+
+def cnn():
+ """Variable mapping for the CNN embedding.
+
+ Returns:
+ variable_mapping: Dictionary with Key: ckpt_name, Value: model_var.
+ """
+ # This logic is only valid for cnn
+ assert FLAGS.discriminator_model == 'cnn'
+
+ # Retrieve CNN embedding.
+ embedding = [
+ v for v in tf.trainable_variables() if v.op.name == 'dis/embedding'
+ ][0]
+
+ # Variable mapping.
+ variable_mapping = {'Model/embedding': embedding}
+
+ return variable_mapping
+
+
+def rnn_zaremba(hparams, model):
+ """Returns the PTB Variable name to MaskGAN Variable dictionary mapping. This
+ is a highly restrictive function just for testing. This will need to be
+ generalized.
+
+ Args:
+ hparams: Hyperparameters for the MaskGAN.
+ model: Model type, one of ['gen', 'dis'].
+
+ Returns:
+ variable_mapping: Dictionary with Key: ckpt_name, Value: model_var.
+ """
+ assert model == 'gen' or model == 'dis'
+
+ # This logic is only valid for rnn_zaremba
+ if model == 'gen':
+ assert FLAGS.generator_model == 'rnn_zaremba'
+ assert hparams.gen_num_layers == 2
+
+ if model == 'dis':
+ assert (FLAGS.discriminator_model == 'rnn_zaremba' or
+ FLAGS.discriminator_model == 'rnn_vd')
+ assert hparams.dis_num_layers == 2
+
+  # Output variables are restored only for the Generator. The Discriminator
+  # output weights and biases start from random initialization.
+ if model == 'gen':
+ softmax_w = [
+ v for v in tf.trainable_variables() if v.op.name == 'gen/rnn/softmax_w'
+ ][0]
+ softmax_b = [
+ v for v in tf.trainable_variables() if v.op.name == 'gen/rnn/softmax_b'
+ ][0]
+
+ # Common elements to Generator and Discriminator.
+ if not FLAGS.dis_share_embedding or model != 'dis':
+ embedding = [
+ v for v in tf.trainable_variables()
+ if v.op.name == str(model) + '/rnn/embedding'
+ ][0]
+ lstm_w_0 = [
+ v for v in tf.trainable_variables() if v.op.name == str(model) +
+ '/rnn/multi_rnn_cell/cell_0/basic_lstm_cell/kernel'
+ ][0]
+ lstm_b_0 = [
+ v for v in tf.trainable_variables() if v.op.name == str(model) +
+ '/rnn/multi_rnn_cell/cell_0/basic_lstm_cell/bias'
+ ][0]
+ lstm_w_1 = [
+ v for v in tf.trainable_variables() if v.op.name == str(model) +
+ '/rnn/multi_rnn_cell/cell_1/basic_lstm_cell/kernel'
+ ][0]
+ lstm_b_1 = [
+ v for v in tf.trainable_variables() if v.op.name == str(model) +
+ '/rnn/multi_rnn_cell/cell_1/basic_lstm_cell/bias'
+ ][0]
+
+ # Dictionary mapping.
+ if model == 'gen':
+ variable_mapping = {
+ 'Model/embedding': embedding,
+ 'Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/kernel': lstm_w_0,
+ 'Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/bias': lstm_b_0,
+ 'Model/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/kernel': lstm_w_1,
+ 'Model/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/bias': lstm_b_1,
+ 'Model/softmax_w': softmax_w,
+ 'Model/softmax_b': softmax_b
+ }
+ else:
+ if FLAGS.dis_share_embedding:
+ variable_mapping = {
+ 'Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/kernel': lstm_w_0,
+ 'Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/bias': lstm_b_0,
+ 'Model/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/kernel': lstm_w_1,
+ 'Model/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/bias': lstm_b_1
+ }
+ else:
+ variable_mapping = {
+ 'Model/embedding': embedding,
+ 'Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/kernel': lstm_w_0,
+ 'Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/bias': lstm_b_0,
+ 'Model/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/kernel': lstm_w_1,
+ 'Model/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/bias': lstm_b_1
+ }
+
+ return variable_mapping
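+
+  # (Illustrative only.) The mapping returned above is assumed to be consumed
+  # as the `var_list` of a tf.train.Saver when restoring from a pretrained PTB
+  # language-model checkpoint, e.g.:
+  #
+  #   gen_vars = rnn_zaremba(hparams, model='gen')
+  #   gen_init_saver = tf.train.Saver(var_list=gen_vars)
+  #   gen_init_saver.restore(sess, tf.train.latest_checkpoint(ckpt_dir))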
+
+
+def gen_encoder_seq2seq_nas(hparams):
+ """Returns the NAS Variable name to MaskGAN Variable
+ dictionary mapping. This is a highly restrictive function just for testing.
+  This is for the *unidirectional* seq2seq_nas encoder.
+
+ Args:
+ hparams: Hyperparameters for the MaskGAN.
+
+ Returns:
+    variable_mapping: Dictionary with Key: ckpt_name, Value: model_var.
+ """
+ assert FLAGS.generator_model == 'seq2seq_nas'
+ assert hparams.gen_num_layers == 2
+ ## Encoder forward variables.
+
+ if not FLAGS.seq2seq_share_embedding:
+ encoder_embedding = [
+ v for v in tf.trainable_variables()
+ if v.op.name == 'gen/encoder/rnn/embedding'
+ ][0]
+ encoder_lstm_w_0 = [
+ v for v in tf.trainable_variables()
+ if v.op.name ==
+ 'gen/encoder/rnn/GenericMultiRNNCell/Cell0/Alien/rnn_builder/big_h_mat'
+ ][0]
+ encoder_lstm_b_0 = [
+ v for v in tf.trainable_variables()
+ if v.op.name ==
+ 'gen/encoder/rnn/GenericMultiRNNCell/Cell0/Alien/rnn_builder/big_inputs_mat'
+ ][0]
+ encoder_lstm_w_1 = [
+ v for v in tf.trainable_variables()
+ if v.op.name ==
+ 'gen/encoder/rnn/GenericMultiRNNCell/Cell1/Alien/rnn_builder/big_h_mat'
+ ][0]
+ encoder_lstm_b_1 = [
+ v for v in tf.trainable_variables()
+ if v.op.name ==
+ 'gen/encoder/rnn/GenericMultiRNNCell/Cell1/Alien/rnn_builder/big_inputs_mat'
+ ][0]
+
+ if not FLAGS.seq2seq_share_embedding:
+ variable_mapping = {
+ 'Model/embeddings/input_embedding':
+ encoder_embedding,
+ 'Model/RNN/GenericMultiRNNCell/Cell0/Alien/rnn_builder/big_h_mat':
+ encoder_lstm_w_0,
+ 'Model/RNN/GenericMultiRNNCell/Cell0/Alien/rnn_builder/big_inputs_mat':
+ encoder_lstm_b_0,
+ 'Model/RNN/GenericMultiRNNCell/Cell1/Alien/rnn_builder/big_h_mat':
+ encoder_lstm_w_1,
+ 'Model/RNN/GenericMultiRNNCell/Cell1/Alien/rnn_builder/big_inputs_mat':
+ encoder_lstm_b_1
+ }
+ else:
+ variable_mapping = {
+ 'Model/RNN/GenericMultiRNNCell/Cell0/Alien/rnn_builder/big_h_mat':
+ encoder_lstm_w_0,
+ 'Model/RNN/GenericMultiRNNCell/Cell0/Alien/rnn_builder/big_inputs_mat':
+ encoder_lstm_b_0,
+ 'Model/RNN/GenericMultiRNNCell/Cell1/Alien/rnn_builder/big_h_mat':
+ encoder_lstm_w_1,
+ 'Model/RNN/GenericMultiRNNCell/Cell1/Alien/rnn_builder/big_inputs_mat':
+ encoder_lstm_b_1
+ }
+ return variable_mapping
+
+
+def gen_decoder_seq2seq_nas(hparams):
+ assert FLAGS.generator_model == 'seq2seq_nas'
+ assert hparams.gen_num_layers == 2
+
+ decoder_embedding = [
+ v for v in tf.trainable_variables()
+ if v.op.name == 'gen/decoder/rnn/embedding'
+ ][0]
+ decoder_lstm_w_0 = [
+ v for v in tf.trainable_variables()
+ if v.op.name ==
+ 'gen/decoder/rnn/GenericMultiRNNCell/Cell0/Alien/rnn_builder/big_h_mat'
+ ][0]
+ decoder_lstm_b_0 = [
+ v for v in tf.trainable_variables()
+ if v.op.name ==
+ 'gen/decoder/rnn/GenericMultiRNNCell/Cell0/Alien/rnn_builder/big_inputs_mat'
+ ][0]
+ decoder_lstm_w_1 = [
+ v for v in tf.trainable_variables()
+ if v.op.name ==
+ 'gen/decoder/rnn/GenericMultiRNNCell/Cell1/Alien/rnn_builder/big_h_mat'
+ ][0]
+ decoder_lstm_b_1 = [
+ v for v in tf.trainable_variables()
+ if v.op.name ==
+ 'gen/decoder/rnn/GenericMultiRNNCell/Cell1/Alien/rnn_builder/big_inputs_mat'
+ ][0]
+
+ decoder_softmax_b = [
+ v for v in tf.trainable_variables()
+ if v.op.name == 'gen/decoder/rnn/softmax_b'
+ ][0]
+
+ variable_mapping = {
+ 'Model/embeddings/input_embedding':
+ decoder_embedding,
+ 'Model/RNN/GenericMultiRNNCell/Cell0/Alien/rnn_builder/big_h_mat':
+ decoder_lstm_w_0,
+ 'Model/RNN/GenericMultiRNNCell/Cell0/Alien/rnn_builder/big_inputs_mat':
+ decoder_lstm_b_0,
+ 'Model/RNN/GenericMultiRNNCell/Cell1/Alien/rnn_builder/big_h_mat':
+ decoder_lstm_w_1,
+ 'Model/RNN/GenericMultiRNNCell/Cell1/Alien/rnn_builder/big_inputs_mat':
+ decoder_lstm_b_1,
+ 'Model/softmax_b':
+ decoder_softmax_b
+ }
+
+ return variable_mapping
+
+
+def gen_encoder_seq2seq(hparams):
+ """Returns the PTB Variable name to MaskGAN Variable
+ dictionary mapping. This is a highly restrictive function just for testing.
+  This is for the *unidirectional* seq2seq_zaremba encoder.
+
+ Args:
+ hparams: Hyperparameters for the MaskGAN.
+
+ Returns:
+    variable_mapping: Dictionary with Key: ckpt_name, Value: model_var.
+ """
+ assert (FLAGS.generator_model == 'seq2seq_zaremba' or
+ FLAGS.generator_model == 'seq2seq_vd')
+ assert hparams.gen_num_layers == 2
+
+ ## Encoder forward variables.
+ if not FLAGS.seq2seq_share_embedding:
+ encoder_embedding = [
+ v for v in tf.trainable_variables()
+ if v.op.name == 'gen/encoder/rnn/embedding'
+ ][0]
+ encoder_lstm_w_0 = [
+ v for v in tf.trainable_variables() if v.op.name ==
+ 'gen/encoder/rnn/multi_rnn_cell/cell_0/basic_lstm_cell/kernel'
+ ][0]
+ encoder_lstm_b_0 = [
+ v for v in tf.trainable_variables() if v.op.name ==
+ 'gen/encoder/rnn/multi_rnn_cell/cell_0/basic_lstm_cell/bias'
+ ][0]
+ encoder_lstm_w_1 = [
+ v for v in tf.trainable_variables() if v.op.name ==
+ 'gen/encoder/rnn/multi_rnn_cell/cell_1/basic_lstm_cell/kernel'
+ ][0]
+ encoder_lstm_b_1 = [
+ v for v in tf.trainable_variables() if v.op.name ==
+ 'gen/encoder/rnn/multi_rnn_cell/cell_1/basic_lstm_cell/bias'
+ ][0]
+
+ if FLAGS.data_set == 'ptb':
+ model_str = 'Model'
+ else:
+ model_str = 'model'
+
+ if not FLAGS.seq2seq_share_embedding:
+ variable_mapping = {
+ str(model_str) + '/embedding':
+ encoder_embedding,
+ str(model_str) + '/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/kernel':
+ encoder_lstm_w_0,
+ str(model_str) + '/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/bias':
+ encoder_lstm_b_0,
+ str(model_str) + '/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/kernel':
+ encoder_lstm_w_1,
+ str(model_str) + '/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/bias':
+ encoder_lstm_b_1
+ }
+ else:
+ variable_mapping = {
+ str(model_str) + '/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/kernel':
+ encoder_lstm_w_0,
+ str(model_str) + '/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/bias':
+ encoder_lstm_b_0,
+ str(model_str) + '/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/kernel':
+ encoder_lstm_w_1,
+ str(model_str) + '/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/bias':
+ encoder_lstm_b_1
+ }
+ return variable_mapping
+
+
+def gen_decoder_seq2seq(hparams):
+ assert (FLAGS.generator_model == 'seq2seq_zaremba' or
+ FLAGS.generator_model == 'seq2seq_vd')
+ assert hparams.gen_num_layers == 2
+
+ decoder_embedding = [
+ v for v in tf.trainable_variables()
+ if v.op.name == 'gen/decoder/rnn/embedding'
+ ][0]
+ decoder_lstm_w_0 = [
+ v for v in tf.trainable_variables() if v.op.name ==
+ 'gen/decoder/rnn/multi_rnn_cell/cell_0/basic_lstm_cell/kernel'
+ ][0]
+ decoder_lstm_b_0 = [
+ v for v in tf.trainable_variables() if v.op.name ==
+ 'gen/decoder/rnn/multi_rnn_cell/cell_0/basic_lstm_cell/bias'
+ ][0]
+ decoder_lstm_w_1 = [
+ v for v in tf.trainable_variables() if v.op.name ==
+ 'gen/decoder/rnn/multi_rnn_cell/cell_1/basic_lstm_cell/kernel'
+ ][0]
+ decoder_lstm_b_1 = [
+ v for v in tf.trainable_variables() if v.op.name ==
+ 'gen/decoder/rnn/multi_rnn_cell/cell_1/basic_lstm_cell/bias'
+ ][0]
+ decoder_softmax_b = [
+ v for v in tf.trainable_variables()
+ if v.op.name == 'gen/decoder/rnn/softmax_b'
+ ][0]
+
+ if FLAGS.data_set == 'ptb':
+ model_str = 'Model'
+ else:
+ model_str = 'model'
+
+ variable_mapping = {
+ str(model_str) + '/embedding':
+ decoder_embedding,
+ str(model_str) + '/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/kernel':
+ decoder_lstm_w_0,
+ str(model_str) + '/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/bias':
+ decoder_lstm_b_0,
+ str(model_str) + '/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/kernel':
+ decoder_lstm_w_1,
+ str(model_str) + '/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/bias':
+ decoder_lstm_b_1,
+ str(model_str) + '/softmax_b':
+ decoder_softmax_b
+ }
+ return variable_mapping
+
+
+def dis_fwd_bidirectional(hparams):
+ """Returns the *forward* PTB Variable name to MaskGAN Variable dictionary
+ mapping. This is a highly restrictive function just for testing. This is for
+ the bidirectional_zaremba discriminator.
+
+ Args:
+ hparams: Hyperparameters for the MaskGAN.
+
+ Returns:
+    variable_mapping: Dictionary with Key: ckpt_name, Value: model_var.
+ """
+ assert (FLAGS.discriminator_model == 'bidirectional_zaremba' or
+ FLAGS.discriminator_model == 'bidirectional_vd')
+ assert hparams.dis_num_layers == 2
+
+ # Forward Discriminator Elements.
+ if not FLAGS.dis_share_embedding:
+ embedding = [
+ v for v in tf.trainable_variables() if v.op.name == 'dis/embedding'
+ ][0]
+ fw_lstm_w_0 = [
+ v for v in tf.trainable_variables()
+ if v.op.name == 'dis/rnn/fw/multi_rnn_cell/cell_0/basic_lstm_cell/kernel'
+ ][0]
+ fw_lstm_b_0 = [
+ v for v in tf.trainable_variables()
+ if v.op.name == 'dis/rnn/fw/multi_rnn_cell/cell_0/basic_lstm_cell/bias'
+ ][0]
+ fw_lstm_w_1 = [
+ v for v in tf.trainable_variables()
+ if v.op.name == 'dis/rnn/fw/multi_rnn_cell/cell_1/basic_lstm_cell/kernel'
+ ][0]
+ fw_lstm_b_1 = [
+ v for v in tf.trainable_variables()
+ if v.op.name == 'dis/rnn/fw/multi_rnn_cell/cell_1/basic_lstm_cell/bias'
+ ][0]
+ if FLAGS.dis_share_embedding:
+ variable_mapping = {
+ 'Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/kernel': fw_lstm_w_0,
+ 'Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/bias': fw_lstm_b_0,
+ 'Model/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/kernel': fw_lstm_w_1,
+ 'Model/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/bias': fw_lstm_b_1
+ }
+ else:
+ variable_mapping = {
+ 'Model/embedding': embedding,
+ 'Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/kernel': fw_lstm_w_0,
+ 'Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/bias': fw_lstm_b_0,
+ 'Model/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/kernel': fw_lstm_w_1,
+ 'Model/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/bias': fw_lstm_b_1
+ }
+ return variable_mapping
+
+
+def dis_bwd_bidirectional(hparams):
+ """Returns the *backward* PTB Variable name to MaskGAN Variable dictionary
+ mapping. This is a highly restrictive function just for testing. This is for
+ the bidirectional_zaremba discriminator.
+
+ Args:
+ hparams: Hyperparameters for the MaskGAN.
+
+ Returns:
+    variable_mapping: Dictionary with Key: ckpt_name, Value: model_var.
+ """
+ assert (FLAGS.discriminator_model == 'bidirectional_zaremba' or
+ FLAGS.discriminator_model == 'bidirectional_vd')
+ assert hparams.dis_num_layers == 2
+
+ # Backward Discriminator Elements.
+ bw_lstm_w_0 = [
+ v for v in tf.trainable_variables()
+ if v.op.name == 'dis/rnn/bw/multi_rnn_cell/cell_0/basic_lstm_cell/kernel'
+ ][0]
+ bw_lstm_b_0 = [
+ v for v in tf.trainable_variables()
+ if v.op.name == 'dis/rnn/bw/multi_rnn_cell/cell_0/basic_lstm_cell/bias'
+ ][0]
+ bw_lstm_w_1 = [
+ v for v in tf.trainable_variables()
+ if v.op.name == 'dis/rnn/bw/multi_rnn_cell/cell_1/basic_lstm_cell/kernel'
+ ][0]
+ bw_lstm_b_1 = [
+ v for v in tf.trainable_variables()
+ if v.op.name == 'dis/rnn/bw/multi_rnn_cell/cell_1/basic_lstm_cell/bias'
+ ][0]
+
+ variable_mapping = {
+ 'Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/kernel': bw_lstm_w_0,
+ 'Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/bias': bw_lstm_b_0,
+ 'Model/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/kernel': bw_lstm_w_1,
+ 'Model/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/bias': bw_lstm_b_1
+ }
+ return variable_mapping
+
+
+def dis_encoder_seq2seq(hparams):
+ """Returns the PTB Variable name to MaskGAN Variable
+ dictionary mapping.
+
+ Args:
+ hparams: Hyperparameters for the MaskGAN.
+
+ Returns:
+    variable_mapping: Dictionary with Key: ckpt_name, Value: model_var.
+ """
+ assert FLAGS.discriminator_model == 'seq2seq_vd'
+ assert hparams.dis_num_layers == 2
+
+ ## Encoder forward variables.
+ encoder_lstm_w_0 = [
+ v for v in tf.trainable_variables() if v.op.name ==
+ 'dis/encoder/rnn/multi_rnn_cell/cell_0/basic_lstm_cell/kernel'
+ ][0]
+ encoder_lstm_b_0 = [
+ v for v in tf.trainable_variables() if v.op.name ==
+ 'dis/encoder/rnn/multi_rnn_cell/cell_0/basic_lstm_cell/bias'
+ ][0]
+ encoder_lstm_w_1 = [
+ v for v in tf.trainable_variables() if v.op.name ==
+ 'dis/encoder/rnn/multi_rnn_cell/cell_1/basic_lstm_cell/kernel'
+ ][0]
+ encoder_lstm_b_1 = [
+ v for v in tf.trainable_variables() if v.op.name ==
+ 'dis/encoder/rnn/multi_rnn_cell/cell_1/basic_lstm_cell/bias'
+ ][0]
+
+ if FLAGS.data_set == 'ptb':
+ model_str = 'Model'
+ else:
+ model_str = 'model'
+
+ variable_mapping = {
+ str(model_str) + '/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/kernel':
+ encoder_lstm_w_0,
+ str(model_str) + '/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/bias':
+ encoder_lstm_b_0,
+ str(model_str) + '/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/kernel':
+ encoder_lstm_w_1,
+ str(model_str) + '/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/bias':
+ encoder_lstm_b_1
+ }
+ return variable_mapping
+
+
+def dis_decoder_seq2seq(hparams):
+ assert FLAGS.discriminator_model == 'seq2seq_vd'
+ assert hparams.dis_num_layers == 2
+
+ if not FLAGS.dis_share_embedding:
+ decoder_embedding = [
+ v for v in tf.trainable_variables()
+ if v.op.name == 'dis/decoder/rnn/embedding'
+ ][0]
+ decoder_lstm_w_0 = [
+ v for v in tf.trainable_variables() if v.op.name ==
+ 'dis/decoder/rnn/multi_rnn_cell/cell_0/basic_lstm_cell/kernel'
+ ][0]
+ decoder_lstm_b_0 = [
+ v for v in tf.trainable_variables() if v.op.name ==
+ 'dis/decoder/rnn/multi_rnn_cell/cell_0/basic_lstm_cell/bias'
+ ][0]
+ decoder_lstm_w_1 = [
+ v for v in tf.trainable_variables() if v.op.name ==
+ 'dis/decoder/rnn/multi_rnn_cell/cell_1/basic_lstm_cell/kernel'
+ ][0]
+ decoder_lstm_b_1 = [
+ v for v in tf.trainable_variables() if v.op.name ==
+ 'dis/decoder/rnn/multi_rnn_cell/cell_1/basic_lstm_cell/bias'
+ ][0]
+
+ if FLAGS.data_set == 'ptb':
+ model_str = 'Model'
+ else:
+ model_str = 'model'
+
+ if not FLAGS.dis_share_embedding:
+ variable_mapping = {
+ str(model_str) + '/embedding':
+ decoder_embedding,
+ str(model_str) + '/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/kernel':
+ decoder_lstm_w_0,
+ str(model_str) + '/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/bias':
+ decoder_lstm_b_0,
+ str(model_str) + '/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/kernel':
+ decoder_lstm_w_1,
+ str(model_str) + '/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/bias':
+ decoder_lstm_b_1
+ }
+ else:
+ variable_mapping = {
+ str(model_str) + '/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/kernel':
+ decoder_lstm_w_0,
+ str(model_str) + '/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/bias':
+ decoder_lstm_b_0,
+ str(model_str) + '/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/kernel':
+ decoder_lstm_w_1,
+ str(model_str) + '/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/bias':
+ decoder_lstm_b_1,
+ }
+ return variable_mapping
+
+
+def dis_seq2seq_vd(hparams):
+ assert FLAGS.discriminator_model == 'seq2seq_vd'
+ assert hparams.dis_num_layers == 2
+
+ if not FLAGS.dis_share_embedding:
+ decoder_embedding = [
+ v for v in tf.trainable_variables()
+ if v.op.name == 'dis/decoder/rnn/embedding'
+ ][0]
+
+ ## Encoder variables.
+ encoder_lstm_w_0 = [
+ v for v in tf.trainable_variables() if v.op.name ==
+ 'dis/encoder/rnn/multi_rnn_cell/cell_0/basic_lstm_cell/kernel'
+ ][0]
+ encoder_lstm_b_0 = [
+ v for v in tf.trainable_variables() if v.op.name ==
+ 'dis/encoder/rnn/multi_rnn_cell/cell_0/basic_lstm_cell/bias'
+ ][0]
+ encoder_lstm_w_1 = [
+ v for v in tf.trainable_variables() if v.op.name ==
+ 'dis/encoder/rnn/multi_rnn_cell/cell_1/basic_lstm_cell/kernel'
+ ][0]
+ encoder_lstm_b_1 = [
+ v for v in tf.trainable_variables() if v.op.name ==
+ 'dis/encoder/rnn/multi_rnn_cell/cell_1/basic_lstm_cell/bias'
+ ][0]
+
+ ## Attention.
+ if FLAGS.attention_option is not None:
+ decoder_attention_keys = [
+ v for v in tf.trainable_variables()
+ if v.op.name == 'dis/decoder/attention_keys/weights'
+ ][0]
+ decoder_attention_construct_weights = [
+ v for v in tf.trainable_variables()
+ if v.op.name == 'dis/decoder/rnn/attention_construct/weights'
+ ][0]
+
+ ## Decoder.
+ decoder_lstm_w_0 = [
+ v for v in tf.trainable_variables() if v.op.name ==
+ 'dis/decoder/rnn/multi_rnn_cell/cell_0/basic_lstm_cell/kernel'
+ ][0]
+ decoder_lstm_b_0 = [
+ v for v in tf.trainable_variables() if v.op.name ==
+ 'dis/decoder/rnn/multi_rnn_cell/cell_0/basic_lstm_cell/bias'
+ ][0]
+ decoder_lstm_w_1 = [
+ v for v in tf.trainable_variables() if v.op.name ==
+ 'dis/decoder/rnn/multi_rnn_cell/cell_1/basic_lstm_cell/kernel'
+ ][0]
+ decoder_lstm_b_1 = [
+ v for v in tf.trainable_variables() if v.op.name ==
+ 'dis/decoder/rnn/multi_rnn_cell/cell_1/basic_lstm_cell/bias'
+ ][0]
+
+ # Standard variable mappings.
+ variable_mapping = {
+ 'gen/encoder/rnn/multi_rnn_cell/cell_0/basic_lstm_cell/kernel':
+ encoder_lstm_w_0,
+ 'gen/encoder/rnn/multi_rnn_cell/cell_0/basic_lstm_cell/bias':
+ encoder_lstm_b_0,
+ 'gen/encoder/rnn/multi_rnn_cell/cell_1/basic_lstm_cell/kernel':
+ encoder_lstm_w_1,
+ 'gen/encoder/rnn/multi_rnn_cell/cell_1/basic_lstm_cell/bias':
+ encoder_lstm_b_1,
+ 'gen/decoder/rnn/multi_rnn_cell/cell_0/basic_lstm_cell/kernel':
+ decoder_lstm_w_0,
+ 'gen/decoder/rnn/multi_rnn_cell/cell_0/basic_lstm_cell/bias':
+ decoder_lstm_b_0,
+ 'gen/decoder/rnn/multi_rnn_cell/cell_1/basic_lstm_cell/kernel':
+ decoder_lstm_w_1,
+ 'gen/decoder/rnn/multi_rnn_cell/cell_1/basic_lstm_cell/bias':
+ decoder_lstm_b_1
+ }
+
+ # Optional variable mappings.
+ if not FLAGS.dis_share_embedding:
+ variable_mapping['gen/decoder/rnn/embedding'] = decoder_embedding
+ if FLAGS.attention_option is not None:
+ variable_mapping[
+ 'gen/decoder/attention_keys/weights'] = decoder_attention_keys
+ variable_mapping[
+ 'gen/decoder/rnn/attention_construct/weights'] = decoder_attention_construct_weights
+
+ return variable_mapping
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/attention_utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/attention_utils.py"
new file mode 100644
index 00000000..4bd9e41d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/attention_utils.py"
@@ -0,0 +1,477 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Attention-based decoder functions."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+from tensorflow.python.framework import function
+
+__all__ = [
+ "prepare_attention", "attention_decoder_fn_train",
+ "attention_decoder_fn_inference"
+]
+
+
+def attention_decoder_fn_train(encoder_state,
+ attention_keys,
+ attention_values,
+ attention_score_fn,
+ attention_construct_fn,
+ name=None):
+ """Attentional decoder function for `dynamic_rnn_decoder` during training.
+
+ The `attention_decoder_fn_train` is a training function for an
+ attention-based sequence-to-sequence model. It should be used when
+ `dynamic_rnn_decoder` is in the training mode.
+
+ The `attention_decoder_fn_train` is called with a set of the user arguments
+ and returns the `decoder_fn`, which can be passed to the
+ `dynamic_rnn_decoder`, such that
+
+ ```
+ dynamic_fn_train = attention_decoder_fn_train(encoder_state)
+ outputs_train, state_train = dynamic_rnn_decoder(
+ decoder_fn=dynamic_fn_train, ...)
+ ```
+
+ Further usage can be found in the `kernel_tests/seq2seq_test.py`.
+
+ Args:
+ encoder_state: The encoded state to initialize the `dynamic_rnn_decoder`.
+ attention_keys: to be compared with target states.
+ attention_values: to be used to construct context vectors.
+ attention_score_fn: to compute similarity between key and target states.
+ attention_construct_fn: to build attention states.
+ name: (default: `None`) NameScope for the decoder function;
+      defaults to "attention_decoder_fn_train"
+
+ Returns:
+ A decoder function with the required interface of `dynamic_rnn_decoder`
+ intended for training.
+ """
+ with tf.name_scope(name, "attention_decoder_fn_train", [
+ encoder_state, attention_keys, attention_values, attention_score_fn,
+ attention_construct_fn
+ ]):
+ pass
+
+ def decoder_fn(time, cell_state, cell_input, cell_output, context_state):
+ """Decoder function used in the `dynamic_rnn_decoder` for training.
+
+ Args:
+ time: positive integer constant reflecting the current timestep.
+ cell_state: state of RNNCell.
+ cell_input: input provided by `dynamic_rnn_decoder`.
+ cell_output: output of RNNCell.
+ context_state: context state provided by `dynamic_rnn_decoder`.
+
+ Returns:
+ A tuple (done, next state, next input, emit output, next context state)
+ where:
+
+ done: `None`, which is used by the `dynamic_rnn_decoder` to indicate
+ that `sequence_lengths` in `dynamic_rnn_decoder` should be used.
+
+ next state: `cell_state`, this decoder function does not modify the
+ given state.
+
+ next input: `cell_input`, this decoder function does not modify the
+ given input. The input could be modified when applying e.g. attention.
+
+ emit output: `cell_output`, this decoder function does not modify the
+ given output.
+
+ next context state: `context_state`, this decoder function does not
+ modify the given context state. The context state could be modified when
+ applying e.g. beam search.
+ """
+ with tf.name_scope(
+ name, "attention_decoder_fn_train",
+ [time, cell_state, cell_input, cell_output, context_state]):
+ if cell_state is None: # first call, return encoder_state
+ cell_state = encoder_state
+
+ # init attention
+ attention = _init_attention(encoder_state)
+ else:
+ # construct attention
+ attention = attention_construct_fn(cell_output, attention_keys,
+ attention_values)
+ cell_output = attention
+
+ # combine cell_input and attention
+ next_input = tf.concat([cell_input, attention], 1)
+
+ return (None, cell_state, next_input, cell_output, context_state)
+
+ return decoder_fn
+
+
+def attention_decoder_fn_inference(output_fn,
+ encoder_state,
+ attention_keys,
+ attention_values,
+ attention_score_fn,
+ attention_construct_fn,
+ embeddings,
+ start_of_sequence_id,
+ end_of_sequence_id,
+ maximum_length,
+ num_decoder_symbols,
+ dtype=tf.int32,
+ name=None):
+ """Attentional decoder function for `dynamic_rnn_decoder` during inference.
+
+ The `attention_decoder_fn_inference` is a simple inference function for a
+ sequence-to-sequence model. It should be used when `dynamic_rnn_decoder` is
+ in the inference mode.
+
+ The `attention_decoder_fn_inference` is called with user arguments
+ and returns the `decoder_fn`, which can be passed to the
+ `dynamic_rnn_decoder`, such that
+
+ ```
+ dynamic_fn_inference = attention_decoder_fn_inference(...)
+ outputs_inference, state_inference = dynamic_rnn_decoder(
+ decoder_fn=dynamic_fn_inference, ...)
+ ```
+
+ Further usage can be found in the `kernel_tests/seq2seq_test.py`.
+
+ Args:
+ output_fn: An output function to project your `cell_output` onto class
+ logits.
+
+ An example of an output function;
+
+ ```
+      with tf.variable_scope("decoder") as varscope:
+        output_fn = lambda x: tf.contrib.layers.linear(x, num_decoder_symbols,
+                                                       scope=varscope)
+
+ outputs_train, state_train = seq2seq.dynamic_rnn_decoder(...)
+ logits_train = output_fn(outputs_train)
+
+ varscope.reuse_variables()
+ logits_inference, state_inference = seq2seq.dynamic_rnn_decoder(
+ output_fn=output_fn, ...)
+ ```
+
+ If `None` is supplied it will act as an identity function, which
+ might be wanted when using the RNNCell `OutputProjectionWrapper`.
+
+ encoder_state: The encoded state to initialize the `dynamic_rnn_decoder`.
+ attention_keys: to be compared with target states.
+ attention_values: to be used to construct context vectors.
+ attention_score_fn: to compute similarity between key and target states.
+ attention_construct_fn: to build attention states.
+ embeddings: The embeddings matrix used for the decoder sized
+ `[num_decoder_symbols, embedding_size]`.
+ start_of_sequence_id: The start of sequence ID in the decoder embeddings.
+ end_of_sequence_id: The end of sequence ID in the decoder embeddings.
+    maximum_length: The maximum allowed number of time steps to decode.
+ num_decoder_symbols: The number of classes to decode at each time step.
+ dtype: (default: `tf.int32`) The default data type to use when
+ handling integer objects.
+ name: (default: `None`) NameScope for the decoder function;
+ defaults to "attention_decoder_fn_inference"
+
+ Returns:
+ A decoder function with the required interface of `dynamic_rnn_decoder`
+ intended for inference.
+ """
+ with tf.name_scope(name, "attention_decoder_fn_inference", [
+ output_fn, encoder_state, attention_keys, attention_values,
+ attention_score_fn, attention_construct_fn, embeddings,
+ start_of_sequence_id, end_of_sequence_id, maximum_length,
+ num_decoder_symbols, dtype
+ ]):
+ start_of_sequence_id = tf.convert_to_tensor(start_of_sequence_id, dtype)
+ end_of_sequence_id = tf.convert_to_tensor(end_of_sequence_id, dtype)
+ maximum_length = tf.convert_to_tensor(maximum_length, dtype)
+ num_decoder_symbols = tf.convert_to_tensor(num_decoder_symbols, dtype)
+ encoder_info = tf.contrib.framework.nest.flatten(encoder_state)[0]
+ batch_size = encoder_info.get_shape()[0].value
+ if output_fn is None:
+ output_fn = lambda x: x
+ if batch_size is None:
+ batch_size = tf.shape(encoder_info)[0]
+
+ def decoder_fn(time, cell_state, cell_input, cell_output, context_state):
+ """Decoder function used in the `dynamic_rnn_decoder` for inference.
+
+ The main difference between this decoder function and the `decoder_fn` in
+      `attention_decoder_fn_train` is how `next_cell_input` is calculated. In
+      this decoder function we calculate the next input by applying an argmax
+      across the feature dimension of the output from the decoder. This is a
+ greedy-search approach. (Bahdanau et al., 2014) & (Sutskever et al., 2014)
+ use beam-search instead.
+
+ Args:
+ time: positive integer constant reflecting the current timestep.
+ cell_state: state of RNNCell.
+ cell_input: input provided by `dynamic_rnn_decoder`.
+ cell_output: output of RNNCell.
+ context_state: context state provided by `dynamic_rnn_decoder`.
+
+ Returns:
+ A tuple (done, next state, next input, emit output, next context state)
+ where:
+
+        done: A boolean vector indicating which sentences have reached an
+        `end_of_sequence_id`. This is used for early stopping by the
+        `dynamic_rnn_decoder`. When `time > maximum_length` a boolean vector
+        with all elements set to `true` is returned.
+
+ next state: `cell_state`, this decoder function does not modify the
+ given state.
+
+ next input: The embedding from argmax of the `cell_output` is used as
+ `next_input`.
+
+ emit output: If `output_fn is None` the supplied `cell_output` is
+ returned, else the `output_fn` is used to update the `cell_output`
+ before calculating `next_input` and returning `cell_output`.
+
+ next context state: `context_state`, this decoder function does not
+ modify the given context state. The context state could be modified when
+ applying e.g. beam search.
+
+ Raises:
+ ValueError: if cell_input is not None.
+
+ """
+ with tf.name_scope(
+ name, "attention_decoder_fn_inference",
+ [time, cell_state, cell_input, cell_output, context_state]):
+ if cell_input is not None:
+ raise ValueError(
+ "Expected cell_input to be None, but saw: %s" % cell_input)
+ if cell_output is None:
+ # invariant that this is time == 0
+        next_input_id = (
+            tf.ones([batch_size], dtype=dtype) * start_of_sequence_id)
+        done = tf.zeros([batch_size], dtype=tf.bool)
+ cell_state = encoder_state
+ cell_output = tf.zeros([num_decoder_symbols], dtype=tf.float32)
+ cell_input = tf.gather(embeddings, next_input_id)
+
+ # init attention
+ attention = _init_attention(encoder_state)
+ else:
+ # construct attention
+ attention = attention_construct_fn(cell_output, attention_keys,
+ attention_values)
+ cell_output = attention
+
+ # argmax decoder
+ cell_output = output_fn(cell_output) # logits
+ next_input_id = tf.cast(tf.argmax(cell_output, 1), dtype=dtype)
+ done = tf.equal(next_input_id, end_of_sequence_id)
+ cell_input = tf.gather(embeddings, next_input_id)
+
+ # combine cell_input and attention
+ next_input = tf.concat([cell_input, attention], 1)
+
+        # If time > maximum_length, return an all-True `done` vector.
+        done = tf.cond(
+            tf.greater(time, maximum_length),
+            lambda: tf.ones([batch_size], dtype=tf.bool),
+            lambda: done)
+ return (done, cell_state, next_input, cell_output, context_state)
+
+ return decoder_fn
+
+
+## Helper functions ##
+def prepare_attention(attention_states, attention_option, num_units,
+ reuse=None):
+ """Prepare keys/values/functions for attention.
+
+ Args:
+ attention_states: hidden states to attend over.
+ attention_option: how to compute attention, either "luong" or "bahdanau".
+ num_units: hidden state dimension.
+ reuse: whether to reuse variable scope.
+
+ Returns:
+ attention_keys: to be compared with target states.
+ attention_values: to be used to construct context vectors.
+ attention_score_fn: to compute similarity between key and target states.
+ attention_construct_fn: to build attention states.
+ """
+ # Prepare attention keys / values from attention_states
+ with tf.variable_scope("attention_keys", reuse=reuse) as scope:
+ attention_keys = tf.contrib.layers.linear(
+ attention_states, num_units, biases_initializer=None, scope=scope)
+ attention_values = attention_states
+
+ # Attention score function
+ attention_score_fn = _create_attention_score_fn("attention_score", num_units,
+ attention_option, reuse)
+ # Attention construction function
+ attention_construct_fn = _create_attention_construct_fn(
+ "attention_construct", num_units, attention_score_fn, reuse)
+
+ return (attention_keys, attention_values, attention_score_fn,
+ attention_construct_fn)
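+
+
+# Illustrative wiring of the helpers above (hypothetical tensors; the real
+# call sites live in the seq2seq model definitions):
+#
+#   keys, values, score_fn, construct_fn = prepare_attention(
+#       encoder_outputs, 'luong', num_units=hparams.gen_rnn_size)
+#   decoder_fn = attention_decoder_fn_train(
+#       encoder_state, keys, values, score_fn, construct_fn)
+#   # decoder_fn is then passed to a dynamic_rnn_decoder-style loop.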
+
+
+def _init_attention(encoder_state):
+ """Initialize attention. Handling both LSTM and GRU.
+
+ Args:
+ encoder_state: The encoded state to initialize the `dynamic_rnn_decoder`.
+
+ Returns:
+ attn: initial zero attention vector.
+ """
+
+ # Multi- vs single-layer
+ # TODO(thangluong): is this the best way to check?
+ if isinstance(encoder_state, tuple):
+ top_state = encoder_state[-1]
+ else:
+ top_state = encoder_state
+
+ # LSTM vs GRU
+ if isinstance(top_state, tf.contrib.rnn.LSTMStateTuple):
+ attn = tf.zeros_like(top_state.h)
+ else:
+ attn = tf.zeros_like(top_state)
+
+ return attn
+
+
+def _create_attention_construct_fn(name, num_units, attention_score_fn, reuse):
+ """Function to compute attention vectors.
+
+ Args:
+ name: to label variables.
+ num_units: hidden state dimension.
+ attention_score_fn: to compute similarity between key and target states.
+ reuse: whether to reuse variable scope.
+
+ Returns:
+ attention_construct_fn: to build attention states.
+ """
+
+ def construct_fn(attention_query, attention_keys, attention_values):
+ with tf.variable_scope(name, reuse=reuse) as scope:
+ context = attention_score_fn(attention_query, attention_keys,
+ attention_values)
+ concat_input = tf.concat([attention_query, context], 1)
+ attention = tf.contrib.layers.linear(
+ concat_input, num_units, biases_initializer=None, scope=scope)
+ return attention
+
+ return construct_fn
+
+
+# keys: [batch_size, attention_length, attn_size]
+# query: [batch_size, 1, attn_size]
+# return weights [batch_size, attention_length]
+@function.Defun(func_name="attn_add_fun", noinline=True)
+def _attn_add_fun(v, keys, query):
+ return tf.reduce_sum(v * tf.tanh(keys + query), [2])
+
+
+@function.Defun(func_name="attn_mul_fun", noinline=True)
+def _attn_mul_fun(keys, query):
+ return tf.reduce_sum(keys * query, [2])
+
+
+def _create_attention_score_fn(name,
+ num_units,
+ attention_option,
+ reuse,
+ dtype=tf.float32):
+ """Different ways to compute attention scores.
+
+ Args:
+ name: to label variables.
+ num_units: hidden state dimension.
+ attention_option: how to compute attention, either "luong" or "bahdanau".
+ "bahdanau": additive (Bahdanau et al., ICLR'2015)
+ "luong": multiplicative (Luong et al., EMNLP'2015)
+ reuse: whether to reuse variable scope.
+ dtype: (default: `tf.float32`) data type to use.
+
+ Returns:
+ attention_score_fn: to compute similarity between key and target states.
+ """
+ with tf.variable_scope(name, reuse=reuse):
+ if attention_option == "bahdanau":
+ query_w = tf.get_variable("attnW", [num_units, num_units], dtype=dtype)
+ score_v = tf.get_variable("attnV", [num_units], dtype=dtype)
+
+ def attention_score_fn(query, keys, values):
+ """Put attention masks on attention_values using attention_keys and query.
+
+ Args:
+ query: A Tensor of shape [batch_size, num_units].
+ keys: A Tensor of shape [batch_size, attention_length, num_units].
+ values: A Tensor of shape [batch_size, attention_length, num_units].
+
+ Returns:
+ context_vector: A Tensor of shape [batch_size, num_units].
+
+ Raises:
+        ValueError: if attention_option is neither "luong" nor "bahdanau".
+
+
+ """
+ if attention_option == "bahdanau":
+ # transform query
+ query = tf.matmul(query, query_w)
+
+ # reshape query: [batch_size, 1, num_units]
+ query = tf.reshape(query, [-1, 1, num_units])
+
+ # attn_fun
+ scores = _attn_add_fun(score_v, keys, query)
+ elif attention_option == "luong":
+ # reshape query: [batch_size, 1, num_units]
+ query = tf.reshape(query, [-1, 1, num_units])
+
+ # attn_fun
+ scores = _attn_mul_fun(keys, query)
+ else:
+ raise ValueError("Unknown attention option %s!" % attention_option)
+
+ # Compute alignment weights
+ # scores: [batch_size, length]
+ # alignments: [batch_size, length]
+ # TODO(thangluong): not normalize over padding positions.
+ alignments = tf.nn.softmax(scores)
+
+ # Now calculate the attention-weighted vector.
+ alignments = tf.expand_dims(alignments, 2)
+ context_vector = tf.reduce_sum(alignments * values, [1])
+ context_vector.set_shape([None, num_units])
+
+ return context_vector
+
+ return attention_score_fn
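+
+# In summary, with q the query (linearly transformed by attnW in the bahdanau
+# case) broadcast over the length axis and k_i the i-th key vector, the scores
+# computed above are:
+#   bahdanau: score_i = sum_d v_d * tanh(k_{i,d} + q_d)
+#   luong:    score_i = sum_d k_{i,d} * q_d  (a dot product)
+# followed by a softmax over i to obtain the alignment weights.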
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/bidirectional.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/bidirectional.py"
new file mode 100644
index 00000000..1e6b3fe4
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/bidirectional.py"
@@ -0,0 +1,75 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Simple bidirectional model definitions."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+# ZoneoutWrapper.
+from regularization import zoneout
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def discriminator(hparams, sequence, is_training, reuse=None):
+ """Define the bidirectional Discriminator graph."""
+ sequence = tf.cast(sequence, tf.int32)
+
+ if FLAGS.dis_share_embedding:
+ assert hparams.dis_rnn_size == hparams.gen_rnn_size, (
+ 'If you wish to share Discriminator/Generator embeddings, they must be'
+ ' same dimension.')
+ with tf.variable_scope('gen/rnn', reuse=True):
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.gen_rnn_size])
+
+ with tf.variable_scope('dis', reuse=reuse):
+ cell_fwd = tf.contrib.rnn.LayerNormBasicLSTMCell(
+ hparams.dis_rnn_size, forget_bias=1.0, reuse=reuse)
+ cell_bwd = tf.contrib.rnn.LayerNormBasicLSTMCell(
+ hparams.dis_rnn_size, forget_bias=1.0, reuse=reuse)
+ if FLAGS.zoneout_drop_prob > 0.0:
+ cell_fwd = zoneout.ZoneoutWrapper(
+ cell_fwd,
+ zoneout_drop_prob=FLAGS.zoneout_drop_prob,
+ is_training=is_training)
+ cell_bwd = zoneout.ZoneoutWrapper(
+ cell_bwd,
+ zoneout_drop_prob=FLAGS.zoneout_drop_prob,
+ is_training=is_training)
+
+ state_fwd = cell_fwd.zero_state(FLAGS.batch_size, tf.float32)
+ state_bwd = cell_bwd.zero_state(FLAGS.batch_size, tf.float32)
+
+ if not FLAGS.dis_share_embedding:
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.dis_rnn_size])
+
+ rnn_inputs = tf.nn.embedding_lookup(embedding, sequence)
+ rnn_inputs = tf.unstack(rnn_inputs, axis=1)
+
+ with tf.variable_scope('rnn') as vs:
+ outputs, _, _ = tf.contrib.rnn.static_bidirectional_rnn(
+ cell_fwd, cell_bwd, rnn_inputs, state_fwd, state_bwd, scope=vs)
+
+ # Prediction is linear output for Discriminator.
+ predictions = tf.contrib.layers.linear(outputs, 1, scope=vs)
+
+ predictions = tf.transpose(predictions, [1, 0, 2])
+ return tf.squeeze(predictions, axis=2)
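+
+# Note: after the transpose and squeeze above, the returned tensor has shape
+# [batch_size, sequence_length]: one unnormalized real/fake logit per token
+# position.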
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/bidirectional_vd.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/bidirectional_vd.py"
new file mode 100644
index 00000000..469af9da
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/bidirectional_vd.py"
@@ -0,0 +1,116 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Simple bidirectional model definitions."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+from regularization import variational_dropout
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def discriminator(hparams,
+ sequence,
+ is_training,
+ reuse=None,
+ initial_state=None):
+ """Define the Discriminator graph."""
+ sequence = tf.cast(sequence, tf.int32)
+
+ if FLAGS.dis_share_embedding:
+ assert hparams.dis_rnn_size == hparams.gen_rnn_size, (
+ 'If you wish to share Discriminator/Generator embeddings, they must be'
+ ' same dimension.')
+ with tf.variable_scope('gen/decoder/rnn', reuse=True):
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.gen_rnn_size])
+
+ with tf.variable_scope('dis', reuse=reuse):
+
+ def lstm_cell():
+ return tf.contrib.rnn.BasicLSTMCell(
+ hparams.dis_rnn_size,
+ forget_bias=0.0,
+ state_is_tuple=True,
+ reuse=reuse)
+
+ attn_cell = lstm_cell
+ if is_training and hparams.dis_vd_keep_prob < 1:
+
+ def attn_cell():
+ return variational_dropout.VariationalDropoutWrapper(
+ lstm_cell(), FLAGS.batch_size, hparams.dis_rnn_size,
+ hparams.dis_vd_keep_prob, hparams.dis_vd_keep_prob)
+
+ cell_fwd = tf.contrib.rnn.MultiRNNCell(
+ [attn_cell() for _ in range(hparams.dis_num_layers)],
+ state_is_tuple=True)
+
+ cell_bwd = tf.contrib.rnn.MultiRNNCell(
+ [attn_cell() for _ in range(hparams.dis_num_layers)],
+ state_is_tuple=True)
+
+ if initial_state:
+ state_fwd = [[tf.identity(x) for x in inner_initial_state]
+ for inner_initial_state in initial_state]
+ state_bwd = cell_bwd.zero_state(FLAGS.batch_size, tf.float32)
+ else:
+ state_fwd = cell_fwd.zero_state(FLAGS.batch_size, tf.float32)
+ state_bwd = cell_bwd.zero_state(FLAGS.batch_size, tf.float32)
+
+ def make_mask(keep_prob, units):
+ random_tensor = keep_prob
+ # 0. if [keep_prob, 1.0) and 1. if [1.0, 1.0 + keep_prob)
+ random_tensor += tf.random_uniform(tf.stack([FLAGS.batch_size, units]))
+ return tf.floor(random_tensor) / keep_prob
+
+ if is_training:
+ output_mask = make_mask(hparams.dis_vd_keep_prob,
+ 2 * hparams.dis_rnn_size)
+
+ if not FLAGS.dis_share_embedding:
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.dis_rnn_size])
+
+ rnn_inputs = tf.nn.embedding_lookup(embedding, sequence)
+
+ rnn_inputs = tf.unstack(rnn_inputs, axis=1)
+
+ with tf.variable_scope('rnn') as vs:
+ outputs, _, _ = tf.contrib.rnn.static_bidirectional_rnn(
+ cell_fwd, cell_bwd, rnn_inputs, state_fwd, state_bwd, scope=vs)
+
+ if is_training:
+ outputs *= output_mask
+
+ # Prediction is linear output for Discriminator.
+ predictions = tf.contrib.layers.linear(outputs, 1, scope=vs)
+ predictions = tf.transpose(predictions, [1, 0, 2])
+
+ if FLAGS.baseline_method == 'critic':
+ with tf.variable_scope('critic', reuse=reuse) as critic_scope:
+ values = tf.contrib.layers.linear(outputs, 1, scope=critic_scope)
+ values = tf.transpose(values, [1, 0, 2])
+
+ return tf.squeeze(predictions, axis=2), tf.squeeze(values, axis=2)
+
+ else:
+ return tf.squeeze(predictions, axis=2), None
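+
+# Note: when FLAGS.baseline_method == 'critic', the second return value above
+# carries per-timestep critic estimates of shape [batch_size, sequence_length];
+# these are assumed to act as a baseline for the generator's policy-gradient
+# update. Otherwise None is returned in its place.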
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/bidirectional_zaremba.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/bidirectional_zaremba.py"
new file mode 100644
index 00000000..b0683d7c
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/bidirectional_zaremba.py"
@@ -0,0 +1,83 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Simple bidirectional model definitions."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def discriminator(hparams, sequence, is_training, reuse=None):
+ """Define the bidirectional Discriminator graph."""
+ sequence = tf.cast(sequence, tf.int32)
+
+ if FLAGS.dis_share_embedding:
+ assert hparams.dis_rnn_size == hparams.gen_rnn_size, (
+ 'If you wish to share Discriminator/Generator embeddings, they must be'
+ ' same dimension.')
+ with tf.variable_scope('gen/rnn', reuse=True):
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.gen_rnn_size])
+
+ with tf.variable_scope('dis', reuse=reuse):
+
+ def lstm_cell():
+ return tf.contrib.rnn.BasicLSTMCell(
+ hparams.dis_rnn_size,
+ forget_bias=0.0,
+ state_is_tuple=True,
+ reuse=reuse)
+
+ attn_cell = lstm_cell
+ if is_training and FLAGS.keep_prob < 1:
+
+ def attn_cell():
+ return tf.contrib.rnn.DropoutWrapper(
+ lstm_cell(), output_keep_prob=FLAGS.keep_prob)
+
+ cell_fwd = tf.contrib.rnn.MultiRNNCell(
+ [attn_cell() for _ in range(hparams.dis_num_layers)],
+ state_is_tuple=True)
+
+ cell_bwd = tf.contrib.rnn.MultiRNNCell(
+ [attn_cell() for _ in range(hparams.dis_num_layers)],
+ state_is_tuple=True)
+
+ state_fwd = cell_fwd.zero_state(FLAGS.batch_size, tf.float32)
+ state_bwd = cell_bwd.zero_state(FLAGS.batch_size, tf.float32)
+
+ if not FLAGS.dis_share_embedding:
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.dis_rnn_size])
+
+ rnn_inputs = tf.nn.embedding_lookup(embedding, sequence)
+ if is_training and FLAGS.keep_prob < 1:
+ rnn_inputs = tf.nn.dropout(rnn_inputs, FLAGS.keep_prob)
+ rnn_inputs = tf.unstack(rnn_inputs, axis=1)
+
+ with tf.variable_scope('rnn') as vs:
+ outputs, _, _ = tf.contrib.rnn.static_bidirectional_rnn(
+ cell_fwd, cell_bwd, rnn_inputs, state_fwd, state_bwd, scope=vs)
+
+ # Prediction is linear output for Discriminator.
+ predictions = tf.contrib.layers.linear(outputs, 1, scope=vs)
+
+ predictions = tf.transpose(predictions, [1, 0, 2])
+ return tf.squeeze(predictions, axis=2)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/cnn.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/cnn.py"
new file mode 100644
index 00000000..ca682deb
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/cnn.py"
@@ -0,0 +1,93 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Simple CNN model definitions."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def discriminator(hparams, sequence, is_training, reuse=None):
+ """Define the Discriminator graph."""
+ del is_training
+ sequence = tf.cast(sequence, tf.int32)
+
+ if FLAGS.dis_share_embedding:
+ assert hparams.dis_rnn_size == hparams.gen_rnn_size, (
+ "If you wish to share Discriminator/Generator embeddings, they must be"
+ " same dimension.")
+ with tf.variable_scope("gen/rnn", reuse=True):
+ embedding = tf.get_variable("embedding",
+ [FLAGS.vocab_size, hparams.gen_rnn_size])
+
+ dis_filter_sizes = [3, 4, 5, 6, 7, 8, 9, 10, 15, 20]
+
+ with tf.variable_scope("dis", reuse=reuse):
+ if not FLAGS.dis_share_embedding:
+ embedding = tf.get_variable("embedding",
+ [FLAGS.vocab_size, hparams.dis_rnn_size])
+ cnn_inputs = tf.nn.embedding_lookup(embedding, sequence)
+
+ # Create a convolution layer for each filter size
+ conv_outputs = []
+ for filter_size in dis_filter_sizes:
+ with tf.variable_scope("conv-%s" % filter_size):
+ # Convolution Layer
+ filter_shape = [
+ filter_size, hparams.dis_rnn_size, hparams.dis_num_filters
+ ]
+ W = tf.get_variable(
+ name="W", initializer=tf.truncated_normal(filter_shape, stddev=0.1))
+ b = tf.get_variable(
+ name="b",
+ initializer=tf.constant(0.1, shape=[hparams.dis_num_filters]))
+ conv = tf.nn.conv1d(
+ cnn_inputs, W, stride=1, padding="SAME", name="conv")
+
+ # Apply nonlinearity
+ h = tf.nn.relu(tf.nn.bias_add(conv, b), name="relu")
+
+ conv_outputs.append(h)
+
+    # Combine the convolutional features from all filter sizes.
+ dis_num_filters_total = hparams.dis_num_filters * len(dis_filter_sizes)
+
+ h_conv = tf.concat(conv_outputs, axis=2)
+ h_conv_flat = tf.reshape(h_conv, [-1, dis_num_filters_total])
+
+ # Add dropout
+ with tf.variable_scope("dropout"):
+ h_drop = tf.nn.dropout(h_conv_flat, FLAGS.keep_prob)
+
+ with tf.variable_scope("fully_connected"):
+ fc = tf.contrib.layers.fully_connected(
+          h_drop, num_outputs=dis_num_filters_total // 2)
+
+ # Final (unnormalized) scores and predictions
+ with tf.variable_scope("output"):
+ W = tf.get_variable(
+ "W",
+          shape=[dis_num_filters_total // 2, 1],
+ initializer=tf.contrib.layers.xavier_initializer())
+ b = tf.get_variable(name="b", initializer=tf.constant(0.1, shape=[1]))
+ predictions = tf.nn.xw_plus_b(fc, W, b, name="predictions")
+ predictions = tf.reshape(
+ predictions, shape=[FLAGS.batch_size, FLAGS.sequence_length])
+ return predictions
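The CNN discriminator embeds the token ids and runs a bank of 1-D convolutions with `SAME` padding, so every filter width yields one feature vector per token; the feature maps are then concatenated along the channel axis. A rough single-example NumPy sketch of that computation (the widths and sizes are illustrative and this is not TF-exact):

```python
import numpy as np

def conv1d_same(x, w):
    """'SAME' 1-D convolution: x is [time, channels], w is [width, channels, filters]."""
    width, _, num_filters = w.shape
    pad_left = (width - 1) // 2
    pad_right = width - 1 - pad_left
    xp = np.pad(x, ((pad_left, pad_right), (0, 0)))
    out = np.zeros((x.shape[0], num_filters))
    for t in range(x.shape[0]):
        window = xp[t:t + width]                              # [width, channels]
        out[t] = np.tensordot(window, w, axes=([0, 1], [0, 1]))
    return out

seq_len, emb_dim, num_filters = 7, 6, 3
rng = np.random.default_rng(1)
cnn_inputs = rng.normal(size=(seq_len, emb_dim))              # one embedded sequence

conv_outputs = []
for filter_size in (3, 4, 5):
    w = rng.normal(size=(filter_size, emb_dim, num_filters))
    conv_outputs.append(np.maximum(conv1d_same(cnn_inputs, w), 0.0))  # ReLU

h_conv = np.concatenate(conv_outputs, axis=1)                 # [seq_len, 3 * num_filters]
assert h_conv.shape == (seq_len, 3 * num_filters)
```

Because the padding is `SAME` and the stride is 1, each filter size contributes `dis_num_filters` channels at every position, which is why the flattened width is `dis_num_filters * len(dis_filter_sizes)`.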
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/critic_vd.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/critic_vd.py"
new file mode 100644
index 00000000..ede8b7bb
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/critic_vd.py"
@@ -0,0 +1,108 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Critic model definitions."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from six.moves import xrange
+import tensorflow as tf
+from regularization import variational_dropout
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def critic_seq2seq_vd_derivative(hparams, sequence, is_training, reuse=None):
+ """Define the Critic graph which is derived from the seq2seq_vd
+ Discriminator. This will be initialized with the same parameters as the
+ language model and will share the forward RNN components with the
+ Discriminator. This estimates the V(s_t), where the state
+ s_t = x_0,...,x_t-1.
+ """
+ assert FLAGS.discriminator_model == 'seq2seq_vd'
+ sequence = tf.cast(sequence, tf.int32)
+
+ if FLAGS.dis_share_embedding:
+ assert hparams.dis_rnn_size == hparams.gen_rnn_size, (
+ 'If you wish to share Discriminator/Generator embeddings, they must be'
+        ' the same dimension.')
+ with tf.variable_scope('gen/decoder/rnn', reuse=True):
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.gen_rnn_size])
+ else:
+ with tf.variable_scope('dis/decoder/rnn', reuse=True):
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.dis_rnn_size])
+
+ with tf.variable_scope(
+ 'dis/decoder/rnn/multi_rnn_cell', reuse=True) as dis_scope:
+
+ def lstm_cell():
+ return tf.contrib.rnn.BasicLSTMCell(
+ hparams.dis_rnn_size,
+ forget_bias=0.0,
+ state_is_tuple=True,
+ reuse=True)
+
+ attn_cell = lstm_cell
+ if is_training and hparams.dis_vd_keep_prob < 1:
+
+ def attn_cell():
+ return variational_dropout.VariationalDropoutWrapper(
+ lstm_cell(), FLAGS.batch_size, hparams.dis_rnn_size,
+ hparams.dis_vd_keep_prob, hparams.dis_vd_keep_prob)
+
+ cell_critic = tf.contrib.rnn.MultiRNNCell(
+ [attn_cell() for _ in range(hparams.dis_num_layers)],
+ state_is_tuple=True)
+
+ with tf.variable_scope('critic', reuse=reuse):
+ state_dis = cell_critic.zero_state(FLAGS.batch_size, tf.float32)
+
+ def make_mask(keep_prob, units):
+ random_tensor = keep_prob
+ # 0. if [keep_prob, 1.0) and 1. if [1.0, 1.0 + keep_prob)
+ random_tensor += tf.random_uniform(tf.stack([FLAGS.batch_size, units]))
+ return tf.floor(random_tensor) / keep_prob
+
+ if is_training:
+ output_mask = make_mask(hparams.dis_vd_keep_prob, hparams.dis_rnn_size)
+
+ with tf.variable_scope('rnn') as vs:
+ values = []
+
+ rnn_inputs = tf.nn.embedding_lookup(embedding, sequence)
+
+ for t in xrange(FLAGS.sequence_length):
+ if t > 0:
+ tf.get_variable_scope().reuse_variables()
+
+ if t == 0:
+ rnn_in = tf.zeros_like(rnn_inputs[:, 0])
+ else:
+ rnn_in = rnn_inputs[:, t - 1]
+ rnn_out, state_dis = cell_critic(rnn_in, state_dis, scope=dis_scope)
+
+ if is_training:
+ rnn_out *= output_mask
+
+ # Prediction is linear output for Discriminator.
+ value = tf.contrib.layers.linear(rnn_out, 1, scope=vs)
+
+ values.append(value)
+ values = tf.stack(values, axis=1)
+ return tf.squeeze(values, axis=2)
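`make_mask` implements the usual inverted-dropout trick: `floor(keep_prob + U[0, 1))` is 1 with probability `keep_prob` and 0 otherwise, and dividing by `keep_prob` keeps the expected activation unchanged. Because the mask is sampled once outside the time loop, the same units are dropped at every step, which is the variational-dropout behaviour the critic relies on. A small NumPy check (illustrative only):

```python
import numpy as np

def make_mask(keep_prob, batch_size, units, rng):
    # floor(keep_prob + U[0, 1)) is 1 with probability keep_prob, else 0;
    # dividing by keep_prob keeps E[x * mask] == x (inverted dropout).
    random_tensor = keep_prob + rng.uniform(size=(batch_size, units))
    return np.floor(random_tensor) / keep_prob

rng = np.random.default_rng(2)
mask = make_mask(0.75, batch_size=10000, units=8, rng=rng)
print(np.unique(np.round(mask, 4)))  # [0.     1.3333]
print(mask.mean())                   # close to 1.0
```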
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/evaluation_utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/evaluation_utils.py"
new file mode 100644
index 00000000..fc2a3a16
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/evaluation_utils.py"
@@ -0,0 +1,280 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Evaluation utilities."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from collections import Counter
+# Dependency imports
+import numpy as np
+from scipy.special import expit
+
+import tensorflow as tf
+
+from model_utils import helper
+from model_utils import n_gram
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def print_and_log_losses(log, step, is_present_rate, avg_dis_loss,
+ avg_gen_loss):
+ """Prints and logs losses to the log file.
+
+ Args:
+ log: GFile for logs.
+ step: Global step.
+ is_present_rate: Current masking rate.
+ avg_dis_loss: List of Discriminator losses.
+ avg_gen_loss: List of Generator losses.
+ """
+ print('global_step: %d' % step)
+ print(' is_present_rate: %.3f' % is_present_rate)
+ print(' D train loss: %.5f' % np.mean(avg_dis_loss))
+ print(' G train loss: %.5f' % np.mean(avg_gen_loss))
+ log.write('\nglobal_step: %d\n' % step)
+ log.write((' is_present_rate: %.3f\n' % is_present_rate))
+ log.write(' D train loss: %.5f\n' % np.mean(avg_dis_loss))
+ log.write(' G train loss: %.5f\n' % np.mean(avg_gen_loss))
+
+
+def print_and_log(log, id_to_word, sequence_eval, max_num_to_print=5):
+ """Helper function for printing and logging evaluated sequences."""
+ indices_arr = np.asarray(sequence_eval)
+ samples = helper.convert_to_human_readable(id_to_word, indices_arr,
+ max_num_to_print)
+
+ for i, sample in enumerate(samples):
+ print('Sample', i, '. ', sample)
+ log.write('\nSample ' + str(i) + '. ' + sample)
+ log.write('\n')
+ print('\n')
+ log.flush()
+ return samples
+
+
+def zip_seq_pred_crossent(id_to_word, sequences, predictions, cross_entropy):
+ """Zip together the sequences, predictions, cross entropy."""
+ indices = np.asarray(sequences)
+
+ batch_of_metrics = []
+
+ for ind_batch, pred_batch, crossent_batch in zip(indices, predictions,
+ cross_entropy):
+ metrics = []
+
+ for index, pred, crossent in zip(ind_batch, pred_batch, crossent_batch):
+ metrics.append([str(id_to_word[index]), pred, crossent])
+
+ batch_of_metrics.append(metrics)
+ return batch_of_metrics
+
+
+def zip_metrics(indices, *args):
+ """Zip together the indices matrices with the provided metrics matrices."""
+ batch_of_metrics = []
+ for metrics_batch in zip(indices, *args):
+
+ metrics = []
+ for m in zip(*metrics_batch):
+ metrics.append(m)
+ batch_of_metrics.append(metrics)
+ return batch_of_metrics
+
+
+def print_formatted(present, id_to_word, log, batch_of_tuples):
+ """Print and log metrics."""
+ num_cols = len(batch_of_tuples[0][0])
+ repeat_float_format = '{:<12.3f} '
+ repeat_str_format = '{:<13}'
+
+ format_str = ''.join(
+ ['[{:<1}] {:<20}',
+ str(repeat_float_format * (num_cols - 1))])
+
+ # TODO(liamfedus): Generalize the logging. This is sloppy.
+ header_format_str = ''.join(
+ ['[{:<1}] {:<20}',
+ str(repeat_str_format * (num_cols - 1))])
+ header_str = header_format_str.format('p', 'Word', 'p(real)', 'log-perp',
+ 'log(p(a))', 'r', 'R=V*(s)', 'b=V(s)',
+ 'A(a,s)')
+
+ for i, batch in enumerate(batch_of_tuples):
+ print(' Sample: %d' % i)
+ log.write(' Sample %d.\n' % i)
+ print(' ', header_str)
+ log.write(' ' + str(header_str) + '\n')
+
+ for j, t in enumerate(batch):
+ t = list(t)
+ t[0] = id_to_word[t[0]]
+ buffer_str = format_str.format(int(present[i][j]), *t)
+ print(' ', buffer_str)
+ log.write(' ' + str(buffer_str) + '\n')
+ log.flush()
+
+
+def generate_RL_logs(sess, model, log, id_to_word, feed):
+ """Generate complete logs while running with REINFORCE."""
+ # Impute Sequences.
+ [
+ p,
+ fake_sequence_eval,
+ fake_predictions_eval,
+ _,
+ fake_cross_entropy_losses_eval,
+ _,
+ fake_log_probs_eval,
+ fake_rewards_eval,
+ fake_baselines_eval,
+ cumulative_rewards_eval,
+ fake_advantages_eval,
+ ] = sess.run(
+ [
+ model.present,
+ model.fake_sequence,
+ model.fake_predictions,
+ model.real_predictions,
+ model.fake_cross_entropy_losses,
+ model.fake_logits,
+ model.fake_log_probs,
+ model.fake_rewards,
+ model.fake_baselines,
+ model.cumulative_rewards,
+ model.fake_advantages,
+ ],
+ feed_dict=feed)
+
+ indices = np.asarray(fake_sequence_eval)
+
+ # Convert Discriminator linear layer to probability.
+ fake_prob_eval = expit(fake_predictions_eval)
+
+ # Add metrics.
+ fake_tuples = zip_metrics(indices, fake_prob_eval,
+ fake_cross_entropy_losses_eval, fake_log_probs_eval,
+ fake_rewards_eval, cumulative_rewards_eval,
+ fake_baselines_eval, fake_advantages_eval)
+
+ # real_tuples = zip_metrics(indices, )
+
+ # Print forward sequences.
+ tuples_to_print = fake_tuples[:FLAGS.max_num_to_print]
+ print_formatted(p, id_to_word, log, tuples_to_print)
+
+ print('Samples')
+ log.write('Samples\n')
+ samples = print_and_log(log, id_to_word, fake_sequence_eval,
+ FLAGS.max_num_to_print)
+ return samples
+
+
+def generate_logs(sess, model, log, id_to_word, feed):
+ """Impute Sequences using the model for a particular feed and send it to
+ logs."""
+ # Impute Sequences.
+ [
+ p, sequence_eval, fake_predictions_eval, fake_cross_entropy_losses_eval,
+ fake_logits_eval
+ ] = sess.run(
+ [
+ model.present, model.fake_sequence, model.fake_predictions,
+ model.fake_cross_entropy_losses, model.fake_logits
+ ],
+ feed_dict=feed)
+
+ # Convert Discriminator linear layer to probability.
+ fake_prob_eval = expit(fake_predictions_eval)
+
+ # Forward Masked Tuples.
+ fake_tuples = zip_seq_pred_crossent(id_to_word, sequence_eval, fake_prob_eval,
+ fake_cross_entropy_losses_eval)
+
+ tuples_to_print = fake_tuples[:FLAGS.max_num_to_print]
+
+ if FLAGS.print_verbose:
+ print('fake_logits_eval')
+ print(fake_logits_eval)
+
+ for i, batch in enumerate(tuples_to_print):
+ print(' Sample %d.' % i)
+ log.write(' Sample %d.\n' % i)
+ for j, pred in enumerate(batch):
+ buffer_str = ('[{:<1}] {:<20} {:<7.3f} {:<7.3f}').format(
+ int(p[i][j]), pred[0], pred[1], pred[2])
+ print(' ', buffer_str)
+ log.write(' ' + str(buffer_str) + '\n')
+ log.flush()
+
+ print('Samples')
+ log.write('Samples\n')
+ samples = print_and_log(log, id_to_word, sequence_eval,
+ FLAGS.max_num_to_print)
+ return samples
+
+
+def create_merged_ngram_dictionaries(indices, n):
+ """Generate a single dictionary for the full batch.
+
+ Args:
+ indices: List of lists of indices.
+ n: Degree of n-grams.
+
+ Returns:
+ Dictionary of hashed(n-gram tuples) to counts in the batch of indices.
+ """
+ ngram_dicts = []
+
+ for ind in indices:
+ ngrams = n_gram.find_all_ngrams(ind, n=n)
+ ngram_counts = n_gram.construct_ngrams_dict(ngrams)
+ ngram_dicts.append(ngram_counts)
+
+ merged_gen_dict = Counter()
+ for ngram_dict in ngram_dicts:
+ merged_gen_dict += Counter(ngram_dict)
+ return merged_gen_dict
+
+
+def sequence_ngram_evaluation(sess, sequence, log, feed, data_ngram_count, n):
+ """Calculates the percent of ngrams produced in the sequence is present in
+ data_ngram_count.
+
+ Args:
+ sess: tf.Session.
+ sequence: Sequence Tensor from the MaskGAN model.
+ log: gFile log.
+ feed: Feed to evaluate.
+ data_ngram_count: Dictionary of hashed(n-gram tuples) to counts in the
+      data_set.
+    n: Degree of the n-grams to evaluate.
+
+ Returns:
+ avg_percent_captured: Percent of produced ngrams that appear in the
+ data_ngram_count.
+ """
+ del log
+ # Impute sequence.
+ [sequence_eval] = sess.run([sequence], feed_dict=feed)
+ indices = sequence_eval
+
+ # Retrieve the counts across the batch of indices.
+ gen_ngram_counts = create_merged_ngram_dictionaries(
+ indices, n=n)
+ return n_gram.percent_unique_ngrams_in_train(data_ngram_count,
+ gen_ngram_counts)
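`sequence_ngram_evaluation` reports how many of the n-grams in a generated batch also occur in the training data; the heavy lifting is done by the `n_gram` module, which is not part of this diff. A self-contained sketch of the idea, assuming the metric counts distinct n-gram types (the real `percent_unique_ngrams_in_train` may weight by counts instead):

```python
from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def percent_in_train(train_counts, gen_counts):
    # Fraction of distinct generated n-grams that also occur in the training data.
    if not gen_counts:
        return 0.0
    hits = sum(1 for ngram in gen_counts if ngram in train_counts)
    return hits / len(gen_counts)

train = ngram_counts([1, 2, 3, 4, 2, 3, 4, 5], n=3)

merged_gen = Counter()
for sample in [[1, 2, 3, 4, 9], [2, 3, 4, 5]]:
    merged_gen += ngram_counts(sample, n=3)

print(percent_in_train(train, merged_gen))  # 0.75: 3 of 4 distinct generated 3-grams
```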
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/feedforward.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/feedforward.py"
new file mode 100644
index 00000000..d48a517d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/feedforward.py"
@@ -0,0 +1,98 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Simple FNN model definitions."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from six.moves import xrange
+import tensorflow as tf
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def discriminator(hparams, sequence, is_training, reuse=None):
+ """Define the Discriminator graph."""
+ del is_training
+ sequence = tf.cast(sequence, tf.int32)
+
+ if FLAGS.dis_share_embedding:
+ assert hparams.dis_rnn_size == hparams.gen_rnn_size, (
+ "If you wish to share Discriminator/Generator embeddings, they must be"
+ " same dimension.")
+ with tf.variable_scope("gen/rnn", reuse=True):
+ embedding = tf.get_variable("embedding",
+ [FLAGS.vocab_size, hparams.gen_rnn_size])
+
+ with tf.variable_scope("dis", reuse=reuse):
+ if not FLAGS.dis_share_embedding:
+ embedding = tf.get_variable("embedding",
+ [FLAGS.vocab_size, hparams.dis_rnn_size])
+
+ embeddings = tf.nn.embedding_lookup(embedding, sequence)
+
+ # Input matrices.
+ W = tf.get_variable(
+ "W",
+ initializer=tf.truncated_normal(
+ shape=[3 * hparams.dis_embedding_dim, hparams.dis_hidden_dim],
+ stddev=0.1))
+ b = tf.get_variable(
+ "b", initializer=tf.constant(0.1, shape=[hparams.dis_hidden_dim]))
+
+ # Output matrices.
+ W_out = tf.get_variable(
+ "W_out",
+ initializer=tf.truncated_normal(
+ shape=[hparams.dis_hidden_dim, 1], stddev=0.1))
+ b_out = tf.get_variable("b_out", initializer=tf.constant(0.1, shape=[1]))
+
+ predictions = []
+ for t in xrange(FLAGS.sequence_length):
+ if t > 0:
+ tf.get_variable_scope().reuse_variables()
+
+ inp = embeddings[:, t]
+
+ if t > 0:
+ past_inp = tf.unstack(embeddings[:, 0:t], axis=1)
+ avg_past_inp = tf.add_n(past_inp) / len(past_inp)
+ else:
+ avg_past_inp = tf.zeros_like(inp)
+
+ if t < FLAGS.sequence_length:
+ future_inp = tf.unstack(embeddings[:, t:], axis=1)
+ avg_future_inp = tf.add_n(future_inp) / len(future_inp)
+ else:
+ avg_future_inp = tf.zeros_like(inp)
+
+ # Cumulative input.
+ concat_inp = tf.concat([avg_past_inp, inp, avg_future_inp], axis=1)
+
+ # Hidden activations.
+ hidden = tf.nn.relu(tf.nn.xw_plus_b(concat_inp, W, b, name="scores"))
+
+ # Add dropout
+ with tf.variable_scope("dropout"):
+ hidden = tf.nn.dropout(hidden, FLAGS.keep_prob)
+
+ # Output.
+ output = tf.nn.xw_plus_b(hidden, W_out, b_out, name="output")
+
+ predictions.append(output)
+ predictions = tf.stack(predictions, axis=1)
+ return tf.squeeze(predictions, axis=2)
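At each step t, the feed-forward discriminator concatenates the average of the past embeddings, the current embedding, and the average of the remaining embeddings (which, as written, includes the current token) into one feature vector before the hidden layer. A per-example NumPy sketch of those context features (batching and the learned layers are omitted):

```python
import numpy as np

rng = np.random.default_rng(3)
seq_len, emb_dim = 5, 4
embeddings = rng.normal(size=(seq_len, emb_dim))    # one example, already embedded

features = []
for t in range(seq_len):
    inp = embeddings[t]
    avg_past = embeddings[:t].mean(axis=0) if t > 0 else np.zeros_like(inp)
    avg_future = embeddings[t:].mean(axis=0)        # includes the current token
    features.append(np.concatenate([avg_past, inp, avg_future]))

features = np.stack(features)                       # [seq_len, 3 * emb_dim]
assert features.shape == (seq_len, 3 * emb_dim)
```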
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/rnn.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/rnn.py"
new file mode 100644
index 00000000..40b3a7aa
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/rnn.py"
@@ -0,0 +1,211 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Simple RNN model definitions."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from six.moves import xrange
+import tensorflow as tf
+
+# ZoneoutWrapper.
+from regularization import zoneout
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def generator(hparams,
+ inputs,
+ targets,
+ targets_present,
+ is_training,
+ is_validating,
+ reuse=None):
+ """Define the Generator graph.
+
+  G will now impute tokens that have been masked from the input sequence.
+  """
+  tf.logging.warning(
+      'Unidirectional generative model is not a useful model for this MaskGAN '
+ 'because future context is needed. Use only for debugging purposes.')
+ init_scale = 0.05
+ initializer = tf.random_uniform_initializer(-init_scale, init_scale)
+
+ with tf.variable_scope('gen', reuse=reuse, initializer=initializer):
+
+ def lstm_cell():
+ return tf.contrib.rnn.LayerNormBasicLSTMCell(
+ hparams.gen_rnn_size, reuse=reuse)
+
+ attn_cell = lstm_cell
+ if FLAGS.zoneout_drop_prob > 0.0:
+
+ def attn_cell():
+ return zoneout.ZoneoutWrapper(
+ lstm_cell(),
+ zoneout_drop_prob=FLAGS.zoneout_drop_prob,
+ is_training=is_training)
+
+ cell_gen = tf.contrib.rnn.MultiRNNCell(
+ [attn_cell() for _ in range(hparams.gen_num_layers)],
+ state_is_tuple=True)
+
+ initial_state = cell_gen.zero_state(FLAGS.batch_size, tf.float32)
+
+ with tf.variable_scope('rnn'):
+ sequence, logits, log_probs = [], [], []
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.gen_rnn_size])
+ softmax_w = tf.get_variable('softmax_w',
+ [hparams.gen_rnn_size, FLAGS.vocab_size])
+ softmax_b = tf.get_variable('softmax_b', [FLAGS.vocab_size])
+
+ rnn_inputs = tf.nn.embedding_lookup(embedding, inputs)
+
+ for t in xrange(FLAGS.sequence_length):
+ if t > 0:
+ tf.get_variable_scope().reuse_variables()
+
+ # Input to the model is the first token to provide context. The
+ # model will then predict token t > 0.
+ if t == 0:
+ # Always provide the real input at t = 0.
+ state_gen = initial_state
+ rnn_inp = rnn_inputs[:, t]
+
+ # If the target at the last time-step was present, read in the real.
+ # If the target at the last time-step was not present, read in the fake.
+ else:
+ real_rnn_inp = rnn_inputs[:, t]
+ fake_rnn_inp = tf.nn.embedding_lookup(embedding, fake)
+
+ # Use teacher forcing.
+ if (is_training and
+ FLAGS.gen_training_strategy == 'cross_entropy') or is_validating:
+ rnn_inp = real_rnn_inp
+ else:
+ # Note that targets_t-1 == inputs_(t)
+ rnn_inp = tf.where(targets_present[:, t - 1], real_rnn_inp,
+ fake_rnn_inp)
+
+ # RNN.
+ rnn_out, state_gen = cell_gen(rnn_inp, state_gen)
+ logit = tf.matmul(rnn_out, softmax_w) + softmax_b
+
+ # Real sample.
+ real = targets[:, t]
+
+ # Fake sample.
+ categorical = tf.contrib.distributions.Categorical(logits=logit)
+ fake = categorical.sample()
+ log_prob = categorical.log_prob(fake)
+
+ # Output for Generator will either be generated or the target.
+ # If present: Return real.
+ # If not present: Return fake.
+ output = tf.where(targets_present[:, t], real, fake)
+
+ # Append to lists.
+ sequence.append(output)
+ logits.append(logit)
+ log_probs.append(log_prob)
+
+ # Produce the RNN state had the model operated only
+ # over real data.
+ real_state_gen = initial_state
+ for t in xrange(FLAGS.sequence_length):
+ tf.get_variable_scope().reuse_variables()
+
+ rnn_inp = rnn_inputs[:, t]
+
+ # RNN.
+ rnn_out, real_state_gen = cell_gen(rnn_inp, real_state_gen)
+
+ final_state = real_state_gen
+
+ return (tf.stack(sequence, axis=1), tf.stack(logits, axis=1), tf.stack(
+ log_probs, axis=1), initial_state, final_state)
+
+
+def discriminator(hparams, sequence, is_training, reuse=None):
+ """Define the Discriminator graph.
+
+ Args:
+ hparams: Hyperparameters for the MaskGAN.
+ FLAGS: Current flags.
+ sequence: [FLAGS.batch_size, FLAGS.sequence_length]
+ is_training:
+ reuse
+
+ Returns:
+ predictions:
+ """
+ tf.logging.warning(
+ 'Undirectional Discriminative model is not a useful model for this '
+ 'MaskGAN because future context is needed. Use only for debugging '
+ 'purposes.')
+ sequence = tf.cast(sequence, tf.int32)
+
+ if FLAGS.dis_share_embedding:
+ assert hparams.dis_rnn_size == hparams.gen_rnn_size, (
+ 'If you wish to share Discriminator/Generator embeddings, they must be'
+        ' the same dimension.')
+ with tf.variable_scope('gen/rnn', reuse=True):
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.gen_rnn_size])
+
+ with tf.variable_scope('dis', reuse=reuse):
+
+ def lstm_cell():
+ return tf.contrib.rnn.LayerNormBasicLSTMCell(
+ hparams.dis_rnn_size, reuse=reuse)
+
+ attn_cell = lstm_cell
+ if FLAGS.zoneout_drop_prob > 0.0:
+
+ def attn_cell():
+ return zoneout.ZoneoutWrapper(
+ lstm_cell(),
+ zoneout_drop_prob=FLAGS.zoneout_drop_prob,
+ is_training=is_training)
+
+ cell_dis = tf.contrib.rnn.MultiRNNCell(
+ [attn_cell() for _ in range(hparams.dis_num_layers)],
+ state_is_tuple=True)
+ state_dis = cell_dis.zero_state(FLAGS.batch_size, tf.float32)
+
+ with tf.variable_scope('rnn') as vs:
+ predictions = []
+ if not FLAGS.dis_share_embedding:
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.dis_rnn_size])
+
+ rnn_inputs = tf.nn.embedding_lookup(embedding, sequence)
+
+ for t in xrange(FLAGS.sequence_length):
+ if t > 0:
+ tf.get_variable_scope().reuse_variables()
+
+ rnn_in = rnn_inputs[:, t]
+ rnn_out, state_dis = cell_dis(rnn_in, state_dis)
+
+ # Prediction is linear output for Discriminator.
+ pred = tf.contrib.layers.linear(rnn_out, 1, scope=vs)
+
+ predictions.append(pred)
+ predictions = tf.stack(predictions, axis=1)
+ return tf.squeeze(predictions, axis=2)
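Outside of teacher forcing, the generator feeds back its own sample wherever the previous target was masked: `tf.where` with a rank-1 boolean picks whole rows, taking the real embedding when the target was present and the sampled ("fake") embedding otherwise. A NumPy sketch of that selection for a single step (NumPy needs the boolean expanded explicitly, unlike `tf.where` in TF 1.x):

```python
import numpy as np

rng = np.random.default_rng(4)
batch, emb_dim = 3, 4

real_rnn_inp = rng.normal(size=(batch, emb_dim))   # embedding of the true token at t
fake_rnn_inp = rng.normal(size=(batch, emb_dim))   # embedding of the sampled token

# True where the target at t - 1 was present (not masked).
present_prev = np.array([True, False, True])

# Expand the boolean over the embedding dimension, then select per row.
rnn_inp = np.where(present_prev[:, None], real_rnn_inp, fake_rnn_inp)

assert np.allclose(rnn_inp[0], real_rnn_inp[0])
assert np.allclose(rnn_inp[1], fake_rnn_inp[1])
```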
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/rnn_nas.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/rnn_nas.py"
new file mode 100644
index 00000000..618ace2f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/rnn_nas.py"
@@ -0,0 +1,234 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Simple RNN model definitions."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import collections
+from six.moves import xrange
+import tensorflow as tf
+
+# NAS Code..
+from nas_utils import configs
+from nas_utils import custom_cell
+from nas_utils import variational_dropout
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def get_config():
+ return configs.AlienConfig2()
+
+
+LSTMTuple = collections.namedtuple('LSTMTuple', ['c', 'h'])
+
+
+def generator(hparams,
+ inputs,
+ targets,
+ targets_present,
+ is_training,
+ is_validating,
+ reuse=None):
+ """Define the Generator graph.
+
+  G will now impute tokens that have been masked from the input sequence.
+  """
+  tf.logging.info(
+      'Unidirectional generative model is not a useful model for this MaskGAN '
+ 'because future context is needed. Use only for debugging purposes.')
+ config = get_config()
+ config.keep_prob = [hparams.gen_nas_keep_prob_0, hparams.gen_nas_keep_prob_1]
+ configs.print_config(config)
+
+ init_scale = config.init_scale
+ initializer = tf.random_uniform_initializer(-init_scale, init_scale)
+
+ with tf.variable_scope('gen', reuse=reuse, initializer=initializer):
+ # Neural architecture search cell.
+ cell = custom_cell.Alien(config.hidden_size)
+
+ if is_training:
+ [h2h_masks, _, _,
+ output_mask] = variational_dropout.generate_variational_dropout_masks(
+ hparams, config.keep_prob)
+ else:
+ output_mask = None
+
+ cell_gen = custom_cell.GenericMultiRNNCell([cell] * config.num_layers)
+ initial_state = cell_gen.zero_state(FLAGS.batch_size, tf.float32)
+
+ with tf.variable_scope('rnn'):
+ sequence, logits, log_probs = [], [], []
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.gen_rnn_size])
+ softmax_w = tf.matrix_transpose(embedding)
+ softmax_b = tf.get_variable('softmax_b', [FLAGS.vocab_size])
+
+ rnn_inputs = tf.nn.embedding_lookup(embedding, inputs)
+
+ if is_training and FLAGS.keep_prob < 1:
+ rnn_inputs = tf.nn.dropout(rnn_inputs, FLAGS.keep_prob)
+
+ for t in xrange(FLAGS.sequence_length):
+ if t > 0:
+ tf.get_variable_scope().reuse_variables()
+
+ # Input to the model is the first token to provide context. The
+ # model will then predict token t > 0.
+ if t == 0:
+ # Always provide the real input at t = 0.
+ state_gen = initial_state
+ rnn_inp = rnn_inputs[:, t]
+
+ # If the input is present, read in the input at t.
+ # If the input is not present, read in the previously generated.
+ else:
+ real_rnn_inp = rnn_inputs[:, t]
+ fake_rnn_inp = tf.nn.embedding_lookup(embedding, fake)
+
+ # While validating, the decoder should be operating in teacher
+ # forcing regime. Also, if we're just training with cross_entropy
+ # use teacher forcing.
+ if is_validating or (is_training and
+ FLAGS.gen_training_strategy == 'cross_entropy'):
+ rnn_inp = real_rnn_inp
+ else:
+ rnn_inp = tf.where(targets_present[:, t - 1], real_rnn_inp,
+ fake_rnn_inp)
+
+ if is_training:
+ state_gen = list(state_gen)
+ for layer_num, per_layer_state in enumerate(state_gen):
+ per_layer_state = LSTMTuple(
+ per_layer_state[0], per_layer_state[1] * h2h_masks[layer_num])
+ state_gen[layer_num] = per_layer_state
+
+ # RNN.
+ rnn_out, state_gen = cell_gen(rnn_inp, state_gen)
+
+ if is_training:
+ rnn_out = output_mask * rnn_out
+
+ logit = tf.matmul(rnn_out, softmax_w) + softmax_b
+
+ # Real sample.
+ real = targets[:, t]
+
+ categorical = tf.contrib.distributions.Categorical(logits=logit)
+ fake = categorical.sample()
+ log_prob = categorical.log_prob(fake)
+
+ # Output for Generator will either be generated or the input.
+ #
+ # If present: Return real.
+ # If not present: Return fake.
+ output = tf.where(targets_present[:, t], real, fake)
+
+ # Add to lists.
+ sequence.append(output)
+ log_probs.append(log_prob)
+ logits.append(logit)
+
+ # Produce the RNN state had the model operated only
+ # over real data.
+ real_state_gen = initial_state
+ for t in xrange(FLAGS.sequence_length):
+ tf.get_variable_scope().reuse_variables()
+
+ rnn_inp = rnn_inputs[:, t]
+
+ # RNN.
+ rnn_out, real_state_gen = cell_gen(rnn_inp, real_state_gen)
+
+ final_state = real_state_gen
+
+ return (tf.stack(sequence, axis=1), tf.stack(logits, axis=1), tf.stack(
+ log_probs, axis=1), initial_state, final_state)
+
+
+def discriminator(hparams, sequence, is_training, reuse=None):
+ """Define the Discriminator graph."""
+ tf.logging.info(
+      'Unidirectional Discriminative model is not a useful model for this '
+ 'MaskGAN because future context is needed. Use only for debugging '
+ 'purposes.')
+ sequence = tf.cast(sequence, tf.int32)
+
+ if FLAGS.dis_share_embedding:
+ assert hparams.dis_rnn_size == hparams.gen_rnn_size, (
+ 'If you wish to share Discriminator/Generator embeddings, they must be'
+        ' the same dimension.')
+ with tf.variable_scope('gen/rnn', reuse=True):
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.gen_rnn_size])
+
+ config = get_config()
+ config.keep_prob = [hparams.dis_nas_keep_prob_0, hparams.dis_nas_keep_prob_1]
+ configs.print_config(config)
+
+ with tf.variable_scope('dis', reuse=reuse):
+ # Neural architecture search cell.
+ cell = custom_cell.Alien(config.hidden_size)
+
+ if is_training:
+ [h2h_masks, _, _,
+ output_mask] = variational_dropout.generate_variational_dropout_masks(
+ hparams, config.keep_prob)
+ else:
+ output_mask = None
+
+ cell_dis = custom_cell.GenericMultiRNNCell([cell] * config.num_layers)
+ state_dis = cell_dis.zero_state(FLAGS.batch_size, tf.float32)
+
+ with tf.variable_scope('rnn') as vs:
+ predictions = []
+ if not FLAGS.dis_share_embedding:
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.dis_rnn_size])
+
+ rnn_inputs = tf.nn.embedding_lookup(embedding, sequence)
+
+ if is_training and FLAGS.keep_prob < 1:
+ rnn_inputs = tf.nn.dropout(rnn_inputs, FLAGS.keep_prob)
+
+ for t in xrange(FLAGS.sequence_length):
+ if t > 0:
+ tf.get_variable_scope().reuse_variables()
+
+ rnn_in = rnn_inputs[:, t]
+
+ if is_training:
+ state_dis = list(state_dis)
+ for layer_num, per_layer_state in enumerate(state_dis):
+ per_layer_state = LSTMTuple(
+ per_layer_state[0], per_layer_state[1] * h2h_masks[layer_num])
+ state_dis[layer_num] = per_layer_state
+
+ # RNN.
+ rnn_out, state_dis = cell_dis(rnn_in, state_dis)
+
+ if is_training:
+ rnn_out = output_mask * rnn_out
+
+ # Prediction is linear output for Discriminator.
+ pred = tf.contrib.layers.linear(rnn_out, 1, scope=vs)
+
+ predictions.append(pred)
+ predictions = tf.stack(predictions, axis=1)
+ return tf.squeeze(predictions, axis=2)
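Unlike the plain RNN generator, the NAS generator ties its output projection to the input embedding: `softmax_w` is just the transposed embedding matrix, so logits are computed against the same vectors used for lookup. This only works because the embedding width equals `gen_rnn_size`, which is how the embedding is declared above. A small NumPy sketch of the shapes involved (sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
vocab_size, rnn_size, batch = 10, 6, 2

embedding = rng.normal(size=(vocab_size, rnn_size))
softmax_w = embedding.T             # weight tying: output projection reuses the embedding
softmax_b = np.zeros(vocab_size)

rnn_out = rng.normal(size=(batch, rnn_size))
logit = rnn_out @ softmax_w + softmax_b            # [batch, vocab_size]
assert logit.shape == (batch, vocab_size)
```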
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/rnn_vd.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/rnn_vd.py"
new file mode 100644
index 00000000..428f1a54
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/rnn_vd.py"
@@ -0,0 +1,118 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Simple RNN model definitions."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from six.moves import xrange
+import tensorflow as tf
+from regularization import variational_dropout
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def discriminator(hparams,
+ sequence,
+ is_training,
+ reuse=None,
+ initial_state=None):
+ """Define the Discriminator graph."""
+ tf.logging.info(
+      'Unidirectional Discriminative model is not a useful model for this '
+ 'MaskGAN because future context is needed. Use only for debugging '
+ 'purposes.')
+ sequence = tf.cast(sequence, tf.int32)
+
+ if FLAGS.dis_share_embedding:
+ assert hparams.dis_rnn_size == hparams.gen_rnn_size, (
+ 'If you wish to share Discriminator/Generator embeddings, they must be'
+        ' the same dimension.')
+ with tf.variable_scope('gen/decoder/rnn', reuse=True):
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.gen_rnn_size])
+
+ with tf.variable_scope('dis', reuse=reuse):
+
+ def lstm_cell():
+ return tf.contrib.rnn.BasicLSTMCell(
+ hparams.dis_rnn_size,
+ forget_bias=0.0,
+ state_is_tuple=True,
+ reuse=reuse)
+
+ attn_cell = lstm_cell
+ if is_training and hparams.dis_vd_keep_prob < 1:
+
+ def attn_cell():
+ return variational_dropout.VariationalDropoutWrapper(
+ lstm_cell(), FLAGS.batch_size, hparams.dis_rnn_size,
+ hparams.dis_vd_keep_prob, hparams.dis_vd_keep_prob)
+
+ cell_dis = tf.contrib.rnn.MultiRNNCell(
+ [attn_cell() for _ in range(hparams.dis_num_layers)],
+ state_is_tuple=True)
+
+ if initial_state:
+ state_dis = [[tf.identity(x) for x in inner_initial_state]
+ for inner_initial_state in initial_state]
+ else:
+ state_dis = cell_dis.zero_state(FLAGS.batch_size, tf.float32)
+
+ def make_mask(keep_prob, units):
+ random_tensor = keep_prob
+ # 0. if [keep_prob, 1.0) and 1. if [1.0, 1.0 + keep_prob)
+ random_tensor += tf.random_uniform(tf.stack([FLAGS.batch_size, units]))
+ return tf.floor(random_tensor) / keep_prob
+
+ if is_training:
+ output_mask = make_mask(hparams.dis_vd_keep_prob, hparams.dis_rnn_size)
+
+ with tf.variable_scope('rnn') as vs:
+ predictions, rnn_outs = [], []
+
+ if not FLAGS.dis_share_embedding:
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.dis_rnn_size])
+
+ rnn_inputs = tf.nn.embedding_lookup(embedding, sequence)
+
+ for t in xrange(FLAGS.sequence_length):
+ if t > 0:
+ tf.get_variable_scope().reuse_variables()
+
+ rnn_in = rnn_inputs[:, t]
+ rnn_out, state_dis = cell_dis(rnn_in, state_dis)
+
+ if is_training:
+ rnn_out *= output_mask
+
+ # Prediction is linear output for Discriminator.
+ pred = tf.contrib.layers.linear(rnn_out, 1, scope=vs)
+ predictions.append(pred)
+ rnn_outs.append(rnn_out)
+
+ predictions = tf.stack(predictions, axis=1)
+
+ if FLAGS.baseline_method == 'critic':
+ with tf.variable_scope('critic', reuse=reuse) as critic_scope:
+ rnn_outs = tf.stack(rnn_outs, axis=1)
+ values = tf.contrib.layers.linear(rnn_outs, 1, scope=critic_scope)
+ return tf.squeeze(predictions, axis=2), tf.squeeze(values, axis=2)
+
+ else:
+ return tf.squeeze(predictions, axis=2), None
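When `baseline_method == 'critic'`, the same per-step RNN outputs feed two separate linear heads: one produces the Discriminator's real/fake score and the other produces the critic baseline V(s_t). A NumPy sketch of this two-head pattern over precomputed hidden states (weights and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
batch, seq_len, rnn_size = 2, 5, 8

# Per-step hidden states from the shared discriminator RNN.
rnn_outs = rng.normal(size=(batch, seq_len, rnn_size))

# Two independent linear heads over the same features.
w_pred, b_pred = rng.normal(size=(rnn_size, 1)), np.zeros(1)   # real/fake score
w_val, b_val = rng.normal(size=(rnn_size, 1)), np.zeros(1)     # baseline V(s_t)

predictions = np.squeeze(rnn_outs @ w_pred + b_pred, axis=2)   # [batch, seq_len]
values = np.squeeze(rnn_outs @ w_val + b_val, axis=2)          # [batch, seq_len]
assert predictions.shape == values.shape == (batch, seq_len)
```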
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/rnn_zaremba.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/rnn_zaremba.py"
new file mode 100644
index 00000000..9369c77f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/rnn_zaremba.py"
@@ -0,0 +1,196 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Simple RNN model definitions."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from six.moves import xrange
+import tensorflow as tf
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def generator(hparams,
+ inputs,
+ targets,
+ targets_present,
+ is_training,
+ is_validating,
+ reuse=None):
+ """Define the Generator graph.
+
+  G will now impute tokens that have been masked from the input sequence.
+  """
+  tf.logging.warning(
+      'Unidirectional generative model is not a useful model for this MaskGAN '
+ 'because future context is needed. Use only for debugging purposes.')
+ init_scale = 0.05
+ initializer = tf.random_uniform_initializer(-init_scale, init_scale)
+ with tf.variable_scope('gen', reuse=reuse, initializer=initializer):
+
+ def lstm_cell():
+ return tf.contrib.rnn.BasicLSTMCell(hparams.gen_rnn_size,
+ forget_bias=0.0,
+ state_is_tuple=True,
+ reuse=reuse)
+
+ attn_cell = lstm_cell
+ if is_training and FLAGS.keep_prob < 1:
+
+ def attn_cell():
+ return tf.contrib.rnn.DropoutWrapper(
+ lstm_cell(), output_keep_prob=FLAGS.keep_prob)
+
+ cell_gen = tf.contrib.rnn.MultiRNNCell(
+ [attn_cell() for _ in range(hparams.gen_num_layers)],
+ state_is_tuple=True)
+
+ initial_state = cell_gen.zero_state(FLAGS.batch_size, tf.float32)
+
+ with tf.variable_scope('rnn'):
+ sequence, logits, log_probs = [], [], []
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.gen_rnn_size])
+ softmax_w = tf.get_variable('softmax_w',
+ [hparams.gen_rnn_size, FLAGS.vocab_size])
+ softmax_b = tf.get_variable('softmax_b', [FLAGS.vocab_size])
+
+ rnn_inputs = tf.nn.embedding_lookup(embedding, inputs)
+
+ if is_training and FLAGS.keep_prob < 1:
+ rnn_inputs = tf.nn.dropout(rnn_inputs, FLAGS.keep_prob)
+
+ fake = None
+ for t in xrange(FLAGS.sequence_length):
+ if t > 0:
+ tf.get_variable_scope().reuse_variables()
+
+ # Input to the model is the first token to provide context. The
+ # model will then predict token t > 0.
+ if t == 0:
+ # Always provide the real input at t = 0.
+ state_gen = initial_state
+ rnn_inp = rnn_inputs[:, t]
+
+ # If the input is present, read in the input at t.
+ # If the input is not present, read in the previously generated.
+ else:
+ real_rnn_inp = rnn_inputs[:, t]
+ fake_rnn_inp = tf.nn.embedding_lookup(embedding, fake)
+
+ # While validating, the decoder should be operating in teacher
+ # forcing regime. Also, if we're just training with cross_entropy
+ # use teacher forcing.
+ if is_validating or (is_training and
+ FLAGS.gen_training_strategy == 'cross_entropy'):
+ rnn_inp = real_rnn_inp
+ else:
+ rnn_inp = tf.where(targets_present[:, t - 1], real_rnn_inp,
+ fake_rnn_inp)
+
+ # RNN.
+ rnn_out, state_gen = cell_gen(rnn_inp, state_gen)
+ logit = tf.matmul(rnn_out, softmax_w) + softmax_b
+
+ # Real sample.
+ real = targets[:, t]
+
+ categorical = tf.contrib.distributions.Categorical(logits=logit)
+ fake = categorical.sample()
+ log_prob = categorical.log_prob(fake)
+
+ # Output for Generator will either be generated or the input.
+ #
+ # If present: Return real.
+ # If not present: Return fake.
+ output = tf.where(targets_present[:, t], real, fake)
+
+ # Add to lists.
+ sequence.append(output)
+ log_probs.append(log_prob)
+ logits.append(logit)
+
+ # Produce the RNN state had the model operated only
+ # over real data.
+ real_state_gen = initial_state
+ for t in xrange(FLAGS.sequence_length):
+ tf.get_variable_scope().reuse_variables()
+
+ rnn_inp = rnn_inputs[:, t]
+
+ # RNN.
+ rnn_out, real_state_gen = cell_gen(rnn_inp, real_state_gen)
+
+ final_state = real_state_gen
+
+ return (tf.stack(sequence, axis=1), tf.stack(logits, axis=1), tf.stack(
+ log_probs, axis=1), initial_state, final_state)
+
+
+def discriminator(hparams, sequence, is_training, reuse=None):
+ """Define the Discriminator graph."""
+ tf.logging.warning(
+      'Unidirectional Discriminative model is not a useful model for this '
+ 'MaskGAN because future context is needed. Use only for debugging '
+ 'purposes.')
+ sequence = tf.cast(sequence, tf.int32)
+
+ with tf.variable_scope('dis', reuse=reuse):
+
+ def lstm_cell():
+ return tf.contrib.rnn.BasicLSTMCell(hparams.dis_rnn_size,
+ forget_bias=0.0,
+ state_is_tuple=True,
+ reuse=reuse)
+
+ attn_cell = lstm_cell
+ if is_training and FLAGS.keep_prob < 1:
+
+ def attn_cell():
+ return tf.contrib.rnn.DropoutWrapper(
+ lstm_cell(), output_keep_prob=FLAGS.keep_prob)
+
+ cell_dis = tf.contrib.rnn.MultiRNNCell(
+ [attn_cell() for _ in range(hparams.dis_num_layers)],
+ state_is_tuple=True)
+
+ state_dis = cell_dis.zero_state(FLAGS.batch_size, tf.float32)
+
+ with tf.variable_scope('rnn') as vs:
+ predictions = []
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.dis_rnn_size])
+
+ rnn_inputs = tf.nn.embedding_lookup(embedding, sequence)
+
+ if is_training and FLAGS.keep_prob < 1:
+ rnn_inputs = tf.nn.dropout(rnn_inputs, FLAGS.keep_prob)
+
+ for t in xrange(FLAGS.sequence_length):
+ if t > 0:
+ tf.get_variable_scope().reuse_variables()
+
+ rnn_in = rnn_inputs[:, t]
+ rnn_out, state_dis = cell_dis(rnn_in, state_dis)
+
+ # Prediction is linear output for Discriminator.
+ pred = tf.contrib.layers.linear(rnn_out, 1, scope=vs)
+
+ predictions.append(pred)
+ predictions = tf.stack(predictions, axis=1)
+ return tf.squeeze(predictions, axis=2)
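Both Zaremba-style models sample the "fake" token from a categorical distribution over the vocabulary and keep its log-probability for the REINFORCE update; the diff uses `tf.contrib.distributions.Categorical` for this. A minimal NumPy sketch of sampling from logits and recovering that log-probability:

```python
import numpy as np

def sample_and_log_prob(logit, rng):
    # Softmax over the vocabulary, sample one token id, return its log-probability.
    z = logit - logit.max()                 # subtract max for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    token = rng.choice(len(probs), p=probs)
    return token, np.log(probs[token])

rng = np.random.default_rng(7)
logit = rng.normal(size=6)                  # unnormalized scores over a 6-token vocabulary
fake, log_prob = sample_and_log_prob(logit, rng)
print(fake, log_prob)
```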
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/rollout.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/rollout.py"
new file mode 100644
index 00000000..6919af2e
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/rollout.py"
@@ -0,0 +1,384 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Rollout RNN model definitions which call rnn_zaremba code."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import collections
+
+from six.moves import xrange
+import tensorflow as tf
+
+from losses import losses
+from model_utils import helper
+from model_utils import model_construction
+from model_utils import model_losses
+from model_utils import model_optimization
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def create_rollout_MaskGAN(hparams, is_training):
+ """Create the MaskGAN model.
+
+ Args:
+ hparams: Hyperparameters for the MaskGAN.
+    is_training: Boolean indicating operational mode (train/inference);
+      evaluation is done with a teacher-forcing regime.
+
+  Returns:
+    model: Namedtuple specifying the MaskGAN.
+  """
+ global_step = tf.Variable(0, name='global_step', trainable=False)
+
+ new_learning_rate = tf.placeholder(tf.float32, [], name='new_learning_rate')
+ learning_rate = tf.Variable(0.0, name='learning_rate', trainable=False)
+ learning_rate_update = tf.assign(learning_rate, new_learning_rate)
+
+ new_rate = tf.placeholder(tf.float32, [], name='new_rate')
+ percent_real_var = tf.Variable(0.0, trainable=False)
+ percent_real_update = tf.assign(percent_real_var, new_rate)
+
+ ## Placeholders.
+ inputs = tf.placeholder(
+ tf.int32, shape=[FLAGS.batch_size, FLAGS.sequence_length])
+ present = tf.placeholder(
+ tf.bool, shape=[FLAGS.batch_size, FLAGS.sequence_length])
+ inv_present = tf.placeholder(
+ tf.bool, shape=[FLAGS.batch_size, FLAGS.sequence_length])
+
+ ## Rollout Generator.
+ fwd_gen_rollouts = rollout_generator(
+ hparams, inputs, present, is_training=is_training, is_validating=False)
+ inv_gen_rollouts = rollout_generator(
+ hparams,
+ inputs,
+ inv_present,
+ is_training=is_training,
+ is_validating=False,
+ reuse=True)
+
+ ## Rollout Discriminator.
+ fwd_dis_rollouts = rollout_discriminator(
+ hparams, fwd_gen_rollouts, is_training=is_training)
+ inv_dis_rollouts = rollout_discriminator(
+ hparams, inv_gen_rollouts, is_training=is_training, reuse=True)
+
+ ## Discriminator Loss.
+ [dis_loss, dis_loss_pred, dis_loss_inv_pred] = rollout_discriminator_loss(
+ fwd_dis_rollouts, present, inv_dis_rollouts, inv_present)
+
+ ## Average log-perplexity for only missing words. However, to do this,
+ # the logits are still computed using teacher forcing, that is, the ground
+ # truth tokens are fed in at each time point to be valid.
+ # TODO(liamfedus): Fix the naming convention.
+ with tf.variable_scope('gen_rollout'):
+ _, fwd_eval_logits, _ = model_construction.create_generator(
+ hparams,
+ inputs,
+ present,
+ is_training=False,
+ is_validating=True,
+ reuse=True)
+
+ avg_log_perplexity = model_losses.calculate_log_perplexity(
+ fwd_eval_logits, inputs, present)
+
+ ## Generator Loss.
+ # 1. Cross Entropy losses on missing tokens.
+ [fwd_cross_entropy_losses,
+ inv_cross_entropy_losses] = rollout_masked_cross_entropy_loss(
+ inputs, present, inv_present, fwd_gen_rollouts, inv_gen_rollouts)
+
+ # 2. GAN losses on missing tokens.
+ [fwd_RL_loss,
+ fwd_RL_statistics, fwd_averages_op] = rollout_reinforce_objective(
+ hparams, fwd_gen_rollouts, fwd_dis_rollouts, present)
+ [inv_RL_loss,
+ inv_RL_statistics, inv_averages_op] = rollout_reinforce_objective(
+ hparams, inv_gen_rollouts, inv_dis_rollouts, inv_present)
+
+ # TODO(liamfedus): Generalize this to use all logs.
+ [fwd_sequence, fwd_logits, fwd_log_probs] = fwd_gen_rollouts[-1]
+ [inv_sequence, inv_logits, inv_log_probs] = inv_gen_rollouts[-1]
+
+ # TODO(liamfedus): Generalize this to use all logs.
+ fwd_predictions = fwd_dis_rollouts[-1]
+ inv_predictions = inv_dis_rollouts[-1]
+
+ # TODO(liamfedus): Generalize this to use all logs.
+ [fwd_log_probs, fwd_rewards, fwd_advantages,
+ fwd_baselines] = fwd_RL_statistics[-1]
+ [inv_log_probs, inv_rewards, inv_advantages,
+ inv_baselines] = inv_RL_statistics[-1]
+
+ ## Pre-training.
+ if FLAGS.gen_pretrain_steps:
+ # TODO(liamfedus): Rewrite this.
+ fwd_cross_entropy_loss = tf.reduce_mean(fwd_cross_entropy_losses)
+ gen_pretrain_op = model_optimization.create_gen_pretrain_op(
+ hparams, fwd_cross_entropy_loss, global_step)
+ else:
+ gen_pretrain_op = tf.no_op('gen_pretrain_no_op')
+ if FLAGS.dis_pretrain_steps:
+ dis_pretrain_op = model_optimization.create_dis_pretrain_op(
+ hparams, dis_loss, global_step)
+ else:
+ dis_pretrain_op = tf.no_op('dis_pretrain_no_op')
+
+ ## Generator Train Op.
+ # 1. Cross-Entropy.
+ if FLAGS.gen_training_strategy == 'cross_entropy':
+ gen_loss = tf.reduce_mean(
+ fwd_cross_entropy_losses + inv_cross_entropy_losses) / 2.
+ [gen_train_op, gen_grads,
+ gen_vars] = model_optimization.create_gen_train_op(
+ hparams, learning_rate, gen_loss, global_step, mode='MINIMIZE')
+
+ # 2. GAN (REINFORCE)
+ elif FLAGS.gen_training_strategy == 'reinforce':
+ gen_loss = (fwd_RL_loss + inv_RL_loss) / 2.
+ [gen_train_op, gen_grads,
+ gen_vars] = model_optimization.create_reinforce_gen_train_op(
+ hparams, learning_rate, gen_loss, fwd_averages_op, inv_averages_op,
+ global_step)
+
+ else:
+ raise NotImplementedError
+
+ ## Discriminator Train Op.
+ dis_train_op, dis_grads, dis_vars = model_optimization.create_dis_train_op(
+ hparams, dis_loss, global_step)
+
+ ## Summaries.
+ with tf.name_scope('general'):
+ tf.summary.scalar('percent_real', percent_real_var)
+ tf.summary.scalar('learning_rate', learning_rate)
+
+ with tf.name_scope('generator_losses'):
+ tf.summary.scalar('gen_loss', tf.reduce_mean(gen_loss))
+ tf.summary.scalar('gen_loss_fwd_cross_entropy',
+ tf.reduce_mean(fwd_cross_entropy_losses))
+ tf.summary.scalar('gen_loss_inv_cross_entropy',
+ tf.reduce_mean(inv_cross_entropy_losses))
+
+ with tf.name_scope('REINFORCE'):
+ with tf.name_scope('objective'):
+ tf.summary.scalar('fwd_RL_loss', tf.reduce_mean(fwd_RL_loss))
+ tf.summary.scalar('inv_RL_loss', tf.reduce_mean(inv_RL_loss))
+
+ with tf.name_scope('rewards'):
+ helper.variable_summaries(fwd_rewards, 'fwd_rewards')
+ helper.variable_summaries(inv_rewards, 'inv_rewards')
+
+ with tf.name_scope('advantages'):
+ helper.variable_summaries(fwd_advantages, 'fwd_advantages')
+ helper.variable_summaries(inv_advantages, 'inv_advantages')
+
+ with tf.name_scope('baselines'):
+ helper.variable_summaries(fwd_baselines, 'fwd_baselines')
+ helper.variable_summaries(inv_baselines, 'inv_baselines')
+
+ with tf.name_scope('log_probs'):
+ helper.variable_summaries(fwd_log_probs, 'fwd_log_probs')
+ helper.variable_summaries(inv_log_probs, 'inv_log_probs')
+
+ with tf.name_scope('discriminator_losses'):
+ tf.summary.scalar('dis_loss', dis_loss)
+ tf.summary.scalar('dis_loss_fwd_sequence', dis_loss_pred)
+ tf.summary.scalar('dis_loss_inv_sequence', dis_loss_inv_pred)
+
+ with tf.name_scope('logits'):
+ helper.variable_summaries(fwd_logits, 'fwd_logits')
+ helper.variable_summaries(inv_logits, 'inv_logits')
+
+ for v, g in zip(gen_vars, gen_grads):
+ helper.variable_summaries(v, v.op.name)
+ helper.variable_summaries(g, 'grad/' + v.op.name)
+
+ for v, g in zip(dis_vars, dis_grads):
+ helper.variable_summaries(v, v.op.name)
+ helper.variable_summaries(g, 'grad/' + v.op.name)
+
+ merge_summaries_op = tf.summary.merge_all()
+
+ # Model saver.
+ saver = tf.train.Saver(keep_checkpoint_every_n_hours=1, max_to_keep=5)
+
+ # Named tuple that captures elements of the MaskGAN model.
+ Model = collections.namedtuple('Model', [
+ 'inputs', 'present', 'inv_present', 'percent_real_update', 'new_rate',
+ 'fwd_sequence', 'fwd_logits', 'fwd_rewards', 'fwd_advantages',
+ 'fwd_log_probs', 'fwd_predictions', 'fwd_cross_entropy_losses',
+ 'inv_sequence', 'inv_logits', 'inv_rewards', 'inv_advantages',
+ 'inv_log_probs', 'inv_predictions', 'inv_cross_entropy_losses',
+ 'avg_log_perplexity', 'dis_loss', 'gen_loss', 'dis_train_op',
+ 'gen_train_op', 'gen_pretrain_op', 'dis_pretrain_op',
+ 'merge_summaries_op', 'global_step', 'new_learning_rate',
+ 'learning_rate_update', 'saver'
+ ])
+
+ model = Model(
+ inputs, present, inv_present, percent_real_update, new_rate, fwd_sequence,
+ fwd_logits, fwd_rewards, fwd_advantages, fwd_log_probs, fwd_predictions,
+ fwd_cross_entropy_losses, inv_sequence, inv_logits, inv_rewards,
+ inv_advantages, inv_log_probs, inv_predictions, inv_cross_entropy_losses,
+ avg_log_perplexity, dis_loss, gen_loss, dis_train_op, gen_train_op,
+ gen_pretrain_op, dis_pretrain_op, merge_summaries_op, global_step,
+ new_learning_rate, learning_rate_update, saver)
+ return model
+
+
+def rollout_generator(hparams,
+ inputs,
+ input_present,
+ is_training,
+ is_validating,
+ reuse=None):
+ """Define the Generator graph which does rollouts.
+
+  G will now impute tokens that have been masked from the input sequence.
+ """
+ rollouts = []
+
+ with tf.variable_scope('gen_rollout'):
+ for n in xrange(FLAGS.num_rollouts):
+ if n > 0:
+ # TODO(liamfedus): Why is it necessary here to manually set reuse?
+ reuse = True
+ tf.get_variable_scope().reuse_variables()
+
+ [sequence, logits, log_probs] = model_construction.create_generator(
+ hparams,
+ inputs,
+ input_present,
+ is_training,
+ is_validating,
+ reuse=reuse)
+
+ rollouts.append([sequence, logits, log_probs])
+
+ # Length assertion.
+ assert len(rollouts) == FLAGS.num_rollouts
+
+ return rollouts
+
+
+def rollout_discriminator(hparams, gen_rollouts, is_training, reuse=None):
+ """Define the Discriminator graph which does rollouts.
+
+  The Discriminator scores each of the Generator's rollouts.
+ """
+ rollout_predictions = []
+
+ with tf.variable_scope('dis_rollout'):
+ for n, rollout in enumerate(gen_rollouts):
+ if n > 0:
+ # TODO(liamfedus): Why is it necessary here to manually set reuse?
+ reuse = True
+ tf.get_variable_scope().reuse_variables()
+
+ [sequence, _, _] = rollout
+
+ predictions = model_construction.create_discriminator(
+ hparams, sequence, is_training=is_training, reuse=reuse)
+
+ # Predictions for each rollout.
+ rollout_predictions.append(predictions)
+
+ # Length assertion.
+ assert len(rollout_predictions) == FLAGS.num_rollouts
+
+ return rollout_predictions
+
+
+def rollout_reinforce_objective(hparams, gen_rollouts, dis_rollouts, present):
+ cumulative_gen_objective = 0.
+ cumulative_averages_op = []
+ cumulative_statistics = []
+
+ assert len(gen_rollouts) == len(dis_rollouts)
+
+ for gen_rollout, dis_rollout in zip(gen_rollouts, dis_rollouts):
+ [_, _, log_probs] = gen_rollout
+ dis_predictions = dis_rollout
+
+ [
+ final_gen_objective, log_probs, rewards, advantages, baselines,
+ maintain_averages_op
+ ] = model_losses.calculate_reinforce_objective(hparams, log_probs,
+ dis_predictions, present)
+
+ # Accumulate results.
+ cumulative_gen_objective += final_gen_objective
+ cumulative_averages_op.append(maintain_averages_op)
+ cumulative_statistics.append([log_probs, rewards, advantages, baselines])
+
+ # Group all the averaging operations.
+ cumulative_averages_op = tf.group(*cumulative_averages_op)
+ cumulative_gen_objective /= FLAGS.num_rollouts
+ [log_probs, rewards, advantages, baselines] = cumulative_statistics[-1]
+
+ # Length assertion.
+ assert len(cumulative_statistics) == FLAGS.num_rollouts
+
+ return [
+ cumulative_gen_objective, cumulative_statistics, cumulative_averages_op
+ ]
+
+
+def rollout_masked_cross_entropy_loss(inputs, present, inv_present,
+ fwd_rollouts, inv_rollouts):
+ cumulative_fwd_cross_entropy_losses = tf.zeros(
+ shape=[FLAGS.batch_size, FLAGS.sequence_length])
+ cumulative_inv_cross_entropy_losses = tf.zeros(
+ shape=[FLAGS.batch_size, FLAGS.sequence_length])
+
+ for fwd_rollout, inv_rollout in zip(fwd_rollouts, inv_rollouts):
+ [_, fwd_logits, _] = fwd_rollout
+ [_, inv_logits, _] = inv_rollout
+
+ [fwd_cross_entropy_losses,
+ inv_cross_entropy_losses] = model_losses.create_masked_cross_entropy_loss(
+ inputs, present, inv_present, fwd_logits, inv_logits)
+
+ cumulative_fwd_cross_entropy_losses = tf.add(
+ cumulative_fwd_cross_entropy_losses, fwd_cross_entropy_losses)
+ cumulative_inv_cross_entropy_losses = tf.add(
+ cumulative_inv_cross_entropy_losses, inv_cross_entropy_losses)
+
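+  # Note that the cross-entropy losses are summed over the rollouts rather
+  # than averaged, unlike the REINFORCE objective.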
+ return [
+ cumulative_fwd_cross_entropy_losses, cumulative_inv_cross_entropy_losses
+ ]
+
+
+def rollout_discriminator_loss(fwd_rollouts, present, inv_rollouts,
+ inv_present):
+
+ dis_loss = 0
+ dis_loss_pred = 0
+ dis_loss_inv_pred = 0
+
+ for fwd_predictions, inv_predictions in zip(fwd_rollouts, inv_rollouts):
+ dis_loss_pred += losses.discriminator_loss(fwd_predictions, present)
+ dis_loss_inv_pred += losses.discriminator_loss(inv_predictions, inv_present)
+
+ dis_loss_pred /= FLAGS.num_rollouts
+ dis_loss_inv_pred /= FLAGS.num_rollouts
+
+ dis_loss = (dis_loss_pred + dis_loss_inv_pred) / 2.
+ return [dis_loss, dis_loss_pred, dis_loss_inv_pred]
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/seq2seq.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/seq2seq.py"
new file mode 100644
index 00000000..fac397c9
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/seq2seq.py"
@@ -0,0 +1,277 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Simple seq2seq model definitions."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+from six.moves import xrange
+from models import attention_utils
+
+# ZoneoutWrapper.
+from regularization import zoneout
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def transform_input_with_is_missing_token(inputs, targets_present):
+  """Transforms the inputs to have missing tokens when they are masked out. The
+  mask is over the targets, so to determine whether an input at time t is
+  masked, we check whether the target at time t - 1 is masked out.
+
+ e.g.
+ inputs = [a, b, c, d]
+ targets = [b, c, d, e]
+ targets_present = [1, 0, 1, 0]
+
+ then,
+ transformed_input = [a, b, , d]
+
+ Args:
+ inputs: tf.int32 Tensor of shape [batch_size, sequence_length] with tokens
+ up to, but not including, vocab_size.
+ targets_present: tf.bool Tensor of shape [batch_size, sequence_length] with
+ True representing the presence of the word.
+
+ Returns:
+ transformed_input: tf.int32 Tensor of shape [batch_size, sequence_length]
+ which takes on value of inputs when the input is present and takes on
+ value=vocab_size to indicate a missing token.
+ """
+ # To fill in if the input is missing.
+ input_missing = tf.constant(
+ FLAGS.vocab_size,
+ dtype=tf.int32,
+ shape=[FLAGS.batch_size, FLAGS.sequence_length])
+
+ # The 0th input will always be present to MaskGAN.
+ zeroth_input_present = tf.constant(True, tf.bool, shape=[FLAGS.batch_size, 1])
+
+ # Input present mask.
+ inputs_present = tf.concat(
+ [zeroth_input_present, targets_present[:, :-1]], axis=1)
+
+ transformed_input = tf.where(inputs_present, inputs, input_missing)
+ return transformed_input
+
+
+def gen_encoder(hparams, inputs, targets_present, is_training, reuse=None):
+ """Define the Encoder graph."""
+ # We will use the same variable from the decoder.
+ if FLAGS.seq2seq_share_embedding:
+ with tf.variable_scope('decoder/rnn'):
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.gen_rnn_size])
+
+ with tf.variable_scope('encoder', reuse=reuse):
+
+ def lstm_cell():
+ return tf.contrib.rnn.LayerNormBasicLSTMCell(
+ hparams.gen_rnn_size, reuse=reuse)
+
+ attn_cell = lstm_cell
+ if FLAGS.zoneout_drop_prob > 0.0:
+
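+      # Zoneout: with probability zoneout_drop_prob a unit keeps its previous
+      # state instead of updating it, acting as a recurrent regularizer.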
+ def attn_cell():
+ return zoneout.ZoneoutWrapper(
+ lstm_cell(),
+ zoneout_drop_prob=FLAGS.zoneout_drop_prob,
+ is_training=is_training)
+
+ cell = tf.contrib.rnn.MultiRNNCell(
+ [attn_cell() for _ in range(hparams.gen_num_layers)],
+ state_is_tuple=True)
+
+ initial_state = cell.zero_state(FLAGS.batch_size, tf.float32)
+
+ # Add a missing token for inputs not present.
+ real_inputs = inputs
+ masked_inputs = transform_input_with_is_missing_token(
+ inputs, targets_present)
+
+ with tf.variable_scope('rnn'):
+ hidden_states = []
+
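+      # The extra embedding row (index FLAGS.vocab_size) represents the
+      # 'missing' token produced by transform_input_with_is_missing_token.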
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size + 1, hparams.gen_rnn_size])
+
+ real_rnn_inputs = tf.nn.embedding_lookup(embedding, real_inputs)
+ masked_rnn_inputs = tf.nn.embedding_lookup(embedding, masked_inputs)
+
+ state = initial_state
+ for t in xrange(FLAGS.sequence_length):
+ if t > 0:
+ tf.get_variable_scope().reuse_variables()
+
+ rnn_inp = masked_rnn_inputs[:, t]
+ rnn_out, state = cell(rnn_inp, state)
+ hidden_states.append(rnn_out)
+ final_masked_state = state
+ hidden_states = tf.stack(hidden_states, axis=1)
+
+ # Produce the RNN state had the model operated only
+ # over real data.
+ real_state = initial_state
+ for t in xrange(FLAGS.sequence_length):
+ tf.get_variable_scope().reuse_variables()
+
+ # RNN.
+ rnn_inp = real_rnn_inputs[:, t]
+ rnn_out, real_state = cell(rnn_inp, real_state)
+ final_state = real_state
+
+ return (hidden_states, final_masked_state), initial_state, final_state
+
+
+def gen_decoder(hparams,
+ inputs,
+ targets,
+ targets_present,
+ encoding_state,
+ is_training,
+ is_validating,
+ reuse=None):
+ """Define the Decoder graph. The Decoder will now impute tokens that
+  have been masked from the input sequence.
+ """
+ gen_decoder_rnn_size = hparams.gen_rnn_size
+
+ with tf.variable_scope('decoder', reuse=reuse):
+
+ def lstm_cell():
+ return tf.contrib.rnn.LayerNormBasicLSTMCell(
+ gen_decoder_rnn_size, reuse=reuse)
+
+ attn_cell = lstm_cell
+ if FLAGS.zoneout_drop_prob > 0.0:
+
+ def attn_cell():
+ return zoneout.ZoneoutWrapper(
+ lstm_cell(),
+ zoneout_drop_prob=FLAGS.zoneout_drop_prob,
+ is_training=is_training)
+
+ cell_gen = tf.contrib.rnn.MultiRNNCell(
+ [attn_cell() for _ in range(hparams.gen_num_layers)],
+ state_is_tuple=True)
+
+ # Hidden encoder states.
+ hidden_vector_encodings = encoding_state[0]
+
+ # Carry forward the final state tuple from the encoder.
+ # State tuples.
+ state_gen = encoding_state[1]
+
+ if FLAGS.attention_option is not None:
+ (attention_keys, attention_values, _,
+ attention_construct_fn) = attention_utils.prepare_attention(
+ hidden_vector_encodings,
+ FLAGS.attention_option,
+ num_units=gen_decoder_rnn_size,
+ reuse=reuse)
+
+ with tf.variable_scope('rnn'):
+ sequence, logits, log_probs = [], [], []
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, gen_decoder_rnn_size])
+ softmax_w = tf.get_variable('softmax_w',
+ [gen_decoder_rnn_size, FLAGS.vocab_size])
+ softmax_b = tf.get_variable('softmax_b', [FLAGS.vocab_size])
+
+ rnn_inputs = tf.nn.embedding_lookup(embedding, inputs)
+
+ for t in xrange(FLAGS.sequence_length):
+ if t > 0:
+ tf.get_variable_scope().reuse_variables()
+
+ # Input to the Decoder.
+ if t == 0:
+ # Always provide the real input at t = 0.
+ rnn_inp = rnn_inputs[:, t]
+
+ # If the input is present, read in the input at t.
+ # If the input is not present, read in the previously generated.
+ else:
+ real_rnn_inp = rnn_inputs[:, t]
+ fake_rnn_inp = tf.nn.embedding_lookup(embedding, fake)
+
+ # While validating, the decoder should be operating in teacher
+ # forcing regime. Also, if we're just training with cross_entropy
+ # use teacher forcing.
+ if is_validating or (is_training and
+ FLAGS.gen_training_strategy == 'cross_entropy'):
+ rnn_inp = real_rnn_inp
+ else:
+ rnn_inp = tf.where(targets_present[:, t - 1], real_rnn_inp,
+ fake_rnn_inp)
+
+ # RNN.
+ rnn_out, state_gen = cell_gen(rnn_inp, state_gen)
+
+ if FLAGS.attention_option is not None:
+ rnn_out = attention_construct_fn(rnn_out, attention_keys,
+ attention_values)
+ # # TODO(liamfedus): Assert not "monotonic" attention_type.
+ # # TODO(liamfedus): FLAGS.attention_type.
+ # context_state = revised_attention_utils._empty_state()
+ # rnn_out, context_state = attention_construct_fn(
+ # rnn_out, attention_keys, attention_values, context_state, t)
+ logit = tf.matmul(rnn_out, softmax_w) + softmax_b
+
+ # Output for Decoder.
+ # If input is present: Return real at t+1.
+ # If input is not present: Return fake for t+1.
+ real = targets[:, t]
+
+ categorical = tf.contrib.distributions.Categorical(logits=logit)
+ fake = categorical.sample()
+ log_prob = categorical.log_prob(fake)
+
+ output = tf.where(targets_present[:, t], real, fake)
+
+ # Add to lists.
+ sequence.append(output)
+ log_probs.append(log_prob)
+ logits.append(logit)
+
+ return (tf.stack(sequence, axis=1), tf.stack(logits, axis=1), tf.stack(
+ log_probs, axis=1))
+
+
+def generator(hparams,
+ inputs,
+ targets,
+ targets_present,
+ is_training,
+ is_validating,
+ reuse=None):
+ """Define the Generator graph."""
+ with tf.variable_scope('gen', reuse=reuse):
+ encoder_states, initial_state, final_state = gen_encoder(
+ hparams, inputs, targets_present, is_training=is_training, reuse=reuse)
+ stacked_sequence, stacked_logits, stacked_log_probs = gen_decoder(
+ hparams,
+ inputs,
+ targets,
+ targets_present,
+ encoder_states,
+ is_training=is_training,
+ is_validating=is_validating,
+ reuse=reuse)
+ return (stacked_sequence, stacked_logits, stacked_log_probs, initial_state,
+ final_state)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/seq2seq_nas.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/seq2seq_nas.py"
new file mode 100644
index 00000000..cede90f5
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/seq2seq_nas.py"
@@ -0,0 +1,333 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Simple seq2seq model definitions."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import collections
+from six.moves import xrange
+import tensorflow as tf
+
+from models import attention_utils
+
+# NAS code.
+from nas_utils import configs
+from nas_utils import custom_cell
+from nas_utils import variational_dropout
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def get_config():
+ return configs.AlienConfig2()
+
+
+LSTMTuple = collections.namedtuple('LSTMTuple', ['c', 'h'])
+
+
+def transform_input_with_is_missing_token(inputs, targets_present):
+  """Transforms the inputs to have missing tokens when they are masked out. The
+  mask is over the targets, so to determine whether an input at time t is
+  masked, we check whether the target at time t - 1 is masked out.
+
+ e.g.
+ inputs = [a, b, c, d]
+ targets = [b, c, d, e]
+ targets_present = [1, 0, 1, 0]
+
+ then,
+ transformed_input = [a, b, , d]
+
+ Args:
+ inputs: tf.int32 Tensor of shape [batch_size, sequence_length] with tokens
+ up to, but not including, vocab_size.
+ targets_present: tf.bool Tensor of shape [batch_size, sequence_length] with
+ True representing the presence of the word.
+
+ Returns:
+ transformed_input: tf.int32 Tensor of shape [batch_size, sequence_length]
+ which takes on value of inputs when the input is present and takes on
+ value=vocab_size to indicate a missing token.
+ """
+ # To fill in if the input is missing.
+ input_missing = tf.constant(
+ FLAGS.vocab_size,
+ dtype=tf.int32,
+ shape=[FLAGS.batch_size, FLAGS.sequence_length])
+
+ # The 0th input will always be present to MaskGAN.
+ zeroth_input_present = tf.constant(True, tf.bool, shape=[FLAGS.batch_size, 1])
+
+ # Input present mask.
+ inputs_present = tf.concat(
+ [zeroth_input_present, targets_present[:, :-1]], axis=1)
+
+ transformed_input = tf.where(inputs_present, inputs, input_missing)
+ return transformed_input
+
+
+def gen_encoder(hparams, inputs, targets_present, is_training, reuse=None):
+ """Define the Encoder graph.
+
+ Args:
+ hparams: Hyperparameters for the MaskGAN.
+ inputs: tf.int32 Tensor of shape [batch_size, sequence_length] with tokens
+ up to, but not including, vocab_size.
+ targets_present: tf.bool Tensor of shape [batch_size, sequence_length] with
+ True representing the presence of the target.
+ is_training: Boolean indicating operational mode (train/inference).
+ reuse (Optional): Whether to reuse the variables.
+
+ Returns:
+ Tuple of (hidden_states, final_state).
+ """
+ config = get_config()
+ configs.print_config(config)
+ # We will use the same variable from the decoder.
+ if FLAGS.seq2seq_share_embedding:
+ with tf.variable_scope('decoder/rnn'):
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.gen_rnn_size])
+
+ with tf.variable_scope('encoder', reuse=reuse):
+ # Neural architecture search cell.
+ cell = custom_cell.Alien(config.hidden_size)
+
+ if is_training:
+ [h2h_masks, h2i_masks, _,
+ output_mask] = variational_dropout.generate_variational_dropout_masks(
+ hparams, config.keep_prob)
+ else:
+ h2i_masks, output_mask = None, None
+
+ cell = custom_cell.GenericMultiRNNCell([cell] * config.num_layers)
+
+ initial_state = cell.zero_state(FLAGS.batch_size, tf.float32)
+
+ # Add a missing token for inputs not present.
+ real_inputs = inputs
+ masked_inputs = transform_input_with_is_missing_token(
+ inputs, targets_present)
+
+ with tf.variable_scope('rnn'):
+ hidden_states = []
+
+ # Split the embedding into two parts so that we can load the PTB
+ # weights into one part of the Variable.
+ if not FLAGS.seq2seq_share_embedding:
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.gen_rnn_size])
+ missing_embedding = tf.get_variable('missing_embedding',
+ [1, hparams.gen_rnn_size])
+ embedding = tf.concat([embedding, missing_embedding], axis=0)
+
+ real_rnn_inputs = tf.nn.embedding_lookup(embedding, real_inputs)
+ masked_rnn_inputs = tf.nn.embedding_lookup(embedding, masked_inputs)
+
+ if is_training and FLAGS.keep_prob < 1:
+ masked_rnn_inputs = tf.nn.dropout(masked_rnn_inputs, FLAGS.keep_prob)
+
+ state = initial_state
+ for t in xrange(FLAGS.sequence_length):
+ if t > 0:
+ tf.get_variable_scope().reuse_variables()
+
+ rnn_inp = masked_rnn_inputs[:, t]
+
+ if is_training:
+ state = list(state)
+ for layer_num, per_layer_state in enumerate(state):
+ per_layer_state = LSTMTuple(
+ per_layer_state[0], per_layer_state[1] * h2h_masks[layer_num])
+ state[layer_num] = per_layer_state
+
+ rnn_out, state = cell(rnn_inp, state, h2i_masks)
+
+ if is_training:
+ rnn_out = output_mask * rnn_out
+
+ hidden_states.append(rnn_out)
+ final_masked_state = state
+ hidden_states = tf.stack(hidden_states, axis=1)
+
+ # Produce the RNN state had the model operated only
+ # over real data.
+ real_state = initial_state
+ for t in xrange(FLAGS.sequence_length):
+ tf.get_variable_scope().reuse_variables()
+
+ # RNN.
+ rnn_inp = real_rnn_inputs[:, t]
+ rnn_out, real_state = cell(rnn_inp, real_state)
+ final_state = real_state
+
+ return (hidden_states, final_masked_state), initial_state, final_state
+
+
+def gen_decoder(hparams,
+ inputs,
+ targets,
+ targets_present,
+ encoding_state,
+ is_training,
+ is_validating,
+ reuse=None):
+ """Define the Decoder graph. The Decoder will now impute tokens that
+  have been masked from the input sequence.
+ """
+ config = get_config()
+ gen_decoder_rnn_size = hparams.gen_rnn_size
+
+ if FLAGS.seq2seq_share_embedding:
+ with tf.variable_scope('decoder/rnn', reuse=True):
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, gen_decoder_rnn_size])
+
+ with tf.variable_scope('decoder', reuse=reuse):
+ # Neural architecture search cell.
+ cell = custom_cell.Alien(config.hidden_size)
+
+ if is_training:
+ [h2h_masks, _, _,
+ output_mask] = variational_dropout.generate_variational_dropout_masks(
+ hparams, config.keep_prob)
+ else:
+ output_mask = None
+
+ cell_gen = custom_cell.GenericMultiRNNCell([cell] * config.num_layers)
+
+ # Hidden encoder states.
+ hidden_vector_encodings = encoding_state[0]
+
+ # Carry forward the final state tuple from the encoder.
+ # State tuples.
+ state_gen = encoding_state[1]
+
+ if FLAGS.attention_option is not None:
+ (attention_keys, attention_values, _,
+ attention_construct_fn) = attention_utils.prepare_attention(
+ hidden_vector_encodings,
+ FLAGS.attention_option,
+ num_units=gen_decoder_rnn_size,
+ reuse=reuse)
+
+ with tf.variable_scope('rnn'):
+ sequence, logits, log_probs = [], [], []
+
+ if not FLAGS.seq2seq_share_embedding:
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, gen_decoder_rnn_size])
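+      # Weight tying: the output projection reuses the (transposed) input
+      # embedding matrix.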
+ softmax_w = tf.matrix_transpose(embedding)
+ softmax_b = tf.get_variable('softmax_b', [FLAGS.vocab_size])
+
+ rnn_inputs = tf.nn.embedding_lookup(embedding, inputs)
+
+ if is_training and FLAGS.keep_prob < 1:
+ rnn_inputs = tf.nn.dropout(rnn_inputs, FLAGS.keep_prob)
+
+ for t in xrange(FLAGS.sequence_length):
+ if t > 0:
+ tf.get_variable_scope().reuse_variables()
+
+ # Input to the Decoder.
+ if t == 0:
+ # Always provide the real input at t = 0.
+ rnn_inp = rnn_inputs[:, t]
+
+ # If the input is present, read in the input at t.
+ # If the input is not present, read in the previously generated.
+ else:
+ real_rnn_inp = rnn_inputs[:, t]
+ fake_rnn_inp = tf.nn.embedding_lookup(embedding, fake)
+
+ # While validating, the decoder should be operating in teacher
+ # forcing regime. Also, if we're just training with cross_entropy
+ # use teacher forcing.
+ if is_validating or (is_training and
+ FLAGS.gen_training_strategy == 'cross_entropy'):
+ rnn_inp = real_rnn_inp
+ else:
+ rnn_inp = tf.where(targets_present[:, t - 1], real_rnn_inp,
+ fake_rnn_inp)
+
+ if is_training:
+ state_gen = list(state_gen)
+ for layer_num, per_layer_state in enumerate(state_gen):
+ per_layer_state = LSTMTuple(
+ per_layer_state[0], per_layer_state[1] * h2h_masks[layer_num])
+ state_gen[layer_num] = per_layer_state
+
+ # RNN.
+ rnn_out, state_gen = cell_gen(rnn_inp, state_gen)
+
+ if is_training:
+ rnn_out = output_mask * rnn_out
+
+ if FLAGS.attention_option is not None:
+ rnn_out = attention_construct_fn(rnn_out, attention_keys,
+ attention_values)
+ # # TODO(liamfedus): Assert not "monotonic" attention_type.
+ # # TODO(liamfedus): FLAGS.attention_type.
+ # context_state = revised_attention_utils._empty_state()
+ # rnn_out, context_state = attention_construct_fn(
+ # rnn_out, attention_keys, attention_values, context_state, t)
+ logit = tf.matmul(rnn_out, softmax_w) + softmax_b
+
+ # Output for Decoder.
+ # If input is present: Return real at t+1.
+ # If input is not present: Return fake for t+1.
+ real = targets[:, t]
+
+ categorical = tf.contrib.distributions.Categorical(logits=logit)
+ fake = categorical.sample()
+ log_prob = categorical.log_prob(fake)
+
+ output = tf.where(targets_present[:, t], real, fake)
+
+ # Add to lists.
+ sequence.append(output)
+ log_probs.append(log_prob)
+ logits.append(logit)
+
+ return (tf.stack(sequence, axis=1), tf.stack(logits, axis=1), tf.stack(
+ log_probs, axis=1))
+
+
+def generator(hparams,
+ inputs,
+ targets,
+ targets_present,
+ is_training,
+ is_validating,
+ reuse=None):
+ """Define the Generator graph."""
+ with tf.variable_scope('gen', reuse=reuse):
+ encoder_states, initial_state, final_state = gen_encoder(
+ hparams, inputs, targets_present, is_training=is_training, reuse=reuse)
+ stacked_sequence, stacked_logits, stacked_log_probs = gen_decoder(
+ hparams,
+ inputs,
+ targets,
+ targets_present,
+ encoder_states,
+ is_training=is_training,
+ is_validating=is_validating,
+ reuse=reuse)
+ return (stacked_sequence, stacked_logits, stacked_log_probs, initial_state,
+ final_state)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/seq2seq_vd.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/seq2seq_vd.py"
new file mode 100644
index 00000000..850eda43
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/seq2seq_vd.py"
@@ -0,0 +1,609 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Simple seq2seq model definitions."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from six.moves import xrange
+import tensorflow as tf
+
+from models import attention_utils
+from regularization import variational_dropout
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def transform_input_with_is_missing_token(inputs, targets_present):
+  """Transforms the inputs to have missing tokens when they are masked out. The
+  mask is over the targets, so to determine whether an input at time t is
+  masked, we check whether the target at time t - 1 is masked out.
+
+ e.g.
+ inputs = [a, b, c, d]
+ targets = [b, c, d, e]
+ targets_present = [1, 0, 1, 0]
+
+ which computes,
+ inputs_present = [1, 1, 0, 1]
+
+ and outputs,
+ transformed_input = [a, b, , d]
+
+ Args:
+ inputs: tf.int32 Tensor of shape [batch_size, sequence_length] with tokens
+ up to, but not including, vocab_size.
+ targets_present: tf.bool Tensor of shape [batch_size, sequence_length] with
+ True representing the presence of the word.
+
+ Returns:
+ transformed_input: tf.int32 Tensor of shape [batch_size, sequence_length]
+ which takes on value of inputs when the input is present and takes on
+ value=vocab_size to indicate a missing token.
+ """
+ # To fill in if the input is missing.
+ input_missing = tf.constant(
+ FLAGS.vocab_size,
+ dtype=tf.int32,
+ shape=[FLAGS.batch_size, FLAGS.sequence_length])
+
+ # The 0th input will always be present to MaskGAN.
+ zeroth_input_present = tf.constant(True, tf.bool, shape=[FLAGS.batch_size, 1])
+
+ # Input present mask.
+ inputs_present = tf.concat(
+ [zeroth_input_present, targets_present[:, :-1]], axis=1)
+
+ transformed_input = tf.where(inputs_present, inputs, input_missing)
+ return transformed_input
+
+
+# TODO(adai): IMDB labels placeholder to encoder.
+def gen_encoder(hparams, inputs, targets_present, is_training, reuse=None):
+ """Define the Encoder graph.
+
+ Args:
+ hparams: Hyperparameters for the MaskGAN.
+ inputs: tf.int32 Tensor of shape [batch_size, sequence_length] with tokens
+ up to, but not including, vocab_size.
+ targets_present: tf.bool Tensor of shape [batch_size, sequence_length] with
+ True representing the presence of the target.
+ is_training: Boolean indicating operational mode (train/inference).
+ reuse (Optional): Whether to reuse the variables.
+
+ Returns:
+ Tuple of (hidden_states, final_state).
+ """
+ # We will use the same variable from the decoder.
+ if FLAGS.seq2seq_share_embedding:
+ with tf.variable_scope('decoder/rnn'):
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.gen_rnn_size])
+
+ with tf.variable_scope('encoder', reuse=reuse):
+
+ def lstm_cell():
+ return tf.contrib.rnn.BasicLSTMCell(
+ hparams.gen_rnn_size,
+ forget_bias=0.0,
+ state_is_tuple=True,
+ reuse=reuse)
+
+ attn_cell = lstm_cell
+ if is_training and hparams.gen_vd_keep_prob < 1:
+
+ def attn_cell():
+ return variational_dropout.VariationalDropoutWrapper(
+ lstm_cell(), FLAGS.batch_size, hparams.gen_rnn_size,
+ hparams.gen_vd_keep_prob, hparams.gen_vd_keep_prob)
+
+ cell = tf.contrib.rnn.MultiRNNCell(
+ [attn_cell() for _ in range(hparams.gen_num_layers)],
+ state_is_tuple=True)
+
+ initial_state = cell.zero_state(FLAGS.batch_size, tf.float32)
+
+ # Add a missing token for inputs not present.
+ real_inputs = inputs
+ masked_inputs = transform_input_with_is_missing_token(
+ inputs, targets_present)
+
+ with tf.variable_scope('rnn') as scope:
+ hidden_states = []
+
+ # Split the embedding into two parts so that we can load the PTB
+ # weights into one part of the Variable.
+ if not FLAGS.seq2seq_share_embedding:
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.gen_rnn_size])
+ missing_embedding = tf.get_variable('missing_embedding',
+ [1, hparams.gen_rnn_size])
+ embedding = tf.concat([embedding, missing_embedding], axis=0)
+
+ # TODO(adai): Perhaps append IMDB labels placeholder to input at
+ # each time point.
+ real_rnn_inputs = tf.nn.embedding_lookup(embedding, real_inputs)
+ masked_rnn_inputs = tf.nn.embedding_lookup(embedding, masked_inputs)
+
+ state = initial_state
+
+ def make_mask(keep_prob, units):
+ random_tensor = keep_prob
+ # 0. if [keep_prob, 1.0) and 1. if [1.0, 1.0 + keep_prob)
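+        # so floor(random_tensor) is a Bernoulli(keep_prob) mask; dividing by
+        # keep_prob rescales it (inverted dropout), and the single sampled
+        # mask is reused at every time step (variational dropout).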
+ random_tensor += tf.random_uniform(
+ tf.stack([FLAGS.batch_size, 1, units]))
+ return tf.floor(random_tensor) / keep_prob
+
+ if is_training:
+ output_mask = make_mask(hparams.gen_vd_keep_prob, hparams.gen_rnn_size)
+
+ hidden_states, state = tf.nn.dynamic_rnn(
+ cell, masked_rnn_inputs, initial_state=state, scope=scope)
+ if is_training:
+ hidden_states *= output_mask
+
+ final_masked_state = state
+
+ # Produce the RNN state had the model operated only
+ # over real data.
+ real_state = initial_state
+ _, real_state = tf.nn.dynamic_rnn(
+ cell, real_rnn_inputs, initial_state=real_state, scope=scope)
+ final_state = real_state
+
+ return (hidden_states, final_masked_state), initial_state, final_state
+
+
+# TODO(adai): IMDB labels placeholder to encoder.
+def gen_encoder_cnn(hparams, inputs, targets_present, is_training, reuse=None):
+ """Define the CNN Encoder graph."""
+ del reuse
+ sequence = transform_input_with_is_missing_token(inputs, targets_present)
+
+ # TODO(liamfedus): Make this a hyperparameter.
+ dis_filter_sizes = [3, 4, 5, 6, 7, 8, 9, 10, 15, 20]
+
+ # Keeping track of l2 regularization loss (optional)
+ # l2_loss = tf.constant(0.0)
+
+ with tf.variable_scope('encoder', reuse=True):
+ with tf.variable_scope('rnn'):
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.gen_rnn_size])
+
+ cnn_inputs = tf.nn.embedding_lookup(embedding, sequence)
+
+ # Create a convolution layer for each filter size
+ conv_outputs = []
+ for filter_size in dis_filter_sizes:
+ with tf.variable_scope('conv-%s' % filter_size):
+ # Convolution Layer
+ filter_shape = [
+ filter_size, hparams.gen_rnn_size, hparams.dis_num_filters
+ ]
+ W = tf.get_variable(
+ name='W', initializer=tf.truncated_normal(filter_shape, stddev=0.1))
+ b = tf.get_variable(
+ name='b',
+ initializer=tf.constant(0.1, shape=[hparams.dis_num_filters]))
+ conv = tf.nn.conv1d(cnn_inputs, W, stride=1, padding='SAME', name='conv')
+
+ # Apply nonlinearity
+ h = tf.nn.relu(tf.nn.bias_add(conv, b), name='relu')
+
+ conv_outputs.append(h)
+
+  # Combine the feature maps from all filter sizes.
+ dis_num_filters_total = hparams.dis_num_filters * len(dis_filter_sizes)
+
+ h_conv = tf.concat(conv_outputs, axis=2)
+ h_conv_flat = tf.reshape(h_conv, [-1, dis_num_filters_total])
+
+ # Add dropout
+ if is_training:
+ with tf.variable_scope('dropout'):
+ h_conv_flat = tf.nn.dropout(h_conv_flat, hparams.gen_vd_keep_prob)
+
+ # Final (unnormalized) scores and predictions
+ with tf.variable_scope('output'):
+ W = tf.get_variable(
+ 'W',
+ shape=[dis_num_filters_total, hparams.gen_rnn_size],
+ initializer=tf.contrib.layers.xavier_initializer())
+ b = tf.get_variable(
+ name='b', initializer=tf.constant(0.1, shape=[hparams.gen_rnn_size]))
+ # l2_loss += tf.nn.l2_loss(W)
+ # l2_loss += tf.nn.l2_loss(b)
+ predictions = tf.nn.xw_plus_b(h_conv_flat, W, b, name='predictions')
+ predictions = tf.reshape(
+ predictions,
+ shape=[FLAGS.batch_size, FLAGS.sequence_length, hparams.gen_rnn_size])
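+    # Mean-pool the per-step outputs over time and duplicate the result so the
+    # return value mimics an LSTM-style (c, h) state tuple.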
+ final_state = tf.reduce_mean(predictions, 1)
+ return predictions, (final_state, final_state)
+
+
+# TODO(adai): IMDB labels placeholder to decoder.
+def gen_decoder(hparams,
+ inputs,
+ targets,
+ targets_present,
+ encoding_state,
+ is_training,
+ is_validating,
+ reuse=None):
+ """Define the Decoder graph. The Decoder will now impute tokens that
+  have been masked from the input sequence.
+ """
+ gen_decoder_rnn_size = hparams.gen_rnn_size
+
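+  # Debugging aid: logs the first few target ids each time this op runs.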
+ targets = tf.Print(targets, [targets], message='targets', summarize=50)
+ if FLAGS.seq2seq_share_embedding:
+ with tf.variable_scope('decoder/rnn', reuse=True):
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.gen_rnn_size])
+
+ with tf.variable_scope('decoder', reuse=reuse):
+
+ def lstm_cell():
+ return tf.contrib.rnn.BasicLSTMCell(
+ gen_decoder_rnn_size,
+ forget_bias=0.0,
+ state_is_tuple=True,
+ reuse=reuse)
+
+ attn_cell = lstm_cell
+ if is_training and hparams.gen_vd_keep_prob < 1:
+
+ def attn_cell():
+ return variational_dropout.VariationalDropoutWrapper(
+ lstm_cell(), FLAGS.batch_size, hparams.gen_rnn_size,
+ hparams.gen_vd_keep_prob, hparams.gen_vd_keep_prob)
+
+ cell_gen = tf.contrib.rnn.MultiRNNCell(
+ [attn_cell() for _ in range(hparams.gen_num_layers)],
+ state_is_tuple=True)
+
+ # Hidden encoder states.
+ hidden_vector_encodings = encoding_state[0]
+
+ # Carry forward the final state tuple from the encoder.
+ # State tuples.
+ state_gen = encoding_state[1]
+
+ if FLAGS.attention_option is not None:
+ (attention_keys, attention_values, _,
+ attention_construct_fn) = attention_utils.prepare_attention(
+ hidden_vector_encodings,
+ FLAGS.attention_option,
+ num_units=gen_decoder_rnn_size,
+ reuse=reuse)
+
+ def make_mask(keep_prob, units):
+ random_tensor = keep_prob
+ # 0. if [keep_prob, 1.0) and 1. if [1.0, 1.0 + keep_prob)
+ random_tensor += tf.random_uniform(tf.stack([FLAGS.batch_size, units]))
+ return tf.floor(random_tensor) / keep_prob
+
+ if is_training:
+ output_mask = make_mask(hparams.gen_vd_keep_prob, hparams.gen_rnn_size)
+
+ with tf.variable_scope('rnn'):
+ sequence, logits, log_probs = [], [], []
+
+ if not FLAGS.seq2seq_share_embedding:
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.gen_rnn_size])
+ softmax_w = tf.matrix_transpose(embedding)
+ softmax_b = tf.get_variable('softmax_b', [FLAGS.vocab_size])
+
+ rnn_inputs = tf.nn.embedding_lookup(embedding, inputs)
+ # TODO(adai): Perhaps append IMDB labels placeholder to input at
+ # each time point.
+
+ rnn_outs = []
+
+ fake = None
+ for t in xrange(FLAGS.sequence_length):
+ if t > 0:
+ tf.get_variable_scope().reuse_variables()
+
+ # Input to the Decoder.
+ if t == 0:
+ # Always provide the real input at t = 0.
+ rnn_inp = rnn_inputs[:, t]
+
+ # If the input is present, read in the input at t.
+ # If the input is not present, read in the previously generated.
+ else:
+ real_rnn_inp = rnn_inputs[:, t]
+
+ # While validating, the decoder should be operating in teacher
+ # forcing regime. Also, if we're just training with cross_entropy
+ # use teacher forcing.
+ if is_validating or FLAGS.gen_training_strategy == 'cross_entropy':
+ rnn_inp = real_rnn_inp
+ else:
+ fake_rnn_inp = tf.nn.embedding_lookup(embedding, fake)
+ rnn_inp = tf.where(targets_present[:, t - 1], real_rnn_inp,
+ fake_rnn_inp)
+
+ # RNN.
+ rnn_out, state_gen = cell_gen(rnn_inp, state_gen)
+
+ if FLAGS.attention_option is not None:
+ rnn_out = attention_construct_fn(rnn_out, attention_keys,
+ attention_values)
+ if is_training:
+ rnn_out *= output_mask
+
+ rnn_outs.append(rnn_out)
+ if FLAGS.gen_training_strategy != 'cross_entropy':
+ logit = tf.nn.bias_add(tf.matmul(rnn_out, softmax_w), softmax_b)
+
+ # Output for Decoder.
+ # If input is present: Return real at t+1.
+ # If input is not present: Return fake for t+1.
+ real = targets[:, t]
+
+ categorical = tf.contrib.distributions.Categorical(logits=logit)
+ if FLAGS.use_gen_mode:
+ fake = categorical.mode()
+ else:
+ fake = categorical.sample()
+ log_prob = categorical.log_prob(fake)
+ output = tf.where(targets_present[:, t], real, fake)
+
+ else:
+ real = targets[:, t]
+ logit = tf.zeros(tf.stack([FLAGS.batch_size, FLAGS.vocab_size]))
+ log_prob = tf.zeros(tf.stack([FLAGS.batch_size]))
+ output = real
+
+ # Add to lists.
+ sequence.append(output)
+ log_probs.append(log_prob)
+ logits.append(logit)
+
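+      # When training purely with cross-entropy, the per-step logits above are
+      # zero placeholders; compute the real logits in one matmul over the
+      # stacked RNN outputs.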
+ if FLAGS.gen_training_strategy == 'cross_entropy':
+ logits = tf.nn.bias_add(
+ tf.matmul(
+ tf.reshape(tf.stack(rnn_outs, 1), [-1, gen_decoder_rnn_size]),
+ softmax_w), softmax_b)
+ logits = tf.reshape(logits,
+ [-1, FLAGS.sequence_length, FLAGS.vocab_size])
+ else:
+ logits = tf.stack(logits, axis=1)
+
+ return (tf.stack(sequence, axis=1), logits, tf.stack(log_probs, axis=1))
+
+
+def dis_encoder(hparams, masked_inputs, is_training, reuse=None,
+ embedding=None):
+ """Define the Discriminator encoder. Reads in the masked inputs for context
+ and produces the hidden states of the encoder."""
+ with tf.variable_scope('encoder', reuse=reuse):
+
+ def lstm_cell():
+ return tf.contrib.rnn.BasicLSTMCell(
+ hparams.dis_rnn_size,
+ forget_bias=0.0,
+ state_is_tuple=True,
+ reuse=reuse)
+
+ attn_cell = lstm_cell
+ if is_training and hparams.dis_vd_keep_prob < 1:
+
+ def attn_cell():
+ return variational_dropout.VariationalDropoutWrapper(
+ lstm_cell(), FLAGS.batch_size, hparams.dis_rnn_size,
+ hparams.dis_vd_keep_prob, hparams.dis_vd_keep_prob)
+
+ cell_dis = tf.contrib.rnn.MultiRNNCell(
+ [attn_cell() for _ in range(hparams.dis_num_layers)],
+ state_is_tuple=True)
+
+ state_dis = cell_dis.zero_state(FLAGS.batch_size, tf.float32)
+
+ with tf.variable_scope('rnn'):
+ hidden_states = []
+
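+      # Append an extra row for the 'missing' token (id == FLAGS.vocab_size)
+      # so the masked inputs can be embedded directly.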
+ missing_embedding = tf.get_variable('missing_embedding',
+ [1, hparams.dis_rnn_size])
+ embedding = tf.concat([embedding, missing_embedding], axis=0)
+ masked_rnn_inputs = tf.nn.embedding_lookup(embedding, masked_inputs)
+
+ def make_mask(keep_prob, units):
+ random_tensor = keep_prob
+ # 0. if [keep_prob, 1.0) and 1. if [1.0, 1.0 + keep_prob)
+ random_tensor += tf.random_uniform(tf.stack([FLAGS.batch_size, units]))
+ return tf.floor(random_tensor) / keep_prob
+
+ if is_training:
+ output_mask = make_mask(hparams.dis_vd_keep_prob, hparams.dis_rnn_size)
+
+ for t in xrange(FLAGS.sequence_length):
+ if t > 0:
+ tf.get_variable_scope().reuse_variables()
+
+ rnn_in = masked_rnn_inputs[:, t]
+ rnn_out, state_dis = cell_dis(rnn_in, state_dis)
+ if is_training:
+ rnn_out *= output_mask
+ hidden_states.append(rnn_out)
+ final_state = state_dis
+
+ return (tf.stack(hidden_states, axis=1), final_state)
+
+
+def dis_decoder(hparams,
+ sequence,
+ encoding_state,
+ is_training,
+ reuse=None,
+ embedding=None):
+  """Define the Discriminator decoder. Reads in the sequence and predicts
+  at each time point."""
+ sequence = tf.cast(sequence, tf.int32)
+
+ with tf.variable_scope('decoder', reuse=reuse):
+
+ def lstm_cell():
+ return tf.contrib.rnn.BasicLSTMCell(
+ hparams.dis_rnn_size,
+ forget_bias=0.0,
+ state_is_tuple=True,
+ reuse=reuse)
+
+ attn_cell = lstm_cell
+ if is_training and hparams.dis_vd_keep_prob < 1:
+
+ def attn_cell():
+ return variational_dropout.VariationalDropoutWrapper(
+ lstm_cell(), FLAGS.batch_size, hparams.dis_rnn_size,
+ hparams.dis_vd_keep_prob, hparams.dis_vd_keep_prob)
+
+ cell_dis = tf.contrib.rnn.MultiRNNCell(
+ [attn_cell() for _ in range(hparams.dis_num_layers)],
+ state_is_tuple=True)
+
+ # Hidden encoder states.
+ hidden_vector_encodings = encoding_state[0]
+
+ # Carry forward the final state tuple from the encoder.
+ # State tuples.
+ state = encoding_state[1]
+
+ if FLAGS.attention_option is not None:
+ (attention_keys, attention_values, _,
+ attention_construct_fn) = attention_utils.prepare_attention(
+ hidden_vector_encodings,
+ FLAGS.attention_option,
+ num_units=hparams.dis_rnn_size,
+ reuse=reuse)
+
+ def make_mask(keep_prob, units):
+ random_tensor = keep_prob
+ # 0. if [keep_prob, 1.0) and 1. if [1.0, 1.0 + keep_prob)
+ random_tensor += tf.random_uniform(tf.stack([FLAGS.batch_size, units]))
+ return tf.floor(random_tensor) / keep_prob
+
+ if is_training:
+ output_mask = make_mask(hparams.dis_vd_keep_prob, hparams.dis_rnn_size)
+
+ with tf.variable_scope('rnn') as vs:
+ predictions = []
+
+ rnn_inputs = tf.nn.embedding_lookup(embedding, sequence)
+
+ for t in xrange(FLAGS.sequence_length):
+ if t > 0:
+ tf.get_variable_scope().reuse_variables()
+
+ rnn_in = rnn_inputs[:, t]
+ rnn_out, state = cell_dis(rnn_in, state)
+
+ if FLAGS.attention_option is not None:
+ rnn_out = attention_construct_fn(rnn_out, attention_keys,
+ attention_values)
+ if is_training:
+ rnn_out *= output_mask
+
+ # Prediction is linear output for Discriminator.
+ pred = tf.contrib.layers.linear(rnn_out, 1, scope=vs)
+ predictions.append(pred)
+
+ predictions = tf.stack(predictions, axis=1)
+ return tf.squeeze(predictions, axis=2)
+
+
+def discriminator(hparams,
+ inputs,
+ targets_present,
+ sequence,
+ is_training,
+ reuse=None):
+ """Define the Discriminator graph."""
+ if FLAGS.dis_share_embedding:
+ assert hparams.dis_rnn_size == hparams.gen_rnn_size, (
+ 'If you wish to share Discriminator/Generator embeddings, they must be'
+        ' the same dimension.')
+ with tf.variable_scope('gen/decoder/rnn', reuse=True):
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.gen_rnn_size])
+ else:
+    # Create a Discriminator-specific embedding, shared explicitly between its
+    # encoder and decoder.
+ with tf.variable_scope('dis/decoder/rnn', reuse=reuse):
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.dis_rnn_size])
+
+ # Mask the input sequence.
+ masked_inputs = transform_input_with_is_missing_token(inputs, targets_present)
+
+ # Confirm masking.
+ masked_inputs = tf.Print(
+ masked_inputs, [inputs, targets_present, masked_inputs, sequence],
+ message='inputs, targets_present, masked_inputs, sequence',
+ summarize=10)
+
+ with tf.variable_scope('dis', reuse=reuse):
+ encoder_states = dis_encoder(
+ hparams,
+ masked_inputs,
+ is_training=is_training,
+ reuse=reuse,
+ embedding=embedding)
+ predictions = dis_decoder(
+ hparams,
+ sequence,
+ encoder_states,
+ is_training=is_training,
+ reuse=reuse,
+ embedding=embedding)
+
+ # if FLAGS.baseline_method == 'critic':
+ # with tf.variable_scope('critic', reuse=reuse) as critic_scope:
+ # values = tf.contrib.layers.linear(rnn_outs, 1, scope=critic_scope)
+ # values = tf.squeeze(values, axis=2)
+ # else:
+ # values = None
+
+ return predictions
+
+
+# TODO(adai): IMDB labels placeholder to encoder/decoder.
+def generator(hparams,
+ inputs,
+ targets,
+ targets_present,
+ is_training,
+ is_validating,
+ reuse=None):
+ """Define the Generator graph."""
+ with tf.variable_scope('gen', reuse=reuse):
+ encoder_states, initial_state, final_state = gen_encoder(
+ hparams, inputs, targets_present, is_training=is_training, reuse=reuse)
+ stacked_sequence, stacked_logits, stacked_log_probs = gen_decoder(
+ hparams,
+ inputs,
+ targets,
+ targets_present,
+ encoder_states,
+ is_training=is_training,
+ is_validating=is_validating,
+ reuse=reuse)
+ return (stacked_sequence, stacked_logits, stacked_log_probs, initial_state,
+ final_state, encoder_states)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/seq2seq_zaremba.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/seq2seq_zaremba.py"
new file mode 100644
index 00000000..25f6ce44
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/models/seq2seq_zaremba.py"
@@ -0,0 +1,305 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Simple seq2seq model definitions."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+from six.moves import xrange
+from models import attention_utils
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def transform_input_with_is_missing_token(inputs, targets_present):
+  """Transforms the inputs to have missing tokens when they are masked out. The
+  mask is over the targets, so to determine whether an input at time t is
+  masked, we check whether the target at time t - 1 is masked out.
+
+ e.g.
+ inputs = [a, b, c, d]
+ targets = [b, c, d, e]
+ targets_present = [1, 0, 1, 0]
+
+ then,
+ transformed_input = [a, b, , d]
+
+ Args:
+ inputs: tf.int32 Tensor of shape [batch_size, sequence_length] with tokens
+ up to, but not including, vocab_size.
+ targets_present: tf.bool Tensor of shape [batch_size, sequence_length] with
+ True representing the presence of the word.
+
+ Returns:
+ transformed_input: tf.int32 Tensor of shape [batch_size, sequence_length]
+ which takes on value of inputs when the input is present and takes on
+ value=vocab_size to indicate a missing token.
+ """
+ # To fill in if the input is missing.
+ input_missing = tf.constant(FLAGS.vocab_size,
+ dtype=tf.int32,
+ shape=[FLAGS.batch_size, FLAGS.sequence_length])
+
+ # The 0th input will always be present to MaskGAN.
+ zeroth_input_present = tf.constant(True, tf.bool, shape=[FLAGS.batch_size, 1])
+
+ # Input present mask.
+ inputs_present = tf.concat(
+ [zeroth_input_present, targets_present[:, :-1]], axis=1)
+
+ transformed_input = tf.where(inputs_present, inputs, input_missing)
+ return transformed_input
+
+
+def gen_encoder(hparams, inputs, targets_present, is_training, reuse=None):
+ """Define the Encoder graph.
+
+ Args:
+ hparams: Hyperparameters for the MaskGAN.
+ inputs: tf.int32 Tensor of shape [batch_size, sequence_length] with tokens
+ up to, but not including, vocab_size.
+ targets_present: tf.bool Tensor of shape [batch_size, sequence_length] with
+ True representing the presence of the target.
+ is_training: Boolean indicating operational mode (train/inference).
+ reuse (Optional): Whether to reuse the variables.
+
+ Returns:
+ Tuple of (hidden_states, final_state).
+ """
+ with tf.variable_scope('encoder', reuse=reuse):
+
+ def lstm_cell():
+ return tf.contrib.rnn.BasicLSTMCell(hparams.gen_rnn_size,
+ forget_bias=0.0,
+ state_is_tuple=True,
+ reuse=reuse)
+
+ attn_cell = lstm_cell
+ if is_training and FLAGS.keep_prob < 1:
+
+ def attn_cell():
+ return tf.contrib.rnn.DropoutWrapper(
+ lstm_cell(), output_keep_prob=FLAGS.keep_prob)
+
+ cell = tf.contrib.rnn.MultiRNNCell(
+ [attn_cell() for _ in range(hparams.gen_num_layers)],
+ state_is_tuple=True)
+
+ initial_state = cell.zero_state(FLAGS.batch_size, tf.float32)
+
+ # Add a missing token for inputs not present.
+ real_inputs = inputs
+ masked_inputs = transform_input_with_is_missing_token(inputs,
+ targets_present)
+
+ with tf.variable_scope('rnn'):
+ hidden_states = []
+
+ # Split the embedding into two parts so that we can load the PTB
+ # weights into one part of the Variable.
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.gen_rnn_size])
+ missing_embedding = tf.get_variable('missing_embedding',
+ [1, hparams.gen_rnn_size])
+ embedding = tf.concat([embedding, missing_embedding], axis=0)
+
+ real_rnn_inputs = tf.nn.embedding_lookup(embedding, real_inputs)
+ masked_rnn_inputs = tf.nn.embedding_lookup(embedding, masked_inputs)
+
+ if is_training and FLAGS.keep_prob < 1:
+ masked_rnn_inputs = tf.nn.dropout(masked_rnn_inputs, FLAGS.keep_prob)
+
+ state = initial_state
+ for t in xrange(FLAGS.sequence_length):
+ if t > 0:
+ tf.get_variable_scope().reuse_variables()
+
+ rnn_inp = masked_rnn_inputs[:, t]
+ rnn_out, state = cell(rnn_inp, state)
+ hidden_states.append(rnn_out)
+ final_masked_state = state
+ hidden_states = tf.stack(hidden_states, axis=1)
+
+ # Produce the RNN state had the model operated only
+ # over real data.
+ real_state = initial_state
+ for t in xrange(FLAGS.sequence_length):
+ tf.get_variable_scope().reuse_variables()
+
+ # RNN.
+ rnn_inp = real_rnn_inputs[:, t]
+ rnn_out, real_state = cell(rnn_inp, real_state)
+ final_state = real_state
+
+ return (hidden_states, final_masked_state), initial_state, final_state
+
+
+def gen_decoder(hparams,
+ inputs,
+ targets,
+ targets_present,
+ encoding_state,
+ is_training,
+ is_validating,
+ reuse=None):
+ """Define the Decoder graph. The Decoder will now impute tokens that
+  have been masked from the input sequence.
+ """
+ gen_decoder_rnn_size = hparams.gen_rnn_size
+
+ with tf.variable_scope('decoder', reuse=reuse):
+
+ def lstm_cell():
+ return tf.contrib.rnn.BasicLSTMCell(gen_decoder_rnn_size,
+ forget_bias=0.0,
+ state_is_tuple=True,
+ reuse=reuse)
+
+ attn_cell = lstm_cell
+ if is_training and FLAGS.keep_prob < 1:
+
+ def attn_cell():
+ return tf.contrib.rnn.DropoutWrapper(
+ lstm_cell(), output_keep_prob=FLAGS.keep_prob)
+
+ cell_gen = tf.contrib.rnn.MultiRNNCell(
+ [attn_cell() for _ in range(hparams.gen_num_layers)],
+ state_is_tuple=True)
+
+ # Hidden encoder states.
+ hidden_vector_encodings = encoding_state[0]
+
+ # Carry forward the final state tuple from the encoder.
+ # State tuples.
+ state_gen = encoding_state[1]
+
+ if FLAGS.attention_option is not None:
+ (attention_keys, attention_values, _,
+ attention_construct_fn) = attention_utils.prepare_attention(
+ hidden_vector_encodings,
+ FLAGS.attention_option,
+ num_units=gen_decoder_rnn_size,
+ reuse=reuse)
+
+ with tf.variable_scope('rnn'):
+ sequence, logits, log_probs = [], [], []
+
+ embedding = tf.get_variable('embedding',
+ [FLAGS.vocab_size, hparams.gen_rnn_size])
+ softmax_w = tf.matrix_transpose(embedding)
+ softmax_b = tf.get_variable('softmax_b', [FLAGS.vocab_size])
+
+ rnn_inputs = tf.nn.embedding_lookup(embedding, inputs)
+
+ if is_training and FLAGS.keep_prob < 1:
+ rnn_inputs = tf.nn.dropout(rnn_inputs, FLAGS.keep_prob)
+
+ rnn_outs = []
+
+ fake = None
+ for t in xrange(FLAGS.sequence_length):
+ if t > 0:
+ tf.get_variable_scope().reuse_variables()
+
+ # Input to the Decoder.
+ if t == 0:
+ # Always provide the real input at t = 0.
+ rnn_inp = rnn_inputs[:, t]
+
+ # If the input is present, read in the input at t.
+ # If the input is not present, read in the previously generated.
+ else:
+ real_rnn_inp = rnn_inputs[:, t]
+
+ # While validating, the decoder should be operating in teacher
+ # forcing regime. Also, if we're just training with cross_entropy
+ # use teacher forcing.
+ if is_validating or FLAGS.gen_training_strategy == 'cross_entropy':
+ rnn_inp = real_rnn_inp
+ else:
+ fake_rnn_inp = tf.nn.embedding_lookup(embedding, fake)
+ rnn_inp = tf.where(targets_present[:, t - 1], real_rnn_inp,
+ fake_rnn_inp)
+
+ # RNN.
+ rnn_out, state_gen = cell_gen(rnn_inp, state_gen)
+
+ if FLAGS.attention_option is not None:
+ rnn_out = attention_construct_fn(rnn_out, attention_keys,
+ attention_values)
+ rnn_outs.append(rnn_out)
+ if FLAGS.gen_training_strategy != 'cross_entropy':
+ logit = tf.nn.bias_add(tf.matmul(rnn_out, softmax_w), softmax_b)
+
+ # Output for Decoder.
+ # If input is present: Return real at t+1.
+ # If input is not present: Return fake for t+1.
+ real = targets[:, t]
+
+ categorical = tf.contrib.distributions.Categorical(logits=logit)
+ fake = categorical.sample()
+ log_prob = categorical.log_prob(fake)
+
+ output = tf.where(targets_present[:, t], real, fake)
+
+ else:
+ batch_size = tf.shape(rnn_out)[0]
+ logit = tf.zeros(tf.stack([batch_size, FLAGS.vocab_size]))
+ log_prob = tf.zeros(tf.stack([batch_size]))
+ output = targets[:, t]
+
+ # Add to lists.
+ sequence.append(output)
+ log_probs.append(log_prob)
+ logits.append(logit)
+ if FLAGS.gen_training_strategy == 'cross_entropy':
+ logits = tf.nn.bias_add(
+ tf.matmul(
+ tf.reshape(tf.stack(rnn_outs, 1), [-1, gen_decoder_rnn_size]),
+ softmax_w), softmax_b)
+ logits = tf.reshape(logits,
+ [-1, FLAGS.sequence_length, FLAGS.vocab_size])
+ else:
+ logits = tf.stack(logits, axis=1)
+
+ return (tf.stack(sequence, axis=1), logits, tf.stack(log_probs, axis=1))
+
+
+def generator(hparams,
+ inputs,
+ targets,
+ targets_present,
+ is_training,
+ is_validating,
+ reuse=None):
+ """Define the Generator graph."""
+ with tf.variable_scope('gen', reuse=reuse):
+ encoder_states, initial_state, final_state = gen_encoder(
+ hparams, inputs, targets_present, is_training=is_training, reuse=reuse)
+ stacked_sequence, stacked_logits, stacked_log_probs = gen_decoder(
+ hparams,
+ inputs,
+ targets,
+ targets_present,
+ encoder_states,
+ is_training=is_training,
+ is_validating=is_validating,
+ reuse=reuse)
+ return (stacked_sequence, stacked_logits, stacked_log_probs, initial_state,
+ final_state)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/nas_utils/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/nas_utils/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/nas_utils/configs.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/nas_utils/configs.py"
new file mode 100644
index 00000000..80d867c3
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/nas_utils/configs.py"
@@ -0,0 +1,46 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+def print_config(config):
+ print("-" * 10, "Configuration Specs", "-" * 10)
+ for item in dir(config):
+    if not item.startswith("_"):
+ print(item, getattr(config, item))
+ print("-" * 29)
+
+
+class AlienConfig2(object):
+  """Base-8 cell with 740 hidden units and shared embeddings; gets 64.0 (mean: std: min: max: )."""
+ init_scale = 0.05
+ learning_rate = 1.0
+ max_grad_norm = 10
+ num_layers = 2
+ num_steps = 25
+ hidden_size = 740
+ max_epoch = 70
+ max_max_epoch = 250
+ keep_prob = [1 - 0.15, 1 - 0.45]
+ lr_decay = 0.95
+ batch_size = 20
+ vocab_size = 10000
+ weight_decay = 1e-4
+ share_embeddings = True
+ cell = "alien"
+ dropout_type = "variational"
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/nas_utils/custom_cell.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/nas_utils/custom_cell.py"
new file mode 100644
index 00000000..6add7ffa
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/nas_utils/custom_cell.py"
@@ -0,0 +1,166 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import collections
+import numpy as np
+import tensorflow as tf
+
+flags = tf.flags
+FLAGS = tf.app.flags.FLAGS
+LSTMTuple = collections.namedtuple('LSTMTuple', ['c', 'h'])
+
+
+def cell_depth(num):
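+  # Each node of the cell's binary tree consumes two parameter slots (a
+  # combiner index and an activation index), so `num` slots give num / 2
+  # nodes and a tree of depth log2(1 + num / 2); e.g. 30 slots -> 15 nodes
+  # -> depth 4.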
+ num /= 2
+ val = np.log2(1 + num)
+ assert abs(val - int(val)) == 0
+ return int(val)
+
+
+class GenericMultiRNNCell(tf.contrib.rnn.RNNCell):
+  """More generic version of MultiRNNCell that allows a dropout mask to be passed in."""
+
+ def __init__(self, cells):
+ """Create a RNN cell composed sequentially of a number of RNNCells.
+
+ Args:
+      cells: list of RNNCells that will be composed in this order. Accepted
+        and returned states are always n-tuples, where `n = len(cells)`.
+ """
+ self._cells = cells
+
+ @property
+ def state_size(self):
+ return tuple(cell.state_size for cell in self._cells)
+
+ @property
+ def output_size(self):
+ return self._cells[-1].output_size
+
+ def __call__(self, inputs, state, input_masks=None, scope=None):
+ """Run this multi-layer cell on inputs, starting from state."""
+ with tf.variable_scope(scope or type(self).__name__):
+ cur_inp = inputs
+ new_states = []
+ for i, cell in enumerate(self._cells):
+ with tf.variable_scope('Cell%d' % i):
+ cur_state = state[i]
+ if input_masks is not None:
+ cur_inp *= input_masks[i]
+ cur_inp, new_state = cell(cur_inp, cur_state)
+ new_states.append(new_state)
+ new_states = tuple(new_states)
+ return cur_inp, new_states
+
+
+class AlienRNNBuilder(tf.contrib.rnn.RNNCell):
+
+ def __init__(self, num_units, params, additional_params, base_size):
+ self.num_units = num_units
+ self.cell_create_index = additional_params[0]
+ self.cell_inject_index = additional_params[1]
+ self.base_size = base_size
+ self.cell_params = params[
+ -2:] # Cell injection parameters are always the last two
+ params = params[:-2]
+ self.depth = cell_depth(len(params))
+ self.params = params
+ self.units_per_layer = [2**i for i in range(self.depth)
+ ][::-1] # start with the biggest layer
+
+ def __call__(self, inputs, state, scope=None):
+ with tf.variable_scope(scope or type(self).__name__):
+ definition1 = ['add', 'elem_mult', 'max']
+ definition2 = [tf.identity, tf.tanh, tf.sigmoid, tf.nn.relu, tf.sin]
+ layer_outputs = [[] for _ in range(self.depth)]
+ with tf.variable_scope('rnn_builder'):
+ curr_index = 0
+ c, h = state
+
+ # Run all dense matrix multiplications at once
+ big_h_mat = tf.get_variable(
+ 'big_h_mat', [self.num_units,
+ self.base_size * self.num_units], tf.float32)
+ big_inputs_mat = tf.get_variable(
+ 'big_inputs_mat', [self.num_units,
+ self.base_size * self.num_units], tf.float32)
+ big_h_output = tf.matmul(h, big_h_mat)
+ big_inputs_output = tf.matmul(inputs, big_inputs_mat)
+ h_splits = tf.split(big_h_output, self.base_size, axis=1)
+ inputs_splits = tf.split(big_inputs_output, self.base_size, axis=1)
+
+ for layer_num, units in enumerate(self.units_per_layer):
+ for unit_num in range(units):
+ with tf.variable_scope(
+ 'layer_{}_unit_{}'.format(layer_num, unit_num)):
+ if layer_num == 0:
+ prev1_mat = h_splits[unit_num]
+ prev2_mat = inputs_splits[unit_num]
+ else:
+ prev1_mat = layer_outputs[layer_num - 1][2 * unit_num]
+ prev2_mat = layer_outputs[layer_num - 1][2 * unit_num + 1]
+ if definition1[self.params[curr_index]] == 'add':
+ output = prev1_mat + prev2_mat
+ elif definition1[self.params[curr_index]] == 'elem_mult':
+ output = prev1_mat * prev2_mat
+ elif definition1[self.params[curr_index]] == 'max':
+ output = tf.maximum(prev1_mat, prev2_mat)
+ if curr_index / 2 == self.cell_create_index: # Take the new cell before the activation
+ new_c = tf.identity(output)
+ output = definition2[self.params[curr_index + 1]](output)
+ if curr_index / 2 == self.cell_inject_index:
+ if definition1[self.cell_params[0]] == 'add':
+ output += c
+ elif definition1[self.cell_params[0]] == 'elem_mult':
+ output *= c
+ elif definition1[self.cell_params[0]] == 'max':
+ output = tf.maximum(output, c)
+ output = definition2[self.cell_params[1]](output)
+ layer_outputs[layer_num].append(output)
+ curr_index += 2
+ new_h = layer_outputs[-1][-1]
+ return new_h, LSTMTuple(new_c, new_h)
+
+ @property
+ def state_size(self):
+ return LSTMTuple(self.num_units, self.num_units)
+
+ @property
+ def output_size(self):
+ return self.num_units
+
+
+class Alien(AlienRNNBuilder):
+ """Base 8 Cell."""
+
+ def __init__(self, num_units):
+ params = [
+ 0, 2, 0, 3, 0, 2, 1, 3, 0, 1, 0, 2, 0, 1, 0, 2, 1, 1, 0, 1, 1, 1, 0, 2,
+ 1, 0, 0, 1, 1, 1, 0, 1
+ ]
+ additional_params = [12, 8]
+ base_size = 8
+ super(Alien, self).__init__(num_units, params, additional_params, base_size)
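As a quick sanity check on the encoding above, here is a small self-contained sketch (NumPy only, with names mirroring the file) of how the 32-entry parameter list maps onto the cell's binary tree:

```python
import numpy as np

def cell_depth(num):
    # Each tree node consumes two parameters; a tree of depth d has 2**d - 1 nodes.
    num /= 2
    val = np.log2(1 + num)
    assert abs(val - int(val)) == 0
    return int(val)

# The Alien cell's 32 parameters: the last two control the cell-state
# injection, the remaining 30 describe 15 tree nodes.
num_tree_params = 32 - 2
depth = cell_depth(num_tree_params)                     # -> 4
units_per_layer = [2 ** i for i in range(depth)][::-1]  # -> [8, 4, 2, 1]
print(depth, units_per_layer)
```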
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/nas_utils/variational_dropout.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/nas_utils/variational_dropout.py"
new file mode 100644
index 00000000..49cc29f0
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/nas_utils/variational_dropout.py"
@@ -0,0 +1,61 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Variational Dropout."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def generate_dropout_masks(keep_prob, shape, amount):
+ masks = []
+ for _ in range(amount):
+ dropout_mask = tf.random_uniform(shape) + (keep_prob)
+ dropout_mask = tf.floor(dropout_mask) / (keep_prob)
+ masks.append(dropout_mask)
+ return masks
+
+
+def generate_variational_dropout_masks(hparams, keep_prob):
+ [batch_size, num_steps, size, num_layers] = [
+ FLAGS.batch_size, FLAGS.sequence_length, hparams.gen_rnn_size,
+ hparams.gen_num_layers
+ ]
+ if len(keep_prob) == 2:
+ emb_keep_prob = keep_prob[0] # keep prob for embedding matrix
+ h2h_keep_prob = emb_keep_prob # keep prob for hidden to hidden connections
+ h2i_keep_prob = keep_prob[1] # keep prob for hidden to input connections
+ out_keep_prob = h2i_keep_prob # keep probability for output state
+ else:
+ emb_keep_prob = keep_prob[0] # keep prob for embedding matrix
+ h2h_keep_prob = keep_prob[1] # keep prob for hidden to hidden connections
+ h2i_keep_prob = keep_prob[2] # keep prob for hidden to input connections
+ out_keep_prob = keep_prob[3] # keep probability for output state
+ h2i_masks = [] # Masks for input to recurrent connections
+ h2h_masks = [] # Masks for recurrent to recurrent connections
+
+ # Input word dropout mask
+ emb_masks = generate_dropout_masks(emb_keep_prob, [num_steps, 1], batch_size)
+ output_mask = generate_dropout_masks(out_keep_prob, [batch_size, size], 1)[0]
+ h2i_masks = generate_dropout_masks(h2i_keep_prob, [batch_size, size],
+ num_layers)
+ h2h_masks = generate_dropout_masks(h2h_keep_prob, [batch_size, size],
+ num_layers)
+ return h2h_masks, h2i_masks, emb_masks, output_mask
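The mask construction above relies on flooring a shifted uniform sample. A small NumPy sketch of the same trick (the helper name here is illustrative, not part of the diff):

```python
import numpy as np

def make_dropout_mask(keep_prob, shape, rng=np.random):
    # uniform + keep_prob lies in [keep_prob, 1 + keep_prob); flooring yields
    # 1 with probability keep_prob and 0 otherwise.  Dividing by keep_prob is
    # the usual inverted-dropout scaling, so the mask has mean close to 1.
    return np.floor(rng.uniform(size=shape) + keep_prob) / keep_prob

mask = make_dropout_mask(0.75, (4, 6))
print(mask)          # entries are 0.0 or 1 / 0.75
print(mask.mean())   # close to 1.0
```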
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/pretrain_mask_gan.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/pretrain_mask_gan.py"
new file mode 100644
index 00000000..1a9d8ee9
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/pretrain_mask_gan.py"
@@ -0,0 +1,231 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Pretraining functions."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# Dependency imports
+
+import numpy as np
+
+import tensorflow as tf
+
+from data import imdb_loader
+from data import ptb_loader
+
+# Data.
+from model_utils import model_utils
+from models import evaluation_utils
+
+tf.app.flags.DEFINE_integer(
+ 'gen_pretrain_steps', None,
+ 'The number of steps to pretrain the generator with cross entropy loss.')
+tf.app.flags.DEFINE_integer(
+ 'dis_pretrain_steps', None,
+ 'The number of steps to pretrain the discriminator.')
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def pretrain_generator(sv, sess, model, data, log, id_to_word,
+ data_ngram_counts, is_chief):
+ """Pretrain the generator with classic language modeling training."""
+ print('\nPretraining generator for %d steps.' % FLAGS.gen_pretrain_steps)
+ log.write(
+ '\nPretraining generator for %d steps.\n' % FLAGS.gen_pretrain_steps)
+
+ is_pretraining = True
+
+ while is_pretraining:
+
+ costs = 0.
+ iters = 0
+ if FLAGS.data_set == 'ptb':
+ iterator = ptb_loader.ptb_iterator(data, FLAGS.batch_size,
+ FLAGS.sequence_length,
+ FLAGS.epoch_size_override)
+ elif FLAGS.data_set == 'imdb':
+ iterator = imdb_loader.imdb_iterator(data, FLAGS.batch_size,
+ FLAGS.sequence_length)
+
+ for x, y, _ in iterator:
+
+ # For pretraining with cross entropy loss, we have all tokens in the
+ # forward sequence present (all True).
+ model_utils.assign_percent_real(sess, model.percent_real_update,
+ model.new_rate, 1.0)
+ p = np.ones(shape=[FLAGS.batch_size, FLAGS.sequence_length], dtype=bool)
+
+ pretrain_feed = {model.inputs: x, model.targets: y, model.present: p}
+
+ [losses, cost_eval, _, step] = sess.run(
+ [
+ model.fake_cross_entropy_losses, model.avg_log_perplexity,
+ model.gen_pretrain_op, model.global_step
+ ],
+ feed_dict=pretrain_feed)
+
+ costs += cost_eval
+ iters += FLAGS.sequence_length
+
+ # Calculate rolling perplexity.
+ perplexity = np.exp(costs / iters)
+
+ # Summaries.
+ if is_chief and step % FLAGS.summaries_every == 0:
+ # Graph summaries.
+ summary_str = sess.run(
+ model.merge_summaries_op, feed_dict=pretrain_feed)
+ sv.SummaryComputed(sess, summary_str)
+
+ # Additional summary.
+ for n, data_ngram_count in data_ngram_counts.iteritems():
+ avg_percent_captured = evaluation_utils.sequence_ngram_evaluation(
+ sess, model.fake_sequence, log, pretrain_feed, data_ngram_count,
+ int(n))
+ summary_percent_str = tf.Summary(value=[
+ tf.Summary.Value(
+ tag='general/%s-grams_percent_correct' % n,
+ simple_value=avg_percent_captured)
+ ])
+ sv.SummaryComputed(sess, summary_percent_str, global_step=step)
+
+ summary_perplexity_str = tf.Summary(value=[
+ tf.Summary.Value(tag='general/perplexity', simple_value=perplexity)
+ ])
+ sv.SummaryComputed(sess, summary_perplexity_str, global_step=step)
+
+ # Printing and logging
+ if is_chief and step % FLAGS.print_every == 0:
+ print('global_step: %d' % step)
+ print(' generator loss: %.3f' % np.mean(losses))
+ print(' perplexity: %.3f' % perplexity)
+ log.write('global_step: %d\n' % step)
+ log.write(' generator loss: %.3f\n' % np.mean(losses))
+ log.write(' perplexity: %.3f\n' % perplexity)
+
+ for n, data_ngram_count in data_ngram_counts.iteritems():
+ avg_percent_captured = evaluation_utils.sequence_ngram_evaluation(
+ sess, model.fake_sequence, log, pretrain_feed, data_ngram_count,
+ int(n))
+ print(' percent of %s-grams captured: %.3f.\n' %
+ (n, avg_percent_captured))
+ log.write(' percent of %s-grams captured: %.3f.\n\n' %
+ (n, avg_percent_captured))
+
+ evaluation_utils.generate_logs(sess, model, log, id_to_word,
+ pretrain_feed)
+
+ if step >= FLAGS.gen_pretrain_steps:
+ is_pretraining = False
+ break
+ return
+
+
+def pretrain_discriminator(sv, sess, model, data, log, id_to_word,
+ data_ngram_counts, is_chief):
+ print('\nPretraining discriminator for %d steps.' % FLAGS.dis_pretrain_steps)
+ log.write(
+ '\nPretraining discriminator for %d steps.\n' % FLAGS.dis_pretrain_steps)
+
+ is_pretraining = True
+
+ while is_pretraining:
+
+ cumulative_costs = 0.
+ iters = 0
+ if FLAGS.data_set == 'ptb':
+ iterator = ptb_loader.ptb_iterator(data, FLAGS.batch_size,
+ FLAGS.sequence_length,
+ FLAGS.epoch_size_override)
+ elif FLAGS.data_set == 'imdb':
+ iterator = imdb_loader.imdb_iterator(data, FLAGS.batch_size,
+ FLAGS.sequence_length)
+
+ for x, y, _ in iterator:
+ is_present_rate = FLAGS.is_present_rate
+ # is_present_rate = np.random.uniform(low=0.0, high=1.0)
+ model_utils.assign_percent_real(sess, model.percent_real_update,
+ model.new_rate, is_present_rate)
+ # Randomly mask out tokens.
+ p = model_utils.generate_mask()
+
+ pretrain_feed = {model.inputs: x, model.targets: y, model.present: p}
+
+ [_, dis_loss_eval, gen_log_perplexity_eval, step] = sess.run(
+ [
+ model.dis_pretrain_op, model.dis_loss, model.avg_log_perplexity,
+ model.global_step
+ ],
+ feed_dict=pretrain_feed)
+
+ cumulative_costs += gen_log_perplexity_eval
+ iters += 1
+
+ # Calculate rolling perplexity.
+ perplexity = np.exp(cumulative_costs / iters)
+
+ # Summaries.
+ if is_chief and step % FLAGS.summaries_every == 0:
+ # Graph summaries.
+ summary_str = sess.run(
+ model.merge_summaries_op, feed_dict=pretrain_feed)
+ sv.SummaryComputed(sess, summary_str)
+
+ # Additional summary.
+ for n, data_ngram_count in data_ngram_counts.iteritems():
+ avg_percent_captured = evaluation_utils.sequence_ngram_evaluation(
+ sess, model.fake_sequence, log, pretrain_feed, data_ngram_count,
+ int(n))
+ summary_percent_str = tf.Summary(value=[
+ tf.Summary.Value(
+ tag='general/%s-grams_percent_correct' % n,
+ simple_value=avg_percent_captured)
+ ])
+ sv.SummaryComputed(sess, summary_percent_str, global_step=step)
+
+ summary_perplexity_str = tf.Summary(value=[
+ tf.Summary.Value(tag='general/perplexity', simple_value=perplexity)
+ ])
+ sv.SummaryComputed(sess, summary_perplexity_str, global_step=step)
+
+ # Printing and logging
+ if is_chief and step % FLAGS.print_every == 0:
+ print('global_step: %d' % step)
+ print(' discriminator loss: %.3f' % dis_loss_eval)
+ print(' perplexity: %.3f' % perplexity)
+ log.write('global_step: %d\n' % step)
+ log.write(' discriminator loss: %.3f\n' % dis_loss_eval)
+ log.write(' perplexity: %.3f\n' % perplexity)
+
+ for n, data_ngram_count in data_ngram_counts.iteritems():
+ avg_percent_captured = evaluation_utils.sequence_ngram_evaluation(
+ sess, model.fake_sequence, log, pretrain_feed, data_ngram_count,
+ int(n))
+ print(' percent of %s-grams captured: %.3f.\n' %
+ (n, avg_percent_captured))
+ log.write(' percent of %s-grams captured: %.3f.\n\n' %
+ (n, avg_percent_captured))
+
+ evaluation_utils.generate_logs(sess, model, log, id_to_word,
+ pretrain_feed)
+
+ if step >= FLAGS.dis_pretrain_steps + int(FLAGS.gen_pretrain_steps or 0):
+ is_pretraining = False
+ break
+ return
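Both pretraining loops above share the same control flow: keep drawing full passes over the data until the global step exceeds the pretraining budget. A stripped-down sketch of that pattern (names here are illustrative, not the module's API):

```python
def pretrain(pretrain_steps, one_epoch):
    """Run repeated epochs until the step counter reaches pretrain_steps."""
    step = 0
    is_pretraining = True
    while is_pretraining:
        for _ in one_epoch():   # one pass over the data iterator
            step += 1           # stands in for model.global_step
            if step >= pretrain_steps:
                is_pretraining = False
                break
    return step

print(pretrain(pretrain_steps=7, one_epoch=lambda: range(3)))  # -> 7
```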
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/regularization/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/regularization/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/regularization/variational_dropout.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/regularization/variational_dropout.py"
new file mode 100644
index 00000000..d67fe52e
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/regularization/variational_dropout.py"
@@ -0,0 +1,56 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Variational Dropout Wrapper."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+
+class VariationalDropoutWrapper(tf.contrib.rnn.RNNCell):
+ """Add variational dropout to a RNN cell."""
+
+ def __init__(self, cell, batch_size, input_size, recurrent_keep_prob,
+ input_keep_prob):
+ self._cell = cell
+ self._recurrent_keep_prob = recurrent_keep_prob
+ self._input_keep_prob = input_keep_prob
+
+ def make_mask(keep_prob, units):
+ random_tensor = keep_prob
+ # 0. if [keep_prob, 1.0) and 1. if [1.0, 1.0 + keep_prob)
+ random_tensor += tf.random_uniform(tf.stack([batch_size, units]))
+ return tf.floor(random_tensor) / keep_prob
+
+ self._recurrent_mask = make_mask(recurrent_keep_prob,
+ self._cell.state_size[0])
+ self._input_mask = self._recurrent_mask
+
+ @property
+ def state_size(self):
+ return self._cell.state_size
+
+ @property
+ def output_size(self):
+ return self._cell.output_size
+
+ def __call__(self, inputs, state, scope=None):
+ dropped_inputs = inputs * self._input_mask
+ dropped_state = (state[0], state[1] * self._recurrent_mask)
+ new_h, new_state = self._cell(dropped_inputs, dropped_state, scope)
+ return new_h, new_state
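A rough usage sketch for the wrapper above, assuming a TF 1.x environment where `tf.contrib.rnn` is available and the script is run from the `maskgan/` directory so the class imports as written; sizes are illustrative. The same sampled masks are reused at every time step, which is what makes the dropout variational rather than independent per step.

```python
import tensorflow as tf  # TF 1.x
from regularization.variational_dropout import VariationalDropoutWrapper

batch_size, num_units = 20, 650
base_cell = tf.contrib.rnn.BasicLSTMCell(num_units)

# Masks are sampled once at construction time and reused for every step.
cell = VariationalDropoutWrapper(
    base_cell, batch_size, num_units,
    recurrent_keep_prob=0.5, input_keep_prob=0.5)

inputs = tf.zeros([batch_size, num_units])
state = base_cell.zero_state(batch_size, tf.float32)
output, new_state = cell(inputs, state)
```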
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/regularization/zoneout.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/regularization/zoneout.py"
new file mode 100644
index 00000000..5f9ef3e3
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/regularization/zoneout.py"
@@ -0,0 +1,64 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Zoneout Wrapper"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+
+class ZoneoutWrapper(tf.contrib.rnn.RNNCell):
+ """Add Zoneout to a RNN cell."""
+
+ def __init__(self, cell, zoneout_drop_prob, is_training=True):
+ self._cell = cell
+ self._zoneout_prob = zoneout_drop_prob
+ self._is_training = is_training
+
+ @property
+ def state_size(self):
+ return self._cell.state_size
+
+ @property
+ def output_size(self):
+ return self._cell.output_size
+
+ def __call__(self, inputs, state, scope=None):
+ output, new_state = self._cell(inputs, state, scope)
+ if not isinstance(self._cell.state_size, tuple):
+ new_state = tf.split(value=new_state, num_or_size_splits=2, axis=1)
+ state = tf.split(value=state, num_or_size_splits=2, axis=1)
+ final_new_state = [new_state[0], new_state[1]]
+ if self._is_training:
+ for i, state_element in enumerate(state):
+ random_tensor = 1 - self._zoneout_prob # keep probability
+ random_tensor += tf.random_uniform(tf.shape(state_element))
+ # 0. if [zoneout_prob, 1.0) and 1. if [1.0, 1.0 + zoneout_prob)
+ binary_tensor = tf.floor(random_tensor)
+ final_new_state[
+ i] = (new_state[i] - state_element) * binary_tensor + state_element
+ else:
+ for i, state_element in enumerate(state):
+ final_new_state[
+ i] = state_element * self._zoneout_prob + new_state[i] * (
+ 1 - self._zoneout_prob)
+ if isinstance(self._cell.state_size, tuple):
+ return output, tf.contrib.rnn.LSTMStateTuple(
+ final_new_state[0], final_new_state[1])
+
+ return output, tf.concat([final_new_state[0], final_new_state[1]], 1)
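In the training branch above, each state unit retains its previous value with probability `zoneout_drop_prob` and otherwise takes the newly computed value. A small NumPy sketch of that update (the function name is illustrative):

```python
import numpy as np

def zoneout_update(old_state, new_state, zoneout_prob, rng=np.random):
    # floor(uniform + (1 - zoneout_prob)) is 1 with probability 1 - zoneout_prob,
    # so a unit takes the new value with that probability and keeps the old
    # value otherwise -- mirroring the training branch of ZoneoutWrapper.
    take_new = np.floor(rng.uniform(size=old_state.shape) + (1 - zoneout_prob))
    return (new_state - old_state) * take_new + old_state

old = np.zeros(8)
new = np.ones(8)
print(zoneout_update(old, new, zoneout_prob=0.25))  # roughly 75% ones, 25% zeros
```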
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/sample_shuffler.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/sample_shuffler.py"
new file mode 100644
index 00000000..58c31fb5
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/sample_shuffler.py"
@@ -0,0 +1,95 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Shuffle samples for human evaluation.
+
+Local launch command:
+ python sample_shuffler.py
+ --input_ml_path=/tmp/ptb/seq2seq_vd_shareemb_forreal_55_3
+ --input_gan_path=/tmp/ptb/MaskGAN_PTB_ari_avg_56.29_v2.0.0
+ --output_file_name=/tmp/ptb/shuffled_output.txt
+
+ python sample_shuffler.py
+ --input_ml_path=/tmp/generate_samples/MaskGAN_IMDB_Benchmark_87.1_v0.3.0
+ --input_gan_path=/tmp/generate_samples/MaskGAN_IMDB_v1.0.1
+ --output_file_name=/tmp/imdb/shuffled_output.txt
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+# Dependency imports
+import numpy as np
+
+import tensorflow as tf
+
+tf.app.flags.DEFINE_string('input_ml_path', '/tmp', 'Model output directory.')
+tf.app.flags.DEFINE_string('input_gan_path', '/tmp', 'Model output directory.')
+tf.app.flags.DEFINE_string('output_file_name', '/tmp/ptb/shuffled_output.txt',
+ 'Model output file.')
+tf.app.flags.DEFINE_boolean(
+ 'output_masked_logs', False,
+ 'Whether to display for human evaluation (show masking).')
+tf.app.flags.DEFINE_integer('number_epochs', 1,
+ 'The number of epochs to produce.')
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def shuffle_samples(input_file_1, input_file_2):
+ """Shuffle the examples."""
+ shuffled = []
+
+ # Set a random seed so the shuffled order is reproducible.
+ np.random.seed(0)
+
+ for line_1, line_2 in zip(input_file_1, input_file_2):
+ rand = np.random.randint(1, 3)
+ if rand == 1:
+ shuffled.append((rand, line_1, line_2))
+ else:
+ shuffled.append((rand, line_2, line_1))
+ input_file_1.close()
+ input_file_2.close()
+ return shuffled
+
+
+def generate_output(shuffled_tuples, output_file_name):
+ output_file = tf.gfile.GFile(output_file_name, mode='w')
+
+ for tup in shuffled_tuples:
+ formatted_tuple = ('\n{:<1}, {:<1}, {:<1}').format(tup[0], tup[1].rstrip(),
+ tup[2].rstrip())
+ output_file.write(formatted_tuple)
+ output_file.close()
+
+
+def main(_):
+ ml_samples_file = tf.gfile.GFile(
+ os.path.join(FLAGS.input_ml_path, 'reviews.txt'), mode='r')
+ gan_samples_file = tf.gfile.GFile(
+ os.path.join(FLAGS.input_gan_path, 'reviews.txt'), mode='r')
+
+ # Generate shuffled tuples.
+ shuffled_tuples = shuffle_samples(ml_samples_file, gan_samples_file)
+
+ # Output to file.
+ generate_output(shuffled_tuples, FLAGS.output_file_name)
+
+
+if __name__ == '__main__':
+ tf.app.run()
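The shuffling logic above, restated for two in-memory lists so it can be run without model output files (the sample strings are made up):

```python
import numpy as np

np.random.seed(0)  # fixed seed so the column assignment is reproducible
ml_lines = ['ml sample A', 'ml sample B', 'ml sample C']
gan_lines = ['gan sample A', 'gan sample B', 'gan sample C']

shuffled = []
for line_1, line_2 in zip(ml_lines, gan_lines):
    rand = np.random.randint(1, 3)  # 1 or 2, recorded so answers can be unblinded later
    if rand == 1:
        shuffled.append((rand, line_1, line_2))
    else:
        shuffled.append((rand, line_2, line_1))

for tup in shuffled:
    print('{:<1}, {:<1}, {:<1}'.format(tup[0], tup[1], tup[2]))
```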
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/train_mask_gan.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/train_mask_gan.py"
new file mode 100644
index 00000000..1e70c228
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/maskgan/train_mask_gan.py"
@@ -0,0 +1,1167 @@
+# Copyright 2017 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Launch example:
+
+[IMDB]
+python train_mask_gan.py --data_dir
+/tmp/imdb --data_set imdb --batch_size 128
+--sequence_length 20 --base_directory /tmp/maskGAN_v0.01
+--hparams="gen_rnn_size=650,gen_num_layers=2,dis_rnn_size=650,dis_num_layers=2
+,critic_learning_rate=0.0009756,dis_learning_rate=0.0000585,
+dis_train_iterations=8,gen_learning_rate=0.0016624,
+gen_full_learning_rate_steps=1e9,gen_learning_rate_decay=0.999999,
+rl_discount_rate=0.8835659" --mode TRAIN --max_steps 1000000
+--generator_model seq2seq_vd --discriminator_model seq2seq_vd
+--is_present_rate 0.5 --summaries_every 25 --print_every 25
+ --max_num_to_print=3 --generator_optimizer=adam
+ --seq2seq_share_embedding=True --baseline_method=critic
+ --attention_option=luong --n_gram_eval=4 --mask_strategy=contiguous
+ --gen_training_strategy=reinforce --dis_pretrain_steps=100
+ --perplexity_threshold=1000000
+ --dis_share_embedding=True --maskgan_ckpt
+ /tmp/model.ckpt-171091
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import collections
+
+from functools import partial
+import os
+import time
+# Dependency imports
+
+import numpy as np
+from six.moves import xrange
+import tensorflow as tf
+
+import pretrain_mask_gan
+from data import imdb_loader
+from data import ptb_loader
+from model_utils import helper
+from model_utils import model_construction
+from model_utils import model_losses
+from model_utils import model_optimization
+
+# Data.
+from model_utils import model_utils
+
+from model_utils import n_gram
+from models import evaluation_utils
+
+from models import rollout
+
+np.set_printoptions(precision=3)
+np.set_printoptions(suppress=True)
+
+MODE_TRAIN = 'TRAIN'
+MODE_TRAIN_EVAL = 'TRAIN_EVAL'
+MODE_VALIDATION = 'VALIDATION'
+MODE_TEST = 'TEST'
+
+## Binary and setup FLAGS.
+tf.app.flags.DEFINE_enum(
+ 'mode', 'TRAIN', [MODE_TRAIN, MODE_VALIDATION, MODE_TEST, MODE_TRAIN_EVAL],
+ 'What this binary will do.')
+tf.app.flags.DEFINE_string('master', '',
+ """Name of the TensorFlow master to use.""")
+tf.app.flags.DEFINE_string('eval_master', '',
+ """Name prefix of the Tensorflow eval master.""")
+tf.app.flags.DEFINE_integer('task', 0,
+ """Task id of the replica running the training.""")
+tf.app.flags.DEFINE_integer('ps_tasks', 0, """Number of tasks in the ps job.
+ If 0 no ps job is used.""")
+
+## General FLAGS.
+tf.app.flags.DEFINE_string(
+ 'hparams', '', 'Comma separated list of name=value hyperparameter pairs.')
+tf.app.flags.DEFINE_integer('batch_size', 20, 'The batch size.')
+tf.app.flags.DEFINE_integer('vocab_size', 10000, 'The vocabulary size.')
+tf.app.flags.DEFINE_integer('sequence_length', 20, 'The sequence length.')
+tf.app.flags.DEFINE_integer('max_steps', 1000000,
+ 'Maximum number of steps to run.')
+tf.app.flags.DEFINE_string(
+ 'mask_strategy', 'random', 'Strategy for masking the words. Determines the '
+ 'characteristics of how the words are dropped out. One of '
+ "['contiguous', 'random'].")
+tf.app.flags.DEFINE_float('is_present_rate', 0.5,
+ 'Percent of tokens present in the forward sequence.')
+tf.app.flags.DEFINE_float('is_present_rate_decay', None, 'Decay rate for the '
+ 'percent of words that are real (are present).')
+tf.app.flags.DEFINE_string(
+ 'generator_model', 'seq2seq',
+ "Type of Generator model. One of ['rnn', 'seq2seq', 'seq2seq_zaremba',"
+ "'rnn_zaremba', 'rnn_nas', 'seq2seq_nas']")
+tf.app.flags.DEFINE_string(
+ 'attention_option', None,
+ "Attention mechanism. One of [None, 'luong', 'bahdanau']")
+tf.app.flags.DEFINE_string(
+ 'discriminator_model', 'bidirectional',
+ "Type of Discriminator model. One of ['cnn', 'rnn', 'bidirectional', "
+ "'rnn_zaremba', 'bidirectional_zaremba', 'rnn_nas', 'rnn_vd', 'seq2seq_vd']"
+)
+tf.app.flags.DEFINE_boolean('seq2seq_share_embedding', False,
+ 'Whether to share the '
+ 'embeddings between the encoder and decoder.')
+tf.app.flags.DEFINE_boolean(
+ 'dis_share_embedding', False, 'Whether to share the '
+ 'embeddings between the generator and discriminator.')
+tf.app.flags.DEFINE_boolean('dis_update_share_embedding', False, 'Whether the '
+ 'discriminator should update the shared embedding.')
+tf.app.flags.DEFINE_boolean('use_gen_mode', False,
+ 'Use the mode of the generator '
+ 'to produce samples.')
+tf.app.flags.DEFINE_boolean('critic_update_dis_vars', False,
+ 'Whether the critic '
+ 'updates the discriminator variables.')
+
+## Training FLAGS.
+tf.app.flags.DEFINE_string(
+ 'gen_training_strategy', 'reinforce',
+ "Method for training the Generator. One of ['cross_entropy', 'reinforce']")
+tf.app.flags.DEFINE_string(
+ 'generator_optimizer', 'adam',
+ "Type of Generator optimizer. One of ['sgd', 'adam']")
+tf.app.flags.DEFINE_float('grad_clipping', 10., 'Norm for gradient clipping.')
+tf.app.flags.DEFINE_float('advantage_clipping', 5., 'Clipping for advantages.')
+tf.app.flags.DEFINE_string(
+ 'baseline_method', None,
+ "Approach for baseline. One of ['critic', 'dis_batch', 'ema', None]")
+tf.app.flags.DEFINE_float('perplexity_threshold', 15000,
+ 'Limit for perplexity before terminating job.')
+tf.app.flags.DEFINE_float('zoneout_drop_prob', 0.1,
+ 'Probability for dropping parameter for zoneout.')
+tf.app.flags.DEFINE_float('keep_prob', 0.5,
+ 'Probability for keeping parameter for dropout.')
+
+## Logging and evaluation FLAGS.
+tf.app.flags.DEFINE_integer('print_every', 250,
+ 'Frequency to print and log the '
+ 'outputs of the model.')
+tf.app.flags.DEFINE_integer('max_num_to_print', 5,
+ 'Number of samples to log/print.')
+tf.app.flags.DEFINE_boolean('print_verbose', False, 'Whether to print in full.')
+tf.app.flags.DEFINE_integer('summaries_every', 100,
+ 'Frequency to compute summaries.')
+tf.app.flags.DEFINE_boolean('eval_language_model', False,
+ 'Whether to evaluate on '
+ 'all words as in language modeling.')
+tf.app.flags.DEFINE_float('eval_interval_secs', 60,
+ 'Delay for evaluating model.')
+tf.app.flags.DEFINE_integer(
+ 'n_gram_eval', 4, """The degree of the n-grams to use for evaluation.""")
+tf.app.flags.DEFINE_integer(
+ 'epoch_size_override', None,
+ 'If an integer, this dictates the size of the epochs and will potentially '
+ 'not iterate over all the data.')
+tf.app.flags.DEFINE_integer('eval_epoch_size_override', None,
+ 'Number of evaluation steps.')
+
+## Directories and checkpoints.
+tf.app.flags.DEFINE_string('base_directory', '/tmp/maskGAN_v0.00',
+ 'Base directory for the logging, events and graph.')
+tf.app.flags.DEFINE_string('data_set', 'ptb', 'Data set to operate on. One of '
+ "['ptb', 'imdb']")
+tf.app.flags.DEFINE_string('data_dir', '/tmp/data/ptb',
+ 'Directory for the training data.')
+tf.app.flags.DEFINE_string(
+ 'language_model_ckpt_dir', None,
+ 'Directory storing checkpoints to initialize the model. Pretrained models '
+ 'are stored at /tmp/maskGAN/pretrained/')
+tf.app.flags.DEFINE_string(
+ 'language_model_ckpt_dir_reversed', None,
+ 'Directory storing checkpoints of reversed models to initialize the model. '
+ 'Pretrained reversed models are stored at /tmp/PTB/pretrained_reversed')
+tf.app.flags.DEFINE_string(
+ 'maskgan_ckpt', None,
+ 'Override which checkpoint file to use to restore the '
+ 'model. A pretrained seq2seq_zaremba model is stored at '
+ '/tmp/maskGAN/pretrain/seq2seq_zaremba/train/model.ckpt-64912')
+
+tf.app.flags.DEFINE_boolean('wasserstein_objective', False,
+ '(DEPRECATED) Whether to use the WGAN training.')
+tf.app.flags.DEFINE_integer('num_rollouts', 1,
+ 'The number of rolled out predictions to make.')
+tf.app.flags.DEFINE_float('c_lower', -0.01, 'Lower bound for weights.')
+tf.app.flags.DEFINE_float('c_upper', 0.01, 'Upper bound for weights.')
+
+FLAGS = tf.app.flags.FLAGS
+
+
+def create_hparams():
+ """Create the hparams object for generic training hyperparameters."""
+ hparams = tf.contrib.training.HParams(
+ gen_num_layers=2,
+ dis_num_layers=2,
+ gen_rnn_size=740,
+ dis_rnn_size=740,
+ gen_learning_rate=5e-4,
+ dis_learning_rate=5e-3,
+ critic_learning_rate=5e-3,
+ dis_train_iterations=1,
+ gen_learning_rate_decay=1.0,
+ gen_full_learning_rate_steps=1e7,
+ baseline_decay=0.999999,
+ rl_discount_rate=0.9,
+ gen_vd_keep_prob=0.5,
+ dis_vd_keep_prob=0.5,
+ dis_pretrain_learning_rate=5e-3,
+ dis_num_filters=128,
+ dis_hidden_dim=128,
+ gen_nas_keep_prob_0=0.85,
+ gen_nas_keep_prob_1=0.55,
+ dis_nas_keep_prob_0=0.85,
+ dis_nas_keep_prob_1=0.55)
+ # Command line flags override any of the preceding hyperparameter values.
+ if FLAGS.hparams:
+ hparams = hparams.parse(FLAGS.hparams)
+ return hparams
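+
+# For example, the launch command in this file's docstring passes
+# --hparams="gen_rnn_size=650,gen_num_layers=2,..." which overrides only the
+# named fields; every hyperparameter not mentioned keeps the default above.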
+
+
+def create_MaskGAN(hparams, is_training):
+ """Create the MaskGAN model.
+
+ Args:
+ hparams: Hyperparameters for the MaskGAN.
+ is_training: Boolean indicating operational mode (train/inference); when
+ False, the model is evaluated with a teacher-forcing regime.
+
+ Returns:
+ model: Namedtuple for specifying the MaskGAN.
+ """
+ global_step = tf.Variable(0, name='global_step', trainable=False)
+
+ new_learning_rate = tf.placeholder(tf.float32, [], name='new_learning_rate')
+ learning_rate = tf.Variable(0.0, name='learning_rate', trainable=False)
+ learning_rate_update = tf.assign(learning_rate, new_learning_rate)
+
+ new_rate = tf.placeholder(tf.float32, [], name='new_rate')
+ percent_real_var = tf.Variable(0.0, trainable=False)
+ percent_real_update = tf.assign(percent_real_var, new_rate)
+
+ ## Placeholders.
+ inputs = tf.placeholder(
+ tf.int32, shape=[FLAGS.batch_size, FLAGS.sequence_length])
+ targets = tf.placeholder(
+ tf.int32, shape=[FLAGS.batch_size, FLAGS.sequence_length])
+ present = tf.placeholder(
+ tf.bool, shape=[FLAGS.batch_size, FLAGS.sequence_length])
+ # TODO(adai): Placeholder for IMDB label.
+
+ ## Real Sequence is the targets.
+ real_sequence = targets
+
+ ## Fake Sequence from the Generator.
+ # TODO(adai): Generator must have IMDB labels placeholder.
+ (fake_sequence, fake_logits, fake_log_probs, fake_gen_initial_state,
+ fake_gen_final_state, _) = model_construction.create_generator(
+ hparams,
+ inputs,
+ targets,
+ present,
+ is_training=is_training,
+ is_validating=False)
+ (_, eval_logits, _, eval_initial_state, eval_final_state,
+ _) = model_construction.create_generator(
+ hparams,
+ inputs,
+ targets,
+ present,
+ is_training=False,
+ is_validating=True,
+ reuse=True)
+
+ ## Discriminator.
+ fake_predictions = model_construction.create_discriminator(
+ hparams,
+ fake_sequence,
+ is_training=is_training,
+ inputs=inputs,
+ present=present)
+ real_predictions = model_construction.create_discriminator(
+ hparams,
+ real_sequence,
+ is_training=is_training,
+ reuse=True,
+ inputs=inputs,
+ present=present)
+
+ ## Critic.
+ # The critic will be used to estimate the forward rewards to the Generator.
+ if FLAGS.baseline_method == 'critic':
+ est_state_values = model_construction.create_critic(
+ hparams, fake_sequence, is_training=is_training)
+ else:
+ est_state_values = None
+
+ ## Discriminator Loss.
+ [dis_loss, dis_loss_fake, dis_loss_real] = model_losses.create_dis_loss(
+ fake_predictions, real_predictions, present)
+
+ ## Average log-perplexity for only the missing words. To compute this, the
+ # logits are still produced with teacher forcing, i.e., the ground-truth
+ # tokens are fed in at each time step.
+ avg_log_perplexity = model_losses.calculate_log_perplexity(
+ eval_logits, targets, present)
+
+ ## Generator Objective.
+ # 1. Cross Entropy losses on missing tokens.
+ fake_cross_entropy_losses = model_losses.create_masked_cross_entropy_loss(
+ targets, present, fake_logits)
+
+ # 2. GAN REINFORCE losses.
+ [
+ fake_RL_loss, fake_log_probs, fake_rewards, fake_advantages,
+ fake_baselines, fake_averages_op, critic_loss, cumulative_rewards
+ ] = model_losses.calculate_reinforce_objective(
+ hparams, fake_log_probs, fake_predictions, present, est_state_values)
+
+ ## Pre-training.
+ if FLAGS.gen_pretrain_steps:
+ raise NotImplementedError
+ # # TODO(liamfedus): Rewrite this.
+ # fwd_cross_entropy_loss = tf.reduce_mean(fwd_cross_entropy_losses)
+ # gen_pretrain_op = model_optimization.create_gen_pretrain_op(
+ # hparams, fwd_cross_entropy_loss, global_step)
+ else:
+ gen_pretrain_op = None
+ if FLAGS.dis_pretrain_steps:
+ dis_pretrain_op = model_optimization.create_dis_pretrain_op(
+ hparams, dis_loss, global_step)
+ else:
+ dis_pretrain_op = None
+
+ ## Generator Train Op.
+ # 1. Cross-Entropy.
+ if FLAGS.gen_training_strategy == 'cross_entropy':
+ gen_loss = tf.reduce_mean(fake_cross_entropy_losses)
+ [gen_train_op, gen_grads,
+ gen_vars] = model_optimization.create_gen_train_op(
+ hparams, learning_rate, gen_loss, global_step, mode='MINIMIZE')
+
+ # 2. GAN (REINFORCE)
+ elif FLAGS.gen_training_strategy == 'reinforce':
+ gen_loss = fake_RL_loss
+ [gen_train_op, gen_grads,
+ gen_vars] = model_optimization.create_reinforce_gen_train_op(
+ hparams, learning_rate, gen_loss, fake_averages_op, global_step)
+
+ else:
+ raise NotImplementedError
+
+ ## Discriminator Train Op.
+ dis_train_op, dis_grads, dis_vars = model_optimization.create_dis_train_op(
+ hparams, dis_loss, global_step)
+
+ ## Critic Train Op.
+ if critic_loss is not None:
+ [critic_train_op, _, _] = model_optimization.create_critic_train_op(
+ hparams, critic_loss, global_step)
+ dis_train_op = tf.group(dis_train_op, critic_train_op)
+
+ ## Summaries.
+ with tf.name_scope('general'):
+ tf.summary.scalar('percent_real', percent_real_var)
+ tf.summary.scalar('learning_rate', learning_rate)
+
+ with tf.name_scope('generator_objectives'):
+ tf.summary.scalar('gen_objective', tf.reduce_mean(gen_loss))
+ tf.summary.scalar('gen_loss_cross_entropy',
+ tf.reduce_mean(fake_cross_entropy_losses))
+
+ with tf.name_scope('REINFORCE'):
+ with tf.name_scope('objective'):
+ tf.summary.scalar('fake_RL_loss', tf.reduce_mean(fake_RL_loss))
+
+ with tf.name_scope('rewards'):
+ helper.variable_summaries(cumulative_rewards, 'rewards')
+
+ with tf.name_scope('advantages'):
+ helper.variable_summaries(fake_advantages, 'advantages')
+
+ with tf.name_scope('baselines'):
+ helper.variable_summaries(fake_baselines, 'baselines')
+
+ with tf.name_scope('log_probs'):
+ helper.variable_summaries(fake_log_probs, 'log_probs')
+
+ with tf.name_scope('discriminator_losses'):
+ tf.summary.scalar('dis_loss', dis_loss)
+ tf.summary.scalar('dis_loss_fake_sequence', dis_loss_fake)
+ tf.summary.scalar('dis_loss_prob_fake_sequence', tf.exp(-dis_loss_fake))
+ tf.summary.scalar('dis_loss_real_sequence', dis_loss_real)
+ tf.summary.scalar('dis_loss_prob_real_sequence', tf.exp(-dis_loss_real))
+
+ if critic_loss is not None:
+ with tf.name_scope('critic_losses'):
+ tf.summary.scalar('critic_loss', critic_loss)
+
+ with tf.name_scope('logits'):
+ helper.variable_summaries(fake_logits, 'fake_logits')
+
+ for v, g in zip(gen_vars, gen_grads):
+ helper.variable_summaries(v, v.op.name)
+ helper.variable_summaries(g, 'grad/' + v.op.name)
+
+ for v, g in zip(dis_vars, dis_grads):
+ helper.variable_summaries(v, v.op.name)
+ helper.variable_summaries(g, 'grad/' + v.op.name)
+
+ merge_summaries_op = tf.summary.merge_all()
+ text_summary_placeholder = tf.placeholder(tf.string)
+ text_summary_op = tf.summary.text('Samples', text_summary_placeholder)
+
+ # Model saver.
+ saver = tf.train.Saver(keep_checkpoint_every_n_hours=1, max_to_keep=5)
+
+ # Named tuple that captures elements of the MaskGAN model.
+ Model = collections.namedtuple('Model', [
+ 'inputs', 'targets', 'present', 'percent_real_update', 'new_rate',
+ 'fake_sequence', 'fake_logits', 'fake_rewards', 'fake_baselines',
+ 'fake_advantages', 'fake_log_probs', 'fake_predictions',
+ 'real_predictions', 'fake_cross_entropy_losses', 'fake_gen_initial_state',
+ 'fake_gen_final_state', 'eval_initial_state', 'eval_final_state',
+ 'avg_log_perplexity', 'dis_loss', 'gen_loss', 'critic_loss',
+ 'cumulative_rewards', 'dis_train_op', 'gen_train_op', 'gen_pretrain_op',
+ 'dis_pretrain_op', 'merge_summaries_op', 'global_step',
+ 'new_learning_rate', 'learning_rate_update', 'saver', 'text_summary_op',
+ 'text_summary_placeholder'
+ ])
+
+ model = Model(
+ inputs, targets, present, percent_real_update, new_rate, fake_sequence,
+ fake_logits, fake_rewards, fake_baselines, fake_advantages,
+ fake_log_probs, fake_predictions, real_predictions,
+ fake_cross_entropy_losses, fake_gen_initial_state, fake_gen_final_state,
+ eval_initial_state, eval_final_state, avg_log_perplexity, dis_loss,
+ gen_loss, critic_loss, cumulative_rewards, dis_train_op, gen_train_op,
+ gen_pretrain_op, dis_pretrain_op, merge_summaries_op, global_step,
+ new_learning_rate, learning_rate_update, saver, text_summary_op,
+ text_summary_placeholder)
+ return model
+
+
+def compute_geometric_average(percent_captured):
+ """Compute the geometric average of the n-gram metrics."""
+
+ res = 1.
+ for _, n_gram_percent in percent_captured.iteritems():
+ res *= n_gram_percent
+
+ return np.power(res, 1. / float(len(percent_captured)))
+
+
+def compute_arithmetic_average(percent_captured):
+ """Compute the arithmetic average of the n-gram metrics."""
+ N = len(percent_captured)
+
+ res = 0.
+ for _, n_gram_percent in percent_captured.iteritems():
+ res += n_gram_percent
+
+ return res / float(N)
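+
+# For example, with percent_captured = {'2': 0.8, '3': 0.4, '4': 0.2} the
+# geometric average is (0.8 * 0.4 * 0.2) ** (1. / 3) ~= 0.40 while the
+# arithmetic average is (0.8 + 0.4 + 0.2) / 3 ~= 0.47.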
+
+
+def get_iterator(data):
+ """Return the data iterator."""
+ if FLAGS.data_set == 'ptb':
+ iterator = ptb_loader.ptb_iterator(data, FLAGS.batch_size,
+ FLAGS.sequence_length,
+ FLAGS.epoch_size_override)
+ elif FLAGS.data_set == 'imdb':
+ iterator = imdb_loader.imdb_iterator(data, FLAGS.batch_size,
+ FLAGS.sequence_length)
+ return iterator
+
+
+def train_model(hparams, data, log_dir, log, id_to_word, data_ngram_counts):
+ """Train model.
+
+ Args:
+ hparams: Hyperparameters for the MaskGAN.
+ data: Data to evaluate.
+ log_dir: Directory to save checkpoints.
+ log: Readable log for the experiment.
+ id_to_word: Dictionary of indices to words.
+ data_ngram_counts: Dictionary of hashed(n-gram tuples) to counts in the
+ data_set.
+ """
+ print('Training model.')
+ tf.logging.info('Training model.')
+
+ # Boolean indicating operational mode.
+ is_training = True
+
+ # Write all the information to the logs.
+ log.write('hparams\n')
+ log.write(str(hparams))
+ log.flush()
+
+ is_chief = FLAGS.task == 0
+
+ with tf.Graph().as_default():
+ with tf.device(tf.train.replica_device_setter(FLAGS.ps_tasks)):
+ container_name = ''
+ with tf.container(container_name):
+ # Construct the model.
+ if FLAGS.num_rollouts == 1:
+ model = create_MaskGAN(hparams, is_training)
+ elif FLAGS.num_rollouts > 1:
+ model = rollout.create_rollout_MaskGAN(hparams, is_training)
+ else:
+ raise ValueError
+
+ print('\nTrainable Variables in Graph:')
+ for v in tf.trainable_variables():
+ print(v)
+
+ ## Retrieve the initial savers.
+ init_savers = model_utils.retrieve_init_savers(hparams)
+
+ ## Initial saver function to supervisor.
+ init_fn = partial(model_utils.init_fn, init_savers)
+
+ # Create the supervisor. It will take care of initialization,
+ # summaries, checkpoints, and recovery.
+ sv = tf.train.Supervisor(
+ logdir=log_dir,
+ is_chief=is_chief,
+ saver=model.saver,
+ global_step=model.global_step,
+ save_model_secs=60,
+ recovery_wait_secs=30,
+ summary_op=None,
+ init_fn=init_fn)
+
+ # Get an initialized, and possibly recovered session. Launch the
+ # services: Checkpointing, Summaries, step counting.
+ #
+ # When multiple replicas of this program are running the services are
+ # only launched by the 'chief' replica.
+ with sv.managed_session(FLAGS.master) as sess:
+
+ ## Pretrain the generator.
+ if FLAGS.gen_pretrain_steps:
+ pretrain_mask_gan.pretrain_generator(sv, sess, model, data, log,
+ id_to_word, data_ngram_counts,
+ is_chief)
+
+ ## Pretrain the discriminator.
+ if FLAGS.dis_pretrain_steps:
+ pretrain_mask_gan.pretrain_discriminator(
+ sv, sess, model, data, log, id_to_word, data_ngram_counts,
+ is_chief)
+
+ # Initial indicators for printing and summarizing.
+ print_step_division = -1
+ summary_step_division = -1
+
+ # Run iterative computation in a loop.
+ while not sv.ShouldStop():
+ is_present_rate = FLAGS.is_present_rate
+
+ if FLAGS.is_present_rate_decay is not None:
+ is_present_rate *= (1. - FLAGS.is_present_rate_decay)
+
+ model_utils.assign_percent_real(sess, model.percent_real_update,
+ model.new_rate, is_present_rate)
+
+ # GAN training.
+ avg_epoch_gen_loss, avg_epoch_dis_loss = [], []
+ cumulative_costs = 0.
+ gen_iters = 0
+
+ # Generator and Discriminator statefulness initial evaluation.
+ # TODO(liamfedus): Throughout the code I am implicitly assuming
+ # that the Generator and Discriminator are equal sized.
+ [gen_initial_state_eval, fake_gen_initial_state_eval] = sess.run(
+ [model.eval_initial_state, model.fake_gen_initial_state])
+ dis_initial_state_eval = fake_gen_initial_state_eval
+
+ # Save zeros state to reset later.
+ zeros_state = fake_gen_initial_state_eval
+
+ ## Offset Discriminator.
+ if FLAGS.ps_tasks == 0:
+ dis_offset = 1
+ else:
+ dis_offset = FLAGS.task * 1000 + 1
+ dis_iterator = get_iterator(data)
+
+ for i in range(dis_offset):
+ try:
+ dis_x, dis_y, _ = next(dis_iterator)
+ except StopIteration:
+ dis_iterator = get_iterator(data)
+ dis_initial_state_eval = zeros_state
+ dis_x, dis_y, _ = next(dis_iterator)
+
+ p = model_utils.generate_mask()
+
+ # Construct the train feed.
+ train_feed = {
+ model.inputs: dis_x,
+ model.targets: dis_y,
+ model.present: p
+ }
+
+ if FLAGS.data_set == 'ptb':
+ # Statefulness of the Generator being used for Discriminator.
+ for i, (c, h) in enumerate(model.fake_gen_initial_state):
+ train_feed[c] = dis_initial_state_eval[i].c
+ train_feed[h] = dis_initial_state_eval[i].h
+
+ # Determine the state the Generator would have had, had it run over the
+ # real data. We use this state for the Discriminator.
+ [dis_initial_state_eval] = sess.run(
+ [model.fake_gen_final_state], train_feed)
+
+ ## Training loop.
+ iterator = get_iterator(data)
+ gen_initial_state_eval = zeros_state
+
+ if FLAGS.ps_tasks > 0:
+ gen_offset = FLAGS.task * 1000 + 1
+ for i in range(gen_offset):
+ try:
+ next(iterator)
+ except StopIteration:
+ dis_iterator = get_iterator(data)
+ dis_initial_state_eval = zeros_state
+ next(dis_iterator)
+
+ for x, y, _ in iterator:
+ for _ in xrange(hparams.dis_train_iterations):
+ try:
+ dis_x, dis_y, _ = next(dis_iterator)
+ except StopIteration:
+ dis_iterator = get_iterator(data)
+ dis_initial_state_eval = zeros_state
+ dis_x, dis_y, _ = next(dis_iterator)
+
+ if FLAGS.data_set == 'ptb':
+ [dis_initial_state_eval] = sess.run(
+ [model.fake_gen_initial_state])
+
+ p = model_utils.generate_mask()
+
+ # Construct the train feed.
+ train_feed = {
+ model.inputs: dis_x,
+ model.targets: dis_y,
+ model.present: p
+ }
+
+ # Statefulness for the Discriminator.
+ if FLAGS.data_set == 'ptb':
+ for i, (c, h) in enumerate(model.fake_gen_initial_state):
+ train_feed[c] = dis_initial_state_eval[i].c
+ train_feed[h] = dis_initial_state_eval[i].h
+
+ _, dis_loss_eval, step = sess.run(
+ [model.dis_train_op, model.dis_loss, model.global_step],
+ feed_dict=train_feed)
+
+ # Determine the state the Generator would have had, had it run over the
+ # real data. Use this state for the Discriminator.
+ [dis_initial_state_eval] = sess.run(
+ [model.fake_gen_final_state], train_feed)
+
+ # Randomly mask out tokens.
+ p = model_utils.generate_mask()
+
+ # Construct the train feed.
+ train_feed = {model.inputs: x, model.targets: y, model.present: p}
+
+ # Statefulness for Generator.
+ if FLAGS.data_set == 'ptb':
+ tf.logging.info('Generator is stateful.')
+ print('Generator is stateful.')
+ # Statefulness for *evaluation* Generator.
+ for i, (c, h) in enumerate(model.eval_initial_state):
+ train_feed[c] = gen_initial_state_eval[i].c
+ train_feed[h] = gen_initial_state_eval[i].h
+
+ # Statefulness for Generator.
+ for i, (c, h) in enumerate(model.fake_gen_initial_state):
+ train_feed[c] = fake_gen_initial_state_eval[i].c
+ train_feed[h] = fake_gen_initial_state_eval[i].h
+
+ # Decay the learning rate: it stays at gen_learning_rate for the first
+ # gen_full_learning_rate_steps steps and decays exponentially afterwards.
+ lr_decay = hparams.gen_learning_rate_decay**max(
+ step + 1 - hparams.gen_full_learning_rate_steps, 0.0)
+
+ # Assign learning rate.
+ gen_learning_rate = hparams.gen_learning_rate * lr_decay
+ model_utils.assign_learning_rate(sess, model.learning_rate_update,
+ model.new_learning_rate,
+ gen_learning_rate)
+
+ [_, gen_loss_eval, gen_log_perplexity_eval, step] = sess.run(
+ [
+ model.gen_train_op, model.gen_loss,
+ model.avg_log_perplexity, model.global_step
+ ],
+ feed_dict=train_feed)
+
+ cumulative_costs += gen_log_perplexity_eval
+ gen_iters += 1
+
+ # Determine the state the Generator would have had, had it run over the real data.
+ [gen_initial_state_eval, fake_gen_initial_state_eval] = sess.run(
+ [model.eval_final_state,
+ model.fake_gen_final_state], train_feed)
+
+ avg_epoch_dis_loss.append(dis_loss_eval)
+ avg_epoch_gen_loss.append(gen_loss_eval)
+
+ ## Summaries.
+ # Calculate rolling perplexity.
+ perplexity = np.exp(cumulative_costs / gen_iters)
+
+ if is_chief and (step / FLAGS.summaries_every >
+ summary_step_division):
+ summary_step_division = step / FLAGS.summaries_every
+
+ # Confirm perplexity is not infinite.
+ if (not np.isfinite(perplexity) or
+ perplexity >= FLAGS.perplexity_threshold):
+ print('Training raising FloatingPointError.')
+ raise FloatingPointError(
+ 'Training infinite perplexity: %.3f' % perplexity)
+
+ # Graph summaries.
+ summary_str = sess.run(
+ model.merge_summaries_op, feed_dict=train_feed)
+ sv.SummaryComputed(sess, summary_str)
+
+ # Summary: n-gram
+ avg_percent_captured = {'2': 0., '3': 0., '4': 0.}
+ for n, data_ngram_count in data_ngram_counts.iteritems():
+ batch_percent_captured = evaluation_utils.sequence_ngram_evaluation(
+ sess, model.fake_sequence, log, train_feed,
+ data_ngram_count, int(n))
+ summary_percent_str = tf.Summary(value=[
+ tf.Summary.Value(
+ tag='general/%s-grams_percent_correct' % n,
+ simple_value=batch_percent_captured)
+ ])
+ sv.SummaryComputed(
+ sess, summary_percent_str, global_step=step)
+
+ # Summary: geometric_avg
+ geometric_avg = compute_geometric_average(avg_percent_captured)
+ summary_geometric_avg_str = tf.Summary(value=[
+ tf.Summary.Value(
+ tag='general/geometric_avg', simple_value=geometric_avg)
+ ])
+ sv.SummaryComputed(
+ sess, summary_geometric_avg_str, global_step=step)
+
+ # Summary: arithmetic_avg
+ arithmetic_avg = compute_arithmetic_average(
+ avg_percent_captured)
+ summary_arithmetic_avg_str = tf.Summary(value=[
+ tf.Summary.Value(
+ tag='general/arithmetic_avg',
+ simple_value=arithmetic_avg)
+ ])
+ sv.SummaryComputed(
+ sess, summary_arithmetic_avg_str, global_step=step)
+
+ # Summary: perplexity
+ summary_perplexity_str = tf.Summary(value=[
+ tf.Summary.Value(
+ tag='general/perplexity', simple_value=perplexity)
+ ])
+ sv.SummaryComputed(
+ sess, summary_perplexity_str, global_step=step)
+
+ ## Printing and logging
+ if is_chief and (step / FLAGS.print_every > print_step_division):
+ print_step_division = (step / FLAGS.print_every)
+ print('global_step: %d' % step)
+ print(' perplexity: %.3f' % perplexity)
+ print(' gen_learning_rate: %.6f' % gen_learning_rate)
+ log.write('global_step: %d\n' % step)
+ log.write(' perplexity: %.3f\n' % perplexity)
+ log.write(' gen_learning_rate: %.6f\n' % gen_learning_rate)
+
+ # Average percent captured for each of the n-grams.
+ avg_percent_captured = {'2': 0., '3': 0., '4': 0.}
+ for n, data_ngram_count in data_ngram_counts.iteritems():
+ batch_percent_captured = evaluation_utils.sequence_ngram_evaluation(
+ sess, model.fake_sequence, log, train_feed,
+ data_ngram_count, int(n))
+ avg_percent_captured[n] = batch_percent_captured
+ print(' percent of %s-grams captured: %.3f.' %
+ (n, batch_percent_captured))
+ log.write(' percent of %s-grams captured: %.3f.\n' %
+ (n, batch_percent_captured))
+ geometric_avg = compute_geometric_average(avg_percent_captured)
+ print(' geometric_avg: %.3f.' % geometric_avg)
+ log.write(' geometric_avg: %.3f.\n' % geometric_avg)
+ arithmetic_avg = compute_arithmetic_average(
+ avg_percent_captured)
+ print(' arithmetic_avg: %.3f.' % arithmetic_avg)
+ log.write(' arithmetic_avg: %.3f.\n' % arithmetic_avg)
+
+ evaluation_utils.print_and_log_losses(
+ log, step, is_present_rate, avg_epoch_dis_loss,
+ avg_epoch_gen_loss)
+
+ if FLAGS.gen_training_strategy == 'reinforce':
+ evaluation_utils.generate_RL_logs(sess, model, log,
+ id_to_word, train_feed)
+ else:
+ evaluation_utils.generate_logs(sess, model, log, id_to_word,
+ train_feed)
+ log.flush()
+
+ log.close()
+
+
+def evaluate_once(data, sv, model, sess, train_dir, log, id_to_word,
+ data_ngram_counts, eval_saver):
+ """Evaluate model for a number of steps.
+
+ Args:
+ data: Dataset.
+ sv: Supervisor.
+ model: The GAN model we have just built.
+ sess: A session to use.
+ train_dir: Path to a directory containing checkpoints.
+ log: Readable log for the evaluation run.
+ id_to_word: Dictionary of indices to words.
+ data_ngram_counts: Dictionary of hashed(n-gram tuples) to counts in the
+ data_set.
+ eval_saver: Evaluation saver.
+ """
+ tf.logging.info('Evaluate Once.')
+ # Load the last model checkpoint, or initialize the graph.
+ model_save_path = tf.train.latest_checkpoint(train_dir)
+ if not model_save_path:
+ tf.logging.warning('No checkpoint yet in: %s', train_dir)
+ return
+
+ tf.logging.info('Starting eval of: %s' % model_save_path)
+ tf.logging.info('Only restoring trainable variables.')
+ eval_saver.restore(sess, model_save_path)
+
+ # Run the requested number of evaluation steps
+ avg_epoch_gen_loss, avg_epoch_dis_loss = [], []
+ cumulative_costs = 0.
+
+ # Average percent captured for each of the n-grams.
+ avg_percent_captured = {'2': 0., '3': 0., '4': 0.}
+
+ # Set a random seed to keep fixed mask.
+ np.random.seed(0)
+ gen_iters = 0
+
+ # Generator statefulness over the epoch.
+ # TODO(liamfedus): Check this.
+ [gen_initial_state_eval, fake_gen_initial_state_eval] = sess.run(
+ [model.eval_initial_state, model.fake_gen_initial_state])
+
+ if FLAGS.eval_language_model:
+ is_present_rate = 0.
+ tf.logging.info('Overriding is_present_rate=0. for evaluation.')
+ print('Overriding is_present_rate=0. for evaluation.')
+
+ iterator = get_iterator(data)
+
+ for x, y, _ in iterator:
+ if FLAGS.eval_language_model:
+ is_present_rate = 0.
+ else:
+ is_present_rate = FLAGS.is_present_rate
+ tf.logging.info('Evaluating on is_present_rate=%.3f.' % is_present_rate)
+
+ model_utils.assign_percent_real(sess, model.percent_real_update,
+ model.new_rate, is_present_rate)
+
+ # Randomly mask out tokens.
+ p = model_utils.generate_mask()
+
+ eval_feed = {model.inputs: x, model.targets: y, model.present: p}
+
+ if FLAGS.data_set == 'ptb':
+ # Statefulness for *evaluation* Generator.
+ for i, (c, h) in enumerate(model.eval_initial_state):
+ eval_feed[c] = gen_initial_state_eval[i].c
+ eval_feed[h] = gen_initial_state_eval[i].h
+
+ # Statefulness for the Generator.
+ for i, (c, h) in enumerate(model.fake_gen_initial_state):
+ eval_feed[c] = fake_gen_initial_state_eval[i].c
+ eval_feed[h] = fake_gen_initial_state_eval[i].h
+
+ [
+ gen_log_perplexity_eval, dis_loss_eval, gen_loss_eval,
+ gen_initial_state_eval, fake_gen_initial_state_eval, step
+ ] = sess.run(
+ [
+ model.avg_log_perplexity, model.dis_loss, model.gen_loss,
+ model.eval_final_state, model.fake_gen_final_state,
+ model.global_step
+ ],
+ feed_dict=eval_feed)
+
+ for n, data_ngram_count in data_ngram_counts.iteritems():
+ batch_percent_captured = evaluation_utils.sequence_ngram_evaluation(
+ sess, model.fake_sequence, log, eval_feed, data_ngram_count, int(n))
+ avg_percent_captured[n] += batch_percent_captured
+
+ cumulative_costs += gen_log_perplexity_eval
+
+ avg_epoch_dis_loss.append(dis_loss_eval)
+ avg_epoch_gen_loss.append(gen_loss_eval)
+
+ gen_iters += 1
+
+ # Calculate rolling metrics.
+ perplexity = np.exp(cumulative_costs / gen_iters)
+ for n, _ in avg_percent_captured.iteritems():
+ avg_percent_captured[n] /= gen_iters
+
+ # Confirm perplexity is not infinite.
+ if not np.isfinite(perplexity) or perplexity >= FLAGS.perplexity_threshold:
+ print('Evaluation raising FloatingPointError.')
+ raise FloatingPointError(
+ 'Evaluation infinite perplexity: %.3f' % perplexity)
+
+ ## Printing and logging.
+ evaluation_utils.print_and_log_losses(log, step, is_present_rate,
+ avg_epoch_dis_loss, avg_epoch_gen_loss)
+ print(' perplexity: %.3f' % perplexity)
+ log.write(' perplexity: %.3f\n' % perplexity)
+
+ for n, n_gram_percent in avg_percent_captured.iteritems():
+ n = int(n)
+ print(' percent of %d-grams captured: %.3f.' % (n, n_gram_percent))
+ log.write(' percent of %d-grams captured: %.3f.\n' % (n, n_gram_percent))
+
+ samples = evaluation_utils.generate_logs(sess, model, log, id_to_word,
+ eval_feed)
+
+ ## Summaries.
+ summary_str = sess.run(model.merge_summaries_op, feed_dict=eval_feed)
+ sv.SummaryComputed(sess, summary_str)
+
+ # Summary: text
+ summary_str = sess.run(model.text_summary_op,
+ {model.text_summary_placeholder: '\n\n'.join(samples)})
+ sv.SummaryComputed(sess, summary_str, global_step=step)
+
+ # Summary: n-gram
+ for n, n_gram_percent in avg_percent_captured.iteritems():
+ n = int(n)
+ summary_percent_str = tf.Summary(value=[
+ tf.Summary.Value(
+ tag='general/%d-grams_percent_correct' % n,
+ simple_value=n_gram_percent)
+ ])
+ sv.SummaryComputed(sess, summary_percent_str, global_step=step)
+
+ # Summary: geometric_avg
+ geometric_avg = compute_geometric_average(avg_percent_captured)
+ summary_geometric_avg_str = tf.Summary(value=[
+ tf.Summary.Value(tag='general/geometric_avg', simple_value=geometric_avg)
+ ])
+ sv.SummaryComputed(sess, summary_geometric_avg_str, global_step=step)
+
+ # Summary: arithmetic_avg
+ arithmetic_avg = compute_arithmetic_average(avg_percent_captured)
+ summary_arithmetic_avg_str = tf.Summary(value=[
+ tf.Summary.Value(
+ tag='general/arithmetic_avg', simple_value=arithmetic_avg)
+ ])
+ sv.SummaryComputed(sess, summary_arithmetic_avg_str, global_step=step)
+
+ # Summary: perplexity
+ summary_perplexity_str = tf.Summary(value=[
+ tf.Summary.Value(tag='general/perplexity', simple_value=perplexity)
+ ])
+ sv.SummaryComputed(sess, summary_perplexity_str, global_step=step)
+
+
+def evaluate_model(hparams, data, train_dir, log, id_to_word,
+ data_ngram_counts):
+ """Evaluate MaskGAN model.
+
+ Args:
+ hparams: Hyperparameters for the MaskGAN.
+ data: Data to evaluate.
+    train_dir: Path to a directory containing checkpoints.
+    log: Log file for evaluation results.
+    id_to_word: Dictionary of indices to words.
+ data_ngram_counts: Dictionary of hashed(n-gram tuples) to counts in the
+ data_set.
+ """
+ tf.logging.error('Evaluate model.')
+
+ # Boolean indicating operational mode.
+ is_training = False
+
+ if FLAGS.mode == MODE_VALIDATION:
+ logdir = FLAGS.base_directory + '/validation'
+ elif FLAGS.mode == MODE_TRAIN_EVAL:
+ logdir = FLAGS.base_directory + '/train_eval'
+ elif FLAGS.mode == MODE_TEST:
+ logdir = FLAGS.base_directory + '/test'
+ else:
+ raise NotImplementedError
+
+ # Wait for a checkpoint to exist.
+ print(train_dir)
+ print(tf.train.latest_checkpoint(train_dir))
+ while not tf.train.latest_checkpoint(train_dir):
+ tf.logging.error('Waiting for checkpoint...')
+ print('Waiting for checkpoint...')
+ time.sleep(10)
+
+ with tf.Graph().as_default():
+ # Use a separate container for each trial
+ container_name = ''
+ with tf.container(container_name):
+
+ # Construct the model.
+ if FLAGS.num_rollouts == 1:
+ model = create_MaskGAN(hparams, is_training)
+ elif FLAGS.num_rollouts > 1:
+ model = rollout.create_rollout_MaskGAN(hparams, is_training)
+ else:
+ raise ValueError
+
+ # Create the supervisor. It will take care of initialization, summaries,
+ # checkpoints, and recovery. We only pass the trainable variables
+ # to load since things like baselines keep batch_size which may not
+ # match between training and evaluation.
+ evaluation_variables = tf.trainable_variables()
+ evaluation_variables.append(model.global_step)
+ eval_saver = tf.train.Saver(var_list=evaluation_variables)
+      sv = tf.train.Supervisor(logdir=logdir)
+ sess = sv.PrepareSession(FLAGS.eval_master, start_standard_services=False)
+
+ tf.logging.info('Before sv.Loop.')
+ sv.Loop(FLAGS.eval_interval_secs, evaluate_once,
+ (data, sv, model, sess, train_dir, log, id_to_word,
+ data_ngram_counts, eval_saver))
+
+ sv.WaitForStop()
+ tf.logging.info('sv.Stop().')
+ sv.Stop()
+
+
+def main(_):
+ hparams = create_hparams()
+ train_dir = FLAGS.base_directory + '/train'
+
+ # Load data set.
+ if FLAGS.data_set == 'ptb':
+ raw_data = ptb_loader.ptb_raw_data(FLAGS.data_dir)
+ train_data, valid_data, test_data, _ = raw_data
+ valid_data_flat = valid_data
+ elif FLAGS.data_set == 'imdb':
+ raw_data = imdb_loader.imdb_raw_data(FLAGS.data_dir)
+ # TODO(liamfedus): Get an IMDB test partition.
+ train_data, valid_data = raw_data
+ valid_data_flat = [word for review in valid_data for word in review]
+ else:
+ raise NotImplementedError
+
+ if FLAGS.mode == MODE_TRAIN or FLAGS.mode == MODE_TRAIN_EVAL:
+ data_set = train_data
+ elif FLAGS.mode == MODE_VALIDATION:
+ data_set = valid_data
+ elif FLAGS.mode == MODE_TEST:
+ data_set = test_data
+ else:
+ raise NotImplementedError
+
+  # Dictionary and reverse dictionary.
+ if FLAGS.data_set == 'ptb':
+ word_to_id = ptb_loader.build_vocab(
+ os.path.join(FLAGS.data_dir, 'ptb.train.txt'))
+ elif FLAGS.data_set == 'imdb':
+ word_to_id = imdb_loader.build_vocab(
+ os.path.join(FLAGS.data_dir, 'vocab.txt'))
+ id_to_word = {v: k for k, v in word_to_id.iteritems()}
+
+ # Dictionary of Training Set n-gram counts.
+ bigram_tuples = n_gram.find_all_ngrams(valid_data_flat, n=2)
+ trigram_tuples = n_gram.find_all_ngrams(valid_data_flat, n=3)
+ fourgram_tuples = n_gram.find_all_ngrams(valid_data_flat, n=4)
+
+ bigram_counts = n_gram.construct_ngrams_dict(bigram_tuples)
+ trigram_counts = n_gram.construct_ngrams_dict(trigram_tuples)
+ fourgram_counts = n_gram.construct_ngrams_dict(fourgram_tuples)
+ print('Unique %d-grams: %d' % (2, len(bigram_counts)))
+ print('Unique %d-grams: %d' % (3, len(trigram_counts)))
+ print('Unique %d-grams: %d' % (4, len(fourgram_counts)))
+
+ data_ngram_counts = {
+ '2': bigram_counts,
+ '3': trigram_counts,
+ '4': fourgram_counts
+ }
+
+  # TODO(liamfedus): This was necessary because there was a problem with our
+  # originally trained IMDB models. The EOS_INDEX was off by one, which means
+  # two words were mapping to index 86933. The presence of '' is going
+  # to throw an out-of-vocabulary error.
+ FLAGS.vocab_size = len(id_to_word)
+ print('Vocab size: %d' % FLAGS.vocab_size)
+
+ tf.gfile.MakeDirs(FLAGS.base_directory)
+
+ if FLAGS.mode == MODE_TRAIN:
+ log = tf.gfile.GFile(
+ os.path.join(FLAGS.base_directory, 'train-log.txt'), mode='w')
+ elif FLAGS.mode == MODE_VALIDATION:
+ log = tf.gfile.GFile(
+ os.path.join(FLAGS.base_directory, 'validation-log.txt'), mode='w')
+ elif FLAGS.mode == MODE_TRAIN_EVAL:
+ log = tf.gfile.GFile(
+ os.path.join(FLAGS.base_directory, 'train_eval-log.txt'), mode='w')
+ else:
+ log = tf.gfile.GFile(
+ os.path.join(FLAGS.base_directory, 'test-log.txt'), mode='w')
+
+ if FLAGS.mode == MODE_TRAIN:
+ train_model(hparams, data_set, train_dir, log, id_to_word,
+ data_ngram_counts)
+
+ elif FLAGS.mode == MODE_VALIDATION:
+ evaluate_model(hparams, data_set, train_dir, log, id_to_word,
+ data_ngram_counts)
+ elif FLAGS.mode == MODE_TRAIN_EVAL:
+ evaluate_model(hparams, data_set, train_dir, log, id_to_word,
+ data_ngram_counts)
+
+ elif FLAGS.mode == MODE_TEST:
+ evaluate_model(hparams, data_set, train_dir, log, id_to_word,
+ data_ngram_counts)
+
+ else:
+ raise NotImplementedError
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/README.md"
new file mode 100644
index 00000000..d078c211
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/README.md"
@@ -0,0 +1,138 @@
+# MiniGo
+This is a simplified implementation of MiniGo based on the code provided by the authors: [MiniGo](https://github.com/tensorflow/minigo).
+
+MiniGo is a minimalist Go engine modeled after AlphaGo Zero, ["Mastering the Game of Go without Human
+Knowledge"](https://www.nature.com/articles/nature24270). An useful one-diagram overview of Alphago Zero can be found in the [cheat sheet](https://medium.com/applied-data-science/alphago-zero-explained-in-one-diagram-365f5abf67e0).
+
+The implementation of MiniGo consists of three main components: the DualNet model, the Monte Carlo Tree Search (MCTS), and Go domain knowledge. Currently, the **DualNet model** is our focus.
+
+
+## DualNet Architecture
+DualNet is the neural network used in MiniGo. It is based on residual blocks with two output heads. The following is a brief overview of the DualNet architecture.
+
+### Input Features
+The input to the neural network is a [board_size * board_size * 17] image stack
+comprising 17 binary feature planes. Eight feature planes consist of binary values
+indicating the presence of the current player's stones; a further 8 feature
+planes represent the corresponding features for the opponent's stones; the final
+feature plane represents the color to play, and has a constant value of either 1
+if black is to play or 0 if white is to play. Check [features.py](features.py) for more details.
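+
+For reference, the following sketch (not part of the original code) shows how such
+an input stack can be produced with `features.extract_features` from
+[features.py](features.py); it assumes `go.Position` takes the board size as its
+first argument, as in [dualnet_test.py](dualnet_test.py).
+```
+import features  # from this directory
+import go        # from this directory
+
+position = go.Position(9)                       # an empty 9x9 board position
+stack = features.extract_features(9, position)  # 16 stone planes + 1 color plane
+print(stack.shape)                              # expected: (9, 9, 17)
+```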
+
+### Neural Network Structure
+In the MiniGo implementation, the input features are processed by a residual tower
+that consists of a single convolutional block followed by either 9 or 19
+residual blocks.
+The convolutional block applies the following modules:
+ 1. A convolution of num_filter filters of kernel size 3 x 3 with stride 1
+ 2. Batch normalization
+ 3. A rectifier non-linearity
+
+Each residual block applies the following modules sequentially to its input:
+ 1. A convolution of num_filter filters of kernel size 3 x 3 with stride 1
+ 2. Batch normalization
+ 3. A rectifier non-linearity
+ 4. A convolution of num_filter filters of kernel size 3 x 3 with stride 1
+ 5. Batch normalization
+ 6. A skip connection that adds the input to the block
+ 7. A rectifier non-linearity
+
+Note: num_filter is 128 for the 19 x 19 board size, and 32 for the 9 x 9 board size in the MiniGo implementation.
+
+### Dual Heads Output
+The output of the residual tower is passed into two separate "heads" for
+computing the policy and value respectively. The policy head applies the
+following modules:
+ 1. A convolution of 2 filters of kernel size 1 x 1 with stride 1
+ 2. Batch normalization
+ 3. A rectifier non-linearity
+ 4. A fully connected linear layer that outputs a vector of size (board_size * board_size + 1) corresponding to logit probabilities for all intersections and the pass move
+
+The value head applies the following modules:
+ 1. A convolution of 1 filter of kernel size 1 x 1 with stride 1
+ 2. Batch normalization
+ 3. A rectifier non-linearity
+ 4. A fully connected linear layer to a hidden layer of size 256 for 19 x 19
+ board size and 64 for 9x9 board size
+ 5. A rectifier non-linearity
+ 6. A fully connected linear layer to a scalar
+ 7. A tanh non-linearity outputting a scalar in the range [-1, 1]
+
+In MiniGo, the overall network depth, in the 10 or 20 block network, is 19 or 39
+parameterized layers respectively for the residual tower, plus an additional 2
+layers for the policy head and 3 layers for the value head.
+
+## Getting Started
+This project assumes you have virtualenv, TensorFlow (>= 1.5), and two other Go-related
+packages, pygtp (>= 0.4) and sgf (== 0.5), installed.
+
+## Training Model
+One iteration of reinforcement learning (RL) consists of the following steps (a rough code sketch follows the list):
+ - Bootstrap: initializes a random DualNet model. If the estimator directory already exists, the model is initialized from the last checkpoint.
+ - Selfplay: plays games with the latest model, or with the best model so far as identified by evaluation, producing data used for training.
+ - Gather: groups games played with the same model into larger files of tf.Examples.
+ - Train: trains a new model with the selfplay results from the most recent N generations.
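+
+The following is a rough sketch, not part of the pipeline itself, of how the
+bootstrap/train/validate steps map onto the functions defined in
+[dualnet.py](dualnet.py); `minigo.py` is the real entry point, the selfplay and
+gather helpers are omitted here, and the paths below are hypothetical.
+```
+import dualnet        # from this directory
+import model_params   # from this directory
+
+params = model_params.DummyMiniGoParams()        # small dummy params for illustration
+working_dir = '/tmp/minigo/estimator_model_dir'  # hypothetical estimator directory
+tf_records = ['/tmp/minigo/training_chunks/chunk-0.tfrecord']  # hypothetical selfplay data
+
+dualnet.bootstrap(working_dir, params)             # Bootstrap: random initial weights
+dualnet.train(working_dir, tf_records, 1, params)  # Train: generation 1 on gathered data
+dualnet.validate(working_dir, tf_records, params)  # Validate on holdout data
+dualnet.export_model(working_dir, '/tmp/minigo/trained_models/000001-model')
+```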
+
+To run the RL pipeline, issue the following command:
+ ```
+ python minigo.py --base_dir=$HOME/minigo/ --board_size=9 --batch_size=256
+ ```
+ Arguments:
+ * `--base_dir`: Base directory for MiniGo data and models. If not specified, it's set as /tmp/minigo/ by default.
+ * `--board_size`: Go board size. It can be either 9 or 19. By default, it's 9.
+ * `--batch_size`: Batch size for model training. If not specified, it's calculated based on go board size.
+
+Use the `--help` or `-h` flag to get a full list of possible arguments. Besides these arguments, other parameters for the RL pipeline and the DualNet model can be found and configured in [model_params.py](model_params.py).
+
+Suppose the base directory argument `base_dir` is `$HOME/minigo/` and we use 9 as the `board_size`. After model training, the following directories are created to store models and game data:
+
+ $HOME/minigo # base directory
+ │
+ ├── 9_size # directory for 9x9 board size
+ │ │
+ │ ├── data
+ │ │ ├── holdout # holdout data for model validation
+ │ │ ├── selfplay # data generated by selfplay of each model
+ │ │ └── training_chunks # gatherd tf_examples for model training
+ │ │
+ │ ├── estimator_model_dir # estimator working directory
+ │ │
+ │ ├── trained_models # all the trained models
+ │ │
+ │ └── sgf # sgf (smart go files) folder
+ │ ├── 000000-bootstrap # model name
+ │ │ ├── clean # clean sgf files of model selfplay
+ │ │ └── full # full sgf files of model selfplay
+ │ ├── ...
+ │ └── evaluate # clean sgf files of model evaluation
+ │
+ └── ...
+
+## Validating Model
+To validate the trained model, issue the following command with the `--validation` argument:
+ ```
+ python minigo.py --base_dir=$HOME/minigo/ --board_size=9 --batch_size=256 --validation
+ ```
+
+## Evaluating Models
+The performance of two models is compared in the evaluation step. Given two models, one plays black and the other plays white. They play several games (the number of games can be configured by the parameter `eval_games` in [model_params.py](model_params.py)), and the one that wins by a margin of 55% is declared the winner.
+
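+The margin rule corresponds to a simple count comparison (a sketch mirroring the
+check in [evaluation.py](evaluation.py); the 0.55 default for `eval_win_rate` is
+an assumption about [model_params.py](model_params.py)):
+```
+def pick_winner(black_wins, white_wins, games, eval_win_rate=0.55):
+  # Black wins the match only if its win count exceeds white's by more than
+  # eval_win_rate * games; otherwise white is declared the winner.
+  if (black_wins - white_wins) > eval_win_rate * games:
+    return 'B'
+  return 'W'
+
+print(pick_winner(black_wins=16, white_wins=4, games=20))  # 'B': margin 12 > 11
+print(pick_winner(black_wins=13, white_wins=7, games=20))  # 'W': margin 6 <= 11
+```
+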
+To include the evaluation step in the RL pipeline, specify the `--evaluation` argument to compare the performance of the `current_trained_model` and the `best_model_so_far`. The winner is used to update `best_model_so_far`. Run the following command to include the evaluation step in the pipeline:
+ ```
+ python minigo.py --base_dir=$HOME/minigo/ --board_size=9 --batch_size=256 --evaluation
+ ```
+
+## Testing Pipeline
+As the whole RL pipeline may take hours to run even for a 9x9 board size, a `--test` argument is provided to test the pipeline quickly with a dummy neural network model.
+
+To test the RL pipeline with a dummy model, issue the following command:
+```
+ python minigo.py --base_dir=$HOME/minigo/ --board_size=9 --batch_size=256 --test
+```
+
+## Running Self-play Only
+A self-play-only option is provided to run the selfplay step individually, for example to generate training data in parallel. Issue the following command to run selfplay only with the latest trained model:
+```
+ python minigo.py --selfplay
+```
+Other optional arguments:
+ * `--selfplay_model_name`: The name of the model used for selfplay only. If not specified, the latest trained model will be used for selfplay.
+ * `--selfplay_max_games`: The maximum number of games for selfplay to generate. If not specified, the default parameter `max_games_per_generation` is used.
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/coords.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/coords.py"
new file mode 100644
index 00000000..ca89d56a
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/coords.py"
@@ -0,0 +1,113 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Logic for dealing with coordinates.
+
+This introduces some helpers and terminology that are used throughout MiniGo.
+
+MiniGo Coordinate: This is a tuple of the form (row, column) that is indexed
+ starting out at (0, 0) from the upper-left.
+Flattened Coordinate: This is a number ranging from 0 to N^2 (so N^2+1
+ possible values). The extra value N^2 is used to mark a 'pass' move.
+SGF Coordinate: Coordinate used for SGF serialization format. Coordinates use
+ two-letter pairs having the form (column, row) indexed from the upper-left
+ where 0, 0 = 'aa'.
+KGS Coordinate: Human-readable coordinate string indexed from bottom left, with
+ the first character a capital letter for the column and the second a number
+ from 1-19 for the row. Note that KGS chooses to skip the letter 'I' due to
+ its similarity with 'l' (lowercase 'L').
+PYGTP Coordinate: Tuple coordinate indexed starting at 1,1 from bottom-left
+ in the format (column, row)
+
+So, for a 19x19,
+
+Coord Type upper_left upper_right pass
+-------------------------------------------------------
+minigo coord (0, 0) (0, 18) None
+flat 0 18 361
+SGF 'aa' 'sa' ''
+KGS 'A19' 'T19' 'pass'
+pygtp (1, 19) (19, 19) (0, 0)
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import gtp
+
+# We provide more than 19 entries here in case of boards larger than 19 x 19.
+_SGF_COLUMNS = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
+_KGS_COLUMNS = 'ABCDEFGHJKLMNOPQRSTUVWXYZ'
+
+
+def from_flat(board_size, flat):
+ """Converts from a flattened coordinate to a MiniGo coordinate."""
+ if flat == board_size * board_size:
+ return None
+ return divmod(flat, board_size)
+
+
+def to_flat(board_size, coord):
+ """Converts from a MiniGo coordinate to a flattened coordinate."""
+ if coord is None:
+ return board_size * board_size
+ return board_size * coord[0] + coord[1]
+
+
+def from_sgf(sgfc):
+ """Converts from an SGF coordinate to a MiniGo coordinate."""
+ if not sgfc:
+ return None
+ return _SGF_COLUMNS.index(sgfc[1]), _SGF_COLUMNS.index(sgfc[0])
+
+
+def to_sgf(coord):
+ """Converts from a MiniGo coordinate to an SGF coordinate."""
+ if coord is None:
+ return ''
+ return _SGF_COLUMNS[coord[1]] + _SGF_COLUMNS[coord[0]]
+
+
+def from_kgs(board_size, kgsc):
+ """Converts from a KGS coordinate to a MiniGo coordinate."""
+ if kgsc == 'pass':
+ return None
+ kgsc = kgsc.upper()
+ col = _KGS_COLUMNS.index(kgsc[0])
+ row_from_bottom = int(kgsc[1:])
+ return board_size - row_from_bottom, col
+
+
+def to_kgs(board_size, coord):
+ """Converts from a MiniGo coordinate to a KGS coordinate."""
+ if coord is None:
+ return 'pass'
+ y, x = coord
+ return '{}{}'.format(_KGS_COLUMNS[x], board_size - y)
+
+
+def from_pygtp(board_size, pygtpc):
+ """Converts from a pygtp coordinate to a MiniGo coordinate."""
+ # GTP has a notion of both a Pass and a Resign, both of which are mapped to
+ # None, so the conversion is not precisely bijective.
+ if pygtpc in (gtp.PASS, gtp.RESIGN):
+ return None
+ return board_size - pygtpc[1], pygtpc[0] - 1
+
+
+def to_pygtp(board_size, coord):
+ """Converts from a MiniGo coordinate to a pygtp coordinate."""
+ if coord is None:
+ return gtp.PASS
+ return coord[1] + 1, board_size - coord[0]
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/coords_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/coords_test.py"
new file mode 100644
index 00000000..3bbfd1be
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/coords_test.py"
@@ -0,0 +1,112 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for coords."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf # pylint: disable=g-bad-import-order
+
+import coords
+import numpy
+import utils_test
+
+tf.logging.set_verbosity(tf.logging.ERROR)
+
+
+class TestCoords(utils_test.MiniGoUnitTest):
+
+ def test_upperleft(self):
+ self.assertEqual(coords.from_sgf('aa'), (0, 0))
+ self.assertEqual(coords.from_flat(utils_test.BOARD_SIZE, 0), (0, 0))
+ self.assertEqual(coords.from_kgs(utils_test.BOARD_SIZE, 'A9'), (0, 0))
+ self.assertEqual(coords.from_pygtp(utils_test.BOARD_SIZE, (1, 9)), (0, 0))
+
+ self.assertEqual(coords.to_sgf((0, 0)), 'aa')
+ self.assertEqual(coords.to_flat(utils_test.BOARD_SIZE, (0, 0)), 0)
+ self.assertEqual(coords.to_kgs(utils_test.BOARD_SIZE, (0, 0)), 'A9')
+ self.assertEqual(coords.to_pygtp(utils_test.BOARD_SIZE, (0, 0)), (1, 9))
+
+ def test_topleft(self):
+ self.assertEqual(coords.from_sgf('ia'), (0, 8))
+ self.assertEqual(coords.from_flat(utils_test.BOARD_SIZE, 8), (0, 8))
+ self.assertEqual(coords.from_kgs(utils_test.BOARD_SIZE, 'J9'), (0, 8))
+ self.assertEqual(coords.from_pygtp(utils_test.BOARD_SIZE, (9, 9)), (0, 8))
+
+ self.assertEqual(coords.to_sgf((0, 8)), 'ia')
+ self.assertEqual(coords.to_flat(utils_test.BOARD_SIZE, (0, 8)), 8)
+ self.assertEqual(coords.to_kgs(utils_test.BOARD_SIZE, (0, 8)), 'J9')
+ self.assertEqual(coords.to_pygtp(utils_test.BOARD_SIZE, (0, 8)), (9, 9))
+
+ def test_pass(self):
+ self.assertEqual(coords.from_sgf(''), None)
+ self.assertEqual(coords.from_flat(utils_test.BOARD_SIZE, 81), None)
+ self.assertEqual(coords.from_kgs(utils_test.BOARD_SIZE, 'pass'), None)
+ self.assertEqual(coords.from_pygtp(utils_test.BOARD_SIZE, (0, 0)), None)
+
+ self.assertEqual(coords.to_sgf(None), '')
+ self.assertEqual(coords.to_flat(utils_test.BOARD_SIZE, None), 81)
+ self.assertEqual(coords.to_kgs(utils_test.BOARD_SIZE, None), 'pass')
+ self.assertEqual(coords.to_pygtp(utils_test.BOARD_SIZE, None), (0, 0))
+
+ def test_parsing_9x9(self):
+ self.assertEqual(coords.from_sgf('aa'), (0, 0))
+ self.assertEqual(coords.from_sgf('ac'), (2, 0))
+ self.assertEqual(coords.from_sgf('ca'), (0, 2))
+ self.assertEqual(coords.from_sgf(''), None)
+ self.assertEqual(coords.to_sgf(None), '')
+ self.assertEqual('aa', coords.to_sgf(coords.from_sgf('aa')))
+ self.assertEqual('sa', coords.to_sgf(coords.from_sgf('sa')))
+ self.assertEqual((1, 17), coords.from_sgf(coords.to_sgf((1, 17))))
+ self.assertEqual(coords.from_kgs(utils_test.BOARD_SIZE, 'A1'), (8, 0))
+ self.assertEqual(coords.from_kgs(utils_test.BOARD_SIZE, 'A9'), (0, 0))
+ self.assertEqual(coords.from_kgs(utils_test.BOARD_SIZE, 'C2'), (7, 2))
+ self.assertEqual(coords.from_kgs(utils_test.BOARD_SIZE, 'J2'), (7, 8))
+ self.assertEqual(coords.from_pygtp(utils_test.BOARD_SIZE, (1, 1)), (8, 0))
+ self.assertEqual(coords.from_pygtp(utils_test.BOARD_SIZE, (1, 9)), (0, 0))
+ self.assertEqual(coords.from_pygtp(utils_test.BOARD_SIZE, (3, 2)), (7, 2))
+ self.assertEqual(coords.to_pygtp(utils_test.BOARD_SIZE, (8, 0)), (1, 1))
+ self.assertEqual(coords.to_pygtp(utils_test.BOARD_SIZE, (0, 0)), (1, 9))
+ self.assertEqual(coords.to_pygtp(utils_test.BOARD_SIZE, (7, 2)), (3, 2))
+
+ self.assertEqual(coords.to_kgs(utils_test.BOARD_SIZE, (0, 8)), 'J9')
+ self.assertEqual(coords.to_kgs(utils_test.BOARD_SIZE, (8, 0)), 'A1')
+
+ def test_flatten(self):
+ self.assertEqual(coords.to_flat(utils_test.BOARD_SIZE, (0, 0)), 0)
+ self.assertEqual(coords.to_flat(utils_test.BOARD_SIZE, (0, 3)), 3)
+ self.assertEqual(coords.to_flat(utils_test.BOARD_SIZE, (3, 0)), 27)
+ self.assertEqual(coords.from_flat(utils_test.BOARD_SIZE, 27), (3, 0))
+ self.assertEqual(coords.from_flat(utils_test.BOARD_SIZE, 10), (1, 1))
+ self.assertEqual(coords.from_flat(utils_test.BOARD_SIZE, 80), (8, 8))
+ self.assertEqual(coords.to_flat(
+ utils_test.BOARD_SIZE, coords.from_flat(utils_test.BOARD_SIZE, 10)), 10)
+ self.assertEqual(coords.from_flat(
+ utils_test.BOARD_SIZE, coords.to_flat(
+ utils_test.BOARD_SIZE, (5, 4))), (5, 4))
+
+ def test_from_flat_ndindex_equivalence(self):
+ ndindices = list(numpy.ndindex(
+ utils_test.BOARD_SIZE, utils_test.BOARD_SIZE))
+ flat_coords = list(range(
+ utils_test.BOARD_SIZE * utils_test.BOARD_SIZE))
+ def _from_flat(flat_coords):
+ return coords.from_flat(utils_test.BOARD_SIZE, flat_coords)
+ self.assertEqual(
+ list(map(_from_flat, flat_coords)), ndindices)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/dualnet.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/dualnet.py"
new file mode 100644
index 00000000..a43286ab
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/dualnet.py"
@@ -0,0 +1,235 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Contains utility and supporting functions for DualNet.
+
+This module provides the model interface, including functions for DualNet model
+bootstrap, training, validation, loading and exporting.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+
+import tensorflow as tf # pylint: disable=g-bad-import-order
+
+import dualnet_model
+import features
+import preprocessing
+import symmetries
+
+
+class DualNetRunner(object):
+ """The DualNetRunner class for the complete model with graph and weights.
+
+ This class can restore the model from saved files, and provide inference for
+ given examples.
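+
+  Example usage (a sketch; `save_file` is the path prefix of an exported model
+  and `params` is a hyperparameter object from model_params.py):
+    runner = DualNetRunner(save_file, params)
+    probs, value = runner.run(position)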
+ """
+
+ def __init__(self, save_file, params):
+ """Initialize the dual network from saved model/checkpoints.
+
+ Args:
+ save_file: Path where model parameters were previously saved. For example:
+ '/tmp/minigo/models_dir/000000-bootstrap/'
+ params: An object with hyperparameters for DualNetRunner
+ """
+ self.save_file = save_file
+ self.hparams = params
+ self.inference_input = None
+ self.inference_output = None
+ config = tf.ConfigProto()
+ config.gpu_options.allow_growth = True
+ self.sess = tf.Session(graph=tf.Graph(), config=config)
+ self.initialize_graph()
+
+ def initialize_graph(self):
+ """Initialize the graph with saved model."""
+ with self.sess.graph.as_default():
+ input_features, labels = get_inference_input(self.hparams)
+ estimator_spec = dualnet_model.model_fn(
+ input_features, labels, tf.estimator.ModeKeys.PREDICT, self.hparams)
+ self.inference_input = input_features
+ self.inference_output = estimator_spec.predictions
+ if self.save_file is not None:
+ self.initialize_weights(self.save_file)
+ else:
+ self.sess.run(tf.global_variables_initializer())
+
+ def initialize_weights(self, save_file):
+ """Initialize the weights from the given save_file.
+
+ Assumes that the graph has been constructed, and the save_file contains
+ weights that match the graph. Used to set the weights to a different version
+ of the player without redefining the entire graph.
+
+ Args:
+ save_file: Path where model parameters were previously saved.
+ """
+
+ tf.train.Saver().restore(self.sess, save_file)
+
+ def run(self, position, use_random_symmetry=True):
+ """Compute the policy and value output for a given position.
+
+ Args:
+ position: A given go board status
+ use_random_symmetry: Apply random symmetry (defined in symmetries.py) to
+ the extracted feature (defined in features.py) of the given position
+
+ Returns:
+ prob, value: The policy and value output (defined in dualnet_model.py)
+ """
+ probs, values = self.run_many(
+ [position], use_random_symmetry=use_random_symmetry)
+ return probs[0], values[0]
+
+ def run_many(self, positions, use_random_symmetry=True):
+ """Compute the policy and value output for given positions.
+
+ Args:
+ positions: A list of positions for go board status
+ use_random_symmetry: Apply random symmetry (defined in symmetries.py) to
+ the extracted features (defined in features.py) of the given positions
+
+ Returns:
+ probabilities, value: The policy and value outputs (defined in
+ dualnet_model.py)
+ """
+ def _extract_features(positions):
+ return features.extract_features(self.hparams.board_size, positions)
+ processed = list(map(_extract_features, positions))
+ # processed = [
+ # features.extract_features(self.hparams.board_size, p) for p in positions]
+ if use_random_symmetry:
+ syms_used, processed = symmetries.randomize_symmetries_feat(processed)
+ # feed_dict is a dict object to provide the input examples for the step of
+ # inference. sess.run() returns the inference predictions (indicated by
+ # self.inference_output) of the given input as outputs
+ outputs = self.sess.run(
+ self.inference_output, feed_dict={self.inference_input: processed})
+ probabilities, value = outputs['policy_output'], outputs['value_output']
+ if use_random_symmetry:
+ probabilities = symmetries.invert_symmetries_pi(
+ self.hparams.board_size, syms_used, probabilities)
+ return probabilities, value
+
+
+def get_inference_input(params):
+ """Set up placeholders for input features/labels.
+
+ Args:
+ params: An object to indicate the hyperparameters of the model.
+
+ Returns:
+ The features and output tensors that get passed into model_fn. Check
+ dualnet_model.py for more details on the models input and output.
+ """
+ input_features = tf.placeholder(
+ tf.float32, [None, params.board_size, params.board_size,
+ features.NEW_FEATURES_PLANES],
+ name='pos_tensor')
+
+ labels = {
+ 'pi_tensor': tf.placeholder(
+ tf.float32, [None, params.board_size * params.board_size + 1]),
+ 'value_tensor': tf.placeholder(tf.float32, [None])
+ }
+
+ return input_features, labels
+
+
+def bootstrap(working_dir, params):
+ """Initialize a tf.Estimator run with random initial weights.
+
+ Args:
+ working_dir: The directory where tf.estimator will drop logs,
+ checkpoints, and so on
+ params: hyperparams of the model.
+ """
+ # Forge an initial checkpoint with the name that subsequent Estimator will
+ # expect to find.
+ estimator_initial_checkpoint_name = 'model.ckpt-1'
+ save_file = os.path.join(working_dir,
+ estimator_initial_checkpoint_name)
+ sess = tf.Session()
+ with sess.graph.as_default():
+ input_features, labels = get_inference_input(params)
+ dualnet_model.model_fn(
+ input_features, labels, tf.estimator.ModeKeys.PREDICT, params)
+ sess.run(tf.global_variables_initializer())
+ tf.train.Saver().save(sess, save_file)
+
+
+def export_model(working_dir, model_path):
+ """Take the latest checkpoint and export it to model_path for selfplay.
+
+ Assumes that all relevant model files are prefixed by the same name.
+ (For example, foo.index, foo.meta and foo.data-00000-of-00001).
+
+ Args:
+ working_dir: The directory where tf.estimator keeps its checkpoints.
+ model_path: Either a local path or a gs:// path to export model to.
+ """
+ latest_checkpoint = tf.train.latest_checkpoint(working_dir)
+ all_checkpoint_files = tf.gfile.Glob(latest_checkpoint + '*')
+ for filename in all_checkpoint_files:
+ suffix = filename.partition(latest_checkpoint)[2]
+ destination_path = model_path + suffix
+ tf.gfile.Copy(filename, destination_path)
+
+
+def train(working_dir, tf_records, generation, params):
+ """Train the model for a specific generation.
+
+ Args:
+ working_dir: The model working directory to save model parameters,
+ drop logs, checkpoints, and so on.
+ tf_records: A list of tf_record filenames for training input.
+ generation: The generation to be trained.
+ params: hyperparams of the model.
+
+ Raises:
+ ValueError: if generation is not greater than 0.
+ """
+ if generation <= 0:
+ raise ValueError('Model 0 is random weights')
+ estimator = tf.estimator.Estimator(
+ dualnet_model.model_fn, model_dir=working_dir, params=params)
+ max_steps = (generation * params.examples_per_generation
+ // params.batch_size)
+ profiler_hook = tf.train.ProfilerHook(output_dir=working_dir, save_secs=600)
+
+ def input_fn():
+ return preprocessing.get_input_tensors(
+ params, params.batch_size, tf_records)
+ estimator.train(
+ input_fn, hooks=[profiler_hook], max_steps=max_steps)
+
+
+def validate(working_dir, tf_records, params):
+ """Perform model validation on the hold out data.
+
+ Args:
+ working_dir: The model working directory.
+ tf_records: A list of tf_records filenames for holdout data.
+ params: hyperparams of the model.
+ """
+ estimator = tf.estimator.Estimator(
+ dualnet_model.model_fn, model_dir=working_dir, params=params)
+ def input_fn():
+ return preprocessing.get_input_tensors(
+ params, params.batch_size, tf_records, filter_amount=0.05)
+ estimator.evaluate(input_fn, steps=1000)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/dualnet_model.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/dualnet_model.py"
new file mode 100644
index 00000000..2fd5ac34
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/dualnet_model.py"
@@ -0,0 +1,281 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Defines DualNet model, the architecture of the policy and value network.
+
+The input to the neural network is a [board_size * board_size * 17] image stack
+comprising 17 binary feature planes. Eight feature planes consist of binary values
+indicating the presence of the current player's stones; a further 8 feature
+planes represent the corresponding features for the opponent's stones; the final
+feature plane represents the color to play, and has a constant value of either 1
+if black is to play or 0 if white is to play. Check 'features.py' for more details.
+
+In MiniGo implementation, the input features are processed by a residual tower
+that consists of a single convolutional block followed by either 9 or 19
+residual blocks.
+The convolutional block applies the following modules:
+ 1. A convolution of num_filter filters of kernel size 3 x 3 with stride 1
+ 2. Batch normalization
+ 3. A rectifier non-linearity
+Each residual block applies the following modules sequentially to its input:
+ 1. A convolution of num_filter filters of kernel size 3 x 3 with stride 1
+ 2. Batch normalization
+ 3. A rectifier non-linearity
+ 4. A convolution of num_filter filters of kernel size 3 x 3 with stride 1
+ 5. Batch normalization
+ 6. A skip connection that adds the input to the block
+ 7. A rectifier non-linearity
+Note: num_filter is 128 for 19 x 19 board size, and 32 for 9 x 9 board size.
+
+The output of the residual tower is passed into two separate "heads" for
+computing the policy and value respectively. The policy head applies the
+following modules:
+ 1. A convolution of 2 filters of kernel size 1 x 1 with stride 1
+ 2. Batch normalization
+ 3. A rectifier non-linearity
+ 4. A fully connected linear layer that outputs a vector of size 19^2 + 1 = 362
+ corresponding to logit probabilities for all intersections and the pass move
+The value head applies the following modules:
+ 1. A convolution of 1 filter of kernel size 1 x 1 with stride 1
+ 2. Batch normalization
+ 3. A rectifier non-linearity
+ 4. A fully connected linear layer to a hidden layer of size 256 for 19 x 19
+ board size and 64 for 9x9 board size
+ 5. A rectifier non-linearity
+ 6. A fully connected linear layer to a scalar
+ 7. A tanh non-linearity outputting a scalar in the range [-1, 1]
+
+The overall network depth, in the 10 or 20 block network, is 19 or 39
+parameterized layers respectively for the residual tower, plus an additional 2
+layers for the policy head and 3 layers for the value head.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+_BATCH_NORM_DECAY = 0.997
+_BATCH_NORM_EPSILON = 1e-5
+
+
+def _batch_norm(inputs, training, center=True, scale=True):
+ """Performs a batch normalization using a standard set of parameters."""
+ return tf.layers.batch_normalization(
+ inputs=inputs, momentum=_BATCH_NORM_DECAY, epsilon=_BATCH_NORM_EPSILON,
+ center=center, scale=scale, fused=True, training=training)
+
+
+def _conv2d(inputs, filters, kernel_size):
+ """Performs 2D convolution with a standard set of parameters."""
+ return tf.layers.conv2d(
+ inputs=inputs, filters=filters, kernel_size=kernel_size,
+ padding='same')
+
+
+def _conv_block(inputs, filters, kernel_size, training):
+ """A convolutional block.
+
+ Args:
+ inputs: A tensor representing a batch of input features with shape
+ [BATCH_SIZE, board_size, board_size, features.NEW_FEATURES_PLANES].
+ filters: The number of filters for network layers in residual tower.
+ kernel_size: The kernel to be used in conv2d.
+ training: Either True or False, whether we are currently training the
+ model. Needed for batch norm.
+
+ Returns:
+ The output tensor of the convolutional block layer.
+ """
+ conv = _conv2d(inputs, filters, kernel_size)
+ batchn = _batch_norm(conv, training)
+
+ output = tf.nn.relu(batchn)
+
+ return output
+
+
+def _res_block(inputs, filters, kernel_size, training):
+ """A residual block.
+
+ Args:
+ inputs: A tensor representing a batch of input features with shape
+ [BATCH_SIZE, board_size, board_size, features.NEW_FEATURES_PLANES].
+ filters: The number of filters for network layers in residual tower.
+ kernel_size: The kernel to be used in conv2d.
+ training: Either True or False, whether we are currently training the
+ model. Needed for batch norm.
+
+ Returns:
+ The output tensor of the residual block layer.
+ """
+
+ initial_output = _conv_block(inputs, filters, kernel_size, training)
+
+ int_layer2_conv = _conv2d(initial_output, filters, kernel_size)
+ int_layer2_batchn = _batch_norm(int_layer2_conv, training)
+
+ output = tf.nn.relu(inputs + int_layer2_batchn)
+
+ return output
+
+
+class Model(object):
+ """Base class for building the DualNet Model."""
+
+ def __init__(self, num_filters, num_shared_layers, fc_width, board_size):
+ """Initialize a model for computing the policy and value in RL.
+
+ Args:
+ num_filters: Number of filters (AlphaGoZero used 256). We use 128 by
+ default for a 19x19 go board, and 32 for 9x9 size.
+ num_shared_layers: Number of shared residual blocks. AGZ used both 19
+ and 39. Here we use 19 for 19x19 size and 9 for 9x9 size because it's
+ faster to train.
+ fc_width: Dimensionality of the fully connected linear layer.
+ board_size: A single integer for the board size.
+ """
+ self.num_filters = num_filters
+ self.num_shared_layers = num_shared_layers
+ self.fc_width = fc_width
+ self.board_size = board_size
+ self.kernel_size = [3, 3] # kernel size is from AGZ paper
+
+ def __call__(self, inputs, training):
+ """Add operations to classify a batch of input Go features.
+
+ Args:
+ inputs: A Tensor representing a batch of input Go features with shape
+ [BATCH_SIZE, board_size, board_size, features.NEW_FEATURES_PLANES]
+ training: A boolean. Set to True to add operations required only when
+ training the classifier.
+
+ Returns:
+ policy_logits: A vector of size self.board_size * self.board_size + 1
+ corresponding to the policy logit probabilities for all intersections
+ and the pass move.
+ value_logits: A scalar for the value logits output
+ """
+ initial_output = _conv_block(
+ inputs=inputs, filters=self.num_filters,
+ kernel_size=self.kernel_size, training=training)
+ # the shared stack
+ shared_output = initial_output
+ for _ in range(self.num_shared_layers):
+ shared_output = _res_block(
+ inputs=shared_output, filters=self.num_filters,
+ kernel_size=self.kernel_size, training=training)
+
+ # policy head
+ policy_conv2d = _conv2d(inputs=shared_output, filters=2, kernel_size=[1, 1])
+ policy_batchn = _batch_norm(inputs=policy_conv2d, training=training,
+ center=False, scale=False)
+ policy_relu = tf.nn.relu(policy_batchn)
+ policy_logits = tf.layers.dense(
+ tf.reshape(policy_relu, [-1, self.board_size * self.board_size * 2]),
+ self.board_size * self.board_size + 1)
+
+ # value head
+ value_conv2d = _conv2d(shared_output, filters=1, kernel_size=[1, 1])
+ value_batchn = _batch_norm(value_conv2d, training,
+ center=False, scale=False)
+ value_relu = tf.nn.relu(value_batchn)
+ value_fc_hidden = tf.nn.relu(tf.layers.dense(
+ tf.reshape(value_relu, [-1, self.board_size * self.board_size]),
+ self.fc_width))
+ value_logits = tf.reshape(tf.layers.dense(value_fc_hidden, 1), [-1])
+
+ return policy_logits, value_logits
+
+
+def model_fn(features, labels, mode, params, config=None): # pylint: disable=unused-argument
+ """DualNet model function.
+
+ Args:
+ features: tensor with shape
+ [BATCH_SIZE, self.board_size, self.board_size,
+ features.NEW_FEATURES_PLANES]
+ labels: dict from string to tensor with shape
+ 'pi_tensor': [BATCH_SIZE, self.board_size * self.board_size + 1]
+ 'value_tensor': [BATCH_SIZE]
+ mode: a tf.estimator.ModeKeys (batchnorm params update for TRAIN only)
+ params: an object of hyperparams
+ config: ignored; is required by Estimator API.
+ Returns:
+ EstimatorSpec parameterized according to the input params and the current
+ mode.
+ """
+ model = Model(params.num_filters, params.num_shared_layers, params.fc_width,
+ params.board_size)
+ policy_logits, value_logits = model(
+ features, mode == tf.estimator.ModeKeys.TRAIN)
+
+ policy_output = tf.nn.softmax(policy_logits, name='policy_output')
+ value_output = tf.nn.tanh(value_logits, name='value_output')
+
+ # Calculate model loss. The loss function sums over the mean-squared error,
+ # the cross-entropy losses and the l2 regularization term.
+  # Policy entropy (tracked as a metric below) and cross-entropy loss of the policy
+ policy_entropy = -tf.reduce_mean(tf.reduce_sum(
+ policy_output * tf.log(policy_output), axis=1))
+ policy_cost = tf.reduce_mean(
+ tf.nn.softmax_cross_entropy_with_logits(
+ logits=policy_logits, labels=labels['pi_tensor']))
+ # Mean squared error
+ value_cost = tf.reduce_mean(
+ tf.square(value_output - labels['value_tensor']))
+ # L2 term
+ l2_cost = params.l2_strength * tf.add_n(
+ [tf.nn.l2_loss(v) for v in tf.trainable_variables()
+ if 'bias' not in v.name])
+ # The loss function
+ combined_cost = policy_cost + value_cost + l2_cost
+
+ # Get model train ops
+ global_step = tf.train.get_or_create_global_step()
+ boundaries = [int(1e6), int(2e6)]
+ values = [1e-2, 1e-3, 1e-4]
+ learning_rate = tf.train.piecewise_constant(
+ global_step, boundaries, values)
+ update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
+ with tf.control_dependencies(update_ops):
+ train_op = tf.train.MomentumOptimizer(
+ learning_rate, params.momentum).minimize(
+ combined_cost, global_step=global_step)
+
+ # Create multiple tensors for logging purpose
+ metric_ops = {
+ 'accuracy': tf.metrics.accuracy(labels=labels['pi_tensor'],
+ predictions=policy_output,
+ name='accuracy_op'),
+ 'policy_cost': tf.metrics.mean(policy_cost),
+ 'value_cost': tf.metrics.mean(value_cost),
+ 'l2_cost': tf.metrics.mean(l2_cost),
+ 'policy_entropy': tf.metrics.mean(policy_entropy),
+ 'combined_cost': tf.metrics.mean(combined_cost),
+ }
+ for metric_name, metric_op in metric_ops.items():
+ tf.summary.scalar(metric_name, metric_op[1])
+
+ # Return tf.estimator.EstimatorSpec
+ return tf.estimator.EstimatorSpec(
+ mode=mode,
+ predictions={
+ 'policy_output': policy_output,
+ 'value_output': value_output,
+ },
+ loss=combined_cost,
+ train_op=train_op,
+ eval_metric_ops=metric_ops)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/dualnet_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/dualnet_test.py"
new file mode 100644
index 00000000..6d53b167
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/dualnet_test.py"
@@ -0,0 +1,62 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for dualnet and dualnet_model."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import tempfile
+
+import tensorflow as tf # pylint: disable=g-bad-import-order
+
+import dualnet
+import go
+import model_params
+import preprocessing
+import utils_test
+
+tf.logging.set_verbosity(tf.logging.ERROR)
+
+
+class TestDualNet(utils_test.MiniGoUnitTest):
+
+ def test_train(self):
+ with tempfile.TemporaryDirectory() as working_dir, \
+ tempfile.NamedTemporaryFile() as tf_record:
+ preprocessing.make_dataset_from_sgf(
+ utils_test.BOARD_SIZE, 'example_game.sgf', tf_record.name)
+ dualnet.train(
+ working_dir, [tf_record.name], 1, model_params.DummyMiniGoParams())
+
+ def test_inference(self):
+ with tempfile.TemporaryDirectory() as working_dir, \
+ tempfile.TemporaryDirectory() as export_dir:
+ dualnet.bootstrap(working_dir, model_params.DummyMiniGoParams())
+ exported_model = os.path.join(export_dir, 'bootstrap-model')
+ dualnet.export_model(working_dir, exported_model)
+
+ n1 = dualnet.DualNetRunner(
+ exported_model, model_params.DummyMiniGoParams())
+ n1.run(go.Position(utils_test.BOARD_SIZE))
+
+ n2 = dualnet.DualNetRunner(
+ exported_model, model_params.DummyMiniGoParams())
+ n2.run(go.Position(utils_test.BOARD_SIZE))
+
+
+if __name__ == '__main__':
+ tf.test.main()
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/evaluation.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/evaluation.py"
new file mode 100644
index 00000000..f8374d2d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/evaluation.py"
@@ -0,0 +1,118 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Evaluation of playing games between two neural nets."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import time
+
+import go
+from gtp_wrapper import MCTSPlayer
+import sgf_wrapper
+
+
+def play_match(params, black_net, white_net, games, readouts,
+ sgf_dir, verbosity):
+ """Plays matches between two neural nets.
+
+  The net that wins by a margin of 55% is declared the winner.
+
+ Args:
+ params: An object of hyperparameters.
+ black_net: Instance of the DualNetRunner class to play as black.
+ white_net: Instance of the DualNetRunner class to play as white.
+    games: Number of games to play. The games are played one after another.
+ readouts: Number of readouts to perform for each step in each game.
+ sgf_dir: Directory to write the sgf results.
+ verbosity: Verbosity to show evaluation process.
+
+ Returns:
+    'B' if the winner is black_net, otherwise 'W'.
+ """
+  # Create one black player and one white player; they play `games` games.
+ black = MCTSPlayer(
+ params.board_size, black_net, verbosity=verbosity, two_player_mode=True,
+ num_parallel=params.simultaneous_leaves)
+ white = MCTSPlayer(
+ params.board_size, white_net, verbosity=verbosity, two_player_mode=True,
+ num_parallel=params.simultaneous_leaves)
+
+ black_name = os.path.basename(black_net.save_file)
+ white_name = os.path.basename(white_net.save_file)
+
+ black_win_counts = 0
+ white_win_counts = 0
+
+ for i in range(games):
+ num_move = 0 # The move number of the current game
+
+ black.initialize_game()
+ white.initialize_game()
+
+ while True:
+ start = time.time()
+ active = white if num_move % 2 else black
+ inactive = black if num_move % 2 else white
+
+ current_readouts = active.root.N
+ while active.root.N < current_readouts + readouts:
+ active.tree_search()
+
+ # print some stats on the search
+ if verbosity >= 3:
+ print(active.root.position)
+
+ # First, check the roots for hopeless games.
+ if active.should_resign(): # Force resign
+ active.set_result(-active.root.position.to_play, was_resign=True)
+ inactive.set_result(
+ active.root.position.to_play, was_resign=True)
+
+ if active.is_done():
+ fname = '{:d}-{:s}-vs-{:s}-{:d}.sgf'.format(
+ int(time.time()), white_name, black_name, i)
+ with open(os.path.join(sgf_dir, fname), 'w') as f:
+ sgfstr = sgf_wrapper.make_sgf(
+ params.board_size, active.position.recent, active.result_string,
+ black_name=black_name, white_name=white_name)
+ f.write(sgfstr)
+ print('Finished game', i, active.result_string)
+ if active.result_string is not None:
+ if active.result_string[0] == 'B':
+ black_win_counts += 1
+ elif active.result_string[0] == 'W':
+ white_win_counts += 1
+
+ break
+
+ move = active.pick_move()
+ active.play_move(move)
+ inactive.play_move(move)
+
+ dur = time.time() - start
+ num_move += 1
+
+ if (verbosity > 1) or (verbosity == 1 and num_move % 10 == 9):
+ timeper = (dur / readouts) * 100.0
+ print(active.root.position)
+ print('{:d}: {:d} readouts, {:.3f} s/100. ({:.2f} sec)'.format(
+ num_move, readouts, timeper, dur))
+
+ if (black_win_counts - white_win_counts) > params.eval_win_rate * games:
+ return go.BLACK_NAME
+ else:
+ return go.WHITE_NAME
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/example_game.sgf" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/example_game.sgf"
new file mode 100644
index 00000000..a2f63df3
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/example_game.sgf"
@@ -0,0 +1,95 @@
+(;GM[1]FF[4]CA[UTF-8]AP[CGoban:3]ST[2]
+RU[Japanese]SZ[9]KM[0.00]
+PW[White]PB[Black]RE[B+4.00]
+;B[de]
+;W[fe]
+;B[ee]
+;W[fd]
+;B[ff]
+;W[gf]
+;B[gg]
+;W[fg]
+;B[ef]
+;W[gh]
+;B[hg]
+;W[hh]
+;B[eg]
+;W[fh]
+;B[ge]
+;W[hf]
+;B[he]
+;W[ig]
+;B[fc]
+;W[gd]
+;B[gc]
+;W[hd]
+;B[ed]
+;W[be]
+;B[hc]
+;W[ie]
+;B[bc]
+;W[cg]
+;B[cf]
+;W[bf]
+;B[ch]
+(;W[dg]
+;B[dh]
+;W[bh]
+;B[eh]
+;W[cc]
+;B[cb])
+(;W[cc]
+;B[cb]
+(;W[bh]
+;B[dh])
+(;W[dg]
+;B[dh]
+;W[bh]
+;B[eh]
+;W[dc]
+;B[bd]
+;W[ec]
+;B[cd]
+;W[fb]
+;B[gb]
+(;W[db])
+(;W[bb]
+;B[eb]
+;W[db]
+;B[fa]
+;W[ca]
+;B[ea]
+;W[da]
+;B[df]
+;W[bg]
+;B[bi]
+;W[ab]
+;B[ah]
+;W[ci]
+;B[di]
+;W[ag]
+;B[ae]
+;W[ac]
+;B[ad]
+;W[ha]
+;B[hb]
+;W[fi]
+;B[ce]
+;W[ai]
+;B[ci]
+;W[ei]
+;B[ah]
+;W[ic]
+;B[ib]
+;W[ai]
+;B[ba]
+;W[aa]
+;B[ah]
+;W[ga]
+;B[ia]
+;W[ai]
+;B[ga]
+;W[id]
+;B[ah]
+;W[dd]
+;B[af]TW[ba][cb][ge][he][if][gg][hg][ih][gi][hi][ii]TB[ha][fb][be][bf][ag][bg][cg][dg][bh][ai]))))
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/features.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/features.py"
new file mode 100644
index 00000000..011e585a
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/features.py"
@@ -0,0 +1,91 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Features used by AlphaGo Zero, in approximate order of importance.
+
+Feature          Planes   Notes
+Stone History    16       The stones of each color during the last 8 moves.
+Ones             1        Constant plane of 1s.
+
+All features with 8 planes are 1-hot encoded, with plane i marked with 1
+only if the feature was equal to i. Any features >= 8 would be marked as 8.
+
+This file includes the features from AlphaGo Zero (AGZ) as NEW_FEATURES.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import go
+import numpy as np
+
+
+def planes(num_planes):
+  """Decorator that records the number of feature planes a function produces.
+
+  For example, for a 19x19 go board, the stone features have shape
+  [19, 19, 16], where the third dimension is num_planes.
+  """
+ def deco(f):
+ f.planes = num_planes
+ return f
+ return deco
+
+
+@planes(16)
+def stone_features(board_size, position):
+ """Create the 16 planes of features for a given position.
+
+ Args:
+ board_size: the go board size.
+ position: a given go board status.
+
+ Returns:
+ The 16 plane features.
+ """
+ # a bit easier to calculate it with axis 0 being the 16 board states,
+ # and then roll axis 0 to the end.
+ features = np.zeros([16, board_size, board_size], dtype=np.uint8)
+
+ num_deltas_avail = position.board_deltas.shape[0]
+ cumulative_deltas = np.cumsum(position.board_deltas, axis=0)
+ last_eight = np.tile(position.board, [8, 1, 1])
+ # apply deltas to compute previous board states
+ last_eight[1:num_deltas_avail + 1] -= cumulative_deltas
+ # if no more deltas are available, just repeat oldest board.
+ last_eight[num_deltas_avail + 1:] = last_eight[num_deltas_avail].reshape(
+ 1, board_size, board_size)
+
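+  # Even planes (0, 2, ..., 14) hold the stones of the player to move and odd
+  # planes (1, 3, ..., 15) hold the opponent's stones, for the current board
+  # and each of the 7 preceding board states.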
+ features[::2] = last_eight == position.to_play
+ features[1::2] = last_eight == -position.to_play
+ return np.rollaxis(features, 0, 3)
+
+
+@planes(1)
+def color_to_play_feature(board_size, position):
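+  """Return a constant plane of 1s if black is to play, else a plane of 0s."""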
+ if position.to_play == go.BLACK:
+ return np.ones([board_size, board_size, 1], dtype=np.uint8)
+ else:
+ return np.zeros([board_size, board_size, 1], dtype=np.uint8)
+
+NEW_FEATURES = [
+ stone_features,
+ color_to_play_feature
+]
+
+NEW_FEATURES_PLANES = sum(f.planes for f in NEW_FEATURES)
+
+
+def extract_features(board_size, position, features=None):
+ if features is None:
+ features = NEW_FEATURES
+ return np.concatenate([feature(board_size, position) for feature in features],
+ axis=2)
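+
+
+# A minimal usage sketch: for a go.Position `pos` on a board_size x board_size
+# board, the stacked NEW_FEATURES form a [board_size, board_size, 17] uint8
+# array:
+#
+#   pos = go.Position(board_size=9)
+#   feats = extract_features(9, pos)
+#   assert feats.shape == (9, 9, NEW_FEATURES_PLANES)  # (9, 9, 17)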
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/features_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/features_test.py"
new file mode 100644
index 00000000..45532088
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/features_test.py"
@@ -0,0 +1,112 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for features."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf # pylint: disable=g-bad-import-order
+
+import features
+import go
+import numpy as np
+import utils_test
+
+tf.logging.set_verbosity(tf.logging.ERROR)
+
+EMPTY_ROW = '.' * utils_test.BOARD_SIZE + '\n'
+TEST_BOARD = utils_test.load_board('''
+.X.....OO
+X........
+XXXXXXXXX
+''' + EMPTY_ROW * 6)
+
+TEST_POSITION = go.Position(
+ utils_test.BOARD_SIZE,
+ board=TEST_BOARD,
+ n=3,
+ komi=6.5,
+ caps=(1, 2),
+ ko=None,
+ recent=(go.PlayerMove(go.BLACK, (0, 1)),
+ go.PlayerMove(go.WHITE, (0, 8)),
+ go.PlayerMove(go.BLACK, (1, 0))),
+ to_play=go.BLACK,
+)
+
+TEST_BOARD2 = utils_test.load_board('''
+.XOXXOO..
+XO.OXOX..
+XXO..X...
+''' + EMPTY_ROW * 6)
+
+TEST_POSITION2 = go.Position(
+ utils_test.BOARD_SIZE,
+ board=TEST_BOARD2,
+ n=0,
+ komi=6.5,
+ caps=(0, 0),
+ ko=None,
+ recent=tuple(),
+ to_play=go.BLACK,
+)
+
+
+TEST_POSITION3 = go.Position(utils_test.BOARD_SIZE)
+for coord in ((0, 0), (0, 1), (0, 2), (0, 3), (1, 1)):
+ TEST_POSITION3.play_move(coord, mutate=True)
+# resulting position should look like this:
+# X.XO.....
+# .X.......
+# .........
+
+
+class TestFeatureExtraction(utils_test.MiniGoUnitTest):
+
+ def test_stone_features(self):
+ f = features.stone_features(utils_test.BOARD_SIZE, TEST_POSITION3)
+ self.assertEqual(TEST_POSITION3.to_play, go.WHITE)
+ self.assertEqual(f.shape, (9, 9, 16))
+ self.assertEqualNPArray(f[:, :, 0], utils_test.load_board('''
+ ...X.....
+ .........''' + EMPTY_ROW * 7))
+
+ self.assertEqualNPArray(f[:, :, 1], utils_test.load_board('''
+ X.X......
+ .X.......''' + EMPTY_ROW * 7))
+
+ self.assertEqualNPArray(f[:, :, 2], utils_test.load_board('''
+ .X.X.....
+ .........''' + EMPTY_ROW * 7))
+
+ self.assertEqualNPArray(f[:, :, 3], utils_test.load_board('''
+ X.X......
+ .........''' + EMPTY_ROW * 7))
+
+ self.assertEqualNPArray(f[:, :, 4], utils_test.load_board('''
+ .X.......
+ .........''' + EMPTY_ROW * 7))
+
+ self.assertEqualNPArray(f[:, :, 5], utils_test.load_board('''
+ X.X......
+ .........''' + EMPTY_ROW * 7))
+
+ for i in range(10, 16):
+ self.assertEqualNPArray(
+ f[:, :, i], np.zeros([utils_test.BOARD_SIZE, utils_test.BOARD_SIZE]))
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/go.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/go.py"
new file mode 100644
index 00000000..94f03313
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/go.py"
@@ -0,0 +1,584 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Describe the Go game status.
+
+A board is a NxN numpy array.
+A Coordinate is a tuple index into the board.
+A Move is a (Coordinate c | None).
+A PlayerMove is a (Color, Move) tuple
+
+(0, 0) is considered to be the upper left corner of the board, and (18, 0)
+is the lower left.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from collections import namedtuple
+import copy
+import itertools
+
+import coords
+import numpy as np
+
+# Represent a board as a numpy array, with 0 empty, 1 is black, -1 is white.
+# This means that swapping colors is as simple as multiplying array by -1.
+WHITE, EMPTY, BLACK, FILL, KO, UNKNOWN = range(-1, 5)
+
+# Represents "group not found" in the LibertyTracker object
+MISSING_GROUP_ID = -1
+
+BLACK_NAME = 'BLACK'
+WHITE_NAME = 'WHITE'
+
+
+def _check_bounds(board_size, c):
+ return c[0] % board_size == c[0] and c[1] % board_size == c[1]
+
+
+def get_neighbors_diagonals(board_size):
+ """Return coordinates of neighbors and diagonals for a go board."""
+ all_coords = [(i, j) for i in range(board_size) for j in range(board_size)]
+ def check_bounds(c):
+ return _check_bounds(board_size, c)
+
+ neighbors = {(x, y): list(filter(check_bounds, [
+ (x+1, y), (x-1, y), (x, y+1), (x, y-1)])) for x, y in all_coords}
+
+ diagonals = {(x, y): list(filter(check_bounds, [
+ (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)])) for x, y in all_coords}
+
+ return neighbors, diagonals
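+
+# For example, with board_size == 9 the corner point (0, 0) has
+# neighbors[(0, 0)] == [(1, 0), (0, 1)] and diagonals[(0, 0)] == [(1, 1)];
+# off-board candidates are filtered out by the bounds check.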
+
+
+class IllegalMove(Exception):
+ pass
+
+
+class PlayerMove(namedtuple('PlayerMove', ['color', 'move'])):
+ pass
+
+
+class PositionWithContext(namedtuple('SgfPosition',
+ ['position', 'next_move', 'result'])):
+ pass
+
+
+def place_stones(board, color, stones):
+ for s in stones:
+ board[s] = color
+
+
+def replay_position(board_size, position, result):
+ """Wrapper for a go.Position which replays its history."""
+ # Assumes an empty start position! (i.e. no handicap, and history must
+ # be exhaustive.)
+ # Result must be passed in, since a resign cannot be inferred from position
+ # history alone.
+ # for position_w_context in replay_position(position):
+ # print(position_w_context.position)
+ if position.n != len(position.recent):
+ raise ValueError('Position history is incomplete!')
+ pos = Position(board_size=board_size, komi=position.komi)
+ for player_move in position.recent:
+ color, next_move = player_move
+ yield PositionWithContext(pos, next_move, result)
+ pos = pos.play_move(next_move, color=color)
+
+
+def find_reached(board_size, board, c):
+ """Find the chain to reach c."""
+ color = board[c]
+ chain = set([c])
+ reached = set()
+ frontier = [c]
+ neighbors, _ = get_neighbors_diagonals(board_size)
+ while frontier:
+ current = frontier.pop()
+ chain.add(current)
+ for n in neighbors[current]:
+ if board[n] == color and n not in chain:
+ frontier.append(n)
+ elif board[n] != color:
+ reached.add(n)
+ return chain, reached
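+
+# For example, if the only stone on the board is a black stone at (0, 0),
+# find_reached(board_size, board, (0, 0)) returns chain == {(0, 0)} and
+# reached == {(0, 1), (1, 0)}.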
+
+
+def is_koish(board_size, board, c):
+ """Check if c is surrounded on all sides by 1 color, and return that color."""
+ if board[c] != EMPTY:
+ return None
+ full_neighbors, _ = get_neighbors_diagonals(board_size)
+ neighbors = {board[n] for n in full_neighbors[c]}
+ if len(neighbors) == 1 and EMPTY not in neighbors:
+ return list(neighbors)[0]
+ else:
+ return None
+
+
+def is_eyeish(board_size, board, c):
+ """Check if c is an eye, for the purpose of restricting MC rollouts."""
+ # pass is fine.
+ if c is None:
+ return
+ color = is_koish(board_size, board, c)
+ if color is None:
+ return None
+ diagonal_faults = 0
+ _, all_diagonals = get_neighbors_diagonals(board_size)
+ diagonals = all_diagonals[c]
+ if len(diagonals) < 4:
+ diagonal_faults += 1
+ for d in diagonals:
+ if board[d] not in (color, EMPTY):
+ diagonal_faults += 1
+ if diagonal_faults > 1:
+ return None
+ else:
+ return color
+
+
+class Group(namedtuple('Group', ['id', 'stones', 'liberties', 'color'])):
+ """Group class.
+
+ stones: a frozenset of Coordinates belonging to this group
+ liberties: a frozenset of Coordinates that are empty and adjacent to
+ this group.
+ color: color of this group
+ """
+
+ def __eq__(self, other):
+ return (self.stones == other.stones and self.liberties == other.liberties
+ and self.color == other.color)
+
+
+class LibertyTracker(object):
+ """LibertyTracker class."""
+
+ @staticmethod
+ def from_board(board_size, board):
+ board = np.copy(board)
+ curr_group_id = 0
+ lib_tracker = LibertyTracker(board_size)
+ for color in (WHITE, BLACK):
+ while color in board:
+ curr_group_id += 1
+ found_color = np.where(board == color)
+ coord = found_color[0][0], found_color[1][0]
+ chain, reached = find_reached(board_size, board, coord)
+ liberties = frozenset(r for r in reached if board[r] == EMPTY)
+ new_group = Group(curr_group_id, frozenset(
+ chain), liberties, color)
+ lib_tracker.groups[curr_group_id] = new_group
+ for s in chain:
+ lib_tracker.group_index[s] = curr_group_id
+ place_stones(board, FILL, chain)
+
+ lib_tracker.max_group_id = curr_group_id
+
+ liberty_counts = np.zeros([board_size, board_size], dtype=np.uint8)
+ for group in lib_tracker.groups.values():
+ num_libs = len(group.liberties)
+ for s in group.stones:
+ liberty_counts[s] = num_libs
+ lib_tracker.liberty_cache = liberty_counts
+
+ return lib_tracker
+
+ def __init__(self, board_size, group_index=None, groups=None,
+ liberty_cache=None, max_group_id=1):
+ # group_index: a NxN numpy array of group_ids. -1 means no group
+ # groups: a dict of group_id to groups
+ # liberty_cache: a NxN numpy array of liberty counts
+ self.board_size = board_size
+ self.group_index = (group_index if group_index is not None else
+ -np.ones([board_size, board_size], dtype=np.int32))
+ self.groups = groups or {}
+    self.liberty_cache = (
+        liberty_cache if liberty_cache is not None
+        else np.zeros([board_size, board_size], dtype=np.uint8))
+ self.max_group_id = max_group_id
+ self.neighbors, _ = get_neighbors_diagonals(board_size)
+
+ def __deepcopy__(self, memodict=None):
+ new_group_index = np.copy(self.group_index)
+ new_lib_cache = np.copy(self.liberty_cache)
+ # shallow copy
+ new_groups = copy.copy(self.groups)
+ return LibertyTracker(
+ self.board_size, new_group_index, new_groups,
+ liberty_cache=new_lib_cache, max_group_id=self.max_group_id)
+
+ def add_stone(self, color, c):
+ assert self.group_index[c] == MISSING_GROUP_ID
+ captured_stones = set()
+ opponent_neighboring_group_ids = set()
+ friendly_neighboring_group_ids = set()
+ empty_neighbors = set()
+
+ for n in self.neighbors[c]:
+ neighbor_group_id = self.group_index[n]
+ if neighbor_group_id != MISSING_GROUP_ID:
+ neighbor_group = self.groups[neighbor_group_id]
+ if neighbor_group.color == color:
+ friendly_neighboring_group_ids.add(neighbor_group_id)
+ else:
+ opponent_neighboring_group_ids.add(neighbor_group_id)
+ else:
+ empty_neighbors.add(n)
+
+ new_group = self._create_group(color, c, empty_neighbors)
+
+ for group_id in friendly_neighboring_group_ids:
+ new_group = self._merge_groups(group_id, new_group.id)
+
+ # new_group becomes stale as _update_liberties and
+ # _handle_captures are called; must refetch with self.groups[new_group.id]
+ for group_id in opponent_neighboring_group_ids:
+ neighbor_group = self.groups[group_id]
+ if len(neighbor_group.liberties) == 1:
+ captured = self._capture_group(group_id)
+ captured_stones.update(captured)
+ else:
+ self._update_liberties(group_id, remove={c})
+
+ self._handle_captures(captured_stones)
+
+ # suicide is illegal
+    if not self.groups[new_group.id].liberties:
+ raise IllegalMove('Move at {} would commit suicide!\n'.format(c))
+
+ return captured_stones
+
+ def _create_group(self, color, c, liberties):
+ self.max_group_id += 1
+ new_group = Group(self.max_group_id, frozenset([c]), liberties, color)
+ self.groups[new_group.id] = new_group
+ self.group_index[c] = new_group.id
+ self.liberty_cache[c] = len(liberties)
+ return new_group
+
+ def _merge_groups(self, group1_id, group2_id):
+ group1 = self.groups[group1_id]
+ group2 = self.groups[group2_id]
+ self.groups[group1_id] = Group(
+ group1_id, group1.stones | group2.stones, group1.liberties,
+ group1.color)
+ del self.groups[group2_id]
+ for s in group2.stones:
+ self.group_index[s] = group1_id
+
+ self._update_liberties(
+ group1_id, add=group2.liberties, remove=group2.stones)
+
+ return group1
+
+ def _capture_group(self, group_id):
+ dead_group = self.groups[group_id]
+ del self.groups[group_id]
+ for s in dead_group.stones:
+ self.group_index[s] = MISSING_GROUP_ID
+ self.liberty_cache[s] = 0
+ return dead_group.stones
+
+ def _update_liberties(self, group_id, add=set(), remove=set()):
+ group = self.groups[group_id]
+ new_libs = (group.liberties | add) - remove
+ self.groups[group_id] = Group(
+ group_id, group.stones, new_libs, group.color)
+
+ new_lib_count = len(new_libs)
+ for s in self.groups[group_id].stones:
+ self.liberty_cache[s] = new_lib_count
+
+ def _handle_captures(self, captured_stones):
+ for s in captured_stones:
+ for n in self.neighbors[s]:
+ group_id = self.group_index[n]
+ if group_id != MISSING_GROUP_ID:
+ self._update_liberties(group_id, add={s})
+
+
+class Position(object):
+
+ def __init__(self, board_size, board=None, n=0, komi=7.5, caps=(0, 0),
+ lib_tracker=None, ko=None, recent=tuple(),
+ board_deltas=None, to_play=BLACK):
+ """Initialize position class.
+
+ Args:
+ board_size: the go board size.
+ board: a numpy array
+ n: an int representing moves played so far
+ komi: a float, representing points given to the second player.
+ caps: a (int, int) tuple of captures for B, W.
+ lib_tracker: a LibertyTracker object
+ ko: a Move
+ recent: a tuple of PlayerMoves, such that recent[-1] is the last move.
+ board_deltas: a np.array of shape (n, go.N, go.N) representing changes
+ made to the board at each move (played move and captures).
+ Should satisfy next_pos.board - next_pos.board_deltas[0] == pos.board
+ to_play: BLACK or WHITE
+ """
+ if not isinstance(recent, tuple):
+ raise TypeError('Recent must be a tuple!')
+ self.board_size = board_size
+    self.board = (board if board is not None else
+                  np.zeros([board_size, board_size], dtype=np.int8))
+ self.n = n
+ self.komi = komi
+ self.caps = caps
+ self.lib_tracker = lib_tracker or LibertyTracker.from_board(
+ self.board_size, self.board)
+ self.ko = ko
+ self.recent = recent
+    self.board_deltas = (board_deltas if board_deltas is not None else
+                         np.zeros([0, board_size, board_size], dtype=np.int8))
+ self.to_play = to_play
+ self.last_eight = None
+ self.neighbors, _ = get_neighbors_diagonals(board_size)
+
+ def __deepcopy__(self, memodict=None):
+ new_board = np.copy(self.board)
+ new_lib_tracker = copy.deepcopy(self.lib_tracker)
+ return Position(
+ self.board_size, new_board, self.n, self.komi, self.caps,
+ new_lib_tracker, self.ko, self.recent, self.board_deltas, self.to_play)
+
+ def __str__(self):
+ pretty_print_map = {
+ WHITE: '\x1b[0;31;47mO',
+ EMPTY: '\x1b[0;31;43m.',
+ BLACK: '\x1b[0;31;40mX',
+ FILL: '#',
+ KO: '*',
+ }
+ board = np.copy(self.board)
+ captures = self.caps
+ if self.ko is not None:
+ place_stones(board, KO, [self.ko])
+ raw_board_contents = []
+ for i in range(self.board_size):
+ row = []
+ for j in range(self.board_size):
+ appended = '<' if (
+ self.recent and (i, j) == self.recent[-1].move) else ' '
+ row.append(pretty_print_map[board[i, j]] + appended)
+ row.append('\x1b[0m')
+ raw_board_contents.append(''.join(row))
+
+ row_labels = ['%2d ' % i for i in range(self.board_size, 0, -1)]
+ annotated_board_contents = [''.join(r) for r in zip(
+ row_labels, raw_board_contents, row_labels)]
+ header_footer_rows = [
+ ' ' + ' '.join('ABCDEFGHJKLMNOPQRST'[:self.board_size]) + ' ']
+ annotated_board = '\n'.join(itertools.chain(
+ header_footer_rows, annotated_board_contents, header_footer_rows))
+ details = '\nMove: {}. Captures X: {} O: {}\n'.format(
+ self.n, *captures)
+ return annotated_board + details
+
+ def is_move_suicidal(self, move):
+ potential_libs = set()
+ for n in self.neighbors[move]:
+ neighbor_group_id = self.lib_tracker.group_index[n]
+ if neighbor_group_id == MISSING_GROUP_ID:
+ # at least one liberty after playing here, so not a suicide
+ return False
+ neighbor_group = self.lib_tracker.groups[neighbor_group_id]
+ if neighbor_group.color == self.to_play:
+ potential_libs |= neighbor_group.liberties
+ elif len(neighbor_group.liberties) == 1:
+ # would capture an opponent group if they only had one lib.
+ return False
+ # it's possible to suicide by connecting several friendly groups
+ # each of which had one liberty.
+ potential_libs -= set([move])
+ return not potential_libs
+
+ def is_move_legal(self, move):
+ """Checks that a move is on an empty space, not on ko, and not suicide."""
+ if move is None:
+ return True
+ if self.board[move] != EMPTY:
+ return False
+ if move == self.ko:
+ return False
+ if self.is_move_suicidal(move):
+ return False
+
+ return True
+
+ def all_legal_moves(self):
+ """Returns a np.array of size go.N**2 + 1, with 1 = legal, 0 = illegal."""
+ # by default, every move is legal
+ legal_moves = np.ones([self.board_size, self.board_size], dtype=np.int8)
+ # ...unless there is already a stone there
+ legal_moves[self.board != EMPTY] = 0
+ # calculate which spots have 4 stones next to them
+ # padding is because the edge always counts as a lost liberty.
+ adjacent = np.ones([self.board_size+2, self.board_size+2], dtype=np.int8)
+ adjacent[1:-1, 1:-1] = np.abs(self.board)
+ num_adjacent_stones = (adjacent[:-2, 1:-1] + adjacent[1:-1, :-2] +
+ adjacent[2:, 1:-1] + adjacent[1:-1, 2:])
+ # Surrounded spots are those that are empty and have 4 adjacent stones.
+ surrounded_spots = np.multiply(
+ (self.board == EMPTY),
+ (num_adjacent_stones == 4))
+ # Such spots are possibly illegal, unless they are capturing something.
+ # Iterate over and manually check each spot.
+ for coord in np.transpose(np.nonzero(surrounded_spots)):
+ if self.is_move_suicidal(tuple(coord)):
+ legal_moves[tuple(coord)] = 0
+
+ # ...and retaking ko is always illegal
+ if self.ko is not None:
+ legal_moves[self.ko] = 0
+
+ # and pass is always legal
+ return np.concatenate([legal_moves.ravel(), [1]])
+
+ def pass_move(self, mutate=False):
+ pos = self if mutate else copy.deepcopy(self)
+ pos.n += 1
+ pos.recent += (PlayerMove(pos.to_play, None),)
+ pos.board_deltas = np.concatenate((
+ np.zeros([1, self.board_size, self.board_size], dtype=np.int8),
+ pos.board_deltas[:6]))
+ pos.to_play *= -1
+ pos.ko = None
+ return pos
+
+ def flip_playerturn(self, mutate=False):
+ pos = self if mutate else copy.deepcopy(self)
+ pos.ko = None
+ pos.to_play *= -1
+ return pos
+
+ def get_liberties(self):
+ return self.lib_tracker.liberty_cache
+
+ def play_move(self, c, color=None, mutate=False):
+ """Obeys CGOS Rules of Play.
+
+ In short:
+ No suicides
+ Chinese/area scoring
+ Positional superko (this is very crudely approximate at the moment.)
+
+ Args:
+ c: the coordinate to play from.
+ color: the color of the player to play.
+ mutate:
+
+ Returns:
+ The position of next move.
+
+ Raises:
+ IllegalMove: if the input c is an illegal move.
+ """
+ if color is None:
+ color = self.to_play
+
+ pos = self if mutate else copy.deepcopy(self)
+
+ if c is None:
+ pos = pos.pass_move(mutate=mutate)
+ return pos
+
+ if not self.is_move_legal(c):
+ raise IllegalMove('{} move at {} is illegal: \n{}'.format(
+ 'Black' if self.to_play == BLACK else 'White',
+ coords.to_kgs(self.board_size, c), self))
+
+ potential_ko = is_koish(self.board_size, self.board, c)
+
+ place_stones(pos.board, color, [c])
+ captured_stones = pos.lib_tracker.add_stone(color, c)
+ place_stones(pos.board, EMPTY, captured_stones)
+
+ opp_color = -1 * color
+
+ new_board_delta = np.zeros([self.board_size, self.board_size],
+ dtype=np.int8)
+ new_board_delta[c] = color
+ place_stones(new_board_delta, color, captured_stones)
+
+ if len(captured_stones) == 1 and potential_ko == opp_color:
+ new_ko = list(captured_stones)[0]
+ else:
+ new_ko = None
+
+ if pos.to_play == BLACK:
+ new_caps = (pos.caps[0] + len(captured_stones), pos.caps[1])
+ else:
+ new_caps = (pos.caps[0], pos.caps[1] + len(captured_stones))
+
+ pos.n += 1
+ pos.caps = new_caps
+ pos.ko = new_ko
+ pos.recent += (PlayerMove(color, c),)
+
+ # keep a rolling history of last 7 deltas - that's all we'll need to
+ # extract the last 8 board states.
+ pos.board_deltas = np.concatenate((
+ new_board_delta.reshape(1, self.board_size, self.board_size),
+ pos.board_deltas[:6]))
+ pos.to_play *= -1
+ return pos
+
+ def is_game_over(self):
+ return (len(self.recent) >= 2
+ and self.recent[-1].move is None
+ and self.recent[-2].move is None)
+
+ def score(self):
+ """Return score from B perspective. If W is winning, score is negative."""
+ working_board = np.copy(self.board)
+ while EMPTY in working_board:
+ unassigned_spaces = np.where(working_board == EMPTY)
+ c = unassigned_spaces[0][0], unassigned_spaces[1][0]
+ territory, borders = find_reached(self.board_size, working_board, c)
+ border_colors = set(working_board[b] for b in borders)
+ X_border = BLACK in border_colors # pylint: disable=invalid-name
+ O_border = WHITE in border_colors # pylint: disable=invalid-name
+ if X_border and not O_border:
+ territory_color = BLACK
+ elif O_border and not X_border:
+ territory_color = WHITE
+ else:
+ territory_color = UNKNOWN # dame, or seki
+ place_stones(working_board, territory_color, territory)
+
+ return np.count_nonzero(working_board == BLACK) - np.count_nonzero(
+ working_board == WHITE) - self.komi
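+
+  # For example, score() on an empty board is just -komi: no empty region
+  # reaches a colored border, so all territory is marked UNKNOWN and neither
+  # color gets any points.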
+
+ def result(self):
+ score = self.score()
+ if score > 0:
+ return 1
+ elif score < 0:
+ return -1
+ else:
+ return 0
+
+ def result_string(self):
+ score = self.score()
+ if score > 0:
+ return 'B+' + '{:.1f}'.format(score)
+ elif score < 0:
+ return 'W+' + '{:.1f}'.format(abs(score))
+ else:
+ return 'DRAW'
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/go_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/go_test.py"
new file mode 100644
index 00000000..9ebdd0ae
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/go_test.py"
@@ -0,0 +1,704 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for go."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf # pylint: disable=g-bad-import-order
+
+import coords
+import go
+from go import Position, PlayerMove, LibertyTracker, WHITE, BLACK
+import numpy as np
+import sgf_wrapper
+import utils_test
+
+EMPTY_ROW = '.' * utils_test.BOARD_SIZE + '\n'
+TEST_BOARD = utils_test.load_board('''
+.X.....OO
+X........
+''' + EMPTY_ROW * 7)
+
+NO_HANDICAP_SGF = '''(;CA[UTF-8]SZ[9]PB[Murakawa Daisuke]PW[Iyama Yuta]KM[6.5]
+ HA[0]RE[W+1.5]GM[1];B[fd];W[cf];B[eg];W[dd];B[dc];W[cc];
+ B[de];W[cd];B[ed];W[he];B[ce];W[be];B[df];W[bf];B[hd];
+ W[ge];B[gd];W[gg];B[db];W[cb];B[cg];W[bg];B[gh];W[fh];
+ B[hh];W[fg];B[eh];W[ei];B[di];W[fi];B[hg];W[dh];B[ch];
+ W[ci];B[bh];W[ff];B[fe];W[hf];B[id];W[bi];B[ah];W[ef];
+ B[dg];W[ee];B[di];W[ig];B[ai];W[ih];B[fb];W[hi];B[ag];
+ W[ab];B[bd];W[bc];B[ae];W[ad];B[af];W[bd];B[ca];W[ba];
+ B[da];W[ie])'''
+
+
+def coords_from_kgs_set(string):
+ def _from_kgs(kgsc):
+ return coords.from_kgs(utils_test.BOARD_SIZE, kgsc)
+ return frozenset(map(_from_kgs, string.split()))
+
+
+class TestBasicFunctions(utils_test.MiniGoUnitTest):
+
+ def test_load_board(self):
+ self.assertEqualNPArray(utils_test.EMPTY_BOARD, np.zeros(
+ [utils_test.BOARD_SIZE, utils_test.BOARD_SIZE]))
+ self.assertEqualNPArray(
+ utils_test.EMPTY_BOARD, utils_test.load_board(
+ '. \n' * utils_test.BOARD_SIZE ** 2))
+
+ def test_neighbors(self):
+ corner = coords.from_kgs(utils_test.BOARD_SIZE, 'A1')
+ neighbors = [
+ utils_test.EMPTY_BOARD[c] for c in utils_test.NEIGHBORS[corner]]
+ self.assertEqual(len(neighbors), 2)
+
+ side = coords.from_kgs(utils_test.BOARD_SIZE, 'A2')
+ side_neighbors = [
+ utils_test.EMPTY_BOARD[c] for c in utils_test.NEIGHBORS[side]]
+ self.assertEqual(len(side_neighbors), 3)
+
+ def test_is_koish(self):
+ self.assertEqual(go.is_koish(
+ utils_test.BOARD_SIZE, TEST_BOARD, coords.from_kgs(
+ utils_test.BOARD_SIZE, 'A9')), BLACK)
+ self.assertEqual(go.is_koish(
+ utils_test.BOARD_SIZE, TEST_BOARD, coords.from_kgs(
+ utils_test.BOARD_SIZE, 'B8')), None)
+ self.assertEqual(go.is_koish(
+ utils_test.BOARD_SIZE, TEST_BOARD, coords.from_kgs(
+ utils_test.BOARD_SIZE, 'B9')), None)
+ self.assertEqual(
+ go.is_koish(utils_test.BOARD_SIZE, TEST_BOARD, coords.from_kgs(
+ utils_test.BOARD_SIZE, 'E5')), None)
+
+ def test_is_eyeish(self):
+ board = utils_test.load_board('''
+ .XX...XXX
+ X.X...X.X
+ XX.....X.
+ ........X
+ XXXX.....
+ OOOX....O
+ X.OXX.OO.
+ .XO.X.O.O
+ XXO.X.OO.
+ ''')
+ B_eyes = coords_from_kgs_set('A2 A9 B8 J7 H8')
+ W_eyes = coords_from_kgs_set('H2 J1 J3')
+ not_eyes = coords_from_kgs_set('B3 E5')
+ for be in B_eyes:
+ self.assertEqual(go.is_eyeish(
+ utils_test.BOARD_SIZE, board, be), BLACK, str(be))
+ for we in W_eyes:
+ self.assertEqual(go.is_eyeish(
+ utils_test.BOARD_SIZE, board, we), WHITE, str(we))
+ for ne in not_eyes:
+ self.assertEqual(go.is_eyeish(
+ utils_test.BOARD_SIZE, board, ne), None, str(ne))
+
+
+class TestLibertyTracker(utils_test.MiniGoUnitTest):
+
+ def test_lib_tracker_init(self):
+ board = utils_test.load_board('X........' + EMPTY_ROW * 8)
+
+ lib_tracker = LibertyTracker.from_board(utils_test.BOARD_SIZE, board)
+ self.assertEqual(len(lib_tracker.groups), 1)
+ self.assertNotEqual(
+ lib_tracker.group_index[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'A9')], go.MISSING_GROUP_ID)
+ self.assertEqual(lib_tracker.liberty_cache[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'A9')], 2)
+ sole_group = lib_tracker.groups[lib_tracker.group_index[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'A9')]]
+ self.assertEqual(sole_group.stones, coords_from_kgs_set('A9'))
+ self.assertEqual(sole_group.liberties, coords_from_kgs_set('B9 A8'))
+ self.assertEqual(sole_group.color, BLACK)
+
+ def test_place_stone(self):
+ board = utils_test.load_board('X........' + EMPTY_ROW * 8)
+ lib_tracker = LibertyTracker.from_board(utils_test.BOARD_SIZE, board)
+ lib_tracker.add_stone(BLACK, coords.from_kgs(utils_test.BOARD_SIZE, 'B9'))
+ self.assertEqual(len(lib_tracker.groups), 1)
+ self.assertNotEqual(
+ lib_tracker.group_index[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'A9')], go.MISSING_GROUP_ID)
+ self.assertEqual(lib_tracker.liberty_cache[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'A9')], 3)
+ self.assertEqual(lib_tracker.liberty_cache[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'B9')], 3)
+ sole_group = lib_tracker.groups[lib_tracker.group_index[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'A9')]]
+ self.assertEqual(sole_group.stones, coords_from_kgs_set('A9 B9'))
+ self.assertEqual(sole_group.liberties,
+ coords_from_kgs_set('C9 A8 B8'))
+ self.assertEqual(sole_group.color, BLACK)
+
+ def test_place_stone_opposite_color(self):
+ board = utils_test.load_board('X........' + EMPTY_ROW * 8)
+ lib_tracker = LibertyTracker.from_board(utils_test.BOARD_SIZE, board)
+ lib_tracker.add_stone(WHITE, coords.from_kgs(utils_test.BOARD_SIZE, 'B9'))
+ self.assertEqual(len(lib_tracker.groups), 2)
+ self.assertNotEqual(
+ lib_tracker.group_index[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'A9')], go.MISSING_GROUP_ID)
+ self.assertNotEqual(
+ lib_tracker.group_index[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'B9')], go.MISSING_GROUP_ID)
+ self.assertEqual(lib_tracker.liberty_cache[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'A9')], 1)
+ self.assertEqual(lib_tracker.liberty_cache[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'B9')], 2)
+ black_group = lib_tracker.groups[lib_tracker.group_index[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'A9')]]
+ white_group = lib_tracker.groups[lib_tracker.group_index[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'B9')]]
+ self.assertEqual(black_group.stones, coords_from_kgs_set('A9'))
+ self.assertEqual(black_group.liberties, coords_from_kgs_set('A8'))
+ self.assertEqual(black_group.color, BLACK)
+ self.assertEqual(white_group.stones, coords_from_kgs_set('B9'))
+ self.assertEqual(white_group.liberties, coords_from_kgs_set('C9 B8'))
+ self.assertEqual(white_group.color, WHITE)
+
+ def test_merge_multiple_groups(self):
+ board = utils_test.load_board('''
+ .X.......
+ X.X......
+ .X.......
+ ''' + EMPTY_ROW * 6)
+ lib_tracker = LibertyTracker.from_board(utils_test.BOARD_SIZE, board)
+ lib_tracker.add_stone(BLACK, coords.from_kgs(utils_test.BOARD_SIZE, 'B8'))
+ self.assertEqual(len(lib_tracker.groups), 1)
+ self.assertNotEqual(
+ lib_tracker.group_index[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'B8')], go.MISSING_GROUP_ID)
+ sole_group = lib_tracker.groups[lib_tracker.group_index[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'B8')]]
+ self.assertEqual(sole_group.stones,
+ coords_from_kgs_set('B9 A8 B8 C8 B7'))
+ self.assertEqual(sole_group.liberties,
+ coords_from_kgs_set('A9 C9 D8 A7 C7 B6'))
+ self.assertEqual(sole_group.color, BLACK)
+
+ liberty_cache = lib_tracker.liberty_cache
+ for stone in sole_group.stones:
+ self.assertEqual(liberty_cache[stone], 6, str(stone))
+
+ def test_capture_stone(self):
+ board = utils_test.load_board('''
+ .X.......
+ XO.......
+ .X.......
+ ''' + EMPTY_ROW * 6)
+ lib_tracker = LibertyTracker.from_board(utils_test.BOARD_SIZE, board)
+ captured = lib_tracker.add_stone(BLACK, coords.from_kgs(
+ utils_test.BOARD_SIZE, 'C8'))
+ self.assertEqual(len(lib_tracker.groups), 4)
+ self.assertEqual(
+ lib_tracker.group_index[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'B8')], go.MISSING_GROUP_ID)
+ self.assertEqual(captured, coords_from_kgs_set('B8'))
+
+ def test_capture_many(self):
+ board = utils_test.load_board('''
+ .XX......
+ XOO......
+ .XX......
+ ''' + EMPTY_ROW * 6)
+ lib_tracker = LibertyTracker.from_board(utils_test.BOARD_SIZE, board)
+ captured = lib_tracker.add_stone(BLACK, coords.from_kgs(
+ utils_test.BOARD_SIZE, 'D8'))
+ self.assertEqual(len(lib_tracker.groups), 4)
+ self.assertEqual(
+ lib_tracker.group_index[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'B8')], go.MISSING_GROUP_ID)
+ self.assertEqual(captured, coords_from_kgs_set('B8 C8'))
+
+ left_group = lib_tracker.groups[lib_tracker.group_index[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'A8')]]
+ self.assertEqual(left_group.stones, coords_from_kgs_set('A8'))
+ self.assertEqual(left_group.liberties,
+ coords_from_kgs_set('A9 B8 A7'))
+
+ right_group = lib_tracker.groups[lib_tracker.group_index[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'D8')]]
+ self.assertEqual(right_group.stones, coords_from_kgs_set('D8'))
+ self.assertEqual(right_group.liberties,
+ coords_from_kgs_set('D9 C8 E8 D7'))
+
+ top_group = lib_tracker.groups[lib_tracker.group_index[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'B9')]]
+ self.assertEqual(top_group.stones, coords_from_kgs_set('B9 C9'))
+ self.assertEqual(top_group.liberties,
+ coords_from_kgs_set('A9 D9 B8 C8'))
+
+ bottom_group = lib_tracker.groups[lib_tracker.group_index[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'B7')]]
+ self.assertEqual(bottom_group.stones, coords_from_kgs_set('B7 C7'))
+ self.assertEqual(bottom_group.liberties,
+ coords_from_kgs_set('B8 C8 A7 D7 B6 C6'))
+
+ liberty_cache = lib_tracker.liberty_cache
+ for stone in top_group.stones:
+ self.assertEqual(liberty_cache[stone], 4, str(stone))
+ for stone in left_group.stones:
+ self.assertEqual(liberty_cache[stone], 3, str(stone))
+ for stone in right_group.stones:
+ self.assertEqual(liberty_cache[stone], 4, str(stone))
+ for stone in bottom_group.stones:
+ self.assertEqual(liberty_cache[stone], 6, str(stone))
+ for stone in captured:
+ self.assertEqual(liberty_cache[stone], 0, str(stone))
+
+ def test_capture_multiple_groups(self):
+ board = utils_test.load_board('''
+ .OX......
+ OXX......
+ XX.......
+ ''' + EMPTY_ROW * 6)
+ lib_tracker = LibertyTracker.from_board(utils_test.BOARD_SIZE, board)
+ captured = lib_tracker.add_stone(BLACK, coords.from_kgs(
+ utils_test.BOARD_SIZE, 'A9'))
+ self.assertEqual(len(lib_tracker.groups), 2)
+ self.assertEqual(captured, coords_from_kgs_set('B9 A8'))
+
+ corner_stone = lib_tracker.groups[lib_tracker.group_index[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'A9')]]
+ self.assertEqual(corner_stone.stones, coords_from_kgs_set('A9'))
+ self.assertEqual(corner_stone.liberties, coords_from_kgs_set('B9 A8'))
+
+ surrounding_stones = lib_tracker.groups[lib_tracker.group_index[
+ coords.from_kgs(utils_test.BOARD_SIZE, 'C9')]]
+ self.assertEqual(surrounding_stones.stones,
+ coords_from_kgs_set('C9 B8 C8 A7 B7'))
+ self.assertEqual(surrounding_stones.liberties,
+ coords_from_kgs_set('B9 D9 A8 D8 C7 A6 B6'))
+
+ liberty_cache = lib_tracker.liberty_cache
+ for stone in corner_stone.stones:
+ self.assertEqual(liberty_cache[stone], 2, str(stone))
+ for stone in surrounding_stones.stones:
+ self.assertEqual(liberty_cache[stone], 7, str(stone))
+
+ def test_same_friendly_group_neighboring_twice(self):
+ board = utils_test.load_board('''
+ XX.......
+ X........
+ ''' + EMPTY_ROW * 7)
+
+ lib_tracker = LibertyTracker.from_board(utils_test.BOARD_SIZE, board)
+ captured = lib_tracker.add_stone(BLACK, coords.from_kgs(
+ utils_test.BOARD_SIZE, 'B8'))
+ self.assertEqual(len(lib_tracker.groups), 1)
+ sole_group_id = lib_tracker.group_index[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'A9')]
+ sole_group = lib_tracker.groups[sole_group_id]
+ self.assertEqual(sole_group.stones,
+ coords_from_kgs_set('A9 B9 A8 B8'))
+ self.assertEqual(sole_group.liberties,
+ coords_from_kgs_set('C9 C8 A7 B7'))
+ self.assertEqual(captured, set())
+
+ def test_same_opponent_group_neighboring_twice(self):
+ board = utils_test.load_board('''
+ XX.......
+ X........
+ ''' + EMPTY_ROW * 7)
+
+ lib_tracker = LibertyTracker.from_board(utils_test.BOARD_SIZE, board)
+ captured = lib_tracker.add_stone(WHITE, coords.from_kgs(
+ utils_test.BOARD_SIZE, 'B8'))
+ self.assertEqual(len(lib_tracker.groups), 2)
+ black_group = lib_tracker.groups[lib_tracker.group_index[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'A9')]]
+ self.assertEqual(black_group.stones, coords_from_kgs_set('A9 B9 A8'))
+ self.assertEqual(black_group.liberties, coords_from_kgs_set('C9 A7'))
+
+ white_group = lib_tracker.groups[lib_tracker.group_index[coords.from_kgs(
+ utils_test.BOARD_SIZE, 'B8')]]
+ self.assertEqual(white_group.stones, coords_from_kgs_set('B8'))
+ self.assertEqual(white_group.liberties, coords_from_kgs_set('C8 B7'))
+
+ self.assertEqual(captured, set())
+
+
+class TestPosition(utils_test.MiniGoUnitTest):
+
+ def test_passing(self):
+ start_position = Position(
+ utils_test.BOARD_SIZE,
+ board=TEST_BOARD,
+ n=0,
+ komi=6.5,
+ caps=(1, 2),
+ ko=coords.from_kgs(utils_test.BOARD_SIZE, 'A1'),
+ recent=tuple(),
+ to_play=BLACK,
+ )
+ expected_position = Position(
+ utils_test.BOARD_SIZE,
+ board=TEST_BOARD,
+ n=1,
+ komi=6.5,
+ caps=(1, 2),
+ ko=None,
+ recent=(PlayerMove(BLACK, None),),
+ to_play=WHITE,
+ )
+ pass_position = start_position.pass_move()
+ self.assertEqualPositions(pass_position, expected_position)
+
+ def test_flipturn(self):
+ start_position = Position(
+ utils_test.BOARD_SIZE,
+ board=TEST_BOARD,
+ n=0,
+ komi=6.5,
+ caps=(1, 2),
+ ko=coords.from_kgs(utils_test.BOARD_SIZE, 'A1'),
+ recent=tuple(),
+ to_play=BLACK,
+ )
+ expected_position = Position(
+ utils_test.BOARD_SIZE,
+ board=TEST_BOARD,
+ n=0,
+ komi=6.5,
+ caps=(1, 2),
+ ko=None,
+ recent=tuple(),
+ to_play=WHITE,
+ )
+ flip_position = start_position.flip_playerturn()
+ self.assertEqualPositions(flip_position, expected_position)
+
+ def test_is_move_suicidal(self):
+ board = utils_test.load_board('''
+ ...O.O...
+ ....O....
+ XO.....O.
+ OXO...OXO
+ O.XO.OX.O
+ OXO...OOX
+ XO.......
+ ......XXO
+ .....XOO.
+ ''')
+ position = Position(
+ utils_test.BOARD_SIZE,
+ board=board,
+ to_play=BLACK,
+ )
+ suicidal_moves = coords_from_kgs_set('E9 H5')
+ nonsuicidal_moves = coords_from_kgs_set('B5 J1 A9')
+ for move in suicidal_moves:
+ # sanity check my coordinate input
+ assert position.board[move] == go.EMPTY
+ self.assertTrue(position.is_move_suicidal(move), str(move))
+ for move in nonsuicidal_moves:
+ # sanity check my coordinate input
+ assert position.board[move] == go.EMPTY
+ self.assertFalse(position.is_move_suicidal(move), str(move))
+
+ def test_legal_moves(self):
+ board = utils_test.load_board('''
+ .O.O.XOX.
+ O..OOOOOX
+ ......O.O
+ OO.....OX
+ XO.....X.
+ .O.......
+ OX.....OO
+ XX...OOOX
+ .....O.X.
+ ''')
+ position = Position(utils_test.BOARD_SIZE, board=board, to_play=BLACK)
+ illegal_moves = coords_from_kgs_set('A9 E9 J9')
+ legal_moves = coords_from_kgs_set('A4 G1 J1 H7') | {None}
+ for move in illegal_moves:
+ with self.subTest(type='illegal', move=move):
+ self.assertFalse(position.is_move_legal(move))
+ for move in legal_moves:
+ with self.subTest(type='legal', move=move):
+ self.assertTrue(position.is_move_legal(move))
+ # check that the bulk legal test agrees with move-by-move illegal test.
+ bulk_legality = position.all_legal_moves()
+ for i, bulk_legal in enumerate(bulk_legality):
+ with self.subTest(type='bulk', move=coords.from_flat(
+ utils_test.BOARD_SIZE, i)):
+ self.assertEqual(
+ bulk_legal, position.is_move_legal(
+ coords.from_flat(utils_test.BOARD_SIZE, i)))
+
+ # flip the colors and check that everything is still (il)legal
+ position = Position(utils_test.BOARD_SIZE, board=-board, to_play=WHITE)
+ for move in illegal_moves:
+ with self.subTest(type='illegal', move=move):
+ self.assertFalse(position.is_move_legal(move))
+ for move in legal_moves:
+ with self.subTest(type='legal', move=move):
+ self.assertTrue(position.is_move_legal(move))
+ bulk_legality = position.all_legal_moves()
+ for i, bulk_legal in enumerate(bulk_legality):
+ with self.subTest(type='bulk', move=coords.from_flat(
+ utils_test.BOARD_SIZE, i)):
+ self.assertEqual(
+ bulk_legal, position.is_move_legal(coords.from_flat(
+ utils_test.BOARD_SIZE, i)))
+
+ def test_move(self):
+ start_position = Position(
+ utils_test.BOARD_SIZE,
+ board=TEST_BOARD,
+ n=0,
+ komi=6.5,
+ caps=(1, 2),
+ ko=None,
+ recent=tuple(),
+ to_play=BLACK,
+ )
+ expected_board = utils_test.load_board('''
+ .XX....OO
+ X........
+ ''' + EMPTY_ROW * 7)
+ expected_position = Position(
+ utils_test.BOARD_SIZE,
+ board=expected_board,
+ n=1,
+ komi=6.5,
+ caps=(1, 2),
+ ko=None,
+ recent=(PlayerMove(BLACK, coords.from_kgs(
+ utils_test.BOARD_SIZE, 'C9')),),
+ to_play=WHITE,
+ )
+ actual_position = start_position.play_move(coords.from_kgs(
+ utils_test.BOARD_SIZE, 'C9'))
+ self.assertEqualPositions(actual_position, expected_position)
+
+ expected_board2 = utils_test.load_board('''
+ .XX....OO
+ X.......O
+ ''' + EMPTY_ROW * 7)
+ expected_position2 = Position(
+ utils_test.BOARD_SIZE,
+ board=expected_board2,
+ n=2,
+ komi=6.5,
+ caps=(1, 2),
+ ko=None,
+ recent=(
+ PlayerMove(BLACK, coords.from_kgs(utils_test.BOARD_SIZE, 'C9')),
+ PlayerMove(WHITE, coords.from_kgs(utils_test.BOARD_SIZE, 'J8')),),
+ to_play=BLACK,
+ )
+ actual_position2 = actual_position.play_move(coords.from_kgs(
+ utils_test.BOARD_SIZE, 'J8'))
+ self.assertEqualPositions(actual_position2, expected_position2)
+
+ def test_move_with_capture(self):
+ start_board = utils_test.load_board(
+ EMPTY_ROW * 5 + '''
+ XXXX.....
+ XOOX.....
+ O.OX.....
+ OOXX.....
+ ''')
+ start_position = Position(
+ utils_test.BOARD_SIZE,
+ board=start_board,
+ n=0,
+ komi=6.5,
+ caps=(1, 2),
+ ko=None,
+ recent=tuple(),
+ to_play=BLACK,
+ )
+ expected_board = utils_test.load_board(
+ EMPTY_ROW * 5 + '''
+ XXXX.....
+ X..X.....
+ .X.X.....
+ ..XX.....
+ ''')
+ expected_position = Position(
+ utils_test.BOARD_SIZE,
+ board=expected_board,
+ n=1,
+ komi=6.5,
+ caps=(7, 2),
+ ko=None,
+ recent=(PlayerMove(BLACK, coords.from_kgs(
+ utils_test.BOARD_SIZE, 'B2')),),
+ to_play=WHITE,)
+ actual_position = start_position.play_move(coords.from_kgs(
+ utils_test.BOARD_SIZE, 'B2'))
+ self.assertEqualPositions(actual_position, expected_position)
+
+ def test_ko_move(self):
+ start_board = utils_test.load_board('''
+ .OX......
+ OX.......
+ ''' + EMPTY_ROW * 7)
+ start_position = Position(
+ utils_test.BOARD_SIZE,
+ board=start_board,
+ n=0,
+ komi=6.5,
+ caps=(1, 2),
+ ko=None,
+ recent=tuple(),
+ to_play=BLACK,
+ )
+ expected_board = utils_test.load_board('''
+ X.X......
+ OX.......
+ ''' + EMPTY_ROW * 7)
+ expected_position = Position(
+ utils_test.BOARD_SIZE,
+ board=expected_board,
+ n=1,
+ komi=6.5,
+ caps=(2, 2),
+ ko=coords.from_kgs(utils_test.BOARD_SIZE, 'B9'),
+ recent=(PlayerMove(BLACK, coords.from_kgs(
+ utils_test.BOARD_SIZE, 'A9')),),
+ to_play=WHITE,
+ )
+ actual_position = start_position.play_move(coords.from_kgs(
+ utils_test.BOARD_SIZE, 'A9'))
+
+ self.assertEqualPositions(actual_position, expected_position)
+
+ # Check that retaking ko is illegal until two intervening moves
+ with self.assertRaises(go.IllegalMove):
+ actual_position.play_move(coords.from_kgs(utils_test.BOARD_SIZE, 'B9'))
+ pass_twice = actual_position.pass_move().pass_move()
+ ko_delayed_retake = pass_twice.play_move(coords.from_kgs(
+ utils_test.BOARD_SIZE, 'B9'))
+ expected_position = Position(
+ utils_test.BOARD_SIZE,
+ board=start_board,
+ n=4,
+ komi=6.5,
+ caps=(2, 3),
+ ko=coords.from_kgs(utils_test.BOARD_SIZE, 'A9'),
+ recent=(
+ PlayerMove(BLACK, coords.from_kgs(utils_test.BOARD_SIZE, 'A9')),
+ PlayerMove(WHITE, None),
+ PlayerMove(BLACK, None),
+ PlayerMove(WHITE, coords.from_kgs(utils_test.BOARD_SIZE, 'B9')),),
+ to_play=BLACK)
+ self.assertEqualPositions(ko_delayed_retake, expected_position)
+
+ def test_is_game_over(self):
+ root = go.Position(utils_test.BOARD_SIZE)
+ self.assertFalse(root.is_game_over())
+ first_pass = root.play_move(None)
+ self.assertFalse(first_pass.is_game_over())
+ second_pass = first_pass.play_move(None)
+ self.assertTrue(second_pass.is_game_over())
+
+ def test_scoring(self):
+ board = utils_test.load_board('''
+ .XX......
+ OOXX.....
+ OOOX...X.
+ OXX......
+ OOXXXXXX.
+ OOOXOXOXX
+ .O.OOXOOX
+ .O.O.OOXX
+ ......OOO
+ ''')
+ position = Position(
+ utils_test.BOARD_SIZE,
+ board=board,
+ n=54,
+ komi=6.5,
+ caps=(2, 5),
+ ko=None,
+ recent=tuple(),
+ to_play=BLACK,
+ )
+ expected_score = 1.5
+ self.assertEqual(position.score(), expected_score)
+
+ board = utils_test.load_board('''
+ XXX......
+ OOXX.....
+ OOOX...X.
+ OXX......
+ OOXXXXXX.
+ OOOXOXOXX
+ .O.OOXOOX
+ .O.O.OOXX
+ ......OOO
+ ''')
+ position = Position(
+ utils_test.BOARD_SIZE,
+ board=board,
+ n=55,
+ komi=6.5,
+ caps=(2, 5),
+ ko=None,
+ recent=tuple(),
+ to_play=WHITE,
+ )
+ expected_score = 2.5
+ self.assertEqual(position.score(), expected_score)
+
+ def test_replay_position(self):
+ sgf_positions = list(sgf_wrapper.replay_sgf(
+ utils_test.BOARD_SIZE, NO_HANDICAP_SGF))
+ initial = sgf_positions[0]
+ self.assertEqual(initial.result, go.WHITE)
+
+ final = sgf_positions[-1].position.play_move(
+ sgf_positions[-1].next_move)
+
+ # sanity check to ensure we're working with the right position
+ final_board = utils_test.load_board('''
+ .OXX.....
+ O.OX.X...
+ .OOX.....
+ OOOOXXXXX
+ XOXXOXOOO
+ XOOXOO.O.
+ XOXXXOOXO
+ XXX.XOXXO
+ X..XOO.O.
+ ''')
+ expected_final_position = go.Position(
+ utils_test.BOARD_SIZE,
+ final_board,
+ n=62,
+ komi=6.5,
+ caps=(3, 2),
+ ko=None,
+ recent=tuple(),
+ to_play=go.BLACK
+ )
+ self.assertEqualPositions(expected_final_position, final)
+ self.assertEqual(final.n, len(final.recent))
+
+ replayed_positions = list(go.replay_position(
+ utils_test.BOARD_SIZE, final, 1))
+ for sgf_pos, replay_pos in zip(sgf_positions, replayed_positions):
+ self.assertEqualPositions(sgf_pos.position, replay_pos.position)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/gtp_extensions.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/gtp_extensions.py"
new file mode 100644
index 00000000..710a6bc2
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/gtp_extensions.py"
@@ -0,0 +1,172 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the 'License');
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an 'AS IS' BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Extends gtp.py."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import itertools
+import sys
+
+import coords
+import go
+import gtp
+import sgf_wrapper
+
+
+def parse_message(message):
+ message = gtp.pre_engine(message).strip()
+ first, rest = (message.split(' ', 1) + [None])[:2]
+ if first.isdigit():
+ message_id = int(first)
+ if rest is not None:
+ command, arguments = (rest.split(' ', 1) + [None])[:2]
+ else:
+ command, arguments = None, None
+ else:
+ message_id = None
+ command, arguments = first, rest
+
+ command = command.replace('-', '_') # for kgs extensions.
+ return message_id, command, arguments
+
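+# For example, assuming gtp.pre_engine returns a plain command unchanged,
+# parse_message('10 genmove black') returns (10, 'genmove', 'black') and
+# parse_message('boardsize 9') returns (None, 'boardsize', '9').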
+
+class KgsExtensionsMixin(gtp.Engine):
+
+ def __init__(self, game_obj, name='gtp (python, kgs-chat extensions)',
+ version='0.1'):
+ super().__init__(game_obj=game_obj, name=name, version=version)
+ self.known_commands += ['kgs-chat']
+
+ def send(self, message):
+ message_id, command, arguments = parse_message(message)
+ if command in self.known_commands:
+ try:
+ retval = getattr(self, 'cmd_' + command)(arguments)
+ response = gtp.format_success(message_id, retval)
+ sys.stderr.flush()
+ return response
+ except ValueError as exception:
+ return gtp.format_error(message_id, exception.args[0])
+ else:
+ return gtp.format_error(message_id, 'unknown command: ' + command)
+
+ # Nice to implement this, as KGS sends it each move.
+ def cmd_time_left(self, arguments):
+ pass
+
+ def cmd_showboard(self, arguments):
+ return self._game.showboard()
+
+ def cmd_kgs_chat(self, arguments):
+ try:
+ arg_list = arguments.split()
+ msg_type, sender, text = arg_list[0], arg_list[1], arg_list[2:]
+ text = ' '.join(text)
+ except ValueError:
+ return 'Unparseable message, args: %r' % arguments
+ return self._game.chat(msg_type, sender, text)
+
+
+class RegressionsMixin(gtp.Engine):
+
+ def cmd_loadsgf(self, arguments):
+ args = arguments.split()
+ if len(args) == 2:
+ file_, movenum = args
+ movenum = int(movenum)
+ print('movenum =', movenum, file=sys.stderr)
+ else:
+ file_ = args[0]
+ movenum = None
+
+ try:
+ with open(file_, 'r') as f:
+ contents = f.read()
+ except:
+ raise ValueError('Unreadable file: ' + file_)
+
+ try:
+ # This is kinda bad, because replay_sgf is already calling
+ # 'play move' on its internal position objects, but we really
+ # want to advance the engine along with us rather than try to
+ # push in some finished Position object.
+      for idx, p in enumerate(
+          sgf_wrapper.replay_sgf(self._game.board_size, contents)):
+ print('playing #', idx, p.next_move, file=sys.stderr)
+ self._game.play_move(p.next_move)
+ if movenum and idx == movenum:
+ break
+ except:
+ raise
+
+
+class GoGuiMixin(gtp.Engine):
+ """GTP extensions of 'analysis commands' for gogui.
+
+  We reach into the game_obj (an instance of the players in strategies.py)
+  and extract information from its root node, etc. These could be extracted
+  into methods on the Player object, but it's a little weird to do that on a
+  Player, which doesn't really care about GTP commands. So instead, we just
+  violate encapsulation a bit.
+ """
+
+ def __init__(self, game_obj, name='gtp (python, gogui extensions)',
+ version='0.1'):
+ super().__init__(game_obj=game_obj, name=name, version=version)
+ self.known_commands += ['gogui-analyze_commands']
+
+ def cmd_gogui_analyze_commands(self, arguments):
+ return '\n'.join(['var/Most Read Variation/nextplay',
+ 'var/Think a spell/spin',
+ 'pspairs/Visit Heatmap/visit_heatmap',
+ 'pspairs/Q Heatmap/q_heatmap'])
+
+ def cmd_nextplay(self, arguments):
+ return self._game.root.mvp_gg()
+
+ def cmd_visit_heatmap(self, arguments):
+ sort_order = list(range(self._game.size * self._game.size + 1))
+ sort_order.sort(key=lambda i: self._game.root.child_N[i], reverse=True)
+ return self.heatmap(sort_order, self._game.root, 'child_N')
+
+ def cmd_q_heatmap(self, arguments):
+ sort_order = list(range(self._game.size * self._game.size + 1))
+ reverse = True if self._game.root.position.to_play is go.BLACK else False
+ sort_order.sort(
+ key=lambda i: self._game.root.child_Q[i], reverse=reverse)
+ return self.heatmap(sort_order, self._game.root, 'child_Q')
+
+  def heatmap(self, sort_order, node, prop):
+    size = self._game.board_size
+    return '\n'.join(['{!s:6} {}'.format(
+        coords.to_kgs(size, coords.from_flat(size, key)),
+        node.__dict__.get(prop)[key])
+        for key in sort_order if node.child_N[key] > 0][:20])
+
+ def cmd_spin(self, arguments):
+ for _ in range(50):
+ for _ in range(100):
+ self._game.tree_search()
+ moves = self.cmd_nextplay(None).lower()
+ moves = moves.split()
+ colors = 'bw' if self._game.root.position.to_play is go.BLACK else 'wb'
+ moves_cols = ' '.join(['{} {}'.format(*z)
+ for z in zip(itertools.cycle(colors), moves)])
+ print('gogui-gfx: TEXT', '{:.3f} after {}'.format(
+ self._game.root.Q, self._game.root.N), file=sys.stderr, flush=True)
+ print('gogui-gfx: VAR', moves_cols, file=sys.stderr, flush=True)
+ return self.cmd_nextplay(None)
+
+
+class GTPDeluxe(KgsExtensionsMixin, RegressionsMixin, GoGuiMixin):
+ pass
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/gtp_wrapper.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/gtp_wrapper.py"
new file mode 100644
index 00000000..e1921469
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/gtp_wrapper.py"
@@ -0,0 +1,140 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""A wrapper of gtp and gtp_extensions."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import datetime
+import os
+import sys
+
+import coords
+from dualnet import DualNetRunner
+import go
+import gtp
+import gtp_extensions
+from strategies import MCTSPlayerMixin, CGOSPlayerMixin
+
+
+def translate_gtp_colors(gtp_color):
+ if gtp_color == gtp.BLACK:
+ return go.BLACK
+ elif gtp_color == gtp.WHITE:
+ return go.WHITE
+ else:
+ return go.EMPTY
+
+
+class GtpInterface(object):
+
+ def __init__(self, board_size):
+    self.size = board_size
+ self.position = None
+ self.komi = 6.5
+ self.board_size = board_size
+
+ def set_size(self, n):
+ if n != self.board_size:
+ raise ValueError((
+ "Can't handle boardsize {}! Please check the board size.").format(n))
+
+ def set_komi(self, komi):
+ self.komi = komi
+ self.position.komi = komi
+
+ def clear(self):
+ if self.position and len(self.position.recent) > 1:
+ try:
+ sgf = self.to_sgf()
+ with open(datetime.datetime.now().strftime(
+ '%Y-%m-%d-%H:%M.sgf'), 'w') as f:
+ f.write(sgf)
+ except NotImplementedError:
+ pass
+ except:
+ print('Error saving sgf', file=sys.stderr, flush=True)
+    self.position = go.Position(self.board_size, komi=self.komi)
+ self.initialize_game(self.position)
+
+ def accomodate_out_of_turn(self, color):
+ if translate_gtp_colors(color) != self.position.to_play:
+ self.position.flip_playerturn(mutate=True)
+
+ def make_move(self, color, vertex):
+ c = coords.from_pygtp(self.board_size, vertex)
+ # let's assume this never happens for now.
+ # self.accomodate_out_of_turn(color)
+ return self.play_move(c)
+
+ def get_move(self, color):
+ self.accomodate_out_of_turn(color)
+ move = self.suggest_move(self.position)
+ if self.should_resign():
+ return gtp.RESIGN
+ return coords.to_pygtp(self.board_size, move)
+
+ def final_score(self):
+ return self.position.result_string()
+
+ def showboard(self):
+ print('\n\n' + str(self.position) + '\n\n', file=sys.stderr)
+ return True
+
+ def should_resign(self):
+ raise NotImplementedError
+
+ def get_score(self):
+ return self.position.result_string()
+
+ def suggest_move(self, position):
+ raise NotImplementedError
+
+ def play_move(self, c):
+ raise NotImplementedError
+
+  def initialize_game(self, position):
+ raise NotImplementedError
+
+ def chat(self, msg_type, sender, text):
+ raise NotImplementedError
+
+ def to_sgf(self):
+ raise NotImplementedError
+
+
+class MCTSPlayer(MCTSPlayerMixin, GtpInterface):
+ pass
+
+
+class CGOSPlayer(CGOSPlayerMixin, GtpInterface):
+ pass
+
+
+def make_gtp_instance(board_size, read_file, readouts_per_move=100,
+ verbosity=1, cgos_mode=False):
+ n = DualNetRunner(read_file)
+ if cgos_mode:
+ instance = CGOSPlayer(board_size, n, seconds_per_move=5,
+ verbosity=verbosity, two_player_mode=True)
+ else:
+ instance = MCTSPlayer(board_size, n, simulations_per_move=readouts_per_move,
+ verbosity=verbosity, two_player_mode=True)
+ name = 'Somebot-' + os.path.basename(read_file)
+ gtp_engine = gtp_extensions.GTPDeluxe(instance, name=name)
+ return gtp_engine
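+
+# Example driver loop (illustrative sketch, assuming the vendored gtp.Engine
+# wrapper exposes a send() method that takes one GTP command string per call):
+#   engine = make_gtp_instance(9, '/path/to/saved-model')
+#   while True:
+#     reply = engine.send(input())
+#     sys.stdout.write(reply)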
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/mcts.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/mcts.py"
new file mode 100644
index 00000000..a3711d44
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/mcts.py"
@@ -0,0 +1,318 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Monte Carlo Tree Search implementation.
+
+All terminology here (Q, U, N, p_UCT) uses the same notation as in the
+AlphaGo (AG) paper, and more details can be found in the paper. Here is a brief
+description:
+ Q: the action value of a position
+ U: the search control strategy
+ N: the visit counts of a state
+ p_UCT: the PUCT algorithm for action selection
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import collections
+import math
+
+import coords
+import numpy as np
+
+# Exploration constant
+c_PUCT = 1.38 # pylint: disable=invalid-name
+
+
+# Dirichlet noise, as a function of board_size
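+# (For a 19x19 board this gives the AlphaGo Zero value of 0.03; for 9x9 it
+# gives 0.03 * 361 / 81, roughly 0.13.)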
+def D_NOISE_ALPHA(board_size): # pylint: disable=invalid-name
+ return 0.03 * 361 / (board_size ** 2)
+
+
+class DummyNode(object):
+ """A fake node of a MCTS search tree.
+
+ This node is intended to be a placeholder for the root node, which would
+ otherwise have no parent node. If all nodes have parents, code becomes
+ simpler.
+ """
+
+ # pylint: disable=invalid-name
+ def __init__(self, board_size):
+ self.board_size = board_size
+ self.parent = None
+ self.child_N = collections.defaultdict(float)
+ self.child_W = collections.defaultdict(float)
+
+
+class MCTSNode(object):
+ """A node of a MCTS search tree.
+
+ A node knows how to compute the action scores of all of its children,
+ so that a decision can be made about which move to explore next. Upon
+ selecting a move, the children dictionary is updated with a new node.
+
+ position: A go.Position instance
+  fmove: A move (coordinate) that led to this position, as a flattened coord
+      (a raw number between 0 and N^2, with None meaning a pass)
+ parent: A parent MCTSNode.
+ """
+ # pylint: disable=invalid-name
+
+ def __init__(self, board_size, position, fmove=None, parent=None):
+ if parent is None:
+ parent = DummyNode(board_size)
+ self.board_size = board_size
+ self.parent = parent
+ self.fmove = fmove # move that led to this position, as flattened coords
+ self.position = position
+ self.is_expanded = False
+ self.losses_applied = 0 # number of virtual losses on this node
+ # using child_() allows vectorized computation of action score.
+ self.illegal_moves = 1000 * (1 - self.position.all_legal_moves())
+ self.child_N = np.zeros([board_size * board_size + 1], dtype=np.float32)
+ self.child_W = np.zeros([board_size * board_size + 1], dtype=np.float32)
+ # save a copy of the original prior before it gets mutated by d-noise.
+ self.original_prior = np.zeros([board_size * board_size + 1],
+ dtype=np.float32)
+ self.child_prior = np.zeros([board_size * board_size + 1], dtype=np.float32)
+ self.children = {} # map of flattened moves to resulting MCTSNode
+
+ def __repr__(self):
+    return '<MCTSNode move={}, N={}, to_play={}>'.format(
+        self.position.recent[-1:], self.N, self.position.to_play)
+
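+  # PUCT-style scoring, vectorized over all children:
+  #   action_score = child_Q * to_play + child_U - illegal_move_penalty
+  #   child_U = c_PUCT * sqrt(1 + N_parent) * child_prior / (1 + child_N)
+  # Multiplying Q by to_play keeps the score from the perspective of the
+  # player about to move, and illegal moves receive a large negative penalty.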
+ @property
+ def child_action_score(self):
+ return (self.child_Q * self.position.to_play
+ + self.child_U - self.illegal_moves)
+
+ @property
+ def child_Q(self):
+ return self.child_W / (1 + self.child_N)
+
+ @property
+ def child_U(self):
+ return (c_PUCT * math.sqrt(1 + self.N) *
+ self.child_prior / (1 + self.child_N))
+
+ @property
+ def Q(self):
+ return self.W / (1 + self.N)
+
+ @property
+ def N(self):
+ return self.parent.child_N[self.fmove]
+
+ @N.setter
+ def N(self, value):
+ self.parent.child_N[self.fmove] = value
+
+ @property
+ def W(self):
+ return self.parent.child_W[self.fmove]
+
+ @W.setter
+ def W(self, value):
+ self.parent.child_W[self.fmove] = value
+
+ @property
+ def Q_perspective(self):
+ """Return value of position, from perspective of player to play."""
+ return self.Q * self.position.to_play
+
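+  # Descend from this node by repeatedly taking the child with the highest
+  # action score, incrementing visit counts along the way, until reaching a
+  # node that has not yet been expanded by the network.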
+ def select_leaf(self):
+ current = self
+ pass_move = self.board_size * self.board_size
+ while True:
+ current.N += 1
+ # if a node has never been evaluated, we have no basis to select a child.
+ if not current.is_expanded:
+ break
+ # HACK: if last move was a pass, always investigate double-pass first
+ # to avoid situations where we auto-lose by passing too early.
+ if (current.position.recent
+ and current.position.recent[-1].move is None
+ and current.child_N[pass_move] == 0):
+ current = current.maybe_add_child(pass_move)
+ continue
+
+ best_move = np.argmax(current.child_action_score)
+ current = current.maybe_add_child(best_move)
+ return current
+
+ def maybe_add_child(self, fcoord):
+ """Add child node for fcoord if it doesn't already exist, and returns it."""
+ if fcoord not in self.children:
+ new_position = self.position.play_move(
+ coords.from_flat(self.board_size, fcoord))
+ self.children[fcoord] = MCTSNode(
+ self.board_size, new_position, fmove=fcoord, parent=self)
+ return self.children[fcoord]
+
+ def add_virtual_loss(self, up_to):
+ """Propagate a virtual loss up to the root node.
+
+ Args:
+ up_to: The node to propagate until. (Keep track of this! You'll
+ need it to reverse the virtual loss later.)
+ """
+ self.losses_applied += 1
+ # This is a "win" for the current node; hence a loss for its parent node
+ # who will be deciding whether to investigate this node again.
+ loss = self.position.to_play
+ self.W += loss
+ if self.parent is None or self is up_to:
+ return
+ self.parent.add_virtual_loss(up_to)
+
+ def revert_virtual_loss(self, up_to):
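+    """Undo a virtual loss previously applied by add_virtual_loss.
+
+    Args:
+      up_to: The node to propagate until (the same node that was passed to
+        add_virtual_loss).
+    """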
+ self.losses_applied -= 1
+ revert = -self.position.to_play
+ self.W += revert
+ if self.parent is None or self is up_to:
+ return
+ self.parent.revert_virtual_loss(up_to)
+
+ def revert_visits(self, up_to):
+ """Revert visit increments."""
+ # Sometimes, repeated calls to select_leaf return the same node.
+ # This is rare and we're okay with the wasted computation to evaluate
+ # the position multiple times by the dual_net. But select_leaf has the
+ # side effect of incrementing visit counts. Since we want the value to
+ # only count once for the repeatedly selected node, we also have to
+ # revert the incremented visit counts.
+
+ self.N -= 1
+ if self.parent is None or self is up_to:
+ return
+ self.parent.revert_visits(up_to)
+
+ def incorporate_results(self, move_probabilities, value, up_to):
+ assert move_probabilities.shape == (self.board_size * self.board_size + 1,)
+ # A finished game should not be going through this code path - should
+ # directly call backup_value() on the result of the game.
+ assert not self.position.is_game_over()
+ if self.is_expanded:
+ self.revert_visits(up_to=up_to)
+ return
+ self.is_expanded = True
+ self.original_prior = self.child_prior = move_probabilities
+ # initialize child Q as current node's value, to prevent dynamics where
+ # if B is winning, then B will only ever explore 1 move, because the Q
+ # estimation will be so much larger than the 0 of the other moves.
+ #
+ # Conversely, if W is winning, then B will explore all 362 moves before
+ # continuing to explore the most favorable move. This is a waste of search.
+ #
+ # The value seeded here acts as a prior, and gets averaged into
+ # Q calculations.
+ self.child_W = np.ones([self.board_size * self.board_size + 1],
+ dtype=np.float32) * value
+ self.backup_value(value, up_to=up_to)
+
+ def backup_value(self, value, up_to):
+ """Propagates a value estimation up to the root node.
+
+ Args:
+ value: the value to be propagated (1 = black wins, -1 = white wins)
+ up_to: the node to propagate until.
+ """
+ self.W += value
+ if self.parent is None or self is up_to:
+ return
+ self.parent.backup_value(value, up_to)
+
+ def is_done(self):
+ # True if the last two moves were Pass or if the position is at a move
+ # greater than the max depth.
+
+ max_depth = (self.board_size ** 2) * 1.4 # 505 moves for 19x19, 113 for 9x9
+ return self.position.is_game_over() or self.position.n >= max_depth
+
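+  # Mix the child priors with Dirichlet noise at the root: 75% prior, 25%
+  # noise, the same exploration scheme used in the AlphaGo Zero paper.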
+ def inject_noise(self):
+ dirch = np.random.dirichlet([D_NOISE_ALPHA(self.board_size)] * (
+ (self.board_size * self.board_size) + 1))
+ self.child_prior = self.child_prior * 0.75 + dirch * 0.25
+
+ def children_as_pi(self, squash=False):
+ """Returns the child visit counts as a probability distribution, pi."""
+ # If squash is true, exponentiate the probabilities by a temperature
+ # slightly larger than unity to encourage diversity in early play and
+ # hopefully to move away from 3-3s
+
+ probs = self.child_N
+ if squash:
+ probs **= .95
+ return probs / np.sum(probs)
+
+ def most_visited_path(self):
+ node = self
+ output = []
+ while node.children:
+ next_kid = np.argmax(node.child_N)
+ node = node.children.get(next_kid)
+ if node is None:
+ output.append('GAME END')
+ break
+ output.append('{} ({}) ==> '.format(
+ coords.to_kgs(
+ self.board_size,
+ coords.from_flat(self.board_size, node.fmove)), node.N))
+ output.append('Q: {:.5f}\n'.format(node.Q))
+ return ''.join(output)
+
+ def mvp_gg(self):
+ """ Returns most visited path in go-gui VAR format e.g. 'b r3 w c17..."""
+ node = self
+ output = []
+ while node.children and max(node.child_N) > 1:
+ next_kid = np.argmax(node.child_N)
+ node = node.children[next_kid]
+ output.append('{}'.format(coords.to_kgs(
+ self.board_size, coords.from_flat(self.board_size, node.fmove))))
+ return ' '.join(output)
+
+ def describe(self):
+ sort_order = list(range(self.board_size * self.board_size + 1))
+ sort_order.sort(key=lambda i: (
+ self.child_N[i], self.child_action_score[i]), reverse=True)
+ soft_n = self.child_N / sum(self.child_N)
+ p_delta = soft_n - self.child_prior
+ p_rel = p_delta / self.child_prior
+ # Dump out some statistics
+ output = []
+ output.append('{q:.4f}\n'.format(q=self.Q))
+ output.append(self.most_visited_path())
+ output.append(
+ '''move: action Q U P P-Dir N soft-N
+ p-delta p-rel\n''')
+ output.append(
+ '\n'.join([
+ '''{!s:6}: {: .3f}, {: .3f}, {:.3f}, {:.3f}, {:.3f}, {:4d} {:.4f}
+ {: .5f} {: .2f}'''.format(
+ coords.to_kgs(self.board_size, coords.from_flat(
+ self.board_size, key)),
+ self.child_action_score[key],
+ self.child_Q[key],
+ self.child_U[key],
+ self.child_prior[key],
+ self.original_prior[key],
+ int(self.child_N[key]),
+ soft_n[key],
+ p_delta[key],
+ p_rel[key])
+ for key in sort_order][:15]))
+ return ''.join(output)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/mcts_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/mcts_test.py"
new file mode 100644
index 00000000..9cbfe562
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/mcts_test.py"
@@ -0,0 +1,222 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for mcts."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import copy
+
+import tensorflow as tf # pylint: disable=g-bad-import-order
+
+import coords
+import go
+from mcts import MCTSNode
+import numpy as np
+import utils_test
+
+tf.logging.set_verbosity(tf.logging.ERROR)
+
+ALMOST_DONE_BOARD = utils_test.load_board('''
+ .XO.XO.OO
+ X.XXOOOO.
+ XXXXXOOOO
+ XXXXXOOOO
+ .XXXXOOO.
+ XXXXXOOOO
+ .XXXXOOO.
+ XXXXXOOOO
+ XXXXOOOOO
+''')
+
+TEST_POSITION = go.Position(
+ utils_test.BOARD_SIZE,
+ board=ALMOST_DONE_BOARD,
+ n=105,
+ komi=2.5,
+ caps=(1, 4),
+ ko=None,
+ recent=(go.PlayerMove(go.BLACK, (0, 1)),
+ go.PlayerMove(go.WHITE, (0, 8))),
+ to_play=go.BLACK
+)
+
+SEND_TWO_RETURN_ONE = go.Position(
+ utils_test.BOARD_SIZE,
+ board=ALMOST_DONE_BOARD,
+ n=75,
+ komi=0.5,
+ caps=(0, 0),
+ ko=None,
+ recent=(
+ go.PlayerMove(go.BLACK, (0, 1)),
+ go.PlayerMove(go.WHITE, (0, 8)),
+ go.PlayerMove(go.BLACK, (1, 0))),
+ to_play=go.WHITE
+)
+
+MAX_DEPTH = (utils_test.BOARD_SIZE ** 2) * 1.4
+
+
+class TestMctsNodes(utils_test.MiniGoUnitTest):
+
+ def test_action_flipping(self):
+ np.random.seed(1)
+ probs = np.array([.02] * (
+ utils_test.BOARD_SIZE * utils_test.BOARD_SIZE + 1))
+ probs += np.random.random(
+ [utils_test.BOARD_SIZE * utils_test.BOARD_SIZE + 1]) * 0.001
+ black_root = MCTSNode(
+ utils_test.BOARD_SIZE, go.Position(utils_test.BOARD_SIZE))
+ white_root = MCTSNode(utils_test.BOARD_SIZE, go.Position(
+ utils_test.BOARD_SIZE, to_play=go.WHITE))
+ black_root.select_leaf().incorporate_results(probs, 0, black_root)
+ white_root.select_leaf().incorporate_results(probs, 0, white_root)
+ # No matter who is to play, when we know nothing else, the priors
+ # should be respected, and the same move should be picked
+ black_leaf = black_root.select_leaf()
+ white_leaf = white_root.select_leaf()
+ self.assertEqual(black_leaf.fmove, white_leaf.fmove)
+ self.assertEqualNPArray(
+ black_root.child_action_score, white_root.child_action_score)
+
+ def test_select_leaf(self):
+ flattened = coords.to_flat(utils_test.BOARD_SIZE, coords.from_kgs(
+ utils_test.BOARD_SIZE, 'D9'))
+ probs = np.array([.02] * (
+ utils_test.BOARD_SIZE * utils_test.BOARD_SIZE + 1))
+ probs[flattened] = 0.4
+ root = MCTSNode(utils_test.BOARD_SIZE, SEND_TWO_RETURN_ONE)
+ root.select_leaf().incorporate_results(probs, 0, root)
+
+ self.assertEqual(root.position.to_play, go.WHITE)
+ self.assertEqual(root.select_leaf(), root.children[flattened])
+
+ def test_backup_incorporate_results(self):
+ probs = np.array([.02] * (
+ utils_test.BOARD_SIZE * utils_test.BOARD_SIZE + 1))
+ root = MCTSNode(utils_test.BOARD_SIZE, SEND_TWO_RETURN_ONE)
+ root.select_leaf().incorporate_results(probs, 0, root)
+
+ leaf = root.select_leaf()
+ leaf.incorporate_results(probs, -1, root) # white wins!
+
+ # Root was visited twice: first at the root, then at this child.
+ self.assertEqual(root.N, 2)
+ # Root has 0 as a prior and two visits with value 0, -1
+ self.assertAlmostEqual(root.Q, -1/3) # average of 0, 0, -1
+ # Leaf should have one visit
+ self.assertEqual(root.child_N[leaf.fmove], 1)
+ self.assertEqual(leaf.N, 1)
+ # And that leaf's value had its parent's Q (0) as a prior, so the Q
+ # should now be the average of 0, -1
+ self.assertAlmostEqual(root.child_Q[leaf.fmove], -0.5)
+ self.assertAlmostEqual(leaf.Q, -0.5)
+
+ # We're assuming that select_leaf() returns a leaf like:
+ # root
+ # \
+ # leaf
+ # \
+ # leaf2
+ # which happens in this test because root is W to play and leaf was a W win.
+ self.assertEqual(root.position.to_play, go.WHITE)
+ leaf2 = root.select_leaf()
+ leaf2.incorporate_results(probs, -0.2, root) # another white semi-win
+ self.assertEqual(root.N, 3)
+ # average of 0, 0, -1, -0.2
+ self.assertAlmostEqual(root.Q, -0.3)
+
+ self.assertEqual(leaf.N, 2)
+ self.assertEqual(leaf2.N, 1)
+ # average of 0, -1, -0.2
+ self.assertAlmostEqual(leaf.Q, root.child_Q[leaf.fmove])
+ self.assertAlmostEqual(leaf.Q, -0.4)
+ # average of -1, -0.2
+ self.assertAlmostEqual(leaf.child_Q[leaf2.fmove], -0.6)
+ self.assertAlmostEqual(leaf2.Q, -0.6)
+
+ def test_do_not_explore_past_finish(self):
+ probs = np.array([0.02] * (
+ utils_test.BOARD_SIZE * utils_test.BOARD_SIZE + 1), dtype=np.float32)
+ root = MCTSNode(utils_test.BOARD_SIZE, go.Position(utils_test.BOARD_SIZE))
+ root.select_leaf().incorporate_results(probs, 0, root)
+ first_pass = root.maybe_add_child(
+ coords.to_flat(utils_test.BOARD_SIZE, None))
+ first_pass.incorporate_results(probs, 0, root)
+ second_pass = first_pass.maybe_add_child(
+ coords.to_flat(utils_test.BOARD_SIZE, None))
+ with self.assertRaises(AssertionError):
+ second_pass.incorporate_results(probs, 0, root)
+ node_to_explore = second_pass.select_leaf()
+ # should just stop exploring at the end position.
+ self.assertEqual(node_to_explore, second_pass)
+
+ def test_add_child(self):
+ root = MCTSNode(utils_test.BOARD_SIZE, go.Position(utils_test.BOARD_SIZE))
+ child = root.maybe_add_child(17)
+ self.assertIn(17, root.children)
+ self.assertEqual(child.parent, root)
+ self.assertEqual(child.fmove, 17)
+
+ def test_add_child_idempotency(self):
+ root = MCTSNode(utils_test.BOARD_SIZE, go.Position(utils_test.BOARD_SIZE))
+ child = root.maybe_add_child(17)
+ current_children = copy.copy(root.children)
+ child2 = root.maybe_add_child(17)
+ self.assertEqual(child, child2)
+ self.assertEqual(current_children, root.children)
+
+ def test_never_select_illegal_moves(self):
+ probs = np.array([0.02] * (
+ utils_test.BOARD_SIZE * utils_test.BOARD_SIZE + 1))
+ # let's say the NN were to accidentally put a high weight on an illegal move
+ probs[1] = 0.99
+ root = MCTSNode(utils_test.BOARD_SIZE, SEND_TWO_RETURN_ONE)
+ root.incorporate_results(probs, 0, root)
+ # and let's say the root were visited a lot of times, which pumps up the
+ # action score for unvisited moves...
+ root.N = 100000
+ root.child_N[root.position.all_legal_moves()] = 10000
+ # this should not throw an error...
+ leaf = root.select_leaf()
+ # the returned leaf should not be the illegal move
+ self.assertNotEqual(leaf.fmove, 1)
+
+ # and even after injecting noise, we should still not select an illegal move
+ for _ in range(10):
+ root.inject_noise()
+ leaf = root.select_leaf()
+ self.assertNotEqual(leaf.fmove, 1)
+
+ def test_dont_pick_unexpanded_child(self):
+ probs = np.array([0.001] * (
+ utils_test.BOARD_SIZE * utils_test.BOARD_SIZE + 1))
+ # make one move really likely so that tree search goes down that path twice
+ # even with a virtual loss
+ probs[17] = 0.999
+ root = MCTSNode(utils_test.BOARD_SIZE, go.Position(utils_test.BOARD_SIZE))
+ root.incorporate_results(probs, 0, root)
+ leaf1 = root.select_leaf()
+ self.assertEqual(leaf1.fmove, 17)
+ leaf1.add_virtual_loss(up_to=root)
+ # the second select_leaf pick should return the same thing, since the child
+ # hasn't yet been sent to neural net for eval + result incorporation
+ leaf2 = root.select_leaf()
+ self.assertIs(leaf1, leaf2)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/minigo.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/minigo.py"
new file mode 100644
index 00000000..772f7a01
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/minigo.py"
@@ -0,0 +1,506 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Train MiniGo with several iterations of RL learning.
+
+One iteration of RL learning consists of bootstrap, selfplay, gather and train:
+ bootstrap: Initialize a random model
+ selfplay: Play games with the latest model to produce data used for training
+ gather: Group games played with the same model into larger files of tfexamples
+ train: Train a new model with the selfplay results from the most recent
+ N generations.
+After training, validation can be performed on the holdout data.
+Given two models, evaluation can be applied to choose a stronger model.
+The training pipeline consists of multiple RL learning iterations to achieve
+better models.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import argparse
+import os
+import random
+import socket
+import sys
+import time
+
+import tensorflow as tf # pylint: disable=g-bad-import-order
+
+import dualnet
+import evaluation
+import go
+import model_params
+import preprocessing
+import selfplay_mcts
+import utils
+
+_TF_RECORD_SUFFIX = '.tfrecord.zz'
+
+
+def _ensure_dir_exists(directory):
+ """Check if directory exists. If not, create it.
+
+ Args:
+ directory: A given directory
+ """
+  if not os.path.isdir(directory):
+ tf.gfile.MakeDirs(directory)
+
+
+def bootstrap(estimator_model_dir, trained_models_dir, params):
+ """Initialize the model with random weights.
+
+ Args:
+ estimator_model_dir: tf.estimator model directory.
+    trained_models_dir: Directory for saving trained models; the first
+      bootstrapped generation is exported here.
+ params: A MiniGoParams instance of hyperparameters for the model.
+ """
+ bootstrap_name = utils.generate_model_name(0)
+ _ensure_dir_exists(trained_models_dir)
+ bootstrap_model_path = os.path.join(trained_models_dir, bootstrap_name)
+ _ensure_dir_exists(estimator_model_dir)
+
+ print('Bootstrapping with working dir {}\n Model 0 exported to {}'.format(
+ estimator_model_dir, bootstrap_model_path))
+ dualnet.bootstrap(estimator_model_dir, params)
+ dualnet.export_model(estimator_model_dir, bootstrap_model_path)
+
+
+def selfplay(selfplay_dirs, selfplay_model, params):
+ """Perform selfplay with a specific model.
+
+ Args:
+ selfplay_dirs: A dict to specify the directories used in selfplay.
+ selfplay_dirs = {
+ 'output_dir': output_dir,
+ 'holdout_dir': holdout_dir,
+ 'clean_sgf': clean_sgf,
+ 'full_sgf': full_sgf
+ }
+ selfplay_model: The actual Dualnet runner for selfplay.
+ params: A MiniGoParams instance of hyperparameters for the model.
+ """
+ with utils.logged_timer('Playing game'):
+ player = selfplay_mcts.play(
+ params.board_size, selfplay_model, params.selfplay_readouts,
+ params.selfplay_resign_threshold, params.simultaneous_leaves,
+ params.selfplay_verbose)
+
+ output_name = '{}-{}'.format(int(time.time()), socket.gethostname())
+
+ def _write_sgf_data(dir_sgf, use_comments):
+ with tf.gfile.GFile(
+ os.path.join(dir_sgf, '{}.sgf'.format(output_name)), 'w') as f:
+ f.write(player.to_sgf(use_comments=use_comments))
+
+ _write_sgf_data(selfplay_dirs['clean_sgf'], use_comments=False)
+ _write_sgf_data(selfplay_dirs['full_sgf'], use_comments=True)
+
+ game_data = player.extract_data()
+ tf_examples = preprocessing.make_dataset_from_selfplay(game_data, params)
+
+ # Hold out 5% of games for evaluation.
+ if random.random() < params.holdout_pct:
+ fname = os.path.join(
+ selfplay_dirs['holdout_dir'], output_name + _TF_RECORD_SUFFIX)
+ else:
+ fname = os.path.join(
+ selfplay_dirs['output_dir'], output_name + _TF_RECORD_SUFFIX)
+
+ preprocessing.write_tf_examples(fname, tf_examples)
+
+
+def gather(selfplay_dir, training_chunk_dir, params):
+ """Gather selfplay data into large training chunk.
+
+ Args:
+ selfplay_dir: Where to look for games. Set as 'base_dir/data/selfplay/'.
+ training_chunk_dir: where to put collected games. Set as
+ 'base_dir/data/training_chunks/'.
+ params: A MiniGoParams instance of hyperparameters for the model.
+ """
+ # Check the selfplay data from the most recent 50 models.
+ _ensure_dir_exists(training_chunk_dir)
+ sorted_model_dirs = sorted(tf.gfile.ListDirectory(selfplay_dir))
+ models = [model_dir.strip('/')
+ for model_dir in sorted_model_dirs[-params.gather_generation:]]
+
+ with utils.logged_timer('Finding existing tfrecords...'):
+ model_gamedata = {
+ model: tf.gfile.Glob(
+ os.path.join(selfplay_dir, model, '*'+_TF_RECORD_SUFFIX))
+ for model in models
+ }
+ print('Found {} models'.format(len(models)))
+ for model_name, record_files in sorted(model_gamedata.items()):
+ print(' {}: {} files'.format(model_name, len(record_files)))
+
+ meta_file = os.path.join(training_chunk_dir, 'meta.txt')
+ try:
+ with tf.gfile.GFile(meta_file, 'r') as f:
+ already_processed = set(f.read().split())
+ except tf.errors.NotFoundError:
+ already_processed = set()
+
+ num_already_processed = len(already_processed)
+
+ for model_name, record_files in sorted(model_gamedata.items()):
+ if set(record_files) <= already_processed:
+ continue
+ print('Gathering files from {}:'.format(model_name))
+ tf_examples = preprocessing.shuffle_tf_examples(
+ params.shuffle_buffer_size, params.examples_per_chunk, record_files)
+    # Write each shuffled batch of examples out as its own chunk file.
+ for i, example_batch in enumerate(tf_examples):
+ output_record = os.path.join(
+ training_chunk_dir,
+ ('{}-{}'+_TF_RECORD_SUFFIX).format(model_name, str(i)))
+ preprocessing.write_tf_examples(
+ output_record, example_batch, serialize=False)
+ already_processed.update(record_files)
+
+ print('Processed {} new files'.format(
+ len(already_processed) - num_already_processed))
+ with tf.gfile.GFile(meta_file, 'w') as f:
+ f.write('\n'.join(sorted(already_processed)))
+
+
+def train(trained_models_dir, estimator_model_dir, training_chunk_dir,
+ generation, params):
+ """Train the latest model from gathered data.
+
+ Args:
+ trained_models_dir: Where to export the completed generation.
+ estimator_model_dir: tf.estimator model directory.
+ training_chunk_dir: Directory where gathered training chunks are.
+ generation: Which generation you are training.
+ params: A MiniGoParams instance of hyperparameters for the model.
+ """
+ new_model_name = utils.generate_model_name(generation)
+ print('New model will be {}'.format(new_model_name))
+ new_model = os.path.join(trained_models_dir, new_model_name)
+
+ print('Training on gathered game data...')
+ tf_records = sorted(
+ tf.gfile.Glob(os.path.join(training_chunk_dir, '*'+_TF_RECORD_SUFFIX)))
+ tf_records = tf_records[
+ -(params.train_window_size // params.examples_per_chunk):]
+
+ print('Training from: {} to {}'.format(tf_records[0], tf_records[-1]))
+ with utils.logged_timer('Training'):
+ dualnet.train(estimator_model_dir, tf_records, generation, params)
+ dualnet.export_model(estimator_model_dir, new_model)
+
+
+def validate(trained_models_dir, holdout_dir, estimator_model_dir, params):
+ """Validate the latest model on the holdout dataset.
+
+ Args:
+ trained_models_dir: Directories where the completed generations/models are.
+ holdout_dir: Directories where holdout data are.
+ estimator_model_dir: tf.estimator model directory.
+ params: A MiniGoParams instance of hyperparameters for the model.
+ """
+ model_num, _ = utils.get_latest_model(trained_models_dir)
+
+ # Get the holdout game data
+ nums_names = utils.get_models(trained_models_dir)
+
+ # Model N was trained on games up through model N-1, so the validation set
+ # should only be for models through N-1 as well, thus the (model_num) term.
+ models = [num_name for num_name in nums_names if num_name[0] < model_num]
+
+ # pair is a tuple of (model_num, model_name), like (13, 000013-modelname)
+ holdout_dirs = [os.path.join(holdout_dir, pair[1])
+ for pair in models[-params.holdout_generation:]]
+ tf_records = []
+ with utils.logged_timer('Building lists of holdout files'):
+ for record_dir in holdout_dirs:
+ if os.path.exists(record_dir): # make sure holdout dir exists
+ tf_records.extend(
+ tf.gfile.Glob(os.path.join(record_dir, '*'+_TF_RECORD_SUFFIX)))
+
+ if not tf_records:
+ print('No holdout dataset for validation! '
+ 'Please check your holdout directory: {}'.format(holdout_dir))
+ return
+
+ print('The length of tf_records is {}.'.format(len(tf_records)))
+ first_tf_record = os.path.basename(tf_records[0])
+ last_tf_record = os.path.basename(tf_records[-1])
+ with utils.logged_timer('Validating from {} to {}'.format(
+ first_tf_record, last_tf_record)):
+ dualnet.validate(estimator_model_dir, tf_records, params)
+
+
+def evaluate(black_model_name, black_net, white_model_name, white_net,
+ evaluate_dir, params):
+ """Evaluate with two models.
+
+  Two DualNetRunners play as black and white in a Go match. The models play
+  several games, and the model that wins by a margin of at least 55% is
+  declared the winner.
+
+ Args:
+ black_model_name: The name of the model playing black.
+ black_net: The DualNetRunner model for black
+ white_model_name: The name of the model playing white.
+ white_net: The DualNetRunner model for white.
+ evaluate_dir: Where to write the evaluation results. Set as
+ 'base_dir/sgf/evaluate/'.
+ params: A MiniGoParams instance of hyperparameters for the model.
+
+ Returns:
+ The model name of the winner.
+
+ Raises:
+    ValueError: if the winner is neither `WHITE` nor `BLACK`.
+ """
+ with utils.logged_timer('{} games'.format(params.eval_games)):
+ winner = evaluation.play_match(
+ params, black_net, white_net, params.eval_games,
+ params.eval_readouts, evaluate_dir, params.eval_verbose)
+
+ if winner != go.WHITE_NAME and winner != go.BLACK_NAME:
+ raise ValueError('Winner should be either White or Black!')
+
+ return black_model_name if winner == go.BLACK_NAME else white_model_name
+
+
+def _set_params(flags):
+ """Set hyperparameters from board size.
+
+ Args:
+ flags: Flags from Argparser.
+
+ Returns:
+    A MiniGoParams instance of hyperparameters.
+ """
+ params = model_params.MiniGoParams()
+ k = utils.round_power_of_two(flags.board_size ** 2 / 3)
+ params.num_filters = k # Number of filters in the convolution layer
+ params.fc_width = 2 * k # Width of each fully connected layer
+ params.num_shared_layers = flags.board_size # Number of shared trunk layers
+ params.board_size = flags.board_size # Board size
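+  # For example, with board_size=9: 81 / 3 = 27, which presumably rounds to the
+  # nearest power of two in utils.round_power_of_two, giving num_filters=32,
+  # fc_width=64 and num_shared_layers=9.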
+
+ # How many positions can fit on a graphics card. 256 for 9s, 16 or 32 for 19s.
+ if flags.batch_size is None:
+ if flags.board_size == 9:
+ params.batch_size = 256
+ else:
+ params.batch_size = 32
+ else:
+ params.batch_size = flags.batch_size
+
+ return params
+
+
+def _prepare_selfplay(
+ model_name, trained_models_dir, selfplay_dir, holdout_dir, sgf_dir, params):
+ """Set directories and load the network for selfplay.
+
+ Args:
+ model_name: The name of the model for self-play
+ trained_models_dir: Directories where the completed generations/models are.
+ selfplay_dir: Where to write the games. Set as 'base_dir/data/selfplay/'.
+ holdout_dir: Where to write the holdout data. Set as
+ 'base_dir/data/holdout/'.
+ sgf_dir: Where to write the sgf (Smart Game Format) files. Set as
+ 'base_dir/sgf/'.
+ params: A MiniGoParams instance of hyperparameters for the model.
+
+ Returns:
+ The directories and network model for selfplay.
+ """
+ # Set paths for the model with 'model_name'
+ model_path = os.path.join(trained_models_dir, model_name)
+ output_dir = os.path.join(selfplay_dir, model_name)
+ holdout_dir = os.path.join(holdout_dir, model_name)
+ # clean_sgf is to write sgf file without comments.
+ # full_sgf is to write sgf file with comments.
+ clean_sgf = os.path.join(sgf_dir, model_name, 'clean')
+ full_sgf = os.path.join(sgf_dir, model_name, 'full')
+
+ _ensure_dir_exists(output_dir)
+ _ensure_dir_exists(holdout_dir)
+ _ensure_dir_exists(clean_sgf)
+ _ensure_dir_exists(full_sgf)
+ selfplay_dirs = {
+ 'output_dir': output_dir,
+ 'holdout_dir': holdout_dir,
+ 'clean_sgf': clean_sgf,
+ 'full_sgf': full_sgf
+ }
+ # cache the network model for self-play
+ with utils.logged_timer('Loading weights from {} ... '.format(model_path)):
+ network = dualnet.DualNetRunner(model_path, params)
+ return selfplay_dirs, network
+
+
+def run_selfplay(selfplay_model, selfplay_games, dirs, params):
+ """Run selfplay to generate training data.
+
+ Args:
+ selfplay_model: The model name for selfplay.
+ selfplay_games: The number of selfplay games.
+ dirs: A MiniGoDirectory instance of directories used in each step.
+ params: A MiniGoParams instance of hyperparameters for the model.
+ """
+ selfplay_dirs, network = _prepare_selfplay(
+ selfplay_model, dirs.trained_models_dir, dirs.selfplay_dir,
+ dirs.holdout_dir, dirs.sgf_dir, params)
+
+ print('Self-play with model: {}'.format(selfplay_model))
+ for _ in range(selfplay_games):
+ selfplay(selfplay_dirs, network, params)
+
+
+def main(_):
+ """Run the reinforcement learning loop."""
+ tf.logging.set_verbosity(tf.logging.INFO)
+
+ params = _set_params(FLAGS)
+
+  # A dummy model for debugging/testing purposes, with fewer games and iterations
+ if FLAGS.test:
+ params = model_params.DummyMiniGoParams()
+ base_dir = FLAGS.base_dir + str(FLAGS.board_size) + '_size_dummy/'
+ else:
+ # Set directories for models and datasets
+ base_dir = FLAGS.base_dir + str(FLAGS.board_size) + '_size/'
+
+ dirs = utils.MiniGoDirectory(base_dir)
+
+ # Run selfplay only if user specifies the argument.
+ if FLAGS.selfplay:
+ selfplay_model_name = FLAGS.selfplay_model_name or utils.get_latest_model(
+ dirs.trained_models_dir)[1]
+ max_games = FLAGS.selfplay_max_games or params.max_games_per_generation
+ run_selfplay(selfplay_model_name, max_games, dirs, params)
+ return
+
+ # Run the RL pipeline
+ # if no models have been trained, start from bootstrap model
+
+ if not os.path.isdir(dirs.trained_models_dir):
+ print('No trained model exists! Starting from Bootstrap...')
+ print('Creating random initial weights...')
+ bootstrap(dirs.estimator_model_dir, dirs.trained_models_dir, params)
+ else:
+    print('A MiniGo base directory has been found!')
+    print('Starting from the last checkpoint...')
+
+ _, best_model_so_far = utils.get_latest_model(dirs.trained_models_dir)
+ for rl_iter in range(params.max_iters_per_pipeline):
+ print('RL_iteration: {}'.format(rl_iter))
+ # Self-play with the best model to generate training data
+ run_selfplay(
+ best_model_so_far, params.max_games_per_generation, dirs, params)
+
+ # gather selfplay data for training
+ print('Gathering game output...')
+ gather(dirs.selfplay_dir, dirs.training_chunk_dir, params)
+
+ # train the next generation model
+ model_num, _ = utils.get_latest_model(dirs.trained_models_dir)
+ print('Training on gathered game data...')
+ train(dirs.trained_models_dir, dirs.estimator_model_dir,
+ dirs.training_chunk_dir, model_num + 1, params)
+
+ # validate the latest model if needed
+ if FLAGS.validation:
+ print('Validating on the holdout game data...')
+ validate(dirs.trained_models_dir, dirs.holdout_dir,
+ dirs.estimator_model_dir, params)
+
+ _, current_model = utils.get_latest_model(dirs.trained_models_dir)
+
+ if FLAGS.evaluation: # Perform evaluation if needed
+ print('Evaluate models between {} and {}'.format(
+ best_model_so_far, current_model))
+ black_model = os.path.join(dirs.trained_models_dir, best_model_so_far)
+ white_model = os.path.join(dirs.trained_models_dir, current_model)
+ _ensure_dir_exists(dirs.evaluate_dir)
+ with utils.logged_timer('Loading weights'):
+ black_net = dualnet.DualNetRunner(black_model, params)
+ white_net = dualnet.DualNetRunner(white_model, params)
+
+ best_model_so_far = evaluate(
+ best_model_so_far, black_net, current_model, white_net,
+ dirs.evaluate_dir, params)
+ print('Winner of evaluation: {}!'.format(best_model_so_far))
+ else:
+ best_model_so_far = current_model
+
+
+if __name__ == '__main__':
+ parser = argparse.ArgumentParser()
+ # flags to run the RL pipeline
+ parser.add_argument(
+ '--base_dir',
+ type=str,
+ default='/tmp/minigo/',
+ metavar='BD',
+ help='Base directory for the MiniGo models and datasets.')
+ parser.add_argument(
+ '--board_size',
+ type=int,
+ default=9,
+ metavar='N',
+ choices=[9, 19],
+ help='Go board size. The default size is 9.')
+ parser.add_argument(
+ '--batch_size',
+ type=int,
+ default=None,
+ metavar='BS',
+      help='Batch size for training. If None, a default based on board size '
+           'is used.')
+ # Test the pipeline with a dummy model
+ parser.add_argument(
+ '--test',
+ action='store_true',
+ help='A boolean to test RL pipeline with a dummy model.')
+ # Run RL pipeline with the validation step
+ parser.add_argument(
+ '--validation',
+ action='store_true',
+ help='A boolean to specify validation in the RL pipeline.')
+ # Run RL pipeline with the evaluation step
+ parser.add_argument(
+ '--evaluation',
+ action='store_true',
+ help='A boolean to specify evaluation in the RL pipeline.')
+
+ # self-play only
+ parser.add_argument(
+ '--selfplay',
+ action='store_true',
+ help='A boolean to run self-play only.')
+ parser.add_argument(
+ '--selfplay_model_name',
+ type=str,
+ default=None,
+ metavar='SM',
+ help='The model used for self-play only.')
+ parser.add_argument(
+ '--selfplay_max_games',
+ type=int,
+ default=None,
+ metavar='SMG',
+      help='The number of games to generate when running self-play only.')
+
+ FLAGS, unparsed = parser.parse_known_args()
+ tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/model_params.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/model_params.py"
new file mode 100644
index 00000000..0b8fee5e
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/model_params.py"
@@ -0,0 +1,95 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Defines MiniGo parameters."""
+
+
+class MiniGoParams(object):
+ """Parameters for MiniGo."""
+
+ # Go board size
+ board_size = 9
+
+ # RL pipeline
+ max_games_per_generation = 10 # Number of games per selfplay generation
+ max_iters_per_pipeline = 2 # Number of RL iterations in one pipeline
+
+ # The shuffle buffer size determines how far an example could end up from
+ # where it started; this and the interleave parameters in preprocessing can
+  # give us an approximation of a uniform sampling. The default of 2M is used
+  # in training, but smaller numbers can be used for aggregation or validation.
+  shuffle_buffer_size = 2000000  # shuffle buffer size in preprocessing
+
+ # dual_net
+ # How many positions to look at per generation.
+ # Per AlphaGo Zero (AGZ), 2048 minibatch * 1k = 2M positions/generation
+ examples_per_generation = 2000000
+
+ # for learning rate
+ l2_strength = 1e-4 # Regularization strength
+ momentum = 0.9 # Momentum used in SGD
+
+ kernel_size = [3, 3] # kernel size of conv and res blocks is from AGZ paper
+
+ # selfplay
+ selfplay_readouts = 100 # How many simulations to run per move
+ selfplay_verbose = 1 # >=2 will print debug info, >=3 will print boards
+ # an absolute value of threshold to resign at
+ selfplay_resign_threshold = 0.95
+
+ # the number of simultaneous leaves in MCTS
+ simultaneous_leaves = 8
+
+ # holdout data for validation
+ holdout_pct = 0.05 # How many games to hold out for validation
+ holdout_generation = 50 # How many recent generations/models for holdout data
+
+ # gather
+ gather_generation = 50 # How many recent generations/models for gathered data
+
+ # How many positions we should aggregate per 'chunk'.
+ examples_per_chunk = 10000
+ # How many positions to draw from for our training window.
+ # AGZ used the most recent 500k games, which, assuming 250 moves/game = 125M
+ train_window_size = 125000000
+
+ # evaluation with two models
+ eval_games = 50 # The number of games to play in evaluation
+ eval_readouts = 100 # How many readouts to make per move in evaluation
+ eval_verbose = 1 # How verbose the players should be in evaluation
+ eval_win_rate = 0.55 # Winner needs to win by a margin of 55%.
+
+
+class DummyMiniGoParams(MiniGoParams):
+ """Parameters for a dummy model."""
+ num_filters = 8 # Number of filters in the convolution layer
+ fc_width = 16 # Width of each fully connected layer
+ num_shared_layers = 1 # Number of shared trunk layers
+ batch_size = 16
+ examples_per_generation = 64
+ max_games_per_generation = 2
+ max_iters_per_pipeline = 1
+ selfplay_readouts = 10
+
+ shuffle_buffer_size = 1000
+
+ # evaluation
+ eval_games = 10 # The number of games to play in evaluation
+ eval_readouts = 10 # How many readouts to make per move in evaluation
+ eval_verbose = 1 # How verbose the players should be in evaluation
+
+
+class DummyValidationParams(DummyMiniGoParams, MiniGoParams):
+ """Parameters for a dummy model."""
+ holdout_pct = 1 # Set holdout percent as 1 for validation testing purpose
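+
+
+# Example (illustrative): parameters are plain class attributes, so a quick
+# local run can override them directly on an instance:
+#   params = DummyMiniGoParams()
+#   params.selfplay_readouts = 20  # more readouts than the dummy default of 10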
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/preprocessing.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/preprocessing.py"
new file mode 100644
index 00000000..67bb4365
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/preprocessing.py"
@@ -0,0 +1,238 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Preprocessing step to create, read, write tf.Examples."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import functools
+import random
+
+import tensorflow as tf # pylint: disable=g-bad-import-order
+
+import coords
+import features as features_lib
+import numpy as np
+import sgf_wrapper
+
+TF_RECORD_CONFIG = tf.python_io.TFRecordOptions(
+ tf.python_io.TFRecordCompressionType.ZLIB)
+
+
+# Constructing tf.Examples
+def _one_hot(board_size, index):
+ onehot = np.zeros([board_size * board_size + 1], dtype=np.float32)
+ onehot[index] = 1
+ return onehot
+
+
+def make_tf_example(features, pi, value):
+ """Make tf examples.
+
+ Args:
+ features: [N, N, FEATURE_DIM] nparray of uint8
+ pi: [N * N + 1] nparray of float32
+ value: float
+
+ Returns:
+ tf example.
+
+ """
+ return tf.train.Example(
+ features=tf.train.Features(
+ feature={
+ 'x': tf.train.Feature(
+ bytes_list=tf.train.BytesList(value=[features.tostring()])),
+ 'pi': tf.train.Feature(
+ bytes_list=tf.train.BytesList(value=[pi.tostring()])),
+ 'outcome': tf.train.Feature(
+ float_list=tf.train.FloatList(value=[value]))
+ }))
+
+
+def write_tf_examples(filename, tf_examples, serialize=True):
+ """Write tf.Example to files.
+
+ Args:
+ filename: Where to write tf.records
+ tf_examples: An iterable of tf.Example
+ serialize: whether to serialize the examples.
+ """
+ with tf.python_io.TFRecordWriter(
+ filename, options=TF_RECORD_CONFIG) as writer:
+ for ex in tf_examples:
+ if serialize:
+ writer.write(ex.SerializeToString())
+ else:
+ writer.write(ex)
+
+
+# Read tf.Example from files
+def _batch_parse_tf_example(board_size, batch_size, example_batch):
+ """Parse tf examples.
+
+ Args:
+ board_size: the go board size
+ batch_size: the batch size
+ example_batch: a batch of tf.Example
+
+ Returns:
+ A tuple (feature_tensor, dict of output tensors)
+ """
+ features = {
+ 'x': tf.FixedLenFeature([], tf.string),
+ 'pi': tf.FixedLenFeature([], tf.string),
+ 'outcome': tf.FixedLenFeature([], tf.float32),
+ }
+ parsed = tf.parse_example(example_batch, features)
+ x = tf.decode_raw(parsed['x'], tf.uint8)
+ x = tf.cast(x, tf.float32)
+ x = tf.reshape(x, [batch_size, board_size, board_size,
+ features_lib.NEW_FEATURES_PLANES])
+ pi = tf.decode_raw(parsed['pi'], tf.float32)
+ pi = tf.reshape(pi, [batch_size, board_size * board_size + 1])
+ outcome = parsed['outcome']
+ outcome.set_shape([batch_size])
+ return (x, {'pi_tensor': pi, 'value_tensor': outcome})
+
+
+def read_tf_records(
+ shuffle_buffer_size, batch_size, tf_records, num_repeats=None,
+ shuffle_records=True, shuffle_examples=True, filter_amount=1.0):
+ """Read tf records.
+
+ Args:
+ shuffle_buffer_size: how big of a buffer to fill before shuffling
+ batch_size: batch size to return
+ tf_records: a list of tf_record filenames
+ num_repeats: how many times the data should be read (default: infinite)
+ shuffle_records: whether to shuffle the order of files read
+ shuffle_examples: whether to shuffle the tf.Examples
+ filter_amount: what fraction of records to keep
+
+ Returns:
+ a tf dataset of batched tensors
+ """
+ if shuffle_records:
+ random.shuffle(tf_records)
+ record_list = tf.data.Dataset.from_tensor_slices(tf_records)
+
+ # compression_type here must agree with write_tf_examples
+ # cycle_length = how many tfrecord files are read in parallel
+ # block_length = how many tf.Examples are read from each file before
+ # moving to the next file
+ # The idea is to shuffle both the order of the files being read,
+ # and the examples being read from the files.
+ dataset = record_list.interleave(
+ lambda x: tf.data.TFRecordDataset(x, compression_type='ZLIB'),
+ cycle_length=64, block_length=16)
+ dataset = dataset.filter(
+ lambda x: tf.less(tf.random_uniform([1]), filter_amount)[0])
+ # TODO(amj): apply py_func for transforms here.
+ if num_repeats is not None:
+ dataset = dataset.repeat(num_repeats)
+ else:
+ dataset = dataset.repeat()
+ if shuffle_examples:
+ dataset = dataset.shuffle(buffer_size=shuffle_buffer_size)
+ dataset = dataset.batch(batch_size)
+ return dataset
+
+
+def get_input_tensors(params, batch_size, tf_records, num_repeats=None,
+ shuffle_records=True, shuffle_examples=True,
+ filter_amount=0.05):
+ """Read tf.Records and prepare them for ingestion by dualnet.
+
+ Args:
+ params: An object of hyperparameters
+ batch_size: batch size to return
+ tf_records: a list of tf_record filenames
+ num_repeats: how many times the data should be read (default: infinite)
+ shuffle_records: whether to shuffle the order of files read
+ shuffle_examples: whether to shuffle the tf.Examples
+ filter_amount: what fraction of records to keep
+
+ Returns:
+ A dict of tensors (see return value of batch_parse_tf_example)
+ """
+ shuffle_buffer_size = params.shuffle_buffer_size
+ dataset = read_tf_records(
+ shuffle_buffer_size, batch_size, tf_records, num_repeats=num_repeats,
+ shuffle_records=shuffle_records, shuffle_examples=shuffle_examples,
+ filter_amount=filter_amount)
+ dataset = dataset.filter(lambda t: tf.equal(tf.shape(t)[0], batch_size))
+ def batch_parse_tf_example(batch_size, dataset):
+ return _batch_parse_tf_example(params.board_size, batch_size, dataset)
+ dataset = dataset.map(functools.partial(
+ batch_parse_tf_example, batch_size))
+ return dataset.make_one_shot_iterator().get_next()
+
+
+# End-to-end utility functions
+def make_dataset_from_selfplay(data_extracts, params):
+ """Make an iterable of tf.Examples.
+
+ Args:
+ data_extracts: An iterable of (position, pi, result) tuples
+ params: An object of hyperparameters
+
+ Returns:
+ An iterable of tf.Examples.
+ """
+ board_size = params.board_size
+ tf_examples = (make_tf_example(features_lib.extract_features(
+ board_size, pos), pi, result) for pos, pi, result in data_extracts)
+ return tf_examples
+
+
+def make_dataset_from_sgf(board_size, sgf_filename, tf_record):
+ pwcs = sgf_wrapper.replay_sgf_file(board_size, sgf_filename)
+  def make_tf_example_from_pwc(pwc):
+    return _make_tf_example_from_pwc(board_size, pwc)
+ tf_examples = map(make_tf_example_from_pwc, pwcs)
+ write_tf_examples(tf_record, tf_examples)
+
+
+def _make_tf_example_from_pwc(board_size, position_w_context):
+ features = features_lib.extract_features(
+ board_size, position_w_context.position)
+ pi = _one_hot(board_size, coords.to_flat(
+ board_size, position_w_context.next_move))
+ value = position_w_context.result
+ return make_tf_example(features, pi, value)
+
+
+def shuffle_tf_examples(shuffle_buffer_size, gather_size, records_to_shuffle):
+ """Read through tf.Record and yield shuffled, but unparsed tf.Examples.
+
+ Args:
+ shuffle_buffer_size: the size for shuffle buffer
+ gather_size: The number of tf.Examples to be gathered together
+ records_to_shuffle: A list of filenames
+
+ Yields:
+ An iterator yielding lists of bytes, which are serialized tf.Examples.
+ """
+ dataset = read_tf_records(shuffle_buffer_size, gather_size,
+ records_to_shuffle, num_repeats=1)
+ batch = dataset.make_one_shot_iterator().get_next()
+ sess = tf.Session()
+ while True:
+ try:
+ result = sess.run(batch)
+ yield list(result)
+ except tf.errors.OutOfRangeError:
+ break
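+
+
+# Example (illustrative; mirrors how gather() in minigo.py consumes this):
+#   for i, batch in enumerate(shuffle_tf_examples(1000, 10000, record_files)):
+#     write_tf_examples('chunk-{}.tfrecord.zz'.format(i), batch, serialize=False)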
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/preprocessing_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/preprocessing_test.py"
new file mode 100644
index 00000000..21a41dd4
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/preprocessing_test.py"
@@ -0,0 +1,166 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for preprocessing."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import itertools
+import tempfile
+
+import tensorflow as tf # pylint: disable=g-bad-import-order
+
+import coords
+import features
+import go
+import model_params
+import numpy as np
+import preprocessing
+import utils_test
+
+tf.logging.set_verbosity(tf.logging.ERROR)
+
+TEST_SGF = '''(;CA[UTF-8]SZ[9]PB[Murakawa Daisuke]PW[Iyama Yuta]KM[6.5]
+ HA[0]RE[W+1.5]GM[1];B[fd];W[cf])'''
+
+
+class TestPreprocessing(utils_test.MiniGoUnitTest):
+
+ def create_random_data(self, num_examples):
+ raw_data = []
+ for _ in range(num_examples):
+ feature = np.random.random([
+ utils_test.BOARD_SIZE, utils_test.BOARD_SIZE,
+ features.NEW_FEATURES_PLANES]).astype(np.uint8)
+ pi = np.random.random([utils_test.BOARD_SIZE * utils_test.BOARD_SIZE
+ + 1]).astype(np.float32)
+ value = np.random.random()
+ raw_data.append((feature, pi, value))
+ return raw_data
+
+ def extract_data(self, tf_record, filter_amount=1):
+ pos_tensor, label_tensors = preprocessing.get_input_tensors(
+ model_params.DummyMiniGoParams(), 1, [tf_record], num_repeats=1,
+ shuffle_records=False, shuffle_examples=False,
+ filter_amount=filter_amount)
+ recovered_data = []
+ with tf.Session() as sess:
+ while True:
+ try:
+ pos_value, label_values = sess.run([pos_tensor, label_tensors])
+ recovered_data.append((
+ pos_value,
+ label_values['pi_tensor'],
+ label_values['value_tensor']))
+ except tf.errors.OutOfRangeError:
+ break
+ return recovered_data
+
+ def assertEqualData(self, data1, data2):
+    # Assert that two datasets are equal, where both are of the form:
+    # data = List[Tuple[feature_array, pi_array, value]]
+ self.assertEqual(len(data1), len(data2))
+ for datum1, datum2 in zip(data1, data2):
+ # feature
+ self.assertEqualNPArray(datum1[0], datum2[0])
+ # pi
+ self.assertEqualNPArray(datum1[1], datum2[1])
+ # value
+ self.assertEqual(datum1[2], datum2[2])
+
+ def test_serialize_round_trip(self):
+ np.random.seed(1)
+ raw_data = self.create_random_data(10)
+ tfexamples = list(map(preprocessing.make_tf_example, *zip(*raw_data)))
+
+ with tempfile.NamedTemporaryFile() as f:
+ preprocessing.write_tf_examples(f.name, tfexamples)
+ recovered_data = self.extract_data(f.name)
+
+ self.assertEqualData(raw_data, recovered_data)
+
+ def test_filter(self):
+ raw_data = self.create_random_data(100)
+ tfexamples = list(map(preprocessing.make_tf_example, *zip(*raw_data)))
+
+ with tempfile.NamedTemporaryFile() as f:
+ preprocessing.write_tf_examples(f.name, tfexamples)
+ recovered_data = self.extract_data(f.name, filter_amount=.05)
+
+ self.assertLess(len(recovered_data), 50)
+
+ def test_serialize_round_trip_no_parse(self):
+ np.random.seed(1)
+ raw_data = self.create_random_data(10)
+ tfexamples = list(map(preprocessing.make_tf_example, *zip(*raw_data)))
+
+ with tempfile.NamedTemporaryFile() as start_file, \
+ tempfile.NamedTemporaryFile() as rewritten_file:
+ preprocessing.write_tf_examples(start_file.name, tfexamples)
+ # We want to test that the rewritten, shuffled file contains correctly
+ # serialized tf.Examples.
+ batch_size = 4
+ batches = list(preprocessing.shuffle_tf_examples(
+ 1000, batch_size, [start_file.name]))
+ # 2 batches of 4, 1 incomplete batch of 2.
+ self.assertEqual(len(batches), 3)
+
+ # concatenate list of lists into one list
+ all_batches = list(itertools.chain.from_iterable(batches))
+
+ for _ in batches:
+ preprocessing.write_tf_examples(
+ rewritten_file.name, all_batches, serialize=False)
+
+ original_data = self.extract_data(start_file.name)
+ recovered_data = self.extract_data(rewritten_file.name)
+
+ # stuff is shuffled, so sort before checking equality
+ def sort_key(nparray_tuple):
+ return nparray_tuple[2]
+ original_data = sorted(original_data, key=sort_key)
+ recovered_data = sorted(recovered_data, key=sort_key)
+
+ self.assertEqualData(original_data, recovered_data)
+
+ def test_make_dataset_from_sgf(self):
+ with tempfile.NamedTemporaryFile() as sgf_file, \
+ tempfile.NamedTemporaryFile() as record_file:
+ sgf_file.write(TEST_SGF.encode('utf8'))
+ sgf_file.seek(0)
+ preprocessing.make_dataset_from_sgf(
+ utils_test.BOARD_SIZE, sgf_file.name, record_file.name)
+ recovered_data = self.extract_data(record_file.name)
+ start_pos = go.Position(utils_test.BOARD_SIZE)
+ first_move = coords.from_sgf('fd')
+ next_pos = start_pos.play_move(first_move)
+ second_move = coords.from_sgf('cf')
+ expected_data = [
+ (
+ features.extract_features(utils_test.BOARD_SIZE, start_pos),
+ preprocessing._one_hot(utils_test.BOARD_SIZE, coords.to_flat(
+ utils_test.BOARD_SIZE, first_move)), -1
+ ),
+ (
+ features.extract_features(utils_test.BOARD_SIZE, next_pos),
+ preprocessing._one_hot(utils_test.BOARD_SIZE, coords.to_flat(
+ utils_test.BOARD_SIZE, second_move)), -1
+ )
+ ]
+ self.assertEqualData(expected_data, recovered_data)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/selfplay_mcts.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/selfplay_mcts.py"
new file mode 100644
index 00000000..14bfd85e
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/selfplay_mcts.py"
@@ -0,0 +1,96 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Play a self-play match with a given DualNet model."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import random
+import sys
+import time
+
+import coords
+from gtp_wrapper import MCTSPlayer
+
+
+def play(board_size, network, readouts, resign_threshold, simultaneous_leaves,
+ verbosity=0):
+ """Plays out a self-play match.
+
+ Args:
+ board_size: the go board size
+ network: the DualNet model
+ readouts: the number of readouts in MCTS
+ resign_threshold: the threshold to resign at in the match
+ simultaneous_leaves: the number of simultaneous leaves in MCTS
+ verbosity: the verbosity of the self-play match
+
+ Returns:
+    the MCTSPlayer for the finished game. Its history holds the final
+    position, the n x (board_size**2 + 1) tensor of floats representing the
+    MCTS search probabilities, and the n original value-net estimates,
+    where n is the number of moves in the game.
+ """
+ player = MCTSPlayer(board_size, network, resign_threshold=resign_threshold,
+ verbosity=verbosity, num_parallel=simultaneous_leaves)
+ # Disable resign in 5% of games
+ if random.random() < 0.05:
+ player.resign_threshold = -1.0
+
+ player.initialize_game()
+
+ # Must run this once at the start, so that noise injection actually
+ # affects the first move of the game.
+ first_node = player.root.select_leaf()
+ prob, val = network.run(first_node.position)
+ first_node.incorporate_results(prob, val, first_node)
+
+ while True:
+ start = time.time()
+ player.root.inject_noise()
+ current_readouts = player.root.N
+ # we want to do "X additional readouts", rather than "up to X readouts".
+ while player.root.N < current_readouts + readouts:
+ player.tree_search()
+
+ if verbosity >= 3:
+ print(player.root.position)
+ print(player.root.describe())
+
+ if player.should_resign():
+ player.set_result(-1 * player.root.position.to_play, was_resign=True)
+ break
+ move = player.pick_move()
+ player.play_move(move)
+ if player.root.is_done():
+ player.set_result(player.root.position.result(), was_resign=False)
+ break
+
+ if (verbosity >= 2) or (
+ verbosity >= 1 and player.root.position.n % 10 == 9):
+ print("Q: {:.5f}".format(player.root.Q))
+ dur = time.time() - start
+ print("%d: %d readouts, %.3f s/100. (%.2f sec)" % (
+ player.root.position.n, readouts, dur / readouts * 100.0, dur))
+ if verbosity >= 3:
+ print("Played >>",
+ coords.to_kgs(coords.from_flat(player.root.fmove)))
+
+ if verbosity >= 2:
+ print("%s: %.3f" % (player.result_string, player.root.Q), file=sys.stderr)
+ print(player.root.position,
+ player.root.position.score(), file=sys.stderr)
+
+ return player
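
For orientation, here is a minimal sketch (not part of the patch) of how `play()` above might be exercised on its own. `StubNet` is a hypothetical stand-in; any object exposing the `run()`/`run_many()` interface of the real DualNet model would work.

```python
import numpy as np

import selfplay_mcts


class StubNet(object):
  """Hypothetical stand-in for the DualNet model used during self-play."""

  def __init__(self, board_size=9):
    n = board_size * board_size + 1
    self.uniform = np.ones(n, dtype=np.float32) / n

  def run(self, position):
    return self.uniform, 0.0

  def run_many(self, positions):
    return [self.uniform] * len(positions), [0.0] * len(positions)


player = selfplay_mcts.play(
    board_size=9, network=StubNet(), readouts=8,
    resign_threshold=-0.9, simultaneous_leaves=4, verbosity=0)
print(player.result_string)
```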
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/sgf_wrapper.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/sgf_wrapper.py"
new file mode 100644
index 00000000..b2378ccb
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/sgf_wrapper.py"
@@ -0,0 +1,192 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Code to extract a series of positions + their next moves from an SGF.
+
+Most of the complexity here is dealing with two features of SGF:
+- Stones can be added via "play move" or "add move", the latter being used
+ to configure L+D puzzles, but also for initial handicap placement.
+- Plays don't necessarily alternate colors; they can be repeated B or W moves.
+  This feature is used to handle free handicap placement.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import coords
+import go
+from go import Position, PositionWithContext
+import numpy as np
+import sgf
+import utils
+
+SGF_TEMPLATE = '''(;GM[1]FF[4]CA[UTF-8]AP[Minigo_sgfgenerator]RU[{ruleset}]
+SZ[{boardsize}]KM[{komi}]PW[{white_name}]PB[{black_name}]RE[{result}]
+{game_moves})'''
+
+PROGRAM_IDENTIFIER = 'Minigo'
+
+
+def translate_sgf_move_qs(player_move, q):
+ return '{move}C[{q:.4f}]'.format(
+ move=translate_sgf_move(player_move), q=q)
+
+
+def translate_sgf_move(player_move, comment=None):
+ if player_move.color not in (go.BLACK, go.WHITE):
+ raise ValueError(
+ 'Can\'t translate color {} to sgf'.format(player_move.color))
+ c = coords.to_sgf(player_move.move)
+ color = 'B' if player_move.color == go.BLACK else 'W'
+ if comment is not None:
+ comment = comment.replace(']', r'\]')
+ comment_node = 'C[{}]'.format(comment)
+ else:
+ comment_node = ''
+ return ';{color}[{coords}]{comment_node}'.format(
+ color=color, coords=c, comment_node=comment_node)
+
+# pylint: disable=unused-argument
+# pylint: disable=unused-variable
+def make_sgf(board_size, move_history, result_string, ruleset='Chinese',
+ komi=7.5, white_name=PROGRAM_IDENTIFIER,
+ black_name=PROGRAM_IDENTIFIER, comments=[]):
+ """Turn a game into SGF.
+
+ Doesn't handle handicap games or positions with incomplete history.
+
+ Args:
+ board_size: the go board size.
+ move_history: iterable of PlayerMoves.
+ result_string: "B+R", "W+0.5", etc.
+ ruleset: the rule set of go game
+ komi: komi score
+ white_name: the name of white player
+ black_name: the name of black player
+ comments: iterable of string/None. Will be zipped with move_history.
+ """
+ try:
+ # Python 2
+ from itertools import izip_longest
+ zip_longest = izip_longest
+ except ImportError:
+ # Python 3
+ from itertools import zip_longest
+
+ boardsize = board_size
+ game_moves = ''.join(translate_sgf_move(*z) for z in zip_longest(
+ move_history, comments))
+ result = result_string
+ return SGF_TEMPLATE.format(**locals())
+
+
+def sgf_prop(value_list):
+ """Converts raw sgf library output to sensible value."""
+ if value_list is None:
+ return None
+ if len(value_list) == 1:
+ return value_list[0]
+ else:
+ return value_list
+
+
+def sgf_prop_get(props, key, default):
+ return sgf_prop(props.get(key, default))
+
+
+def handle_node(board_size, pos, node):
+ """A node can either add B+W stones, play as B, or play as W."""
+ props = node.properties
+ black_stones_added = [coords.from_sgf(c) for c in props.get('AB', [])]
+ white_stones_added = [coords.from_sgf(c) for c in props.get('AW', [])]
+ if black_stones_added or white_stones_added:
+ return add_stones(board_size, pos, black_stones_added, white_stones_added)
+ # If B/W props are not present, then there is no move. But if it is present
+ # and equal to the empty string, then the move was a pass.
+ elif 'B' in props:
+ black_move = coords.from_sgf(props.get('B', [''])[0])
+ return pos.play_move(black_move, color=go.BLACK)
+ elif 'W' in props:
+ white_move = coords.from_sgf(props.get('W', [''])[0])
+ return pos.play_move(white_move, color=go.WHITE)
+ else:
+ return pos
+
+
+def add_stones(board_size, pos, black_stones_added, white_stones_added):
+ working_board = np.copy(pos.board)
+ go.place_stones(working_board, go.BLACK, black_stones_added)
+ go.place_stones(working_board, go.WHITE, white_stones_added)
+ new_position = Position(
+ board_size, board=working_board, n=pos.n, komi=pos.komi,
+ caps=pos.caps, ko=pos.ko, recent=pos.recent, to_play=pos.to_play)
+ return new_position
+
+
+def get_next_move(node):
+ props = node.next.properties
+ if 'W' in props:
+ return coords.from_sgf(props['W'][0])
+ else:
+ return coords.from_sgf(props['B'][0])
+
+
+def maybe_correct_next(pos, next_node):
+ if (('B' in next_node.properties and pos.to_play != go.BLACK) or
+ ('W' in next_node.properties and pos.to_play != go.WHITE)):
+ pos.flip_playerturn(mutate=True)
+
+
+def replay_sgf(board_size, sgf_contents):
+ """Wrapper for sgf files.
+
+ It does NOT return the very final position, as there is no follow up.
+ To get the final position, call pwc.position.play_move(pwc.next_move)
+ on the last PositionWithContext returned.
+ Example usage:
+ with open(filename) as f:
+      for position_w_context in replay_sgf(board_size, f.read()):
+ print(position_w_context.position)
+
+ Args:
+ board_size: the go board size.
+ sgf_contents: the content in sgf.
+
+ Yields:
+ The go.PositionWithContext instances.
+ """
+ collection = sgf.parse(sgf_contents)
+ game = collection.children[0]
+ props = game.root.properties
+ assert int(sgf_prop(props.get('GM', ['1']))) == 1, 'Not a Go SGF!'
+
+ komi = 0
+ if props.get('KM') is not None:
+ komi = float(sgf_prop(props.get('KM')))
+ result = utils.parse_game_result(sgf_prop(props.get('RE')))
+
+ pos = Position(board_size, komi=komi)
+ current_node = game.root
+ while pos is not None and current_node.next is not None:
+ pos = handle_node(board_size, pos, current_node)
+ maybe_correct_next(pos, current_node.next)
+ next_move = get_next_move(current_node)
+ yield PositionWithContext(pos, next_move, result)
+ current_node = current_node.next
+
+
+def replay_sgf_file(board_size, sgf_file):
+ with open(sgf_file) as f:
+ for pwc in replay_sgf(board_size, f.read()):
+ yield pwc
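
A short usage sketch of the replay/generate pair above; it assumes the surrounding minigo modules (`go`, `coords`, `sgf`) are importable and uses a hypothetical two-move 9x9 game.

```python
import sgf_wrapper

TWO_MOVE_SGF = ('(;GM[1]FF[4]CA[UTF-8]RU[Chinese]SZ[9]KM[7.5]'
                'PB[black]PW[white]RE[W+7.5];B[fd];W[cf])')

# Replay yields one PositionWithContext per move.
pwcs = list(sgf_wrapper.replay_sgf(9, TWO_MOVE_SGF))
for pwc in pwcs:
  print(pwc.next_move, pwc.result)

# Rebuild an SGF from the final position's move history.
final = pwcs[-1].position.play_move(pwcs[-1].next_move)
print(sgf_wrapper.make_sgf(9, final.recent, 'W+7.5', komi=final.komi))
```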
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/sgf_wrapper_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/sgf_wrapper_test.py"
new file mode 100644
index 00000000..2c68b826
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/sgf_wrapper_test.py"
@@ -0,0 +1,205 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for sgf_wrapper."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf # pylint: disable=g-bad-import-order
+
+import coords
+import go
+from sgf_wrapper import replay_sgf, translate_sgf_move, make_sgf
+import utils_test
+
+JAPANESE_HANDICAP_SGF = '''(;GM[1]FF[4]CA[UTF-8]AP[CGoban:3]ST[2]RU[Japanese]
+SZ[9]HA[2]RE[Void]KM[5.50]PW[test_white]PB[test_black]AB[gc][cg];W[ee];B[dg])'''
+
+CHINESE_HANDICAP_SGF = '''(;GM[1]FF[4]CA[UTF-8]AP[CGoban:3]ST[2]RU[Chinese]SZ[9]
+HA[2]RE[Void]KM[5.50]PW[test_white]PB[test_black]RE[B+39.50];B[gc];B[cg];W[ee];
+B[gg];W[eg];B[ge];W[ce];B[ec];W[cc];B[dd];W[de];B[cd];W[bd];B[bc];W[bb];B[be];
+W[ac];B[bf];W[dh];B[ch];W[ci];B[bi];W[di];B[ah];W[gh];B[hh];W[fh];B[hg];W[gi];
+B[fg];W[dg];B[ei];W[cf];B[ef];W[ff];B[fe];W[bg];B[bh];W[af];B[ag];W[ae];B[ad];
+W[ae];B[ed];W[db];B[df];W[eb];B[fb];W[ea];B[fa])'''
+
+NO_HANDICAP_SGF = '''(;CA[UTF-8]SZ[9]PB[Murakawa Daisuke]PW[Iyama Yuta]KM[6.5]
+HA[0]RE[W+1.5]GM[1];B[fd];W[cf];B[eg];W[dd];B[dc];W[cc];B[de];W[cd];B[ed];W[he];
+B[ce];W[be];B[df];W[bf];B[hd];W[ge];B[gd];W[gg];B[db];W[cb];B[cg];W[bg];B[gh];
+W[fh];B[hh];W[fg];B[eh];W[ei];B[di];W[fi];B[hg];W[dh];B[ch];W[ci];B[bh];W[ff];
+B[fe];W[hf];B[id];W[bi];B[ah];W[ef];B[dg];W[ee];B[di];W[ig];B[ai];W[ih];B[fb];
+W[hi];B[ag];W[ab];B[bd];W[bc];B[ae];W[ad];B[af];W[bd];B[ca];W[ba];B[da];W[ie])
+'''
+
+tf.logging.set_verbosity(tf.logging.ERROR)
+
+
+class TestSgfGeneration(utils_test.MiniGoUnitTest):
+
+ def test_translate_sgf_move(self):
+ self.assertEqual(
+ ';B[db]',
+ translate_sgf_move(go.PlayerMove(go.BLACK, (1, 3)), None))
+ self.assertEqual(
+ ';W[aa]',
+ translate_sgf_move(go.PlayerMove(go.WHITE, (0, 0)), None))
+ self.assertEqual(
+ ';W[]',
+ translate_sgf_move(go.PlayerMove(go.WHITE, None), None))
+ self.assertEqual(
+ ';B[db]C[comment]',
+ translate_sgf_move(go.PlayerMove(go.BLACK, (1, 3)), 'comment'))
+
+ def test_make_sgf(self):
+ all_pwcs = list(replay_sgf(utils_test.BOARD_SIZE, NO_HANDICAP_SGF))
+ second_last_position, last_move, _ = all_pwcs[-1]
+ last_position = second_last_position.play_move(last_move)
+
+ back_to_sgf = make_sgf(
+ utils_test.BOARD_SIZE,
+ last_position.recent,
+ last_position.score(),
+ komi=last_position.komi,
+ )
+ reconstructed_positions = list(replay_sgf(
+ utils_test.BOARD_SIZE, back_to_sgf))
+ second_last_position2, last_move2, _ = reconstructed_positions[-1]
+ last_position2 = second_last_position2.play_move(last_move2)
+
+ self.assertEqualPositions(last_position, last_position2)
+
+
+class TestSgfWrapper(utils_test.MiniGoUnitTest):
+
+ def test_sgf_props(self):
+ sgf_replayer = replay_sgf(utils_test.BOARD_SIZE, CHINESE_HANDICAP_SGF)
+ initial = next(sgf_replayer)
+ self.assertEqual(initial.result, go.BLACK)
+ self.assertEqual(initial.position.komi, 5.5)
+
+ def test_japanese_handicap_handling(self):
+ intermediate_board = utils_test.load_board('''
+ .........
+ .........
+ ......X..
+ .........
+ ....O....
+ .........
+ ..X......
+ .........
+ .........
+ ''')
+ intermediate_position = go.Position(
+ utils_test.BOARD_SIZE,
+ intermediate_board,
+ n=1,
+ komi=5.5,
+ caps=(0, 0),
+ recent=(go.PlayerMove(go.WHITE, coords.from_kgs(
+ utils_test.BOARD_SIZE, 'E5')),),
+ to_play=go.BLACK,
+ )
+ final_board = utils_test.load_board('''
+ .........
+ .........
+ ......X..
+ .........
+ ....O....
+ .........
+ ..XX.....
+ .........
+ .........
+ ''')
+ final_position = go.Position(
+ utils_test.BOARD_SIZE,
+ final_board,
+ n=2,
+ komi=5.5,
+ caps=(0, 0),
+ recent=(
+ go.PlayerMove(go.WHITE, coords.from_kgs(
+ utils_test.BOARD_SIZE, 'E5')),
+ go.PlayerMove(go.BLACK, coords.from_kgs(
+ utils_test.BOARD_SIZE, 'D3')),),
+ to_play=go.WHITE,
+ )
+ positions_w_context = list(replay_sgf(
+ utils_test.BOARD_SIZE, JAPANESE_HANDICAP_SGF))
+ self.assertEqualPositions(
+ intermediate_position, positions_w_context[1].position)
+ final_replayed_position = positions_w_context[-1].position.play_move(
+ positions_w_context[-1].next_move)
+ self.assertEqualPositions(final_position, final_replayed_position)
+
+ def test_chinese_handicap_handling(self):
+ intermediate_board = utils_test.load_board('''
+ .........
+ .........
+ ......X..
+ .........
+ .........
+ .........
+ .........
+ .........
+ .........
+ ''')
+ intermediate_position = go.Position(
+ utils_test.BOARD_SIZE,
+ intermediate_board,
+ n=1,
+ komi=5.5,
+ caps=(0, 0),
+ recent=(go.PlayerMove(go.BLACK, coords.from_kgs(
+ utils_test.BOARD_SIZE, 'G7')),),
+ to_play=go.BLACK,
+ )
+ final_board = utils_test.load_board('''
+ ....OX...
+ .O.OOX...
+ O.O.X.X..
+ .OXXX....
+ OX...XX..
+ .X.XXO...
+ X.XOOXXX.
+ XXXO.OOX.
+ .XOOX.O..
+ ''')
+ final_position = go.Position(
+ utils_test.BOARD_SIZE,
+ final_board,
+ n=50,
+ komi=5.5,
+ caps=(7, 2),
+ ko=None,
+ recent=(
+ go.PlayerMove(
+ go.WHITE, coords.from_kgs(utils_test.BOARD_SIZE, 'E9')),
+ go.PlayerMove(
+ go.BLACK, coords.from_kgs(utils_test.BOARD_SIZE, 'F9')),),
+ to_play=go.WHITE
+ )
+ positions_w_context = list(replay_sgf(
+ utils_test.BOARD_SIZE, CHINESE_HANDICAP_SGF))
+ self.assertEqualPositions(
+ intermediate_position, positions_w_context[1].position)
+ self.assertEqual(
+ positions_w_context[1].next_move, coords.from_kgs(
+ utils_test.BOARD_SIZE, 'C3'))
+ final_replayed_position = positions_w_context[-1].position.play_move(
+ positions_w_context[-1].next_move)
+ self.assertEqualPositions(final_position, final_replayed_position)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/strategies.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/strategies.py"
new file mode 100644
index 00000000..e62b9182
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/strategies.py"
@@ -0,0 +1,275 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""The strategy to play each move with MCTS."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import random
+import sys
+import time
+
+import coords
+import go
+from mcts import MCTSNode
+import numpy as np
+import sgf_wrapper
+
+
+def time_recommendation(move_num, seconds_per_move=5, time_limit=15*60,
+ decay_factor=0.98):
+ """Compute the time can be used."""
+
+ # Given current move number and "desired" seconds per move,
+ # return how much time should actually be used. To be used specifically
+ # for CGOS time controls, which are absolute 15 minute time.
+
+ # The strategy is to spend the maximum time possible using seconds_per_move,
+ # and then switch to an exponentially decaying time usage, calibrated so that
+ # we have enough time for an infinite number of moves.
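+  # For example, with seconds_per_move=5 and decay_factor=0.98 the geometric
+  # tail sums to endgame_time = 5 / 0.02 = 250 seconds. Under the 15-minute
+  # (900 s) limit that leaves core_time = 650 s, i.e. core_moves = 130: the
+  # player's first ~130 moves get 5 s each, after which per-move time decays
+  # by a factor of 0.98 per move so the remaining 250 s are never exhausted.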
+
+ # divide by two since you only play half the moves in a game.
+ player_move_num = move_num / 2
+
+ # sum of geometric series maxes out at endgame_time seconds.
+ endgame_time = seconds_per_move / (1 - decay_factor)
+
+ if endgame_time > time_limit:
+ # there is so little main time that we're already in "endgame" mode.
+ base_time = time_limit * (1 - decay_factor)
+ return base_time * decay_factor ** player_move_num
+
+ # leave over endgame_time seconds for the end, and play at seconds_per_move
+ # for as long as possible
+ core_time = time_limit - endgame_time
+ core_moves = core_time / seconds_per_move
+
+ if player_move_num < core_moves:
+ return seconds_per_move
+ else:
+ return seconds_per_move * decay_factor ** (player_move_num - core_moves)
+
+
+def _get_temperature_cutoff(board_size):
+ # When to do deterministic move selection. ~30 moves on a 19x19, ~8 on 9x9
+ return int((board_size * board_size) / 12)
+
+
+class MCTSPlayerMixin(object):
+
+ # If 'simulations_per_move' is nonzero, it will perform that many reads
+  # before playing. Otherwise, it uses 'seconds_per_move' of wall time.
+ def __init__(self, board_size, network, seconds_per_move=5,
+ simulations_per_move=0, resign_threshold=-0.90,
+ verbosity=0, two_player_mode=False, num_parallel=8):
+ self.board_size = board_size
+ self.network = network
+ self.seconds_per_move = seconds_per_move
+ self.simulations_per_move = simulations_per_move
+ self.verbosity = verbosity
+ self.two_player_mode = two_player_mode
+ if two_player_mode:
+ self.temp_threshold = -1
+ else:
+ self.temp_threshold = _get_temperature_cutoff(board_size)
+ self.num_parallel = num_parallel
+ self.qs = []
+ self.comments = []
+ self.searches_pi = []
+ self.root = None
+ self.result = 0
+ self.result_string = None
+ self.resign_threshold = -abs(resign_threshold)
+
+ def initialize_game(self, position=None):
+ if position is None:
+ position = go.Position(self.board_size)
+ self.root = MCTSNode(self.board_size, position)
+ self.result = 0
+ self.result_string = None
+ self.comments = []
+ self.searches_pi = []
+ self.qs = []
+
+ def suggest_move(self, position):
+ """ Used for playing a single game."""
+ # For parallel play, use initialize_move, select_leaf,
+ # incorporate_results, and pick_move
+
+ start = time.time()
+
+ if self.simulations_per_move == 0:
+ while time.time() - start < self.seconds_per_move:
+ self.tree_search()
+ else:
+ current_readouts = self.root.N
+ while self.root.N < current_readouts + self.simulations_per_move:
+ self.tree_search()
+ if self.verbosity > 0:
+ print('%d: Searched %d times in %s seconds\n\n' % (
+ position.n, self.simulations_per_move, time.time() - start),
+ file=sys.stderr)
+
+ # print some stats on anything with probability > 1%
+ if self.verbosity > 2:
+ print(self.root.describe(), file=sys.stderr)
+ print('\n\n', file=sys.stderr)
+ if self.verbosity > 3:
+ print(self.root.position, file=sys.stderr)
+
+ return self.pick_move()
+
+ def play_move(self, c):
+ """Play a move."""
+
+ # Notable side effects:
+ # - finalizes the probability distribution according to
+ # this roots visit counts into the class' running tally, `searches_pi`
+ # - Makes the node associated with this move the root, for future
+ # `inject_noise` calls.
+ if not self.two_player_mode:
+ self.searches_pi.append(
+ self.root.children_as_pi(self.root.position.n < self.temp_threshold))
+ self.qs.append(self.root.Q) # Save our resulting Q.
+ self.comments.append(self.root.describe())
+ self.root = self.root.maybe_add_child(coords.to_flat(self.board_size, c))
+ self.position = self.root.position # for showboard
+ del self.root.parent.children
+ return True # GTP requires positive result.
+
+ def pick_move(self):
+ """Picks a move to play, based on MCTS readout statistics.
+
+ Highest N is most robust indicator. In the early stage of the game, pick
+ a move weighted by visit count; later on, pick the absolute max.
+ """
+ if self.root.position.n > self.temp_threshold:
+ fcoord = np.argmax(self.root.child_N)
+ else:
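+      # Sample in proportion to visit counts: normalize the cumulative sum of
+      # child_N to [0, 1] and invert a uniform draw with searchsorted.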
+ cdf = self.root.child_N.cumsum()
+ cdf /= cdf[-1]
+ selection = random.random()
+ fcoord = cdf.searchsorted(selection)
+ assert self.root.child_N[fcoord] != 0
+ return coords.from_flat(self.board_size, fcoord)
+
+ def tree_search(self, num_parallel=None):
+ if num_parallel is None:
+ num_parallel = self.num_parallel
+ leaves = []
+ failsafe = 0
+ while len(leaves) < num_parallel and failsafe < num_parallel * 2:
+ failsafe += 1
+ leaf = self.root.select_leaf()
+ if self.verbosity >= 4:
+ print(self.show_path_to_root(leaf))
+ # if game is over, override the value estimate with the true score
+ if leaf.is_done():
+ value = 1 if leaf.position.score() > 0 else -1
+ leaf.backup_value(value, up_to=self.root)
+ continue
+ leaf.add_virtual_loss(up_to=self.root)
+ leaves.append(leaf)
+ if leaves:
+ move_probs, values = self.network.run_many(
+ [leaf.position for leaf in leaves])
+ for leaf, move_prob, value in zip(leaves, move_probs, values):
+ leaf.revert_virtual_loss(up_to=self.root)
+ leaf.incorporate_results(move_prob, value, up_to=self.root)
+
+ def show_path_to_root(self, node):
+ max_depth = (self.board_size ** 2) * 1.4 # 505 moves for 19x19, 113 for 9x9
+ pos = node.position
+ diff = node.position.n - self.root.position.n
+ if pos.recent is None:
+ return
+
+ def fmt(move):
+ return '{}-{}'.format('b' if move.color == 1 else 'w',
+ coords.to_kgs(self.board_size, move.move))
+ path = ' '.join(fmt(move) for move in pos.recent[-diff:])
+ if node.position.n >= max_depth:
+ path += ' (depth cutoff reached) %0.1f' % node.position.score()
+ elif node.position.is_game_over():
+ path += ' (game over) %0.1f' % node.position.score()
+ return path
+
+ def should_resign(self):
+ """Returns true if the player resigned.
+
+ No further moves should be played.
+ """
+ return self.root.Q_perspective < self.resign_threshold
+
+ def set_result(self, winner, was_resign):
+ self.result = winner
+ if was_resign:
+ string = 'B+R' if winner == go.BLACK else 'W+R'
+ else:
+ string = self.root.position.result_string()
+ self.result_string = string
+
+ def to_sgf(self, use_comments=True):
+ assert self.result_string is not None
+ pos = self.root.position
+ if use_comments:
+ comments = self.comments or ['No comments.']
+ comments[0] = ('Resign Threshold: %0.3f\n' %
+ self.resign_threshold) + comments[0]
+ else:
+ comments = []
+ return sgf_wrapper.make_sgf(
+ self.board_size, pos.recent, self.result_string,
+ white_name=os.path.basename(self.network.save_file) or 'Unknown',
+ black_name=os.path.basename(self.network.save_file) or 'Unknown',
+ comments=comments)
+
+ def is_done(self):
+ return self.result != 0 or self.root.is_done()
+
+ def extract_data(self):
+ assert len(self.searches_pi) == self.root.position.n
+ assert self.result != 0
+ for pwc, pi in zip(go.replay_position(
+ self.board_size, self.root.position, self.result), self.searches_pi):
+ yield pwc.position, pi, pwc.result
+
+ def chat(self, msg_type, sender, text):
+ default_response = (
+ "Supported commands are 'winrate', 'nextplay', 'fortune', and 'help'.")
+ if self.root is None or self.root.position.n == 0:
+ return "I'm not playing right now. " + default_response
+
+ if 'winrate' in text.lower():
+ wr = (abs(self.root.Q) + 1.0) / 2.0
+ color = 'Black' if self.root.Q > 0 else 'White'
+ return '{:s} {:.2f}%'.format(color, wr * 100.0)
+ elif 'nextplay' in text.lower():
+ return "I'm thinking... " + self.root.most_visited_path()
+ elif 'fortune' in text.lower():
+ return "You're feeling lucky!"
+ elif 'help' in text.lower():
+ return "I can't help much with go -- try ladders! Otherwise: {}".format(
+ default_response)
+ else:
+ return default_response
+
+
+class CGOSPlayerMixin(MCTSPlayerMixin):
+
+ def suggest_move(self, position):
+ self.seconds_per_move = time_recommendation(position.n)
+ return super().suggest_move(position)
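
To make the flow above concrete, here is a hedged sketch of driving `MCTSPlayerMixin` with a uniform stub network. `StubNet` is illustrative; the real DualNet exposes the same `run`/`run_many` interface.

```python
import numpy as np

from strategies import MCTSPlayerMixin

BOARD_SIZE = 9
NUM_MOVES = BOARD_SIZE * BOARD_SIZE + 1


class StubNet(object):
  """Hypothetical network returning uniform priors and a neutral value."""

  def run(self, position):
    return np.ones(NUM_MOVES) / NUM_MOVES, 0.0

  def run_many(self, positions):
    return ([np.ones(NUM_MOVES) / NUM_MOVES] * len(positions),
            [0.0] * len(positions))


player = MCTSPlayerMixin(BOARD_SIZE, StubNet(), simulations_per_move=16)
player.initialize_game()
move = player.suggest_move(player.root.position)
player.play_move(move)
print('played', move)
```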
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/strategies_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/strategies_test.py"
new file mode 100644
index 00000000..aefbcc7f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/strategies_test.py"
@@ -0,0 +1,342 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for strategies."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import unittest
+
+import tensorflow as tf # pylint: disable=g-bad-import-order
+
+import coords
+import go
+import numpy as np
+from strategies import MCTSPlayerMixin, time_recommendation
+import utils_test
+
+ALMOST_DONE_BOARD = utils_test.load_board('''
+ .XO.XO.OO
+ X.XXOOOO.
+ XXXXXOOOO
+ XXXXXOOOO
+ .XXXXOOO.
+ XXXXXOOOO
+ .XXXXOOO.
+ XXXXXOOOO
+ XXXXOOOOO
+ ''')
+
+# Tromp taylor means black can win if we hit the move limit.
+TT_FTW_BOARD = utils_test.load_board('''
+ .XXOOOOOO
+ X.XOO...O
+ .XXOO...O
+ X.XOO...O
+ .XXOO..OO
+ X.XOOOOOO
+ .XXOOOOOO
+ X.XXXXXXX
+ XXXXXXXXX
+ ''')
+
+SEND_TWO_RETURN_ONE = go.Position(
+ utils_test.BOARD_SIZE,
+ board=ALMOST_DONE_BOARD,
+ n=70,
+ komi=2.5,
+ caps=(1, 4),
+ ko=None,
+ recent=(go.PlayerMove(go.BLACK, (0, 1)),
+ go.PlayerMove(go.WHITE, (0, 8))),
+ to_play=go.BLACK
+)
+
+# 505 moves for 19x19, 113 for 9x9
+MAX_DEPTH = (utils_test.BOARD_SIZE ** 2) * 1.4
+
+
+class DummyNet():
+
+ def __init__(self, fake_priors=None, fake_value=0):
+ if fake_priors is None:
+ fake_priors = np.ones(
+ (utils_test.BOARD_SIZE ** 2) + 1) / (utils_test.BOARD_SIZE ** 2 + 1)
+ self.fake_priors = fake_priors
+ self.fake_value = fake_value
+
+ def run(self, position):
+ return self.fake_priors, self.fake_value
+
+ def run_many(self, positions):
+ if not positions:
+ raise ValueError(
+ "No positions passed! (Tensorflow would have failed here.")
+ return [self.fake_priors] * len(positions), [
+ self.fake_value] * len(positions)
+
+
+def initialize_basic_player():
+ player = MCTSPlayerMixin(utils_test.BOARD_SIZE, DummyNet())
+ player.initialize_game()
+ first_node = player.root.select_leaf()
+ first_node.incorporate_results(
+ *player.network.run(player.root.position), up_to=player.root)
+ return player
+
+
+def initialize_almost_done_player():
+ probs = np.array([.001] * (utils_test.BOARD_SIZE * utils_test.BOARD_SIZE + 1))
+ probs[2:5] = 0.2 # some legal moves along the top.
+ probs[-1] = 0.2 # passing is also ok
+ net = DummyNet(fake_priors=probs)
+ player = MCTSPlayerMixin(utils_test.BOARD_SIZE, net)
+ # root position is white to play with no history == white passed.
+ player.initialize_game(SEND_TWO_RETURN_ONE)
+ return player
+
+
+class TestMCTSPlayerMixin(utils_test.MiniGoUnitTest):
+
+ def test_time_controls(self):
+ secs_per_move = 5
+ for time_limit in (10, 100, 1000):
+ # in the worst case imaginable, let's say a game goes 1000 moves long
+ move_numbers = range(0, 1000, 2)
+ total_time_spent = sum(
+ time_recommendation(move_num, secs_per_move,
+ time_limit=time_limit)
+ for move_num in move_numbers)
+ # we should not exceed available game time
+ self.assertLess(total_time_spent, time_limit)
+ # but we should have used at least 95% of our time by the end.
+ self.assertGreater(total_time_spent, time_limit * 0.95)
+
+ def test_inject_noise(self):
+ player = initialize_basic_player()
+ sum_priors = np.sum(player.root.child_prior)
+    # DummyNet should return normalized priors.
+ self.assertAlmostEqual(sum_priors, 1)
+ self.assertTrue(np.all(player.root.child_U == player.root.child_U[0]))
+
+ player.root.inject_noise()
+ new_sum_priors = np.sum(player.root.child_prior)
+ # priors should still be normalized after injecting noise
+ self.assertAlmostEqual(sum_priors, new_sum_priors)
+
+    # With Dirichlet noise, the majority of the density should be in one node.
+ max_p = np.max(player.root.child_prior)
+ self.assertGreater(max_p, 3/(utils_test.BOARD_SIZE ** 2 + 1))
+
+ def test_pick_moves(self):
+ player = initialize_basic_player()
+ root = player.root
+ root.child_N[coords.to_flat(utils_test.BOARD_SIZE, (2, 0))] = 10
+ root.child_N[coords.to_flat(utils_test.BOARD_SIZE, (1, 0))] = 5
+ root.child_N[coords.to_flat(utils_test.BOARD_SIZE, (3, 0))] = 1
+
+ # move 81, or 361, or... Endgame.
+ root.position.n = utils_test.BOARD_SIZE ** 2
+
+ # Assert we're picking deterministically
+ self.assertTrue(root.position.n > player.temp_threshold)
+ move = player.pick_move()
+ self.assertEqual(move, (2, 0))
+
+ # But if we're in the early part of the game, pick randomly
+ root.position.n = 3
+ self.assertFalse(player.root.position.n > player.temp_threshold)
+
+ with unittest.mock.patch('random.random', lambda: .5):
+ move = player.pick_move()
+ self.assertEqual(move, (2, 0))
+
+ with unittest.mock.patch('random.random', lambda: .99):
+ move = player.pick_move()
+ self.assertEqual(move, (3, 0))
+
+ def test_dont_pass_if_losing(self):
+ player = initialize_almost_done_player()
+
+ # check -- white is losing.
+ self.assertEqual(player.root.position.score(), -0.5)
+
+ for i in range(20):
+ player.tree_search()
+ # uncomment to debug this test
+ # print(player.root.describe())
+
+ # Search should converge on D9 as only winning move.
+ flattened = coords.to_flat(utils_test.BOARD_SIZE, coords.from_kgs(
+ utils_test.BOARD_SIZE, 'D9'))
+ best_move = np.argmax(player.root.child_N)
+ self.assertEqual(best_move, flattened)
+ # D9 should have a positive value
+ self.assertGreater(player.root.children[flattened].Q, 0)
+ self.assertGreaterEqual(player.root.N, 20)
+ # passing should be ineffective.
+ self.assertLess(player.root.child_Q[-1], 0)
+ # no virtual losses should be pending
+ self.assertNoPendingVirtualLosses(player.root)
+ # uncomment to debug this test
+ # print(player.root.describe())
+
+ def test_parallel_tree_search(self):
+ player = initialize_almost_done_player()
+ # check -- white is losing.
+ self.assertEqual(player.root.position.score(), -0.5)
+ # initialize the tree so that the root node has populated children.
+ player.tree_search(num_parallel=1)
+ # virtual losses should enable multiple searches to happen simultaneously
+ # without throwing an error...
+ for i in range(5):
+ player.tree_search(num_parallel=4)
+ # uncomment to debug this test
+ # print(player.root.describe())
+
+ # Search should converge on D9 as only winning move.
+ flattened = coords.to_flat(utils_test.BOARD_SIZE, coords.from_kgs(
+ utils_test.BOARD_SIZE, 'D9'))
+ best_move = np.argmax(player.root.child_N)
+ self.assertEqual(best_move, flattened)
+ # D9 should have a positive value
+ self.assertGreater(player.root.children[flattened].Q, 0)
+ self.assertGreaterEqual(player.root.N, 20)
+ # passing should be ineffective.
+ self.assertLess(player.root.child_Q[-1], 0)
+ # no virtual losses should be pending
+ self.assertNoPendingVirtualLosses(player.root)
+
+ def test_ridiculously_parallel_tree_search(self):
+ player = initialize_almost_done_player()
+    # Test that an almost-complete game can be searched with more parallelism
+    # than there are legal moves.
+ for i in range(10):
+ player.tree_search(num_parallel=50)
+ self.assertNoPendingVirtualLosses(player.root)
+
+ def test_long_game_tree_search(self):
+ player = MCTSPlayerMixin(utils_test.BOARD_SIZE, DummyNet())
+ endgame = go.Position(
+ utils_test.BOARD_SIZE,
+ board=TT_FTW_BOARD,
+ n=MAX_DEPTH-2,
+ komi=2.5,
+ ko=None,
+ recent=(go.PlayerMove(go.BLACK, (0, 1)),
+ go.PlayerMove(go.WHITE, (0, 8))),
+ to_play=go.BLACK
+ )
+ player.initialize_game(endgame)
+
+    # Test that a game right at the move-limit cutoff can still be searched.
+ for i in range(10):
+ player.tree_search(num_parallel=8)
+ self.assertNoPendingVirtualLosses(player.root)
+ self.assertGreater(player.root.Q, 0)
+
+ def test_cold_start_parallel_tree_search(self):
+ # Test that parallel tree search doesn't trip on an empty tree
+ player = MCTSPlayerMixin(utils_test.BOARD_SIZE, DummyNet(fake_value=0.17))
+ player.initialize_game()
+ self.assertEqual(player.root.N, 0)
+ self.assertFalse(player.root.is_expanded)
+ player.tree_search(num_parallel=4)
+ self.assertNoPendingVirtualLosses(player.root)
+ # Even though the root gets selected 4 times by tree search, its
+ # final visit count should just be 1.
+ self.assertEqual(player.root.N, 1)
+ # 0.085 = average(0, 0.17), since 0 is the prior on the root.
+ self.assertAlmostEqual(player.root.Q, 0.085)
+
+ def test_tree_search_failsafe(self):
+ # Test that the failsafe works correctly. It can trigger if the MCTS
+ # repeatedly visits a finished game state.
+ probs = np.array([.001] * (
+ utils_test.BOARD_SIZE * utils_test.BOARD_SIZE + 1))
+ probs[-1] = 1 # Make the dummy net always want to pass
+ player = MCTSPlayerMixin(utils_test.BOARD_SIZE, DummyNet(fake_priors=probs))
+ pass_position = go.Position(utils_test.BOARD_SIZE).pass_move()
+ player.initialize_game(pass_position)
+ player.tree_search(num_parallel=1)
+ self.assertNoPendingVirtualLosses(player.root)
+
+ def test_only_check_game_end_once(self):
+ # When presented with a situation where the last move was a pass,
+ # and we have to decide whether to pass, it should be the first thing
+ # we check, but not more than that.
+
+ white_passed_pos = go.Position(
+ utils_test.BOARD_SIZE,).play_move(
+ (3, 3) # b plays
+ ).play_move(
+ (3, 4) # w plays
+ ).play_move(
+ (4, 3) # b plays
+ ).pass_move() # w passes - if B passes too, B would lose by komi.
+
+ player = MCTSPlayerMixin(utils_test.BOARD_SIZE, DummyNet())
+ player.initialize_game(white_passed_pos)
+ # initialize the root
+ player.tree_search()
+ # explore a child - should be a pass move.
+ player.tree_search()
+ pass_move = utils_test.BOARD_SIZE * utils_test.BOARD_SIZE
+ self.assertEqual(player.root.children[pass_move].N, 1)
+ self.assertEqual(player.root.child_N[pass_move], 1)
+ player.tree_search()
+ # check that we didn't visit the pass node any more times.
+ self.assertEqual(player.root.child_N[pass_move], 1)
+
+ def test_extract_data_normal_end(self):
+ player = MCTSPlayerMixin(utils_test.BOARD_SIZE, DummyNet())
+ player.initialize_game()
+ player.tree_search()
+ player.play_move(None)
+ player.tree_search()
+ player.play_move(None)
+ self.assertTrue(player.root.is_done())
+ player.set_result(player.root.position.result(), was_resign=False)
+
+ data = list(player.extract_data())
+ self.assertEqual(len(data), 2)
+ position, pi, result = data[0]
+ # White wins by komi
+ self.assertEqual(result, go.WHITE)
+ self.assertEqual(player.result_string, 'W+{}'.format(
+ player.root.position.komi))
+
+ def test_extract_data_resign_end(self):
+ player = MCTSPlayerMixin(utils_test.BOARD_SIZE, DummyNet())
+ player.initialize_game()
+ player.tree_search()
+ player.play_move((0, 0))
+ player.tree_search()
+ player.play_move(None)
+ player.tree_search()
+ # Black is winning on the board
+ self.assertEqual(player.root.position.result(), go.BLACK)
+ # But if Black resigns
+ player.set_result(go.WHITE, was_resign=True)
+
+ data = list(player.extract_data())
+ position, pi, result = data[0]
+ # Result should say White is the winner
+ self.assertEqual(result, go.WHITE)
+ self.assertEqual(player.result_string, 'W+R')
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/symmetries.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/symmetries.py"
new file mode 100644
index 00000000..964d80a0
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/symmetries.py"
@@ -0,0 +1,86 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Define symmetries for feature transformation.
+
+Allowable symmetries:
+ identity [12][34]
+ rot90 [24][13]
+ rot180 [43][21]
+ rot270 [31][42]
+ flip [13][24]
+ fliprot90 [34][12]
+ fliprot180 [42][31]
+ fliprot270 [21][43]
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import functools
+import random
+
+import numpy as np
+
+INVERSES = {
+ 'identity': 'identity',
+ 'rot90': 'rot270',
+ 'rot180': 'rot180',
+ 'rot270': 'rot90',
+ 'flip': 'flip',
+ 'fliprot90': 'fliprot90',
+ 'fliprot180': 'fliprot180',
+ 'fliprot270': 'fliprot270',
+}
+
+IMPLS = {
+ 'identity': lambda x: x,
+ 'rot90': np.rot90,
+ 'rot180': functools.partial(np.rot90, k=2),
+ 'rot270': functools.partial(np.rot90, k=3),
+ 'flip': lambda x: np.rot90(np.fliplr(x)),
+ 'fliprot90': np.flipud,
+ 'fliprot180': lambda x: np.rot90(np.flipud(x)),
+ 'fliprot270': np.fliplr,
+}
+
+assert set(INVERSES.keys()) == set(IMPLS.keys())
+SYMMETRIES = list(INVERSES.keys())
+
+
+# A symmetry is just a string describing the transformation.
+def invert_symmetry(s):
+ return INVERSES[s]
+
+
+def apply_symmetry_feat(s, features):
+ return IMPLS[s](features)
+
+
+def apply_symmetry_pi(board_size, s, pi):
+ pi = np.copy(pi)
+ # rotate all moves except for the pass move at end
+ pi[:-1] = IMPLS[s](pi[:-1].reshape([board_size, board_size])).ravel()
+ return pi
+
+
+def randomize_symmetries_feat(features):
+ symmetries_used = [random.choice(SYMMETRIES) for f in features]
+ return symmetries_used, [apply_symmetry_feat(s, f)
+ for s, f in zip(symmetries_used, features)]
+
+
+def invert_symmetries_pi(board_size, symmetries, pis):
+ return [apply_symmetry_pi(board_size, invert_symmetry(s), pi)
+ for s, pi in zip(symmetries, pis)]
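
A quick sketch of the intended round trip (assuming a 9x9 board and an arbitrary number of feature planes): features are transformed by a random symmetry before inference and the policy the network returns is mapped back.

```python
import numpy as np

import symmetries

BOARD_SIZE = 9
feats = np.random.random([BOARD_SIZE, BOARD_SIZE, 17])  # placeholder planes
pi = np.random.random([BOARD_SIZE * BOARD_SIZE + 1])    # placeholder policy

# Pick a random symmetry per feature tensor, transform, and later undo it on
# the policy the network produced for the transformed input.
syms, transformed = symmetries.randomize_symmetries_feat([feats])
restored = symmetries.invert_symmetries_pi(BOARD_SIZE, syms, [pi])

# Applying a symmetry and then its inverse is the identity on the policy.
s = syms[0]
roundtrip = symmetries.apply_symmetry_pi(
    BOARD_SIZE, symmetries.invert_symmetry(s),
    symmetries.apply_symmetry_pi(BOARD_SIZE, s, pi))
assert np.allclose(roundtrip, pi)
```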
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/symmetries_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/symmetries_test.py"
new file mode 100644
index 00000000..94150252
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/symmetries_test.py"
@@ -0,0 +1,119 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for symmetries."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import itertools
+
+import tensorflow as tf # pylint: disable=g-bad-import-order
+
+import coords
+import numpy as np
+import symmetries
+import utils_test
+
+tf.logging.set_verbosity(tf.logging.ERROR)
+
+
+class TestSymmetryOperations(utils_test.MiniGoUnitTest):
+
+ def setUp(self):
+ np.random.seed(1)
+ self.feat = np.random.random(
+ [utils_test.BOARD_SIZE, utils_test.BOARD_SIZE, 3])
+ self.pi = np.random.random([utils_test.BOARD_SIZE ** 2 + 1])
+ super().setUp()
+
+ def test_inversions(self):
+ for s in symmetries.SYMMETRIES:
+ with self.subTest(symmetry=s):
+ self.assertEqualNPArray(
+ self.feat, symmetries.apply_symmetry_feat(
+ s, symmetries.apply_symmetry_feat(
+ symmetries.invert_symmetry(s), self.feat)))
+ self.assertEqualNPArray(
+ self.feat, symmetries.apply_symmetry_feat(
+ symmetries.invert_symmetry(s), symmetries.apply_symmetry_feat(
+ s, self.feat)))
+
+ self.assertEqualNPArray(
+ self.pi, symmetries.apply_symmetry_pi(
+ utils_test.BOARD_SIZE, s, symmetries.apply_symmetry_pi(
+ utils_test.BOARD_SIZE, symmetries.invert_symmetry(s),
+ self.pi)))
+ self.assertEqualNPArray(
+ self.pi, symmetries.apply_symmetry_pi(
+ utils_test.BOARD_SIZE, symmetries.invert_symmetry(s),
+ symmetries.apply_symmetry_pi(
+ utils_test.BOARD_SIZE, s, self.pi)))
+
+ def test_compositions(self):
+ test_cases = [
+ ('rot90', 'rot90', 'rot180'),
+ ('rot90', 'rot180', 'rot270'),
+ ('identity', 'rot90', 'rot90'),
+ ('fliprot90', 'rot90', 'fliprot180'),
+ ('rot90', 'rot270', 'identity'),
+ ]
+ for s1, s2, composed in test_cases:
+ with self.subTest(s1=s1, s2=s2, composed=composed):
+ self.assertEqualNPArray(symmetries.apply_symmetry_feat(
+ composed, self.feat), symmetries.apply_symmetry_feat(
+ s2, symmetries.apply_symmetry_feat(s1, self.feat)))
+ self.assertEqualNPArray(
+ symmetries.apply_symmetry_pi(
+ utils_test.BOARD_SIZE, composed, self.pi),
+ symmetries.apply_symmetry_pi(
+ utils_test.BOARD_SIZE, s2,
+ symmetries.apply_symmetry_pi(
+ utils_test.BOARD_SIZE, s1, self.pi)))
+
+ def test_uniqueness(self):
+ all_symmetries_f = [
+ symmetries.apply_symmetry_feat(
+ s, self.feat) for s in symmetries.SYMMETRIES
+ ]
+ all_symmetries_pi = [
+ symmetries.apply_symmetry_pi(
+ utils_test.BOARD_SIZE, s, self.pi) for s in symmetries.SYMMETRIES
+ ]
+ for f1, f2 in itertools.combinations(all_symmetries_f, 2):
+ self.assertNotEqualNPArray(f1, f2)
+ for pi1, pi2 in itertools.combinations(all_symmetries_pi, 2):
+ self.assertNotEqualNPArray(pi1, pi2)
+
+ def test_proper_move_transform(self):
+ # Check that the reinterpretation of 362 = 19*19 + 1 during symmetry
+ # application is consistent with coords.from_flat
+ move_array = np.arange(utils_test.BOARD_SIZE ** 2 + 1)
+ coord_array = np.zeros([utils_test.BOARD_SIZE, utils_test.BOARD_SIZE])
+ for c in range(utils_test.BOARD_SIZE ** 2):
+ coord_array[coords.from_flat(utils_test.BOARD_SIZE, c)] = c
+ for s in symmetries.SYMMETRIES:
+ with self.subTest(symmetry=s):
+ transformed_moves = symmetries.apply_symmetry_pi(
+ utils_test.BOARD_SIZE, s, move_array)
+ transformed_board = symmetries.apply_symmetry_feat(s, coord_array)
+ for new_coord, old_coord in enumerate(transformed_moves[:-1]):
+ self.assertEqual(
+ old_coord,
+ transformed_board[
+ coords.from_flat(utils_test.BOARD_SIZE, new_coord)])
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/utils.py"
new file mode 100644
index 00000000..c9f873fa
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/utils.py"
@@ -0,0 +1,214 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Utilities for MiniGo and DualNet model."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from contextlib import contextmanager
+import functools
+import itertools
+import math
+import operator
+import os
+import random
+import re
+import string
+import time
+
+import tensorflow as tf # pylint: disable=g-bad-import-order
+
+# Regular expression of model number and name.
+MODEL_NUM_REGEX = r'^\d{6}' # model_num consists of six digits
+# model_name consists of six digits followed by a dash and the model name
+MODEL_NAME_REGEX = r'^\d{6}(-\w+)+'
+
+
+def random_generator(size=6, chars=string.ascii_letters + string.digits):
+ return ''.join(random.choice(chars) for x in range(size))
+
+
+def generate_model_name(model_num):
+ """Generate a full model name for the given model number.
+
+ Args:
+ model_num: The number/generation of the model.
+
+ Returns:
+ The model's full name: model_num-model_name.
+ """
+ if model_num == 0: # Model number for bootstrap model
+ new_name = 'bootstrap'
+ else:
+ new_name = random_generator()
+ full_name = '{:06d}-{}'.format(model_num, new_name)
+ return full_name
+
+
+def detect_model_num(full_name):
+ """Take the full name of a model and extract its model number.
+
+ Args:
+ full_name: The full name of a model.
+
+ Returns:
+ The model number. For example: '000000-bootstrap.index' => 0.
+ """
+ match = re.match(MODEL_NUM_REGEX, full_name)
+ if match:
+ return int(match.group())
+ else:
+ return None
+
+
+def detect_model_name(full_name):
+ """Take the full name of a model and extract its model name.
+
+ Args:
+ full_name: The full name of a model.
+
+ Returns:
+ The model name. For example: '000000-bootstrap.index' => '000000-bootstrap'.
+ """
+ match = re.match(MODEL_NAME_REGEX, full_name)
+ if match:
+ return match.group()
+ else:
+ return None
+
+
+def get_models(models_dir):
+ """Get all models.
+
+ Args:
+ models_dir: The directory of all models.
+
+ Returns:
+ A list of model number and names sorted increasingly. For example:
+ [(13, 000013-modelname), (17, 000017-modelname), ...etc]
+ """
+ all_models = tf.gfile.Glob(os.path.join(models_dir, '*.meta'))
+ model_filenames = [os.path.basename(m) for m in all_models]
+ model_numbers_names = sorted([
+ (detect_model_num(m), detect_model_name(m))
+ for m in model_filenames])
+ return model_numbers_names
+
+
+def get_latest_model(models_dir):
+ """Find the latest model.
+
+ Args:
+ models_dir: The directory of all models.
+
+ Returns:
+ The model number and name of the latest model. For example:
+ (17, 000017-modelname)
+ """
+ models = get_models(models_dir)
+  if not models:
+ models = [(0, '000000-bootstrap')]
+ return models[-1]
+
+
+def round_power_of_two(n):
+ """Finds the nearest power of 2 to a number.
+
+ Thus 84 -> 64, 120 -> 128, etc.
+
+ Args:
+ n: The given number.
+
+ Returns:
+ The nearest 2-power number to n.
+ """
+ return 2 ** int(round(math.log(n, 2)))
+
+
+def parse_game_result(result):
+ if re.match(r'[bB]\+', result):
+ return 1
+ elif re.match(r'[wW]\+', result):
+ return -1
+ else:
+ return 0
+
+
+def product(numbers):
+ return functools.reduce(operator.mul, numbers)
+
+
+def take_n(n, iterable):
+ return list(itertools.islice(iterable, n))
+
+
+def iter_chunks(chunk_size, iterator):
+ iterator = iter(iterator)
+ while True:
+ next_chunk = take_n(chunk_size, iterator)
+ # If len(iterable) % chunk_size == 0, don't return an empty chunk.
+ if next_chunk:
+ yield next_chunk
+ else:
+ break
+
+
+def shuffler(iterator, pool_size=10**5, refill_threshold=0.9):
+ yields_between_refills = round(pool_size * (1 - refill_threshold))
+ # initialize pool; this step may or may not exhaust the iterator.
+ pool = take_n(pool_size, iterator)
+ while True:
+ random.shuffle(pool)
+ for _ in range(yields_between_refills):
+ yield pool.pop()
+ next_batch = take_n(yields_between_refills, iterator)
+ if not next_batch:
+ break
+ pool.extend(next_batch)
+  # Finish consuming whatever's left - no need for further randomization.
+  # (Equivalent to `yield from pool`, spelled out for Python 2 compatibility.)
+  for p in pool:
+    yield p
+
+
+@contextmanager
+def timer(message):
+ tick = time.time()
+ yield
+ tock = time.time()
+  print('{}: {:.3} seconds'.format(message, (tock - tick)))
+
+
+@contextmanager
+def logged_timer(message):
+ tick = time.time()
+ yield
+ tock = time.time()
+ print('{}: {:.3} seconds'.format(message, (tock - tick)))
+ tf.logging.info('{}: {:.3} seconds'.format(message, (tock - tick)))
+
+
+class MiniGoDirectory(object):
+ """The class to set up directories of MiniGo."""
+
+ def __init__(self, base_dir):
+ self.trained_models_dir = os.path.join(base_dir, 'trained_models')
+ self.estimator_model_dir = os.path.join(base_dir, 'estimator_model_dir/')
+ self.selfplay_dir = os.path.join(base_dir, 'data/selfplay/')
+ self.holdout_dir = os.path.join(base_dir, 'data/holdout/')
+ self.training_chunk_dir = os.path.join(base_dir, 'data/training_chunks/')
+ self.sgf_dir = os.path.join(base_dir, 'sgf/')
+ self.evaluate_dir = os.path.join(base_dir, 'sgf/evaluate/')
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/utils_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/utils_test.py"
new file mode 100644
index 00000000..3f0a3b85
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/minigo/utils_test.py"
@@ -0,0 +1,203 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for utils, and base class for other unit tests."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import random
+import re
+import tempfile
+import time
+
+import tensorflow as tf # pylint: disable=g-bad-import-order
+
+import go
+import numpy as np
+import utils
+
+tf.logging.set_verbosity(tf.logging.ERROR)
+
+BOARD_SIZE = 9
+EMPTY_BOARD = np.zeros([BOARD_SIZE, BOARD_SIZE], dtype=np.int8)
+ALL_COORDS = [(i, j) for i in range(BOARD_SIZE) for j in range(BOARD_SIZE)]
+
+
+def _check_bounds(c):
+ return c[0] % BOARD_SIZE == c[0] and c[1] % BOARD_SIZE == c[1]
+
+NEIGHBORS = {(x, y): list(filter(_check_bounds, [
+ (x+1, y), (x-1, y), (x, y+1), (x, y-1)])) for x, y in ALL_COORDS}
+
+
+def load_board(string):
+ reverse_map = {
+ 'X': go.BLACK,
+ 'O': go.WHITE,
+ '.': go.EMPTY,
+ '#': go.FILL,
+ '*': go.KO,
+ '?': go.UNKNOWN
+ }
+ string = re.sub(r'[^XO\.#]+', '', string)
+ if len(string) != BOARD_SIZE ** 2:
+ raise ValueError("Board to load didn't have right dimensions")
+ board = np.zeros([BOARD_SIZE, BOARD_SIZE], dtype=np.int8)
+ for ii, char in enumerate(string):
+ np.ravel(board)[ii] = reverse_map[char]
+ return board
+
+
+class TestUtils(tf.test.TestCase):
+
+ def test_bootstrap_name(self):
+ name = utils.generate_model_name(0)
+ self.assertIn('bootstrap', name)
+
+ def test_generate_model_name(self):
+ name = utils.generate_model_name(17)
+ self.assertIn('000017', name)
+
+ def test_detect_name(self):
+ string = '000017-model.index'
+ detected_name = utils.detect_model_name(string)
+ self.assertEqual(detected_name, '000017-model')
+
+ def test_detect_num(self):
+ string = '000017-model.index'
+ detected_name = utils.detect_model_num(string)
+ self.assertEqual(detected_name, 17)
+
+ def test_get_models(self):
+ with tempfile.TemporaryDirectory() as models_dir:
+ model1 = '000013-model.meta'
+ model2 = '000017-model.meta'
+ f1 = open(os.path.join(models_dir, model1), 'w')
+ f1.close()
+ f2 = open(os.path.join(models_dir, model2), 'w')
+ f2.close()
+ model_nums_names = utils.get_models(models_dir)
+ self.assertEqual(len(model_nums_names), 2)
+ self.assertEqual(model_nums_names[0], (13, '000013-model'))
+ self.assertEqual(model_nums_names[1], (17, '000017-model'))
+
+ def test_get_latest_model(self):
+ with tempfile.TemporaryDirectory() as models_dir:
+ model1 = '000013-model.meta'
+ model2 = '000017-model.meta'
+ f1 = open(os.path.join(models_dir, model1), 'w')
+ f1.close()
+ f2 = open(os.path.join(models_dir, model2), 'w')
+ f2.close()
+ latest_model = utils.get_latest_model(models_dir)
+ self.assertEqual(latest_model, (17, '000017-model'))
+
+ def test_round_power_of_two(self):
+ self.assertEqual(utils.round_power_of_two(84), 64)
+ self.assertEqual(utils.round_power_of_two(120), 128)
+
+ def test_shuffler(self):
+ random.seed(1)
+ dataset = (i for i in range(10))
+ shuffled = list(utils.shuffler(
+ dataset, pool_size=5, refill_threshold=0.8))
+ self.assertEqual(len(shuffled), 10)
+ self.assertNotEqual(shuffled, list(range(10)))
+
+ def test_parse_game_result(self):
+ self.assertEqual(utils.parse_game_result('B+3.5'), go.BLACK)
+ self.assertEqual(utils.parse_game_result('W+T'), go.WHITE)
+ self.assertEqual(utils.parse_game_result('Void'), 0)
+
+
+class MiniGoUnitTest(tf.test.TestCase):
+
+ @classmethod
+ def setUpClass(cls):
+ cls.start_time = time.time()
+
+ @classmethod
+ def tearDownClass(cls):
+ print('\n%s.%s: %.3f seconds' %
+ (cls.__module__, cls.__name__, time.time() - cls.start_time))
+
+ def assertEqualNPArray(self, array1, array2):
+ if not np.all(array1 == array2):
+ raise AssertionError(
+ 'Arrays differed in one or more locations:\n%s\n%s' % (array1, array2)
+ )
+
+ def assertNotEqualNPArray(self, array1, array2):
+ if np.all(array1 == array2):
+ raise AssertionError('Arrays were identical:\n%s' % array1)
+
+ def assertEqualLibTracker(self, lib_tracker1, lib_tracker2):
+ # A lib tracker may have differently numbered groups yet still
+ # represent the same set of groups.
+ # "Sort" the group_ids to ensure they are the same.
+ def find_group_mapping(lib_tracker):
+ current_gid = 0
+ mapping = {}
+ for group_id in lib_tracker.group_index.ravel().tolist():
+ if group_id == go.MISSING_GROUP_ID:
+ continue
+ if group_id not in mapping:
+ mapping[group_id] = current_gid
+ current_gid += 1
+ return mapping
+
+ lt1_mapping = find_group_mapping(lib_tracker1)
+ lt2_mapping = find_group_mapping(lib_tracker2)
+
+ remapped_group_index1 = [
+ lt1_mapping.get(gid, go.MISSING_GROUP_ID)
+ for gid in lib_tracker1.group_index.ravel().tolist()]
+ remapped_group_index2 = [
+ lt2_mapping.get(gid, go.MISSING_GROUP_ID)
+ for gid in lib_tracker2.group_index.ravel().tolist()]
+ self.assertEqual(remapped_group_index1, remapped_group_index2)
+
+ remapped_groups1 = {lt1_mapping.get(
+ gid): group for gid, group in lib_tracker1.groups.items()}
+ remapped_groups2 = {lt2_mapping.get(
+ gid): group for gid, group in lib_tracker2.groups.items()}
+ self.assertEqual(remapped_groups1, remapped_groups2)
+
+ self.assertEqualNPArray(
+ lib_tracker1.liberty_cache, lib_tracker2.liberty_cache)
+
+ def assertEqualPositions(self, pos1, pos2):
+ self.assertEqualNPArray(pos1.board, pos2.board)
+ self.assertEqualLibTracker(pos1.lib_tracker, pos2.lib_tracker)
+ self.assertEqual(pos1.n, pos2.n)
+ self.assertEqual(pos1.caps, pos2.caps)
+ self.assertEqual(pos1.ko, pos2.ko)
+ r_len = min(len(pos1.recent), len(pos2.recent))
+ if r_len > 0: # if a position has no history, then don't bother testing
+ self.assertEqual(pos1.recent[-r_len:], pos2.recent[-r_len:])
+ self.assertEqual(pos1.to_play, pos2.to_play)
+
+ def assertNoPendingVirtualLosses(self, root):
+ """Raise an error if any node in this subtree has vlosses pending."""
+ queue = [root]
+ while queue:
+ current = queue.pop()
+ self.assertEqual(current.losses_applied, 0)
+ queue.extend(current.children.values())
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/README.md"
new file mode 100644
index 00000000..2c4672df
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/README.md"
@@ -0,0 +1,84 @@
+# MorphNet: Fast & Simple Resource-Constrained Structure Learning of Deep Networks
+
+[TOC]
+
+## What is MorphNet?
+
+MorphNet is a method for learning deep network structure during training. The
+key principle is continuous relaxation of the network-structure learning
+problem. Specifically, we use regularizers that induce sparsity in the space of
+activations of the network. The regularizers can be tailored to target the
+consumption of specific resources by the network, such as FLOPs or model size.
+When such a regularizer is added to the training loss and their sum is
+minimized via stochastic gradient descent or a similar optimizer, the learning
+problem also becomes a constrained optimization of the network structure, under
+the constraint represented by the regularizer. The method is described in
+detail in [this paper](https://arxiv.org/abs/1711.06798), to appear in
+[CVPR 2018](http://cvpr2018.thecvf.com/).
+
+## Adding a MorphNet regularizer to your training code
+
+Your interaction with the MorphNet codebase will most likely be through
+subclasses of `NetworkRegularizer`. Each subclass represents a resource that we
+wish to target/constrain when optimizing the network. The MorphNet package
+provides several `NetworkRegularizer`s in the `network_regularizers` directory,
+as well as a framework for writing your own. The framework is described in
+detail [here](g3doc/regularizers_framework.md). The interface of
+`NetworkRegularizer` is given
+[here](g3doc/regularizers_framework.md?#network-regularizers).
+
+To apply a `NetworkRegularizer` to your network, your code would look similar to
+the example below. The example refers to a specific type of `NetworkRegularizer`
+that targets FLOPs, and to make the discussion simpler we henceforth restrict it
+to this case, but generalization to an arbitrary constrained resource and an
+arbitrary regularization method that targets that resource is straightforward.
+
+```python
+my_gamma_threshold = 1e-3
+regularizer_strength = 1e-9
+network_reg = network_regularizers.GammaFlopsRegularizer(
+ [my_network_output.op], my_gamma_threshold)
+my_training_loss += regularizer_strength * network_reg.get_regularization_term()
+tf.summary.scalar('FLOPs', network_reg.get_cost())
+```
+
+Once you start your training, your TensorBoard will display the effective FLOP
+count of the model. "Effective" means that as activations are zeroed out
+by the regularizer, their impact on the FLOP count is discounted.
+
+
+
+The larger the `regularizer_strength`, the smaller the effective FLOP count to
+which the network will converge. If `regularizer_strength` is large enough, the
+FLOP count will collapse to zero, whereas if it is small enough, the FLOP count
+will remain at its initial value and the network structure will not vary.
+`regularizer_strength` is your knob to control where you want to be on the
+price-performance curve. The `my_gamma_threshold` parameter is used for
+determining when an activation is alive. It is described in more detail
+[here](framework/README.md?#the-opregularizer-interface), including
+an explanation for how to tune it.
+
+## Extracting the architecture learned by MorphNet
+
+One way to extract the structure is querying the `network_reg` object created
+above. To query which activations in a given op were kept alive (as opposed to
+removed) by MorphNet, your code would look similar to
+
+```python
+alive = sess.run(network_reg.opreg_manager.get_regularizer(op).alive_vector)
+```
+
+where `op` is the TensorFlow op in question, and `sess` is a `tf.Session` object.
+The result is a vector of booleans, designating which activations were kept
+alive (more details can be found
+[here](framework/README.md?#the-opregularizer-interface)). Typically
+one would be interested in the number of alive activations, which can be
+obtained by counting the `True` values in `alive`. Looping over all convolutions
+and / or fully connected layers (as `op`) is typically sufficient to extract the
+full structure learned by MorphNet.
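+
+For instance, a minimal sketch of such a loop (assuming `sess` and `network_reg` exist
+as above, and `my_conv_ops` is a hypothetical list of the network's convolution ops)
+could look like:
+
+```python
+# Sketch only: count the alive activations of each convolution.
+alive_counts = {}
+for op in my_conv_ops:  # `my_conv_ops` is a hypothetical list of tf.Operation objects.
+  alive = sess.run(network_reg.opreg_manager.get_regularizer(op).alive_vector)
+  alive_counts[op.name] = int(sum(alive))  # Number of activations kept alive.
+print(alive_counts)
+```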
+
+## Maintainers
+
+* Elad Eban
+* Ariel Gordon, github: [gariel-google](https://github.com/gariel-google).
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/README.md"
new file mode 100644
index 00000000..4266c85d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/README.md"
@@ -0,0 +1,367 @@
+# Regularizers Framework
+
+
+[TOC]
+
+
+## Goal
+
+The goal of this framework is to facilitate building sparsifying regularizers
+for deep networks. A regularizer targets a certain cost (***targeted
+cost***), such as the FLOP cost of inference, model size, latency, memory
+footprint, etc.
+
+In order to form such a regularizer, we traverse the TensorFlow graph and find
+the ops that contribute to the *targeted cost*. For each op, we apply a
+sparsifying regularizer that induces sparsity *among the activations*. The
+sparsifying regularizer of each activation is weighted by its marginal
+contribution to the *targeted cost*.
+
+Calculating this weight may be a nontrivial task. For example, for a fully
+connected layer the FLOP cost is proportional to the number of inputs times
+the number of outputs, which means that the marginal cost of each output
+is proportional to the number of inputs. Some of the inputs may have been
+already regularized away, which means that the calculation of one op's FLOP
+regularizer depends on the regularization of the output of other ops. Moreover,
+if an op receives as its input a concatenation or a sum of several other ops,
+figuring out the regularizer requires some bookkeeping.
+
+The goal of this framework is to take care of this bookkeeping in a general way,
+to facilitate building a wide variety of regularizers, targeting a wide variety
+of *targeted costs*, with little effort and fewer opportunities to err. In
+what follows we outline the framework, building it from the bottom up: From a
+single activation all the way to a full complex network.
+
+## `OpRegularizers` and how they are assigned
+
+### The `OpRegularizer` interface
+
+`OpRegularizer` is the most primitive element in the framework. An
+`OpRegularizer` refers to a TensorFlow op, and has two methods,
+`regularization_vector` and `alive_vector`, both returning `tf.Tensor`s of rank 1
+(vectors). `regularization_vector` is of type float, and its `i`-th entry is the
+regularizer of the `i`-th activation of the op the `OpRegularizer` refers to.
+In order to regularize away that activation, one would need to add the `i`-th
+entry of `regularization_vector`, multiplied by some coefficient, to the
+training loss. The stronger we want to penalize it, the larger the coefficient
+is. Assuming that the regularizer is of sparsifying nature (e.g. L1 norm), with
+a large enough coefficient, the `i`-th activation will eventually vanish.
+Loosely speaking, if we were to target the total number of activations in the
+network, we would add the sum of all `regularization_vector`s from all
+`OpRegularizer`s to the training loss.
+
+Since `OpRegularizer` is an abstract interface, with no awareness of the nature
+of regularization used, the decision when an activation can be considered alive
+is also deferred to `OpRegularizer`, via the `alive_vector` method. The `i`-th
+entry evaluates to a boolean that indicates whether the activation is alive.
+
+```python
+class OpRegularizer(object):
+
+ @abc.abstractproperty
+ def regularization_vector(self):
+ """Returns a vector of floats with a regularizer for each activation."""
+ pass
+
+ @abc.abstractproperty
+ def alive_vector(self):
+ """Returns a bool vector indicating which activations are alive."""
+ pass
+```
+
+As an example, we can consider a fully connected layer that has `m` inputs and
+`n` outputs. The layer is represented by an `m * n` matrix, and one way to
+impose sparsifying regularizer on the `i`-th output is by grouping all weights
+associated with it into a group LASSO regularizer, such as the L2 norm of the
+`i`-th row of the matrix. That would therefore be the `i`-th entry of the
+`regularization_vector`.
+
+When such a regularization is added to the training loss, the L2 norms of the
+rows of the matrix tend to form a bimodal distribution with one peak near "zero"
+(up to numerical noise), another peak away from zero, and a void in between. A
+natural way to determine whether the `i`-th activation is alive is thus by
+comparing the `i`-th entry of the `regularization_vector` to some threshold that
+lies in that void: If it's above the threshold, it's alive.
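+
+As a rough sketch (not the library's implementation): `weights` is assumed to be the
+layer's `[m, n]` weight matrix, the weights associated with each output are grouped by
+reducing along the input dimension (`axis=0` here), and `gamma_threshold` is the chosen
+threshold:
+
+```python
+import tensorflow as tf
+
+# Group-LASSO style regularization vector: L2 norm over the input dimension,
+# one group per output activation.
+regularization_vector = tf.sqrt(tf.reduce_sum(tf.square(weights), axis=0))  # shape [n]
+# An activation is considered alive if its group norm exceeds the threshold.
+alive_vector = tf.greater(regularization_vector, gamma_threshold)
+```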
+
+
+
+There are ops that are not regularized, such as constants, or the input to the
+network. For an un-regularized op, the `OpRegularizer` is set to `None`, which
+implies an all-zero `regularization_vector` and an all-True `alive_vector`.
+
+### Rules for assigning `OpRegularizer`s to ops
+
+As we traverse the TensorFlow graph, we assign an `OpRegularizer` to each op we
+encounter according to the set of rules outlined in this section. We first
+explain "default rules", rules that address propagating `OpRegularizers` across
+connections in the TensorFlow graph. Then we discuss client-specified rules,
+which can augment and override the default rules.
+
+#### Pass-through ops
+
+Many TensorFlow ops inherit the `OpRegularizer` of their input. These are ops
+that:
+
+* Don't change the alive status of activations.
+* The only way an activation can be eliminated from their output is if
+it's eliminated from their input.
+
+An example is adding a bias to the output of a convolution. After adding a bias
+to it, an activation will be alive (that is, have nonzero variance) if and only
+if it was alive before adding the bias. If we want to regularize away an activation
+at the output of a `BiasAdd` op, the only way to do so is to penalize the same
+activation in the preceding convolution.
+
+Since both the `regularization_vector` and the `alive_vector` of such an op are
+identical to those of its input, so is the entire `OpRegularizer`. We refer to
+such ops as *pass-through* ops. Shape-preserving unary ops (e.g. ReLU) are
+generally *pass-through*, but some binary ops are too. In our framework ops are
+assumed to be *pass-through* by default. Exceptions to this rule are discussed
+below.
+
+#### Grouping
+
+When learning the number of outputs of ops in a TensorFlow graph, some ops are
+constrained to maintain the same number of outputs as others. Elementwise
+ops that are performed on two (or more) tensors, such as addition,
+multiplication, or maximum, constrain their input tensors to have the same size.
+
+Common use cases are attention maps, recurrent models, and residual connections.
+An example of a residual connection is illustrated in the diagram below. It
+would be problematic if the activations of op1 and op2 didn't live or die
+together. For example, if the `i`-th activation of op1 is alive but for op2 it's
+dead, we still cannot eliminate the `i`-th activation from op2 without breaking
+the topology of the network.
+
+
+
+In our framework we choose to impose preservation of the topology. That is, ops
+that are connected with addition (or other elementwise binary ops) are
+constrained to have their activations live and die together. The `i`-th
+activations of each of those ops are grouped together in a single LASSO group.
+The default grouping mechanism is maximum for the `regularization_vector` and
+elementwise logical OR for the `alive_vector`. To regularize away the `i`-th
+element of the group one needs to penalize the maximum of `i`-th regularization
+terms of all ops comprising the group, and to declare the entire `i`-th group
+dead, the `i`-th element in all ops comprising the group must be dead. However,
+the framework admits other forms of grouping, and user-defined grouping methods
+can be easily plugged into it.
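+
+As a toy illustration of the default grouping rule only (maximum for regularization,
+logical OR for aliveness), with made-up vectors:
+
+```python
+import tensorflow as tf
+
+# Two toy regularizers being grouped, e.g. for a residual connection.
+reg_a = tf.constant([0.1, 0.6, 0.0])
+reg_b = tf.constant([0.4, 0.2, 0.0])
+alive_a = tf.constant([False, True, False])
+alive_b = tf.constant([True, False, False])
+
+grouped_regularization = tf.maximum(reg_a, reg_b)  # -> [0.4, 0.6, 0.0]
+grouped_alive = tf.logical_or(alive_a, alive_b)    # -> [True, True, False]
+```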
+
+One property of the grouping, which may seem confusing initially, is that once
+two (or more) `OpRegularizer`s are grouped, and the `OpRegularizer` of the
+group is formed, the `OpRegularizer`s comprising the group are all 'replaced' by
+the `OpRegularizer` of the group. For example, in the diagram above, the
+`OpRegularizer`s of op1 and op2 have to be grouped. Therefore if the `i`-th
+output of op1 is alive and that of op2 is dead, and we use the default grouping
+described above, the `i`-th output of the group is *alive*.
+
+Now, consider op4, which receives only op2 as input. From the point of view of
+op4, the `i`-th activation of op2 must be considered *alive*, even though the
+original op2 regularizer deemed it *dead*. This is because we already know that
+we won't be able to do away with the `i`-th activation of op2 - it is tied to
+the one of op1, which is alive. Therefore after the grouping, the
+`OpRegularizer`s of all constituents of the group are henceforth *replaced* by
+the `OpRegularizer` of the group.
+
+#### Concatenation
+
+Often outputs of several ops are concatenated to a single tensor. For example,
+in Inception networks, the outputs of various convolutional 'towers' are
+concatenated along the channels dimension. In such a case, it is obvious that
+the `regularization_vector` (`alive_vector`) of the concatenation is a
+concatenation of the `regularization_vector` (`alive_vector`) of the
+concatenated ops.
+
+Similarly to the logic of grouping, once the concatenation of the
+`OpRegularizer`s has happened, the concatenated `OpRegularizer`s cease to exist
+and are replaced by slices of their concatenation. For example, if op1 has 3
+outputs and op2 has 4, and op3 is their concatenation, op3 has 7 outputs. After
+the concatenation, the `alive_vector` of op1 will be a slice (from index 0 to
+index 2) of the `alive_vector` of op3, whereas for op2 it will be another slice
+(from index 3 to index 6).
+
+If op3 is later grouped with op4, as happens in Inception ResNet architectures,
+a group will be formed, and the `alive_vector` of op1 will henceforth be a slice
+(from index 0 to index 2) of the `alive_vector` of *the new group*. This is for
+the same reasons as the ones described in the section above.
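+
+Using the op1/op2/op3 example above, the bookkeeping amounts to concatenating and then
+slicing the vectors (a toy sketch with plain tensors, not the framework's regularizer
+objects):
+
+```python
+import tensorflow as tf
+
+# op1 has 3 outputs, op2 has 4, and op3 is their concatenation (7 outputs).
+alive1 = tf.constant([True, False, True])
+alive2 = tf.constant([False, True, True, False])
+alive3 = tf.concat([alive1, alive2], 0)    # 7 entries.
+alive1_view = tf.slice(alive3, [0], [3])   # op1's alive vector as a slice of op3's.
+alive2_view = tf.slice(alive3, [3], [4])   # op2's alive vector as another slice.
+```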
+
+#### Client-specified rules
+
+The client code of the framework has the opportunity to specify rules for
+creating `OpRegularizers`. For example, for ops of type `MatMul`, which are the
+common implementation of fully-connected layers, the client can choose to assign
+group LASSO regularizers similar to the one described above. Typically the
+client code would choose to do that for 'interesting' ops, like convolutions and
+fully-connected layers, but the choice of rules is ultimately deferred to the
+client code.
+
+The client code may also choose to override the *default rules*. Ops are
+considered *pass-through* by default, and obviously there are cases where this
+is not true, such as reshaping, slicing, sparse matrix operations, etc.
+TensorFlow is much too expressive for us to be able to anticipate every usage
+pattern of its ops and to properly regularize them. The set of default rules
+covers most of the common published convolutional networks, but we do not presume
+to cover *all* networks. More complex networks may require adding some custom
+rules.
+
+
+### OpRegularizerManager
+
+`OpRegularizerManager` is the class responsible for assigning an `OpRegularizer`
+to each op in the TensorFlow graph. Its constructor crawls the TensorFlow graph,
+starting from the ops listed in the `ops` argument (typically the output of the
+network), recursively, and assigns `OpRegularizer`s to each op encountered. Once
+the object is constructed, it provides read-only methods that allow querying the
+`OpRegularizer` for any op that was encountered during construction, and a list
+of the latter ops for convenience.
+
+```python
+class OpRegularizerManager(object):
+ """Assigns OpRegularizers to ops in a graph and bookkeeps the mapping."""
+
+ def __init__(self, ops, op_regularizer_factory_dict,
+ create_grouping_regularizer=None):
+ """Creates an instance.
+
+ Args:
+ ops: A list of tf.Operation. An OpRegularizer will be created for all the
+ ops in `ops`, and recursively for all ops they depend on via data
+ dependency. Typically `ops` would contain a single tf.Operation, which
+ is the output of the network.
+ op_regularizer_factory_dict: A dictionary, where the keys are strings
+ representing TensorFlow Op types, and the values are callables that
+ create the respective OpRegularizers. For every op encountered during
+ the recursion, if op.type is in op_regularizer_factory_dict, the
+ respective callable will be used to create an OpRegularizer. The
+ callables take the following args:
+ op: a tf.Operation for which to create a regularizer.
+ opreg_manager: A reference to an OpRegularizerManager object. Can be
+ None if the callable does not need access to OpRegularizerManager.
+ create_grouping_regularizer: A callable that has the signature of
+ grouping_regularizers.MaxGroupingRegularizer's constructor. Will be
+ called whenever a grouping op (see _GROUPING_OPS) is encountered.
+ Defaults to MaxGroupingRegularizer if None.
+
+ Raises:
+ ValueError: If ops is not a list.
+ """
+ ...
+
+ def get_regularizer(self, op):
+ """Returns the OpRegularizer object pertaining to `op`.
+
+ Args:
+ op: a tf.Operation object.
+
+ Returns:
+ An OpRegularizer object, or None if `op` does not have one.
+
+ Raises:
+ ValueError: The OpRegularizerManager object did not encounter `op` when
+ it was constructed and the graph was traversed, and thus does not know
+ the answer.
+ """
+ ...
+
+
+ @property
+ def ops(self):
+ """Returns all tf.Operations for which `get_regularizer` is known."""
+ ...
+
+```
+As the constructor crawls the graph, it invokes the following set of rules, for
+any op encountered:
+
+* If `op_regularizer_factory_dict` has a rule on how to create an
+`OpRegularizer` for the type of the op encountered, invoke the rule. These
+are the user-specified rules. Otherwise:
+* If the op has no inputs, return `None`. Examples are constants and variables.
+Otherwise:
+* If the op is a concatenation, invoke the rule for concatenation described above.
+Otherwise:
+* If the op has more than one regularized input (that is, an input that has a
+non-`None` `OpRegularizer`), perform grouping. Being conservative, we first check
+whether the op is whitelisted as a grouping op (elementwise addition, subtraction,
+etc.). Otherwise:
+* The op is a *pass-through*. That is, its OpRegularizer is the same as that of
+its input.
+
+The implementation is recursive: We start from the output node(s) of the graph.
+To build an `OpRegularizer` for each op, we need to know the `OpRegularizer`s of
+its inputs, so we make a recursive call to find those, and so on.
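+
+A hypothetical usage sketch (only the constructor and factory signatures above are
+taken from the interface; `MyConvRegularizer`, `my_network_output` and `some_conv_op`
+are placeholders):
+
+```python
+from morph_net.framework import op_regularizer_manager
+
+def conv2d_regularizer_factory(op, opreg_manager):
+  # Factory with the (op, opreg_manager) signature described above.
+  del opreg_manager  # This simple factory does not need the manager.
+  return MyConvRegularizer(op)  # Placeholder OpRegularizer subclass.
+
+manager = op_regularizer_manager.OpRegularizerManager(
+    [my_network_output.op],                  # Ops to start crawling from.
+    {'Conv2D': conv2d_regularizer_factory})  # Client-specified rules, keyed by op type.
+conv_regularizer = manager.get_regularizer(some_conv_op)
+```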
+
+
+
+## Network Regularizers
+
+A `NetworkRegularizer` object targets a certain *targeted cost* of an entire
+network. Its interface is:
+
+```python
+class NetworkRegularizer(object):
+ """An interface for Network Regularizers."""
+
+ @abc.abstractmethod
+ def get_regularization_term(self, ops=None):
+ """Compute the FluidNet regularization term.
+
+ Args:
+ ops: A list of tf.Operation. If specified, only the regularization term
+ associated with the ops in `ops` will be returned. Otherwise, all
+ relevant ops in the default TensorFlow graph will be included.
+
+ Returns:
+ A tf.Tensor scalar of floating point type that evaluates to the
+ regularization term.
+ """
+ pass
+
+ @abc.abstractmethod
+ def get_cost(self, ops=None):
+ """Calculates the cost targeted by the Regularizer.
+
+ Args:
+ ops: A list of tf.Operation. If specified, only the cost pertaining to the
+ ops in the `ops` will be returned. Otherwise, all relevant ops in the
+ default TensorFlow graph will be included.
+
+ Returns:
+ A tf.Tensor scalar that evaluates to the cost.
+ """
+ pass
+```
+
+The TensorFlow scalar returned by `get_cost` evaluates to the *targeted
+cost*, and is typically used for monitoring (e.g. displaying it in
+TensorBoard). The scalar returned by `get_regularization_term` is the one that
+has to be added to the training loss, multiplied by a coefficient controlling
+its strength.
+
+`OpRegularizerManager` and the `OpRegularizer`s it provides for ops in the graph
+are intended to facilitate easy implementation of `NetworkRegularizer`s. We
+exemplify it here in the context of targeting FLOPs for a convolutional network,
+but the same principles apply for other *targeted costs*.
+
+Most of the consumption of FLOPs in convolutional networks happens in the
+convolutions. As a first approximation, we can neglect the FLOP impact of the
+other ops in the graph, even though the framework readily allows including the
+FLOP contribution of all ops, even the ones that have negligible cost.
+
+Within this approximation, in order to build the FLOP `NetworkRegularizer`, its
+constructor needs to:
+
+* Crawl the graph, starting from the output of the network, and find all
+convolution ops on which the output depends.
+* For each of these convolution ops, create an `OpRegularizer`.
+* Find the `OpRegularizer` of the *input* of each convolution op.
+* Implement Eq. (6) in the [MorphNet paper](https://arxiv.org/abs/1711.06798) to
+calculate the total FLOP cost of all convolutions (a simplified sketch is given
+below), and an equation similar to Eq. (9) to calculate the respective
+regularization term. We say 'similar' because Eq. (9) refers to a specific type
+of regularization, where the `regularization_vector` of a convolution is the
+absolute value of the respective batch-norm gamma vector. However, the exact
+nature of the `regularization_vector` is delegated to the `OpRegularizer`.
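+
+A much-simplified sketch of the per-convolution cost term (not Eq. (6) verbatim):
+`conv_op` is assumed to be a Conv2D `tf.Operation`, and `input_reg` / `output_reg`
+are the `OpRegularizer`s of its input and output as described above.
+
+```python
+import tensorflow as tf
+
+def approx_conv_flop_cost(conv_op, input_reg, output_reg):
+  """FLOPs ~ 2 * kh * kw * out_h * out_w * (#alive inputs) * (#alive outputs)."""
+  kernel_shape = conv_op.inputs[1].shape.as_list()   # [kh, kw, in_depth, out_depth]
+  output_shape = conv_op.outputs[0].shape.as_list()  # [batch, out_h, out_w, out_depth]
+  alive_inputs = tf.reduce_sum(tf.to_float(input_reg.alive_vector))
+  alive_outputs = tf.reduce_sum(tf.to_float(output_reg.alive_vector))
+  return (2.0 * kernel_shape[0] * kernel_shape[1] * output_shape[1] *
+          output_shape[2] * alive_inputs * alive_outputs)
+```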
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/concat_and_slice_regularizers.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/concat_and_slice_regularizers.py"
new file mode 100644
index 00000000..22e6d8ec
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/concat_and_slice_regularizers.py"
@@ -0,0 +1,108 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""OpRegularizers that concatenate and slice other OpRegularizers.
+
+When we have a concatenation op in the network, which concatenates several
+tensors, the regularizers of the concatenated ops (that is, the
+regularization_vector-s and the alive_vector-s) should be concatenated as well.
+
+Slicing is the complementary op - if regularizers Ra and Rb were concatenated
+into a regularizer Rc, Ra and Rb can be obtained from Rc by slicing.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from morph_net.framework import generic_regularizers
+
+
+class ConcatRegularizer(generic_regularizers.OpRegularizer):
+ """An OpRegularizer that concatenates others, to reflect a Concat op."""
+
+ def __init__(self, regularizers_to_concatenate):
+ for r in regularizers_to_concatenate:
+ if not generic_regularizers.dimensions_are_compatible(r):
+ raise ValueError('Bad regularizer: dimensions are not compatible')
+
+ self._alive_vector = tf.concat(
+ [r.alive_vector for r in regularizers_to_concatenate], 0)
+ self._regularization_vector = tf.concat(
+ [r.regularization_vector for r in regularizers_to_concatenate], 0)
+
+ @property
+ def regularization_vector(self):
+ return self._regularization_vector
+
+ @property
+ def alive_vector(self):
+ return self._alive_vector
+
+
+class SlicingReferenceRegularizer(generic_regularizers.OpRegularizer):
+ """An OpRegularizer that slices a segment of another regularizer.
+
+ This is useful to complement the ConcatRegularizer. For example, suppose that
+ we have two ops, one with 3 outputs (Op1) and the other with 4 outputs (Op2).
+ Each has its own regularizer, Reg1 and Reg2.
+
+ Now suppose that a concat op concatenated Op1 and Op2 into OpC. Reg1 and Reg2
+ should be concatenated to RegC. To make the situation more complicated, RegC
+ was grouped in a group lasso with another op in the graph, resulting in RegG.
+
+ What happens next? All references to RegC should obviously be replaced by
+ RegG. But what about Reg1? It would be the first 3 outputs of RegG,
+ and Reg2 would be the last 4 outputs of RegG.
+
+ SlicingReferenceRegularizer is a regularizer that picks a segment of outputs
+ from an existing OpRegularizer. When OpRegularizers are concatenated, they
+ are replaced by SlicingReferenceRegularizer-s.
+ """
+
+ def __init__(self, get_regularizer_to_slice, begin, size):
+ """Creates an instance.
+
+ Args:
+ get_regularizer_to_slice: A callable, such that get_regularizer_to_slice()
+ returns an OpRegularizer that has to be sliced.
+ begin: An integer, where to begin the slice.
+ size: An integer, the length of the slice (so the slice ends at
+ begin + size).
+ """
+ self._get_regularizer_to_slice = get_regularizer_to_slice
+ self._begin = begin
+ self._size = size
+ self._alive_vector = None
+ self._regularization_vector = None
+
+ @property
+ def regularization_vector(self):
+ if self._regularization_vector is None:
+ regularizer_to_slice = self._get_regularizer_to_slice()
+ self._regularization_vector = tf.slice(
+ regularizer_to_slice.regularization_vector, [self._begin],
+ [self._size])
+ return self._regularization_vector
+
+ @property
+ def alive_vector(self):
+ if self._alive_vector is None:
+ regularizer_to_slice = self._get_regularizer_to_slice()
+ assert regularizer_to_slice is not self
+ self._alive_vector = tf.slice(regularizer_to_slice.alive_vector,
+ [self._begin], [self._size])
+ return self._alive_vector
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/concat_and_slice_regularizers_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/concat_and_slice_regularizers_test.py"
new file mode 100644
index 00000000..8321ee85
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/concat_and_slice_regularizers_test.py"
@@ -0,0 +1,59 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for framework.concat_and_slice_regularizers."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from morph_net.framework import concat_and_slice_regularizers
+from morph_net.testing import op_regularizer_stub
+
+
+class ConcatAndSliceRegularizersTest(tf.test.TestCase):
+
+ def setUp(self):
+ self._reg_vec1 = [0.1, 0.3, 0.6, 0.2]
+ self._alive_vec1 = [False, True, True, False]
+ self._reg_vec2 = [0.2, 0.4, 0.5]
+ self._alive_vec2 = [False, True, False]
+ self._reg1 = op_regularizer_stub.OpRegularizerStub(self._reg_vec1,
+ self._alive_vec1)
+ self._reg2 = op_regularizer_stub.OpRegularizerStub(self._reg_vec2,
+ self._alive_vec2)
+
+ def testConcatRegularizer(self):
+ concat_reg = concat_and_slice_regularizers.ConcatRegularizer(
+ [self._reg1, self._reg2])
+ with self.test_session():
+ self.assertAllEqual(self._alive_vec1 + self._alive_vec2,
+ concat_reg.alive_vector.eval())
+ self.assertAllClose(self._reg_vec1 + self._reg_vec2,
+ concat_reg.regularization_vector.eval(), 1e-5)
+
+ def testSliceRegularizer(self):
+ concat_reg = concat_and_slice_regularizers.SlicingReferenceRegularizer(
+ lambda: self._reg1, 1, 2)
+ with self.test_session():
+ self.assertAllEqual(self._alive_vec1[1:3],
+ concat_reg.alive_vector.eval())
+ self.assertAllClose(self._reg_vec1[1:3],
+ concat_reg.regularization_vector.eval(), 1e-5)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/generic_regularizers.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/generic_regularizers.py"
new file mode 100644
index 00000000..284242c1
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/generic_regularizers.py"
@@ -0,0 +1,107 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Interface for MorphNet regularizers framework.
+
+A subclass of Regularizer represents a regularizer that targets a certain
+quantity: number of FLOPs, model size, number of activations, etc. The
+Regularizer interface has two methods:
+
+1. `get_regularization_term`, which returns a regularization term that should be
+ included in the total loss to target the quantity.
+
+2. `get_cost`, the quantity itself (for example, the number of flops). This is
+ useful for display in TensorBoard, and later, to provide feedback for
+ automatically tuning the coefficient that multiplies the regularization term,
+ until the cost reaches (or goes below) its target value.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import abc
+
+
+class OpRegularizer(object):
+ """An interface for Op Regularizers.
+
+ An OpRegularizer object corresponds to a tf.Operation, and provides
+ a regularizer for the output of the op (we assume that the op has one output
+ of interest in the context of MorphNet).
+ """
+ __metaclass__ = abc.ABCMeta
+
+ @abc.abstractproperty
+ def regularization_vector(self):
+ """Returns a vector of floats, with regularizers.
+
+ The length of the vector is the number of "output activations" (call them
+ neurons, nodes, filters etc) of the op. For a convolutional network, it's
+ the number of filters (aka "depth"). For a fully-connected layer, it's
+ usually the second (and last) dimension - assuming the first one is the
+ batch size.
+ """
+ pass
+
+ @abc.abstractproperty
+ def alive_vector(self):
+ """Returns a vector of booleans, indicating which activations are alive.
+
+ The entries correspond to the op's output activations (call them neurons,
+ nodes, filters, etc.). This vector is of the same length as the
+ regularization_vector.
+ """
+ pass
+
+
+class NetworkRegularizer(object):
+ """An interface for Network Regularizers."""
+ __metaclass__ = abc.ABCMeta
+
+ @abc.abstractmethod
+ def get_regularization_term(self, ops=None):
+ """Compute the regularization term.
+
+ Args:
+ ops: A list of tf.Operation objects. If specified, only the regularization
+ term associated with the ops in `ops` will be returned. Otherwise, all
+ relevant ops in the default TensorFlow graph will be included.
+
+ Returns:
+ A tf.Tensor scalar of floating point type that evaluates to the
+ regularization term (that should be added to the total loss, with a
+ suitable coefficient)
+ """
+ pass
+
+ @abc.abstractmethod
+ def get_cost(self, ops=None):
+ """Calculates the cost targeted by the Regularizer.
+
+ Args:
+ ops: A list of tf.Operation objects. If specified, only the cost
+ pertaining to the ops in the `ops` will be returned. Otherwise, all
+ relevant ops in the default TensorFlow graph will be included.
+
+ Returns:
+ A tf.Tensor scalar that evaluates to the cost.
+ """
+ pass
+
+
+def dimensions_are_compatible(op_regularizer):
+ """Checks if op_regularizer's alive_vector matches regularization_vector."""
+ return op_regularizer.alive_vector.shape.with_rank(1).dims[
+ 0].is_compatible_with(
+ op_regularizer.regularization_vector.shape.with_rank(1).dims[0])
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/grouping_regularizers.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/grouping_regularizers.py"
new file mode 100644
index 00000000..fefdd8ed
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/grouping_regularizers.py"
@@ -0,0 +1,134 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Regularizers that group other regularizers for residual connections.
+
+An Elementwise operation between two tensors (addition, multiplication, maximum
+etc) imposes a constraint of equality of the shapes of the constituents. For
+example, if A, B are convolutions, and another op in the network
+receives A + B as input, it means that the i-th output of A is tied to the i-th
+output of B. Only if the i-th output was regularized away by the regularizer in
+both A and B can we discard the i-th activation in both.
+
+Therefore we group the i-th output of A and the i-th output of B in a group
+LASSO, a group for each i. The grouping methods can vary, and this file offers
+several variants.
+
+Residual connections, in ResNet or in RNNs, are examples where this kind of
+grouping is needed.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from morph_net.framework import generic_regularizers
+
+
+DEFAULT_THRESHOLD = 0.01
+
+
+class MaxGroupingRegularizer(generic_regularizers.OpRegularizer):
+ """A regularizer that groups others by taking their maximum."""
+
+ def __init__(self, regularizers_to_group):
+ """Creates an instance.
+
+ Args:
+ regularizers_to_group: A list of generic_regularizers.OpRegularizer
+ objects. Their regularization_vector-s (alive_vector-s) are expected to be
+ of the same length.
+
+ Raises:
+ ValueError: regularizers_to_group is not of length 2 (TODO:
+ support arbitrary length if needed).
+ """
+ _raise_if_length_is_not2(regularizers_to_group)
+ self._regularization_vector = tf.maximum(
+ regularizers_to_group[0].regularization_vector,
+ regularizers_to_group[1].regularization_vector)
+ self._alive_vector = tf.logical_or(regularizers_to_group[0].alive_vector,
+ regularizers_to_group[1].alive_vector)
+
+ @property
+ def regularization_vector(self):
+ return self._regularization_vector
+
+ @property
+ def alive_vector(self):
+ return self._alive_vector
+
+
+class L2GroupingRegularizer(generic_regularizers.OpRegularizer):
+ r"""A regularizer that groups others by taking their L2 norm.
+
+ R_j = sqrt((\sum_i r_{ij}^2))
+
+ Where r_i is the i-th regularization vector, r_{ij} is its j-th element, and
+ R_j is the j-th element of the resulting regularization vector.
+ """
+
+ def __init__(self, regularizers_to_group, threshold=DEFAULT_THRESHOLD):
+ """Creates an instance.
+
+ Args:
+ regularizers_to_group: A list of generic_regularizers.OpRegularizer
+ objects. Their regularization_vector-s (alive_vector-s) are expected to be
+ of the same length.
+ threshold: A float. A group of activations will be considered alive if
+ its L2 norm is greater than `threshold`.
+
+ Raises:
+ ValueError: regularizers_to_group is not of length 2 (TODO:
+ support arbitrary length if needed).
+ """
+ _raise_if_length_is_not2(regularizers_to_group)
+ self._regularization_vector = tf.sqrt((
+ lazy_square(regularizers_to_group[0].regularization_vector) +
+ lazy_square(regularizers_to_group[1].regularization_vector)))
+ self._alive_vector = self._regularization_vector > threshold
+
+ @property
+ def regularization_vector(self):
+ return self._regularization_vector
+
+ @property
+ def alive_vector(self):
+ return self._alive_vector
+
+
+def _raise_if_length_is_not2(regularizers_to_group):
+ if len(regularizers_to_group) != 2:
+ raise ValueError('Currently only groups of size 2 are supported.')
+
+
+def lazy_square(tensor):
+ """Computes the square of a tensor in a lazy way.
+
+ This function is lazy in the following sense: if `tensor = tf.sqrt(input)`,
+ it returns `input` directly rather than computing tf.square(tensor).
+
+ Args:
+ tensor: A `Tensor` of floats to compute the square of.
+
+ Returns:
+ The square of the input tensor.
+ """
+ if tensor.op.type == 'Sqrt':
+ return tensor.op.inputs[0]
+ else:
+ return tf.square(tensor)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/grouping_regularizers_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/grouping_regularizers_test.py"
new file mode 100644
index 00000000..c9696db0
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/grouping_regularizers_test.py"
@@ -0,0 +1,101 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for framework.grouping_regularizers."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from absl.testing import parameterized
+import numpy as np
+import tensorflow as tf
+
+from morph_net.framework import grouping_regularizers
+from morph_net.testing import op_regularizer_stub
+
+
+def _l2_reg_with_025_threshold(regularizers_to_group):
+ return grouping_regularizers.L2GroupingRegularizer(regularizers_to_group,
+ 0.25)
+
+
+class GroupingRegularizersTest(parameterized.TestCase, tf.test.TestCase):
+
+ # TODO: Add parametrized tests.
+ def setUp(self):
+ self._reg_vec1 = [0.1, 0.3, 0.6, 0.2]
+ self._alive_vec1 = [False, True, True, False]
+ self._reg_vec2 = [0.2, 0.4, 0.5, 0.1]
+ self._alive_vec2 = [False, True, False, True]
+ self._reg_vec3 = [0.3, 0.2, 0.0, 0.25]
+ self._alive_vec3 = [False, True, False, True]
+
+ self._reg1 = op_regularizer_stub.OpRegularizerStub(self._reg_vec1,
+ self._alive_vec1)
+ self._reg2 = op_regularizer_stub.OpRegularizerStub(self._reg_vec2,
+ self._alive_vec2)
+ self._reg3 = op_regularizer_stub.OpRegularizerStub(self._reg_vec3,
+ self._alive_vec3)
+
+ def testMaxGroupingRegularizer(self):
+ group_reg = grouping_regularizers.MaxGroupingRegularizer(
+ [self._reg1, self._reg2])
+ with self.test_session():
+ self.assertAllEqual(
+ [x or y for x, y in zip(self._alive_vec1, self._alive_vec2)],
+ group_reg.alive_vector.eval())
+ self.assertAllClose(
+ [max(x, y) for x, y in zip(self._reg_vec1, self._reg_vec2)],
+ group_reg.regularization_vector.eval(), 1e-5)
+
+ def testL2GroupingRegularizer(self):
+ group_reg = grouping_regularizers.L2GroupingRegularizer(
+ [self._reg1, self._reg2], 0.25)
+ expected_reg_vec = [
+ np.sqrt((x**2 + y**2))
+ for x, y in zip(self._reg_vec1, self._reg_vec2)
+ ]
+ with self.test_session():
+ self.assertAllEqual([x > 0.25 for x in expected_reg_vec],
+ group_reg.alive_vector.eval())
+ self.assertAllClose(expected_reg_vec,
+ group_reg.regularization_vector.eval(), 1e-5)
+
+ @parameterized.named_parameters(
+ ('Max', grouping_regularizers.MaxGroupingRegularizer),
+ ('L2', _l2_reg_with_025_threshold))
+ def testOrderDoesNotMatter(self, create_reg):
+ group12 = create_reg([self._reg1, self._reg2])
+ group13 = create_reg([self._reg1, self._reg3])
+ group23 = create_reg([self._reg2, self._reg3])
+
+ group123 = create_reg([group12, self._reg3])
+ group132 = create_reg([group13, self._reg2])
+ group231 = create_reg([group23, self._reg1])
+
+ with self.test_session():
+ self.assertAllEqual(group123.alive_vector.eval(),
+ group132.alive_vector.eval())
+ self.assertAllEqual(group123.alive_vector.eval(),
+ group231.alive_vector.eval())
+
+ self.assertAllClose(group123.regularization_vector.eval(),
+ group132.regularization_vector.eval())
+ self.assertAllClose(group123.regularization_vector.eval(),
+ group231.regularization_vector.eval())
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/op_regularizer_manager.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/op_regularizer_manager.py"
new file mode 100644
index 00000000..fe0b0349
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/op_regularizer_manager.py"
@@ -0,0 +1,333 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""A class for managing OpRegularizers.
+
+OpRegularizerManager creates the required regularizers and manages the
+association between ops and their regularizers. OpRegularizerManager handles the
+logic associated with the graph topology:
+- Concatenating tensors is reflected in concatenating their regularizers.
+- Skip-connections (aka residual connections), RNNs and other structures where
+ the shapes of two (or more) tensors are tied together are reflected in
+ grouping their regularizers together.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import collections
+import logging
+import tensorflow as tf
+
+from morph_net.framework import concat_and_slice_regularizers
+from morph_net.framework import generic_regularizers
+from morph_net.framework import grouping_regularizers
+
+
+# When an op has two (or more) inputs that have regularizers, the latter need to
+# be grouped. _GROUPING_OPS is a whitelist of ops that are allowed to group, as
+# a form of verification of the correctness of the code. The list is not
+# exhaustive, feel free to add other grouping ops as needed.
+_GROUPING_OPS = ('Add', 'Sub', 'Mul', 'Div', 'Maximum', 'Minimum',
+ 'SquaredDifference', 'RealDiv') # TODO: Is Div needed?
+# Ops that are not pass-through, that necessarily modify the regularizer.
+# These are the Ops that should not have a regularizer that is identical to
+# that of one of their inputs. When we recursively look for regularizers along the graph
+# the recursion will always stop at these Ops even if no regularizer factory
+# is provided, and never assume that they pass the regularizer of their input
+# through.
+NON_PASS_THROUGH_OPS = ('Conv2D', 'Conv2DBackpropInput', 'MatMul')
+
+
+def _remove_nones_and_dups(items):
+ result = []
+ for i in items:
+ if i is not None and i not in result:
+ result.append(i)
+ return result
+
+
+def _raise_type_error_if_not_operation(op):
+ if not isinstance(op, tf.Operation):
+ raise TypeError('\'op\' must be of type tf.Operation, not %s' %
+ str(type(op)))
+
+
+class OpRegularizerManager(object):
+ """A class for managing OpRegularizers."""
+
+ # Public methods -------------------------------------------------------------
+ def __init__(self, ops, op_regularizer_factory_dict,
+ create_grouping_regularizer=None):
+ """Creates an instance.
+
+ Args:
+ ops: A list of tf.Operation-s. An OpRegularizer will be created for all
+ the ops in `ops`, and recursively for all ops they depend on via data
+ dependency. Typically `ops` would contain a single tf.Operation, which
+ is the output of the network.
+ op_regularizer_factory_dict: A dictionary, where the keys are strings
+ representing TensorFlow Op types, and the values are callables that
+ create the respective OpRegularizers. For every op encountered during
+ the recursion, if op.type is in op_regularizer_factory_dict, the
+ respective callable will be used to create an OpRegularizer. The
+ callables take the following args:
+ op: a tf.Operation for which to create a regularizer.
+ opreg_manager: A reference to an OpRegularizerManager object. Can be
+ None if the callable does not need access to OpRegularizerManager.
+ create_grouping_regularizer: A callable that has the signature of
+ grouping_regularizers.MaxGroupingRegularizer's constructor. Will be
+ called whenever a grouping op (see _GROUPING_OPS) is encountered.
+ Defaults to MaxGroupingRegularizer if None.
+
+ Raises:
+ ValueError: If ops is not a list.
+ """
+ self._constructed = False
+ if not isinstance(ops, list):
+ raise ValueError(
+ 'Input %s ops is not a list. Should probably use []' % str(ops))
+ self._op_to_regularizer = {}
+ self._regularizer_to_ops = collections.defaultdict(list)
+ self._op_regularizer_factory_dict = op_regularizer_factory_dict
+ for op_type in NON_PASS_THROUGH_OPS:
+ if op_type not in self._op_regularizer_factory_dict:
+ self._op_regularizer_factory_dict[op_type] = lambda x, y: None
+ self._create_grouping_regularizer = (
+ create_grouping_regularizer or
+ grouping_regularizers.MaxGroupingRegularizer)
+ self._visited = set()
+ for op in ops:
+ self._get_regularizer(op)
+ self._constructed = True
+
+ def get_regularizer(self, op):
+ """Looks up or creates an OpRegularizer for a tf.Operation.
+
+ Args:
+ op: A tf.Operation.
+
+ - If `self` has an OpRegularizer for `op`, it will be returned.
+ Otherwise:
+ - If called before construction of `self` was completed (that is, from the
+ constructor), an attempt to create an OpRegularizer for `op` will be made.
+ Otherwise:
+ - If called after construction of `self` was completed, an exception will
+ be raised.
+
+ Returns:
+ An OpRegularizer for `op`. Can be None if `op` is not regularized (e.g.
+ `op` is a constant).
+
+ Raises:
+ ValueError: If `self` has no OpRegularizer for `op` in its
+ lookup table, and the construction of `self` has already been completed
+ (because then `self` is immutable and an OpRegularizer cannot be
+ created).
+ """
+ try:
+ return self._op_to_regularizer[op]
+ except KeyError:
+ if self._constructed:
+ raise ValueError('Op %s does not have a regularizer.' % op.name)
+ else:
+ return self._get_regularizer(op)
+
+ @property
+ def ops(self):
+ return self._op_to_regularizer.keys()
+
+ # ---- Public MUTABLE methods ------------------------------------------------
+ #
+ # These methods are intended to be called by OpRegularizer factory functions,
+ # in the constructor of OpRegularizerManager. OpRegularizerManager is
+ # immutable after construction, so calling these methods after construction
+ # has been completed will raise an exception.
+
+ def group_and_replace_regularizers(self, regularizers):
+ """Groups a list of OpRegularizers and replaces them by the grouped one.
+
+ Args:
+ regularizers: A list of OpRegularizer objects to be grouped.
+
+ Returns:
+ An OpRegularizer object formed by the grouping.
+
+ Raises:
+ RuntimeError: group_and_replace_regularizers was called after
+ construction of the OpRegularizerManager object was completed.
+ """
+ if self._constructed:
+ raise RuntimeError('group_and_replace_regularizers can only be called '
+ 'before construction of the OpRegularizerManager was '
+ 'completed.')
+ grouped = self._create_grouping_regularizer(regularizers)
+ # Replace all the references to the regularizers by the new grouped
+ # regularizer.
+ for r in regularizers:
+ self._replace_regularizer(r, grouped)
+ return grouped
+
+ # Private methods ------------------------------------------------------------
+ def _get_regularizer(self, op):
+ """Fetches the regularizer of `op` if exists, creates it otherwise.
+
+ This function calls itself recursively, directly or via _create_regularizer
+ (which in turn calls _get_regularizer). It performs DFS along the data
+ dependencies of the graph, and uses a self._visited set to detect loops. The
+ use of self._visited makes it not thread safe, but _get_regularizer is a
+ private method that is supposed to only be called from the constructor, so
+ execution in multiple threads (for the same object) is not expected.
+
+ Args:
+ op: A tf.Operation.
+
+ Returns:
+ An OpRegularizer that corresponds to `op`, or None if `op` does not have
+ a regularizer (e.g. it is a constant op).
+ """
+ _raise_type_error_if_not_operation(op)
+ if op not in self._op_to_regularizer:
+ if op in self._visited:
+ # In while loops, the data dependencies form a loop.
+ # TODO: RNNs have "legit" loops - will this still work?
+ return None
+ self._visited.add(op)
+ regularizer = self._create_regularizer(op)
+ self._op_to_regularizer[op] = regularizer
+ self._regularizer_to_ops[regularizer].append(op)
+ # Make sure that there is a regularizer (or None) for every op on which
+ # `op` depends via data dependency.
+ for i in op.inputs:
+ self._get_regularizer(i.op)
+ self._visited.remove(op)
+ return self._op_to_regularizer[op]
+
+ def _create_regularizer(self, op):
+ """Creates an OpRegularizer for `op`.
+
+ Args:
+ op: A tf.Operation.
+
+ Returns:
+ An OpRegularizer that corresponds to `op`, or None if op does not have
+ a regularizer.
+
+ Raises:
+ RuntimeError: Grouping is attempted at op which is not whitelisted for
+ grouping (in _GROUPING_OPS).
+ """
+ # First we see if there is a factory function for creating the regularizer
+ # in the op_regularizer_factory_dict (supplied in the constructor).
+ if op.type in self._op_regularizer_factory_dict:
+ regularizer = self._op_regularizer_factory_dict[op.type](op, self)
+ if regularizer is None:
+ logging.warning('Failed to create regularizer for %s.', op.name)
+ else:
+ logging.info('Created regularizer for %s.', op.name)
+ return regularizer
+ # Unless overridden in op_regularizer_factory_dict, we assume that ops
+ # without inputs have no regularizers. These are 'leaf' ops, typically
+ # constants and variables.
+ if not op.inputs:
+ return None
+ if op.type == 'ConcatV2':
+ return self._create_concat_regularizer(op)
+
+ inputs_regularizers = _remove_nones_and_dups(
+ [self._get_regularizer(i.op) for i in op.inputs])
+
+ # Ops whose inputs have no regularizers, and that are not in
+ # op_regularizer_factory_dict, have no regularizer either (think of ops that
+ # only involve constants as an example).
+ if not inputs_regularizers:
+ return None
+
+ # Ops that have one input with a regularizer, and are not in
+ # op_regularizer_factory_dict, are assumed to be pass-through, that is, to
+ # carry over the regularizer of their inputs. Examples:
+ # - Unary ops, such as ReLU.
+ # - BiasAdd, or similar ops, that involve a constant/variable and a
+ # regularized op (e.g. the convolution that comes before the bias).
+ elif len(inputs_regularizers) == 1:
+ return inputs_regularizers[0]
+
+ # Group if we have more than one regularizer in the inputs of `op` and if it
+ # is white-listed for grouping.
+ elif op.type in _GROUPING_OPS:
+ return self.group_and_replace_regularizers(inputs_regularizers)
+
+ raise RuntimeError('Grouping is attempted at op which is not whitelisted '
+ 'for grouping: %s' % str(op.type))
+
+ def _create_concat_regularizer(self, concat_op):
+ """Creates an OpRegularizer for a concat op.
+
+ Args:
+ concat_op: A tf.Operation of type ConcatV2.
+
+ Returns:
+ An OpRegularizer for `concat_op`.
+ """
+ # We omit the last input, because it's the concat dimension. Others are
+ # the tensors to be concatenated.
+ input_ops = [i.op for i in concat_op.inputs[:-1]]
+ regularizers_to_concat = [self._get_regularizer(op) for op in input_ops]
+ # If all inputs have no regularizer, so does the concat op.
+ if regularizers_to_concat == [None] * len(regularizers_to_concat):
+ return None
+ offset = 0
+
+ # Replace the regularizers_to_concat by SlicingReferenceRegularizer-s that
+ # slice the concatenated regularizer.
+ ops_to_concat = []
+ for r, op in zip(regularizers_to_concat, input_ops):
+ if r is None:
+ length = op.outputs[0].shape.as_list()[-1]
+ offset += length
+ ops_to_concat.append(self._ConstantOpReg(length))
+ else:
+ length = tf.shape(r.alive_vector)[0]
+ slice_ref = concat_and_slice_regularizers.SlicingReferenceRegularizer(
+ lambda: self._get_regularizer(concat_op), offset, length)
+ offset += length
+ self._replace_regularizer(r, slice_ref)
+ ops_to_concat.append(r)
+
+ # Create the concatenated regularizer itself.
+ return concat_and_slice_regularizers.ConcatRegularizer(ops_to_concat)
+
+ def _replace_regularizer(self, source, target):
+ """Replaces `source` by 'target' in self's lookup tables."""
+ for op in self._regularizer_to_ops[source]:
+ assert self._op_to_regularizer[op] is source
+ self._op_to_regularizer[op] = target
+ self._regularizer_to_ops[target].append(op)
+ del self._regularizer_to_ops[source]
+
+ class _ConstantOpReg(generic_regularizers.OpRegularizer):
+ """A class with the constant alive property, and zero regularization."""
+
+ def __init__(self, size):
+ self._regularization_vector = tf.zeros(size)
+ self._alive_vector = tf.cast(tf.ones(size), tf.bool)
+
+ @property
+ def regularization_vector(self):
+ return self._regularization_vector
+
+ @property
+ def alive_vector(self):
+ return self._alive_vector
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/op_regularizer_manager_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/op_regularizer_manager_test.py"
new file mode 100644
index 00000000..5adc9ec2
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/framework/op_regularizer_manager_test.py"
@@ -0,0 +1,160 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for op_regularizer_manager."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from absl.testing import parameterized
+import numpy as np
+import tensorflow as tf
+
+from morph_net.framework import op_regularizer_manager as orm
+from morph_net.testing import op_regularizer_stub
+
+layers = tf.contrib.layers
+
+
+def _get_op(name):
+ return tf.get_default_graph().get_operation_by_name(name)
+
+
+class TestOpRegularizerManager(parameterized.TestCase, tf.test.TestCase):
+
+ def setUp(self):
+ tf.reset_default_graph()
+ tf.set_random_seed(12)
+ np.random.seed(665544)
+
+ def _batch_norm_scope(self):
+ params = {
+ 'trainable': True,
+ 'normalizer_fn': layers.batch_norm,
+ 'normalizer_params': {
+ 'scale': True
+ }
+ }
+
+ with tf.contrib.framework.arg_scope([layers.conv2d], **params) as sc:
+ return sc
+
+ @parameterized.named_parameters(('Batch_no_par1', True, False, 'conv1'),
+ ('Batch_par1', True, True, 'conv1'),
+ ('NoBatch_no_par1', False, False, 'conv1'),
+ ('NoBatch_par2', False, True, 'conv2'),
+ ('Batch_no_par2', True, False, 'conv2'),
+ ('Batch_par2', True, True, 'conv2'),
+ ('Batch_par3', True, True, 'conv3'),
+ ('NoBatch_par3', False, True, 'conv3'),
+ ('NoBatch_no_par3', False, False, 'conv3'))
+ def testSimpleOpGetRegularizer(self, use_batch_norm, use_partitioner, scope):
+ # Tests the alive pattern of the conv and relu ops.
+ # use_batch_norm: A Boolean. Indicates if batch norm should be used.
+ # use_partitioner: A Boolean. Indicates if a fixed_size_partitioner should
+ # be used.
+ # scope: A string with the scope to test.
+ sc = self._batch_norm_scope() if use_batch_norm else []
+ partitioner = tf.fixed_size_partitioner(2) if use_partitioner else None
+ with tf.contrib.framework.arg_scope(sc):
+ with tf.variable_scope(tf.get_variable_scope(), partitioner=partitioner):
+ final_op = op_regularizer_stub.build_model()
+
+ op_reg_manager = orm.OpRegularizerManager([final_op],
+ op_regularizer_stub.MOCK_REG_DICT)
+ expected_alive = op_regularizer_stub.expected_alive()
+ with self.test_session():
+ conv_reg = op_reg_manager.get_regularizer(_get_op(scope + '/Conv2D'))
+ self.assertAllEqual(expected_alive[scope],
+ conv_reg.alive_vector.eval())
+
+ relu_reg = op_reg_manager.get_regularizer(_get_op(scope + '/Relu'))
+ self.assertAllEqual(expected_alive[scope],
+ relu_reg.alive_vector.eval())
+
+ @parameterized.named_parameters(('Batch_no_par', True, False),
+ ('Batch_par', True, True),
+ ('NoBatch_no_par', False, False),
+ ('NoBatch_par', False, True))
+ def testConcatOpGetRegularizer(self, use_batch_norm, use_partitioner):
+ sc = self._batch_norm_scope() if use_batch_norm else []
+ partitioner = tf.fixed_size_partitioner(2) if use_partitioner else None
+ with tf.contrib.framework.arg_scope(sc):
+ with tf.variable_scope(tf.get_variable_scope(), partitioner=partitioner):
+ final_op = op_regularizer_stub.build_model()
+ op_reg_manager = orm.OpRegularizerManager([final_op],
+ op_regularizer_stub.MOCK_REG_DICT)
+ expected_alive = op_regularizer_stub.expected_alive()
+
+ expected = np.logical_or(expected_alive['conv4'],
+ expected_alive['concat'])
+ with self.test_session():
+ conv_reg = op_reg_manager.get_regularizer(_get_op('conv4/Conv2D'))
+ self.assertAllEqual(expected, conv_reg.alive_vector.eval())
+
+ relu_reg = op_reg_manager.get_regularizer(_get_op('conv4/Relu'))
+ self.assertAllEqual(expected, relu_reg.alive_vector.eval())
+
+ @parameterized.named_parameters(('Concat_5', True, 5),
+ ('Concat_7', True, 7),
+ ('Add_6', False, 6))
+ def testGetRegularizerForConcatWithNone(self, test_concat, depth):
+ image = tf.constant(0.0, shape=[1, 17, 19, 3])
+ conv2 = layers.conv2d(image, 5, [1, 1], padding='SAME', scope='conv2')
+ other_input = tf.add(
+ tf.identity(tf.constant(3.0, shape=[1, 17, 19, depth])), 3.0)
+ # other_input has None as regularizer.
+ concat = tf.concat([other_input, conv2], 3)
+ output = tf.add(concat, concat, name='output_out')
+ op = concat.op if test_concat else output.op
+ op_reg_manager = orm.OpRegularizerManager([output.op],
+ op_regularizer_stub.MOCK_REG_DICT)
+ expected_alive = op_regularizer_stub.expected_alive()
+
+ with self.test_session():
+ alive = op_reg_manager.get_regularizer(op).alive_vector.eval()
+ self.assertAllEqual([True] * depth, alive[:depth])
+ self.assertAllEqual(expected_alive['conv2'], alive[depth:])
+
+ @parameterized.named_parameters(('add', tf.add),
+ ('div', tf.divide),
+ ('mul', tf.multiply),
+ ('max', tf.maximum),
+ ('min', tf.minimum),
+ ('l2', tf.squared_difference))
+ def testGroupingOps(self, tested_op):
+ th, size = 0.5, 11
+ image = tf.constant(0.5, shape=[1, 17, 19, 3])
+
+ conv1 = layers.conv2d(image, 5, [1, 1], padding='SAME', scope='conv1')
+ conv2 = layers.conv2d(image, 5, [1, 1], padding='SAME', scope='conv2')
+ res = tested_op(conv1, conv2)
+ reg = {'conv1': np.random.random(size), 'conv2': np.random.random(size)}
+
+ def regularizer(conv_op, manager=None):
+ del manager # unused
+ for prefix in ['conv1', 'conv2']:
+ if conv_op.name.startswith(prefix):
+ return op_regularizer_stub.OpRegularizerStub(
+ reg[prefix], reg[prefix] > th)
+
+ op_reg_manager = orm.OpRegularizerManager([res.op], {'Conv2D': regularizer})
+ with self.test_session():
+ alive = op_reg_manager.get_regularizer(res.op).alive_vector.eval()
+ self.assertAllEqual(alive,
+ np.logical_or(reg['conv1'] > th, reg['conv2'] > th))
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/g3doc/grouping.png" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/g3doc/grouping.png"
new file mode 100644
index 00000000..4a59c651
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/g3doc/grouping.png" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/g3doc/histogram.png" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/g3doc/histogram.png"
new file mode 100644
index 00000000..34fbf43c
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/g3doc/histogram.png" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/g3doc/tensorboard.png" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/g3doc/tensorboard.png"
new file mode 100644
index 00000000..75e9c398
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/g3doc/tensorboard.png" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/network_regularizers/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/network_regularizers/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/network_regularizers/bilinear_cost_utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/network_regularizers/bilinear_cost_utils.py"
new file mode 100644
index 00000000..ca32fd8d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/network_regularizers/bilinear_cost_utils.py"
@@ -0,0 +1,213 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Helpers for Network Regularizers that are bilinear in their inputs/outputs.
+
+Examples: The number of FLOPs and the number of weights of a convolution are
+both bilinear expressions in the numbers of its inputs and outputs.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from morph_net.framework import generic_regularizers
+
+
+_CONV2D_OPS = ('Conv2D', 'Conv2DBackpropInput', 'DepthwiseConv2dNative')
+_SUPPORTED_OPS = _CONV2D_OPS + ('MatMul',)
+
+
+def _raise_if_not_supported(op):
+ if not isinstance(op, tf.Operation):
+ raise ValueError('op must be a tf.Operation, not %s' % type(op))
+ if op.type not in _SUPPORTED_OPS:
+ raise ValueError('op must be one of %s, not %s' % (_SUPPORTED_OPS, op.type))
+
+
+def _get_conv_filter_size(conv_op):
+ assert conv_op.type in _CONV2D_OPS
+ conv_weights = conv_op.inputs[1]
+ filter_shape = conv_weights.shape.as_list()[:2]
+ return filter_shape[0] * filter_shape[1]
+
+
+def flop_coeff(op):
+ """Computes the coefficient of number of flops associated with a convolution.
+
+ The FLOPs cost of a convolution is given by C * output_depth * input_depth,
+ where C = 2 * output_width * output_height * filter_size. The 2 is because we
+ have one multiplication and one addition for each convolution weight and
+ pixel. This function returns C.
+
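+ For example (illustrative numbers): a 3x3 convolution whose output feature
+ map is 10x10 has C = 2 * 10 * 10 * 9 = 1800, so its total FLOP count is
+ 1800 * input_depth * output_depth.
+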
+ Args:
+ op: A tf.Operation of a type listed in _SUPPORTED_OPS.
+
+ Returns:
+ A float, the coefficient that when multiplied by the input depth and by the
+ output depth gives the number of flops needed to compute the convolution.
+
+ Raises:
+ ValueError: `op` is not a tf.Operation of a supported type.
+ """
+ _raise_if_not_supported(op)
+ if op.type in _CONV2D_OPS:
+ # Looking at the output shape makes it easy to automatically take into
+ # account strides and the type of padding.
+ if op.type == 'Conv2D' or op.type == 'DepthwiseConv2dNative':
+ shape = op.outputs[0].shape.as_list()
+ else: # Conv2DBackpropInput
+ # For a transposed convolution, the input and the output are swapped (as
+ # far as shapes are concerned). In other words, for a given filter shape
+ # and stride, if Conv2D maps from shapeX to shapeY, Conv2DBackpropInput
+ # maps from shapeY to shapeX. Therefore wherever we use the output shape
+ # for Conv2D, we use the input shape for Conv2DBackpropInput.
+ shape = _get_input(op).shape.as_list()
+ size = shape[1] * shape[2]
+ return 2.0 * size * _get_conv_filter_size(op)
+ else: # MatMul
+ # A MatMul is like a 1x1 conv with an output size of 1x1, so from the factor
+ # above only the 2.0 remains.
+ return 2.0
+
+
+def num_weights_coeff(op):
+ """The number of weights of a conv is C * output_depth * input_depth. Finds C.
+
+ Args:
+ op: A tf.Operation of a type listed in _SUPPORTED_OPS.
+
+ Returns:
+ A float, the coefficient that when multiplied by the input depth and by the
+ output depth gives the number of weights of the op (e.g. 35 for a 7x5
+ convolution, or 1.0 for a MatMul).
+
+ Raises:
+ ValueError: `op` is not a tf.Operation of a supported type.
+ """
+ _raise_if_not_supported(op)
+ return _get_conv_filter_size(op) if op.type in _CONV2D_OPS else 1.0
+
+
+class BilinearNetworkRegularizer(generic_regularizers.NetworkRegularizer):
+ """A NetworkRegularizer with bilinear cost and loss.
+
+ Can be used for FLOPs regularization or for model size regularization.
+ """
+
+ def __init__(self, opreg_manager, coeff_func):
+ """Creates an instance.
+
+ Args:
+ opreg_manager: An OpRegularizerManager object that will be used to query
+ OpRegularizers of the various ops in the graph.
+ coeff_func: A callable that receives a supported tf.Operation (e.g. a
+ Conv2D) and returns a bilinear coefficient of its cost (see the usage
+ sketch below). Examples:
+ - Use flop_coeff for a FLOP regularizer.
+ - Use num_weights_coeff for a number-of-weights regularizer.
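+
+ Usage sketch (illustrative; `manager` is an OpRegularizerManager built
+ elsewhere and `task_loss` is a hypothetical task loss tensor):
+
+ regularizer = BilinearNetworkRegularizer(manager, flop_coeff)
+ loss = task_loss + 1e-3 * regularizer.get_regularization_term()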
+ """
+ self._opreg_manager = opreg_manager
+ self._coeff_func = coeff_func
+
+ def _get_cost_or_regularization_term(self, is_regularization, ops=None):
+ total = 0.0
+ if not ops:
+ ops = self._opreg_manager.ops
+ for op in ops:
+ if op.type not in _SUPPORTED_OPS:
+ continue
+ # We use the following expression for the regularizer:
+ #
+ # coeff * (number_of_inputs_alive * sum_of_output_regularizers +
+ # number_of_outputs_alive * sum_of_input_regularizers)
+ #
+ # where 'coeff' is a coefficient (for a particular convolution) such that
+ # the number of flops of that convolution is given by:
+ # number_of_flops = coeff * number_of_inputs * number_of_outputs.
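+ #
+ # For instance (illustrative numbers): a convolution whose coeff is 1800,
+ # with 10 alive inputs and 20 alive outputs, contributes
+ # 1800 * 10 * 20 = 360000 to the cost.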
+ input_op = _get_input(op).op
+ input_op_reg = self._opreg_manager.get_regularizer(input_op)
+ output_op_reg = self._opreg_manager.get_regularizer(op)
+ coeff = self._coeff_func(op)
+ num_alive_inputs = _count_alive(input_op, input_op_reg)
+ num_alive_outputs = _count_alive(op, output_op_reg)
+ if op.type == 'DepthwiseConv2dNative':
+ if is_regularization:
+ reg_inputs = _sum_of_reg_vector(input_op_reg)
+ reg_outputs = _sum_of_reg_vector(output_op_reg)
+ # reg_inputs and reg_outputs are often identical since they should
+ # come from the same regularizer. Duplicate them for symmetry.
+ # When the input doesn't have a regularizer (e.g. input), only the
+ # second term is used.
+ # TODO: revisit this expression after experiments.
+ total += coeff * (reg_inputs + reg_outputs)
+ else:
+ # num_alive_inputs may not always equal num_alive_outputs because the
+ # input (e.g. the image) may not have a gamma regularizer. In this
+ # case the computation is proportional only to num_alive_outputs.
+ total += coeff * num_alive_outputs
+ else:
+ if is_regularization:
+ reg_inputs = _sum_of_reg_vector(input_op_reg)
+ reg_outputs = _sum_of_reg_vector(output_op_reg)
+ total += coeff * (
+ num_alive_inputs * reg_outputs + num_alive_outputs * reg_inputs)
+ else:
+ total += coeff * num_alive_inputs * num_alive_outputs
+ return total
+
+ def get_cost(self, ops=None):
+ return self._get_cost_or_regularization_term(False, ops)
+
+ def get_regularization_term(self, ops=None):
+ return self._get_cost_or_regularization_term(True, ops)
+
+
+def _get_input(op):
+ """Returns the input to that op that represents the activations.
+
+ (as opposed to e.g. weights.)
+
+ Args:
+ op: A tf.Operation object with type in _SUPPORTED_OPS.
+
+ Returns:
+ A tf.Tensor representing the input activations.
+
+ Raises:
+ ValueError: MatMul is used with transposition (unsupported).
+ """
+ assert op.type in _SUPPORTED_OPS, 'Op type %s is not supported.' % op.type
+ if op.type == 'Conv2D' or op.type == 'DepthwiseConv2dNative':
+ return op.inputs[0]
+ if op.type == 'Conv2DBackpropInput':
+ return op.inputs[2]
+ if op.type == 'MatMul':
+ if op.get_attr('transpose_a') or op.get_attr('transpose_b'):
+ raise ValueError('MatMul with transposition is not yet supported.')
+ return op.inputs[0]
+
+
+def _count_alive(op, opreg):
+ if opreg:
+ return tf.reduce_sum(tf.cast(opreg.alive_vector, tf.float32))
+ else:
+ return float(op.outputs[0].shape.as_list()[-1])
+
+
+def _sum_of_reg_vector(opreg):
+ if opreg:
+ return tf.reduce_sum(opreg.regularization_vector)
+ else:
+ return 0.0
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/network_regularizers/bilinear_cost_utils_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/network_regularizers/bilinear_cost_utils_test.py"
new file mode 100644
index 00000000..b35c397d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/network_regularizers/bilinear_cost_utils_test.py"
@@ -0,0 +1,117 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for compute_cost_estimator.
+
+Note that BilinearNetworkRegularizer is not tested here - its specific
+instantiation is tested in flop_regularizer_test.py.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+from tensorflow.python.framework import ops
+
+from morph_net.network_regularizers import bilinear_cost_utils
+
+layers = tf.contrib.layers
+
+
+def _flops(op):
+ """Get the number of flops of a convolution, from the ops stats registry.
+
+ Args:
+ op: A tf.Operation object.
+
+ Returns:
+ The number of flops needed to evaluate `op`.
+ """
+ return (ops.get_stats_for_node_def(tf.get_default_graph(), op.node_def,
+ 'flops').value)
+
+
+def _output_depth(conv_op):
+ return conv_op.outputs[0].shape.as_list()[-1]
+
+
+def _input_depth(conv_op):
+ conv_weights = conv_op.inputs[1]
+ return conv_weights.shape.as_list()[2]
+
+
+class BilinearCostUtilTest(tf.test.TestCase):
+
+ def setUp(self):
+ tf.reset_default_graph()
+ image = tf.constant(0.0, shape=[1, 11, 13, 17])
+ net = layers.conv2d(
+ image, 19, [7, 5], stride=2, padding='SAME', scope='conv1')
+ layers.conv2d_transpose(
+ image, 29, [7, 5], stride=2, padding='SAME', scope='convt2')
+ net = tf.reduce_mean(net, axis=(1, 2))
+ layers.fully_connected(net, 23, scope='FC')
+ net = layers.conv2d(
+ image, 10, [7, 5], stride=2, padding='SAME', scope='conv2')
+ layers.separable_conv2d(
+ net, None, [3, 2], depth_multiplier=1, padding='SAME', scope='dw1')
+ self.conv_op = tf.get_default_graph().get_operation_by_name('conv1/Conv2D')
+ self.convt_op = tf.get_default_graph().get_operation_by_name(
+ 'convt2/conv2d_transpose')
+ self.matmul_op = tf.get_default_graph().get_operation_by_name(
+ 'FC/MatMul')
+ self.dw_op = tf.get_default_graph().get_operation_by_name(
+ 'dw1/depthwise')
+
+ def assertNearRelatively(self, expected, actual):
+ self.assertNear(expected, actual, expected * 1e-6)
+
+ def testConvFlopsCoeff(self):
+ # Divide by the input depth and the output depth to get the coefficient.
+ expected_coeff = _flops(self.conv_op) / (17.0 * 19.0)
+ actual_coeff = bilinear_cost_utils.flop_coeff(self.conv_op)
+ self.assertNearRelatively(expected_coeff, actual_coeff)
+
+ def testConvTransposeFlopsCoeff(self):
+ # Divide by the input depth and the output depth to get the coefficient.
+ expected_coeff = _flops(self.convt_op) / (17.0 * 29.0)
+ actual_coeff = bilinear_cost_utils.flop_coeff(self.convt_op)
+ self.assertNearRelatively(expected_coeff, actual_coeff)
+
+ def testFcFlopsCoeff(self):
+ expected_coeff = _flops(self.matmul_op) / (19.0 * 23.0)
+ actual_coeff = bilinear_cost_utils.flop_coeff(self.matmul_op)
+ self.assertNearRelatively(expected_coeff, actual_coeff)
+
+ def testConvNumWeightsCoeff(self):
+ actual_coeff = bilinear_cost_utils.num_weights_coeff(self.conv_op)
+ # The coefficient is just the filter size - 7 * 5 = 35:
+ self.assertNearRelatively(35, actual_coeff)
+
+ def testFcNumWeightsCoeff(self):
+ actual_coeff = bilinear_cost_utils.num_weights_coeff(self.matmul_op)
+ # The coefficient is 1.0, the number of weights is just inputs x outputs.
+ self.assertNearRelatively(1.0, actual_coeff)
+
+ def testDepthwiseConvFlopsCoeff(self):
+ # Divide by the input depth (which is also the output depth) to get the
+ # coefficient.
+ expected_coeff = _flops(self.dw_op) / (10.0)
+ actual_coeff = bilinear_cost_utils.flop_coeff(self.dw_op)
+ self.assertNearRelatively(expected_coeff, actual_coeff)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/network_regularizers/flop_regularizer.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/network_regularizers/flop_regularizer.py"
new file mode 100644
index 00000000..16d970ea
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/network_regularizers/flop_regularizer.py"
@@ -0,0 +1,59 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""A NetworkRegularizer that targets the number of FLOPs."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from morph_net.framework import op_regularizer_manager
+from morph_net.network_regularizers import bilinear_cost_utils
+from morph_net.op_regularizers import conv_group_lasso_regularizer
+from morph_net.op_regularizers import gamma_l1_regularizer
+
+
+class GammaFlopsRegularizer(bilinear_cost_utils.BilinearNetworkRegularizer):
+ """A NetworkRegularizer that targets FLOPs using Gamma L1 as OpRegularizer."""
+
+ def __init__(self, ops, gamma_threshold):
+ gamma_l1_reg_factory = gamma_l1_regularizer.GammaL1RegularizerFactory(
+ gamma_threshold)
+ opreg_manager = op_regularizer_manager.OpRegularizerManager(
+ ops, {
+ 'Conv2D': gamma_l1_reg_factory.create_regularizer,
+ 'DepthwiseConv2dNative': gamma_l1_reg_factory.create_regularizer
+ })
+ super(GammaFlopsRegularizer, self).__init__(opreg_manager,
+ bilinear_cost_utils.flop_coeff)
+
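+# Example usage of GammaFlopsRegularizer (an illustrative sketch; `logits`
+# stands for any network output tensor, and the threshold and loss weight
+# are arbitrary):
+#
+#   flop_reg = GammaFlopsRegularizer([logits.op], gamma_threshold=0.1)
+#   total_loss = task_loss + 1e-3 * flop_reg.get_regularization_term()
+#   estimated_flops = flop_reg.get_cost()
+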
+
+class GroupLassoFlopsRegularizer(
+ bilinear_cost_utils.BilinearNetworkRegularizer):
+ """A NetworkRegularizer that targets FLOPs using L1 group lasso."""
+
+ def __init__(self, ops, threshold):
+ # Regularizer factories for convolution and fully connected layers.
+ conv_regularizer_factory = (
+ conv_group_lasso_regularizer.ConvGroupLassoRegularizerFactory(threshold)
+ )
+ regularizer_factories = {
+ 'Conv2D': conv_regularizer_factory.create_regularizer,
+ 'Conv2DBackpropInput': conv_regularizer_factory.create_regularizer,
+ }
+ # Create OpRegularizerManager instance.
+ opreg_manager = op_regularizer_manager.OpRegularizerManager(
+ ops, regularizer_factories)
+ super(GroupLassoFlopsRegularizer, self).__init__(
+ opreg_manager, bilinear_cost_utils.flop_coeff)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/network_regularizers/flop_regularizer_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/network_regularizers/flop_regularizer_test.py"
new file mode 100644
index 00000000..f32eca22
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/network_regularizers/flop_regularizer_test.py"
@@ -0,0 +1,553 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for flop_regularizer."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import abc
+import numpy as np
+
+import tensorflow as tf
+
+from tensorflow.contrib.slim.nets import resnet_v1
+from morph_net.network_regularizers import bilinear_cost_utils
+from morph_net.network_regularizers import flop_regularizer
+
+arg_scope = tf.contrib.framework.arg_scope
+layers = tf.contrib.layers
+_coeff = bilinear_cost_utils.flop_coeff
+NUM_CHANNELS = 3
+
+
+class GammaFlopLossTest(tf.test.TestCase):
+
+ def setUp(self):
+ tf.reset_default_graph()
+ self.BuildWithBatchNorm()
+ with self.test_session():
+ self.Init()
+
+ def BuildWithBatchNorm(self):
+ params = {
+ 'trainable': True,
+ 'normalizer_fn': layers.batch_norm,
+ 'normalizer_params': {
+ 'scale': True
+ }
+ }
+
+ with arg_scope([layers.conv2d], **params):
+ self.BuildModel()
+
+ def BuildModel(self):
+ # Our test model is:
+ #
+ # -> conv1 --+ -> conv3 -->
+ # / | /
+ # image [concat]
+ # \ | \
+ # -> conv2 --+ -> conv4 -->
+ #
+ # (the model has two "outputs", conv3 and conv4).
+ #
+ image = tf.constant(0.0, shape=[1, 17, 19, NUM_CHANNELS])
+ conv1 = layers.conv2d(image, 13, [7, 5], padding='SAME', scope='conv1')
+ conv2 = layers.conv2d(image, 23, [1, 1], padding='SAME', scope='conv2')
+ concat = tf.concat([conv1, conv2], 3)
+ self.conv3 = layers.conv2d(
+ concat, 29, [3, 3], stride=2, padding='SAME', scope='conv3')
+ self.conv4 = layers.conv2d(
+ concat, 31, [1, 1], stride=1, padding='SAME', scope='conv4')
+ self.name_to_var = {v.op.name: v for v in tf.global_variables()}
+
+ self.gamma_flop_reg = flop_regularizer.GammaFlopsRegularizer(
+ [self.conv3.op, self.conv4.op], gamma_threshold=0.45)
+
+ def GetConv(self, name):
+ return tf.get_default_graph().get_operation_by_name(name + '/Conv2D')
+
+ def Init(self):
+ tf.global_variables_initializer().run()
+ gamma1 = self.name_to_var['conv1/BatchNorm/gamma']
+ gamma1.assign([0.8] * 7 + [0.2] * 6).eval()
+ gamma2 = self.name_to_var['conv2/BatchNorm/gamma']
+ gamma2.assign([-0.7] * 11 + [0.1] * 12).eval()
+ gamma3 = self.name_to_var['conv3/BatchNorm/gamma']
+ gamma3.assign([0.6] * 10 + [-0.3] * 19).eval()
+ gamma4 = self.name_to_var['conv4/BatchNorm/gamma']
+ gamma4.assign([-0.5] * 17 + [-0.4] * 14).eval()
+
+ def cost(self, conv):
+ with self.test_session():
+ return self.gamma_flop_reg.get_cost(conv).eval()
+
+ def loss(self, conv):
+ with self.test_session():
+ return self.gamma_flop_reg.get_regularization_term(conv).eval()
+
+ def testCost(self):
+ # Conv1 has 7 gammas above 0.45, and NUM_CHANNELS inputs (from the image).
+ conv = self.GetConv('conv1')
+ self.assertEqual(_coeff(conv) * 7 * NUM_CHANNELS, self.cost([conv]))
+
+ # Conv2 has 11 gammas above 0.45, and NUM_CHANNELS inputs (from the image).
+ conv = self.GetConv('conv2')
+ self.assertEqual(_coeff(conv) * 11 * NUM_CHANNELS, self.cost([conv]))
+
+ # Conv3 has 10 gammas above 0.45, and 7 + 11 inputs from conv1 and conv2.
+ conv = self.GetConv('conv3')
+ self.assertEqual(_coeff(conv) * 10 * 18, self.cost([conv]))
+
+ # Conv4 has 17 gammas above 0.45, and 7 + 11 inputs from conv1 and conv2.
+ conv = self.GetConv('conv4')
+ self.assertEqual(_coeff(conv) * 17 * 18, self.cost([conv]))
+
+ # Test that passing a list of convs sums their contributions:
+ convs = [self.GetConv('conv3'), self.GetConv('conv4')]
+ self.assertEqual(
+ self.cost(convs[:1]) + self.cost(convs[1:]), self.cost(convs))
+
+
+class GammaFlopLossWithDepthwiseConvTestBase(object):
+ """Test flop_regularizer for a network with depthwise convolutions."""
+ __metaclass__ = abc.ABCMeta
+
+ @abc.abstractmethod
+ def GetSession(self):
+ return
+
+ def BuildWithBatchNorm(self):
+ params = {
+ 'trainable': True,
+ 'normalizer_fn': layers.batch_norm,
+ 'normalizer_params': {
+ 'scale': True
+ }
+ }
+ ops_with_batchnorm = [layers.conv2d]
+ if self._depthwise_use_batchnorm:
+ ops_with_batchnorm.append(layers.separable_conv2d)
+
+ with arg_scope(ops_with_batchnorm, **params):
+ self.BuildModel()
+
+ def BuildModel(self):
+ # Our test model is:
+ #
+ # -> dw1 --> conv1 --+
+ # / |
+ # image [concat] --> conv3
+ # \ |
+ # -> conv2 --> dw2 --+
+ #
+ # (the model has one "output", conv3).
+ #
+ image = tf.constant(0.0, shape=[1, 17, 19, NUM_CHANNELS])
+ dw1 = layers.separable_conv2d(
+ image, None, [3, 3], depth_multiplier=1, stride=1, scope='dw1')
+ conv1 = layers.conv2d(dw1, 13, [7, 5], padding='SAME', scope='conv1')
+ conv2 = layers.conv2d(image, 23, [1, 1], padding='SAME', scope='conv2')
+ dw2 = layers.separable_conv2d(
+ conv2, None, [5, 5], depth_multiplier=1, stride=1, scope='dw2')
+ concat = tf.concat([conv1, dw2], 3)
+ self.conv3 = layers.conv2d(
+ concat, 29, [3, 3], stride=2, padding='SAME', scope='conv3')
+ self.name_to_var = {v.op.name: v for v in tf.global_variables()}
+
+ self.gamma_flop_reg = flop_regularizer.GammaFlopsRegularizer(
+ [self.conv3.op], gamma_threshold=0.45)
+
+ def GetConv(self, name):
+ return tf.get_default_graph().get_operation_by_name(
+ name + ('/Conv2D' if 'conv' in name else '/depthwise'))
+
+ def GetGammaAbsValue(self, name):
+ gamma_op = tf.get_default_graph().get_operation_by_name(name +
+ '/BatchNorm/gamma')
+ with self.GetSession(): # pylint: disable=not-context-manager
+ gamma = gamma_op.outputs[0].eval()
+ return np.abs(gamma)
+
+ def Init(self):
+ tf.global_variables_initializer().run()
+ gamma1 = self.name_to_var['conv1/BatchNorm/gamma']
+ gamma1.assign([0.8] * 7 + [0.2] * 6).eval()
+ gamma2 = self.name_to_var['conv2/BatchNorm/gamma']
+ gamma2.assign([-0.7] * 11 + [0.1] * 12).eval()
+ gamma3 = self.name_to_var['conv3/BatchNorm/gamma']
+ gamma3.assign([0.6] * 10 + [-0.3] * 19).eval()
+ # Initialize gamma for depthwise convs only if there are Batchnorm for them.
+ if self._depthwise_use_batchnorm:
+ gammad1 = self.name_to_var['dw1/BatchNorm/gamma']
+ gammad1.assign([-0.3] * 1 + [-0.9] * 2).eval()
+ gammad2 = self.name_to_var['dw2/BatchNorm/gamma']
+ gammad2.assign([0.3] * 5 + [0.9] * 10 + [-0.1] * 8).eval()
+
+ def cost(self, conv): # pylint: disable=invalid-name
+ with self.GetSession(): # pylint: disable=not-context-manager
+ cost = self.gamma_flop_reg.get_cost(conv)
+ return cost.eval() if isinstance(cost, tf.Tensor) else cost
+
+ def loss(self, conv): # pylint: disable=invalid-name
+ with self.GetSession(): # pylint: disable=not-context-manager
+ reg = self.gamma_flop_reg.get_regularization_term(conv)
+ return reg.eval() if isinstance(reg, tf.Tensor) else reg
+
+
+class GammaFlopLossWithDepthwiseConvTest(
+ tf.test.TestCase, GammaFlopLossWithDepthwiseConvTestBase):
+ """Test flop_regularizer for a network with depthwise convolutions."""
+
+ def setUp(self):
+ self._depthwise_use_batchnorm = True
+ tf.reset_default_graph()
+ self.BuildWithBatchNorm()
+ with self.test_session():
+ self.Init()
+
+ def GetSession(self):
+ return self.test_session()
+
+ def testCost(self):
+ # Dw1 has 2 gammas above 0.45 out of NUM_CHANNELS inputs (from the image),
+ # but because the input doesn't have a regularizer, it has no way of
+ # removing the channels, so the channel count is still NUM_CHANNELS.
+ conv = self.GetConv('dw1')
+ self.assertEqual(_coeff(conv) * NUM_CHANNELS, self.cost([conv]))
+
+ # Conv1 has 7 gammas above 0.45, and NUM_CHANNELS inputs (from dw1).
+ conv = self.GetConv('conv1')
+ self.assertEqual(_coeff(conv) * 7 * NUM_CHANNELS, self.cost([conv]))
+
+ # Conv2 has 11 active + 12 inactive, while Dw2 has 5 inactive, 10 active and
+ # 8 inactive. Their max (or) has 15 active and 8 inactive.
+ # Conv2 has NUM_CHANNELS inputs (from the image).
+ conv = self.GetConv('conv2')
+ self.assertEqual(_coeff(conv) * 15 * NUM_CHANNELS, self.cost([conv]))
+
+ # Dw2 has 15 out of 23 inputs (from the Conv2).
+ conv = self.GetConv('dw2')
+ self.assertEqual(_coeff(conv) * 15, self.cost([conv]))
+
+ # Conv3 has 10 gammas above 0.45, and 7 + 15 inputs from conv1 and dw2.
+ conv = self.GetConv('conv3')
+ self.assertEqual(_coeff(conv) * 10 * 22, self.cost([conv]))
+
+ def testRegularizer(self):
+ # Dw1 depthwise convolution is connected to the input (no regularizer).
+ conv = self.GetConv('dw1')
+ # Although the effective regularizer for dw is computed as below:
+ # gamma = self.GetGammaAbsValue('dw1')
+ # expected_loss = _coeff(conv) * gamma.sum()
+ # Since the input is not regularized, dw does not return a regularizer.
+ expected_loss = 0.0
+ self.assertNear(expected_loss, self.loss([conv]), expected_loss * 1e-5)
+
+ # Conv1 takes Dw1 as input, its input regularizer is from dw1.
+ conv = self.GetConv('conv1')
+ gamma = self.GetGammaAbsValue('conv1')
+ # The effective size for dw can be computed from its gamma, and
+ # the loss may be computed as follows:
+ # gamma_dw = self.GetGammaAbsValue('dw1')
+ # expected_loss = _coeff(conv) * (
+ # gamma.sum() * (gamma_dw > 0.45).sum() + gamma_dw.sum() *
+ # (gamma > 0.45).sum())
+ # However, since dw cannot change shape because its input doesn't have a
+ # regularizer, the real loss we expect should be:
+ expected_loss = _coeff(conv) * (gamma.sum() * NUM_CHANNELS)
+ self.assertNear(expected_loss, self.loss([conv]), expected_loss * 1e-5)
+
+ # Dw2 depthwise convolution is connected to conv2 (grouped regularizer).
+ conv = self.GetConv('conv2')
+ gamma_conv = self.GetGammaAbsValue('conv2')
+ dw = self.GetConv('dw2')
+ gamma_dw = self.GetGammaAbsValue('dw2')
+ gamma = np.maximum(gamma_dw, gamma_conv).sum()
+ expected_loss = _coeff(conv) * (gamma * 3 + (gamma > 0.45).sum() * 0)
+ self.assertNear(expected_loss, self.loss([conv]), expected_loss * 1e-5)
+ expected_loss = _coeff(dw) * gamma * 2
+ self.assertNear(expected_loss, self.loss([dw]), expected_loss * 1e-5)
+
+
+class GammaFlopLossWithDepthwiseConvNoBatchNormTest(
+ tf.test.TestCase, GammaFlopLossWithDepthwiseConvTestBase):
+ """Test flop_regularizer for un-batchnormed depthwise convolutions.
+
+ This test is used to confirm that when depthwise convolution is not BNed, it
+ will not be considered towards the regularizer, but it will be counted towards
+ the cost.
+ This design choice is for backward compatibility for users who did not
+ regularize depthwise convolutions. However, the cost will be reported
+ regardless in order to be faithful to the real computation complexity.
+ """
+
+ def setUp(self):
+ self._depthwise_use_batchnorm = False
+ tf.reset_default_graph()
+ self.BuildWithBatchNorm()
+ with self.test_session():
+ self.Init()
+
+ def GetSession(self):
+ return self.test_session()
+
+ def testCost(self):
+ # Dw1 has NUM_CHANNELS inputs (from the image).
+ conv = self.GetConv('dw1')
+ self.assertEqual(_coeff(conv) * 3, self.cost([conv]))
+
+ # Conv1 has 7 gammas above 0.45, and 3 inputs (from dw1).
+ conv = self.GetConv('conv1')
+ self.assertEqual(_coeff(conv) * 7 * 3, self.cost([conv]))
+
+ # Conv2 has 11 active outputs and NUM_CHANNELS inputs (from the image).
+ conv = self.GetConv('conv2')
+ self.assertEqual(_coeff(conv) * 11 * NUM_CHANNELS, self.cost([conv]))
+
+ # Dw2 has 11 inputs (pass-through from the Conv2).
+ conv = self.GetConv('dw2')
+ self.assertEqual(_coeff(conv) * 11, self.cost([conv]))
+
+ # Conv3 has 10 gammas above 0.45, and 7 + 11 inputs from conv1 and dw2.
+ conv = self.GetConv('conv3')
+ self.assertEqual(_coeff(conv) * 10 * 18, self.cost([conv]))
+
+ def testRegularizer(self):
+ # Dw1 depthwise convolution is connected to the input (no regularizer).
+ conv = self.GetConv('dw1')
+ expected_loss = 0.0
+ self.assertNear(expected_loss, self.loss([conv]), expected_loss * 1e-5)
+
+ # Conv1 takes Dw1 as input, but it's not affected by dw1 because depthwise
+ # is not BNed.
+ conv = self.GetConv('conv1')
+ gamma = self.GetGammaAbsValue('conv1')
+ expected_loss = _coeff(conv) * (gamma.sum() * NUM_CHANNELS)
+ self.assertNear(expected_loss, self.loss([conv]), expected_loss * 1e-5)
+
+ # Dw2 depthwise convolution is connected to conv2 (pass through).
+ dw = self.GetConv('dw2')
+ gamma = self.GetGammaAbsValue('conv2')
+ expected_loss = _coeff(dw) * gamma.sum() * 2
+ self.assertNear(expected_loss, self.loss([dw]), expected_loss * 1e-5)
+
+
+class GammaFlopResidualConnectionsLossTest(tf.test.TestCase):
+ """Tests flop_regularizer for a network with residual connections."""
+
+ def setUp(self):
+ tf.reset_default_graph()
+ tf.set_random_seed(7)
+ self._threshold = 0.6
+
+ def buildModel(self, resnet_fn, block_fn):
+ # We use this model as a test case because the slim.nets.resnet module is
+ # used in some production models.
+ #
+ # The model looks as follows:
+ #
+ # Image --> unit_1/shortcut
+ # Image --> unit_1/conv1 --> unit_1/conv2 --> unit_1/conv3
+ #
+ # unit_1/shortcut + unit_1/conv3 --> unit_1 (residual connection)
+ #
+ # unit_1 --> unit_2/conv1 -> unit_2/conv2 --> unit_2/conv3
+ #
+ # unit_1 + unit_2/conv3 --> unit_2 (residual connection)
+ #
+ # In between, there are strided convolutions and pooling ops, but these
+ # should not affect the regularizer.
+ blocks = [
+ block_fn('block1', base_depth=7, num_units=2, stride=2),
+ ]
+ image = tf.constant(0.0, shape=[1, 2, 2, NUM_CHANNELS])
+ net = resnet_fn(
+ image, blocks, include_root_block=False, is_training=False)[0]
+ net = tf.reduce_mean(net, axis=(1, 2))
+ return layers.fully_connected(net, 23, scope='FC')
+
+ def buildGraphWithBatchNorm(self, resnet_fn, block_fn):
+ params = {
+ 'trainable': True,
+ 'normalizer_fn': layers.batch_norm,
+ 'normalizer_params': {
+ 'scale': True
+ }
+ }
+
+ with arg_scope([layers.conv2d, layers.separable_conv2d], **params):
+ self.net = self.buildModel(resnet_fn, block_fn)
+
+ def initGamma(self):
+ assignments = []
+ gammas = {}
+ for v in tf.global_variables():
+ if v.op.name.endswith('/gamma'):
+ assignments.append(v.assign(tf.random_uniform(v.shape)))
+ gammas[v.op.name] = v
+ with self.test_session() as s:
+ s.run(assignments)
+ self._gammas = s.run(gammas)
+
+ def getGamma(self, short_name):
+ tokens = short_name.split('/')
+ name = ('resnet_v1/block1/' + tokens[0] + '/bottleneck_v1/' + tokens[1] +
+ '/BatchNorm/gamma')
+ return self._gammas[name]
+
+ def getOp(self, short_name):
+ if short_name == 'FC':
+ return tf.get_default_graph().get_operation_by_name('FC/MatMul')
+ tokens = short_name.split('/')
+ name = ('resnet_v1/block1/' + tokens[0] + '/bottleneck_v1/' + tokens[1] +
+ '/Conv2D')
+ return tf.get_default_graph().get_operation_by_name(name)
+
+ def numAlive(self, short_name):
+ return np.sum(self.getGamma(short_name) > self._threshold)
+
+ def getCoeff(self, short_name):
+ return _coeff(self.getOp(short_name))
+
+ def testCost(self):
+ self.buildGraphWithBatchNorm(resnet_v1.resnet_v1, resnet_v1.resnet_v1_block)
+ self.initGamma()
+ res_alive = np.logical_or(
+ np.logical_or(
+ self.getGamma('unit_1/shortcut') > self._threshold,
+ self.getGamma('unit_1/conv3') > self._threshold),
+ self.getGamma('unit_2/conv3') > self._threshold)
+
+ self.gamma_flop_reg = flop_regularizer.GammaFlopsRegularizer(
+ [self.net.op], self._threshold)
+
+ expected = {}
+ expected['unit_1/shortcut'] = (
+ self.getCoeff('unit_1/shortcut') * np.sum(res_alive) * NUM_CHANNELS)
+ expected['unit_1/conv1'] = (
+ self.getCoeff('unit_1/conv1') * self.numAlive('unit_1/conv1') *
+ NUM_CHANNELS)
+ expected['unit_1/conv2'] = (
+ self.getCoeff('unit_1/conv2') * self.numAlive('unit_1/conv2') *
+ self.numAlive('unit_1/conv1'))
+ expected['unit_1/conv3'] = (
+ self.getCoeff('unit_1/conv3') * np.sum(res_alive) *
+ self.numAlive('unit_1/conv2'))
+ expected['unit_2/conv1'] = (
+ self.getCoeff('unit_2/conv1') * self.numAlive('unit_2/conv1') *
+ np.sum(res_alive))
+ expected['unit_2/conv2'] = (
+ self.getCoeff('unit_2/conv2') * self.numAlive('unit_2/conv2') *
+ self.numAlive('unit_2/conv1'))
+ expected['unit_2/conv3'] = (
+ self.getCoeff('unit_2/conv3') * np.sum(res_alive) *
+ self.numAlive('unit_2/conv2'))
+ expected['FC'] = 2.0 * np.sum(res_alive) * 23.0
+
+ # TODO: Is there a way to use Parametrized Tests to make this more
+ # elegant?
+ with self.test_session():
+ for short_name in expected:
+ cost = self.gamma_flop_reg.get_cost([self.getOp(short_name)]).eval()
+ self.assertEqual(expected[short_name], cost)
+
+ self.assertEqual(
+ sum(expected.values()),
+ self.gamma_flop_reg.get_cost().eval())
+
+
+class GroupLassoFlopRegTest(tf.test.TestCase):
+
+ def assertNearRelatively(self, expected, actual):
+ self.assertNear(expected, actual, expected * 1e-6)
+
+ def testFlopRegularizer(self):
+ tf.reset_default_graph()
+ tf.set_random_seed(7907)
+ with arg_scope(
+ [layers.conv2d, layers.conv2d_transpose],
+ weights_initializer=tf.random_normal_initializer):
+ # Our test model is:
+ #
+ # -> conv1 --+
+ # / |--[concat]
+ # image --> conv2 --+
+ # \
+ # -> convt
+ #
+ # (the model has two "outputs", convt and concat).
+ #
+ image = tf.constant(0.0, shape=[1, 17, 19, NUM_CHANNELS])
+ conv1 = layers.conv2d(
+ image, 13, [7, 5], padding='SAME', scope='conv1')
+ conv2 = layers.conv2d(
+ image, 23, [1, 1], padding='SAME', scope='conv2')
+ self.concat = tf.concat([conv1, conv2], 3)
+ self.convt = layers.conv2d_transpose(
+ image, 29, [7, 5], stride=3, padding='SAME', scope='convt')
+ self.name_to_var = {v.op.name: v for v in tf.global_variables()}
+ with self.test_session():
+ tf.global_variables_initializer().run()
+
+ threshold = 1.0
+ flop_reg = flop_regularizer.GroupLassoFlopsRegularizer(
+ [self.concat.op, self.convt.op], threshold=threshold)
+
+ with self.test_session() as s:
+ evaluated_vars = s.run(self.name_to_var)
+
+ def group_norm(weights, axis=(0, 1, 2)): # pylint: disable=invalid-name
+ return np.sqrt(np.mean(weights**2, axis=axis))
+
+ reg_vectors = {
+ 'conv1': group_norm(evaluated_vars['conv1/weights'], (0, 1, 2)),
+ 'conv2': group_norm(evaluated_vars['conv2/weights'], (0, 1, 2)),
+ 'convt': group_norm(evaluated_vars['convt/weights'], (0, 1, 3))
+ }
+
+ num_alive = {k: np.sum(r > threshold) for k, r in reg_vectors.items()}
+ total_outputs = (
+ reg_vectors['conv1'].shape[0] + reg_vectors['conv2'].shape[0])
+ total_alive_outputs = sum(num_alive.values())
+ assert total_alive_outputs > 0, (
+ 'All outputs are dead - test is trivial. Decrease the threshold.')
+ assert total_alive_outputs < total_outputs, (
+ 'All outputs are alive - test is trivial. Increase the threshold.')
+
+ coeff1 = _coeff(_get_op('conv1/Conv2D'))
+ coeff2 = _coeff(_get_op('conv2/Conv2D'))
+ coefft = _coeff(_get_op('convt/conv2d_transpose'))
+
+ expected_flop_cost = NUM_CHANNELS * (
+ coeff1 * num_alive['conv1'] + coeff2 * num_alive['conv2'] +
+ coefft * num_alive['convt'])
+ expected_reg_term = NUM_CHANNELS * (
+ coeff1 * np.sum(reg_vectors['conv1']) + coeff2 * np.sum(
+ reg_vectors['conv2']) + coefft * np.sum(reg_vectors['convt']))
+ with self.test_session():
+ self.assertEqual(
+ round(expected_flop_cost), round(flop_reg.get_cost().eval()))
+ self.assertNearRelatively(expected_reg_term,
+ flop_reg.get_regularization_term().eval())
+
+
+def _get_op(name): # pylint: disable=invalid-name
+ return tf.get_default_graph().get_operation_by_name(name)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/network_regularizers/model_size_regularizer.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/network_regularizers/model_size_regularizer.py"
new file mode 100644
index 00000000..d7615cc6
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/network_regularizers/model_size_regularizer.py"
@@ -0,0 +1,40 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""A NetworkRegularizer that targets the number of weights in the model."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from morph_net.framework import op_regularizer_manager
+from morph_net.network_regularizers import bilinear_cost_utils
+from morph_net.op_regularizers import gamma_l1_regularizer
+
+
+# TODO: Add unit tests. This class is very similar to
+# GammaFlopsRegularizer in flop_regularizer, so we have some indirect testing,
+# but more is needed.
+class GammaModelSizeRegularizer(
+ bilinear_cost_utils.BilinearNetworkRegularizer):
+ """A NetworkRegularizer that targets model size using Gamma L1 OpReg."""
+
+ def __init__(self, ops, gamma_threshold):
+ gamma_l1_reg_factory = gamma_l1_regularizer.GammaL1RegularizerFactory(
+ gamma_threshold)
+ opreg_manager = op_regularizer_manager.OpRegularizerManager(
+ ops, {'Conv2D': gamma_l1_reg_factory.create_regularizer,
+ 'DepthwiseConv2dNative': gamma_l1_reg_factory.create_regularizer})
+ super(GammaModelSizeRegularizer, self).__init__(
+ opreg_manager, bilinear_cost_utils.num_weights_coeff)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/op_regularizers/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/op_regularizers/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/op_regularizers/conv_group_lasso_regularizer.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/op_regularizers/conv_group_lasso_regularizer.py"
new file mode 100644
index 00000000..12b3a436
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/op_regularizers/conv_group_lasso_regularizer.py"
@@ -0,0 +1,127 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""A regularizer for convolutions, based on group-lasso.
+
+All the weights that are related to a single output are grouped into one LASSO
+group (https://arxiv.org/pdf/1611.06321.pdf).
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from morph_net.framework import generic_regularizers
+
+
+class ConvGroupLassoRegularizer(generic_regularizers.OpRegularizer):
+ """A regularizer for convolutions, based on group-lasso.
+
+  Supported ops: Conv2D and Conv2DBackpropInput (transposed Conv2D). The
+  grouping is done according to the formula:
+
+ (1 - l1_fraction) * L2(weights) / sqrt(dim) + l1_fraction * L1(weights) / dim,
+
+ where `dim` is the number of weights associated with an activation, L2 and L1
+ are the respective norms, and l1_fraction controls the balance between L1 and
+ L2 grouping. The paper cited above experiments with 0.0 and 0.5 for
+ l1_fraction.
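+
+  For example, for a Conv2D weight tensor of shape [height, width, in, out]
+  with l1_fraction = 0.0, the group value of output channel c is
+  sqrt(mean(weights[:, :, :, c] ** 2)), i.e. L2(weights[:, :, :, c]) divided
+  by sqrt(height * width * in).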
+ """
+
+ def __init__(self, op, threshold, l1_fraction=0.0):
+ """Creates an instance.
+
+ Args:
+ op: A tf.Operation object of type Conv2D or Conv2DBackpropInput.
+ threshold: A float. When the norm of the group associated with an
+ activation is below the threshold, it will be considered dead.
+ l1_fraction: A float, controls the balance between L1 and L2 grouping
+ (see above).
+
+ Raises:
+ ValueError: `op` is not of type 'Conv2D' or 'Conv2DBackpropInput', or
+ l1_fraction is outside interval [0.0, 1.0].
+ """
+ if op.type not in ('Conv2D', 'Conv2DBackpropInput'):
+ raise ValueError('The given op is not Conv2D or Conv2DBackpropInput.')
+ if l1_fraction < 0.0 or l1_fraction > 1.0:
+ raise ValueError(
+ 'l1_fraction should be in [0.0, 1.0], not %e.' % l1_fraction)
+
+ self._threshold = threshold
+ conv_weights = op.inputs[1]
+ # For a Conv2D (Conv2DBackpropInput) the output dimension of the weight
+ # matrix is 3 (2). We thus reduce over all other dimensions.
+
+ l2_norm = tf.sqrt(
+ tf.reduce_mean(tf.square(conv_weights), axis=_get_reduce_dims(op)))
+ if l1_fraction > 0.0:
+ l1_norm = tf.reduce_mean(tf.abs(conv_weights), axis=_get_reduce_dims(op))
+ norm = l1_fraction * l1_norm + (1.0 - l1_fraction) * l2_norm
+ else:
+ norm = l2_norm
+ # Sanity check: Output dimension of 'op' should match that of 'norm':
+ assert op.outputs[0].shape.ndims == 4
+ assert norm.shape.ndims == 1
+ op.outputs[0].shape.dims[3].assert_is_compatible_with(norm.shape.dims[0])
+ self._regularization_vector = norm
+ self._alive_vector = norm > threshold
+
+ @property
+ def regularization_vector(self):
+ return self._regularization_vector
+
+ @property
+ def alive_vector(self):
+ return self._alive_vector
+
+
+class ConvGroupLassoRegularizerFactory(object):
+ """A class for creating a ConvGroupLassoRegularizer for convolutions."""
+
+ def __init__(self, threshold, l1_fraction=0.0):
+ """Creates an instance.
+
+ Args:
+ threshold: A float scalar, will be used as a threshold for all
+ ConvGroupLassoRegularizer-s created by this class.
+ l1_fraction: A float scalar, will be passed as l1_fraction to all
+ ConvGroupLassoRegularizer-s created by this class.
+ """
+ self._threshold = threshold
+ self._l1_fraction = l1_fraction
+
+ def create_regularizer(self, op, opreg_manager=None):
+ """Creates a ConvGroupLassoRegularizer for `op`.
+
+ Args:
+ op: A tf.Operation of type 'Conv2D'.
+ opreg_manager: unused
+
+ Returns:
+ a ConvGroupLassoRegularizer that corresponds to `op`.
+ """
+ del opreg_manager # unused
+ return ConvGroupLassoRegularizer(op, self._threshold, self._l1_fraction)
+
+
+def _get_reduce_dims(op):
+ """Returns the reduction dimensions for grouping weights of various ops."""
+ type_to_dims = {'Conv2D': (0, 1, 2), 'Conv2DBackpropInput': (0, 1, 3)}
+ try:
+ return type_to_dims[op.type]
+ except KeyError:
+ raise ValueError('Reduce dims are unknown for op type %s' % op.type)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/op_regularizers/conv_group_lasso_regularizer_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/op_regularizers/conv_group_lasso_regularizer_test.py"
new file mode 100644
index 00000000..26bf44ff
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/op_regularizers/conv_group_lasso_regularizer_test.py"
@@ -0,0 +1,92 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for op_regularizers.conv_group_lasso_regularizer."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from absl.testing import parameterized
+import numpy as np
+import tensorflow as tf
+
+from morph_net.op_regularizers import conv_group_lasso_regularizer
+
+layers = tf.contrib.layers
+
+ALIVE_THRESHOLD = 1.0
+
+
+def assert_not_all_are_alive_or_dead(alive_vector):
+ assert not all(alive_vector), (
+ 'All activations are alive, test case is trivial. Increase threshold')
+ assert any(alive_vector), (
+ 'All activations are dead, test case is trivial. Decrease threshold')
+
+
+class GroupLassoRegularizerTest(parameterized.TestCase, tf.test.TestCase):
+
+ def setUp(self):
+ tf.reset_default_graph()
+ tf.set_random_seed(7907)
+ with tf.contrib.framework.arg_scope(
+ [layers.conv2d, layers.conv2d_transpose],
+ weights_initializer=tf.random_normal_initializer):
+ self.BuildModel()
+ with self.test_session():
+ tf.global_variables_initializer().run()
+
+ def BuildModel(self):
+ image = tf.constant(0.0, shape=[1, 17, 19, 3])
+ conv = layers.conv2d(image, 13, [7, 5], padding='SAME', scope='conv')
+ layers.conv2d_transpose(conv, 11, [5, 5], scope='convt')
+
+ # For Conv2D (Conv2DBackpropInput, aka conv2d transpose), the reduction
+ # indices for group lasso are (0, 1, 2) ((0, 1, 3)).
+ @parameterized.named_parameters(
+ ('_regular_conv', 'conv/Conv2D', (0, 1, 2), 0.0),
+ ('_transpose_conv', 'convt/conv2d_transpose', (0, 1, 3), 0.0),
+ ('_regular_conv_l10.5', 'conv/Conv2D', (0, 1, 2), 0.5))
+ def testOp(self, op_name, axis, l1_fraction):
+ op = tf.get_default_graph().get_operation_by_name(op_name)
+ with self.test_session():
+ weights = op.inputs[1].eval()
+ l1_reg_vector = np.mean(np.abs(weights), axis=axis)
+ l2_reg_vector = np.sqrt(np.mean(weights**2, axis=axis))
+ expected_reg_vector = (
+ l1_fraction * l1_reg_vector + (1.0 - l1_fraction) * l2_reg_vector)
+
+ # We choose the threshold at the expectation value, so that some activations
+ # end up above threshold and others end up below. The weights are normally
+ # distributed, so the L2 norm is 1.0, and the L1 norm is sqrt(2/pi).
+ # With a general l1_fraction, we compute a weighted average of the two:
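+    # (For w ~ N(0, 1), E[w**2] = 1 and E[|w|] = sqrt(2 / pi); the regularizer
+    # uses the RMS and the mean absolute value as the L2 and L1 group norms.)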
+ threshold = (1.0 - l1_fraction) + l1_fraction * np.sqrt(2 / np.pi)
+ expected_alive = expected_reg_vector > threshold
+ assert_not_all_are_alive_or_dead(expected_alive)
+
+ conv_reg = (
+ conv_group_lasso_regularizer.ConvGroupLassoRegularizer(
+ op, threshold=threshold, l1_fraction=l1_fraction))
+
+ with self.test_session():
+ actual_reg_vector = conv_reg.regularization_vector.eval()
+ actual_alive = conv_reg.alive_vector.eval()
+
+ self.assertAllClose(expected_reg_vector, actual_reg_vector)
+ self.assertAllEqual(expected_alive, actual_alive)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/op_regularizers/gamma_l1_regularizer.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/op_regularizers/gamma_l1_regularizer.py"
new file mode 100644
index 00000000..53f2907e
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/op_regularizers/gamma_l1_regularizer.py"
@@ -0,0 +1,120 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""An OpRegularizer that applies L1 regularization on batch-norm gammas."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from morph_net.framework import generic_regularizers
+from morph_net.op_regularizers import gamma_mapper
+
+
+class GammaL1Regularizer(generic_regularizers.OpRegularizer):
+ """An OpRegularizer that L1-regularizes batch-norm gamma."""
+
+ def __init__(self, gamma, gamma_threshold):
+ """Creates an instance.
+
+ Args:
+ gamma: a tf.Tensor of rank 1 with the gammas.
+ gamma_threshold: A float scalar, the threshold above which a gamma is
+ considered 'alive'.
+ """
+ self._gamma = gamma
+ self._gamma_threshold = gamma_threshold
+ abs_gamma = tf.abs(gamma)
+ self._alive_vector = abs_gamma > gamma_threshold
+ self._regularization_vector = abs_gamma
+
+ @property
+ def regularization_vector(self):
+ return self._regularization_vector
+
+ @property
+ def alive_vector(self):
+ return self._alive_vector
+
+
+class GammaL1RegularizerFactory(object):
+ """A class for creating a GammaL1Regularizer for convolutions."""
+
+ def __init__(self, gamma_threshold):
+ """Creates an instance.
+
+ Args:
+ gamma_threshold: A float scalar, will be used as a 'gamma_threshold' for
+ all the GammaL1Regularizer-s created by this class.
+ """
+ self._gamma_conv_mapper = gamma_mapper.ConvGammaMapperByName()
+ self._gamma_threshold = gamma_threshold
+
+ def create_regularizer(self, op, opreg_manager):
+ """Creates a GammaL1Regularizer for `op`.
+
+ Args:
+ op: A tf.Operation of type 'Conv2D' or 'DepthwiseConv2dNative'.
+ opreg_manager: An OpRegularizerManager object that will host the created
+ OpRegularizer object.
+
+ Returns:
+ a GammaL1Regularizer that corresponds to `op`.
+
+ Raises:
+ ValueError: If `op` does not have a Gamma that corresponds to it.
+ """
+ gamma = self._gamma_conv_mapper.get_gamma(op)
+ if gamma is None:
+ regularizer = None
+ else:
+ regularizer = GammaL1Regularizer(gamma, self._gamma_threshold)
+
+ if op.type == 'DepthwiseConv2dNative':
+ regularizer = _group_depthwise_conv_regularizer(op, regularizer,
+ opreg_manager)
+ return regularizer
+
+
+def _group_depthwise_conv_regularizer(op, regularizer, opreg_manager):
+ """Groups the regularizer of a depthwise convolution if needed."""
+ # If its first input doesn't have regularizers, return None. While pruning
+ # the depthwise_conv still effectively shut down input channels, fluid_net
+ # currently has not implemented a way to interpret it. In particular, the
+ # union of active channels of dw's input and output will always return all
+ # channels.
+ # TODO: update the interpretation to discover channels that are
+ # effectively pruned by dw.
+ input_reg = opreg_manager.get_regularizer(op.inputs[0].op)
+ if input_reg is None:
+ return None
+ # Check for cases where the depthwise convolution has a multiplier that
+ # is not one. Do not regularize this case.
+ # TODO: add support for depthwise with multiplier != 1.
+ if (op.inputs[0].shape.as_list()[-1] !=
+ op.outputs[0].shape.as_list()[-1]):
+ return None
+ # If the op is not regularized, return the regularizer of its input.
+ # This applies to cases in separable convolutions where the pointwise
+ # is batchnormed but the depthwise is not.
+ if regularizer is None:
+ return input_reg
+ # If both the input and depthwise have regularizers, we group them.
+ # This applies to Mobilenets where both 1x1 and depthwise have
+ # batchnorms.
+ else:
+ return opreg_manager.group_and_replace_regularizers(
+ [regularizer, input_reg])
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/op_regularizers/gamma_mapper.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/op_regularizers/gamma_mapper.py"
new file mode 100644
index 00000000..89948210
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/op_regularizers/gamma_mapper.py"
@@ -0,0 +1,201 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Classes for mapping convolutions to their batch-norm gammas."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import abc
+import collections
+
+import tensorflow as tf
+
+from morph_net.framework import op_regularizer_manager
+
+
+class GenericConvGammaMapper(object):
+ """An interface for mapping convolutions to their batch-norm gammas."""
+
+ __metaclass__ = abc.ABCMeta
+
+ @abc.abstractmethod
+ def get_gamma(self, conv_op):
+ """Returns the BatchNorm gamma tensor associated with `conv_op`, or None.
+
+ Args:
+ conv_op: A tf.Operation of type Conv2D.
+
+ Returns:
+ A tf.Tensor containing the BatchNorm gamma associated with `conv_op`, or
+ None if `conv_op` has no BatchNorm gamma.
+
+ Raises:
+ ValueError: `conv_op` is not a tf.Operation of type `Conv2D`.
+ KeyError: `conv_op` is not in the graph that was used to construct `self`
+ """
+
+ @abc.abstractproperty
+ def all_conv_ops(self):
+ """Return all Conv2D ops that were in the graph when `self` was created."""
+ pass
+
+
+def _get_existing_variable(name):
+ """Fetches a variable by name (like tf.get_variable with reuse=True).
+
+  The reason why we can't simply use tf.get_variable with reuse=True is that
+  when a variable partitioner is used, tf.get_variable requires the shape of
+  the variable to be given explicitly, even though that shape is already known
+  to the graph. This helper is a convenience function that works around this.
+
+ Args:
+ name: A string, the name of the variable.
+
+ Returns:
+ A tf.Tensor which is the result of convert_to_tensor of the variable, or
+ None if the variable does not exist.
+ """
+ try:
+ op = tf.get_default_graph().get_operation_by_name(name)
+ except KeyError:
+ return None
+
+ # Among all cases (partitioned variable, resource variable, or regular one),
+ # we assume that there's either a shape attribute to the op or to its output.
+ try:
+ shape = tf.TensorShape(op.get_attr('shape'))
+ except ValueError:
+ shape = op.outputs[0].shape
+
+ with tf.variable_scope(tf.get_variable_scope(), reuse=True):
+ try:
+ # tf.Variable and tf.PartitionedVariable are not polymorphic, but
+ # both support convert_to_tensor. The result is thus always a
+ # tf.Tensor.
+ return tf.convert_to_tensor(tf.get_variable(name, shape=shape))
+ except ValueError as e:
+ if 'Variable %s does not exist' % name in str(e):
+ return None
+ else:
+ raise e # pass through any other exceptions.
+
+
+class ConvGammaMapperByName(GenericConvGammaMapper):
+ """Maps a convolution to its BatchNorm gamma.
+
+ Assumes that the convolutions and their respective gammas conform to the
+ naming convention of tf.contrib.layers: A convolution's name ends with
+ `/Conv2D`, and the respective batch-norm gamma ends with
+  `/BatchNorm/gamma`.
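+
+  For example, a convolution op named `conv1/Conv2D` is mapped to the gamma
+  variable `conv1/BatchNorm/gamma`, if such a variable exists in the graph.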
+ """
+
+ def __init__(self):
+ """Constructs an instance. Builds mapping from Conv2D ops to their Gamma."""
+ self._conv_to_gamma = {}
+ # We use get_variable under a reuse=True scope because this is a way to
+ # capture both a regular tf.Variable and a PartitionedVariable.
+ with tf.variable_scope(tf.get_variable_scope(), reuse=True):
+ for op in tf.get_default_graph().get_operations():
+ if op.type != 'Conv2D' and op.type != 'DepthwiseConv2dNative':
+ continue
+ base_name = op.name.rsplit('/', 1)[0]
+ self._conv_to_gamma[op] = _get_existing_variable(base_name +
+ '/BatchNorm/gamma')
+
+ def get_gamma(self, conv_op):
+ _raise_if_not_conv(conv_op)
+ return self._conv_to_gamma[conv_op]
+
+ @property
+ def all_conv_ops(self):
+ return self._conv_to_gamma.keys()
+
+
+class ConvGammaMapperByConnectivity(GenericConvGammaMapper):
+ """Maps a convolution to its BatchNorm gammas based on graph connectivity.
+
+  Given a batch-norm gamma, propagates along the graph to find the convolutions
+  that are batch-normalized by this gamma. In ResNets, more than one
+  convolution can be normalized by the same batch-norm gamma, because
+  un-normalized convolutions are first summed and then their sum is normalized.
+ The converse is also true - a single convolution can be connected (through
+ residual connections) to multiple batch-norms.
+
+ Only fused batch-norm is supported: there seems to be significant variability
+ in the way non-fused batch-norm manifests in the tensorflow graph.
+ """
+
+ def __init__(self):
+ """Constructs an instance. Builds mapping from Conv2D ops to their Gamma."""
+ self._conv_to_gamma = collections.defaultdict(set)
+ for op in tf.get_default_graph().get_operations():
+ if op.type != 'FusedBatchNorm':
+ continue
+
+ convs = _dfs(op)
+ for conv in convs:
+ if conv.type == 'Conv2D':
+ self._conv_to_gamma[conv].add(op.inputs[1]) # Input #1 is gamma.
+
+ for op in tf.get_default_graph().get_operations():
+ if op.type == 'Conv2D' and op not in self._conv_to_gamma:
+ self._conv_to_gamma[op] = None
+
+ def get_gamma(self, conv_op):
+ _raise_if_not_conv(conv_op)
+ if conv_op not in self._conv_to_gamma:
+ raise KeyError
+ gammas = self._conv_to_gamma[conv_op]
+ if gammas and len(gammas) == 1:
+ # For a single element, return the element itself, to conform with
+ # ConvGammaMapperByName.
+ return list(gammas)[0]
+ return gammas
+
+ @property
+ def all_conv_ops(self):
+ return self._conv_to_gamma.keys()
+
+
+def _dfs(op, visited=None):
+ """Perform DFS on a graph.
+
+ Args:
+ op: A tf.Operation, the root node for the DFS.
+ visited: A set, used in the recursion.
+
+ Returns:
+ A list of the tf.Operations of type Conv2D that were encountered.
+ """
+ visited = visited or set()
+ ret = []
+ for child in op.inputs:
+ if child.op in visited:
+ return ret
+ visited.add(child.op)
+ if child.op.type not in op_regularizer_manager.NON_PASS_THROUGH_OPS:
+ ret.extend(_dfs(child.op, visited))
+ if child.op.type in ('Conv2D',): # TODO: support depthwise conv.
+ ret.append(child.op)
+ return ret
+
+
+def _raise_if_not_conv(op):
+ if not isinstance(op, tf.Operation):
+ raise ValueError('conv_op must be a tf.Operation, not %s' % type(op))
+ if op.type != 'Conv2D' and op.type != 'DepthwiseConv2dNative':
+    raise ValueError('conv_op must be a Conv2D or DepthwiseConv2dNative, '
+                     'not %s' % op.type)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/op_regularizers/gamma_mapper_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/op_regularizers/gamma_mapper_test.py"
new file mode 100644
index 00000000..22f65083
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/op_regularizers/gamma_mapper_test.py"
@@ -0,0 +1,318 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for gamma_mapper."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+
+from absl.testing import parameterized
+import tensorflow as tf
+
+from tensorflow.contrib.slim.nets import resnet_v1
+from tensorflow.contrib.slim.nets import resnet_v2
+from tensorflow.python.platform import flags
+from morph_net.op_regularizers import gamma_mapper
+
+
+FLAGS = flags.FLAGS
+
+
+layers = tf.contrib.layers
+arg_scope = tf.contrib.framework.arg_scope
+
+
+NUM_CHANNELS = 3
+
+
+def get_op(name):
+ return tf.get_default_graph().get_operation_by_name(name)
+
+
+CONV1_GAMMA = [0.1 * x for x in range(13)]
+SEP_CONV_GAMMA = [0.07 * x for x in range(23)]
+CKPT_FILE_NAME = 'ckpt'
+
+
+def build_model():
+ image = tf.constant(0.0, shape=[1, 17, 19, 3])
+ conv1 = layers.conv2d(image, 13, (3, 3), padding='SAME', scope='conv1')
+ layers.separable_conv2d(conv1, 23, (3, 3), 1, scope='sep_conv')
+
+
+def setUpModule():
+ """Save a model for later loading it.
+
+ This is the only way we're aware of for assigning values to variables
+ irrespectively of their type (regular or partitioned), since partitioned
+ variables do not support assignment.
+ """
+ with tf.Graph().as_default():
+ params = {
+ 'normalizer_fn': layers.batch_norm,
+ 'normalizer_params': {
+ 'scale': True,
+ }
+ }
+ with tf.contrib.framework.arg_scope(
+ [layers.conv2d, layers.separable_conv2d], **params):
+ build_model()
+
+ with tf.variable_scope(tf.get_variable_scope(), reuse=True):
+ conv_gamma = tf.get_variable('conv1/BatchNorm/gamma')
+ sep_gamma = tf.get_variable('sep_conv/BatchNorm/gamma')
+ s = tf.Session()
+ s.run(tf.global_variables_initializer())
+ s.run([conv_gamma.assign(CONV1_GAMMA), sep_gamma.assign(SEP_CONV_GAMMA)])
+ saver = tf.train.Saver()
+ saver.save(s, os.path.join(FLAGS.test_tmpdir, CKPT_FILE_NAME))
+
+
+class ConvGammaMapperTest(parameterized.TestCase, tf.test.TestCase):
+
+ def createMapper(self, connectivity):
+ if connectivity:
+ return gamma_mapper.ConvGammaMapperByConnectivity()
+ return gamma_mapper.ConvGammaMapperByName()
+
+ def setUp(self):
+ tf.reset_default_graph()
+
+ def TestSuccess(self, connectivity, partitioning, fused, use_resource):
+ params = {
+ 'trainable': True,
+ 'normalizer_fn': layers.batch_norm,
+ 'normalizer_params': {
+ 'scale': True,
+ 'fused': fused
+ }
+ }
+
+ partitioner = tf.fixed_size_partitioner(2) if partitioning else None
+ with tf.variable_scope(
+ tf.get_variable_scope(),
+ partitioner=partitioner,
+ use_resource=use_resource):
+ with tf.contrib.framework.arg_scope(
+ [layers.conv2d, layers.separable_conv2d], **params):
+ build_model()
+
+ sess = tf.Session()
+ saver = tf.train.Saver()
+ saver.restore(sess, os.path.join(FLAGS.test_tmpdir, CKPT_FILE_NAME))
+ mapper = self.createMapper(connectivity)
+ conv = get_op('conv1/Conv2D')
+ sep_conv = get_op('sep_conv/separable_conv2d')
+ with sess.as_default():
+ self.assertAllClose(CONV1_GAMMA, mapper.get_gamma(conv).eval())
+ self.assertAllClose(SEP_CONV_GAMMA, mapper.get_gamma(sep_conv).eval())
+
+ def testSuccess(self):
+ for connectivity in (False, True):
+ for partitioning in (False, True):
+ for fused in (False, True):
+ if connectivity and not fused: # This combination is not supported
+ continue
+ for use_resource in (False, True):
+ tf.reset_default_graph()
+ self.TestSuccess(connectivity, partitioning, fused, use_resource)
+
+ @parameterized.named_parameters(
+ ('_name_nopart', False, False), ('_name_part', False, True),
+ ('_conn_nopart', True, False), ('_conn_part', True, True))
+ def testNoBatchNorm(self, connectivity, partitioning):
+ partitioner = tf.fixed_size_partitioner(2) if partitioning else None
+ with tf.variable_scope(
+ tf.get_variable_scope(), partitioner=partitioner):
+ build_model()
+ mapper = self.createMapper(connectivity)
+ conv = get_op('conv1/Conv2D')
+ self.assertEqual(None, mapper.get_gamma(conv))
+
+ @parameterized.named_parameters(('_name_nopart', False),
+ ('_conn_nopart', True))
+ def testNotAConv(self, connectivity):
+ build_model()
+ mapper = self.createMapper(connectivity)
+ bias_add = get_op('conv1/BiasAdd')
+ with self.assertRaises(ValueError):
+ mapper.get_gamma(bias_add)
+
+ @parameterized.named_parameters(('_name_nopart', False),
+ ('_conn_nopart', True))
+ def testNotAnOpButATensor(self, connectivity):
+ build_model()
+ mapper = self.createMapper(connectivity)
+ conv = get_op('conv1/Conv2D')
+ with self.assertRaises(ValueError):
+ mapper.get_gamma(conv.outputs[0])
+
+ @parameterized.named_parameters(('_name_nopart', False),
+ ('_conn_nopart', True))
+ def testNotInGraph(self, connectivity):
+ mapper = self.createMapper(connectivity)
+ # Graph is built after the mapper
+ build_model()
+ conv = get_op('conv1/Conv2D')
+ with self.assertRaises(KeyError):
+ mapper.get_gamma(conv)
+
+
+def build_resnet(block_fn, resnet_fn):
+ params = {
+ 'trainable': True,
+ 'normalizer_fn': layers.batch_norm,
+ 'normalizer_params': {
+ 'is_training': True,
+ 'scale': True,
+ 'fused': True
+ }
+ }
+
+ with arg_scope([layers.conv2d], **params):
+ with arg_scope([layers.batch_norm], **(params['normalizer_params'])):
+ # Each block looks like:
+ # Image --> unit_1/shortcut
+ # Image --> unit_1/conv1 --> unit_1/conv2 --> unit_1/conv3
+ #
+ # unit_1/shortcut + unit_1/conv3 --> unit_1 (residual connection)
+ #
+ # unit_1 --> unit_2/conv1 -> unit_2/conv2 --> unit_2/conv3
+ #
+ # unit_1 + unit_2/conv3 --> unit_2 (residual connection)
+ #
+ # In between, there are strided convolutions and pooling ops, but these
+ # should not affect the regularizer.
+ blocks = [
+ block_fn('block1', base_depth=7, num_units=2, stride=2),
+ block_fn('block2', base_depth=13, num_units=2, stride=2),
+ ]
+ image = tf.constant(0.0, shape=[1, 2, 2, NUM_CHANNELS])
+ return resnet_fn(
+ image, blocks, include_root_block=False, is_training=False)[0]
+
+
+class ConvGammaMapperByConnectivityResnetTest(parameterized.TestCase,
+ tf.test.TestCase):
+
+ def assertGammaMatchesConv(self, mapper, prefix):
+ conv = get_op(prefix + '/Conv2D')
+ gamma = mapper.get_gamma(conv)
+ self.assertTrue(gamma.op.name.startswith(prefix + '/BatchNorm/gamma'))
+
+ def assertConvsConnectedToGammas(self, conv_names, gamma_prefixes, mapper):
+ """Asserts that each convolution is connected to each gamma.
+
+ Args:
+ conv_names: A list of strings representing names of Conv2D operations.
+ gamma_prefixes: A list of strings representing name prefixes of gamma
+ variables (we only verify prefixes because suffixes may depend on
+        whether we have partitioning or not).
+ mapper: a ConvGammaMapperByConnectivity object
+ """
+ def make_set(item):
+ return item if isinstance(item, set) else set([item,])
+
+ convs = [get_op(conv_name) for conv_name in conv_names]
+ gamma_sets = [make_set(mapper.get_gamma(conv)) for conv in convs]
+ if len(gamma_sets) > 1:
+ for i in range(1, len(gamma_sets)):
+ self.assertEqual(gamma_sets[i], gamma_sets[0])
+
+ actual_gamma_names = sorted([g.op.name for g in gamma_sets[0]])
+ gamma_prefixes = sorted(gamma_prefixes)
+ for expected, actual in zip(gamma_prefixes, actual_gamma_names):
+ self.assertTrue(actual.startswith(expected))
+
+ def testSuccessResnetV2(self):
+ build_resnet(resnet_v2.resnet_v2_block, resnet_v2.resnet_v2)
+ mapper = gamma_mapper.ConvGammaMapperByConnectivity()
+ # Check all "regular" convs, that are connected to their own batch norm,
+    # without residual connections involved.
+ for block in (1, 2):
+ for unit in (1, 2):
+ for conv in (1, 2):
+ self.assertGammaMatchesConv(
+ mapper, 'resnet_v2/block%d/unit_%d/bottleneck_v2/conv%d' %
+ (block, unit, conv))
+
+ # This diagram depicts all the convs and the batch-norm that don't have a
+ # one to one mapping:
+ #
+ # CONVS BATCH-NORMS
+ #
+ # block1/unit_1/shortcut --+
+ # |
+ # block1/unit_1/conv3 ----+--> block1/unit_2/preact
+ # |
+ # block1/unit_2/conv3 ----+--> block2/unit_1/preact
+ #
+ #
+ # block2/unit_1/shortcut --+
+ # |
+ # block2/unit_1/conv3 ----+--> block2/unit_1/preact
+ # |
+ # block2/unit_2/conv3 ----+--> postnorm
+ #
+ # This connectivity is tested below.
+
+ self.assertConvsConnectedToGammas([
+ 'resnet_v2/block1/unit_1/bottleneck_v2/shortcut/Conv2D',
+ 'resnet_v2/block1/unit_1/bottleneck_v2/conv3/Conv2D'
+ ], [
+ 'resnet_v2/block1/unit_2/bottleneck_v2/preact/gamma',
+ 'resnet_v2/block2/unit_1/bottleneck_v2/preact/gamma'
+ ], mapper)
+
+ self.assertConvsConnectedToGammas([
+ 'resnet_v2/block1/unit_2/bottleneck_v2/conv3/Conv2D',
+ ], [
+ 'resnet_v2/block2/unit_1/bottleneck_v2/preact/gamma',
+ ], mapper)
+
+ self.assertConvsConnectedToGammas([
+ 'resnet_v2/block2/unit_1/bottleneck_v2/shortcut/Conv2D',
+ 'resnet_v2/block2/unit_1/bottleneck_v2/conv3/Conv2D'
+ ], [
+ 'resnet_v2/block2/unit_2/bottleneck_v2/preact/gamma',
+ 'resnet_v2/postnorm/gamma'
+ ], mapper)
+
+ self.assertConvsConnectedToGammas([
+ 'resnet_v2/block2/unit_2/bottleneck_v2/conv3/Conv2D',
+ ], [
+ 'resnet_v2/postnorm/gamma',
+ ], mapper)
+
+ def testSuccessResnetV1(self):
+ build_resnet(resnet_v1.resnet_v1_block, resnet_v1.resnet_v1)
+ mapper = gamma_mapper.ConvGammaMapperByConnectivity()
+ # Here the mapping between convolutions and batch-norms is simple one to
+ # one.
+ for block in (1, 2):
+ self.assertGammaMatchesConv(
+ mapper, 'resnet_v1/block%d/unit_1/bottleneck_v1/shortcut' % block)
+
+ for unit in (1, 2):
+ for conv in (1, 2, 3):
+ self.assertGammaMatchesConv(
+ mapper, 'resnet_v1/block%d/unit_%d/bottleneck_v1/conv%d' %
+ (block, unit, conv))
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/testing/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/testing/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/testing/op_regularizer_stub.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/testing/op_regularizer_stub.py"
new file mode 100644
index 00000000..a73562f2
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/morph_net/testing/op_regularizer_stub.py"
@@ -0,0 +1,147 @@
+# Copyright 2018 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+r"""Helpers for testing the regularizers framework.
+
+Contains:
+
+- Code to build a simple convolutional model with concatenation and a residual
+ connection.
+
+- Logic for creating Stubs for OpRegularizers for the convolutions in the model.
+
+- Helpers that calculate the expected values of the alive and regularization
+ vectors.
+
+
+The model is:
+
+ -> conv1 --+ -> conv3 --> conv4 --
+ / | / \
+ image [concat] (add) --> output
+ \ | \ /
+ -> conv2 --+ -> -------------------
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+from morph_net.framework import generic_regularizers
+
+
+layers = tf.contrib.layers
+
+
+class OpRegularizerStub(generic_regularizers.OpRegularizer):
+ """A stub that exponses a constant regularization_vector and alive_vector."""
+
+ def __init__(self, regularization_vector, alive_vector):
+ self._regularization_vector = tf.constant(
+ regularization_vector, dtype=tf.float32)
+ self._alive_vector = tf.constant(alive_vector, dtype=tf.bool)
+
+ @property
+ def regularization_vector(self):
+ return self._regularization_vector
+
+ @property
+ def alive_vector(self):
+ return self._alive_vector
+
+
+ALIVE_STUB = {
+ 'conv1': [False, True, True, False, True, False, True],
+ 'conv2': [True, False, True, False, False],
+ 'conv3': [False, False, True, True],
+ 'conv4': [False, True, False, True, True, False, True,
+ False, False, True, False, False],
+ 'conv5': [False, True, False]
+}
+
+
+REG_STUB = {
+ 'conv1': [0.1, 0.3, 0.6, 0.2, 0.4, 0.0, 0.8],
+ 'conv2': [0.15, 0.25, 0.05, 0.55, 0.45],
+ 'conv3': [0.07, 0.27, 0.17, 0.37],
+ 'conv4': [
+ 0.07, 0.27, 0.17, 0.37, 0.28, 0.32, 0.12, 0.22, 0.19, 0.11, 0.02, 0.06
+ ],
+ 'conv5': [0.24, 0.34, 0.29]
+}
+
+
+def _create_stub(key):
+ return OpRegularizerStub(REG_STUB[key], ALIVE_STUB[key])
+
+
+def build_model():
+ image = tf.constant(0.0, shape=[1, 17, 19, 3])
+ conv1 = layers.conv2d(image, 7, [7, 5], padding='SAME', scope='conv1')
+ conv2 = layers.conv2d(image, 5, [1, 1], padding='SAME', scope='conv2')
+ concat = tf.concat([conv1, conv2], 3)
+ conv3 = layers.conv2d(concat, 4, [1, 1], padding='SAME', scope='conv3')
+ conv4 = layers.conv2d(conv3, 12, [3, 3], padding='SAME', scope='conv4')
+ conv5 = layers.conv2d(
+ concat + conv4, 3, [3, 3], stride=2, padding='SAME', scope='conv5')
+ return conv5.op
+
+
+def _create_conv2d_regularizer(conv_op, manager=None):
+ del manager # unused
+ for key in REG_STUB:
+ if conv_op.name.startswith(key):
+ return _create_stub(key)
+ raise ValueError('No regularizer for %s' % conv_op.name)
+
+
+MOCK_REG_DICT = {'Conv2D': _create_conv2d_regularizer}
+
+
+def expected_regularization():
+ """Build the expected alive vectors applying the rules of concat and group."""
+ concat = REG_STUB['conv1'] + REG_STUB['conv2']
+ # Grouping: Activation is alive after grouping if one of the constituents is
+ # alive.
+ grouped = [max(a, b) for a, b in zip(concat, REG_STUB['conv4'])]
+ conv1_length = len(REG_STUB['conv1'])
+ return {
+ 'conv1': grouped[:conv1_length],
+ 'conv2': grouped[conv1_length:],
+ 'conv3': REG_STUB['conv3'],
+ 'conv4': grouped,
+ 'conv5': REG_STUB['conv5'],
+ 'add': grouped,
+ 'concat': grouped
+ }
+
+
+def expected_alive():
+ """Build the expected alive vectors applying the rules of concat and group."""
+ concat = ALIVE_STUB['conv1'] + ALIVE_STUB['conv2']
+ # Grouping: Activation is alive after grouping if one of the constituents is
+ # alive.
+ grouped = [a or b for a, b in zip(concat, ALIVE_STUB['conv4'])]
+ conv1_length = len(ALIVE_STUB['conv1'])
+ return {
+ 'conv1': grouped[:conv1_length],
+ 'conv2': grouped[conv1_length:],
+ 'conv3': ALIVE_STUB['conv3'],
+ 'conv4': grouped,
+ 'conv5': ALIVE_STUB['conv5'],
+ 'add': grouped,
+ 'concat': grouped
+ }
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/namignizer/.gitignore" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/namignizer/.gitignore"
new file mode 100644
index 00000000..2dae8043
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/namignizer/.gitignore"
@@ -0,0 +1,6 @@
+# Remove the pyc files
+*.pyc
+
+# Ignore the model and the data
+model/
+data/
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/namignizer/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/namignizer/README.md"
new file mode 100644
index 00000000..88ddaf81
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/namignizer/README.md"
@@ -0,0 +1,82 @@
+# Namignizer
+
+Use a variation of the [PTB](https://www.tensorflow.org/versions/r0.8/tutorials/recurrent/index.html#recurrent-neural-networks) model to recognize and generate names using the [Kaggle Baby Name Database](https://www.kaggle.com/kaggle/us-baby-names).
+
+### API
+Namignizer is implemented in TensorFlow r0.8 and uses the Python package `pandas` for some data processing.
+
+#### How to use
+Download the data from Kaggle and place it in your data directory (or use the small training data provided). The example data looks like so:
+
+```
+Id,Name,Year,Gender,Count
+1,Mary,1880,F,7065
+2,Anna,1880,F,2604
+3,Emma,1880,F,2003
+4,Elizabeth,1880,F,1939
+5,Minnie,1880,F,1746
+6,Margaret,1880,F,1578
+7,Ida,1880,F,1472
+8,Alice,1880,F,1414
+9,Bertha,1880,F,1320
+```
+
+But any data with the two columns: `Name` and `Count` will work.
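+
+If you want to sanity-check your own file first, below is a minimal sketch of
+the aggregation that `data_utils.read_names` performs (assuming `pandas` is
+installed and the CSV has at least the `Name` and `Count` columns):
+
+```python
+import pandas as pd
+
+names_data = pd.read_csv("data/SmallNames.txt")
+names_data.Name = names_data.Name.str.lower()
+# Sum the counts of duplicate names (a name can appear once per year and gender).
+per_name_counts = names_data.groupby("Name")["Count"].sum()
+print(per_name_counts.head())
+```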
+
+With the data, we can then train the model:
+
+```python
+train("data/SmallNames.txt", "model/namignizer", SmallConfig)
+```
+
+And you will get the output:
+
+```
+Reading Name data in data/SmallNames.txt
+Epoch: 1 Learning rate: 1.000
+0.090 perplexity: 18.539 speed: 282 lps
+...
+0.890 perplexity: 1.478 speed: 285 lps
+0.990 perplexity: 1.477 speed: 284 lps
+Epoch: 13 Train Perplexity: 1.477
+```
+
+As a side effect, this writes model checkpoints to the `model` directory. With these checkpoints you will be able to determine the perplexity your model gives for any arbitrary set of names, like so:
+
+```python
+namignize(["mary", "ida", "gazorpazorp", "houyhnhnms", "bob"],
+ tf.train.latest_checkpoint("model"), SmallConfig)
+```
+You will provide the same config and the same checkpoint directory. This will allow you to use the model you just trained. You will then get a perplexity output for each name like so:
+
+```
+Name mary gives us a perplexity of 1.03105580807
+Name ida gives us a perplexity of 1.07770049572
+Name gazorpazorp gives us a perplexity of 175.940353394
+Name houyhnhnms gives us a perplexity of 9.53870773315
+Name bob gives us a perplexity of 6.03938627243
+```
+
+Finally, you will also be able to generate names using the model like so:
+
+```python
+namignator(tf.train.latest_checkpoint("model"), SmallConfig)
+```
+
+Again, you will need to provide the same config and the same checkpoint directory. This will allow you to use the model you just trained. You will then get a single generated name. Examples of output that I got when using the provided data are:
+
+```
+['b', 'e', 'r', 't', 'h', 'a', '`']
+['m', 'a', 'r', 'y', '`']
+['a', 'n', 'n', 'a', '`']
+['m', 'a', 'r', 'y', '`']
+['b', 'e', 'r', 't', 'h', 'a', '`']
+['a', 'n', 'n', 'a', '`']
+['e', 'l', 'i', 'z', 'a', 'b', 'e', 't', 'h', '`']
+```
+
+Notice that each name ends with a backtick. This marks the end of the name.
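+
+The backtick comes from the character encoding used throughout the module:
+letters map to 1-26 and the end-of-name token is 0, so decoding an index with
+`chr(index + 96)` turns the end token into `chr(96)`, which is a backtick. A
+small sketch of decoding a generated index sequence:
+
+```python
+indices = [2, 5, 18, 20, 8, 1, 0]  # e.g. sampled by namignator
+print(''.join(chr(i + 96) for i in indices))  # prints: bertha`
+```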
+
+### Contact Info
+
+Feel free to reach out to me at knt(at google) or k.nathaniel.tucker(at gmail)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/namignizer/data_utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/namignizer/data_utils.py"
new file mode 100644
index 00000000..43202150
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/namignizer/data_utils.py"
@@ -0,0 +1,119 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""Utilities for parsing Kaggle baby names files."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import collections
+import os
+
+import numpy as np
+import tensorflow as tf
+import pandas as pd
+
+# the default end of name rep will be zero
+_EON = 0
+
+
+def read_names(names_path):
+ """read data from downloaded file. See SmallNames.txt for example format
+ or go to https://www.kaggle.com/kaggle/us-baby-names for full lists
+
+ Args:
+ names_path: path to the csv file similar to the example type
+ Returns:
+ Dataset: a namedtuple of two elements: deduped names and their associated
+      counts. The names are composed only of the 26 lowercase letters a-z
+ """
+ names_data = pd.read_csv(names_path)
+ names_data.Name = names_data.Name.str.lower()
+
+ name_data = names_data.groupby(by=["Name"])["Count"].sum()
+ name_counts = np.array(name_data.tolist())
+ names_deduped = np.array(name_data.index.tolist())
+
+ Dataset = collections.namedtuple('Dataset', ['Name', 'Count'])
+ return Dataset(names_deduped, name_counts)
+
+
+def _letter_to_number(letter):
+ """converts letters to numbers between 1 and 27"""
+ # ord of lower case 'a' is 97
+ return ord(letter) - 96
+
+
+def namignizer_iterator(names, counts, batch_size, num_steps, epoch_size):
+ """Takes a list of names and counts like those output from read_names, and
+ makes an iterator yielding a batch_size by num_steps array of random names
+ separated by an end of name token. The names are chosen randomly according
+ to their counts. The batch may end mid-name
+
+ Args:
+ names: a set of lowercase names composed of 26 characters
+ counts: a list of the frequency of those names
+ batch_size: int
+ num_steps: int
+ epoch_size: number of batches to yield
+ Yields:
+ (x, y): a batch_size by num_steps array of ints representing letters, where
+ x will be the input and y will be the target
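+
+  For example, with batch_size=2 and num_steps=5, each yielded x and y has
+  shape (2, 5), and y is x shifted ahead by one letter.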
+ """
+ name_distribution = counts / counts.sum()
+
+ for i in range(epoch_size):
+ data = np.zeros(batch_size * num_steps + 1)
+ samples = np.random.choice(names, size=batch_size * num_steps // 2,
+ replace=True, p=name_distribution)
+
+ data_index = 0
+ for sample in samples:
+ if data_index >= batch_size * num_steps:
+ break
+      # list() keeps this working under Python 3, where map returns an iterator.
+      for letter in list(map(_letter_to_number, sample)) + [_EON]:
+ if data_index >= batch_size * num_steps:
+ break
+ data[data_index] = letter
+ data_index += 1
+
+ x = data[:batch_size * num_steps].reshape((batch_size, num_steps))
+ y = data[1:batch_size * num_steps + 1].reshape((batch_size, num_steps))
+
+ yield (x, y)
+
+
+def name_to_batch(name, batch_size, num_steps):
+ """ Takes a single name and fills a batch with it
+
+ Args:
+ name: lowercase composed of 26 characters
+ batch_size: int
+ num_steps: int
+ Returns:
+ x, y: a batch_size by num_steps array of ints representing letters, where
+ x will be the input and y will be the target. The array is filled up
+ to the length of the string, the rest is filled with zeros
+ """
+ data = np.zeros(batch_size * num_steps + 1)
+
+ data_index = 0
+  # list() keeps this working under Python 3, where map returns an iterator.
+  for letter in list(map(_letter_to_number, name)) + [_EON]:
+ data[data_index] = letter
+ data_index += 1
+
+ x = data[:batch_size * num_steps].reshape((batch_size, num_steps))
+ y = data[1:batch_size * num_steps + 1].reshape((batch_size, num_steps))
+
+ return x, y
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/namignizer/model.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/namignizer/model.py"
new file mode 100644
index 00000000..72c5c5ec
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/namignizer/model.py"
@@ -0,0 +1,136 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""RNN model with embeddings"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import tensorflow as tf
+
+
+class NamignizerModel(object):
+ """The Namignizer model ~ strongly based on PTB"""
+
+ def __init__(self, is_training, config):
+ self.batch_size = batch_size = config.batch_size
+ self.num_steps = num_steps = config.num_steps
+ size = config.hidden_size
+ # will always be 27
+ vocab_size = config.vocab_size
+
+ # placeholders for inputs
+ self._input_data = tf.placeholder(tf.int32, [batch_size, num_steps])
+ self._targets = tf.placeholder(tf.int32, [batch_size, num_steps])
+ # weights for the loss function
+ self._weights = tf.placeholder(tf.float32, [batch_size * num_steps])
+
+ # lstm for our RNN cell (GRU supported too)
+ lstm_cells = []
+ for layer in range(config.num_layers):
+ lstm_cell = tf.contrib.rnn.BasicLSTMCell(size, forget_bias=0.0)
+ if is_training and config.keep_prob < 1:
+ lstm_cell = tf.contrib.rnn.DropoutWrapper(
+ lstm_cell, output_keep_prob=config.keep_prob)
+ lstm_cells.append(lstm_cell)
+ cell = tf.contrib.rnn.MultiRNNCell(lstm_cells)
+
+ self._initial_state = cell.zero_state(batch_size, tf.float32)
+
+ with tf.device("/cpu:0"):
+ embedding = tf.get_variable("embedding", [vocab_size, size])
+ inputs = tf.nn.embedding_lookup(embedding, self._input_data)
+
+ if is_training and config.keep_prob < 1:
+ inputs = tf.nn.dropout(inputs, config.keep_prob)
+
+ outputs = []
+ state = self._initial_state
+ with tf.variable_scope("RNN"):
+ for time_step in range(num_steps):
+ if time_step > 0:
+ tf.get_variable_scope().reuse_variables()
+ (cell_output, state) = cell(inputs[:, time_step, :], state)
+ outputs.append(cell_output)
+
+ output = tf.reshape(tf.concat(axis=1, values=outputs), [-1, size])
+ softmax_w = tf.get_variable("softmax_w", [size, vocab_size])
+ softmax_b = tf.get_variable("softmax_b", [vocab_size])
+ logits = tf.matmul(output, softmax_w) + softmax_b
+ loss = tf.contrib.legacy_seq2seq.sequence_loss_by_example(
+ [logits],
+ [tf.reshape(self._targets, [-1])],
+ [self._weights])
+ self._loss = loss
+ self._cost = cost = tf.reduce_sum(loss) / batch_size
+ self._final_state = state
+
+ # probabilities of each letter
+ self._activations = tf.nn.softmax(logits)
+
+ # ability to save the model
+ self.saver = tf.train.Saver(tf.global_variables())
+
+ if not is_training:
+ return
+
+ self._lr = tf.Variable(0.0, trainable=False)
+ tvars = tf.trainable_variables()
+ grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars),
+ config.max_grad_norm)
+ optimizer = tf.train.GradientDescentOptimizer(self.lr)
+ self._train_op = optimizer.apply_gradients(zip(grads, tvars))
+
+ def assign_lr(self, session, lr_value):
+ session.run(tf.assign(self.lr, lr_value))
+
+ @property
+ def input_data(self):
+ return self._input_data
+
+ @property
+ def targets(self):
+ return self._targets
+
+ @property
+ def activations(self):
+ return self._activations
+
+ @property
+ def weights(self):
+ return self._weights
+
+ @property
+ def initial_state(self):
+ return self._initial_state
+
+ @property
+ def cost(self):
+ return self._cost
+
+ @property
+ def loss(self):
+ return self._loss
+
+ @property
+ def final_state(self):
+ return self._final_state
+
+ @property
+ def lr(self):
+ return self._lr
+
+ @property
+ def train_op(self):
+ return self._train_op
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/namignizer/names.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/namignizer/names.py"
new file mode 100644
index 00000000..25374271
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/namignizer/names.py"
@@ -0,0 +1,259 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""A library showing off sequence recognition and generation with the simple
+example of names.
+
+We use recurrent neural nets to learn complex functions able to recognize and
+generate sequences of a given form. This can be used for natural language
+syntax recognition, dynamically generating maps or puzzles and of course
+baby name generation.
+
+Before using this module, it is recommended to read the TensorFlow tutorial on
+recurrent neural nets, as it explains the basic concepts of this model, and
+introduces the PTB module, on which this model is based.
+
+Here is an overview of the functions available in this module:
+
+* RNN Module for sequence functions based on PTB
+
+* Name recognition specifically for recognizing names, but can be adapted to
+ recognizing sequence patterns
+
+* Name generations specifically for generating names, but can be adapted to
+ generating arbitrary sequence patterns
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import time
+
+import tensorflow as tf
+import numpy as np
+
+from model import NamignizerModel
+import data_utils
+
+
+class SmallConfig(object):
+ """Small config."""
+ init_scale = 0.1
+ learning_rate = 1.0
+ max_grad_norm = 5
+ num_layers = 2
+ num_steps = 20
+ hidden_size = 200
+ max_epoch = 4
+ max_max_epoch = 13
+ keep_prob = 1.0
+ lr_decay = 0.5
+ batch_size = 20
+ vocab_size = 27
+ epoch_size = 100
+
+
+class LargeConfig(object):
+ """Medium config."""
+ init_scale = 0.05
+ learning_rate = 1.0
+ max_grad_norm = 5
+ num_layers = 2
+ num_steps = 35
+ hidden_size = 650
+ max_epoch = 6
+ max_max_epoch = 39
+ keep_prob = 0.5
+ lr_decay = 0.8
+ batch_size = 20
+ vocab_size = 27
+ epoch_size = 100
+
+
+class TestConfig(object):
+ """Tiny config, for testing."""
+ init_scale = 0.1
+ learning_rate = 1.0
+ max_grad_norm = 1
+ num_layers = 1
+ num_steps = 2
+ hidden_size = 2
+ max_epoch = 1
+ max_max_epoch = 1
+ keep_prob = 1.0
+ lr_decay = 0.5
+ batch_size = 20
+ vocab_size = 27
+ epoch_size = 100
+
+
+def run_epoch(session, m, names, counts, epoch_size, eval_op, verbose=False):
+ """Runs the model on the given data for one epoch
+
+ Args:
+ session: the tf session holding the model graph
+ m: an instance of the NamignizerModel
+    names: a set of lowercase names composed of the 26 letters a-z
+ counts: a list of the frequency of the above names
+ epoch_size: the number of batches to run
+ eval_op: whether to change the params or not, and how to do it
+ Kwargs:
+ verbose: whether to print out state of training during the epoch
+ Returns:
+ cost: the average cost during the last stage of the epoch
+ """
+ start_time = time.time()
+ costs = 0.0
+ iters = 0
+ for step, (x, y) in enumerate(data_utils.namignizer_iterator(names, counts,
+ m.batch_size, m.num_steps, epoch_size)):
+
+ cost, _ = session.run([m.cost, eval_op],
+ {m.input_data: x,
+ m.targets: y,
+ m.weights: np.ones(m.batch_size * m.num_steps)})
+ costs += cost
+ iters += m.num_steps
+
+ if verbose and step % (epoch_size // 10) == 9:
+ print("%.3f perplexity: %.3f speed: %.0f lps" %
+ (step * 1.0 / epoch_size, np.exp(costs / iters),
+ iters * m.batch_size / (time.time() - start_time)))
+
+ if step >= epoch_size:
+ break
+
+ return np.exp(costs / iters)
+
+
+def train(data_dir, checkpoint_path, config):
+ """Trains the model with the given data
+
+ Args:
+ data_dir: path to the data for the model (see data_utils for data
+ format)
+ checkpoint_path: the path to save the trained model checkpoints
+ config: one of the above configs that specify the model and how it
+ should be run and trained
+ Returns:
+ None
+ """
+ # Prepare Name data.
+ print("Reading Name data in %s" % data_dir)
+ names, counts = data_utils.read_names(data_dir)
+
+ with tf.Graph().as_default(), tf.Session() as session:
+ initializer = tf.random_uniform_initializer(-config.init_scale,
+ config.init_scale)
+ with tf.variable_scope("model", reuse=None, initializer=initializer):
+ m = NamignizerModel(is_training=True, config=config)
+
+ tf.global_variables_initializer().run()
+
+ for i in range(config.max_max_epoch):
+ lr_decay = config.lr_decay ** max(i - config.max_epoch, 0.0)
+ m.assign_lr(session, config.learning_rate * lr_decay)
+
+ print("Epoch: %d Learning rate: %.3f" % (i + 1, session.run(m.lr)))
+ train_perplexity = run_epoch(session, m, names, counts, config.epoch_size, m.train_op,
+ verbose=True)
+ print("Epoch: %d Train Perplexity: %.3f" %
+ (i + 1, train_perplexity))
+
+ m.saver.save(session, checkpoint_path, global_step=i)
+
+
+def namignize(names, checkpoint_path, config):
+ """Recognizes names and prints the Perplexity of the model for each names
+ in the list
+
+ Args:
+ names: a list of names in the model format
+    checkpoint_path: the path to restore the trained model from; should not
+      include the model name, just the path to it
+ config: one of the above configs that specify the model and how it
+ should be run and trained
+ Returns:
+ None
+ """
+ with tf.Graph().as_default(), tf.Session() as session:
+
+ with tf.variable_scope("model"):
+ m = NamignizerModel(is_training=False, config=config)
+
+ m.saver.restore(session, checkpoint_path)
+
+ for name in names:
+ x, y = data_utils.name_to_batch(name, m.batch_size, m.num_steps)
+
+ cost, loss, _ = session.run([m.cost, m.loss, tf.no_op()],
+ {m.input_data: x,
+ m.targets: y,
+ m.weights: np.concatenate((
+ np.ones(len(name)), np.zeros(m.batch_size * m.num_steps - len(name))))})
+
+ print("Name {} gives us a perplexity of {}".format(
+ name, np.exp(cost)))
+
+
+def namignator(checkpoint_path, config):
+ """Generates names randomly according to a given model
+
+ Args:
+    checkpoint_path: the path to restore the trained model from; should not
+        include the model name, just the path to it
+    config: one of the above configs that specifies the model and how it
+        should be run and trained
+ Returns:
+ None
+ """
+ # mutate the config to become a name generator config
+ config.num_steps = 1
+ config.batch_size = 1
+
+ with tf.Graph().as_default(), tf.Session() as session:
+
+ with tf.variable_scope("model"):
+ m = NamignizerModel(is_training=False, config=config)
+
+ m.saver.restore(session, checkpoint_path)
+
+ activations, final_state, _ = session.run([m.activations, m.final_state, tf.no_op()],
+ {m.input_data: np.zeros((1, 1)),
+ m.targets: np.zeros((1, 1)),
+ m.weights: np.ones(1)})
+
+ # sample from our softmax activations
+ next_letter = np.random.choice(27, p=activations[0])
+ name = [next_letter]
+ while next_letter != 0:
+ activations, final_state, _ = session.run([m.activations, m.final_state, tf.no_op()],
+ {m.input_data: [[next_letter]],
+ m.targets: np.zeros((1, 1)),
+ m.initial_state: final_state,
+ m.weights: np.ones(1)})
+
+ next_letter = np.random.choice(27, p=activations[0])
+ name += [next_letter]
+
+        print("".join(map(lambda x: chr(x + 96), name)))
+
+
+if __name__ == "__main__":
+ train("data/SmallNames.txt", "model/namignizer", SmallConfig)
+
+ namignize(["mary", "ida", "gazorbazorb", "mmmhmm", "bob"],
+ tf.train.latest_checkpoint("model"), SmallConfig)
+
+ namignator(tf.train.latest_checkpoint("model"), SmallConfig)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_gpu/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_gpu/README.md"
new file mode 100644
index 00000000..510f1c5e
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_gpu/README.md"
@@ -0,0 +1,83 @@
+# NeuralGPU
+Code for the Neural GPU model described in http://arxiv.org/abs/1511.08228.
+The extended version was described in https://arxiv.org/abs/1610.08613.
+
+Requirements:
+* TensorFlow (see tensorflow.org for how to install)
+
+The model can be trained on the following algorithmic tasks:
+
+* `sort` - Sort a symbol list
+* `kvsort` - Sort symbol keys in dictionary
+* `id` - Return the same symbol list
+* `rev` - Reverse a symbol list
+* `rev2` - Reverse a symbol dictionary by key
+* `incr` - Add one to a symbol value
+* `add` - Long decimal addition
+* `left` - First symbol in list
+* `right` - Last symbol in list
+* `left-shift` - Left shift a symbol list
+* `right-shift` - Right shift a symbol list
+* `bmul` - Long binary multiplication
+* `mul` - Long decimal multiplication
+* `dup` - Duplicate a symbol list with padding
+* `badd` - Long binary addition
+* `qadd` - Long quaternary addition
+* `search` - Search for symbol key in dictionary
+
+It can also be trained on the WMT English-French translation task:
+
+* `wmt` - WMT English-French translation (data will be downloaded)
+
+The value range for symbols is defined by the `vocab_size` flag:
+symbol values lie in the range from 1 to `vocab_size - 1`.
+So if you set `--vocab_size=16` (the default), then `--problem=rev`
+will reverse lists of 15 symbols, and `--problem=id` will be the identity
+on lists of up to 15 symbols.
+
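+For example, a run of the reversal task with a smaller vocabulary might look
+like this (both flags are defined in `neural_gpu_trainer.py`; the values are
+only illustrative):
+
+```
+python neural_gpu_trainer.py --problem=rev --vocab_size=8
+```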
+
+To train the model on the binary multiplication task, run:
+
+```
+python neural_gpu_trainer.py --problem=bmul
+```
+
+This trains the Extended Neural GPU; to train the original model, run:
+
+```
+python neural_gpu_trainer.py --problem=bmul --beam_size=0
+```
+
+While training, interim checkpoints of the model parameters will be
+written to `/tmp/neural_gpu/`.
+
+Once the error is down to a level you're comfortable with, hit `Ctrl-C` to
+stop the training process. The latest model parameters will be saved in
+`/tmp/neural_gpu/neural_gpu.ckpt-` and used on any subsequent run.
+
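+Checkpoints go under the training directory; if you want them somewhere other
+than `/tmp`, the trainer also exposes a `--train_dir` flag ("Directory to
+store models"). For example (the path below is only illustrative):
+
+```
+python neural_gpu_trainer.py --problem=bmul --train_dir=/tmp/my_neural_gpu
+```
+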
+To evaluate a trained model on how well it decodes, run:
+
+```
+python neural_gpu_trainer.py --problem=bmul --mode=1
+```
+
+To interact with a model (experimental, see code), run:
+
+```
+python neural_gpu_trainer.py --problem=bmul --mode=2
+```
+
+To train on WMT data, use a larger --nmaps and --vocab_size and disable the
+curriculum (--curriculum_seq=1.0):
+
+```
+python neural_gpu_trainer.py --problem=wmt --vocab_size=32768 --nmaps=256
+ --vec_size=256 --curriculum_seq=1.0 --max_length=60 --data_dir ~/wmt
+```
+
+If you have less memory, try a lower batch size, e.g. `--batch_size=4`. With
+more GPUs in your system, each GPU will process its own batch, so you can run
+larger models. For example, `--batch_size=4 --num_gpus=4 --nmaps=512
+--vec_size=512` will run a large model (512-size) on 4 GPUs, with an
+effective batch size of 4*4=16.
+
+Maintained by Lukasz Kaiser (lukaszkaiser)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_gpu/data_utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_gpu/data_utils.py"
new file mode 100644
index 00000000..3c14ff70
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_gpu/data_utils.py"
@@ -0,0 +1,458 @@
+# Copyright 2015 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Neural GPU -- data generation and batching utilities."""
+
+import math
+import os
+import random
+import sys
+import time
+
+import numpy as np
+from six.moves import xrange
+import tensorflow as tf
+
+import program_utils
+
+FLAGS = tf.app.flags.FLAGS
+
+bins = [2 + bin_idx_i for bin_idx_i in xrange(256)]
+all_tasks = ["sort", "kvsort", "id", "rev", "rev2", "incr", "add", "left",
+ "right", "left-shift", "right-shift", "bmul", "mul", "dup",
+ "badd", "qadd", "search", "progeval", "progsynth"]
+log_filename = ""
+vocab, rev_vocab = None, None
+
+
+def pad(l):
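+  """Return the smallest bin size >= l (the last bin if none fits)."""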
+ for b in bins:
+ if b >= l: return b
+ return bins[-1]
+
+
+def bin_for(l):
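+  """Return the index of the smallest bin with size >= l."""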
+ for i, b in enumerate(bins):
+ if b >= l: return i
+ return len(bins) - 1
+
+
+train_set = {}
+test_set = {}
+for some_task in all_tasks:
+ train_set[some_task] = []
+ test_set[some_task] = []
+ for all_max_len in xrange(10000):
+ train_set[some_task].append([])
+ test_set[some_task].append([])
+
+
+def read_tmp_file(name):
+ """Read from a file with the given name in our log directory or above."""
+ dirname = os.path.dirname(log_filename)
+ fname = os.path.join(dirname, name + ".txt")
+ if not tf.gfile.Exists(fname):
+ print_out("== not found file: " + fname)
+ fname = os.path.join(dirname, "../" + name + ".txt")
+ if not tf.gfile.Exists(fname):
+ print_out("== not found file: " + fname)
+ fname = os.path.join(dirname, "../../" + name + ".txt")
+ if not tf.gfile.Exists(fname):
+ print_out("== not found file: " + fname)
+ return None
+ print_out("== found file: " + fname)
+ res = []
+ with tf.gfile.GFile(fname, mode="r") as f:
+ for line in f:
+ res.append(line.strip())
+ return res
+
+
+def write_tmp_file(name, lines):
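+  """Write lines to a file with the given name in our log directory."""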
+ dirname = os.path.dirname(log_filename)
+ fname = os.path.join(dirname, name + ".txt")
+ with tf.gfile.GFile(fname, mode="w") as f:
+ for line in lines:
+ f.write(line + "\n")
+
+
+def add(n1, n2, base=10):
+ """Add two numbers represented as lower-endian digit lists."""
+ k = max(len(n1), len(n2)) + 1
+ d1 = n1 + [0 for _ in xrange(k - len(n1))]
+ d2 = n2 + [0 for _ in xrange(k - len(n2))]
+ res = []
+ carry = 0
+ for i in xrange(k):
+ if d1[i] + d2[i] + carry < base:
+ res.append(d1[i] + d2[i] + carry)
+ carry = 0
+ else:
+ res.append(d1[i] + d2[i] + carry - base)
+ carry = 1
+ while res and res[-1] == 0:
+ res = res[:-1]
+ if res: return res
+ return [0]
+
+
+def init_data(task, length, nbr_cases, nclass):
+ """Data initialization."""
+ def rand_pair(l, task):
+ """Random data pair for a task. Total length should be <= l."""
+ k = int((l-1)/2)
+ base = 10
+ if task[0] == "b": base = 2
+ if task[0] == "q": base = 4
+ d1 = [np.random.randint(base) for _ in xrange(k)]
+ d2 = [np.random.randint(base) for _ in xrange(k)]
+ if task in ["add", "badd", "qadd"]:
+ res = add(d1, d2, base)
+ elif task in ["mul", "bmul"]:
+ d1n = sum([d * (base ** i) for i, d in enumerate(d1)])
+ d2n = sum([d * (base ** i) for i, d in enumerate(d2)])
+ if task == "bmul":
+ res = [int(x) for x in list(reversed(str(bin(d1n * d2n))))[:-2]]
+ else:
+ res = [int(x) for x in list(reversed(str(d1n * d2n)))]
+ else:
+ sys.exit()
+ sep = [12]
+ if task in ["add", "badd", "qadd"]: sep = [11]
+ inp = [d + 1 for d in d1] + sep + [d + 1 for d in d2]
+ return inp, [r + 1 for r in res]
+
+ def rand_dup_pair(l):
+ """Random data pair for duplication task. Total length should be <= l."""
+ k = int(l/2)
+ x = [np.random.randint(nclass - 1) + 1 for _ in xrange(k)]
+ inp = x + [0 for _ in xrange(l - k)]
+ res = x + x + [0 for _ in xrange(l - 2*k)]
+ return inp, res
+
+ def rand_rev2_pair(l):
+ """Random data pair for reverse2 task. Total length should be <= l."""
+ inp = [(np.random.randint(nclass - 1) + 1,
+            np.random.randint(nclass - 1) + 1) for _ in xrange(l // 2)]
+ res = [i for i in reversed(inp)]
+ return [x for p in inp for x in p], [x for p in res for x in p]
+
+ def rand_search_pair(l):
+ """Random data pair for search task. Total length should be <= l."""
+ inp = [(np.random.randint(nclass - 1) + 1,
+            np.random.randint(nclass - 1) + 1) for _ in xrange((l - 1) // 2)]
+ q = np.random.randint(nclass - 1) + 1
+ res = 0
+ for (k, v) in reversed(inp):
+ if k == q:
+ res = v
+ return [x for p in inp for x in p] + [q], [res]
+
+ def rand_kvsort_pair(l):
+ """Random data pair for key-value sort. Total length should be <= l."""
+    keys = [(np.random.randint(nclass - 1) + 1, i) for i in xrange(l // 2)]
+    vals = [np.random.randint(nclass - 1) + 1 for _ in xrange(l // 2)]
+ kv = [(k, vals[i]) for (k, i) in keys]
+ sorted_kv = [(k, vals[i]) for (k, i) in sorted(keys)]
+ return [x for p in kv for x in p], [x for p in sorted_kv for x in p]
+
+ def prog_io_pair(prog, max_len, counter=0):
+ try:
+ ilen = np.random.randint(max_len - 3) + 1
+      bound = max(15 - (counter // 20), 1)
+ inp = [random.choice(range(-bound, bound)) for _ in range(ilen)]
+ inp_toks = [program_utils.prog_rev_vocab[t]
+ for t in program_utils.tokenize(str(inp)) if t != ","]
+ out = program_utils.evaluate(prog, {"a": inp})
+ out_toks = [program_utils.prog_rev_vocab[t]
+ for t in program_utils.tokenize(str(out)) if t != ","]
+ if counter > 400:
+ out_toks = []
+ if (out_toks and out_toks[0] == program_utils.prog_rev_vocab["["] and
+ len(out_toks) != len([o for o in out if o == ","]) + 3):
+ raise ValueError("generated list with too long ints")
+ if (out_toks and out_toks[0] != program_utils.prog_rev_vocab["["] and
+ len(out_toks) > 1):
+ raise ValueError("generated one int but tokenized it to many")
+ if len(out_toks) > max_len:
+ raise ValueError("output too long")
+ return (inp_toks, out_toks)
+ except ValueError:
+ return prog_io_pair(prog, max_len, counter+1)
+
+ def spec(inp):
+ """Return the target given the input for some tasks."""
+ if task == "sort":
+ return sorted(inp)
+ elif task == "id":
+ return inp
+ elif task == "rev":
+ return [i for i in reversed(inp)]
+ elif task == "incr":
+ carry = 1
+ res = []
+ for i in xrange(len(inp)):
+ if inp[i] + carry < nclass:
+ res.append(inp[i] + carry)
+ carry = 0
+ else:
+ res.append(1)
+ carry = 1
+ return res
+ elif task == "left":
+ return [inp[0]]
+ elif task == "right":
+ return [inp[-1]]
+ elif task == "left-shift":
+ return [inp[l-1] for l in xrange(len(inp))]
+ elif task == "right-shift":
+ return [inp[l+1] for l in xrange(len(inp))]
+ else:
+ print_out("Unknown spec for task " + str(task))
+ sys.exit()
+
+ l = length
+ cur_time = time.time()
+ total_time = 0.0
+
+ is_prog = task in ["progeval", "progsynth"]
+ if is_prog:
+ inputs_per_prog = 5
+ program_utils.make_vocab()
+ progs = read_tmp_file("programs_len%d" % (l / 10))
+ if not progs:
+ progs = program_utils.gen(l / 10, 1.2 * nbr_cases / inputs_per_prog)
+ write_tmp_file("programs_len%d" % (l / 10), progs)
+ prog_ios = read_tmp_file("programs_len%d_io" % (l / 10))
+ nbr_cases = min(nbr_cases, len(progs) * inputs_per_prog) / 1.2
+ if not prog_ios:
+ # Generate program io data.
+ prog_ios = []
+ for pidx, prog in enumerate(progs):
+ if pidx % 500 == 0:
+ print_out("== generating io pairs for program %d" % pidx)
+ if pidx * inputs_per_prog > nbr_cases * 1.2:
+ break
+ ptoks = [program_utils.prog_rev_vocab[t]
+ for t in program_utils.tokenize(prog)]
+ ptoks.append(program_utils.prog_rev_vocab["_EOS"])
+ plen = len(ptoks)
+ for _ in xrange(inputs_per_prog):
+ if task == "progeval":
+ inp, out = prog_io_pair(prog, plen)
+ prog_ios.append(str(inp) + "\t" + str(out) + "\t" + prog)
+ elif task == "progsynth":
+ plen = max(len(ptoks), 8)
+ for _ in xrange(3):
+ inp, out = prog_io_pair(prog, plen / 2)
+ prog_ios.append(str(inp) + "\t" + str(out) + "\t" + prog)
+ write_tmp_file("programs_len%d_io" % (l / 10), prog_ios)
+ prog_ios_dict = {}
+ for s in prog_ios:
+ i, o, p = s.split("\t")
+ i_clean = "".join([c for c in i if c.isdigit() or c == " "])
+ o_clean = "".join([c for c in o if c.isdigit() or c == " "])
+ inp = [int(x) for x in i_clean.split()]
+ out = [int(x) for x in o_clean.split()]
+ if inp and out:
+ if p in prog_ios_dict:
+ prog_ios_dict[p].append([inp, out])
+ else:
+ prog_ios_dict[p] = [[inp, out]]
+ # Use prog_ios_dict to create data.
+ progs = []
+ for prog in prog_ios_dict:
+ if len([c for c in prog if c == ";"]) <= (l / 10):
+ progs.append(prog)
+ nbr_cases = min(nbr_cases, len(progs) * inputs_per_prog) / 1.2
+ print_out("== %d training cases on %d progs" % (nbr_cases, len(progs)))
+ for pidx, prog in enumerate(progs):
+ if pidx * inputs_per_prog > nbr_cases * 1.2:
+ break
+ ptoks = [program_utils.prog_rev_vocab[t]
+ for t in program_utils.tokenize(prog)]
+ ptoks.append(program_utils.prog_rev_vocab["_EOS"])
+ plen = len(ptoks)
+ dset = train_set if pidx < nbr_cases / inputs_per_prog else test_set
+ for _ in xrange(inputs_per_prog):
+ if task == "progeval":
+ inp, out = prog_ios_dict[prog].pop()
+ dset[task][bin_for(plen)].append([[ptoks, inp, [], []], [out]])
+ elif task == "progsynth":
+ plen, ilist = max(len(ptoks), 8), [[]]
+ for _ in xrange(3):
+ inp, out = prog_ios_dict[prog].pop()
+ ilist.append(inp + out)
+ dset[task][bin_for(plen)].append([ilist, [ptoks]])
+
+ for case in xrange(0 if is_prog else nbr_cases):
+ total_time += time.time() - cur_time
+ cur_time = time.time()
+ if l > 10000 and case % 100 == 1:
+ print_out(" avg gen time %.4f s" % (total_time / float(case)))
+ if task in ["add", "badd", "qadd", "bmul", "mul"]:
+ i, t = rand_pair(l, task)
+ train_set[task][bin_for(len(i))].append([[[], i, [], []], [t]])
+ i, t = rand_pair(l, task)
+ test_set[task][bin_for(len(i))].append([[[], i, [], []], [t]])
+ elif task == "dup":
+ i, t = rand_dup_pair(l)
+ train_set[task][bin_for(len(i))].append([[i], [t]])
+ i, t = rand_dup_pair(l)
+ test_set[task][bin_for(len(i))].append([[i], [t]])
+ elif task == "rev2":
+ i, t = rand_rev2_pair(l)
+ train_set[task][bin_for(len(i))].append([[i], [t]])
+ i, t = rand_rev2_pair(l)
+ test_set[task][bin_for(len(i))].append([[i], [t]])
+ elif task == "search":
+ i, t = rand_search_pair(l)
+ train_set[task][bin_for(len(i))].append([[i], [t]])
+ i, t = rand_search_pair(l)
+ test_set[task][bin_for(len(i))].append([[i], [t]])
+ elif task == "kvsort":
+ i, t = rand_kvsort_pair(l)
+ train_set[task][bin_for(len(i))].append([[i], [t]])
+ i, t = rand_kvsort_pair(l)
+ test_set[task][bin_for(len(i))].append([[i], [t]])
+ elif task not in ["progeval", "progsynth"]:
+ inp = [np.random.randint(nclass - 1) + 1 for i in xrange(l)]
+ target = spec(inp)
+ train_set[task][bin_for(l)].append([[inp], [target]])
+ inp = [np.random.randint(nclass - 1) + 1 for i in xrange(l)]
+ target = spec(inp)
+ test_set[task][bin_for(l)].append([[inp], [target]])
+
+
+def to_symbol(i):
+ """Covert ids to text."""
+ if i == 0: return ""
+ if i == 11: return "+"
+ if i == 12: return "*"
+ return str(i-1)
+
+
+def to_id(s):
+ """Covert text to ids."""
+ if s == "+": return 11
+ if s == "*": return 12
+ return int(s) + 1
+
+
+def get_batch(bin_id, batch_size, data_set, height, offset=None, preset=None):
+ """Get a batch of data, training or testing."""
+ inputs, targets = [], []
+ pad_length = bins[bin_id]
+ for b in xrange(batch_size):
+ if preset is None:
+ elem = random.choice(data_set[bin_id])
+ if offset is not None and offset + b < len(data_set[bin_id]):
+ elem = data_set[bin_id][offset + b]
+ else:
+ elem = preset
+ inpt, targett, inpl, targetl = elem[0], elem[1], [], []
+ for inp in inpt:
+ inpl.append(inp + [0 for _ in xrange(pad_length - len(inp))])
+ if len(inpl) == 1:
+ for _ in xrange(height - 1):
+ inpl.append([0 for _ in xrange(pad_length)])
+ for target in targett:
+ targetl.append(target + [0 for _ in xrange(pad_length - len(target))])
+ inputs.append(inpl)
+ targets.append(targetl)
+ res_input = np.array(inputs, dtype=np.int32)
+ res_target = np.array(targets, dtype=np.int32)
+ assert list(res_input.shape) == [batch_size, height, pad_length]
+ assert list(res_target.shape) == [batch_size, 1, pad_length]
+ return res_input, res_target
+
+
+def print_out(s, newline=True):
+ """Print a message out and log it to file."""
+ if log_filename:
+ try:
+ with tf.gfile.GFile(log_filename, mode="a") as f:
+ f.write(s + ("\n" if newline else ""))
+ # pylint: disable=bare-except
+ except:
+ sys.stderr.write("Error appending to %s\n" % log_filename)
+ sys.stdout.write(s + ("\n" if newline else ""))
+ sys.stdout.flush()
+
+
+def decode(output):
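+  """Decode output logits into class ids via per-step argmax."""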
+ return [np.argmax(o, axis=1) for o in output]
+
+
+def accuracy(inpt_t, output, target_t, batch_size, nprint,
+ beam_out=None, beam_scores=None):
+ """Calculate output accuracy given target."""
+ assert nprint < batch_size + 1
+ inpt = []
+ for h in xrange(inpt_t.shape[1]):
+ inpt.extend([inpt_t[:, h, l] for l in xrange(inpt_t.shape[2])])
+ target = [target_t[:, 0, l] for l in xrange(target_t.shape[2])]
+ def tok(i):
+ if rev_vocab and i < len(rev_vocab):
+ return rev_vocab[i]
+ return str(i - 1)
+ def task_print(inp, output, target):
+ stop_bound = 0
+ print_len = 0
+ while print_len < len(target) and target[print_len] > stop_bound:
+ print_len += 1
+ print_out(" i: " + " ".join([tok(i) for i in inp if i > 0]))
+ print_out(" o: " +
+ " ".join([tok(output[l]) for l in xrange(print_len)]))
+ print_out(" t: " +
+ " ".join([tok(target[l]) for l in xrange(print_len)]))
+ decoded_target = target
+ decoded_output = decode(output)
+ # Use beam output if given and score is high enough.
+ if beam_out is not None:
+ for b in xrange(batch_size):
+ if beam_scores[b] >= 10.0:
+ for l in xrange(min(len(decoded_output), beam_out.shape[2])):
+ decoded_output[l][b] = int(beam_out[b, 0, l])
+ total = 0
+ errors = 0
+ seq = [0 for b in xrange(batch_size)]
+ for l in xrange(len(decoded_output)):
+ for b in xrange(batch_size):
+ if decoded_target[l][b] > 0:
+ total += 1
+ if decoded_output[l][b] != decoded_target[l][b]:
+ seq[b] = 1
+ errors += 1
+ e = 0 # Previous error index
+ for _ in xrange(min(nprint, sum(seq))):
+ while seq[e] == 0:
+ e += 1
+ task_print([inpt[l][e] for l in xrange(len(inpt))],
+ [decoded_output[l][e] for l in xrange(len(decoded_target))],
+ [decoded_target[l][e] for l in xrange(len(decoded_target))])
+ e += 1
+ for b in xrange(nprint - errors):
+ task_print([inpt[l][b] for l in xrange(len(inpt))],
+ [decoded_output[l][b] for l in xrange(len(decoded_target))],
+ [decoded_target[l][b] for l in xrange(len(decoded_target))])
+ return errors, total, sum(seq)
+
+
+def safe_exp(x):
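+  """Exponentiate x, capping the result at 10000 to avoid overflow."""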
+ perp = 10000
+ x = float(x)
+ if x < 100: perp = math.exp(x)
+ if perp > 10000: return 10000
+ return perp
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_gpu/neural_gpu.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_gpu/neural_gpu.py"
new file mode 100644
index 00000000..55b2b3e9
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_gpu/neural_gpu.py"
@@ -0,0 +1,747 @@
+# Copyright 2015 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""The Neural GPU Model."""
+
+import time
+
+import numpy as np
+from six.moves import xrange
+import tensorflow as tf
+
+from tensorflow.python.framework import function
+import data_utils as data
+
+do_jit = False # Gives more speed but experimental for now.
+jit_scope = tf.contrib.compiler.jit.experimental_jit_scope
+
+
+def conv_linear(args, kw, kh, nin, nout, rate, do_bias, bias_start, prefix):
+ """Convolutional linear map."""
+ if not isinstance(args, (list, tuple)):
+ args = [args]
+ with tf.variable_scope(prefix):
+ with tf.device("/cpu:0"):
+ k = tf.get_variable("CvK", [kw, kh, nin, nout])
+ if len(args) == 1:
+ arg = args[0]
+ else:
+ arg = tf.concat(axis=3, values=args)
+ res = tf.nn.convolution(arg, k, dilation_rate=(rate, 1), padding="SAME")
+ if not do_bias: return res
+ with tf.device("/cpu:0"):
+ bias_term = tf.get_variable(
+ "CvB", [nout], initializer=tf.constant_initializer(bias_start))
+ bias_term = tf.reshape(bias_term, [1, 1, 1, nout])
+ return res + bias_term
+
+
+def sigmoid_cutoff(x, cutoff):
+ """Sigmoid with cutoff, e.g., 1.2sigmoid(x) - 0.1."""
+ y = tf.sigmoid(x)
+ if cutoff < 1.01: return y
+ d = (cutoff - 1.0) / 2.0
+ return tf.minimum(1.0, tf.maximum(0.0, cutoff * y - d), name="cutoff_min")
+
+
+@function.Defun(tf.float32, noinline=True)
+def sigmoid_cutoff_12(x):
+ """Sigmoid with cutoff 1.2, specialized for speed and memory use."""
+ y = tf.sigmoid(x)
+ return tf.minimum(1.0, tf.maximum(0.0, 1.2 * y - 0.1), name="cutoff_min_12")
+
+
+@function.Defun(tf.float32, noinline=True)
+def sigmoid_hard(x):
+ """Hard sigmoid."""
+ return tf.minimum(1.0, tf.maximum(0.0, 0.25 * x + 0.5))
+
+
+def place_at14(decided, selected, it):
+ """Place selected at it-th coordinate of decided, dim=1 of 4."""
+ slice1 = decided[:, :it, :, :]
+ slice2 = decided[:, it + 1:, :, :]
+ return tf.concat(axis=1, values=[slice1, selected, slice2])
+
+
+def place_at13(decided, selected, it):
+ """Place selected at it-th coordinate of decided, dim=1 of 3."""
+ slice1 = decided[:, :it, :]
+ slice2 = decided[:, it + 1:, :]
+ return tf.concat(axis=1, values=[slice1, selected, slice2])
+
+
+def tanh_cutoff(x, cutoff):
+ """Tanh with cutoff, e.g., 1.1tanh(x) cut to [-1. 1]."""
+ y = tf.tanh(x)
+ if cutoff < 1.01: return y
+ d = (cutoff - 1.0) / 2.0
+ return tf.minimum(1.0, tf.maximum(-1.0, (1.0 + d) * y))
+
+
+@function.Defun(tf.float32, noinline=True)
+def tanh_hard(x):
+ """Hard tanh."""
+ return tf.minimum(1.0, tf.maximum(0.0, x))
+
+
+def layer_norm(x, nmaps, prefix, epsilon=1e-5):
+ """Layer normalize the 4D tensor x, averaging over the last dimension."""
+ with tf.variable_scope(prefix):
+ scale = tf.get_variable("layer_norm_scale", [nmaps],
+ initializer=tf.ones_initializer())
+ bias = tf.get_variable("layer_norm_bias", [nmaps],
+ initializer=tf.zeros_initializer())
+ mean, variance = tf.nn.moments(x, [3], keep_dims=True)
+ norm_x = (x - mean) / tf.sqrt(variance + epsilon)
+ return norm_x * scale + bias
+
+
+def conv_gru(inpts, mem, kw, kh, nmaps, rate, cutoff, prefix, do_layer_norm,
+ args_len=None):
+ """Convolutional GRU."""
+ def conv_lin(args, suffix, bias_start):
+ total_args_len = args_len or len(args) * nmaps
+ res = conv_linear(args, kw, kh, total_args_len, nmaps, rate, True,
+ bias_start, prefix + "/" + suffix)
+ if do_layer_norm:
+ return layer_norm(res, nmaps, prefix + "/" + suffix)
+ else:
+ return res
+ if cutoff == 1.2:
+ reset = sigmoid_cutoff_12(conv_lin(inpts + [mem], "r", 1.0))
+ gate = sigmoid_cutoff_12(conv_lin(inpts + [mem], "g", 1.0))
+ elif cutoff > 10:
+ reset = sigmoid_hard(conv_lin(inpts + [mem], "r", 1.0))
+ gate = sigmoid_hard(conv_lin(inpts + [mem], "g", 1.0))
+ else:
+ reset = sigmoid_cutoff(conv_lin(inpts + [mem], "r", 1.0), cutoff)
+ gate = sigmoid_cutoff(conv_lin(inpts + [mem], "g", 1.0), cutoff)
+ if cutoff > 10:
+ candidate = tanh_hard(conv_lin(inpts + [reset * mem], "c", 0.0))
+ else:
+ # candidate = tanh_cutoff(conv_lin(inpts + [reset * mem], "c", 0.0), cutoff)
+ candidate = tf.tanh(conv_lin(inpts + [reset * mem], "c", 0.0))
+ return gate * mem + (1 - gate) * candidate
+
+
+CHOOSE_K = 256
+
+
+def memory_call(q, l, nmaps, mem_size, vocab_size, num_gpus, update_mem):
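+  """Placeholder for experiments with additional memory structures."""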
+ raise ValueError("Fill for experiments with additional memory structures.")
+
+
+def memory_run(step, nmaps, mem_size, batch_size, vocab_size,
+ global_step, do_training, update_mem, decay_factor, num_gpus,
+ target_emb_weights, output_w, gpu_targets_tn, it):
+ """Run memory."""
+ q = step[:, 0, it, :]
+ mlabels = gpu_targets_tn[:, it, 0]
+ res, mask, mem_loss = memory_call(
+ q, mlabels, nmaps, mem_size, vocab_size, num_gpus, update_mem)
+ res = tf.gather(target_emb_weights, res) * tf.expand_dims(mask[:, 0], 1)
+
+ # Mix gold and original in the first steps, 20% later.
+ gold = tf.nn.dropout(tf.gather(target_emb_weights, mlabels), 0.7)
+ use_gold = 1.0 - tf.cast(global_step, tf.float32) / (1000. * decay_factor)
+ use_gold = tf.maximum(use_gold, 0.2) * do_training
+ mem = tf.cond(tf.less(tf.random_uniform([]), use_gold),
+ lambda: use_gold * gold + (1.0 - use_gold) * res,
+ lambda: res)
+ mem = tf.reshape(mem, [-1, 1, 1, nmaps])
+ return mem, mem_loss, update_mem
+
+
+@tf.RegisterGradient("CustomIdG")
+def _custom_id_grad(_, grads):
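+  """Identity gradient for the Floor op used in quantize()."""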
+ return grads
+
+
+def quantize(t, quant_scale, max_value=1.0):
+ """Quantize a tensor t with each element in [-max_value, max_value]."""
+ t = tf.minimum(max_value, tf.maximum(t, -max_value))
+ big = quant_scale * (t + max_value) + 0.5
+ with tf.get_default_graph().gradient_override_map({"Floor": "CustomIdG"}):
+ res = (tf.floor(big) / quant_scale) - max_value
+ return res
+
+
+def quantize_weights_op(quant_scale, max_value):
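+  """Return an op that quantizes all trainable variables in place."""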
+ ops = [v.assign(quantize(v, quant_scale, float(max_value)))
+ for v in tf.trainable_variables()]
+ return tf.group(*ops)
+
+
+def autoenc_quantize(x, nbits, nmaps, do_training, layers=1):
+ """Autoencoder into nbits vectors of bits, using noise and sigmoids."""
+ enc_x = tf.reshape(x, [-1, nmaps])
+ for i in xrange(layers - 1):
+ enc_x = tf.layers.dense(enc_x, nmaps, name="autoenc_%d" % i)
+ enc_x = tf.layers.dense(enc_x, nbits, name="autoenc_%d" % (layers - 1))
+ noise = tf.truncated_normal(tf.shape(enc_x), stddev=2.0)
+ dec_x = sigmoid_cutoff_12(enc_x + noise * do_training)
+ dec_x = tf.reshape(dec_x, [-1, nbits])
+ for i in xrange(layers):
+ dec_x = tf.layers.dense(dec_x, nmaps, name="autodec_%d" % i)
+ return tf.reshape(dec_x, tf.shape(x))
+
+
+def make_dense(targets, noclass, low_param):
+ """Move a batch of targets to a dense 1-hot representation."""
+ low = low_param / float(noclass - 1)
+ high = 1.0 - low * (noclass - 1)
+ targets = tf.cast(targets, tf.int64)
+ return tf.one_hot(targets, depth=noclass, on_value=high, off_value=low)
+
+
+def reorder_beam(beam_size, batch_size, beam_val, output, is_first,
+ tensors_to_reorder):
+ """Reorder to minimize beam costs."""
+ # beam_val is [batch_size x beam_size]; let b = batch_size * beam_size
+ # decided is len x b x a x b
+ # output is b x out_size; step is b x len x a x b;
+ outputs = tf.split(axis=0, num_or_size_splits=beam_size, value=tf.nn.log_softmax(output))
+ all_beam_vals, all_beam_idx = [], []
+ beam_range = 1 if is_first else beam_size
+ for i in xrange(beam_range):
+ top_out, top_out_idx = tf.nn.top_k(outputs[i], k=beam_size)
+ cur_beam_val = beam_val[:, i]
+ top_out = tf.Print(top_out, [top_out, top_out_idx, beam_val, i,
+ cur_beam_val], "GREPO", summarize=8)
+ all_beam_vals.append(top_out + tf.expand_dims(cur_beam_val, 1))
+ all_beam_idx.append(top_out_idx)
+ all_beam_idx = tf.reshape(tf.transpose(tf.concat(axis=1, values=all_beam_idx), [1, 0]),
+ [-1])
+ top_beam, top_beam_idx = tf.nn.top_k(tf.concat(axis=1, values=all_beam_vals), k=beam_size)
+ top_beam_idx = tf.Print(top_beam_idx, [top_beam, top_beam_idx],
+ "GREP", summarize=8)
+ reordered = [[] for _ in xrange(len(tensors_to_reorder) + 1)]
+ top_out_idx = []
+ for i in xrange(beam_size):
+ which_idx = top_beam_idx[:, i] * batch_size + tf.range(batch_size)
+ top_out_idx.append(tf.gather(all_beam_idx, which_idx))
+ which_beam = top_beam_idx[:, i] / beam_size # [batch]
+ which_beam = which_beam * batch_size + tf.range(batch_size)
+ reordered[0].append(tf.gather(output, which_beam))
+ for i, t in enumerate(tensors_to_reorder):
+ reordered[i + 1].append(tf.gather(t, which_beam))
+ new_tensors = [tf.concat(axis=0, values=t) for t in reordered]
+ top_out_idx = tf.concat(axis=0, values=top_out_idx)
+ return (top_beam, new_tensors[0], top_out_idx, new_tensors[1:])
+
+
+class NeuralGPU(object):
+ """Neural GPU Model."""
+
+ def __init__(self, nmaps, vec_size, niclass, noclass, dropout,
+ max_grad_norm, cutoff, nconvs, kw, kh, height, mem_size,
+ learning_rate, min_length, num_gpus, num_replicas,
+ grad_noise_scale, sampling_rate, act_noise=0.0, do_rnn=False,
+ atrous=False, beam_size=1, backward=True, do_layer_norm=False,
+ autoenc_decay=1.0):
+ # Feeds for parameters and ops to update them.
+ self.nmaps = nmaps
+ if backward:
+ self.global_step = tf.Variable(0, trainable=False, name="global_step")
+ self.cur_length = tf.Variable(min_length, trainable=False)
+ self.cur_length_incr_op = self.cur_length.assign_add(1)
+ self.lr = tf.Variable(learning_rate, trainable=False)
+ self.lr_decay_op = self.lr.assign(self.lr * 0.995)
+ self.do_training = tf.placeholder(tf.float32, name="do_training")
+ self.update_mem = tf.placeholder(tf.int32, name="update_mem")
+ self.noise_param = tf.placeholder(tf.float32, name="noise_param")
+
+ # Feeds for inputs, targets, outputs, losses, etc.
+ self.input = tf.placeholder(tf.int32, name="inp")
+ self.target = tf.placeholder(tf.int32, name="tgt")
+ self.prev_step = tf.placeholder(tf.float32, name="prev_step")
+ gpu_input = tf.split(axis=0, num_or_size_splits=num_gpus, value=self.input)
+ gpu_target = tf.split(axis=0, num_or_size_splits=num_gpus, value=self.target)
+ gpu_prev_step = tf.split(axis=0, num_or_size_splits=num_gpus, value=self.prev_step)
+ batch_size = tf.shape(gpu_input[0])[0]
+
+ if backward:
+ adam_lr = 0.005 * self.lr
+ adam = tf.train.AdamOptimizer(adam_lr, epsilon=1e-3)
+
+ def adam_update(grads):
+ return adam.apply_gradients(zip(grads, tf.trainable_variables()),
+ global_step=self.global_step,
+ name="adam_update")
+
+ # When switching from Adam to SGD we perform reverse-decay.
+ if backward:
+ global_step_float = tf.cast(self.global_step, tf.float32)
+ sampling_decay_exponent = global_step_float / 100000.0
+ sampling_decay = tf.maximum(0.05, tf.pow(0.5, sampling_decay_exponent))
+ self.sampling = sampling_rate * 0.05 / sampling_decay
+ else:
+ self.sampling = tf.constant(0.0)
+
+ # Cache variables on cpu if needed.
+ if num_replicas > 1 or num_gpus > 1:
+ with tf.device("/cpu:0"):
+ caching_const = tf.constant(0)
+ tf.get_variable_scope().set_caching_device(caching_const.op.device)
+ # partitioner = tf.variable_axis_size_partitioner(1024*256*4)
+ # tf.get_variable_scope().set_partitioner(partitioner)
+
+ def gpu_avg(l):
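+      """Average the per-GPU values in l (0.0 if they are all None)."""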
+ if l[0] is None:
+ for elem in l:
+ assert elem is None
+ return 0.0
+ if len(l) < 2:
+ return l[0]
+ return sum(l) / float(num_gpus)
+
+ self.length_tensor = tf.placeholder(tf.int32, name="length")
+
+ with tf.device("/cpu:0"):
+ emb_weights = tf.get_variable(
+ "embedding", [niclass, vec_size],
+ initializer=tf.random_uniform_initializer(-1.7, 1.7))
+ if beam_size > 0:
+ target_emb_weights = tf.get_variable(
+ "target_embedding", [noclass, nmaps],
+ initializer=tf.random_uniform_initializer(-1.7, 1.7))
+ e0 = tf.scatter_update(emb_weights,
+ tf.constant(0, dtype=tf.int32, shape=[1]),
+ tf.zeros([1, vec_size]))
+ output_w = tf.get_variable("output_w", [nmaps, noclass], tf.float32)
+
+ def conv_rate(layer):
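+      """Dilation rate for a layer: 2**layer if atrous, else 1."""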
+ if atrous:
+ return 2**layer
+ return 1
+
+ # pylint: disable=cell-var-from-loop
+ def enc_step(step):
+ """Encoder step."""
+ if autoenc_decay < 1.0:
+ quant_step = autoenc_quantize(step, 16, nmaps, self.do_training)
+ if backward:
+ exp_glob = tf.train.exponential_decay(1.0, self.global_step - 10000,
+ 1000, autoenc_decay)
+ dec_factor = 1.0 - exp_glob # * self.do_training
+ dec_factor = tf.cond(tf.less(self.global_step, 10500),
+ lambda: tf.constant(0.05), lambda: dec_factor)
+ else:
+ dec_factor = 1.0
+ cur = tf.cond(tf.less(tf.random_uniform([]), dec_factor),
+ lambda: quant_step, lambda: step)
+ else:
+ cur = step
+ if dropout > 0.0001:
+ cur = tf.nn.dropout(cur, keep_prob)
+ if act_noise > 0.00001:
+ cur += tf.truncated_normal(tf.shape(cur)) * act_noise_scale
+ # Do nconvs-many CGRU steps.
+ if do_jit and tf.get_variable_scope().reuse:
+ with jit_scope():
+ for layer in xrange(nconvs):
+ cur = conv_gru([], cur, kw, kh, nmaps, conv_rate(layer),
+ cutoff, "ecgru_%d" % layer, do_layer_norm)
+ else:
+ for layer in xrange(nconvs):
+ cur = conv_gru([], cur, kw, kh, nmaps, conv_rate(layer),
+ cutoff, "ecgru_%d" % layer, do_layer_norm)
+ return cur
+
+ zero_tgt = tf.zeros([batch_size, nmaps, 1])
+ zero_tgt.set_shape([None, nmaps, 1])
+
+ def dec_substep(step, decided):
+ """Decoder sub-step."""
+ cur = step
+ if dropout > 0.0001:
+ cur = tf.nn.dropout(cur, keep_prob)
+ if act_noise > 0.00001:
+ cur += tf.truncated_normal(tf.shape(cur)) * act_noise_scale
+ # Do nconvs-many CGRU steps.
+ if do_jit and tf.get_variable_scope().reuse:
+ with jit_scope():
+ for layer in xrange(nconvs):
+ cur = conv_gru([decided], cur, kw, kh, nmaps, conv_rate(layer),
+ cutoff, "dcgru_%d" % layer, do_layer_norm)
+ else:
+ for layer in xrange(nconvs):
+ cur = conv_gru([decided], cur, kw, kh, nmaps, conv_rate(layer),
+ cutoff, "dcgru_%d" % layer, do_layer_norm)
+ return cur
+ # pylint: enable=cell-var-from-loop
+
+ def dec_step(step, it, it_int, decided, output_ta, tgts,
+ mloss, nupd_in, out_idx, beam_cost):
+ """Decoder step."""
+ nupd, mem_loss = 0, 0.0
+ if mem_size > 0:
+ it_incr = tf.minimum(it+1, length - 1)
+ mem, mem_loss, nupd = memory_run(
+ step, nmaps, mem_size, batch_size, noclass, self.global_step,
+ self.do_training, self.update_mem, 10, num_gpus,
+ target_emb_weights, output_w, gpu_targets_tn, it_incr)
+ step = dec_substep(step, decided)
+ output_l = tf.expand_dims(tf.expand_dims(step[:, it, 0, :], 1), 1)
+ # Calculate argmax output.
+ output = tf.reshape(output_l, [-1, nmaps])
+ # pylint: disable=cell-var-from-loop
+ output = tf.matmul(output, output_w)
+ if beam_size > 1:
+ beam_cost, output, out, reordered = reorder_beam(
+ beam_size, batch_size, beam_cost, output, it_int == 0,
+ [output_l, out_idx, step, decided])
+ [output_l, out_idx, step, decided] = reordered
+ else:
+ # Scheduled sampling.
+ out = tf.multinomial(tf.stop_gradient(output), 1)
+ out = tf.to_int32(tf.squeeze(out, [1]))
+ out_write = output_ta.write(it, output_l[:batch_size, :, :, :])
+ output = tf.gather(target_emb_weights, out)
+ output = tf.reshape(output, [-1, 1, nmaps])
+ output = tf.concat(axis=1, values=[output] * height)
+ tgt = tgts[it, :, :, :]
+ selected = tf.cond(tf.less(tf.random_uniform([]), self.sampling),
+ lambda: output, lambda: tgt)
+ # pylint: enable=cell-var-from-loop
+ dec_write = place_at14(decided, tf.expand_dims(selected, 1), it)
+ out_idx = place_at13(
+ out_idx, tf.reshape(out, [beam_size * batch_size, 1, 1]), it)
+ if mem_size > 0:
+ mem = tf.concat(axis=2, values=[mem] * height)
+ dec_write = place_at14(dec_write, mem, it_incr)
+ return (step, dec_write, out_write, mloss + mem_loss, nupd_in + nupd,
+ out_idx, beam_cost)
+
+ # Main model construction.
+ gpu_outputs = []
+ gpu_losses = []
+ gpu_grad_norms = []
+ grads_list = []
+ gpu_out_idx = []
+ self.after_enc_step = []
+ for gpu in xrange(num_gpus): # Multi-GPU towers, average gradients later.
+ length = self.length_tensor
+ length_float = tf.cast(length, tf.float32)
+ if gpu > 0:
+ tf.get_variable_scope().reuse_variables()
+ gpu_outputs.append([])
+ gpu_losses.append([])
+ gpu_grad_norms.append([])
+ with tf.name_scope("gpu%d" % gpu), tf.device("/gpu:%d" % gpu):
+ # Main graph creation loop.
+ data.print_out("Creating model.")
+ start_time = time.time()
+
+ # Embed inputs and calculate mask.
+ with tf.device("/cpu:0"):
+ tgt_shape = tf.shape(tf.squeeze(gpu_target[gpu], [1]))
+ weights = tf.where(tf.squeeze(gpu_target[gpu], [1]) > 0,
+ tf.ones(tgt_shape), tf.zeros(tgt_shape))
+
+ # Embed inputs and targets.
+ with tf.control_dependencies([e0]):
+ start = tf.gather(emb_weights, gpu_input[gpu]) # b x h x l x nmaps
+ gpu_targets_tn = gpu_target[gpu] # b x 1 x len
+ if beam_size > 0:
+ embedded_targets_tn = tf.gather(target_emb_weights,
+ gpu_targets_tn)
+ embedded_targets_tn = tf.transpose(
+ embedded_targets_tn, [2, 0, 1, 3]) # len x b x 1 x nmaps
+ embedded_targets_tn = tf.concat(axis=2, values=[embedded_targets_tn] * height)
+
+ # First image comes from start by applying convolution and adding 0s.
+ start = tf.transpose(start, [0, 2, 1, 3]) # Now b x len x h x vec_s
+ first = conv_linear(start, 1, 1, vec_size, nmaps, 1, True, 0.0, "input")
+ first = layer_norm(first, nmaps, "input")
+
+ # Computation steps.
+ keep_prob = dropout * 3.0 / tf.sqrt(length_float)
+ keep_prob = 1.0 - self.do_training * keep_prob
+ act_noise_scale = act_noise * self.do_training
+
+ # Start with a convolutional gate merging previous step.
+ step = conv_gru([gpu_prev_step[gpu]], first,
+ kw, kh, nmaps, 1, cutoff, "first", do_layer_norm)
+
+ # This is just for running a baseline RNN seq2seq model.
+ if do_rnn:
+ self.after_enc_step.append(step) # Not meaningful here, but needed.
+ def lstm_cell():
+ return tf.contrib.rnn.BasicLSTMCell(height * nmaps)
+ cell = tf.contrib.rnn.MultiRNNCell(
+ [lstm_cell() for _ in range(nconvs)])
+ with tf.variable_scope("encoder"):
+ encoder_outputs, encoder_state = tf.nn.dynamic_rnn(
+ cell, tf.reshape(step, [batch_size, length, height * nmaps]),
+ dtype=tf.float32, time_major=False)
+
+ # Attention.
+ attn = tf.layers.dense(
+ encoder_outputs, height * nmaps, name="attn1")
+
+ # pylint: disable=cell-var-from-loop
+ @function.Defun(noinline=True)
+ def attention_query(query, attn_v):
+ vecs = tf.tanh(attn + tf.expand_dims(query, 1))
+ mask = tf.reduce_sum(vecs * tf.reshape(attn_v, [1, 1, -1]), 2)
+ mask = tf.nn.softmax(mask)
+ return tf.reduce_sum(encoder_outputs * tf.expand_dims(mask, 2), 1)
+
+ with tf.variable_scope("decoder"):
+ def decoder_loop_fn(state__prev_cell_out__unused, cell_inp__cur_tgt):
+ """Decoder loop function."""
+ state, prev_cell_out, _ = state__prev_cell_out__unused
+ cell_inp, cur_tgt = cell_inp__cur_tgt
+ attn_q = tf.layers.dense(prev_cell_out, height * nmaps,
+ name="attn_query")
+ attn_res = attention_query(attn_q, tf.get_variable(
+ "attn_v", [height * nmaps],
+ initializer=tf.random_uniform_initializer(-0.1, 0.1)))
+ concatenated = tf.reshape(tf.concat(axis=1, values=[cell_inp, attn_res]),
+ [batch_size, 2 * height * nmaps])
+ cell_inp = tf.layers.dense(
+ concatenated, height * nmaps, name="attn_merge")
+ output, new_state = cell(cell_inp, state)
+
+ mem_loss = 0.0
+ if mem_size > 0:
+ res, mask, mem_loss = memory_call(
+ output, cur_tgt, height * nmaps, mem_size, noclass,
+ num_gpus, self.update_mem)
+ res = tf.gather(target_emb_weights, res)
+ res *= tf.expand_dims(mask[:, 0], 1)
+ output = tf.layers.dense(
+ tf.concat(axis=1, values=[output, res]), height * nmaps, name="rnnmem")
+
+ return new_state, output, mem_loss
+ # pylint: enable=cell-var-from-loop
+ gpu_targets = tf.squeeze(gpu_target[gpu], [1]) # b x len
+ gpu_tgt_trans = tf.transpose(gpu_targets, [1, 0])
+ dec_zero = tf.zeros([batch_size, 1], dtype=tf.int32)
+ dec_inp = tf.concat(axis=1, values=[dec_zero, gpu_targets])
+ dec_inp = dec_inp[:, :length]
+ embedded_dec_inp = tf.gather(target_emb_weights, dec_inp)
+ embedded_dec_inp_proj = tf.layers.dense(
+ embedded_dec_inp, height * nmaps, name="dec_proj")
+ embedded_dec_inp_proj = tf.transpose(embedded_dec_inp_proj,
+ [1, 0, 2])
+ init_vals = (encoder_state,
+ tf.zeros([batch_size, height * nmaps]), 0.0)
+ _, dec_outputs, mem_losses = tf.scan(
+ decoder_loop_fn, (embedded_dec_inp_proj, gpu_tgt_trans),
+ initializer=init_vals)
+ mem_loss = tf.reduce_mean(mem_losses)
+ outputs = tf.layers.dense(dec_outputs, nmaps, name="out_proj")
+ # Final convolution to get logits, list outputs.
+ outputs = tf.matmul(tf.reshape(outputs, [-1, nmaps]), output_w)
+ outputs = tf.reshape(outputs, [length, batch_size, noclass])
+ gpu_out_idx.append(tf.argmax(outputs, 2))
+ else: # Here we go with the Neural GPU.
+ # Encoder.
+ enc_length = length
+ step = enc_step(step) # First step hard-coded.
+ # pylint: disable=cell-var-from-loop
+ i = tf.constant(1)
+ c = lambda i, _s: tf.less(i, enc_length)
+ def enc_step_lambda(i, step):
+ with tf.variable_scope(tf.get_variable_scope(), reuse=True):
+ new_step = enc_step(step)
+ return (i + 1, new_step)
+ _, step = tf.while_loop(
+ c, enc_step_lambda, [i, step],
+ parallel_iterations=1, swap_memory=True)
+ # pylint: enable=cell-var-from-loop
+
+ self.after_enc_step.append(step)
+
+ # Decoder.
+ if beam_size > 0:
+ output_ta = tf.TensorArray(
+ dtype=tf.float32, size=length, dynamic_size=False,
+ infer_shape=False, name="outputs")
+ out_idx = tf.zeros([beam_size * batch_size, length, 1],
+ dtype=tf.int32)
+ decided_t = tf.zeros([beam_size * batch_size, length,
+ height, vec_size])
+
+ # Prepare for beam search.
+ tgts = tf.concat(axis=1, values=[embedded_targets_tn] * beam_size)
+ beam_cost = tf.zeros([batch_size, beam_size])
+ step = tf.concat(axis=0, values=[step] * beam_size)
+ # First step hard-coded.
+ step, decided_t, output_ta, mem_loss, nupd, oi, bc = dec_step(
+ step, 0, 0, decided_t, output_ta, tgts, 0.0, 0, out_idx,
+ beam_cost)
+ tf.get_variable_scope().reuse_variables()
+ # pylint: disable=cell-var-from-loop
+ def step_lambda(i, step, dec_t, out_ta, ml, nu, oi, bc):
+ with tf.variable_scope(tf.get_variable_scope(), reuse=True):
+ s, d, t, nml, nu, oi, bc = dec_step(
+ step, i, 1, dec_t, out_ta, tgts, ml, nu, oi, bc)
+ return (i + 1, s, d, t, nml, nu, oi, bc)
+ i = tf.constant(1)
+ c = lambda i, _s, _d, _o, _ml, _nu, _oi, _bc: tf.less(i, length)
+ _, step, _, output_ta, mem_loss, nupd, out_idx, _ = tf.while_loop(
+ c, step_lambda,
+ [i, step, decided_t, output_ta, mem_loss, nupd, oi, bc],
+ parallel_iterations=1, swap_memory=True)
+ # pylint: enable=cell-var-from-loop
+ gpu_out_idx.append(tf.squeeze(out_idx, [2]))
+ outputs = output_ta.stack()
+ outputs = tf.squeeze(outputs, [2, 3]) # Now l x b x nmaps
+ else:
+ # If beam_size is 0 or less, we don't have a decoder.
+ mem_loss = 0.0
+ outputs = tf.transpose(step[:, :, 1, :], [1, 0, 2])
+ gpu_out_idx.append(tf.argmax(outputs, 2))
+
+ # Final convolution to get logits, list outputs.
+ outputs = tf.matmul(tf.reshape(outputs, [-1, nmaps]), output_w)
+ outputs = tf.reshape(outputs, [length, batch_size, noclass])
+ gpu_outputs[gpu] = tf.nn.softmax(outputs)
+
+ # Calculate cross-entropy loss and normalize it.
+ targets_soft = make_dense(tf.squeeze(gpu_target[gpu], [1]),
+ noclass, 0.1)
+ targets_soft = tf.reshape(targets_soft, [-1, noclass])
+ targets_hard = make_dense(tf.squeeze(gpu_target[gpu], [1]),
+ noclass, 0.0)
+ targets_hard = tf.reshape(targets_hard, [-1, noclass])
+ output = tf.transpose(outputs, [1, 0, 2])
+ xent_soft = tf.reshape(tf.nn.softmax_cross_entropy_with_logits(
+ logits=tf.reshape(output, [-1, noclass]), labels=targets_soft),
+ [batch_size, length])
+ xent_hard = tf.reshape(tf.nn.softmax_cross_entropy_with_logits(
+ logits=tf.reshape(output, [-1, noclass]), labels=targets_hard),
+ [batch_size, length])
+ low, high = 0.1 / float(noclass - 1), 0.9
+ const = high * tf.log(high) + float(noclass - 1) * low * tf.log(low)
+ weight_sum = tf.reduce_sum(weights) + 1e-20
+ true_perp = tf.reduce_sum(xent_hard * weights) / weight_sum
+ soft_loss = tf.reduce_sum(xent_soft * weights) / weight_sum
+ perp_loss = soft_loss + const
+ # Final loss: cross-entropy + shared parameter relaxation part + extra.
+ mem_loss = 0.5 * tf.reduce_mean(mem_loss) / length_float
+ total_loss = perp_loss + mem_loss
+ gpu_losses[gpu].append(true_perp)
+
+ # Gradients.
+ if backward:
+ data.print_out("Creating backward pass for the model.")
+ grads = tf.gradients(
+ total_loss, tf.trainable_variables(),
+ colocate_gradients_with_ops=True)
+ for g_i, g in enumerate(grads):
+ if isinstance(g, tf.IndexedSlices):
+ grads[g_i] = tf.convert_to_tensor(g)
+ grads, norm = tf.clip_by_global_norm(grads, max_grad_norm)
+ gpu_grad_norms[gpu].append(norm)
+ for g in grads:
+ if grad_noise_scale > 0.001:
+ g += tf.truncated_normal(tf.shape(g)) * self.noise_param
+ grads_list.append(grads)
+ else:
+ gpu_grad_norms[gpu].append(0.0)
+ data.print_out("Created model for gpu %d in %.2f s."
+ % (gpu, time.time() - start_time))
+
+ self.updates = []
+ self.after_enc_step = tf.concat(axis=0, values=self.after_enc_step) # Concat GPUs.
+ if backward:
+ tf.get_variable_scope()._reuse = False
+ tf.get_variable_scope().set_caching_device(None)
+ grads = [gpu_avg([grads_list[g][i] for g in xrange(num_gpus)])
+ for i in xrange(len(grads_list[0]))]
+ update = adam_update(grads)
+ self.updates.append(update)
+ else:
+ self.updates.append(tf.no_op())
+
+ self.losses = [gpu_avg([gpu_losses[g][i] for g in xrange(num_gpus)])
+ for i in xrange(len(gpu_losses[0]))]
+ self.out_idx = tf.concat(axis=0, values=gpu_out_idx)
+ self.grad_norms = [gpu_avg([gpu_grad_norms[g][i] for g in xrange(num_gpus)])
+ for i in xrange(len(gpu_grad_norms[0]))]
+ self.outputs = [tf.concat(axis=1, values=[gpu_outputs[g] for g in xrange(num_gpus)])]
+ self.quantize_op = quantize_weights_op(512, 8)
+ if backward:
+ self.saver = tf.train.Saver(tf.global_variables(), max_to_keep=10)
+
+ def step(self, sess, inp, target, do_backward_in, noise_param=None,
+ beam_size=2, eos_id=2, eos_cost=0.0, update_mem=None, state=None):
+ """Run a step of the network."""
+ batch_size, height, length = inp.shape[0], inp.shape[1], inp.shape[2]
+ do_backward = do_backward_in
+ train_mode = True
+ if do_backward_in is None:
+ do_backward = False
+ train_mode = False
+ if update_mem is None:
+ update_mem = do_backward
+ feed_in = {}
+ # print " feeding sequences of length %d" % length
+ if state is None:
+ state = np.zeros([batch_size, length, height, self.nmaps])
+ feed_in[self.prev_step.name] = state
+ feed_in[self.length_tensor.name] = length
+ feed_in[self.noise_param.name] = noise_param if noise_param else 0.0
+ feed_in[self.do_training.name] = 1.0 if do_backward else 0.0
+ feed_in[self.update_mem.name] = 1 if update_mem else 0
+ if do_backward_in is False:
+ feed_in[self.sampling.name] = 0.0
+ index = 0 # We're dynamic now.
+ feed_out = []
+ if do_backward:
+ feed_out.append(self.updates[index])
+ feed_out.append(self.grad_norms[index])
+ if train_mode:
+ feed_out.append(self.losses[index])
+ feed_in[self.input.name] = inp
+ feed_in[self.target.name] = target
+ feed_out.append(self.outputs[index])
+ if train_mode:
+ # Make a full-sequence training step with one call to session.run.
+ res = sess.run([self.after_enc_step] + feed_out, feed_in)
+ after_enc_state, res = res[0], res[1:]
+ else:
+ # Make a full-sequence decoding step with one call to session.run.
+ feed_in[self.sampling.name] = 1.1 # Sample every time.
+ res = sess.run([self.after_enc_step, self.out_idx] + feed_out, feed_in)
+ after_enc_state, out_idx = res[0], res[1]
+ res = [res[2][l] for l in xrange(length)]
+ outputs = [out_idx[:, i] for i in xrange(length)]
+ cost = [0.0 for _ in xrange(beam_size * batch_size)]
+ seen_eos = [0 for _ in xrange(beam_size * batch_size)]
+ for idx, logit in enumerate(res):
+ best = outputs[idx]
+ for b in xrange(batch_size):
+ if seen_eos[b] > 1:
+ cost[b] -= eos_cost
+ else:
+ cost[b] += np.log(logit[b][best[b]])
+ if best[b] in [eos_id]:
+ seen_eos[b] += 1
+ res = [[-c for c in cost]] + outputs
+ # Collect and output results.
+ offset = 0
+ norm = None
+ if do_backward:
+ offset = 2
+ norm = res[1]
+ if train_mode:
+ outputs = res[offset + 1]
+ outputs = [outputs[l] for l in xrange(length)]
+ return res[offset], outputs, norm, after_enc_state
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_gpu/neural_gpu_trainer.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_gpu/neural_gpu_trainer.py"
new file mode 100644
index 00000000..1f704b0d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_gpu/neural_gpu_trainer.py"
@@ -0,0 +1,1027 @@
+# Copyright 2015 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Neural GPU."""
+
+from __future__ import print_function
+
+import math
+import os
+import random
+import sys
+import threading
+import time
+
+import numpy as np
+from six.moves import xrange
+import tensorflow as tf
+
+import program_utils
+import data_utils as data
+import neural_gpu as ngpu
+import wmt_utils as wmt
+
+tf.app.flags.DEFINE_float("lr", 0.1, "Learning rate.")
+tf.app.flags.DEFINE_float("init_weight", 0.8, "Initial weights deviation.")
+tf.app.flags.DEFINE_float("max_grad_norm", 4.0, "Clip gradients to this norm.")
+tf.app.flags.DEFINE_float("cutoff", 1.2, "Cutoff at the gates.")
+tf.app.flags.DEFINE_float("curriculum_ppx", 9.9, "Move curriculum if ppl < X.")
+tf.app.flags.DEFINE_float("curriculum_seq", 0.3, "Move curriculum if seq < X.")
+tf.app.flags.DEFINE_float("dropout", 0.1, "Dropout that much.")
+tf.app.flags.DEFINE_float("grad_noise_scale", 0.0, "Gradient noise scale.")
+tf.app.flags.DEFINE_float("max_sampling_rate", 0.1, "Maximal sampling rate.")
+tf.app.flags.DEFINE_float("length_norm", 0.0, "Length normalization.")
+tf.app.flags.DEFINE_float("train_beam_freq", 0.0, "Beam-based training.")
+tf.app.flags.DEFINE_float("train_beam_anneal", 20000, "How many steps anneal.")
+tf.app.flags.DEFINE_integer("eval_beam_steps", 4, "How many beam steps eval.")
+tf.app.flags.DEFINE_integer("batch_size", 32, "Batch size.")
+tf.app.flags.DEFINE_integer("steps_per_checkpoint", 100, "Steps per epoch.")
+tf.app.flags.DEFINE_integer("nmaps", 64, "Number of floats in each cell.")
+tf.app.flags.DEFINE_integer("vec_size", 64, "Size of word vectors.")
+tf.app.flags.DEFINE_integer("train_data_size", 1000, "Training examples/len.")
+tf.app.flags.DEFINE_integer("max_length", 40, "Maximum length.")
+tf.app.flags.DEFINE_integer("random_seed", 125459, "Random seed.")
+tf.app.flags.DEFINE_integer("nconvs", 2, "How many convolutions / 1 step.")
+tf.app.flags.DEFINE_integer("kw", 3, "Kernel width.")
+tf.app.flags.DEFINE_integer("kh", 3, "Kernel height.")
+tf.app.flags.DEFINE_integer("height", 4, "Height.")
+tf.app.flags.DEFINE_integer("mem_size", -1, "Memory size (sqrt)")
+tf.app.flags.DEFINE_integer("soft_mem_size", 1024, "Softmax memory this size.")
+tf.app.flags.DEFINE_integer("num_gpus", 1, "Number of GPUs to use.")
+tf.app.flags.DEFINE_integer("num_replicas", 1, "Number of replicas in use.")
+tf.app.flags.DEFINE_integer("beam_size", 1, "Beam size during decoding. "
+ "If 0, no decoder, the non-extended Neural GPU.")
+tf.app.flags.DEFINE_integer("max_target_vocab", 0,
+ "Maximal size of target vocabulary.")
+tf.app.flags.DEFINE_integer("decode_offset", 0, "Offset for decoding.")
+tf.app.flags.DEFINE_integer("task", -1, "Task id when running on borg.")
+tf.app.flags.DEFINE_integer("nprint", 0, "How many test examples to print out.")
+tf.app.flags.DEFINE_integer("eval_bin_print", 3, "How many bins step in eval.")
+tf.app.flags.DEFINE_integer("mode", 0, "Mode: 0-train other-decode.")
+tf.app.flags.DEFINE_bool("atrous", False, "Whether to use atrous convs.")
+tf.app.flags.DEFINE_bool("layer_norm", False, "Do layer normalization.")
+tf.app.flags.DEFINE_bool("quantize", False, "Whether to quantize variables.")
+tf.app.flags.DEFINE_bool("do_train", True, "If false, only update memory.")
+tf.app.flags.DEFINE_bool("rnn_baseline", False, "If true build an RNN instead.")
+tf.app.flags.DEFINE_bool("simple_tokenizer", False,
+ "If true, tokenize on spaces only, digits are 0.")
+tf.app.flags.DEFINE_bool("normalize_digits", True,
+ "Whether to normalize digits with simple tokenizer.")
+tf.app.flags.DEFINE_integer("vocab_size", 16, "Joint vocabulary size.")
+tf.app.flags.DEFINE_string("data_dir", "/tmp", "Data directory")
+tf.app.flags.DEFINE_string("train_dir", "/tmp/", "Directory to store models.")
+tf.app.flags.DEFINE_string("test_file_prefix", "", "Files to test (.en,.fr).")
+tf.app.flags.DEFINE_integer("max_train_data_size", 0,
+ "Limit on the size of training data (0: no limit).")
+tf.app.flags.DEFINE_string("word_vector_file_en", "",
+ "Optional file with word vectors to start training.")
+tf.app.flags.DEFINE_string("word_vector_file_fr", "",
+ "Optional file with word vectors to start training.")
+tf.app.flags.DEFINE_string("problem", "wmt", "What problem are we solving?.")
+
+tf.app.flags.DEFINE_integer("ps_tasks", 0, "Number of ps tasks used.")
+tf.app.flags.DEFINE_string("master", "", "Name of the TensorFlow master.")
+
+FLAGS = tf.app.flags.FLAGS
+EXTRA_EVAL = 10
+EVAL_LEN_INCR = 8
+MAXLEN_F = 2.0
+
+
+def zero_split(tok_list, append=None):
+ """Split tok_list (list of ints) on 0s, append int to all parts if given."""
+ res, cur, l = [], [], 0
+ for tok in tok_list:
+ if tok == 0:
+ if append is not None:
+ cur.append(append)
+ res.append(cur)
+ l = max(l, len(cur))
+ cur = []
+ else:
+ cur.append(tok)
+ if append is not None:
+ cur.append(append)
+ res.append(cur)
+ l = max(l, len(cur))
+ return res, l
+
+
+def read_data(source_path, target_path, buckets, max_size=None, print_out=True):
+ """Read data from source and target files and put into buckets.
+
+ Args:
+ source_path: path to the files with token-ids for the source language.
+ target_path: path to the file with token-ids for the target language;
+ it must be aligned with the source file: n-th line contains the desired
+ output for n-th line from the source_path.
+ buckets: the buckets to use.
+ max_size: maximum number of lines to read, all other will be ignored;
+ if 0 or None, data files will be read completely (no limit).
+ If set to 1, no data will be returned (empty lists of the right form).
+ print_out: whether to print out status or not.
+
+ Returns:
+ data_set: a list of length len(_buckets); data_set[n] contains a list of
+ (source, target) pairs read from the provided data files that fit
+ into the n-th bucket, i.e., such that len(source) < _buckets[n][0] and
+ len(target) < _buckets[n][1]; source and target are lists of token-ids.
+ """
+ data_set = [[] for _ in buckets]
+ counter = 0
+ if max_size != 1:
+ with tf.gfile.GFile(source_path, mode="r") as source_file:
+ with tf.gfile.GFile(target_path, mode="r") as target_file:
+ source, target = source_file.readline(), target_file.readline()
+ while source and target and (not max_size or counter < max_size):
+ counter += 1
+ if counter % 100000 == 0 and print_out:
+ print(" reading data line %d" % counter)
+ sys.stdout.flush()
+ source_ids = [int(x) for x in source.split()]
+ target_ids = [int(x) for x in target.split()]
+ source_ids, source_len = zero_split(source_ids)
+ target_ids, target_len = zero_split(target_ids, append=wmt.EOS_ID)
+ for bucket_id, size in enumerate(buckets):
+ if source_len <= size and target_len <= size:
+ data_set[bucket_id].append([source_ids, target_ids])
+ break
+ source, target = source_file.readline(), target_file.readline()
+ return data_set
+
+
+global_train_set = {"wmt": []}
+train_buckets_scale = {"wmt": []}
+
+
+def calculate_buckets_scale(data_set, buckets, problem):
+ """Calculate buckets scales for the given data set."""
+ train_bucket_sizes = [len(data_set[b]) for b in xrange(len(buckets))]
+ train_total_size = max(1, float(sum(train_bucket_sizes)))
+
+ # A bucket scale is a list of increasing numbers from 0 to 1 that we'll use
+  # to select a bucket. The length of [scale[i], scale[i+1]] is proportional
+  # to the size of the i-th training bucket, as used later.
+ if problem not in train_buckets_scale:
+ train_buckets_scale[problem] = []
+ train_buckets_scale[problem].append(
+ [sum(train_bucket_sizes[:i + 1]) / train_total_size
+ for i in xrange(len(train_bucket_sizes))])
+ return train_total_size
+
+
+def read_data_into_global(source_path, target_path, buckets,
+ max_size=None, print_out=True):
+ """Read data into the global variables (can be in a separate thread)."""
+ # pylint: disable=global-variable-not-assigned
+ global global_train_set, train_buckets_scale
+ # pylint: enable=global-variable-not-assigned
+ data_set = read_data(source_path, target_path, buckets, max_size, print_out)
+ global_train_set["wmt"].append(data_set)
+ train_total_size = calculate_buckets_scale(data_set, buckets, "wmt")
+ if print_out:
+ print(" Finished global data reading (%d)." % train_total_size)
+
+
+def initialize(sess=None):
+ """Initialize data and model."""
+ global MAXLEN_F
+ # Create training directory if it does not exist.
+ if not tf.gfile.IsDirectory(FLAGS.train_dir):
+ data.print_out("Creating training directory %s." % FLAGS.train_dir)
+ tf.gfile.MkDir(FLAGS.train_dir)
+ decode_suffix = "beam%dln%d" % (FLAGS.beam_size,
+ int(100 * FLAGS.length_norm))
+ if FLAGS.mode == 0:
+ decode_suffix = ""
+ if FLAGS.task >= 0:
+ data.log_filename = os.path.join(FLAGS.train_dir,
+ "log%d%s" % (FLAGS.task, decode_suffix))
+ else:
+ data.log_filename = os.path.join(FLAGS.train_dir, "neural_gpu/log")
+
+ # Set random seed.
+ if FLAGS.random_seed > 0:
+ seed = FLAGS.random_seed + max(0, FLAGS.task)
+ tf.set_random_seed(seed)
+ random.seed(seed)
+ np.random.seed(seed)
+
+ # Check data sizes.
+ assert data.bins
+ max_length = min(FLAGS.max_length, data.bins[-1])
+ while len(data.bins) > 1 and data.bins[-2] >= max_length + EXTRA_EVAL:
+ data.bins = data.bins[:-1]
+ if sess is None and FLAGS.task == 0 and FLAGS.num_replicas > 1:
+ if max_length > 60:
+      max_length = max_length // 2  # Save memory on chief.
+ min_length = min(14, max_length - 3) if FLAGS.problem == "wmt" else 3
+ for p in FLAGS.problem.split("-"):
+ if p in ["progeval", "progsynth"]:
+ min_length = max(26, min_length)
+ assert max_length + 1 > min_length
+ while len(data.bins) > 1 and data.bins[-2] >= max_length + EXTRA_EVAL:
+ data.bins = data.bins[:-1]
+
+ # Create checkpoint directory if it does not exist.
+ if FLAGS.mode == 0 or FLAGS.task < 0:
+ checkpoint_dir = os.path.join(FLAGS.train_dir, "neural_gpu%s"
+ % ("" if FLAGS.task < 0 else str(FLAGS.task)))
+ else:
+ checkpoint_dir = FLAGS.train_dir
+ if not tf.gfile.IsDirectory(checkpoint_dir):
+ data.print_out("Creating checkpoint directory %s." % checkpoint_dir)
+ tf.gfile.MkDir(checkpoint_dir)
+
+ # Prepare data.
+ if FLAGS.problem == "wmt":
+ # Prepare WMT data.
+ data.print_out("Preparing WMT data in %s" % FLAGS.data_dir)
+ if FLAGS.simple_tokenizer:
+ MAXLEN_F = 3.5
+ (en_train, fr_train, en_dev, fr_dev,
+ en_path, fr_path) = wmt.prepare_wmt_data(
+ FLAGS.data_dir, FLAGS.vocab_size,
+ tokenizer=wmt.space_tokenizer,
+ normalize_digits=FLAGS.normalize_digits)
+ else:
+ (en_train, fr_train, en_dev, fr_dev,
+ en_path, fr_path) = wmt.prepare_wmt_data(
+ FLAGS.data_dir, FLAGS.vocab_size)
+
+ # Read data into buckets and compute their sizes.
+ fr_vocab, rev_fr_vocab = wmt.initialize_vocabulary(fr_path)
+ data.vocab = fr_vocab
+ data.rev_vocab = rev_fr_vocab
+ data.print_out("Reading development and training data (limit: %d)."
+ % FLAGS.max_train_data_size)
+ dev_set = {}
+ dev_set["wmt"] = read_data(en_dev, fr_dev, data.bins)
+ def data_read(size, print_out):
+ read_data_into_global(en_train, fr_train, data.bins, size, print_out)
+ data_read(50000, False)
+ read_thread_small = threading.Thread(
+ name="reading-data-small", target=lambda: data_read(900000, False))
+ read_thread_small.start()
+ read_thread_full = threading.Thread(
+ name="reading-data-full",
+ target=lambda: data_read(FLAGS.max_train_data_size, True))
+ read_thread_full.start()
+ data.print_out("Data reading set up.")
+ else:
+ # Prepare algorithmic data.
+ en_path, fr_path = None, None
+ tasks = FLAGS.problem.split("-")
+ data_size = FLAGS.train_data_size
+ for t in tasks:
+ data.print_out("Generating data for %s." % t)
+ if t in ["progeval", "progsynth"]:
+ data.init_data(t, data.bins[-1], 20 * data_size, FLAGS.vocab_size)
+ if len(program_utils.prog_vocab) > FLAGS.vocab_size - 2:
+ raise ValueError("Increase vocab_size to %d for prog-tasks."
+ % (len(program_utils.prog_vocab) + 2))
+ data.rev_vocab = program_utils.prog_vocab
+ data.vocab = program_utils.prog_rev_vocab
+ else:
+ for l in xrange(max_length + EXTRA_EVAL - 1):
+ data.init_data(t, l, data_size, FLAGS.vocab_size)
+ data.init_data(t, data.bins[-2], data_size, FLAGS.vocab_size)
+ data.init_data(t, data.bins[-1], data_size, FLAGS.vocab_size)
+ if t not in global_train_set:
+ global_train_set[t] = []
+ global_train_set[t].append(data.train_set[t])
+ calculate_buckets_scale(data.train_set[t], data.bins, t)
+ dev_set = data.test_set
+
+ # Grid-search parameters.
+ lr = FLAGS.lr
+ init_weight = FLAGS.init_weight
+ max_grad_norm = FLAGS.max_grad_norm
+ if sess is not None and FLAGS.task > -1:
+ def job_id_factor(step):
+ """If jobid / step mod 3 is 0, 1, 2: say 0, 1, -1."""
+ return ((((FLAGS.task / step) % 3) + 1) % 3) - 1
+ lr *= math.pow(2, job_id_factor(1))
+ init_weight *= math.pow(1.5, job_id_factor(3))
+ max_grad_norm *= math.pow(2, job_id_factor(9))
+
+ # Print out parameters.
+ curriculum = FLAGS.curriculum_seq
+ msg1 = ("layers %d kw %d h %d kh %d batch %d noise %.2f"
+ % (FLAGS.nconvs, FLAGS.kw, FLAGS.height, FLAGS.kh,
+ FLAGS.batch_size, FLAGS.grad_noise_scale))
+ msg2 = ("cut %.2f lr %.3f iw %.2f cr %.2f nm %d d%.4f gn %.2f %s"
+ % (FLAGS.cutoff, lr, init_weight, curriculum, FLAGS.nmaps,
+ FLAGS.dropout, max_grad_norm, msg1))
+ data.print_out(msg2)
+
+ # Create model and initialize it.
+ tf.get_variable_scope().set_initializer(
+ tf.orthogonal_initializer(gain=1.8 * init_weight))
+ max_sampling_rate = FLAGS.max_sampling_rate if FLAGS.mode == 0 else 0.0
+ o = FLAGS.vocab_size if FLAGS.max_target_vocab < 1 else FLAGS.max_target_vocab
+ ngpu.CHOOSE_K = FLAGS.soft_mem_size
+ do_beam_model = FLAGS.train_beam_freq > 0.0001 and FLAGS.beam_size > 1
+ beam_size = FLAGS.beam_size if FLAGS.mode > 0 and not do_beam_model else 1
+ beam_size = min(beam_size, FLAGS.beam_size)
+ beam_model = None
+ def make_ngpu(cur_beam_size, back):
+ return ngpu.NeuralGPU(
+ FLAGS.nmaps, FLAGS.vec_size, FLAGS.vocab_size, o,
+ FLAGS.dropout, max_grad_norm, FLAGS.cutoff, FLAGS.nconvs,
+ FLAGS.kw, FLAGS.kh, FLAGS.height, FLAGS.mem_size,
+ lr / math.sqrt(FLAGS.num_replicas), min_length + 3, FLAGS.num_gpus,
+ FLAGS.num_replicas, FLAGS.grad_noise_scale, max_sampling_rate,
+ atrous=FLAGS.atrous, do_rnn=FLAGS.rnn_baseline,
+ do_layer_norm=FLAGS.layer_norm, beam_size=cur_beam_size, backward=back)
+ if sess is None:
+ with tf.device(tf.train.replica_device_setter(FLAGS.ps_tasks)):
+ model = make_ngpu(beam_size, True)
+ if do_beam_model:
+ tf.get_variable_scope().reuse_variables()
+ beam_model = make_ngpu(FLAGS.beam_size, False)
+ else:
+ model = make_ngpu(beam_size, True)
+ if do_beam_model:
+ tf.get_variable_scope().reuse_variables()
+ beam_model = make_ngpu(FLAGS.beam_size, False)
+
+ sv = None
+ if sess is None:
+    # The supervisor configuration has a few overridden options.
+ sv = tf.train.Supervisor(logdir=checkpoint_dir,
+ is_chief=(FLAGS.task < 1),
+ saver=model.saver,
+ summary_op=None,
+ save_summaries_secs=60,
+ save_model_secs=15 * 60,
+ global_step=model.global_step)
+
+ config = tf.ConfigProto(allow_soft_placement=True)
+ sess = sv.PrepareSession(FLAGS.master, config=config)
+
+ data.print_out("Created model. Checkpoint dir %s" % checkpoint_dir)
+
+ # Load model from parameters if a checkpoint exists.
+ ckpt = tf.train.get_checkpoint_state(checkpoint_dir)
+ if ckpt and tf.gfile.Exists(ckpt.model_checkpoint_path + ".index"):
+ data.print_out("Reading model parameters from %s"
+ % ckpt.model_checkpoint_path)
+ model.saver.restore(sess, ckpt.model_checkpoint_path)
+ elif sv is None:
+ sess.run(tf.global_variables_initializer())
+ data.print_out("Initialized variables (no supervisor mode).")
+ elif FLAGS.task < 1 and FLAGS.mem_size > 0:
+ # sess.run(model.mem_norm_op)
+ data.print_out("Created new model and normalized mem (on chief).")
+
+ # Return the model and needed variables.
+ return (model, beam_model, min_length, max_length, checkpoint_dir,
+ (global_train_set, dev_set, en_path, fr_path), sv, sess)
+
+
+def m_step(model, beam_model, sess, batch_size, inp, target, bucket, nsteps, p):
+ """Evaluation multi-step for program synthesis."""
+ state, scores, hist = None, [[-11.0 for _ in xrange(batch_size)]], []
+ for _ in xrange(nsteps):
+ # Get the best beam (no training, just forward model).
+ new_target, new_first, new_inp, new_scores = get_best_beam(
+ beam_model, sess, inp, target,
+ batch_size, FLAGS.beam_size, bucket, hist, p, test_mode=True)
+ hist.append(new_first)
+ _, _, _, state = model.step(sess, inp, new_target, False, state=state)
+ inp = new_inp
+ scores.append([max(scores[-1][i], new_scores[i])
+ for i in xrange(batch_size)])
+ # The final step with the true target.
+ loss, res, _, _ = model.step(sess, inp, target, False, state=state)
+ return loss, res, new_target, scores[1:]
+
+
+def single_test(bin_id, model, sess, nprint, batch_size, dev, p, print_out=True,
+ offset=None, beam_model=None):
+ """Test model on test data of length l using the given session."""
+ if not dev[p][bin_id]:
+ data.print_out(" bin %d (%d)\t%s\tppl NA errors NA seq-errors NA"
+ % (bin_id, data.bins[bin_id], p))
+ return 1.0, 1.0, 0.0
+ inpt, target = data.get_batch(
+ bin_id, batch_size, dev[p], FLAGS.height, offset)
+ if FLAGS.beam_size > 1 and beam_model:
+ loss, res, new_tgt, scores = m_step(
+ model, beam_model, sess, batch_size, inpt, target, bin_id,
+ FLAGS.eval_beam_steps, p)
+ score_avgs = [sum(s) / float(len(s)) for s in scores]
+ score_maxs = [max(s) for s in scores]
+ score_str = ["(%.2f, %.2f)" % (score_avgs[i], score_maxs[i])
+ for i in xrange(FLAGS.eval_beam_steps)]
+ data.print_out(" == scores (avg, max): %s" % "; ".join(score_str))
+ errors, total, seq_err = data.accuracy(inpt, res, target, batch_size,
+ nprint, new_tgt, scores[-1])
+ else:
+ loss, res, _, _ = model.step(sess, inpt, target, False)
+ errors, total, seq_err = data.accuracy(inpt, res, target, batch_size,
+ nprint)
+ seq_err = float(seq_err) / batch_size
+ if total > 0:
+ errors = float(errors) / total
+ if print_out:
+ data.print_out(" bin %d (%d)\t%s\tppl %.2f errors %.2f seq-errors %.2f"
+ % (bin_id, data.bins[bin_id], p, data.safe_exp(loss),
+ 100 * errors, 100 * seq_err))
+ return (errors, seq_err, loss)
+
+
+def assign_vectors(word_vector_file, embedding_key, vocab_path, sess):
+ """Assign the embedding_key variable from the given word vectors file."""
+ # For words in the word vector file, set their embedding at start.
+ if not tf.gfile.Exists(word_vector_file):
+ data.print_out("Word vector file does not exist: %s" % word_vector_file)
+ sys.exit(1)
+ vocab, _ = wmt.initialize_vocabulary(vocab_path)
+ vectors_variable = [v for v in tf.trainable_variables()
+ if embedding_key == v.name]
+ if len(vectors_variable) != 1:
+ data.print_out("Word vector variable not found or too many.")
+ sys.exit(1)
+ vectors_variable = vectors_variable[0]
+ vectors = vectors_variable.eval()
+ data.print_out("Pre-setting word vectors from %s" % word_vector_file)
+ with tf.gfile.GFile(word_vector_file, mode="r") as f:
+ # Lines have format: dog 0.045123 -0.61323 0.413667 ...
+ for line in f:
+ line_parts = line.split()
+ # The first part is the word.
+ word = line_parts[0]
+ if word in vocab:
+ # Remaining parts are components of the vector.
+        word_vector = np.array([float(x) for x in line_parts[1:]])
+ if len(word_vector) != FLAGS.vec_size:
+ data.print_out("Warn: Word '%s', Expecting vector size %d, "
+ "found %d" % (word, FLAGS.vec_size,
+ len(word_vector)))
+ else:
+ vectors[vocab[word]] = word_vector
+ # Assign the modified vectors to the vectors_variable in the graph.
+ sess.run([vectors_variable.initializer],
+ {vectors_variable.initializer.inputs[1]: vectors})
+
+
+def print_vectors(embedding_key, vocab_path, word_vector_file):
+ """Print vectors from the given variable."""
+ _, rev_vocab = wmt.initialize_vocabulary(vocab_path)
+ vectors_variable = [v for v in tf.trainable_variables()
+ if embedding_key == v.name]
+ if len(vectors_variable) != 1:
+ data.print_out("Word vector variable not found or too many.")
+ sys.exit(1)
+ vectors_variable = vectors_variable[0]
+ vectors = vectors_variable.eval()
+ l, s = vectors.shape[0], vectors.shape[1]
+ data.print_out("Printing %d word vectors from %s to %s."
+ % (l, embedding_key, word_vector_file))
+ with tf.gfile.GFile(word_vector_file, mode="w") as f:
+ # Lines have format: dog 0.045123 -0.61323 0.413667 ...
+ for i in xrange(l):
+ f.write(rev_vocab[i])
+ for j in xrange(s):
+ f.write(" %.8f" % vectors[i][j])
+ f.write("\n")
+
+
+def get_bucket_id(train_buckets_scale_c, max_cur_length, data_set):
+ """Get a random bucket id."""
+ # Choose a bucket according to data distribution. Pick a random number
+ # in [0, 1] and use the corresponding interval in train_buckets_scale.
+ random_number_01 = np.random.random_sample()
+ bucket_id = min([i for i in xrange(len(train_buckets_scale_c))
+ if train_buckets_scale_c[i] > random_number_01])
+ while bucket_id > 0 and not data_set[bucket_id]:
+ bucket_id -= 1
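+  # If the chosen bucket exceeds the current curriculum length, re-draw it
+  # (10 times with probability 0.9, otherwise once), keeping the smallest
+  # random draw seen so far; this biases selection towards buckets that fit
+  # max_cur_length without forbidding longer ones.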
+ for _ in xrange(10 if np.random.random_sample() < 0.9 else 1):
+ if data.bins[bucket_id] > max_cur_length:
+ random_number_01 = min(random_number_01, np.random.random_sample())
+ bucket_id = min([i for i in xrange(len(train_buckets_scale_c))
+ if train_buckets_scale_c[i] > random_number_01])
+ while bucket_id > 0 and not data_set[bucket_id]:
+ bucket_id -= 1
+ return bucket_id
+
+
+def score_beams(beams, target, inp, history, p,
+ print_out=False, test_mode=False):
+ """Score beams."""
+ if p == "progsynth":
+ return score_beams_prog(beams, target, inp, history, print_out, test_mode)
+ elif test_mode:
+ return beams[0], 10.0 if str(beams[0][:len(target)]) == str(target) else 0.0
+ else:
+ history_s = [str(h) for h in history]
+ best, best_score, tgt, eos_id = None, -1000.0, target, None
+ if p == "wmt":
+ eos_id = wmt.EOS_ID
+ if eos_id and eos_id in target:
+ tgt = target[:target.index(eos_id)]
+ for beam in beams:
+ if eos_id and eos_id in beam:
+ beam = beam[:beam.index(eos_id)]
+ l = min(len(tgt), len(beam))
+ score = len([i for i in xrange(l) if tgt[i] == beam[i]]) / float(len(tgt))
+ hist_score = 20.0 if str([b for b in beam if b > 0]) in history_s else 0.0
+ if score < 1.0:
+ score -= hist_score
+ if score > best_score:
+ best = beam
+ best_score = score
+ return best, best_score
+
+
+def score_beams_prog(beams, target, inp, history, print_out=False,
+ test_mode=False):
+ """Score beams for program synthesis."""
+ tgt_prog = linearize(target, program_utils.prog_vocab, True, 1)
+ hist_progs = [linearize(h, program_utils.prog_vocab, True, 1)
+ for h in history]
+ tgt_set = set(target)
+ if print_out:
+ print("target: ", tgt_prog)
+ inps, tgt_outs = [], []
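+  # Rows 1..3 of inp each encode an example as "[ inputs ] outputs"; recover
+  # the integer inputs and their expected outputs (as strings) for scoring.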
+ for i in xrange(3):
+ ilist = [inp[i + 1, l] for l in xrange(inp.shape[1])]
+ clist = [program_utils.prog_vocab[x] for x in ilist if x > 0]
+ olist = clist[clist.index("]") + 1:] # outputs
+ clist = clist[1:clist.index("]")] # inputs
+ inps.append([int(x) for x in clist])
+ if olist[0] == "[": # olist may be [int] or just int
+ tgt_outs.append(str([int(x) for x in olist[1:-1]]))
+ else:
+ if len(olist) == 1:
+ tgt_outs.append(olist[0])
+ else:
+ print([program_utils.prog_vocab[x] for x in ilist if x > 0])
+ print(olist)
+ print(tgt_prog)
+ print(program_utils.evaluate(tgt_prog, {"a": inps[-1]}))
+ print("AAAAA")
+ tgt_outs.append(olist[0])
+ if not test_mode:
+ for _ in xrange(7):
+ ilen = np.random.randint(len(target) - 3) + 1
+ inps.append([random.choice(range(-15, 15)) for _ in range(ilen)])
+ tgt_outs.extend([program_utils.evaluate(tgt_prog, {"a": inp})
+ for inp in inps[3:]])
+ best, best_prog, best_score = None, "", -1000.0
+ for beam in beams:
+ b_prog = linearize(beam, program_utils.prog_vocab, True, 1)
+ b_set = set(beam)
+ jsim = len(tgt_set & b_set) / float(len(tgt_set | b_set))
+ b_outs = [program_utils.evaluate(b_prog, {"a": inp}) for inp in inps]
+ errs = len([x for x in b_outs if x == "ERROR"])
+ imatches = len([i for i in xrange(3) if b_outs[i] == tgt_outs[i]])
+ perfect = 10.0 if imatches == 3 else 0.0
+ hist_score = 20.0 if b_prog in hist_progs else 0.0
+ if test_mode:
+ score = perfect - errs
+ else:
+ matches = len([i for i in xrange(10) if b_outs[i] == tgt_outs[i]])
+ score = perfect + matches + jsim - errs
+ if score < 10.0:
+ score -= hist_score
+ # print b_prog
+ # print "jsim: ", jsim, " errs: ", errs, " mtchs: ", matches, " s: ", score
+ if score > best_score:
+ best = beam
+ best_prog = b_prog
+ best_score = score
+ if print_out:
+ print("best score: ", best_score, " best prog: ", best_prog)
+ return best, best_score
+
+
+def get_best_beam(beam_model, sess, inp, target, batch_size, beam_size,
+ bucket, history, p, test_mode=False):
+ """Run beam_model, score beams, and return the best as target and in input."""
+ _, output_logits, _, _ = beam_model.step(
+ sess, inp, target, None, beam_size=FLAGS.beam_size)
+ new_targets, new_firsts, scores, new_inp = [], [], [], np.copy(inp)
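+  # For each batch element: decode beam_size candidate outputs, score them
+  # against the target and history, keep the best one as the new target, and
+  # use the best under test-mode scoring as the next-step input.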
+ for b in xrange(batch_size):
+ outputs = []
+ history_b = [[h[b, 0, l] for l in xrange(data.bins[bucket])]
+ for h in history]
+ for beam_idx in xrange(beam_size):
+ outputs.append([int(o[beam_idx * batch_size + b])
+ for o in output_logits])
+ target_t = [target[b, 0, l] for l in xrange(data.bins[bucket])]
+ best, best_score = score_beams(
+ outputs, [t for t in target_t if t > 0], inp[b, :, :],
+ [[t for t in h if t > 0] for h in history_b], p, test_mode=test_mode)
+ scores.append(best_score)
+ if 1 in best: # Only until _EOS.
+ best = best[:best.index(1) + 1]
+ best += [0 for _ in xrange(len(target_t) - len(best))]
+ new_targets.append([best])
+ first, _ = score_beams(
+ outputs, [t for t in target_t if t > 0], inp[b, :, :],
+ [[t for t in h if t > 0] for h in history_b], p, test_mode=True)
+ if 1 in first: # Only until _EOS.
+ first = first[:first.index(1) + 1]
+ first += [0 for _ in xrange(len(target_t) - len(first))]
+ new_inp[b, 0, :] = np.array(first, dtype=np.int32)
+ new_firsts.append([first])
+ # Change target if we found a great answer.
+ new_target = np.array(new_targets, dtype=np.int32)
+ for b in xrange(batch_size):
+ if scores[b] >= 10.0:
+ target[b, 0, :] = new_target[b, 0, :]
+ new_first = np.array(new_firsts, dtype=np.int32)
+ return new_target, new_first, new_inp, scores
+
+
+def train():
+ """Train the model."""
+ batch_size = FLAGS.batch_size * FLAGS.num_gpus
+ (model, beam_model, min_length, max_length, checkpoint_dir,
+ (train_set, dev_set, en_vocab_path, fr_vocab_path), sv, sess) = initialize()
+ with sess.as_default():
+ quant_op = model.quantize_op
+ max_cur_length = min(min_length + 3, max_length)
+ prev_acc_perp = [1000000 for _ in xrange(5)]
+ prev_seq_err = 1.0
+ is_chief = FLAGS.task < 1
+ do_report = False
+
+    # Main training loop.
+ while not sv.ShouldStop():
+ global_step, max_cur_length, learning_rate = sess.run(
+ [model.global_step, model.cur_length, model.lr])
+ acc_loss, acc_l1, acc_total, acc_errors, acc_seq_err = 0.0, 0.0, 0, 0, 0
+ acc_grad_norm, step_count, step_c1, step_time = 0.0, 0, 0, 0.0
+
+ # For words in the word vector file, set their embedding at start.
+ bound1 = FLAGS.steps_per_checkpoint - 1
+ if FLAGS.word_vector_file_en and global_step < bound1 and is_chief:
+ assign_vectors(FLAGS.word_vector_file_en, "embedding:0",
+ en_vocab_path, sess)
+ if FLAGS.max_target_vocab < 1:
+ assign_vectors(FLAGS.word_vector_file_en, "target_embedding:0",
+ en_vocab_path, sess)
+
+ if FLAGS.word_vector_file_fr and global_step < bound1 and is_chief:
+ assign_vectors(FLAGS.word_vector_file_fr, "embedding:0",
+ fr_vocab_path, sess)
+ if FLAGS.max_target_vocab < 1:
+ assign_vectors(FLAGS.word_vector_file_fr, "target_embedding:0",
+ fr_vocab_path, sess)
+
+ for _ in xrange(FLAGS.steps_per_checkpoint):
+ step_count += 1
+ step_c1 += 1
+ global_step = int(model.global_step.eval())
+ train_beam_anneal = global_step / float(FLAGS.train_beam_anneal)
+ train_beam_freq = FLAGS.train_beam_freq * min(1.0, train_beam_anneal)
+ p = random.choice(FLAGS.problem.split("-"))
+ train_set = global_train_set[p][-1]
+ bucket_id = get_bucket_id(train_buckets_scale[p][-1], max_cur_length,
+ train_set)
+        # Prefer longer buckets 60% of the time for non-WMT problems.
+ if np.random.randint(100) < 60 and FLAGS.problem != "wmt":
+ bucket1 = get_bucket_id(train_buckets_scale[p][-1], max_cur_length,
+ train_set)
+ bucket_id = max(bucket1, bucket_id)
+
+ # Run a step and time it.
+ start_time = time.time()
+ inp, target = data.get_batch(bucket_id, batch_size, train_set,
+ FLAGS.height)
+ noise_param = math.sqrt(math.pow(global_step + 1, -0.55) *
+ prev_seq_err) * FLAGS.grad_noise_scale
+ # In multi-step mode, we use best from beam for middle steps.
+ state, new_target, scores, history = None, None, None, []
+ while (FLAGS.beam_size > 1 and
+ train_beam_freq > np.random.random_sample()):
+ # Get the best beam (no training, just forward model).
+ new_target, new_first, new_inp, scores = get_best_beam(
+ beam_model, sess, inp, target,
+ batch_size, FLAGS.beam_size, bucket_id, history, p)
+ history.append(new_first)
+ # Training step with the previous input and the best beam as target.
+ _, _, _, state = model.step(sess, inp, new_target, FLAGS.do_train,
+ noise_param, update_mem=True, state=state)
+ # Change input to the new one for the next step.
+ inp = new_inp
+          # If all results are great, stop early (TODO: avoid waiting for all?).
+ if FLAGS.nprint > 1:
+ print(scores)
+ if sum(scores) / float(len(scores)) >= 10.0:
+ break
+ # The final step with the true target.
+ loss, res, gnorm, _ = model.step(
+ sess, inp, target, FLAGS.do_train, noise_param,
+ update_mem=True, state=state)
+ step_time += time.time() - start_time
+ acc_grad_norm += 0.0 if gnorm is None else float(gnorm)
+
+ # Accumulate statistics.
+ acc_loss += loss
+ acc_l1 += loss
+ errors, total, seq_err = data.accuracy(
+ inp, res, target, batch_size, 0, new_target, scores)
+ if FLAGS.nprint > 1:
+ print("seq_err: ", seq_err)
+ acc_total += total
+ acc_errors += errors
+ acc_seq_err += seq_err
+
+ # Report summary every 10 steps.
+ if step_count + 3 > FLAGS.steps_per_checkpoint:
+          do_report = True  # Don't pollute the plot too early.
+ if is_chief and step_count % 10 == 1 and do_report:
+ cur_loss = acc_l1 / float(step_c1)
+ acc_l1, step_c1 = 0.0, 0
+ cur_perp = data.safe_exp(cur_loss)
+ summary = tf.Summary()
+ summary.value.extend(
+ [tf.Summary.Value(tag="log_perplexity", simple_value=cur_loss),
+ tf.Summary.Value(tag="perplexity", simple_value=cur_perp)])
+ sv.SummaryComputed(sess, summary, global_step)
+
+ # Normalize and print out accumulated statistics.
+ acc_loss /= step_count
+ step_time /= FLAGS.steps_per_checkpoint
+ acc_seq_err = float(acc_seq_err) / (step_count * batch_size)
+ prev_seq_err = max(0.0, acc_seq_err - 0.02) # No noise at error < 2%.
+ acc_errors = float(acc_errors) / acc_total if acc_total > 0 else 1.0
+ t_size = float(sum([len(x) for x in train_set])) / float(1000000)
+ msg = ("step %d step-time %.2f train-size %.3f lr %.6f grad-norm %.4f"
+ % (global_step + 1, step_time, t_size, learning_rate,
+ acc_grad_norm / FLAGS.steps_per_checkpoint))
+ data.print_out("%s len %d ppl %.6f errors %.2f sequence-errors %.2f" %
+ (msg, max_cur_length, data.safe_exp(acc_loss),
+ 100*acc_errors, 100*acc_seq_err))
+
+ # If errors are below the curriculum threshold, move curriculum forward.
+ is_good = FLAGS.curriculum_ppx > data.safe_exp(acc_loss)
+ is_good = is_good and FLAGS.curriculum_seq > acc_seq_err
+ if is_good and is_chief:
+ if FLAGS.quantize:
+ # Quantize weights.
+ data.print_out(" Quantizing parameters.")
+ sess.run([quant_op])
+ # Increase current length (until the next with training data).
+ sess.run(model.cur_length_incr_op)
+ # Forget last perplexities if we're not yet at the end.
+ if max_cur_length < max_length:
+ prev_acc_perp.append(1000000)
+
+ # Lower learning rate if we're worse than the last 5 checkpoints.
+ acc_perp = data.safe_exp(acc_loss)
+ if acc_perp > max(prev_acc_perp[-5:]) and is_chief:
+ sess.run(model.lr_decay_op)
+ prev_acc_perp.append(acc_perp)
+
+ # Save checkpoint.
+ if is_chief:
+ checkpoint_path = os.path.join(checkpoint_dir, "neural_gpu.ckpt")
+ model.saver.save(sess, checkpoint_path,
+ global_step=model.global_step)
+
+ # Run evaluation.
+ bin_bound = 4
+ for p in FLAGS.problem.split("-"):
+ total_loss, total_err, tl_counter = 0.0, 0.0, 0
+ for bin_id in xrange(len(data.bins)):
+ if bin_id < bin_bound or bin_id % FLAGS.eval_bin_print == 1:
+ err, _, loss = single_test(bin_id, model, sess, FLAGS.nprint,
+ batch_size * 4, dev_set, p,
+ beam_model=beam_model)
+ if loss > 0.0:
+ total_loss += loss
+ total_err += err
+ tl_counter += 1
+ test_loss = total_loss / max(1, tl_counter)
+ test_err = total_err / max(1, tl_counter)
+ test_perp = data.safe_exp(test_loss)
+ summary = tf.Summary()
+ summary.value.extend(
+ [tf.Summary.Value(tag="test/%s/loss" % p, simple_value=test_loss),
+ tf.Summary.Value(tag="test/%s/error" % p, simple_value=test_err),
+ tf.Summary.Value(tag="test/%s/perplexity" % p,
+ simple_value=test_perp)])
+ sv.SummaryComputed(sess, summary, global_step)
+
+
+def linearize(output, rev_fr_vocab, simple_tokenizer=None, eos_id=wmt.EOS_ID):
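+  """Convert a list of output token-ids into a string using rev_fr_vocab."""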
+ # If there is an EOS symbol in outputs, cut them at that point (WMT).
+ if eos_id in output:
+ output = output[:output.index(eos_id)]
+ # Print out French sentence corresponding to outputs.
+ if simple_tokenizer or FLAGS.simple_tokenizer:
+ vlen = len(rev_fr_vocab)
+ def vget(o):
+ if o < vlen:
+ return rev_fr_vocab[o]
+ return "UNK"
+ return " ".join([vget(o) for o in output])
+ else:
+ return wmt.basic_detokenizer([rev_fr_vocab[o] for o in output])
+
+
+def evaluate():
+ """Evaluate an existing model."""
+ batch_size = FLAGS.batch_size * FLAGS.num_gpus
+ with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
+ (model, beam_model, _, _, _,
+ (_, dev_set, en_vocab_path, fr_vocab_path), _, sess) = initialize(sess)
+ for p in FLAGS.problem.split("-"):
+ for bin_id in xrange(len(data.bins)):
+ if (FLAGS.task >= 0 and bin_id > 4) or (FLAGS.nprint == 0 and
+ bin_id > 8 and p == "wmt"):
+ break
+ single_test(bin_id, model, sess, FLAGS.nprint, batch_size, dev_set, p,
+ beam_model=beam_model)
+ path = FLAGS.test_file_prefix
+ xid = "" if FLAGS.task < 0 else ("%.4d" % (FLAGS.task+FLAGS.decode_offset))
+ en_path, fr_path = path + ".en" + xid, path + ".fr" + xid
+    # Evaluate on the test files if they exist.
+ if path and tf.gfile.Exists(en_path) and tf.gfile.Exists(fr_path):
+ data.print_out("Translating test set %s" % en_path)
+ # Read lines.
+ en_lines, fr_lines = [], []
+ with tf.gfile.GFile(en_path, mode="r") as f:
+ for line in f:
+ en_lines.append(line.strip())
+ with tf.gfile.GFile(fr_path, mode="r") as f:
+ for line in f:
+ fr_lines.append(line.strip())
+ # Tokenize and convert to ids.
+ en_vocab, _ = wmt.initialize_vocabulary(en_vocab_path)
+ _, rev_fr_vocab = wmt.initialize_vocabulary(fr_vocab_path)
+ if FLAGS.simple_tokenizer:
+ en_ids = [wmt.sentence_to_token_ids(
+ l, en_vocab, tokenizer=wmt.space_tokenizer,
+ normalize_digits=FLAGS.normalize_digits)
+ for l in en_lines]
+ else:
+ en_ids = [wmt.sentence_to_token_ids(l, en_vocab) for l in en_lines]
+ # Translate.
+ results = []
+ for idx, token_ids in enumerate(en_ids):
+ if idx % 5 == 0:
+ data.print_out("Translating example %d of %d." % (idx, len(en_ids)))
+ # Which bucket does it belong to?
+ buckets = [b for b in xrange(len(data.bins))
+ if data.bins[b] >= len(token_ids)]
+ if buckets:
+ result, result_cost = [], 100000000.0
+ for bucket_id in buckets:
+ if data.bins[bucket_id] > MAXLEN_F * len(token_ids) + EVAL_LEN_INCR:
+ break
+ # Get a 1-element batch to feed the sentence to the model.
+ used_batch_size = 1 # batch_size
+ inp, target = data.get_batch(
+ bucket_id, used_batch_size, None, FLAGS.height,
+ preset=([token_ids], [[]]))
+ loss, output_logits, _, _ = model.step(
+ sess, inp, target, None, beam_size=FLAGS.beam_size)
+ outputs = [int(o[0]) for o in output_logits]
+ loss = loss[0] - (data.bins[bucket_id] * FLAGS.length_norm)
+ if FLAGS.simple_tokenizer:
+ cur_out = outputs
+ if wmt.EOS_ID in cur_out:
+ cur_out = cur_out[:cur_out.index(wmt.EOS_ID)]
+ res_tags = [rev_fr_vocab[o] for o in cur_out]
+ bad_words, bad_brack = wmt.parse_constraints(token_ids, res_tags)
+ loss += 1000.0 * bad_words + 100.0 * bad_brack
+ # print (bucket_id, loss)
+ if loss < result_cost:
+ result = outputs
+ result_cost = loss
+ final = linearize(result, rev_fr_vocab)
+ results.append("%s\t%s\n" % (final, fr_lines[idx]))
+ # print result_cost
+ sys.stderr.write(results[-1])
+ sys.stderr.flush()
+ else:
+ sys.stderr.write("TOOO_LONG\t%s\n" % fr_lines[idx])
+ sys.stderr.flush()
+ if xid:
+ decode_suffix = "beam%dln%dn" % (FLAGS.beam_size,
+ int(100 * FLAGS.length_norm))
+ with tf.gfile.GFile(path + ".res" + decode_suffix + xid, mode="w") as f:
+ for line in results:
+ f.write(line)
+
+
+def mul(l):
+ res = 1.0
+ for s in l:
+ res *= s
+ return res
+
+
+def interactive():
+ """Interactively probe an existing model."""
+ with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
+ # Initialize model.
+ (model, _, _, _, _, (_, _, en_path, fr_path), _, _) = initialize(sess)
+ # Load vocabularies.
+ en_vocab, rev_en_vocab = wmt.initialize_vocabulary(en_path)
+ _, rev_fr_vocab = wmt.initialize_vocabulary(fr_path)
+ # Print out vectors and variables.
+ if FLAGS.nprint > 0 and FLAGS.word_vector_file_en:
+ print_vectors("embedding:0", en_path, FLAGS.word_vector_file_en)
+ if FLAGS.nprint > 0 and FLAGS.word_vector_file_fr:
+ print_vectors("target_embedding:0", fr_path, FLAGS.word_vector_file_fr)
+ total = 0
+ for v in tf.trainable_variables():
+ shape = v.get_shape().as_list()
+ total += mul(shape)
+ print(v.name, shape, mul(shape))
+ print(total)
+ # Start interactive loop.
+ sys.stdout.write("Input to Neural GPU Translation Model.\n")
+ sys.stdout.write("> ")
+ sys.stdout.flush()
+ inpt = sys.stdin.readline(), ""
+ while inpt:
+ cures = []
+ # Get token-ids for the input sentence.
+ if FLAGS.simple_tokenizer:
+ token_ids = wmt.sentence_to_token_ids(
+ inpt, en_vocab, tokenizer=wmt.space_tokenizer,
+ normalize_digits=FLAGS.normalize_digits)
+ else:
+ token_ids = wmt.sentence_to_token_ids(inpt, en_vocab)
+ print([rev_en_vocab[t] for t in token_ids])
+ # Which bucket does it belong to?
+ buckets = [b for b in xrange(len(data.bins))
+ if data.bins[b] >= max(len(token_ids), len(cures))]
+ if cures:
+ buckets = [buckets[0]]
+ if buckets:
+ result, result_cost = [], 10000000.0
+ for bucket_id in buckets:
+ if data.bins[bucket_id] > MAXLEN_F * len(token_ids) + EVAL_LEN_INCR:
+ break
+ glen = 1
+ for gen_idx in xrange(glen):
+ # Get a 1-element batch to feed the sentence to the model.
+ inp, target = data.get_batch(
+ bucket_id, 1, None, FLAGS.height, preset=([token_ids], [cures]))
+ loss, output_logits, _, _ = model.step(
+ sess, inp, target, None, beam_size=FLAGS.beam_size,
+ update_mem=False)
+ # If it is a greedy decoder, outputs are argmaxes of output_logits.
+ if FLAGS.beam_size > 1:
+ outputs = [int(o) for o in output_logits]
+ else:
+ loss = loss[0] - (data.bins[bucket_id] * FLAGS.length_norm)
+ outputs = [int(np.argmax(logit, axis=1))
+ for logit in output_logits]
+ print([rev_fr_vocab[t] for t in outputs])
+ print(loss, data.bins[bucket_id])
+ print(linearize(outputs, rev_fr_vocab))
+ cures.append(outputs[gen_idx])
+ print(cures)
+ print(linearize(cures, rev_fr_vocab))
+ if FLAGS.simple_tokenizer:
+ cur_out = outputs
+ if wmt.EOS_ID in cur_out:
+ cur_out = cur_out[:cur_out.index(wmt.EOS_ID)]
+ res_tags = [rev_fr_vocab[o] for o in cur_out]
+ bad_words, bad_brack = wmt.parse_constraints(token_ids, res_tags)
+ loss += 1000.0 * bad_words + 100.0 * bad_brack
+ if loss < result_cost:
+ result = outputs
+ result_cost = loss
+ print("FINAL", result_cost)
+ print([rev_fr_vocab[t] for t in result])
+ print(linearize(result, rev_fr_vocab))
+ else:
+ print("TOOO_LONG")
+ sys.stdout.write("> ")
+ sys.stdout.flush()
+ inpt = sys.stdin.readline(), ""
+
+
+def main(_):
+ if FLAGS.mode == 0:
+ train()
+ elif FLAGS.mode == 1:
+ evaluate()
+ else:
+ interactive()
+
+if __name__ == "__main__":
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_gpu/program_utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_gpu/program_utils.py"
new file mode 100644
index 00000000..1f49d012
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_gpu/program_utils.py"
@@ -0,0 +1,444 @@
+# Copyright 2015 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Utilities for generating program synthesis and evaluation data."""
+
+import contextlib
+import sys
+import random
+import os
+
+try:
+  from StringIO import StringIO  # Python 2
+except ImportError:
+  from io import StringIO  # Python 3
+
+class ListType(object):
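+  """Type constructor for lists, e.g. ListType("Int") prints as [Int]."""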
+ def __init__(self, arg):
+ self.arg = arg
+
+ def __str__(self):
+ return "[" + str(self.arg) + "]"
+
+ def __eq__(self, other):
+ if not isinstance(other, ListType):
+ return False
+ return self.arg == other.arg
+
+ def __hash__(self):
+ return hash(self.arg)
+
+class VarType(object):
+ def __init__(self, arg):
+ self.arg = arg
+
+ def __str__(self):
+ return str(self.arg)
+
+ def __eq__(self, other):
+ if not isinstance(other, VarType):
+ return False
+ return self.arg == other.arg
+
+ def __hash__(self):
+ return hash(self.arg)
+
+class FunctionType(object):
+ def __init__(self, args):
+ self.args = args
+
+ def __str__(self):
+ return str(self.args[0]) + " -> " + str(self.args[1])
+
+ def __eq__(self, other):
+ if not isinstance(other, FunctionType):
+ return False
+ return self.args == other.args
+
+ def __hash__(self):
+ return hash(tuple(self.args))
+
+
+class Function(object):
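+  """A typed function symbol with optional higher-order (lambda) arguments."""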
+  def __init__(self, name, arg_types, output_type, fn_arg_types=None):
+ self.name = name
+ self.arg_types = arg_types
+ self.fn_arg_types = fn_arg_types or []
+ self.output_type = output_type
+
+Null = 100
+## Functions
+f_head = Function("c_head", [ListType("Int")], "Int")
+def c_head(xs): return xs[0] if len(xs) > 0 else Null
+
+f_last = Function("c_last", [ListType("Int")], "Int")
+def c_last(xs): return xs[-1] if len(xs) > 0 else Null
+
+f_take = Function("c_take", ["Int", ListType("Int")], ListType("Int"))
+def c_take(n, xs): return xs[:n]
+
+f_drop = Function("c_drop", ["Int", ListType("Int")], ListType("Int"))
+def c_drop(n, xs): return xs[n:]
+
+f_access = Function("c_access", ["Int", ListType("Int")], "Int")
+def c_access(n, xs): return xs[n] if n >= 0 and len(xs) > n else Null
+
+f_max = Function("c_max", [ListType("Int")], "Int")
+def c_max(xs): return max(xs) if len(xs) > 0 else Null
+
+f_min = Function("c_min", [ListType("Int")], "Int")
+def c_min(xs): return min(xs) if len(xs) > 0 else Null
+
+f_reverse = Function("c_reverse", [ListType("Int")], ListType("Int"))
+def c_reverse(xs): return list(reversed(xs))
+
+f_sort = Function("sorted", [ListType("Int")], ListType("Int"))
+# def c_sort(xs): return sorted(xs)
+
+f_sum = Function("sum", [ListType("Int")], "Int")
+# def c_sum(xs): return sum(xs)
+
+
+## Lambdas
+# Int -> Int
+def plus_one(x): return x + 1
+def minus_one(x): return x - 1
+def times_two(x): return x * 2
+def neg(x): return x * (-1)
+def div_two(x): return int(x/2)
+def sq(x): return x**2
+def times_three(x): return x * 3
+def div_three(x): return int(x/3)
+def times_four(x): return x * 4
+def div_four(x): return int(x/4)
+
+# Int -> Bool
+def pos(x): return x > 0
+def neg(x): return x < 0
+def even(x): return x%2 == 0
+def odd(x): return x%2 == 1
+
+# Int -> Int -> Int
+def add(x, y): return x + y
+def sub(x, y): return x - y
+def mul(x, y): return x * y
+
+# HOFs
+f_map = Function("map", [ListType("Int")],
+ ListType("Int"),
+ [FunctionType(["Int", "Int"])])
+f_filter = Function("filter", [ListType("Int")],
+ ListType("Int"),
+ [FunctionType(["Int", "Bool"])])
+f_count = Function("c_count", [ListType("Int")],
+ "Int",
+ [FunctionType(["Int", "Bool"])])
+def c_count(f, xs): return len([x for x in xs if f(x)])
+
+f_zipwith = Function("c_zipwith", [ListType("Int"), ListType("Int")],
+ ListType("Int"),
+ [FunctionType(["Int", "Int", "Int"])]) #FIX
+def c_zipwith(f, xs, ys): return [f(x, y) for (x, y) in zip(xs, ys)]
+
+f_scan = Function("c_scan", [ListType("Int")],
+ ListType("Int"),
+ [FunctionType(["Int", "Int", "Int"])])
+def c_scan(f, xs):
+  # Left scan: out[i] = f(xs[i], out[i - 1]); copy xs so the input isn't mutated.
+  out = list(xs)
+  for i in range(1, len(xs)):
+    out[i] = f(xs[i], out[i - 1])
+ return out
+
+@contextlib.contextmanager
+def stdoutIO(stdout=None):
+ old = sys.stdout
+ if stdout is None:
+    stdout = StringIO()
+ sys.stdout = stdout
+ yield stdout
+ sys.stdout = old
+
+
+def evaluate(program_str, input_names_to_vals, default="ERROR"):
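+  """Exec program_str with the inputs bound; return printed `out`, or default."""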
+  exec_str = ""
+  for name, val in input_names_to_vals.items():
+    exec_str += name + " = " + str(val) + "; "
+  exec_str += program_str
+
+ with stdoutIO() as s:
+ # pylint: disable=bare-except
+ try:
+ exec(exec_str + " print(out)")
+ return s.getvalue()[:-1]
+ except:
+ return default
+ # pylint: enable=bare-except
+
+
+class Statement(object):
+ """Statement class."""
+
+ def __init__(self, fn, output_var, arg_vars, fn_args=None):
+ self.fn = fn
+ self.output_var = output_var
+ self.arg_vars = arg_vars
+ self.fn_args = fn_args or []
+
+ def __str__(self):
+ return "%s = %s(%s%s%s)"%(self.output_var,
+ self.fn.name,
+ ", ".join(self.fn_args),
+ ", " if self.fn_args else "",
+ ", ".join(self.arg_vars))
+
+ def substitute(self, env):
+ self.output_var = env.get(self.output_var, self.output_var)
+ self.arg_vars = [env.get(v, v) for v in self.arg_vars]
+
+
+class ProgramGrower(object):
+ """Grow programs."""
+
+ def __init__(self, functions, types_to_lambdas):
+ self.functions = functions
+ self.types_to_lambdas = types_to_lambdas
+
+ def grow_body(self, new_var_name, dependencies, types_to_vars):
+ """Grow the program body."""
+ choices = []
+ for f in self.functions:
+      if all(a in types_to_vars for a in f.arg_types):
+ choices.append(f)
+
+ f = random.choice(choices)
+ args = []
+ for t in f.arg_types:
+      possible_vars = types_to_vars[t]
+ var = random.choice(possible_vars)
+ args.append(var)
+ dependencies.setdefault(new_var_name, []).extend(
+ [var] + (dependencies[var]))
+
+ fn_args = [random.choice(self.types_to_lambdas[t]) for t in f.fn_arg_types]
+ types_to_vars.setdefault(f.output_type, []).append(new_var_name)
+
+ return Statement(f, new_var_name, args, fn_args)
+
+ def grow(self, program_len, input_types):
+ """Grow the program."""
+ var_names = list(reversed(map(chr, range(97, 123))))
+ dependencies = dict()
+ types_to_vars = dict()
+ input_names = []
+ for t in input_types:
+ var = var_names.pop()
+ dependencies[var] = []
+ types_to_vars.setdefault(t, []).append(var)
+ input_names.append(var)
+
+ statements = []
+ for _ in range(program_len - 1):
+ var = var_names.pop()
+ statements.append(self.grow_body(var, dependencies, types_to_vars))
+ statements.append(self.grow_body("out", dependencies, types_to_vars))
+
+ new_var_names = [c for c in map(chr, range(97, 123))
+ if c not in input_names]
+ new_var_names.reverse()
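+    # Dead-code elimination: keep only statements that "out" depends on, and
+    # rename the surviving intermediate variables to fresh names in order.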
+ keep_statements = []
+ env = dict()
+ for s in statements:
+ if s.output_var in dependencies["out"]:
+ keep_statements.append(s)
+ env[s.output_var] = new_var_names.pop()
+ if s.output_var == "out":
+ keep_statements.append(s)
+
+ for k in keep_statements:
+ k.substitute(env)
+
+ return Program(input_names, input_types, ";".join(
+ [str(k) for k in keep_statements]))
+
+
+class Program(object):
+ """The program class."""
+
+ def __init__(self, input_names, input_types, body):
+ self.input_names = input_names
+ self.input_types = input_types
+ self.body = body
+
+ def evaluate(self, inputs):
+ """Evaluate this program."""
+ if len(inputs) != len(self.input_names):
+ raise AssertionError("inputs and input_names have to"
+ "have the same len. inp: %s , names: %s" %
+ (str(inputs), str(self.input_names)))
+ inp_str = ""
+ for (name, inp) in zip(self.input_names, inputs):
+ inp_str += name + " = " + str(inp) + "; "
+
+ with stdoutIO() as s:
+ # pylint: disable=exec-used
+ exec(inp_str + self.body + "; print(out)")
+ # pylint: enable=exec-used
+ return s.getvalue()[:-1]
+
+ def flat_str(self):
+ out = ""
+ for s in self.body.split(";"):
+ out += s + ";"
+ return out
+
+ def __str__(self):
+ out = ""
+ for (n, t) in zip(self.input_names, self.input_types):
+ out += n + " = " + str(t) + "\n"
+ for s in self.body.split(";"):
+ out += s + "\n"
+ return out
+
+
+prog_vocab = []
+prog_rev_vocab = {}
+
+
+def tokenize(string, tokens=None):
+ """Tokenize the program string."""
+ if tokens is None:
+ tokens = prog_vocab
+ tokens = sorted(tokens, key=len, reverse=True)
+ out = []
+ string = string.strip()
+ while string:
+ found = False
+ for t in tokens:
+ if string.startswith(t):
+ out.append(t)
+ string = string[len(t):]
+ found = True
+ break
+ if not found:
+ raise ValueError("Couldn't tokenize this: " + string)
+ string = string.strip()
+ return out
+
+
+def clean_up(output, max_val=100):
+ o = eval(str(output))
+ if isinstance(o, bool):
+ return o
+ if isinstance(o, int):
+ if o >= 0:
+ return min(o, max_val)
+ else:
+ return max(o, -1 * max_val)
+ if isinstance(o, list):
+ return [clean_up(l) for l in o]
+
+
+def make_vocab():
+ gen(2, 0)
+
+
+def gen(max_len, how_many):
+ """Generate some programs."""
+ functions = [f_head, f_last, f_take, f_drop, f_access, f_max, f_min,
+ f_reverse, f_sort, f_sum, f_map, f_filter, f_count, f_zipwith,
+ f_scan]
+
+ types_to_lambdas = {
+ FunctionType(["Int", "Int"]): ["plus_one", "minus_one", "times_two",
+ "div_two", "sq", "times_three",
+ "div_three", "times_four", "div_four"],
+ FunctionType(["Int", "Bool"]): ["pos", "neg", "even", "odd"],
+ FunctionType(["Int", "Int", "Int"]): ["add", "sub", "mul"]
+ }
+
+ tokens = []
+ for f in functions:
+ tokens.append(f.name)
+ for v in types_to_lambdas.values():
+ tokens.extend(v)
+ tokens.extend(["=", ";", ",", "(", ")", "[", "]", "Int", "out"])
+ tokens.extend(map(chr, range(97, 123)))
+
+ io_tokens = map(str, range(-220, 220))
+ if not prog_vocab:
+ prog_vocab.extend(["_PAD", "_EOS"] + tokens + io_tokens)
+ for i, t in enumerate(prog_vocab):
+ prog_rev_vocab[t] = i
+
+ io_tokens += [",", "[", "]", ")", "(", "None"]
+ grower = ProgramGrower(functions=functions,
+ types_to_lambdas=types_to_lambdas)
+
+ def mk_inp(l):
+ return [random.choice(range(-5, 5)) for _ in range(l)]
+
+ tar = [ListType("Int")]
+ inps = [[mk_inp(3)], [mk_inp(5)], [mk_inp(7)], [mk_inp(15)]]
+
+ save_prefix = None
+ outcomes_to_programs = dict()
+ tried = set()
+ counter = 0
+ choices = [0] if max_len == 0 else range(max_len)
+ while counter < 100 * how_many and len(outcomes_to_programs) < how_many:
+ counter += 1
+ length = random.choice(choices)
+ t = grower.grow(length, tar)
+ while t in tried:
+ length = random.choice(choices)
+ t = grower.grow(length, tar)
+ # print(t.flat_str())
+ tried.add(t)
+ outcomes = [clean_up(t.evaluate(i)) for i in inps]
+ outcome_str = str(zip(inps, outcomes))
+ if outcome_str in outcomes_to_programs:
+ outcomes_to_programs[outcome_str] = min(
+ [t.flat_str(), outcomes_to_programs[outcome_str]],
+ key=lambda x: len(tokenize(x, tokens)))
+ else:
+ outcomes_to_programs[outcome_str] = t.flat_str()
+ if counter % 5000 == 0:
+ print("== proggen: tried: " + str(counter))
+ print("== proggen: kept: " + str(len(outcomes_to_programs)))
+
+ if counter % 250000 == 0 and save_prefix is not None:
+ print("saving...")
+ save_counter = 0
+ progfilename = os.path.join(save_prefix, "prog_" + str(counter) + ".txt")
+ iofilename = os.path.join(save_prefix, "io_" + str(counter) + ".txt")
+ prog_token_filename = os.path.join(save_prefix,
+ "prog_tokens_" + str(counter) + ".txt")
+ io_token_filename = os.path.join(save_prefix,
+ "io_tokens_" + str(counter) + ".txt")
+ with open(progfilename, "a+") as fp, \
+ open(iofilename, "a+") as fi, \
+ open(prog_token_filename, "a+") as ftp, \
+ open(io_token_filename, "a+") as fti:
+      for (o, p) in outcomes_to_programs.items():
+ save_counter += 1
+ if save_counter % 500 == 0:
+ print("saving %d of %d" % (save_counter, len(outcomes_to_programs)))
+ fp.write(p+"\n")
+ fi.write(o+"\n")
+ ftp.write(str(tokenize(p, tokens))+"\n")
+ fti.write(str(tokenize(o, io_tokens))+"\n")
+
+ return list(outcomes_to_programs.values())
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_gpu/wmt_utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_gpu/wmt_utils.py"
new file mode 100644
index 00000000..ef831918
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_gpu/wmt_utils.py"
@@ -0,0 +1,437 @@
+# Copyright 2015 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Utilities for downloading data from WMT, tokenizing, vocabularies."""
+
+from __future__ import print_function
+
+import gzip
+import os
+import re
+import tarfile
+
+from six.moves import urllib
+import tensorflow as tf
+
+# Special vocabulary symbols - we always put them at the start.
+_PAD = b"_PAD"
+_GO = b"_GO"
+_EOS = b"_EOS"
+_UNK = b"_CHAR_UNK"
+_SPACE = b"_SPACE"
+_START_VOCAB = [_PAD, _GO, _EOS, _UNK, _SPACE]
+
+PAD_ID = 0
+GO_ID = 1
+EOS_ID = 2
+UNK_ID = 3
+SPACE_ID = 4
+
+# Regular expressions used to tokenize.
+_CHAR_MARKER = "_CHAR_"
+_CHAR_MARKER_LEN = len(_CHAR_MARKER)
+_SPEC_CHARS = "" + chr(226) + chr(153) + chr(128)
+_PUNCTUATION = "][.,!?\"':;%$#@&*+}{|><=/^~)(_`,0123456789" + _SPEC_CHARS + "-"
+_WORD_SPLIT = re.compile("([" + _PUNCTUATION + "])")
+_OLD_WORD_SPLIT = re.compile(b"([.,!?\"':;)(])")
+_DIGIT_RE = re.compile(br"\d")
+
+# URLs for WMT data.
+_WMT_ENFR_TRAIN_URL = "http://www.statmt.org/wmt10/training-giga-fren.tar"
+_WMT_ENFR_DEV_URL = "http://www.statmt.org/wmt15/dev-v2.tgz"
+
+
+def maybe_download(directory, filename, url):
+ """Download filename from url unless it's already in directory."""
+ if not tf.gfile.Exists(directory):
+ print("Creating directory %s" % directory)
+ os.mkdir(directory)
+ filepath = os.path.join(directory, filename)
+ if not tf.gfile.Exists(filepath):
+ print("Downloading %s to %s" % (url, filepath))
+ filepath, _ = urllib.request.urlretrieve(url, filepath)
+ statinfo = os.stat(filepath)
+ print("Successfully downloaded", filename, statinfo.st_size, "bytes")
+ return filepath
+
+
+def gunzip_file(gz_path, new_path):
+ """Unzips from gz_path into new_path."""
+ print("Unpacking %s to %s" % (gz_path, new_path))
+ with gzip.open(gz_path, "rb") as gz_file:
+ with open(new_path, "wb") as new_file:
+ for line in gz_file:
+ new_file.write(line)
+
+
+def get_wmt_enfr_train_set(directory):
+ """Download the WMT en-fr training corpus to directory unless it's there."""
+ train_path = os.path.join(directory, "giga-fren.release2.fixed")
+ if not (tf.gfile.Exists(train_path +".fr") and
+ tf.gfile.Exists(train_path +".en")):
+ corpus_file = maybe_download(directory, "training-giga-fren.tar",
+ _WMT_ENFR_TRAIN_URL)
+ print("Extracting tar file %s" % corpus_file)
+ with tarfile.open(corpus_file, "r") as corpus_tar:
+ corpus_tar.extractall(directory)
+ gunzip_file(train_path + ".fr.gz", train_path + ".fr")
+ gunzip_file(train_path + ".en.gz", train_path + ".en")
+ return train_path
+
+
+def get_wmt_enfr_dev_set(directory):
+ """Download the WMT en-fr training corpus to directory unless it's there."""
+ dev_name = "newstest2013"
+ dev_path = os.path.join(directory, dev_name)
+ if not (tf.gfile.Exists(dev_path + ".fr") and
+ tf.gfile.Exists(dev_path + ".en")):
+ dev_file = maybe_download(directory, "dev-v2.tgz", _WMT_ENFR_DEV_URL)
+ print("Extracting tgz file %s" % dev_file)
+ with tarfile.open(dev_file, "r:gz") as dev_tar:
+ fr_dev_file = dev_tar.getmember("dev/" + dev_name + ".fr")
+ en_dev_file = dev_tar.getmember("dev/" + dev_name + ".en")
+ fr_dev_file.name = dev_name + ".fr" # Extract without "dev/" prefix.
+ en_dev_file.name = dev_name + ".en"
+ dev_tar.extract(fr_dev_file, directory)
+ dev_tar.extract(en_dev_file, directory)
+ return dev_path
+
+
+def is_char(token):
+ if len(token) > _CHAR_MARKER_LEN:
+ if token[:_CHAR_MARKER_LEN] == _CHAR_MARKER:
+ return True
+ return False
+
+
+def basic_detokenizer(tokens):
+ """Reverse the process of the basic tokenizer below."""
+ result = []
+ previous_nospace = True
+ for t in tokens:
+ if is_char(t):
+ result.append(t[_CHAR_MARKER_LEN:])
+ previous_nospace = True
+ elif t == _SPACE:
+ result.append(" ")
+ previous_nospace = True
+ elif previous_nospace:
+ result.append(t)
+ previous_nospace = False
+ else:
+ result.extend([" ", t])
+ previous_nospace = False
+ return "".join(result)
+
+
+old_style = False
+
+
+def basic_tokenizer(sentence):
+ """Very basic tokenizer: split the sentence into a list of tokens."""
+ words = []
+ if old_style:
+ for space_separated_fragment in sentence.strip().split():
+ words.extend(re.split(_OLD_WORD_SPLIT, space_separated_fragment))
+ return [w for w in words if w]
+ for space_separated_fragment in sentence.strip().split():
+ tokens = [t for t in re.split(_WORD_SPLIT, space_separated_fragment) if t]
+ first_is_char = False
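+    # Mark single punctuation characters with _CHAR_MARKER so the detokenizer
+    # can later re-attach them without surrounding spaces.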
+ for i, t in enumerate(tokens):
+ if len(t) == 1 and t in _PUNCTUATION:
+ tokens[i] = _CHAR_MARKER + t
+ if i == 0:
+ first_is_char = True
+ if words and words[-1] != _SPACE and (first_is_char or is_char(words[-1])):
+ tokens = [_SPACE] + tokens
+ spaced_tokens = []
+ for i, tok in enumerate(tokens):
+ spaced_tokens.append(tokens[i])
+ if i < len(tokens) - 1:
+ if tok != _SPACE and not (is_char(tok) or is_char(tokens[i+1])):
+ spaced_tokens.append(_SPACE)
+ words.extend(spaced_tokens)
+ return words
+
+
+def space_tokenizer(sentence):
+ return sentence.strip().split()
+
+
+def is_pos_tag(token):
+ """Check if token is a part-of-speech tag."""
+ return(token in ["CC", "CD", "DT", "EX", "FW", "IN", "JJ", "JJR",
+ "JJS", "LS", "MD", "NN", "NNS", "NNP", "NNPS", "PDT",
+ "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "SYM", "TO",
+ "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "WDT", "WP",
+ "WP$", "WRB", ".", ",", ":", ")", "-LRB-", "(", "-RRB-",
+ "HYPH", "$", "``", "''", "ADD", "AFX", "QTR", "BES", "-DFL-",
+ "GW", "HVS", "NFP"])
+
+
+def parse_constraints(inpt, res):
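+  """Return (|#POS tags - #input words|, |#closing - #opening tags|) for res."""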
+ ntags = len(res)
+ nwords = len(inpt)
+ npostags = len([x for x in res if is_pos_tag(x)])
+ nclose = len([x for x in res if x[0] == "/"])
+ nopen = ntags - nclose - npostags
+ return (abs(npostags - nwords), abs(nclose - nopen))
+
+
+def create_vocabulary(vocabulary_path, data_path, max_vocabulary_size,
+ tokenizer=None, normalize_digits=False):
+ """Create vocabulary file (if it does not exist yet) from data file.
+
+ Data file is assumed to contain one sentence per line. Each sentence is
+ tokenized and digits are normalized (if normalize_digits is set).
+ Vocabulary contains the most-frequent tokens up to max_vocabulary_size.
+  We write it to vocabulary_path in a one-token-per-line format, so that
+  the token in the first line gets id=0, the one in the second line id=1, etc.
+
+ Args:
+ vocabulary_path: path where the vocabulary will be created.
+ data_path: data file that will be used to create vocabulary.
+ max_vocabulary_size: limit on the size of the created vocabulary.
+ tokenizer: a function to use to tokenize each data sentence;
+ if None, basic_tokenizer will be used.
+ normalize_digits: Boolean; if true, all digits are replaced by 0s.
+ """
+ if not tf.gfile.Exists(vocabulary_path):
+ print("Creating vocabulary %s from data %s" % (vocabulary_path, data_path))
+ vocab, chars = {}, {}
+ for c in _PUNCTUATION:
+ chars[c] = 1
+
+ # Read French file.
+ with tf.gfile.GFile(data_path + ".fr", mode="rb") as f:
+ counter = 0
+ for line_in in f:
+ line = " ".join(line_in.split())
+ counter += 1
+ if counter % 100000 == 0:
+ print(" processing fr line %d" % counter)
+ for c in line:
+ if c in chars:
+ chars[c] += 1
+ else:
+ chars[c] = 1
+ tokens = tokenizer(line) if tokenizer else basic_tokenizer(line)
+ tokens = [t for t in tokens if not is_char(t) and t != _SPACE]
+ for w in tokens:
+ word = re.sub(_DIGIT_RE, b"0", w) if normalize_digits else w
+ if word in vocab:
+ vocab[word] += 1000000000 # We want target words first.
+ else:
+ vocab[word] = 1000000000
+
+ # Read English file.
+ with tf.gfile.GFile(data_path + ".en", mode="rb") as f:
+ counter = 0
+ for line_in in f:
+ line = " ".join(line_in.split())
+ counter += 1
+ if counter % 100000 == 0:
+ print(" processing en line %d" % counter)
+ for c in line:
+ if c in chars:
+ chars[c] += 1
+ else:
+ chars[c] = 1
+ tokens = tokenizer(line) if tokenizer else basic_tokenizer(line)
+ tokens = [t for t in tokens if not is_char(t) and t != _SPACE]
+ for w in tokens:
+ word = re.sub(_DIGIT_RE, b"0", w) if normalize_digits else w
+ if word in vocab:
+ vocab[word] += 1
+ else:
+ vocab[word] = 1
+
+ sorted_vocab = sorted(vocab, key=vocab.get, reverse=True)
+    sorted_chars = sorted(chars, key=chars.get, reverse=True)
+ sorted_chars = [_CHAR_MARKER + c for c in sorted_chars]
+ vocab_list = _START_VOCAB + sorted_chars + sorted_vocab
+ if tokenizer:
+ vocab_list = _START_VOCAB + sorted_vocab
+ if len(vocab_list) > max_vocabulary_size:
+ vocab_list = vocab_list[:max_vocabulary_size]
+ with tf.gfile.GFile(vocabulary_path, mode="wb") as vocab_file:
+ for w in vocab_list:
+ vocab_file.write(w + b"\n")
+
+
+def initialize_vocabulary(vocabulary_path):
+ """Initialize vocabulary from file.
+
+ We assume the vocabulary is stored one-item-per-line, so a file:
+ dog
+ cat
+ will result in a vocabulary {"dog": 0, "cat": 1}, and this function will
+ also return the reversed-vocabulary ["dog", "cat"].
+
+ Args:
+ vocabulary_path: path to the file containing the vocabulary.
+
+ Returns:
+ a pair: the vocabulary (a dictionary mapping string to integers), and
+ the reversed vocabulary (a list, which reverses the vocabulary mapping).
+
+ Raises:
+ ValueError: if the provided vocabulary_path does not exist.
+ """
+ if tf.gfile.Exists(vocabulary_path):
+ rev_vocab = []
+ with tf.gfile.GFile(vocabulary_path, mode="rb") as f:
+ rev_vocab.extend(f.readlines())
+ rev_vocab = [line.strip() for line in rev_vocab]
+ vocab = dict([(x, y) for (y, x) in enumerate(rev_vocab)])
+ return vocab, rev_vocab
+ else:
+ raise ValueError("Vocabulary file %s not found.", vocabulary_path)
+
+
+def sentence_to_token_ids_raw(sentence, vocabulary,
+ tokenizer=None, normalize_digits=old_style):
+ """Convert a string to list of integers representing token-ids.
+
+ For example, a sentence "I have a dog" may become tokenized into
+ ["I", "have", "a", "dog"] and with vocabulary {"I": 1, "have": 2,
+ "a": 4, "dog": 7"} this function will return [1, 2, 4, 7].
+
+ Args:
+ sentence: the sentence in bytes format to convert to token-ids.
+ vocabulary: a dictionary mapping tokens to integers.
+ tokenizer: a function to use to tokenize each sentence;
+ if None, basic_tokenizer will be used.
+ normalize_digits: Boolean; if true, all digits are replaced by 0s.
+
+ Returns:
+ a list of integers, the token-ids for the sentence.
+ """
+ if tokenizer:
+ words = tokenizer(sentence)
+ else:
+ words = basic_tokenizer(sentence)
+ result = []
+ for w in words:
+ if normalize_digits:
+ w = re.sub(_DIGIT_RE, b"0", w)
+ if w in vocabulary:
+ result.append(vocabulary[w])
+ else:
+ if tokenizer:
+ result.append(UNK_ID)
+ else:
+ result.append(SPACE_ID)
+ for c in w:
+ result.append(vocabulary.get(_CHAR_MARKER + c, UNK_ID))
+ result.append(SPACE_ID)
+ while result and result[0] == SPACE_ID:
+ result = result[1:]
+ while result and result[-1] == SPACE_ID:
+ result = result[:-1]
+ return result
+
+
+def sentence_to_token_ids(sentence, vocabulary,
+ tokenizer=None, normalize_digits=old_style):
+ """Convert a string to list of integers representing token-ids, tab=0."""
+ tab_parts = sentence.strip().split("\t")
+ toks = [sentence_to_token_ids_raw(t, vocabulary, tokenizer, normalize_digits)
+ for t in tab_parts]
+ res = []
+ for t in toks:
+ res.extend(t)
+ res.append(0)
+ return res[:-1]
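+# Illustrative example (hypothetical vocabulary, assuming basic_tokenizer splits
+# on whitespace): with vocabulary {"I": 4, "have": 5, "a": 6, "dog": 7}, the
+# input b"I have\ta dog" would be encoded as [4, 5, 0, 6, 7], the tab separator
+# contributing the id 0 between the two parts.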
+
+
+def data_to_token_ids(data_path, target_path, vocabulary_path,
+ tokenizer=None, normalize_digits=False):
+ """Tokenize data file and turn into token-ids using given vocabulary file.
+
+ This function loads data line-by-line from data_path, calls the above
+ sentence_to_token_ids, and saves the result to target_path. See comment
+ for sentence_to_token_ids on the details of token-ids format.
+
+ Args:
+ data_path: path to the data file in one-sentence-per-line format.
+ target_path: path where the file with token-ids will be created.
+ vocabulary_path: path to the vocabulary file.
+ tokenizer: a function to use to tokenize each sentence;
+ if None, basic_tokenizer will be used.
+ normalize_digits: Boolean; if true, all digits are replaced by 0s.
+ """
+ if not tf.gfile.Exists(target_path):
+ print("Tokenizing data in %s" % data_path)
+ vocab, _ = initialize_vocabulary(vocabulary_path)
+ with tf.gfile.GFile(data_path, mode="rb") as data_file:
+ with tf.gfile.GFile(target_path, mode="w") as tokens_file:
+ counter = 0
+ for line in data_file:
+ counter += 1
+ if counter % 100000 == 0:
+ print(" tokenizing line %d" % counter)
+ token_ids = sentence_to_token_ids(line, vocab, tokenizer,
+ normalize_digits)
+ tokens_file.write(" ".join([str(tok) for tok in token_ids]) + "\n")
+
+
+def prepare_wmt_data(data_dir, vocabulary_size,
+ tokenizer=None, normalize_digits=False):
+ """Get WMT data into data_dir, create vocabularies and tokenize data.
+
+ Args:
+ data_dir: directory in which the data sets will be stored.
+ vocabulary_size: size of the joint vocabulary to create and use.
+ tokenizer: a function to use to tokenize each data sentence;
+ if None, basic_tokenizer will be used.
+ normalize_digits: Boolean; if true, all digits are replaced by 0s.
+
+ Returns:
+ A tuple of 6 elements:
+ (1) path to the token-ids for English training data-set,
+ (2) path to the token-ids for French training data-set,
+ (3) path to the token-ids for English development data-set,
+ (4) path to the token-ids for French development data-set,
+ (5) path to the vocabulary file,
+ (6) path to the vocabulary file (for compatibility with non-joint vocab).
+ """
+ # Get wmt data to the specified directory.
+ train_path = get_wmt_enfr_train_set(data_dir)
+ dev_path = get_wmt_enfr_dev_set(data_dir)
+
+ # Create vocabularies of the appropriate sizes.
+ vocab_path = os.path.join(data_dir, "vocab%d.txt" % vocabulary_size)
+ create_vocabulary(vocab_path, train_path, vocabulary_size,
+ tokenizer=tokenizer, normalize_digits=normalize_digits)
+
+ # Create token ids for the training data.
+ fr_train_ids_path = train_path + (".ids%d.fr" % vocabulary_size)
+ en_train_ids_path = train_path + (".ids%d.en" % vocabulary_size)
+ data_to_token_ids(train_path + ".fr", fr_train_ids_path, vocab_path,
+ tokenizer=tokenizer, normalize_digits=normalize_digits)
+ data_to_token_ids(train_path + ".en", en_train_ids_path, vocab_path,
+ tokenizer=tokenizer, normalize_digits=normalize_digits)
+
+ # Create token ids for the development data.
+ fr_dev_ids_path = dev_path + (".ids%d.fr" % vocabulary_size)
+ en_dev_ids_path = dev_path + (".ids%d.en" % vocabulary_size)
+ data_to_token_ids(dev_path + ".fr", fr_dev_ids_path, vocab_path,
+ tokenizer=tokenizer, normalize_digits=normalize_digits)
+ data_to_token_ids(dev_path + ".en", en_dev_ids_path, vocab_path,
+ tokenizer=tokenizer, normalize_digits=normalize_digits)
+
+ return (en_train_ids_path, fr_train_ids_path,
+ en_dev_ids_path, fr_dev_ids_path,
+ vocab_path, vocab_path)
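+
+
+# Illustrative end-to-end usage (directory and vocabulary size are hypothetical):
+#   en_train, fr_train, en_dev, fr_dev, vocab_path, _ = prepare_wmt_data(
+#       "/tmp/wmt_data", 32000)
+# fetches the WMT English-French data into /tmp/wmt_data, writes the joint
+# vocabulary to /tmp/wmt_data/vocab32000.txt and produces the tokenized
+# train/dev id files next to the raw data.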
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_programmer/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_programmer/README.md"
new file mode 100644
index 00000000..2036d8cc
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_programmer/README.md"
@@ -0,0 +1,19 @@
+# Neural Programmer
+
+Implementation of the Neural Programmer model described in [paper](https://openreview.net/pdf?id=ry2YOrcge)
+
+Download and extract the data from [dropbox](https://www.dropbox.com/s/9tvtcv6lmy51zfw/data.zip?dl=0). Change the ``data_dir`` flag to the location of the data.
+
+### Training
+``python neural_programmer.py``
+
+The models are written to ``FLAGS.output_dir``.
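+
+For example, assuming the data was extracted to ``/tmp/neural_programmer_data`` and the flags are passed on the command line (both paths below are placeholders):
+
+``python neural_programmer.py --data_dir=/tmp/neural_programmer_data --output_dir=/tmp/neural_programmer_model``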
+
+### Testing
+``python neural_programmer.py --evaluator_job=True``
+
+The models are loaded from ``FLAGS.output_dir``. The evaluation is done on development data.
+
+In case of errors because of encoding, add ``"# -*- coding: utf-8 -*-"`` as the first line in ``wiki_data.py``
+
+Maintained by Arvind Neelakantan (arvind2505)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_programmer/data_utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_programmer/data_utils.py"
new file mode 100644
index 00000000..4df80c66
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_programmer/data_utils.py"
@@ -0,0 +1,666 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Functions for constructing vocabulary, converting the examples to integer format and building the required masks for batch computation Author: aneelakantan (Arvind Neelakantan)
+"""
+
+from __future__ import print_function
+
+import copy
+import numbers
+import numpy as np
+import wiki_data
+
+
+def return_index(a):
+ for i in range(len(a)):
+ if (a[i] == 1.0):
+ return i
+
+
+def construct_vocab(data, utility, add_word=False):
+ ans = []
+ for example in data:
+ sent = ""
+ for word in example.question:
+ if (not (isinstance(word, numbers.Number))):
+ sent += word + " "
+ example.original_nc = copy.deepcopy(example.number_columns)
+ example.original_wc = copy.deepcopy(example.word_columns)
+ example.original_nc_names = copy.deepcopy(example.number_column_names)
+ example.original_wc_names = copy.deepcopy(example.word_column_names)
+ if (add_word):
+ continue
+ number_found = 0
+ if (not (example.is_bad_example)):
+ for word in example.question:
+ if (isinstance(word, numbers.Number)):
+ number_found += 1
+ else:
+ if (not (utility.word_ids.has_key(word))):
+ utility.words.append(word)
+ utility.word_count[word] = 1
+ utility.word_ids[word] = len(utility.word_ids)
+ utility.reverse_word_ids[utility.word_ids[word]] = word
+ else:
+ utility.word_count[word] += 1
+ for col_name in example.word_column_names:
+ for word in col_name:
+ if (isinstance(word, numbers.Number)):
+ number_found += 1
+ else:
+ if (not (utility.word_ids.has_key(word))):
+ utility.words.append(word)
+ utility.word_count[word] = 1
+ utility.word_ids[word] = len(utility.word_ids)
+ utility.reverse_word_ids[utility.word_ids[word]] = word
+ else:
+ utility.word_count[word] += 1
+ for col_name in example.number_column_names:
+ for word in col_name:
+ if (isinstance(word, numbers.Number)):
+ number_found += 1
+ else:
+ if (not (utility.word_ids.has_key(word))):
+ utility.words.append(word)
+ utility.word_count[word] = 1
+ utility.word_ids[word] = len(utility.word_ids)
+ utility.reverse_word_ids[utility.word_ids[word]] = word
+ else:
+ utility.word_count[word] += 1
+
+
+def word_lookup(word, utility):
+ if (utility.word_ids.has_key(word)):
+ return word
+ else:
+ return utility.unk_token
+
+
+def convert_to_int_2d_and_pad(a, utility):
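+ # Truncates or pads every entry of `a` to FLAGS.max_entry_length tokens (using
+ # the dummy token for padding) and maps each token to its word id, falling back
+ # to the UNK token for out-of-vocabulary words.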
+ ans = []
+ #print a
+ for b in a:
+ temp = []
+ if (len(b) > utility.FLAGS.max_entry_length):
+ b = b[0:utility.FLAGS.max_entry_length]
+ for remaining in range(len(b), utility.FLAGS.max_entry_length):
+ b.append(utility.dummy_token)
+ assert len(b) == utility.FLAGS.max_entry_length
+ for word in b:
+ temp.append(utility.word_ids[word_lookup(word, utility)])
+ ans.append(temp)
+ #print ans
+ return ans
+
+
+def convert_to_bool_and_pad(a, utility):
+ a = a.tolist()
+ for i in range(len(a)):
+ for j in range(len(a[i])):
+ if (a[i][j] < 1):
+ a[i][j] = False
+ else:
+ a[i][j] = True
+ a[i] = a[i] + [False] * (utility.FLAGS.max_elements - len(a[i]))
+ return a
+
+
+seen_tables = {}
+
+
+def partial_match(question, table, number):
+ answer = []
+ match = {}
+ for i in range(len(table)):
+ temp = []
+ for j in range(len(table[i])):
+ temp.append(0)
+ answer.append(temp)
+ for i in range(len(table)):
+ for j in range(len(table[i])):
+ for word in question:
+ if (number):
+ if (word == table[i][j]):
+ answer[i][j] = 1.0
+ match[i] = 1.0
+ else:
+ if (word in table[i][j]):
+ answer[i][j] = 1.0
+ match[i] = 1.0
+ return answer, match
+
+
+def exact_match(question, table, number):
+ #performs exact match operation
+ answer = []
+ match = {}
+ matched_indices = []
+ for i in range(len(table)):
+ temp = []
+ for j in range(len(table[i])):
+ temp.append(0)
+ answer.append(temp)
+ for i in range(len(table)):
+ for j in range(len(table[i])):
+ if (number):
+ for word in question:
+ if (word == table[i][j]):
+ match[i] = 1.0
+ answer[i][j] = 1.0
+ else:
+ table_entry = table[i][j]
+ for k in range(len(question)):
+ if (k + len(table_entry) <= len(question)):
+ if (table_entry == question[k:(k + len(table_entry))]):
+ #if(len(table_entry) == 1):
+ #print "match: ", table_entry, question
+ match[i] = 1.0
+ answer[i][j] = 1.0
+ matched_indices.append((k, len(table_entry)))
+ return answer, match, matched_indices
+
+
+def partial_column_match(question, table, number):
+ answer = []
+ for i in range(len(table)):
+ answer.append(0)
+ for i in range(len(table)):
+ for word in question:
+ if (word in table[i]):
+ answer[i] = 1.0
+ return answer
+
+
+def exact_column_match(question, table, number):
+ #performs exact match on column names
+ answer = []
+ matched_indices = []
+ for i in range(len(table)):
+ answer.append(0)
+ for i in range(len(table)):
+ table_entry = table[i]
+ for k in range(len(question)):
+ if (k + len(table_entry) <= len(question)):
+ if (table_entry == question[k:(k + len(table_entry))]):
+ answer[i] = 1.0
+ matched_indices.append((k, len(table_entry)))
+ return answer, matched_indices
+
+
+def get_max_entry(a):
+ e = {}
+ for w in a:
+ if (w != "UNK, "):
+ if (e.has_key(w)):
+ e[w] += 1
+ else:
+ e[w] = 1
+ if (len(e) > 0):
+ (key, val) = sorted(e.items(), key=lambda x: -1 * x[1])[0]
+ if (val > 1):
+ return key
+ else:
+ return -1.0
+ else:
+ return -1.0
+
+
+def list_join(a):
+ ans = ""
+ for w in a:
+ ans += str(w) + ", "
+ return ans
+
+
+def group_by_max(table, number):
+ #computes the most frequently occurring entry in a column
+ answer = []
+ for i in range(len(table)):
+ temp = []
+ for j in range(len(table[i])):
+ temp.append(0)
+ answer.append(temp)
+ for i in range(len(table)):
+ if (number):
+ curr = table[i]
+ else:
+ curr = [list_join(w) for w in table[i]]
+ max_entry = get_max_entry(curr)
+ #print i, max_entry
+ for j in range(len(curr)):
+ if (max_entry == curr[j]):
+ answer[i][j] = 1.0
+ else:
+ answer[i][j] = 0.0
+ return answer
+
+
+def pick_one(a):
+ for i in range(len(a)):
+ if (1.0 in a[i]):
+ return True
+ return False
+
+
+def check_processed_cols(col, utility):
+ return True in [
+ True for y in col
+ if (y != utility.FLAGS.pad_int and y !=
+ utility.FLAGS.bad_number_pre_process)
+ ]
+
+
+def complete_wiki_processing(data, utility, train=True):
+ #convert to integers and padding
+ processed_data = []
+ num_bad_examples = 0
+ for example in data:
+ number_found = 0
+ if (example.is_bad_example):
+ num_bad_examples += 1
+ if (not (example.is_bad_example)):
+ example.string_question = example.question[:]
+ #entry match
+ example.processed_number_columns = example.processed_number_columns[:]
+ example.processed_word_columns = example.processed_word_columns[:]
+ example.word_exact_match, word_match, matched_indices = exact_match(
+ example.string_question, example.original_wc, number=False)
+ example.number_exact_match, number_match, _ = exact_match(
+ example.string_question, example.original_nc, number=True)
+ if (not (pick_one(example.word_exact_match)) and not (
+ pick_one(example.number_exact_match))):
+ assert len(word_match) == 0
+ assert len(number_match) == 0
+ example.word_exact_match, word_match = partial_match(
+ example.string_question, example.original_wc, number=False)
+ #group by max
+ example.word_group_by_max = group_by_max(example.original_wc, False)
+ example.number_group_by_max = group_by_max(example.original_nc, True)
+ #column name match
+ example.word_column_exact_match, wcol_matched_indices = exact_column_match(
+ example.string_question, example.original_wc_names, number=False)
+ example.number_column_exact_match, ncol_matched_indices = exact_column_match(
+ example.string_question, example.original_nc_names, number=False)
+ if (not (1.0 in example.word_column_exact_match) and not (
+ 1.0 in example.number_column_exact_match)):
+ example.word_column_exact_match = partial_column_match(
+ example.string_question, example.original_wc_names, number=False)
+ example.number_column_exact_match = partial_column_match(
+ example.string_question, example.original_nc_names, number=False)
+ if (len(word_match) > 0 or len(number_match) > 0):
+ example.question.append(utility.entry_match_token)
+ if (1.0 in example.word_column_exact_match or
+ 1.0 in example.number_column_exact_match):
+ example.question.append(utility.column_match_token)
+ example.string_question = example.question[:]
+ example.number_lookup_matrix = np.transpose(
+ example.number_lookup_matrix)[:]
+ example.word_lookup_matrix = np.transpose(example.word_lookup_matrix)[:]
+ example.columns = example.number_columns[:]
+ example.word_columns = example.word_columns[:]
+ example.len_total_cols = len(example.word_column_names) + len(
+ example.number_column_names)
+ example.column_names = example.number_column_names[:]
+ example.word_column_names = example.word_column_names[:]
+ example.string_column_names = example.number_column_names[:]
+ example.string_word_column_names = example.word_column_names[:]
+ example.sorted_number_index = []
+ example.sorted_word_index = []
+ example.column_mask = []
+ example.word_column_mask = []
+ example.processed_column_mask = []
+ example.processed_word_column_mask = []
+ example.word_column_entry_mask = []
+ example.question_attention_mask = []
+ example.question_number = example.question_number_1 = -1
+ example.question_attention_mask = []
+ example.ordinal_question = []
+ example.ordinal_question_one = []
+ new_question = []
+ if (len(example.number_columns) > 0):
+ example.len_col = len(example.number_columns[0])
+ else:
+ example.len_col = len(example.word_columns[0])
+ for (start, length) in matched_indices:
+ for j in range(length):
+ example.question[start + j] = utility.unk_token
+ #print example.question
+ for word in example.question:
+ if (isinstance(word, numbers.Number) or wiki_data.is_date(word)):
+ if (not (isinstance(word, numbers.Number)) and
+ wiki_data.is_date(word)):
+ word = word.replace("X", "").replace("-", "")
+ number_found += 1
+ if (number_found == 1):
+ example.question_number = word
+ if (len(example.ordinal_question) > 0):
+ example.ordinal_question[len(example.ordinal_question) - 1] = 1.0
+ else:
+ example.ordinal_question.append(1.0)
+ elif (number_found == 2):
+ example.question_number_1 = word
+ if (len(example.ordinal_question_one) > 0):
+ example.ordinal_question_one[len(example.ordinal_question_one) -
+ 1] = 1.0
+ else:
+ example.ordinal_question_one.append(1.0)
+ else:
+ new_question.append(word)
+ example.ordinal_question.append(0.0)
+ example.ordinal_question_one.append(0.0)
+ example.question = [
+ utility.word_ids[word_lookup(w, utility)] for w in new_question
+ ]
+ example.question_attention_mask = [0.0] * len(example.question)
+ #when the first question number occurs before a word
+ example.ordinal_question = example.ordinal_question[0:len(
+ example.question)]
+ example.ordinal_question_one = example.ordinal_question_one[0:len(
+ example.question)]
+ #question-padding
+ example.question = [utility.word_ids[utility.dummy_token]] * (
+ utility.FLAGS.question_length - len(example.question)
+ ) + example.question
+ example.question_attention_mask = [-10000.0] * (
+ utility.FLAGS.question_length - len(example.question_attention_mask)
+ ) + example.question_attention_mask
+ example.ordinal_question = [0.0] * (utility.FLAGS.question_length -
+ len(example.ordinal_question)
+ ) + example.ordinal_question
+ example.ordinal_question_one = [0.0] * (utility.FLAGS.question_length -
+ len(example.ordinal_question_one)
+ ) + example.ordinal_question_one
+ if (True):
+ #number columns and related-padding
+ num_cols = len(example.columns)
+ start = 0
+ for column in example.number_columns:
+ if (check_processed_cols(example.processed_number_columns[start],
+ utility)):
+ example.processed_column_mask.append(0.0)
+ sorted_index = sorted(
+ range(len(example.processed_number_columns[start])),
+ key=lambda k: example.processed_number_columns[start][k],
+ reverse=True)
+ sorted_index = sorted_index + [utility.FLAGS.pad_int] * (
+ utility.FLAGS.max_elements - len(sorted_index))
+ example.sorted_number_index.append(sorted_index)
+ example.columns[start] = column + [utility.FLAGS.pad_int] * (
+ utility.FLAGS.max_elements - len(column))
+ example.processed_number_columns[start] += [utility.FLAGS.pad_int] * (
+ utility.FLAGS.max_elements -
+ len(example.processed_number_columns[start]))
+ start += 1
+ example.column_mask.append(0.0)
+ for remaining in range(num_cols, utility.FLAGS.max_number_cols):
+ example.sorted_number_index.append([utility.FLAGS.pad_int] *
+ (utility.FLAGS.max_elements))
+ example.columns.append([utility.FLAGS.pad_int] *
+ (utility.FLAGS.max_elements))
+ example.processed_number_columns.append([utility.FLAGS.pad_int] *
+ (utility.FLAGS.max_elements))
+ example.number_exact_match.append([0.0] *
+ (utility.FLAGS.max_elements))
+ example.number_group_by_max.append([0.0] *
+ (utility.FLAGS.max_elements))
+ example.column_mask.append(-100000000.0)
+ example.processed_column_mask.append(-100000000.0)
+ example.number_column_exact_match.append(0.0)
+ example.column_names.append([utility.dummy_token])
+ #word column and related-padding
+ start = 0
+ word_num_cols = len(example.word_columns)
+ for column in example.word_columns:
+ if (check_processed_cols(example.processed_word_columns[start],
+ utility)):
+ example.processed_word_column_mask.append(0.0)
+ sorted_index = sorted(
+ range(len(example.processed_word_columns[start])),
+ key=lambda k: example.processed_word_columns[start][k],
+ reverse=True)
+ sorted_index = sorted_index + [utility.FLAGS.pad_int] * (
+ utility.FLAGS.max_elements - len(sorted_index))
+ example.sorted_word_index.append(sorted_index)
+ column = convert_to_int_2d_and_pad(column, utility)
+ example.word_columns[start] = column + [[
+ utility.word_ids[utility.dummy_token]
+ ] * utility.FLAGS.max_entry_length] * (utility.FLAGS.max_elements -
+ len(column))
+ example.processed_word_columns[start] += [utility.FLAGS.pad_int] * (
+ utility.FLAGS.max_elements -
+ len(example.processed_word_columns[start]))
+ example.word_column_entry_mask.append([0] * len(column) + [
+ utility.word_ids[utility.dummy_token]
+ ] * (utility.FLAGS.max_elements - len(column)))
+ start += 1
+ example.word_column_mask.append(0.0)
+ for remaining in range(word_num_cols, utility.FLAGS.max_word_cols):
+ example.sorted_word_index.append([utility.FLAGS.pad_int] *
+ (utility.FLAGS.max_elements))
+ example.word_columns.append([[utility.word_ids[utility.dummy_token]] *
+ utility.FLAGS.max_entry_length] *
+ (utility.FLAGS.max_elements))
+ example.word_column_entry_mask.append(
+ [utility.word_ids[utility.dummy_token]] *
+ (utility.FLAGS.max_elements))
+ example.word_exact_match.append([0.0] * (utility.FLAGS.max_elements))
+ example.word_group_by_max.append([0.0] * (utility.FLAGS.max_elements))
+ example.processed_word_columns.append([utility.FLAGS.pad_int] *
+ (utility.FLAGS.max_elements))
+ example.word_column_mask.append(-100000000.0)
+ example.processed_word_column_mask.append(-100000000.0)
+ example.word_column_exact_match.append(0.0)
+ example.word_column_names.append([utility.dummy_token] *
+ utility.FLAGS.max_entry_length)
+ seen_tables[example.table_key] = 1
+ #convert column and word column names to integers
+ example.column_ids = convert_to_int_2d_and_pad(example.column_names,
+ utility)
+ example.word_column_ids = convert_to_int_2d_and_pad(
+ example.word_column_names, utility)
+ for i_em in range(len(example.number_exact_match)):
+ example.number_exact_match[i_em] = example.number_exact_match[
+ i_em] + [0.0] * (utility.FLAGS.max_elements -
+ len(example.number_exact_match[i_em]))
+ example.number_group_by_max[i_em] = example.number_group_by_max[
+ i_em] + [0.0] * (utility.FLAGS.max_elements -
+ len(example.number_group_by_max[i_em]))
+ for i_em in range(len(example.word_exact_match)):
+ example.word_exact_match[i_em] = example.word_exact_match[
+ i_em] + [0.0] * (utility.FLAGS.max_elements -
+ len(example.word_exact_match[i_em]))
+ example.word_group_by_max[i_em] = example.word_group_by_max[
+ i_em] + [0.0] * (utility.FLAGS.max_elements -
+ len(example.word_group_by_max[i_em]))
+ example.exact_match = example.number_exact_match + example.word_exact_match
+ example.group_by_max = example.number_group_by_max + example.word_group_by_max
+ example.exact_column_match = example.number_column_exact_match + example.word_column_exact_match
+ #answer and related mask, padding
+ if (example.is_lookup):
+ example.answer = example.calc_answer
+ example.number_print_answer = example.number_lookup_matrix.tolist()
+ example.word_print_answer = example.word_lookup_matrix.tolist()
+ for i_answer in range(len(example.number_print_answer)):
+ example.number_print_answer[i_answer] = example.number_print_answer[
+ i_answer] + [0.0] * (utility.FLAGS.max_elements -
+ len(example.number_print_answer[i_answer]))
+ for i_answer in range(len(example.word_print_answer)):
+ example.word_print_answer[i_answer] = example.word_print_answer[
+ i_answer] + [0.0] * (utility.FLAGS.max_elements -
+ len(example.word_print_answer[i_answer]))
+ example.number_lookup_matrix = convert_to_bool_and_pad(
+ example.number_lookup_matrix, utility)
+ example.word_lookup_matrix = convert_to_bool_and_pad(
+ example.word_lookup_matrix, utility)
+ for remaining in range(num_cols, utility.FLAGS.max_number_cols):
+ example.number_lookup_matrix.append([False] *
+ utility.FLAGS.max_elements)
+ example.number_print_answer.append([0.0] * utility.FLAGS.max_elements)
+ for remaining in range(word_num_cols, utility.FLAGS.max_word_cols):
+ example.word_lookup_matrix.append([False] *
+ utility.FLAGS.max_elements)
+ example.word_print_answer.append([0.0] * utility.FLAGS.max_elements)
+ example.print_answer = example.number_print_answer + example.word_print_answer
+ else:
+ example.answer = example.calc_answer
+ example.print_answer = [[0.0] * (utility.FLAGS.max_elements)] * (
+ utility.FLAGS.max_number_cols + utility.FLAGS.max_word_cols)
+ #question_number masks
+ if (example.question_number == -1):
+ example.question_number_mask = np.zeros([utility.FLAGS.max_elements])
+ else:
+ example.question_number_mask = np.ones([utility.FLAGS.max_elements])
+ if (example.question_number_1 == -1):
+ example.question_number_one_mask = -10000.0
+ else:
+ example.question_number_one_mask = np.float64(0.0)
+ if (example.len_col > utility.FLAGS.max_elements):
+ continue
+ processed_data.append(example)
+ return processed_data
+
+
+def add_special_words(utility):
+ utility.words.append(utility.entry_match_token)
+ utility.word_ids[utility.entry_match_token] = len(utility.word_ids)
+ utility.reverse_word_ids[utility.word_ids[
+ utility.entry_match_token]] = utility.entry_match_token
+ utility.entry_match_token_id = utility.word_ids[utility.entry_match_token]
+ print("entry match token: ", utility.word_ids[
+ utility.entry_match_token], utility.entry_match_token_id)
+ utility.words.append(utility.column_match_token)
+ utility.word_ids[utility.column_match_token] = len(utility.word_ids)
+ utility.reverse_word_ids[utility.word_ids[
+ utility.column_match_token]] = utility.column_match_token
+ utility.column_match_token_id = utility.word_ids[utility.column_match_token]
+ print("entry match token: ", utility.word_ids[
+ utility.column_match_token], utility.column_match_token_id)
+ utility.words.append(utility.dummy_token)
+ utility.word_ids[utility.dummy_token] = len(utility.word_ids)
+ utility.reverse_word_ids[utility.word_ids[
+ utility.dummy_token]] = utility.dummy_token
+ utility.dummy_token_id = utility.word_ids[utility.dummy_token]
+ utility.words.append(utility.unk_token)
+ utility.word_ids[utility.unk_token] = len(utility.word_ids)
+ utility.reverse_word_ids[utility.word_ids[
+ utility.unk_token]] = utility.unk_token
+
+
+def perform_word_cutoff(utility):
+ if (utility.FLAGS.word_cutoff > 0):
+ for word in utility.word_ids.keys():
+ if (utility.word_count.has_key(word) and utility.word_count[word] <
+ utility.FLAGS.word_cutoff and word != utility.unk_token and
+ word != utility.dummy_token and word != utility.entry_match_token and
+ word != utility.column_match_token):
+ utility.word_ids.pop(word)
+ utility.words.remove(word)
+
+
+def word_dropout(question, utility):
+ if (utility.FLAGS.word_dropout_prob > 0.0):
+ new_question = []
+ for i in range(len(question)):
+ if (question[i] != utility.dummy_token_id and
+ utility.random.random() > utility.FLAGS.word_dropout_prob):
+ new_question.append(utility.word_ids[utility.unk_token])
+ else:
+ new_question.append(question[i])
+ return new_question
+ else:
+ return question
+
+
+def generate_feed_dict(data, curr, batch_size, gr, train=False, utility=None):
+ #prepare the feed dictionary
+ feed_dict = {}
+ feed_examples = []
+ for j in range(batch_size):
+ feed_examples.append(data[curr + j])
+ if (train):
+ feed_dict[gr.batch_question] = [
+ word_dropout(feed_examples[j].question, utility)
+ for j in range(batch_size)
+ ]
+ else:
+ feed_dict[gr.batch_question] = [
+ feed_examples[j].question for j in range(batch_size)
+ ]
+ feed_dict[gr.batch_question_attention_mask] = [
+ feed_examples[j].question_attention_mask for j in range(batch_size)
+ ]
+ feed_dict[
+ gr.batch_answer] = [feed_examples[j].answer for j in range(batch_size)]
+ feed_dict[gr.batch_number_column] = [
+ feed_examples[j].columns for j in range(batch_size)
+ ]
+ feed_dict[gr.batch_processed_number_column] = [
+ feed_examples[j].processed_number_columns for j in range(batch_size)
+ ]
+ feed_dict[gr.batch_processed_sorted_index_number_column] = [
+ feed_examples[j].sorted_number_index for j in range(batch_size)
+ ]
+ feed_dict[gr.batch_processed_sorted_index_word_column] = [
+ feed_examples[j].sorted_word_index for j in range(batch_size)
+ ]
+ feed_dict[gr.batch_question_number] = np.array(
+ [feed_examples[j].question_number for j in range(batch_size)]).reshape(
+ (batch_size, 1))
+ feed_dict[gr.batch_question_number_one] = np.array(
+ [feed_examples[j].question_number_1 for j in range(batch_size)]).reshape(
+ (batch_size, 1))
+ feed_dict[gr.batch_question_number_mask] = [
+ feed_examples[j].question_number_mask for j in range(batch_size)
+ ]
+ feed_dict[gr.batch_question_number_one_mask] = np.array(
+ [feed_examples[j].question_number_one_mask for j in range(batch_size)
+ ]).reshape((batch_size, 1))
+ feed_dict[gr.batch_print_answer] = [
+ feed_examples[j].print_answer for j in range(batch_size)
+ ]
+ feed_dict[gr.batch_exact_match] = [
+ feed_examples[j].exact_match for j in range(batch_size)
+ ]
+ feed_dict[gr.batch_group_by_max] = [
+ feed_examples[j].group_by_max for j in range(batch_size)
+ ]
+ feed_dict[gr.batch_column_exact_match] = [
+ feed_examples[j].exact_column_match for j in range(batch_size)
+ ]
+ feed_dict[gr.batch_ordinal_question] = [
+ feed_examples[j].ordinal_question for j in range(batch_size)
+ ]
+ feed_dict[gr.batch_ordinal_question_one] = [
+ feed_examples[j].ordinal_question_one for j in range(batch_size)
+ ]
+ feed_dict[gr.batch_number_column_mask] = [
+ feed_examples[j].column_mask for j in range(batch_size)
+ ]
+ feed_dict[gr.batch_number_column_names] = [
+ feed_examples[j].column_ids for j in range(batch_size)
+ ]
+ feed_dict[gr.batch_processed_word_column] = [
+ feed_examples[j].processed_word_columns for j in range(batch_size)
+ ]
+ feed_dict[gr.batch_word_column_mask] = [
+ feed_examples[j].word_column_mask for j in range(batch_size)
+ ]
+ feed_dict[gr.batch_word_column_names] = [
+ feed_examples[j].word_column_ids for j in range(batch_size)
+ ]
+ feed_dict[gr.batch_word_column_entry_mask] = [
+ feed_examples[j].word_column_entry_mask for j in range(batch_size)
+ ]
+ return feed_dict
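+
+
+# Illustrative usage inside a training loop (the names sess, graph, train_data,
+# curr and batch_size are hypothetical; the gr.* attributes referenced above are
+# the placeholders defined on the Graph class in model.py):
+#   feed_dict = generate_feed_dict(train_data, curr, batch_size, graph,
+#                                  train=True, utility=utility)
+#   loss = sess.run(graph.total_cost, feed_dict=feed_dict)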
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_programmer/model.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_programmer/model.py"
new file mode 100644
index 00000000..610d6669
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_programmer/model.py"
@@ -0,0 +1,679 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Author: aneelakantan (Arvind Neelakantan)
+"""
+
+from __future__ import print_function
+
+import numpy as np
+import tensorflow as tf
+import nn_utils
+
+
+class Graph():
+
+ def __init__(self, utility, batch_size, max_passes, mode="train"):
+ self.utility = utility
+ self.data_type = self.utility.tf_data_type[self.utility.FLAGS.data_type]
+ self.max_elements = self.utility.FLAGS.max_elements
+ max_elements = self.utility.FLAGS.max_elements
+ self.num_cols = self.utility.FLAGS.max_number_cols
+ self.num_word_cols = self.utility.FLAGS.max_word_cols
+ self.question_length = self.utility.FLAGS.question_length
+ self.batch_size = batch_size
+ self.max_passes = max_passes
+ self.mode = mode
+ self.embedding_dims = self.utility.FLAGS.embedding_dims
+ #input question and a mask
+ self.batch_question = tf.placeholder(tf.int32,
+ [batch_size, self.question_length])
+ self.batch_question_attention_mask = tf.placeholder(
+ self.data_type, [batch_size, self.question_length])
+ #ground truth scalar answer and lookup answer
+ self.batch_answer = tf.placeholder(self.data_type, [batch_size])
+ self.batch_print_answer = tf.placeholder(
+ self.data_type,
+ [batch_size, self.num_cols + self.num_word_cols, max_elements])
+ #number columns and its processed version
+ self.batch_number_column = tf.placeholder(
+ self.data_type, [batch_size, self.num_cols, max_elements
+ ]) #columns with numeric entries
+ self.batch_processed_number_column = tf.placeholder(
+ self.data_type, [batch_size, self.num_cols, max_elements])
+ self.batch_processed_sorted_index_number_column = tf.placeholder(
+ tf.int32, [batch_size, self.num_cols, max_elements])
+ #word columns and its processed version
+ self.batch_processed_word_column = tf.placeholder(
+ self.data_type, [batch_size, self.num_word_cols, max_elements])
+ self.batch_processed_sorted_index_word_column = tf.placeholder(
+ tf.int32, [batch_size, self.num_word_cols, max_elements])
+ self.batch_word_column_entry_mask = tf.placeholder(
+ tf.int32, [batch_size, self.num_word_cols, max_elements])
+ #names of word and number columns along with their mask
+ self.batch_word_column_names = tf.placeholder(
+ tf.int32,
+ [batch_size, self.num_word_cols, self.utility.FLAGS.max_entry_length])
+ self.batch_word_column_mask = tf.placeholder(
+ self.data_type, [batch_size, self.num_word_cols])
+ self.batch_number_column_names = tf.placeholder(
+ tf.int32,
+ [batch_size, self.num_cols, self.utility.FLAGS.max_entry_length])
+ self.batch_number_column_mask = tf.placeholder(self.data_type,
+ [batch_size, self.num_cols])
+ #exact match and group by max operation
+ self.batch_exact_match = tf.placeholder(
+ self.data_type,
+ [batch_size, self.num_cols + self.num_word_cols, max_elements])
+ self.batch_column_exact_match = tf.placeholder(
+ self.data_type, [batch_size, self.num_cols + self.num_word_cols])
+ self.batch_group_by_max = tf.placeholder(
+ self.data_type,
+ [batch_size, self.num_cols + self.num_word_cols, max_elements])
+ #numbers in the question along with their position. This is used to compute arguments to the comparison operations
+ self.batch_question_number = tf.placeholder(self.data_type, [batch_size, 1])
+ self.batch_question_number_one = tf.placeholder(self.data_type,
+ [batch_size, 1])
+ self.batch_question_number_mask = tf.placeholder(
+ self.data_type, [batch_size, max_elements])
+ self.batch_question_number_one_mask = tf.placeholder(self.data_type,
+ [batch_size, 1])
+ self.batch_ordinal_question = tf.placeholder(
+ self.data_type, [batch_size, self.question_length])
+ self.batch_ordinal_question_one = tf.placeholder(
+ self.data_type, [batch_size, self.question_length])
+
+ def LSTM_question_embedding(self, sentence, sentence_length):
+ #LSTM processes the input question
+ lstm_params = "question_lstm"
+ hidden_vectors = []
+ sentence = self.batch_question
+ question_hidden = tf.zeros(
+ [self.batch_size, self.utility.FLAGS.embedding_dims], self.data_type)
+ question_c_hidden = tf.zeros(
+ [self.batch_size, self.utility.FLAGS.embedding_dims], self.data_type)
+ if (self.utility.FLAGS.rnn_dropout > 0.0):
+ if (self.mode == "train"):
+ rnn_dropout_mask = tf.cast(
+ tf.random_uniform(
+ tf.shape(question_hidden), minval=0.0, maxval=1.0) <
+ self.utility.FLAGS.rnn_dropout,
+ self.data_type) / self.utility.FLAGS.rnn_dropout
+ else:
+ rnn_dropout_mask = tf.ones_like(question_hidden)
+ for question_iterator in range(self.question_length):
+ curr_word = sentence[:, question_iterator]
+ question_vector = nn_utils.apply_dropout(
+ nn_utils.get_embedding(curr_word, self.utility, self.params),
+ self.utility.FLAGS.dropout, self.mode)
+ question_hidden, question_c_hidden = nn_utils.LSTMCell(
+ question_vector, question_hidden, question_c_hidden, lstm_params,
+ self.params)
+ if (self.utility.FLAGS.rnn_dropout > 0.0):
+ question_hidden = question_hidden * rnn_dropout_mask
+ hidden_vectors.append(tf.expand_dims(question_hidden, 0))
+ hidden_vectors = tf.concat(axis=0, values=hidden_vectors)
+ return question_hidden, hidden_vectors
+
+ def history_recurrent_step(self, curr_hprev, hprev):
+ #A single RNN step for controller or history RNN
+ return tf.tanh(
+ tf.matmul(
+ tf.concat(axis=1, values=[hprev, curr_hprev]), self.params[
+ "history_recurrent"])) + self.params["history_recurrent_bias"]
+
+ def question_number_softmax(self, hidden_vectors):
+ #Attention over the question to decide the question number to be passed to the comparison ops
+ def compute_ans(op_embedding, comparison):
+ op_embedding = tf.expand_dims(op_embedding, 0)
+ #dot product of operation embedding with hidden state to the left of the number occurrence
+ first = tf.transpose(
+ tf.matmul(op_embedding,
+ tf.transpose(
+ tf.reduce_sum(hidden_vectors * tf.tile(
+ tf.expand_dims(
+ tf.transpose(self.batch_ordinal_question), 2),
+ [1, 1, self.utility.FLAGS.embedding_dims]), 0))))
+ second = self.batch_question_number_one_mask + tf.transpose(
+ tf.matmul(op_embedding,
+ tf.transpose(
+ tf.reduce_sum(hidden_vectors * tf.tile(
+ tf.expand_dims(
+ tf.transpose(self.batch_ordinal_question_one), 2
+ ), [1, 1, self.utility.FLAGS.embedding_dims]), 0))))
+ question_number_softmax = tf.nn.softmax(tf.concat(axis=1, values=[first, second]))
+ if (self.mode == "test"):
+ cond = tf.equal(question_number_softmax,
+ tf.reshape(
+ tf.reduce_max(question_number_softmax, 1),
+ [self.batch_size, 1]))
+ question_number_softmax = tf.where(
+ cond,
+ tf.fill(tf.shape(question_number_softmax), 1.0),
+ tf.fill(tf.shape(question_number_softmax), 0.0))
+ question_number_softmax = tf.cast(question_number_softmax,
+ self.data_type)
+ ans = tf.reshape(
+ tf.reduce_sum(question_number_softmax * tf.concat(
+ axis=1, values=[self.batch_question_number, self.batch_question_number_one]),
+ 1), [self.batch_size, 1])
+ return ans
+
+ def compute_op_position(op_name):
+ for i in range(len(self.utility.operations_set)):
+ if (op_name == self.utility.operations_set[i]):
+ return i
+
+ def compute_question_number(op_name):
+ op_embedding = tf.nn.embedding_lookup(self.params_unit,
+ compute_op_position(op_name))
+ return compute_ans(op_embedding, op_name)
+
+ curr_greater_question_number = compute_question_number("greater")
+ curr_lesser_question_number = compute_question_number("lesser")
+ curr_geq_question_number = compute_question_number("geq")
+ curr_leq_question_number = compute_question_number("leq")
+ return curr_greater_question_number, curr_lesser_question_number, curr_geq_question_number, curr_leq_question_number
+
+ def perform_attention(self, context_vector, hidden_vectors, length, mask):
+ #Performs attention over hidden_vectors using the context vector
+ context_vector = tf.tile(
+ tf.expand_dims(context_vector, 0), [length, 1, 1]) #time * bs * d
+ attention_softmax = tf.nn.softmax(
+ tf.transpose(tf.reduce_sum(context_vector * hidden_vectors, 2)) +
+ mask) #batch_size * time
+ attention_softmax = tf.tile(
+ tf.expand_dims(tf.transpose(attention_softmax), 2),
+ [1, 1, self.embedding_dims])
+ ans_vector = tf.reduce_sum(attention_softmax * hidden_vectors, 0)
+ return ans_vector
+
+ #computes embeddings for column names using parameters of question module
+ def get_column_hidden_vectors(self):
+ #vector representations for the column names
+ self.column_hidden_vectors = tf.reduce_sum(
+ nn_utils.get_embedding(self.batch_number_column_names, self.utility,
+ self.params), 2)
+ self.word_column_hidden_vectors = tf.reduce_sum(
+ nn_utils.get_embedding(self.batch_word_column_names, self.utility,
+ self.params), 2)
+
+ def create_summary_embeddings(self):
+ #embeddings for each text entry in the table using parameters of the question module
+ self.summary_text_entry_embeddings = tf.reduce_sum(
+ tf.expand_dims(self.batch_exact_match, 3) * tf.expand_dims(
+ tf.expand_dims(
+ tf.expand_dims(
+ nn_utils.get_embedding(self.utility.entry_match_token_id,
+ self.utility, self.params), 0), 1),
+ 2), 2)
+
+ def compute_column_softmax(self, column_controller_vector, time_step):
+ #compute softmax over all the columns using column controller vector
+ column_controller_vector = tf.tile(
+ tf.expand_dims(column_controller_vector, 1),
+ [1, self.num_cols + self.num_word_cols, 1]) #max_cols * bs * d
+ column_controller_vector = nn_utils.apply_dropout(
+ column_controller_vector, self.utility.FLAGS.dropout, self.mode)
+ self.full_column_hidden_vectors = tf.concat(
+ axis=1, values=[self.column_hidden_vectors, self.word_column_hidden_vectors])
+ self.full_column_hidden_vectors += self.summary_text_entry_embeddings
+ self.full_column_hidden_vectors = nn_utils.apply_dropout(
+ self.full_column_hidden_vectors, self.utility.FLAGS.dropout, self.mode)
+ column_logits = tf.reduce_sum(
+ column_controller_vector * self.full_column_hidden_vectors, 2) + (
+ self.params["word_match_feature_column_name"] *
+ self.batch_column_exact_match) + self.full_column_mask
+ column_softmax = tf.nn.softmax(column_logits) #batch_size * max_cols
+ return column_softmax
+
+ def compute_first_or_last(self, select, first=True):
+ #perform the first or last operation on the row selector with probabilistic row selection
+ answer = tf.zeros_like(select)
+ running_sum = tf.zeros([self.batch_size, 1], self.data_type)
+ for i in range(self.max_elements):
+ if (first):
+ current = tf.slice(select, [0, i], [self.batch_size, 1])
+ else:
+ current = tf.slice(select, [0, self.max_elements - 1 - i],
+ [self.batch_size, 1])
+ curr_prob = current * (1 - running_sum)
+ curr_prob = curr_prob * tf.cast(curr_prob >= 0.0, self.data_type)
+ running_sum += curr_prob
+ temp_ans = []
+ curr_prob = tf.expand_dims(tf.reshape(curr_prob, [self.batch_size]), 0)
+ for i_ans in range(self.max_elements):
+ if (not (first) and i_ans == self.max_elements - 1 - i):
+ temp_ans.append(curr_prob)
+ elif (first and i_ans == i):
+ temp_ans.append(curr_prob)
+ else:
+ temp_ans.append(tf.zeros_like(curr_prob))
+ temp_ans = tf.transpose(tf.concat(axis=0, values=temp_ans))
+ answer += temp_ans
+ return answer
+
+ def make_hard_softmax(self, softmax):
+ #converts soft selection to hard selection. used at test time
+ cond = tf.equal(
+ softmax, tf.reshape(tf.reduce_max(softmax, 1), [self.batch_size, 1]))
+ softmax = tf.where(
+ cond, tf.fill(tf.shape(softmax), 1.0), tf.fill(tf.shape(softmax), 0.0))
+ softmax = tf.cast(softmax, self.data_type)
+ return softmax
+
+ def compute_max_or_min(self, select, maxi=True):
+ #computes the argmax and argmin of a column with probabilistic row selection
+ answer = tf.zeros([
+ self.batch_size, self.num_cols + self.num_word_cols, self.max_elements
+ ], self.data_type)
+ sum_prob = tf.zeros([self.batch_size, self.num_cols + self.num_word_cols],
+ self.data_type)
+ for j in range(self.max_elements):
+ if (maxi):
+ curr_pos = j
+ else:
+ curr_pos = self.max_elements - 1 - j
+ select_index = tf.slice(self.full_processed_sorted_index_column,
+ [0, 0, curr_pos], [self.batch_size, -1, 1])
+ select_mask = tf.equal(
+ tf.tile(
+ tf.expand_dims(
+ tf.tile(
+ tf.expand_dims(tf.range(self.max_elements), 0),
+ [self.batch_size, 1]), 1),
+ [1, self.num_cols + self.num_word_cols, 1]), select_index)
+ curr_prob = tf.expand_dims(select, 1) * tf.cast(
+ select_mask, self.data_type) * self.select_bad_number_mask
+ curr_prob = curr_prob * tf.expand_dims((1 - sum_prob), 2)
+ curr_prob = curr_prob * tf.expand_dims(
+ tf.cast((1 - sum_prob) > 0.0, self.data_type), 2)
+ answer = tf.where(select_mask, curr_prob, answer)
+ sum_prob += tf.reduce_sum(curr_prob, 2)
+ return answer
+
+ def perform_operations(self, softmax, full_column_softmax, select,
+ prev_select_1, curr_pass):
+ #performs all the 15 operations. computes scalar output, lookup answer and row selector
+ column_softmax = tf.slice(full_column_softmax, [0, 0],
+ [self.batch_size, self.num_cols])
+ word_column_softmax = tf.slice(full_column_softmax, [0, self.num_cols],
+ [self.batch_size, self.num_word_cols])
+ init_max = self.compute_max_or_min(select, maxi=True)
+ init_min = self.compute_max_or_min(select, maxi=False)
+ #operations that are column independent
+ count = tf.reshape(tf.reduce_sum(select, 1), [self.batch_size, 1])
+ select_full_column_softmax = tf.tile(
+ tf.expand_dims(full_column_softmax, 2),
+ [1, 1, self.max_elements
+ ]) #BS * (max_cols + max_word_cols) * max_elements
+ select_word_column_softmax = tf.tile(
+ tf.expand_dims(word_column_softmax, 2),
+ [1, 1, self.max_elements]) #BS * max_word_cols * max_elements
+ select_greater = tf.reduce_sum(
+ self.init_select_greater * select_full_column_softmax,
+ 1) * self.batch_question_number_mask #BS * max_elements
+ select_lesser = tf.reduce_sum(
+ self.init_select_lesser * select_full_column_softmax,
+ 1) * self.batch_question_number_mask #BS * max_elements
+ select_geq = tf.reduce_sum(
+ self.init_select_geq * select_full_column_softmax,
+ 1) * self.batch_question_number_mask #BS * max_elements
+ select_leq = tf.reduce_sum(
+ self.init_select_leq * select_full_column_softmax,
+ 1) * self.batch_question_number_mask #BS * max_elements
+ select_max = tf.reduce_sum(init_max * select_full_column_softmax,
+ 1) #BS * max_elements
+ select_min = tf.reduce_sum(init_min * select_full_column_softmax,
+ 1) #BS * max_elements
+ select_prev = tf.concat(axis=1, values=[
+ tf.slice(select, [0, 1], [self.batch_size, self.max_elements - 1]),
+ tf.cast(tf.zeros([self.batch_size, 1]), self.data_type)
+ ])
+ select_next = tf.concat(axis=1, values=[
+ tf.cast(tf.zeros([self.batch_size, 1]), self.data_type), tf.slice(
+ select, [0, 0], [self.batch_size, self.max_elements - 1])
+ ])
+ select_last_rs = self.compute_first_or_last(select, False)
+ select_first_rs = self.compute_first_or_last(select, True)
+ select_word_match = tf.reduce_sum(self.batch_exact_match *
+ select_full_column_softmax, 1)
+ select_group_by_max = tf.reduce_sum(self.batch_group_by_max *
+ select_full_column_softmax, 1)
+ length_content = 1
+ length_select = 13
+ length_print = 1
+ values = tf.concat(axis=1, values=[count])
+ softmax_content = tf.slice(softmax, [0, 0],
+ [self.batch_size, length_content])
+ #compute scalar output
+ output = tf.reduce_sum(tf.multiply(softmax_content, values), 1)
+ #compute lookup answer
+ softmax_print = tf.slice(softmax, [0, length_content + length_select],
+ [self.batch_size, length_print])
+ curr_print = select_full_column_softmax * tf.tile(
+ tf.expand_dims(select, 1),
+ [1, self.num_cols + self.num_word_cols, 1
+ ]) #BS * max_cols * max_elements (considers only the column)
+ self.batch_lookup_answer = curr_print * tf.tile(
+ tf.expand_dims(softmax_print, 2),
+ [1, self.num_cols + self.num_word_cols, self.max_elements
+ ]) #BS * max_cols * max_elements
+ self.batch_lookup_answer = self.batch_lookup_answer * self.select_full_mask
+ #compute row select
+ softmax_select = tf.slice(softmax, [0, length_content],
+ [self.batch_size, length_select])
+ select_lists = [
+ tf.expand_dims(select_prev, 1), tf.expand_dims(select_next, 1),
+ tf.expand_dims(select_first_rs, 1), tf.expand_dims(select_last_rs, 1),
+ tf.expand_dims(select_group_by_max, 1),
+ tf.expand_dims(select_greater, 1), tf.expand_dims(select_lesser, 1),
+ tf.expand_dims(select_geq, 1), tf.expand_dims(select_leq, 1),
+ tf.expand_dims(select_max, 1), tf.expand_dims(select_min, 1),
+ tf.expand_dims(select_word_match, 1),
+ tf.expand_dims(self.reset_select, 1)
+ ]
+ select = tf.reduce_sum(
+ tf.tile(tf.expand_dims(softmax_select, 2), [1, 1, self.max_elements]) *
+ tf.concat(axis=1, values=select_lists), 1)
+ select = select * self.select_whole_mask
+ return output, select
+
+ def one_pass(self, select, question_embedding, hidden_vectors, hprev,
+ prev_select_1, curr_pass):
+ #Performs one timestep which involves selecting an operation and a column
+ attention_vector = self.perform_attention(
+ hprev, hidden_vectors, self.question_length,
+ self.batch_question_attention_mask) #batch_size * embedding_dims
+ controller_vector = tf.nn.relu(
+ tf.matmul(hprev, self.params["controller_prev"]) + tf.matmul(
+ tf.concat(axis=1, values=[question_embedding, attention_vector]), self.params[
+ "controller"]))
+ column_controller_vector = tf.nn.relu(
+ tf.matmul(hprev, self.params["column_controller_prev"]) + tf.matmul(
+ tf.concat(axis=1, values=[question_embedding, attention_vector]), self.params[
+ "column_controller"]))
+ controller_vector = nn_utils.apply_dropout(
+ controller_vector, self.utility.FLAGS.dropout, self.mode)
+ self.operation_logits = tf.matmul(controller_vector,
+ tf.transpose(self.params_unit))
+ softmax = tf.nn.softmax(self.operation_logits)
+ soft_softmax = softmax
+ #compute column softmax: bs * max_columns
+ weighted_op_representation = tf.transpose(
+ tf.matmul(tf.transpose(self.params_unit), tf.transpose(softmax)))
+ column_controller_vector = tf.nn.relu(
+ tf.matmul(
+ tf.concat(axis=1, values=[
+ column_controller_vector, weighted_op_representation
+ ]), self.params["break_conditional"]))
+ full_column_softmax = self.compute_column_softmax(column_controller_vector,
+ curr_pass)
+ soft_column_softmax = full_column_softmax
+ if (self.mode == "test"):
+ full_column_softmax = self.make_hard_softmax(full_column_softmax)
+ softmax = self.make_hard_softmax(softmax)
+ output, select = self.perform_operations(softmax, full_column_softmax,
+ select, prev_select_1, curr_pass)
+ return output, select, softmax, soft_softmax, full_column_softmax, soft_column_softmax
+
+ def compute_lookup_error(self, val):
+ #computes lookup error.
+ cond = tf.equal(self.batch_print_answer, val)
+ inter = tf.where(
+ cond, self.init_print_error,
+ tf.tile(
+ tf.reshape(tf.constant(1e10, self.data_type), [1, 1, 1]), [
+ self.batch_size, self.utility.FLAGS.max_word_cols +
+ self.utility.FLAGS.max_number_cols,
+ self.utility.FLAGS.max_elements
+ ]))
+ return tf.reduce_min(tf.reduce_min(inter, 1), 1) * tf.cast(
+ tf.greater(
+ tf.reduce_sum(tf.reduce_sum(tf.cast(cond, self.data_type), 1), 1),
+ 0.0), self.data_type)
+
+ def soft_min(self, x, y):
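+ # A smooth, differentiable approximation of min(x, y): computes
+ #   -(1/k) * log(exp(-k*x) + exp(-k*y)), clipped below at zero,
+ # where k = FLAGS.soft_min_value controls how closely it tracks the hard min.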
+ return tf.maximum(-1.0 * (1 / (
+ self.utility.FLAGS.soft_min_value + 0.0)) * tf.log(
+ tf.exp(-self.utility.FLAGS.soft_min_value * x) + tf.exp(
+ -self.utility.FLAGS.soft_min_value * y)), tf.zeros_like(x))
+
+ def error_computation(self):
+ #computes the error of each example in a batch
+ math_error = 0.5 * tf.square(tf.subtract(self.scalar_output, self.batch_answer))
+ #scale math error
+ math_error = math_error / self.rows
+ math_error = tf.minimum(math_error, self.utility.FLAGS.max_math_error *
+ tf.ones(tf.shape(math_error), self.data_type))
+ self.init_print_error = tf.where(
+ self.batch_gold_select, -1 * tf.log(self.batch_lookup_answer + 1e-300 +
+ self.invert_select_full_mask), -1 *
+ tf.log(1 - self.batch_lookup_answer)) * self.select_full_mask
+ print_error_1 = self.init_print_error * tf.cast(
+ tf.equal(self.batch_print_answer, 0.0), self.data_type)
+ print_error = tf.reduce_sum(tf.reduce_sum((print_error_1), 1), 1)
+ for val in range(1, 58):
+ print_error += self.compute_lookup_error(val + 0.0)
+ print_error = print_error * self.utility.FLAGS.print_cost / self.num_entries
+ if (self.mode == "train"):
+ error = tf.where(
+ tf.logical_and(
+ tf.not_equal(self.batch_answer, 0.0),
+ tf.not_equal(
+ tf.reduce_sum(tf.reduce_sum(self.batch_print_answer, 1), 1),
+ 0.0)),
+ self.soft_min(math_error, print_error),
+ tf.where(
+ tf.not_equal(self.batch_answer, 0.0), math_error, print_error))
+ else:
+ error = tf.where(
+ tf.logical_and(
+ tf.equal(self.scalar_output, 0.0),
+ tf.equal(
+ tf.reduce_sum(tf.reduce_sum(self.batch_lookup_answer, 1), 1),
+ 0.0)),
+ tf.ones_like(math_error),
+ tf.where(
+ tf.equal(self.scalar_output, 0.0), print_error, math_error))
+ return error
+
+ def batch_process(self):
+ #Computes loss and fraction of correct examples in a batch.
+ self.params_unit = nn_utils.apply_dropout(
+ self.params["unit"], self.utility.FLAGS.dropout, self.mode)
+ batch_size = self.batch_size
+ max_passes = self.max_passes
+ num_timesteps = 1
+ max_elements = self.max_elements
+ select = tf.cast(
+ tf.fill([self.batch_size, max_elements], 1.0), self.data_type)
+ hprev = tf.cast(
+ tf.fill([self.batch_size, self.embedding_dims], 0.0),
+ self.data_type) #running sum of the hidden states of the model
+ output = tf.cast(tf.fill([self.batch_size, 1], 0.0),
+ self.data_type) #output of the model
+ correct = tf.cast(
+ tf.fill([1], 0.0), self.data_type
+ ) #to compute accuracy, returns number of correct examples for this batch
+ total_error = 0.0
+ prev_select_1 = tf.zeros_like(select)
+ self.create_summary_embeddings()
+ self.get_column_hidden_vectors()
+ #get question embedding
+ question_embedding, hidden_vectors = self.LSTM_question_embedding(
+ self.batch_question, self.question_length)
+ #compute arguments for comparison operation
+ greater_question_number, lesser_question_number, geq_question_number, leq_question_number = self.question_number_softmax(
+ hidden_vectors)
+ self.init_select_greater = tf.cast(
+ tf.greater(self.full_processed_column,
+ tf.expand_dims(greater_question_number, 2)), self.
+ data_type) * self.select_bad_number_mask #bs * max_cols * max_elements
+ self.init_select_lesser = tf.cast(
+ tf.less(self.full_processed_column,
+ tf.expand_dims(lesser_question_number, 2)), self.
+ data_type) * self.select_bad_number_mask #bs * max_cols * max_elements
+ self.init_select_geq = tf.cast(
+ tf.greater_equal(self.full_processed_column,
+ tf.expand_dims(geq_question_number, 2)), self.
+ data_type) * self.select_bad_number_mask #bs * max_cols * max_elements
+ self.init_select_leq = tf.cast(
+ tf.less_equal(self.full_processed_column,
+ tf.expand_dims(leq_question_number, 2)), self.
+ data_type) * self.select_bad_number_mask #bs * max_cols * max_elements
+ self.init_select_word_match = 0
+ if (self.utility.FLAGS.rnn_dropout > 0.0):
+ if (self.mode == "train"):
+ history_rnn_dropout_mask = tf.cast(
+ tf.random_uniform(
+ tf.shape(hprev), minval=0.0, maxval=1.0) <
+ self.utility.FLAGS.rnn_dropout,
+ self.data_type) / self.utility.FLAGS.rnn_dropout
+ else:
+ history_rnn_dropout_mask = tf.ones_like(hprev)
+ select = select * self.select_whole_mask
+ self.batch_log_prob = tf.zeros([self.batch_size], dtype=self.data_type)
+ #Perform max_passes and at each pass select operation and column
+ for curr_pass in range(max_passes):
+ print("step: ", curr_pass)
+ output, select, softmax, soft_softmax, column_softmax, soft_column_softmax = self.one_pass(
+ select, question_embedding, hidden_vectors, hprev, prev_select_1,
+ curr_pass)
+ prev_select_1 = select
+ #compute input to history RNN
+ input_op = tf.transpose(
+ tf.matmul(
+ tf.transpose(self.params_unit), tf.transpose(
+ soft_softmax))) #weighted average of the embeddings of the operations
+ input_col = tf.reduce_sum(
+ tf.expand_dims(soft_column_softmax, 2) *
+ self.full_column_hidden_vectors, 1)
+ history_input = tf.concat(axis=1, values=[input_op, input_col])
+ history_input = nn_utils.apply_dropout(
+ history_input, self.utility.FLAGS.dropout, self.mode)
+ hprev = self.history_recurrent_step(history_input, hprev)
+ if (self.utility.FLAGS.rnn_dropout > 0.0):
+ hprev = hprev * history_rnn_dropout_mask
+ self.scalar_output = output
+ error = self.error_computation()
+ cond = tf.less(error, 0.0001, name="cond")
+ correct_add = tf.where(
+ cond, tf.fill(tf.shape(cond), 1.0), tf.fill(tf.shape(cond), 0.0))
+ correct = tf.reduce_sum(correct_add)
+ error = error / batch_size
+ total_error = tf.reduce_sum(error)
+ total_correct = correct / batch_size
+ return total_error, total_correct
+
+ def compute_error(self):
+ #Sets mask variables and performs batch processing
+ self.batch_gold_select = self.batch_print_answer > 0.0
+ self.full_column_mask = tf.concat(
+ axis=1, values=[self.batch_number_column_mask, self.batch_word_column_mask])
+ self.full_processed_column = tf.concat(
+ axis=1,
+ values=[self.batch_processed_number_column, self.batch_processed_word_column])
+ self.full_processed_sorted_index_column = tf.concat(axis=1, values=[
+ self.batch_processed_sorted_index_number_column,
+ self.batch_processed_sorted_index_word_column
+ ])
+ self.select_bad_number_mask = tf.cast(
+ tf.logical_and(
+ tf.not_equal(self.full_processed_column,
+ self.utility.FLAGS.pad_int),
+ tf.not_equal(self.full_processed_column,
+ self.utility.FLAGS.bad_number_pre_process)),
+ self.data_type)
+ self.select_mask = tf.cast(
+ tf.logical_not(
+ tf.equal(self.batch_number_column, self.utility.FLAGS.pad_int)),
+ self.data_type)
+ self.select_word_mask = tf.cast(
+ tf.logical_not(
+ tf.equal(self.batch_word_column_entry_mask,
+ self.utility.dummy_token_id)), self.data_type)
+ self.select_full_mask = tf.concat(
+ axis=1, values=[self.select_mask, self.select_word_mask])
+ self.select_whole_mask = tf.maximum(
+ tf.reshape(
+ tf.slice(self.select_mask, [0, 0, 0],
+ [self.batch_size, 1, self.max_elements]),
+ [self.batch_size, self.max_elements]),
+ tf.reshape(
+ tf.slice(self.select_word_mask, [0, 0, 0],
+ [self.batch_size, 1, self.max_elements]),
+ [self.batch_size, self.max_elements]))
+ self.invert_select_full_mask = tf.cast(
+ tf.concat(axis=1, values=[
+ tf.equal(self.batch_number_column, self.utility.FLAGS.pad_int),
+ tf.equal(self.batch_word_column_entry_mask,
+ self.utility.dummy_token_id)
+ ]), self.data_type)
+ self.batch_lookup_answer = tf.zeros(tf.shape(self.batch_gold_select))
+ self.reset_select = self.select_whole_mask
+ self.rows = tf.reduce_sum(self.select_whole_mask, 1)
+ self.num_entries = tf.reshape(
+ tf.reduce_sum(tf.reduce_sum(self.select_full_mask, 1), 1),
+ [self.batch_size])
+ self.final_error, self.final_correct = self.batch_process()
+ return self.final_error
+
+ def create_graph(self, params, global_step):
+ #Creates the graph to compute the error, computes gradients, and updates the parameters
+ self.params = params
+ batch_size = self.batch_size
+ learning_rate = tf.cast(self.utility.FLAGS.learning_rate, self.data_type)
+ self.total_cost = self.compute_error()
+ optimize_params = self.params.values()
+ optimize_names = self.params.keys()
+ print("optimize params ", optimize_names)
+ if (self.utility.FLAGS.l2_regularizer > 0.0):
+ reg_cost = 0.0
+ for ind_param in self.params.keys():
+ reg_cost += tf.nn.l2_loss(self.params[ind_param])
+ self.total_cost += self.utility.FLAGS.l2_regularizer * reg_cost
+ grads = tf.gradients(self.total_cost, optimize_params, name="gradients")
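+ #Clip gradients by global norm: compute the overall L2 norm of all
+ #gradients and scale each gradient by min(1, clip_gradients / norm).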
+ grad_norm = 0.0
+ for p, name in zip(grads, optimize_names):
+ print("grads: ", p, name)
+ if isinstance(p, tf.IndexedSlices):
+ grad_norm += tf.reduce_sum(p.values * p.values)
+ elif not (p == None):
+ grad_norm += tf.reduce_sum(p * p)
+ grad_norm = tf.sqrt(grad_norm)
+ max_grad_norm = np.float32(self.utility.FLAGS.clip_gradients).astype(
+ self.utility.np_data_type[self.utility.FLAGS.data_type])
+ grad_scale = tf.minimum(
+ tf.cast(1.0, self.data_type), max_grad_norm / grad_norm)
+ clipped_grads = list()
+ for p in grads:
+ if isinstance(p, tf.IndexedSlices):
+ tmp = p.values * grad_scale
+ clipped_grads.append(tf.IndexedSlices(tmp, p.indices))
+ elif not (p == None):
+ clipped_grads.append(p * grad_scale)
+ else:
+ clipped_grads.append(p)
+ grads = clipped_grads
+ self.global_step = global_step
+ params_list = self.params.values()
+ params_list.append(self.global_step)
+ adam = tf.train.AdamOptimizer(
+ learning_rate,
+ epsilon=tf.cast(self.utility.FLAGS.eps, self.data_type),
+ use_locking=True)
+ self.step = adam.apply_gradients(zip(grads, optimize_params),
+ global_step=self.global_step)
+ self.init_op = tf.global_variables_initializer()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_programmer/neural_programmer.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_programmer/neural_programmer.py"
new file mode 100644
index 00000000..145ca13d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_programmer/neural_programmer.py"
@@ -0,0 +1,239 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Implementation of the Neural Programmer model described in https://openreview.net/pdf?id=ry2YOrcge
+
+This file calls functions to load & pre-process data, constructs the TF graph,
+and performs training or evaluation as specified by the flag evaluator_job.
+Author: aneelakantan (Arvind Neelakantan)
+"""
+from __future__ import print_function
+
+import time
+from random import Random
+import numpy as np
+import tensorflow as tf
+import model
+import wiki_data
+import parameters
+import data_utils
+
+tf.flags.DEFINE_integer("train_steps", 100001, "Number of steps to train")
+tf.flags.DEFINE_integer("eval_cycle", 500,
+ "Evaluate model at every eval_cycle steps")
+tf.flags.DEFINE_integer("max_elements", 100,
+ "maximum rows that are considered for processing")
+tf.flags.DEFINE_integer(
+ "max_number_cols", 15,
+ "maximum number columns that are considered for processing")
+tf.flags.DEFINE_integer(
+ "max_word_cols", 25,
+ "maximum number columns that are considered for processing")
+tf.flags.DEFINE_integer("question_length", 62, "maximum question length")
+tf.flags.DEFINE_integer("max_entry_length", 1, "")
+tf.flags.DEFINE_integer("max_passes", 4, "number of operation passes")
+tf.flags.DEFINE_integer("embedding_dims", 256, "")
+tf.flags.DEFINE_integer("batch_size", 20, "")
+tf.flags.DEFINE_float("clip_gradients", 1.0, "")
+tf.flags.DEFINE_float("eps", 1e-6, "")
+tf.flags.DEFINE_float("param_init", 0.1, "")
+tf.flags.DEFINE_float("learning_rate", 0.001, "")
+tf.flags.DEFINE_float("l2_regularizer", 0.0001, "")
+tf.flags.DEFINE_float("print_cost", 50.0,
+ "weighting factor in the objective function")
+tf.flags.DEFINE_string("job_id", "temp", """job id""")
+tf.flags.DEFINE_string("output_dir", "../model/",
+ """output_dir""")
+tf.flags.DEFINE_string("data_dir", "../data/",
+ """data_dir""")
+tf.flags.DEFINE_integer("write_every", 500, "wrtie every N")
+tf.flags.DEFINE_integer("param_seed", 150, "")
+tf.flags.DEFINE_integer("python_seed", 200, "")
+tf.flags.DEFINE_float("dropout", 0.8, "dropout keep probability")
+tf.flags.DEFINE_float("rnn_dropout", 0.9,
+ "dropout keep probability for rnn connections")
+tf.flags.DEFINE_float("pad_int", -20000.0,
+ "number columns are padded with pad_int")
+tf.flags.DEFINE_string("data_type", "double", "float or double")
+tf.flags.DEFINE_float("word_dropout_prob", 0.9, "word dropout keep prob")
+tf.flags.DEFINE_integer("word_cutoff", 10, "")
+tf.flags.DEFINE_integer("vocab_size", 10800, "")
+tf.flags.DEFINE_boolean("evaluator_job", False,
+ "wehther to run as trainer/evaluator")
+tf.flags.DEFINE_float(
+ "bad_number_pre_process", -200000.0,
+ "number that is added to a corrupted table entry in a number column")
+tf.flags.DEFINE_float("max_math_error", 3.0,
+ "max square loss error that is considered")
+tf.flags.DEFINE_float("soft_min_value", 5.0, "")
+FLAGS = tf.flags.FLAGS
+
+
+class Utility:
+ #holds FLAGS and other variables that are used in different files
+ def __init__(self):
+ global FLAGS
+ self.FLAGS = FLAGS
+ self.unk_token = "UNK"
+ self.entry_match_token = "entry_match"
+ self.column_match_token = "column_match"
+ self.dummy_token = "dummy_token"
+ self.tf_data_type = {}
+ self.tf_data_type["double"] = tf.float64
+ self.tf_data_type["float"] = tf.float32
+ self.np_data_type = {}
+ self.np_data_type["double"] = np.float64
+ self.np_data_type["float"] = np.float32
+ self.operations_set = ["count"] + [
+ "prev", "next", "first_rs", "last_rs", "group_by_max", "greater",
+ "lesser", "geq", "leq", "max", "min", "word-match"
+ ] + ["reset_select"] + ["print"]
+ self.word_ids = {}
+ self.reverse_word_ids = {}
+ self.word_count = {}
+ self.random = Random(FLAGS.python_seed)
+
+
+def evaluate(sess, data, batch_size, graph, i):
+ #computes accuracy
+ num_examples = 0.0
+ gc = 0.0
+ for j in range(0, len(data) - batch_size + 1, batch_size):
+ [ct] = sess.run([graph.final_correct],
+ feed_dict=data_utils.generate_feed_dict(data, j, batch_size,
+ graph))
+ gc += ct * batch_size
+ num_examples += batch_size
+ print("dev set accuracy after ", i, " : ", gc / num_examples)
+ print(num_examples, len(data))
+ print("--------")
+
+
+def Train(graph, utility, batch_size, train_data, sess, model_dir,
+ saver):
+ #performs training
+ curr = 0
+ train_set_loss = 0.0
+ utility.random.shuffle(train_data)
+ start = time.time()
+ for i in range(utility.FLAGS.train_steps):
+ curr_step = i
+ if (i > 0 and i % FLAGS.write_every == 0):
+ model_file = model_dir + "/model_" + str(i)
+ saver.save(sess, model_file)
+ if curr + batch_size >= len(train_data):
+ curr = 0
+ utility.random.shuffle(train_data)
+ step, cost_value = sess.run(
+ [graph.step, graph.total_cost],
+ feed_dict=data_utils.generate_feed_dict(
+ train_data, curr, batch_size, graph, train=True, utility=utility))
+ curr = curr + batch_size
+ train_set_loss += cost_value
+ if (i > 0 and i % FLAGS.eval_cycle == 0):
+ end = time.time()
+ time_taken = end - start
+ print("step ", i, " ", time_taken, " seconds ")
+ start = end
+ print(" printing train set loss: ", train_set_loss / utility.FLAGS.eval_cycle)
+ train_set_loss = 0.0
+
+
+def master(train_data, dev_data, utility):
+ #creates TF graph and calls trainer or evaluator
+ batch_size = utility.FLAGS.batch_size
+ model_dir = utility.FLAGS.output_dir + "/model" + utility.FLAGS.job_id + "/"
+ #create all parameters of the model
+ param_class = parameters.Parameters(utility)
+ params, global_step, init = param_class.parameters(utility)
+ key = "test" if (FLAGS.evaluator_job) else "train"
+ graph = model.Graph(utility, batch_size, utility.FLAGS.max_passes, mode=key)
+ graph.create_graph(params, global_step)
+ prev_dev_error = 0.0
+ final_loss = 0.0
+ final_accuracy = 0.0
+ #start session
+ with tf.Session() as sess:
+ sess.run(init.name)
+ sess.run(graph.init_op.name)
+ to_save = params.copy()
+ saver = tf.train.Saver(to_save, max_to_keep=500)
+ if (FLAGS.evaluator_job):
+ while True:
+ selected_models = {}
+ file_list = tf.gfile.ListDirectory(model_dir)
+ for model_file in file_list:
+ if ("checkpoint" in model_file or "index" in model_file or
+ "meta" in model_file):
+ continue
+ if ("data" in model_file):
+ model_file = model_file.split(".")[0]
+ model_step = int(
+ model_file.split("_")[len(model_file.split("_")) - 1])
+ selected_models[model_step] = model_file
+ file_list = sorted(selected_models.items(), key=lambda x: x[0])
+ if (len(file_list) > 0):
+ file_list = file_list[0:len(file_list) - 1]
+ print("list of models: ", file_list)
+ for model_file in file_list:
+ model_file = model_file[1]
+ print("restoring: ", model_file)
+ saver.restore(sess, model_dir + "/" + model_file)
+ model_step = int(
+ model_file.split("_")[len(model_file.split("_")) - 1])
+ print("evaluating on dev ", model_file, model_step)
+ evaluate(sess, dev_data, batch_size, graph, model_step)
+ else:
+ ckpt = tf.train.get_checkpoint_state(model_dir)
+ print("model dir: ", model_dir)
+ if (not (tf.gfile.IsDirectory(utility.FLAGS.output_dir))):
+ print("create dir: ", utility.FLAGS.output_dir)
+ tf.gfile.MkDir(utility.FLAGS.output_dir)
+ if (not (tf.gfile.IsDirectory(model_dir))):
+ print("create dir: ", model_dir)
+ tf.gfile.MkDir(model_dir)
+ Train(graph, utility, batch_size, train_data, sess, model_dir,
+ saver)
+
+def main(args):
+ utility = Utility()
+ train_name = "random-split-1-train.examples"
+ dev_name = "random-split-1-dev.examples"
+ test_name = "pristine-unseen-tables.examples"
+ #load data
+ dat = wiki_data.WikiQuestionGenerator(train_name, dev_name, test_name, FLAGS.data_dir)
+ train_data, dev_data, test_data = dat.load()
+ utility.words = []
+ utility.word_ids = {}
+ utility.reverse_word_ids = {}
+ #construct vocabulary
+ data_utils.construct_vocab(train_data, utility)
+ data_utils.construct_vocab(dev_data, utility, True)
+ data_utils.construct_vocab(test_data, utility, True)
+ data_utils.add_special_words(utility)
+ data_utils.perform_word_cutoff(utility)
+ #convert data to int format and pad the inputs
+ train_data = data_utils.complete_wiki_processing(train_data, utility, True)
+ dev_data = data_utils.complete_wiki_processing(dev_data, utility, False)
+ test_data = data_utils.complete_wiki_processing(test_data, utility, False)
+ print("# train examples ", len(train_data))
+ print("# dev examples ", len(dev_data))
+ print("# test examples ", len(test_data))
+ print("running open source")
+ #construct TF graph and train or evaluate
+ master(train_data, dev_data, utility)
+
+
+if __name__ == "__main__":
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_programmer/nn_utils.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_programmer/nn_utils.py"
new file mode 100644
index 00000000..2f3a1a98
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_programmer/nn_utils.py"
@@ -0,0 +1,68 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Author: aneelakantan (Arvind Neelakantan)
+"""
+
+import tensorflow as tf
+
+def get_embedding(word, utility, params):
+ return tf.nn.embedding_lookup(params["word"], word)
+
+
+def apply_dropout(x, dropout_rate, mode):
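+ # Note: dropout_rate is used as a keep probability here (tf.nn.dropout in
+ # TF 1.x takes keep_prob), matching the "keep probability" flags
+ # FLAGS.dropout and FLAGS.rnn_dropout in neural_programmer.py.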
+ if (dropout_rate > 0.0):
+ if (mode == "train"):
+ x = tf.nn.dropout(x, dropout_rate)
+ else:
+ x = x
+ return x
+
+
+def LSTMCell(x, mprev, cprev, key, params):
+ """Create an LSTM cell.
+
+ Implements the equations in pg.2 from
+ "Long Short-Term Memory Based Recurrent Neural Network Architectures
+ For Large Vocabulary Speech Recognition",
+ Hasim Sak, Andrew Senior, Francoise Beaufays.
+
+ Args:
+ x: Inputs to this cell.
+ mprev: m_{t-1}, the recurrent activations (same as the output)
+ from the previous cell.
+ cprev: c_{t-1}, the cell activations from the previous cell.
+ key: Prefix used to look up this cell's weights and biases in params
+ (e.g. params[key + "_ix"]).
+ params: A dictionary of the weights and biases of the model.
+
+ Returns:
+ m: Outputs of this cell.
+ c: Cell Activations.
+ """
+
+ i = tf.matmul(x, params[key + "_ix"]) + tf.matmul(mprev, params[key + "_im"])
+ i = tf.nn.bias_add(i, params[key + "_i"])
+ f = tf.matmul(x, params[key + "_fx"]) + tf.matmul(mprev, params[key + "_fm"])
+ f = tf.nn.bias_add(f, params[key + "_f"])
+ c = tf.matmul(x, params[key + "_cx"]) + tf.matmul(mprev, params[key + "_cm"])
+ c = tf.nn.bias_add(c, params[key + "_c"])
+ o = tf.matmul(x, params[key + "_ox"]) + tf.matmul(mprev, params[key + "_om"])
+ o = tf.nn.bias_add(o, params[key + "_o"])
+ i = tf.sigmoid(i, name="i_gate")
+ f = tf.sigmoid(f, name="f_gate")
+ o = tf.sigmoid(o, name="o_gate")
+ c = f * cprev + i * tf.tanh(c)
+ m = o * c
+ return m, c
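+
+# Illustrative usage (assuming params were created by parameters.Parameters
+# with key "question_lstm"):
+#   m_t, c_t = LSTMCell(x_t, m_prev, c_prev, "question_lstm", params)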
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_programmer/parameters.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_programmer/parameters.py"
new file mode 100644
index 00000000..c576ae82
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_programmer/parameters.py"
@@ -0,0 +1,89 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Author: aneelakantan (Arvind Neelakantan)
+"""
+
+import numpy as np
+import tensorflow as tf
+
+
+class Parameters:
+
+ def __init__(self, u):
+ self.utility = u
+ self.init_seed_counter = 0
+ self.word_init = {}
+
+ def parameters(self, utility):
+ params = {}
+ inits = []
+ embedding_dims = self.utility.FLAGS.embedding_dims
+ params["unit"] = tf.Variable(
+ self.RandomUniformInit([len(utility.operations_set), embedding_dims]))
+ params["word"] = tf.Variable(
+ self.RandomUniformInit([utility.FLAGS.vocab_size, embedding_dims]))
+ params["word_match_feature_column_name"] = tf.Variable(
+ self.RandomUniformInit([1]))
+ params["controller"] = tf.Variable(
+ self.RandomUniformInit([2 * embedding_dims, embedding_dims]))
+ params["column_controller"] = tf.Variable(
+ self.RandomUniformInit([2 * embedding_dims, embedding_dims]))
+ params["column_controller_prev"] = tf.Variable(
+ self.RandomUniformInit([embedding_dims, embedding_dims]))
+ params["controller_prev"] = tf.Variable(
+ self.RandomUniformInit([embedding_dims, embedding_dims]))
+ global_step = tf.Variable(1, name="global_step")
+ #weights of the question and history RNN (or LSTM)
+ key_list = ["question_lstm"]
+ for key in key_list:
+ # Weights going from inputs to nodes.
+ for wgts in ["ix", "fx", "cx", "ox"]:
+ params[key + "_" + wgts] = tf.Variable(
+ self.RandomUniformInit([embedding_dims, embedding_dims]))
+ # Weights going from nodes to nodes.
+ for wgts in ["im", "fm", "cm", "om"]:
+ params[key + "_" + wgts] = tf.Variable(
+ self.RandomUniformInit([embedding_dims, embedding_dims]))
+ #Biases for the gates and cell
+ for bias in ["i", "f", "c", "o"]:
+ if (bias == "f"):
+ print("forget gate bias")
+ params[key + "_" + bias] = tf.Variable(
+ tf.random_uniform([embedding_dims], 1.0, 1.1, self.utility.
+ tf_data_type[self.utility.FLAGS.data_type]))
+ else:
+ params[key + "_" + bias] = tf.Variable(
+ self.RandomUniformInit([embedding_dims]))
+ params["history_recurrent"] = tf.Variable(
+ self.RandomUniformInit([3 * embedding_dims, embedding_dims]))
+ params["history_recurrent_bias"] = tf.Variable(
+ self.RandomUniformInit([1, embedding_dims]))
+ params["break_conditional"] = tf.Variable(
+ self.RandomUniformInit([2 * embedding_dims, embedding_dims]))
+ init = tf.global_variables_initializer()
+ return params, global_step, init
+
+ def RandomUniformInit(self, shape):
+ """Returns a RandomUniform Tensor between -param_init and param_init."""
+ param_seed = self.utility.FLAGS.param_seed
+ self.init_seed_counter += 1
+ return tf.random_uniform(
+ shape, -1.0 *
+ (np.float32(self.utility.FLAGS.param_init)
+ ).astype(self.utility.np_data_type[self.utility.FLAGS.data_type]),
+ (np.float32(self.utility.FLAGS.param_init)
+ ).astype(self.utility.np_data_type[self.utility.FLAGS.data_type]),
+ self.utility.tf_data_type[self.utility.FLAGS.data_type],
+ param_seed + self.init_seed_counter)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_programmer/wiki_data.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_programmer/wiki_data.py"
new file mode 100644
index 00000000..c91637ca
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/neural_programmer/wiki_data.py"
@@ -0,0 +1,532 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Loads the WikiQuestions dataset.
+
+An example consists of a question and a table. Additionally, we store the
+processed columns, which hold the entries after performing number, date and
+other preprocessing as done in the baseline.
+Columns, column names and processed columns are split into word and number
+columns.
+The lookup answer (or matrix) is also split into a number and a word lookup
+matrix.
+Author: aneelakantan (Arvind Neelakantan)
+"""
+from __future__ import print_function
+
+import math
+import os
+import re
+import numpy as np
+import unicodedata as ud
+import tensorflow as tf
+
+bad_number = -200000.0 #number that is added to a corrupted table entry in a number column
+
+def is_nan_or_inf(number):
+ return math.isnan(number) or math.isinf(number)
+
+def strip_accents(s):
+ u = unicode(s, "utf-8")
+ u_new = ''.join(c for c in ud.normalize('NFKD', u) if ud.category(c) != 'Mn')
+ return u_new.encode("utf-8")
+
+
+def correct_unicode(string):
+ string = strip_accents(string)
+ string = re.sub("\xc2\xa0", " ", string).strip()
+ string = re.sub("\xe2\x80\x93", "-", string).strip()
+ #string = re.sub(ur'[\u0300-\u036F]', "", string)
+ string = re.sub("‚", ",", string)
+ string = re.sub("…", "...", string)
+ #string = re.sub("[·・]", ".", string)
+ string = re.sub("ˆ", "^", string)
+ string = re.sub("˜", "~", string)
+ string = re.sub("‹", "<", string)
+ string = re.sub("›", ">", string)
+ #string = re.sub("[‘’´`]", "'", string)
+ #string = re.sub("[“â€Â«Â»]", "\"", string)
+ #string = re.sub("[•†‡]", "", string)
+ #string = re.sub("[â€â€‘–—]", "-", string)
+ string = re.sub(r'[\u2E00-\uFFFF]', "", string)
+ string = re.sub("\\s+", " ", string).strip()
+ return string
+
+
+def simple_normalize(string):
+ string = correct_unicode(string)
+ # Citations
+ string = re.sub("\[(nb ?)?\d+\]", "", string)
+ string = re.sub("\*+$", "", string)
+ # Year in parenthesis
+ string = re.sub("\(\d* ?-? ?\d*\)", "", string)
+ string = re.sub("^\"(.*)\"$", "", string)
+ return string
+
+
+def full_normalize(string):
+ #print "an: ", string
+ string = simple_normalize(string)
+ # Remove trailing info in brackets
+ string = re.sub("\[[^\]]*\]", "", string)
+ # Remove most unicode characters in other languages
+ string = re.sub(r'[\u007F-\uFFFF]', "", string.strip())
+ # Remove trailing info in parenthesis
+ string = re.sub("\([^)]*\)$", "", string.strip())
+ string = final_normalize(string)
+ # Get rid of question marks
+ string = re.sub("\?", "", string).strip()
+ # Get rid of trailing colons (usually occur in column titles)
+ string = re.sub("\:$", " ", string).strip()
+ # Get rid of slashes
+ string = re.sub(r"/", " ", string).strip()
+ string = re.sub(r"\\", " ", string).strip()
+ # Replace colon, slash, and dash with space
+ # Note: need better replacement for this when parsing time
+ string = re.sub(r"\:", " ", string).strip()
+ string = re.sub("/", " ", string).strip()
+ string = re.sub("-", " ", string).strip()
+ # Convert empty strings to UNK
+ # Important to do this last or near last
+ if not string:
+ string = "UNK"
+ return string
+
+def final_normalize(string):
+ # Remove leading and trailing whitespace
+ string = re.sub("\\s+", " ", string).strip()
+ # Convert entirely to lowercase
+ string = string.lower()
+ # Get rid of strangely escaped newline characters
+ string = re.sub("\\\\n", " ", string).strip()
+ # Get rid of quotation marks
+ string = re.sub(r"\"", "", string).strip()
+ string = re.sub(r"\'", "", string).strip()
+ string = re.sub(r"`", "", string).strip()
+ # Get rid of *
+ string = re.sub("\*", "", string).strip()
+ return string
+
+def is_number(x):
+ try:
+ f = float(x)
+ return not is_nan_or_inf(f)
+ except ValueError:
+ return False
+ except TypeError:
+ return False
+
+
+class WikiExample(object):
+
+ def __init__(self, id, question, answer, table_key):
+ self.question_id = id
+ self.question = question
+ self.answer = answer
+ self.table_key = table_key
+ self.lookup_matrix = []
+ self.is_bad_example = False
+ self.is_word_lookup = False
+ self.is_ambiguous_word_lookup = False
+ self.is_number_lookup = False
+ self.is_number_calc = False
+ self.is_unknown_answer = False
+
+
+class TableInfo(object):
+
+ def __init__(self, word_columns, word_column_names, word_column_indices,
+ number_columns, number_column_names, number_column_indices,
+ processed_word_columns, processed_number_columns, orig_columns):
+ self.word_columns = word_columns
+ self.word_column_names = word_column_names
+ self.word_column_indices = word_column_indices
+ self.number_columns = number_columns
+ self.number_column_names = number_column_names
+ self.number_column_indices = number_column_indices
+ self.processed_word_columns = processed_word_columns
+ self.processed_number_columns = processed_number_columns
+ self.orig_columns = orig_columns
+
+
+class WikiQuestionLoader(object):
+
+ def __init__(self, data_name, root_folder):
+ self.root_folder = root_folder
+ self.data_folder = os.path.join(self.root_folder, "data")
+ self.examples = []
+ self.data_name = data_name
+
+ def num_questions(self):
+ return len(self.examples)
+
+ def load_qa(self):
+ data_source = os.path.join(self.data_folder, self.data_name)
+ f = tf.gfile.GFile(data_source, "r")
+ id_regex = re.compile("\(id ([^\)]*)\)")
+ for line in f:
+ id_match = id_regex.search(line)
+ id = id_match.group(1)
+ self.examples.append(id)
+
+ def load(self):
+ self.load_qa()
+
+
+def is_date(word):
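+ # Treats 10-character "YYYY-MM-DD"-style strings as dates; digits may be
+ # replaced by X/x wildcards (e.g. "1990-07-14" or "199x-xx-xx").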
+ if (not (bool(re.search("[a-z0-9]", word, re.IGNORECASE)))):
+ return False
+ if (len(word) != 10):
+ return False
+ if (word[4] != "-"):
+ return False
+ if (word[7] != "-"):
+ return False
+ for i in range(len(word)):
+ if (not (word[i] == "X" or word[i] == "x" or word[i] == "-" or re.search(
+ "[0-9]", word[i]))):
+ return False
+ return True
+
+
+class WikiQuestionGenerator(object):
+
+ def __init__(self, train_name, dev_name, test_name, root_folder):
+ self.train_name = train_name
+ self.dev_name = dev_name
+ self.test_name = test_name
+ self.train_loader = WikiQuestionLoader(train_name, root_folder)
+ self.dev_loader = WikiQuestionLoader(dev_name, root_folder)
+ self.test_loader = WikiQuestionLoader(test_name, root_folder)
+ self.bad_examples = 0
+ self.root_folder = root_folder
+ self.data_folder = os.path.join(self.root_folder, "annotated/data")
+ self.annotated_examples = {}
+ self.annotated_tables = {}
+ self.annotated_word_reject = {}
+ self.annotated_word_reject["-lrb-"] = 1
+ self.annotated_word_reject["-rrb-"] = 1
+ self.annotated_word_reject["UNK"] = 1
+
+ def is_money(self, word):
+ if (not (bool(re.search("[a-z0-9]", word, re.IGNORECASE)))):
+ return False
+ for i in range(len(word)):
+ if (not (word[i] == "E" or word[i] == "." or re.search("[0-9]",
+ word[i]))):
+ return False
+ return True
+
+ def remove_consecutive(self, ner_tags, ner_values):
+ for i in range(len(ner_tags)):
+ if ((ner_tags[i] == "NUMBER" or ner_tags[i] == "MONEY" or
+ ner_tags[i] == "PERCENT" or ner_tags[i] == "DATE") and
+ i + 1 < len(ner_tags) and ner_tags[i] == ner_tags[i + 1] and
+ ner_values[i] == ner_values[i + 1] and ner_values[i] != ""):
+ word = ner_values[i]
+ word = word.replace(">", "").replace("<", "").replace("=", "").replace(
+ "%", "").replace("~", "").replace("$", "").replace("£", "").replace(
+ "€", "")
+ if (re.search("[A-Z]", word) and not (is_date(word)) and not (
+ self.is_money(word))):
+ ner_values[i] = "A"
+ else:
+ ner_values[i] = ","
+ return ner_tags, ner_values
+
+ def pre_process_sentence(self, tokens, ner_tags, ner_values):
+ sentence = []
+ tokens = tokens.split("|")
+ ner_tags = ner_tags.split("|")
+ ner_values = ner_values.split("|")
+ ner_tags, ner_values = self.remove_consecutive(ner_tags, ner_values)
+ #print "old: ", tokens
+ for i in range(len(tokens)):
+ word = tokens[i]
+ if (ner_values[i] != "" and
+ (ner_tags[i] == "NUMBER" or ner_tags[i] == "MONEY" or
+ ner_tags[i] == "PERCENT" or ner_tags[i] == "DATE")):
+ word = ner_values[i]
+ word = word.replace(">", "").replace("<", "").replace("=", "").replace(
+ "%", "").replace("~", "").replace("$", "").replace("£", "").replace(
+ "€", "")
+ if (re.search("[A-Z]", word) and not (is_date(word)) and not (
+ self.is_money(word))):
+ word = tokens[i]
+ if (is_number(ner_values[i])):
+ word = float(ner_values[i])
+ elif (is_number(word)):
+ word = float(word)
+ if (tokens[i] == "score"):
+ word = "score"
+ if (is_number(word)):
+ word = float(word)
+ if (not (self.annotated_word_reject.has_key(word))):
+ if (is_number(word) or is_date(word) or self.is_money(word)):
+ sentence.append(word)
+ else:
+ word = full_normalize(word)
+ if (not (self.annotated_word_reject.has_key(word)) and
+ bool(re.search("[a-z0-9]", word, re.IGNORECASE))):
+ m = re.search(",", word)
+ sentence.append(word.replace(",", ""))
+ if (len(sentence) == 0):
+ sentence.append("UNK")
+ return sentence
+
+ def load_annotated_data(self, in_file):
+ self.annotated_examples = {}
+ self.annotated_tables = {}
+ f = tf.gfile.GFile(in_file, "r")
+ counter = 0
+ for line in f:
+ if (counter > 0):
+ line = line.strip()
+ (question_id, utterance, context, target_value, tokens, lemma_tokens,
+ pos_tags, ner_tags, ner_values, target_canon) = line.split("\t")
+ question = self.pre_process_sentence(tokens, ner_tags, ner_values)
+ target_canon = target_canon.split("|")
+ self.annotated_examples[question_id] = WikiExample(
+ question_id, question, target_canon, context)
+ self.annotated_tables[context] = []
+ counter += 1
+ print("Annotated examples loaded ", len(self.annotated_examples))
+ f.close()
+
+ def is_number_column(self, a):
+ for w in a:
+ if (len(w) != 1):
+ return False
+ if (not (is_number(w[0]))):
+ return False
+ return True
+
+ def convert_table(self, table):
+ answer = []
+ for i in range(len(table)):
+ temp = []
+ for j in range(len(table[i])):
+ temp.append(" ".join([str(w) for w in table[i][j]]))
+ answer.append(temp)
+ return answer
+
+ def load_annotated_tables(self):
+ for table in self.annotated_tables.keys():
+ annotated_table = table.replace("csv", "annotated")
+ orig_columns = []
+ processed_columns = []
+ f = tf.gfile.GFile(os.path.join(self.root_folder, annotated_table), "r")
+ counter = 0
+ for line in f:
+ if (counter > 0):
+ line = line.strip()
+ line = line + "\t" * (13 - len(line.split("\t")))
+ (row, col, read_id, content, tokens, lemma_tokens, pos_tags, ner_tags,
+ ner_values, number, date, num2, read_list) = line.split("\t")
+ counter += 1
+ f.close()
+ max_row = int(row)
+ max_col = int(col)
+ for i in range(max_col + 1):
+ orig_columns.append([])
+ processed_columns.append([])
+ for j in range(max_row + 1):
+ orig_columns[i].append(bad_number)
+ processed_columns[i].append(bad_number)
+ #print orig_columns
+ f = tf.gfile.GFile(os.path.join(self.root_folder, annotated_table), "r")
+ counter = 0
+ column_names = []
+ for line in f:
+ if (counter > 0):
+ line = line.strip()
+ line = line + "\t" * (13 - len(line.split("\t")))
+ (row, col, read_id, content, tokens, lemma_tokens, pos_tags, ner_tags,
+ ner_values, number, date, num2, read_list) = line.split("\t")
+ entry = self.pre_process_sentence(tokens, ner_tags, ner_values)
+ if (row == "-1"):
+ column_names.append(entry)
+ else:
+ orig_columns[int(col)][int(row)] = entry
+ if (len(entry) == 1 and is_number(entry[0])):
+ processed_columns[int(col)][int(row)] = float(entry[0])
+ else:
+ for single_entry in entry:
+ if (is_number(single_entry)):
+ processed_columns[int(col)][int(row)] = float(single_entry)
+ break
+ nt = ner_tags.split("|")
+ nv = ner_values.split("|")
+ for i_entry in range(len(tokens.split("|"))):
+ if (nt[i_entry] == "DATE" and
+ is_number(nv[i_entry].replace("-", "").replace("X", ""))):
+ processed_columns[int(col)][int(row)] = float(nv[
+ i_entry].replace("-", "").replace("X", ""))
+ #processed_columns[int(col)][int(row)] = float(nv[i_entry])
+ if (len(entry) == 1 and (is_number(entry[0]) or is_date(entry[0]) or
+ self.is_money(entry[0]))):
+ if (len(entry) == 1 and not (is_number(entry[0])) and
+ is_date(entry[0])):
+ entry[0] = entry[0].replace("X", "x")
+ counter += 1
+ word_columns = []
+ processed_word_columns = []
+ word_column_names = []
+ word_column_indices = []
+ number_columns = []
+ processed_number_columns = []
+ number_column_names = []
+ number_column_indices = []
+ for i in range(max_col + 1):
+ if (self.is_number_column(orig_columns[i])):
+ number_column_indices.append(i)
+ number_column_names.append(column_names[i])
+ temp = []
+ for w in orig_columns[i]:
+ if (is_number(w[0])):
+ temp.append(w[0])
+ number_columns.append(temp)
+ processed_number_columns.append(processed_columns[i])
+ else:
+ word_column_indices.append(i)
+ word_column_names.append(column_names[i])
+ word_columns.append(orig_columns[i])
+ processed_word_columns.append(processed_columns[i])
+ table_info = TableInfo(
+ word_columns, word_column_names, word_column_indices, number_columns,
+ number_column_names, number_column_indices, processed_word_columns,
+ processed_number_columns, orig_columns)
+ self.annotated_tables[table] = table_info
+ f.close()
+
+ def answer_classification(self):
+ lookup_questions = 0
+ number_lookup_questions = 0
+ word_lookup_questions = 0
+ ambiguous_lookup_questions = 0
+ number_questions = 0
+ bad_questions = 0
+ ice_bad_questions = 0
+ tot = 0
+ got = 0
+ ice = {}
+ with tf.gfile.GFile(
+ self.root_folder + "/arvind-with-norms-2.tsv", mode="r") as f:
+ lines = f.readlines()
+ for line in lines:
+ line = line.strip()
+ if (not (self.annotated_examples.has_key(line.split("\t")[0]))):
+ continue
+ if (len(line.split("\t")) == 4):
+ line = line + "\t" * (5 - len(line.split("\t")))
+ if (not (is_number(line.split("\t")[2]))):
+ ice_bad_questions += 1
+ (example_id, ans_index, ans_raw, process_answer,
+ matched_cells) = line.split("\t")
+ if (ice.has_key(example_id)):
+ ice[example_id].append(line.split("\t"))
+ else:
+ ice[example_id] = [line.split("\t")]
+ for q_id in self.annotated_examples.keys():
+ tot += 1
+ example = self.annotated_examples[q_id]
+ table_info = self.annotated_tables[example.table_key]
+ # Figure out if the answer is numerical or lookup
+ n_cols = len(table_info.orig_columns)
+ n_rows = len(table_info.orig_columns[0])
+ example.lookup_matrix = np.zeros((n_rows, n_cols))
+ exact_matches = {}
+ for (example_id, ans_index, ans_raw, process_answer,
+ matched_cells) in ice[q_id]:
+ for match_cell in matched_cells.split("|"):
+ if (len(match_cell.split(",")) == 2):
+ (row, col) = match_cell.split(",")
+ row = int(row)
+ col = int(col)
+ if (row >= 0):
+ exact_matches[ans_index] = 1
+ answer_is_in_table = len(exact_matches) == len(example.answer)
+ if (answer_is_in_table):
+ for (example_id, ans_index, ans_raw, process_answer,
+ matched_cells) in ice[q_id]:
+ for match_cell in matched_cells.split("|"):
+ if (len(match_cell.split(",")) == 2):
+ (row, col) = match_cell.split(",")
+ row = int(row)
+ col = int(col)
+ example.lookup_matrix[row, col] = float(ans_index) + 1.0
+ example.lookup_number_answer = 0.0
+ if (answer_is_in_table):
+ lookup_questions += 1
+ if len(example.answer) == 1 and is_number(example.answer[0]):
+ example.number_answer = float(example.answer[0])
+ number_lookup_questions += 1
+ example.is_number_lookup = True
+ else:
+ #print "word lookup"
+ example.calc_answer = example.number_answer = 0.0
+ word_lookup_questions += 1
+ example.is_word_lookup = True
+ else:
+ if (len(example.answer) == 1 and is_number(example.answer[0])):
+ example.number_answer = example.answer[0]
+ example.is_number_calc = True
+ else:
+ bad_questions += 1
+ example.is_bad_example = True
+ example.is_unknown_answer = True
+ example.is_lookup = example.is_word_lookup or example.is_number_lookup
+ if not example.is_word_lookup and not example.is_bad_example:
+ number_questions += 1
+ example.calc_answer = example.answer[0]
+ example.lookup_number_answer = example.calc_answer
+ # Split up the lookup matrix into word part and number part
+ number_column_indices = table_info.number_column_indices
+ word_column_indices = table_info.word_column_indices
+ example.word_columns = table_info.word_columns
+ example.number_columns = table_info.number_columns
+ example.word_column_names = table_info.word_column_names
+ example.processed_number_columns = table_info.processed_number_columns
+ example.processed_word_columns = table_info.processed_word_columns
+ example.number_column_names = table_info.number_column_names
+ example.number_lookup_matrix = example.lookup_matrix[:,
+ number_column_indices]
+ example.word_lookup_matrix = example.lookup_matrix[:, word_column_indices]
+
+ def load(self):
+ train_data = []
+ dev_data = []
+ test_data = []
+ self.load_annotated_data(
+ os.path.join(self.data_folder, "training.annotated"))
+ self.load_annotated_tables()
+ self.answer_classification()
+ self.train_loader.load()
+ self.dev_loader.load()
+ for i in range(self.train_loader.num_questions()):
+ example = self.train_loader.examples[i]
+ example = self.annotated_examples[example]
+ train_data.append(example)
+ for i in range(self.dev_loader.num_questions()):
+ example = self.dev_loader.examples[i]
+ dev_data.append(self.annotated_examples[example])
+
+ self.load_annotated_data(
+ os.path.join(self.data_folder, "pristine-unseen-tables.annotated"))
+ self.load_annotated_tables()
+ self.answer_classification()
+ self.test_loader.load()
+ for i in range(self.test_loader.num_questions()):
+ example = self.test_loader.examples[i]
+ test_data.append(self.annotated_examples[example])
+ return train_data, dev_data, test_data
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/README.md"
new file mode 100644
index 00000000..19bf0577
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/README.md"
@@ -0,0 +1,85 @@
+Visual Dynamics: Probabilistic Future Frame Synthesis via Cross Convolutional Networks.
+
+Introduction
+
+https://arxiv.org/pdf/1607.02586v1.pdf
+
+This is an implementation based on my understanding, with small
+variations. It doesn't necessarily represent the paper published
+by the original authors.
+
+Authors: Xin Pan, Anelia Angelova
+
+Results:
+
+(Result images: see next_frame_prediction/g3doc/cross_conv.png,
+cross_conv2.png, cross_conv3.png.)
+
+Prerequisites:
+
+1. Install TensorFlow (r0.12), Bazel.
+
+2. Download the Sprites dataset or generate the moving object dataset.
+
+Sprites data is located here:
+
+http://www.scottreed.info/files/nips2015-analogy-data.tar.gz
+
+Convert .mat files into images and use sprites_gen.py to convert them
+to tf.SequenceExample.
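+
+For the generated 2d objects dataset, a possible way to write the tfrecords
+with the example_gen binary in this directory (paths are illustrative) is:
+
+```shell
+$ bazel-bin/next_frame_prediction/cross_conv/example_gen \
+ --out_file=data/tfrecords
+```
+
+This writes both out_file and out_file + '_test' (the test split used in the
+eval example below).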
+
+How to run:
+
+```shell
+$ ls -R
+.:
+data next_frame_prediction WORKSPACE
+
+./data:
+tfrecords tfrecords_test
+
+./next_frame_prediction:
+cross_conv g3doc README.md
+
+./next_frame_prediction/cross_conv:
+BUILD eval.py objects_gen.py model.py reader.py sprites_gen.py train.py
+
+./next_frame_prediction/g3doc:
+cross_conv2.png cross_conv3.png cross_conv.png
+
+
+# Build everything.
+$ bazel build -c opt next_frame_prediction/...
+
+# The following example runs on the generated 2d objects dataset.
+# For Sprites dataset, image_size should be 60, norm_scale should be 255.0.
+# Batch size is normally 16~64, depending on your memory size.
+
+# Run training.
+$ bazel-bin/next_frame_prediction/cross_conv/train \
+ --batch_size=1 \
+ --data_filepattern=data/tfrecords \
+ --image_size=64 \
+ --log_root=/tmp/predict
+
+step: 1, loss: 24.428671
+step: 2, loss: 19.211605
+step: 3, loss: 5.543143
+step: 4, loss: 3.035339
+step: 5, loss: 1.771392
+step: 6, loss: 2.099824
+step: 7, loss: 1.747665
+step: 8, loss: 1.572436
+step: 9, loss: 1.586816
+step: 10, loss: 1.434191
+
+# Run eval.
+$ bazel-bin/next_frame_prediction/cross_conv/eval \
+ --batch_size=1 \
+ --data_filepattern=data/tfrecords_test \
+ --image_size=64 \
+ --log_root=/tmp/predict
+```
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/cross_conv/BUILD" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/cross_conv/BUILD"
new file mode 100644
index 00000000..b435087f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/cross_conv/BUILD"
@@ -0,0 +1,48 @@
+licenses(["notice"]) # Apache 2.0
+
+package_group(
+ name = "internal",
+ packages = [
+ "//next_frame_prediction/...",
+ ],
+)
+
+package(default_visibility = [":internal"])
+
+py_library(
+ name = "model",
+ srcs = ["model.py"],
+)
+
+py_library(
+ name = "reader",
+ srcs = ["reader.py"],
+)
+
+py_binary(
+ name = "train",
+ srcs = ["train.py"],
+ deps = [
+ ":model",
+ ":reader",
+ ],
+)
+
+py_binary(
+ name = "eval",
+ srcs = ["eval.py"],
+ deps = [
+ ":model",
+ ":reader",
+ ],
+)
+
+py_binary(
+ name = "example_gen",
+ srcs = ["example_gen.py"],
+)
+
+py_binary(
+ name = "sprites_gen",
+ srcs = ["sprites_gen.py"],
+)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/cross_conv/eval.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/cross_conv/eval.py"
new file mode 100644
index 00000000..17ebc0e0
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/cross_conv/eval.py"
@@ -0,0 +1,119 @@
+# Copyright 2016 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Eval Cross Convolutional Model."""
+import io
+import os
+import sys
+import time
+
+import numpy as np
+from six.moves import xrange
+import tensorflow as tf
+
+import model as cross_conv_model
+import reader
+
+FLAGS = tf.flags.FLAGS
+tf.flags.DEFINE_string('log_root', '/tmp/moving_obj', 'The root dir of output.')
+tf.flags.DEFINE_string('data_filepattern',
+ 'est',
+ 'training data file pattern.')
+tf.flags.DEFINE_integer('batch_size', 1, 'Batch size.')
+tf.flags.DEFINE_integer('image_size', 64, 'Image height and width.')
+tf.flags.DEFINE_float('norm_scale', 1.0, 'Normalize the original image')
+tf.flags.DEFINE_float('scale', 10.0,
+ 'Scale the image after norm_scale and move the diff '
+ 'to the positive realm.')
+tf.flags.DEFINE_integer('sequence_length', 2, 'tf.SequenceExample length.')
+tf.flags.DEFINE_integer('eval_batch_count', 100,
+ 'Average the result over this number of batches.')
+tf.flags.DEFINE_bool('l2_loss', True, 'If true, include l2_loss.')
+tf.flags.DEFINE_bool('reconstr_loss', False, 'If true, include reconstr_loss.')
+tf.flags.DEFINE_bool('kl_loss', True, 'If true, include KL loss.')
+
+slim = tf.contrib.slim
+
+
+def _Eval():
+ params = dict()
+ params['batch_size'] = FLAGS.batch_size
+ params['seq_len'] = FLAGS.sequence_length
+ params['image_size'] = FLAGS.image_size
+ params['is_training'] = False
+ params['norm_scale'] = FLAGS.norm_scale
+ params['scale'] = FLAGS.scale
+ params['l2_loss'] = FLAGS.l2_loss
+ params['reconstr_loss'] = FLAGS.reconstr_loss
+ params['kl_loss'] = FLAGS.kl_loss
+
+ eval_dir = os.path.join(FLAGS.log_root, 'eval')
+
+ images = reader.ReadInput(
+ FLAGS.data_filepattern, shuffle=False, params=params)
+ images *= params['scale']
+ # Increasing this value makes training much faster.
+ image_diff_list = reader.SequenceToImageAndDiff(images)
+ model = cross_conv_model.CrossConvModel(image_diff_list, params)
+ model.Build()
+
+ summary_writer = tf.summary.FileWriter(eval_dir)
+ saver = tf.train.Saver()
+ sess = tf.Session('', config=tf.ConfigProto(allow_soft_placement=True))
+ tf.train.start_queue_runners(sess)
+
+ while True:
+ time.sleep(60)
+ try:
+ ckpt_state = tf.train.get_checkpoint_state(FLAGS.log_root)
+ except tf.errors.OutOfRangeError as e:
+ sys.stderr.write('Cannot restore checkpoint: %s\n' % e)
+ continue
+ if not (ckpt_state and ckpt_state.model_checkpoint_path):
+ sys.stderr.write('No model to eval yet at %s\n' % FLAGS.log_root)
+ continue
+ sys.stderr.write('Loading checkpoint %s\n' %
+ ckpt_state.model_checkpoint_path)
+ saver.restore(sess, ckpt_state.model_checkpoint_path)
+ # Use the empirical distribution of z from training set.
+ if not tf.gfile.Exists(os.path.join(FLAGS.log_root, 'z_mean.npy')):
+ sys.stderr.write('No z at %s\n' % FLAGS.log_root)
+ continue
+
+ with tf.gfile.Open(os.path.join(FLAGS.log_root, 'z_mean.npy')) as f:
+ sample_z_mean = np.load(io.BytesIO(f.read()))
+ with tf.gfile.Open(
+ os.path.join(FLAGS.log_root, 'z_stddev_log.npy')) as f:
+ sample_z_stddev_log = np.load(io.BytesIO(f.read()))
+
+ total_loss = 0.0
+ for _ in xrange(FLAGS.eval_batch_count):
+ loss_val, total_steps, summaries = sess.run(
+ [model.loss, model.global_step, model.summary_op],
+ feed_dict={model.z_mean: sample_z_mean,
+ model.z_stddev_log: sample_z_stddev_log})
+ total_loss += loss_val
+
+ summary_writer.add_summary(summaries, total_steps)
+ sys.stderr.write('steps: %d, loss: %f\n' %
+ (total_steps, total_loss / FLAGS.eval_batch_count))
+
+
+def main(_):
+ _Eval()
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/cross_conv/example_gen.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/cross_conv/example_gen.py"
new file mode 100644
index 00000000..bcda0bc4
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/cross_conv/example_gen.py"
@@ -0,0 +1,93 @@
+# Copyright 2016 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Generate examples of two objects moving in different directions."""
+import random
+import sys
+
+import numpy as np
+from six.moves import xrange
+import tensorflow as tf
+
+
+tf.flags.DEFINE_string('out_file', '',
+ 'Output file for the tfrecords.')
+
+
+def _add_object(obj_type, image, image2, xpos, ypos):
+ """Add a moving obj to two consecutive images."""
+ obj_size = random.randint(8, 10)
+ channel = random.randint(0, 2)
+ move = random.randint(6, 10)
+
+ obj = np.zeros([obj_size, obj_size, 3])
+ if obj_type == 'rectangle':
+ xpos2 = xpos + move
+ ypos2 = ypos
+ for i in xrange(obj_size):
+ obj[i, 0:i+1, channel] = [1.0 for _ in xrange(i+1)]
+ elif obj_type == 'square':
+ xpos2 = xpos
+ ypos2 = ypos + move
+ obj[:, :, channel] = 1.0
+
+ for x in xrange(obj_size):
+ for y in xrange(obj_size):
+ if obj[x, y, channel] == 1.0:
+ image[xpos+x, ypos+y, channel] = 1.0
+ image2[xpos2+x, ypos2+y, channel] = 1.0
+
+
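+# Each SequenceExample holds two consecutive 64x64x3 frames, flattened, under
+# the 'moving_objs' feature list.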
+def _images_to_example(image, image2):
+ """Convert two consecutive images to SequenceExample."""
+ example = tf.SequenceExample()
+ feature_list = example.feature_lists.feature_list['moving_objs']
+ feature = feature_list.feature.add()
+ feature.float_list.value.extend(np.reshape(image, [-1]).tolist())
+ feature = feature_list.feature.add()
+ feature.float_list.value.extend(np.reshape(image2, [-1]).tolist())
+ return example
+
+
+def generate_input():
+ """Generate tfrecords."""
+ writer = tf.python_io.TFRecordWriter(tf.flags.FLAGS.out_file)
+ writer2 = tf.python_io.TFRecordWriter(tf.flags.FLAGS.out_file + '_test')
+
+ examples = []
+ for xpos in xrange(0, 40, 3):
+ for ypos in xrange(0, 40, 3):
+ for xpos2 in xrange(0, 40, 3):
+ for ypos2 in xrange(0, 40, 3):
+ image = np.zeros([64, 64, 3])
+ image2 = np.zeros([64, 64, 3])
+ _add_object('rectangle', image, image2, xpos, ypos)
+ _add_object('square', image, image2, xpos2, ypos2)
+ examples.append(_images_to_example(image, image2))
+
+ sys.stderr.write('Finish generating examples.\n')
+ random.shuffle(examples)
+ for count, ex in enumerate(examples):
+ if count % 10 == 0:
+ writer2.write(ex.SerializeToString())
+ else:
+ writer.write(ex.SerializeToString())
+
+def main(_):
+ generate_input()
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/cross_conv/model.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/cross_conv/model.py"
new file mode 100644
index 00000000..7b48e446
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/cross_conv/model.py"
@@ -0,0 +1,233 @@
+# Copyright 2016 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Cross Convolutional Model.
+
+https://arxiv.org/pdf/1607.02586v1.pdf
+"""
+import math
+import sys
+
+from six.moves import xrange
+import tensorflow as tf
+
+slim = tf.contrib.slim
+
+
+class CrossConvModel(object):
+
+ def __init__(self, image_diff_list, params):
+ """Constructor.
+
+ Args:
+ image_diff_list: A list of (image, diff) tuples, with shape
+ [batch_size, image_size, image_size, 3] and image_sizes as
+ [32, 64, 128, 256].
+ params: Dict of parameters.
+ """
+ self.images = [i for (i, _) in image_diff_list]
+ # Move the diff to the positive realm.
+ self.diffs = [(d + params['scale']) / 2 for (i, d) in image_diff_list]
+ self.params = params
+
+ def Build(self):
+ with tf.device('/gpu:0'):
+ with slim.arg_scope([slim.conv2d],
+ activation_fn=tf.nn.relu,
+ normalizer_fn=slim.batch_norm,
+ normalizer_params={'is_training':
+ self.params['is_training']}):
+ self._BuildMotionKernel()
+ encoded_images = self._BuildImageEncoder()
+ cross_conved_images = self._CrossConv(encoded_images)
+ self._BuildImageDecoder(cross_conved_images)
+ self._BuildLoss()
+
+ image = self.images[1]
+ diff = self.diffs[1]
+
+ self.global_step = tf.Variable(0, name='global_step', trainable=False)
+
+ if self.params['is_training']:
+ self._BuildTrainOp()
+
+ diff = diff * 2.0 - self.params['scale']
+ diff_output = self.diff_output * 2.0 - self.params['scale']
+ concat_image = tf.concat(
+ axis=1, values=[image, image + diff_output, image + diff, diff_output])
+ tf.summary.image('origin_predict_expect_predictdiff', concat_image)
+ self.summary_op = tf.summary.merge_all()
+ return self.loss
+
+ def _BuildTrainOp(self):
+ lrn_rate = tf.maximum(
+ 0.01, # min_lr_rate.
+ tf.train.exponential_decay(
+ self.params['learning_rate'], self.global_step, 10000, 0.5))
+ tf.summary.scalar('learning rate', lrn_rate)
+ optimizer = tf.train.GradientDescentOptimizer(lrn_rate)
+ self.train_op = slim.learning.create_train_op(
+ self.loss, optimizer, global_step=self.global_step)
+
+ def _BuildLoss(self):
+ # 1. reconstr_loss doesn't seem to do better than l2 loss.
+ # 2. Only works when using reduce_mean; reduce_sum doesn't work.
+ # 3. It seems the kl loss doesn't play an important role.
+ self.loss = 0
+ with tf.variable_scope('loss'):
+ if self.params['l2_loss']:
+ l2_loss = tf.reduce_mean(tf.square(self.diff_output - self.diffs[1]))
+ tf.summary.scalar('l2_loss', l2_loss)
+ self.loss += l2_loss
+ if self.params['reconstr_loss']:
+ reconstr_loss = (-tf.reduce_mean(
+ self.diffs[1] * (1e-10 + self.diff_output) +
+ (1-self.diffs[1]) * tf.log(1e-10 + 1 - self.diff_output)))
+ reconstr_loss = tf.check_numerics(reconstr_loss, 'reconstr_loss')
+ tf.summary.scalar('reconstr_loss', reconstr_loss)
+ self.loss += reconstr_loss
+ if self.params['kl_loss']:
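+ # KL(N(z_mean, z_stddev^2) || N(0, 1)), averaged over batch and dimensions.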
+ kl_loss = (0.5 * tf.reduce_mean(
+ tf.square(self.z_mean) + tf.square(self.z_stddev) -
+ 2 * self.z_stddev_log - 1))
+ tf.summary.scalar('kl_loss', kl_loss)
+ self.loss += kl_loss
+
+ tf.summary.scalar('loss', self.loss)
+
+ def _BuildMotionKernel(self):
+ image = self.images[-2]
+ diff = self.diffs[-2]
+ shape = image.get_shape().as_list()
+ assert shape[1] == shape[2] and shape[1] == 128
+ batch_size = shape[0]
+
+ net = tf.concat(axis=3, values=[image, diff])
+ with tf.variable_scope('motion_encoder'):
+ with slim.arg_scope([slim.conv2d], padding='VALID'):
+ net = slim.conv2d(net, 96, [5, 5], stride=1)
+ net = slim.max_pool2d(net, [2, 2])
+ net = slim.conv2d(net, 96, [5, 5], stride=1)
+ net = slim.max_pool2d(net, [2, 2])
+ net = slim.conv2d(net, 128, [5, 5], stride=1)
+ net = slim.conv2d(net, 128, [5, 5], stride=1)
+ net = slim.max_pool2d(net, [2, 2])
+ net = slim.conv2d(net, 256, [4, 4], stride=1)
+ net = slim.conv2d(net, 256, [3, 3], stride=1)
+
+ z = tf.reshape(net, shape=[batch_size, -1])
+ self.z_mean, self.z_stddev_log = tf.split(
+ axis=1, num_or_size_splits=2, value=z)
+ self.z_stddev = tf.exp(self.z_stddev_log)
+
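+    # Reparameterization trick: sample the latent motion code as
+    # z = mean + stddev * epsilon with epsilon ~ N(0, 1), so the sampling step
+    # stays differentiable w.r.t. the motion encoder outputs.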
+ epsilon = tf.random_normal(
+ self.z_mean.get_shape().as_list(), 0, 1, dtype=tf.float32)
+ kernel = self.z_mean + tf.multiply(self.z_stddev, epsilon)
+
+ width = int(math.sqrt(kernel.get_shape().as_list()[1] // 128))
+ kernel = tf.reshape(kernel, [batch_size, width, width, 128])
+ with tf.variable_scope('kernel_decoder'):
+ with slim.arg_scope([slim.conv2d], padding='SAME'):
+ kernel = slim.conv2d(kernel, 128, [5, 5], stride=1)
+ self.kernel = slim.conv2d(kernel, 128, [5, 5], stride=1)
+
+ sys.stderr.write('kernel shape: %s\n' % kernel.get_shape())
+
+ def _BuildImageEncoder(self):
+ feature_maps = []
+ for (i, image) in enumerate(self.images):
+ with tf.variable_scope('image_encoder_%d' % i):
+ with slim.arg_scope([slim.conv2d, slim.max_pool2d], padding='SAME'):
+ net = slim.conv2d(image, 64, [5, 5], stride=1)
+ net = slim.conv2d(net, 64, [5, 5], stride=1)
+ net = slim.max_pool2d(net, [5, 5])
+ net = slim.conv2d(net, 64, [5, 5], stride=1)
+ net = slim.conv2d(net, 32, [5, 5], stride=1)
+ net = slim.max_pool2d(net, [2, 2])
+ sys.stderr.write('image_conv shape: %s\n' % net.get_shape())
+ feature_maps.append(net)
+ return feature_maps
+
+ def _CrossConvHelper(self, encoded_image, kernel):
+ """Cross Convolution.
+
+ The encoded image and kernel are of the same shape. Namely
+ [batch_size, image_size, image_size, channels]. They are split
+ into [image_size, image_size] image squares [kernel_size, kernel_size]
+ kernel squares. kernel squares are used to convolute image squares.
+ """
+ images = tf.expand_dims(encoded_image, 0)
+ kernels = tf.expand_dims(kernel, 3)
+ return tf.nn.depthwise_conv2d(images, kernels, [1, 1, 1, 1], 'SAME')
+
+ def _CrossConv(self, encoded_images):
+ """Apply the motion kernel on the encoded_images."""
+ cross_conved_images = []
+ kernels = tf.split(axis=3, num_or_size_splits=4, value=self.kernel)
+ for (i, encoded_image) in enumerate(encoded_images):
+ with tf.variable_scope('cross_conv_%d' % i):
+ kernel = kernels[i]
+
+ encoded_image = tf.unstack(encoded_image, axis=0)
+ kernel = tf.unstack(kernel, axis=0)
+ assert len(encoded_image) == len(kernel)
+ assert len(encoded_image) == self.params['batch_size']
+ conved_image = []
+ for j in xrange(len(encoded_image)):
+ conved_image.append(self._CrossConvHelper(
+ encoded_image[j], kernel[j]))
+ cross_conved_images.append(tf.concat(axis=0, values=conved_image))
+ sys.stderr.write('cross_conved shape: %s\n' %
+ cross_conved_images[-1].get_shape())
+ return cross_conved_images
+
+ def _Deconv(self, net, out_filters, kernel_size, stride):
+ shape = net.get_shape().as_list()
+ in_filters = shape[3]
+ kernel_shape = [kernel_size, kernel_size, out_filters, in_filters]
+
+ weights = tf.get_variable(
+ name='weights',
+ shape=kernel_shape,
+ dtype=tf.float32,
+ initializer=tf.truncated_normal_initializer(stddev=0.01))
+
+ out_height = shape[1] * stride
+ out_width = shape[2] * stride
+ batch_size = shape[0]
+
+ output_shape = [batch_size, out_height, out_width, out_filters]
+ net = tf.nn.conv2d_transpose(net, weights, output_shape,
+ [1, stride, stride, 1], padding='SAME')
+    net = slim.batch_norm(net)
+ return net
+
+ def _BuildImageDecoder(self, cross_conved_images):
+ """Decode the cross_conved feature maps into the predicted images."""
+ nets = []
+ for i, cross_conved_image in enumerate(cross_conved_images):
+ with tf.variable_scope('image_decoder_%d' % i):
+        stride = 64 // cross_conved_image.get_shape().as_list()[1]
+ # TODO(xpan): Alternative solution for upsampling?
+ nets.append(self._Deconv(
+ cross_conved_image, 64, kernel_size=3, stride=stride))
+
+ net = tf.concat(axis=3, values=nets)
+ net = slim.conv2d(net, 128, [9, 9], padding='SAME', stride=1)
+ net = slim.conv2d(net, 128, [1, 1], padding='SAME', stride=1)
+ net = slim.conv2d(net, 3, [1, 1], padding='SAME', stride=1)
+ self.diff_output = net
+ sys.stderr.write('diff_output shape: %s\n' % self.diff_output.get_shape())
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/cross_conv/reader.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/cross_conv/reader.py"
new file mode 100644
index 00000000..ab4ab698
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/cross_conv/reader.py"
@@ -0,0 +1,86 @@
+# Copyright 2016 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Read image sequence."""
+
+from six.moves import xrange
+import tensorflow as tf
+
+
+def SequenceToImageAndDiff(images):
+ """Convert image sequence batch into image and diff batch.
+
+ Each image pair is converted to the first image and their diff.
+ Batch size will increase if sequence length is larger than 2.
+
+ Args:
+ images: Image sequence with shape
+ [batch_size, seq_len, image_size, image_size, channel]
+
+ Returns:
+    A list of (image, diff) tuples with shape
+    [batch_size * (seq_len - 1), image_size, image_size, channel],
+    one tuple per image_size in [32, 64, 128, 256].
+ """
+ image_diff_list = []
+ image_seq = tf.unstack(images, axis=1)
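+  # For each target resolution, resize every frame and pair it with the
+  # frame-to-frame difference; consecutive pairs are folded into the batch axis.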
+ for size in [32, 64, 128, 256]:
+ resized_images = [
+ tf.image.resize_images(i, [size, size]) for i in image_seq]
+ diffs = []
+ for i in xrange(0, len(resized_images)-1):
+ diffs.append(resized_images[i+1] - resized_images[i])
+ image_diff_list.append(
+ (tf.concat(axis=0, values=resized_images[:-1]), tf.concat(axis=0, values=diffs)))
+ return image_diff_list
+
+
+def ReadInput(data_filepattern, shuffle, params):
+ """Read the tf.SequenceExample tfrecord files.
+
+ Args:
+ data_filepattern: tf.SequenceExample tfrecord filepattern.
+ shuffle: Whether to shuffle the examples.
+ params: parameter dict.
+
+ Returns:
+ image sequence batch [batch_size, seq_len, image_size, image_size, channel].
+ """
+ image_size = params['image_size']
+ filenames = tf.gfile.Glob(data_filepattern)
+ filename_queue = tf.train.string_input_producer(filenames, shuffle=shuffle)
+ reader = tf.TFRecordReader()
+ _, example = reader.read(filename_queue)
+  feature_spec = {
+      'moving_objs': tf.FixedLenSequenceFeature(
+          shape=[image_size * image_size * 3], dtype=tf.float32)}
+  _, features = tf.parse_single_sequence_example(
+      example, sequence_features=feature_spec)
+ moving_objs = tf.reshape(
+ features['moving_objs'], [params['seq_len'], image_size, image_size, 3])
+ if shuffle:
+ examples = tf.train.shuffle_batch(
+ [moving_objs],
+ batch_size=params['batch_size'],
+ num_threads=64,
+ capacity=params['batch_size'] * 100,
+ min_after_dequeue=params['batch_size'] * 4)
+ else:
+ examples = tf.train.batch([moving_objs],
+ batch_size=params['batch_size'],
+ num_threads=16,
+ capacity=params['batch_size'])
+ examples /= params['norm_scale']
+ return examples
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/cross_conv/sprites_gen.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/cross_conv/sprites_gen.py"
new file mode 100644
index 00000000..0d36c255
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/cross_conv/sprites_gen.py"
@@ -0,0 +1,98 @@
+# Copyright 2016 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Generate the sprites tfrecords from raw_images."""
+import os
+import random
+import re
+import sys
+
+import numpy as np
+import scipy.misc
+from six.moves import xrange
+import tensorflow as tf
+
+
+tf.flags.DEFINE_string('data_filepattern', '', 'The raw images.')
+tf.flags.DEFINE_string('out_file', '',
+ 'File name for the tfrecord output.')
+
+
+def _read_images():
+ """Read images from image files into data structure."""
+ sprites = dict()
+ files = tf.gfile.Glob(tf.flags.FLAGS.data_filepattern)
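+  # File names look like image_<character>_<pose>_<frame>.jpg; build a nested
+  # dict of sprites[character][pose][frame] = image.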
+ for f in files:
+ image = scipy.misc.imread(f)
+ m = re.search('image_([0-9]+)_([0-9]+)_([0-9]+).jpg', os.path.basename(f))
+ if m.group(1) not in sprites:
+ sprites[m.group(1)] = dict()
+ character = sprites[m.group(1)]
+ if m.group(2) not in character:
+ character[m.group(2)] = dict()
+ pose = character[m.group(2)]
+ pose[int(m.group(3))] = image
+ return sprites
+
+
+def _images_to_example(image, image2):
+ """Convert 2 consecutive image to a SequenceExample."""
+ example = tf.SequenceExample()
+ feature_list = example.feature_lists.feature_list['moving_objs']
+ feature = feature_list.feature.add()
+ feature.float_list.value.extend(np.reshape(image, [-1]).tolist())
+ feature = feature_list.feature.add()
+ feature.float_list.value.extend(np.reshape(image2, [-1]).tolist())
+ return example
+
+
+def generate_input():
+ """Generate tfrecords."""
+ sprites = _read_images()
+ sys.stderr.write('Finish reading images.\n')
+ train_writer = tf.python_io.TFRecordWriter(
+ tf.flags.FLAGS.out_file.replace('sprites', 'sprites_train'))
+ test_writer = tf.python_io.TFRecordWriter(
+ tf.flags.FLAGS.out_file.replace('sprites', 'sprites_test'))
+
+ train_examples = []
+ test_examples = []
+ for i in sprites:
+ if int(i) < 24:
+ examples = test_examples
+ else:
+ examples = train_examples
+
+ character = sprites[i]
+ for j in character.keys():
+ pose = character[j]
+ for k in xrange(1, len(pose), 1):
+ image = pose[k]
+ image2 = pose[k+1]
+ examples.append(_images_to_example(image, image2))
+
+ sys.stderr.write('Finish generating examples: %d, %d.\n' %
+ (len(train_examples), len(test_examples)))
+ random.shuffle(train_examples)
+ _ = [train_writer.write(ex.SerializeToString()) for ex in train_examples]
+ _ = [test_writer.write(ex.SerializeToString()) for ex in test_examples]
+
+
+def main(_):
+ generate_input()
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/cross_conv/train.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/cross_conv/train.py"
new file mode 100644
index 00000000..5b9973f5
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/cross_conv/train.py"
@@ -0,0 +1,122 @@
+# Copyright 2016 The TensorFlow Authors All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Train the cross convolutional model."""
+import os
+import sys
+
+import numpy as np
+import tensorflow as tf
+
+import model as cross_conv_model
+import reader
+
+FLAGS = tf.flags.FLAGS
+tf.flags.DEFINE_string('master', '', 'Session address.')
+tf.flags.DEFINE_string('log_root', '/tmp/moving_obj', 'The root dir of output.')
+tf.flags.DEFINE_string('data_filepattern', '',
+ 'training data file pattern.')
+tf.flags.DEFINE_integer('image_size', 64, 'Image height and width.')
+tf.flags.DEFINE_integer('batch_size', 1, 'Batch size.')
+tf.flags.DEFINE_float('norm_scale', 1.0, 'Normalize the original image')
+tf.flags.DEFINE_float('scale', 10.0,
+ 'Scale the image after norm_scale and move the diff '
+ 'to the positive realm.')
+tf.flags.DEFINE_integer('sequence_length', 2, 'tf.SequenceExample length.')
+tf.flags.DEFINE_float('learning_rate', 0.8, 'Learning rate.')
+tf.flags.DEFINE_bool('l2_loss', True, 'If true, include l2_loss.')
+tf.flags.DEFINE_bool('reconstr_loss', False, 'If true, include reconstr_loss.')
+tf.flags.DEFINE_bool('kl_loss', True, 'If true, include KL loss.')
+
+slim = tf.contrib.slim
+
+
+def _Train():
+ params = dict()
+ params['batch_size'] = FLAGS.batch_size
+ params['seq_len'] = FLAGS.sequence_length
+ params['image_size'] = FLAGS.image_size
+ params['is_training'] = True
+ params['norm_scale'] = FLAGS.norm_scale
+ params['scale'] = FLAGS.scale
+ params['learning_rate'] = FLAGS.learning_rate
+ params['l2_loss'] = FLAGS.l2_loss
+ params['reconstr_loss'] = FLAGS.reconstr_loss
+ params['kl_loss'] = FLAGS.kl_loss
+
+ train_dir = os.path.join(FLAGS.log_root, 'train')
+
+ images = reader.ReadInput(FLAGS.data_filepattern, shuffle=True, params=params)
+  # Increasing the scale value makes training much faster.
+  images *= params['scale']
+ image_diff_list = reader.SequenceToImageAndDiff(images)
+ model = cross_conv_model.CrossConvModel(image_diff_list, params)
+ model.Build()
+ tf.contrib.tfprof.model_analyzer.print_model_analysis(tf.get_default_graph())
+
+ summary_writer = tf.summary.FileWriter(train_dir)
+ sv = tf.train.Supervisor(logdir=FLAGS.log_root,
+ summary_op=None,
+ is_chief=True,
+ save_model_secs=60,
+ global_step=model.global_step)
+ sess = sv.prepare_or_wait_for_session(
+ FLAGS.master, config=tf.ConfigProto(allow_soft_placement=True))
+
+ total_loss = 0.0
+ step = 0
+ sample_z_mean = np.zeros(model.z_mean.get_shape().as_list())
+ sample_z_stddev_log = np.zeros(model.z_stddev_log.get_shape().as_list())
+ sample_step = 0
+
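+  # Training loop: run the train op, accumulate the loss for periodic logging,
+  # and accumulate z statistics that are dumped every 10k steps for eval.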
+ while True:
+ _, loss_val, total_steps, summaries, z_mean, z_stddev_log = sess.run(
+ [model.train_op, model.loss, model.global_step,
+ model.summary_op,
+ model.z_mean, model.z_stddev_log])
+
+ sample_z_mean += z_mean
+ sample_z_stddev_log += z_stddev_log
+ total_loss += loss_val
+ step += 1
+ sample_step += 1
+
+ if step % 100 == 0:
+ summary_writer.add_summary(summaries, total_steps)
+ sys.stderr.write('step: %d, loss: %f\n' %
+ (total_steps, total_loss / step))
+ total_loss = 0.0
+ step = 0
+
+ # Sampled z is used for eval.
+ # It seems 10k is better than 1k. Maybe try 100k next?
+ if sample_step % 10000 == 0:
+ with tf.gfile.Open(os.path.join(FLAGS.log_root, 'z_mean.npy'), 'w') as f:
+ np.save(f, sample_z_mean / sample_step)
+ with tf.gfile.Open(
+ os.path.join(FLAGS.log_root, 'z_stddev_log.npy'), 'w') as f:
+ np.save(f, sample_z_stddev_log / sample_step)
+ sample_z_mean = np.zeros(model.z_mean.get_shape().as_list())
+ sample_z_stddev_log = np.zeros(
+ model.z_stddev_log.get_shape().as_list())
+ sample_step = 0
+
+
+def main(_):
+ _Train()
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/g3doc/cross_conv.png" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/g3doc/cross_conv.png"
new file mode 100644
index 00000000..13915f94
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/g3doc/cross_conv.png" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/g3doc/cross_conv2.png" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/g3doc/cross_conv2.png"
new file mode 100644
index 00000000..c4b5e8e9
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/g3doc/cross_conv2.png" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/g3doc/cross_conv3.png" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/g3doc/cross_conv3.png"
new file mode 100644
index 00000000..054d7d1e
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/next_frame_prediction/g3doc/cross_conv3.png" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/nst_blogpost/4_Neural_Style_Transfer_with_Eager_Execution.ipynb" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/nst_blogpost/4_Neural_Style_Transfer_with_Eager_Execution.ipynb"
new file mode 100644
index 00000000..5d974f83
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/nst_blogpost/4_Neural_Style_Transfer_with_Eager_Execution.ipynb"
@@ -0,0 +1,1225 @@
+{
+ "nbformat": 4,
+ "nbformat_minor": 0,
+ "metadata": {
+ "colab": {
+ "name": "Neural Style Transfer with Eager Execution",
+ "version": "0.3.2",
+ "provenance": [],
+ "private_outputs": true,
+ "collapsed_sections": [],
+ "toc_visible": true
+ },
+ "kernelspec": {
+ "name": "python3",
+ "display_name": "Python 3"
+ },
+ "accelerator": "GPU"
+ },
+ "cells": [
+ {
+ "metadata": {
+ "id": "jo5PziEC4hWs",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "# Neural Style Transfer with tf.keras\n",
+ "\n",
+ "
"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "aDyGj8DmXCJI",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "## Overview\n",
+ "\n",
+ "In this tutorial, we will learn how to use deep learning to compose images in the style of another image (ever wish you could paint like Picasso or Van Gogh?). This is known as **neural style transfer**! This is a technique outlined in [Leon A. Gatys' paper, A Neural Algorithm of Artistic Style](https://arxiv.org/abs/1508.06576), which is a great read, and you should definitely check it out. \n",
+ "\n",
+ "But, what is neural style transfer?\n",
+ "\n",
+ "Neural style transfer is an optimization technique used to take three images, a **content** image, a **style reference** image (such as an artwork by a famous painter), and the **input** image you want to style -- and blend them together such that the input image is transformed to look like the content image, but “painted” in the style of the style image.\n",
+ "\n",
+ "\n",
+ "For example, let’s take an image of this turtle and Katsushika Hokusai's *The Great Wave off Kanagawa*:\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "[Image of Green Sea Turtle](https://commons.wikimedia.org/wiki/File:Green_Sea_Turtle_grazing_seagrass.jpg)\n",
+ "-By P.Lindgren [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], from Wikimedia Common\n",
+ "\n",
+ "\n",
+ "Now how would it look like if Hokusai decided to paint the picture of this Turtle exclusively with this style? Something like this?\n",
+ "\n",
+ "\n",
+ "\n",
+ "Is this magic or just deep learning? Fortunately, this doesn’t involve any witchcraft: style transfer is a fun and interesting technique that showcases the capabilities and internal representations of neural networks. \n",
+ "\n",
+ "The principle of neural style transfer is to define two distance functions, one that describes how different the content of two images are , $L_{content}$, and one that describes the difference between two images in terms of their style, $L_{style}$. Then, given three images, a desired style image, a desired content image, and the input image (initialized with the content image), we try to transform the input image to minimize the content distance with the content image and its style distance with the style image. \n",
+ "In summary, we’ll take the base input image, a content image that we want to match, and the style image that we want to match. We’ll transform the base input image by minimizing the content and style distances (losses) with backpropagation, creating an image that matches the content of the content image and the style of the style image. \n",
+ "\n",
+ "### Specific concepts that will be covered:\n",
+ "In the process, we will build practical experience and develop intuition around the following concepts\n",
+ "\n",
+ "* **Eager Execution** - use TensorFlow's imperative programming environment that evaluates operations immediately \n",
+ " * [Learn more about eager execution](https://www.tensorflow.org/programmers_guide/eager)\n",
+ " * [See it in action](https://www.tensorflow.org/get_started/eager)\n",
+ "* ** Using [Functional API](https://keras.io/getting-started/functional-api-guide/) to define a model** - we'll build a subset of our model that will give us access to the necessary intermediate activations using the Functional API \n",
+ "* **Leveraging feature maps of a pretrained model** - Learn how to use pretrained models and their feature maps \n",
+ "* **Create custom training loops** - we'll examine how to set up an optimizer to minimize a given loss with respect to input parameters\n",
+ "\n",
+ "### We will follow the general steps to perform style transfer:\n",
+ "\n",
+ "1. Visualize data\n",
+ "2. Basic Preprocessing/preparing our data\n",
+ "3. Set up loss functions \n",
+ "4. Create model\n",
+ "5. Optimize for loss function\n",
+ "\n",
+ "**Audience:** This post is geared towards intermediate users who are comfortable with basic machine learning concepts. To get the most out of this post, you should: \n",
+ "* Read [Gatys' paper](https://arxiv.org/abs/1508.06576) - we'll explain along the way, but the paper will provide a more thorough understanding of the task\n",
+ "* [Understand reducing loss with gradient descent](https://developers.google.com/machine-learning/crash-course/reducing-loss/gradient-descent)\n",
+ "\n",
+ "**Time Estimated**: 30 min\n"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "U8ajP_u73s6m",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "## Setup\n",
+ "\n",
+ "### Download Images"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "riWE_b8k3s6o",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "import os\n",
+ "img_dir = '/tmp/nst'\n",
+ "if not os.path.exists(img_dir):\n",
+ " os.makedirs(img_dir)\n",
+ "!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/d/d7/Green_Sea_Turtle_grazing_seagrass.jpg\n",
+ "!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/0/0a/The_Great_Wave_off_Kanagawa.jpg\n",
+ "!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/b/b4/Vassily_Kandinsky%2C_1913_-_Composition_7.jpg\n",
+ "!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/0/00/Tuebingen_Neckarfront.jpg\n",
+ "!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/6/68/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg\n",
+ "!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg/1024px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "eqxUicSPUOP6",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "### Import and configure modules"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "sc1OLbOWhPCO",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "import matplotlib.pyplot as plt\n",
+ "import matplotlib as mpl\n",
+ "mpl.rcParams['figure.figsize'] = (10,10)\n",
+ "mpl.rcParams['axes.grid'] = False\n",
+ "\n",
+ "import numpy as np\n",
+ "from PIL import Image\n",
+ "import time\n",
+ "import functools"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "RYEjlrYk3s6w",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "import tensorflow as tf\n",
+ "import tensorflow.contrib.eager as tfe\n",
+ "\n",
+ "from tensorflow.python.keras.preprocessing import image as kp_image\n",
+ "from tensorflow.python.keras import models \n",
+ "from tensorflow.python.keras import losses\n",
+ "from tensorflow.python.keras import layers\n",
+ "from tensorflow.python.keras import backend as K"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "L7sjDODq67HQ",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "We’ll begin by enabling [eager execution](https://www.tensorflow.org/guide/eager). Eager execution allows us to work through this technique in the clearest and most readable way. "
+ ]
+ },
+ {
+ "metadata": {
+ "id": "sfjsSAtNrqQx",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "tf.enable_eager_execution()\n",
+ "print(\"Eager execution: {}\".format(tf.executing_eagerly()))"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "IOiGrIV1iERH",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "# Set up some global values here\n",
+ "content_path = '/tmp/nst/Green_Sea_Turtle_grazing_seagrass.jpg'\n",
+ "style_path = '/tmp/nst/The_Great_Wave_off_Kanagawa.jpg'"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "xE4Yt8nArTeR",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "## Visualize the input"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "3TLljcwv5qZs",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "def load_img(path_to_img):\n",
+ " max_dim = 512\n",
+ " img = Image.open(path_to_img)\n",
+ " long = max(img.size)\n",
+ " scale = max_dim/long\n",
+ " img = img.resize((round(img.size[0]*scale), round(img.size[1]*scale)), Image.ANTIALIAS)\n",
+ " \n",
+ " img = kp_image.img_to_array(img)\n",
+ " \n",
+ " # We need to broadcast the image array such that it has a batch dimension \n",
+ " img = np.expand_dims(img, axis=0)\n",
+ " return img"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "vupl0CI18aAG",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "def imshow(img, title=None):\n",
+ " # Remove the batch dimension\n",
+ " out = np.squeeze(img, axis=0)\n",
+ " # Normalize for display \n",
+ " out = out.astype('uint8')\n",
+ " plt.imshow(out)\n",
+ " if title is not None:\n",
+ " plt.title(title)\n",
+ " plt.imshow(out)"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "2yAlRzJZrWM3",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "These are input content and style images. We hope to \"create\" an image with the content of our content image, but with the style of the style image. "
+ ]
+ },
+ {
+ "metadata": {
+ "id": "_UWQmeEaiKkP",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "plt.figure(figsize=(10,10))\n",
+ "\n",
+ "content = load_img(content_path).astype('uint8')\n",
+ "style = load_img(style_path).astype('uint8')\n",
+ "\n",
+ "plt.subplot(1, 2, 1)\n",
+ "imshow(content, 'Content Image')\n",
+ "\n",
+ "plt.subplot(1, 2, 2)\n",
+ "imshow(style, 'Style Image')\n",
+ "plt.show()"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "7qMVNvEsK-_D",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "## Prepare the data\n",
+ "Let's create methods that will allow us to load and preprocess our images easily. We perform the same preprocessing process as are expected according to the VGG training process. VGG networks are trained on image with each channel normalized by `mean = [103.939, 116.779, 123.68]`and with channels BGR."
+ ]
+ },
+ {
+ "metadata": {
+ "id": "hGwmTwJNmv2a",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "def load_and_process_img(path_to_img):\n",
+ " img = load_img(path_to_img)\n",
+ " img = tf.keras.applications.vgg19.preprocess_input(img)\n",
+ " return img"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "xCgooqs6tAka",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "In order to view the outputs of our optimization, we are required to perform the inverse preprocessing step. Furthermore, since our optimized image may take its values anywhere between $- \\infty$ and $\\infty$, we must clip to maintain our values from within the 0-255 range. "
+ ]
+ },
+ {
+ "metadata": {
+ "id": "mjzlKRQRs_y2",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "def deprocess_img(processed_img):\n",
+ " x = processed_img.copy()\n",
+ " if len(x.shape) == 4:\n",
+ " x = np.squeeze(x, 0)\n",
+ " assert len(x.shape) == 3, (\"Input to deprocess image must be an image of \"\n",
+ " \"dimension [1, height, width, channel] or [height, width, channel]\")\n",
+ " if len(x.shape) != 3:\n",
+ " raise ValueError(\"Invalid input to deprocessing image\")\n",
+ " \n",
+ " # perform the inverse of the preprocessiing step\n",
+ " x[:, :, 0] += 103.939\n",
+ " x[:, :, 1] += 116.779\n",
+ " x[:, :, 2] += 123.68\n",
+ " x = x[:, :, ::-1]\n",
+ "\n",
+ " x = np.clip(x, 0, 255).astype('uint8')\n",
+ " return x"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "GEwZ7FlwrjoZ",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "### Define content and style representations\n",
+ "In order to get both the content and style representations of our image, we will look at some intermediate layers within our model. As we go deeper into the model, these intermediate layers represent higher and higher order features. In this case, we are using the network architecture VGG19, a pretrained image classification network. These intermediate layers are necessary to define the representation of content and style from our images. For an input image, we will try to match the corresponding style and content target representations at these intermediate layers. \n",
+ "\n",
+ "#### Why intermediate layers?\n",
+ "\n",
+ "You may be wondering why these intermediate outputs within our pretrained image classification network allow us to define style and content representations. At a high level, this phenomenon can be explained by the fact that in order for a network to perform image classification (which our network has been trained to do), it must understand the image. This involves taking the raw image as input pixels and building an internal representation through transformations that turn the raw image pixels into a complex understanding of the features present within the image. This is also partly why convolutional neural networks are able to generalize well: they’re able to capture the invariances and defining features within classes (e.g., cats vs. dogs) that are agnostic to background noise and other nuisances. Thus, somewhere between where the raw image is fed in and the classification label is output, the model serves as a complex feature extractor; hence by accessing intermediate layers, we’re able to describe the content and style of input images. \n",
+ "\n",
+ "\n",
+ "Specifically we’ll pull out these intermediate layers from our network: \n"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "N4-8eUp_Kc-j",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "# Content layer where will pull our feature maps\n",
+ "content_layers = ['block5_conv2'] \n",
+ "\n",
+ "# Style layer we are interested in\n",
+ "style_layers = ['block1_conv1',\n",
+ " 'block2_conv1',\n",
+ " 'block3_conv1', \n",
+ " 'block4_conv1', \n",
+ " 'block5_conv1'\n",
+ " ]\n",
+ "\n",
+ "num_content_layers = len(content_layers)\n",
+ "num_style_layers = len(style_layers)"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "Jt3i3RRrJiOX",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "## Build the Model \n",
+ "In this case, we load [VGG19](https://keras.io/applications/#vgg19), and feed in our input tensor to the model. This will allow us to extract the feature maps (and subsequently the content and style representations) of the content, style, and generated images.\n",
+ "\n",
+ "We use VGG19, as suggested in the paper. In addition, since VGG19 is a relatively simple model (compared with ResNet, Inception, etc) the feature maps actually work better for style transfer. "
+ ]
+ },
+ {
+ "metadata": {
+ "id": "v9AnzEUU6hhx",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "In order to access the intermediate layers corresponding to our style and content feature maps, we get the corresponding outputs and using the Keras [**Functional API**](https://keras.io/getting-started/functional-api-guide/), we define our model with the desired output activations. \n",
+ "\n",
+ "With the Functional API defining a model simply involves defining the input and output: \n",
+ "\n",
+ "`model = Model(inputs, outputs)`"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "nfec6MuMAbPx",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "def get_model():\n",
+ " \"\"\" Creates our model with access to intermediate layers. \n",
+ " \n",
+ " This function will load the VGG19 model and access the intermediate layers. \n",
+ " These layers will then be used to create a new model that will take input image\n",
+ " and return the outputs from these intermediate layers from the VGG model. \n",
+ " \n",
+ " Returns:\n",
+ " returns a keras model that takes image inputs and outputs the style and \n",
+ " content intermediate layers. \n",
+ " \"\"\"\n",
+ " # Load our model. We load pretrained VGG, trained on imagenet data\n",
+ " vgg = tf.keras.applications.vgg19.VGG19(include_top=False, weights='imagenet')\n",
+ " vgg.trainable = False\n",
+ " # Get output layers corresponding to style and content layers \n",
+ " style_outputs = [vgg.get_layer(name).output for name in style_layers]\n",
+ " content_outputs = [vgg.get_layer(name).output for name in content_layers]\n",
+ " model_outputs = style_outputs + content_outputs\n",
+ " # Build model \n",
+ " return models.Model(vgg.input, model_outputs)"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "kl6eFGa7-OtV",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "In the above code snippet, we’ll load our pretrained image classification network. Then we grab the layers of interest as we defined earlier. Then we define a Model by setting the model’s inputs to an image and the outputs to the outputs of the style and content layers. In other words, we created a model that will take an input image and output the content and style intermediate layers! \n"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "vJdYvJTZ4bdS",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "## Define and create our loss functions (content and style distances)"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "F2Hcepii7_qh",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "### Content Loss"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "1FvH-gwXi4nq",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "Our content loss definition is actually quite simple. We’ll pass the network both the desired content image and our base input image. This will return the intermediate layer outputs (from the layers defined above) from our model. Then we simply take the euclidean distance between the two intermediate representations of those images. \n",
+ "\n",
+ "More formally, content loss is a function that describes the distance of content from our output image $x$ and our content image, $p$. Let $C_{nn}$ be a pre-trained deep convolutional neural network. Again, in this case we use [VGG19](https://keras.io/applications/#vgg19). Let $X$ be any image, then $C_{nn}(X)$ is the network fed by X. Let $F^l_{ij}(x) \\in C_{nn}(x)$ and $P^l_{ij}(p) \\in C_{nn}(p)$ describe the respective intermediate feature representation of the network with inputs $x$ and $p$ at layer $l$. Then we describe the content distance (loss) formally as: $$L^l_{content}(p, x) = \\sum_{i, j} (F^l_{ij}(x) - P^l_{ij}(p))^2$$\n",
+ "\n",
+ "We perform backpropagation in the usual way such that we minimize this content loss. We thus change the initial image until it generates a similar response in a certain layer (defined in content_layer) as the original content image.\n",
+ "\n",
+ "This can be implemented quite simply. Again it will take as input the feature maps at a layer L in a network fed by x, our input image, and p, our content image, and return the content distance.\n",
+ "\n"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "6KsbqPA8J9DY",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "### Computing content loss\n",
+ "We will actually add our content losses at each desired layer. This way, each iteration when we feed our input image through the model (which in eager is simply `model(input_image)`!) all the content losses through the model will be properly compute and because we are executing eagerly, all the gradients will be computed. "
+ ]
+ },
+ {
+ "metadata": {
+ "id": "d2mf7JwRMkCd",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "def get_content_loss(base_content, target):\n",
+ " return tf.reduce_mean(tf.square(base_content - target))"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "lGUfttK9F8d5",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "## Style Loss"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "I6XtkGK_YGD1",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "Computing style loss is a bit more involved, but follows the same principle, this time feeding our network the base input image and the style image. However, instead of comparing the raw intermediate outputs of the base input image and the style image, we instead compare the Gram matrices of the two outputs. \n",
+ "\n",
+ "Mathematically, we describe the style loss of the base input image, $x$, and the style image, $a$, as the distance between the style representation (the gram matrices) of these images. We describe the style representation of an image as the correlation between different filter responses given by the Gram matrix $G^l$, where $G^l_{ij}$ is the inner product between the vectorized feature map $i$ and $j$ in layer $l$. We can see that $G^l_{ij}$ generated over the feature map for a given image represents the correlation between feature maps $i$ and $j$. \n",
+ "\n",
+ "To generate a style for our base input image, we perform gradient descent from the content image to transform it into an image that matches the style representation of the original image. We do so by minimizing the mean squared distance between the feature correlation map of the style image and the input image. The contribution of each layer to the total style loss is described by\n",
+ "$$E_l = \\frac{1}{4N_l^2M_l^2} \\sum_{i,j}(G^l_{ij} - A^l_{ij})^2$$\n",
+ "\n",
+ "where $G^l_{ij}$ and $A^l_{ij}$ are the respective style representation in layer $l$ of $x$ and $a$. $N_l$ describes the number of feature maps, each of size $M_l = height * width$. Thus, the total style loss across each layer is \n",
+ "$$L_{style}(a, x) = \\sum_{l \\in L} w_l E_l$$\n",
+ "where we weight the contribution of each layer's loss by some factor $w_l$. In our case, we weight each layer equally ($w_l =\\frac{1}{|L|}$)"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "F21Hm61yLKk5",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "### Computing style loss\n",
+ "Again, we implement our loss as a distance metric . "
+ ]
+ },
+ {
+ "metadata": {
+ "id": "N7MOqwKLLke8",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "def gram_matrix(input_tensor):\n",
+ " # We make the image channels first \n",
+ " channels = int(input_tensor.shape[-1])\n",
+ " a = tf.reshape(input_tensor, [-1, channels])\n",
+ " n = tf.shape(a)[0]\n",
+ " gram = tf.matmul(a, a, transpose_a=True)\n",
+ " return gram / tf.cast(n, tf.float32)\n",
+ "\n",
+ "def get_style_loss(base_style, gram_target):\n",
+ " \"\"\"Expects two images of dimension h, w, c\"\"\"\n",
+ " # height, width, num filters of each layer\n",
+ " # We scale the loss at a given layer by the size of the feature map and the number of filters\n",
+ " height, width, channels = base_style.get_shape().as_list()\n",
+ " gram_style = gram_matrix(base_style)\n",
+ " \n",
+ " return tf.reduce_mean(tf.square(gram_style - gram_target))# / (4. * (channels ** 2) * (width * height) ** 2)"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "pXIUX6czZABh",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "## Apply style transfer to our images\n"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "y9r8Lyjb_m0u",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "### Run Gradient Descent \n",
+ "If you aren't familiar with gradient descent/backpropagation or need a refresher, you should definitely check out this [awesome resource](https://developers.google.com/machine-learning/crash-course/reducing-loss/gradient-descent).\n",
+ "\n",
+ "In this case, we use the [Adam](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam)* optimizer in order to minimize our loss. We iteratively update our output image such that it minimizes our loss: we don't update the weights associated with our network, but instead we train our input image to minimize loss. In order to do this, we must know how we calculate our loss and gradients. \n",
+ "\n",
+ "\\* Note that L-BFGS, which if you are familiar with this algorithm is recommended, isn’t used in this tutorial because a primary motivation behind this tutorial was to illustrate best practices with eager execution, and, by using Adam, we can demonstrate the autograd/gradient tape functionality with custom training loops.\n"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "-kGzV6LTp4CU",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "We’ll define a little helper function that will load our content and style image, feed them forward through our network, which will then output the content and style feature representations from our model. "
+ ]
+ },
+ {
+ "metadata": {
+ "id": "O-lj5LxgtmnI",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "def get_feature_representations(model, content_path, style_path):\n",
+ " \"\"\"Helper function to compute our content and style feature representations.\n",
+ "\n",
+ " This function will simply load and preprocess both the content and style \n",
+ " images from their path. Then it will feed them through the network to obtain\n",
+ " the outputs of the intermediate layers. \n",
+ " \n",
+ " Arguments:\n",
+ " model: The model that we are using.\n",
+ " content_path: The path to the content image.\n",
+ " style_path: The path to the style image\n",
+ " \n",
+ " Returns:\n",
+ " returns the style features and the content features. \n",
+ " \"\"\"\n",
+ " # Load our images in \n",
+ " content_image = load_and_process_img(content_path)\n",
+ " style_image = load_and_process_img(style_path)\n",
+ " \n",
+ " # batch compute content and style features\n",
+ " style_outputs = model(style_image)\n",
+ " content_outputs = model(content_image)\n",
+ " \n",
+ " \n",
+ " # Get the style and content feature representations from our model \n",
+ " style_features = [style_layer[0] for style_layer in style_outputs[:num_style_layers]]\n",
+ " content_features = [content_layer[0] for content_layer in content_outputs[num_style_layers:]]\n",
+ " return style_features, content_features"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "3DopXw7-lFHa",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "### Computing the loss and gradients\n",
+ "Here we use [**tf.GradientTape**](https://www.tensorflow.org/programmers_guide/eager#computing_gradients) to compute the gradient. It allows us to take advantage of the automatic differentiation available by tracing operations for computing the gradient later. It records the operations during the forward pass and then is able to compute the gradient of our loss function with respect to our input image for the backwards pass."
+ ]
+ },
+ {
+ "metadata": {
+ "id": "oVDhSo8iJunf",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "def compute_loss(model, loss_weights, init_image, gram_style_features, content_features):\n",
+ " \"\"\"This function will compute the loss total loss.\n",
+ " \n",
+ " Arguments:\n",
+ " model: The model that will give us access to the intermediate layers\n",
+ " loss_weights: The weights of each contribution of each loss function. \n",
+ " (style weight, content weight, and total variation weight)\n",
+ " init_image: Our initial base image. This image is what we are updating with \n",
+ " our optimization process. We apply the gradients wrt the loss we are \n",
+ " calculating to this image.\n",
+ " gram_style_features: Precomputed gram matrices corresponding to the \n",
+ " defined style layers of interest.\n",
+ " content_features: Precomputed outputs from defined content layers of \n",
+ " interest.\n",
+ " \n",
+ " Returns:\n",
+ " returns the total loss, style loss, content loss, and total variational loss\n",
+ " \"\"\"\n",
+ " style_weight, content_weight = loss_weights\n",
+ " \n",
+ " # Feed our init image through our model. This will give us the content and \n",
+ " # style representations at our desired layers. Since we're using eager\n",
+ " # our model is callable just like any other function!\n",
+ " model_outputs = model(init_image)\n",
+ " \n",
+ " style_output_features = model_outputs[:num_style_layers]\n",
+ " content_output_features = model_outputs[num_style_layers:]\n",
+ " \n",
+ " style_score = 0\n",
+ " content_score = 0\n",
+ "\n",
+ " # Accumulate style losses from all layers\n",
+ " # Here, we equally weight each contribution of each loss layer\n",
+ " weight_per_style_layer = 1.0 / float(num_style_layers)\n",
+ " for target_style, comb_style in zip(gram_style_features, style_output_features):\n",
+ " style_score += weight_per_style_layer * get_style_loss(comb_style[0], target_style)\n",
+ " \n",
+ " # Accumulate content losses from all layers \n",
+ " weight_per_content_layer = 1.0 / float(num_content_layers)\n",
+ " for target_content, comb_content in zip(content_features, content_output_features):\n",
+ " content_score += weight_per_content_layer* get_content_loss(comb_content[0], target_content)\n",
+ " \n",
+ " style_score *= style_weight\n",
+ " content_score *= content_weight\n",
+ "\n",
+ " # Get total loss\n",
+ " loss = style_score + content_score \n",
+ " return loss, style_score, content_score"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "r5XTvbP6nJQa",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "Then computing the gradients is easy:"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "fwzYeOqOUH9_",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "def compute_grads(cfg):\n",
+ " with tf.GradientTape() as tape: \n",
+ " all_loss = compute_loss(**cfg)\n",
+ " # Compute gradients wrt input image\n",
+ " total_loss = all_loss[0]\n",
+ " return tape.gradient(total_loss, cfg['init_image']), all_loss"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "T9yKu2PLlBIE",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "### Optimization loop"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "pj_enNo6tACQ",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "import IPython.display\n",
+ "\n",
+ "def run_style_transfer(content_path, \n",
+ " style_path,\n",
+ " num_iterations=1000,\n",
+ " content_weight=1e3, \n",
+ " style_weight=1e-2): \n",
+ " # We don't need to (or want to) train any layers of our model, so we set their\n",
+ " # trainable to false. \n",
+ " model = get_model() \n",
+ " for layer in model.layers:\n",
+ " layer.trainable = False\n",
+ " \n",
+ " # Get the style and content feature representations (from our specified intermediate layers) \n",
+ " style_features, content_features = get_feature_representations(model, content_path, style_path)\n",
+ " gram_style_features = [gram_matrix(style_feature) for style_feature in style_features]\n",
+ " \n",
+ " # Set initial image\n",
+ " init_image = load_and_process_img(content_path)\n",
+ " init_image = tfe.Variable(init_image, dtype=tf.float32)\n",
+ " # Create our optimizer\n",
+ " opt = tf.train.AdamOptimizer(learning_rate=5, beta1=0.99, epsilon=1e-1)\n",
+ "\n",
+ " # For displaying intermediate images \n",
+ " iter_count = 1\n",
+ " \n",
+ " # Store our best result\n",
+ " best_loss, best_img = float('inf'), None\n",
+ " \n",
+ " # Create a nice config \n",
+ " loss_weights = (style_weight, content_weight)\n",
+ " cfg = {\n",
+ " 'model': model,\n",
+ " 'loss_weights': loss_weights,\n",
+ " 'init_image': init_image,\n",
+ " 'gram_style_features': gram_style_features,\n",
+ " 'content_features': content_features\n",
+ " }\n",
+ " \n",
+ " # For displaying\n",
+ " num_rows = 2\n",
+ " num_cols = 5\n",
+ " display_interval = num_iterations/(num_rows*num_cols)\n",
+ " start_time = time.time()\n",
+ " global_start = time.time()\n",
+ " \n",
+ " norm_means = np.array([103.939, 116.779, 123.68])\n",
+ " min_vals = -norm_means\n",
+ " max_vals = 255 - norm_means \n",
+ " \n",
+ " imgs = []\n",
+ " for i in range(num_iterations):\n",
+ " grads, all_loss = compute_grads(cfg)\n",
+ " loss, style_score, content_score = all_loss\n",
+ " opt.apply_gradients([(grads, init_image)])\n",
+ " clipped = tf.clip_by_value(init_image, min_vals, max_vals)\n",
+ " init_image.assign(clipped)\n",
+ " end_time = time.time() \n",
+ " \n",
+ " if loss < best_loss:\n",
+ " # Update best loss and best image from total loss. \n",
+ " best_loss = loss\n",
+ " best_img = deprocess_img(init_image.numpy())\n",
+ "\n",
+ " if i % display_interval== 0:\n",
+ " start_time = time.time()\n",
+ " \n",
+ " # Use the .numpy() method to get the concrete numpy array\n",
+ " plot_img = init_image.numpy()\n",
+ " plot_img = deprocess_img(plot_img)\n",
+ " imgs.append(plot_img)\n",
+ " IPython.display.clear_output(wait=True)\n",
+ " IPython.display.display_png(Image.fromarray(plot_img))\n",
+ " print('Iteration: {}'.format(i)) \n",
+ " print('Total loss: {:.4e}, ' \n",
+ " 'style loss: {:.4e}, '\n",
+ " 'content loss: {:.4e}, '\n",
+ " 'time: {:.4f}s'.format(loss, style_score, content_score, time.time() - start_time))\n",
+ " print('Total time: {:.4f}s'.format(time.time() - global_start))\n",
+ " IPython.display.clear_output(wait=True)\n",
+ " plt.figure(figsize=(14,4))\n",
+ " for i,img in enumerate(imgs):\n",
+ " plt.subplot(num_rows,num_cols,i+1)\n",
+ " plt.imshow(img)\n",
+ " plt.xticks([])\n",
+ " plt.yticks([])\n",
+ " \n",
+ " return best_img, best_loss "
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "vSVMx4burydi",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "best, best_loss = run_style_transfer(content_path, \n",
+ " style_path, num_iterations=1000)"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "dzJTObpsO3TZ",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "Image.fromarray(best)"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "dCXQ9vSnQbDy",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "To download the image from Colab uncomment the following code:"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "SSH6OpyyQn7w",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "#from google.colab import files\n",
+ "#files.download('wave_turtle.png')"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "LwiZfCW0AZwt",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "## Visualize outputs\n",
+ "We \"deprocess\" the output image in order to remove the processing that was applied to it. "
+ ]
+ },
+ {
+ "metadata": {
+ "id": "lqTQN1PjulV9",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "def show_results(best_img, content_path, style_path, show_large_final=True):\n",
+ " plt.figure(figsize=(10, 5))\n",
+ " content = load_img(content_path) \n",
+ " style = load_img(style_path)\n",
+ "\n",
+ " plt.subplot(1, 2, 1)\n",
+ " imshow(content, 'Content Image')\n",
+ "\n",
+ " plt.subplot(1, 2, 2)\n",
+ " imshow(style, 'Style Image')\n",
+ "\n",
+ " if show_large_final: \n",
+ " plt.figure(figsize=(10, 10))\n",
+ "\n",
+ " plt.imshow(best_img)\n",
+ " plt.title('Output Image')\n",
+ " plt.show()"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "i6d6O50Yvs6a",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "show_results(best, content_path, style_path)"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "tyGMmWh2Pss8",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "## Try it on other images\n",
+ "Image of Tuebingen \n",
+ "\n",
+ "Photo By: Andreas Praefcke [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC BY 3.0 (https://creativecommons.org/licenses/by/3.0)], from Wikimedia Commons"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "x2TePU39k9lb",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "### Starry night + Tuebingen"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "ES9dC6ZyJBD2",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "best_starry_night, best_loss = run_style_transfer('/tmp/nst/Tuebingen_Neckarfront.jpg',\n",
+ " '/tmp/nst/1024px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg')"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "X8w8WLkKvzXu",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "show_results(best_starry_night, '/tmp/nst/Tuebingen_Neckarfront.jpg',\n",
+ " '/tmp/nst/1024px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg')"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "QcXwvViek4Br",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "### Pillars of Creation + Tuebingen"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "vJ3u2U-gGmgP",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "best_poc_tubingen, best_loss = run_style_transfer('/tmp/nst/Tuebingen_Neckarfront.jpg', \n",
+ " '/tmp/nst/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg')"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "pQUq3KxpGv2O",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "show_results(best_poc_tubingen, \n",
+ " '/tmp/nst/Tuebingen_Neckarfront.jpg',\n",
+ " '/tmp/nst/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg')"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "bTZdTOdW3s8H",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "### Kandinsky Composition 7 + Tuebingen"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "bt9mbQfl7exl",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "best_kandinsky_tubingen, best_loss = run_style_transfer('/tmp/nst/Tuebingen_Neckarfront.jpg', \n",
+ " '/tmp/nst/Vassily_Kandinsky,_1913_-_Composition_7.jpg')"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "Qnz8HeXSXg6P",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "show_results(best_kandinsky_tubingen, \n",
+ " '/tmp/nst/Tuebingen_Neckarfront.jpg',\n",
+ " '/tmp/nst/Vassily_Kandinsky,_1913_-_Composition_7.jpg')"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "cg68lW2A3s8N",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "### Pillars of Creation + Sea Turtle"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "dl0DUot_bFST",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "best_poc_turtle, best_loss = run_style_transfer('/tmp/nst/Green_Sea_Turtle_grazing_seagrass.jpg', \n",
+ " '/tmp/nst/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg')"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "UzJfE0I1bQn8",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "show_results(best_poc_turtle, \n",
+ " '/tmp/nst/Green_Sea_Turtle_grazing_seagrass.jpg',\n",
+ " '/tmp/nst/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg')"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "sElaeNX-4Vnc",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "## Key Takeaways\n",
+ "\n",
+ "### What we covered:\n",
+ "\n",
+        "* We built several different loss functions and used backpropagation to transform our input image to minimize these losses\n",
+        "  * To do this we loaded a **pretrained model** and used its learned feature maps to describe the content and style representations of our images.\n",
+        "  * Our main loss functions primarily computed the distance between these different representations\n",
+        "* We implemented this with a custom model and **eager execution**\n",
+        "  * We built our custom model with the Functional API\n",
+        "  * Eager execution allows us to dynamically work with tensors, using a natural Python control flow\n",
+        "  * We manipulated tensors directly, which makes debugging and working with tensors easier.\n",
+        "* We iteratively updated our image by applying our optimizer's update rules using **tf.GradientTape**. The optimizer minimized the given losses with respect to our input image.\n",
+        "\n",
+        "A minimal sketch of this update loop is shown in the next cell."
+ ]
+ },
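+    {
+      "metadata": {
+        "id": "update_loop_sketch",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "cell_type": "code",
+      "source": [
+        "# A minimal, self-contained sketch (not part of the original pipeline) of the\n",
+        "# update loop summarized above: optimize an image's pixels directly with a\n",
+        "# gradient tape. The toy L2 loss below stands in for the content + style\n",
+        "# losses, and the TF 1.x eager APIs match the ones this notebook relies on.\n",
+        "import tensorflow as tf\n",
+        "\n",
+        "if not tf.executing_eagerly():\n",
+        "  tf.enable_eager_execution()\n",
+        "\n",
+        "toy_image = tf.Variable(tf.random_normal([1, 64, 64, 3]))\n",
+        "toy_target = tf.zeros_like(toy_image)\n",
+        "opt = tf.train.AdamOptimizer(learning_rate=0.1)\n",
+        "\n",
+        "for _ in range(10):\n",
+        "  with tf.GradientTape() as tape:\n",
+        "    loss = tf.reduce_mean(tf.square(toy_image - toy_target))\n",
+        "  # The gradient is taken with respect to the image itself, not model weights.\n",
+        "  grad = tape.gradient(loss, toy_image)\n",
+        "  opt.apply_gradients([(grad, toy_image)])\n",
+        "print('final toy loss:', loss.numpy())"
+      ],
+      "execution_count": 0,
+      "outputs": []
+    },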
+ {
+ "metadata": {
+ "id": "U-y02GWonqnD",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "\n",
+ "**[Image of Tuebingen](https://commons.wikimedia.org/wiki/File:Tuebingen_Neckarfront.jpg)** \n",
+ "Photo By: Andreas Praefcke [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC BY 3.0 (https://creativecommons.org/licenses/by/3.0)], from Wikimedia Commons\n",
+ "\n",
+ "**[Image of Green Sea Turtle](https://commons.wikimedia.org/wiki/File:Green_Sea_Turtle_grazing_seagrass.jpg)**\n",
+ "By P.Lindgren [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], from Wikimedia Commons\n",
+ "\n"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "IpUD9W6ZkeyM",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ ""
+ ],
+ "execution_count": 0,
+ "outputs": []
+ }
+ ]
+}
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/nst_blogpost/Green_Sea_Turtle_grazing_seagrass.jpg" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/nst_blogpost/Green_Sea_Turtle_grazing_seagrass.jpg"
new file mode 100644
index 00000000..c98791b8
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/nst_blogpost/Green_Sea_Turtle_grazing_seagrass.jpg" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/nst_blogpost/The_Great_Wave_off_Kanagawa.jpg" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/nst_blogpost/The_Great_Wave_off_Kanagawa.jpg"
new file mode 100644
index 00000000..5d3475c9
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/nst_blogpost/The_Great_Wave_off_Kanagawa.jpg" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/nst_blogpost/wave_turtle.png" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/nst_blogpost/wave_turtle.png"
new file mode 100644
index 00000000..d15ee0fc
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/nst_blogpost/wave_turtle.png" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/CONTRIBUTING.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/CONTRIBUTING.md"
new file mode 100644
index 00000000..e3d87e3c
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/CONTRIBUTING.md"
@@ -0,0 +1,13 @@
+# Contributing to the Tensorflow Object Detection API
+
+Patches to the Tensorflow Object Detection API are welcome!
+
+We require contributors to fill out either the individual or corporate
+Contributor License Agreement (CLA).
+
+ * If you are an individual writing original source code and you're sure you own the intellectual property, then you'll need to sign an [individual CLA](http://code.google.com/legal/individual-cla-v1.0.html).
+ * If you work for a company that wants to allow you to contribute your work, then you'll need to sign a [corporate CLA](http://code.google.com/legal/corporate-cla-v1.0.html).
+
+Please follow the
+[Tensorflow contributing guidelines](https://github.com/tensorflow/tensorflow/blob/master/CONTRIBUTING.md)
+when submitting pull requests.
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/README.md"
new file mode 100644
index 00000000..2fd23408
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/README.md"
@@ -0,0 +1,245 @@
+
+# Tensorflow Object Detection API
+Creating accurate machine learning models capable of localizing and identifying
+multiple objects in a single image remains a core challenge in computer vision.
+The TensorFlow Object Detection API is an open source framework built on top of
+TensorFlow that makes it easy to construct, train and deploy object detection
+models. At Google we’ve certainly found this codebase to be useful for our
+computer vision needs, and we hope that you will as well.
+
+
+
+Contributions to the codebase are welcome and we would love to hear back from
+you if you find this API useful. Finally if you use the Tensorflow Object
+Detection API for a research publication, please consider citing:
+
+```
+"Speed/accuracy trade-offs for modern convolutional object detectors."
+Huang J, Rathod V, Sun C, Zhu M, Korattikara A, Fathi A, Fischer I, Wojna Z,
+Song Y, Guadarrama S, Murphy K, CVPR 2017
+```
+\[[link](https://arxiv.org/abs/1611.10012)\]\[[bibtex](
+https://scholar.googleusercontent.com/scholar.bib?q=info:l291WsrB-hQJ:scholar.google.com/&output=citation&scisig=AAGBfm0AAAAAWUIIlnPZ_L9jxvPwcC49kDlELtaeIyU-&scisf=4&ct=citation&cd=-1&hl=en&scfhb=1)\]
+
+
+
+
+
+## Maintainers
+
+* Jonathan Huang, github: [jch1](https://github.com/jch1)
+* Vivek Rathod, github: [tombstone](https://github.com/tombstone)
+* Ronny Votel, github: [ronnyvotel](https://github.com/ronnyvotel)
+* Derek Chow, github: [derekjchow](https://github.com/derekjchow)
+* Chen Sun, github: [jesu9](https://github.com/jesu9)
+* Menglong Zhu, github: [dreamdragon](https://github.com/dreamdragon)
+* Alireza Fathi, github: [afathi3](https://github.com/afathi3)
+* Zhichao Lu, github: [pkulzc](https://github.com/pkulzc)
+
+
+## Table of contents
+
+Setup:
+
+ * Installation
+
+Quick Start:
+
+ * Quick Start: Jupyter notebook for off-the-shelf inference
+ * Quick Start: Training a pet detector
+
+Customizing a Pipeline:
+
+ * Configuring an object detection pipeline
+ * Preparing inputs
+
+Running:
+
+ * Running locally
+ * Running on the cloud
+
+Extras:
+
+ * Tensorflow detection model zoo
+ * Exporting a trained model for inference
+ * Defining your own model architecture
+ * Bringing in your own dataset
+ * Supported object detection evaluation protocols
+ * Inference and evaluation on the Open Images dataset
+ * Run an instance segmentation model
+ * Run the evaluation for the Open Images Challenge 2018
+ * TPU compatible detection pipelines
+ * Running object detection on mobile devices with TensorFlow Lite
+
+## Getting Help
+
+To get help with issues you may encounter using the Tensorflow Object Detection
+API, create a new question on [StackOverflow](https://stackoverflow.com/) with
+the tags "tensorflow" and "object-detection".
+
+Please report bugs (actually broken code, not usage questions) to the
+tensorflow/models GitHub
+[issue tracker](https://github.com/tensorflow/models/issues), prefixing the
+issue name with "object_detection".
+
+Please check [FAQ](g3doc/faq.md) for frequently asked questions before
+reporting an issue.
+
+
+## Release information
+
+### Sep 17, 2018
+
+We have released Faster R-CNN detectors with ResNet-50 / ResNet-101 feature
+extractors trained on the [iNaturalist Species Detection Dataset](https://github.com/visipedia/inat_comp/blob/master/2017/README.md#bounding-boxes).
+The models are trained on the training split of the iNaturalist data for 4M
+iterations, and they achieve 55% and 58% mean AP@.5 over 2854 classes,
+respectively.
+For more details please refer to this [paper](https://arxiv.org/abs/1707.06642).
+
+Thanks to contributors: Chen Sun
+
+### July 13, 2018
+
+There are many new updates in this release, extending the functionality and
+capability of the API:
+
+* Moving from slim-based training to [Estimator](https://www.tensorflow.org/api_docs/python/tf/estimator/Estimator)-based
+training.
+* Support for [RetinaNet](https://arxiv.org/abs/1708.02002), and a [MobileNet](https://ai.googleblog.com/2017/06/mobilenets-open-source-models-for.html)
+adaptation of RetinaNet.
+* A novel SSD-based architecture called the [Pooling Pyramid Network](https://arxiv.org/abs/1807.03284) (PPN).
+* Releasing several [TPU](https://cloud.google.com/tpu/)-compatible models.
+These can be found in the `samples/configs/` directory with a comment in the
+pipeline configuration files indicating TPU compatibility.
+* Support for quantized training.
+* Updated documentation for new binaries, Cloud training, and [Tensorflow Lite](https://www.tensorflow.org/mobile/tflite/).
+
+See also our [expanded announcement blogpost](https://ai.googleblog.com/2018/07/accelerated-training-and-inference-with.html) and accompanying tutorial at the [TensorFlow blog](https://medium.com/tensorflow/training-and-serving-a-realtime-mobile-object-detector-in-30-minutes-with-cloud-tpus-b78971cf1193).
+
+Thanks to contributors: Sara Robinson, Aakanksha Chowdhery, Derek Chow,
+Pengchong Jin, Jonathan Huang, Vivek Rathod, Zhichao Lu, Ronny Votel
+
+
+### June 25, 2018
+
+Additional evaluation tools for the [Open Images Challenge 2018](https://storage.googleapis.com/openimages/web/challenge.html) are out.
+Check out our short tutorial on data preparation and running evaluation [here](g3doc/challenge_evaluation.md)!
+
+Thanks to contributors: Alina Kuznetsova
+
+### June 5, 2018
+
+We have released the implementation of evaluation metrics for both tracks of the [Open Images Challenge 2018](https://storage.googleapis.com/openimages/web/challenge.html) as a part of the Object Detection API - see the [evaluation protocols](g3doc/evaluation_protocols.md) for more details.
+Additionally, we have released a tool for hierarchical labels expansion for the Open Images Challenge: check out [oid_hierarchical_labels_expansion.py](dataset_tools/oid_hierarchical_labels_expansion.py).
+
+Thanks to contributors: Alina Kuznetsova, Vittorio Ferrari, Jasper Uijlings
+
+### April 30, 2018
+
+We have released a Faster R-CNN detector with ResNet-101 feature extractor trained on [AVA](https://research.google.com/ava/) v2.1.
+Compared with other commonly used object detectors, it changes the action classification loss function to per-class Sigmoid loss to handle boxes with multiple labels.
+The model is trained on the training split of AVA v2.1 for 1.5M iterations, and it achieves a mean AP of 11.25% over 60 classes on the validation split of AVA v2.1.
+For more details please refer to this [paper](https://arxiv.org/abs/1705.08421).
+
+Thanks to contributors: Chen Sun, David Ross
+
+### April 2, 2018
+
+Supercharge your mobile phones with the next generation mobile object detector!
+We are adding support for MobileNet V2 with SSDLite presented in
+[MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381).
+This model is 35% faster than Mobilenet V1 SSD on a Google Pixel phone CPU (200ms vs. 270ms) at the same accuracy.
+Along with the model definition, we are also releasing a model checkpoint trained on the COCO dataset.
+
+Thanks to contributors: Menglong Zhu, Mark Sandler, Zhichao Lu, Vivek Rathod, Jonathan Huang
+
+### February 9, 2018
+
+We now support instance segmentation!! In this API update we support a number of instance segmentation models similar to those discussed in the [Mask R-CNN paper](https://arxiv.org/abs/1703.06870). For further details refer to
+[our slides](http://presentations.cocodataset.org/Places17-GMRI.pdf) from the 2017 Coco + Places Workshop.
+Refer to the section on [Running an Instance Segmentation Model](g3doc/instance_segmentation.md) for instructions on how to configure a model
+that predicts masks in addition to object bounding boxes.
+
+Thanks to contributors: Alireza Fathi, Zhichao Lu, Vivek Rathod, Ronny Votel, Jonathan Huang
+
+### November 17, 2017
+
+As a part of the Open Images V3 release we have released:
+
+* An implementation of the Open Images evaluation metric and the [protocol](g3doc/evaluation_protocols.md#open-images).
+* Additional tools to separate inference of detection and evaluation (see [this tutorial](g3doc/oid_inference_and_evaluation.md)).
+* A new detection model trained on the Open Images V2 data release (see [Open Images model](g3doc/detection_model_zoo.md#open-images-models)).
+
+See more information on the [Open Images website](https://github.com/openimages/dataset)!
+
+Thanks to contributors: Stefan Popov, Alina Kuznetsova
+
+### November 6, 2017
+
+We have re-released faster versions of our (pre-trained) models in the
+model zoo. In addition to what
+was available before, we are also adding Faster R-CNN models trained on COCO
+with Inception V2 and Resnet-50 feature extractors, as well as a Faster R-CNN
+with Resnet-101 model trained on the KITTI dataset.
+
+Thanks to contributors: Jonathan Huang, Vivek Rathod, Derek Chow,
+Tal Remez, Chen Sun.
+
+### October 31, 2017
+
+We have released a new state-of-the-art model for object detection using
+the Faster-RCNN with the
+[NASNet-A image featurization](https://arxiv.org/abs/1707.07012). This
+model achieves mAP of 43.1% on the test-dev validation dataset for COCO,
+improving on the best available model in the zoo by 6% in terms
+of absolute mAP.
+
+Thanks to contributors: Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc Le
+
+### August 11, 2017
+
+We have released an update to the [Android Detect
+demo](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android)
+which will now run models trained using the Tensorflow Object
+Detection API on an Android device. By default, it currently runs a
+frozen SSD w/Mobilenet detector trained on COCO, but we encourage
+you to try out other detection models!
+
+Thanks to contributors: Jonathan Huang, Andrew Harp
+
+
+### June 15, 2017
+
+In addition to our base Tensorflow detection model definitions, this
+release includes:
+
+* A selection of trainable detection models, including:
+ * Single Shot Multibox Detector (SSD) with MobileNet,
+ * SSD with Inception V2,
+ * Region-Based Fully Convolutional Networks (R-FCN) with Resnet 101,
+ * Faster RCNN with Resnet 101,
+ * Faster RCNN with Inception Resnet v2
+* Frozen weights (trained on the COCO dataset) for each of the above models to
+ be used for out-of-the-box inference purposes.
+* A [Jupyter notebook](object_detection_tutorial.ipynb) for performing
+ out-of-the-box inference with one of our released models
+* Convenient [local training](g3doc/running_locally.md) scripts as well as
+ distributed training and evaluation pipelines via
+ [Google Cloud](g3doc/running_on_cloud.md).
+
+
+Thanks to contributors: Jonathan Huang, Vivek Rathod, Derek Chow,
+Chen Sun, Menglong Zhu, Matthew Tang, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Jasper Uijlings,
+Viacheslav Kovalevskyi, Kevin Murphy
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/anchor_generators/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/anchor_generators/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/anchor_generators/grid_anchor_generator.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/anchor_generators/grid_anchor_generator.py"
new file mode 100644
index 00000000..ba43f013
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/anchor_generators/grid_anchor_generator.py"
@@ -0,0 +1,205 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Generates grid anchors on the fly as used in Faster RCNN.
+
+Generates grid anchors on the fly as described in:
+"Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks"
+Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun.
+"""
+
+import tensorflow as tf
+
+from object_detection.core import anchor_generator
+from object_detection.core import box_list
+from object_detection.utils import ops
+
+
+class GridAnchorGenerator(anchor_generator.AnchorGenerator):
+ """Generates a grid of anchors at given scales and aspect ratios."""
+
+ def __init__(self,
+ scales=(0.5, 1.0, 2.0),
+ aspect_ratios=(0.5, 1.0, 2.0),
+ base_anchor_size=None,
+ anchor_stride=None,
+ anchor_offset=None):
+ """Constructs a GridAnchorGenerator.
+
+ Args:
+ scales: a list of (float) scales, default=(0.5, 1.0, 2.0)
+ aspect_ratios: a list of (float) aspect ratios, default=(0.5, 1.0, 2.0)
+      base_anchor_size: base anchor size as [height, width]
+                        (length-2 float32 list or tensor, default=[256, 256])
+ anchor_stride: difference in centers between base anchors for adjacent
+ grid positions (length-2 float32 list or tensor,
+ default=[16, 16])
+ anchor_offset: center of the anchor with scale and aspect ratio 1 for the
+ upper left element of the grid, this should be zero for
+ feature networks with only VALID padding and even receptive
+ field size, but may need additional calculation if other
+ padding is used (length-2 float32 list or tensor,
+ default=[0, 0])
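+
+    Example (an illustrative sketch, not part of the original docstring; it
+    simply mirrors the defaults above):
+      anchor_generator = GridAnchorGenerator(scales=(0.5, 1.0, 2.0),
+                                             aspect_ratios=(0.5, 1.0, 2.0))
+      anchors_list = anchor_generator.generate(feature_map_shape_list=[(8, 8)])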
+ """
+ # Handle argument defaults
+ if base_anchor_size is None:
+ base_anchor_size = [256, 256]
+ base_anchor_size = tf.to_float(tf.convert_to_tensor(base_anchor_size))
+ if anchor_stride is None:
+ anchor_stride = [16, 16]
+ anchor_stride = tf.to_float(tf.convert_to_tensor(anchor_stride))
+ if anchor_offset is None:
+ anchor_offset = [0, 0]
+ anchor_offset = tf.to_float(tf.convert_to_tensor(anchor_offset))
+
+ self._scales = scales
+ self._aspect_ratios = aspect_ratios
+ self._base_anchor_size = base_anchor_size
+ self._anchor_stride = anchor_stride
+ self._anchor_offset = anchor_offset
+
+ def name_scope(self):
+ return 'GridAnchorGenerator'
+
+ def num_anchors_per_location(self):
+ """Returns the number of anchors per spatial location.
+
+ Returns:
+ a list of integers, one for each expected feature map to be passed to
+ the `generate` function.
+ """
+ return [len(self._scales) * len(self._aspect_ratios)]
+
+ def _generate(self, feature_map_shape_list):
+ """Generates a collection of bounding boxes to be used as anchors.
+
+ Args:
+ feature_map_shape_list: list of pairs of convnet layer resolutions in the
+ format [(height_0, width_0)]. For example, setting
+ feature_map_shape_list=[(8, 8)] asks for anchors that correspond
+ to an 8x8 layer. For this anchor generator, only lists of length 1 are
+ allowed.
+
+ Returns:
+ boxes_list: a list of BoxLists each holding anchor boxes corresponding to
+ the input feature map shapes.
+
+ Raises:
+      ValueError: if feature_map_shape_list is not a list of length 1.
+ ValueError: if feature_map_shape_list does not consist of pairs of
+ integers
+ """
+ if not (isinstance(feature_map_shape_list, list)
+ and len(feature_map_shape_list) == 1):
+ raise ValueError('feature_map_shape_list must be a list of length 1.')
+ if not all([isinstance(list_item, tuple) and len(list_item) == 2
+ for list_item in feature_map_shape_list]):
+ raise ValueError('feature_map_shape_list must be a list of pairs.')
+ grid_height, grid_width = feature_map_shape_list[0]
+ scales_grid, aspect_ratios_grid = ops.meshgrid(self._scales,
+ self._aspect_ratios)
+ scales_grid = tf.reshape(scales_grid, [-1])
+ aspect_ratios_grid = tf.reshape(aspect_ratios_grid, [-1])
+ anchors = tile_anchors(grid_height,
+ grid_width,
+ scales_grid,
+ aspect_ratios_grid,
+ self._base_anchor_size,
+ self._anchor_stride,
+ self._anchor_offset)
+
+ num_anchors = anchors.num_boxes_static()
+ if num_anchors is None:
+ num_anchors = anchors.num_boxes()
+ anchor_indices = tf.zeros([num_anchors])
+ anchors.add_field('feature_map_index', anchor_indices)
+ return [anchors]
+
+
+def tile_anchors(grid_height,
+ grid_width,
+ scales,
+ aspect_ratios,
+ base_anchor_size,
+ anchor_stride,
+ anchor_offset):
+ """Create a tiled set of anchors strided along a grid in image space.
+
+ This op creates a set of anchor boxes by placing a "basis" collection of
+ boxes with user-specified scales and aspect ratios centered at evenly
+ distributed points along a grid. The basis collection is specified via the
+ scale and aspect_ratios arguments. For example, setting scales=[.1, .2, .2]
+ and aspect ratios = [2,2,1/2] means that we create three boxes: one with scale
+ .1, aspect ratio 2, one with scale .2, aspect ratio 2, and one with scale .2
+ and aspect ratio 1/2. Each box is multiplied by "base_anchor_size" before
+ placing it over its respective center.
+
+ Grid points are specified via grid_height, grid_width parameters as well as
+ the anchor_stride and anchor_offset parameters.
+
+ Args:
+ grid_height: size of the grid in the y direction (int or int scalar tensor)
+ grid_width: size of the grid in the x direction (int or int scalar tensor)
+ scales: a 1-d (float) tensor representing the scale of each box in the
+ basis set.
+ aspect_ratios: a 1-d (float) tensor representing the aspect ratio of each
+ box in the basis set. The length of the scales and aspect_ratios tensors
+ must be equal.
+ base_anchor_size: base anchor size as [height, width]
+ (float tensor of shape [2])
+ anchor_stride: difference in centers between base anchors for adjacent grid
+ positions (float tensor of shape [2])
+ anchor_offset: center of the anchor with scale and aspect ratio 1 for the
+ upper left element of the grid, this should be zero for
+ feature networks with only VALID padding and even receptive
+ field size, but may need some additional calculation if other
+ padding is used (float tensor of shape [2])
+ Returns:
+ a BoxList holding a collection of N anchor boxes
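+
+  Example (an illustrative sketch, not part of the original docstring):
+    anchors = tile_anchors(grid_height=2, grid_width=2,
+                           scales=tf.constant([0.5, 1.0]),
+                           aspect_ratios=tf.constant([1.0, 1.0]),
+                           base_anchor_size=tf.constant([256.0, 256.0]),
+                           anchor_stride=tf.constant([16.0, 16.0]),
+                           anchor_offset=tf.constant([0.0, 0.0]))
+    The returned BoxList then holds 2 (boxes per location) * 2 * 2 (grid
+    cells) = 8 anchors, with corners in [ymin, xmin, ymax, xmax] order.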
+ """
+ ratio_sqrts = tf.sqrt(aspect_ratios)
+ heights = scales / ratio_sqrts * base_anchor_size[0]
+ widths = scales * ratio_sqrts * base_anchor_size[1]
+
+ # Get a grid of box centers
+ y_centers = tf.to_float(tf.range(grid_height))
+ y_centers = y_centers * anchor_stride[0] + anchor_offset[0]
+ x_centers = tf.to_float(tf.range(grid_width))
+ x_centers = x_centers * anchor_stride[1] + anchor_offset[1]
+ x_centers, y_centers = ops.meshgrid(x_centers, y_centers)
+
+ widths_grid, x_centers_grid = ops.meshgrid(widths, x_centers)
+ heights_grid, y_centers_grid = ops.meshgrid(heights, y_centers)
+ bbox_centers = tf.stack([y_centers_grid, x_centers_grid], axis=3)
+ bbox_sizes = tf.stack([heights_grid, widths_grid], axis=3)
+ bbox_centers = tf.reshape(bbox_centers, [-1, 2])
+ bbox_sizes = tf.reshape(bbox_sizes, [-1, 2])
+ bbox_corners = _center_size_bbox_to_corners_bbox(bbox_centers, bbox_sizes)
+ return box_list.BoxList(bbox_corners)
+
+
+def _center_size_bbox_to_corners_bbox(centers, sizes):
+ """Converts bbox center-size representation to corners representation.
+
+ Args:
+ centers: a tensor with shape [N, 2] representing bounding box centers
+ sizes: a tensor with shape [N, 2] representing bounding boxes
+
+ Returns:
+ corners: tensor with shape [N, 4] representing bounding boxes in corners
+ representation
+ """
+ return tf.concat([centers - .5 * sizes, centers + .5 * sizes], 1)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/anchor_generators/grid_anchor_generator_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/anchor_generators/grid_anchor_generator_test.py"
new file mode 100644
index 00000000..8de74aa7
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/anchor_generators/grid_anchor_generator_test.py"
@@ -0,0 +1,104 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for object_detection.grid_anchor_generator."""
+import numpy as np
+import tensorflow as tf
+
+from object_detection.anchor_generators import grid_anchor_generator
+from object_detection.utils import test_case
+
+
+class GridAnchorGeneratorTest(test_case.TestCase):
+
+ def test_construct_single_anchor(self):
+ """Builds a 1x1 anchor grid to test the size of the output boxes."""
+ def graph_fn():
+ scales = [0.5, 1.0, 2.0]
+ aspect_ratios = [0.25, 1.0, 4.0]
+ anchor_offset = [7, -3]
+ anchor_generator = grid_anchor_generator.GridAnchorGenerator(
+ scales, aspect_ratios, anchor_offset=anchor_offset)
+ anchors_list = anchor_generator.generate(feature_map_shape_list=[(1, 1)])
+ anchor_corners = anchors_list[0].get()
+ return (anchor_corners,)
+ exp_anchor_corners = [[-121, -35, 135, 29], [-249, -67, 263, 61],
+ [-505, -131, 519, 125], [-57, -67, 71, 61],
+ [-121, -131, 135, 125], [-249, -259, 263, 253],
+ [-25, -131, 39, 125], [-57, -259, 71, 253],
+ [-121, -515, 135, 509]]
+ anchor_corners_out = self.execute(graph_fn, [])
+ self.assertAllClose(anchor_corners_out, exp_anchor_corners)
+
+ def test_construct_anchor_grid(self):
+ def graph_fn():
+ base_anchor_size = [10, 10]
+ anchor_stride = [19, 19]
+ anchor_offset = [0, 0]
+ scales = [0.5, 1.0, 2.0]
+ aspect_ratios = [1.0]
+
+ anchor_generator = grid_anchor_generator.GridAnchorGenerator(
+ scales,
+ aspect_ratios,
+ base_anchor_size=base_anchor_size,
+ anchor_stride=anchor_stride,
+ anchor_offset=anchor_offset)
+
+ anchors_list = anchor_generator.generate(feature_map_shape_list=[(2, 2)])
+ anchor_corners = anchors_list[0].get()
+ return (anchor_corners,)
+ exp_anchor_corners = [[-2.5, -2.5, 2.5, 2.5], [-5., -5., 5., 5.],
+ [-10., -10., 10., 10.], [-2.5, 16.5, 2.5, 21.5],
+ [-5., 14., 5, 24], [-10., 9., 10, 29],
+ [16.5, -2.5, 21.5, 2.5], [14., -5., 24, 5],
+ [9., -10., 29, 10], [16.5, 16.5, 21.5, 21.5],
+ [14., 14., 24, 24], [9., 9., 29, 29]]
+ anchor_corners_out = self.execute(graph_fn, [])
+ self.assertAllClose(anchor_corners_out, exp_anchor_corners)
+
+ def test_construct_anchor_grid_with_dynamic_feature_map_shapes(self):
+ def graph_fn(feature_map_height, feature_map_width):
+ base_anchor_size = [10, 10]
+ anchor_stride = [19, 19]
+ anchor_offset = [0, 0]
+ scales = [0.5, 1.0, 2.0]
+ aspect_ratios = [1.0]
+ anchor_generator = grid_anchor_generator.GridAnchorGenerator(
+ scales,
+ aspect_ratios,
+ base_anchor_size=base_anchor_size,
+ anchor_stride=anchor_stride,
+ anchor_offset=anchor_offset)
+
+ anchors_list = anchor_generator.generate(
+ feature_map_shape_list=[(feature_map_height, feature_map_width)])
+ anchor_corners = anchors_list[0].get()
+ return (anchor_corners,)
+
+ exp_anchor_corners = [[-2.5, -2.5, 2.5, 2.5], [-5., -5., 5., 5.],
+ [-10., -10., 10., 10.], [-2.5, 16.5, 2.5, 21.5],
+ [-5., 14., 5, 24], [-10., 9., 10, 29],
+ [16.5, -2.5, 21.5, 2.5], [14., -5., 24, 5],
+ [9., -10., 29, 10], [16.5, 16.5, 21.5, 21.5],
+ [14., 14., 24, 24], [9., 9., 29, 29]]
+ anchor_corners_out = self.execute_cpu(graph_fn,
+ [np.array(2, dtype=np.int32),
+ np.array(2, dtype=np.int32)])
+ self.assertAllClose(anchor_corners_out, exp_anchor_corners)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/anchor_generators/multiple_grid_anchor_generator.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/anchor_generators/multiple_grid_anchor_generator.py"
new file mode 100644
index 00000000..b870adce
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/anchor_generators/multiple_grid_anchor_generator.py"
@@ -0,0 +1,334 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Generates grid anchors on the fly corresponding to multiple CNN layers.
+
+Generates grid anchors on the fly corresponding to multiple CNN layers as
+described in:
+"SSD: Single Shot MultiBox Detector"
+Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed,
+Cheng-Yang Fu, Alexander C. Berg
+(see Section 2.2: Choosing scales and aspect ratios for default boxes)
+"""
+
+import numpy as np
+
+import tensorflow as tf
+
+from object_detection.anchor_generators import grid_anchor_generator
+from object_detection.core import anchor_generator
+from object_detection.core import box_list_ops
+
+
+class MultipleGridAnchorGenerator(anchor_generator.AnchorGenerator):
+ """Generate a grid of anchors for multiple CNN layers."""
+
+ def __init__(self,
+ box_specs_list,
+ base_anchor_size=None,
+ anchor_strides=None,
+ anchor_offsets=None,
+ clip_window=None):
+ """Constructs a MultipleGridAnchorGenerator.
+
+    To construct anchors at multiple grid resolutions, one must provide a
+    feature_map_shape_list (e.g., [(8, 8), (4, 4)]) and, for each grid
+    size, a corresponding list of (scale, aspect ratio) box specifications.
+
+ For example:
+ box_specs_list = [[(.1, 1.0), (.1, 2.0)], # for 8x8 grid
+ [(.2, 1.0), (.3, 1.0), (.2, 2.0)]] # for 4x4 grid
+
+ To support the fully convolutional setting, we pass grid sizes in at
+ generation time, while scale and aspect ratios are fixed at construction
+ time.
+
+ Args:
+ box_specs_list: list of list of (scale, aspect ratio) pairs with the
+ outside list having the same number of entries as feature_map_shape_list
+ (which is passed in at generation time).
+ base_anchor_size: base anchor size as [height, width]
+ (length-2 float tensor, default=[1.0, 1.0]).
+ The height and width values are normalized to the
+ minimum dimension of the input height and width, so that
+ when the base anchor height equals the base anchor
+ width, the resulting anchor is square even if the input
+ image is not square.
+ anchor_strides: list of pairs of strides in pixels (in y and x directions
+ respectively). For example, setting anchor_strides=[(25, 25), (50, 50)]
+ means that we want the anchors corresponding to the first layer to be
+ strided by 25 pixels and those in the second layer to be strided by 50
+ pixels in both y and x directions. If anchor_strides=None, they are set
+ to be the reciprocal of the corresponding feature map shapes.
+ anchor_offsets: list of pairs of offsets in pixels (in y and x directions
+ respectively). The offset specifies where we want the center of the
+ (0, 0)-th anchor to lie for each layer. For example, setting
+      anchor_offsets=[(10, 10), (20, 20)] means that we want the
+      (0, 0)-th anchor of the first layer to lie at (10, 10) in pixel space
+      and likewise that we want the (0, 0)-th anchor of the second layer to
+      lie at (20, 20) in pixel space. If anchor_offsets=None, then they are
+ set to be half of the corresponding anchor stride.
+ clip_window: a tensor of shape [4] specifying a window to which all
+ anchors should be clipped. If clip_window is None, then no clipping
+ is performed.
+
+ Raises:
+ ValueError: if box_specs_list is not a list of list of pairs
+ ValueError: if clip_window is not either None or a tensor of shape [4]
+ """
+ if isinstance(box_specs_list, list) and all(
+ [isinstance(list_item, list) for list_item in box_specs_list]):
+ self._box_specs = box_specs_list
+ else:
+ raise ValueError('box_specs_list is expected to be a '
+ 'list of lists of pairs')
+ if base_anchor_size is None:
+ base_anchor_size = tf.constant([256, 256], dtype=tf.float32)
+ self._base_anchor_size = base_anchor_size
+ self._anchor_strides = anchor_strides
+ self._anchor_offsets = anchor_offsets
+ if clip_window is not None and clip_window.get_shape().as_list() != [4]:
+ raise ValueError('clip_window must either be None or a shape [4] tensor')
+ self._clip_window = clip_window
+ self._scales = []
+ self._aspect_ratios = []
+ for box_spec in self._box_specs:
+ if not all([isinstance(entry, tuple) and len(entry) == 2
+ for entry in box_spec]):
+ raise ValueError('box_specs_list is expected to be a '
+ 'list of lists of pairs')
+ scales, aspect_ratios = zip(*box_spec)
+ self._scales.append(scales)
+ self._aspect_ratios.append(aspect_ratios)
+
+ for arg, arg_name in zip([self._anchor_strides, self._anchor_offsets],
+ ['anchor_strides', 'anchor_offsets']):
+ if arg and not (isinstance(arg, list) and
+ len(arg) == len(self._box_specs)):
+ raise ValueError('%s must be a list with the same length '
+ 'as self._box_specs' % arg_name)
+ if arg and not all([
+ isinstance(list_item, tuple) and len(list_item) == 2
+ for list_item in arg
+ ]):
+ raise ValueError('%s must be a list of pairs.' % arg_name)
+
+ def name_scope(self):
+ return 'MultipleGridAnchorGenerator'
+
+ def num_anchors_per_location(self):
+ """Returns the number of anchors per spatial location.
+
+ Returns:
+ a list of integers, one for each expected feature map to be passed to
+ the Generate function.
+ """
+ return [len(box_specs) for box_specs in self._box_specs]
+
+ def _generate(self, feature_map_shape_list, im_height=1, im_width=1):
+ """Generates a collection of bounding boxes to be used as anchors.
+
+ The number of anchors generated for a single grid with shape MxM where we
+ place k boxes over each grid center is k*M^2 and thus the total number of
+ anchors is the sum over all grids. In our box_specs_list example
+ (see the constructor docstring), we would place two boxes over each grid
+ point on an 8x8 grid and three boxes over each grid point on a 4x4 grid and
+ thus end up with 2*8^2 + 3*4^2 = 176 anchors in total. The layout of the
+ output anchors follows the order of how the grid sizes and box_specs are
+ specified (with box_spec index varying the fastest, followed by width
+ index, then height index, then grid index).
+
+ Args:
+ feature_map_shape_list: list of pairs of convnet layer resolutions in the
+ format [(height_0, width_0), (height_1, width_1), ...]. For example,
+ setting feature_map_shape_list=[(8, 8), (7, 7)] asks for anchors that
+ correspond to an 8x8 layer followed by a 7x7 layer.
+ im_height: the height of the image to generate the grid for. If both
+ im_height and im_width are 1, the generated anchors default to
+ absolute coordinates, otherwise normalized coordinates are produced.
+ im_width: the width of the image to generate the grid for. If both
+ im_height and im_width are 1, the generated anchors default to
+ absolute coordinates, otherwise normalized coordinates are produced.
+
+ Returns:
+ boxes_list: a list of BoxLists each holding anchor boxes corresponding to
+ the input feature map shapes.
+
+ Raises:
+ ValueError: if feature_map_shape_list, box_specs_list do not have the same
+ length.
+ ValueError: if feature_map_shape_list does not consist of pairs of
+ integers
+ """
+ if not (isinstance(feature_map_shape_list, list)
+ and len(feature_map_shape_list) == len(self._box_specs)):
+ raise ValueError('feature_map_shape_list must be a list with the same '
+ 'length as self._box_specs')
+ if not all([isinstance(list_item, tuple) and len(list_item) == 2
+ for list_item in feature_map_shape_list]):
+ raise ValueError('feature_map_shape_list must be a list of pairs.')
+
+ im_height = tf.to_float(im_height)
+ im_width = tf.to_float(im_width)
+
+ if not self._anchor_strides:
+ anchor_strides = [(1.0 / tf.to_float(pair[0]), 1.0 / tf.to_float(pair[1]))
+ for pair in feature_map_shape_list]
+ else:
+ anchor_strides = [(tf.to_float(stride[0]) / im_height,
+ tf.to_float(stride[1]) / im_width)
+ for stride in self._anchor_strides]
+ if not self._anchor_offsets:
+ anchor_offsets = [(0.5 * stride[0], 0.5 * stride[1])
+ for stride in anchor_strides]
+ else:
+ anchor_offsets = [(tf.to_float(offset[0]) / im_height,
+ tf.to_float(offset[1]) / im_width)
+ for offset in self._anchor_offsets]
+
+ for arg, arg_name in zip([anchor_strides, anchor_offsets],
+ ['anchor_strides', 'anchor_offsets']):
+ if not (isinstance(arg, list) and len(arg) == len(self._box_specs)):
+ raise ValueError('%s must be a list with the same length '
+ 'as self._box_specs' % arg_name)
+ if not all([isinstance(list_item, tuple) and len(list_item) == 2
+ for list_item in arg]):
+ raise ValueError('%s must be a list of pairs.' % arg_name)
+
+ anchor_grid_list = []
+ min_im_shape = tf.minimum(im_height, im_width)
+ scale_height = min_im_shape / im_height
+ scale_width = min_im_shape / im_width
+ base_anchor_size = [
+ scale_height * self._base_anchor_size[0],
+ scale_width * self._base_anchor_size[1]
+ ]
+ for feature_map_index, (grid_size, scales, aspect_ratios, stride,
+ offset) in enumerate(
+ zip(feature_map_shape_list, self._scales,
+ self._aspect_ratios, anchor_strides,
+ anchor_offsets)):
+ tiled_anchors = grid_anchor_generator.tile_anchors(
+ grid_height=grid_size[0],
+ grid_width=grid_size[1],
+ scales=scales,
+ aspect_ratios=aspect_ratios,
+ base_anchor_size=base_anchor_size,
+ anchor_stride=stride,
+ anchor_offset=offset)
+ if self._clip_window is not None:
+ tiled_anchors = box_list_ops.clip_to_window(
+ tiled_anchors, self._clip_window, filter_nonoverlapping=False)
+ num_anchors_in_layer = tiled_anchors.num_boxes_static()
+ if num_anchors_in_layer is None:
+ num_anchors_in_layer = tiled_anchors.num_boxes()
+ anchor_indices = feature_map_index * tf.ones([num_anchors_in_layer])
+ tiled_anchors.add_field('feature_map_index', anchor_indices)
+ anchor_grid_list.append(tiled_anchors)
+
+ return anchor_grid_list
+
+
+def create_ssd_anchors(num_layers=6,
+ min_scale=0.2,
+ max_scale=0.95,
+ scales=None,
+ aspect_ratios=(1.0, 2.0, 3.0, 1.0 / 2, 1.0 / 3),
+ interpolated_scale_aspect_ratio=1.0,
+ base_anchor_size=None,
+ anchor_strides=None,
+ anchor_offsets=None,
+ reduce_boxes_in_lowest_layer=True):
+ """Creates MultipleGridAnchorGenerator for SSD anchors.
+
+ This function instantiates a MultipleGridAnchorGenerator that reproduces
+ ``default box`` construction proposed by Liu et al in the SSD paper.
+ See Section 2.2 for details. Grid sizes are assumed to be passed in
+ at generation time from finest resolution to coarsest resolution --- this is
+ used to (linearly) interpolate scales of anchor boxes corresponding to the
+ intermediate grid sizes.
+
+ Anchors that are returned by calling the `generate` method on the returned
+ MultipleGridAnchorGenerator object are always in normalized coordinates
+ and clipped to the unit square: (i.e. all coordinates lie in [0, 1]x[0, 1]).
+
+ Args:
+ num_layers: integer number of grid layers to create anchors for (actual
+ grid sizes passed in at generation time)
+ min_scale: scale of anchors corresponding to finest resolution (float)
+ max_scale: scale of anchors corresponding to coarsest resolution (float)
+    scales: A list of anchor scales to use. When not None and not empty,
+ min_scale and max_scale are not used.
+ aspect_ratios: list or tuple of (float) aspect ratios to place on each
+ grid point.
+ interpolated_scale_aspect_ratio: An additional anchor is added with this
+ aspect ratio and a scale interpolated between the scale for a layer
+ and the scale for the next layer (1.0 for the last layer).
+ This anchor is not included if this value is 0.
+ base_anchor_size: base anchor size as [height, width].
+ The height and width values are normalized to the minimum dimension of the
+ input height and width, so that when the base anchor height equals the
+ base anchor width, the resulting anchor is square even if the input image
+ is not square.
+ anchor_strides: list of pairs of strides in pixels (in y and x directions
+ respectively). For example, setting anchor_strides=[(25, 25), (50, 50)]
+ means that we want the anchors corresponding to the first layer to be
+ strided by 25 pixels and those in the second layer to be strided by 50
+ pixels in both y and x directions. If anchor_strides=None, they are set to
+ be the reciprocal of the corresponding feature map shapes.
+ anchor_offsets: list of pairs of offsets in pixels (in y and x directions
+ respectively). The offset specifies where we want the center of the
+ (0, 0)-th anchor to lie for each layer. For example, setting
+    anchor_offsets=[(10, 10), (20, 20)] means that we want the
+    (0, 0)-th anchor of the first layer to lie at (10, 10) in pixel space
+    and likewise that we want the (0, 0)-th anchor of the second layer to lie
+    at (20, 20) in pixel space. If anchor_offsets=None, then they are set to
+ be half of the corresponding anchor stride.
+ reduce_boxes_in_lowest_layer: a boolean to indicate whether the fixed 3
+ boxes per location is used in the lowest layer.
+
+ Returns:
+ a MultipleGridAnchorGenerator
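+
+  Example (an illustrative sketch, not part of the original docstring; the
+  feature map shapes mirror those used in the unit tests):
+    anchor_generator = create_ssd_anchors(num_layers=6, min_scale=0.2,
+                                          max_scale=0.95)
+    anchors_list = anchor_generator.generate(
+        feature_map_shape_list=[(38, 38), (19, 19), (10, 10),
+                                (5, 5), (3, 3), (1, 1)])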
+ """
+ if base_anchor_size is None:
+ base_anchor_size = [1.0, 1.0]
+ base_anchor_size = tf.constant(base_anchor_size, dtype=tf.float32)
+ box_specs_list = []
+ if scales is None or not scales:
+ scales = [min_scale + (max_scale - min_scale) * i / (num_layers - 1)
+ for i in range(num_layers)] + [1.0]
+ else:
+ # Add 1.0 to the end, which will only be used in scale_next below and used
+ # for computing an interpolated scale for the largest scale in the list.
+ scales += [1.0]
+
+ for layer, scale, scale_next in zip(
+ range(num_layers), scales[:-1], scales[1:]):
+ layer_box_specs = []
+ if layer == 0 and reduce_boxes_in_lowest_layer:
+ layer_box_specs = [(0.1, 1.0), (scale, 2.0), (scale, 0.5)]
+ else:
+ for aspect_ratio in aspect_ratios:
+ layer_box_specs.append((scale, aspect_ratio))
+ # Add one more anchor, with a scale between the current scale, and the
+ # scale for the next layer, with a specified aspect ratio (1.0 by
+ # default).
+ if interpolated_scale_aspect_ratio > 0.0:
+ layer_box_specs.append((np.sqrt(scale*scale_next),
+ interpolated_scale_aspect_ratio))
+ box_specs_list.append(layer_box_specs)
+
+ return MultipleGridAnchorGenerator(box_specs_list, base_anchor_size,
+ anchor_strides, anchor_offsets)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/anchor_generators/multiple_grid_anchor_generator_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/anchor_generators/multiple_grid_anchor_generator_test.py"
new file mode 100644
index 00000000..070d81d3
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/anchor_generators/multiple_grid_anchor_generator_test.py"
@@ -0,0 +1,289 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for anchor_generators.multiple_grid_anchor_generator_test.py."""
+
+import numpy as np
+
+import tensorflow as tf
+
+from object_detection.anchor_generators import multiple_grid_anchor_generator as ag
+from object_detection.utils import test_case
+
+
+class MultipleGridAnchorGeneratorTest(test_case.TestCase):
+
+ def test_construct_single_anchor_grid(self):
+ """Builds a 1x1 anchor grid to test the size of the output boxes."""
+ def graph_fn():
+
+ box_specs_list = [[(.5, .25), (1.0, .25), (2.0, .25),
+ (.5, 1.0), (1.0, 1.0), (2.0, 1.0),
+ (.5, 4.0), (1.0, 4.0), (2.0, 4.0)]]
+ anchor_generator = ag.MultipleGridAnchorGenerator(
+ box_specs_list,
+ base_anchor_size=tf.constant([256, 256], dtype=tf.float32),
+ anchor_strides=[(16, 16)],
+ anchor_offsets=[(7, -3)])
+ anchors_list = anchor_generator.generate(feature_map_shape_list=[(1, 1)])
+ return anchors_list[0].get()
+ exp_anchor_corners = [[-121, -35, 135, 29], [-249, -67, 263, 61],
+ [-505, -131, 519, 125], [-57, -67, 71, 61],
+ [-121, -131, 135, 125], [-249, -259, 263, 253],
+ [-25, -131, 39, 125], [-57, -259, 71, 253],
+ [-121, -515, 135, 509]]
+
+ anchor_corners_out = self.execute(graph_fn, [])
+ self.assertAllClose(anchor_corners_out, exp_anchor_corners)
+
+ def test_construct_anchor_grid(self):
+ def graph_fn():
+ box_specs_list = [[(0.5, 1.0), (1.0, 1.0), (2.0, 1.0)]]
+
+ anchor_generator = ag.MultipleGridAnchorGenerator(
+ box_specs_list,
+ base_anchor_size=tf.constant([10, 10], dtype=tf.float32),
+ anchor_strides=[(19, 19)],
+ anchor_offsets=[(0, 0)])
+ anchors_list = anchor_generator.generate(feature_map_shape_list=[(2, 2)])
+ return anchors_list[0].get()
+ exp_anchor_corners = [[-2.5, -2.5, 2.5, 2.5], [-5., -5., 5., 5.],
+ [-10., -10., 10., 10.], [-2.5, 16.5, 2.5, 21.5],
+ [-5., 14., 5, 24], [-10., 9., 10, 29],
+ [16.5, -2.5, 21.5, 2.5], [14., -5., 24, 5],
+ [9., -10., 29, 10], [16.5, 16.5, 21.5, 21.5],
+ [14., 14., 24, 24], [9., 9., 29, 29]]
+
+ anchor_corners_out = self.execute(graph_fn, [])
+ self.assertAllClose(anchor_corners_out, exp_anchor_corners)
+
+ def test_construct_anchor_grid_non_square(self):
+
+ def graph_fn():
+ box_specs_list = [[(1.0, 1.0)]]
+ anchor_generator = ag.MultipleGridAnchorGenerator(
+ box_specs_list, base_anchor_size=tf.constant([1, 1],
+ dtype=tf.float32))
+ anchors_list = anchor_generator.generate(feature_map_shape_list=[(
+ tf.constant(1, dtype=tf.int32), tf.constant(2, dtype=tf.int32))])
+ return anchors_list[0].get()
+
+ exp_anchor_corners = [[0., -0.25, 1., 0.75], [0., 0.25, 1., 1.25]]
+ anchor_corners_out = self.execute(graph_fn, [])
+ self.assertAllClose(anchor_corners_out, exp_anchor_corners)
+
+ def test_construct_dynamic_size_anchor_grid(self):
+
+ def graph_fn(height, width):
+ box_specs_list = [[(1.0, 1.0)]]
+ anchor_generator = ag.MultipleGridAnchorGenerator(
+ box_specs_list, base_anchor_size=tf.constant([1, 1],
+ dtype=tf.float32))
+ anchors_list = anchor_generator.generate(feature_map_shape_list=[(height,
+ width)])
+ return anchors_list[0].get()
+
+ exp_anchor_corners = [[0., -0.25, 1., 0.75], [0., 0.25, 1., 1.25]]
+
+ anchor_corners_out = self.execute_cpu(graph_fn,
+ [np.array(1, dtype=np.int32),
+ np.array(2, dtype=np.int32)])
+ self.assertAllClose(anchor_corners_out, exp_anchor_corners)
+
+ def test_construct_anchor_grid_normalized(self):
+ def graph_fn():
+ box_specs_list = [[(1.0, 1.0)]]
+
+ anchor_generator = ag.MultipleGridAnchorGenerator(
+ box_specs_list, base_anchor_size=tf.constant([1, 1],
+ dtype=tf.float32))
+ anchors_list = anchor_generator.generate(
+ feature_map_shape_list=[(tf.constant(1, dtype=tf.int32), tf.constant(
+ 2, dtype=tf.int32))],
+ im_height=320,
+ im_width=640)
+ return anchors_list[0].get()
+
+ exp_anchor_corners = [[0., 0., 1., 0.5], [0., 0.5, 1., 1.]]
+ anchor_corners_out = self.execute(graph_fn, [])
+ self.assertAllClose(anchor_corners_out, exp_anchor_corners)
+
+ def test_construct_multiple_grids(self):
+
+ def graph_fn():
+ box_specs_list = [[(1.0, 1.0), (2.0, 1.0), (1.0, 0.5)],
+ [(1.0, 1.0), (1.0, 0.5)]]
+
+ anchor_generator = ag.MultipleGridAnchorGenerator(
+ box_specs_list,
+ base_anchor_size=tf.constant([1.0, 1.0], dtype=tf.float32),
+ anchor_strides=[(.25, .25), (.5, .5)],
+ anchor_offsets=[(.125, .125), (.25, .25)])
+ anchors_list = anchor_generator.generate(feature_map_shape_list=[(4, 4), (
+ 2, 2)])
+ return [anchors.get() for anchors in anchors_list]
+ # height and width of box with .5 aspect ratio
+ h = np.sqrt(2)
+ w = 1.0/np.sqrt(2)
+ exp_small_grid_corners = [[-.25, -.25, .75, .75],
+ [.25-.5*h, .25-.5*w, .25+.5*h, .25+.5*w],
+ [-.25, .25, .75, 1.25],
+ [.25-.5*h, .75-.5*w, .25+.5*h, .75+.5*w],
+ [.25, -.25, 1.25, .75],
+ [.75-.5*h, .25-.5*w, .75+.5*h, .25+.5*w],
+ [.25, .25, 1.25, 1.25],
+ [.75-.5*h, .75-.5*w, .75+.5*h, .75+.5*w]]
+ # only test first entry of larger set of anchors
+ exp_big_grid_corners = [[.125-.5, .125-.5, .125+.5, .125+.5],
+ [.125-1.0, .125-1.0, .125+1.0, .125+1.0],
+ [.125-.5*h, .125-.5*w, .125+.5*h, .125+.5*w],]
+
+ anchor_corners_out = np.concatenate(self.execute(graph_fn, []), axis=0)
+ self.assertEquals(anchor_corners_out.shape, (56, 4))
+ big_grid_corners = anchor_corners_out[0:3, :]
+ small_grid_corners = anchor_corners_out[48:, :]
+ self.assertAllClose(small_grid_corners, exp_small_grid_corners)
+ self.assertAllClose(big_grid_corners, exp_big_grid_corners)
+
+ def test_construct_multiple_grids_with_clipping(self):
+
+ def graph_fn():
+ box_specs_list = [[(1.0, 1.0), (2.0, 1.0), (1.0, 0.5)],
+ [(1.0, 1.0), (1.0, 0.5)]]
+
+ clip_window = tf.constant([0, 0, 1, 1], dtype=tf.float32)
+ anchor_generator = ag.MultipleGridAnchorGenerator(
+ box_specs_list,
+ base_anchor_size=tf.constant([1.0, 1.0], dtype=tf.float32),
+ clip_window=clip_window)
+ anchors_list = anchor_generator.generate(feature_map_shape_list=[(4, 4), (
+ 2, 2)])
+ return [anchors.get() for anchors in anchors_list]
+ # height and width of box with .5 aspect ratio
+ h = np.sqrt(2)
+ w = 1.0/np.sqrt(2)
+ exp_small_grid_corners = [[0, 0, .75, .75],
+ [0, 0, .25+.5*h, .25+.5*w],
+ [0, .25, .75, 1],
+ [0, .75-.5*w, .25+.5*h, 1],
+ [.25, 0, 1, .75],
+ [.75-.5*h, 0, 1, .25+.5*w],
+ [.25, .25, 1, 1],
+ [.75-.5*h, .75-.5*w, 1, 1]]
+
+ anchor_corners_out = np.concatenate(self.execute(graph_fn, []), axis=0)
+ small_grid_corners = anchor_corners_out[48:, :]
+ self.assertAllClose(small_grid_corners, exp_small_grid_corners)
+
+ def test_invalid_box_specs(self):
+ # not all box specs are pairs
+ box_specs_list = [[(1.0, 1.0), (2.0, 1.0), (1.0, 0.5)],
+ [(1.0, 1.0), (1.0, 0.5, .3)]]
+ with self.assertRaises(ValueError):
+ ag.MultipleGridAnchorGenerator(box_specs_list)
+
+ # box_specs_list is not a list of lists
+ box_specs_list = [(1.0, 1.0), (2.0, 1.0), (1.0, 0.5)]
+ with self.assertRaises(ValueError):
+ ag.MultipleGridAnchorGenerator(box_specs_list)
+
+ def test_invalid_generate_arguments(self):
+ box_specs_list = [[(1.0, 1.0), (2.0, 1.0), (1.0, 0.5)],
+ [(1.0, 1.0), (1.0, 0.5)]]
+
+ # incompatible lengths with box_specs_list
+ with self.assertRaises(ValueError):
+ anchor_generator = ag.MultipleGridAnchorGenerator(
+ box_specs_list,
+ base_anchor_size=tf.constant([1.0, 1.0], dtype=tf.float32),
+ anchor_strides=[(.25, .25)],
+ anchor_offsets=[(.125, .125), (.25, .25)])
+ anchor_generator.generate(feature_map_shape_list=[(4, 4), (2, 2)])
+ with self.assertRaises(ValueError):
+ anchor_generator = ag.MultipleGridAnchorGenerator(
+ box_specs_list,
+ base_anchor_size=tf.constant([1.0, 1.0], dtype=tf.float32),
+ anchor_strides=[(.25, .25), (.5, .5)],
+ anchor_offsets=[(.125, .125), (.25, .25)])
+ anchor_generator.generate(feature_map_shape_list=[(4, 4), (2, 2), (1, 1)])
+ with self.assertRaises(ValueError):
+ anchor_generator = ag.MultipleGridAnchorGenerator(
+ box_specs_list,
+ base_anchor_size=tf.constant([1.0, 1.0], dtype=tf.float32),
+ anchor_strides=[(.5, .5)],
+ anchor_offsets=[(.25, .25)])
+ anchor_generator.generate(feature_map_shape_list=[(4, 4), (2, 2)])
+
+ # not pairs
+ with self.assertRaises(ValueError):
+ anchor_generator = ag.MultipleGridAnchorGenerator(
+ box_specs_list,
+ base_anchor_size=tf.constant([1.0, 1.0], dtype=tf.float32),
+ anchor_strides=[(.25, .25), (.5, .5)],
+ anchor_offsets=[(.125, .125), (.25, .25)])
+ anchor_generator.generate(feature_map_shape_list=[(4, 4, 4), (2, 2)])
+ with self.assertRaises(ValueError):
+ anchor_generator = ag.MultipleGridAnchorGenerator(
+ box_specs_list,
+ base_anchor_size=tf.constant([1.0, 1.0], dtype=tf.float32),
+ anchor_strides=[(.25, .25, .1), (.5, .5)],
+ anchor_offsets=[(.125, .125), (.25, .25)])
+ anchor_generator.generate(feature_map_shape_list=[(4, 4), (2, 2)])
+ with self.assertRaises(ValueError):
+ anchor_generator = ag.MultipleGridAnchorGenerator(
+ box_specs_list,
+ base_anchor_size=tf.constant([1.0, 1.0], dtype=tf.float32),
+ anchor_strides=[(.25, .25), (.5, .5)],
+ anchor_offsets=[(.125, .125), (.25, .25)])
+ anchor_generator.generate(feature_map_shape_list=[(4), (2, 2)])
+
+
+class CreateSSDAnchorsTest(test_case.TestCase):
+
+ def test_create_ssd_anchors_returns_correct_shape(self):
+
+ def graph_fn1():
+ anchor_generator = ag.create_ssd_anchors(
+ num_layers=6,
+ min_scale=0.2,
+ max_scale=0.95,
+ aspect_ratios=(1.0, 2.0, 3.0, 1.0 / 2, 1.0 / 3),
+ reduce_boxes_in_lowest_layer=True)
+
+ feature_map_shape_list = [(38, 38), (19, 19), (10, 10),
+ (5, 5), (3, 3), (1, 1)]
+ anchors_list = anchor_generator.generate(
+ feature_map_shape_list=feature_map_shape_list)
+ return [anchors.get() for anchors in anchors_list]
+ anchor_corners_out = np.concatenate(self.execute(graph_fn1, []), axis=0)
+    self.assertEqual(anchor_corners_out.shape, (7308, 4))
+
+ def graph_fn2():
+ anchor_generator = ag.create_ssd_anchors(
+ num_layers=6, min_scale=0.2, max_scale=0.95,
+ aspect_ratios=(1.0, 2.0, 3.0, 1.0/2, 1.0/3),
+ reduce_boxes_in_lowest_layer=False)
+
+ feature_map_shape_list = [(38, 38), (19, 19), (10, 10),
+ (5, 5), (3, 3), (1, 1)]
+ anchors_list = anchor_generator.generate(
+ feature_map_shape_list=feature_map_shape_list)
+ return [anchors.get() for anchors in anchors_list]
+ anchor_corners_out = np.concatenate(self.execute(graph_fn2, []), axis=0)
+    self.assertEqual(anchor_corners_out.shape, (11640, 4))
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/anchor_generators/multiscale_grid_anchor_generator.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/anchor_generators/multiscale_grid_anchor_generator.py"
new file mode 100644
index 00000000..cd2440a4
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/anchor_generators/multiscale_grid_anchor_generator.py"
@@ -0,0 +1,145 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Generates grid anchors on the fly corresponding to multiple CNN layers.
+
+Generates grid anchors on the fly corresponding to multiple CNN layers as
+described in:
+"Focal Loss for Dense Object Detection" (https://arxiv.org/abs/1708.02002)
+T.-Y. Lin, P. Goyal, R. Girshick, K. He, P. Dollar
+"""
+
+from object_detection.anchor_generators import grid_anchor_generator
+from object_detection.core import anchor_generator
+from object_detection.core import box_list_ops
+
+
+class MultiscaleGridAnchorGenerator(anchor_generator.AnchorGenerator):
+ """Generate a grid of anchors for multiple CNN layers of different scale."""
+
+ def __init__(self, min_level, max_level, anchor_scale, aspect_ratios,
+ scales_per_octave, normalize_coordinates=True):
+ """Constructs a MultiscaleGridAnchorGenerator.
+
+    To construct anchors at multiple scale resolutions, one must provide the
+    minimum and maximum levels of a scale pyramid. The anchor scale determines
+    the size of the base anchor relative to the stride of the corresponding
+    feature map. The generator maps each pixel location on a feature map to
+    multiple anchors that have different aspect ratios and intermediate
+    scales.
+
+ Args:
+ min_level: minimum level in feature pyramid.
+ max_level: maximum level in feature pyramid.
+ anchor_scale: anchor scale and feature stride define the size of the base
+ anchor on an image. For example, given a feature pyramid with strides
+        [2^3, ..., 2^7] and anchor scale 4, the base anchor sizes are
+        4 * [2^3, ..., 2^7].
+ aspect_ratios: list or tuple of (float) aspect ratios to place on each
+ grid point.
+ scales_per_octave: integer number of intermediate scales per scale octave.
+ normalize_coordinates: whether to produce anchors in normalized
+ coordinates. (defaults to True).
+ """
+ self._anchor_grid_info = []
+ self._aspect_ratios = aspect_ratios
+ self._scales_per_octave = scales_per_octave
+ self._normalize_coordinates = normalize_coordinates
+
+ scales = [2**(float(scale) / scales_per_octave)
+ for scale in range(scales_per_octave)]
+ aspects = list(aspect_ratios)
+
+ for level in range(min_level, max_level + 1):
+ anchor_stride = [2**level, 2**level]
+ base_anchor_size = [2**level * anchor_scale, 2**level * anchor_scale]
+ self._anchor_grid_info.append({
+ 'level': level,
+ 'info': [scales, aspects, base_anchor_size, anchor_stride]
+ })
+
+ def name_scope(self):
+ return 'MultiscaleGridAnchorGenerator'
+
+ def num_anchors_per_location(self):
+ """Returns the number of anchors per spatial location.
+
+ Returns:
+ a list of integers, one for each expected feature map to be passed to
+ the Generate function.
+ """
+ return len(self._anchor_grid_info) * [
+ len(self._aspect_ratios) * self._scales_per_octave]
+
+ def _generate(self, feature_map_shape_list, im_height=1, im_width=1):
+ """Generates a collection of bounding boxes to be used as anchors.
+
+ Currently we require the input image shape to be statically defined. That
+ is, im_height and im_width should be integers rather than tensors.
+
+ Args:
+ feature_map_shape_list: list of pairs of convnet layer resolutions in the
+ format [(height_0, width_0), (height_1, width_1), ...]. For example,
+ setting feature_map_shape_list=[(8, 8), (7, 7)] asks for anchors that
+ correspond to an 8x8 layer followed by a 7x7 layer.
+ im_height: the height of the image to generate the grid for. If both
+ im_height and im_width are 1, anchors can only be generated in
+ absolute coordinates.
+ im_width: the width of the image to generate the grid for. If both
+ im_height and im_width are 1, anchors can only be generated in
+ absolute coordinates.
+
+ Returns:
+ boxes_list: a list of BoxLists each holding anchor boxes corresponding to
+ the input feature map shapes.
+ Raises:
+ ValueError: if im_height and im_width are not integers.
+ ValueError: if im_height and im_width are 1, but normalized coordinates
+ were requested.
+ """
+ anchor_grid_list = []
+ for feat_shape, grid_info in zip(feature_map_shape_list,
+ self._anchor_grid_info):
+ # TODO(rathodv) check the feature_map_shape_list is consistent with
+ # self._anchor_grid_info
+ level = grid_info['level']
+ stride = 2**level
+ scales, aspect_ratios, base_anchor_size, anchor_stride = grid_info['info']
+ feat_h = feat_shape[0]
+ feat_w = feat_shape[1]
+ anchor_offset = [0, 0]
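+      # If the image size is statically known and divisible by the stride at
+      # this level (or is left at the default value of 1), center the anchors
+      # on the feature-map cells by offsetting them by half a stride;
+      # otherwise keep a zero offset.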
+ if isinstance(im_height, int) and isinstance(im_width, int):
+ if im_height % 2.0**level == 0 or im_height == 1:
+ anchor_offset[0] = stride / 2.0
+ if im_width % 2.0**level == 0 or im_width == 1:
+ anchor_offset[1] = stride / 2.0
+ ag = grid_anchor_generator.GridAnchorGenerator(
+ scales,
+ aspect_ratios,
+ base_anchor_size=base_anchor_size,
+ anchor_stride=anchor_stride,
+ anchor_offset=anchor_offset)
+ (anchor_grid,) = ag.generate(feature_map_shape_list=[(feat_h, feat_w)])
+
+ if self._normalize_coordinates:
+ if im_height == 1 or im_width == 1:
+ raise ValueError(
+ 'Normalized coordinates were requested upon construction of the '
+ 'MultiscaleGridAnchorGenerator, but a subsequent call to '
+ 'generate did not supply dimension information.')
+ anchor_grid = box_list_ops.to_normalized_coordinates(
+ anchor_grid, im_height, im_width, check_range=False)
+ anchor_grid_list.append(anchor_grid)
+
+ return anchor_grid_list
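+
+
+# Illustrative usage sketch (not part of the original module). It mirrors the
+# accompanying unit tests: anchors for a single 2x2 feature map at level 5 of
+# a statically shaped 64x64 image. All values here are assumptions chosen for
+# the demo, not recommended settings.
+if __name__ == '__main__':
+  import tensorflow as tf
+
+  generator = MultiscaleGridAnchorGenerator(
+      min_level=5, max_level=5, anchor_scale=4.0, aspect_ratios=[1.0],
+      scales_per_octave=1, normalize_coordinates=False)
+  anchors_list = generator.generate(
+      feature_map_shape_list=[(2, 2)], im_height=64, im_width=64)
+  with tf.Session() as sess:
+    # Each row is [ymin, xmin, ymax, xmax] in absolute image coordinates.
+    print(sess.run(anchors_list[0].get()))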
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/anchor_generators/multiscale_grid_anchor_generator_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/anchor_generators/multiscale_grid_anchor_generator_test.py"
new file mode 100644
index 00000000..178705c1
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/anchor_generators/multiscale_grid_anchor_generator_test.py"
@@ -0,0 +1,302 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for anchor_generators.multiscale_grid_anchor_generator_test.py."""
+import numpy as np
+import tensorflow as tf
+
+from object_detection.anchor_generators import multiscale_grid_anchor_generator as mg
+from object_detection.utils import test_case
+
+
+class MultiscaleGridAnchorGeneratorTest(test_case.TestCase):
+
+ def test_construct_single_anchor(self):
+ min_level = 5
+ max_level = 5
+ anchor_scale = 4.0
+ aspect_ratios = [1.0]
+ scales_per_octave = 1
+ im_height = 64
+ im_width = 64
+ feature_map_shape_list = [(2, 2)]
+ exp_anchor_corners = [[-48, -48, 80, 80],
+ [-48, -16, 80, 112],
+ [-16, -48, 112, 80],
+ [-16, -16, 112, 112]]
+ anchor_generator = mg.MultiscaleGridAnchorGenerator(
+ min_level, max_level, anchor_scale, aspect_ratios, scales_per_octave,
+ normalize_coordinates=False)
+ anchors_list = anchor_generator.generate(
+ feature_map_shape_list, im_height=im_height, im_width=im_width)
+ anchor_corners = anchors_list[0].get()
+
+ with self.test_session():
+ anchor_corners_out = anchor_corners.eval()
+ self.assertAllClose(anchor_corners_out, exp_anchor_corners)
+
+ def test_construct_single_anchor_unit_dimensions(self):
+ min_level = 5
+ max_level = 5
+ anchor_scale = 1.0
+ aspect_ratios = [1.0]
+ scales_per_octave = 1
+ im_height = 1
+ im_width = 1
+ feature_map_shape_list = [(2, 2)]
+ # Positive offsets are produced.
+ exp_anchor_corners = [[0, 0, 32, 32],
+ [0, 32, 32, 64],
+ [32, 0, 64, 32],
+ [32, 32, 64, 64]]
+
+ anchor_generator = mg.MultiscaleGridAnchorGenerator(
+ min_level, max_level, anchor_scale, aspect_ratios, scales_per_octave,
+ normalize_coordinates=False)
+ anchors_list = anchor_generator.generate(
+ feature_map_shape_list, im_height=im_height, im_width=im_width)
+ anchor_corners = anchors_list[0].get()
+
+ with self.test_session():
+ anchor_corners_out = anchor_corners.eval()
+ self.assertAllClose(anchor_corners_out, exp_anchor_corners)
+
+ def test_construct_normalized_anchors_fails_with_unit_dimensions(self):
+ anchor_generator = mg.MultiscaleGridAnchorGenerator(
+ min_level=5, max_level=5, anchor_scale=1.0, aspect_ratios=[1.0],
+ scales_per_octave=1, normalize_coordinates=True)
+ with self.assertRaisesRegexp(ValueError, 'Normalized coordinates'):
+ anchor_generator.generate(
+ feature_map_shape_list=[(2, 2)], im_height=1, im_width=1)
+
+ def test_construct_single_anchor_in_normalized_coordinates(self):
+ min_level = 5
+ max_level = 5
+ anchor_scale = 4.0
+ aspect_ratios = [1.0]
+ scales_per_octave = 1
+ im_height = 64
+ im_width = 128
+ feature_map_shape_list = [(2, 2)]
+ exp_anchor_corners = [[-48./64, -48./128, 80./64, 80./128],
+ [-48./64, -16./128, 80./64, 112./128],
+ [-16./64, -48./128, 112./64, 80./128],
+ [-16./64, -16./128, 112./64, 112./128]]
+ anchor_generator = mg.MultiscaleGridAnchorGenerator(
+ min_level, max_level, anchor_scale, aspect_ratios, scales_per_octave,
+ normalize_coordinates=True)
+ anchors_list = anchor_generator.generate(
+ feature_map_shape_list, im_height=im_height, im_width=im_width)
+ anchor_corners = anchors_list[0].get()
+
+ with self.test_session():
+ anchor_corners_out = anchor_corners.eval()
+ self.assertAllClose(anchor_corners_out, exp_anchor_corners)
+
+ def test_num_anchors_per_location(self):
+ min_level = 5
+ max_level = 6
+ anchor_scale = 4.0
+ aspect_ratios = [1.0, 2.0]
+ scales_per_octave = 3
+ anchor_generator = mg.MultiscaleGridAnchorGenerator(
+ min_level, max_level, anchor_scale, aspect_ratios, scales_per_octave,
+ normalize_coordinates=False)
+ self.assertEqual(anchor_generator.num_anchors_per_location(), [6, 6])
+
+ def test_construct_single_anchor_dynamic_size(self):
+ min_level = 5
+ max_level = 5
+ anchor_scale = 4.0
+ aspect_ratios = [1.0]
+ scales_per_octave = 1
+ im_height = tf.constant(64)
+ im_width = tf.constant(64)
+ feature_map_shape_list = [(2, 2)]
+ # Zero offsets are used.
+ exp_anchor_corners = [[-64, -64, 64, 64],
+ [-64, -32, 64, 96],
+ [-32, -64, 96, 64],
+ [-32, -32, 96, 96]]
+
+ anchor_generator = mg.MultiscaleGridAnchorGenerator(
+ min_level, max_level, anchor_scale, aspect_ratios, scales_per_octave,
+ normalize_coordinates=False)
+ anchors_list = anchor_generator.generate(
+ feature_map_shape_list, im_height=im_height, im_width=im_width)
+ anchor_corners = anchors_list[0].get()
+
+ with self.test_session():
+ anchor_corners_out = anchor_corners.eval()
+ self.assertAllClose(anchor_corners_out, exp_anchor_corners)
+
+ def test_construct_single_anchor_with_odd_input_dimension(self):
+
+ def graph_fn():
+ min_level = 5
+ max_level = 5
+ anchor_scale = 4.0
+ aspect_ratios = [1.0]
+ scales_per_octave = 1
+ im_height = 65
+ im_width = 65
+ feature_map_shape_list = [(3, 3)]
+ anchor_generator = mg.MultiscaleGridAnchorGenerator(
+ min_level, max_level, anchor_scale, aspect_ratios, scales_per_octave,
+ normalize_coordinates=False)
+ anchors_list = anchor_generator.generate(
+ feature_map_shape_list, im_height=im_height, im_width=im_width)
+ anchor_corners = anchors_list[0].get()
+ return (anchor_corners,)
+ anchor_corners_out = self.execute(graph_fn, [])
+ exp_anchor_corners = [[-64, -64, 64, 64],
+ [-64, -32, 64, 96],
+ [-64, 0, 64, 128],
+ [-32, -64, 96, 64],
+ [-32, -32, 96, 96],
+ [-32, 0, 96, 128],
+ [0, -64, 128, 64],
+ [0, -32, 128, 96],
+ [0, 0, 128, 128]]
+ self.assertAllClose(anchor_corners_out, exp_anchor_corners)
+
+ def test_construct_single_anchor_on_two_feature_maps(self):
+
+ def graph_fn():
+ min_level = 5
+ max_level = 6
+ anchor_scale = 4.0
+ aspect_ratios = [1.0]
+ scales_per_octave = 1
+ im_height = 64
+ im_width = 64
+ feature_map_shape_list = [(2, 2), (1, 1)]
+ anchor_generator = mg.MultiscaleGridAnchorGenerator(
+ min_level, max_level, anchor_scale, aspect_ratios, scales_per_octave,
+ normalize_coordinates=False)
+ anchors_list = anchor_generator.generate(feature_map_shape_list,
+ im_height=im_height,
+ im_width=im_width)
+ anchor_corners = [anchors.get() for anchors in anchors_list]
+ return anchor_corners
+
+ anchor_corners_out = np.concatenate(self.execute(graph_fn, []), axis=0)
+ exp_anchor_corners = [[-48, -48, 80, 80],
+ [-48, -16, 80, 112],
+ [-16, -48, 112, 80],
+ [-16, -16, 112, 112],
+ [-96, -96, 160, 160]]
+ self.assertAllClose(anchor_corners_out, exp_anchor_corners)
+
+ def test_construct_single_anchor_with_two_scales_per_octave(self):
+
+ def graph_fn():
+ min_level = 6
+ max_level = 6
+ anchor_scale = 4.0
+ aspect_ratios = [1.0]
+ scales_per_octave = 2
+ im_height = 64
+ im_width = 64
+ feature_map_shape_list = [(1, 1)]
+
+ anchor_generator = mg.MultiscaleGridAnchorGenerator(
+ min_level, max_level, anchor_scale, aspect_ratios, scales_per_octave,
+ normalize_coordinates=False)
+ anchors_list = anchor_generator.generate(feature_map_shape_list,
+ im_height=im_height,
+ im_width=im_width)
+ anchor_corners = [anchors.get() for anchors in anchors_list]
+ return anchor_corners
+    # There are 2 sets of anchors in this configuration. The order is:
+ # [[2**0.0 intermediate scale + 1.0 aspect],
+ # [2**0.5 intermediate scale + 1.0 aspect]]
+ exp_anchor_corners = [[-96., -96., 160., 160.],
+ [-149.0193, -149.0193, 213.0193, 213.0193]]
+
+ anchor_corners_out = self.execute(graph_fn, [])
+ self.assertAllClose(anchor_corners_out, exp_anchor_corners)
+
+ def test_construct_single_anchor_with_two_scales_per_octave_and_aspect(self):
+ def graph_fn():
+ min_level = 6
+ max_level = 6
+ anchor_scale = 4.0
+ aspect_ratios = [1.0, 2.0]
+ scales_per_octave = 2
+ im_height = 64
+ im_width = 64
+ feature_map_shape_list = [(1, 1)]
+ anchor_generator = mg.MultiscaleGridAnchorGenerator(
+ min_level, max_level, anchor_scale, aspect_ratios, scales_per_octave,
+ normalize_coordinates=False)
+ anchors_list = anchor_generator.generate(feature_map_shape_list,
+ im_height=im_height,
+ im_width=im_width)
+ anchor_corners = [anchors.get() for anchors in anchors_list]
+ return anchor_corners
+    # There are 4 sets of anchors in this configuration. The order is:
+ # [[2**0.0 intermediate scale + 1.0 aspect],
+ # [2**0.5 intermediate scale + 1.0 aspect],
+ # [2**0.0 intermediate scale + 2.0 aspect],
+ # [2**0.5 intermediate scale + 2.0 aspect]]
+
+ exp_anchor_corners = [[-96., -96., 160., 160.],
+ [-149.0193, -149.0193, 213.0193, 213.0193],
+ [-58.50967, -149.0193, 122.50967, 213.0193],
+ [-96., -224., 160., 288.]]
+ anchor_corners_out = self.execute(graph_fn, [])
+ self.assertAllClose(anchor_corners_out, exp_anchor_corners)
+
+ def test_construct_single_anchors_on_feature_maps_with_dynamic_shape(self):
+
+ def graph_fn(feature_map1_height, feature_map1_width, feature_map2_height,
+ feature_map2_width):
+ min_level = 5
+ max_level = 6
+ anchor_scale = 4.0
+ aspect_ratios = [1.0]
+ scales_per_octave = 1
+ im_height = 64
+ im_width = 64
+ feature_map_shape_list = [(feature_map1_height, feature_map1_width),
+ (feature_map2_height, feature_map2_width)]
+ anchor_generator = mg.MultiscaleGridAnchorGenerator(
+ min_level, max_level, anchor_scale, aspect_ratios, scales_per_octave,
+ normalize_coordinates=False)
+ anchors_list = anchor_generator.generate(feature_map_shape_list,
+ im_height=im_height,
+ im_width=im_width)
+ anchor_corners = [anchors.get() for anchors in anchors_list]
+ return anchor_corners
+
+ anchor_corners_out = np.concatenate(
+ self.execute_cpu(graph_fn, [
+ np.array(2, dtype=np.int32),
+ np.array(2, dtype=np.int32),
+ np.array(1, dtype=np.int32),
+ np.array(1, dtype=np.int32)
+ ]),
+ axis=0)
+ exp_anchor_corners = [[-48, -48, 80, 80],
+ [-48, -16, 80, 112],
+ [-16, -48, 112, 80],
+ [-16, -16, 112, 112],
+ [-96, -96, 160, 160]]
+ self.assertAllClose(anchor_corners_out, exp_anchor_corners)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/faster_rcnn_box_coder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/faster_rcnn_box_coder.py"
new file mode 100644
index 00000000..af25e21a
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/faster_rcnn_box_coder.py"
@@ -0,0 +1,118 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Faster RCNN box coder.
+
+Faster RCNN box coder follows the coding schema described below:
+ ty = (y - ya) / ha
+ tx = (x - xa) / wa
+ th = log(h / ha)
+ tw = log(w / wa)
+ where x, y, w, h denote the box's center coordinates, width and height
+ respectively. Similarly, xa, ya, wa, ha denote the anchor's center
+ coordinates, width and height. tx, ty, tw and th denote the anchor-encoded
+ center, width and height respectively.
+
+ See http://arxiv.org/abs/1506.01497 for details.
+"""
+
+import tensorflow as tf
+
+from object_detection.core import box_coder
+from object_detection.core import box_list
+
+EPSILON = 1e-8
+
+
+class FasterRcnnBoxCoder(box_coder.BoxCoder):
+ """Faster RCNN box coder."""
+
+ def __init__(self, scale_factors=None):
+ """Constructor for FasterRcnnBoxCoder.
+
+ Args:
+ scale_factors: List of 4 positive scalars to scale ty, tx, th and tw.
+ If set to None, does not perform scaling. For Faster RCNN,
+ the open-source implementation recommends using [10.0, 10.0, 5.0, 5.0].
+ """
+ if scale_factors:
+ assert len(scale_factors) == 4
+ for scalar in scale_factors:
+ assert scalar > 0
+ self._scale_factors = scale_factors
+
+ @property
+ def code_size(self):
+ return 4
+
+ def _encode(self, boxes, anchors):
+ """Encode a box collection with respect to anchor collection.
+
+ Args:
+ boxes: BoxList holding N boxes to be encoded.
+ anchors: BoxList of anchors.
+
+ Returns:
+ a tensor representing N anchor-encoded boxes of the format
+ [ty, tx, th, tw].
+ """
+ # Convert anchors to the center coordinate representation.
+ ycenter_a, xcenter_a, ha, wa = anchors.get_center_coordinates_and_sizes()
+ ycenter, xcenter, h, w = boxes.get_center_coordinates_and_sizes()
+ # Avoid NaN in division and log below.
+ ha += EPSILON
+ wa += EPSILON
+ h += EPSILON
+ w += EPSILON
+
+ tx = (xcenter - xcenter_a) / wa
+ ty = (ycenter - ycenter_a) / ha
+ tw = tf.log(w / wa)
+ th = tf.log(h / ha)
+ # Scales location targets as used in paper for joint training.
+ if self._scale_factors:
+ ty *= self._scale_factors[0]
+ tx *= self._scale_factors[1]
+ th *= self._scale_factors[2]
+ tw *= self._scale_factors[3]
+ return tf.transpose(tf.stack([ty, tx, th, tw]))
+
+ def _decode(self, rel_codes, anchors):
+ """Decode relative codes to boxes.
+
+ Args:
+ rel_codes: a tensor representing N anchor-encoded boxes.
+ anchors: BoxList of anchors.
+
+ Returns:
+ boxes: BoxList holding N bounding boxes.
+ """
+ ycenter_a, xcenter_a, ha, wa = anchors.get_center_coordinates_and_sizes()
+
+ ty, tx, th, tw = tf.unstack(tf.transpose(rel_codes))
+ if self._scale_factors:
+ ty /= self._scale_factors[0]
+ tx /= self._scale_factors[1]
+ th /= self._scale_factors[2]
+ tw /= self._scale_factors[3]
+ w = tf.exp(tw) * wa
+ h = tf.exp(th) * ha
+ ycenter = ty * ha + ycenter_a
+ xcenter = tx * wa + xcenter_a
+ ymin = ycenter - h / 2.
+ xmin = xcenter - w / 2.
+ ymax = ycenter + h / 2.
+ xmax = xcenter + w / 2.
+ return box_list.BoxList(tf.transpose(tf.stack([ymin, xmin, ymax, xmax])))
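+
+
+# Illustrative usage sketch (not part of the original module), mirroring the
+# accompanying unit tests: encode one box against one anchor and decode it
+# back. The box/anchor corners and scale factors are assumptions for the demo.
+if __name__ == '__main__':
+  boxes = box_list.BoxList(tf.constant([[10.0, 10.0, 20.0, 15.0]]))
+  anchors = box_list.BoxList(tf.constant([[15.0, 12.0, 30.0, 18.0]]))
+  coder = FasterRcnnBoxCoder(scale_factors=[10.0, 10.0, 5.0, 5.0])
+  rel_codes = coder.encode(boxes, anchors)
+  decoded = coder.decode(rel_codes, anchors)
+  with tf.Session() as sess:
+    codes_out, boxes_out = sess.run([rel_codes, decoded.get()])
+    print(codes_out)  # one row of [ty, tx, th, tw] per box
+    print(boxes_out)  # should closely recover the original box corners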
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/faster_rcnn_box_coder_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/faster_rcnn_box_coder_test.py"
new file mode 100644
index 00000000..b2135f06
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/faster_rcnn_box_coder_test.py"
@@ -0,0 +1,94 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for object_detection.box_coder.faster_rcnn_box_coder."""
+
+import tensorflow as tf
+
+from object_detection.box_coders import faster_rcnn_box_coder
+from object_detection.core import box_list
+
+
+class FasterRcnnBoxCoderTest(tf.test.TestCase):
+
+ def test_get_correct_relative_codes_after_encoding(self):
+ boxes = [[10.0, 10.0, 20.0, 15.0], [0.2, 0.1, 0.5, 0.4]]
+ anchors = [[15.0, 12.0, 30.0, 18.0], [0.1, 0.0, 0.7, 0.9]]
+ expected_rel_codes = [[-0.5, -0.416666, -0.405465, -0.182321],
+ [-0.083333, -0.222222, -0.693147, -1.098612]]
+ boxes = box_list.BoxList(tf.constant(boxes))
+ anchors = box_list.BoxList(tf.constant(anchors))
+ coder = faster_rcnn_box_coder.FasterRcnnBoxCoder()
+ rel_codes = coder.encode(boxes, anchors)
+ with self.test_session() as sess:
+ rel_codes_out, = sess.run([rel_codes])
+ self.assertAllClose(rel_codes_out, expected_rel_codes)
+
+ def test_get_correct_relative_codes_after_encoding_with_scaling(self):
+ boxes = [[10.0, 10.0, 20.0, 15.0], [0.2, 0.1, 0.5, 0.4]]
+ anchors = [[15.0, 12.0, 30.0, 18.0], [0.1, 0.0, 0.7, 0.9]]
+ scale_factors = [2, 3, 4, 5]
+ expected_rel_codes = [[-1., -1.25, -1.62186, -0.911608],
+ [-0.166667, -0.666667, -2.772588, -5.493062]]
+ boxes = box_list.BoxList(tf.constant(boxes))
+ anchors = box_list.BoxList(tf.constant(anchors))
+ coder = faster_rcnn_box_coder.FasterRcnnBoxCoder(
+ scale_factors=scale_factors)
+ rel_codes = coder.encode(boxes, anchors)
+ with self.test_session() as sess:
+ rel_codes_out, = sess.run([rel_codes])
+ self.assertAllClose(rel_codes_out, expected_rel_codes)
+
+ def test_get_correct_boxes_after_decoding(self):
+ anchors = [[15.0, 12.0, 30.0, 18.0], [0.1, 0.0, 0.7, 0.9]]
+ rel_codes = [[-0.5, -0.416666, -0.405465, -0.182321],
+ [-0.083333, -0.222222, -0.693147, -1.098612]]
+ expected_boxes = [[10.0, 10.0, 20.0, 15.0], [0.2, 0.1, 0.5, 0.4]]
+ anchors = box_list.BoxList(tf.constant(anchors))
+ coder = faster_rcnn_box_coder.FasterRcnnBoxCoder()
+ boxes = coder.decode(rel_codes, anchors)
+ with self.test_session() as sess:
+ boxes_out, = sess.run([boxes.get()])
+ self.assertAllClose(boxes_out, expected_boxes)
+
+ def test_get_correct_boxes_after_decoding_with_scaling(self):
+ anchors = [[15.0, 12.0, 30.0, 18.0], [0.1, 0.0, 0.7, 0.9]]
+ rel_codes = [[-1., -1.25, -1.62186, -0.911608],
+ [-0.166667, -0.666667, -2.772588, -5.493062]]
+ scale_factors = [2, 3, 4, 5]
+ expected_boxes = [[10.0, 10.0, 20.0, 15.0], [0.2, 0.1, 0.5, 0.4]]
+ anchors = box_list.BoxList(tf.constant(anchors))
+ coder = faster_rcnn_box_coder.FasterRcnnBoxCoder(
+ scale_factors=scale_factors)
+ boxes = coder.decode(rel_codes, anchors)
+ with self.test_session() as sess:
+ boxes_out, = sess.run([boxes.get()])
+ self.assertAllClose(boxes_out, expected_boxes)
+
+  def test_very_small_width_nan_after_encoding(self):
+ boxes = [[10.0, 10.0, 10.0000001, 20.0]]
+ anchors = [[15.0, 12.0, 30.0, 18.0]]
+ expected_rel_codes = [[-0.833333, 0., -21.128731, 0.510826]]
+ boxes = box_list.BoxList(tf.constant(boxes))
+ anchors = box_list.BoxList(tf.constant(anchors))
+ coder = faster_rcnn_box_coder.FasterRcnnBoxCoder()
+ rel_codes = coder.encode(boxes, anchors)
+ with self.test_session() as sess:
+ rel_codes_out, = sess.run([rel_codes])
+ self.assertAllClose(rel_codes_out, expected_rel_codes)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/keypoint_box_coder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/keypoint_box_coder.py"
new file mode 100644
index 00000000..67df3b82
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/keypoint_box_coder.py"
@@ -0,0 +1,171 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Keypoint box coder.
+
+The keypoint box coder follows the coding schema described below (this is
+similar to the FasterRcnnBoxCoder, except that it encodes keypoints in addition
+to box coordinates):
+ ty = (y - ya) / ha
+ tx = (x - xa) / wa
+ th = log(h / ha)
+ tw = log(w / wa)
+ tky0 = (ky0 - ya) / ha
+ tkx0 = (kx0 - xa) / wa
+ tky1 = (ky1 - ya) / ha
+ tkx1 = (kx1 - xa) / wa
+ ...
+ where x, y, w, h denote the box's center coordinates, width and height
+ respectively. Similarly, xa, ya, wa, ha denote the anchor's center
+ coordinates, width and height. tx, ty, tw and th denote the anchor-encoded
+ center, width and height respectively. ky0, kx0, ky1, kx1, ... denote the
+ keypoints' coordinates, and tky0, tkx0, tky1, tkx1, ... denote the
+ anchor-encoded keypoint coordinates.
+"""
+
+import tensorflow as tf
+
+from object_detection.core import box_coder
+from object_detection.core import box_list
+from object_detection.core import standard_fields as fields
+
+EPSILON = 1e-8
+
+
+class KeypointBoxCoder(box_coder.BoxCoder):
+ """Keypoint box coder."""
+
+ def __init__(self, num_keypoints, scale_factors=None):
+ """Constructor for KeypointBoxCoder.
+
+ Args:
+ num_keypoints: Number of keypoints to encode/decode.
+ scale_factors: List of 4 positive scalars to scale ty, tx, th and tw.
+ In addition to scaling ty and tx, the first 2 scalars are used to scale
+ the y and x coordinates of the keypoints as well. If set to None, does
+ not perform scaling.
+ """
+ self._num_keypoints = num_keypoints
+
+ if scale_factors:
+ assert len(scale_factors) == 4
+ for scalar in scale_factors:
+ assert scalar > 0
+ self._scale_factors = scale_factors
+ self._keypoint_scale_factors = None
+ if scale_factors is not None:
+ self._keypoint_scale_factors = tf.expand_dims(tf.tile(
+ [tf.to_float(scale_factors[0]), tf.to_float(scale_factors[1])],
+ [num_keypoints]), 1)
+
+ @property
+ def code_size(self):
+ return 4 + self._num_keypoints * 2
+
+ def _encode(self, boxes, anchors):
+ """Encode a box and keypoint collection with respect to anchor collection.
+
+ Args:
+ boxes: BoxList holding N boxes and keypoints to be encoded. Boxes are
+ tensors with the shape [N, 4], and keypoints are tensors with the shape
+ [N, num_keypoints, 2].
+ anchors: BoxList of anchors.
+
+ Returns:
+ a tensor representing N anchor-encoded boxes of the format
+ [ty, tx, th, tw, tky0, tkx0, tky1, tkx1, ...] where tky0 and tkx0
+ represent the y and x coordinates of the first keypoint, tky1 and tkx1
+ represent the y and x coordinates of the second keypoint, and so on.
+ """
+ # Convert anchors to the center coordinate representation.
+ ycenter_a, xcenter_a, ha, wa = anchors.get_center_coordinates_and_sizes()
+ ycenter, xcenter, h, w = boxes.get_center_coordinates_and_sizes()
+ keypoints = boxes.get_field(fields.BoxListFields.keypoints)
+ keypoints = tf.transpose(tf.reshape(keypoints,
+ [-1, self._num_keypoints * 2]))
+ num_boxes = boxes.num_boxes()
+
+ # Avoid NaN in division and log below.
+ ha += EPSILON
+ wa += EPSILON
+ h += EPSILON
+ w += EPSILON
+
+ tx = (xcenter - xcenter_a) / wa
+ ty = (ycenter - ycenter_a) / ha
+ tw = tf.log(w / wa)
+ th = tf.log(h / ha)
+
+ tiled_anchor_centers = tf.tile(
+ tf.stack([ycenter_a, xcenter_a]), [self._num_keypoints, 1])
+ tiled_anchor_sizes = tf.tile(
+ tf.stack([ha, wa]), [self._num_keypoints, 1])
+ tkeypoints = (keypoints - tiled_anchor_centers) / tiled_anchor_sizes
+
+ # Scales location targets as used in paper for joint training.
+ if self._scale_factors:
+ ty *= self._scale_factors[0]
+ tx *= self._scale_factors[1]
+ th *= self._scale_factors[2]
+ tw *= self._scale_factors[3]
+ tkeypoints *= tf.tile(self._keypoint_scale_factors, [1, num_boxes])
+
+ tboxes = tf.stack([ty, tx, th, tw])
+ return tf.transpose(tf.concat([tboxes, tkeypoints], 0))
+
+ def _decode(self, rel_codes, anchors):
+ """Decode relative codes to boxes and keypoints.
+
+ Args:
+ rel_codes: a tensor with shape [N, 4 + 2 * num_keypoints] representing N
+ anchor-encoded boxes and keypoints
+ anchors: BoxList of anchors.
+
+ Returns:
+ boxes: BoxList holding N bounding boxes and keypoints.
+ """
+ ycenter_a, xcenter_a, ha, wa = anchors.get_center_coordinates_and_sizes()
+
+ num_codes = tf.shape(rel_codes)[0]
+ result = tf.unstack(tf.transpose(rel_codes))
+ ty, tx, th, tw = result[:4]
+ tkeypoints = result[4:]
+ if self._scale_factors:
+ ty /= self._scale_factors[0]
+ tx /= self._scale_factors[1]
+ th /= self._scale_factors[2]
+ tw /= self._scale_factors[3]
+ tkeypoints /= tf.tile(self._keypoint_scale_factors, [1, num_codes])
+
+ w = tf.exp(tw) * wa
+ h = tf.exp(th) * ha
+ ycenter = ty * ha + ycenter_a
+ xcenter = tx * wa + xcenter_a
+ ymin = ycenter - h / 2.
+ xmin = xcenter - w / 2.
+ ymax = ycenter + h / 2.
+ xmax = xcenter + w / 2.
+ decoded_boxes_keypoints = box_list.BoxList(
+ tf.transpose(tf.stack([ymin, xmin, ymax, xmax])))
+
+ tiled_anchor_centers = tf.tile(
+ tf.stack([ycenter_a, xcenter_a]), [self._num_keypoints, 1])
+ tiled_anchor_sizes = tf.tile(
+ tf.stack([ha, wa]), [self._num_keypoints, 1])
+ keypoints = tkeypoints * tiled_anchor_sizes + tiled_anchor_centers
+ keypoints = tf.reshape(tf.transpose(keypoints),
+ [-1, self._num_keypoints, 2])
+ decoded_boxes_keypoints.add_field(fields.BoxListFields.keypoints, keypoints)
+ return decoded_boxes_keypoints
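+
+
+# Illustrative usage sketch (not part of the original module), based on the
+# accompanying unit tests: encode one box with two keypoints and decode it
+# back. The box, keypoint and anchor values are assumptions for the demo.
+if __name__ == '__main__':
+  boxes = box_list.BoxList(tf.constant([[10., 10., 20., 15.]]))
+  boxes.add_field(fields.BoxListFields.keypoints,
+                  tf.constant([[[15., 12.], [10., 15.]]]))
+  anchors = box_list.BoxList(tf.constant([[15., 12., 30., 18.]]))
+  coder = KeypointBoxCoder(num_keypoints=2)
+  rel_codes = coder.encode(boxes, anchors)
+  decoded = coder.decode(rel_codes, anchors)
+  with tf.Session() as sess:
+    print(sess.run(rel_codes))  # [ty, tx, th, tw, tky0, tkx0, tky1, tkx1]
+    print(sess.run(decoded.get()))  # recovered box corners
+    print(sess.run(decoded.get_field(fields.BoxListFields.keypoints)))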
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/keypoint_box_coder_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/keypoint_box_coder_test.py"
new file mode 100644
index 00000000..330641e5
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/keypoint_box_coder_test.py"
@@ -0,0 +1,140 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for object_detection.box_coder.keypoint_box_coder."""
+
+import tensorflow as tf
+
+from object_detection.box_coders import keypoint_box_coder
+from object_detection.core import box_list
+from object_detection.core import standard_fields as fields
+
+
+class KeypointBoxCoderTest(tf.test.TestCase):
+
+ def test_get_correct_relative_codes_after_encoding(self):
+ boxes = [[10., 10., 20., 15.],
+ [0.2, 0.1, 0.5, 0.4]]
+ keypoints = [[[15., 12.], [10., 15.]],
+ [[0.5, 0.3], [0.2, 0.4]]]
+ num_keypoints = len(keypoints[0])
+ anchors = [[15., 12., 30., 18.],
+ [0.1, 0.0, 0.7, 0.9]]
+ expected_rel_codes = [
+ [-0.5, -0.416666, -0.405465, -0.182321,
+ -0.5, -0.5, -0.833333, 0.],
+ [-0.083333, -0.222222, -0.693147, -1.098612,
+ 0.166667, -0.166667, -0.333333, -0.055556]
+ ]
+ boxes = box_list.BoxList(tf.constant(boxes))
+ boxes.add_field(fields.BoxListFields.keypoints, tf.constant(keypoints))
+ anchors = box_list.BoxList(tf.constant(anchors))
+ coder = keypoint_box_coder.KeypointBoxCoder(num_keypoints)
+ rel_codes = coder.encode(boxes, anchors)
+ with self.test_session() as sess:
+ rel_codes_out, = sess.run([rel_codes])
+ self.assertAllClose(rel_codes_out, expected_rel_codes)
+
+ def test_get_correct_relative_codes_after_encoding_with_scaling(self):
+ boxes = [[10., 10., 20., 15.],
+ [0.2, 0.1, 0.5, 0.4]]
+ keypoints = [[[15., 12.], [10., 15.]],
+ [[0.5, 0.3], [0.2, 0.4]]]
+ num_keypoints = len(keypoints[0])
+ anchors = [[15., 12., 30., 18.],
+ [0.1, 0.0, 0.7, 0.9]]
+ scale_factors = [2, 3, 4, 5]
+ expected_rel_codes = [
+ [-1., -1.25, -1.62186, -0.911608,
+ -1.0, -1.5, -1.666667, 0.],
+ [-0.166667, -0.666667, -2.772588, -5.493062,
+ 0.333333, -0.5, -0.666667, -0.166667]
+ ]
+ boxes = box_list.BoxList(tf.constant(boxes))
+ boxes.add_field(fields.BoxListFields.keypoints, tf.constant(keypoints))
+ anchors = box_list.BoxList(tf.constant(anchors))
+ coder = keypoint_box_coder.KeypointBoxCoder(
+ num_keypoints, scale_factors=scale_factors)
+ rel_codes = coder.encode(boxes, anchors)
+ with self.test_session() as sess:
+ rel_codes_out, = sess.run([rel_codes])
+ self.assertAllClose(rel_codes_out, expected_rel_codes)
+
+ def test_get_correct_boxes_after_decoding(self):
+ anchors = [[15., 12., 30., 18.],
+ [0.1, 0.0, 0.7, 0.9]]
+ rel_codes = [
+ [-0.5, -0.416666, -0.405465, -0.182321,
+ -0.5, -0.5, -0.833333, 0.],
+ [-0.083333, -0.222222, -0.693147, -1.098612,
+ 0.166667, -0.166667, -0.333333, -0.055556]
+ ]
+ expected_boxes = [[10., 10., 20., 15.],
+ [0.2, 0.1, 0.5, 0.4]]
+ expected_keypoints = [[[15., 12.], [10., 15.]],
+ [[0.5, 0.3], [0.2, 0.4]]]
+ num_keypoints = len(expected_keypoints[0])
+ anchors = box_list.BoxList(tf.constant(anchors))
+ coder = keypoint_box_coder.KeypointBoxCoder(num_keypoints)
+ boxes = coder.decode(rel_codes, anchors)
+ with self.test_session() as sess:
+ boxes_out, keypoints_out = sess.run(
+ [boxes.get(), boxes.get_field(fields.BoxListFields.keypoints)])
+ self.assertAllClose(boxes_out, expected_boxes)
+ self.assertAllClose(keypoints_out, expected_keypoints)
+
+ def test_get_correct_boxes_after_decoding_with_scaling(self):
+ anchors = [[15., 12., 30., 18.],
+ [0.1, 0.0, 0.7, 0.9]]
+ rel_codes = [
+ [-1., -1.25, -1.62186, -0.911608,
+ -1.0, -1.5, -1.666667, 0.],
+ [-0.166667, -0.666667, -2.772588, -5.493062,
+ 0.333333, -0.5, -0.666667, -0.166667]
+ ]
+ scale_factors = [2, 3, 4, 5]
+ expected_boxes = [[10., 10., 20., 15.],
+ [0.2, 0.1, 0.5, 0.4]]
+ expected_keypoints = [[[15., 12.], [10., 15.]],
+ [[0.5, 0.3], [0.2, 0.4]]]
+ num_keypoints = len(expected_keypoints[0])
+ anchors = box_list.BoxList(tf.constant(anchors))
+ coder = keypoint_box_coder.KeypointBoxCoder(
+ num_keypoints, scale_factors=scale_factors)
+ boxes = coder.decode(rel_codes, anchors)
+ with self.test_session() as sess:
+ boxes_out, keypoints_out = sess.run(
+ [boxes.get(), boxes.get_field(fields.BoxListFields.keypoints)])
+ self.assertAllClose(boxes_out, expected_boxes)
+ self.assertAllClose(keypoints_out, expected_keypoints)
+
+ def test_very_small_width_nan_after_encoding(self):
+ boxes = [[10., 10., 10.0000001, 20.]]
+ keypoints = [[[10., 10.], [10.0000001, 20.]]]
+ anchors = [[15., 12., 30., 18.]]
+ expected_rel_codes = [[-0.833333, 0., -21.128731, 0.510826,
+ -0.833333, -0.833333, -0.833333, 0.833333]]
+ boxes = box_list.BoxList(tf.constant(boxes))
+ boxes.add_field(fields.BoxListFields.keypoints, tf.constant(keypoints))
+ anchors = box_list.BoxList(tf.constant(anchors))
+ coder = keypoint_box_coder.KeypointBoxCoder(2)
+ rel_codes = coder.encode(boxes, anchors)
+ with self.test_session() as sess:
+ rel_codes_out, = sess.run([rel_codes])
+ self.assertAllClose(rel_codes_out, expected_rel_codes)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/mean_stddev_box_coder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/mean_stddev_box_coder.py"
new file mode 100644
index 00000000..256f53fd
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/mean_stddev_box_coder.py"
@@ -0,0 +1,79 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Mean stddev box coder.
+
+This box coder uses the following coding schema to encode boxes:
+rel_code = (box_corner - anchor_corner_mean) / anchor_corner_stddev.
+"""
+from object_detection.core import box_coder
+from object_detection.core import box_list
+
+
+class MeanStddevBoxCoder(box_coder.BoxCoder):
+ """Mean stddev box coder."""
+
+ def __init__(self, stddev=0.01):
+ """Constructor for MeanStddevBoxCoder.
+
+ Args:
+ stddev: The standard deviation used to encode and decode boxes.
+ """
+ self._stddev = stddev
+
+ @property
+ def code_size(self):
+ return 4
+
+ def _encode(self, boxes, anchors):
+ """Encode a box collection with respect to anchor collection.
+
+ Args:
+ boxes: BoxList holding N boxes to be encoded.
+ anchors: BoxList of N anchors.
+
+ Returns:
+ a tensor representing N anchor-encoded boxes
+
+ Raises:
+      ValueError: if the anchors still have the deprecated stddev field.
+ """
+ box_corners = boxes.get()
+ if anchors.has_field('stddev'):
+ raise ValueError("'stddev' is a parameter of MeanStddevBoxCoder and "
+ "should not be specified in the box list.")
+ means = anchors.get()
+ return (box_corners - means) / self._stddev
+
+ def _decode(self, rel_codes, anchors):
+ """Decode.
+
+ Args:
+ rel_codes: a tensor representing N anchor-encoded boxes.
+ anchors: BoxList of anchors.
+
+ Returns:
+ boxes: BoxList holding N bounding boxes
+
+ Raises:
+      ValueError: if the anchors still have the deprecated stddev field and
+        expect the decode method to use the stddev value from that field.
+ """
+ means = anchors.get()
+ if anchors.has_field('stddev'):
+ raise ValueError("'stddev' is a parameter of MeanStddevBoxCoder and "
+ "should not be specified in the box list.")
+ box_corners = rel_codes * self._stddev + means
+ return box_list.BoxList(box_corners)
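+
+
+# Illustrative usage sketch (not part of the original module): with
+# stddev=0.1, a corner difference of 0.05 encodes to a relative code of 0.5.
+# The corner values below are assumptions chosen for the demo.
+if __name__ == '__main__':
+  import tensorflow as tf
+
+  boxes = box_list.BoxList(tf.constant([[0.0, 0.0, 0.5, 0.5]]))
+  anchors = box_list.BoxList(tf.constant([[0.05, 0.05, 0.45, 0.55]]))
+  coder = MeanStddevBoxCoder(stddev=0.1)
+  rel_codes = coder.encode(boxes, anchors)
+  with tf.Session() as sess:
+    print(sess.run(rel_codes))  # approximately [[-0.5, -0.5, 0.5, -0.5]]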
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/mean_stddev_box_coder_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/mean_stddev_box_coder_test.py"
new file mode 100644
index 00000000..3e0eba93
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/mean_stddev_box_coder_test.py"
@@ -0,0 +1,54 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for object_detection.box_coder.mean_stddev_boxcoder."""
+
+import tensorflow as tf
+
+from object_detection.box_coders import mean_stddev_box_coder
+from object_detection.core import box_list
+
+
+class MeanStddevBoxCoderTest(tf.test.TestCase):
+
+ def testGetCorrectRelativeCodesAfterEncoding(self):
+ box_corners = [[0.0, 0.0, 0.5, 0.5], [0.0, 0.0, 0.5, 0.5]]
+ boxes = box_list.BoxList(tf.constant(box_corners))
+ expected_rel_codes = [[0.0, 0.0, 0.0, 0.0], [-5.0, -5.0, -5.0, -3.0]]
+ prior_means = tf.constant([[0.0, 0.0, 0.5, 0.5], [0.5, 0.5, 1.0, 0.8]])
+ priors = box_list.BoxList(prior_means)
+
+ coder = mean_stddev_box_coder.MeanStddevBoxCoder(stddev=0.1)
+ rel_codes = coder.encode(boxes, priors)
+ with self.test_session() as sess:
+ rel_codes_out = sess.run(rel_codes)
+ self.assertAllClose(rel_codes_out, expected_rel_codes)
+
+ def testGetCorrectBoxesAfterDecoding(self):
+ rel_codes = tf.constant([[0.0, 0.0, 0.0, 0.0], [-5.0, -5.0, -5.0, -3.0]])
+ expected_box_corners = [[0.0, 0.0, 0.5, 0.5], [0.0, 0.0, 0.5, 0.5]]
+ prior_means = tf.constant([[0.0, 0.0, 0.5, 0.5], [0.5, 0.5, 1.0, 0.8]])
+ priors = box_list.BoxList(prior_means)
+
+ coder = mean_stddev_box_coder.MeanStddevBoxCoder(stddev=0.1)
+ decoded_boxes = coder.decode(rel_codes, priors)
+ decoded_box_corners = decoded_boxes.get()
+ with self.test_session() as sess:
+ decoded_out = sess.run(decoded_box_corners)
+ self.assertAllClose(decoded_out, expected_box_corners)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/square_box_coder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/square_box_coder.py"
new file mode 100644
index 00000000..ee46b689
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/square_box_coder.py"
@@ -0,0 +1,126 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Square box coder.
+
+Square box coder follows the coding schema described below:
+l = sqrt(h * w)
+la = sqrt(ha * wa)
+ty = (y - ya) / la
+tx = (x - xa) / la
+tl = log(l / la)
+where x, y, w, h denote the box's center coordinates, width, and height,
+respectively. Similarly, xa, ya, wa, ha denote the anchor's center
+coordinates, width and height. tx, ty and tl denote the anchor-encoded
+center and length, respectively. Because the encoded box is a square, only
+one length is encoded.
+
+This has been shown to provide performance improvements over the Faster RCNN
+box coder when the objects being detected tend to be square (e.g. faces) and
+the input images are not distorted via resizing.
+"""
+
+import tensorflow as tf
+
+from object_detection.core import box_coder
+from object_detection.core import box_list
+
+EPSILON = 1e-8
+
+
+class SquareBoxCoder(box_coder.BoxCoder):
+ """Encodes a 3-scalar representation of a square box."""
+
+ def __init__(self, scale_factors=None):
+ """Constructor for SquareBoxCoder.
+
+ Args:
+ scale_factors: List of 3 positive scalars to scale ty, tx, and tl.
+        If set to None, does not perform scaling. For Faster RCNN,
+ the open-source implementation recommends using [10.0, 10.0, 5.0].
+
+ Raises:
+ ValueError: If scale_factors is not length 3 or contains values less than
+ or equal to 0.
+ """
+ if scale_factors:
+ if len(scale_factors) != 3:
+ raise ValueError('The argument scale_factors must be a list of length '
+ '3.')
+ if any(scalar <= 0 for scalar in scale_factors):
+ raise ValueError('The values in scale_factors must all be greater '
+ 'than 0.')
+ self._scale_factors = scale_factors
+
+ @property
+ def code_size(self):
+ return 3
+
+ def _encode(self, boxes, anchors):
+ """Encodes a box collection with respect to an anchor collection.
+
+ Args:
+ boxes: BoxList holding N boxes to be encoded.
+ anchors: BoxList of anchors.
+
+ Returns:
+ a tensor representing N anchor-encoded boxes of the format
+ [ty, tx, tl].
+ """
+ # Convert anchors to the center coordinate representation.
+ ycenter_a, xcenter_a, ha, wa = anchors.get_center_coordinates_and_sizes()
+ la = tf.sqrt(ha * wa)
+ ycenter, xcenter, h, w = boxes.get_center_coordinates_and_sizes()
+ l = tf.sqrt(h * w)
+ # Avoid NaN in division and log below.
+ la += EPSILON
+ l += EPSILON
+
+ tx = (xcenter - xcenter_a) / la
+ ty = (ycenter - ycenter_a) / la
+ tl = tf.log(l / la)
+ # Scales location targets for joint training.
+ if self._scale_factors:
+ ty *= self._scale_factors[0]
+ tx *= self._scale_factors[1]
+ tl *= self._scale_factors[2]
+ return tf.transpose(tf.stack([ty, tx, tl]))
+
+ def _decode(self, rel_codes, anchors):
+ """Decodes relative codes to boxes.
+
+ Args:
+ rel_codes: a tensor representing N anchor-encoded boxes.
+ anchors: BoxList of anchors.
+
+ Returns:
+ boxes: BoxList holding N bounding boxes.
+ """
+ ycenter_a, xcenter_a, ha, wa = anchors.get_center_coordinates_and_sizes()
+ la = tf.sqrt(ha * wa)
+
+ ty, tx, tl = tf.unstack(tf.transpose(rel_codes))
+ if self._scale_factors:
+ ty /= self._scale_factors[0]
+ tx /= self._scale_factors[1]
+ tl /= self._scale_factors[2]
+ l = tf.exp(tl) * la
+ ycenter = ty * la + ycenter_a
+ xcenter = tx * la + xcenter_a
+ ymin = ycenter - l / 2.
+ xmin = xcenter - l / 2.
+ ymax = ycenter + l / 2.
+ xmax = xcenter + l / 2.
+ return box_list.BoxList(tf.transpose(tf.stack([ymin, xmin, ymax, xmax])))
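+
+
+# Illustrative usage sketch (not part of the original module), mirroring the
+# accompanying unit tests. The box and anchor corners are assumptions for the
+# demo; no scale factors are applied.
+if __name__ == '__main__':
+  boxes = box_list.BoxList(tf.constant([[10.0, 10.0, 20.0, 15.0]]))
+  anchors = box_list.BoxList(tf.constant([[15.0, 12.0, 30.0, 18.0]]))
+  coder = SquareBoxCoder()
+  rel_codes = coder.encode(boxes, anchors)
+  decoded = coder.decode(rel_codes, anchors)
+  with tf.Session() as sess:
+    print(sess.run(rel_codes))  # one row of [ty, tx, tl] per box
+    print(sess.run(decoded.get()))  # decoded boxes are squares of side l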
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/square_box_coder_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/square_box_coder_test.py"
new file mode 100644
index 00000000..7f739c6b
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/box_coders/square_box_coder_test.py"
@@ -0,0 +1,97 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for object_detection.box_coder.square_box_coder."""
+
+import tensorflow as tf
+
+from object_detection.box_coders import square_box_coder
+from object_detection.core import box_list
+
+
+class SquareBoxCoderTest(tf.test.TestCase):
+
+ def test_correct_relative_codes_with_default_scale(self):
+ boxes = [[10.0, 10.0, 20.0, 15.0], [0.2, 0.1, 0.5, 0.4]]
+ anchors = [[15.0, 12.0, 30.0, 18.0], [0.1, 0.0, 0.7, 0.9]]
+ scale_factors = None
+ expected_rel_codes = [[-0.790569, -0.263523, -0.293893],
+ [-0.068041, -0.272166, -0.89588]]
+
+ boxes = box_list.BoxList(tf.constant(boxes))
+ anchors = box_list.BoxList(tf.constant(anchors))
+ coder = square_box_coder.SquareBoxCoder(scale_factors=scale_factors)
+ rel_codes = coder.encode(boxes, anchors)
+ with self.test_session() as sess:
+ (rel_codes_out,) = sess.run([rel_codes])
+ self.assertAllClose(rel_codes_out, expected_rel_codes)
+
+ def test_correct_relative_codes_with_non_default_scale(self):
+ boxes = [[10.0, 10.0, 20.0, 15.0], [0.2, 0.1, 0.5, 0.4]]
+ anchors = [[15.0, 12.0, 30.0, 18.0], [0.1, 0.0, 0.7, 0.9]]
+ scale_factors = [2, 3, 4]
+ expected_rel_codes = [[-1.581139, -0.790569, -1.175573],
+ [-0.136083, -0.816497, -3.583519]]
+ boxes = box_list.BoxList(tf.constant(boxes))
+ anchors = box_list.BoxList(tf.constant(anchors))
+ coder = square_box_coder.SquareBoxCoder(scale_factors=scale_factors)
+ rel_codes = coder.encode(boxes, anchors)
+ with self.test_session() as sess:
+ (rel_codes_out,) = sess.run([rel_codes])
+ self.assertAllClose(rel_codes_out, expected_rel_codes)
+
+ def test_correct_relative_codes_with_small_width(self):
+ boxes = [[10.0, 10.0, 10.0000001, 20.0]]
+ anchors = [[15.0, 12.0, 30.0, 18.0]]
+ scale_factors = None
+ expected_rel_codes = [[-1.317616, 0., -20.670586]]
+ boxes = box_list.BoxList(tf.constant(boxes))
+ anchors = box_list.BoxList(tf.constant(anchors))
+ coder = square_box_coder.SquareBoxCoder(scale_factors=scale_factors)
+ rel_codes = coder.encode(boxes, anchors)
+ with self.test_session() as sess:
+ (rel_codes_out,) = sess.run([rel_codes])
+ self.assertAllClose(rel_codes_out, expected_rel_codes)
+
+ def test_correct_boxes_with_default_scale(self):
+ anchors = [[15.0, 12.0, 30.0, 18.0], [0.1, 0.0, 0.7, 0.9]]
+ rel_codes = [[-0.5, -0.416666, -0.405465],
+ [-0.083333, -0.222222, -0.693147]]
+ scale_factors = None
+ expected_boxes = [[14.594306, 7.884875, 20.918861, 14.209432],
+ [0.155051, 0.102989, 0.522474, 0.470412]]
+ anchors = box_list.BoxList(tf.constant(anchors))
+ coder = square_box_coder.SquareBoxCoder(scale_factors=scale_factors)
+ boxes = coder.decode(rel_codes, anchors)
+ with self.test_session() as sess:
+ (boxes_out,) = sess.run([boxes.get()])
+ self.assertAllClose(boxes_out, expected_boxes)
+
+ def test_correct_boxes_with_non_default_scale(self):
+ anchors = [[15.0, 12.0, 30.0, 18.0], [0.1, 0.0, 0.7, 0.9]]
+ rel_codes = [[-1., -1.25, -1.62186], [-0.166667, -0.666667, -2.772588]]
+ scale_factors = [2, 3, 4]
+ expected_boxes = [[14.594306, 7.884875, 20.918861, 14.209432],
+ [0.155051, 0.102989, 0.522474, 0.470412]]
+ anchors = box_list.BoxList(tf.constant(anchors))
+ coder = square_box_coder.SquareBoxCoder(scale_factors=scale_factors)
+ boxes = coder.decode(rel_codes, anchors)
+ with self.test_session() as sess:
+ (boxes_out,) = sess.run([boxes.get()])
+ self.assertAllClose(boxes_out, expected_boxes)
+
+
+if __name__ == '__main__':
+ tf.test.main()
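For reference, the expected codes in the first case of test_correct_relative_codes_with_default_scale above can be reproduced by hand as the inverse of the decode shown earlier: the anchor [15, 12, 30, 18] has center (22.5, 15) and side l_a = sqrt(15 * 6) ≈ 9.4868, and the box [10, 10, 20, 15] has center (15, 12.5) and side l = sqrt(10 * 5) ≈ 7.0711, so ty = (15 - 22.5) / 9.4868 ≈ -0.7906, tx = (12.5 - 15) / 9.4868 ≈ -0.2635, and tl = ln(7.0711 / 9.4868) ≈ -0.2939.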
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/anchor_generator_builder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/anchor_generator_builder.py"
new file mode 100644
index 00000000..54cec3a1
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/anchor_generator_builder.py"
@@ -0,0 +1,94 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""A function to build an object detection anchor generator from config."""
+
+from object_detection.anchor_generators import grid_anchor_generator
+from object_detection.anchor_generators import multiple_grid_anchor_generator
+from object_detection.anchor_generators import multiscale_grid_anchor_generator
+from object_detection.protos import anchor_generator_pb2
+
+
+def build(anchor_generator_config):
+ """Builds an anchor generator based on the config.
+
+ Args:
+ anchor_generator_config: An anchor_generator.proto object containing the
+ config for the desired anchor generator.
+
+ Returns:
+ Anchor generator based on the config.
+
+ Raises:
+ ValueError: On empty anchor generator proto.
+ """
+ if not isinstance(anchor_generator_config,
+ anchor_generator_pb2.AnchorGenerator):
+ raise ValueError('anchor_generator_config not of type '
+ 'anchor_generator_pb2.AnchorGenerator')
+ if anchor_generator_config.WhichOneof(
+ 'anchor_generator_oneof') == 'grid_anchor_generator':
+ grid_anchor_generator_config = anchor_generator_config.grid_anchor_generator
+ return grid_anchor_generator.GridAnchorGenerator(
+ scales=[float(scale) for scale in grid_anchor_generator_config.scales],
+ aspect_ratios=[float(aspect_ratio)
+ for aspect_ratio
+ in grid_anchor_generator_config.aspect_ratios],
+ base_anchor_size=[grid_anchor_generator_config.height,
+ grid_anchor_generator_config.width],
+ anchor_stride=[grid_anchor_generator_config.height_stride,
+ grid_anchor_generator_config.width_stride],
+ anchor_offset=[grid_anchor_generator_config.height_offset,
+ grid_anchor_generator_config.width_offset])
+ elif anchor_generator_config.WhichOneof(
+ 'anchor_generator_oneof') == 'ssd_anchor_generator':
+ ssd_anchor_generator_config = anchor_generator_config.ssd_anchor_generator
+ anchor_strides = None
+ if ssd_anchor_generator_config.height_stride:
+ anchor_strides = zip(ssd_anchor_generator_config.height_stride,
+ ssd_anchor_generator_config.width_stride)
+ anchor_offsets = None
+ if ssd_anchor_generator_config.height_offset:
+ anchor_offsets = zip(ssd_anchor_generator_config.height_offset,
+ ssd_anchor_generator_config.width_offset)
+ return multiple_grid_anchor_generator.create_ssd_anchors(
+ num_layers=ssd_anchor_generator_config.num_layers,
+ min_scale=ssd_anchor_generator_config.min_scale,
+ max_scale=ssd_anchor_generator_config.max_scale,
+ scales=[float(scale) for scale in ssd_anchor_generator_config.scales],
+ aspect_ratios=ssd_anchor_generator_config.aspect_ratios,
+ interpolated_scale_aspect_ratio=(
+ ssd_anchor_generator_config.interpolated_scale_aspect_ratio),
+ base_anchor_size=[
+ ssd_anchor_generator_config.base_anchor_height,
+ ssd_anchor_generator_config.base_anchor_width
+ ],
+ anchor_strides=anchor_strides,
+ anchor_offsets=anchor_offsets,
+ reduce_boxes_in_lowest_layer=(
+ ssd_anchor_generator_config.reduce_boxes_in_lowest_layer))
+ elif anchor_generator_config.WhichOneof(
+ 'anchor_generator_oneof') == 'multiscale_anchor_generator':
+ cfg = anchor_generator_config.multiscale_anchor_generator
+ return multiscale_grid_anchor_generator.MultiscaleGridAnchorGenerator(
+ cfg.min_level,
+ cfg.max_level,
+ cfg.anchor_scale,
+ [float(aspect_ratio) for aspect_ratio in cfg.aspect_ratios],
+ cfg.scales_per_octave,
+ cfg.normalize_coordinates
+ )
+ else:
+ raise ValueError('Empty anchor generator.')
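A minimal usage sketch of the builder above (field values are illustrative; the pattern of merging a text-format proto and calling build mirrors the tests that follow):

```python
from google.protobuf import text_format

from object_detection.builders import anchor_generator_builder
from object_detection.protos import anchor_generator_pb2

config = anchor_generator_pb2.AnchorGenerator()
text_format.Merge("""
  grid_anchor_generator {
    scales: [0.5, 1.0, 2.0]
    aspect_ratios: [0.5, 1.0, 2.0]
    height_stride: 16
    width_stride: 16
  }
""", config)

# Returns a grid_anchor_generator.GridAnchorGenerator instance.
anchor_generator = anchor_generator_builder.build(config)
```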
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/anchor_generator_builder_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/anchor_generator_builder_test.py"
new file mode 100644
index 00000000..2a23c2d9
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/anchor_generator_builder_test.py"
@@ -0,0 +1,300 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for anchor_generator_builder."""
+
+import math
+
+import tensorflow as tf
+
+from google.protobuf import text_format
+from object_detection.anchor_generators import grid_anchor_generator
+from object_detection.anchor_generators import multiple_grid_anchor_generator
+from object_detection.anchor_generators import multiscale_grid_anchor_generator
+from object_detection.builders import anchor_generator_builder
+from object_detection.protos import anchor_generator_pb2
+
+
+class AnchorGeneratorBuilderTest(tf.test.TestCase):
+
+ def assert_almost_list_equal(self, expected_list, actual_list, delta=None):
+ self.assertEqual(len(expected_list), len(actual_list))
+ for expected_item, actual_item in zip(expected_list, actual_list):
+ self.assertAlmostEqual(expected_item, actual_item, delta=delta)
+
+ def test_build_grid_anchor_generator_with_defaults(self):
+ anchor_generator_text_proto = """
+ grid_anchor_generator {
+ }
+ """
+ anchor_generator_proto = anchor_generator_pb2.AnchorGenerator()
+ text_format.Merge(anchor_generator_text_proto, anchor_generator_proto)
+ anchor_generator_object = anchor_generator_builder.build(
+ anchor_generator_proto)
+ self.assertTrue(isinstance(anchor_generator_object,
+ grid_anchor_generator.GridAnchorGenerator))
+ self.assertListEqual(anchor_generator_object._scales, [])
+ self.assertListEqual(anchor_generator_object._aspect_ratios, [])
+ with self.test_session() as sess:
+ base_anchor_size, anchor_offset, anchor_stride = sess.run(
+ [anchor_generator_object._base_anchor_size,
+ anchor_generator_object._anchor_offset,
+ anchor_generator_object._anchor_stride])
+ self.assertAllEqual(anchor_offset, [0, 0])
+ self.assertAllEqual(anchor_stride, [16, 16])
+ self.assertAllEqual(base_anchor_size, [256, 256])
+
+ def test_build_grid_anchor_generator_with_non_default_parameters(self):
+ anchor_generator_text_proto = """
+ grid_anchor_generator {
+ height: 128
+ width: 512
+ height_stride: 10
+ width_stride: 20
+ height_offset: 30
+ width_offset: 40
+ scales: [0.4, 2.2]
+ aspect_ratios: [0.3, 4.5]
+ }
+ """
+ anchor_generator_proto = anchor_generator_pb2.AnchorGenerator()
+ text_format.Merge(anchor_generator_text_proto, anchor_generator_proto)
+ anchor_generator_object = anchor_generator_builder.build(
+ anchor_generator_proto)
+ self.assertTrue(isinstance(anchor_generator_object,
+ grid_anchor_generator.GridAnchorGenerator))
+ self.assert_almost_list_equal(anchor_generator_object._scales,
+ [0.4, 2.2])
+ self.assert_almost_list_equal(anchor_generator_object._aspect_ratios,
+ [0.3, 4.5])
+ with self.test_session() as sess:
+ base_anchor_size, anchor_offset, anchor_stride = sess.run(
+ [anchor_generator_object._base_anchor_size,
+ anchor_generator_object._anchor_offset,
+ anchor_generator_object._anchor_stride])
+ self.assertAllEqual(anchor_offset, [30, 40])
+ self.assertAllEqual(anchor_stride, [10, 20])
+ self.assertAllEqual(base_anchor_size, [128, 512])
+
+ def test_build_ssd_anchor_generator_with_defaults(self):
+ anchor_generator_text_proto = """
+ ssd_anchor_generator {
+ aspect_ratios: [1.0]
+ }
+ """
+ anchor_generator_proto = anchor_generator_pb2.AnchorGenerator()
+ text_format.Merge(anchor_generator_text_proto, anchor_generator_proto)
+ anchor_generator_object = anchor_generator_builder.build(
+ anchor_generator_proto)
+ self.assertTrue(isinstance(anchor_generator_object,
+ multiple_grid_anchor_generator.
+ MultipleGridAnchorGenerator))
+ for actual_scales, expected_scales in zip(
+ list(anchor_generator_object._scales),
+ [(0.1, 0.2, 0.2),
+ (0.35, 0.418),
+ (0.499, 0.570),
+ (0.649, 0.721),
+ (0.799, 0.871),
+ (0.949, 0.974)]):
+ self.assert_almost_list_equal(expected_scales, actual_scales, delta=1e-2)
+ for actual_aspect_ratio, expected_aspect_ratio in zip(
+ list(anchor_generator_object._aspect_ratios),
+ [(1.0, 2.0, 0.5)] + 5 * [(1.0, 1.0)]):
+ self.assert_almost_list_equal(expected_aspect_ratio, actual_aspect_ratio)
+
+ with self.test_session() as sess:
+ base_anchor_size = sess.run(anchor_generator_object._base_anchor_size)
+ self.assertAllClose(base_anchor_size, [1.0, 1.0])
+
+ def test_build_ssd_anchor_generator_with_custom_scales(self):
+ anchor_generator_text_proto = """
+ ssd_anchor_generator {
+ aspect_ratios: [1.0]
+ scales: [0.1, 0.15, 0.2, 0.4, 0.6, 0.8]
+ reduce_boxes_in_lowest_layer: false
+ }
+ """
+ anchor_generator_proto = anchor_generator_pb2.AnchorGenerator()
+ text_format.Merge(anchor_generator_text_proto, anchor_generator_proto)
+ anchor_generator_object = anchor_generator_builder.build(
+ anchor_generator_proto)
+ self.assertTrue(isinstance(anchor_generator_object,
+ multiple_grid_anchor_generator.
+ MultipleGridAnchorGenerator))
+ for actual_scales, expected_scales in zip(
+ list(anchor_generator_object._scales),
+ [(0.1, math.sqrt(0.1 * 0.15)),
+ (0.15, math.sqrt(0.15 * 0.2)),
+ (0.2, math.sqrt(0.2 * 0.4)),
+ (0.4, math.sqrt(0.4 * 0.6)),
+ (0.6, math.sqrt(0.6 * 0.8)),
+ (0.8, math.sqrt(0.8 * 1.0))]):
+ self.assert_almost_list_equal(expected_scales, actual_scales, delta=1e-2)
+
+ def test_build_ssd_anchor_generator_with_custom_interpolated_scale(self):
+ anchor_generator_text_proto = """
+ ssd_anchor_generator {
+ aspect_ratios: [0.5]
+ interpolated_scale_aspect_ratio: 0.5
+ reduce_boxes_in_lowest_layer: false
+ }
+ """
+ anchor_generator_proto = anchor_generator_pb2.AnchorGenerator()
+ text_format.Merge(anchor_generator_text_proto, anchor_generator_proto)
+ anchor_generator_object = anchor_generator_builder.build(
+ anchor_generator_proto)
+ self.assertTrue(isinstance(anchor_generator_object,
+ multiple_grid_anchor_generator.
+ MultipleGridAnchorGenerator))
+ for actual_aspect_ratio, expected_aspect_ratio in zip(
+ list(anchor_generator_object._aspect_ratios),
+ 6 * [(0.5, 0.5)]):
+ self.assert_almost_list_equal(expected_aspect_ratio, actual_aspect_ratio)
+
+ def test_build_ssd_anchor_generator_without_reduced_boxes(self):
+ anchor_generator_text_proto = """
+ ssd_anchor_generator {
+ aspect_ratios: [1.0]
+ reduce_boxes_in_lowest_layer: false
+ }
+ """
+ anchor_generator_proto = anchor_generator_pb2.AnchorGenerator()
+ text_format.Merge(anchor_generator_text_proto, anchor_generator_proto)
+ anchor_generator_object = anchor_generator_builder.build(
+ anchor_generator_proto)
+ self.assertTrue(isinstance(anchor_generator_object,
+ multiple_grid_anchor_generator.
+ MultipleGridAnchorGenerator))
+
+ for actual_scales, expected_scales in zip(
+ list(anchor_generator_object._scales),
+ [(0.2, 0.264),
+ (0.35, 0.418),
+ (0.499, 0.570),
+ (0.649, 0.721),
+ (0.799, 0.871),
+ (0.949, 0.974)]):
+ self.assert_almost_list_equal(expected_scales, actual_scales, delta=1e-2)
+
+ for actual_aspect_ratio, expected_aspect_ratio in zip(
+ list(anchor_generator_object._aspect_ratios),
+ 6 * [(1.0, 1.0)]):
+ self.assert_almost_list_equal(expected_aspect_ratio, actual_aspect_ratio)
+
+ with self.test_session() as sess:
+ base_anchor_size = sess.run(anchor_generator_object._base_anchor_size)
+ self.assertAllClose(base_anchor_size, [1.0, 1.0])
+
+ def test_build_ssd_anchor_generator_with_non_default_parameters(self):
+ anchor_generator_text_proto = """
+ ssd_anchor_generator {
+ num_layers: 2
+ min_scale: 0.3
+ max_scale: 0.8
+ aspect_ratios: [2.0]
+ height_stride: 16
+ height_stride: 32
+ width_stride: 20
+ width_stride: 30
+ height_offset: 8
+ height_offset: 16
+ width_offset: 0
+ width_offset: 10
+ }
+ """
+ anchor_generator_proto = anchor_generator_pb2.AnchorGenerator()
+ text_format.Merge(anchor_generator_text_proto, anchor_generator_proto)
+ anchor_generator_object = anchor_generator_builder.build(
+ anchor_generator_proto)
+ self.assertTrue(isinstance(anchor_generator_object,
+ multiple_grid_anchor_generator.
+ MultipleGridAnchorGenerator))
+
+ for actual_scales, expected_scales in zip(
+ list(anchor_generator_object._scales),
+ [(0.1, 0.3, 0.3), (0.8, 0.894)]):
+ self.assert_almost_list_equal(expected_scales, actual_scales, delta=1e-2)
+
+ for actual_aspect_ratio, expected_aspect_ratio in zip(
+ list(anchor_generator_object._aspect_ratios),
+ [(1.0, 2.0, 0.5), (2.0, 1.0)]):
+ self.assert_almost_list_equal(expected_aspect_ratio, actual_aspect_ratio)
+
+ for actual_strides, expected_strides in zip(
+ list(anchor_generator_object._anchor_strides), [(16, 20), (32, 30)]):
+ self.assert_almost_list_equal(expected_strides, actual_strides)
+
+ for actual_offsets, expected_offsets in zip(
+ list(anchor_generator_object._anchor_offsets), [(8, 0), (16, 10)]):
+ self.assert_almost_list_equal(expected_offsets, actual_offsets)
+
+ with self.test_session() as sess:
+ base_anchor_size = sess.run(anchor_generator_object._base_anchor_size)
+ self.assertAllClose(base_anchor_size, [1.0, 1.0])
+
+  def test_raise_value_error_on_empty_anchor_generator(self):
+ anchor_generator_text_proto = """
+ """
+ anchor_generator_proto = anchor_generator_pb2.AnchorGenerator()
+ text_format.Merge(anchor_generator_text_proto, anchor_generator_proto)
+ with self.assertRaises(ValueError):
+ anchor_generator_builder.build(anchor_generator_proto)
+
+ def test_build_multiscale_anchor_generator_custom_aspect_ratios(self):
+ anchor_generator_text_proto = """
+ multiscale_anchor_generator {
+ aspect_ratios: [1.0]
+ }
+ """
+ anchor_generator_proto = anchor_generator_pb2.AnchorGenerator()
+ text_format.Merge(anchor_generator_text_proto, anchor_generator_proto)
+ anchor_generator_object = anchor_generator_builder.build(
+ anchor_generator_proto)
+ self.assertTrue(isinstance(anchor_generator_object,
+ multiscale_grid_anchor_generator.
+ MultiscaleGridAnchorGenerator))
+ for level, anchor_grid_info in zip(
+ range(3, 8), anchor_generator_object._anchor_grid_info):
+ self.assertEqual(set(anchor_grid_info.keys()), set(['level', 'info']))
+      self.assertEqual(level, anchor_grid_info['level'])
+ self.assertEqual(len(anchor_grid_info['info']), 4)
+ self.assertAllClose(anchor_grid_info['info'][0], [2**0, 2**0.5])
+      self.assertAllClose(anchor_grid_info['info'][1], [1.0])
+ self.assertAllClose(anchor_grid_info['info'][2],
+ [4.0 * 2**level, 4.0 * 2**level])
+ self.assertAllClose(anchor_grid_info['info'][3], [2**level, 2**level])
+ self.assertTrue(anchor_generator_object._normalize_coordinates)
+
+ def test_build_multiscale_anchor_generator_with_anchors_in_pixel_coordinates(
+ self):
+ anchor_generator_text_proto = """
+ multiscale_anchor_generator {
+ aspect_ratios: [1.0]
+ normalize_coordinates: false
+ }
+ """
+ anchor_generator_proto = anchor_generator_pb2.AnchorGenerator()
+ text_format.Merge(anchor_generator_text_proto, anchor_generator_proto)
+ anchor_generator_object = anchor_generator_builder.build(
+ anchor_generator_proto)
+ self.assertTrue(isinstance(anchor_generator_object,
+ multiscale_grid_anchor_generator.
+ MultiscaleGridAnchorGenerator))
+ self.assertFalse(anchor_generator_object._normalize_coordinates)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/box_coder_builder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/box_coder_builder.py"
new file mode 100644
index 00000000..cc13d5a2
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/box_coder_builder.py"
@@ -0,0 +1,66 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""A function to build an object detection box coder from configuration."""
+from object_detection.box_coders import faster_rcnn_box_coder
+from object_detection.box_coders import keypoint_box_coder
+from object_detection.box_coders import mean_stddev_box_coder
+from object_detection.box_coders import square_box_coder
+from object_detection.protos import box_coder_pb2
+
+
+def build(box_coder_config):
+ """Builds a box coder object based on the box coder config.
+
+ Args:
+ box_coder_config: A box_coder.proto object containing the config for the
+ desired box coder.
+
+ Returns:
+ BoxCoder based on the config.
+
+ Raises:
+ ValueError: On empty box coder proto.
+ """
+ if not isinstance(box_coder_config, box_coder_pb2.BoxCoder):
+ raise ValueError('box_coder_config not of type box_coder_pb2.BoxCoder.')
+
+ if box_coder_config.WhichOneof('box_coder_oneof') == 'faster_rcnn_box_coder':
+ return faster_rcnn_box_coder.FasterRcnnBoxCoder(scale_factors=[
+ box_coder_config.faster_rcnn_box_coder.y_scale,
+ box_coder_config.faster_rcnn_box_coder.x_scale,
+ box_coder_config.faster_rcnn_box_coder.height_scale,
+ box_coder_config.faster_rcnn_box_coder.width_scale
+ ])
+ if box_coder_config.WhichOneof('box_coder_oneof') == 'keypoint_box_coder':
+ return keypoint_box_coder.KeypointBoxCoder(
+ box_coder_config.keypoint_box_coder.num_keypoints,
+ scale_factors=[
+ box_coder_config.keypoint_box_coder.y_scale,
+ box_coder_config.keypoint_box_coder.x_scale,
+ box_coder_config.keypoint_box_coder.height_scale,
+ box_coder_config.keypoint_box_coder.width_scale
+ ])
+ if (box_coder_config.WhichOneof('box_coder_oneof') ==
+ 'mean_stddev_box_coder'):
+ return mean_stddev_box_coder.MeanStddevBoxCoder(
+ stddev=box_coder_config.mean_stddev_box_coder.stddev)
+ if box_coder_config.WhichOneof('box_coder_oneof') == 'square_box_coder':
+ return square_box_coder.SquareBoxCoder(scale_factors=[
+ box_coder_config.square_box_coder.y_scale,
+ box_coder_config.square_box_coder.x_scale,
+ box_coder_config.square_box_coder.length_scale
+ ])
+ raise ValueError('Empty box coder.')
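A corresponding usage sketch for the box coder builder (values are illustrative and mirror the proto fields consumed above):

```python
from google.protobuf import text_format

from object_detection.builders import box_coder_builder
from object_detection.protos import box_coder_pb2

config = box_coder_pb2.BoxCoder()
text_format.Merge("""
  faster_rcnn_box_coder {
    y_scale: 10.0
    x_scale: 10.0
    height_scale: 5.0
    width_scale: 5.0
  }
""", config)

# Returns a faster_rcnn_box_coder.FasterRcnnBoxCoder instance.
box_coder = box_coder_builder.build(config)
```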
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/box_coder_builder_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/box_coder_builder_test.py"
new file mode 100644
index 00000000..286012e9
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/box_coder_builder_test.py"
@@ -0,0 +1,136 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for box_coder_builder."""
+
+import tensorflow as tf
+
+from google.protobuf import text_format
+from object_detection.box_coders import faster_rcnn_box_coder
+from object_detection.box_coders import keypoint_box_coder
+from object_detection.box_coders import mean_stddev_box_coder
+from object_detection.box_coders import square_box_coder
+from object_detection.builders import box_coder_builder
+from object_detection.protos import box_coder_pb2
+
+
+class BoxCoderBuilderTest(tf.test.TestCase):
+
+ def test_build_faster_rcnn_box_coder_with_defaults(self):
+ box_coder_text_proto = """
+ faster_rcnn_box_coder {
+ }
+ """
+ box_coder_proto = box_coder_pb2.BoxCoder()
+ text_format.Merge(box_coder_text_proto, box_coder_proto)
+ box_coder_object = box_coder_builder.build(box_coder_proto)
+ self.assertIsInstance(box_coder_object,
+ faster_rcnn_box_coder.FasterRcnnBoxCoder)
+ self.assertEqual(box_coder_object._scale_factors, [10.0, 10.0, 5.0, 5.0])
+
+ def test_build_faster_rcnn_box_coder_with_non_default_parameters(self):
+ box_coder_text_proto = """
+ faster_rcnn_box_coder {
+ y_scale: 6.0
+ x_scale: 3.0
+ height_scale: 7.0
+ width_scale: 8.0
+ }
+ """
+ box_coder_proto = box_coder_pb2.BoxCoder()
+ text_format.Merge(box_coder_text_proto, box_coder_proto)
+ box_coder_object = box_coder_builder.build(box_coder_proto)
+ self.assertIsInstance(box_coder_object,
+ faster_rcnn_box_coder.FasterRcnnBoxCoder)
+ self.assertEqual(box_coder_object._scale_factors, [6.0, 3.0, 7.0, 8.0])
+
+ def test_build_keypoint_box_coder_with_defaults(self):
+ box_coder_text_proto = """
+ keypoint_box_coder {
+ }
+ """
+ box_coder_proto = box_coder_pb2.BoxCoder()
+ text_format.Merge(box_coder_text_proto, box_coder_proto)
+ box_coder_object = box_coder_builder.build(box_coder_proto)
+ self.assertIsInstance(box_coder_object, keypoint_box_coder.KeypointBoxCoder)
+ self.assertEqual(box_coder_object._scale_factors, [10.0, 10.0, 5.0, 5.0])
+
+ def test_build_keypoint_box_coder_with_non_default_parameters(self):
+ box_coder_text_proto = """
+ keypoint_box_coder {
+ num_keypoints: 6
+ y_scale: 6.0
+ x_scale: 3.0
+ height_scale: 7.0
+ width_scale: 8.0
+ }
+ """
+ box_coder_proto = box_coder_pb2.BoxCoder()
+ text_format.Merge(box_coder_text_proto, box_coder_proto)
+ box_coder_object = box_coder_builder.build(box_coder_proto)
+ self.assertIsInstance(box_coder_object, keypoint_box_coder.KeypointBoxCoder)
+ self.assertEqual(box_coder_object._num_keypoints, 6)
+ self.assertEqual(box_coder_object._scale_factors, [6.0, 3.0, 7.0, 8.0])
+
+ def test_build_mean_stddev_box_coder(self):
+ box_coder_text_proto = """
+ mean_stddev_box_coder {
+ }
+ """
+ box_coder_proto = box_coder_pb2.BoxCoder()
+ text_format.Merge(box_coder_text_proto, box_coder_proto)
+ box_coder_object = box_coder_builder.build(box_coder_proto)
+ self.assertTrue(
+ isinstance(box_coder_object,
+ mean_stddev_box_coder.MeanStddevBoxCoder))
+
+ def test_build_square_box_coder_with_defaults(self):
+ box_coder_text_proto = """
+ square_box_coder {
+ }
+ """
+ box_coder_proto = box_coder_pb2.BoxCoder()
+ text_format.Merge(box_coder_text_proto, box_coder_proto)
+ box_coder_object = box_coder_builder.build(box_coder_proto)
+ self.assertTrue(
+ isinstance(box_coder_object, square_box_coder.SquareBoxCoder))
+ self.assertEqual(box_coder_object._scale_factors, [10.0, 10.0, 5.0])
+
+ def test_build_square_box_coder_with_non_default_parameters(self):
+ box_coder_text_proto = """
+ square_box_coder {
+ y_scale: 6.0
+ x_scale: 3.0
+ length_scale: 7.0
+ }
+ """
+ box_coder_proto = box_coder_pb2.BoxCoder()
+ text_format.Merge(box_coder_text_proto, box_coder_proto)
+ box_coder_object = box_coder_builder.build(box_coder_proto)
+ self.assertTrue(
+ isinstance(box_coder_object, square_box_coder.SquareBoxCoder))
+ self.assertEqual(box_coder_object._scale_factors, [6.0, 3.0, 7.0])
+
+ def test_raise_error_on_empty_box_coder(self):
+ box_coder_text_proto = """
+ """
+ box_coder_proto = box_coder_pb2.BoxCoder()
+ text_format.Merge(box_coder_text_proto, box_coder_proto)
+ with self.assertRaises(ValueError):
+ box_coder_builder.build(box_coder_proto)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/box_predictor_builder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/box_predictor_builder.py"
new file mode 100644
index 00000000..53a88daa
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/box_predictor_builder.py"
@@ -0,0 +1,633 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Function to build box predictor from configuration."""
+
+import collections
+import tensorflow as tf
+from object_detection.predictors import convolutional_box_predictor
+from object_detection.predictors import convolutional_keras_box_predictor
+from object_detection.predictors import mask_rcnn_box_predictor
+from object_detection.predictors import rfcn_box_predictor
+from object_detection.predictors.heads import box_head
+from object_detection.predictors.heads import class_head
+from object_detection.predictors.heads import keras_box_head
+from object_detection.predictors.heads import keras_class_head
+from object_detection.predictors.heads import mask_head
+from object_detection.protos import box_predictor_pb2
+
+
+def build_convolutional_box_predictor(is_training,
+ num_classes,
+ conv_hyperparams_fn,
+ min_depth,
+ max_depth,
+ num_layers_before_predictor,
+ use_dropout,
+ dropout_keep_prob,
+ kernel_size,
+ box_code_size,
+ apply_sigmoid_to_scores=False,
+ add_background_class=True,
+ class_prediction_bias_init=0.0,
+ use_depthwise=False,):
+ """Builds the ConvolutionalBoxPredictor from the arguments.
+
+ Args:
+ is_training: Indicates whether the BoxPredictor is in training mode.
+ num_classes: number of classes. Note that num_classes *does not*
+ include the background category, so if groundtruth labels take values
+ in {0, 1, .., K-1}, num_classes=K (and not K+1, even though the
+ assigned classification targets can range from {0,... K}).
+ conv_hyperparams_fn: A function to generate tf-slim arg_scope with
+ hyperparameters for convolution ops.
+ min_depth: Minimum feature depth prior to predicting box encodings
+ and class predictions.
+ max_depth: Maximum feature depth prior to predicting box encodings
+ and class predictions. If max_depth is set to 0, no additional
+ feature map will be inserted before location and class predictions.
+ num_layers_before_predictor: Number of the additional conv layers before
+ the predictor.
+ use_dropout: Option to use dropout or not. Note that a single dropout
+ op is applied here prior to both box and class predictions, which stands
+ in contrast to the ConvolutionalBoxPredictor below.
+ dropout_keep_prob: Keep probability for dropout.
+ This is only used if use_dropout is True.
+ kernel_size: Size of final convolution kernel. If the
+ spatial resolution of the feature map is smaller than the kernel size,
+ then the kernel size is automatically set to be
+ min(feature_width, feature_height).
+ box_code_size: Size of encoding for each box.
+ apply_sigmoid_to_scores: If True, apply the sigmoid on the output
+ class_predictions.
+ add_background_class: Whether to add an implicit background class.
+ class_prediction_bias_init: Constant value to initialize bias of the last
+ conv2d layer before class prediction.
+ use_depthwise: Whether to use depthwise convolutions for prediction
+ steps. Default is False.
+
+ Returns:
+ A ConvolutionalBoxPredictor class.
+ """
+ box_prediction_head = box_head.ConvolutionalBoxHead(
+ is_training=is_training,
+ box_code_size=box_code_size,
+ kernel_size=kernel_size,
+ use_depthwise=use_depthwise)
+ class_prediction_head = class_head.ConvolutionalClassHead(
+ is_training=is_training,
+ num_class_slots=num_classes + 1 if add_background_class else num_classes,
+ use_dropout=use_dropout,
+ dropout_keep_prob=dropout_keep_prob,
+ kernel_size=kernel_size,
+ apply_sigmoid_to_scores=apply_sigmoid_to_scores,
+ class_prediction_bias_init=class_prediction_bias_init,
+ use_depthwise=use_depthwise)
+ other_heads = {}
+ return convolutional_box_predictor.ConvolutionalBoxPredictor(
+ is_training=is_training,
+ num_classes=num_classes,
+ box_prediction_head=box_prediction_head,
+ class_prediction_head=class_prediction_head,
+ other_heads=other_heads,
+ conv_hyperparams_fn=conv_hyperparams_fn,
+ num_layers_before_predictor=num_layers_before_predictor,
+ min_depth=min_depth,
+ max_depth=max_depth)
+
+
+def build_convolutional_keras_box_predictor(is_training,
+ num_classes,
+ conv_hyperparams,
+ freeze_batchnorm,
+ inplace_batchnorm_update,
+ num_predictions_per_location_list,
+ min_depth,
+ max_depth,
+ num_layers_before_predictor,
+ use_dropout,
+ dropout_keep_prob,
+ kernel_size,
+ box_code_size,
+ add_background_class=True,
+ class_prediction_bias_init=0.0,
+ use_depthwise=False,
+ name='BoxPredictor'):
+ """Builds the Keras ConvolutionalBoxPredictor from the arguments.
+
+ Args:
+ is_training: Indicates whether the BoxPredictor is in training mode.
+ num_classes: number of classes. Note that num_classes *does not*
+ include the background category, so if groundtruth labels take values
+ in {0, 1, .., K-1}, num_classes=K (and not K+1, even though the
+ assigned classification targets can range from {0,... K}).
+ conv_hyperparams: A `hyperparams_builder.KerasLayerHyperparams` object
+ containing hyperparameters for convolution ops.
+ freeze_batchnorm: Whether to freeze batch norm parameters during
+ training or not. When training with a small batch size (e.g. 1), it is
+ desirable to freeze batch norm update and use pretrained batch norm
+ params.
+ inplace_batchnorm_update: Whether to update batch norm moving average
+ values inplace. When this is false train op must add a control
+ dependency on tf.graphkeys.UPDATE_OPS collection in order to update
+ batch norm statistics.
+ num_predictions_per_location_list: A list of integers representing the
+ number of box predictions to be made per spatial location for each
+ feature map.
+ min_depth: Minimum feature depth prior to predicting box encodings
+ and class predictions.
+ max_depth: Maximum feature depth prior to predicting box encodings
+ and class predictions. If max_depth is set to 0, no additional
+ feature map will be inserted before location and class predictions.
+ num_layers_before_predictor: Number of the additional conv layers before
+ the predictor.
+ use_dropout: Option to use dropout or not. Note that a single dropout
+ op is applied here prior to both box and class predictions, which stands
+ in contrast to the ConvolutionalBoxPredictor below.
+ dropout_keep_prob: Keep probability for dropout.
+ This is only used if use_dropout is True.
+ kernel_size: Size of final convolution kernel. If the
+ spatial resolution of the feature map is smaller than the kernel size,
+ then the kernel size is automatically set to be
+ min(feature_width, feature_height).
+ box_code_size: Size of encoding for each box.
+ add_background_class: Whether to add an implicit background class.
+ class_prediction_bias_init: constant value to initialize bias of the last
+ conv2d layer before class prediction.
+ use_depthwise: Whether to use depthwise convolutions for prediction
+ steps. Default is False.
+ name: A string name scope to assign to the box predictor. If `None`, Keras
+ will auto-generate one from the class name.
+
+ Returns:
+ A Keras ConvolutionalBoxPredictor class.
+ """
+ box_prediction_heads = []
+ class_prediction_heads = []
+ other_heads = {}
+
+ for stack_index, num_predictions_per_location in enumerate(
+ num_predictions_per_location_list):
+ box_prediction_heads.append(
+ keras_box_head.ConvolutionalBoxHead(
+ is_training=is_training,
+ box_code_size=box_code_size,
+ kernel_size=kernel_size,
+ conv_hyperparams=conv_hyperparams,
+ freeze_batchnorm=freeze_batchnorm,
+ num_predictions_per_location=num_predictions_per_location,
+ use_depthwise=use_depthwise,
+ name='ConvolutionalBoxHead_%d' % stack_index))
+ class_prediction_heads.append(
+ keras_class_head.ConvolutionalClassHead(
+ is_training=is_training,
+ num_class_slots=(
+ num_classes + 1 if add_background_class else num_classes),
+ use_dropout=use_dropout,
+ dropout_keep_prob=dropout_keep_prob,
+ kernel_size=kernel_size,
+ conv_hyperparams=conv_hyperparams,
+ freeze_batchnorm=freeze_batchnorm,
+ num_predictions_per_location=num_predictions_per_location,
+ class_prediction_bias_init=class_prediction_bias_init,
+ use_depthwise=use_depthwise,
+ name='ConvolutionalClassHead_%d' % stack_index))
+
+ return convolutional_keras_box_predictor.ConvolutionalBoxPredictor(
+ is_training=is_training,
+ num_classes=num_classes,
+ box_prediction_heads=box_prediction_heads,
+ class_prediction_heads=class_prediction_heads,
+ other_heads=other_heads,
+ conv_hyperparams=conv_hyperparams,
+ num_layers_before_predictor=num_layers_before_predictor,
+ min_depth=min_depth,
+ max_depth=max_depth,
+ freeze_batchnorm=freeze_batchnorm,
+ inplace_batchnorm_update=inplace_batchnorm_update,
+ name=name)
+
+
+def build_weight_shared_convolutional_box_predictor(
+ is_training,
+ num_classes,
+ conv_hyperparams_fn,
+ depth,
+ num_layers_before_predictor,
+ box_code_size,
+ kernel_size=3,
+ add_background_class=True,
+ class_prediction_bias_init=0.0,
+ use_dropout=False,
+ dropout_keep_prob=0.8,
+ share_prediction_tower=False,
+ apply_batch_norm=True,
+ use_depthwise=False,
+ score_converter_fn=tf.identity,
+ box_encodings_clip_range=None):
+ """Builds and returns a WeightSharedConvolutionalBoxPredictor class.
+
+ Args:
+ is_training: Indicates whether the BoxPredictor is in training mode.
+ num_classes: number of classes. Note that num_classes *does not*
+ include the background category, so if groundtruth labels take values
+ in {0, 1, .., K-1}, num_classes=K (and not K+1, even though the
+ assigned classification targets can range from {0,... K}).
+ conv_hyperparams_fn: A function to generate tf-slim arg_scope with
+ hyperparameters for convolution ops.
+ depth: depth of conv layers.
+ num_layers_before_predictor: Number of the additional conv layers before
+ the predictor.
+ box_code_size: Size of encoding for each box.
+ kernel_size: Size of final convolution kernel.
+ add_background_class: Whether to add an implicit background class.
+ class_prediction_bias_init: constant value to initialize bias of the last
+ conv2d layer before class prediction.
+ use_dropout: Whether to apply dropout to class prediction head.
+    dropout_keep_prob: Probability of keeping activations.
+ share_prediction_tower: Whether to share the multi-layer tower between box
+ prediction and class prediction heads.
+ apply_batch_norm: Whether to apply batch normalization to conv layers in
+ this predictor.
+ use_depthwise: Whether to use depthwise separable conv2d instead of conv2d.
+ score_converter_fn: Callable score converter to perform elementwise op on
+ class scores.
+ box_encodings_clip_range: Min and max values for clipping the box_encodings.
+
+ Returns:
+ A WeightSharedConvolutionalBoxPredictor class.
+ """
+ box_prediction_head = box_head.WeightSharedConvolutionalBoxHead(
+ box_code_size=box_code_size,
+ kernel_size=kernel_size,
+ use_depthwise=use_depthwise,
+ box_encodings_clip_range=box_encodings_clip_range)
+ class_prediction_head = (
+ class_head.WeightSharedConvolutionalClassHead(
+ num_class_slots=(
+ num_classes + 1 if add_background_class else num_classes),
+ kernel_size=kernel_size,
+ class_prediction_bias_init=class_prediction_bias_init,
+ use_dropout=use_dropout,
+ dropout_keep_prob=dropout_keep_prob,
+ use_depthwise=use_depthwise,
+ score_converter_fn=score_converter_fn))
+ other_heads = {}
+ return convolutional_box_predictor.WeightSharedConvolutionalBoxPredictor(
+ is_training=is_training,
+ num_classes=num_classes,
+ box_prediction_head=box_prediction_head,
+ class_prediction_head=class_prediction_head,
+ other_heads=other_heads,
+ conv_hyperparams_fn=conv_hyperparams_fn,
+ depth=depth,
+ num_layers_before_predictor=num_layers_before_predictor,
+ kernel_size=kernel_size,
+ apply_batch_norm=apply_batch_norm,
+ share_prediction_tower=share_prediction_tower,
+ use_depthwise=use_depthwise)
+
+
+def build_mask_rcnn_box_predictor(is_training,
+ num_classes,
+ fc_hyperparams_fn,
+ use_dropout,
+ dropout_keep_prob,
+ box_code_size,
+ add_background_class=True,
+ share_box_across_classes=False,
+ predict_instance_masks=False,
+ conv_hyperparams_fn=None,
+ mask_height=14,
+ mask_width=14,
+ mask_prediction_num_conv_layers=2,
+ mask_prediction_conv_depth=256,
+ masks_are_class_agnostic=False,
+ convolve_then_upsample_masks=False):
+ """Builds and returns a MaskRCNNBoxPredictor class.
+
+ Args:
+ is_training: Indicates whether the BoxPredictor is in training mode.
+ num_classes: number of classes. Note that num_classes *does not*
+ include the background category, so if groundtruth labels take values
+ in {0, 1, .., K-1}, num_classes=K (and not K+1, even though the
+ assigned classification targets can range from {0,... K}).
+ fc_hyperparams_fn: A function to generate tf-slim arg_scope with
+ hyperparameters for fully connected ops.
+ use_dropout: Option to use dropout or not. Note that a single dropout
+ op is applied here prior to both box and class predictions, which stands
+ in contrast to the ConvolutionalBoxPredictor below.
+ dropout_keep_prob: Keep probability for dropout.
+ This is only used if use_dropout is True.
+ box_code_size: Size of encoding for each box.
+ add_background_class: Whether to add an implicit background class.
+ share_box_across_classes: Whether to share boxes across classes rather
+ than use a different box for each class.
+ predict_instance_masks: If True, will add a third stage mask prediction
+ to the returned class.
+ conv_hyperparams_fn: A function to generate tf-slim arg_scope with
+ hyperparameters for convolution ops.
+ mask_height: Desired output mask height. The default value is 14.
+ mask_width: Desired output mask width. The default value is 14.
+ mask_prediction_num_conv_layers: Number of convolution layers applied to
+ the image_features in mask prediction branch.
+ mask_prediction_conv_depth: The depth for the first conv2d_transpose op
+ applied to the image_features in the mask prediction branch. If set
+ to 0, the depth of the convolution layers will be automatically chosen
+ based on the number of object classes and the number of channels in the
+ image features.
+ masks_are_class_agnostic: Boolean determining if the mask-head is
+ class-agnostic or not.
+ convolve_then_upsample_masks: Whether to apply convolutions on mask
+ features before upsampling using nearest neighbor resizing. Otherwise,
+ mask features are resized to [`mask_height`, `mask_width`] using
+ bilinear resizing before applying convolutions.
+
+ Returns:
+ A MaskRCNNBoxPredictor class.
+ """
+ box_prediction_head = box_head.MaskRCNNBoxHead(
+ is_training=is_training,
+ num_classes=num_classes,
+ fc_hyperparams_fn=fc_hyperparams_fn,
+ use_dropout=use_dropout,
+ dropout_keep_prob=dropout_keep_prob,
+ box_code_size=box_code_size,
+ share_box_across_classes=share_box_across_classes)
+ class_prediction_head = class_head.MaskRCNNClassHead(
+ is_training=is_training,
+ num_class_slots=num_classes + 1 if add_background_class else num_classes,
+ fc_hyperparams_fn=fc_hyperparams_fn,
+ use_dropout=use_dropout,
+ dropout_keep_prob=dropout_keep_prob)
+ third_stage_heads = {}
+ if predict_instance_masks:
+ third_stage_heads[
+ mask_rcnn_box_predictor.
+ MASK_PREDICTIONS] = mask_head.MaskRCNNMaskHead(
+ num_classes=num_classes,
+ conv_hyperparams_fn=conv_hyperparams_fn,
+ mask_height=mask_height,
+ mask_width=mask_width,
+ mask_prediction_num_conv_layers=mask_prediction_num_conv_layers,
+ mask_prediction_conv_depth=mask_prediction_conv_depth,
+ masks_are_class_agnostic=masks_are_class_agnostic,
+ convolve_then_upsample=convolve_then_upsample_masks)
+ return mask_rcnn_box_predictor.MaskRCNNBoxPredictor(
+ is_training=is_training,
+ num_classes=num_classes,
+ box_prediction_head=box_prediction_head,
+ class_prediction_head=class_prediction_head,
+ third_stage_heads=third_stage_heads)
+
+
+def build_score_converter(score_converter_config, is_training):
+ """Builds score converter based on the config.
+
+ Builds one of [tf.identity, tf.sigmoid] score converters based on the config
+ and whether the BoxPredictor is for training or inference.
+
+ Args:
+ score_converter_config:
+ box_predictor_pb2.WeightSharedConvolutionalBoxPredictor.score_converter.
+ is_training: Indicates whether the BoxPredictor is in training mode.
+
+ Returns:
+ Callable score converter op.
+
+ Raises:
+ ValueError: On unknown score converter.
+ """
+ if score_converter_config == (
+ box_predictor_pb2.WeightSharedConvolutionalBoxPredictor.IDENTITY):
+ return tf.identity
+ if score_converter_config == (
+ box_predictor_pb2.WeightSharedConvolutionalBoxPredictor.SIGMOID):
+ return tf.identity if is_training else tf.sigmoid
+ raise ValueError('Unknown score converter.')
+
+
+BoxEncodingsClipRange = collections.namedtuple('BoxEncodingsClipRange',
+ ['min', 'max'])
+
+
+def build(argscope_fn, box_predictor_config, is_training, num_classes,
+ add_background_class=True):
+ """Builds box predictor based on the configuration.
+
+ Builds box predictor based on the configuration. See box_predictor.proto for
+ configurable options. Also, see box_predictor.py for more details.
+
+ Args:
+ argscope_fn: A function that takes the following inputs:
+ * hyperparams_pb2.Hyperparams proto
+ * a boolean indicating if the model is in training mode.
+ and returns a tf slim argscope for Conv and FC hyperparameters.
+ box_predictor_config: box_predictor_pb2.BoxPredictor proto containing
+ configuration.
+    is_training: Whether the model is in training mode.
+ num_classes: Number of classes to predict.
+ add_background_class: Whether to add an implicit background class.
+
+ Returns:
+ box_predictor: box_predictor.BoxPredictor object.
+
+ Raises:
+ ValueError: On unknown box predictor.
+ """
+ if not isinstance(box_predictor_config, box_predictor_pb2.BoxPredictor):
+ raise ValueError('box_predictor_config not of type '
+ 'box_predictor_pb2.BoxPredictor.')
+
+ box_predictor_oneof = box_predictor_config.WhichOneof('box_predictor_oneof')
+
+ if box_predictor_oneof == 'convolutional_box_predictor':
+ config_box_predictor = box_predictor_config.convolutional_box_predictor
+ conv_hyperparams_fn = argscope_fn(config_box_predictor.conv_hyperparams,
+ is_training)
+ return build_convolutional_box_predictor(
+ is_training=is_training,
+ num_classes=num_classes,
+ add_background_class=add_background_class,
+ conv_hyperparams_fn=conv_hyperparams_fn,
+ use_dropout=config_box_predictor.use_dropout,
+ dropout_keep_prob=config_box_predictor.dropout_keep_probability,
+ box_code_size=config_box_predictor.box_code_size,
+ kernel_size=config_box_predictor.kernel_size,
+ num_layers_before_predictor=(
+ config_box_predictor.num_layers_before_predictor),
+ min_depth=config_box_predictor.min_depth,
+ max_depth=config_box_predictor.max_depth,
+ apply_sigmoid_to_scores=config_box_predictor.apply_sigmoid_to_scores,
+ class_prediction_bias_init=(
+ config_box_predictor.class_prediction_bias_init),
+ use_depthwise=config_box_predictor.use_depthwise)
+
+ if box_predictor_oneof == 'weight_shared_convolutional_box_predictor':
+ config_box_predictor = (
+ box_predictor_config.weight_shared_convolutional_box_predictor)
+ conv_hyperparams_fn = argscope_fn(config_box_predictor.conv_hyperparams,
+ is_training)
+ apply_batch_norm = config_box_predictor.conv_hyperparams.HasField(
+ 'batch_norm')
+ # During training phase, logits are used to compute the loss. Only apply
+ # sigmoid at inference to make the inference graph TPU friendly.
+ score_converter_fn = build_score_converter(
+ config_box_predictor.score_converter, is_training)
+ # Optionally apply clipping to box encodings, when box_encodings_clip_range
+ # is set.
+ box_encodings_clip_range = (
+ BoxEncodingsClipRange(
+ min=config_box_predictor.box_encodings_clip_range.min,
+ max=config_box_predictor.box_encodings_clip_range.max)
+ if config_box_predictor.HasField('box_encodings_clip_range') else None)
+
+ return build_weight_shared_convolutional_box_predictor(
+ is_training=is_training,
+ num_classes=num_classes,
+ add_background_class=add_background_class,
+ conv_hyperparams_fn=conv_hyperparams_fn,
+ depth=config_box_predictor.depth,
+ num_layers_before_predictor=(
+ config_box_predictor.num_layers_before_predictor),
+ box_code_size=config_box_predictor.box_code_size,
+ kernel_size=config_box_predictor.kernel_size,
+ class_prediction_bias_init=(
+ config_box_predictor.class_prediction_bias_init),
+ use_dropout=config_box_predictor.use_dropout,
+ dropout_keep_prob=config_box_predictor.dropout_keep_probability,
+ share_prediction_tower=config_box_predictor.share_prediction_tower,
+ apply_batch_norm=apply_batch_norm,
+ use_depthwise=config_box_predictor.use_depthwise,
+ score_converter_fn=score_converter_fn,
+ box_encodings_clip_range=box_encodings_clip_range)
+
+ if box_predictor_oneof == 'mask_rcnn_box_predictor':
+ config_box_predictor = box_predictor_config.mask_rcnn_box_predictor
+ fc_hyperparams_fn = argscope_fn(config_box_predictor.fc_hyperparams,
+ is_training)
+ conv_hyperparams_fn = None
+ if config_box_predictor.HasField('conv_hyperparams'):
+ conv_hyperparams_fn = argscope_fn(
+ config_box_predictor.conv_hyperparams, is_training)
+ return build_mask_rcnn_box_predictor(
+ is_training=is_training,
+ num_classes=num_classes,
+ add_background_class=add_background_class,
+ fc_hyperparams_fn=fc_hyperparams_fn,
+ use_dropout=config_box_predictor.use_dropout,
+ dropout_keep_prob=config_box_predictor.dropout_keep_probability,
+ box_code_size=config_box_predictor.box_code_size,
+ share_box_across_classes=(
+ config_box_predictor.share_box_across_classes),
+ predict_instance_masks=config_box_predictor.predict_instance_masks,
+ conv_hyperparams_fn=conv_hyperparams_fn,
+ mask_height=config_box_predictor.mask_height,
+ mask_width=config_box_predictor.mask_width,
+ mask_prediction_num_conv_layers=(
+ config_box_predictor.mask_prediction_num_conv_layers),
+ mask_prediction_conv_depth=(
+ config_box_predictor.mask_prediction_conv_depth),
+ masks_are_class_agnostic=(
+ config_box_predictor.masks_are_class_agnostic),
+ convolve_then_upsample_masks=(
+ config_box_predictor.convolve_then_upsample_masks))
+
+ if box_predictor_oneof == 'rfcn_box_predictor':
+ config_box_predictor = box_predictor_config.rfcn_box_predictor
+ conv_hyperparams_fn = argscope_fn(config_box_predictor.conv_hyperparams,
+ is_training)
+ box_predictor_object = rfcn_box_predictor.RfcnBoxPredictor(
+ is_training=is_training,
+ num_classes=num_classes,
+ conv_hyperparams_fn=conv_hyperparams_fn,
+ crop_size=[config_box_predictor.crop_height,
+ config_box_predictor.crop_width],
+ num_spatial_bins=[config_box_predictor.num_spatial_bins_height,
+ config_box_predictor.num_spatial_bins_width],
+ depth=config_box_predictor.depth,
+ box_code_size=config_box_predictor.box_code_size)
+ return box_predictor_object
+ raise ValueError('Unknown box predictor: {}'.format(box_predictor_oneof))
+
+
+def build_keras(conv_hyperparams_fn, freeze_batchnorm, inplace_batchnorm_update,
+ num_predictions_per_location_list, box_predictor_config,
+ is_training, num_classes, add_background_class=True):
+ """Builds a Keras-based box predictor based on the configuration.
+
+ Builds Keras-based box predictor based on the configuration.
+ See box_predictor.proto for configurable options. Also, see box_predictor.py
+ for more details.
+
+ Args:
+ conv_hyperparams_fn: A function that takes a hyperparams_pb2.Hyperparams
+ proto and returns a `hyperparams_builder.KerasLayerHyperparams`
+ for Conv or FC hyperparameters.
+ freeze_batchnorm: Whether to freeze batch norm parameters during
+ training or not. When training with a small batch size (e.g. 1), it is
+ desirable to freeze batch norm update and use pretrained batch norm
+ params.
+ inplace_batchnorm_update: Whether to update batch norm moving average
+ values inplace. When this is false train op must add a control
+ dependency on tf.graphkeys.UPDATE_OPS collection in order to update
+ batch norm statistics.
+ num_predictions_per_location_list: A list of integers representing the
+ number of box predictions to be made per spatial location for each
+ feature map.
+ box_predictor_config: box_predictor_pb2.BoxPredictor proto containing
+ configuration.
+    is_training: Whether the model is in training mode.
+ num_classes: Number of classes to predict.
+ add_background_class: Whether to add an implicit background class.
+
+ Returns:
+ box_predictor: box_predictor.KerasBoxPredictor object.
+
+ Raises:
+ ValueError: On unknown box predictor, or one with no Keras box predictor.
+ """
+ if not isinstance(box_predictor_config, box_predictor_pb2.BoxPredictor):
+ raise ValueError('box_predictor_config not of type '
+ 'box_predictor_pb2.BoxPredictor.')
+
+ box_predictor_oneof = box_predictor_config.WhichOneof('box_predictor_oneof')
+
+ if box_predictor_oneof == 'convolutional_box_predictor':
+ config_box_predictor = box_predictor_config.convolutional_box_predictor
+ conv_hyperparams = conv_hyperparams_fn(
+ config_box_predictor.conv_hyperparams)
+ return build_convolutional_keras_box_predictor(
+ is_training=is_training,
+ num_classes=num_classes,
+ add_background_class=add_background_class,
+ conv_hyperparams=conv_hyperparams,
+ freeze_batchnorm=freeze_batchnorm,
+ inplace_batchnorm_update=inplace_batchnorm_update,
+ num_predictions_per_location_list=num_predictions_per_location_list,
+ use_dropout=config_box_predictor.use_dropout,
+ dropout_keep_prob=config_box_predictor.dropout_keep_probability,
+ box_code_size=config_box_predictor.box_code_size,
+ kernel_size=config_box_predictor.kernel_size,
+ num_layers_before_predictor=(
+ config_box_predictor.num_layers_before_predictor),
+ min_depth=config_box_predictor.min_depth,
+ max_depth=config_box_predictor.max_depth,
+ class_prediction_bias_init=(
+ config_box_predictor.class_prediction_bias_init),
+ use_depthwise=config_box_predictor.use_depthwise)
+
+ raise ValueError(
+ 'Unknown box predictor for Keras: {}'.format(box_predictor_oneof))
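A minimal usage sketch of build() above for the convolutional predictor path (config values are illustrative; hyperparams_builder.build is assumed here as the argscope_fn, as exercised by the tests that follow):

```python
from google.protobuf import text_format

from object_detection.builders import box_predictor_builder
from object_detection.builders import hyperparams_builder
from object_detection.protos import box_predictor_pb2

config = box_predictor_pb2.BoxPredictor()
text_format.Merge("""
  convolutional_box_predictor {
    conv_hyperparams {
      regularizer { l1_regularizer { weight: 0.0003 } }
      initializer { truncated_normal_initializer { mean: 0.0 stddev: 0.3 } }
      activation: RELU_6
    }
    kernel_size: 3
    box_code_size: 4
    num_layers_before_predictor: 1
  }
""", config)

# Returns a convolutional_box_predictor.ConvolutionalBoxPredictor instance.
box_predictor = box_predictor_builder.build(
    argscope_fn=hyperparams_builder.build,
    box_predictor_config=config,
    is_training=True,
    num_classes=90)
```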
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/box_predictor_builder_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/box_predictor_builder_test.py"
new file mode 100644
index 00000000..08029df7
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/box_predictor_builder_test.py"
@@ -0,0 +1,658 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for box_predictor_builder."""
+
+import mock
+import tensorflow as tf
+
+from google.protobuf import text_format
+from object_detection.builders import box_predictor_builder
+from object_detection.builders import hyperparams_builder
+from object_detection.predictors import mask_rcnn_box_predictor
+from object_detection.protos import box_predictor_pb2
+from object_detection.protos import hyperparams_pb2
+
+
+class ConvolutionalBoxPredictorBuilderTest(tf.test.TestCase):
+
+ def test_box_predictor_calls_conv_argscope_fn(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l1_regularizer {
+ weight: 0.0003
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ mean: 0.0
+ stddev: 0.3
+ }
+ }
+ activation: RELU_6
+ """
+ hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, hyperparams_proto)
+ def mock_conv_argscope_builder(conv_hyperparams_arg, is_training):
+ return (conv_hyperparams_arg, is_training)
+
+ box_predictor_proto = box_predictor_pb2.BoxPredictor()
+ box_predictor_proto.convolutional_box_predictor.conv_hyperparams.CopyFrom(
+ hyperparams_proto)
+ box_predictor = box_predictor_builder.build(
+ argscope_fn=mock_conv_argscope_builder,
+ box_predictor_config=box_predictor_proto,
+ is_training=False,
+ num_classes=10)
+ (conv_hyperparams_actual, is_training) = box_predictor._conv_hyperparams_fn
+ self.assertAlmostEqual((hyperparams_proto.regularizer.
+ l1_regularizer.weight),
+ (conv_hyperparams_actual.regularizer.l1_regularizer.
+ weight))
+ self.assertAlmostEqual((hyperparams_proto.initializer.
+ truncated_normal_initializer.stddev),
+ (conv_hyperparams_actual.initializer.
+ truncated_normal_initializer.stddev))
+ self.assertAlmostEqual((hyperparams_proto.initializer.
+ truncated_normal_initializer.mean),
+ (conv_hyperparams_actual.initializer.
+ truncated_normal_initializer.mean))
+ self.assertEqual(hyperparams_proto.activation,
+ conv_hyperparams_actual.activation)
+ self.assertFalse(is_training)
+
+ def test_construct_non_default_conv_box_predictor(self):
+ box_predictor_text_proto = """
+ convolutional_box_predictor {
+ min_depth: 2
+ max_depth: 16
+ num_layers_before_predictor: 2
+ use_dropout: false
+ dropout_keep_probability: 0.4
+ kernel_size: 3
+ box_code_size: 3
+ apply_sigmoid_to_scores: true
+ class_prediction_bias_init: 4.0
+ use_depthwise: true
+ }
+ """
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l1_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ """
+ hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, hyperparams_proto)
+ def mock_conv_argscope_builder(conv_hyperparams_arg, is_training):
+ return (conv_hyperparams_arg, is_training)
+
+ box_predictor_proto = box_predictor_pb2.BoxPredictor()
+ text_format.Merge(box_predictor_text_proto, box_predictor_proto)
+ box_predictor_proto.convolutional_box_predictor.conv_hyperparams.CopyFrom(
+ hyperparams_proto)
+ box_predictor = box_predictor_builder.build(
+ argscope_fn=mock_conv_argscope_builder,
+ box_predictor_config=box_predictor_proto,
+ is_training=False,
+ num_classes=10,
+ add_background_class=False)
+ class_head = box_predictor._class_prediction_head
+ self.assertEqual(box_predictor._min_depth, 2)
+ self.assertEqual(box_predictor._max_depth, 16)
+ self.assertEqual(box_predictor._num_layers_before_predictor, 2)
+ self.assertFalse(class_head._use_dropout)
+ self.assertAlmostEqual(class_head._dropout_keep_prob, 0.4)
+ self.assertTrue(class_head._apply_sigmoid_to_scores)
+ self.assertAlmostEqual(class_head._class_prediction_bias_init, 4.0)
+ self.assertEqual(class_head._num_class_slots, 10)
+ self.assertEqual(box_predictor.num_classes, 10)
+ self.assertFalse(box_predictor._is_training)
+ self.assertTrue(class_head._use_depthwise)
+
+ def test_construct_default_conv_box_predictor(self):
+ box_predictor_text_proto = """
+ convolutional_box_predictor {
+ conv_hyperparams {
+ regularizer {
+ l1_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }"""
+ box_predictor_proto = box_predictor_pb2.BoxPredictor()
+ text_format.Merge(box_predictor_text_proto, box_predictor_proto)
+ box_predictor = box_predictor_builder.build(
+ argscope_fn=hyperparams_builder.build,
+ box_predictor_config=box_predictor_proto,
+ is_training=True,
+ num_classes=90)
+ class_head = box_predictor._class_prediction_head
+ self.assertEqual(box_predictor._min_depth, 0)
+ self.assertEqual(box_predictor._max_depth, 0)
+ self.assertEqual(box_predictor._num_layers_before_predictor, 0)
+ self.assertTrue(class_head._use_dropout)
+ self.assertAlmostEqual(class_head._dropout_keep_prob, 0.8)
+ self.assertFalse(class_head._apply_sigmoid_to_scores)
+ self.assertEqual(class_head._num_class_slots, 91)
+ self.assertEqual(box_predictor.num_classes, 90)
+ self.assertTrue(box_predictor._is_training)
+ self.assertFalse(class_head._use_depthwise)
+
+
+class WeightSharedConvolutionalBoxPredictorBuilderTest(tf.test.TestCase):
+
+ def test_box_predictor_calls_conv_argscope_fn(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l1_regularizer {
+ weight: 0.0003
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ mean: 0.0
+ stddev: 0.3
+ }
+ }
+ activation: RELU_6
+ """
+ hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, hyperparams_proto)
+ def mock_conv_argscope_builder(conv_hyperparams_arg, is_training):
+ return (conv_hyperparams_arg, is_training)
+
+ box_predictor_proto = box_predictor_pb2.BoxPredictor()
+ (box_predictor_proto.weight_shared_convolutional_box_predictor
+ .conv_hyperparams.CopyFrom(hyperparams_proto))
+ box_predictor = box_predictor_builder.build(
+ argscope_fn=mock_conv_argscope_builder,
+ box_predictor_config=box_predictor_proto,
+ is_training=False,
+ num_classes=10)
+ (conv_hyperparams_actual, is_training) = box_predictor._conv_hyperparams_fn
+ self.assertAlmostEqual((hyperparams_proto.regularizer.
+ l1_regularizer.weight),
+ (conv_hyperparams_actual.regularizer.l1_regularizer.
+ weight))
+ self.assertAlmostEqual((hyperparams_proto.initializer.
+ truncated_normal_initializer.stddev),
+ (conv_hyperparams_actual.initializer.
+ truncated_normal_initializer.stddev))
+ self.assertAlmostEqual((hyperparams_proto.initializer.
+ truncated_normal_initializer.mean),
+ (conv_hyperparams_actual.initializer.
+ truncated_normal_initializer.mean))
+ self.assertEqual(hyperparams_proto.activation,
+ conv_hyperparams_actual.activation)
+ self.assertFalse(is_training)
+
+ def test_construct_non_default_conv_box_predictor(self):
+ box_predictor_text_proto = """
+ weight_shared_convolutional_box_predictor {
+ depth: 2
+ num_layers_before_predictor: 2
+ kernel_size: 7
+ box_code_size: 3
+ class_prediction_bias_init: 4.0
+ }
+ """
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l1_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ """
+ hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, hyperparams_proto)
+ def mock_conv_argscope_builder(conv_hyperparams_arg, is_training):
+ return (conv_hyperparams_arg, is_training)
+
+ box_predictor_proto = box_predictor_pb2.BoxPredictor()
+ text_format.Merge(box_predictor_text_proto, box_predictor_proto)
+ (box_predictor_proto.weight_shared_convolutional_box_predictor.
+ conv_hyperparams.CopyFrom(hyperparams_proto))
+ box_predictor = box_predictor_builder.build(
+ argscope_fn=mock_conv_argscope_builder,
+ box_predictor_config=box_predictor_proto,
+ is_training=False,
+ num_classes=10,
+ add_background_class=False)
+ class_head = box_predictor._class_prediction_head
+ self.assertEqual(box_predictor._depth, 2)
+ self.assertEqual(box_predictor._num_layers_before_predictor, 2)
+ self.assertAlmostEqual(class_head._class_prediction_bias_init, 4.0)
+ self.assertEqual(box_predictor.num_classes, 10)
+ self.assertFalse(box_predictor._is_training)
+ self.assertEqual(box_predictor._apply_batch_norm, False)
+
+ def test_construct_non_default_depthwise_conv_box_predictor(self):
+ box_predictor_text_proto = """
+ weight_shared_convolutional_box_predictor {
+ depth: 2
+ num_layers_before_predictor: 2
+ kernel_size: 7
+ box_code_size: 3
+ class_prediction_bias_init: 4.0
+ use_depthwise: true
+ }
+ """
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l1_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ """
+ hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, hyperparams_proto)
+ def mock_conv_argscope_builder(conv_hyperparams_arg, is_training):
+ return (conv_hyperparams_arg, is_training)
+
+ box_predictor_proto = box_predictor_pb2.BoxPredictor()
+ text_format.Merge(box_predictor_text_proto, box_predictor_proto)
+ (box_predictor_proto.weight_shared_convolutional_box_predictor.
+ conv_hyperparams.CopyFrom(hyperparams_proto))
+ box_predictor = box_predictor_builder.build(
+ argscope_fn=mock_conv_argscope_builder,
+ box_predictor_config=box_predictor_proto,
+ is_training=False,
+ num_classes=10,
+ add_background_class=False)
+ class_head = box_predictor._class_prediction_head
+ self.assertEqual(box_predictor._depth, 2)
+ self.assertEqual(box_predictor._num_layers_before_predictor, 2)
+ self.assertEqual(box_predictor._apply_batch_norm, False)
+ self.assertEqual(box_predictor._use_depthwise, True)
+ self.assertAlmostEqual(class_head._class_prediction_bias_init, 4.0)
+ self.assertEqual(box_predictor.num_classes, 10)
+ self.assertFalse(box_predictor._is_training)
+
+ def test_construct_default_conv_box_predictor(self):
+ box_predictor_text_proto = """
+ weight_shared_convolutional_box_predictor {
+ conv_hyperparams {
+ regularizer {
+ l1_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }"""
+ box_predictor_proto = box_predictor_pb2.BoxPredictor()
+ text_format.Merge(box_predictor_text_proto, box_predictor_proto)
+ box_predictor = box_predictor_builder.build(
+ argscope_fn=hyperparams_builder.build,
+ box_predictor_config=box_predictor_proto,
+ is_training=True,
+ num_classes=90)
+ self.assertEqual(box_predictor._depth, 0)
+ self.assertEqual(box_predictor._num_layers_before_predictor, 0)
+ self.assertEqual(box_predictor.num_classes, 90)
+ self.assertTrue(box_predictor._is_training)
+ self.assertEqual(box_predictor._apply_batch_norm, False)
+
+ def test_construct_default_conv_box_predictor_with_batch_norm(self):
+ box_predictor_text_proto = """
+ weight_shared_convolutional_box_predictor {
+ conv_hyperparams {
+ regularizer {
+ l1_regularizer {
+ }
+ }
+ batch_norm {
+ train: true
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }"""
+ box_predictor_proto = box_predictor_pb2.BoxPredictor()
+ text_format.Merge(box_predictor_text_proto, box_predictor_proto)
+ box_predictor = box_predictor_builder.build(
+ argscope_fn=hyperparams_builder.build,
+ box_predictor_config=box_predictor_proto,
+ is_training=True,
+ num_classes=90)
+ self.assertEqual(box_predictor._depth, 0)
+ self.assertEqual(box_predictor._num_layers_before_predictor, 0)
+ self.assertEqual(box_predictor.num_classes, 90)
+ self.assertTrue(box_predictor._is_training)
+ self.assertEqual(box_predictor._apply_batch_norm, True)
+
+
+class MaskRCNNBoxPredictorBuilderTest(tf.test.TestCase):
+
+ def test_box_predictor_builder_calls_fc_argscope_fn(self):
+ fc_hyperparams_text_proto = """
+ regularizer {
+ l1_regularizer {
+ weight: 0.0003
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ mean: 0.0
+ stddev: 0.3
+ }
+ }
+ activation: RELU_6
+ op: FC
+ """
+ hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(fc_hyperparams_text_proto, hyperparams_proto)
+ box_predictor_proto = box_predictor_pb2.BoxPredictor()
+ box_predictor_proto.mask_rcnn_box_predictor.fc_hyperparams.CopyFrom(
+ hyperparams_proto)
+ mock_argscope_fn = mock.Mock(return_value='arg_scope')
+ box_predictor = box_predictor_builder.build(
+ argscope_fn=mock_argscope_fn,
+ box_predictor_config=box_predictor_proto,
+ is_training=False,
+ num_classes=10)
+ mock_argscope_fn.assert_called_with(hyperparams_proto, False)
+ self.assertEqual(box_predictor._box_prediction_head._fc_hyperparams_fn,
+ 'arg_scope')
+ self.assertEqual(box_predictor._class_prediction_head._fc_hyperparams_fn,
+ 'arg_scope')
+
+ def test_non_default_mask_rcnn_box_predictor(self):
+ fc_hyperparams_text_proto = """
+ regularizer {
+ l1_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ activation: RELU_6
+ op: FC
+ """
+ box_predictor_text_proto = """
+ mask_rcnn_box_predictor {
+ use_dropout: true
+ dropout_keep_probability: 0.8
+ box_code_size: 3
+ share_box_across_classes: true
+ }
+ """
+ hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(fc_hyperparams_text_proto, hyperparams_proto)
+ def mock_fc_argscope_builder(fc_hyperparams_arg, is_training):
+ return (fc_hyperparams_arg, is_training)
+
+ box_predictor_proto = box_predictor_pb2.BoxPredictor()
+ text_format.Merge(box_predictor_text_proto, box_predictor_proto)
+ box_predictor_proto.mask_rcnn_box_predictor.fc_hyperparams.CopyFrom(
+ hyperparams_proto)
+ box_predictor = box_predictor_builder.build(
+ argscope_fn=mock_fc_argscope_builder,
+ box_predictor_config=box_predictor_proto,
+ is_training=True,
+ num_classes=90)
+ box_head = box_predictor._box_prediction_head
+ class_head = box_predictor._class_prediction_head
+ self.assertTrue(box_head._use_dropout)
+ self.assertTrue(class_head._use_dropout)
+ self.assertAlmostEqual(box_head._dropout_keep_prob, 0.8)
+ self.assertAlmostEqual(class_head._dropout_keep_prob, 0.8)
+ self.assertEqual(box_predictor.num_classes, 90)
+ self.assertTrue(box_predictor._is_training)
+ self.assertEqual(box_head._box_code_size, 3)
+ self.assertEqual(box_head._share_box_across_classes, True)
+
+ def test_build_default_mask_rcnn_box_predictor(self):
+ box_predictor_proto = box_predictor_pb2.BoxPredictor()
+ box_predictor_proto.mask_rcnn_box_predictor.fc_hyperparams.op = (
+ hyperparams_pb2.Hyperparams.FC)
+ box_predictor = box_predictor_builder.build(
+ argscope_fn=mock.Mock(return_value='arg_scope'),
+ box_predictor_config=box_predictor_proto,
+ is_training=True,
+ num_classes=90)
+ box_head = box_predictor._box_prediction_head
+ class_head = box_predictor._class_prediction_head
+ self.assertFalse(box_head._use_dropout)
+ self.assertFalse(class_head._use_dropout)
+ self.assertAlmostEqual(box_head._dropout_keep_prob, 0.5)
+ self.assertEqual(box_predictor.num_classes, 90)
+ self.assertTrue(box_predictor._is_training)
+ self.assertEqual(box_head._box_code_size, 4)
+ self.assertEqual(len(box_predictor._third_stage_heads.keys()), 0)
+
+ def test_build_box_predictor_with_mask_branch(self):
+ box_predictor_proto = box_predictor_pb2.BoxPredictor()
+ box_predictor_proto.mask_rcnn_box_predictor.fc_hyperparams.op = (
+ hyperparams_pb2.Hyperparams.FC)
+ box_predictor_proto.mask_rcnn_box_predictor.conv_hyperparams.op = (
+ hyperparams_pb2.Hyperparams.CONV)
+ box_predictor_proto.mask_rcnn_box_predictor.predict_instance_masks = True
+ box_predictor_proto.mask_rcnn_box_predictor.mask_prediction_conv_depth = 512
+ box_predictor_proto.mask_rcnn_box_predictor.mask_height = 16
+ box_predictor_proto.mask_rcnn_box_predictor.mask_width = 16
+ mock_argscope_fn = mock.Mock(return_value='arg_scope')
+ box_predictor = box_predictor_builder.build(
+ argscope_fn=mock_argscope_fn,
+ box_predictor_config=box_predictor_proto,
+ is_training=True,
+ num_classes=90)
+ mock_argscope_fn.assert_has_calls(
+ [mock.call(box_predictor_proto.mask_rcnn_box_predictor.fc_hyperparams,
+ True),
+ mock.call(box_predictor_proto.mask_rcnn_box_predictor.conv_hyperparams,
+ True)], any_order=True)
+ box_head = box_predictor._box_prediction_head
+ class_head = box_predictor._class_prediction_head
+ third_stage_heads = box_predictor._third_stage_heads
+ self.assertFalse(box_head._use_dropout)
+ self.assertFalse(class_head._use_dropout)
+ self.assertAlmostEqual(box_head._dropout_keep_prob, 0.5)
+ self.assertAlmostEqual(class_head._dropout_keep_prob, 0.5)
+ self.assertEqual(box_predictor.num_classes, 90)
+ self.assertTrue(box_predictor._is_training)
+ self.assertEqual(box_head._box_code_size, 4)
+ self.assertTrue(
+ mask_rcnn_box_predictor.MASK_PREDICTIONS in third_stage_heads)
+ self.assertEqual(
+ third_stage_heads[mask_rcnn_box_predictor.MASK_PREDICTIONS]
+ ._mask_prediction_conv_depth, 512)
+
+  def test_build_box_predictor_with_convolve_then_upsample_masks(self):
+ box_predictor_proto = box_predictor_pb2.BoxPredictor()
+ box_predictor_proto.mask_rcnn_box_predictor.fc_hyperparams.op = (
+ hyperparams_pb2.Hyperparams.FC)
+ box_predictor_proto.mask_rcnn_box_predictor.conv_hyperparams.op = (
+ hyperparams_pb2.Hyperparams.CONV)
+ box_predictor_proto.mask_rcnn_box_predictor.predict_instance_masks = True
+ box_predictor_proto.mask_rcnn_box_predictor.mask_prediction_conv_depth = 512
+ box_predictor_proto.mask_rcnn_box_predictor.mask_height = 24
+ box_predictor_proto.mask_rcnn_box_predictor.mask_width = 24
+ box_predictor_proto.mask_rcnn_box_predictor.convolve_then_upsample_masks = (
+ True)
+
+ mock_argscope_fn = mock.Mock(return_value='arg_scope')
+ box_predictor = box_predictor_builder.build(
+ argscope_fn=mock_argscope_fn,
+ box_predictor_config=box_predictor_proto,
+ is_training=True,
+ num_classes=90)
+ mock_argscope_fn.assert_has_calls(
+ [mock.call(box_predictor_proto.mask_rcnn_box_predictor.fc_hyperparams,
+ True),
+ mock.call(box_predictor_proto.mask_rcnn_box_predictor.conv_hyperparams,
+ True)], any_order=True)
+ box_head = box_predictor._box_prediction_head
+ class_head = box_predictor._class_prediction_head
+ third_stage_heads = box_predictor._third_stage_heads
+ self.assertFalse(box_head._use_dropout)
+ self.assertFalse(class_head._use_dropout)
+ self.assertAlmostEqual(box_head._dropout_keep_prob, 0.5)
+ self.assertAlmostEqual(class_head._dropout_keep_prob, 0.5)
+ self.assertEqual(box_predictor.num_classes, 90)
+ self.assertTrue(box_predictor._is_training)
+ self.assertEqual(box_head._box_code_size, 4)
+ self.assertTrue(
+ mask_rcnn_box_predictor.MASK_PREDICTIONS in third_stage_heads)
+ self.assertEqual(
+ third_stage_heads[mask_rcnn_box_predictor.MASK_PREDICTIONS]
+ ._mask_prediction_conv_depth, 512)
+ self.assertTrue(third_stage_heads[mask_rcnn_box_predictor.MASK_PREDICTIONS]
+ ._convolve_then_upsample)
+
+
+class RfcnBoxPredictorBuilderTest(tf.test.TestCase):
+
+ def test_box_predictor_calls_fc_argscope_fn(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l1_regularizer {
+ weight: 0.0003
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ mean: 0.0
+ stddev: 0.3
+ }
+ }
+ activation: RELU_6
+ """
+ hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, hyperparams_proto)
+ def mock_conv_argscope_builder(conv_hyperparams_arg, is_training):
+ return (conv_hyperparams_arg, is_training)
+
+ box_predictor_proto = box_predictor_pb2.BoxPredictor()
+ box_predictor_proto.rfcn_box_predictor.conv_hyperparams.CopyFrom(
+ hyperparams_proto)
+ box_predictor = box_predictor_builder.build(
+ argscope_fn=mock_conv_argscope_builder,
+ box_predictor_config=box_predictor_proto,
+ is_training=False,
+ num_classes=10)
+ (conv_hyperparams_actual, is_training) = box_predictor._conv_hyperparams_fn
+ self.assertAlmostEqual((hyperparams_proto.regularizer.
+ l1_regularizer.weight),
+ (conv_hyperparams_actual.regularizer.l1_regularizer.
+ weight))
+ self.assertAlmostEqual((hyperparams_proto.initializer.
+ truncated_normal_initializer.stddev),
+ (conv_hyperparams_actual.initializer.
+ truncated_normal_initializer.stddev))
+ self.assertAlmostEqual((hyperparams_proto.initializer.
+ truncated_normal_initializer.mean),
+ (conv_hyperparams_actual.initializer.
+ truncated_normal_initializer.mean))
+ self.assertEqual(hyperparams_proto.activation,
+ conv_hyperparams_actual.activation)
+ self.assertFalse(is_training)
+
+ def test_non_default_rfcn_box_predictor(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l1_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ activation: RELU_6
+ """
+ box_predictor_text_proto = """
+ rfcn_box_predictor {
+ num_spatial_bins_height: 4
+ num_spatial_bins_width: 4
+ depth: 4
+ box_code_size: 3
+ crop_height: 16
+ crop_width: 16
+ }
+ """
+ hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, hyperparams_proto)
+ def mock_conv_argscope_builder(conv_hyperparams_arg, is_training):
+ return (conv_hyperparams_arg, is_training)
+
+ box_predictor_proto = box_predictor_pb2.BoxPredictor()
+ text_format.Merge(box_predictor_text_proto, box_predictor_proto)
+ box_predictor_proto.rfcn_box_predictor.conv_hyperparams.CopyFrom(
+ hyperparams_proto)
+ box_predictor = box_predictor_builder.build(
+ argscope_fn=mock_conv_argscope_builder,
+ box_predictor_config=box_predictor_proto,
+ is_training=True,
+ num_classes=90)
+ self.assertEqual(box_predictor.num_classes, 90)
+ self.assertTrue(box_predictor._is_training)
+ self.assertEqual(box_predictor._box_code_size, 3)
+ self.assertEqual(box_predictor._num_spatial_bins, [4, 4])
+ self.assertEqual(box_predictor._crop_size, [16, 16])
+
+ def test_default_rfcn_box_predictor(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l1_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ activation: RELU_6
+ """
+ hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, hyperparams_proto)
+ def mock_conv_argscope_builder(conv_hyperparams_arg, is_training):
+ return (conv_hyperparams_arg, is_training)
+
+ box_predictor_proto = box_predictor_pb2.BoxPredictor()
+ box_predictor_proto.rfcn_box_predictor.conv_hyperparams.CopyFrom(
+ hyperparams_proto)
+ box_predictor = box_predictor_builder.build(
+ argscope_fn=mock_conv_argscope_builder,
+ box_predictor_config=box_predictor_proto,
+ is_training=True,
+ num_classes=90)
+ self.assertEqual(box_predictor.num_classes, 90)
+ self.assertTrue(box_predictor._is_training)
+ self.assertEqual(box_predictor._box_code_size, 4)
+ self.assertEqual(box_predictor._num_spatial_bins, [3, 3])
+ self.assertEqual(box_predictor._crop_size, [12, 12])
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/dataset_builder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/dataset_builder.py"
new file mode 100644
index 00000000..b46d5f7f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/dataset_builder.py"
@@ -0,0 +1,152 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""tf.data.Dataset builder.
+
+Creates data sources for DetectionModels from an InputReader config. See
+input_reader.proto for options.
+
+Note: If users wish to also use their own InputReaders with the Object
+Detection configuration framework, they should define their own builder function
+that wraps the build function.
+"""
+import functools
+import tensorflow as tf
+
+from object_detection.data_decoders import tf_example_decoder
+from object_detection.protos import input_reader_pb2
+
+
+def make_initializable_iterator(dataset):
+ """Creates an iterator, and initializes tables.
+
+ This is useful in cases where make_one_shot_iterator wouldn't work because
+ the graph contains a hash table that needs to be initialized.
+
+ Args:
+ dataset: A `tf.data.Dataset` object.
+
+ Returns:
+ A `tf.data.Iterator`.
+ """
+ iterator = dataset.make_initializable_iterator()
+ tf.add_to_collection(tf.GraphKeys.TABLE_INITIALIZERS, iterator.initializer)
+ return iterator
+
+
+def read_dataset(file_read_func, input_files, config):
+ """Reads a dataset, and handles repetition and shuffling.
+
+ Args:
+ file_read_func: Function to use in tf.contrib.data.parallel_interleave, to
+ read every individual file into a tf.data.Dataset.
+ input_files: A list of file paths to read.
+    config: An input_reader_pb2.InputReader object.
+
+ Returns:
+ A tf.data.Dataset of (undecoded) tf-records based on config.
+ """
+ # Shard, shuffle, and read files.
+ filenames = tf.gfile.Glob(input_files)
+ num_readers = config.num_readers
+ if num_readers > len(filenames):
+ num_readers = len(filenames)
+ tf.logging.warning('num_readers has been reduced to %d to match input file '
+ 'shards.' % num_readers)
+ filename_dataset = tf.data.Dataset.from_tensor_slices(filenames)
+ if config.shuffle:
+ filename_dataset = filename_dataset.shuffle(
+ config.filenames_shuffle_buffer_size)
+ elif num_readers > 1:
+ tf.logging.warning('`shuffle` is false, but the input data stream is '
+ 'still slightly shuffled since `num_readers` > 1.')
+ filename_dataset = filename_dataset.repeat(config.num_epochs or None)
+ records_dataset = filename_dataset.apply(
+ tf.contrib.data.parallel_interleave(
+ file_read_func,
+ cycle_length=num_readers,
+ block_length=config.read_block_length,
+ sloppy=config.shuffle))
+ if config.shuffle:
+ records_dataset = records_dataset.shuffle(config.shuffle_buffer_size)
+ return records_dataset
+
+
+def build(input_reader_config, batch_size=None, transform_input_data_fn=None):
+ """Builds a tf.data.Dataset.
+
+  Builds a tf.data.Dataset by applying the `transform_input_data_fn` on all
+  records. If `batch_size` is given, batches the resulting dataset and drops
+  any remainder.
+
+ Args:
+    input_reader_config: An input_reader_pb2.InputReader object.
+ batch_size: Batch size. If batch size is None, no batching is performed.
+ transform_input_data_fn: Function to apply transformation to all records,
+ or None if no extra decoding is required.
+
+ Returns:
+ A tf.data.Dataset based on the input_reader_config.
+
+ Raises:
+ ValueError: On invalid input reader proto.
+ ValueError: If no input paths are specified.
+ """
+ if not isinstance(input_reader_config, input_reader_pb2.InputReader):
+ raise ValueError('input_reader_config not of type '
+ 'input_reader_pb2.InputReader.')
+
+ if input_reader_config.WhichOneof('input_reader') == 'tf_record_input_reader':
+ config = input_reader_config.tf_record_input_reader
+ if not config.input_path:
+ raise ValueError('At least one input path must be specified in '
+ '`input_reader_config`.')
+
+ label_map_proto_file = None
+ if input_reader_config.HasField('label_map_path'):
+ label_map_proto_file = input_reader_config.label_map_path
+ decoder = tf_example_decoder.TfExampleDecoder(
+ load_instance_masks=input_reader_config.load_instance_masks,
+ instance_mask_type=input_reader_config.mask_type,
+ label_map_proto_file=label_map_proto_file,
+ use_display_name=input_reader_config.use_display_name,
+ num_additional_channels=input_reader_config.num_additional_channels)
+
+ def process_fn(value):
+ """Sets up tf graph that decodes, transforms and pads input data."""
+ processed_tensors = decoder.decode(value)
+ if transform_input_data_fn is not None:
+ processed_tensors = transform_input_data_fn(processed_tensors)
+ return processed_tensors
+
+ dataset = read_dataset(
+ functools.partial(tf.data.TFRecordDataset, buffer_size=8 * 1000 * 1000),
+ config.input_path[:], input_reader_config)
+ if input_reader_config.sample_1_of_n_examples > 1:
+ dataset = dataset.shard(input_reader_config.sample_1_of_n_examples, 0)
+ # TODO(rathodv): make batch size a required argument once the old binaries
+ # are deleted.
+ if batch_size:
+ num_parallel_calls = batch_size * input_reader_config.num_parallel_batches
+ else:
+ num_parallel_calls = input_reader_config.num_parallel_map_calls
+ dataset = dataset.map(
+ process_fn,
+ num_parallel_calls=num_parallel_calls)
+ if batch_size:
+ dataset = dataset.apply(
+ tf.contrib.data.batch_and_drop_remainder(batch_size))
+ dataset = dataset.prefetch(input_reader_config.num_prefetch_batches)
+ return dataset
+
+ raise ValueError('Unsupported input_reader_config.')
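+
+
+# Illustrative usage sketch (an editorial addition, not part of the upstream
+# file). It mirrors the accompanying unit tests: a TFRecord-backed InputReader
+# proto is parsed from text format, turned into a dataset via `build()`, and
+# consumed through `make_initializable_iterator()`. The record path below is a
+# placeholder and must point to existing TFRecord files for the pipeline to
+# run. Guarded so it never executes on import.
+if __name__ == '__main__':
+  from google.protobuf import text_format
+
+  example_input_reader = input_reader_pb2.InputReader()
+  text_format.Merge("""
+      shuffle: false
+      num_readers: 1
+      tf_record_input_reader {
+        input_path: '/tmp/example.tfrecord'
+      }""", example_input_reader)
+  example_tensors = make_initializable_iterator(
+      build(example_input_reader, batch_size=1)).get_next()
+  # The returned dict of tensors can then be evaluated, e.g. inside a
+  # tf.train.MonitoredSession(), as in dataset_builder_test.py.
+  print(sorted(example_tensors.keys()))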
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/dataset_builder_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/dataset_builder_test.py"
new file mode 100644
index 00000000..78677310
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/dataset_builder_test.py"
@@ -0,0 +1,356 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for dataset_builder."""
+
+import os
+import numpy as np
+import tensorflow as tf
+
+from google.protobuf import text_format
+
+from object_detection.builders import dataset_builder
+from object_detection.core import standard_fields as fields
+from object_detection.protos import input_reader_pb2
+from object_detection.utils import dataset_util
+
+
+class DatasetBuilderTest(tf.test.TestCase):
+
+ def create_tf_record(self, has_additional_channels=False, num_examples=1):
+ path = os.path.join(self.get_temp_dir(), 'tfrecord')
+ writer = tf.python_io.TFRecordWriter(path)
+
+ image_tensor = np.random.randint(255, size=(4, 5, 3)).astype(np.uint8)
+ additional_channels_tensor = np.random.randint(
+ 255, size=(4, 5, 1)).astype(np.uint8)
+ flat_mask = (4 * 5) * [1.0]
+ with self.test_session():
+ encoded_jpeg = tf.image.encode_jpeg(tf.constant(image_tensor)).eval()
+ encoded_additional_channels_jpeg = tf.image.encode_jpeg(
+ tf.constant(additional_channels_tensor)).eval()
+ for i in range(num_examples):
+ features = {
+ 'image/source_id': dataset_util.bytes_feature(str(i)),
+ 'image/encoded': dataset_util.bytes_feature(encoded_jpeg),
+ 'image/format': dataset_util.bytes_feature('jpeg'.encode('utf8')),
+ 'image/height': dataset_util.int64_feature(4),
+ 'image/width': dataset_util.int64_feature(5),
+ 'image/object/bbox/xmin': dataset_util.float_list_feature([0.0]),
+ 'image/object/bbox/xmax': dataset_util.float_list_feature([1.0]),
+ 'image/object/bbox/ymin': dataset_util.float_list_feature([0.0]),
+ 'image/object/bbox/ymax': dataset_util.float_list_feature([1.0]),
+ 'image/object/class/label': dataset_util.int64_list_feature([2]),
+ 'image/object/mask': dataset_util.float_list_feature(flat_mask),
+ }
+ if has_additional_channels:
+ additional_channels_key = 'image/additional_channels/encoded'
+ features[additional_channels_key] = dataset_util.bytes_list_feature(
+ [encoded_additional_channels_jpeg] * 2)
+ example = tf.train.Example(features=tf.train.Features(feature=features))
+ writer.write(example.SerializeToString())
+ writer.close()
+
+ return path
+
+ def test_build_tf_record_input_reader(self):
+ tf_record_path = self.create_tf_record()
+
+ input_reader_text_proto = """
+ shuffle: false
+ num_readers: 1
+ tf_record_input_reader {{
+ input_path: '{0}'
+ }}
+ """.format(tf_record_path)
+ input_reader_proto = input_reader_pb2.InputReader()
+ text_format.Merge(input_reader_text_proto, input_reader_proto)
+ tensor_dict = dataset_builder.make_initializable_iterator(
+ dataset_builder.build(input_reader_proto, batch_size=1)).get_next()
+
+ with tf.train.MonitoredSession() as sess:
+ output_dict = sess.run(tensor_dict)
+
+ self.assertTrue(
+ fields.InputDataFields.groundtruth_instance_masks not in output_dict)
+ self.assertEquals((1, 4, 5, 3),
+ output_dict[fields.InputDataFields.image].shape)
+ self.assertAllEqual([[2]],
+ output_dict[fields.InputDataFields.groundtruth_classes])
+ self.assertEquals(
+ (1, 1, 4), output_dict[fields.InputDataFields.groundtruth_boxes].shape)
+ self.assertAllEqual(
+ [0.0, 0.0, 1.0, 1.0],
+ output_dict[fields.InputDataFields.groundtruth_boxes][0][0])
+
+ def test_build_tf_record_input_reader_and_load_instance_masks(self):
+ tf_record_path = self.create_tf_record()
+
+ input_reader_text_proto = """
+ shuffle: false
+ num_readers: 1
+ load_instance_masks: true
+ tf_record_input_reader {{
+ input_path: '{0}'
+ }}
+ """.format(tf_record_path)
+ input_reader_proto = input_reader_pb2.InputReader()
+ text_format.Merge(input_reader_text_proto, input_reader_proto)
+ tensor_dict = dataset_builder.make_initializable_iterator(
+ dataset_builder.build(input_reader_proto, batch_size=1)).get_next()
+
+ with tf.train.MonitoredSession() as sess:
+ output_dict = sess.run(tensor_dict)
+ self.assertAllEqual(
+ (1, 1, 4, 5),
+ output_dict[fields.InputDataFields.groundtruth_instance_masks].shape)
+
+ def test_build_tf_record_input_reader_with_batch_size_two(self):
+ tf_record_path = self.create_tf_record()
+
+ input_reader_text_proto = """
+ shuffle: false
+ num_readers: 1
+ tf_record_input_reader {{
+ input_path: '{0}'
+ }}
+ """.format(tf_record_path)
+ input_reader_proto = input_reader_pb2.InputReader()
+ text_format.Merge(input_reader_text_proto, input_reader_proto)
+
+ def one_hot_class_encoding_fn(tensor_dict):
+ tensor_dict[fields.InputDataFields.groundtruth_classes] = tf.one_hot(
+ tensor_dict[fields.InputDataFields.groundtruth_classes] - 1, depth=3)
+ return tensor_dict
+
+ tensor_dict = dataset_builder.make_initializable_iterator(
+ dataset_builder.build(
+ input_reader_proto,
+ transform_input_data_fn=one_hot_class_encoding_fn,
+ batch_size=2)).get_next()
+
+ with tf.train.MonitoredSession() as sess:
+ output_dict = sess.run(tensor_dict)
+
+ self.assertAllEqual([2, 4, 5, 3],
+ output_dict[fields.InputDataFields.image].shape)
+ self.assertAllEqual(
+ [2, 1, 3],
+ output_dict[fields.InputDataFields.groundtruth_classes].shape)
+ self.assertAllEqual(
+ [2, 1, 4], output_dict[fields.InputDataFields.groundtruth_boxes].shape)
+ self.assertAllEqual([[[0.0, 0.0, 1.0, 1.0]], [[0.0, 0.0, 1.0, 1.0]]],
+ output_dict[fields.InputDataFields.groundtruth_boxes])
+
+ def test_build_tf_record_input_reader_with_batch_size_two_and_masks(self):
+ tf_record_path = self.create_tf_record()
+
+ input_reader_text_proto = """
+ shuffle: false
+ num_readers: 1
+ load_instance_masks: true
+ tf_record_input_reader {{
+ input_path: '{0}'
+ }}
+ """.format(tf_record_path)
+ input_reader_proto = input_reader_pb2.InputReader()
+ text_format.Merge(input_reader_text_proto, input_reader_proto)
+
+ def one_hot_class_encoding_fn(tensor_dict):
+ tensor_dict[fields.InputDataFields.groundtruth_classes] = tf.one_hot(
+ tensor_dict[fields.InputDataFields.groundtruth_classes] - 1, depth=3)
+ return tensor_dict
+
+ tensor_dict = dataset_builder.make_initializable_iterator(
+ dataset_builder.build(
+ input_reader_proto,
+ transform_input_data_fn=one_hot_class_encoding_fn,
+ batch_size=2)).get_next()
+
+ with tf.train.MonitoredSession() as sess:
+ output_dict = sess.run(tensor_dict)
+
+ self.assertAllEqual(
+ [2, 1, 4, 5],
+ output_dict[fields.InputDataFields.groundtruth_instance_masks].shape)
+
+ def test_raises_error_with_no_input_paths(self):
+ input_reader_text_proto = """
+ shuffle: false
+ num_readers: 1
+ load_instance_masks: true
+ """
+ input_reader_proto = input_reader_pb2.InputReader()
+ text_format.Merge(input_reader_text_proto, input_reader_proto)
+ with self.assertRaises(ValueError):
+ dataset_builder.build(input_reader_proto, batch_size=1)
+
+ def test_sample_all_data(self):
+ tf_record_path = self.create_tf_record(num_examples=2)
+
+ input_reader_text_proto = """
+ shuffle: false
+ num_readers: 1
+ sample_1_of_n_examples: 1
+ tf_record_input_reader {{
+ input_path: '{0}'
+ }}
+ """.format(tf_record_path)
+ input_reader_proto = input_reader_pb2.InputReader()
+ text_format.Merge(input_reader_text_proto, input_reader_proto)
+ tensor_dict = dataset_builder.make_initializable_iterator(
+ dataset_builder.build(input_reader_proto, batch_size=1)).get_next()
+
+ with tf.train.MonitoredSession() as sess:
+ output_dict = sess.run(tensor_dict)
+ self.assertAllEqual(['0'], output_dict[fields.InputDataFields.source_id])
+ output_dict = sess.run(tensor_dict)
+ self.assertEquals(['1'], output_dict[fields.InputDataFields.source_id])
+
+ def test_sample_one_of_n_shards(self):
+ tf_record_path = self.create_tf_record(num_examples=4)
+
+ input_reader_text_proto = """
+ shuffle: false
+ num_readers: 1
+ sample_1_of_n_examples: 2
+ tf_record_input_reader {{
+ input_path: '{0}'
+ }}
+ """.format(tf_record_path)
+ input_reader_proto = input_reader_pb2.InputReader()
+ text_format.Merge(input_reader_text_proto, input_reader_proto)
+ tensor_dict = dataset_builder.make_initializable_iterator(
+ dataset_builder.build(input_reader_proto, batch_size=1)).get_next()
+
+ with tf.train.MonitoredSession() as sess:
+ output_dict = sess.run(tensor_dict)
+ self.assertAllEqual(['0'], output_dict[fields.InputDataFields.source_id])
+ output_dict = sess.run(tensor_dict)
+ self.assertEquals(['2'], output_dict[fields.InputDataFields.source_id])
+
+
+class ReadDatasetTest(tf.test.TestCase):
+
+ def setUp(self):
+ self._path_template = os.path.join(self.get_temp_dir(), 'examples_%s.txt')
+
+ for i in range(5):
+ path = self._path_template % i
+ with tf.gfile.Open(path, 'wb') as f:
+ f.write('\n'.join([str(i + 1), str((i + 1) * 10)]))
+
+ self._shuffle_path_template = os.path.join(self.get_temp_dir(),
+ 'shuffle_%s.txt')
+ for i in range(2):
+ path = self._shuffle_path_template % i
+ with tf.gfile.Open(path, 'wb') as f:
+ f.write('\n'.join([str(i)] * 5))
+
+ def _get_dataset_next(self, files, config, batch_size):
+
+ def decode_func(value):
+ return [tf.string_to_number(value, out_type=tf.int32)]
+
+ dataset = dataset_builder.read_dataset(tf.data.TextLineDataset, files,
+ config)
+ dataset = dataset.map(decode_func)
+ dataset = dataset.batch(batch_size)
+ return dataset.make_one_shot_iterator().get_next()
+
+ def test_make_initializable_iterator_with_hashTable(self):
+ keys = [1, 0, -1]
+ dataset = tf.data.Dataset.from_tensor_slices([[1, 2, -1, 5]])
+ table = tf.contrib.lookup.HashTable(
+ initializer=tf.contrib.lookup.KeyValueTensorInitializer(
+ keys=keys, values=list(reversed(keys))),
+ default_value=100)
+ dataset = dataset.map(table.lookup)
+ data = dataset_builder.make_initializable_iterator(dataset).get_next()
+ init = tf.tables_initializer()
+
+ with self.test_session() as sess:
+ sess.run(init)
+ self.assertAllEqual(sess.run(data), [-1, 100, 1, 100])
+
+ def test_read_dataset(self):
+ config = input_reader_pb2.InputReader()
+ config.num_readers = 1
+ config.shuffle = False
+
+ data = self._get_dataset_next(
+ [self._path_template % '*'], config, batch_size=20)
+ with self.test_session() as sess:
+ self.assertAllEqual(
+ sess.run(data), [[
+ 1, 10, 2, 20, 3, 30, 4, 40, 5, 50, 1, 10, 2, 20, 3, 30, 4, 40, 5,
+ 50
+ ]])
+
+ def test_reduce_num_reader(self):
+ config = input_reader_pb2.InputReader()
+ config.num_readers = 10
+ config.shuffle = False
+
+ data = self._get_dataset_next(
+ [self._path_template % '*'], config, batch_size=20)
+ with self.test_session() as sess:
+ self.assertAllEqual(
+ sess.run(data), [[
+ 1, 10, 2, 20, 3, 30, 4, 40, 5, 50, 1, 10, 2, 20, 3, 30, 4, 40, 5,
+ 50
+ ]])
+
+ def test_enable_shuffle(self):
+ config = input_reader_pb2.InputReader()
+ config.num_readers = 1
+ config.shuffle = True
+
+ tf.set_random_seed(1) # Set graph level seed.
+ data = self._get_dataset_next(
+ [self._shuffle_path_template % '*'], config, batch_size=10)
+ expected_non_shuffle_output = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
+
+ with self.test_session() as sess:
+ self.assertTrue(
+ np.any(np.not_equal(sess.run(data), expected_non_shuffle_output)))
+
+  def test_disable_shuffle(self):
+ config = input_reader_pb2.InputReader()
+ config.num_readers = 1
+ config.shuffle = False
+
+ data = self._get_dataset_next(
+ [self._shuffle_path_template % '*'], config, batch_size=10)
+ expected_non_shuffle_output = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
+
+ with self.test_session() as sess:
+ self.assertAllEqual(sess.run(data), [expected_non_shuffle_output])
+
+ def test_read_dataset_single_epoch(self):
+ config = input_reader_pb2.InputReader()
+ config.num_epochs = 1
+ config.num_readers = 1
+ config.shuffle = False
+
+ data = self._get_dataset_next(
+ [self._path_template % '0'], config, batch_size=30)
+ with self.test_session() as sess:
+ # First batch will retrieve as much as it can, second batch will fail.
+ self.assertAllEqual(sess.run(data), [[1, 10]])
+ self.assertRaises(tf.errors.OutOfRangeError, sess.run, data)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/graph_rewriter_builder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/graph_rewriter_builder.py"
new file mode 100644
index 00000000..77e60479
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/graph_rewriter_builder.py"
@@ -0,0 +1,42 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Functions for quantized training and evaluation."""
+
+import tensorflow as tf
+
+
+def build(graph_rewriter_config, is_training):
+ """Returns a function that modifies default graph based on options.
+
+ Args:
+ graph_rewriter_config: graph_rewriter_pb2.GraphRewriter proto.
+    is_training: Whether in training or eval mode.
+ """
+ def graph_rewrite_fn():
+ """Function to quantize weights and activation of the default graph."""
+ if (graph_rewriter_config.quantization.weight_bits != 8 or
+ graph_rewriter_config.quantization.activation_bits != 8):
+ raise ValueError('Only 8bit quantization is supported')
+
+ # Quantize the graph by inserting quantize ops for weights and activations
+ if is_training:
+ tf.contrib.quantize.create_training_graph(
+ input_graph=tf.get_default_graph(),
+ quant_delay=graph_rewriter_config.quantization.delay)
+ else:
+ tf.contrib.quantize.create_eval_graph(input_graph=tf.get_default_graph())
+
+ tf.contrib.layers.summarize_collection('quant_vars')
+ return graph_rewrite_fn
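+
+
+# Illustrative usage sketch (an editorial addition, not part of the upstream
+# file), mirroring the accompanying unit tests: configure 8-bit quantization,
+# build the rewrite function, then call it to insert quantization ops into the
+# default graph. Guarded so it never executes on import.
+if __name__ == '__main__':
+  from object_detection.protos import graph_rewriter_pb2
+
+  example_config = graph_rewriter_pb2.GraphRewriter()
+  example_config.quantization.delay = 10
+  example_config.quantization.weight_bits = 8
+  example_config.quantization.activation_bits = 8
+  example_rewrite_fn = build(example_config, is_training=True)
+  example_rewrite_fn()  # Rewrites tf.get_default_graph() in place.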
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/graph_rewriter_builder_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/graph_rewriter_builder_test.py"
new file mode 100644
index 00000000..5f38d5a2
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/graph_rewriter_builder_test.py"
@@ -0,0 +1,57 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for graph_rewriter_builder."""
+import mock
+import tensorflow as tf
+from object_detection.builders import graph_rewriter_builder
+from object_detection.protos import graph_rewriter_pb2
+
+
+class QuantizationBuilderTest(tf.test.TestCase):
+
+ def testQuantizationBuilderSetsUpCorrectTrainArguments(self):
+ with mock.patch.object(
+ tf.contrib.quantize, 'create_training_graph') as mock_quant_fn:
+ with mock.patch.object(tf.contrib.layers,
+ 'summarize_collection') as mock_summarize_col:
+ graph_rewriter_proto = graph_rewriter_pb2.GraphRewriter()
+ graph_rewriter_proto.quantization.delay = 10
+ graph_rewriter_proto.quantization.weight_bits = 8
+ graph_rewriter_proto.quantization.activation_bits = 8
+ graph_rewrite_fn = graph_rewriter_builder.build(
+ graph_rewriter_proto, is_training=True)
+ graph_rewrite_fn()
+ _, kwargs = mock_quant_fn.call_args
+ self.assertEqual(kwargs['input_graph'], tf.get_default_graph())
+ self.assertEqual(kwargs['quant_delay'], 10)
+ mock_summarize_col.assert_called_with('quant_vars')
+
+ def testQuantizationBuilderSetsUpCorrectEvalArguments(self):
+ with mock.patch.object(tf.contrib.quantize,
+ 'create_eval_graph') as mock_quant_fn:
+ with mock.patch.object(tf.contrib.layers,
+ 'summarize_collection') as mock_summarize_col:
+ graph_rewriter_proto = graph_rewriter_pb2.GraphRewriter()
+ graph_rewriter_proto.quantization.delay = 10
+ graph_rewrite_fn = graph_rewriter_builder.build(
+ graph_rewriter_proto, is_training=False)
+ graph_rewrite_fn()
+ _, kwargs = mock_quant_fn.call_args
+ self.assertEqual(kwargs['input_graph'], tf.get_default_graph())
+ mock_summarize_col.assert_called_with('quant_vars')
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/hyperparams_builder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/hyperparams_builder.py"
new file mode 100644
index 00000000..cd503e22
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/hyperparams_builder.py"
@@ -0,0 +1,418 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Builder function to construct tf-slim arg_scope for convolution, fc ops."""
+import tensorflow as tf
+
+from object_detection.core import freezable_batch_norm
+from object_detection.protos import hyperparams_pb2
+from object_detection.utils import context_manager
+
+slim = tf.contrib.slim
+
+
+class KerasLayerHyperparams(object):
+ """
+ A hyperparameter configuration object for Keras layers used in
+ Object Detection models.
+ """
+
+ def __init__(self, hyperparams_config):
+ """Builds keras hyperparameter config for layers based on the proto config.
+
+ It automatically converts from Slim layer hyperparameter configs to
+ Keras layer hyperparameters. Namely, it:
+ - Builds Keras initializers/regularizers instead of Slim ones
+ - sets weights_regularizer/initializer to kernel_regularizer/initializer
+ - converts batchnorm decay to momentum
+ - converts Slim l2 regularizer weights to the equivalent Keras l2 weights
+
+ Contains a hyperparameter configuration for ops that specifies kernel
+ initializer, kernel regularizer, activation. Also contains parameters for
+ batch norm operators based on the configuration.
+
+ Note that if the batch_norm parameters are not specified in the config
+ (i.e. left to default) then batch norm is excluded from the config.
+
+ Args:
+ hyperparams_config: hyperparams.proto object containing
+ hyperparameters.
+
+ Raises:
+ ValueError: if hyperparams_config is not of type hyperparams.Hyperparams.
+ """
+ if not isinstance(hyperparams_config,
+ hyperparams_pb2.Hyperparams):
+ raise ValueError('hyperparams_config not of type '
+                       'hyperparams_pb2.Hyperparams.')
+
+ self._batch_norm_params = None
+ if hyperparams_config.HasField('batch_norm'):
+ self._batch_norm_params = _build_keras_batch_norm_params(
+ hyperparams_config.batch_norm)
+
+ self._activation_fn = _build_activation_fn(hyperparams_config.activation)
+ # TODO(kaftan): Unclear if these kwargs apply to separable & depthwise conv
+ # (Those might use depthwise_* instead of kernel_*)
+ # We should probably switch to using build_conv2d_layer and
+ # build_depthwise_conv2d_layer methods instead.
+ self._op_params = {
+ 'kernel_regularizer': _build_keras_regularizer(
+ hyperparams_config.regularizer),
+ 'kernel_initializer': _build_initializer(
+ hyperparams_config.initializer, build_for_keras=True),
+ 'activation': _build_activation_fn(hyperparams_config.activation)
+ }
+
+ def use_batch_norm(self):
+ return self._batch_norm_params is not None
+
+ def batch_norm_params(self, **overrides):
+ """Returns a dict containing batchnorm layer construction hyperparameters.
+
+ Optionally overrides values in the batchnorm hyperparam dict. Overrides
+ only apply to individual calls of this method, and do not affect
+ future calls.
+
+ Args:
+ **overrides: keyword arguments to override in the hyperparams dictionary
+
+ Returns: dict containing the layer construction keyword arguments, with
+ values overridden by the `overrides` keyword arguments.
+ """
+ if self._batch_norm_params is None:
+ new_batch_norm_params = dict()
+ else:
+ new_batch_norm_params = self._batch_norm_params.copy()
+ new_batch_norm_params.update(overrides)
+ return new_batch_norm_params
+
+ def build_batch_norm(self, training=None, **overrides):
+ """Returns a Batch Normalization layer with the appropriate hyperparams.
+
+ If the hyperparams are configured to not use batch normalization,
+    this will return a Keras Lambda layer that only applies tf.identity,
+ without doing any normalization.
+
+ Optionally overrides values in the batch_norm hyperparam dict. Overrides
+ only apply to individual calls of this method, and do not affect
+ future calls.
+
+ Args:
+ training: if True, the normalization layer will normalize using the batch
+ statistics. If False, the normalization layer will be frozen and will
+ act as if it is being used for inference. If None, the layer
+ will look up the Keras learning phase at `call` time to decide what to
+ do.
+ **overrides: batch normalization construction args to override from the
+ batch_norm hyperparams dictionary.
+
+ Returns: Either a FreezableBatchNorm layer (if use_batch_norm() is True),
+ or a Keras Lambda layer that applies the identity (if use_batch_norm()
+ is False)
+ """
+ if self.use_batch_norm():
+ return freezable_batch_norm.FreezableBatchNorm(
+ training=training,
+ **self.batch_norm_params(**overrides)
+ )
+ else:
+ return tf.keras.layers.Lambda(tf.identity)
+
+ def build_activation_layer(self, name='activation'):
+ """Returns a Keras layer that applies the desired activation function.
+
+ Args:
+ name: The name to assign the Keras layer.
+ Returns: A Keras lambda layer that applies the activation function
+ specified in the hyperparam config, or applies the identity if the
+ activation function is None.
+ """
+ if self._activation_fn:
+ return tf.keras.layers.Lambda(self._activation_fn, name=name)
+ else:
+ return tf.keras.layers.Lambda(tf.identity, name=name)
+
+ def params(self, include_activation=False, **overrides):
+ """Returns a dict containing the layer construction hyperparameters to use.
+
+ Optionally overrides values in the returned dict. Overrides
+ only apply to individual calls of this method, and do not affect
+ future calls.
+
+ Args:
+ include_activation: If False, activation in the returned dictionary will
+ be set to `None`, and the activation must be applied via a separate
+ layer created by `build_activation_layer`. If True, `activation` in the
+ output param dictionary will be set to the activation function
+ specified in the hyperparams config.
+ **overrides: keyword arguments to override in the hyperparams dictionary.
+
+ Returns: dict containing the layer construction keyword arguments, with
+ values overridden by the `overrides` keyword arguments.
+ """
+ new_params = self._op_params.copy()
+ new_params['activation'] = None
+ if include_activation:
+ new_params['activation'] = self._activation_fn
+ if self.use_batch_norm() and self.batch_norm_params()['center']:
+ new_params['use_bias'] = False
+ else:
+ new_params['use_bias'] = True
+ new_params.update(**overrides)
+ return new_params
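+
+  # Illustrative usage sketch (an editorial addition, not part of the upstream
+  # file), assuming a parsed hyperparams_pb2.Hyperparams proto `hp`:
+  #
+  #   keras_hyperparams = KerasLayerHyperparams(hp)
+  #   conv = tf.keras.layers.Conv2D(64, 3, **keras_hyperparams.params())
+  #   batch_norm = keras_hyperparams.build_batch_norm(training=True)
+  #   activation = keras_hyperparams.build_activation_layer()
+  #
+  # The Conv2D layer picks up the configured kernel initializer and
+  # regularizer, while batch norm and the activation are applied as separate
+  # Keras layers built from the same proto.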
+
+
+def build(hyperparams_config, is_training):
+ """Builds tf-slim arg_scope for convolution ops based on the config.
+
+ Returns an arg_scope to use for convolution ops containing weights
+ initializer, weights regularizer, activation function, batch norm function
+ and batch norm parameters based on the configuration.
+
+ Note that if no normalization parameters are specified in the config,
+ (i.e. left to default) then both batch norm and group norm are excluded
+ from the arg_scope.
+
+  The batch norm parameters are set for updates based on the `is_training`
+  argument and the conv_hyperparams_config.batch_norm.train parameter. During
+  training, they are updated only if batch_norm.train is true. During eval, no
+  updates are made to the batch norm variables. In both cases, their current
+  values are used during the forward pass.
+
+ Args:
+ hyperparams_config: hyperparams.proto object containing
+ hyperparameters.
+ is_training: Whether the network is in training mode.
+
+ Returns:
+ arg_scope_fn: A function to construct tf-slim arg_scope containing
+ hyperparameters for ops.
+
+ Raises:
+ ValueError: if hyperparams_config is not of type hyperparams.Hyperparams.
+ """
+ if not isinstance(hyperparams_config,
+ hyperparams_pb2.Hyperparams):
+ raise ValueError('hyperparams_config not of type '
+                     'hyperparams_pb2.Hyperparams.')
+
+ normalizer_fn = None
+ batch_norm_params = None
+ if hyperparams_config.HasField('batch_norm'):
+ normalizer_fn = slim.batch_norm
+ batch_norm_params = _build_batch_norm_params(
+ hyperparams_config.batch_norm, is_training)
+ if hyperparams_config.HasField('group_norm'):
+ normalizer_fn = tf.contrib.layers.group_norm
+ affected_ops = [slim.conv2d, slim.separable_conv2d, slim.conv2d_transpose]
+ if hyperparams_config.HasField('op') and (
+ hyperparams_config.op == hyperparams_pb2.Hyperparams.FC):
+ affected_ops = [slim.fully_connected]
+ def scope_fn():
+ with (slim.arg_scope([slim.batch_norm], **batch_norm_params)
+ if batch_norm_params is not None else
+ context_manager.IdentityContextManager()):
+ with slim.arg_scope(
+ affected_ops,
+ weights_regularizer=_build_slim_regularizer(
+ hyperparams_config.regularizer),
+ weights_initializer=_build_initializer(
+ hyperparams_config.initializer),
+ activation_fn=_build_activation_fn(hyperparams_config.activation),
+ normalizer_fn=normalizer_fn) as sc:
+ return sc
+
+ return scope_fn
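+
+  # Illustrative usage sketch (an editorial addition, not part of the upstream
+  # file): the returned `scope_fn` is typically entered as a slim arg_scope,
+  # e.g.
+  #
+  #   arg_scope_fn = build(conv_hyperparams_proto, is_training=True)
+  #   with slim.arg_scope(arg_scope_fn()):
+  #     net = slim.conv2d(inputs, 64, [3, 3])
+  #
+  # so the conv op picks up the configured weights initializer, regularizer,
+  # activation and (optional) batch norm parameters.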
+
+
+def _build_activation_fn(activation_fn):
+ """Builds a callable activation from config.
+
+ Args:
+ activation_fn: hyperparams_pb2.Hyperparams.activation
+
+ Returns:
+ Callable activation function.
+
+ Raises:
+ ValueError: On unknown activation function.
+ """
+ if activation_fn == hyperparams_pb2.Hyperparams.NONE:
+ return None
+ if activation_fn == hyperparams_pb2.Hyperparams.RELU:
+ return tf.nn.relu
+ if activation_fn == hyperparams_pb2.Hyperparams.RELU_6:
+ return tf.nn.relu6
+ raise ValueError('Unknown activation function: {}'.format(activation_fn))
+
+
+def _build_slim_regularizer(regularizer):
+ """Builds a tf-slim regularizer from config.
+
+ Args:
+ regularizer: hyperparams_pb2.Hyperparams.regularizer proto.
+
+ Returns:
+ tf-slim regularizer.
+
+ Raises:
+ ValueError: On unknown regularizer.
+ """
+ regularizer_oneof = regularizer.WhichOneof('regularizer_oneof')
+ if regularizer_oneof == 'l1_regularizer':
+ return slim.l1_regularizer(scale=float(regularizer.l1_regularizer.weight))
+ if regularizer_oneof == 'l2_regularizer':
+ return slim.l2_regularizer(scale=float(regularizer.l2_regularizer.weight))
+ if regularizer_oneof is None:
+ return None
+ raise ValueError('Unknown regularizer function: {}'.format(regularizer_oneof))
+
+
+def _build_keras_regularizer(regularizer):
+ """Builds a keras regularizer from config.
+
+ Args:
+ regularizer: hyperparams_pb2.Hyperparams.regularizer proto.
+
+ Returns:
+ Keras regularizer.
+
+ Raises:
+ ValueError: On unknown regularizer.
+ """
+ regularizer_oneof = regularizer.WhichOneof('regularizer_oneof')
+ if regularizer_oneof == 'l1_regularizer':
+ return tf.keras.regularizers.l1(float(regularizer.l1_regularizer.weight))
+ if regularizer_oneof == 'l2_regularizer':
+ # The Keras L2 regularizer weight differs from the Slim L2 regularizer
+ # weight by a factor of 2
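+    # (e.g. slim.l2_regularizer(0.42) computes 0.42 * sum(w**2) / 2, while
+    # tf.keras.regularizers.l2(0.21) computes 0.21 * sum(w**2); both yield
+    # the same penalty for the same weights.)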
+ return tf.keras.regularizers.l2(
+ float(regularizer.l2_regularizer.weight * 0.5))
+ raise ValueError('Unknown regularizer function: {}'.format(regularizer_oneof))
+
+
+def _build_initializer(initializer, build_for_keras=False):
+ """Build a tf initializer from config.
+
+ Args:
+ initializer: hyperparams_pb2.Hyperparams.regularizer proto.
+ build_for_keras: Whether the initializers should be built for Keras
+ operators. If false builds for Slim.
+
+ Returns:
+ tf initializer.
+
+ Raises:
+ ValueError: On unknown initializer.
+ """
+ initializer_oneof = initializer.WhichOneof('initializer_oneof')
+ if initializer_oneof == 'truncated_normal_initializer':
+ return tf.truncated_normal_initializer(
+ mean=initializer.truncated_normal_initializer.mean,
+ stddev=initializer.truncated_normal_initializer.stddev)
+ if initializer_oneof == 'random_normal_initializer':
+ return tf.random_normal_initializer(
+ mean=initializer.random_normal_initializer.mean,
+ stddev=initializer.random_normal_initializer.stddev)
+ if initializer_oneof == 'variance_scaling_initializer':
+ enum_descriptor = (hyperparams_pb2.VarianceScalingInitializer.
+ DESCRIPTOR.enum_types_by_name['Mode'])
+ mode = enum_descriptor.values_by_number[initializer.
+ variance_scaling_initializer.
+ mode].name
+ if build_for_keras:
+ if initializer.variance_scaling_initializer.uniform:
+ return tf.variance_scaling_initializer(
+ scale=initializer.variance_scaling_initializer.factor,
+ mode=mode.lower(),
+ distribution='uniform')
+ else:
+ # In TF 1.9 release and earlier, the truncated_normal distribution was
+ # not supported correctly. So, in these earlier versions of tensorflow,
+ # the ValueError will be raised, and we manually truncate the
+ # distribution scale.
+ #
+ # It is insufficient to just set distribution to `normal` from the
+ # start, because the `normal` distribution in newer Tensorflow versions
+ # creates a truncated distribution, whereas it created untruncated
+ # distributions in older versions.
+ try:
+ return tf.variance_scaling_initializer(
+ scale=initializer.variance_scaling_initializer.factor,
+ mode=mode.lower(),
+ distribution='truncated_normal')
+ except ValueError:
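+          # 0.87962566... is the standard deviation of a unit normal
+          # truncated to [-2, 2]; dividing the scale by its square
+          # compensates for the variance lost to truncation.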
+ truncate_constant = 0.87962566103423978
+ truncated_scale = initializer.variance_scaling_initializer.factor / (
+ truncate_constant * truncate_constant
+ )
+ return tf.variance_scaling_initializer(
+ scale=truncated_scale,
+ mode=mode.lower(),
+ distribution='normal')
+
+ else:
+ return slim.variance_scaling_initializer(
+ factor=initializer.variance_scaling_initializer.factor,
+ mode=mode,
+ uniform=initializer.variance_scaling_initializer.uniform)
+ raise ValueError('Unknown initializer function: {}'.format(
+ initializer_oneof))
+
+
+def _build_batch_norm_params(batch_norm, is_training):
+ """Build a dictionary of batch_norm params from config.
+
+ Args:
+ batch_norm: hyperparams_pb2.ConvHyperparams.batch_norm proto.
+    is_training: Whether the model is in training mode.
+
+ Returns:
+ A dictionary containing batch_norm parameters.
+ """
+ batch_norm_params = {
+ 'decay': batch_norm.decay,
+ 'center': batch_norm.center,
+ 'scale': batch_norm.scale,
+ 'epsilon': batch_norm.epsilon,
+ # Remove is_training parameter from here and deprecate it in the proto
+ # once we refactor Faster RCNN models to set is_training through an outer
+ # arg_scope in the meta architecture.
+ 'is_training': is_training and batch_norm.train,
+ }
+ return batch_norm_params
+
+
+def _build_keras_batch_norm_params(batch_norm):
+ """Build a dictionary of Keras BatchNormalization params from config.
+
+ Args:
+ batch_norm: hyperparams_pb2.ConvHyperparams.batch_norm proto.
+
+ Returns:
+ A dictionary containing Keras BatchNormalization parameters.
+ """
+ # Note: Although decay is defined to be 1 - momentum in batch_norm,
+ # decay in the slim batch_norm layers was erroneously defined and is
+ # actually the same as momentum in the Keras batch_norm layers.
+ # For context, see: github.com/keras-team/keras/issues/6839
+ batch_norm_params = {
+ 'momentum': batch_norm.decay,
+ 'center': batch_norm.center,
+ 'scale': batch_norm.scale,
+ 'epsilon': batch_norm.epsilon,
+ }
+ return batch_norm_params
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/hyperparams_builder_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/hyperparams_builder_test.py"
new file mode 100644
index 00000000..a83b9eea
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/hyperparams_builder_test.py"
@@ -0,0 +1,865 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests object_detection.core.hyperparams_builder."""
+
+import numpy as np
+import tensorflow as tf
+
+from google.protobuf import text_format
+
+from object_detection.builders import hyperparams_builder
+from object_detection.core import freezable_batch_norm
+from object_detection.protos import hyperparams_pb2
+
+slim = tf.contrib.slim
+
+
+def _get_scope_key(op):
+ return getattr(op, '_key_op', str(op))
+
+
+class HyperparamsBuilderTest(tf.test.TestCase):
+
+ def test_default_arg_scope_has_conv2d_op(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l1_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ scope_fn = hyperparams_builder.build(conv_hyperparams_proto,
+ is_training=True)
+ scope = scope_fn()
+ self.assertTrue(_get_scope_key(slim.conv2d) in scope)
+
+ def test_default_arg_scope_has_separable_conv2d_op(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l1_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ scope_fn = hyperparams_builder.build(conv_hyperparams_proto,
+ is_training=True)
+ scope = scope_fn()
+ self.assertTrue(_get_scope_key(slim.separable_conv2d) in scope)
+
+ def test_default_arg_scope_has_conv2d_transpose_op(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l1_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ scope_fn = hyperparams_builder.build(conv_hyperparams_proto,
+ is_training=True)
+ scope = scope_fn()
+ self.assertTrue(_get_scope_key(slim.conv2d_transpose) in scope)
+
+ def test_explicit_fc_op_arg_scope_has_fully_connected_op(self):
+ conv_hyperparams_text_proto = """
+ op: FC
+ regularizer {
+ l1_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ scope_fn = hyperparams_builder.build(conv_hyperparams_proto,
+ is_training=True)
+ scope = scope_fn()
+ self.assertTrue(_get_scope_key(slim.fully_connected) in scope)
+
+ def test_separable_conv2d_and_conv2d_and_transpose_have_same_parameters(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l1_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ scope_fn = hyperparams_builder.build(conv_hyperparams_proto,
+ is_training=True)
+ scope = scope_fn()
+ kwargs_1, kwargs_2, kwargs_3 = scope.values()
+ self.assertDictEqual(kwargs_1, kwargs_2)
+ self.assertDictEqual(kwargs_1, kwargs_3)
+
+ def test_return_l1_regularized_weights(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l1_regularizer {
+ weight: 0.5
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ scope_fn = hyperparams_builder.build(conv_hyperparams_proto,
+ is_training=True)
+ scope = scope_fn()
+ conv_scope_arguments = scope.values()[0]
+ regularizer = conv_scope_arguments['weights_regularizer']
+ weights = np.array([1., -1, 4., 2.])
+ with self.test_session() as sess:
+ result = sess.run(regularizer(tf.constant(weights)))
+ self.assertAllClose(np.abs(weights).sum() * 0.5, result)
+
+ def test_return_l1_regularized_weights_keras(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l1_regularizer {
+ weight: 0.5
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ keras_config = hyperparams_builder.KerasLayerHyperparams(
+ conv_hyperparams_proto)
+
+ regularizer = keras_config.params()['kernel_regularizer']
+ weights = np.array([1., -1, 4., 2.])
+ with self.test_session() as sess:
+ result = sess.run(regularizer(tf.constant(weights)))
+ self.assertAllClose(np.abs(weights).sum() * 0.5, result)
+
+ def test_return_l2_regularizer_weights(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ weight: 0.42
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ scope_fn = hyperparams_builder.build(conv_hyperparams_proto,
+ is_training=True)
+ scope = scope_fn()
+ conv_scope_arguments = scope[_get_scope_key(slim.conv2d)]
+
+ regularizer = conv_scope_arguments['weights_regularizer']
+ weights = np.array([1., -1, 4., 2.])
+ with self.test_session() as sess:
+ result = sess.run(regularizer(tf.constant(weights)))
+ self.assertAllClose(np.power(weights, 2).sum() / 2.0 * 0.42, result)
+
+ def test_return_l2_regularizer_weights_keras(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ weight: 0.42
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ keras_config = hyperparams_builder.KerasLayerHyperparams(
+ conv_hyperparams_proto)
+
+ regularizer = keras_config.params()['kernel_regularizer']
+ weights = np.array([1., -1, 4., 2.])
+ with self.test_session() as sess:
+ result = sess.run(regularizer(tf.constant(weights)))
+ self.assertAllClose(np.power(weights, 2).sum() / 2.0 * 0.42, result)
+
+ def test_return_non_default_batch_norm_params_with_train_during_train(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ batch_norm {
+ decay: 0.7
+ center: false
+ scale: true
+ epsilon: 0.03
+ train: true
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ scope_fn = hyperparams_builder.build(conv_hyperparams_proto,
+ is_training=True)
+ scope = scope_fn()
+ conv_scope_arguments = scope[_get_scope_key(slim.conv2d)]
+ self.assertEqual(conv_scope_arguments['normalizer_fn'], slim.batch_norm)
+ batch_norm_params = scope[_get_scope_key(slim.batch_norm)]
+ self.assertAlmostEqual(batch_norm_params['decay'], 0.7)
+ self.assertAlmostEqual(batch_norm_params['epsilon'], 0.03)
+ self.assertFalse(batch_norm_params['center'])
+ self.assertTrue(batch_norm_params['scale'])
+ self.assertTrue(batch_norm_params['is_training'])
+
+ def test_return_non_default_batch_norm_params_keras(
+ self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ batch_norm {
+ decay: 0.7
+ center: false
+ scale: true
+ epsilon: 0.03
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ keras_config = hyperparams_builder.KerasLayerHyperparams(
+ conv_hyperparams_proto)
+
+ self.assertTrue(keras_config.use_batch_norm())
+ batch_norm_params = keras_config.batch_norm_params()
+ self.assertAlmostEqual(batch_norm_params['momentum'], 0.7)
+ self.assertAlmostEqual(batch_norm_params['epsilon'], 0.03)
+ self.assertFalse(batch_norm_params['center'])
+ self.assertTrue(batch_norm_params['scale'])
+
+ batch_norm_layer = keras_config.build_batch_norm()
+ self.assertTrue(isinstance(batch_norm_layer,
+ freezable_batch_norm.FreezableBatchNorm))
+
+ def test_return_non_default_batch_norm_params_keras_override(
+ self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ batch_norm {
+ decay: 0.7
+ center: false
+ scale: true
+ epsilon: 0.03
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ keras_config = hyperparams_builder.KerasLayerHyperparams(
+ conv_hyperparams_proto)
+
+ self.assertTrue(keras_config.use_batch_norm())
+ batch_norm_params = keras_config.batch_norm_params(momentum=0.4)
+ self.assertAlmostEqual(batch_norm_params['momentum'], 0.4)
+ self.assertAlmostEqual(batch_norm_params['epsilon'], 0.03)
+ self.assertFalse(batch_norm_params['center'])
+ self.assertTrue(batch_norm_params['scale'])
+
+ def test_return_batch_norm_params_with_notrain_during_eval(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ batch_norm {
+ decay: 0.7
+ center: false
+ scale: true
+ epsilon: 0.03
+ train: true
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ scope_fn = hyperparams_builder.build(conv_hyperparams_proto,
+ is_training=False)
+ scope = scope_fn()
+ conv_scope_arguments = scope[_get_scope_key(slim.conv2d)]
+ self.assertEqual(conv_scope_arguments['normalizer_fn'], slim.batch_norm)
+ batch_norm_params = scope[_get_scope_key(slim.batch_norm)]
+ self.assertAlmostEqual(batch_norm_params['decay'], 0.7)
+ self.assertAlmostEqual(batch_norm_params['epsilon'], 0.03)
+ self.assertFalse(batch_norm_params['center'])
+ self.assertTrue(batch_norm_params['scale'])
+ self.assertFalse(batch_norm_params['is_training'])
+
+ def test_return_batch_norm_params_with_notrain_when_train_is_false(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ batch_norm {
+ decay: 0.7
+ center: false
+ scale: true
+ epsilon: 0.03
+ train: false
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ scope_fn = hyperparams_builder.build(conv_hyperparams_proto,
+ is_training=True)
+ scope = scope_fn()
+ conv_scope_arguments = scope[_get_scope_key(slim.conv2d)]
+ self.assertEqual(conv_scope_arguments['normalizer_fn'], slim.batch_norm)
+ batch_norm_params = scope[_get_scope_key(slim.batch_norm)]
+ self.assertAlmostEqual(batch_norm_params['decay'], 0.7)
+ self.assertAlmostEqual(batch_norm_params['epsilon'], 0.03)
+ self.assertFalse(batch_norm_params['center'])
+ self.assertTrue(batch_norm_params['scale'])
+ self.assertFalse(batch_norm_params['is_training'])
+
+ def test_do_not_use_batch_norm_if_default(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ scope_fn = hyperparams_builder.build(conv_hyperparams_proto,
+ is_training=True)
+ scope = scope_fn()
+ conv_scope_arguments = scope[_get_scope_key(slim.conv2d)]
+ self.assertEqual(conv_scope_arguments['normalizer_fn'], None)
+
+ def test_do_not_use_batch_norm_if_default_keras(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ keras_config = hyperparams_builder.KerasLayerHyperparams(
+ conv_hyperparams_proto)
+ self.assertFalse(keras_config.use_batch_norm())
+ self.assertEqual(keras_config.batch_norm_params(), {})
+
+ # The batch norm builder should build an identity Lambda layer
+ identity_layer = keras_config.build_batch_norm()
+ self.assertTrue(isinstance(identity_layer,
+ tf.keras.layers.Lambda))
+
+ def test_use_none_activation(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ activation: NONE
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ scope_fn = hyperparams_builder.build(conv_hyperparams_proto,
+ is_training=True)
+ scope = scope_fn()
+ conv_scope_arguments = scope[_get_scope_key(slim.conv2d)]
+ self.assertEqual(conv_scope_arguments['activation_fn'], None)
+
+ def test_use_none_activation_keras(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ activation: NONE
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ keras_config = hyperparams_builder.KerasLayerHyperparams(
+ conv_hyperparams_proto)
+ self.assertEqual(keras_config.params()['activation'], None)
+ self.assertEqual(
+ keras_config.params(include_activation=True)['activation'], None)
+ activation_layer = keras_config.build_activation_layer()
+ self.assertTrue(isinstance(activation_layer, tf.keras.layers.Lambda))
+ self.assertEqual(activation_layer.function, tf.identity)
+
+ def test_use_relu_activation(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ activation: RELU
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ scope_fn = hyperparams_builder.build(conv_hyperparams_proto,
+ is_training=True)
+ scope = scope_fn()
+ conv_scope_arguments = scope[_get_scope_key(slim.conv2d)]
+ self.assertEqual(conv_scope_arguments['activation_fn'], tf.nn.relu)
+
+ def test_use_relu_activation_keras(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ activation: RELU
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ keras_config = hyperparams_builder.KerasLayerHyperparams(
+ conv_hyperparams_proto)
+ self.assertEqual(keras_config.params()['activation'], None)
+ self.assertEqual(
+ keras_config.params(include_activation=True)['activation'], tf.nn.relu)
+ activation_layer = keras_config.build_activation_layer()
+ self.assertTrue(isinstance(activation_layer, tf.keras.layers.Lambda))
+ self.assertEqual(activation_layer.function, tf.nn.relu)
+
+ def test_use_relu_6_activation(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ activation: RELU_6
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ scope_fn = hyperparams_builder.build(conv_hyperparams_proto,
+ is_training=True)
+ scope = scope_fn()
+ conv_scope_arguments = scope[_get_scope_key(slim.conv2d)]
+ self.assertEqual(conv_scope_arguments['activation_fn'], tf.nn.relu6)
+
+ def test_use_relu_6_activation_keras(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ activation: RELU_6
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ keras_config = hyperparams_builder.KerasLayerHyperparams(
+ conv_hyperparams_proto)
+ self.assertEqual(keras_config.params()['activation'], None)
+ self.assertEqual(
+ keras_config.params(include_activation=True)['activation'], tf.nn.relu6)
+ activation_layer = keras_config.build_activation_layer()
+ self.assertTrue(isinstance(activation_layer, tf.keras.layers.Lambda))
+ self.assertEqual(activation_layer.function, tf.nn.relu6)
+
+ def test_override_activation_keras(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ activation: RELU_6
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ keras_config = hyperparams_builder.KerasLayerHyperparams(
+ conv_hyperparams_proto)
+ new_params = keras_config.params(activation=tf.nn.relu)
+ self.assertEqual(new_params['activation'], tf.nn.relu)
+
+ def _assert_variance_in_range(self, initializer, shape, variance,
+ tol=1e-2):
+ with tf.Graph().as_default() as g:
+ with self.test_session(graph=g) as sess:
+ var = tf.get_variable(
+ name='test',
+ shape=shape,
+ dtype=tf.float32,
+ initializer=initializer)
+ sess.run(tf.global_variables_initializer())
+ values = sess.run(var)
+ self.assertAllClose(np.var(values), variance, tol, tol)
+
+ def test_variance_in_range_with_variance_scaling_initializer_fan_in(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ variance_scaling_initializer {
+ factor: 2.0
+ mode: FAN_IN
+ uniform: false
+ }
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ scope_fn = hyperparams_builder.build(conv_hyperparams_proto,
+ is_training=True)
+ scope = scope_fn()
+ conv_scope_arguments = scope[_get_scope_key(slim.conv2d)]
+ initializer = conv_scope_arguments['weights_initializer']
+ self._assert_variance_in_range(initializer, shape=[100, 40],
+ variance=2. / 100.)
+
+ def test_variance_in_range_with_variance_scaling_initializer_fan_in_keras(
+ self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ variance_scaling_initializer {
+ factor: 2.0
+ mode: FAN_IN
+ uniform: false
+ }
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ keras_config = hyperparams_builder.KerasLayerHyperparams(
+ conv_hyperparams_proto)
+ initializer = keras_config.params()['kernel_initializer']
+ self._assert_variance_in_range(initializer, shape=[100, 40],
+ variance=2. / 100.)
+
+ def test_variance_in_range_with_variance_scaling_initializer_fan_out(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ variance_scaling_initializer {
+ factor: 2.0
+ mode: FAN_OUT
+ uniform: false
+ }
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ scope_fn = hyperparams_builder.build(conv_hyperparams_proto,
+ is_training=True)
+ scope = scope_fn()
+ conv_scope_arguments = scope[_get_scope_key(slim.conv2d)]
+ initializer = conv_scope_arguments['weights_initializer']
+ self._assert_variance_in_range(initializer, shape=[100, 40],
+ variance=2. / 40.)
+
+ def test_variance_in_range_with_variance_scaling_initializer_fan_out_keras(
+ self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ variance_scaling_initializer {
+ factor: 2.0
+ mode: FAN_OUT
+ uniform: false
+ }
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ keras_config = hyperparams_builder.KerasLayerHyperparams(
+ conv_hyperparams_proto)
+ initializer = keras_config.params()['kernel_initializer']
+ self._assert_variance_in_range(initializer, shape=[100, 40],
+ variance=2. / 40.)
+
+ def test_variance_in_range_with_variance_scaling_initializer_fan_avg(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ variance_scaling_initializer {
+ factor: 2.0
+ mode: FAN_AVG
+ uniform: false
+ }
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ scope_fn = hyperparams_builder.build(conv_hyperparams_proto,
+ is_training=True)
+ scope = scope_fn()
+ conv_scope_arguments = scope[_get_scope_key(slim.conv2d)]
+ initializer = conv_scope_arguments['weights_initializer']
+ self._assert_variance_in_range(initializer, shape=[100, 40],
+ variance=4. / (100. + 40.))
+
+ def test_variance_in_range_with_variance_scaling_initializer_fan_avg_keras(
+ self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ variance_scaling_initializer {
+ factor: 2.0
+ mode: FAN_AVG
+ uniform: false
+ }
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ keras_config = hyperparams_builder.KerasLayerHyperparams(
+ conv_hyperparams_proto)
+ initializer = keras_config.params()['kernel_initializer']
+ self._assert_variance_in_range(initializer, shape=[100, 40],
+ variance=4. / (100. + 40.))
+
+ def test_variance_in_range_with_variance_scaling_initializer_uniform(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ variance_scaling_initializer {
+ factor: 2.0
+ mode: FAN_IN
+ uniform: true
+ }
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ scope_fn = hyperparams_builder.build(conv_hyperparams_proto,
+ is_training=True)
+ scope = scope_fn()
+ conv_scope_arguments = scope[_get_scope_key(slim.conv2d)]
+ initializer = conv_scope_arguments['weights_initializer']
+ self._assert_variance_in_range(initializer, shape=[100, 40],
+ variance=2. / 100.)
+
+ def test_variance_in_range_with_variance_scaling_initializer_uniform_keras(
+ self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ variance_scaling_initializer {
+ factor: 2.0
+ mode: FAN_IN
+ uniform: true
+ }
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ keras_config = hyperparams_builder.KerasLayerHyperparams(
+ conv_hyperparams_proto)
+ initializer = keras_config.params()['kernel_initializer']
+ self._assert_variance_in_range(initializer, shape=[100, 40],
+ variance=2. / 100.)
+
+ def test_variance_in_range_with_truncated_normal_initializer(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ mean: 0.0
+ stddev: 0.8
+ }
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ scope_fn = hyperparams_builder.build(conv_hyperparams_proto,
+ is_training=True)
+ scope = scope_fn()
+ conv_scope_arguments = scope[_get_scope_key(slim.conv2d)]
+ initializer = conv_scope_arguments['weights_initializer']
+ self._assert_variance_in_range(initializer, shape=[100, 40],
+ variance=0.49, tol=1e-1)
+
+ def test_variance_in_range_with_truncated_normal_initializer_keras(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ mean: 0.0
+ stddev: 0.8
+ }
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ keras_config = hyperparams_builder.KerasLayerHyperparams(
+ conv_hyperparams_proto)
+ initializer = keras_config.params()['kernel_initializer']
+ self._assert_variance_in_range(initializer, shape=[100, 40],
+ variance=0.49, tol=1e-1)
+
+ def test_variance_in_range_with_random_normal_initializer(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ random_normal_initializer {
+ mean: 0.0
+ stddev: 0.8
+ }
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ scope_fn = hyperparams_builder.build(conv_hyperparams_proto,
+ is_training=True)
+ scope = scope_fn()
+ conv_scope_arguments = scope[_get_scope_key(slim.conv2d)]
+ initializer = conv_scope_arguments['weights_initializer']
+ self._assert_variance_in_range(initializer, shape=[100, 40],
+ variance=0.64, tol=1e-1)
+
+ def test_variance_in_range_with_random_normal_initializer_keras(self):
+ conv_hyperparams_text_proto = """
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ random_normal_initializer {
+ mean: 0.0
+ stddev: 0.8
+ }
+ }
+ """
+ conv_hyperparams_proto = hyperparams_pb2.Hyperparams()
+ text_format.Merge(conv_hyperparams_text_proto, conv_hyperparams_proto)
+ keras_config = hyperparams_builder.KerasLayerHyperparams(
+ conv_hyperparams_proto)
+ initializer = keras_config.params()['kernel_initializer']
+ self._assert_variance_in_range(initializer, shape=[100, 40],
+ variance=0.64, tol=1e-1)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/image_resizer_builder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/image_resizer_builder.py"
new file mode 100644
index 00000000..243c84dd
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/image_resizer_builder.py"
@@ -0,0 +1,140 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Builder function for image resizing operations."""
+import functools
+import tensorflow as tf
+
+from object_detection.core import preprocessor
+from object_detection.protos import image_resizer_pb2
+
+
+def _tf_resize_method(resize_method):
+ """Maps image resize method from enumeration type to TensorFlow.
+
+ Args:
+ resize_method: The resize_method attribute of keep_aspect_ratio_resizer or
+ fixed_shape_resizer.
+
+ Returns:
+ method: The corresponding TensorFlow ResizeMethod.
+
+ Raises:
+ ValueError: if `resize_method` is of unknown type.
+ """
+ dict_method = {
+ image_resizer_pb2.BILINEAR:
+ tf.image.ResizeMethod.BILINEAR,
+ image_resizer_pb2.NEAREST_NEIGHBOR:
+ tf.image.ResizeMethod.NEAREST_NEIGHBOR,
+ image_resizer_pb2.BICUBIC:
+ tf.image.ResizeMethod.BICUBIC,
+ image_resizer_pb2.AREA:
+ tf.image.ResizeMethod.AREA
+ }
+ if resize_method in dict_method:
+ return dict_method[resize_method]
+ else:
+ raise ValueError('Unknown resize_method')
+
+
+def build(image_resizer_config):
+ """Builds callable for image resizing operations.
+
+ Args:
+ image_resizer_config: image_resizer.proto object containing parameters for
+ an image resizing operation.
+
+ Returns:
+ image_resizer_fn: Callable for image resizing. This callable always takes
+ a rank-3 image tensor (corresponding to a single image) and returns a
+ rank-3 image tensor, possibly with new spatial dimensions.
+
+ Raises:
+ ValueError: if `image_resizer_config` is of incorrect type.
+    ValueError: if `image_resizer_config.image_resizer_oneof` is of an
+      unexpected type.
+ ValueError: if min_dimension > max_dimension when keep_aspect_ratio_resizer
+ is used.
+ """
+ if not isinstance(image_resizer_config, image_resizer_pb2.ImageResizer):
+ raise ValueError('image_resizer_config not of type '
+ 'image_resizer_pb2.ImageResizer.')
+
+ image_resizer_oneof = image_resizer_config.WhichOneof('image_resizer_oneof')
+ if image_resizer_oneof == 'keep_aspect_ratio_resizer':
+ keep_aspect_ratio_config = image_resizer_config.keep_aspect_ratio_resizer
+ if not (keep_aspect_ratio_config.min_dimension <=
+ keep_aspect_ratio_config.max_dimension):
+ raise ValueError('min_dimension > max_dimension')
+ method = _tf_resize_method(keep_aspect_ratio_config.resize_method)
+ per_channel_pad_value = (0, 0, 0)
+ if keep_aspect_ratio_config.per_channel_pad_value:
+ per_channel_pad_value = tuple(keep_aspect_ratio_config.
+ per_channel_pad_value)
+ image_resizer_fn = functools.partial(
+ preprocessor.resize_to_range,
+ min_dimension=keep_aspect_ratio_config.min_dimension,
+ max_dimension=keep_aspect_ratio_config.max_dimension,
+ method=method,
+ pad_to_max_dimension=keep_aspect_ratio_config.pad_to_max_dimension,
+ per_channel_pad_value=per_channel_pad_value)
+ if not keep_aspect_ratio_config.convert_to_grayscale:
+ return image_resizer_fn
+ elif image_resizer_oneof == 'fixed_shape_resizer':
+ fixed_shape_resizer_config = image_resizer_config.fixed_shape_resizer
+ method = _tf_resize_method(fixed_shape_resizer_config.resize_method)
+ image_resizer_fn = functools.partial(
+ preprocessor.resize_image,
+ new_height=fixed_shape_resizer_config.height,
+ new_width=fixed_shape_resizer_config.width,
+ method=method)
+ if not fixed_shape_resizer_config.convert_to_grayscale:
+ return image_resizer_fn
+ else:
+ raise ValueError(
+ 'Invalid image resizer option: \'%s\'.' % image_resizer_oneof)
+
+ def grayscale_image_resizer(image, masks=None):
+ """Convert to grayscale before applying image_resizer_fn.
+
+ Args:
+ image: A 3D tensor of shape [height, width, 3]
+ masks: (optional) rank 3 float32 tensor with shape [num_instances, height,
+ width] containing instance masks.
+
+    Returns:
+      resized_image: A 3D tensor of shape [new_height, new_width, 1],
+        where the image has been resized (with bilinear interpolation) so that
+        min(new_height, new_width) == min_dimension or
+        max(new_height, new_width) == max_dimension.
+      resized_masks: If masks is not None, also outputs masks. A 3D tensor of
+        shape [num_instances, new_height, new_width].
+      resized_image_shape: A 1D tensor of shape [3] containing the shape of
+        the resized image.
+      Note that the position of resized_image_shape in the returned list
+      changes based on whether masks are present.
+    """
+ # image_resizer_fn returns [resized_image, resized_image_shape] if
+ # mask==None, otherwise it returns
+ # [resized_image, resized_mask, resized_image_shape]. In either case, we
+ # only deal with first and last element of the returned list.
+ retval = image_resizer_fn(image, masks)
+ resized_image = retval[0]
+ resized_image_shape = retval[-1]
+ retval[0] = preprocessor.rgb_to_gray(resized_image)
+ retval[-1] = tf.concat([resized_image_shape[:-1], [1]], 0)
+ return retval
+
+ return functools.partial(grayscale_image_resizer)
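+# Illustrative usage sketch (not part of the original module): a resizer is
+# typically built from a text-format ImageResizer proto, roughly as follows
+# (the dimensions below are hypothetical):
+#
+#   from google.protobuf import text_format
+#   config = image_resizer_pb2.ImageResizer()
+#   text_format.Merge("""
+#     keep_aspect_ratio_resizer { min_dimension: 600 max_dimension: 1024 }
+#   """, config)
+#   resize_fn = build(config)
+#   resized_image, resized_shape = resize_fn(image)  # image: rank-3 tensor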
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/image_resizer_builder_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/image_resizer_builder_test.py"
new file mode 100644
index 00000000..f7da1912
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/image_resizer_builder_test.py"
@@ -0,0 +1,141 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for object_detection.builders.image_resizer_builder."""
+import numpy as np
+import tensorflow as tf
+from google.protobuf import text_format
+from object_detection.builders import image_resizer_builder
+from object_detection.protos import image_resizer_pb2
+
+
+class ImageResizerBuilderTest(tf.test.TestCase):
+
+ def _shape_of_resized_random_image_given_text_proto(self, input_shape,
+ text_proto):
+ image_resizer_config = image_resizer_pb2.ImageResizer()
+ text_format.Merge(text_proto, image_resizer_config)
+ image_resizer_fn = image_resizer_builder.build(image_resizer_config)
+ images = tf.to_float(
+ tf.random_uniform(input_shape, minval=0, maxval=255, dtype=tf.int32))
+ resized_images, _ = image_resizer_fn(images)
+ with self.test_session() as sess:
+ return sess.run(resized_images).shape
+
+ def test_build_keep_aspect_ratio_resizer_returns_expected_shape(self):
+ image_resizer_text_proto = """
+ keep_aspect_ratio_resizer {
+ min_dimension: 10
+ max_dimension: 20
+ }
+ """
+ input_shape = (50, 25, 3)
+ expected_output_shape = (20, 10, 3)
+ output_shape = self._shape_of_resized_random_image_given_text_proto(
+ input_shape, image_resizer_text_proto)
+ self.assertEqual(output_shape, expected_output_shape)
+
+ def test_build_keep_aspect_ratio_resizer_grayscale(self):
+ image_resizer_text_proto = """
+ keep_aspect_ratio_resizer {
+ min_dimension: 10
+ max_dimension: 20
+ convert_to_grayscale: true
+ }
+ """
+ input_shape = (50, 25, 3)
+ expected_output_shape = (20, 10, 1)
+ output_shape = self._shape_of_resized_random_image_given_text_proto(
+ input_shape, image_resizer_text_proto)
+ self.assertEqual(output_shape, expected_output_shape)
+
+ def test_build_keep_aspect_ratio_resizer_with_padding(self):
+ image_resizer_text_proto = """
+ keep_aspect_ratio_resizer {
+ min_dimension: 10
+ max_dimension: 20
+ pad_to_max_dimension: true
+ per_channel_pad_value: 3
+ per_channel_pad_value: 4
+ per_channel_pad_value: 5
+ }
+ """
+ input_shape = (50, 25, 3)
+ expected_output_shape = (20, 20, 3)
+ output_shape = self._shape_of_resized_random_image_given_text_proto(
+ input_shape, image_resizer_text_proto)
+ self.assertEqual(output_shape, expected_output_shape)
+
+ def test_built_fixed_shape_resizer_returns_expected_shape(self):
+ image_resizer_text_proto = """
+ fixed_shape_resizer {
+ height: 10
+ width: 20
+ }
+ """
+ input_shape = (50, 25, 3)
+ expected_output_shape = (10, 20, 3)
+ output_shape = self._shape_of_resized_random_image_given_text_proto(
+ input_shape, image_resizer_text_proto)
+ self.assertEqual(output_shape, expected_output_shape)
+
+ def test_built_fixed_shape_resizer_grayscale(self):
+ image_resizer_text_proto = """
+ fixed_shape_resizer {
+ height: 10
+ width: 20
+ convert_to_grayscale: true
+ }
+ """
+ input_shape = (50, 25, 3)
+ expected_output_shape = (10, 20, 1)
+ output_shape = self._shape_of_resized_random_image_given_text_proto(
+ input_shape, image_resizer_text_proto)
+ self.assertEqual(output_shape, expected_output_shape)
+
+ def test_raises_error_on_invalid_input(self):
+ invalid_input = 'invalid_input'
+ with self.assertRaises(ValueError):
+ image_resizer_builder.build(invalid_input)
+
+ def _resized_image_given_text_proto(self, image, text_proto):
+ image_resizer_config = image_resizer_pb2.ImageResizer()
+ text_format.Merge(text_proto, image_resizer_config)
+ image_resizer_fn = image_resizer_builder.build(image_resizer_config)
+ image_placeholder = tf.placeholder(tf.uint8, [1, None, None, 3])
+ resized_image, _ = image_resizer_fn(image_placeholder)
+ with self.test_session() as sess:
+ return sess.run(resized_image, feed_dict={image_placeholder: image})
+
+ def test_fixed_shape_resizer_nearest_neighbor_method(self):
+ image_resizer_text_proto = """
+ fixed_shape_resizer {
+ height: 1
+ width: 1
+ resize_method: NEAREST_NEIGHBOR
+ }
+ """
+ image = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
+ image = np.expand_dims(image, axis=2)
+ image = np.tile(image, (1, 1, 3))
+ image = np.expand_dims(image, axis=0)
+ resized_image = self._resized_image_given_text_proto(
+ image, image_resizer_text_proto)
+ vals = np.unique(resized_image).tolist()
+ self.assertEqual(len(vals), 1)
+ self.assertEqual(vals[0], 1)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/input_reader_builder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/input_reader_builder.py"
new file mode 100644
index 00000000..8cb5e2f0
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/input_reader_builder.py"
@@ -0,0 +1,76 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Input reader builder.
+
+Creates data sources for DetectionModels from an InputReader config. See
+input_reader.proto for options.
+
+Note: If users wish to use their own InputReaders with the Object Detection
+configuration framework, they should define their own builder function that
+wraps the build function.
+"""
+
+import tensorflow as tf
+
+from object_detection.data_decoders import tf_example_decoder
+from object_detection.protos import input_reader_pb2
+
+parallel_reader = tf.contrib.slim.parallel_reader
+
+
+def build(input_reader_config):
+ """Builds a tensor dictionary based on the InputReader config.
+
+ Args:
+ input_reader_config: A input_reader_pb2.InputReader object.
+
+ Returns:
+ A tensor dict based on the input_reader_config.
+
+ Raises:
+ ValueError: On invalid input reader proto.
+ ValueError: If no input paths are specified.
+ """
+ if not isinstance(input_reader_config, input_reader_pb2.InputReader):
+ raise ValueError('input_reader_config not of type '
+ 'input_reader_pb2.InputReader.')
+
+ if input_reader_config.WhichOneof('input_reader') == 'tf_record_input_reader':
+ config = input_reader_config.tf_record_input_reader
+ if not config.input_path:
+ raise ValueError('At least one input path must be specified in '
+ '`input_reader_config`.')
+ _, string_tensor = parallel_reader.parallel_read(
+ config.input_path[:], # Convert `RepeatedScalarContainer` to list.
+ reader_class=tf.TFRecordReader,
+ num_epochs=(input_reader_config.num_epochs
+ if input_reader_config.num_epochs else None),
+ num_readers=input_reader_config.num_readers,
+ shuffle=input_reader_config.shuffle,
+ dtypes=[tf.string, tf.string],
+ capacity=input_reader_config.queue_capacity,
+ min_after_dequeue=input_reader_config.min_after_dequeue)
+
+ label_map_proto_file = None
+ if input_reader_config.HasField('label_map_path'):
+ label_map_proto_file = input_reader_config.label_map_path
+ decoder = tf_example_decoder.TfExampleDecoder(
+ load_instance_masks=input_reader_config.load_instance_masks,
+ instance_mask_type=input_reader_config.mask_type,
+ label_map_proto_file=label_map_proto_file)
+ return decoder.decode(string_tensor)
+
+ raise ValueError('Unsupported input_reader_config.')
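+# Illustrative usage sketch (not part of the original module), mirroring the
+# unit tests for this builder: an InputReader proto is parsed from text format
+# and passed to `build`, which returns a decoded tensor dictionary (the record
+# path below is hypothetical):
+#
+#   from google.protobuf import text_format
+#   input_reader_proto = input_reader_pb2.InputReader()
+#   text_format.Merge("""
+#     shuffle: false
+#     num_readers: 1
+#     tf_record_input_reader { input_path: '/path/to/train.record' }
+#   """, input_reader_proto)
+#   tensor_dict = build(input_reader_proto)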
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/input_reader_builder_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/input_reader_builder_test.py"
new file mode 100644
index 00000000..c2c8ef4f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/input_reader_builder_test.py"
@@ -0,0 +1,129 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for input_reader_builder."""
+
+import os
+import numpy as np
+import tensorflow as tf
+
+from google.protobuf import text_format
+
+from object_detection.builders import input_reader_builder
+from object_detection.core import standard_fields as fields
+from object_detection.protos import input_reader_pb2
+from object_detection.utils import dataset_util
+
+
+class InputReaderBuilderTest(tf.test.TestCase):
+
+ def create_tf_record(self):
+ path = os.path.join(self.get_temp_dir(), 'tfrecord')
+ writer = tf.python_io.TFRecordWriter(path)
+
+ image_tensor = np.random.randint(255, size=(4, 5, 3)).astype(np.uint8)
+ flat_mask = (4 * 5) * [1.0]
+ with self.test_session():
+ encoded_jpeg = tf.image.encode_jpeg(tf.constant(image_tensor)).eval()
+ example = tf.train.Example(features=tf.train.Features(feature={
+ 'image/encoded': dataset_util.bytes_feature(encoded_jpeg),
+ 'image/format': dataset_util.bytes_feature('jpeg'.encode('utf8')),
+ 'image/height': dataset_util.int64_feature(4),
+ 'image/width': dataset_util.int64_feature(5),
+ 'image/object/bbox/xmin': dataset_util.float_list_feature([0.0]),
+ 'image/object/bbox/xmax': dataset_util.float_list_feature([1.0]),
+ 'image/object/bbox/ymin': dataset_util.float_list_feature([0.0]),
+ 'image/object/bbox/ymax': dataset_util.float_list_feature([1.0]),
+ 'image/object/class/label': dataset_util.int64_list_feature([2]),
+ 'image/object/mask': dataset_util.float_list_feature(flat_mask),
+ }))
+ writer.write(example.SerializeToString())
+ writer.close()
+
+ return path
+
+ def test_build_tf_record_input_reader(self):
+ tf_record_path = self.create_tf_record()
+
+ input_reader_text_proto = """
+ shuffle: false
+ num_readers: 1
+ tf_record_input_reader {{
+ input_path: '{0}'
+ }}
+ """.format(tf_record_path)
+ input_reader_proto = input_reader_pb2.InputReader()
+ text_format.Merge(input_reader_text_proto, input_reader_proto)
+ tensor_dict = input_reader_builder.build(input_reader_proto)
+
+ with tf.train.MonitoredSession() as sess:
+ output_dict = sess.run(tensor_dict)
+
+ self.assertTrue(fields.InputDataFields.groundtruth_instance_masks
+ not in output_dict)
+ self.assertEquals(
+ (4, 5, 3), output_dict[fields.InputDataFields.image].shape)
+ self.assertEquals(
+ [2], output_dict[fields.InputDataFields.groundtruth_classes])
+ self.assertEquals(
+ (1, 4), output_dict[fields.InputDataFields.groundtruth_boxes].shape)
+ self.assertAllEqual(
+ [0.0, 0.0, 1.0, 1.0],
+ output_dict[fields.InputDataFields.groundtruth_boxes][0])
+
+ def test_build_tf_record_input_reader_and_load_instance_masks(self):
+ tf_record_path = self.create_tf_record()
+
+ input_reader_text_proto = """
+ shuffle: false
+ num_readers: 1
+ load_instance_masks: true
+ tf_record_input_reader {{
+ input_path: '{0}'
+ }}
+ """.format(tf_record_path)
+ input_reader_proto = input_reader_pb2.InputReader()
+ text_format.Merge(input_reader_text_proto, input_reader_proto)
+ tensor_dict = input_reader_builder.build(input_reader_proto)
+
+ with tf.train.MonitoredSession() as sess:
+ output_dict = sess.run(tensor_dict)
+
+ self.assertEquals(
+ (4, 5, 3), output_dict[fields.InputDataFields.image].shape)
+ self.assertEquals(
+ [2], output_dict[fields.InputDataFields.groundtruth_classes])
+ self.assertEquals(
+ (1, 4), output_dict[fields.InputDataFields.groundtruth_boxes].shape)
+ self.assertAllEqual(
+ [0.0, 0.0, 1.0, 1.0],
+ output_dict[fields.InputDataFields.groundtruth_boxes][0])
+ self.assertAllEqual(
+ (1, 4, 5),
+ output_dict[fields.InputDataFields.groundtruth_instance_masks].shape)
+
+ def test_raises_error_with_no_input_paths(self):
+ input_reader_text_proto = """
+ shuffle: false
+ num_readers: 1
+ load_instance_masks: true
+ """
+ input_reader_proto = input_reader_pb2.InputReader()
+ text_format.Merge(input_reader_text_proto, input_reader_proto)
+ with self.assertRaises(ValueError):
+ input_reader_builder.build(input_reader_proto)
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/losses_builder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/losses_builder.py"
new file mode 100644
index 00000000..2b98d0aa
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/losses_builder.py"
@@ -0,0 +1,252 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""A function to build localization and classification losses from config."""
+
+import functools
+from object_detection.core import balanced_positive_negative_sampler as sampler
+from object_detection.core import losses
+from object_detection.protos import losses_pb2
+from object_detection.utils import ops
+
+
+def build(loss_config):
+ """Build losses based on the config.
+
+  Builds classification and localization losses, and optionally a hard example
+  miner, based on the config.
+
+ Args:
+ loss_config: A losses_pb2.Loss object.
+
+ Returns:
+ classification_loss: Classification loss object.
+ localization_loss: Localization loss object.
+ classification_weight: Classification loss weight.
+ localization_weight: Localization loss weight.
+ hard_example_miner: Hard example miner object.
+ random_example_sampler: BalancedPositiveNegativeSampler object.
+
+ Raises:
+ ValueError: If hard_example_miner is used with sigmoid_focal_loss.
+ ValueError: If random_example_sampler is getting non-positive value as
+ desired positive example fraction.
+ """
+ classification_loss = _build_classification_loss(
+ loss_config.classification_loss)
+ localization_loss = _build_localization_loss(
+ loss_config.localization_loss)
+ classification_weight = loss_config.classification_weight
+ localization_weight = loss_config.localization_weight
+ hard_example_miner = None
+ if loss_config.HasField('hard_example_miner'):
+ if (loss_config.classification_loss.WhichOneof('classification_loss') ==
+ 'weighted_sigmoid_focal'):
+ raise ValueError('HardExampleMiner should not be used with sigmoid focal '
+ 'loss')
+ hard_example_miner = build_hard_example_miner(
+ loss_config.hard_example_miner,
+ classification_weight,
+ localization_weight)
+ random_example_sampler = None
+ if loss_config.HasField('random_example_sampler'):
+ if loss_config.random_example_sampler.positive_sample_fraction <= 0:
+      raise ValueError('RandomExampleSampler should not use a non-positive '
+                       'value as positive sample fraction.')
+ random_example_sampler = sampler.BalancedPositiveNegativeSampler(
+ positive_fraction=loss_config.random_example_sampler.
+ positive_sample_fraction)
+
+ if loss_config.expected_loss_weights == loss_config.NONE:
+ expected_loss_weights_fn = None
+ elif loss_config.expected_loss_weights == loss_config.EXPECTED_SAMPLING:
+ expected_loss_weights_fn = functools.partial(
+ ops.expected_classification_loss_by_expected_sampling,
+ min_num_negative_samples=loss_config.min_num_negative_samples,
+ desired_negative_sampling_ratio=loss_config
+ .desired_negative_sampling_ratio)
+ elif (loss_config.expected_loss_weights == loss_config
+ .REWEIGHTING_UNMATCHED_ANCHORS):
+ expected_loss_weights_fn = functools.partial(
+ ops.expected_classification_loss_by_reweighting_unmatched_anchors,
+ min_num_negative_samples=loss_config.min_num_negative_samples,
+ desired_negative_sampling_ratio=loss_config
+ .desired_negative_sampling_ratio)
+ else:
+ raise ValueError('Not a valid value for expected_loss_weights.')
+
+ return (classification_loss, localization_loss, classification_weight,
+ localization_weight, hard_example_miner, random_example_sampler,
+ expected_loss_weights_fn)
+
+
+def build_hard_example_miner(config,
+ classification_weight,
+ localization_weight):
+ """Builds hard example miner based on the config.
+
+ Args:
+ config: A losses_pb2.HardExampleMiner object.
+ classification_weight: Classification loss weight.
+ localization_weight: Localization loss weight.
+
+ Returns:
+ Hard example miner.
+ """
+ loss_type = None
+ if config.loss_type == losses_pb2.HardExampleMiner.BOTH:
+ loss_type = 'both'
+ if config.loss_type == losses_pb2.HardExampleMiner.CLASSIFICATION:
+ loss_type = 'cls'
+ if config.loss_type == losses_pb2.HardExampleMiner.LOCALIZATION:
+ loss_type = 'loc'
+
+ max_negatives_per_positive = None
+ num_hard_examples = None
+ if config.max_negatives_per_positive > 0:
+ max_negatives_per_positive = config.max_negatives_per_positive
+ if config.num_hard_examples > 0:
+ num_hard_examples = config.num_hard_examples
+ hard_example_miner = losses.HardExampleMiner(
+ num_hard_examples=num_hard_examples,
+ iou_threshold=config.iou_threshold,
+ loss_type=loss_type,
+ cls_loss_weight=classification_weight,
+ loc_loss_weight=localization_weight,
+ max_negatives_per_positive=max_negatives_per_positive,
+ min_negatives_per_image=config.min_negatives_per_image)
+ return hard_example_miner
+
+
+def build_faster_rcnn_classification_loss(loss_config):
+ """Builds a classification loss for Faster RCNN based on the loss config.
+
+ Args:
+ loss_config: A losses_pb2.ClassificationLoss object.
+
+ Returns:
+ Loss based on the config.
+
+ Raises:
+ ValueError: On invalid loss_config.
+ """
+ if not isinstance(loss_config, losses_pb2.ClassificationLoss):
+ raise ValueError('loss_config not of type losses_pb2.ClassificationLoss.')
+
+ loss_type = loss_config.WhichOneof('classification_loss')
+
+ if loss_type == 'weighted_sigmoid':
+ return losses.WeightedSigmoidClassificationLoss()
+ if loss_type == 'weighted_softmax':
+ config = loss_config.weighted_softmax
+ return losses.WeightedSoftmaxClassificationLoss(
+ logit_scale=config.logit_scale)
+ if loss_type == 'weighted_logits_softmax':
+ config = loss_config.weighted_logits_softmax
+ return losses.WeightedSoftmaxClassificationAgainstLogitsLoss(
+ logit_scale=config.logit_scale)
+ if loss_type == 'weighted_sigmoid_focal':
+ config = loss_config.weighted_sigmoid_focal
+ alpha = None
+ if config.HasField('alpha'):
+ alpha = config.alpha
+ return losses.SigmoidFocalClassificationLoss(
+ gamma=config.gamma,
+ alpha=alpha)
+
+ # By default, Faster RCNN second stage classifier uses Softmax loss
+ # with anchor-wise outputs.
+ config = loss_config.weighted_softmax
+ return losses.WeightedSoftmaxClassificationLoss(
+ logit_scale=config.logit_scale)
+
+
+def _build_localization_loss(loss_config):
+ """Builds a localization loss based on the loss config.
+
+ Args:
+ loss_config: A losses_pb2.LocalizationLoss object.
+
+ Returns:
+ Loss based on the config.
+
+ Raises:
+ ValueError: On invalid loss_config.
+ """
+ if not isinstance(loss_config, losses_pb2.LocalizationLoss):
+ raise ValueError('loss_config not of type losses_pb2.LocalizationLoss.')
+
+ loss_type = loss_config.WhichOneof('localization_loss')
+
+ if loss_type == 'weighted_l2':
+ return losses.WeightedL2LocalizationLoss()
+
+ if loss_type == 'weighted_smooth_l1':
+ return losses.WeightedSmoothL1LocalizationLoss(
+ loss_config.weighted_smooth_l1.delta)
+
+ if loss_type == 'weighted_iou':
+ return losses.WeightedIOULocalizationLoss()
+
+ raise ValueError('Empty loss config.')
+
+
+def _build_classification_loss(loss_config):
+ """Builds a classification loss based on the loss config.
+
+ Args:
+ loss_config: A losses_pb2.ClassificationLoss object.
+
+ Returns:
+ Loss based on the config.
+
+ Raises:
+ ValueError: On invalid loss_config.
+ """
+ if not isinstance(loss_config, losses_pb2.ClassificationLoss):
+ raise ValueError('loss_config not of type losses_pb2.ClassificationLoss.')
+
+ loss_type = loss_config.WhichOneof('classification_loss')
+
+ if loss_type == 'weighted_sigmoid':
+ return losses.WeightedSigmoidClassificationLoss()
+
+ if loss_type == 'weighted_sigmoid_focal':
+ config = loss_config.weighted_sigmoid_focal
+ alpha = None
+ if config.HasField('alpha'):
+ alpha = config.alpha
+ return losses.SigmoidFocalClassificationLoss(
+ gamma=config.gamma,
+ alpha=alpha)
+
+ if loss_type == 'weighted_softmax':
+ config = loss_config.weighted_softmax
+ return losses.WeightedSoftmaxClassificationLoss(
+ logit_scale=config.logit_scale)
+
+ if loss_type == 'weighted_logits_softmax':
+ config = loss_config.weighted_logits_softmax
+ return losses.WeightedSoftmaxClassificationAgainstLogitsLoss(
+ logit_scale=config.logit_scale)
+
+ if loss_type == 'bootstrapped_sigmoid':
+ config = loss_config.bootstrapped_sigmoid
+ return losses.BootstrappedSigmoidClassificationLoss(
+ alpha=config.alpha,
+ bootstrap_type=('hard' if config.hard_bootstrap else 'soft'))
+
+ raise ValueError('Empty loss config.')
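For orientation, here is a minimal sketch of how this builder is typically driven from a text-format `Loss` proto (the test file below exercises the same pattern). The particular loss choices are illustrative only, not a recommended configuration:

```python
from google.protobuf import text_format

from object_detection.builders import losses_builder
from object_detection.protos import losses_pb2

# Example config: smooth L1 localization loss + sigmoid classification loss.
loss_text_proto = """
  localization_loss {
    weighted_smooth_l1 {
    }
  }
  classification_loss {
    weighted_sigmoid {
    }
  }
  classification_weight: 1.0
  localization_weight: 1.0
"""
loss_proto = losses_pb2.Loss()
text_format.Merge(loss_text_proto, loss_proto)

# build() returns a 7-tuple; the optional entries are None when not configured.
(classification_loss, localization_loss, classification_weight,
 localization_weight, hard_example_miner, random_example_sampler,
 expected_loss_weights_fn) = losses_builder.build(loss_proto)
```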
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/losses_builder_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/losses_builder_test.py"
new file mode 100644
index 00000000..24b96b40
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/losses_builder_test.py"
@@ -0,0 +1,561 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for losses_builder."""
+
+import tensorflow as tf
+
+from google.protobuf import text_format
+from object_detection.builders import losses_builder
+from object_detection.core import losses
+from object_detection.protos import losses_pb2
+from object_detection.utils import ops
+
+
+class LocalizationLossBuilderTest(tf.test.TestCase):
+
+ def test_build_weighted_l2_localization_loss(self):
+ losses_text_proto = """
+ localization_loss {
+ weighted_l2 {
+ }
+ }
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ """
+ losses_proto = losses_pb2.Loss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ _, localization_loss, _, _, _, _, _ = losses_builder.build(losses_proto)
+ self.assertTrue(isinstance(localization_loss,
+ losses.WeightedL2LocalizationLoss))
+
+ def test_build_weighted_smooth_l1_localization_loss_default_delta(self):
+ losses_text_proto = """
+ localization_loss {
+ weighted_smooth_l1 {
+ }
+ }
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ """
+ losses_proto = losses_pb2.Loss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ _, localization_loss, _, _, _, _, _ = losses_builder.build(losses_proto)
+ self.assertTrue(isinstance(localization_loss,
+ losses.WeightedSmoothL1LocalizationLoss))
+ self.assertAlmostEqual(localization_loss._delta, 1.0)
+
+ def test_build_weighted_smooth_l1_localization_loss_non_default_delta(self):
+ losses_text_proto = """
+ localization_loss {
+ weighted_smooth_l1 {
+ delta: 0.1
+ }
+ }
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ """
+ losses_proto = losses_pb2.Loss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ _, localization_loss, _, _, _, _, _ = losses_builder.build(losses_proto)
+ self.assertTrue(isinstance(localization_loss,
+ losses.WeightedSmoothL1LocalizationLoss))
+ self.assertAlmostEqual(localization_loss._delta, 0.1)
+
+ def test_build_weighted_iou_localization_loss(self):
+ losses_text_proto = """
+ localization_loss {
+ weighted_iou {
+ }
+ }
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ """
+ losses_proto = losses_pb2.Loss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ _, localization_loss, _, _, _, _, _ = losses_builder.build(losses_proto)
+ self.assertTrue(isinstance(localization_loss,
+ losses.WeightedIOULocalizationLoss))
+
+ def test_anchorwise_output(self):
+ losses_text_proto = """
+ localization_loss {
+ weighted_smooth_l1 {
+ }
+ }
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ """
+ losses_proto = losses_pb2.Loss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ _, localization_loss, _, _, _, _, _ = losses_builder.build(losses_proto)
+ self.assertTrue(isinstance(localization_loss,
+ losses.WeightedSmoothL1LocalizationLoss))
+ predictions = tf.constant([[[0.0, 0.0, 1.0, 1.0], [0.0, 0.0, 1.0, 1.0]]])
+ targets = tf.constant([[[0.0, 0.0, 1.0, 1.0], [0.0, 0.0, 1.0, 1.0]]])
+ weights = tf.constant([[1.0, 1.0]])
+ loss = localization_loss(predictions, targets, weights=weights)
+ self.assertEqual(loss.shape, [1, 2])
+
+ def test_raise_error_on_empty_localization_config(self):
+ losses_text_proto = """
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ """
+ losses_proto = losses_pb2.Loss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ with self.assertRaises(ValueError):
+ losses_builder._build_localization_loss(losses_proto)
+
+
+class ClassificationLossBuilderTest(tf.test.TestCase):
+
+ def test_build_weighted_sigmoid_classification_loss(self):
+ losses_text_proto = """
+ classification_loss {
+ weighted_sigmoid {
+ }
+ }
+ localization_loss {
+ weighted_l2 {
+ }
+ }
+ """
+ losses_proto = losses_pb2.Loss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ classification_loss, _, _, _, _, _, _ = losses_builder.build(losses_proto)
+ self.assertTrue(isinstance(classification_loss,
+ losses.WeightedSigmoidClassificationLoss))
+
+ def test_build_weighted_sigmoid_focal_classification_loss(self):
+ losses_text_proto = """
+ classification_loss {
+ weighted_sigmoid_focal {
+ }
+ }
+ localization_loss {
+ weighted_l2 {
+ }
+ }
+ """
+ losses_proto = losses_pb2.Loss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ classification_loss, _, _, _, _, _, _ = losses_builder.build(losses_proto)
+ self.assertTrue(isinstance(classification_loss,
+ losses.SigmoidFocalClassificationLoss))
+ self.assertIsNone(classification_loss._alpha)
+ self.assertAlmostEqual(classification_loss._gamma, 2.0)
+
+ def test_build_weighted_sigmoid_focal_loss_non_default(self):
+ losses_text_proto = """
+ classification_loss {
+ weighted_sigmoid_focal {
+ alpha: 0.25
+ gamma: 3.0
+ }
+ }
+ localization_loss {
+ weighted_l2 {
+ }
+ }
+ """
+ losses_proto = losses_pb2.Loss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ classification_loss, _, _, _, _, _, _ = losses_builder.build(losses_proto)
+ self.assertTrue(isinstance(classification_loss,
+ losses.SigmoidFocalClassificationLoss))
+ self.assertAlmostEqual(classification_loss._alpha, 0.25)
+ self.assertAlmostEqual(classification_loss._gamma, 3.0)
+
+ def test_build_weighted_softmax_classification_loss(self):
+ losses_text_proto = """
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ localization_loss {
+ weighted_l2 {
+ }
+ }
+ """
+ losses_proto = losses_pb2.Loss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ classification_loss, _, _, _, _, _, _ = losses_builder.build(losses_proto)
+ self.assertTrue(isinstance(classification_loss,
+ losses.WeightedSoftmaxClassificationLoss))
+
+ def test_build_weighted_logits_softmax_classification_loss(self):
+ losses_text_proto = """
+ classification_loss {
+ weighted_logits_softmax {
+ }
+ }
+ localization_loss {
+ weighted_l2 {
+ }
+ }
+ """
+ losses_proto = losses_pb2.Loss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ classification_loss, _, _, _, _, _, _ = losses_builder.build(losses_proto)
+ self.assertTrue(
+ isinstance(classification_loss,
+ losses.WeightedSoftmaxClassificationAgainstLogitsLoss))
+
+ def test_build_weighted_softmax_classification_loss_with_logit_scale(self):
+ losses_text_proto = """
+ classification_loss {
+ weighted_softmax {
+ logit_scale: 2.0
+ }
+ }
+ localization_loss {
+ weighted_l2 {
+ }
+ }
+ """
+ losses_proto = losses_pb2.Loss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ classification_loss, _, _, _, _, _, _ = losses_builder.build(losses_proto)
+ self.assertTrue(isinstance(classification_loss,
+ losses.WeightedSoftmaxClassificationLoss))
+
+ def test_build_bootstrapped_sigmoid_classification_loss(self):
+ losses_text_proto = """
+ classification_loss {
+ bootstrapped_sigmoid {
+ alpha: 0.5
+ }
+ }
+ localization_loss {
+ weighted_l2 {
+ }
+ }
+ """
+ losses_proto = losses_pb2.Loss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ classification_loss, _, _, _, _, _, _ = losses_builder.build(losses_proto)
+ self.assertTrue(isinstance(classification_loss,
+ losses.BootstrappedSigmoidClassificationLoss))
+
+ def test_anchorwise_output(self):
+ losses_text_proto = """
+ classification_loss {
+ weighted_sigmoid {
+ anchorwise_output: true
+ }
+ }
+ localization_loss {
+ weighted_l2 {
+ }
+ }
+ """
+ losses_proto = losses_pb2.Loss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ classification_loss, _, _, _, _, _, _ = losses_builder.build(losses_proto)
+ self.assertTrue(isinstance(classification_loss,
+ losses.WeightedSigmoidClassificationLoss))
+ predictions = tf.constant([[[0.0, 1.0, 0.0], [0.0, 0.5, 0.5]]])
+ targets = tf.constant([[[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]])
+ weights = tf.constant([[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]])
+ loss = classification_loss(predictions, targets, weights=weights)
+ self.assertEqual(loss.shape, [1, 2, 3])
+
+ def test_raise_error_on_empty_config(self):
+ losses_text_proto = """
+ localization_loss {
+ weighted_l2 {
+ }
+ }
+ """
+ losses_proto = losses_pb2.Loss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ with self.assertRaises(ValueError):
+ losses_builder.build(losses_proto)
+
+
+class HardExampleMinerBuilderTest(tf.test.TestCase):
+
+ def test_do_not_build_hard_example_miner_by_default(self):
+ losses_text_proto = """
+ localization_loss {
+ weighted_l2 {
+ }
+ }
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ """
+ losses_proto = losses_pb2.Loss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ _, _, _, _, hard_example_miner, _, _ = losses_builder.build(losses_proto)
+ self.assertEqual(hard_example_miner, None)
+
+ def test_build_hard_example_miner_for_classification_loss(self):
+ losses_text_proto = """
+ localization_loss {
+ weighted_l2 {
+ }
+ }
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ hard_example_miner {
+ loss_type: CLASSIFICATION
+ }
+ """
+ losses_proto = losses_pb2.Loss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ _, _, _, _, hard_example_miner, _, _ = losses_builder.build(losses_proto)
+ self.assertTrue(isinstance(hard_example_miner, losses.HardExampleMiner))
+ self.assertEqual(hard_example_miner._loss_type, 'cls')
+
+ def test_build_hard_example_miner_for_localization_loss(self):
+ losses_text_proto = """
+ localization_loss {
+ weighted_l2 {
+ }
+ }
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ hard_example_miner {
+ loss_type: LOCALIZATION
+ }
+ """
+ losses_proto = losses_pb2.Loss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ _, _, _, _, hard_example_miner, _, _ = losses_builder.build(losses_proto)
+ self.assertTrue(isinstance(hard_example_miner, losses.HardExampleMiner))
+ self.assertEqual(hard_example_miner._loss_type, 'loc')
+
+ def test_build_hard_example_miner_with_non_default_values(self):
+ losses_text_proto = """
+ localization_loss {
+ weighted_l2 {
+ }
+ }
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ hard_example_miner {
+ num_hard_examples: 32
+ iou_threshold: 0.5
+ loss_type: LOCALIZATION
+ max_negatives_per_positive: 10
+ min_negatives_per_image: 3
+ }
+ """
+ losses_proto = losses_pb2.Loss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ _, _, _, _, hard_example_miner, _, _ = losses_builder.build(losses_proto)
+ self.assertTrue(isinstance(hard_example_miner, losses.HardExampleMiner))
+ self.assertEqual(hard_example_miner._num_hard_examples, 32)
+ self.assertAlmostEqual(hard_example_miner._iou_threshold, 0.5)
+ self.assertEqual(hard_example_miner._max_negatives_per_positive, 10)
+ self.assertEqual(hard_example_miner._min_negatives_per_image, 3)
+
+
+class LossBuilderTest(tf.test.TestCase):
+
+ def test_build_all_loss_parameters(self):
+ losses_text_proto = """
+ localization_loss {
+ weighted_l2 {
+ }
+ }
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ hard_example_miner {
+ }
+ classification_weight: 0.8
+ localization_weight: 0.2
+ """
+ losses_proto = losses_pb2.Loss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ (classification_loss, localization_loss, classification_weight,
+ localization_weight, hard_example_miner, _,
+ _) = losses_builder.build(losses_proto)
+ self.assertTrue(isinstance(hard_example_miner, losses.HardExampleMiner))
+ self.assertTrue(isinstance(classification_loss,
+ losses.WeightedSoftmaxClassificationLoss))
+ self.assertTrue(isinstance(localization_loss,
+ losses.WeightedL2LocalizationLoss))
+ self.assertAlmostEqual(classification_weight, 0.8)
+ self.assertAlmostEqual(localization_weight, 0.2)
+
+ def test_build_expected_sampling(self):
+ losses_text_proto = """
+ localization_loss {
+ weighted_l2 {
+ }
+ }
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ hard_example_miner {
+ }
+ classification_weight: 0.8
+ localization_weight: 0.2
+ expected_loss_weights: EXPECTED_SAMPLING
+ """
+ losses_proto = losses_pb2.Loss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ (classification_loss, localization_loss, classification_weight,
+ localization_weight, hard_example_miner, _,
+ expected_loss_weights_fn) = losses_builder.build(losses_proto)
+ self.assertTrue(isinstance(hard_example_miner, losses.HardExampleMiner))
+ self.assertTrue(
+ isinstance(classification_loss,
+ losses.WeightedSoftmaxClassificationLoss))
+ self.assertTrue(
+ isinstance(localization_loss, losses.WeightedL2LocalizationLoss))
+ self.assertAlmostEqual(classification_weight, 0.8)
+ self.assertAlmostEqual(localization_weight, 0.2)
+ self.assertIsNotNone(expected_loss_weights_fn)
+
+ def test_build_reweighting_unmatched_anchors(self):
+ losses_text_proto = """
+ localization_loss {
+ weighted_l2 {
+ }
+ }
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ hard_example_miner {
+ }
+ classification_weight: 0.8
+ localization_weight: 0.2
+ expected_loss_weights: REWEIGHTING_UNMATCHED_ANCHORS
+ """
+ losses_proto = losses_pb2.Loss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ (classification_loss, localization_loss, classification_weight,
+ localization_weight, hard_example_miner, _,
+ expected_loss_weights_fn) = losses_builder.build(losses_proto)
+ self.assertTrue(isinstance(hard_example_miner, losses.HardExampleMiner))
+ self.assertTrue(
+ isinstance(classification_loss,
+ losses.WeightedSoftmaxClassificationLoss))
+ self.assertTrue(
+ isinstance(localization_loss, losses.WeightedL2LocalizationLoss))
+ self.assertAlmostEqual(classification_weight, 0.8)
+ self.assertAlmostEqual(localization_weight, 0.2)
+ self.assertIsNotNone(expected_loss_weights_fn)
+
+ def test_raise_error_when_both_focal_loss_and_hard_example_miner(self):
+ losses_text_proto = """
+ localization_loss {
+ weighted_l2 {
+ }
+ }
+ classification_loss {
+ weighted_sigmoid_focal {
+ }
+ }
+ hard_example_miner {
+ }
+ classification_weight: 0.8
+ localization_weight: 0.2
+ """
+ losses_proto = losses_pb2.Loss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ with self.assertRaises(ValueError):
+ losses_builder.build(losses_proto)
+
+
+class FasterRcnnClassificationLossBuilderTest(tf.test.TestCase):
+
+ def test_build_sigmoid_loss(self):
+ losses_text_proto = """
+ weighted_sigmoid {
+ }
+ """
+ losses_proto = losses_pb2.ClassificationLoss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ classification_loss = losses_builder.build_faster_rcnn_classification_loss(
+ losses_proto)
+ self.assertTrue(isinstance(classification_loss,
+ losses.WeightedSigmoidClassificationLoss))
+
+ def test_build_softmax_loss(self):
+ losses_text_proto = """
+ weighted_softmax {
+ }
+ """
+ losses_proto = losses_pb2.ClassificationLoss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ classification_loss = losses_builder.build_faster_rcnn_classification_loss(
+ losses_proto)
+ self.assertTrue(isinstance(classification_loss,
+ losses.WeightedSoftmaxClassificationLoss))
+
+ def test_build_logits_softmax_loss(self):
+ losses_text_proto = """
+ weighted_logits_softmax {
+ }
+ """
+ losses_proto = losses_pb2.ClassificationLoss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ classification_loss = losses_builder.build_faster_rcnn_classification_loss(
+ losses_proto)
+ self.assertTrue(
+ isinstance(classification_loss,
+ losses.WeightedSoftmaxClassificationAgainstLogitsLoss))
+
+ def test_build_sigmoid_focal_loss(self):
+ losses_text_proto = """
+ weighted_sigmoid_focal {
+ }
+ """
+ losses_proto = losses_pb2.ClassificationLoss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ classification_loss = losses_builder.build_faster_rcnn_classification_loss(
+ losses_proto)
+ self.assertTrue(
+ isinstance(classification_loss,
+ losses.SigmoidFocalClassificationLoss))
+
+ def test_build_softmax_loss_by_default(self):
+ losses_text_proto = """
+ """
+ losses_proto = losses_pb2.ClassificationLoss()
+ text_format.Merge(losses_text_proto, losses_proto)
+ classification_loss = losses_builder.build_faster_rcnn_classification_loss(
+ losses_proto)
+ self.assertTrue(isinstance(classification_loss,
+ losses.WeightedSoftmaxClassificationLoss))
+
+
+if __name__ == '__main__':
+ tf.test.main()
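The tests above configure the builder through text-format protos; the same configs can also be assembled programmatically via the generated proto classes. A small sketch (field names are the ones exercised by the tests above; the values are arbitrary):

```python
from object_detection.builders import losses_builder
from object_detection.protos import losses_pb2

loss_proto = losses_pb2.Loss()
# Assigning to a sub-field selects the corresponding oneof option.
loss_proto.localization_loss.weighted_smooth_l1.delta = 0.1
loss_proto.classification_loss.weighted_sigmoid_focal.gamma = 2.0
loss_proto.classification_loss.weighted_sigmoid_focal.alpha = 0.25
loss_proto.classification_weight = 1.0
loss_proto.localization_weight = 1.0

classification_loss = losses_builder.build(loss_proto)[0]
```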
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/matcher_builder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/matcher_builder.py"
new file mode 100644
index 00000000..d334f435
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/matcher_builder.py"
@@ -0,0 +1,53 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""A function to build an object detection matcher from configuration."""
+
+from object_detection.matchers import argmax_matcher
+from object_detection.matchers import bipartite_matcher
+from object_detection.protos import matcher_pb2
+
+
+def build(matcher_config):
+ """Builds a matcher object based on the matcher config.
+
+ Args:
+ matcher_config: A matcher.proto object containing the config for the desired
+ Matcher.
+
+ Returns:
+ Matcher based on the config.
+
+ Raises:
+ ValueError: On empty matcher proto.
+ """
+ if not isinstance(matcher_config, matcher_pb2.Matcher):
+ raise ValueError('matcher_config not of type matcher_pb2.Matcher.')
+ if matcher_config.WhichOneof('matcher_oneof') == 'argmax_matcher':
+ matcher = matcher_config.argmax_matcher
+ matched_threshold = unmatched_threshold = None
+ if not matcher.ignore_thresholds:
+ matched_threshold = matcher.matched_threshold
+ unmatched_threshold = matcher.unmatched_threshold
+ return argmax_matcher.ArgMaxMatcher(
+ matched_threshold=matched_threshold,
+ unmatched_threshold=unmatched_threshold,
+ negatives_lower_than_unmatched=matcher.negatives_lower_than_unmatched,
+ force_match_for_each_row=matcher.force_match_for_each_row,
+ use_matmul_gather=matcher.use_matmul_gather)
+ if matcher_config.WhichOneof('matcher_oneof') == 'bipartite_matcher':
+ matcher = matcher_config.bipartite_matcher
+ return bipartite_matcher.GreedyBipartiteMatcher(matcher.use_matmul_gather)
+ raise ValueError('Empty matcher.')
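A minimal usage sketch, mirroring the test file that follows (the threshold values are arbitrary):

```python
from google.protobuf import text_format

from object_detection.builders import matcher_builder
from object_detection.protos import matcher_pb2

matcher_text_proto = """
  argmax_matcher {
    matched_threshold: 0.7
    unmatched_threshold: 0.3
  }
"""
matcher_proto = matcher_pb2.Matcher()
text_format.Merge(matcher_text_proto, matcher_proto)
matcher = matcher_builder.build(matcher_proto)  # argmax_matcher.ArgMaxMatcher
```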
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/matcher_builder_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/matcher_builder_test.py"
new file mode 100644
index 00000000..66854491
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/matcher_builder_test.py"
@@ -0,0 +1,99 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for matcher_builder."""
+
+import tensorflow as tf
+
+from google.protobuf import text_format
+from object_detection.builders import matcher_builder
+from object_detection.matchers import argmax_matcher
+from object_detection.matchers import bipartite_matcher
+from object_detection.protos import matcher_pb2
+
+
+class MatcherBuilderTest(tf.test.TestCase):
+
+ def test_build_arg_max_matcher_with_defaults(self):
+ matcher_text_proto = """
+ argmax_matcher {
+ }
+ """
+ matcher_proto = matcher_pb2.Matcher()
+ text_format.Merge(matcher_text_proto, matcher_proto)
+ matcher_object = matcher_builder.build(matcher_proto)
+ self.assertTrue(isinstance(matcher_object, argmax_matcher.ArgMaxMatcher))
+ self.assertAlmostEqual(matcher_object._matched_threshold, 0.5)
+ self.assertAlmostEqual(matcher_object._unmatched_threshold, 0.5)
+ self.assertTrue(matcher_object._negatives_lower_than_unmatched)
+ self.assertFalse(matcher_object._force_match_for_each_row)
+
+ def test_build_arg_max_matcher_without_thresholds(self):
+ matcher_text_proto = """
+ argmax_matcher {
+ ignore_thresholds: true
+ }
+ """
+ matcher_proto = matcher_pb2.Matcher()
+ text_format.Merge(matcher_text_proto, matcher_proto)
+ matcher_object = matcher_builder.build(matcher_proto)
+ self.assertTrue(isinstance(matcher_object, argmax_matcher.ArgMaxMatcher))
+ self.assertEqual(matcher_object._matched_threshold, None)
+ self.assertEqual(matcher_object._unmatched_threshold, None)
+ self.assertTrue(matcher_object._negatives_lower_than_unmatched)
+ self.assertFalse(matcher_object._force_match_for_each_row)
+
+ def test_build_arg_max_matcher_with_non_default_parameters(self):
+ matcher_text_proto = """
+ argmax_matcher {
+ matched_threshold: 0.7
+ unmatched_threshold: 0.3
+ negatives_lower_than_unmatched: false
+ force_match_for_each_row: true
+ use_matmul_gather: true
+ }
+ """
+ matcher_proto = matcher_pb2.Matcher()
+ text_format.Merge(matcher_text_proto, matcher_proto)
+ matcher_object = matcher_builder.build(matcher_proto)
+ self.assertTrue(isinstance(matcher_object, argmax_matcher.ArgMaxMatcher))
+ self.assertAlmostEqual(matcher_object._matched_threshold, 0.7)
+ self.assertAlmostEqual(matcher_object._unmatched_threshold, 0.3)
+ self.assertFalse(matcher_object._negatives_lower_than_unmatched)
+ self.assertTrue(matcher_object._force_match_for_each_row)
+ self.assertTrue(matcher_object._use_matmul_gather)
+
+ def test_build_bipartite_matcher(self):
+ matcher_text_proto = """
+ bipartite_matcher {
+ }
+ """
+ matcher_proto = matcher_pb2.Matcher()
+ text_format.Merge(matcher_text_proto, matcher_proto)
+ matcher_object = matcher_builder.build(matcher_proto)
+ self.assertTrue(
+ isinstance(matcher_object, bipartite_matcher.GreedyBipartiteMatcher))
+
+ def test_raise_error_on_empty_matcher(self):
+ matcher_text_proto = """
+ """
+ matcher_proto = matcher_pb2.Matcher()
+ text_format.Merge(matcher_text_proto, matcher_proto)
+ with self.assertRaises(ValueError):
+ matcher_builder.build(matcher_proto)
+
+
+if __name__ == '__main__':
+ tf.test.main()
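In the detection models built below, the matcher produced here is combined with a box coder and a region similarity calculator inside a target assigner (see `model_builder.py` in the next file). A sketch of that wiring, assuming the standard `object_detection.protos` message names for the box coder and similarity calculator configs:

```python
from google.protobuf import text_format

from object_detection.builders import box_coder_builder
from object_detection.builders import matcher_builder
from object_detection.builders import region_similarity_calculator_builder as sim_calc
from object_detection.core import target_assigner
from object_detection.protos import box_coder_pb2
from object_detection.protos import matcher_pb2
from object_detection.protos import region_similarity_calculator_pb2

matcher_proto = matcher_pb2.Matcher()
text_format.Merge('argmax_matcher { }', matcher_proto)
box_coder_proto = box_coder_pb2.BoxCoder()
text_format.Merge('faster_rcnn_box_coder { }', box_coder_proto)
similarity_proto = region_similarity_calculator_pb2.RegionSimilarityCalculator()
text_format.Merge('iou_similarity { }', similarity_proto)

# Argument order follows the TargetAssigner construction in model_builder.py.
assigner = target_assigner.TargetAssigner(
    sim_calc.build(similarity_proto),
    matcher_builder.build(matcher_proto),
    box_coder_builder.build(box_coder_proto))
```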
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/model_builder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/model_builder.py"
new file mode 100644
index 00000000..83b709ae
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/model_builder.py"
@@ -0,0 +1,515 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""A function to build a DetectionModel from configuration."""
+
+import functools
+
+from object_detection.builders import anchor_generator_builder
+from object_detection.builders import box_coder_builder
+from object_detection.builders import box_predictor_builder
+from object_detection.builders import hyperparams_builder
+from object_detection.builders import image_resizer_builder
+from object_detection.builders import losses_builder
+from object_detection.builders import matcher_builder
+from object_detection.builders import post_processing_builder
+from object_detection.builders import region_similarity_calculator_builder as sim_calc
+from object_detection.core import balanced_positive_negative_sampler as sampler
+from object_detection.core import post_processing
+from object_detection.core import target_assigner
+from object_detection.meta_architectures import faster_rcnn_meta_arch
+from object_detection.meta_architectures import rfcn_meta_arch
+from object_detection.meta_architectures import ssd_meta_arch
+from object_detection.models import faster_rcnn_inception_resnet_v2_feature_extractor as frcnn_inc_res
+from object_detection.models import faster_rcnn_inception_v2_feature_extractor as frcnn_inc_v2
+from object_detection.models import faster_rcnn_nas_feature_extractor as frcnn_nas
+from object_detection.models import faster_rcnn_pnas_feature_extractor as frcnn_pnas
+from object_detection.models import faster_rcnn_resnet_v1_feature_extractor as frcnn_resnet_v1
+from object_detection.models import ssd_resnet_v1_fpn_feature_extractor as ssd_resnet_v1_fpn
+from object_detection.models import ssd_resnet_v1_ppn_feature_extractor as ssd_resnet_v1_ppn
+from object_detection.models.embedded_ssd_mobilenet_v1_feature_extractor import EmbeddedSSDMobileNetV1FeatureExtractor
+from object_detection.models.ssd_inception_v2_feature_extractor import SSDInceptionV2FeatureExtractor
+from object_detection.models.ssd_inception_v3_feature_extractor import SSDInceptionV3FeatureExtractor
+from object_detection.models.ssd_mobilenet_v1_feature_extractor import SSDMobileNetV1FeatureExtractor
+from object_detection.models.ssd_mobilenet_v1_fpn_feature_extractor import SSDMobileNetV1FpnFeatureExtractor
+from object_detection.models.ssd_mobilenet_v1_ppn_feature_extractor import SSDMobileNetV1PpnFeatureExtractor
+from object_detection.models.ssd_mobilenet_v2_feature_extractor import SSDMobileNetV2FeatureExtractor
+from object_detection.models.ssd_mobilenet_v2_fpn_feature_extractor import SSDMobileNetV2FpnFeatureExtractor
+from object_detection.models.ssd_mobilenet_v2_keras_feature_extractor import SSDMobileNetV2KerasFeatureExtractor
+from object_detection.models.ssd_pnasnet_feature_extractor import SSDPNASNetFeatureExtractor
+from object_detection.predictors import rfcn_box_predictor
+from object_detection.predictors.heads import mask_head
+from object_detection.protos import model_pb2
+from object_detection.utils import ops
+
+# A map of names to SSD feature extractors.
+SSD_FEATURE_EXTRACTOR_CLASS_MAP = {
+ 'ssd_inception_v2': SSDInceptionV2FeatureExtractor,
+ 'ssd_inception_v3': SSDInceptionV3FeatureExtractor,
+ 'ssd_mobilenet_v1': SSDMobileNetV1FeatureExtractor,
+ 'ssd_mobilenet_v1_fpn': SSDMobileNetV1FpnFeatureExtractor,
+ 'ssd_mobilenet_v1_ppn': SSDMobileNetV1PpnFeatureExtractor,
+ 'ssd_mobilenet_v2': SSDMobileNetV2FeatureExtractor,
+ 'ssd_mobilenet_v2_fpn': SSDMobileNetV2FpnFeatureExtractor,
+ 'ssd_resnet50_v1_fpn': ssd_resnet_v1_fpn.SSDResnet50V1FpnFeatureExtractor,
+ 'ssd_resnet101_v1_fpn': ssd_resnet_v1_fpn.SSDResnet101V1FpnFeatureExtractor,
+ 'ssd_resnet152_v1_fpn': ssd_resnet_v1_fpn.SSDResnet152V1FpnFeatureExtractor,
+ 'ssd_resnet50_v1_ppn': ssd_resnet_v1_ppn.SSDResnet50V1PpnFeatureExtractor,
+ 'ssd_resnet101_v1_ppn':
+ ssd_resnet_v1_ppn.SSDResnet101V1PpnFeatureExtractor,
+ 'ssd_resnet152_v1_ppn':
+ ssd_resnet_v1_ppn.SSDResnet152V1PpnFeatureExtractor,
+ 'embedded_ssd_mobilenet_v1': EmbeddedSSDMobileNetV1FeatureExtractor,
+ 'ssd_pnasnet': SSDPNASNetFeatureExtractor,
+}
+
+SSD_KERAS_FEATURE_EXTRACTOR_CLASS_MAP = {
+ 'ssd_mobilenet_v2_keras': SSDMobileNetV2KerasFeatureExtractor
+}
+
+# A map of names to Faster R-CNN feature extractors.
+FASTER_RCNN_FEATURE_EXTRACTOR_CLASS_MAP = {
+ 'faster_rcnn_nas':
+ frcnn_nas.FasterRCNNNASFeatureExtractor,
+ 'faster_rcnn_pnas':
+ frcnn_pnas.FasterRCNNPNASFeatureExtractor,
+ 'faster_rcnn_inception_resnet_v2':
+ frcnn_inc_res.FasterRCNNInceptionResnetV2FeatureExtractor,
+ 'faster_rcnn_inception_v2':
+ frcnn_inc_v2.FasterRCNNInceptionV2FeatureExtractor,
+ 'faster_rcnn_resnet50':
+ frcnn_resnet_v1.FasterRCNNResnet50FeatureExtractor,
+ 'faster_rcnn_resnet101':
+ frcnn_resnet_v1.FasterRCNNResnet101FeatureExtractor,
+ 'faster_rcnn_resnet152':
+ frcnn_resnet_v1.FasterRCNNResnet152FeatureExtractor,
+}
+
+
+def build(model_config, is_training, add_summaries=True):
+ """Builds a DetectionModel based on the model config.
+
+ Args:
+ model_config: A model.proto object containing the config for the desired
+ DetectionModel.
+ is_training: True if this model is being built for training purposes.
+ add_summaries: Whether to add tensorflow summaries in the model graph.
+
+ Returns:
+ DetectionModel based on the config.
+
+ Raises:
+ ValueError: On invalid meta architecture or model.
+ """
+ if not isinstance(model_config, model_pb2.DetectionModel):
+ raise ValueError('model_config not of type model_pb2.DetectionModel.')
+ meta_architecture = model_config.WhichOneof('model')
+ if meta_architecture == 'ssd':
+ return _build_ssd_model(model_config.ssd, is_training, add_summaries)
+ if meta_architecture == 'faster_rcnn':
+ return _build_faster_rcnn_model(model_config.faster_rcnn, is_training,
+ add_summaries)
+ raise ValueError('Unknown meta architecture: {}'.format(meta_architecture))
+
+
+def _build_ssd_feature_extractor(feature_extractor_config,
+ is_training,
+ freeze_batchnorm,
+ reuse_weights=None):
+ """Builds a ssd_meta_arch.SSDFeatureExtractor based on config.
+
+ Args:
+ feature_extractor_config: An SSDFeatureExtractor proto config from ssd.proto.
+ is_training: True if this feature extractor is being built for training.
+ freeze_batchnorm: Whether to freeze batch norm parameters during
+ training or not. When training with a small batch size (e.g. 1), it is
+ desirable to freeze batch norm update and use pretrained batch norm
+ params.
+ reuse_weights: Whether the feature extractor should reuse weights.
+
+ Returns:
+ ssd_meta_arch.SSDFeatureExtractor based on config.
+
+ Raises:
+ ValueError: On invalid feature extractor type.
+ """
+ feature_type = feature_extractor_config.type
+ is_keras_extractor = feature_type in SSD_KERAS_FEATURE_EXTRACTOR_CLASS_MAP
+ depth_multiplier = feature_extractor_config.depth_multiplier
+ min_depth = feature_extractor_config.min_depth
+ pad_to_multiple = feature_extractor_config.pad_to_multiple
+ use_explicit_padding = feature_extractor_config.use_explicit_padding
+ use_depthwise = feature_extractor_config.use_depthwise
+
+ if is_keras_extractor:
+ conv_hyperparams = hyperparams_builder.KerasLayerHyperparams(
+ feature_extractor_config.conv_hyperparams)
+ else:
+ conv_hyperparams = hyperparams_builder.build(
+ feature_extractor_config.conv_hyperparams, is_training)
+ override_base_feature_extractor_hyperparams = (
+ feature_extractor_config.override_base_feature_extractor_hyperparams)
+
+ if (feature_type not in SSD_FEATURE_EXTRACTOR_CLASS_MAP) and (
+ not is_keras_extractor):
+ raise ValueError('Unknown ssd feature_extractor: {}'.format(feature_type))
+
+ if is_keras_extractor:
+ feature_extractor_class = SSD_KERAS_FEATURE_EXTRACTOR_CLASS_MAP[
+ feature_type]
+ else:
+ feature_extractor_class = SSD_FEATURE_EXTRACTOR_CLASS_MAP[feature_type]
+ kwargs = {
+ 'is_training':
+ is_training,
+ 'depth_multiplier':
+ depth_multiplier,
+ 'min_depth':
+ min_depth,
+ 'pad_to_multiple':
+ pad_to_multiple,
+ 'use_explicit_padding':
+ use_explicit_padding,
+ 'use_depthwise':
+ use_depthwise,
+ 'override_base_feature_extractor_hyperparams':
+ override_base_feature_extractor_hyperparams
+ }
+
+ if is_keras_extractor:
+ kwargs.update({
+ 'conv_hyperparams': conv_hyperparams,
+ 'inplace_batchnorm_update': False,
+ 'freeze_batchnorm': freeze_batchnorm
+ })
+ else:
+ kwargs.update({
+ 'conv_hyperparams_fn': conv_hyperparams,
+ 'reuse_weights': reuse_weights,
+ })
+
+ if feature_extractor_config.HasField('fpn'):
+ kwargs.update({
+ 'fpn_min_level':
+ feature_extractor_config.fpn.min_level,
+ 'fpn_max_level':
+ feature_extractor_config.fpn.max_level,
+ 'additional_layer_depth':
+ feature_extractor_config.fpn.additional_layer_depth,
+ })
+
+ return feature_extractor_class(**kwargs)
+
+
+def _build_ssd_model(ssd_config, is_training, add_summaries):
+ """Builds an SSD detection model based on the model config.
+
+ Args:
+ ssd_config: A ssd.proto object containing the config for the desired
+ SSDMetaArch.
+ is_training: True if this model is being built for training purposes.
+ add_summaries: Whether to add tf summaries in the model.
+
+ Returns:
+ SSDMetaArch based on the config.
+
+ Raises:
+ ValueError: If ssd_config.type is not recognized (i.e. not registered in
+ model_class_map).
+ """
+ num_classes = ssd_config.num_classes
+
+ # Feature extractor
+ feature_extractor = _build_ssd_feature_extractor(
+ feature_extractor_config=ssd_config.feature_extractor,
+ freeze_batchnorm=ssd_config.freeze_batchnorm,
+ is_training=is_training)
+
+ box_coder = box_coder_builder.build(ssd_config.box_coder)
+ matcher = matcher_builder.build(ssd_config.matcher)
+ region_similarity_calculator = sim_calc.build(
+ ssd_config.similarity_calculator)
+ encode_background_as_zeros = ssd_config.encode_background_as_zeros
+ negative_class_weight = ssd_config.negative_class_weight
+ anchor_generator = anchor_generator_builder.build(
+ ssd_config.anchor_generator)
+ if feature_extractor.is_keras_model:
+ ssd_box_predictor = box_predictor_builder.build_keras(
+ conv_hyperparams_fn=hyperparams_builder.KerasLayerHyperparams,
+ freeze_batchnorm=ssd_config.freeze_batchnorm,
+ inplace_batchnorm_update=False,
+ num_predictions_per_location_list=anchor_generator
+ .num_anchors_per_location(),
+ box_predictor_config=ssd_config.box_predictor,
+ is_training=is_training,
+ num_classes=num_classes,
+ add_background_class=ssd_config.add_background_class)
+ else:
+ ssd_box_predictor = box_predictor_builder.build(
+ hyperparams_builder.build, ssd_config.box_predictor, is_training,
+ num_classes, ssd_config.add_background_class)
+ image_resizer_fn = image_resizer_builder.build(ssd_config.image_resizer)
+ non_max_suppression_fn, score_conversion_fn = post_processing_builder.build(
+ ssd_config.post_processing)
+ (classification_loss, localization_loss, classification_weight,
+ localization_weight, hard_example_miner, random_example_sampler,
+ expected_loss_weights_fn) = losses_builder.build(ssd_config.loss)
+ normalize_loss_by_num_matches = ssd_config.normalize_loss_by_num_matches
+ normalize_loc_loss_by_codesize = ssd_config.normalize_loc_loss_by_codesize
+
+ equalization_loss_config = ops.EqualizationLossConfig(
+ weight=ssd_config.loss.equalization_loss.weight,
+ exclude_prefixes=ssd_config.loss.equalization_loss.exclude_prefixes)
+
+ target_assigner_instance = target_assigner.TargetAssigner(
+ region_similarity_calculator,
+ matcher,
+ box_coder,
+ negative_class_weight=negative_class_weight)
+
+ ssd_meta_arch_fn = ssd_meta_arch.SSDMetaArch
+ kwargs = {}
+
+ return ssd_meta_arch_fn(
+ is_training=is_training,
+ anchor_generator=anchor_generator,
+ box_predictor=ssd_box_predictor,
+ box_coder=box_coder,
+ feature_extractor=feature_extractor,
+ encode_background_as_zeros=encode_background_as_zeros,
+ image_resizer_fn=image_resizer_fn,
+ non_max_suppression_fn=non_max_suppression_fn,
+ score_conversion_fn=score_conversion_fn,
+ classification_loss=classification_loss,
+ localization_loss=localization_loss,
+ classification_loss_weight=classification_weight,
+ localization_loss_weight=localization_weight,
+ normalize_loss_by_num_matches=normalize_loss_by_num_matches,
+ hard_example_miner=hard_example_miner,
+ target_assigner_instance=target_assigner_instance,
+ add_summaries=add_summaries,
+ normalize_loc_loss_by_codesize=normalize_loc_loss_by_codesize,
+ freeze_batchnorm=ssd_config.freeze_batchnorm,
+ inplace_batchnorm_update=ssd_config.inplace_batchnorm_update,
+ add_background_class=ssd_config.add_background_class,
+ explicit_background_class=ssd_config.explicit_background_class,
+ random_example_sampler=random_example_sampler,
+ expected_loss_weights_fn=expected_loss_weights_fn,
+ use_confidences_as_targets=ssd_config.use_confidences_as_targets,
+ implicit_example_weight=ssd_config.implicit_example_weight,
+ equalization_loss_config=equalization_loss_config,
+ **kwargs)
+
+
+def _build_faster_rcnn_feature_extractor(
+ feature_extractor_config, is_training, reuse_weights=None,
+ inplace_batchnorm_update=False):
+ """Builds a faster_rcnn_meta_arch.FasterRCNNFeatureExtractor based on config.
+
+ Args:
+ feature_extractor_config: A FasterRcnnFeatureExtractor proto config from
+ faster_rcnn.proto.
+ is_training: True if this feature extractor is being built for training.
+ reuse_weights: Whether the feature extractor should reuse weights.
+ inplace_batchnorm_update: Whether to update batch_norm inplace during
+ training. This is required for batch norm to work correctly on TPUs. When
+ this is false, user must add a control dependency on
+ tf.GraphKeys.UPDATE_OPS for train/loss op in order to update the batch
+ norm moving average parameters.
+
+ Returns:
+ faster_rcnn_meta_arch.FasterRCNNFeatureExtractor based on config.
+
+ Raises:
+ ValueError: On invalid feature extractor type.
+ """
+ if inplace_batchnorm_update:
+ raise ValueError('inplace batchnorm updates not supported.')
+ feature_type = feature_extractor_config.type
+ first_stage_features_stride = (
+ feature_extractor_config.first_stage_features_stride)
+ batch_norm_trainable = feature_extractor_config.batch_norm_trainable
+
+ if feature_type not in FASTER_RCNN_FEATURE_EXTRACTOR_CLASS_MAP:
+ raise ValueError('Unknown Faster R-CNN feature_extractor: {}'.format(
+ feature_type))
+ feature_extractor_class = FASTER_RCNN_FEATURE_EXTRACTOR_CLASS_MAP[
+ feature_type]
+ return feature_extractor_class(
+ is_training, first_stage_features_stride,
+ batch_norm_trainable, reuse_weights)
+
+
+def _build_faster_rcnn_model(frcnn_config, is_training, add_summaries):
+ """Builds a Faster R-CNN or R-FCN detection model based on the model config.
+
+ Builds R-FCN model if the second_stage_box_predictor in the config is of type
+ `rfcn_box_predictor` else builds a Faster R-CNN model.
+
+ Args:
+ frcnn_config: A faster_rcnn.proto object containing the config for the
+ desired FasterRCNNMetaArch or RFCNMetaArch.
+ is_training: True if this model is being built for training purposes.
+ add_summaries: Whether to add tf summaries in the model.
+
+ Returns:
+ FasterRCNNMetaArch based on the config.
+
+ Raises:
+ ValueError: If frcnn_config.type is not recognized (i.e. not registered in
+ model_class_map).
+ """
+ num_classes = frcnn_config.num_classes
+ image_resizer_fn = image_resizer_builder.build(frcnn_config.image_resizer)
+
+ feature_extractor = _build_faster_rcnn_feature_extractor(
+ frcnn_config.feature_extractor, is_training,
+ inplace_batchnorm_update=frcnn_config.inplace_batchnorm_update)
+
+ number_of_stages = frcnn_config.number_of_stages
+ first_stage_anchor_generator = anchor_generator_builder.build(
+ frcnn_config.first_stage_anchor_generator)
+
+ first_stage_target_assigner = target_assigner.create_target_assigner(
+ 'FasterRCNN',
+ 'proposal',
+ use_matmul_gather=frcnn_config.use_matmul_gather_in_matcher)
+ first_stage_atrous_rate = frcnn_config.first_stage_atrous_rate
+ first_stage_box_predictor_arg_scope_fn = hyperparams_builder.build(
+ frcnn_config.first_stage_box_predictor_conv_hyperparams, is_training)
+ first_stage_box_predictor_kernel_size = (
+ frcnn_config.first_stage_box_predictor_kernel_size)
+ first_stage_box_predictor_depth = frcnn_config.first_stage_box_predictor_depth
+ first_stage_minibatch_size = frcnn_config.first_stage_minibatch_size
+ use_static_shapes = frcnn_config.use_static_shapes and (
+ frcnn_config.use_static_shapes_for_eval or is_training)
+ first_stage_sampler = sampler.BalancedPositiveNegativeSampler(
+ positive_fraction=frcnn_config.first_stage_positive_balance_fraction,
+ is_static=(frcnn_config.use_static_balanced_label_sampler and
+ use_static_shapes))
+ first_stage_max_proposals = frcnn_config.first_stage_max_proposals
+ if (frcnn_config.first_stage_nms_iou_threshold < 0 or
+ frcnn_config.first_stage_nms_iou_threshold > 1.0):
+ raise ValueError('iou_threshold not in [0, 1.0].')
+ if (is_training and frcnn_config.second_stage_batch_size >
+ first_stage_max_proposals):
+ raise ValueError('second_stage_batch_size should be no greater than '
+ 'first_stage_max_proposals.')
+ first_stage_non_max_suppression_fn = functools.partial(
+ post_processing.batch_multiclass_non_max_suppression,
+ score_thresh=frcnn_config.first_stage_nms_score_threshold,
+ iou_thresh=frcnn_config.first_stage_nms_iou_threshold,
+ max_size_per_class=frcnn_config.first_stage_max_proposals,
+ max_total_size=frcnn_config.first_stage_max_proposals,
+ use_static_shapes=use_static_shapes)
+ first_stage_loc_loss_weight = (
+ frcnn_config.first_stage_localization_loss_weight)
+ first_stage_obj_loss_weight = frcnn_config.first_stage_objectness_loss_weight
+
+ initial_crop_size = frcnn_config.initial_crop_size
+ maxpool_kernel_size = frcnn_config.maxpool_kernel_size
+ maxpool_stride = frcnn_config.maxpool_stride
+
+ second_stage_target_assigner = target_assigner.create_target_assigner(
+ 'FasterRCNN',
+ 'detection',
+ use_matmul_gather=frcnn_config.use_matmul_gather_in_matcher)
+ second_stage_box_predictor = box_predictor_builder.build(
+ hyperparams_builder.build,
+ frcnn_config.second_stage_box_predictor,
+ is_training=is_training,
+ num_classes=num_classes)
+ second_stage_batch_size = frcnn_config.second_stage_batch_size
+ second_stage_sampler = sampler.BalancedPositiveNegativeSampler(
+ positive_fraction=frcnn_config.second_stage_balance_fraction,
+ is_static=(frcnn_config.use_static_balanced_label_sampler and
+ use_static_shapes))
+ (second_stage_non_max_suppression_fn, second_stage_score_conversion_fn
+ ) = post_processing_builder.build(frcnn_config.second_stage_post_processing)
+ second_stage_localization_loss_weight = (
+ frcnn_config.second_stage_localization_loss_weight)
+ second_stage_classification_loss = (
+ losses_builder.build_faster_rcnn_classification_loss(
+ frcnn_config.second_stage_classification_loss))
+ second_stage_classification_loss_weight = (
+ frcnn_config.second_stage_classification_loss_weight)
+ second_stage_mask_prediction_loss_weight = (
+ frcnn_config.second_stage_mask_prediction_loss_weight)
+
+ hard_example_miner = None
+ if frcnn_config.HasField('hard_example_miner'):
+ hard_example_miner = losses_builder.build_hard_example_miner(
+ frcnn_config.hard_example_miner,
+ second_stage_classification_loss_weight,
+ second_stage_localization_loss_weight)
+
+ crop_and_resize_fn = (
+ ops.matmul_crop_and_resize if frcnn_config.use_matmul_crop_and_resize
+ else ops.native_crop_and_resize)
+ clip_anchors_to_image = (
+ frcnn_config.clip_anchors_to_image)
+
+ common_kwargs = {
+ 'is_training': is_training,
+ 'num_classes': num_classes,
+ 'image_resizer_fn': image_resizer_fn,
+ 'feature_extractor': feature_extractor,
+ 'number_of_stages': number_of_stages,
+ 'first_stage_anchor_generator': first_stage_anchor_generator,
+ 'first_stage_target_assigner': first_stage_target_assigner,
+ 'first_stage_atrous_rate': first_stage_atrous_rate,
+ 'first_stage_box_predictor_arg_scope_fn':
+ first_stage_box_predictor_arg_scope_fn,
+ 'first_stage_box_predictor_kernel_size':
+ first_stage_box_predictor_kernel_size,
+ 'first_stage_box_predictor_depth': first_stage_box_predictor_depth,
+ 'first_stage_minibatch_size': first_stage_minibatch_size,
+ 'first_stage_sampler': first_stage_sampler,
+ 'first_stage_non_max_suppression_fn': first_stage_non_max_suppression_fn,
+ 'first_stage_max_proposals': first_stage_max_proposals,
+ 'first_stage_localization_loss_weight': first_stage_loc_loss_weight,
+ 'first_stage_objectness_loss_weight': first_stage_obj_loss_weight,
+ 'second_stage_target_assigner': second_stage_target_assigner,
+ 'second_stage_batch_size': second_stage_batch_size,
+ 'second_stage_sampler': second_stage_sampler,
+ 'second_stage_non_max_suppression_fn':
+ second_stage_non_max_suppression_fn,
+ 'second_stage_score_conversion_fn': second_stage_score_conversion_fn,
+ 'second_stage_localization_loss_weight':
+ second_stage_localization_loss_weight,
+ 'second_stage_classification_loss':
+ second_stage_classification_loss,
+ 'second_stage_classification_loss_weight':
+ second_stage_classification_loss_weight,
+ 'hard_example_miner': hard_example_miner,
+ 'add_summaries': add_summaries,
+ 'crop_and_resize_fn': crop_and_resize_fn,
+ 'clip_anchors_to_image': clip_anchors_to_image,
+ 'use_static_shapes': use_static_shapes,
+ 'resize_masks': frcnn_config.resize_masks
+ }
+
+ if isinstance(second_stage_box_predictor,
+ rfcn_box_predictor.RfcnBoxPredictor):
+ return rfcn_meta_arch.RFCNMetaArch(
+ second_stage_rfcn_box_predictor=second_stage_box_predictor,
+ **common_kwargs)
+ else:
+ return faster_rcnn_meta_arch.FasterRCNNMetaArch(
+ initial_crop_size=initial_crop_size,
+ maxpool_kernel_size=maxpool_kernel_size,
+ maxpool_stride=maxpool_stride,
+ second_stage_mask_rcnn_box_predictor=second_stage_box_predictor,
+ second_stage_mask_prediction_loss_weight=(
+ second_stage_mask_prediction_loss_weight),
+ **common_kwargs)
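A minimal end-to-end sketch of driving this builder; it assumes the companion `object_detection.utils.config_util` helper from the same codebase and a standard `pipeline.config` file (the path below is a placeholder):

```python
from object_detection.builders import model_builder
from object_detection.utils import config_util

# Placeholder path to a TF Object Detection API pipeline config.
pipeline_config_path = '/path/to/pipeline.config'

configs = config_util.get_configs_from_pipeline_file(pipeline_config_path)
detection_model = model_builder.build(configs['model'], is_training=True)
```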
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/model_builder_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/model_builder_test.py"
new file mode 100644
index 00000000..87ac0a8b
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/model_builder_test.py"
@@ -0,0 +1,1570 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for object_detection.models.model_builder."""
+
+from absl.testing import parameterized
+
+import tensorflow as tf
+
+from google.protobuf import text_format
+from object_detection.builders import model_builder
+from object_detection.meta_architectures import faster_rcnn_meta_arch
+from object_detection.meta_architectures import rfcn_meta_arch
+from object_detection.meta_architectures import ssd_meta_arch
+from object_detection.models import faster_rcnn_inception_resnet_v2_feature_extractor as frcnn_inc_res
+from object_detection.models import faster_rcnn_inception_v2_feature_extractor as frcnn_inc_v2
+from object_detection.models import faster_rcnn_nas_feature_extractor as frcnn_nas
+from object_detection.models import faster_rcnn_pnas_feature_extractor as frcnn_pnas
+from object_detection.models import faster_rcnn_resnet_v1_feature_extractor as frcnn_resnet_v1
+from object_detection.models import ssd_resnet_v1_fpn_feature_extractor as ssd_resnet_v1_fpn
+from object_detection.models import ssd_resnet_v1_ppn_feature_extractor as ssd_resnet_v1_ppn
+from object_detection.models.embedded_ssd_mobilenet_v1_feature_extractor import EmbeddedSSDMobileNetV1FeatureExtractor
+from object_detection.models.ssd_inception_v2_feature_extractor import SSDInceptionV2FeatureExtractor
+from object_detection.models.ssd_inception_v3_feature_extractor import SSDInceptionV3FeatureExtractor
+from object_detection.models.ssd_mobilenet_v1_feature_extractor import SSDMobileNetV1FeatureExtractor
+from object_detection.models.ssd_mobilenet_v1_fpn_feature_extractor import SSDMobileNetV1FpnFeatureExtractor
+from object_detection.models.ssd_mobilenet_v1_ppn_feature_extractor import SSDMobileNetV1PpnFeatureExtractor
+from object_detection.models.ssd_mobilenet_v2_feature_extractor import SSDMobileNetV2FeatureExtractor
+from object_detection.models.ssd_mobilenet_v2_fpn_feature_extractor import SSDMobileNetV2FpnFeatureExtractor
+from object_detection.models.ssd_mobilenet_v2_keras_feature_extractor import SSDMobileNetV2KerasFeatureExtractor
+from object_detection.predictors import convolutional_box_predictor
+from object_detection.predictors import convolutional_keras_box_predictor
+from object_detection.protos import model_pb2
+
+FRCNN_RESNET_FEAT_MAPS = {
+ 'faster_rcnn_resnet50':
+ frcnn_resnet_v1.FasterRCNNResnet50FeatureExtractor,
+ 'faster_rcnn_resnet101':
+ frcnn_resnet_v1.FasterRCNNResnet101FeatureExtractor,
+ 'faster_rcnn_resnet152':
+ frcnn_resnet_v1.FasterRCNNResnet152FeatureExtractor
+}
+
+SSD_RESNET_V1_FPN_FEAT_MAPS = {
+ 'ssd_resnet50_v1_fpn':
+ ssd_resnet_v1_fpn.SSDResnet50V1FpnFeatureExtractor,
+ 'ssd_resnet101_v1_fpn':
+ ssd_resnet_v1_fpn.SSDResnet101V1FpnFeatureExtractor,
+ 'ssd_resnet152_v1_fpn':
+ ssd_resnet_v1_fpn.SSDResnet152V1FpnFeatureExtractor,
+}
+
+SSD_RESNET_V1_PPN_FEAT_MAPS = {
+ 'ssd_resnet50_v1_ppn':
+ ssd_resnet_v1_ppn.SSDResnet50V1PpnFeatureExtractor,
+ 'ssd_resnet101_v1_ppn':
+ ssd_resnet_v1_ppn.SSDResnet101V1PpnFeatureExtractor,
+ 'ssd_resnet152_v1_ppn':
+ ssd_resnet_v1_ppn.SSDResnet152V1PpnFeatureExtractor
+}
+
+
+class ModelBuilderTest(tf.test.TestCase, parameterized.TestCase):
+
+ def create_model(self, model_config):
+ """Builds a DetectionModel based on the model config.
+
+ Args:
+ model_config: A model.proto object containing the config for the desired
+ DetectionModel.
+
+ Returns:
+ DetectionModel based on the config.
+ """
+ return model_builder.build(model_config, is_training=True)
+
+ def test_create_ssd_inception_v2_model_from_config(self):
+ model_text_proto = """
+ ssd {
+ feature_extractor {
+ type: 'ssd_inception_v2'
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ override_base_feature_extractor_hyperparams: true
+ }
+ box_coder {
+ faster_rcnn_box_coder {
+ }
+ }
+ matcher {
+ argmax_matcher {
+ }
+ }
+ similarity_calculator {
+ iou_similarity {
+ }
+ }
+ anchor_generator {
+ ssd_anchor_generator {
+ aspect_ratios: 1.0
+ }
+ }
+ image_resizer {
+ fixed_shape_resizer {
+ height: 320
+ width: 320
+ }
+ }
+ box_predictor {
+ convolutional_box_predictor {
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ }
+ loss {
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ localization_loss {
+ weighted_smooth_l1 {
+ }
+ }
+ }
+ }"""
+ model_proto = model_pb2.DetectionModel()
+ text_format.Merge(model_text_proto, model_proto)
+ model = self.create_model(model_proto)
+ self.assertIsInstance(model, ssd_meta_arch.SSDMetaArch)
+ self.assertIsInstance(model._feature_extractor,
+ SSDInceptionV2FeatureExtractor)
+ self.assertIsNone(model._expected_loss_weights_fn)
+
+ def test_create_ssd_inception_v3_model_from_config(self):
+ model_text_proto = """
+ ssd {
+ feature_extractor {
+ type: 'ssd_inception_v3'
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ override_base_feature_extractor_hyperparams: true
+ }
+ box_coder {
+ faster_rcnn_box_coder {
+ }
+ }
+ matcher {
+ argmax_matcher {
+ }
+ }
+ similarity_calculator {
+ iou_similarity {
+ }
+ }
+ anchor_generator {
+ ssd_anchor_generator {
+ aspect_ratios: 1.0
+ }
+ }
+ image_resizer {
+ fixed_shape_resizer {
+ height: 320
+ width: 320
+ }
+ }
+ box_predictor {
+ convolutional_box_predictor {
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ }
+ loss {
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ localization_loss {
+ weighted_smooth_l1 {
+ }
+ }
+ }
+ }"""
+ model_proto = model_pb2.DetectionModel()
+ text_format.Merge(model_text_proto, model_proto)
+ model = self.create_model(model_proto)
+ self.assertIsInstance(model, ssd_meta_arch.SSDMetaArch)
+ self.assertIsInstance(model._feature_extractor,
+ SSDInceptionV3FeatureExtractor)
+
+ def test_create_ssd_resnet_v1_fpn_model_from_config(self):
+ model_text_proto = """
+ ssd {
+ feature_extractor {
+ type: 'ssd_resnet50_v1_fpn'
+ fpn {
+ min_level: 3
+ max_level: 7
+ }
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ box_coder {
+ faster_rcnn_box_coder {
+ }
+ }
+ matcher {
+ argmax_matcher {
+ }
+ }
+ similarity_calculator {
+ iou_similarity {
+ }
+ }
+ encode_background_as_zeros: true
+ anchor_generator {
+ multiscale_anchor_generator {
+ aspect_ratios: [1.0, 2.0, 0.5]
+ scales_per_octave: 2
+ }
+ }
+ image_resizer {
+ fixed_shape_resizer {
+ height: 320
+ width: 320
+ }
+ }
+ box_predictor {
+ weight_shared_convolutional_box_predictor {
+ depth: 32
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ random_normal_initializer {
+ }
+ }
+ }
+ num_layers_before_predictor: 1
+ }
+ }
+ normalize_loss_by_num_matches: true
+ normalize_loc_loss_by_codesize: true
+ loss {
+ classification_loss {
+ weighted_sigmoid_focal {
+ alpha: 0.25
+ gamma: 2.0
+ }
+ }
+ localization_loss {
+ weighted_smooth_l1 {
+ delta: 0.1
+ }
+ }
+ classification_weight: 1.0
+ localization_weight: 1.0
+ }
+ }"""
+ model_proto = model_pb2.DetectionModel()
+ text_format.Merge(model_text_proto, model_proto)
+
+ for extractor_type, extractor_class in SSD_RESNET_V1_FPN_FEAT_MAPS.items():
+ model_proto.ssd.feature_extractor.type = extractor_type
+ model = model_builder.build(model_proto, is_training=True)
+ self.assertIsInstance(model, ssd_meta_arch.SSDMetaArch)
+ self.assertIsInstance(model._feature_extractor, extractor_class)
+
+ def test_create_ssd_resnet_v1_ppn_model_from_config(self):
+ model_text_proto = """
+ ssd {
+ feature_extractor {
+        type: 'ssd_resnet50_v1_ppn'
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ box_coder {
+ mean_stddev_box_coder {
+ }
+ }
+ matcher {
+ bipartite_matcher {
+ }
+ }
+ similarity_calculator {
+ iou_similarity {
+ }
+ }
+ anchor_generator {
+ ssd_anchor_generator {
+ aspect_ratios: 1.0
+ }
+ }
+ image_resizer {
+ fixed_shape_resizer {
+ height: 320
+ width: 320
+ }
+ }
+ box_predictor {
+ weight_shared_convolutional_box_predictor {
+ depth: 1024
+ class_prediction_bias_init: -4.6
+ conv_hyperparams {
+ activation: RELU_6,
+ regularizer {
+ l2_regularizer {
+ weight: 0.0004
+ }
+ }
+ initializer {
+ variance_scaling_initializer {
+ }
+ }
+ }
+ num_layers_before_predictor: 2
+ kernel_size: 1
+ }
+ }
+ loss {
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ localization_loss {
+ weighted_l2 {
+ }
+ }
+ classification_weight: 1.0
+ localization_weight: 1.0
+ }
+ }"""
+ model_proto = model_pb2.DetectionModel()
+ text_format.Merge(model_text_proto, model_proto)
+
+ for extractor_type, extractor_class in SSD_RESNET_V1_PPN_FEAT_MAPS.items():
+ model_proto.ssd.feature_extractor.type = extractor_type
+ model = model_builder.build(model_proto, is_training=True)
+ self.assertIsInstance(model, ssd_meta_arch.SSDMetaArch)
+ self.assertIsInstance(model._feature_extractor, extractor_class)
+
+ def test_create_ssd_mobilenet_v1_model_from_config(self):
+ model_text_proto = """
+ ssd {
+ freeze_batchnorm: true
+ inplace_batchnorm_update: true
+ feature_extractor {
+ type: 'ssd_mobilenet_v1'
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ box_coder {
+ faster_rcnn_box_coder {
+ }
+ }
+ matcher {
+ argmax_matcher {
+ }
+ }
+ similarity_calculator {
+ iou_similarity {
+ }
+ }
+ anchor_generator {
+ ssd_anchor_generator {
+ aspect_ratios: 1.0
+ }
+ }
+ image_resizer {
+ fixed_shape_resizer {
+ height: 320
+ width: 320
+ }
+ }
+ box_predictor {
+ convolutional_box_predictor {
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ }
+ normalize_loc_loss_by_codesize: true
+ loss {
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ localization_loss {
+ weighted_smooth_l1 {
+ }
+ }
+ }
+ }"""
+ model_proto = model_pb2.DetectionModel()
+ text_format.Merge(model_text_proto, model_proto)
+ model = self.create_model(model_proto)
+ self.assertIsInstance(model, ssd_meta_arch.SSDMetaArch)
+ self.assertIsInstance(model._feature_extractor,
+ SSDMobileNetV1FeatureExtractor)
+ self.assertTrue(model._normalize_loc_loss_by_codesize)
+ self.assertTrue(model._freeze_batchnorm)
+ self.assertTrue(model._inplace_batchnorm_update)
+
+ def test_create_ssd_mobilenet_v1_fpn_model_from_config(self):
+ model_text_proto = """
+ ssd {
+ freeze_batchnorm: true
+ inplace_batchnorm_update: true
+ feature_extractor {
+ type: 'ssd_mobilenet_v1_fpn'
+ fpn {
+ min_level: 3
+ max_level: 7
+ }
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ box_coder {
+ faster_rcnn_box_coder {
+ }
+ }
+ matcher {
+ argmax_matcher {
+ }
+ }
+ similarity_calculator {
+ iou_similarity {
+ }
+ }
+ anchor_generator {
+ ssd_anchor_generator {
+ aspect_ratios: 1.0
+ }
+ }
+ image_resizer {
+ fixed_shape_resizer {
+ height: 320
+ width: 320
+ }
+ }
+ box_predictor {
+ convolutional_box_predictor {
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ }
+ normalize_loc_loss_by_codesize: true
+ loss {
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ localization_loss {
+ weighted_smooth_l1 {
+ }
+ }
+ }
+ }"""
+ model_proto = model_pb2.DetectionModel()
+ text_format.Merge(model_text_proto, model_proto)
+ model = self.create_model(model_proto)
+ self.assertIsInstance(model, ssd_meta_arch.SSDMetaArch)
+ self.assertIsInstance(model._feature_extractor,
+ SSDMobileNetV1FpnFeatureExtractor)
+ self.assertTrue(model._normalize_loc_loss_by_codesize)
+ self.assertTrue(model._freeze_batchnorm)
+ self.assertTrue(model._inplace_batchnorm_update)
+
+ def test_create_ssd_mobilenet_v1_ppn_model_from_config(self):
+ model_text_proto = """
+ ssd {
+ freeze_batchnorm: true
+ inplace_batchnorm_update: true
+ feature_extractor {
+ type: 'ssd_mobilenet_v1_ppn'
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ box_coder {
+ faster_rcnn_box_coder {
+ }
+ }
+ matcher {
+ argmax_matcher {
+ }
+ }
+ similarity_calculator {
+ iou_similarity {
+ }
+ }
+ anchor_generator {
+ ssd_anchor_generator {
+ aspect_ratios: 1.0
+ }
+ }
+ image_resizer {
+ fixed_shape_resizer {
+ height: 320
+ width: 320
+ }
+ }
+ box_predictor {
+ convolutional_box_predictor {
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ }
+ normalize_loc_loss_by_codesize: true
+ loss {
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ localization_loss {
+ weighted_smooth_l1 {
+ }
+ }
+ }
+ }"""
+ model_proto = model_pb2.DetectionModel()
+ text_format.Merge(model_text_proto, model_proto)
+ model = self.create_model(model_proto)
+ self.assertIsInstance(model, ssd_meta_arch.SSDMetaArch)
+ self.assertIsInstance(model._feature_extractor,
+ SSDMobileNetV1PpnFeatureExtractor)
+ self.assertTrue(model._normalize_loc_loss_by_codesize)
+ self.assertTrue(model._freeze_batchnorm)
+ self.assertTrue(model._inplace_batchnorm_update)
+
+ def test_create_ssd_mobilenet_v2_model_from_config(self):
+ model_text_proto = """
+ ssd {
+ feature_extractor {
+ type: 'ssd_mobilenet_v2'
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ box_coder {
+ faster_rcnn_box_coder {
+ }
+ }
+ matcher {
+ argmax_matcher {
+ }
+ }
+ similarity_calculator {
+ iou_similarity {
+ }
+ }
+ anchor_generator {
+ ssd_anchor_generator {
+ aspect_ratios: 1.0
+ }
+ }
+ image_resizer {
+ fixed_shape_resizer {
+ height: 320
+ width: 320
+ }
+ }
+ box_predictor {
+ convolutional_box_predictor {
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ }
+ normalize_loc_loss_by_codesize: true
+ loss {
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ localization_loss {
+ weighted_smooth_l1 {
+ }
+ }
+ }
+ }"""
+ model_proto = model_pb2.DetectionModel()
+ text_format.Merge(model_text_proto, model_proto)
+ model = self.create_model(model_proto)
+ self.assertIsInstance(model, ssd_meta_arch.SSDMetaArch)
+ self.assertIsInstance(model._feature_extractor,
+ SSDMobileNetV2FeatureExtractor)
+ self.assertIsInstance(model._box_predictor,
+ convolutional_box_predictor.ConvolutionalBoxPredictor)
+ self.assertTrue(model._normalize_loc_loss_by_codesize)
+
+ def test_create_ssd_mobilenet_v2_keras_model_from_config(self):
+ model_text_proto = """
+ ssd {
+ feature_extractor {
+ type: 'ssd_mobilenet_v2_keras'
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ box_coder {
+ faster_rcnn_box_coder {
+ }
+ }
+ matcher {
+ argmax_matcher {
+ }
+ }
+ similarity_calculator {
+ iou_similarity {
+ }
+ }
+ anchor_generator {
+ ssd_anchor_generator {
+ aspect_ratios: 1.0
+ }
+ }
+ image_resizer {
+ fixed_shape_resizer {
+ height: 320
+ width: 320
+ }
+ }
+ box_predictor {
+ convolutional_box_predictor {
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ }
+ normalize_loc_loss_by_codesize: true
+ loss {
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ localization_loss {
+ weighted_smooth_l1 {
+ }
+ }
+ }
+ }"""
+ model_proto = model_pb2.DetectionModel()
+ text_format.Merge(model_text_proto, model_proto)
+ model = self.create_model(model_proto)
+ self.assertIsInstance(model, ssd_meta_arch.SSDMetaArch)
+ self.assertIsInstance(model._feature_extractor,
+ SSDMobileNetV2KerasFeatureExtractor)
+ self.assertIsInstance(
+ model._box_predictor,
+ convolutional_keras_box_predictor.ConvolutionalBoxPredictor)
+ self.assertTrue(model._normalize_loc_loss_by_codesize)
+
+ def test_create_ssd_mobilenet_v2_fpn_model_from_config(self):
+ model_text_proto = """
+ ssd {
+ freeze_batchnorm: true
+ inplace_batchnorm_update: true
+ feature_extractor {
+ type: 'ssd_mobilenet_v2_fpn'
+ fpn {
+ min_level: 3
+ max_level: 7
+ }
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ box_coder {
+ faster_rcnn_box_coder {
+ }
+ }
+ matcher {
+ argmax_matcher {
+ }
+ }
+ similarity_calculator {
+ iou_similarity {
+ }
+ }
+ anchor_generator {
+ ssd_anchor_generator {
+ aspect_ratios: 1.0
+ }
+ }
+ image_resizer {
+ fixed_shape_resizer {
+ height: 320
+ width: 320
+ }
+ }
+ box_predictor {
+ convolutional_box_predictor {
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ }
+ normalize_loc_loss_by_codesize: true
+ loss {
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ localization_loss {
+ weighted_smooth_l1 {
+ }
+ }
+ }
+ }"""
+ model_proto = model_pb2.DetectionModel()
+ text_format.Merge(model_text_proto, model_proto)
+ model = self.create_model(model_proto)
+ self.assertIsInstance(model, ssd_meta_arch.SSDMetaArch)
+ self.assertIsInstance(model._feature_extractor,
+ SSDMobileNetV2FpnFeatureExtractor)
+ self.assertTrue(model._normalize_loc_loss_by_codesize)
+ self.assertTrue(model._freeze_batchnorm)
+ self.assertTrue(model._inplace_batchnorm_update)
+
+ def test_create_ssd_mobilenet_v2_fpnlite_model_from_config(self):
+ model_text_proto = """
+ ssd {
+ freeze_batchnorm: true
+ inplace_batchnorm_update: true
+ feature_extractor {
+ type: 'ssd_mobilenet_v2_fpn'
+ use_depthwise: true
+ fpn {
+ min_level: 3
+ max_level: 7
+ additional_layer_depth: 128
+ }
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ box_coder {
+ faster_rcnn_box_coder {
+ }
+ }
+ matcher {
+ argmax_matcher {
+ }
+ }
+ similarity_calculator {
+ iou_similarity {
+ }
+ }
+ anchor_generator {
+ ssd_anchor_generator {
+ aspect_ratios: 1.0
+ }
+ }
+ image_resizer {
+ fixed_shape_resizer {
+ height: 320
+ width: 320
+ }
+ }
+ box_predictor {
+ convolutional_box_predictor {
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ }
+ normalize_loc_loss_by_codesize: true
+ loss {
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ localization_loss {
+ weighted_smooth_l1 {
+ }
+ }
+ }
+ }"""
+ model_proto = model_pb2.DetectionModel()
+ text_format.Merge(model_text_proto, model_proto)
+ model = self.create_model(model_proto)
+ self.assertIsInstance(model, ssd_meta_arch.SSDMetaArch)
+ self.assertIsInstance(model._feature_extractor,
+ SSDMobileNetV2FpnFeatureExtractor)
+ self.assertTrue(model._normalize_loc_loss_by_codesize)
+ self.assertTrue(model._freeze_batchnorm)
+ self.assertTrue(model._inplace_batchnorm_update)
+
+ def test_create_embedded_ssd_mobilenet_v1_model_from_config(self):
+ model_text_proto = """
+ ssd {
+ feature_extractor {
+ type: 'embedded_ssd_mobilenet_v1'
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ box_coder {
+ faster_rcnn_box_coder {
+ }
+ }
+ matcher {
+ argmax_matcher {
+ }
+ }
+ similarity_calculator {
+ iou_similarity {
+ }
+ }
+ anchor_generator {
+ ssd_anchor_generator {
+ aspect_ratios: 1.0
+ }
+ }
+ image_resizer {
+ fixed_shape_resizer {
+ height: 256
+ width: 256
+ }
+ }
+ box_predictor {
+ convolutional_box_predictor {
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ }
+ loss {
+ classification_loss {
+ weighted_softmax {
+ }
+ }
+ localization_loss {
+ weighted_smooth_l1 {
+ }
+ }
+ }
+ }"""
+ model_proto = model_pb2.DetectionModel()
+ text_format.Merge(model_text_proto, model_proto)
+ model = self.create_model(model_proto)
+ self.assertIsInstance(model, ssd_meta_arch.SSDMetaArch)
+ self.assertIsInstance(model._feature_extractor,
+ EmbeddedSSDMobileNetV1FeatureExtractor)
+
+ def test_create_faster_rcnn_resnet_v1_models_from_config(self):
+ model_text_proto = """
+ faster_rcnn {
+ inplace_batchnorm_update: false
+ num_classes: 3
+ image_resizer {
+ keep_aspect_ratio_resizer {
+ min_dimension: 600
+ max_dimension: 1024
+ }
+ }
+ feature_extractor {
+ type: 'faster_rcnn_resnet101'
+ }
+ first_stage_anchor_generator {
+ grid_anchor_generator {
+ scales: [0.25, 0.5, 1.0, 2.0]
+ aspect_ratios: [0.5, 1.0, 2.0]
+ height_stride: 16
+ width_stride: 16
+ }
+ }
+ first_stage_box_predictor_conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ initial_crop_size: 14
+ maxpool_kernel_size: 2
+ maxpool_stride: 2
+ second_stage_box_predictor {
+ mask_rcnn_box_predictor {
+ fc_hyperparams {
+ op: FC
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ }
+ second_stage_post_processing {
+ batch_non_max_suppression {
+ score_threshold: 0.01
+ iou_threshold: 0.6
+ max_detections_per_class: 100
+ max_total_detections: 300
+ }
+ score_converter: SOFTMAX
+ }
+ }"""
+ model_proto = model_pb2.DetectionModel()
+ text_format.Merge(model_text_proto, model_proto)
+
+ for extractor_type, extractor_class in FRCNN_RESNET_FEAT_MAPS.items():
+ model_proto.faster_rcnn.feature_extractor.type = extractor_type
+ model = model_builder.build(model_proto, is_training=True)
+ self.assertIsInstance(model, faster_rcnn_meta_arch.FasterRCNNMetaArch)
+ self.assertIsInstance(model._feature_extractor, extractor_class)
+
+ @parameterized.parameters(
+ {'use_matmul_crop_and_resize': False},
+ {'use_matmul_crop_and_resize': True},
+ )
+ def test_create_faster_rcnn_resnet101_with_mask_prediction_enabled(
+ self, use_matmul_crop_and_resize):
+ model_text_proto = """
+ faster_rcnn {
+ num_classes: 3
+ image_resizer {
+ keep_aspect_ratio_resizer {
+ min_dimension: 600
+ max_dimension: 1024
+ }
+ }
+ feature_extractor {
+ type: 'faster_rcnn_resnet101'
+ }
+ first_stage_anchor_generator {
+ grid_anchor_generator {
+ scales: [0.25, 0.5, 1.0, 2.0]
+ aspect_ratios: [0.5, 1.0, 2.0]
+ height_stride: 16
+ width_stride: 16
+ }
+ }
+ first_stage_box_predictor_conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ initial_crop_size: 14
+ maxpool_kernel_size: 2
+ maxpool_stride: 2
+ second_stage_box_predictor {
+ mask_rcnn_box_predictor {
+ fc_hyperparams {
+ op: FC
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ predict_instance_masks: true
+ }
+ }
+ second_stage_mask_prediction_loss_weight: 3.0
+ second_stage_post_processing {
+ batch_non_max_suppression {
+ score_threshold: 0.01
+ iou_threshold: 0.6
+ max_detections_per_class: 100
+ max_total_detections: 300
+ }
+ score_converter: SOFTMAX
+ }
+ }"""
+ model_proto = model_pb2.DetectionModel()
+ text_format.Merge(model_text_proto, model_proto)
+ model_proto.faster_rcnn.use_matmul_crop_and_resize = (
+ use_matmul_crop_and_resize)
+ model = model_builder.build(model_proto, is_training=True)
+ self.assertAlmostEqual(model._second_stage_mask_loss_weight, 3.0)
+
+ def test_create_faster_rcnn_nas_model_from_config(self):
+ model_text_proto = """
+ faster_rcnn {
+ num_classes: 3
+ image_resizer {
+ keep_aspect_ratio_resizer {
+ min_dimension: 600
+ max_dimension: 1024
+ }
+ }
+ feature_extractor {
+ type: 'faster_rcnn_nas'
+ }
+ first_stage_anchor_generator {
+ grid_anchor_generator {
+ scales: [0.25, 0.5, 1.0, 2.0]
+ aspect_ratios: [0.5, 1.0, 2.0]
+ height_stride: 16
+ width_stride: 16
+ }
+ }
+ first_stage_box_predictor_conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ initial_crop_size: 17
+ maxpool_kernel_size: 1
+ maxpool_stride: 1
+ second_stage_box_predictor {
+ mask_rcnn_box_predictor {
+ fc_hyperparams {
+ op: FC
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ }
+ second_stage_post_processing {
+ batch_non_max_suppression {
+ score_threshold: 0.01
+ iou_threshold: 0.6
+ max_detections_per_class: 100
+ max_total_detections: 300
+ }
+ score_converter: SOFTMAX
+ }
+ }"""
+ model_proto = model_pb2.DetectionModel()
+ text_format.Merge(model_text_proto, model_proto)
+ model = model_builder.build(model_proto, is_training=True)
+ self.assertIsInstance(model, faster_rcnn_meta_arch.FasterRCNNMetaArch)
+ self.assertIsInstance(
+ model._feature_extractor,
+ frcnn_nas.FasterRCNNNASFeatureExtractor)
+
+ def test_create_faster_rcnn_pnas_model_from_config(self):
+ model_text_proto = """
+ faster_rcnn {
+ num_classes: 3
+ image_resizer {
+ keep_aspect_ratio_resizer {
+ min_dimension: 600
+ max_dimension: 1024
+ }
+ }
+ feature_extractor {
+ type: 'faster_rcnn_pnas'
+ }
+ first_stage_anchor_generator {
+ grid_anchor_generator {
+ scales: [0.25, 0.5, 1.0, 2.0]
+ aspect_ratios: [0.5, 1.0, 2.0]
+ height_stride: 16
+ width_stride: 16
+ }
+ }
+ first_stage_box_predictor_conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ initial_crop_size: 17
+ maxpool_kernel_size: 1
+ maxpool_stride: 1
+ second_stage_box_predictor {
+ mask_rcnn_box_predictor {
+ fc_hyperparams {
+ op: FC
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ }
+ second_stage_post_processing {
+ batch_non_max_suppression {
+ score_threshold: 0.01
+ iou_threshold: 0.6
+ max_detections_per_class: 100
+ max_total_detections: 300
+ }
+ score_converter: SOFTMAX
+ }
+ }"""
+ model_proto = model_pb2.DetectionModel()
+ text_format.Merge(model_text_proto, model_proto)
+ model = model_builder.build(model_proto, is_training=True)
+ self.assertIsInstance(model, faster_rcnn_meta_arch.FasterRCNNMetaArch)
+ self.assertIsInstance(
+ model._feature_extractor,
+ frcnn_pnas.FasterRCNNPNASFeatureExtractor)
+
+ def test_create_faster_rcnn_inception_resnet_v2_model_from_config(self):
+ model_text_proto = """
+ faster_rcnn {
+ num_classes: 3
+ image_resizer {
+ keep_aspect_ratio_resizer {
+ min_dimension: 600
+ max_dimension: 1024
+ }
+ }
+ feature_extractor {
+ type: 'faster_rcnn_inception_resnet_v2'
+ }
+ first_stage_anchor_generator {
+ grid_anchor_generator {
+ scales: [0.25, 0.5, 1.0, 2.0]
+ aspect_ratios: [0.5, 1.0, 2.0]
+ height_stride: 16
+ width_stride: 16
+ }
+ }
+ first_stage_box_predictor_conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ initial_crop_size: 17
+ maxpool_kernel_size: 1
+ maxpool_stride: 1
+ second_stage_box_predictor {
+ mask_rcnn_box_predictor {
+ fc_hyperparams {
+ op: FC
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ }
+ second_stage_post_processing {
+ batch_non_max_suppression {
+ score_threshold: 0.01
+ iou_threshold: 0.6
+ max_detections_per_class: 100
+ max_total_detections: 300
+ }
+ score_converter: SOFTMAX
+ }
+ }"""
+ model_proto = model_pb2.DetectionModel()
+ text_format.Merge(model_text_proto, model_proto)
+ model = model_builder.build(model_proto, is_training=True)
+ self.assertIsInstance(model, faster_rcnn_meta_arch.FasterRCNNMetaArch)
+ self.assertIsInstance(
+ model._feature_extractor,
+ frcnn_inc_res.FasterRCNNInceptionResnetV2FeatureExtractor)
+
+ def test_create_faster_rcnn_inception_v2_model_from_config(self):
+ model_text_proto = """
+ faster_rcnn {
+ num_classes: 3
+ image_resizer {
+ keep_aspect_ratio_resizer {
+ min_dimension: 600
+ max_dimension: 1024
+ }
+ }
+ feature_extractor {
+ type: 'faster_rcnn_inception_v2'
+ }
+ first_stage_anchor_generator {
+ grid_anchor_generator {
+ scales: [0.25, 0.5, 1.0, 2.0]
+ aspect_ratios: [0.5, 1.0, 2.0]
+ height_stride: 16
+ width_stride: 16
+ }
+ }
+ first_stage_box_predictor_conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ initial_crop_size: 14
+ maxpool_kernel_size: 2
+ maxpool_stride: 2
+ second_stage_box_predictor {
+ mask_rcnn_box_predictor {
+ fc_hyperparams {
+ op: FC
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ }
+ second_stage_post_processing {
+ batch_non_max_suppression {
+ score_threshold: 0.01
+ iou_threshold: 0.6
+ max_detections_per_class: 100
+ max_total_detections: 300
+ }
+ score_converter: SOFTMAX
+ }
+ }"""
+ model_proto = model_pb2.DetectionModel()
+ text_format.Merge(model_text_proto, model_proto)
+ model = model_builder.build(model_proto, is_training=True)
+ self.assertIsInstance(model, faster_rcnn_meta_arch.FasterRCNNMetaArch)
+ self.assertIsInstance(model._feature_extractor,
+ frcnn_inc_v2.FasterRCNNInceptionV2FeatureExtractor)
+
+ def test_create_faster_rcnn_model_from_config_with_example_miner(self):
+ model_text_proto = """
+ faster_rcnn {
+ num_classes: 3
+ feature_extractor {
+ type: 'faster_rcnn_inception_resnet_v2'
+ }
+ image_resizer {
+ keep_aspect_ratio_resizer {
+ min_dimension: 600
+ max_dimension: 1024
+ }
+ }
+ first_stage_anchor_generator {
+ grid_anchor_generator {
+ scales: [0.25, 0.5, 1.0, 2.0]
+ aspect_ratios: [0.5, 1.0, 2.0]
+ height_stride: 16
+ width_stride: 16
+ }
+ }
+ first_stage_box_predictor_conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ second_stage_box_predictor {
+ mask_rcnn_box_predictor {
+ fc_hyperparams {
+ op: FC
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ }
+ hard_example_miner {
+ num_hard_examples: 10
+ iou_threshold: 0.99
+ }
+ }"""
+ model_proto = model_pb2.DetectionModel()
+ text_format.Merge(model_text_proto, model_proto)
+ model = model_builder.build(model_proto, is_training=True)
+ self.assertIsNotNone(model._hard_example_miner)
+
+ def test_create_rfcn_resnet_v1_model_from_config(self):
+ model_text_proto = """
+ faster_rcnn {
+ num_classes: 3
+ image_resizer {
+ keep_aspect_ratio_resizer {
+ min_dimension: 600
+ max_dimension: 1024
+ }
+ }
+ feature_extractor {
+ type: 'faster_rcnn_resnet101'
+ }
+ first_stage_anchor_generator {
+ grid_anchor_generator {
+ scales: [0.25, 0.5, 1.0, 2.0]
+ aspect_ratios: [0.5, 1.0, 2.0]
+ height_stride: 16
+ width_stride: 16
+ }
+ }
+ first_stage_box_predictor_conv_hyperparams {
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ initial_crop_size: 14
+ maxpool_kernel_size: 2
+ maxpool_stride: 2
+ second_stage_box_predictor {
+ rfcn_box_predictor {
+ conv_hyperparams {
+ op: CONV
+ regularizer {
+ l2_regularizer {
+ }
+ }
+ initializer {
+ truncated_normal_initializer {
+ }
+ }
+ }
+ }
+ }
+ second_stage_post_processing {
+ batch_non_max_suppression {
+ score_threshold: 0.01
+ iou_threshold: 0.6
+ max_detections_per_class: 100
+ max_total_detections: 300
+ }
+ score_converter: SOFTMAX
+ }
+ }"""
+ model_proto = model_pb2.DetectionModel()
+ text_format.Merge(model_text_proto, model_proto)
+ for extractor_type, extractor_class in FRCNN_RESNET_FEAT_MAPS.items():
+ model_proto.faster_rcnn.feature_extractor.type = extractor_type
+ model = model_builder.build(model_proto, is_training=True)
+ self.assertIsInstance(model, rfcn_meta_arch.RFCNMetaArch)
+ self.assertIsInstance(model._feature_extractor, extractor_class)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/optimizer_builder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/optimizer_builder.py"
new file mode 100644
index 00000000..ce64bfe6
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/optimizer_builder.py"
@@ -0,0 +1,127 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Functions to build DetectionModel training optimizers."""
+
+import tensorflow as tf
+from object_detection.utils import learning_schedules
+
+
+def build(optimizer_config):
+ """Create optimizer based on config.
+
+ Args:
+    optimizer_config: An Optimizer proto message.
+
+ Returns:
+ An optimizer and a list of variables for summary.
+
+ Raises:
+    ValueError: when the optimizer type is unsupported.
+ """
+ optimizer_type = optimizer_config.WhichOneof('optimizer')
+ optimizer = None
+
+ summary_vars = []
+ if optimizer_type == 'rms_prop_optimizer':
+ config = optimizer_config.rms_prop_optimizer
+ learning_rate = _create_learning_rate(config.learning_rate)
+ summary_vars.append(learning_rate)
+ optimizer = tf.train.RMSPropOptimizer(
+ learning_rate,
+ decay=config.decay,
+ momentum=config.momentum_optimizer_value,
+ epsilon=config.epsilon)
+
+ if optimizer_type == 'momentum_optimizer':
+ config = optimizer_config.momentum_optimizer
+ learning_rate = _create_learning_rate(config.learning_rate)
+ summary_vars.append(learning_rate)
+ optimizer = tf.train.MomentumOptimizer(
+ learning_rate,
+ momentum=config.momentum_optimizer_value)
+
+ if optimizer_type == 'adam_optimizer':
+ config = optimizer_config.adam_optimizer
+ learning_rate = _create_learning_rate(config.learning_rate)
+ summary_vars.append(learning_rate)
+ optimizer = tf.train.AdamOptimizer(learning_rate)
+
+ if optimizer is None:
+ raise ValueError('Optimizer %s not supported.' % optimizer_type)
+
+ if optimizer_config.use_moving_average:
+ optimizer = tf.contrib.opt.MovingAverageOptimizer(
+ optimizer, average_decay=optimizer_config.moving_average_decay)
+
+ return optimizer, summary_vars
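+
+# Illustrative usage (a minimal sketch, not part of the original module): the
+# builder is normally driven from a text-format Optimizer proto, mirroring the
+# cases in optimizer_builder_test.py below, e.g.
+#
+#   from google.protobuf import text_format
+#   from object_detection.protos import optimizer_pb2
+#
+#   proto = optimizer_pb2.Optimizer()
+#   text_format.Merge("""
+#     momentum_optimizer {
+#       learning_rate { constant_learning_rate { learning_rate: 0.001 } }
+#       momentum_optimizer_value: 0.9
+#     }
+#     use_moving_average: false""", proto)
+#   optimizer, summary_vars = build(proto)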
+
+
+def _create_learning_rate(learning_rate_config):
+ """Create optimizer learning rate based on config.
+
+ Args:
+ learning_rate_config: A LearningRate proto message.
+
+ Returns:
+ A learning rate.
+
+ Raises:
+    ValueError: when the learning rate type is unsupported.
+ """
+ learning_rate = None
+ learning_rate_type = learning_rate_config.WhichOneof('learning_rate')
+ if learning_rate_type == 'constant_learning_rate':
+ config = learning_rate_config.constant_learning_rate
+ learning_rate = tf.constant(config.learning_rate, dtype=tf.float32,
+ name='learning_rate')
+
+ if learning_rate_type == 'exponential_decay_learning_rate':
+ config = learning_rate_config.exponential_decay_learning_rate
+ learning_rate = learning_schedules.exponential_decay_with_burnin(
+ tf.train.get_or_create_global_step(),
+ config.initial_learning_rate,
+ config.decay_steps,
+ config.decay_factor,
+ burnin_learning_rate=config.burnin_learning_rate,
+ burnin_steps=config.burnin_steps,
+ min_learning_rate=config.min_learning_rate,
+ staircase=config.staircase)
+
+ if learning_rate_type == 'manual_step_learning_rate':
+ config = learning_rate_config.manual_step_learning_rate
+ if not config.schedule:
+ raise ValueError('Empty learning rate schedule.')
+ learning_rate_step_boundaries = [x.step for x in config.schedule]
+ learning_rate_sequence = [config.initial_learning_rate]
+ learning_rate_sequence += [x.learning_rate for x in config.schedule]
+ learning_rate = learning_schedules.manual_stepping(
+ tf.train.get_or_create_global_step(), learning_rate_step_boundaries,
+ learning_rate_sequence, config.warmup)
+
+ if learning_rate_type == 'cosine_decay_learning_rate':
+ config = learning_rate_config.cosine_decay_learning_rate
+ learning_rate = learning_schedules.cosine_decay_with_warmup(
+ tf.train.get_or_create_global_step(),
+ config.learning_rate_base,
+ config.total_steps,
+ config.warmup_learning_rate,
+ config.warmup_steps,
+ config.hold_base_rate_steps)
+
+ if learning_rate is None:
+ raise ValueError('Learning_rate %s not supported.' % learning_rate_type)
+
+ return learning_rate
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/optimizer_builder_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/optimizer_builder_test.py"
new file mode 100644
index 00000000..343a858f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/optimizer_builder_test.py"
@@ -0,0 +1,208 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for optimizer_builder."""
+
+import tensorflow as tf
+
+from google.protobuf import text_format
+
+from object_detection.builders import optimizer_builder
+from object_detection.protos import optimizer_pb2
+
+
+class LearningRateBuilderTest(tf.test.TestCase):
+
+ def testBuildConstantLearningRate(self):
+ learning_rate_text_proto = """
+ constant_learning_rate {
+ learning_rate: 0.004
+ }
+ """
+ learning_rate_proto = optimizer_pb2.LearningRate()
+ text_format.Merge(learning_rate_text_proto, learning_rate_proto)
+ learning_rate = optimizer_builder._create_learning_rate(
+ learning_rate_proto)
+ self.assertTrue(learning_rate.op.name.endswith('learning_rate'))
+ with self.test_session():
+ learning_rate_out = learning_rate.eval()
+ self.assertAlmostEqual(learning_rate_out, 0.004)
+
+ def testBuildExponentialDecayLearningRate(self):
+ learning_rate_text_proto = """
+ exponential_decay_learning_rate {
+ initial_learning_rate: 0.004
+ decay_steps: 99999
+ decay_factor: 0.85
+ staircase: false
+ }
+ """
+ learning_rate_proto = optimizer_pb2.LearningRate()
+ text_format.Merge(learning_rate_text_proto, learning_rate_proto)
+ learning_rate = optimizer_builder._create_learning_rate(
+ learning_rate_proto)
+ self.assertTrue(learning_rate.op.name.endswith('learning_rate'))
+ self.assertTrue(isinstance(learning_rate, tf.Tensor))
+
+ def testBuildManualStepLearningRate(self):
+ learning_rate_text_proto = """
+ manual_step_learning_rate {
+ initial_learning_rate: 0.002
+ schedule {
+ step: 100
+ learning_rate: 0.006
+ }
+ schedule {
+ step: 90000
+ learning_rate: 0.00006
+ }
+ warmup: true
+ }
+ """
+ learning_rate_proto = optimizer_pb2.LearningRate()
+ text_format.Merge(learning_rate_text_proto, learning_rate_proto)
+ learning_rate = optimizer_builder._create_learning_rate(
+ learning_rate_proto)
+ self.assertTrue(isinstance(learning_rate, tf.Tensor))
+
+ def testBuildCosineDecayLearningRate(self):
+ learning_rate_text_proto = """
+ cosine_decay_learning_rate {
+ learning_rate_base: 0.002
+ total_steps: 20000
+ warmup_learning_rate: 0.0001
+ warmup_steps: 1000
+ hold_base_rate_steps: 20000
+ }
+ """
+ learning_rate_proto = optimizer_pb2.LearningRate()
+ text_format.Merge(learning_rate_text_proto, learning_rate_proto)
+ learning_rate = optimizer_builder._create_learning_rate(
+ learning_rate_proto)
+ self.assertTrue(isinstance(learning_rate, tf.Tensor))
+
+ def testRaiseErrorOnEmptyLearningRate(self):
+ learning_rate_text_proto = """
+ """
+ learning_rate_proto = optimizer_pb2.LearningRate()
+ text_format.Merge(learning_rate_text_proto, learning_rate_proto)
+ with self.assertRaises(ValueError):
+ optimizer_builder._create_learning_rate(learning_rate_proto)
+
+
+class OptimizerBuilderTest(tf.test.TestCase):
+
+ def testBuildRMSPropOptimizer(self):
+ optimizer_text_proto = """
+ rms_prop_optimizer: {
+ learning_rate: {
+ exponential_decay_learning_rate {
+ initial_learning_rate: 0.004
+ decay_steps: 800720
+ decay_factor: 0.95
+ }
+ }
+ momentum_optimizer_value: 0.9
+ decay: 0.9
+ epsilon: 1.0
+ }
+ use_moving_average: false
+ """
+ optimizer_proto = optimizer_pb2.Optimizer()
+ text_format.Merge(optimizer_text_proto, optimizer_proto)
+ optimizer, _ = optimizer_builder.build(optimizer_proto)
+ self.assertTrue(isinstance(optimizer, tf.train.RMSPropOptimizer))
+
+ def testBuildMomentumOptimizer(self):
+ optimizer_text_proto = """
+ momentum_optimizer: {
+ learning_rate: {
+ constant_learning_rate {
+ learning_rate: 0.001
+ }
+ }
+ momentum_optimizer_value: 0.99
+ }
+ use_moving_average: false
+ """
+ optimizer_proto = optimizer_pb2.Optimizer()
+ text_format.Merge(optimizer_text_proto, optimizer_proto)
+ optimizer, _ = optimizer_builder.build(optimizer_proto)
+ self.assertTrue(isinstance(optimizer, tf.train.MomentumOptimizer))
+
+ def testBuildAdamOptimizer(self):
+ optimizer_text_proto = """
+ adam_optimizer: {
+ learning_rate: {
+ constant_learning_rate {
+ learning_rate: 0.002
+ }
+ }
+ }
+ use_moving_average: false
+ """
+ optimizer_proto = optimizer_pb2.Optimizer()
+ text_format.Merge(optimizer_text_proto, optimizer_proto)
+ optimizer, _ = optimizer_builder.build(optimizer_proto)
+ self.assertTrue(isinstance(optimizer, tf.train.AdamOptimizer))
+
+ def testBuildMovingAverageOptimizer(self):
+ optimizer_text_proto = """
+ adam_optimizer: {
+ learning_rate: {
+ constant_learning_rate {
+ learning_rate: 0.002
+ }
+ }
+ }
+ use_moving_average: True
+ """
+ optimizer_proto = optimizer_pb2.Optimizer()
+ text_format.Merge(optimizer_text_proto, optimizer_proto)
+ optimizer, _ = optimizer_builder.build(optimizer_proto)
+ self.assertTrue(
+ isinstance(optimizer, tf.contrib.opt.MovingAverageOptimizer))
+
+ def testBuildMovingAverageOptimizerWithNonDefaultDecay(self):
+ optimizer_text_proto = """
+ adam_optimizer: {
+ learning_rate: {
+ constant_learning_rate {
+ learning_rate: 0.002
+ }
+ }
+ }
+ use_moving_average: True
+ moving_average_decay: 0.2
+ """
+ optimizer_proto = optimizer_pb2.Optimizer()
+ text_format.Merge(optimizer_text_proto, optimizer_proto)
+ optimizer, _ = optimizer_builder.build(optimizer_proto)
+ self.assertTrue(
+ isinstance(optimizer, tf.contrib.opt.MovingAverageOptimizer))
+ # TODO(rathodv): Find a way to not depend on the private members.
+ self.assertAlmostEqual(optimizer._ema._decay, 0.2)
+
+ def testBuildEmptyOptimizer(self):
+ optimizer_text_proto = """
+ """
+ optimizer_proto = optimizer_pb2.Optimizer()
+ text_format.Merge(optimizer_text_proto, optimizer_proto)
+ with self.assertRaises(ValueError):
+ optimizer_builder.build(optimizer_proto)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/post_processing_builder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/post_processing_builder.py"
new file mode 100644
index 00000000..b48ed7ef
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/post_processing_builder.py"
@@ -0,0 +1,124 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Builder function for post processing operations."""
+import functools
+
+import tensorflow as tf
+from object_detection.core import post_processing
+from object_detection.protos import post_processing_pb2
+
+
+def build(post_processing_config):
+ """Builds callables for post-processing operations.
+
+ Builds callables for non-max suppression and score conversion based on the
+ configuration.
+
+  The non-max suppression callable takes `boxes`, `scores`, and optionally
+  `clip_window`, `parallel_iterations`, `masks`, and `scope` as inputs. It
+  returns `nms_boxes`, `nms_scores`, `nms_classes`, `nms_masks` and
+  `num_detections`. See
+ post_processing.batch_multiclass_non_max_suppression for the type and shape
+ of these tensors.
+
+  The score converter callable should be called with an `input` tensor. The
+  callable returns the output of one of three tf operations, depending on the
+  configuration: tf.identity, tf.sigmoid or tf.nn.softmax. See the tensorflow
+  documentation for
+ argument and return value descriptions.
+
+ Args:
+ post_processing_config: post_processing.proto object containing the
+ parameters for the post-processing operations.
+
+ Returns:
+ non_max_suppressor_fn: Callable for non-max suppression.
+ score_converter_fn: Callable for score conversion.
+
+ Raises:
+ ValueError: if the post_processing_config is of incorrect type.
+ """
+ if not isinstance(post_processing_config, post_processing_pb2.PostProcessing):
+ raise ValueError('post_processing_config not of type '
+                     'post_processing_pb2.PostProcessing.')
+ non_max_suppressor_fn = _build_non_max_suppressor(
+ post_processing_config.batch_non_max_suppression)
+ score_converter_fn = _build_score_converter(
+ post_processing_config.score_converter,
+ post_processing_config.logit_scale)
+ return non_max_suppressor_fn, score_converter_fn
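+
+# Illustrative usage (a hedged sketch, not part of the original module): given
+# a text-format PostProcessing proto, the two callables are obtained as in
+# post_processing_builder_test.py below (text_format from google.protobuf):
+#
+#   config = post_processing_pb2.PostProcessing()
+#   text_format.Merge("""
+#     batch_non_max_suppression {
+#       score_threshold: 0.7
+#       iou_threshold: 0.6
+#       max_detections_per_class: 100
+#       max_total_detections: 300
+#     }
+#     score_converter: SOFTMAX""", config)
+#   nms_fn, score_converter_fn = build(config)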
+
+
+def _build_non_max_suppressor(nms_config):
+  """Builds non-max suppression based on the nms config.
+
+ Args:
+ nms_config: post_processing_pb2.PostProcessing.BatchNonMaxSuppression proto.
+
+ Returns:
+ non_max_suppressor_fn: Callable non-max suppressor.
+
+ Raises:
+ ValueError: On incorrect iou_threshold or on incompatible values of
+ max_total_detections and max_detections_per_class.
+ """
+ if nms_config.iou_threshold < 0 or nms_config.iou_threshold > 1.0:
+ raise ValueError('iou_threshold not in [0, 1.0].')
+ if nms_config.max_detections_per_class > nms_config.max_total_detections:
+ raise ValueError('max_detections_per_class should be no greater than '
+ 'max_total_detections.')
+
+ non_max_suppressor_fn = functools.partial(
+ post_processing.batch_multiclass_non_max_suppression,
+ score_thresh=nms_config.score_threshold,
+ iou_thresh=nms_config.iou_threshold,
+ max_size_per_class=nms_config.max_detections_per_class,
+ max_total_size=nms_config.max_total_detections,
+ use_static_shapes=nms_config.use_static_shapes)
+ return non_max_suppressor_fn
+
+
+def _score_converter_fn_with_logit_scale(tf_score_converter_fn, logit_scale):
+ """Create a function to scale logits then apply a Tensorflow function."""
+ def score_converter_fn(logits):
+ scaled_logits = tf.divide(logits, logit_scale, name='scale_logits')
+ return tf_score_converter_fn(scaled_logits, name='convert_scores')
+ score_converter_fn.__name__ = '%s_with_logit_scale' % (
+ tf_score_converter_fn.__name__)
+ return score_converter_fn
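+
+# For example, wrapping tf.nn.softmax with logit_scale=2.0 yields a callable
+# that divides the logits by 2.0 (a temperature of 2) before applying softmax;
+# with tf.identity and logit_scale=2.0 an input of [1.0, 1.0] becomes
+# [0.5, 0.5], as exercised in post_processing_builder_test.py below.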
+
+
+def _build_score_converter(score_converter_config, logit_scale):
+ """Builds score converter based on the config.
+
+  Builds one of [tf.identity, tf.sigmoid, tf.nn.softmax] score converters based
+  on
+ the config.
+
+ Args:
+ score_converter_config: post_processing_pb2.PostProcessing.score_converter.
+ logit_scale: temperature to use for SOFTMAX score_converter.
+
+ Returns:
+ Callable score converter op.
+
+ Raises:
+ ValueError: On unknown score converter.
+ """
+ if score_converter_config == post_processing_pb2.PostProcessing.IDENTITY:
+ return _score_converter_fn_with_logit_scale(tf.identity, logit_scale)
+ if score_converter_config == post_processing_pb2.PostProcessing.SIGMOID:
+ return _score_converter_fn_with_logit_scale(tf.sigmoid, logit_scale)
+ if score_converter_config == post_processing_pb2.PostProcessing.SOFTMAX:
+ return _score_converter_fn_with_logit_scale(tf.nn.softmax, logit_scale)
+ raise ValueError('Unknown score converter.')
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/post_processing_builder_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/post_processing_builder_test.py"
new file mode 100644
index 00000000..c39fbfb4
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/post_processing_builder_test.py"
@@ -0,0 +1,107 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for post_processing_builder."""
+
+import tensorflow as tf
+from google.protobuf import text_format
+from object_detection.builders import post_processing_builder
+from object_detection.protos import post_processing_pb2
+
+
+class PostProcessingBuilderTest(tf.test.TestCase):
+
+ def test_build_non_max_suppressor_with_correct_parameters(self):
+ post_processing_text_proto = """
+ batch_non_max_suppression {
+ score_threshold: 0.7
+ iou_threshold: 0.6
+ max_detections_per_class: 100
+ max_total_detections: 300
+ }
+ """
+ post_processing_config = post_processing_pb2.PostProcessing()
+ text_format.Merge(post_processing_text_proto, post_processing_config)
+ non_max_suppressor, _ = post_processing_builder.build(
+ post_processing_config)
+ self.assertEqual(non_max_suppressor.keywords['max_size_per_class'], 100)
+ self.assertEqual(non_max_suppressor.keywords['max_total_size'], 300)
+ self.assertAlmostEqual(non_max_suppressor.keywords['score_thresh'], 0.7)
+ self.assertAlmostEqual(non_max_suppressor.keywords['iou_thresh'], 0.6)
+
+ def test_build_identity_score_converter(self):
+ post_processing_text_proto = """
+ score_converter: IDENTITY
+ """
+ post_processing_config = post_processing_pb2.PostProcessing()
+ text_format.Merge(post_processing_text_proto, post_processing_config)
+ _, score_converter = post_processing_builder.build(post_processing_config)
+ self.assertEqual(score_converter.__name__, 'identity_with_logit_scale')
+
+ inputs = tf.constant([1, 1], tf.float32)
+ outputs = score_converter(inputs)
+ with self.test_session() as sess:
+ converted_scores = sess.run(outputs)
+ expected_converted_scores = sess.run(inputs)
+ self.assertAllClose(converted_scores, expected_converted_scores)
+
+ def test_build_identity_score_converter_with_logit_scale(self):
+ post_processing_text_proto = """
+ score_converter: IDENTITY
+ logit_scale: 2.0
+ """
+ post_processing_config = post_processing_pb2.PostProcessing()
+ text_format.Merge(post_processing_text_proto, post_processing_config)
+ _, score_converter = post_processing_builder.build(post_processing_config)
+ self.assertEqual(score_converter.__name__, 'identity_with_logit_scale')
+
+ inputs = tf.constant([1, 1], tf.float32)
+ outputs = score_converter(inputs)
+ with self.test_session() as sess:
+ converted_scores = sess.run(outputs)
+ expected_converted_scores = sess.run(tf.constant([.5, .5], tf.float32))
+ self.assertAllClose(converted_scores, expected_converted_scores)
+
+ def test_build_sigmoid_score_converter(self):
+ post_processing_text_proto = """
+ score_converter: SIGMOID
+ """
+ post_processing_config = post_processing_pb2.PostProcessing()
+ text_format.Merge(post_processing_text_proto, post_processing_config)
+ _, score_converter = post_processing_builder.build(post_processing_config)
+ self.assertEqual(score_converter.__name__, 'sigmoid_with_logit_scale')
+
+ def test_build_softmax_score_converter(self):
+ post_processing_text_proto = """
+ score_converter: SOFTMAX
+ """
+ post_processing_config = post_processing_pb2.PostProcessing()
+ text_format.Merge(post_processing_text_proto, post_processing_config)
+ _, score_converter = post_processing_builder.build(post_processing_config)
+ self.assertEqual(score_converter.__name__, 'softmax_with_logit_scale')
+
+ def test_build_softmax_score_converter_with_temperature(self):
+ post_processing_text_proto = """
+ score_converter: SOFTMAX
+ logit_scale: 2.0
+ """
+ post_processing_config = post_processing_pb2.PostProcessing()
+ text_format.Merge(post_processing_text_proto, post_processing_config)
+ _, score_converter = post_processing_builder.build(post_processing_config)
+ self.assertEqual(score_converter.__name__, 'softmax_with_logit_scale')
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/preprocessor_builder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/preprocessor_builder.py"
new file mode 100644
index 00000000..1d03b641
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/preprocessor_builder.py"
@@ -0,0 +1,348 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Builder for preprocessing steps."""
+
+import tensorflow as tf
+
+from object_detection.core import preprocessor
+from object_detection.protos import preprocessor_pb2
+
+
+def _get_step_config_from_proto(preprocessor_step_config, step_name):
+ """Returns the value of a field named step_name from proto.
+
+ Args:
+ preprocessor_step_config: A preprocessor_pb2.PreprocessingStep object.
+ step_name: Name of the field to get value from.
+
+ Returns:
+    result_dict: a sub-proto message from preprocessor_step_config which will
+      later be converted to a dictionary.
+
+ Raises:
+ ValueError: If field does not exist in proto.
+ """
+ for field, value in preprocessor_step_config.ListFields():
+ if field.name == step_name:
+ return value
+
+  raise ValueError('Could not get field %s from proto!' % step_name)
+
+
+def _get_dict_from_proto(config):
+ """Helper function to put all proto fields into a dictionary.
+
+  For many preprocessing steps, there's a trivial 1-1 mapping from proto fields
+ to function arguments. This function automatically populates a dictionary with
+ the arguments from the proto.
+
+ Protos that CANNOT be trivially populated include:
+ * nested messages.
+  * steps that check if an optional field is set (i.e. where None != 0).
+  * protos that don't map 1-1 to arguments (i.e. a list should be reshaped).
+  * fields requiring additional validation (i.e. a repeated field has n
+    elements).
+
+ Args:
+ config: A protobuf object that does not violate the conditions above.
+
+ Returns:
+ result_dict: |config| converted into a python dictionary.
+ """
+ result_dict = {}
+ for field, value in config.ListFields():
+ result_dict[field.name] = value
+ return result_dict
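+
+# For instance (an illustrative sketch; field name per preprocessor.proto), a
+# step configured as `random_adjust_brightness { max_delta: 0.2 }` would be
+# converted to the keyword-argument dictionary {'max_delta': 0.2}, which
+# build() returns alongside preprocessor.random_adjust_brightness.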
+
+
+# A map from a PreprocessingStep proto config field name to the preprocessing
+# function that should be used. The PreprocessingStep proto should be parsable
+# with _get_dict_from_proto.
+PREPROCESSING_FUNCTION_MAP = {
+ 'normalize_image':
+ preprocessor.normalize_image,
+ 'random_pixel_value_scale':
+ preprocessor.random_pixel_value_scale,
+ 'random_image_scale':
+ preprocessor.random_image_scale,
+ 'random_rgb_to_gray':
+ preprocessor.random_rgb_to_gray,
+ 'random_adjust_brightness':
+ preprocessor.random_adjust_brightness,
+ 'random_adjust_contrast':
+ preprocessor.random_adjust_contrast,
+ 'random_adjust_hue':
+ preprocessor.random_adjust_hue,
+ 'random_adjust_saturation':
+ preprocessor.random_adjust_saturation,
+ 'random_distort_color':
+ preprocessor.random_distort_color,
+ 'random_jitter_boxes':
+ preprocessor.random_jitter_boxes,
+ 'random_crop_to_aspect_ratio':
+ preprocessor.random_crop_to_aspect_ratio,
+ 'random_black_patches':
+ preprocessor.random_black_patches,
+ 'rgb_to_gray':
+ preprocessor.rgb_to_gray,
+ 'scale_boxes_to_pixel_coordinates': (
+ preprocessor.scale_boxes_to_pixel_coordinates),
+ 'subtract_channel_mean':
+ preprocessor.subtract_channel_mean,
+ 'convert_class_logits_to_softmax':
+ preprocessor.convert_class_logits_to_softmax,
+}
+
+
+# A map to convert from preprocessor_pb2.ResizeImage.Method enum to
+# tf.image.ResizeMethod.
+RESIZE_METHOD_MAP = {
+ preprocessor_pb2.ResizeImage.AREA: tf.image.ResizeMethod.AREA,
+ preprocessor_pb2.ResizeImage.BICUBIC: tf.image.ResizeMethod.BICUBIC,
+ preprocessor_pb2.ResizeImage.BILINEAR: tf.image.ResizeMethod.BILINEAR,
+ preprocessor_pb2.ResizeImage.NEAREST_NEIGHBOR: (
+ tf.image.ResizeMethod.NEAREST_NEIGHBOR),
+}
+
+
+def build(preprocessor_step_config):
+ """Builds preprocessing step based on the configuration.
+
+ Args:
+ preprocessor_step_config: PreprocessingStep configuration proto.
+
+ Returns:
+    function, argmap: A callable function and an argument map to call the
+      function with.
+
+ Raises:
+ ValueError: On invalid configuration.
+ """
+ step_type = preprocessor_step_config.WhichOneof('preprocessing_step')
+
+ if step_type in PREPROCESSING_FUNCTION_MAP:
+ preprocessing_function = PREPROCESSING_FUNCTION_MAP[step_type]
+ step_config = _get_step_config_from_proto(preprocessor_step_config,
+ step_type)
+ function_args = _get_dict_from_proto(step_config)
+ return (preprocessing_function, function_args)
+
+ if step_type == 'random_horizontal_flip':
+ config = preprocessor_step_config.random_horizontal_flip
+ return (preprocessor.random_horizontal_flip,
+ {
+ 'keypoint_flip_permutation': tuple(
+ config.keypoint_flip_permutation),
+ })
+
+ if step_type == 'random_vertical_flip':
+ config = preprocessor_step_config.random_vertical_flip
+ return (preprocessor.random_vertical_flip,
+ {
+ 'keypoint_flip_permutation': tuple(
+ config.keypoint_flip_permutation),
+ })
+
+ if step_type == 'random_rotation90':
+ return (preprocessor.random_rotation90, {})
+
+ if step_type == 'random_crop_image':
+ config = preprocessor_step_config.random_crop_image
+ return (preprocessor.random_crop_image,
+ {
+ 'min_object_covered': config.min_object_covered,
+ 'aspect_ratio_range': (config.min_aspect_ratio,
+ config.max_aspect_ratio),
+ 'area_range': (config.min_area, config.max_area),
+ 'overlap_thresh': config.overlap_thresh,
+ 'clip_boxes': config.clip_boxes,
+ 'random_coef': config.random_coef,
+ })
+
+ if step_type == 'random_pad_image':
+ config = preprocessor_step_config.random_pad_image
+ min_image_size = None
+ if (config.HasField('min_image_height') !=
+ config.HasField('min_image_width')):
+ raise ValueError('min_image_height and min_image_width should be either '
+ 'both set or both unset.')
+ if config.HasField('min_image_height'):
+ min_image_size = (config.min_image_height, config.min_image_width)
+
+ max_image_size = None
+ if (config.HasField('max_image_height') !=
+ config.HasField('max_image_width')):
+ raise ValueError('max_image_height and max_image_width should be either '
+ 'both set or both unset.')
+ if config.HasField('max_image_height'):
+ max_image_size = (config.max_image_height, config.max_image_width)
+
+ pad_color = config.pad_color or None
+ if pad_color:
+ if len(pad_color) == 3:
+ pad_color = tf.to_float([x for x in config.pad_color])
+ else:
+ raise ValueError('pad_color should have 3 elements (RGB) if set!')
+ return (preprocessor.random_pad_image,
+ {
+ 'min_image_size': min_image_size,
+ 'max_image_size': max_image_size,
+ 'pad_color': pad_color,
+ })
+
+ if step_type == 'random_crop_pad_image':
+ config = preprocessor_step_config.random_crop_pad_image
+ min_padded_size_ratio = config.min_padded_size_ratio
+ if min_padded_size_ratio and len(min_padded_size_ratio) != 2:
+ raise ValueError('min_padded_size_ratio should have 2 elements if set!')
+ max_padded_size_ratio = config.max_padded_size_ratio
+ if max_padded_size_ratio and len(max_padded_size_ratio) != 2:
+ raise ValueError('max_padded_size_ratio should have 2 elements if set!')
+ pad_color = config.pad_color
+ if pad_color and len(pad_color) != 3:
+ raise ValueError('pad_color should have 3 elements if set!')
+ kwargs = {
+ 'min_object_covered': config.min_object_covered,
+ 'aspect_ratio_range': (config.min_aspect_ratio,
+ config.max_aspect_ratio),
+ 'area_range': (config.min_area, config.max_area),
+ 'overlap_thresh': config.overlap_thresh,
+ 'clip_boxes': config.clip_boxes,
+ 'random_coef': config.random_coef,
+ }
+ if min_padded_size_ratio:
+ kwargs['min_padded_size_ratio'] = tuple(min_padded_size_ratio)
+ if max_padded_size_ratio:
+ kwargs['max_padded_size_ratio'] = tuple(max_padded_size_ratio)
+ if pad_color:
+ kwargs['pad_color'] = tuple(pad_color)
+ return (preprocessor.random_crop_pad_image, kwargs)
+
+ if step_type == 'random_resize_method':
+ config = preprocessor_step_config.random_resize_method
+ return (preprocessor.random_resize_method,
+ {
+ 'target_size': [config.target_height, config.target_width],
+ })
+
+ if step_type == 'resize_image':
+ config = preprocessor_step_config.resize_image
+ method = RESIZE_METHOD_MAP[config.method]
+ return (preprocessor.resize_image,
+ {
+ 'new_height': config.new_height,
+ 'new_width': config.new_width,
+ 'method': method
+ })
+
+ if step_type == 'ssd_random_crop':
+ config = preprocessor_step_config.ssd_random_crop
+ if config.operations:
+ min_object_covered = [op.min_object_covered for op in config.operations]
+ aspect_ratio_range = [(op.min_aspect_ratio, op.max_aspect_ratio)
+ for op in config.operations]
+ area_range = [(op.min_area, op.max_area) for op in config.operations]
+ overlap_thresh = [op.overlap_thresh for op in config.operations]
+ clip_boxes = [op.clip_boxes for op in config.operations]
+ random_coef = [op.random_coef for op in config.operations]
+ return (preprocessor.ssd_random_crop,
+ {
+ 'min_object_covered': min_object_covered,
+ 'aspect_ratio_range': aspect_ratio_range,
+ 'area_range': area_range,
+ 'overlap_thresh': overlap_thresh,
+ 'clip_boxes': clip_boxes,
+ 'random_coef': random_coef,
+ })
+ return (preprocessor.ssd_random_crop, {})
+
+ if step_type == 'ssd_random_crop_pad':
+ config = preprocessor_step_config.ssd_random_crop_pad
+ if config.operations:
+ min_object_covered = [op.min_object_covered for op in config.operations]
+ aspect_ratio_range = [(op.min_aspect_ratio, op.max_aspect_ratio)
+ for op in config.operations]
+ area_range = [(op.min_area, op.max_area) for op in config.operations]
+ overlap_thresh = [op.overlap_thresh for op in config.operations]
+ clip_boxes = [op.clip_boxes for op in config.operations]
+ random_coef = [op.random_coef for op in config.operations]
+ min_padded_size_ratio = [tuple(op.min_padded_size_ratio)
+ for op in config.operations]
+ max_padded_size_ratio = [tuple(op.max_padded_size_ratio)
+ for op in config.operations]
+ pad_color = [(op.pad_color_r, op.pad_color_g, op.pad_color_b)
+ for op in config.operations]
+ return (preprocessor.ssd_random_crop_pad,
+ {
+ 'min_object_covered': min_object_covered,
+ 'aspect_ratio_range': aspect_ratio_range,
+ 'area_range': area_range,
+ 'overlap_thresh': overlap_thresh,
+ 'clip_boxes': clip_boxes,
+ 'random_coef': random_coef,
+ 'min_padded_size_ratio': min_padded_size_ratio,
+ 'max_padded_size_ratio': max_padded_size_ratio,
+ 'pad_color': pad_color,
+ })
+ return (preprocessor.ssd_random_crop_pad, {})
+
+ if step_type == 'ssd_random_crop_fixed_aspect_ratio':
+ config = preprocessor_step_config.ssd_random_crop_fixed_aspect_ratio
+ if config.operations:
+ min_object_covered = [op.min_object_covered for op in config.operations]
+ area_range = [(op.min_area, op.max_area) for op in config.operations]
+ overlap_thresh = [op.overlap_thresh for op in config.operations]
+ clip_boxes = [op.clip_boxes for op in config.operations]
+ random_coef = [op.random_coef for op in config.operations]
+ return (preprocessor.ssd_random_crop_fixed_aspect_ratio,
+ {
+ 'min_object_covered': min_object_covered,
+ 'aspect_ratio': config.aspect_ratio,
+ 'area_range': area_range,
+ 'overlap_thresh': overlap_thresh,
+ 'clip_boxes': clip_boxes,
+ 'random_coef': random_coef,
+ })
+ return (preprocessor.ssd_random_crop_fixed_aspect_ratio, {})
+
+ if step_type == 'ssd_random_crop_pad_fixed_aspect_ratio':
+ config = preprocessor_step_config.ssd_random_crop_pad_fixed_aspect_ratio
+ kwargs = {}
+ aspect_ratio = config.aspect_ratio
+ if aspect_ratio:
+ kwargs['aspect_ratio'] = aspect_ratio
+ min_padded_size_ratio = config.min_padded_size_ratio
+ if min_padded_size_ratio:
+ if len(min_padded_size_ratio) != 2:
+ raise ValueError('min_padded_size_ratio should have 2 elements if set!')
+ kwargs['min_padded_size_ratio'] = tuple(min_padded_size_ratio)
+ max_padded_size_ratio = config.max_padded_size_ratio
+ if max_padded_size_ratio:
+ if len(max_padded_size_ratio) != 2:
+ raise ValueError('max_padded_size_ratio should have 2 elements if set!')
+ kwargs['max_padded_size_ratio'] = tuple(max_padded_size_ratio)
+ if config.operations:
+ kwargs['min_object_covered'] = [op.min_object_covered
+ for op in config.operations]
+ kwargs['aspect_ratio_range'] = [(op.min_aspect_ratio, op.max_aspect_ratio)
+ for op in config.operations]
+ kwargs['area_range'] = [(op.min_area, op.max_area)
+ for op in config.operations]
+ kwargs['overlap_thresh'] = [op.overlap_thresh for op in config.operations]
+ kwargs['clip_boxes'] = [op.clip_boxes for op in config.operations]
+ kwargs['random_coef'] = [op.random_coef for op in config.operations]
+ return (preprocessor.ssd_random_crop_pad_fixed_aspect_ratio, kwargs)
+
+ raise ValueError('Unknown preprocessing step.')
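+
+
+# Illustrative usage sketch (an editorial addition, not part of the upstream
+# module). The typical call pattern is to parse a PreprocessingStep text
+# proto, build the (function, kwargs) pair, and then apply it; the exact
+# integration point is an assumption in this note:
+#
+#   from google.protobuf import text_format
+#
+#   step = preprocessor_pb2.PreprocessingStep()
+#   text_format.Merge('random_horizontal_flip {}', step)
+#   func, kwargs = build(step)
+#   # func is preprocessor.random_horizontal_flip and kwargs holds its keyword
+#   # arguments; callers typically apply them as func(image, **kwargs) or hand
+#   # the [(func, kwargs)] list to the preprocessor's preprocess routine.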
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/preprocessor_builder_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/preprocessor_builder_test.py"
new file mode 100644
index 00000000..89de2b43
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/preprocessor_builder_test.py"
@@ -0,0 +1,598 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for preprocessor_builder."""
+
+import tensorflow as tf
+
+from google.protobuf import text_format
+
+from object_detection.builders import preprocessor_builder
+from object_detection.core import preprocessor
+from object_detection.protos import preprocessor_pb2
+
+
+class PreprocessorBuilderTest(tf.test.TestCase):
+
+ def assert_dictionary_close(self, dict1, dict2):
+    """Helper to check if two dicts with floats or integers are close."""
+ self.assertEqual(sorted(dict1.keys()), sorted(dict2.keys()))
+ for key in dict1:
+ value = dict1[key]
+ if isinstance(value, float):
+ self.assertAlmostEqual(value, dict2[key])
+ else:
+ self.assertEqual(value, dict2[key])
+
+ def test_build_normalize_image(self):
+ preprocessor_text_proto = """
+ normalize_image {
+ original_minval: 0.0
+ original_maxval: 255.0
+ target_minval: -1.0
+ target_maxval: 1.0
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.normalize_image)
+ self.assertEqual(args, {
+ 'original_minval': 0.0,
+ 'original_maxval': 255.0,
+ 'target_minval': -1.0,
+ 'target_maxval': 1.0,
+ })
+
+ def test_build_random_horizontal_flip(self):
+ preprocessor_text_proto = """
+ random_horizontal_flip {
+ keypoint_flip_permutation: 1
+ keypoint_flip_permutation: 0
+ keypoint_flip_permutation: 2
+ keypoint_flip_permutation: 3
+ keypoint_flip_permutation: 5
+ keypoint_flip_permutation: 4
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.random_horizontal_flip)
+ self.assertEqual(args, {'keypoint_flip_permutation': (1, 0, 2, 3, 5, 4)})
+
+ def test_build_random_vertical_flip(self):
+ preprocessor_text_proto = """
+ random_vertical_flip {
+ keypoint_flip_permutation: 1
+ keypoint_flip_permutation: 0
+ keypoint_flip_permutation: 2
+ keypoint_flip_permutation: 3
+ keypoint_flip_permutation: 5
+ keypoint_flip_permutation: 4
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.random_vertical_flip)
+ self.assertEqual(args, {'keypoint_flip_permutation': (1, 0, 2, 3, 5, 4)})
+
+ def test_build_random_rotation90(self):
+ preprocessor_text_proto = """
+ random_rotation90 {}
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.random_rotation90)
+ self.assertEqual(args, {})
+
+ def test_build_random_pixel_value_scale(self):
+ preprocessor_text_proto = """
+ random_pixel_value_scale {
+ minval: 0.8
+ maxval: 1.2
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.random_pixel_value_scale)
+ self.assert_dictionary_close(args, {'minval': 0.8, 'maxval': 1.2})
+
+ def test_build_random_image_scale(self):
+ preprocessor_text_proto = """
+ random_image_scale {
+ min_scale_ratio: 0.8
+ max_scale_ratio: 2.2
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.random_image_scale)
+ self.assert_dictionary_close(args, {'min_scale_ratio': 0.8,
+ 'max_scale_ratio': 2.2})
+
+ def test_build_random_rgb_to_gray(self):
+ preprocessor_text_proto = """
+ random_rgb_to_gray {
+ probability: 0.8
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.random_rgb_to_gray)
+ self.assert_dictionary_close(args, {'probability': 0.8})
+
+ def test_build_random_adjust_brightness(self):
+ preprocessor_text_proto = """
+ random_adjust_brightness {
+ max_delta: 0.2
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.random_adjust_brightness)
+ self.assert_dictionary_close(args, {'max_delta': 0.2})
+
+ def test_build_random_adjust_contrast(self):
+ preprocessor_text_proto = """
+ random_adjust_contrast {
+ min_delta: 0.7
+ max_delta: 1.1
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.random_adjust_contrast)
+ self.assert_dictionary_close(args, {'min_delta': 0.7, 'max_delta': 1.1})
+
+ def test_build_random_adjust_hue(self):
+ preprocessor_text_proto = """
+ random_adjust_hue {
+ max_delta: 0.01
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.random_adjust_hue)
+ self.assert_dictionary_close(args, {'max_delta': 0.01})
+
+ def test_build_random_adjust_saturation(self):
+ preprocessor_text_proto = """
+ random_adjust_saturation {
+ min_delta: 0.75
+ max_delta: 1.15
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.random_adjust_saturation)
+ self.assert_dictionary_close(args, {'min_delta': 0.75, 'max_delta': 1.15})
+
+ def test_build_random_distort_color(self):
+ preprocessor_text_proto = """
+ random_distort_color {
+ color_ordering: 1
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.random_distort_color)
+ self.assertEqual(args, {'color_ordering': 1})
+
+ def test_build_random_jitter_boxes(self):
+ preprocessor_text_proto = """
+ random_jitter_boxes {
+ ratio: 0.1
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.random_jitter_boxes)
+ self.assert_dictionary_close(args, {'ratio': 0.1})
+
+ def test_build_random_crop_image(self):
+ preprocessor_text_proto = """
+ random_crop_image {
+ min_object_covered: 0.75
+ min_aspect_ratio: 0.75
+ max_aspect_ratio: 1.5
+ min_area: 0.25
+ max_area: 0.875
+ overlap_thresh: 0.5
+ clip_boxes: False
+ random_coef: 0.125
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.random_crop_image)
+ self.assertEqual(args, {
+ 'min_object_covered': 0.75,
+ 'aspect_ratio_range': (0.75, 1.5),
+ 'area_range': (0.25, 0.875),
+ 'overlap_thresh': 0.5,
+ 'clip_boxes': False,
+ 'random_coef': 0.125,
+ })
+
+ def test_build_random_pad_image(self):
+ preprocessor_text_proto = """
+ random_pad_image {
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.random_pad_image)
+ self.assertEqual(args, {
+ 'min_image_size': None,
+ 'max_image_size': None,
+ 'pad_color': None,
+ })
+
+ def test_build_random_crop_pad_image(self):
+ preprocessor_text_proto = """
+ random_crop_pad_image {
+ min_object_covered: 0.75
+ min_aspect_ratio: 0.75
+ max_aspect_ratio: 1.5
+ min_area: 0.25
+ max_area: 0.875
+ overlap_thresh: 0.5
+ clip_boxes: False
+ random_coef: 0.125
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.random_crop_pad_image)
+ self.assertEqual(args, {
+ 'min_object_covered': 0.75,
+ 'aspect_ratio_range': (0.75, 1.5),
+ 'area_range': (0.25, 0.875),
+ 'overlap_thresh': 0.5,
+ 'clip_boxes': False,
+ 'random_coef': 0.125,
+ })
+
+ def test_build_random_crop_pad_image_with_optional_parameters(self):
+ preprocessor_text_proto = """
+ random_crop_pad_image {
+ min_object_covered: 0.75
+ min_aspect_ratio: 0.75
+ max_aspect_ratio: 1.5
+ min_area: 0.25
+ max_area: 0.875
+ overlap_thresh: 0.5
+ clip_boxes: False
+ random_coef: 0.125
+ min_padded_size_ratio: 0.5
+ min_padded_size_ratio: 0.75
+ max_padded_size_ratio: 0.5
+ max_padded_size_ratio: 0.75
+ pad_color: 0.5
+ pad_color: 0.5
+ pad_color: 1.0
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.random_crop_pad_image)
+ self.assertEqual(args, {
+ 'min_object_covered': 0.75,
+ 'aspect_ratio_range': (0.75, 1.5),
+ 'area_range': (0.25, 0.875),
+ 'overlap_thresh': 0.5,
+ 'clip_boxes': False,
+ 'random_coef': 0.125,
+ 'min_padded_size_ratio': (0.5, 0.75),
+ 'max_padded_size_ratio': (0.5, 0.75),
+ 'pad_color': (0.5, 0.5, 1.0)
+ })
+
+ def test_build_random_crop_to_aspect_ratio(self):
+ preprocessor_text_proto = """
+ random_crop_to_aspect_ratio {
+ aspect_ratio: 0.85
+ overlap_thresh: 0.35
+ clip_boxes: False
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.random_crop_to_aspect_ratio)
+ self.assert_dictionary_close(args, {'aspect_ratio': 0.85,
+ 'overlap_thresh': 0.35,
+ 'clip_boxes': False})
+
+ def test_build_random_black_patches(self):
+ preprocessor_text_proto = """
+ random_black_patches {
+ max_black_patches: 20
+ probability: 0.95
+ size_to_image_ratio: 0.12
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.random_black_patches)
+ self.assert_dictionary_close(args, {'max_black_patches': 20,
+ 'probability': 0.95,
+ 'size_to_image_ratio': 0.12})
+
+ def test_build_random_resize_method(self):
+ preprocessor_text_proto = """
+ random_resize_method {
+ target_height: 75
+ target_width: 100
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.random_resize_method)
+ self.assert_dictionary_close(args, {'target_size': [75, 100]})
+
+ def test_build_scale_boxes_to_pixel_coordinates(self):
+ preprocessor_text_proto = """
+ scale_boxes_to_pixel_coordinates {}
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.scale_boxes_to_pixel_coordinates)
+ self.assertEqual(args, {})
+
+ def test_build_resize_image(self):
+ preprocessor_text_proto = """
+ resize_image {
+ new_height: 75
+ new_width: 100
+ method: BICUBIC
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.resize_image)
+ self.assertEqual(args, {'new_height': 75,
+ 'new_width': 100,
+ 'method': tf.image.ResizeMethod.BICUBIC})
+
+ def test_build_rgb_to_gray(self):
+ preprocessor_text_proto = """
+ rgb_to_gray {}
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.rgb_to_gray)
+ self.assertEqual(args, {})
+
+ def test_build_subtract_channel_mean(self):
+ preprocessor_text_proto = """
+ subtract_channel_mean {
+ means: [1.0, 2.0, 3.0]
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.subtract_channel_mean)
+ self.assertEqual(args, {'means': [1.0, 2.0, 3.0]})
+
+ def test_build_ssd_random_crop(self):
+ preprocessor_text_proto = """
+ ssd_random_crop {
+ operations {
+ min_object_covered: 0.0
+ min_aspect_ratio: 0.875
+ max_aspect_ratio: 1.125
+ min_area: 0.5
+ max_area: 1.0
+ overlap_thresh: 0.0
+ clip_boxes: False
+ random_coef: 0.375
+ }
+ operations {
+ min_object_covered: 0.25
+ min_aspect_ratio: 0.75
+ max_aspect_ratio: 1.5
+ min_area: 0.5
+ max_area: 1.0
+ overlap_thresh: 0.25
+ clip_boxes: True
+ random_coef: 0.375
+ }
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.ssd_random_crop)
+ self.assertEqual(args, {'min_object_covered': [0.0, 0.25],
+ 'aspect_ratio_range': [(0.875, 1.125), (0.75, 1.5)],
+ 'area_range': [(0.5, 1.0), (0.5, 1.0)],
+ 'overlap_thresh': [0.0, 0.25],
+ 'clip_boxes': [False, True],
+ 'random_coef': [0.375, 0.375]})
+
+ def test_build_ssd_random_crop_empty_operations(self):
+ preprocessor_text_proto = """
+ ssd_random_crop {
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.ssd_random_crop)
+ self.assertEqual(args, {})
+
+ def test_build_ssd_random_crop_pad(self):
+ preprocessor_text_proto = """
+ ssd_random_crop_pad {
+ operations {
+ min_object_covered: 0.0
+ min_aspect_ratio: 0.875
+ max_aspect_ratio: 1.125
+ min_area: 0.5
+ max_area: 1.0
+ overlap_thresh: 0.0
+ clip_boxes: False
+ random_coef: 0.375
+ min_padded_size_ratio: [1.0, 1.0]
+ max_padded_size_ratio: [2.0, 2.0]
+ pad_color_r: 0.5
+ pad_color_g: 0.5
+ pad_color_b: 0.5
+ }
+ operations {
+ min_object_covered: 0.25
+ min_aspect_ratio: 0.75
+ max_aspect_ratio: 1.5
+ min_area: 0.5
+ max_area: 1.0
+ overlap_thresh: 0.25
+ clip_boxes: True
+ random_coef: 0.375
+ min_padded_size_ratio: [1.0, 1.0]
+ max_padded_size_ratio: [2.0, 2.0]
+ pad_color_r: 0.5
+ pad_color_g: 0.5
+ pad_color_b: 0.5
+ }
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.ssd_random_crop_pad)
+ self.assertEqual(args, {'min_object_covered': [0.0, 0.25],
+ 'aspect_ratio_range': [(0.875, 1.125), (0.75, 1.5)],
+ 'area_range': [(0.5, 1.0), (0.5, 1.0)],
+ 'overlap_thresh': [0.0, 0.25],
+ 'clip_boxes': [False, True],
+ 'random_coef': [0.375, 0.375],
+ 'min_padded_size_ratio': [(1.0, 1.0), (1.0, 1.0)],
+ 'max_padded_size_ratio': [(2.0, 2.0), (2.0, 2.0)],
+ 'pad_color': [(0.5, 0.5, 0.5), (0.5, 0.5, 0.5)]})
+
+ def test_build_ssd_random_crop_fixed_aspect_ratio(self):
+ preprocessor_text_proto = """
+ ssd_random_crop_fixed_aspect_ratio {
+ operations {
+ min_object_covered: 0.0
+ min_area: 0.5
+ max_area: 1.0
+ overlap_thresh: 0.0
+ clip_boxes: False
+ random_coef: 0.375
+ }
+ operations {
+ min_object_covered: 0.25
+ min_area: 0.5
+ max_area: 1.0
+ overlap_thresh: 0.25
+ clip_boxes: True
+ random_coef: 0.375
+ }
+ aspect_ratio: 0.875
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.ssd_random_crop_fixed_aspect_ratio)
+ self.assertEqual(args, {'min_object_covered': [0.0, 0.25],
+ 'aspect_ratio': 0.875,
+ 'area_range': [(0.5, 1.0), (0.5, 1.0)],
+ 'overlap_thresh': [0.0, 0.25],
+ 'clip_boxes': [False, True],
+ 'random_coef': [0.375, 0.375]})
+
+ def test_build_ssd_random_crop_pad_fixed_aspect_ratio(self):
+ preprocessor_text_proto = """
+ ssd_random_crop_pad_fixed_aspect_ratio {
+ operations {
+ min_object_covered: 0.0
+ min_aspect_ratio: 0.875
+ max_aspect_ratio: 1.125
+ min_area: 0.5
+ max_area: 1.0
+ overlap_thresh: 0.0
+ clip_boxes: False
+ random_coef: 0.375
+ }
+ operations {
+ min_object_covered: 0.25
+ min_aspect_ratio: 0.75
+ max_aspect_ratio: 1.5
+ min_area: 0.5
+ max_area: 1.0
+ overlap_thresh: 0.25
+ clip_boxes: True
+ random_coef: 0.375
+ }
+ aspect_ratio: 0.875
+ min_padded_size_ratio: [1.0, 1.0]
+ max_padded_size_ratio: [2.0, 2.0]
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function,
+ preprocessor.ssd_random_crop_pad_fixed_aspect_ratio)
+ self.assertEqual(args, {'min_object_covered': [0.0, 0.25],
+ 'aspect_ratio': 0.875,
+ 'aspect_ratio_range': [(0.875, 1.125), (0.75, 1.5)],
+ 'area_range': [(0.5, 1.0), (0.5, 1.0)],
+ 'overlap_thresh': [0.0, 0.25],
+ 'clip_boxes': [False, True],
+ 'random_coef': [0.375, 0.375],
+ 'min_padded_size_ratio': (1.0, 1.0),
+ 'max_padded_size_ratio': (2.0, 2.0)})
+
+  def test_build_convert_class_logits_to_softmax(self):
+ preprocessor_text_proto = """
+ convert_class_logits_to_softmax {
+ temperature: 2
+ }
+ """
+ preprocessor_proto = preprocessor_pb2.PreprocessingStep()
+ text_format.Merge(preprocessor_text_proto, preprocessor_proto)
+ function, args = preprocessor_builder.build(preprocessor_proto)
+ self.assertEqual(function, preprocessor.convert_class_logits_to_softmax)
+ self.assertEqual(args, {'temperature': 2})
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/region_similarity_calculator_builder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/region_similarity_calculator_builder.py"
new file mode 100644
index 00000000..157c94fb
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/region_similarity_calculator_builder.py"
@@ -0,0 +1,59 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Builder for region similarity calculators."""
+
+from object_detection.core import region_similarity_calculator
+from object_detection.protos import region_similarity_calculator_pb2
+
+
+def build(region_similarity_calculator_config):
+ """Builds region similarity calculator based on the configuration.
+
+ Builds one of [IouSimilarity, IoaSimilarity, NegSqDistSimilarity] objects. See
+ core/region_similarity_calculator.proto for details.
+
+ Args:
+ region_similarity_calculator_config: RegionSimilarityCalculator
+ configuration proto.
+
+ Returns:
+ region_similarity_calculator: RegionSimilarityCalculator object.
+
+ Raises:
+ ValueError: On unknown region similarity calculator.
+ """
+
+ if not isinstance(
+ region_similarity_calculator_config,
+ region_similarity_calculator_pb2.RegionSimilarityCalculator):
+ raise ValueError(
+ 'region_similarity_calculator_config not of type '
+        'region_similarity_calculator_pb2.RegionSimilarityCalculator')
+
+ similarity_calculator = region_similarity_calculator_config.WhichOneof(
+ 'region_similarity')
+ if similarity_calculator == 'iou_similarity':
+ return region_similarity_calculator.IouSimilarity()
+ if similarity_calculator == 'ioa_similarity':
+ return region_similarity_calculator.IoaSimilarity()
+ if similarity_calculator == 'neg_sq_dist_similarity':
+ return region_similarity_calculator.NegSqDistSimilarity()
+ if similarity_calculator == 'thresholded_iou_similarity':
+ return region_similarity_calculator.ThresholdedIouSimilarity(
+ region_similarity_calculator_config.thresholded_iou_similarity.threshold
+ )
+
+ raise ValueError('Unknown region similarity calculator.')
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/region_similarity_calculator_builder_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/region_similarity_calculator_builder_test.py"
new file mode 100644
index 00000000..ca3a5512
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/builders/region_similarity_calculator_builder_test.py"
@@ -0,0 +1,67 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for region_similarity_calculator_builder."""
+
+import tensorflow as tf
+
+from google.protobuf import text_format
+from object_detection.builders import region_similarity_calculator_builder
+from object_detection.core import region_similarity_calculator
+from object_detection.protos import region_similarity_calculator_pb2 as sim_calc_pb2
+
+
+class RegionSimilarityCalculatorBuilderTest(tf.test.TestCase):
+
+ def testBuildIoaSimilarityCalculator(self):
+ similarity_calc_text_proto = """
+ ioa_similarity {
+ }
+ """
+ similarity_calc_proto = sim_calc_pb2.RegionSimilarityCalculator()
+ text_format.Merge(similarity_calc_text_proto, similarity_calc_proto)
+ similarity_calc = region_similarity_calculator_builder.build(
+ similarity_calc_proto)
+ self.assertTrue(isinstance(similarity_calc,
+ region_similarity_calculator.IoaSimilarity))
+
+ def testBuildIouSimilarityCalculator(self):
+ similarity_calc_text_proto = """
+ iou_similarity {
+ }
+ """
+ similarity_calc_proto = sim_calc_pb2.RegionSimilarityCalculator()
+ text_format.Merge(similarity_calc_text_proto, similarity_calc_proto)
+ similarity_calc = region_similarity_calculator_builder.build(
+ similarity_calc_proto)
+ self.assertTrue(isinstance(similarity_calc,
+ region_similarity_calculator.IouSimilarity))
+
+ def testBuildNegSqDistSimilarityCalculator(self):
+ similarity_calc_text_proto = """
+ neg_sq_dist_similarity {
+ }
+ """
+ similarity_calc_proto = sim_calc_pb2.RegionSimilarityCalculator()
+ text_format.Merge(similarity_calc_text_proto, similarity_calc_proto)
+ similarity_calc = region_similarity_calculator_builder.build(
+ similarity_calc_proto)
+ self.assertTrue(isinstance(similarity_calc,
+ region_similarity_calculator.
+ NegSqDistSimilarity))
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/__init__.py"
new file mode 100644
index 00000000..8b137891
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/__init__.py"
@@ -0,0 +1 @@
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/anchor_generator.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/anchor_generator.py"
new file mode 100644
index 00000000..f2797ef7
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/anchor_generator.py"
@@ -0,0 +1,150 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Base anchor generator.
+
+The job of the anchor generator is to create (or load) a collection
+of bounding boxes to be used as anchors.
+
+Generated anchors are assumed to match some convolutional grid or list of grid
+shapes. For example, we might want to generate anchors matching an 8x8
+feature map and a 4x4 feature map. If we place 3 anchors per grid location
+on the first feature map and 6 anchors per grid location on the second feature
+map, then 3*8*8 + 6*4*4 = 288 anchors are generated in total.
+
+To support fully convolutional settings, feature map shapes are passed
+dynamically at generation time. The number of anchors to place at each location
+is static --- implementations of AnchorGenerator must always be able to return
+the number of anchors that they use per location for each feature map.
+"""
+from abc import ABCMeta
+from abc import abstractmethod
+
+import tensorflow as tf
+
+
+class AnchorGenerator(object):
+ """Abstract base class for anchor generators."""
+ __metaclass__ = ABCMeta
+
+ @abstractmethod
+ def name_scope(self):
+ """Name scope.
+
+ Must be defined by implementations.
+
+ Returns:
+ a string representing the name scope of the anchor generation operation.
+ """
+ pass
+
+ @property
+ def check_num_anchors(self):
+ """Whether to dynamically check the number of anchors generated.
+
+ Can be overridden by implementations that would like to disable this
+ behavior.
+
+ Returns:
+ a boolean controlling whether the Generate function should dynamically
+ check the number of anchors generated against the mathematically
+ expected number of anchors.
+ """
+ return True
+
+ @abstractmethod
+ def num_anchors_per_location(self):
+ """Returns the number of anchors per spatial location.
+
+ Returns:
+ a list of integers, one for each expected feature map to be passed to
+ the `generate` function.
+ """
+ pass
+
+ def generate(self, feature_map_shape_list, **params):
+ """Generates a collection of bounding boxes to be used as anchors.
+
+ TODO(rathodv): remove **params from argument list and make stride and
+ offsets (for multiple_grid_anchor_generator) constructor arguments.
+
+ Args:
+ feature_map_shape_list: list of (height, width) pairs in the format
+ [(height_0, width_0), (height_1, width_1), ...] that the generated
+ anchors must align with. Pairs can be provided as 1-dimensional
+ integer tensors of length 2 or simply as tuples of integers.
+ **params: parameters for anchor generation op
+
+ Returns:
+ boxes_list: a list of BoxLists each holding anchor boxes corresponding to
+ the input feature map shapes.
+
+ Raises:
+ ValueError: if the number of feature map shapes does not match the length
+ of NumAnchorsPerLocation.
+ """
+ if self.check_num_anchors and (
+ len(feature_map_shape_list) != len(self.num_anchors_per_location())):
+ raise ValueError('Number of feature maps is expected to equal the length '
+ 'of `num_anchors_per_location`.')
+ with tf.name_scope(self.name_scope()):
+ anchors_list = self._generate(feature_map_shape_list, **params)
+ if self.check_num_anchors:
+ with tf.control_dependencies([
+ self._assert_correct_number_of_anchors(
+ anchors_list, feature_map_shape_list)]):
+ for item in anchors_list:
+ item.set(tf.identity(item.get()))
+ return anchors_list
+
+ @abstractmethod
+ def _generate(self, feature_map_shape_list, **params):
+ """To be overridden by implementations.
+
+ Args:
+ feature_map_shape_list: list of (height, width) pairs in the format
+ [(height_0, width_0), (height_1, width_1), ...] that the generated
+ anchors must align with.
+ **params: parameters for anchor generation op
+
+ Returns:
+ boxes_list: a list of BoxList, each holding a collection of N anchor
+ boxes.
+ """
+ pass
+
+ def _assert_correct_number_of_anchors(self, anchors_list,
+ feature_map_shape_list):
+ """Assert that correct number of anchors was generated.
+
+ Args:
+      anchors_list: A list of box_list.BoxList objects holding the generated
+        anchors.
+ feature_map_shape_list: list of (height, width) pairs in the format
+ [(height_0, width_0), (height_1, width_1), ...] that the generated
+ anchors must align with.
+ Returns:
+ Op that raises InvalidArgumentError if the number of anchors does not
+ match the number of expected anchors.
+ """
+ expected_num_anchors = 0
+ actual_num_anchors = 0
+ for num_anchors_per_location, feature_map_shape, anchors in zip(
+ self.num_anchors_per_location(), feature_map_shape_list, anchors_list):
+ expected_num_anchors += (num_anchors_per_location
+ * feature_map_shape[0]
+ * feature_map_shape[1])
+ actual_num_anchors += anchors.num_boxes()
+ return tf.assert_equal(expected_num_anchors, actual_num_anchors)
+
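+
+
+# Illustrative sketch (an editorial addition, not part of the upstream module):
+# a concrete generator only needs `name_scope`, `num_anchors_per_location` and
+# `_generate`. The toy class below places one full-image anchor at every cell
+# of a single feature map and assumes the feature map shape is given as a
+# Python (height, width) tuple; real implementations live under
+# object_detection/anchor_generators/.
+class _SingleAnchorPerLocationGenerator(AnchorGenerator):
+  """Toy generator: one [0, 0, 1, 1] anchor per feature-map cell."""
+
+  def name_scope(self):
+    return 'SingleAnchorPerLocationGenerator'
+
+  def num_anchors_per_location(self):
+    return [1]
+
+  def _generate(self, feature_map_shape_list, **params):
+    # Local import so this sketch does not touch the module's import block.
+    from object_detection.core import box_list
+    height, width = feature_map_shape_list[0]
+    boxes = tf.tile(tf.constant([[0., 0., 1., 1.]]), [height * width, 1])
+    return [box_list.BoxList(boxes)]
+
+# Example (illustrative): _SingleAnchorPerLocationGenerator().generate([(8, 8)])
+# returns a single BoxList with 1 * 8 * 8 = 64 anchors.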
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/balanced_positive_negative_sampler.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/balanced_positive_negative_sampler.py"
new file mode 100644
index 00000000..a38f82f6
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/balanced_positive_negative_sampler.py"
@@ -0,0 +1,264 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Class to subsample minibatches by balancing positives and negatives.
+
+Subsamples minibatches based on a pre-specified positive fraction in range
+[0,1]. The class presumes there are many more negatives than positive examples:
+if the desired batch_size cannot be achieved with the pre-specified positive
+fraction, it fills the rest with negative examples. If this is not sufficient
+for obtaining the desired batch_size, it returns fewer examples.
+
+The main function to call is Subsample(self, indicator, labels). For convenience
+one can also call SubsampleWeights(self, weights, labels) which is defined in
+the minibatch_sampler base class.
+
+When is_static is True, it implements a method that guarantees static shapes.
+It also ensures that the length of the subsample output is always batch_size,
+even when the number of examples set to True in indicator is less than
+batch_size.
+"""
+
+import tensorflow as tf
+
+from object_detection.core import minibatch_sampler
+from object_detection.utils import ops
+
+
+class BalancedPositiveNegativeSampler(minibatch_sampler.MinibatchSampler):
+ """Subsamples minibatches to a desired balance of positives and negatives."""
+
+ def __init__(self, positive_fraction=0.5, is_static=False):
+ """Constructs a minibatch sampler.
+
+ Args:
+ positive_fraction: desired fraction of positive examples (scalar in [0,1])
+ in the batch.
+ is_static: If True, uses an implementation with static shape guarantees.
+
+ Raises:
+ ValueError: if positive_fraction < 0, or positive_fraction > 1
+ """
+ if positive_fraction < 0 or positive_fraction > 1:
+ raise ValueError('positive_fraction should be in range [0,1]. '
+ 'Received: %s.' % positive_fraction)
+ self._positive_fraction = positive_fraction
+ self._is_static = is_static
+
+ def _get_num_pos_neg_samples(self, sorted_indices_tensor, sample_size):
+    """Counts the number of positive and negative examples to be sampled.
+
+ Args:
+ sorted_indices_tensor: A sorted int32 tensor of shape [N] which contains
+ the signed indices of the examples where the sign is based on the label
+ value. The examples that cannot be sampled are set to 0. It samples
+        at most sample_size*positive_fraction positive examples and the
+        remainder from negative examples.
+ sample_size: Size of subsamples.
+
+ Returns:
+ A tuple containing the number of positive and negative labels in the
+ subsample.
+ """
+ input_length = tf.shape(sorted_indices_tensor)[0]
+ valid_positive_index = tf.greater(sorted_indices_tensor,
+ tf.zeros(input_length, tf.int32))
+ num_sampled_pos = tf.reduce_sum(tf.cast(valid_positive_index, tf.int32))
+ max_num_positive_samples = tf.constant(
+ int(sample_size * self._positive_fraction), tf.int32)
+ num_positive_samples = tf.minimum(max_num_positive_samples, num_sampled_pos)
+ num_negative_samples = tf.constant(sample_size,
+ tf.int32) - num_positive_samples
+
+ return num_positive_samples, num_negative_samples
+
+ def _get_values_from_start_and_end(self, input_tensor, num_start_samples,
+ num_end_samples, total_num_samples):
+    """Slices first num_start_samples and last num_end_samples from input_tensor.
+
+ Args:
+ input_tensor: An int32 tensor of shape [N] to be sliced.
+ num_start_samples: Number of examples to be sliced from the beginning
+ of the input tensor.
+ num_end_samples: Number of examples to be sliced from the end of the
+ input tensor.
+      total_num_samples: Sum of num_start_samples and num_end_samples. This
+ should be a scalar.
+
+ Returns:
+ A tensor containing the first num_start_samples and last num_end_samples
+ from input_tensor.
+
+ """
+ input_length = tf.shape(input_tensor)[0]
+ start_positions = tf.less(tf.range(input_length), num_start_samples)
+ end_positions = tf.greater_equal(
+ tf.range(input_length), input_length - num_end_samples)
+ selected_positions = tf.logical_or(start_positions, end_positions)
+ selected_positions = tf.cast(selected_positions, tf.float32)
+ indexed_positions = tf.multiply(tf.cumsum(selected_positions),
+ selected_positions)
+ one_hot_selector = tf.one_hot(tf.cast(indexed_positions, tf.int32) - 1,
+ total_num_samples,
+ dtype=tf.float32)
+ return tf.cast(tf.tensordot(tf.cast(input_tensor, tf.float32),
+ one_hot_selector, axes=[0, 0]), tf.int32)
+
+ def _static_subsample(self, indicator, batch_size, labels):
+ """Returns subsampled minibatch.
+
+ Args:
+ indicator: boolean tensor of shape [N] whose True entries can be sampled.
+        N should be a compile time constant.
+ batch_size: desired batch size. This scalar cannot be None.
+ labels: boolean tensor of shape [N] denoting positive(=True) and negative
+        (=False) examples. N should be a compile time constant.
+
+ Returns:
+ sampled_idx_indicator: boolean tensor of shape [N], True for entries which
+ are sampled. It ensures the length of output of the subsample is always
+ batch_size, even when number of examples set to True in indicator is
+ less than batch_size.
+
+ Raises:
+ ValueError: if labels and indicator are not 1D boolean tensors.
+ """
+ # Check if indicator and labels have a static size.
+ if not indicator.shape.is_fully_defined():
+      raise ValueError('indicator must be static in shape when is_static is '
+                       'True')
+ if not labels.shape.is_fully_defined():
+      raise ValueError('labels must be static in shape when is_static is '
+                       'True')
+ if not isinstance(batch_size, int):
+      raise ValueError('batch_size has to be an integer when is_static is '
+                       'True.')
+
+ input_length = tf.shape(indicator)[0]
+
+ # Set the number of examples set True in indicator to be at least
+ # batch_size.
+ num_true_sampled = tf.reduce_sum(tf.cast(indicator, tf.float32))
+ additional_false_sample = tf.less_equal(
+ tf.cumsum(tf.cast(tf.logical_not(indicator), tf.float32)),
+ batch_size - num_true_sampled)
+ indicator = tf.logical_or(indicator, additional_false_sample)
+
+ # Shuffle indicator and label. Need to store the permutation to restore the
+ # order post sampling.
+ permutation = tf.random_shuffle(tf.range(input_length))
+ indicator = ops.matmul_gather_on_zeroth_axis(
+ tf.cast(indicator, tf.float32), permutation)
+ labels = ops.matmul_gather_on_zeroth_axis(
+ tf.cast(labels, tf.float32), permutation)
+
+ # index (starting from 1) when indicator is True, 0 when False
+ indicator_idx = tf.where(
+ tf.cast(indicator, tf.bool), tf.range(1, input_length + 1),
+ tf.zeros(input_length, tf.int32))
+
+ # Replace -1 for negative, +1 for positive labels
+ signed_label = tf.where(
+ tf.cast(labels, tf.bool), tf.ones(input_length, tf.int32),
+ tf.scalar_mul(-1, tf.ones(input_length, tf.int32)))
+ # negative of index for negative label, positive index for positive label,
+ # 0 when indicator is False.
+ signed_indicator_idx = tf.multiply(indicator_idx, signed_label)
+ sorted_signed_indicator_idx = tf.nn.top_k(
+ signed_indicator_idx, input_length, sorted=True).values
+
+ [num_positive_samples,
+ num_negative_samples] = self._get_num_pos_neg_samples(
+ sorted_signed_indicator_idx, batch_size)
+
+ sampled_idx = self._get_values_from_start_and_end(
+ sorted_signed_indicator_idx, num_positive_samples,
+ num_negative_samples, batch_size)
+
+ # Shift the indices to start from 0 and remove any samples that are set as
+ # False.
+ sampled_idx = tf.abs(sampled_idx) - tf.ones(batch_size, tf.int32)
+ sampled_idx = tf.multiply(
+ tf.cast(tf.greater_equal(sampled_idx, tf.constant(0)), tf.int32),
+ sampled_idx)
+
+ sampled_idx_indicator = tf.cast(tf.reduce_sum(
+ tf.one_hot(sampled_idx, depth=input_length),
+ axis=0), tf.bool)
+
+ # project back the order based on stored permutations
+ reprojections = tf.one_hot(permutation, depth=input_length,
+ dtype=tf.float32)
+ return tf.cast(tf.tensordot(
+ tf.cast(sampled_idx_indicator, tf.float32),
+ reprojections, axes=[0, 0]), tf.bool)
+
+ def subsample(self, indicator, batch_size, labels, scope=None):
+ """Returns subsampled minibatch.
+
+ Args:
+ indicator: boolean tensor of shape [N] whose True entries can be sampled.
+ batch_size: desired batch size. If None, keeps all positive samples and
+ randomly selects negative samples so that the positive sample fraction
+        matches self._positive_fraction. It cannot be None if is_static is
+        True.
+ labels: boolean tensor of shape [N] denoting positive(=True) and negative
+ (=False) examples.
+ scope: name scope.
+
+ Returns:
+ sampled_idx_indicator: boolean tensor of shape [N], True for entries which
+ are sampled.
+
+ Raises:
+ ValueError: if labels and indicator are not 1D boolean tensors.
+ """
+ if len(indicator.get_shape().as_list()) != 1:
+ raise ValueError('indicator must be 1 dimensional, got a tensor of '
+ 'shape %s' % indicator.get_shape())
+ if len(labels.get_shape().as_list()) != 1:
+ raise ValueError('labels must be 1 dimensional, got a tensor of '
+ 'shape %s' % labels.get_shape())
+ if labels.dtype != tf.bool:
+ raise ValueError('labels should be of type bool. Received: %s' %
+ labels.dtype)
+ if indicator.dtype != tf.bool:
+ raise ValueError('indicator should be of type bool. Received: %s' %
+ indicator.dtype)
+ with tf.name_scope(scope, 'BalancedPositiveNegativeSampler'):
+ if self._is_static:
+ return self._static_subsample(indicator, batch_size, labels)
+
+ else:
+ # Only sample from indicated samples
+ negative_idx = tf.logical_not(labels)
+ positive_idx = tf.logical_and(labels, indicator)
+ negative_idx = tf.logical_and(negative_idx, indicator)
+
+ # Sample positive and negative samples separately
+ if batch_size is None:
+ max_num_pos = tf.reduce_sum(tf.to_int32(positive_idx))
+ else:
+ max_num_pos = int(self._positive_fraction * batch_size)
+ sampled_pos_idx = self.subsample_indicator(positive_idx, max_num_pos)
+ num_sampled_pos = tf.reduce_sum(tf.cast(sampled_pos_idx, tf.int32))
+ if batch_size is None:
+ negative_positive_ratio = (
+ 1 - self._positive_fraction) / self._positive_fraction
+ max_num_neg = tf.to_int32(
+ negative_positive_ratio * tf.to_float(num_sampled_pos))
+ else:
+ max_num_neg = batch_size - num_sampled_pos
+ sampled_neg_idx = self.subsample_indicator(negative_idx, max_num_neg)
+
+ return tf.logical_or(sampled_pos_idx, sampled_neg_idx)
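+
+
+# Illustrative usage sketch (an editorial addition, not part of the upstream
+# module). `indicator` marks which of the N examples may be drawn and `labels`
+# marks positives; the result is a boolean mask over the same N entries:
+#
+#   sampler = BalancedPositiveNegativeSampler(positive_fraction=0.25)
+#   sampled = sampler.subsample(indicator, batch_size=64, labels=labels)
+#   # At most 0.25 * 64 = 16 positives are kept; the remaining slots of the
+#   # 64-example batch are filled with negatives drawn from the indicated
+#   # examples.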
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/balanced_positive_negative_sampler_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/balanced_positive_negative_sampler_test.py"
new file mode 100644
index 00000000..1df28e4c
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/balanced_positive_negative_sampler_test.py"
@@ -0,0 +1,204 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for object_detection.core.balanced_positive_negative_sampler."""
+
+import numpy as np
+import tensorflow as tf
+
+from object_detection.core import balanced_positive_negative_sampler
+from object_detection.utils import test_case
+
+
+class BalancedPositiveNegativeSamplerTest(test_case.TestCase):
+
+ def test_subsample_all_examples_dynamic(self):
+ numpy_labels = np.random.permutation(300)
+ indicator = tf.constant(np.ones(300) == 1)
+ numpy_labels = (numpy_labels - 200) > 0
+
+ labels = tf.constant(numpy_labels)
+
+ sampler = (
+ balanced_positive_negative_sampler.BalancedPositiveNegativeSampler())
+ is_sampled = sampler.subsample(indicator, 64, labels)
+ with self.test_session() as sess:
+ is_sampled = sess.run(is_sampled)
+ self.assertTrue(sum(is_sampled) == 64)
+ self.assertTrue(sum(np.logical_and(numpy_labels, is_sampled)) == 32)
+ self.assertTrue(sum(np.logical_and(
+ np.logical_not(numpy_labels), is_sampled)) == 32)
+
+ def test_subsample_all_examples_static(self):
+ numpy_labels = np.random.permutation(300)
+ indicator = np.array(np.ones(300) == 1, np.bool)
+ numpy_labels = (numpy_labels - 200) > 0
+
+ labels = np.array(numpy_labels, np.bool)
+
+ def graph_fn(indicator, labels):
+ sampler = (
+ balanced_positive_negative_sampler.BalancedPositiveNegativeSampler(
+ is_static=True))
+ return sampler.subsample(indicator, 64, labels)
+
+ is_sampled = self.execute(graph_fn, [indicator, labels])
+ self.assertTrue(sum(is_sampled) == 64)
+ self.assertTrue(sum(np.logical_and(numpy_labels, is_sampled)) == 32)
+ self.assertTrue(sum(np.logical_and(
+ np.logical_not(numpy_labels), is_sampled)) == 32)
+
+ def test_subsample_selection_dynamic(self):
+ # Test random sampling when only some examples can be sampled:
+ # 100 samples, 20 positives, 10 positives cannot be sampled
+ numpy_labels = np.arange(100)
+ numpy_indicator = numpy_labels < 90
+ indicator = tf.constant(numpy_indicator)
+ numpy_labels = (numpy_labels - 80) >= 0
+
+ labels = tf.constant(numpy_labels)
+
+ sampler = (
+ balanced_positive_negative_sampler.BalancedPositiveNegativeSampler())
+ is_sampled = sampler.subsample(indicator, 64, labels)
+ with self.test_session() as sess:
+ is_sampled = sess.run(is_sampled)
+ self.assertTrue(sum(is_sampled) == 64)
+ self.assertTrue(sum(np.logical_and(numpy_labels, is_sampled)) == 10)
+ self.assertTrue(sum(np.logical_and(
+ np.logical_not(numpy_labels), is_sampled)) == 54)
+ self.assertAllEqual(is_sampled, np.logical_and(is_sampled,
+ numpy_indicator))
+
+ def test_subsample_selection_static(self):
+ # Test random sampling when only some examples can be sampled:
+ # 100 samples, 20 positives, 10 positives cannot be sampled.
+ numpy_labels = np.arange(100)
+ numpy_indicator = numpy_labels < 90
+ indicator = np.array(numpy_indicator, np.bool)
+ numpy_labels = (numpy_labels - 80) >= 0
+
+ labels = np.array(numpy_labels, np.bool)
+
+ def graph_fn(indicator, labels):
+ sampler = (
+ balanced_positive_negative_sampler.BalancedPositiveNegativeSampler(
+ is_static=True))
+ return sampler.subsample(indicator, 64, labels)
+
+ is_sampled = self.execute(graph_fn, [indicator, labels])
+ self.assertTrue(sum(is_sampled) == 64)
+ self.assertTrue(sum(np.logical_and(numpy_labels, is_sampled)) == 10)
+ self.assertTrue(sum(np.logical_and(
+ np.logical_not(numpy_labels), is_sampled)) == 54)
+ self.assertAllEqual(is_sampled, np.logical_and(is_sampled, numpy_indicator))
+
+ def test_subsample_selection_larger_batch_size_dynamic(self):
+    # Test random sampling when the total number of examples that can be
+    # sampled is less than the batch size:
+ # 100 samples, 50 positives, 40 positives cannot be sampled, batch size 64.
+ numpy_labels = np.arange(100)
+ numpy_indicator = numpy_labels < 60
+ indicator = tf.constant(numpy_indicator)
+ numpy_labels = (numpy_labels - 50) >= 0
+
+ labels = tf.constant(numpy_labels)
+
+ sampler = (
+ balanced_positive_negative_sampler.BalancedPositiveNegativeSampler())
+ is_sampled = sampler.subsample(indicator, 64, labels)
+ with self.test_session() as sess:
+ is_sampled = sess.run(is_sampled)
+ self.assertTrue(sum(is_sampled) == 60)
+ self.assertTrue(sum(np.logical_and(numpy_labels, is_sampled)) == 10)
+ self.assertTrue(
+ sum(np.logical_and(np.logical_not(numpy_labels), is_sampled)) == 50)
+ self.assertAllEqual(is_sampled, np.logical_and(is_sampled,
+ numpy_indicator))
+
+ def test_subsample_selection_larger_batch_size_static(self):
+    # Test random sampling when the total number of examples that can be
+    # sampled is less than the batch size:
+ # 100 samples, 50 positives, 40 positives cannot be sampled, batch size 64.
+ # It should still return 64 samples, with 4 of them that couldn't have been
+ # sampled.
+ numpy_labels = np.arange(100)
+ numpy_indicator = numpy_labels < 60
+ indicator = np.array(numpy_indicator, np.bool)
+ numpy_labels = (numpy_labels - 50) >= 0
+
+ labels = np.array(numpy_labels, np.bool)
+
+ def graph_fn(indicator, labels):
+ sampler = (
+ balanced_positive_negative_sampler.BalancedPositiveNegativeSampler(
+ is_static=True))
+ return sampler.subsample(indicator, 64, labels)
+
+ is_sampled = self.execute(graph_fn, [indicator, labels])
+ self.assertTrue(sum(is_sampled) == 64)
+ self.assertTrue(sum(np.logical_and(numpy_labels, is_sampled)) >= 10)
+ self.assertTrue(
+ sum(np.logical_and(np.logical_not(numpy_labels), is_sampled)) >= 50)
+ self.assertTrue(sum(np.logical_and(is_sampled, numpy_indicator)) == 60)
+
+ def test_subsample_selection_no_batch_size(self):
+ # Test random sampling when only some examples can be sampled:
+ # 1000 samples, 6 positives (5 can be sampled).
+ numpy_labels = np.arange(1000)
+ numpy_indicator = numpy_labels < 999
+ indicator = tf.constant(numpy_indicator)
+ numpy_labels = (numpy_labels - 994) >= 0
+
+ labels = tf.constant(numpy_labels)
+
+ sampler = (balanced_positive_negative_sampler.
+ BalancedPositiveNegativeSampler(0.01))
+ is_sampled = sampler.subsample(indicator, None, labels)
+ with self.test_session() as sess:
+ is_sampled = sess.run(is_sampled)
+ self.assertTrue(sum(is_sampled) == 500)
+ self.assertTrue(sum(np.logical_and(numpy_labels, is_sampled)) == 5)
+ self.assertTrue(sum(np.logical_and(
+ np.logical_not(numpy_labels), is_sampled)) == 495)
+ self.assertAllEqual(is_sampled, np.logical_and(is_sampled,
+ numpy_indicator))
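+    # The expected counts above are consistent with positive_fraction=0.01:
+    # the 5 positives that can be sampled set the total at 5 / 0.01 = 500
+    # examples, and the remaining 495 sampled examples are negatives.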
+
+ def test_subsample_selection_no_batch_size_static(self):
+ labels = tf.constant([[True, False, False]])
+ indicator = tf.constant([True, False, True])
+ sampler = (
+ balanced_positive_negative_sampler.BalancedPositiveNegativeSampler())
+ with self.assertRaises(ValueError):
+ sampler.subsample(indicator, None, labels)
+
+ def test_raises_error_with_incorrect_label_shape(self):
+ labels = tf.constant([[True, False, False]])
+ indicator = tf.constant([True, False, True])
+ sampler = (balanced_positive_negative_sampler.
+ BalancedPositiveNegativeSampler())
+ with self.assertRaises(ValueError):
+ sampler.subsample(indicator, 64, labels)
+
+ def test_raises_error_with_incorrect_indicator_shape(self):
+ labels = tf.constant([True, False, False])
+ indicator = tf.constant([[True, False, True]])
+ sampler = (balanced_positive_negative_sampler.
+ BalancedPositiveNegativeSampler())
+ with self.assertRaises(ValueError):
+ sampler.subsample(indicator, 64, labels)
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/batcher.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/batcher.py"
new file mode 100644
index 00000000..c5dfb712
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/batcher.py"
@@ -0,0 +1,136 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Provides functions to batch a dictionary of input tensors."""
+import collections
+
+import tensorflow as tf
+
+from object_detection.core import prefetcher
+
+rt_shape_str = '_runtime_shapes'
+
+
+class BatchQueue(object):
+ """BatchQueue class.
+
+ This class creates a batch queue to asynchronously enqueue tensors_dict.
+ It also adds a FIFO prefetcher so that the batches are readily available
+ for the consumers. Dequeue ops for a BatchQueue object can be created via
+ the Dequeue method which evaluates to a batch of tensor_dict.
+
+ Example input pipeline with batching:
+ ------------------------------------
+ key, string_tensor = slim.parallel_reader.parallel_read(...)
+ tensor_dict = decoder.decode(string_tensor)
+ tensor_dict = preprocessor.preprocess(tensor_dict, ...)
+ batch_queue = batcher.BatchQueue(tensor_dict,
+ batch_size=32,
+ batch_queue_capacity=2000,
+ num_batch_queue_threads=8,
+ prefetch_queue_capacity=20)
+ tensor_dict = batch_queue.dequeue()
+ outputs = Model(tensor_dict)
+ ...
+ -----------------------------------
+
+ Notes:
+ -----
+ This class batches tensors of unequal sizes by zero padding and unpadding
+ them after generating a batch. This can be computationally expensive when
+ batching tensors (such as images) that are of vastly different sizes. So it is
+ recommended that the shapes of such tensors be fully defined in tensor_dict
+ while other lightweight tensors such as bounding box corners and class labels
+ can be of varying sizes. Use either crop or resize operations to fully define
+ the shape of an image in tensor_dict.
+
+ It is also recommended to perform any preprocessing operations on tensors
+ before passing to BatchQueue and subsequently calling the Dequeue method.
+
+ Another caveat is that this class does not read the last batch if it is not
+ full. The current implementation makes it hard to support that use case. So,
+ for evaluation, when it is critical to run all the examples through your
+ network use the input pipeline example mentioned in core/prefetcher.py.
+ """
+
+ def __init__(self, tensor_dict, batch_size, batch_queue_capacity,
+ num_batch_queue_threads, prefetch_queue_capacity):
+ """Constructs a batch queue holding tensor_dict.
+
+ Args:
+ tensor_dict: dictionary of tensors to batch.
+ batch_size: batch size.
+ batch_queue_capacity: max capacity of the queue from which the tensors are
+ batched.
+ num_batch_queue_threads: number of threads to use for batching.
+ prefetch_queue_capacity: max capacity of the queue used to prefetch
+ assembled batches.
+ """
+ # Remember static shapes to set shapes of batched tensors.
+ static_shapes = collections.OrderedDict(
+ {key: tensor.get_shape() for key, tensor in tensor_dict.items()})
+ # Remember runtime shapes to unpad tensors after batching.
+ runtime_shapes = collections.OrderedDict(
+ {(key + rt_shape_str): tf.shape(tensor)
+ for key, tensor in tensor_dict.items()})
+
+ all_tensors = tensor_dict
+ all_tensors.update(runtime_shapes)
+ batched_tensors = tf.train.batch(
+ all_tensors,
+ capacity=batch_queue_capacity,
+ batch_size=batch_size,
+ dynamic_pad=True,
+ num_threads=num_batch_queue_threads)
+
+ self._queue = prefetcher.prefetch(batched_tensors,
+ prefetch_queue_capacity)
+ self._static_shapes = static_shapes
+ self._batch_size = batch_size
+
+ def dequeue(self):
+ """Dequeues a batch of tensor_dict from the BatchQueue.
+
+ TODO: use allow_smaller_final_batch to allow running over the whole eval set
+
+ Returns:
+ A list of tensor_dicts of the requested batch_size.
+ """
+ batched_tensors = self._queue.dequeue()
+ # Separate input tensors from tensors containing their runtime shapes.
+ tensors = {}
+ shapes = {}
+ for key, batched_tensor in batched_tensors.items():
+ unbatched_tensor_list = tf.unstack(batched_tensor)
+ for i, unbatched_tensor in enumerate(unbatched_tensor_list):
+ if rt_shape_str in key:
+ shapes[(key[:-len(rt_shape_str)], i)] = unbatched_tensor
+ else:
+ tensors[(key, i)] = unbatched_tensor
+
+ # Undo that padding using shapes and create a list of size `batch_size` that
+ # contains tensor dictionaries.
+ tensor_dict_list = []
+ batch_size = self._batch_size
+ for batch_id in range(batch_size):
+ tensor_dict = {}
+ for key in self._static_shapes:
+ tensor_dict[key] = tf.slice(tensors[(key, batch_id)],
+ tf.zeros_like(shapes[(key, batch_id)]),
+ shapes[(key, batch_id)])
+ tensor_dict[key].set_shape(self._static_shapes[key])
+ tensor_dict_list.append(tensor_dict)
+
+ return tensor_dict_list
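+
+
+# Illustrative sketch (not part of the original module): dequeue() above
+# removes the zero padding added by tf.train.batch(dynamic_pad=True) by
+# slicing every tensor back to its recorded runtime shape, e.g.
+#
+#   padded = tf.constant([[1, 2], [3, 4], [0, 0]])   # padded to shape [3, 2]
+#   runtime_shape = tf.constant([2, 2])               # shape before padding
+#   unpadded = tf.slice(padded, tf.zeros_like(runtime_shape), runtime_shape)
+#   # -> [[1, 2], [3, 4]]
+#
+# which mirrors the tf.slice(..., tf.zeros_like(shape), shape) call in
+# dequeue().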
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/batcher_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/batcher_test.py"
new file mode 100644
index 00000000..61b4390b
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/batcher_test.py"
@@ -0,0 +1,158 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for object_detection.core.batcher."""
+
+import numpy as np
+import tensorflow as tf
+
+from object_detection.core import batcher
+
+slim = tf.contrib.slim
+
+
+class BatcherTest(tf.test.TestCase):
+
+ def test_batch_and_unpad_2d_tensors_of_different_sizes_in_1st_dimension(self):
+ with self.test_session() as sess:
+ batch_size = 3
+ num_batches = 2
+ examples = tf.Variable(tf.constant(2, dtype=tf.int32))
+ counter = examples.count_up_to(num_batches * batch_size + 2)
+ boxes = tf.tile(
+ tf.reshape(tf.range(4), [1, 4]), tf.stack([counter, tf.constant(1)]))
+ batch_queue = batcher.BatchQueue(
+ tensor_dict={'boxes': boxes},
+ batch_size=batch_size,
+ batch_queue_capacity=100,
+ num_batch_queue_threads=1,
+ prefetch_queue_capacity=100)
+ batch = batch_queue.dequeue()
+
+ for tensor_dict in batch:
+ for tensor in tensor_dict.values():
+ self.assertAllEqual([None, 4], tensor.get_shape().as_list())
+
+ tf.initialize_all_variables().run()
+ with slim.queues.QueueRunners(sess):
+ i = 2
+ for _ in range(num_batches):
+ batch_np = sess.run(batch)
+ for tensor_dict in batch_np:
+ for tensor in tensor_dict.values():
+ self.assertAllEqual(tensor, np.tile(np.arange(4), (i, 1)))
+ i += 1
+ with self.assertRaises(tf.errors.OutOfRangeError):
+ sess.run(batch)
+
+ def test_batch_and_unpad_2d_tensors_of_different_sizes_in_all_dimensions(
+ self):
+ with self.test_session() as sess:
+ batch_size = 3
+ num_batches = 2
+ examples = tf.Variable(tf.constant(2, dtype=tf.int32))
+ counter = examples.count_up_to(num_batches * batch_size + 2)
+ image = tf.reshape(
+ tf.range(counter * counter), tf.stack([counter, counter]))
+ batch_queue = batcher.BatchQueue(
+ tensor_dict={'image': image},
+ batch_size=batch_size,
+ batch_queue_capacity=100,
+ num_batch_queue_threads=1,
+ prefetch_queue_capacity=100)
+ batch = batch_queue.dequeue()
+
+ for tensor_dict in batch:
+ for tensor in tensor_dict.values():
+ self.assertAllEqual([None, None], tensor.get_shape().as_list())
+
+ tf.initialize_all_variables().run()
+ with slim.queues.QueueRunners(sess):
+ i = 2
+ for _ in range(num_batches):
+ batch_np = sess.run(batch)
+ for tensor_dict in batch_np:
+ for tensor in tensor_dict.values():
+ self.assertAllEqual(tensor, np.arange(i * i).reshape((i, i)))
+ i += 1
+ with self.assertRaises(tf.errors.OutOfRangeError):
+ sess.run(batch)
+
+ def test_batch_and_unpad_2d_tensors_of_same_size_in_all_dimensions(self):
+ with self.test_session() as sess:
+ batch_size = 3
+ num_batches = 2
+ examples = tf.Variable(tf.constant(1, dtype=tf.int32))
+ counter = examples.count_up_to(num_batches * batch_size + 1)
+ image = tf.reshape(tf.range(1, 13), [4, 3]) * counter
+ batch_queue = batcher.BatchQueue(
+ tensor_dict={'image': image},
+ batch_size=batch_size,
+ batch_queue_capacity=100,
+ num_batch_queue_threads=1,
+ prefetch_queue_capacity=100)
+ batch = batch_queue.dequeue()
+
+ for tensor_dict in batch:
+ for tensor in tensor_dict.values():
+ self.assertAllEqual([4, 3], tensor.get_shape().as_list())
+
+ tf.initialize_all_variables().run()
+ with slim.queues.QueueRunners(sess):
+ i = 1
+ for _ in range(num_batches):
+ batch_np = sess.run(batch)
+ for tensor_dict in batch_np:
+ for tensor in tensor_dict.values():
+ self.assertAllEqual(tensor, np.arange(1, 13).reshape((4, 3)) * i)
+ i += 1
+ with self.assertRaises(tf.errors.OutOfRangeError):
+ sess.run(batch)
+
+ def test_batcher_when_batch_size_is_one(self):
+ with self.test_session() as sess:
+ batch_size = 1
+ num_batches = 2
+ examples = tf.Variable(tf.constant(2, dtype=tf.int32))
+ counter = examples.count_up_to(num_batches * batch_size + 2)
+ image = tf.reshape(
+ tf.range(counter * counter), tf.stack([counter, counter]))
+ batch_queue = batcher.BatchQueue(
+ tensor_dict={'image': image},
+ batch_size=batch_size,
+ batch_queue_capacity=100,
+ num_batch_queue_threads=1,
+ prefetch_queue_capacity=100)
+ batch = batch_queue.dequeue()
+
+ for tensor_dict in batch:
+ for tensor in tensor_dict.values():
+ self.assertAllEqual([None, None], tensor.get_shape().as_list())
+
+ tf.initialize_all_variables().run()
+ with slim.queues.QueueRunners(sess):
+ i = 2
+ for _ in range(num_batches):
+ batch_np = sess.run(batch)
+ for tensor_dict in batch_np:
+ for tensor in tensor_dict.values():
+ self.assertAllEqual(tensor, np.arange(i * i).reshape((i, i)))
+ i += 1
+ with self.assertRaises(tf.errors.OutOfRangeError):
+ sess.run(batch)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/box_coder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/box_coder.py"
new file mode 100644
index 00000000..f20ac956
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/box_coder.py"
@@ -0,0 +1,151 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Base box coder.
+
+Box coders convert between coordinate frames, namely image-centric
+(with (0,0) on the top left of image) and anchor-centric (with (0,0) being
+defined by a specific anchor).
+
+Users of a BoxCoder can call two methods:
+ encode: which encodes a box with respect to a given anchor
+ (or rather, a tensor of boxes wrt a corresponding tensor of anchors) and
+ decode: which inverts this encoding with a decode operation.
+In both cases, the arguments are assumed to be in 1-1 correspondence already;
+it is not the job of a BoxCoder to perform matching.
+"""
+from abc import ABCMeta
+from abc import abstractmethod
+from abc import abstractproperty
+
+import tensorflow as tf
+
+
+# Box coder types.
+FASTER_RCNN = 'faster_rcnn'
+KEYPOINT = 'keypoint'
+MEAN_STDDEV = 'mean_stddev'
+SQUARE = 'square'
+
+
+class BoxCoder(object):
+ """Abstract base class for box coder."""
+ __metaclass__ = ABCMeta
+
+ @abstractproperty
+ def code_size(self):
+ """Return the size of each code.
+
+ This number is a constant and should agree with the output of the `encode`
+ op (e.g. if rel_codes is the output of self.encode(...), then it should have
+ shape [N, code_size()]). This abstractproperty should be overridden by
+ implementations.
+
+ Returns:
+ an integer constant
+ """
+ pass
+
+ def encode(self, boxes, anchors):
+ """Encode a box list relative to an anchor collection.
+
+ Args:
+ boxes: BoxList holding N boxes to be encoded
+ anchors: BoxList of N anchors
+
+ Returns:
+ a tensor representing N relative-encoded boxes
+ """
+ with tf.name_scope('Encode'):
+ return self._encode(boxes, anchors)
+
+ def decode(self, rel_codes, anchors):
+ """Decode boxes that are encoded relative to an anchor collection.
+
+ Args:
+ rel_codes: a tensor representing N relative-encoded boxes
+ anchors: BoxList of anchors
+
+ Returns:
+ boxlist: BoxList holding N boxes encoded in the ordinary way (i.e.,
+ with corners y_min, x_min, y_max, x_max)
+ """
+ with tf.name_scope('Decode'):
+ return self._decode(rel_codes, anchors)
+
+ @abstractmethod
+ def _encode(self, boxes, anchors):
+    """Method to be overridden by implementations.
+
+ Args:
+ boxes: BoxList holding N boxes to be encoded
+ anchors: BoxList of N anchors
+
+ Returns:
+ a tensor representing N relative-encoded boxes
+ """
+ pass
+
+ @abstractmethod
+ def _decode(self, rel_codes, anchors):
+    """Method to be overridden by implementations.
+
+ Args:
+ rel_codes: a tensor representing N relative-encoded boxes
+ anchors: BoxList of anchors
+
+ Returns:
+ boxlist: BoxList holding N boxes encoded in the ordinary way (i.e.,
+ with corners y_min, x_min, y_max, x_max)
+ """
+ pass
+
+
+def batch_decode(encoded_boxes, box_coder, anchors):
+ """Decode a batch of encoded boxes.
+
+ This op takes a batch of encoded bounding boxes and transforms
+ them to a batch of bounding boxes specified by their corners in
+ the order of [y_min, x_min, y_max, x_max].
+
+ Args:
+ encoded_boxes: a float32 tensor of shape [batch_size, num_anchors,
+ code_size] representing the location of the objects.
+ box_coder: a BoxCoder object.
+ anchors: a BoxList of anchors used to encode `encoded_boxes`.
+
+ Returns:
+    decoded_boxes: a float32 tensor of shape [batch_size, num_anchors, 4]
+      representing the corners of the objects in the order
+      of [y_min, x_min, y_max, x_max].
+
+ Raises:
+ ValueError: if batch sizes of the inputs are inconsistent, or if
+ the number of anchors inferred from encoded_boxes and anchors are
+ inconsistent.
+ """
+ encoded_boxes.get_shape().assert_has_rank(3)
+ if encoded_boxes.get_shape()[1].value != anchors.num_boxes_static():
+ raise ValueError('The number of anchors inferred from encoded_boxes'
+ ' and anchors are inconsistent: shape[1] of encoded_boxes'
+ ' %s should be equal to the number of anchors: %s.' %
+ (encoded_boxes.get_shape()[1].value,
+ anchors.num_boxes_static()))
+
+ decoded_boxes = tf.stack([
+ box_coder.decode(boxes, anchors).get()
+ for boxes in tf.unstack(encoded_boxes)
+ ])
+ return decoded_boxes
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/box_coder_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/box_coder_test.py"
new file mode 100644
index 00000000..c087a325
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/box_coder_test.py"
@@ -0,0 +1,61 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for object_detection.core.box_coder."""
+
+import tensorflow as tf
+
+from object_detection.core import box_coder
+from object_detection.core import box_list
+
+
+class MockBoxCoder(box_coder.BoxCoder):
+ """Test BoxCoder that encodes/decodes using the multiply-by-two function."""
+
+ def code_size(self):
+ return 4
+
+ def _encode(self, boxes, anchors):
+ return 2.0 * boxes.get()
+
+ def _decode(self, rel_codes, anchors):
+ return box_list.BoxList(rel_codes / 2.0)
+
+
+class BoxCoderTest(tf.test.TestCase):
+
+ def test_batch_decode(self):
+ mock_anchor_corners = tf.constant(
+ [[0, 0.1, 0.2, 0.3], [0.2, 0.4, 0.4, 0.6]], tf.float32)
+ mock_anchors = box_list.BoxList(mock_anchor_corners)
+ mock_box_coder = MockBoxCoder()
+
+ expected_boxes = [[[0.0, 0.1, 0.5, 0.6], [0.5, 0.6, 0.7, 0.8]],
+ [[0.1, 0.2, 0.3, 0.4], [0.7, 0.8, 0.9, 1.0]]]
+
+ encoded_boxes_list = [mock_box_coder.encode(
+ box_list.BoxList(tf.constant(boxes)), mock_anchors)
+ for boxes in expected_boxes]
+ encoded_boxes = tf.stack(encoded_boxes_list)
+ decoded_boxes = box_coder.batch_decode(
+ encoded_boxes, mock_box_coder, mock_anchors)
+
+ with self.test_session() as sess:
+ decoded_boxes_result = sess.run(decoded_boxes)
+ self.assertAllClose(expected_boxes, decoded_boxes_result)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/box_list.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/box_list.py"
new file mode 100644
index 00000000..c0196f05
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/box_list.py"
@@ -0,0 +1,207 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Bounding Box List definition.
+
+BoxList represents a list of bounding boxes as tensorflow
+tensors, where each bounding box is represented as a row of 4 numbers,
+[y_min, x_min, y_max, x_max]. It is assumed that all bounding boxes
+within a given list correspond to a single image. See also
+box_list_ops.py for common box related operations (such as area, iou, etc).
+
+Optionally, users can add additional related fields (such as weights).
+We assume the following things to be true about fields:
+* they correspond to boxes in the box_list along the 0th dimension
+* they have inferrable rank at graph construction time
+* all dimensions except for possibly the 0th can be inferred
+ (i.e., not None) at graph construction time.
+
+Some other notes:
+ * Following tensorflow conventions, we use height, width ordering,
+ and correspondingly, y,x (or ymin, xmin, ymax, xmax) ordering
+ * Tensors are always provided as (flat) [N, 4] tensors.
+"""
+
+import tensorflow as tf
+
+
+class BoxList(object):
+ """Box collection."""
+
+ def __init__(self, boxes):
+ """Constructs box collection.
+
+ Args:
+ boxes: a tensor of shape [N, 4] representing box corners
+
+ Raises:
+ ValueError: if invalid dimensions for bbox data or if bbox data is not in
+ float32 format.
+ """
+ if len(boxes.get_shape()) != 2 or boxes.get_shape()[-1] != 4:
+ raise ValueError('Invalid dimensions for box data.')
+ if boxes.dtype != tf.float32:
+ raise ValueError('Invalid tensor type: should be tf.float32')
+ self.data = {'boxes': boxes}
+
+ def num_boxes(self):
+ """Returns number of boxes held in collection.
+
+ Returns:
+ a tensor representing the number of boxes held in the collection.
+ """
+ return tf.shape(self.data['boxes'])[0]
+
+ def num_boxes_static(self):
+ """Returns number of boxes held in collection.
+
+ This number is inferred at graph construction time rather than run-time.
+
+ Returns:
+ Number of boxes held in collection (integer) or None if this is not
+ inferrable at graph construction time.
+ """
+ return self.data['boxes'].get_shape()[0].value
+
+ def get_all_fields(self):
+ """Returns all fields."""
+ return self.data.keys()
+
+ def get_extra_fields(self):
+ """Returns all non-box fields (i.e., everything not named 'boxes')."""
+ return [k for k in self.data.keys() if k != 'boxes']
+
+ def add_field(self, field, field_data):
+ """Add field to box list.
+
+ This method can be used to add related box data such as
+ weights/labels, etc.
+
+ Args:
+ field: a string key to access the data via `get`
+ field_data: a tensor containing the data to store in the BoxList
+ """
+ self.data[field] = field_data
+
+  def has_field(self, field):
+    """Returns True if the box collection contains the given field."""
+    return field in self.data
+
+ def get(self):
+ """Convenience function for accessing box coordinates.
+
+ Returns:
+ a tensor with shape [N, 4] representing box coordinates.
+ """
+ return self.get_field('boxes')
+
+ def set(self, boxes):
+ """Convenience function for setting box coordinates.
+
+ Args:
+ boxes: a tensor of shape [N, 4] representing box corners
+
+ Raises:
+ ValueError: if invalid dimensions for bbox data
+ """
+ if len(boxes.get_shape()) != 2 or boxes.get_shape()[-1] != 4:
+ raise ValueError('Invalid dimensions for box data.')
+ self.data['boxes'] = boxes
+
+ def get_field(self, field):
+ """Accesses a box collection and associated fields.
+
+    This function returns the tensor stored under the given field name; the
+    box coordinates themselves are stored under the field 'boxes'.
+
+    Args:
+      field: a string specifying the field to be accessed.
+
+ Returns:
+ a tensor representing the box collection or an associated field.
+
+ Raises:
+ ValueError: if invalid field
+ """
+ if not self.has_field(field):
+ raise ValueError('field ' + str(field) + ' does not exist')
+ return self.data[field]
+
+ def set_field(self, field, value):
+ """Sets the value of a field.
+
+ Updates the field of a box_list with a given value.
+
+ Args:
+ field: (string) name of the field to set value.
+ value: the value to assign to the field.
+
+ Raises:
+ ValueError: if the box_list does not have specified field.
+ """
+ if not self.has_field(field):
+ raise ValueError('field %s does not exist' % field)
+ self.data[field] = value
+
+ def get_center_coordinates_and_sizes(self, scope=None):
+ """Computes the center coordinates, height and width of the boxes.
+
+ Args:
+ scope: name scope of the function.
+
+ Returns:
+ a list of 4 1-D tensors [ycenter, xcenter, height, width].
+ """
+ with tf.name_scope(scope, 'get_center_coordinates_and_sizes'):
+ box_corners = self.get()
+ ymin, xmin, ymax, xmax = tf.unstack(tf.transpose(box_corners))
+ width = xmax - xmin
+ height = ymax - ymin
+ ycenter = ymin + height / 2.
+ xcenter = xmin + width / 2.
+ return [ycenter, xcenter, height, width]
+
+ def transpose_coordinates(self, scope=None):
+ """Transpose the coordinate representation in a boxlist.
+
+ Args:
+ scope: name scope of the function.
+ """
+ with tf.name_scope(scope, 'transpose_coordinates'):
+ y_min, x_min, y_max, x_max = tf.split(
+ value=self.get(), num_or_size_splits=4, axis=1)
+ self.set(tf.concat([x_min, y_min, x_max, y_max], 1))
+
+ def as_tensor_dict(self, fields=None):
+ """Retrieves specified fields as a dictionary of tensors.
+
+ Args:
+ fields: (optional) list of fields to return in the dictionary.
+ If None (default), all fields are returned.
+
+ Returns:
+ tensor_dict: A dictionary of tensors specified by fields.
+
+ Raises:
+ ValueError: if specified field is not contained in boxlist.
+ """
+ tensor_dict = {}
+ if fields is None:
+ fields = self.get_all_fields()
+ for field in fields:
+ if not self.has_field(field):
+ raise ValueError('boxlist must contain all specified fields')
+ tensor_dict[field] = self.get_field(field)
+ return tensor_dict
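+
+
+# Minimal usage sketch (illustrative only, using the TF1 graph-mode API
+# assumed throughout this file):
+#
+#   corners = tf.constant([[0.0, 0.0, 0.5, 0.5],
+#                          [0.1, 0.1, 0.9, 0.9]], tf.float32)
+#   boxes = BoxList(corners)
+#   boxes.add_field('scores', tf.constant([0.9, 0.3]))
+#   ycenter, xcenter, height, width = boxes.get_center_coordinates_and_sizes()
+#   tensors = boxes.as_tensor_dict(['boxes', 'scores'])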
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/box_list_ops.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/box_list_ops.py"
new file mode 100644
index 00000000..7c6d75c8
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/box_list_ops.py"
@@ -0,0 +1,1136 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Bounding Box List operations.
+
+Example box operations that are supported:
+ * areas: compute bounding box areas
+ * iou: pairwise intersection-over-union scores
+ * sq_dist: pairwise distances between bounding boxes
+
+Whenever box_list_ops functions output a BoxList, the fields of the incoming
+BoxList are retained unless documented otherwise.
+"""
+import tensorflow as tf
+
+from object_detection.core import box_list
+from object_detection.utils import ops
+from object_detection.utils import shape_utils
+
+
+class SortOrder(object):
+ """Enum class for sort order.
+
+ Attributes:
+ ascend: ascend order.
+ descend: descend order.
+ """
+ ascend = 1
+ descend = 2
+
+
+def area(boxlist, scope=None):
+ """Computes area of boxes.
+
+ Args:
+ boxlist: BoxList holding N boxes
+ scope: name scope.
+
+ Returns:
+ a tensor with shape [N] representing box areas.
+ """
+ with tf.name_scope(scope, 'Area'):
+ y_min, x_min, y_max, x_max = tf.split(
+ value=boxlist.get(), num_or_size_splits=4, axis=1)
+ return tf.squeeze((y_max - y_min) * (x_max - x_min), [1])
+
+
+def height_width(boxlist, scope=None):
+ """Computes height and width of boxes in boxlist.
+
+ Args:
+ boxlist: BoxList holding N boxes
+ scope: name scope.
+
+ Returns:
+ Height: A tensor with shape [N] representing box heights.
+ Width: A tensor with shape [N] representing box widths.
+ """
+ with tf.name_scope(scope, 'HeightWidth'):
+ y_min, x_min, y_max, x_max = tf.split(
+ value=boxlist.get(), num_or_size_splits=4, axis=1)
+ return tf.squeeze(y_max - y_min, [1]), tf.squeeze(x_max - x_min, [1])
+
+
+def scale(boxlist, y_scale, x_scale, scope=None):
+  """Scales box coordinates in x and y dimensions.
+
+ Args:
+ boxlist: BoxList holding N boxes
+ y_scale: (float) scalar tensor
+ x_scale: (float) scalar tensor
+ scope: name scope.
+
+ Returns:
+ boxlist: BoxList holding N boxes
+ """
+ with tf.name_scope(scope, 'Scale'):
+ y_scale = tf.cast(y_scale, tf.float32)
+ x_scale = tf.cast(x_scale, tf.float32)
+ y_min, x_min, y_max, x_max = tf.split(
+ value=boxlist.get(), num_or_size_splits=4, axis=1)
+ y_min = y_scale * y_min
+ y_max = y_scale * y_max
+ x_min = x_scale * x_min
+ x_max = x_scale * x_max
+ scaled_boxlist = box_list.BoxList(
+ tf.concat([y_min, x_min, y_max, x_max], 1))
+ return _copy_extra_fields(scaled_boxlist, boxlist)
+
+
+def clip_to_window(boxlist, window, filter_nonoverlapping=True, scope=None):
+ """Clip bounding boxes to a window.
+
+ This op clips any input bounding boxes (represented by bounding box
+ corners) to a window, optionally filtering out boxes that do not
+ overlap at all with the window.
+
+ Args:
+ boxlist: BoxList holding M_in boxes
+ window: a tensor of shape [4] representing the [y_min, x_min, y_max, x_max]
+ window to which the op should clip boxes.
+ filter_nonoverlapping: whether to filter out boxes that do not overlap at
+ all with the window.
+ scope: name scope.
+
+ Returns:
+ a BoxList holding M_out boxes where M_out <= M_in
+ """
+ with tf.name_scope(scope, 'ClipToWindow'):
+ y_min, x_min, y_max, x_max = tf.split(
+ value=boxlist.get(), num_or_size_splits=4, axis=1)
+ win_y_min, win_x_min, win_y_max, win_x_max = tf.unstack(window)
+ y_min_clipped = tf.maximum(tf.minimum(y_min, win_y_max), win_y_min)
+ y_max_clipped = tf.maximum(tf.minimum(y_max, win_y_max), win_y_min)
+ x_min_clipped = tf.maximum(tf.minimum(x_min, win_x_max), win_x_min)
+ x_max_clipped = tf.maximum(tf.minimum(x_max, win_x_max), win_x_min)
+ clipped = box_list.BoxList(
+ tf.concat([y_min_clipped, x_min_clipped, y_max_clipped, x_max_clipped],
+ 1))
+ clipped = _copy_extra_fields(clipped, boxlist)
+ if filter_nonoverlapping:
+ areas = area(clipped)
+ nonzero_area_indices = tf.cast(
+ tf.reshape(tf.where(tf.greater(areas, 0.0)), [-1]), tf.int32)
+ clipped = gather(clipped, nonzero_area_indices)
+ return clipped
+
+
+def prune_outside_window(boxlist, window, scope=None):
+ """Prunes bounding boxes that fall outside a given window.
+
+ This function prunes bounding boxes that even partially fall outside the given
+ window. See also clip_to_window which only prunes bounding boxes that fall
+ completely outside the window, and clips any bounding boxes that partially
+ overflow.
+
+ Args:
+ boxlist: a BoxList holding M_in boxes.
+ window: a float tensor of shape [4] representing [ymin, xmin, ymax, xmax]
+ of the window
+ scope: name scope.
+
+ Returns:
+    pruned_boxlist: a BoxList holding M_out boxes where M_out <= M_in
+    valid_indices: a tensor with shape [M_out] indexing the valid bounding boxes
+      in the input BoxList.
+ """
+ with tf.name_scope(scope, 'PruneOutsideWindow'):
+ y_min, x_min, y_max, x_max = tf.split(
+ value=boxlist.get(), num_or_size_splits=4, axis=1)
+ win_y_min, win_x_min, win_y_max, win_x_max = tf.unstack(window)
+ coordinate_violations = tf.concat([
+ tf.less(y_min, win_y_min), tf.less(x_min, win_x_min),
+ tf.greater(y_max, win_y_max), tf.greater(x_max, win_x_max)
+ ], 1)
+ valid_indices = tf.reshape(
+ tf.where(tf.logical_not(tf.reduce_any(coordinate_violations, 1))), [-1])
+ return gather(boxlist, valid_indices), valid_indices
+
+
+def prune_completely_outside_window(boxlist, window, scope=None):
+ """Prunes bounding boxes that fall completely outside of the given window.
+
+ The function clip_to_window prunes bounding boxes that fall
+ completely outside the window, but also clips any bounding boxes that
+ partially overflow. This function does not clip partially overflowing boxes.
+
+ Args:
+ boxlist: a BoxList holding M_in boxes.
+ window: a float tensor of shape [4] representing [ymin, xmin, ymax, xmax]
+ of the window
+ scope: name scope.
+
+ Returns:
+ pruned_boxlist: a new BoxList with all bounding boxes partially or fully in
+ the window.
+ valid_indices: a tensor with shape [M_out] indexing the valid bounding boxes
+ in the input tensor.
+ """
+  with tf.name_scope(scope, 'PruneCompletelyOutsideWindow'):
+ y_min, x_min, y_max, x_max = tf.split(
+ value=boxlist.get(), num_or_size_splits=4, axis=1)
+ win_y_min, win_x_min, win_y_max, win_x_max = tf.unstack(window)
+ coordinate_violations = tf.concat([
+ tf.greater_equal(y_min, win_y_max), tf.greater_equal(x_min, win_x_max),
+ tf.less_equal(y_max, win_y_min), tf.less_equal(x_max, win_x_min)
+ ], 1)
+ valid_indices = tf.reshape(
+ tf.where(tf.logical_not(tf.reduce_any(coordinate_violations, 1))), [-1])
+ return gather(boxlist, valid_indices), valid_indices
+
+
+def intersection(boxlist1, boxlist2, scope=None):
+ """Compute pairwise intersection areas between boxes.
+
+ Args:
+ boxlist1: BoxList holding N boxes
+ boxlist2: BoxList holding M boxes
+ scope: name scope.
+
+ Returns:
+ a tensor with shape [N, M] representing pairwise intersections
+ """
+ with tf.name_scope(scope, 'Intersection'):
+ y_min1, x_min1, y_max1, x_max1 = tf.split(
+ value=boxlist1.get(), num_or_size_splits=4, axis=1)
+ y_min2, x_min2, y_max2, x_max2 = tf.split(
+ value=boxlist2.get(), num_or_size_splits=4, axis=1)
+ all_pairs_min_ymax = tf.minimum(y_max1, tf.transpose(y_max2))
+ all_pairs_max_ymin = tf.maximum(y_min1, tf.transpose(y_min2))
+ intersect_heights = tf.maximum(0.0, all_pairs_min_ymax - all_pairs_max_ymin)
+ all_pairs_min_xmax = tf.minimum(x_max1, tf.transpose(x_max2))
+ all_pairs_max_xmin = tf.maximum(x_min1, tf.transpose(x_min2))
+ intersect_widths = tf.maximum(0.0, all_pairs_min_xmax - all_pairs_max_xmin)
+ return intersect_heights * intersect_widths
+
+
+def matched_intersection(boxlist1, boxlist2, scope=None):
+ """Compute intersection areas between corresponding boxes in two boxlists.
+
+ Args:
+ boxlist1: BoxList holding N boxes
+ boxlist2: BoxList holding N boxes
+ scope: name scope.
+
+ Returns:
+ a tensor with shape [N] representing pairwise intersections
+ """
+ with tf.name_scope(scope, 'MatchedIntersection'):
+ y_min1, x_min1, y_max1, x_max1 = tf.split(
+ value=boxlist1.get(), num_or_size_splits=4, axis=1)
+ y_min2, x_min2, y_max2, x_max2 = tf.split(
+ value=boxlist2.get(), num_or_size_splits=4, axis=1)
+ min_ymax = tf.minimum(y_max1, y_max2)
+ max_ymin = tf.maximum(y_min1, y_min2)
+ intersect_heights = tf.maximum(0.0, min_ymax - max_ymin)
+ min_xmax = tf.minimum(x_max1, x_max2)
+ max_xmin = tf.maximum(x_min1, x_min2)
+ intersect_widths = tf.maximum(0.0, min_xmax - max_xmin)
+ return tf.reshape(intersect_heights * intersect_widths, [-1])
+
+
+def iou(boxlist1, boxlist2, scope=None):
+ """Computes pairwise intersection-over-union between box collections.
+
+ Args:
+ boxlist1: BoxList holding N boxes
+ boxlist2: BoxList holding M boxes
+ scope: name scope.
+
+ Returns:
+ a tensor with shape [N, M] representing pairwise iou scores.
+ """
+ with tf.name_scope(scope, 'IOU'):
+ intersections = intersection(boxlist1, boxlist2)
+ areas1 = area(boxlist1)
+ areas2 = area(boxlist2)
+ unions = (
+ tf.expand_dims(areas1, 1) + tf.expand_dims(areas2, 0) - intersections)
+ return tf.where(
+ tf.equal(intersections, 0.0),
+ tf.zeros_like(intersections), tf.truediv(intersections, unions))
+
+
+def matched_iou(boxlist1, boxlist2, scope=None):
+ """Compute intersection-over-union between corresponding boxes in boxlists.
+
+ Args:
+ boxlist1: BoxList holding N boxes
+ boxlist2: BoxList holding N boxes
+ scope: name scope.
+
+ Returns:
+ a tensor with shape [N] representing pairwise iou scores.
+ """
+ with tf.name_scope(scope, 'MatchedIOU'):
+ intersections = matched_intersection(boxlist1, boxlist2)
+ areas1 = area(boxlist1)
+ areas2 = area(boxlist2)
+ unions = areas1 + areas2 - intersections
+ return tf.where(
+ tf.equal(intersections, 0.0),
+ tf.zeros_like(intersections), tf.truediv(intersections, unions))
+
+
+def ioa(boxlist1, boxlist2, scope=None):
+ """Computes pairwise intersection-over-area between box collections.
+
+ intersection-over-area (IOA) between two boxes box1 and box2 is defined as
+ their intersection area over box2's area. Note that ioa is not symmetric,
+ that is, ioa(box1, box2) != ioa(box2, box1).
+
+ Args:
+ boxlist1: BoxList holding N boxes
+ boxlist2: BoxList holding M boxes
+ scope: name scope.
+
+ Returns:
+ a tensor with shape [N, M] representing pairwise ioa scores.
+ """
+ with tf.name_scope(scope, 'IOA'):
+ intersections = intersection(boxlist1, boxlist2)
+ areas = tf.expand_dims(area(boxlist2), 0)
+ return tf.truediv(intersections, areas)
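+
+  # Worked example (illustrative): let a = [0, 0, 1, 1] (area 1.0) and
+  # b = [0, 0, 0.5, 1] (area 0.5); their intersection area is 0.5, so
+  # ioa([a], [b]) = 0.5 / area(b) = 1.0 while ioa([b], [a]) = 0.5 / area(a)
+  # = 0.5, illustrating the asymmetry noted in the docstring above.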
+
+
+def prune_non_overlapping_boxes(
+ boxlist1, boxlist2, min_overlap=0.0, scope=None):
+ """Prunes the boxes in boxlist1 that overlap less than thresh with boxlist2.
+
+  For each box in boxlist1, we want its IOA with at least one of the boxes in
+  boxlist2 to be at least min_overlap. If it is not, we remove it.
+
+ Args:
+ boxlist1: BoxList holding N boxes.
+ boxlist2: BoxList holding M boxes.
+ min_overlap: Minimum required overlap between boxes, to count them as
+ overlapping.
+ scope: name scope.
+
+ Returns:
+ new_boxlist1: A pruned boxlist with size [N', 4].
+ keep_inds: A tensor with shape [N'] indexing kept bounding boxes in the
+ first input BoxList `boxlist1`.
+ """
+ with tf.name_scope(scope, 'PruneNonOverlappingBoxes'):
+ ioa_ = ioa(boxlist2, boxlist1) # [M, N] tensor
+ ioa_ = tf.reduce_max(ioa_, reduction_indices=[0]) # [N] tensor
+ keep_bool = tf.greater_equal(ioa_, tf.constant(min_overlap))
+ keep_inds = tf.squeeze(tf.where(keep_bool), squeeze_dims=[1])
+ new_boxlist1 = gather(boxlist1, keep_inds)
+ return new_boxlist1, keep_inds
+
+
+def prune_small_boxes(boxlist, min_side, scope=None):
+ """Prunes small boxes in the boxlist which have a side smaller than min_side.
+
+ Args:
+ boxlist: BoxList holding N boxes.
+ min_side: Minimum width AND height of box to survive pruning.
+ scope: name scope.
+
+ Returns:
+ A pruned boxlist.
+ """
+ with tf.name_scope(scope, 'PruneSmallBoxes'):
+ height, width = height_width(boxlist)
+ is_valid = tf.logical_and(tf.greater_equal(width, min_side),
+ tf.greater_equal(height, min_side))
+ return gather(boxlist, tf.reshape(tf.where(is_valid), [-1]))
+
+
+def change_coordinate_frame(boxlist, window, scope=None):
+ """Change coordinate frame of the boxlist to be relative to window's frame.
+
+ Given a window of the form [ymin, xmin, ymax, xmax],
+ changes bounding box coordinates from boxlist to be relative to this window
+ (e.g., the min corner maps to (0,0) and the max corner maps to (1,1)).
+
+ An example use case is data augmentation: where we are given groundtruth
+ boxes (boxlist) and would like to randomly crop the image to some
+ window (window). In this case we need to change the coordinate frame of
+ each groundtruth box to be relative to this new window.
+
+ Args:
+ boxlist: A BoxList object holding N boxes.
+ window: A rank 1 tensor [4].
+ scope: name scope.
+
+ Returns:
+ Returns a BoxList object with N boxes.
+ """
+ with tf.name_scope(scope, 'ChangeCoordinateFrame'):
+ win_height = window[2] - window[0]
+ win_width = window[3] - window[1]
+ boxlist_new = scale(box_list.BoxList(
+ boxlist.get() - [window[0], window[1], window[0], window[1]]),
+ 1.0 / win_height, 1.0 / win_width)
+ boxlist_new = _copy_extra_fields(boxlist_new, boxlist)
+ return boxlist_new
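+
+  # Worked example (illustrative): with window = [0.25, 0.25, 0.75, 0.75],
+  # a box [0.5, 0.5, 0.75, 0.75] is first shifted to [0.25, 0.25, 0.5, 0.5]
+  # and then scaled by 1 / 0.5 along each axis, giving [0.5, 0.5, 1.0, 1.0]
+  # in the window's coordinate frame.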
+
+
+def sq_dist(boxlist1, boxlist2, scope=None):
+ """Computes the pairwise squared distances between box corners.
+
+ This op treats each box as if it were a point in a 4d Euclidean space and
+ computes pairwise squared distances.
+
+ Mathematically, we are given two matrices of box coordinates X and Y,
+ where X(i,:) is the i'th row of X, containing the 4 numbers defining the
+ corners of the i'th box in boxlist1. Similarly Y(j,:) corresponds to
+ boxlist2. We compute
+ Z(i,j) = ||X(i,:) - Y(j,:)||^2
+ = ||X(i,:)||^2 + ||Y(j,:)||^2 - 2 X(i,:)' * Y(j,:),
+
+ Args:
+ boxlist1: BoxList holding N boxes
+ boxlist2: BoxList holding M boxes
+ scope: name scope.
+
+ Returns:
+ a tensor with shape [N, M] representing pairwise distances
+ """
+ with tf.name_scope(scope, 'SqDist'):
+ sqnorm1 = tf.reduce_sum(tf.square(boxlist1.get()), 1, keep_dims=True)
+ sqnorm2 = tf.reduce_sum(tf.square(boxlist2.get()), 1, keep_dims=True)
+ innerprod = tf.matmul(boxlist1.get(), boxlist2.get(),
+ transpose_a=False, transpose_b=True)
+ return sqnorm1 + tf.transpose(sqnorm2) - 2.0 * innerprod
+
+
+def boolean_mask(boxlist, indicator, fields=None, scope=None,
+ use_static_shapes=False, indicator_sum=None):
+ """Select boxes from BoxList according to indicator and return new BoxList.
+
+ `boolean_mask` returns the subset of boxes that are marked as "True" by the
+ indicator tensor. By default, `boolean_mask` returns boxes corresponding to
+ the input index list, as well as all additional fields stored in the boxlist
+ (indexing into the first dimension). However one can optionally only draw
+ from a subset of fields.
+
+ Args:
+ boxlist: BoxList holding N boxes
+ indicator: a rank-1 boolean tensor
+ fields: (optional) list of fields to also gather from. If None (default),
+ all fields are gathered from. Pass an empty fields list to only gather
+ the box coordinates.
+ scope: name scope.
+ use_static_shapes: Whether to use an implementation with static shape
+      guarantees.
+ indicator_sum: An integer containing the sum of `indicator` vector. Only
+ required if `use_static_shape` is True.
+
+ Returns:
+ subboxlist: a BoxList corresponding to the subset of the input BoxList
+ specified by indicator
+ Raises:
+ ValueError: if `indicator` is not a rank-1 boolean tensor.
+ """
+ with tf.name_scope(scope, 'BooleanMask'):
+ if indicator.shape.ndims != 1:
+ raise ValueError('indicator should have rank 1')
+ if indicator.dtype != tf.bool:
+ raise ValueError('indicator should be a boolean tensor')
+ if use_static_shapes:
+ if not (indicator_sum and isinstance(indicator_sum, int)):
+        raise ValueError('`indicator_sum` must be of type int')
+      selected_positions = tf.to_float(indicator)
+      # Give each selected position its 1-based rank among the selected
+      # entries; unselected positions map to 0.
+      indexed_positions = tf.cast(
+          tf.multiply(
+              tf.cumsum(selected_positions), selected_positions),
+          dtype=tf.int32)
+      # Build an [N, indicator_sum] one-hot matrix whose k-th column is one at
+      # the k-th selected position (index -1 for unselected rows yields an
+      # all-zero row).
+      one_hot_selector = tf.one_hot(
+          indexed_positions - 1, indicator_sum, dtype=tf.float32)
+      # Dot the position indices [0, ..., N-1] with the one-hot columns to
+      # recover the selected indices with a statically known length.
+      sampled_indices = tf.cast(
+          tf.tensordot(
+              tf.to_float(tf.range(tf.shape(indicator)[0])),
+              one_hot_selector,
+              axes=[0, 0]),
+          dtype=tf.int32)
+      return gather(boxlist, sampled_indices, use_static_shapes=True)
+ else:
+ subboxlist = box_list.BoxList(tf.boolean_mask(boxlist.get(), indicator))
+ if fields is None:
+ fields = boxlist.get_extra_fields()
+ for field in fields:
+ if not boxlist.has_field(field):
+ raise ValueError('boxlist must contain all specified fields')
+ subfieldlist = tf.boolean_mask(boxlist.get_field(field), indicator)
+ subboxlist.add_field(field, subfieldlist)
+ return subboxlist
+
+
+def gather(boxlist, indices, fields=None, scope=None, use_static_shapes=False):
+ """Gather boxes from BoxList according to indices and return new BoxList.
+
+ By default, `gather` returns boxes corresponding to the input index list, as
+ well as all additional fields stored in the boxlist (indexing into the
+ first dimension). However one can optionally only gather from a
+ subset of fields.
+
+ Args:
+ boxlist: BoxList holding N boxes
+ indices: a rank-1 tensor of type int32 / int64
+ fields: (optional) list of fields to also gather from. If None (default),
+ all fields are gathered from. Pass an empty fields list to only gather
+ the box coordinates.
+ scope: name scope.
+ use_static_shapes: Whether to use an implementation with static shape
+      guarantees.
+
+ Returns:
+ subboxlist: a BoxList corresponding to the subset of the input BoxList
+ specified by indices
+ Raises:
+ ValueError: if specified field is not contained in boxlist or if the
+ indices are not of type int32
+ """
+ with tf.name_scope(scope, 'Gather'):
+ if len(indices.shape.as_list()) != 1:
+ raise ValueError('indices should have rank 1')
+ if indices.dtype != tf.int32 and indices.dtype != tf.int64:
+ raise ValueError('indices should be an int32 / int64 tensor')
+ gather_op = tf.gather
+ if use_static_shapes:
+ gather_op = ops.matmul_gather_on_zeroth_axis
+ subboxlist = box_list.BoxList(gather_op(boxlist.get(), indices))
+ if fields is None:
+ fields = boxlist.get_extra_fields()
+ fields += ['boxes']
+ for field in fields:
+ if not boxlist.has_field(field):
+ raise ValueError('boxlist must contain all specified fields')
+ subfieldlist = gather_op(boxlist.get_field(field), indices)
+ subboxlist.add_field(field, subfieldlist)
+ return subboxlist
+
+
+def concatenate(boxlists, fields=None, scope=None):
+ """Concatenate list of BoxLists.
+
+ This op concatenates a list of input BoxLists into a larger BoxList. It also
+ handles concatenation of BoxList fields as long as the field tensor shapes
+ are equal except for the first dimension.
+
+ Args:
+ boxlists: list of BoxList objects
+ fields: optional list of fields to also concatenate. By default, all
+ fields from the first BoxList in the list are included in the
+ concatenation.
+ scope: name scope.
+
+ Returns:
+ a BoxList with number of boxes equal to
+ sum([boxlist.num_boxes() for boxlist in BoxList])
+ Raises:
+ ValueError: if boxlists is invalid (i.e., is not a list, is empty, or
+ contains non BoxList objects), or if requested fields are not contained in
+ all boxlists
+ """
+ with tf.name_scope(scope, 'Concatenate'):
+ if not isinstance(boxlists, list):
+ raise ValueError('boxlists should be a list')
+ if not boxlists:
+ raise ValueError('boxlists should have nonzero length')
+ for boxlist in boxlists:
+ if not isinstance(boxlist, box_list.BoxList):
+ raise ValueError('all elements of boxlists should be BoxList objects')
+ concatenated = box_list.BoxList(
+ tf.concat([boxlist.get() for boxlist in boxlists], 0))
+ if fields is None:
+ fields = boxlists[0].get_extra_fields()
+ for field in fields:
+ first_field_shape = boxlists[0].get_field(field).get_shape().as_list()
+ first_field_shape[0] = -1
+ if None in first_field_shape:
+ raise ValueError('field %s must have fully defined shape except for the'
+ ' 0th dimension.' % field)
+ for boxlist in boxlists:
+ if not boxlist.has_field(field):
+ raise ValueError('boxlist must contain all requested fields')
+ field_shape = boxlist.get_field(field).get_shape().as_list()
+ field_shape[0] = -1
+ if field_shape != first_field_shape:
+ raise ValueError('field %s must have same shape for all boxlists '
+ 'except for the 0th dimension.' % field)
+ concatenated_field = tf.concat(
+ [boxlist.get_field(field) for boxlist in boxlists], 0)
+ concatenated.add_field(field, concatenated_field)
+ return concatenated
+
+
+def sort_by_field(boxlist, field, order=SortOrder.descend, scope=None):
+ """Sort boxes and associated fields according to a scalar field.
+
+ A common use case is reordering the boxes according to descending scores.
+
+ Args:
+ boxlist: BoxList holding N boxes.
+ field: A BoxList field for sorting and reordering the BoxList.
+ order: (Optional) descend or ascend. Default is descend.
+ scope: name scope.
+
+ Returns:
+ sorted_boxlist: A sorted BoxList with the field in the specified order.
+
+ Raises:
+ ValueError: if specified field does not exist
+ ValueError: if the order is not either descend or ascend
+ """
+ with tf.name_scope(scope, 'SortByField'):
+ if order != SortOrder.descend and order != SortOrder.ascend:
+ raise ValueError('Invalid sort order')
+
+ field_to_sort = boxlist.get_field(field)
+ if len(field_to_sort.shape.as_list()) != 1:
+ raise ValueError('Field should have rank 1')
+
+ num_boxes = boxlist.num_boxes()
+ num_entries = tf.size(field_to_sort)
+ length_assert = tf.Assert(
+ tf.equal(num_boxes, num_entries),
+ ['Incorrect field size: actual vs expected.', num_entries, num_boxes])
+
+ with tf.control_dependencies([length_assert]):
+ _, sorted_indices = tf.nn.top_k(field_to_sort, num_boxes, sorted=True)
+
+ if order == SortOrder.ascend:
+ sorted_indices = tf.reverse_v2(sorted_indices, [0])
+
+ return gather(boxlist, sorted_indices)
+
+
+def visualize_boxes_in_image(image, boxlist, normalized=False, scope=None):
+ """Overlay bounding box list on image.
+
+ Currently this visualization plots a 1 pixel thick red bounding box on top
+ of the image. Note that tf.image.draw_bounding_boxes essentially is
+ 1 indexed.
+
+ Args:
+ image: an image tensor with shape [height, width, 3]
+ boxlist: a BoxList
+ normalized: (boolean) specify whether corners are to be interpreted
+ as absolute coordinates in image space or normalized with respect to the
+ image size.
+ scope: name scope.
+
+ Returns:
+ image_and_boxes: an image tensor with shape [height, width, 3]
+ """
+ with tf.name_scope(scope, 'VisualizeBoxesInImage'):
+ if not normalized:
+ height, width, _ = tf.unstack(tf.shape(image))
+ boxlist = scale(boxlist,
+ 1.0 / tf.cast(height, tf.float32),
+ 1.0 / tf.cast(width, tf.float32))
+ corners = tf.expand_dims(boxlist.get(), 0)
+ image = tf.expand_dims(image, 0)
+ return tf.squeeze(tf.image.draw_bounding_boxes(image, corners), [0])
+
+
+def filter_field_value_equals(boxlist, field, value, scope=None):
+ """Filter to keep only boxes with field entries equal to the given value.
+
+ Args:
+ boxlist: BoxList holding N boxes.
+ field: field name for filtering.
+ value: scalar value.
+ scope: name scope.
+
+ Returns:
+ a BoxList holding M boxes where M <= N
+
+ Raises:
+ ValueError: if boxlist not a BoxList object or if it does not have
+ the specified field.
+ """
+ with tf.name_scope(scope, 'FilterFieldValueEquals'):
+ if not isinstance(boxlist, box_list.BoxList):
+ raise ValueError('boxlist must be a BoxList')
+ if not boxlist.has_field(field):
+ raise ValueError('boxlist must contain the specified field')
+ filter_field = boxlist.get_field(field)
+ gather_index = tf.reshape(tf.where(tf.equal(filter_field, value)), [-1])
+ return gather(boxlist, gather_index)
+
+
+def filter_greater_than(boxlist, thresh, scope=None):
+ """Filter to keep only boxes with score exceeding a given threshold.
+
+ This op keeps the collection of boxes whose corresponding scores are
+ greater than the input threshold.
+
+ TODO(jonathanhuang): Change function name to filter_scores_greater_than
+
+ Args:
+ boxlist: BoxList holding N boxes. Must contain a 'scores' field
+ representing detection scores.
+ thresh: scalar threshold
+ scope: name scope.
+
+ Returns:
+ a BoxList holding M boxes where M <= N
+
+ Raises:
+ ValueError: if boxlist not a BoxList object or if it does not
+ have a scores field
+ """
+ with tf.name_scope(scope, 'FilterGreaterThan'):
+ if not isinstance(boxlist, box_list.BoxList):
+ raise ValueError('boxlist must be a BoxList')
+ if not boxlist.has_field('scores'):
+ raise ValueError('input boxlist must have \'scores\' field')
+ scores = boxlist.get_field('scores')
+ if len(scores.shape.as_list()) > 2:
+ raise ValueError('Scores should have rank 1 or 2')
+ if len(scores.shape.as_list()) == 2 and scores.shape.as_list()[1] != 1:
+ raise ValueError('Scores should have rank 1 or have shape '
+ 'consistent with [None, 1]')
+ high_score_indices = tf.cast(tf.reshape(
+ tf.where(tf.greater(scores, thresh)),
+ [-1]), tf.int32)
+ return gather(boxlist, high_score_indices)
+
+
+def non_max_suppression(boxlist, thresh, max_output_size, scope=None):
+ """Non maximum suppression.
+
+ This op greedily selects a subset of detection bounding boxes, pruning
+ away boxes that have high IOU (intersection over union) overlap (> thresh)
+ with already selected boxes. Note that this only works for a single class ---
+ to apply NMS to multi-class predictions, use MultiClassNonMaxSuppression.
+
+ Args:
+ boxlist: BoxList holding N boxes. Must contain a 'scores' field
+ representing detection scores.
+ thresh: scalar threshold
+ max_output_size: maximum number of retained boxes
+ scope: name scope.
+
+ Returns:
+ a BoxList holding M boxes where M <= max_output_size
+ Raises:
+ ValueError: if thresh is not in [0, 1]
+ """
+ with tf.name_scope(scope, 'NonMaxSuppression'):
+ if not 0 <= thresh <= 1.0:
+ raise ValueError('thresh must be between 0 and 1')
+ if not isinstance(boxlist, box_list.BoxList):
+ raise ValueError('boxlist must be a BoxList')
+ if not boxlist.has_field('scores'):
+ raise ValueError('input boxlist must have \'scores\' field')
+ selected_indices = tf.image.non_max_suppression(
+ boxlist.get(), boxlist.get_field('scores'),
+ max_output_size, iou_threshold=thresh)
+ return gather(boxlist, selected_indices)
+
+
+def _copy_extra_fields(boxlist_to_copy_to, boxlist_to_copy_from):
+ """Copies the extra fields of boxlist_to_copy_from to boxlist_to_copy_to.
+
+ Args:
+ boxlist_to_copy_to: BoxList to which extra fields are copied.
+ boxlist_to_copy_from: BoxList from which fields are copied.
+
+ Returns:
+ boxlist_to_copy_to with extra fields.
+ """
+ for field in boxlist_to_copy_from.get_extra_fields():
+ boxlist_to_copy_to.add_field(field, boxlist_to_copy_from.get_field(field))
+ return boxlist_to_copy_to
+
+
+def to_normalized_coordinates(boxlist, height, width,
+ check_range=True, scope=None):
+ """Converts absolute box coordinates to normalized coordinates in [0, 1].
+
+ Usually one uses the dynamic shape of the image or conv-layer tensor:
+ boxlist = box_list_ops.to_normalized_coordinates(boxlist,
+ tf.shape(images)[1],
+ tf.shape(images)[2])
+
+ This function raises an assertion failure at graph execution time when the
+ maximum box coordinate is not greater than 1.01, which indicates that the
+ coordinates are likely already normalized. The threshold of 1.01 (rather
+ than 1.0) tolerates small rounding errors.
+
+ Args:
+ boxlist: BoxList with coordinates in terms of pixel-locations.
+ height: Maximum value for height of absolute box coordinates.
+ width: Maximum value for width of absolute box coordinates.
+ check_range: If True, checks if the coordinates are normalized or not.
+ scope: name scope.
+
+ Returns:
+ boxlist with normalized coordinates in [0, 1].
+ """
+ with tf.name_scope(scope, 'ToNormalizedCoordinates'):
+ height = tf.cast(height, tf.float32)
+ width = tf.cast(width, tf.float32)
+
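+ # If the coordinates already appear normalized (max value <= 1.01), the
+ # assert below fails at run time rather than silently normalizing twice.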
+ if check_range:
+ max_val = tf.reduce_max(boxlist.get())
+ max_assert = tf.Assert(tf.greater(max_val, 1.01),
+ ['max value is lower than 1.01: ', max_val])
+ with tf.control_dependencies([max_assert]):
+ width = tf.identity(width)
+
+ return scale(boxlist, 1 / height, 1 / width)
+
+
+def to_absolute_coordinates(boxlist,
+ height,
+ width,
+ check_range=True,
+ maximum_normalized_coordinate=1.1,
+ scope=None):
+ """Converts normalized box coordinates to absolute pixel coordinates.
+
+ This function raises an assertion failure at graph execution time when the
+ maximum box coordinate value is larger than maximum_normalized_coordinate,
+ in which case the coordinates are likely already absolute.
+
+ Args:
+ boxlist: BoxList with coordinates in range [0, 1].
+ height: Maximum value for height of absolute box coordinates.
+ width: Maximum value for width of absolute box coordinates.
+ check_range: If True, checks if the coordinates are normalized or not.
+ maximum_normalized_coordinate: Maximum coordinate value to be considered
+ as normalized, default to 1.1.
+ scope: name scope.
+
+ Returns:
+ boxlist with absolute coordinates in terms of the image size.
+
+ """
+ with tf.name_scope(scope, 'ToAbsoluteCoordinates'):
+ height = tf.cast(height, tf.float32)
+ width = tf.cast(width, tf.float32)
+
+ # Ensure range of input boxes is correct.
+ if check_range:
+ box_maximum = tf.reduce_max(boxlist.get())
+ max_assert = tf.Assert(
+ tf.greater_equal(maximum_normalized_coordinate, box_maximum),
+ ['maximum box coordinate value is larger '
+ 'than %f: ' % maximum_normalized_coordinate, box_maximum])
+ with tf.control_dependencies([max_assert]):
+ width = tf.identity(width)
+
+ return scale(boxlist, height, width)
+
+
+def refine_boxes_multi_class(pool_boxes,
+ num_classes,
+ nms_iou_thresh,
+ nms_max_detections,
+ voting_iou_thresh=0.5):
+ """Refines a pool of boxes using non max suppression and box voting.
+
+ Box refinement is done independently for each class.
+
+ Args:
+ pool_boxes: (BoxList) A collection of boxes to be refined. pool_boxes must
+ have a rank 1 'scores' field and a rank 1 'classes' field.
+ num_classes: (int scalar) Number of classes.
+ nms_iou_thresh: (float scalar) iou threshold for non max suppression (NMS).
+ nms_max_detections: (int scalar) maximum output size for NMS.
+ voting_iou_thresh: (float scalar) iou threshold for box voting.
+
+ Returns:
+ BoxList of refined boxes.
+
+ Raises:
+ ValueError: if
+ a) nms_iou_thresh or voting_iou_thresh is not in [0, 1].
+ b) pool_boxes is not a BoxList.
+ c) pool_boxes does not have a scores and classes field.
+ """
+ if not 0.0 <= nms_iou_thresh <= 1.0:
+ raise ValueError('nms_iou_thresh must be between 0 and 1')
+ if not 0.0 <= voting_iou_thresh <= 1.0:
+ raise ValueError('voting_iou_thresh must be between 0 and 1')
+ if not isinstance(pool_boxes, box_list.BoxList):
+ raise ValueError('pool_boxes must be a BoxList')
+ if not pool_boxes.has_field('scores'):
+ raise ValueError('pool_boxes must have a \'scores\' field')
+ if not pool_boxes.has_field('classes'):
+ raise ValueError('pool_boxes must have a \'classes\' field')
+
+ refined_boxes = []
+ for i in range(num_classes):
+ boxes_class = filter_field_value_equals(pool_boxes, 'classes', i)
+ refined_boxes_class = refine_boxes(boxes_class, nms_iou_thresh,
+ nms_max_detections, voting_iou_thresh)
+ refined_boxes.append(refined_boxes_class)
+ return sort_by_field(concatenate(refined_boxes), 'scores')
+
+
+def refine_boxes(pool_boxes,
+ nms_iou_thresh,
+ nms_max_detections,
+ voting_iou_thresh=0.5):
+ """Refines a pool of boxes using non max suppression and box voting.
+
+ Args:
+ pool_boxes: (BoxList) A collection of boxes to be refined. pool_boxes must
+ have a rank 1 'scores' field.
+ nms_iou_thresh: (float scalar) iou threshold for non max suppression (NMS).
+ nms_max_detections: (int scalar) maximum output size for NMS.
+ voting_iou_thresh: (float scalar) iou threshold for box voting.
+
+ Returns:
+ BoxList of refined boxes.
+
+ Raises:
+ ValueError: if
+ a) nms_iou_thresh or voting_iou_thresh is not in [0, 1].
+ b) pool_boxes is not a BoxList.
+ c) pool_boxes does not have a scores field.
+ """
+ if not 0.0 <= nms_iou_thresh <= 1.0:
+ raise ValueError('nms_iou_thresh must be between 0 and 1')
+ if not 0.0 <= voting_iou_thresh <= 1.0:
+ raise ValueError('voting_iou_thresh must be between 0 and 1')
+ if not isinstance(pool_boxes, box_list.BoxList):
+ raise ValueError('pool_boxes must be a BoxList')
+ if not pool_boxes.has_field('scores'):
+ raise ValueError('pool_boxes must have a \'scores\' field')
+
+ nms_boxes = non_max_suppression(
+ pool_boxes, nms_iou_thresh, nms_max_detections)
+ return box_voting(nms_boxes, pool_boxes, voting_iou_thresh)
+
+
+def box_voting(selected_boxes, pool_boxes, iou_thresh=0.5):
+ """Performs box voting as described in S. Gidaris and N. Komodakis, ICCV 2015.
+
+ Performs box voting as described in 'Object detection via a multi-region &
+ semantic segmentation-aware CNN model', Gidaris and Komodakis, ICCV 2015. For
+ each box 'B' in selected_boxes, we find the set 'S' of boxes in pool_boxes
+ with iou overlap >= iou_thresh. The location of B is set to the weighted
+ average location of boxes in S (scores are used for weighting), and the
+ score of B is set to the average score of boxes in S.
+
+ Args:
+ selected_boxes: BoxList containing a subset of boxes in pool_boxes. These
+ boxes are usually selected from pool_boxes using non max suppression.
+ pool_boxes: BoxList containing a set of (possibly redundant) boxes.
+ iou_thresh: (float scalar) iou threshold for matching boxes in
+ selected_boxes and pool_boxes.
+
+ Returns:
+ BoxList containing averaged locations and scores for each box in
+ selected_boxes.
+
+ Raises:
+ ValueError: if
+ a) selected_boxes or pool_boxes is not a BoxList.
+ b) iou_thresh is not in [0, 1].
+ c) pool_boxes does not have a scores field.
+ """
+ if not 0.0 <= iou_thresh <= 1.0:
+ raise ValueError('iou_thresh must be between 0 and 1')
+ if not isinstance(selected_boxes, box_list.BoxList):
+ raise ValueError('selected_boxes must be a BoxList')
+ if not isinstance(pool_boxes, box_list.BoxList):
+ raise ValueError('pool_boxes must be a BoxList')
+ if not pool_boxes.has_field('scores'):
+ raise ValueError('pool_boxes must have a \'scores\' field')
+
+ iou_ = iou(selected_boxes, pool_boxes)
+ match_indicator = tf.to_float(tf.greater(iou_, iou_thresh))
+ num_matches = tf.reduce_sum(match_indicator, 1)
+ # TODO(kbanoop): Handle the case where some boxes in selected_boxes do not
+ # match to any boxes in pool_boxes. For such boxes without any matches, we
+ # should return the original boxes without voting.
+ match_assert = tf.Assert(
+ tf.reduce_all(tf.greater(num_matches, 0)),
+ ['Each box in selected_boxes must match with at least one box '
+ 'in pool_boxes.'])
+
+ scores = tf.expand_dims(pool_boxes.get_field('scores'), 1)
+ scores_assert = tf.Assert(
+ tf.reduce_all(tf.greater_equal(scores, 0)),
+ ['Scores must be non negative.'])
+
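+ # For each selected box, sum the scores of its matching pool boxes: the box
+ # score becomes the mean matched score, and the box corners become the
+ # score-weighted average of the matched corners.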
+ with tf.control_dependencies([scores_assert, match_assert]):
+ sum_scores = tf.matmul(match_indicator, scores)
+ averaged_scores = tf.reshape(sum_scores, [-1]) / num_matches
+
+ box_locations = tf.matmul(match_indicator,
+ pool_boxes.get() * scores) / sum_scores
+ averaged_boxes = box_list.BoxList(box_locations)
+ _copy_extra_fields(averaged_boxes, selected_boxes)
+ averaged_boxes.add_field('scores', averaged_scores)
+ return averaged_boxes
+
+
+def pad_or_clip_box_list(boxlist, num_boxes, scope=None):
+ """Pads or clips all fields of a BoxList.
+
+ Args:
+ boxlist: A BoxList with an arbitrary number of boxes.
+ num_boxes: The first num_boxes boxes in boxlist are kept.
+ The fields are zero-padded if num_boxes is bigger than the
+ actual number of boxes.
+ scope: name scope.
+
+ Returns:
+ BoxList with all fields padded or clipped.
+ """
+ with tf.name_scope(scope, 'PadOrClipBoxList'):
+ subboxlist = box_list.BoxList(shape_utils.pad_or_clip_tensor(
+ boxlist.get(), num_boxes))
+ for field in boxlist.get_extra_fields():
+ subfield = shape_utils.pad_or_clip_tensor(
+ boxlist.get_field(field), num_boxes)
+ subboxlist.add_field(field, subfield)
+ return subboxlist
+
+
+def select_random_box(boxlist,
+ default_box=None,
+ seed=None,
+ scope=None):
+ """Selects a random bounding box from a `BoxList`.
+
+ Args:
+ boxlist: A BoxList.
+ default_box: A [1, 4] float32 tensor. If no boxes are present in `boxlist`,
+ this default box will be returned. If None, will use a default box of
+ [[-1., -1., -1., -1.]].
+ seed: Random seed.
+ scope: Name scope.
+
+ Returns:
+ bbox: A [1, 4] tensor with a random bounding box.
+ valid: A bool tensor indicating whether a valid bounding box is returned
+ (True) or whether the default box is returned (False).
+ """
+ with tf.name_scope(scope, 'SelectRandomBox'):
+ bboxes = boxlist.get()
+ combined_shape = shape_utils.combined_static_and_dynamic_shape(bboxes)
+ number_of_boxes = combined_shape[0]
+ default_box = default_box or tf.constant([[-1., -1., -1., -1.]])
+
+ def select_box():
+ random_index = tf.random_uniform([],
+ maxval=number_of_boxes,
+ dtype=tf.int32,
+ seed=seed)
+ return tf.expand_dims(bboxes[random_index], axis=0), tf.constant(True)
+
+ return tf.cond(
+ tf.greater_equal(number_of_boxes, 1),
+ true_fn=select_box,
+ false_fn=lambda: (default_box, tf.constant(False)))
+
+
+def get_minimal_coverage_box(boxlist,
+ default_box=None,
+ scope=None):
+ """Creates a single bounding box which covers all boxes in the boxlist.
+
+ Args:
+ boxlist: A Boxlist.
+ default_box: A [1, 4] float32 tensor. If no boxes are present in `boxlist`,
+ this default box will be returned. If None, will use a default box of
+ [[0., 0., 1., 1.]].
+ scope: Name scope.
+
+ Returns:
+ A [1, 4] float32 tensor with a bounding box that tightly covers all the
+ boxes in the box list. If the boxlist does not contain any boxes, the
+ default box is returned.
+ """
+ with tf.name_scope(scope, 'CreateCoverageBox'):
+ num_boxes = boxlist.num_boxes()
+
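+ # The covering box takes the smallest min-corner and the largest max-corner
+ # over all boxes in the list.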
+ def coverage_box(bboxes):
+ y_min, x_min, y_max, x_max = tf.split(
+ value=bboxes, num_or_size_splits=4, axis=1)
+ y_min_coverage = tf.reduce_min(y_min, axis=0)
+ x_min_coverage = tf.reduce_min(x_min, axis=0)
+ y_max_coverage = tf.reduce_max(y_max, axis=0)
+ x_max_coverage = tf.reduce_max(x_max, axis=0)
+ return tf.stack(
+ [y_min_coverage, x_min_coverage, y_max_coverage, x_max_coverage],
+ axis=1)
+
+ default_box = default_box or tf.constant([[0., 0., 1., 1.]])
+ return tf.cond(
+ tf.greater_equal(num_boxes, 1),
+ true_fn=lambda: coverage_box(boxlist.get()),
+ false_fn=lambda: default_box)
+
+
+def sample_boxes_by_jittering(boxlist,
+ num_boxes_to_sample,
+ stddev=0.1,
+ scope=None):
+ """Samples num_boxes_to_sample boxes by jittering around boxlist boxes.
+
+ This function may generate boxes with size 0. The larger the stddev, the
+ more probable this is. For a small stddev of 0.1 this probability is very
+ small.
+
+ Args:
+ boxlist: A boxlist containing N boxes in normalized coordinates.
+ num_boxes_to_sample: A positive integer containing the number of boxes to
+ sample.
+ stddev: Standard deviation. This is used to draw random offsets for the
+ box corners from a normal distribution. The offset is multiplied by the
+ box size, so it will be larger in terms of pixels for larger boxes.
+ scope: Name scope.
+
+ Returns:
+ sampled_boxlist: A boxlist containing num_boxes_to_sample boxes in
+ normalized coordinates.
+ """
+ with tf.name_scope(scope, 'SampleBoxesByJittering'):
+ num_boxes = boxlist.num_boxes()
+ box_indices = tf.random_uniform(
+ [num_boxes_to_sample],
+ minval=0,
+ maxval=num_boxes,
+ dtype=tf.int32)
+ sampled_boxes = tf.gather(boxlist.get(), box_indices)
+ sampled_boxes_height = sampled_boxes[:, 2] - sampled_boxes[:, 0]
+ sampled_boxes_width = sampled_boxes[:, 3] - sampled_boxes[:, 1]
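+ # Draw one Gaussian offset per corner and scale it by the box height/width,
+ # so larger boxes receive proportionally larger jitter in pixel terms.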
+ rand_miny_gaussian = tf.random_normal([num_boxes_to_sample], stddev=stddev)
+ rand_minx_gaussian = tf.random_normal([num_boxes_to_sample], stddev=stddev)
+ rand_maxy_gaussian = tf.random_normal([num_boxes_to_sample], stddev=stddev)
+ rand_maxx_gaussian = tf.random_normal([num_boxes_to_sample], stddev=stddev)
+ miny = rand_miny_gaussian * sampled_boxes_height + sampled_boxes[:, 0]
+ minx = rand_minx_gaussian * sampled_boxes_width + sampled_boxes[:, 1]
+ maxy = rand_maxy_gaussian * sampled_boxes_height + sampled_boxes[:, 2]
+ maxx = rand_maxx_gaussian * sampled_boxes_width + sampled_boxes[:, 3]
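+ # Prevent inverted boxes by forcing max corners to be at least the min
+ # corners, then clip everything back to the normalized [0, 1] range.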
+ maxy = tf.maximum(miny, maxy)
+ maxx = tf.maximum(minx, maxx)
+ sampled_boxes = tf.stack([miny, minx, maxy, maxx], axis=1)
+ sampled_boxes = tf.maximum(tf.minimum(sampled_boxes, 1.0), 0.0)
+ return box_list.BoxList(sampled_boxes)
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/box_list_ops_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/box_list_ops_test.py"
new file mode 100644
index 00000000..727c198b
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/box_list_ops_test.py"
@@ -0,0 +1,1108 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for object_detection.core.box_list_ops."""
+import numpy as np
+import tensorflow as tf
+
+from object_detection.core import box_list
+from object_detection.core import box_list_ops
+from object_detection.utils import test_case
+
+
+class BoxListOpsTest(test_case.TestCase):
+ """Tests for common bounding box operations."""
+
+ def test_area(self):
+ corners = tf.constant([[0.0, 0.0, 10.0, 20.0], [1.0, 2.0, 3.0, 4.0]])
+ exp_output = [200.0, 4.0]
+ boxes = box_list.BoxList(corners)
+ areas = box_list_ops.area(boxes)
+ with self.test_session() as sess:
+ areas_output = sess.run(areas)
+ self.assertAllClose(areas_output, exp_output)
+
+ def test_height_width(self):
+ corners = tf.constant([[0.0, 0.0, 10.0, 20.0], [1.0, 2.0, 3.0, 4.0]])
+ exp_output_heights = [10., 2.]
+ exp_output_widths = [20., 2.]
+ boxes = box_list.BoxList(corners)
+ heights, widths = box_list_ops.height_width(boxes)
+ with self.test_session() as sess:
+ output_heights, output_widths = sess.run([heights, widths])
+ self.assertAllClose(output_heights, exp_output_heights)
+ self.assertAllClose(output_widths, exp_output_widths)
+
+ def test_scale(self):
+ corners = tf.constant([[0, 0, 100, 200], [50, 120, 100, 140]],
+ dtype=tf.float32)
+ boxes = box_list.BoxList(corners)
+ boxes.add_field('extra_data', tf.constant([[1], [2]]))
+
+ y_scale = tf.constant(1.0/100)
+ x_scale = tf.constant(1.0/200)
+ scaled_boxes = box_list_ops.scale(boxes, y_scale, x_scale)
+ exp_output = [[0, 0, 1, 1], [0.5, 0.6, 1.0, 0.7]]
+ with self.test_session() as sess:
+ scaled_corners_out = sess.run(scaled_boxes.get())
+ self.assertAllClose(scaled_corners_out, exp_output)
+ extra_data_out = sess.run(scaled_boxes.get_field('extra_data'))
+ self.assertAllEqual(extra_data_out, [[1], [2]])
+
+ def test_clip_to_window_filter_boxes_which_fall_outside_the_window(
+ self):
+ window = tf.constant([0, 0, 9, 14], tf.float32)
+ corners = tf.constant([[5.0, 5.0, 6.0, 6.0],
+ [-1.0, -2.0, 4.0, 5.0],
+ [2.0, 3.0, 5.0, 9.0],
+ [0.0, 0.0, 9.0, 14.0],
+ [-100.0, -100.0, 300.0, 600.0],
+ [-10.0, -10.0, -9.0, -9.0]])
+ boxes = box_list.BoxList(corners)
+ boxes.add_field('extra_data', tf.constant([[1], [2], [3], [4], [5], [6]]))
+ exp_output = [[5.0, 5.0, 6.0, 6.0], [0.0, 0.0, 4.0, 5.0],
+ [2.0, 3.0, 5.0, 9.0], [0.0, 0.0, 9.0, 14.0],
+ [0.0, 0.0, 9.0, 14.0]]
+ pruned = box_list_ops.clip_to_window(
+ boxes, window, filter_nonoverlapping=True)
+ with self.test_session() as sess:
+ pruned_output = sess.run(pruned.get())
+ self.assertAllClose(pruned_output, exp_output)
+ extra_data_out = sess.run(pruned.get_field('extra_data'))
+ self.assertAllEqual(extra_data_out, [[1], [2], [3], [4], [5]])
+
+ def test_clip_to_window_without_filtering_boxes_which_fall_outside_the_window(
+ self):
+ window = tf.constant([0, 0, 9, 14], tf.float32)
+ corners = tf.constant([[5.0, 5.0, 6.0, 6.0],
+ [-1.0, -2.0, 4.0, 5.0],
+ [2.0, 3.0, 5.0, 9.0],
+ [0.0, 0.0, 9.0, 14.0],
+ [-100.0, -100.0, 300.0, 600.0],
+ [-10.0, -10.0, -9.0, -9.0]])
+ boxes = box_list.BoxList(corners)
+ boxes.add_field('extra_data', tf.constant([[1], [2], [3], [4], [5], [6]]))
+ exp_output = [[5.0, 5.0, 6.0, 6.0], [0.0, 0.0, 4.0, 5.0],
+ [2.0, 3.0, 5.0, 9.0], [0.0, 0.0, 9.0, 14.0],
+ [0.0, 0.0, 9.0, 14.0], [0.0, 0.0, 0.0, 0.0]]
+ pruned = box_list_ops.clip_to_window(
+ boxes, window, filter_nonoverlapping=False)
+ with self.test_session() as sess:
+ pruned_output = sess.run(pruned.get())
+ self.assertAllClose(pruned_output, exp_output)
+ extra_data_out = sess.run(pruned.get_field('extra_data'))
+ self.assertAllEqual(extra_data_out, [[1], [2], [3], [4], [5], [6]])
+
+ def test_prune_outside_window_filters_boxes_which_fall_outside_the_window(
+ self):
+ window = tf.constant([0, 0, 9, 14], tf.float32)
+ corners = tf.constant([[5.0, 5.0, 6.0, 6.0],
+ [-1.0, -2.0, 4.0, 5.0],
+ [2.0, 3.0, 5.0, 9.0],
+ [0.0, 0.0, 9.0, 14.0],
+ [-10.0, -10.0, -9.0, -9.0],
+ [-100.0, -100.0, 300.0, 600.0]])
+ boxes = box_list.BoxList(corners)
+ boxes.add_field('extra_data', tf.constant([[1], [2], [3], [4], [5], [6]]))
+ exp_output = [[5.0, 5.0, 6.0, 6.0],
+ [2.0, 3.0, 5.0, 9.0],
+ [0.0, 0.0, 9.0, 14.0]]
+ pruned, keep_indices = box_list_ops.prune_outside_window(boxes, window)
+ with self.test_session() as sess:
+ pruned_output = sess.run(pruned.get())
+ self.assertAllClose(pruned_output, exp_output)
+ keep_indices_out = sess.run(keep_indices)
+ self.assertAllEqual(keep_indices_out, [0, 2, 3])
+ extra_data_out = sess.run(pruned.get_field('extra_data'))
+ self.assertAllEqual(extra_data_out, [[1], [3], [4]])
+
+ def test_prune_completely_outside_window(self):
+ window = tf.constant([0, 0, 9, 14], tf.float32)
+ corners = tf.constant([[5.0, 5.0, 6.0, 6.0],
+ [-1.0, -2.0, 4.0, 5.0],
+ [2.0, 3.0, 5.0, 9.0],
+ [0.0, 0.0, 9.0, 14.0],
+ [-10.0, -10.0, -9.0, -9.0],
+ [-100.0, -100.0, 300.0, 600.0]])
+ boxes = box_list.BoxList(corners)
+ boxes.add_field('extra_data', tf.constant([[1], [2], [3], [4], [5], [6]]))
+ exp_output = [[5.0, 5.0, 6.0, 6.0],
+ [-1.0, -2.0, 4.0, 5.0],
+ [2.0, 3.0, 5.0, 9.0],
+ [0.0, 0.0, 9.0, 14.0],
+ [-100.0, -100.0, 300.0, 600.0]]
+ pruned, keep_indices = box_list_ops.prune_completely_outside_window(boxes,
+ window)
+ with self.test_session() as sess:
+ pruned_output = sess.run(pruned.get())
+ self.assertAllClose(pruned_output, exp_output)
+ keep_indices_out = sess.run(keep_indices)
+ self.assertAllEqual(keep_indices_out, [0, 1, 2, 3, 5])
+ extra_data_out = sess.run(pruned.get_field('extra_data'))
+ self.assertAllEqual(extra_data_out, [[1], [2], [3], [4], [6]])
+
+ def test_prune_completely_outside_window_with_empty_boxlist(self):
+ window = tf.constant([0, 0, 9, 14], tf.float32)
+ corners = tf.zeros(shape=[0, 4], dtype=tf.float32)
+ boxes = box_list.BoxList(corners)
+ boxes.add_field('extra_data', tf.zeros(shape=[0], dtype=tf.int32))
+ pruned, keep_indices = box_list_ops.prune_completely_outside_window(boxes,
+ window)
+ pruned_boxes = pruned.get()
+ extra = pruned.get_field('extra_data')
+
+ exp_pruned_boxes = np.zeros(shape=[0, 4], dtype=np.float32)
+ exp_extra = np.zeros(shape=[0], dtype=np.int32)
+ with self.test_session() as sess:
+ pruned_boxes_out, keep_indices_out, extra_out = sess.run(
+ [pruned_boxes, keep_indices, extra])
+ self.assertAllClose(exp_pruned_boxes, pruned_boxes_out)
+ self.assertAllEqual([], keep_indices_out)
+ self.assertAllEqual(exp_extra, extra_out)
+
+ def test_intersection(self):
+ corners1 = tf.constant([[4.0, 3.0, 7.0, 5.0], [5.0, 6.0, 10.0, 7.0]])
+ corners2 = tf.constant([[3.0, 4.0, 6.0, 8.0], [14.0, 14.0, 15.0, 15.0],
+ [0.0, 0.0, 20.0, 20.0]])
+ exp_output = [[2.0, 0.0, 6.0], [1.0, 0.0, 5.0]]
+ boxes1 = box_list.BoxList(corners1)
+ boxes2 = box_list.BoxList(corners2)
+ intersect = box_list_ops.intersection(boxes1, boxes2)
+ with self.test_session() as sess:
+ intersect_output = sess.run(intersect)
+ self.assertAllClose(intersect_output, exp_output)
+
+ def test_matched_intersection(self):
+ corners1 = tf.constant([[4.0, 3.0, 7.0, 5.0], [5.0, 6.0, 10.0, 7.0]])
+ corners2 = tf.constant([[3.0, 4.0, 6.0, 8.0], [14.0, 14.0, 15.0, 15.0]])
+ exp_output = [2.0, 0.0]
+ boxes1 = box_list.BoxList(corners1)
+ boxes2 = box_list.BoxList(corners2)
+ intersect = box_list_ops.matched_intersection(boxes1, boxes2)
+ with self.test_session() as sess:
+ intersect_output = sess.run(intersect)
+ self.assertAllClose(intersect_output, exp_output)
+
+ def test_iou(self):
+ corners1 = tf.constant([[4.0, 3.0, 7.0, 5.0], [5.0, 6.0, 10.0, 7.0]])
+ corners2 = tf.constant([[3.0, 4.0, 6.0, 8.0], [14.0, 14.0, 15.0, 15.0],
+ [0.0, 0.0, 20.0, 20.0]])
+ exp_output = [[2.0 / 16.0, 0, 6.0 / 400.0], [1.0 / 16.0, 0.0, 5.0 / 400.0]]
+ boxes1 = box_list.BoxList(corners1)
+ boxes2 = box_list.BoxList(corners2)
+ iou = box_list_ops.iou(boxes1, boxes2)
+ with self.test_session() as sess:
+ iou_output = sess.run(iou)
+ self.assertAllClose(iou_output, exp_output)
+
+ def test_matched_iou(self):
+ corners1 = tf.constant([[4.0, 3.0, 7.0, 5.0], [5.0, 6.0, 10.0, 7.0]])
+ corners2 = tf.constant([[3.0, 4.0, 6.0, 8.0], [14.0, 14.0, 15.0, 15.0]])
+ exp_output = [2.0 / 16.0, 0]
+ boxes1 = box_list.BoxList(corners1)
+ boxes2 = box_list.BoxList(corners2)
+ iou = box_list_ops.matched_iou(boxes1, boxes2)
+ with self.test_session() as sess:
+ iou_output = sess.run(iou)
+ self.assertAllClose(iou_output, exp_output)
+
+ def test_iou_works_on_empty_inputs(self):
+ corners1 = tf.constant([[4.0, 3.0, 7.0, 5.0], [5.0, 6.0, 10.0, 7.0]])
+ corners2 = tf.constant([[3.0, 4.0, 6.0, 8.0], [14.0, 14.0, 15.0, 15.0],
+ [0.0, 0.0, 20.0, 20.0]])
+ boxes1 = box_list.BoxList(corners1)
+ boxes2 = box_list.BoxList(corners2)
+ boxes_empty = box_list.BoxList(tf.zeros((0, 4)))
+ iou_empty_1 = box_list_ops.iou(boxes1, boxes_empty)
+ iou_empty_2 = box_list_ops.iou(boxes_empty, boxes2)
+ iou_empty_3 = box_list_ops.iou(boxes_empty, boxes_empty)
+ with self.test_session() as sess:
+ iou_output_1, iou_output_2, iou_output_3 = sess.run(
+ [iou_empty_1, iou_empty_2, iou_empty_3])
+ self.assertAllEqual(iou_output_1.shape, (2, 0))
+ self.assertAllEqual(iou_output_2.shape, (0, 3))
+ self.assertAllEqual(iou_output_3.shape, (0, 0))
+
+ def test_ioa(self):
+ corners1 = tf.constant([[4.0, 3.0, 7.0, 5.0], [5.0, 6.0, 10.0, 7.0]])
+ corners2 = tf.constant([[3.0, 4.0, 6.0, 8.0], [14.0, 14.0, 15.0, 15.0],
+ [0.0, 0.0, 20.0, 20.0]])
+ exp_output_1 = [[2.0 / 12.0, 0, 6.0 / 400.0],
+ [1.0 / 12.0, 0.0, 5.0 / 400.0]]
+ exp_output_2 = [[2.0 / 6.0, 1.0 / 5.0],
+ [0, 0],
+ [6.0 / 6.0, 5.0 / 5.0]]
+ boxes1 = box_list.BoxList(corners1)
+ boxes2 = box_list.BoxList(corners2)
+ ioa_1 = box_list_ops.ioa(boxes1, boxes2)
+ ioa_2 = box_list_ops.ioa(boxes2, boxes1)
+ with self.test_session() as sess:
+ ioa_output_1, ioa_output_2 = sess.run([ioa_1, ioa_2])
+ self.assertAllClose(ioa_output_1, exp_output_1)
+ self.assertAllClose(ioa_output_2, exp_output_2)
+
+ def test_prune_non_overlapping_boxes(self):
+ corners1 = tf.constant([[4.0, 3.0, 7.0, 5.0], [5.0, 6.0, 10.0, 7.0]])
+ corners2 = tf.constant([[3.0, 4.0, 6.0, 8.0], [14.0, 14.0, 15.0, 15.0],
+ [0.0, 0.0, 20.0, 20.0]])
+ boxes1 = box_list.BoxList(corners1)
+ boxes2 = box_list.BoxList(corners2)
+ minoverlap = 0.5
+
+ exp_output_1 = boxes1
+ exp_output_2 = box_list.BoxList(tf.constant(0.0, shape=[0, 4]))
+ output_1, keep_indices_1 = box_list_ops.prune_non_overlapping_boxes(
+ boxes1, boxes2, min_overlap=minoverlap)
+ output_2, keep_indices_2 = box_list_ops.prune_non_overlapping_boxes(
+ boxes2, boxes1, min_overlap=minoverlap)
+ with self.test_session() as sess:
+ (output_1_, keep_indices_1_, output_2_, keep_indices_2_, exp_output_1_,
+ exp_output_2_) = sess.run(
+ [output_1.get(), keep_indices_1,
+ output_2.get(), keep_indices_2,
+ exp_output_1.get(), exp_output_2.get()])
+ self.assertAllClose(output_1_, exp_output_1_)
+ self.assertAllClose(output_2_, exp_output_2_)
+ self.assertAllEqual(keep_indices_1_, [0, 1])
+ self.assertAllEqual(keep_indices_2_, [])
+
+ def test_prune_small_boxes(self):
+ boxes = tf.constant([[4.0, 3.0, 7.0, 5.0],
+ [5.0, 6.0, 10.0, 7.0],
+ [3.0, 4.0, 6.0, 8.0],
+ [14.0, 14.0, 15.0, 15.0],
+ [0.0, 0.0, 20.0, 20.0]])
+ exp_boxes = [[3.0, 4.0, 6.0, 8.0],
+ [0.0, 0.0, 20.0, 20.0]]
+ boxes = box_list.BoxList(boxes)
+ pruned_boxes = box_list_ops.prune_small_boxes(boxes, 3)
+ with self.test_session() as sess:
+ pruned_boxes = sess.run(pruned_boxes.get())
+ self.assertAllEqual(pruned_boxes, exp_boxes)
+
+ def test_prune_small_boxes_prunes_boxes_with_negative_side(self):
+ boxes = tf.constant([[4.0, 3.0, 7.0, 5.0],
+ [5.0, 6.0, 10.0, 7.0],
+ [3.0, 4.0, 6.0, 8.0],
+ [14.0, 14.0, 15.0, 15.0],
+ [0.0, 0.0, 20.0, 20.0],
+ [2.0, 3.0, 1.5, 7.0], # negative height
+ [2.0, 3.0, 5.0, 1.7]]) # negative width
+ exp_boxes = [[3.0, 4.0, 6.0, 8.0],
+ [0.0, 0.0, 20.0, 20.0]]
+ boxes = box_list.BoxList(boxes)
+ pruned_boxes = box_list_ops.prune_small_boxes(boxes, 3)
+ with self.test_session() as sess:
+ pruned_boxes = sess.run(pruned_boxes.get())
+ self.assertAllEqual(pruned_boxes, exp_boxes)
+
+ def test_change_coordinate_frame(self):
+ corners = tf.constant([[0.25, 0.5, 0.75, 0.75], [0.5, 0.0, 1.0, 1.0]])
+ window = tf.constant([0.25, 0.25, 0.75, 0.75])
+ boxes = box_list.BoxList(corners)
+
+ expected_corners = tf.constant([[0, 0.5, 1.0, 1.0], [0.5, -0.5, 1.5, 1.5]])
+ expected_boxes = box_list.BoxList(expected_corners)
+ output = box_list_ops.change_coordinate_frame(boxes, window)
+
+ with self.test_session() as sess:
+ output_, expected_boxes_ = sess.run([output.get(), expected_boxes.get()])
+ self.assertAllClose(output_, expected_boxes_)
+
+ def test_ioa_works_on_empty_inputs(self):
+ corners1 = tf.constant([[4.0, 3.0, 7.0, 5.0], [5.0, 6.0, 10.0, 7.0]])
+ corners2 = tf.constant([[3.0, 4.0, 6.0, 8.0], [14.0, 14.0, 15.0, 15.0],
+ [0.0, 0.0, 20.0, 20.0]])
+ boxes1 = box_list.BoxList(corners1)
+ boxes2 = box_list.BoxList(corners2)
+ boxes_empty = box_list.BoxList(tf.zeros((0, 4)))
+ ioa_empty_1 = box_list_ops.ioa(boxes1, boxes_empty)
+ ioa_empty_2 = box_list_ops.ioa(boxes_empty, boxes2)
+ ioa_empty_3 = box_list_ops.ioa(boxes_empty, boxes_empty)
+ with self.test_session() as sess:
+ ioa_output_1, ioa_output_2, ioa_output_3 = sess.run(
+ [ioa_empty_1, ioa_empty_2, ioa_empty_3])
+ self.assertAllEqual(ioa_output_1.shape, (2, 0))
+ self.assertAllEqual(ioa_output_2.shape, (0, 3))
+ self.assertAllEqual(ioa_output_3.shape, (0, 0))
+
+ def test_pairwise_distances(self):
+ corners1 = tf.constant([[0.0, 0.0, 0.0, 0.0],
+ [1.0, 1.0, 0.0, 2.0]])
+ corners2 = tf.constant([[3.0, 4.0, 1.0, 0.0],
+ [-4.0, 0.0, 0.0, 3.0],
+ [0.0, 0.0, 0.0, 0.0]])
+ exp_output = [[26, 25, 0], [18, 27, 6]]
+ boxes1 = box_list.BoxList(corners1)
+ boxes2 = box_list.BoxList(corners2)
+ dist_matrix = box_list_ops.sq_dist(boxes1, boxes2)
+ with self.test_session() as sess:
+ dist_output = sess.run(dist_matrix)
+ self.assertAllClose(dist_output, exp_output)
+
+ def test_boolean_mask(self):
+ corners = tf.constant(
+ [4 * [0.0], 4 * [1.0], 4 * [2.0], 4 * [3.0], 4 * [4.0]])
+ indicator = tf.constant([True, False, True, False, True], tf.bool)
+ expected_subset = [4 * [0.0], 4 * [2.0], 4 * [4.0]]
+ boxes = box_list.BoxList(corners)
+ subset = box_list_ops.boolean_mask(boxes, indicator)
+ with self.test_session() as sess:
+ subset_output = sess.run(subset.get())
+ self.assertAllClose(subset_output, expected_subset)
+
+ def test_static_boolean_mask_with_field(self):
+
+ def graph_fn(corners, weights, indicator):
+ boxes = box_list.BoxList(corners)
+ boxes.add_field('weights', weights)
+ subset = box_list_ops.boolean_mask(
+ boxes,
+ indicator, ['weights'],
+ use_static_shapes=True,
+ indicator_sum=3)
+ return (subset.get_field('boxes'), subset.get_field('weights'))
+
+ corners = np.array(
+ [4 * [0.0], 4 * [1.0], 4 * [2.0], 4 * [3.0], 4 * [4.0]],
+ dtype=np.float32)
+ indicator = np.array([True, False, True, False, True], dtype=np.bool)
+ weights = np.array([[.1], [.3], [.5], [.7], [.9]], dtype=np.float32)
+ result_boxes, result_weights = self.execute(graph_fn,
+ [corners, weights, indicator])
+ expected_boxes = [4 * [0.0], 4 * [2.0], 4 * [4.0]]
+ expected_weights = [[.1], [.5], [.9]]
+
+ self.assertAllClose(result_boxes, expected_boxes)
+ self.assertAllClose(result_weights, expected_weights)
+
+ def test_dynamic_boolean_mask_with_field(self):
+ corners = tf.placeholder(tf.float32, [None, 4])
+ indicator = tf.placeholder(tf.bool, [None])
+ weights = tf.placeholder(tf.float32, [None, 1])
+ expected_subset = [4 * [0.0], 4 * [2.0], 4 * [4.0]]
+ expected_weights = [[.1], [.5], [.9]]
+
+ boxes = box_list.BoxList(corners)
+ boxes.add_field('weights', weights)
+ subset = box_list_ops.boolean_mask(boxes, indicator, ['weights'])
+ with self.test_session() as sess:
+ subset_output, weights_output = sess.run(
+ [subset.get(), subset.get_field('weights')],
+ feed_dict={
+ corners:
+ np.array(
+ [4 * [0.0], 4 * [1.0], 4 * [2.0], 4 * [3.0], 4 * [4.0]]),
+ indicator:
+ np.array([True, False, True, False, True]).astype(np.bool),
+ weights:
+ np.array([[.1], [.3], [.5], [.7], [.9]])
+ })
+ self.assertAllClose(subset_output, expected_subset)
+ self.assertAllClose(weights_output, expected_weights)
+
+ def test_gather(self):
+ corners = tf.constant(
+ [4 * [0.0], 4 * [1.0], 4 * [2.0], 4 * [3.0], 4 * [4.0]])
+ indices = tf.constant([0, 2, 4], tf.int32)
+ expected_subset = [4 * [0.0], 4 * [2.0], 4 * [4.0]]
+ boxes = box_list.BoxList(corners)
+ subset = box_list_ops.gather(boxes, indices)
+ with self.test_session() as sess:
+ subset_output = sess.run(subset.get())
+ self.assertAllClose(subset_output, expected_subset)
+
+ def test_static_gather_with_field(self):
+
+ def graph_fn(corners, weights, indices):
+ boxes = box_list.BoxList(corners)
+ boxes.add_field('weights', weights)
+ subset = box_list_ops.gather(
+ boxes, indices, ['weights'], use_static_shapes=True)
+ return (subset.get_field('boxes'), subset.get_field('weights'))
+
+ corners = np.array([4 * [0.0], 4 * [1.0], 4 * [2.0], 4 * [3.0],
+ 4 * [4.0]], dtype=np.float32)
+ weights = np.array([[.1], [.3], [.5], [.7], [.9]], dtype=np.float32)
+ indices = np.array([0, 2, 4], dtype=np.int32)
+
+ result_boxes, result_weights = self.execute(graph_fn,
+ [corners, weights, indices])
+ expected_boxes = [4 * [0.0], 4 * [2.0], 4 * [4.0]]
+ expected_weights = [[.1], [.5], [.9]]
+ self.assertAllClose(result_boxes, expected_boxes)
+ self.assertAllClose(result_weights, expected_weights)
+
+ def test_dynamic_gather_with_field(self):
+ corners = tf.placeholder(tf.float32, [None, 4])
+ indices = tf.placeholder(tf.int32, [None])
+ weights = tf.placeholder(tf.float32, [None, 1])
+ expected_subset = [4 * [0.0], 4 * [2.0], 4 * [4.0]]
+ expected_weights = [[.1], [.5], [.9]]
+
+ boxes = box_list.BoxList(corners)
+ boxes.add_field('weights', weights)
+ subset = box_list_ops.gather(boxes, indices, ['weights'],
+ use_static_shapes=True)
+ with self.test_session() as sess:
+ subset_output, weights_output = sess.run(
+ [subset.get(), subset.get_field('weights')],
+ feed_dict={
+ corners:
+ np.array(
+ [4 * [0.0], 4 * [1.0], 4 * [2.0], 4 * [3.0], 4 * [4.0]]),
+ indices:
+ np.array([0, 2, 4]).astype(np.int32),
+ weights:
+ np.array([[.1], [.3], [.5], [.7], [.9]])
+ })
+ self.assertAllClose(subset_output, expected_subset)
+ self.assertAllClose(weights_output, expected_weights)
+
+ def test_gather_with_invalid_field(self):
+ corners = tf.constant([4 * [0.0], 4 * [1.0]])
+ indices = tf.constant([0, 1], tf.int32)
+ weights = tf.constant([[.1], [.3]], tf.float32)
+
+ boxes = box_list.BoxList(corners)
+ boxes.add_field('weights', weights)
+ with self.assertRaises(ValueError):
+ box_list_ops.gather(boxes, indices, ['foo', 'bar'])
+
+ def test_gather_with_invalid_inputs(self):
+ corners = tf.constant(
+ [4 * [0.0], 4 * [1.0], 4 * [2.0], 4 * [3.0], 4 * [4.0]])
+ indices_float32 = tf.constant([0, 2, 4], tf.float32)
+ boxes = box_list.BoxList(corners)
+ with self.assertRaises(ValueError):
+ _ = box_list_ops.gather(boxes, indices_float32)
+ indices_2d = tf.constant([[0, 2, 4]], tf.int32)
+ boxes = box_list.BoxList(corners)
+ with self.assertRaises(ValueError):
+ _ = box_list_ops.gather(boxes, indices_2d)
+
+ def test_gather_with_dynamic_indexing(self):
+ corners = tf.constant([4 * [0.0], 4 * [1.0], 4 * [2.0], 4 * [3.0], 4 * [4.0]
+ ])
+ weights = tf.constant([.5, .3, .7, .1, .9], tf.float32)
+ indices = tf.reshape(tf.where(tf.greater(weights, 0.4)), [-1])
+ expected_subset = [4 * [0.0], 4 * [2.0], 4 * [4.0]]
+ expected_weights = [.5, .7, .9]
+
+ boxes = box_list.BoxList(corners)
+ boxes.add_field('weights', weights)
+ subset = box_list_ops.gather(boxes, indices, ['weights'])
+ with self.test_session() as sess:
+ subset_output, weights_output = sess.run([subset.get(), subset.get_field(
+ 'weights')])
+ self.assertAllClose(subset_output, expected_subset)
+ self.assertAllClose(weights_output, expected_weights)
+
+ def test_sort_by_field_ascending_order(self):
+ exp_corners = [[0, 0, 1, 1], [0, 0.1, 1, 1.1], [0, -0.1, 1, 0.9],
+ [0, 10, 1, 11], [0, 10.1, 1, 11.1], [0, 100, 1, 101]]
+ exp_scores = [.95, .9, .75, .6, .5, .3]
+ exp_weights = [.2, .45, .6, .75, .8, .92]
+ shuffle = [2, 4, 0, 5, 1, 3]
+ corners = tf.constant([exp_corners[i] for i in shuffle], tf.float32)
+ boxes = box_list.BoxList(corners)
+ boxes.add_field('scores', tf.constant(
+ [exp_scores[i] for i in shuffle], tf.float32))
+ boxes.add_field('weights', tf.constant(
+ [exp_weights[i] for i in shuffle], tf.float32))
+ sort_by_weight = box_list_ops.sort_by_field(
+ boxes,
+ 'weights',
+ order=box_list_ops.SortOrder.ascend)
+ with self.test_session() as sess:
+ corners_out, scores_out, weights_out = sess.run([
+ sort_by_weight.get(),
+ sort_by_weight.get_field('scores'),
+ sort_by_weight.get_field('weights')])
+ self.assertAllClose(corners_out, exp_corners)
+ self.assertAllClose(scores_out, exp_scores)
+ self.assertAllClose(weights_out, exp_weights)
+
+ def test_sort_by_field_descending_order(self):
+ exp_corners = [[0, 0, 1, 1], [0, 0.1, 1, 1.1], [0, -0.1, 1, 0.9],
+ [0, 10, 1, 11], [0, 10.1, 1, 11.1], [0, 100, 1, 101]]
+ exp_scores = [.95, .9, .75, .6, .5, .3]
+ exp_weights = [.2, .45, .6, .75, .8, .92]
+ shuffle = [2, 4, 0, 5, 1, 3]
+
+ corners = tf.constant([exp_corners[i] for i in shuffle], tf.float32)
+ boxes = box_list.BoxList(corners)
+ boxes.add_field('scores', tf.constant(
+ [exp_scores[i] for i in shuffle], tf.float32))
+ boxes.add_field('weights', tf.constant(
+ [exp_weights[i] for i in shuffle], tf.float32))
+
+ sort_by_score = box_list_ops.sort_by_field(boxes, 'scores')
+ with self.test_session() as sess:
+ corners_out, scores_out, weights_out = sess.run([sort_by_score.get(
+ ), sort_by_score.get_field('scores'), sort_by_score.get_field('weights')])
+ self.assertAllClose(corners_out, exp_corners)
+ self.assertAllClose(scores_out, exp_scores)
+ self.assertAllClose(weights_out, exp_weights)
+
+ def test_sort_by_field_invalid_inputs(self):
+ corners = tf.constant([4 * [0.0], 4 * [0.5], 4 * [1.0], 4 * [2.0], 4 *
+ [3.0], 4 * [4.0]])
+ misc = tf.constant([[.95, .9], [.5, .3]], tf.float32)
+ weights = tf.constant([.1, .2], tf.float32)
+ boxes = box_list.BoxList(corners)
+ boxes.add_field('misc', misc)
+ boxes.add_field('weights', weights)
+
+ with self.assertRaises(ValueError):
+ box_list_ops.sort_by_field(boxes, 'area')
+
+ with self.assertRaises(ValueError):
+ box_list_ops.sort_by_field(boxes, 'misc')
+
+ with self.assertRaises(ValueError):
+ box_list_ops.sort_by_field(boxes, 'weights')
+
+ def test_visualize_boxes_in_image(self):
+ image = tf.zeros((6, 4, 3))
+ corners = tf.constant([[0, 0, 5, 3],
+ [0, 0, 3, 2]], tf.float32)
+ boxes = box_list.BoxList(corners)
+ image_and_boxes = box_list_ops.visualize_boxes_in_image(image, boxes)
+ image_and_boxes_bw = tf.to_float(
+ tf.greater(tf.reduce_sum(image_and_boxes, 2), 0.0))
+ exp_result = [[1, 1, 1, 0],
+ [1, 1, 1, 0],
+ [1, 1, 1, 0],
+ [1, 0, 1, 0],
+ [1, 1, 1, 0],
+ [0, 0, 0, 0]]
+ with self.test_session() as sess:
+ output = sess.run(image_and_boxes_bw)
+ self.assertAllEqual(output.astype(int), exp_result)
+
+ def test_filter_field_value_equals(self):
+ corners = tf.constant([[0, 0, 1, 1],
+ [0, 0.1, 1, 1.1],
+ [0, -0.1, 1, 0.9],
+ [0, 10, 1, 11],
+ [0, 10.1, 1, 11.1],
+ [0, 100, 1, 101]], tf.float32)
+ boxes = box_list.BoxList(corners)
+ boxes.add_field('classes', tf.constant([1, 2, 1, 2, 2, 1]))
+ exp_output1 = [[0, 0, 1, 1], [0, -0.1, 1, 0.9], [0, 100, 1, 101]]
+ exp_output2 = [[0, 0.1, 1, 1.1], [0, 10, 1, 11], [0, 10.1, 1, 11.1]]
+
+ filtered_boxes1 = box_list_ops.filter_field_value_equals(
+ boxes, 'classes', 1)
+ filtered_boxes2 = box_list_ops.filter_field_value_equals(
+ boxes, 'classes', 2)
+ with self.test_session() as sess:
+ filtered_output1, filtered_output2 = sess.run([filtered_boxes1.get(),
+ filtered_boxes2.get()])
+ self.assertAllClose(filtered_output1, exp_output1)
+ self.assertAllClose(filtered_output2, exp_output2)
+
+ def test_filter_greater_than(self):
+ corners = tf.constant([[0, 0, 1, 1],
+ [0, 0.1, 1, 1.1],
+ [0, -0.1, 1, 0.9],
+ [0, 10, 1, 11],
+ [0, 10.1, 1, 11.1],
+ [0, 100, 1, 101]], tf.float32)
+ boxes = box_list.BoxList(corners)
+ boxes.add_field('scores', tf.constant([.1, .75, .9, .5, .5, .8]))
+ thresh = .6
+ exp_output = [[0, 0.1, 1, 1.1], [0, -0.1, 1, 0.9], [0, 100, 1, 101]]
+
+ filtered_boxes = box_list_ops.filter_greater_than(boxes, thresh)
+ with self.test_session() as sess:
+ filtered_output = sess.run(filtered_boxes.get())
+ self.assertAllClose(filtered_output, exp_output)
+
+ def test_clip_box_list(self):
+ boxlist = box_list.BoxList(
+ tf.constant([[0.1, 0.1, 0.4, 0.4], [0.1, 0.1, 0.5, 0.5],
+ [0.6, 0.6, 0.8, 0.8], [0.2, 0.2, 0.3, 0.3]], tf.float32))
+ boxlist.add_field('classes', tf.constant([0, 0, 1, 1]))
+ boxlist.add_field('scores', tf.constant([0.75, 0.65, 0.3, 0.2]))
+ num_boxes = 2
+ clipped_boxlist = box_list_ops.pad_or_clip_box_list(boxlist, num_boxes)
+
+ expected_boxes = [[0.1, 0.1, 0.4, 0.4], [0.1, 0.1, 0.5, 0.5]]
+ expected_classes = [0, 0]
+ expected_scores = [0.75, 0.65]
+ with self.test_session() as sess:
+ boxes_out, classes_out, scores_out = sess.run(
+ [clipped_boxlist.get(), clipped_boxlist.get_field('classes'),
+ clipped_boxlist.get_field('scores')])
+
+ self.assertAllClose(expected_boxes, boxes_out)
+ self.assertAllEqual(expected_classes, classes_out)
+ self.assertAllClose(expected_scores, scores_out)
+
+ def test_pad_box_list(self):
+ boxlist = box_list.BoxList(
+ tf.constant([[0.1, 0.1, 0.4, 0.4], [0.1, 0.1, 0.5, 0.5]], tf.float32))
+ boxlist.add_field('classes', tf.constant([0, 1]))
+ boxlist.add_field('scores', tf.constant([0.75, 0.2]))
+ num_boxes = 4
+ padded_boxlist = box_list_ops.pad_or_clip_box_list(boxlist, num_boxes)
+
+ expected_boxes = [[0.1, 0.1, 0.4, 0.4], [0.1, 0.1, 0.5, 0.5],
+ [0, 0, 0, 0], [0, 0, 0, 0]]
+ expected_classes = [0, 1, 0, 0]
+ expected_scores = [0.75, 0.2, 0, 0]
+ with self.test_session() as sess:
+ boxes_out, classes_out, scores_out = sess.run(
+ [padded_boxlist.get(), padded_boxlist.get_field('classes'),
+ padded_boxlist.get_field('scores')])
+
+ self.assertAllClose(expected_boxes, boxes_out)
+ self.assertAllEqual(expected_classes, classes_out)
+ self.assertAllClose(expected_scores, scores_out)
+
+ def test_select_random_box(self):
+ boxes = [[0., 0., 1., 1.],
+ [0., 1., 2., 3.],
+ [0., 2., 3., 4.]]
+
+ corners = tf.constant(boxes, dtype=tf.float32)
+ boxlist = box_list.BoxList(corners)
+ random_bbox, valid = box_list_ops.select_random_box(boxlist)
+ with self.test_session() as sess:
+ random_bbox_out, valid_out = sess.run([random_bbox, valid])
+
+ norm_small = any(
+ [np.linalg.norm(random_bbox_out - box) < 1e-6 for box in boxes])
+
+ self.assertTrue(norm_small)
+ self.assertTrue(valid_out)
+
+ def test_select_random_box_with_empty_boxlist(self):
+ corners = tf.constant([], shape=[0, 4], dtype=tf.float32)
+ boxlist = box_list.BoxList(corners)
+ random_bbox, valid = box_list_ops.select_random_box(boxlist)
+ with self.test_session() as sess:
+ random_bbox_out, valid_out = sess.run([random_bbox, valid])
+
+ expected_bbox_out = np.array([[-1., -1., -1., -1.]], dtype=np.float32)
+ self.assertAllEqual(expected_bbox_out, random_bbox_out)
+ self.assertFalse(valid_out)
+
+ def test_get_minimal_coverage_box(self):
+ boxes = [[0., 0., 1., 1.],
+ [-1., 1., 2., 3.],
+ [0., 2., 3., 4.]]
+
+ expected_coverage_box = [[-1., 0., 3., 4.]]
+
+ corners = tf.constant(boxes, dtype=tf.float32)
+ boxlist = box_list.BoxList(corners)
+ coverage_box = box_list_ops.get_minimal_coverage_box(boxlist)
+ with self.test_session() as sess:
+ coverage_box_out = sess.run(coverage_box)
+
+ self.assertAllClose(expected_coverage_box, coverage_box_out)
+
+ def test_get_minimal_coverage_box_with_empty_boxlist(self):
+ corners = tf.constant([], shape=[0, 4], dtype=tf.float32)
+ boxlist = box_list.BoxList(corners)
+ coverage_box = box_list_ops.get_minimal_coverage_box(boxlist)
+ with self.test_session() as sess:
+ coverage_box_out = sess.run(coverage_box)
+
+ self.assertAllClose([[0.0, 0.0, 1.0, 1.0]], coverage_box_out)
+
+
+class ConcatenateTest(tf.test.TestCase):
+
+ def test_invalid_input_box_list_list(self):
+ with self.assertRaises(ValueError):
+ box_list_ops.concatenate(None)
+ with self.assertRaises(ValueError):
+ box_list_ops.concatenate([])
+ with self.assertRaises(ValueError):
+ corners = tf.constant([[0, 0, 0, 0]], tf.float32)
+ boxlist = box_list.BoxList(corners)
+ box_list_ops.concatenate([boxlist, 2])
+
+ def test_concatenate_with_missing_fields(self):
+ corners1 = tf.constant([[0, 0, 0, 0], [1, 2, 3, 4]], tf.float32)
+ scores1 = tf.constant([1.0, 2.1])
+ corners2 = tf.constant([[0, 3, 1, 6], [2, 4, 3, 8]], tf.float32)
+ boxlist1 = box_list.BoxList(corners1)
+ boxlist1.add_field('scores', scores1)
+ boxlist2 = box_list.BoxList(corners2)
+ with self.assertRaises(ValueError):
+ box_list_ops.concatenate([boxlist1, boxlist2])
+
+ def test_concatenate_with_incompatible_field_shapes(self):
+ corners1 = tf.constant([[0, 0, 0, 0], [1, 2, 3, 4]], tf.float32)
+ scores1 = tf.constant([1.0, 2.1])
+ corners2 = tf.constant([[0, 3, 1, 6], [2, 4, 3, 8]], tf.float32)
+ scores2 = tf.constant([[1.0, 1.0], [2.1, 3.2]])
+ boxlist1 = box_list.BoxList(corners1)
+ boxlist1.add_field('scores', scores1)
+ boxlist2 = box_list.BoxList(corners2)
+ boxlist2.add_field('scores', scores2)
+ with self.assertRaises(ValueError):
+ box_list_ops.concatenate([boxlist1, boxlist2])
+
+ def test_concatenate_is_correct(self):
+ corners1 = tf.constant([[0, 0, 0, 0], [1, 2, 3, 4]], tf.float32)
+ scores1 = tf.constant([1.0, 2.1])
+ corners2 = tf.constant([[0, 3, 1, 6], [2, 4, 3, 8], [1, 0, 5, 10]],
+ tf.float32)
+ scores2 = tf.constant([1.0, 2.1, 5.6])
+
+ exp_corners = [[0, 0, 0, 0],
+ [1, 2, 3, 4],
+ [0, 3, 1, 6],
+ [2, 4, 3, 8],
+ [1, 0, 5, 10]]
+ exp_scores = [1.0, 2.1, 1.0, 2.1, 5.6]
+
+ boxlist1 = box_list.BoxList(corners1)
+ boxlist1.add_field('scores', scores1)
+ boxlist2 = box_list.BoxList(corners2)
+ boxlist2.add_field('scores', scores2)
+ result = box_list_ops.concatenate([boxlist1, boxlist2])
+ with self.test_session() as sess:
+ corners_output, scores_output = sess.run(
+ [result.get(), result.get_field('scores')])
+ self.assertAllClose(corners_output, exp_corners)
+ self.assertAllClose(scores_output, exp_scores)
+
+
+class NonMaxSuppressionTest(tf.test.TestCase):
+
+ def test_select_from_three_clusters(self):
+ corners = tf.constant([[0, 0, 1, 1],
+ [0, 0.1, 1, 1.1],
+ [0, -0.1, 1, 0.9],
+ [0, 10, 1, 11],
+ [0, 10.1, 1, 11.1],
+ [0, 100, 1, 101]], tf.float32)
+ boxes = box_list.BoxList(corners)
+ boxes.add_field('scores', tf.constant([.9, .75, .6, .95, .5, .3]))
+ iou_thresh = .5
+ max_output_size = 3
+
+ exp_nms = [[0, 10, 1, 11],
+ [0, 0, 1, 1],
+ [0, 100, 1, 101]]
+ nms = box_list_ops.non_max_suppression(
+ boxes, iou_thresh, max_output_size)
+ with self.test_session() as sess:
+ nms_output = sess.run(nms.get())
+ self.assertAllClose(nms_output, exp_nms)
+
+ def test_select_at_most_two_boxes_from_three_clusters(self):
+ corners = tf.constant([[0, 0, 1, 1],
+ [0, 0.1, 1, 1.1],
+ [0, -0.1, 1, 0.9],
+ [0, 10, 1, 11],
+ [0, 10.1, 1, 11.1],
+ [0, 100, 1, 101]], tf.float32)
+ boxes = box_list.BoxList(corners)
+ boxes.add_field('scores', tf.constant([.9, .75, .6, .95, .5, .3]))
+ iou_thresh = .5
+ max_output_size = 2
+
+ exp_nms = [[0, 10, 1, 11],
+ [0, 0, 1, 1]]
+ nms = box_list_ops.non_max_suppression(
+ boxes, iou_thresh, max_output_size)
+ with self.test_session() as sess:
+ nms_output = sess.run(nms.get())
+ self.assertAllClose(nms_output, exp_nms)
+
+ def test_select_at_most_thirty_boxes_from_three_clusters(self):
+ corners = tf.constant([[0, 0, 1, 1],
+ [0, 0.1, 1, 1.1],
+ [0, -0.1, 1, 0.9],
+ [0, 10, 1, 11],
+ [0, 10.1, 1, 11.1],
+ [0, 100, 1, 101]], tf.float32)
+ boxes = box_list.BoxList(corners)
+ boxes.add_field('scores', tf.constant([.9, .75, .6, .95, .5, .3]))
+ iou_thresh = .5
+ max_output_size = 30
+
+ exp_nms = [[0, 10, 1, 11],
+ [0, 0, 1, 1],
+ [0, 100, 1, 101]]
+ nms = box_list_ops.non_max_suppression(
+ boxes, iou_thresh, max_output_size)
+ with self.test_session() as sess:
+ nms_output = sess.run(nms.get())
+ self.assertAllClose(nms_output, exp_nms)
+
+ def test_select_single_box(self):
+ corners = tf.constant([[0, 0, 1, 1]], tf.float32)
+ boxes = box_list.BoxList(corners)
+ boxes.add_field('scores', tf.constant([.9]))
+ iou_thresh = .5
+ max_output_size = 3
+
+ exp_nms = [[0, 0, 1, 1]]
+ nms = box_list_ops.non_max_suppression(
+ boxes, iou_thresh, max_output_size)
+ with self.test_session() as sess:
+ nms_output = sess.run(nms.get())
+ self.assertAllClose(nms_output, exp_nms)
+
+ def test_select_from_ten_identical_boxes(self):
+ corners = tf.constant(10 * [[0, 0, 1, 1]], tf.float32)
+ boxes = box_list.BoxList(corners)
+ boxes.add_field('scores', tf.constant(10 * [.9]))
+ iou_thresh = .5
+ max_output_size = 3
+
+ exp_nms = [[0, 0, 1, 1]]
+ nms = box_list_ops.non_max_suppression(
+ boxes, iou_thresh, max_output_size)
+ with self.test_session() as sess:
+ nms_output = sess.run(nms.get())
+ self.assertAllClose(nms_output, exp_nms)
+
+ def test_copy_extra_fields(self):
+ corners = tf.constant([[0, 0, 1, 1],
+ [0, 0.1, 1, 1.1]], tf.float32)
+ boxes = box_list.BoxList(corners)
+ tensor1 = np.array([[1], [4]])
+ tensor2 = np.array([[1, 1], [2, 2]])
+ boxes.add_field('tensor1', tf.constant(tensor1))
+ boxes.add_field('tensor2', tf.constant(tensor2))
+ new_boxes = box_list.BoxList(tf.constant([[0, 0, 10, 10],
+ [1, 3, 5, 5]], tf.float32))
+ new_boxes = box_list_ops._copy_extra_fields(new_boxes, boxes)
+ with self.test_session() as sess:
+ self.assertAllClose(tensor1, sess.run(new_boxes.get_field('tensor1')))
+ self.assertAllClose(tensor2, sess.run(new_boxes.get_field('tensor2')))
+
+
+class CoordinatesConversionTest(tf.test.TestCase):
+
+ def test_to_normalized_coordinates(self):
+ coordinates = tf.constant([[0, 0, 100, 100],
+ [25, 25, 75, 75]], tf.float32)
+ img = tf.ones((128, 100, 100, 3))
+ boxlist = box_list.BoxList(coordinates)
+ normalized_boxlist = box_list_ops.to_normalized_coordinates(
+ boxlist, tf.shape(img)[1], tf.shape(img)[2])
+ expected_boxes = [[0, 0, 1, 1],
+ [0.25, 0.25, 0.75, 0.75]]
+
+ with self.test_session() as sess:
+ normalized_boxes = sess.run(normalized_boxlist.get())
+ self.assertAllClose(normalized_boxes, expected_boxes)
+
+ def test_to_normalized_coordinates_already_normalized(self):
+ coordinates = tf.constant([[0, 0, 1, 1],
+ [0.25, 0.25, 0.75, 0.75]], tf.float32)
+ img = tf.ones((128, 100, 100, 3))
+ boxlist = box_list.BoxList(coordinates)
+ normalized_boxlist = box_list_ops.to_normalized_coordinates(
+ boxlist, tf.shape(img)[1], tf.shape(img)[2])
+
+ with self.test_session() as sess:
+ with self.assertRaisesOpError('assertion failed'):
+ sess.run(normalized_boxlist.get())
+
+ def test_to_absolute_coordinates(self):
+ coordinates = tf.constant([[0, 0, 1, 1],
+ [0.25, 0.25, 0.75, 0.75]], tf.float32)
+ img = tf.ones((128, 100, 100, 3))
+ boxlist = box_list.BoxList(coordinates)
+ absolute_boxlist = box_list_ops.to_absolute_coordinates(boxlist,
+ tf.shape(img)[1],
+ tf.shape(img)[2])
+ expected_boxes = [[0, 0, 100, 100],
+ [25, 25, 75, 75]]
+
+ with self.test_session() as sess:
+ absolute_boxes = sess.run(absolute_boxlist.get())
+ self.assertAllClose(absolute_boxes, expected_boxes)
+
+ def test_to_absolute_coordinates_already_absolute(self):
+ coordinates = tf.constant([[0, 0, 100, 100],
+ [25, 25, 75, 75]], tf.float32)
+ img = tf.ones((128, 100, 100, 3))
+ boxlist = box_list.BoxList(coordinates)
+ absolute_boxlist = box_list_ops.to_absolute_coordinates(boxlist,
+ tf.shape(img)[1],
+ tf.shape(img)[2])
+
+ with self.test_session() as sess:
+ with self.assertRaisesOpError('assertion failed'):
+ sess.run(absolute_boxlist.get())
+
+ def test_convert_to_normalized_and_back(self):
+ coordinates = np.random.uniform(size=(100, 4))
+ coordinates = np.round(np.sort(coordinates) * 200)
+ coordinates[:, 2:4] += 1
+ coordinates[99, :] = [0, 0, 201, 201]
+ img = tf.ones((128, 202, 202, 3))
+
+ boxlist = box_list.BoxList(tf.constant(coordinates, tf.float32))
+ boxlist = box_list_ops.to_normalized_coordinates(boxlist,
+ tf.shape(img)[1],
+ tf.shape(img)[2])
+ boxlist = box_list_ops.to_absolute_coordinates(boxlist,
+ tf.shape(img)[1],
+ tf.shape(img)[2])
+
+ with self.test_session() as sess:
+ out = sess.run(boxlist.get())
+ self.assertAllClose(out, coordinates)
+
+ def test_convert_to_absolute_and_back(self):
+ coordinates = np.random.uniform(size=(100, 4))
+ coordinates = np.sort(coordinates)
+ coordinates[99, :] = [0, 0, 1, 1]
+ img = tf.ones((128, 202, 202, 3))
+
+ boxlist = box_list.BoxList(tf.constant(coordinates, tf.float32))
+ boxlist = box_list_ops.to_absolute_coordinates(boxlist,
+ tf.shape(img)[1],
+ tf.shape(img)[2])
+ boxlist = box_list_ops.to_normalized_coordinates(boxlist,
+ tf.shape(img)[1],
+ tf.shape(img)[2])
+
+ with self.test_session() as sess:
+ out = sess.run(boxlist.get())
+ self.assertAllClose(out, coordinates)
+
+ def test_to_absolute_coordinates_maximum_coordinate_check(self):
+ coordinates = tf.constant([[0, 0, 1.2, 1.2],
+ [0.25, 0.25, 0.75, 0.75]], tf.float32)
+ img = tf.ones((128, 100, 100, 3))
+ boxlist = box_list.BoxList(coordinates)
+ absolute_boxlist = box_list_ops.to_absolute_coordinates(
+ boxlist,
+ tf.shape(img)[1],
+ tf.shape(img)[2],
+ maximum_normalized_coordinate=1.1)
+
+ with self.test_session() as sess:
+ with self.assertRaisesOpError('assertion failed'):
+ sess.run(absolute_boxlist.get())
+
+
+class BoxRefinementTest(tf.test.TestCase):
+
+ def test_box_voting(self):
+ candidates = box_list.BoxList(
+ tf.constant([[0.1, 0.1, 0.4, 0.4], [0.6, 0.6, 0.8, 0.8]], tf.float32))
+ candidates.add_field('ExtraField', tf.constant([1, 2]))
+ pool = box_list.BoxList(
+ tf.constant([[0.1, 0.1, 0.4, 0.4], [0.1, 0.1, 0.5, 0.5],
+ [0.6, 0.6, 0.8, 0.8]], tf.float32))
+ pool.add_field('scores', tf.constant([0.75, 0.25, 0.3]))
+ averaged_boxes = box_list_ops.box_voting(candidates, pool)
+ expected_boxes = [[0.1, 0.1, 0.425, 0.425], [0.6, 0.6, 0.8, 0.8]]
+ expected_scores = [0.5, 0.3]
+ with self.test_session() as sess:
+ boxes_out, scores_out, extra_field_out = sess.run(
+ [averaged_boxes.get(), averaged_boxes.get_field('scores'),
+ averaged_boxes.get_field('ExtraField')])
+
+ self.assertAllClose(expected_boxes, boxes_out)
+ self.assertAllClose(expected_scores, scores_out)
+ self.assertAllEqual(extra_field_out, [1, 2])
+
+ def test_box_voting_fails_with_negative_scores(self):
+ candidates = box_list.BoxList(
+ tf.constant([[0.1, 0.1, 0.4, 0.4]], tf.float32))
+ pool = box_list.BoxList(tf.constant([[0.1, 0.1, 0.4, 0.4]], tf.float32))
+ pool.add_field('scores', tf.constant([-0.2]))
+ averaged_boxes = box_list_ops.box_voting(candidates, pool)
+
+ with self.test_session() as sess:
+ with self.assertRaisesOpError('Scores must be non negative'):
+ sess.run([averaged_boxes.get()])
+
+ def test_box_voting_fails_when_unmatched(self):
+ candidates = box_list.BoxList(
+ tf.constant([[0.1, 0.1, 0.4, 0.4]], tf.float32))
+ pool = box_list.BoxList(tf.constant([[0.6, 0.6, 0.8, 0.8]], tf.float32))
+ pool.add_field('scores', tf.constant([0.2]))
+ averaged_boxes = box_list_ops.box_voting(candidates, pool)
+
+ with self.test_session() as sess:
+ with self.assertRaisesOpError('Each box in selected_boxes must match '
+ 'with at least one box in pool_boxes.'):
+ sess.run([averaged_boxes.get()])
+
+ def test_refine_boxes(self):
+ pool = box_list.BoxList(
+ tf.constant([[0.1, 0.1, 0.4, 0.4], [0.1, 0.1, 0.5, 0.5],
+ [0.6, 0.6, 0.8, 0.8]], tf.float32))
+ pool.add_field('ExtraField', tf.constant([1, 2, 3]))
+ pool.add_field('scores', tf.constant([0.75, 0.25, 0.3]))
+ refined_boxes = box_list_ops.refine_boxes(pool, 0.5, 10)
+
+ expected_boxes = [[0.1, 0.1, 0.425, 0.425], [0.6, 0.6, 0.8, 0.8]]
+ expected_scores = [0.5, 0.3]
+ with self.test_session() as sess:
+ boxes_out, scores_out, extra_field_out = sess.run(
+ [refined_boxes.get(), refined_boxes.get_field('scores'),
+ refined_boxes.get_field('ExtraField')])
+
+ self.assertAllClose(expected_boxes, boxes_out)
+ self.assertAllClose(expected_scores, scores_out)
+ self.assertAllEqual(extra_field_out, [1, 3])
+
+ def test_refine_boxes_multi_class(self):
+ pool = box_list.BoxList(
+ tf.constant([[0.1, 0.1, 0.4, 0.4], [0.1, 0.1, 0.5, 0.5],
+ [0.6, 0.6, 0.8, 0.8], [0.2, 0.2, 0.3, 0.3]], tf.float32))
+ pool.add_field('classes', tf.constant([0, 0, 1, 1]))
+ pool.add_field('scores', tf.constant([0.75, 0.25, 0.3, 0.2]))
+ refined_boxes = box_list_ops.refine_boxes_multi_class(pool, 3, 0.5, 10)
+
+ expected_boxes = [[0.1, 0.1, 0.425, 0.425], [0.6, 0.6, 0.8, 0.8],
+ [0.2, 0.2, 0.3, 0.3]]
+ expected_scores = [0.5, 0.3, 0.2]
+ with self.test_session() as sess:
+ boxes_out, scores_out, extra_field_out = sess.run(
+ [refined_boxes.get(), refined_boxes.get_field('scores'),
+ refined_boxes.get_field('classes')])
+
+ self.assertAllClose(expected_boxes, boxes_out)
+ self.assertAllClose(expected_scores, scores_out)
+ self.assertAllEqual(extra_field_out, [0, 1, 1])
+
+ def test_sample_boxes_by_jittering(self):
+ boxes = box_list.BoxList(
+ tf.constant([[0.1, 0.1, 0.4, 0.4],
+ [0.1, 0.1, 0.5, 0.5],
+ [0.6, 0.6, 0.8, 0.8],
+ [0.2, 0.2, 0.3, 0.3]], tf.float32))
+ sampled_boxes = box_list_ops.sample_boxes_by_jittering(
+ boxlist=boxes, num_boxes_to_sample=10)
+ iou = box_list_ops.iou(boxes, sampled_boxes)
+ iou_max = tf.reduce_max(iou, axis=0)
+ with self.test_session() as sess:
+ (np_sampled_boxes, np_iou_max) = sess.run([sampled_boxes.get(), iou_max])
+ self.assertAllEqual(np_sampled_boxes.shape, [10, 4])
+ self.assertAllGreater(np_iou_max, 0.5)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/box_list_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/box_list_test.py"
new file mode 100644
index 00000000..edc00ebb
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/box_list_test.py"
@@ -0,0 +1,134 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for object_detection.core.box_list."""
+
+import tensorflow as tf
+
+from object_detection.core import box_list
+
+
+class BoxListTest(tf.test.TestCase):
+ """Tests for BoxList class."""
+
+ def test_num_boxes(self):
+ data = tf.constant([[0, 0, 1, 1], [1, 1, 2, 3], [3, 4, 5, 5]], tf.float32)
+ expected_num_boxes = 3
+
+ boxes = box_list.BoxList(data)
+ with self.test_session() as sess:
+ num_boxes_output = sess.run(boxes.num_boxes())
+ self.assertEquals(num_boxes_output, expected_num_boxes)
+
+ def test_get_correct_center_coordinates_and_sizes(self):
+ boxes = [[10.0, 10.0, 20.0, 15.0], [0.2, 0.1, 0.5, 0.4]]
+ boxes = box_list.BoxList(tf.constant(boxes))
+ centers_sizes = boxes.get_center_coordinates_and_sizes()
+ expected_centers_sizes = [[15, 0.35], [12.5, 0.25], [10, 0.3], [5, 0.3]]
+ with self.test_session() as sess:
+ centers_sizes_out = sess.run(centers_sizes)
+ self.assertAllClose(centers_sizes_out, expected_centers_sizes)
+
+ def test_create_box_list_with_dynamic_shape(self):
+ data = tf.constant([[0, 0, 1, 1], [1, 1, 2, 3], [3, 4, 5, 5]], tf.float32)
+ indices = tf.reshape(tf.where(tf.greater([1, 0, 1], 0)), [-1])
+ data = tf.gather(data, indices)
+ assert data.get_shape().as_list() == [None, 4]
+ expected_num_boxes = 2
+
+ boxes = box_list.BoxList(data)
+ with self.test_session() as sess:
+ num_boxes_output = sess.run(boxes.num_boxes())
+ self.assertEquals(num_boxes_output, expected_num_boxes)
+
+ def test_transpose_coordinates(self):
+ boxes = [[10.0, 10.0, 20.0, 15.0], [0.2, 0.1, 0.5, 0.4]]
+ boxes = box_list.BoxList(tf.constant(boxes))
+ boxes.transpose_coordinates()
+ expected_corners = [[10.0, 10.0, 15.0, 20.0], [0.1, 0.2, 0.4, 0.5]]
+ with self.test_session() as sess:
+ corners_out = sess.run(boxes.get())
+ self.assertAllClose(corners_out, expected_corners)
+
+ def test_box_list_invalid_inputs(self):
+ data0 = tf.constant([[[0, 0, 1, 1], [3, 4, 5, 5]]], tf.float32)
+ data1 = tf.constant([[0, 0, 1], [1, 1, 2], [3, 4, 5]], tf.float32)
+ data2 = tf.constant([[0, 0, 1], [1, 1, 2], [3, 4, 5]], tf.int32)
+
+ with self.assertRaises(ValueError):
+ _ = box_list.BoxList(data0)
+ with self.assertRaises(ValueError):
+ _ = box_list.BoxList(data1)
+ with self.assertRaises(ValueError):
+ _ = box_list.BoxList(data2)
+
+ def test_num_boxes_static(self):
+ box_corners = [[10.0, 10.0, 20.0, 15.0], [0.2, 0.1, 0.5, 0.4]]
+ boxes = box_list.BoxList(tf.constant(box_corners))
+ self.assertEquals(boxes.num_boxes_static(), 2)
+ self.assertEquals(type(boxes.num_boxes_static()), int)
+
+ def test_num_boxes_static_for_uninferrable_shape(self):
+ placeholder = tf.placeholder(tf.float32, shape=[None, 4])
+ boxes = box_list.BoxList(placeholder)
+ self.assertEquals(boxes.num_boxes_static(), None)
+
+ def test_as_tensor_dict(self):
+ boxlist = box_list.BoxList(
+ tf.constant([[0.1, 0.1, 0.4, 0.4], [0.1, 0.1, 0.5, 0.5]], tf.float32))
+ boxlist.add_field('classes', tf.constant([0, 1]))
+ boxlist.add_field('scores', tf.constant([0.75, 0.2]))
+ tensor_dict = boxlist.as_tensor_dict()
+
+ expected_boxes = [[0.1, 0.1, 0.4, 0.4], [0.1, 0.1, 0.5, 0.5]]
+ expected_classes = [0, 1]
+ expected_scores = [0.75, 0.2]
+
+ with self.test_session() as sess:
+ tensor_dict_out = sess.run(tensor_dict)
+ self.assertAllEqual(3, len(tensor_dict_out))
+ self.assertAllClose(expected_boxes, tensor_dict_out['boxes'])
+ self.assertAllEqual(expected_classes, tensor_dict_out['classes'])
+ self.assertAllClose(expected_scores, tensor_dict_out['scores'])
+
+ def test_as_tensor_dict_with_features(self):
+ boxlist = box_list.BoxList(
+ tf.constant([[0.1, 0.1, 0.4, 0.4], [0.1, 0.1, 0.5, 0.5]], tf.float32))
+ boxlist.add_field('classes', tf.constant([0, 1]))
+ boxlist.add_field('scores', tf.constant([0.75, 0.2]))
+ tensor_dict = boxlist.as_tensor_dict(['boxes', 'classes', 'scores'])
+
+ expected_boxes = [[0.1, 0.1, 0.4, 0.4], [0.1, 0.1, 0.5, 0.5]]
+ expected_classes = [0, 1]
+ expected_scores = [0.75, 0.2]
+
+ with self.test_session() as sess:
+ tensor_dict_out = sess.run(tensor_dict)
+ self.assertAllEqual(3, len(tensor_dict_out))
+ self.assertAllClose(expected_boxes, tensor_dict_out['boxes'])
+ self.assertAllEqual(expected_classes, tensor_dict_out['classes'])
+ self.assertAllClose(expected_scores, tensor_dict_out['scores'])
+
+ def test_as_tensor_dict_missing_field(self):
+ boxlist = box_list.BoxList(
+ tf.constant([[0.1, 0.1, 0.4, 0.4], [0.1, 0.1, 0.5, 0.5]], tf.float32))
+ boxlist.add_field('classes', tf.constant([0, 1]))
+ boxlist.add_field('scores', tf.constant([0.75, 0.2]))
+ with self.assertRaises(ValueError):
+ boxlist.as_tensor_dict(['foo', 'bar'])
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/box_predictor.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/box_predictor.py"
new file mode 100644
index 00000000..91a6647e
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/box_predictor.py"
@@ -0,0 +1,227 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Box predictor for object detectors.
+
+Box predictors are classes that take a high level
+image feature map as input and produce two predictions,
+(1) a tensor encoding box locations, and
+(2) a tensor encoding classes for each box.
+
+These components are passed directly to loss functions
+in our detection models.
+
+These modules are separated from the main model since the same
+few box predictor architectures are shared across many models.
+"""
+from abc import abstractmethod
+import tensorflow as tf
+
+BOX_ENCODINGS = 'box_encodings'
+CLASS_PREDICTIONS_WITH_BACKGROUND = 'class_predictions_with_background'
+MASK_PREDICTIONS = 'mask_predictions'
+
+
+class BoxPredictor(object):
+ """BoxPredictor."""
+
+ def __init__(self, is_training, num_classes):
+ """Constructor.
+
+ Args:
+ is_training: Indicates whether the BoxPredictor is in training mode.
+ num_classes: number of classes. Note that num_classes *does not*
+ include the background category, so if groundtruth labels take values
+ in {0, 1, .., K-1}, num_classes=K (and not K+1, even though the
+ assigned classification targets can range from {0,... K}).
+ """
+ self._is_training = is_training
+ self._num_classes = num_classes
+
+ @property
+ def is_keras_model(self):
+ return False
+
+ @property
+ def num_classes(self):
+ return self._num_classes
+
+ def predict(self, image_features, num_predictions_per_location,
+ scope=None, **params):
+ """Computes encoded object locations and corresponding confidences.
+
+ Takes a list of high level image feature maps as input and produces a list
+ of box encodings and a list of class scores where each element in the output
+    lists corresponds to a feature map in the input list.
+
+ Args:
+ image_features: A list of float tensors of shape [batch_size, height_i,
+ width_i, channels_i] containing features for a batch of images.
+ num_predictions_per_location: A list of integers representing the number
+ of box predictions to be made per spatial location for each feature map.
+ scope: Variable and Op scope name.
+ **params: Additional keyword arguments for specific implementations of
+ BoxPredictor.
+
+ Returns:
+ A dictionary containing at least the following tensors.
+ box_encodings: A list of float tensors. Each entry in the list
+ corresponds to a feature map in the input `image_features` list. All
+ tensors in the list have one of the two following shapes:
+ a. [batch_size, num_anchors_i, q, code_size] representing the location
+ of the objects, where q is 1 or the number of classes.
+ b. [batch_size, num_anchors_i, code_size].
+ class_predictions_with_background: A list of float tensors of shape
+ [batch_size, num_anchors_i, num_classes + 1] representing the class
+ predictions for the proposals. Each entry in the list corresponds to a
+ feature map in the input `image_features` list.
+
+ Raises:
+ ValueError: If length of `image_features` is not equal to length of
+ `num_predictions_per_location`.
+ """
+ if len(image_features) != len(num_predictions_per_location):
+      raise ValueError('image_features and num_predictions_per_location must '
+ 'be of same length, found: {} vs {}'.
+ format(len(image_features),
+ len(num_predictions_per_location)))
+ if scope is not None:
+ with tf.variable_scope(scope):
+ return self._predict(image_features, num_predictions_per_location,
+ **params)
+ return self._predict(image_features, num_predictions_per_location,
+ **params)
+
+ # TODO(rathodv): num_predictions_per_location could be moved to constructor.
+ # This is currently only used by ConvolutionalBoxPredictor.
+ @abstractmethod
+ def _predict(self, image_features, num_predictions_per_location, **params):
+ """Implementations must override this method.
+
+ Args:
+ image_features: A list of float tensors of shape [batch_size, height_i,
+ width_i, channels_i] containing features for a batch of images.
+ num_predictions_per_location: A list of integers representing the number
+ of box predictions to be made per spatial location for each feature map.
+ **params: Additional keyword arguments for specific implementations of
+ BoxPredictor.
+
+ Returns:
+ A dictionary containing at least the following tensors.
+ box_encodings: A list of float tensors. Each entry in the list
+ corresponds to a feature map in the input `image_features` list. All
+ tensors in the list have one of the two following shapes:
+ a. [batch_size, num_anchors_i, q, code_size] representing the location
+ of the objects, where q is 1 or the number of classes.
+ b. [batch_size, num_anchors_i, code_size].
+ class_predictions_with_background: A list of float tensors of shape
+ [batch_size, num_anchors_i, num_classes + 1] representing the class
+ predictions for the proposals. Each entry in the list corresponds to a
+ feature map in the input `image_features` list.
+ """
+ pass
+
+
+class KerasBoxPredictor(tf.keras.Model):
+ """Keras-based BoxPredictor."""
+
+ def __init__(self, is_training, num_classes, freeze_batchnorm,
+ inplace_batchnorm_update, name=None):
+ """Constructor.
+
+ Args:
+ is_training: Indicates whether the BoxPredictor is in training mode.
+ num_classes: number of classes. Note that num_classes *does not*
+ include the background category, so if groundtruth labels take values
+ in {0, 1, .., K-1}, num_classes=K (and not K+1, even though the
+ assigned classification targets can range from {0,... K}).
+ freeze_batchnorm: Whether to freeze batch norm parameters during
+ training or not. When training with a small batch size (e.g. 1), it is
+ desirable to freeze batch norm update and use pretrained batch norm
+ params.
+ inplace_batchnorm_update: Whether to update batch norm moving average
+        values inplace. When this is false, the train op must add a control
+        dependency on the tf.GraphKeys.UPDATE_OPS collection in order to update
+ batch norm statistics.
+ name: A string name scope to assign to the model. If `None`, Keras
+ will auto-generate one from the class name.
+ """
+ super(KerasBoxPredictor, self).__init__(name=name)
+
+ self._is_training = is_training
+ self._num_classes = num_classes
+ self._freeze_batchnorm = freeze_batchnorm
+ self._inplace_batchnorm_update = inplace_batchnorm_update
+
+ @property
+ def is_keras_model(self):
+ return True
+
+ @property
+ def num_classes(self):
+ return self._num_classes
+
+ def call(self, image_features, **kwargs):
+ """Computes encoded object locations and corresponding confidences.
+
+ Takes a list of high level image feature maps as input and produces a list
+ of box encodings and a list of class scores where each element in the output
+    lists corresponds to a feature map in the input list.
+
+ Args:
+ image_features: A list of float tensors of shape [batch_size, height_i,
+ width_i, channels_i] containing features for a batch of images.
+ **kwargs: Additional keyword arguments for specific implementations of
+ BoxPredictor.
+
+ Returns:
+ A dictionary containing at least the following tensors.
+ box_encodings: A list of float tensors. Each entry in the list
+ corresponds to a feature map in the input `image_features` list. All
+ tensors in the list have one of the two following shapes:
+ a. [batch_size, num_anchors_i, q, code_size] representing the location
+ of the objects, where q is 1 or the number of classes.
+ b. [batch_size, num_anchors_i, code_size].
+ class_predictions_with_background: A list of float tensors of shape
+ [batch_size, num_anchors_i, num_classes + 1] representing the class
+ predictions for the proposals. Each entry in the list corresponds to a
+ feature map in the input `image_features` list.
+ """
+ return self._predict(image_features, **kwargs)
+
+ @abstractmethod
+ def _predict(self, image_features, **kwargs):
+ """Implementations must override this method.
+
+ Args:
+ image_features: A list of float tensors of shape [batch_size, height_i,
+ width_i, channels_i] containing features for a batch of images.
+ **kwargs: Additional keyword arguments for specific implementations of
+ BoxPredictor.
+
+ Returns:
+ A dictionary containing at least the following tensors.
+ box_encodings: A list of float tensors. Each entry in the list
+ corresponds to a feature map in the input `image_features` list. All
+ tensors in the list have one of the two following shapes:
+ a. [batch_size, num_anchors_i, q, code_size] representing the location
+ of the objects, where q is 1 or the number of classes.
+ b. [batch_size, num_anchors_i, code_size].
+ class_predictions_with_background: A list of float tensors of shape
+ [batch_size, num_anchors_i, num_classes + 1] representing the class
+ predictions for the proposals. Each entry in the list corresponds to a
+ feature map in the input `image_features` list.
+ """
+ raise NotImplementedError
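+
+
+if __name__ == '__main__':
+  # A minimal usage sketch, not part of the original object_detection API: a
+  # toy BoxPredictor subclass that satisfies the contract documented above.
+  # The 1x1 convolutions and the code_size of 4 are assumptions made only for
+  # this illustration.
+  class _ToyBoxPredictor(BoxPredictor):
+
+    def _predict(self, image_features, num_predictions_per_location):
+      box_encodings = []
+      class_predictions = []
+      for features, num_per_loc in zip(image_features,
+                                       num_predictions_per_location):
+        batch = tf.shape(features)[0]
+        # One box encoding (code_size=4) per anchor.
+        boxes = tf.layers.conv2d(features, num_per_loc * 4, [1, 1])
+        box_encodings.append(tf.reshape(boxes, [batch, -1, 1, 4]))
+        # num_classes + 1 scores per anchor (background class included).
+        scores = tf.layers.conv2d(
+            features, num_per_loc * (self.num_classes + 1), [1, 1])
+        class_predictions.append(
+            tf.reshape(scores, [batch, -1, self.num_classes + 1]))
+      return {BOX_ENCODINGS: box_encodings,
+              CLASS_PREDICTIONS_WITH_BACKGROUND: class_predictions}
+
+  toy_predictor = _ToyBoxPredictor(is_training=False, num_classes=3)
+  toy_predictions = toy_predictor.predict(
+      [tf.ones([2, 8, 8, 16])], num_predictions_per_location=[1],
+      scope='ToyBoxPredictor')
+  # Prints ['box_encodings', 'class_predictions_with_background'].
+  print(sorted(toy_predictions.keys()))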
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/data_decoder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/data_decoder.py"
new file mode 100644
index 00000000..9ae18c1f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/data_decoder.py"
@@ -0,0 +1,41 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Interface for data decoders.
+
+Data decoders decode the input data and return a dictionary of tensors keyed by
+the entries in core.reader.Fields.
+"""
+from abc import ABCMeta
+from abc import abstractmethod
+
+
+class DataDecoder(object):
+ """Interface for data decoders."""
+ __metaclass__ = ABCMeta
+
+ @abstractmethod
+ def decode(self, data):
+ """Return a single image and associated labels.
+
+ Args:
+ data: a string tensor holding a serialized protocol buffer corresponding
+ to data for a single image.
+
+ Returns:
+ tensor_dict: a dictionary containing tensors. Possible keys are defined in
+ reader.Fields.
+ """
+ pass
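+
+
+if __name__ == '__main__':
+  # A minimal usage sketch, not part of the original object_detection API: a
+  # toy DataDecoder that parses a serialized tf.Example. The feature key
+  # 'image/encoded' and the returned key 'image' are assumptions made only
+  # for this illustration.
+  import tensorflow as tf
+
+  class _ToyTfExampleDecoder(DataDecoder):
+
+    def decode(self, data):
+      features = tf.parse_single_example(
+          data, {'image/encoded': tf.FixedLenFeature((), tf.string)})
+      image = tf.image.decode_jpeg(features['image/encoded'], channels=3)
+      return {'image': image}
+
+  serialized = tf.placeholder(tf.string, shape=[])
+  tensor_dict = _ToyTfExampleDecoder().decode(serialized)
+  print(tensor_dict['image'].dtype)  # uint8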
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/data_parser.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/data_parser.py"
new file mode 100644
index 00000000..3dac4de2
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/data_parser.py"
@@ -0,0 +1,41 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Interface for data parsers.
+
+A data parser parses input data and returns a dictionary of numpy arrays
+keyed by the entries in standard_fields.py. Since the parser parses records
+to numpy arrays (materialized tensors) directly, it is used to read data for
+evaluation/visualization; to parse the data during training, DataDecoder should
+be used.
+"""
+from abc import ABCMeta
+from abc import abstractmethod
+
+
+class DataToNumpyParser(object):
+ __metaclass__ = ABCMeta
+
+ @abstractmethod
+ def parse(self, input_data):
+ """Parses input and returns a numpy array or a dictionary of numpy arrays.
+
+ Args:
+      input_data: an input data record to parse.
+
+ Returns:
+ A numpy array or a dictionary of numpy arrays or None, if input
+ cannot be parsed.
+ """
+ pass
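+
+
+if __name__ == '__main__':
+  # A minimal usage sketch, not part of the original object_detection API: a
+  # toy parser that materializes one field of a tf.train.Example into a numpy
+  # array. The field name below is an assumption made only for this
+  # illustration.
+  import numpy as np
+  import tensorflow as tf
+
+  class _ToyLabelParser(DataToNumpyParser):
+
+    def parse(self, input_data):
+      field_name = 'image/object/class/label'
+      if field_name not in input_data.features.feature:
+        return None
+      return np.array(
+          input_data.features.feature[field_name].int64_list.value,
+          dtype=np.int64)
+
+  example = tf.train.Example()
+  example.features.feature[
+      'image/object/class/label'].int64_list.value.extend([1, 2, 2])
+  print(_ToyLabelParser().parse(example))  # [1 2 2]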
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/freezable_batch_norm.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/freezable_batch_norm.py"
new file mode 100644
index 00000000..68a56aa6
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/freezable_batch_norm.py"
@@ -0,0 +1,70 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""A freezable batch norm layer that uses Keras batch normalization."""
+import tensorflow as tf
+
+
+class FreezableBatchNorm(tf.keras.layers.BatchNormalization):
+  """Batch normalization layer (Ioffe and Szegedy, 2015).
+
+ This is a `freezable` batch norm layer that supports setting the `training`
+ parameter in the __init__ method rather than having to set it either via
+ the Keras learning phase or via the `call` method parameter. This layer will
+ forward all other parameters to the default Keras `BatchNormalization`
+  layer.
+
+  This class is necessary because Object Detection model training sometimes
+  requires batch normalization layers to be `frozen` and used as if it were
+  evaluation time, even though training is still underway (and dropout layers
+  may still be active).
+
+ Like the default Keras BatchNormalization layer, this will normalize the
+ activations of the previous layer at each batch,
+ i.e. applies a transformation that maintains the mean activation
+ close to 0 and the activation standard deviation close to 1.
+
+ Arguments:
+ training: Boolean or None. If True, the batch normalization layer will
+ normalize the input batch using the batch mean and standard deviation,
+      and update the moving averages of the mean and variance. If False, the
+      layer will normalize using those moving averages without updating
+      them.
+ If None, the layer will follow the keras BatchNormalization layer
+ strategy of checking the Keras learning phase at `call` time to decide
+ what to do.
+ **kwargs: The keyword arguments to forward to the keras BatchNormalization
+ layer constructor.
+
+ Input shape:
+ Arbitrary. Use the keyword argument `input_shape`
+ (tuple of integers, does not include the samples axis)
+ when using this layer as the first layer in a model.
+
+ Output shape:
+ Same shape as input.
+
+ References:
+ - [Batch Normalization: Accelerating Deep Network Training by Reducing
+ Internal Covariate Shift](https://arxiv.org/abs/1502.03167)
+ """
+
+ def __init__(self, training=None, **kwargs):
+ super(FreezableBatchNorm, self).__init__(**kwargs)
+ self._training = training
+
+ def call(self, inputs, training=None):
+ if training is None:
+ training = self._training
+ return super(FreezableBatchNorm, self).call(inputs, training=training)
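+
+
+if __name__ == '__main__':
+  # A minimal usage sketch, not part of the original object_detection API:
+  # constructing the layer with training=False freezes it, so it always
+  # normalizes with its moving statistics regardless of the Keras learning
+  # phase at call time. Passing training=None (the default) instead defers
+  # the decision to call time, as in the standard Keras layer.
+  inputs = tf.placeholder(tf.float32, shape=[None, 10])
+  frozen_norm = FreezableBatchNorm(training=False)
+  frozen_outputs = frozen_norm(inputs)
+  print(frozen_outputs.shape)  # (?, 10)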
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/freezable_batch_norm_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/freezable_batch_norm_test.py"
new file mode 100644
index 00000000..504b9e71
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/freezable_batch_norm_test.py"
@@ -0,0 +1,121 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for object_detection.core.freezable_batch_norm."""
+import numpy as np
+import tensorflow as tf
+
+from object_detection.core import freezable_batch_norm
+
+
+class FreezableBatchNormTest(tf.test.TestCase):
+ """Tests for FreezableBatchNorm operations."""
+
+ def _build_model(self, training=None):
+ model = tf.keras.models.Sequential()
+ norm = freezable_batch_norm.FreezableBatchNorm(training=training,
+ input_shape=(10,),
+ momentum=0.8)
+ model.add(norm)
+ return model, norm
+
+ def _train_freezable_batch_norm(self, training_mean, training_var):
+ model, _ = self._build_model()
+ model.compile(loss='mse', optimizer='sgd')
+
+    # Data centered on training_mean with standard deviation training_var.
+ train_data = np.random.normal(
+ loc=training_mean,
+ scale=training_var,
+ size=(1000, 10))
+ model.fit(train_data, train_data, epochs=4, verbose=0)
+ return model.weights
+
+ def test_batchnorm_freezing_training_true(self):
+ with self.test_session():
+ training_mean = 5.0
+ training_var = 10.0
+
+ testing_mean = -10.0
+ testing_var = 5.0
+
+ # Initially train the batch norm, and save the weights
+ trained_weights = self._train_freezable_batch_norm(training_mean,
+ training_var)
+
+ # Load the batch norm weights, freezing training to True.
+ # Apply the batch norm layer to testing data and ensure it is normalized
+ # according to the batch statistics.
+ model, norm = self._build_model(training=True)
+ for trained_weight, blank_weight in zip(trained_weights, model.weights):
+ weight_copy = blank_weight.assign(tf.keras.backend.eval(trained_weight))
+ tf.keras.backend.eval(weight_copy)
+
+      # Data centered on testing_mean with standard deviation testing_var.
+ test_data = np.random.normal(
+ loc=testing_mean,
+ scale=testing_var,
+ size=(1000, 10))
+
+ out_tensor = norm(tf.convert_to_tensor(test_data, dtype=tf.float32))
+ out = tf.keras.backend.eval(out_tensor)
+
+ out -= tf.keras.backend.eval(norm.beta)
+ out /= tf.keras.backend.eval(norm.gamma)
+
+ np.testing.assert_allclose(out.mean(), 0.0, atol=1.5e-1)
+ np.testing.assert_allclose(out.std(), 1.0, atol=1.5e-1)
+
+ def test_batchnorm_freezing_training_false(self):
+ with self.test_session():
+ training_mean = 5.0
+ training_var = 10.0
+
+ testing_mean = -10.0
+ testing_var = 5.0
+
+ # Initially train the batch norm, and save the weights
+ trained_weights = self._train_freezable_batch_norm(training_mean,
+ training_var)
+
+ # Load the batch norm back up, freezing training to False.
+ # Apply the batch norm layer to testing data and ensure it is normalized
+ # according to the training data's statistics.
+ model, norm = self._build_model(training=False)
+ for trained_weight, blank_weight in zip(trained_weights, model.weights):
+ weight_copy = blank_weight.assign(tf.keras.backend.eval(trained_weight))
+ tf.keras.backend.eval(weight_copy)
+
+      # Data centered on testing_mean with standard deviation testing_var.
+ test_data = np.random.normal(
+ loc=testing_mean,
+ scale=testing_var,
+ size=(1000, 10))
+
+ out_tensor = norm(tf.convert_to_tensor(test_data, dtype=tf.float32))
+ out = tf.keras.backend.eval(out_tensor)
+
+ out -= tf.keras.backend.eval(norm.beta)
+ out /= tf.keras.backend.eval(norm.gamma)
+
+ out *= training_var
+ out += (training_mean - testing_mean)
+ out /= testing_var
+
+ np.testing.assert_allclose(out.mean(), 0.0, atol=1.5e-1)
+ np.testing.assert_allclose(out.std(), 1.0, atol=1.5e-1)
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/keypoint_ops.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/keypoint_ops.py"
new file mode 100644
index 00000000..e520845f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/keypoint_ops.py"
@@ -0,0 +1,282 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Keypoint operations.
+
+Keypoints are represented as tensors of shape [num_instances, num_keypoints, 2],
+where the last dimension holds the [y, x] coordinates of each keypoint.
+"""
+import numpy as np
+import tensorflow as tf
+
+
+def scale(keypoints, y_scale, x_scale, scope=None):
+ """Scales keypoint coordinates in x and y dimensions.
+
+ Args:
+ keypoints: a tensor of shape [num_instances, num_keypoints, 2]
+ y_scale: (float) scalar tensor
+ x_scale: (float) scalar tensor
+ scope: name scope.
+
+ Returns:
+ new_keypoints: a tensor of shape [num_instances, num_keypoints, 2]
+ """
+ with tf.name_scope(scope, 'Scale'):
+ y_scale = tf.cast(y_scale, tf.float32)
+ x_scale = tf.cast(x_scale, tf.float32)
+ new_keypoints = keypoints * [[[y_scale, x_scale]]]
+ return new_keypoints
+
+
+def clip_to_window(keypoints, window, scope=None):
+ """Clips keypoints to a window.
+
+ This op clips any input keypoints to a window.
+
+ Args:
+ keypoints: a tensor of shape [num_instances, num_keypoints, 2]
+ window: a tensor of shape [4] representing the [y_min, x_min, y_max, x_max]
+ window to which the op should clip the keypoints.
+ scope: name scope.
+
+ Returns:
+ new_keypoints: a tensor of shape [num_instances, num_keypoints, 2]
+ """
+ with tf.name_scope(scope, 'ClipToWindow'):
+ y, x = tf.split(value=keypoints, num_or_size_splits=2, axis=2)
+ win_y_min, win_x_min, win_y_max, win_x_max = tf.unstack(window)
+ y = tf.maximum(tf.minimum(y, win_y_max), win_y_min)
+ x = tf.maximum(tf.minimum(x, win_x_max), win_x_min)
+ new_keypoints = tf.concat([y, x], 2)
+ return new_keypoints
+
+
+def prune_outside_window(keypoints, window, scope=None):
+ """Prunes keypoints that fall outside a given window.
+
+ This function replaces keypoints that fall outside the given window with nan.
+ See also clip_to_window which clips any keypoints that fall outside the given
+ window.
+
+ Args:
+ keypoints: a tensor of shape [num_instances, num_keypoints, 2]
+ window: a tensor of shape [4] representing the [y_min, x_min, y_max, x_max]
+ window outside of which the op should prune the keypoints.
+ scope: name scope.
+
+ Returns:
+ new_keypoints: a tensor of shape [num_instances, num_keypoints, 2]
+ """
+ with tf.name_scope(scope, 'PruneOutsideWindow'):
+ y, x = tf.split(value=keypoints, num_or_size_splits=2, axis=2)
+ win_y_min, win_x_min, win_y_max, win_x_max = tf.unstack(window)
+
+ valid_indices = tf.logical_and(
+ tf.logical_and(y >= win_y_min, y <= win_y_max),
+ tf.logical_and(x >= win_x_min, x <= win_x_max))
+
+ new_y = tf.where(valid_indices, y, np.nan * tf.ones_like(y))
+ new_x = tf.where(valid_indices, x, np.nan * tf.ones_like(x))
+ new_keypoints = tf.concat([new_y, new_x], 2)
+
+ return new_keypoints
+
+
+def change_coordinate_frame(keypoints, window, scope=None):
+ """Changes coordinate frame of the keypoints to be relative to window's frame.
+
+ Given a window of the form [y_min, x_min, y_max, x_max], changes keypoint
+ coordinates from keypoints of shape [num_instances, num_keypoints, 2]
+ to be relative to this window.
+
+ An example use case is data augmentation: where we are given groundtruth
+ keypoints and would like to randomly crop the image to some window. In this
+ case we need to change the coordinate frame of each groundtruth keypoint to be
+ relative to this new window.
+
+ Args:
+ keypoints: a tensor of shape [num_instances, num_keypoints, 2]
+ window: a tensor of shape [4] representing the [y_min, x_min, y_max, x_max]
+ window we should change the coordinate frame to.
+ scope: name scope.
+
+ Returns:
+ new_keypoints: a tensor of shape [num_instances, num_keypoints, 2]
+ """
+ with tf.name_scope(scope, 'ChangeCoordinateFrame'):
+ win_height = window[2] - window[0]
+ win_width = window[3] - window[1]
+ new_keypoints = scale(keypoints - [window[0], window[1]], 1.0 / win_height,
+ 1.0 / win_width)
+ return new_keypoints
+
+
+def to_normalized_coordinates(keypoints, height, width,
+ check_range=True, scope=None):
+ """Converts absolute keypoint coordinates to normalized coordinates in [0, 1].
+
+ Usually one uses the dynamic shape of the image or conv-layer tensor:
+ keypoints = keypoint_ops.to_normalized_coordinates(keypoints,
+ tf.shape(images)[1],
+ tf.shape(images)[2]),
+
+ This function raises an assertion failed error at graph execution time when
+ the maximum coordinate is smaller than 1.01 (which means that coordinates are
+ already normalized). The value 1.01 is to deal with small rounding errors.
+
+ Args:
+ keypoints: A tensor of shape [num_instances, num_keypoints, 2].
+ height: Maximum value for y coordinate of absolute keypoint coordinates.
+ width: Maximum value for x coordinate of absolute keypoint coordinates.
+ check_range: If True, checks if the coordinates are normalized.
+ scope: name scope.
+
+ Returns:
+ tensor of shape [num_instances, num_keypoints, 2] with normalized
+ coordinates in [0, 1].
+ """
+ with tf.name_scope(scope, 'ToNormalizedCoordinates'):
+ height = tf.cast(height, tf.float32)
+ width = tf.cast(width, tf.float32)
+
+ if check_range:
+ max_val = tf.reduce_max(keypoints)
+ max_assert = tf.Assert(tf.greater(max_val, 1.01),
+ ['max value is lower than 1.01: ', max_val])
+ with tf.control_dependencies([max_assert]):
+ width = tf.identity(width)
+
+ return scale(keypoints, 1.0 / height, 1.0 / width)
+
+
+def to_absolute_coordinates(keypoints, height, width,
+ check_range=True, scope=None):
+ """Converts normalized keypoint coordinates to absolute pixel coordinates.
+
+ This function raises an assertion failed error when the maximum keypoint
+ coordinate value is larger than 1.01 (in which case coordinates are already
+ absolute).
+
+ Args:
+ keypoints: A tensor of shape [num_instances, num_keypoints, 2]
+ height: Maximum value for y coordinate of absolute keypoint coordinates.
+ width: Maximum value for x coordinate of absolute keypoint coordinates.
+ check_range: If True, checks if the coordinates are normalized or not.
+ scope: name scope.
+
+ Returns:
+ tensor of shape [num_instances, num_keypoints, 2] with absolute coordinates
+ in terms of the image size.
+
+ """
+ with tf.name_scope(scope, 'ToAbsoluteCoordinates'):
+ height = tf.cast(height, tf.float32)
+ width = tf.cast(width, tf.float32)
+
+ # Ensure range of input keypoints is correct.
+ if check_range:
+ max_val = tf.reduce_max(keypoints)
+ max_assert = tf.Assert(tf.greater_equal(1.01, max_val),
+ ['maximum keypoint coordinate value is larger '
+ 'than 1.01: ', max_val])
+ with tf.control_dependencies([max_assert]):
+ width = tf.identity(width)
+
+ return scale(keypoints, height, width)
+
+
+def flip_horizontal(keypoints, flip_point, flip_permutation, scope=None):
+ """Flips the keypoints horizontally around the flip_point.
+
+ This operation flips the x coordinate for each keypoint around the flip_point
+ and also permutes the keypoints in a manner specified by flip_permutation.
+
+ Args:
+ keypoints: a tensor of shape [num_instances, num_keypoints, 2]
+ flip_point: (float) scalar tensor representing the x coordinate to flip the
+ keypoints around.
+ flip_permutation: rank 1 int32 tensor containing the keypoint flip
+ permutation. This specifies the mapping from original keypoint indices
+ to the flipped keypoint indices. This is used primarily for keypoints
+ that are not reflection invariant. E.g. Suppose there are 3 keypoints
+ representing ['head', 'right_eye', 'left_eye'], then a logical choice for
+ flip_permutation might be [0, 2, 1] since we want to swap the 'left_eye'
+ and 'right_eye' after a horizontal flip.
+ scope: name scope.
+
+ Returns:
+ new_keypoints: a tensor of shape [num_instances, num_keypoints, 2]
+ """
+ with tf.name_scope(scope, 'FlipHorizontal'):
+ keypoints = tf.transpose(keypoints, [1, 0, 2])
+ keypoints = tf.gather(keypoints, flip_permutation)
+ v, u = tf.split(value=keypoints, num_or_size_splits=2, axis=2)
+ u = flip_point * 2.0 - u
+ new_keypoints = tf.concat([v, u], 2)
+ new_keypoints = tf.transpose(new_keypoints, [1, 0, 2])
+ return new_keypoints
+
+
+def flip_vertical(keypoints, flip_point, flip_permutation, scope=None):
+ """Flips the keypoints vertically around the flip_point.
+
+ This operation flips the y coordinate for each keypoint around the flip_point
+ and also permutes the keypoints in a manner specified by flip_permutation.
+
+ Args:
+ keypoints: a tensor of shape [num_instances, num_keypoints, 2]
+ flip_point: (float) scalar tensor representing the y coordinate to flip the
+ keypoints around.
+ flip_permutation: rank 1 int32 tensor containing the keypoint flip
+ permutation. This specifies the mapping from original keypoint indices
+ to the flipped keypoint indices. This is used primarily for keypoints
+      that are not reflection invariant. E.g. Suppose there are 2 keypoints
+      representing ['top_of_head', 'chin'], then a logical choice for
+      flip_permutation might be [1, 0] since we want to swap them after a
+      vertical flip.
+ scope: name scope.
+
+ Returns:
+ new_keypoints: a tensor of shape [num_instances, num_keypoints, 2]
+ """
+ with tf.name_scope(scope, 'FlipVertical'):
+ keypoints = tf.transpose(keypoints, [1, 0, 2])
+ keypoints = tf.gather(keypoints, flip_permutation)
+ v, u = tf.split(value=keypoints, num_or_size_splits=2, axis=2)
+ v = flip_point * 2.0 - v
+ new_keypoints = tf.concat([v, u], 2)
+ new_keypoints = tf.transpose(new_keypoints, [1, 0, 2])
+ return new_keypoints
+
+
+def rot90(keypoints, scope=None):
+ """Rotates the keypoints counter-clockwise by 90 degrees.
+
+ Args:
+ keypoints: a tensor of shape [num_instances, num_keypoints, 2]
+ scope: name scope.
+
+ Returns:
+ new_keypoints: a tensor of shape [num_instances, num_keypoints, 2]
+ """
+ with tf.name_scope(scope, 'Rot90'):
+ keypoints = tf.transpose(keypoints, [1, 0, 2])
+ v, u = tf.split(value=keypoints[:, :, ::-1], num_or_size_splits=2, axis=2)
+ v = 1.0 - v
+ new_keypoints = tf.concat([v, u], 2)
+ new_keypoints = tf.transpose(new_keypoints, [1, 0, 2])
+ return new_keypoints
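+
+
+if __name__ == '__main__':
+  # A minimal usage sketch, not part of the original object_detection API: a
+  # crop-style augmentation step that clips keypoints to a window and then
+  # re-expresses them in that window's coordinate frame. The values are
+  # arbitrary and chosen only for this illustration.
+  keypoints = tf.constant([[[0.25, 0.5], [0.9, 0.9]]], tf.float32)
+  window = tf.constant([0.25, 0.25, 0.75, 0.75])
+  cropped = change_coordinate_frame(
+      clip_to_window(keypoints, window), window)
+  with tf.Session() as sess:
+    print(sess.run(cropped))  # values: [[[0., 0.5], [1., 1.]]]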
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/keypoint_ops_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/keypoint_ops_test.py"
new file mode 100644
index 00000000..1c09c55a
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/keypoint_ops_test.py"
@@ -0,0 +1,200 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for object_detection.core.keypoint_ops."""
+import numpy as np
+import tensorflow as tf
+
+from object_detection.core import keypoint_ops
+
+
+class KeypointOpsTest(tf.test.TestCase):
+ """Tests for common keypoint operations."""
+
+ def test_scale(self):
+ keypoints = tf.constant([
+ [[0.0, 0.0], [100.0, 200.0]],
+ [[50.0, 120.0], [100.0, 140.0]]
+ ])
+ y_scale = tf.constant(1.0 / 100)
+ x_scale = tf.constant(1.0 / 200)
+
+ expected_keypoints = tf.constant([
+ [[0., 0.], [1.0, 1.0]],
+ [[0.5, 0.6], [1.0, 0.7]]
+ ])
+ output = keypoint_ops.scale(keypoints, y_scale, x_scale)
+
+ with self.test_session() as sess:
+ output_, expected_keypoints_ = sess.run([output, expected_keypoints])
+ self.assertAllClose(output_, expected_keypoints_)
+
+ def test_clip_to_window(self):
+ keypoints = tf.constant([
+ [[0.25, 0.5], [0.75, 0.75]],
+ [[0.5, 0.0], [1.0, 1.0]]
+ ])
+ window = tf.constant([0.25, 0.25, 0.75, 0.75])
+
+ expected_keypoints = tf.constant([
+ [[0.25, 0.5], [0.75, 0.75]],
+ [[0.5, 0.25], [0.75, 0.75]]
+ ])
+ output = keypoint_ops.clip_to_window(keypoints, window)
+
+ with self.test_session() as sess:
+ output_, expected_keypoints_ = sess.run([output, expected_keypoints])
+ self.assertAllClose(output_, expected_keypoints_)
+
+ def test_prune_outside_window(self):
+ keypoints = tf.constant([
+ [[0.25, 0.5], [0.75, 0.75]],
+ [[0.5, 0.0], [1.0, 1.0]]
+ ])
+ window = tf.constant([0.25, 0.25, 0.75, 0.75])
+
+ expected_keypoints = tf.constant([[[0.25, 0.5], [0.75, 0.75]],
+ [[np.nan, np.nan], [np.nan, np.nan]]])
+ output = keypoint_ops.prune_outside_window(keypoints, window)
+
+ with self.test_session() as sess:
+ output_, expected_keypoints_ = sess.run([output, expected_keypoints])
+ self.assertAllClose(output_, expected_keypoints_)
+
+ def test_change_coordinate_frame(self):
+ keypoints = tf.constant([
+ [[0.25, 0.5], [0.75, 0.75]],
+ [[0.5, 0.0], [1.0, 1.0]]
+ ])
+ window = tf.constant([0.25, 0.25, 0.75, 0.75])
+
+ expected_keypoints = tf.constant([
+ [[0, 0.5], [1.0, 1.0]],
+ [[0.5, -0.5], [1.5, 1.5]]
+ ])
+ output = keypoint_ops.change_coordinate_frame(keypoints, window)
+
+ with self.test_session() as sess:
+ output_, expected_keypoints_ = sess.run([output, expected_keypoints])
+ self.assertAllClose(output_, expected_keypoints_)
+
+ def test_to_normalized_coordinates(self):
+ keypoints = tf.constant([
+ [[10., 30.], [30., 45.]],
+ [[20., 0.], [40., 60.]]
+ ])
+ output = keypoint_ops.to_normalized_coordinates(
+ keypoints, 40, 60)
+ expected_keypoints = tf.constant([
+ [[0.25, 0.5], [0.75, 0.75]],
+ [[0.5, 0.0], [1.0, 1.0]]
+ ])
+
+ with self.test_session() as sess:
+ output_, expected_keypoints_ = sess.run([output, expected_keypoints])
+ self.assertAllClose(output_, expected_keypoints_)
+
+ def test_to_normalized_coordinates_already_normalized(self):
+ keypoints = tf.constant([
+ [[0.25, 0.5], [0.75, 0.75]],
+ [[0.5, 0.0], [1.0, 1.0]]
+ ])
+ output = keypoint_ops.to_normalized_coordinates(
+ keypoints, 40, 60)
+
+ with self.test_session() as sess:
+ with self.assertRaisesOpError('assertion failed'):
+ sess.run(output)
+
+ def test_to_absolute_coordinates(self):
+ keypoints = tf.constant([
+ [[0.25, 0.5], [0.75, 0.75]],
+ [[0.5, 0.0], [1.0, 1.0]]
+ ])
+ output = keypoint_ops.to_absolute_coordinates(
+ keypoints, 40, 60)
+ expected_keypoints = tf.constant([
+ [[10., 30.], [30., 45.]],
+ [[20., 0.], [40., 60.]]
+ ])
+
+ with self.test_session() as sess:
+ output_, expected_keypoints_ = sess.run([output, expected_keypoints])
+ self.assertAllClose(output_, expected_keypoints_)
+
+ def test_to_absolute_coordinates_already_absolute(self):
+ keypoints = tf.constant([
+ [[10., 30.], [30., 45.]],
+ [[20., 0.], [40., 60.]]
+ ])
+ output = keypoint_ops.to_absolute_coordinates(
+ keypoints, 40, 60)
+
+ with self.test_session() as sess:
+ with self.assertRaisesOpError('assertion failed'):
+ sess.run(output)
+
+ def test_flip_horizontal(self):
+ keypoints = tf.constant([
+ [[0.1, 0.1], [0.2, 0.2], [0.3, 0.3]],
+ [[0.4, 0.4], [0.5, 0.5], [0.6, 0.6]]
+ ])
+ flip_permutation = [0, 2, 1]
+
+ expected_keypoints = tf.constant([
+ [[0.1, 0.9], [0.3, 0.7], [0.2, 0.8]],
+ [[0.4, 0.6], [0.6, 0.4], [0.5, 0.5]],
+ ])
+ output = keypoint_ops.flip_horizontal(keypoints, 0.5, flip_permutation)
+
+ with self.test_session() as sess:
+ output_, expected_keypoints_ = sess.run([output, expected_keypoints])
+ self.assertAllClose(output_, expected_keypoints_)
+
+ def test_flip_vertical(self):
+ keypoints = tf.constant([
+ [[0.1, 0.1], [0.2, 0.2], [0.3, 0.3]],
+ [[0.4, 0.4], [0.5, 0.5], [0.6, 0.6]]
+ ])
+ flip_permutation = [0, 2, 1]
+
+ expected_keypoints = tf.constant([
+ [[0.9, 0.1], [0.7, 0.3], [0.8, 0.2]],
+ [[0.6, 0.4], [0.4, 0.6], [0.5, 0.5]],
+ ])
+ output = keypoint_ops.flip_vertical(keypoints, 0.5, flip_permutation)
+
+ with self.test_session() as sess:
+ output_, expected_keypoints_ = sess.run([output, expected_keypoints])
+ self.assertAllClose(output_, expected_keypoints_)
+
+ def test_rot90(self):
+ keypoints = tf.constant([
+ [[0.1, 0.1], [0.2, 0.2], [0.3, 0.3]],
+ [[0.4, 0.6], [0.5, 0.6], [0.6, 0.7]]
+ ])
+ expected_keypoints = tf.constant([
+ [[0.9, 0.1], [0.8, 0.2], [0.7, 0.3]],
+ [[0.4, 0.4], [0.4, 0.5], [0.3, 0.6]],
+ ])
+ output = keypoint_ops.rot90(keypoints)
+
+ with self.test_session() as sess:
+ output_, expected_keypoints_ = sess.run([output, expected_keypoints])
+ self.assertAllClose(output_, expected_keypoints_)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/losses.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/losses.py"
new file mode 100644
index 00000000..c7a85ac3
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/losses.py"
@@ -0,0 +1,674 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Classification and regression loss functions for object detection.
+
+Localization losses:
+ * WeightedL2LocalizationLoss
+ * WeightedSmoothL1LocalizationLoss
+ * WeightedIOULocalizationLoss
+
+Classification losses:
+ * WeightedSigmoidClassificationLoss
+ * SigmoidFocalClassificationLoss
+ * WeightedSoftmaxClassificationLoss
+ * WeightedSoftmaxClassificationAgainstLogitsLoss
+ * BootstrappedSigmoidClassificationLoss
+"""
+from abc import ABCMeta
+from abc import abstractmethod
+
+import tensorflow as tf
+
+from object_detection.core import box_list
+from object_detection.core import box_list_ops
+from object_detection.utils import ops
+
+slim = tf.contrib.slim
+
+
+class Loss(object):
+ """Abstract base class for loss functions."""
+ __metaclass__ = ABCMeta
+
+ def __call__(self,
+ prediction_tensor,
+ target_tensor,
+ ignore_nan_targets=False,
+ losses_mask=None,
+ scope=None,
+ **params):
+ """Call the loss function.
+
+ Args:
+ prediction_tensor: an N-d tensor of shape [batch, anchors, ...]
+ representing predicted quantities.
+ target_tensor: an N-d tensor of shape [batch, anchors, ...] representing
+ regression or classification targets.
+ ignore_nan_targets: whether to ignore nan targets in the loss computation.
+ E.g. can be used if the target tensor is missing groundtruth data that
+ shouldn't be factored into the loss.
+ losses_mask: A [batch] boolean tensor that indicates whether losses should
+        be applied to individual images in the batch. For elements that
+        are False, the corresponding prediction, target, and weight tensors
+        are zeroed out and do not contribute to the loss. If None, no
+        filtering will take place prior to loss computation.
+ scope: Op scope name. Defaults to 'Loss' if None.
+ **params: Additional keyword arguments for specific implementations of
+ the Loss.
+
+ Returns:
+ loss: a tensor representing the value of the loss function.
+ """
+ with tf.name_scope(scope, 'Loss',
+ [prediction_tensor, target_tensor, params]) as scope:
+ if ignore_nan_targets:
+ target_tensor = tf.where(tf.is_nan(target_tensor),
+ prediction_tensor,
+ target_tensor)
+ if losses_mask is not None:
+ tensor_multiplier = self._get_loss_multiplier_for_tensor(
+ prediction_tensor,
+ losses_mask)
+ prediction_tensor *= tensor_multiplier
+ target_tensor *= tensor_multiplier
+
+ if 'weights' in params:
+ params['weights'] = tf.convert_to_tensor(params['weights'])
+ weights_multiplier = self._get_loss_multiplier_for_tensor(
+ params['weights'],
+ losses_mask)
+ params['weights'] *= weights_multiplier
+ return self._compute_loss(prediction_tensor, target_tensor, **params)
+
+ def _get_loss_multiplier_for_tensor(self, tensor, losses_mask):
+ loss_multiplier_shape = tf.stack([-1] + [1] * (len(tensor.shape) - 1))
+ return tf.cast(tf.reshape(losses_mask, loss_multiplier_shape), tf.float32)
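+
+  # A worked example (illustrative, not from the original module): with a
+  # losses_mask of [True, False] and predictions of shape
+  # [2, num_anchors, num_classes], the multiplier above has shape [2, 1, 1]
+  # and value [[[1.]], [[0.]]], so every entry belonging to the second image
+  # is zeroed out before the loss is computed.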
+
+ @abstractmethod
+ def _compute_loss(self, prediction_tensor, target_tensor, **params):
+ """Method to be overridden by implementations.
+
+ Args:
+ prediction_tensor: a tensor representing predicted quantities
+ target_tensor: a tensor representing regression or classification targets
+ **params: Additional keyword arguments for specific implementations of
+ the Loss.
+
+ Returns:
+ loss: an N-d tensor of shape [batch, anchors, ...] containing the loss per
+ anchor
+ """
+ pass
+
+
+class WeightedL2LocalizationLoss(Loss):
+ """L2 localization loss function with anchorwise output support.
+
+ Loss[b,a] = .5 * ||weights[b,a] * (prediction[b,a,:] - target[b,a,:])||^2
+ """
+
+ def _compute_loss(self, prediction_tensor, target_tensor, weights):
+ """Compute loss function.
+
+ Args:
+ prediction_tensor: A float tensor of shape [batch_size, num_anchors,
+ code_size] representing the (encoded) predicted locations of objects.
+ target_tensor: A float tensor of shape [batch_size, num_anchors,
+ code_size] representing the regression targets
+ weights: a float tensor of shape [batch_size, num_anchors]
+
+ Returns:
+ loss: a float tensor of shape [batch_size, num_anchors] tensor
+ representing the value of the loss function.
+ """
+ weighted_diff = (prediction_tensor - target_tensor) * tf.expand_dims(
+ weights, 2)
+ square_diff = 0.5 * tf.square(weighted_diff)
+ return tf.reduce_sum(square_diff, 2)
+
+
+class WeightedSmoothL1LocalizationLoss(Loss):
+  """Smooth L1 localization loss function, a.k.a. Huber loss.
+
+  The smooth L1 loss is defined elementwise as 0.5 * x^2 if |x| <= delta and
+  delta * (|x| - 0.5 * delta) otherwise, where x is the difference between
+  prediction and target.
+
+ See also Equation (3) in the Fast R-CNN paper by Ross Girshick (ICCV 2015)
+ """
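+
+  # A worked example (illustrative, not from the original module): with the
+  # default delta = 1, a residual of x = 0.5 costs 0.5 * 0.5**2 = 0.125,
+  # while x = 3 costs 1 * (3 - 0.5) = 2.5, so large residuals contribute
+  # linearly rather than quadratically.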
+
+ def __init__(self, delta=1.0):
+ """Constructor.
+
+ Args:
+ delta: delta for smooth L1 loss.
+ """
+ self._delta = delta
+
+ def _compute_loss(self, prediction_tensor, target_tensor, weights):
+ """Compute loss function.
+
+ Args:
+ prediction_tensor: A float tensor of shape [batch_size, num_anchors,
+ code_size] representing the (encoded) predicted locations of objects.
+ target_tensor: A float tensor of shape [batch_size, num_anchors,
+ code_size] representing the regression targets
+ weights: a float tensor of shape [batch_size, num_anchors]
+
+ Returns:
+ loss: a float tensor of shape [batch_size, num_anchors] tensor
+ representing the value of the loss function.
+ """
+ return tf.reduce_sum(tf.losses.huber_loss(
+ target_tensor,
+ prediction_tensor,
+ delta=self._delta,
+ weights=tf.expand_dims(weights, axis=2),
+ loss_collection=None,
+ reduction=tf.losses.Reduction.NONE
+ ), axis=2)
+
+
+class WeightedIOULocalizationLoss(Loss):
+ """IOU localization loss function.
+
+  Computes the IOU for corresponding pairs of predicted/groundtruth boxes
+  and assigns each pair a loss of 1 - IOU, weighted by the per-anchor weight.
+ """
+
+ def _compute_loss(self, prediction_tensor, target_tensor, weights):
+ """Compute loss function.
+
+ Args:
+ prediction_tensor: A float tensor of shape [batch_size, num_anchors, 4]
+ representing the decoded predicted boxes
+ target_tensor: A float tensor of shape [batch_size, num_anchors, 4]
+ representing the decoded target boxes
+ weights: a float tensor of shape [batch_size, num_anchors]
+
+ Returns:
+ loss: a float tensor of shape [batch_size, num_anchors] tensor
+ representing the value of the loss function.
+ """
+ predicted_boxes = box_list.BoxList(tf.reshape(prediction_tensor, [-1, 4]))
+ target_boxes = box_list.BoxList(tf.reshape(target_tensor, [-1, 4]))
+ per_anchor_iou_loss = 1.0 - box_list_ops.matched_iou(predicted_boxes,
+ target_boxes)
+ return tf.reshape(weights, [-1]) * per_anchor_iou_loss
+
+
+class WeightedSigmoidClassificationLoss(Loss):
+ """Sigmoid cross entropy classification loss function."""
+
+ def _compute_loss(self,
+ prediction_tensor,
+ target_tensor,
+ weights,
+ class_indices=None):
+ """Compute loss function.
+
+ Args:
+ prediction_tensor: A float tensor of shape [batch_size, num_anchors,
+ num_classes] representing the predicted logits for each class
+ target_tensor: A float tensor of shape [batch_size, num_anchors,
+ num_classes] representing one-hot encoded classification targets
+ weights: a float tensor of shape, either [batch_size, num_anchors,
+ num_classes] or [batch_size, num_anchors, 1]. If the shape is
+        [batch_size, num_anchors, 1], all the classes are equally weighted.
+ class_indices: (Optional) A 1-D integer tensor of class indices.
+ If provided, computes loss only for the specified class indices.
+
+ Returns:
+ loss: a float tensor of shape [batch_size, num_anchors, num_classes]
+ representing the value of the loss function.
+ """
+ if class_indices is not None:
+ weights *= tf.reshape(
+ ops.indices_to_dense_vector(class_indices,
+ tf.shape(prediction_tensor)[2]),
+ [1, 1, -1])
+ per_entry_cross_ent = (tf.nn.sigmoid_cross_entropy_with_logits(
+ labels=target_tensor, logits=prediction_tensor))
+ return per_entry_cross_ent * weights
+
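+# A hedged sketch (illustrative comment, not part of the original file): with
+# num_classes = 4 and class_indices = [0, 2], ops.indices_to_dense_vector
+# yields [1, 0, 1, 0]; reshaped to [1, 1, 4] and multiplied into `weights`,
+# it zeroes the per-entry sigmoid cross entropy for classes 1 and 3, so only
+# the requested classes contribute to the returned loss.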
+
+class SigmoidFocalClassificationLoss(Loss):
+ """Sigmoid focal cross entropy loss.
+
+  Focal loss down-weights well-classified examples and focuses on the hard
+ examples. See https://arxiv.org/pdf/1708.02002.pdf for the loss definition.
+ """
+
+ def __init__(self, gamma=2.0, alpha=0.25):
+ """Constructor.
+
+ Args:
+ gamma: exponent of the modulating factor (1 - p_t) ^ gamma.
+ alpha: optional alpha weighting factor to balance positives vs negatives.
+ """
+ self._alpha = alpha
+ self._gamma = gamma
+
+ def _compute_loss(self,
+ prediction_tensor,
+ target_tensor,
+ weights,
+ class_indices=None):
+ """Compute loss function.
+
+ Args:
+ prediction_tensor: A float tensor of shape [batch_size, num_anchors,
+ num_classes] representing the predicted logits for each class
+ target_tensor: A float tensor of shape [batch_size, num_anchors,
+ num_classes] representing one-hot encoded classification targets
+      weights: a float tensor of shape either [batch_size, num_anchors,
+        num_classes] or [batch_size, num_anchors, 1]. If the shape is
+        [batch_size, num_anchors, 1], all the classes are equally weighted.
+ class_indices: (Optional) A 1-D integer tensor of class indices.
+ If provided, computes loss only for the specified class indices.
+
+ Returns:
+ loss: a float tensor of shape [batch_size, num_anchors, num_classes]
+ representing the value of the loss function.
+ """
+ if class_indices is not None:
+ weights *= tf.reshape(
+ ops.indices_to_dense_vector(class_indices,
+ tf.shape(prediction_tensor)[2]),
+ [1, 1, -1])
+ per_entry_cross_ent = (tf.nn.sigmoid_cross_entropy_with_logits(
+ labels=target_tensor, logits=prediction_tensor))
+ prediction_probabilities = tf.sigmoid(prediction_tensor)
+ p_t = ((target_tensor * prediction_probabilities) +
+ ((1 - target_tensor) * (1 - prediction_probabilities)))
+ modulating_factor = 1.0
+ if self._gamma:
+ modulating_factor = tf.pow(1.0 - p_t, self._gamma)
+ alpha_weight_factor = 1.0
+ if self._alpha is not None:
+ alpha_weight_factor = (target_tensor * self._alpha +
+ (1 - target_tensor) * (1 - self._alpha))
+ focal_cross_entropy_loss = (modulating_factor * alpha_weight_factor *
+ per_entry_cross_ent)
+ return focal_cross_entropy_loss * weights
+
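+# A hedged numeric sketch (illustrative comment, not part of the original
+# file): for a positive entry (target = 1) with predicted probability
+# p = sigmoid(logit) = 0.9 and the default gamma = 2.0, alpha = 0.25,
+#   modulating_factor   = (1 - 0.9) ** 2 = 0.01
+#   alpha_weight_factor = 0.25
+# so the cross entropy -log(0.9) ~= 0.105 is scaled down to ~= 0.00026,
+# while a hard positive with p = 0.1 keeps
+#   0.9 ** 2 * 0.25 * (-log(0.1)) ~= 0.81 * 0.25 * 2.303 ~= 0.47.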
+
+class WeightedSoftmaxClassificationLoss(Loss):
+ """Softmax loss function."""
+
+ def __init__(self, logit_scale=1.0):
+ """Constructor.
+
+ Args:
+ logit_scale: When this value is high, the prediction is "diffused" and
+ when this value is low, the prediction is made peakier.
+ (default 1.0)
+
+ """
+ self._logit_scale = logit_scale
+
+ def _compute_loss(self, prediction_tensor, target_tensor, weights):
+ """Compute loss function.
+
+ Args:
+ prediction_tensor: A float tensor of shape [batch_size, num_anchors,
+ num_classes] representing the predicted logits for each class
+ target_tensor: A float tensor of shape [batch_size, num_anchors,
+ num_classes] representing one-hot encoded classification targets
+      weights: a float tensor of shape either [batch_size, num_anchors,
+        num_classes] or [batch_size, num_anchors, 1]. If the shape is
+        [batch_size, num_anchors, 1], all the classes are equally weighted.
+
+ Returns:
+ loss: a float tensor of shape [batch_size, num_anchors]
+ representing the value of the loss function.
+ """
+ weights = tf.reduce_mean(weights, axis=2)
+ num_classes = prediction_tensor.get_shape().as_list()[-1]
+ prediction_tensor = tf.divide(
+ prediction_tensor, self._logit_scale, name='scale_logit')
+ per_row_cross_ent = (tf.nn.softmax_cross_entropy_with_logits(
+ labels=tf.reshape(target_tensor, [-1, num_classes]),
+ logits=tf.reshape(prediction_tensor, [-1, num_classes])))
+ return tf.reshape(per_row_cross_ent, tf.shape(weights)) * weights
+
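+# A hedged sketch (illustrative comment, not part of the original file):
+# dividing the logits by logit_scale acts like a softmax temperature. For
+# logits [2.0, 0.0], softmax gives ~[0.88, 0.12]; with logit_scale = 2.0 the
+# scaled logits [1.0, 0.0] give ~[0.73, 0.27], i.e. a more "diffused"
+# prediction, matching the constructor docstring.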
+
+class WeightedSoftmaxClassificationAgainstLogitsLoss(Loss):
+ """Softmax loss function against logits.
+
+ Targets are expected to be provided in logits space instead of "one hot" or
+ "probability distribution" space.
+ """
+
+ def __init__(self, logit_scale=1.0):
+ """Constructor.
+
+ Args:
+ logit_scale: When this value is high, the target is "diffused" and
+ when this value is low, the target is made peakier.
+ (default 1.0)
+
+ """
+ self._logit_scale = logit_scale
+
+ def _scale_and_softmax_logits(self, logits):
+ """Scale logits then apply softmax."""
+ scaled_logits = tf.divide(logits, self._logit_scale, name='scale_logits')
+ return tf.nn.softmax(scaled_logits, name='convert_scores')
+
+ def _compute_loss(self, prediction_tensor, target_tensor, weights):
+ """Compute loss function.
+
+ Args:
+ prediction_tensor: A float tensor of shape [batch_size, num_anchors,
+ num_classes] representing the predicted logits for each class
+ target_tensor: A float tensor of shape [batch_size, num_anchors,
+ num_classes] representing logit classification targets
+      weights: a float tensor of shape either [batch_size, num_anchors,
+        num_classes] or [batch_size, num_anchors, 1]. If the shape is
+        [batch_size, num_anchors, 1], all the classes are equally weighted.
+
+ Returns:
+ loss: a float tensor of shape [batch_size, num_anchors]
+ representing the value of the loss function.
+ """
+ weights = tf.reduce_mean(weights, axis=2)
+ num_classes = prediction_tensor.get_shape().as_list()[-1]
+ target_tensor = self._scale_and_softmax_logits(target_tensor)
+ prediction_tensor = tf.divide(prediction_tensor, self._logit_scale,
+ name='scale_logits')
+
+ per_row_cross_ent = (tf.nn.softmax_cross_entropy_with_logits(
+ labels=tf.reshape(target_tensor, [-1, num_classes]),
+ logits=tf.reshape(prediction_tensor, [-1, num_classes])))
+ return tf.reshape(per_row_cross_ent, tf.shape(weights)) * weights
+
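+# A hedged sketch (illustrative comment, not part of the original file): here
+# the targets are themselves logits; _scale_and_softmax_logits divides them by
+# logit_scale and applies softmax, so target logits [2.0, 0.0] with
+# logit_scale = 1.0 become the soft label distribution ~[0.88, 0.12] before
+# the cross entropy against the (also scaled) prediction logits is computed.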
+
+class BootstrappedSigmoidClassificationLoss(Loss):
+ """Bootstrapped sigmoid cross entropy classification loss function.
+
+ This loss uses a convex combination of training labels and the current model's
+ predictions as training targets in the classification loss. The idea is that
+ as the model improves over time, its predictions can be trusted more and we
+ can use these predictions to mitigate the damage of noisy/incorrect labels,
+ because incorrect labels are likely to be eventually highly inconsistent with
+ other stimuli predicted to have the same label by the model.
+
+ In "soft" bootstrapping, we use all predicted class probabilities, whereas in
+ "hard" bootstrapping, we use the single class favored by the model.
+
+ See also Training Deep Neural Networks On Noisy Labels with Bootstrapping by
+ Reed et al. (ICLR 2015).
+ """
+
+ def __init__(self, alpha, bootstrap_type='soft'):
+ """Constructor.
+
+ Args:
+ alpha: a float32 scalar tensor between 0 and 1 representing interpolation
+ weight
+ bootstrap_type: set to either 'hard' or 'soft' (default)
+
+ Raises:
+ ValueError: if bootstrap_type is not either 'hard' or 'soft'
+ """
+ if bootstrap_type != 'hard' and bootstrap_type != 'soft':
+ raise ValueError('Unrecognized bootstrap_type: must be one of '
+ '\'hard\' or \'soft.\'')
+ self._alpha = alpha
+ self._bootstrap_type = bootstrap_type
+
+ def _compute_loss(self, prediction_tensor, target_tensor, weights):
+ """Compute loss function.
+
+ Args:
+ prediction_tensor: A float tensor of shape [batch_size, num_anchors,
+ num_classes] representing the predicted logits for each class
+ target_tensor: A float tensor of shape [batch_size, num_anchors,
+ num_classes] representing one-hot encoded classification targets
+      weights: a float tensor of shape either [batch_size, num_anchors,
+        num_classes] or [batch_size, num_anchors, 1]. If the shape is
+        [batch_size, num_anchors, 1], all the classes are equally weighted.
+
+ Returns:
+ loss: a float tensor of shape [batch_size, num_anchors, num_classes]
+ representing the value of the loss function.
+ """
+ if self._bootstrap_type == 'soft':
+ bootstrap_target_tensor = self._alpha * target_tensor + (
+ 1.0 - self._alpha) * tf.sigmoid(prediction_tensor)
+ else:
+ bootstrap_target_tensor = self._alpha * target_tensor + (
+ 1.0 - self._alpha) * tf.cast(
+ tf.sigmoid(prediction_tensor) > 0.5, tf.float32)
+ per_entry_cross_ent = (tf.nn.sigmoid_cross_entropy_with_logits(
+ labels=bootstrap_target_tensor, logits=prediction_tensor))
+ return per_entry_cross_ent * weights
+
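+# A hedged numeric sketch (illustrative comment, not part of the original
+# file): with alpha = 0.8, a positive label (target = 1) and a predicted
+# probability of 0.6,
+#   'soft' bootstrapping uses 0.8 * 1 + 0.2 * 0.6 = 0.92 as the target,
+#   'hard' bootstrapping uses 0.8 * 1 + 0.2 * 1.0 = 1.0  (since 0.6 > 0.5),
+# and the sigmoid cross entropy is then taken against that blended target.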
+
+class HardExampleMiner(object):
+ """Hard example mining for regions in a list of images.
+
+ Implements hard example mining to select a subset of regions to be
+ back-propagated. For each image, selects the regions with highest losses,
+ subject to the condition that a newly selected region cannot have
+ an IOU > iou_threshold with any of the previously selected regions.
+ This can be achieved by re-using a greedy non-maximum suppression algorithm.
+ A constraint on the number of negatives mined per positive region can also be
+ enforced.
+
+ Reference papers: "Training Region-based Object Detectors with Online
+ Hard Example Mining" (CVPR 2016) by Srivastava et al., and
+ "SSD: Single Shot MultiBox Detector" (ECCV 2016) by Liu et al.
+ """
+
+ def __init__(self,
+ num_hard_examples=64,
+ iou_threshold=0.7,
+ loss_type='both',
+ cls_loss_weight=0.05,
+ loc_loss_weight=0.06,
+ max_negatives_per_positive=None,
+ min_negatives_per_image=0):
+ """Constructor.
+
+ The hard example mining implemented by this class can replicate the behavior
+ in the two aforementioned papers (Srivastava et al., and Liu et al).
+    To replicate the OHEM paper (Srivastava et al.), num_hard_examples is set
+    to a fixed parameter (64 by default) and iou_threshold is set to .7 for
+    running non-max-suppression on the predicted boxes prior to hard mining.
+ In order to replicate the SSD paper (Liu et al), num_hard_examples should
+ be set to None, max_negatives_per_positive should be 3 and iou_threshold
+ should be 1.0 (in order to effectively turn off NMS).
+
+ Args:
+ num_hard_examples: maximum number of hard examples to be
+ selected per image (prior to enforcing max negative to positive ratio
+ constraint). If set to None, all examples obtained after NMS are
+ considered.
+ iou_threshold: minimum intersection over union for an example
+ to be discarded during NMS.
+      loss_type: use only classification losses ('cls'), only localization
+        losses ('loc'), or both losses ('both', default).
+        In the last case, cls_loss_weight and loc_loss_weight are used to
+        compute the weighted sum of the two losses.
+      cls_loss_weight: weight for classification loss.
+      loc_loss_weight: weight for localization loss.
+      max_negatives_per_positive: maximum number of negatives to retain for
+        each positive anchor. By default, max_negatives_per_positive is None,
+        which means that we do not enforce a prespecified negative:positive
+        ratio. Note also that max_negatives_per_positive can be a float
+        (and will be converted to a float even if it is passed in otherwise).
+      min_negatives_per_image: minimum number of negative anchors to sample for
+        a given image. Setting this to a positive number allows sampling
+        negatives in images without any positive anchors, so that mining is not
+        biased towards images with at least one detection.
+ """
+ self._num_hard_examples = num_hard_examples
+ self._iou_threshold = iou_threshold
+ self._loss_type = loss_type
+ self._cls_loss_weight = cls_loss_weight
+ self._loc_loss_weight = loc_loss_weight
+ self._max_negatives_per_positive = max_negatives_per_positive
+ self._min_negatives_per_image = min_negatives_per_image
+ if self._max_negatives_per_positive is not None:
+ self._max_negatives_per_positive = float(self._max_negatives_per_positive)
+ self._num_positives_list = None
+ self._num_negatives_list = None
+
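+  # A hedged usage sketch (illustrative comment, not part of the original
+  # file): the SSD-style configuration described in the constructor docstring
+  # corresponds roughly to
+  #   miner = HardExampleMiner(num_hard_examples=None,
+  #                            iou_threshold=1.0,
+  #                            max_negatives_per_positive=3)
+  #   location_loss, cls_loss = miner(location_losses, cls_losses,
+  #                                   decoded_boxlist_list, match_list)
+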
+ def __call__(self,
+ location_losses,
+ cls_losses,
+ decoded_boxlist_list,
+ match_list=None):
+ """Computes localization and classification losses after hard mining.
+
+ Args:
+ location_losses: a float tensor of shape [num_images, num_anchors]
+ representing anchorwise localization losses.
+ cls_losses: a float tensor of shape [num_images, num_anchors]
+ representing anchorwise classification losses.
+ decoded_boxlist_list: a list of decoded BoxList representing location
+ predictions for each image.
+ match_list: an optional list of matcher.Match objects encoding the match
+ between anchors and groundtruth boxes for each image of the batch,
+ with rows of the Match objects corresponding to groundtruth boxes
+ and columns corresponding to anchors. Match objects in match_list are
+ used to reference which anchors are positive, negative or ignored. If
+ self._max_negatives_per_positive exists, these are then used to enforce
+ a prespecified negative to positive ratio.
+
+ Returns:
+ mined_location_loss: a float scalar with sum of localization losses from
+ selected hard examples.
+ mined_cls_loss: a float scalar with sum of classification losses from
+ selected hard examples.
+ Raises:
+ ValueError: if location_losses, cls_losses and decoded_boxlist_list do
+ not have compatible shapes (i.e., they must correspond to the same
+ number of images).
+ ValueError: if match_list is specified but its length does not match
+ len(decoded_boxlist_list).
+ """
+ mined_location_losses = []
+ mined_cls_losses = []
+ location_losses = tf.unstack(location_losses)
+ cls_losses = tf.unstack(cls_losses)
+ num_images = len(decoded_boxlist_list)
+ if not match_list:
+ match_list = num_images * [None]
+ if not len(location_losses) == len(decoded_boxlist_list) == len(cls_losses):
+ raise ValueError('location_losses, cls_losses and decoded_boxlist_list '
+ 'do not have compatible shapes.')
+ if not isinstance(match_list, list):
+ raise ValueError('match_list must be a list.')
+ if len(match_list) != len(decoded_boxlist_list):
+ raise ValueError('match_list must either be None or have '
+ 'length=len(decoded_boxlist_list).')
+ num_positives_list = []
+ num_negatives_list = []
+ for ind, detection_boxlist in enumerate(decoded_boxlist_list):
+ box_locations = detection_boxlist.get()
+ match = match_list[ind]
+ image_losses = cls_losses[ind]
+ if self._loss_type == 'loc':
+ image_losses = location_losses[ind]
+ elif self._loss_type == 'both':
+ image_losses *= self._cls_loss_weight
+ image_losses += location_losses[ind] * self._loc_loss_weight
+ if self._num_hard_examples is not None:
+ num_hard_examples = self._num_hard_examples
+ else:
+ num_hard_examples = detection_boxlist.num_boxes()
+ selected_indices = tf.image.non_max_suppression(
+ box_locations, image_losses, num_hard_examples, self._iou_threshold)
+ if self._max_negatives_per_positive is not None and match:
+ (selected_indices, num_positives,
+ num_negatives) = self._subsample_selection_to_desired_neg_pos_ratio(
+ selected_indices, match, self._max_negatives_per_positive,
+ self._min_negatives_per_image)
+ num_positives_list.append(num_positives)
+ num_negatives_list.append(num_negatives)
+ mined_location_losses.append(
+ tf.reduce_sum(tf.gather(location_losses[ind], selected_indices)))
+ mined_cls_losses.append(
+ tf.reduce_sum(tf.gather(cls_losses[ind], selected_indices)))
+ location_loss = tf.reduce_sum(tf.stack(mined_location_losses))
+ cls_loss = tf.reduce_sum(tf.stack(mined_cls_losses))
+ if match and self._max_negatives_per_positive:
+ self._num_positives_list = num_positives_list
+ self._num_negatives_list = num_negatives_list
+ return (location_loss, cls_loss)
+
+ def summarize(self):
+ """Summarize the number of positives and negatives after mining."""
+ if self._num_positives_list and self._num_negatives_list:
+ avg_num_positives = tf.reduce_mean(tf.to_float(self._num_positives_list))
+ avg_num_negatives = tf.reduce_mean(tf.to_float(self._num_negatives_list))
+ tf.summary.scalar('HardExampleMiner/NumPositives', avg_num_positives)
+ tf.summary.scalar('HardExampleMiner/NumNegatives', avg_num_negatives)
+
+ def _subsample_selection_to_desired_neg_pos_ratio(self,
+ indices,
+ match,
+ max_negatives_per_positive,
+ min_negatives_per_image=0):
+ """Subsample a collection of selected indices to a desired neg:pos ratio.
+
+    This function takes a subset of M indices (indexing into a large anchor
+    collection of N anchors, where M < N) which are labeled positive or
+    negative via a Match object (matching anchors to groundtruth boxes).
+
+    (1) match_results[i]>=0, meaning that column i is matched with row
+        match_results[i].
+ (2) match_results[i]=-1, meaning that column i is not matched.
+ (3) match_results[i]=-2, meaning that column i is ignored.
+ use_matmul_gather: Use matrix multiplication based gather instead of
+ standard tf.gather. (Default: False).
+
+ Raises:
+      ValueError: if match_results does not have rank 1 or is not an
+        int32 tensor
+ """
+ if match_results.shape.ndims != 1:
+ raise ValueError('match_results should have rank 1')
+    if match_results.dtype != tf.int32:
+      raise ValueError('match_results should be an int32 tensor')
+ self._match_results = match_results
+ self._gather_op = tf.gather
+ if use_matmul_gather:
+ self._gather_op = ops.matmul_gather_on_zeroth_axis
+
+ @property
+ def match_results(self):
+ """The accessor for match results.
+
+ Returns:
+ the tensor which encodes the match results.
+ """
+ return self._match_results
+
+ def matched_column_indices(self):
+ """Returns column indices that match to some row.
+
+ The indices returned by this op are always sorted in increasing order.
+
+ Returns:
+ column_indices: int32 tensor of shape [K] with column indices.
+ """
+ return self._reshape_and_cast(tf.where(tf.greater(self._match_results, -1)))
+
+ def matched_column_indicator(self):
+ """Returns column indices that are matched.
+
+ Returns:
+ column_indices: int32 tensor of shape [K] with column indices.
+ """
+ return tf.greater_equal(self._match_results, 0)
+
+ def num_matched_columns(self):
+ """Returns number (int32 scalar tensor) of matched columns."""
+ return tf.size(self.matched_column_indices())
+
+ def unmatched_column_indices(self):
+ """Returns column indices that do not match any row.
+
+ The indices returned by this op are always sorted in increasing order.
+
+ Returns:
+ column_indices: int32 tensor of shape [K] with column indices.
+ """
+ return self._reshape_and_cast(tf.where(tf.equal(self._match_results, -1)))
+
+ def unmatched_column_indicator(self):
+ """Returns column indices that are unmatched.
+
+ Returns:
+ column_indices: int32 tensor of shape [K] with column indices.
+ """
+ return tf.equal(self._match_results, -1)
+
+ def num_unmatched_columns(self):
+ """Returns number (int32 scalar tensor) of unmatched columns."""
+ return tf.size(self.unmatched_column_indices())
+
+ def ignored_column_indices(self):
+ """Returns column indices that are ignored (neither Matched nor Unmatched).
+
+ The indices returned by this op are always sorted in increasing order.
+
+ Returns:
+ column_indices: int32 tensor of shape [K] with column indices.
+ """
+ return self._reshape_and_cast(tf.where(self.ignored_column_indicator()))
+
+ def ignored_column_indicator(self):
+ """Returns boolean column indicator where True means the colum is ignored.
+
+ Returns:
+ column_indicator: boolean vector which is True for all ignored column
+ indices.
+ """
+ return tf.equal(self._match_results, -2)
+
+ def num_ignored_columns(self):
+ """Returns number (int32 scalar tensor) of matched columns."""
+ return tf.size(self.ignored_column_indices())
+
+ def unmatched_or_ignored_column_indices(self):
+ """Returns column indices that are unmatched or ignored.
+
+ The indices returned by this op are always sorted in increasing order.
+
+ Returns:
+ column_indices: int32 tensor of shape [K] with column indices.
+ """
+ return self._reshape_and_cast(tf.where(tf.greater(0, self._match_results)))
+
+ def matched_row_indices(self):
+ """Returns row indices that match some column.
+
+    The indices returned by this op are ordered so as to be in correspondence
+    with the output of matched_column_indices(). For example if
+    self.matched_column_indices() is [0, 2], and self.matched_row_indices() is
+ [7, 3], then we know that column 0 was matched to row 7 and column 2 was
+ matched to row 3.
+
+ Returns:
+ row_indices: int32 tensor of shape [K] with row indices.
+ """
+ return self._reshape_and_cast(
+ self._gather_op(self._match_results, self.matched_column_indices()))
+
+ def _reshape_and_cast(self, t):
+ return tf.cast(tf.reshape(t, [-1]), tf.int32)
+
+ def gather_based_on_match(self, input_tensor, unmatched_value,
+ ignored_value):
+ """Gathers elements from `input_tensor` based on match results.
+
+ For columns that are matched to a row, gathered_tensor[col] is set to
+ input_tensor[match_results[col]]. For columns that are unmatched,
+ gathered_tensor[col] is set to unmatched_value. Finally, for columns that
+ are ignored gathered_tensor[col] is set to ignored_value.
+
+ Note that the input_tensor.shape[1:] must match with unmatched_value.shape
+ and ignored_value.shape
+
+ Args:
+ input_tensor: Tensor to gather values from.
+ unmatched_value: Constant tensor value for unmatched columns.
+ ignored_value: Constant tensor value for ignored columns.
+
+ Returns:
+ gathered_tensor: A tensor containing values gathered from input_tensor.
+ The shape of the gathered tensor is [match_results.shape[0]] +
+ input_tensor.shape[1:].
+ """
+ input_tensor = tf.concat(
+ [tf.stack([ignored_value, unmatched_value]),
+ tf.to_float(input_tensor)],
+ axis=0)
+ gather_indices = tf.maximum(self.match_results + 2, 0)
+ gathered_tensor = self._gather_op(input_tensor, gather_indices)
+ return gathered_tensor
+
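+# A hedged numeric sketch (illustrative comment, not part of the original
+# file), mirroring the unit tests further below: with
+# match_results = [3, 1, -1, 0, -1, 5, -2], input_tensor = [0, 1, ..., 7],
+# unmatched_value = 100 and ignored_value = 200, gather_based_on_match
+# prepends [200, 100] to the input and gathers at match_results + 2,
+# producing [3, 1, 100, 0, 100, 5, 200].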
+
+class Matcher(object):
+ """Abstract base class for matcher.
+ """
+ __metaclass__ = ABCMeta
+
+ def __init__(self, use_matmul_gather=False):
+ """Constructs a Matcher.
+
+ Args:
+ use_matmul_gather: Force constructed match objects to use matrix
+ multiplication based gather instead of standard tf.gather.
+ (Default: False).
+ """
+ self._use_matmul_gather = use_matmul_gather
+
+ def match(self, similarity_matrix, valid_rows=None, scope=None):
+ """Computes matches among row and column indices and returns the result.
+
+ Computes matches among the row and column indices based on the similarity
+ matrix and optional arguments.
+
+ Args:
+ similarity_matrix: Float tensor of shape [N, M] with pairwise similarity
+ where higher value means more similar.
+ valid_rows: A boolean tensor of shape [N] indicating the rows that are
+ valid for matching.
+ scope: Op scope name. Defaults to 'Match' if None.
+
+ Returns:
+ A Match object with the results of matching.
+ """
+ with tf.name_scope(scope, 'Match') as scope:
+ if valid_rows is None:
+ valid_rows = tf.ones(tf.shape(similarity_matrix)[0], dtype=tf.bool)
+ return Match(self._match(similarity_matrix, valid_rows),
+ self._use_matmul_gather)
+
+ @abstractmethod
+ def _match(self, similarity_matrix, valid_rows):
+ """Method to be overridden by implementations.
+
+ Args:
+ similarity_matrix: Float tensor of shape [N, M] with pairwise similarity
+ where higher value means more similar.
+ valid_rows: A boolean tensor of shape [N] indicating the rows that are
+ valid for matching.
+ Returns:
+ match_results: Integer tensor of shape [M]: match_results[i]>=0 means
+ that column i is matched to row match_results[i], match_results[i]=-1
+ means that the column is not matched. match_results[i]=-2 means that
+ the column is ignored (usually this happens when there is a very weak
+ match which one neither wants as positive nor negative example).
+ """
+ pass
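+
+# A hedged sketch (illustrative comment, not part of the original file): a
+# minimal, hypothetical concrete Matcher could simply assign each column to
+# its argmax row and never emit -1/-2, e.g.
+#
+#   class ToyArgMaxMatcher(Matcher):   # hypothetical example class
+#     def _match(self, similarity_matrix, valid_rows):
+#       return tf.cast(tf.argmax(similarity_matrix, axis=0), tf.int32)
+#
+# Real matchers additionally apply similarity thresholds to produce the -1
+# (unmatched) and -2 (ignored) codes described above.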
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/matcher_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/matcher_test.py"
new file mode 100644
index 00000000..05607834
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/matcher_test.py"
@@ -0,0 +1,192 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for object_detection.core.matcher."""
+import numpy as np
+import tensorflow as tf
+
+from object_detection.core import matcher
+
+
+class MatchTest(tf.test.TestCase):
+
+ def test_get_correct_matched_columnIndices(self):
+ match_results = tf.constant([3, 1, -1, 0, -1, 5, -2])
+ match = matcher.Match(match_results)
+ expected_column_indices = [0, 1, 3, 5]
+ matched_column_indices = match.matched_column_indices()
+ self.assertEquals(matched_column_indices.dtype, tf.int32)
+ with self.test_session() as sess:
+ matched_column_indices = sess.run(matched_column_indices)
+ self.assertAllEqual(matched_column_indices, expected_column_indices)
+
+ def test_get_correct_counts(self):
+ match_results = tf.constant([3, 1, -1, 0, -1, 5, -2])
+ match = matcher.Match(match_results)
+ exp_num_matched_columns = 4
+ exp_num_unmatched_columns = 2
+ exp_num_ignored_columns = 1
+ num_matched_columns = match.num_matched_columns()
+ num_unmatched_columns = match.num_unmatched_columns()
+ num_ignored_columns = match.num_ignored_columns()
+ self.assertEquals(num_matched_columns.dtype, tf.int32)
+ self.assertEquals(num_unmatched_columns.dtype, tf.int32)
+ self.assertEquals(num_ignored_columns.dtype, tf.int32)
+ with self.test_session() as sess:
+ (num_matched_columns_out, num_unmatched_columns_out,
+ num_ignored_columns_out) = sess.run(
+ [num_matched_columns, num_unmatched_columns, num_ignored_columns])
+ self.assertAllEqual(num_matched_columns_out, exp_num_matched_columns)
+ self.assertAllEqual(num_unmatched_columns_out, exp_num_unmatched_columns)
+ self.assertAllEqual(num_ignored_columns_out, exp_num_ignored_columns)
+
+ def testGetCorrectUnmatchedColumnIndices(self):
+ match_results = tf.constant([3, 1, -1, 0, -1, 5, -2])
+ match = matcher.Match(match_results)
+ expected_column_indices = [2, 4]
+ unmatched_column_indices = match.unmatched_column_indices()
+ self.assertEquals(unmatched_column_indices.dtype, tf.int32)
+ with self.test_session() as sess:
+ unmatched_column_indices = sess.run(unmatched_column_indices)
+ self.assertAllEqual(unmatched_column_indices, expected_column_indices)
+
+ def testGetCorrectMatchedRowIndices(self):
+ match_results = tf.constant([3, 1, -1, 0, -1, 5, -2])
+ match = matcher.Match(match_results)
+ expected_row_indices = [3, 1, 0, 5]
+ matched_row_indices = match.matched_row_indices()
+ self.assertEquals(matched_row_indices.dtype, tf.int32)
+ with self.test_session() as sess:
+ matched_row_inds = sess.run(matched_row_indices)
+ self.assertAllEqual(matched_row_inds, expected_row_indices)
+
+ def test_get_correct_ignored_column_indices(self):
+ match_results = tf.constant([3, 1, -1, 0, -1, 5, -2])
+ match = matcher.Match(match_results)
+ expected_column_indices = [6]
+ ignored_column_indices = match.ignored_column_indices()
+ self.assertEquals(ignored_column_indices.dtype, tf.int32)
+ with self.test_session() as sess:
+ ignored_column_indices = sess.run(ignored_column_indices)
+ self.assertAllEqual(ignored_column_indices, expected_column_indices)
+
+ def test_get_correct_matched_column_indicator(self):
+ match_results = tf.constant([3, 1, -1, 0, -1, 5, -2])
+ match = matcher.Match(match_results)
+ expected_column_indicator = [True, True, False, True, False, True, False]
+ matched_column_indicator = match.matched_column_indicator()
+ self.assertEquals(matched_column_indicator.dtype, tf.bool)
+ with self.test_session() as sess:
+ matched_column_indicator = sess.run(matched_column_indicator)
+ self.assertAllEqual(matched_column_indicator, expected_column_indicator)
+
+ def test_get_correct_unmatched_column_indicator(self):
+ match_results = tf.constant([3, 1, -1, 0, -1, 5, -2])
+ match = matcher.Match(match_results)
+ expected_column_indicator = [False, False, True, False, True, False, False]
+ unmatched_column_indicator = match.unmatched_column_indicator()
+ self.assertEquals(unmatched_column_indicator.dtype, tf.bool)
+ with self.test_session() as sess:
+ unmatched_column_indicator = sess.run(unmatched_column_indicator)
+ self.assertAllEqual(unmatched_column_indicator, expected_column_indicator)
+
+ def test_get_correct_ignored_column_indicator(self):
+ match_results = tf.constant([3, 1, -1, 0, -1, 5, -2])
+ match = matcher.Match(match_results)
+ expected_column_indicator = [False, False, False, False, False, False, True]
+ ignored_column_indicator = match.ignored_column_indicator()
+ self.assertEquals(ignored_column_indicator.dtype, tf.bool)
+ with self.test_session() as sess:
+ ignored_column_indicator = sess.run(ignored_column_indicator)
+ self.assertAllEqual(ignored_column_indicator, expected_column_indicator)
+
+ def test_get_correct_unmatched_ignored_column_indices(self):
+ match_results = tf.constant([3, 1, -1, 0, -1, 5, -2])
+ match = matcher.Match(match_results)
+ expected_column_indices = [2, 4, 6]
+ unmatched_ignored_column_indices = (match.
+ unmatched_or_ignored_column_indices())
+ self.assertEquals(unmatched_ignored_column_indices.dtype, tf.int32)
+ with self.test_session() as sess:
+ unmatched_ignored_column_indices = sess.run(
+ unmatched_ignored_column_indices)
+ self.assertAllEqual(unmatched_ignored_column_indices,
+ expected_column_indices)
+
+ def test_all_columns_accounted_for(self):
+ # Note: deliberately setting to small number so not always
+ # all possibilities appear (matched, unmatched, ignored)
+ num_matches = 10
+ match_results = tf.random_uniform(
+ [num_matches], minval=-2, maxval=5, dtype=tf.int32)
+ match = matcher.Match(match_results)
+ matched_column_indices = match.matched_column_indices()
+ unmatched_column_indices = match.unmatched_column_indices()
+ ignored_column_indices = match.ignored_column_indices()
+ with self.test_session() as sess:
+ matched, unmatched, ignored = sess.run([
+ matched_column_indices, unmatched_column_indices,
+ ignored_column_indices
+ ])
+ all_indices = np.hstack((matched, unmatched, ignored))
+ all_indices_sorted = np.sort(all_indices)
+ self.assertAllEqual(all_indices_sorted,
+ np.arange(num_matches, dtype=np.int32))
+
+ def test_scalar_gather_based_on_match(self):
+ match_results = tf.constant([3, 1, -1, 0, -1, 5, -2])
+ input_tensor = tf.constant([0, 1, 2, 3, 4, 5, 6, 7], dtype=tf.float32)
+ expected_gathered_tensor = [3, 1, 100, 0, 100, 5, 200]
+ match = matcher.Match(match_results)
+ gathered_tensor = match.gather_based_on_match(input_tensor,
+ unmatched_value=100.,
+ ignored_value=200.)
+ self.assertEquals(gathered_tensor.dtype, tf.float32)
+ with self.test_session():
+ gathered_tensor_out = gathered_tensor.eval()
+ self.assertAllEqual(expected_gathered_tensor, gathered_tensor_out)
+
+ def test_multidimensional_gather_based_on_match(self):
+ match_results = tf.constant([1, -1, -2])
+ input_tensor = tf.constant([[0, 0.5, 0, 0.5], [0, 0, 0.5, 0.5]],
+ dtype=tf.float32)
+ expected_gathered_tensor = [[0, 0, 0.5, 0.5], [0, 0, 0, 0], [0, 0, 0, 0]]
+ match = matcher.Match(match_results)
+ gathered_tensor = match.gather_based_on_match(input_tensor,
+ unmatched_value=tf.zeros(4),
+ ignored_value=tf.zeros(4))
+ self.assertEquals(gathered_tensor.dtype, tf.float32)
+ with self.test_session():
+ gathered_tensor_out = gathered_tensor.eval()
+ self.assertAllEqual(expected_gathered_tensor, gathered_tensor_out)
+
+ def test_multidimensional_gather_based_on_match_with_matmul_gather_op(self):
+ match_results = tf.constant([1, -1, -2])
+ input_tensor = tf.constant([[0, 0.5, 0, 0.5], [0, 0, 0.5, 0.5]],
+ dtype=tf.float32)
+ expected_gathered_tensor = [[0, 0, 0.5, 0.5], [0, 0, 0, 0], [0, 0, 0, 0]]
+ match = matcher.Match(match_results, use_matmul_gather=True)
+ gathered_tensor = match.gather_based_on_match(input_tensor,
+ unmatched_value=tf.zeros(4),
+ ignored_value=tf.zeros(4))
+ self.assertEquals(gathered_tensor.dtype, tf.float32)
+ with self.test_session() as sess:
+ self.assertTrue(
+        all([op.name != 'Gather' for op in sess.graph.get_operations()]))
+ gathered_tensor_out = gathered_tensor.eval()
+ self.assertAllEqual(expected_gathered_tensor, gathered_tensor_out)
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/minibatch_sampler.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/minibatch_sampler.py"
new file mode 100644
index 00000000..dc622221
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/minibatch_sampler.py"
@@ -0,0 +1,90 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Base minibatch sampler module.
+
+The job of the minibatch_sampler is to subsample a minibatch based on some
+criterion.
+
+The main function call is:
+ subsample(indicator, batch_size, **params).
+Indicator is a 1d boolean tensor where True denotes which examples can be
+sampled. It returns a boolean indicator where True denotes an example has been
+sampled.
+
+Subclasses should implement the Subsample function and can make use of the
+@staticmethod SubsampleIndicator.
+"""
+
+from abc import ABCMeta
+from abc import abstractmethod
+
+import tensorflow as tf
+
+from object_detection.utils import ops
+
+
+class MinibatchSampler(object):
+ """Abstract base class for subsampling minibatches."""
+ __metaclass__ = ABCMeta
+
+ def __init__(self):
+ """Constructs a minibatch sampler."""
+ pass
+
+ @abstractmethod
+ def subsample(self, indicator, batch_size, **params):
+ """Returns subsample of entries in indicator.
+
+ Args:
+ indicator: boolean tensor of shape [N] whose True entries can be sampled.
+ batch_size: desired batch size.
+ **params: additional keyword arguments for specific implementations of
+ the MinibatchSampler.
+
+ Returns:
+      sample_indicator: boolean tensor of shape [N] whose True entries have
+        been sampled. If sum(indicator) >= batch_size, then
+        sum(sample_indicator) == batch_size.
+ """
+ pass
+
+ @staticmethod
+ def subsample_indicator(indicator, num_samples):
+ """Subsample indicator vector.
+
+ Given a boolean indicator vector with M elements set to `True`, the function
+ assigns all but `num_samples` of these previously `True` elements to
+ `False`. If `num_samples` is greater than M, the original indicator vector
+ is returned.
+
+ Args:
+ indicator: a 1-dimensional boolean tensor indicating which elements
+ are allowed to be sampled and which are not.
+ num_samples: int32 scalar tensor
+
+ Returns:
+ a boolean tensor with the same shape as input (indicator) tensor
+ """
+ indices = tf.where(indicator)
+ indices = tf.random_shuffle(indices)
+ indices = tf.reshape(indices, [-1])
+
+ num_samples = tf.minimum(tf.size(indices), num_samples)
+ selected_indices = tf.slice(indices, [0], tf.reshape(num_samples, [1]))
+
+ selected_indicator = ops.indices_to_dense_vector(selected_indices,
+ tf.shape(indicator)[0])
+
+ return tf.equal(selected_indicator, 1)
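+
+# A hedged usage sketch (illustrative comment, not part of the original file):
+#   indicator = tf.constant([True, False, True, False, True, True, False])
+#   sampled = MinibatchSampler.subsample_indicator(indicator, 2)
+# `sampled` is a boolean vector of the same shape with exactly two True
+# entries, chosen uniformly at random from the positions that were True in
+# `indicator`.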
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/minibatch_sampler_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/minibatch_sampler_test.py"
new file mode 100644
index 00000000..7420ae5d
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/minibatch_sampler_test.py"
@@ -0,0 +1,82 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for google3.research.vale.object_detection.minibatch_sampler."""
+
+import numpy as np
+import tensorflow as tf
+
+from object_detection.core import minibatch_sampler
+
+
+class MinibatchSamplerTest(tf.test.TestCase):
+
+ def test_subsample_indicator_when_more_true_elements_than_num_samples(self):
+ np_indicator = [True, False, True, False, True, True, False]
+ indicator = tf.constant(np_indicator)
+ samples = minibatch_sampler.MinibatchSampler.subsample_indicator(
+ indicator, 3)
+ with self.test_session() as sess:
+ samples_out = sess.run(samples)
+      self.assertEqual(3, np.sum(samples_out))
+ self.assertAllEqual(samples_out,
+ np.logical_and(samples_out, np_indicator))
+
+ def test_subsample_when_more_true_elements_than_num_samples_no_shape(self):
+ np_indicator = [True, False, True, False, True, True, False]
+ indicator = tf.placeholder(tf.bool)
+ feed_dict = {indicator: np_indicator}
+
+ samples = minibatch_sampler.MinibatchSampler.subsample_indicator(
+ indicator, 3)
+ with self.test_session() as sess:
+ samples_out = sess.run(samples, feed_dict=feed_dict)
+      self.assertEqual(3, np.sum(samples_out))
+ self.assertAllEqual(samples_out,
+ np.logical_and(samples_out, np_indicator))
+
+ def test_subsample_indicator_when_less_true_elements_than_num_samples(self):
+ np_indicator = [True, False, True, False, True, True, False]
+ indicator = tf.constant(np_indicator)
+ samples = minibatch_sampler.MinibatchSampler.subsample_indicator(
+ indicator, 5)
+ with self.test_session() as sess:
+ samples_out = sess.run(samples)
+      self.assertEqual(4, np.sum(samples_out))
+ self.assertAllEqual(samples_out,
+ np.logical_and(samples_out, np_indicator))
+
+ def test_subsample_indicator_when_num_samples_is_zero(self):
+ np_indicator = [True, False, True, False, True, True, False]
+ indicator = tf.constant(np_indicator)
+ samples_none = minibatch_sampler.MinibatchSampler.subsample_indicator(
+ indicator, 0)
+ with self.test_session() as sess:
+ samples_none_out = sess.run(samples_none)
+ self.assertAllEqual(
+ np.zeros_like(samples_none_out, dtype=bool),
+ samples_none_out)
+
+ def test_subsample_indicator_when_indicator_all_false(self):
+ indicator_empty = tf.zeros([0], dtype=tf.bool)
+ samples_empty = minibatch_sampler.MinibatchSampler.subsample_indicator(
+ indicator_empty, 4)
+ with self.test_session() as sess:
+ samples_empty_out = sess.run(samples_empty)
+ self.assertEqual(0, samples_empty_out.size)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/model.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/model.py"
new file mode 100644
index 00000000..91e31813
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/model.py"
@@ -0,0 +1,346 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Abstract detection model.
+
+This file defines a generic base class for detection models. Programs that are
+designed to work with arbitrary detection models should only depend on this
+class. We intend for the functions in this class to follow tensor-in/tensor-out
+design, thus all functions have tensors or lists/dictionaries holding tensors as
+inputs and outputs.
+
+Abstractly, detection models predict output tensors given input images
+which can be passed to a loss function at training time or passed to a
+postprocessing function at eval time. The computation graphs at a high level
+consequently look as follows:
+
+Training time:
+inputs (images tensor) -> preprocess -> predict -> loss -> outputs (loss tensor)
+
+Evaluation time:
+inputs (images tensor) -> preprocess -> predict -> postprocess
+ -> outputs (boxes tensor, scores tensor, classes tensor, num_detections tensor)
+
+DetectionModels must thus implement four functions (1) preprocess, (2) predict,
+(3) postprocess and (4) loss. DetectionModels should make no assumptions about
+the input size or aspect ratio --- they are responsible for doing any
+resize/reshaping necessary (see docstring for the preprocess function).
+Output classes are always integers in the range [0, num_classes). Any mapping
+of these integers to semantic labels is to be handled outside of this class.
+
+Images are resized in the `preprocess` method. All of `preprocess`, `predict`,
+and `postprocess` should be reentrant.
+
+The `preprocess` method runs `image_resizer_fn` that returns resized_images and
+`true_image_shapes`. Since `image_resizer_fn` can pad the images with zeros,
+true_image_shapes indicate the slices that contain the image without padding.
+This is useful for padding images to be a fixed size for batching.
+
+The `postprocess` method uses the true image shapes to clip predictions that lie
+outside of images.
+
+By default, DetectionModels produce bounding box detections; however, we support
+a handful of auxiliary annotations associated with each bounding box, namely,
+instance masks and keypoints.
+"""
+from abc import ABCMeta
+from abc import abstractmethod
+
+from object_detection.core import standard_fields as fields
+
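+# A hedged sketch (illustrative comment, not part of the original file) of the
+# training-time flow described in the module docstring, for some concrete
+# DetectionModel subclass instance `model`:
+#
+#   preprocessed_inputs, true_image_shapes = model.preprocess(images)
+#   prediction_dict = model.predict(preprocessed_inputs, true_image_shapes)
+#   model.provide_groundtruth(groundtruth_boxes_list, groundtruth_classes_list)
+#   losses_dict = model.loss(prediction_dict, true_image_shapes)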
+
+class DetectionModel(object):
+ """Abstract base class for detection models."""
+ __metaclass__ = ABCMeta
+
+ def __init__(self, num_classes):
+ """Constructor.
+
+ Args:
+ num_classes: number of classes. Note that num_classes *does not* include
+ background categories that might be implicitly predicted in various
+ implementations.
+ """
+ self._num_classes = num_classes
+ self._groundtruth_lists = {}
+
+ @property
+ def num_classes(self):
+ return self._num_classes
+
+ def groundtruth_lists(self, field):
+ """Access list of groundtruth tensors.
+
+ Args:
+ field: a string key, options are
+ fields.BoxListFields.{boxes,classes,masks,keypoints} or
+ fields.InputDataFields.is_annotated.
+
+ Returns:
+ a list of tensors holding groundtruth information (see also
+ provide_groundtruth function below), with one entry for each image in the
+ batch.
+ Raises:
+ RuntimeError: if the field has not been provided via provide_groundtruth.
+ """
+ if field not in self._groundtruth_lists:
+ raise RuntimeError('Groundtruth tensor {} has not been provided'.format(
+ field))
+ return self._groundtruth_lists[field]
+
+ def groundtruth_has_field(self, field):
+ """Determines whether the groundtruth includes the given field.
+
+ Args:
+ field: a string key, options are
+ fields.BoxListFields.{boxes,classes,masks,keypoints} or
+ fields.InputDataFields.is_annotated.
+
+ Returns:
+ True if the groundtruth includes the given field, False otherwise.
+ """
+ return field in self._groundtruth_lists
+
+ @abstractmethod
+ def preprocess(self, inputs):
+ """Input preprocessing.
+
+ To be overridden by implementations.
+
+ This function is responsible for any scaling/shifting of input values that
+ is necessary prior to running the detector on an input image.
+ It is also responsible for any resizing, padding that might be necessary
+ as images are assumed to arrive in arbitrary sizes. While this function
+ could conceivably be part of the predict method (below), it is often
+ convenient to keep these separate --- for example, we may want to preprocess
+ on one device, place onto a queue, and let another device (e.g., the GPU)
+ handle prediction.
+
+ A few important notes about the preprocess function:
+ + We assume that this operation does not have any trainable variables nor
+ does it affect the groundtruth annotations in any way (thus data
+ augmentation operations such as random cropping should be performed
+ externally).
+ + There is no assumption that the batchsize in this function is the same as
+ the batch size in the predict function. In fact, we recommend calling the
+ preprocess function prior to calling any batching operations (which should
+ happen outside of the model) and thus assuming that batch sizes are equal
+ to 1 in the preprocess function.
+ + There is also no explicit assumption that the output resolutions
+ must be fixed across inputs --- this is to support "fully convolutional"
+ settings in which input images can have different shapes/resolutions.
+
+ Args:
+ inputs: a [batch, height_in, width_in, channels] float32 tensor
+ representing a batch of images with values between 0 and 255.0.
+
+ Returns:
+ preprocessed_inputs: a [batch, height_out, width_out, channels] float32
+ tensor representing a batch of images.
+ true_image_shapes: int32 tensor of shape [batch, 3] where each row is
+ of the form [height, width, channels] indicating the shapes
+ of true images in the resized images, as resized images can be padded
+ with zeros.
+ """
+ pass
+
+ @abstractmethod
+ def predict(self, preprocessed_inputs, true_image_shapes):
+ """Predict prediction tensors from inputs tensor.
+
+ Outputs of this function can be passed to loss or postprocess functions.
+
+ Args:
+ preprocessed_inputs: a [batch, height, width, channels] float32 tensor
+ representing a batch of images.
+ true_image_shapes: int32 tensor of shape [batch, 3] where each row is
+ of the form [height, width, channels] indicating the shapes
+ of true images in the resized images, as resized images can be padded
+ with zeros.
+
+ Returns:
+ prediction_dict: a dictionary holding prediction tensors to be
+ passed to the Loss or Postprocess functions.
+ """
+ pass
+
+ @abstractmethod
+ def postprocess(self, prediction_dict, true_image_shapes, **params):
+ """Convert predicted output tensors to final detections.
+
+ Outputs adhere to the following conventions:
+ * Classes are integers in [0, num_classes); background classes are removed
+ and the first non-background class is mapped to 0. If the model produces
+ class-agnostic detections, then no output is produced for classes.
+ * Boxes are to be interpreted as being in [y_min, x_min, y_max, x_max]
+ format and normalized relative to the image window.
+ * `num_detections` is provided for settings where detections are padded to a
+ fixed number of boxes.
+ * We do not specifically assume any kind of probabilistic interpretation
+ of the scores --- the only important thing is their relative ordering.
+ Thus implementations of the postprocess function are free to output
+ logits, probabilities, calibrated probabilities, or anything else.
+
+ Args:
+ prediction_dict: a dictionary holding prediction tensors.
+ true_image_shapes: int32 tensor of shape [batch, 3] where each row is
+ of the form [height, width, channels] indicating the shapes
+ of true images in the resized images, as resized images can be padded
+ with zeros.
+ **params: Additional keyword arguments for specific implementations of
+ DetectionModel.
+
+ Returns:
+ detections: a dictionary containing the following fields
+ detection_boxes: [batch, max_detections, 4]
+ detection_scores: [batch, max_detections]
+ detection_classes: [batch, max_detections]
+ (If a model is producing class-agnostic detections, this field may be
+ missing)
+ instance_masks: [batch, max_detections, image_height, image_width]
+ (optional)
+ keypoints: [batch, max_detections, num_keypoints, 2] (optional)
+ num_detections: [batch]
+ """
+ pass
+
+ @abstractmethod
+ def loss(self, prediction_dict, true_image_shapes):
+ """Compute scalar loss tensors with respect to provided groundtruth.
+
+ Calling this function requires that groundtruth tensors have been
+ provided via the provide_groundtruth function.
+
+ Args:
+ prediction_dict: a dictionary holding predicted tensors
+ true_image_shapes: int32 tensor of shape [batch, 3] where each row is
+ of the form [height, width, channels] indicating the shapes
+ of true images in the resized images, as resized images can be padded
+ with zeros.
+
+ Returns:
+ a dictionary mapping strings (loss names) to scalar tensors representing
+ loss values.
+ """
+ pass
+
+ def provide_groundtruth(self,
+ groundtruth_boxes_list,
+ groundtruth_classes_list,
+ groundtruth_masks_list=None,
+ groundtruth_keypoints_list=None,
+ groundtruth_weights_list=None,
+ groundtruth_confidences_list=None,
+ groundtruth_is_crowd_list=None,
+ is_annotated_list=None):
+ """Provide groundtruth tensors.
+
+ Args:
+ groundtruth_boxes_list: a list of 2-D tf.float32 tensors of shape
+ [num_boxes, 4] containing coordinates of the groundtruth boxes.
+ Groundtruth boxes are provided in [y_min, x_min, y_max, x_max]
+ format and assumed to be normalized and clipped
+ relative to the image window with y_min <= y_max and x_min <= x_max.
+ groundtruth_classes_list: a list of 2-D tf.float32 one-hot (or k-hot)
+ tensors of shape [num_boxes, num_classes] containing the class targets
+ with the 0th index assumed to map to the first non-background class.
+ groundtruth_masks_list: a list of 3-D tf.float32 tensors of
+ shape [num_boxes, height_in, width_in] containing instance
+ masks with values in {0, 1}. If None, no masks are provided.
+ Mask resolution `height_in`x`width_in` must agree with the resolution
+ of the input image tensor provided to the `preprocess` function.
+ groundtruth_keypoints_list: a list of 3-D tf.float32 tensors of
+ shape [num_boxes, num_keypoints, 2] containing keypoints.
+ Keypoints are assumed to be provided in normalized coordinates and
+ missing keypoints should be encoded as NaN.
+ groundtruth_weights_list: A list of 1-D tf.float32 tensors of shape
+ [num_boxes] containing weights for groundtruth boxes.
+ groundtruth_confidences_list: A list of 2-D tf.float32 tensors of shape
+ [num_boxes, num_classes] containing class confidences for groundtruth
+ boxes.
+ groundtruth_is_crowd_list: A list of 1-D tf.bool tensors of shape
+ [num_boxes] containing is_crowd annotations
+ is_annotated_list: A list of scalar tf.bool tensors indicating whether
+ images have been labeled or not.
+ """
+ self._groundtruth_lists[fields.BoxListFields.boxes] = groundtruth_boxes_list
+ self._groundtruth_lists[
+ fields.BoxListFields.classes] = groundtruth_classes_list
+ if groundtruth_weights_list:
+ self._groundtruth_lists[fields.BoxListFields.
+ weights] = groundtruth_weights_list
+ if groundtruth_confidences_list:
+ self._groundtruth_lists[fields.BoxListFields.
+ confidences] = groundtruth_confidences_list
+ if groundtruth_masks_list:
+ self._groundtruth_lists[
+ fields.BoxListFields.masks] = groundtruth_masks_list
+ if groundtruth_keypoints_list:
+ self._groundtruth_lists[
+ fields.BoxListFields.keypoints] = groundtruth_keypoints_list
+ if groundtruth_is_crowd_list:
+ self._groundtruth_lists[
+ fields.BoxListFields.is_crowd] = groundtruth_is_crowd_list
+ if is_annotated_list:
+ self._groundtruth_lists[
+ fields.InputDataFields.is_annotated] = is_annotated_list
+
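+  # A hedged usage sketch (illustrative comment, not part of the original
+  # file): for two images with one groundtruth box each and num_classes = 3,
+  #   model.provide_groundtruth(
+  #       groundtruth_boxes_list=[tf.constant([[0.1, 0.1, 0.5, 0.5]]),
+  #                               tf.constant([[0.2, 0.2, 0.9, 0.8]])],
+  #       groundtruth_classes_list=[tf.constant([[1., 0., 0.]]),
+  #                                 tf.constant([[0., 0., 1.]])])
+  # after which groundtruth_lists(fields.BoxListFields.boxes) returns the box
+  # tensors and loss() can consume them.
+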
+ @abstractmethod
+ def regularization_losses(self):
+ """Returns a list of regularization losses for this model.
+
+ Returns a list of regularization losses for this model that the estimator
+ needs to use during training/optimization.
+
+ Returns:
+ A list of regularization loss tensors.
+ """
+ pass
+
+ @abstractmethod
+ def restore_map(self, fine_tune_checkpoint_type='detection'):
+ """Returns a map of variables to load from a foreign checkpoint.
+
+ Returns a map of variable names to load from a checkpoint to variables in
+ the model graph. This enables the model to initialize based on weights from
+ another task. For example, the feature extractor variables from a
+ classification model can be used to bootstrap training of an object
+ detector. When loading from an object detection model, the checkpoint model
+ should have the same parameters as this detection model with exception of
+ the num_classes parameter.
+
+ Args:
+ fine_tune_checkpoint_type: whether to restore from a full detection
+ checkpoint (with compatible variable names) or to restore from a
+ classification checkpoint for initialization prior to training.
+ Valid values: `detection`, `classification`. Default 'detection'.
+
+ Returns:
+ A dict mapping variable names (to load from a checkpoint) to variables in
+ the model graph.
+ """
+ pass
+
+ @abstractmethod
+ def updates(self):
+ """Returns a list of update operators for this model.
+
+ Returns a list of update operators for this model that must be executed at
+ each training step. The estimator's train op needs to have a control
+ dependency on these updates.
+
+ Returns:
+ A list of update operators.
+ """
+ pass
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/post_processing.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/post_processing.py"
new file mode 100644
index 00000000..20775859
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/post_processing.py"
@@ -0,0 +1,498 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Post-processing operations on detected boxes."""
+
+import numpy as np
+import tensorflow as tf
+
+from object_detection.core import box_list
+from object_detection.core import box_list_ops
+from object_detection.core import standard_fields as fields
+from object_detection.utils import shape_utils
+
+
+def multiclass_non_max_suppression(boxes,
+ scores,
+ score_thresh,
+ iou_thresh,
+ max_size_per_class,
+ max_total_size=0,
+ clip_window=None,
+ change_coordinate_frame=False,
+ masks=None,
+ boundaries=None,
+ pad_to_max_output_size=False,
+ additional_fields=None,
+ scope=None):
+ """Multi-class version of non maximum suppression.
+
+ This op greedily selects a subset of detection bounding boxes, pruning
+ away boxes that have high IOU (intersection over union) overlap (> thresh)
+ with already selected boxes. It operates independently for each class for
+ which scores are provided (via the input scores tensor),
+ pruning boxes with score less than a provided threshold prior to
+ applying NMS.
+
+ Please note that this operation is performed on *all* classes; therefore, any
+ background classes should be removed prior to calling this function.
+
+ Selected boxes are guaranteed to be sorted in decreasing order by score (but
+ the sort is not guaranteed to be stable).
+
+ Args:
+ boxes: A [k, q, 4] float32 tensor containing k detections. `q` can be either
+ number of classes or 1 depending on whether a separate box is predicted
+ per class.
+ scores: A [k, num_classes] float32 tensor containing the scores for each of
+ the k detections. The scores have to be non-negative when
+ pad_to_max_output_size is True.
+ score_thresh: scalar threshold for score (low scoring boxes are removed).
+ iou_thresh: scalar threshold for IOU (new boxes that have high IOU overlap
+ with previously selected boxes are removed).
+ max_size_per_class: maximum number of retained boxes per class.
+ max_total_size: maximum number of boxes retained over all classes. By
+ default returns all boxes retained after capping boxes per class.
+ clip_window: A float32 tensor of the form [y_min, x_min, y_max, x_max]
+ representing the window to clip and normalize boxes to before performing
+ non-max suppression.
+ change_coordinate_frame: Whether to normalize coordinates after clipping
+ relative to clip_window (this can only be set to True if a clip_window
+ is provided)
+ masks: (optional) a [k, q, mask_height, mask_width] float32 tensor
+ containing box masks. `q` can be either number of classes or 1 depending
+ on whether a separate mask is predicted per class.
+ boundaries: (optional) a [k, q, boundary_height, boundary_width] float32
+ tensor containing box boundaries. `q` can be either number of classes or 1
+ depending on whether a separate boundary is predicted per class.
+ pad_to_max_output_size: If true, the output nmsed boxes are padded to be of
+ length `max_size_per_class`. Defaults to false.
+ additional_fields: (optional) If not None, a dictionary that maps keys to
+ tensors whose first dimensions are all of size `k`. After non-maximum
+ suppression, all tensors corresponding to the selected boxes will be
+ added to resulting BoxList.
+ scope: name scope.
+
+ Returns:
+ A tuple of sorted_boxes and num_valid_nms_boxes. sorted_boxes is a
+ BoxList holding M boxes with a rank-1 scores field representing
+ corresponding scores for each box with scores sorted in decreasing order
+ and a rank-1 classes field representing a class label for each box. The
+ num_valid_nms_boxes is a 0-D integer tensor representing the number of
+ valid elements in `BoxList`, with the valid elements appearing first.
+
+ Raises:
+ ValueError: if iou_thresh is not in [0, 1] or if input boxlist does not have
+ a valid scores field.
+ """
+ if not 0 <= iou_thresh <= 1.0:
+ raise ValueError('iou_thresh must be between 0 and 1')
+ if scores.shape.ndims != 2:
+ raise ValueError('scores field must be of rank 2')
+ if scores.shape[1].value is None:
+ raise ValueError('scores must have statically defined second '
+ 'dimension')
+ if boxes.shape.ndims != 3:
+ raise ValueError('boxes must be of rank 3.')
+ if not (boxes.shape[1].value == scores.shape[1].value or
+ boxes.shape[1].value == 1):
+ raise ValueError('second dimension of boxes must be either 1 or equal '
+ 'to the second dimension of scores')
+ if boxes.shape[2].value != 4:
+ raise ValueError('last dimension of boxes must be of size 4.')
+ if change_coordinate_frame and clip_window is None:
+ raise ValueError('if change_coordinate_frame is True, then a clip_window '
+ 'must be specified.')
+
+ with tf.name_scope(scope, 'MultiClassNonMaxSuppression'):
+ num_scores = tf.shape(scores)[0]
+ num_classes = scores.get_shape()[1]
+
+ selected_boxes_list = []
+ num_valid_nms_boxes_cumulative = tf.constant(0)
+ per_class_boxes_list = tf.unstack(boxes, axis=1)
+ if masks is not None:
+ per_class_masks_list = tf.unstack(masks, axis=1)
+ if boundaries is not None:
+ per_class_boundaries_list = tf.unstack(boundaries, axis=1)
+ boxes_ids = (range(num_classes) if len(per_class_boxes_list) > 1
+ else [0] * num_classes.value)
+ for class_idx, boxes_idx in zip(range(num_classes), boxes_ids):
+ per_class_boxes = per_class_boxes_list[boxes_idx]
+ boxlist_and_class_scores = box_list.BoxList(per_class_boxes)
+ class_scores = tf.reshape(
+ tf.slice(scores, [0, class_idx], tf.stack([num_scores, 1])), [-1])
+
+ boxlist_and_class_scores.add_field(fields.BoxListFields.scores,
+ class_scores)
+ if masks is not None:
+ per_class_masks = per_class_masks_list[boxes_idx]
+ boxlist_and_class_scores.add_field(fields.BoxListFields.masks,
+ per_class_masks)
+ if boundaries is not None:
+ per_class_boundaries = per_class_boundaries_list[boxes_idx]
+ boxlist_and_class_scores.add_field(fields.BoxListFields.boundaries,
+ per_class_boundaries)
+ if additional_fields is not None:
+ for key, tensor in additional_fields.items():
+ boxlist_and_class_scores.add_field(key, tensor)
+
+ if pad_to_max_output_size:
+ max_selection_size = max_size_per_class
+ selected_indices, num_valid_nms_boxes = (
+ tf.image.non_max_suppression_padded(
+ boxlist_and_class_scores.get(),
+ boxlist_and_class_scores.get_field(fields.BoxListFields.scores),
+ max_selection_size,
+ iou_threshold=iou_thresh,
+ score_threshold=score_thresh,
+ pad_to_max_output_size=True))
+ else:
+ max_selection_size = tf.minimum(max_size_per_class,
+ boxlist_and_class_scores.num_boxes())
+ selected_indices = tf.image.non_max_suppression(
+ boxlist_and_class_scores.get(),
+ boxlist_and_class_scores.get_field(fields.BoxListFields.scores),
+ max_selection_size,
+ iou_threshold=iou_thresh,
+ score_threshold=score_thresh)
+ num_valid_nms_boxes = tf.shape(selected_indices)[0]
+ selected_indices = tf.concat(
+ [selected_indices,
+ tf.zeros(max_selection_size-num_valid_nms_boxes, tf.int32)], 0)
+ nms_result = box_list_ops.gather(boxlist_and_class_scores,
+ selected_indices)
+ # Make the scores -1 for invalid boxes.
+ valid_nms_boxes_indx = tf.less(
+ tf.range(max_selection_size), num_valid_nms_boxes)
+ nms_scores = nms_result.get_field(fields.BoxListFields.scores)
+ nms_result.add_field(fields.BoxListFields.scores,
+ tf.where(valid_nms_boxes_indx,
+ nms_scores, -1*tf.ones(max_selection_size)))
+ num_valid_nms_boxes_cumulative += num_valid_nms_boxes
+
+ nms_result.add_field(
+ fields.BoxListFields.classes, (tf.zeros_like(
+ nms_result.get_field(fields.BoxListFields.scores)) + class_idx))
+ selected_boxes_list.append(nms_result)
+ selected_boxes = box_list_ops.concatenate(selected_boxes_list)
+ sorted_boxes = box_list_ops.sort_by_field(selected_boxes,
+ fields.BoxListFields.scores)
+ if clip_window is not None:
+ # When pad_to_max_output_size is False, it prunes the boxes with zero
+ # area.
+ sorted_boxes = box_list_ops.clip_to_window(
+ sorted_boxes,
+ clip_window,
+ filter_nonoverlapping=not pad_to_max_output_size)
+ # Set the scores of boxes with zero area to -1 to keep the default
+ # behaviour of pruning out zero area boxes.
+ sorted_boxes_size = tf.shape(sorted_boxes.get())[0]
+ non_zero_box_area = tf.cast(box_list_ops.area(sorted_boxes), tf.bool)
+ sorted_boxes_scores = tf.where(
+ non_zero_box_area,
+ sorted_boxes.get_field(fields.BoxListFields.scores),
+ -1*tf.ones(sorted_boxes_size))
+ sorted_boxes.add_field(fields.BoxListFields.scores, sorted_boxes_scores)
+ num_valid_nms_boxes_cumulative = tf.reduce_sum(
+ tf.cast(tf.greater_equal(sorted_boxes_scores, 0), tf.int32))
+ sorted_boxes = box_list_ops.sort_by_field(sorted_boxes,
+ fields.BoxListFields.scores)
+ if change_coordinate_frame:
+ sorted_boxes = box_list_ops.change_coordinate_frame(
+ sorted_boxes, clip_window)
+
+ if max_total_size:
+ max_total_size = tf.minimum(max_total_size,
+ sorted_boxes.num_boxes())
+ sorted_boxes = box_list_ops.gather(sorted_boxes,
+ tf.range(max_total_size))
+ num_valid_nms_boxes_cumulative = tf.where(
+ max_total_size > num_valid_nms_boxes_cumulative,
+ num_valid_nms_boxes_cumulative, max_total_size)
+ # Select only the valid boxes if pad_to_max_output_size is False.
+ if not pad_to_max_output_size:
+ sorted_boxes = box_list_ops.gather(
+ sorted_boxes, tf.range(num_valid_nms_boxes_cumulative))
+
+ return sorted_boxes, num_valid_nms_boxes_cumulative
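+
+
+# ---------------------------------------------------------------------------
+# Illustrative sketch (not part of the original API): a minimal, hypothetical
+# invocation of multiclass_non_max_suppression with shared boxes (q = 1) and
+# two classes. Box coordinates, scores and thresholds are made up.
+# ---------------------------------------------------------------------------
+def _example_multiclass_nms():
+  """Runs NMS on a handful of toy boxes and returns the selected corners."""
+  boxes = tf.constant([[[0., 0., 1., 1.]],
+                       [[0., 0.1, 1., 1.1]],
+                       [[0., 10., 1., 11.]]], tf.float32)  # [k=3, q=1, 4]
+  scores = tf.constant([[.9, .1], [.8, .2], [.7, .3]])     # [k=3, 2 classes]
+  nmsed_boxlist, num_valid = multiclass_non_max_suppression(
+      boxes, scores, score_thresh=0.5, iou_thresh=0.5, max_size_per_class=10)
+  with tf.Session() as sess:
+    # nmsed_boxlist.get() yields the selected [y_min, x_min, y_max, x_max] rows.
+    return sess.run([nmsed_boxlist.get(), num_valid])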
+
+
+def batch_multiclass_non_max_suppression(boxes,
+ scores,
+ score_thresh,
+ iou_thresh,
+ max_size_per_class,
+ max_total_size=0,
+ clip_window=None,
+ change_coordinate_frame=False,
+ num_valid_boxes=None,
+ masks=None,
+ additional_fields=None,
+ scope=None,
+ use_static_shapes=False,
+ parallel_iterations=32):
+ """Multi-class version of non maximum suppression that operates on a batch.
+
+ This op is similar to `multiclass_non_max_suppression` but operates on a batch
+ of boxes and scores. See documentation for `multiclass_non_max_suppression`
+ for details.
+
+ Args:
+ boxes: A [batch_size, num_anchors, q, 4] float32 tensor containing
+ detections. If `q` is 1 then same boxes are used for all classes
+ otherwise, if `q` is equal to number of classes, class-specific boxes
+ are used.
+ scores: A [batch_size, num_anchors, num_classes] float32 tensor containing
+ the scores for each of the `num_anchors` detections. The scores have to be
+ non-negative when use_static_shapes is set to True.
+ score_thresh: scalar threshold for score (low scoring boxes are removed).
+ iou_thresh: scalar threshold for IOU (new boxes that have high IOU overlap
+ with previously selected boxes are removed).
+ max_size_per_class: maximum number of retained boxes per class.
+ max_total_size: maximum number of boxes retained over all classes. By
+ default returns all boxes retained after capping boxes per class.
+ clip_window: A float32 tensor of shape [batch_size, 4] where each entry is
+ of the form [y_min, x_min, y_max, x_max] representing the window to clip
+ boxes to before performing non-max suppression. This argument can also be
+ a tensor of shape [4] in which case, the same clip window is applied to
+ all images in the batch. If clip_widow is None, all boxes are used to
+ perform non-max suppression.
+ change_coordinate_frame: Whether to normalize coordinates after clipping
+ relative to clip_window (this can only be set to True if a clip_window
+ is provided)
+ num_valid_boxes: (optional) a Tensor of type `int32`. A 1-D tensor of shape
+ [batch_size] representing the number of valid boxes to be considered
+ for each image in the batch. This parameter allows for ignoring zero
+ paddings.
+ masks: (optional) a [batch_size, num_anchors, q, mask_height, mask_width]
+ float32 tensor containing box masks. `q` can be either number of classes
+ or 1 depending on whether a separate mask is predicted per class.
+ additional_fields: (optional) If not None, a dictionary that maps keys to
+ tensors whose dimensions are [batch_size, num_anchors, ...].
+ scope: tf scope name.
+ use_static_shapes: If true, the output nmsed boxes are padded to be of
+ length `max_size_per_class` and it doesn't clip boxes to max_total_size.
+ Defaults to false.
+ parallel_iterations: (optional) number of batch items to process in
+ parallel.
+
+ Returns:
+ 'nmsed_boxes': A [batch_size, max_detections, 4] float32 tensor
+ containing the non-max suppressed boxes.
+ 'nmsed_scores': A [batch_size, max_detections] float32 tensor containing
+ the scores for the boxes.
+ 'nmsed_classes': A [batch_size, max_detections] float32 tensor
+ containing the class for boxes.
+ 'nmsed_masks': (optional) a
+ [batch_size, max_detections, mask_height, mask_width] float32 tensor
+ containing masks for each selected box. This is set to None if input
+ `masks` is None.
+ 'nmsed_additional_fields': (optional) a dictionary of
+ [batch_size, max_detections, ...] float32 tensors corresponding to the
+ tensors specified in the input `additional_fields`. This is not returned
+ if input `additional_fields` is None.
+ 'num_detections': A [batch_size] int32 tensor indicating the number of
+ valid detections per batch item. Only the top num_detections[i] entries in
+ nms_boxes[i], nms_scores[i] and nms_class[i] are valid. The rest of the
+ entries are zero paddings.
+
+ Raises:
+ ValueError: if `q` in boxes.shape is not 1 or not equal to number of
+ classes as inferred from scores.shape.
+ """
+ q = boxes.shape[2].value
+ num_classes = scores.shape[2].value
+ if q != 1 and q != num_classes:
+ raise ValueError('third dimension of boxes must be either 1 or equal '
+ 'to the third dimension of scores')
+ if change_coordinate_frame and clip_window is None:
+ raise ValueError('if change_coordinate_frame is True, then a clip_window '
+ 'must be specified.')
+ original_masks = masks
+ original_additional_fields = additional_fields
+ with tf.name_scope(scope, 'BatchMultiClassNonMaxSuppression'):
+ boxes_shape = boxes.shape
+ batch_size = boxes_shape[0].value
+ num_anchors = boxes_shape[1].value
+
+ if batch_size is None:
+ batch_size = tf.shape(boxes)[0]
+ if num_anchors is None:
+ num_anchors = tf.shape(boxes)[1]
+
+ # If num_valid_boxes isn't provided, create it and mark all boxes as
+ # valid.
+ if num_valid_boxes is None:
+ num_valid_boxes = tf.ones([batch_size], dtype=tf.int32) * num_anchors
+
+ # If masks aren't provided, create dummy masks so that only one copy of
+ # _single_image_nms_fn is needed, and discard the dummy masks after map_fn.
+ if masks is None:
+ masks_shape = tf.stack([batch_size, num_anchors, q, 1, 1])
+ masks = tf.zeros(masks_shape)
+
+ if clip_window is None:
+ clip_window = tf.stack([
+ tf.reduce_min(boxes[:, :, :, 0]),
+ tf.reduce_min(boxes[:, :, :, 1]),
+ tf.reduce_max(boxes[:, :, :, 2]),
+ tf.reduce_max(boxes[:, :, :, 3])
+ ])
+ if clip_window.shape.ndims == 1:
+ clip_window = tf.tile(tf.expand_dims(clip_window, 0), [batch_size, 1])
+
+ if additional_fields is None:
+ additional_fields = {}
+
+ def _single_image_nms_fn(args):
+ """Runs NMS on a single image and returns padded output.
+
+ Args:
+ args: A list of tensors consisting of the following:
+ per_image_boxes - A [num_anchors, q, 4] float32 tensor containing
+ detections. If `q` is 1 then same boxes are used for all classes
+ otherwise, if `q` is equal to number of classes, class-specific
+ boxes are used.
+ per_image_scores - A [num_anchors, num_classes] float32 tensor
+ containing the scores for each of the `num_anchors` detections.
+ per_image_masks - A [num_anchors, q, mask_height, mask_width] float32
+ tensor containing box masks. `q` can be either number of classes
+ or 1 depending on whether a separate mask is predicted per class.
+ per_image_clip_window - A 1D float32 tensor of the form
+ [ymin, xmin, ymax, xmax] representing the window to clip the boxes
+ to.
+ per_image_additional_fields - (optional) A variable number of float32
+ tensors each with size [num_anchors, ...].
+ per_image_num_valid_boxes - A tensor of type `int32`. A 1-D tensor of
+ shape [batch_size] representing the number of valid boxes to be
+ considered for each image in the batch. This parameter allows for
+ ignoring zero paddings.
+
+ Returns:
+ 'nmsed_boxes': A [max_detections, 4] float32 tensor containing the
+ non-max suppressed boxes.
+ 'nmsed_scores': A [max_detections] float32 tensor containing the scores
+ for the boxes.
+ 'nmsed_classes': A [max_detections] float32 tensor containing the class
+ for boxes.
+ 'nmsed_masks': (optional) a [max_detections, mask_height, mask_width]
+ float32 tensor containing masks for each selected box. This is set to
+ None if input `masks` is None.
+ 'nmsed_additional_fields': (optional) A variable number of float32
+ tensors each with size [max_detections, ...] corresponding to the
+ input `per_image_additional_fields`.
+ 'num_detections': A [batch_size] int32 tensor indicating the number of
+ valid detections per batch item. Only the top num_detections[i]
+ entries in nms_boxes[i], nms_scores[i] and nms_class[i] are valid. The
+ rest of the entries are zero paddings.
+ """
+ per_image_boxes = args[0]
+ per_image_scores = args[1]
+ per_image_masks = args[2]
+ per_image_clip_window = args[3]
+ per_image_additional_fields = {
+ key: value
+ for key, value in zip(additional_fields, args[4:-1])
+ }
+ per_image_num_valid_boxes = args[-1]
+ if use_static_shapes:
+ total_proposals = tf.shape(per_image_scores)
+ per_image_scores = tf.where(
+ tf.less(tf.range(total_proposals[0]), per_image_num_valid_boxes),
+ per_image_scores,
+ tf.fill(total_proposals, np.finfo('float32').min))
+ else:
+ per_image_boxes = tf.reshape(
+ tf.slice(per_image_boxes, 3 * [0],
+ tf.stack([per_image_num_valid_boxes, -1, -1])), [-1, q, 4])
+ per_image_scores = tf.reshape(
+ tf.slice(per_image_scores, [0, 0],
+ tf.stack([per_image_num_valid_boxes, -1])),
+ [-1, num_classes])
+ per_image_masks = tf.reshape(
+ tf.slice(per_image_masks, 4 * [0],
+ tf.stack([per_image_num_valid_boxes, -1, -1, -1])),
+ [-1, q, per_image_masks.shape[2].value,
+ per_image_masks.shape[3].value])
+ if per_image_additional_fields is not None:
+ for key, tensor in per_image_additional_fields.items():
+ additional_field_shape = tensor.get_shape()
+ additional_field_dim = len(additional_field_shape)
+ per_image_additional_fields[key] = tf.reshape(
+ tf.slice(per_image_additional_fields[key],
+ additional_field_dim * [0],
+ tf.stack([per_image_num_valid_boxes] +
+ (additional_field_dim - 1) * [-1])),
+ [-1] + [dim.value for dim in additional_field_shape[1:]])
+
+ nmsed_boxlist, num_valid_nms_boxes = multiclass_non_max_suppression(
+ per_image_boxes,
+ per_image_scores,
+ score_thresh,
+ iou_thresh,
+ max_size_per_class,
+ max_total_size,
+ clip_window=per_image_clip_window,
+ change_coordinate_frame=change_coordinate_frame,
+ masks=per_image_masks,
+ pad_to_max_output_size=use_static_shapes,
+ additional_fields=per_image_additional_fields)
+
+ if not use_static_shapes:
+ nmsed_boxlist = box_list_ops.pad_or_clip_box_list(
+ nmsed_boxlist, max_total_size)
+ num_detections = num_valid_nms_boxes
+ nmsed_boxes = nmsed_boxlist.get()
+ nmsed_scores = nmsed_boxlist.get_field(fields.BoxListFields.scores)
+ nmsed_classes = nmsed_boxlist.get_field(fields.BoxListFields.classes)
+ nmsed_masks = nmsed_boxlist.get_field(fields.BoxListFields.masks)
+ nmsed_additional_fields = [
+ nmsed_boxlist.get_field(key) for key in per_image_additional_fields
+ ]
+ return ([nmsed_boxes, nmsed_scores, nmsed_classes, nmsed_masks] +
+ nmsed_additional_fields + [num_detections])
+
+ num_additional_fields = 0
+ if additional_fields is not None:
+ num_additional_fields = len(additional_fields)
+ num_nmsed_outputs = 4 + num_additional_fields
+
+ batch_outputs = shape_utils.static_or_dynamic_map_fn(
+ _single_image_nms_fn,
+ elems=([boxes, scores, masks, clip_window] +
+ list(additional_fields.values()) + [num_valid_boxes]),
+ dtype=(num_nmsed_outputs * [tf.float32] + [tf.int32]),
+ parallel_iterations=parallel_iterations)
+
+ batch_nmsed_boxes = batch_outputs[0]
+ batch_nmsed_scores = batch_outputs[1]
+ batch_nmsed_classes = batch_outputs[2]
+ batch_nmsed_masks = batch_outputs[3]
+ batch_nmsed_additional_fields = {
+ key: value
+ for key, value in zip(additional_fields, batch_outputs[4:-1])
+ }
+ batch_num_detections = batch_outputs[-1]
+
+ if original_masks is None:
+ batch_nmsed_masks = None
+
+ if original_additional_fields is None:
+ batch_nmsed_additional_fields = None
+
+ return (batch_nmsed_boxes, batch_nmsed_scores, batch_nmsed_classes,
+ batch_nmsed_masks, batch_nmsed_additional_fields,
+ batch_num_detections)
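+
+
+# ---------------------------------------------------------------------------
+# Illustrative sketch (not part of the original API): a minimal, hypothetical
+# call to batch_multiclass_non_max_suppression for a batch of two images with
+# shared boxes (q = 1). All values are made up for the example.
+# ---------------------------------------------------------------------------
+def _example_batch_multiclass_nms():
+  """Runs batched NMS on toy data and returns boxes, scores and counts."""
+  boxes = tf.constant([[[[0., 0., 1., 1.]], [[0., 10., 1., 11.]]],
+                       [[[0., 0., 2., 2.]], [[0., 0.1, 2., 2.1]]]],
+                      tf.float32)                       # [batch=2, anchors=2, q=1, 4]
+  scores = tf.constant([[[.9, .1], [.8, .2]],
+                        [[.7, .3], [.6, .4]]])           # [batch=2, anchors=2, 2 classes]
+  (nmsed_boxes, nmsed_scores, _, _, _,
+   num_detections) = batch_multiclass_non_max_suppression(
+       boxes, scores, score_thresh=0.5, iou_thresh=0.5,
+       max_size_per_class=2, max_total_size=2)
+  with tf.Session() as sess:
+    # Only the first num_detections[i] rows of each batch item are valid;
+    # the remainder are zero paddings, as described in the docstring above.
+    return sess.run([nmsed_boxes, nmsed_scores, num_detections])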
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/post_processing_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/post_processing_test.py"
new file mode 100644
index 00000000..c69aacd9
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/post_processing_test.py"
@@ -0,0 +1,1082 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for tensorflow_models.object_detection.core.post_processing."""
+import numpy as np
+import tensorflow as tf
+from object_detection.core import post_processing
+from object_detection.core import standard_fields as fields
+from object_detection.utils import test_case
+
+
+class MulticlassNonMaxSuppressionTest(test_case.TestCase):
+
+ def test_multiclass_nms_select_with_shared_boxes(self):
+ boxes = tf.constant([[[0, 0, 1, 1]],
+ [[0, 0.1, 1, 1.1]],
+ [[0, -0.1, 1, 0.9]],
+ [[0, 10, 1, 11]],
+ [[0, 10.1, 1, 11.1]],
+ [[0, 100, 1, 101]],
+ [[0, 1000, 1, 1002]],
+ [[0, 1000, 1, 1002.1]]], tf.float32)
+ scores = tf.constant([[.9, 0.01], [.75, 0.05],
+ [.6, 0.01], [.95, 0],
+ [.5, 0.01], [.3, 0.01],
+ [.01, .85], [.01, .5]])
+ score_thresh = 0.1
+ iou_thresh = .5
+ max_output_size = 4
+
+ exp_nms_corners = [[0, 10, 1, 11],
+ [0, 0, 1, 1],
+ [0, 1000, 1, 1002],
+ [0, 100, 1, 101]]
+ exp_nms_scores = [.95, .9, .85, .3]
+ exp_nms_classes = [0, 0, 1, 0]
+
+ nms, _ = post_processing.multiclass_non_max_suppression(
+ boxes, scores, score_thresh, iou_thresh, max_output_size)
+ with self.test_session() as sess:
+ nms_corners_output, nms_scores_output, nms_classes_output = sess.run(
+ [nms.get(), nms.get_field(fields.BoxListFields.scores),
+ nms.get_field(fields.BoxListFields.classes)])
+ self.assertAllClose(nms_corners_output, exp_nms_corners)
+ self.assertAllClose(nms_scores_output, exp_nms_scores)
+ self.assertAllClose(nms_classes_output, exp_nms_classes)
+
+ # TODO(bhattad): Remove conditional after CMLE moves to TF 1.9
+
+ def test_multiclass_nms_select_with_shared_boxes_given_keypoints(self):
+ boxes = tf.constant([[[0, 0, 1, 1]],
+ [[0, 0.1, 1, 1.1]],
+ [[0, -0.1, 1, 0.9]],
+ [[0, 10, 1, 11]],
+ [[0, 10.1, 1, 11.1]],
+ [[0, 100, 1, 101]],
+ [[0, 1000, 1, 1002]],
+ [[0, 1000, 1, 1002.1]]], tf.float32)
+ scores = tf.constant([[.9, 0.01], [.75, 0.05],
+ [.6, 0.01], [.95, 0],
+ [.5, 0.01], [.3, 0.01],
+ [.01, .85], [.01, .5]])
+ num_keypoints = 6
+ keypoints = tf.tile(
+ tf.reshape(tf.range(8), [8, 1, 1]),
+ [1, num_keypoints, 2])
+ score_thresh = 0.1
+ iou_thresh = .5
+ max_output_size = 4
+
+ exp_nms_corners = [[0, 10, 1, 11],
+ [0, 0, 1, 1],
+ [0, 1000, 1, 1002],
+ [0, 100, 1, 101]]
+ exp_nms_scores = [.95, .9, .85, .3]
+ exp_nms_classes = [0, 0, 1, 0]
+ exp_nms_keypoints_tensor = tf.tile(
+ tf.reshape(tf.constant([3, 0, 6, 5], dtype=tf.float32), [4, 1, 1]),
+ [1, num_keypoints, 2])
+
+ nms, _ = post_processing.multiclass_non_max_suppression(
+ boxes,
+ scores,
+ score_thresh,
+ iou_thresh,
+ max_output_size,
+ additional_fields={fields.BoxListFields.keypoints: keypoints})
+
+ with self.test_session() as sess:
+ (nms_corners_output,
+ nms_scores_output,
+ nms_classes_output,
+ nms_keypoints,
+ exp_nms_keypoints) = sess.run([
+ nms.get(),
+ nms.get_field(fields.BoxListFields.scores),
+ nms.get_field(fields.BoxListFields.classes),
+ nms.get_field(fields.BoxListFields.keypoints),
+ exp_nms_keypoints_tensor
+ ])
+ self.assertAllClose(nms_corners_output, exp_nms_corners)
+ self.assertAllClose(nms_scores_output, exp_nms_scores)
+ self.assertAllClose(nms_classes_output, exp_nms_classes)
+ self.assertAllEqual(nms_keypoints, exp_nms_keypoints)
+
+ def test_multiclass_nms_with_shared_boxes_given_keypoint_heatmaps(self):
+ boxes = tf.constant([[[0, 0, 1, 1]],
+ [[0, 0.1, 1, 1.1]],
+ [[0, -0.1, 1, 0.9]],
+ [[0, 10, 1, 11]],
+ [[0, 10.1, 1, 11.1]],
+ [[0, 100, 1, 101]],
+ [[0, 1000, 1, 1002]],
+ [[0, 1000, 1, 1002.1]]], tf.float32)
+
+ scores = tf.constant([[.9, 0.01], [.75, 0.05],
+ [.6, 0.01], [.95, 0],
+ [.5, 0.01], [.3, 0.01],
+ [.01, .85], [.01, .5]])
+
+ num_boxes = tf.shape(boxes)[0]
+ heatmap_height = 5
+ heatmap_width = 5
+ num_keypoints = 17
+ keypoint_heatmaps = tf.ones(
+ [num_boxes, heatmap_height, heatmap_width, num_keypoints],
+ dtype=tf.float32)
+
+ score_thresh = 0.1
+ iou_thresh = .5
+ max_output_size = 4
+ exp_nms_corners = [[0, 10, 1, 11],
+ [0, 0, 1, 1],
+ [0, 1000, 1, 1002],
+ [0, 100, 1, 101]]
+
+ exp_nms_scores = [.95, .9, .85, .3]
+ exp_nms_classes = [0, 0, 1, 0]
+ exp_nms_keypoint_heatmaps = np.ones(
+ (4, heatmap_height, heatmap_width, num_keypoints), dtype=np.float32)
+
+ nms, _ = post_processing.multiclass_non_max_suppression(
+ boxes,
+ scores,
+ score_thresh,
+ iou_thresh,
+ max_output_size,
+ additional_fields={
+ fields.BoxListFields.keypoint_heatmaps: keypoint_heatmaps
+ })
+
+ with self.test_session() as sess:
+ (nms_corners_output,
+ nms_scores_output,
+ nms_classes_output,
+ nms_keypoint_heatmaps) = sess.run(
+ [nms.get(),
+ nms.get_field(fields.BoxListFields.scores),
+ nms.get_field(fields.BoxListFields.classes),
+ nms.get_field(fields.BoxListFields.keypoint_heatmaps)])
+
+ self.assertAllClose(nms_corners_output, exp_nms_corners)
+ self.assertAllClose(nms_scores_output, exp_nms_scores)
+ self.assertAllClose(nms_classes_output, exp_nms_classes)
+ self.assertAllEqual(nms_keypoint_heatmaps, exp_nms_keypoint_heatmaps)
+
+ def test_multiclass_nms_with_additional_fields(self):
+ boxes = tf.constant([[[0, 0, 1, 1]],
+ [[0, 0.1, 1, 1.1]],
+ [[0, -0.1, 1, 0.9]],
+ [[0, 10, 1, 11]],
+ [[0, 10.1, 1, 11.1]],
+ [[0, 100, 1, 101]],
+ [[0, 1000, 1, 1002]],
+ [[0, 1000, 1, 1002.1]]], tf.float32)
+
+ scores = tf.constant([[.9, 0.01], [.75, 0.05],
+ [.6, 0.01], [.95, 0],
+ [.5, 0.01], [.3, 0.01],
+ [.01, .85], [.01, .5]])
+
+ coarse_boxes_key = 'coarse_boxes'
+ coarse_boxes = tf.constant([[0.1, 0.1, 1.1, 1.1],
+ [0.1, 0.2, 1.1, 1.2],
+ [0.1, -0.2, 1.1, 1.0],
+ [0.1, 10.1, 1.1, 11.1],
+ [0.1, 10.2, 1.1, 11.2],
+ [0.1, 100.1, 1.1, 101.1],
+ [0.1, 1000.1, 1.1, 1002.1],
+ [0.1, 1000.1, 1.1, 1002.2]], tf.float32)
+
+ score_thresh = 0.1
+ iou_thresh = .5
+ max_output_size = 4
+
+ exp_nms_corners = np.array([[0, 10, 1, 11],
+ [0, 0, 1, 1],
+ [0, 1000, 1, 1002],
+ [0, 100, 1, 101]], dtype=np.float32)
+
+ exp_nms_coarse_corners = np.array([[0.1, 10.1, 1.1, 11.1],
+ [0.1, 0.1, 1.1, 1.1],
+ [0.1, 1000.1, 1.1, 1002.1],
+ [0.1, 100.1, 1.1, 101.1]],
+ dtype=np.float32)
+
+ exp_nms_scores = [.95, .9, .85, .3]
+ exp_nms_classes = [0, 0, 1, 0]
+
+ nms, _ = post_processing.multiclass_non_max_suppression(
+ boxes,
+ scores,
+ score_thresh,
+ iou_thresh,
+ max_output_size,
+ additional_fields={coarse_boxes_key: coarse_boxes})
+
+ with self.test_session() as sess:
+ (nms_corners_output,
+ nms_scores_output,
+ nms_classes_output,
+ nms_coarse_corners) = sess.run(
+ [nms.get(),
+ nms.get_field(fields.BoxListFields.scores),
+ nms.get_field(fields.BoxListFields.classes),
+ nms.get_field(coarse_boxes_key)])
+
+ self.assertAllClose(nms_corners_output, exp_nms_corners)
+ self.assertAllClose(nms_scores_output, exp_nms_scores)
+ self.assertAllClose(nms_classes_output, exp_nms_classes)
+ self.assertAllEqual(nms_coarse_corners, exp_nms_coarse_corners)
+
+ def test_multiclass_nms_select_with_shared_boxes_given_masks(self):
+ boxes = tf.constant([[[0, 0, 1, 1]],
+ [[0, 0.1, 1, 1.1]],
+ [[0, -0.1, 1, 0.9]],
+ [[0, 10, 1, 11]],
+ [[0, 10.1, 1, 11.1]],
+ [[0, 100, 1, 101]],
+ [[0, 1000, 1, 1002]],
+ [[0, 1000, 1, 1002.1]]], tf.float32)
+ scores = tf.constant([[.9, 0.01], [.75, 0.05],
+ [.6, 0.01], [.95, 0],
+ [.5, 0.01], [.3, 0.01],
+ [.01, .85], [.01, .5]])
+ num_classes = 2
+ mask_height = 3
+ mask_width = 3
+ masks = tf.tile(
+ tf.reshape(tf.range(8), [8, 1, 1, 1]),
+ [1, num_classes, mask_height, mask_width])
+ score_thresh = 0.1
+ iou_thresh = .5
+ max_output_size = 4
+
+ exp_nms_corners = [[0, 10, 1, 11],
+ [0, 0, 1, 1],
+ [0, 1000, 1, 1002],
+ [0, 100, 1, 101]]
+ exp_nms_scores = [.95, .9, .85, .3]
+ exp_nms_classes = [0, 0, 1, 0]
+ exp_nms_masks_tensor = tf.tile(
+ tf.reshape(tf.constant([3, 0, 6, 5], dtype=tf.float32), [4, 1, 1]),
+ [1, mask_height, mask_width])
+
+ nms, _ = post_processing.multiclass_non_max_suppression(
+ boxes, scores, score_thresh, iou_thresh, max_output_size, masks=masks)
+ with self.test_session() as sess:
+ (nms_corners_output,
+ nms_scores_output,
+ nms_classes_output,
+ nms_masks,
+ exp_nms_masks) = sess.run([nms.get(),
+ nms.get_field(fields.BoxListFields.scores),
+ nms.get_field(fields.BoxListFields.classes),
+ nms.get_field(fields.BoxListFields.masks),
+ exp_nms_masks_tensor])
+ self.assertAllClose(nms_corners_output, exp_nms_corners)
+ self.assertAllClose(nms_scores_output, exp_nms_scores)
+ self.assertAllClose(nms_classes_output, exp_nms_classes)
+ self.assertAllEqual(nms_masks, exp_nms_masks)
+
+ def test_multiclass_nms_select_with_clip_window(self):
+ boxes = tf.constant([[[0, 0, 10, 10]],
+ [[1, 1, 11, 11]]], tf.float32)
+ scores = tf.constant([[.9], [.75]])
+ clip_window = tf.constant([5, 4, 8, 7], tf.float32)
+ score_thresh = 0.0
+ iou_thresh = 0.5
+ max_output_size = 100
+
+ exp_nms_corners = [[5, 4, 8, 7]]
+ exp_nms_scores = [.9]
+ exp_nms_classes = [0]
+
+ nms, _ = post_processing.multiclass_non_max_suppression(
+ boxes,
+ scores,
+ score_thresh,
+ iou_thresh,
+ max_output_size,
+ clip_window=clip_window)
+ with self.test_session() as sess:
+ nms_corners_output, nms_scores_output, nms_classes_output = sess.run(
+ [nms.get(), nms.get_field(fields.BoxListFields.scores),
+ nms.get_field(fields.BoxListFields.classes)])
+ self.assertAllClose(nms_corners_output, exp_nms_corners)
+ self.assertAllClose(nms_scores_output, exp_nms_scores)
+ self.assertAllClose(nms_classes_output, exp_nms_classes)
+
+ def test_multiclass_nms_select_with_clip_window_change_coordinate_frame(self):
+ boxes = tf.constant([[[0, 0, 10, 10]],
+ [[1, 1, 11, 11]]], tf.float32)
+ scores = tf.constant([[.9], [.75]])
+ clip_window = tf.constant([5, 4, 8, 7], tf.float32)
+ score_thresh = 0.0
+ iou_thresh = 0.5
+ max_output_size = 100
+
+ exp_nms_corners = [[0, 0, 1, 1]]
+ exp_nms_scores = [.9]
+ exp_nms_classes = [0]
+
+ nms, _ = post_processing.multiclass_non_max_suppression(
+ boxes,
+ scores,
+ score_thresh,
+ iou_thresh,
+ max_output_size,
+ clip_window=clip_window,
+ change_coordinate_frame=True)
+ with self.test_session() as sess:
+ nms_corners_output, nms_scores_output, nms_classes_output = sess.run(
+ [nms.get(), nms.get_field(fields.BoxListFields.scores),
+ nms.get_field(fields.BoxListFields.classes)])
+ self.assertAllClose(nms_corners_output, exp_nms_corners)
+ self.assertAllClose(nms_scores_output, exp_nms_scores)
+ self.assertAllClose(nms_classes_output, exp_nms_classes)
+
+ def test_multiclass_nms_select_with_per_class_cap(self):
+ boxes = tf.constant([[[0, 0, 1, 1]],
+ [[0, 0.1, 1, 1.1]],
+ [[0, -0.1, 1, 0.9]],
+ [[0, 10, 1, 11]],
+ [[0, 10.1, 1, 11.1]],
+ [[0, 100, 1, 101]],
+ [[0, 1000, 1, 1002]],
+ [[0, 1000, 1, 1002.1]]], tf.float32)
+ scores = tf.constant([[.9, 0.01], [.75, 0.05],
+ [.6, 0.01], [.95, 0],
+ [.5, 0.01], [.3, 0.01],
+ [.01, .85], [.01, .5]])
+ score_thresh = 0.1
+ iou_thresh = .5
+ max_size_per_class = 2
+
+ exp_nms_corners = [[0, 10, 1, 11],
+ [0, 0, 1, 1],
+ [0, 1000, 1, 1002]]
+ exp_nms_scores = [.95, .9, .85]
+ exp_nms_classes = [0, 0, 1]
+
+ nms, _ = post_processing.multiclass_non_max_suppression(
+ boxes, scores, score_thresh, iou_thresh, max_size_per_class)
+ with self.test_session() as sess:
+ nms_corners_output, nms_scores_output, nms_classes_output = sess.run(
+ [nms.get(), nms.get_field(fields.BoxListFields.scores),
+ nms.get_field(fields.BoxListFields.classes)])
+ self.assertAllClose(nms_corners_output, exp_nms_corners)
+ self.assertAllClose(nms_scores_output, exp_nms_scores)
+ self.assertAllClose(nms_classes_output, exp_nms_classes)
+
+ def test_multiclass_nms_select_with_total_cap(self):
+ boxes = tf.constant([[[0, 0, 1, 1]],
+ [[0, 0.1, 1, 1.1]],
+ [[0, -0.1, 1, 0.9]],
+ [[0, 10, 1, 11]],
+ [[0, 10.1, 1, 11.1]],
+ [[0, 100, 1, 101]],
+ [[0, 1000, 1, 1002]],
+ [[0, 1000, 1, 1002.1]]], tf.float32)
+ scores = tf.constant([[.9, 0.01], [.75, 0.05],
+ [.6, 0.01], [.95, 0],
+ [.5, 0.01], [.3, 0.01],
+ [.01, .85], [.01, .5]])
+ score_thresh = 0.1
+ iou_thresh = .5
+ max_size_per_class = 4
+ max_total_size = 2
+
+ exp_nms_corners = [[0, 10, 1, 11],
+ [0, 0, 1, 1]]
+ exp_nms_scores = [.95, .9]
+ exp_nms_classes = [0, 0]
+
+ nms, _ = post_processing.multiclass_non_max_suppression(
+ boxes, scores, score_thresh, iou_thresh, max_size_per_class,
+ max_total_size)
+ with self.test_session() as sess:
+ nms_corners_output, nms_scores_output, nms_classes_output = sess.run(
+ [nms.get(), nms.get_field(fields.BoxListFields.scores),
+ nms.get_field(fields.BoxListFields.classes)])
+ self.assertAllClose(nms_corners_output, exp_nms_corners)
+ self.assertAllClose(nms_scores_output, exp_nms_scores)
+ self.assertAllClose(nms_classes_output, exp_nms_classes)
+
+ def test_multiclass_nms_threshold_then_select_with_shared_boxes(self):
+ boxes = tf.constant([[[0, 0, 1, 1]],
+ [[0, 0.1, 1, 1.1]],
+ [[0, -0.1, 1, 0.9]],
+ [[0, 10, 1, 11]],
+ [[0, 10.1, 1, 11.1]],
+ [[0, 100, 1, 101]],
+ [[0, 1000, 1, 1002]],
+ [[0, 1000, 1, 1002.1]]], tf.float32)
+ scores = tf.constant([[.9], [.75], [.6], [.95], [.5], [.3], [.01], [.01]])
+ score_thresh = 0.1
+ iou_thresh = .5
+ max_output_size = 3
+
+ exp_nms = [[0, 10, 1, 11],
+ [0, 0, 1, 1],
+ [0, 100, 1, 101]]
+ nms, _ = post_processing.multiclass_non_max_suppression(
+ boxes, scores, score_thresh, iou_thresh, max_output_size)
+ with self.test_session() as sess:
+ nms_output = sess.run(nms.get())
+ self.assertAllClose(nms_output, exp_nms)
+
+ def test_multiclass_nms_select_with_separate_boxes(self):
+ boxes = tf.constant([[[0, 0, 1, 1], [0, 0, 4, 5]],
+ [[0, 0.1, 1, 1.1], [0, 0.1, 2, 1.1]],
+ [[0, -0.1, 1, 0.9], [0, -0.1, 1, 0.9]],
+ [[0, 10, 1, 11], [0, 10, 1, 11]],
+ [[0, 10.1, 1, 11.1], [0, 10.1, 1, 11.1]],
+ [[0, 100, 1, 101], [0, 100, 1, 101]],
+ [[0, 1000, 1, 1002], [0, 999, 2, 1004]],
+ [[0, 1000, 1, 1002.1], [0, 999, 2, 1002.7]]],
+ tf.float32)
+ scores = tf.constant([[.9, 0.01], [.75, 0.05],
+ [.6, 0.01], [.95, 0],
+ [.5, 0.01], [.3, 0.01],
+ [.01, .85], [.01, .5]])
+ score_thresh = 0.1
+ iou_thresh = .5
+ max_output_size = 4
+
+ exp_nms_corners = [[0, 10, 1, 11],
+ [0, 0, 1, 1],
+ [0, 999, 2, 1004],
+ [0, 100, 1, 101]]
+ exp_nms_scores = [.95, .9, .85, .3]
+ exp_nms_classes = [0, 0, 1, 0]
+
+ nms, _ = post_processing.multiclass_non_max_suppression(
+ boxes, scores, score_thresh, iou_thresh, max_output_size)
+ with self.test_session() as sess:
+ nms_corners_output, nms_scores_output, nms_classes_output = sess.run(
+ [nms.get(), nms.get_field(fields.BoxListFields.scores),
+ nms.get_field(fields.BoxListFields.classes)])
+ self.assertAllClose(nms_corners_output, exp_nms_corners)
+ self.assertAllClose(nms_scores_output, exp_nms_scores)
+ self.assertAllClose(nms_classes_output, exp_nms_classes)
+
+ def test_batch_multiclass_nms_with_batch_size_1(self):
+ boxes = tf.constant([[[[0, 0, 1, 1], [0, 0, 4, 5]],
+ [[0, 0.1, 1, 1.1], [0, 0.1, 2, 1.1]],
+ [[0, -0.1, 1, 0.9], [0, -0.1, 1, 0.9]],
+ [[0, 10, 1, 11], [0, 10, 1, 11]],
+ [[0, 10.1, 1, 11.1], [0, 10.1, 1, 11.1]],
+ [[0, 100, 1, 101], [0, 100, 1, 101]],
+ [[0, 1000, 1, 1002], [0, 999, 2, 1004]],
+ [[0, 1000, 1, 1002.1], [0, 999, 2, 1002.7]]]],
+ tf.float32)
+ scores = tf.constant([[[.9, 0.01], [.75, 0.05],
+ [.6, 0.01], [.95, 0],
+ [.5, 0.01], [.3, 0.01],
+ [.01, .85], [.01, .5]]])
+ score_thresh = 0.1
+ iou_thresh = .5
+ max_output_size = 4
+
+ exp_nms_corners = [[[0, 10, 1, 11],
+ [0, 0, 1, 1],
+ [0, 999, 2, 1004],
+ [0, 100, 1, 101]]]
+ exp_nms_scores = [[.95, .9, .85, .3]]
+ exp_nms_classes = [[0, 0, 1, 0]]
+
+ (nmsed_boxes, nmsed_scores, nmsed_classes, nmsed_masks,
+ nmsed_additional_fields, num_detections
+ ) = post_processing.batch_multiclass_non_max_suppression(
+ boxes, scores, score_thresh, iou_thresh,
+ max_size_per_class=max_output_size, max_total_size=max_output_size)
+
+ self.assertIsNone(nmsed_masks)
+ self.assertIsNone(nmsed_additional_fields)
+
+ with self.test_session() as sess:
+ (nmsed_boxes, nmsed_scores, nmsed_classes,
+ num_detections) = sess.run([nmsed_boxes, nmsed_scores, nmsed_classes,
+ num_detections])
+ self.assertAllClose(nmsed_boxes, exp_nms_corners)
+ self.assertAllClose(nmsed_scores, exp_nms_scores)
+ self.assertAllClose(nmsed_classes, exp_nms_classes)
+ self.assertEqual(num_detections, [4])
+
+ def test_batch_multiclass_nms_with_batch_size_2(self):
+ boxes = tf.constant([[[[0, 0, 1, 1], [0, 0, 4, 5]],
+ [[0, 0.1, 1, 1.1], [0, 0.1, 2, 1.1]],
+ [[0, -0.1, 1, 0.9], [0, -0.1, 1, 0.9]],
+ [[0, 10, 1, 11], [0, 10, 1, 11]]],
+ [[[0, 10.1, 1, 11.1], [0, 10.1, 1, 11.1]],
+ [[0, 100, 1, 101], [0, 100, 1, 101]],
+ [[0, 1000, 1, 1002], [0, 999, 2, 1004]],
+ [[0, 1000, 1, 1002.1], [0, 999, 2, 1002.7]]]],
+ tf.float32)
+ scores = tf.constant([[[.9, 0.01], [.75, 0.05],
+ [.6, 0.01], [.95, 0]],
+ [[.5, 0.01], [.3, 0.01],
+ [.01, .85], [.01, .5]]])
+ score_thresh = 0.1
+ iou_thresh = .5
+ max_output_size = 4
+
+ exp_nms_corners = np.array([[[0, 10, 1, 11],
+ [0, 0, 1, 1],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0]],
+ [[0, 999, 2, 1004],
+ [0, 10.1, 1, 11.1],
+ [0, 100, 1, 101],
+ [0, 0, 0, 0]]])
+ exp_nms_scores = np.array([[.95, .9, 0, 0],
+ [.85, .5, .3, 0]])
+ exp_nms_classes = np.array([[0, 0, 0, 0],
+ [1, 0, 0, 0]])
+
+ (nmsed_boxes, nmsed_scores, nmsed_classes, nmsed_masks,
+ nmsed_additional_fields, num_detections
+ ) = post_processing.batch_multiclass_non_max_suppression(
+ boxes, scores, score_thresh, iou_thresh,
+ max_size_per_class=max_output_size, max_total_size=max_output_size)
+
+ self.assertIsNone(nmsed_masks)
+ self.assertIsNone(nmsed_additional_fields)
+ # Check static shapes
+ self.assertAllEqual(nmsed_boxes.shape.as_list(),
+ exp_nms_corners.shape)
+ self.assertAllEqual(nmsed_scores.shape.as_list(),
+ exp_nms_scores.shape)
+ self.assertAllEqual(nmsed_classes.shape.as_list(),
+ exp_nms_classes.shape)
+ self.assertEqual(num_detections.shape.as_list(), [2])
+
+ with self.test_session() as sess:
+ (nmsed_boxes, nmsed_scores, nmsed_classes,
+ num_detections) = sess.run([nmsed_boxes, nmsed_scores, nmsed_classes,
+ num_detections])
+ self.assertAllClose(nmsed_boxes, exp_nms_corners)
+ self.assertAllClose(nmsed_scores, exp_nms_scores)
+ self.assertAllClose(nmsed_classes, exp_nms_classes)
+ self.assertAllClose(num_detections, [2, 3])
+
+ def test_batch_multiclass_nms_with_per_batch_clip_window(self):
+ boxes = tf.constant([[[[0, 0, 1, 1], [0, 0, 4, 5]],
+ [[0, 0.1, 1, 1.1], [0, 0.1, 2, 1.1]],
+ [[0, -0.1, 1, 0.9], [0, -0.1, 1, 0.9]],
+ [[0, 10, 1, 11], [0, 10, 1, 11]]],
+ [[[0, 10.1, 1, 11.1], [0, 10.1, 1, 11.1]],
+ [[0, 100, 1, 101], [0, 100, 1, 101]],
+ [[0, 1000, 1, 1002], [0, 999, 2, 1004]],
+ [[0, 1000, 1, 1002.1], [0, 999, 2, 1002.7]]]],
+ tf.float32)
+ scores = tf.constant([[[.9, 0.01], [.75, 0.05],
+ [.6, 0.01], [.95, 0]],
+ [[.5, 0.01], [.3, 0.01],
+ [.01, .85], [.01, .5]]])
+ clip_window = tf.constant([0., 0., 200., 200.])
+ score_thresh = 0.1
+ iou_thresh = .5
+ max_output_size = 4
+
+ exp_nms_corners = np.array([[[0, 10, 1, 11],
+ [0, 0, 1, 1],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0]],
+ [[0, 10.1, 1, 11.1],
+ [0, 100, 1, 101],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0]]])
+ exp_nms_scores = np.array([[.95, .9, 0, 0],
+ [.5, .3, 0, 0]])
+ exp_nms_classes = np.array([[0, 0, 0, 0],
+ [0, 0, 0, 0]])
+
+ (nmsed_boxes, nmsed_scores, nmsed_classes, nmsed_masks,
+ nmsed_additional_fields, num_detections
+ ) = post_processing.batch_multiclass_non_max_suppression(
+ boxes, scores, score_thresh, iou_thresh,
+ max_size_per_class=max_output_size, max_total_size=max_output_size,
+ clip_window=clip_window)
+
+ self.assertIsNone(nmsed_masks)
+ self.assertIsNone(nmsed_additional_fields)
+ # Check static shapes
+ self.assertAllEqual(nmsed_boxes.shape.as_list(),
+ exp_nms_corners.shape)
+ self.assertAllEqual(nmsed_scores.shape.as_list(),
+ exp_nms_scores.shape)
+ self.assertAllEqual(nmsed_classes.shape.as_list(),
+ exp_nms_classes.shape)
+ self.assertEqual(num_detections.shape.as_list(), [2])
+
+ with self.test_session() as sess:
+ (nmsed_boxes, nmsed_scores, nmsed_classes,
+ num_detections) = sess.run([nmsed_boxes, nmsed_scores, nmsed_classes,
+ num_detections])
+ self.assertAllClose(nmsed_boxes, exp_nms_corners)
+ self.assertAllClose(nmsed_scores, exp_nms_scores)
+ self.assertAllClose(nmsed_classes, exp_nms_classes)
+ self.assertAllClose(num_detections, [2, 2])
+
+ def test_batch_multiclass_nms_with_per_image_clip_window(self):
+ boxes = tf.constant([[[[0, 0, 1, 1], [0, 0, 4, 5]],
+ [[0, 0.1, 1, 1.1], [0, 0.1, 2, 1.1]],
+ [[0, -0.1, 1, 0.9], [0, -0.1, 1, 0.9]],
+ [[0, 10, 1, 11], [0, 10, 1, 11]]],
+ [[[0, 10.1, 1, 11.1], [0, 10.1, 1, 11.1]],
+ [[0, 100, 1, 101], [0, 100, 1, 101]],
+ [[0, 1000, 1, 1002], [0, 999, 2, 1004]],
+ [[0, 1000, 1, 1002.1], [0, 999, 2, 1002.7]]]],
+ tf.float32)
+ scores = tf.constant([[[.9, 0.01], [.75, 0.05],
+ [.6, 0.01], [.95, 0]],
+ [[.5, 0.01], [.3, 0.01],
+ [.01, .85], [.01, .5]]])
+ clip_window = tf.constant([[0., 0., 5., 5.],
+ [0., 0., 200., 200.]])
+ score_thresh = 0.1
+ iou_thresh = .5
+ max_output_size = 4
+
+ exp_nms_corners = np.array([[[0, 0, 1, 1],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0]],
+ [[0, 10.1, 1, 11.1],
+ [0, 100, 1, 101],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0]]])
+ exp_nms_scores = np.array([[.9, 0., 0., 0.],
+ [.5, .3, 0, 0]])
+ exp_nms_classes = np.array([[0, 0, 0, 0],
+ [0, 0, 0, 0]])
+
+ (nmsed_boxes, nmsed_scores, nmsed_classes, nmsed_masks,
+ nmsed_additional_fields, num_detections
+ ) = post_processing.batch_multiclass_non_max_suppression(
+ boxes, scores, score_thresh, iou_thresh,
+ max_size_per_class=max_output_size, max_total_size=max_output_size,
+ clip_window=clip_window)
+
+ self.assertIsNone(nmsed_masks)
+ self.assertIsNone(nmsed_additional_fields)
+ # Check static shapes
+ self.assertAllEqual(nmsed_boxes.shape.as_list(),
+ exp_nms_corners.shape)
+ self.assertAllEqual(nmsed_scores.shape.as_list(),
+ exp_nms_scores.shape)
+ self.assertAllEqual(nmsed_classes.shape.as_list(),
+ exp_nms_classes.shape)
+ self.assertEqual(num_detections.shape.as_list(), [2])
+
+ with self.test_session() as sess:
+ (nmsed_boxes, nmsed_scores, nmsed_classes,
+ num_detections) = sess.run([nmsed_boxes, nmsed_scores, nmsed_classes,
+ num_detections])
+ self.assertAllClose(nmsed_boxes, exp_nms_corners)
+ self.assertAllClose(nmsed_scores, exp_nms_scores)
+ self.assertAllClose(nmsed_classes, exp_nms_classes)
+ self.assertAllClose(num_detections, [1, 2])
+
+ def test_batch_multiclass_nms_with_masks(self):
+ boxes = tf.constant([[[[0, 0, 1, 1], [0, 0, 4, 5]],
+ [[0, 0.1, 1, 1.1], [0, 0.1, 2, 1.1]],
+ [[0, -0.1, 1, 0.9], [0, -0.1, 1, 0.9]],
+ [[0, 10, 1, 11], [0, 10, 1, 11]]],
+ [[[0, 10.1, 1, 11.1], [0, 10.1, 1, 11.1]],
+ [[0, 100, 1, 101], [0, 100, 1, 101]],
+ [[0, 1000, 1, 1002], [0, 999, 2, 1004]],
+ [[0, 1000, 1, 1002.1], [0, 999, 2, 1002.7]]]],
+ tf.float32)
+ scores = tf.constant([[[.9, 0.01], [.75, 0.05],
+ [.6, 0.01], [.95, 0]],
+ [[.5, 0.01], [.3, 0.01],
+ [.01, .85], [.01, .5]]])
+ masks = tf.constant([[[[[0, 1], [2, 3]], [[1, 2], [3, 4]]],
+ [[[2, 3], [4, 5]], [[3, 4], [5, 6]]],
+ [[[4, 5], [6, 7]], [[5, 6], [7, 8]]],
+ [[[6, 7], [8, 9]], [[7, 8], [9, 10]]]],
+ [[[[8, 9], [10, 11]], [[9, 10], [11, 12]]],
+ [[[10, 11], [12, 13]], [[11, 12], [13, 14]]],
+ [[[12, 13], [14, 15]], [[13, 14], [15, 16]]],
+ [[[14, 15], [16, 17]], [[15, 16], [17, 18]]]]],
+ tf.float32)
+ score_thresh = 0.1
+ iou_thresh = .5
+ max_output_size = 4
+
+ exp_nms_corners = np.array([[[0, 10, 1, 11],
+ [0, 0, 1, 1],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0]],
+ [[0, 999, 2, 1004],
+ [0, 10.1, 1, 11.1],
+ [0, 100, 1, 101],
+ [0, 0, 0, 0]]])
+ exp_nms_scores = np.array([[.95, .9, 0, 0],
+ [.85, .5, .3, 0]])
+ exp_nms_classes = np.array([[0, 0, 0, 0],
+ [1, 0, 0, 0]])
+ exp_nms_masks = np.array([[[[6, 7], [8, 9]],
+ [[0, 1], [2, 3]],
+ [[0, 0], [0, 0]],
+ [[0, 0], [0, 0]]],
+ [[[13, 14], [15, 16]],
+ [[8, 9], [10, 11]],
+ [[10, 11], [12, 13]],
+ [[0, 0], [0, 0]]]])
+
+ (nmsed_boxes, nmsed_scores, nmsed_classes, nmsed_masks,
+ nmsed_additional_fields, num_detections
+ ) = post_processing.batch_multiclass_non_max_suppression(
+ boxes, scores, score_thresh, iou_thresh,
+ max_size_per_class=max_output_size, max_total_size=max_output_size,
+ masks=masks)
+
+ self.assertIsNone(nmsed_additional_fields)
+ # Check static shapes
+ self.assertAllEqual(nmsed_boxes.shape.as_list(), exp_nms_corners.shape)
+ self.assertAllEqual(nmsed_scores.shape.as_list(), exp_nms_scores.shape)
+ self.assertAllEqual(nmsed_classes.shape.as_list(), exp_nms_classes.shape)
+ self.assertAllEqual(nmsed_masks.shape.as_list(), exp_nms_masks.shape)
+ self.assertEqual(num_detections.shape.as_list(), [2])
+
+ with self.test_session() as sess:
+ (nmsed_boxes, nmsed_scores, nmsed_classes, nmsed_masks,
+ num_detections) = sess.run([nmsed_boxes, nmsed_scores, nmsed_classes,
+ nmsed_masks, num_detections])
+
+ self.assertAllClose(nmsed_boxes, exp_nms_corners)
+ self.assertAllClose(nmsed_scores, exp_nms_scores)
+ self.assertAllClose(nmsed_classes, exp_nms_classes)
+ self.assertAllClose(num_detections, [2, 3])
+ self.assertAllClose(nmsed_masks, exp_nms_masks)
+
+ def test_batch_multiclass_nms_with_additional_fields(self):
+ boxes = tf.constant([[[[0, 0, 1, 1], [0, 0, 4, 5]],
+ [[0, 0.1, 1, 1.1], [0, 0.1, 2, 1.1]],
+ [[0, -0.1, 1, 0.9], [0, -0.1, 1, 0.9]],
+ [[0, 10, 1, 11], [0, 10, 1, 11]]],
+ [[[0, 10.1, 1, 11.1], [0, 10.1, 1, 11.1]],
+ [[0, 100, 1, 101], [0, 100, 1, 101]],
+ [[0, 1000, 1, 1002], [0, 999, 2, 1004]],
+ [[0, 1000, 1, 1002.1], [0, 999, 2, 1002.7]]]],
+ tf.float32)
+ scores = tf.constant([[[.9, 0.01], [.75, 0.05],
+ [.6, 0.01], [.95, 0]],
+ [[.5, 0.01], [.3, 0.01],
+ [.01, .85], [.01, .5]]])
+ additional_fields = {
+ 'keypoints': tf.constant(
+ [[[[6, 7], [8, 9]],
+ [[0, 1], [2, 3]],
+ [[0, 0], [0, 0]],
+ [[0, 0], [0, 0]]],
+ [[[13, 14], [15, 16]],
+ [[8, 9], [10, 11]],
+ [[10, 11], [12, 13]],
+ [[0, 0], [0, 0]]]],
+ tf.float32)
+ }
+ score_thresh = 0.1
+ iou_thresh = .5
+ max_output_size = 4
+
+ exp_nms_corners = np.array([[[0, 10, 1, 11],
+ [0, 0, 1, 1],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0]],
+ [[0, 999, 2, 1004],
+ [0, 10.1, 1, 11.1],
+ [0, 100, 1, 101],
+ [0, 0, 0, 0]]])
+ exp_nms_scores = np.array([[.95, .9, 0, 0],
+ [.85, .5, .3, 0]])
+ exp_nms_classes = np.array([[0, 0, 0, 0],
+ [1, 0, 0, 0]])
+ exp_nms_additional_fields = {
+ 'keypoints': np.array([[[[0, 0], [0, 0]],
+ [[6, 7], [8, 9]],
+ [[0, 0], [0, 0]],
+ [[0, 0], [0, 0]]],
+ [[[10, 11], [12, 13]],
+ [[13, 14], [15, 16]],
+ [[8, 9], [10, 11]],
+ [[0, 0], [0, 0]]]])
+ }
+
+ (nmsed_boxes, nmsed_scores, nmsed_classes, nmsed_masks,
+ nmsed_additional_fields, num_detections
+ ) = post_processing.batch_multiclass_non_max_suppression(
+ boxes, scores, score_thresh, iou_thresh,
+ max_size_per_class=max_output_size, max_total_size=max_output_size,
+ additional_fields=additional_fields)
+
+ self.assertIsNone(nmsed_masks)
+ # Check static shapes
+ self.assertAllEqual(nmsed_boxes.shape.as_list(), exp_nms_corners.shape)
+ self.assertAllEqual(nmsed_scores.shape.as_list(), exp_nms_scores.shape)
+ self.assertAllEqual(nmsed_classes.shape.as_list(), exp_nms_classes.shape)
+ self.assertEqual(len(nmsed_additional_fields),
+ len(exp_nms_additional_fields))
+ for key in exp_nms_additional_fields:
+ self.assertAllEqual(nmsed_additional_fields[key].shape.as_list(),
+ exp_nms_additional_fields[key].shape)
+ self.assertEqual(num_detections.shape.as_list(), [2])
+
+ with self.test_session() as sess:
+ (nmsed_boxes, nmsed_scores, nmsed_classes, nmsed_additional_fields,
+ num_detections) = sess.run([nmsed_boxes, nmsed_scores, nmsed_classes,
+ nmsed_additional_fields, num_detections])
+
+ self.assertAllClose(nmsed_boxes, exp_nms_corners)
+ self.assertAllClose(nmsed_scores, exp_nms_scores)
+ self.assertAllClose(nmsed_classes, exp_nms_classes)
+ for key in exp_nms_additional_fields:
+ self.assertAllClose(nmsed_additional_fields[key],
+ exp_nms_additional_fields[key])
+ self.assertAllClose(num_detections, [2, 3])
+
+ def test_batch_multiclass_nms_with_dynamic_batch_size(self):
+ boxes_placeholder = tf.placeholder(tf.float32, shape=(None, None, 2, 4))
+ scores_placeholder = tf.placeholder(tf.float32, shape=(None, None, 2))
+ masks_placeholder = tf.placeholder(tf.float32, shape=(None, None, 2, 2, 2))
+
+ boxes = np.array([[[[0, 0, 1, 1], [0, 0, 4, 5]],
+ [[0, 0.1, 1, 1.1], [0, 0.1, 2, 1.1]],
+ [[0, -0.1, 1, 0.9], [0, -0.1, 1, 0.9]],
+ [[0, 10, 1, 11], [0, 10, 1, 11]]],
+ [[[0, 10.1, 1, 11.1], [0, 10.1, 1, 11.1]],
+ [[0, 100, 1, 101], [0, 100, 1, 101]],
+ [[0, 1000, 1, 1002], [0, 999, 2, 1004]],
+ [[0, 1000, 1, 1002.1], [0, 999, 2, 1002.7]]]])
+ scores = np.array([[[.9, 0.01], [.75, 0.05],
+ [.6, 0.01], [.95, 0]],
+ [[.5, 0.01], [.3, 0.01],
+ [.01, .85], [.01, .5]]])
+ masks = np.array([[[[[0, 1], [2, 3]], [[1, 2], [3, 4]]],
+ [[[2, 3], [4, 5]], [[3, 4], [5, 6]]],
+ [[[4, 5], [6, 7]], [[5, 6], [7, 8]]],
+ [[[6, 7], [8, 9]], [[7, 8], [9, 10]]]],
+ [[[[8, 9], [10, 11]], [[9, 10], [11, 12]]],
+ [[[10, 11], [12, 13]], [[11, 12], [13, 14]]],
+ [[[12, 13], [14, 15]], [[13, 14], [15, 16]]],
+ [[[14, 15], [16, 17]], [[15, 16], [17, 18]]]]])
+ score_thresh = 0.1
+ iou_thresh = .5
+ max_output_size = 4
+
+ exp_nms_corners = np.array([[[0, 10, 1, 11],
+ [0, 0, 1, 1],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0]],
+ [[0, 999, 2, 1004],
+ [0, 10.1, 1, 11.1],
+ [0, 100, 1, 101],
+ [0, 0, 0, 0]]])
+ exp_nms_scores = np.array([[.95, .9, 0, 0],
+ [.85, .5, .3, 0]])
+ exp_nms_classes = np.array([[0, 0, 0, 0],
+ [1, 0, 0, 0]])
+ exp_nms_masks = np.array([[[[6, 7], [8, 9]],
+ [[0, 1], [2, 3]],
+ [[0, 0], [0, 0]],
+ [[0, 0], [0, 0]]],
+ [[[13, 14], [15, 16]],
+ [[8, 9], [10, 11]],
+ [[10, 11], [12, 13]],
+ [[0, 0], [0, 0]]]])
+
+ (nmsed_boxes, nmsed_scores, nmsed_classes, nmsed_masks,
+ nmsed_additional_fields, num_detections
+ ) = post_processing.batch_multiclass_non_max_suppression(
+ boxes_placeholder, scores_placeholder, score_thresh, iou_thresh,
+ max_size_per_class=max_output_size, max_total_size=max_output_size,
+ masks=masks_placeholder)
+
+ self.assertIsNone(nmsed_additional_fields)
+ # Check static shapes
+ self.assertAllEqual(nmsed_boxes.shape.as_list(), [None, 4, 4])
+ self.assertAllEqual(nmsed_scores.shape.as_list(), [None, 4])
+ self.assertAllEqual(nmsed_classes.shape.as_list(), [None, 4])
+ self.assertAllEqual(nmsed_masks.shape.as_list(), [None, 4, 2, 2])
+ self.assertEqual(num_detections.shape.as_list(), [None])
+
+ with self.test_session() as sess:
+ (nmsed_boxes, nmsed_scores, nmsed_classes, nmsed_masks,
+ num_detections) = sess.run([nmsed_boxes, nmsed_scores, nmsed_classes,
+ nmsed_masks, num_detections],
+ feed_dict={boxes_placeholder: boxes,
+ scores_placeholder: scores,
+ masks_placeholder: masks})
+ self.assertAllClose(nmsed_boxes, exp_nms_corners)
+ self.assertAllClose(nmsed_scores, exp_nms_scores)
+ self.assertAllClose(nmsed_classes, exp_nms_classes)
+ self.assertAllClose(num_detections, [2, 3])
+ self.assertAllClose(nmsed_masks, exp_nms_masks)
+
+ def test_batch_multiclass_nms_with_masks_and_num_valid_boxes(self):
+ boxes = tf.constant([[[[0, 0, 1, 1], [0, 0, 4, 5]],
+ [[0, 0.1, 1, 1.1], [0, 0.1, 2, 1.1]],
+ [[0, -0.1, 1, 0.9], [0, -0.1, 1, 0.9]],
+ [[0, 10, 1, 11], [0, 10, 1, 11]]],
+ [[[0, 10.1, 1, 11.1], [0, 10.1, 1, 11.1]],
+ [[0, 100, 1, 101], [0, 100, 1, 101]],
+ [[0, 1000, 1, 1002], [0, 999, 2, 1004]],
+ [[0, 1000, 1, 1002.1], [0, 999, 2, 1002.7]]]],
+ tf.float32)
+ scores = tf.constant([[[.9, 0.01], [.75, 0.05],
+ [.6, 0.01], [.95, 0]],
+ [[.5, 0.01], [.3, 0.01],
+ [.01, .85], [.01, .5]]])
+ masks = tf.constant([[[[[0, 1], [2, 3]], [[1, 2], [3, 4]]],
+ [[[2, 3], [4, 5]], [[3, 4], [5, 6]]],
+ [[[4, 5], [6, 7]], [[5, 6], [7, 8]]],
+ [[[6, 7], [8, 9]], [[7, 8], [9, 10]]]],
+ [[[[8, 9], [10, 11]], [[9, 10], [11, 12]]],
+ [[[10, 11], [12, 13]], [[11, 12], [13, 14]]],
+ [[[12, 13], [14, 15]], [[13, 14], [15, 16]]],
+ [[[14, 15], [16, 17]], [[15, 16], [17, 18]]]]],
+ tf.float32)
+ num_valid_boxes = tf.constant([1, 1], tf.int32)
+ score_thresh = 0.1
+ iou_thresh = .5
+ max_output_size = 4
+
+ exp_nms_corners = [[[0, 0, 1, 1],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0]],
+ [[0, 10.1, 1, 11.1],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0]]]
+ exp_nms_scores = [[.9, 0, 0, 0],
+ [.5, 0, 0, 0]]
+ exp_nms_classes = [[0, 0, 0, 0],
+ [0, 0, 0, 0]]
+ exp_nms_masks = [[[[0, 1], [2, 3]],
+ [[0, 0], [0, 0]],
+ [[0, 0], [0, 0]],
+ [[0, 0], [0, 0]]],
+ [[[8, 9], [10, 11]],
+ [[0, 0], [0, 0]],
+ [[0, 0], [0, 0]],
+ [[0, 0], [0, 0]]]]
+
+ (nmsed_boxes, nmsed_scores, nmsed_classes, nmsed_masks,
+ nmsed_additional_fields, num_detections
+ ) = post_processing.batch_multiclass_non_max_suppression(
+ boxes, scores, score_thresh, iou_thresh,
+ max_size_per_class=max_output_size, max_total_size=max_output_size,
+ num_valid_boxes=num_valid_boxes, masks=masks)
+
+ self.assertIsNone(nmsed_additional_fields)
+
+ with self.test_session() as sess:
+ (nmsed_boxes, nmsed_scores, nmsed_classes, nmsed_masks,
+ num_detections) = sess.run([nmsed_boxes, nmsed_scores, nmsed_classes,
+ nmsed_masks, num_detections])
+ self.assertAllClose(nmsed_boxes, exp_nms_corners)
+ self.assertAllClose(nmsed_scores, exp_nms_scores)
+ self.assertAllClose(nmsed_classes, exp_nms_classes)
+ self.assertAllClose(num_detections, [1, 1])
+ self.assertAllClose(nmsed_masks, exp_nms_masks)
+
+ def test_batch_multiclass_nms_with_additional_fields_and_num_valid_boxes(
+ self):
+ boxes = tf.constant([[[[0, 0, 1, 1], [0, 0, 4, 5]],
+ [[0, 0.1, 1, 1.1], [0, 0.1, 2, 1.1]],
+ [[0, -0.1, 1, 0.9], [0, -0.1, 1, 0.9]],
+ [[0, 10, 1, 11], [0, 10, 1, 11]]],
+ [[[0, 10.1, 1, 11.1], [0, 10.1, 1, 11.1]],
+ [[0, 100, 1, 101], [0, 100, 1, 101]],
+ [[0, 1000, 1, 1002], [0, 999, 2, 1004]],
+ [[0, 1000, 1, 1002.1], [0, 999, 2, 1002.7]]]],
+ tf.float32)
+ scores = tf.constant([[[.9, 0.01], [.75, 0.05],
+ [.6, 0.01], [.95, 0]],
+ [[.5, 0.01], [.3, 0.01],
+ [.01, .85], [.01, .5]]])
+ additional_fields = {
+ 'keypoints': tf.constant(
+ [[[[6, 7], [8, 9]],
+ [[0, 1], [2, 3]],
+ [[0, 0], [0, 0]],
+ [[0, 0], [0, 0]]],
+ [[[13, 14], [15, 16]],
+ [[8, 9], [10, 11]],
+ [[10, 11], [12, 13]],
+ [[0, 0], [0, 0]]]],
+ tf.float32)
+ }
+ num_valid_boxes = tf.constant([1, 1], tf.int32)
+ score_thresh = 0.1
+ iou_thresh = .5
+ max_output_size = 4
+
+ exp_nms_corners = [[[0, 0, 1, 1],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0]],
+ [[0, 10.1, 1, 11.1],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0]]]
+ exp_nms_scores = [[.9, 0, 0, 0],
+ [.5, 0, 0, 0]]
+ exp_nms_classes = [[0, 0, 0, 0],
+ [0, 0, 0, 0]]
+ exp_nms_additional_fields = {
+ 'keypoints': np.array([[[[6, 7], [8, 9]],
+ [[0, 0], [0, 0]],
+ [[0, 0], [0, 0]],
+ [[0, 0], [0, 0]]],
+ [[[13, 14], [15, 16]],
+ [[0, 0], [0, 0]],
+ [[0, 0], [0, 0]],
+ [[0, 0], [0, 0]]]])
+ }
+
+ (nmsed_boxes, nmsed_scores, nmsed_classes, nmsed_masks,
+ nmsed_additional_fields, num_detections
+ ) = post_processing.batch_multiclass_non_max_suppression(
+ boxes, scores, score_thresh, iou_thresh,
+ max_size_per_class=max_output_size, max_total_size=max_output_size,
+ num_valid_boxes=num_valid_boxes,
+ additional_fields=additional_fields)
+
+ self.assertIsNone(nmsed_masks)
+
+ with self.test_session() as sess:
+ (nmsed_boxes, nmsed_scores, nmsed_classes, nmsed_additional_fields,
+ num_detections) = sess.run([nmsed_boxes, nmsed_scores, nmsed_classes,
+ nmsed_additional_fields, num_detections])
+
+ self.assertAllClose(nmsed_boxes, exp_nms_corners)
+ self.assertAllClose(nmsed_scores, exp_nms_scores)
+ self.assertAllClose(nmsed_classes, exp_nms_classes)
+ for key in exp_nms_additional_fields:
+ self.assertAllClose(nmsed_additional_fields[key],
+ exp_nms_additional_fields[key])
+ self.assertAllClose(num_detections, [1, 1])
+
+ # TODO(bhattad): Remove conditional after CMLE moves to TF 1.9
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/prefetcher.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/prefetcher.py"
new file mode 100644
index 00000000..e690c599
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/prefetcher.py"
@@ -0,0 +1,61 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Provides functions to prefetch tensors to feed into models."""
+import tensorflow as tf
+
+
+def prefetch(tensor_dict, capacity):
+ """Creates a prefetch queue for tensors.
+
+ Creates a FIFO queue to asynchronously enqueue tensor_dicts and returns a
+ dequeue op that evaluates to a tensor_dict. This function is useful in
+ prefetching preprocessed tensors so that the data is readily available for
+ consumers.
+
+ Example input pipeline when you don't need batching:
+ ----------------------------------------------------
+ key, string_tensor = slim.parallel_reader.parallel_read(...)
+ tensor_dict = decoder.decode(string_tensor)
+ tensor_dict = preprocessor.preprocess(tensor_dict, ...)
+ prefetch_queue = prefetcher.prefetch(tensor_dict, capacity=20)
+ tensor_dict = prefetch_queue.dequeue()
+ outputs = Model(tensor_dict)
+ ...
+ ----------------------------------------------------
+
+ For input pipelines with batching, refer to core/batcher.py
+
+ Args:
+ tensor_dict: a dictionary of tensors to prefetch.
+ capacity: the size of the prefetch queue.
+
+ Returns:
+ a FIFO prefetcher queue
+ """
+ names = list(tensor_dict.keys())
+ dtypes = [t.dtype for t in tensor_dict.values()]
+ shapes = [t.get_shape() for t in tensor_dict.values()]
+ prefetch_queue = tf.PaddingFIFOQueue(capacity, dtypes=dtypes,
+ shapes=shapes,
+ names=names,
+ name='prefetch_queue')
+ enqueue_op = prefetch_queue.enqueue(tensor_dict)
+ tf.train.queue_runner.add_queue_runner(tf.train.queue_runner.QueueRunner(
+ prefetch_queue, [enqueue_op]))
+ tf.summary.scalar('queue/%s/fraction_of_%d_full' % (prefetch_queue.name,
+ capacity),
+ tf.to_float(prefetch_queue.size()) * (1. / capacity))
+ return prefetch_queue
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/prefetcher_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/prefetcher_test.py"
new file mode 100644
index 00000000..63f557e3
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/prefetcher_test.py"
@@ -0,0 +1,101 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for object_detection.core.prefetcher."""
+import tensorflow as tf
+
+from object_detection.core import prefetcher
+
+slim = tf.contrib.slim
+
+
+class PrefetcherTest(tf.test.TestCase):
+
+ def test_prefetch_tensors_with_fully_defined_shapes(self):
+ with self.test_session() as sess:
+ batch_size = 10
+ image_size = 32
+ num_batches = 5
+ examples = tf.Variable(tf.constant(0, dtype=tf.int64))
+ counter = examples.count_up_to(num_batches)
+ image = tf.random_normal([batch_size, image_size,
+ image_size, 3],
+ dtype=tf.float32,
+ name='images')
+ label = tf.random_uniform([batch_size, 1], 0, 10,
+ dtype=tf.int32, name='labels')
+
+ prefetch_queue = prefetcher.prefetch(tensor_dict={'counter': counter,
+ 'image': image,
+ 'label': label},
+ capacity=100)
+ tensor_dict = prefetch_queue.dequeue()
+
+ self.assertAllEqual(tensor_dict['image'].get_shape().as_list(),
+ [batch_size, image_size, image_size, 3])
+ self.assertAllEqual(tensor_dict['label'].get_shape().as_list(),
+ [batch_size, 1])
+
+ tf.initialize_all_variables().run()
+ with slim.queues.QueueRunners(sess):
+ for _ in range(num_batches):
+ results = sess.run(tensor_dict)
+ self.assertEquals(results['image'].shape,
+ (batch_size, image_size, image_size, 3))
+ self.assertEquals(results['label'].shape, (batch_size, 1))
+ with self.assertRaises(tf.errors.OutOfRangeError):
+ sess.run(tensor_dict)
+
+ def test_prefetch_tensors_with_partially_defined_shapes(self):
+ with self.test_session() as sess:
+ batch_size = 10
+ image_size = 32
+ num_batches = 5
+ examples = tf.Variable(tf.constant(0, dtype=tf.int64))
+ counter = examples.count_up_to(num_batches)
+ image = tf.random_normal([batch_size,
+ tf.Variable(image_size),
+ tf.Variable(image_size), 3],
+ dtype=tf.float32,
+ name='image')
+ image.set_shape([batch_size, None, None, 3])
+ label = tf.random_uniform([batch_size, tf.Variable(1)], 0,
+ 10, dtype=tf.int32, name='label')
+ label.set_shape([batch_size, None])
+
+ prefetch_queue = prefetcher.prefetch(tensor_dict={'counter': counter,
+ 'image': image,
+ 'label': label},
+ capacity=100)
+ tensor_dict = prefetch_queue.dequeue()
+
+ self.assertAllEqual(tensor_dict['image'].get_shape().as_list(),
+ [batch_size, None, None, 3])
+ self.assertAllEqual(tensor_dict['label'].get_shape().as_list(),
+ [batch_size, None])
+
+ tf.initialize_all_variables().run()
+ with slim.queues.QueueRunners(sess):
+ for _ in range(num_batches):
+ results = sess.run(tensor_dict)
+ self.assertEquals(results['image'].shape,
+ (batch_size, image_size, image_size, 3))
+ self.assertEquals(results['label'].shape, (batch_size, 1))
+ with self.assertRaises(tf.errors.OutOfRangeError):
+ sess.run(tensor_dict)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/preprocessor.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/preprocessor.py"
new file mode 100644
index 00000000..4d8f60b0
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/preprocessor.py"
@@ -0,0 +1,3320 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Preprocess images and bounding boxes for detection.
+
+We perform two sets of operations in the preprocessing stage:
+(a) operations that are applied to both training and testing data,
+(b) operations that are applied only to training data for the purpose of
+ data augmentation.
+
+A preprocessing function receives a set of inputs,
+e.g. an image and bounding boxes,
+performs an operation on them, and returns them.
+Some examples are: randomly cropping the image, randomly mirroring the image,
+                   randomly changing the brightness, contrast, and hue, and
+                   randomly jittering the bounding boxes.
+
+The preprocess function receives a tensor_dict which is a dictionary that maps
+different field names to their tensors. For example,
+tensor_dict[fields.InputDataFields.image] holds the image tensor.
+The image is a rank 4 tensor: [1, height, width, channels] with
+dtype=tf.float32. The groundtruth_boxes is a rank 2 tensor: [N, 4] where
+in each row there is a box with [ymin, xmin, ymax, xmax].
+Boxes are in normalized coordinates, meaning
+their coordinate values range in [0, 1].
+
+To preprocess multiple images with the same operations in cases where
+nondeterministic operations are used, a preprocessor_cache.PreprocessorCache
+object can be passed into the preprocess function or individual operations.
+All nondeterministic operations except random_jitter_boxes support caching.
+E.g.
+Let tensor_dict{1,2,3,4,5} be copies of the same inputs.
+Let preprocess_options contain nondeterministic operation(s) excluding
+random_jitter_boxes.
+
+cache1 = preprocessor_cache.PreprocessorCache()
+cache2 = preprocessor_cache.PreprocessorCache()
+a = preprocess(tensor_dict1, preprocess_options, preprocess_vars_cache=cache1)
+b = preprocess(tensor_dict2, preprocess_options, preprocess_vars_cache=cache1)
+c = preprocess(tensor_dict3, preprocess_options, preprocess_vars_cache=cache2)
+d = preprocess(tensor_dict4, preprocess_options, preprocess_vars_cache=cache2)
+e = preprocess(tensor_dict5, preprocess_options)
+
+Then the corresponding tensors of object pairs (a,b) and (c,d)
+are guaranteed to be equal element-wise, but the equality of any other object
+pair cannot be determined.
+
+Important Note: In tensor_dict, images is a rank 4 tensor, but preprocessing
+functions receive a rank 3 tensor for processing the image. Thus, inside the
+preprocess function we squeeze the image to become a rank 3 tensor and then
+we pass it to the functions. At the end of the preprocess we expand the image
+back to rank 4.
+"""
+
+import functools
+import inspect
+import sys
+import tensorflow as tf
+
+from tensorflow.python.ops import control_flow_ops
+
+from object_detection.core import box_list
+from object_detection.core import box_list_ops
+from object_detection.core import keypoint_ops
+from object_detection.core import preprocessor_cache
+from object_detection.core import standard_fields as fields
+from object_detection.utils import shape_utils
+
+
+def _apply_with_random_selector(x,
+ func,
+ num_cases,
+ preprocess_vars_cache=None,
+ key=''):
+ """Computes func(x, sel), with sel sampled from [0...num_cases-1].
+
+ If both preprocess_vars_cache AND key are the same between two calls, sel will
+ be the same value in both calls.
+
+ Args:
+ x: input Tensor.
+ func: Python function to apply.
+ num_cases: Python int32, number of cases to sample sel from.
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+ key: variable identifier for preprocess_vars_cache.
+
+ Returns:
+ The result of func(x, sel), where func receives the value of the
+ selector as a python integer, but sel is sampled dynamically.
+ """
+ generator_func = functools.partial(
+ tf.random_uniform, [], maxval=num_cases, dtype=tf.int32)
+ rand_sel = _get_or_create_preprocess_rand_vars(
+ generator_func, preprocessor_cache.PreprocessorCache.SELECTOR,
+ preprocess_vars_cache, key)
+
+ # Pass the real x only to one of the func calls.
+ return control_flow_ops.merge([func(
+ control_flow_ops.switch(x, tf.equal(rand_sel, case))[1], case)
+ for case in range(num_cases)])[0]
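+
+# Illustrative call (an assumption for clarity, not part of the original
+# file): pick one of two color-distortion orderings, with only the selected
+# branch receiving the real tensor:
+#   image = _apply_with_random_selector(
+#       image, lambda x, ordering: random_distort_color(x, ordering),
+#       num_cases=2)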
+
+
+def _apply_with_random_selector_tuples(x,
+ func,
+ num_cases,
+ preprocess_vars_cache=None,
+ key=''):
+ """Computes func(x, sel), with sel sampled from [0...num_cases-1].
+
+ If both preprocess_vars_cache AND key are the same between two calls, sel will
+ be the same value in both calls.
+
+ Args:
+ x: A tuple of input tensors.
+ func: Python function to apply.
+ num_cases: Python int32, number of cases to sample sel from.
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+ key: variable identifier for preprocess_vars_cache.
+
+ Returns:
+ The result of func(x, sel), where func receives the value of the
+ selector as a python integer, but sel is sampled dynamically.
+ """
+ num_inputs = len(x)
+ generator_func = functools.partial(
+ tf.random_uniform, [], maxval=num_cases, dtype=tf.int32)
+ rand_sel = _get_or_create_preprocess_rand_vars(
+ generator_func, preprocessor_cache.PreprocessorCache.SELECTOR_TUPLES,
+ preprocess_vars_cache, key)
+
+ # Pass the real x only to one of the func calls.
+ tuples = [list() for t in x]
+ for case in range(num_cases):
+ new_x = [control_flow_ops.switch(t, tf.equal(rand_sel, case))[1] for t in x]
+ output = func(tuple(new_x), case)
+ for j in range(num_inputs):
+ tuples[j].append(output[j])
+
+ for i in range(num_inputs):
+ tuples[i] = control_flow_ops.merge(tuples[i])[0]
+ return tuple(tuples)
+
+
+def _get_or_create_preprocess_rand_vars(generator_func,
+ function_id,
+ preprocess_vars_cache,
+ key=''):
+ """Returns a tensor stored in preprocess_vars_cache or using generator_func.
+
+ If the tensor was previously generated and appears in the PreprocessorCache,
+ the previously generated tensor will be returned. Otherwise, a new tensor
+ is generated using generator_func and stored in the cache.
+
+ Args:
+ generator_func: A 0-argument function that generates a tensor.
+ function_id: identifier for the preprocessing function used.
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+ key: identifier for the variable stored.
+ Returns:
+ The generated tensor.
+ """
+ if preprocess_vars_cache is not None:
+ var = preprocess_vars_cache.get(function_id, key)
+ if var is None:
+ var = generator_func()
+ preprocess_vars_cache.update(function_id, key, var)
+ else:
+ var = generator_func()
+ return var
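+
+# Illustrative usage (an assumption, not part of the original file): calls
+# sharing the same cache, function_id and key reuse one generated tensor,
+# which is what lets paired tensor_dicts receive identical augmentations.
+#   cache = preprocessor_cache.PreprocessorCache()
+#   gen = functools.partial(tf.random_uniform, [])
+#   a = _get_or_create_preprocess_rand_vars(
+#       gen, preprocessor_cache.PreprocessorCache.HORIZONTAL_FLIP, cache)
+#   b = _get_or_create_preprocess_rand_vars(
+#       gen, preprocessor_cache.PreprocessorCache.HORIZONTAL_FLIP, cache)
+#   # a is b  -> True (the cached tensor is returned on the second call)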
+
+
+def _random_integer(minval, maxval, seed):
+ """Returns a random 0-D tensor between minval and maxval.
+
+ Args:
+ minval: minimum value of the random tensor.
+ maxval: maximum value of the random tensor.
+ seed: random seed.
+
+ Returns:
+ A random 0-D tensor between minval and maxval.
+ """
+ return tf.random_uniform(
+ [], minval=minval, maxval=maxval, dtype=tf.int32, seed=seed)
+
+
+# TODO(mttang): This method is needed because the current
+# tf.image.rgb_to_grayscale method does not support quantization. Replace with
+# tf.image.rgb_to_grayscale after quantization support is added.
+def _rgb_to_grayscale(images, name=None):
+ """Converts one or more images from RGB to Grayscale.
+
+ Outputs a tensor of the same `DType` and rank as `images`. The size of the
+ last dimension of the output is 1, containing the Grayscale value of the
+ pixels.
+
+ Args:
+ images: The RGB tensor to convert. Last dimension must have size 3 and
+ should contain RGB values.
+ name: A name for the operation (optional).
+
+ Returns:
+ The converted grayscale image(s).
+ """
+ with tf.name_scope(name, 'rgb_to_grayscale', [images]) as name:
+ images = tf.convert_to_tensor(images, name='images')
+    # Remember the original dtype so we can convert back if needed
+ orig_dtype = images.dtype
+ flt_image = tf.image.convert_image_dtype(images, tf.float32)
+
+ # Reference for converting between RGB and grayscale.
+ # https://en.wikipedia.org/wiki/Luma_%28video%29
+ rgb_weights = [0.2989, 0.5870, 0.1140]
+ rank_1 = tf.expand_dims(tf.rank(images) - 1, 0)
+ gray_float = tf.reduce_sum(
+ flt_image * rgb_weights, rank_1, keep_dims=True)
+ gray_float.set_shape(images.get_shape()[:-1].concatenate([1]))
+ return tf.image.convert_image_dtype(gray_float, orig_dtype, name=name)
+
+
+def normalize_image(image, original_minval, original_maxval, target_minval,
+ target_maxval):
+ """Normalizes pixel values in the image.
+
+ Moves the pixel values from the current [original_minval, original_maxval]
+  range to the [target_minval, target_maxval] range.
+
+ Args:
+ image: rank 3 float32 tensor containing 1
+ image -> [height, width, channels].
+ original_minval: current image minimum value.
+ original_maxval: current image maximum value.
+ target_minval: target image minimum value.
+ target_maxval: target image maximum value.
+
+ Returns:
+ image: image which is the same shape as input image.
+ """
+ with tf.name_scope('NormalizeImage', values=[image]):
+ original_minval = float(original_minval)
+ original_maxval = float(original_maxval)
+ target_minval = float(target_minval)
+ target_maxval = float(target_maxval)
+ image = tf.to_float(image)
+ image = tf.subtract(image, original_minval)
+ image = tf.multiply(image, (target_maxval - target_minval) /
+ (original_maxval - original_minval))
+ image = tf.add(image, target_minval)
+ return image
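+
+# Worked example (illustrative only): with original range [0, 255] and target
+# range [-1, 1], a pixel value of 127.5 maps to
+# (127.5 - 0) * (1 - (-1)) / (255 - 0) + (-1) = 0.0.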
+
+
+def retain_boxes_above_threshold(boxes,
+ labels,
+ label_weights,
+ label_confidences=None,
+ multiclass_scores=None,
+ masks=None,
+ keypoints=None,
+ threshold=0.0):
+ """Retains boxes whose label weight is above a given threshold.
+
+ If the label weight for a box is missing (represented by NaN), the box is
+ retained. The boxes that don't pass the threshold will not appear in the
+ returned tensor.
+
+ Args:
+ boxes: float32 tensor of shape [num_instance, 4] representing boxes
+ location in normalized coordinates.
+ labels: rank 1 int32 tensor of shape [num_instance] containing the object
+ classes.
+ label_weights: float32 tensor of shape [num_instance] representing the
+ weight for each box.
+ label_confidences: float32 tensor of shape [num_instance] representing the
+ confidence for each box.
+ multiclass_scores: (optional) float32 tensor of shape
+ [num_instances, num_classes] representing the score for each box for each
+ class.
+ masks: (optional) rank 3 float32 tensor with shape
+ [num_instances, height, width] containing instance masks. The masks are of
+ the same height, width as the input `image`.
+ keypoints: (optional) rank 3 float32 tensor with shape
+ [num_instances, num_keypoints, 2]. The keypoints are in y-x normalized
+ coordinates.
+ threshold: scalar python float.
+
+ Returns:
+ retained_boxes: [num_retained_instance, 4]
+    retained_labels: [num_retained_instance]
+ retained_label_weights: [num_retained_instance]
+
+ If multiclass_scores, masks, or keypoints are not None, the function also
+ returns:
+
+ retained_multiclass_scores: [num_retained_instance, num_classes]
+ retained_masks: [num_retained_instance, height, width]
+ retained_keypoints: [num_retained_instance, num_keypoints, 2]
+ """
+ with tf.name_scope('RetainBoxesAboveThreshold',
+ values=[boxes, labels, label_weights]):
+ indices = tf.where(
+ tf.logical_or(label_weights > threshold, tf.is_nan(label_weights)))
+ indices = tf.squeeze(indices, axis=1)
+ retained_boxes = tf.gather(boxes, indices)
+ retained_labels = tf.gather(labels, indices)
+ retained_label_weights = tf.gather(label_weights, indices)
+ result = [retained_boxes, retained_labels, retained_label_weights]
+
+ if label_confidences is not None:
+ retained_label_confidences = tf.gather(label_confidences, indices)
+ result.append(retained_label_confidences)
+
+ if multiclass_scores is not None:
+ retained_multiclass_scores = tf.gather(multiclass_scores, indices)
+ result.append(retained_multiclass_scores)
+
+ if masks is not None:
+ retained_masks = tf.gather(masks, indices)
+ result.append(retained_masks)
+
+ if keypoints is not None:
+ retained_keypoints = tf.gather(keypoints, indices)
+ result.append(retained_keypoints)
+
+ return result
+
+
+def _flip_boxes_left_right(boxes):
+ """Left-right flip the boxes.
+
+ Args:
+ boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
+ Boxes are in normalized form meaning their coordinates vary
+ between [0, 1].
+ Each row is in the form of [ymin, xmin, ymax, xmax].
+
+ Returns:
+ Flipped boxes.
+ """
+ ymin, xmin, ymax, xmax = tf.split(value=boxes, num_or_size_splits=4, axis=1)
+ flipped_xmin = tf.subtract(1.0, xmax)
+ flipped_xmax = tf.subtract(1.0, xmin)
+ flipped_boxes = tf.concat([ymin, flipped_xmin, ymax, flipped_xmax], 1)
+ return flipped_boxes
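+
+# Worked example (illustrative only): the box [0.1, 0.2, 0.5, 0.6] flips to
+# [0.1, 0.4, 0.5, 0.8], since xmin' = 1 - xmax and xmax' = 1 - xmin while the
+# y-coordinates stay unchanged.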
+
+
+def _flip_boxes_up_down(boxes):
+ """Up-down flip the boxes.
+
+ Args:
+ boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
+ Boxes are in normalized form meaning their coordinates vary
+ between [0, 1].
+ Each row is in the form of [ymin, xmin, ymax, xmax].
+
+ Returns:
+ Flipped boxes.
+ """
+ ymin, xmin, ymax, xmax = tf.split(value=boxes, num_or_size_splits=4, axis=1)
+ flipped_ymin = tf.subtract(1.0, ymax)
+ flipped_ymax = tf.subtract(1.0, ymin)
+ flipped_boxes = tf.concat([flipped_ymin, xmin, flipped_ymax, xmax], 1)
+ return flipped_boxes
+
+
+def _rot90_boxes(boxes):
+ """Rotate boxes counter-clockwise by 90 degrees.
+
+ Args:
+ boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
+ Boxes are in normalized form meaning their coordinates vary
+ between [0, 1].
+ Each row is in the form of [ymin, xmin, ymax, xmax].
+
+ Returns:
+ Rotated boxes.
+ """
+ ymin, xmin, ymax, xmax = tf.split(value=boxes, num_or_size_splits=4, axis=1)
+ rotated_ymin = tf.subtract(1.0, xmax)
+ rotated_ymax = tf.subtract(1.0, xmin)
+ rotated_xmin = ymin
+ rotated_xmax = ymax
+ rotated_boxes = tf.concat(
+ [rotated_ymin, rotated_xmin, rotated_ymax, rotated_xmax], 1)
+ return rotated_boxes
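+
+# Worked example (illustrative only): rotating the box [0.1, 0.2, 0.5, 0.6]
+# counter-clockwise by 90 degrees gives [0.4, 0.1, 0.8, 0.5], using
+# ymin' = 1 - xmax, ymax' = 1 - xmin, xmin' = ymin, xmax' = ymax.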
+
+
+def _flip_masks_left_right(masks):
+ """Left-right flip masks.
+
+ Args:
+ masks: rank 3 float32 tensor with shape
+ [num_instances, height, width] representing instance masks.
+
+ Returns:
+ flipped masks: rank 3 float32 tensor with shape
+ [num_instances, height, width] representing instance masks.
+ """
+ return masks[:, :, ::-1]
+
+
+def _flip_masks_up_down(masks):
+ """Up-down flip masks.
+
+ Args:
+ masks: rank 3 float32 tensor with shape
+ [num_instances, height, width] representing instance masks.
+
+ Returns:
+ flipped masks: rank 3 float32 tensor with shape
+ [num_instances, height, width] representing instance masks.
+ """
+ return masks[:, ::-1, :]
+
+
+def _rot90_masks(masks):
+ """Rotate masks counter-clockwise by 90 degrees.
+
+ Args:
+ masks: rank 3 float32 tensor with shape
+ [num_instances, height, width] representing instance masks.
+
+ Returns:
+ rotated masks: rank 3 float32 tensor with shape
+ [num_instances, height, width] representing instance masks.
+ """
+ masks = tf.transpose(masks, [0, 2, 1])
+ return masks[:, ::-1, :]
+
+
+def random_horizontal_flip(image,
+ boxes=None,
+ masks=None,
+ keypoints=None,
+ keypoint_flip_permutation=None,
+ seed=None,
+ preprocess_vars_cache=None):
+ """Randomly flips the image and detections horizontally.
+
+ The probability of flipping the image is 50%.
+
+ Args:
+ image: rank 3 float32 tensor with shape [height, width, channels].
+ boxes: (optional) rank 2 float32 tensor with shape [N, 4]
+ containing the bounding boxes.
+ Boxes are in normalized form meaning their coordinates vary
+ between [0, 1].
+ Each row is in the form of [ymin, xmin, ymax, xmax].
+ masks: (optional) rank 3 float32 tensor with shape
+ [num_instances, height, width] containing instance masks. The masks
+ are of the same height, width as the input `image`.
+ keypoints: (optional) rank 3 float32 tensor with shape
+ [num_instances, num_keypoints, 2]. The keypoints are in y-x
+ normalized coordinates.
+ keypoint_flip_permutation: rank 1 int32 tensor containing the keypoint flip
+ permutation.
+ seed: random seed
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+
+ Returns:
+ image: image which is the same shape as input image.
+
+ If boxes, masks, keypoints, and keypoint_flip_permutation are not None,
+ the function also returns the following tensors.
+
+ boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
+ Boxes are in normalized form meaning their coordinates vary
+ between [0, 1].
+ masks: rank 3 float32 tensor with shape [num_instances, height, width]
+ containing instance masks.
+ keypoints: rank 3 float32 tensor with shape
+ [num_instances, num_keypoints, 2]
+
+ Raises:
+ ValueError: if keypoints are provided but keypoint_flip_permutation is not.
+ """
+
+ def _flip_image(image):
+ # flip image
+ image_flipped = tf.image.flip_left_right(image)
+ return image_flipped
+
+ if keypoints is not None and keypoint_flip_permutation is None:
+ raise ValueError(
+        'keypoints are provided but keypoint_flip_permutation is not provided')
+
+ with tf.name_scope('RandomHorizontalFlip', values=[image, boxes]):
+ result = []
+ # random variable defining whether to do flip or not
+ generator_func = functools.partial(tf.random_uniform, [], seed=seed)
+ do_a_flip_random = _get_or_create_preprocess_rand_vars(
+ generator_func,
+ preprocessor_cache.PreprocessorCache.HORIZONTAL_FLIP,
+ preprocess_vars_cache)
+ do_a_flip_random = tf.greater(do_a_flip_random, 0.5)
+
+ # flip image
+ image = tf.cond(do_a_flip_random, lambda: _flip_image(image), lambda: image)
+ result.append(image)
+
+ # flip boxes
+ if boxes is not None:
+ boxes = tf.cond(do_a_flip_random, lambda: _flip_boxes_left_right(boxes),
+ lambda: boxes)
+ result.append(boxes)
+
+ # flip masks
+ if masks is not None:
+ masks = tf.cond(do_a_flip_random, lambda: _flip_masks_left_right(masks),
+ lambda: masks)
+ result.append(masks)
+
+ # flip keypoints
+ if keypoints is not None and keypoint_flip_permutation is not None:
+ permutation = keypoint_flip_permutation
+ keypoints = tf.cond(
+ do_a_flip_random,
+ lambda: keypoint_ops.flip_horizontal(keypoints, 0.5, permutation),
+ lambda: keypoints)
+ result.append(keypoints)
+
+ return tuple(result)
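+
+# Illustrative call (an assumption, not part of the original file): with only
+# boxes supplied, the returned tuple carries the (possibly flipped) image and
+# boxes:
+#   image, boxes = random_horizontal_flip(image, boxes)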
+
+
+def random_vertical_flip(image,
+ boxes=None,
+ masks=None,
+ keypoints=None,
+ keypoint_flip_permutation=None,
+ seed=None,
+ preprocess_vars_cache=None):
+ """Randomly flips the image and detections vertically.
+
+ The probability of flipping the image is 50%.
+
+ Args:
+ image: rank 3 float32 tensor with shape [height, width, channels].
+ boxes: (optional) rank 2 float32 tensor with shape [N, 4]
+ containing the bounding boxes.
+ Boxes are in normalized form meaning their coordinates vary
+ between [0, 1].
+ Each row is in the form of [ymin, xmin, ymax, xmax].
+ masks: (optional) rank 3 float32 tensor with shape
+ [num_instances, height, width] containing instance masks. The masks
+ are of the same height, width as the input `image`.
+ keypoints: (optional) rank 3 float32 tensor with shape
+ [num_instances, num_keypoints, 2]. The keypoints are in y-x
+ normalized coordinates.
+ keypoint_flip_permutation: rank 1 int32 tensor containing the keypoint flip
+ permutation.
+ seed: random seed
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+
+ Returns:
+ image: image which is the same shape as input image.
+
+ If boxes, masks, keypoints, and keypoint_flip_permutation are not None,
+ the function also returns the following tensors.
+
+ boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
+ Boxes are in normalized form meaning their coordinates vary
+ between [0, 1].
+ masks: rank 3 float32 tensor with shape [num_instances, height, width]
+ containing instance masks.
+ keypoints: rank 3 float32 tensor with shape
+ [num_instances, num_keypoints, 2]
+
+ Raises:
+ ValueError: if keypoints are provided but keypoint_flip_permutation is not.
+ """
+
+ def _flip_image(image):
+ # flip image
+ image_flipped = tf.image.flip_up_down(image)
+ return image_flipped
+
+ if keypoints is not None and keypoint_flip_permutation is None:
+ raise ValueError(
+        'keypoints are provided but keypoint_flip_permutation is not provided')
+
+ with tf.name_scope('RandomVerticalFlip', values=[image, boxes]):
+ result = []
+ # random variable defining whether to do flip or not
+ generator_func = functools.partial(tf.random_uniform, [], seed=seed)
+ do_a_flip_random = _get_or_create_preprocess_rand_vars(
+ generator_func, preprocessor_cache.PreprocessorCache.VERTICAL_FLIP,
+ preprocess_vars_cache)
+ do_a_flip_random = tf.greater(do_a_flip_random, 0.5)
+
+ # flip image
+ image = tf.cond(do_a_flip_random, lambda: _flip_image(image), lambda: image)
+ result.append(image)
+
+ # flip boxes
+ if boxes is not None:
+ boxes = tf.cond(do_a_flip_random, lambda: _flip_boxes_up_down(boxes),
+ lambda: boxes)
+ result.append(boxes)
+
+ # flip masks
+ if masks is not None:
+ masks = tf.cond(do_a_flip_random, lambda: _flip_masks_up_down(masks),
+ lambda: masks)
+ result.append(masks)
+
+ # flip keypoints
+ if keypoints is not None and keypoint_flip_permutation is not None:
+ permutation = keypoint_flip_permutation
+ keypoints = tf.cond(
+ do_a_flip_random,
+ lambda: keypoint_ops.flip_vertical(keypoints, 0.5, permutation),
+ lambda: keypoints)
+ result.append(keypoints)
+
+ return tuple(result)
+
+
+def random_rotation90(image,
+ boxes=None,
+ masks=None,
+ keypoints=None,
+ seed=None,
+ preprocess_vars_cache=None):
+ """Randomly rotates the image and detections 90 degrees counter-clockwise.
+
+ The probability of rotating the image is 50%. This can be combined with
+ random_horizontal_flip and random_vertical_flip to produce an output with a
+ uniform distribution of the eight possible 90 degree rotation / reflection
+ combinations.
+
+ Args:
+ image: rank 3 float32 tensor with shape [height, width, channels].
+ boxes: (optional) rank 2 float32 tensor with shape [N, 4]
+ containing the bounding boxes.
+ Boxes are in normalized form meaning their coordinates vary
+ between [0, 1].
+ Each row is in the form of [ymin, xmin, ymax, xmax].
+ masks: (optional) rank 3 float32 tensor with shape
+ [num_instances, height, width] containing instance masks. The masks
+ are of the same height, width as the input `image`.
+ keypoints: (optional) rank 3 float32 tensor with shape
+ [num_instances, num_keypoints, 2]. The keypoints are in y-x
+ normalized coordinates.
+ seed: random seed
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+
+ Returns:
+ image: image which is the same shape as input image.
+
+ If boxes, masks, and keypoints, are not None,
+ the function also returns the following tensors.
+
+ boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
+ Boxes are in normalized form meaning their coordinates vary
+ between [0, 1].
+ masks: rank 3 float32 tensor with shape [num_instances, height, width]
+ containing instance masks.
+ keypoints: rank 3 float32 tensor with shape
+ [num_instances, num_keypoints, 2]
+ """
+
+ def _rot90_image(image):
+    # rotate image
+ image_rotated = tf.image.rot90(image)
+ return image_rotated
+
+ with tf.name_scope('RandomRotation90', values=[image, boxes]):
+ result = []
+
+ # random variable defining whether to rotate by 90 degrees or not
+ generator_func = functools.partial(tf.random_uniform, [], seed=seed)
+ do_a_rot90_random = _get_or_create_preprocess_rand_vars(
+ generator_func, preprocessor_cache.PreprocessorCache.ROTATION90,
+ preprocess_vars_cache)
+ do_a_rot90_random = tf.greater(do_a_rot90_random, 0.5)
+
+    # rotate image
+ image = tf.cond(do_a_rot90_random, lambda: _rot90_image(image),
+ lambda: image)
+ result.append(image)
+
+    # rotate boxes
+ if boxes is not None:
+ boxes = tf.cond(do_a_rot90_random, lambda: _rot90_boxes(boxes),
+ lambda: boxes)
+ result.append(boxes)
+
+    # rotate masks
+ if masks is not None:
+ masks = tf.cond(do_a_rot90_random, lambda: _rot90_masks(masks),
+ lambda: masks)
+ result.append(masks)
+
+    # rotate keypoints
+ if keypoints is not None:
+ keypoints = tf.cond(
+ do_a_rot90_random,
+ lambda: keypoint_ops.rot90(keypoints),
+ lambda: keypoints)
+ result.append(keypoints)
+
+ return tuple(result)
+
+
+def random_pixel_value_scale(image,
+ minval=0.9,
+ maxval=1.1,
+ seed=None,
+ preprocess_vars_cache=None):
+ """Scales each value in the pixels of the image.
+
+  This function scales each pixel independently of the others.
+  For each value in the image tensor, it draws a random number between
+  minval and maxval and multiplies the value by it.
+
+ Args:
+ image: rank 3 float32 tensor contains 1 image -> [height, width, channels]
+ with pixel values varying between [0, 255].
+ minval: lower ratio of scaling pixel values.
+ maxval: upper ratio of scaling pixel values.
+ seed: random seed.
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+
+ Returns:
+ image: image which is the same shape as input image.
+ """
+ with tf.name_scope('RandomPixelValueScale', values=[image]):
+ generator_func = functools.partial(
+ tf.random_uniform, tf.shape(image),
+ minval=minval, maxval=maxval,
+ dtype=tf.float32, seed=seed)
+ color_coef = _get_or_create_preprocess_rand_vars(
+ generator_func,
+ preprocessor_cache.PreprocessorCache.PIXEL_VALUE_SCALE,
+ preprocess_vars_cache)
+
+ image = tf.multiply(image, color_coef)
+ image = tf.clip_by_value(image, 0.0, 255.0)
+
+ return image
+
+
+def random_image_scale(image,
+ masks=None,
+ min_scale_ratio=0.5,
+ max_scale_ratio=2.0,
+ seed=None,
+ preprocess_vars_cache=None):
+ """Scales the image size.
+
+ Args:
+ image: rank 3 float32 tensor contains 1 image -> [height, width, channels].
+ masks: (optional) rank 3 float32 tensor containing masks with
+ size [height, width, num_masks]. The value is set to None if there are no
+ masks.
+ min_scale_ratio: minimum scaling ratio.
+ max_scale_ratio: maximum scaling ratio.
+ seed: random seed.
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+
+ Returns:
+ image: image which is the same rank as input image.
+    masks: if masks is not None, resized masks with the same rank as the input
+      masks will be returned.
+ """
+ with tf.name_scope('RandomImageScale', values=[image]):
+ result = []
+ image_shape = tf.shape(image)
+ image_height = image_shape[0]
+ image_width = image_shape[1]
+ generator_func = functools.partial(
+ tf.random_uniform, [],
+ minval=min_scale_ratio, maxval=max_scale_ratio,
+ dtype=tf.float32, seed=seed)
+ size_coef = _get_or_create_preprocess_rand_vars(
+ generator_func, preprocessor_cache.PreprocessorCache.IMAGE_SCALE,
+ preprocess_vars_cache)
+
+ image_newysize = tf.to_int32(
+ tf.multiply(tf.to_float(image_height), size_coef))
+ image_newxsize = tf.to_int32(
+ tf.multiply(tf.to_float(image_width), size_coef))
+ image = tf.image.resize_images(
+ image, [image_newysize, image_newxsize], align_corners=True)
+ result.append(image)
+ if masks is not None:
+ masks = tf.image.resize_images(
+ masks, [image_newysize, image_newxsize],
+ method=tf.image.ResizeMethod.NEAREST_NEIGHBOR,
+ align_corners=True)
+ result.append(masks)
+ return tuple(result)
+
+
+def random_rgb_to_gray(image,
+ probability=0.1,
+ seed=None,
+ preprocess_vars_cache=None):
+ """Changes the image from RGB to Grayscale with the given probability.
+
+ Args:
+ image: rank 3 float32 tensor contains 1 image -> [height, width, channels]
+ with pixel values varying between [0, 255].
+ probability: the probability of returning a grayscale image.
+ The probability should be a number between [0, 1].
+ seed: random seed.
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+
+ Returns:
+ image: image which is the same shape as input image.
+ """
+ def _image_to_gray(image):
+ image_gray1 = _rgb_to_grayscale(image)
+ image_gray3 = tf.image.grayscale_to_rgb(image_gray1)
+ return image_gray3
+
+ with tf.name_scope('RandomRGBtoGray', values=[image]):
+ # random variable defining whether to change to grayscale or not
+ generator_func = functools.partial(tf.random_uniform, [], seed=seed)
+ do_gray_random = _get_or_create_preprocess_rand_vars(
+ generator_func, preprocessor_cache.PreprocessorCache.RGB_TO_GRAY,
+ preprocess_vars_cache)
+
+ image = tf.cond(
+ tf.greater(do_gray_random, probability), lambda: image,
+ lambda: _image_to_gray(image))
+
+ return image
+
+
+def random_adjust_brightness(image,
+ max_delta=0.2,
+ seed=None,
+ preprocess_vars_cache=None):
+ """Randomly adjusts brightness.
+
+ Makes sure the output image is still between 0 and 255.
+
+ Args:
+ image: rank 3 float32 tensor contains 1 image -> [height, width, channels]
+ with pixel values varying between [0, 255].
+ max_delta: how much to change the brightness. A value between [0, 1).
+ seed: random seed.
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+
+ Returns:
+ image: image which is the same shape as input image.
+ """
+ with tf.name_scope('RandomAdjustBrightness', values=[image]):
+ generator_func = functools.partial(tf.random_uniform, [],
+ -max_delta, max_delta, seed=seed)
+ delta = _get_or_create_preprocess_rand_vars(
+ generator_func,
+ preprocessor_cache.PreprocessorCache.ADJUST_BRIGHTNESS,
+ preprocess_vars_cache)
+
+ image = tf.image.adjust_brightness(image / 255, delta) * 255
+ image = tf.clip_by_value(image, clip_value_min=0.0, clip_value_max=255.0)
+ return image
+
+
+def random_adjust_contrast(image,
+ min_delta=0.8,
+ max_delta=1.25,
+ seed=None,
+ preprocess_vars_cache=None):
+ """Randomly adjusts contrast.
+
+ Makes sure the output image is still between 0 and 255.
+
+ Args:
+ image: rank 3 float32 tensor contains 1 image -> [height, width, channels]
+ with pixel values varying between [0, 255].
+ min_delta: see max_delta.
+ max_delta: how much to change the contrast. Contrast will change with a
+ value between min_delta and max_delta. This value will be
+ multiplied to the current contrast of the image.
+ seed: random seed.
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+
+ Returns:
+ image: image which is the same shape as input image.
+ """
+ with tf.name_scope('RandomAdjustContrast', values=[image]):
+ generator_func = functools.partial(tf.random_uniform, [],
+ min_delta, max_delta, seed=seed)
+ contrast_factor = _get_or_create_preprocess_rand_vars(
+ generator_func,
+ preprocessor_cache.PreprocessorCache.ADJUST_CONTRAST,
+ preprocess_vars_cache)
+ image = tf.image.adjust_contrast(image / 255, contrast_factor) * 255
+ image = tf.clip_by_value(image, clip_value_min=0.0, clip_value_max=255.0)
+ return image
+
+
+def random_adjust_hue(image,
+ max_delta=0.02,
+ seed=None,
+ preprocess_vars_cache=None):
+ """Randomly adjusts hue.
+
+ Makes sure the output image is still between 0 and 255.
+
+ Args:
+ image: rank 3 float32 tensor contains 1 image -> [height, width, channels]
+ with pixel values varying between [0, 255].
+    max_delta: change hue randomly with a delta in [-max_delta, max_delta].
+ seed: random seed.
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+
+ Returns:
+ image: image which is the same shape as input image.
+ """
+ with tf.name_scope('RandomAdjustHue', values=[image]):
+ generator_func = functools.partial(tf.random_uniform, [],
+ -max_delta, max_delta, seed=seed)
+ delta = _get_or_create_preprocess_rand_vars(
+ generator_func, preprocessor_cache.PreprocessorCache.ADJUST_HUE,
+ preprocess_vars_cache)
+ image = tf.image.adjust_hue(image / 255, delta) * 255
+ image = tf.clip_by_value(image, clip_value_min=0.0, clip_value_max=255.0)
+ return image
+
+
+def random_adjust_saturation(image,
+ min_delta=0.8,
+ max_delta=1.25,
+ seed=None,
+ preprocess_vars_cache=None):
+ """Randomly adjusts saturation.
+
+ Makes sure the output image is still between 0 and 255.
+
+ Args:
+ image: rank 3 float32 tensor contains 1 image -> [height, width, channels]
+ with pixel values varying between [0, 255].
+ min_delta: see max_delta.
+ max_delta: how much to change the saturation. Saturation will change with a
+ value between min_delta and max_delta. This value will be
+ multiplied to the current saturation of the image.
+ seed: random seed.
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+
+ Returns:
+ image: image which is the same shape as input image.
+ """
+ with tf.name_scope('RandomAdjustSaturation', values=[image]):
+ generator_func = functools.partial(tf.random_uniform, [],
+ min_delta, max_delta, seed=seed)
+ saturation_factor = _get_or_create_preprocess_rand_vars(
+ generator_func,
+ preprocessor_cache.PreprocessorCache.ADJUST_SATURATION,
+ preprocess_vars_cache)
+ image = tf.image.adjust_saturation(image / 255, saturation_factor) * 255
+ image = tf.clip_by_value(image, clip_value_min=0.0, clip_value_max=255.0)
+ return image
+
+
+def random_distort_color(image, color_ordering=0, preprocess_vars_cache=None):
+ """Randomly distorts color.
+
+ Randomly distorts color using a combination of brightness, hue, contrast and
+ saturation changes. Makes sure the output image is still between 0 and 255.
+
+ Args:
+ image: rank 3 float32 tensor contains 1 image -> [height, width, channels]
+ with pixel values varying between [0, 255].
+ color_ordering: Python int, a type of distortion (valid values: 0, 1).
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+
+ Returns:
+ image: image which is the same shape as input image.
+
+ Raises:
+ ValueError: if color_ordering is not in {0, 1}.
+ """
+ with tf.name_scope('RandomDistortColor', values=[image]):
+ if color_ordering == 0:
+ image = random_adjust_brightness(
+ image, max_delta=32. / 255.,
+ preprocess_vars_cache=preprocess_vars_cache)
+ image = random_adjust_saturation(
+ image, min_delta=0.5, max_delta=1.5,
+ preprocess_vars_cache=preprocess_vars_cache)
+ image = random_adjust_hue(
+ image, max_delta=0.2,
+ preprocess_vars_cache=preprocess_vars_cache)
+ image = random_adjust_contrast(
+ image, min_delta=0.5, max_delta=1.5,
+ preprocess_vars_cache=preprocess_vars_cache)
+
+ elif color_ordering == 1:
+ image = random_adjust_brightness(
+ image, max_delta=32. / 255.,
+ preprocess_vars_cache=preprocess_vars_cache)
+ image = random_adjust_contrast(
+ image, min_delta=0.5, max_delta=1.5,
+ preprocess_vars_cache=preprocess_vars_cache)
+ image = random_adjust_saturation(
+ image, min_delta=0.5, max_delta=1.5,
+ preprocess_vars_cache=preprocess_vars_cache)
+ image = random_adjust_hue(
+ image, max_delta=0.2,
+ preprocess_vars_cache=preprocess_vars_cache)
+ else:
+ raise ValueError('color_ordering must be in {0, 1}')
+ return image
+
+
+def random_jitter_boxes(boxes, ratio=0.05, seed=None):
+ """Randomly jitter boxes in image.
+
+ Args:
+ boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
+ Boxes are in normalized form meaning their coordinates vary
+ between [0, 1].
+ Each row is in the form of [ymin, xmin, ymax, xmax].
+ ratio: The ratio of the box width and height that the corners can jitter.
+ For example if the width is 100 pixels and ratio is 0.05,
+ the corners can jitter up to 5 pixels in the x direction.
+ seed: random seed.
+
+ Returns:
+ boxes: boxes which is the same shape as input boxes.
+ """
+ def random_jitter_box(box, ratio, seed):
+ """Randomly jitter box.
+
+ Args:
+ box: bounding box [1, 1, 4].
+ ratio: max ratio between jittered box and original box,
+ a number between [0, 0.5].
+ seed: random seed.
+
+ Returns:
+ jittered_box: jittered box.
+ """
+ rand_numbers = tf.random_uniform(
+ [1, 1, 4], minval=-ratio, maxval=ratio, dtype=tf.float32, seed=seed)
+ box_width = tf.subtract(box[0, 0, 3], box[0, 0, 1])
+ box_height = tf.subtract(box[0, 0, 2], box[0, 0, 0])
+ hw_coefs = tf.stack([box_height, box_width, box_height, box_width])
+ hw_rand_coefs = tf.multiply(hw_coefs, rand_numbers)
+ jittered_box = tf.add(box, hw_rand_coefs)
+ jittered_box = tf.clip_by_value(jittered_box, 0.0, 1.0)
+ return jittered_box
+
+ with tf.name_scope('RandomJitterBoxes', values=[boxes]):
+    # boxes are [N, 4]. Let's first make them [N, 1, 1, 4].
+ boxes_shape = tf.shape(boxes)
+ boxes = tf.expand_dims(boxes, 1)
+ boxes = tf.expand_dims(boxes, 2)
+
+ distorted_boxes = tf.map_fn(
+ lambda x: random_jitter_box(x, ratio, seed), boxes, dtype=tf.float32)
+
+ distorted_boxes = tf.reshape(distorted_boxes, boxes_shape)
+
+ return distorted_boxes
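+
+# Worked example (illustrative only): with ratio=0.05, a box of normalized
+# width 0.4 and height 0.2 has each corner jittered by at most +/-0.02 in x
+# and +/-0.01 in y before clipping to [0, 1].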
+
+
+def _strict_random_crop_image(image,
+ boxes,
+ labels,
+ label_weights,
+ label_confidences=None,
+ multiclass_scores=None,
+ masks=None,
+ keypoints=None,
+ min_object_covered=1.0,
+ aspect_ratio_range=(0.75, 1.33),
+ area_range=(0.1, 1.0),
+ overlap_thresh=0.3,
+ clip_boxes=True,
+ preprocess_vars_cache=None):
+ """Performs random crop.
+
+ Note: Keypoint coordinates that are outside the crop will be set to NaN, which
+ is consistent with the original keypoint encoding for non-existing keypoints.
+ This function always crops the image and is supposed to be used by
+ `random_crop_image` function which sometimes returns the image unchanged.
+
+ Args:
+ image: rank 3 float32 tensor containing 1 image -> [height, width, channels]
+ with pixel values varying between [0, 1].
+ boxes: rank 2 float32 tensor containing the bounding boxes with shape
+ [num_instances, 4].
+ Boxes are in normalized form meaning their coordinates vary
+ between [0, 1].
+ Each row is in the form of [ymin, xmin, ymax, xmax].
+ labels: rank 1 int32 tensor containing the object classes.
+ label_weights: float32 tensor of shape [num_instances] representing the
+ weight for each box.
+ label_confidences: (optional) float32 tensor of shape [num_instances]
+ representing the confidence for each box.
+ multiclass_scores: (optional) float32 tensor of shape
+ [num_instances, num_classes] representing the score for each box for each
+ class.
+ masks: (optional) rank 3 float32 tensor with shape
+ [num_instances, height, width] containing instance masks. The masks
+ are of the same height, width as the input `image`.
+ keypoints: (optional) rank 3 float32 tensor with shape
+ [num_instances, num_keypoints, 2]. The keypoints are in y-x
+ normalized coordinates.
+ min_object_covered: the cropped image must cover at least this fraction of
+ at least one of the input bounding boxes.
+ aspect_ratio_range: allowed range for aspect ratio of cropped image.
+ area_range: allowed range for area ratio between cropped image and the
+ original image.
+ overlap_thresh: minimum overlap thresh with new cropped
+ image to keep the box.
+ clip_boxes: whether to clip the boxes to the cropped image.
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+
+ Returns:
+ image: image which is the same rank as input image.
+ boxes: boxes which is the same rank as input boxes.
+ Boxes are in normalized form.
+ labels: new labels.
+
+ If label_weights, multiclass_scores, masks, or keypoints is not None, the
+ function also returns:
+ label_weights: rank 1 float32 tensor with shape [num_instances].
+ multiclass_scores: rank 2 float32 tensor with shape
+ [num_instances, num_classes]
+ masks: rank 3 float32 tensor with shape [num_instances, height, width]
+ containing instance masks.
+ keypoints: rank 3 float32 tensor with shape
+ [num_instances, num_keypoints, 2]
+ """
+ with tf.name_scope('RandomCropImage', values=[image, boxes]):
+ image_shape = tf.shape(image)
+
+    # boxes are [N, 4]. Let's first make them [N, 1, 4].
+ boxes_expanded = tf.expand_dims(
+ tf.clip_by_value(
+ boxes, clip_value_min=0.0, clip_value_max=1.0), 1)
+
+ generator_func = functools.partial(
+ tf.image.sample_distorted_bounding_box,
+ image_shape,
+ bounding_boxes=boxes_expanded,
+ min_object_covered=min_object_covered,
+ aspect_ratio_range=aspect_ratio_range,
+ area_range=area_range,
+ max_attempts=100,
+ use_image_if_no_bounding_boxes=True)
+
+ # for ssd cropping, each value of min_object_covered has its own
+ # cached random variable
+ sample_distorted_bounding_box = _get_or_create_preprocess_rand_vars(
+ generator_func,
+ preprocessor_cache.PreprocessorCache.STRICT_CROP_IMAGE,
+ preprocess_vars_cache, key=min_object_covered)
+
+ im_box_begin, im_box_size, im_box = sample_distorted_bounding_box
+
+ new_image = tf.slice(image, im_box_begin, im_box_size)
+ new_image.set_shape([None, None, image.get_shape()[2]])
+
+ # [1, 4]
+ im_box_rank2 = tf.squeeze(im_box, squeeze_dims=[0])
+ # [4]
+ im_box_rank1 = tf.squeeze(im_box)
+
+ boxlist = box_list.BoxList(boxes)
+ boxlist.add_field('labels', labels)
+
+ if label_weights is not None:
+ boxlist.add_field('label_weights', label_weights)
+
+ if label_confidences is not None:
+ boxlist.add_field('label_confidences', label_confidences)
+
+ if multiclass_scores is not None:
+ boxlist.add_field('multiclass_scores', multiclass_scores)
+
+ im_boxlist = box_list.BoxList(im_box_rank2)
+
+    # remove boxes that are completely outside the cropped image
+ boxlist, inside_window_ids = box_list_ops.prune_completely_outside_window(
+ boxlist, im_box_rank1)
+
+    # remove boxes whose overlap with the cropped image is below overlap_thresh
+ overlapping_boxlist, keep_ids = box_list_ops.prune_non_overlapping_boxes(
+ boxlist, im_boxlist, overlap_thresh)
+
+ # change the coordinate of the remaining boxes
+ new_labels = overlapping_boxlist.get_field('labels')
+ new_boxlist = box_list_ops.change_coordinate_frame(overlapping_boxlist,
+ im_box_rank1)
+ new_boxes = new_boxlist.get()
+ if clip_boxes:
+ new_boxes = tf.clip_by_value(
+ new_boxes, clip_value_min=0.0, clip_value_max=1.0)
+
+ result = [new_image, new_boxes, new_labels]
+
+ if label_weights is not None:
+ new_label_weights = overlapping_boxlist.get_field('label_weights')
+ result.append(new_label_weights)
+
+ if label_confidences is not None:
+ new_label_confidences = overlapping_boxlist.get_field('label_confidences')
+ result.append(new_label_confidences)
+
+ if multiclass_scores is not None:
+ new_multiclass_scores = overlapping_boxlist.get_field('multiclass_scores')
+ result.append(new_multiclass_scores)
+
+ if masks is not None:
+ masks_of_boxes_inside_window = tf.gather(masks, inside_window_ids)
+ masks_of_boxes_completely_inside_window = tf.gather(
+ masks_of_boxes_inside_window, keep_ids)
+ masks_box_begin = [0, im_box_begin[0], im_box_begin[1]]
+ masks_box_size = [-1, im_box_size[0], im_box_size[1]]
+ new_masks = tf.slice(
+ masks_of_boxes_completely_inside_window,
+ masks_box_begin, masks_box_size)
+ result.append(new_masks)
+
+ if keypoints is not None:
+ keypoints_of_boxes_inside_window = tf.gather(keypoints, inside_window_ids)
+ keypoints_of_boxes_completely_inside_window = tf.gather(
+ keypoints_of_boxes_inside_window, keep_ids)
+ new_keypoints = keypoint_ops.change_coordinate_frame(
+ keypoints_of_boxes_completely_inside_window, im_box_rank1)
+ if clip_boxes:
+ new_keypoints = keypoint_ops.prune_outside_window(new_keypoints,
+ [0.0, 0.0, 1.0, 1.0])
+ result.append(new_keypoints)
+
+ return tuple(result)
+
+
+def random_crop_image(image,
+ boxes,
+ labels,
+ label_weights,
+ label_confidences=None,
+ multiclass_scores=None,
+ masks=None,
+ keypoints=None,
+ min_object_covered=1.0,
+ aspect_ratio_range=(0.75, 1.33),
+ area_range=(0.1, 1.0),
+ overlap_thresh=0.3,
+ clip_boxes=True,
+ random_coef=0.0,
+ seed=None,
+ preprocess_vars_cache=None):
+ """Randomly crops the image.
+
+ Given the input image and its bounding boxes, this op randomly
+ crops a subimage. Given a user-provided set of input constraints,
+ the crop window is resampled until it satisfies these constraints.
+ If within 100 trials it is unable to find a valid crop, the original
+ image is returned. See the Args section for a description of the input
+  constraints. Both the input boxes and the returned boxes are in normalized
+  form (i.e., they lie in the unit square [0, 1]).
+ This function will return the original image with probability random_coef.
+
+ Note: Keypoint coordinates that are outside the crop will be set to NaN, which
+ is consistent with the original keypoint encoding for non-existing keypoints.
+
+ Args:
+ image: rank 3 float32 tensor contains 1 image -> [height, width, channels]
+ with pixel values varying between [0, 1].
+ boxes: rank 2 float32 tensor containing the bounding boxes with shape
+ [num_instances, 4].
+ Boxes are in normalized form meaning their coordinates vary
+ between [0, 1].
+ Each row is in the form of [ymin, xmin, ymax, xmax].
+ labels: rank 1 int32 tensor containing the object classes.
+ label_weights: float32 tensor of shape [num_instances] representing the
+ weight for each box.
+ label_confidences: (optional) float32 tensor of shape [num_instances].
+ representing the confidence for each box.
+ multiclass_scores: (optional) float32 tensor of shape
+ [num_instances, num_classes] representing the score for each box for each
+ class.
+ masks: (optional) rank 3 float32 tensor with shape
+ [num_instances, height, width] containing instance masks. The masks
+ are of the same height, width as the input `image`.
+ keypoints: (optional) rank 3 float32 tensor with shape
+ [num_instances, num_keypoints, 2]. The keypoints are in y-x
+ normalized coordinates.
+ min_object_covered: the cropped image must cover at least this fraction of
+ at least one of the input bounding boxes.
+ aspect_ratio_range: allowed range for aspect ratio of cropped image.
+ area_range: allowed range for area ratio between cropped image and the
+ original image.
+ overlap_thresh: minimum overlap thresh with new cropped
+ image to keep the box.
+ clip_boxes: whether to clip the boxes to the cropped image.
+ random_coef: a random coefficient that defines the chance of getting the
+ original image. If random_coef is 0, we will always get the
+ cropped image, and if it is 1.0, we will always get the
+ original image.
+ seed: random seed.
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+
+ Returns:
+ image: Image shape will be [new_height, new_width, channels].
+ boxes: boxes which is the same rank as input boxes. Boxes are in normalized
+ form.
+ labels: new labels.
+
+ If label_weights, multiclass_scores, masks, or keypoints is not None, the
+ function also returns:
+ label_weights: rank 1 float32 tensor with shape [num_instances].
+ multiclass_scores: rank 2 float32 tensor with shape
+ [num_instances, num_classes]
+ masks: rank 3 float32 tensor with shape [num_instances, height, width]
+ containing instance masks.
+ keypoints: rank 3 float32 tensor with shape
+ [num_instances, num_keypoints, 2]
+ """
+
+ def strict_random_crop_image_fn():
+ return _strict_random_crop_image(
+ image,
+ boxes,
+ labels,
+ label_weights,
+ label_confidences=label_confidences,
+ multiclass_scores=multiclass_scores,
+ masks=masks,
+ keypoints=keypoints,
+ min_object_covered=min_object_covered,
+ aspect_ratio_range=aspect_ratio_range,
+ area_range=area_range,
+ overlap_thresh=overlap_thresh,
+ clip_boxes=clip_boxes,
+ preprocess_vars_cache=preprocess_vars_cache)
+
+ # avoids tf.cond to make faster RCNN training on borg. See b/140057645.
+ if random_coef < sys.float_info.min:
+ result = strict_random_crop_image_fn()
+ else:
+ generator_func = functools.partial(tf.random_uniform, [], seed=seed)
+ do_a_crop_random = _get_or_create_preprocess_rand_vars(
+ generator_func, preprocessor_cache.PreprocessorCache.CROP_IMAGE,
+ preprocess_vars_cache)
+ do_a_crop_random = tf.greater(do_a_crop_random, random_coef)
+
+ outputs = [image, boxes, labels]
+
+ if label_weights is not None:
+ outputs.append(label_weights)
+ if label_confidences is not None:
+ outputs.append(label_confidences)
+ if multiclass_scores is not None:
+ outputs.append(multiclass_scores)
+ if masks is not None:
+ outputs.append(masks)
+ if keypoints is not None:
+ outputs.append(keypoints)
+
+ result = tf.cond(do_a_crop_random, strict_random_crop_image_fn,
+ lambda: tuple(outputs))
+ return result
+
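+
+# Illustrative usage sketch (the helper below is hypothetical and not part of
+# the preprocessing API): it only shows how random_crop_image might be called
+# on dummy inputs. With the default random_coef=0.0 the crop branch is always
+# taken.
+def _example_random_crop_image_usage():
+  image = tf.random_uniform([480, 640, 3], dtype=tf.float32)  # dummy image
+  boxes = tf.constant([[0.1, 0.2, 0.6, 0.8]], dtype=tf.float32)
+  labels = tf.constant([1], dtype=tf.int32)
+  label_weights = tf.constant([1.0], dtype=tf.float32)
+  # Returns (new_image, new_boxes, new_labels, new_label_weights) since only
+  # label_weights is provided among the optional tensors.
+  return random_crop_image(
+      image=image, boxes=boxes, labels=labels, label_weights=label_weights,
+      min_object_covered=0.5, area_range=(0.3, 1.0), seed=42)
+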
+
+def random_pad_image(image,
+ boxes,
+ min_image_size=None,
+ max_image_size=None,
+ pad_color=None,
+ seed=None,
+ preprocess_vars_cache=None):
+ """Randomly pads the image.
+
+  This function randomly pads the image. The final size of the
+  padded image will be between min_image_size and max_image_size.
+  If min_image_size is smaller than the input image size, min_image_size is
+  set to the input image size; the same applies to max_image_size. The input
+  image is placed at a uniformly random location inside the padded image, and
+  the boxes keep their position relative to the original image.
+
+ Args:
+ image: rank 3 float32 tensor containing 1 image -> [height, width, channels]
+ with pixel values varying between [0, 1].
+ boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
+ Boxes are in normalized form meaning their coordinates vary
+ between [0, 1].
+ Each row is in the form of [ymin, xmin, ymax, xmax].
+ min_image_size: a tensor of size [min_height, min_width], type tf.int32.
+ If passed as None, will be set to image size
+ [height, width].
+ max_image_size: a tensor of size [max_height, max_width], type tf.int32.
+ If passed as None, will be set to twice the
+ image [height * 2, width * 2].
+ pad_color: padding color. A rank 1 tensor of [3] with dtype=tf.float32.
+               If set to None, it will be set to the average color of the input
+ image.
+ seed: random seed.
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+
+ Returns:
+ image: Image shape will be [new_height, new_width, channels].
+ boxes: boxes which is the same rank as input boxes. Boxes are in normalized
+ form.
+ """
+ if pad_color is None:
+ pad_color = tf.reduce_mean(image, axis=[0, 1])
+
+ image_shape = tf.shape(image)
+ image_height = image_shape[0]
+ image_width = image_shape[1]
+
+ if max_image_size is None:
+ max_image_size = tf.stack([image_height * 2, image_width * 2])
+ max_image_size = tf.maximum(max_image_size,
+ tf.stack([image_height, image_width]))
+
+ if min_image_size is None:
+ min_image_size = tf.stack([image_height, image_width])
+ min_image_size = tf.maximum(min_image_size,
+ tf.stack([image_height, image_width]))
+
+ target_height = tf.cond(
+ max_image_size[0] > min_image_size[0],
+ lambda: _random_integer(min_image_size[0], max_image_size[0], seed),
+ lambda: max_image_size[0])
+
+ target_width = tf.cond(
+ max_image_size[1] > min_image_size[1],
+ lambda: _random_integer(min_image_size[1], max_image_size[1], seed),
+ lambda: max_image_size[1])
+
+ offset_height = tf.cond(
+ target_height > image_height,
+ lambda: _random_integer(0, target_height - image_height, seed),
+ lambda: tf.constant(0, dtype=tf.int32))
+
+ offset_width = tf.cond(
+ target_width > image_width,
+ lambda: _random_integer(0, target_width - image_width, seed),
+ lambda: tf.constant(0, dtype=tf.int32))
+
+ gen_func = lambda: (target_height, target_width, offset_height, offset_width)
+ params = _get_or_create_preprocess_rand_vars(
+ gen_func, preprocessor_cache.PreprocessorCache.PAD_IMAGE,
+ preprocess_vars_cache)
+ target_height, target_width, offset_height, offset_width = params
+
+ new_image = tf.image.pad_to_bounding_box(
+ image,
+ offset_height=offset_height,
+ offset_width=offset_width,
+ target_height=target_height,
+ target_width=target_width)
+
+ # Setting color of the padded pixels
+ image_ones = tf.ones_like(image)
+ image_ones_padded = tf.image.pad_to_bounding_box(
+ image_ones,
+ offset_height=offset_height,
+ offset_width=offset_width,
+ target_height=target_height,
+ target_width=target_width)
+ image_color_padded = (1.0 - image_ones_padded) * pad_color
+ new_image += image_color_padded
+
+ # setting boxes
+ new_window = tf.to_float(
+ tf.stack([
+ -offset_height, -offset_width, target_height - offset_height,
+ target_width - offset_width
+ ]))
+ new_window /= tf.to_float(
+ tf.stack([image_height, image_width, image_height, image_width]))
+ boxlist = box_list.BoxList(boxes)
+ new_boxlist = box_list_ops.change_coordinate_frame(boxlist, new_window)
+ new_boxes = new_boxlist.get()
+
+ return new_image, new_boxes
+
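+
+# Illustrative usage sketch (hypothetical helper, not part of the
+# preprocessing API): pads a dummy image to a random size between its
+# original size and twice that size, filling new pixels with the mean image
+# color (pad_color=None), and returns the correspondingly shifted boxes.
+def _example_random_pad_image_usage():
+  image = tf.random_uniform([300, 400, 3], dtype=tf.float32)
+  boxes = tf.constant([[0.25, 0.25, 0.75, 0.75]], dtype=tf.float32)
+  return random_pad_image(image, boxes, seed=7)
+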
+
+def random_crop_pad_image(image,
+ boxes,
+ labels,
+ label_weights,
+ label_confidences=None,
+ multiclass_scores=None,
+ min_object_covered=1.0,
+ aspect_ratio_range=(0.75, 1.33),
+ area_range=(0.1, 1.0),
+ overlap_thresh=0.3,
+ clip_boxes=True,
+ random_coef=0.0,
+ min_padded_size_ratio=(1.0, 1.0),
+ max_padded_size_ratio=(2.0, 2.0),
+ pad_color=None,
+ seed=None,
+ preprocess_vars_cache=None):
+ """Randomly crops and pads the image.
+
+ Given an input image and its bounding boxes, this op first randomly crops
+ the image and then randomly pads the image with background values. Parameters
+  min_padded_size_ratio and max_padded_size_ratio determine the range of the
+  final output image size. Specifically, the final image will have a size
+ in the range of min_padded_size_ratio * tf.shape(image) and
+ max_padded_size_ratio * tf.shape(image). Note that these ratios are with
+ respect to the size of the original image, so we can't capture the same
+ effect easily by independently applying RandomCropImage
+ followed by RandomPadImage.
+
+ Args:
+ image: rank 3 float32 tensor containing 1 image -> [height, width, channels]
+ with pixel values varying between [0, 1].
+ boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
+ Boxes are in normalized form meaning their coordinates vary
+ between [0, 1].
+ Each row is in the form of [ymin, xmin, ymax, xmax].
+ labels: rank 1 int32 tensor containing the object classes.
+    label_weights: rank 1 float32 tensor containing the label weights.
+    label_confidences: rank 1 float32 tensor containing the label confidences.
+ multiclass_scores: (optional) float32 tensor of shape
+ [num_instances, num_classes] representing the score for each box for each
+ class.
+ min_object_covered: the cropped image must cover at least this fraction of
+ at least one of the input bounding boxes.
+ aspect_ratio_range: allowed range for aspect ratio of cropped image.
+ area_range: allowed range for area ratio between cropped image and the
+ original image.
+    overlap_thresh: minimum overlap threshold required with the newly cropped
+ image to keep the box.
+ clip_boxes: whether to clip the boxes to the cropped image.
+ random_coef: a random coefficient that defines the chance of getting the
+ original image. If random_coef is 0, we will always get the
+ cropped image, and if it is 1.0, we will always get the
+ original image.
+ min_padded_size_ratio: min ratio of padded image height and width to the
+ input image's height and width.
+ max_padded_size_ratio: max ratio of padded image height and width to the
+ input image's height and width.
+ pad_color: padding color. A rank 1 tensor of [3] with dtype=tf.float32.
+               If set to None, it will be set to the average color of the
+               randomly cropped image.
+ seed: random seed.
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+
+ Returns:
+ padded_image: padded image.
+ padded_boxes: boxes which is the same rank as input boxes. Boxes are in
+ normalized form.
+ cropped_labels: cropped labels.
+    if label_weights is not None also returns:
+    cropped_label_weights: cropped label weights.
+    if label_confidences is not None also returns:
+    cropped_label_confidences: cropped label confidences.
+    if multiclass_scores is not None also returns:
+    cropped_multiclass_scores: cropped multiclass scores.
+
+ """
+ image_size = tf.shape(image)
+ image_height = image_size[0]
+ image_width = image_size[1]
+ result = random_crop_image(
+ image=image,
+ boxes=boxes,
+ labels=labels,
+ label_weights=label_weights,
+ label_confidences=label_confidences,
+ multiclass_scores=multiclass_scores,
+ min_object_covered=min_object_covered,
+ aspect_ratio_range=aspect_ratio_range,
+ area_range=area_range,
+ overlap_thresh=overlap_thresh,
+ clip_boxes=clip_boxes,
+ random_coef=random_coef,
+ seed=seed,
+ preprocess_vars_cache=preprocess_vars_cache)
+
+ cropped_image, cropped_boxes, cropped_labels = result[:3]
+
+ min_image_size = tf.to_int32(
+ tf.to_float(tf.stack([image_height, image_width])) *
+ min_padded_size_ratio)
+ max_image_size = tf.to_int32(
+ tf.to_float(tf.stack([image_height, image_width])) *
+ max_padded_size_ratio)
+
+ padded_image, padded_boxes = random_pad_image(
+ cropped_image,
+ cropped_boxes,
+ min_image_size=min_image_size,
+ max_image_size=max_image_size,
+ pad_color=pad_color,
+ seed=seed,
+ preprocess_vars_cache=preprocess_vars_cache)
+
+ cropped_padded_output = (padded_image, padded_boxes, cropped_labels)
+
+ index = 3
+ if label_weights is not None:
+ cropped_label_weights = result[index]
+ cropped_padded_output += (cropped_label_weights,)
+ index += 1
+
+ if label_confidences is not None:
+ cropped_label_confidences = result[index]
+ cropped_padded_output += (cropped_label_confidences,)
+ index += 1
+
+ if multiclass_scores is not None:
+ cropped_multiclass_scores = result[index]
+ cropped_padded_output += (cropped_multiclass_scores,)
+
+ return cropped_padded_output
+
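+
+# Illustrative usage sketch (hypothetical helper, not part of the
+# preprocessing API). With min_padded_size_ratio=(1.0, 1.0) and
+# max_padded_size_ratio=(2.0, 2.0), the padded output is between 1x and 2x
+# the size of the *original* image, however small the intermediate crop is.
+def _example_random_crop_pad_image_usage():
+  image = tf.random_uniform([240, 320, 3], dtype=tf.float32)
+  boxes = tf.constant([[0.1, 0.1, 0.9, 0.9]], dtype=tf.float32)
+  labels = tf.constant([3], dtype=tf.int32)
+  weights = tf.constant([1.0], dtype=tf.float32)
+  # Returns (padded_image, padded_boxes, cropped_labels, cropped_label_weights).
+  return random_crop_pad_image(image, boxes, labels, weights, seed=1)
+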
+
+def random_crop_to_aspect_ratio(image,
+ boxes,
+ labels,
+ label_weights,
+ label_confidences=None,
+ multiclass_scores=None,
+ masks=None,
+ keypoints=None,
+ aspect_ratio=1.0,
+ overlap_thresh=0.3,
+ clip_boxes=True,
+ seed=None,
+ preprocess_vars_cache=None):
+ """Randomly crops an image to the specified aspect ratio.
+
+  Randomly crops a portion of the image such that the crop is of the
+ specified aspect ratio, and the crop is as large as possible. If the specified
+ aspect ratio is larger than the aspect ratio of the image, this op will
+ randomly remove rows from the top and bottom of the image. If the specified
+ aspect ratio is less than the aspect ratio of the image, this op will randomly
+ remove cols from the left and right of the image. If the specified aspect
+ ratio is the same as the aspect ratio of the image, this op will return the
+ image.
+
+ Args:
+ image: rank 3 float32 tensor contains 1 image -> [height, width, channels]
+ with pixel values varying between [0, 1].
+ boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
+ Boxes are in normalized form meaning their coordinates vary
+ between [0, 1].
+ Each row is in the form of [ymin, xmin, ymax, xmax].
+ labels: rank 1 int32 tensor containing the object classes.
+ label_weights: float32 tensor of shape [num_instances] representing the
+ weight for each box.
+ label_confidences: (optional) float32 tensor of shape [num_instances]
+ representing the confidence for each box.
+ multiclass_scores: (optional) float32 tensor of shape
+ [num_instances, num_classes] representing the score for each box for each
+ class.
+ masks: (optional) rank 3 float32 tensor with shape
+ [num_instances, height, width] containing instance masks. The masks
+ are of the same height, width as the input `image`.
+ keypoints: (optional) rank 3 float32 tensor with shape
+ [num_instances, num_keypoints, 2]. The keypoints are in y-x
+ normalized coordinates.
+ aspect_ratio: the aspect ratio of cropped image.
+    overlap_thresh: minimum overlap threshold required with the newly cropped
+ image to keep the box.
+ clip_boxes: whether to clip the boxes to the cropped image.
+ seed: random seed.
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+
+ Returns:
+ image: image which is the same rank as input image.
+ boxes: boxes which is the same rank as input boxes.
+ Boxes are in normalized form.
+ labels: new labels.
+
+    If label_weights, label_confidences, multiclass_scores, masks, or
+    keypoints is not None, the function also returns:
+    label_weights: rank 1 float32 tensor with shape [num_instances].
+    label_confidences: rank 1 float32 tensor with shape [num_instances].
+    multiclass_scores: rank 2 float32 tensor with shape
+                       [num_instances, num_classes]
+    masks: rank 3 float32 tensor with shape [num_instances, height, width]
+           containing instance masks.
+    keypoints: rank 3 float32 tensor with shape
+               [num_instances, num_keypoints, 2]
+
+ Raises:
+ ValueError: If image is not a 3D tensor.
+ """
+ if len(image.get_shape()) != 3:
+ raise ValueError('Image should be 3D tensor')
+
+ with tf.name_scope('RandomCropToAspectRatio', values=[image]):
+ image_shape = tf.shape(image)
+ orig_height = image_shape[0]
+ orig_width = image_shape[1]
+ orig_aspect_ratio = tf.to_float(orig_width) / tf.to_float(orig_height)
+ new_aspect_ratio = tf.constant(aspect_ratio, dtype=tf.float32)
+ def target_height_fn():
+ return tf.to_int32(tf.round(tf.to_float(orig_width) / new_aspect_ratio))
+
+ target_height = tf.cond(orig_aspect_ratio >= new_aspect_ratio,
+ lambda: orig_height, target_height_fn)
+
+ def target_width_fn():
+ return tf.to_int32(tf.round(tf.to_float(orig_height) * new_aspect_ratio))
+
+ target_width = tf.cond(orig_aspect_ratio <= new_aspect_ratio,
+ lambda: orig_width, target_width_fn)
+
+    # Either offset_height = 0 and offset_width is randomly chosen from
+    # [0, orig_width - target_width], or else offset_width = 0 and
+    # offset_height is randomly chosen from [0, orig_height - target_height].
+ offset_height = _random_integer(0, orig_height - target_height + 1, seed)
+ offset_width = _random_integer(0, orig_width - target_width + 1, seed)
+
+ generator_func = lambda: (offset_height, offset_width)
+ offset_height, offset_width = _get_or_create_preprocess_rand_vars(
+ generator_func,
+ preprocessor_cache.PreprocessorCache.CROP_TO_ASPECT_RATIO,
+ preprocess_vars_cache)
+
+ new_image = tf.image.crop_to_bounding_box(
+ image, offset_height, offset_width, target_height, target_width)
+
+ im_box = tf.stack([
+ tf.to_float(offset_height) / tf.to_float(orig_height),
+ tf.to_float(offset_width) / tf.to_float(orig_width),
+ tf.to_float(offset_height + target_height) / tf.to_float(orig_height),
+ tf.to_float(offset_width + target_width) / tf.to_float(orig_width)
+ ])
+
+ boxlist = box_list.BoxList(boxes)
+ boxlist.add_field('labels', labels)
+
+ boxlist.add_field('label_weights', label_weights)
+
+ if label_confidences is not None:
+ boxlist.add_field('label_confidences', label_confidences)
+
+ if multiclass_scores is not None:
+ boxlist.add_field('multiclass_scores', multiclass_scores)
+
+ im_boxlist = box_list.BoxList(tf.expand_dims(im_box, 0))
+
+ # remove boxes whose overlap with the image is less than overlap_thresh
+ overlapping_boxlist, keep_ids = box_list_ops.prune_non_overlapping_boxes(
+ boxlist, im_boxlist, overlap_thresh)
+
+ # change the coordinate of the remaining boxes
+ new_labels = overlapping_boxlist.get_field('labels')
+ new_boxlist = box_list_ops.change_coordinate_frame(overlapping_boxlist,
+ im_box)
+ if clip_boxes:
+ new_boxlist = box_list_ops.clip_to_window(
+ new_boxlist, tf.constant([0.0, 0.0, 1.0, 1.0], tf.float32))
+ new_boxes = new_boxlist.get()
+
+ result = [new_image, new_boxes, new_labels]
+
+ new_label_weights = overlapping_boxlist.get_field('label_weights')
+ result.append(new_label_weights)
+
+ if label_confidences is not None:
+ new_label_confidences = (
+ overlapping_boxlist.get_field('label_confidences'))
+ result.append(new_label_confidences)
+
+ if multiclass_scores is not None:
+ new_multiclass_scores = overlapping_boxlist.get_field('multiclass_scores')
+ result.append(new_multiclass_scores)
+
+ if masks is not None:
+ masks_inside_window = tf.gather(masks, keep_ids)
+ masks_box_begin = tf.stack([0, offset_height, offset_width])
+ masks_box_size = tf.stack([-1, target_height, target_width])
+ new_masks = tf.slice(masks_inside_window, masks_box_begin, masks_box_size)
+ result.append(new_masks)
+
+ if keypoints is not None:
+ keypoints_inside_window = tf.gather(keypoints, keep_ids)
+ new_keypoints = keypoint_ops.change_coordinate_frame(
+ keypoints_inside_window, im_box)
+ if clip_boxes:
+ new_keypoints = keypoint_ops.prune_outside_window(new_keypoints,
+ [0.0, 0.0, 1.0, 1.0])
+ result.append(new_keypoints)
+
+ return tuple(result)
+
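+
+# Illustrative usage sketch (hypothetical helper, not part of the
+# preprocessing API). For a 480x640 image, requesting aspect_ratio=1.0 keeps
+# the full height and randomly crops the width down to 480 columns.
+def _example_random_crop_to_aspect_ratio_usage():
+  image = tf.random_uniform([480, 640, 3], dtype=tf.float32)
+  boxes = tf.constant([[0.0, 0.0, 1.0, 1.0]], dtype=tf.float32)
+  labels = tf.constant([2], dtype=tf.int32)
+  weights = tf.constant([1.0], dtype=tf.float32)
+  # Returns (new_image, new_boxes, new_labels, new_label_weights).
+  return random_crop_to_aspect_ratio(
+      image, boxes, labels, weights, aspect_ratio=1.0, seed=3)
+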
+
+def random_pad_to_aspect_ratio(image,
+ boxes,
+ masks=None,
+ keypoints=None,
+ aspect_ratio=1.0,
+ min_padded_size_ratio=(1.0, 1.0),
+ max_padded_size_ratio=(2.0, 2.0),
+ seed=None,
+ preprocess_vars_cache=None):
+ """Randomly zero pads an image to the specified aspect ratio.
+
+ Pads the image so that the resulting image will have the specified aspect
+ ratio without scaling less than the min_padded_size_ratio or more than the
+ max_padded_size_ratio. If the min_padded_size_ratio or max_padded_size_ratio
+ is lower than what is possible to maintain the aspect ratio, then this method
+ will use the least padding to achieve the specified aspect ratio.
+
+ Args:
+ image: rank 3 float32 tensor contains 1 image -> [height, width, channels]
+ with pixel values varying between [0, 1].
+ boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
+ Boxes are in normalized form meaning their coordinates vary
+ between [0, 1].
+ Each row is in the form of [ymin, xmin, ymax, xmax].
+ masks: (optional) rank 3 float32 tensor with shape
+ [num_instances, height, width] containing instance masks. The masks
+ are of the same height, width as the input `image`.
+ keypoints: (optional) rank 3 float32 tensor with shape
+ [num_instances, num_keypoints, 2]. The keypoints are in y-x
+ normalized coordinates.
+ aspect_ratio: aspect ratio of the final image.
+ min_padded_size_ratio: min ratio of padded image height and width to the
+ input image's height and width.
+ max_padded_size_ratio: max ratio of padded image height and width to the
+ input image's height and width.
+ seed: random seed.
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+
+ Returns:
+ image: image which is the same rank as input image.
+ boxes: boxes which is the same rank as input boxes.
+ Boxes are in normalized form.
+
+    If masks or keypoints is not None, the function also returns:
+ masks: rank 3 float32 tensor with shape [num_instances, height, width]
+ containing instance masks.
+ keypoints: rank 3 float32 tensor with shape
+ [num_instances, num_keypoints, 2]
+
+ Raises:
+ ValueError: If image is not a 3D tensor.
+ """
+ if len(image.get_shape()) != 3:
+ raise ValueError('Image should be 3D tensor')
+
+ with tf.name_scope('RandomPadToAspectRatio', values=[image]):
+ image_shape = tf.shape(image)
+ image_height = tf.to_float(image_shape[0])
+ image_width = tf.to_float(image_shape[1])
+ image_aspect_ratio = image_width / image_height
+ new_aspect_ratio = tf.constant(aspect_ratio, dtype=tf.float32)
+ target_height = tf.cond(
+ image_aspect_ratio <= new_aspect_ratio,
+ lambda: image_height,
+ lambda: image_width / new_aspect_ratio)
+ target_width = tf.cond(
+ image_aspect_ratio >= new_aspect_ratio,
+ lambda: image_width,
+ lambda: image_height * new_aspect_ratio)
+
+ min_height = tf.maximum(
+ min_padded_size_ratio[0] * image_height, target_height)
+ min_width = tf.maximum(
+ min_padded_size_ratio[1] * image_width, target_width)
+ max_height = tf.maximum(
+ max_padded_size_ratio[0] * image_height, target_height)
+ max_width = tf.maximum(
+ max_padded_size_ratio[1] * image_width, target_width)
+
+ max_scale = tf.minimum(max_height / target_height, max_width / target_width)
+ min_scale = tf.minimum(
+ max_scale,
+ tf.maximum(min_height / target_height, min_width / target_width))
+
+ generator_func = functools.partial(tf.random_uniform, [],
+ min_scale, max_scale, seed=seed)
+ scale = _get_or_create_preprocess_rand_vars(
+ generator_func,
+ preprocessor_cache.PreprocessorCache.PAD_TO_ASPECT_RATIO,
+ preprocess_vars_cache)
+
+ target_height = tf.round(scale * target_height)
+ target_width = tf.round(scale * target_width)
+
+ new_image = tf.image.pad_to_bounding_box(
+ image, 0, 0, tf.to_int32(target_height), tf.to_int32(target_width))
+
+ im_box = tf.stack([
+ 0.0,
+ 0.0,
+ target_height / image_height,
+ target_width / image_width
+ ])
+ boxlist = box_list.BoxList(boxes)
+ new_boxlist = box_list_ops.change_coordinate_frame(boxlist, im_box)
+ new_boxes = new_boxlist.get()
+
+ result = [new_image, new_boxes]
+
+ if masks is not None:
+ new_masks = tf.expand_dims(masks, -1)
+ new_masks = tf.image.pad_to_bounding_box(new_masks, 0, 0,
+ tf.to_int32(target_height),
+ tf.to_int32(target_width))
+ new_masks = tf.squeeze(new_masks, [-1])
+ result.append(new_masks)
+
+ if keypoints is not None:
+ new_keypoints = keypoint_ops.change_coordinate_frame(keypoints, im_box)
+ result.append(new_keypoints)
+
+ return tuple(result)
+
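+
+# Illustrative usage sketch (hypothetical helper, not part of the
+# preprocessing API). For a 480x640 image and aspect_ratio=1.0, the result is
+# a square zero-padded image between 640x640 and 960x960 with the default
+# padded size ratios.
+def _example_random_pad_to_aspect_ratio_usage():
+  image = tf.random_uniform([480, 640, 3], dtype=tf.float32)
+  boxes = tf.constant([[0.2, 0.2, 0.8, 0.8]], dtype=tf.float32)
+  # Returns (new_image, new_boxes); masks and keypoints are omitted here.
+  return random_pad_to_aspect_ratio(image, boxes, aspect_ratio=1.0, seed=5)
+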
+
+def random_black_patches(image,
+ max_black_patches=10,
+ probability=0.5,
+ size_to_image_ratio=0.1,
+ random_seed=None,
+ preprocess_vars_cache=None):
+ """Randomly adds some black patches to the image.
+
+ This op adds up to max_black_patches square black patches of a fixed size
+ to the image where size is specified via the size_to_image_ratio parameter.
+
+ Args:
+ image: rank 3 float32 tensor containing 1 image -> [height, width, channels]
+ with pixel values varying between [0, 1].
+ max_black_patches: number of times that the function tries to add a
+                       black patch to the image.
+    probability: at each try, the probability of adding a black patch.
+ size_to_image_ratio: Determines the ratio of the size of the black patches
+ to the size of the image.
+ box_size = size_to_image_ratio *
+ min(image_width, image_height)
+ random_seed: random seed.
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+
+ Returns:
+    image: image with randomly added black patches.
+ """
+ def add_black_patch_to_image(image, idx):
+ """Function for adding one patch to the image.
+
+ Args:
+ image: image
+ idx: counter for number of patches that could have been added
+
+ Returns:
+ image with a randomly added black box
+ """
+ image_shape = tf.shape(image)
+ image_height = image_shape[0]
+ image_width = image_shape[1]
+ box_size = tf.to_int32(
+ tf.multiply(
+ tf.minimum(tf.to_float(image_height), tf.to_float(image_width)),
+ size_to_image_ratio))
+
+ generator_func = functools.partial(tf.random_uniform, [], minval=0.0,
+ maxval=(1.0 - size_to_image_ratio),
+ seed=random_seed)
+ normalized_y_min = _get_or_create_preprocess_rand_vars(
+ generator_func,
+ preprocessor_cache.PreprocessorCache.ADD_BLACK_PATCH,
+ preprocess_vars_cache, key=str(idx) + 'y')
+ normalized_x_min = _get_or_create_preprocess_rand_vars(
+ generator_func,
+ preprocessor_cache.PreprocessorCache.ADD_BLACK_PATCH,
+ preprocess_vars_cache, key=str(idx) + 'x')
+
+ y_min = tf.to_int32(normalized_y_min * tf.to_float(image_height))
+ x_min = tf.to_int32(normalized_x_min * tf.to_float(image_width))
+ black_box = tf.ones([box_size, box_size, 3], dtype=tf.float32)
+ mask = 1.0 - tf.image.pad_to_bounding_box(black_box, y_min, x_min,
+ image_height, image_width)
+ image = tf.multiply(image, mask)
+ return image
+
+ with tf.name_scope('RandomBlackPatchInImage', values=[image]):
+ for idx in range(max_black_patches):
+ generator_func = functools.partial(tf.random_uniform, [],
+ minval=0.0, maxval=1.0,
+ dtype=tf.float32, seed=random_seed)
+ random_prob = _get_or_create_preprocess_rand_vars(
+ generator_func,
+ preprocessor_cache.PreprocessorCache.BLACK_PATCHES,
+ preprocess_vars_cache, key=idx)
+ image = tf.cond(
+ tf.greater(random_prob, probability), lambda: image,
+ functools.partial(add_black_patch_to_image, image=image, idx=idx))
+ return image
+
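+
+# Illustrative usage sketch (hypothetical helper, not part of the
+# preprocessing API). Each of the 5 trials independently adds, with
+# probability 0.5, a square black patch whose side is 10% of the shorter
+# image dimension.
+def _example_random_black_patches_usage():
+  image = tf.random_uniform([256, 256, 3], dtype=tf.float32)
+  return random_black_patches(
+      image, max_black_patches=5, probability=0.5, size_to_image_ratio=0.1)
+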
+
+def image_to_float(image):
+ """Used in Faster R-CNN. Casts image pixel values to float.
+
+ Args:
+    image: input image, which may be in tf.uint8 or another non-float format.
+
+ Returns:
+ image: image in tf.float32 format.
+ """
+ with tf.name_scope('ImageToFloat', values=[image]):
+ image = tf.to_float(image)
+ return image
+
+
+def random_resize_method(image, target_size, preprocess_vars_cache=None):
+ """Uses a random resize method to resize the image to target size.
+
+ Args:
+ image: a rank 3 tensor.
+ target_size: a list of [target_height, target_width]
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+
+ Returns:
+ resized image.
+ """
+
+ resized_image = _apply_with_random_selector(
+ image,
+ lambda x, method: tf.image.resize_images(x, target_size, method),
+ num_cases=4,
+ preprocess_vars_cache=preprocess_vars_cache,
+ key=preprocessor_cache.PreprocessorCache.RESIZE_METHOD)
+
+ return resized_image
+
+
+def _compute_new_static_size(image, min_dimension, max_dimension):
+ """Compute new static shape for resize_to_range method."""
+ image_shape = image.get_shape().as_list()
+ orig_height = image_shape[0]
+ orig_width = image_shape[1]
+ num_channels = image_shape[2]
+ orig_min_dim = min(orig_height, orig_width)
+ # Calculates the larger of the possible sizes
+ large_scale_factor = min_dimension / float(orig_min_dim)
+ # Scaling orig_(height|width) by large_scale_factor will make the smaller
+ # dimension equal to min_dimension, save for floating point rounding errors.
+ # For reasonably-sized images, taking the nearest integer will reliably
+ # eliminate this error.
+ large_height = int(round(orig_height * large_scale_factor))
+ large_width = int(round(orig_width * large_scale_factor))
+ large_size = [large_height, large_width]
+ if max_dimension:
+ # Calculates the smaller of the possible sizes, use that if the larger
+ # is too big.
+ orig_max_dim = max(orig_height, orig_width)
+ small_scale_factor = max_dimension / float(orig_max_dim)
+ # Scaling orig_(height|width) by small_scale_factor will make the larger
+ # dimension equal to max_dimension, save for floating point rounding
+ # errors. For reasonably-sized images, taking the nearest integer will
+ # reliably eliminate this error.
+ small_height = int(round(orig_height * small_scale_factor))
+ small_width = int(round(orig_width * small_scale_factor))
+ small_size = [small_height, small_width]
+ new_size = large_size
+ if max(large_size) > max_dimension:
+ new_size = small_size
+ else:
+ new_size = large_size
+ return tf.constant(new_size + [num_channels])
+
+
+def _compute_new_dynamic_size(image, min_dimension, max_dimension):
+ """Compute new dynamic shape for resize_to_range method."""
+ image_shape = tf.shape(image)
+ orig_height = tf.to_float(image_shape[0])
+ orig_width = tf.to_float(image_shape[1])
+ num_channels = image_shape[2]
+ orig_min_dim = tf.minimum(orig_height, orig_width)
+ # Calculates the larger of the possible sizes
+ min_dimension = tf.constant(min_dimension, dtype=tf.float32)
+ large_scale_factor = min_dimension / orig_min_dim
+ # Scaling orig_(height|width) by large_scale_factor will make the smaller
+ # dimension equal to min_dimension, save for floating point rounding errors.
+ # For reasonably-sized images, taking the nearest integer will reliably
+ # eliminate this error.
+ large_height = tf.to_int32(tf.round(orig_height * large_scale_factor))
+ large_width = tf.to_int32(tf.round(orig_width * large_scale_factor))
+ large_size = tf.stack([large_height, large_width])
+ if max_dimension:
+ # Calculates the smaller of the possible sizes, use that if the larger
+ # is too big.
+ orig_max_dim = tf.maximum(orig_height, orig_width)
+ max_dimension = tf.constant(max_dimension, dtype=tf.float32)
+ small_scale_factor = max_dimension / orig_max_dim
+ # Scaling orig_(height|width) by small_scale_factor will make the larger
+ # dimension equal to max_dimension, save for floating point rounding
+ # errors. For reasonably-sized images, taking the nearest integer will
+ # reliably eliminate this error.
+ small_height = tf.to_int32(tf.round(orig_height * small_scale_factor))
+ small_width = tf.to_int32(tf.round(orig_width * small_scale_factor))
+ small_size = tf.stack([small_height, small_width])
+ new_size = tf.cond(
+ tf.to_float(tf.reduce_max(large_size)) > max_dimension,
+ lambda: small_size, lambda: large_size)
+ else:
+ new_size = large_size
+ return tf.stack(tf.unstack(new_size) + [num_channels])
+
+
+def resize_to_range(image,
+ masks=None,
+ min_dimension=None,
+ max_dimension=None,
+ method=tf.image.ResizeMethod.BILINEAR,
+ align_corners=False,
+ pad_to_max_dimension=False,
+ per_channel_pad_value=(0, 0, 0)):
+ """Resizes an image so its dimensions are within the provided value.
+
+ The output size can be described by two cases:
+ 1. If the image can be rescaled so its minimum dimension is equal to the
+ provided value without the other dimension exceeding max_dimension,
+ then do so.
+ 2. Otherwise, resize so the largest dimension is equal to max_dimension.
+
+ Args:
+ image: A 3D tensor of shape [height, width, channels]
+ masks: (optional) rank 3 float32 tensor with shape
+ [num_instances, height, width] containing instance masks.
+ min_dimension: (optional) (scalar) desired size of the smaller image
+ dimension.
+ max_dimension: (optional) (scalar) maximum allowed size
+ of the larger image dimension.
+ method: (optional) interpolation method used in resizing. Defaults to
+ BILINEAR.
+ align_corners: bool. If true, exactly align all 4 corners of the input
+ and output. Defaults to False.
+ pad_to_max_dimension: Whether to resize the image and pad it with zeros
+ so the resulting image is of the spatial size
+ [max_dimension, max_dimension]. If masks are included they are padded
+ similarly.
+ per_channel_pad_value: A tuple of per-channel scalar value to use for
+ padding. By default pads zeros.
+
+ Returns:
+ Note that the position of the resized_image_shape changes based on whether
+ masks are present.
+ resized_image: A 3D tensor of shape [new_height, new_width, channels],
+ where the image has been resized (with bilinear interpolation) so that
+ min(new_height, new_width) == min_dimension or
+ max(new_height, new_width) == max_dimension.
+ resized_masks: If masks is not None, also outputs masks. A 3D tensor of
+ shape [num_instances, new_height, new_width].
+ resized_image_shape: A 1D tensor of shape [3] containing shape of the
+ resized image.
+
+ Raises:
+ ValueError: if the image is not a 3D tensor.
+ """
+ if len(image.get_shape()) != 3:
+ raise ValueError('Image should be 3D tensor')
+
+ with tf.name_scope('ResizeToRange', values=[image, min_dimension]):
+ if image.get_shape().is_fully_defined():
+ new_size = _compute_new_static_size(image, min_dimension, max_dimension)
+ else:
+ new_size = _compute_new_dynamic_size(image, min_dimension, max_dimension)
+ new_image = tf.image.resize_images(
+ image, new_size[:-1], method=method, align_corners=align_corners)
+
+ if pad_to_max_dimension:
+ channels = tf.unstack(new_image, axis=2)
+ if len(channels) != len(per_channel_pad_value):
+ raise ValueError('Number of channels must be equal to the length of '
+ 'per-channel pad value.')
+ new_image = tf.stack(
+ [
+ tf.pad(
+ channels[i], [[0, max_dimension - new_size[0]],
+ [0, max_dimension - new_size[1]]],
+ constant_values=per_channel_pad_value[i])
+ for i in range(len(channels))
+ ],
+ axis=2)
+ new_image.set_shape([max_dimension, max_dimension, 3])
+
+ result = [new_image]
+ if masks is not None:
+ new_masks = tf.expand_dims(masks, 3)
+ new_masks = tf.image.resize_images(
+ new_masks,
+ new_size[:-1],
+ method=tf.image.ResizeMethod.NEAREST_NEIGHBOR,
+ align_corners=align_corners)
+ if pad_to_max_dimension:
+ new_masks = tf.image.pad_to_bounding_box(
+ new_masks, 0, 0, max_dimension, max_dimension)
+ new_masks = tf.squeeze(new_masks, 3)
+ result.append(new_masks)
+
+ result.append(new_size)
+ return result
+
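+
+# Illustrative usage sketch (hypothetical helper, not part of the
+# preprocessing API). For a 600x1400 image with min_dimension=800 and
+# max_dimension=1333, scaling the short side to 800 would make the long side
+# about 1867 > 1333, so case 2 applies and the output is roughly 571x1333.
+def _example_resize_to_range_usage():
+  image = tf.random_uniform([600, 1400, 3], dtype=tf.float32)
+  # Returns [resized_image, resized_image_shape] when masks is None.
+  return resize_to_range(image, min_dimension=800, max_dimension=1333)
+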
+
+# TODO(alirezafathi): Make sure the static shapes are preserved.
+def resize_to_min_dimension(image, masks=None, min_dimension=600):
+ """Resizes image and masks given the min size maintaining the aspect ratio.
+
+  If one of the image dimensions is smaller than min_dimension, the image is
+  scaled so that its smallest dimension becomes equal to min_dimension.
+  Otherwise, the image size is kept as is.
+
+ Args:
+ image: a tensor of size [height, width, channels].
+    masks: (optional) a tensor of size [num_instances, height, width].
+ min_dimension: minimum image dimension.
+
+ Returns:
+ Note that the position of the resized_image_shape changes based on whether
+ masks are present.
+ resized_image: A tensor of size [new_height, new_width, channels].
+ resized_masks: If masks is not None, also outputs masks. A 3D tensor of
+ shape [num_instances, new_height, new_width]
+ resized_image_shape: A 1D tensor of shape [3] containing the shape of the
+ resized image.
+
+ Raises:
+ ValueError: if the image is not a 3D tensor.
+ """
+ if len(image.get_shape()) != 3:
+ raise ValueError('Image should be 3D tensor')
+
+ with tf.name_scope('ResizeGivenMinDimension', values=[image, min_dimension]):
+ image_height = tf.shape(image)[0]
+ image_width = tf.shape(image)[1]
+ num_channels = tf.shape(image)[2]
+ min_image_dimension = tf.minimum(image_height, image_width)
+ min_target_dimension = tf.maximum(min_image_dimension, min_dimension)
+ target_ratio = tf.to_float(min_target_dimension) / tf.to_float(
+ min_image_dimension)
+ target_height = tf.to_int32(tf.to_float(image_height) * target_ratio)
+ target_width = tf.to_int32(tf.to_float(image_width) * target_ratio)
+ image = tf.image.resize_bilinear(
+ tf.expand_dims(image, axis=0),
+ size=[target_height, target_width],
+ align_corners=True)
+ result = [tf.squeeze(image, axis=0)]
+
+ if masks is not None:
+ masks = tf.image.resize_nearest_neighbor(
+ tf.expand_dims(masks, axis=3),
+ size=[target_height, target_width],
+ align_corners=True)
+ result.append(tf.squeeze(masks, axis=3))
+
+ result.append(tf.stack([target_height, target_width, num_channels]))
+ return result
+
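+
+# Illustrative usage sketch (hypothetical helper, not part of the
+# preprocessing API). A 480x360 image with min_dimension=600 is upscaled by
+# 600/360 to 800x600; an image whose smaller side already exceeds 600 is
+# returned at its original size.
+def _example_resize_to_min_dimension_usage():
+  image = tf.random_uniform([480, 360, 3], dtype=tf.float32)
+  # Returns [resized_image, resized_image_shape] when masks is None.
+  return resize_to_min_dimension(image, min_dimension=600)
+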
+
+def scale_boxes_to_pixel_coordinates(image, boxes, keypoints=None):
+ """Scales boxes from normalized to pixel coordinates.
+
+ Args:
+ image: A 3D float32 tensor of shape [height, width, channels].
+ boxes: A 2D float32 tensor of shape [num_boxes, 4] containing the bounding
+ boxes in normalized coordinates. Each row is of the form
+ [ymin, xmin, ymax, xmax].
+ keypoints: (optional) rank 3 float32 tensor with shape
+ [num_instances, num_keypoints, 2]. The keypoints are in y-x normalized
+ coordinates.
+
+ Returns:
+ image: unchanged input image.
+ scaled_boxes: a 2D float32 tensor of shape [num_boxes, 4] containing the
+ bounding boxes in pixel coordinates.
+ scaled_keypoints: a 3D float32 tensor with shape
+ [num_instances, num_keypoints, 2] containing the keypoints in pixel
+ coordinates.
+ """
+ boxlist = box_list.BoxList(boxes)
+ image_height = tf.shape(image)[0]
+ image_width = tf.shape(image)[1]
+ scaled_boxes = box_list_ops.scale(boxlist, image_height, image_width).get()
+ result = [image, scaled_boxes]
+ if keypoints is not None:
+ scaled_keypoints = keypoint_ops.scale(keypoints, image_height, image_width)
+ result.append(scaled_keypoints)
+ return tuple(result)
+
+
+# TODO(alirezafathi): Investigate if instead the function should return None if
+# masks is None.
+# pylint: disable=g-doc-return-or-yield
+def resize_image(image,
+ masks=None,
+ new_height=600,
+ new_width=1024,
+ method=tf.image.ResizeMethod.BILINEAR,
+ align_corners=False):
+ """Resizes images to the given height and width.
+
+ Args:
+ image: A 3D tensor of shape [height, width, channels]
+ masks: (optional) rank 3 float32 tensor with shape
+ [num_instances, height, width] containing instance masks.
+ new_height: (optional) (scalar) desired height of the image.
+ new_width: (optional) (scalar) desired width of the image.
+ method: (optional) interpolation method used in resizing. Defaults to
+ BILINEAR.
+ align_corners: bool. If true, exactly align all 4 corners of the input
+ and output. Defaults to False.
+
+ Returns:
+ Note that the position of the resized_image_shape changes based on whether
+ masks are present.
+ resized_image: A tensor of size [new_height, new_width, channels].
+ resized_masks: If masks is not None, also outputs masks. A 3D tensor of
+ shape [num_instances, new_height, new_width]
+ resized_image_shape: A 1D tensor of shape [3] containing the shape of the
+ resized image.
+ """
+ with tf.name_scope(
+ 'ResizeImage',
+ values=[image, new_height, new_width, method, align_corners]):
+ new_image = tf.image.resize_images(
+ image, tf.stack([new_height, new_width]),
+ method=method,
+ align_corners=align_corners)
+ image_shape = shape_utils.combined_static_and_dynamic_shape(image)
+ result = [new_image]
+ if masks is not None:
+ num_instances = tf.shape(masks)[0]
+ new_size = tf.stack([new_height, new_width])
+ def resize_masks_branch():
+ new_masks = tf.expand_dims(masks, 3)
+ new_masks = tf.image.resize_nearest_neighbor(
+ new_masks, new_size, align_corners=align_corners)
+ new_masks = tf.squeeze(new_masks, axis=3)
+ return new_masks
+
+ def reshape_masks_branch():
+ # The shape function will be computed for both branches of the
+ # condition, regardless of which branch is actually taken. Make sure
+ # that we don't trigger an assertion in the shape function when trying
+ # to reshape a non empty tensor into an empty one.
+ new_masks = tf.reshape(masks, [-1, new_size[0], new_size[1]])
+ return new_masks
+
+ masks = tf.cond(num_instances > 0, resize_masks_branch,
+ reshape_masks_branch)
+ result.append(masks)
+
+ result.append(tf.stack([new_height, new_width, image_shape[2]]))
+ return result
+
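+
+# Illustrative usage sketch (hypothetical helper, not part of the
+# preprocessing API). Unlike resize_to_range, this op ignores the original
+# aspect ratio and resizes directly to the requested height and width; masks,
+# if given, are resized with nearest-neighbor interpolation.
+def _example_resize_image_usage():
+  image = tf.random_uniform([375, 500, 3], dtype=tf.float32)
+  masks = tf.zeros([2, 375, 500], dtype=tf.float32)  # two dummy instance masks
+  # Returns [resized_image, resized_masks, resized_image_shape].
+  return resize_image(image, masks=masks, new_height=600, new_width=1024)
+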
+
+def subtract_channel_mean(image, means=None):
+ """Normalizes an image by subtracting a mean from each channel.
+
+ Args:
+ image: A 3D tensor of shape [height, width, channels]
+ means: float list containing a mean for each channel
+ Returns:
+    normalized_image: a tensor of shape [height, width, channels]
+  Raises:
+    ValueError: if image is not a 3D tensor or if the number of means is not
+ equal to the number of channels.
+ """
+ with tf.name_scope('SubtractChannelMean', values=[image, means]):
+ if len(image.get_shape()) != 3:
+ raise ValueError('Input must be of size [height, width, channels]')
+ if len(means) != image.get_shape()[-1]:
+ raise ValueError('len(means) must match the number of channels')
+ return image - [[means]]
+
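+
+# Illustrative usage sketch (hypothetical helper, not part of the
+# preprocessing API). The means below are the commonly used ImageNet RGB means
+# and are shown purely as an example; any list with one value per channel
+# works.
+def _example_subtract_channel_mean_usage():
+  image = tf.random_uniform([224, 224, 3], maxval=255.0, dtype=tf.float32)
+  return subtract_channel_mean(image, means=[123.68, 116.78, 103.94])
+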
+
+def one_hot_encoding(labels, num_classes=None):
+ """One-hot encodes the multiclass labels.
+
+ Example usage:
+ labels = tf.constant([1, 4], dtype=tf.int32)
+    one_hot = one_hot_encoding(labels, num_classes=5)
+ one_hot.eval() # evaluates to [0, 1, 0, 0, 1]
+
+ Args:
+ labels: A tensor of shape [None] corresponding to the labels.
+ num_classes: Number of classes in the dataset.
+ Returns:
+ onehot_labels: a tensor of shape [num_classes] corresponding to the one hot
+ encoding of the labels.
+ Raises:
+ ValueError: if num_classes is not specified.
+ """
+ with tf.name_scope('OneHotEncoding', values=[labels]):
+ if num_classes is None:
+ raise ValueError('num_classes must be specified')
+
+ labels = tf.one_hot(labels, num_classes, 1, 0)
+ return tf.reduce_max(labels, 0)
+
+
+def rgb_to_gray(image):
+ """Converts a 3 channel RGB image to a 1 channel grayscale image.
+
+ Args:
+ image: Rank 3 float32 tensor containing 1 image -> [height, width, 3]
+ with pixel values varying between [0, 1].
+
+ Returns:
+    image: A single channel grayscale image -> [height, width, 1].
+ """
+ return _rgb_to_grayscale(image)
+
+
+def ssd_random_crop(image,
+ boxes,
+ labels,
+ label_weights,
+ label_confidences=None,
+ multiclass_scores=None,
+ masks=None,
+ keypoints=None,
+ min_object_covered=(0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0),
+ aspect_ratio_range=((0.5, 2.0),) * 7,
+ area_range=((0.1, 1.0),) * 7,
+ overlap_thresh=(0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0),
+ clip_boxes=(True,) * 7,
+ random_coef=(0.15,) * 7,
+ seed=None,
+ preprocess_vars_cache=None):
+ """Random crop preprocessing with default parameters as in SSD paper.
+
+ Liu et al., SSD: Single shot multibox detector.
+ For further information on random crop preprocessing refer to RandomCrop
+ function above.
+
+ Args:
+ image: rank 3 float32 tensor contains 1 image -> [height, width, channels]
+ with pixel values varying between [0, 1].
+ boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
+ Boxes are in normalized form meaning their coordinates vary
+ between [0, 1].
+ Each row is in the form of [ymin, xmin, ymax, xmax].
+ labels: rank 1 int32 tensor containing the object classes.
+ label_weights: rank 1 float32 tensor containing the weights.
+ label_confidences: rank 1 float32 tensor containing the confidences.
+ multiclass_scores: (optional) float32 tensor of shape
+ [num_instances, num_classes] representing the score for each box for each
+ class.
+ masks: (optional) rank 3 float32 tensor with shape
+ [num_instances, height, width] containing instance masks. The masks
+ are of the same height, width as the input `image`.
+ keypoints: (optional) rank 3 float32 tensor with shape
+ [num_instances, num_keypoints, 2]. The keypoints are in y-x
+ normalized coordinates.
+ min_object_covered: the cropped image must cover at least this fraction of
+ at least one of the input bounding boxes.
+ aspect_ratio_range: allowed range for aspect ratio of cropped image.
+ area_range: allowed range for area ratio between cropped image and the
+ original image.
+    overlap_thresh: minimum overlap threshold required with the newly cropped
+ image to keep the box.
+ clip_boxes: whether to clip the boxes to the cropped image.
+ random_coef: a random coefficient that defines the chance of getting the
+ original image. If random_coef is 0, we will always get the
+ cropped image, and if it is 1.0, we will always get the
+ original image.
+ seed: random seed.
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+
+ Returns:
+ image: image which is the same rank as input image.
+ boxes: boxes which is the same rank as input boxes.
+ Boxes are in normalized form.
+ labels: new labels.
+
+ If label_weights, multiclass_scores, masks, or keypoints is not None, the
+ function also returns:
+ label_weights: rank 1 float32 tensor with shape [num_instances].
+ multiclass_scores: rank 2 float32 tensor with shape
+ [num_instances, num_classes]
+ masks: rank 3 float32 tensor with shape [num_instances, height, width]
+ containing instance masks.
+ keypoints: rank 3 float32 tensor with shape
+ [num_instances, num_keypoints, 2]
+ """
+
+ def random_crop_selector(selected_result, index):
+ """Applies random_crop_image to selected result.
+
+ Args:
+      selected_result: A tuple containing image, boxes, labels and, when
+        provided, label_weights, label_confidences, multiclass_scores, masks,
+        and keypoints (in that order).
+      index: The index that was randomly selected.
+
+    Returns: A tuple of the same structure with random_crop_image applied to
+      the selected inputs.
+ """
+
+ i = 3
+ image, boxes, labels = selected_result[:i]
+ selected_label_weights = None
+ selected_label_confidences = None
+ selected_multiclass_scores = None
+ selected_masks = None
+ selected_keypoints = None
+ if label_weights is not None:
+ selected_label_weights = selected_result[i]
+ i += 1
+ if label_confidences is not None:
+ selected_label_confidences = selected_result[i]
+ i += 1
+ if multiclass_scores is not None:
+ selected_multiclass_scores = selected_result[i]
+ i += 1
+ if masks is not None:
+ selected_masks = selected_result[i]
+ i += 1
+ if keypoints is not None:
+ selected_keypoints = selected_result[i]
+
+ return random_crop_image(
+ image=image,
+ boxes=boxes,
+ labels=labels,
+ label_weights=selected_label_weights,
+ label_confidences=selected_label_confidences,
+ multiclass_scores=selected_multiclass_scores,
+ masks=selected_masks,
+ keypoints=selected_keypoints,
+ min_object_covered=min_object_covered[index],
+ aspect_ratio_range=aspect_ratio_range[index],
+ area_range=area_range[index],
+ overlap_thresh=overlap_thresh[index],
+ clip_boxes=clip_boxes[index],
+ random_coef=random_coef[index],
+ seed=seed,
+ preprocess_vars_cache=preprocess_vars_cache)
+
+ result = _apply_with_random_selector_tuples(
+ tuple(
+ t for t in (image, boxes, labels, label_weights, label_confidences,
+ multiclass_scores, masks, keypoints) if t is not None),
+ random_crop_selector,
+ num_cases=len(min_object_covered),
+ preprocess_vars_cache=preprocess_vars_cache,
+ key=preprocessor_cache.PreprocessorCache.SSD_CROP_SELECTOR_ID)
+ return result
+
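+
+# Illustrative usage sketch (hypothetical helper, not part of the
+# preprocessing API). Each call first picks one of the 7 default parameter
+# sets at random (e.g. min_object_covered in {0.0, 0.1, 0.3, 0.5, 0.7, 0.9,
+# 1.0}) and then applies random_crop_image with that set, following the SSD
+# paper's data augmentation scheme.
+def _example_ssd_random_crop_usage():
+  image = tf.random_uniform([300, 300, 3], dtype=tf.float32)
+  boxes = tf.constant([[0.1, 0.1, 0.5, 0.5]], dtype=tf.float32)
+  labels = tf.constant([1], dtype=tf.int32)
+  weights = tf.constant([1.0], dtype=tf.float32)
+  # Returns (new_image, new_boxes, new_labels, new_label_weights).
+  return ssd_random_crop(image, boxes, labels, weights, seed=0)
+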
+
+def ssd_random_crop_pad(image,
+ boxes,
+ labels,
+ label_weights,
+ label_confidences=None,
+ multiclass_scores=None,
+ min_object_covered=(0.1, 0.3, 0.5, 0.7, 0.9, 1.0),
+ aspect_ratio_range=((0.5, 2.0),) * 6,
+ area_range=((0.1, 1.0),) * 6,
+ overlap_thresh=(0.1, 0.3, 0.5, 0.7, 0.9, 1.0),
+ clip_boxes=(True,) * 6,
+ random_coef=(0.15,) * 6,
+ min_padded_size_ratio=((1.0, 1.0),) * 6,
+ max_padded_size_ratio=((2.0, 2.0),) * 6,
+ pad_color=(None,) * 6,
+ seed=None,
+ preprocess_vars_cache=None):
+ """Random crop preprocessing with default parameters as in SSD paper.
+
+ Liu et al., SSD: Single shot multibox detector.
+ For further information on random crop preprocessing refer to RandomCrop
+ function above.
+
+ Args:
+ image: rank 3 float32 tensor containing 1 image -> [height, width, channels]
+ with pixel values varying between [0, 1].
+ boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
+ Boxes are in normalized form meaning their coordinates vary
+ between [0, 1].
+ Each row is in the form of [ymin, xmin, ymax, xmax].
+ labels: rank 1 int32 tensor containing the object classes.
+ label_weights: float32 tensor of shape [num_instances] representing the
+ weight for each box.
+ label_confidences: float32 tensor of shape [num_instances] representing the
+ confidences for each box.
+ multiclass_scores: (optional) float32 tensor of shape
+ [num_instances, num_classes] representing the score for each box for each
+ class.
+ min_object_covered: the cropped image must cover at least this fraction of
+ at least one of the input bounding boxes.
+ aspect_ratio_range: allowed range for aspect ratio of cropped image.
+ area_range: allowed range for area ratio between cropped image and the
+ original image.
+    overlap_thresh: minimum overlap threshold required with the newly cropped
+ image to keep the box.
+ clip_boxes: whether to clip the boxes to the cropped image.
+ random_coef: a random coefficient that defines the chance of getting the
+ original image. If random_coef is 0, we will always get the
+ cropped image, and if it is 1.0, we will always get the
+ original image.
+ min_padded_size_ratio: min ratio of padded image height and width to the
+ input image's height and width.
+ max_padded_size_ratio: max ratio of padded image height and width to the
+ input image's height and width.
+ pad_color: padding color. A rank 1 tensor of [3] with dtype=tf.float32.
+ if set as None, it will be set to average color of the randomly
+ cropped image.
+ seed: random seed.
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+
+ Returns:
+ image: Image shape will be [new_height, new_width, channels].
+ boxes: boxes which is the same rank as input boxes. Boxes are in normalized
+ form.
+ new_labels: new labels.
+ new_label_weights: new label weights.
+ """
+
+ def random_crop_pad_selector(image_boxes_labels, index):
+ """Random crop preprocessing helper."""
+ i = 3
+ image, boxes, labels = image_boxes_labels[:i]
+ selected_label_weights = None
+ selected_label_confidences = None
+ selected_multiclass_scores = None
+ if label_weights is not None:
+ selected_label_weights = image_boxes_labels[i]
+ i += 1
+ if label_confidences is not None:
+ selected_label_confidences = image_boxes_labels[i]
+ i += 1
+ if multiclass_scores is not None:
+ selected_multiclass_scores = image_boxes_labels[i]
+
+ return random_crop_pad_image(
+ image,
+ boxes,
+ labels,
+ label_weights=selected_label_weights,
+ label_confidences=selected_label_confidences,
+ multiclass_scores=selected_multiclass_scores,
+ min_object_covered=min_object_covered[index],
+ aspect_ratio_range=aspect_ratio_range[index],
+ area_range=area_range[index],
+ overlap_thresh=overlap_thresh[index],
+ clip_boxes=clip_boxes[index],
+ random_coef=random_coef[index],
+ min_padded_size_ratio=min_padded_size_ratio[index],
+ max_padded_size_ratio=max_padded_size_ratio[index],
+ pad_color=pad_color[index],
+ seed=seed,
+ preprocess_vars_cache=preprocess_vars_cache)
+
+ return _apply_with_random_selector_tuples(
+ tuple(t for t in (image, boxes, labels, label_weights, label_confidences,
+ multiclass_scores) if t is not None),
+ random_crop_pad_selector,
+ num_cases=len(min_object_covered),
+ preprocess_vars_cache=preprocess_vars_cache,
+ key=preprocessor_cache.PreprocessorCache.SSD_CROP_PAD_SELECTOR_ID)
+
+
+def ssd_random_crop_fixed_aspect_ratio(
+ image,
+ boxes,
+ labels,
+ label_weights,
+ label_confidences=None,
+ multiclass_scores=None,
+ masks=None,
+ keypoints=None,
+ min_object_covered=(0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0),
+ aspect_ratio=1.0,
+ area_range=((0.1, 1.0),) * 7,
+ overlap_thresh=(0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0),
+ clip_boxes=(True,) * 7,
+ random_coef=(0.15,) * 7,
+ seed=None,
+ preprocess_vars_cache=None):
+ """Random crop preprocessing with default parameters as in SSD paper.
+
+ Liu et al., SSD: Single shot multibox detector.
+ For further information on random crop preprocessing refer to RandomCrop
+ function above.
+
+  The only difference is that the aspect ratio of the crops is fixed.
+
+ Args:
+ image: rank 3 float32 tensor contains 1 image -> [height, width, channels]
+ with pixel values varying between [0, 1].
+ boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
+ Boxes are in normalized form meaning their coordinates vary
+ between [0, 1].
+ Each row is in the form of [ymin, xmin, ymax, xmax].
+ labels: rank 1 int32 tensor containing the object classes.
+ label_weights: float32 tensor of shape [num_instances] representing the
+ weight for each box.
+ label_confidences: (optional) float32 tensor of shape [num_instances]
+ representing the confidences for each box.
+ multiclass_scores: (optional) float32 tensor of shape
+ [num_instances, num_classes] representing the score for each box for each
+ class.
+ masks: (optional) rank 3 float32 tensor with shape
+ [num_instances, height, width] containing instance masks. The masks
+ are of the same height, width as the input `image`.
+ keypoints: (optional) rank 3 float32 tensor with shape
+ [num_instances, num_keypoints, 2]. The keypoints are in y-x
+ normalized coordinates.
+ min_object_covered: the cropped image must cover at least this fraction of
+ at least one of the input bounding boxes.
+ aspect_ratio: aspect ratio of the cropped image.
+ area_range: allowed range for area ratio between cropped image and the
+ original image.
+    overlap_thresh: minimum overlap threshold required with the newly cropped
+ image to keep the box.
+ clip_boxes: whether to clip the boxes to the cropped image.
+ random_coef: a random coefficient that defines the chance of getting the
+ original image. If random_coef is 0, we will always get the
+ cropped image, and if it is 1.0, we will always get the
+ original image.
+ seed: random seed.
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+
+ Returns:
+ image: image which is the same rank as input image.
+ boxes: boxes which is the same rank as input boxes.
+ Boxes are in normalized form.
+ labels: new labels.
+
+    If multiclass_scores, masks, or keypoints is not None, the function also
+ returns:
+
+ multiclass_scores: rank 2 float32 tensor with shape
+ [num_instances, num_classes]
+ masks: rank 3 float32 tensor with shape [num_instances, height, width]
+ containing instance masks.
+ keypoints: rank 3 float32 tensor with shape
+ [num_instances, num_keypoints, 2]
+ """
+ aspect_ratio_range = ((aspect_ratio, aspect_ratio),) * len(area_range)
+
+ crop_result = ssd_random_crop(
+ image,
+ boxes,
+ labels,
+ label_weights=label_weights,
+ label_confidences=label_confidences,
+ multiclass_scores=multiclass_scores,
+ masks=masks,
+ keypoints=keypoints,
+ min_object_covered=min_object_covered,
+ aspect_ratio_range=aspect_ratio_range,
+ area_range=area_range,
+ overlap_thresh=overlap_thresh,
+ clip_boxes=clip_boxes,
+ random_coef=random_coef,
+ seed=seed,
+ preprocess_vars_cache=preprocess_vars_cache)
+ i = 3
+ new_image, new_boxes, new_labels = crop_result[:i]
+ new_label_weights = None
+ new_label_confidences = None
+ new_multiclass_scores = None
+ new_masks = None
+ new_keypoints = None
+ if label_weights is not None:
+ new_label_weights = crop_result[i]
+ i += 1
+ if label_confidences is not None:
+ new_label_confidences = crop_result[i]
+ i += 1
+ if multiclass_scores is not None:
+ new_multiclass_scores = crop_result[i]
+ i += 1
+ if masks is not None:
+ new_masks = crop_result[i]
+ i += 1
+ if keypoints is not None:
+ new_keypoints = crop_result[i]
+
+ result = random_crop_to_aspect_ratio(
+ new_image,
+ new_boxes,
+ new_labels,
+ label_weights=new_label_weights,
+ label_confidences=new_label_confidences,
+ multiclass_scores=new_multiclass_scores,
+ masks=new_masks,
+ keypoints=new_keypoints,
+ aspect_ratio=aspect_ratio,
+ clip_boxes=clip_boxes,
+ seed=seed,
+ preprocess_vars_cache=preprocess_vars_cache)
+
+ return result
+
+
+def ssd_random_crop_pad_fixed_aspect_ratio(
+ image,
+ boxes,
+ labels,
+ label_weights,
+ label_confidences=None,
+ multiclass_scores=None,
+ masks=None,
+ keypoints=None,
+ min_object_covered=(0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0),
+ aspect_ratio=1.0,
+ aspect_ratio_range=((0.5, 2.0),) * 7,
+ area_range=((0.1, 1.0),) * 7,
+ overlap_thresh=(0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0),
+ clip_boxes=(True,) * 7,
+ random_coef=(0.15,) * 7,
+ min_padded_size_ratio=(1.0, 1.0),
+ max_padded_size_ratio=(2.0, 2.0),
+ seed=None,
+ preprocess_vars_cache=None):
+ """Random crop and pad preprocessing with default parameters as in SSD paper.
+
+ Liu et al., SSD: Single shot multibox detector.
+ For further information on random crop preprocessing refer to RandomCrop
+ function above.
+
+ The only difference is that after the initial crop, images are zero-padded
+ to a fixed aspect ratio instead of being resized to that aspect ratio.
+
+ Args:
+ image: rank 3 float32 tensor contains 1 image -> [height, width, channels]
+ with pixel values varying between [0, 1].
+ boxes: rank 2 float32 tensor containing the bounding boxes -> [N, 4].
+ Boxes are in normalized form meaning their coordinates vary
+ between [0, 1].
+ Each row is in the form of [ymin, xmin, ymax, xmax].
+ labels: rank 1 int32 tensor containing the object classes.
+ label_weights: float32 tensor of shape [num_instances] representing the
+ weight for each box.
+ label_confidences: (optional) float32 tensor of shape [num_instances]
+ representing the confidence for each box.
+ multiclass_scores: (optional) float32 tensor of shape
+ [num_instances, num_classes] representing the score for each box for each
+ class.
+ masks: (optional) rank 3 float32 tensor with shape
+ [num_instances, height, width] containing instance masks. The masks
+ are of the same height, width as the input `image`.
+ keypoints: (optional) rank 3 float32 tensor with shape
+ [num_instances, num_keypoints, 2]. The keypoints are in y-x
+ normalized coordinates.
+ min_object_covered: the cropped image must cover at least this fraction of
+ at least one of the input bounding boxes.
+ aspect_ratio: the final aspect ratio to pad to.
+ aspect_ratio_range: allowed range for aspect ratio of cropped image.
+ area_range: allowed range for area ratio between cropped image and the
+ original image.
+ overlap_thresh: minimum overlap threshold with the new cropped image
+ required to keep a box.
+ clip_boxes: whether to clip the boxes to the cropped image.
+ random_coef: a random coefficient that defines the chance of getting the
+ original image. If random_coef is 0, we will always get the
+ cropped image, and if it is 1.0, we will always get the
+ original image.
+ min_padded_size_ratio: min ratio of padded image height and width to the
+ input image's height and width.
+ max_padded_size_ratio: max ratio of padded image height and width to the
+ input image's height and width.
+ seed: random seed.
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+
+ Returns:
+ image: image which is the same rank as input image.
+ boxes: boxes which is the same rank as input boxes.
+ Boxes are in normalized form.
+ labels: new labels.
+
+ If multiclass_scores, masks, or keypoints is not None, the function also
+ returns:
+
+ multiclass_scores: rank 2 with shape [num_instances, num_classes]
+ masks: rank 3 float32 tensor with shape [num_instances, height, width]
+ containing instance masks.
+ keypoints: rank 3 float32 tensor with shape
+ [num_instances, num_keypoints, 2]
+ """
+ crop_result = ssd_random_crop(
+ image,
+ boxes,
+ labels,
+ label_weights=label_weights,
+ label_confidences=label_confidences,
+ multiclass_scores=multiclass_scores,
+ masks=masks,
+ keypoints=keypoints,
+ min_object_covered=min_object_covered,
+ aspect_ratio_range=aspect_ratio_range,
+ area_range=area_range,
+ overlap_thresh=overlap_thresh,
+ clip_boxes=clip_boxes,
+ random_coef=random_coef,
+ seed=seed,
+ preprocess_vars_cache=preprocess_vars_cache)
+ i = 3
+ new_image, new_boxes, new_labels = crop_result[:i]
+ new_label_weights = None
+ new_label_confidences = None
+ new_multiclass_scores = None
+ new_masks = None
+ new_keypoints = None
+ if label_weights is not None:
+ new_label_weights = crop_result[i]
+ i += 1
+ if label_confidences is not None:
+ new_label_confidences = crop_result[i]
+ i += 1
+ if multiclass_scores is not None:
+ new_multiclass_scores = crop_result[i]
+ i += 1
+ if masks is not None:
+ new_masks = crop_result[i]
+ i += 1
+ if keypoints is not None:
+ new_keypoints = crop_result[i]
+
+ result = random_pad_to_aspect_ratio(
+ new_image,
+ new_boxes,
+ masks=new_masks,
+ keypoints=new_keypoints,
+ aspect_ratio=aspect_ratio,
+ min_padded_size_ratio=min_padded_size_ratio,
+ max_padded_size_ratio=max_padded_size_ratio,
+ seed=seed,
+ preprocess_vars_cache=preprocess_vars_cache)
+
+ result = list(result)
+ i = 3
+ result.insert(2, new_labels)
+ if new_label_weights is not None:
+ result.insert(i, new_label_weights)
+ i += 1
+ if new_label_confidences is not None:
+ result.insert(i, new_label_confidences)
+ i += 1
+ if multiclass_scores is not None:
+ result.insert(i, new_multiclass_scores)
+ result = tuple(result)
+
+ return result
+
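+# A hedged call sketch for ssd_random_crop_pad_fixed_aspect_ratio; the
+# placeholder tensors below are illustrative only, and the remaining defaults
+# come from the SSD-style parameter tuples in the signature:
+#
+#   image = tf.random_uniform([300, 300, 3])
+#   boxes = tf.constant([[0.1, 0.1, 0.6, 0.6]], dtype=tf.float32)
+#   labels = tf.constant([1], dtype=tf.int32)
+#   weights = tf.constant([1.0], dtype=tf.float32)
+#   result = ssd_random_crop_pad_fixed_aspect_ratio(
+#       image, boxes, labels, weights, aspect_ratio=1.0)
+#   # result holds the cropped-then-padded image, boxes, labels and label
+#   # weights; the crop is sampled as in ssd_random_crop and the image is
+#   # then zero-padded (not resized) to the requested aspect ratio.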
+
+def convert_class_logits_to_softmax(multiclass_scores, temperature=1.0):
+ """Converts multiclass logits to softmax scores after applying temperature.
+
+ Args:
+ multiclass_scores: float32 tensor of shape
+ [num_instances, num_classes] representing the score for each box for each
+ class.
+ temperature: Scale factor to use prior to applying softmax. Larger
+ temperatures give more uniform distributions after softmax.
+
+ Returns:
+ multiclass_scores: float32 tensor of shape
+ [num_instances, num_classes] with scaling and softmax applied.
+ """
+
+ # Multiclass scores must be stored as logits. Apply temp and softmax.
+ multiclass_scores_scaled = tf.divide(
+ multiclass_scores, temperature, name='scale_logits')
+ multiclass_scores = tf.nn.softmax(multiclass_scores_scaled, name='softmax')
+
+ return multiclass_scores
+
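+# A minimal usage sketch for convert_class_logits_to_softmax; the logit values
+# are made-up placeholders:
+#
+#   logits = tf.constant([[2.0, 0.5, 0.1], [0.2, 0.2, 3.0]], dtype=tf.float32)
+#   scores = convert_class_logits_to_softmax(logits, temperature=2.0)
+#   # Each row of `scores` sums to 1.0; temperature=1.0 is a plain softmax,
+#   # while larger temperatures flatten the per-box distributions.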
+
+def get_default_func_arg_map(include_label_weights=True,
+ include_label_confidences=False,
+ include_multiclass_scores=False,
+ include_instance_masks=False,
+ include_keypoints=False):
+ """Returns the default mapping from a preprocessor function to its args.
+
+ Args:
+ include_label_weights: If True, preprocessing functions will modify the
+ label weights, too.
+ include_label_confidences: If True, preprocessing functions will modify the
+ label confidences, too.
+ include_multiclass_scores: If True, preprocessing functions will modify the
+ multiclass scores, too.
+ include_instance_masks: If True, preprocessing functions will modify the
+ instance masks, too.
+ include_keypoints: If True, preprocessing functions will modify the
+ keypoints, too.
+
+ Returns:
+ A map from preprocessing functions to the arguments they receive.
+ """
+ groundtruth_label_weights = None
+ if include_label_weights:
+ groundtruth_label_weights = (
+ fields.InputDataFields.groundtruth_weights)
+
+ groundtruth_label_confidences = None
+ if include_label_confidences:
+ groundtruth_label_confidences = (
+ fields.InputDataFields.groundtruth_confidences)
+
+ multiclass_scores = None
+ if include_multiclass_scores:
+ multiclass_scores = (fields.InputDataFields.multiclass_scores)
+
+ groundtruth_instance_masks = None
+ if include_instance_masks:
+ groundtruth_instance_masks = (
+ fields.InputDataFields.groundtruth_instance_masks)
+
+ groundtruth_keypoints = None
+ if include_keypoints:
+ groundtruth_keypoints = fields.InputDataFields.groundtruth_keypoints
+
+ prep_func_arg_map = {
+ normalize_image: (fields.InputDataFields.image,),
+ random_horizontal_flip: (
+ fields.InputDataFields.image,
+ fields.InputDataFields.groundtruth_boxes,
+ groundtruth_instance_masks,
+ groundtruth_keypoints,
+ ),
+ random_vertical_flip: (
+ fields.InputDataFields.image,
+ fields.InputDataFields.groundtruth_boxes,
+ groundtruth_instance_masks,
+ groundtruth_keypoints,
+ ),
+ random_rotation90: (
+ fields.InputDataFields.image,
+ fields.InputDataFields.groundtruth_boxes,
+ groundtruth_instance_masks,
+ groundtruth_keypoints,
+ ),
+ random_pixel_value_scale: (fields.InputDataFields.image,),
+ random_image_scale: (
+ fields.InputDataFields.image,
+ groundtruth_instance_masks,
+ ),
+ random_rgb_to_gray: (fields.InputDataFields.image,),
+ random_adjust_brightness: (fields.InputDataFields.image,),
+ random_adjust_contrast: (fields.InputDataFields.image,),
+ random_adjust_hue: (fields.InputDataFields.image,),
+ random_adjust_saturation: (fields.InputDataFields.image,),
+ random_distort_color: (fields.InputDataFields.image,),
+ random_jitter_boxes: (fields.InputDataFields.groundtruth_boxes,),
+ random_crop_image: (fields.InputDataFields.image,
+ fields.InputDataFields.groundtruth_boxes,
+ fields.InputDataFields.groundtruth_classes,
+ groundtruth_label_weights,
+ groundtruth_label_confidences,
+ multiclass_scores,
+ groundtruth_instance_masks, groundtruth_keypoints),
+ random_pad_image: (fields.InputDataFields.image,
+ fields.InputDataFields.groundtruth_boxes),
+ random_crop_pad_image: (fields.InputDataFields.image,
+ fields.InputDataFields.groundtruth_boxes,
+ fields.InputDataFields.groundtruth_classes,
+ groundtruth_label_weights,
+ groundtruth_label_confidences,
+ multiclass_scores),
+ random_crop_to_aspect_ratio: (
+ fields.InputDataFields.image,
+ fields.InputDataFields.groundtruth_boxes,
+ fields.InputDataFields.groundtruth_classes,
+ groundtruth_label_weights,
+ groundtruth_label_confidences,
+ multiclass_scores,
+ groundtruth_instance_masks,
+ groundtruth_keypoints,
+ ),
+ random_pad_to_aspect_ratio: (
+ fields.InputDataFields.image,
+ fields.InputDataFields.groundtruth_boxes,
+ groundtruth_instance_masks,
+ groundtruth_keypoints,
+ ),
+ random_black_patches: (fields.InputDataFields.image,),
+ retain_boxes_above_threshold: (
+ fields.InputDataFields.groundtruth_boxes,
+ fields.InputDataFields.groundtruth_classes,
+ groundtruth_label_weights,
+ groundtruth_label_confidences,
+ multiclass_scores,
+ groundtruth_instance_masks,
+ groundtruth_keypoints,
+ ),
+ image_to_float: (fields.InputDataFields.image,),
+ random_resize_method: (fields.InputDataFields.image,),
+ resize_to_range: (
+ fields.InputDataFields.image,
+ groundtruth_instance_masks,
+ ),
+ resize_to_min_dimension: (
+ fields.InputDataFields.image,
+ groundtruth_instance_masks,
+ ),
+ scale_boxes_to_pixel_coordinates: (
+ fields.InputDataFields.image,
+ fields.InputDataFields.groundtruth_boxes,
+ groundtruth_keypoints,
+ ),
+ resize_image: (
+ fields.InputDataFields.image,
+ groundtruth_instance_masks,
+ ),
+ subtract_channel_mean: (fields.InputDataFields.image,),
+ one_hot_encoding: (fields.InputDataFields.groundtruth_image_classes,),
+ rgb_to_gray: (fields.InputDataFields.image,),
+ ssd_random_crop: (fields.InputDataFields.image,
+ fields.InputDataFields.groundtruth_boxes,
+ fields.InputDataFields.groundtruth_classes,
+ groundtruth_label_weights,
+ groundtruth_label_confidences,
+ multiclass_scores,
+ groundtruth_instance_masks,
+ groundtruth_keypoints),
+ ssd_random_crop_pad: (fields.InputDataFields.image,
+ fields.InputDataFields.groundtruth_boxes,
+ fields.InputDataFields.groundtruth_classes,
+ groundtruth_label_weights,
+ groundtruth_label_confidences,
+ multiclass_scores),
+ ssd_random_crop_fixed_aspect_ratio: (
+ fields.InputDataFields.image,
+ fields.InputDataFields.groundtruth_boxes,
+ fields.InputDataFields.groundtruth_classes,
+ groundtruth_label_weights,
+ groundtruth_label_confidences,
+ multiclass_scores,
+ groundtruth_instance_masks,
+ groundtruth_keypoints),
+ ssd_random_crop_pad_fixed_aspect_ratio: (
+ fields.InputDataFields.image,
+ fields.InputDataFields.groundtruth_boxes,
+ fields.InputDataFields.groundtruth_classes,
+ groundtruth_label_weights,
+ groundtruth_label_confidences,
+ multiclass_scores,
+ groundtruth_instance_masks,
+ groundtruth_keypoints,
+ ),
+ convert_class_logits_to_softmax: (multiclass_scores,),
+ }
+
+ return prep_func_arg_map
+
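+# A minimal sketch of how the argument map is typically consumed (assumes a
+# tensor_dict that holds the image, groundtruth boxes and instance masks under
+# their standard_fields keys):
+#
+#   arg_map = get_default_func_arg_map(include_instance_masks=True)
+#   options = [(random_horizontal_flip, {})]
+#   tensor_dict = preprocess(tensor_dict, options, func_arg_map=arg_map)
+#   # With masks included, random_horizontal_flip receives and returns the
+#   # masks tensor alongside the image and boxes, so both stay in sync.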
+
+def preprocess(tensor_dict,
+ preprocess_options,
+ func_arg_map=None,
+ preprocess_vars_cache=None):
+ """Preprocess images and bounding boxes.
+
+ Various types of preprocessing are applied based on the preprocess_options
+ list, e.g. "crop image" (affects image and possibly boxes), "white balance
+ image" (affects only image), etc. If preprocess_options is empty, no
+ preprocessing is done.
+
+ Args:
+ tensor_dict: dictionary that contains images, boxes, and can contain other
+ things as well.
+ images-> rank 4 float32 tensor contains
+ 1 image -> [1, height, width, 3].
+ with pixel values varying between [0, 1]
+ boxes-> rank 2 float32 tensor containing
+ the bounding boxes -> [N, 4].
+ Boxes are in normalized form meaning
+ their coordinates vary between [0, 1].
+ Each row is in the form
+ of [ymin, xmin, ymax, xmax].
+ preprocess_options: It is a list of tuples, where each tuple contains a
+ function and a dictionary that contains arguments and
+ their values.
+ func_arg_map: mapping from preprocessing functions to arguments that they
+ expect to receive and return.
+ preprocess_vars_cache: PreprocessorCache object that records previously
+ performed augmentations. Updated in-place. If this
+ function is called multiple times with the same
+ non-null cache, it will perform deterministically.
+
+ Returns:
+ tensor_dict: which contains the preprocessed images, bounding boxes, etc.
+
+ Raises:
+ ValueError: (a) If the functions passed to Preprocess
+ are not in func_arg_map.
+ (b) If the arguments that a function needs
+ do not exist in tensor_dict.
+ (c) If image in tensor_dict is not rank 4
+ """
+ if func_arg_map is None:
+ func_arg_map = get_default_func_arg_map()
+
+ # changes the batched images tensor to a single image (rank 4 to rank 3)
+ # since the preprocessing functions expect a rank 3 image tensor
+ if fields.InputDataFields.image in tensor_dict:
+ images = tensor_dict[fields.InputDataFields.image]
+ if len(images.get_shape()) != 4:
+ raise ValueError('images in tensor_dict should be rank 4')
+ image = tf.squeeze(images, axis=0)
+ tensor_dict[fields.InputDataFields.image] = image
+
+ # Preprocess inputs based on preprocess_options
+ for option in preprocess_options:
+ func, params = option
+ if func not in func_arg_map:
+ raise ValueError('The function %s does not exist in func_arg_map' %
+ (func.__name__))
+ arg_names = func_arg_map[func]
+ for a in arg_names:
+ if a is not None and a not in tensor_dict:
+ raise ValueError('The function %s requires argument %s' %
+ (func.__name__, a))
+
+ def get_arg(key):
+ return tensor_dict[key] if key is not None else None
+
+ args = [get_arg(a) for a in arg_names]
+ if (preprocess_vars_cache is not None and
+ 'preprocess_vars_cache' in inspect.getargspec(func).args):
+ params['preprocess_vars_cache'] = preprocess_vars_cache
+ results = func(*args, **params)
+ if not isinstance(results, (list, tuple)):
+ results = (results,)
+ # Removes None args since the return values will not contain those.
+ arg_names = [arg_name for arg_name in arg_names if arg_name is not None]
+ for res, arg_name in zip(results, arg_names):
+ tensor_dict[arg_name] = res
+
+ # changes the image back to images (rank 3 to rank 4) to be compatible with
+ # what was received in the first place
+ if fields.InputDataFields.image in tensor_dict:
+ image = tensor_dict[fields.InputDataFields.image]
+ images = tf.expand_dims(image, 0)
+ tensor_dict[fields.InputDataFields.image] = images
+
+ return tensor_dict
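+
+# An end-to-end usage sketch for preprocess(); the shapes and the two chosen
+# augmentation options are illustrative, not defaults:
+#
+#   tensor_dict = {
+#       fields.InputDataFields.image:
+#           tf.random_uniform([1, 480, 640, 3]),  # rank 4, values in [0, 1]
+#       fields.InputDataFields.groundtruth_boxes:
+#           tf.constant([[0.1, 0.1, 0.8, 0.9]], dtype=tf.float32),
+#   }
+#   options = [(random_horizontal_flip, {}),
+#              (random_adjust_brightness, {})]
+#   tensor_dict = preprocess(tensor_dict, options)
+#   # The image comes back as a rank 4 tensor, and the boxes are flipped
+#   # whenever the image is.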
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/preprocessor_cache.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/preprocessor_cache.py"
new file mode 100644
index 00000000..4294ad1c
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/preprocessor_cache.py"
@@ -0,0 +1,102 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Records previous preprocessing operations and allows them to be repeated.
+
+Used with object_detection.core.preprocessor. Passing a PreprocessorCache
+into individual data augmentation functions or the general preprocess() function
+will store all randomly generated variables in the PreprocessorCache. When
+a preprocessor function is called multiple times with the same
+PreprocessorCache object, that function will perform the same augmentation
+on all calls.
+"""
+
+from collections import defaultdict
+
+
+class PreprocessorCache(object):
+ """Dictionary wrapper storing random variables generated during preprocessing.
+ """
+
+ # Constant keys representing different preprocessing functions
+ ROTATION90 = 'rotation90'
+ HORIZONTAL_FLIP = 'horizontal_flip'
+ VERTICAL_FLIP = 'vertical_flip'
+ PIXEL_VALUE_SCALE = 'pixel_value_scale'
+ IMAGE_SCALE = 'image_scale'
+ RGB_TO_GRAY = 'rgb_to_gray'
+ ADJUST_BRIGHTNESS = 'adjust_brightness'
+ ADJUST_CONTRAST = 'adjust_contrast'
+ ADJUST_HUE = 'adjust_hue'
+ ADJUST_SATURATION = 'adjust_saturation'
+ DISTORT_COLOR = 'distort_color'
+ STRICT_CROP_IMAGE = 'strict_crop_image'
+ CROP_IMAGE = 'crop_image'
+ PAD_IMAGE = 'pad_image'
+ CROP_TO_ASPECT_RATIO = 'crop_to_aspect_ratio'
+ RESIZE_METHOD = 'resize_method'
+ PAD_TO_ASPECT_RATIO = 'pad_to_aspect_ratio'
+ BLACK_PATCHES = 'black_patches'
+ ADD_BLACK_PATCH = 'add_black_patch'
+ SELECTOR = 'selector'
+ SELECTOR_TUPLES = 'selector_tuples'
+ SSD_CROP_SELECTOR_ID = 'ssd_crop_selector_id'
+ SSD_CROP_PAD_SELECTOR_ID = 'ssd_crop_pad_selector_id'
+
+ # 23 permitted function ids
+ _VALID_FNS = [ROTATION90, HORIZONTAL_FLIP, VERTICAL_FLIP, PIXEL_VALUE_SCALE,
+ IMAGE_SCALE, RGB_TO_GRAY, ADJUST_BRIGHTNESS, ADJUST_CONTRAST,
+ ADJUST_HUE, ADJUST_SATURATION, DISTORT_COLOR, STRICT_CROP_IMAGE,
+ CROP_IMAGE, PAD_IMAGE, CROP_TO_ASPECT_RATIO, RESIZE_METHOD,
+ PAD_TO_ASPECT_RATIO, BLACK_PATCHES, ADD_BLACK_PATCH, SELECTOR,
+ SELECTOR_TUPLES, SSD_CROP_SELECTOR_ID, SSD_CROP_PAD_SELECTOR_ID]
+
+ def __init__(self):
+ self._history = defaultdict(dict)
+
+ def clear(self):
+ """Resets cache."""
+ self._history = defaultdict(dict)
+
+ def get(self, function_id, key):
+ """Gets stored value given a function id and key.
+
+ Args:
+ function_id: identifier for the preprocessing function used.
+ key: identifier for the variable stored.
+ Returns:
+ value: the corresponding value, expected to be a tensor or
+ nested structure of tensors.
+ Raises:
+ ValueError: if function_id is not one of the 23 valid function ids.
+ """
+ if function_id not in self._VALID_FNS:
+ raise ValueError('Function id not recognized: %s.' % str(function_id))
+ return self._history[function_id].get(key)
+
+ def update(self, function_id, key, value):
+ """Adds a value to the dictionary.
+
+ Args:
+ function_id: identifier for the preprocessing function used.
+ key: identifier for the variable stored.
+ value: the value to store, expected to be a tensor or nested structure
+ of tensors.
+ Raises:
+ ValueError: if function_id is not one of the 23 valid function ids.
+ """
+ if function_id not in self._VALID_FNS:
+ raise ValueError('Function id not recognized: %s.' % str(function_id))
+ self._history[function_id][key] = value
+
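+# A minimal sketch of replaying augmentations with a shared cache (assumes a
+# tensor_dict of the form expected by object_detection.core.preprocessor):
+#
+#   from object_detection.core import preprocessor
+#
+#   cache = PreprocessorCache()
+#   options = [(preprocessor.random_horizontal_flip, {})]
+#   out1 = preprocessor.preprocess(dict(tensor_dict), options,
+#                                  preprocess_vars_cache=cache)
+#   out2 = preprocessor.preprocess(dict(tensor_dict), options,
+#                                  preprocess_vars_cache=cache)
+#   # Both calls share the same cache, so the flip decision recorded by the
+#   # first call is reused by the second and the two outputs are augmented
+#   # identically.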
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/preprocessor_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/preprocessor_test.py"
new file mode 100644
index 00000000..4786f807
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/preprocessor_test.py"
@@ -0,0 +1,2946 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for object_detection.core.preprocessor."""
+
+import numpy as np
+import six
+
+import tensorflow as tf
+
+from object_detection.core import preprocessor
+from object_detection.core import preprocessor_cache
+from object_detection.core import standard_fields as fields
+
+if six.PY2:
+ import mock # pylint: disable=g-import-not-at-top
+else:
+ from unittest import mock # pylint: disable=g-import-not-at-top
+
+
+class PreprocessorTest(tf.test.TestCase):
+
+ def createColorfulTestImage(self):
+ ch255 = tf.fill([1, 100, 200, 1], tf.constant(255, dtype=tf.uint8))
+ ch128 = tf.fill([1, 100, 200, 1], tf.constant(128, dtype=tf.uint8))
+ ch0 = tf.fill([1, 100, 200, 1], tf.constant(0, dtype=tf.uint8))
+ imr = tf.concat([ch255, ch0, ch0], 3)
+ img = tf.concat([ch255, ch255, ch0], 3)
+ imb = tf.concat([ch255, ch0, ch255], 3)
+ imw = tf.concat([ch128, ch128, ch128], 3)
+ imu = tf.concat([imr, img], 2)
+ imd = tf.concat([imb, imw], 2)
+ im = tf.concat([imu, imd], 1)
+ return im
+
+ def createTestImages(self):
+ images_r = tf.constant([[[128, 128, 128, 128], [0, 0, 128, 128],
+ [0, 128, 128, 128], [192, 192, 128, 128]]],
+ dtype=tf.uint8)
+ images_r = tf.expand_dims(images_r, 3)
+ images_g = tf.constant([[[0, 0, 128, 128], [0, 0, 128, 128],
+ [0, 128, 192, 192], [192, 192, 128, 192]]],
+ dtype=tf.uint8)
+ images_g = tf.expand_dims(images_g, 3)
+ images_b = tf.constant([[[128, 128, 192, 0], [0, 0, 128, 192],
+ [0, 128, 128, 0], [192, 192, 192, 128]]],
+ dtype=tf.uint8)
+ images_b = tf.expand_dims(images_b, 3)
+ images = tf.concat([images_r, images_g, images_b], 3)
+ return images
+
+ def createEmptyTestBoxes(self):
+ boxes = tf.constant([[]], dtype=tf.float32)
+ return boxes
+
+ def createTestBoxes(self):
+ boxes = tf.constant(
+ [[0.0, 0.25, 0.75, 1.0], [0.25, 0.5, 0.75, 1.0]], dtype=tf.float32)
+ return boxes
+
+ def createTestGroundtruthWeights(self):
+ return tf.constant([1.0, 0.5], dtype=tf.float32)
+
+ def createTestMasks(self):
+ mask = np.array([
+ [[255.0, 0.0, 0.0],
+ [255.0, 0.0, 0.0],
+ [255.0, 0.0, 0.0]],
+ [[255.0, 255.0, 0.0],
+ [255.0, 255.0, 0.0],
+ [255.0, 255.0, 0.0]]])
+ return tf.constant(mask, dtype=tf.float32)
+
+ def createTestKeypoints(self):
+ keypoints = np.array([
+ [[0.1, 0.1], [0.2, 0.2], [0.3, 0.3]],
+ [[0.4, 0.4], [0.5, 0.5], [0.6, 0.6]],
+ ])
+ return tf.constant(keypoints, dtype=tf.float32)
+
+ def createTestKeypointsInsideCrop(self):
+ keypoints = np.array([
+ [[0.4, 0.4], [0.5, 0.5], [0.6, 0.6]],
+ [[0.4, 0.4], [0.5, 0.5], [0.6, 0.6]],
+ ])
+ return tf.constant(keypoints, dtype=tf.float32)
+
+ def createTestKeypointsOutsideCrop(self):
+ keypoints = np.array([
+ [[0.1, 0.1], [0.2, 0.2], [0.3, 0.3]],
+ [[0.1, 0.1], [0.2, 0.2], [0.3, 0.3]],
+ ])
+ return tf.constant(keypoints, dtype=tf.float32)
+
+ def createKeypointFlipPermutation(self):
+ return np.array([0, 2, 1], dtype=np.int32)
+
+ def createTestLabels(self):
+ labels = tf.constant([1, 2], dtype=tf.int32)
+ return labels
+
+ def createTestBoxesOutOfImage(self):
+ boxes = tf.constant(
+ [[-0.1, 0.25, 0.75, 1], [0.25, 0.5, 0.75, 1.1]], dtype=tf.float32)
+ return boxes
+
+ def createTestMultiClassScores(self):
+ return tf.constant([[1.0, 0.0], [0.5, 0.5]], dtype=tf.float32)
+
+ def expectedImagesAfterNormalization(self):
+ images_r = tf.constant([[[0, 0, 0, 0], [-1, -1, 0, 0],
+ [-1, 0, 0, 0], [0.5, 0.5, 0, 0]]],
+ dtype=tf.float32)
+ images_r = tf.expand_dims(images_r, 3)
+ images_g = tf.constant([[[-1, -1, 0, 0], [-1, -1, 0, 0],
+ [-1, 0, 0.5, 0.5], [0.5, 0.5, 0, 0.5]]],
+ dtype=tf.float32)
+ images_g = tf.expand_dims(images_g, 3)
+ images_b = tf.constant([[[0, 0, 0.5, -1], [-1, -1, 0, 0.5],
+ [-1, 0, 0, -1], [0.5, 0.5, 0.5, 0]]],
+ dtype=tf.float32)
+ images_b = tf.expand_dims(images_b, 3)
+ images = tf.concat([images_r, images_g, images_b], 3)
+ return images
+
+ def expectedMaxImageAfterColorScale(self):
+ images_r = tf.constant([[[0.1, 0.1, 0.1, 0.1], [-0.9, -0.9, 0.1, 0.1],
+ [-0.9, 0.1, 0.1, 0.1], [0.6, 0.6, 0.1, 0.1]]],
+ dtype=tf.float32)
+ images_r = tf.expand_dims(images_r, 3)
+ images_g = tf.constant([[[-0.9, -0.9, 0.1, 0.1], [-0.9, -0.9, 0.1, 0.1],
+ [-0.9, 0.1, 0.6, 0.6], [0.6, 0.6, 0.1, 0.6]]],
+ dtype=tf.float32)
+ images_g = tf.expand_dims(images_g, 3)
+ images_b = tf.constant([[[0.1, 0.1, 0.6, -0.9], [-0.9, -0.9, 0.1, 0.6],
+ [-0.9, 0.1, 0.1, -0.9], [0.6, 0.6, 0.6, 0.1]]],
+ dtype=tf.float32)
+ images_b = tf.expand_dims(images_b, 3)
+ images = tf.concat([images_r, images_g, images_b], 3)
+ return images
+
+ def expectedMinImageAfterColorScale(self):
+ images_r = tf.constant([[[-0.1, -0.1, -0.1, -0.1], [-1, -1, -0.1, -0.1],
+ [-1, -0.1, -0.1, -0.1], [0.4, 0.4, -0.1, -0.1]]],
+ dtype=tf.float32)
+ images_r = tf.expand_dims(images_r, 3)
+ images_g = tf.constant([[[-1, -1, -0.1, -0.1], [-1, -1, -0.1, -0.1],
+ [-1, -0.1, 0.4, 0.4], [0.4, 0.4, -0.1, 0.4]]],
+ dtype=tf.float32)
+ images_g = tf.expand_dims(images_g, 3)
+ images_b = tf.constant([[[-0.1, -0.1, 0.4, -1], [-1, -1, -0.1, 0.4],
+ [-1, -0.1, -0.1, -1], [0.4, 0.4, 0.4, -0.1]]],
+ dtype=tf.float32)
+ images_b = tf.expand_dims(images_b, 3)
+ images = tf.concat([images_r, images_g, images_b], 3)
+ return images
+
+ def expectedImagesAfterLeftRightFlip(self):
+ images_r = tf.constant([[[0, 0, 0, 0], [0, 0, -1, -1],
+ [0, 0, 0, -1], [0, 0, 0.5, 0.5]]],
+ dtype=tf.float32)
+ images_r = tf.expand_dims(images_r, 3)
+ images_g = tf.constant([[[0, 0, -1, -1], [0, 0, -1, -1],
+ [0.5, 0.5, 0, -1], [0.5, 0, 0.5, 0.5]]],
+ dtype=tf.float32)
+ images_g = tf.expand_dims(images_g, 3)
+ images_b = tf.constant([[[-1, 0.5, 0, 0], [0.5, 0, -1, -1],
+ [-1, 0, 0, -1], [0, 0.5, 0.5, 0.5]]],
+ dtype=tf.float32)
+ images_b = tf.expand_dims(images_b, 3)
+ images = tf.concat([images_r, images_g, images_b], 3)
+ return images
+
+ def expectedImagesAfterUpDownFlip(self):
+ images_r = tf.constant([[[0.5, 0.5, 0, 0], [-1, 0, 0, 0],
+ [-1, -1, 0, 0], [0, 0, 0, 0]]],
+ dtype=tf.float32)
+ images_r = tf.expand_dims(images_r, 3)
+ images_g = tf.constant([[[0.5, 0.5, 0, 0.5], [-1, 0, 0.5, 0.5],
+ [-1, -1, 0, 0], [-1, -1, 0, 0]]],
+ dtype=tf.float32)
+ images_g = tf.expand_dims(images_g, 3)
+ images_b = tf.constant([[[0.5, 0.5, 0.5, 0], [-1, 0, 0, -1],
+ [-1, -1, 0, 0.5], [0, 0, 0.5, -1]]],
+ dtype=tf.float32)
+ images_b = tf.expand_dims(images_b, 3)
+ images = tf.concat([images_r, images_g, images_b], 3)
+ return images
+
+ def expectedImagesAfterRot90(self):
+ images_r = tf.constant([[[0, 0, 0, 0], [0, 0, 0, 0],
+ [0, -1, 0, 0.5], [0, -1, -1, 0.5]]],
+ dtype=tf.float32)
+ images_r = tf.expand_dims(images_r, 3)
+ images_g = tf.constant([[[0, 0, 0.5, 0.5], [0, 0, 0.5, 0],
+ [-1, -1, 0, 0.5], [-1, -1, -1, 0.5]]],
+ dtype=tf.float32)
+ images_g = tf.expand_dims(images_g, 3)
+ images_b = tf.constant([[[-1, 0.5, -1, 0], [0.5, 0, 0, 0.5],
+ [0, -1, 0, 0.5], [0, -1, -1, 0.5]]],
+ dtype=tf.float32)
+ images_b = tf.expand_dims(images_b, 3)
+ images = tf.concat([images_r, images_g, images_b], 3)
+ return images
+
+ def expectedBoxesAfterLeftRightFlip(self):
+ boxes = tf.constant([[0.0, 0.0, 0.75, 0.75], [0.25, 0.0, 0.75, 0.5]],
+ dtype=tf.float32)
+ return boxes
+
+ def expectedBoxesAfterUpDownFlip(self):
+ boxes = tf.constant([[0.25, 0.25, 1.0, 1.0], [0.25, 0.5, 0.75, 1.0]],
+ dtype=tf.float32)
+ return boxes
+
+ def expectedBoxesAfterRot90(self):
+ boxes = tf.constant(
+ [[0.0, 0.0, 0.75, 0.75], [0.0, 0.25, 0.5, 0.75]], dtype=tf.float32)
+ return boxes
+
+ def expectedMasksAfterLeftRightFlip(self):
+ mask = np.array([
+ [[0.0, 0.0, 255.0],
+ [0.0, 0.0, 255.0],
+ [0.0, 0.0, 255.0]],
+ [[0.0, 255.0, 255.0],
+ [0.0, 255.0, 255.0],
+ [0.0, 255.0, 255.0]]])
+ return tf.constant(mask, dtype=tf.float32)
+
+ def expectedMasksAfterUpDownFlip(self):
+ mask = np.array([
+ [[255.0, 0.0, 0.0],
+ [255.0, 0.0, 0.0],
+ [255.0, 0.0, 0.0]],
+ [[255.0, 255.0, 0.0],
+ [255.0, 255.0, 0.0],
+ [255.0, 255.0, 0.0]]])
+ return tf.constant(mask, dtype=tf.float32)
+
+ def expectedMasksAfterRot90(self):
+ mask = np.array([
+ [[0.0, 0.0, 0.0],
+ [0.0, 0.0, 0.0],
+ [255.0, 255.0, 255.0]],
+ [[0.0, 0.0, 0.0],
+ [255.0, 255.0, 255.0],
+ [255.0, 255.0, 255.0]]])
+ return tf.constant(mask, dtype=tf.float32)
+
+ def expectedLabelScoresAfterThresholding(self):
+ return tf.constant([1.0], dtype=tf.float32)
+
+ def expectedBoxesAfterThresholding(self):
+ return tf.constant([[0.0, 0.25, 0.75, 1.0]], dtype=tf.float32)
+
+ def expectedLabelsAfterThresholding(self):
+ return tf.constant([1], dtype=tf.float32)
+
+ def expectedMultiClassScoresAfterThresholding(self):
+ return tf.constant([[1.0, 0.0]], dtype=tf.float32)
+
+ def expectedMasksAfterThresholding(self):
+ mask = np.array([
+ [[255.0, 0.0, 0.0],
+ [255.0, 0.0, 0.0],
+ [255.0, 0.0, 0.0]]])
+ return tf.constant(mask, dtype=tf.float32)
+
+ def expectedKeypointsAfterThresholding(self):
+ keypoints = np.array([
+ [[0.1, 0.1], [0.2, 0.2], [0.3, 0.3]]
+ ])
+ return tf.constant(keypoints, dtype=tf.float32)
+
+ def expectedLabelScoresAfterThresholdingWithMissingScore(self):
+ return tf.constant([np.nan], dtype=tf.float32)
+
+ def expectedBoxesAfterThresholdingWithMissingScore(self):
+ return tf.constant([[0.25, 0.5, 0.75, 1]], dtype=tf.float32)
+
+ def expectedLabelsAfterThresholdingWithMissingScore(self):
+ return tf.constant([2], dtype=tf.float32)
+
+ def testRgbToGrayscale(self):
+ images = self.createTestImages()
+ grayscale_images = preprocessor._rgb_to_grayscale(images)
+ expected_images = tf.image.rgb_to_grayscale(images)
+ with self.test_session() as sess:
+ (grayscale_images, expected_images) = sess.run(
+ [grayscale_images, expected_images])
+ self.assertAllEqual(expected_images, grayscale_images)
+
+ def testNormalizeImage(self):
+ preprocess_options = [(preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 256,
+ 'target_minval': -1,
+ 'target_maxval': 1
+ })]
+ images = self.createTestImages()
+ tensor_dict = {fields.InputDataFields.image: images}
+ tensor_dict = preprocessor.preprocess(tensor_dict, preprocess_options)
+ images = tensor_dict[fields.InputDataFields.image]
+ images_expected = self.expectedImagesAfterNormalization()
+
+ with self.test_session() as sess:
+ (images_, images_expected_) = sess.run(
+ [images, images_expected])
+ images_shape_ = images_.shape
+ images_expected_shape_ = images_expected_.shape
+ expected_shape = [1, 4, 4, 3]
+ self.assertAllEqual(images_expected_shape_, images_shape_)
+ self.assertAllEqual(images_shape_, expected_shape)
+ self.assertAllClose(images_, images_expected_)
+
+ def testRetainBoxesAboveThreshold(self):
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ (retained_boxes, retained_labels,
+ retained_weights) = preprocessor.retain_boxes_above_threshold(
+ boxes, labels, weights, threshold=0.6)
+ with self.test_session() as sess:
+ (retained_boxes_, retained_labels_, retained_weights_,
+ expected_retained_boxes_, expected_retained_labels_,
+ expected_retained_weights_) = sess.run([
+ retained_boxes, retained_labels, retained_weights,
+ self.expectedBoxesAfterThresholding(),
+ self.expectedLabelsAfterThresholding(),
+ self.expectedLabelScoresAfterThresholding()])
+ self.assertAllClose(
+ retained_boxes_, expected_retained_boxes_)
+ self.assertAllClose(
+ retained_labels_, expected_retained_labels_)
+ self.assertAllClose(
+ retained_weights_, expected_retained_weights_)
+
+ def testRetainBoxesAboveThresholdWithMultiClassScores(self):
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ multiclass_scores = self.createTestMultiClassScores()
+ (_, _, _,
+ retained_multiclass_scores) = preprocessor.retain_boxes_above_threshold(
+ boxes,
+ labels,
+ weights,
+ multiclass_scores=multiclass_scores,
+ threshold=0.6)
+ with self.test_session() as sess:
+ (retained_multiclass_scores_,
+ expected_retained_multiclass_scores_) = sess.run([
+ retained_multiclass_scores,
+ self.expectedMultiClassScoresAfterThresholding()
+ ])
+
+ self.assertAllClose(retained_multiclass_scores_,
+ expected_retained_multiclass_scores_)
+
+ def testRetainBoxesAboveThresholdWithMasks(self):
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ masks = self.createTestMasks()
+ _, _, _, retained_masks = preprocessor.retain_boxes_above_threshold(
+ boxes, labels, weights, masks, threshold=0.6)
+ with self.test_session() as sess:
+ retained_masks_, expected_retained_masks_ = sess.run([
+ retained_masks, self.expectedMasksAfterThresholding()])
+
+ self.assertAllClose(
+ retained_masks_, expected_retained_masks_)
+
+ def testRetainBoxesAboveThresholdWithKeypoints(self):
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ keypoints = self.createTestKeypoints()
+ (_, _, _, retained_keypoints) = preprocessor.retain_boxes_above_threshold(
+ boxes, labels, weights, keypoints=keypoints, threshold=0.6)
+ with self.test_session() as sess:
+ (retained_keypoints_,
+ expected_retained_keypoints_) = sess.run([
+ retained_keypoints,
+ self.expectedKeypointsAfterThresholding()])
+
+ self.assertAllClose(
+ retained_keypoints_, expected_retained_keypoints_)
+
+ def testFlipBoxesLeftRight(self):
+ boxes = self.createTestBoxes()
+ flipped_boxes = preprocessor._flip_boxes_left_right(boxes)
+ expected_boxes = self.expectedBoxesAfterLeftRightFlip()
+ with self.test_session() as sess:
+ flipped_boxes, expected_boxes = sess.run([flipped_boxes, expected_boxes])
+ self.assertAllEqual(flipped_boxes.flatten(), expected_boxes.flatten())
+
+ def testFlipBoxesUpDown(self):
+ boxes = self.createTestBoxes()
+ flipped_boxes = preprocessor._flip_boxes_up_down(boxes)
+ expected_boxes = self.expectedBoxesAfterUpDownFlip()
+ with self.test_session() as sess:
+ flipped_boxes, expected_boxes = sess.run([flipped_boxes, expected_boxes])
+ self.assertAllEqual(flipped_boxes.flatten(), expected_boxes.flatten())
+
+ def testRot90Boxes(self):
+ boxes = self.createTestBoxes()
+ rotated_boxes = preprocessor._rot90_boxes(boxes)
+ expected_boxes = self.expectedBoxesAfterRot90()
+ with self.test_session() as sess:
+ rotated_boxes, expected_boxes = sess.run([rotated_boxes, expected_boxes])
+ self.assertAllEqual(rotated_boxes.flatten(), expected_boxes.flatten())
+
+ def testFlipMasksLeftRight(self):
+ test_mask = self.createTestMasks()
+ flipped_mask = preprocessor._flip_masks_left_right(test_mask)
+ expected_mask = self.expectedMasksAfterLeftRightFlip()
+ with self.test_session() as sess:
+ flipped_mask, expected_mask = sess.run([flipped_mask, expected_mask])
+ self.assertAllEqual(flipped_mask.flatten(), expected_mask.flatten())
+
+ def testFlipMasksUpDown(self):
+ test_mask = self.createTestMasks()
+ flipped_mask = preprocessor._flip_masks_up_down(test_mask)
+ expected_mask = self.expectedMasksAfterUpDownFlip()
+ with self.test_session() as sess:
+ flipped_mask, expected_mask = sess.run([flipped_mask, expected_mask])
+ self.assertAllEqual(flipped_mask.flatten(), expected_mask.flatten())
+
+ def testRot90Masks(self):
+ test_mask = self.createTestMasks()
+ rotated_mask = preprocessor._rot90_masks(test_mask)
+ expected_mask = self.expectedMasksAfterRot90()
+ with self.test_session() as sess:
+ rotated_mask, expected_mask = sess.run([rotated_mask, expected_mask])
+ self.assertAllEqual(rotated_mask.flatten(), expected_mask.flatten())
+
+ def _testPreprocessorCache(self,
+ preprocess_options,
+ test_boxes=False,
+ test_masks=False,
+ test_keypoints=False,
+ num_runs=4):
+ cache = preprocessor_cache.PreprocessorCache()
+ images = self.createTestImages()
+ boxes = self.createTestBoxes()
+ weights = self.createTestGroundtruthWeights()
+ classes = self.createTestLabels()
+ masks = self.createTestMasks()
+ keypoints = self.createTestKeypoints()
+ preprocessor_arg_map = preprocessor.get_default_func_arg_map(
+ include_instance_masks=test_masks, include_keypoints=test_keypoints)
+ out = []
+ for i in range(num_runs):
+ tensor_dict = {
+ fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_weights: weights
+ }
+ num_outputs = 1
+ if test_boxes:
+ tensor_dict[fields.InputDataFields.groundtruth_boxes] = boxes
+ tensor_dict[fields.InputDataFields.groundtruth_classes] = classes
+ num_outputs += 1
+ if test_masks:
+ tensor_dict[fields.InputDataFields.groundtruth_instance_masks] = masks
+ num_outputs += 1
+ if test_keypoints:
+ tensor_dict[fields.InputDataFields.groundtruth_keypoints] = keypoints
+ num_outputs += 1
+ out.append(preprocessor.preprocess(
+ tensor_dict, preprocess_options, preprocessor_arg_map, cache))
+
+ with self.test_session() as sess:
+ to_run = []
+ for i in range(num_runs):
+ to_run.append(out[i][fields.InputDataFields.image])
+ if test_boxes:
+ to_run.append(out[i][fields.InputDataFields.groundtruth_boxes])
+ if test_masks:
+ to_run.append(
+ out[i][fields.InputDataFields.groundtruth_instance_masks])
+ if test_keypoints:
+ to_run.append(out[i][fields.InputDataFields.groundtruth_keypoints])
+
+ out_array = sess.run(to_run)
+ for i in range(num_outputs, len(out_array)):
+ self.assertAllClose(out_array[i], out_array[i - num_outputs])
+
+ def testRandomHorizontalFlip(self):
+ preprocess_options = [(preprocessor.random_horizontal_flip, {})]
+ images = self.expectedImagesAfterNormalization()
+ boxes = self.createTestBoxes()
+ tensor_dict = {fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_boxes: boxes}
+ images_expected1 = self.expectedImagesAfterLeftRightFlip()
+ boxes_expected1 = self.expectedBoxesAfterLeftRightFlip()
+ images_expected2 = images
+ boxes_expected2 = boxes
+ tensor_dict = preprocessor.preprocess(tensor_dict, preprocess_options)
+ images = tensor_dict[fields.InputDataFields.image]
+ boxes = tensor_dict[fields.InputDataFields.groundtruth_boxes]
+
+ boxes_diff1 = tf.squared_difference(boxes, boxes_expected1)
+ boxes_diff2 = tf.squared_difference(boxes, boxes_expected2)
+ boxes_diff = tf.multiply(boxes_diff1, boxes_diff2)
+ boxes_diff_expected = tf.zeros_like(boxes_diff)
+
+ images_diff1 = tf.squared_difference(images, images_expected1)
+ images_diff2 = tf.squared_difference(images, images_expected2)
+ images_diff = tf.multiply(images_diff1, images_diff2)
+ images_diff_expected = tf.zeros_like(images_diff)
+
+ with self.test_session() as sess:
+ (images_diff_, images_diff_expected_, boxes_diff_,
+ boxes_diff_expected_) = sess.run([images_diff, images_diff_expected,
+ boxes_diff, boxes_diff_expected])
+ self.assertAllClose(boxes_diff_, boxes_diff_expected_)
+ self.assertAllClose(images_diff_, images_diff_expected_)
+
+ def testRandomHorizontalFlipWithEmptyBoxes(self):
+ preprocess_options = [(preprocessor.random_horizontal_flip, {})]
+ images = self.expectedImagesAfterNormalization()
+ boxes = self.createEmptyTestBoxes()
+ tensor_dict = {fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_boxes: boxes}
+ images_expected1 = self.expectedImagesAfterLeftRightFlip()
+ boxes_expected = self.createEmptyTestBoxes()
+ images_expected2 = images
+ tensor_dict = preprocessor.preprocess(tensor_dict, preprocess_options)
+ images = tensor_dict[fields.InputDataFields.image]
+ boxes = tensor_dict[fields.InputDataFields.groundtruth_boxes]
+
+ images_diff1 = tf.squared_difference(images, images_expected1)
+ images_diff2 = tf.squared_difference(images, images_expected2)
+ images_diff = tf.multiply(images_diff1, images_diff2)
+ images_diff_expected = tf.zeros_like(images_diff)
+
+ with self.test_session() as sess:
+ (images_diff_, images_diff_expected_, boxes_,
+ boxes_expected_) = sess.run([images_diff, images_diff_expected, boxes,
+ boxes_expected])
+ self.assertAllClose(boxes_, boxes_expected_)
+ self.assertAllClose(images_diff_, images_diff_expected_)
+
+ def testRandomHorizontalFlipWithCache(self):
+ keypoint_flip_permutation = self.createKeypointFlipPermutation()
+ preprocess_options = [
+ (preprocessor.random_horizontal_flip,
+ {'keypoint_flip_permutation': keypoint_flip_permutation})]
+ self._testPreprocessorCache(preprocess_options,
+ test_boxes=True,
+ test_masks=True,
+ test_keypoints=True)
+
+ def testRunRandomHorizontalFlipWithMaskAndKeypoints(self):
+ preprocess_options = [(preprocessor.random_horizontal_flip, {})]
+ image_height = 3
+ image_width = 3
+ images = tf.random_uniform([1, image_height, image_width, 3])
+ boxes = self.createTestBoxes()
+ masks = self.createTestMasks()
+ keypoints = self.createTestKeypoints()
+ keypoint_flip_permutation = self.createKeypointFlipPermutation()
+ tensor_dict = {
+ fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_instance_masks: masks,
+ fields.InputDataFields.groundtruth_keypoints: keypoints
+ }
+ preprocess_options = [
+ (preprocessor.random_horizontal_flip,
+ {'keypoint_flip_permutation': keypoint_flip_permutation})]
+ preprocessor_arg_map = preprocessor.get_default_func_arg_map(
+ include_instance_masks=True, include_keypoints=True)
+ tensor_dict = preprocessor.preprocess(
+ tensor_dict, preprocess_options, func_arg_map=preprocessor_arg_map)
+ boxes = tensor_dict[fields.InputDataFields.groundtruth_boxes]
+ masks = tensor_dict[fields.InputDataFields.groundtruth_instance_masks]
+ keypoints = tensor_dict[fields.InputDataFields.groundtruth_keypoints]
+ with self.test_session() as sess:
+ boxes, masks, keypoints = sess.run([boxes, masks, keypoints])
+ self.assertTrue(boxes is not None)
+ self.assertTrue(masks is not None)
+ self.assertTrue(keypoints is not None)
+
+ def testRandomVerticalFlip(self):
+ preprocess_options = [(preprocessor.random_vertical_flip, {})]
+ images = self.expectedImagesAfterNormalization()
+ boxes = self.createTestBoxes()
+ tensor_dict = {fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_boxes: boxes}
+ images_expected1 = self.expectedImagesAfterUpDownFlip()
+ boxes_expected1 = self.expectedBoxesAfterUpDownFlip()
+ images_expected2 = images
+ boxes_expected2 = boxes
+ tensor_dict = preprocessor.preprocess(tensor_dict, preprocess_options)
+ images = tensor_dict[fields.InputDataFields.image]
+ boxes = tensor_dict[fields.InputDataFields.groundtruth_boxes]
+
+ boxes_diff1 = tf.squared_difference(boxes, boxes_expected1)
+ boxes_diff2 = tf.squared_difference(boxes, boxes_expected2)
+ boxes_diff = tf.multiply(boxes_diff1, boxes_diff2)
+ boxes_diff_expected = tf.zeros_like(boxes_diff)
+
+ images_diff1 = tf.squared_difference(images, images_expected1)
+ images_diff2 = tf.squared_difference(images, images_expected2)
+ images_diff = tf.multiply(images_diff1, images_diff2)
+ images_diff_expected = tf.zeros_like(images_diff)
+
+ with self.test_session() as sess:
+ (images_diff_, images_diff_expected_, boxes_diff_,
+ boxes_diff_expected_) = sess.run([images_diff, images_diff_expected,
+ boxes_diff, boxes_diff_expected])
+ self.assertAllClose(boxes_diff_, boxes_diff_expected_)
+ self.assertAllClose(images_diff_, images_diff_expected_)
+
+ def testRandomVerticalFlipWithEmptyBoxes(self):
+ preprocess_options = [(preprocessor.random_vertical_flip, {})]
+ images = self.expectedImagesAfterNormalization()
+ boxes = self.createEmptyTestBoxes()
+ tensor_dict = {fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_boxes: boxes}
+ images_expected1 = self.expectedImagesAfterUpDownFlip()
+ boxes_expected = self.createEmptyTestBoxes()
+ images_expected2 = images
+ tensor_dict = preprocessor.preprocess(tensor_dict, preprocess_options)
+ images = tensor_dict[fields.InputDataFields.image]
+ boxes = tensor_dict[fields.InputDataFields.groundtruth_boxes]
+
+ images_diff1 = tf.squared_difference(images, images_expected1)
+ images_diff2 = tf.squared_difference(images, images_expected2)
+ images_diff = tf.multiply(images_diff1, images_diff2)
+ images_diff_expected = tf.zeros_like(images_diff)
+
+ with self.test_session() as sess:
+ (images_diff_, images_diff_expected_, boxes_,
+ boxes_expected_) = sess.run([images_diff, images_diff_expected, boxes,
+ boxes_expected])
+ self.assertAllClose(boxes_, boxes_expected_)
+ self.assertAllClose(images_diff_, images_diff_expected_)
+
+ def testRandomVerticalFlipWithCache(self):
+ keypoint_flip_permutation = self.createKeypointFlipPermutation()
+ preprocess_options = [
+ (preprocessor.random_vertical_flip,
+ {'keypoint_flip_permutation': keypoint_flip_permutation})]
+ self._testPreprocessorCache(preprocess_options,
+ test_boxes=True,
+ test_masks=True,
+ test_keypoints=True)
+
+ def testRunRandomVerticalFlipWithMaskAndKeypoints(self):
+ preprocess_options = [(preprocessor.random_vertical_flip, {})]
+ image_height = 3
+ image_width = 3
+ images = tf.random_uniform([1, image_height, image_width, 3])
+ boxes = self.createTestBoxes()
+ masks = self.createTestMasks()
+ keypoints = self.createTestKeypoints()
+ keypoint_flip_permutation = self.createKeypointFlipPermutation()
+ tensor_dict = {
+ fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_instance_masks: masks,
+ fields.InputDataFields.groundtruth_keypoints: keypoints
+ }
+ preprocess_options = [
+ (preprocessor.random_vertical_flip,
+ {'keypoint_flip_permutation': keypoint_flip_permutation})]
+ preprocessor_arg_map = preprocessor.get_default_func_arg_map(
+ include_instance_masks=True, include_keypoints=True)
+ tensor_dict = preprocessor.preprocess(
+ tensor_dict, preprocess_options, func_arg_map=preprocessor_arg_map)
+ boxes = tensor_dict[fields.InputDataFields.groundtruth_boxes]
+ masks = tensor_dict[fields.InputDataFields.groundtruth_instance_masks]
+ keypoints = tensor_dict[fields.InputDataFields.groundtruth_keypoints]
+ with self.test_session() as sess:
+ boxes, masks, keypoints = sess.run([boxes, masks, keypoints])
+ self.assertTrue(boxes is not None)
+ self.assertTrue(masks is not None)
+ self.assertTrue(keypoints is not None)
+
+ def testRandomRotation90(self):
+ preprocess_options = [(preprocessor.random_rotation90, {})]
+ images = self.expectedImagesAfterNormalization()
+ boxes = self.createTestBoxes()
+ tensor_dict = {fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_boxes: boxes}
+ images_expected1 = self.expectedImagesAfterRot90()
+ boxes_expected1 = self.expectedBoxesAfterRot90()
+ images_expected2 = images
+ boxes_expected2 = boxes
+ tensor_dict = preprocessor.preprocess(tensor_dict, preprocess_options)
+ images = tensor_dict[fields.InputDataFields.image]
+ boxes = tensor_dict[fields.InputDataFields.groundtruth_boxes]
+
+ boxes_diff1 = tf.squared_difference(boxes, boxes_expected1)
+ boxes_diff2 = tf.squared_difference(boxes, boxes_expected2)
+ boxes_diff = tf.multiply(boxes_diff1, boxes_diff2)
+ boxes_diff_expected = tf.zeros_like(boxes_diff)
+
+ images_diff1 = tf.squared_difference(images, images_expected1)
+ images_diff2 = tf.squared_difference(images, images_expected2)
+ images_diff = tf.multiply(images_diff1, images_diff2)
+ images_diff_expected = tf.zeros_like(images_diff)
+
+ with self.test_session() as sess:
+ (images_diff_, images_diff_expected_, boxes_diff_,
+ boxes_diff_expected_) = sess.run([images_diff, images_diff_expected,
+ boxes_diff, boxes_diff_expected])
+ self.assertAllClose(boxes_diff_, boxes_diff_expected_)
+ self.assertAllClose(images_diff_, images_diff_expected_)
+
+ def testRandomRotation90WithEmptyBoxes(self):
+ preprocess_options = [(preprocessor.random_rotation90, {})]
+ images = self.expectedImagesAfterNormalization()
+ boxes = self.createEmptyTestBoxes()
+ tensor_dict = {fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_boxes: boxes}
+ images_expected1 = self.expectedImagesAfterRot90()
+ boxes_expected = self.createEmptyTestBoxes()
+ images_expected2 = images
+ tensor_dict = preprocessor.preprocess(tensor_dict, preprocess_options)
+ images = tensor_dict[fields.InputDataFields.image]
+ boxes = tensor_dict[fields.InputDataFields.groundtruth_boxes]
+
+ images_diff1 = tf.squared_difference(images, images_expected1)
+ images_diff2 = tf.squared_difference(images, images_expected2)
+ images_diff = tf.multiply(images_diff1, images_diff2)
+ images_diff_expected = tf.zeros_like(images_diff)
+
+ with self.test_session() as sess:
+ (images_diff_, images_diff_expected_, boxes_,
+ boxes_expected_) = sess.run([images_diff, images_diff_expected, boxes,
+ boxes_expected])
+ self.assertAllClose(boxes_, boxes_expected_)
+ self.assertAllClose(images_diff_, images_diff_expected_)
+
+ def testRandomRotation90WithCache(self):
+ preprocess_options = [(preprocessor.random_rotation90, {})]
+ self._testPreprocessorCache(preprocess_options,
+ test_boxes=True,
+ test_masks=True,
+ test_keypoints=True)
+
+ def testRunRandomRotation90WithMaskAndKeypoints(self):
+ preprocess_options = [(preprocessor.random_rotation90, {})]
+ image_height = 3
+ image_width = 3
+ images = tf.random_uniform([1, image_height, image_width, 3])
+ boxes = self.createTestBoxes()
+ masks = self.createTestMasks()
+ keypoints = self.createTestKeypoints()
+ tensor_dict = {
+ fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_instance_masks: masks,
+ fields.InputDataFields.groundtruth_keypoints: keypoints
+ }
+ preprocessor_arg_map = preprocessor.get_default_func_arg_map(
+ include_instance_masks=True, include_keypoints=True)
+ tensor_dict = preprocessor.preprocess(
+ tensor_dict, preprocess_options, func_arg_map=preprocessor_arg_map)
+ boxes = tensor_dict[fields.InputDataFields.groundtruth_boxes]
+ masks = tensor_dict[fields.InputDataFields.groundtruth_instance_masks]
+ keypoints = tensor_dict[fields.InputDataFields.groundtruth_keypoints]
+ with self.test_session() as sess:
+ boxes, masks, keypoints = sess.run([boxes, masks, keypoints])
+ self.assertTrue(boxes is not None)
+ self.assertTrue(masks is not None)
+ self.assertTrue(keypoints is not None)
+
+ def testRandomPixelValueScale(self):
+ preprocessing_options = []
+ preprocessing_options.append((preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ }))
+ preprocessing_options.append((preprocessor.random_pixel_value_scale, {}))
+ images = self.createTestImages()
+ tensor_dict = {fields.InputDataFields.image: images}
+ tensor_dict = preprocessor.preprocess(tensor_dict, preprocessing_options)
+ images_min = tf.to_float(images) * 0.9 / 255.0
+ images_max = tf.to_float(images) * 1.1 / 255.0
+ images = tensor_dict[fields.InputDataFields.image]
+ values_greater = tf.greater_equal(images, images_min)
+ values_less = tf.less_equal(images, images_max)
+ values_true = tf.fill([1, 4, 4, 3], True)
+ with self.test_session() as sess:
+ (values_greater_, values_less_, values_true_) = sess.run(
+ [values_greater, values_less, values_true])
+ self.assertAllClose(values_greater_, values_true_)
+ self.assertAllClose(values_less_, values_true_)
+
+ def testRandomPixelValueScaleWithCache(self):
+ preprocess_options = []
+ preprocess_options.append((preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ }))
+ preprocess_options.append((preprocessor.random_pixel_value_scale, {}))
+ self._testPreprocessorCache(preprocess_options,
+ test_boxes=True,
+ test_masks=False,
+ test_keypoints=False)
+
+ def testRandomImageScale(self):
+ preprocess_options = [(preprocessor.random_image_scale, {})]
+ images_original = self.createTestImages()
+ tensor_dict = {fields.InputDataFields.image: images_original}
+ tensor_dict = preprocessor.preprocess(tensor_dict, preprocess_options)
+ images_scaled = tensor_dict[fields.InputDataFields.image]
+ images_original_shape = tf.shape(images_original)
+ images_scaled_shape = tf.shape(images_scaled)
+ with self.test_session() as sess:
+ (images_original_shape_, images_scaled_shape_) = sess.run(
+ [images_original_shape, images_scaled_shape])
+ self.assertTrue(
+ images_original_shape_[1] * 0.5 <= images_scaled_shape_[1])
+ self.assertTrue(
+ images_original_shape_[1] * 2.0 >= images_scaled_shape_[1])
+ self.assertTrue(
+ images_original_shape_[2] * 0.5 <= images_scaled_shape_[2])
+ self.assertTrue(
+ images_original_shape_[2] * 2.0 >= images_scaled_shape_[2])
+
+ def testRandomImageScaleWithCache(self):
+ preprocess_options = [(preprocessor.random_image_scale, {})]
+ self._testPreprocessorCache(preprocess_options,
+ test_boxes=False,
+ test_masks=False,
+ test_keypoints=False)
+
+ def testRandomRGBtoGray(self):
+ preprocess_options = [(preprocessor.random_rgb_to_gray, {})]
+ images_original = self.createTestImages()
+ tensor_dict = {fields.InputDataFields.image: images_original}
+ tensor_dict = preprocessor.preprocess(tensor_dict, preprocess_options)
+ images_gray = tensor_dict[fields.InputDataFields.image]
+ images_gray_r, images_gray_g, images_gray_b = tf.split(
+ value=images_gray, num_or_size_splits=3, axis=3)
+ images_r, images_g, images_b = tf.split(
+ value=images_original, num_or_size_splits=3, axis=3)
+ images_r_diff1 = tf.squared_difference(tf.to_float(images_r),
+ tf.to_float(images_gray_r))
+ images_r_diff2 = tf.squared_difference(tf.to_float(images_gray_r),
+ tf.to_float(images_gray_g))
+ images_r_diff = tf.multiply(images_r_diff1, images_r_diff2)
+ images_g_diff1 = tf.squared_difference(tf.to_float(images_g),
+ tf.to_float(images_gray_g))
+ images_g_diff2 = tf.squared_difference(tf.to_float(images_gray_g),
+ tf.to_float(images_gray_b))
+ images_g_diff = tf.multiply(images_g_diff1, images_g_diff2)
+ images_b_diff1 = tf.squared_difference(tf.to_float(images_b),
+ tf.to_float(images_gray_b))
+ images_b_diff2 = tf.squared_difference(tf.to_float(images_gray_b),
+ tf.to_float(images_gray_r))
+ images_b_diff = tf.multiply(images_b_diff1, images_b_diff2)
+ image_zero1 = tf.constant(0, dtype=tf.float32, shape=[1, 4, 4, 1])
+ with self.test_session() as sess:
+ (images_r_diff_, images_g_diff_, images_b_diff_, image_zero1_) = sess.run(
+ [images_r_diff, images_g_diff, images_b_diff, image_zero1])
+ self.assertAllClose(images_r_diff_, image_zero1_)
+ self.assertAllClose(images_g_diff_, image_zero1_)
+ self.assertAllClose(images_b_diff_, image_zero1_)
+
+ def testRandomRGBtoGrayWithCache(self):
+ preprocess_options = [(
+ preprocessor.random_rgb_to_gray, {'probability': 0.5})]
+ self._testPreprocessorCache(preprocess_options,
+ test_boxes=False,
+ test_masks=False,
+ test_keypoints=False)
+
+ def testRandomAdjustBrightness(self):
+ preprocessing_options = []
+ preprocessing_options.append((preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ }))
+ preprocessing_options.append((preprocessor.random_adjust_brightness, {}))
+ images_original = self.createTestImages()
+ tensor_dict = {fields.InputDataFields.image: images_original}
+ tensor_dict = preprocessor.preprocess(tensor_dict, preprocessing_options)
+ images_bright = tensor_dict[fields.InputDataFields.image]
+ image_original_shape = tf.shape(images_original)
+ image_bright_shape = tf.shape(images_bright)
+ with self.test_session() as sess:
+ (image_original_shape_, image_bright_shape_) = sess.run(
+ [image_original_shape, image_bright_shape])
+ self.assertAllEqual(image_original_shape_, image_bright_shape_)
+
+ def testRandomAdjustBrightnessWithCache(self):
+ preprocess_options = []
+ preprocess_options.append((preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ }))
+ preprocess_options.append((preprocessor.random_adjust_brightness, {}))
+ self._testPreprocessorCache(preprocess_options,
+ test_boxes=False,
+ test_masks=False,
+ test_keypoints=False)
+
+ def testRandomAdjustContrast(self):
+ preprocessing_options = []
+ preprocessing_options.append((preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ }))
+ preprocessing_options.append((preprocessor.random_adjust_contrast, {}))
+ images_original = self.createTestImages()
+ tensor_dict = {fields.InputDataFields.image: images_original}
+ tensor_dict = preprocessor.preprocess(tensor_dict, preprocessing_options)
+ images_contrast = tensor_dict[fields.InputDataFields.image]
+ image_original_shape = tf.shape(images_original)
+ image_contrast_shape = tf.shape(images_contrast)
+ with self.test_session() as sess:
+ (image_original_shape_, image_contrast_shape_) = sess.run(
+ [image_original_shape, image_contrast_shape])
+ self.assertAllEqual(image_original_shape_, image_contrast_shape_)
+
+ def testRandomAdjustContrastWithCache(self):
+ preprocess_options = []
+ preprocess_options.append((preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ }))
+ preprocess_options.append((preprocessor.random_adjust_contrast, {}))
+ self._testPreprocessorCache(preprocess_options,
+ test_boxes=False,
+ test_masks=False,
+ test_keypoints=False)
+
+ def testRandomAdjustHue(self):
+ preprocessing_options = []
+ preprocessing_options.append((preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ }))
+ preprocessing_options.append((preprocessor.random_adjust_hue, {}))
+ images_original = self.createTestImages()
+ tensor_dict = {fields.InputDataFields.image: images_original}
+ tensor_dict = preprocessor.preprocess(tensor_dict, preprocessing_options)
+ images_hue = tensor_dict[fields.InputDataFields.image]
+ image_original_shape = tf.shape(images_original)
+ image_hue_shape = tf.shape(images_hue)
+ with self.test_session() as sess:
+ (image_original_shape_, image_hue_shape_) = sess.run(
+ [image_original_shape, image_hue_shape])
+ self.assertAllEqual(image_original_shape_, image_hue_shape_)
+
+ def testRandomAdjustHueWithCache(self):
+ preprocess_options = []
+ preprocess_options.append((preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ }))
+ preprocess_options.append((preprocessor.random_adjust_hue, {}))
+ self._testPreprocessorCache(preprocess_options,
+ test_boxes=False,
+ test_masks=False,
+ test_keypoints=False)
+
+ def testRandomDistortColor(self):
+ preprocessing_options = []
+ preprocessing_options.append((preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ }))
+ preprocessing_options.append((preprocessor.random_distort_color, {}))
+ images_original = self.createTestImages()
+ images_original_shape = tf.shape(images_original)
+ tensor_dict = {fields.InputDataFields.image: images_original}
+ tensor_dict = preprocessor.preprocess(tensor_dict, preprocessing_options)
+ images_distorted_color = tensor_dict[fields.InputDataFields.image]
+ images_distorted_color_shape = tf.shape(images_distorted_color)
+ with self.test_session() as sess:
+ (images_original_shape_, images_distorted_color_shape_) = sess.run(
+ [images_original_shape, images_distorted_color_shape])
+ self.assertAllEqual(images_original_shape_, images_distorted_color_shape_)
+
+ def testRandomDistortColorWithCache(self):
+ preprocess_options = []
+ preprocess_options.append((preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ }))
+ preprocess_options.append((preprocessor.random_distort_color, {}))
+ self._testPreprocessorCache(preprocess_options,
+ test_boxes=False,
+ test_masks=False,
+ test_keypoints=False)
+
+ def testRandomJitterBoxes(self):
+ preprocessing_options = []
+ preprocessing_options.append((preprocessor.random_jitter_boxes, {}))
+ boxes = self.createTestBoxes()
+ boxes_shape = tf.shape(boxes)
+ tensor_dict = {fields.InputDataFields.groundtruth_boxes: boxes}
+ tensor_dict = preprocessor.preprocess(tensor_dict, preprocessing_options)
+ distorted_boxes = tensor_dict[fields.InputDataFields.groundtruth_boxes]
+ distorted_boxes_shape = tf.shape(distorted_boxes)
+
+ with self.test_session() as sess:
+ (boxes_shape_, distorted_boxes_shape_) = sess.run(
+ [boxes_shape, distorted_boxes_shape])
+ self.assertAllEqual(boxes_shape_, distorted_boxes_shape_)
+
+ def testRandomCropImage(self):
+ preprocessing_options = []
+ preprocessing_options.append((preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ }))
+ preprocessing_options.append((preprocessor.random_crop_image, {}))
+ images = self.createTestImages()
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ tensor_dict = {
+ fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels,
+ fields.InputDataFields.groundtruth_weights: weights,
+ }
+ distorted_tensor_dict = preprocessor.preprocess(tensor_dict,
+ preprocessing_options)
+ distorted_images = distorted_tensor_dict[fields.InputDataFields.image]
+ distorted_boxes = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_boxes]
+ boxes_rank = tf.rank(boxes)
+ distorted_boxes_rank = tf.rank(distorted_boxes)
+ images_rank = tf.rank(images)
+ distorted_images_rank = tf.rank(distorted_images)
+ self.assertEqual(3, distorted_images.get_shape()[3])
+
+ with self.test_session() as sess:
+ (boxes_rank_, distorted_boxes_rank_, images_rank_,
+ distorted_images_rank_) = sess.run([
+ boxes_rank, distorted_boxes_rank, images_rank, distorted_images_rank
+ ])
+ self.assertAllEqual(boxes_rank_, distorted_boxes_rank_)
+ self.assertAllEqual(images_rank_, distorted_images_rank_)
+
+ def testRandomCropImageWithCache(self):
+ preprocess_options = [(preprocessor.random_rgb_to_gray,
+ {'probability': 0.5}),
+ (preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1,
+ }),
+ (preprocessor.random_crop_image, {})]
+ self._testPreprocessorCache(preprocess_options,
+ test_boxes=True,
+ test_masks=False,
+ test_keypoints=False)
+
+ def testRandomCropImageGrayscale(self):
+ preprocessing_options = [(preprocessor.rgb_to_gray, {}),
+ (preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1,
+ }),
+ (preprocessor.random_crop_image, {})]
+ images = self.createTestImages()
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ tensor_dict = {
+ fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels,
+ fields.InputDataFields.groundtruth_weights: weights,
+ }
+ distorted_tensor_dict = preprocessor.preprocess(
+ tensor_dict, preprocessing_options)
+ distorted_images = distorted_tensor_dict[fields.InputDataFields.image]
+ distorted_boxes = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_boxes]
+ boxes_rank = tf.rank(boxes)
+ distorted_boxes_rank = tf.rank(distorted_boxes)
+ images_rank = tf.rank(images)
+ distorted_images_rank = tf.rank(distorted_images)
+ self.assertEqual(1, distorted_images.get_shape()[3])
+
+ with self.test_session() as sess:
+ session_results = sess.run([
+ boxes_rank, distorted_boxes_rank, images_rank, distorted_images_rank
+ ])
+ (boxes_rank_, distorted_boxes_rank_, images_rank_,
+ distorted_images_rank_) = session_results
+ self.assertAllEqual(boxes_rank_, distorted_boxes_rank_)
+ self.assertAllEqual(images_rank_, distorted_images_rank_)
+
+ def testRandomCropImageWithBoxOutOfImage(self):
+ preprocessing_options = []
+ preprocessing_options.append((preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ }))
+ preprocessing_options.append((preprocessor.random_crop_image, {}))
+ images = self.createTestImages()
+ boxes = self.createTestBoxesOutOfImage()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ tensor_dict = {
+ fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels,
+ fields.InputDataFields.groundtruth_weights: weights,
+ }
+ distorted_tensor_dict = preprocessor.preprocess(tensor_dict,
+ preprocessing_options)
+ distorted_images = distorted_tensor_dict[fields.InputDataFields.image]
+ distorted_boxes = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_boxes]
+ boxes_rank = tf.rank(boxes)
+ distorted_boxes_rank = tf.rank(distorted_boxes)
+ images_rank = tf.rank(images)
+ distorted_images_rank = tf.rank(distorted_images)
+
+ with self.test_session() as sess:
+ (boxes_rank_, distorted_boxes_rank_, images_rank_,
+ distorted_images_rank_) = sess.run(
+ [boxes_rank, distorted_boxes_rank, images_rank,
+ distorted_images_rank])
+ self.assertAllEqual(boxes_rank_, distorted_boxes_rank_)
+ self.assertAllEqual(images_rank_, distorted_images_rank_)
+
+ def testRandomCropImageWithRandomCoefOne(self):
+ preprocessing_options = [(preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ })]
+
+ images = self.createTestImages()
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ tensor_dict = {
+ fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels,
+ fields.InputDataFields.groundtruth_weights: weights
+ }
+ tensor_dict = preprocessor.preprocess(tensor_dict, preprocessing_options)
+ images = tensor_dict[fields.InputDataFields.image]
+
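+ # random_coef=1.0 means random_crop_image always keeps the original image, so
+ # the distorted outputs below should be identical to the inputs.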
+ preprocessing_options = [(preprocessor.random_crop_image, {
+ 'random_coef': 1.0
+ })]
+ distorted_tensor_dict = preprocessor.preprocess(tensor_dict,
+ preprocessing_options)
+
+ distorted_images = distorted_tensor_dict[fields.InputDataFields.image]
+ distorted_boxes = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_boxes]
+ distorted_labels = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_classes]
+ distorted_weights = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_weights]
+ boxes_shape = tf.shape(boxes)
+ distorted_boxes_shape = tf.shape(distorted_boxes)
+ images_shape = tf.shape(images)
+ distorted_images_shape = tf.shape(distorted_images)
+
+ with self.test_session() as sess:
+ (boxes_shape_, distorted_boxes_shape_, images_shape_,
+ distorted_images_shape_, images_, distorted_images_,
+ boxes_, distorted_boxes_, labels_, distorted_labels_,
+ weights_, distorted_weights_) = sess.run(
+ [boxes_shape, distorted_boxes_shape, images_shape,
+ distorted_images_shape, images, distorted_images,
+ boxes, distorted_boxes, labels, distorted_labels,
+ weights, distorted_weights])
+ self.assertAllEqual(boxes_shape_, distorted_boxes_shape_)
+ self.assertAllEqual(images_shape_, distorted_images_shape_)
+ self.assertAllClose(images_, distorted_images_)
+ self.assertAllClose(boxes_, distorted_boxes_)
+ self.assertAllEqual(labels_, distorted_labels_)
+ self.assertAllEqual(weights_, distorted_weights_)
+
+ def testRandomCropWithMockSampleDistortedBoundingBox(self):
+ preprocessing_options = [(preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ })]
+
+ images = self.createColorfulTestImage()
+ boxes = tf.constant([[0.1, 0.1, 0.8, 0.3],
+ [0.2, 0.4, 0.75, 0.75],
+ [0.3, 0.1, 0.4, 0.7]], dtype=tf.float32)
+ labels = tf.constant([1, 7, 11], dtype=tf.int32)
+ weights = tf.constant([1.0, 0.5, 0.6], dtype=tf.float32)
+
+ tensor_dict = {
+ fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels,
+ fields.InputDataFields.groundtruth_weights: weights,
+ }
+ tensor_dict = preprocessor.preprocess(tensor_dict, preprocessing_options)
+ images = tensor_dict[fields.InputDataFields.image]
+
+ preprocessing_options = [(preprocessor.random_crop_image, {})]
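+ # tf.image.sample_distorted_bounding_box is mocked below to return a fixed
+ # (offset, size, box) triple, so the crop window and the expected boxes,
+ # labels and weights are deterministic.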
+ with mock.patch.object(
+ tf.image,
+ 'sample_distorted_bounding_box') as mock_sample_distorted_bounding_box:
+ mock_sample_distorted_bounding_box.return_value = (tf.constant(
+ [6, 143, 0], dtype=tf.int32), tf.constant(
+ [190, 237, -1], dtype=tf.int32), tf.constant(
+ [[[0.03, 0.3575, 0.98, 0.95]]], dtype=tf.float32))
+
+ distorted_tensor_dict = preprocessor.preprocess(tensor_dict,
+ preprocessing_options)
+
+ distorted_boxes = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_boxes]
+ distorted_labels = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_classes]
+ distorted_weights = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_weights]
+ expected_boxes = tf.constant([[0.178947, 0.07173, 0.75789469, 0.66244733],
+ [0.28421, 0.0, 0.38947365, 0.57805908]],
+ dtype=tf.float32)
+ expected_labels = tf.constant([7, 11], dtype=tf.int32)
+ expected_weights = tf.constant([0.5, 0.6], dtype=tf.float32)
+
+ with self.test_session() as sess:
+ (distorted_boxes_, distorted_labels_, distorted_weights_,
+ expected_boxes_, expected_labels_, expected_weights_) = sess.run(
+ [distorted_boxes, distorted_labels, distorted_weights,
+ expected_boxes, expected_labels, expected_weights])
+ self.assertAllClose(distorted_boxes_, expected_boxes_)
+ self.assertAllEqual(distorted_labels_, expected_labels_)
+ self.assertAllEqual(distorted_weights_, expected_weights_)
+
+ def testRandomCropWithoutClipBoxes(self):
+ preprocessing_options = [(preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ })]
+
+ images = self.createColorfulTestImage()
+ boxes = tf.constant([[0.1, 0.1, 0.8, 0.3],
+ [0.2, 0.4, 0.75, 0.75],
+ [0.3, 0.1, 0.4, 0.7]], dtype=tf.float32)
+ keypoints = tf.constant([
+ [[0.1, 0.1], [0.8, 0.3]],
+ [[0.2, 0.4], [0.75, 0.75]],
+ [[0.3, 0.1], [0.4, 0.7]],
+ ], dtype=tf.float32)
+ labels = tf.constant([1, 7, 11], dtype=tf.int32)
+ weights = tf.constant([1.0, 0.5, 0.6], dtype=tf.float32)
+
+ tensor_dict = {
+ fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_keypoints: keypoints,
+ fields.InputDataFields.groundtruth_classes: labels,
+ fields.InputDataFields.groundtruth_weights: weights,
+ }
+ tensor_dict = preprocessor.preprocess(tensor_dict, preprocessing_options)
+
+ preprocessing_options = [(preprocessor.random_crop_image, {
+ 'clip_boxes': False,
+ })]
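+ # clip_boxes=False leaves cropped coordinates unclamped, so the expected boxes
+ # and keypoints below can legitimately fall outside [0, 1].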
+ with mock.patch.object(
+ tf.image,
+ 'sample_distorted_bounding_box') as mock_sample_distorted_bounding_box:
+ mock_sample_distorted_bounding_box.return_value = (tf.constant(
+ [6, 143, 0], dtype=tf.int32), tf.constant(
+ [190, 237, -1], dtype=tf.int32), tf.constant(
+ [[[0.03, 0.3575, 0.98, 0.95]]], dtype=tf.float32))
+
+ preprocessor_arg_map = preprocessor.get_default_func_arg_map(
+ include_keypoints=True)
+ distorted_tensor_dict = preprocessor.preprocess(
+ tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
+
+ distorted_boxes = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_boxes]
+ distorted_keypoints = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_keypoints]
+ distorted_labels = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_classes]
+ distorted_weights = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_weights]
+ expected_boxes = tf.constant(
+ [[0.178947, 0.07173, 0.75789469, 0.66244733],
+ [0.28421, -0.434599, 0.38947365, 0.57805908]],
+ dtype=tf.float32)
+ expected_keypoints = tf.constant(
+ [[[0.178947, 0.07173], [0.75789469, 0.66244733]],
+ [[0.28421, -0.434599], [0.38947365, 0.57805908]]],
+ dtype=tf.float32)
+ expected_labels = tf.constant([7, 11], dtype=tf.int32)
+ expected_weights = tf.constant([0.5, 0.6], dtype=tf.float32)
+
+ with self.test_session() as sess:
+ (distorted_boxes_, distorted_keypoints_, distorted_labels_,
+ distorted_weights_, expected_boxes_, expected_keypoints_,
+ expected_labels_, expected_weights_) = sess.run(
+ [distorted_boxes, distorted_keypoints, distorted_labels,
+ distorted_weights, expected_boxes, expected_keypoints,
+ expected_labels, expected_weights])
+ self.assertAllClose(distorted_boxes_, expected_boxes_)
+ self.assertAllClose(distorted_keypoints_, expected_keypoints_)
+ self.assertAllEqual(distorted_labels_, expected_labels_)
+ self.assertAllEqual(distorted_weights_, expected_weights_)
+
+ def testRandomCropImageWithMultiClassScores(self):
+ preprocessing_options = []
+ preprocessing_options.append((preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ }))
+ preprocessing_options.append((preprocessor.random_crop_image, {}))
+ images = self.createTestImages()
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ multiclass_scores = self.createTestMultiClassScores()
+
+ tensor_dict = {
+ fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels,
+ fields.InputDataFields.groundtruth_weights: weights,
+ fields.InputDataFields.multiclass_scores: multiclass_scores
+ }
+ distorted_tensor_dict = preprocessor.preprocess(tensor_dict,
+ preprocessing_options)
+ distorted_images = distorted_tensor_dict[fields.InputDataFields.image]
+ distorted_boxes = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_boxes]
+ distorted_multiclass_scores = distorted_tensor_dict[
+ fields.InputDataFields.multiclass_scores]
+ boxes_rank = tf.rank(boxes)
+ distorted_boxes_rank = tf.rank(distorted_boxes)
+ images_rank = tf.rank(images)
+ distorted_images_rank = tf.rank(distorted_images)
+ multiclass_scores_rank = tf.rank(multiclass_scores)
+ distorted_multiclass_scores_rank = tf.rank(distorted_multiclass_scores)
+
+ with self.test_session() as sess:
+ (boxes_rank_, distorted_boxes_, distorted_boxes_rank_, images_rank_,
+ distorted_images_rank_, multiclass_scores_rank_,
+ distorted_multiclass_scores_rank_,
+ distorted_multiclass_scores_) = sess.run([
+ boxes_rank, distorted_boxes, distorted_boxes_rank, images_rank,
+ distorted_images_rank, multiclass_scores_rank,
+ distorted_multiclass_scores_rank, distorted_multiclass_scores
+ ])
+ self.assertAllEqual(boxes_rank_, distorted_boxes_rank_)
+ self.assertAllEqual(images_rank_, distorted_images_rank_)
+ self.assertAllEqual(multiclass_scores_rank_,
+ distorted_multiclass_scores_rank_)
+ self.assertAllEqual(distorted_boxes_.shape[0],
+ distorted_multiclass_scores_.shape[0])
+
+ def testStrictRandomCropImageWithGroundtruthWeights(self):
+ image = self.createColorfulTestImage()[0]
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ with mock.patch.object(
+ tf.image,
+ 'sample_distorted_bounding_box'
+ ) as mock_sample_distorted_bounding_box:
+ mock_sample_distorted_bounding_box.return_value = (
+ tf.constant([6, 143, 0], dtype=tf.int32),
+ tf.constant([190, 237, -1], dtype=tf.int32),
+ tf.constant([[[0.03, 0.3575, 0.98, 0.95]]], dtype=tf.float32))
+ new_image, new_boxes, new_labels, new_groundtruth_weights = (
+ preprocessor._strict_random_crop_image(
+ image, boxes, labels, weights))
+ with self.test_session() as sess:
+ new_image, new_boxes, new_labels, new_groundtruth_weights = (
+ sess.run(
+ [new_image, new_boxes, new_labels, new_groundtruth_weights])
+ )
+
+ expected_boxes = np.array(
+ [[0.0, 0.0, 0.75789469, 1.0],
+ [0.23157893, 0.24050637, 0.75789469, 1.0]], dtype=np.float32)
+ self.assertAllEqual(new_image.shape, [190, 237, 3])
+ self.assertAllEqual(new_groundtruth_weights, [1.0, 0.5])
+ self.assertAllClose(
+ new_boxes.flatten(), expected_boxes.flatten())
+
+ def testStrictRandomCropImageWithMasks(self):
+ image = self.createColorfulTestImage()[0]
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ masks = tf.random_uniform([2, 200, 400], dtype=tf.float32)
+ with mock.patch.object(
+ tf.image,
+ 'sample_distorted_bounding_box'
+ ) as mock_sample_distorted_bounding_box:
+ mock_sample_distorted_bounding_box.return_value = (
+ tf.constant([6, 143, 0], dtype=tf.int32),
+ tf.constant([190, 237, -1], dtype=tf.int32),
+ tf.constant([[[0.03, 0.3575, 0.98, 0.95]]], dtype=tf.float32))
+ new_image, new_boxes, new_labels, new_weights, new_masks = (
+ preprocessor._strict_random_crop_image(
+ image, boxes, labels, weights, masks=masks))
+ with self.test_session() as sess:
+ new_image, new_boxes, new_labels, new_weights, new_masks = sess.run(
+ [new_image, new_boxes, new_labels, new_weights, new_masks])
+ expected_boxes = np.array(
+ [[0.0, 0.0, 0.75789469, 1.0],
+ [0.23157893, 0.24050637, 0.75789469, 1.0]], dtype=np.float32)
+ self.assertAllEqual(new_image.shape, [190, 237, 3])
+ self.assertAllEqual(new_masks.shape, [2, 190, 237])
+ self.assertAllClose(
+ new_boxes.flatten(), expected_boxes.flatten())
+
+ def testStrictRandomCropImageWithKeypoints(self):
+ image = self.createColorfulTestImage()[0]
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ keypoints = self.createTestKeypoints()
+ with mock.patch.object(
+ tf.image,
+ 'sample_distorted_bounding_box'
+ ) as mock_sample_distorted_bounding_box:
+ mock_sample_distorted_bounding_box.return_value = (
+ tf.constant([6, 143, 0], dtype=tf.int32),
+ tf.constant([190, 237, -1], dtype=tf.int32),
+ tf.constant([[[0.03, 0.3575, 0.98, 0.95]]], dtype=tf.float32))
+ new_image, new_boxes, new_labels, new_weights, new_keypoints = (
+ preprocessor._strict_random_crop_image(
+ image, boxes, labels, weights, keypoints=keypoints))
+ with self.test_session() as sess:
+ new_image, new_boxes, new_labels, new_weights, new_keypoints = sess.run(
+ [new_image, new_boxes, new_labels, new_weights, new_keypoints])
+
+ expected_boxes = np.array([
+ [0.0, 0.0, 0.75789469, 1.0],
+ [0.23157893, 0.24050637, 0.75789469, 1.0],], dtype=np.float32)
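+ # Keypoints that land outside the crop window are replaced with NaN, which is
+ # why the first instance's keypoints are all NaN below.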
+ expected_keypoints = np.array([
+ [[np.nan, np.nan],
+ [np.nan, np.nan],
+ [np.nan, np.nan]],
+ [[0.38947368, 0.07173],
+ [0.49473682, 0.24050637],
+ [0.60000002, 0.40928277]]
+ ], dtype=np.float32)
+ self.assertAllEqual(new_image.shape, [190, 237, 3])
+ self.assertAllClose(
+ new_boxes.flatten(), expected_boxes.flatten())
+ self.assertAllClose(
+ new_keypoints.flatten(), expected_keypoints.flatten())
+
+ def testRunRandomCropImageWithMasks(self):
+ image = self.createColorfulTestImage()
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ masks = tf.random_uniform([2, 200, 400], dtype=tf.float32)
+
+ tensor_dict = {
+ fields.InputDataFields.image: image,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels,
+ fields.InputDataFields.groundtruth_weights: weights,
+ fields.InputDataFields.groundtruth_instance_masks: masks,
+ }
+
+ preprocessor_arg_map = preprocessor.get_default_func_arg_map(
+ include_instance_masks=True)
+
+ preprocessing_options = [(preprocessor.random_crop_image, {})]
+
+ with mock.patch.object(
+ tf.image,
+ 'sample_distorted_bounding_box'
+ ) as mock_sample_distorted_bounding_box:
+ mock_sample_distorted_bounding_box.return_value = (
+ tf.constant([6, 143, 0], dtype=tf.int32),
+ tf.constant([190, 237, -1], dtype=tf.int32),
+ tf.constant([[[0.03, 0.3575, 0.98, 0.95]]], dtype=tf.float32))
+ distorted_tensor_dict = preprocessor.preprocess(
+ tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
+ distorted_image = distorted_tensor_dict[fields.InputDataFields.image]
+ distorted_boxes = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_boxes]
+ distorted_labels = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_classes]
+ distorted_masks = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_instance_masks]
+ with self.test_session() as sess:
+ (distorted_image_, distorted_boxes_, distorted_labels_,
+ distorted_masks_) = sess.run(
+ [distorted_image, distorted_boxes, distorted_labels,
+ distorted_masks])
+
+ expected_boxes = np.array([
+ [0.0, 0.0, 0.75789469, 1.0],
+ [0.23157893, 0.24050637, 0.75789469, 1.0],
+ ], dtype=np.float32)
+ self.assertAllEqual(distorted_image_.shape, [1, 190, 237, 3])
+ self.assertAllEqual(distorted_masks_.shape, [2, 190, 237])
+ self.assertAllEqual(distorted_labels_, [1, 2])
+ self.assertAllClose(
+ distorted_boxes_.flatten(), expected_boxes.flatten())
+
+ def testRunRandomCropImageWithKeypointsInsideCrop(self):
+ image = self.createColorfulTestImage()
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ keypoints = self.createTestKeypointsInsideCrop()
+
+ tensor_dict = {
+ fields.InputDataFields.image: image,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels,
+ fields.InputDataFields.groundtruth_keypoints: keypoints,
+ fields.InputDataFields.groundtruth_weights: weights
+ }
+
+ preprocessor_arg_map = preprocessor.get_default_func_arg_map(
+ include_keypoints=True)
+
+ preprocessing_options = [(preprocessor.random_crop_image, {})]
+
+ with mock.patch.object(
+ tf.image,
+ 'sample_distorted_bounding_box'
+ ) as mock_sample_distorted_bounding_box:
+ mock_sample_distorted_bounding_box.return_value = (
+ tf.constant([6, 143, 0], dtype=tf.int32),
+ tf.constant([190, 237, -1], dtype=tf.int32),
+ tf.constant([[[0.03, 0.3575, 0.98, 0.95]]], dtype=tf.float32))
+ distorted_tensor_dict = preprocessor.preprocess(
+ tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
+ distorted_image = distorted_tensor_dict[fields.InputDataFields.image]
+ distorted_boxes = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_boxes]
+ distorted_labels = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_classes]
+ distorted_keypoints = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_keypoints]
+ with self.test_session() as sess:
+ (distorted_image_, distorted_boxes_, distorted_labels_,
+ distorted_keypoints_) = sess.run(
+ [distorted_image, distorted_boxes, distorted_labels,
+ distorted_keypoints])
+
+ expected_boxes = np.array([
+ [0.0, 0.0, 0.75789469, 1.0],
+ [0.23157893, 0.24050637, 0.75789469, 1.0],
+ ], dtype=np.float32)
+ expected_keypoints = np.array([
+ [[0.38947368, 0.07173],
+ [0.49473682, 0.24050637],
+ [0.60000002, 0.40928277]],
+ [[0.38947368, 0.07173],
+ [0.49473682, 0.24050637],
+ [0.60000002, 0.40928277]]
+ ])
+ self.assertAllEqual(distorted_image_.shape, [1, 190, 237, 3])
+ self.assertAllEqual(distorted_labels_, [1, 2])
+ self.assertAllClose(
+ distorted_boxes_.flatten(), expected_boxes.flatten())
+ self.assertAllClose(
+ distorted_keypoints_.flatten(), expected_keypoints.flatten())
+
+ def testRunRandomCropImageWithKeypointsOutsideCrop(self):
+ image = self.createColorfulTestImage()
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ keypoints = self.createTestKeypointsOutsideCrop()
+
+ tensor_dict = {
+ fields.InputDataFields.image: image,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels,
+ fields.InputDataFields.groundtruth_weights: weights,
+ fields.InputDataFields.groundtruth_keypoints: keypoints
+ }
+
+ preprocessor_arg_map = preprocessor.get_default_func_arg_map(
+ include_keypoints=True)
+
+ preprocessing_options = [(preprocessor.random_crop_image, {})]
+
+ with mock.patch.object(
+ tf.image,
+ 'sample_distorted_bounding_box'
+ ) as mock_sample_distorted_bounding_box:
+ mock_sample_distorted_bounding_box.return_value = (
+ tf.constant([6, 143, 0], dtype=tf.int32),
+ tf.constant([190, 237, -1], dtype=tf.int32),
+ tf.constant([[[0.03, 0.3575, 0.98, 0.95]]], dtype=tf.float32))
+ distorted_tensor_dict = preprocessor.preprocess(
+ tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
+ distorted_image = distorted_tensor_dict[fields.InputDataFields.image]
+ distorted_boxes = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_boxes]
+ distorted_labels = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_classes]
+ distorted_keypoints = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_keypoints]
+ with self.test_session() as sess:
+ (distorted_image_, distorted_boxes_, distorted_labels_,
+ distorted_keypoints_) = sess.run(
+ [distorted_image, distorted_boxes, distorted_labels,
+ distorted_keypoints])
+
+ expected_boxes = np.array([
+ [0.0, 0.0, 0.75789469, 1.0],
+ [0.23157893, 0.24050637, 0.75789469, 1.0],
+ ], dtype=np.float32)
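+ # All test keypoints lie outside the mocked crop window, so every expected
+ # keypoint coordinate is NaN.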
+ expected_keypoints = np.array([
+ [[np.nan, np.nan],
+ [np.nan, np.nan],
+ [np.nan, np.nan]],
+ [[np.nan, np.nan],
+ [np.nan, np.nan],
+ [np.nan, np.nan]],
+ ])
+ self.assertAllEqual(distorted_image_.shape, [1, 190, 237, 3])
+ self.assertAllEqual(distorted_labels_, [1, 2])
+ self.assertAllClose(
+ distorted_boxes_.flatten(), expected_boxes.flatten())
+ self.assertAllClose(
+ distorted_keypoints_.flatten(), expected_keypoints.flatten())
+
+ def testRunRetainBoxesAboveThreshold(self):
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+
+ tensor_dict = {
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels,
+ fields.InputDataFields.groundtruth_weights: weights,
+ }
+
+ preprocessing_options = [
+ (preprocessor.retain_boxes_above_threshold, {'threshold': 0.6})
+ ]
+ preprocessor_arg_map = preprocessor.get_default_func_arg_map()
+ retained_tensor_dict = preprocessor.preprocess(
+ tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
+ retained_boxes = retained_tensor_dict[
+ fields.InputDataFields.groundtruth_boxes]
+ retained_labels = retained_tensor_dict[
+ fields.InputDataFields.groundtruth_classes]
+ retained_weights = retained_tensor_dict[
+ fields.InputDataFields.groundtruth_weights]
+
+ with self.test_session() as sess:
+ (retained_boxes_, retained_labels_,
+ retained_weights_, expected_retained_boxes_,
+ expected_retained_labels_, expected_retained_weights_) = sess.run(
+ [retained_boxes, retained_labels, retained_weights,
+ self.expectedBoxesAfterThresholding(),
+ self.expectedLabelsAfterThresholding(),
+ self.expectedLabelScoresAfterThresholding()])
+
+ self.assertAllClose(retained_boxes_, expected_retained_boxes_)
+ self.assertAllClose(retained_labels_, expected_retained_labels_)
+ self.assertAllClose(
+ retained_weights_, expected_retained_weights_)
+
+ def testRunRetainBoxesAboveThresholdWithMasks(self):
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ masks = self.createTestMasks()
+
+ tensor_dict = {
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels,
+ fields.InputDataFields.groundtruth_weights: weights,
+ fields.InputDataFields.groundtruth_instance_masks: masks
+ }
+
+ preprocessor_arg_map = preprocessor.get_default_func_arg_map(
+ include_label_weights=True,
+ include_instance_masks=True)
+
+ preprocessing_options = [
+ (preprocessor.retain_boxes_above_threshold, {'threshold': 0.6})
+ ]
+
+ retained_tensor_dict = preprocessor.preprocess(
+ tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
+ retained_masks = retained_tensor_dict[
+ fields.InputDataFields.groundtruth_instance_masks]
+
+ with self.test_session() as sess:
+ (retained_masks_, expected_masks_) = sess.run(
+ [retained_masks,
+ self.expectedMasksAfterThresholding()])
+ self.assertAllClose(retained_masks_, expected_masks_)
+
+ def testRunRetainBoxesAboveThresholdWithKeypoints(self):
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ keypoints = self.createTestKeypoints()
+
+ tensor_dict = {
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels,
+ fields.InputDataFields.groundtruth_weights: weights,
+ fields.InputDataFields.groundtruth_keypoints: keypoints
+ }
+
+ preprocessor_arg_map = preprocessor.get_default_func_arg_map(
+ include_keypoints=True)
+
+ preprocessing_options = [
+ (preprocessor.retain_boxes_above_threshold, {'threshold': 0.6})
+ ]
+
+ retained_tensor_dict = preprocessor.preprocess(
+ tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
+ retained_keypoints = retained_tensor_dict[
+ fields.InputDataFields.groundtruth_keypoints]
+
+ with self.test_session() as sess:
+ (retained_keypoints_, expected_keypoints_) = sess.run(
+ [retained_keypoints,
+ self.expectedKeypointsAfterThresholding()])
+ self.assertAllClose(retained_keypoints_, expected_keypoints_)
+
+ def testRandomCropToAspectRatioWithCache(self):
+ preprocess_options = [(preprocessor.random_crop_to_aspect_ratio, {})]
+ self._testPreprocessorCache(preprocess_options,
+ test_boxes=True,
+ test_masks=False,
+ test_keypoints=False)
+
+ def testRunRandomCropToAspectRatioWithMasks(self):
+ image = self.createColorfulTestImage()
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ masks = tf.random_uniform([2, 200, 400], dtype=tf.float32)
+
+ tensor_dict = {
+ fields.InputDataFields.image: image,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels,
+ fields.InputDataFields.groundtruth_weights: weights,
+ fields.InputDataFields.groundtruth_instance_masks: masks
+ }
+
+ preprocessor_arg_map = preprocessor.get_default_func_arg_map(
+ include_instance_masks=True)
+
+ preprocessing_options = [(preprocessor.random_crop_to_aspect_ratio, {})]
+
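+ # preprocessor._random_integer is mocked to 0 so the aspect-ratio crop offset
+ # is deterministic and the expected boxes and masks below are exact.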
+ with mock.patch.object(preprocessor,
+ '_random_integer') as mock_random_integer:
+ mock_random_integer.return_value = tf.constant(0, dtype=tf.int32)
+ distorted_tensor_dict = preprocessor.preprocess(
+ tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
+ distorted_image = distorted_tensor_dict[fields.InputDataFields.image]
+ distorted_boxes = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_boxes]
+ distorted_labels = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_classes]
+ distorted_masks = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_instance_masks]
+ with self.test_session() as sess:
+ (distorted_image_, distorted_boxes_, distorted_labels_,
+ distorted_masks_) = sess.run([
+ distorted_image, distorted_boxes, distorted_labels, distorted_masks
+ ])
+
+ expected_boxes = np.array([0.0, 0.5, 0.75, 1.0], dtype=np.float32)
+ self.assertAllEqual(distorted_image_.shape, [1, 200, 200, 3])
+ self.assertAllEqual(distorted_labels_, [1])
+ self.assertAllClose(distorted_boxes_.flatten(),
+ expected_boxes.flatten())
+ self.assertAllEqual(distorted_masks_.shape, [1, 200, 200])
+
+ def testRunRandomCropToAspectRatioWithKeypoints(self):
+ image = self.createColorfulTestImage()
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ keypoints = self.createTestKeypoints()
+
+ tensor_dict = {
+ fields.InputDataFields.image: image,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels,
+ fields.InputDataFields.groundtruth_weights: weights,
+ fields.InputDataFields.groundtruth_keypoints: keypoints
+ }
+
+ preprocessor_arg_map = preprocessor.get_default_func_arg_map(
+ include_keypoints=True)
+
+ preprocessing_options = [(preprocessor.random_crop_to_aspect_ratio, {})]
+
+ with mock.patch.object(preprocessor,
+ '_random_integer') as mock_random_integer:
+ mock_random_integer.return_value = tf.constant(0, dtype=tf.int32)
+ distorted_tensor_dict = preprocessor.preprocess(
+ tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
+ distorted_image = distorted_tensor_dict[fields.InputDataFields.image]
+ distorted_boxes = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_boxes]
+ distorted_labels = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_classes]
+ distorted_keypoints = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_keypoints]
+ with self.test_session() as sess:
+ (distorted_image_, distorted_boxes_, distorted_labels_,
+ distorted_keypoints_) = sess.run([
+ distorted_image, distorted_boxes, distorted_labels,
+ distorted_keypoints
+ ])
+
+ expected_boxes = np.array([0.0, 0.5, 0.75, 1.0], dtype=np.float32)
+ expected_keypoints = np.array(
+ [[0.1, 0.2], [0.2, 0.4], [0.3, 0.6]], dtype=np.float32)
+ self.assertAllEqual(distorted_image_.shape, [1, 200, 200, 3])
+ self.assertAllEqual(distorted_labels_, [1])
+ self.assertAllClose(distorted_boxes_.flatten(),
+ expected_boxes.flatten())
+ self.assertAllClose(distorted_keypoints_.flatten(),
+ expected_keypoints.flatten())
+
+ def testRandomPadToAspectRatioWithCache(self):
+ preprocess_options = [(preprocessor.random_pad_to_aspect_ratio, {})]
+ self._testPreprocessorCache(preprocess_options,
+ test_boxes=True,
+ test_masks=True,
+ test_keypoints=True)
+
+ def testRunRandomPadToAspectRatioWithMinMaxPaddedSizeRatios(self):
+ image = self.createColorfulTestImage()
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+
+ tensor_dict = {
+ fields.InputDataFields.image: image,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels
+ }
+
+ preprocessor_arg_map = preprocessor.get_default_func_arg_map()
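+ # Pinning min_padded_size_ratio and max_padded_size_ratio to the same value
+ # removes the randomness from the padding, so the expected shape and boxes
+ # below are exact.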
+ preprocessing_options = [(preprocessor.random_pad_to_aspect_ratio,
+ {'min_padded_size_ratio': (4.0, 4.0),
+ 'max_padded_size_ratio': (4.0, 4.0)})]
+
+ distorted_tensor_dict = preprocessor.preprocess(
+ tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
+ distorted_image = distorted_tensor_dict[fields.InputDataFields.image]
+ distorted_boxes = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_boxes]
+ distorted_labels = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_classes]
+ with self.test_session() as sess:
+ distorted_image_, distorted_boxes_, distorted_labels_ = sess.run([
+ distorted_image, distorted_boxes, distorted_labels])
+
+ expected_boxes = np.array(
+ [[0.0, 0.125, 0.1875, 0.5], [0.0625, 0.25, 0.1875, 0.5]],
+ dtype=np.float32)
+ self.assertAllEqual(distorted_image_.shape, [1, 800, 800, 3])
+ self.assertAllEqual(distorted_labels_, [1, 2])
+ self.assertAllClose(distorted_boxes_.flatten(),
+ expected_boxes.flatten())
+
+ def testRunRandomPadToAspectRatioWithMasks(self):
+ image = self.createColorfulTestImage()
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ masks = tf.random_uniform([2, 200, 400], dtype=tf.float32)
+
+ tensor_dict = {
+ fields.InputDataFields.image: image,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels,
+ fields.InputDataFields.groundtruth_instance_masks: masks
+ }
+
+ preprocessor_arg_map = preprocessor.get_default_func_arg_map(
+ include_instance_masks=True)
+
+ preprocessing_options = [(preprocessor.random_pad_to_aspect_ratio, {})]
+
+ distorted_tensor_dict = preprocessor.preprocess(
+ tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
+ distorted_image = distorted_tensor_dict[fields.InputDataFields.image]
+ distorted_boxes = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_boxes]
+ distorted_labels = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_classes]
+ distorted_masks = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_instance_masks]
+ with self.test_session() as sess:
+ (distorted_image_, distorted_boxes_, distorted_labels_,
+ distorted_masks_) = sess.run([
+ distorted_image, distorted_boxes, distorted_labels, distorted_masks
+ ])
+
+ expected_boxes = np.array(
+ [[0.0, 0.25, 0.375, 1.0], [0.125, 0.5, 0.375, 1.0]], dtype=np.float32)
+ self.assertAllEqual(distorted_image_.shape, [1, 400, 400, 3])
+ self.assertAllEqual(distorted_labels_, [1, 2])
+ self.assertAllClose(distorted_boxes_.flatten(),
+ expected_boxes.flatten())
+ self.assertAllEqual(distorted_masks_.shape, [2, 400, 400])
+
+ def testRunRandomPadToAspectRatioWithKeypoints(self):
+ image = self.createColorfulTestImage()
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ keypoints = self.createTestKeypoints()
+
+ tensor_dict = {
+ fields.InputDataFields.image: image,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels,
+ fields.InputDataFields.groundtruth_keypoints: keypoints
+ }
+
+ preprocessor_arg_map = preprocessor.get_default_func_arg_map(
+ include_keypoints=True)
+
+ preprocessing_options = [(preprocessor.random_pad_to_aspect_ratio, {})]
+
+ distorted_tensor_dict = preprocessor.preprocess(
+ tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
+ distorted_image = distorted_tensor_dict[fields.InputDataFields.image]
+ distorted_boxes = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_boxes]
+ distorted_labels = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_classes]
+ distorted_keypoints = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_keypoints]
+ with self.test_session() as sess:
+ (distorted_image_, distorted_boxes_, distorted_labels_,
+ distorted_keypoints_) = sess.run([
+ distorted_image, distorted_boxes, distorted_labels,
+ distorted_keypoints
+ ])
+
+ expected_boxes = np.array(
+ [[0.0, 0.25, 0.375, 1.0], [0.125, 0.5, 0.375, 1.0]], dtype=np.float32)
+ expected_keypoints = np.array([
+ [[0.05, 0.1], [0.1, 0.2], [0.15, 0.3]],
+ [[0.2, 0.4], [0.25, 0.5], [0.3, 0.6]],
+ ], dtype=np.float32)
+ self.assertAllEqual(distorted_image_.shape, [1, 400, 400, 3])
+ self.assertAllEqual(distorted_labels_, [1, 2])
+ self.assertAllClose(distorted_boxes_.flatten(),
+ expected_boxes.flatten())
+ self.assertAllClose(distorted_keypoints_.flatten(),
+ expected_keypoints.flatten())
+
+ def testRandomPadImageWithCache(self):
+ preprocess_options = [(preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1,}), (preprocessor.random_pad_image, {})]
+ self._testPreprocessorCache(preprocess_options,
+ test_boxes=True,
+ test_masks=True,
+ test_keypoints=True)
+
+ def testRandomPadImage(self):
+ preprocessing_options = [(preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ })]
+
+ images = self.createTestImages()
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ tensor_dict = {
+ fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels,
+ }
+ tensor_dict = preprocessor.preprocess(tensor_dict, preprocessing_options)
+ images = tensor_dict[fields.InputDataFields.image]
+
+ preprocessing_options = [(preprocessor.random_pad_image, {})]
+ padded_tensor_dict = preprocessor.preprocess(tensor_dict,
+ preprocessing_options)
+
+ padded_images = padded_tensor_dict[fields.InputDataFields.image]
+ padded_boxes = padded_tensor_dict[
+ fields.InputDataFields.groundtruth_boxes]
+ boxes_shape = tf.shape(boxes)
+ padded_boxes_shape = tf.shape(padded_boxes)
+ images_shape = tf.shape(images)
+ padded_images_shape = tf.shape(padded_images)
+
+ with self.test_session() as sess:
+ (boxes_shape_, padded_boxes_shape_, images_shape_,
+ padded_images_shape_, boxes_, padded_boxes_) = sess.run(
+ [boxes_shape, padded_boxes_shape, images_shape,
+ padded_images_shape, boxes, padded_boxes])
+ self.assertAllEqual(boxes_shape_, padded_boxes_shape_)
+ # The padded image is never smaller than the original and at most twice its
+ # size; these are scalar comparisons, so assert them directly.
+ self.assertTrue(images_shape_[1] >= padded_images_shape_[1] * 0.5)
+ self.assertTrue(images_shape_[2] >= padded_images_shape_[2] * 0.5)
+ self.assertTrue(images_shape_[1] <= padded_images_shape_[1])
+ self.assertTrue(images_shape_[2] <= padded_images_shape_[2])
+ self.assertTrue(np.all((boxes_[:, 2] - boxes_[:, 0]) >= (
+ padded_boxes_[:, 2] - padded_boxes_[:, 0])))
+ self.assertTrue(np.all((boxes_[:, 3] - boxes_[:, 1]) >= (
+ padded_boxes_[:, 3] - padded_boxes_[:, 1])))
+
+ def testRandomCropPadImageWithCache(self):
+ preprocess_options = [(preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1,}), (preprocessor.random_crop_pad_image, {})]
+ self._testPreprocessorCache(preprocess_options,
+ test_boxes=True,
+ test_masks=True,
+ test_keypoints=True)
+
+ def testRandomCropPadImageWithRandomCoefOne(self):
+ preprocessing_options = [(preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ })]
+
+ images = self.createTestImages()
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ tensor_dict = {
+ fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels,
+ fields.InputDataFields.groundtruth_weights: weights,
+ }
+ tensor_dict = preprocessor.preprocess(tensor_dict, preprocessing_options)
+ images = tensor_dict[fields.InputDataFields.image]
+
+ preprocessing_options = [(preprocessor.random_crop_pad_image, {
+ 'random_coef': 1.0
+ })]
+ padded_tensor_dict = preprocessor.preprocess(tensor_dict,
+ preprocessing_options)
+
+ padded_images = padded_tensor_dict[fields.InputDataFields.image]
+ padded_boxes = padded_tensor_dict[
+ fields.InputDataFields.groundtruth_boxes]
+ boxes_shape = tf.shape(boxes)
+ padded_boxes_shape = tf.shape(padded_boxes)
+ images_shape = tf.shape(images)
+ padded_images_shape = tf.shape(padded_images)
+
+ with self.test_session() as sess:
+ (boxes_shape_, padded_boxes_shape_, images_shape_,
+ padded_images_shape_, boxes_, padded_boxes_) = sess.run(
+ [boxes_shape, padded_boxes_shape, images_shape,
+ padded_images_shape, boxes, padded_boxes])
+ self.assertAllEqual(boxes_shape_, padded_boxes_shape_)
+ # The padded image is never smaller than the original and at most twice its
+ # size; these are scalar comparisons, so assert them directly.
+ self.assertTrue(images_shape_[1] >= padded_images_shape_[1] * 0.5)
+ self.assertTrue(images_shape_[2] >= padded_images_shape_[2] * 0.5)
+ self.assertTrue(images_shape_[1] <= padded_images_shape_[1])
+ self.assertTrue(images_shape_[2] <= padded_images_shape_[2])
+ self.assertTrue(np.all((boxes_[:, 2] - boxes_[:, 0]) >= (
+ padded_boxes_[:, 2] - padded_boxes_[:, 0])))
+ self.assertTrue(np.all((boxes_[:, 3] - boxes_[:, 1]) >= (
+ padded_boxes_[:, 3] - padded_boxes_[:, 1])))
+
+ def testRandomCropToAspectRatio(self):
+ images = self.createTestImages()
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ tensor_dict = {
+ fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels,
+ fields.InputDataFields.groundtruth_weights: weights,
+ }
+ tensor_dict = preprocessor.preprocess(tensor_dict, [])
+ images = tensor_dict[fields.InputDataFields.image]
+
+ preprocessing_options = [(preprocessor.random_crop_to_aspect_ratio, {
+ 'aspect_ratio': 2.0
+ })]
+ cropped_tensor_dict = preprocessor.preprocess(tensor_dict,
+ preprocessing_options)
+
+ cropped_images = cropped_tensor_dict[fields.InputDataFields.image]
+ cropped_boxes = cropped_tensor_dict[
+ fields.InputDataFields.groundtruth_boxes]
+ boxes_shape = tf.shape(boxes)
+ cropped_boxes_shape = tf.shape(cropped_boxes)
+ images_shape = tf.shape(images)
+ cropped_images_shape = tf.shape(cropped_images)
+
+ with self.test_session() as sess:
+ (boxes_shape_, cropped_boxes_shape_, images_shape_,
+ cropped_images_shape_) = sess.run([
+ boxes_shape, cropped_boxes_shape, images_shape, cropped_images_shape
+ ])
+ self.assertAllEqual(boxes_shape_, cropped_boxes_shape_)
+ self.assertEqual(images_shape_[1], cropped_images_shape_[1] * 2)
+ self.assertEqual(images_shape_[2], cropped_images_shape_[2])
+
+ def testRandomPadToAspectRatio(self):
+ images = self.createTestImages()
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ tensor_dict = {
+ fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels,
+ }
+ tensor_dict = preprocessor.preprocess(tensor_dict, [])
+ images = tensor_dict[fields.InputDataFields.image]
+
+ preprocessing_options = [(preprocessor.random_pad_to_aspect_ratio, {
+ 'aspect_ratio': 2.0
+ })]
+ padded_tensor_dict = preprocessor.preprocess(tensor_dict,
+ preprocessing_options)
+
+ padded_images = padded_tensor_dict[fields.InputDataFields.image]
+ padded_boxes = padded_tensor_dict[
+ fields.InputDataFields.groundtruth_boxes]
+ boxes_shape = tf.shape(boxes)
+ padded_boxes_shape = tf.shape(padded_boxes)
+ images_shape = tf.shape(images)
+ padded_images_shape = tf.shape(padded_images)
+
+ with self.test_session() as sess:
+ (boxes_shape_, padded_boxes_shape_, images_shape_,
+ padded_images_shape_) = sess.run([
+ boxes_shape, padded_boxes_shape, images_shape, padded_images_shape
+ ])
+ self.assertAllEqual(boxes_shape_, padded_boxes_shape_)
+ self.assertEqual(images_shape_[1], padded_images_shape_[1])
+ self.assertEqual(2 * images_shape_[2], padded_images_shape_[2])
+
+ def testRandomBlackPatchesWithCache(self):
+ preprocess_options = []
+ preprocess_options.append((preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ }))
+ preprocess_options.append((preprocessor.random_black_patches, {
+ 'size_to_image_ratio': 0.5
+ }))
+ self._testPreprocessorCache(preprocess_options,
+ test_boxes=True,
+ test_masks=True,
+ test_keypoints=True)
+
+ def testRandomBlackPatches(self):
+ preprocessing_options = []
+ preprocessing_options.append((preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ }))
+ preprocessing_options.append((preprocessor.random_black_patches, {
+ 'size_to_image_ratio': 0.5
+ }))
+ images = self.createTestImages()
+ tensor_dict = {fields.InputDataFields.image: images}
+ blacked_tensor_dict = preprocessor.preprocess(tensor_dict,
+ preprocessing_options)
+ blacked_images = blacked_tensor_dict[fields.InputDataFields.image]
+ images_shape = tf.shape(images)
+ blacked_images_shape = tf.shape(blacked_images)
+
+ with self.test_session() as sess:
+ (images_shape_, blacked_images_shape_) = sess.run(
+ [images_shape, blacked_images_shape])
+ self.assertAllEqual(images_shape_, blacked_images_shape_)
+
+ def testRandomResizeMethodWithCache(self):
+ preprocess_options = []
+ preprocess_options.append((preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ }))
+ preprocess_options.append((preprocessor.random_resize_method, {
+ 'target_size': (75, 150)
+ }))
+ self._testPreprocessorCache(preprocess_options,
+ test_boxes=True,
+ test_masks=True,
+ test_keypoints=True)
+
+ def testRandomResizeMethod(self):
+ preprocessing_options = []
+ preprocessing_options.append((preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ }))
+ preprocessing_options.append((preprocessor.random_resize_method, {
+ 'target_size': (75, 150)
+ }))
+ images = self.createTestImages()
+ tensor_dict = {fields.InputDataFields.image: images}
+ resized_tensor_dict = preprocessor.preprocess(tensor_dict,
+ preprocessing_options)
+ resized_images = resized_tensor_dict[fields.InputDataFields.image]
+ resized_images_shape = tf.shape(resized_images)
+ expected_images_shape = tf.constant([1, 75, 150, 3], dtype=tf.int32)
+
+ with self.test_session() as sess:
+ (expected_images_shape_, resized_images_shape_) = sess.run(
+ [expected_images_shape, resized_images_shape])
+ self.assertAllEqual(expected_images_shape_,
+ resized_images_shape_)
+
+ def testResizeImageWithMasks(self):
+ """Tests image resizing, checking output sizes."""
+ in_image_shape_list = [[60, 40, 3], [15, 30, 3]]
+ in_masks_shape_list = [[15, 60, 40], [10, 15, 30]]
+ height = 50
+ width = 100
+ expected_image_shape_list = [[50, 100, 3], [50, 100, 3]]
+ expected_masks_shape_list = [[15, 50, 100], [10, 50, 100]]
+
+ for (in_image_shape, expected_image_shape, in_masks_shape,
+ expected_mask_shape) in zip(in_image_shape_list,
+ expected_image_shape_list,
+ in_masks_shape_list,
+ expected_masks_shape_list):
+ in_image = tf.random_uniform(in_image_shape)
+ in_masks = tf.random_uniform(in_masks_shape)
+ out_image, out_masks, _ = preprocessor.resize_image(
+ in_image, in_masks, new_height=height, new_width=width)
+ out_image_shape = tf.shape(out_image)
+ out_masks_shape = tf.shape(out_masks)
+
+ with self.test_session() as sess:
+ out_image_shape, out_masks_shape = sess.run(
+ [out_image_shape, out_masks_shape])
+ self.assertAllEqual(out_image_shape, expected_image_shape)
+ self.assertAllEqual(out_masks_shape, expected_mask_shape)
+
+ def testResizeImageWithMasksTensorInputHeightAndWidth(self):
+ """Tests image resizing, checking output sizes."""
+ in_image_shape_list = [[60, 40, 3], [15, 30, 3]]
+ in_masks_shape_list = [[15, 60, 40], [10, 15, 30]]
+ height = tf.constant(50, dtype=tf.int32)
+ width = tf.constant(100, dtype=tf.int32)
+ expected_image_shape_list = [[50, 100, 3], [50, 100, 3]]
+ expected_masks_shape_list = [[15, 50, 100], [10, 50, 100]]
+
+ for (in_image_shape, expected_image_shape, in_masks_shape,
+ expected_mask_shape) in zip(in_image_shape_list,
+ expected_image_shape_list,
+ in_masks_shape_list,
+ expected_masks_shape_list):
+ in_image = tf.random_uniform(in_image_shape)
+ in_masks = tf.random_uniform(in_masks_shape)
+ out_image, out_masks, _ = preprocessor.resize_image(
+ in_image, in_masks, new_height=height, new_width=width)
+ out_image_shape = tf.shape(out_image)
+ out_masks_shape = tf.shape(out_masks)
+
+ with self.test_session() as sess:
+ out_image_shape, out_masks_shape = sess.run(
+ [out_image_shape, out_masks_shape])
+ self.assertAllEqual(out_image_shape, expected_image_shape)
+ self.assertAllEqual(out_masks_shape, expected_mask_shape)
+
+ def testResizeImageWithNoInstanceMask(self):
+ """Tests image resizing, checking output sizes."""
+ in_image_shape_list = [[60, 40, 3], [15, 30, 3]]
+ in_masks_shape_list = [[0, 60, 40], [0, 15, 30]]
+ height = 50
+ width = 100
+ expected_image_shape_list = [[50, 100, 3], [50, 100, 3]]
+ expected_masks_shape_list = [[0, 50, 100], [0, 50, 100]]
+
+ for (in_image_shape, expected_image_shape, in_masks_shape,
+ expected_mask_shape) in zip(in_image_shape_list,
+ expected_image_shape_list,
+ in_masks_shape_list,
+ expected_masks_shape_list):
+ in_image = tf.random_uniform(in_image_shape)
+ in_masks = tf.random_uniform(in_masks_shape)
+ out_image, out_masks, _ = preprocessor.resize_image(
+ in_image, in_masks, new_height=height, new_width=width)
+ out_image_shape = tf.shape(out_image)
+ out_masks_shape = tf.shape(out_masks)
+
+ with self.test_session() as sess:
+ out_image_shape, out_masks_shape = sess.run(
+ [out_image_shape, out_masks_shape])
+ self.assertAllEqual(out_image_shape, expected_image_shape)
+ self.assertAllEqual(out_masks_shape, expected_mask_shape)
+
+ def testResizeToRangePreservesStaticSpatialShape(self):
+ """Tests image resizing, checking output sizes."""
+ in_shape_list = [[60, 40, 3], [15, 30, 3], [15, 50, 3]]
+ min_dim = 50
+ max_dim = 100
+ expected_shape_list = [[75, 50, 3], [50, 100, 3], [30, 100, 3]]
+
+ for in_shape, expected_shape in zip(in_shape_list, expected_shape_list):
+ in_image = tf.random_uniform(in_shape)
+ out_image, _ = preprocessor.resize_to_range(
+ in_image, min_dimension=min_dim, max_dimension=max_dim)
+ self.assertAllEqual(out_image.get_shape().as_list(), expected_shape)
+
+ def testResizeToRangeWithDynamicSpatialShape(self):
+ """Tests image resizing, checking output sizes."""
+ in_shape_list = [[60, 40, 3], [15, 30, 3], [15, 50, 3]]
+ min_dim = 50
+ max_dim = 100
+ expected_shape_list = [[75, 50, 3], [50, 100, 3], [30, 100, 3]]
+
+ for in_shape, expected_shape in zip(in_shape_list, expected_shape_list):
+ in_image = tf.placeholder(tf.float32, shape=(None, None, 3))
+ out_image, _ = preprocessor.resize_to_range(
+ in_image, min_dimension=min_dim, max_dimension=max_dim)
+ out_image_shape = tf.shape(out_image)
+ with self.test_session() as sess:
+ out_image_shape = sess.run(out_image_shape,
+ feed_dict={in_image:
+ np.random.randn(*in_shape)})
+ self.assertAllEqual(out_image_shape, expected_shape)
+
+ def testResizeToRangeWithPadToMaxDimensionReturnsCorrectShapes(self):
+ in_shape_list = [[60, 40, 3], [15, 30, 3], [15, 50, 3]]
+ min_dim = 50
+ max_dim = 100
+ expected_shape_list = [[100, 100, 3], [100, 100, 3], [100, 100, 3]]
+
+ for in_shape, expected_shape in zip(in_shape_list, expected_shape_list):
+ in_image = tf.placeholder(tf.float32, shape=(None, None, 3))
+ out_image, _ = preprocessor.resize_to_range(
+ in_image,
+ min_dimension=min_dim,
+ max_dimension=max_dim,
+ pad_to_max_dimension=True)
+ self.assertAllEqual(out_image.shape.as_list(), expected_shape)
+ out_image_shape = tf.shape(out_image)
+ with self.test_session() as sess:
+ out_image_shape = sess.run(
+ out_image_shape, feed_dict={in_image: np.random.randn(*in_shape)})
+ self.assertAllEqual(out_image_shape, expected_shape)
+
+ def testResizeToRangeWithPadToMaxDimensionReturnsCorrectTensor(self):
+ in_image_np = np.array([[[0, 1, 2]]], np.float32)
+ ex_image_np = np.array(
+ [[[0, 1, 2], [123.68, 116.779, 103.939]],
+ [[123.68, 116.779, 103.939], [123.68, 116.779, 103.939]]], np.float32)
+ min_dim = 1
+ max_dim = 2
+
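+ # The 1x1 input is padded out to the 2x2 max dimension; the original pixel
+ # stays at (0, 0) and every padded position receives per_channel_pad_value,
+ # matching ex_image_np above.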
+ in_image = tf.placeholder(tf.float32, shape=(None, None, 3))
+ out_image, _ = preprocessor.resize_to_range(
+ in_image,
+ min_dimension=min_dim,
+ max_dimension=max_dim,
+ pad_to_max_dimension=True,
+ per_channel_pad_value=(123.68, 116.779, 103.939))
+
+ with self.test_session() as sess:
+ out_image_np = sess.run(out_image, feed_dict={in_image: in_image_np})
+ self.assertAllClose(ex_image_np, out_image_np)
+
+ def testResizeToRangeWithMasksPreservesStaticSpatialShape(self):
+ """Tests image resizing, checking output sizes."""
+ in_image_shape_list = [[60, 40, 3], [15, 30, 3]]
+ in_masks_shape_list = [[15, 60, 40], [10, 15, 30]]
+ min_dim = 50
+ max_dim = 100
+ expected_image_shape_list = [[75, 50, 3], [50, 100, 3]]
+ expected_masks_shape_list = [[15, 75, 50], [10, 50, 100]]
+
+ for (in_image_shape, expected_image_shape, in_masks_shape,
+ expected_mask_shape) in zip(in_image_shape_list,
+ expected_image_shape_list,
+ in_masks_shape_list,
+ expected_masks_shape_list):
+ in_image = tf.random_uniform(in_image_shape)
+ in_masks = tf.random_uniform(in_masks_shape)
+ out_image, out_masks, _ = preprocessor.resize_to_range(
+ in_image, in_masks, min_dimension=min_dim, max_dimension=max_dim)
+ self.assertAllEqual(out_masks.get_shape().as_list(), expected_mask_shape)
+ self.assertAllEqual(out_image.get_shape().as_list(), expected_image_shape)
+
+ def testResizeToRangeWithMasksAndPadToMaxDimension(self):
+ """Tests image resizing, checking output sizes."""
+ in_image_shape_list = [[60, 40, 3], [15, 30, 3]]
+ in_masks_shape_list = [[15, 60, 40], [10, 15, 30]]
+ min_dim = 50
+ max_dim = 100
+ expected_image_shape_list = [[100, 100, 3], [100, 100, 3]]
+ expected_masks_shape_list = [[15, 100, 100], [10, 100, 100]]
+
+ for (in_image_shape,
+ expected_image_shape, in_masks_shape, expected_mask_shape) in zip(
+ in_image_shape_list, expected_image_shape_list,
+ in_masks_shape_list, expected_masks_shape_list):
+ in_image = tf.placeholder(tf.float32, shape=(None, None, 3))
+ in_masks = tf.placeholder(tf.float32, shape=(None, None, None))
+ out_image, out_masks, _ = preprocessor.resize_to_range(
+ in_image,
+ in_masks,
+ min_dimension=min_dim,
+ max_dimension=max_dim,
+ pad_to_max_dimension=True)
+ out_image_shape = tf.shape(out_image)
+ out_masks_shape = tf.shape(out_masks)
+
+ with self.test_session() as sess:
+ out_image_shape, out_masks_shape = sess.run(
+ [out_image_shape, out_masks_shape],
+ feed_dict={
+ in_image: np.random.randn(*in_image_shape),
+ in_masks: np.random.randn(*in_masks_shape)
+ })
+ self.assertAllEqual(out_image_shape, expected_image_shape)
+ self.assertAllEqual(out_masks_shape, expected_mask_shape)
+
+ def testResizeToRangeWithMasksAndDynamicSpatialShape(self):
+ """Tests image resizing, checking output sizes."""
+ in_image_shape_list = [[60, 40, 3], [15, 30, 3]]
+ in_masks_shape_list = [[15, 60, 40], [10, 15, 30]]
+ min_dim = 50
+ max_dim = 100
+ expected_image_shape_list = [[75, 50, 3], [50, 100, 3]]
+ expected_masks_shape_list = [[15, 75, 50], [10, 50, 100]]
+
+ for (in_image_shape, expected_image_shape, in_masks_shape,
+ expected_mask_shape) in zip(in_image_shape_list,
+ expected_image_shape_list,
+ in_masks_shape_list,
+ expected_masks_shape_list):
+ in_image = tf.placeholder(tf.float32, shape=(None, None, 3))
+ in_masks = tf.placeholder(tf.float32, shape=(None, None, None))
+ out_image, out_masks, _ = preprocessor.resize_to_range(
+ in_image, in_masks, min_dimension=min_dim, max_dimension=max_dim)
+ out_image_shape = tf.shape(out_image)
+ out_masks_shape = tf.shape(out_masks)
+
+ with self.test_session() as sess:
+ out_image_shape, out_masks_shape = sess.run(
+ [out_image_shape, out_masks_shape],
+ feed_dict={
+ in_image: np.random.randn(*in_image_shape),
+ in_masks: np.random.randn(*in_masks_shape)
+ })
+ self.assertAllEqual(out_image_shape, expected_image_shape)
+ self.assertAllEqual(out_masks_shape, expected_mask_shape)
+
+ def testResizeToRangeWithInstanceMasksTensorOfSizeZero(self):
+ """Tests image resizing, checking output sizes."""
+ in_image_shape_list = [[60, 40, 3], [15, 30, 3]]
+ in_masks_shape_list = [[0, 60, 40], [0, 15, 30]]
+ min_dim = 50
+ max_dim = 100
+ expected_image_shape_list = [[75, 50, 3], [50, 100, 3]]
+ expected_masks_shape_list = [[0, 75, 50], [0, 50, 100]]
+
+ for (in_image_shape, expected_image_shape, in_masks_shape,
+ expected_mask_shape) in zip(in_image_shape_list,
+ expected_image_shape_list,
+ in_masks_shape_list,
+ expected_masks_shape_list):
+ in_image = tf.random_uniform(in_image_shape)
+ in_masks = tf.random_uniform(in_masks_shape)
+ out_image, out_masks, _ = preprocessor.resize_to_range(
+ in_image, in_masks, min_dimension=min_dim, max_dimension=max_dim)
+ out_image_shape = tf.shape(out_image)
+ out_masks_shape = tf.shape(out_masks)
+
+ with self.test_session() as sess:
+ out_image_shape, out_masks_shape = sess.run(
+ [out_image_shape, out_masks_shape])
+ self.assertAllEqual(out_image_shape, expected_image_shape)
+ self.assertAllEqual(out_masks_shape, expected_mask_shape)
+
+ def testResizeToRange4DImageTensor(self):
+ image = tf.random_uniform([1, 200, 300, 3])
+ with self.assertRaises(ValueError):
+ preprocessor.resize_to_range(image, 500, 600)
+
+ def testResizeToRangeSameMinMax(self):
+ """Tests image resizing, checking output sizes."""
+ in_shape_list = [[312, 312, 3], [299, 299, 3]]
+ min_dim = 320
+ max_dim = 320
+ expected_shape_list = [[320, 320, 3], [320, 320, 3]]
+
+ for in_shape, expected_shape in zip(in_shape_list, expected_shape_list):
+ in_image = tf.random_uniform(in_shape)
+ out_image, _ = preprocessor.resize_to_range(
+ in_image, min_dimension=min_dim, max_dimension=max_dim)
+ out_image_shape = tf.shape(out_image)
+
+ with self.test_session() as sess:
+ out_image_shape = sess.run(out_image_shape)
+ self.assertAllEqual(out_image_shape, expected_shape)
+
+ def testResizeToMinDimensionTensorShapes(self):
+ in_image_shape_list = [[60, 55, 3], [15, 30, 3]]
+ in_masks_shape_list = [[15, 60, 55], [10, 15, 30]]
+ min_dim = 50
+ expected_image_shape_list = [[60, 55, 3], [50, 100, 3]]
+ expected_masks_shape_list = [[15, 60, 55], [10, 50, 100]]
+
+ for (in_image_shape, expected_image_shape, in_masks_shape,
+ expected_mask_shape) in zip(in_image_shape_list,
+ expected_image_shape_list,
+ in_masks_shape_list,
+ expected_masks_shape_list):
+ in_image = tf.placeholder(tf.float32, shape=(None, None, 3))
+ in_masks = tf.placeholder(tf.float32, shape=(None, None, None))
+ out_image, out_masks, _ = preprocessor.resize_to_min_dimension(
+ in_image, in_masks, min_dimension=min_dim)
+ out_image_shape = tf.shape(out_image)
+ out_masks_shape = tf.shape(out_masks)
+
+ with self.test_session() as sess:
+ out_image_shape, out_masks_shape = sess.run(
+ [out_image_shape, out_masks_shape],
+ feed_dict={
+ in_image: np.random.randn(*in_image_shape),
+ in_masks: np.random.randn(*in_masks_shape)
+ })
+ self.assertAllEqual(out_image_shape, expected_image_shape)
+ self.assertAllEqual(out_masks_shape, expected_mask_shape)
+
+ def testResizeToMinDimensionWithInstanceMasksTensorOfSizeZero(self):
+ """Tests image resizing, checking output sizes."""
+ in_image_shape_list = [[60, 40, 3], [15, 30, 3]]
+ in_masks_shape_list = [[0, 60, 40], [0, 15, 30]]
+ min_dim = 50
+ expected_image_shape_list = [[75, 50, 3], [50, 100, 3]]
+ expected_masks_shape_list = [[0, 75, 50], [0, 50, 100]]
+
+ for (in_image_shape, expected_image_shape, in_masks_shape,
+ expected_mask_shape) in zip(in_image_shape_list,
+ expected_image_shape_list,
+ in_masks_shape_list,
+ expected_masks_shape_list):
+ in_image = tf.random_uniform(in_image_shape)
+ in_masks = tf.random_uniform(in_masks_shape)
+ out_image, out_masks, _ = preprocessor.resize_to_min_dimension(
+ in_image, in_masks, min_dimension=min_dim)
+ out_image_shape = tf.shape(out_image)
+ out_masks_shape = tf.shape(out_masks)
+
+ with self.test_session() as sess:
+ out_image_shape, out_masks_shape = sess.run(
+ [out_image_shape, out_masks_shape])
+ self.assertAllEqual(out_image_shape, expected_image_shape)
+ self.assertAllEqual(out_masks_shape, expected_mask_shape)
+
+ def testResizeToMinDimensionRaisesErrorOn4DImage(self):
+ image = tf.random_uniform([1, 200, 300, 3])
+ with self.assertRaises(ValueError):
+ preprocessor.resize_to_min_dimension(image, 500)
+
+ def testScaleBoxesToPixelCoordinates(self):
+ """Tests box scaling, checking scaled values."""
+ in_shape = [60, 40, 3]
+ in_boxes = [[0.1, 0.2, 0.4, 0.6],
+ [0.5, 0.3, 0.9, 0.7]]
+
+ expected_boxes = [[6., 8., 24., 24.],
+ [30., 12., 54., 28.]]
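+ # The expected values follow from scaling the normalized [ymin, xmin, ymax, xmax]
+ # boxes by the image size: height 60 for the y coordinates, width 40 for the x.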
+
+ in_image = tf.random_uniform(in_shape)
+ in_boxes = tf.constant(in_boxes)
+ _, out_boxes = preprocessor.scale_boxes_to_pixel_coordinates(
+ in_image, boxes=in_boxes)
+ with self.test_session() as sess:
+ out_boxes = sess.run(out_boxes)
+ self.assertAllClose(out_boxes, expected_boxes)
+
+ def testScaleBoxesToPixelCoordinatesWithKeypoints(self):
+ """Tests box and keypoint scaling, checking scaled values."""
+ in_shape = [60, 40, 3]
+ in_boxes = self.createTestBoxes()
+ in_keypoints = self.createTestKeypoints()
+
+ expected_boxes = [[0., 10., 45., 40.],
+ [15., 20., 45., 40.]]
+ expected_keypoints = [
+ [[6., 4.], [12., 8.], [18., 12.]],
+ [[24., 16.], [30., 20.], [36., 24.]],
+ ]
+
+ in_image = tf.random_uniform(in_shape)
+ _, out_boxes, out_keypoints = preprocessor.scale_boxes_to_pixel_coordinates(
+ in_image, boxes=in_boxes, keypoints=in_keypoints)
+ with self.test_session() as sess:
+ out_boxes_, out_keypoints_ = sess.run([out_boxes, out_keypoints])
+ self.assertAllClose(out_boxes_, expected_boxes)
+ self.assertAllClose(out_keypoints_, expected_keypoints)
+
+ def testSubtractChannelMean(self):
+ """Tests whether channel means have been subtracted."""
+ with self.test_session():
+ image = tf.zeros((240, 320, 3))
+ means = [1, 2, 3]
+ actual = preprocessor.subtract_channel_mean(image, means=means)
+ actual = actual.eval()
+
+ self.assertTrue((actual[:, :, 0] == -1).all())
+ self.assertTrue((actual[:, :, 1] == -2).all())
+ self.assertTrue((actual[:, :, 2] == -3).all())
+
+ def testOneHotEncoding(self):
+ """Tests one hot encoding of multiclass labels."""
+ with self.test_session():
+ labels = tf.constant([1, 4, 2], dtype=tf.int32)
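+ # With num_classes=5, the labels 1, 4 and 2 are merged into a single
+ # multi-hot vector, so indices 1, 2 and 4 are set in the expected output.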
+ one_hot = preprocessor.one_hot_encoding(labels, num_classes=5)
+ one_hot = one_hot.eval()
+
+ self.assertAllEqual([0, 1, 1, 0, 1], one_hot)
+
+ def testSSDRandomCropWithCache(self):
+ preprocess_options = [
+ (preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ }),
+ (preprocessor.ssd_random_crop, {})]
+ self._testPreprocessorCache(preprocess_options,
+ test_boxes=True,
+ test_masks=False,
+ test_keypoints=False)
+
+ def testSSDRandomCrop(self):
+ preprocessing_options = [
+ (preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ }),
+ (preprocessor.ssd_random_crop, {})]
+ images = self.createTestImages()
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ tensor_dict = {
+ fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels,
+ fields.InputDataFields.groundtruth_weights: weights,
+ }
+ distorted_tensor_dict = preprocessor.preprocess(tensor_dict,
+ preprocessing_options)
+ distorted_images = distorted_tensor_dict[fields.InputDataFields.image]
+ distorted_boxes = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_boxes]
+
+ images_rank = tf.rank(images)
+ distorted_images_rank = tf.rank(distorted_images)
+ boxes_rank = tf.rank(boxes)
+ distorted_boxes_rank = tf.rank(distorted_boxes)
+
+ with self.test_session() as sess:
+ (boxes_rank_, distorted_boxes_rank_, images_rank_,
+ distorted_images_rank_) = sess.run(
+ [boxes_rank, distorted_boxes_rank, images_rank,
+ distorted_images_rank])
+ self.assertAllEqual(boxes_rank_, distorted_boxes_rank_)
+ self.assertAllEqual(images_rank_, distorted_images_rank_)
+
+ def testSSDRandomCropWithMultiClassScores(self):
+ preprocessing_options = [(preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ }), (preprocessor.ssd_random_crop, {})]
+ images = self.createTestImages()
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ multiclass_scores = self.createTestMultiClassScores()
+
+ tensor_dict = {
+ fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels,
+ fields.InputDataFields.multiclass_scores: multiclass_scores,
+ fields.InputDataFields.groundtruth_weights: weights,
+ }
+ preprocessor_arg_map = preprocessor.get_default_func_arg_map(
+ include_multiclass_scores=True)
+ distorted_tensor_dict = preprocessor.preprocess(
+ tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
+ distorted_images = distorted_tensor_dict[fields.InputDataFields.image]
+ distorted_boxes = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_boxes]
+ distorted_multiclass_scores = distorted_tensor_dict[
+ fields.InputDataFields.multiclass_scores]
+
+ images_rank = tf.rank(images)
+ distorted_images_rank = tf.rank(distorted_images)
+ boxes_rank = tf.rank(boxes)
+ distorted_boxes_rank = tf.rank(distorted_boxes)
+ multiclass_scores_rank = tf.rank(multiclass_scores)
+ distorted_multiclass_scores_rank = tf.rank(distorted_multiclass_scores)
+
+ with self.test_session() as sess:
+ (boxes_rank_, distorted_boxes_, distorted_boxes_rank_, images_rank_,
+ distorted_images_rank_, multiclass_scores_rank_,
+ distorted_multiclass_scores_,
+ distorted_multiclass_scores_rank_) = sess.run([
+ boxes_rank, distorted_boxes, distorted_boxes_rank, images_rank,
+ distorted_images_rank, multiclass_scores_rank,
+ distorted_multiclass_scores, distorted_multiclass_scores_rank
+ ])
+ self.assertAllEqual(boxes_rank_, distorted_boxes_rank_)
+ self.assertAllEqual(images_rank_, distorted_images_rank_)
+ self.assertAllEqual(multiclass_scores_rank_,
+ distorted_multiclass_scores_rank_)
+ self.assertAllEqual(distorted_boxes_.shape[0],
+ distorted_multiclass_scores_.shape[0])
+
+ def testSSDRandomCropPad(self):
+ images = self.createTestImages()
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ preprocessing_options = [
+ (preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ }),
+ (preprocessor.ssd_random_crop_pad, {})]
+ tensor_dict = {
+ fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels,
+ fields.InputDataFields.groundtruth_weights: weights,
+ }
+ distorted_tensor_dict = preprocessor.preprocess(tensor_dict,
+ preprocessing_options)
+ distorted_images = distorted_tensor_dict[fields.InputDataFields.image]
+ distorted_boxes = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_boxes]
+
+ images_rank = tf.rank(images)
+ distorted_images_rank = tf.rank(distorted_images)
+ boxes_rank = tf.rank(boxes)
+ distorted_boxes_rank = tf.rank(distorted_boxes)
+
+ with self.test_session() as sess:
+ (boxes_rank_, distorted_boxes_rank_, images_rank_,
+ distorted_images_rank_) = sess.run([
+ boxes_rank, distorted_boxes_rank, images_rank, distorted_images_rank
+ ])
+ self.assertAllEqual(boxes_rank_, distorted_boxes_rank_)
+ self.assertAllEqual(images_rank_, distorted_images_rank_)
+
+ def testSSDRandomCropFixedAspectRatioWithCache(self):
+ preprocess_options = [
+ (preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ }),
+ (preprocessor.ssd_random_crop_fixed_aspect_ratio, {})]
+ self._testPreprocessorCache(preprocess_options,
+ test_boxes=True,
+ test_masks=False,
+ test_keypoints=False)
+
+ def _testSSDRandomCropFixedAspectRatio(self,
+ include_multiclass_scores,
+ include_instance_masks,
+ include_keypoints):
+ images = self.createTestImages()
+ boxes = self.createTestBoxes()
+ labels = self.createTestLabels()
+ weights = self.createTestGroundtruthWeights()
+ preprocessing_options = [(preprocessor.normalize_image, {
+ 'original_minval': 0,
+ 'original_maxval': 255,
+ 'target_minval': 0,
+ 'target_maxval': 1
+ }), (preprocessor.ssd_random_crop_fixed_aspect_ratio, {})]
+ tensor_dict = {
+ fields.InputDataFields.image: images,
+ fields.InputDataFields.groundtruth_boxes: boxes,
+ fields.InputDataFields.groundtruth_classes: labels,
+ fields.InputDataFields.groundtruth_weights: weights
+ }
+ if include_multiclass_scores:
+ multiclass_scores = self.createTestMultiClassScores()
+ tensor_dict[fields.InputDataFields.multiclass_scores] = (
+ multiclass_scores)
+ if include_instance_masks:
+ masks = self.createTestMasks()
+ tensor_dict[fields.InputDataFields.groundtruth_instance_masks] = masks
+ if include_keypoints:
+ keypoints = self.createTestKeypoints()
+ tensor_dict[fields.InputDataFields.groundtruth_keypoints] = keypoints
+
+ preprocessor_arg_map = preprocessor.get_default_func_arg_map(
+ include_multiclass_scores=include_multiclass_scores,
+ include_instance_masks=include_instance_masks,
+ include_keypoints=include_keypoints)
+ distorted_tensor_dict = preprocessor.preprocess(
+ tensor_dict, preprocessing_options, func_arg_map=preprocessor_arg_map)
+ distorted_images = distorted_tensor_dict[fields.InputDataFields.image]
+ distorted_boxes = distorted_tensor_dict[
+ fields.InputDataFields.groundtruth_boxes]
+ images_rank = tf.rank(images)
+ distorted_images_rank = tf.rank(distorted_images)
+ boxes_rank = tf.rank(boxes)
+ distorted_boxes_rank = tf.rank(distorted_boxes)
+
+ with self.test_session() as sess:
+ (boxes_rank_, distorted_boxes_rank_, images_rank_,
+ distorted_images_rank_) = sess.run(
+ [boxes_rank, distorted_boxes_rank, images_rank,
+ distorted_images_rank])
+ self.assertAllEqual(boxes_rank_, distorted_boxes_rank_)
+ self.assertAllEqual(images_rank_, distorted_images_rank_)
+
+ def testSSDRandomCropFixedAspectRatio(self):
+ self._testSSDRandomCropFixedAspectRatio(include_multiclass_scores=False,
+ include_instance_masks=False,
+ include_keypoints=False)
+
+ def testSSDRandomCropFixedAspectRatioWithMultiClassScores(self):
+ self._testSSDRandomCropFixedAspectRatio(include_multiclass_scores=True,
+ include_instance_masks=False,
+ include_keypoints=False)
+
+ def testSSDRandomCropFixedAspectRatioWithMasksAndKeypoints(self):
+ self._testSSDRandomCropFixedAspectRatio(include_multiclass_scores=False,
+ include_instance_masks=True,
+ include_keypoints=True)
+
+ def testSSDRandomCropFixedAspectRatioWithLabelScoresMasksAndKeypoints(self):
+ self._testSSDRandomCropFixedAspectRatio(include_multiclass_scores=False,
+ include_instance_masks=True,
+ include_keypoints=True)
+
+ def testConvertClassLogitsToSoftmax(self):
+ multiclass_scores = tf.constant(
+ [[1.0, 0.0], [0.5, 0.5], [1000, 1]], dtype=tf.float32)
+ temperature = 2.0
+
+ converted_multiclass_scores = (
+ preprocessor.convert_class_logits_to_softmax(
+ multiclass_scores=multiclass_scores, temperature=temperature))
+
+ expected_converted_multiclass_scores = [[[0.62245935, 0.37754068],
+ [0.5, 0.5], [1, 0]]]
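+ # These are softmax(logits / temperature): e.g. softmax([1.0, 0.0] / 2.0) is
+ # approximately [0.622, 0.378], while the large-margin row saturates to [1, 0].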
+
+ with self.test_session() as sess:
+ (converted_multiclass_scores_) = sess.run([converted_multiclass_scores])
+
+ self.assertAllClose(converted_multiclass_scores_,
+ expected_converted_multiclass_scores)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/region_similarity_calculator.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/region_similarity_calculator.py"
new file mode 100644
index 00000000..793c7d38
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/region_similarity_calculator.py"
@@ -0,0 +1,154 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Region Similarity Calculators for BoxLists.
+
+Region Similarity Calculators compare a pairwise measure of similarity
+between the boxes in two BoxLists.
+"""
+from abc import ABCMeta
+from abc import abstractmethod
+
+import tensorflow as tf
+
+from object_detection.core import box_list_ops
+from object_detection.core import standard_fields as fields
+
+
+class RegionSimilarityCalculator(object):
+ """Abstract base class for region similarity calculator."""
+ __metaclass__ = ABCMeta
+
+ def compare(self, boxlist1, boxlist2, scope=None):
+ """Computes matrix of pairwise similarity between BoxLists.
+
+ This op (to be overridden) computes a measure of pairwise similarity between
+ the boxes in the given BoxLists. Higher values indicate more similarity.
+
+ Note that this method simply measures similarity and does not explicitly
+ perform a matching.
+
+ Args:
+ boxlist1: BoxList holding N boxes.
+ boxlist2: BoxList holding M boxes.
+ scope: Op scope name. Defaults to 'Compare' if None.
+
+ Returns:
+ a (float32) tensor of shape [N, M] with pairwise similarity score.
+ """
+ with tf.name_scope(scope, 'Compare', [boxlist1, boxlist2]) as scope:
+ return self._compare(boxlist1, boxlist2)
+
+ @abstractmethod
+ def _compare(self, boxlist1, boxlist2):
+ pass
+
+
+class IouSimilarity(RegionSimilarityCalculator):
+ """Class to compute similarity based on Intersection over Union (IOU) metric.
+
+ This class computes pairwise similarity between two BoxLists based on IOU.
+ """
+
+ def _compare(self, boxlist1, boxlist2):
+ """Compute pairwise IOU similarity between the two BoxLists.
+
+ Args:
+ boxlist1: BoxList holding N boxes.
+ boxlist2: BoxList holding M boxes.
+
+ Returns:
+ A tensor with shape [N, M] representing pairwise iou scores.
+ """
+ return box_list_ops.iou(boxlist1, boxlist2)
+
+
+class NegSqDistSimilarity(RegionSimilarityCalculator):
+ """Class to compute similarity based on the squared distance metric.
+
+ This class computes pairwise similarity between two BoxLists based on the
+ negative squared distance metric.
+ """
+
+ def _compare(self, boxlist1, boxlist2):
+ """Compute matrix of (negated) sq distances.
+
+ Args:
+ boxlist1: BoxList holding N boxes.
+ boxlist2: BoxList holding M boxes.
+
+ Returns:
+ A tensor with shape [N, M] representing negated pairwise squared distance.
+ """
+ return -1 * box_list_ops.sq_dist(boxlist1, boxlist2)
+
+
+class IoaSimilarity(RegionSimilarityCalculator):
+ """Class to compute similarity based on Intersection over Area (IOA) metric.
+
+ This class computes pairwise similarity between two BoxLists based on their
+ pairwise intersections divided by the areas of the second BoxList.
+ """
+
+ def _compare(self, boxlist1, boxlist2):
+ """Compute pairwise IOA similarity between the two BoxLists.
+
+ Args:
+ boxlist1: BoxList holding N boxes.
+ boxlist2: BoxList holding M boxes.
+
+ Returns:
+ A tensor with shape [N, M] representing pairwise IOA scores.
+ """
+ return box_list_ops.ioa(boxlist1, boxlist2)
+
+
+class ThresholdedIouSimilarity(RegionSimilarityCalculator):
+ """Class to compute similarity based on thresholded IOU and score.
+
+ This class computes pairwise similarity between two BoxLists based on IOU and
+ a 'score' present in boxlist1. If IOU > threshold, then the entry in the
+ output pairwise tensor will contain `score`, otherwise 0.
+ """
+
+ def __init__(self, iou_threshold=0):
+ """Initialize the ThresholdedIouSimilarity.
+
+ Args:
+ iou_threshold: For a given pair of boxes, if the IOU is > iou_threshold,
+ then the comparison result will be the foreground probability of
+ the first box, otherwise it will be zero.
+ """
+ self._iou_threshold = iou_threshold
+
+ def _compare(self, boxlist1, boxlist2):
+ """Compute pairwise IOU similarity between the two BoxLists and score.
+
+ Args:
+ boxlist1: BoxList holding N boxes. Must have a score field.
+ boxlist2: BoxList holding M boxes.
+
+ Returns:
+ A tensor with shape [N, M] representing scores thresholded by pairwise
+ iou scores.
+ """
+ ious = box_list_ops.iou(boxlist1, boxlist2)
+ scores = boxlist1.get_field(fields.BoxListFields.scores)
+ scores = tf.expand_dims(scores, axis=1)
+ row_replicated_scores = tf.tile(scores, [1, tf.shape(ious)[-1]])
+ thresholded_ious = tf.where(ious > self._iou_threshold,
+ row_replicated_scores, tf.zeros_like(ious))
+
+ return thresholded_ious
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/region_similarity_calculator_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/region_similarity_calculator_test.py"
new file mode 100644
index 00000000..1d0c26bb
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/region_similarity_calculator_test.py"
@@ -0,0 +1,95 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for region_similarity_calculator."""
+import tensorflow as tf
+
+from object_detection.core import box_list
+from object_detection.core import region_similarity_calculator
+from object_detection.core import standard_fields as fields
+
+
+class RegionSimilarityCalculatorTest(tf.test.TestCase):
+
+ def test_get_correct_pairwise_similarity_based_on_iou(self):
+ corners1 = tf.constant([[4.0, 3.0, 7.0, 5.0], [5.0, 6.0, 10.0, 7.0]])
+ corners2 = tf.constant([[3.0, 4.0, 6.0, 8.0], [14.0, 14.0, 15.0, 15.0],
+ [0.0, 0.0, 20.0, 20.0]])
+ exp_output = [[2.0 / 16.0, 0, 6.0 / 400.0], [1.0 / 16.0, 0.0, 5.0 / 400.0]]
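+ # For example, corners1[0] = [4, 3, 7, 5] (area 6) and corners2[0] = [3, 4, 6, 8]
+ # (area 12) share a 2x1 intersection, giving IOU = 2 / (6 + 12 - 2) = 2/16.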
+ boxes1 = box_list.BoxList(corners1)
+ boxes2 = box_list.BoxList(corners2)
+ iou_similarity_calculator = region_similarity_calculator.IouSimilarity()
+ iou_similarity = iou_similarity_calculator.compare(boxes1, boxes2)
+ with self.test_session() as sess:
+ iou_output = sess.run(iou_similarity)
+ self.assertAllClose(iou_output, exp_output)
+
+ def test_get_correct_pairwise_similarity_based_on_squared_distances(self):
+ corners1 = tf.constant([[0.0, 0.0, 0.0, 0.0],
+ [1.0, 1.0, 0.0, 2.0]])
+ corners2 = tf.constant([[3.0, 4.0, 1.0, 0.0],
+ [-4.0, 0.0, 0.0, 3.0],
+ [0.0, 0.0, 0.0, 0.0]])
+ exp_output = [[-26, -25, 0], [-18, -27, -6]]
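+ # For example, corners1[0] = [0, 0, 0, 0] vs corners2[0] = [3, 4, 1, 0]:
+ # squared distance 9 + 16 + 1 + 0 = 26, negated to -26.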
+ boxes1 = box_list.BoxList(corners1)
+ boxes2 = box_list.BoxList(corners2)
+ dist_similarity_calc = region_similarity_calculator.NegSqDistSimilarity()
+ dist_similarity = dist_similarity_calc.compare(boxes1, boxes2)
+ with self.test_session() as sess:
+ dist_output = sess.run(dist_similarity)
+ self.assertAllClose(dist_output, exp_output)
+
+ def test_get_correct_pairwise_similarity_based_on_ioa(self):
+ corners1 = tf.constant([[4.0, 3.0, 7.0, 5.0], [5.0, 6.0, 10.0, 7.0]])
+ corners2 = tf.constant([[3.0, 4.0, 6.0, 8.0], [14.0, 14.0, 15.0, 15.0],
+ [0.0, 0.0, 20.0, 20.0]])
+ exp_output_1 = [[2.0 / 12.0, 0, 6.0 / 400.0],
+ [1.0 / 12.0, 0.0, 5.0 / 400.0]]
+ exp_output_2 = [[2.0 / 6.0, 1.0 / 5.0],
+ [0, 0],
+ [6.0 / 6.0, 5.0 / 5.0]]
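+ # IOA divides each pairwise intersection by the area of the second argument's
+ # boxes, so it is not symmetric: the same 2-unit intersection gives 2/12 in
+ # exp_output_1 (area of corners2[0] is 12) but 2/6 in exp_output_2 (area of
+ # corners1[0] is 6).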
+ boxes1 = box_list.BoxList(corners1)
+ boxes2 = box_list.BoxList(corners2)
+ ioa_similarity_calculator = region_similarity_calculator.IoaSimilarity()
+ ioa_similarity_1 = ioa_similarity_calculator.compare(boxes1, boxes2)
+ ioa_similarity_2 = ioa_similarity_calculator.compare(boxes2, boxes1)
+ with self.test_session() as sess:
+ iou_output_1, iou_output_2 = sess.run(
+ [ioa_similarity_1, ioa_similarity_2])
+ self.assertAllClose(iou_output_1, exp_output_1)
+ self.assertAllClose(iou_output_2, exp_output_2)
+
+ def test_get_correct_pairwise_similarity_based_on_thresholded_iou(self):
+ corners1 = tf.constant([[4.0, 3.0, 7.0, 5.0], [5.0, 6.0, 10.0, 7.0]])
+ corners2 = tf.constant([[3.0, 4.0, 6.0, 8.0], [14.0, 14.0, 15.0, 15.0],
+ [0.0, 0.0, 20.0, 20.0]])
+ scores = tf.constant([.3, .6])
+ iou_threshold = .013
+
+ exp_output = tf.constant([[0.3, 0., 0.3], [0.6, 0., 0.]])
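+ # Pairwise IOUs are [[2/16, 0, 6/400], [1/16, 0, 5/400]]; entries whose IOU
+ # exceeds the 0.013 threshold keep the corresponding box score (0.3 or 0.6),
+ # and the rest are zero.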
+ boxes1 = box_list.BoxList(corners1)
+ boxes1.add_field(fields.BoxListFields.scores, scores)
+ boxes2 = box_list.BoxList(corners2)
+ iou_similarity_calculator = (
+ region_similarity_calculator.ThresholdedIouSimilarity(
+ iou_threshold=iou_threshold))
+ iou_similarity = iou_similarity_calculator.compare(boxes1, boxes2)
+ with self.test_session() as sess:
+ iou_output = sess.run(iou_similarity)
+ self.assertAllClose(iou_output, exp_output)
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/standard_fields.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/standard_fields.py"
new file mode 100644
index 00000000..cc0da39a
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/standard_fields.py"
@@ -0,0 +1,234 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Contains classes specifying naming conventions used for object detection.
+
+
+Specifies:
+ InputDataFields: standard fields used by reader/preprocessor/batcher.
+ DetectionResultFields: standard fields returned by object detector.
+ BoxListFields: standard fields used by BoxLists.
+ TfExampleFields: standard fields for tf-example data format (go/tf-example).
+"""
+
+
+class InputDataFields(object):
+ """Names for the input tensors.
+
+ Holds the standard data field names to use for identifying input tensors. This
+ should be used by the decoder to identify keys for the returned tensor_dict
+ containing input tensors. And it should be used by the model to identify the
+ tensors it needs.
+
+ Attributes:
+ image: image.
+ image_additional_channels: additional channels.
+ original_image: image in the original input size.
+ original_image_spatial_shape: spatial shape of the original image.
+ key: unique key corresponding to image.
+ source_id: source of the original image.
+ filename: original filename of the dataset (without common path).
+ groundtruth_image_classes: image-level class labels.
+ groundtruth_image_confidences: image-level class confidences.
+ groundtruth_boxes: coordinates of the ground truth boxes in the image.
+ groundtruth_classes: box-level class labels.
+ groundtruth_confidences: box-level class confidences. The shape should be
+ the same as the shape of groundtruth_classes.
+ groundtruth_label_types: box-level label types (e.g. explicit negative).
+ groundtruth_is_crowd: [DEPRECATED, use groundtruth_group_of instead]
+ is the groundtruth a single object or a crowd.
+ groundtruth_area: area of a groundtruth segment.
+ groundtruth_difficult: is a `difficult` object
+ groundtruth_group_of: is a `group_of` objects, e.g. multiple objects of the
+ same class, forming a connected group, where instances are heavily
+ occluding each other.
+ proposal_boxes: coordinates of object proposal boxes.
+ proposal_objectness: objectness score of each proposal.
+ groundtruth_instance_masks: ground truth instance masks.
+ groundtruth_instance_boundaries: ground truth instance boundaries.
+ groundtruth_instance_classes: instance mask-level class labels.
+ groundtruth_keypoints: ground truth keypoints.
+ groundtruth_keypoint_visibilities: ground truth keypoint visibilities.
+ groundtruth_label_weights: groundtruth label weights.
+ groundtruth_weights: groundtruth weight factor for bounding boxes.
+ num_groundtruth_boxes: number of groundtruth boxes.
+ is_annotated: whether an image has been labeled or not.
+ true_image_shape: true shape of the image within the resized image, as
+ resized images can be padded with zeros.
+ multiclass_scores: the label score per class for each box.
+ """
+ image = 'image'
+ image_additional_channels = 'image_additional_channels'
+ original_image = 'original_image'
+ original_image_spatial_shape = 'original_image_spatial_shape'
+ key = 'key'
+ source_id = 'source_id'
+ filename = 'filename'
+ groundtruth_image_classes = 'groundtruth_image_classes'
+ groundtruth_image_confidences = 'groundtruth_image_confidences'
+ groundtruth_boxes = 'groundtruth_boxes'
+ groundtruth_classes = 'groundtruth_classes'
+ groundtruth_confidences = 'groundtruth_confidences'
+ groundtruth_label_types = 'groundtruth_label_types'
+ groundtruth_is_crowd = 'groundtruth_is_crowd'
+ groundtruth_area = 'groundtruth_area'
+ groundtruth_difficult = 'groundtruth_difficult'
+ groundtruth_group_of = 'groundtruth_group_of'
+ proposal_boxes = 'proposal_boxes'
+ proposal_objectness = 'proposal_objectness'
+ groundtruth_instance_masks = 'groundtruth_instance_masks'
+ groundtruth_instance_boundaries = 'groundtruth_instance_boundaries'
+ groundtruth_instance_classes = 'groundtruth_instance_classes'
+ groundtruth_keypoints = 'groundtruth_keypoints'
+ groundtruth_keypoint_visibilities = 'groundtruth_keypoint_visibilities'
+ groundtruth_label_weights = 'groundtruth_label_weights'
+ groundtruth_weights = 'groundtruth_weights'
+ num_groundtruth_boxes = 'num_groundtruth_boxes'
+ is_annotated = 'is_annotated'
+ true_image_shape = 'true_image_shape'
+ multiclass_scores = 'multiclass_scores'
+
+
+class DetectionResultFields(object):
+ """Naming conventions for storing the output of the detector.
+
+ Attributes:
+ source_id: source of the original image.
+ key: unique key corresponding to image.
+ detection_boxes: coordinates of the detection boxes in the image.
+ detection_scores: detection scores for the detection boxes in the image.
+ detection_classes: detection-level class labels.
+ detection_masks: contains a segmentation mask for each detection box.
+ detection_boundaries: contains an object boundary for each detection box.
+ detection_keypoints: contains detection keypoints for each detection box.
+ num_detections: number of detections in the batch.
+ """
+
+ source_id = 'source_id'
+ key = 'key'
+ detection_boxes = 'detection_boxes'
+ detection_scores = 'detection_scores'
+ detection_classes = 'detection_classes'
+ detection_masks = 'detection_masks'
+ detection_boundaries = 'detection_boundaries'
+ detection_keypoints = 'detection_keypoints'
+ num_detections = 'num_detections'
+
+
+class BoxListFields(object):
+ """Naming conventions for BoxLists.
+
+ Attributes:
+ boxes: bounding box coordinates.
+ classes: classes per bounding box.
+ scores: scores per bounding box.
+ weights: sample weights per bounding box.
+ confidences: confidences per bounding box.
+ objectness: objectness score per bounding box.
+ masks: masks per bounding box.
+ boundaries: boundaries per bounding box.
+ keypoints: keypoints per bounding box.
+ keypoint_heatmaps: keypoint heatmaps per bounding box.
+ is_crowd: is_crowd annotation per bounding box.
+ """
+ boxes = 'boxes'
+ classes = 'classes'
+ scores = 'scores'
+ weights = 'weights'
+ confidences = 'confidences'
+ objectness = 'objectness'
+ masks = 'masks'
+ boundaries = 'boundaries'
+ keypoints = 'keypoints'
+ keypoint_heatmaps = 'keypoint_heatmaps'
+ is_crowd = 'is_crowd'
+
+
+class TfExampleFields(object):
+ """TF-example proto feature names for object detection.
+
+ Holds the standard feature names to load from an Example proto for object
+ detection.
+
+ Attributes:
+ image_encoded: JPEG encoded string
+ image_format: image format, e.g. "JPEG"
+ filename: filename
+ channels: number of channels of image
+ colorspace: colorspace, e.g. "RGB"
+ height: height of image in pixels, e.g. 462
+ width: width of image in pixels, e.g. 581
+ source_id: original source of the image
+ image_class_text: image-level label in text format
+ image_class_label: image-level label in numerical format
+ object_class_text: labels in text format, e.g. ["person", "cat"]
+ object_class_label: labels in numbers, e.g. [16, 8]
+ object_bbox_xmin: xmin coordinates of groundtruth box, e.g. 10, 30
+ object_bbox_xmax: xmax coordinates of groundtruth box, e.g. 50, 40
+ object_bbox_ymin: ymin coordinates of groundtruth box, e.g. 40, 50
+ object_bbox_ymax: ymax coordinates of groundtruth box, e.g. 80, 70
+ object_view: viewpoint of object, e.g. ["frontal", "left"]
+ object_truncated: is object truncated, e.g. [true, false]
+ object_occluded: is object occluded, e.g. [true, false]
+ object_difficult: is object difficult, e.g. [true, false]
+ object_group_of: is object a single object or a group of objects
+ object_depiction: is object a depiction
+ object_is_crowd: [DEPRECATED, use object_group_of instead]
+ is the object a single object or a crowd
+ object_segment_area: the area of the segment.
+ object_weight: a weight factor for the object's bounding box.
+ instance_masks: instance segmentation masks.
+ instance_boundaries: instance boundaries.
+ instance_classes: Classes for each instance segmentation mask.
+ detection_class_label: class label in numbers.
+ detection_bbox_ymin: ymin coordinates of a detection box.
+ detection_bbox_xmin: xmin coordinates of a detection box.
+ detection_bbox_ymax: ymax coordinates of a detection box.
+ detection_bbox_xmax: xmax coordinates of a detection box.
+ detection_score: detection score for the class label and box.
+ """
+ image_encoded = 'image/encoded'
+ image_format = 'image/format' # format is reserved keyword
+ filename = 'image/filename'
+ channels = 'image/channels'
+ colorspace = 'image/colorspace'
+ height = 'image/height'
+ width = 'image/width'
+ source_id = 'image/source_id'
+ image_class_text = 'image/class/text'
+ image_class_label = 'image/class/label'
+ object_class_text = 'image/object/class/text'
+ object_class_label = 'image/object/class/label'
+ object_bbox_ymin = 'image/object/bbox/ymin'
+ object_bbox_xmin = 'image/object/bbox/xmin'
+ object_bbox_ymax = 'image/object/bbox/ymax'
+ object_bbox_xmax = 'image/object/bbox/xmax'
+ object_view = 'image/object/view'
+ object_truncated = 'image/object/truncated'
+ object_occluded = 'image/object/occluded'
+ object_difficult = 'image/object/difficult'
+ object_group_of = 'image/object/group_of'
+ object_depiction = 'image/object/depiction'
+ object_is_crowd = 'image/object/is_crowd'
+ object_segment_area = 'image/object/segment/area'
+ object_weight = 'image/object/weight'
+ instance_masks = 'image/segmentation/object'
+ instance_boundaries = 'image/boundaries/object'
+ instance_classes = 'image/segmentation/object/class'
+ detection_class_label = 'image/detection/label'
+ detection_bbox_ymin = 'image/detection/bbox/ymin'
+ detection_bbox_xmin = 'image/detection/bbox/xmin'
+ detection_bbox_ymax = 'image/detection/bbox/ymax'
+ detection_bbox_xmax = 'image/detection/bbox/xmax'
+ detection_score = 'image/detection/score'
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/target_assigner.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/target_assigner.py"
new file mode 100644
index 00000000..0bb3b613
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/target_assigner.py"
@@ -0,0 +1,634 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Base target assigner module.
+
+The job of a TargetAssigner is, for a given set of anchors (bounding boxes) and
+groundtruth detections (bounding boxes), to assign classification and regression
+targets to each anchor as well as weights to each anchor (specifying, e.g.,
+which anchors should not contribute to training loss).
+
+It assigns classification/regression targets by performing the following steps:
+1) Computing pairwise similarity between anchors and groundtruth boxes using a
+ provided RegionSimilarity Calculator
+2) Computing a matching based on the similarity matrix using a provided Matcher
+3) Assigning regression targets based on the matching and a provided BoxCoder
+4) Assigning classification targets based on the matching and groundtruth labels
+
+Note that TargetAssigners only operate on detections from a single
+image at a time, so any logic for applying a TargetAssigner to multiple
+images must be handled externally.
+"""
+import tensorflow as tf
+
+from object_detection.box_coders import faster_rcnn_box_coder
+from object_detection.box_coders import mean_stddev_box_coder
+from object_detection.core import box_coder as bcoder
+from object_detection.core import box_list
+from object_detection.core import matcher as mat
+from object_detection.core import region_similarity_calculator as sim_calc
+from object_detection.core import standard_fields as fields
+from object_detection.matchers import argmax_matcher
+from object_detection.matchers import bipartite_matcher
+from object_detection.utils import shape_utils
+
+
+class TargetAssigner(object):
+ """Target assigner to compute classification and regression targets."""
+
+ def __init__(self,
+ similarity_calc,
+ matcher,
+ box_coder,
+ negative_class_weight=1.0):
+ """Construct Object Detection Target Assigner.
+
+ Args:
+ similarity_calc: a RegionSimilarityCalculator
+ matcher: an object_detection.core.Matcher used to match groundtruth to
+ anchors.
+ box_coder: an object_detection.core.BoxCoder used to encode matching
+ groundtruth boxes with respect to anchors.
+ negative_class_weight: classification weight to be associated with negative
+ anchors (default: 1.0). The weight must be in [0., 1.].
+
+ Raises:
+ ValueError: if similarity_calc is not a RegionSimilarityCalculator or
+ if matcher is not a Matcher or if box_coder is not a BoxCoder
+ """
+ if not isinstance(similarity_calc, sim_calc.RegionSimilarityCalculator):
+ raise ValueError('similarity_calc must be a RegionSimilarityCalculator')
+ if not isinstance(matcher, mat.Matcher):
+ raise ValueError('matcher must be a Matcher')
+ if not isinstance(box_coder, bcoder.BoxCoder):
+ raise ValueError('box_coder must be a BoxCoder')
+ self._similarity_calc = similarity_calc
+ self._matcher = matcher
+ self._box_coder = box_coder
+ self._negative_class_weight = negative_class_weight
+
+ @property
+ def box_coder(self):
+ return self._box_coder
+
+ # TODO(rathodv): move labels, scores, and weights to groundtruth_boxes fields.
+ def assign(self,
+ anchors,
+ groundtruth_boxes,
+ groundtruth_labels=None,
+ unmatched_class_label=None,
+ groundtruth_weights=None):
+ """Assign classification and regression targets to each anchor.
+
+ For a given set of anchors and groundtruth detections, match anchors
+ to groundtruth_boxes and assign classification and regression targets to
+ each anchor as well as weights based on the resulting match (specifying,
+ e.g., which anchors should not contribute to training loss).
+
+ Anchors that are not matched to anything are given a classification target
+ of self._unmatched_cls_target which can be specified via the constructor.
+
+ Args:
+ anchors: a BoxList representing N anchors
+ groundtruth_boxes: a BoxList representing M groundtruth boxes
+ groundtruth_labels: a tensor of shape [M, d_1, ... d_k]
+ with labels for each of the ground_truth boxes. The subshape
+ [d_1, ... d_k] can be empty (corresponding to scalar inputs). When set
+ to None, groundtruth_labels assumes a binary problem where all
+ ground_truth boxes get a positive label (of 1).
+ unmatched_class_label: a float32 tensor with shape [d_1, d_2, ..., d_k]
+ which is consistent with the classification target for each
+ anchor (and can be empty for scalar targets). This shape must thus be
+ compatible with the groundtruth labels that are passed to the "assign"
+ function (which have shape [num_gt_boxes, d_1, d_2, ..., d_k]).
+ If set to None, unmatched_cls_target is set to be [0] for each anchor.
+ groundtruth_weights: a float tensor of shape [M] indicating the weight to
+ assign to all anchors matched to a particular groundtruth box. The weights
+ must be in [0., 1.]. If None, all weights are set to 1. Generally no
+ groundtruth boxes with zero weight match to any anchors as matchers are
+ aware of groundtruth weights. Additionally, `cls_weights` and
+ `reg_weights` are calculated using groundtruth weights as an added
+ safety.
+
+ Returns:
+ cls_targets: a float32 tensor with shape [num_anchors, d_1, d_2 ... d_k],
+ where the subshape [d_1, ..., d_k] is compatible with groundtruth_labels
+ which has shape [num_gt_boxes, d_1, d_2, ... d_k].
+ cls_weights: a float32 tensor with shape [num_anchors, d_1, d_2 ... d_k],
+ representing weights for each element in cls_targets.
+ reg_targets: a float32 tensor with shape [num_anchors, box_code_dimension]
+ reg_weights: a float32 tensor with shape [num_anchors]
+ match: a matcher.Match object encoding the match between anchors and
+ groundtruth boxes, with rows corresponding to groundtruth boxes
+ and columns corresponding to anchors.
+
+ Raises:
+ ValueError: if anchors or groundtruth_boxes are not of type
+ box_list.BoxList
+ """
+ if not isinstance(anchors, box_list.BoxList):
+ raise ValueError('anchors must be a BoxList')
+ if not isinstance(groundtruth_boxes, box_list.BoxList):
+ raise ValueError('groundtruth_boxes must be a BoxList')
+
+ if unmatched_class_label is None:
+ unmatched_class_label = tf.constant([0], tf.float32)
+
+ if groundtruth_labels is None:
+ groundtruth_labels = tf.ones(tf.expand_dims(groundtruth_boxes.num_boxes(),
+ 0))
+ groundtruth_labels = tf.expand_dims(groundtruth_labels, -1)
+
+ unmatched_shape_assert = shape_utils.assert_shape_equal(
+ shape_utils.combined_static_and_dynamic_shape(groundtruth_labels)[1:],
+ shape_utils.combined_static_and_dynamic_shape(unmatched_class_label))
+ labels_and_box_shapes_assert = shape_utils.assert_shape_equal(
+ shape_utils.combined_static_and_dynamic_shape(
+ groundtruth_labels)[:1],
+ shape_utils.combined_static_and_dynamic_shape(
+ groundtruth_boxes.get())[:1])
+
+ if groundtruth_weights is None:
+ num_gt_boxes = groundtruth_boxes.num_boxes_static()
+ if not num_gt_boxes:
+ num_gt_boxes = groundtruth_boxes.num_boxes()
+ groundtruth_weights = tf.ones([num_gt_boxes], dtype=tf.float32)
+
+ with tf.control_dependencies(
+ [unmatched_shape_assert, labels_and_box_shapes_assert]):
+ match_quality_matrix = self._similarity_calc.compare(groundtruth_boxes,
+ anchors)
+ match = self._matcher.match(match_quality_matrix,
+ valid_rows=tf.greater(groundtruth_weights, 0))
+ reg_targets = self._create_regression_targets(anchors,
+ groundtruth_boxes,
+ match)
+ cls_targets = self._create_classification_targets(groundtruth_labels,
+ unmatched_class_label,
+ match)
+ reg_weights = self._create_regression_weights(match, groundtruth_weights)
+
+ cls_weights = self._create_classification_weights(match,
+ groundtruth_weights)
+ # convert cls_weights from per-anchor to per-class.
+ class_label_shape = tf.shape(cls_targets)[1:]
+ weights_shape = tf.shape(cls_weights)
+ weights_multiple = tf.concat(
+ [tf.ones_like(weights_shape), class_label_shape],
+ axis=0)
+ for _ in range(len(cls_targets.get_shape()[1:])):
+ cls_weights = tf.expand_dims(cls_weights, -1)
+ cls_weights = tf.tile(cls_weights, weights_multiple)
+
+ num_anchors = anchors.num_boxes_static()
+ if num_anchors is not None:
+ reg_targets = self._reset_target_shape(reg_targets, num_anchors)
+ cls_targets = self._reset_target_shape(cls_targets, num_anchors)
+ reg_weights = self._reset_target_shape(reg_weights, num_anchors)
+ cls_weights = self._reset_target_shape(cls_weights, num_anchors)
+
+ return cls_targets, cls_weights, reg_targets, reg_weights, match
+
+ def _reset_target_shape(self, target, num_anchors):
+ """Sets the static shape of the target.
+
+ Args:
+ target: the target tensor. Its first dimension will be overwritten.
+ num_anchors: the number of anchors, which is used to override the target's
+ first dimension.
+
+ Returns:
+ A tensor with the shape info filled in.
+ """
+ target_shape = target.get_shape().as_list()
+ target_shape[0] = num_anchors
+ target.set_shape(target_shape)
+ return target
+
+ def _create_regression_targets(self, anchors, groundtruth_boxes, match):
+ """Returns a regression target for each anchor.
+
+ Args:
+ anchors: a BoxList representing N anchors
+ groundtruth_boxes: a BoxList representing M groundtruth_boxes
+ match: a matcher.Match object
+
+ Returns:
+ reg_targets: a float32 tensor with shape [N, box_code_dimension]
+ """
+ matched_gt_boxes = match.gather_based_on_match(
+ groundtruth_boxes.get(),
+ unmatched_value=tf.zeros(4),
+ ignored_value=tf.zeros(4))
+ matched_gt_boxlist = box_list.BoxList(matched_gt_boxes)
+ if groundtruth_boxes.has_field(fields.BoxListFields.keypoints):
+ groundtruth_keypoints = groundtruth_boxes.get_field(
+ fields.BoxListFields.keypoints)
+ matched_keypoints = match.gather_based_on_match(
+ groundtruth_keypoints,
+ unmatched_value=tf.zeros(groundtruth_keypoints.get_shape()[1:]),
+ ignored_value=tf.zeros(groundtruth_keypoints.get_shape()[1:]))
+ matched_gt_boxlist.add_field(fields.BoxListFields.keypoints,
+ matched_keypoints)
+ matched_reg_targets = self._box_coder.encode(matched_gt_boxlist, anchors)
+ match_results_shape = shape_utils.combined_static_and_dynamic_shape(
+ match.match_results)
+
+ # Zero out the unmatched and ignored regression targets.
+ unmatched_ignored_reg_targets = tf.tile(
+ self._default_regression_target(), [match_results_shape[0], 1])
+ matched_anchors_mask = match.matched_column_indicator()
+ reg_targets = tf.where(matched_anchors_mask,
+ matched_reg_targets,
+ unmatched_ignored_reg_targets)
+ return reg_targets
+
+ def _default_regression_target(self):
+ """Returns the default target for anchors to regress to.
+
+ Default regression targets are set to zero (though in
+ this implementation what these targets are set to should
+ not matter as the regression weight of any box set to
+ regress to the default target is zero).
+
+ Returns:
+ default_target: a float32 tensor with shape [1, box_code_dimension]
+ """
+ return tf.constant([self._box_coder.code_size*[0]], tf.float32)
+
+ def _create_classification_targets(self, groundtruth_labels,
+ unmatched_class_label, match):
+ """Create classification targets for each anchor.
+
+ Assign a classification target for each anchor to the matching
+ groundtruth label that is provided by match. Anchors that are not matched
+ to anything are given the target self._unmatched_cls_target
+
+ Args:
+ groundtruth_labels: a tensor of shape [num_gt_boxes, d_1, ... d_k]
+ with labels for each of the ground_truth boxes. The subshape
+ [d_1, ... d_k] can be empty (corresponding to scalar labels).
+ unmatched_class_label: a float32 tensor with shape [d_1, d_2, ..., d_k]
+ which is consistent with the classification target for each
+ anchor (and can be empty for scalar targets). This shape must thus be
+ compatible with the groundtruth labels that are passed to the "assign"
+ function (which have shape [num_gt_boxes, d_1, d_2, ..., d_k]).
+ match: a matcher.Match object that provides a matching between anchors
+ and groundtruth boxes.
+
+ Returns:
+ a float32 tensor with shape [num_anchors, d_1, d_2 ... d_k], where the
+ subshape [d_1, ..., d_k] is compatible with groundtruth_labels which has
+ shape [num_gt_boxes, d_1, d_2, ... d_k].
+ """
+ return match.gather_based_on_match(
+ groundtruth_labels,
+ unmatched_value=unmatched_class_label,
+ ignored_value=unmatched_class_label)
+
+ def _create_regression_weights(self, match, groundtruth_weights):
+ """Set regression weight for each anchor.
+
+ Only positive anchors are set to contribute to the regression loss, so this
+ method returns a weight of 1 for every positive anchor and 0 for every
+ negative anchor.
+
+ Args:
+ match: a matcher.Match object that provides a matching between anchors
+ and groundtruth boxes.
+ groundtruth_weights: a float tensor of shape [M] indicating the weight to
+ assign to all anchors matched to a particular groundtruth box.
+
+ Returns:
+ a float32 tensor with shape [num_anchors] representing regression weights.
+ """
+ return match.gather_based_on_match(
+ groundtruth_weights, ignored_value=0., unmatched_value=0.)
+
+ def _create_classification_weights(self,
+ match,
+ groundtruth_weights):
+ """Create classification weights for each anchor.
+
+ Positive (matched) anchors are associated with a weight of
+ positive_class_weight and negative (unmatched) anchors are associated with
+ a weight of negative_class_weight. When anchors are ignored, weights are set
+ to zero. By default, both positive/negative weights are set to 1.0,
+ but they can be adjusted to handle class imbalance (which is almost always
+ the case in object detection).
+
+ Args:
+ match: a matcher.Match object that provides a matching between anchors
+ and groundtruth boxes.
+ groundtruth_weights: a float tensor of shape [M] indicating the weight to
+ assign to all anchors matched to a particular groundtruth box.
+
+ Returns:
+ a float32 tensor with shape [num_anchors] representing classification
+ weights.
+ """
+ return match.gather_based_on_match(
+ groundtruth_weights,
+ ignored_value=0.,
+ unmatched_value=self._negative_class_weight)
+
+ def get_box_coder(self):
+ """Get BoxCoder of this TargetAssigner.
+
+ Returns:
+ BoxCoder object.
+ """
+ return self._box_coder
+
+
+# TODO(rathodv): This method pulls in all the implementation dependencies into
+# core. Therefore its best to have this factory method outside of core.
+def create_target_assigner(reference, stage=None,
+ negative_class_weight=1.0, use_matmul_gather=False):
+ """Factory function for creating standard target assigners.
+
+ Args:
+ reference: string referencing the type of TargetAssigner.
+ stage: string denoting stage: {proposal, detection}.
+ negative_class_weight: classification weight to be associated with negative
+ anchors (default: 1.0)
+ use_matmul_gather: whether to use matrix multiplication based gather which
+ is better suited for TPUs.
+
+ Returns:
+ TargetAssigner: desired target assigner.
+
+ Raises:
+ ValueError: if combination reference+stage is invalid.
+ """
+ if reference == 'Multibox' and stage == 'proposal':
+ similarity_calc = sim_calc.NegSqDistSimilarity()
+ matcher = bipartite_matcher.GreedyBipartiteMatcher()
+ box_coder = mean_stddev_box_coder.MeanStddevBoxCoder()
+
+ elif reference == 'FasterRCNN' and stage == 'proposal':
+ similarity_calc = sim_calc.IouSimilarity()
+ matcher = argmax_matcher.ArgMaxMatcher(matched_threshold=0.7,
+ unmatched_threshold=0.3,
+ force_match_for_each_row=True,
+ use_matmul_gather=use_matmul_gather)
+ box_coder = faster_rcnn_box_coder.FasterRcnnBoxCoder(
+ scale_factors=[10.0, 10.0, 5.0, 5.0])
+
+ elif reference == 'FasterRCNN' and stage == 'detection':
+ similarity_calc = sim_calc.IouSimilarity()
+ # Uses all proposals with IOU < 0.5 as candidate negatives.
+ matcher = argmax_matcher.ArgMaxMatcher(matched_threshold=0.5,
+ negatives_lower_than_unmatched=True,
+ use_matmul_gather=use_matmul_gather)
+ box_coder = faster_rcnn_box_coder.FasterRcnnBoxCoder(
+ scale_factors=[10.0, 10.0, 5.0, 5.0])
+
+ elif reference == 'FastRCNN':
+ similarity_calc = sim_calc.IouSimilarity()
+ matcher = argmax_matcher.ArgMaxMatcher(matched_threshold=0.5,
+ unmatched_threshold=0.1,
+ force_match_for_each_row=False,
+ negatives_lower_than_unmatched=False,
+ use_matmul_gather=use_matmul_gather)
+ box_coder = faster_rcnn_box_coder.FasterRcnnBoxCoder()
+
+ else:
+ raise ValueError('No valid combination of reference and stage.')
+
+ return TargetAssigner(similarity_calc, matcher, box_coder,
+ negative_class_weight=negative_class_weight)
+
+
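A minimal usage sketch for the factory above (assuming the object_detection package is importable; the anchor and groundtruth coordinates are invented, and in this TF1-style codebase the returned values are graph tensors):

import tensorflow as tf

from object_detection.core import box_list
from object_detection.core import target_assigner

anchors = box_list.BoxList(tf.constant([[0.0, 0.0, 0.5, 0.5],
                                        [0.5, 0.5, 1.0, 1.0]]))
groundtruth = box_list.BoxList(tf.constant([[0.1, 0.1, 0.6, 0.6]]))

# 'FasterRCNN' + 'proposal' selects IoU similarity, an argmax matcher with
# 0.7 / 0.3 thresholds, and the Faster R-CNN box coder (see the factory above).
assigner = target_assigner.create_target_assigner('FasterRCNN', stage='proposal')
cls_targets, cls_weights, reg_targets, reg_weights, match = assigner.assign(
    anchors, groundtruth)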
+def batch_assign_targets(target_assigner,
+ anchors_batch,
+ gt_box_batch,
+ gt_class_targets_batch,
+ unmatched_class_label=None,
+ gt_weights_batch=None):
+ """Batched assignment of classification and regression targets.
+
+ Args:
+ target_assigner: a target assigner.
+ anchors_batch: BoxList representing N box anchors or list of BoxList objects
+ with length batch_size representing anchor sets.
+ gt_box_batch: a list of BoxList objects with length batch_size
+ representing groundtruth boxes for each image in the batch
+ gt_class_targets_batch: a list of tensors with length batch_size, where
+ each tensor has shape [num_gt_boxes_i, classification_target_size] and
+ num_gt_boxes_i is the number of boxes in the ith boxlist of
+ gt_box_batch.
+ unmatched_class_label: a float32 tensor with shape [d_1, d_2, ..., d_k]
+ which is consistent with the classification target for each
+ anchor (and can be empty for scalar targets). This shape must thus be
+ compatible with the groundtruth labels that are passed to the "assign"
+ function (which have shape [num_gt_boxes, d_1, d_2, ..., d_k]).
+ gt_weights_batch: A list of 1-D tf.float32 tensors of shape
+ [num_gt_boxes_i] containing weights for groundtruth boxes.
+
+ Returns:
+ batch_cls_targets: a tensor with shape [batch_size, num_anchors,
+ num_classes],
+ batch_cls_weights: a tensor with shape [batch_size, num_anchors,
+ num_classes],
+ batch_reg_targets: a tensor with shape [batch_size, num_anchors,
+ box_code_dimension]
+ batch_reg_weights: a tensor with shape [batch_size, num_anchors],
+ match_list: a list of matcher.Match objects encoding the match between
+ anchors and groundtruth boxes for each image of the batch,
+ with rows of the Match objects corresponding to groundtruth boxes
+ and columns corresponding to anchors.
+ Raises:
+ ValueError: if the input list lengths are inconsistent, i.e., unless
+ batch_size == len(gt_box_batch) == len(gt_class_targets_batch) and,
+ when anchors_batch is a list rather than a single BoxList,
+ batch_size == len(anchors_batch).
+ """
+ if not isinstance(anchors_batch, list):
+ anchors_batch = len(gt_box_batch) * [anchors_batch]
+ if not all(
+ isinstance(anchors, box_list.BoxList) for anchors in anchors_batch):
+ raise ValueError('anchors_batch must be a BoxList or list of BoxLists.')
+ if not (len(anchors_batch)
+ == len(gt_box_batch)
+ == len(gt_class_targets_batch)):
+ raise ValueError('batch size incompatible with lengths of anchors_batch, '
+ 'gt_box_batch and gt_class_targets_batch.')
+ cls_targets_list = []
+ cls_weights_list = []
+ reg_targets_list = []
+ reg_weights_list = []
+ match_list = []
+ if gt_weights_batch is None:
+ gt_weights_batch = [None] * len(gt_class_targets_batch)
+ for anchors, gt_boxes, gt_class_targets, gt_weights in zip(
+ anchors_batch, gt_box_batch, gt_class_targets_batch, gt_weights_batch):
+ (cls_targets, cls_weights,
+ reg_targets, reg_weights, match) = target_assigner.assign(
+ anchors, gt_boxes, gt_class_targets, unmatched_class_label, gt_weights)
+ cls_targets_list.append(cls_targets)
+ cls_weights_list.append(cls_weights)
+ reg_targets_list.append(reg_targets)
+ reg_weights_list.append(reg_weights)
+ match_list.append(match)
+ batch_cls_targets = tf.stack(cls_targets_list)
+ batch_cls_weights = tf.stack(cls_weights_list)
+ batch_reg_targets = tf.stack(reg_targets_list)
+ batch_reg_weights = tf.stack(reg_weights_list)
+ return (batch_cls_targets, batch_cls_weights, batch_reg_targets,
+ batch_reg_weights, match_list)
+
+
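A rough sketch of the batched call defined below (the box and label values are illustrative only); a single anchor BoxList may be shared across the batch while per-image groundtruth is passed as Python lists:

import tensorflow as tf

from object_detection.core import box_list
from object_detection.core import target_assigner

assigner = target_assigner.create_target_assigner('FasterRCNN', stage='detection')
anchors = box_list.BoxList(tf.constant([[0.0, 0.00, 0.25, 0.25],
                                        [0.0, 0.25, 1.00, 1.00]]))
gt_box_batch = [box_list.BoxList(tf.constant([[0.0, 0.00, 0.2, 0.2]])),
                box_list.BoxList(tf.constant([[0.0, 0.25, 1.0, 1.0]]))]
# One-hot class targets per image: a background column followed by 3 classes.
gt_class_batch = [tf.constant([[0., 1., 0., 0.]]),
                  tf.constant([[0., 0., 0., 1.]])]
unmatched_label = tf.constant([1., 0., 0., 0.])

(batch_cls_targets, batch_cls_weights, batch_reg_targets, batch_reg_weights,
 match_list) = target_assigner.batch_assign_targets(
     assigner, anchors, gt_box_batch, gt_class_batch, unmatched_label)
# batch_cls_targets: [2, num_anchors, 4]; match_list holds one Match per image.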
+def batch_assign_confidences(target_assigner,
+ anchors_batch,
+ gt_box_batch,
+ gt_class_confidences_batch,
+ gt_weights_batch=None,
+ unmatched_class_label=None,
+ include_background_class=True,
+ implicit_class_weight=1.0):
+ """Batched assignment of classification and regression targets.
+
+ The differences between batch_assign_confidences and batch_assign_targets:
+ - 'batch_assign_targets' supports scalar (agnostic), vector (multiclass) and
+ tensor (high-dimensional) targets. 'batch_assign_confidences' only supports
+ scalar (agnostic) and vector (multiclass) targets.
+ - 'batch_assign_targets' assumes the input class tensor uses a binary
+ one/K-hot encoding. 'batch_assign_confidences' takes class confidence
+ scores as input, where 1 means an explicit positive class, 0 means an
+ implicit negative class, and -1 means an explicit negative class.
+ - 'batch_assign_confidences' assigns targets in a similar way to
+ 'batch_assign_targets', except that it gives different weights to implicit
+ and explicit classes. This lets the user control how strongly negative
+ gradients are pushed for implicit versus explicit examples during training.
+
+ Args:
+ target_assigner: a target assigner.
+ anchors_batch: BoxList representing N box anchors or list of BoxList objects
+ with length batch_size representing anchor sets.
+ gt_box_batch: a list of BoxList objects with length batch_size
+ representing groundtruth boxes for each image in the batch
+ gt_class_confidences_batch: a list of tensors with length batch_size, where
+ each tensor has shape [num_gt_boxes_i, classification_target_size] and
+ num_gt_boxes_i is the number of boxes in the ith boxlist of
+ gt_box_batch. Note that in this tensor, 1 means explicit positive class,
+ -1 means explicit negative class, and 0 means implicit negative class.
+ gt_weights_batch: A list of 1-D tf.float32 tensors of shape
+ [num_gt_boxes_i] containing weights for groundtruth boxes.
+ unmatched_class_label: a float32 tensor with shape [d_1, d_2, ..., d_k]
+ which is consistent with the classification target for each
+ anchor (and can be empty for scalar targets). This shape must thus be
+ compatible with the groundtruth labels that are passed to the "assign"
+ function (which have shape [num_gt_boxes, d_1, d_2, ..., d_k]).
+ include_background_class: whether or not gt_class_confidences_batch includes
+ the background class.
+ implicit_class_weight: the weight assigned to implicit examples.
+
+ Returns:
+ batch_cls_targets: a tensor with shape [batch_size, num_anchors,
+ num_classes],
+ batch_cls_weights: a tensor with shape [batch_size, num_anchors,
+ num_classes],
+ batch_reg_targets: a tensor with shape [batch_size, num_anchors,
+ box_code_dimension]
+ batch_reg_weights: a tensor with shape [batch_size, num_anchors],
+ match_list: a list of matcher.Match objects encoding the match between
+ anchors and groundtruth boxes for each image of the batch,
+ with rows of the Match objects corresponding to groundtruth boxes
+ and columns corresponding to anchors.
+
+ Raises:
+ ValueError: if the input list lengths are inconsistent, i.e., unless
+ batch_size == len(gt_box_batch) == len(gt_class_confidences_batch) and,
+ when anchors_batch is a list rather than a single BoxList,
+ batch_size == len(anchors_batch); or if any element in
+ gt_class_confidences_batch has rank > 2.
+ """
+ if not isinstance(anchors_batch, list):
+ anchors_batch = len(gt_box_batch) * [anchors_batch]
+ if not all(
+ isinstance(anchors, box_list.BoxList) for anchors in anchors_batch):
+ raise ValueError('anchors_batch must be a BoxList or list of BoxLists.')
+ if not (len(anchors_batch)
+ == len(gt_box_batch)
+ == len(gt_class_confidences_batch)):
+ raise ValueError('batch size incompatible with lengths of anchors_batch, '
+ 'gt_box_batch and gt_class_confidences_batch.')
+
+ cls_targets_list = []
+ cls_weights_list = []
+ reg_targets_list = []
+ reg_weights_list = []
+ match_list = []
+ if gt_weights_batch is None:
+ gt_weights_batch = [None] * len(gt_class_confidences_batch)
+ for anchors, gt_boxes, gt_class_confidences, gt_weights in zip(
+ anchors_batch, gt_box_batch, gt_class_confidences_batch,
+ gt_weights_batch):
+
+ if (gt_class_confidences is not None and
+ len(gt_class_confidences.get_shape().as_list()) > 2):
+ raise ValueError('The shape of the class target is not supported. ',
+ gt_class_confidences.get_shape())
+
+ cls_targets, _, reg_targets, _, match = target_assigner.assign(
+ anchors, gt_boxes, gt_class_confidences, unmatched_class_label,
+ groundtruth_weights=gt_weights)
+
+ if include_background_class:
+ cls_targets_without_background = tf.slice(
+ cls_targets, [0, 1], [-1, -1])
+ else:
+ cls_targets_without_background = cls_targets
+
+ positive_mask = tf.greater(cls_targets_without_background, 0.0)
+ negative_mask = tf.less(cls_targets_without_background, 0.0)
+ explicit_example_mask = tf.logical_or(positive_mask, negative_mask)
+ positive_anchors = tf.reduce_any(positive_mask, axis=-1)
+
+ regression_weights = tf.to_float(positive_anchors)
+ regression_targets = (
+ reg_targets * tf.expand_dims(regression_weights, axis=-1))
+ regression_weights_expanded = tf.expand_dims(regression_weights, axis=-1)
+
+ cls_targets_without_background = (
+ cls_targets_without_background * (1 - tf.to_float(negative_mask)))
+ cls_weights_without_background = (
+ (1 - implicit_class_weight) * tf.to_float(explicit_example_mask)
+ + implicit_class_weight)
+
+ if include_background_class:
+ cls_weights_background = (
+ (1 - implicit_class_weight) * regression_weights_expanded
+ + implicit_class_weight)
+ classification_weights = tf.concat(
+ [cls_weights_background, cls_weights_without_background], axis=-1)
+ cls_targets_background = 1 - regression_weights_expanded
+ classification_targets = tf.concat(
+ [cls_targets_background, cls_targets_without_background], axis=-1)
+ else:
+ classification_targets = cls_targets_without_background
+ classification_weights = cls_weights_without_background
+
+ cls_targets_list.append(classification_targets)
+ cls_weights_list.append(classification_weights)
+ reg_targets_list.append(regression_targets)
+ reg_weights_list.append(regression_weights)
+ match_list.append(match)
+ batch_cls_targets = tf.stack(cls_targets_list)
+ batch_cls_weights = tf.stack(cls_weights_list)
+ batch_reg_targets = tf.stack(reg_targets_list)
+ batch_reg_weights = tf.stack(reg_weights_list)
+ return (batch_cls_targets, batch_cls_weights, batch_reg_targets,
+ batch_reg_weights, match_list)
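To restate the confidence encoding used by batch_assign_confidences concretely (the tensor values below are invented for illustration):

# One groundtruth box over a background column plus three foreground classes:
#   [background, class_1, class_2, class_3]
gt_class_confidences = tf.constant([[0., 1., -1., 0.]])
# class_1 is an explicit positive  -> target 1, weight 1.0
# class_2 is an explicit negative  -> target 0, weight 1.0
# class_3 is an implicit negative  -> target 0, weight implicit_class_weight
# With include_background_class=True, the background column is handled
# separately from the explicit/implicit weighting above.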
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/target_assigner_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/target_assigner_test.py"
new file mode 100644
index 00000000..443c33aa
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/core/target_assigner_test.py"
@@ -0,0 +1,1178 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Tests for object_detection.core.target_assigner."""
+import numpy as np
+import tensorflow as tf
+
+from object_detection.box_coders import keypoint_box_coder
+from object_detection.box_coders import mean_stddev_box_coder
+from object_detection.core import box_list
+from object_detection.core import region_similarity_calculator
+from object_detection.core import standard_fields as fields
+from object_detection.core import target_assigner as targetassigner
+from object_detection.matchers import argmax_matcher
+from object_detection.matchers import bipartite_matcher
+from object_detection.utils import test_case
+
+
+class TargetAssignerTest(test_case.TestCase):
+
+ def test_assign_agnostic(self):
+ def graph_fn(anchor_means, groundtruth_box_corners):
+ similarity_calc = region_similarity_calculator.IouSimilarity()
+ matcher = argmax_matcher.ArgMaxMatcher(matched_threshold=0.5,
+ unmatched_threshold=0.5)
+ box_coder = mean_stddev_box_coder.MeanStddevBoxCoder(stddev=0.1)
+ target_assigner = targetassigner.TargetAssigner(
+ similarity_calc, matcher, box_coder)
+ anchors_boxlist = box_list.BoxList(anchor_means)
+ groundtruth_boxlist = box_list.BoxList(groundtruth_box_corners)
+ result = target_assigner.assign(
+ anchors_boxlist, groundtruth_boxlist, unmatched_class_label=None)
+ (cls_targets, cls_weights, reg_targets, reg_weights, _) = result
+ return (cls_targets, cls_weights, reg_targets, reg_weights)
+
+ anchor_means = np.array([[0.0, 0.0, 0.5, 0.5],
+ [0.5, 0.5, 1.0, 0.8],
+ [0, 0.5, .5, 1.0]], dtype=np.float32)
+ groundtruth_box_corners = np.array([[0.0, 0.0, 0.5, 0.5],
+ [0.5, 0.5, 0.9, 0.9]],
+ dtype=np.float32)
+ exp_cls_targets = [[1], [1], [0]]
+ exp_cls_weights = [[1], [1], [1]]
+ exp_reg_targets = [[0, 0, 0, 0],
+ [0, 0, -1, 1],
+ [0, 0, 0, 0]]
+ exp_reg_weights = [1, 1, 0]
+
+ (cls_targets_out,
+ cls_weights_out, reg_targets_out, reg_weights_out) = self.execute(
+ graph_fn, [anchor_means, groundtruth_box_corners])
+ self.assertAllClose(cls_targets_out, exp_cls_targets)
+ self.assertAllClose(cls_weights_out, exp_cls_weights)
+ self.assertAllClose(reg_targets_out, exp_reg_targets)
+ self.assertAllClose(reg_weights_out, exp_reg_weights)
+ self.assertEquals(cls_targets_out.dtype, np.float32)
+ self.assertEquals(cls_weights_out.dtype, np.float32)
+ self.assertEquals(reg_targets_out.dtype, np.float32)
+ self.assertEquals(reg_weights_out.dtype, np.float32)
+
+ def test_assign_class_agnostic_with_ignored_matches(self):
+ # Note: this test is very similar to the one above. The third anchor matches
+ # with an IOU of 0.35, which lies between the matched and unmatched
+ # thresholds, so, as above, the expected classification targets are
+ # [1, 1, 0]. Unlike above, the third anchor is ignored, so the expected
+ # classification weights are [1, 1, 0].
+ def graph_fn(anchor_means, groundtruth_box_corners):
+ similarity_calc = region_similarity_calculator.IouSimilarity()
+ matcher = argmax_matcher.ArgMaxMatcher(matched_threshold=0.5,
+ unmatched_threshold=0.3)
+ box_coder = mean_stddev_box_coder.MeanStddevBoxCoder(stddev=0.1)
+ target_assigner = targetassigner.TargetAssigner(
+ similarity_calc, matcher, box_coder)
+ anchors_boxlist = box_list.BoxList(anchor_means)
+ groundtruth_boxlist = box_list.BoxList(groundtruth_box_corners)
+ result = target_assigner.assign(
+ anchors_boxlist, groundtruth_boxlist, unmatched_class_label=None)
+ (cls_targets, cls_weights, reg_targets, reg_weights, _) = result
+ return (cls_targets, cls_weights, reg_targets, reg_weights)
+
+ anchor_means = np.array([[0.0, 0.0, 0.5, 0.5],
+ [0.5, 0.5, 1.0, 0.8],
+ [0.0, 0.5, .9, 1.0]], dtype=np.float32)
+ groundtruth_box_corners = np.array([[0.0, 0.0, 0.5, 0.5],
+ [0.5, 0.5, 0.9, 0.9]], dtype=np.float32)
+ exp_cls_targets = [[1], [1], [0]]
+ exp_cls_weights = [[1], [1], [0]]
+ exp_reg_targets = [[0, 0, 0, 0],
+ [0, 0, -1, 1],
+ [0, 0, 0, 0]]
+ exp_reg_weights = [1, 1, 0]
+ (cls_targets_out,
+ cls_weights_out, reg_targets_out, reg_weights_out) = self.execute(
+ graph_fn, [anchor_means, groundtruth_box_corners])
+ self.assertAllClose(cls_targets_out, exp_cls_targets)
+ self.assertAllClose(cls_weights_out, exp_cls_weights)
+ self.assertAllClose(reg_targets_out, exp_reg_targets)
+ self.assertAllClose(reg_weights_out, exp_reg_weights)
+ self.assertEquals(cls_targets_out.dtype, np.float32)
+ self.assertEquals(cls_weights_out.dtype, np.float32)
+ self.assertEquals(reg_targets_out.dtype, np.float32)
+ self.assertEquals(reg_weights_out.dtype, np.float32)
+
+ def test_assign_agnostic_with_keypoints(self):
+ def graph_fn(anchor_means, groundtruth_box_corners,
+ groundtruth_keypoints):
+ similarity_calc = region_similarity_calculator.IouSimilarity()
+ matcher = argmax_matcher.ArgMaxMatcher(matched_threshold=0.5,
+ unmatched_threshold=0.5)
+ box_coder = keypoint_box_coder.KeypointBoxCoder(
+ num_keypoints=6, scale_factors=[10.0, 10.0, 5.0, 5.0])
+ target_assigner = targetassigner.TargetAssigner(
+ similarity_calc, matcher, box_coder)
+ anchors_boxlist = box_list.BoxList(anchor_means)
+ groundtruth_boxlist = box_list.BoxList(groundtruth_box_corners)
+ groundtruth_boxlist.add_field(fields.BoxListFields.keypoints,
+ groundtruth_keypoints)
+ result = target_assigner.assign(
+ anchors_boxlist, groundtruth_boxlist, unmatched_class_label=None)
+ (cls_targets, cls_weights, reg_targets, reg_weights, _) = result
+ return (cls_targets, cls_weights, reg_targets, reg_weights)
+
+ anchor_means = np.array([[0.0, 0.0, 0.5, 0.5],
+ [0.5, 0.5, 1.0, 1.0],
+ [0.0, 0.5, .9, 1.0]], dtype=np.float32)
+ groundtruth_box_corners = np.array([[0.0, 0.0, 0.5, 0.5],
+ [0.45, 0.45, 0.95, 0.95]],
+ dtype=np.float32)
+ groundtruth_keypoints = np.array(
+ [[[0.1, 0.2], [0.1, 0.3], [0.2, 0.2], [0.2, 0.2], [0.1, 0.1], [0.9, 0]],
+ [[0, 0.3], [0.2, 0.4], [0.5, 0.6], [0, 0.6], [0.8, 0.2], [0.2, 0.4]]],
+ dtype=np.float32)
+ exp_cls_targets = [[1], [1], [0]]
+ exp_cls_weights = [[1], [1], [1]]
+ exp_reg_targets = [[0, 0, 0, 0, -3, -1, -3, 1, -1, -1, -1, -1, -3, -3, 13,
+ -5],
+ [-1, -1, 0, 0, -15, -9, -11, -7, -5, -3, -15, -3, 1, -11,
+ -11, -7],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]
+ exp_reg_weights = [1, 1, 0]
+ (cls_targets_out, cls_weights_out, reg_targets_out,
+ reg_weights_out) = self.execute(graph_fn, [anchor_means,
+ groundtruth_box_corners,
+ groundtruth_keypoints])
+ self.assertAllClose(cls_targets_out, exp_cls_targets)
+ self.assertAllClose(cls_weights_out, exp_cls_weights)
+ self.assertAllClose(reg_targets_out, exp_reg_targets)
+ self.assertAllClose(reg_weights_out, exp_reg_weights)
+ self.assertEquals(cls_targets_out.dtype, np.float32)
+ self.assertEquals(cls_weights_out.dtype, np.float32)
+ self.assertEquals(reg_targets_out.dtype, np.float32)
+ self.assertEquals(reg_weights_out.dtype, np.float32)
+
+ def test_assign_class_agnostic_with_keypoints_and_ignored_matches(self):
+ # Note: this test mirrors test_assign_agnostic_with_keypoints above. Here the
+ # matched and unmatched thresholds are both 0.5, so the third anchor (whose
+ # IOU falls below 0.5) is treated as unmatched rather than ignored; the
+ # expected classification targets are [1, 1, 0] and the expected
+ # classification weights are [1, 1, 1].
+ def graph_fn(anchor_means, groundtruth_box_corners,
+ groundtruth_keypoints):
+ similarity_calc = region_similarity_calculator.IouSimilarity()
+ matcher = argmax_matcher.ArgMaxMatcher(matched_threshold=0.5,
+ unmatched_threshold=0.5)
+ box_coder = keypoint_box_coder.KeypointBoxCoder(
+ num_keypoints=6, scale_factors=[10.0, 10.0, 5.0, 5.0])
+ target_assigner = targetassigner.TargetAssigner(
+ similarity_calc, matcher, box_coder)
+ anchors_boxlist = box_list.BoxList(anchor_means)
+ groundtruth_boxlist = box_list.BoxList(groundtruth_box_corners)
+ groundtruth_boxlist.add_field(fields.BoxListFields.keypoints,
+ groundtruth_keypoints)
+ result = target_assigner.assign(
+ anchors_boxlist, groundtruth_boxlist, unmatched_class_label=None)
+ (cls_targets, cls_weights, reg_targets, reg_weights, _) = result
+ return (cls_targets, cls_weights, reg_targets, reg_weights)
+
+ anchor_means = np.array([[0.0, 0.0, 0.5, 0.5],
+ [0.5, 0.5, 1.0, 1.0],
+ [0.0, 0.5, .9, 1.0]], dtype=np.float32)
+ groundtruth_box_corners = np.array([[0.0, 0.0, 0.5, 0.5],
+ [0.45, 0.45, 0.95, 0.95]],
+ dtype=np.float32)
+ groundtruth_keypoints = np.array(
+ [[[0.1, 0.2], [0.1, 0.3], [0.2, 0.2], [0.2, 0.2], [0.1, 0.1], [0.9, 0]],
+ [[0, 0.3], [0.2, 0.4], [0.5, 0.6], [0, 0.6], [0.8, 0.2], [0.2, 0.4]]],
+ dtype=np.float32)
+ exp_cls_targets = [[1], [1], [0]]
+ exp_cls_weights = [[1], [1], [1]]
+ exp_reg_targets = [[0, 0, 0, 0, -3, -1, -3, 1, -1, -1, -1, -1, -3, -3, 13,
+ -5],
+ [-1, -1, 0, 0, -15, -9, -11, -7, -5, -3, -15, -3, 1, -11,
+ -11, -7],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]
+ exp_reg_weights = [1, 1, 0]
+ (cls_targets_out, cls_weights_out, reg_targets_out,
+ reg_weights_out) = self.execute(graph_fn, [anchor_means,
+ groundtruth_box_corners,
+ groundtruth_keypoints])
+ self.assertAllClose(cls_targets_out, exp_cls_targets)
+ self.assertAllClose(cls_weights_out, exp_cls_weights)
+ self.assertAllClose(reg_targets_out, exp_reg_targets)
+ self.assertAllClose(reg_weights_out, exp_reg_weights)
+ self.assertEquals(cls_targets_out.dtype, np.float32)
+ self.assertEquals(cls_weights_out.dtype, np.float32)
+ self.assertEquals(reg_targets_out.dtype, np.float32)
+ self.assertEquals(reg_weights_out.dtype, np.float32)
+
+ def test_assign_multiclass(self):
+
+ def graph_fn(anchor_means, groundtruth_box_corners, groundtruth_labels):
+ similarity_calc = region_similarity_calculator.IouSimilarity()
+ matcher = argmax_matcher.ArgMaxMatcher(matched_threshold=0.5,
+ unmatched_threshold=0.5)
+ box_coder = mean_stddev_box_coder.MeanStddevBoxCoder(stddev=0.1)
+ unmatched_class_label = tf.constant([1, 0, 0, 0, 0, 0, 0], tf.float32)
+ target_assigner = targetassigner.TargetAssigner(
+ similarity_calc, matcher, box_coder)
+
+ anchors_boxlist = box_list.BoxList(anchor_means)
+ groundtruth_boxlist = box_list.BoxList(groundtruth_box_corners)
+ result = target_assigner.assign(
+ anchors_boxlist,
+ groundtruth_boxlist,
+ groundtruth_labels,
+ unmatched_class_label=unmatched_class_label)
+ (cls_targets, cls_weights, reg_targets, reg_weights, _) = result
+ return (cls_targets, cls_weights, reg_targets, reg_weights)
+
+ anchor_means = np.array([[0.0, 0.0, 0.5, 0.5],
+ [0.5, 0.5, 1.0, 0.8],
+ [0, 0.5, .5, 1.0],
+ [.75, 0, 1.0, .25]], dtype=np.float32)
+ groundtruth_box_corners = np.array([[0.0, 0.0, 0.5, 0.5],
+ [0.5, 0.5, 0.9, 0.9],
+ [.75, 0, .95, .27]], dtype=np.float32)
+ groundtruth_labels = np.array([[0, 1, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 1, 0],
+ [0, 0, 0, 1, 0, 0, 0]], dtype=np.float32)
+
+ exp_cls_targets = [[0, 1, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 1, 0],
+ [1, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 1, 0, 0, 0]]
+ exp_cls_weights = [[1, 1, 1, 1, 1, 1, 1],
+ [1, 1, 1, 1, 1, 1, 1],
+ [1, 1, 1, 1, 1, 1, 1],
+ [1, 1, 1, 1, 1, 1, 1]]
+ exp_reg_targets = [[0, 0, 0, 0],
+ [0, 0, -1, 1],
+ [0, 0, 0, 0],
+ [0, 0, -.5, .2]]
+ exp_reg_weights = [1, 1, 0, 1]
+
+ (cls_targets_out,
+ cls_weights_out, reg_targets_out, reg_weights_out) = self.execute(
+ graph_fn, [anchor_means, groundtruth_box_corners, groundtruth_labels])
+ self.assertAllClose(cls_targets_out, exp_cls_targets)
+ self.assertAllClose(cls_weights_out, exp_cls_weights)
+ self.assertAllClose(reg_targets_out, exp_reg_targets)
+ self.assertAllClose(reg_weights_out, exp_reg_weights)
+ self.assertEquals(cls_targets_out.dtype, np.float32)
+ self.assertEquals(cls_weights_out.dtype, np.float32)
+ self.assertEquals(reg_targets_out.dtype, np.float32)
+ self.assertEquals(reg_weights_out.dtype, np.float32)
+
+ def test_assign_multiclass_with_groundtruth_weights(self):
+
+ def graph_fn(anchor_means, groundtruth_box_corners, groundtruth_labels,
+ groundtruth_weights):
+ similarity_calc = region_similarity_calculator.IouSimilarity()
+ matcher = argmax_matcher.ArgMaxMatcher(matched_threshold=0.5,
+ unmatched_threshold=0.5)
+ box_coder = mean_stddev_box_coder.MeanStddevBoxCoder(stddev=0.1)
+ unmatched_class_label = tf.constant([1, 0, 0, 0, 0, 0, 0], tf.float32)
+ target_assigner = targetassigner.TargetAssigner(
+ similarity_calc, matcher, box_coder)
+
+ anchors_boxlist = box_list.BoxList(anchor_means)
+ groundtruth_boxlist = box_list.BoxList(groundtruth_box_corners)
+ result = target_assigner.assign(
+ anchors_boxlist,
+ groundtruth_boxlist,
+ groundtruth_labels,
+ unmatched_class_label=unmatched_class_label,
+ groundtruth_weights=groundtruth_weights)
+ (_, cls_weights, _, reg_weights, _) = result
+ return (cls_weights, reg_weights)
+
+ anchor_means = np.array([[0.0, 0.0, 0.5, 0.5],
+ [0.5, 0.5, 1.0, 0.8],
+ [0, 0.5, .5, 1.0],
+ [.75, 0, 1.0, .25]], dtype=np.float32)
+ groundtruth_box_corners = np.array([[0.0, 0.0, 0.5, 0.5],
+ [0.5, 0.5, 0.9, 0.9],
+ [.75, 0, .95, .27]], dtype=np.float32)
+ groundtruth_labels = np.array([[0, 1, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 1, 0],
+ [0, 0, 0, 1, 0, 0, 0]], dtype=np.float32)
+ groundtruth_weights = np.array([0.3, 0., 0.5], dtype=np.float32)
+
+ # background class gets weight of 1.
+ exp_cls_weights = [[0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3],
+ [0, 0, 0, 0, 0, 0, 0],
+ [1, 1, 1, 1, 1, 1, 1],
+ [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]]
+ exp_reg_weights = [0.3, 0., 0., 0.5] # background class gets weight of 0.
+
+ (cls_weights_out, reg_weights_out) = self.execute(graph_fn, [
+ anchor_means, groundtruth_box_corners, groundtruth_labels,
+ groundtruth_weights
+ ])
+ self.assertAllClose(cls_weights_out, exp_cls_weights)
+ self.assertAllClose(reg_weights_out, exp_reg_weights)
+
+ def test_assign_multidimensional_class_targets(self):
+
+ def graph_fn(anchor_means, groundtruth_box_corners, groundtruth_labels):
+ similarity_calc = region_similarity_calculator.IouSimilarity()
+ matcher = argmax_matcher.ArgMaxMatcher(matched_threshold=0.5,
+ unmatched_threshold=0.5)
+ box_coder = mean_stddev_box_coder.MeanStddevBoxCoder(stddev=0.1)
+
+ unmatched_class_label = tf.constant([[0, 0], [0, 0]], tf.float32)
+ target_assigner = targetassigner.TargetAssigner(
+ similarity_calc, matcher, box_coder)
+
+ anchors_boxlist = box_list.BoxList(anchor_means)
+ groundtruth_boxlist = box_list.BoxList(groundtruth_box_corners)
+ result = target_assigner.assign(
+ anchors_boxlist,
+ groundtruth_boxlist,
+ groundtruth_labels,
+ unmatched_class_label=unmatched_class_label)
+ (cls_targets, cls_weights, reg_targets, reg_weights, _) = result
+ return (cls_targets, cls_weights, reg_targets, reg_weights)
+
+ anchor_means = np.array([[0.0, 0.0, 0.5, 0.5],
+ [0.5, 0.5, 1.0, 0.8],
+ [0, 0.5, .5, 1.0],
+ [.75, 0, 1.0, .25]], dtype=np.float32)
+ groundtruth_box_corners = np.array([[0.0, 0.0, 0.5, 0.5],
+ [0.5, 0.5, 0.9, 0.9],
+ [.75, 0, .95, .27]], dtype=np.float32)
+
+ groundtruth_labels = np.array([[[0, 1], [1, 0]],
+ [[1, 0], [0, 1]],
+ [[0, 1], [1, .5]]], np.float32)
+
+ exp_cls_targets = [[[0, 1], [1, 0]],
+ [[1, 0], [0, 1]],
+ [[0, 0], [0, 0]],
+ [[0, 1], [1, .5]]]
+ exp_cls_weights = [[[1, 1], [1, 1]],
+ [[1, 1], [1, 1]],
+ [[1, 1], [1, 1]],
+ [[1, 1], [1, 1]]]
+ exp_reg_targets = [[0, 0, 0, 0],
+ [0, 0, -1, 1],
+ [0, 0, 0, 0],
+ [0, 0, -.5, .2]]
+ exp_reg_weights = [1, 1, 0, 1]
+ (cls_targets_out,
+ cls_weights_out, reg_targets_out, reg_weights_out) = self.execute(
+ graph_fn, [anchor_means, groundtruth_box_corners, groundtruth_labels])
+ self.assertAllClose(cls_targets_out, exp_cls_targets)
+ self.assertAllClose(cls_weights_out, exp_cls_weights)
+ self.assertAllClose(reg_targets_out, exp_reg_targets)
+ self.assertAllClose(reg_weights_out, exp_reg_weights)
+ self.assertEquals(cls_targets_out.dtype, np.float32)
+ self.assertEquals(cls_weights_out.dtype, np.float32)
+ self.assertEquals(reg_targets_out.dtype, np.float32)
+ self.assertEquals(reg_weights_out.dtype, np.float32)
+
+ def test_assign_empty_groundtruth(self):
+
+ def graph_fn(anchor_means, groundtruth_box_corners, groundtruth_labels):
+ similarity_calc = region_similarity_calculator.IouSimilarity()
+ matcher = argmax_matcher.ArgMaxMatcher(matched_threshold=0.5,
+ unmatched_threshold=0.5)
+ box_coder = mean_stddev_box_coder.MeanStddevBoxCoder(stddev=0.1)
+ unmatched_class_label = tf.constant([0, 0, 0], tf.float32)
+ anchors_boxlist = box_list.BoxList(anchor_means)
+ groundtruth_boxlist = box_list.BoxList(groundtruth_box_corners)
+ target_assigner = targetassigner.TargetAssigner(
+ similarity_calc, matcher, box_coder)
+ result = target_assigner.assign(
+ anchors_boxlist,
+ groundtruth_boxlist,
+ groundtruth_labels,
+ unmatched_class_label=unmatched_class_label)
+ (cls_targets, cls_weights, reg_targets, reg_weights, _) = result
+ return (cls_targets, cls_weights, reg_targets, reg_weights)
+
+ groundtruth_box_corners = np.zeros((0, 4), dtype=np.float32)
+ groundtruth_labels = np.zeros((0, 3), dtype=np.float32)
+ anchor_means = np.array([[0.0, 0.0, 0.5, 0.5],
+ [0.5, 0.5, 1.0, 0.8],
+ [0, 0.5, .5, 1.0],
+ [.75, 0, 1.0, .25]],
+ dtype=np.float32)
+ exp_cls_targets = [[0, 0, 0],
+ [0, 0, 0],
+ [0, 0, 0],
+ [0, 0, 0]]
+ exp_cls_weights = [[1, 1, 1],
+ [1, 1, 1],
+ [1, 1, 1],
+ [1, 1, 1]]
+ exp_reg_targets = [[0, 0, 0, 0],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0]]
+ exp_reg_weights = [0, 0, 0, 0]
+ (cls_targets_out,
+ cls_weights_out, reg_targets_out, reg_weights_out) = self.execute(
+ graph_fn, [anchor_means, groundtruth_box_corners, groundtruth_labels])
+ self.assertAllClose(cls_targets_out, exp_cls_targets)
+ self.assertAllClose(cls_weights_out, exp_cls_weights)
+ self.assertAllClose(reg_targets_out, exp_reg_targets)
+ self.assertAllClose(reg_weights_out, exp_reg_weights)
+ self.assertEquals(cls_targets_out.dtype, np.float32)
+ self.assertEquals(cls_weights_out.dtype, np.float32)
+ self.assertEquals(reg_targets_out.dtype, np.float32)
+ self.assertEquals(reg_weights_out.dtype, np.float32)
+
+ def test_raises_error_on_incompatible_groundtruth_boxes_and_labels(self):
+ similarity_calc = region_similarity_calculator.NegSqDistSimilarity()
+ matcher = bipartite_matcher.GreedyBipartiteMatcher()
+ box_coder = mean_stddev_box_coder.MeanStddevBoxCoder()
+ unmatched_class_label = tf.constant([1, 0, 0, 0, 0, 0, 0], tf.float32)
+ target_assigner = targetassigner.TargetAssigner(
+ similarity_calc, matcher, box_coder)
+
+ prior_means = tf.constant([[0.0, 0.0, 0.5, 0.5],
+ [0.5, 0.5, 1.0, 0.8],
+ [0, 0.5, .5, 1.0],
+ [.75, 0, 1.0, .25]])
+ priors = box_list.BoxList(prior_means)
+
+ box_corners = [[0.0, 0.0, 0.5, 0.5],
+ [0.0, 0.0, 0.5, 0.8],
+ [0.5, 0.5, 0.9, 0.9],
+ [.75, 0, .95, .27]]
+ boxes = box_list.BoxList(tf.constant(box_corners))
+
+ groundtruth_labels = tf.constant([[0, 1, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 1, 0],
+ [0, 0, 0, 1, 0, 0, 0]], tf.float32)
+ with self.assertRaisesRegexp(ValueError, 'Unequal shapes'):
+ target_assigner.assign(
+ priors,
+ boxes,
+ groundtruth_labels,
+ unmatched_class_label=unmatched_class_label)
+
+ def test_raises_error_on_invalid_groundtruth_labels(self):
+ similarity_calc = region_similarity_calculator.NegSqDistSimilarity()
+ matcher = bipartite_matcher.GreedyBipartiteMatcher()
+ box_coder = mean_stddev_box_coder.MeanStddevBoxCoder(stddev=1.0)
+ unmatched_class_label = tf.constant([[0, 0], [0, 0], [0, 0]], tf.float32)
+ target_assigner = targetassigner.TargetAssigner(
+ similarity_calc, matcher, box_coder)
+
+ prior_means = tf.constant([[0.0, 0.0, 0.5, 0.5]])
+ priors = box_list.BoxList(prior_means)
+
+ box_corners = [[0.0, 0.0, 0.5, 0.5],
+ [0.5, 0.5, 0.9, 0.9],
+ [.75, 0, .95, .27]]
+ boxes = box_list.BoxList(tf.constant(box_corners))
+ groundtruth_labels = tf.constant([[[0, 1], [1, 0]]], tf.float32)
+
+ with self.assertRaises(ValueError):
+ target_assigner.assign(
+ priors,
+ boxes,
+ groundtruth_labels,
+ unmatched_class_label=unmatched_class_label)
+
+
+class BatchTargetAssignerTest(test_case.TestCase):
+
+ def _get_target_assigner(self):
+ similarity_calc = region_similarity_calculator.IouSimilarity()
+ matcher = argmax_matcher.ArgMaxMatcher(matched_threshold=0.5,
+ unmatched_threshold=0.5)
+ box_coder = mean_stddev_box_coder.MeanStddevBoxCoder(stddev=0.1)
+ return targetassigner.TargetAssigner(similarity_calc, matcher, box_coder)
+
+ def test_batch_assign_targets(self):
+
+ def graph_fn(anchor_means, groundtruth_boxlist1, groundtruth_boxlist2):
+ box_list1 = box_list.BoxList(groundtruth_boxlist1)
+ box_list2 = box_list.BoxList(groundtruth_boxlist2)
+ gt_box_batch = [box_list1, box_list2]
+ gt_class_targets = [None, None]
+ anchors_boxlist = box_list.BoxList(anchor_means)
+ agnostic_target_assigner = self._get_target_assigner()
+ (cls_targets, cls_weights, reg_targets, reg_weights,
+ _) = targetassigner.batch_assign_targets(
+ agnostic_target_assigner, anchors_boxlist, gt_box_batch,
+ gt_class_targets)
+ return (cls_targets, cls_weights, reg_targets, reg_weights)
+
+ groundtruth_boxlist1 = np.array([[0., 0., 0.2, 0.2]], dtype=np.float32)
+ groundtruth_boxlist2 = np.array([[0, 0.25123152, 1, 1],
+ [0.015789, 0.0985, 0.55789, 0.3842]],
+ dtype=np.float32)
+ anchor_means = np.array([[0, 0, .25, .25],
+ [0, .25, 1, 1],
+ [0, .1, .5, .5],
+ [.75, .75, 1, 1]], dtype=np.float32)
+
+ exp_cls_targets = [[[1], [0], [0], [0]],
+ [[0], [1], [1], [0]]]
+ exp_cls_weights = [[[1], [1], [1], [1]],
+ [[1], [1], [1], [1]]]
+ exp_reg_targets = [[[0, 0, -0.5, -0.5],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0,],
+ [0, 0, 0, 0,],],
+ [[0, 0, 0, 0,],
+ [0, 0.01231521, 0, 0],
+ [0.15789001, -0.01500003, 0.57889998, -1.15799987],
+ [0, 0, 0, 0]]]
+ exp_reg_weights = [[1, 0, 0, 0],
+ [0, 1, 1, 0]]
+
+ (cls_targets_out,
+ cls_weights_out, reg_targets_out, reg_weights_out) = self.execute(
+ graph_fn, [anchor_means, groundtruth_boxlist1, groundtruth_boxlist2])
+ self.assertAllClose(cls_targets_out, exp_cls_targets)
+ self.assertAllClose(cls_weights_out, exp_cls_weights)
+ self.assertAllClose(reg_targets_out, exp_reg_targets)
+ self.assertAllClose(reg_weights_out, exp_reg_weights)
+
+ def test_batch_assign_multiclass_targets(self):
+
+ def graph_fn(anchor_means, groundtruth_boxlist1, groundtruth_boxlist2,
+ class_targets1, class_targets2):
+ box_list1 = box_list.BoxList(groundtruth_boxlist1)
+ box_list2 = box_list.BoxList(groundtruth_boxlist2)
+ gt_box_batch = [box_list1, box_list2]
+ gt_class_targets = [class_targets1, class_targets2]
+ anchors_boxlist = box_list.BoxList(anchor_means)
+ multiclass_target_assigner = self._get_target_assigner()
+ num_classes = 3
+ unmatched_class_label = tf.constant([1] + num_classes * [0], tf.float32)
+ (cls_targets, cls_weights, reg_targets, reg_weights,
+ _) = targetassigner.batch_assign_targets(
+ multiclass_target_assigner, anchors_boxlist, gt_box_batch,
+ gt_class_targets, unmatched_class_label)
+ return (cls_targets, cls_weights, reg_targets, reg_weights)
+
+ groundtruth_boxlist1 = np.array([[0., 0., 0.2, 0.2]], dtype=np.float32)
+ groundtruth_boxlist2 = np.array([[0, 0.25123152, 1, 1],
+ [0.015789, 0.0985, 0.55789, 0.3842]],
+ dtype=np.float32)
+ class_targets1 = np.array([[0, 1, 0, 0]], dtype=np.float32)
+ class_targets2 = np.array([[0, 0, 0, 1],
+ [0, 0, 1, 0]], dtype=np.float32)
+
+ anchor_means = np.array([[0, 0, .25, .25],
+ [0, .25, 1, 1],
+ [0, .1, .5, .5],
+ [.75, .75, 1, 1]], dtype=np.float32)
+ exp_cls_targets = [[[0, 1, 0, 0],
+ [1, 0, 0, 0],
+ [1, 0, 0, 0],
+ [1, 0, 0, 0]],
+ [[1, 0, 0, 0],
+ [0, 0, 0, 1],
+ [0, 0, 1, 0],
+ [1, 0, 0, 0]]]
+ exp_cls_weights = [[[1, 1, 1, 1],
+ [1, 1, 1, 1],
+ [1, 1, 1, 1],
+ [1, 1, 1, 1]],
+ [[1, 1, 1, 1],
+ [1, 1, 1, 1],
+ [1, 1, 1, 1],
+ [1, 1, 1, 1]]]
+ exp_reg_targets = [[[0, 0, -0.5, -0.5],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0,],
+ [0, 0, 0, 0,],],
+ [[0, 0, 0, 0,],
+ [0, 0.01231521, 0, 0],
+ [0.15789001, -0.01500003, 0.57889998, -1.15799987],
+ [0, 0, 0, 0]]]
+ exp_reg_weights = [[1, 0, 0, 0],
+ [0, 1, 1, 0]]
+
+ (cls_targets_out, cls_weights_out, reg_targets_out,
+ reg_weights_out) = self.execute(graph_fn, [
+ anchor_means, groundtruth_boxlist1, groundtruth_boxlist2,
+ class_targets1, class_targets2
+ ])
+ self.assertAllClose(cls_targets_out, exp_cls_targets)
+ self.assertAllClose(cls_weights_out, exp_cls_weights)
+ self.assertAllClose(reg_targets_out, exp_reg_targets)
+ self.assertAllClose(reg_weights_out, exp_reg_weights)
+
+ def test_batch_assign_multiclass_targets_with_padded_groundtruth(self):
+
+ def graph_fn(anchor_means, groundtruth_boxlist1, groundtruth_boxlist2,
+ class_targets1, class_targets2, groundtruth_weights1,
+ groundtruth_weights2):
+ box_list1 = box_list.BoxList(groundtruth_boxlist1)
+ box_list2 = box_list.BoxList(groundtruth_boxlist2)
+ gt_box_batch = [box_list1, box_list2]
+ gt_class_targets = [class_targets1, class_targets2]
+ gt_weights = [groundtruth_weights1, groundtruth_weights2]
+ anchors_boxlist = box_list.BoxList(anchor_means)
+ multiclass_target_assigner = self._get_target_assigner()
+ num_classes = 3
+ unmatched_class_label = tf.constant([1] + num_classes * [0], tf.float32)
+ (cls_targets, cls_weights, reg_targets, reg_weights,
+ _) = targetassigner.batch_assign_targets(
+ multiclass_target_assigner, anchors_boxlist, gt_box_batch,
+ gt_class_targets, unmatched_class_label, gt_weights)
+ return (cls_targets, cls_weights, reg_targets, reg_weights)
+
+ groundtruth_boxlist1 = np.array([[0., 0., 0.2, 0.2],
+ [0., 0., 0., 0.]], dtype=np.float32)
+ groundtruth_weights1 = np.array([1, 0], dtype=np.float32)
+ groundtruth_boxlist2 = np.array([[0, 0.25123152, 1, 1],
+ [0.015789, 0.0985, 0.55789, 0.3842],
+ [0, 0, 0, 0]],
+ dtype=np.float32)
+ groundtruth_weights2 = np.array([1, 1, 0], dtype=np.float32)
+ class_targets1 = np.array([[0, 1, 0, 0], [0, 0, 0, 0]], dtype=np.float32)
+ class_targets2 = np.array([[0, 0, 0, 1],
+ [0, 0, 1, 0],
+ [0, 0, 0, 0]], dtype=np.float32)
+
+ anchor_means = np.array([[0, 0, .25, .25],
+ [0, .25, 1, 1],
+ [0, .1, .5, .5],
+ [.75, .75, 1, 1]], dtype=np.float32)
+
+ exp_cls_targets = [[[0, 1, 0, 0],
+ [1, 0, 0, 0],
+ [1, 0, 0, 0],
+ [1, 0, 0, 0]],
+ [[1, 0, 0, 0],
+ [0, 0, 0, 1],
+ [0, 0, 1, 0],
+ [1, 0, 0, 0]]]
+ exp_cls_weights = [[[1, 1, 1, 1],
+ [1, 1, 1, 1],
+ [1, 1, 1, 1],
+ [1, 1, 1, 1]],
+ [[1, 1, 1, 1],
+ [1, 1, 1, 1],
+ [1, 1, 1, 1],
+ [1, 1, 1, 1]]]
+ exp_reg_targets = [[[0, 0, -0.5, -0.5],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0,],
+ [0, 0, 0, 0,],],
+ [[0, 0, 0, 0,],
+ [0, 0.01231521, 0, 0],
+ [0.15789001, -0.01500003, 0.57889998, -1.15799987],
+ [0, 0, 0, 0]]]
+ exp_reg_weights = [[1, 0, 0, 0],
+ [0, 1, 1, 0]]
+
+ (cls_targets_out, cls_weights_out, reg_targets_out,
+ reg_weights_out) = self.execute(graph_fn, [
+ anchor_means, groundtruth_boxlist1, groundtruth_boxlist2,
+ class_targets1, class_targets2, groundtruth_weights1,
+ groundtruth_weights2
+ ])
+ self.assertAllClose(cls_targets_out, exp_cls_targets)
+ self.assertAllClose(cls_weights_out, exp_cls_weights)
+ self.assertAllClose(reg_targets_out, exp_reg_targets)
+ self.assertAllClose(reg_weights_out, exp_reg_weights)
+
+ def test_batch_assign_multidimensional_targets(self):
+
+ def graph_fn(anchor_means, groundtruth_boxlist1, groundtruth_boxlist2,
+ class_targets1, class_targets2):
+ box_list1 = box_list.BoxList(groundtruth_boxlist1)
+ box_list2 = box_list.BoxList(groundtruth_boxlist2)
+ gt_box_batch = [box_list1, box_list2]
+ gt_class_targets = [class_targets1, class_targets2]
+ anchors_boxlist = box_list.BoxList(anchor_means)
+ multiclass_target_assigner = self._get_target_assigner()
+ target_dimensions = (2, 3)
+ unmatched_class_label = tf.constant(np.zeros(target_dimensions),
+ tf.float32)
+ (cls_targets, cls_weights, reg_targets, reg_weights,
+ _) = targetassigner.batch_assign_targets(
+ multiclass_target_assigner, anchors_boxlist, gt_box_batch,
+ gt_class_targets, unmatched_class_label)
+ return (cls_targets, cls_weights, reg_targets, reg_weights)
+
+ groundtruth_boxlist1 = np.array([[0., 0., 0.2, 0.2]], dtype=np.float32)
+ groundtruth_boxlist2 = np.array([[0, 0.25123152, 1, 1],
+ [0.015789, 0.0985, 0.55789, 0.3842]],
+ dtype=np.float32)
+ class_targets1 = np.array([[0, 1, 0, 0]], dtype=np.float32)
+ class_targets2 = np.array([[0, 0, 0, 1],
+ [0, 0, 1, 0]], dtype=np.float32)
+ class_targets1 = np.array([[[0, 1, 1],
+ [1, 1, 0]]], dtype=np.float32)
+ class_targets2 = np.array([[[0, 1, 1],
+ [1, 1, 0]],
+ [[0, 0, 1],
+ [0, 0, 1]]], dtype=np.float32)
+
+ anchor_means = np.array([[0, 0, .25, .25],
+ [0, .25, 1, 1],
+ [0, .1, .5, .5],
+ [.75, .75, 1, 1]], dtype=np.float32)
+
+ exp_cls_targets = [[[[0., 1., 1.],
+ [1., 1., 0.]],
+ [[0., 0., 0.],
+ [0., 0., 0.]],
+ [[0., 0., 0.],
+ [0., 0., 0.]],
+ [[0., 0., 0.],
+ [0., 0., 0.]]],
+ [[[0., 0., 0.],
+ [0., 0., 0.]],
+ [[0., 1., 1.],
+ [1., 1., 0.]],
+ [[0., 0., 1.],
+ [0., 0., 1.]],
+ [[0., 0., 0.],
+ [0., 0., 0.]]]]
+ exp_cls_weights = [[[[1., 1., 1.],
+ [1., 1., 1.]],
+ [[1., 1., 1.],
+ [1., 1., 1.]],
+ [[1., 1., 1.],
+ [1., 1., 1.]],
+ [[1., 1., 1.],
+ [1., 1., 1.]]],
+ [[[1., 1., 1.],
+ [1., 1., 1.]],
+ [[1., 1., 1.],
+ [1., 1., 1.]],
+ [[1., 1., 1.],
+ [1., 1., 1.]],
+ [[1., 1., 1.],
+ [1., 1., 1.]]]]
+ exp_reg_targets = [[[0, 0, -0.5, -0.5],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0,],
+ [0, 0, 0, 0,],],
+ [[0, 0, 0, 0,],
+ [0, 0.01231521, 0, 0],
+ [0.15789001, -0.01500003, 0.57889998, -1.15799987],
+ [0, 0, 0, 0]]]
+ exp_reg_weights = [[1, 0, 0, 0],
+ [0, 1, 1, 0]]
+
+ (cls_targets_out, cls_weights_out, reg_targets_out,
+ reg_weights_out) = self.execute(graph_fn, [
+ anchor_means, groundtruth_boxlist1, groundtruth_boxlist2,
+ class_targets1, class_targets2
+ ])
+ self.assertAllClose(cls_targets_out, exp_cls_targets)
+ self.assertAllClose(cls_weights_out, exp_cls_weights)
+ self.assertAllClose(reg_targets_out, exp_reg_targets)
+ self.assertAllClose(reg_weights_out, exp_reg_weights)
+
+ def test_batch_assign_empty_groundtruth(self):
+
+ def graph_fn(anchor_means, groundtruth_box_corners, gt_class_targets):
+ groundtruth_boxlist = box_list.BoxList(groundtruth_box_corners)
+ gt_box_batch = [groundtruth_boxlist]
+ gt_class_targets_batch = [gt_class_targets]
+ anchors_boxlist = box_list.BoxList(anchor_means)
+
+ multiclass_target_assigner = self._get_target_assigner()
+ num_classes = 3
+ unmatched_class_label = tf.constant([1] + num_classes * [0], tf.float32)
+ (cls_targets, cls_weights, reg_targets, reg_weights,
+ _) = targetassigner.batch_assign_targets(
+ multiclass_target_assigner, anchors_boxlist,
+ gt_box_batch, gt_class_targets_batch, unmatched_class_label)
+ return (cls_targets, cls_weights, reg_targets, reg_weights)
+
+ groundtruth_box_corners = np.zeros((0, 4), dtype=np.float32)
+ anchor_means = np.array([[0, 0, .25, .25],
+ [0, .25, 1, 1]], dtype=np.float32)
+ exp_cls_targets = [[[1, 0, 0, 0],
+ [1, 0, 0, 0]]]
+ exp_cls_weights = [[[1, 1, 1, 1],
+ [1, 1, 1, 1]]]
+ exp_reg_targets = [[[0, 0, 0, 0],
+ [0, 0, 0, 0]]]
+ exp_reg_weights = [[0, 0]]
+ num_classes = 3
+ pad = 1
+ gt_class_targets = np.zeros((0, num_classes + pad), dtype=np.float32)
+
+ (cls_targets_out,
+ cls_weights_out, reg_targets_out, reg_weights_out) = self.execute(
+ graph_fn, [anchor_means, groundtruth_box_corners, gt_class_targets])
+ self.assertAllClose(cls_targets_out, exp_cls_targets)
+ self.assertAllClose(cls_weights_out, exp_cls_weights)
+ self.assertAllClose(reg_targets_out, exp_reg_targets)
+ self.assertAllClose(reg_weights_out, exp_reg_weights)
+
+
+class BatchTargetAssignConfidencesTest(test_case.TestCase):
+
+ def _get_target_assigner(self):
+ similarity_calc = region_similarity_calculator.IouSimilarity()
+ matcher = argmax_matcher.ArgMaxMatcher(matched_threshold=0.5,
+ unmatched_threshold=0.5)
+ box_coder = mean_stddev_box_coder.MeanStddevBoxCoder(stddev=0.1)
+ return targetassigner.TargetAssigner(similarity_calc, matcher, box_coder)
+
+ def test_batch_assign_empty_groundtruth(self):
+
+ def graph_fn(anchor_means, groundtruth_box_corners, gt_class_confidences):
+ groundtruth_boxlist = box_list.BoxList(groundtruth_box_corners)
+ gt_box_batch = [groundtruth_boxlist]
+ gt_class_confidences_batch = [gt_class_confidences]
+ anchors_boxlist = box_list.BoxList(anchor_means)
+
+ num_classes = 3
+ implicit_class_weight = 0.5
+ unmatched_class_label = tf.constant([1] + num_classes * [0], tf.float32)
+ multiclass_target_assigner = self._get_target_assigner()
+ (cls_targets, cls_weights, reg_targets, reg_weights,
+ _) = targetassigner.batch_assign_confidences(
+ multiclass_target_assigner,
+ anchors_boxlist,
+ gt_box_batch,
+ gt_class_confidences_batch,
+ unmatched_class_label=unmatched_class_label,
+ include_background_class=True,
+ implicit_class_weight=implicit_class_weight)
+ return (cls_targets, cls_weights, reg_targets, reg_weights)
+
+ groundtruth_box_corners = np.zeros((0, 4), dtype=np.float32)
+ anchor_means = np.array([[0, 0, .25, .25],
+ [0, .25, 1, 1]], dtype=np.float32)
+ num_classes = 3
+ pad = 1
+ gt_class_confidences = np.zeros((0, num_classes + pad), dtype=np.float32)
+
+ exp_cls_targets = [[[1, 0, 0, 0],
+ [1, 0, 0, 0]]]
+ exp_cls_weights = [[[0.5, 0.5, 0.5, 0.5],
+ [0.5, 0.5, 0.5, 0.5]]]
+ exp_reg_targets = [[[0, 0, 0, 0],
+ [0, 0, 0, 0]]]
+ exp_reg_weights = [[0, 0]]
+
+ (cls_targets_out,
+ cls_weights_out, reg_targets_out, reg_weights_out) = self.execute(
+ graph_fn,
+ [anchor_means, groundtruth_box_corners, gt_class_confidences])
+ self.assertAllClose(cls_targets_out, exp_cls_targets)
+ self.assertAllClose(cls_weights_out, exp_cls_weights)
+ self.assertAllClose(reg_targets_out, exp_reg_targets)
+ self.assertAllClose(reg_weights_out, exp_reg_weights)
+
+ def test_batch_assign_confidences_agnostic(self):
+
+ def graph_fn(anchor_means, groundtruth_boxlist1, groundtruth_boxlist2):
+ box_list1 = box_list.BoxList(groundtruth_boxlist1)
+ box_list2 = box_list.BoxList(groundtruth_boxlist2)
+ gt_box_batch = [box_list1, box_list2]
+ gt_class_confidences_batch = [None, None]
+ anchors_boxlist = box_list.BoxList(anchor_means)
+ agnostic_target_assigner = self._get_target_assigner()
+ implicit_class_weight = 0.5
+ (cls_targets, cls_weights, reg_targets, reg_weights,
+ _) = targetassigner.batch_assign_confidences(
+ agnostic_target_assigner,
+ anchors_boxlist,
+ gt_box_batch,
+ gt_class_confidences_batch,
+ include_background_class=False,
+ implicit_class_weight=implicit_class_weight)
+ return (cls_targets, cls_weights, reg_targets, reg_weights)
+
+ groundtruth_boxlist1 = np.array([[0., 0., 0.2, 0.2]], dtype=np.float32)
+ groundtruth_boxlist2 = np.array([[0, 0.25123152, 1, 1],
+ [0.015789, 0.0985, 0.55789, 0.3842]],
+ dtype=np.float32)
+ anchor_means = np.array([[0, 0, .25, .25],
+ [0, .25, 1, 1],
+ [0, .1, .5, .5],
+ [.75, .75, 1, 1]], dtype=np.float32)
+
+ exp_cls_targets = [[[1], [0], [0], [0]],
+ [[0], [1], [1], [0]]]
+ exp_cls_weights = [[[1], [0.5], [0.5], [0.5]],
+ [[0.5], [1], [1], [0.5]]]
+ exp_reg_targets = [[[0, 0, -0.5, -0.5],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0,],
+ [0, 0, 0, 0,],],
+ [[0, 0, 0, 0,],
+ [0, 0.01231521, 0, 0],
+ [0.15789001, -0.01500003, 0.57889998, -1.15799987],
+ [0, 0, 0, 0]]]
+ exp_reg_weights = [[1, 0, 0, 0],
+ [0, 1, 1, 0]]
+
+ (cls_targets_out,
+ cls_weights_out, reg_targets_out, reg_weights_out) = self.execute(
+ graph_fn, [anchor_means, groundtruth_boxlist1, groundtruth_boxlist2])
+ self.assertAllClose(cls_targets_out, exp_cls_targets)
+ self.assertAllClose(cls_weights_out, exp_cls_weights)
+ self.assertAllClose(reg_targets_out, exp_reg_targets)
+ self.assertAllClose(reg_weights_out, exp_reg_weights)
+
+ def test_batch_assign_confidences_multiclass(self):
+
+ def graph_fn(anchor_means, groundtruth_boxlist1, groundtruth_boxlist2,
+ class_targets1, class_targets2):
+ box_list1 = box_list.BoxList(groundtruth_boxlist1)
+ box_list2 = box_list.BoxList(groundtruth_boxlist2)
+ gt_box_batch = [box_list1, box_list2]
+ gt_class_confidences_batch = [class_targets1, class_targets2]
+ anchors_boxlist = box_list.BoxList(anchor_means)
+ multiclass_target_assigner = self._get_target_assigner()
+ num_classes = 3
+ implicit_class_weight = 0.5
+ unmatched_class_label = tf.constant([1] + num_classes * [0], tf.float32)
+ (cls_targets, cls_weights, reg_targets, reg_weights,
+ _) = targetassigner.batch_assign_confidences(
+ multiclass_target_assigner,
+ anchors_boxlist,
+ gt_box_batch,
+ gt_class_confidences_batch,
+ unmatched_class_label=unmatched_class_label,
+ include_background_class=True,
+ implicit_class_weight=implicit_class_weight)
+ return (cls_targets, cls_weights, reg_targets, reg_weights)
+
+ groundtruth_boxlist1 = np.array([[0., 0., 0.2, 0.2]], dtype=np.float32)
+ groundtruth_boxlist2 = np.array([[0, 0.25123152, 1, 1],
+ [0.015789, 0.0985, 0.55789, 0.3842]],
+ dtype=np.float32)
+ class_targets1 = np.array([[0, 1, 0, 0]], dtype=np.float32)
+ class_targets2 = np.array([[0, 0, 0, 1],
+ [0, 0, -1, 0]], dtype=np.float32)
+
+ anchor_means = np.array([[0, 0, .25, .25],
+ [0, .25, 1, 1],
+ [0, .1, .5, .5],
+ [.75, .75, 1, 1]], dtype=np.float32)
+ exp_cls_targets = [[[0, 1, 0, 0],
+ [1, 0, 0, 0],
+ [1, 0, 0, 0],
+ [1, 0, 0, 0]],
+ [[1, 0, 0, 0],
+ [0, 0, 0, 1],
+ [1, 0, 0, 0],
+ [1, 0, 0, 0]]]
+ exp_cls_weights = [[[1, 1, 0.5, 0.5],
+ [0.5, 0.5, 0.5, 0.5],
+ [0.5, 0.5, 0.5, 0.5],
+ [0.5, 0.5, 0.5, 0.5]],
+ [[0.5, 0.5, 0.5, 0.5],
+ [1, 0.5, 0.5, 1],
+ [0.5, 0.5, 1, 0.5],
+ [0.5, 0.5, 0.5, 0.5]]]
+ exp_reg_targets = [[[0, 0, -0.5, -0.5],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0,],
+ [0, 0, 0, 0,],],
+ [[0, 0, 0, 0,],
+ [0, 0.01231521, 0, 0],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0]]]
+ exp_reg_weights = [[1, 0, 0, 0],
+ [0, 1, 0, 0]]
+
+ (cls_targets_out, cls_weights_out, reg_targets_out,
+ reg_weights_out) = self.execute(graph_fn, [
+ anchor_means, groundtruth_boxlist1, groundtruth_boxlist2,
+ class_targets1, class_targets2
+ ])
+ self.assertAllClose(cls_targets_out, exp_cls_targets)
+ self.assertAllClose(cls_weights_out, exp_cls_weights)
+ self.assertAllClose(reg_targets_out, exp_reg_targets)
+ self.assertAllClose(reg_weights_out, exp_reg_weights)
+
+ def test_batch_assign_confidences_multiclass_with_padded_groundtruth(self):
+
+ def graph_fn(anchor_means, groundtruth_boxlist1, groundtruth_boxlist2,
+ class_targets1, class_targets2, groundtruth_weights1,
+ groundtruth_weights2):
+ box_list1 = box_list.BoxList(groundtruth_boxlist1)
+ box_list2 = box_list.BoxList(groundtruth_boxlist2)
+ gt_box_batch = [box_list1, box_list2]
+ gt_class_confidences_batch = [class_targets1, class_targets2]
+ gt_weights = [groundtruth_weights1, groundtruth_weights2]
+ anchors_boxlist = box_list.BoxList(anchor_means)
+ multiclass_target_assigner = self._get_target_assigner()
+ num_classes = 3
+ unmatched_class_label = tf.constant([1] + num_classes * [0], tf.float32)
+ implicit_class_weight = 0.5
+ (cls_targets, cls_weights, reg_targets, reg_weights,
+ _) = targetassigner.batch_assign_confidences(
+ multiclass_target_assigner,
+ anchors_boxlist,
+ gt_box_batch,
+ gt_class_confidences_batch,
+ gt_weights,
+ unmatched_class_label=unmatched_class_label,
+ include_background_class=True,
+ implicit_class_weight=implicit_class_weight)
+
+ return (cls_targets, cls_weights, reg_targets, reg_weights)
+
+ groundtruth_boxlist1 = np.array([[0., 0., 0.2, 0.2],
+ [0., 0., 0., 0.]], dtype=np.float32)
+ groundtruth_weights1 = np.array([1, 0], dtype=np.float32)
+ groundtruth_boxlist2 = np.array([[0, 0.25123152, 1, 1],
+ [0.015789, 0.0985, 0.55789, 0.3842],
+ [0, 0, 0, 0]],
+ dtype=np.float32)
+ groundtruth_weights2 = np.array([1, 1, 0], dtype=np.float32)
+ class_targets1 = np.array([[0, 1, 0, 0], [0, 0, 0, 0]], dtype=np.float32)
+ class_targets2 = np.array([[0, 0, 0, 1],
+ [0, 0, -1, 0],
+ [0, 0, 0, 0]], dtype=np.float32)
+ anchor_means = np.array([[0, 0, .25, .25],
+ [0, .25, 1, 1],
+ [0, .1, .5, .5],
+ [.75, .75, 1, 1]], dtype=np.float32)
+
+ exp_cls_targets = [[[0, 1, 0, 0],
+ [1, 0, 0, 0],
+ [1, 0, 0, 0],
+ [1, 0, 0, 0]],
+ [[1, 0, 0, 0],
+ [0, 0, 0, 1],
+ [1, 0, 0, 0],
+ [1, 0, 0, 0]]]
+ exp_cls_weights = [[[1, 1, 0.5, 0.5],
+ [0.5, 0.5, 0.5, 0.5],
+ [0.5, 0.5, 0.5, 0.5],
+ [0.5, 0.5, 0.5, 0.5]],
+ [[0.5, 0.5, 0.5, 0.5],
+ [1, 0.5, 0.5, 1],
+ [0.5, 0.5, 1, 0.5],
+ [0.5, 0.5, 0.5, 0.5]]]
+ exp_reg_targets = [[[0, 0, -0.5, -0.5],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0,],
+ [0, 0, 0, 0,],],
+ [[0, 0, 0, 0,],
+ [0, 0.01231521, 0, 0],
+ [0, 0, 0, 0],
+ [0, 0, 0, 0]]]
+ exp_reg_weights = [[1, 0, 0, 0],
+ [0, 1, 0, 0]]
+
+ (cls_targets_out, cls_weights_out, reg_targets_out,
+ reg_weights_out) = self.execute(graph_fn, [
+ anchor_means, groundtruth_boxlist1, groundtruth_boxlist2,
+ class_targets1, class_targets2, groundtruth_weights1,
+ groundtruth_weights2
+ ])
+ self.assertAllClose(cls_targets_out, exp_cls_targets)
+ self.assertAllClose(cls_weights_out, exp_cls_weights)
+ self.assertAllClose(reg_targets_out, exp_reg_targets)
+ self.assertAllClose(reg_weights_out, exp_reg_weights)
+
+ def test_batch_assign_confidences_multidimensional(self):
+
+ def graph_fn(anchor_means, groundtruth_boxlist1, groundtruth_boxlist2,
+ class_targets1, class_targets2):
+ box_list1 = box_list.BoxList(groundtruth_boxlist1)
+ box_list2 = box_list.BoxList(groundtruth_boxlist2)
+ gt_box_batch = [box_list1, box_list2]
+ gt_class_confidences_batch = [class_targets1, class_targets2]
+ anchors_boxlist = box_list.BoxList(anchor_means)
+ multiclass_target_assigner = self._get_target_assigner()
+ target_dimensions = (2, 3)
+ unmatched_class_label = tf.constant(np.zeros(target_dimensions),
+ tf.float32)
+ implicit_class_weight = 0.5
+ (cls_targets, cls_weights, reg_targets, reg_weights,
+ _) = targetassigner.batch_assign_confidences(
+ multiclass_target_assigner,
+ anchors_boxlist,
+ gt_box_batch,
+ gt_class_confidences_batch,
+ unmatched_class_label=unmatched_class_label,
+ include_background_class=True,
+ implicit_class_weight=implicit_class_weight)
+ return (cls_targets, cls_weights, reg_targets, reg_weights)
+
+ groundtruth_boxlist1 = np.array([[0., 0., 0.2, 0.2]], dtype=np.float32)
+ groundtruth_boxlist2 = np.array([[0, 0.25123152, 1, 1],
+ [0.015789, 0.0985, 0.55789, 0.3842]],
+ dtype=np.float32)
+ class_targets1 = np.array([[0, 1, 0, 0]], dtype=np.float32)
+ class_targets2 = np.array([[0, 0, 0, 1],
+ [0, 0, 1, 0]], dtype=np.float32)
+ class_targets1 = np.array([[[0, 1, 1],
+ [1, 1, 0]]], dtype=np.float32)
+ class_targets2 = np.array([[[0, 1, 1],
+ [1, 1, 0]],
+ [[0, 0, 1],
+ [0, 0, 1]]], dtype=np.float32)
+
+ anchor_means = np.array([[0, 0, .25, .25],
+ [0, .25, 1, 1],
+ [0, .1, .5, .5],
+ [.75, .75, 1, 1]], dtype=np.float32)
+
+ with self.assertRaises(ValueError):
+ _, _, _, _ = self.execute(graph_fn, [
+ anchor_means, groundtruth_boxlist1, groundtruth_boxlist2,
+ class_targets1, class_targets2
+ ])
+
+
+class CreateTargetAssignerTest(tf.test.TestCase):
+
+ def test_create_target_assigner(self):
+ """Tests that named constructor gives working target assigners.
+
+ TODO(rathodv): Make this test more general.
+ """
+ corners = [[0.0, 0.0, 1.0, 1.0]]
+ groundtruth = box_list.BoxList(tf.constant(corners))
+
+ priors = box_list.BoxList(tf.constant(corners))
+ multibox_ta = (targetassigner
+ .create_target_assigner('Multibox', stage='proposal'))
+ multibox_ta.assign(priors, groundtruth)
+ # No tests on output, as that may vary arbitrarily as new target assigners
+ # are added. As long as it is constructed correctly and runs without errors,
+ # tests on the individual assigners cover correctness of the assignments.
+
+ anchors = box_list.BoxList(tf.constant(corners))
+ faster_rcnn_proposals_ta = (targetassigner
+ .create_target_assigner('FasterRCNN',
+ stage='proposal'))
+ faster_rcnn_proposals_ta.assign(anchors, groundtruth)
+
+ fast_rcnn_ta = (targetassigner
+ .create_target_assigner('FastRCNN'))
+ fast_rcnn_ta.assign(anchors, groundtruth)
+
+ faster_rcnn_detection_ta = (targetassigner
+ .create_target_assigner('FasterRCNN',
+ stage='detection'))
+ faster_rcnn_detection_ta.assign(anchors, groundtruth)
+
+ with self.assertRaises(ValueError):
+ targetassigner.create_target_assigner('InvalidDetector',
+ stage='invalid_stage')
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/ava_label_map_v2.1.pbtxt" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/ava_label_map_v2.1.pbtxt"
new file mode 100644
index 00000000..5e2c4856
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/ava_label_map_v2.1.pbtxt"
@@ -0,0 +1,240 @@
+item {
+ name: "bend/bow (at the waist)"
+ id: 1
+}
+item {
+ name: "crouch/kneel"
+ id: 3
+}
+item {
+ name: "dance"
+ id: 4
+}
+item {
+ name: "fall down"
+ id: 5
+}
+item {
+ name: "get up"
+ id: 6
+}
+item {
+ name: "jump/leap"
+ id: 7
+}
+item {
+ name: "lie/sleep"
+ id: 8
+}
+item {
+ name: "martial art"
+ id: 9
+}
+item {
+ name: "run/jog"
+ id: 10
+}
+item {
+ name: "sit"
+ id: 11
+}
+item {
+ name: "stand"
+ id: 12
+}
+item {
+ name: "swim"
+ id: 13
+}
+item {
+ name: "walk"
+ id: 14
+}
+item {
+ name: "answer phone"
+ id: 15
+}
+item {
+ name: "carry/hold (an object)"
+ id: 17
+}
+item {
+ name: "climb (e.g., a mountain)"
+ id: 20
+}
+item {
+ name: "close (e.g., a door, a box)"
+ id: 22
+}
+item {
+ name: "cut"
+ id: 24
+}
+item {
+ name: "dress/put on clothing"
+ id: 26
+}
+item {
+ name: "drink"
+ id: 27
+}
+item {
+ name: "drive (e.g., a car, a truck)"
+ id: 28
+}
+item {
+ name: "eat"
+ id: 29
+}
+item {
+ name: "enter"
+ id: 30
+}
+item {
+ name: "hit (an object)"
+ id: 34
+}
+item {
+ name: "lift/pick up"
+ id: 36
+}
+item {
+ name: "listen (e.g., to music)"
+ id: 37
+}
+item {
+ name: "open (e.g., a window, a car door)"
+ id: 38
+}
+item {
+ name: "play musical instrument"
+ id: 41
+}
+item {
+ name: "point to (an object)"
+ id: 43
+}
+item {
+ name: "pull (an object)"
+ id: 45
+}
+item {
+ name: "push (an object)"
+ id: 46
+}
+item {
+ name: "put down"
+ id: 47
+}
+item {
+ name: "read"
+ id: 48
+}
+item {
+ name: "ride (e.g., a bike, a car, a horse)"
+ id: 49
+}
+item {
+ name: "sail boat"
+ id: 51
+}
+item {
+ name: "shoot"
+ id: 52
+}
+item {
+ name: "smoke"
+ id: 54
+}
+item {
+ name: "take a photo"
+ id: 56
+}
+item {
+ name: "text on/look at a cellphone"
+ id: 57
+}
+item {
+ name: "throw"
+ id: 58
+}
+item {
+ name: "touch (an object)"
+ id: 59
+}
+item {
+ name: "turn (e.g., a screwdriver)"
+ id: 60
+}
+item {
+ name: "watch (e.g., TV)"
+ id: 61
+}
+item {
+ name: "work on a computer"
+ id: 62
+}
+item {
+ name: "write"
+ id: 63
+}
+item {
+ name: "fight/hit (a person)"
+ id: 64
+}
+item {
+ name: "give/serve (an object) to (a person)"
+ id: 65
+}
+item {
+ name: "grab (a person)"
+ id: 66
+}
+item {
+ name: "hand clap"
+ id: 67
+}
+item {
+ name: "hand shake"
+ id: 68
+}
+item {
+ name: "hand wave"
+ id: 69
+}
+item {
+ name: "hug (a person)"
+ id: 70
+}
+item {
+ name: "kiss (a person)"
+ id: 72
+}
+item {
+ name: "lift (a person)"
+ id: 73
+}
+item {
+ name: "listen to (a person)"
+ id: 74
+}
+item {
+ name: "push (another person)"
+ id: 76
+}
+item {
+ name: "sing to (e.g., self, a person, a group)"
+ id: 77
+}
+item {
+ name: "take (an object) from (a person)"
+ id: 78
+}
+item {
+ name: "talk to (e.g., self, a person, a group)"
+ id: 79
+}
+item {
+ name: "watch (a person)"
+ id: 80
+}
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/face_label_map.pbtxt" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/face_label_map.pbtxt"
new file mode 100644
index 00000000..1c7355db
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/face_label_map.pbtxt"
@@ -0,0 +1,6 @@
+item {
+ name: "face"
+ id: 1
+ display_name: "face"
+}
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/fgvc_2854_classes_label_map.pbtxt" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/fgvc_2854_classes_label_map.pbtxt"
new file mode 100644
index 00000000..009797f0
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/fgvc_2854_classes_label_map.pbtxt"
@@ -0,0 +1,14270 @@
+item {
+ name: "147457"
+ id: 1
+ display_name: "Nicrophorus tomentosus"
+}
+item {
+ name: "81923"
+ id: 2
+ display_name: "Halyomorpha halys"
+}
+item {
+ name: "7"
+ id: 3
+ display_name: "Aramus guarauna"
+}
+item {
+ name: "201041"
+ id: 4
+ display_name: "Rupornis magnirostris"
+}
+item {
+ name: "65551"
+ id: 5
+ display_name: "Hyla eximia"
+}
+item {
+ name: "106516"
+ id: 6
+ display_name: "Nannothemis bella"
+}
+item {
+ name: "154287"
+ id: 7
+ display_name: "Acalymma vittatum"
+}
+item {
+ name: "32798"
+ id: 8
+ display_name: "Ramphotyphlops braminus"
+}
+item {
+ name: "8229"
+ id: 9
+ display_name: "Cyanocitta cristata"
+}
+item {
+ name: "73766"
+ id: 10
+ display_name: "Drymarchon melanurus"
+}
+item {
+ name: "409639"
+ id: 11
+ display_name: "Aenetus virescens"
+}
+item {
+ name: "8234"
+ id: 12
+ display_name: "Cyanocitta stelleri"
+}
+item {
+ name: "228593"
+ id: 13
+ display_name: "Polygrammate hebraeicum"
+}
+item {
+ name: "53"
+ id: 14
+ display_name: "Balearica regulorum"
+}
+item {
+ name: "57399"
+ id: 15
+ display_name: "Fistularia commersonii"
+}
+item {
+ name: "81979"
+ id: 16
+ display_name: "Syritta pipiens"
+}
+item {
+ name: "73788"
+ id: 17
+ display_name: "Plestiodon fasciatus"
+}
+item {
+ name: "73790"
+ id: 18
+ display_name: "Plestiodon inexpectatus"
+}
+item {
+ name: "16447"
+ id: 19
+ display_name: "Pyrocephalus rubinus"
+}
+item {
+ name: "73792"
+ id: 20
+ display_name: "Plestiodon laticeps"
+}
+item {
+ name: "49219"
+ id: 21
+ display_name: "Anguilla rostrata"
+}
+item {
+ name: "73797"
+ id: 22
+ display_name: "Plestiodon obsoletus"
+}
+item {
+ name: "73803"
+ id: 23
+ display_name: "Plestiodon tetragrammus"
+}
+item {
+ name: "122956"
+ id: 24
+ display_name: "Syntomoides imaon"
+}
+item {
+ name: "82003"
+ id: 25
+ display_name: "Arion ater"
+}
+item {
+ name: "32854"
+ id: 26
+ display_name: "Chamaeleo dilepis"
+}
+item {
+ name: "42341"
+ id: 27
+ display_name: "Tragelaphus scriptus"
+}
+item {
+ name: "82018"
+ id: 28
+ display_name: "Taeniopoda eques"
+}
+item {
+ name: "57443"
+ id: 29
+ display_name: "Libellula quadrimaculata"
+}
+item {
+ name: "4885"
+ id: 30
+ display_name: "Recurvirostra americana"
+}
+item {
+ name: "178403"
+ id: 31
+ display_name: "Phalaenophana pyramusalis"
+}
+item {
+ name: "135027"
+ id: 32
+ display_name: "Agalychnis dacnicolor"
+}
+item {
+ name: "49262"
+ id: 33
+ display_name: "Haemulon sciurus"
+}
+item {
+ name: "98417"
+ id: 34
+ display_name: "Cordulegaster diastatops"
+}
+item {
+ name: "57458"
+ id: 35
+ display_name: "Ladona julia"
+}
+item {
+ name: "115"
+ id: 36
+ display_name: "Ardeotis kori"
+}
+item {
+ name: "49269"
+ id: 37
+ display_name: "Diodon holocanthus"
+}
+item {
+ name: "57463"
+ id: 38
+ display_name: "Papilio canadensis"
+}
+item {
+ name: "82043"
+ id: 39
+ display_name: "Monochamus scutellatus"
+}
+item {
+ name: "147580"
+ id: 40
+ display_name: "Ceratotherium simum simum"
+}
+item {
+ name: "98430"
+ id: 41
+ display_name: "Cordulia shurtleffii"
+}
+item {
+ name: "8319"
+ id: 42
+ display_name: "Pica nuttalli"
+}
+item {
+ name: "43712"
+ id: 43
+ display_name: "Dasyprocta punctata"
+}
+item {
+ name: "8335"
+ id: 44
+ display_name: "Perisoreus canadensis"
+}
+item {
+ name: "508048"
+ id: 45
+ display_name: "Antigone canadensis"
+}
+item {
+ name: "49297"
+ id: 46
+ display_name: "Aetobatus narinari"
+}
+item {
+ name: "82069"
+ id: 47
+ display_name: "Phyciodes pulchella"
+}
+item {
+ name: "73149"
+ id: 48
+ display_name: "Parkesia noveboracensis"
+}
+item {
+ name: "180379"
+ id: 49
+ display_name: "Ardea herodias occidentalis"
+}
+item {
+ name: "73884"
+ id: 50
+ display_name: "Pantherophis emoryi"
+}
+item {
+ name: "106653"
+ id: 51
+ display_name: "Nehalennia irene"
+}
+item {
+ name: "73887"
+ id: 52
+ display_name: "Pantherophis guttatus"
+}
+item {
+ name: "73888"
+ id: 53
+ display_name: "Pantherophis obsoletus"
+}
+item {
+ name: "162"
+ id: 54
+ display_name: "Porzana carolina"
+}
+item {
+ name: "245925"
+ id: 55
+ display_name: "Siproeta stelenes biplagiata"
+}
+item {
+ name: "117302"
+ id: 56
+ display_name: "Physalia physalis"
+}
+item {
+ name: "57516"
+ id: 57
+ display_name: "Bombus terrestris"
+}
+item {
+ name: "204995"
+ id: 58
+ display_name: "Anas platyrhynchos diazi"
+}
+item {
+ name: "49348"
+ id: 59
+ display_name: "Hyles lineata"
+}
+item {
+ name: "82117"
+ id: 60
+ display_name: "Dolomedes tenebrosus"
+}
+item {
+ name: "114891"
+ id: 61
+ display_name: "Varanus salvator"
+}
+item {
+ name: "319695"
+ id: 62
+ display_name: "Epilachna mexicana"
+}
+item {
+ name: "41168"
+ id: 63
+ display_name: "Desmodus rotundus"
+}
+item {
+ name: "13688"
+ id: 64
+ display_name: "Motacilla cinerea"
+}
+item {
+ name: "57556"
+ id: 65
+ display_name: "Papio ursinus"
+}
+item {
+ name: "16598"
+ id: 66
+ display_name: "Empidonax difficilis"
+}
+item {
+ name: "16602"
+ id: 67
+ display_name: "Empidonax minimus"
+}
+item {
+ name: "16604"
+ id: 68
+ display_name: "Empidonax fulvifrons"
+}
+item {
+ name: "409181"
+ id: 69
+ display_name: "Trite planiceps"
+}
+item {
+ name: "82144"
+ id: 70
+ display_name: "Hemileuca eglanterina"
+}
+item {
+ name: "16611"
+ id: 71
+ display_name: "Empidonax traillii"
+}
+item {
+ name: "82153"
+ id: 72
+ display_name: "Ceratomia undulosa"
+}
+item {
+ name: "82155"
+ id: 73
+ display_name: "Bittacomorpha clavipes"
+}
+item {
+ name: "205036"
+ id: 74
+ display_name: "Xanthorhoe lacustrata"
+}
+item {
+ name: "16624"
+ id: 75
+ display_name: "Empidonax hammondii"
+}
+item {
+ name: "16625"
+ id: 76
+ display_name: "Empidonax occidentalis"
+}
+item {
+ name: "243"
+ id: 77
+ display_name: "Rallus limicola"
+}
+item {
+ name: "41"
+ id: 78
+ display_name: "Grus grus"
+}
+item {
+ name: "49402"
+ id: 79
+ display_name: "Abudefduf saxatilis"
+}
+item {
+ name: "58550"
+ id: 80
+ display_name: "Callophrys niphon"
+}
+item {
+ name: "205055"
+ id: 81
+ display_name: "Zopherus nodulosus haldemani"
+}
+item {
+ name: "82177"
+ id: 82
+ display_name: "Hermetia illucens"
+}
+item {
+ name: "9601"
+ id: 83
+ display_name: "Quiscalus major"
+}
+item {
+ name: "7101"
+ id: 84
+ display_name: "Branta leucopsis"
+}
+item {
+ name: "8470"
+ id: 85
+ display_name: "Cyanocorax yucatanicus"
+}
+item {
+ name: "74009"
+ id: 86
+ display_name: "Zamenis longissimus"
+}
+item {
+ name: "8474"
+ id: 87
+ display_name: "Cyanocorax yncas"
+}
+item {
+ name: "82204"
+ id: 88
+ display_name: "Nadata gibbosa"
+}
+item {
+ name: "123168"
+ id: 89
+ display_name: "Ensatina eschscholtzii xanthoptica"
+}
+item {
+ name: "82210"
+ id: 90
+ display_name: "Heterocampa biundata"
+}
+item {
+ name: "48284"
+ id: 91
+ display_name: "Oniscus asellus"
+}
+item {
+ name: "4146"
+ id: 92
+ display_name: "Oceanites oceanicus"
+}
+item {
+ name: "82225"
+ id: 93
+ display_name: "Lophocampa caryae"
+}
+item {
+ name: "9609"
+ id: 94
+ display_name: "Quiscalus niger"
+}
+item {
+ name: "65849"
+ id: 95
+ display_name: "Incilius nebulifer"
+}
+item {
+ name: "207583"
+ id: 96
+ display_name: "Miomantis caffra"
+}
+item {
+ name: "491839"
+ id: 97
+ display_name: "Pyrausta insequalis"
+}
+item {
+ name: "74048"
+ id: 98
+ display_name: "Alces americanus"
+}
+item {
+ name: "57665"
+ id: 99
+ display_name: "Cotinis mutabilis"
+}
+item {
+ name: "65860"
+ id: 100
+ display_name: "Incilius valliceps"
+}
+item {
+ name: "52911"
+ id: 101
+ display_name: "Dolichovespula maculata"
+}
+item {
+ name: "8524"
+ id: 102
+ display_name: "Psilorhinus morio"
+}
+item {
+ name: "49491"
+ id: 103
+ display_name: "Thalassoma bifasciatum"
+}
+item {
+ name: "41301"
+ id: 104
+ display_name: "Tadarida brasiliensis"
+}
+item {
+ name: "57687"
+ id: 105
+ display_name: "Xylocopa varipuncta"
+}
+item {
+ name: "57689"
+ id: 106
+ display_name: "Bombus vosnesenskii"
+}
+item {
+ name: "57690"
+ id: 107
+ display_name: "Bombus sonorus"
+}
+item {
+ name: "33118"
+ id: 108
+ display_name: "Basiliscus vittatus"
+}
+item {
+ name: "205151"
+ id: 109
+ display_name: "Phlogophora meticulosa"
+}
+item {
+ name: "49504"
+ id: 110
+ display_name: "Callinectes sapidus"
+}
+item {
+ name: "16737"
+ id: 111
+ display_name: "Megarynchus pitangua"
+}
+item {
+ name: "357"
+ id: 112
+ display_name: "Gallinula tenebrosa"
+}
+item {
+ name: "82278"
+ id: 113
+ display_name: "Ameiurus melas"
+}
+item {
+ name: "82279"
+ id: 114
+ display_name: "Automeris io"
+}
+item {
+ name: "505478"
+ id: 115
+ display_name: "Gallus gallus domesticus"
+}
+item {
+ name: "33135"
+ id: 116
+ display_name: "Crotaphytus collaris"
+}
+item {
+ name: "41328"
+ id: 117
+ display_name: "Lavia frons"
+}
+item {
+ name: "196979"
+ id: 118
+ display_name: "Anaxyrus boreas halophilus"
+}
+item {
+ name: "44902"
+ id: 119
+ display_name: "Sigmodon hispidus"
+}
+item {
+ name: "1428"
+ id: 120
+ display_name: "Numida meleagris"
+}
+item {
+ name: "119153"
+ id: 121
+ display_name: "Junco hyemalis caniceps"
+}
+item {
+ name: "49539"
+ id: 122
+ display_name: "Pisaster brevispinus"
+}
+item {
+ name: "328068"
+ id: 123
+ display_name: "Belocaulus angustipes"
+}
+item {
+ name: "120214"
+ id: 124
+ display_name: "Clostera albosigma"
+}
+item {
+ name: "16779"
+ id: 125
+ display_name: "Tyrannus vociferans"
+}
+item {
+ name: "16782"
+ id: 126
+ display_name: "Tyrannus tyrannus"
+}
+item {
+ name: "16783"
+ id: 127
+ display_name: "Tyrannus forficatus"
+}
+item {
+ name: "16784"
+ id: 128
+ display_name: "Tyrannus crassirostris"
+}
+item {
+ name: "57745"
+ id: 129
+ display_name: "Linckia laevigata"
+}
+item {
+ name: "205202"
+ id: 130
+ display_name: "Ecliptopera silaceata"
+}
+item {
+ name: "205203"
+ id: 131
+ display_name: "Dyspteris abortivaria"
+}
+item {
+ name: "16791"
+ id: 132
+ display_name: "Tyrannus verticalis"
+}
+item {
+ name: "16793"
+ id: 133
+ display_name: "Tyrannus savana"
+}
+item {
+ name: "205213"
+ id: 134
+ display_name: "Caripeta divisata"
+}
+item {
+ name: "49566"
+ id: 135
+ display_name: "Cicindela sexguttata"
+}
+item {
+ name: "491935"
+ id: 136
+ display_name: "Thylacodes squamigerus"
+}
+item {
+ name: "205216"
+ id: 137
+ display_name: "Cerma cerintha"
+}
+item {
+ name: "39665"
+ id: 138
+ display_name: "Caretta caretta"
+}
+item {
+ name: "147881"
+ id: 139
+ display_name: "Trichechus manatus latirostris"
+}
+item {
+ name: "28743"
+ id: 140
+ display_name: "Salvadora hexalepis"
+}
+item {
+ name: "205231"
+ id: 141
+ display_name: "Idaea dimidiata"
+}
+item {
+ name: "205233"
+ id: 142
+ display_name: "Iridopsis larvaria"
+}
+item {
+ name: "205235"
+ id: 143
+ display_name: "Leuconycta diphteroides"
+}
+item {
+ name: "436"
+ id: 144
+ display_name: "Gallirallus australis"
+}
+item {
+ name: "205238"
+ id: 145
+ display_name: "Metanema inatomaria"
+}
+item {
+ name: "49591"
+ id: 146
+ display_name: "Lepomis macrochirus"
+}
+item {
+ name: "229817"
+ id: 147
+ display_name: "Raphia frater"
+}
+item {
+ name: "49594"
+ id: 148
+ display_name: "Pomoxis nigromaculatus"
+}
+item {
+ name: "65979"
+ id: 149
+ display_name: "Lithobates catesbeianus"
+}
+item {
+ name: "49596"
+ id: 150
+ display_name: "Salvelinus fontinalis"
+}
+item {
+ name: "65982"
+ id: 151
+ display_name: "Lithobates clamitans"
+}
+item {
+ name: "8649"
+ id: 152
+ display_name: "Calocitta formosa"
+}
+item {
+ name: "8650"
+ id: 153
+ display_name: "Calocitta colliei"
+}
+item {
+ name: "82379"
+ id: 154
+ display_name: "Hemaris thysbe"
+}
+item {
+ name: "49614"
+ id: 155
+ display_name: "Lepomis gibbosus"
+}
+item {
+ name: "63028"
+ id: 156
+ display_name: "Hypercompe scribonia"
+}
+item {
+ name: "39672"
+ id: 157
+ display_name: "Eretmochelys imbricata"
+}
+item {
+ name: "66003"
+ id: 158
+ display_name: "Lithobates pipiens"
+}
+item {
+ name: "197077"
+ id: 159
+ display_name: "Vanessa kershawi"
+}
+item {
+ name: "473"
+ id: 160
+ display_name: "Fulica americana"
+}
+item {
+ name: "147930"
+ id: 161
+ display_name: "Rabidosa rabida"
+}
+item {
+ name: "147931"
+ id: 162
+ display_name: "Panoquina ocola"
+}
+item {
+ name: "66012"
+ id: 163
+ display_name: "Lithobates sylvaticus"
+}
+item {
+ name: "8671"
+ id: 164
+ display_name: "Pachyramphus aglaiae"
+}
+item {
+ name: "41440"
+ id: 165
+ display_name: "Phocoena phocoena"
+}
+item {
+ name: "27388"
+ id: 166
+ display_name: "Carphophis amoenus"
+}
+item {
+ name: "82418"
+ id: 167
+ display_name: "Cicindela punctulata"
+}
+item {
+ name: "25078"
+ id: 168
+ display_name: "Gastrophryne carolinensis"
+}
+item {
+ name: "82425"
+ id: 169
+ display_name: "Cicindela repanda"
+}
+item {
+ name: "143446"
+ id: 170
+ display_name: "Paonias myops"
+}
+item {
+ name: "41478"
+ id: 171
+ display_name: "Eschrichtius robustus"
+}
+item {
+ name: "5200"
+ id: 172
+ display_name: "Buteo lagopus"
+}
+item {
+ name: "148908"
+ id: 173
+ display_name: "Chrysodeixis includens"
+}
+item {
+ name: "41482"
+ id: 174
+ display_name: "Tursiops truncatus"
+}
+item {
+ name: "6914"
+ id: 175
+ display_name: "Cygnus atratus"
+}
+item {
+ name: "464301"
+ id: 176
+ display_name: "Philesturnus rufusater"
+}
+item {
+ name: "129226"
+ id: 177
+ display_name: "Chytolita morbidalis"
+}
+item {
+ name: "180759"
+ id: 178
+ display_name: "Aphonopelma iodius"
+}
+item {
+ name: "135318"
+ id: 179
+ display_name: "Apantesis phalerata"
+}
+item {
+ name: "49699"
+ id: 180
+ display_name: "Pisaster ochraceus"
+}
+item {
+ name: "49700"
+ id: 181
+ display_name: "Coluber lateralis lateralis"
+}
+item {
+ name: "61532"
+ id: 182
+ display_name: "Propylea quatuordecimpunctata"
+}
+item {
+ name: "4368"
+ id: 183
+ display_name: "Larus marinus"
+}
+item {
+ name: "41521"
+ id: 184
+ display_name: "Orcinus orca"
+}
+item {
+ name: "49716"
+ id: 185
+ display_name: "Paonias excaecata"
+}
+item {
+ name: "41526"
+ id: 186
+ display_name: "Delphinus delphis"
+}
+item {
+ name: "49723"
+ id: 187
+ display_name: "Pugettia producta"
+}
+item {
+ name: "16956"
+ id: 188
+ display_name: "Pitangus sulphuratus"
+}
+item {
+ name: "210607"
+ id: 189
+ display_name: "Diastictis fracturalis"
+}
+item {
+ name: "148030"
+ id: 190
+ display_name: "Equus asinus"
+}
+item {
+ name: "6924"
+ id: 191
+ display_name: "Anas rubripes"
+}
+item {
+ name: "30844"
+ id: 192
+ display_name: "Bothriechis schlegelii"
+}
+item {
+ name: "123628"
+ id: 193
+ display_name: "Argynnis paphia"
+}
+item {
+ name: "131676"
+ id: 194
+ display_name: "Anthus novaeseelandiae novaeseelandiae"
+}
+item {
+ name: "41566"
+ id: 195
+ display_name: "Megaptera novaeangliae"
+}
+item {
+ name: "49759"
+ id: 196
+ display_name: "Pyrgus oileus"
+}
+item {
+ name: "49761"
+ id: 197
+ display_name: "Anartia jatrophae"
+}
+item {
+ name: "49766"
+ id: 198
+ display_name: "Heliconius charithonia"
+}
+item {
+ name: "33383"
+ id: 199
+ display_name: "Coleonyx brevis"
+}
+item {
+ name: "33384"
+ id: 200
+ display_name: "Coleonyx elegans"
+}
+item {
+ name: "312764"
+ id: 201
+ display_name: "Euptoieta hegesia meridiania"
+}
+item {
+ name: "82538"
+ id: 202
+ display_name: "Vanessa gonerilla"
+}
+item {
+ name: "33387"
+ id: 203
+ display_name: "Coleonyx variegatus"
+}
+item {
+ name: "56082"
+ id: 204
+ display_name: "Aeshna canadensis"
+}
+item {
+ name: "17008"
+ id: 205
+ display_name: "Sayornis phoebe"
+}
+item {
+ name: "200808"
+ id: 206
+ display_name: "Sceloporus graciosus vandenburgianus"
+}
+item {
+ name: "17013"
+ id: 207
+ display_name: "Sayornis nigricans"
+}
+item {
+ name: "122381"
+ id: 208
+ display_name: "Cupido comyntas"
+}
+item {
+ name: "123516"
+ id: 209
+ display_name: "Mydas clavatus"
+}
+item {
+ name: "8834"
+ id: 210
+ display_name: "Tityra semifasciata"
+}
+item {
+ name: "146199"
+ id: 211
+ display_name: "Lampropeltis californiae"
+}
+item {
+ name: "17858"
+ id: 212
+ display_name: "Dryocopus lineatus"
+}
+item {
+ name: "334616"
+ id: 213
+ display_name: "Battus philenor hirsuta"
+}
+item {
+ name: "82582"
+ id: 214
+ display_name: "Labidomera clivicollis"
+}
+item {
+ name: "204699"
+ id: 215
+ display_name: "Pseudothyatira cymatophoroides"
+}
+item {
+ name: "41638"
+ id: 216
+ display_name: "Ursus americanus"
+}
+item {
+ name: "27420"
+ id: 217
+ display_name: "Desmognathus fuscus"
+}
+item {
+ name: "81584"
+ id: 218
+ display_name: "Anisota virginiensis"
+}
+item {
+ name: "49848"
+ id: 219
+ display_name: "Navanax inermis"
+}
+item {
+ name: "143476"
+ id: 220
+ display_name: "Calledapteryx dryopterata"
+}
+item {
+ name: "41663"
+ id: 221
+ display_name: "Procyon lotor"
+}
+item {
+ name: "49857"
+ id: 222
+ display_name: "Aplysia vaccaria"
+}
+item {
+ name: "41673"
+ id: 223
+ display_name: "Nasua narica"
+}
+item {
+ name: "41676"
+ id: 224
+ display_name: "Bassariscus astutus"
+}
+item {
+ name: "27427"
+ id: 225
+ display_name: "Aneides lugubris"
+}
+item {
+ name: "418530"
+ id: 226
+ display_name: "Porphyrio melanotus"
+}
+item {
+ name: "311419"
+ id: 227
+ display_name: "Neobernaya spadicea"
+}
+item {
+ name: "113502"
+ id: 228
+ display_name: "Sympetrum costiferum"
+}
+item {
+ name: "66278"
+ id: 229
+ display_name: "Oophaga pumilio"
+}
+item {
+ name: "6951"
+ id: 230
+ display_name: "Anas bahamensis"
+}
+item {
+ name: "213740"
+ id: 231
+ display_name: "Antaeotricha schlaegeri"
+}
+item {
+ name: "143485"
+ id: 232
+ display_name: "Xanthorhoe ferrugata"
+}
+item {
+ name: "120275"
+ id: 233
+ display_name: "Euphyia intermediata"
+}
+item {
+ name: "48035"
+ id: 234
+ display_name: "Strongylocentrotus purpuratus"
+}
+item {
+ name: "41728"
+ id: 235
+ display_name: "Mirounga angustirostris"
+}
+item {
+ name: "41733"
+ id: 236
+ display_name: "Halichoerus grypus"
+}
+item {
+ name: "41740"
+ id: 237
+ display_name: "Zalophus californianus"
+}
+item {
+ name: "118914"
+ id: 238
+ display_name: "Echinargus isola"
+}
+item {
+ name: "4936"
+ id: 239
+ display_name: "Egretta novaehollandiae"
+}
+item {
+ name: "131862"
+ id: 240
+ display_name: "Typocerus velutinus"
+}
+item {
+ name: "55401"
+ id: 241
+ display_name: "Pieris brassicae"
+}
+item {
+ name: "41752"
+ id: 242
+ display_name: "Arctocephalus forsteri"
+}
+item {
+ name: "41755"
+ id: 243
+ display_name: "Eumetopias jubatus"
+}
+item {
+ name: "123676"
+ id: 244
+ display_name: "Anas crecca carolinensis"
+}
+item {
+ name: "41763"
+ id: 245
+ display_name: "Phocarctos hookeri"
+}
+item {
+ name: "181034"
+ id: 246
+ display_name: "Cervus elaphus canadensis"
+}
+item {
+ name: "49964"
+ id: 247
+ display_name: "Ginglymostoma cirratum"
+}
+item {
+ name: "213809"
+ id: 248
+ display_name: "Anticarsia gemmatalis"
+}
+item {
+ name: "49972"
+ id: 249
+ display_name: "Battus philenor"
+}
+item {
+ name: "205623"
+ id: 250
+ display_name: "Microstylum morosum"
+}
+item {
+ name: "336697"
+ id: 251
+ display_name: "Arctia villica"
+}
+item {
+ name: "41789"
+ id: 252
+ display_name: "Taxidea taxus"
+}
+item {
+ name: "48724"
+ id: 253
+ display_name: "Phidiana hiltoni"
+}
+item {
+ name: "123713"
+ id: 254
+ display_name: "Neoscona oaxacensis"
+}
+item {
+ name: "33602"
+ id: 255
+ display_name: "Tarentola mauritanica"
+}
+item {
+ name: "846"
+ id: 256
+ display_name: "Alectoris chukar"
+}
+item {
+ name: "41808"
+ id: 257
+ display_name: "Mustela erminea"
+}
+item {
+ name: "50001"
+ id: 258
+ display_name: "Terrapene carolina carolina"
+}
+item {
+ name: "41810"
+ id: 259
+ display_name: "Mustela frenata"
+}
+item {
+ name: "82774"
+ id: 260
+ display_name: "Oryctes nasicornis"
+}
+item {
+ name: "41815"
+ id: 261
+ display_name: "Mustela nivalis"
+}
+item {
+ name: "4239"
+ id: 262
+ display_name: "Tachybaptus dominicus"
+}
+item {
+ name: "344926"
+ id: 263
+ display_name: "Artemisiospiza belli"
+}
+item {
+ name: "82792"
+ id: 264
+ display_name: "Celastrina neglecta"
+}
+item {
+ name: "41841"
+ id: 265
+ display_name: "Meles meles"
+}
+item {
+ name: "882"
+ id: 266
+ display_name: "Gallus gallus"
+}
+item {
+ name: "125758"
+ id: 267
+ display_name: "Mercenaria mercenaria"
+}
+item {
+ name: "9081"
+ id: 268
+ display_name: "Cardinalis sinuatus"
+}
+item {
+ name: "9083"
+ id: 269
+ display_name: "Cardinalis cardinalis"
+}
+item {
+ name: "9092"
+ id: 270
+ display_name: "Melospiza lincolnii"
+}
+item {
+ name: "4246"
+ id: 271
+ display_name: "Podilymbus podiceps"
+}
+item {
+ name: "9096"
+ id: 272
+ display_name: "Melospiza georgiana"
+}
+item {
+ name: "906"
+ id: 273
+ display_name: "Meleagris gallopavo"
+}
+item {
+ name: "50059"
+ id: 274
+ display_name: "Limacia cockerelli"
+}
+item {
+ name: "394124"
+ id: 275
+ display_name: "Orthodera novaezealandiae"
+}
+item {
+ name: "82832"
+ id: 276
+ display_name: "Cosmopepla lintneriana"
+}
+item {
+ name: "913"
+ id: 277
+ display_name: "Meleagris ocellata"
+}
+item {
+ name: "41877"
+ id: 278
+ display_name: "Conepatus leuconotus"
+}
+item {
+ name: "196419"
+ id: 279
+ display_name: "Euborellia annulipes"
+}
+item {
+ name: "50071"
+ id: 280
+ display_name: "Erynnis horatius"
+}
+item {
+ name: "41880"
+ id: 281
+ display_name: "Mephitis mephitis"
+}
+item {
+ name: "50073"
+ id: 282
+ display_name: "Dryas iulia"
+}
+item {
+ name: "173793"
+ id: 283
+ display_name: "Diphthera festiva"
+}
+item {
+ name: "41886"
+ id: 284
+ display_name: "Crocuta crocuta"
+}
+item {
+ name: "30683"
+ id: 285
+ display_name: "Agkistrodon contortrix contortrix"
+}
+item {
+ name: "931"
+ id: 286
+ display_name: "Lagopus lagopus"
+}
+item {
+ name: "41901"
+ id: 287
+ display_name: "Herpestes javanicus"
+}
+item {
+ name: "143517"
+ id: 288
+ display_name: "Biston betularia"
+}
+item {
+ name: "9139"
+ id: 289
+ display_name: "Spizella atrogularis"
+}
+item {
+ name: "8350"
+ id: 290
+ display_name: "Pyrrhocorax graculus"
+}
+item {
+ name: "9144"
+ id: 291
+ display_name: "Spizella breweri"
+}
+item {
+ name: "12936"
+ id: 292
+ display_name: "Sialia currucoides"
+}
+item {
+ name: "9152"
+ id: 293
+ display_name: "Spizella pusilla"
+}
+item {
+ name: "68229"
+ id: 294
+ display_name: "Tramea carolina"
+}
+item {
+ name: "6987"
+ id: 295
+ display_name: "Anas superciliosa"
+}
+item {
+ name: "9156"
+ id: 296
+ display_name: "Passerella iliaca"
+}
+item {
+ name: "202315"
+ id: 297
+ display_name: "Romaleon antennarium"
+}
+item {
+ name: "4257"
+ id: 298
+ display_name: "Phoenicopterus ruber"
+}
+item {
+ name: "25545"
+ id: 299
+ display_name: "Rana aurora"
+}
+item {
+ name: "15282"
+ id: 300
+ display_name: "Sylvia atricapilla"
+}
+item {
+ name: "103927"
+ id: 301
+ display_name: "Ladona deplanata"
+}
+item {
+ name: "17356"
+ id: 302
+ display_name: "Vireo bellii"
+}
+item {
+ name: "26765"
+ id: 303
+ display_name: "Ambystoma mavortium"
+}
+item {
+ name: "205777"
+ id: 304
+ display_name: "Plectrodera scalator"
+}
+item {
+ name: "17362"
+ id: 305
+ display_name: "Vireo plumbeus"
+}
+item {
+ name: "99283"
+ id: 306
+ display_name: "Didymops transversa"
+}
+item {
+ name: "17364"
+ id: 307
+ display_name: "Vireo philadelphicus"
+}
+item {
+ name: "17365"
+ id: 308
+ display_name: "Vireo flavifrons"
+}
+item {
+ name: "17366"
+ id: 309
+ display_name: "Vireo olivaceus"
+}
+item {
+ name: "9182"
+ id: 310
+ display_name: "Zonotrichia querula"
+}
+item {
+ name: "17375"
+ id: 311
+ display_name: "Vireo huttoni"
+}
+item {
+ name: "9184"
+ id: 312
+ display_name: "Zonotrichia albicollis"
+}
+item {
+ name: "9185"
+ id: 313
+ display_name: "Zonotrichia atricapilla"
+}
+item {
+ name: "50147"
+ id: 314
+ display_name: "Celithemis eponina"
+}
+item {
+ name: "47585"
+ id: 315
+ display_name: "Crassostrea virginica"
+}
+item {
+ name: "9195"
+ id: 316
+ display_name: "Emberiza citrinella"
+}
+item {
+ name: "41964"
+ id: 317
+ display_name: "Panthera leo"
+}
+item {
+ name: "6994"
+ id: 318
+ display_name: "Bucephala islandica"
+}
+item {
+ name: "52506"
+ id: 319
+ display_name: "Adalia bipunctata"
+}
+item {
+ name: "9201"
+ id: 320
+ display_name: "Emberiza schoeniclus"
+}
+item {
+ name: "17394"
+ id: 321
+ display_name: "Vireo gilvus"
+}
+item {
+ name: "25591"
+ id: 322
+ display_name: "Rana temporaria"
+}
+item {
+ name: "41976"
+ id: 323
+ display_name: "Lynx rufus"
+}
+item {
+ name: "214015"
+ id: 324
+ display_name: "Apoda y-inversum"
+}
+item {
+ name: "50176"
+ id: 325
+ display_name: "Enallagma vesperum"
+}
+item {
+ name: "99331"
+ id: 326
+ display_name: "Diplacodes trivialis"
+}
+item {
+ name: "50181"
+ id: 327
+ display_name: "Loxosceles reclusa"
+}
+item {
+ name: "74758"
+ id: 328
+ display_name: "Neovison vison"
+}
+item {
+ name: "123912"
+ id: 329
+ display_name: "Charaxes jasius"
+}
+item {
+ name: "41997"
+ id: 330
+ display_name: "Leopardus pardalis"
+}
+item {
+ name: "123920"
+ id: 331
+ display_name: "Dorcus parallelipipedus"
+}
+item {
+ name: "132334"
+ id: 332
+ display_name: "Urbanus procne"
+}
+item {
+ name: "123922"
+ id: 333
+ display_name: "Abudefduf sordidus"
+}
+item {
+ name: "9236"
+ id: 334
+ display_name: "Serinus serinus"
+}
+item {
+ name: "42007"
+ id: 335
+ display_name: "Puma concolor"
+}
+item {
+ name: "9240"
+ id: 336
+ display_name: "Serinus mozambicus"
+}
+item {
+ name: "148506"
+ id: 337
+ display_name: "Melanis pixe"
+}
+item {
+ name: "58399"
+ id: 338
+ display_name: "Urosalpinx cinerea"
+}
+item {
+ name: "312353"
+ id: 339
+ display_name: "Leptophobia aripa elodia"
+}
+item {
+ name: "148517"
+ id: 340
+ display_name: "Heliopetes laviana"
+}
+item {
+ name: "73905"
+ id: 341
+ display_name: "Phrynosoma cornutum"
+}
+item {
+ name: "39772"
+ id: 342
+ display_name: "Chrysemys picta marginata"
+}
+item {
+ name: "25646"
+ id: 343
+ display_name: "Rana boylii"
+}
+item {
+ name: "62984"
+ id: 344
+ display_name: "Aedes albopictus"
+}
+item {
+ name: "123959"
+ id: 345
+ display_name: "Ensatina eschscholtzii oregonensis"
+}
+item {
+ name: "1081"
+ id: 346
+ display_name: "Lophura leucomelanos"
+}
+item {
+ name: "39775"
+ id: 347
+ display_name: "Chrysemys picta picta"
+}
+item {
+ name: "42046"
+ id: 348
+ display_name: "Canis mesomelas"
+}
+item {
+ name: "42048"
+ id: 349
+ display_name: "Canis lupus"
+}
+item {
+ name: "42051"
+ id: 350
+ display_name: "Canis latrans"
+}
+item {
+ name: "9284"
+ id: 351
+ display_name: "Euphonia elegantissima"
+}
+item {
+ name: "25669"
+ id: 352
+ display_name: "Rana dalmatina"
+}
+item {
+ name: "9287"
+ id: 353
+ display_name: "Euphonia hirundinacea"
+}
+item {
+ name: "9291"
+ id: 354
+ display_name: "Euphonia affinis"
+}
+item {
+ name: "222284"
+ id: 355
+ display_name: "Iridopsis defectaria"
+}
+item {
+ name: "74832"
+ id: 356
+ display_name: "Papio anubis"
+}
+item {
+ name: "148563"
+ id: 357
+ display_name: "Myscelia ethusa"
+}
+item {
+ name: "42069"
+ id: 358
+ display_name: "Vulpes vulpes"
+}
+item {
+ name: "9743"
+ id: 359
+ display_name: "Agelaius tricolor"
+}
+item {
+ name: "42076"
+ id: 360
+ display_name: "Urocyon cinereoargenteus"
+}
+item {
+ name: "509025"
+ id: 361
+ display_name: "Momotus lessonii"
+}
+item {
+ name: "17506"
+ id: 362
+ display_name: "Zosterops japonicus"
+}
+item {
+ name: "4283"
+ id: 363
+ display_name: "Phalacrocorax pelagicus"
+}
+item {
+ name: "58469"
+ id: 364
+ display_name: "Thorybes pylades"
+}
+item {
+ name: "9319"
+ id: 365
+ display_name: "Icterus cucullatus"
+}
+item {
+ name: "58473"
+ id: 366
+ display_name: "Erynnis icelus"
+}
+item {
+ name: "58475"
+ id: 367
+ display_name: "Erynnis juvenalis"
+}
+item {
+ name: "42093"
+ id: 368
+ display_name: "Lycaon pictus"
+}
+item {
+ name: "58478"
+ id: 369
+ display_name: "Erynnis baptisiae"
+}
+item {
+ name: "9328"
+ id: 370
+ display_name: "Icterus graduacauda"
+}
+item {
+ name: "58481"
+ id: 371
+ display_name: "Ancyloxypha numitor"
+}
+item {
+ name: "132210"
+ id: 372
+ display_name: "Deloyala guttata"
+}
+item {
+ name: "58484"
+ id: 373
+ display_name: "Thymelicus lineola"
+}
+item {
+ name: "13701"
+ id: 374
+ display_name: "Motacilla aguimp"
+}
+item {
+ name: "410743"
+ id: 375
+ display_name: "Anas superciliosa \303\227 platyrhynchos"
+}
+item {
+ name: "9336"
+ id: 376
+ display_name: "Icterus pustulatus"
+}
+item {
+ name: "9339"
+ id: 377
+ display_name: "Icterus gularis"
+}
+item {
+ name: "124031"
+ id: 378
+ display_name: "Agrius convolvuli"
+}
+item {
+ name: "42113"
+ id: 379
+ display_name: "Pecari tajacu"
+}
+item {
+ name: "132227"
+ id: 380
+ display_name: "Lethe appalachia"
+}
+item {
+ name: "113516"
+ id: 381
+ display_name: "Sympetrum madidum"
+}
+item {
+ name: "58509"
+ id: 382
+ display_name: "Anatrytone logan"
+}
+item {
+ name: "83086"
+ id: 383
+ display_name: "Eurytides marcellus"
+}
+item {
+ name: "58511"
+ id: 384
+ display_name: "Poanes viator"
+}
+item {
+ name: "83090"
+ id: 385
+ display_name: "Epimecis hortaria"
+}
+item {
+ name: "115859"
+ id: 386
+ display_name: "Micrurus tener tener"
+}
+item {
+ name: "129902"
+ id: 387
+ display_name: "Camponotus pennsylvanicus"
+}
+item {
+ name: "42134"
+ id: 388
+ display_name: "Sus scrofa"
+}
+item {
+ name: "58519"
+ id: 389
+ display_name: "Pompeius verna"
+}
+item {
+ name: "205977"
+ id: 390
+ display_name: "Coccinella undecimpunctata"
+}
+item {
+ name: "58523"
+ id: 391
+ display_name: "Papilio polyxenes"
+}
+item {
+ name: "58525"
+ id: 392
+ display_name: "Papilio troilus"
+}
+item {
+ name: "410783"
+ id: 393
+ display_name: "Hypoblemum albovittatum"
+}
+item {
+ name: "9376"
+ id: 394
+ display_name: "Carduelis cannabina"
+}
+item {
+ name: "58531"
+ id: 395
+ display_name: "Colias philodice"
+}
+item {
+ name: "50340"
+ id: 396
+ display_name: "Hylephila phyleus"
+}
+item {
+ name: "42149"
+ id: 397
+ display_name: "Hippopotamus amphibius"
+}
+item {
+ name: "50342"
+ id: 398
+ display_name: "Erythrodiplax umbrata"
+}
+item {
+ name: "12883"
+ id: 399
+ display_name: "Catharus minimus"
+}
+item {
+ name: "28557"
+ id: 400
+ display_name: "Storeria occipitomaculata"
+}
+item {
+ name: "199"
+ id: 401
+ display_name: "Amaurornis phoenicurus"
+}
+item {
+ name: "58541"
+ id: 402
+ display_name: "Satyrium liparops"
+}
+item {
+ name: "58543"
+ id: 403
+ display_name: "Callophrys augustinus"
+}
+item {
+ name: "42161"
+ id: 404
+ display_name: "Dama dama"
+}
+item {
+ name: "61508"
+ id: 405
+ display_name: "Ischnura elegans"
+}
+item {
+ name: "1204"
+ id: 406
+ display_name: "Pavo cristatus"
+}
+item {
+ name: "42166"
+ id: 407
+ display_name: "Axis axis"
+}
+item {
+ name: "146797"
+ id: 408
+ display_name: "Platynota idaeusalis"
+}
+item {
+ name: "58556"
+ id: 409
+ display_name: "Celastrina ladon"
+}
+item {
+ name: "367477"
+ id: 410
+ display_name: "Rallus crepitans"
+}
+item {
+ name: "58561"
+ id: 411
+ display_name: "Libytheana carinenta"
+}
+item {
+ name: "58563"
+ id: 412
+ display_name: "Speyeria aphrodite"
+}
+item {
+ name: "58564"
+ id: 413
+ display_name: "Boloria bellona"
+}
+item {
+ name: "413489"
+ id: 414
+ display_name: "Nestor meridionalis septentrionalis"
+}
+item {
+ name: "42184"
+ id: 415
+ display_name: "Capreolus capreolus"
+}
+item {
+ name: "9419"
+ id: 416
+ display_name: "Pipilo chlorurus"
+}
+item {
+ name: "9420"
+ id: 417
+ display_name: "Pipilo maculatus"
+}
+item {
+ name: "9424"
+ id: 418
+ display_name: "Pipilo erythrophthalmus"
+}
+item {
+ name: "99539"
+ id: 419
+ display_name: "Dorocordulia libera"
+}
+item {
+ name: "58580"
+ id: 420
+ display_name: "Polygonia progne"
+}
+item {
+ name: "58581"
+ id: 421
+ display_name: "Nymphalis vaualbum"
+}
+item {
+ name: "42199"
+ id: 422
+ display_name: "Rangifer tarandus"
+}
+item {
+ name: "58586"
+ id: 423
+ display_name: "Limenitis archippus"
+}
+item {
+ name: "58587"
+ id: 424
+ display_name: "Asterocampa clyton"
+}
+item {
+ name: "42206"
+ id: 425
+ display_name: "Cervus elaphus"
+}
+item {
+ name: "312543"
+ id: 426
+ display_name: "Anartia jatrophae luteipicta"
+}
+item {
+ name: "204094"
+ id: 427
+ display_name: "Cairina moschata domestica"
+}
+item {
+ name: "4304"
+ id: 428
+ display_name: "Phalacrocorax varius"
+}
+item {
+ name: "42210"
+ id: 429
+ display_name: "Cervus nippon"
+}
+item {
+ name: "17638"
+ id: 430
+ display_name: "Picoides dorsalis"
+}
+item {
+ name: "132330"
+ id: 431
+ display_name: "Chlosyne janais"
+}
+item {
+ name: "58603"
+ id: 432
+ display_name: "Megisto cymela"
+}
+item {
+ name: "42220"
+ id: 433
+ display_name: "Odocoileus hemionus"
+}
+item {
+ name: "17645"
+ id: 434
+ display_name: "Picoides nuttallii"
+}
+item {
+ name: "58606"
+ id: 435
+ display_name: "Cercyonis pegala"
+}
+item {
+ name: "42223"
+ id: 436
+ display_name: "Odocoileus virginianus"
+}
+item {
+ name: "58609"
+ id: 437
+ display_name: "Lepisosteus osseus"
+}
+item {
+ name: "17650"
+ id: 438
+ display_name: "Picoides scalaris"
+}
+item {
+ name: "132339"
+ id: 439
+ display_name: "Anthanassa texana"
+}
+item {
+ name: "58612"
+ id: 440
+ display_name: "Carassius auratus"
+}
+item {
+ name: "1406"
+ id: 441
+ display_name: "Callipepla gambelii"
+}
+item {
+ name: "9462"
+ id: 442
+ display_name: "Pyrrhula pyrrhula"
+}
+item {
+ name: "4308"
+ id: 443
+ display_name: "Phalacrocorax brasilianus"
+}
+item {
+ name: "17660"
+ id: 444
+ display_name: "Picoides pubescens"
+}
+item {
+ name: "1280"
+ id: 445
+ display_name: "Colinus virginianus"
+}
+item {
+ name: "129920"
+ id: 446
+ display_name: "Calliostoma ligatum"
+}
+item {
+ name: "58627"
+ id: 447
+ display_name: "Perca flavescens"
+}
+item {
+ name: "148742"
+ id: 448
+ display_name: "Hamadryas februa"
+}
+item {
+ name: "39809"
+ id: 449
+ display_name: "Terrapene ornata ornata"
+}
+item {
+ name: "115979"
+ id: 450
+ display_name: "Plestiodon skiltonianus skiltonianus"
+}
+item {
+ name: "9484"
+ id: 451
+ display_name: "Sporophila torqueola"
+}
+item {
+ name: "17678"
+ id: 452
+ display_name: "Picoides villosus"
+}
+item {
+ name: "3862"
+ id: 453
+ display_name: "Calidris pusilla"
+}
+item {
+ name: "70421"
+ id: 454
+ display_name: "Acris blanchardi"
+}
+item {
+ name: "124183"
+ id: 455
+ display_name: "Phlogophora periculosa"
+}
+item {
+ name: "124184"
+ id: 456
+ display_name: "Plodia interpunctella"
+}
+item {
+ name: "99609"
+ id: 457
+ display_name: "Dromogomphus spinosus"
+}
+item {
+ name: "99610"
+ id: 458
+ display_name: "Dromogomphus spoliatus"
+}
+item {
+ name: "17694"
+ id: 459
+ display_name: "Picoides arcticus"
+}
+item {
+ name: "113521"
+ id: 460
+ display_name: "Sympetrum pallipes"
+}
+item {
+ name: "320801"
+ id: 461
+ display_name: "Aspidoscelis tesselata"
+}
+item {
+ name: "7047"
+ id: 462
+ display_name: "Aythya marila"
+}
+item {
+ name: "4317"
+ id: 463
+ display_name: "Phaethon aethereus"
+}
+item {
+ name: "81606"
+ id: 464
+ display_name: "Littorina littorea"
+}
+item {
+ name: "99891"
+ id: 465
+ display_name: "Enallagma aspersum"
+}
+item {
+ name: "9528"
+ id: 466
+ display_name: "Sturnella magna"
+}
+item {
+ name: "99641"
+ id: 467
+ display_name: "Dythemis fugax"
+}
+item {
+ name: "99644"
+ id: 468
+ display_name: "Dythemis nigrescens"
+}
+item {
+ name: "39818"
+ id: 469
+ display_name: "Terrapene carolina triunguis"
+}
+item {
+ name: "99647"
+ id: 470
+ display_name: "Dythemis velox"
+}
+item {
+ name: "148800"
+ id: 471
+ display_name: "Chioides albofasciatus"
+}
+item {
+ name: "19339"
+ id: 472
+ display_name: "Melopsittacus undulatus"
+}
+item {
+ name: "47509"
+ id: 473
+ display_name: "Diaulula sandiegensis"
+}
+item {
+ name: "148810"
+ id: 474
+ display_name: "Anaea aidea"
+}
+item {
+ name: "123070"
+ id: 475
+ display_name: "Capra hircus"
+}
+item {
+ name: "7054"
+ id: 476
+ display_name: "Aythya affinis"
+}
+item {
+ name: "99897"
+ id: 477
+ display_name: "Enallagma civile"
+}
+item {
+ name: "42328"
+ id: 478
+ display_name: "Kobus ellipsiprymnus"
+}
+item {
+ name: "48328"
+ id: 479
+ display_name: "Aurelia aurita"
+}
+item {
+ name: "132445"
+ id: 480
+ display_name: "Conchylodes ovulalis"
+}
+item {
+ name: "215271"
+ id: 481
+ display_name: "Bleptina caradrinalis"
+}
+item {
+ name: "83297"
+ id: 482
+ display_name: "Scarus rubroviolaceus"
+}
+item {
+ name: "42347"
+ id: 483
+ display_name: "Rupicapra rupicapra"
+}
+item {
+ name: "7058"
+ id: 484
+ display_name: "Aythya novaeseelandiae"
+}
+item {
+ name: "52457"
+ id: 485
+ display_name: "Chaetodon auriga"
+}
+item {
+ name: "1392"
+ id: 486
+ display_name: "Cyrtonyx montezumae"
+}
+item {
+ name: "4328"
+ id: 487
+ display_name: "Pelecanus occidentalis"
+}
+item {
+ name: "7647"
+ id: 488
+ display_name: "Cinclus cinclus"
+}
+item {
+ name: "148856"
+ id: 489
+ display_name: "Anteos clorinde"
+}
+item {
+ name: "7060"
+ id: 490
+ display_name: "Chen rossii"
+}
+item {
+ name: "58750"
+ id: 491
+ display_name: "Nomophila nearctica"
+}
+item {
+ name: "1409"
+ id: 492
+ display_name: "Callipepla californica"
+}
+item {
+ name: "9602"
+ id: 493
+ display_name: "Quiscalus quiscula"
+}
+item {
+ name: "296326"
+ id: 494
+ display_name: "Oncopeltus sexmaculatus"
+}
+item {
+ name: "9607"
+ id: 495
+ display_name: "Quiscalus mexicanus"
+}
+item {
+ name: "319724"
+ id: 496
+ display_name: "Euphoria kernii"
+}
+item {
+ name: "1419"
+ id: 497
+ display_name: "Callipepla squamata"
+}
+item {
+ name: "148883"
+ id: 498
+ display_name: "Eantis tamenund"
+}
+item {
+ name: "42391"
+ id: 499
+ display_name: "Ovis canadensis"
+}
+item {
+ name: "107937"
+ id: 500
+ display_name: "Orthemis discolor"
+}
+item {
+ name: "42405"
+ id: 501
+ display_name: "Syncerus caffer"
+}
+item {
+ name: "42408"
+ id: 502
+ display_name: "Bison bison"
+}
+item {
+ name: "116137"
+ id: 503
+ display_name: "Sceloporus cowlesi"
+}
+item {
+ name: "326296"
+ id: 504
+ display_name: "Bufo bufo"
+}
+item {
+ name: "148907"
+ id: 505
+ display_name: "Cydia latiferreana"
+}
+item {
+ name: "42414"
+ id: 506
+ display_name: "Oreamnos americanus"
+}
+item {
+ name: "116143"
+ id: 507
+ display_name: "Sceloporus tristichus"
+}
+item {
+ name: "99912"
+ id: 508
+ display_name: "Enallagma geminatum"
+}
+item {
+ name: "226889"
+ id: 509
+ display_name: "Pangrapta decoralis"
+}
+item {
+ name: "42429"
+ id: 510
+ display_name: "Antilocapra americana"
+}
+item {
+ name: "17855"
+ id: 511
+ display_name: "Dryocopus pileatus"
+}
+item {
+ name: "107974"
+ id: 512
+ display_name: "Orthetrum sabina"
+}
+item {
+ name: "56225"
+ id: 513
+ display_name: "Polygonia c-album"
+}
+item {
+ name: "67016"
+ id: 514
+ display_name: "Rana draytonii"
+}
+item {
+ name: "132553"
+ id: 515
+ display_name: "Strymon istapa"
+}
+item {
+ name: "73155"
+ id: 516
+ display_name: "Passerina caerulea"
+}
+item {
+ name: "26074"
+ id: 517
+ display_name: "Crocodylus moreletii"
+}
+item {
+ name: "171903"
+ id: 518
+ display_name: "Oligyra orbiculata"
+}
+item {
+ name: "26085"
+ id: 519
+ display_name: "Crocodylus acutus"
+}
+item {
+ name: "143613"
+ id: 520
+ display_name: "Homophoberia apicosa"
+}
+item {
+ name: "5715"
+ id: 521
+ display_name: "Amazilia beryllina"
+}
+item {
+ name: "9721"
+ id: 522
+ display_name: "Geothlypis trichas"
+}
+item {
+ name: "154446"
+ id: 523
+ display_name: "Lambdina fiscellaria"
+}
+item {
+ name: "236841"
+ id: 524
+ display_name: "Lichanura orcutti"
+}
+item {
+ name: "20737"
+ id: 525
+ display_name: "Trogon melanocephalus"
+}
+item {
+ name: "124431"
+ id: 526
+ display_name: "Cycloneda sanguinea"
+}
+item {
+ name: "124432"
+ id: 527
+ display_name: "Deroceras reticulatum"
+}
+item {
+ name: "39566"
+ id: 528
+ display_name: "Apalone ferox"
+}
+item {
+ name: "149017"
+ id: 529
+ display_name: "Chlorochlamys chloroleucaria"
+}
+item {
+ name: "15281"
+ id: 530
+ display_name: "Sylvia communis"
+}
+item {
+ name: "312873"
+ id: 531
+ display_name: "Anartia fatima fatima"
+}
+item {
+ name: "9771"
+ id: 532
+ display_name: "Pinicola enucleator"
+}
+item {
+ name: "39858"
+ id: 533
+ display_name: "Graptemys geographica"
+}
+item {
+ name: "26159"
+ id: 534
+ display_name: "Alligator mississippiensis"
+}
+item {
+ name: "304690"
+ id: 535
+ display_name: "Naupactus cervinus"
+}
+item {
+ name: "124467"
+ id: 536
+ display_name: "Pseudosphinx tetrio"
+}
+item {
+ name: "99892"
+ id: 537
+ display_name: "Enallagma basidens"
+}
+item {
+ name: "99895"
+ id: 538
+ display_name: "Enallagma carunculatum"
+}
+item {
+ name: "67129"
+ id: 539
+ display_name: "Rhinella marina"
+}
+item {
+ name: "83515"
+ id: 540
+ display_name: "Oxybelis aeneus"
+}
+item {
+ name: "81681"
+ id: 541
+ display_name: "Campaea perlata"
+}
+item {
+ name: "99901"
+ id: 542
+ display_name: "Enallagma cyathigerum"
+}
+item {
+ name: "99911"
+ id: 543
+ display_name: "Enallagma exsulans"
+}
+item {
+ name: "9800"
+ id: 544
+ display_name: "Coccothraustes vespertinus"
+}
+item {
+ name: "9801"
+ id: 545
+ display_name: "Coccothraustes coccothraustes"
+}
+item {
+ name: "154551"
+ id: 546
+ display_name: "Leptoglossus zonatus"
+}
+item {
+ name: "9807"
+ id: 547
+ display_name: "Vermivora chrysoptera"
+}
+item {
+ name: "61157"
+ id: 548
+ display_name: "Trichodes ornatus"
+}
+item {
+ name: "99924"
+ id: 549
+ display_name: "Enallagma signatum"
+}
+item {
+ name: "1626"
+ id: 550
+ display_name: "Opisthocomus hoazin"
+}
+item {
+ name: "132704"
+ id: 551
+ display_name: "Setophaga coronata coronata"
+}
+item {
+ name: "119056"
+ id: 552
+ display_name: "Centruroides vittatus"
+}
+item {
+ name: "50786"
+ id: 553
+ display_name: "Vanessa annabella"
+}
+item {
+ name: "60347"
+ id: 554
+ display_name: "Pituophis catenifer sayi"
+}
+item {
+ name: "9833"
+ id: 555
+ display_name: "Diglossa baritula"
+}
+item {
+ name: "132718"
+ id: 556
+ display_name: "Scathophaga stercoraria"
+}
+item {
+ name: "132719"
+ id: 557
+ display_name: "Calopteron reticulatum"
+}
+item {
+ name: "116340"
+ id: 558
+ display_name: "Dreissena polymorpha"
+}
+item {
+ name: "134078"
+ id: 559
+ display_name: "Scoliopteryx libatrix"
+}
+item {
+ name: "9850"
+ id: 560
+ display_name: "Saltator coerulescens"
+}
+item {
+ name: "117695"
+ id: 561
+ display_name: "Cucumaria miniata"
+}
+item {
+ name: "9854"
+ id: 562
+ display_name: "Saltator atriceps"
+}
+item {
+ name: "132736"
+ id: 563
+ display_name: "Urola nivalis"
+}
+item {
+ name: "34435"
+ id: 564
+ display_name: "Hemidactylus turcicus"
+}
+item {
+ name: "9864"
+ id: 565
+ display_name: "Sicalis flaveola"
+}
+item {
+ name: "7106"
+ id: 566
+ display_name: "Aix galericulata"
+}
+item {
+ name: "485010"
+ id: 567
+ display_name: "Chinavia hilaris"
+}
+item {
+ name: "132764"
+ id: 568
+ display_name: "Junco hyemalis hyemalis"
+}
+item {
+ name: "367558"
+ id: 569
+ display_name: "Eupsittula canicularis"
+}
+item {
+ name: "370351"
+ id: 570
+ display_name: "Microcarbo melanoleucos"
+}
+item {
+ name: "50867"
+ id: 571
+ display_name: "Argiope bruennichi"
+}
+item {
+ name: "67252"
+ id: 572
+ display_name: "Trachycephalus typhonius"
+}
+item {
+ name: "132789"
+ id: 573
+ display_name: "Clepsis peritana"
+}
+item {
+ name: "9915"
+ id: 574
+ display_name: "Piranga rubra"
+}
+item {
+ name: "50880"
+ id: 575
+ display_name: "Limenitis lorquini"
+}
+item {
+ name: "9921"
+ id: 576
+ display_name: "Piranga olivacea"
+}
+item {
+ name: "100034"
+ id: 577
+ display_name: "Epiaeschna heros"
+}
+item {
+ name: "9924"
+ id: 578
+ display_name: "Piranga flava"
+}
+item {
+ name: "42339"
+ id: 579
+ display_name: "Tragelaphus strepsiceros"
+}
+item {
+ name: "50892"
+ id: 580
+ display_name: "Euphydryas chalcedona"
+}
+item {
+ name: "130348"
+ id: 581
+ display_name: "Dione moneta"
+}
+item {
+ name: "394966"
+ id: 582
+ display_name: "Phaulacridium marginale"
+}
+item {
+ name: "9943"
+ id: 583
+ display_name: "Amphispiza bilineata"
+}
+item {
+ name: "4388"
+ id: 584
+ display_name: "Larus dominicanus"
+}
+item {
+ name: "1758"
+ id: 585
+ display_name: "Piaya cayana"
+}
+item {
+ name: "50913"
+ id: 586
+ display_name: "Hyalophora euryalus"
+}
+item {
+ name: "9958"
+ id: 587
+ display_name: "Aimophila ruficeps"
+}
+item {
+ name: "59115"
+ id: 588
+ display_name: "Gambusia affinis"
+}
+item {
+ name: "64346"
+ id: 589
+ display_name: "Natrix tessellata"
+}
+item {
+ name: "59119"
+ id: 590
+ display_name: "Pontia protodice"
+}
+item {
+ name: "18160"
+ id: 591
+ display_name: "Melanerpes lewis"
+}
+item {
+ name: "18161"
+ id: 592
+ display_name: "Melanerpes uropygialis"
+}
+item {
+ name: "50931"
+ id: 593
+ display_name: "Strymon melinus"
+}
+item {
+ name: "59124"
+ id: 594
+ display_name: "Anthocharis sara"
+}
+item {
+ name: "59127"
+ id: 595
+ display_name: "Lycaena helloides"
+}
+item {
+ name: "59128"
+ id: 596
+ display_name: "Atlides halesus"
+}
+item {
+ name: "67324"
+ id: 597
+ display_name: "Eurema daira"
+}
+item {
+ name: "9981"
+ id: 598
+ display_name: "Passerculus sandwichensis"
+}
+item {
+ name: "59134"
+ id: 599
+ display_name: "Satyrium sylvinus"
+}
+item {
+ name: "67327"
+ id: 600
+ display_name: "Schistocerca obscura"
+}
+item {
+ name: "67328"
+ id: 601
+ display_name: "Pholcus phalangioides"
+}
+item {
+ name: "59138"
+ id: 602
+ display_name: "Satyrium saepium"
+}
+item {
+ name: "132867"
+ id: 603
+ display_name: "Microtia elva"
+}
+item {
+ name: "18181"
+ id: 604
+ display_name: "Melanerpes pucherani"
+}
+item {
+ name: "7486"
+ id: 605
+ display_name: "Salpinctes obsoletus"
+}
+item {
+ name: "108303"
+ id: 606
+ display_name: "Paltothemis lineatipes"
+}
+item {
+ name: "59152"
+ id: 607
+ display_name: "Leptotes marina"
+}
+item {
+ name: "132881"
+ id: 608
+ display_name: "Catocala ultronia"
+}
+item {
+ name: "143662"
+ id: 609
+ display_name: "Orthosoma brunneum"
+}
+item {
+ name: "59164"
+ id: 610
+ display_name: "Plebejus icarioides"
+}
+item {
+ name: "18205"
+ id: 611
+ display_name: "Melanerpes carolinus"
+}
+item {
+ name: "18206"
+ id: 612
+ display_name: "Melanerpes chrysogenys"
+}
+item {
+ name: "83744"
+ id: 613
+ display_name: "Amblyomma americanum"
+}
+item {
+ name: "18209"
+ id: 614
+ display_name: "Melanerpes formicivorus"
+}
+item {
+ name: "116517"
+ id: 615
+ display_name: "Caiman crocodilus"
+}
+item {
+ name: "59176"
+ id: 616
+ display_name: "Phyciodes mylitta"
+}
+item {
+ name: "59182"
+ id: 617
+ display_name: "Euphydryas editha"
+}
+item {
+ name: "43997"
+ id: 618
+ display_name: "Myocastor coypus"
+}
+item {
+ name: "59185"
+ id: 619
+ display_name: "Coenonympha tullia"
+}
+item {
+ name: "59187"
+ id: 620
+ display_name: "Erynnis propertius"
+}
+item {
+ name: "59188"
+ id: 621
+ display_name: "Erynnis funeralis"
+}
+item {
+ name: "59189"
+ id: 622
+ display_name: "Erynnis tristis"
+}
+item {
+ name: "59190"
+ id: 623
+ display_name: "Heliopetes ericetorum"
+}
+item {
+ name: "34615"
+ id: 624
+ display_name: "Gekko gecko"
+}
+item {
+ name: "42808"
+ id: 625
+ display_name: "Trichosurus vulpecula"
+}
+item {
+ name: "59194"
+ id: 626
+ display_name: "Ochlodes sylvanoides"
+}
+item {
+ name: "59195"
+ id: 627
+ display_name: "Lerodea eufala"
+}
+item {
+ name: "18236"
+ id: 628
+ display_name: "Colaptes auratus"
+}
+item {
+ name: "10045"
+ id: 629
+ display_name: "Basileuterus rufifrons"
+}
+item {
+ name: "59202"
+ id: 630
+ display_name: "Larus michahellis"
+}
+item {
+ name: "10053"
+ id: 631
+ display_name: "Ramphocelus passerinii"
+}
+item {
+ name: "19975"
+ id: 632
+ display_name: "Athene cunicularia"
+}
+item {
+ name: "82231"
+ id: 633
+ display_name: "Periplaneta americana"
+}
+item {
+ name: "67409"
+ id: 634
+ display_name: "Gobiesox maeandricus"
+}
+item {
+ name: "83795"
+ id: 635
+ display_name: "Cipangopaludina chinensis"
+}
+item {
+ name: "59220"
+ id: 636
+ display_name: "Branta hutchinsii"
+}
+item {
+ name: "10069"
+ id: 637
+ display_name: "Fringilla montifringilla"
+}
+item {
+ name: "10070"
+ id: 638
+ display_name: "Fringilla coelebs"
+}
+item {
+ name: "83802"
+ id: 639
+ display_name: "Megacyllene robiniae"
+}
+item {
+ name: "83804"
+ id: 640
+ display_name: "Dynastes tityus"
+}
+item {
+ name: "51039"
+ id: 641
+ display_name: "Cepaea hortensis"
+}
+item {
+ name: "68062"
+ id: 642
+ display_name: "Menemerus bivittatus"
+}
+item {
+ name: "47527"
+ id: 643
+ display_name: "Ostracion meleagris"
+}
+item {
+ name: "67435"
+ id: 644
+ display_name: "Urbanus proteus"
+}
+item {
+ name: "10094"
+ id: 645
+ display_name: "Junco hyemalis"
+}
+item {
+ name: "67440"
+ id: 646
+ display_name: "Utetheisa ornatrix"
+}
+item {
+ name: "100210"
+ id: 647
+ display_name: "Epitheca canis"
+}
+item {
+ name: "1907"
+ id: 648
+ display_name: "Cuculus canorus"
+}
+item {
+ name: "100215"
+ id: 649
+ display_name: "Epitheca princeps"
+}
+item {
+ name: "27826"
+ id: 650
+ display_name: "Taricha granulosa"
+}
+item {
+ name: "129147"
+ id: 651
+ display_name: "Ammophila procera"
+}
+item {
+ name: "10111"
+ id: 652
+ display_name: "Junco phaeonotus"
+}
+item {
+ name: "83844"
+ id: 653
+ display_name: "Oxyopes salticus"
+}
+item {
+ name: "144107"
+ id: 654
+ display_name: "Tetracis crocallata"
+}
+item {
+ name: "51097"
+ id: 655
+ display_name: "Papilio zelicaon"
+}
+item {
+ name: "10138"
+ id: 656
+ display_name: "Ammodramus nelsoni"
+}
+item {
+ name: "10139"
+ id: 657
+ display_name: "Ammodramus savannarum"
+}
+item {
+ name: "10147"
+ id: 658
+ display_name: "Ammodramus maritimus"
+}
+item {
+ name: "59300"
+ id: 659
+ display_name: "Anagrapha falcifera"
+}
+item {
+ name: "51110"
+ id: 660
+ display_name: "Xylocopa virginica"
+}
+item {
+ name: "1960"
+ id: 661
+ display_name: "Coccyzus erythropthalmus"
+}
+item {
+ name: "42652"
+ id: 662
+ display_name: "Didelphis virginiana"
+}
+item {
+ name: "428606"
+ id: 663
+ display_name: "Heraclides rumiko"
+}
+item {
+ name: "127303"
+ id: 664
+ display_name: "Callophrys henrici"
+}
+item {
+ name: "1964"
+ id: 665
+ display_name: "Coccyzus minor"
+}
+item {
+ name: "1965"
+ id: 666
+ display_name: "Coccyzus americanus"
+}
+item {
+ name: "8520"
+ id: 667
+ display_name: "Nucifraga columbiana"
+}
+item {
+ name: "116658"
+ id: 668
+ display_name: "Siphanta acuta"
+}
+item {
+ name: "1972"
+ id: 669
+ display_name: "Crotophaga sulcirostris"
+}
+item {
+ name: "10168"
+ id: 670
+ display_name: "Pooecetes gramineus"
+}
+item {
+ name: "53893"
+ id: 671
+ display_name: "Chlosyne palla"
+}
+item {
+ name: "10173"
+ id: 672
+ display_name: "Arremonops rufivirgatus"
+}
+item {
+ name: "1986"
+ id: 673
+ display_name: "Geococcyx californianus"
+}
+item {
+ name: "1987"
+ id: 674
+ display_name: "Geococcyx velox"
+}
+item {
+ name: "116680"
+ id: 675
+ display_name: "Tabanus atratus"
+}
+item {
+ name: "116681"
+ id: 676
+ display_name: "Atteva aurea"
+}
+item {
+ name: "124875"
+ id: 677
+ display_name: "Spodoptera litura"
+}
+item {
+ name: "26575"
+ id: 678
+ display_name: "Diadophis punctatus"
+}
+item {
+ name: "10199"
+ id: 679
+ display_name: "Coereba flaveola"
+}
+item {
+ name: "26591"
+ id: 680
+ display_name: "Diadophis punctatus edwardsii"
+}
+item {
+ name: "59360"
+ id: 681
+ display_name: "Neverita duplicata"
+}
+item {
+ name: "68263"
+ id: 682
+ display_name: "Papilio multicaudata"
+}
+item {
+ name: "26598"
+ id: 683
+ display_name: "Diadophis punctatus amabilis"
+}
+item {
+ name: "42983"
+ id: 684
+ display_name: "Phascolarctos cinereus"
+}
+item {
+ name: "67560"
+ id: 685
+ display_name: "Adelpha californica"
+}
+item {
+ name: "10224"
+ id: 686
+ display_name: "Passerina ciris"
+}
+item {
+ name: "2038"
+ id: 687
+ display_name: "Alectura lathami"
+}
+item {
+ name: "10232"
+ id: 688
+ display_name: "Passerina leclancherii"
+}
+item {
+ name: "10234"
+ id: 689
+ display_name: "Passerina amoena"
+}
+item {
+ name: "10243"
+ id: 690
+ display_name: "Icteria virens"
+}
+item {
+ name: "2052"
+ id: 691
+ display_name: "Crax rubra"
+}
+item {
+ name: "94551"
+ id: 692
+ display_name: "Argia immunda"
+}
+item {
+ name: "2062"
+ id: 693
+ display_name: "Penelope purpurascens"
+}
+item {
+ name: "204490"
+ id: 694
+ display_name: "Copsychus malabaricus"
+}
+item {
+ name: "10257"
+ id: 695
+ display_name: "Paroaria capitata"
+}
+item {
+ name: "51221"
+ id: 696
+ display_name: "Procambarus clarkii"
+}
+item {
+ name: "10262"
+ id: 697
+ display_name: "Cyanerpes cyaneus"
+}
+item {
+ name: "508249"
+ id: 698
+ display_name: "Microcarbo melanoleucos brevirostris"
+}
+item {
+ name: "18460"
+ id: 699
+ display_name: "Sphyrapicus thyroideus"
+}
+item {
+ name: "10271"
+ id: 700
+ display_name: "Pheucticus ludovicianus"
+}
+item {
+ name: "18464"
+ id: 701
+ display_name: "Sphyrapicus ruber"
+}
+item {
+ name: "10274"
+ id: 702
+ display_name: "Pheucticus melanocephalus"
+}
+item {
+ name: "18467"
+ id: 703
+ display_name: "Sphyrapicus nuchalis"
+}
+item {
+ name: "100391"
+ id: 704
+ display_name: "Erythrodiplax berenice"
+}
+item {
+ name: "2089"
+ id: 705
+ display_name: "Ortalis poliocephala"
+}
+item {
+ name: "2090"
+ id: 706
+ display_name: "Ortalis vetula"
+}
+item {
+ name: "8038"
+ id: 707
+ display_name: "Corvus albus"
+}
+item {
+ name: "67629"
+ id: 708
+ display_name: "Oligocottus maculosus"
+}
+item {
+ name: "10286"
+ id: 709
+ display_name: "Mniotilta varia"
+}
+item {
+ name: "10288"
+ id: 710
+ display_name: "Volatinia jacarina"
+}
+item {
+ name: "100403"
+ id: 711
+ display_name: "Erythrodiplax minuscula"
+}
+item {
+ name: "84023"
+ id: 712
+ display_name: "Amorpha juglandis"
+}
+item {
+ name: "84024"
+ id: 713
+ display_name: "Galasa nigrinodis"
+}
+item {
+ name: "10297"
+ id: 714
+ display_name: "Thraupis palmarum"
+}
+item {
+ name: "67642"
+ id: 715
+ display_name: "Pantherophis spiloides"
+}
+item {
+ name: "67653"
+ id: 716
+ display_name: "Phoebis agarithe"
+}
+item {
+ name: "84038"
+ id: 717
+ display_name: "Haploa lecontei"
+}
+item {
+ name: "26695"
+ id: 718
+ display_name: "Scaphiopus holbrookii"
+}
+item {
+ name: "84040"
+ id: 719
+ display_name: "Chauliognathus marginatus"
+}
+item {
+ name: "51275"
+ id: 720
+ display_name: "Pentatoma rufipes"
+}
+item {
+ name: "2124"
+ id: 721
+ display_name: "Momotus mexicanus"
+}
+item {
+ name: "26702"
+ id: 722
+ display_name: "Spea hammondii"
+}
+item {
+ name: "10325"
+ id: 723
+ display_name: "Euphagus cyanocephalus"
+}
+item {
+ name: "43102"
+ id: 724
+ display_name: "Sylvilagus palustris"
+}
+item {
+ name: "49509"
+ id: 725
+ display_name: "Lutjanus griseus"
+}
+item {
+ name: "116834"
+ id: 726
+ display_name: "Cacatua galerita"
+}
+item {
+ name: "127188"
+ id: 727
+ display_name: "Junco hyemalis oreganus"
+}
+item {
+ name: "26725"
+ id: 728
+ display_name: "Ambystoma jeffersonianum"
+}
+item {
+ name: "43111"
+ id: 729
+ display_name: "Sylvilagus floridanus"
+}
+item {
+ name: "43112"
+ id: 730
+ display_name: "Sylvilagus bachmani"
+}
+item {
+ name: "67691"
+ id: 731
+ display_name: "Lophocampa maculata"
+}
+item {
+ name: "51311"
+ id: 732
+ display_name: "Urbanus dorantes"
+}
+item {
+ name: "67700"
+ id: 733
+ display_name: "Caracolus caracolla"
+}
+item {
+ name: "43128"
+ id: 734
+ display_name: "Lepus europaeus"
+}
+item {
+ name: "26745"
+ id: 735
+ display_name: "Ambystoma texanum"
+}
+item {
+ name: "67706"
+ id: 736
+ display_name: "Argiope argentata"
+}
+item {
+ name: "26747"
+ id: 737
+ display_name: "Ambystoma gracile"
+}
+item {
+ name: "67708"
+ id: 738
+ display_name: "Argiope trifasciata"
+}
+item {
+ name: "26749"
+ id: 739
+ display_name: "Ambystoma tigrinum"
+}
+item {
+ name: "4896"
+ id: 740
+ display_name: "Pluvialis fulva"
+}
+item {
+ name: "10369"
+ id: 741
+ display_name: "Molothrus aeneus"
+}
+item {
+ name: "26754"
+ id: 742
+ display_name: "Ambystoma macrodactylum"
+}
+item {
+ name: "10373"
+ id: 743
+ display_name: "Molothrus ater"
+}
+item {
+ name: "2185"
+ id: 744
+ display_name: "Merops pusillus"
+}
+item {
+ name: "84109"
+ id: 745
+ display_name: "Pisaurina mira"
+}
+item {
+ name: "67726"
+ id: 746
+ display_name: "Aeshna palmata"
+}
+item {
+ name: "2191"
+ id: 747
+ display_name: "Merops apiaster"
+}
+item {
+ name: "67731"
+ id: 748
+ display_name: "Anax junius"
+}
+item {
+ name: "198804"
+ id: 749
+ display_name: "Satyrium titus"
+}
+item {
+ name: "51349"
+ id: 750
+ display_name: "Pyrgus communis"
+}
+item {
+ name: "18584"
+ id: 751
+ display_name: "Pteroglossus torquatus"
+}
+item {
+ name: "67737"
+ id: 752
+ display_name: "Rhionaeschna multicolor"
+}
+item {
+ name: "198812"
+ id: 753
+ display_name: "Lethe anthedon"
+}
+item {
+ name: "321697"
+ id: 754
+ display_name: "Melanchroia chephise"
+}
+item {
+ name: "198821"
+ id: 755
+ display_name: "Pieris oleracea"
+}
+item {
+ name: "26790"
+ id: 756
+ display_name: "Ambystoma maculatum"
+}
+item {
+ name: "10411"
+ id: 757
+ display_name: "Loxia curvirostra"
+}
+item {
+ name: "133295"
+ id: 758
+ display_name: "Melitaea didyma"
+}
+item {
+ name: "67760"
+ id: 759
+ display_name: "Popillia japonica"
+}
+item {
+ name: "43188"
+ id: 760
+ display_name: "Ochotona princeps"
+}
+item {
+ name: "2229"
+ id: 761
+ display_name: "Merops orientalis"
+}
+item {
+ name: "10423"
+ id: 762
+ display_name: "Loxia leucoptera"
+}
+item {
+ name: "67771"
+ id: 763
+ display_name: "Leptoglossus occidentalis"
+}
+item {
+ name: "84162"
+ id: 764
+ display_name: "Chrysochus auratus"
+}
+item {
+ name: "26822"
+ id: 765
+ display_name: "Dicamptodon tenebrosus"
+}
+item {
+ name: "26823"
+ id: 766
+ display_name: "Dicamptodon ensatus"
+}
+item {
+ name: "51402"
+ id: 767
+ display_name: "Megalops atlanticus"
+}
+item {
+ name: "67725"
+ id: 768
+ display_name: "Aeshna interrupta"
+}
+item {
+ name: "411858"
+ id: 769
+ display_name: "Vanessa gonerilla gonerilla"
+}
+item {
+ name: "26835"
+ id: 770
+ display_name: "Drymobius margaritiferus"
+}
+item {
+ name: "84185"
+ id: 771
+ display_name: "Megalopyge opercularis"
+}
+item {
+ name: "2266"
+ id: 772
+ display_name: "Coracias garrulus"
+}
+item {
+ name: "141531"
+ id: 773
+ display_name: "Lethe eurydice"
+}
+item {
+ name: "2269"
+ id: 774
+ display_name: "Coracias caudatus"
+}
+item {
+ name: "133346"
+ id: 775
+ display_name: "Melittia cucurbitae"
+}
+item {
+ name: "2275"
+ id: 776
+ display_name: "Coracias benghalensis"
+}
+item {
+ name: "84196"
+ id: 777
+ display_name: "Pontania californica"
+}
+item {
+ name: "10470"
+ id: 778
+ display_name: "Xanthocephalus xanthocephalus"
+}
+item {
+ name: "10479"
+ id: 779
+ display_name: "Chondestes grammacus"
+}
+item {
+ name: "51440"
+ id: 780
+ display_name: "Pituophis catenifer catenifer"
+}
+item {
+ name: "54087"
+ id: 781
+ display_name: "Pieris napi"
+}
+item {
+ name: "59635"
+ id: 782
+ display_name: "Phragmatopoma californica"
+}
+item {
+ name: "10487"
+ id: 783
+ display_name: "Dolichonyx oryzivorus"
+}
+item {
+ name: "67835"
+ id: 784
+ display_name: "Danaus chrysippus"
+}
+item {
+ name: "59644"
+ id: 785
+ display_name: "Pantherophis alleghaniensis"
+}
+item {
+ name: "59646"
+ id: 786
+ display_name: "Pantherophis bairdi"
+}
+item {
+ name: "116999"
+ id: 787
+ display_name: "Pandion haliaetus"
+}
+item {
+ name: "117002"
+ id: 788
+ display_name: "Phainopepla nitens"
+}
+item {
+ name: "16770"
+ id: 789
+ display_name: "Tyrannus couchii"
+}
+item {
+ name: "84239"
+ id: 790
+ display_name: "Callophrys gryneus"
+}
+item {
+ name: "104553"
+ id: 791
+ display_name: "Leucorrhinia proxima"
+}
+item {
+ name: "117016"
+ id: 792
+ display_name: "Phylloscopus collybita"
+}
+item {
+ name: "49540"
+ id: 793
+ display_name: "Gasteracantha cancriformis"
+}
+item {
+ name: "59675"
+ id: 794
+ display_name: "Pyrrharctia isabella"
+}
+item {
+ name: "469277"
+ id: 795
+ display_name: "Neotibicen superbus"
+}
+item {
+ name: "236973"
+ id: 796
+ display_name: "Circus cyaneus hudsonius"
+}
+item {
+ name: "59683"
+ id: 797
+ display_name: "Porpita porpita"
+}
+item {
+ name: "26916"
+ id: 798
+ display_name: "Contia tenuis"
+}
+item {
+ name: "51493"
+ id: 799
+ display_name: "Trimerotropis pallidipennis"
+}
+item {
+ name: "51495"
+ id: 800
+ display_name: "Anthocharis cardamines"
+}
+item {
+ name: "133416"
+ id: 801
+ display_name: "Phoebis philea"
+}
+item {
+ name: "8583"
+ id: 802
+ display_name: "Grallina cyanoleuca"
+}
+item {
+ name: "395569"
+ id: 803
+ display_name: "Prionoplus reticularis"
+}
+item {
+ name: "59698"
+ id: 804
+ display_name: "Velella velella"
+}
+item {
+ name: "141626"
+ id: 805
+ display_name: "Lygaeus turcicus"
+}
+item {
+ name: "84286"
+ id: 806
+ display_name: "Diapheromera femorata"
+}
+item {
+ name: "117059"
+ id: 807
+ display_name: "Plectrophenax nivalis"
+}
+item {
+ name: "133447"
+ id: 808
+ display_name: "Crambus agitatellus"
+}
+item {
+ name: "133448"
+ id: 809
+ display_name: "Climaciella brunnea"
+}
+item {
+ name: "51534"
+ id: 810
+ display_name: "Leptotes cassius"
+}
+item {
+ name: "205197"
+ id: 811
+ display_name: "Eutrapela clemataria"
+}
+item {
+ name: "51536"
+ id: 812
+ display_name: "Ascia monuste"
+}
+item {
+ name: "10585"
+ id: 813
+ display_name: "Calamospiza melanocorys"
+}
+item {
+ name: "49552"
+ id: 814
+ display_name: "Scutigera coleoptrata"
+}
+item {
+ name: "51555"
+ id: 815
+ display_name: "Sympetrum illotum"
+}
+item {
+ name: "51557"
+ id: 816
+ display_name: "Bombylius major"
+}
+item {
+ name: "117095"
+ id: 817
+ display_name: "Regulus calendula"
+}
+item {
+ name: "117097"
+ id: 818
+ display_name: "Regulus ignicapilla"
+}
+item {
+ name: "117099"
+ id: 819
+ display_name: "Regulus regulus"
+}
+item {
+ name: "117100"
+ id: 820
+ display_name: "Regulus satrapa"
+}
+item {
+ name: "84333"
+ id: 821
+ display_name: "Eudryas grata"
+}
+item {
+ name: "215409"
+ id: 822
+ display_name: "Bradybaena similaris"
+}
+item {
+ name: "16787"
+ id: 823
+ display_name: "Tyrannus melancholicus"
+}
+item {
+ name: "46225"
+ id: 824
+ display_name: "Tamias dorsalis"
+}
+item {
+ name: "59774"
+ id: 825
+ display_name: "Pachydiplax longipennis"
+}
+item {
+ name: "59776"
+ id: 826
+ display_name: "Perithemis tenera"
+}
+item {
+ name: "119014"
+ id: 827
+ display_name: "Argia fumipennis violacea"
+}
+item {
+ name: "4326"
+ id: 828
+ display_name: "Pelecanus conspicillatus"
+}
+item {
+ name: "18833"
+ id: 829
+ display_name: "Aulacorhynchus prasinus"
+}
+item {
+ name: "43411"
+ id: 830
+ display_name: "Ateles geoffroyi"
+}
+item {
+ name: "141725"
+ id: 831
+ display_name: "Nezara viridula"
+}
+item {
+ name: "51614"
+ id: 832
+ display_name: "Eurema hecabe"
+}
+item {
+ name: "125343"
+ id: 833
+ display_name: "Crepidula fornicata"
+}
+item {
+ name: "2464"
+ id: 834
+ display_name: "Todiramphus sanctus"
+}
+item {
+ name: "43432"
+ id: 835
+ display_name: "Cebus capucinus"
+}
+item {
+ name: "43436"
+ id: 836
+ display_name: "Alouatta palliata"
+}
+item {
+ name: "43439"
+ id: 837
+ display_name: "Alouatta pigra"
+}
+item {
+ name: "9357"
+ id: 838
+ display_name: "Icterus bullockii"
+}
+item {
+ name: "84403"
+ id: 839
+ display_name: "Phyllopalpus pulchellus"
+}
+item {
+ name: "10676"
+ id: 840
+ display_name: "Spiza americana"
+}
+item {
+ name: "16798"
+ id: 841
+ display_name: "Tyrannus dominicensis"
+}
+item {
+ name: "141752"
+ id: 842
+ display_name: "Biblis hyperia"
+}
+item {
+ name: "4512"
+ id: 843
+ display_name: "Chlidonias niger"
+}
+item {
+ name: "43460"
+ id: 844
+ display_name: "Macaca mulatta"
+}
+item {
+ name: "51654"
+ id: 845
+ display_name: "Junonia almana"
+}
+item {
+ name: "51659"
+ id: 846
+ display_name: "Anthopleura xanthogrammica"
+}
+item {
+ name: "84428"
+ id: 847
+ display_name: "Drepana arcuata"
+}
+item {
+ name: "10702"
+ id: 848
+ display_name: "Oriturus superciliosus"
+}
+item {
+ name: "68047"
+ id: 849
+ display_name: "Psarocolius montezuma"
+}
+item {
+ name: "12707"
+ id: 850
+ display_name: "Turdus pilaris"
+}
+item {
+ name: "84437"
+ id: 851
+ display_name: "Nicrophorus orbicollis"
+}
+item {
+ name: "84438"
+ id: 852
+ display_name: "Platyprepia virginalis"
+}
+item {
+ name: "117209"
+ id: 853
+ display_name: "Notiomystis cincta"
+}
+item {
+ name: "343393"
+ id: 854
+ display_name: "Hypsopygia olinalis"
+}
+item {
+ name: "27101"
+ id: 855
+ display_name: "Eurycea longicauda"
+}
+item {
+ name: "117214"
+ id: 856
+ display_name: "Sagittarius serpentarius"
+}
+item {
+ name: "18911"
+ id: 857
+ display_name: "Psittacula krameri"
+}
+item {
+ name: "117218"
+ id: 858
+ display_name: "Verrucosa arenata"
+}
+item {
+ name: "117221"
+ id: 859
+ display_name: "Dasymutilla occidentalis"
+}
+item {
+ name: "35303"
+ id: 860
+ display_name: "Ctenosaura similis"
+}
+item {
+ name: "18920"
+ id: 861
+ display_name: "Platycercus eximius"
+}
+item {
+ name: "10729"
+ id: 862
+ display_name: "Protonotaria citrea"
+}
+item {
+ name: "35306"
+ id: 863
+ display_name: "Ctenosaura pectinata"
+}
+item {
+ name: "109650"
+ id: 864
+ display_name: "Platycnemis pennipes"
+}
+item {
+ name: "27120"
+ id: 865
+ display_name: "Eurycea bislineata"
+}
+item {
+ name: "27123"
+ id: 866
+ display_name: "Eurycea lucifuga"
+}
+item {
+ name: "51702"
+ id: 867
+ display_name: "Coccinella septempunctata"
+}
+item {
+ name: "2552"
+ id: 868
+ display_name: "Megaceryle torquata"
+}
+item {
+ name: "133625"
+ id: 869
+ display_name: "Zanclognatha jacchusalis"
+}
+item {
+ name: "18943"
+ id: 870
+ display_name: "Nestor meridionalis"
+}
+item {
+ name: "84481"
+ id: 871
+ display_name: "Calopteryx maculata"
+}
+item {
+ name: "35330"
+ id: 872
+ display_name: "Sauromalus ater"
+}
+item {
+ name: "27140"
+ id: 873
+ display_name: "Coluber constrictor priapus"
+}
+item {
+ name: "199179"
+ id: 874
+ display_name: "Polistes chinensis"
+}
+item {
+ name: "51724"
+ id: 875
+ display_name: "Mopalia lignosa"
+}
+item {
+ name: "27149"
+ id: 876
+ display_name: "Coluber constrictor constrictor"
+}
+item {
+ name: "35342"
+ id: 877
+ display_name: "Iguana iguana"
+}
+item {
+ name: "27153"
+ id: 878
+ display_name: "Coluber constrictor flaviventris"
+}
+item {
+ name: "35347"
+ id: 879
+ display_name: "Amblyrhynchus cristatus"
+}
+item {
+ name: "125461"
+ id: 880
+ display_name: "Ursus arctos horribilis"
+}
+item {
+ name: "84507"
+ id: 881
+ display_name: "Lygus lineolaris"
+}
+item {
+ name: "35356"
+ id: 882
+ display_name: "Dipsosaurus dorsalis"
+}
+item {
+ name: "51743"
+ id: 883
+ display_name: "Danaus gilippus"
+}
+item {
+ name: "18976"
+ id: 884
+ display_name: "Amazona viridigenalis"
+}
+item {
+ name: "125475"
+ id: 885
+ display_name: "Plusiodonta compressipalpis"
+}
+item {
+ name: "51748"
+ id: 886
+ display_name: "Danaus gilippus thersippus"
+}
+item {
+ name: "68137"
+ id: 887
+ display_name: "Chlorocebus pygerythrus"
+}
+item {
+ name: "133675"
+ id: 888
+ display_name: "Coenobita clypeatus"
+}
+item {
+ name: "215596"
+ id: 889
+ display_name: "Buprestis aurulenta"
+}
+item {
+ name: "117293"
+ id: 890
+ display_name: "Oecophylla smaragdina"
+}
+item {
+ name: "68142"
+ id: 891
+ display_name: "Prenolepis imparis"
+}
+item {
+ name: "27184"
+ id: 892
+ display_name: "Plethodon glutinosus"
+}
+item {
+ name: "27186"
+ id: 893
+ display_name: "Plethodon cinereus"
+}
+item {
+ name: "18995"
+ id: 894
+ display_name: "Amazona albifrons"
+}
+item {
+ name: "51765"
+ id: 895
+ display_name: "Poanes melane"
+}
+item {
+ name: "18998"
+ id: 896
+ display_name: "Amazona oratrix"
+}
+item {
+ name: "41396"
+ id: 897
+ display_name: "Rhynchonycteris naso"
+}
+item {
+ name: "27194"
+ id: 898
+ display_name: "Plethodon vehiculum"
+}
+item {
+ name: "51773"
+ id: 899
+ display_name: "Nathalis iole"
+}
+item {
+ name: "12908"
+ id: 900
+ display_name: "Saxicola rubetra"
+}
+item {
+ name: "68165"
+ id: 901
+ display_name: "Linepithema humile"
+}
+item {
+ name: "154721"
+ id: 902
+ display_name: "Brachygastra mellifica"
+}
+item {
+ name: "338504"
+ id: 903
+ display_name: "Xanthocnemis zealandica"
+}
+item {
+ name: "338505"
+ id: 904
+ display_name: "Melangyna novaezelandiae"
+}
+item {
+ name: "27093"
+ id: 905
+ display_name: "Eurycea cirrigera"
+}
+item {
+ name: "65975"
+ id: 906
+ display_name: "Lithobates berlandieri"
+}
+item {
+ name: "19020"
+ id: 907
+ display_name: "Ara militaris"
+}
+item {
+ name: "474210"
+ id: 908
+ display_name: "Spizelloides arborea"
+}
+item {
+ name: "205240"
+ id: 909
+ display_name: "Pantographa limata"
+}
+item {
+ name: "27226"
+ id: 910
+ display_name: "Plethodon albagula"
+}
+item {
+ name: "318545"
+ id: 911
+ display_name: "Coreus marginatus"
+}
+item {
+ name: "2662"
+ id: 912
+ display_name: "Ceryle rudis"
+}
+item {
+ name: "109161"
+ id: 913
+ display_name: "Perithemis intensa"
+}
+item {
+ name: "51824"
+ id: 914
+ display_name: "Calopteryx splendens"
+}
+item {
+ name: "27250"
+ id: 915
+ display_name: "Ensatina eschscholtzii"
+}
+item {
+ name: "2676"
+ id: 916
+ display_name: "Chloroceryle aenea"
+}
+item {
+ name: "2679"
+ id: 917
+ display_name: "Chloroceryle amazona"
+}
+item {
+ name: "84602"
+ id: 918
+ display_name: "Zale lunata"
+}
+item {
+ name: "133756"
+ id: 919
+ display_name: "Leptoglossus oppositus"
+}
+item {
+ name: "35453"
+ id: 920
+ display_name: "Zootoca vivipara"
+}
+item {
+ name: "84612"
+ id: 921
+ display_name: "Polyphylla decemlineata"
+}
+item {
+ name: "133765"
+ id: 922
+ display_name: "Eumenes fraternus"
+}
+item {
+ name: "68230"
+ id: 923
+ display_name: "Brachymesia gravida"
+}
+item {
+ name: "49601"
+ id: 924
+ display_name: "Mola mola"
+}
+item {
+ name: "68232"
+ id: 925
+ display_name: "Papilio palamedes"
+}
+item {
+ name: "68233"
+ id: 926
+ display_name: "Orthemis ferruginea"
+}
+item {
+ name: "68239"
+ id: 927
+ display_name: "Parnassius clodius"
+}
+item {
+ name: "68240"
+ id: 928
+ display_name: "Chlosyne lacinia"
+}
+item {
+ name: "68244"
+ id: 929
+ display_name: "Euptoieta claudia"
+}
+item {
+ name: "68249"
+ id: 930
+ display_name: "Dymasia dymas"
+}
+item {
+ name: "68251"
+ id: 931
+ display_name: "Limenitis weidemeyerii"
+}
+item {
+ name: "133790"
+ id: 932
+ display_name: "Chalybion californicum"
+}
+item {
+ name: "84644"
+ id: 933
+ display_name: "Phalangium opilio"
+}
+item {
+ name: "68262"
+ id: 934
+ display_name: "Polygonia faunus"
+}
+item {
+ name: "133799"
+ id: 935
+ display_name: "Xenox tigrinus"
+}
+item {
+ name: "68264"
+ id: 936
+ display_name: "Asterocampa celtis"
+}
+item {
+ name: "132892"
+ id: 937
+ display_name: "Anacridium aegyptium"
+}
+item {
+ name: "68268"
+ id: 938
+ display_name: "Euptoieta hegesia"
+}
+item {
+ name: "68269"
+ id: 939
+ display_name: "Aglais milberti"
+}
+item {
+ name: "43694"
+ id: 940
+ display_name: "Loxodonta africana"
+}
+item {
+ name: "59165"
+ id: 941
+ display_name: "Apodemia mormo"
+}
+item {
+ name: "68274"
+ id: 942
+ display_name: "Phyciodes phaon"
+}
+item {
+ name: "68275"
+ id: 943
+ display_name: "Battus polydamas"
+}
+item {
+ name: "84662"
+ id: 944
+ display_name: "Celastrina lucia"
+}
+item {
+ name: "16842"
+ id: 945
+ display_name: "Myiozetetes similis"
+}
+item {
+ name: "133826"
+ id: 946
+ display_name: "Zelus longipes"
+}
+item {
+ name: "14912"
+ id: 947
+ display_name: "Toxostoma curvirostre"
+}
+item {
+ name: "53708"
+ id: 948
+ display_name: "Pacifastacus leniusculus"
+}
+item {
+ name: "117452"
+ id: 949
+ display_name: "Sphinx kalmiae"
+}
+item {
+ name: "182997"
+ id: 950
+ display_name: "Megisto rubricata"
+}
+item {
+ name: "223965"
+ id: 951
+ display_name: "Lithacodia musta"
+}
+item {
+ name: "125663"
+ id: 952
+ display_name: "Kelletia kelletii"
+}
+item {
+ name: "125669"
+ id: 953
+ display_name: "Rumina decollata"
+}
+item {
+ name: "68328"
+ id: 954
+ display_name: "Oxythyrea funesta"
+}
+item {
+ name: "179324"
+ id: 955
+ display_name: "Dactylotum bicolor"
+}
+item {
+ name: "68330"
+ id: 956
+ display_name: "Arctia caja"
+}
+item {
+ name: "2548"
+ id: 957
+ display_name: "Megaceryle alcyon"
+}
+item {
+ name: "207600"
+ id: 958
+ display_name: "Thasus neocalifornicus"
+}
+item {
+ name: "207601"
+ id: 959
+ display_name: "Palpita quadristigmalis"
+}
+item {
+ name: "51954"
+ id: 960
+ display_name: "Sphecius speciosus"
+}
+item {
+ name: "207603"
+ id: 961
+ display_name: "Prolimacodes badia"
+}
+item {
+ name: "7294"
+ id: 962
+ display_name: "Eremophila alpestris"
+}
+item {
+ name: "19196"
+ id: 963
+ display_name: "Alisterus scapularis"
+}
+item {
+ name: "145194"
+ id: 964
+ display_name: "Cinnyris jugularis"
+}
+item {
+ name: "27390"
+ id: 965
+ display_name: "Desmognathus ochrophaeus"
+}
+item {
+ name: "207615"
+ id: 966
+ display_name: "Polistes apachus"
+}
+item {
+ name: "63275"
+ id: 967
+ display_name: "Tremex columba"
+}
+item {
+ name: "61910"
+ id: 968
+ display_name: "Orgyia antiqua"
+}
+item {
+ name: "199438"
+ id: 969
+ display_name: "Orgyia postica"
+}
+item {
+ name: "43794"
+ id: 970
+ display_name: "Castor canadensis"
+}
+item {
+ name: "84755"
+ id: 971
+ display_name: "Arion rufus"
+}
+item {
+ name: "51996"
+ id: 972
+ display_name: "Daphnis nerii"
+}
+item {
+ name: "194075"
+ id: 973
+ display_name: "Drymarchon melanurus erebennus"
+}
+item {
+ name: "133923"
+ id: 974
+ display_name: "Mermiria bivittata"
+}
+item {
+ name: "84778"
+ id: 975
+ display_name: "Leptinotarsa decemlineata"
+}
+item {
+ name: "11051"
+ id: 976
+ display_name: "Xiphorhynchus flavigaster"
+}
+item {
+ name: "121992"
+ id: 977
+ display_name: "Cervus elaphus roosevelti"
+}
+item {
+ name: "27459"
+ id: 978
+ display_name: "Batrachoseps attenuatus"
+}
+item {
+ name: "84806"
+ id: 979
+ display_name: "Acanalonia conica"
+}
+item {
+ name: "52043"
+ id: 980
+ display_name: "Spoladea recurvalis"
+}
+item {
+ name: "27468"
+ id: 981
+ display_name: "Batrachoseps major"
+}
+item {
+ name: "133966"
+ id: 982
+ display_name: "Lomographa vestaliata"
+}
+item {
+ name: "27474"
+ id: 983
+ display_name: "Batrachoseps nigriventris"
+}
+item {
+ name: "101204"
+ id: 984
+ display_name: "Gambusia holbrooki"
+}
+item {
+ name: "52055"
+ id: 985
+ display_name: "Crocothemis servilia"
+}
+item {
+ name: "4580"
+ id: 986
+ display_name: "Jacana jacana"
+}
+item {
+ name: "346970"
+ id: 987
+ display_name: "Callophrys dumetorum"
+}
+item {
+ name: "27486"
+ id: 988
+ display_name: "Pseudotriton ruber"
+}
+item {
+ name: "52075"
+ id: 989
+ display_name: "Atalopedes campestris"
+}
+item {
+ name: "27500"
+ id: 990
+ display_name: "Gyrinophilus porphyriticus"
+}
+item {
+ name: "73203"
+ id: 991
+ display_name: "Phalaropus fulicarius"
+}
+item {
+ name: "322417"
+ id: 992
+ display_name: "Limacus flavus"
+}
+item {
+ name: "40083"
+ id: 993
+ display_name: "Gopherus berlandieri"
+}
+item {
+ name: "68469"
+ id: 994
+ display_name: "Papilio demodocus"
+}
+item {
+ name: "2938"
+ id: 995
+ display_name: "Streptopelia turtur"
+}
+item {
+ name: "117633"
+ id: 996
+ display_name: "Mopalia muscosa"
+}
+item {
+ name: "117641"
+ id: 997
+ display_name: "Nucella lamellosa"
+}
+item {
+ name: "322443"
+ id: 998
+ display_name: "Thasus gigas"
+}
+item {
+ name: "68492"
+ id: 999
+ display_name: "Hemidactylus mabouia"
+}
+item {
+ name: "143853"
+ id: 1000
+ display_name: "Pica hudsonia"
+}
+item {
+ name: "144757"
+ id: 1001
+ display_name: "Corvus cornix"
+}
+item {
+ name: "117650"
+ id: 1002
+ display_name: "Mytilus edulis"
+}
+item {
+ name: "19349"
+ id: 1003
+ display_name: "Myiopsitta monachus"
+}
+item {
+ name: "2969"
+ id: 1004
+ display_name: "Streptopelia decaocto"
+}
+item {
+ name: "9919"
+ id: 1005
+ display_name: "Piranga ludoviciana"
+}
+item {
+ name: "5009"
+ id: 1006
+ display_name: "Ixobrychus exilis"
+}
+item {
+ name: "117666"
+ id: 1007
+ display_name: "Pleuroncodes planipes"
+}
+item {
+ name: "7603"
+ id: 1008
+ display_name: "Auriparus flaviceps"
+}
+item {
+ name: "117674"
+ id: 1009
+ display_name: "Ligia occidentalis"
+}
+item {
+ name: "145223"
+ id: 1010
+ display_name: "Geothlypis tolmiei"
+}
+item {
+ name: "60341"
+ id: 1011
+ display_name: "Lithobates sphenocephalus"
+}
+item {
+ name: "60342"
+ id: 1012
+ display_name: "Thamnophis proximus"
+}
+item {
+ name: "52155"
+ id: 1013
+ display_name: "Dermacentor variabilis"
+}
+item {
+ name: "60349"
+ id: 1014
+ display_name: "Scincella lateralis"
+}
+item {
+ name: "52158"
+ id: 1015
+ display_name: "Schistocerca nitens"
+}
+item {
+ name: "117696"
+ id: 1016
+ display_name: "Dendraster excentricus"
+}
+item {
+ name: "232391"
+ id: 1017
+ display_name: "Tetracha carolina"
+}
+item {
+ name: "3017"
+ id: 1018
+ display_name: "Columba livia"
+}
+item {
+ name: "145229"
+ id: 1019
+ display_name: "Setophaga citrina"
+}
+item {
+ name: "84950"
+ id: 1020
+ display_name: "Alypia octomaculata"
+}
+item {
+ name: "52188"
+ id: 1021
+ display_name: "Rhincodon typus"
+}
+item {
+ name: "494559"
+ id: 1022
+ display_name: "Polydrusus formosus"
+}
+item {
+ name: "145232"
+ id: 1023
+ display_name: "Setophaga cerulea"
+}
+item {
+ name: "3048"
+ id: 1024
+ display_name: "Columba palumbus"
+}
+item {
+ name: "9922"
+ id: 1025
+ display_name: "Piranga bidentata"
+}
+item {
+ name: "44026"
+ id: 1026
+ display_name: "Erethizon dorsatum"
+}
+item {
+ name: "61505"
+ id: 1027
+ display_name: "Manduca sexta"
+}
+item {
+ name: "84994"
+ id: 1028
+ display_name: "Acanthocephala declivis"
+}
+item {
+ name: "27652"
+ id: 1029
+ display_name: "Hemidactylium scutatum"
+}
+item {
+ name: "117767"
+ id: 1030
+ display_name: "Cervus elaphus nannodes"
+}
+item {
+ name: "494603"
+ id: 1031
+ display_name: "Hermissenda opalescens"
+}
+item {
+ name: "39819"
+ id: 1032
+ display_name: "Terrapene carolina bauri"
+}
+item {
+ name: "3093"
+ id: 1033
+ display_name: "Patagioenas leucocephala"
+}
+item {
+ name: "205316"
+ id: 1034
+ display_name: "Aidemona azteca"
+}
+item {
+ name: "216093"
+ id: 1035
+ display_name: "Caracolus marginella"
+}
+item {
+ name: "44062"
+ id: 1036
+ display_name: "Thomomys bottae"
+}
+item {
+ name: "85024"
+ id: 1037
+ display_name: "Heraclides cresphontes"
+}
+item {
+ name: "3108"
+ id: 1038
+ display_name: "Patagioenas fasciata"
+}
+item {
+ name: "213510"
+ id: 1039
+ display_name: "Anageshna primordialis"
+}
+item {
+ name: "85030"
+ id: 1040
+ display_name: "Crocothemis erythraea"
+}
+item {
+ name: "85034"
+ id: 1041
+ display_name: "Neoscona crucifera"
+}
+item {
+ name: "3117"
+ id: 1042
+ display_name: "Patagioenas flavirostris"
+}
+item {
+ name: "207924"
+ id: 1043
+ display_name: "Synchlora frondaria"
+}
+item {
+ name: "35900"
+ id: 1044
+ display_name: "Lacerta bilineata"
+}
+item {
+ name: "24382"
+ id: 1045
+ display_name: "Osteopilus septentrionalis"
+}
+item {
+ name: "145249"
+ id: 1046
+ display_name: "Setophaga discolor"
+}
+item {
+ name: "52297"
+ id: 1047
+ display_name: "Triakis semifasciata"
+}
+item {
+ name: "27726"
+ id: 1048
+ display_name: "Salamandra salamandra"
+}
+item {
+ name: "27727"
+ id: 1049
+ display_name: "Bogertophis subocularis"
+}
+item {
+ name: "143043"
+ id: 1050
+ display_name: "Cycnia tenera"
+}
+item {
+ name: "52313"
+ id: 1051
+ display_name: "Diodon hystrix"
+}
+item {
+ name: "143316"
+ id: 1052
+ display_name: "Schinia florida"
+}
+item {
+ name: "61968"
+ id: 1053
+ display_name: "Graphosoma lineatum"
+}
+item {
+ name: "502885"
+ id: 1054
+ display_name: "Lissachatina fulica"
+}
+item {
+ name: "71029"
+ id: 1055
+ display_name: "Crotalus cerastes cerastes"
+}
+item {
+ name: "207977"
+ id: 1056
+ display_name: "Aglais io"
+}
+item {
+ name: "19577"
+ id: 1057
+ display_name: "Chordeiles minor"
+}
+item {
+ name: "93312"
+ id: 1058
+ display_name: "Acropora palmata"
+}
+item {
+ name: "52354"
+ id: 1059
+ display_name: "Ambystoma laterale"
+}
+item {
+ name: "19587"
+ id: 1060
+ display_name: "Chordeiles acutipennis"
+}
+item {
+ name: "58585"
+ id: 1061
+ display_name: "Limenitis arthemis astyanax"
+}
+item {
+ name: "134277"
+ id: 1062
+ display_name: "Gastrophryne olivacea"
+}
+item {
+ name: "60551"
+ id: 1063
+ display_name: "Papilio glaucus"
+}
+item {
+ name: "3731"
+ id: 1064
+ display_name: "Platalea leucorodia"
+}
+item {
+ name: "232593"
+ id: 1065
+ display_name: "Thyris sepulchralis"
+}
+item {
+ name: "19609"
+ id: 1066
+ display_name: "Phalaenoptilus nuttallii"
+}
+item {
+ name: "126106"
+ id: 1067
+ display_name: "Haploa clymene"
+}
+item {
+ name: "27805"
+ id: 1068
+ display_name: "Notophthalmus viridescens"
+}
+item {
+ name: "199840"
+ id: 1069
+ display_name: "Haemorhous mexicanus"
+}
+item {
+ name: "199841"
+ id: 1070
+ display_name: "Haemorhous purpureus"
+}
+item {
+ name: "219719"
+ id: 1071
+ display_name: "Eudryas unio"
+}
+item {
+ name: "27818"
+ id: 1072
+ display_name: "Taricha torosa"
+}
+item {
+ name: "19627"
+ id: 1073
+ display_name: "Nyctidromus albicollis"
+}
+item {
+ name: "28750"
+ id: 1074
+ display_name: "Salvadora grahamiae lineata"
+}
+item {
+ name: "27824"
+ id: 1075
+ display_name: "Taricha rivularis"
+}
+item {
+ name: "146632"
+ id: 1076
+ display_name: "Toxomerus politus"
+}
+item {
+ name: "52402"
+ id: 1077
+ display_name: "Cetonia aurata"
+}
+item {
+ name: "18291"
+ id: 1078
+ display_name: "Campephilus guatemalensis"
+}
+item {
+ name: "60598"
+ id: 1079
+ display_name: "Ixodes scapularis"
+}
+item {
+ name: "199870"
+ id: 1080
+ display_name: "Pyralis farinalis"
+}
+item {
+ name: "60607"
+ id: 1081
+ display_name: "Limenitis arthemis"
+}
+item {
+ name: "205241"
+ id: 1082
+ display_name: "Plagodis phlogosaria"
+}
+item {
+ name: "14898"
+ id: 1083
+ display_name: "Toxostoma rufum"
+}
+item {
+ name: "126153"
+ id: 1084
+ display_name: "Amphion floridensis"
+}
+item {
+ name: "126155"
+ id: 1085
+ display_name: "Vespula germanica"
+}
+item {
+ name: "51392"
+ id: 1086
+ display_name: "Morone saxatilis"
+}
+item {
+ name: "3280"
+ id: 1087
+ display_name: "Leptotila verreauxi"
+}
+item {
+ name: "19670"
+ id: 1088
+ display_name: "Nyctibius jamaicensis"
+}
+item {
+ name: "6929"
+ id: 1089
+ display_name: "Anas penelope"
+}
+item {
+ name: "97738"
+ id: 1090
+ display_name: "Chromagrion conditum"
+}
+item {
+ name: "52449"
+ id: 1091
+ display_name: "Rhinecanthus rectangulus"
+}
+item {
+ name: "52451"
+ id: 1092
+ display_name: "Naso lituratus"
+}
+item {
+ name: "56529"
+ id: 1093
+ display_name: "Papilio machaon"
+}
+item {
+ name: "199913"
+ id: 1094
+ display_name: "Buteo plagiatus"
+}
+item {
+ name: "199914"
+ id: 1095
+ display_name: "Selasphorus calliope"
+}
+item {
+ name: "85227"
+ id: 1096
+ display_name: "Hemideina crassidens"
+}
+item {
+ name: "36076"
+ id: 1097
+ display_name: "Cophosaurus texanus"
+}
+item {
+ name: "36077"
+ id: 1098
+ display_name: "Cophosaurus texanus texanus"
+}
+item {
+ name: "208112"
+ id: 1099
+ display_name: "Palpita magniferalis"
+}
+item {
+ name: "85235"
+ id: 1100
+ display_name: "Deinacrida rugosa"
+}
+item {
+ name: "93429"
+ id: 1101
+ display_name: "Aeshna constricta"
+}
+item {
+ name: "36086"
+ id: 1102
+ display_name: "Callisaurus draconoides rhodostictus"
+}
+item {
+ name: "126204"
+ id: 1103
+ display_name: "Synchlora aerata"
+}
+item {
+ name: "93437"
+ id: 1104
+ display_name: "Aeshna mixta"
+}
+item {
+ name: "126207"
+ id: 1105
+ display_name: "Schizura unicornis"
+}
+item {
+ name: "126209"
+ id: 1106
+ display_name: "Metcalfa pruinosa"
+}
+item {
+ name: "126211"
+ id: 1107
+ display_name: "Poecilocapsus lineatus"
+}
+item {
+ name: "36100"
+ id: 1108
+ display_name: "Uta stansburiana elegans"
+}
+item {
+ name: "48342"
+ id: 1109
+ display_name: "Hemigrapsus nudus"
+}
+item {
+ name: "199942"
+ id: 1110
+ display_name: "Strategus aloeus"
+}
+item {
+ name: "126215"
+ id: 1111
+ display_name: "Monobia quadridens"
+}
+item {
+ name: "101640"
+ id: 1112
+ display_name: "Gomphaeschna furcillata"
+}
+item {
+ name: "126217"
+ id: 1113
+ display_name: "Pyrausta orphisalis"
+}
+item {
+ name: "36107"
+ id: 1114
+ display_name: "Urosaurus ornatus"
+}
+item {
+ name: "51940"
+ id: 1115
+ display_name: "Hemidactylus frenatus"
+}
+item {
+ name: "36121"
+ id: 1116
+ display_name: "Urosaurus graciosus"
+}
+item {
+ name: "19743"
+ id: 1117
+ display_name: "Megascops kennicottii"
+}
+item {
+ name: "68901"
+ id: 1118
+ display_name: "Salticus scenicus"
+}
+item {
+ name: "44326"
+ id: 1119
+ display_name: "Microtus californicus"
+}
+item {
+ name: "82481"
+ id: 1120
+ display_name: "Pieris marginalis"
+}
+item {
+ name: "474332"
+ id: 1121
+ display_name: "Porphyrio poliocephalus"
+}
+item {
+ name: "81674"
+ id: 1122
+ display_name: "Rivula propinqualis"
+}
+item {
+ name: "126252"
+ id: 1123
+ display_name: "Mastigoproctus giganteus"
+}
+item {
+ name: "36142"
+ id: 1124
+ display_name: "Sceloporus undulatus"
+}
+item {
+ name: "68911"
+ id: 1125
+ display_name: "Libellula needhami"
+}
+item {
+ name: "68912"
+ id: 1126
+ display_name: "Dysdera crocata"
+}
+item {
+ name: "42888"
+ id: 1127
+ display_name: "Macropus giganteus"
+}
+item {
+ name: "19765"
+ id: 1128
+ display_name: "Megascops asio"
+}
+item {
+ name: "68918"
+ id: 1129
+ display_name: "Poecilanthrax lucifer"
+}
+item {
+ name: "333705"
+ id: 1130
+ display_name: "Pantherophis obsoletus lindheimeri"
+}
+item {
+ name: "126267"
+ id: 1131
+ display_name: "Coleomegilla maculata"
+}
+item {
+ name: "101693"
+ id: 1132
+ display_name: "Gomphus vastus"
+}
+item {
+ name: "85221"
+ id: 1133
+ display_name: "Hemideina thoracica"
+}
+item {
+ name: "126276"
+ id: 1134
+ display_name: "Agrotis ipsilon"
+}
+item {
+ name: "85317"
+ id: 1135
+ display_name: "Eurosta solidaginis"
+}
+item {
+ name: "36169"
+ id: 1136
+ display_name: "Sceloporus spinosus"
+}
+item {
+ name: "60752"
+ id: 1137
+ display_name: "Hermeuptychia sosybius"
+}
+item {
+ name: "60754"
+ id: 1138
+ display_name: "Pyromorpha dimidiata"
+}
+item {
+ name: "126291"
+ id: 1139
+ display_name: "Prosapia bicincta"
+}
+item {
+ name: "52564"
+ id: 1140
+ display_name: "Anthopleura elegantissima"
+}
+item {
+ name: "126293"
+ id: 1141
+ display_name: "Prionoxystus robiniae"
+}
+item {
+ name: "120719"
+ id: 1142
+ display_name: "Pseudacris hypochondriaca"
+}
+item {
+ name: "36189"
+ id: 1143
+ display_name: "Sceloporus poinsettii"
+}
+item {
+ name: "52576"
+ id: 1144
+ display_name: "Uroctonus mordax"
+}
+item {
+ name: "36198"
+ id: 1145
+ display_name: "Sceloporus orcutti"
+}
+item {
+ name: "52584"
+ id: 1146
+ display_name: "Pantala hymenaea"
+}
+item {
+ name: "44395"
+ id: 1147
+ display_name: "Peromyscus leucopus"
+}
+item {
+ name: "36204"
+ id: 1148
+ display_name: "Sceloporus occidentalis"
+}
+item {
+ name: "52589"
+ id: 1149
+ display_name: "Coenonympha pamphilus"
+}
+item {
+ name: "3439"
+ id: 1150
+ display_name: "Zenaida auriculata"
+}
+item {
+ name: "36208"
+ id: 1151
+ display_name: "Sceloporus occidentalis bocourtii"
+}
+item {
+ name: "72936"
+ id: 1152
+ display_name: "Hymenolaimus malacorhynchos"
+}
+item {
+ name: "85362"
+ id: 1153
+ display_name: "Sphex ichneumoneus"
+}
+item {
+ name: "36217"
+ id: 1154
+ display_name: "Sceloporus merriami"
+}
+item {
+ name: "68993"
+ id: 1155
+ display_name: "Liometopum occidentale"
+}
+item {
+ name: "199916"
+ id: 1156
+ display_name: "Setophaga caerulescens"
+}
+item {
+ name: "52620"
+ id: 1157
+ display_name: "Cicindela oregona"
+}
+item {
+ name: "36243"
+ id: 1158
+ display_name: "Sceloporus jarrovii"
+}
+item {
+ name: "52628"
+ id: 1159
+ display_name: "Araneus diadematus"
+}
+item {
+ name: "180007"
+ id: 1160
+ display_name: "Otospermophilus beecheyi"
+}
+item {
+ name: "85408"
+ id: 1161
+ display_name: "Erythemis collocata"
+}
+item {
+ name: "36262"
+ id: 1162
+ display_name: "Sceloporus grammicus"
+}
+item {
+ name: "60839"
+ id: 1163
+ display_name: "Spilosoma virginica"
+}
+item {
+ name: "16968"
+ id: 1164
+ display_name: "Camptostoma imberbe"
+}
+item {
+ name: "4715"
+ id: 1165
+ display_name: "Caracara plancus"
+}
+item {
+ name: "313246"
+ id: 1166
+ display_name: "Olla v-nigrum"
+}
+item {
+ name: "126393"
+ id: 1167
+ display_name: "Stomolophus meleagris"
+}
+item {
+ name: "126397"
+ id: 1168
+ display_name: "Halysidota harrisii"
+}
+item {
+ name: "64221"
+ id: 1169
+ display_name: "Bipalium kewense"
+}
+item {
+ name: "28102"
+ id: 1170
+ display_name: "Virginia striatula"
+}
+item {
+ name: "150985"
+ id: 1171
+ display_name: "Planorbella trivolvis"
+}
+item {
+ name: "36306"
+ id: 1172
+ display_name: "Phrynosoma modestum"
+}
+item {
+ name: "36307"
+ id: 1173
+ display_name: "Phrynosoma orbiculare"
+}
+item {
+ name: "199929"
+ id: 1174
+ display_name: "Plagiometriona clavata"
+}
+item {
+ name: "3545"
+ id: 1175
+ display_name: "Columbina passerina"
+}
+item {
+ name: "36315"
+ id: 1176
+ display_name: "Phrynosoma hernandesi"
+}
+item {
+ name: "367556"
+ id: 1177
+ display_name: "Eupsittula nana"
+}
+item {
+ name: "371963"
+ id: 1178
+ display_name: "Lampropeltis multifasciata"
+}
+item {
+ name: "36339"
+ id: 1179
+ display_name: "Holbrookia propinqua"
+}
+item {
+ name: "36094"
+ id: 1180
+ display_name: "Uta stansburiana"
+}
+item {
+ name: "36343"
+ id: 1181
+ display_name: "Holbrookia maculata"
+}
+item {
+ name: "52766"
+ id: 1182
+ display_name: "Megaphasma denticrus"
+}
+item {
+ name: "18941"
+ id: 1183
+ display_name: "Nestor notabilis"
+}
+item {
+ name: "3580"
+ id: 1184
+ display_name: "Columbina talpacoti"
+}
+item {
+ name: "123690"
+ id: 1185
+ display_name: "Caranx melampygus"
+}
+item {
+ name: "52482"
+ id: 1186
+ display_name: "Episyrphus balteatus"
+}
+item {
+ name: "28762"
+ id: 1187
+ display_name: "Rhinocheilus lecontei"
+}
+item {
+ name: "3607"
+ id: 1188
+ display_name: "Geopelia striata"
+}
+item {
+ name: "52484"
+ id: 1189
+ display_name: "Celastrina echo"
+}
+item {
+ name: "61293"
+ id: 1190
+ display_name: "Thaumetopoea pityocampa"
+}
+item {
+ name: "19998"
+ id: 1191
+ display_name: "Athene noctua"
+}
+item {
+ name: "44575"
+ id: 1192
+ display_name: "Rattus rattus"
+}
+item {
+ name: "44576"
+ id: 1193
+ display_name: "Rattus norvegicus"
+}
+item {
+ name: "133250"
+ id: 1194
+ display_name: "Tettigonia viridissima"
+}
+item {
+ name: "52774"
+ id: 1195
+ display_name: "Bombus fervidus"
+}
+item {
+ name: "49756"
+ id: 1196
+ display_name: "Nephila clavipes"
+}
+item {
+ name: "52779"
+ id: 1197
+ display_name: "Bombus bimaculatus"
+}
+item {
+ name: "52782"
+ id: 1198
+ display_name: "Melissodes bimaculata"
+}
+item {
+ name: "126513"
+ id: 1199
+ display_name: "Larinioides cornutus"
+}
+item {
+ name: "69170"
+ id: 1200
+ display_name: "Hemigrapsus oregonensis"
+}
+item {
+ name: "1971"
+ id: 1201
+ display_name: "Crotophaga ani"
+}
+item {
+ name: "12942"
+ id: 1202
+ display_name: "Sialia sialis"
+}
+item {
+ name: "126532"
+ id: 1203
+ display_name: "Toxomerus geminatus"
+}
+item {
+ name: "216649"
+ id: 1204
+ display_name: "Chauliognathus pensylvanicus"
+}
+item {
+ name: "3734"
+ id: 1205
+ display_name: "Platalea alba"
+}
+item {
+ name: "216651"
+ id: 1206
+ display_name: "Chelinidea vittiger"
+}
+item {
+ name: "20044"
+ id: 1207
+ display_name: "Bubo virginianus"
+}
+item {
+ name: "11855"
+ id: 1208
+ display_name: "Petrochelidon fulva"
+}
+item {
+ name: "28246"
+ id: 1209
+ display_name: "Arizona elegans"
+}
+item {
+ name: "224855"
+ id: 1210
+ display_name: "Melipotis indomita"
+}
+item {
+ name: "11867"
+ id: 1211
+ display_name: "Progne subis"
+}
+item {
+ name: "126562"
+ id: 1212
+ display_name: "Setophaga coronata auduboni"
+}
+item {
+ name: "126568"
+ id: 1213
+ display_name: "Manduca rustica"
+}
+item {
+ name: "11882"
+ id: 1214
+ display_name: "Hirundo neoxena"
+}
+item {
+ name: "11901"
+ id: 1215
+ display_name: "Hirundo rustica"
+}
+item {
+ name: "52865"
+ id: 1216
+ display_name: "Tramea lacerata"
+}
+item {
+ name: "142978"
+ id: 1217
+ display_name: "Simyra insularis"
+}
+item {
+ name: "123499"
+ id: 1218
+ display_name: "Notophthalmus viridescens viridescens"
+}
+item {
+ name: "339592"
+ id: 1219
+ display_name: "Calidris virgata"
+}
+item {
+ name: "339593"
+ id: 1220
+ display_name: "Calidris pugnax"
+}
+item {
+ name: "44311"
+ id: 1221
+ display_name: "Microtus pennsylvanicus"
+}
+item {
+ name: "142988"
+ id: 1222
+ display_name: "Lerema accius"
+}
+item {
+ name: "142990"
+ id: 1223
+ display_name: "Autographa precationis"
+}
+item {
+ name: "142995"
+ id: 1224
+ display_name: "Hymenia perspectalis"
+}
+item {
+ name: "129423"
+ id: 1225
+ display_name: "Zelus luridus"
+}
+item {
+ name: "3733"
+ id: 1226
+ display_name: "Platalea regia"
+}
+item {
+ name: "470678"
+ id: 1227
+ display_name: "Cerithideopsis californica"
+}
+item {
+ name: "146713"
+ id: 1228
+ display_name: "Elaphria grata"
+}
+item {
+ name: "143002"
+ id: 1229
+ display_name: "Orthonama obstipata"
+}
+item {
+ name: "11931"
+ id: 1230
+ display_name: "Tachycineta thalassina"
+}
+item {
+ name: "143005"
+ id: 1231
+ display_name: "Costaconvexa centrostrigaria"
+}
+item {
+ name: "3743"
+ id: 1232
+ display_name: "Bostrychia hagedash"
+}
+item {
+ name: "143009"
+ id: 1233
+ display_name: "Ectropis crepuscularia"
+}
+item {
+ name: "36514"
+ id: 1234
+ display_name: "Anolis carolinensis"
+}
+item {
+ name: "143012"
+ id: 1235
+ display_name: "Zanclognatha pedipilalis"
+}
+item {
+ name: "11941"
+ id: 1236
+ display_name: "Riparia riparia"
+}
+item {
+ name: "52902"
+ id: 1237
+ display_name: "Palthis asopialis"
+}
+item {
+ name: "3751"
+ id: 1238
+ display_name: "Eudocimus albus"
+}
+item {
+ name: "52906"
+ id: 1239
+ display_name: "Chytonix palliatricula"
+}
+item {
+ name: "3756"
+ id: 1240
+ display_name: "Plegadis falcinellus"
+}
+item {
+ name: "3759"
+ id: 1241
+ display_name: "Plegadis chihi"
+}
+item {
+ name: "143024"
+ id: 1242
+ display_name: "Eusarca confusaria"
+}
+item {
+ name: "62067"
+ id: 1243
+ display_name: "Orthetrum cancellatum"
+}
+item {
+ name: "28340"
+ id: 1244
+ display_name: "Thamnophis sauritus"
+}
+item {
+ name: "28345"
+ id: 1245
+ display_name: "Thamnophis cyrtopsis"
+}
+item {
+ name: "143034"
+ id: 1246
+ display_name: "Hippodamia variegata"
+}
+item {
+ name: "28347"
+ id: 1247
+ display_name: "Thamnophis cyrtopsis ocellatus"
+}
+item {
+ name: "52925"
+ id: 1248
+ display_name: "Phyciodes tharos"
+}
+item {
+ name: "8010"
+ id: 1249
+ display_name: "Corvus corax"
+}
+item {
+ name: "11970"
+ id: 1250
+ display_name: "Stelgidopteryx serripennis"
+}
+item {
+ name: "28362"
+ id: 1251
+ display_name: "Thamnophis sirtalis"
+}
+item {
+ name: "3788"
+ id: 1252
+ display_name: "Sula dactylatra"
+}
+item {
+ name: "44749"
+ id: 1253
+ display_name: "Neotoma fuscipes"
+}
+item {
+ name: "52943"
+ id: 1254
+ display_name: "Trichodezia albovittata"
+}
+item {
+ name: "3793"
+ id: 1255
+ display_name: "Sula sula"
+}
+item {
+ name: "101667"
+ id: 1256
+ display_name: "Gomphus exilis"
+}
+item {
+ name: "3797"
+ id: 1257
+ display_name: "Sula leucogaster"
+}
+item {
+ name: "118486"
+ id: 1258
+ display_name: "Macaria aemulataria"
+}
+item {
+ name: "3801"
+ id: 1259
+ display_name: "Morus serrator"
+}
+item {
+ name: "28378"
+ id: 1260
+ display_name: "Thamnophis radix"
+}
+item {
+ name: "118492"
+ id: 1261
+ display_name: "Helicoverpa zea"
+}
+item {
+ name: "148793"
+ id: 1262
+ display_name: "Asterocampa leilia"
+}
+item {
+ name: "28384"
+ id: 1263
+ display_name: "Thamnophis proximus rubrilineatus"
+}
+item {
+ name: "257761"
+ id: 1264
+ display_name: "Phocides polybius"
+}
+item {
+ name: "28387"
+ id: 1265
+ display_name: "Thamnophis proximus orarius"
+}
+item {
+ name: "28390"
+ id: 1266
+ display_name: "Thamnophis marcianus"
+}
+item {
+ name: "118503"
+ id: 1267
+ display_name: "Darapsa myron"
+}
+item {
+ name: "3817"
+ id: 1268
+ display_name: "Eudyptula minor"
+}
+item {
+ name: "36135"
+ id: 1269
+ display_name: "Uma scoparia"
+}
+item {
+ name: "28396"
+ id: 1270
+ display_name: "Thamnophis hammondii"
+}
+item {
+ name: "28400"
+ id: 1271
+ display_name: "Thamnophis elegans elegans"
+}
+item {
+ name: "118513"
+ id: 1272
+ display_name: "Hypena scabra"
+}
+item {
+ name: "28403"
+ id: 1273
+ display_name: "Thamnophis elegans vagrans"
+}
+item {
+ name: "201342"
+ id: 1274
+ display_name: "Chalcoela iphitalis"
+}
+item {
+ name: "3831"
+ id: 1275
+ display_name: "Megadyptes antipodes"
+}
+item {
+ name: "126712"
+ id: 1276
+ display_name: "Corydalus cornutus"
+}
+item {
+ name: "30676"
+ id: 1277
+ display_name: "Agkistrodon piscivorus leucostoma"
+}
+item {
+ name: "3834"
+ id: 1278
+ display_name: "Scopus umbretta"
+}
+item {
+ name: "213631"
+ id: 1279
+ display_name: "Anicla infecta"
+}
+item {
+ name: "143105"
+ id: 1280
+ display_name: "Pleuroprucha insulsaria"
+}
+item {
+ name: "28418"
+ id: 1281
+ display_name: "Thamnophis atratus"
+}
+item {
+ name: "118531"
+ id: 1282
+ display_name: "Parallelia bistriaris"
+}
+item {
+ name: "145363"
+ id: 1283
+ display_name: "Troglodytes troglodytes"
+}
+item {
+ name: "3845"
+ id: 1284
+ display_name: "Calidris canutus"
+}
+item {
+ name: "12038"
+ id: 1285
+ display_name: "Lanius collurio"
+}
+item {
+ name: "143114"
+ id: 1286
+ display_name: "Phragmatobia fuliginosa"
+}
+item {
+ name: "3851"
+ id: 1287
+ display_name: "Calidris bairdii"
+}
+item {
+ name: "324226"
+ id: 1288
+ display_name: "Meleagris gallopavo intermedia"
+}
+item {
+ name: "143118"
+ id: 1289
+ display_name: "Pseudeustrotia carneola"
+}
+item {
+ name: "3855"
+ id: 1290
+ display_name: "Calidris mauri"
+}
+item {
+ name: "3856"
+ id: 1291
+ display_name: "Calidris maritima"
+}
+item {
+ name: "3857"
+ id: 1292
+ display_name: "Calidris alpina"
+}
+item {
+ name: "143124"
+ id: 1293
+ display_name: "Parapediasia teterrella"
+}
+item {
+ name: "143125"
+ id: 1294
+ display_name: "Hypena madefactalis"
+}
+item {
+ name: "3863"
+ id: 1295
+ display_name: "Calidris ferruginea"
+}
+item {
+ name: "118552"
+ id: 1296
+ display_name: "Felis catus"
+}
+item {
+ name: "3865"
+ id: 1297
+ display_name: "Calidris melanotos"
+}
+item {
+ name: "3869"
+ id: 1298
+ display_name: "Limnodromus griseus"
+}
+item {
+ name: "118558"
+ id: 1299
+ display_name: "Manduca quinquemaculata"
+}
+item {
+ name: "118559"
+ id: 1300
+ display_name: "Tetraopes tetrophthalmus"
+}
+item {
+ name: "12065"
+ id: 1301
+ display_name: "Malurus cyaneus"
+}
+item {
+ name: "3878"
+ id: 1302
+ display_name: "Tringa nebularia"
+}
+item {
+ name: "101681"
+ id: 1303
+ display_name: "Gomphus militaris"
+}
+item {
+ name: "413483"
+ id: 1304
+ display_name: "Todiramphus sanctus vagans"
+}
+item {
+ name: "3885"
+ id: 1305
+ display_name: "Tringa ochropus"
+}
+item {
+ name: "3888"
+ id: 1306
+ display_name: "Tringa glareola"
+}
+item {
+ name: "126770"
+ id: 1307
+ display_name: "Vulpes vulpes fulvus"
+}
+item {
+ name: "3892"
+ id: 1308
+ display_name: "Tringa melanoleuca"
+}
+item {
+ name: "3893"
+ id: 1309
+ display_name: "Tringa flavipes"
+}
+item {
+ name: "126775"
+ id: 1310
+ display_name: "Cervus elaphus nelsoni"
+}
+item {
+ name: "3896"
+ id: 1311
+ display_name: "Numenius arquata"
+}
+item {
+ name: "126777"
+ id: 1312
+ display_name: "Peucetia viridans"
+}
+item {
+ name: "3901"
+ id: 1313
+ display_name: "Numenius phaeopus"
+}
+item {
+ name: "32058"
+ id: 1314
+ display_name: "Elgaria multicarinata webbii"
+}
+item {
+ name: "413506"
+ id: 1315
+ display_name: "Phalacrocorax carbo novaehollandiae"
+}
+item {
+ name: "413508"
+ id: 1316
+ display_name: "Petroica macrocephala macrocephala"
+}
+item {
+ name: "413512"
+ id: 1317
+ display_name: "Petroica australis longipes"
+}
+item {
+ name: "61258"
+ id: 1318
+ display_name: "Junonia evarete"
+}
+item {
+ name: "28493"
+ id: 1319
+ display_name: "Tantilla nigriceps"
+}
+item {
+ name: "413522"
+ id: 1320
+ display_name: "Prosthemadera novaeseelandiae novaeseelandiae"
+}
+item {
+ name: "58506"
+ id: 1321
+ display_name: "Polites themistocles"
+}
+item {
+ name: "28505"
+ id: 1322
+ display_name: "Tantilla gracilis"
+}
+item {
+ name: "20315"
+ id: 1323
+ display_name: "Asio flammeus"
+}
+item {
+ name: "143196"
+ id: 1324
+ display_name: "Schinia arcigera"
+}
+item {
+ name: "413533"
+ id: 1325
+ display_name: "Rhipidura fuliginosa fuliginosa"
+}
+item {
+ name: "3936"
+ id: 1326
+ display_name: "Scolopax minor"
+}
+item {
+ name: "3938"
+ id: 1327
+ display_name: "Arenaria interpres"
+}
+item {
+ name: "3941"
+ id: 1328
+ display_name: "Arenaria melanocephala"
+}
+item {
+ name: "413543"
+ id: 1329
+ display_name: "Rhipidura fuliginosa placabilis"
+}
+item {
+ name: "3947"
+ id: 1330
+ display_name: "Limosa limosa"
+}
+item {
+ name: "3950"
+ id: 1331
+ display_name: "Limosa haemastica"
+}
+item {
+ name: "126269"
+ id: 1332
+ display_name: "Austrolestes colensonis"
+}
+item {
+ name: "3954"
+ id: 1333
+ display_name: "Limosa fedoa"
+}
+item {
+ name: "199998"
+ id: 1334
+ display_name: "Pedicia albivitta"
+}
+item {
+ name: "3959"
+ id: 1335
+ display_name: "Phalaropus lobatus"
+}
+item {
+ name: "3962"
+ id: 1336
+ display_name: "Bartramia longicauda"
+}
+item {
+ name: "199999"
+ id: 1337
+ display_name: "Callopistria mollissima"
+}
+item {
+ name: "104426"
+ id: 1338
+ display_name: "Lestes disjunctus"
+}
+item {
+ name: "126848"
+ id: 1339
+ display_name: "Delphinia picta"
+}
+item {
+ name: "3951"
+ id: 1340
+ display_name: "Limosa lapponica"
+}
+item {
+ name: "20356"
+ id: 1341
+ display_name: "Aegolius acadicus"
+}
+item {
+ name: "121792"
+ id: 1342
+ display_name: "Polistes carolina"
+}
+item {
+ name: "3978"
+ id: 1343
+ display_name: "Actitis hypoleucos"
+}
+item {
+ name: "53911"
+ id: 1344
+ display_name: "Cyprinus carpio"
+}
+item {
+ name: "135055"
+ id: 1345
+ display_name: "Bufotes balearicus"
+}
+item {
+ name: "19121"
+ id: 1346
+ display_name: "Trichoglossus haematodus"
+}
+item {
+ name: "28562"
+ id: 1347
+ display_name: "Storeria dekayi"
+}
+item {
+ name: "28563"
+ id: 1348
+ display_name: "Storeria dekayi texana"
+}
+item {
+ name: "20372"
+ id: 1349
+ display_name: "Surnia ulula"
+}
+item {
+ name: "135064"
+ id: 1350
+ display_name: "Bufotes viridis"
+}
+item {
+ name: "28570"
+ id: 1351
+ display_name: "Storeria dekayi dekayi"
+}
+item {
+ name: "61341"
+ id: 1352
+ display_name: "Narceus americanus"
+}
+item {
+ name: "7493"
+ id: 1353
+ display_name: "Polioptila caerulea"
+}
+item {
+ name: "29339"
+ id: 1354
+ display_name: "Natrix natrix"
+}
+item {
+ name: "9135"
+ id: 1355
+ display_name: "Spizella passerina"
+}
+item {
+ name: "126889"
+ id: 1356
+ display_name: "Toxomerus marginatus"
+}
+item {
+ name: "143274"
+ id: 1357
+ display_name: "Gluphisia septentrionis"
+}
+item {
+ name: "343021"
+ id: 1358
+ display_name: "Anguis fragilis"
+}
+item {
+ name: "14591"
+ id: 1359
+ display_name: "Pycnonotus jocosus"
+}
+item {
+ name: "10227"
+ id: 1360
+ display_name: "Passerina cyanea"
+}
+item {
+ name: "10228"
+ id: 1361
+ display_name: "Passerina versicolor"
+}
+item {
+ name: "61371"
+ id: 1362
+ display_name: "Panulirus interruptus"
+}
+item {
+ name: "143294"
+ id: 1363
+ display_name: "Colias croceus"
+}
+item {
+ name: "135104"
+ id: 1364
+ display_name: "Ichthyosaura alpestris"
+}
+item {
+ name: "83958"
+ id: 1365
+ display_name: "Phryganidia californica"
+}
+item {
+ name: "143302"
+ id: 1366
+ display_name: "Megapallifera mutabilis"
+}
+item {
+ name: "12231"
+ id: 1367
+ display_name: "Manorina melanocephala"
+}
+item {
+ name: "200661"
+ id: 1368
+ display_name: "Coluber constrictor mormon"
+}
+item {
+ name: "3681"
+ id: 1369
+ display_name: "Ocyphaps lophotes"
+}
+item {
+ name: "4773"
+ id: 1370
+ display_name: "Jabiru mycteria"
+}
+item {
+ name: "135140"
+ id: 1371
+ display_name: "Taricha sierrae"
+}
+item {
+ name: "28649"
+ id: 1372
+ display_name: "Sonora semiannulata"
+}
+item {
+ name: "53226"
+ id: 1373
+ display_name: "Boisea rubrolineata"
+}
+item {
+ name: "53227"
+ id: 1374
+ display_name: "Boisea trivittata"
+}
+item {
+ name: "14593"
+ id: 1375
+ display_name: "Pycnonotus cafer"
+}
+item {
+ name: "61428"
+ id: 1376
+ display_name: "Arion subfuscus"
+}
+item {
+ name: "333822"
+ id: 1377
+ display_name: "Anser cygnoides domesticus"
+}
+item {
+ name: "41641"
+ id: 1378
+ display_name: "Ursus arctos"
+}
+item {
+ name: "56602"
+ id: 1379
+ display_name: "Plebejus lupini"
+}
+item {
+ name: "55295"
+ id: 1380
+ display_name: "Grapsus grapsus"
+}
+item {
+ name: "36181"
+ id: 1381
+ display_name: "Sceloporus cyanogenys"
+}
+item {
+ name: "41708"
+ id: 1382
+ display_name: "Phoca vitulina"
+}
+item {
+ name: "118788"
+ id: 1383
+ display_name: "Desmia funeralis"
+}
+item {
+ name: "61445"
+ id: 1384
+ display_name: "Acanthocephala terminalis"
+}
+item {
+ name: "30721"
+ id: 1385
+ display_name: "Crotalus triseriatus"
+}
+item {
+ name: "180010"
+ id: 1386
+ display_name: "Callospermophilus lateralis"
+}
+item {
+ name: "53875"
+ id: 1387
+ display_name: "Ocypode quadrata"
+}
+item {
+ name: "18358"
+ id: 1388
+ display_name: "Picus viridis"
+}
+item {
+ name: "143390"
+ id: 1389
+ display_name: "Oxidus gracilis"
+}
+item {
+ name: "55785"
+ id: 1390
+ display_name: "Ochlodes agricola"
+}
+item {
+ name: "4141"
+ id: 1391
+ display_name: "Phoebastria nigripes"
+}
+item {
+ name: "20526"
+ id: 1392
+ display_name: "Struthio camelus"
+}
+item {
+ name: "32093"
+ id: 1393
+ display_name: "Boa constrictor"
+}
+item {
+ name: "4144"
+ id: 1394
+ display_name: "Phoebastria immutabilis"
+}
+item {
+ name: "74442"
+ id: 1395
+ display_name: "Hydrochoerus hydrochaeris"
+}
+item {
+ name: "61492"
+ id: 1396
+ display_name: "Chrysopilus thoracicus"
+}
+item {
+ name: "61495"
+ id: 1397
+ display_name: "Erythemis simplicicollis"
+}
+item {
+ name: "389177"
+ id: 1398
+ display_name: "Eriophora pustulosa"
+}
+item {
+ name: "61503"
+ id: 1399
+ display_name: "Ascalapha odorata"
+}
+item {
+ name: "118855"
+ id: 1400
+ display_name: "Calosoma scrutator"
+}
+item {
+ name: "61513"
+ id: 1401
+ display_name: "Adelges tsugae"
+}
+item {
+ name: "28749"
+ id: 1402
+ display_name: "Salvadora grahamiae"
+}
+item {
+ name: "143440"
+ id: 1403
+ display_name: "Ceratomia catalpae"
+}
+item {
+ name: "61523"
+ id: 1404
+ display_name: "Helix pomatia"
+}
+item {
+ name: "4180"
+ id: 1405
+ display_name: "Fulmarus glacialis"
+}
+item {
+ name: "143445"
+ id: 1406
+ display_name: "Pachysphinx modesta"
+}
+item {
+ name: "233560"
+ id: 1407
+ display_name: "Vespula squamosa"
+}
+item {
+ name: "126308"
+ id: 1408
+ display_name: "Marpesia chiron"
+}
+item {
+ name: "61536"
+ id: 1409
+ display_name: "Calopteryx virgo"
+}
+item {
+ name: "685"
+ id: 1410
+ display_name: "Francolinus pondicerianus"
+}
+item {
+ name: "60774"
+ id: 1411
+ display_name: "Psychomorpha epimenis"
+}
+item {
+ name: "135271"
+ id: 1412
+ display_name: "Amphibolips confluenta"
+}
+item {
+ name: "69736"
+ id: 1413
+ display_name: "Schistocerca americana"
+}
+item {
+ name: "69737"
+ id: 1414
+ display_name: "Xylophanes tersa"
+}
+item {
+ name: "6141"
+ id: 1415
+ display_name: "Cynanthus latirostris"
+}
+item {
+ name: "4205"
+ id: 1416
+ display_name: "Podiceps nigricollis"
+}
+item {
+ name: "69743"
+ id: 1417
+ display_name: "Wallengrenia otho"
+}
+item {
+ name: "4208"
+ id: 1418
+ display_name: "Podiceps cristatus"
+}
+item {
+ name: "4209"
+ id: 1419
+ display_name: "Podiceps auritus"
+}
+item {
+ name: "118901"
+ id: 1420
+ display_name: "Hyles gallii"
+}
+item {
+ name: "17871"
+ id: 1421
+ display_name: "Dendrocopos major"
+}
+item {
+ name: "143484"
+ id: 1422
+ display_name: "Blepharomastix ranalis"
+}
+item {
+ name: "4224"
+ id: 1423
+ display_name: "Podiceps grisegena"
+}
+item {
+ name: "200834"
+ id: 1424
+ display_name: "Sphenodon punctatus"
+}
+item {
+ name: "179995"
+ id: 1425
+ display_name: "Urocitellus beldingi"
+}
+item {
+ name: "322024"
+ id: 1426
+ display_name: "Apatura ilia"
+}
+item {
+ name: "44396"
+ id: 1427
+ display_name: "Peromyscus maniculatus"
+}
+item {
+ name: "4237"
+ id: 1428
+ display_name: "Tachybaptus ruficollis"
+}
+item {
+ name: "118930"
+ id: 1429
+ display_name: "Spodoptera ornithogalli"
+}
+item {
+ name: "118936"
+ id: 1430
+ display_name: "Euplagia quadripunctaria"
+}
+item {
+ name: "4804"
+ id: 1431
+ display_name: "Charadrius montanus"
+}
+item {
+ name: "127133"
+ id: 1432
+ display_name: "Hyphantria cunea"
+}
+item {
+ name: "143518"
+ id: 1433
+ display_name: "Prochoerodes lineola"
+}
+item {
+ name: "52592"
+ id: 1434
+ display_name: "Pararge aegeria"
+}
+item {
+ name: "36149"
+ id: 1435
+ display_name: "Sceloporus torquatus"
+}
+item {
+ name: "118951"
+ id: 1436
+ display_name: "Pterophylla camellifolia"
+}
+item {
+ name: "4265"
+ id: 1437
+ display_name: "Phalacrocorax auritus"
+}
+item {
+ name: "4270"
+ id: 1438
+ display_name: "Phalacrocorax carbo"
+}
+item {
+ name: "446640"
+ id: 1439
+ display_name: "Neomonachus schauinslandi"
+}
+item {
+ name: "118961"
+ id: 1440
+ display_name: "Conocephalus brevipennis"
+}
+item {
+ name: "28850"
+ id: 1441
+ display_name: "Regina septemvittata"
+}
+item {
+ name: "4277"
+ id: 1442
+ display_name: "Phalacrocorax penicillatus"
+}
+item {
+ name: "4234"
+ id: 1443
+ display_name: "Aechmophorus clarkii"
+}
+item {
+ name: "118967"
+ id: 1444
+ display_name: "Psyllobora vigintimaculata"
+}
+item {
+ name: "118968"
+ id: 1445
+ display_name: "Allograpta obliqua"
+}
+item {
+ name: "118970"
+ id: 1446
+ display_name: "Bombus impatiens"
+}
+item {
+ name: "123594"
+ id: 1447
+ display_name: "Anaxyrus americanus americanus"
+}
+item {
+ name: "69838"
+ id: 1448
+ display_name: "Cyanea capillata"
+}
+item {
+ name: "69844"
+ id: 1449
+ display_name: "Anthocharis midea"
+}
+item {
+ name: "48505"
+ id: 1450
+ display_name: "Junonia coenia"
+}
+item {
+ name: "151769"
+ id: 1451
+ display_name: "Diaphania hyalinata"
+}
+item {
+ name: "151770"
+ id: 1452
+ display_name: "Peridea angulosa"
+}
+item {
+ name: "53467"
+ id: 1453
+ display_name: "Leucauge venusta"
+}
+item {
+ name: "119013"
+ id: 1454
+ display_name: "Ctenucha virginica"
+}
+item {
+ name: "4327"
+ id: 1455
+ display_name: "Pelecanus onocrotalus"
+}
+item {
+ name: "143592"
+ id: 1456
+ display_name: "Spragueia leo"
+}
+item {
+ name: "200938"
+ id: 1457
+ display_name: "Diaethria anna"
+}
+item {
+ name: "4334"
+ id: 1458
+ display_name: "Pelecanus erythrorhynchos"
+}
+item {
+ name: "151794"
+ id: 1459
+ display_name: "Atta texana"
+}
+item {
+ name: "3454"
+ id: 1460
+ display_name: "Zenaida macroura"
+}
+item {
+ name: "4872"
+ id: 1461
+ display_name: "Vanellus miles"
+}
+item {
+ name: "4345"
+ id: 1462
+ display_name: "Larus occidentalis"
+}
+item {
+ name: "143610"
+ id: 1463
+ display_name: "Besma quercivoraria"
+}
+item {
+ name: "20733"
+ id: 1464
+ display_name: "Trogon massena"
+}
+item {
+ name: "143615"
+ id: 1465
+ display_name: "Udea rubigalis"
+}
+item {
+ name: "4352"
+ id: 1466
+ display_name: "Larus thayeri"
+}
+item {
+ name: "4353"
+ id: 1467
+ display_name: "Larus heermanni"
+}
+item {
+ name: "4354"
+ id: 1468
+ display_name: "Larus livens"
+}
+item {
+ name: "4356"
+ id: 1469
+ display_name: "Larus canus"
+}
+item {
+ name: "220826"
+ id: 1470
+ display_name: "Habrosyne scripta"
+}
+item {
+ name: "4361"
+ id: 1471
+ display_name: "Larus glaucoides"
+}
+item {
+ name: "4364"
+ id: 1472
+ display_name: "Larus delawarensis"
+}
+item {
+ name: "102672"
+ id: 1473
+ display_name: "Hetaerina titia"
+}
+item {
+ name: "20754"
+ id: 1474
+ display_name: "Trogon collaris"
+}
+item {
+ name: "479512"
+ id: 1475
+ display_name: "Acronicta fallax"
+}
+item {
+ name: "3460"
+ id: 1476
+ display_name: "Zenaida asiatica"
+}
+item {
+ name: "119066"
+ id: 1477
+ display_name: "Idia lubricalis"
+}
+item {
+ name: "119068"
+ id: 1478
+ display_name: "Apodemia virgulti"
+}
+item {
+ name: "4381"
+ id: 1479
+ display_name: "Larus fuscus"
+}
+item {
+ name: "4385"
+ id: 1480
+ display_name: "Larus californicus"
+}
+item {
+ name: "69922"
+ id: 1481
+ display_name: "Oncorhynchus nerka"
+}
+item {
+ name: "12580"
+ id: 1482
+ display_name: "Prosthemadera novaeseelandiae"
+}
+item {
+ name: "69925"
+ id: 1483
+ display_name: "Clinocardium nuttallii"
+}
+item {
+ name: "20781"
+ id: 1484
+ display_name: "Trogon elegans"
+}
+item {
+ name: "4399"
+ id: 1485
+ display_name: "Larus glaucescens"
+}
+item {
+ name: "94513"
+ id: 1486
+ display_name: "Archilestes grandis"
+}
+item {
+ name: "119090"
+ id: 1487
+ display_name: "Eremnophila aureonotata"
+}
+item {
+ name: "20787"
+ id: 1488
+ display_name: "Trogon citreolus"
+}
+item {
+ name: "69940"
+ id: 1489
+ display_name: "Hemiargus ceraunus"
+}
+item {
+ name: "61749"
+ id: 1490
+ display_name: "Lucanus cervus"
+}
+item {
+ name: "4415"
+ id: 1491
+ display_name: "Cepphus columba"
+}
+item {
+ name: "4832"
+ id: 1492
+ display_name: "Himantopus leucocephalus"
+}
+item {
+ name: "4418"
+ id: 1493
+ display_name: "Cepphus grylle"
+}
+item {
+ name: "12612"
+ id: 1494
+ display_name: "Anthornis melanura"
+}
+item {
+ name: "125627"
+ id: 1495
+ display_name: "Ellychnia corrusca"
+}
+item {
+ name: "201031"
+ id: 1496
+ display_name: "Leptoptilos crumenifer"
+}
+item {
+ name: "201032"
+ id: 1497
+ display_name: "Threskiornis moluccus"
+}
+item {
+ name: "60812"
+ id: 1498
+ display_name: "Lucanus capreolus"
+}
+item {
+ name: "10295"
+ id: 1499
+ display_name: "Thraupis episcopus"
+}
+item {
+ name: "209233"
+ id: 1500
+ display_name: "Equus caballus"
+}
+item {
+ name: "119122"
+ id: 1501
+ display_name: "Araneus trifolium"
+}
+item {
+ name: "201043"
+ id: 1502
+ display_name: "Geranoaetus albicaudatus"
+}
+item {
+ name: "61781"
+ id: 1503
+ display_name: "Ochlodes sylvanus"
+}
+item {
+ name: "49133"
+ id: 1504
+ display_name: "Vanessa atalanta"
+}
+item {
+ name: "94556"
+ id: 1505
+ display_name: "Argia lugens"
+}
+item {
+ name: "94557"
+ id: 1506
+ display_name: "Argia moesta"
+}
+item {
+ name: "61524"
+ id: 1507
+ display_name: "Forficula auricularia"
+}
+item {
+ name: "4449"
+ id: 1508
+ display_name: "Sterna paradisaea"
+}
+item {
+ name: "4450"
+ id: 1509
+ display_name: "Sterna hirundo"
+}
+item {
+ name: "348515"
+ id: 1510
+ display_name: "Nyctemera annulata"
+}
+item {
+ name: "110625"
+ id: 1511
+ display_name: "Progomphus obscurus"
+}
+item {
+ name: "94566"
+ id: 1512
+ display_name: "Argia plana"
+}
+item {
+ name: "4457"
+ id: 1513
+ display_name: "Sterna forsteri"
+}
+item {
+ name: "94571"
+ id: 1514
+ display_name: "Argia sedula"
+}
+item {
+ name: "61804"
+ id: 1515
+ display_name: "Olivella biplicata"
+}
+item {
+ name: "204532"
+ id: 1516
+ display_name: "Lanius excubitor"
+}
+item {
+ name: "29038"
+ id: 1517
+ display_name: "Pituophis deppei"
+}
+item {
+ name: "143728"
+ id: 1518
+ display_name: "Choristoneura rosaceana"
+}
+item {
+ name: "94577"
+ id: 1519
+ display_name: "Argia translata"
+}
+item {
+ name: "130451"
+ id: 1520
+ display_name: "Dione juno"
+}
+item {
+ name: "29044"
+ id: 1521
+ display_name: "Pituophis catenifer"
+}
+item {
+ name: "70005"
+ id: 1522
+ display_name: "Ilyanassa obsoleta"
+}
+item {
+ name: "143734"
+ id: 1523
+ display_name: "Eupithecia miserulata"
+}
+item {
+ name: "20856"
+ id: 1524
+ display_name: "Pharomachrus mocinno"
+}
+item {
+ name: "29049"
+ id: 1525
+ display_name: "Pituophis catenifer deserticola"
+}
+item {
+ name: "29052"
+ id: 1526
+ display_name: "Pituophis catenifer affinis"
+}
+item {
+ name: "29053"
+ id: 1527
+ display_name: "Pituophis catenifer annectens"
+}
+item {
+ name: "4478"
+ id: 1528
+ display_name: "Sterna striata"
+}
+item {
+ name: "407459"
+ id: 1529
+ display_name: "Dolomedes minor"
+}
+item {
+ name: "4489"
+ id: 1530
+ display_name: "Stercorarius parasiticus"
+}
+item {
+ name: "4491"
+ id: 1531
+ display_name: "Stercorarius pomarinus"
+}
+item {
+ name: "6969"
+ id: 1532
+ display_name: "Anas gracilis"
+}
+item {
+ name: "4494"
+ id: 1533
+ display_name: "Rissa tridactyla"
+}
+item {
+ name: "4496"
+ id: 1534
+ display_name: "Rynchops niger"
+}
+item {
+ name: "4501"
+ id: 1535
+ display_name: "Alca torda"
+}
+item {
+ name: "4504"
+ id: 1536
+ display_name: "Fratercula arctica"
+}
+item {
+ name: "4509"
+ id: 1537
+ display_name: "Fratercula cirrhata"
+}
+item {
+ name: "26693"
+ id: 1538
+ display_name: "Scaphiopus hurterii"
+}
+item {
+ name: "94624"
+ id: 1539
+ display_name: "Arigomphus submedianus"
+}
+item {
+ name: "94625"
+ id: 1540
+ display_name: "Arigomphus villosipes"
+}
+item {
+ name: "120720"
+ id: 1541
+ display_name: "Pseudacris sierra"
+}
+item {
+ name: "70057"
+ id: 1542
+ display_name: "Agrilus planipennis"
+}
+item {
+ name: "127402"
+ id: 1543
+ display_name: "Grammia virgo"
+}
+item {
+ name: "51271"
+ id: 1544
+ display_name: "Trachemys scripta elegans"
+}
+item {
+ name: "12716"
+ id: 1545
+ display_name: "Turdus merula"
+}
+item {
+ name: "12718"
+ id: 1546
+ display_name: "Turdus plumbeus"
+}
+item {
+ name: "12720"
+ id: 1547
+ display_name: "Turdus grayi"
+}
+item {
+ name: "63697"
+ id: 1548
+ display_name: "Metacarcinus magister"
+}
+item {
+ name: "12727"
+ id: 1549
+ display_name: "Turdus migratorius"
+}
+item {
+ name: "26698"
+ id: 1550
+ display_name: "Spea multiplicata"
+}
+item {
+ name: "12735"
+ id: 1551
+ display_name: "Turdus viscivorus"
+}
+item {
+ name: "26699"
+ id: 1552
+ display_name: "Spea bombifrons"
+}
+item {
+ name: "127431"
+ id: 1553
+ display_name: "Emmelina monodactyla"
+}
+item {
+ name: "4553"
+ id: 1554
+ display_name: "Cerorhinca monocerata"
+}
+item {
+ name: "12748"
+ id: 1555
+ display_name: "Turdus philomelos"
+}
+item {
+ name: "233933"
+ id: 1556
+ display_name: "Zale horrida"
+}
+item {
+ name: "1468"
+ id: 1557
+ display_name: "Galbula ruficauda"
+}
+item {
+ name: "111055"
+ id: 1558
+ display_name: "Pseudoleon superbus"
+}
+item {
+ name: "61908"
+ id: 1559
+ display_name: "Orgyia vetusta"
+}
+item {
+ name: "43086"
+ id: 1560
+ display_name: "Procavia capensis"
+}
+item {
+ name: "143830"
+ id: 1561
+ display_name: "Eumorpha vitis"
+}
+item {
+ name: "67663"
+ id: 1562
+ display_name: "Leptysma marginicollis"
+}
+item {
+ name: "127457"
+ id: 1563
+ display_name: "Idia americalis"
+}
+item {
+ name: "4578"
+ id: 1564
+ display_name: "Jacana spinosa"
+}
+item {
+ name: "127460"
+ id: 1565
+ display_name: "Idia aemula"
+}
+item {
+ name: "201192"
+ id: 1566
+ display_name: "Saxicola rubicola"
+}
+item {
+ name: "20969"
+ id: 1567
+ display_name: "Upupa epops"
+}
+item {
+ name: "94699"
+ id: 1568
+ display_name: "Aspidoscelis marmorata"
+}
+item {
+ name: "10322"
+ id: 1569
+ display_name: "Euphagus carolinus"
+}
+item {
+ name: "53743"
+ id: 1570
+ display_name: "Uca pugilator"
+}
+item {
+ name: "61256"
+ id: 1571
+ display_name: "Leptoglossus phyllopus"
+}
+item {
+ name: "29438"
+ id: 1572
+ display_name: "Coluber flagellum piceus"
+}
+item {
+ name: "53750"
+ id: 1573
+ display_name: "Lottia gigantea"
+}
+item {
+ name: "143865"
+ id: 1574
+ display_name: "Odocoileus hemionus hemionus"
+}
+item {
+ name: "143867"
+ id: 1575
+ display_name: "Protoboarmia porcelaria"
+}
+item {
+ name: "209405"
+ id: 1576
+ display_name: "Cenopis reticulatana"
+}
+item {
+ name: "49920"
+ id: 1577
+ display_name: "Nymphalis californica"
+}
+item {
+ name: "53762"
+ id: 1578
+ display_name: "Scolopendra polymorpha"
+}
+item {
+ name: "127492"
+ id: 1579
+ display_name: "Megalographa biloba"
+}
+item {
+ name: "62470"
+ id: 1580
+ display_name: "Limax maximus"
+}
+item {
+ name: "4621"
+ id: 1581
+ display_name: "Gavia pacifica"
+}
+item {
+ name: "14884"
+ id: 1582
+ display_name: "Mimus gilvus"
+}
+item {
+ name: "29200"
+ id: 1583
+ display_name: "Opheodrys aestivus"
+}
+item {
+ name: "201233"
+ id: 1584
+ display_name: "Passer italiae"
+}
+item {
+ name: "4626"
+ id: 1585
+ display_name: "Gavia immer"
+}
+item {
+ name: "4627"
+ id: 1586
+ display_name: "Gavia stellata"
+}
+item {
+ name: "12822"
+ id: 1587
+ display_name: "Oenanthe oenanthe"
+}
+item {
+ name: "4631"
+ id: 1588
+ display_name: "Fregata magnificens"
+}
+item {
+ name: "4636"
+ id: 1589
+ display_name: "Fregata minor"
+}
+item {
+ name: "70174"
+ id: 1590
+ display_name: "Hypolimnas bolina"
+}
+item {
+ name: "4643"
+ id: 1591
+ display_name: "Falco subbuteo"
+}
+item {
+ name: "4644"
+ id: 1592
+ display_name: "Falco mexicanus"
+}
+item {
+ name: "4645"
+ id: 1593
+ display_name: "Falco femoralis"
+}
+item {
+ name: "4647"
+ id: 1594
+ display_name: "Falco peregrinus"
+}
+item {
+ name: "119340"
+ id: 1595
+ display_name: "Amphipyra pyramidoides"
+}
+item {
+ name: "61997"
+ id: 1596
+ display_name: "Steatoda grossa"
+}
+item {
+ name: "70191"
+ id: 1597
+ display_name: "Ischnura ramburii"
+}
+item {
+ name: "53809"
+ id: 1598
+ display_name: "Phidippus audax"
+}
+item {
+ name: "143213"
+ id: 1599
+ display_name: "Frontinella communis"
+}
+item {
+ name: "4664"
+ id: 1600
+ display_name: "Falco rufigularis"
+}
+item {
+ name: "4665"
+ id: 1601
+ display_name: "Falco sparverius"
+}
+item {
+ name: "19893"
+ id: 1602
+ display_name: "Strix varia"
+}
+item {
+ name: "4672"
+ id: 1603
+ display_name: "Falco columbarius"
+}
+item {
+ name: "201281"
+ id: 1604
+ display_name: "Phyllodesma americana"
+}
+item {
+ name: "201282"
+ id: 1605
+ display_name: "Gallinula chloropus"
+}
+item {
+ name: "152131"
+ id: 1606
+ display_name: "Bagrada hilaris"
+}
+item {
+ name: "145276"
+ id: 1607
+ display_name: "Cardellina pusilla"
+}
+item {
+ name: "12878"
+ id: 1608
+ display_name: "Catharus ustulatus"
+}
+item {
+ name: "4690"
+ id: 1609
+ display_name: "Falco novaeseelandiae"
+}
+item {
+ name: "53843"
+ id: 1610
+ display_name: "Brephidium exilis"
+}
+item {
+ name: "36281"
+ id: 1611
+ display_name: "Sceloporus clarkii"
+}
+item {
+ name: "12890"
+ id: 1612
+ display_name: "Catharus guttatus"
+}
+item {
+ name: "62045"
+ id: 1613
+ display_name: "Lygaeus kalmii"
+}
+item {
+ name: "47075"
+ id: 1614
+ display_name: "Dasypus novemcinctus"
+}
+item {
+ name: "12901"
+ id: 1615
+ display_name: "Catharus fuscescens"
+}
+item {
+ name: "4714"
+ id: 1616
+ display_name: "Caracara cheriway"
+}
+item {
+ name: "53867"
+ id: 1617
+ display_name: "Erythemis plebeja"
+}
+item {
+ name: "62060"
+ id: 1618
+ display_name: "Palomena prasina"
+}
+item {
+ name: "53869"
+ id: 1619
+ display_name: "Ocypus olens"
+}
+item {
+ name: "4719"
+ id: 1620
+ display_name: "Herpetotheres cachinnans"
+}
+item {
+ name: "116840"
+ id: 1621
+ display_name: "Calcarius lapponicus"
+}
+item {
+ name: "4726"
+ id: 1622
+ display_name: "Milvago chimachima"
+}
+item {
+ name: "29304"
+ id: 1623
+ display_name: "Nerodia taxispilota"
+}
+item {
+ name: "29305"
+ id: 1624
+ display_name: "Nerodia sipedon"
+}
+item {
+ name: "29306"
+ id: 1625
+ display_name: "Nerodia sipedon sipedon"
+}
+item {
+ name: "142783"
+ id: 1626
+ display_name: "Myodocha serripes"
+}
+item {
+ name: "4733"
+ id: 1627
+ display_name: "Ciconia ciconia"
+}
+item {
+ name: "29310"
+ id: 1628
+ display_name: "Nerodia rhombifer"
+}
+item {
+ name: "201343"
+ id: 1629
+ display_name: "Lithacodes fasciola"
+}
+item {
+ name: "21121"
+ id: 1630
+ display_name: "Dendrobates auratus"
+}
+item {
+ name: "127618"
+ id: 1631
+ display_name: "Epirrhoe alternata"
+}
+item {
+ name: "43115"
+ id: 1632
+ display_name: "Sylvilagus audubonii"
+}
+item {
+ name: "29317"
+ id: 1633
+ display_name: "Nerodia fasciata"
+}
+item {
+ name: "4742"
+ id: 1634
+ display_name: "Mycteria americana"
+}
+item {
+ name: "53895"
+ id: 1635
+ display_name: "Stenopelmatus fuscus"
+}
+item {
+ name: "4744"
+ id: 1636
+ display_name: "Mycteria ibis"
+}
+item {
+ name: "12937"
+ id: 1637
+ display_name: "Sialia mexicana"
+}
+item {
+ name: "29322"
+ id: 1638
+ display_name: "Nerodia fasciata confluens"
+}
+item {
+ name: "29324"
+ id: 1639
+ display_name: "Nerodia clarkii clarkii"
+}
+item {
+ name: "29327"
+ id: 1640
+ display_name: "Nerodia cyclopion"
+}
+item {
+ name: "29328"
+ id: 1641
+ display_name: "Nerodia erythrogaster"
+}
+item {
+ name: "53905"
+ id: 1642
+ display_name: "Mantis religiosa"
+}
+item {
+ name: "4754"
+ id: 1643
+ display_name: "Ephippiorhynchus senegalensis"
+}
+item {
+ name: "127635"
+ id: 1644
+ display_name: "Plecia nearctica"
+}
+item {
+ name: "4756"
+ id: 1645
+ display_name: "Cathartes aura"
+}
+item {
+ name: "29334"
+ id: 1646
+ display_name: "Nerodia erythrogaster flavigaster"
+}
+item {
+ name: "12951"
+ id: 1647
+ display_name: "Myadestes townsendi"
+}
+item {
+ name: "4761"
+ id: 1648
+ display_name: "Cathartes burrovianus"
+}
+item {
+ name: "4763"
+ id: 1649
+ display_name: "Sarcoramphus papa"
+}
+item {
+ name: "4765"
+ id: 1650
+ display_name: "Coragyps atratus"
+}
+item {
+ name: "19890"
+ id: 1651
+ display_name: "Strix nebulosa"
+}
+item {
+ name: "26736"
+ id: 1652
+ display_name: "Ambystoma opacum"
+}
+item {
+ name: "66331"
+ id: 1653
+ display_name: "Pelophylax perezi"
+}
+item {
+ name: "4776"
+ id: 1654
+ display_name: "Anastomus lamelligerus"
+}
+item {
+ name: "4892"
+ id: 1655
+ display_name: "Pluvialis squatarola"
+}
+item {
+ name: "4778"
+ id: 1656
+ display_name: "Gymnogyps californianus"
+}
+item {
+ name: "12971"
+ id: 1657
+ display_name: "Muscicapa striata"
+}
+item {
+ name: "56776"
+ id: 1658
+ display_name: "Glaucopsyche lygdamus"
+}
+item {
+ name: "127669"
+ id: 1659
+ display_name: "Jadera haematoloma"
+}
+item {
+ name: "4793"
+ id: 1660
+ display_name: "Charadrius vociferus"
+}
+item {
+ name: "209594"
+ id: 1661
+ display_name: "Scantius aegyptius"
+}
+item {
+ name: "4795"
+ id: 1662
+ display_name: "Charadrius wilsonia"
+}
+item {
+ name: "48586"
+ id: 1663
+ display_name: "Cepaea nemoralis"
+}
+item {
+ name: "4798"
+ id: 1664
+ display_name: "Charadrius melodus"
+}
+item {
+ name: "12992"
+ id: 1665
+ display_name: "Phoenicurus phoenicurus"
+}
+item {
+ name: "45763"
+ id: 1666
+ display_name: "Ondatra zibethicus"
+}
+item {
+ name: "119492"
+ id: 1667
+ display_name: "Smerinthus cerisyi"
+}
+item {
+ name: "13000"
+ id: 1668
+ display_name: "Phoenicurus ochruros"
+}
+item {
+ name: "4811"
+ id: 1669
+ display_name: "Charadrius dubius"
+}
+item {
+ name: "64973"
+ id: 1670
+ display_name: "Anaxyrus cognatus"
+}
+item {
+ name: "2168"
+ id: 1671
+ display_name: "Eumomota superciliosa"
+}
+item {
+ name: "6980"
+ id: 1672
+ display_name: "Anas querquedula"
+}
+item {
+ name: "64975"
+ id: 1673
+ display_name: "Anaxyrus debilis"
+}
+item {
+ name: "43130"
+ id: 1674
+ display_name: "Lepus californicus"
+}
+item {
+ name: "67707"
+ id: 1675
+ display_name: "Argiope aurantia"
+}
+item {
+ name: "4836"
+ id: 1676
+ display_name: "Himantopus mexicanus"
+}
+item {
+ name: "4838"
+ id: 1677
+ display_name: "Haematopus bachmani"
+}
+item {
+ name: "43132"
+ id: 1678
+ display_name: "Lepus americanus"
+}
+item {
+ name: "144106"
+ id: 1679
+ display_name: "Pica pica"
+}
+item {
+ name: "4843"
+ id: 1680
+ display_name: "Haematopus ostralegus"
+}
+item {
+ name: "67709"
+ id: 1681
+ display_name: "Antrodiaetus riversi"
+}
+item {
+ name: "4848"
+ id: 1682
+ display_name: "Haematopus unicolor"
+}
+item {
+ name: "4857"
+ id: 1683
+ display_name: "Vanellus vanellus"
+}
+item {
+ name: "29435"
+ id: 1684
+ display_name: "Coluber flagellum testaceus"
+}
+item {
+ name: "119550"
+ id: 1685
+ display_name: "Feltia jaculifera"
+}
+item {
+ name: "4866"
+ id: 1686
+ display_name: "Vanellus spinosus"
+}
+item {
+ name: "4870"
+ id: 1687
+ display_name: "Vanellus armatus"
+}
+item {
+ name: "54024"
+ id: 1688
+ display_name: "Satyrium californica"
+}
+item {
+ name: "13071"
+ id: 1689
+ display_name: "Luscinia svecica"
+}
+item {
+ name: "3544"
+ id: 1690
+ display_name: "Columbina inca"
+}
+item {
+ name: "4883"
+ id: 1691
+ display_name: "Recurvirostra avosetta"
+}
+item {
+ name: "204701"
+ id: 1692
+ display_name: "Melanchra adjuncta"
+}
+item {
+ name: "56083"
+ id: 1693
+ display_name: "Armadillidium vulgare"
+}
+item {
+ name: "981"
+ id: 1694
+ display_name: "Phasianus colchicus"
+}
+item {
+ name: "4893"
+ id: 1695
+ display_name: "Pluvialis dominica"
+}
+item {
+ name: "103200"
+ id: 1696
+ display_name: "Hypsiglena jani"
+}
+item {
+ name: "127777"
+ id: 1697
+ display_name: "Vespula vulgaris"
+}
+item {
+ name: "7643"
+ id: 1698
+ display_name: "Cinclus mexicanus"
+}
+item {
+ name: "13094"
+ id: 1699
+ display_name: "Erithacus rubecula"
+}
+item {
+ name: "41777"
+ id: 1700
+ display_name: "Lontra canadensis"
+}
+item {
+ name: "64988"
+ id: 1701
+ display_name: "Anaxyrus terrestris"
+}
+item {
+ name: "18167"
+ id: 1702
+ display_name: "Melanerpes aurifrons"
+}
+item {
+ name: "54064"
+ id: 1703
+ display_name: "Polygonia comma"
+}
+item {
+ name: "209713"
+ id: 1704
+ display_name: "Phigalia titea"
+}
+item {
+ name: "54068"
+ id: 1705
+ display_name: "Boloria selene"
+}
+item {
+ name: "104585"
+ id: 1706
+ display_name: "Libellula semifasciata"
+}
+item {
+ name: "119608"
+ id: 1707
+ display_name: "Theba pisana"
+}
+item {
+ name: "4801"
+ id: 1708
+ display_name: "Charadrius hiaticula"
+}
+item {
+ name: "104586"
+ id: 1709
+ display_name: "Libellula vibrans"
+}
+item {
+ name: "4935"
+ id: 1710
+ display_name: "Egretta gularis"
+}
+item {
+ name: "4937"
+ id: 1711
+ display_name: "Egretta caerulea"
+}
+item {
+ name: "4938"
+ id: 1712
+ display_name: "Egretta tricolor"
+}
+item {
+ name: "4940"
+ id: 1713
+ display_name: "Egretta thula"
+}
+item {
+ name: "340813"
+ id: 1714
+ display_name: "Hyalymenus tarsatus"
+}
+item {
+ name: "4943"
+ id: 1715
+ display_name: "Egretta garzetta"
+}
+item {
+ name: "4947"
+ id: 1716
+ display_name: "Egretta sacra"
+}
+item {
+ name: "13141"
+ id: 1717
+ display_name: "Monticola solitarius"
+}
+item {
+ name: "4952"
+ id: 1718
+ display_name: "Ardea cocoi"
+}
+item {
+ name: "4954"
+ id: 1719
+ display_name: "Ardea cinerea"
+}
+item {
+ name: "67727"
+ id: 1720
+ display_name: "Aeshna umbrosa"
+}
+item {
+ name: "4956"
+ id: 1721
+ display_name: "Ardea herodias"
+}
+item {
+ name: "144223"
+ id: 1722
+ display_name: "Chlosyne theona"
+}
+item {
+ name: "201568"
+ id: 1723
+ display_name: "Diabrotica undecimpunctata undecimpunctata"
+}
+item {
+ name: "47383"
+ id: 1724
+ display_name: "Latrodectus geometricus"
+}
+item {
+ name: "119664"
+ id: 1725
+ display_name: "Cacyreus marshalli"
+}
+item {
+ name: "62321"
+ id: 1726
+ display_name: "Rutpela maculata"
+}
+item {
+ name: "217970"
+ id: 1727
+ display_name: "Cyclophora pendulinaria"
+}
+item {
+ name: "4981"
+ id: 1728
+ display_name: "Nycticorax nycticorax"
+}
+item {
+ name: "12714"
+ id: 1729
+ display_name: "Turdus rufopalliatus"
+}
+item {
+ name: "4994"
+ id: 1730
+ display_name: "Ardeola ralloides"
+}
+item {
+ name: "4999"
+ id: 1731
+ display_name: "Nyctanassa violacea"
+}
+item {
+ name: "37769"
+ id: 1732
+ display_name: "Plestiodon skiltonianus"
+}
+item {
+ name: "213826"
+ id: 1733
+ display_name: "Apamea amputatrix"
+}
+item {
+ name: "67736"
+ id: 1734
+ display_name: "Rhionaeschna californica"
+}
+item {
+ name: "155380"
+ id: 1735
+ display_name: "Andricus crystallinus"
+}
+item {
+ name: "144280"
+ id: 1736
+ display_name: "Aramides cajaneus"
+}
+item {
+ name: "5017"
+ id: 1737
+ display_name: "Bubulcus ibis"
+}
+item {
+ name: "5020"
+ id: 1738
+ display_name: "Butorides virescens"
+}
+item {
+ name: "144285"
+ id: 1739
+ display_name: "Porphyrio martinicus"
+}
+item {
+ name: "81729"
+ id: 1740
+ display_name: "Feniseca tarquinius"
+}
+item {
+ name: "127905"
+ id: 1741
+ display_name: "Bombus ternarius"
+}
+item {
+ name: "5034"
+ id: 1742
+ display_name: "Botaurus lentiginosus"
+}
+item {
+ name: "29330"
+ id: 1743
+ display_name: "Nerodia erythrogaster transversa"
+}
+item {
+ name: "5036"
+ id: 1744
+ display_name: "Cochlearius cochlearius"
+}
+item {
+ name: "46001"
+ id: 1745
+ display_name: "Sciurus vulgaris"
+}
+item {
+ name: "46005"
+ id: 1746
+ display_name: "Sciurus variegatoides"
+}
+item {
+ name: "127928"
+ id: 1747
+ display_name: "Autochton cellus"
+}
+item {
+ name: "340923"
+ id: 1748
+ display_name: "Scolypopa australis"
+}
+item {
+ name: "46017"
+ id: 1749
+ display_name: "Sciurus carolinensis"
+}
+item {
+ name: "46018"
+ id: 1750
+ display_name: "Sciurus aberti"
+}
+item {
+ name: "447427"
+ id: 1751
+ display_name: "Neverita lewisii"
+}
+item {
+ name: "46020"
+ id: 1752
+ display_name: "Sciurus niger"
+}
+item {
+ name: "5061"
+ id: 1753
+ display_name: "Anhinga novaehollandiae"
+}
+item {
+ name: "46023"
+ id: 1754
+ display_name: "Sciurus griseus"
+}
+item {
+ name: "122375"
+ id: 1755
+ display_name: "Carterocephalus palaemon"
+}
+item {
+ name: "5066"
+ id: 1756
+ display_name: "Anhinga rufa"
+}
+item {
+ name: "145289"
+ id: 1757
+ display_name: "Melozone fusca"
+}
+item {
+ name: "5074"
+ id: 1758
+ display_name: "Aquila chrysaetos"
+}
+item {
+ name: "49998"
+ id: 1759
+ display_name: "Thamnophis sirtalis infernalis"
+}
+item {
+ name: "13270"
+ id: 1760
+ display_name: "Hylocichla mustelina"
+}
+item {
+ name: "62423"
+ id: 1761
+ display_name: "Cimbex americana"
+}
+item {
+ name: "62424"
+ id: 1762
+ display_name: "Sitochroa palealis"
+}
+item {
+ name: "111578"
+ id: 1763
+ display_name: "Regina grahamii"
+}
+item {
+ name: "144207"
+ id: 1764
+ display_name: "Aphelocoma wollweberi"
+}
+item {
+ name: "62429"
+ id: 1765
+ display_name: "Pyronia tithonus"
+}
+item {
+ name: "47934"
+ id: 1766
+ display_name: "Libellula luctuosa"
+}
+item {
+ name: "50000"
+ id: 1767
+ display_name: "Clemmys guttata"
+}
+item {
+ name: "5097"
+ id: 1768
+ display_name: "Accipiter striatus"
+}
+item {
+ name: "119789"
+ id: 1769
+ display_name: "Cisseps fulvicollis"
+}
+item {
+ name: "5106"
+ id: 1770
+ display_name: "Accipiter nisus"
+}
+item {
+ name: "5108"
+ id: 1771
+ display_name: "Accipiter gentilis"
+}
+item {
+ name: "62456"
+ id: 1772
+ display_name: "Rhagonycha fulva"
+}
+item {
+ name: "4948"
+ id: 1773
+ display_name: "Egretta rufescens"
+}
+item {
+ name: "46082"
+ id: 1774
+ display_name: "Marmota marmota"
+}
+item {
+ name: "6990"
+ id: 1775
+ display_name: "Bucephala clangula"
+}
+item {
+ name: "4535"
+ id: 1776
+ display_name: "Anous stolidus"
+}
+item {
+ name: "46087"
+ id: 1777
+ display_name: "Marmota caligata"
+}
+item {
+ name: "72458"
+ id: 1778
+ display_name: "Actitis macularius"
+}
+item {
+ name: "4951"
+ id: 1779
+ display_name: "Ardea purpurea"
+}
+item {
+ name: "128012"
+ id: 1780
+ display_name: "Eumorpha fasciatus"
+}
+item {
+ name: "472078"
+ id: 1781
+ display_name: "Todiramphus chloris"
+}
+item {
+ name: "46095"
+ id: 1782
+ display_name: "Marmota monax"
+}
+item {
+ name: "34"
+ id: 1783
+ display_name: "Grus americana"
+}
+item {
+ name: "4835"
+ id: 1784
+ display_name: "Himantopus himantopus"
+}
+item {
+ name: "122374"
+ id: 1785
+ display_name: "Eurema mexicana"
+}
+item {
+ name: "19812"
+ id: 1786
+ display_name: "Glaucidium gnoma"
+}
+item {
+ name: "73823"
+ id: 1787
+ display_name: "Hierophis viridiflavus"
+}
+item {
+ name: "5168"
+ id: 1788
+ display_name: "Circus approximans"
+}
+item {
+ name: "143110"
+ id: 1789
+ display_name: "Hypagyrtis unipunctata"
+}
+item {
+ name: "65976"
+ id: 1790
+ display_name: "Lithobates blairi"
+}
+item {
+ name: "5173"
+ id: 1791
+ display_name: "Circus aeruginosus"
+}
+item {
+ name: "54327"
+ id: 1792
+ display_name: "Vespa crabro"
+}
+item {
+ name: "4273"
+ id: 1793
+ display_name: "Phalacrocorax sulcirostris"
+}
+item {
+ name: "5180"
+ id: 1794
+ display_name: "Buteo albonotatus"
+}
+item {
+ name: "103485"
+ id: 1795
+ display_name: "Ischnura denticollis"
+}
+item {
+ name: "62528"
+ id: 1796
+ display_name: "Butorides striata"
+}
+item {
+ name: "62529"
+ id: 1797
+ display_name: "Platalea ajaja"
+}
+item {
+ name: "5186"
+ id: 1798
+ display_name: "Buteo brachyurus"
+}
+item {
+ name: "103494"
+ id: 1799
+ display_name: "Ischnura hastata"
+}
+item {
+ name: "144455"
+ id: 1800
+ display_name: "Ardea alba"
+}
+item {
+ name: "103497"
+ id: 1801
+ display_name: "Ischnura perparva"
+}
+item {
+ name: "103498"
+ id: 1802
+ display_name: "Ischnura posita"
+}
+item {
+ name: "5196"
+ id: 1803
+ display_name: "Buteo swainsoni"
+}
+item {
+ name: "128079"
+ id: 1804
+ display_name: "Grammia ornata"
+}
+item {
+ name: "29777"
+ id: 1805
+ display_name: "Lampropeltis triangulum"
+}
+item {
+ name: "867"
+ id: 1806
+ display_name: "Alectoris rufa"
+}
+item {
+ name: "5206"
+ id: 1807
+ display_name: "Buteo lineatus"
+}
+item {
+ name: "29783"
+ id: 1808
+ display_name: "Lampropeltis triangulum triangulum"
+}
+item {
+ name: "122383"
+ id: 1809
+ display_name: "Plebejus melissa"
+}
+item {
+ name: "5212"
+ id: 1810
+ display_name: "Buteo jamaicensis"
+}
+item {
+ name: "81495"
+ id: 1811
+ display_name: "Libellula pulchella"
+}
+item {
+ name: "35003"
+ id: 1812
+ display_name: "Heloderma suspectum"
+}
+item {
+ name: "46180"
+ id: 1813
+ display_name: "Cynomys gunnisoni"
+}
+item {
+ name: "144485"
+ id: 1814
+ display_name: "Charadrius nivosus"
+}
+item {
+ name: "144490"
+ id: 1815
+ display_name: "Tringa incana"
+}
+item {
+ name: "144491"
+ id: 1816
+ display_name: "Tringa semipalmata"
+}
+item {
+ name: "25185"
+ id: 1817
+ display_name: "Hypopachus variolosus"
+}
+item {
+ name: "5231"
+ id: 1818
+ display_name: "Terathopius ecaudatus"
+}
+item {
+ name: "144496"
+ id: 1819
+ display_name: "Gallinago delicata"
+}
+item {
+ name: "5233"
+ id: 1820
+ display_name: "Buteogallus anthracinus"
+}
+item {
+ name: "211035"
+ id: 1821
+ display_name: "Speranza pustularia"
+}
+item {
+ name: "29813"
+ id: 1822
+ display_name: "Lampropeltis getula"
+}
+item {
+ name: "144502"
+ id: 1823
+ display_name: "Chroicocephalus philadelphia"
+}
+item {
+ name: "5242"
+ id: 1824
+ display_name: "Circaetus gallicus"
+}
+item {
+ name: "144507"
+ id: 1825
+ display_name: "Chroicocephalus novaehollandiae"
+}
+item {
+ name: "144510"
+ id: 1826
+ display_name: "Chroicocephalus ridibundus"
+}
+item {
+ name: "52757"
+ id: 1827
+ display_name: "Polistes fuscatus"
+}
+item {
+ name: "144514"
+ id: 1828
+ display_name: "Leucophaeus atricilla"
+}
+item {
+ name: "144515"
+ id: 1829
+ display_name: "Leucophaeus pipixcan"
+}
+item {
+ name: "46217"
+ id: 1830
+ display_name: "Tamias striatus"
+}
+item {
+ name: "144525"
+ id: 1831
+ display_name: "Onychoprion fuscatus"
+}
+item {
+ name: "46222"
+ id: 1832
+ display_name: "Tamias minimus"
+}
+item {
+ name: "144530"
+ id: 1833
+ display_name: "Sternula antillarum"
+}
+item {
+ name: "46230"
+ id: 1834
+ display_name: "Tamias merriami"
+}
+item {
+ name: "144537"
+ id: 1835
+ display_name: "Hydroprogne caspia"
+}
+item {
+ name: "144539"
+ id: 1836
+ display_name: "Thalasseus maximus"
+}
+item {
+ name: "144540"
+ id: 1837
+ display_name: "Thalasseus bergii"
+}
+item {
+ name: "5277"
+ id: 1838
+ display_name: "Elanus leucurus"
+}
+item {
+ name: "324766"
+ id: 1839
+ display_name: "Epicallima argenticinctella"
+}
+item {
+ name: "72486"
+ id: 1840
+ display_name: "Alopochen aegyptiaca"
+}
+item {
+ name: "62229"
+ id: 1841
+ display_name: "Ischnura cervula"
+}
+item {
+ name: "144550"
+ id: 1842
+ display_name: "Streptopelia senegalensis"
+}
+item {
+ name: "46256"
+ id: 1843
+ display_name: "Ammospermophilus harrisii"
+}
+item {
+ name: "94559"
+ id: 1844
+ display_name: "Argia nahuana"
+}
+item {
+ name: "46259"
+ id: 1845
+ display_name: "Tamiasciurus douglasii"
+}
+item {
+ name: "46260"
+ id: 1846
+ display_name: "Tamiasciurus hudsonicus"
+}
+item {
+ name: "119989"
+ id: 1847
+ display_name: "Stagmomantis carolina"
+}
+item {
+ name: "13494"
+ id: 1848
+ display_name: "Gerygone igata"
+}
+item {
+ name: "5305"
+ id: 1849
+ display_name: "Haliaeetus leucocephalus"
+}
+item {
+ name: "7596"
+ id: 1850
+ display_name: "Cistothorus platensis"
+}
+item {
+ name: "5308"
+ id: 1851
+ display_name: "Haliaeetus vocifer"
+}
+item {
+ name: "218301"
+ id: 1852
+ display_name: "Diacme elealis"
+}
+item {
+ name: "95422"
+ id: 1853
+ display_name: "Basiaeschna janata"
+}
+item {
+ name: "46272"
+ id: 1854
+ display_name: "Glaucomys volans"
+}
+item {
+ name: "120010"
+ id: 1855
+ display_name: "Polistes metricus"
+}
+item {
+ name: "144594"
+ id: 1856
+ display_name: "Bubo scandiacus"
+}
+item {
+ name: "52771"
+ id: 1857
+ display_name: "Gonepteryx rhamni"
+}
+item {
+ name: "144597"
+ id: 1858
+ display_name: "Ciccaba virgata"
+}
+item {
+ name: "890"
+ id: 1859
+ display_name: "Bonasa umbellus"
+}
+item {
+ name: "52773"
+ id: 1860
+ display_name: "Poanes zabulon"
+}
+item {
+ name: "120033"
+ id: 1861
+ display_name: "Lapara bombycoides"
+}
+item {
+ name: "5346"
+ id: 1862
+ display_name: "Busarellus nigricollis"
+}
+item {
+ name: "5349"
+ id: 1863
+ display_name: "Rostrhamus sociabilis"
+}
+item {
+ name: "36391"
+ id: 1864
+ display_name: "Anolis equestris"
+}
+item {
+ name: "46316"
+ id: 1865
+ display_name: "Trichechus manatus"
+}
+item {
+ name: "5267"
+ id: 1866
+ display_name: "Milvus milvus"
+}
+item {
+ name: "128241"
+ id: 1867
+ display_name: "Darapsa choerilus"
+}
+item {
+ name: "128242"
+ id: 1868
+ display_name: "Palthis angulalis"
+}
+item {
+ name: "5366"
+ id: 1869
+ display_name: "Gyps fulvus"
+}
+item {
+ name: "204512"
+ id: 1870
+ display_name: "Ficedula hypoleuca"
+}
+item {
+ name: "54526"
+ id: 1871
+ display_name: "Crassadoma gigantea"
+}
+item {
+ name: "144642"
+ id: 1872
+ display_name: "Momotus coeruliceps"
+}
+item {
+ name: "120070"
+ id: 1873
+ display_name: "Strongylocentrotus droebachiensis"
+}
+item {
+ name: "54538"
+ id: 1874
+ display_name: "Syngnathus leptorhynchus"
+}
+item {
+ name: "81746"
+ id: 1875
+ display_name: "Necrophila americana"
+}
+item {
+ name: "300301"
+ id: 1876
+ display_name: "Pseudomyrmex gracilis"
+}
+item {
+ name: "202003"
+ id: 1877
+ display_name: "Apiomerus spissipes"
+}
+item {
+ name: "41860"
+ id: 1878
+ display_name: "Enhydra lutris"
+}
+item {
+ name: "4817"
+ id: 1879
+ display_name: "Charadrius semipalmatus"
+}
+item {
+ name: "36145"
+ id: 1880
+ display_name: "Sceloporus variabilis"
+}
+item {
+ name: "202012"
+ id: 1881
+ display_name: "Steatoda capensis"
+}
+item {
+ name: "62749"
+ id: 1882
+ display_name: "Iphiclides podalirius"
+}
+item {
+ name: "5406"
+ id: 1883
+ display_name: "Haliastur indus"
+}
+item {
+ name: "62751"
+ id: 1884
+ display_name: "Andricus kingi"
+}
+item {
+ name: "5363"
+ id: 1885
+ display_name: "Gyps africanus"
+}
+item {
+ name: "5416"
+ id: 1886
+ display_name: "Ictinia mississippiensis"
+}
+item {
+ name: "62766"
+ id: 1887
+ display_name: "Issoria lathonia"
+}
+item {
+ name: "62768"
+ id: 1888
+ display_name: "Scolia dubia"
+}
+item {
+ name: "126206"
+ id: 1889
+ display_name: "Dissosteira carolina"
+}
+item {
+ name: "269875"
+ id: 1890
+ display_name: "Mallodon dasystomus"
+}
+item {
+ name: "155030"
+ id: 1891
+ display_name: "Limenitis reducta"
+}
+item {
+ name: "62345"
+ id: 1892
+ display_name: "Duttaphrynus melanostictus"
+}
+item {
+ name: "52519"
+ id: 1893
+ display_name: "Aeshna cyanea"
+}
+item {
+ name: "10001"
+ id: 1894
+ display_name: "Dives dives"
+}
+item {
+ name: "460365"
+ id: 1895
+ display_name: "Tegula funebralis"
+}
+item {
+ name: "13631"
+ id: 1896
+ display_name: "Baeolophus atricristatus"
+}
+item {
+ name: "13632"
+ id: 1897
+ display_name: "Baeolophus bicolor"
+}
+item {
+ name: "13633"
+ id: 1898
+ display_name: "Baeolophus inornatus"
+}
+item {
+ name: "9100"
+ id: 1899
+ display_name: "Melospiza melodia"
+}
+item {
+ name: "62796"
+ id: 1900
+ display_name: "Crotaphytus bicinctores"
+}
+item {
+ name: "62797"
+ id: 1901
+ display_name: "Gambelia wislizenii"
+}
+item {
+ name: "46009"
+ id: 1902
+ display_name: "Sciurus aureogaster"
+}
+item {
+ name: "112867"
+ id: 1903
+ display_name: "Sparisoma viride"
+}
+item {
+ name: "70997"
+ id: 1904
+ display_name: "Pelecinus polyturator"
+}
+item {
+ name: "62806"
+ id: 1905
+ display_name: "Mytilus californianus"
+}
+item {
+ name: "120156"
+ id: 1906
+ display_name: "Musca domestica"
+}
+item {
+ name: "136548"
+ id: 1907
+ display_name: "Euclea delphinii"
+}
+item {
+ name: "50065"
+ id: 1908
+ display_name: "Danaus eresimus"
+}
+item {
+ name: "43239"
+ id: 1909
+ display_name: "Tachyglossus aculeatus"
+}
+item {
+ name: "145303"
+ id: 1910
+ display_name: "Spinus spinus"
+}
+item {
+ name: "120183"
+ id: 1911
+ display_name: "Araneus marmoreus"
+}
+item {
+ name: "71032"
+ id: 1912
+ display_name: "Crotalus scutulatus scutulatus"
+}
+item {
+ name: "71034"
+ id: 1913
+ display_name: "Tenodera sinensis"
+}
+item {
+ name: "143121"
+ id: 1914
+ display_name: "Ochropleura implecta"
+}
+item {
+ name: "13695"
+ id: 1915
+ display_name: "Motacilla alba"
+}
+item {
+ name: "7458"
+ id: 1916
+ display_name: "Certhia americana"
+}
+item {
+ name: "38293"
+ id: 1917
+ display_name: "Lampropholis delicata"
+}
+item {
+ name: "144281"
+ id: 1918
+ display_name: "Bucorvus leadbeateri"
+}
+item {
+ name: "120217"
+ id: 1919
+ display_name: "Halysidota tessellaris"
+}
+item {
+ name: "226718"
+ id: 1920
+ display_name: "Otiorhynchus sulcatus"
+}
+item {
+ name: "464287"
+ id: 1921
+ display_name: "Anteaeolidiella oliviae"
+}
+item {
+ name: "226720"
+ id: 1922
+ display_name: "Oxychilus draparnaudi"
+}
+item {
+ name: "13729"
+ id: 1923
+ display_name: "Anthus pratensis"
+}
+item {
+ name: "13732"
+ id: 1924
+ display_name: "Anthus rubescens"
+}
+item {
+ name: "11930"
+ id: 1925
+ display_name: "Tachycineta albilinea"
+}
+item {
+ name: "71085"
+ id: 1926
+ display_name: "Varanus niloticus"
+}
+item {
+ name: "144814"
+ id: 1927
+ display_name: "Poecile carolinensis"
+}
+item {
+ name: "144815"
+ id: 1928
+ display_name: "Poecile atricapillus"
+}
+item {
+ name: "144816"
+ id: 1929
+ display_name: "Poecile gambeli"
+}
+item {
+ name: "144820"
+ id: 1930
+ display_name: "Poecile rufescens"
+}
+item {
+ name: "144823"
+ id: 1931
+ display_name: "Periparus ater"
+}
+item {
+ name: "10485"
+ id: 1932
+ display_name: "Chlorophanes spiza"
+}
+item {
+ name: "40523"
+ id: 1933
+ display_name: "Lasiurus cinereus"
+}
+item {
+ name: "47719"
+ id: 1934
+ display_name: "Datana ministra"
+}
+item {
+ name: "13770"
+ id: 1935
+ display_name: "Estrilda astrild"
+}
+item {
+ name: "144849"
+ id: 1936
+ display_name: "Cyanistes caeruleus"
+}
+item {
+ name: "218587"
+ id: 1937
+ display_name: "Discus rotundatus"
+}
+item {
+ name: "47105"
+ id: 1938
+ display_name: "Tamandua mexicana"
+}
+item {
+ name: "18463"
+ id: 1939
+ display_name: "Sphyrapicus varius"
+}
+item {
+ name: "11858"
+ id: 1940
+ display_name: "Petrochelidon pyrrhonota"
+}
+item {
+ name: "144882"
+ id: 1941
+ display_name: "Troglodytes pacificus"
+}
+item {
+ name: "144883"
+ id: 1942
+ display_name: "Troglodytes hiemalis"
+}
+item {
+ name: "153076"
+ id: 1943
+ display_name: "Nephelodes minians"
+}
+item {
+ name: "62978"
+ id: 1944
+ display_name: "Chlosyne nycteis"
+}
+item {
+ name: "128517"
+ id: 1945
+ display_name: "Catocala ilia"
+}
+item {
+ name: "153102"
+ id: 1946
+ display_name: "Dysphania militaris"
+}
+item {
+ name: "59651"
+ id: 1947
+ display_name: "Aquarius remigis"
+}
+item {
+ name: "13851"
+ id: 1948
+ display_name: "Passer montanus"
+}
+item {
+ name: "13858"
+ id: 1949
+ display_name: "Passer domesticus"
+}
+item {
+ name: "39742"
+ id: 1950
+ display_name: "Kinosternon flavescens"
+}
+item {
+ name: "506118"
+ id: 1951
+ display_name: "Aphelocoma californica"
+}
+item {
+ name: "5672"
+ id: 1952
+ display_name: "Amazilia yucatanensis"
+}
+item {
+ name: "5676"
+ id: 1953
+ display_name: "Amazilia tzacatl"
+}
+item {
+ name: "204503"
+ id: 1954
+ display_name: "Dicrurus adsimilis"
+}
+item {
+ name: "52785"
+ id: 1955
+ display_name: "Megachile sculpturalis"
+}
+item {
+ name: "126905"
+ id: 1956
+ display_name: "Harrisina americana"
+}
+item {
+ name: "55773"
+ id: 1957
+ display_name: "Promachus hinei"
+}
+item {
+ name: "84752"
+ id: 1958
+ display_name: "Microcentrum rhombifolium"
+}
+item {
+ name: "5698"
+ id: 1959
+ display_name: "Amazilia violiceps"
+}
+item {
+ name: "145539"
+ id: 1960
+ display_name: "Ovis canadensis nelsoni"
+}
+item {
+ name: "104004"
+ id: 1961
+ display_name: "Lampropeltis splendida"
+}
+item {
+ name: "13893"
+ id: 1962
+ display_name: "Lonchura punctulata"
+}
+item {
+ name: "63048"
+ id: 1963
+ display_name: "Nuttallina californica"
+}
+item {
+ name: "226901"
+ id: 1964
+ display_name: "Panopoda rufimargo"
+}
+item {
+ name: "194134"
+ id: 1965
+ display_name: "Anthanassa tulcis"
+}
+item {
+ name: "5049"
+ id: 1966
+ display_name: "Tigrisoma mexicanum"
+}
+item {
+ name: "407130"
+ id: 1967
+ display_name: "Porphyrio melanotus melanotus"
+}
+item {
+ name: "226910"
+ id: 1968
+ display_name: "Panthea furcilla"
+}
+item {
+ name: "130661"
+ id: 1969
+ display_name: "Catasticta nimbice"
+}
+item {
+ name: "120215"
+ id: 1970
+ display_name: "Bombus griseocollis"
+}
+item {
+ name: "144220"
+ id: 1971
+ display_name: "Melanitta americana"
+}
+item {
+ name: "9148"
+ id: 1972
+ display_name: "Spizella pallida"
+}
+item {
+ name: "320610"
+ id: 1973
+ display_name: "Sceloporus magister"
+}
+item {
+ name: "54900"
+ id: 1974
+ display_name: "Papilio polyxenes asterius"
+}
+item {
+ name: "36080"
+ id: 1975
+ display_name: "Callisaurus draconoides"
+}
+item {
+ name: "5758"
+ id: 1976
+ display_name: "Amazilia rutila"
+}
+item {
+ name: "3465"
+ id: 1977
+ display_name: "Zenaida aurita"
+}
+item {
+ name: "116461"
+ id: 1978
+ display_name: "Anolis sagrei"
+}
+item {
+ name: "61295"
+ id: 1979
+ display_name: "Aporia crataegi"
+}
+item {
+ name: "131673"
+ id: 1980
+ display_name: "Tetracis cachexiata"
+}
+item {
+ name: "63113"
+ id: 1981
+ display_name: "Blarina brevicauda"
+}
+item {
+ name: "26904"
+ id: 1982
+ display_name: "Coronella austriaca"
+}
+item {
+ name: "94575"
+ id: 1983
+ display_name: "Argia tibialis"
+}
+item {
+ name: "237166"
+ id: 1984
+ display_name: "Lycaena phlaeas hypophlaeas"
+}
+item {
+ name: "129305"
+ id: 1985
+ display_name: "Melanoplus bivittatus"
+}
+item {
+ name: "63128"
+ id: 1986
+ display_name: "Speyeria atlantis"
+}
+item {
+ name: "113514"
+ id: 1987
+ display_name: "Sympetrum internum"
+}
+item {
+ name: "48757"
+ id: 1988
+ display_name: "Echinothrix calamaris"
+}
+item {
+ name: "128670"
+ id: 1989
+ display_name: "Bombus vagans"
+}
+item {
+ name: "13988"
+ id: 1990
+ display_name: "Prunella modularis"
+}
+item {
+ name: "54951"
+ id: 1991
+ display_name: "Anartia fatima"
+}
+item {
+ name: "54952"
+ id: 1992
+ display_name: "Cardisoma guanhumi"
+}
+item {
+ name: "325295"
+ id: 1993
+ display_name: "Cydalima perspectalis"
+}
+item {
+ name: "63160"
+ id: 1994
+ display_name: "Celithemis elisa"
+}
+item {
+ name: "210615"
+ id: 1995
+ display_name: "Pyrausta volupialis"
+}
+item {
+ name: "472766"
+ id: 1996
+ display_name: "Falco tinnunculus"
+}
+item {
+ name: "29927"
+ id: 1997
+ display_name: "Heterodon nasicus"
+}
+item {
+ name: "145088"
+ id: 1998
+ display_name: "Ixoreus naevius"
+}
+item {
+ name: "6432"
+ id: 1999
+ display_name: "Archilochus colubris"
+}
+item {
+ name: "5827"
+ id: 2000
+ display_name: "Lampornis clemenciae"
+}
+item {
+ name: "15990"
+ id: 2001
+ display_name: "Myiarchus tuberculifer"
+}
+item {
+ name: "128712"
+ id: 2002
+ display_name: "Coccinella californica"
+}
+item {
+ name: "67559"
+ id: 2003
+ display_name: "Adelpha eulalia"
+}
+item {
+ name: "128719"
+ id: 2004
+ display_name: "Echinometra mathaei"
+}
+item {
+ name: "10247"
+ id: 2005
+ display_name: "Setophaga ruticilla"
+}
+item {
+ name: "202451"
+ id: 2006
+ display_name: "Copaeodes minima"
+}
+item {
+ name: "95958"
+ id: 2007
+ display_name: "Boyeria vinosa"
+}
+item {
+ name: "16016"
+ id: 2008
+ display_name: "Myiarchus tyrannulus"
+}
+item {
+ name: "36202"
+ id: 2009
+ display_name: "Sceloporus olivaceus"
+}
+item {
+ name: "95982"
+ id: 2010
+ display_name: "Brachymesia furcata"
+}
+item {
+ name: "126589"
+ id: 2011
+ display_name: "Calycopis isobeon"
+}
+item {
+ name: "120578"
+ id: 2012
+ display_name: "Micrathena sagittata"
+}
+item {
+ name: "194690"
+ id: 2013
+ display_name: "Pogonomyrmex barbatus"
+}
+item {
+ name: "120583"
+ id: 2014
+ display_name: "Parasteatoda tepidariorum"
+}
+item {
+ name: "202505"
+ id: 2015
+ display_name: "Zosterops lateralis"
+}
+item {
+ name: "38671"
+ id: 2016
+ display_name: "Aspidoscelis tigris"
+}
+item {
+ name: "38672"
+ id: 2017
+ display_name: "Aspidoscelis tigris stejnegeri"
+}
+item {
+ name: "9176"
+ id: 2018
+ display_name: "Zonotrichia leucophrys"
+}
+item {
+ name: "120596"
+ id: 2019
+ display_name: "Aphonopelma hentzi"
+}
+item {
+ name: "9744"
+ id: 2020
+ display_name: "Agelaius phoeniceus"
+}
+item {
+ name: "38684"
+ id: 2021
+ display_name: "Aspidoscelis tigris mundus"
+}
+item {
+ name: "62426"
+ id: 2022
+ display_name: "Aphantopus hyperantus"
+}
+item {
+ name: "30494"
+ id: 2023
+ display_name: "Micrurus tener"
+}
+item {
+ name: "58578"
+ id: 2024
+ display_name: "Euphydryas phaeton"
+}
+item {
+ name: "96036"
+ id: 2025
+ display_name: "Brechmorhoga mendax"
+}
+item {
+ name: "333608"
+ id: 2026
+ display_name: "Leukoma staminea"
+}
+item {
+ name: "38703"
+ id: 2027
+ display_name: "Aspidoscelis sexlineata sexlineata"
+}
+item {
+ name: "126600"
+ id: 2028
+ display_name: "Chortophaga viridifasciata"
+}
+item {
+ name: "63287"
+ id: 2029
+ display_name: "Megalorchestia californiana"
+}
+item {
+ name: "128824"
+ id: 2030
+ display_name: "Lucilia sericata"
+}
+item {
+ name: "104249"
+ id: 2031
+ display_name: "Lepisosteus oculatus"
+}
+item {
+ name: "203153"
+ id: 2032
+ display_name: "Parus major"
+}
+item {
+ name: "9183"
+ id: 2033
+ display_name: "Zonotrichia capensis"
+}
+item {
+ name: "82201"
+ id: 2034
+ display_name: "Hypena baltimoralis"
+}
+item {
+ name: "145217"
+ id: 2035
+ display_name: "Oreothlypis peregrina"
+}
+item {
+ name: "145218"
+ id: 2036
+ display_name: "Oreothlypis celata"
+}
+item {
+ name: "145221"
+ id: 2037
+ display_name: "Oreothlypis ruficapilla"
+}
+item {
+ name: "145224"
+ id: 2038
+ display_name: "Geothlypis philadelphia"
+}
+item {
+ name: "145225"
+ id: 2039
+ display_name: "Geothlypis formosa"
+}
+item {
+ name: "448331"
+ id: 2040
+ display_name: "Ambigolimax valentianus"
+}
+item {
+ name: "128845"
+ id: 2041
+ display_name: "Copestylum mexicanum"
+}
+item {
+ name: "145231"
+ id: 2042
+ display_name: "Setophaga tigrina"
+}
+item {
+ name: "145233"
+ id: 2043
+ display_name: "Setophaga americana"
+}
+item {
+ name: "145235"
+ id: 2044
+ display_name: "Setophaga magnolia"
+}
+item {
+ name: "145236"
+ id: 2045
+ display_name: "Setophaga castanea"
+}
+item {
+ name: "145237"
+ id: 2046
+ display_name: "Setophaga fusca"
+}
+item {
+ name: "145238"
+ id: 2047
+ display_name: "Setophaga petechia"
+}
+item {
+ name: "145240"
+ id: 2048
+ display_name: "Setophaga striata"
+}
+item {
+ name: "145242"
+ id: 2049
+ display_name: "Setophaga palmarum"
+}
+item {
+ name: "179855"
+ id: 2050
+ display_name: "Polites vibex"
+}
+item {
+ name: "145244"
+ id: 2051
+ display_name: "Setophaga pinus"
+}
+item {
+ name: "145245"
+ id: 2052
+ display_name: "Setophaga coronata"
+}
+item {
+ name: "145246"
+ id: 2053
+ display_name: "Setophaga dominica"
+}
+item {
+ name: "5987"
+ id: 2054
+ display_name: "Campylopterus hemileucurus"
+}
+item {
+ name: "17382"
+ id: 2055
+ display_name: "Vireo cassinii"
+}
+item {
+ name: "145254"
+ id: 2056
+ display_name: "Setophaga nigrescens"
+}
+item {
+ name: "145255"
+ id: 2057
+ display_name: "Setophaga townsendi"
+}
+item {
+ name: "145256"
+ id: 2058
+ display_name: "Setophaga occidentalis"
+}
+item {
+ name: "145257"
+ id: 2059
+ display_name: "Setophaga chrysoparia"
+}
+item {
+ name: "145258"
+ id: 2060
+ display_name: "Setophaga virens"
+}
+item {
+ name: "48786"
+ id: 2061
+ display_name: "Pollicipes polymerus"
+}
+item {
+ name: "36207"
+ id: 2062
+ display_name: "Sceloporus occidentalis longipes"
+}
+item {
+ name: "22392"
+ id: 2063
+ display_name: "Eleutherodactylus marnockii"
+}
+item {
+ name: "22393"
+ id: 2064
+ display_name: "Eleutherodactylus cystignathoides"
+}
+item {
+ name: "145275"
+ id: 2065
+ display_name: "Cardellina canadensis"
+}
+item {
+ name: "145277"
+ id: 2066
+ display_name: "Cardellina rubra"
+}
+item {
+ name: "7829"
+ id: 2067
+ display_name: "Aphelocoma coerulescens"
+}
+item {
+ name: "41963"
+ id: 2068
+ display_name: "Panthera pardus"
+}
+item {
+ name: "142998"
+ id: 2069
+ display_name: "Pyrausta acrionalis"
+}
+item {
+ name: "18204"
+ id: 2070
+ display_name: "Melanerpes erythrocephalus"
+}
+item {
+ name: "47425"
+ id: 2071
+ display_name: "Tonicella lineata"
+}
+item {
+ name: "148460"
+ id: 2072
+ display_name: "Charadra deridens"
+}
+item {
+ name: "145291"
+ id: 2073
+ display_name: "Emberiza calandra"
+}
+item {
+ name: "52523"
+ id: 2074
+ display_name: "Carcinus maenas"
+}
+item {
+ name: "46994"
+ id: 2075
+ display_name: "Scapanus latimanus"
+}
+item {
+ name: "114314"
+ id: 2076
+ display_name: "Tramea onusta"
+}
+item {
+ name: "145300"
+ id: 2077
+ display_name: "Acanthis flammea"
+}
+item {
+ name: "63382"
+ id: 2078
+ display_name: "Dermasterias imbricata"
+}
+item {
+ name: "126772"
+ id: 2079
+ display_name: "Ursus americanus californiensis"
+}
+item {
+ name: "145304"
+ id: 2080
+ display_name: "Spinus pinus"
+}
+item {
+ name: "10294"
+ id: 2081
+ display_name: "Thraupis abbas"
+}
+item {
+ name: "145308"
+ id: 2082
+ display_name: "Spinus psaltria"
+}
+item {
+ name: "145309"
+ id: 2083
+ display_name: "Spinus lawrencei"
+}
+item {
+ name: "145310"
+ id: 2084
+ display_name: "Spinus tristis"
+}
+item {
+ name: "3739"
+ id: 2085
+ display_name: "Threskiornis aethiopicus"
+}
+item {
+ name: "47014"
+ id: 2086
+ display_name: "Scalopus aquaticus"
+}
+item {
+ name: "4566"
+ id: 2087
+ display_name: "Gygis alba"
+}
+item {
+ name: "43335"
+ id: 2088
+ display_name: "Equus quagga"
+}
+item {
+ name: "41970"
+ id: 2089
+ display_name: "Panthera onca"
+}
+item {
+ name: "128950"
+ id: 2090
+ display_name: "Lycomorpha pholus"
+}
+item {
+ name: "11935"
+ id: 2091
+ display_name: "Tachycineta bicolor"
+}
+item {
+ name: "333759"
+ id: 2092
+ display_name: "Larus dominicanus dominicanus"
+}
+item {
+ name: "143008"
+ id: 2093
+ display_name: "Herpetogramma pertextalis"
+}
+item {
+ name: "235341"
+ id: 2094
+ display_name: "Coenonympha tullia california"
+}
+item {
+ name: "44705"
+ id: 2095
+ display_name: "Mus musculus"
+}
+item {
+ name: "145352"
+ id: 2096
+ display_name: "Lonchura oryzivora"
+}
+item {
+ name: "4840"
+ id: 2097
+ display_name: "Haematopus palliatus"
+}
+item {
+ name: "244845"
+ id: 2098
+ display_name: "Apiomerus californicus"
+}
+item {
+ name: "145360"
+ id: 2099
+ display_name: "Chloris chloris"
+}
+item {
+ name: "5112"
+ id: 2100
+ display_name: "Accipiter cooperii"
+}
+item {
+ name: "30675"
+ id: 2101
+ display_name: "Agkistrodon piscivorus"
+}
+item {
+ name: "341972"
+ id: 2102
+ display_name: "Crocodylus niloticus"
+}
+item {
+ name: "30677"
+ id: 2103
+ display_name: "Agkistrodon piscivorus conanti"
+}
+item {
+ name: "30678"
+ id: 2104
+ display_name: "Agkistrodon contortrix"
+}
+item {
+ name: "52900"
+ id: 2105
+ display_name: "Caenurgina crassiuscula"
+}
+item {
+ name: "30682"
+ id: 2106
+ display_name: "Agkistrodon contortrix laticinctus"
+}
+item {
+ name: "47067"
+ id: 2107
+ display_name: "Bradypus variegatus"
+}
+item {
+ name: "55260"
+ id: 2108
+ display_name: "Erythemis vesiculosa"
+}
+item {
+ name: "17402"
+ id: 2109
+ display_name: "Vireo solitarius"
+}
+item {
+ name: "6369"
+ id: 2110
+ display_name: "Selasphorus platycercus"
+}
+item {
+ name: "104416"
+ id: 2111
+ display_name: "Lestes alacer"
+}
+item {
+ name: "128993"
+ id: 2112
+ display_name: "Narceus annularus"
+}
+item {
+ name: "104422"
+ id: 2113
+ display_name: "Lestes congener"
+}
+item {
+ name: "227307"
+ id: 2114
+ display_name: "Patalene olyzonaria"
+}
+item {
+ name: "104429"
+ id: 2115
+ display_name: "Lestes dryas"
+}
+item {
+ name: "194542"
+ id: 2116
+ display_name: "Phyciodes graphica"
+}
+item {
+ name: "52904"
+ id: 2117
+ display_name: "Microcrambus elegans"
+}
+item {
+ name: "129363"
+ id: 2118
+ display_name: "Calephelis nemesis"
+}
+item {
+ name: "144506"
+ id: 2119
+ display_name: "Chroicocephalus scopulinus"
+}
+item {
+ name: "30713"
+ id: 2120
+ display_name: "Crotalus oreganus helleri"
+}
+item {
+ name: "47101"
+ id: 2121
+ display_name: "Choloepus hoffmanni"
+}
+item {
+ name: "210942"
+ id: 2122
+ display_name: "Caedicia simplex"
+}
+item {
+ name: "30719"
+ id: 2123
+ display_name: "Crotalus scutulatus"
+}
+item {
+ name: "30724"
+ id: 2124
+ display_name: "Crotalus ruber"
+}
+item {
+ name: "47110"
+ id: 2125
+ display_name: "Triopha maculata"
+}
+item {
+ name: "4235"
+ id: 2126
+ display_name: "Aechmophorus occidentalis"
+}
+item {
+ name: "30731"
+ id: 2127
+ display_name: "Crotalus molossus"
+}
+item {
+ name: "30733"
+ id: 2128
+ display_name: "Crotalus molossus nigrescens"
+}
+item {
+ name: "30735"
+ id: 2129
+ display_name: "Crotalus mitchellii"
+}
+item {
+ name: "30740"
+ id: 2130
+ display_name: "Crotalus lepidus"
+}
+item {
+ name: "30746"
+ id: 2131
+ display_name: "Crotalus horridus"
+}
+item {
+ name: "63518"
+ id: 2132
+ display_name: "Melanoplus differentialis"
+}
+item {
+ name: "30751"
+ id: 2133
+ display_name: "Crotalus cerastes"
+}
+item {
+ name: "126640"
+ id: 2134
+ display_name: "Caenurgina erechtea"
+}
+item {
+ name: "46086"
+ id: 2135
+ display_name: "Marmota flaviventris"
+}
+item {
+ name: "194599"
+ id: 2136
+ display_name: "Heliomata cycladata"
+}
+item {
+ name: "30764"
+ id: 2137
+ display_name: "Crotalus atrox"
+}
+item {
+ name: "204520"
+ id: 2138
+ display_name: "Hemiphaga novaeseelandiae"
+}
+item {
+ name: "128141"
+ id: 2139
+ display_name: "Crepidula adunca"
+}
+item {
+ name: "121183"
+ id: 2140
+ display_name: "Mythimna unipuncta"
+}
+item {
+ name: "40827"
+ id: 2141
+ display_name: "Eidolon helvum"
+}
+item {
+ name: "4571"
+ id: 2142
+ display_name: "Xema sabini"
+}
+item {
+ name: "211007"
+ id: 2143
+ display_name: "Nepytia canosaria"
+}
+item {
+ name: "47171"
+ id: 2144
+ display_name: "Flabellina iodinea"
+}
+item {
+ name: "211012"
+ id: 2145
+ display_name: "Maliattha synochitis"
+}
+item {
+ name: "30798"
+ id: 2146
+ display_name: "Bothrops asper"
+}
+item {
+ name: "47188"
+ id: 2147
+ display_name: "Pachygrapsus crassipes"
+}
+item {
+ name: "55387"
+ id: 2148
+ display_name: "Esox lucius"
+}
+item {
+ name: "58583"
+ id: 2149
+ display_name: "Limenitis arthemis arthemis"
+}
+item {
+ name: "104548"
+ id: 2150
+ display_name: "Leucorrhinia frigida"
+}
+item {
+ name: "104550"
+ id: 2151
+ display_name: "Leucorrhinia hudsonica"
+}
+item {
+ name: "104551"
+ id: 2152
+ display_name: "Leucorrhinia intacta"
+}
+item {
+ name: "47209"
+ id: 2153
+ display_name: "Hermissenda crassicornis"
+}
+item {
+ name: "55655"
+ id: 2154
+ display_name: "Lycaena phlaeas"
+}
+item {
+ name: "202861"
+ id: 2155
+ display_name: "Otala lactea"
+}
+item {
+ name: "143037"
+ id: 2156
+ display_name: "Lineodes integra"
+}
+item {
+ name: "47219"
+ id: 2157
+ display_name: "Apis mellifera"
+}
+item {
+ name: "24254"
+ id: 2158
+ display_name: "Pseudacris cadaverina"
+}
+item {
+ name: "47226"
+ id: 2159
+ display_name: "Papilio rutulus"
+}
+item {
+ name: "104572"
+ id: 2160
+ display_name: "Libellula comanche"
+}
+item {
+ name: "104574"
+ id: 2161
+ display_name: "Libellula croceipennis"
+}
+item {
+ name: "104575"
+ id: 2162
+ display_name: "Libellula cyanea"
+}
+item {
+ name: "145538"
+ id: 2163
+ display_name: "Ovis canadensis canadensis"
+}
+item {
+ name: "104580"
+ id: 2164
+ display_name: "Libellula incesta"
+}
+item {
+ name: "24257"
+ id: 2165
+ display_name: "Pseudacris streckeri"
+}
+item {
+ name: "53866"
+ id: 2166
+ display_name: "Calpodes ethlius"
+}
+item {
+ name: "18796"
+ id: 2167
+ display_name: "Ramphastos sulfuratus"
+}
+item {
+ name: "2413"
+ id: 2168
+ display_name: "Dacelo novaeguineae"
+}
+item {
+ name: "482"
+ id: 2169
+ display_name: "Fulica atra"
+}
+item {
+ name: "47251"
+ id: 2170
+ display_name: "Sphyraena barracuda"
+}
+item {
+ name: "358549"
+ id: 2171
+ display_name: "Hemaris diffinis"
+}
+item {
+ name: "81526"
+ id: 2172
+ display_name: "Crotalus viridis"
+}
+item {
+ name: "342169"
+ id: 2173
+ display_name: "Hirundo rustica erythrogaster"
+}
+item {
+ name: "39280"
+ id: 2174
+ display_name: "Leiocephalus carinatus"
+}
+item {
+ name: "47269"
+ id: 2175
+ display_name: "Dasyatis americana"
+}
+item {
+ name: "55467"
+ id: 2176
+ display_name: "Sabulodes aegrotata"
+}
+item {
+ name: "6316"
+ id: 2177
+ display_name: "Calypte costae"
+}
+item {
+ name: "6317"
+ id: 2178
+ display_name: "Calypte anna"
+}
+item {
+ name: "47280"
+ id: 2179
+ display_name: "Pterois volitans"
+}
+item {
+ name: "81608"
+ id: 2180
+ display_name: "Geukensia demissa"
+}
+item {
+ name: "121012"
+ id: 2181
+ display_name: "Euglandina rosea"
+}
+item {
+ name: "236980"
+ id: 2182
+ display_name: "Colaptes auratus cafer"
+}
+item {
+ name: "38673"
+ id: 2183
+ display_name: "Aspidoscelis tigris tigris"
+}
+item {
+ name: "3786"
+ id: 2184
+ display_name: "Sula nebouxii"
+}
+item {
+ name: "55487"
+ id: 2185
+ display_name: "Diabrotica undecimpunctata"
+}
+item {
+ name: "243904"
+ id: 2186
+ display_name: "Phrynosoma platyrhinos"
+}
+item {
+ name: "55489"
+ id: 2187
+ display_name: "Cycloneda munda"
+}
+item {
+ name: "204491"
+ id: 2188
+ display_name: "Copsychus saularis"
+}
+item {
+ name: "55492"
+ id: 2189
+ display_name: "Cycloneda polita"
+}
+item {
+ name: "129222"
+ id: 2190
+ display_name: "Heterophleps triguttaria"
+}
+item {
+ name: "129223"
+ id: 2191
+ display_name: "Pasiphila rectangulata"
+}
+item {
+ name: "28365"
+ id: 2192
+ display_name: "Thamnophis sirtalis sirtalis"
+}
+item {
+ name: "47316"
+ id: 2193
+ display_name: "Chaetodon lunula"
+}
+item {
+ name: "6359"
+ id: 2194
+ display_name: "Selasphorus sasin"
+}
+item {
+ name: "62500"
+ id: 2195
+ display_name: "Leptophobia aripa"
+}
+item {
+ name: "6363"
+ id: 2196
+ display_name: "Selasphorus rufus"
+}
+item {
+ name: "96480"
+ id: 2197
+ display_name: "Calopteryx aequabilis"
+}
+item {
+ name: "55521"
+ id: 2198
+ display_name: "Papilio eurymedon"
+}
+item {
+ name: "6371"
+ id: 2199
+ display_name: "Calothorax lucifer"
+}
+item {
+ name: "129263"
+ id: 2200
+ display_name: "Syrbula admirabilis"
+}
+item {
+ name: "28371"
+ id: 2201
+ display_name: "Thamnophis sirtalis fitchi"
+}
+item {
+ name: "243962"
+ id: 2202
+ display_name: "Charina bottae"
+}
+item {
+ name: "145659"
+ id: 2203
+ display_name: "Acronicta americana"
+}
+item {
+ name: "14588"
+ id: 2204
+ display_name: "Pycnonotus barbatus"
+}
+item {
+ name: "480298"
+ id: 2205
+ display_name: "Cornu aspersum"
+}
+item {
+ name: "51584"
+ id: 2206
+ display_name: "Melanitis leda"
+}
+item {
+ name: "243970"
+ id: 2207
+ display_name: "Larus glaucescens \303\227 occidentalis"
+}
+item {
+ name: "55556"
+ id: 2208
+ display_name: "Oncopeltus fasciatus"
+}
+item {
+ name: "506117"
+ id: 2209
+ display_name: "Aphelocoma woodhouseii"
+}
+item {
+ name: "63750"
+ id: 2210
+ display_name: "Anavitrinella pampinaria"
+}
+item {
+ name: "30983"
+ id: 2211
+ display_name: "Sistrurus miliarius"
+}
+item {
+ name: "211210"
+ id: 2212
+ display_name: "Holocnemus pluchei"
+}
+item {
+ name: "49587"
+ id: 2213
+ display_name: "Micropterus salmoides"
+}
+item {
+ name: "6417"
+ id: 2214
+ display_name: "Florisuga mellivora"
+}
+item {
+ name: "47381"
+ id: 2215
+ display_name: "Latrodectus mactans"
+}
+item {
+ name: "47382"
+ id: 2216
+ display_name: "Latrodectus hesperus"
+}
+item {
+ name: "4851"
+ id: 2217
+ display_name: "Haematopus finschi"
+}
+item {
+ name: "51588"
+ id: 2218
+ display_name: "Papilio polytes"
+}
+item {
+ name: "144431"
+ id: 2219
+ display_name: "Falcipennis canadensis"
+}
+item {
+ name: "118490"
+ id: 2220
+ display_name: "Haematopis grataria"
+}
+item {
+ name: "6433"
+ id: 2221
+ display_name: "Archilochus alexandri"
+}
+item {
+ name: "52956"
+ id: 2222
+ display_name: "Chaetodon capistratus"
+}
+item {
+ name: "203050"
+ id: 2223
+ display_name: "Junonia genoveva"
+}
+item {
+ name: "5170"
+ id: 2224
+ display_name: "Circus cyaneus"
+}
+item {
+ name: "84332"
+ id: 2225
+ display_name: "Panorpa nuptialis"
+}
+item {
+ name: "47414"
+ id: 2226
+ display_name: "Emerita analoga"
+}
+item {
+ name: "129335"
+ id: 2227
+ display_name: "Gibbifer californicus"
+}
+item {
+ name: "55610"
+ id: 2228
+ display_name: "Pyrrhocoris apterus"
+}
+item {
+ name: "58421"
+ id: 2229
+ display_name: "Phidippus johnsoni"
+}
+item {
+ name: "208608"
+ id: 2230
+ display_name: "Trachymela sloanei"
+}
+item {
+ name: "68138"
+ id: 2231
+ display_name: "Sympetrum corruptum"
+}
+item {
+ name: "129350"
+ id: 2232
+ display_name: "Photinus pyralis"
+}
+item {
+ name: "55625"
+ id: 2233
+ display_name: "Sympetrum striolatum"
+}
+item {
+ name: "55626"
+ id: 2234
+ display_name: "Pieris rapae"
+}
+item {
+ name: "203084"
+ id: 2235
+ display_name: "Ardea alba modesta"
+}
+item {
+ name: "129362"
+ id: 2236
+ display_name: "Zerene cesonia"
+}
+item {
+ name: "55638"
+ id: 2237
+ display_name: "Anania hortulata"
+}
+item {
+ name: "148537"
+ id: 2238
+ display_name: "Astraptes fulgerator"
+}
+item {
+ name: "55640"
+ id: 2239
+ display_name: "Celastrina argiolus"
+}
+item {
+ name: "55641"
+ id: 2240
+ display_name: "Polyommatus icarus"
+}
+item {
+ name: "16028"
+ id: 2241
+ display_name: "Myiarchus crinitus"
+}
+item {
+ name: "55643"
+ id: 2242
+ display_name: "Araschnia levana"
+}
+item {
+ name: "121180"
+ id: 2243
+ display_name: "Megastraea undosa"
+}
+item {
+ name: "47454"
+ id: 2244
+ display_name: "Triopha catalinae"
+}
+item {
+ name: "28389"
+ id: 2245
+ display_name: "Thamnophis ordinoides"
+}
+item {
+ name: "68139"
+ id: 2246
+ display_name: "Sympetrum vicinum"
+}
+item {
+ name: "55651"
+ id: 2247
+ display_name: "Autographa gamma"
+}
+item {
+ name: "55653"
+ id: 2248
+ display_name: "Maniola jurtina"
+}
+item {
+ name: "84369"
+ id: 2249
+ display_name: "Libellula forensis"
+}
+item {
+ name: "47135"
+ id: 2250
+ display_name: "Badumna longinqua"
+}
+item {
+ name: "48213"
+ id: 2251
+ display_name: "Ariolimax californicus"
+}
+item {
+ name: "121196"
+ id: 2252
+ display_name: "Acanthurus coeruleus"
+}
+item {
+ name: "47469"
+ id: 2253
+ display_name: "Doris montereyensis"
+}
+item {
+ name: "5181"
+ id: 2254
+ display_name: "Buteo regalis"
+}
+item {
+ name: "47472"
+ id: 2255
+ display_name: "Acanthodoris lutea"
+}
+item {
+ name: "129415"
+ id: 2256
+ display_name: "Copaeodes aurantiaca"
+}
+item {
+ name: "47505"
+ id: 2257
+ display_name: "Geitodoris heathi"
+}
+item {
+ name: "28398"
+ id: 2258
+ display_name: "Thamnophis elegans"
+}
+item {
+ name: "6553"
+ id: 2259
+ display_name: "Aeronautes saxatalis"
+}
+item {
+ name: "47516"
+ id: 2260
+ display_name: "Oncorhynchus mykiss"
+}
+item {
+ name: "6557"
+ id: 2261
+ display_name: "Chaetura vauxi"
+}
+item {
+ name: "47518"
+ id: 2262
+ display_name: "Salmo trutta"
+}
+item {
+ name: "55711"
+ id: 2263
+ display_name: "Ladona depressa"
+}
+item {
+ name: "55719"
+ id: 2264
+ display_name: "Eristalis tenax"
+}
+item {
+ name: "6571"
+ id: 2265
+ display_name: "Chaetura pelagica"
+}
+item {
+ name: "119881"
+ id: 2266
+ display_name: "Chrysochus cobaltinus"
+}
+item {
+ name: "145239"
+ id: 2267
+ display_name: "Setophaga pensylvanica"
+}
+item {
+ name: "154043"
+ id: 2268
+ display_name: "Bombus huntii"
+}
+item {
+ name: "41955"
+ id: 2269
+ display_name: "Acinonyx jubatus"
+}
+item {
+ name: "55746"
+ id: 2270
+ display_name: "Misumena vatia"
+}
+item {
+ name: "12024"
+ id: 2271
+ display_name: "Lanius ludovicianus"
+}
+item {
+ name: "5063"
+ id: 2272
+ display_name: "Anhinga anhinga"
+}
+item {
+ name: "59892"
+ id: 2273
+ display_name: "Prionus californicus"
+}
+item {
+ name: "52986"
+ id: 2274
+ display_name: "Largus californicus"
+}
+item {
+ name: "204454"
+ id: 2275
+ display_name: "Acridotheres tristis"
+}
+item {
+ name: "14816"
+ id: 2276
+ display_name: "Sitta pygmaea"
+}
+item {
+ name: "148560"
+ id: 2277
+ display_name: "Mestra amymone"
+}
+item {
+ name: "4585"
+ id: 2278
+ display_name: "Actophilornis africanus"
+}
+item {
+ name: "47590"
+ id: 2279
+ display_name: "Phloeodes diabolicus"
+}
+item {
+ name: "14823"
+ id: 2280
+ display_name: "Sitta canadensis"
+}
+item {
+ name: "14824"
+ id: 2281
+ display_name: "Sitta europaea"
+}
+item {
+ name: "14825"
+ id: 2282
+ display_name: "Sitta pusilla"
+}
+item {
+ name: "67598"
+ id: 2283
+ display_name: "Solenopsis invicta"
+}
+item {
+ name: "6638"
+ id: 2284
+ display_name: "Apus apus"
+}
+item {
+ name: "301557"
+ id: 2285
+ display_name: "Euphoria basalis"
+}
+item {
+ name: "132070"
+ id: 2286
+ display_name: "Phaneroptera nana"
+}
+item {
+ name: "14850"
+ id: 2287
+ display_name: "Sturnus vulgaris"
+}
+item {
+ name: "62550"
+ id: 2288
+ display_name: "Seiurus aurocapilla"
+}
+item {
+ name: "64006"
+ id: 2289
+ display_name: "Corbicula fluminea"
+}
+item {
+ name: "204545"
+ id: 2290
+ display_name: "Motacilla flava"
+}
+item {
+ name: "47632"
+ id: 2291
+ display_name: "Katharina tunicata"
+}
+item {
+ name: "325309"
+ id: 2292
+ display_name: "Chortophaga viridifasciata viridifasciata"
+}
+item {
+ name: "104993"
+ id: 2293
+ display_name: "Macrodiplax balteata"
+}
+item {
+ name: "17408"
+ id: 2294
+ display_name: "Vireo griseus"
+}
+item {
+ name: "14895"
+ id: 2295
+ display_name: "Toxostoma longirostre"
+}
+item {
+ name: "47664"
+ id: 2296
+ display_name: "Henricia leviuscula"
+}
+item {
+ name: "31281"
+ id: 2297
+ display_name: "Calotes versicolor"
+}
+item {
+ name: "119086"
+ id: 2298
+ display_name: "Agrius cingulata"
+}
+item {
+ name: "3849"
+ id: 2299
+ display_name: "Calidris alba"
+}
+item {
+ name: "14906"
+ id: 2300
+ display_name: "Toxostoma redivivum"
+}
+item {
+ name: "144479"
+ id: 2301
+ display_name: "Gallinula galeata"
+}
+item {
+ name: "3850"
+ id: 2302
+ display_name: "Calidris himantopus"
+}
+item {
+ name: "117520"
+ id: 2303
+ display_name: "Enhydra lutris nereis"
+}
+item {
+ name: "51491"
+ id: 2304
+ display_name: "Myliobatis californica"
+}
+item {
+ name: "121612"
+ id: 2305
+ display_name: "Estigmene acrea"
+}
+item {
+ name: "105034"
+ id: 2306
+ display_name: "Macromia illinoiensis"
+}
+item {
+ name: "6498"
+ id: 2307
+ display_name: "Eugenes fulgens"
+}
+item {
+ name: "46179"
+ id: 2308
+ display_name: "Cynomys ludovicianus"
+}
+item {
+ name: "105049"
+ id: 2309
+ display_name: "Macromia taeniolata"
+}
+item {
+ name: "94045"
+ id: 2310
+ display_name: "Anax longipes"
+}
+item {
+ name: "143119"
+ id: 2311
+ display_name: "Galgula partita"
+}
+item {
+ name: "9317"
+ id: 2312
+ display_name: "Icterus wagleri"
+}
+item {
+ name: "122704"
+ id: 2313
+ display_name: "Nucella ostrina"
+}
+item {
+ name: "146709"
+ id: 2314
+ display_name: "Grylloprociphilus imbricator"
+}
+item {
+ name: "9318"
+ id: 2315
+ display_name: "Icterus parisorum"
+}
+item {
+ name: "85333"
+ id: 2316
+ display_name: "Micrathena gracilis"
+}
+item {
+ name: "126737"
+ id: 2317
+ display_name: "Anania funebris"
+}
+item {
+ name: "49053"
+ id: 2318
+ display_name: "Cryptochiton stelleri"
+}
+item {
+ name: "47721"
+ id: 2319
+ display_name: "Parastichopus californicus"
+}
+item {
+ name: "34050"
+ id: 2320
+ display_name: "Phelsuma laticauda"
+}
+item {
+ name: "154219"
+ id: 2321
+ display_name: "Notarctia proxima"
+}
+item {
+ name: "51781"
+ id: 2322
+ display_name: "Tyria jacobaeae"
+}
+item {
+ name: "24230"
+ id: 2323
+ display_name: "Acris crepitans"
+}
+item {
+ name: "146032"
+ id: 2324
+ display_name: "Coluber flagellum"
+}
+item {
+ name: "146033"
+ id: 2325
+ display_name: "Coluber flagellum flagellum"
+}
+item {
+ name: "244340"
+ id: 2326
+ display_name: "Hordnia atropunctata"
+}
+item {
+ name: "146037"
+ id: 2327
+ display_name: "Coluber taeniatus"
+}
+item {
+ name: "244344"
+ id: 2328
+ display_name: "Scopula rubraria"
+}
+item {
+ name: "47737"
+ id: 2329
+ display_name: "Harpaphe haydeniana"
+}
+item {
+ name: "5227"
+ id: 2330
+ display_name: "Buteo platypterus"
+}
+item {
+ name: "39556"
+ id: 2331
+ display_name: "Apalone spinifera"
+}
+item {
+ name: "39560"
+ id: 2332
+ display_name: "Apalone spinifera emoryi"
+}
+item {
+ name: "318836"
+ id: 2333
+ display_name: "Gallinago gallinago"
+}
+item {
+ name: "105098"
+ id: 2334
+ display_name: "Magicicada septendecim"
+}
+item {
+ name: "96907"
+ id: 2335
+ display_name: "Celithemis fasciata"
+}
+item {
+ name: "9325"
+ id: 2336
+ display_name: "Icterus spurius"
+}
+item {
+ name: "3864"
+ id: 2337
+ display_name: "Calidris minutilla"
+}
+item {
+ name: "14995"
+ id: 2338
+ display_name: "Dumetella carolinensis"
+}
+item {
+ name: "424597"
+ id: 2339
+ display_name: "Porphyrio hochstetteri"
+}
+item {
+ name: "47768"
+ id: 2340
+ display_name: "Doriopsilla albopunctata"
+}
+item {
+ name: "498116"
+ id: 2341
+ display_name: "Aeolidia papillosa"
+}
+item {
+ name: "244378"
+ id: 2342
+ display_name: "Mallophora fautrix"
+}
+item {
+ name: "3866"
+ id: 2343
+ display_name: "Calidris fuscicollis"
+}
+item {
+ name: "47776"
+ id: 2344
+ display_name: "Ariolimax columbianus"
+}
+item {
+ name: "144497"
+ id: 2345
+ display_name: "Phalaropus tricolor"
+}
+item {
+ name: "39824"
+ id: 2346
+ display_name: "Pseudemys nelsoni"
+}
+item {
+ name: "236979"
+ id: 2347
+ display_name: "Colaptes auratus auratus"
+}
+item {
+ name: "55990"
+ id: 2348
+ display_name: "Podarcis muralis"
+}
+item {
+ name: "244407"
+ id: 2349
+ display_name: "Zelus renardii"
+}
+item {
+ name: "47802"
+ id: 2350
+ display_name: "Lymantria dispar"
+}
+item {
+ name: "15035"
+ id: 2351
+ display_name: "Melanotis caerulescens"
+}
+item {
+ name: "51658"
+ id: 2352
+ display_name: "Anthopleura artemisia"
+}
+item {
+ name: "121534"
+ id: 2353
+ display_name: "Oreta rosea"
+}
+item {
+ name: "73504"
+ id: 2354
+ display_name: "Tiaris olivaceus"
+}
+item {
+ name: "15045"
+ id: 2355
+ display_name: "Oreoscoptes montanus"
+}
+item {
+ name: "3873"
+ id: 2356
+ display_name: "Limnodromus scolopaceus"
+}
+item {
+ name: "47673"
+ id: 2357
+ display_name: "Pycnopodia helianthoides"
+}
+item {
+ name: "47817"
+ id: 2358
+ display_name: "Libellula saturata"
+}
+item {
+ name: "56644"
+ id: 2359
+ display_name: "Polygonia satyrus"
+}
+item {
+ name: "47826"
+ id: 2360
+ display_name: "Cancer productus"
+}
+item {
+ name: "3875"
+ id: 2361
+ display_name: "Tringa solitaria"
+}
+item {
+ name: "39782"
+ id: 2362
+ display_name: "Trachemys scripta"
+}
+item {
+ name: "143140"
+ id: 2363
+ display_name: "Cyllopsis gemma"
+}
+item {
+ name: "29818"
+ id: 2364
+ display_name: "Lampropeltis holbrooki"
+}
+item {
+ name: "56293"
+ id: 2365
+ display_name: "Macroglossum stellatarum"
+}
+item {
+ name: "154340"
+ id: 2366
+ display_name: "Gryllodes sigillatus"
+}
+item {
+ name: "14801"
+ id: 2367
+ display_name: "Sitta carolinensis"
+}
+item {
+ name: "121578"
+ id: 2368
+ display_name: "Ovis aries"
+}
+item {
+ name: "3879"
+ id: 2369
+ display_name: "Tringa totanus"
+}
+item {
+ name: "6893"
+ id: 2370
+ display_name: "Dendrocygna autumnalis"
+}
+item {
+ name: "154353"
+ id: 2371
+ display_name: "Sunira bicolorago"
+}
+item {
+ name: "6898"
+ id: 2372
+ display_name: "Dendrocygna viduata"
+}
+item {
+ name: "6899"
+ id: 2373
+ display_name: "Dendrocygna bicolor"
+}
+item {
+ name: "9342"
+ id: 2374
+ display_name: "Icterus abeillei"
+}
+item {
+ name: "39670"
+ id: 2375
+ display_name: "Lepidochelys olivacea"
+}
+item {
+ name: "4867"
+ id: 2376
+ display_name: "Vanellus chilensis"
+}
+item {
+ name: "39677"
+ id: 2377
+ display_name: "Dermochelys coriacea"
+}
+item {
+ name: "113407"
+ id: 2378
+ display_name: "Stylurus plagiatus"
+}
+item {
+ name: "39682"
+ id: 2379
+ display_name: "Chelydra serpentina"
+}
+item {
+ name: "6915"
+ id: 2380
+ display_name: "Cygnus buccinator"
+}
+item {
+ name: "6916"
+ id: 2381
+ display_name: "Cygnus cygnus"
+}
+item {
+ name: "6917"
+ id: 2382
+ display_name: "Cygnus columbianus"
+}
+item {
+ name: "29825"
+ id: 2383
+ display_name: "Lampropeltis calligaster calligaster"
+}
+item {
+ name: "6921"
+ id: 2384
+ display_name: "Cygnus olor"
+}
+item {
+ name: "146186"
+ id: 2385
+ display_name: "Intellagama lesueurii"
+}
+item {
+ name: "9346"
+ id: 2386
+ display_name: "Icterus galbula"
+}
+item {
+ name: "126765"
+ id: 2387
+ display_name: "Plutella xylostella"
+}
+item {
+ name: "71154"
+ id: 2388
+ display_name: "Aphis nerii"
+}
+item {
+ name: "6930"
+ id: 2389
+ display_name: "Anas platyrhynchos"
+}
+item {
+ name: "6933"
+ id: 2390
+ display_name: "Anas acuta"
+}
+item {
+ name: "39703"
+ id: 2391
+ display_name: "Sternotherus odoratus"
+}
+item {
+ name: "6937"
+ id: 2392
+ display_name: "Anas crecca"
+}
+item {
+ name: "64287"
+ id: 2393
+ display_name: "Lottia digitalis"
+}
+item {
+ name: "6944"
+ id: 2394
+ display_name: "Anas cyanoptera"
+}
+item {
+ name: "39713"
+ id: 2395
+ display_name: "Kinosternon subrubrum"
+}
+item {
+ name: "26691"
+ id: 2396
+ display_name: "Scaphiopus couchii"
+}
+item {
+ name: "6948"
+ id: 2397
+ display_name: "Anas fulvigula"
+}
+item {
+ name: "6953"
+ id: 2398
+ display_name: "Anas discors"
+}
+item {
+ name: "47914"
+ id: 2399
+ display_name: "Eumorpha pandorus"
+}
+item {
+ name: "47916"
+ id: 2400
+ display_name: "Actias luna"
+}
+item {
+ name: "6957"
+ id: 2401
+ display_name: "Anas strepera"
+}
+item {
+ name: "47919"
+ id: 2402
+ display_name: "Antheraea polyphemus"
+}
+item {
+ name: "119953"
+ id: 2403
+ display_name: "Hypoprepia fucosa"
+}
+item {
+ name: "6961"
+ id: 2404
+ display_name: "Anas clypeata"
+}
+item {
+ name: "134119"
+ id: 2405
+ display_name: "Anisomorpha buprestoides"
+}
+item {
+ name: "51678"
+ id: 2406
+ display_name: "Coenagrion puella"
+}
+item {
+ name: "72502"
+ id: 2407
+ display_name: "Anas chlorotis"
+}
+item {
+ name: "49060"
+ id: 2408
+ display_name: "Epiactis prolifera"
+}
+item {
+ name: "42122"
+ id: 2409
+ display_name: "Phacochoerus africanus"
+}
+item {
+ name: "58507"
+ id: 2410
+ display_name: "Poanes hobomok"
+}
+item {
+ name: "121669"
+ id: 2411
+ display_name: "Stenopus hispidus"
+}
+item {
+ name: "8143"
+ id: 2412
+ display_name: "Rhipidura leucophrys"
+}
+item {
+ name: "6985"
+ id: 2413
+ display_name: "Anas americana"
+}
+item {
+ name: "6993"
+ id: 2414
+ display_name: "Bucephala albeola"
+}
+item {
+ name: "121682"
+ id: 2415
+ display_name: "Tetraclita rubescens"
+}
+item {
+ name: "6996"
+ id: 2416
+ display_name: "Mergus serrator"
+}
+item {
+ name: "113498"
+ id: 2417
+ display_name: "Sympetrum ambiguum"
+}
+item {
+ name: "39771"
+ id: 2418
+ display_name: "Chrysemys picta"
+}
+item {
+ name: "7004"
+ id: 2419
+ display_name: "Mergus merganser"
+}
+item {
+ name: "39773"
+ id: 2420
+ display_name: "Chrysemys picta bellii"
+}
+item {
+ name: "113503"
+ id: 2421
+ display_name: "Sympetrum danae"
+}
+item {
+ name: "113507"
+ id: 2422
+ display_name: "Sympetrum fonscolombii"
+}
+item {
+ name: "154469"
+ id: 2423
+ display_name: "Isa textula"
+}
+item {
+ name: "47975"
+ id: 2424
+ display_name: "Argia apicalis"
+}
+item {
+ name: "7018"
+ id: 2425
+ display_name: "Anser anser"
+}
+item {
+ name: "7019"
+ id: 2426
+ display_name: "Anser albifrons"
+}
+item {
+ name: "47980"
+ id: 2427
+ display_name: "Speyeria cybele"
+}
+item {
+ name: "58514"
+ id: 2428
+ display_name: "Euphyes vestris"
+}
+item {
+ name: "113519"
+ id: 2429
+ display_name: "Sympetrum obtrusum"
+}
+item {
+ name: "7024"
+ id: 2430
+ display_name: "Somateria mollissima"
+}
+item {
+ name: "39793"
+ id: 2431
+ display_name: "Trachemys scripta scripta"
+}
+item {
+ name: "367475"
+ id: 2432
+ display_name: "Rallus obsoletus"
+}
+item {
+ name: "121716"
+ id: 2433
+ display_name: "Uresiphita reversalis"
+}
+item {
+ name: "113525"
+ id: 2434
+ display_name: "Sympetrum sanguineum"
+}
+item {
+ name: "113526"
+ id: 2435
+ display_name: "Sympetrum semicinctum"
+}
+item {
+ name: "18921"
+ id: 2436
+ display_name: "Platycercus elegans"
+}
+item {
+ name: "7032"
+ id: 2437
+ display_name: "Melanitta fusca"
+}
+item {
+ name: "5268"
+ id: 2438
+ display_name: "Milvus migrans"
+}
+item {
+ name: "144536"
+ id: 2439
+ display_name: "Gelochelidon nilotica"
+}
+item {
+ name: "413503"
+ id: 2440
+ display_name: "Ninox novaeseelandiae novaeseelandiae"
+}
+item {
+ name: "7036"
+ id: 2441
+ display_name: "Melanitta perspicillata"
+}
+item {
+ name: "64382"
+ id: 2442
+ display_name: "Lissotriton vulgaris"
+}
+item {
+ name: "39807"
+ id: 2443
+ display_name: "Terrapene ornata"
+}
+item {
+ name: "39808"
+ id: 2444
+ display_name: "Terrapene ornata luteola"
+}
+item {
+ name: "7044"
+ id: 2445
+ display_name: "Aythya collaris"
+}
+item {
+ name: "7045"
+ id: 2446
+ display_name: "Aythya ferina"
+}
+item {
+ name: "7046"
+ id: 2447
+ display_name: "Aythya fuligula"
+}
+item {
+ name: "146314"
+ id: 2448
+ display_name: "Opheodrys vernalis"
+}
+item {
+ name: "3906"
+ id: 2449
+ display_name: "Numenius americanus"
+}
+item {
+ name: "39823"
+ id: 2450
+ display_name: "Pseudemys gorzugi"
+}
+item {
+ name: "178991"
+ id: 2451
+ display_name: "Sypharochiton pelliserpentis"
+}
+item {
+ name: "7061"
+ id: 2452
+ display_name: "Chen caerulescens"
+}
+item {
+ name: "39830"
+ id: 2453
+ display_name: "Pseudemys concinna"
+}
+item {
+ name: "127490"
+ id: 2454
+ display_name: "Parrhasius m-album"
+}
+item {
+ name: "15256"
+ id: 2455
+ display_name: "Chamaea fasciata"
+}
+item {
+ name: "39836"
+ id: 2456
+ display_name: "Malaclemys terrapin"
+}
+item {
+ name: "133764"
+ id: 2457
+ display_name: "Trichopoda pennipes"
+}
+item {
+ name: "334753"
+ id: 2458
+ display_name: "Hypselonotus punctiventris"
+}
+item {
+ name: "58611"
+ id: 2459
+ display_name: "Amia calva"
+}
+item {
+ name: "56240"
+ id: 2460
+ display_name: "Argia vivida"
+}
+item {
+ name: "7089"
+ id: 2461
+ display_name: "Branta canadensis"
+}
+item {
+ name: "146354"
+ id: 2462
+ display_name: "Phrynosoma blainvillii"
+}
+item {
+ name: "56243"
+ id: 2463
+ display_name: "Plebejus acmon"
+}
+item {
+ name: "144542"
+ id: 2464
+ display_name: "Thalasseus elegans"
+}
+item {
+ name: "121783"
+ id: 2465
+ display_name: "Lithobates clamitans melanota"
+}
+item {
+ name: "39865"
+ id: 2466
+ display_name: "Glyptemys insculpta"
+}
+item {
+ name: "39867"
+ id: 2467
+ display_name: "Emys orbicularis"
+}
+item {
+ name: "7104"
+ id: 2468
+ display_name: "Branta sandvicensis"
+}
+item {
+ name: "50336"
+ id: 2469
+ display_name: "Siproeta stelenes"
+}
+item {
+ name: "7056"
+ id: 2470
+ display_name: "Aythya americana"
+}
+item {
+ name: "7107"
+ id: 2471
+ display_name: "Aix sponsa"
+}
+item {
+ name: "7109"
+ id: 2472
+ display_name: "Lophodytes cucullatus"
+}
+item {
+ name: "7111"
+ id: 2473
+ display_name: "Histrionicus histrionicus"
+}
+item {
+ name: "367562"
+ id: 2474
+ display_name: "Aratinga nenday"
+}
+item {
+ name: "39885"
+ id: 2475
+ display_name: "Emydoidea blandingii"
+}
+item {
+ name: "367566"
+ id: 2476
+ display_name: "Psittacara holochlorus"
+}
+item {
+ name: "143181"
+ id: 2477
+ display_name: "Marimatha nigrofimbria"
+}
+item {
+ name: "7120"
+ id: 2478
+ display_name: "Cairina moschata"
+}
+item {
+ name: "7122"
+ id: 2479
+ display_name: "Netta rufina"
+}
+item {
+ name: "130003"
+ id: 2480
+ display_name: "Phaeoura quernaria"
+}
+item {
+ name: "367572"
+ id: 2481
+ display_name: "Psittacara erythrogenys"
+}
+item {
+ name: "17009"
+ id: 2482
+ display_name: "Sayornis saya"
+}
+item {
+ name: "154582"
+ id: 2483
+ display_name: "Ennomos magnaria"
+}
+item {
+ name: "58532"
+ id: 2484
+ display_name: "Colias eurytheme"
+}
+item {
+ name: "121821"
+ id: 2485
+ display_name: "Sceliphron caementarium"
+}
+item {
+ name: "48094"
+ id: 2486
+ display_name: "Dryocampa rubicunda"
+}
+item {
+ name: "7057"
+ id: 2487
+ display_name: "Aythya valisineria"
+}
+item {
+ name: "17646"
+ id: 2488
+ display_name: "Picoides albolarvatus"
+}
+item {
+ name: "201551"
+ id: 2489
+ display_name: "Procyon lotor lotor"
+}
+item {
+ name: "58534"
+ id: 2490
+ display_name: "Lycaena hyllus"
+}
+item {
+ name: "73553"
+ id: 2491
+ display_name: "Vermivora cyanoptera"
+}
+item {
+ name: "359401"
+ id: 2492
+ display_name: "Exomala orientalis"
+}
+item {
+ name: "8018"
+ id: 2493
+ display_name: "Corvus caurinus"
+}
+item {
+ name: "490478"
+ id: 2494
+ display_name: "Tegula brunnea"
+}
+item {
+ name: "20307"
+ id: 2495
+ display_name: "Asio otus"
+}
+item {
+ name: "227466"
+ id: 2496
+ display_name: "Peridea ferruginea"
+}
+item {
+ name: "122172"
+ id: 2497
+ display_name: "Pyrisitia lisa"
+}
+item {
+ name: "133631"
+ id: 2498
+ display_name: "Polites peckius"
+}
+item {
+ name: "8021"
+ id: 2499
+ display_name: "Corvus brachyrhynchos"
+}
+item {
+ name: "7170"
+ id: 2500
+ display_name: "Clangula hyemalis"
+}
+item {
+ name: "58539"
+ id: 2501
+ display_name: "Satyrium calanus"
+}
+item {
+ name: "27137"
+ id: 2502
+ display_name: "Coluber constrictor"
+}
+item {
+ name: "7176"
+ id: 2503
+ display_name: "Chenonetta jubata"
+}
+item {
+ name: "42157"
+ id: 2504
+ display_name: "Giraffa camelopardalis"
+}
+item {
+ name: "144541"
+ id: 2505
+ display_name: "Thalasseus sandvicensis"
+}
+item {
+ name: "23572"
+ id: 2506
+ display_name: "Litoria aurea"
+}
+item {
+ name: "354820"
+ id: 2507
+ display_name: "Patiriella regularis"
+}
+item {
+ name: "55887"
+ id: 2508
+ display_name: "Andricus quercuscalifornicus"
+}
+item {
+ name: "46255"
+ id: 2509
+ display_name: "Ammospermophilus leucurus"
+}
+item {
+ name: "334341"
+ id: 2510
+ display_name: "Oryctolagus cuniculus domesticus"
+}
+item {
+ name: "144560"
+ id: 2511
+ display_name: "Eolophus roseicapilla"
+}
+item {
+ name: "94043"
+ id: 2512
+ display_name: "Anax imperator"
+}
+item {
+ name: "425004"
+ id: 2513
+ display_name: "Dryas iulia moderata"
+}
+item {
+ name: "269359"
+ id: 2514
+ display_name: "Cactophagus spinolae"
+}
+item {
+ name: "72755"
+ id: 2515
+ display_name: "Colaptes rubiginosus"
+}
+item {
+ name: "319123"
+ id: 2516
+ display_name: "Meleagris gallopavo silvestris"
+}
+item {
+ name: "130846"
+ id: 2517
+ display_name: "Lyssa zampa"
+}
+item {
+ name: "203831"
+ id: 2518
+ display_name: "Nemoria bistriaria"
+}
+item {
+ name: "367678"
+ id: 2519
+ display_name: "Ptiliogonys cinereus"
+}
+item {
+ name: "5301"
+ id: 2520
+ display_name: "Elanoides forficatus"
+}
+item {
+ name: "9398"
+ id: 2521
+ display_name: "Carduelis carduelis"
+}
+item {
+ name: "143201"
+ id: 2522
+ display_name: "Coryphista meadii"
+}
+item {
+ name: "104419"
+ id: 2523
+ display_name: "Lestes australis"
+}
+item {
+ name: "367693"
+ id: 2524
+ display_name: "Cassiculus melanicterus"
+}
+item {
+ name: "143452"
+ id: 2525
+ display_name: "Deidamia inscriptum"
+}
+item {
+ name: "466003"
+ id: 2526
+ display_name: "Romalea microptera"
+}
+item {
+ name: "84494"
+ id: 2527
+ display_name: "Paraphidippus aurantius"
+}
+item {
+ name: "203866"
+ id: 2528
+ display_name: "Rabdophaga strobiloides"
+}
+item {
+ name: "72797"
+ id: 2529
+ display_name: "Dendragapus fuliginosus"
+}
+item {
+ name: "7266"
+ id: 2530
+ display_name: "Psaltriparus minimus"
+}
+item {
+ name: "120920"
+ id: 2531
+ display_name: "Odocoileus virginianus clavium"
+}
+item {
+ name: "7278"
+ id: 2532
+ display_name: "Aegithalos caudatus"
+}
+item {
+ name: "30681"
+ id: 2533
+ display_name: "Agkistrodon contortrix mokasen"
+}
+item {
+ name: "413547"
+ id: 2534
+ display_name: "Zosterops lateralis lateralis"
+}
+item {
+ name: "48262"
+ id: 2535
+ display_name: "Apatelodes torrefacta"
+}
+item {
+ name: "121993"
+ id: 2536
+ display_name: "Lampides boeticus"
+}
+item {
+ name: "48267"
+ id: 2537
+ display_name: "Crotalus oreganus oreganus"
+}
+item {
+ name: "48268"
+ id: 2538
+ display_name: "Crotalus oreganus"
+}
+item {
+ name: "147309"
+ id: 2539
+ display_name: "Feltia herilis"
+}
+item {
+ name: "146413"
+ id: 2540
+ display_name: "Sceloporus consobrinus"
+}
+item {
+ name: "326764"
+ id: 2541
+ display_name: "Cyprinus carpio haematopterus"
+}
+item {
+ name: "5315"
+ id: 2542
+ display_name: "Haliaeetus leucogaster"
+}
+item {
+ name: "4519"
+ id: 2543
+ display_name: "Uria aalge"
+}
+item {
+ name: "40085"
+ id: 2544
+ display_name: "Gopherus polyphemus"
+}
+item {
+ name: "23702"
+ id: 2545
+ display_name: "Agalychnis callidryas"
+}
+item {
+ name: "210116"
+ id: 2546
+ display_name: "Tringa semipalmata inornatus"
+}
+item {
+ name: "40092"
+ id: 2547
+ display_name: "Stigmochelys pardalis"
+}
+item {
+ name: "59931"
+ id: 2548
+ display_name: "Acanthurus triostegus"
+}
+item {
+ name: "48292"
+ id: 2549
+ display_name: "Philoscia muscorum"
+}
+item {
+ name: "146601"
+ id: 2550
+ display_name: "Scolopendra heros"
+}
+item {
+ name: "244906"
+ id: 2551
+ display_name: "Panchlora nivea"
+}
+item {
+ name: "48302"
+ id: 2552
+ display_name: "Limulus polyphemus"
+}
+item {
+ name: "180008"
+ id: 2553
+ display_name: "Otospermophilus variegatus"
+}
+item {
+ name: "7347"
+ id: 2554
+ display_name: "Alauda arvensis"
+}
+item {
+ name: "43459"
+ id: 2555
+ display_name: "Macaca fascicularis"
+}
+item {
+ name: "113846"
+ id: 2556
+ display_name: "Telebasis salva"
+}
+item {
+ name: "7356"
+ id: 2557
+ display_name: "Galerida cristata"
+}
+item {
+ name: "64705"
+ id: 2558
+ display_name: "Delichon urbicum"
+}
+item {
+ name: "145932"
+ id: 2559
+ display_name: "Aspidoscelis hyperythra beldingi"
+}
+item {
+ name: "72912"
+ id: 2560
+ display_name: "Helmitheros vermivorum"
+}
+item {
+ name: "69805"
+ id: 2561
+ display_name: "Octogomphus specularis"
+}
+item {
+ name: "129572"
+ id: 2562
+ display_name: "Aphomia sociella"
+}
+item {
+ name: "31964"
+ id: 2563
+ display_name: "Barisia imbricata"
+}
+item {
+ name: "244625"
+ id: 2564
+ display_name: "Halmus chalybeus"
+}
+item {
+ name: "58576"
+ id: 2565
+ display_name: "Phyciodes cocyta"
+}
+item {
+ name: "72931"
+ id: 2566
+ display_name: "Hylocharis leucotis"
+}
+item {
+ name: "104449"
+ id: 2567
+ display_name: "Lestes rectangularis"
+}
+item {
+ name: "14886"
+ id: 2568
+ display_name: "Mimus polyglottos"
+}
+item {
+ name: "23783"
+ id: 2569
+ display_name: "Hyla versicolor"
+}
+item {
+ name: "23784"
+ id: 2570
+ display_name: "Hyla plicata"
+}
+item {
+ name: "8575"
+ id: 2571
+ display_name: "Gymnorhina tibicen"
+}
+item {
+ name: "2599"
+ id: 2572
+ display_name: "Alcedo atthis"
+}
+item {
+ name: "61152"
+ id: 2573
+ display_name: "Pyrrhosoma nymphula"
+}
+item {
+ name: "58579"
+ id: 2574
+ display_name: "Polygonia interrogationis"
+}
+item {
+ name: "31993"
+ id: 2575
+ display_name: "Ophisaurus attenuatus attenuatus"
+}
+item {
+ name: "53985"
+ id: 2576
+ display_name: "Odocoileus hemionus californicus"
+}
+item {
+ name: "144549"
+ id: 2577
+ display_name: "Streptopelia chinensis"
+}
+item {
+ name: "105730"
+ id: 2578
+ display_name: "Micrathyria hagenii"
+}
+item {
+ name: "7428"
+ id: 2579
+ display_name: "Bombycilla cedrorum"
+}
+item {
+ name: "7429"
+ id: 2580
+ display_name: "Bombycilla garrulus"
+}
+item {
+ name: "50391"
+ id: 2581
+ display_name: "Polygonia gracilis"
+}
+item {
+ name: "7067"
+ id: 2582
+ display_name: "Tadorna tadorna"
+}
+item {
+ name: "413513"
+ id: 2583
+ display_name: "Petroica australis australis"
+}
+item {
+ name: "39469"
+ id: 2584
+ display_name: "Varanus varius"
+}
+item {
+ name: "58479"
+ id: 2585
+ display_name: "Pholisora catullus"
+}
+item {
+ name: "127929"
+ id: 2586
+ display_name: "Achalarus lyciades"
+}
+item {
+ name: "48403"
+ id: 2587
+ display_name: "Gasterosteus aculeatus"
+}
+item {
+ name: "18990"
+ id: 2588
+ display_name: "Amazona autumnalis"
+}
+item {
+ name: "1241"
+ id: 2589
+ display_name: "Dendragapus obscurus"
+}
+item {
+ name: "228634"
+ id: 2590
+ display_name: "Ponometia erastrioides"
+}
+item {
+ name: "64806"
+ id: 2591
+ display_name: "Pelophylax"
+}
+item {
+ name: "51761"
+ id: 2592
+ display_name: "Hetaerina americana"
+}
+item {
+ name: "7464"
+ id: 2593
+ display_name: "Catherpes mexicanus"
+}
+item {
+ name: "318761"
+ id: 2594
+ display_name: "Sceloporus uniformis"
+}
+item {
+ name: "7068"
+ id: 2595
+ display_name: "Tadorna ferruginea"
+}
+item {
+ name: "204077"
+ id: 2596
+ display_name: "Achyra rantalis"
+}
+item {
+ name: "7470"
+ id: 2597
+ display_name: "Campylorhynchus brunneicapillus"
+}
+item {
+ name: "32048"
+ id: 2598
+ display_name: "Gerrhonotus infernalis"
+}
+item {
+ name: "204081"
+ id: 2599
+ display_name: "Pyrausta laticlavia"
+}
+item {
+ name: "7476"
+ id: 2600
+ display_name: "Campylorhynchus rufinucha"
+}
+item {
+ name: "32055"
+ id: 2601
+ display_name: "Elgaria multicarinata"
+}
+item {
+ name: "244276"
+ id: 2602
+ display_name: "Rhipidura fuliginosa"
+}
+item {
+ name: "144187"
+ id: 2603
+ display_name: "Pyrisitia proterpia"
+}
+item {
+ name: "32059"
+ id: 2604
+ display_name: "Elgaria multicarinata multicarinata"
+}
+item {
+ name: "32061"
+ id: 2605
+ display_name: "Elgaria kingii"
+}
+item {
+ name: "146750"
+ id: 2606
+ display_name: "Lascoria ambigualis"
+}
+item {
+ name: "32064"
+ id: 2607
+ display_name: "Elgaria coerulea"
+}
+item {
+ name: "23873"
+ id: 2608
+ display_name: "Hyla squirella"
+}
+item {
+ name: "48450"
+ id: 2609
+ display_name: "Peltodoris nobilis"
+}
+item {
+ name: "64146"
+ id: 2610
+ display_name: "Fissurella volcano"
+}
+item {
+ name: "48259"
+ id: 2611
+ display_name: "Pelidnota punctata"
+}
+item {
+ name: "122185"
+ id: 2612
+ display_name: "Pantherophis alleghaniensis quadrivittata"
+}
+item {
+ name: "7498"
+ id: 2613
+ display_name: "Polioptila melanura"
+}
+item {
+ name: "56652"
+ id: 2614
+ display_name: "Haliotis rufescens"
+}
+item {
+ name: "122191"
+ id: 2615
+ display_name: "Pelecanus occidentalis carolinensis"
+}
+item {
+ name: "73041"
+ id: 2616
+ display_name: "Melozone aberti"
+}
+item {
+ name: "199381"
+ id: 2617
+ display_name: "Homalodisca vitripennis"
+}
+item {
+ name: "73044"
+ id: 2618
+ display_name: "Melozone crissalis"
+}
+item {
+ name: "83290"
+ id: 2619
+ display_name: "Zanclus cornutus"
+}
+item {
+ name: "7513"
+ id: 2620
+ display_name: "Thryothorus ludovicianus"
+}
+item {
+ name: "28559"
+ id: 2621
+ display_name: "Storeria occipitomaculata occipitomaculata"
+}
+item {
+ name: "24255"
+ id: 2622
+ display_name: "Pseudacris maculata"
+}
+item {
+ name: "130398"
+ id: 2623
+ display_name: "Melanargia galathea"
+}
+item {
+ name: "29925"
+ id: 2624
+ display_name: "Heterodon platirhinos"
+}
+item {
+ name: "48484"
+ id: 2625
+ display_name: "Harmonia axyridis"
+}
+item {
+ name: "122214"
+ id: 2626
+ display_name: "Odontotaenius disjunctus"
+}
+item {
+ name: "39484"
+ id: 2627
+ display_name: "Xantusia vigilis"
+}
+item {
+ name: "73919"
+ id: 2628
+ display_name: "Podarcis sicula"
+}
+item {
+ name: "154553"
+ id: 2629
+ display_name: "Leptoglossus clypealis"
+}
+item {
+ name: "23922"
+ id: 2630
+ display_name: "Hyla intermedia"
+}
+item {
+ name: "122228"
+ id: 2631
+ display_name: "Acharia stimulea"
+}
+item {
+ name: "108344"
+ id: 2632
+ display_name: "Pantala flavescens"
+}
+item {
+ name: "118538"
+ id: 2633
+ display_name: "Cotinis nitida"
+}
+item {
+ name: "23930"
+ id: 2634
+ display_name: "Hyla chrysoscelis"
+}
+item {
+ name: "23933"
+ id: 2635
+ display_name: "Hyla arenicolor"
+}
+item {
+ name: "122238"
+ id: 2636
+ display_name: "Porcellio scaber"
+}
+item {
+ name: "479803"
+ id: 2637
+ display_name: "Dioprosopa clavata"
+}
+item {
+ name: "5355"
+ id: 2638
+ display_name: "Parabuteo unicinctus"
+}
+item {
+ name: "146822"
+ id: 2639
+ display_name: "Texola elada"
+}
+item {
+ name: "236935"
+ id: 2640
+ display_name: "Anas platyrhynchos domesticus"
+}
+item {
+ name: "7562"
+ id: 2641
+ display_name: "Troglodytes aedon"
+}
+item {
+ name: "339444"
+ id: 2642
+ display_name: "Buteo lineatus elegans"
+}
+item {
+ name: "42221"
+ id: 2643
+ display_name: "Odocoileus hemionus columbianus"
+}
+item {
+ name: "15764"
+ id: 2644
+ display_name: "Thamnophilus doliatus"
+}
+item {
+ name: "122261"
+ id: 2645
+ display_name: "Cucullia convexipennis"
+}
+item {
+ name: "122262"
+ id: 2646
+ display_name: "Brachystola magna"
+}
+item {
+ name: "7576"
+ id: 2647
+ display_name: "Thryomanes bewickii"
+}
+item {
+ name: "143015"
+ id: 2648
+ display_name: "Eubaphe mendica"
+}
+item {
+ name: "73592"
+ id: 2649
+ display_name: "Actinemys marmorata"
+}
+item {
+ name: "84549"
+ id: 2650
+ display_name: "Plathemis lydia"
+}
+item {
+ name: "23969"
+ id: 2651
+ display_name: "Hyla cinerea"
+}
+item {
+ name: "318882"
+ id: 2652
+ display_name: "Ancistrocerus gazella"
+}
+item {
+ name: "7072"
+ id: 2653
+ display_name: "Tadorna variegata"
+}
+item {
+ name: "48548"
+ id: 2654
+ display_name: "Vanessa cardui"
+}
+item {
+ name: "48549"
+ id: 2655
+ display_name: "Vanessa virginiensis"
+}
+item {
+ name: "122278"
+ id: 2656
+ display_name: "Pomacea canaliculata"
+}
+item {
+ name: "9457"
+ id: 2657
+ display_name: "Myioborus miniatus"
+}
+item {
+ name: "122280"
+ id: 2658
+ display_name: "Pyrgus albescens"
+}
+item {
+ name: "122281"
+ id: 2659
+ display_name: "Calycopis cecrops"
+}
+item {
+ name: "130474"
+ id: 2660
+ display_name: "Achlyodes pallida"
+}
+item {
+ name: "338503"
+ id: 2661
+ display_name: "Phalacrocorax varius varius"
+}
+item {
+ name: "9458"
+ id: 2662
+ display_name: "Myioborus pictus"
+}
+item {
+ name: "73629"
+ id: 2663
+ display_name: "Anolis nebulosus"
+}
+item {
+ name: "122291"
+ id: 2664
+ display_name: "Larus argentatus smithsonianus"
+}
+item {
+ name: "56756"
+ id: 2665
+ display_name: "Murgantia histrionica"
+}
+item {
+ name: "73148"
+ id: 2666
+ display_name: "Parkesia motacilla"
+}
+item {
+ name: "48575"
+ id: 2667
+ display_name: "Okenia rosacea"
+}
+item {
+ name: "56768"
+ id: 2668
+ display_name: "Sula granti"
+}
+item {
+ name: "48578"
+ id: 2669
+ display_name: "Anteos maerula"
+}
+item {
+ name: "64968"
+ id: 2670
+ display_name: "Anaxyrus americanus"
+}
+item {
+ name: "64970"
+ id: 2671
+ display_name: "Anaxyrus boreas"
+}
+item {
+ name: "115549"
+ id: 2672
+ display_name: "Crotalus lepidus lepidus"
+}
+item {
+ name: "64977"
+ id: 2673
+ display_name: "Anaxyrus fowleri"
+}
+item {
+ name: "19022"
+ id: 2674
+ display_name: "Ara macao"
+}
+item {
+ name: "24259"
+ id: 2675
+ display_name: "Pseudacris regilla"
+}
+item {
+ name: "64984"
+ id: 2676
+ display_name: "Anaxyrus punctatus"
+}
+item {
+ name: "64985"
+ id: 2677
+ display_name: "Anaxyrus quercicus"
+}
+item {
+ name: "73178"
+ id: 2678
+ display_name: "Peucaea ruficauda"
+}
+item {
+ name: "64987"
+ id: 2679
+ display_name: "Anaxyrus speciosus"
+}
+item {
+ name: "64989"
+ id: 2680
+ display_name: "Anaxyrus woodhousii"
+}
+item {
+ name: "339596"
+ id: 2681
+ display_name: "Calidris subruficollis"
+}
+item {
+ name: "56552"
+ id: 2682
+ display_name: "Carabus nemoralis"
+}
+item {
+ name: "84722"
+ id: 2683
+ display_name: "Ischnura verticalis"
+}
+item {
+ name: "122356"
+ id: 2684
+ display_name: "Eumorpha achemon"
+}
+item {
+ name: "318965"
+ id: 2685
+ display_name: "Chrysolina bankii"
+}
+item {
+ name: "228855"
+ id: 2686
+ display_name: "Protodeltote muscosula"
+}
+item {
+ name: "146940"
+ id: 2687
+ display_name: "Agriphila vulgivagella"
+}
+item {
+ name: "56832"
+ id: 2688
+ display_name: "Nymphalis antiopa"
+}
+item {
+ name: "61355"
+ id: 2689
+ display_name: "Vespula pensylvanica"
+}
+item {
+ name: "48645"
+ id: 2690
+ display_name: "Megathura crenulata"
+}
+item {
+ name: "73222"
+ id: 2691
+ display_name: "Phoenicopterus roseus"
+}
+item {
+ name: "363354"
+ id: 2692
+ display_name: "Lobatus gigas"
+}
+item {
+ name: "3802"
+ id: 2693
+ display_name: "Morus bassanus"
+}
+item {
+ name: "62722"
+ id: 2694
+ display_name: "Apalone spinifera spinifera"
+}
+item {
+ name: "48655"
+ id: 2695
+ display_name: "Aplysia californica"
+}
+item {
+ name: "54468"
+ id: 2696
+ display_name: "Aglais urticae"
+}
+item {
+ name: "48662"
+ id: 2697
+ display_name: "Danaus plexippus"
+}
+item {
+ name: "49071"
+ id: 2698
+ display_name: "Metridium senile"
+}
+item {
+ name: "228899"
+ id: 2699
+ display_name: "Psamatodes abydata"
+}
+item {
+ name: "133102"
+ id: 2700
+ display_name: "Oncometopia orbona"
+}
+item {
+ name: "39659"
+ id: 2701
+ display_name: "Chelonia mydas"
+}
+item {
+ name: "121437"
+ id: 2702
+ display_name: "Dolomedes triton"
+}
+item {
+ name: "94545"
+ id: 2703
+ display_name: "Argia fumipennis"
+}
+item {
+ name: "56887"
+ id: 2704
+ display_name: "Bombus pensylvanicus"
+}
+item {
+ name: "40509"
+ id: 2705
+ display_name: "Eptesicus fuscus"
+}
+item {
+ name: "58635"
+ id: 2706
+ display_name: "Lepomis megalotis"
+}
+item {
+ name: "100369"
+ id: 2707
+ display_name: "Erpetogomphus designatus"
+}
+item {
+ name: "58636"
+ id: 2708
+ display_name: "Lepomis cyanellus"
+}
+item {
+ name: "40522"
+ id: 2709
+ display_name: "Lasiurus borealis"
+}
+item {
+ name: "102006"
+ id: 2710
+ display_name: "Hagenius brevistylus"
+}
+item {
+ name: "50283"
+ id: 2711
+ display_name: "Marpesia petreus"
+}
+item {
+ name: "123829"
+ id: 2712
+ display_name: "Pelecanus occidentalis californicus"
+}
+item {
+ name: "62453"
+ id: 2713
+ display_name: "Anthidium manicatum"
+}
+item {
+ name: "56925"
+ id: 2714
+ display_name: "Graphocephala coccinea"
+}
+item {
+ name: "48738"
+ id: 2715
+ display_name: "Sphex pensylvanicus"
+}
+item {
+ name: "43151"
+ id: 2716
+ display_name: "Oryctolagus cuniculus"
+}
+item {
+ name: "19822"
+ id: 2717
+ display_name: "Glaucidium brasilianum"
+}
+item {
+ name: "48750"
+ id: 2718
+ display_name: "Lottia scabra"
+}
+item {
+ name: "335071"
+ id: 2719
+ display_name: "Elophila obliteralis"
+}
+item {
+ name: "81521"
+ id: 2720
+ display_name: "Vipera berus"
+}
+item {
+ name: "43697"
+ id: 2721
+ display_name: "Elephas maximus"
+}
+item {
+ name: "7079"
+ id: 2722
+ display_name: "Oxyura jamaicensis"
+}
+item {
+ name: "43042"
+ id: 2723
+ display_name: "Erinaceus europaeus"
+}
+item {
+ name: "40086"
+ id: 2724
+ display_name: "Gopherus agassizii"
+}
+item {
+ name: "81545"
+ id: 2725
+ display_name: "Lumbricus terrestris"
+}
+item {
+ name: "16010"
+ id: 2726
+ display_name: "Myiarchus cinerascens"
+}
+item {
+ name: "2669"
+ id: 2727
+ display_name: "Chloroceryle americana"
+}
+item {
+ name: "9535"
+ id: 2728
+ display_name: "Sturnella neglecta"
+}
+item {
+ name: "81554"
+ id: 2729
+ display_name: "Ictalurus punctatus"
+}
+item {
+ name: "339907"
+ id: 2730
+ display_name: "Ramphastos ambiguus"
+}
+item {
+ name: "39814"
+ id: 2731
+ display_name: "Terrapene carolina"
+}
+item {
+ name: "10254"
+ id: 2732
+ display_name: "Paroaria coronata"
+}
+item {
+ name: "40614"
+ id: 2733
+ display_name: "Antrozous pallidus"
+}
+item {
+ name: "502385"
+ id: 2734
+ display_name: "Probole amicaria"
+}
+item {
+ name: "24233"
+ id: 2735
+ display_name: "Acris gryllus"
+}
+item {
+ name: "81579"
+ id: 2736
+ display_name: "Steatoda triangulosa"
+}
+item {
+ name: "81580"
+ id: 2737
+ display_name: "Callosamia promethea"
+}
+item {
+ name: "146034"
+ id: 2738
+ display_name: "Coluber lateralis"
+}
+item {
+ name: "81582"
+ id: 2739
+ display_name: "Hyalophora cecropia"
+}
+item {
+ name: "81583"
+ id: 2740
+ display_name: "Anisota senatoria"
+}
+item {
+ name: "66002"
+ id: 2741
+ display_name: "Lithobates palustris"
+}
+item {
+ name: "81586"
+ id: 2742
+ display_name: "Citheronia regalis"
+}
+item {
+ name: "40629"
+ id: 2743
+ display_name: "Lasionycteris noctivagans"
+}
+item {
+ name: "81590"
+ id: 2744
+ display_name: "Eacles imperialis"
+}
+item {
+ name: "204472"
+ id: 2745
+ display_name: "Buteo buteo"
+}
+item {
+ name: "65212"
+ id: 2746
+ display_name: "Craugastor augusti"
+}
+item {
+ name: "48830"
+ id: 2747
+ display_name: "Patiria miniata"
+}
+item {
+ name: "48833"
+ id: 2748
+ display_name: "Pisaster giganteus"
+}
+item {
+ name: "16071"
+ id: 2749
+ display_name: "Myiodynastes luteiventris"
+}
+item {
+ name: "81610"
+ id: 2750
+ display_name: "Balanus glandula"
+}
+item {
+ name: "24268"
+ id: 2751
+ display_name: "Pseudacris crucifer"
+}
+item {
+ name: "16079"
+ id: 2752
+ display_name: "Contopus sordidulus"
+}
+item {
+ name: "204496"
+ id: 2753
+ display_name: "Corvus corone"
+}
+item {
+ name: "204498"
+ id: 2754
+ display_name: "Cyanoramphus novaezelandiae"
+}
+item {
+ name: "24277"
+ id: 2755
+ display_name: "Smilisca baudinii"
+}
+item {
+ name: "22631"
+ id: 2756
+ display_name: "Eleutherodactylus planirostris"
+}
+item {
+ name: "16100"
+ id: 2757
+ display_name: "Contopus virens"
+}
+item {
+ name: "42278"
+ id: 2758
+ display_name: "Aepyceros melampus"
+}
+item {
+ name: "16106"
+ id: 2759
+ display_name: "Contopus pertinax"
+}
+item {
+ name: "16110"
+ id: 2760
+ display_name: "Contopus cooperi"
+}
+item {
+ name: "42280"
+ id: 2761
+ display_name: "Connochaetes taurinus"
+}
+item {
+ name: "47455"
+ id: 2762
+ display_name: "Octopus rubescens"
+}
+item {
+ name: "204533"
+ id: 2763
+ display_name: "Larus argentatus"
+}
+item {
+ name: "81656"
+ id: 2764
+ display_name: "Nematocampa resistaria"
+}
+item {
+ name: "81657"
+ id: 2765
+ display_name: "Lacinipolia renigera"
+}
+item {
+ name: "204519"
+ id: 2766
+ display_name: "Halcyon smyrnensis"
+}
+item {
+ name: "62762"
+ id: 2767
+ display_name: "Cordulegaster dorsalis"
+}
+item {
+ name: "81663"
+ id: 2768
+ display_name: "Malacosoma disstria"
+}
+item {
+ name: "32512"
+ id: 2769
+ display_name: "Rena dulcis"
+}
+item {
+ name: "81665"
+ id: 2770
+ display_name: "Orgyia leucostigma"
+}
+item {
+ name: "130821"
+ id: 2771
+ display_name: "Haploa confusa"
+}
+item {
+ name: "81672"
+ id: 2772
+ display_name: "Clemensia albata"
+}
+item {
+ name: "204554"
+ id: 2773
+ display_name: "Onychognathus morio"
+}
+item {
+ name: "81677"
+ id: 2774
+ display_name: "Euchaetes egle"
+}
+item {
+ name: "81680"
+ id: 2775
+ display_name: "Scopula limboundata"
+}
+item {
+ name: "318497"
+ id: 2776
+ display_name: "Hemipenthes sinuosa"
+}
+item {
+ name: "179987"
+ id: 2777
+ display_name: "Ictidomys parvidens"
+}
+item {
+ name: "179988"
+ id: 2778
+ display_name: "Ictidomys tridecemlineatus"
+}
+item {
+ name: "81685"
+ id: 2779
+ display_name: "Evergestis pallidata"
+}
+item {
+ name: "81687"
+ id: 2780
+ display_name: "Noctua pronuba"
+}
+item {
+ name: "179992"
+ id: 2781
+ display_name: "Xerospermophilus spilosoma"
+}
+item {
+ name: "179994"
+ id: 2782
+ display_name: "Urocitellus armatus"
+}
+item {
+ name: "9519"
+ id: 2783
+ display_name: "Cyanocompsa parellina"
+}
+item {
+ name: "179998"
+ id: 2784
+ display_name: "Urocitellus columbianus"
+}
+item {
+ name: "114463"
+ id: 2785
+ display_name: "Trithemis annulata"
+}
+item {
+ name: "199169"
+ id: 2786
+ display_name: "Catocala maestosa"
+}
+item {
+ name: "143323"
+ id: 2787
+ display_name: "Tolype velleda"
+}
+item {
+ name: "120113"
+ id: 2788
+ display_name: "Anthrenus verbasci"
+}
+item {
+ name: "7601"
+ id: 2789
+ display_name: "Cistothorus palustris"
+}
+item {
+ name: "81706"
+ id: 2790
+ display_name: "Alaus oculatus"
+}
+item {
+ name: "220974"
+ id: 2791
+ display_name: "Harrisimemna trisignata"
+}
+item {
+ name: "20445"
+ id: 2792
+ display_name: "Tyto alba"
+}
+item {
+ name: "73523"
+ id: 2793
+ display_name: "Trogon caligatus"
+}
+item {
+ name: "49590"
+ id: 2794
+ display_name: "Micropterus dolomieu"
+}
+item {
+ name: "41729"
+ id: 2795
+ display_name: "Mirounga leonina"
+}
+item {
+ name: "48957"
+ id: 2796
+ display_name: "Arilus cristatus"
+}
+item {
+ name: "81727"
+ id: 2797
+ display_name: "Abaeis nicippe"
+}
+item {
+ name: "8000"
+ id: 2798
+ display_name: "Corvus monedula"
+}
+item {
+ name: "8001"
+ id: 2799
+ display_name: "Corvus ossifragus"
+}
+item {
+ name: "171843"
+ id: 2800
+ display_name: "Rabdotus dealbatus"
+}
+item {
+ name: "81734"
+ id: 2801
+ display_name: "Neophasia menapia"
+}
+item {
+ name: "258813"
+ id: 2802
+ display_name: "Clogmia albipunctata"
+}
+item {
+ name: "332243"
+ id: 2803
+ display_name: "Lepturobosca chrysocoma"
+}
+item {
+ name: "81744"
+ id: 2804
+ display_name: "Heliconius erato"
+}
+item {
+ name: "218424"
+ id: 2805
+ display_name: "Dicymolomia julianalis"
+}
+item {
+ name: "3813"
+ id: 2806
+ display_name: "Spheniscus demersus"
+}
+item {
+ name: "81749"
+ id: 2807
+ display_name: "Malacosoma americanum"
+}
+item {
+ name: "81752"
+ id: 2808
+ display_name: "Pyrausta tyralis"
+}
+item {
+ name: "48987"
+ id: 2809
+ display_name: "Hippodamia convergens"
+}
+item {
+ name: "8029"
+ id: 2810
+ display_name: "Corvus frugilegus"
+}
+item {
+ name: "8031"
+ id: 2811
+ display_name: "Corvus splendens"
+}
+item {
+ name: "147298"
+ id: 2812
+ display_name: "Lasiommata megera"
+}
+item {
+ name: "7087"
+ id: 2813
+ display_name: "Branta bernicla"
+}
+item {
+ name: "48550"
+ id: 2814
+ display_name: "Phoebis sennae"
+}
+item {
+ name: "4349"
+ id: 2815
+ display_name: "Larus hyperboreus"
+}
+item {
+ name: "84027"
+ id: 2816
+ display_name: "Trigonopeltastes delta"
+}
+item {
+ name: "194762"
+ id: 2817
+ display_name: "Vanessa itea"
+}
+item {
+ name: "311163"
+ id: 2818
+ display_name: "Pseudomops septentrionalis"
+}
+item {
+ name: "55957"
+ id: 2819
+ display_name: "Scudderia furcata"
+}
+item {
+ name: "39822"
+ id: 2820
+ display_name: "Pseudemys texana"
+}
+item {
+ name: "204685"
+ id: 2821
+ display_name: "Chlosyne ehrenbergii"
+}
+item {
+ name: "122767"
+ id: 2822
+ display_name: "Columba livia domestica"
+}
+item {
+ name: "55960"
+ id: 2823
+ display_name: "Sceloporus graciosus"
+}
+item {
+ name: "121823"
+ id: 2824
+ display_name: "Autographa californica"
+}
+item {
+ name: "8088"
+ id: 2825
+ display_name: "Garrulus glandarius"
+}
+item {
+ name: "65433"
+ id: 2826
+ display_name: "Ecnomiohyla miotympanum"
+}
+item {
+ name: "49051"
+ id: 2827
+ display_name: "Anthopleura sola"
+}
+item {
+ name: "125815"
+ id: 2828
+ display_name: "Coenonympha arcania"
+}
+item {
+ name: "55963"
+ id: 2829
+ display_name: "Malacosoma californicum"
+}
+item {
+ name: "120479"
+ id: 2830
+ display_name: "Anser anser domesticus"
+}
+item {
+ name: "133788"
+ id: 2831
+ display_name: "Xylocopa micans"
+}
+item {
+ name: "81559"
+ id: 2832
+ display_name: "Epargyreus clarus"
+}
+item {
+ name: "81839"
+ id: 2833
+ display_name: "Platycryptus undatus"
+}
+item {
+ name: "133791"
+ id: 2834
+ display_name: "Polistes exclamans"
+}
+item {
+ name: "84640"
+ id: 2835
+ display_name: "Polistes dominula"
+}
+item {
+ name: "73666"
+ id: 2836
+ display_name: "Aspidoscelis exsanguis"
+}
+item {
+ name: "73669"
+ id: 2837
+ display_name: "Aspidoscelis gularis"
+}
+item {
+ name: "16326"
+ id: 2838
+ display_name: "Mitrephanes phaeocercus"
+}
+item {
+ name: "49095"
+ id: 2839
+ display_name: "Pagurus samuelis"
+}
+item {
+ name: "73672"
+ id: 2840
+ display_name: "Aspidoscelis hyperythra"
+}
+item {
+ name: "59192"
+ id: 2841
+ display_name: "Polites sabuleti"
+}
+item {
+ name: "81561"
+ id: 2842
+ display_name: "Anaea andria"
+}
+item {
+ name: "81881"
+ id: 2843
+ display_name: "Amphipsalta zelandica"
+}
+item {
+ name: "73690"
+ id: 2844
+ display_name: "Aspidoscelis sexlineata"
+}
+item {
+ name: "73694"
+ id: 2845
+ display_name: "Aspidoscelis velox"
+}
+item {
+ name: "335840"
+ id: 2846
+ display_name: "Pyrausta inornatalis"
+}
+item {
+ name: "49126"
+ id: 2847
+ display_name: "Strongylocentrotus franciscanus"
+}
+item {
+ name: "204775"
+ id: 2848
+ display_name: "Kricogonia lyside"
+}
+item {
+ name: "475115"
+ id: 2849
+ display_name: "Ardenna creatopus"
+}
+item {
+ name: "475120"
+ id: 2850
+ display_name: "Ardenna gravis"
+}
+item {
+ name: "62803"
+ id: 2851
+ display_name: "Monadenia fidelis"
+}
+item {
+ name: "49150"
+ id: 2852
+ display_name: "Agraulis vanillae"
+}
+item {
+ name: "83929"
+ id: 2853
+ display_name: "Phanaeus vindex"
+}
+item {
+ name: "199839"
+ id: 2854
+ display_name: "Haemorhous cassinii"
+}
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/kitti_label_map.pbtxt" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/kitti_label_map.pbtxt"
new file mode 100644
index 00000000..0afcc693
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/kitti_label_map.pbtxt"
@@ -0,0 +1,9 @@
+item {
+ id: 1
+ name: 'car'
+}
+
+item {
+ id: 2
+ name: 'pedestrian'
+}
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/mscoco_complete_label_map.pbtxt" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/mscoco_complete_label_map.pbtxt"
new file mode 100644
index 00000000..d73fc065
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/mscoco_complete_label_map.pbtxt"
@@ -0,0 +1,455 @@
+item {
+ name: "background"
+ id: 0
+ display_name: "background"
+}
+item {
+ name: "/m/01g317"
+ id: 1
+ display_name: "person"
+}
+item {
+ name: "/m/0199g"
+ id: 2
+ display_name: "bicycle"
+}
+item {
+ name: "/m/0k4j"
+ id: 3
+ display_name: "car"
+}
+item {
+ name: "/m/04_sv"
+ id: 4
+ display_name: "motorcycle"
+}
+item {
+ name: "/m/05czz6l"
+ id: 5
+ display_name: "airplane"
+}
+item {
+ name: "/m/01bjv"
+ id: 6
+ display_name: "bus"
+}
+item {
+ name: "/m/07jdr"
+ id: 7
+ display_name: "train"
+}
+item {
+ name: "/m/07r04"
+ id: 8
+ display_name: "truck"
+}
+item {
+ name: "/m/019jd"
+ id: 9
+ display_name: "boat"
+}
+item {
+ name: "/m/015qff"
+ id: 10
+ display_name: "traffic light"
+}
+item {
+ name: "/m/01pns0"
+ id: 11
+ display_name: "fire hydrant"
+}
+item {
+ name: "12"
+ id: 12
+ display_name: "12"
+}
+item {
+ name: "/m/02pv19"
+ id: 13
+ display_name: "stop sign"
+}
+item {
+ name: "/m/015qbp"
+ id: 14
+ display_name: "parking meter"
+}
+item {
+ name: "/m/0cvnqh"
+ id: 15
+ display_name: "bench"
+}
+item {
+ name: "/m/015p6"
+ id: 16
+ display_name: "bird"
+}
+item {
+ name: "/m/01yrx"
+ id: 17
+ display_name: "cat"
+}
+item {
+ name: "/m/0bt9lr"
+ id: 18
+ display_name: "dog"
+}
+item {
+ name: "/m/03k3r"
+ id: 19
+ display_name: "horse"
+}
+item {
+ name: "/m/07bgp"
+ id: 20
+ display_name: "sheep"
+}
+item {
+ name: "/m/01xq0k1"
+ id: 21
+ display_name: "cow"
+}
+item {
+ name: "/m/0bwd_0j"
+ id: 22
+ display_name: "elephant"
+}
+item {
+ name: "/m/01dws"
+ id: 23
+ display_name: "bear"
+}
+item {
+ name: "/m/0898b"
+ id: 24
+ display_name: "zebra"
+}
+item {
+ name: "/m/03bk1"
+ id: 25
+ display_name: "giraffe"
+}
+item {
+ name: "26"
+ id: 26
+ display_name: "26"
+}
+item {
+ name: "/m/01940j"
+ id: 27
+ display_name: "backpack"
+}
+item {
+ name: "/m/0hnnb"
+ id: 28
+ display_name: "umbrella"
+}
+item {
+ name: "29"
+ id: 29
+ display_name: "29"
+}
+item {
+ name: "30"
+ id: 30
+ display_name: "30"
+}
+item {
+ name: "/m/080hkjn"
+ id: 31
+ display_name: "handbag"
+}
+item {
+ name: "/m/01rkbr"
+ id: 32
+ display_name: "tie"
+}
+item {
+ name: "/m/01s55n"
+ id: 33
+ display_name: "suitcase"
+}
+item {
+ name: "/m/02wmf"
+ id: 34
+ display_name: "frisbee"
+}
+item {
+ name: "/m/071p9"
+ id: 35
+ display_name: "skis"
+}
+item {
+ name: "/m/06__v"
+ id: 36
+ display_name: "snowboard"
+}
+item {
+ name: "/m/018xm"
+ id: 37
+ display_name: "sports ball"
+}
+item {
+ name: "/m/02zt3"
+ id: 38
+ display_name: "kite"
+}
+item {
+ name: "/m/03g8mr"
+ id: 39
+ display_name: "baseball bat"
+}
+item {
+ name: "/m/03grzl"
+ id: 40
+ display_name: "baseball glove"
+}
+item {
+ name: "/m/06_fw"
+ id: 41
+ display_name: "skateboard"
+}
+item {
+ name: "/m/019w40"
+ id: 42
+ display_name: "surfboard"
+}
+item {
+ name: "/m/0dv9c"
+ id: 43
+ display_name: "tennis racket"
+}
+item {
+ name: "/m/04dr76w"
+ id: 44
+ display_name: "bottle"
+}
+item {
+ name: "45"
+ id: 45
+ display_name: "45"
+}
+item {
+ name: "/m/09tvcd"
+ id: 46
+ display_name: "wine glass"
+}
+item {
+ name: "/m/08gqpm"
+ id: 47
+ display_name: "cup"
+}
+item {
+ name: "/m/0dt3t"
+ id: 48
+ display_name: "fork"
+}
+item {
+ name: "/m/04ctx"
+ id: 49
+ display_name: "knife"
+}
+item {
+ name: "/m/0cmx8"
+ id: 50
+ display_name: "spoon"
+}
+item {
+ name: "/m/04kkgm"
+ id: 51
+ display_name: "bowl"
+}
+item {
+ name: "/m/09qck"
+ id: 52
+ display_name: "banana"
+}
+item {
+ name: "/m/014j1m"
+ id: 53
+ display_name: "apple"
+}
+item {
+ name: "/m/0l515"
+ id: 54
+ display_name: "sandwich"
+}
+item {
+ name: "/m/0cyhj_"
+ id: 55
+ display_name: "orange"
+}
+item {
+ name: "/m/0hkxq"
+ id: 56
+ display_name: "broccoli"
+}
+item {
+ name: "/m/0fj52s"
+ id: 57
+ display_name: "carrot"
+}
+item {
+ name: "/m/01b9xk"
+ id: 58
+ display_name: "hot dog"
+}
+item {
+ name: "/m/0663v"
+ id: 59
+ display_name: "pizza"
+}
+item {
+ name: "/m/0jy4k"
+ id: 60
+ display_name: "donut"
+}
+item {
+ name: "/m/0fszt"
+ id: 61
+ display_name: "cake"
+}
+item {
+ name: "/m/01mzpv"
+ id: 62
+ display_name: "chair"
+}
+item {
+ name: "/m/02crq1"
+ id: 63
+ display_name: "couch"
+}
+item {
+ name: "/m/03fp41"
+ id: 64
+ display_name: "potted plant"
+}
+item {
+ name: "/m/03ssj5"
+ id: 65
+ display_name: "bed"
+}
+item {
+ name: "66"
+ id: 66
+ display_name: "66"
+}
+item {
+ name: "/m/04bcr3"
+ id: 67
+ display_name: "dining table"
+}
+item {
+ name: "68"
+ id: 68
+ display_name: "68"
+}
+item {
+ name: "69"
+ id: 69
+ display_name: "69"
+}
+item {
+ name: "/m/09g1w"
+ id: 70
+ display_name: "toilet"
+}
+item {
+ name: "71"
+ id: 71
+ display_name: "71"
+}
+item {
+ name: "/m/07c52"
+ id: 72
+ display_name: "tv"
+}
+item {
+ name: "/m/01c648"
+ id: 73
+ display_name: "laptop"
+}
+item {
+ name: "/m/020lf"
+ id: 74
+ display_name: "mouse"
+}
+item {
+ name: "/m/0qjjc"
+ id: 75
+ display_name: "remote"
+}
+item {
+ name: "/m/01m2v"
+ id: 76
+ display_name: "keyboard"
+}
+item {
+ name: "/m/050k8"
+ id: 77
+ display_name: "cell phone"
+}
+item {
+ name: "/m/0fx9l"
+ id: 78
+ display_name: "microwave"
+}
+item {
+ name: "/m/029bxz"
+ id: 79
+ display_name: "oven"
+}
+item {
+ name: "/m/01k6s3"
+ id: 80
+ display_name: "toaster"
+}
+item {
+ name: "/m/0130jx"
+ id: 81
+ display_name: "sink"
+}
+item {
+ name: "/m/040b_t"
+ id: 82
+ display_name: "refrigerator"
+}
+item {
+ name: "83"
+ id: 83
+ display_name: "83"
+}
+item {
+ name: "/m/0bt_c3"
+ id: 84
+ display_name: "book"
+}
+item {
+ name: "/m/01x3z"
+ id: 85
+ display_name: "clock"
+}
+item {
+ name: "/m/02s195"
+ id: 86
+ display_name: "vase"
+}
+item {
+ name: "/m/01lsmm"
+ id: 87
+ display_name: "scissors"
+}
+item {
+ name: "/m/0kmg4"
+ id: 88
+ display_name: "teddy bear"
+}
+item {
+ name: "/m/03wvsk"
+ id: 89
+ display_name: "hair drier"
+}
+item {
+ name: "/m/012xff"
+ id: 90
+ display_name: "toothbrush"
+}
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/mscoco_label_map.pbtxt" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/mscoco_label_map.pbtxt"
new file mode 100644
index 00000000..1f4872bd
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/mscoco_label_map.pbtxt"
@@ -0,0 +1,400 @@
+item {
+ name: "/m/01g317"
+ id: 1
+ display_name: "person"
+}
+item {
+ name: "/m/0199g"
+ id: 2
+ display_name: "bicycle"
+}
+item {
+ name: "/m/0k4j"
+ id: 3
+ display_name: "car"
+}
+item {
+ name: "/m/04_sv"
+ id: 4
+ display_name: "motorcycle"
+}
+item {
+ name: "/m/05czz6l"
+ id: 5
+ display_name: "airplane"
+}
+item {
+ name: "/m/01bjv"
+ id: 6
+ display_name: "bus"
+}
+item {
+ name: "/m/07jdr"
+ id: 7
+ display_name: "train"
+}
+item {
+ name: "/m/07r04"
+ id: 8
+ display_name: "truck"
+}
+item {
+ name: "/m/019jd"
+ id: 9
+ display_name: "boat"
+}
+item {
+ name: "/m/015qff"
+ id: 10
+ display_name: "traffic light"
+}
+item {
+ name: "/m/01pns0"
+ id: 11
+ display_name: "fire hydrant"
+}
+item {
+ name: "/m/02pv19"
+ id: 13
+ display_name: "stop sign"
+}
+item {
+ name: "/m/015qbp"
+ id: 14
+ display_name: "parking meter"
+}
+item {
+ name: "/m/0cvnqh"
+ id: 15
+ display_name: "bench"
+}
+item {
+ name: "/m/015p6"
+ id: 16
+ display_name: "bird"
+}
+item {
+ name: "/m/01yrx"
+ id: 17
+ display_name: "cat"
+}
+item {
+ name: "/m/0bt9lr"
+ id: 18
+ display_name: "dog"
+}
+item {
+ name: "/m/03k3r"
+ id: 19
+ display_name: "horse"
+}
+item {
+ name: "/m/07bgp"
+ id: 20
+ display_name: "sheep"
+}
+item {
+ name: "/m/01xq0k1"
+ id: 21
+ display_name: "cow"
+}
+item {
+ name: "/m/0bwd_0j"
+ id: 22
+ display_name: "elephant"
+}
+item {
+ name: "/m/01dws"
+ id: 23
+ display_name: "bear"
+}
+item {
+ name: "/m/0898b"
+ id: 24
+ display_name: "zebra"
+}
+item {
+ name: "/m/03bk1"
+ id: 25
+ display_name: "giraffe"
+}
+item {
+ name: "/m/01940j"
+ id: 27
+ display_name: "backpack"
+}
+item {
+ name: "/m/0hnnb"
+ id: 28
+ display_name: "umbrella"
+}
+item {
+ name: "/m/080hkjn"
+ id: 31
+ display_name: "handbag"
+}
+item {
+ name: "/m/01rkbr"
+ id: 32
+ display_name: "tie"
+}
+item {
+ name: "/m/01s55n"
+ id: 33
+ display_name: "suitcase"
+}
+item {
+ name: "/m/02wmf"
+ id: 34
+ display_name: "frisbee"
+}
+item {
+ name: "/m/071p9"
+ id: 35
+ display_name: "skis"
+}
+item {
+ name: "/m/06__v"
+ id: 36
+ display_name: "snowboard"
+}
+item {
+ name: "/m/018xm"
+ id: 37
+ display_name: "sports ball"
+}
+item {
+ name: "/m/02zt3"
+ id: 38
+ display_name: "kite"
+}
+item {
+ name: "/m/03g8mr"
+ id: 39
+ display_name: "baseball bat"
+}
+item {
+ name: "/m/03grzl"
+ id: 40
+ display_name: "baseball glove"
+}
+item {
+ name: "/m/06_fw"
+ id: 41
+ display_name: "skateboard"
+}
+item {
+ name: "/m/019w40"
+ id: 42
+ display_name: "surfboard"
+}
+item {
+ name: "/m/0dv9c"
+ id: 43
+ display_name: "tennis racket"
+}
+item {
+ name: "/m/04dr76w"
+ id: 44
+ display_name: "bottle"
+}
+item {
+ name: "/m/09tvcd"
+ id: 46
+ display_name: "wine glass"
+}
+item {
+ name: "/m/08gqpm"
+ id: 47
+ display_name: "cup"
+}
+item {
+ name: "/m/0dt3t"
+ id: 48
+ display_name: "fork"
+}
+item {
+ name: "/m/04ctx"
+ id: 49
+ display_name: "knife"
+}
+item {
+ name: "/m/0cmx8"
+ id: 50
+ display_name: "spoon"
+}
+item {
+ name: "/m/04kkgm"
+ id: 51
+ display_name: "bowl"
+}
+item {
+ name: "/m/09qck"
+ id: 52
+ display_name: "banana"
+}
+item {
+ name: "/m/014j1m"
+ id: 53
+ display_name: "apple"
+}
+item {
+ name: "/m/0l515"
+ id: 54
+ display_name: "sandwich"
+}
+item {
+ name: "/m/0cyhj_"
+ id: 55
+ display_name: "orange"
+}
+item {
+ name: "/m/0hkxq"
+ id: 56
+ display_name: "broccoli"
+}
+item {
+ name: "/m/0fj52s"
+ id: 57
+ display_name: "carrot"
+}
+item {
+ name: "/m/01b9xk"
+ id: 58
+ display_name: "hot dog"
+}
+item {
+ name: "/m/0663v"
+ id: 59
+ display_name: "pizza"
+}
+item {
+ name: "/m/0jy4k"
+ id: 60
+ display_name: "donut"
+}
+item {
+ name: "/m/0fszt"
+ id: 61
+ display_name: "cake"
+}
+item {
+ name: "/m/01mzpv"
+ id: 62
+ display_name: "chair"
+}
+item {
+ name: "/m/02crq1"
+ id: 63
+ display_name: "couch"
+}
+item {
+ name: "/m/03fp41"
+ id: 64
+ display_name: "potted plant"
+}
+item {
+ name: "/m/03ssj5"
+ id: 65
+ display_name: "bed"
+}
+item {
+ name: "/m/04bcr3"
+ id: 67
+ display_name: "dining table"
+}
+item {
+ name: "/m/09g1w"
+ id: 70
+ display_name: "toilet"
+}
+item {
+ name: "/m/07c52"
+ id: 72
+ display_name: "tv"
+}
+item {
+ name: "/m/01c648"
+ id: 73
+ display_name: "laptop"
+}
+item {
+ name: "/m/020lf"
+ id: 74
+ display_name: "mouse"
+}
+item {
+ name: "/m/0qjjc"
+ id: 75
+ display_name: "remote"
+}
+item {
+ name: "/m/01m2v"
+ id: 76
+ display_name: "keyboard"
+}
+item {
+ name: "/m/050k8"
+ id: 77
+ display_name: "cell phone"
+}
+item {
+ name: "/m/0fx9l"
+ id: 78
+ display_name: "microwave"
+}
+item {
+ name: "/m/029bxz"
+ id: 79
+ display_name: "oven"
+}
+item {
+ name: "/m/01k6s3"
+ id: 80
+ display_name: "toaster"
+}
+item {
+ name: "/m/0130jx"
+ id: 81
+ display_name: "sink"
+}
+item {
+ name: "/m/040b_t"
+ id: 82
+ display_name: "refrigerator"
+}
+item {
+ name: "/m/0bt_c3"
+ id: 84
+ display_name: "book"
+}
+item {
+ name: "/m/01x3z"
+ id: 85
+ display_name: "clock"
+}
+item {
+ name: "/m/02s195"
+ id: 86
+ display_name: "vase"
+}
+item {
+ name: "/m/01lsmm"
+ id: 87
+ display_name: "scissors"
+}
+item {
+ name: "/m/0kmg4"
+ id: 88
+ display_name: "teddy bear"
+}
+item {
+ name: "/m/03wvsk"
+ id: 89
+ display_name: "hair drier"
+}
+item {
+ name: "/m/012xff"
+ id: 90
+ display_name: "toothbrush"
+}
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/mscoco_minival_ids.txt" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/mscoco_minival_ids.txt"
new file mode 100644
index 00000000..5bbff3c1
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/mscoco_minival_ids.txt"
@@ -0,0 +1,8059 @@
+25096
+251824
+35313
+546011
+524186
+205866
+511403
+313916
+47471
+258628
+233560
+576017
+404517
+410056
+178690
+248980
+511724
+429718
+163076
+244111
+126766
+313182
+191981
+139992
+325237
+248129
+214519
+175438
+493321
+174103
+563762
+536795
+289960
+473720
+515540
+292118
+360851
+267175
+532876
+171613
+581415
+259819
+441841
+381682
+58157
+4980
+473929
+70626
+93773
+283412
+36765
+495020
+278401
+329307
+192810
+491784
+506416
+225495
+553747
+86442
+242208
+132686
+385877
+290248
+525705
+5476
+486521
+332512
+138556
+348083
+284375
+40018
+296994
+38685
+432429
+183407
+434358
+472164
+530494
+570693
+193401
+392612
+98872
+445766
+532209
+98322
+285114
+267725
+51605
+314812
+91105
+535506
+540264
+375341
+449828
+277659
+68933
+76873
+217554
+213592
+190776
+516224
+474479
+343599
+578813
+128669
+546292
+475365
+377626
+128833
+427091
+547227
+11742
+80213
+462241
+374574
+121572
+29151
+13892
+262394
+303667
+198724
+7320
+448492
+419080
+460379
+483965
+556516
+139181
+1103
+308715
+207507
+213827
+216083
+445597
+240275
+379585
+116389
+138124
+559051
+326898
+419386
+503660
+519460
+23893
+24458
+518109
+462982
+151492
+514254
+2477
+147165
+570394
+548766
+250083
+364341
+351967
+386277
+328084
+511299
+499349
+315501
+234965
+428562
+219771
+288150
+136021
+168619
+298316
+75118
+189752
+243857
+296222
+554002
+533628
+384596
+202981
+498350
+391463
+183991
+528062
+451084
+7899
+408534
+329030
+318566
+22492
+361285
+226973
+213356
+417265
+105622
+161169
+261487
+167477
+233370
+142999
+256713
+305833
+103579
+352538
+135763
+392144
+61181
+200302
+456908
+286858
+179850
+488075
+174511
+194755
+317822
+2302
+304596
+172556
+548275
+341678
+55299
+134760
+352936
+545129
+377012
+141328
+103757
+552837
+28246
+125167
+328745
+278760
+337133
+403389
+146825
+502558
+265916
+428985
+492041
+113403
+372037
+306103
+287574
+187495
+479805
+336309
+162043
+95899
+43133
+464248
+149115
+247438
+74030
+130645
+282841
+127092
+101172
+536743
+179642
+58133
+49667
+170605
+11347
+365277
+201970
+292663
+217219
+463226
+41924
+281102
+357816
+490878
+100343
+525058
+133503
+416145
+29341
+415413
+125527
+507951
+262609
+240210
+581781
+345137
+526342
+268641
+328777
+32001
+137538
+39115
+415958
+6771
+421865
+64909
+383601
+206907
+420840
+370980
+28452
+571893
+153520
+185890
+392991
+547013
+257359
+279879
+478614
+131919
+40937
+22874
+173375
+106344
+44801
+205401
+312870
+400886
+351530
+344013
+173500
+470423
+396729
+402499
+276585
+377097
+367619
+518908
+263866
+332292
+67805
+152211
+515025
+221350
+525247
+78490
+504342
+95908
+82668
+256199
+220270
+552065
+242379
+84866
+152281
+228464
+223122
+67537
+456968
+368349
+101985
+14681
+543551
+107558
+372009
+99054
+126540
+86877
+492785
+482585
+571564
+501116
+296871
+20395
+181518
+568041
+121154
+56187
+190018
+97156
+310325
+393274
+214574
+243222
+289949
+452121
+150508
+341752
+310757
+24040
+228551
+335589
+12020
+529597
+459884
+344888
+229713
+51948
+370929
+552061
+261072
+120070
+332067
+263014
+158993
+451714
+397327
+20965
+414340
+574946
+370266
+487534
+492246
+264771
+73702
+43997
+235124
+301093
+400048
+77681
+58472
+331386
+13783
+242513
+419158
+59325
+383033
+393258
+529041
+249276
+182775
+351793
+9727
+334069
+566771
+539355
+38662
+423617
+47559
+120592
+508303
+462565
+47916
+218208
+182362
+562101
+441442
+71239
+395378
+522637
+25603
+484450
+872
+171483
+527248
+323155
+240754
+15032
+419144
+313214
+250917
+333430
+242757
+221914
+283190
+194297
+228506
+550691
+172513
+312192
+530619
+113867
+323552
+374115
+35435
+160239
+62877
+441873
+196574
+62858
+557114
+427612
+242869
+356733
+304828
+24880
+490509
+407083
+457877
+402788
+536416
+385912
+544121
+500389
+451102
+12120
+483476
+70987
+482799
+542549
+49236
+424258
+435783
+182366
+438093
+501824
+232845
+53965
+223198
+288933
+450458
+285664
+196484
+408930
+519815
+290981
+398567
+315792
+490683
+257136
+75611
+302498
+332153
+82293
+416911
+558608
+564659
+536195
+370260
+57904
+527270
+6593
+145620
+551650
+470832
+515785
+251404
+287331
+150788
+334006
+266117
+10039
+579158
+328397
+468351
+550400
+31745
+405970
+16761
+323515
+459598
+558457
+570736
+476939
+472610
+72155
+112517
+13659
+530905
+458768
+43486
+560893
+493174
+31217
+262736
+412204
+142722
+151231
+480643
+197245
+398666
+444869
+110999
+191724
+479057
+492420
+170638
+277329
+301908
+395644
+537611
+141887
+47149
+403432
+34818
+372495
+67994
+337497
+478586
+249815
+533462
+281032
+289941
+151911
+271215
+407868
+360700
+508582
+103873
+353658
+369081
+406403
+331692
+26430
+105655
+572630
+37181
+91336
+484587
+318284
+113019
+33055
+25293
+229324
+374052
+384111
+213951
+315195
+319283
+539453
+17655
+308974
+326243
+539436
+417876
+526940
+356347
+221932
+73753
+292648
+262284
+304924
+558587
+374858
+253518
+311744
+539636
+40924
+136624
+334305
+365997
+63355
+191226
+526732
+367128
+575198
+500657
+50637
+17182
+424792
+565353
+563040
+383494
+74458
+155142
+197125
+223857
+428241
+440830
+371289
+437303
+330449
+93771
+82715
+499631
+381257
+563951
+192834
+528600
+404273
+270554
+208053
+188613
+484760
+432016
+129800
+91756
+523097
+317018
+487282
+444913
+159500
+126822
+540564
+105812
+560756
+306099
+471226
+123842
+513219
+154877
+497034
+283928
+564003
+238602
+194780
+462728
+558640
+524373
+455624
+3690
+560367
+316351
+455772
+223777
+161517
+243034
+250440
+239975
+441008
+324715
+152106
+246973
+462805
+296521
+412767
+530913
+370165
+292526
+107244
+217440
+330204
+220176
+577735
+197022
+127451
+518701
+212322
+204887
+27696
+348474
+119233
+282804
+230040
+425690
+409241
+296825
+296353
+375909
+123136
+573891
+338256
+198247
+373375
+151051
+500084
+557596
+120478
+44989
+283380
+149005
+522065
+626
+17198
+309633
+524245
+291589
+322714
+455847
+248468
+371948
+444928
+20438
+481670
+147195
+95022
+548159
+553165
+395324
+391371
+86884
+561121
+219737
+38875
+338159
+377881
+185472
+359277
+114861
+378048
+126226
+10217
+320246
+15827
+178236
+370279
+352978
+408101
+77615
+337044
+223714
+20796
+352445
+263834
+156704
+377867
+119402
+399567
+1180
+257941
+560675
+390471
+209290
+258382
+466339
+56437
+195042
+384230
+203214
+36077
+283038
+38323
+158770
+532381
+395903
+375461
+397857
+326798
+371699
+369503
+495626
+464328
+462211
+397719
+434089
+424793
+476770
+531852
+303538
+525849
+480917
+419653
+265063
+48956
+5184
+279149
+396727
+374266
+124429
+36124
+240213
+147556
+339512
+577182
+288599
+257169
+178254
+393869
+122314
+28713
+48133
+540681
+100974
+368459
+500110
+73634
+460982
+203878
+578344
+443602
+502012
+399666
+103603
+22090
+257529
+176328
+536656
+408873
+116881
+460972
+33835
+460781
+51223
+46463
+89395
+407646
+337453
+461715
+16257
+426987
+234889
+3125
+165643
+517472
+451435
+206800
+112128
+331236
+163306
+94185
+498716
+532732
+146509
+458567
+153832
+105996
+353398
+546976
+283060
+247624
+110048
+243491
+154798
+543600
+149962
+355256
+352900
+203081
+372203
+284605
+516244
+190494
+150301
+326082
+64146
+402858
+413538
+399510
+460251
+94336
+458721
+57345
+424162
+423508
+69356
+567220
+509786
+37038
+111535
+341318
+372067
+358120
+244909
+180653
+39852
+438560
+357041
+67065
+51928
+171717
+520430
+552395
+431355
+528084
+20913
+309610
+262323
+573784
+449485
+154846
+283438
+430871
+199578
+516318
+563912
+348483
+485613
+143440
+94922
+168817
+74457
+45830
+66297
+514173
+99186
+296236
+230903
+452312
+476444
+568981
+100811
+237350
+194724
+453622
+49559
+270609
+113701
+415393
+92173
+137004
+188795
+148280
+448114
+575964
+163155
+518719
+219329
+214247
+363927
+65357
+87617
+552612
+457817
+124796
+47740
+560463
+513968
+273637
+354212
+95959
+261061
+307265
+316237
+191342
+463272
+169273
+396518
+93261
+572733
+407386
+202658
+446497
+420852
+229274
+432724
+34900
+352533
+49891
+66144
+146831
+467484
+97988
+561647
+301155
+507421
+173217
+577584
+451940
+99927
+350639
+178941
+485155
+175948
+360673
+92963
+361321
+48739
+577310
+517795
+93405
+506458
+394681
+167920
+16995
+519573
+270532
+527750
+563403
+494608
+557780
+178691
+8676
+186927
+550173
+361656
+575911
+281315
+534377
+57570
+340894
+37624
+143103
+538243
+425077
+376545
+108129
+170974
+7522
+408906
+264279
+79415
+344025
+186797
+234349
+226472
+123639
+225177
+237984
+38714
+223671
+358247
+152465
+521405
+453722
+361111
+557117
+235832
+309341
+268469
+108353
+532531
+357279
+537280
+437618
+122953
+7088
+36693
+127659
+431901
+57244
+567565
+568111
+202926
+504516
+555685
+322369
+347620
+110231
+568982
+295340
+529798
+300341
+158160
+73588
+119476
+387216
+154994
+259755
+211282
+433971
+263588
+299468
+570138
+123017
+355106
+540172
+406215
+8401
+548844
+161820
+396432
+495348
+222407
+53123
+491556
+108130
+440617
+448309
+22596
+346841
+213829
+135076
+56326
+233139
+487418
+227326
+137763
+383389
+47882
+207797
+167452
+112065
+150703
+421109
+171753
+158279
+240800
+66821
+152886
+163640
+475466
+301799
+106712
+470885
+536370
+420389
+396768
+281950
+18903
+357529
+33650
+168243
+201004
+389295
+557150
+185327
+181256
+557396
+182025
+61564
+301928
+332455
+199403
+18444
+177452
+204206
+38465
+215906
+153103
+445019
+324527
+299207
+429281
+574675
+157067
+241269
+100850
+502818
+576566
+296775
+873
+280363
+355240
+383445
+286182
+67327
+422778
+494855
+337246
+266853
+47516
+381991
+44081
+403862
+381430
+370798
+173383
+387173
+22396
+484066
+349414
+262235
+492814
+65238
+209420
+336276
+453328
+407286
+420490
+360328
+158440
+398534
+489475
+477389
+297108
+69750
+507833
+198992
+99736
+546444
+514914
+482574
+54355
+63478
+191693
+61684
+412914
+267408
+424641
+56872
+318080
+30290
+33441
+199310
+337403
+26731
+453390
+506137
+188945
+185950
+239843
+357944
+290570
+523637
+551952
+513397
+357870
+523517
+277048
+259879
+186991
+521943
+21900
+281074
+187194
+526723
+568147
+513037
+177338
+243831
+203488
+208494
+188460
+289943
+399177
+404668
+160761
+271143
+76087
+478922
+440045
+449432
+61025
+331138
+227019
+147577
+548337
+444294
+458663
+236837
+6854
+444926
+484816
+516641
+397863
+188534
+64822
+213453
+66561
+43218
+514901
+322844
+498453
+488788
+391656
+298994
+64088
+464706
+193720
+199017
+186427
+15278
+350386
+342335
+372024
+550939
+35594
+381382
+235902
+26630
+213765
+550001
+129706
+577149
+353096
+376891
+28499
+427041
+314965
+231163
+5728
+347836
+184388
+27476
+284860
+476872
+301317
+99546
+147653
+529515
+311922
+20777
+2613
+59463
+430670
+560744
+60677
+332087
+296724
+353321
+103306
+363887
+76431
+423058
+120340
+119452
+6723
+462327
+163127
+402723
+489382
+183181
+107656
+375409
+355228
+430762
+512468
+409125
+270544
+559113
+495388
+529434
+38355
+422025
+379667
+131386
+183409
+573536
+581317
+425404
+350084
+472
+28532
+329717
+230220
+187196
+484166
+97434
+224595
+87483
+516998
+314876
+32610
+514586
+344816
+394418
+402330
+305993
+371497
+315790
+294908
+207431
+561014
+26584
+368671
+374990
+54747
+47571
+449424
+283761
+84735
+522127
+120473
+524656
+479659
+131627
+450959
+153300
+580908
+207785
+49115
+284991
+96505
+278306
+291655
+1404
+489304
+557459
+37740
+157465
+390475
+119166
+33871
+247428
+75905
+20779
+65035
+333556
+375415
+383676
+505243
+87327
+16451
+287235
+70190
+245067
+417520
+229234
+183786
+333018
+554156
+198915
+108021
+128262
+412443
+242543
+555050
+436511
+445233
+207886
+156397
+526257
+521357
+413043
+427189
+401614
+94823
+351130
+105945
+182314
+305879
+526197
+64409
+496800
+236461
+138175
+43816
+185904
+345711
+72536
+526737
+360400
+556537
+426053
+59044
+28290
+222548
+434915
+418623
+246454
+111801
+12448
+427133
+459117
+11262
+169045
+469996
+304390
+513096
+322822
+196371
+504977
+395364
+243950
+216218
+417217
+106736
+58194
+504101
+478522
+379314
+30432
+207027
+297146
+91844
+176031
+98287
+278095
+196053
+343692
+523137
+220224
+349485
+376193
+407067
+185781
+37871
+336464
+46331
+44244
+80274
+170147
+361106
+468499
+537864
+467457
+267343
+291528
+287828
+555648
+388284
+576085
+531973
+350122
+422253
+509811
+78093
+410019
+133090
+581205
+343976
+9007
+92478
+450674
+486306
+503978
+46378
+335578
+404071
+225558
+217923
+406217
+138054
+575815
+234990
+336257
+159240
+399516
+226408
+531126
+138599
+61693
+89861
+29504
+163296
+477906
+48419
+25595
+195594
+97592
+392555
+203849
+139248
+245651
+275755
+245426
+127279
+521359
+517623
+235747
+475906
+11198
+336101
+70134
+505447
+218996
+30080
+484457
+120441
+575643
+132703
+197915
+505576
+90956
+99741
+517819
+240918
+150834
+207306
+132682
+88250
+213599
+462584
+413321
+361521
+496081
+410583
+440027
+417284
+397069
+280498
+473171
+129739
+279774
+29370
+518899
+509867
+85556
+434930
+280710
+55077
+348793
+157756
+281111
+190689
+281447
+502854
+232894
+268742
+199553
+220808
+137330
+256903
+116017
+466416
+41635
+110906
+340934
+557501
+146767
+517617
+487159
+1561
+417281
+489014
+292463
+113533
+412247
+263973
+515444
+343561
+310200
+293804
+225867
+150320
+183914
+9707
+89999
+177842
+296524
+287829
+68300
+363654
+465986
+159969
+313948
+522779
+219820
+198352
+12959
+266727
+8016
+175804
+497867
+307892
+287527
+309638
+205854
+114119
+23023
+322586
+383341
+134198
+553522
+70426
+329138
+105367
+175597
+187791
+17944
+366611
+93493
+242422
+41842
+558840
+32203
+19667
+124297
+383726
+252625
+234794
+498228
+102906
+287967
+69021
+51326
+243896
+509423
+440124
+122582
+344325
+34455
+442478
+23587
+236904
+185633
+349841
+44294
+112568
+186296
+71914
+3837
+135486
+223747
+557517
+385181
+265313
+404263
+26564
+516867
+497096
+332351
+345139
+444304
+510877
+356387
+561214
+311471
+408789
+561729
+291380
+174671
+45710
+435136
+388858
+361693
+50811
+531134
+573605
+340175
+534988
+382671
+327047
+348400
+547137
+401037
+490711
+499266
+236370
+449075
+334015
+107234
+232315
+462953
+252048
+186822
+410168
+28994
+45550
+453626
+417957
+468577
+106338
+391684
+375143
+217622
+357903
+347648
+142182
+213843
+299148
+352587
+436676
+161875
+144655
+304741
+235017
+181799
+211042
+335507
+553731
+412531
+229740
+437129
+423830
+561806
+337666
+52016
+138057
+70254
+494393
+73119
+262425
+565395
+305329
+489611
+377080
+569450
+549766
+332940
+235302
+53893
+203781
+38449
+114870
+18699
+396338
+449839
+423613
+379767
+369594
+375812
+359219
+229311
+291675
+224907
+416885
+32964
+573406
+17282
+103375
+81860
+576886
+461334
+35672
+243442
+217269
+445055
+211112
+455675
+412384
+88967
+550643
+24223
+504074
+9275
+155546
+329542
+172658
+331600
+315492
+194208
+162867
+324614
+432017
+140860
+157944
+406616
+486079
+361172
+258346
+494140
+315384
+451014
+242619
+413684
+386187
+408501
+121089
+343603
+232538
+558671
+551596
+32992
+406647
+435260
+11156
+40896
+175382
+110560
+252968
+189694
+63154
+564816
+72004
+164788
+434583
+453104
+111878
+268484
+290768
+473215
+450620
+32673
+277479
+529917
+315868
+562419
+378347
+398637
+84097
+120527
+134193
+431472
+400238
+86426
+208830
+524535
+22213
+516813
+526044
+386193
+246672
+386739
+559252
+153344
+236123
+246074
+323615
+92644
+408621
+323231
+499940
+296105
+578902
+150098
+145015
+131431
+318618
+68409
+497928
+362520
+467755
+112702
+163219
+277289
+192362
+497674
+525439
+56267
+465868
+407570
+551608
+345211
+179653
+55295
+97315
+534041
+505822
+411082
+132375
+25378
+272008
+536605
+123511
+148737
+577712
+493751
+29587
+468297
+528458
+491058
+558976
+181421
+209685
+147545
+486964
+570516
+168662
+19446
+395997
+242911
+232511
+317035
+354527
+5961
+513793
+124390
+370123
+113397
+195790
+252813
+326919
+432414
+409239
+458221
+115667
+212239
+279279
+375554
+546622
+317188
+260818
+286021
+377111
+209868
+243148
+132037
+560624
+459721
+193498
+22623
+254164
+112841
+383470
+62692
+227940
+471335
+44858
+213649
+179898
+102837
+474078
+44478
+256197
+309492
+182923
+421139
+275695
+104965
+480780
+449749
+76513
+578591
+336695
+247474
+320490
+246105
+53183
+485740
+575823
+510735
+290741
+37017
+348708
+279784
+453634
+567644
+434192
+482719
+435324
+544299
+106896
+569926
+301574
+492885
+103462
+487151
+513585
+219647
+303685
+459645
+76292
+188579
+154883
+207728
+425074
+310493
+27221
+371694
+119404
+399665
+273556
+454577
+580698
+267664
+295769
+423740
+22461
+22667
+508443
+390401
+369997
+524627
+193349
+132223
+576743
+130586
+487741
+107542
+501420
+520109
+308156
+540581
+231362
+86471
+472930
+351133
+463605
+575577
+159842
+39504
+223020
+63525
+298627
+139883
+375205
+303549
+16838
+495680
+408112
+394474
+188044
+472143
+463751
+31481
+378139
+190853
+442614
+172006
+140270
+133051
+178028
+495090
+88455
+13232
+46323
+346275
+425905
+487013
+433136
+514402
+521906
+4157
+61418
+567205
+213351
+304008
+296492
+506561
+408120
+415961
+323186
+480379
+349199
+201918
+135023
+456483
+136173
+237917
+4972
+99081
+331569
+150007
+36450
+93400
+487461
+203629
+218093
+487181
+113935
+139512
+210981
+358883
+47419
+248382
+80357
+462663
+83097
+26159
+80429
+283055
+452676
+50159
+12326
+29430
+303264
+158122
+569070
+52925
+534876
+46975
+426376
+170293
+434417
+235517
+218476
+445008
+482774
+305632
+116848
+557252
+229270
+453485
+382214
+54759
+59171
+193328
+17152
+238071
+148531
+409725
+75434
+65358
+473057
+415408
+579415
+48636
+269606
+298784
+162799
+356400
+326854
+24601
+66499
+340247
+20992
+190218
+548464
+122203
+405306
+495376
+536028
+5713
+206831
+9395
+503939
+194440
+474253
+395849
+165141
+204935
+412621
+402922
+87141
+570664
+202622
+137362
+221737
+78947
+112129
+341957
+169562
+164780
+360216
+107641
+415015
+444955
+559102
+123070
+176592
+309366
+116461
+222075
+530470
+214363
+414487
+471567
+292123
+370210
+364243
+510254
+396350
+141524
+220310
+398604
+145436
+392476
+17482
+78032
+336171
+130812
+489743
+346638
+418854
+139072
+263860
+458240
+383443
+337533
+182334
+535608
+517946
+489924
+308117
+129945
+59973
+538364
+513458
+449433
+25165
+335851
+487688
+153834
+347612
+349689
+443688
+486008
+479149
+442286
+61108
+315338
+511546
+506444
+775
+121839
+291412
+497626
+387223
+367095
+557896
+196118
+530652
+447991
+215622
+232160
+296731
+272273
+473415
+364705
+235790
+479950
+141278
+547903
+66523
+353989
+121875
+237735
+100083
+348941
+288983
+390083
+168248
+120776
+489764
+219135
+551713
+256035
+309005
+112493
+579759
+114972
+458992
+295768
+158497
+309696
+363844
+507966
+313491
+280779
+327130
+292901
+127761
+183843
+456521
+164475
+224281
+443713
+72514
+567383
+476215
+565650
+17708
+474471
+248334
+196313
+164759
+212453
+319024
+332916
+35436
+113139
+172716
+7570
+161609
+144534
+137475
+561411
+45844
+332027
+36990
+190160
+421231
+283210
+365611
+511407
+400887
+485071
+481214
+347203
+153506
+397403
+229599
+357322
+76034
+101189
+567444
+92363
+526767
+218811
+362812
+339120
+579696
+399269
+10705
+549012
+410428
+105623
+535307
+419235
+119911
+236604
+515779
+188173
+66397
+549119
+478742
+256180
+128224
+440539
+112818
+315434
+97513
+171970
+433483
+226008
+83217
+424548
+343753
+350334
+479280
+208808
+43266
+399893
+444386
+47687
+499093
+565269
+465835
+167486
+433460
+169872
+299640
+158466
+241373
+50576
+161567
+73560
+349804
+181745
+352684
+450357
+532693
+88335
+256518
+94926
+541197
+14629
+276149
+539439
+498738
+25654
+291330
+146465
+160190
+513064
+75748
+499007
+164464
+134042
+422416
+543315
+34056
+303197
+394801
+293071
+44964
+529083
+414522
+331180
+227599
+581040
+382850
+159898
+176841
+205352
+540782
+406591
+184499
+14380
+350230
+458175
+528786
+314935
+111086
+2191
+20371
+337042
+558371
+296907
+539937
+511463
+574856
+87864
+403817
+152598
+169712
+533227
+173545
+478862
+19455
+258433
+373440
+460229
+525682
+176857
+525050
+277025
+156416
+206784
+415179
+183204
+210374
+312868
+514366
+65208
+376342
+515792
+383066
+85247
+119132
+338007
+88748
+206705
+495808
+532164
+150686
+35474
+207860
+111165
+391199
+346011
+537721
+11390
+487482
+360983
+400347
+92795
+347506
+324322
+371958
+101280
+222842
+563604
+210299
+150616
+96351
+330455
+273551
+228749
+248051
+495252
+372265
+52664
+191874
+157416
+446428
+136681
+1228
+321811
+93791
+477867
+192520
+157124
+40620
+200541
+103904
+329494
+60093
+112573
+489125
+513115
+322968
+561619
+74309
+572462
+248252
+375376
+217312
+243213
+79878
+452218
+349754
+554291
+434043
+460373
+452591
+567787
+504711
+196007
+511153
+312416
+296056
+308849
+203667
+253223
+331230
+465545
+363048
+69392
+301506
+216198
+147979
+6005
+381870
+56983
+320972
+144122
+210855
+151480
+299288
+462486
+103931
+321079
+4134
+239861
+540006
+413805
+221222
+198943
+450790
+380597
+388298
+58737
+246197
+160726
+398554
+513946
+222235
+323851
+364703
+125643
+169800
+445662
+223764
+575372
+489207
+559474
+7155
+453819
+402720
+102355
+415076
+287436
+35705
+111076
+395865
+310862
+570834
+54728
+215778
+80053
+35148
+350488
+524140
+190097
+36661
+302110
+96884
+383397
+245462
+446958
+138937
+424712
+561814
+276964
+148034
+411068
+357824
+103257
+322149
+508899
+580294
+214386
+114419
+271429
+168260
+209835
+573072
+252269
+31980
+161308
+281508
+192714
+247599
+188948
+180563
+419601
+233660
+154804
+311846
+181499
+5535
+175082
+531018
+412338
+166995
+441411
+427820
+516846
+287366
+67959
+271266
+330845
+74209
+508167
+542699
+66485
+453756
+158412
+443784
+118097
+265050
+29074
+152623
+532493
+292988
+530384
+192660
+502336
+472648
+151657
+351626
+241010
+115070
+268356
+539557
+304698
+251140
+497158
+527445
+385428
+179200
+512394
+184978
+141910
+36311
+579457
+19129
+424960
+181714
+126216
+512911
+488360
+379533
+337551
+325410
+364587
+468885
+211107
+90062
+500446
+105960
+451951
+431431
+134178
+164548
+173826
+373988
+15157
+3091
+393557
+380011
+75372
+37403
+209995
+493610
+315899
+353299
+355040
+547000
+86133
+58174
+377326
+510230
+480583
+158588
+432529
+311206
+127626
+239980
+166340
+104185
+405174
+507211
+542782
+448078
+253477
+542694
+567308
+214853
+288824
+283268
+480757
+503200
+221089
+112388
+171539
+124452
+224200
+206362
+428754
+256192
+119414
+351620
+330050
+547504
+216398
+94261
+19916
+163242
+432588
+143824
+361103
+271138
+260150
+313627
+141086
+308263
+388453
+153217
+372794
+514787
+251910
+351335
+92683
+465836
+18442
+404128
+208476
+47873
+303219
+201622
+367489
+32760
+436174
+401926
+338419
+45248
+328464
+312216
+156282
+315702
+300701
+345401
+515350
+29094
+284296
+466449
+351057
+110672
+364853
+10014
+415828
+397522
+451412
+433124
+158277
+93476
+183387
+109889
+223326
+105547
+530061
+256301
+526778
+80974
+86650
+45835
+202154
+92678
+315991
+423919
+455044
+491168
+272253
+146627
+285349
+86001
+44171
+162332
+257328
+432820
+519275
+380639
+269436
+236016
+543215
+346752
+575970
+423498
+136926
+195648
+126634
+133078
+138656
+490012
+122388
+195165
+434900
+533625
+504167
+333697
+216576
+538775
+125072
+391154
+545007
+150292
+566717
+367362
+490991
+356623
+141271
+402795
+516786
+39499
+536716
+293324
+212853
+276381
+57124
+325992
+394659
+452178
+117674
+461172
+518586
+497021
+462345
+526570
+17328
+202928
+62566
+411277
+256983
+49473
+211206
+398031
+277955
+531178
+453959
+27946
+252844
+30273
+536933
+500298
+229111
+7977
+27642
+303726
+79927
+110313
+527691
+442205
+33345
+365851
+233236
+239157
+409221
+400803
+32947
+422516
+359727
+215872
+559454
+289716
+450247
+57827
+312298
+530383
+260048
+35857
+224222
+299533
+13296
+325907
+117869
+54088
+391011
+340478
+205344
+347823
+468604
+78701
+101414
+197499
+490871
+89273
+380343
+441974
+35974
+486114
+354398
+535536
+294030
+7276
+278742
+137028
+98721
+372764
+429802
+72105
+220307
+116845
+195406
+333000
+130401
+264382
+125458
+363036
+286994
+531070
+113801
+4108
+47603
+130118
+573924
+302990
+237566
+21470
+577926
+139436
+425925
+36844
+63602
+399791
+35894
+347228
+225617
+504813
+245320
+466007
+553931
+166731
+164885
+19090
+457262
+247806
+502895
+167593
+352491
+520
+26386
+497348
+352000
+386164
+32901
+730
+30925
+333167
+150361
+231747
+462244
+504958
+260738
+313762
+346645
+486118
+202998
+541613
+183884
+230245
+83172
+126638
+51844
+421673
+118625
+377723
+229427
+371326
+104345
+361687
+114246
+397354
+104137
+120850
+260516
+389168
+234555
+26348
+78522
+409784
+303024
+377949
+69887
+546983
+113736
+298197
+476810
+137315
+376321
+410337
+492905
+119785
+158167
+185930
+354061
+106563
+328452
+506587
+536517
+480173
+570688
+376441
+252127
+247720
+132554
+41923
+400317
+170041
+151938
+198650
+6437
+49091
+221820
+455966
+309859
+300659
+15850
+388014
+253386
+65415
+238228
+548882
+302155
+93483
+371869
+397287
+315249
+360564
+448410
+21382
+477474
+144862
+517515
+230190
+322353
+231568
+14940
+132719
+498942
+182469
+113720
+168890
+94852
+246077
+117535
+52596
+419116
+522020
+255338
+125228
+564332
+106375
+249534
+220915
+177758
+293057
+222430
+196878
+554980
+375606
+173081
+84936
+418907
+562229
+457616
+125700
+66038
+239274
+574110
+305540
+98431
+167347
+53345
+438481
+286010
+5569
+343606
+168898
+191301
+236338
+291394
+715
+520237
+236954
+192212
+524002
+471625
+476029
+413124
+203455
+483328
+476417
+114389
+372428
+369221
+322654
+388157
+561314
+264540
+418680
+359540
+426182
+521613
+92248
+74478
+398905
+554273
+125909
+430583
+418959
+503522
+382999
+403145
+536375
+352618
+108193
+279696
+163253
+439007
+204536
+552186
+269926
+372147
+399921
+201418
+240565
+471483
+91619
+393971
+331648
+385856
+567440
+81922
+391722
+372894
+535997
+134096
+545958
+239943
+186929
+34222
+177714
+277812
+197111
+281878
+532003
+557172
+142890
+196116
+385454
+322845
+374987
+123137
+255112
+111207
+304819
+523526
+336046
+42893
+241273
+240049
+90659
+271364
+408008
+253282
+167067
+354278
+178317
+229653
+93333
+163666
+566920
+495199
+100329
+218119
+558864
+257382
+406152
+206587
+420339
+325919
+278853
+555763
+293200
+151000
+209664
+79380
+197177
+353953
+464522
+392260
+46144
+154202
+164366
+206025
+511236
+24921
+497907
+393226
+318138
+364125
+157321
+492395
+187857
+109939
+441500
+144251
+368581
+51403
+283498
+43555
+89356
+404601
+23272
+425762
+460682
+544629
+209829
+322029
+199247
+307262
+571242
+124236
+162393
+104829
+250766
+563938
+237399
+131516
+483001
+21994
+97958
+540187
+264497
+384808
+343187
+51277
+6712
+566103
+435384
+292082
+359039
+165157
+267972
+263796
+489313
+392722
+541924
+554433
+571034
+146112
+201934
+518716
+64116
+294992
+289586
+159970
+479617
+269006
+140465
+513260
+554805
+6579
+452696
+34445
+548296
+372983
+509656
+199339
+130030
+128372
+449454
+139306
+247914
+99024
+499134
+536653
+468917
+412813
+404338
+215303
+455414
+413497
+574988
+397117
+188631
+378701
+241867
+143129
+419884
+412749
+496954
+317732
+16977
+398309
+162363
+147576
+100016
+209018
+92660
+173302
+525732
+449198
+99734
+12733
+172946
+168032
+210988
+340697
+4795
+534887
+483553
+278323
+178175
+190095
+357542
+230432
+227460
+334609
+562121
+378126
+555357
+325666
+451859
+526837
+531710
+297249
+294839
+499785
+254976
+527220
+173057
+11760
+163012
+215998
+114420
+57812
+563712
+513887
+201859
+36333
+291990
+338375
+460621
+518889
+337502
+133050
+80172
+537007
+295270
+335644
+227852
+336044
+204137
+82259
+165675
+295713
+343937
+442567
+356002
+346932
+62985
+180925
+525381
+13081
+377406
+159774
+462643
+359105
+185821
+390201
+84168
+128059
+80340
+481159
+491902
+306619
+353807
+390569
+541562
+292616
+64621
+439224
+96288
+449798
+160927
+496324
+90778
+126145
+97230
+572767
+11570
+539075
+350988
+3779
+208135
+551315
+216449
+169606
+502
+67765
+281414
+118594
+146127
+543985
+124927
+471394
+385508
+373783
+501315
+140974
+42757
+527054
+202387
+513056
+329931
+153973
+510152
+520812
+534601
+131282
+386638
+508538
+234779
+229329
+396568
+153568
+229478
+153574
+356299
+436694
+324139
+299409
+212462
+478155
+393266
+117836
+190760
+213605
+196
+444382
+445211
+363845
+433277
+521141
+464786
+169076
+301402
+4495
+177258
+328962
+183757
+452966
+416059
+113233
+559417
+280678
+481398
+328372
+234910
+30667
+343062
+383046
+370953
+258089
+404229
+456931
+535183
+300867
+60507
+262672
+7288
+81100
+575395
+539951
+347848
+437594
+352005
+14941
+196453
+528386
+466939
+482187
+293468
+494077
+217285
+362951
+435751
+411480
+517315
+480015
+60610
+353001
+376442
+430265
+478338
+303069
+525344
+437331
+389315
+8179
+31981
+313872
+330920
+515465
+258905
+142249
+323128
+389699
+565012
+124636
+488693
+376608
+309424
+370596
+261940
+39871
+226984
+152866
+515050
+116861
+412876
+120411
+550452
+565273
+273791
+181466
+183155
+293505
+336113
+569997
+303738
+331049
+147030
+74058
+198176
+23991
+198841
+79816
+85183
+261535
+566756
+386291
+318200
+569849
+57429
+36049
+420827
+519271
+24391
+172087
+158795
+133002
+522198
+133698
+499365
+79261
+258860
+457718
+179948
+421875
+558073
+206684
+529762
+456756
+65773
+425722
+53102
+294264
+416730
+38574
+176275
+404297
+127494
+242060
+272212
+189244
+510861
+421370
+208516
+206431
+248457
+39502
+375087
+130839
+308730
+572453
+263474
+544611
+255708
+412604
+390094
+578131
+234463
+493563
+9450
+381914
+148999
+32300
+423576
+569758
+347253
+92939
+112212
+13923
+39472
+363736
+289659
+269949
+88349
+188522
+488915
+129054
+573823
+316000
+440562
+408818
+539302
+199575
+122300
+340047
+322816
+472878
+313922
+228071
+265648
+400166
+169166
+10040
+125245
+148766
+31281
+172599
+431067
+208236
+441824
+175611
+15148
+431199
+521587
+50025
+443139
+349822
+515056
+27530
+571970
+82367
+7115
+424333
+157601
+537506
+447187
+115182
+547597
+5586
+143040
+31650
+196336
+279818
+206273
+403104
+514248
+243190
+558642
+548246
+16848
+391539
+89614
+284589
+191314
+259452
+208380
+209441
+465463
+385005
+321385
+223569
+11727
+87574
+566470
+210890
+323598
+427193
+425676
+401240
+94021
+259571
+447553
+456053
+84693
+14278
+119995
+234595
+408696
+136271
+143560
+357578
+28071
+36561
+157102
+293789
+392251
+356622
+180274
+48320
+475779
+301326
+100977
+413551
+574010
+404479
+80725
+552221
+575441
+197424
+124601
+215633
+359546
+25386
+73199
+334466
+156572
+124614
+34121
+460049
+327623
+441695
+292488
+476514
+464018
+348571
+113413
+125208
+129690
+446218
+493761
+383413
+460390
+343149
+374041
+525211
+451263
+333683
+385194
+107427
+102872
+517249
+475879
+575755
+147787
+297180
+343774
+112437
+142240
+384503
+511111
+51089
+145408
+143582
+408138
+162858
+71850
+126925
+222781
+314616
+425609
+203928
+337563
+223300
+52644
+272566
+232597
+374430
+469075
+267164
+265851
+28134
+308889
+465795
+47263
+233727
+42
+493117
+124621
+533378
+361259
+458750
+429033
+383289
+490927
+520964
+174420
+64425
+378859
+401850
+281475
+46508
+205300
+280736
+110961
+230679
+151956
+321497
+73665
+488736
+165353
+365983
+556230
+21465
+581226
+448861
+3793
+347335
+150726
+75319
+2521
+285894
+133876
+104589
+346013
+63516
+83656
+491515
+326256
+49942
+28508
+475413
+270222
+235839
+48554
+327777
+111179
+507171
+425973
+449490
+205239
+82375
+459575
+432300
+91885
+340922
+270239
+195894
+121417
+344831
+439651
+232148
+391688
+480793
+534275
+260823
+469294
+8688
+255654
+191300
+383464
+81594
+21240
+478077
+517596
+555953
+294119
+402234
+459500
+564280
+106849
+167501
+98328
+267411
+145512
+272599
+50054
+414156
+161129
+418226
+11796
+502090
+390350
+440500
+240727
+104406
+163682
+437910
+143767
+358901
+527631
+500543
+28377
+231097
+227985
+556703
+421566
+73201
+478393
+280347
+15497
+131969
+515760
+295440
+462527
+42147
+120007
+212895
+425361
+454143
+5758
+366782
+213932
+229848
+458861
+132791
+476664
+150365
+343038
+529649
+180515
+499810
+329041
+15660
+419228
+396295
+502644
+321085
+245049
+34193
+217323
+446455
+528046
+375573
+15802
+147448
+407291
+84000
+280891
+150487
+510606
+163025
+249964
+126123
+233771
+118507
+97278
+357386
+23121
+10580
+2153
+176017
+371472
+373289
+173908
+296797
+334083
+301107
+577522
+125404
+278359
+575032
+273002
+266371
+108315
+255633
+503490
+250051
+143927
+117407
+198271
+447043
+329789
+399991
+458388
+87489
+228411
+494634
+260802
+454161
+446322
+231079
+438373
+395665
+244539
+212427
+356660
+347276
+183287
+498374
+21167
+544522
+418533
+288493
+245660
+406103
+406976
+367313
+455555
+117337
+384465
+185697
+160393
+463825
+276852
+181462
+176288
+452816
+102497
+54277
+225791
+361046
+197278
+9857
+227736
+398992
+55868
+170914
+181677
+467803
+560470
+264599
+540372
+559442
+201207
+137227
+267643
+355471
+245431
+555669
+344498
+84783
+193474
+102411
+401860
+119469
+448786
+449990
+568082
+340472
+307573
+231828
+307547
+82052
+15140
+493612
+503972
+386592
+473219
+495557
+159440
+355869
+311531
+209733
+240119
+415048
+296098
+249482
+15663
+151432
+263011
+488539
+463913
+502798
+174276
+495613
+407861
+229304
+146742
+545039
+161202
+295134
+162144
+453317
+52759
+335201
+222903
+20333
+559550
+336049
+346140
+491223
+306611
+102746
+455355
+449921
+477288
+77821
+289712
+452663
+147758
+129571
+490869
+345961
+94501
+160394
+432993
+178796
+372494
+316323
+383435
+194940
+74583
+148911
+518027
+431827
+32724
+158548
+227227
+500330
+54679
+321024
+471175
+252074
+476569
+573258
+337247
+294373
+558661
+148898
+563267
+163112
+411968
+193565
+455210
+349344
+337160
+160456
+255158
+553678
+123843
+549687
+381968
+579471
+100604
+379841
+357526
+197263
+14756
+412639
+210915
+47204
+539251
+166255
+490199
+260363
+91654
+170550
+187888
+97362
+285418
+176993
+292741
+361901
+296988
+223496
+493753
+114907
+151358
+316534
+472509
+499802
+348519
+347747
+58851
+104790
+396779
+130528
+2255
+19624
+526800
+233950
+505945
+131207
+290750
+114090
+196665
+8708
+134688
+394715
+115088
+492196
+530099
+518729
+291572
+421457
+445365
+78929
+415461
+551796
+210002
+207913
+344878
+303893
+149196
+353275
+122413
+553361
+519132
+467135
+431439
+17089
+322119
+228214
+35062
+105689
+366141
+285651
+60409
+472671
+401446
+492846
+21023
+421952
+374100
+265200
+506628
+62298
+243626
+212122
+350648
+409921
+428140
+399212
+388267
+198921
+429246
+202040
+570001
+261346
+61171
+131815
+455448
+82696
+554607
+102174
+386803
+188421
+191846
+209898
+380117
+321064
+119617
+188651
+132210
+244299
+174072
+542910
+378334
+118405
+543347
+183657
+581180
+395289
+64760
+265584
+29573
+493720
+94795
+315601
+416596
+260106
+244019
+463884
+579468
+112085
+300972
+238528
+382542
+57672
+165298
+46889
+289497
+337180
+481252
+7913
+432150
+288161
+403758
+257336
+565331
+346589
+270785
+205670
+231580
+508580
+98871
+239997
+554579
+160057
+404922
+78771
+380756
+171199
+148077
+22892
+145378
+26967
+235200
+176007
+90349
+554377
+189744
+257053
+270515
+66508
+113890
+291983
+558927
+420916
+140908
+58384
+438226
+575776
+106935
+40602
+468993
+494810
+210408
+365685
+483722
+39430
+258793
+272615
+51476
+189919
+443887
+391648
+422670
+445135
+198959
+405529
+459757
+465489
+81827
+262576
+408289
+309237
+76249
+460091
+512630
+45959
+280320
+200492
+404652
+48475
+18480
+457097
+65889
+162256
+265950
+520752
+299082
+51500
+499313
+104906
+35438
+167647
+7274
+387824
+242139
+173166
+399830
+12014
+510642
+154053
+67785
+78170
+514118
+87998
+52703
+203539
+534533
+85926
+274438
+401653
+458790
+509262
+144481
+387515
+246649
+503207
+235131
+501531
+62025
+43286
+272323
+326128
+561889
+167529
+171067
+50778
+301282
+469719
+509388
+480317
+379055
+546428
+192763
+445602
+420882
+232790
+174332
+232865
+292822
+511145
+119502
+312591
+110330
+281353
+116244
+58778
+428079
+64902
+520840
+232054
+473214
+572574
+296684
+351590
+217997
+178761
+71618
+226496
+285212
+381195
+499903
+232849
+468997
+345559
+503097
+578570
+396404
+405223
+578752
+403500
+188958
+504498
+491623
+462929
+525762
+395550
+574227
+240751
+169356
+524694
+40886
+571635
+487774
+86220
+95677
+268987
+502599
+155270
+103855
+125100
+241355
+220214
+391774
+110618
+154587
+134483
+458781
+360877
+465963
+194595
+346934
+127153
+188078
+553869
+102665
+400547
+33759
+42779
+397587
+140295
+151807
+549136
+470288
+89738
+328368
+546934
+164255
+563683
+399988
+360951
+217303
+326781
+546133
+135399
+94666
+330037
+569839
+411070
+497466
+404805
+417854
+318442
+255036
+457230
+346863
+307438
+370448
+5124
+152582
+38118
+12179
+58462
+308420
+329456
+74920
+250368
+186428
+556073
+111806
+361244
+80273
+230964
+156754
+503101
+75173
+389404
+195538
+88848
+286018
+245481
+140929
+533721
+268378
+70048
+315467
+46269
+372807
+192403
+387328
+163033
+481314
+65306
+192529
+321107
+112232
+441216
+412399
+565391
+220670
+61471
+463290
+346707
+67587
+147624
+13031
+396754
+278601
+439426
+42834
+281829
+376209
+353148
+556562
+97579
+217989
+319530
+82551
+235319
+431799
+53892
+52853
+54533
+88897
+225093
+386777
+546742
+273684
+413900
+245447
+577995
+16249
+188414
+485142
+199602
+89258
+109679
+502397
+14494
+13632
+51674
+244999
+305050
+455956
+426795
+560700
+327306
+410301
+343803
+539422
+156740
+527845
+100582
+9941
+466585
+61515
+231895
+157052
+41271
+148128
+141172
+320232
+78565
+539883
+391300
+365182
+322194
+116517
+323496
+473783
+519874
+440706
+361587
+265153
+329946
+342814
+32258
+153510
+194555
+309317
+245006
+300303
+97767
+218224
+370170
+290477
+207178
+456730
+209480
+513775
+199516
+581542
+32524
+416337
+96241
+506279
+422893
+248911
+509855
+355183
+201220
+234914
+333436
+68198
+429074
+328430
+160531
+467854
+280688
+140661
+349525
+267315
+565543
+313162
+25751
+232574
+560358
+505213
+494427
+160308
+287335
+99182
+413260
+558808
+290839
+122954
+229221
+192007
+243189
+117645
+552824
+366111
+102056
+356949
+566298
+97899
+422545
+343769
+13127
+179273
+104486
+37660
+304099
+517570
+20207
+36484
+36492
+155974
+107257
+534019
+522371
+222825
+96183
+509227
+302260
+95078
+280918
+367582
+317033
+347982
+73209
+290521
+187243
+425151
+483723
+573796
+187249
+144114
+132992
+35887
+546067
+426532
+45626
+461805
+129989
+541478
+485489
+578498
+485483
+144784
+248224
+372362
+92050
+423519
+473118
+177207
+105455
+276434
+157767
+384335
+509497
+338191
+224010
+327388
+96988
+43376
+67867
+320743
+555197
+104453
+14439
+512194
+396387
+252559
+108953
+461262
+66320
+97946
+238065
+306139
+572408
+577864
+81004
+464526
+89378
+193389
+259049
+85665
+381134
+412419
+308947
+557510
+502084
+288290
+254609
+188752
+439525
+13980
+140513
+240173
+305268
+38678
+394050
+402926
+364079
+159260
+293034
+55429
+289640
+291028
+211120
+48050
+93887
+361029
+486026
+388374
+207803
+540174
+530630
+430359
+36420
+120099
+199764
+492911
+84498
+200882
+139843
+4975
+421209
+259513
+520324
+211317
+236457
+419344
+3867
+287846
+50434
+26624
+507235
+16238
+103705
+497555
+440060
+175825
+245460
+308276
+178535
+391735
+206391
+201550
+400945
+194634
+262360
+554142
+407574
+225225
+246057
+498627
+486172
+226571
+461751
+459733
+345869
+503841
+286460
+45644
+22861
+285599
+580284
+569565
+286778
+150024
+542101
+484075
+538153
+20470
+128034
+544120
+357109
+450728
+550968
+326230
+558809
+76334
+555387
+47121
+523978
+11081
+378134
+116279
+364884
+488250
+551957
+322824
+545564
+255573
+286327
+355453
+361933
+434897
+32597
+226761
+166482
+557564
+208166
+232115
+283520
+137395
+555894
+103509
+174284
+458313
+316147
+344059
+370701
+548930
+89894
+373662
+572095
+19324
+574411
+45746
+480122
+63950
+92339
+201111
+157053
+401539
+427956
+339099
+274651
+159537
+556101
+323399
+564337
+514915
+556025
+66427
+322357
+173737
+369128
+420230
+45176
+509675
+374677
+272311
+109797
+384723
+383678
+453040
+91080
+301634
+533003
+40361
+221605
+216228
+104002
+161011
+146123
+214421
+496252
+264948
+9759
+138856
+316189
+145734
+50411
+325157
+259099
+516856
+529668
+135976
+467130
+367433
+385598
+520933
+102805
+30066
+436696
+216837
+380754
+350457
+126974
+565374
+73832
+214703
+110501
+380609
+135872
+140231
+251816
+133836
+398866
+230362
+426815
+2240
+51484
+546325
+224093
+221190
+525024
+238806
+99908
+165795
+109146
+537727
+496571
+183803
+211175
+433845
+168692
+526394
+368402
+256309
+468972
+139169
+398440
+171678
+547341
+64332
+533589
+483249
+406000
+330348
+439188
+572886
+252829
+242724
+139127
+404568
+45809
+52257
+458727
+334509
+559665
+60992
+290896
+503106
+27972
+536891
+410855
+31202
+457882
+403315
+87399
+395291
+322141
+226377
+202799
+420826
+553034
+212077
+97693
+266370
+101656
+504142
+342933
+87567
+342060
+268854
+437028
+20175
+198625
+405047
+382374
+338291
+403975
+527906
+322429
+545550
+140043
+107389
+74059
+315621
+110138
+78381
+295576
+494438
+106335
+472349
+15818
+162358
+366484
+44604
+66524
+118606
+366873
+270721
+556478
+350789
+298628
+163314
+262800
+459428
+491725
+285421
+406332
+498280
+34535
+524282
+315744
+226592
+218294
+459141
+242034
+114164
+293733
+248242
+452881
+441496
+54358
+177489
+372861
+349489
+483941
+572802
+356494
+193875
+146570
+58253
+21338
+6220
+341933
+533368
+1818
+428248
+293026
+227656
+193021
+326938
+512966
+226020
+343059
+249720
+540106
+375278
+300023
+126512
+517135
+472540
+361439
+132702
+503294
+109537
+540669
+332007
+245266
+313999
+10386
+225715
+311567
+103837
+302405
+248616
+102654
+155087
+124756
+379659
+569272
+160166
+428234
+422280
+174425
+133412
+174503
+216581
+345063
+52949
+69536
+216161
+272728
+200870
+120792
+193480
+493923
+445567
+558539
+51938
+422706
+416271
+244160
+437898
+327352
+305480
+349459
+522418
+485219
+225133
+361400
+546569
+190015
+348216
+421822
+457683
+178683
+40894
+234526
+465074
+518725
+168096
+210190
+139605
+35195
+463640
+286770
+141651
+112022
+532552
+325327
+227224
+17272
+84163
+331475
+126065
+289309
+8583
+52952
+189427
+579693
+437947
+187565
+215982
+356424
+453731
+463522
+372316
+251797
+70187
+280515
+556608
+341635
+391067
+469480
+476298
+57917
+146672
+122747
+394328
+12209
+80013
+573291
+278449
+129659
+579560
+557190
+227468
+334782
+51157
+23774
+9426
+86582
+39211
+275751
+131597
+51250
+357255
+9041
+346482
+9647
+157019
+409016
+273416
+114414
+298172
+388854
+275025
+58079
+518034
+503518
+146710
+120632
+474680
+303713
+259097
+479630
+208318
+437298
+173704
+361831
+371638
+344279
+230175
+72507
+417980
+72621
+163057
+92894
+543525
+577364
+263696
+472732
+66027
+391584
+197745
+131019
+65604
+91318
+535934
+212646
+576354
+482071
+160556
+120129
+7260
+344881
+447548
+318193
+30383
+527002
+34904
+35677
+526222
+105261
+401897
+399452
+25660
+524595
+384512
+117543
+514600
+268944
+112664
+222340
+569058
+495332
+192153
+75591
+286711
+174888
+577065
+25508
+169972
+401820
+425475
+290700
+173091
+559101
+122418
+244124
+198645
+325519
+276437
+528276
+146614
+45574
+417804
+326420
+250594
+27353
+310407
+370103
+274957
+561160
+167598
+397166
+257458
+404546
+148392
+373396
+62230
+493522
+563665
+274240
+269815
+79024
+527427
+84674
+486788
+267690
+443347
+149304
+412285
+207041
+412916
+10764
+151338
+299000
+17882
+475510
+398188
+558213
+70493
+180779
+347210
+280211
+58146
+379022
+504125
+537604
+464858
+329573
+568623
+228309
+454444
+552775
+557884
+435671
+168706
+142257
+571437
+574845
+387773
+321008
+574208
+405811
+375426
+321887
+256852
+433554
+517029
+125870
+80395
+497139
+490008
+405279
+571857
+225738
+514913
+456239
+499402
+96440
+487607
+370999
+319617
+370233
+60760
+352703
+478575
+84170
+134112
+77689
+185036
+73738
+547502
+104782
+213276
+136908
+436273
+442149
+355000
+374061
+249884
+105711
+136464
+146997
+76351
+388487
+99115
+124135
+24721
+132931
+1149
+182403
+386089
+81691
+480657
+441522
+60989
+268000
+55840
+514321
+577959
+359638
+457986
+533596
+60332
+367082
+772
+535842
+473541
+270677
+409009
+259216
+302318
+117036
+331372
+231125
+384486
+405214
+20760
+579760
+172995
+359110
+83110
+410068
+109916
+328757
+299261
+19028
+515660
+40757
+10256
+442695
+553097
+185903
+74388
+425120
+241326
+299609
+29397
+328728
+283881
+344029
+367336
+27075
+163628
+127263
+488979
+460147
+473050
+405762
+221547
+131581
+561187
+406489
+140696
+452721
+530466
+118965
+398803
+218365
+298738
+19441
+521550
+120157
+498687
+4754
+365866
+70865
+235156
+133386
+142742
+221183
+262391
+567053
+520982
+121349
+448779
+440354
+3983
+578993
+519691
+160703
+103307
+300408
+137106
+488377
+523660
+318022
+132578
+302520
+153040
+408817
+145227
+311190
+159662
+202923
+256775
+359864
+384848
+336404
+185303
+421703
+362682
+464622
+246590
+422729
+165500
+42563
+219216
+520232
+95063
+265547
+532686
+290558
+112591
+448211
+315281
+545475
+225850
+232460
+82740
+272880
+347254
+122047
+352151
+541486
+97249
+200252
+544782
+499571
+379014
+303534
+479909
+305464
+323682
+181524
+273855
+190783
+567801
+119752
+241503
+536429
+327323
+128756
+349868
+500495
+372260
+315824
+484986
+364993
+124759
+300124
+329319
+68628
+14549
+121897
+506595
+115709
+199610
+230150
+31717
+139549
+222332
+534161
+360393
+541664
+507167
+286523
+158660
+66926
+195750
+80022
+589
+252220
+47255
+247014
+49881
+455005
+232453
+445722
+516805
+544122
+541917
+469356
+370042
+130522
+502163
+307866
+408894
+524247
+52233
+177861
+348881
+357943
+295303
+475389
+431691
+61316
+143998
+503483
+340155
+488785
+133636
+133567
+251627
+470095
+34873
+88815
+261178
+468612
+127477
+157960
+15687
+303089
+572331
+456708
+190515
+126131
+239194
+332074
+129765
+107167
+478184
+421833
+359715
+112440
+331317
+74492
+505386
+247839
+534210
+134503
+422700
+352111
+98674
+546219
+520508
+503008
+461953
+101913
+362092
+22103
+359128
+316666
+335579
+414750
+297980
+365652
+53635
+547601
+97589
+570515
+7125
+99828
+321437
+80671
+426275
+294883
+212605
+424293
+338108
+25005
+6949
+234291
+428399
+7149
+343076
+575287
+431848
+307611
+293909
+542511
+564739
+573843
+356878
+472864
+336793
+121904
+161060
+254004
+269873
+216428
+77172
+346517
+498555
+203690
+348973
+117704
+552672
+275270
+208107
+314016
+427518
+278134
+53420
+318777
+238980
+350614
+467315
+61233
+272188
+550797
+125051
+553965
+187286
+282912
+102532
+156076
+467848
+130875
+531585
+523470
+507684
+332582
+438989
+489209
+125944
+127474
+371957
+570349
+283286
+541635
+547106
+253630
+388677
+572525
+542302
+554537
+367205
+228300
+443498
+356432
+123946
+490441
+211063
+224542
+116574
+434510
+33116
+353136
+134167
+128291
+542510
+433963
+147453
+365766
+374806
+336600
+38238
+165476
+535578
+127788
+157099
+173640
+114348
+496722
+58141
+467296
+235864
+5154
+22775
+422536
+136820
+453438
+446359
+41990
+422240
+39267
+391392
+233825
+308504
+478250
+87328
+4079
+127074
+267709
+377635
+353231
+185768
+487897
+124215
+249757
+341681
+557552
+280733
+374734
+281601
+456420
+222266
+491947
+432732
+467157
+94025
+410328
+428291
+397639
+163528
+234697
+557573
+208363
+515962
+358658
+373075
+438995
+425672
+450169
+216103
+254638
+288591
+53626
+43417
+372252
+5038
+218357
+120860
+399349
+485509
+530261
+477087
+352302
+96075
+495443
+133928
+197175
+134074
+212553
+448181
+152000
+254277
+105734
+75481
+343662
+479350
+554347
+71090
+297426
+22176
+277622
+469235
+163041
+221272
+154263
+89296
+68411
+192871
+183217
+258141
+53058
+540529
+566414
+560948
+254535
+246076
+135972
+420069
+431023
+343643
+32682
+515176
+222635
+377155
+547041
+513283
+26017
+366096
+252133
+138078
+25685
+321798
+549361
+14088
+423048
+570810
+374974
+447501
+492544
+554046
+575357
+420791
+6019
+340451
+66800
+565575
+148055
+330432
+483038
+455004
+288765
+11034
+86988
+347142
+450559
+543581
+293757
+556901
+533032
+333020
+260266
+22420
+13948
+512657
+214124
+231236
+177149
+560879
+491793
+35767
+312878
+118542
+450596
+423773
+48653
+224523
+509577
+462677
+75405
+350023
+452122
+42008
+302555
+382309
+468483
+368684
+372580
+31333
+153697
+124876
+330023
+315672
+53990
+136533
+82815
+356836
+414821
+268717
+7333
+77544
+525373
+371042
+227048
+576327
+419309
+239773
+8119
+424135
+297425
+222711
+489909
+393995
+31019
+539326
+517612
+102461
+199989
+483374
+44952
+103863
+528980
+441543
+85381
+247234
+50924
+483994
+87456
+424271
+356091
+534669
+378831
+560662
+298773
+257896
+498274
+305800
+40517
+183949
+276840
+84442
+297620
+298252
+119088
+233315
+283977
+345154
+287649
+427311
+63399
+4700
+463611
+224104
+209388
+431655
+364190
+28864
+412455
+283290
+228541
+422200
+985
+133596
+323853
+503081
+130732
+224675
+199688
+230862
+21396
+485390
+1532
+125778
+235541
+370478
+522478
+514292
+384338
+531707
+178746
+532747
+62915
+519491
+140691
+112093
+358024
+263687
+297595
+506085
+102446
+325768
+29558
+222054
+466965
+316254
+546500
+216785
+194184
+464390
+348371
+231582
+208995
+464339
+308856
+340946
+214604
+570586
+182227
+248441
+89078
+376310
+73450
+115924
+308235
+15994
+8749
+429679
+37751
+122040
+284286
+388707
+248163
+11320
+427997
+282062
+237600
+376751
+223314
+86215
+12443
+163255
+564940
+462640
+522713
+306303
+460675
+126833
+26201
+224757
+357899
+546782
+96427
+480944
+479556
+569273
+520528
+190690
+344832
+462466
+270354
+559776
+279259
+280909
+227781
+163798
+491098
+439658
+416088
+107375
+74132
+379800
+511654
+346687
+226161
+578849
+544272
+146149
+570624
+178299
+126671
+356380
+530766
+175954
+158798
+422095
+55780
+512276
+560626
+187329
+513125
+347216
+306486
+161840
+180917
+188192
+421437
+93120
+324891
+252216
+488476
+578347
+101959
+10693
+170038
+213586
+210439
+469202
+381463
+343248
+127785
+287328
+538690
+16382
+293022
+112378
+435785
+56092
+381504
+284365
+406129
+233119
+53629
+188509
+191053
+81056
+82252
+538319
+38439
+181948
+439710
+529344
+434035
+342958
+563882
+37734
+364743
+330986
+546226
+463211
+62210
+442724
+232241
+293858
+119345
+61953
+577033
+522015
+381587
+350107
+4936
+511307
+228771
+177811
+231450
+176168
+84540
+259408
+264238
+539738
+255827
+459382
+221105
+431742
+204337
+227741
+336356
+37655
+167159
+59352
+165937
+53956
+378712
+88462
+495786
+542938
+566498
+367228
+157577
+442661
+62363
+390689
+480664
+521540
+414249
+20571
+160855
+451683
+156832
+570045
+326542
+568276
+568717
+563311
+113579
+218268
+546095
+160661
+341118
+150649
+462632
+198972
+220025
+61720
+430681
+524011
+457217
+40064
+285583
+314493
+78023
+470882
+298722
+555597
+489829
+314779
+367818
+138503
+243737
+580255
+444565
+386677
+190841
+493074
+234347
+466988
+227033
+519039
+351554
+390585
+443303
+140983
+81079
+538005
+169757
+368780
+457322
+341804
+409116
+181805
+284292
+551358
+344548
+503569
+336587
+417055
+522315
+58705
+148955
+375530
+474934
+577893
+28881
+360772
+445267
+244737
+355777
+72811
+190788
+54513
+243075
+518551
+487530
+292169
+69293
+397303
+129285
+429996
+109532
+53802
+340573
+91280
+535602
+270908
+381925
+549220
+488573
+47131
+32735
+117525
+279085
+43961
+188906
+394677
+395
+185201
+189365
+127596
+32712
+504810
+3703
+182874
+146981
+306755
+453093
+520503
+169808
+225670
+91063
+348584
+461802
+572555
+185922
+131497
+46736
+536006
+256505
+214975
+13445
+350736
+98115
+50304
+361180
+511333
+564820
+429717
+222500
+40083
+538230
+349438
+371250
+528578
+240418
+302380
+261758
+535809
+308388
+578878
+509451
+46919
+562592
+499950
+90374
+318146
+195353
+355325
+314515
+237277
+203024
+238911
+32039
+145591
+16030
+135411
+229350
+421757
+48034
+183704
+307292
+97974
+275999
+448256
+451915
+119113
+143503
+494141
+50124
+306553
+35526
+255279
+560908
+247264
+367599
+192782
+511324
+574350
+67569
+204360
+111907
+2839
+513971
+245201
+185240
+339468
+540101
+539673
+194425
+22168
+520150
+301595
+96006
+68286
+131280
+356662
+182441
+284749
+107108
+49761
+386718
+55244
+187990
+248678
+147721
+425727
+360350
+310797
+76765
+400489
+247639
+279864
+44699
+356145
+69138
+445041
+560598
+165464
+536343
+7818
+322831
+334760
+451463
+348730
+285967
+286353
+201887
+166165
+359
+465591
+519359
+550444
+402711
+3661
+132706
+534983
+306281
+150317
+15978
+580029
+496090
+267127
+210980
+384015
+222559
+2235
+255649
+278168
+440840
+27326
+202562
+230268
+362712
+1573
+107661
+464515
+373132
+447242
+547440
+43613
+200143
+260883
+250901
+64693
+408480
+204757
+319933
+147471
+381332
+518197
+27656
+260257
+434580
+159203
+568630
+497441
+499597
+60179
+574804
+343254
+501762
+220704
+524536
+86946
+456046
+62937
+49633
+144305
+475593
+478553
+574145
+63648
+3794
+303177
+1340
+82835
+371427
+156747
+448694
+219567
+75095
+242615
+492077
+132776
+199125
+349622
+195754
+455548
+181873
+138185
+338044
+362797
+180953
+505826
+69773
+304834
+162580
+154090
+519853
+319687
+132328
+27969
+52166
+100547
+568131
+415218
+348045
+478159
+402869
+10211
+26547
+551692
+105432
+313340
+182348
+383419
+570947
+345353
+226883
+255784
+214199
+262262
+283261
+449708
+299970
+392391
+245997
+330410
+343571
+519542
+37470
+42144
+342521
+498537
+10935
+443860
+512648
+146099
+98599
+123932
+489861
+262895
+184700
+218587
+363581
+21001
+481404
+249356
+64240
+492349
+199236
+481064
+353405
+116479
+132024
+138768
+524665
+434511
+326970
+138784
+340368
+312081
+366615
+171942
+21232
+473850
+93686
+295574
+51054
+162692
+174091
+20070
+270066
+492816
+20904
+484500
+147140
+242972
+420081
+63563
+261712
+316396
+49413
+520787
+510955
+393840
+142487
+19817
+261180
+413736
+230619
+484614
+337011
+496575
+4338
+552545
+5601
+75426
+568863
+184227
+170629
+438567
+505132
+541353
+284674
+322567
+182423
+312051
+18896
+40471
+321725
+188850
+37119
+95569
+187362
+397133
+528972
+487131
+174989
+370325
+223554
+385633
+103485
+537574
+63240
+256566
+86467
+401092
+486968
+308441
+280017
+527464
+131965
+310479
+125556
+220160
+532963
+310052
+107963
+293841
+388534
+45603
+368949
+391825
+5107
+569705
+231549
+250108
+152933
+206433
+358817
+434006
+283904
+152808
+539975
+24629
+410231
+13465
+502318
+51961
+445594
+209062
+38726
+295420
+430079
+240147
+561512
+35795
+102589
+505619
+565469
+271772
+520561
+372300
+178807
+492805
+1083
+303704
+125635
+217521
+278032
+208688
+335325
+140435
+313990
+143822
+320857
+549230
+76844
+424219
+463876
+243199
+2988
+215170
+30012
+377738
+408568
+490624
+404839
+138316
+157206
+404461
+122934
+263346
+21327
+99913
+67975
+339676
+391891
+365305
+337055
+233834
+125524
+46869
+32577
+304744
+104176
+167356
+210404
+307989
+217223
+196046
+454414
+16356
+244487
+543660
+197461
+199681
+476787
+455085
+307074
+260547
+107468
+334769
+29437
+166837
+53838
+502979
+82678
+288860
+535523
+311950
+237723
+98656
+223123
+273930
+58057
+544334
+324857
+198043
+535326
+316505
+12991
+576820
+43611
+107839
+275749
+456695
+78188
+375786
+466239
+184830
+537128
+434513
+244344
+374576
+69140
+434247
+555009
+510857
+220819
+20598
+99416
+74967
+533129
+515577
+213361
+330974
+548848
+431557
+503278
+130043
+402570
+320554
+559884
+252629
+364596
+423484
+271230
+105552
+143143
+285751
+49994
+204162
+80646
+381393
+123415
+118417
+30932
+425412
+388130
+551243
+468337
+484893
+25014
+174390
+463781
+124647
+60823
+361964
+425702
+575110
+532390
+230881
+84592
+189997
+221307
+361472
+32364
+71918
+316365
+492378
+234251
+48504
+418070
+89884
+562045
+506552
+66360
+122962
+262605
+529939
+345229
+294853
+344397
+56091
+8599
+459823
+175785
+226128
+259983
+354515
+379144
+384995
+205253
+116786
+441432
+448810
+83452
+465129
+506906
+90616
+551959
+406404
+157891
+362090
+439630
+45099
+61960
+478430
+489605
+127050
+579872
+475798
+64510
+447733
+33066
+102848
+538819
+323760
+200401
+179765
+251317
+239376
+83836
+578092
+522452
+393056
+278848
+27787
+377239
+473427
+83065
+377005
+576539
+248019
+473370
+536369
+92648
+332461
+437609
+274800
+388846
+323048
+193407
+541898
+480140
+46526
+26432
+339738
+325991
+37705
+528033
+542922
+313420
+190463
+531000
+454907
+26448
+238199
+476652
+457147
+364256
+72632
+430380
+315448
+353320
+18158
+91527
+454252
+546987
+386370
+38064
+19763
+64152
+453216
+55223
+361860
+522566
+509531
+438432
+31164
+163290
+389197
+333440
+173464
+447842
+381615
+99961
+156126
+103134
+394940
+165638
+261706
+378311
+534081
+373848
+401642
+338019
+378096
+289610
+547421
+174672
+133343
+191360
+293751
+520892
+145214
+167668
+37456
+460962
+465267
+292804
+347529
+203661
+10766
+27371
+203845
+155736
+136715
+463588
+26640
+547612
+131453
+184274
+442456
+265085
+223256
+129420
+23019
+536467
+194532
+127585
+392637
+330408
+524775
+31993
+433924
+502852
+553129
+559364
+297343
+71360
+225537
+271148
+345499
+475893
+237463
+5278
+501243
+413235
+444236
+541071
+380088
+468063
+94858
+225913
+295614
+210276
+170975
+205570
+422375
+550365
+308702
+484627
+565031
+98979
+480345
+579548
+272673
+436875
+287874
+16502
+274917
+281809
+442968
+289263
+347766
+160933
+84533
+266409
+122199
+396200
+30958
+504541
+1591
+89432
+387150
+306383
+15260
+154515
+50752
+166913
+102644
+100196
+160278
+349579
+442536
+17923
+310564
+62020
+152004
+578330
+126299
+527025
+83494
+226400
+268435
+445334
+310391
+505156
+19157
+44677
+318171
+447765
+354369
+527486
+329939
+184771
+134856
+467675
+517133
+89697
+447080
+70685
+144938
+519673
+485758
+454957
+564851
+189451
+408757
+192616
+280734
+305060
+243946
+99179
+303971
+170519
+48917
+549965
+300245
+384101
+576607
+186709
+516341
+241668
+133470
+134811
+500825
+464689
+29833
+343820
+213429
+387434
+279305
+444207
+210777
+372043
+189868
+572229
+8495
+370090
+450282
+277080
+199158
+109612
+567708
+245659
+485129
+268363
+23448
+5352
+235597
+6871
+348720
+94113
+314613
+63729
+114458
+215394
+460460
+240387
+398726
+135604
+571728
+415770
+286908
+138151
+146272
+344094
+345209
+241187
+282768
+113037
+545583
+219283
+145873
+285957
+489235
+157271
+197458
+502671
+499845
+334884
+79084
+505573
+115618
+561491
+354202
+279838
+190734
+134738
+269450
+482784
+144610
+52774
+290659
+440646
+25807
+442952
+159215
+318224
+73445
+211653
+527960
+401862
+431026
+488755
+292278
+400554
+272630
+382668
+470298
+166426
+129645
+28820
+161227
+417696
+560677
+283216
+28978
+310302
+154419
+230450
+328289
+73118
+104691
+15085
+405574
+510548
+470005
+102928
+569249
+413126
+77282
+96732
+359020
+42182
+250875
+106206
+354929
+320796
+453341
+237318
+254834
+137265
+399865
+292685
+152252
+319579
+81484
+16599
+162257
+351034
+396051
+502275
+308278
+34483
+13333
+320290
+321579
+349794
+99219
+200162
+369470
+487583
+62703
+251639
+138246
+157170
+477112
+283963
+74860
+307057
+364075
+295491
+34757
+400161
+170194
+120874
+492817
+3817
+183973
+135436
+512989
+114744
+379210
+201072
+293785
+578385
+237420
+7888
+18224
+155317
+522406
+441440
+110482
+173400
+183348
+552504
+475660
+166948
+147025
+443259
+578792
+245227
+546687
+474519
+393284
+249668
+87493
+151651
+100306
+540466
+546556
+212675
+282942
+21310
+385535
+7304
+303409
+386116
+574297
+514550
+217133
+533553
+447152
+578703
+45392
+166205
+180154
+25143
+338802
+330110
+261389
+343506
+442726
+285388
+554934
+421316
+479912
+85192
+34874
+487266
+226173
+20748
+360660
+574509
+543364
+1554
+125539
+566931
+312889
+466945
+444804
+257187
+568587
+427160
+71123
+563849
+138589
+162841
+129663
+107226
+140686
+321663
+437117
+179808
+321718
+62398
+16497
+468933
+219841
+355430
+293554
+293044
+109516
+485887
+490620
+579893
+427135
+31636
+217919
+432441
+314396
+119802
+393682
+201764
+146193
+116358
+84825
+208311
+419774
+177468
+72052
+142585
+519598
+464006
+556083
+412136
+169361
+442929
+84567
+549932
+75560
+74656
+93314
+393838
+383018
+372433
+431281
+556278
+5513
+108503
+500478
+148588
+138713
+368153
+22646
+303778
+270758
+276706
+275429
+492025
+169111
+494328
+35891
+70258
+400528
+165229
+460494
+269311
+307658
+98283
+369294
+319345
+414578
+541550
+425388
+129855
+99477
+383073
+387906
+293124
+155873
+549224
+266021
+52869
+1584
+421902
+498535
+277235
+153013
+452013
+553561
+138040
+20820
+58483
+423506
+569001
+325153
+383039
+213421
+38825
+453283
+384661
+127702
+238147
+104893
+577826
+64974
+240655
+459153
+145665
+49810
+65008
+545385
+125070
+46433
+143329
+429174
+52947
+321314
+253341
+157365
+453162
+111910
+339019
+239575
+362219
+80652
+247317
+460286
+365724
+160875
+372220
+483389
+572181
+146190
+580975
+54761
+348488
+416104
+468778
+18833
+251537
+234366
+510078
+14723
+338595
+153797
+513098
+467138
+404618
+261982
+545730
+135846
+108244
+562557
+180524
+227370
+341856
+131743
+255691
+497878
+68878
+430640
+441473
+347664
+214369
+347018
+225238
+421762
+317024
+6180
+172004
+303101
+22488
+193494
+199346
+409627
+315350
+263463
+190722
+523292
+363902
+573778
+437290
+389812
+517082
+145073
+37907
+489763
+456261
+270386
+508917
+566823
+543897
+362482
+130966
+66632
+181962
+274613
+135708
+549746
+323766
+366714
+353295
+318813
+153307
+213693
+293378
+149446
+199927
+580543
+331727
+238488
+472833
+308645
+424225
+228746
+110435
+495377
+240646
+274491
+130921
+140006
+4688
+115241
+76962
+66650
+47718
+224991
+434187
+272048
+11169
+158222
+154000
+507436
+443499
+109937
+309692
+534018
+22797
+163339
+168683
+210098
+246069
+137954
+143320
+262587
+414795
+226938
+536831
+128791
+459590
+50514
+30067
+317479
+378655
+229968
+522702
+11122
+515266
+136600
+224509
+149912
+97656
+120747
+349480
+155199
+528731
+523807
+168544
+325664
+229981
+434410
+431208
+508996
+63791
+89225
+513690
+136740
+224364
+515424
+508302
+418175
+465552
+439907
+272097
+451087
+396304
+342273
+52507
+300066
+380089
+326248
+167906
+37846
+262993
+60090
+499249
+90432
+74456
+264660
+325598
+480985
+245411
+425644
+224724
+475439
+246478
+487438
+563731
+441854
+522665
+245915
+85747
+315162
+108761
+407521
+388528
+389453
+298331
+447791
+368820
+440034
+305677
+122208
+182369
+543531
+151820
+63650
+457580
+563381
+320899
+14869
+137260
+61925
+376307
+80367
+269089
+203705
+274835
+267321
+418106
+471273
+74037
+227855
+519758
+89045
+321217
+324203
+479129
+503431
+368528
+527718
+278579
+13525
+291582
+301837
+31667
+68120
+14007
+114158
+124262
+33626
+53949
+187585
+192247
+208844
+212766
+318671
+575012
+439339
+364073
+419624
+178078
+427783
+302159
+339368
+190680
+23807
+288579
+312720
+15778
+553558
+571834
+574376
+122161
+493815
+472376
+483432
+149123
+51628
+264628
+26609
+23696
+485081
+441323
+451679
+42055
+378795
+86439
+366493
+520996
+332869
+18014
+554523
+83476
+6040
+421834
+424392
+308160
+335233
+249809
+349098
+358090
+187349
+61782
+35498
+386514
+207108
+578418
+84447
+104108
+126107
+211674
+111909
+490708
+477025
+206757
+556205
+142484
+454296
+464366
+358254
+215482
+468548
+82680
+100909
+405432
+85764
+94651
+63973
+8131
+288592
+257470
+47597
+321557
+34520
+134066
+246701
+317797
+282365
+78176
+29577
+311075
+331937
+190395
+5802
+245112
+111032
+140556
+199127
+376491
+305253
+300375
+545903
+357782
+377911
+74963
+329336
+25057
+3244
+252020
+293474
+171050
+239306
+189772
+238090
+160031
+36761
+445675
+252716
+152214
+239466
+55155
+479829
+420281
+445812
+118106
+434576
+451104
+316708
+438535
+300322
+167952
+390072
+487220
+20247
+9400
+43944
+35770
+487351
+425462
+212203
+9668
+8981
+574241
+332096
+535563
+192944
+498733
+276151
+550645
+507037
+9769
+404249
+236747
+376416
+306415
+45966
+191296
+576875
+493932
+225075
+536444
+79920
+561681
+60700
+99874
+219437
+509819
+466665
+579326
+428739
+394611
+263083
+379554
+279391
+178516
+133690
+77396
+300137
+6861
+435359
+314108
+444152
+500139
+92749
+89188
+300233
+414201
+443204
+211097
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/oid_bbox_trainable_label_map.pbtxt" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/oid_bbox_trainable_label_map.pbtxt"
new file mode 100644
index 00000000..863e4f31
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/oid_bbox_trainable_label_map.pbtxt"
@@ -0,0 +1,2725 @@
+item {
+ name: "/m/01g317"
+ id: 1
+ display_name: "Person"
+}
+item {
+ name: "/m/09j2d"
+ id: 2
+ display_name: "Clothing"
+}
+item {
+ name: "/m/04yx4"
+ id: 3
+ display_name: "Man"
+}
+item {
+ name: "/m/0dzct"
+ id: 4
+ display_name: "Face"
+}
+item {
+ name: "/m/07j7r"
+ id: 5
+ display_name: "Tree"
+}
+item {
+ name: "/m/05s2s"
+ id: 6
+ display_name: "Plant"
+}
+item {
+ name: "/m/03bt1vf"
+ id: 7
+ display_name: "Woman"
+}
+item {
+ name: "/m/07yv9"
+ id: 8
+ display_name: "Vehicle"
+}
+item {
+ name: "/m/0cgh4"
+ id: 9
+ display_name: "Building"
+}
+item {
+ name: "/m/01prls"
+ id: 10
+ display_name: "Land vehicle"
+}
+item {
+ name: "/m/09j5n"
+ id: 11
+ display_name: "Footwear"
+}
+item {
+ name: "/m/05r655"
+ id: 12
+ display_name: "Girl"
+}
+item {
+ name: "/m/0jbk"
+ id: 13
+ display_name: "Animal"
+}
+item {
+ name: "/m/0k4j"
+ id: 14
+ display_name: "Car"
+}
+item {
+ name: "/m/02wbm"
+ id: 15
+ display_name: "Food"
+}
+item {
+ name: "/m/083wq"
+ id: 16
+ display_name: "Wheel"
+}
+item {
+ name: "/m/0c9ph5"
+ id: 17
+ display_name: "Flower"
+}
+item {
+ name: "/m/0c_jw"
+ id: 18
+ display_name: "Furniture"
+}
+item {
+ name: "/m/0d4v4"
+ id: 19
+ display_name: "Window"
+}
+item {
+ name: "/m/03jm5"
+ id: 20
+ display_name: "House"
+}
+item {
+ name: "/m/01bl7v"
+ id: 21
+ display_name: "Boy"
+}
+item {
+ name: "/m/0463sg"
+ id: 22
+ display_name: "Fashion accessory"
+}
+item {
+ name: "/m/04bcr3"
+ id: 23
+ display_name: "Table"
+}
+item {
+ name: "/m/0jyfg"
+ id: 24
+ display_name: "Glasses"
+}
+item {
+ name: "/m/01xyhv"
+ id: 25
+ display_name: "Suit"
+}
+item {
+ name: "/m/08dz3q"
+ id: 26
+ display_name: "Auto part"
+}
+item {
+ name: "/m/015p6"
+ id: 27
+ display_name: "Bird"
+}
+item {
+ name: "/m/05y5lj"
+ id: 28
+ display_name: "Sports equipment"
+}
+item {
+ name: "/m/01d40f"
+ id: 29
+ display_name: "Dress"
+}
+item {
+ name: "/m/0bt9lr"
+ id: 30
+ display_name: "Dog"
+}
+item {
+ name: "/m/01lrl"
+ id: 31
+ display_name: "Carnivore"
+}
+item {
+ name: "/m/02p0tk3"
+ id: 32
+ display_name: "Human body"
+}
+item {
+ name: "/m/0fly7"
+ id: 33
+ display_name: "Jeans"
+}
+item {
+ name: "/m/04szw"
+ id: 34
+ display_name: "Musical instrument"
+}
+item {
+ name: "/m/0271t"
+ id: 35
+ display_name: "Drink"
+}
+item {
+ name: "/m/019jd"
+ id: 36
+ display_name: "Boat"
+}
+item {
+ name: "/m/03q69"
+ id: 37
+ display_name: "Hair"
+}
+item {
+ name: "/m/0h9mv"
+ id: 38
+ display_name: "Tire"
+}
+item {
+ name: "/m/04hgtk"
+ id: 39
+ display_name: "Head"
+}
+item {
+ name: "/m/01yrx"
+ id: 40
+ display_name: "Cat"
+}
+item {
+ name: "/m/01rzcn"
+ id: 41
+ display_name: "Watercraft"
+}
+item {
+ name: "/m/01mzpv"
+ id: 42
+ display_name: "Chair"
+}
+item {
+ name: "/m/0199g"
+ id: 43
+ display_name: "Bike"
+}
+item {
+ name: "/m/01fdzj"
+ id: 44
+ display_name: "Tower"
+}
+item {
+ name: "/m/04rky"
+ id: 45
+ display_name: "Mammal"
+}
+item {
+ name: "/m/079cl"
+ id: 46
+ display_name: "Skyscraper"
+}
+item {
+ name: "/m/0dzf4"
+ id: 47
+ display_name: "Arm"
+}
+item {
+ name: "/m/0138tl"
+ id: 48
+ display_name: "Toy"
+}
+item {
+ name: "/m/06msq"
+ id: 49
+ display_name: "Sculpture"
+}
+item {
+ name: "/m/03xxp"
+ id: 50
+ display_name: "Invertebrate"
+}
+item {
+ name: "/m/0hg7b"
+ id: 51
+ display_name: "Microphone"
+}
+item {
+ name: "/m/01n5jq"
+ id: 52
+ display_name: "Poster"
+}
+item {
+ name: "/m/03vt0"
+ id: 53
+ display_name: "Insect"
+}
+item {
+ name: "/m/0342h"
+ id: 54
+ display_name: "Guitar"
+}
+item {
+ name: "/m/0k0pj"
+ id: 55
+ display_name: "Nose"
+}
+item {
+ name: "/m/02dl1y"
+ id: 56
+ display_name: "Hat"
+}
+item {
+ name: "/m/04brg2"
+ id: 57
+ display_name: "Tableware"
+}
+item {
+ name: "/m/02dgv"
+ id: 58
+ display_name: "Door"
+}
+item {
+ name: "/m/01bqk0"
+ id: 59
+ display_name: "Bicycle wheel"
+}
+item {
+ name: "/m/017ftj"
+ id: 60
+ display_name: "Sunglasses"
+}
+item {
+ name: "/m/052lwg6"
+ id: 61
+ display_name: "Baked goods"
+}
+item {
+ name: "/m/014sv8"
+ id: 62
+ display_name: "Eye"
+}
+item {
+ name: "/m/0270h"
+ id: 63
+ display_name: "Dessert"
+}
+item {
+ name: "/m/0283dt1"
+ id: 64
+ display_name: "Mouth"
+}
+item {
+ name: "/m/0k5j"
+ id: 65
+ display_name: "Aircraft"
+}
+item {
+ name: "/m/0cmf2"
+ id: 66
+ display_name: "Airplane"
+}
+item {
+ name: "/m/07jdr"
+ id: 67
+ display_name: "Train"
+}
+item {
+ name: "/m/032b3c"
+ id: 68
+ display_name: "Jacket"
+}
+item {
+ name: "/m/033rq4"
+ id: 69
+ display_name: "Street light"
+}
+item {
+ name: "/m/0k65p"
+ id: 70
+ display_name: "Hand"
+}
+item {
+ name: "/m/01ww8y"
+ id: 71
+ display_name: "Snack"
+}
+item {
+ name: "/m/0zvk5"
+ id: 72
+ display_name: "Helmet"
+}
+item {
+ name: "/m/07mhn"
+ id: 73
+ display_name: "Trousers"
+}
+item {
+ name: "/m/04dr76w"
+ id: 74
+ display_name: "Bottle"
+}
+item {
+ name: "/m/03fp41"
+ id: 75
+ display_name: "Houseplant"
+}
+item {
+ name: "/m/03k3r"
+ id: 76
+ display_name: "Horse"
+}
+item {
+ name: "/m/01y9k5"
+ id: 77
+ display_name: "Desk"
+}
+item {
+ name: "/m/0cdl1"
+ id: 78
+ display_name: "Palm tree"
+}
+item {
+ name: "/m/0f4s2w"
+ id: 79
+ display_name: "Vegetable"
+}
+item {
+ name: "/m/02xwb"
+ id: 80
+ display_name: "Fruit"
+}
+item {
+ name: "/m/035r7c"
+ id: 81
+ display_name: "Leg"
+}
+item {
+ name: "/m/0bt_c3"
+ id: 82
+ display_name: "Book"
+}
+item {
+ name: "/m/01_bhs"
+ id: 83
+ display_name: "Fast food"
+}
+item {
+ name: "/m/01599"
+ id: 84
+ display_name: "Beer"
+}
+item {
+ name: "/m/03120"
+ id: 85
+ display_name: "Flag"
+}
+item {
+ name: "/m/026t6"
+ id: 86
+ display_name: "Drum"
+}
+item {
+ name: "/m/01bjv"
+ id: 87
+ display_name: "Bus"
+}
+item {
+ name: "/m/07r04"
+ id: 88
+ display_name: "Truck"
+}
+item {
+ name: "/m/018xm"
+ id: 89
+ display_name: "Ball"
+}
+item {
+ name: "/m/01rkbr"
+ id: 90
+ display_name: "Tie"
+}
+item {
+ name: "/m/0fm3zh"
+ id: 91
+ display_name: "Flowerpot"
+}
+item {
+ name: "/m/02_n6y"
+ id: 92
+ display_name: "Goggles"
+}
+item {
+ name: "/m/04_sv"
+ id: 93
+ display_name: "Motorcycle"
+}
+item {
+ name: "/m/06z37_"
+ id: 94
+ display_name: "Picture frame"
+}
+item {
+ name: "/m/01bfm9"
+ id: 95
+ display_name: "Shorts"
+}
+item {
+ name: "/m/0h8mhzd"
+ id: 96
+ display_name: "Sports uniform"
+}
+item {
+ name: "/m/0d_2m"
+ id: 97
+ display_name: "Moths and butterflies"
+}
+item {
+ name: "/m/0gjbg72"
+ id: 98
+ display_name: "Shelf"
+}
+item {
+ name: "/m/01n4qj"
+ id: 99
+ display_name: "Shirt"
+}
+item {
+ name: "/m/0ch_cf"
+ id: 100
+ display_name: "Fish"
+}
+item {
+ name: "/m/06m11"
+ id: 101
+ display_name: "Rose"
+}
+item {
+ name: "/m/01jfm_"
+ id: 102
+ display_name: "Licence plate"
+}
+item {
+ name: "/m/02crq1"
+ id: 103
+ display_name: "Couch"
+}
+item {
+ name: "/m/083kb"
+ id: 104
+ display_name: "Weapon"
+}
+item {
+ name: "/m/01c648"
+ id: 105
+ display_name: "Laptop"
+}
+item {
+ name: "/m/09tvcd"
+ id: 106
+ display_name: "Wine glass"
+}
+item {
+ name: "/m/0h2r6"
+ id: 107
+ display_name: "Van"
+}
+item {
+ name: "/m/081qc"
+ id: 108
+ display_name: "Wine"
+}
+item {
+ name: "/m/09ddx"
+ id: 109
+ display_name: "Duck"
+}
+item {
+ name: "/m/03p3bw"
+ id: 110
+ display_name: "Bicycle helmet"
+}
+item {
+ name: "/m/0cyf8"
+ id: 111
+ display_name: "Butterfly"
+}
+item {
+ name: "/m/0b_rs"
+ id: 112
+ display_name: "Swimming pool"
+}
+item {
+ name: "/m/039xj_"
+ id: 113
+ display_name: "Ear"
+}
+item {
+ name: "/m/021sj1"
+ id: 114
+ display_name: "Office"
+}
+item {
+ name: "/m/0dv5r"
+ id: 115
+ display_name: "Camera"
+}
+item {
+ name: "/m/01lynh"
+ id: 116
+ display_name: "Stairs"
+}
+item {
+ name: "/m/06bt6"
+ id: 117
+ display_name: "Reptile"
+}
+item {
+ name: "/m/01226z"
+ id: 118
+ display_name: "Football"
+}
+item {
+ name: "/m/0fszt"
+ id: 119
+ display_name: "Cake"
+}
+item {
+ name: "/m/050k8"
+ id: 120
+ display_name: "Mobile phone"
+}
+item {
+ name: "/m/02wbtzl"
+ id: 121
+ display_name: "Sun hat"
+}
+item {
+ name: "/m/02p5f1q"
+ id: 122
+ display_name: "Coffee cup"
+}
+item {
+ name: "/m/025nd"
+ id: 123
+ display_name: "Christmas tree"
+}
+item {
+ name: "/m/02522"
+ id: 124
+ display_name: "Computer monitor"
+}
+item {
+ name: "/m/09ct_"
+ id: 125
+ display_name: "Helicopter"
+}
+item {
+ name: "/m/0cvnqh"
+ id: 126
+ display_name: "Bench"
+}
+item {
+ name: "/m/0d5gx"
+ id: 127
+ display_name: "Castle"
+}
+item {
+ name: "/m/01xygc"
+ id: 128
+ display_name: "Coat"
+}
+item {
+ name: "/m/04m6gz"
+ id: 129
+ display_name: "Porch"
+}
+item {
+ name: "/m/01gkx_"
+ id: 130
+ display_name: "Swimwear"
+}
+item {
+ name: "/m/01s105"
+ id: 131
+ display_name: "Cabinetry"
+}
+item {
+ name: "/m/01j61q"
+ id: 132
+ display_name: "Tent"
+}
+item {
+ name: "/m/0hnnb"
+ id: 133
+ display_name: "Umbrella"
+}
+item {
+ name: "/m/01j51"
+ id: 134
+ display_name: "Balloon"
+}
+item {
+ name: "/m/01knjb"
+ id: 135
+ display_name: "Billboard"
+}
+item {
+ name: "/m/03__z0"
+ id: 136
+ display_name: "Bookcase"
+}
+item {
+ name: "/m/01m2v"
+ id: 137
+ display_name: "Computer keyboard"
+}
+item {
+ name: "/m/0167gd"
+ id: 138
+ display_name: "Doll"
+}
+item {
+ name: "/m/0284d"
+ id: 139
+ display_name: "Dairy"
+}
+item {
+ name: "/m/03ssj5"
+ id: 140
+ display_name: "Bed"
+}
+item {
+ name: "/m/02fq_6"
+ id: 141
+ display_name: "Fedora"
+}
+item {
+ name: "/m/06nwz"
+ id: 142
+ display_name: "Seafood"
+}
+item {
+ name: "/m/0220r2"
+ id: 143
+ display_name: "Fountain"
+}
+item {
+ name: "/m/01mqdt"
+ id: 144
+ display_name: "Traffic sign"
+}
+item {
+ name: "/m/0268lbt"
+ id: 145
+ display_name: "Hiking equipment"
+}
+item {
+ name: "/m/07c52"
+ id: 146
+ display_name: "Television"
+}
+item {
+ name: "/m/0grw1"
+ id: 147
+ display_name: "Salad"
+}
+item {
+ name: "/m/01h3n"
+ id: 148
+ display_name: "Bee"
+}
+item {
+ name: "/m/078n6m"
+ id: 149
+ display_name: "Coffee table"
+}
+item {
+ name: "/m/01xq0k1"
+ id: 150
+ display_name: "Cattle"
+}
+item {
+ name: "/m/0gd2v"
+ id: 151
+ display_name: "Marine mammal"
+}
+item {
+ name: "/m/0dbvp"
+ id: 152
+ display_name: "Goose"
+}
+item {
+ name: "/m/03rszm"
+ id: 153
+ display_name: "Curtain"
+}
+item {
+ name: "/m/0h8n5zk"
+ id: 154
+ display_name: "Kitchen & dining room table"
+}
+item {
+ name: "/m/019dx1"
+ id: 155
+ display_name: "Home appliance"
+}
+item {
+ name: "/m/03hl4l9"
+ id: 156
+ display_name: "Marine invertebrates"
+}
+item {
+ name: "/m/0b3fp9"
+ id: 157
+ display_name: "Countertop"
+}
+item {
+ name: "/m/02rdsp"
+ id: 158
+ display_name: "Office supplies"
+}
+item {
+ name: "/m/0hf58v5"
+ id: 159
+ display_name: "Luggage and bags"
+}
+item {
+ name: "/m/04h7h"
+ id: 160
+ display_name: "Lighthouse"
+}
+item {
+ name: "/m/024g6"
+ id: 161
+ display_name: "Cocktail"
+}
+item {
+ name: "/m/0cffdh"
+ id: 162
+ display_name: "Maple"
+}
+item {
+ name: "/m/03q5c7"
+ id: 163
+ display_name: "Saucer"
+}
+item {
+ name: "/m/014y4n"
+ id: 164
+ display_name: "Paddle"
+}
+item {
+ name: "/m/01yx86"
+ id: 165
+ display_name: "Bronze sculpture"
+}
+item {
+ name: "/m/020jm"
+ id: 166
+ display_name: "Beetle"
+}
+item {
+ name: "/m/025dyy"
+ id: 167
+ display_name: "Box"
+}
+item {
+ name: "/m/01llwg"
+ id: 168
+ display_name: "Necklace"
+}
+item {
+ name: "/m/08pbxl"
+ id: 169
+ display_name: "Monkey"
+}
+item {
+ name: "/m/02d9qx"
+ id: 170
+ display_name: "Whiteboard"
+}
+item {
+ name: "/m/02pkr5"
+ id: 171
+ display_name: "Plumbing fixture"
+}
+item {
+ name: "/m/0h99cwc"
+ id: 172
+ display_name: "Kitchen appliance"
+}
+item {
+ name: "/m/050gv4"
+ id: 173
+ display_name: "Plate"
+}
+item {
+ name: "/m/02vqfm"
+ id: 174
+ display_name: "Coffee"
+}
+item {
+ name: "/m/09kx5"
+ id: 175
+ display_name: "Deer"
+}
+item {
+ name: "/m/019w40"
+ id: 176
+ display_name: "Surfboard"
+}
+item {
+ name: "/m/09dzg"
+ id: 177
+ display_name: "Turtle"
+}
+item {
+ name: "/m/07k1x"
+ id: 178
+ display_name: "Tool"
+}
+item {
+ name: "/m/080hkjn"
+ id: 179
+ display_name: "Handbag"
+}
+item {
+ name: "/m/07qxg_"
+ id: 180
+ display_name: "Football helmet"
+}
+item {
+ name: "/m/0ph39"
+ id: 181
+ display_name: "Canoe"
+}
+item {
+ name: "/m/018p4k"
+ id: 182
+ display_name: "Cart"
+}
+item {
+ name: "/m/02h19r"
+ id: 183
+ display_name: "Scarf"
+}
+item {
+ name: "/m/015h_t"
+ id: 184
+ display_name: "Beard"
+}
+item {
+ name: "/m/0fqfqc"
+ id: 185
+ display_name: "Drawer"
+}
+item {
+ name: "/m/025rp__"
+ id: 186
+ display_name: "Cowboy hat"
+}
+item {
+ name: "/m/01x3z"
+ id: 187
+ display_name: "Clock"
+}
+item {
+ name: "/m/0crjs"
+ id: 188
+ display_name: "Convenience store"
+}
+item {
+ name: "/m/0l515"
+ id: 189
+ display_name: "Sandwich"
+}
+item {
+ name: "/m/015qff"
+ id: 190
+ display_name: "Traffic light"
+}
+item {
+ name: "/m/09kmb"
+ id: 191
+ display_name: "Spider"
+}
+item {
+ name: "/m/09728"
+ id: 192
+ display_name: "Bread"
+}
+item {
+ name: "/m/071qp"
+ id: 193
+ display_name: "Squirrel"
+}
+item {
+ name: "/m/02s195"
+ id: 194
+ display_name: "Vase"
+}
+item {
+ name: "/m/06c54"
+ id: 195
+ display_name: "Rifle"
+}
+item {
+ name: "/m/01xqw"
+ id: 196
+ display_name: "Cello"
+}
+item {
+ name: "/m/05zsy"
+ id: 197
+ display_name: "Pumpkin"
+}
+item {
+ name: "/m/0bwd_0j"
+ id: 198
+ display_name: "Elephant"
+}
+item {
+ name: "/m/04m9y"
+ id: 199
+ display_name: "Lizard"
+}
+item {
+ name: "/m/052sf"
+ id: 200
+ display_name: "Mushroom"
+}
+item {
+ name: "/m/03grzl"
+ id: 201
+ display_name: "Baseball glove"
+}
+item {
+ name: "/m/01z1kdw"
+ id: 202
+ display_name: "Juice"
+}
+item {
+ name: "/m/02wv6h6"
+ id: 203
+ display_name: "Skirt"
+}
+item {
+ name: "/m/016m2d"
+ id: 204
+ display_name: "Skull"
+}
+item {
+ name: "/m/0dtln"
+ id: 205
+ display_name: "Lamp"
+}
+item {
+ name: "/m/057cc"
+ id: 206
+ display_name: "Musical keyboard"
+}
+item {
+ name: "/m/06k2mb"
+ id: 207
+ display_name: "High heels"
+}
+item {
+ name: "/m/0f6wt"
+ id: 208
+ display_name: "Falcon"
+}
+item {
+ name: "/m/0cxn2"
+ id: 209
+ display_name: "Ice cream"
+}
+item {
+ name: "/m/02jvh9"
+ id: 210
+ display_name: "Mug"
+}
+item {
+ name: "/m/0gjkl"
+ id: 211
+ display_name: "Watch"
+}
+item {
+ name: "/m/01b638"
+ id: 212
+ display_name: "Boot"
+}
+item {
+ name: "/m/071p9"
+ id: 213
+ display_name: "Ski"
+}
+item {
+ name: "/m/0pg52"
+ id: 214
+ display_name: "Taxi"
+}
+item {
+ name: "/m/0ftb8"
+ id: 215
+ display_name: "Sunflower"
+}
+item {
+ name: "/m/0hnyx"
+ id: 216
+ display_name: "Pastry"
+}
+item {
+ name: "/m/02jz0l"
+ id: 217
+ display_name: "Tap"
+}
+item {
+ name: "/m/04kkgm"
+ id: 218
+ display_name: "Bowl"
+}
+item {
+ name: "/m/0174n1"
+ id: 219
+ display_name: "Glove"
+}
+item {
+ name: "/m/0gv1x"
+ id: 220
+ display_name: "Parrot"
+}
+item {
+ name: "/m/09csl"
+ id: 221
+ display_name: "Eagle"
+}
+item {
+ name: "/m/02jnhm"
+ id: 222
+ display_name: "Tin can"
+}
+item {
+ name: "/m/099ssp"
+ id: 223
+ display_name: "Platter"
+}
+item {
+ name: "/m/03nfch"
+ id: 224
+ display_name: "Sandal"
+}
+item {
+ name: "/m/07y_7"
+ id: 225
+ display_name: "Violin"
+}
+item {
+ name: "/m/05z6w"
+ id: 226
+ display_name: "Penguin"
+}
+item {
+ name: "/m/03m3pdh"
+ id: 227
+ display_name: "Sofa bed"
+}
+item {
+ name: "/m/09ld4"
+ id: 228
+ display_name: "Frog"
+}
+item {
+ name: "/m/09b5t"
+ id: 229
+ display_name: "Chicken"
+}
+item {
+ name: "/m/054xkw"
+ id: 230
+ display_name: "Lifejacket"
+}
+item {
+ name: "/m/0130jx"
+ id: 231
+ display_name: "Sink"
+}
+item {
+ name: "/m/07fbm7"
+ id: 232
+ display_name: "Strawberry"
+}
+item {
+ name: "/m/01dws"
+ id: 233
+ display_name: "Bear"
+}
+item {
+ name: "/m/01tcjp"
+ id: 234
+ display_name: "Muffin"
+}
+item {
+ name: "/m/0dftk"
+ id: 235
+ display_name: "Swan"
+}
+item {
+ name: "/m/0c06p"
+ id: 236
+ display_name: "Candle"
+}
+item {
+ name: "/m/034c16"
+ id: 237
+ display_name: "Pillow"
+}
+item {
+ name: "/m/09d5_"
+ id: 238
+ display_name: "Owl"
+}
+item {
+ name: "/m/03hlz0c"
+ id: 239
+ display_name: "Kitchen utensil"
+}
+item {
+ name: "/m/0ft9s"
+ id: 240
+ display_name: "Dragonfly"
+}
+item {
+ name: "/m/011k07"
+ id: 241
+ display_name: "Tortoise"
+}
+item {
+ name: "/m/054_l"
+ id: 242
+ display_name: "Mirror"
+}
+item {
+ name: "/m/0jqgx"
+ id: 243
+ display_name: "Lily"
+}
+item {
+ name: "/m/0663v"
+ id: 244
+ display_name: "Pizza"
+}
+item {
+ name: "/m/0242l"
+ id: 245
+ display_name: "Coin"
+}
+item {
+ name: "/m/014trl"
+ id: 246
+ display_name: "Cosmetics"
+}
+item {
+ name: "/m/05r5c"
+ id: 247
+ display_name: "Piano"
+}
+item {
+ name: "/m/07j87"
+ id: 248
+ display_name: "Tomato"
+}
+item {
+ name: "/m/05kyg_"
+ id: 249
+ display_name: "Chest of drawers"
+}
+item {
+ name: "/m/0kmg4"
+ id: 250
+ display_name: "Teddy bear"
+}
+item {
+ name: "/m/07cmd"
+ id: 251
+ display_name: "Tank"
+}
+item {
+ name: "/m/0dv77"
+ id: 252
+ display_name: "Squash"
+}
+item {
+ name: "/m/096mb"
+ id: 253
+ display_name: "Lion"
+}
+item {
+ name: "/m/01gmv2"
+ id: 254
+ display_name: "Brassiere"
+}
+item {
+ name: "/m/07bgp"
+ id: 255
+ display_name: "Sheep"
+}
+item {
+ name: "/m/0cmx8"
+ id: 256
+ display_name: "Spoon"
+}
+item {
+ name: "/m/029tx"
+ id: 257
+ display_name: "Dinosaur"
+}
+item {
+ name: "/m/073bxn"
+ id: 258
+ display_name: "Tripod"
+}
+item {
+ name: "/m/0bh9flk"
+ id: 259
+ display_name: "Tablet computer"
+}
+item {
+ name: "/m/06mf6"
+ id: 260
+ display_name: "Rabbit"
+}
+item {
+ name: "/m/06_fw"
+ id: 261
+ display_name: "Skateboard"
+}
+item {
+ name: "/m/078jl"
+ id: 262
+ display_name: "Snake"
+}
+item {
+ name: "/m/0fbdv"
+ id: 263
+ display_name: "Shellfish"
+}
+item {
+ name: "/m/0h23m"
+ id: 264
+ display_name: "Sparrow"
+}
+item {
+ name: "/m/014j1m"
+ id: 265
+ display_name: "Apple"
+}
+item {
+ name: "/m/03fwl"
+ id: 266
+ display_name: "Goat"
+}
+item {
+ name: "/m/02y6n"
+ id: 267
+ display_name: "French fries"
+}
+item {
+ name: "/m/06c7f7"
+ id: 268
+ display_name: "Lipstick"
+}
+item {
+ name: "/m/026qbn5"
+ id: 269
+ display_name: "studio couch"
+}
+item {
+ name: "/m/0cdn1"
+ id: 270
+ display_name: "Hamburger"
+}
+item {
+ name: "/m/07clx"
+ id: 271
+ display_name: "Tea"
+}
+item {
+ name: "/m/07cx4"
+ id: 272
+ display_name: "Telephone"
+}
+item {
+ name: "/m/03g8mr"
+ id: 273
+ display_name: "Baseball bat"
+}
+item {
+ name: "/m/0cnyhnx"
+ id: 274
+ display_name: "Bull"
+}
+item {
+ name: "/m/01b7fy"
+ id: 275
+ display_name: "Headphones"
+}
+item {
+ name: "/m/04gth"
+ id: 276
+ display_name: "Lavender"
+}
+item {
+ name: "/m/0cyfs"
+ id: 277
+ display_name: "Parachute"
+}
+item {
+ name: "/m/021mn"
+ id: 278
+ display_name: "Cookie"
+}
+item {
+ name: "/m/07dm6"
+ id: 279
+ display_name: "Tiger"
+}
+item {
+ name: "/m/0k1tl"
+ id: 280
+ display_name: "Pen"
+}
+item {
+ name: "/m/0dv9c"
+ id: 281
+ display_name: "Racket"
+}
+item {
+ name: "/m/0dt3t"
+ id: 282
+ display_name: "Fork"
+}
+item {
+ name: "/m/04yqq2"
+ id: 283
+ display_name: "Bust"
+}
+item {
+ name: "/m/01cmb2"
+ id: 284
+ display_name: "Miniskirt"
+}
+item {
+ name: "/m/0gd36"
+ id: 285
+ display_name: "Sea lion"
+}
+item {
+ name: "/m/033cnk"
+ id: 286
+ display_name: "Egg"
+}
+item {
+ name: "/m/06ncr"
+ id: 287
+ display_name: "Saxophone"
+}
+item {
+ name: "/m/03bk1"
+ id: 288
+ display_name: "Giraffe"
+}
+item {
+ name: "/m/0bjyj5"
+ id: 289
+ display_name: "Waste container"
+}
+item {
+ name: "/m/06__v"
+ id: 290
+ display_name: "Snowboard"
+}
+item {
+ name: "/m/0qmmr"
+ id: 291
+ display_name: "Wheelchair"
+}
+item {
+ name: "/m/01xgg_"
+ id: 292
+ display_name: "Medical equipment"
+}
+item {
+ name: "/m/0czz2"
+ id: 293
+ display_name: "Antelope"
+}
+item {
+ name: "/m/02l8p9"
+ id: 294
+ display_name: "Harbor seal"
+}
+item {
+ name: "/m/09g1w"
+ id: 295
+ display_name: "Toilet"
+}
+item {
+ name: "/m/0ll1f78"
+ id: 296
+ display_name: "Shrimp"
+}
+item {
+ name: "/m/0cyhj_"
+ id: 297
+ display_name: "Orange"
+}
+item {
+ name: "/m/0642b4"
+ id: 298
+ display_name: "Cupboard"
+}
+item {
+ name: "/m/0h8mzrc"
+ id: 299
+ display_name: "Wall clock"
+}
+item {
+ name: "/m/068zj"
+ id: 300
+ display_name: "Pig"
+}
+item {
+ name: "/m/02z51p"
+ id: 301
+ display_name: "Nightstand"
+}
+item {
+ name: "/m/0h8nr_l"
+ id: 302
+ display_name: "Bathroom accessory"
+}
+item {
+ name: "/m/0388q"
+ id: 303
+ display_name: "Grape"
+}
+item {
+ name: "/m/02hj4"
+ id: 304
+ display_name: "Dolphin"
+}
+item {
+ name: "/m/01jfsr"
+ id: 305
+ display_name: "Lantern"
+}
+item {
+ name: "/m/07gql"
+ id: 306
+ display_name: "Trumpet"
+}
+item {
+ name: "/m/0h8my_4"
+ id: 307
+ display_name: "Tennis racket"
+}
+item {
+ name: "/m/0n28_"
+ id: 308
+ display_name: "Crab"
+}
+item {
+ name: "/m/0120dh"
+ id: 309
+ display_name: "Sea turtle"
+}
+item {
+ name: "/m/020kz"
+ id: 310
+ display_name: "Cannon"
+}
+item {
+ name: "/m/0mkg"
+ id: 311
+ display_name: "Accordion"
+}
+item {
+ name: "/m/03c7gz"
+ id: 312
+ display_name: "Door handle"
+}
+item {
+ name: "/m/09k_b"
+ id: 313
+ display_name: "Lemon"
+}
+item {
+ name: "/m/031n1"
+ id: 314
+ display_name: "Foot"
+}
+item {
+ name: "/m/04rmv"
+ id: 315
+ display_name: "Mouse"
+}
+item {
+ name: "/m/084rd"
+ id: 316
+ display_name: "Wok"
+}
+item {
+ name: "/m/02rgn06"
+ id: 317
+ display_name: "Volleyball"
+}
+item {
+ name: "/m/05z55"
+ id: 318
+ display_name: "Pasta"
+}
+item {
+ name: "/m/01r546"
+ id: 319
+ display_name: "Earrings"
+}
+item {
+ name: "/m/09qck"
+ id: 320
+ display_name: "Banana"
+}
+item {
+ name: "/m/012w5l"
+ id: 321
+ display_name: "Ladder"
+}
+item {
+ name: "/m/01940j"
+ id: 322
+ display_name: "Backpack"
+}
+item {
+ name: "/m/09f_2"
+ id: 323
+ display_name: "Crocodile"
+}
+item {
+ name: "/m/02p3w7d"
+ id: 324
+ display_name: "Roller skates"
+}
+item {
+ name: "/m/057p5t"
+ id: 325
+ display_name: "Scoreboard"
+}
+item {
+ name: "/m/0d8zb"
+ id: 326
+ display_name: "Jellyfish"
+}
+item {
+ name: "/m/01nq26"
+ id: 327
+ display_name: "Sock"
+}
+item {
+ name: "/m/01x_v"
+ id: 328
+ display_name: "Camel"
+}
+item {
+ name: "/m/05gqfk"
+ id: 329
+ display_name: "Plastic bag"
+}
+item {
+ name: "/m/0cydv"
+ id: 330
+ display_name: "Caterpillar"
+}
+item {
+ name: "/m/07030"
+ id: 331
+ display_name: "Sushi"
+}
+item {
+ name: "/m/084zz"
+ id: 332
+ display_name: "Whale"
+}
+item {
+ name: "/m/0c29q"
+ id: 333
+ display_name: "Leopard"
+}
+item {
+ name: "/m/02zn6n"
+ id: 334
+ display_name: "Barrel"
+}
+item {
+ name: "/m/03tw93"
+ id: 335
+ display_name: "Fireplace"
+}
+item {
+ name: "/m/0fqt361"
+ id: 336
+ display_name: "Stool"
+}
+item {
+ name: "/m/0f9_l"
+ id: 337
+ display_name: "Snail"
+}
+item {
+ name: "/m/0gm28"
+ id: 338
+ display_name: "Candy"
+}
+item {
+ name: "/m/09rvcxw"
+ id: 339
+ display_name: "Rocket"
+}
+item {
+ name: "/m/01nkt"
+ id: 340
+ display_name: "Cheese"
+}
+item {
+ name: "/m/04p0qw"
+ id: 341
+ display_name: "Billiard table"
+}
+item {
+ name: "/m/03hj559"
+ id: 342
+ display_name: "Mixing bowl"
+}
+item {
+ name: "/m/07pj7bq"
+ id: 343
+ display_name: "Bowling equipment"
+}
+item {
+ name: "/m/04ctx"
+ id: 344
+ display_name: "Knife"
+}
+item {
+ name: "/m/0703r8"
+ id: 345
+ display_name: "Loveseat"
+}
+item {
+ name: "/m/03qrc"
+ id: 346
+ display_name: "Hamster"
+}
+item {
+ name: "/m/020lf"
+ id: 347
+ display_name: "Mouse"
+}
+item {
+ name: "/m/0by6g"
+ id: 348
+ display_name: "Shark"
+}
+item {
+ name: "/m/01fh4r"
+ id: 349
+ display_name: "Teapot"
+}
+item {
+ name: "/m/07c6l"
+ id: 350
+ display_name: "Trombone"
+}
+item {
+ name: "/m/03bj1"
+ id: 351
+ display_name: "Panda"
+}
+item {
+ name: "/m/0898b"
+ id: 352
+ display_name: "Zebra"
+}
+item {
+ name: "/m/02x984l"
+ id: 353
+ display_name: "Mechanical fan"
+}
+item {
+ name: "/m/0fj52s"
+ id: 354
+ display_name: "Carrot"
+}
+item {
+ name: "/m/0cd4d"
+ id: 355
+ display_name: "Cheetah"
+}
+item {
+ name: "/m/02068x"
+ id: 356
+ display_name: "Gondola"
+}
+item {
+ name: "/m/01vbnl"
+ id: 357
+ display_name: "Bidet"
+}
+item {
+ name: "/m/0449p"
+ id: 358
+ display_name: "Jaguar"
+}
+item {
+ name: "/m/0gj37"
+ id: 359
+ display_name: "Ladybug"
+}
+item {
+ name: "/m/0nl46"
+ id: 360
+ display_name: "Crown"
+}
+item {
+ name: "/m/0152hh"
+ id: 361
+ display_name: "Snowman"
+}
+item {
+ name: "/m/03dnzn"
+ id: 362
+ display_name: "Bathtub"
+}
+item {
+ name: "/m/05_5p_0"
+ id: 363
+ display_name: "Table tennis racket"
+}
+item {
+ name: "/m/02jfl0"
+ id: 364
+ display_name: "Sombrero"
+}
+item {
+ name: "/m/01dxs"
+ id: 365
+ display_name: "Brown bear"
+}
+item {
+ name: "/m/0cjq5"
+ id: 366
+ display_name: "Lobster"
+}
+item {
+ name: "/m/040b_t"
+ id: 367
+ display_name: "Refrigerator"
+}
+item {
+ name: "/m/0_cp5"
+ id: 368
+ display_name: "Oyster"
+}
+item {
+ name: "/m/0gxl3"
+ id: 369
+ display_name: "Handgun"
+}
+item {
+ name: "/m/029bxz"
+ id: 370
+ display_name: "Oven"
+}
+item {
+ name: "/m/02zt3"
+ id: 371
+ display_name: "Kite"
+}
+item {
+ name: "/m/03d443"
+ id: 372
+ display_name: "Rhinoceros"
+}
+item {
+ name: "/m/0306r"
+ id: 373
+ display_name: "Fox"
+}
+item {
+ name: "/m/0h8l4fh"
+ id: 374
+ display_name: "Light bulb"
+}
+item {
+ name: "/m/0633h"
+ id: 375
+ display_name: "Polar bear"
+}
+item {
+ name: "/m/01s55n"
+ id: 376
+ display_name: "Suitcase"
+}
+item {
+ name: "/m/0hkxq"
+ id: 377
+ display_name: "Broccoli"
+}
+item {
+ name: "/m/0cn6p"
+ id: 378
+ display_name: "Otter"
+}
+item {
+ name: "/m/0dbzx"
+ id: 379
+ display_name: "Mule"
+}
+item {
+ name: "/m/01dy8n"
+ id: 380
+ display_name: "Woodpecker"
+}
+item {
+ name: "/m/01h8tj"
+ id: 381
+ display_name: "Starfish"
+}
+item {
+ name: "/m/03s_tn"
+ id: 382
+ display_name: "Kettle"
+}
+item {
+ name: "/m/01xs3r"
+ id: 383
+ display_name: "Jet ski"
+}
+item {
+ name: "/m/031b6r"
+ id: 384
+ display_name: "Window blind"
+}
+item {
+ name: "/m/06j2d"
+ id: 385
+ display_name: "Raven"
+}
+item {
+ name: "/m/0hqkz"
+ id: 386
+ display_name: "Grapefruit"
+}
+item {
+ name: "/m/01_5g"
+ id: 387
+ display_name: "Chopsticks"
+}
+item {
+ name: "/m/02zvsm"
+ id: 388
+ display_name: "Tart"
+}
+item {
+ name: "/m/0kpqd"
+ id: 389
+ display_name: "Watermelon"
+}
+item {
+ name: "/m/015x4r"
+ id: 390
+ display_name: "Cucumber"
+}
+item {
+ name: "/m/061hd_"
+ id: 391
+ display_name: "Infant bed"
+}
+item {
+ name: "/m/04ylt"
+ id: 392
+ display_name: "Missile"
+}
+item {
+ name: "/m/02wv84t"
+ id: 393
+ display_name: "Gas stove"
+}
+item {
+ name: "/m/04y4h8h"
+ id: 394
+ display_name: "Bathroom cabinet"
+}
+item {
+ name: "/m/01gllr"
+ id: 395
+ display_name: "Beehive"
+}
+item {
+ name: "/m/0pcr"
+ id: 396
+ display_name: "Alpaca"
+}
+item {
+ name: "/m/0jy4k"
+ id: 397
+ display_name: "Doughnut"
+}
+item {
+ name: "/m/09f20"
+ id: 398
+ display_name: "Hippopotamus"
+}
+item {
+ name: "/m/0mcx2"
+ id: 399
+ display_name: "Ipod"
+}
+item {
+ name: "/m/04c0y"
+ id: 400
+ display_name: "Kangaroo"
+}
+item {
+ name: "/m/0_k2"
+ id: 401
+ display_name: "Ant"
+}
+item {
+ name: "/m/0jg57"
+ id: 402
+ display_name: "Bell pepper"
+}
+item {
+ name: "/m/03fj2"
+ id: 403
+ display_name: "Goldfish"
+}
+item {
+ name: "/m/03ldnb"
+ id: 404
+ display_name: "Ceiling fan"
+}
+item {
+ name: "/m/06nrc"
+ id: 405
+ display_name: "Shotgun"
+}
+item {
+ name: "/m/01btn"
+ id: 406
+ display_name: "Barge"
+}
+item {
+ name: "/m/05vtc"
+ id: 407
+ display_name: "Potato"
+}
+item {
+ name: "/m/08hvt4"
+ id: 408
+ display_name: "Jug"
+}
+item {
+ name: "/m/0fx9l"
+ id: 409
+ display_name: "Microwave oven"
+}
+item {
+ name: "/m/01h44"
+ id: 410
+ display_name: "Bat"
+}
+item {
+ name: "/m/05n4y"
+ id: 411
+ display_name: "Ostrich"
+}
+item {
+ name: "/m/0jly1"
+ id: 412
+ display_name: "Turkey"
+}
+item {
+ name: "/m/06y5r"
+ id: 413
+ display_name: "Sword"
+}
+item {
+ name: "/m/05ctyq"
+ id: 414
+ display_name: "Tennis ball"
+}
+item {
+ name: "/m/0fp6w"
+ id: 415
+ display_name: "Pineapple"
+}
+item {
+ name: "/m/0d4w1"
+ id: 416
+ display_name: "Closet"
+}
+item {
+ name: "/m/02pv19"
+ id: 417
+ display_name: "Stop sign"
+}
+item {
+ name: "/m/07crc"
+ id: 418
+ display_name: "Taco"
+}
+item {
+ name: "/m/01dwwc"
+ id: 419
+ display_name: "Pancake"
+}
+item {
+ name: "/m/01b9xk"
+ id: 420
+ display_name: "Hot dog"
+}
+item {
+ name: "/m/013y1f"
+ id: 421
+ display_name: "Organ"
+}
+item {
+ name: "/m/0m53l"
+ id: 422
+ display_name: "Rays and skates"
+}
+item {
+ name: "/m/0174k2"
+ id: 423
+ display_name: "Washing machine"
+}
+item {
+ name: "/m/01dwsz"
+ id: 424
+ display_name: "Waffle"
+}
+item {
+ name: "/m/04vv5k"
+ id: 425
+ display_name: "Snowplow"
+}
+item {
+ name: "/m/04cp_"
+ id: 426
+ display_name: "Koala"
+}
+item {
+ name: "/m/0fz0h"
+ id: 427
+ display_name: "Honeycomb"
+}
+item {
+ name: "/m/0llzx"
+ id: 428
+ display_name: "Sewing machine"
+}
+item {
+ name: "/m/0319l"
+ id: 429
+ display_name: "Horn"
+}
+item {
+ name: "/m/04v6l4"
+ id: 430
+ display_name: "Frying pan"
+}
+item {
+ name: "/m/0dkzw"
+ id: 431
+ display_name: "Seat belt"
+}
+item {
+ name: "/m/027pcv"
+ id: 432
+ display_name: "Zucchini"
+}
+item {
+ name: "/m/0323sq"
+ id: 433
+ display_name: "Golf cart"
+}
+item {
+ name: "/m/054fyh"
+ id: 434
+ display_name: "Pitcher"
+}
+item {
+ name: "/m/01pns0"
+ id: 435
+ display_name: "Fire hydrant"
+}
+item {
+ name: "/m/012n7d"
+ id: 436
+ display_name: "Ambulance"
+}
+item {
+ name: "/m/044r5d"
+ id: 437
+ display_name: "Golf ball"
+}
+item {
+ name: "/m/01krhy"
+ id: 438
+ display_name: "Tiara"
+}
+item {
+ name: "/m/0dq75"
+ id: 439
+ display_name: "Raccoon"
+}
+item {
+ name: "/m/0176mf"
+ id: 440
+ display_name: "Belt"
+}
+item {
+ name: "/m/0h8lkj8"
+ id: 441
+ display_name: "Corded phone"
+}
+item {
+ name: "/m/04tn4x"
+ id: 442
+ display_name: "Swim cap"
+}
+item {
+ name: "/m/06l9r"
+ id: 443
+ display_name: "Red panda"
+}
+item {
+ name: "/m/0cjs7"
+ id: 444
+ display_name: "Asparagus"
+}
+item {
+ name: "/m/01lsmm"
+ id: 445
+ display_name: "Scissors"
+}
+item {
+ name: "/m/01lcw4"
+ id: 446
+ display_name: "Limousine"
+}
+item {
+ name: "/m/047j0r"
+ id: 447
+ display_name: "Filing cabinet"
+}
+item {
+ name: "/m/01fb_0"
+ id: 448
+ display_name: "Bagel"
+}
+item {
+ name: "/m/04169hn"
+ id: 449
+ display_name: "Wood-burning stove"
+}
+item {
+ name: "/m/076bq"
+ id: 450
+ display_name: "Segway"
+}
+item {
+ name: "/m/0hdln"
+ id: 451
+ display_name: "Ruler"
+}
+item {
+ name: "/m/01g3x7"
+ id: 452
+ display_name: "Bow and arrow"
+}
+item {
+ name: "/m/0l3ms"
+ id: 453
+ display_name: "Balance beam"
+}
+item {
+ name: "/m/058qzx"
+ id: 454
+ display_name: "Kitchen knife"
+}
+item {
+ name: "/m/0h8n6ft"
+ id: 455
+ display_name: "Cake stand"
+}
+item {
+ name: "/m/018j2"
+ id: 456
+ display_name: "Banjo"
+}
+item {
+ name: "/m/0l14j_"
+ id: 457
+ display_name: "Flute"
+}
+item {
+ name: "/m/0wdt60w"
+ id: 458
+ display_name: "Rugby ball"
+}
+item {
+ name: "/m/02gzp"
+ id: 459
+ display_name: "Dagger"
+}
+item {
+ name: "/m/0h8n6f9"
+ id: 460
+ display_name: "Dog bed"
+}
+item {
+ name: "/m/0fbw6"
+ id: 461
+ display_name: "Cabbage"
+}
+item {
+ name: "/m/07kng9"
+ id: 462
+ display_name: "Picnic basket"
+}
+item {
+ name: "/m/0dj6p"
+ id: 463
+ display_name: "Peach"
+}
+item {
+ name: "/m/06pcq"
+ id: 464
+ display_name: "Submarine sandwich"
+}
+item {
+ name: "/m/061_f"
+ id: 465
+ display_name: "Pear"
+}
+item {
+ name: "/m/04g2r"
+ id: 466
+ display_name: "Lynx"
+}
+item {
+ name: "/m/0jwn_"
+ id: 467
+ display_name: "Pomegranate"
+}
+item {
+ name: "/m/02f9f_"
+ id: 468
+ display_name: "Shower"
+}
+item {
+ name: "/m/01f8m5"
+ id: 469
+ display_name: "Blue jay"
+}
+item {
+ name: "/m/01m4t"
+ id: 470
+ display_name: "Printer"
+}
+item {
+ name: "/m/0cl4p"
+ id: 471
+ display_name: "Hedgehog"
+}
+item {
+ name: "/m/07xyvk"
+ id: 472
+ display_name: "Coffeemaker"
+}
+item {
+ name: "/m/084hf"
+ id: 473
+ display_name: "Worm"
+}
+item {
+ name: "/m/03v5tg"
+ id: 474
+ display_name: "Drinking straw"
+}
+item {
+ name: "/m/0qjjc"
+ id: 475
+ display_name: "Remote control"
+}
+item {
+ name: "/m/015x5n"
+ id: 476
+ display_name: "Radish"
+}
+item {
+ name: "/m/0ccs93"
+ id: 477
+ display_name: "Canary"
+}
+item {
+ name: "/m/0nybt"
+ id: 478
+ display_name: "Seahorse"
+}
+item {
+ name: "/m/02vkqh8"
+ id: 479
+ display_name: "Wardrobe"
+}
+item {
+ name: "/m/09gtd"
+ id: 480
+ display_name: "Toilet paper"
+}
+item {
+ name: "/m/019h78"
+ id: 481
+ display_name: "Centipede"
+}
+item {
+ name: "/m/015wgc"
+ id: 482
+ display_name: "Croissant"
+}
+item {
+ name: "/m/01x3jk"
+ id: 483
+ display_name: "Snowmobile"
+}
+item {
+ name: "/m/01j3zr"
+ id: 484
+ display_name: "Burrito"
+}
+item {
+ name: "/m/0c568"
+ id: 485
+ display_name: "Porcupine"
+}
+item {
+ name: "/m/02pdsw"
+ id: 486
+ display_name: "Cutting board"
+}
+item {
+ name: "/m/029b3"
+ id: 487
+ display_name: "Dice"
+}
+item {
+ name: "/m/03q5t"
+ id: 488
+ display_name: "Harpsichord"
+}
+item {
+ name: "/m/0p833"
+ id: 489
+ display_name: "Perfume"
+}
+item {
+ name: "/m/01d380"
+ id: 490
+ display_name: "Drill"
+}
+item {
+ name: "/m/024d2"
+ id: 491
+ display_name: "Calculator"
+}
+item {
+ name: "/m/0mw_6"
+ id: 492
+ display_name: "Willow"
+}
+item {
+ name: "/m/01f91_"
+ id: 493
+ display_name: "Pretzel"
+}
+item {
+ name: "/m/02g30s"
+ id: 494
+ display_name: "Guacamole"
+}
+item {
+ name: "/m/01hrv5"
+ id: 495
+ display_name: "Popcorn"
+}
+item {
+ name: "/m/03m5k"
+ id: 496
+ display_name: "Harp"
+}
+item {
+ name: "/m/0162_1"
+ id: 497
+ display_name: "Towel"
+}
+item {
+ name: "/m/063rgb"
+ id: 498
+ display_name: "Mixer"
+}
+item {
+ name: "/m/06_72j"
+ id: 499
+ display_name: "Digital clock"
+}
+item {
+ name: "/m/046dlr"
+ id: 500
+ display_name: "Alarm clock"
+}
+item {
+ name: "/m/047v4b"
+ id: 501
+ display_name: "Artichoke"
+}
+item {
+ name: "/m/04zpv"
+ id: 502
+ display_name: "Milk"
+}
+item {
+ name: "/m/043nyj"
+ id: 503
+ display_name: "Common fig"
+}
+item {
+ name: "/m/03bbps"
+ id: 504
+ display_name: "Power plugs and sockets"
+}
+item {
+ name: "/m/02w3r3"
+ id: 505
+ display_name: "Paper towel"
+}
+item {
+ name: "/m/02pjr4"
+ id: 506
+ display_name: "Blender"
+}
+item {
+ name: "/m/0755b"
+ id: 507
+ display_name: "Scorpion"
+}
+item {
+ name: "/m/02lbcq"
+ id: 508
+ display_name: "Stretcher"
+}
+item {
+ name: "/m/0fldg"
+ id: 509
+ display_name: "Mango"
+}
+item {
+ name: "/m/012074"
+ id: 510
+ display_name: "Magpie"
+}
+item {
+ name: "/m/035vxb"
+ id: 511
+ display_name: "Isopod"
+}
+item {
+ name: "/m/02w3_ws"
+ id: 512
+ display_name: "Personal care"
+}
+item {
+ name: "/m/0f6nr"
+ id: 513
+ display_name: "Unicycle"
+}
+item {
+ name: "/m/0420v5"
+ id: 514
+ display_name: "Punching bag"
+}
+item {
+ name: "/m/0frqm"
+ id: 515
+ display_name: "Envelope"
+}
+item {
+ name: "/m/03txqz"
+ id: 516
+ display_name: "Scale"
+}
+item {
+ name: "/m/0271qf7"
+ id: 517
+ display_name: "Wine rack"
+}
+item {
+ name: "/m/074d1"
+ id: 518
+ display_name: "Submarine"
+}
+item {
+ name: "/m/08p92x"
+ id: 519
+ display_name: "Cream"
+}
+item {
+ name: "/m/01j4z9"
+ id: 520
+ display_name: "Chainsaw"
+}
+item {
+ name: "/m/0kpt_"
+ id: 521
+ display_name: "Cantaloupe"
+}
+item {
+ name: "/m/0h8n27j"
+ id: 522
+ display_name: "Serving tray"
+}
+item {
+ name: "/m/03y6mg"
+ id: 523
+ display_name: "Food processor"
+}
+item {
+ name: "/m/04h8sr"
+ id: 524
+ display_name: "Dumbbell"
+}
+item {
+ name: "/m/065h6l"
+ id: 525
+ display_name: "Jacuzzi"
+}
+item {
+ name: "/m/02tsc9"
+ id: 526
+ display_name: "Slow cooker"
+}
+item {
+ name: "/m/012ysf"
+ id: 527
+ display_name: "Syringe"
+}
+item {
+ name: "/m/0ky7b"
+ id: 528
+ display_name: "Dishwasher"
+}
+item {
+ name: "/m/02wg_p"
+ id: 529
+ display_name: "Tree house"
+}
+item {
+ name: "/m/0584n8"
+ id: 530
+ display_name: "Briefcase"
+}
+item {
+ name: "/m/03kt2w"
+ id: 531
+ display_name: "Stationary bicycle"
+}
+item {
+ name: "/m/05kms"
+ id: 532
+ display_name: "Oboe"
+}
+item {
+ name: "/m/030610"
+ id: 533
+ display_name: "Treadmill"
+}
+item {
+ name: "/m/0lt4_"
+ id: 534
+ display_name: "Binoculars"
+}
+item {
+ name: "/m/076lb9"
+ id: 535
+ display_name: "Bench"
+}
+item {
+ name: "/m/02ctlc"
+ id: 536
+ display_name: "Cricket ball"
+}
+item {
+ name: "/m/02x8cch"
+ id: 537
+ display_name: "Salt and pepper shakers"
+}
+item {
+ name: "/m/09gys"
+ id: 538
+ display_name: "Squid"
+}
+item {
+ name: "/m/03jbxj"
+ id: 539
+ display_name: "Light switch"
+}
+item {
+ name: "/m/012xff"
+ id: 540
+ display_name: "Toothbrush"
+}
+item {
+ name: "/m/0h8kx63"
+ id: 541
+ display_name: "Spice rack"
+}
+item {
+ name: "/m/073g6"
+ id: 542
+ display_name: "Stethoscope"
+}
+item {
+ name: "/m/02cvgx"
+ id: 543
+ display_name: "Winter melon"
+}
+item {
+ name: "/m/027rl48"
+ id: 544
+ display_name: "Ladle"
+}
+item {
+ name: "/m/01kb5b"
+ id: 545
+ display_name: "Flashlight"
+}
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/oid_object_detection_challenge_500_label_map.pbtxt" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/oid_object_detection_challenge_500_label_map.pbtxt"
new file mode 100644
index 00000000..044f6d4c
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/oid_object_detection_challenge_500_label_map.pbtxt"
@@ -0,0 +1,2500 @@
+item {
+ name: "/m/061hd_"
+ id: 1
+ display_name: "Infant bed"
+}
+item {
+ name: "/m/06m11"
+ id: 2
+ display_name: "Rose"
+}
+item {
+ name: "/m/03120"
+ id: 3
+ display_name: "Flag"
+}
+item {
+ name: "/m/01kb5b"
+ id: 4
+ display_name: "Flashlight"
+}
+item {
+ name: "/m/0120dh"
+ id: 5
+ display_name: "Sea turtle"
+}
+item {
+ name: "/m/0dv5r"
+ id: 6
+ display_name: "Camera"
+}
+item {
+ name: "/m/0jbk"
+ id: 7
+ display_name: "Animal"
+}
+item {
+ name: "/m/0174n1"
+ id: 8
+ display_name: "Glove"
+}
+item {
+ name: "/m/09f_2"
+ id: 9
+ display_name: "Crocodile"
+}
+item {
+ name: "/m/01xq0k1"
+ id: 10
+ display_name: "Cattle"
+}
+item {
+ name: "/m/03jm5"
+ id: 11
+ display_name: "House"
+}
+item {
+ name: "/m/02g30s"
+ id: 12
+ display_name: "Guacamole"
+}
+item {
+ name: "/m/05z6w"
+ id: 13
+ display_name: "Penguin"
+}
+item {
+ name: "/m/01jfm_"
+ id: 14
+ display_name: "Vehicle registration plate"
+}
+item {
+ name: "/m/076lb9"
+ id: 15
+ display_name: "Training bench"
+}
+item {
+ name: "/m/0gj37"
+ id: 16
+ display_name: "Ladybug"
+}
+item {
+ name: "/m/0k0pj"
+ id: 17
+ display_name: "Human nose"
+}
+item {
+ name: "/m/0kpqd"
+ id: 18
+ display_name: "Watermelon"
+}
+item {
+ name: "/m/0l14j_"
+ id: 19
+ display_name: "Flute"
+}
+item {
+ name: "/m/0cyf8"
+ id: 20
+ display_name: "Butterfly"
+}
+item {
+ name: "/m/0174k2"
+ id: 21
+ display_name: "Washing machine"
+}
+item {
+ name: "/m/0dq75"
+ id: 22
+ display_name: "Raccoon"
+}
+item {
+ name: "/m/076bq"
+ id: 23
+ display_name: "Segway"
+}
+item {
+ name: "/m/07crc"
+ id: 24
+ display_name: "Taco"
+}
+item {
+ name: "/m/0d8zb"
+ id: 25
+ display_name: "Jellyfish"
+}
+item {
+ name: "/m/0fszt"
+ id: 26
+ display_name: "Cake"
+}
+item {
+ name: "/m/0k1tl"
+ id: 27
+ display_name: "Pen"
+}
+item {
+ name: "/m/020kz"
+ id: 28
+ display_name: "Cannon"
+}
+item {
+ name: "/m/09728"
+ id: 29
+ display_name: "Bread"
+}
+item {
+ name: "/m/07j7r"
+ id: 30
+ display_name: "Tree"
+}
+item {
+ name: "/m/0fbdv"
+ id: 31
+ display_name: "Shellfish"
+}
+item {
+ name: "/m/03ssj5"
+ id: 32
+ display_name: "Bed"
+}
+item {
+ name: "/m/03qrc"
+ id: 33
+ display_name: "Hamster"
+}
+item {
+ name: "/m/02dl1y"
+ id: 34
+ display_name: "Hat"
+}
+item {
+ name: "/m/01k6s3"
+ id: 35
+ display_name: "Toaster"
+}
+item {
+ name: "/m/02jfl0"
+ id: 36
+ display_name: "Sombrero"
+}
+item {
+ name: "/m/01krhy"
+ id: 37
+ display_name: "Tiara"
+}
+item {
+ name: "/m/04kkgm"
+ id: 38
+ display_name: "Bowl"
+}
+item {
+ name: "/m/0ft9s"
+ id: 39
+ display_name: "Dragonfly"
+}
+item {
+ name: "/m/0d_2m"
+ id: 40
+ display_name: "Moths and butterflies"
+}
+item {
+ name: "/m/0czz2"
+ id: 41
+ display_name: "Antelope"
+}
+item {
+ name: "/m/0f4s2w"
+ id: 42
+ display_name: "Vegetable"
+}
+item {
+ name: "/m/07dd4"
+ id: 43
+ display_name: "Torch"
+}
+item {
+ name: "/m/0cgh4"
+ id: 44
+ display_name: "Building"
+}
+item {
+ name: "/m/03bbps"
+ id: 45
+ display_name: "Power plugs and sockets"
+}
+item {
+ name: "/m/02pjr4"
+ id: 46
+ display_name: "Blender"
+}
+item {
+ name: "/m/04p0qw"
+ id: 47
+ display_name: "Billiard table"
+}
+item {
+ name: "/m/02pdsw"
+ id: 48
+ display_name: "Cutting board"
+}
+item {
+ name: "/m/01yx86"
+ id: 49
+ display_name: "Bronze sculpture"
+}
+item {
+ name: "/m/09dzg"
+ id: 50
+ display_name: "Turtle"
+}
+item {
+ name: "/m/0hkxq"
+ id: 51
+ display_name: "Broccoli"
+}
+item {
+ name: "/m/07dm6"
+ id: 52
+ display_name: "Tiger"
+}
+item {
+ name: "/m/054_l"
+ id: 53
+ display_name: "Mirror"
+}
+item {
+ name: "/m/01dws"
+ id: 54
+ display_name: "Bear"
+}
+item {
+ name: "/m/027pcv"
+ id: 55
+ display_name: "Zucchini"
+}
+item {
+ name: "/m/01d40f"
+ id: 56
+ display_name: "Dress"
+}
+item {
+ name: "/m/02rgn06"
+ id: 57
+ display_name: "Volleyball"
+}
+item {
+ name: "/m/0342h"
+ id: 58
+ display_name: "Guitar"
+}
+item {
+ name: "/m/06bt6"
+ id: 59
+ display_name: "Reptile"
+}
+item {
+ name: "/m/0323sq"
+ id: 60
+ display_name: "Golf cart"
+}
+item {
+ name: "/m/02zvsm"
+ id: 61
+ display_name: "Tart"
+}
+item {
+ name: "/m/02fq_6"
+ id: 62
+ display_name: "Fedora"
+}
+item {
+ name: "/m/01lrl"
+ id: 63
+ display_name: "Carnivore"
+}
+item {
+ name: "/m/0k4j"
+ id: 64
+ display_name: "Car"
+}
+item {
+ name: "/m/04h7h"
+ id: 65
+ display_name: "Lighthouse"
+}
+item {
+ name: "/m/07xyvk"
+ id: 66
+ display_name: "Coffeemaker"
+}
+item {
+ name: "/m/03y6mg"
+ id: 67
+ display_name: "Food processor"
+}
+item {
+ name: "/m/07r04"
+ id: 68
+ display_name: "Truck"
+}
+item {
+ name: "/m/03__z0"
+ id: 69
+ display_name: "Bookcase"
+}
+item {
+ name: "/m/019w40"
+ id: 70
+ display_name: "Surfboard"
+}
+item {
+ name: "/m/09j5n"
+ id: 71
+ display_name: "Footwear"
+}
+item {
+ name: "/m/0cvnqh"
+ id: 72
+ display_name: "Bench"
+}
+item {
+ name: "/m/01llwg"
+ id: 73
+ display_name: "Necklace"
+}
+item {
+ name: "/m/0c9ph5"
+ id: 74
+ display_name: "Flower"
+}
+item {
+ name: "/m/015x5n"
+ id: 75
+ display_name: "Radish"
+}
+item {
+ name: "/m/0gd2v"
+ id: 76
+ display_name: "Marine mammal"
+}
+item {
+ name: "/m/04v6l4"
+ id: 77
+ display_name: "Frying pan"
+}
+item {
+ name: "/m/02jz0l"
+ id: 78
+ display_name: "Tap"
+}
+item {
+ name: "/m/0dj6p"
+ id: 79
+ display_name: "Peach"
+}
+item {
+ name: "/m/04ctx"
+ id: 80
+ display_name: "Knife"
+}
+item {
+ name: "/m/080hkjn"
+ id: 81
+ display_name: "Handbag"
+}
+item {
+ name: "/m/01c648"
+ id: 82
+ display_name: "Laptop"
+}
+item {
+ name: "/m/01j61q"
+ id: 83
+ display_name: "Tent"
+}
+item {
+ name: "/m/012n7d"
+ id: 84
+ display_name: "Ambulance"
+}
+item {
+ name: "/m/025nd"
+ id: 85
+ display_name: "Christmas tree"
+}
+item {
+ name: "/m/09csl"
+ id: 86
+ display_name: "Eagle"
+}
+item {
+ name: "/m/01lcw4"
+ id: 87
+ display_name: "Limousine"
+}
+item {
+ name: "/m/0h8n5zk"
+ id: 88
+ display_name: "Kitchen & dining room table"
+}
+item {
+ name: "/m/0633h"
+ id: 89
+ display_name: "Polar bear"
+}
+item {
+ name: "/m/01fdzj"
+ id: 90
+ display_name: "Tower"
+}
+item {
+ name: "/m/01226z"
+ id: 91
+ display_name: "Football"
+}
+item {
+ name: "/m/0mw_6"
+ id: 92
+ display_name: "Willow"
+}
+item {
+ name: "/m/04hgtk"
+ id: 93
+ display_name: "Human head"
+}
+item {
+ name: "/m/02pv19"
+ id: 94
+ display_name: "Stop sign"
+}
+item {
+ name: "/m/09qck"
+ id: 95
+ display_name: "Banana"
+}
+item {
+ name: "/m/063rgb"
+ id: 96
+ display_name: "Mixer"
+}
+item {
+ name: "/m/0lt4_"
+ id: 97
+ display_name: "Binoculars"
+}
+item {
+ name: "/m/0270h"
+ id: 98
+ display_name: "Dessert"
+}
+item {
+ name: "/m/01h3n"
+ id: 99
+ display_name: "Bee"
+}
+item {
+ name: "/m/01mzpv"
+ id: 100
+ display_name: "Chair"
+}
+item {
+ name: "/m/04169hn"
+ id: 101
+ display_name: "Wood-burning stove"
+}
+item {
+ name: "/m/0fm3zh"
+ id: 102
+ display_name: "Flowerpot"
+}
+item {
+ name: "/m/0d20w4"
+ id: 103
+ display_name: "Beaker"
+}
+item {
+ name: "/m/0_cp5"
+ id: 104
+ display_name: "Oyster"
+}
+item {
+ name: "/m/01dy8n"
+ id: 105
+ display_name: "Woodpecker"
+}
+item {
+ name: "/m/03m5k"
+ id: 106
+ display_name: "Harp"
+}
+item {
+ name: "/m/03dnzn"
+ id: 107
+ display_name: "Bathtub"
+}
+item {
+ name: "/m/0h8mzrc"
+ id: 108
+ display_name: "Wall clock"
+}
+item {
+ name: "/m/0h8mhzd"
+ id: 109
+ display_name: "Sports uniform"
+}
+item {
+ name: "/m/03d443"
+ id: 110
+ display_name: "Rhinoceros"
+}
+item {
+ name: "/m/01gllr"
+ id: 111
+ display_name: "Beehive"
+}
+item {
+ name: "/m/0642b4"
+ id: 112
+ display_name: "Cupboard"
+}
+item {
+ name: "/m/09b5t"
+ id: 113
+ display_name: "Chicken"
+}
+item {
+ name: "/m/04yx4"
+ id: 114
+ display_name: "Man"
+}
+item {
+ name: "/m/01f8m5"
+ id: 115
+ display_name: "Blue jay"
+}
+item {
+ name: "/m/015x4r"
+ id: 116
+ display_name: "Cucumber"
+}
+item {
+ name: "/m/01j51"
+ id: 117
+ display_name: "Balloon"
+}
+item {
+ name: "/m/02zt3"
+ id: 118
+ display_name: "Kite"
+}
+item {
+ name: "/m/03tw93"
+ id: 119
+ display_name: "Fireplace"
+}
+item {
+ name: "/m/01jfsr"
+ id: 120
+ display_name: "Lantern"
+}
+item {
+ name: "/m/04ylt"
+ id: 121
+ display_name: "Missile"
+}
+item {
+ name: "/m/0bt_c3"
+ id: 122
+ display_name: "Book"
+}
+item {
+ name: "/m/0cmx8"
+ id: 123
+ display_name: "Spoon"
+}
+item {
+ name: "/m/0hqkz"
+ id: 124
+ display_name: "Grapefruit"
+}
+item {
+ name: "/m/071qp"
+ id: 125
+ display_name: "Squirrel"
+}
+item {
+ name: "/m/0cyhj_"
+ id: 126
+ display_name: "Orange"
+}
+item {
+ name: "/m/01xygc"
+ id: 127
+ display_name: "Coat"
+}
+item {
+ name: "/m/0420v5"
+ id: 128
+ display_name: "Punching bag"
+}
+item {
+ name: "/m/0898b"
+ id: 129
+ display_name: "Zebra"
+}
+item {
+ name: "/m/01knjb"
+ id: 130
+ display_name: "Billboard"
+}
+item {
+ name: "/m/0199g"
+ id: 131
+ display_name: "Bicycle"
+}
+item {
+ name: "/m/03c7gz"
+ id: 132
+ display_name: "Door handle"
+}
+item {
+ name: "/m/02x984l"
+ id: 133
+ display_name: "Mechanical fan"
+}
+item {
+ name: "/m/04zwwv"
+ id: 134
+ display_name: "Ring binder"
+}
+item {
+ name: "/m/04bcr3"
+ id: 135
+ display_name: "Table"
+}
+item {
+ name: "/m/0gv1x"
+ id: 136
+ display_name: "Parrot"
+}
+item {
+ name: "/m/01nq26"
+ id: 137
+ display_name: "Sock"
+}
+item {
+ name: "/m/02s195"
+ id: 138
+ display_name: "Vase"
+}
+item {
+ name: "/m/083kb"
+ id: 139
+ display_name: "Weapon"
+}
+item {
+ name: "/m/06nrc"
+ id: 140
+ display_name: "Shotgun"
+}
+item {
+ name: "/m/0jyfg"
+ id: 141
+ display_name: "Glasses"
+}
+item {
+ name: "/m/0nybt"
+ id: 142
+ display_name: "Seahorse"
+}
+item {
+ name: "/m/0176mf"
+ id: 143
+ display_name: "Belt"
+}
+item {
+ name: "/m/01rzcn"
+ id: 144
+ display_name: "Watercraft"
+}
+item {
+ name: "/m/0d4v4"
+ id: 145
+ display_name: "Window"
+}
+item {
+ name: "/m/03bk1"
+ id: 146
+ display_name: "Giraffe"
+}
+item {
+ name: "/m/096mb"
+ id: 147
+ display_name: "Lion"
+}
+item {
+ name: "/m/0h9mv"
+ id: 148
+ display_name: "Tire"
+}
+item {
+ name: "/m/07yv9"
+ id: 149
+ display_name: "Vehicle"
+}
+item {
+ name: "/m/0ph39"
+ id: 150
+ display_name: "Canoe"
+}
+item {
+ name: "/m/01rkbr"
+ id: 151
+ display_name: "Tie"
+}
+item {
+ name: "/m/0gjbg72"
+ id: 152
+ display_name: "Shelf"
+}
+item {
+ name: "/m/06z37_"
+ id: 153
+ display_name: "Picture frame"
+}
+item {
+ name: "/m/01m4t"
+ id: 154
+ display_name: "Printer"
+}
+item {
+ name: "/m/035r7c"
+ id: 155
+ display_name: "Human leg"
+}
+item {
+ name: "/m/019jd"
+ id: 156
+ display_name: "Boat"
+}
+item {
+ name: "/m/02tsc9"
+ id: 157
+ display_name: "Slow cooker"
+}
+item {
+ name: "/m/015wgc"
+ id: 158
+ display_name: "Croissant"
+}
+item {
+ name: "/m/0c06p"
+ id: 159
+ display_name: "Candle"
+}
+item {
+ name: "/m/01dwwc"
+ id: 160
+ display_name: "Pancake"
+}
+item {
+ name: "/m/034c16"
+ id: 161
+ display_name: "Pillow"
+}
+item {
+ name: "/m/0242l"
+ id: 162
+ display_name: "Coin"
+}
+item {
+ name: "/m/02lbcq"
+ id: 163
+ display_name: "Stretcher"
+}
+item {
+ name: "/m/03nfch"
+ id: 164
+ display_name: "Sandal"
+}
+item {
+ name: "/m/03bt1vf"
+ id: 165
+ display_name: "Woman"
+}
+item {
+ name: "/m/01lynh"
+ id: 166
+ display_name: "Stairs"
+}
+item {
+ name: "/m/03q5t"
+ id: 167
+ display_name: "Harpsichord"
+}
+item {
+ name: "/m/0fqt361"
+ id: 168
+ display_name: "Stool"
+}
+item {
+ name: "/m/01bjv"
+ id: 169
+ display_name: "Bus"
+}
+item {
+ name: "/m/01s55n"
+ id: 170
+ display_name: "Suitcase"
+}
+item {
+ name: "/m/0283dt1"
+ id: 171
+ display_name: "Human mouth"
+}
+item {
+ name: "/m/01z1kdw"
+ id: 172
+ display_name: "Juice"
+}
+item {
+ name: "/m/016m2d"
+ id: 173
+ display_name: "Skull"
+}
+item {
+ name: "/m/02dgv"
+ id: 174
+ display_name: "Door"
+}
+item {
+ name: "/m/07y_7"
+ id: 175
+ display_name: "Violin"
+}
+item {
+ name: "/m/01_5g"
+ id: 176
+ display_name: "Chopsticks"
+}
+item {
+ name: "/m/06_72j"
+ id: 177
+ display_name: "Digital clock"
+}
+item {
+ name: "/m/0ftb8"
+ id: 178
+ display_name: "Sunflower"
+}
+item {
+ name: "/m/0c29q"
+ id: 179
+ display_name: "Leopard"
+}
+item {
+ name: "/m/0jg57"
+ id: 180
+ display_name: "Bell pepper"
+}
+item {
+ name: "/m/02l8p9"
+ id: 181
+ display_name: "Harbor seal"
+}
+item {
+ name: "/m/078jl"
+ id: 182
+ display_name: "Snake"
+}
+item {
+ name: "/m/0llzx"
+ id: 183
+ display_name: "Sewing machine"
+}
+item {
+ name: "/m/0dbvp"
+ id: 184
+ display_name: "Goose"
+}
+item {
+ name: "/m/09ct_"
+ id: 185
+ display_name: "Helicopter"
+}
+item {
+ name: "/m/0dkzw"
+ id: 186
+ display_name: "Seat belt"
+}
+item {
+ name: "/m/02p5f1q"
+ id: 187
+ display_name: "Coffee cup"
+}
+item {
+ name: "/m/0fx9l"
+ id: 188
+ display_name: "Microwave oven"
+}
+item {
+ name: "/m/01b9xk"
+ id: 189
+ display_name: "Hot dog"
+}
+item {
+ name: "/m/0b3fp9"
+ id: 190
+ display_name: "Countertop"
+}
+item {
+ name: "/m/0h8n27j"
+ id: 191
+ display_name: "Serving tray"
+}
+item {
+ name: "/m/0h8n6f9"
+ id: 192
+ display_name: "Dog bed"
+}
+item {
+ name: "/m/01599"
+ id: 193
+ display_name: "Beer"
+}
+item {
+ name: "/m/017ftj"
+ id: 194
+ display_name: "Sunglasses"
+}
+item {
+ name: "/m/044r5d"
+ id: 195
+ display_name: "Golf ball"
+}
+item {
+ name: "/m/01dwsz"
+ id: 196
+ display_name: "Waffle"
+}
+item {
+ name: "/m/0cdl1"
+ id: 197
+ display_name: "Palm tree"
+}
+item {
+ name: "/m/07gql"
+ id: 198
+ display_name: "Trumpet"
+}
+item {
+ name: "/m/0hdln"
+ id: 199
+ display_name: "Ruler"
+}
+item {
+ name: "/m/0zvk5"
+ id: 200
+ display_name: "Helmet"
+}
+item {
+ name: "/m/012w5l"
+ id: 201
+ display_name: "Ladder"
+}
+item {
+ name: "/m/021sj1"
+ id: 202
+ display_name: "Office building"
+}
+item {
+ name: "/m/0bh9flk"
+ id: 203
+ display_name: "Tablet computer"
+}
+item {
+ name: "/m/09gtd"
+ id: 204
+ display_name: "Toilet paper"
+}
+item {
+ name: "/m/0jwn_"
+ id: 205
+ display_name: "Pomegranate"
+}
+item {
+ name: "/m/02wv6h6"
+ id: 206
+ display_name: "Skirt"
+}
+item {
+ name: "/m/02wv84t"
+ id: 207
+ display_name: "Gas stove"
+}
+item {
+ name: "/m/021mn"
+ id: 208
+ display_name: "Cookie"
+}
+item {
+ name: "/m/018p4k"
+ id: 209
+ display_name: "Cart"
+}
+item {
+ name: "/m/06j2d"
+ id: 210
+ display_name: "Raven"
+}
+item {
+ name: "/m/033cnk"
+ id: 211
+ display_name: "Egg"
+}
+item {
+ name: "/m/01j3zr"
+ id: 212
+ display_name: "Burrito"
+}
+item {
+ name: "/m/03fwl"
+ id: 213
+ display_name: "Goat"
+}
+item {
+ name: "/m/058qzx"
+ id: 214
+ display_name: "Kitchen knife"
+}
+item {
+ name: "/m/06_fw"
+ id: 215
+ display_name: "Skateboard"
+}
+item {
+ name: "/m/02x8cch"
+ id: 216
+ display_name: "Salt and pepper shakers"
+}
+item {
+ name: "/m/04g2r"
+ id: 217
+ display_name: "Lynx"
+}
+item {
+ name: "/m/01b638"
+ id: 218
+ display_name: "Boot"
+}
+item {
+ name: "/m/099ssp"
+ id: 219
+ display_name: "Platter"
+}
+item {
+ name: "/m/071p9"
+ id: 220
+ display_name: "Ski"
+}
+item {
+ name: "/m/01gkx_"
+ id: 221
+ display_name: "Swimwear"
+}
+item {
+ name: "/m/0b_rs"
+ id: 222
+ display_name: "Swimming pool"
+}
+item {
+ name: "/m/03v5tg"
+ id: 223
+ display_name: "Drinking straw"
+}
+item {
+ name: "/m/01j5ks"
+ id: 224
+ display_name: "Wrench"
+}
+item {
+ name: "/m/026t6"
+ id: 225
+ display_name: "Drum"
+}
+item {
+ name: "/m/0_k2"
+ id: 226
+ display_name: "Ant"
+}
+item {
+ name: "/m/039xj_"
+ id: 227
+ display_name: "Human ear"
+}
+item {
+ name: "/m/01b7fy"
+ id: 228
+ display_name: "Headphones"
+}
+item {
+ name: "/m/0220r2"
+ id: 229
+ display_name: "Fountain"
+}
+item {
+ name: "/m/015p6"
+ id: 230
+ display_name: "Bird"
+}
+item {
+ name: "/m/0fly7"
+ id: 231
+ display_name: "Jeans"
+}
+item {
+ name: "/m/07c52"
+ id: 232
+ display_name: "Television"
+}
+item {
+ name: "/m/0n28_"
+ id: 233
+ display_name: "Crab"
+}
+item {
+ name: "/m/0hg7b"
+ id: 234
+ display_name: "Microphone"
+}
+item {
+ name: "/m/019dx1"
+ id: 235
+ display_name: "Home appliance"
+}
+item {
+ name: "/m/04vv5k"
+ id: 236
+ display_name: "Snowplow"
+}
+item {
+ name: "/m/020jm"
+ id: 237
+ display_name: "Beetle"
+}
+item {
+ name: "/m/047v4b"
+ id: 238
+ display_name: "Artichoke"
+}
+item {
+ name: "/m/01xs3r"
+ id: 239
+ display_name: "Jet ski"
+}
+item {
+ name: "/m/03kt2w"
+ id: 240
+ display_name: "Stationary bicycle"
+}
+item {
+ name: "/m/03q69"
+ id: 241
+ display_name: "Human hair"
+}
+item {
+ name: "/m/01dxs"
+ id: 242
+ display_name: "Brown bear"
+}
+item {
+ name: "/m/01h8tj"
+ id: 243
+ display_name: "Starfish"
+}
+item {
+ name: "/m/0dt3t"
+ id: 244
+ display_name: "Fork"
+}
+item {
+ name: "/m/0cjq5"
+ id: 245
+ display_name: "Lobster"
+}
+item {
+ name: "/m/0h8lkj8"
+ id: 246
+ display_name: "Corded phone"
+}
+item {
+ name: "/m/0271t"
+ id: 247
+ display_name: "Drink"
+}
+item {
+ name: "/m/03q5c7"
+ id: 248
+ display_name: "Saucer"
+}
+item {
+ name: "/m/0fj52s"
+ id: 249
+ display_name: "Carrot"
+}
+item {
+ name: "/m/03vt0"
+ id: 250
+ display_name: "Insect"
+}
+item {
+ name: "/m/01x3z"
+ id: 251
+ display_name: "Clock"
+}
+item {
+ name: "/m/0d5gx"
+ id: 252
+ display_name: "Castle"
+}
+item {
+ name: "/m/0h8my_4"
+ id: 253
+ display_name: "Tennis racket"
+}
+item {
+ name: "/m/03ldnb"
+ id: 254
+ display_name: "Ceiling fan"
+}
+item {
+ name: "/m/0cjs7"
+ id: 255
+ display_name: "Asparagus"
+}
+item {
+ name: "/m/0449p"
+ id: 256
+ display_name: "Jaguar"
+}
+item {
+ name: "/m/04szw"
+ id: 257
+ display_name: "Musical instrument"
+}
+item {
+ name: "/m/07jdr"
+ id: 258
+ display_name: "Train"
+}
+item {
+ name: "/m/01yrx"
+ id: 259
+ display_name: "Cat"
+}
+item {
+ name: "/m/06c54"
+ id: 260
+ display_name: "Rifle"
+}
+item {
+ name: "/m/04h8sr"
+ id: 261
+ display_name: "Dumbbell"
+}
+item {
+ name: "/m/050k8"
+ id: 262
+ display_name: "Mobile phone"
+}
+item {
+ name: "/m/0pg52"
+ id: 263
+ display_name: "Taxi"
+}
+item {
+ name: "/m/02f9f_"
+ id: 264
+ display_name: "Shower"
+}
+item {
+ name: "/m/054fyh"
+ id: 265
+ display_name: "Pitcher"
+}
+item {
+ name: "/m/09k_b"
+ id: 266
+ display_name: "Lemon"
+}
+item {
+ name: "/m/03xxp"
+ id: 267
+ display_name: "Invertebrate"
+}
+item {
+ name: "/m/0jly1"
+ id: 268
+ display_name: "Turkey"
+}
+item {
+ name: "/m/06k2mb"
+ id: 269
+ display_name: "High heels"
+}
+item {
+ name: "/m/04yqq2"
+ id: 270
+ display_name: "Bust"
+}
+item {
+ name: "/m/0bwd_0j"
+ id: 271
+ display_name: "Elephant"
+}
+item {
+ name: "/m/02h19r"
+ id: 272
+ display_name: "Scarf"
+}
+item {
+ name: "/m/02zn6n"
+ id: 273
+ display_name: "Barrel"
+}
+item {
+ name: "/m/07c6l"
+ id: 274
+ display_name: "Trombone"
+}
+item {
+ name: "/m/05zsy"
+ id: 275
+ display_name: "Pumpkin"
+}
+item {
+ name: "/m/025dyy"
+ id: 276
+ display_name: "Box"
+}
+item {
+ name: "/m/07j87"
+ id: 277
+ display_name: "Tomato"
+}
+item {
+ name: "/m/09ld4"
+ id: 278
+ display_name: "Frog"
+}
+item {
+ name: "/m/01vbnl"
+ id: 279
+ display_name: "Bidet"
+}
+item {
+ name: "/m/0dzct"
+ id: 280
+ display_name: "Human face"
+}
+item {
+ name: "/m/03fp41"
+ id: 281
+ display_name: "Houseplant"
+}
+item {
+ name: "/m/0h2r6"
+ id: 282
+ display_name: "Van"
+}
+item {
+ name: "/m/0by6g"
+ id: 283
+ display_name: "Shark"
+}
+item {
+ name: "/m/0cxn2"
+ id: 284
+ display_name: "Ice cream"
+}
+item {
+ name: "/m/04tn4x"
+ id: 285
+ display_name: "Swim cap"
+}
+item {
+ name: "/m/0f6wt"
+ id: 286
+ display_name: "Falcon"
+}
+item {
+ name: "/m/05n4y"
+ id: 287
+ display_name: "Ostrich"
+}
+item {
+ name: "/m/0gxl3"
+ id: 288
+ display_name: "Handgun"
+}
+item {
+ name: "/m/02d9qx"
+ id: 289
+ display_name: "Whiteboard"
+}
+item {
+ name: "/m/04m9y"
+ id: 290
+ display_name: "Lizard"
+}
+item {
+ name: "/m/05z55"
+ id: 291
+ display_name: "Pasta"
+}
+item {
+ name: "/m/01x3jk"
+ id: 292
+ display_name: "Snowmobile"
+}
+item {
+ name: "/m/0h8l4fh"
+ id: 293
+ display_name: "Light bulb"
+}
+item {
+ name: "/m/031b6r"
+ id: 294
+ display_name: "Window blind"
+}
+item {
+ name: "/m/01tcjp"
+ id: 295
+ display_name: "Muffin"
+}
+item {
+ name: "/m/01f91_"
+ id: 296
+ display_name: "Pretzel"
+}
+item {
+ name: "/m/02522"
+ id: 297
+ display_name: "Computer monitor"
+}
+item {
+ name: "/m/0319l"
+ id: 298
+ display_name: "Horn"
+}
+item {
+ name: "/m/0c_jw"
+ id: 299
+ display_name: "Furniture"
+}
+item {
+ name: "/m/0l515"
+ id: 300
+ display_name: "Sandwich"
+}
+item {
+ name: "/m/0306r"
+ id: 301
+ display_name: "Fox"
+}
+item {
+ name: "/m/0crjs"
+ id: 302
+ display_name: "Convenience store"
+}
+item {
+ name: "/m/0ch_cf"
+ id: 303
+ display_name: "Fish"
+}
+item {
+ name: "/m/02xwb"
+ id: 304
+ display_name: "Fruit"
+}
+item {
+ name: "/m/01r546"
+ id: 305
+ display_name: "Earrings"
+}
+item {
+ name: "/m/03rszm"
+ id: 306
+ display_name: "Curtain"
+}
+item {
+ name: "/m/0388q"
+ id: 307
+ display_name: "Grape"
+}
+item {
+ name: "/m/03m3pdh"
+ id: 308
+ display_name: "Sofa bed"
+}
+item {
+ name: "/m/03k3r"
+ id: 309
+ display_name: "Horse"
+}
+item {
+ name: "/m/0hf58v5"
+ id: 310
+ display_name: "Luggage and bags"
+}
+item {
+ name: "/m/01y9k5"
+ id: 311
+ display_name: "Desk"
+}
+item {
+ name: "/m/05441v"
+ id: 312
+ display_name: "Crutch"
+}
+item {
+ name: "/m/03p3bw"
+ id: 313
+ display_name: "Bicycle helmet"
+}
+item {
+ name: "/m/0175cv"
+ id: 314
+ display_name: "Tick"
+}
+item {
+ name: "/m/0cmf2"
+ id: 315
+ display_name: "Airplane"
+}
+item {
+ name: "/m/0ccs93"
+ id: 316
+ display_name: "Canary"
+}
+item {
+ name: "/m/02d1br"
+ id: 317
+ display_name: "Spatula"
+}
+item {
+ name: "/m/0gjkl"
+ id: 318
+ display_name: "Watch"
+}
+item {
+ name: "/m/0jqgx"
+ id: 319
+ display_name: "Lily"
+}
+item {
+ name: "/m/0h99cwc"
+ id: 320
+ display_name: "Kitchen appliance"
+}
+item {
+ name: "/m/047j0r"
+ id: 321
+ display_name: "Filing cabinet"
+}
+item {
+ name: "/m/0k5j"
+ id: 322
+ display_name: "Aircraft"
+}
+item {
+ name: "/m/0h8n6ft"
+ id: 323
+ display_name: "Cake stand"
+}
+item {
+ name: "/m/0gm28"
+ id: 324
+ display_name: "Candy"
+}
+item {
+ name: "/m/0130jx"
+ id: 325
+ display_name: "Sink"
+}
+item {
+ name: "/m/04rmv"
+ id: 326
+ display_name: "Mouse"
+}
+item {
+ name: "/m/081qc"
+ id: 327
+ display_name: "Wine"
+}
+item {
+ name: "/m/0qmmr"
+ id: 328
+ display_name: "Wheelchair"
+}
+item {
+ name: "/m/03fj2"
+ id: 329
+ display_name: "Goldfish"
+}
+item {
+ name: "/m/040b_t"
+ id: 330
+ display_name: "Refrigerator"
+}
+item {
+ name: "/m/02y6n"
+ id: 331
+ display_name: "French fries"
+}
+item {
+ name: "/m/0fqfqc"
+ id: 332
+ display_name: "Drawer"
+}
+item {
+ name: "/m/030610"
+ id: 333
+ display_name: "Treadmill"
+}
+item {
+ name: "/m/07kng9"
+ id: 334
+ display_name: "Picnic basket"
+}
+item {
+ name: "/m/029b3"
+ id: 335
+ display_name: "Dice"
+}
+item {
+ name: "/m/0fbw6"
+ id: 336
+ display_name: "Cabbage"
+}
+item {
+ name: "/m/07qxg_"
+ id: 337
+ display_name: "Football helmet"
+}
+item {
+ name: "/m/068zj"
+ id: 338
+ display_name: "Pig"
+}
+item {
+ name: "/m/01g317"
+ id: 339
+ display_name: "Person"
+}
+item {
+ name: "/m/01bfm9"
+ id: 340
+ display_name: "Shorts"
+}
+item {
+ name: "/m/02068x"
+ id: 341
+ display_name: "Gondola"
+}
+item {
+ name: "/m/0fz0h"
+ id: 342
+ display_name: "Honeycomb"
+}
+item {
+ name: "/m/0jy4k"
+ id: 343
+ display_name: "Doughnut"
+}
+item {
+ name: "/m/05kyg_"
+ id: 344
+ display_name: "Chest of drawers"
+}
+item {
+ name: "/m/01prls"
+ id: 345
+ display_name: "Land vehicle"
+}
+item {
+ name: "/m/01h44"
+ id: 346
+ display_name: "Bat"
+}
+item {
+ name: "/m/08pbxl"
+ id: 347
+ display_name: "Monkey"
+}
+item {
+ name: "/m/02gzp"
+ id: 348
+ display_name: "Dagger"
+}
+item {
+ name: "/m/04brg2"
+ id: 349
+ display_name: "Tableware"
+}
+item {
+ name: "/m/031n1"
+ id: 350
+ display_name: "Human foot"
+}
+item {
+ name: "/m/02jvh9"
+ id: 351
+ display_name: "Mug"
+}
+item {
+ name: "/m/046dlr"
+ id: 352
+ display_name: "Alarm clock"
+}
+item {
+ name: "/m/0h8ntjv"
+ id: 353
+ display_name: "Pressure cooker"
+}
+item {
+ name: "/m/0k65p"
+ id: 354
+ display_name: "Human hand"
+}
+item {
+ name: "/m/011k07"
+ id: 355
+ display_name: "Tortoise"
+}
+item {
+ name: "/m/03grzl"
+ id: 356
+ display_name: "Baseball glove"
+}
+item {
+ name: "/m/06y5r"
+ id: 357
+ display_name: "Sword"
+}
+item {
+ name: "/m/061_f"
+ id: 358
+ display_name: "Pear"
+}
+item {
+ name: "/m/01cmb2"
+ id: 359
+ display_name: "Miniskirt"
+}
+item {
+ name: "/m/01mqdt"
+ id: 360
+ display_name: "Traffic sign"
+}
+item {
+ name: "/m/05r655"
+ id: 361
+ display_name: "Girl"
+}
+item {
+ name: "/m/02p3w7d"
+ id: 362
+ display_name: "Roller skates"
+}
+item {
+ name: "/m/029tx"
+ id: 363
+ display_name: "Dinosaur"
+}
+item {
+ name: "/m/04m6gz"
+ id: 364
+ display_name: "Porch"
+}
+item {
+ name: "/m/015h_t"
+ id: 365
+ display_name: "Human beard"
+}
+item {
+ name: "/m/06pcq"
+ id: 366
+ display_name: "Submarine sandwich"
+}
+item {
+ name: "/m/01bms0"
+ id: 367
+ display_name: "Screwdriver"
+}
+item {
+ name: "/m/07fbm7"
+ id: 368
+ display_name: "Strawberry"
+}
+item {
+ name: "/m/09tvcd"
+ id: 369
+ display_name: "Wine glass"
+}
+item {
+ name: "/m/06nwz"
+ id: 370
+ display_name: "Seafood"
+}
+item {
+ name: "/m/0dv9c"
+ id: 371
+ display_name: "Racket"
+}
+item {
+ name: "/m/083wq"
+ id: 372
+ display_name: "Wheel"
+}
+item {
+ name: "/m/0gd36"
+ id: 373
+ display_name: "Sea lion"
+}
+item {
+ name: "/m/0138tl"
+ id: 374
+ display_name: "Toy"
+}
+item {
+ name: "/m/07clx"
+ id: 375
+ display_name: "Tea"
+}
+item {
+ name: "/m/05ctyq"
+ id: 376
+ display_name: "Tennis ball"
+}
+item {
+ name: "/m/0bjyj5"
+ id: 377
+ display_name: "Waste container"
+}
+item {
+ name: "/m/0dbzx"
+ id: 378
+ display_name: "Mule"
+}
+item {
+ name: "/m/02ctlc"
+ id: 379
+ display_name: "Cricket ball"
+}
+item {
+ name: "/m/0fp6w"
+ id: 380
+ display_name: "Pineapple"
+}
+item {
+ name: "/m/0djtd"
+ id: 381
+ display_name: "Coconut"
+}
+item {
+ name: "/m/0167gd"
+ id: 382
+ display_name: "Doll"
+}
+item {
+ name: "/m/078n6m"
+ id: 383
+ display_name: "Coffee table"
+}
+item {
+ name: "/m/0152hh"
+ id: 384
+ display_name: "Snowman"
+}
+item {
+ name: "/m/04gth"
+ id: 385
+ display_name: "Lavender"
+}
+item {
+ name: "/m/0ll1f78"
+ id: 386
+ display_name: "Shrimp"
+}
+item {
+ name: "/m/0cffdh"
+ id: 387
+ display_name: "Maple"
+}
+item {
+ name: "/m/025rp__"
+ id: 388
+ display_name: "Cowboy hat"
+}
+item {
+ name: "/m/02_n6y"
+ id: 389
+ display_name: "Goggles"
+}
+item {
+ name: "/m/0wdt60w"
+ id: 390
+ display_name: "Rugby ball"
+}
+item {
+ name: "/m/0cydv"
+ id: 391
+ display_name: "Caterpillar"
+}
+item {
+ name: "/m/01n5jq"
+ id: 392
+ display_name: "Poster"
+}
+item {
+ name: "/m/09rvcxw"
+ id: 393
+ display_name: "Rocket"
+}
+item {
+ name: "/m/013y1f"
+ id: 394
+ display_name: "Organ"
+}
+item {
+ name: "/m/06ncr"
+ id: 395
+ display_name: "Saxophone"
+}
+item {
+ name: "/m/015qff"
+ id: 396
+ display_name: "Traffic light"
+}
+item {
+ name: "/m/024g6"
+ id: 397
+ display_name: "Cocktail"
+}
+item {
+ name: "/m/05gqfk"
+ id: 398
+ display_name: "Plastic bag"
+}
+item {
+ name: "/m/0dv77"
+ id: 399
+ display_name: "Squash"
+}
+item {
+ name: "/m/052sf"
+ id: 400
+ display_name: "Mushroom"
+}
+item {
+ name: "/m/0cdn1"
+ id: 401
+ display_name: "Hamburger"
+}
+item {
+ name: "/m/03jbxj"
+ id: 402
+ display_name: "Light switch"
+}
+item {
+ name: "/m/0cyfs"
+ id: 403
+ display_name: "Parachute"
+}
+item {
+ name: "/m/0kmg4"
+ id: 404
+ display_name: "Teddy bear"
+}
+item {
+ name: "/m/02cvgx"
+ id: 405
+ display_name: "Winter melon"
+}
+item {
+ name: "/m/09kx5"
+ id: 406
+ display_name: "Deer"
+}
+item {
+ name: "/m/057cc"
+ id: 407
+ display_name: "Musical keyboard"
+}
+item {
+ name: "/m/02pkr5"
+ id: 408
+ display_name: "Plumbing fixture"
+}
+item {
+ name: "/m/057p5t"
+ id: 409
+ display_name: "Scoreboard"
+}
+item {
+ name: "/m/03g8mr"
+ id: 410
+ display_name: "Baseball bat"
+}
+item {
+ name: "/m/0frqm"
+ id: 411
+ display_name: "Envelope"
+}
+item {
+ name: "/m/03m3vtv"
+ id: 412
+ display_name: "Adhesive tape"
+}
+item {
+ name: "/m/0584n8"
+ id: 413
+ display_name: "Briefcase"
+}
+item {
+ name: "/m/014y4n"
+ id: 414
+ display_name: "Paddle"
+}
+item {
+ name: "/m/01g3x7"
+ id: 415
+ display_name: "Bow and arrow"
+}
+item {
+ name: "/m/07cx4"
+ id: 416
+ display_name: "Telephone"
+}
+item {
+ name: "/m/07bgp"
+ id: 417
+ display_name: "Sheep"
+}
+item {
+ name: "/m/032b3c"
+ id: 418
+ display_name: "Jacket"
+}
+item {
+ name: "/m/01bl7v"
+ id: 419
+ display_name: "Boy"
+}
+item {
+ name: "/m/0663v"
+ id: 420
+ display_name: "Pizza"
+}
+item {
+ name: "/m/0cn6p"
+ id: 421
+ display_name: "Otter"
+}
+item {
+ name: "/m/02rdsp"
+ id: 422
+ display_name: "Office supplies"
+}
+item {
+ name: "/m/02crq1"
+ id: 423
+ display_name: "Couch"
+}
+item {
+ name: "/m/01xqw"
+ id: 424
+ display_name: "Cello"
+}
+item {
+ name: "/m/0cnyhnx"
+ id: 425
+ display_name: "Bull"
+}
+item {
+ name: "/m/01x_v"
+ id: 426
+ display_name: "Camel"
+}
+item {
+ name: "/m/018xm"
+ id: 427
+ display_name: "Ball"
+}
+item {
+ name: "/m/09ddx"
+ id: 428
+ display_name: "Duck"
+}
+item {
+ name: "/m/084zz"
+ id: 429
+ display_name: "Whale"
+}
+item {
+ name: "/m/01n4qj"
+ id: 430
+ display_name: "Shirt"
+}
+item {
+ name: "/m/07cmd"
+ id: 431
+ display_name: "Tank"
+}
+item {
+ name: "/m/04_sv"
+ id: 432
+ display_name: "Motorcycle"
+}
+item {
+ name: "/m/0mkg"
+ id: 433
+ display_name: "Accordion"
+}
+item {
+ name: "/m/09d5_"
+ id: 434
+ display_name: "Owl"
+}
+item {
+ name: "/m/0c568"
+ id: 435
+ display_name: "Porcupine"
+}
+item {
+ name: "/m/02wbtzl"
+ id: 436
+ display_name: "Sun hat"
+}
+item {
+ name: "/m/05bm6"
+ id: 437
+ display_name: "Nail"
+}
+item {
+ name: "/m/01lsmm"
+ id: 438
+ display_name: "Scissors"
+}
+item {
+ name: "/m/0dftk"
+ id: 439
+ display_name: "Swan"
+}
+item {
+ name: "/m/0dtln"
+ id: 440
+ display_name: "Lamp"
+}
+item {
+ name: "/m/0nl46"
+ id: 441
+ display_name: "Crown"
+}
+item {
+ name: "/m/05r5c"
+ id: 442
+ display_name: "Piano"
+}
+item {
+ name: "/m/06msq"
+ id: 443
+ display_name: "Sculpture"
+}
+item {
+ name: "/m/0cd4d"
+ id: 444
+ display_name: "Cheetah"
+}
+item {
+ name: "/m/05kms"
+ id: 445
+ display_name: "Oboe"
+}
+item {
+ name: "/m/02jnhm"
+ id: 446
+ display_name: "Tin can"
+}
+item {
+ name: "/m/0fldg"
+ id: 447
+ display_name: "Mango"
+}
+item {
+ name: "/m/073bxn"
+ id: 448
+ display_name: "Tripod"
+}
+item {
+ name: "/m/029bxz"
+ id: 449
+ display_name: "Oven"
+}
+item {
+ name: "/m/020lf"
+ id: 450
+ display_name: "Computer mouse"
+}
+item {
+ name: "/m/01btn"
+ id: 451
+ display_name: "Barge"
+}
+item {
+ name: "/m/02vqfm"
+ id: 452
+ display_name: "Coffee"
+}
+item {
+ name: "/m/06__v"
+ id: 453
+ display_name: "Snowboard"
+}
+item {
+ name: "/m/043nyj"
+ id: 454
+ display_name: "Common fig"
+}
+item {
+ name: "/m/0grw1"
+ id: 455
+ display_name: "Salad"
+}
+item {
+ name: "/m/03hl4l9"
+ id: 456
+ display_name: "Marine invertebrates"
+}
+item {
+ name: "/m/0hnnb"
+ id: 457
+ display_name: "Umbrella"
+}
+item {
+ name: "/m/04c0y"
+ id: 458
+ display_name: "Kangaroo"
+}
+item {
+ name: "/m/0dzf4"
+ id: 459
+ display_name: "Human arm"
+}
+item {
+ name: "/m/07v9_z"
+ id: 460
+ display_name: "Measuring cup"
+}
+item {
+ name: "/m/0f9_l"
+ id: 461
+ display_name: "Snail"
+}
+item {
+ name: "/m/0703r8"
+ id: 462
+ display_name: "Loveseat"
+}
+item {
+ name: "/m/01xyhv"
+ id: 463
+ display_name: "Suit"
+}
+item {
+ name: "/m/01fh4r"
+ id: 464
+ display_name: "Teapot"
+}
+item {
+ name: "/m/04dr76w"
+ id: 465
+ display_name: "Bottle"
+}
+item {
+ name: "/m/0pcr"
+ id: 466
+ display_name: "Alpaca"
+}
+item {
+ name: "/m/03s_tn"
+ id: 467
+ display_name: "Kettle"
+}
+item {
+ name: "/m/07mhn"
+ id: 468
+ display_name: "Trousers"
+}
+item {
+ name: "/m/01hrv5"
+ id: 469
+ display_name: "Popcorn"
+}
+item {
+ name: "/m/019h78"
+ id: 470
+ display_name: "Centipede"
+}
+item {
+ name: "/m/09kmb"
+ id: 471
+ display_name: "Spider"
+}
+item {
+ name: "/m/0h23m"
+ id: 472
+ display_name: "Sparrow"
+}
+item {
+ name: "/m/050gv4"
+ id: 473
+ display_name: "Plate"
+}
+item {
+ name: "/m/01fb_0"
+ id: 474
+ display_name: "Bagel"
+}
+item {
+ name: "/m/02w3_ws"
+ id: 475
+ display_name: "Personal care"
+}
+item {
+ name: "/m/014j1m"
+ id: 476
+ display_name: "Apple"
+}
+item {
+ name: "/m/01gmv2"
+ id: 477
+ display_name: "Brassiere"
+}
+item {
+ name: "/m/04y4h8h"
+ id: 478
+ display_name: "Bathroom cabinet"
+}
+item {
+ name: "/m/026qbn5"
+ id: 479
+ display_name: "Studio couch"
+}
+item {
+ name: "/m/01m2v"
+ id: 480
+ display_name: "Computer keyboard"
+}
+item {
+ name: "/m/05_5p_0"
+ id: 481
+ display_name: "Table tennis racket"
+}
+item {
+ name: "/m/07030"
+ id: 482
+ display_name: "Sushi"
+}
+item {
+ name: "/m/01s105"
+ id: 483
+ display_name: "Cabinetry"
+}
+item {
+ name: "/m/033rq4"
+ id: 484
+ display_name: "Street light"
+}
+item {
+ name: "/m/0162_1"
+ id: 485
+ display_name: "Towel"
+}
+item {
+ name: "/m/02z51p"
+ id: 486
+ display_name: "Nightstand"
+}
+item {
+ name: "/m/06mf6"
+ id: 487
+ display_name: "Rabbit"
+}
+item {
+ name: "/m/02hj4"
+ id: 488
+ display_name: "Dolphin"
+}
+item {
+ name: "/m/0bt9lr"
+ id: 489
+ display_name: "Dog"
+}
+item {
+ name: "/m/08hvt4"
+ id: 490
+ display_name: "Jug"
+}
+item {
+ name: "/m/084rd"
+ id: 491
+ display_name: "Wok"
+}
+item {
+ name: "/m/01pns0"
+ id: 492
+ display_name: "Fire hydrant"
+}
+item {
+ name: "/m/014sv8"
+ id: 493
+ display_name: "Human eye"
+}
+item {
+ name: "/m/079cl"
+ id: 494
+ display_name: "Skyscraper"
+}
+item {
+ name: "/m/01940j"
+ id: 495
+ display_name: "Backpack"
+}
+item {
+ name: "/m/05vtc"
+ id: 496
+ display_name: "Potato"
+}
+item {
+ name: "/m/02w3r3"
+ id: 497
+ display_name: "Paper towel"
+}
+item {
+ name: "/m/054xkw"
+ id: 498
+ display_name: "Lifejacket"
+}
+item {
+ name: "/m/01bqk0"
+ id: 499
+ display_name: "Bicycle wheel"
+}
+item {
+ name: "/m/09g1w"
+ id: 500
+ display_name: "Toilet"
+}
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/pascal_label_map.pbtxt" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/pascal_label_map.pbtxt"
new file mode 100644
index 00000000..c9e9e2af
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/pascal_label_map.pbtxt"
@@ -0,0 +1,99 @@
+item {
+ id: 1
+ name: 'aeroplane'
+}
+
+item {
+ id: 2
+ name: 'bicycle'
+}
+
+item {
+ id: 3
+ name: 'bird'
+}
+
+item {
+ id: 4
+ name: 'boat'
+}
+
+item {
+ id: 5
+ name: 'bottle'
+}
+
+item {
+ id: 6
+ name: 'bus'
+}
+
+item {
+ id: 7
+ name: 'car'
+}
+
+item {
+ id: 8
+ name: 'cat'
+}
+
+item {
+ id: 9
+ name: 'chair'
+}
+
+item {
+ id: 10
+ name: 'cow'
+}
+
+item {
+ id: 11
+ name: 'diningtable'
+}
+
+item {
+ id: 12
+ name: 'dog'
+}
+
+item {
+ id: 13
+ name: 'horse'
+}
+
+item {
+ id: 14
+ name: 'motorbike'
+}
+
+item {
+ id: 15
+ name: 'person'
+}
+
+item {
+ id: 16
+ name: 'pottedplant'
+}
+
+item {
+ id: 17
+ name: 'sheep'
+}
+
+item {
+ id: 18
+ name: 'sofa'
+}
+
+item {
+ id: 19
+ name: 'train'
+}
+
+item {
+ id: 20
+ name: 'tvmonitor'
+}
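
The PASCAL label map above is a `StringIntLabelMap` text proto with twenty `item` entries keyed by `name`. As a hedged illustration (not part of this diff), such a file can be read into a Python dict with `label_map_util`, the same helper the decoder added later in this diff relies on; the file path below is a placeholder:

```python
# Minimal sketch: load a label map .pbtxt into a name -> id dict.
# Assumes the file above is saved locally as 'pascal_label_map.pbtxt'
# (placeholder path, not a file created by this diff).
from object_detection.utils import label_map_util

label_map_path = 'pascal_label_map.pbtxt'  # placeholder path

# Maps the 'name' field of each item to its integer id.
name_to_id = label_map_util.get_label_map_dict(label_map_path)
print(name_to_id['dog'])  # -> 12
```
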
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/pet_label_map.pbtxt" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/pet_label_map.pbtxt"
new file mode 100644
index 00000000..54d7d351
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data/pet_label_map.pbtxt"
@@ -0,0 +1,184 @@
+item {
+ id: 1
+ name: 'Abyssinian'
+}
+
+item {
+ id: 2
+ name: 'american_bulldog'
+}
+
+item {
+ id: 3
+ name: 'american_pit_bull_terrier'
+}
+
+item {
+ id: 4
+ name: 'basset_hound'
+}
+
+item {
+ id: 5
+ name: 'beagle'
+}
+
+item {
+ id: 6
+ name: 'Bengal'
+}
+
+item {
+ id: 7
+ name: 'Birman'
+}
+
+item {
+ id: 8
+ name: 'Bombay'
+}
+
+item {
+ id: 9
+ name: 'boxer'
+}
+
+item {
+ id: 10
+ name: 'British_Shorthair'
+}
+
+item {
+ id: 11
+ name: 'chihuahua'
+}
+
+item {
+ id: 12
+ name: 'Egyptian_Mau'
+}
+
+item {
+ id: 13
+ name: 'english_cocker_spaniel'
+}
+
+item {
+ id: 14
+ name: 'english_setter'
+}
+
+item {
+ id: 15
+ name: 'german_shorthaired'
+}
+
+item {
+ id: 16
+ name: 'great_pyrenees'
+}
+
+item {
+ id: 17
+ name: 'havanese'
+}
+
+item {
+ id: 18
+ name: 'japanese_chin'
+}
+
+item {
+ id: 19
+ name: 'keeshond'
+}
+
+item {
+ id: 20
+ name: 'leonberger'
+}
+
+item {
+ id: 21
+ name: 'Maine_Coon'
+}
+
+item {
+ id: 22
+ name: 'miniature_pinscher'
+}
+
+item {
+ id: 23
+ name: 'newfoundland'
+}
+
+item {
+ id: 24
+ name: 'Persian'
+}
+
+item {
+ id: 25
+ name: 'pomeranian'
+}
+
+item {
+ id: 26
+ name: 'pug'
+}
+
+item {
+ id: 27
+ name: 'Ragdoll'
+}
+
+item {
+ id: 28
+ name: 'Russian_Blue'
+}
+
+item {
+ id: 29
+ name: 'saint_bernard'
+}
+
+item {
+ id: 30
+ name: 'samoyed'
+}
+
+item {
+ id: 31
+ name: 'scottish_terrier'
+}
+
+item {
+ id: 32
+ name: 'shiba_inu'
+}
+
+item {
+ id: 33
+ name: 'Siamese'
+}
+
+item {
+ id: 34
+ name: 'Sphynx'
+}
+
+item {
+ id: 35
+ name: 'staffordshire_bull_terrier'
+}
+
+item {
+ id: 36
+ name: 'wheaten_terrier'
+}
+
+item {
+ id: 37
+ name: 'yorkshire_terrier'
+}
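
The pet label map uses the breed string as the item `name`, which is also what `image/object/class/text` is expected to contain. Below is a minimal sketch (not part of this diff) of assembling a matching serialized `tf.train.Example`, using the same `dataset_util` helpers that appear in the decoder test further below; the image path is a placeholder:

```python
# Sketch: build a serialized tf.Example whose class text matches the pet label map.
# 'cat.jpg' is a placeholder image path, not a file from this repository.
import tensorflow as tf
from object_detection.utils import dataset_util

with tf.gfile.GFile('cat.jpg', 'rb') as f:   # placeholder path
    encoded_jpeg = f.read()

example = tf.train.Example(features=tf.train.Features(feature={
    'image/encoded': dataset_util.bytes_feature(encoded_jpeg),
    'image/format': dataset_util.bytes_feature('jpeg'.encode('utf8')),
    'image/object/bbox/ymin': dataset_util.float_list_feature([0.1]),
    'image/object/bbox/xmin': dataset_util.float_list_feature([0.1]),
    'image/object/bbox/ymax': dataset_util.float_list_feature([0.9]),
    'image/object/bbox/xmax': dataset_util.float_list_feature([0.9]),
    # Class text must match a 'name' entry in pet_label_map.pbtxt.
    'image/object/class/text': dataset_util.bytes_list_feature(
        ['Abyssinian'.encode('utf8')]),
})).SerializeToString()
```
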
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data_decoders/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data_decoders/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data_decoders/tf_example_decoder.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data_decoders/tf_example_decoder.py"
new file mode 100644
index 00000000..c844e3dc
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data_decoders/tf_example_decoder.py"
@@ -0,0 +1,466 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tensorflow Example proto decoder for object detection.
+
+A decoder to decode string tensors containing serialized tensorflow.Example
+protos for object detection.
+"""
+import tensorflow as tf
+
+from object_detection.core import data_decoder
+from object_detection.core import standard_fields as fields
+from object_detection.protos import input_reader_pb2
+from object_detection.utils import label_map_util
+
+slim_example_decoder = tf.contrib.slim.tfexample_decoder
+
+
+class _ClassTensorHandler(slim_example_decoder.Tensor):
+ """An ItemHandler to fetch class ids from class text."""
+
+ def __init__(self,
+ tensor_key,
+ label_map_proto_file,
+ shape_keys=None,
+ shape=None,
+ default_value=''):
+ """Initializes the LookupTensor handler.
+
+ Simply calls a vocabulary (most often, a label mapping) lookup.
+
+ Args:
+ tensor_key: the name of the `TFExample` feature to read the tensor from.
+ label_map_proto_file: File path to a text format LabelMapProto message
+ mapping class text to id.
+ shape_keys: Optional name or list of names of the TF-Example feature in
+ which the tensor shape is stored. If a list, then each corresponds to
+ one dimension of the shape.
+ shape: Optional output shape of the `Tensor`. If provided, the `Tensor` is
+ reshaped accordingly.
+ default_value: The value used when the `tensor_key` is not found in a
+ particular `TFExample`.
+
+ Raises:
+ ValueError: if both `shape_keys` and `shape` are specified.
+ """
+ name_to_id = label_map_util.get_label_map_dict(
+ label_map_proto_file, use_display_name=False)
+ # We use a default_value of -1, but we expect all labels to be contained
+ # in the label map.
+ name_to_id_table = tf.contrib.lookup.HashTable(
+ initializer=tf.contrib.lookup.KeyValueTensorInitializer(
+ keys=tf.constant(list(name_to_id.keys())),
+ values=tf.constant(list(name_to_id.values()), dtype=tf.int64)),
+ default_value=-1)
+ display_name_to_id = label_map_util.get_label_map_dict(
+ label_map_proto_file, use_display_name=True)
+ # We use a default_value of -1, but we expect all labels to be contained
+ # in the label map.
+ display_name_to_id_table = tf.contrib.lookup.HashTable(
+ initializer=tf.contrib.lookup.KeyValueTensorInitializer(
+ keys=tf.constant(list(display_name_to_id.keys())),
+ values=tf.constant(
+ list(display_name_to_id.values()), dtype=tf.int64)),
+ default_value=-1)
+
+ self._name_to_id_table = name_to_id_table
+ self._display_name_to_id_table = display_name_to_id_table
+ super(_ClassTensorHandler, self).__init__(tensor_key, shape_keys, shape,
+ default_value)
+
+ def tensors_to_item(self, keys_to_tensors):
+ unmapped_tensor = super(_ClassTensorHandler,
+ self).tensors_to_item(keys_to_tensors)
+ return tf.maximum(self._name_to_id_table.lookup(unmapped_tensor),
+ self._display_name_to_id_table.lookup(unmapped_tensor))
+
+
+class _BackupHandler(slim_example_decoder.ItemHandler):
+ """An ItemHandler that tries two ItemHandlers in order."""
+
+ def __init__(self, handler, backup):
+ """Initializes the BackupHandler handler.
+
+ If the first Handler's tensors_to_item returns a Tensor with no elements,
+ the second Handler is used.
+
+ Args:
+ handler: The primary ItemHandler.
+ backup: The backup ItemHandler.
+
+ Raises:
+ ValueError: if either is not an ItemHandler.
+ """
+ if not isinstance(handler, slim_example_decoder.ItemHandler):
+ raise ValueError('Primary handler is of type %s instead of ItemHandler' %
+ type(handler))
+ if not isinstance(backup, slim_example_decoder.ItemHandler):
+ raise ValueError(
+ 'Backup handler is of type %s instead of ItemHandler' % type(backup))
+ self._handler = handler
+ self._backup = backup
+ super(_BackupHandler, self).__init__(handler.keys + backup.keys)
+
+ def tensors_to_item(self, keys_to_tensors):
+ item = self._handler.tensors_to_item(keys_to_tensors)
+ return tf.cond(
+ pred=tf.equal(tf.reduce_prod(tf.shape(item)), 0),
+ true_fn=lambda: self._backup.tensors_to_item(keys_to_tensors),
+ false_fn=lambda: item)
+
+
+class TfExampleDecoder(data_decoder.DataDecoder):
+ """Tensorflow Example proto decoder."""
+
+ def __init__(self,
+ load_instance_masks=False,
+ instance_mask_type=input_reader_pb2.NUMERICAL_MASKS,
+ label_map_proto_file=None,
+ use_display_name=False,
+ dct_method='',
+ num_keypoints=0,
+ num_additional_channels=0):
+ """Constructor sets keys_to_features and items_to_handlers.
+
+ Args:
+ load_instance_masks: whether or not to load and handle instance masks.
+ instance_mask_type: type of instance masks. Options are provided in
+ input_reader.proto. This is only used if `load_instance_masks` is True.
+      label_map_proto_file: a file path to an
+ object_detection.protos.StringIntLabelMap proto. If provided, then the
+ mapped IDs of 'image/object/class/text' will take precedence over the
+ existing 'image/object/class/label' ID. Also, if provided, it is
+ assumed that 'image/object/class/text' will be in the data.
+ use_display_name: whether or not to use the `display_name` for label
+ mapping (instead of `name`). Only used if label_map_proto_file is
+ provided.
+      dct_method: An optional string. Defaults to the empty string. It only
+        takes effect when the image format is jpeg, and is used to specify a
+        hint about the algorithm used for jpeg decompression. Currently valid
+        values are ['INTEGER_FAST', 'INTEGER_ACCURATE']. The hint may be
+        ignored if, for example, the jpeg library does not have that option.
+ num_keypoints: the number of keypoints per object.
+ num_additional_channels: how many additional channels to use.
+
+ Raises:
+ ValueError: If `instance_mask_type` option is not one of
+ input_reader_pb2.DEFAULT, input_reader_pb2.NUMERICAL, or
+ input_reader_pb2.PNG_MASKS.
+ """
+ # TODO(rathodv): delete unused `use_display_name` argument once we change
+ # other decoders to handle label maps similarly.
+ del use_display_name
+ self.keys_to_features = {
+ 'image/encoded':
+ tf.FixedLenFeature((), tf.string, default_value=''),
+ 'image/format':
+ tf.FixedLenFeature((), tf.string, default_value='jpeg'),
+ 'image/filename':
+ tf.FixedLenFeature((), tf.string, default_value=''),
+ 'image/key/sha256':
+ tf.FixedLenFeature((), tf.string, default_value=''),
+ 'image/source_id':
+ tf.FixedLenFeature((), tf.string, default_value=''),
+ 'image/height':
+ tf.FixedLenFeature((), tf.int64, default_value=1),
+ 'image/width':
+ tf.FixedLenFeature((), tf.int64, default_value=1),
+ # Image-level labels.
+ 'image/class/text':
+ tf.VarLenFeature(tf.string),
+ 'image/class/label':
+ tf.VarLenFeature(tf.int64),
+ # Object boxes and classes.
+ 'image/object/bbox/xmin':
+ tf.VarLenFeature(tf.float32),
+ 'image/object/bbox/xmax':
+ tf.VarLenFeature(tf.float32),
+ 'image/object/bbox/ymin':
+ tf.VarLenFeature(tf.float32),
+ 'image/object/bbox/ymax':
+ tf.VarLenFeature(tf.float32),
+ 'image/object/class/label':
+ tf.VarLenFeature(tf.int64),
+ 'image/object/class/text':
+ tf.VarLenFeature(tf.string),
+ 'image/object/area':
+ tf.VarLenFeature(tf.float32),
+ 'image/object/is_crowd':
+ tf.VarLenFeature(tf.int64),
+ 'image/object/difficult':
+ tf.VarLenFeature(tf.int64),
+ 'image/object/group_of':
+ tf.VarLenFeature(tf.int64),
+ 'image/object/weight':
+ tf.VarLenFeature(tf.float32),
+ }
+ # We are checking `dct_method` instead of passing it directly in order to
+ # ensure TF version 1.6 compatibility.
+ if dct_method:
+ image = slim_example_decoder.Image(
+ image_key='image/encoded',
+ format_key='image/format',
+ channels=3,
+ dct_method=dct_method)
+ additional_channel_image = slim_example_decoder.Image(
+ image_key='image/additional_channels/encoded',
+ format_key='image/format',
+ channels=1,
+ repeated=True,
+ dct_method=dct_method)
+ else:
+ image = slim_example_decoder.Image(
+ image_key='image/encoded', format_key='image/format', channels=3)
+ additional_channel_image = slim_example_decoder.Image(
+ image_key='image/additional_channels/encoded',
+ format_key='image/format',
+ channels=1,
+ repeated=True)
+ self.items_to_handlers = {
+ fields.InputDataFields.image:
+ image,
+ fields.InputDataFields.source_id: (
+ slim_example_decoder.Tensor('image/source_id')),
+ fields.InputDataFields.key: (
+ slim_example_decoder.Tensor('image/key/sha256')),
+ fields.InputDataFields.filename: (
+ slim_example_decoder.Tensor('image/filename')),
+ # Object boxes and classes.
+ fields.InputDataFields.groundtruth_boxes: (
+ slim_example_decoder.BoundingBox(['ymin', 'xmin', 'ymax', 'xmax'],
+ 'image/object/bbox/')),
+ fields.InputDataFields.groundtruth_area:
+ slim_example_decoder.Tensor('image/object/area'),
+ fields.InputDataFields.groundtruth_is_crowd: (
+ slim_example_decoder.Tensor('image/object/is_crowd')),
+ fields.InputDataFields.groundtruth_difficult: (
+ slim_example_decoder.Tensor('image/object/difficult')),
+ fields.InputDataFields.groundtruth_group_of: (
+ slim_example_decoder.Tensor('image/object/group_of')),
+ fields.InputDataFields.groundtruth_weights: (
+ slim_example_decoder.Tensor('image/object/weight')),
+ }
+ if num_additional_channels > 0:
+ self.keys_to_features[
+ 'image/additional_channels/encoded'] = tf.FixedLenFeature(
+ (num_additional_channels,), tf.string)
+ self.items_to_handlers[
+ fields.InputDataFields.
+ image_additional_channels] = additional_channel_image
+ self._num_keypoints = num_keypoints
+ if num_keypoints > 0:
+ self.keys_to_features['image/object/keypoint/x'] = (
+ tf.VarLenFeature(tf.float32))
+ self.keys_to_features['image/object/keypoint/y'] = (
+ tf.VarLenFeature(tf.float32))
+ self.items_to_handlers[fields.InputDataFields.groundtruth_keypoints] = (
+ slim_example_decoder.ItemHandlerCallback(
+ ['image/object/keypoint/y', 'image/object/keypoint/x'],
+ self._reshape_keypoints))
+ if load_instance_masks:
+ if instance_mask_type in (input_reader_pb2.DEFAULT,
+ input_reader_pb2.NUMERICAL_MASKS):
+ self.keys_to_features['image/object/mask'] = (
+ tf.VarLenFeature(tf.float32))
+ self.items_to_handlers[
+ fields.InputDataFields.groundtruth_instance_masks] = (
+ slim_example_decoder.ItemHandlerCallback(
+ ['image/object/mask', 'image/height', 'image/width'],
+ self._reshape_instance_masks))
+ elif instance_mask_type == input_reader_pb2.PNG_MASKS:
+ self.keys_to_features['image/object/mask'] = tf.VarLenFeature(tf.string)
+ self.items_to_handlers[
+ fields.InputDataFields.groundtruth_instance_masks] = (
+ slim_example_decoder.ItemHandlerCallback(
+ ['image/object/mask', 'image/height', 'image/width'],
+ self._decode_png_instance_masks))
+ else:
+ raise ValueError('Did not recognize the `instance_mask_type` option.')
+ if label_map_proto_file:
+ # If the label_map_proto is provided, try to use it in conjunction with
+ # the class text, and fall back to a materialized ID.
+ label_handler = _BackupHandler(
+ _ClassTensorHandler(
+ 'image/object/class/text', label_map_proto_file,
+ default_value=''),
+ slim_example_decoder.Tensor('image/object/class/label'))
+ image_label_handler = _BackupHandler(
+ _ClassTensorHandler(
+ fields.TfExampleFields.image_class_text,
+ label_map_proto_file,
+ default_value=''),
+ slim_example_decoder.Tensor(fields.TfExampleFields.image_class_label))
+ else:
+ label_handler = slim_example_decoder.Tensor('image/object/class/label')
+ image_label_handler = slim_example_decoder.Tensor(
+ fields.TfExampleFields.image_class_label)
+ self.items_to_handlers[
+ fields.InputDataFields.groundtruth_classes] = label_handler
+ self.items_to_handlers[
+ fields.InputDataFields.groundtruth_image_classes] = image_label_handler
+
+ def decode(self, tf_example_string_tensor):
+ """Decodes serialized tensorflow example and returns a tensor dictionary.
+
+ Args:
+ tf_example_string_tensor: a string tensor holding a serialized tensorflow
+ example proto.
+
+ Returns:
+ A dictionary of the following tensors.
+ fields.InputDataFields.image - 3D uint8 tensor of shape [None, None, 3]
+ containing image.
+ fields.InputDataFields.original_image_spatial_shape - 1D int32 tensor of
+ shape [2] containing shape of the image.
+ fields.InputDataFields.source_id - string tensor containing original
+ image id.
+ fields.InputDataFields.key - string tensor with unique sha256 hash key.
+ fields.InputDataFields.filename - string tensor with original dataset
+ filename.
+ fields.InputDataFields.groundtruth_boxes - 2D float32 tensor of shape
+ [None, 4] containing box corners.
+ fields.InputDataFields.groundtruth_classes - 1D int64 tensor of shape
+ [None] containing classes for the boxes.
+ fields.InputDataFields.groundtruth_weights - 1D float32 tensor of
+ shape [None] indicating the weights of groundtruth boxes.
+ fields.InputDataFields.groundtruth_area - 1D float32 tensor of shape
+        [None] containing object mask area in pixels squared.
+ fields.InputDataFields.groundtruth_is_crowd - 1D bool tensor of shape
+ [None] indicating if the boxes enclose a crowd.
+
+ Optional:
+ fields.InputDataFields.image_additional_channels - 3D uint8 tensor of
+ shape [None, None, num_additional_channels]. 1st dim is height; 2nd dim
+ is width; 3rd dim is the number of additional channels.
+ fields.InputDataFields.groundtruth_difficult - 1D bool tensor of shape
+ [None] indicating if the boxes represent `difficult` instances.
+ fields.InputDataFields.groundtruth_group_of - 1D bool tensor of shape
+ [None] indicating if the boxes represent `group_of` instances.
+ fields.InputDataFields.groundtruth_keypoints - 3D float32 tensor of
+ shape [None, None, 2] containing keypoints, where the coordinates of
+ the keypoints are ordered (y, x).
+ fields.InputDataFields.groundtruth_instance_masks - 3D float32 tensor of
+ shape [None, None, None] containing instance masks.
+      fields.InputDataFields.groundtruth_image_classes - 1D int64 tensor of
+        shape [None] containing the image-level classes.
+ """
+ serialized_example = tf.reshape(tf_example_string_tensor, shape=[])
+ decoder = slim_example_decoder.TFExampleDecoder(self.keys_to_features,
+ self.items_to_handlers)
+ keys = decoder.list_items()
+ tensors = decoder.decode(serialized_example, items=keys)
+ tensor_dict = dict(zip(keys, tensors))
+ is_crowd = fields.InputDataFields.groundtruth_is_crowd
+ tensor_dict[is_crowd] = tf.cast(tensor_dict[is_crowd], dtype=tf.bool)
+ tensor_dict[fields.InputDataFields.image].set_shape([None, None, 3])
+ tensor_dict[fields.InputDataFields.original_image_spatial_shape] = tf.shape(
+ tensor_dict[fields.InputDataFields.image])[:2]
+
+ if fields.InputDataFields.image_additional_channels in tensor_dict:
+ channels = tensor_dict[fields.InputDataFields.image_additional_channels]
+ channels = tf.squeeze(channels, axis=3)
+ channels = tf.transpose(channels, perm=[1, 2, 0])
+ tensor_dict[fields.InputDataFields.image_additional_channels] = channels
+
+ def default_groundtruth_weights():
+ return tf.ones(
+ [tf.shape(tensor_dict[fields.InputDataFields.groundtruth_boxes])[0]],
+ dtype=tf.float32)
+
+ tensor_dict[fields.InputDataFields.groundtruth_weights] = tf.cond(
+ tf.greater(
+ tf.shape(
+ tensor_dict[fields.InputDataFields.groundtruth_weights])[0],
+ 0), lambda: tensor_dict[fields.InputDataFields.groundtruth_weights],
+ default_groundtruth_weights)
+ return tensor_dict
+
+ def _reshape_keypoints(self, keys_to_tensors):
+ """Reshape keypoints.
+
+    The keypoints are reshaped to [num_instances, num_keypoints, 2].
+
+ Args:
+ keys_to_tensors: a dictionary from keys to tensors.
+
+ Returns:
+      A 3-D float tensor of shape [num_instances, num_keypoints, 2] containing
+      the keypoint coordinates, ordered (y, x).
+ """
+ y = keys_to_tensors['image/object/keypoint/y']
+ if isinstance(y, tf.SparseTensor):
+ y = tf.sparse_tensor_to_dense(y)
+ y = tf.expand_dims(y, 1)
+ x = keys_to_tensors['image/object/keypoint/x']
+ if isinstance(x, tf.SparseTensor):
+ x = tf.sparse_tensor_to_dense(x)
+ x = tf.expand_dims(x, 1)
+ keypoints = tf.concat([y, x], 1)
+ keypoints = tf.reshape(keypoints, [-1, self._num_keypoints, 2])
+ return keypoints
+
+ def _reshape_instance_masks(self, keys_to_tensors):
+ """Reshape instance segmentation masks.
+
+ The instance segmentation masks are reshaped to [num_instances, height,
+ width].
+
+ Args:
+ keys_to_tensors: a dictionary from keys to tensors.
+
+ Returns:
+ A 3-D float tensor of shape [num_instances, height, width] with values
+ in {0, 1}.
+ """
+ height = keys_to_tensors['image/height']
+ width = keys_to_tensors['image/width']
+ to_shape = tf.cast(tf.stack([-1, height, width]), tf.int32)
+ masks = keys_to_tensors['image/object/mask']
+ if isinstance(masks, tf.SparseTensor):
+ masks = tf.sparse_tensor_to_dense(masks)
+ masks = tf.reshape(tf.to_float(tf.greater(masks, 0.0)), to_shape)
+ return tf.cast(masks, tf.float32)
+
+ def _decode_png_instance_masks(self, keys_to_tensors):
+ """Decode PNG instance segmentation masks and stack into dense tensor.
+
+ The instance segmentation masks are reshaped to [num_instances, height,
+ width].
+
+ Args:
+ keys_to_tensors: a dictionary from keys to tensors.
+
+ Returns:
+ A 3-D float tensor of shape [num_instances, height, width] with values
+ in {0, 1}.
+ """
+
+ def decode_png_mask(image_buffer):
+ image = tf.squeeze(
+ tf.image.decode_image(image_buffer, channels=1), axis=2)
+ image.set_shape([None, None])
+ image = tf.to_float(tf.greater(image, 0))
+ return image
+
+ png_masks = keys_to_tensors['image/object/mask']
+ height = keys_to_tensors['image/height']
+ width = keys_to_tensors['image/width']
+ if isinstance(png_masks, tf.SparseTensor):
+ png_masks = tf.sparse_tensor_to_dense(png_masks, default_value='')
+ return tf.cond(
+ tf.greater(tf.size(png_masks), 0),
+ lambda: tf.map_fn(decode_png_mask, png_masks, dtype=tf.float32),
+ lambda: tf.zeros(tf.to_int32(tf.stack([0, height, width]))))
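
For context, here is a minimal sketch (not part of this diff) of wiring `TfExampleDecoder` into a TF 1.x `tf.data` input pipeline; the TFRecord and label map paths are placeholders:

```python
# Sketch: decode serialized tf.Example protos from a TFRecord with TfExampleDecoder.
import tensorflow as tf
from object_detection.core import standard_fields as fields
from object_detection.data_decoders import tf_example_decoder

records_path = 'train.record'            # placeholder path
label_map_path = 'pet_label_map.pbtxt'   # placeholder path

decoder = tf_example_decoder.TfExampleDecoder(
    label_map_proto_file=label_map_path)

dataset = tf.data.TFRecordDataset([records_path])
dataset = dataset.map(decoder.decode)    # each element becomes a tensor dict
tensor_dict = dataset.make_one_shot_iterator().get_next()

with tf.Session() as sess:
    sess.run(tf.tables_initializer())    # initialize the label map hash tables
    out = sess.run(tensor_dict)
    print(out[fields.InputDataFields.image].shape)
    print(out[fields.InputDataFields.groundtruth_classes])
```
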
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data_decoders/tf_example_decoder_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data_decoders/tf_example_decoder_test.py"
new file mode 100644
index 00000000..91fa8693
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/data_decoders/tf_example_decoder_test.py"
@@ -0,0 +1,839 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for object_detection.data_decoders.tf_example_decoder."""
+
+import os
+import numpy as np
+import tensorflow as tf
+
+from tensorflow.python.framework import test_util
+from object_detection.core import standard_fields as fields
+from object_detection.data_decoders import tf_example_decoder
+from object_detection.protos import input_reader_pb2
+from object_detection.utils import dataset_util
+
+slim_example_decoder = tf.contrib.slim.tfexample_decoder
+
+
+class TfExampleDecoderTest(tf.test.TestCase):
+
+ def _EncodeImage(self, image_tensor, encoding_type='jpeg'):
+ with self.test_session():
+ if encoding_type == 'jpeg':
+ image_encoded = tf.image.encode_jpeg(tf.constant(image_tensor)).eval()
+ elif encoding_type == 'png':
+ image_encoded = tf.image.encode_png(tf.constant(image_tensor)).eval()
+ else:
+ raise ValueError('Invalid encoding type.')
+ return image_encoded
+
+ def _DecodeImage(self, image_encoded, encoding_type='jpeg'):
+ with self.test_session():
+ if encoding_type == 'jpeg':
+ image_decoded = tf.image.decode_jpeg(tf.constant(image_encoded)).eval()
+ elif encoding_type == 'png':
+ image_decoded = tf.image.decode_png(tf.constant(image_encoded)).eval()
+ else:
+ raise ValueError('Invalid encoding type.')
+ return image_decoded
+
+ def testDecodeAdditionalChannels(self):
+ image_tensor = np.random.randint(256, size=(4, 5, 3)).astype(np.uint8)
+ encoded_jpeg = self._EncodeImage(image_tensor)
+
+ additional_channel_tensor = np.random.randint(
+ 256, size=(4, 5, 1)).astype(np.uint8)
+ encoded_additional_channel = self._EncodeImage(additional_channel_tensor)
+ decoded_additional_channel = self._DecodeImage(encoded_additional_channel)
+
+ example = tf.train.Example(
+ features=tf.train.Features(
+ feature={
+ 'image/encoded':
+ dataset_util.bytes_feature(encoded_jpeg),
+ 'image/additional_channels/encoded':
+ dataset_util.bytes_list_feature(
+ [encoded_additional_channel] * 2),
+ 'image/format':
+ dataset_util.bytes_feature('jpeg'),
+ 'image/source_id':
+ dataset_util.bytes_feature('image_id'),
+ })).SerializeToString()
+
+ example_decoder = tf_example_decoder.TfExampleDecoder(
+ num_additional_channels=2)
+ tensor_dict = example_decoder.decode(tf.convert_to_tensor(example))
+
+ with self.test_session() as sess:
+ tensor_dict = sess.run(tensor_dict)
+ self.assertAllEqual(
+ np.concatenate([decoded_additional_channel] * 2, axis=2),
+ tensor_dict[fields.InputDataFields.image_additional_channels])
+
+ def testDecodeJpegImage(self):
+ image_tensor = np.random.randint(256, size=(4, 5, 3)).astype(np.uint8)
+ encoded_jpeg = self._EncodeImage(image_tensor)
+ decoded_jpeg = self._DecodeImage(encoded_jpeg)
+ example = tf.train.Example(
+ features=tf.train.Features(
+ feature={
+ 'image/encoded': dataset_util.bytes_feature(encoded_jpeg),
+ 'image/format': dataset_util.bytes_feature('jpeg'),
+ 'image/source_id': dataset_util.bytes_feature('image_id'),
+ })).SerializeToString()
+
+ example_decoder = tf_example_decoder.TfExampleDecoder()
+ tensor_dict = example_decoder.decode(tf.convert_to_tensor(example))
+
+ self.assertAllEqual((tensor_dict[fields.InputDataFields.image].
+ get_shape().as_list()), [None, None, 3])
+ self.assertAllEqual((tensor_dict[fields.InputDataFields.
+ original_image_spatial_shape].
+ get_shape().as_list()), [2])
+ with self.test_session() as sess:
+ tensor_dict = sess.run(tensor_dict)
+
+ self.assertAllEqual(decoded_jpeg, tensor_dict[fields.InputDataFields.image])
+ self.assertAllEqual([4, 5], tensor_dict[fields.InputDataFields.
+ original_image_spatial_shape])
+ self.assertEqual('image_id', tensor_dict[fields.InputDataFields.source_id])
+
+ def testDecodeImageKeyAndFilename(self):
+ image_tensor = np.random.randint(256, size=(4, 5, 3)).astype(np.uint8)
+ encoded_jpeg = self._EncodeImage(image_tensor)
+ example = tf.train.Example(
+ features=tf.train.Features(
+ feature={
+ 'image/encoded': dataset_util.bytes_feature(encoded_jpeg),
+ 'image/key/sha256': dataset_util.bytes_feature('abc'),
+ 'image/filename': dataset_util.bytes_feature('filename')
+ })).SerializeToString()
+
+ example_decoder = tf_example_decoder.TfExampleDecoder()
+ tensor_dict = example_decoder.decode(tf.convert_to_tensor(example))
+
+ with self.test_session() as sess:
+ tensor_dict = sess.run(tensor_dict)
+
+ self.assertEqual('abc', tensor_dict[fields.InputDataFields.key])
+ self.assertEqual('filename', tensor_dict[fields.InputDataFields.filename])
+
+ def testDecodePngImage(self):
+ image_tensor = np.random.randint(256, size=(4, 5, 3)).astype(np.uint8)
+ encoded_png = self._EncodeImage(image_tensor, encoding_type='png')
+ decoded_png = self._DecodeImage(encoded_png, encoding_type='png')
+ example = tf.train.Example(
+ features=tf.train.Features(
+ feature={
+ 'image/encoded': dataset_util.bytes_feature(encoded_png),
+ 'image/format': dataset_util.bytes_feature('png'),
+ 'image/source_id': dataset_util.bytes_feature('image_id')
+ })).SerializeToString()
+
+ example_decoder = tf_example_decoder.TfExampleDecoder()
+ tensor_dict = example_decoder.decode(tf.convert_to_tensor(example))
+
+ self.assertAllEqual((tensor_dict[fields.InputDataFields.image].
+ get_shape().as_list()), [None, None, 3])
+ self.assertAllEqual((tensor_dict[fields.InputDataFields.
+ original_image_spatial_shape].
+ get_shape().as_list()), [2])
+ with self.test_session() as sess:
+ tensor_dict = sess.run(tensor_dict)
+
+ self.assertAllEqual(decoded_png, tensor_dict[fields.InputDataFields.image])
+ self.assertAllEqual([4, 5], tensor_dict[fields.InputDataFields.
+ original_image_spatial_shape])
+ self.assertEqual('image_id', tensor_dict[fields.InputDataFields.source_id])
+
+ def testDecodePngInstanceMasks(self):
+ image_tensor = np.random.randint(256, size=(10, 10, 3)).astype(np.uint8)
+ encoded_jpeg = self._EncodeImage(image_tensor)
+ mask_1 = np.random.randint(0, 2, size=(10, 10, 1)).astype(np.uint8)
+ mask_2 = np.random.randint(0, 2, size=(10, 10, 1)).astype(np.uint8)
+ encoded_png_1 = self._EncodeImage(mask_1, encoding_type='png')
+ decoded_png_1 = np.squeeze(mask_1.astype(np.float32))
+ encoded_png_2 = self._EncodeImage(mask_2, encoding_type='png')
+ decoded_png_2 = np.squeeze(mask_2.astype(np.float32))
+ encoded_masks = [encoded_png_1, encoded_png_2]
+ decoded_masks = np.stack([decoded_png_1, decoded_png_2])
+ example = tf.train.Example(
+ features=tf.train.Features(
+ feature={
+ 'image/encoded':
+ dataset_util.bytes_feature(encoded_jpeg),
+ 'image/format':
+ dataset_util.bytes_feature('jpeg'),
+ 'image/object/mask':
+ dataset_util.bytes_list_feature(encoded_masks)
+ })).SerializeToString()
+
+ example_decoder = tf_example_decoder.TfExampleDecoder(
+ load_instance_masks=True, instance_mask_type=input_reader_pb2.PNG_MASKS)
+ tensor_dict = example_decoder.decode(tf.convert_to_tensor(example))
+
+ with self.test_session() as sess:
+ tensor_dict = sess.run(tensor_dict)
+
+ self.assertAllEqual(
+ decoded_masks,
+ tensor_dict[fields.InputDataFields.groundtruth_instance_masks])
+
+ def testDecodeEmptyPngInstanceMasks(self):
+ image_tensor = np.random.randint(256, size=(10, 10, 3)).astype(np.uint8)
+ encoded_jpeg = self._EncodeImage(image_tensor)
+ encoded_masks = []
+ example = tf.train.Example(
+ features=tf.train.Features(
+ feature={
+ 'image/encoded':
+ dataset_util.bytes_feature(encoded_jpeg),
+ 'image/format':
+ dataset_util.bytes_feature('jpeg'),
+ 'image/object/mask':
+ dataset_util.bytes_list_feature(encoded_masks),
+ 'image/height':
+ dataset_util.int64_feature(10),
+ 'image/width':
+ dataset_util.int64_feature(10),
+ })).SerializeToString()
+
+ example_decoder = tf_example_decoder.TfExampleDecoder(
+ load_instance_masks=True, instance_mask_type=input_reader_pb2.PNG_MASKS)
+ tensor_dict = example_decoder.decode(tf.convert_to_tensor(example))
+
+ with self.test_session() as sess:
+ tensor_dict = sess.run(tensor_dict)
+ self.assertAllEqual(
+ tensor_dict[fields.InputDataFields.groundtruth_instance_masks].shape,
+ [0, 10, 10])
+
+ def testDecodeBoundingBox(self):
+ image_tensor = np.random.randint(256, size=(4, 5, 3)).astype(np.uint8)
+ encoded_jpeg = self._EncodeImage(image_tensor)
+ bbox_ymins = [0.0, 4.0]
+ bbox_xmins = [1.0, 5.0]
+ bbox_ymaxs = [2.0, 6.0]
+ bbox_xmaxs = [3.0, 7.0]
+ example = tf.train.Example(
+ features=tf.train.Features(
+ feature={
+ 'image/encoded':
+ dataset_util.bytes_feature(encoded_jpeg),
+ 'image/format':
+ dataset_util.bytes_feature('jpeg'),
+ 'image/object/bbox/ymin':
+ dataset_util.float_list_feature(bbox_ymins),
+ 'image/object/bbox/xmin':
+ dataset_util.float_list_feature(bbox_xmins),
+ 'image/object/bbox/ymax':
+ dataset_util.float_list_feature(bbox_ymaxs),
+ 'image/object/bbox/xmax':
+ dataset_util.float_list_feature(bbox_xmaxs),
+ })).SerializeToString()
+
+ example_decoder = tf_example_decoder.TfExampleDecoder()
+ tensor_dict = example_decoder.decode(tf.convert_to_tensor(example))
+
+ self.assertAllEqual((tensor_dict[fields.InputDataFields.groundtruth_boxes]
+ .get_shape().as_list()), [None, 4])
+ with self.test_session() as sess:
+ tensor_dict = sess.run(tensor_dict)
+
+ expected_boxes = np.vstack([bbox_ymins, bbox_xmins, bbox_ymaxs,
+ bbox_xmaxs]).transpose()
+ self.assertAllEqual(expected_boxes,
+ tensor_dict[fields.InputDataFields.groundtruth_boxes])
+
+ @test_util.enable_c_shapes
+ def testDecodeKeypoint(self):
+ image_tensor = np.random.randint(256, size=(4, 5, 3)).astype(np.uint8)
+ encoded_jpeg = self._EncodeImage(image_tensor)
+ bbox_ymins = [0.0, 4.0]
+ bbox_xmins = [1.0, 5.0]
+ bbox_ymaxs = [2.0, 6.0]
+ bbox_xmaxs = [3.0, 7.0]
+ keypoint_ys = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
+ keypoint_xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
+ example = tf.train.Example(
+ features=tf.train.Features(
+ feature={
+ 'image/encoded':
+ dataset_util.bytes_feature(encoded_jpeg),
+ 'image/format':
+ dataset_util.bytes_feature('jpeg'),
+ 'image/object/bbox/ymin':
+ dataset_util.float_list_feature(bbox_ymins),
+ 'image/object/bbox/xmin':
+ dataset_util.float_list_feature(bbox_xmins),
+ 'image/object/bbox/ymax':
+ dataset_util.float_list_feature(bbox_ymaxs),
+ 'image/object/bbox/xmax':
+ dataset_util.float_list_feature(bbox_xmaxs),
+ 'image/object/keypoint/y':
+ dataset_util.float_list_feature(keypoint_ys),
+ 'image/object/keypoint/x':
+ dataset_util.float_list_feature(keypoint_xs),
+ })).SerializeToString()
+
+ example_decoder = tf_example_decoder.TfExampleDecoder(num_keypoints=3)
+ tensor_dict = example_decoder.decode(tf.convert_to_tensor(example))
+
+ self.assertAllEqual((tensor_dict[fields.InputDataFields.groundtruth_boxes]
+ .get_shape().as_list()), [None, 4])
+ self.assertAllEqual(
+ (tensor_dict[fields.InputDataFields.groundtruth_keypoints].get_shape()
+ .as_list()), [2, 3, 2])
+ with self.test_session() as sess:
+ tensor_dict = sess.run(tensor_dict)
+
+ expected_boxes = np.vstack([bbox_ymins, bbox_xmins, bbox_ymaxs,
+ bbox_xmaxs]).transpose()
+ self.assertAllEqual(expected_boxes,
+ tensor_dict[fields.InputDataFields.groundtruth_boxes])
+
+ expected_keypoints = (
+ np.vstack([keypoint_ys, keypoint_xs]).transpose().reshape((2, 3, 2)))
+ self.assertAllEqual(
+ expected_keypoints,
+ tensor_dict[fields.InputDataFields.groundtruth_keypoints])
+
+ def testDecodeDefaultGroundtruthWeights(self):
+ image_tensor = np.random.randint(256, size=(4, 5, 3)).astype(np.uint8)
+ encoded_jpeg = self._EncodeImage(image_tensor)
+ bbox_ymins = [0.0, 4.0]
+ bbox_xmins = [1.0, 5.0]
+ bbox_ymaxs = [2.0, 6.0]
+ bbox_xmaxs = [3.0, 7.0]
+ example = tf.train.Example(
+ features=tf.train.Features(
+ feature={
+ 'image/encoded':
+ dataset_util.bytes_feature(encoded_jpeg),
+ 'image/format':
+ dataset_util.bytes_feature('jpeg'),
+ 'image/object/bbox/ymin':
+ dataset_util.float_list_feature(bbox_ymins),
+ 'image/object/bbox/xmin':
+ dataset_util.float_list_feature(bbox_xmins),
+ 'image/object/bbox/ymax':
+ dataset_util.float_list_feature(bbox_ymaxs),
+ 'image/object/bbox/xmax':
+ dataset_util.float_list_feature(bbox_xmaxs),
+ })).SerializeToString()
+
+ example_decoder = tf_example_decoder.TfExampleDecoder()
+ tensor_dict = example_decoder.decode(tf.convert_to_tensor(example))
+
+ self.assertAllEqual((tensor_dict[fields.InputDataFields.groundtruth_boxes]
+ .get_shape().as_list()), [None, 4])
+
+ with self.test_session() as sess:
+ tensor_dict = sess.run(tensor_dict)
+
+ self.assertAllClose(tensor_dict[fields.InputDataFields.groundtruth_weights],
+ np.ones(2, dtype=np.float32))
+
+ @test_util.enable_c_shapes
+ def testDecodeObjectLabel(self):
+ image_tensor = np.random.randint(256, size=(4, 5, 3)).astype(np.uint8)
+ encoded_jpeg = self._EncodeImage(image_tensor)
+ bbox_classes = [0, 1]
+ example = tf.train.Example(
+ features=tf.train.Features(
+ feature={
+ 'image/encoded':
+ dataset_util.bytes_feature(encoded_jpeg),
+ 'image/format':
+ dataset_util.bytes_feature('jpeg'),
+ 'image/object/class/label':
+ dataset_util.int64_list_feature(bbox_classes),
+ })).SerializeToString()
+
+ example_decoder = tf_example_decoder.TfExampleDecoder()
+ tensor_dict = example_decoder.decode(tf.convert_to_tensor(example))
+
+ self.assertAllEqual((tensor_dict[fields.InputDataFields.groundtruth_classes]
+ .get_shape().as_list()), [2])
+
+ with self.test_session() as sess:
+ tensor_dict = sess.run(tensor_dict)
+
+ self.assertAllEqual(bbox_classes,
+ tensor_dict[fields.InputDataFields.groundtruth_classes])
+
+ def testDecodeObjectLabelNoText(self):
+ image_tensor = np.random.randint(256, size=(4, 5, 3)).astype(np.uint8)
+ encoded_jpeg = self._EncodeImage(image_tensor)
+ bbox_classes = [1, 2]
+ example = tf.train.Example(
+ features=tf.train.Features(
+ feature={
+ 'image/encoded':
+ dataset_util.bytes_feature(encoded_jpeg),
+ 'image/format':
+ dataset_util.bytes_feature('jpeg'),
+ 'image/object/class/label':
+ dataset_util.int64_list_feature(bbox_classes),
+ })).SerializeToString()
+ label_map_string = """
+ item {
+ id:1
+ name:'cat'
+ }
+ item {
+ id:2
+ name:'dog'
+ }
+ """
+ label_map_path = os.path.join(self.get_temp_dir(), 'label_map.pbtxt')
+ with tf.gfile.Open(label_map_path, 'wb') as f:
+ f.write(label_map_string)
+
+ example_decoder = tf_example_decoder.TfExampleDecoder(
+ label_map_proto_file=label_map_path)
+ tensor_dict = example_decoder.decode(tf.convert_to_tensor(example))
+
+ self.assertAllEqual((tensor_dict[fields.InputDataFields.groundtruth_classes]
+ .get_shape().as_list()), [None])
+
+ init = tf.tables_initializer()
+ with self.test_session() as sess:
+ sess.run(init)
+ tensor_dict = sess.run(tensor_dict)
+
+ self.assertAllEqual(bbox_classes,
+ tensor_dict[fields.InputDataFields.groundtruth_classes])
+
+ def testDecodeObjectLabelUnrecognizedName(self):
+ image_tensor = np.random.randint(256, size=(4, 5, 3)).astype(np.uint8)
+ encoded_jpeg = self._EncodeImage(image_tensor)
+ bbox_classes_text = ['cat', 'cheetah']
+ example = tf.train.Example(
+ features=tf.train.Features(
+ feature={
+ 'image/encoded':
+ dataset_util.bytes_feature(encoded_jpeg),
+ 'image/format':
+ dataset_util.bytes_feature('jpeg'),
+ 'image/object/class/text':
+ dataset_util.bytes_list_feature(bbox_classes_text),
+ })).SerializeToString()
+
+ label_map_string = """
+ item {
+ id:2
+ name:'cat'
+ }
+ item {
+ id:1
+ name:'dog'
+ }
+ """
+ label_map_path = os.path.join(self.get_temp_dir(), 'label_map.pbtxt')
+ with tf.gfile.Open(label_map_path, 'wb') as f:
+ f.write(label_map_string)
+ example_decoder = tf_example_decoder.TfExampleDecoder(
+ label_map_proto_file=label_map_path)
+ tensor_dict = example_decoder.decode(tf.convert_to_tensor(example))
+
+ self.assertAllEqual((tensor_dict[fields.InputDataFields.groundtruth_classes]
+ .get_shape().as_list()), [None])
+
+ with self.test_session() as sess:
+ sess.run(tf.tables_initializer())
+ tensor_dict = sess.run(tensor_dict)
+
+ self.assertAllEqual([2, -1],
+ tensor_dict[fields.InputDataFields.groundtruth_classes])
+
+ def testDecodeObjectLabelWithMappingWithDisplayName(self):
+ image_tensor = np.random.randint(256, size=(4, 5, 3)).astype(np.uint8)
+ encoded_jpeg = self._EncodeImage(image_tensor)
+ bbox_classes_text = ['cat', 'dog']
+ example = tf.train.Example(
+ features=tf.train.Features(
+ feature={
+ 'image/encoded':
+ dataset_util.bytes_feature(encoded_jpeg),
+ 'image/format':
+ dataset_util.bytes_feature('jpeg'),
+ 'image/object/class/text':
+ dataset_util.bytes_list_feature(bbox_classes_text),
+ })).SerializeToString()
+
+ label_map_string = """
+ item {
+ id:3
+ display_name:'cat'
+ }
+ item {
+ id:1
+ display_name:'dog'
+ }
+ """
+ label_map_path = os.path.join(self.get_temp_dir(), 'label_map.pbtxt')
+ with tf.gfile.Open(label_map_path, 'wb') as f:
+ f.write(label_map_string)
+ example_decoder = tf_example_decoder.TfExampleDecoder(
+ label_map_proto_file=label_map_path)
+ tensor_dict = example_decoder.decode(tf.convert_to_tensor(example))
+
+ self.assertAllEqual((tensor_dict[fields.InputDataFields.groundtruth_classes]
+ .get_shape().as_list()), [None])
+
+ with self.test_session() as sess:
+ sess.run(tf.tables_initializer())
+ tensor_dict = sess.run(tensor_dict)
+
+ self.assertAllEqual([3, 1],
+ tensor_dict[fields.InputDataFields.groundtruth_classes])
+
+ def testDecodeObjectLabelWithMappingWithName(self):
+ image_tensor = np.random.randint(256, size=(4, 5, 3)).astype(np.uint8)
+ encoded_jpeg = self._EncodeImage(image_tensor)
+ bbox_classes_text = ['cat', 'dog']
+ example = tf.train.Example(
+ features=tf.train.Features(
+ feature={
+ 'image/encoded':
+ dataset_util.bytes_feature(encoded_jpeg),
+ 'image/format':
+ dataset_util.bytes_feature('jpeg'),
+ 'image/object/class/text':
+ dataset_util.bytes_list_feature(bbox_classes_text),
+ })).SerializeToString()
+
+ label_map_string = """
+ item {
+ id:3
+ name:'cat'
+ }
+ item {
+ id:1
+ name:'dog'
+ }
+ """
+ label_map_path = os.path.join(self.get_temp_dir(), 'label_map.pbtxt')
+ with tf.gfile.Open(label_map_path, 'wb') as f:
+ f.write(label_map_string)
+ example_decoder = tf_example_decoder.TfExampleDecoder(
+ label_map_proto_file=label_map_path)
+ tensor_dict = example_decoder.decode(tf.convert_to_tensor(example))
+
+ self.assertAllEqual((tensor_dict[fields.InputDataFields.groundtruth_classes]
+ .get_shape().as_list()), [None])
+
+ with self.test_session() as sess:
+ sess.run(tf.tables_initializer())
+ tensor_dict = sess.run(tensor_dict)
+
+ self.assertAllEqual([3, 1],
+ tensor_dict[fields.InputDataFields.groundtruth_classes])
+
+ @test_util.enable_c_shapes
+ def testDecodeObjectArea(self):
+ image_tensor = np.random.randint(256, size=(4, 5, 3)).astype(np.uint8)
+ encoded_jpeg = self._EncodeImage(image_tensor)
+ object_area = [100., 174.]
+ example = tf.train.Example(
+ features=tf.train.Features(
+ feature={
+ 'image/encoded':
+ dataset_util.bytes_feature(encoded_jpeg),
+ 'image/format':
+ dataset_util.bytes_feature('jpeg'),
+ 'image/object/area':
+ dataset_util.float_list_feature(object_area),
+ })).SerializeToString()
+
+ example_decoder = tf_example_decoder.TfExampleDecoder()
+ tensor_dict = example_decoder.decode(tf.convert_to_tensor(example))
+
+ self.assertAllEqual((tensor_dict[fields.InputDataFields.groundtruth_area]
+ .get_shape().as_list()), [2])
+ with self.test_session() as sess:
+ tensor_dict = sess.run(tensor_dict)
+
+ self.assertAllEqual(object_area,
+ tensor_dict[fields.InputDataFields.groundtruth_area])
+
+ @test_util.enable_c_shapes
+ def testDecodeObjectIsCrowd(self):
+ image_tensor = np.random.randint(256, size=(4, 5, 3)).astype(np.uint8)
+ encoded_jpeg = self._EncodeImage(image_tensor)
+ object_is_crowd = [0, 1]
+ example = tf.train.Example(
+ features=tf.train.Features(
+ feature={
+ 'image/encoded':
+ dataset_util.bytes_feature(encoded_jpeg),
+ 'image/format':
+ dataset_util.bytes_feature('jpeg'),
+ 'image/object/is_crowd':
+ dataset_util.int64_list_feature(object_is_crowd),
+ })).SerializeToString()
+
+ example_decoder = tf_example_decoder.TfExampleDecoder()
+ tensor_dict = example_decoder.decode(tf.convert_to_tensor(example))
+
+ self.assertAllEqual(
+ (tensor_dict[fields.InputDataFields.groundtruth_is_crowd].get_shape()
+ .as_list()), [2])
+ with self.test_session() as sess:
+ tensor_dict = sess.run(tensor_dict)
+
+ self.assertAllEqual(
+ [bool(item) for item in object_is_crowd],
+ tensor_dict[fields.InputDataFields.groundtruth_is_crowd])
+
+ @test_util.enable_c_shapes
+ def testDecodeObjectDifficult(self):
+ image_tensor = np.random.randint(256, size=(4, 5, 3)).astype(np.uint8)
+ encoded_jpeg = self._EncodeImage(image_tensor)
+ object_difficult = [0, 1]
+ example = tf.train.Example(
+ features=tf.train.Features(
+ feature={
+ 'image/encoded':
+ dataset_util.bytes_feature(encoded_jpeg),
+ 'image/format':
+ dataset_util.bytes_feature('jpeg'),
+ 'image/object/difficult':
+ dataset_util.int64_list_feature(object_difficult),
+ })).SerializeToString()
+
+ example_decoder = tf_example_decoder.TfExampleDecoder()
+ tensor_dict = example_decoder.decode(tf.convert_to_tensor(example))
+
+ self.assertAllEqual(
+ (tensor_dict[fields.InputDataFields.groundtruth_difficult].get_shape()
+ .as_list()), [2])
+ with self.test_session() as sess:
+ tensor_dict = sess.run(tensor_dict)
+
+ self.assertAllEqual(
+ [bool(item) for item in object_difficult],
+ tensor_dict[fields.InputDataFields.groundtruth_difficult])
+
+ @test_util.enable_c_shapes
+ def testDecodeObjectGroupOf(self):
+ image_tensor = np.random.randint(256, size=(4, 5, 3)).astype(np.uint8)
+ encoded_jpeg = self._EncodeImage(image_tensor)
+ object_group_of = [0, 1]
+ example = tf.train.Example(
+ features=tf.train.Features(
+ feature={
+ 'image/encoded':
+ dataset_util.bytes_feature(encoded_jpeg),
+ 'image/format':
+ dataset_util.bytes_feature('jpeg'),
+ 'image/object/group_of':
+ dataset_util.int64_list_feature(object_group_of),
+ })).SerializeToString()
+
+ example_decoder = tf_example_decoder.TfExampleDecoder()
+ tensor_dict = example_decoder.decode(tf.convert_to_tensor(example))
+
+ self.assertAllEqual(
+ (tensor_dict[fields.InputDataFields.groundtruth_group_of].get_shape()
+ .as_list()), [2])
+ with self.test_session() as sess:
+ tensor_dict = sess.run(tensor_dict)
+
+ self.assertAllEqual(
+ [bool(item) for item in object_group_of],
+ tensor_dict[fields.InputDataFields.groundtruth_group_of])
+
+ def testDecodeObjectWeight(self):
+ image_tensor = np.random.randint(256, size=(4, 5, 3)).astype(np.uint8)
+ encoded_jpeg = self._EncodeImage(image_tensor)
+ object_weights = [0.75, 1.0]
+ example = tf.train.Example(
+ features=tf.train.Features(
+ feature={
+ 'image/encoded':
+ dataset_util.bytes_feature(encoded_jpeg),
+ 'image/format':
+ dataset_util.bytes_feature('jpeg'),
+ 'image/object/weight':
+ dataset_util.float_list_feature(object_weights),
+ })).SerializeToString()
+
+ example_decoder = tf_example_decoder.TfExampleDecoder()
+ tensor_dict = example_decoder.decode(tf.convert_to_tensor(example))
+
+ self.assertAllEqual((tensor_dict[fields.InputDataFields.groundtruth_weights]
+ .get_shape().as_list()), [None])
+ with self.test_session() as sess:
+ tensor_dict = sess.run(tensor_dict)
+
+ self.assertAllEqual(object_weights,
+ tensor_dict[fields.InputDataFields.groundtruth_weights])
+
+ @test_util.enable_c_shapes
+ def testDecodeInstanceSegmentation(self):
+ num_instances = 4
+ image_height = 5
+ image_width = 3
+
+ # Randomly generate image.
+ image_tensor = np.random.randint(
+ 256, size=(image_height, image_width, 3)).astype(np.uint8)
+ encoded_jpeg = self._EncodeImage(image_tensor)
+
+ # Randomly generate instance segmentation masks.
+ instance_masks = (
+ np.random.randint(2, size=(num_instances, image_height,
+ image_width)).astype(np.float32))
+ instance_masks_flattened = np.reshape(instance_masks, [-1])
+
+ # Randomly generate class labels for each instance.
+ object_classes = np.random.randint(
+ 100, size=(num_instances)).astype(np.int64)
+
+ example = tf.train.Example(
+ features=tf.train.Features(
+ feature={
+ 'image/encoded':
+ dataset_util.bytes_feature(encoded_jpeg),
+ 'image/format':
+ dataset_util.bytes_feature('jpeg'),
+ 'image/height':
+ dataset_util.int64_feature(image_height),
+ 'image/width':
+ dataset_util.int64_feature(image_width),
+ 'image/object/mask':
+ dataset_util.float_list_feature(instance_masks_flattened),
+ 'image/object/class/label':
+ dataset_util.int64_list_feature(object_classes)
+ })).SerializeToString()
+ example_decoder = tf_example_decoder.TfExampleDecoder(
+ load_instance_masks=True)
+ tensor_dict = example_decoder.decode(tf.convert_to_tensor(example))
+
+ self.assertAllEqual(
+ (tensor_dict[fields.InputDataFields.groundtruth_instance_masks]
+ .get_shape().as_list()), [4, 5, 3])
+
+ self.assertAllEqual((tensor_dict[fields.InputDataFields.groundtruth_classes]
+ .get_shape().as_list()), [4])
+
+ with self.test_session() as sess:
+ tensor_dict = sess.run(tensor_dict)
+
+ self.assertAllEqual(
+ instance_masks.astype(np.float32),
+ tensor_dict[fields.InputDataFields.groundtruth_instance_masks])
+ self.assertAllEqual(object_classes,
+ tensor_dict[fields.InputDataFields.groundtruth_classes])
+
+ def testInstancesNotAvailableByDefault(self):
+ num_instances = 4
+ image_height = 5
+ image_width = 3
+ # Randomly generate image.
+ image_tensor = np.random.randint(
+ 256, size=(image_height, image_width, 3)).astype(np.uint8)
+ encoded_jpeg = self._EncodeImage(image_tensor)
+
+ # Randomly generate instance segmentation masks.
+ instance_masks = (
+ np.random.randint(2, size=(num_instances, image_height,
+ image_width)).astype(np.float32))
+ instance_masks_flattened = np.reshape(instance_masks, [-1])
+
+ # Randomly generate class labels for each instance.
+ object_classes = np.random.randint(
+ 100, size=(num_instances)).astype(np.int64)
+
+ example = tf.train.Example(
+ features=tf.train.Features(
+ feature={
+ 'image/encoded':
+ dataset_util.bytes_feature(encoded_jpeg),
+ 'image/format':
+ dataset_util.bytes_feature('jpeg'),
+ 'image/height':
+ dataset_util.int64_feature(image_height),
+ 'image/width':
+ dataset_util.int64_feature(image_width),
+ 'image/object/mask':
+ dataset_util.float_list_feature(instance_masks_flattened),
+ 'image/object/class/label':
+ dataset_util.int64_list_feature(object_classes)
+ })).SerializeToString()
+ example_decoder = tf_example_decoder.TfExampleDecoder()
+ tensor_dict = example_decoder.decode(tf.convert_to_tensor(example))
+ self.assertTrue(
+ fields.InputDataFields.groundtruth_instance_masks not in tensor_dict)
+
+ def testDecodeImageLabels(self):
+ image_tensor = np.random.randint(256, size=(4, 5, 3)).astype(np.uint8)
+ encoded_jpeg = self._EncodeImage(image_tensor)
+ example = tf.train.Example(
+ features=tf.train.Features(
+ feature={
+ 'image/encoded': dataset_util.bytes_feature(encoded_jpeg),
+ 'image/format': dataset_util.bytes_feature('jpeg'),
+ 'image/class/label': dataset_util.int64_list_feature([1, 2]),
+ })).SerializeToString()
+ example_decoder = tf_example_decoder.TfExampleDecoder()
+ tensor_dict = example_decoder.decode(tf.convert_to_tensor(example))
+ with self.test_session() as sess:
+ tensor_dict = sess.run(tensor_dict)
+ self.assertTrue(
+ fields.InputDataFields.groundtruth_image_classes in tensor_dict)
+ self.assertAllEqual(
+ tensor_dict[fields.InputDataFields.groundtruth_image_classes],
+ np.array([1, 2]))
+ example = tf.train.Example(
+ features=tf.train.Features(
+ feature={
+ 'image/encoded':
+ dataset_util.bytes_feature(encoded_jpeg),
+ 'image/format':
+ dataset_util.bytes_feature('jpeg'),
+ 'image/class/text':
+ dataset_util.bytes_list_feature(['dog', 'cat']),
+ })).SerializeToString()
+ label_map_string = """
+ item {
+ id:3
+ name:'cat'
+ }
+ item {
+ id:1
+ name:'dog'
+ }
+ """
+ label_map_path = os.path.join(self.get_temp_dir(), 'label_map.pbtxt')
+ with tf.gfile.Open(label_map_path, 'wb') as f:
+ f.write(label_map_string)
+ example_decoder = tf_example_decoder.TfExampleDecoder(
+ label_map_proto_file=label_map_path)
+ tensor_dict = example_decoder.decode(tf.convert_to_tensor(example))
+ with self.test_session() as sess:
+ sess.run(tf.tables_initializer())
+ tensor_dict = sess.run(tensor_dict)
+ self.assertTrue(
+ fields.InputDataFields.groundtruth_image_classes in tensor_dict)
+ self.assertAllEqual(
+ tensor_dict[fields.InputDataFields.groundtruth_image_classes],
+ np.array([1, 3]))
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/__init__.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/__init__.py"
new file mode 100644
index 00000000..e69de29b
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_coco_tf_record.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_coco_tf_record.py"
new file mode 100644
index 00000000..7f2bd1fb
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_coco_tf_record.py"
@@ -0,0 +1,282 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+r"""Convert raw COCO dataset to TFRecord for object_detection.
+
+Please note that this tool creates sharded output files.
+
+Example usage:
+ python create_coco_tf_record.py --logtostderr \
+ --train_image_dir="${TRAIN_IMAGE_DIR}" \
+ --val_image_dir="${VAL_IMAGE_DIR}" \
+ --test_image_dir="${TEST_IMAGE_DIR}" \
+ --train_annotations_file="${TRAIN_ANNOTATIONS_FILE}" \
+ --val_annotations_file="${VAL_ANNOTATIONS_FILE}" \
+ --testdev_annotations_file="${TESTDEV_ANNOTATIONS_FILE}" \
+ --output_dir="${OUTPUT_DIR}"
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import hashlib
+import io
+import json
+import os
+import contextlib2
+import numpy as np
+import PIL.Image
+
+from pycocotools import mask
+import tensorflow as tf
+
+from object_detection.dataset_tools import tf_record_creation_util
+from object_detection.utils import dataset_util
+from object_detection.utils import label_map_util
+
+
+flags = tf.app.flags
+tf.flags.DEFINE_boolean('include_masks', False,
+ 'Whether to include instance segmentation masks '
+ '(PNG encoded) in the result. default: False.')
+tf.flags.DEFINE_string('train_image_dir', '',
+ 'Training image directory.')
+tf.flags.DEFINE_string('val_image_dir', '',
+ 'Validation image directory.')
+tf.flags.DEFINE_string('test_image_dir', '',
+ 'Test image directory.')
+tf.flags.DEFINE_string('train_annotations_file', '',
+ 'Training annotations JSON file.')
+tf.flags.DEFINE_string('val_annotations_file', '',
+ 'Validation annotations JSON file.')
+tf.flags.DEFINE_string('testdev_annotations_file', '',
+ 'Test-dev annotations JSON file.')
+tf.flags.DEFINE_string('output_dir', '/tmp/', 'Output data directory.')
+
+FLAGS = flags.FLAGS
+
+tf.logging.set_verbosity(tf.logging.INFO)
+
+
+def create_tf_example(image,
+ annotations_list,
+ image_dir,
+ category_index,
+ include_masks=False):
+ """Converts image and annotations to a tf.Example proto.
+
+ Args:
+ image: dict with keys:
+ [u'license', u'file_name', u'coco_url', u'height', u'width',
+ u'date_captured', u'flickr_url', u'id']
+ annotations_list:
+ list of dicts with keys:
+ [u'segmentation', u'area', u'iscrowd', u'image_id',
+ u'bbox', u'category_id', u'id']
+ Notice that bounding box coordinates in the official COCO dataset are
+ given as [x, y, width, height] tuples using absolute coordinates where
+ x, y represent the top-left (0-indexed) corner. This function converts
+ to the format expected by the TensorFlow Object Detection API (which is
+ [ymin, xmin, ymax, xmax] with coordinates normalized relative
+ to image size).
+ image_dir: directory containing the image files.
+ category_index: a dict containing COCO category information keyed
+ by the 'id' field of each category. See the
+ label_map_util.create_category_index function.
+ include_masks: Whether to include instance segmentation masks
+ (PNG encoded) in the result. default: False.
+ Returns:
+ example: The converted tf.Example
+ num_annotations_skipped: Number of (invalid) annotations that were ignored.
+
+ Raises:
+ ValueError: if the image pointed to by data['filename'] is not a valid JPEG
+ """
+ image_height = image['height']
+ image_width = image['width']
+ filename = image['file_name']
+ image_id = image['id']
+
+ full_path = os.path.join(image_dir, filename)
+ with tf.gfile.GFile(full_path, 'rb') as fid:
+ encoded_jpg = fid.read()
+ encoded_jpg_io = io.BytesIO(encoded_jpg)
+ image = PIL.Image.open(encoded_jpg_io)
+ key = hashlib.sha256(encoded_jpg).hexdigest()
+
+ xmin = []
+ xmax = []
+ ymin = []
+ ymax = []
+ is_crowd = []
+ category_names = []
+ category_ids = []
+ area = []
+ encoded_mask_png = []
+ num_annotations_skipped = 0
+ for object_annotations in annotations_list:
+ (x, y, width, height) = tuple(object_annotations['bbox'])
+ if width <= 0 or height <= 0:
+ num_annotations_skipped += 1
+ continue
+ if x + width > image_width or y + height > image_height:
+ num_annotations_skipped += 1
+ continue
+ xmin.append(float(x) / image_width)
+ xmax.append(float(x + width) / image_width)
+ ymin.append(float(y) / image_height)
+ ymax.append(float(y + height) / image_height)
+ is_crowd.append(object_annotations['iscrowd'])
+ category_id = int(object_annotations['category_id'])
+ category_ids.append(category_id)
+ category_names.append(category_index[category_id]['name'].encode('utf8'))
+ area.append(object_annotations['area'])
+
+ if include_masks:
+ run_len_encoding = mask.frPyObjects(object_annotations['segmentation'],
+ image_height, image_width)
+ binary_mask = mask.decode(run_len_encoding)
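+ # Shape note (per pycocotools' documented behaviour): polygon
+ # segmentations decode to one channel per polygon (height x width x
+ # num_polygons), so the channels are merged with np.amax below, while a
+ # crowd annotation decodes to a single 2-D mask and is used as-is.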
+ if not object_annotations['iscrowd']:
+ binary_mask = np.amax(binary_mask, axis=2)
+ pil_image = PIL.Image.fromarray(binary_mask)
+ output_io = io.BytesIO()
+ pil_image.save(output_io, format='PNG')
+ encoded_mask_png.append(output_io.getvalue())
+ feature_dict = {
+ 'image/height':
+ dataset_util.int64_feature(image_height),
+ 'image/width':
+ dataset_util.int64_feature(image_width),
+ 'image/filename':
+ dataset_util.bytes_feature(filename.encode('utf8')),
+ 'image/source_id':
+ dataset_util.bytes_feature(str(image_id).encode('utf8')),
+ 'image/key/sha256':
+ dataset_util.bytes_feature(key.encode('utf8')),
+ 'image/encoded':
+ dataset_util.bytes_feature(encoded_jpg),
+ 'image/format':
+ dataset_util.bytes_feature('jpeg'.encode('utf8')),
+ 'image/object/bbox/xmin':
+ dataset_util.float_list_feature(xmin),
+ 'image/object/bbox/xmax':
+ dataset_util.float_list_feature(xmax),
+ 'image/object/bbox/ymin':
+ dataset_util.float_list_feature(ymin),
+ 'image/object/bbox/ymax':
+ dataset_util.float_list_feature(ymax),
+ 'image/object/class/text':
+ dataset_util.bytes_list_feature(category_names),
+ 'image/object/is_crowd':
+ dataset_util.int64_list_feature(is_crowd),
+ 'image/object/area':
+ dataset_util.float_list_feature(area),
+ }
+ if include_masks:
+ feature_dict['image/object/mask'] = (
+ dataset_util.bytes_list_feature(encoded_mask_png))
+ example = tf.train.Example(features=tf.train.Features(feature=feature_dict))
+ return key, example, num_annotations_skipped
+
+
+def _create_tf_record_from_coco_annotations(
+ annotations_file, image_dir, output_path, include_masks, num_shards):
+ """Loads COCO annotation json files and converts to tf.Record format.
+
+ Args:
+ annotations_file: JSON file containing bounding box annotations.
+ image_dir: Directory containing the image files.
+ output_path: Path to output tf.Record file.
+ include_masks: Whether to include instance segmentation masks
+ (PNG encoded) in the result. default: False.
+ num_shards: number of output file shards.
+ """
+ with contextlib2.ExitStack() as tf_record_close_stack, \
+ tf.gfile.GFile(annotations_file, 'r') as fid:
+ output_tfrecords = tf_record_creation_util.open_sharded_output_tfrecords(
+ tf_record_close_stack, output_path, num_shards)
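+ # Sharded-output sketch (the naming convention exercised in
+ # create_coco_tf_record_test.py): with output_path 'out.record' and
+ # num_shards=2, records are spread across 'out.record-00000-of-00002' and
+ # 'out.record-00001-of-00002'.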
+ groundtruth_data = json.load(fid)
+ images = groundtruth_data['images']
+ category_index = label_map_util.create_category_index(
+ groundtruth_data['categories'])
+
+ annotations_index = {}
+ if 'annotations' in groundtruth_data:
+ tf.logging.info(
+ 'Found groundtruth annotations. Building annotations index.')
+ for annotation in groundtruth_data['annotations']:
+ image_id = annotation['image_id']
+ if image_id not in annotations_index:
+ annotations_index[image_id] = []
+ annotations_index[image_id].append(annotation)
+ missing_annotation_count = 0
+ for image in images:
+ image_id = image['id']
+ if image_id not in annotations_index:
+ missing_annotation_count += 1
+ annotations_index[image_id] = []
+ tf.logging.info('%d images are missing annotations.',
+ missing_annotation_count)
+
+ total_num_annotations_skipped = 0
+ for idx, image in enumerate(images):
+ if idx % 100 == 0:
+ tf.logging.info('On image %d of %d', idx, len(images))
+ annotations_list = annotations_index[image['id']]
+ _, tf_example, num_annotations_skipped = create_tf_example(
+ image, annotations_list, image_dir, category_index, include_masks)
+ total_num_annotations_skipped += num_annotations_skipped
+ shard_idx = idx % num_shards
+ output_tfrecords[shard_idx].write(tf_example.SerializeToString())
+ tf.logging.info('Finished writing, skipped %d annotations.',
+ total_num_annotations_skipped)
+
+
+def main(_):
+ assert FLAGS.train_image_dir, '`train_image_dir` missing.'
+ assert FLAGS.val_image_dir, '`val_image_dir` missing.'
+ assert FLAGS.test_image_dir, '`test_image_dir` missing.'
+ assert FLAGS.train_annotations_file, '`train_annotations_file` missing.'
+ assert FLAGS.val_annotations_file, '`val_annotations_file` missing.'
+ assert FLAGS.testdev_annotations_file, '`testdev_annotations_file` missing.'
+
+ if not tf.gfile.IsDirectory(FLAGS.output_dir):
+ tf.gfile.MakeDirs(FLAGS.output_dir)
+ train_output_path = os.path.join(FLAGS.output_dir, 'coco_train.record')
+ val_output_path = os.path.join(FLAGS.output_dir, 'coco_val.record')
+ testdev_output_path = os.path.join(FLAGS.output_dir, 'coco_testdev.record')
+
+ _create_tf_record_from_coco_annotations(
+ FLAGS.train_annotations_file,
+ FLAGS.train_image_dir,
+ train_output_path,
+ FLAGS.include_masks,
+ num_shards=100)
+ _create_tf_record_from_coco_annotations(
+ FLAGS.val_annotations_file,
+ FLAGS.val_image_dir,
+ val_output_path,
+ FLAGS.include_masks,
+ num_shards=10)
+ _create_tf_record_from_coco_annotations(
+ FLAGS.testdev_annotations_file,
+ FLAGS.test_image_dir,
+ testdev_output_path,
+ FLAGS.include_masks,
+ num_shards=100)
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_coco_tf_record_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_coco_tf_record_test.py"
new file mode 100644
index 00000000..b99fd12b
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_coco_tf_record_test.py"
@@ -0,0 +1,251 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Test for create_coco_tf_record.py."""
+
+import io
+import json
+import os
+
+import numpy as np
+import PIL.Image
+import tensorflow as tf
+
+from object_detection.dataset_tools import create_coco_tf_record
+
+
+class CreateCocoTFRecordTest(tf.test.TestCase):
+
+ def _assertProtoEqual(self, proto_field, expectation):
+ """Helper function to assert if a proto field equals some value.
+
+ Args:
+ proto_field: The protobuf field to compare.
+ expectation: The expected value of the protobuf field.
+ """
+ proto_list = [p for p in proto_field]
+ self.assertListEqual(proto_list, expectation)
+
+ def test_create_tf_example(self):
+ image_file_name = 'tmp_image.jpg'
+ image_data = np.random.rand(256, 256, 3)
+ tmp_dir = self.get_temp_dir()
+ save_path = os.path.join(tmp_dir, image_file_name)
+ image = PIL.Image.fromarray(image_data, 'RGB')
+ image.save(save_path)
+
+ image = {
+ 'file_name': image_file_name,
+ 'height': 256,
+ 'width': 256,
+ 'id': 11,
+ }
+
+ annotations_list = [{
+ 'area': .5,
+ 'iscrowd': False,
+ 'image_id': 11,
+ 'bbox': [64, 64, 128, 128],
+ 'category_id': 2,
+ 'id': 1000,
+ }]
+
+ image_dir = tmp_dir
+ category_index = {
+ 1: {
+ 'name': 'dog',
+ 'id': 1
+ },
+ 2: {
+ 'name': 'cat',
+ 'id': 2
+ },
+ 3: {
+ 'name': 'human',
+ 'id': 3
+ }
+ }
+
+ (_, example,
+ num_annotations_skipped) = create_coco_tf_record.create_tf_example(
+ image, annotations_list, image_dir, category_index)
+
+ self.assertEqual(num_annotations_skipped, 0)
+ self._assertProtoEqual(
+ example.features.feature['image/height'].int64_list.value, [256])
+ self._assertProtoEqual(
+ example.features.feature['image/width'].int64_list.value, [256])
+ self._assertProtoEqual(
+ example.features.feature['image/filename'].bytes_list.value,
+ [image_file_name])
+ self._assertProtoEqual(
+ example.features.feature['image/source_id'].bytes_list.value,
+ [str(image['id'])])
+ self._assertProtoEqual(
+ example.features.feature['image/format'].bytes_list.value, ['jpeg'])
+ self._assertProtoEqual(
+ example.features.feature['image/object/bbox/xmin'].float_list.value,
+ [0.25])
+ self._assertProtoEqual(
+ example.features.feature['image/object/bbox/ymin'].float_list.value,
+ [0.25])
+ self._assertProtoEqual(
+ example.features.feature['image/object/bbox/xmax'].float_list.value,
+ [0.75])
+ self._assertProtoEqual(
+ example.features.feature['image/object/bbox/ymax'].float_list.value,
+ [0.75])
+ self._assertProtoEqual(
+ example.features.feature['image/object/class/text'].bytes_list.value,
+ ['cat'])
+
+ def test_create_tf_example_with_instance_masks(self):
+ image_file_name = 'tmp_image.jpg'
+ image_data = np.random.rand(8, 8, 3)
+ tmp_dir = self.get_temp_dir()
+ save_path = os.path.join(tmp_dir, image_file_name)
+ image = PIL.Image.fromarray(image_data, 'RGB')
+ image.save(save_path)
+
+ image = {
+ 'file_name': image_file_name,
+ 'height': 8,
+ 'width': 8,
+ 'id': 11,
+ }
+
+ annotations_list = [{
+ 'area': .5,
+ 'iscrowd': False,
+ 'image_id': 11,
+ 'bbox': [0, 0, 8, 8],
+ 'segmentation': [[4, 0, 0, 0, 0, 4], [8, 4, 4, 8, 8, 8]],
+ 'category_id': 1,
+ 'id': 1000,
+ }]
+
+ image_dir = tmp_dir
+ category_index = {
+ 1: {
+ 'name': 'dog',
+ 'id': 1
+ },
+ }
+
+ (_, example,
+ num_annotations_skipped) = create_coco_tf_record.create_tf_example(
+ image, annotations_list, image_dir, category_index, include_masks=True)
+
+ self.assertEqual(num_annotations_skipped, 0)
+ self._assertProtoEqual(
+ example.features.feature['image/height'].int64_list.value, [8])
+ self._assertProtoEqual(
+ example.features.feature['image/width'].int64_list.value, [8])
+ self._assertProtoEqual(
+ example.features.feature['image/filename'].bytes_list.value,
+ [image_file_name])
+ self._assertProtoEqual(
+ example.features.feature['image/source_id'].bytes_list.value,
+ [str(image['id'])])
+ self._assertProtoEqual(
+ example.features.feature['image/format'].bytes_list.value, ['jpeg'])
+ self._assertProtoEqual(
+ example.features.feature['image/object/bbox/xmin'].float_list.value,
+ [0])
+ self._assertProtoEqual(
+ example.features.feature['image/object/bbox/ymin'].float_list.value,
+ [0])
+ self._assertProtoEqual(
+ example.features.feature['image/object/bbox/xmax'].float_list.value,
+ [1])
+ self._assertProtoEqual(
+ example.features.feature['image/object/bbox/ymax'].float_list.value,
+ [1])
+ self._assertProtoEqual(
+ example.features.feature['image/object/class/text'].bytes_list.value,
+ ['dog'])
+ encoded_mask_pngs = [
+ io.BytesIO(encoded_masks) for encoded_masks in example.features.feature[
+ 'image/object/mask'].bytes_list.value
+ ]
+ pil_masks = [
+ np.array(PIL.Image.open(encoded_mask_png))
+ for encoded_mask_png in encoded_mask_pngs
+ ]
+ self.assertTrue(len(pil_masks) == 1)
+ self.assertAllEqual(pil_masks[0],
+ [[1, 1, 1, 0, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0, 0, 0],
+ [1, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 1], [0, 0, 0, 0, 0, 0, 1, 1],
+ [0, 0, 0, 0, 0, 1, 1, 1], [0, 0, 0, 0, 1, 1, 1, 1]])
+
+ def test_create_sharded_tf_record(self):
+ tmp_dir = self.get_temp_dir()
+ image_paths = ['tmp1_image.jpg', 'tmp2_image.jpg']
+ for image_path in image_paths:
+ image_data = np.random.rand(256, 256, 3)
+ save_path = os.path.join(tmp_dir, image_path)
+ image = PIL.Image.fromarray(image_data, 'RGB')
+ image.save(save_path)
+
+ images = [{
+ 'file_name': image_paths[0],
+ 'height': 256,
+ 'width': 256,
+ 'id': 11,
+ }, {
+ 'file_name': image_paths[1],
+ 'height': 256,
+ 'width': 256,
+ 'id': 12,
+ }]
+
+ annotations = [{
+ 'area': .5,
+ 'iscrowd': False,
+ 'image_id': 11,
+ 'bbox': [64, 64, 128, 128],
+ 'category_id': 2,
+ 'id': 1000,
+ }]
+
+ category_index = [{
+ 'name': 'dog',
+ 'id': 1
+ }, {
+ 'name': 'cat',
+ 'id': 2
+ }, {
+ 'name': 'human',
+ 'id': 3
+ }]
+ groundtruth_data = {'images': images, 'annotations': annotations,
+ 'categories': category_index}
+ annotation_file = os.path.join(tmp_dir, 'annotation.json')
+ with open(annotation_file, 'w') as annotation_fid:
+ json.dump(groundtruth_data, annotation_fid)
+
+ output_path = os.path.join(tmp_dir, 'out.record')
+ create_coco_tf_record._create_tf_record_from_coco_annotations(
+ annotation_file,
+ tmp_dir,
+ output_path,
+ False,
+ 2)
+ self.assertTrue(os.path.exists(output_path + '-00000-of-00002'))
+ self.assertTrue(os.path.exists(output_path + '-00001-of-00002'))
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_face_tf_record.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_face_tf_record.py"
new file mode 100644
index 00000000..e7edb3b1
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_face_tf_record.py"
@@ -0,0 +1,184 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+r"""Convert raw PASCAL dataset to TFRecord for object_detection.
+
+Example usage:
+ python object_detection/dataset_tools/create_pascal_tf_record.py \
+ --data_dir=/home/user/VOCdevkit \
+ --year=VOC2012 \
+ --output_path=/home/user/pascal.record
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import hashlib
+import io
+import logging
+import os
+
+from lxml import etree
+import PIL.Image
+import tensorflow as tf
+
+from object_detection.utils import dataset_util
+from object_detection.utils import label_map_util
+
+
+flags = tf.app.flags
+flags.DEFINE_string('data_dir', '', 'Root directory to raw PASCAL VOC dataset.')
+flags.DEFINE_string('set', 'train', 'Convert training set, validation set or '
+ 'merged set.')
+flags.DEFINE_string('annotations_dir', 'Annotations',
+ '(Relative) path to annotations directory.')
+flags.DEFINE_string('year', 'VOC2007', 'Desired challenge year.')
+flags.DEFINE_string('output_path', '', 'Path to output TFRecord')
+flags.DEFINE_string('label_map_path', 'object_detection/data/face_label_map.pbtxt',
+ 'Path to label map proto')
+flags.DEFINE_boolean('ignore_difficult_instances', False, 'Whether to ignore '
+ 'difficult instances')
+FLAGS = flags.FLAGS
+
+SETS = ['train', 'val', 'trainval', 'test']
+YEARS = ['fddb', 'widerface']
+
+
+def dict_to_tf_example(data,
+ dataset_directory,
+ label_map_dict,
+ ignore_difficult_instances=False,
+ image_subdirectory='JPEGImages'):
+ """Convert XML derived dict to tf.Example proto.
+
+ Notice that this function normalizes the bounding box coordinates provided
+ by the raw data.
+
+ Args:
+ data: dict holding PASCAL XML fields for a single image (obtained by
+ running dataset_util.recursive_parse_xml_to_dict)
+ dataset_directory: Path to root directory holding PASCAL dataset
+ label_map_dict: A map from string label names to integers ids.
+ ignore_difficult_instances: Whether to skip difficult instances in the
+ dataset (default: False).
+ image_subdirectory: String specifying subdirectory within the
+ PASCAL dataset directory holding the actual image data.
+
+ Returns:
+ example: The converted tf.Example.
+
+ Raises:
+ ValueError: if the image pointed to by data['filename'] is not a valid JPEG
+ """
+ img_path = os.path.join(data['folder'], image_subdirectory, data['filename'])
+ full_path = os.path.join(dataset_directory, img_path)
+ with tf.gfile.GFile(full_path, 'rb') as fid:
+ encoded_jpg = fid.read()
+ encoded_jpg_io = io.BytesIO(encoded_jpg)
+ image = PIL.Image.open(encoded_jpg_io)
+ if image.format != 'JPEG':
+ raise ValueError('Image format not JPEG')
+ key = hashlib.sha256(encoded_jpg).hexdigest()
+
+ width = int(data['size']['width'])
+ height = int(data['size']['height'])
+
+ xmin = []
+ ymin = []
+ xmax = []
+ ymax = []
+ classes = []
+ classes_text = []
+ truncated = []
+ poses = []
+ difficult_obj = []
+ if 'object' in data:
+ for obj in data['object']:
+ difficult = bool(int(obj['difficult']))
+ if ignore_difficult_instances and difficult:
+ continue
+
+ difficult_obj.append(int(difficult))
+
+ xmin.append(float(obj['bndbox']['xmin']) / width)
+ ymin.append(float(obj['bndbox']['ymin']) / height)
+ xmax.append(float(obj['bndbox']['xmax']) / width)
+ ymax.append(float(obj['bndbox']['ymax']) / height)
+ classes_text.append(obj['name'].encode('utf8'))
+ classes.append(label_map_dict[obj['name']])
+ truncated.append(int(obj['truncated']))
+ poses.append(obj['pose'].encode('utf8'))
+
+ example = tf.train.Example(features=tf.train.Features(feature={
+ 'image/height': dataset_util.int64_feature(height),
+ 'image/width': dataset_util.int64_feature(width),
+ 'image/filename': dataset_util.bytes_feature(
+ data['filename'].encode('utf8')),
+ 'image/source_id': dataset_util.bytes_feature(
+ data['filename'].encode('utf8')),
+ 'image/key/sha256': dataset_util.bytes_feature(key.encode('utf8')),
+ 'image/encoded': dataset_util.bytes_feature(encoded_jpg),
+ 'image/format': dataset_util.bytes_feature('jpeg'.encode('utf8')),
+ 'image/object/bbox/xmin': dataset_util.float_list_feature(xmin),
+ 'image/object/bbox/xmax': dataset_util.float_list_feature(xmax),
+ 'image/object/bbox/ymin': dataset_util.float_list_feature(ymin),
+ 'image/object/bbox/ymax': dataset_util.float_list_feature(ymax),
+ 'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
+ 'image/object/class/label': dataset_util.int64_list_feature(classes),
+ 'image/object/difficult': dataset_util.int64_list_feature(difficult_obj),
+ 'image/object/truncated': dataset_util.int64_list_feature(truncated),
+ 'image/object/view': dataset_util.bytes_list_feature(poses),
+ }))
+ return example
+
+
+def main(_):
+ if FLAGS.set not in SETS:
+ raise ValueError('set must be in : {}'.format(SETS))
+ if FLAGS.year not in YEARS:
+ raise ValueError('year must be in : {}'.format(YEARS))
+
+ data_dir = FLAGS.data_dir
+ years = ['fddb', 'widerface']
+ if FLAGS.year != 'merged':
+ years = [FLAGS.year]
+
+ writer = tf.python_io.TFRecordWriter(FLAGS.output_path)
+
+ label_map_dict = label_map_util.get_label_map_dict(FLAGS.label_map_path)
+
+ for year in years:
+ logging.info('Reading from %s dataset.', year)
+ examples_path = os.path.join(data_dir, year, 'ImageSets', 'Main', FLAGS.set + '.txt')
+ annotations_dir = os.path.join(data_dir, year, FLAGS.annotations_dir)
+ examples_list = dataset_util.read_examples_list(examples_path)
+ for idx, example in enumerate(examples_list):
+ if idx % 100 == 0:
+ logging.info('On image %d of %d', idx, len(examples_list))
+ path = os.path.join(annotations_dir, example + '.xml')
+ with tf.gfile.GFile(path, 'r') as fid:
+ xml_str = fid.read()
+ xml = etree.fromstring(xml_str)
+ data = dataset_util.recursive_parse_xml_to_dict(xml)['annotation']
+
+ tf_example = dict_to_tf_example(data, FLAGS.data_dir, label_map_dict,
+ FLAGS.ignore_difficult_instances)
+ writer.write(tf_example.SerializeToString())
+
+ writer.close()
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_kitti_tf_record.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_kitti_tf_record.py"
new file mode 100644
index 00000000..c612db99
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_kitti_tf_record.py"
@@ -0,0 +1,310 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+r"""Convert raw KITTI detection dataset to TFRecord for object_detection.
+
+Converts the KITTI detection dataset to TFRecords in a standard format that
+ allows this dataset to be used to train object detectors. The raw dataset can be
+ downloaded from:
+ http://kitti.is.tue.mpg.de/kitti/data_object_image_2.zip.
+ http://kitti.is.tue.mpg.de/kitti/data_object_label_2.zip
+ Permission can be requested at the main website.
+
+ KITTI detection dataset contains 7481 training images. Using this code with
+ the default settings will set aside the first 500 images as a validation set.
+ This can be altered using the flags; see details below.
+
+Example usage:
+ python object_detection/dataset_tools/create_kitti_tf_record.py \
+ --data_dir=/home/user/kitti \
+ --output_path=/home/user/kitti.record
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+
+import hashlib
+import io
+import os
+
+import numpy as np
+import PIL.Image as pil
+import tensorflow as tf
+
+from object_detection.utils import dataset_util
+from object_detection.utils import label_map_util
+from object_detection.utils.np_box_ops import iou
+
+tf.app.flags.DEFINE_string('data_dir', '', 'Location of root directory for the '
+ 'data. Folder structure is assumed to be: '
+ '/training/label_2 (annotations) and '
+ '/data_object_image_2/training/image_2 '
+ '(images).')
+tf.app.flags.DEFINE_string('output_path', '', 'Path to which TFRecord files '
+ 'will be written. The TFRecord with the training set '
+ 'will be located at: _train.tfrecord. '
+ 'And the TFRecord with the validation set will be '
+ 'located at: _val.tfrecord')
+tf.app.flags.DEFINE_string('classes_to_use', 'car,pedestrian,dontcare',
+ 'Comma-separated list of class names that will be '
+ 'used. Adding the dontcare class will remove all '
+ 'bboxes in the dontcare regions.')
+tf.app.flags.DEFINE_string('label_map_path', 'data/kitti_label_map.pbtxt',
+ 'Path to label map proto.')
+tf.app.flags.DEFINE_integer('validation_set_size', 500, 'Number of images to '
+ 'be used as a validation set.')
+FLAGS = tf.app.flags.FLAGS
+
+
+def convert_kitti_to_tfrecords(data_dir, output_path, classes_to_use,
+ label_map_path, validation_set_size):
+ """Convert the KITTI detection dataset to TFRecords.
+
+ Args:
+ data_dir: The full path to the unzipped folder containing the unzipped data
+ from data_object_image_2 and data_object_label_2.zip.
+ Folder structure is assumed to be: data_dir/training/label_2 (annotations)
+ and data_dir/data_object_image_2/training/image_2 (images).
+ output_path: The path to which TFRecord files will be written. The TFRecord
+ with the training set will be located at: _train.tfrecord
+ And the TFRecord with the validation set will be located at:
+ _val.tfrecord
+ classes_to_use: List of strings naming the classes for which data should be
+ converted. Use the same names as presented in the KITTI README file.
+ Adding the dontcare class will remove all other bounding boxes that overlap
+ with areas marked as dontcare regions.
+ label_map_path: Path to label map proto
+ validation_set_size: How many images should be left as the validation set.
+ (The first `validation_set_size` examples are selected to be in the
+ validation set).
+ """
+ label_map_dict = label_map_util.get_label_map_dict(label_map_path)
+ train_count = 0
+ val_count = 0
+
+ annotation_dir = os.path.join(data_dir,
+ 'training',
+ 'label_2')
+
+ image_dir = os.path.join(data_dir,
+ 'data_object_image_2',
+ 'training',
+ 'image_2')
+
+ train_writer = tf.python_io.TFRecordWriter('%s_train.tfrecord'%
+ output_path)
+ val_writer = tf.python_io.TFRecordWriter('%s_val.tfrecord'%
+ output_path)
+
+ images = sorted(tf.gfile.ListDirectory(image_dir))
+ for img_name in images:
+ img_num = int(img_name.split('.')[0])
+ is_validation_img = img_num < validation_set_size
+ img_anno = read_annotation_file(os.path.join(annotation_dir,
+ str(img_num).zfill(6)+'.txt'))
+
+ image_path = os.path.join(image_dir, img_name)
+
+ # Filter all bounding boxes of this frame that are of a legal class, and
+ # don't overlap with a dontcare region.
+ # TODO(talremez) filter out targets that are truncated or heavily occluded.
+ annotation_for_image = filter_annotations(img_anno, classes_to_use)
+
+ example = prepare_example(image_path, annotation_for_image, label_map_dict)
+ if is_validation_img:
+ val_writer.write(example.SerializeToString())
+ val_count += 1
+ else:
+ train_writer.write(example.SerializeToString())
+ train_count += 1
+
+ train_writer.close()
+ val_writer.close()
+
+
+def prepare_example(image_path, annotations, label_map_dict):
+ """Converts a dictionary with annotations for an image to tf.Example proto.
+
+ Args:
+ image_path: The complete path to image.
+ annotations: A dictionary of annotation arrays describing all objects that
+ appear in the image (the output of filter_annotations).
+ label_map_dict: A map from string label names to integer ids.
+
+ Returns:
+ example: The converted tf.Example.
+ """
+ with tf.gfile.GFile(image_path, 'rb') as fid:
+ encoded_png = fid.read()
+ encoded_png_io = io.BytesIO(encoded_png)
+ image = pil.open(encoded_png_io)
+ image = np.asarray(image)
+
+ key = hashlib.sha256(encoded_png).hexdigest()
+
+ width = int(image.shape[1])
+ height = int(image.shape[0])
+
+ xmin_norm = annotations['2d_bbox_left'] / float(width)
+ ymin_norm = annotations['2d_bbox_top'] / float(height)
+ xmax_norm = annotations['2d_bbox_right'] / float(width)
+ ymax_norm = annotations['2d_bbox_bottom'] / float(height)
+
+ difficult_obj = [0]*len(xmin_norm)
+
+ example = tf.train.Example(features=tf.train.Features(feature={
+ 'image/height': dataset_util.int64_feature(height),
+ 'image/width': dataset_util.int64_feature(width),
+ 'image/filename': dataset_util.bytes_feature(image_path.encode('utf8')),
+ 'image/source_id': dataset_util.bytes_feature(image_path.encode('utf8')),
+ 'image/key/sha256': dataset_util.bytes_feature(key.encode('utf8')),
+ 'image/encoded': dataset_util.bytes_feature(encoded_png),
+ 'image/format': dataset_util.bytes_feature('png'.encode('utf8')),
+ 'image/object/bbox/xmin': dataset_util.float_list_feature(xmin_norm),
+ 'image/object/bbox/xmax': dataset_util.float_list_feature(xmax_norm),
+ 'image/object/bbox/ymin': dataset_util.float_list_feature(ymin_norm),
+ 'image/object/bbox/ymax': dataset_util.float_list_feature(ymax_norm),
+ 'image/object/class/text': dataset_util.bytes_list_feature(
+ [x.encode('utf8') for x in annotations['type']]),
+ 'image/object/class/label': dataset_util.int64_list_feature(
+ [label_map_dict[x] for x in annotations['type']]),
+ 'image/object/difficult': dataset_util.int64_list_feature(difficult_obj),
+ 'image/object/truncated': dataset_util.float_list_feature(
+ annotations['truncated']),
+ 'image/object/alpha': dataset_util.float_list_feature(
+ annotations['alpha']),
+ 'image/object/3d_bbox/height': dataset_util.float_list_feature(
+ annotations['3d_bbox_height']),
+ 'image/object/3d_bbox/width': dataset_util.float_list_feature(
+ annotations['3d_bbox_width']),
+ 'image/object/3d_bbox/length': dataset_util.float_list_feature(
+ annotations['3d_bbox_length']),
+ 'image/object/3d_bbox/x': dataset_util.float_list_feature(
+ annotations['3d_bbox_x']),
+ 'image/object/3d_bbox/y': dataset_util.float_list_feature(
+ annotations['3d_bbox_y']),
+ 'image/object/3d_bbox/z': dataset_util.float_list_feature(
+ annotations['3d_bbox_z']),
+ 'image/object/3d_bbox/rot_y': dataset_util.float_list_feature(
+ annotations['3d_bbox_rot_y']),
+ }))
+
+ return example
+
+
+def filter_annotations(img_all_annotations, used_classes):
+ """Filters out annotations from the unused classes and dontcare regions.
+
+ Filters out the annotations that belong to classes we do not wish to use and
+ (optionally) also removes all boxes that overlap with dontcare regions.
+
+ Args:
+    img_all_annotations: A dictionary holding the annotations of a single
+      image. See read_annotation_file for details about the format.
+    used_classes: A list of strings listing the classes we want to keep; if the
+      list contains "dontcare", all bounding boxes overlapping with dontcare
+      regions will also be filtered out.
+
+ Returns:
+    img_filtered_annotations: A dictionary with the same keys as the input,
+      containing only the annotations that passed the filtering.
+ """
+
+ img_filtered_annotations = {}
+
+ # Filter the type of the objects.
+ relevant_annotation_indices = [
+ i for i, x in enumerate(img_all_annotations['type']) if x in used_classes
+ ]
+
+ for key in img_all_annotations.keys():
+ img_filtered_annotations[key] = (
+ img_all_annotations[key][relevant_annotation_indices])
+
+ if 'dontcare' in used_classes:
+ dont_care_indices = [i for i,
+ x in enumerate(img_filtered_annotations['type'])
+ if x == 'dontcare']
+
+ # bounding box format [y_min, x_min, y_max, x_max]
+ all_boxes = np.stack([img_filtered_annotations['2d_bbox_top'],
+ img_filtered_annotations['2d_bbox_left'],
+ img_filtered_annotations['2d_bbox_bottom'],
+ img_filtered_annotations['2d_bbox_right']],
+ axis=1)
+
+ ious = iou(boxes1=all_boxes,
+ boxes2=all_boxes[dont_care_indices])
+
+ # Remove all bounding boxes that overlap with a dontcare region.
+ if ious.size > 0:
+ boxes_to_remove = np.amax(ious, axis=1) > 0.0
+ for key in img_all_annotations.keys():
+ img_filtered_annotations[key] = (
+ img_filtered_annotations[key][np.logical_not(boxes_to_remove)])
+
+ return img_filtered_annotations
+
+
+def read_annotation_file(filename):
+ """Reads a KITTI annotation file.
+
+ Converts a KITTI annotation file into a dictionary containing all the
+ relevant information.
+
+ Args:
+    filename: The path to the annotation text file.
+
+ Returns:
+ anno: A dictionary with the converted annotation information. See annotation
+ README file for details on the different fields.
+ """
+ with open(filename) as f:
+ content = f.readlines()
+ content = [x.strip().split(' ') for x in content]
+
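+  # Each line of a KITTI label file describes one object with 15 fields:
+  #   type truncated occluded alpha  bbox(left, top, right, bottom)
+  #   dimensions(height, width, length)  location(x, y, z)  rotation_y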
+ anno = {}
+ anno['type'] = np.array([x[0].lower() for x in content])
+ anno['truncated'] = np.array([float(x[1]) for x in content])
+ anno['occluded'] = np.array([int(x[2]) for x in content])
+ anno['alpha'] = np.array([float(x[3]) for x in content])
+
+ anno['2d_bbox_left'] = np.array([float(x[4]) for x in content])
+ anno['2d_bbox_top'] = np.array([float(x[5]) for x in content])
+ anno['2d_bbox_right'] = np.array([float(x[6]) for x in content])
+ anno['2d_bbox_bottom'] = np.array([float(x[7]) for x in content])
+
+ anno['3d_bbox_height'] = np.array([float(x[8]) for x in content])
+ anno['3d_bbox_width'] = np.array([float(x[9]) for x in content])
+ anno['3d_bbox_length'] = np.array([float(x[10]) for x in content])
+ anno['3d_bbox_x'] = np.array([float(x[11]) for x in content])
+ anno['3d_bbox_y'] = np.array([float(x[12]) for x in content])
+ anno['3d_bbox_z'] = np.array([float(x[13]) for x in content])
+ anno['3d_bbox_rot_y'] = np.array([float(x[14]) for x in content])
+
+ return anno
+
+
+def main(_):
+ convert_kitti_to_tfrecords(
+ data_dir=FLAGS.data_dir,
+ output_path=FLAGS.output_path,
+ classes_to_use=FLAGS.classes_to_use.split(','),
+ label_map_path=FLAGS.label_map_path,
+ validation_set_size=FLAGS.validation_set_size)
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_kitti_tf_record_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_kitti_tf_record_test.py"
new file mode 100644
index 00000000..37ac4b8b
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_kitti_tf_record_test.py"
@@ -0,0 +1,130 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Test for create_kitti_tf_record.py."""
+
+import os
+
+import numpy as np
+import PIL.Image
+import tensorflow as tf
+
+from object_detection.dataset_tools import create_kitti_tf_record
+
+
+class CreateKittiTFRecordTest(tf.test.TestCase):
+
+ def _assertProtoEqual(self, proto_field, expectation):
+ """Helper function to assert if a proto field equals some value.
+
+ Args:
+ proto_field: The protobuf field to compare.
+ expectation: The expected value of the protobuf field.
+ """
+ proto_list = [p for p in proto_field]
+ self.assertListEqual(proto_list, expectation)
+
+ def test_dict_to_tf_example(self):
+ image_file_name = 'tmp_image.jpg'
+    # Use uint8 pixel data so PIL.Image.fromarray produces a valid RGB image.
+    image_data = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
+ save_path = os.path.join(self.get_temp_dir(), image_file_name)
+ image = PIL.Image.fromarray(image_data, 'RGB')
+ image.save(save_path)
+
+ annotations = {}
+ annotations['2d_bbox_left'] = np.array([64])
+ annotations['2d_bbox_top'] = np.array([64])
+ annotations['2d_bbox_right'] = np.array([192])
+ annotations['2d_bbox_bottom'] = np.array([192])
+ annotations['type'] = ['car']
+ annotations['truncated'] = np.array([1])
+ annotations['alpha'] = np.array([2])
+ annotations['3d_bbox_height'] = np.array([10])
+ annotations['3d_bbox_width'] = np.array([11])
+ annotations['3d_bbox_length'] = np.array([12])
+ annotations['3d_bbox_x'] = np.array([13])
+ annotations['3d_bbox_y'] = np.array([14])
+ annotations['3d_bbox_z'] = np.array([15])
+ annotations['3d_bbox_rot_y'] = np.array([4])
+
+ label_map_dict = {
+ 'background': 0,
+ 'car': 1,
+ }
+
+ example = create_kitti_tf_record.prepare_example(
+ save_path,
+ annotations,
+ label_map_dict)
+
+ self._assertProtoEqual(
+ example.features.feature['image/height'].int64_list.value, [256])
+ self._assertProtoEqual(
+ example.features.feature['image/width'].int64_list.value, [256])
+ self._assertProtoEqual(
+ example.features.feature['image/filename'].bytes_list.value,
+ [save_path])
+ self._assertProtoEqual(
+ example.features.feature['image/source_id'].bytes_list.value,
+ [save_path])
+ self._assertProtoEqual(
+ example.features.feature['image/format'].bytes_list.value, ['png'])
+ self._assertProtoEqual(
+ example.features.feature['image/object/bbox/xmin'].float_list.value,
+ [0.25])
+ self._assertProtoEqual(
+ example.features.feature['image/object/bbox/ymin'].float_list.value,
+ [0.25])
+ self._assertProtoEqual(
+ example.features.feature['image/object/bbox/xmax'].float_list.value,
+ [0.75])
+ self._assertProtoEqual(
+ example.features.feature['image/object/bbox/ymax'].float_list.value,
+ [0.75])
+ self._assertProtoEqual(
+ example.features.feature['image/object/class/text'].bytes_list.value,
+ ['car'])
+ self._assertProtoEqual(
+ example.features.feature['image/object/class/label'].int64_list.value,
+ [1])
+ self._assertProtoEqual(
+ example.features.feature['image/object/truncated'].float_list.value,
+ [1])
+ self._assertProtoEqual(
+ example.features.feature['image/object/alpha'].float_list.value,
+ [2])
+ self._assertProtoEqual(example.features.feature[
+ 'image/object/3d_bbox/height'].float_list.value, [10])
+ self._assertProtoEqual(
+ example.features.feature['image/object/3d_bbox/width'].float_list.value,
+ [11])
+ self._assertProtoEqual(example.features.feature[
+ 'image/object/3d_bbox/length'].float_list.value, [12])
+ self._assertProtoEqual(
+ example.features.feature['image/object/3d_bbox/x'].float_list.value,
+ [13])
+ self._assertProtoEqual(
+ example.features.feature['image/object/3d_bbox/y'].float_list.value,
+ [14])
+ self._assertProtoEqual(
+ example.features.feature['image/object/3d_bbox/z'].float_list.value,
+ [15])
+ self._assertProtoEqual(
+ example.features.feature['image/object/3d_bbox/rot_y'].float_list.value,
+ [4])
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_oid_tf_record.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_oid_tf_record.py"
new file mode 100644
index 00000000..26d9699c
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_oid_tf_record.py"
@@ -0,0 +1,117 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+r"""Creates TFRecords of Open Images dataset for object detection.
+
+Example usage:
+ python object_detection/dataset_tools/create_oid_tf_record.py \
+ --input_box_annotations_csv=/path/to/input/annotations-human-bbox.csv \
+ --input_image_label_annotations_csv=/path/to/input/annotations-label.csv \
+ --input_images_directory=/path/to/input/image_pixels_directory \
+ --input_label_map=/path/to/input/labels_bbox_545.labelmap \
+ --output_tf_record_path_prefix=/path/to/output/prefix.tfrecord
+
+CSVs with bounding box annotations and image metadata (including the image URLs)
+can be downloaded from the Open Images GitHub repository:
+https://github.com/openimages/dataset
+
+This script will include every image found in the input_images_directory in the
+output TFRecord, even if the image has no corresponding bounding box annotations
+in the input_box_annotations_csv. If input_image_label_annotations_csv is
+specified, it will add image-level labels as well. Note that the information of
+whether a label is positively or negatively verified is NOT added to the
+TFRecord.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+
+import contextlib2
+import pandas as pd
+import tensorflow as tf
+
+from object_detection.dataset_tools import oid_tfrecord_creation
+from object_detection.dataset_tools import tf_record_creation_util
+from object_detection.utils import label_map_util
+
+tf.flags.DEFINE_string('input_box_annotations_csv', None,
+ 'Path to CSV containing image bounding box annotations')
+tf.flags.DEFINE_string('input_images_directory', None,
+ 'Directory containing the image pixels '
+ 'downloaded from the OpenImages GitHub repository.')
+tf.flags.DEFINE_string('input_image_label_annotations_csv', None,
+ 'Path to CSV containing image-level labels annotations')
+tf.flags.DEFINE_string('input_label_map', None, 'Path to the label map proto')
+tf.flags.DEFINE_string(
+ 'output_tf_record_path_prefix', None,
+ 'Path to the output TFRecord. The shard index and the number of shards '
+ 'will be appended for each output shard.')
+tf.flags.DEFINE_integer('num_shards', 100, 'Number of TFRecord shards')
+
+FLAGS = tf.flags.FLAGS
+
+
+def main(_):
+ tf.logging.set_verbosity(tf.logging.INFO)
+
+ required_flags = [
+ 'input_box_annotations_csv', 'input_images_directory', 'input_label_map',
+ 'output_tf_record_path_prefix'
+ ]
+ for flag_name in required_flags:
+ if not getattr(FLAGS, flag_name):
+ raise ValueError('Flag --{} is required'.format(flag_name))
+
+ label_map = label_map_util.get_label_map_dict(FLAGS.input_label_map)
+ all_box_annotations = pd.read_csv(FLAGS.input_box_annotations_csv)
+ if FLAGS.input_image_label_annotations_csv:
+ all_label_annotations = pd.read_csv(FLAGS.input_image_label_annotations_csv)
+ all_label_annotations.rename(
+ columns={'Confidence': 'ConfidenceImageLabel'}, inplace=True)
+ else:
+ all_label_annotations = None
+ all_images = tf.gfile.Glob(
+ os.path.join(FLAGS.input_images_directory, '*.jpg'))
+ all_image_ids = [os.path.splitext(os.path.basename(v))[0] for v in all_images]
+ all_image_ids = pd.DataFrame({'ImageID': all_image_ids})
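+  # Concatenating the bare ImageID frame guarantees that every image found on
+  # disk appears in the groupby below, even if it has no box or label rows.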
+ all_annotations = pd.concat(
+ [all_box_annotations, all_image_ids, all_label_annotations])
+
+ tf.logging.log(tf.logging.INFO, 'Found %d images...', len(all_image_ids))
+
+ with contextlib2.ExitStack() as tf_record_close_stack:
+ output_tfrecords = tf_record_creation_util.open_sharded_output_tfrecords(
+ tf_record_close_stack, FLAGS.output_tf_record_path_prefix,
+ FLAGS.num_shards)
+
+ for counter, image_data in enumerate(all_annotations.groupby('ImageID')):
+ tf.logging.log_every_n(tf.logging.INFO, 'Processed %d images...', 1000,
+ counter)
+
+ image_id, image_annotations = image_data
+ # In OID image file names are formed by appending ".jpg" to the image ID.
+ image_path = os.path.join(FLAGS.input_images_directory, image_id + '.jpg')
+ with tf.gfile.Open(image_path) as image_file:
+ encoded_image = image_file.read()
+
+ tf_example = oid_tfrecord_creation.tf_example_from_annotations_data_frame(
+ image_annotations, label_map, encoded_image)
+ if tf_example:
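+        # Open Images image IDs are hexadecimal strings, so interpreting them
+        # as base-16 integers gives a deterministic shard per image.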
+ shard_idx = int(image_id, 16) % FLAGS.num_shards
+ output_tfrecords[shard_idx].write(tf_example.SerializeToString())
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_pascal_tf_record.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_pascal_tf_record.py"
new file mode 100644
index 00000000..813071c9
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_pascal_tf_record.py"
@@ -0,0 +1,185 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+r"""Convert raw PASCAL dataset to TFRecord for object_detection.
+
+Example usage:
+ python object_detection/dataset_tools/create_pascal_tf_record.py \
+ --data_dir=/home/user/VOCdevkit \
+ --year=VOC2012 \
+ --output_path=/home/user/pascal.record
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import hashlib
+import io
+import logging
+import os
+
+from lxml import etree
+import PIL.Image
+import tensorflow as tf
+
+from object_detection.utils import dataset_util
+from object_detection.utils import label_map_util
+
+
+flags = tf.app.flags
+flags.DEFINE_string('data_dir', '', 'Root directory to raw PASCAL VOC dataset.')
+flags.DEFINE_string('set', 'train', 'Convert training set, validation set or '
+ 'merged set.')
+flags.DEFINE_string('annotations_dir', 'Annotations',
+ '(Relative) path to annotations directory.')
+flags.DEFINE_string('year', 'VOC2007', 'Desired challenge year.')
+flags.DEFINE_string('output_path', '', 'Path to output TFRecord')
+flags.DEFINE_string('label_map_path', 'data/pascal_label_map.pbtxt',
+ 'Path to label map proto')
+flags.DEFINE_boolean('ignore_difficult_instances', False, 'Whether to ignore '
+ 'difficult instances')
+FLAGS = flags.FLAGS
+
+SETS = ['train', 'val', 'trainval', 'test']
+YEARS = ['VOC2007', 'VOC2012', 'merged']
+
+
+def dict_to_tf_example(data,
+ dataset_directory,
+ label_map_dict,
+ ignore_difficult_instances=False,
+ image_subdirectory='JPEGImages'):
+ """Convert XML derived dict to tf.Example proto.
+
+ Notice that this function normalizes the bounding box coordinates provided
+ by the raw data.
+
+ Args:
+ data: dict holding PASCAL XML fields for a single image (obtained by
+ running dataset_util.recursive_parse_xml_to_dict)
+ dataset_directory: Path to root directory holding PASCAL dataset
+    label_map_dict: A map from string label names to integer ids.
+ ignore_difficult_instances: Whether to skip difficult instances in the
+ dataset (default: False).
+ image_subdirectory: String specifying subdirectory within the
+ PASCAL dataset directory holding the actual image data.
+
+ Returns:
+ example: The converted tf.Example.
+
+ Raises:
+ ValueError: if the image pointed to by data['filename'] is not a valid JPEG
+ """
+ img_path = os.path.join(data['folder'], image_subdirectory, data['filename'])
+ full_path = os.path.join(dataset_directory, img_path)
+ with tf.gfile.GFile(full_path, 'rb') as fid:
+ encoded_jpg = fid.read()
+ encoded_jpg_io = io.BytesIO(encoded_jpg)
+ image = PIL.Image.open(encoded_jpg_io)
+ if image.format != 'JPEG':
+ raise ValueError('Image format not JPEG')
+ key = hashlib.sha256(encoded_jpg).hexdigest()
+
+ width = int(data['size']['width'])
+ height = int(data['size']['height'])
+
+ xmin = []
+ ymin = []
+ xmax = []
+ ymax = []
+ classes = []
+ classes_text = []
+ truncated = []
+ poses = []
+ difficult_obj = []
+ if 'object' in data:
+ for obj in data['object']:
+ difficult = bool(int(obj['difficult']))
+ if ignore_difficult_instances and difficult:
+ continue
+
+ difficult_obj.append(int(difficult))
+
+ xmin.append(float(obj['bndbox']['xmin']) / width)
+ ymin.append(float(obj['bndbox']['ymin']) / height)
+ xmax.append(float(obj['bndbox']['xmax']) / width)
+ ymax.append(float(obj['bndbox']['ymax']) / height)
+ classes_text.append(obj['name'].encode('utf8'))
+ classes.append(label_map_dict[obj['name']])
+ truncated.append(int(obj['truncated']))
+ poses.append(obj['pose'].encode('utf8'))
+
+ example = tf.train.Example(features=tf.train.Features(feature={
+ 'image/height': dataset_util.int64_feature(height),
+ 'image/width': dataset_util.int64_feature(width),
+ 'image/filename': dataset_util.bytes_feature(
+ data['filename'].encode('utf8')),
+ 'image/source_id': dataset_util.bytes_feature(
+ data['filename'].encode('utf8')),
+ 'image/key/sha256': dataset_util.bytes_feature(key.encode('utf8')),
+ 'image/encoded': dataset_util.bytes_feature(encoded_jpg),
+ 'image/format': dataset_util.bytes_feature('jpeg'.encode('utf8')),
+ 'image/object/bbox/xmin': dataset_util.float_list_feature(xmin),
+ 'image/object/bbox/xmax': dataset_util.float_list_feature(xmax),
+ 'image/object/bbox/ymin': dataset_util.float_list_feature(ymin),
+ 'image/object/bbox/ymax': dataset_util.float_list_feature(ymax),
+ 'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
+ 'image/object/class/label': dataset_util.int64_list_feature(classes),
+ 'image/object/difficult': dataset_util.int64_list_feature(difficult_obj),
+ 'image/object/truncated': dataset_util.int64_list_feature(truncated),
+ 'image/object/view': dataset_util.bytes_list_feature(poses),
+ }))
+ return example
+
+
+def main(_):
+ if FLAGS.set not in SETS:
+ raise ValueError('set must be in : {}'.format(SETS))
+ if FLAGS.year not in YEARS:
+ raise ValueError('year must be in : {}'.format(YEARS))
+
+ data_dir = FLAGS.data_dir
+ years = ['VOC2007', 'VOC2012']
+ if FLAGS.year != 'merged':
+ years = [FLAGS.year]
+
+ writer = tf.python_io.TFRecordWriter(FLAGS.output_path)
+
+ label_map_dict = label_map_util.get_label_map_dict(FLAGS.label_map_path)
+
+ for year in years:
+ logging.info('Reading from PASCAL %s dataset.', year)
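+    # Every per-class file under ImageSets/Main lists the same image ids, so
+    # the aeroplane file is used only to enumerate the images in the set.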
+ examples_path = os.path.join(data_dir, year, 'ImageSets', 'Main',
+ 'aeroplane_' + FLAGS.set + '.txt')
+ annotations_dir = os.path.join(data_dir, year, FLAGS.annotations_dir)
+ examples_list = dataset_util.read_examples_list(examples_path)
+ for idx, example in enumerate(examples_list):
+ if idx % 100 == 0:
+ logging.info('On image %d of %d', idx, len(examples_list))
+ path = os.path.join(annotations_dir, example + '.xml')
+ with tf.gfile.GFile(path, 'r') as fid:
+ xml_str = fid.read()
+ xml = etree.fromstring(xml_str)
+ data = dataset_util.recursive_parse_xml_to_dict(xml)['annotation']
+
+ tf_example = dict_to_tf_example(data, FLAGS.data_dir, label_map_dict,
+ FLAGS.ignore_difficult_instances)
+ writer.write(tf_example.SerializeToString())
+
+ writer.close()
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_pascal_tf_record_test.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_pascal_tf_record_test.py"
new file mode 100644
index 00000000..66929bd4
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_pascal_tf_record_test.py"
@@ -0,0 +1,118 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""Test for create_pascal_tf_record.py."""
+
+import os
+
+import numpy as np
+import PIL.Image
+import tensorflow as tf
+
+from object_detection.dataset_tools import create_pascal_tf_record
+
+
+class CreatePascalTFRecordTest(tf.test.TestCase):
+
+ def _assertProtoEqual(self, proto_field, expectation):
+ """Helper function to assert if a proto field equals some value.
+
+ Args:
+ proto_field: The protobuf field to compare.
+ expectation: The expected value of the protobuf field.
+ """
+ proto_list = [p for p in proto_field]
+ self.assertListEqual(proto_list, expectation)
+
+ def test_dict_to_tf_example(self):
+ image_file_name = 'tmp_image.jpg'
+    # Use uint8 pixel data so PIL.Image.fromarray produces a valid RGB image.
+    image_data = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
+ save_path = os.path.join(self.get_temp_dir(), image_file_name)
+ image = PIL.Image.fromarray(image_data, 'RGB')
+ image.save(save_path)
+
+ data = {
+ 'folder': '',
+ 'filename': image_file_name,
+ 'size': {
+ 'height': 256,
+ 'width': 256,
+ },
+ 'object': [
+ {
+ 'difficult': 1,
+ 'bndbox': {
+ 'xmin': 64,
+ 'ymin': 64,
+ 'xmax': 192,
+ 'ymax': 192,
+ },
+ 'name': 'person',
+ 'truncated': 0,
+ 'pose': '',
+ },
+ ],
+ }
+
+ label_map_dict = {
+ 'background': 0,
+ 'person': 1,
+ 'notperson': 2,
+ }
+
+ example = create_pascal_tf_record.dict_to_tf_example(
+ data, self.get_temp_dir(), label_map_dict, image_subdirectory='')
+ self._assertProtoEqual(
+ example.features.feature['image/height'].int64_list.value, [256])
+ self._assertProtoEqual(
+ example.features.feature['image/width'].int64_list.value, [256])
+ self._assertProtoEqual(
+ example.features.feature['image/filename'].bytes_list.value,
+ [image_file_name])
+ self._assertProtoEqual(
+ example.features.feature['image/source_id'].bytes_list.value,
+ [image_file_name])
+ self._assertProtoEqual(
+ example.features.feature['image/format'].bytes_list.value, ['jpeg'])
+ self._assertProtoEqual(
+ example.features.feature['image/object/bbox/xmin'].float_list.value,
+ [0.25])
+ self._assertProtoEqual(
+ example.features.feature['image/object/bbox/ymin'].float_list.value,
+ [0.25])
+ self._assertProtoEqual(
+ example.features.feature['image/object/bbox/xmax'].float_list.value,
+ [0.75])
+ self._assertProtoEqual(
+ example.features.feature['image/object/bbox/ymax'].float_list.value,
+ [0.75])
+ self._assertProtoEqual(
+ example.features.feature['image/object/class/text'].bytes_list.value,
+ ['person'])
+ self._assertProtoEqual(
+ example.features.feature['image/object/class/label'].int64_list.value,
+ [1])
+ self._assertProtoEqual(
+ example.features.feature['image/object/difficult'].int64_list.value,
+ [1])
+ self._assertProtoEqual(
+ example.features.feature['image/object/truncated'].int64_list.value,
+ [0])
+ self._assertProtoEqual(
+ example.features.feature['image/object/view'].bytes_list.value, [''])
+
+
+if __name__ == '__main__':
+ tf.test.main()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_pet_tf_record.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_pet_tf_record.py"
new file mode 100644
index 00000000..9b3b55c6
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_pet_tf_record.py"
@@ -0,0 +1,318 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+r"""Convert the Oxford pet dataset to TFRecord for object_detection.
+
+See: O. M. Parkhi, A. Vedaldi, A. Zisserman, C. V. Jawahar
+ Cats and Dogs
+ IEEE Conference on Computer Vision and Pattern Recognition, 2012
+ http://www.robots.ox.ac.uk/~vgg/data/pets/
+
+Example usage:
+ python object_detection/dataset_tools/create_pet_tf_record.py \
+ --data_dir=/home/user/pet \
+ --output_dir=/home/user/pet/output
+"""
+
+import hashlib
+import io
+import logging
+import os
+import random
+import re
+
+import contextlib2
+from lxml import etree
+import numpy as np
+import PIL.Image
+import tensorflow as tf
+
+from object_detection.dataset_tools import tf_record_creation_util
+from object_detection.utils import dataset_util
+from object_detection.utils import label_map_util
+
+flags = tf.app.flags
+flags.DEFINE_string('data_dir', '', 'Root directory to raw pet dataset.')
+flags.DEFINE_string('output_dir', '', 'Path to directory to output TFRecords.')
+flags.DEFINE_string('label_map_path', 'data/pet_label_map.pbtxt',
+ 'Path to label map proto')
+flags.DEFINE_boolean('faces_only', True, 'If True, generates bounding boxes '
+                     'for pet faces. Otherwise generates bounding boxes as '
+                     'well as segmentation masks for full pet bodies. Note '
+                     'that in the latter case, the resulting files are much '
+                     'larger.')
+flags.DEFINE_string('mask_type', 'png', 'How to represent instance '
+ 'segmentation masks. Options are "png" or "numerical".')
+flags.DEFINE_integer('num_shards', 10, 'Number of TFRecord shards')
+
+FLAGS = flags.FLAGS
+
+
+def get_class_name_from_filename(file_name):
+ """Gets the class name from a file.
+
+ Args:
+    file_name: The file name to get the class name from,
+      e.g. "american_pit_bull_terrier_105.jpg".
+
+ Returns:
+ A string of the class name.
+ """
+ match = re.match(r'([A-Za-z_]+)(_[0-9]+\.jpg)', file_name, re.I)
+ return match.groups()[0]
+
+
+def dict_to_tf_example(data,
+ mask_path,
+ label_map_dict,
+ image_subdirectory,
+ ignore_difficult_instances=False,
+ faces_only=True,
+ mask_type='png'):
+ """Convert XML derived dict to tf.Example proto.
+
+ Notice that this function normalizes the bounding box coordinates provided
+ by the raw data.
+
+ Args:
+ data: dict holding PASCAL XML fields for a single image (obtained by
+ running dataset_util.recursive_parse_xml_to_dict)
+ mask_path: String path to PNG encoded mask.
+    label_map_dict: A map from string label names to integer ids.
+ image_subdirectory: String specifying subdirectory within the
+ Pascal dataset directory holding the actual image data.
+ ignore_difficult_instances: Whether to skip difficult instances in the
+ dataset (default: False).
+ faces_only: If True, generates bounding boxes for pet faces. Otherwise
+ generates bounding boxes (as well as segmentations for full pet bodies).
+ mask_type: 'numerical' or 'png'. 'png' is recommended because it leads to
+ smaller file sizes.
+
+ Returns:
+ example: The converted tf.Example.
+
+ Raises:
+ ValueError: if the image pointed to by data['filename'] is not a valid JPEG
+ """
+ img_path = os.path.join(image_subdirectory, data['filename'])
+ with tf.gfile.GFile(img_path, 'rb') as fid:
+ encoded_jpg = fid.read()
+ encoded_jpg_io = io.BytesIO(encoded_jpg)
+ image = PIL.Image.open(encoded_jpg_io)
+ if image.format != 'JPEG':
+ raise ValueError('Image format not JPEG')
+ key = hashlib.sha256(encoded_jpg).hexdigest()
+
+ with tf.gfile.GFile(mask_path, 'rb') as fid:
+ encoded_mask_png = fid.read()
+ encoded_png_io = io.BytesIO(encoded_mask_png)
+ mask = PIL.Image.open(encoded_png_io)
+ if mask.format != 'PNG':
+ raise ValueError('Mask format not PNG')
+
+ mask_np = np.asarray(mask)
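+  # In the Oxford-IIIT Pet trimaps pixel value 2 marks the background (1 is
+  # the pet, 3 the ambiguous border), so every non-2 pixel is treated as pet.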
+ nonbackground_indices_x = np.any(mask_np != 2, axis=0)
+ nonbackground_indices_y = np.any(mask_np != 2, axis=1)
+ nonzero_x_indices = np.where(nonbackground_indices_x)
+ nonzero_y_indices = np.where(nonbackground_indices_y)
+
+ width = int(data['size']['width'])
+ height = int(data['size']['height'])
+
+ xmins = []
+ ymins = []
+ xmaxs = []
+ ymaxs = []
+ classes = []
+ classes_text = []
+ truncated = []
+ poses = []
+ difficult_obj = []
+ masks = []
+ if 'object' in data:
+ for obj in data['object']:
+ difficult = bool(int(obj['difficult']))
+ if ignore_difficult_instances and difficult:
+ continue
+ difficult_obj.append(int(difficult))
+
+ if faces_only:
+ xmin = float(obj['bndbox']['xmin'])
+ xmax = float(obj['bndbox']['xmax'])
+ ymin = float(obj['bndbox']['ymin'])
+ ymax = float(obj['bndbox']['ymax'])
+ else:
+ xmin = float(np.min(nonzero_x_indices))
+ xmax = float(np.max(nonzero_x_indices))
+ ymin = float(np.min(nonzero_y_indices))
+ ymax = float(np.max(nonzero_y_indices))
+
+ xmins.append(xmin / width)
+ ymins.append(ymin / height)
+ xmaxs.append(xmax / width)
+ ymaxs.append(ymax / height)
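+      # The breed label is encoded in the file name rather than in the XML
+      # annotation, so derive the class from the filename.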
+ class_name = get_class_name_from_filename(data['filename'])
+ classes_text.append(class_name.encode('utf8'))
+ classes.append(label_map_dict[class_name])
+ truncated.append(int(obj['truncated']))
+ poses.append(obj['pose'].encode('utf8'))
+ if not faces_only:
+ mask_remapped = (mask_np != 2).astype(np.uint8)
+ masks.append(mask_remapped)
+
+ feature_dict = {
+ 'image/height': dataset_util.int64_feature(height),
+ 'image/width': dataset_util.int64_feature(width),
+ 'image/filename': dataset_util.bytes_feature(
+ data['filename'].encode('utf8')),
+ 'image/source_id': dataset_util.bytes_feature(
+ data['filename'].encode('utf8')),
+ 'image/key/sha256': dataset_util.bytes_feature(key.encode('utf8')),
+ 'image/encoded': dataset_util.bytes_feature(encoded_jpg),
+ 'image/format': dataset_util.bytes_feature('jpeg'.encode('utf8')),
+ 'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
+ 'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
+ 'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
+ 'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
+ 'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
+ 'image/object/class/label': dataset_util.int64_list_feature(classes),
+ 'image/object/difficult': dataset_util.int64_list_feature(difficult_obj),
+ 'image/object/truncated': dataset_util.int64_list_feature(truncated),
+ 'image/object/view': dataset_util.bytes_list_feature(poses),
+ }
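+  # Attach the instance masks either as a flattened float array ('numerical')
+  # or as individually PNG-encoded images ('png'); PNG is far more compact.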
+ if not faces_only:
+ if mask_type == 'numerical':
+ mask_stack = np.stack(masks).astype(np.float32)
+ masks_flattened = np.reshape(mask_stack, [-1])
+ feature_dict['image/object/mask'] = (
+ dataset_util.float_list_feature(masks_flattened.tolist()))
+ elif mask_type == 'png':
+ encoded_mask_png_list = []
+ for mask in masks:
+ img = PIL.Image.fromarray(mask)
+ output = io.BytesIO()
+ img.save(output, format='PNG')
+ encoded_mask_png_list.append(output.getvalue())
+ feature_dict['image/object/mask'] = (
+ dataset_util.bytes_list_feature(encoded_mask_png_list))
+
+ example = tf.train.Example(features=tf.train.Features(feature=feature_dict))
+ return example
+
+
+def create_tf_record(output_filename,
+ num_shards,
+ label_map_dict,
+ annotations_dir,
+ image_dir,
+ examples,
+ faces_only=True,
+ mask_type='png'):
+ """Creates a TFRecord file from examples.
+
+ Args:
+ output_filename: Path to where output file is saved.
+ num_shards: Number of shards for output file.
+ label_map_dict: The label map dictionary.
+ annotations_dir: Directory where annotation files are stored.
+ image_dir: Directory where image files are stored.
+ examples: Examples to parse and save to tf record.
+ faces_only: If True, generates bounding boxes for pet faces. Otherwise
+ generates bounding boxes (as well as segmentations for full pet bodies).
+ mask_type: 'numerical' or 'png'. 'png' is recommended because it leads to
+ smaller file sizes.
+ """
+ with contextlib2.ExitStack() as tf_record_close_stack:
+ output_tfrecords = tf_record_creation_util.open_sharded_output_tfrecords(
+ tf_record_close_stack, output_filename, num_shards)
+ for idx, example in enumerate(examples):
+ if idx % 100 == 0:
+ logging.info('On image %d of %d', idx, len(examples))
+ xml_path = os.path.join(annotations_dir, 'xmls', example + '.xml')
+ mask_path = os.path.join(annotations_dir, 'trimaps', example + '.png')
+
+ if not os.path.exists(xml_path):
+ logging.warning('Could not find %s, ignoring example.', xml_path)
+ continue
+ with tf.gfile.GFile(xml_path, 'r') as fid:
+ xml_str = fid.read()
+ xml = etree.fromstring(xml_str)
+ data = dataset_util.recursive_parse_xml_to_dict(xml)['annotation']
+
+ try:
+ tf_example = dict_to_tf_example(
+ data,
+ mask_path,
+ label_map_dict,
+ image_dir,
+ faces_only=faces_only,
+ mask_type=mask_type)
+ if tf_example:
+ shard_idx = idx % num_shards
+ output_tfrecords[shard_idx].write(tf_example.SerializeToString())
+ except ValueError:
+ logging.warning('Invalid example: %s, ignoring.', xml_path)
+
+
+# TODO(derekjchow): Add test for pet/PASCAL main files.
+def main(_):
+ data_dir = FLAGS.data_dir
+ label_map_dict = label_map_util.get_label_map_dict(FLAGS.label_map_path)
+
+ logging.info('Reading from Pet dataset.')
+ image_dir = os.path.join(data_dir, 'images')
+ annotations_dir = os.path.join(data_dir, 'annotations')
+ examples_path = os.path.join(annotations_dir, 'trainval.txt')
+ examples_list = dataset_util.read_examples_list(examples_path)
+
+ # Test images are not included in the downloaded data set, so we shall perform
+ # our own split.
+ random.seed(42)
+ random.shuffle(examples_list)
+ num_examples = len(examples_list)
+ num_train = int(0.7 * num_examples)
+ train_examples = examples_list[:num_train]
+ val_examples = examples_list[num_train:]
+ logging.info('%d training and %d validation examples.',
+ len(train_examples), len(val_examples))
+
+ train_output_path = os.path.join(FLAGS.output_dir, 'pet_faces_train.record')
+ val_output_path = os.path.join(FLAGS.output_dir, 'pet_faces_val.record')
+ if not FLAGS.faces_only:
+ train_output_path = os.path.join(FLAGS.output_dir,
+ 'pets_fullbody_with_masks_train.record')
+ val_output_path = os.path.join(FLAGS.output_dir,
+ 'pets_fullbody_with_masks_val.record')
+ create_tf_record(
+ train_output_path,
+ FLAGS.num_shards,
+ label_map_dict,
+ annotations_dir,
+ image_dir,
+ train_examples,
+ faces_only=FLAGS.faces_only,
+ mask_type=FLAGS.mask_type)
+ create_tf_record(
+ val_output_path,
+ FLAGS.num_shards,
+ label_map_dict,
+ annotations_dir,
+ image_dir,
+ val_examples,
+ faces_only=FLAGS.faces_only,
+ mask_type=FLAGS.mask_type)
+
+
+if __name__ == '__main__':
+ tf.app.run()
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_pycocotools_package.sh" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_pycocotools_package.sh"
new file mode 100644
index 00000000..511df185
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/create_pycocotools_package.sh"
@@ -0,0 +1,53 @@
+#!/bin/bash
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+# Script to download pycocotools and package it for Cloud ML Engine (CMLE) jobs.
+#
+# usage:
+# bash object_detection/dataset_tools/create_pycocotools_package.sh \
+# /tmp/pycocotools
+set -e
+
+if [ -z "$1" ]; then
+ echo "usage create_pycocotools_package.sh [output dir]"
+ exit
+fi
+
+# Create the output directory.
+OUTPUT_DIR="${1%/}"
+SCRATCH_DIR="${OUTPUT_DIR}/raw"
+mkdir -p "${OUTPUT_DIR}"
+mkdir -p "${SCRATCH_DIR}"
+
+cd ${SCRATCH_DIR}
+git clone https://github.com/cocodataset/cocoapi.git
+cd cocoapi/PythonAPI && mv ../common ./
+
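+# The cocoapi sources reference the shared C code via "../common"; after
+# moving common/ into PythonAPI, rewrite those relative paths so the packaged
+# tarball builds on its own.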
+sed "s/\.\.\/common/common/g" setup.py > setup.py.updated
+cp -f setup.py.updated setup.py
+rm setup.py.updated
+
+sed "s/\.\.\/common/common/g" pycocotools/_mask.pyx > _mask.pyx.updated
+cp -f _mask.pyx.updated pycocotools/_mask.pyx
+rm _mask.pyx.updated
+
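+# Force the non-interactive Agg backend so importing pycocotools does not
+# require a display on headless workers.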
+sed "s/import matplotlib\.pyplot as plt/import matplotlib\nmatplotlib\.use\(\'Agg\'\)\nimport matplotlib\.pyplot as plt/g" pycocotools/coco.py > coco.py.updated
+cp -f coco.py.updated pycocotools/coco.py
+rm coco.py.updated
+
+cd "${OUTPUT_DIR}"
+tar -czf pycocotools-2.0.tar.gz -C "${SCRATCH_DIR}/cocoapi/" PythonAPI/
+rm -rf ${SCRATCH_DIR}
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/download_and_preprocess_mscoco.sh" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/download_and_preprocess_mscoco.sh"
new file mode 100644
index 00000000..843ba869
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/download_and_preprocess_mscoco.sh"
@@ -0,0 +1,106 @@
+#!/bin/bash
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+# Script to download and preprocess the MSCOCO data set for detection.
+#
+# The outputs of this script are TFRecord files containing serialized
+# tf.Example protocol buffers. See create_coco_tf_record.py for details of how
+# the tf.Example protocol buffers are constructed and see
+# http://cocodataset.org/#overview for an overview of the dataset.
+#
+# usage:
+# bash object_detection/dataset_tools/download_and_preprocess_mscoco.sh \
+# /tmp/mscoco
+set -e
+
+if [ -z "$1" ]; then
+ echo "usage download_and_preprocess_mscoco.sh [data dir]"
+ exit
+fi
+
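+# macOS ships bsdtar, which can extract .zip archives; elsewhere use unzip.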
+if [ "$(uname)" == "Darwin" ]; then
+ UNZIP="tar -xf"
+else
+ UNZIP="unzip -nq"
+fi
+
+# Create the output directories.
+OUTPUT_DIR="${1%/}"
+SCRATCH_DIR="${OUTPUT_DIR}/raw-data"
+mkdir -p "${OUTPUT_DIR}"
+mkdir -p "${SCRATCH_DIR}"
+CURRENT_DIR=$(pwd)
+
+# Helper function to download and unpack a .zip file.
+function download_and_unzip() {
+ local BASE_URL=${1}
+ local FILENAME=${2}
+
+ if [ ! -f ${FILENAME} ]; then
+ echo "Downloading ${FILENAME} to $(pwd)"
+ wget -nd -c "${BASE_URL}/${FILENAME}"
+ else
+ echo "Skipping download of ${FILENAME}"
+ fi
+ echo "Unzipping ${FILENAME}"
+ ${UNZIP} ${FILENAME}
+}
+
+cd ${SCRATCH_DIR}
+
+# Download the images.
+BASE_IMAGE_URL="http://images.cocodataset.org/zips"
+
+TRAIN_IMAGE_FILE="train2017.zip"
+download_and_unzip ${BASE_IMAGE_URL} ${TRAIN_IMAGE_FILE}
+TRAIN_IMAGE_DIR="${SCRATCH_DIR}/train2017"
+
+VAL_IMAGE_FILE="val2017.zip"
+download_and_unzip ${BASE_IMAGE_URL} ${VAL_IMAGE_FILE}
+VAL_IMAGE_DIR="${SCRATCH_DIR}/val2017"
+
+TEST_IMAGE_FILE="test2017.zip"
+download_and_unzip ${BASE_IMAGE_URL} ${TEST_IMAGE_FILE}
+TEST_IMAGE_DIR="${SCRATCH_DIR}/test2017"
+
+# Download the annotations.
+BASE_INSTANCES_URL="http://images.cocodataset.org/annotations"
+INSTANCES_FILE="annotations_trainval2017.zip"
+download_and_unzip ${BASE_INSTANCES_URL} ${INSTANCES_FILE}
+
+TRAIN_ANNOTATIONS_FILE="${SCRATCH_DIR}/annotations/instances_train2017.json"
+VAL_ANNOTATIONS_FILE="${SCRATCH_DIR}/annotations/instances_val2017.json"
+
+# Download the test image info.
+BASE_IMAGE_INFO_URL="http://images.cocodataset.org/annotations"
+IMAGE_INFO_FILE="image_info_test2017.zip"
+download_and_unzip ${BASE_IMAGE_INFO_URL} ${IMAGE_INFO_FILE}
+
+TESTDEV_ANNOTATIONS_FILE="${SCRATCH_DIR}/annotations/image_info_test-dev2017.json"
+
+# Build TFRecords of the image data.
+cd "${CURRENT_DIR}"
+python object_detection/dataset_tools/create_coco_tf_record.py \
+ --logtostderr \
+ --include_masks \
+ --train_image_dir="${TRAIN_IMAGE_DIR}" \
+ --val_image_dir="${VAL_IMAGE_DIR}" \
+ --test_image_dir="${TEST_IMAGE_DIR}" \
+ --train_annotations_file="${TRAIN_ANNOTATIONS_FILE}" \
+ --val_annotations_file="${VAL_ANNOTATIONS_FILE}" \
+ --testdev_annotations_file="${TESTDEV_ANNOTATIONS_FILE}" \
+ --output_dir="${OUTPUT_DIR}"
+
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/oid_hierarchical_labels_expansion.py" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/oid_hierarchical_labels_expansion.py"
new file mode 100644
index 00000000..61d6ed7f
--- /dev/null
+++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/models/research/object_detection/dataset_tools/oid_hierarchical_labels_expansion.py"
@@ -0,0 +1,201 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+r"""An executable to expand hierarchically image-level labels and boxes.
+
+Example usage:
+python models/research/object_detection/dataset_tools/\
+oid_hierarchical_labels_expansion.py \
+--json_hierarchy_file= \
+--input_annotations= \
+--output_annotations=